diff --git a/data_all_eng_slimpj/shuffled/split2/finalzeyw b/data_all_eng_slimpj/shuffled/split2/finalzeyw new file mode 100644 index 0000000000000000000000000000000000000000..b03652544c0def1f65736118fdb19e474124ac9f --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzeyw @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe analysis of quantum information processing protocols is a challenging task. Let it be a quantum tomography process, transmission of quantum information over a noisy channel or a cryptographic protocol -- all need to be analysed under general conditions. Since one usually has limited information about the actual quantum state given as input, the analysis should be valid for any given quantum state. For example, a cryptographic protocol should be proven secure independently of the input state, which can be chosen by a malicious adversary. As the space of all possible states can be very large and the structure of the states therein might be complicated due to entanglement, this task can be tedious in the good case, and infeasible in the worst. \n\nThe quantum de Finetti theorems \\cite{hudson1976locally, raggio1989quantum, caves2002unknown, renner2007symmetry} and the post selection theorem \\cite{christandl2009postselection} address the above problem, by exploiting the symmetry of the considered states, namely permutation invariance. These mathematical tools allow us to simplify the analysis of quantum information processing tasks by reducing permutation invariant quantum states to a more structured state, called the quantum de Finetti state.\nIn general, we say that a state is of de Finetti-type if it is a convex combination of independent identically distributed (i.i.d.\\@) states. \n\nde Finetti states are usually much easier to handle than general states due to their simple structure. Moreover, most established information-theoretic techniques can be applied only to i.i.d\\ states, and therefore while not applicable when considering a general state, they can be used when considering de Finetti states. Therefore, a reduction to such states can simplify calculations and proofs of various quantum information processing tasks. Indeed, one of the famous applications of reductions to de Finetti states is a proof which states that in order to establish security of quantum key distribution against general attacks it is sufficient to consider attacks on individual signals~\\cite{christandl2009postselection}. Other applications include quantum tomography \\cite{christandl2012reliable} or quantum reverse Shannon coding \\cite{berta2011reverse}. \n\nUnfortunately, the known variants of the quantum de Finetti theorems are not always applicable. A big class of protocols, commonly used in the past several years, to which those theorems are not applicable is the class of protocols in which the dimension of the states is unknown or cannot be bounded, and in particular, the class of device independent protocols (for a review on the topic, see for example \\cite{scarani2012device, brunner2013bell}). The above mentioned theorems cannot be used in such cases for they depend on the dimension of the quantum state. \n\nIn device independent cryptography \\cite{mayers1998quantum,pironio2009device}, for example, one considers the devices as black boxes, about which we know nothing. 
The security of such protocols can therefore rely only on the observed statistics and not on the specific quantum states and measurements used in the protocol (in some protocols one does not even assume that the underlying physical system is restricted to be quantum! \\cite{barrett2005no,hanggi2009quantum}). In these cases, one possible framework to work with is the framework of conditional probability distributions. \n\nConditional probability distributions describe the operational behaviour of physical systems under measurements. That is, if we are only interested in modelling the measurement-outcome behaviour of our physical system, then the system can be described by a conditional probability distribution $\\mathrm{P}_{A|X}$ where $X$ is the input, or the measurement performed on the system, and $A$ is the output. $\\mathrm{P}_{A|X}(a|x)$ is the probability for outcome $a$ given that a measurement $x$ was made. We then say that $\\mathrm{P}_{A|X}$ is the state of the system. Note that the state may have as many inputs and outputs as required and therefore we do not restrict the structure of the underlying system by describing it as a conditional probability distribution.\n\nIn quantum physics, for example, $\\mathrm{P}_{A|X}$ is given by Born's rule. However, conditional probability distributions can also be used to describe states that might not conform with the theory of quantum physics, such as non-signalling states. Consider for example a state $\\mathrm{P}_{AB|XY}$ shared by two space-like separated parties, Alice and Bob, each holding a subsystem of the state. $X$ and $A$ are then, respectively, the input and output of Alice, and $Y$ and $B$ of Bob. We then say that the state is non-signalling if it cannot be used to communicate, i.e., the output of one party is independent of the input of the other. The PR-box \\cite{PR-box} is an example for a non-quantum bipartite state which can be written as a (non-signalling) conditional probability distribution. \n\nGiven all the above, it is thus necessary to see whether de Finetti theorems are unique for quantum states or can be also proven on the level of the correlations in the framework of conditional probability distributions. More specifically, we are interested in a theorem that will allow us to reduce permutation invariant conditional probability distributions to a simple de Finetti-type conditional probability distribution, in a way that will be applicable in device independent protocols and, more generally, when the dimension of the underlying quantum states is unknown. Several different non-signalling de Finetti theorems have been established recently \\cite{barrett2009finetti,christandl2009finite,brandao2012quantum}, but it is yet unknown how these can be applied to device independent cryptography\\footnote{In most of these variants of de Finetti theorems, for example, it is assumed that the subsystems cannot signal each other. For current applications this is a too restrictive condition, since it is equivallent to assuming there is no memory in the devices.}. \n\nIn this letter we prove a general de Finetti reduction theorem, from which we can derive several more specialised statements that are of interest for applications. The different reductions differ from one another in two main aspects -- the set of states to which they can be applied and the specific structure of the de Finetti state. Different de Finetti reductions can therefore be useful in different scenarios and under different assumptions. 
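\n\nWritten out explicitly, non-signalling is meant here in the standard sense: a bipartite state $\\mathrm{P}_{AB|XY}$ is non-signalling if\n\\[\n\t\\forall a,x,y,y' \\quad \\sum_{b} \\mathrm{P}_{AB|XY}(ab|xy) = \\sum_{b} \\mathrm{P}_{AB|XY}(ab|xy') \\;,\n\\]\nand analogously with the roles of the two parties exchanged. As a concrete example, the PR-box mentioned above is the non-signalling state over binary inputs and outputs which assigns probability $\\frac{1}{2}$ to every output pair $(a,b)$ with $a\\oplus b = x\\cdot y$ and probability $0$ to all other pairs.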
\n\nThe simplest and most straightforward variant is a de Finetti reduction which can be applied to any permutation invariant conditional probability distribution. The second variant is a reduction which can be applied to a family of states which is relevant for cryptographic protocols based on the CHSH inequality \\cite{CHSH} or the chained Bell inequalities \\cite{braunstein1990wringing,Barrett2006Maximally}. There we connect any state $\\mathrm{P}_{AB|XY}$ out of this family of states to a special \\emph{non-signalling} de Finetti state $\\tau^\\mathcal{CHSH}_{AB|XY}$. We do not assume any non-signalling conditions between the subsystems of $\\mathrm{P}_{AB|XY}$ and therefore the use of the de Finetti reduction is not restricted only to scenarios where each of the subsystems cannot signal each other. \n\nUp to date, almost all known device independent cryptographic protocols are based on the CHSH inequality or the more general chained Bell inequalities. For this reason we pay specific attention to states which are relevant for such protocols. However, our theorem can be applied also to other families of states which might be useful in future protocols. As an example of an application of our theorem we prove that for protocols which are based on the violation of the CHSH and chained Bell inequalities it is sufficient to consider the case where Alice and Bob share the de Finetti state $\\tau^\\mathcal{CHSH}_{AB|XY}$. We do this by bounding the distance between two channels which act on conditional probability distributions. \n\nIn the following we start by describing and explaining the different de Finetti reductions. We then illustrate how the reductions can be used in applications. All the proofs are given in the Appendix.\n\n\\section{Results}\n\nFor stating the different de Finetti reductions we will need some basic definitions. $A$ and $X$ denote discrete random variables over $a \\in \\{0,1, ... , l-1\\}^n$ and $x \\in \\{0,1, ... , m-1\\}^n$ respectively. We use $[n]$ to denote the set $\\{1,\\dotsc,n\\}$. An $n$-partite state $\\mathrm{P}_{A|X}$ is a conditional probability distribution if for every $x$, $\\sum_a \\mathrm{P}_{A|X}(a|x)=1$ and for every $a, x$, $\\mathrm{P}_{A|X}(a|x)\\geq 0$. When we consider two different states $\\mathrm{P}_{A|X}$ and $\\mathrm{Q}_{A|X}$ it is understood that both states are over the same random variables $X$ and $A$. The de Finetti reductions deal with permutation invariant states and de Finetti states. Formally we define these as follows.\n\\begin{defn}\\label{def:permutation}\n\tGiven a state $\\mathrm{P}_{A|X}$ and a permutation $\\pi$ of its subsystems\\footnote{Since we permute $a$ and $x$ together this is exactly as permuting the subsystems.} we denote by $\\mathrm{P}_{A|X}\\circ\\pi$ the state which is defined by \n\t\\[\n\t\t\\forall a,x \\quad \\left(\\mathrm{P}_{A|X}\\circ\\pi \\right) (a|x)=\\mathrm{P}_{A|X}(\\pi(a)|\\pi(x)) \\;.\n\t\\]\n\tAn $n$-partite state $\\mathrm{P}_{A|X}$ is permutation invariant if for any permutation $\\pi$, $\\mathrm{P}_{A|X} = \\mathrm{P}_{A|X}\\circ\\pi$. \n\\end{defn}\nAs mentioned above, we say that a state is a de Finetti state if it is a convex combination of i.i.d.\\ states. Formally, \n\\begin{defn}\n\tA de Finetti state is a state of the form\n\t\\[\n\t\t\\tau_{A|X} = \\int Q_{A_1|X_1}^{\\otimes n} \\mathrm{d}Q_{A_1|X_1}\n\t\\]\n\twhere $x_1\\in \\{0,1, ... , m-1\\}$, $a_1 \\in \\{0,1, ... 
, l-1\\}$, $\\mathrm{d}Q_{A_1|X_1}$ is some measure on the space of 1-party states and $Q_{A_1|X_1}^{\\otimes n}$ is a product of $n$ identical 1-party states $Q_{A_1|X_1}$, i.e., it is defined according to \n\t\\[\n\t\tQ_{A_1|X_1}^{\\otimes n}(a|x) = \\prod_{i\\in[n]} Q_{A_1|X_1}(a_i|x_i) \\;.\n\t\\]\n\\end{defn}\nAs seen from the above definition, by choosing different measures $\\mathrm{d}Q_{A_1|X_1}$ we define different de Finetti states. \n\nWe are now ready to state the de Finetti reductions. For simplicity we start by giving the first corollary of the more general theorem (Theorem \\ref{thm:post-selection}). This corollary is a reduction for conditional probability distributions, which connects general permutation invariant states to a specific de Finetti state. \n\n\\begin{cor}[de Finetti reduction for conditional probability distributions]\\label{cor:conditional}\n\tThere exists a de Finetti state $\\tau_{A|X}$ where $x \\in \\{ 0,1, ... ,m-1 \\}^n$ and $a \\in \\{ 0,1, ... ,l-1 \\} ^n$ such that for every permutation invariant state $\\mathrm{P}_{A|X}$ \n\t\\[\n\t\t\\forall a,x \\quad \\mathrm{P}_{A|X} (a|x) \\leq (n+1)^{m(l-1)} \\; \\tau_{A|X} (a|x) \\;.\n\t\\]\n\\end{cor}\n\nThe de Finetti state $\\tau_{A|X}$ is an \\emph{explicit} state that we construct in the proof of the general theorem in Appendix~\\ref{sec:general-proof}. The proof uses mainly combinatoric arguments; we choose $\\tau_{A|X}$ in a specific way, such that a lower bound on $\\tau_{A|X}(a|x)$ for all $a,x$ can be proven. We then use the permutation invariance of $\\mathrm{P}_{A|X}$ to prove an upper bound on $\\mathrm{P}_{A|X}(a|x)$. The result is then derived by combining the two bounds.\n\nCorollary \\ref{cor:conditional} holds for every permutation invariant state $\\mathrm{P}_{A|X}$, not necessarily quantum or non-signalling. At first sight, the generality of the above mathematical statement might seem as a drawback in applications where only a restricted set of correlations is considered (e.g., only non-signalling correlations). Nevertheless, in a following work \\cite{arnon2014nonsignalling} we show that this is not the case and apply this general theorem to prove parallel repetition theorems for non-signalling games. \nNote that according to Definition~\\ref{def:permutation} we consider permutations which permute the 1-party subsystems of $\\mathrm{P}_{A|X}$\\footnote{This is in contrast to states $\\mathrm{P}_{AB|XY}$ which can also be permuted as $\\left(\\mathrm{P}_{AB|XY}\\circ\\pi \\right) (ab|xy)=\\mathrm{P}_{AB|XY}\\left(\\pi(a)\\pi(b)|\\pi(x)\\pi(y)\\right)$, as is usually the case in cryptographic tasks. For dealing with such states we will consider a different reduction, stated as Corollary~\\ref{cor:chsh-post-selection}.}. \n\nThe multiplicative pre-factor of the de Finetti reduction, $(n+1)^{m(l-1)}$ in Corollary \\ref{cor:conditional} for example, is relevant for applications. Intuitively, this is the ``cost'' for using $\\tau_{A|X}$ instead of $\\mathrm{P}_{A|X}$ in the analysis of the considered protocol. We therefore want it to be as small as possible. Nevertheless, as will be explained later, in many cases a pre-factor polynomial in $n$ suffices. \n\nCorollary \\ref{cor:conditional} is relevant for scenarios in which one considers permutation invariant conditional probability distributions $\\mathrm{P}_{A|X}$. 
However, if the states one considers have additional symmetries $\\mathcal{S}$ then we can prove a better de Finetti reduction --- a reduction with a smaller pre-factor and a special de Finetti state with the same symmetries~$\\mathcal{S}$.\n\nIn the following we consider a specific family of symmetries --- symmetries between different inputs and outputs of the subsystems of $\\mathrm{P}_{A|X}$. Formally, the types of symmetries that we consider are described, among other things, by a number $d\\leq m(l-1)$ which we call the degrees of freedom of the symmetry (see Appendix~\\ref{sec:general-proof} for details and formal definition of the symmetries). More symmetry implies less degrees of freedom, i.e., smaller $d$, and as shown in the following theorem, this leads to a smaller pre-factor in the reduction. The general theorem then reads:\n\n\\begin{thm}[de Finetti reduction for conditional probability distributions with symmetries]\\label{thm:post-selection}\n\tThere exists a de Finetti state $\\tau^\\mathcal{S}_{A|X}$ where $x \\in \\{ 0,1, ... ,m-1 \\}^n$ and $a \\in \\{ 0,1, ... ,l-1 \\} ^n$ such that for every permutation invariant state $\\mathrm{P}_{A|X}$ with symmetry $\\mathcal{S}$ (with $d$ degrees of freedom) \n\t\\[\n\t\t\\forall a,x \\quad \\mathrm{P}_{A|X} (a|x) \\leq (n+1)^d \\; \\tau^\\mathcal{S}_{A|X} (a|x) \\;. \n\t\\]\n\\end{thm}\nFor the case of no symmetry we have $d=m(l-1)$ from which Corollary \\ref{cor:conditional} stated before follows. \n\nThe symmetries $\\mathcal{S}$ that we consider are of particular interest when considering cryptographic protocols which are based on non-signalling states. For example, the states which are relevant for protocols which are based on the violation of the CHSH inequality (such as \\cite{masanes2009universally,hanggi2009quantum}) have a great amount of symmetry. The additional symmetry allows us to prove a corollary of Theorem \\ref{thm:post-selection} which can be used to simplify such protocols. \n\nBefore we state the corollary for the CHSH case, let us define what we mean when we say that a state has a CHSH-type symmetry. In cryptographic protocols based on the CHSH inequality the basic states that we consider are bipartite states $\\mathrm{P}_{AB|XY}$ held by Alice and Bob where $a,b,x,y\\in \\{0,1\\}^n$. 
\n\\begin{defn}[CHSH-type symmetry]\\label{def:chsh-symmetry}\n\tA state $\\mathrm{P}_{AB|XY}$ where $a,b,x,y\\in \\{0,1\\}^n$ has a CHSH-type symmetry if there exist $p_1,\\dotsc,p_n\\in [0,\\frac{1}{2}]$ such that $\\forall i\\in\\{1,\\dotsc,n\\}$,\n\t\\begin{equation*}\n\t\\begin{split}\n\t\t & \\forall a_i, b_i, x_i, y_i \\\\\n\t\t & a_i\\oplus\\ b_i=x_i\\cdot y_i \\rightarrow \\mathrm{P}_{AB|XY}(a_{\\overline{i}}a_ib_{\\overline{i}}b_i|x_{\\overline{i}}x_iy_{\\overline{i}}y_i) = \\frac{1}{2}-p_i \\\\\n\t\t& a_i\\oplus\\ b_i \\neq x_i\\cdot y_i \\rightarrow \\mathrm{P}_{AB|XY}(a_{\\overline{i}}a_ib_{\\overline{i}}b_i|x_{\\overline{i}}x_iy_{\\overline{i}}y_i) = p_i \\;.\n\t\\end{split}\n\t\\end{equation*}\n\twhere $a_{\\overline{i}}=a_1a_2\\dotsc a_{i-1}a_{i+1} \\dotsc a_n$ and $b_{\\overline{i}}, x_{\\overline{i}}, y_{\\overline{i}}$ are defined in a similar way.\n\\end{defn}\nA simple state $\\mathrm{P}_{AB|XY}$ which has this symmetry for example is a product state of 2-partite states as in Figure~\\ref{fig:CHSH_symmetry} with different values of $p$.\n\n\\begin{figure}\n\\begin{centering}\n\t\\begin{tikzpicture}[scale=0.5]\n\n\t\t\\draw[step=2] (-5,-4) grid (4,5);\n\t\t\\draw[ultra thick] (-6,4)--(4,4);\n\t\t\\draw[ultra thick] (-6,-4)--(4,-4);\n\t\t\\draw[ultra thick] (-4,-4)--(-4,6);\n\t\t\\draw[ultra thick] (4,-4)--(4,6);\n\t\t\\draw[ultra thick] (-6,0)--(4,0);\n\t\t\\draw[ultra thick] (0,-4)--(0,6);\n\t\t\\draw (-4,4)--(-6,6);\n\n\t\t\\draw (-3,3) node {$\\frac{1}{2}-p$};\n\t\t\\draw[red] (-1,3) node {$p$};\n\t\t\\draw (1,3) node {$\\frac{1}{2}-p$};\n\t\t\\draw[red] (3,3) node {$p$};\n\n\t\t\\draw[red] (-3,1) node {$p$};\n\t\t\\draw (-1,1) node {$\\frac{1}{2}-p$};\n\t\t\\draw[red] (1,1) node {$p$};\n\t\t\\draw (3,1) node {$\\frac{1}{2}-p$};\n\n\t\t\\draw (-3,-1) node {$\\frac{1}{2}-p$};\n\t\t\\draw[red] (-1,-1) node {$p$};\n\t\t\\draw[red] (1,-1) node {$p$};\n\t\t\\draw (3,-1) node {$\\frac{1}{2}-p$};\n\n\t\t\\draw[red] (-3,-3) node {$p$};\n\t\t\\draw (-1,-3) node {$\\frac{1}{2}-p$};\n\t\t\\draw (1,-3) node {$\\frac{1}{2}-p$};\n\t\t\\draw[red] (3,-3) node {$p$};\n\n\t\t\\draw (-5,3) node {0};\n\t\t\\draw (-5,1) node {1};\n\t\t\\draw (-5,-1) node {0};\n\t\t\\draw (-5,-3) node {1};\n\t\t\\draw (-6,2) node {0};\n\t\t\\draw (-6,-2) node {1};\n\n\t\t\\draw (-3,5) node {0};\n\t\t\\draw (-1,5) node {1};\n\t\t\\draw (1,5) node {0};\n\t\t\\draw (3,5) node {1};\n\t\t\\draw (-2,6) node {0};\n\t\t\\draw (2,6) node {1};\n\n\t\t\\draw (-5,4.4) node {$B_1$};\n\t\t\\draw (-4.4,5) node {$A_1$};\n\t\t\\draw (-6,5) node {$Y_1$};\n\t\t\\draw (-5,6) node {$X_1$};\n\t\t\t\n\t\\end{tikzpicture}\n\\end{centering}\n\\caption{A simple 2-partite state $P_{A_1B_1|X_1Y_1}$ with the CHSH symmetry.} \\label{fig:CHSH_symmetry}\n\\end{figure} \n\n\\begin{cor}[de Finetti reduction for states with the CHSH symmetry] \\label{cor:chsh-post-selection}\n\tThere exists a non-signalling de Finetti state $\\tau^\\mathcal{CHSH}_{AB|XY}$ where $a,b,x,y \\in \\{ 0,1 \\}^n$ such that for every permutation invariant\\footnote{Here a permutation acts on the bipartite state as $\\left(\\mathrm{P}_{AB|XY}\\circ\\pi \\right) (ab|xy)=\\mathrm{P}_{AB|XY}\\left(\\pi(a)\\pi(b)|\\pi(x)\\pi(y)\\right)$.} state $\\mathrm{P}_{AB|XY}$ with the CHSH symmetry, for all $a,b,x,y$,\n\t\\[\n\t\t\\mathrm{P}_{AB|XY} (a,b|x,y) \\leq (n+1) \\; \\tau^\\mathcal{CHSH}_{AB|XY} (a,b|x,y)\\;. \n\t\\]\n\\end{cor}\nNote that we do not assume that the state $\\mathrm{P}_{AB|XY}$ satisfies any non-signalling conditions. 
Our theorem holds even when there is signalling between the subsystems, and therefore can be used in a broad set of applications.\n\nCorollary \\ref{cor:chsh-post-selection} is derived from Theorem \\ref{thm:post-selection} by showing that $d=1$ for the CHSH symmetry\\footnote{Intuitivly, in the CHSH symmetry there is only one degree of freedom, i.e. $d=1$, since we are only free to choose one value $p$ when defining the basic CHSH state given in Figure \\ref{fig:CHSH_symmetry}. Less symmetry implies more degrees of freedom. }. For pedagogical reasons, we also present a self-contained proof including an explicit construction of the state $\\tau^\\mathcal{CHSH}_{AB|XY}$ in Appendix~\\ref{sec:chsh-proof}. \n\nAlthough the assumption about the symmetry of the states in Corollary \\ref{cor:chsh-post-selection} appears to be rather restrictive, the statement turns out to be useful for applications. \n\n\\section{Applications}\n\nTo illustrate the use of the de Finetti reductions, we start by considering the following abstract application. Let $\\mathcal{T}$ be a test which interacts with a state $\\mathrm{P}_{A|X}$ and outputs ``success'' or ``fail'' with some probabilities. One can think about this test, which can be chosen according to the application being considered, as a way to quantify the success probability of the protocol when the state $\\mathrm{P}_{A|X}$ is given as input. For example, if one considers an estimation, or a tomography, protocol a test can be chosen to output ``success'' when the estimated state is close to the actual state \\cite{christandl2009postselection}. \n\nWe denote by $\\mathrm{Pr}_{\\text{fail}}(\\mathrm{P}_{A|X})$ the probability that $\\mathcal{T}$ outputs ``fail'' after interacting with $\\mathrm{P}_{A|X}$. We consider permutation invariant tests, defined as follows. \n\\begin{defn}\\label{def:permutation-invariant-test}\n\tA test $\\mathcal{T}$ is permutation invariant if for all states $\\mathrm{P}_{A|X}$ and all permutations $\\pi$ we have\n\t\\[\n\t \t\\mathrm{Pr}_{\\text{fail}}(\\mathrm{P}_{A|X}) = \\mathrm{Pr}_{\\text{fail}}(\\mathrm{P}_{A|X}\\circ\\pi) \\;.\n\t\\]\n\\end{defn}\n\nUsing the de Finetti reduction in Corollary \\ref{cor:conditional} we can prove upper bounds of the following type: \n\n\\begin{lem}\\label{lem:test-bound}\n\tLet $\\mathcal{T}$ be a permutation invariant test. Then for every state $\\mathrm{P}_{A|X}$ \n\t\\[\n\t\t\t\\mathrm{Pr}_{\\mathrm{fail}}(\\mathrm{P}_{A|X}) \\leq (n+1)^{m(l-1)} \\mathrm{Pr}_{\\mathrm{fail}}(\\tau_{A|X}) \\;.\n\t\\]\n\\end{lem}\n\nThe importance of the de Finetti reductions is obvious from this abstract example --- if one wishes to prove an upper bound on the failure probability of the test $\\mathcal{T}$, instead of proving it for all states $\\mathrm{P}_{A|X}$ it is sufficient to prove it for the de Finetti state $\\tau_{A|X}$ and ``pay'' for it with the additional polynomial pre-factor of $(n+1)^{m(l-1)}$. Since the de Finetti state has an i.i.d.\\ structure this can highly simplify the calculations of the bound. \n\nMoreover, in many cases one finds that the bound on $\\mathrm{Pr}_{\\text{fail}}(\\tau_{A|X})$ is exponentially small in $n$. For an estimation protocol, the failure probability of the test, when interacting with an i.i.d.\\ state, can be shown to be exponentially small in the number of subsystems used for the estimation, using Chernoff bounds. 
This is also the case when dealing with security proofs -- the failure probability of a protocol, when a de Finetti state is given as input, is usually exponentially small in the number of subsystems used in the protocol. If this is indeed the case, then the polynomial pre-factor of $(n+1)^{m(l-1)}$ will not affect the bound in the asymptotic limit of large $n$. That is, an exponentially small bound on $\\mathrm{Pr}_{\\text{fail}}(\\tau_{A|X})$ implies an exponentially small bound on $\\mathrm{Pr}_{\\text{fail}}(\\mathrm{P}_{A|X})$.\n\nFor an estimation protocol as mentioned above, the notion of the test, combined with the de Finetti reductions, can be used to prove that an estimation procedure for permutation invariant states succeeds with high probability. \n\nFor readers who are interested in cryptography, we show in Appendix~\\ref{sec:diamond-proofs} how to derive a similar result when considering the diamond norm~\\cite{kitaev1997quantum}, i.e., the distance between channels acting on conditional probability distributions, instead of the abstract test $\\mathcal{T}$. The diamond norm is the relevant distance measure when considering cryptographic protocols, and therefore using de Finetti reductions to upper bound the diamond norm can simplify the analysis of device independent protocols. \n\n\\section{Concluding remarks}\nIn this letter we introduced a general de Finetti-type theorem from which various more specialised variants can be derived. Crucially, such theorems can be formulated even without relying on assumptions regarding the non-signalling conditions between the subsystems or the underlying dimension. In the general theorem, Theorem~\\ref{thm:post-selection}, we can also see how additional symmetries of the states can affect the pre-factor in the de Finetti reduction. This suggests that the same relationship might also exist in the quantum post selection theorem \\cite{christandl2009postselection}, which is the quantum variant of the de Finetti reductions presented here.\n\nAs an example of an application, we showed how our theorems can be used to bound the failure probability of a test. In a following work \\cite{arnon2014nonsignalling} we show how to use the concept of the test, together with the de Finetti reduction given in Corollary \\ref{cor:conditional}, to prove parallel repetition results for non-local games. Previous de Finetti theorems could not have been used in the setting of non-local games due to their dependency on the dimension of the systems or the strict non-signalling conditions they assume. The new de Finetti theorem presented here therefore opens new possibilities and strictly extends the range of applications to which de Finetti reductions can be applied.\n\nAs an additional example, we explain in Appendix~\\ref{sec:diamond-proofs} how our theorem can be used in device independent protocols in which the parties are not assumed to be restricted by quantum theory. We hope that this approach will also be useful for quantum device independent information processing protocols in the future. One possible direction can be to use a similar de Finetti reduction as in Corollary \\ref{cor:chsh-post-selection}, but for a Bell inequality in which the maximal violation is achieved within quantum theory. 
This way, the resulting de Finetti state will be not only non-signalling but also quantum.\nDue to the general structure of the de Finetti reductions and the increasing use of conditional probability distributions in quantum information theory, we also hope that the presented reductions will be useful in other applications apart from cryptography, such as quantum tomography, as was the case for the quantum post selection theorem~\\cite{christandl2009postselection}.\n\nThe techniques used to prove our theorems (mainly combinatoric arguments) differ from the techniques used in previous papers to establish general de Finetti theorems. We therefore hope that they will shed new light on de Finetti reductions in general. For example, it might be possible to apply some ideas from the proof in (device dependent) quantum de Finetti reductions. \n\n\\begin{acknowledgments}\nThe authors thank Roger Colbeck and Michael Walter for discussing a preliminary version of this work. This work was supported by the Swiss National Science Foundation (via the National Centre of Competence in Research ``QSIT'' and SNF project No. 200020-135048), by the European Research Council (via project No. 258932), by the CHIST-ERA project ``DIQIP'' and by the EC STREP project ``RAQUEL''. \n\\end{acknowledgments}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/overview-apinet.pdf}\n \\caption{Overview of proposed Multi Channel Xception Attentive Pairwise Interaction (MCX-API) network. Two inputs are first represented in $n$ kinds of color spaces, $CS_1$ to $CS_n$ to obtain a two N-channel input and subsequently feature vectors. We thereafter obtain $x_{i}^{self}$ and $x_{i}^{other}$ by comparison through MCX-API, where $x_{i}^{self}$ is enhanced by its own images and $x_{i}^{other}$ is activated by the other image. $x_{i}$ is therefore improved with discriminative clues that come from both images. By comparison, we can finally distinguish if an image is pristine or fake.}\n \\label{fig:overview}\n\\end{figure}\nDeepfakes are synthetic media that are generated by deep learning methods to manipulate the content in images and videos. The manipulations include altering people's identities, faces, expressions, speech or bodies to both entertainment and malicious intent (for example pornographic uses). Benefiting from the remarkable advancement in generation models, amateurish individuals are capable of creating Deepfakes using off-the-shelf models~\\cite{DeepFaceLab, fffs, FaceApp} without tedious efforts. \nIn the meantime, channelized efforts have been dedicated to devising Deepfakes detection algorithms using multiple approaches such as by determining unique artifacts~\\cite{matern2019exploiting,ciftci2020fakecatcher,fernandes2019predicting,haliassos2021lips,agarwal2019protecting,li2020face}, utilizing Convolutional Neural Networks (CNNs) based networks~\\cite{marra2019gans,rossler2019faceforensics++,nguyen2019use}, employing frequency domain information~\\cite{durall2019unmasking,chen2021attentive,qian2020thinking} and other clues~\\cite{cozzolino2021id, cozzolino2022audio}.\n\\par With an atomic effort, these methods could perform well with an average of more than 99\\%~\\cite{rossler2019faceforensics++} accuracy in a closed-set problem where the training and testing data are pulled from the same label and feature spaces. 
For example, the network is trained on attacks A, B and C and tested on images\/videos drawn from attack A or B or C. However, newer DeepFakes generation mechanisms make the detection algorithms untrustworthy and non-generalizable by degrading the performance of the detector~\\cite{zhao2021learning,zhou2021joint} as no exception to those classifiers trained with machine learning methods. In the context of DeepFakes detection, this can be parallel to detecting attack D when the detector is trained on A, B, and C, making it an open-set problem. The reasons behind the collapse of detection models towards unseen contents can, to some degree, be attributed to various generation algorithms, which often result in different data distributions, feature spaces, and appearance properties of images or videos. While one can see the imperative need for a generalizable detection technique to make reliable decisions on unknown\/unseen generation types in addition to known\/seen generation data, we note low performances of networks in this direction \\cite{zhao2021learning,zhou2021joint,xu2022supervised, aneja2020generalized}. \n\\par We thus motivate our work, focusing on both closed-set and open-set detection in this article. We draw our inspiration from how humans tend to detect altered media in a fine-grained manner by comparing one kind of visual content to another. Human decision making relies on detecting an unseen kind of manipulated images\/videos as fake by comparing the unknown generation type to the known generation types, especially the artifacts and clues~\\cite{zhuang2020learning}. Initial work using on pairwise interaction has shown promising directions to capture subtle differences in a pairwise manner with not only principal parts of the image but also distinct details from the other image \\cite{zhuang2020learning}. Using such a paradigm, we propose to learn the known type of generations in a fine-grained pairwise manner explicitly to improve the performance of a Deepfake detector for unknown types. Further, we also note the complementary information an image\/video can exhibit in different color spaces along the same lines. We therefore incorporate information from four color spaces, including RGB, CIELab, HSV, and YCbCr integrating to boost the attentive pairwise learning to guide the detector to classify the non-manipulated images efficiently. Our proposed approach exploits the information from color channels in a pairwise manner using the strengths of the Xception network and we refer to this as the Multi-Channel Xception Attentive Pairwise Interaction (MCX-API) network between non-manipulated images against a set of manipulated images and to try to generalize the detector towards unknown manipulation types or unseen data. \\Cref{fig:overview} shows an overview of the idea presented in this work. \n\\par To validate our idea in this work, we conduct various experiments using FaceForensics++ dataset~\\cite{rossler2019faceforensics++} which consists of four different manipulation classes including DeepFakes (DF)~\\cite{ffdf}, FaceSwap (FS)~\\cite{fffs}, Face2Face (F2F)~\\cite{thies2016face2face} and NeuralTextures (NT)\\cite{thies2019deferred} where we obtain better state-of-the-art (SOTA) performance or at par detection performance to best performing SOTA approaches in closed-set experiments~\\cite{chollet2017xception,afchar2018mesonet,zhao2021learning,li2020face}. 
Furthermore, we demonstrate the effectiveness of variants of the proposed approach in detecting Deepfakes in open-set scenarios where our approach achieves better results than SOTA models on three other public datasets such as FakeAV~\\cite{khalid2021fakeavceleb}, KoDF~\\cite{kwon2021kodf}, and Celeb-DF~\\cite{li2020celeb}.\n\\par A detailed ablation study is presented on MCX-API to illustrate the variability of performance of the detector to various design choices in the network. Thus, the main contributions of our paper are \\textbf{(1)} We propose a new framework - Multi-Channel Xception Attentive Pairwise Interaction (MCX-API) for Deepfakes detection by exploiting color space and pairwise interaction simultaneously, bringing a novel fine-grained idea for the Deepfakes detection field. \\textbf{(2)}We report all results by balanced-open-set-classification (BOSC) accuracy to exemplify the generalizability of our proposed approach. \n \n \n\\textbf{(3)}We conduct cross-datasets validations with three SOTA Deepfake datasets, Celeb-DF~\\cite{li2020celeb}, KoDF~\\cite{kwon2021kodf} and FakeAVCelebDF~\\cite{khalid2021fakeavceleb}. Furthermore, we compared the results with SOTA Deepfake detection methods. Our MCX-API obtains 98.48\\% BOSC accuracy on the FF++ dataset and 90.87\\% BOSC accuracy on the Celeb-DF dataset, indicating an optimistic direction for the generalization of DeepFake detection.\n\nIn the rest of the paper, we list a set of directly related works in \\cref{sec:related-works} and then present our proposed approach in \\cref{sec:proposed-approach}. \nWe provide an analysis of explainability in \\cref{sec:ExplainableAnalysisofMCX-API} with the set of experiments and results on generalizability detailed in \\cref{sec:Resutls}. We finally conclude the work in \\cref{sec:conclusion}.\n\n\\begin{figure*}[htp]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{figures\/API-Net.pdf}\n \\caption{The architecture of MCX-API network.}\n \\label{fig:arr_bar}\n\\end{figure*}\n\n\\section{Related Work}\n\\label{sec:related-works}\n\\textbf{Deepfakes detection methods.} A track of Deepfakes detection focuses on the unique artifacts on human faces, such as eye blinking~\\cite{li2018ictu}, different eye colors~\\cite{matern2019exploiting}, abnormal heartbeat rhythms shown on the face~\\cite{ciftci2020fakecatcher,fernandes2019predicting}. LipForensics~\\cite{haliassos2021lips} targets high-level semantic abnormalities in mouth movements, which the authors observe as a common indicator in many generated videos. Some articles are dedicated to finding inconsistencies in images and videos. These inconsistencies arise out of generation process where landmarks, head pose are inconsistent~\\cite{yang2019exposing,agarwal2019protecting} or observable in image blending~\\cite{li2018exposing,li2020face}. Cozzolino \\textit{et. al.}~\\cite{cozzolino2021id} have introduced ID-Reveal, an identity-aware detection approach leveraging a set of reference videos of a target person and trained in an adversarial manner. \nMany papers have utilized CNNs-based methods for detecting features existing in forged images\\cite{marra2019gans,rossler2019faceforensics++,nguyen2019use}. 
Using high-frequency features~\\cite{durall2019unmasking,chen2021attentive,qian2020thinking} to distinguish Deepfakes has also gained popularity.\nAlthough pairwise learning has been used for Deepfake detection~\\cite{hsu2020deep, xu2022supervised}, these methods rely on contrastive learning and lack explicit pairwise interactions.\n\n\\par \\textbf{Generalization to unseen manipulations.} While many works have been proposed for detecting DeepFakes, they have focused on closed-set experiments where the training and testing set distributions are similar. Open-set experiments indicate that they underperform on unseen manipulations. In the meantime, an increasing number of works have tried to address the problem of generalization of DeepFakes detection. These works have focused on domain adaptation and transfer learning to reduce the burden of learning parameters in an end-to-end manner~\\cite{aneja2020generalized,kim2021fretal,lee2021tar}. \\textit{Cozzolino et al.}~\\cite{cozzolino2018forensictransfer} proposed an autoencoder-like structure, ForensicTransfer, and studied the generalization aspect using a single detection method for multiple target domains. Follow-up works like the Locality-aware AutoEncoder (LAE)~\\cite{du2019towards} and Multi-task Learning were proposed for detecting and segmenting manipulated facial images and videos~\\cite{nguyen2019multi}. The recently proposed Transfer learning-based Autoencoder with Residuals (TAR)~\\cite{lee2021tar} uses the residuals from autoencoders to handle generalizability. \\textit{Kim et al.}~\\cite{kim2021fretal} employed the Representation Learning (ReL) and Knowledge Distillation (KD) paradigms to introduce a transfer learning-based Feature Representation Transfer Adaptation Learning (FReTAL) method. However, these transfer learning and zero-shot\/few-shot learning methods could not wholly deal with the Deepfakes detection generalization problem, because the networks have already seen the manipulated images\/videos; therefore, strictly speaking, it is not an open-set situation.\n\\par In parallel, some other novel networks have been proposed dealing with the generalization problem of Deepfakes detection. A method to detect deepfake images using the cue of source feature inconsistency within the forged images~\\cite{zhao2021learning} was proposed based on the hypothesis that distinct source features can be preserved and extracted through SOTA deepfake generation processes. Joint Audio-Visual Deepfake Detection~\\cite{zhou2021joint} was proposed by jointly modeling the video and audio modalities; this combined visual\/auditory detection task shows that exploiting the intrinsic synchronization between the visual and auditory modalities could benefit deepfake detection. \\textit{Xu et al.}~\\cite{xu2022supervised} proposed a method using supervised contrastive learning to deal with the generalization problem in detecting forged visual media.\n\n\\section{Proposed Method}\n\\label{sec:proposed-approach}\nFine-grained methods have been widely used for classification problems where the categories are visually very similar~\\cite{zhuang2020learning, xiao2015application, akata2015evaluation}. We draw similar inspiration for our problem of Deepfake detection, following the architecture proposed earlier~\\cite{zhuang2020learning} and building upon it with a number of improvements. We assert that an architecture for fine-grained classification can help in detecting Deepfakes. 
Unlike the orginal architecture, we introduce Xception~\\cite{chollet2017xception} to extract the embeddings motivated by earlier works in Deepfake detection~\\cite{rossler2019faceforensics++, zhao2021multi, wang2022m2tr, kim2021fretal}. \n\nSecond, to benefit from information from different color spaces, we make the base network to a multi-channel network. Then, we enforce pairwise learning by following the architecture of Attentive Pairwise Learning \\cite{zhuang2020learning}. We propose using the Multi Channel Xception Attentive Pairwise Interaction Network (MCX-API) to deal with the Deepfakes classification problem as detailed further.\n\n\\subsection{Architecture}\n\\par We first utilize MTCNN\\cite{zhang2016joint} to crop and align the face region of a single frame. Two selected face images are further sent to a Multi-Channel Xception backbone, and this backbone network extracts two corresponding $\\mathrm{D}$-dimension feature vectors $x_{1}$ and $x_{2}$ using the face image represented in $N$ different channels that include RGB, CIELab, HSV, and YCbCr. A mutual vector $x_{m}\\in \\mathbb{R}^{D}$ is further generated by concatenating $x_{1}$ and $x_{2}$ and using a Multi-Layer Perceptron (MLP) function for mapping $x_{m}$ to get a $\\mathrm{D}$ dimension. $x_{m}$ is a joint feature that includes high-level contrastive clues of both input images across multiple color channels.\n\n\\par In order to compare $x_{m}$ with $x_{1}$ and $x_{2}$, we need to activate $x_{m}$ using sigmoid function to increase the positive relation with $x_{i}$ and decrease the negative relation against $x_{i}$~\\cite{zhuang2020learning}. Therefore, two gate vectors $g_{1}$ and $g_{2}$ will be generated. $g_{i}$ is calculated by $x_{m}$ and $x_{i}$, thus containing contrastive clues and acting as discriminative attention spots semantic contrasts with a distinct view of each $x_{i}$. The gate vector $g_{i}$ is the sigmoid of the output of channel-wise product between $x_{m}$ and $x_{i}$, whose formula is provided in \\Cref{eqn:gate-vector}. \n\\vspace{-1mm}\n\\begin{equation}\n g_{i} = sigmoid(x_{m} \\odot x_{i}), \\;\\; i \\in{\\{1,2\\}}\n \\label{eqn:gate-vector}\n\\end{equation}\n\n\\par A pairwise interaction between input features $x_{i}$ and gate vectors $g_{i}$ is performed to induce residual attention by comparing one image to the other to distinguish the final class. The sequence of interaction can be shown in \\Cref{eqn:pairwise-interaction}.\n\\begin{equation}\n\\centering\n\\begin{split}\n x_{1}^{pristine}=x_{1}+x_{1}\\odot g_{1} \\\\\n x_{1}^{fake}=x_{1}+x_{1}\\odot g_{2} \\\\\n x_{2}^{pristine}=x_{2}+x_{2}\\odot g_{2} \\\\\n x_{2}^{fake}=x_{2}+x_{2}\\odot g_{1}\n\\end{split}\n\\label{eqn:pairwise-interaction}\n\\end{equation}\nThrough the pairwise interaction of each feature $x_{i}$, two attentive feature vectors $x_{i}^{pristine}\\in \\mathbb{R}^{D}$ and $x_{i}^{fake}\\in \\mathbb{R}^{D}$ are further produced. The former one is highlighted by its gate vector, and the latter is triggered by the gate vector of the compared image. $x_{i}$ is thus enhanced with discriminative clues from both input features through pairwise interaction.\n\n\n\n\\subsection{Loss calculation}\nThe four attentive features $x_{i}^{j}$ where $i \\in {\\{1,2\\}}$ and $j \\in {\\{pristine,fake\\}}$, the pairwise interaction outputs, are fed into a $softmax$ classifier for the loss calculation~\\cite{zhuang2020learning}. 
The output of $softmax$ denoted by $p_{i}^{j}$ is the probability of a feature belonging to a specific class (i.e., non-manipulated or Deepfake). The main loss in our case is the cross-entropy loss \n\\begin{equation}\n\\mathcal{L}_{ce} = -\\sum_{i \\in \\{ 1,2 \\}} \\sum_{j \\in \\{ pristine,fake \\}} y_{i}^{\\intercal} log(p_{i}^{j})\n\\label{eqn:lce}\n\\end{equation}\nwhere $y_{i}$ is the one-hot label for image $i$ in the pair and $\\intercal$ represents the transpose. MCX-API can be trained to determine all the attentive features $x_{i}^{j}$ under the supervision of the label $y_{i}$ through this loss.\n\n\\par Furthermore, a hinge loss of score ranking regularization\n\\begin{equation}\n \\mathcal{L}_{rk} = \\sum_{i\\in {\\{1, 2\\}}} max(0, p_{i}^{fake}(c_{i})-p_{i}^{pristine}(c_{i})+\\epsilon )\n\\label{eqn:lrk}\n\\end{equation}\nis also introduced when computing the complete loss~\\cite{zhuang2020learning}. $c_{i}$ is the corresponding index associated with the ground truth label of image $i$. So $p_{i}^{j}(c_{i})$ is a softmax score of $p_{i}^{j}$. Since $x_{i}^{pristine}$ is activated by its gate vector $g_{i}$, it should contain more discriminative features to identify the corresponding image, compared to $x_{i}^{fake}$. $\\mathcal{L}_{rk}$ is utilized to promote the priority of $x_{i}^{pristine}$ where the score difference between $p_{i}^{fake}(c_{i})$ and $p_{i}^{pristine}(c_{i})$ should be greater than a margin.\nThe whole loss for a pair is composed of two losses, cross-entropy loss $\\mathcal{L}_{ce}$ and score ranking regularization $\\mathcal{L}_{rk}$ with coefficient $\\lambda$. \n\\begin{equation}\n \\mathcal{L} = \\mathcal{L}_{ce} + \\lambda \\mathcal{L}_{rk}\n\\label{eqn:loss}\n\\end{equation}\nIn this way, MCX-API is able to take feature priorities into account adaptively and learns to recognize each image in the pair.\n\n\\section{Experiments and Results}\n\\label{sec:Resutls}\n\n\\subsection{Datasets}\n\\textbf{Training data: }We select FaceForensics++ \\cite{rossler2019faceforensics++} to train the proposed approach. This forensics dataset consists of 1000 original videos and corresponding number of manipulated videos consisting of 1000 videos for each of the subsets - DeepFakes (denoted as DF)~\\cite{ffdf}, Face2Face (denoted as F2F)~\\cite{thies2016face2face}, FaceSwap (denoted as FS)~\\cite{fffs}, and NeuralTextures (denoted as NT)~\\cite{thies2019deferred}.\n\n\\textbf{Cross-dataset Validation: }We also select three other SOTA datasets for generalization test and comparison. \\textbf{Celeb-DF~\\cite{li2020celeb}}: For Celeb-DF, we choose id51-id61 from Celeb-real, Celeb-synthesis and id240-id299 from YouTube-real for the test set.\n \n\\textbf{KoDF~\\cite{kwon2021kodf}} We randomly selected 265 real videos and 734 fake ones as our test set.\n\\textbf{FakeAV~\\cite{khalid2021fakeavceleb}} We randomly selected 500 videos as our test set.\n\n\n\\begin{table*}[htp]\n\\caption{\\textbf{Frame-level BOSC Accuracy and AUC for our proposed MCX-API networks and SOTA methods on seen data.} We compare the results with the SOTA methods on DF\/F2F\/FS\/NT respectively. All networks are trained on the whole FF++ c23 dataset. 
The data of the first three methods are adopted from Table 5 in Appendix of FF++~\\cite{chollet2017xception}.}\n\\label{tab:bosc-ffpp}\n\\begin{threeparttable}\n\\centering\n\\begin{tabular}{llllllll}\n\\toprule\nFF++ c23 & & \\multicolumn{6}{c}{Frame-level (BOSC(\\%)\/AUC)} \\\\ \\cline{1-1} \\cline{3-8} \nMethod & & \\multicolumn{1}{c}{DF} & \\multicolumn{1}{c}{F2F} & \\multicolumn{1}{c}{FS} & \\multicolumn{1}{c}{NT} & & Average \\\\ \\cline{1-6} \\cline{8-8} \nCozzolino \\textit{et al.}~\\cite{cozzolino2017recasting} & & 75.51\/ - & 86.34\/ - & 76.81\/ - & 75.34\/ - & & 78.50\/ - \\\\\nBayar and Stamm~\\cite{bayar2016deep} & & 90.25\/ - & 93.96\/ - & 87.74\/ - & 83.69\/ - & & 88.91\/ - \\\\\nMesoNet~\\cite{afchar2018mesonet} & & 89.55\/ - & 88.60\/ - & 81.24\/ - & 92.19\/ - & & 87.90\/ - \\\\\nXception\\tnote{*} \\cite{chollet2017xception} & & 96.35\/0.9941 & 96.26\/0.9937 & 96.29\/0.9952 & 92.43\/0.9736 & & 95.33\/0.9892 \\\\\nSupCon\\tnote{*} \\cite{xu2022supervised} & & 97.18\/0.9984 & 96.88\/0.9978 & 97.05\/0.9980 & 92.92\/0.9846 & & 96.01\/0.9947 \\\\ \nAPI-Net(ResNet101)\\tnote{*} \\cite{zhuang2020learning} & & 88.71\/0.9820 & 90.13\/0.9860 & 87.79\/0.9728 & 82.96\/0.9248 & & 87.40\/0.9664 \\\\ \\hline\nOurs & & & & & & & \\\\\n\\textbf{MCX-API(RGB)} & & \\textbf{98.75}\/0.9996 & \\textbf{99.90}\/0.9986 & \\textbf{98.5}\/\\textbf{0.9993} & \\textbf{96.75}\/0.9896 & & \\textbf{98.48}\/0.9968 \\\\\n\\textbf{MCX-API(RGB+HSV)} & & \\textbf{98.75}\/0.9988 & 98.50\/0.9979 & 97.75\/0.9978 & 95.75\/0.9829 & & 97.69\/0.9943 \\\\\n\\textbf{MCX-API(RGB+CIELab)} & & 97.00\/0.9996 & 96.50\/0.9985 & 96.25\/0.9989 & 95.25\/0.9909 & & 96.25\/0.9970 \\\\\n\\textbf{MCX-API(RGB+YCbCr)} & & 98.00\/\\textbf{0.9998} & 98.25\/\\textbf{0.9991} & 97.75\/\\textbf{0.9993} & \\textbf{96.75}\/0.9920 & & 97.69\/\\textbf{0.9976} \\\\\n\\textbf{MCX-API(RGB+HSV+CIELab)} & & 96.50\/0.9990 & 95.50\/0.9888 & 96.00\/0.9835 & 95.50\/\\textbf{0.9933} & & 95.88\/0.9912 \\\\\n\\textbf{MCX-API(RGB+LAB+YCbCr)} & & 92.00\/0.9963 & 92.25\/0.9972 & 91.50\/0.9960 & 91.00\/0.9870 & & 91.69\/0.9941 \\\\\n\\bottomrule\n\\end{tabular}\n \\begin{tablenotes}\n \\footnotesize\n \\item [*] Our implementation of the method.\n \\end{tablenotes}\n\\end{threeparttable}\n\\end{table*}\n\n\\textbf{Implementation details.} \nWe choose uncompressed videos for our experiments in this work using the Pytorch framework~\\cite{pytorch} to develop the models and the experiments are conducted on Python 3.6 environment on NVIDIA Tesla V100 32Gb in IDUN system owned by NTNU~\\cite{sjalander+:2019epic}. \n\nMulti-task Cascade Convolutional Neural Networks (MTCNN)~\\cite{zhang2016joint} is employed for face detection and face alignment since our experiments are focused on detecting the manipulated face region alone. We allow loose cropping of the face region to capture the entire silhouette against tight cropping. The first 30 frames from each video are extracted, resulting in 150000 total images. We use random cropping in the training phase and center cropping during the testing phase ($512^{2}\\to448^{2}$). In all our experiments, we employ Xception as the backbone where we derive the feature vector $x_{i}\\in \\mathbb{R}^{2048}$ after the global average pooling. We use a batch sampler during the training by randomly sampling three categories in each batch. For each category, we randomly choose nine images due to the limitations of the GPU and memory constraints. 
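\n\nA minimal sketch of such a class-balanced batch sampler is given below; the class name, the \\texttt{labels} list (one class index per training image) and the number of batches per epoch are placeholders of our own rather than the exact training code, and for brevity the sketch omits both the bookkeeping that prevents sample overlap across batches and the pair construction described next.\n\\begin{verbatim}\nimport random\nfrom collections import defaultdict\n\nclass BalancedBatchSampler:\n    # yields batches of n_cls x n_img dataset indices\n    def __init__(self, labels, n_cls=3, n_img=9, n_batches=1000):\n        self.buckets = defaultdict(list)\n        for idx, lab in enumerate(labels):\n            self.buckets[lab].append(idx)\n        self.n_cls, self.n_img, self.n_batches = n_cls, n_img, n_batches\n\n    def __len__(self):\n        return self.n_batches\n\n    def __iter__(self):\n        for _ in range(self.n_batches):\n            batch = []\n            for c in random.sample(list(self.buckets), self.n_cls):\n                batch += random.sample(self.buckets[c], self.n_img)\n            yield batch\n\n# usage: DataLoader(dataset, batch_sampler=BalancedBatchSampler(labels))\n\\end{verbatim}\n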
We further exercise care to have no sample overlap among all batches, as we exclude the selected sample from the dataset. We locate its most similar image from both its own class and the rest classes for each image by calculating the distance between features by utilizing both Euclidean distance and cosine distance. Each image would get one image as its intra- and inter-pair in the batch, respectively. Each pair is used as input $x_{1}$ and $x_{2}$ as well as generating a mutual vector $x_{m}\\in \\mathbb{R}^{2048}$ through the concatenation and the multilayer perceptron (MLP).\n\\par Based on empirical evaluations, we adopt the coefficient $\\lambda$ in \\Cref{eqn:loss} as 1.0, and 0.05 as the margin value in the score-ranking regularization. We use cosine annealing strategy to alter the learning rate starting from 0.01 \\cite{zhao2021learning}. We train the network with 100 epochs and freeze the parameters in the CNN backbone, and further on train only the classifier in the first eight epochs.\n\n\\textbf{Evaluation Metrics.}\nWe adopt Balanced-Open-Set-Classification (BOSC) accuracy and AUC as evaluation metrics.\n$BOSC = \\frac{Sensitivity+Specificity}{2}$, where $Sensitivity=\\frac{TP}{TP + FN}$ and $Specificity = \\frac{TN}{TN + FP}$.\n\n\\begin{table}[htp]\n\\caption{Comparison of the test results on the FF++ dataset with c23 (high-quality compression) settings. Training for all networks is carried out on FF++ c23. The accuracy and AUC score are at frame-level. The best performances are marked in bold. Data for Xception, $F^3$-Net, and EfficientNet-B4 are adopted from Table 2 in MaDD~\\cite{zhao2021multi}.}\n\\label{tab:ff-sota}\n\\centering\n\\begin{tabular}{llcc}\n\\toprule\nMethod && ACC & AUC \\\\ \\cline{1-1} \\cline{3-4} \nXception && 95.73 & 0.9909 \\\\ \n$F^3$-Net~\\cite{qian2020thinking} && 97.52 & 0.9810 \\\\ \nEfficientNet-B4~\\cite{tan2019efficientnet} && 96.63 & 0.9918 \\\\ \nDCL~\\cite{sun2022dual} && 96.74 & 0.9930 \\\\ \nMaDD~\\cite{zhao2021multi} && 97.60 & 0.9929 \\\\ \nM2TR~\\cite{wang2022m2tr} && 97.93 & 0.9951 \\\\ \nAPI-Net && 87.40 & 0.9664 \\\\ \\hline\nOurs && \\textbf{98.48} & \\textbf{0.9968} \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\\begin{table*}[htp]\n\\caption{\\textbf{Video-level BOSC Accuracy and AUC for our proposed MCX-API networks and SOTA methods on unseen data.} We compare the results with the SOTA methods on FakeAV\/KoDF\/Celeb-DF respectively. All the networks are trained on the whole FF++ c23 dataset. 
The data of the SOTA methods are adopted from Table 2 from \\cite{cozzolino2022audio}.}\n\\label{tab:cross-all}\n\\centering\n\\begin{threeparttable}\n\\begin{tabular}{lllll}\n\\toprule\nFF++ c23 & & \\multicolumn{3}{c}{Video-level (BOSC(\\%)\/AUC)} \\\\ \\cline{1-1} \\cline{3-5} \nMethod & & \\multicolumn{1}{c}{FakeAV} & \\multicolumn{1}{c}{KoDF} & \\multicolumn{1}{c}{Celeb-DF} \\\\ \\hline\nXception\\tnote{*} & & 23.99\/0.450 & 25.97\/0.482 & 31.34\/0.505 \\\\\nSeferbekov~\\cite{seferbekov} & & \\textbf{95.0\/0.986} & 79.2\/0.884 & 75.3\/0.860 \\\\\nFTCN~\\cite{zheng2021exploring} & & 64.9\/0.840 & 63.0\/0.765 & - \\\\\nLipForensics~\\cite{haliassos2021lips} & & 83.3\/0.976 & 56.1\/0.929 & -\/0.820 \\\\\nID-reveal~\\cite{cozzolino2021id} & & 63.7\/0.876 & 60.3\/0.702 & 71.6\/0.840 \\\\\nPOI~\\cite{cozzolino2022audio} & & 86.6\/0.941 & 81.1\/0.899 & - \\\\ \nAPI-Net(ResNet101)\\tnote{*} & & 59.99\/0.72 & 66.92\/0.76 & 58.00\/0.76 \\\\\\hline\nOurs & & & & \\\\\n\\textbf{MCX-API(RGB)} & & 74.94\/0.95 & 78.09\/\\underline{0.87} & 77.88\/0.87 \\\\\n\\textbf{MCX-API(HSV)} & & 74.63\/0.75 & \\underline{80.64}\/0.85 & 75.67\/0.88 \\\\\n\\textbf{MCX-API(CIELab)} & & 84.28\/0.90 & \\textbf{81.16\/0.90} & 64.28\/0.81 \\\\\n\\textbf{MCX-API(RGB+HSV)} & & 71.58\/0.93 & 78.11\/\\underline{0.87} & \\underline{80.18}\/0.88 \\\\\n\\textbf{MCX-API(RGB+CIELab)} & & 83.89\/0.93 & 77.93\/0.83 & 68.34\/\\textbf{0.91} \\\\\n\\textbf{MCX-API(RGB+YCbCr)} & & 70.41\/0.92 & 78.39\/0.85 & \\textbf{90.87}\/\\underline{0.90} \\\\\n\\textbf{MCX-API(RGB+HSV+CIELab)} & & \\underline{92.38\/0.98} & 78.91\/0.83 & 59.04\/0.89 \\\\\n\\textbf{MCX-API(RGB+LAB+YCbCr)} & & 82.93\/0.96 & 76.20\/0.80 & 54.92\/0.85 \\\\\n\\bottomrule\n\\end{tabular}\n \\begin{tablenotes}\n \\footnotesize\n \\item [*] Our implementation of the method.\n \\end{tablenotes}\n\\end{threeparttable}\n\\end{table*}\n\n\\subsection{Experimental Results}\n\\label{ExperimentalResults}\nWe evaluate the effectiveness of the proposed MCX-API network with both seen and unseen data in this section. \n\n\\subsubsection{Intra-dataset Evaluation (Closed Set Protocol)}\nWe conduct experiments on six networks with different color spaces on MCX-API whose results are presented in \\cref{tab:bosc-ffpp}. All networks are trained with all four manipulated methods along with pristine in FF++ c23 dataset. We test the frame-level detection performance on the test data of FF++ c23 in a non-overlapping manner regarding the ID. \n\nIn \\cref{tab:bosc-ffpp}, the frame-level test results are listed. We observe that our proposed MCX-API network with RGB inputs reaches the highest average accuracy, 98.48\\%. In addition, this setting also gains the highest accuracy on DF, F2F, and FS with 98.87\\%, 99.90\\% and 98.50\\%, respectively. MCX-API with YCbCr achieves the highest accuracy for NT with 97.00\\%. As RGB provides best performance under 3-channel setting, we combine RGB with HSV, CIELab, and YCbCr, respectively, to create three 6-channel MCX-API networks. From the second block in \\cref{tab:bosc-ffpp}, we can see that RGB+YCbCr obtains the highest average AUC score of 0.9976 and the best performance on DF, F2F, and FS regarding AUC score. This indicates better prediction output scores using MCX-API with the combination of RBG and YCbCr color spaces. The 9-channel MCX-API network with RGB, HSV, and CIELab further gains the highest 0.9933 AUC score for NT.\n\nThe results of the comparison with the SOTA methods are reported in \\cref{tab:ff-sota}. 
All networks are trained on FF++ c23 (high-quality compression). The accuracy and AUC scores are measured at frame level. The results are averaged on all the test sets from FF++ c23, including pristine and all four kinds of manipulated videos. Our proposed method MCX-API with RGB color space obtains the best performance compared to SOTA methods. The best accuracy of the BOSC is 98.48\\%, and the highest AUC score is 0.9968. The result shows that our idea of pairwise learning in a fine-grained manner could work well in inter-class (closed-set) setting of Deepfake detection problem.\n\n\\subsubsection{Cross-dataset Evaluation}\nWe conduct a comparison on cross-dataset validation with SOTA methods to validate the proposed approach. We employ FakeAV, KoDF, and Celeb-DF to test the generalizability of our MCX-API network. Training for all networks are carried out on the FF++ c23 dataset and tested on FakeAV, KoDF, and Celeb-DF. We note that MCX-API with CIELab color space gets the best scores for KoDF with an accuracy of 81.86\\% and an AUC score of 0.90 as presented in \\cref{tab:cross-all}. MCX-API with RGB+YCbCr wins in the cross-dataset validation for Celeb-DF with an accuracy of 90.87\\% and the second best AUC score 0.90. MCX-API with color space RGB+HSV+CIELab achieves the second best place for FakeAV with 92.38\\% accuracy and 0.98 AUC score. In general, our proposed network gets a relatively better performance than the SOTA methods which indicates the better generalizability of the proposed MCX-API network.\n\n\\section{Explainable Analysis of MCX-API}\n\\label{sec:ExplainableAnalysisofMCX-API}\nWe further analyze the network to understand the performance gain by analyzing embeddings using t-SNE plots~\\cite{van2008visualizing} and class activation maps~\\cite{selvaraju2017grad, chattopadhay2018grad, draelos2020use, jiang2021layercam, fu2020axiom}. While the t-SNE provides topology explanations of the learned features, the activation maps allow for a better visualization of what has been learned by our network. \n\n\\begin{figure*}[ht]\\centering\n\t\\resizebox{2.\\columnwidth}{!}{\n\t\t\\begin{tabular}{cc}\n\t\t\t\\begin{tikzpicture}[spy using outlines={rectangle,yellow,magnification=3,size=4.0cm, connect spies, every spy on node\/.append style={very thick}}]\n\t\t\t\t\\node {\\includegraphics[width=9cm, height=9cm]{figures\/tsne_MCXapi.png}};\n\t\t\t\t\\spy[red] on (-0.3,0.2) in node [right] at (-5.5,6.3);\n\t\t\t\t\\spy[green,size=2.7cm] on (0.1,-1.2) in node [right] at (-6.9,-2.2);\n\t\t\t\t\\spy[blue,size=3.cm] on (0.5,2.1) in node [right] at (1.,6.);\n \\spy[orange,size=2.7cm] on (-2.5,1.0) in node [left] at (-4.2,1);\n\t\t\t\\end{tikzpicture} & \n\n\t\t\t\\begin{tikzpicture}[spy using outlines={rectangle,yellow,magnification=3,size=4cm, connect spies, every spy on node\/.append style={very thick}}]\n\t\t\t\t\\node {\\includegraphics[width=9cm, height=9cm]{figures\/tsne_api.png}};\n\t\t\t\t\\spy[red] on (-0.3,0.2) in node [right] at (-5.5,6.3);\n\t\t\t\t\\spy[green,size=2.7cm] on (0.1,-1.2) in node [right] at (-6.9,-2.2);\n\t\t\t\t\\spy[blue,size=3.cm] on (0.5,2.1) in node [right] at (1.,6.);\n \\spy[orange,size=2.7cm] on (-2.,1.0) in node [left] at (-4.2,1);\n\t\t\t\\end{tikzpicture}\\\\\n \n\t\t\t\\large{MCX-API (RGB)} & \\large{API} \n\t\t\\end{tabular}\n\t}\n\t\\caption{Data visualizations in 2D by t-SNE for MCX-API(RGB) and API. The left plot is t-SNE for our proposed MCX-API. The right plot is t-SNE for base architecture API-Net. 
We blow up the intersection parts and outliers for a clear view.}\n\\label{fig:tsne}\n\\end{figure*}\n\n\\begin{figure*}[ht]\\centering\n\t\\resizebox{1.7\\columnwidth}{!}{\n\t\t\\begin{tabular}{ccc}\n\t\t\t\\begin{tikzpicture}[spy using outlines={circle,yellow,magnification=3,size=4.5cm, connect spies, every spy on node\/.append style={ultra thick}}]\n\t\t\t\t\\node {\\includegraphics[width=6cm, height=6cm]{figures\/blowup\/df_frame181.png}};\n\t\t\t\t\\spy[red] on (-0.1,-0.4) in node [right] at (2.,-4);\n\t\t\t\t\\spy[yellow] on (-1.4,-1.3) in node [right] at (-5,-5);\n\t\t\t\t\\spy[blue] on (1.6,0.8) in node [right] at (2,4);\n\t\t\t\\end{tikzpicture} & \n\t\t\t\n\t\t\t\\begin{tikzpicture}[spy using outlines={circle,yellow,magnification=3,size=4.5cm, connect spies, every spy on node\/.append style={ultra thick}}]\n\t\t\t\t\\node {\\includegraphics[width=6cm, height=6cm]{figures\/blowup\/df_mcxapi_layercam.png}};\n\t\t\t\t\\spy[red] on (-0.1,-0.4) in node [right] at (2.,-4);\n\t\t\t\t\\spy[yellow] on (-1.4,-1.3) in node [right] at (-5,-5);\n\t\t\t\t\\spy[blue] on (1.6,0.8) in node [right] at (2,4);\n\t\t\t\\end{tikzpicture} & \n \n\t\t\t\\begin{tikzpicture}[spy using outlines={circle,yellow,magnification=3,size=4.5cm, connect spies, every spy on node\/.append style={ultra thick}}]\n\t\t\t\t\\node {\\includegraphics[width=6cm, height=6cm]{figures\/blowup\/df_api_layercam.png}};\n\t\t\t\t\\spy[red] on (-0.1,-0.4) in node [right] at (2.,-4);\n\t\t\t\t\\spy[yellow] on (-1.4,-1.3) in node [right] at (-5,-5);\n\t\t\t\t\\spy[blue] on (1.6,0.8) in node [right] at (2,4);\n\t\t\t\\end{tikzpicture}\\\\\n \n\t\t\t\\huge{DF} &\\huge{MCX-API (RGB)} & \\huge{API} \\\\\n\n \t\t\t\\begin{tikzpicture}[spy using outlines={circle,yellow,magnification=3,size=4.5cm, connect spies, every spy on node\/.append style={ultra thick}}]\n\t\t\t\t\\node {\\includegraphics[width=6cm, height=6cm]{figures\/blowup\/f2f_frame181.png}};\n\t\t\t\t\\spy[red] on (-1.,1.) in node [left] at (-2.7,2.);\n\t\t\t\t\\spy[yellow] on (0.,-1.3) in node [right] at (-5,-5);\n\t\t\t\t\\spy[blue] on (1.6,0.8) in node [right] at (2,4);\n\t\t\t\\end{tikzpicture} & \n\t\t\t\n\t\t\t\\begin{tikzpicture}[spy using outlines={circle,yellow,magnification=3,size=4.5cm, connect spies, every spy on node\/.append style={ultra thick}}]\n\t\t\t\t\\node {\\includegraphics[width=6cm, height=6cm]{figures\/blowup\/f2f_mcxapi_layercam.png}};\n\t\t\t\t\\spy[red] on (-1.,1.) in node [left] at (-2.7,2.);\n\t\t\t\t\\spy[yellow] on (0.,-1.3) in node [right] at (-5,-5);\n\t\t\t\t\\spy[blue] on (1.6,0.8) in node [right] at (2,4);\n\t\t\t\\end{tikzpicture} & \n \n\t\t\t\\begin{tikzpicture}[spy using outlines={circle,yellow,magnification=3,size=4.5cm, connect spies, every spy on node\/.append style={ultra thick}}]\n\t\t\t\t\\node {\\includegraphics[width=6cm, height=6cm]{figures\/blowup\/f2f_api_layercam.png}};\n\t\t\t\t\\spy[red] on (-1.,1.) 
in node [left] at (-2.7,2.);\n\t\t\t\t\\spy[yellow] on (0.,-1.3) in node [right] at (-5,-5);\n\t\t\t\t\\spy[blue] on (1.6,0.8) in node [right] at (2,4);\n\t\t\t\\end{tikzpicture}\\\\\n \n\t\t\t\\huge{F2F} &\\huge{MCX-API (RGB)} & \\huge{API} \n \n\t\t\\end{tabular}\n\t}\n\t\\caption{Blow-up of activation maps from the LayerCAM analysis of MCX-API(RGB) and the base architecture API-Net on DF and F2F faces.}\n\t\\label{fig:blowup-activation}\n\\end{figure*}\n\n\\begin{figure}[htp]\n    \\centering\n    \\subfigure[Visualization of the last block of the exit flow of MCX-API (RGB).]{\\label{fig:visualization-MCXapi}\\includegraphics[width=0.95\\linewidth]{figures\/visualization_MCXapi.pdf}}\n    \\subfigure[Visualization of the last block of the API-Net.]{\\label{fig:visualization-api}\\includegraphics[width=0.95\\linewidth]{figures\/visualization_api.pdf}}\n    \n    \\caption{Visualization of the last layer of the MCX-API (RGB) and API networks. We utilize Grad-CAM~\\cite{selvaraju2017grad}, Grad-CAM++~\\cite{chattopadhay2018grad}, HiResCAM~\\cite{draelos2020use}, LayerCAM~\\cite{jiang2021layercam} and XGradCAM~\\cite{fu2020axiom} as our visualization tools. For a larger figure, please refer to \\cref{fig:activation-map-large}.}\n    \\label{fig:activation-map}\n\\end{figure}\n\n\\subsection{Data Visualizations With t-SNE}\nThe t-SNE 2D maps of the feature vectors are illustrated in \\cref{fig:tsne}. We compare the t-SNE of our MCX-API with that of the base architecture API-Net.\nWe notice that the five classes of Real, DF, F2F, FS, and NT are well separated into five distinct clusters for MCX-API, in contrast to the base architecture API-Net. There is an unclear boundary between Real and NT, shown in the blue box for MCX-API. This overlap may explain the relatively lower accuracy obtained on NT. There are also small overlapping areas between DF\/NT (yellow\/purple) and Real\/F2F (red\/blue). We further notice a few Real samples (red dots) distributed in each fake class, which leads to errors of our proposed network.\n\n\\subsection{Visualizing Decisions With Attention Maps}\nWe apply different class activation visualization methods to the last layer of the proposed network to analyze MCX-API, as shown in \\cref{fig:activation-map}. For comparison, we also show the visualization of the base API-Net. Precisely, we adopt Grad-CAM~\\cite{selvaraju2017grad}, Grad-CAM++~\\cite{chattopadhay2018grad}, HiResCAM~\\cite{draelos2020use}, LayerCAM~\\cite{jiang2021layercam} and XGradCAM~\\cite{fu2020axiom}. The visualization results are provided in \\cref{fig:visualization-MCXapi} for our proposed MCX-API and in \\cref{fig:visualization-api} for API-Net.\n\n\nThe activation maps for Output Real are on the left part with a green background, and the activation maps for Output Fake are on the right part with a pink background. The rows from top to bottom are the visualizations for the five classes of Real, DF, F2F, FS, and NT, respectively. We observe that real images gain more attention within Output Real (left part) than Output Fake (right part). In contrast, fake images obtain more attention within Output Fake than Output Real. This explains the ability of our network to detect Deepfakes.\n\nWe further blow up the LayerCAM activation maps for DF and F2F images in \\cref{fig:blowup-activation}. From this visual analysis, it is evident that MCX-API focuses more on the facial region, such as the eyes and the mouth. For instance, double eyebrows are found in the DF image (blue circle), and MCX-API pays more attention to this region than API.\n\n
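The attention maps in \\cref{fig:activation-map} and \\cref{fig:blowup-activation} are produced with standard class-activation-mapping tools. For the reader's convenience, the following minimal PyTorch sketch shows the core Grad-CAM computation on which all of the above visualization methods build; it is a simplified illustration for a single image and a single target layer, not the exact code used to generate our figures.\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef grad_cam(model, x, target_layer, class_idx):\n    # x: input tensor of shape (1, C, H, W); target_layer: a module inside model.\n    feats, grads = [], []\n    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))\n    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))\n    model.eval()\n    score = model(x)[0, class_idx]   # assumes model(x) returns logits (1, num_classes)\n    model.zero_grad()\n    score.backward()\n    h1.remove(); h2.remove()\n    A, dA = feats[0], grads[0]                      # feature maps and their gradients\n    w = dA.mean(dim=(2, 3), keepdim=True)           # global-average-pooled gradients\n    cam = F.relu((w * A).sum(dim=1, keepdim=True))  # weighted sum over channels\n    cam = F.interpolate(cam, size=x.shape[-2:], mode='bilinear', align_corners=False)\n    # (1, 1, H, W) heat map normalized to [0, 1]\n    return (cam - cam.min()) \/ (cam.max() - cam.min() + 1e-8)\n\\end{verbatim}\n\n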
\\section{Limitations of our work}\nWe notice in \\cref{tab:bosc-ffpp} that increasing the number of color spaces brings no apparent improvement in BOSC accuracy. We assume that there is redundant information among the channels, and further work will focus on finding the most helpful color information to extend our proposed approach. We also observe that no single configuration performs reasonably well on all of the unseen data, which remains the biggest issue in the Deepfake detection field. Introducing other information, such as temporal data and audio, would be a promising direction, as more inconsistencies could be found by extending our work to a video-based approach.\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nThere is an imperative need for a generalized Deepfake detection method to deal with the newer manipulation methods in visual media. In this paper, we proposed to apply the Multi-Channel Xception Attentive Pairwise Interaction (MCX-API) network to the Deepfake detection field in a fine-grained manner. We conducted experiments on the publicly available FaceForensics++ dataset, and our approach obtained better performance than the SOTA approaches on both seen and unseen manipulation types. We obtain 98.48\\% BOSC accuracy on the FF++ dataset and 90.87\\% BOSC accuracy on the Celeb-DF dataset, suggesting a promising direction for the generalization of Deepfake detection. Comprehensive ablation studies have been conducted to understand our algorithm better. We further explain the performance of our network by using t-SNE and attention maps. The results show that Deepfakes are well separated from real videos. While our approach points towards a generalized detection mechanism, we have listed certain limitations that can pave the way for future work. \n\n{\\small\n\\bibliographystyle{ieee_fullname}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\\label{intro}\n\n\n\n\\subsection{The aim of this article}\n\n\\subsubsection{}\\label{history}\n\nLet $A$ be an abelian variety defined over a number field $k$.\n\nThen, by a celebrated theorem of Mordell and Weil, the abelian group that is formed by the set $A(k)$ of points of $A$ with coefficients in $k$ is finitely generated.\n\nIt is also conjectured that the Hasse-Weil $L$-series $L(A,z)$ for $A$ over $k$ has a meromorphic continuation to the entire complex plane and satisfies a functional equation with central point $z=1$ and, in addition, that the Tate-Shafarevich group $\\sha(A_k)$ of $A$ over $k$ is finite.\n\nAssuming these conjectures to be true, the Birch and Swinnerton-Dyer Conjecture concerns the leading coefficient $L^\\ast(A,1)$ in the Taylor expansion of $L(A,z)$ at $z=1$. 
This conjecture was originally formulated for elliptic curves in a series of articles of Birch and Swinnerton-Dyer in the 1960's (see, for example, \\cite{birch}) and then reinterpreted and extended to the setting of abelian varieties by Tate in \\cite{tate} to give the following prediction.\n\\medskip\n\n\\noindent{}{\\bf Conjecture} (Birch and Swinnerton-Dyer)\n\\begin{itemize}\n\\item[(i)] The order of vanishing of $L(A,z)$ at $z=1$ is equal to the rank of $A(k)$.\n\\item[(ii)] One has\n\\begin{equation}\\label{bsd equality} L^\\ast(A,1) = \\frac{\\Omega_A\\cdot R_A\\cdot {\\rm Tam}(A)}{\\sqrt{D_k}^{{\\rm dim}(A)}\\cdot |A(k)_{\\rm tor}|\\cdot |A^t(k)_{\\rm tor}|}\\cdot |\\sha(A_k)|.\\end{equation}\n\\end{itemize}\n\\medskip\n\nHere $\\Omega_A$ is the canonical period of $A$, $R_A$ the discriminant of the canonical N\\'eron-Tate height pairing on $A$, ${\\rm Tam}(A)$ the product over the (finitely many) places of $k$ at which $A$ has bad reduction of local `Tamagawa numbers', $D_k$ the absolute value of the discriminant of $k$, $A(k)_{\\rm tor}$ the torsion subgroup of $A(k)$ and $A^t$ the dual abelian variety of $A$.\n\nIt should be noted at the outset that, even if one assumes $L(A,z)$ can be meromorphically continued to $z=1$ and $\\sha(A_k)$ is finite, so that both sides of (\\ref{bsd equality}) make sense, the predicted equality is itself quite remarkable.\n\nFor instance, the $L$-series is defined via an Euler product over places of $k$ so that its leading coefficient at $z=1$ is intrinsically local and analytic in nature whilst the most important terms on the right hand side of (\\ref{bsd equality}) are both global and algebraic in nature.\n\nIn addition, and more concretely, whilst isogenous abelian varieties give rise to the same $L$-series, the individual terms that occur in the `Euler characteristic' on the right hand side of (\\ref{bsd equality}) are not themselves isogeny invariant and it requires a difficult theorem of Tate to show that the validity of (\\ref{bsd equality}) is invariant under isogeny.\n\nFor these, and many other, reasons, the above conjecture, which in the remainder of the introduction we abbreviate to ${\\rm BSD}(A_k)$, has come to be regarded as one of the most important problems in arithmetic geometry today.\n\nNevertheless, there are various natural contexts in which it seems likely that ${\\rm BSD}(A_k)$ does not encompass the full extent of the interplay between the analytic and algebraic invariants of $A$. 
Moreover, a good understanding of the finer connections that can arise could lead to much greater insight into concrete questions such as, for example, the growth of ranks of Mordell-Weil groups in extensions of number fields.\n\nFor instance, if $A$ has a large endomorphism ring, then it seems reasonable to expect there to be a version of ${\\rm BSD}(A_k)$ that reflects the existence of such endomorphisms.\n\nThe earliest example of such a refinement appears to be Gross's formulation in \\cite{G-BSD} of an equivariant\nBirch and Swinnerton-Dyer conjecture for elliptic curves $A$ with complex multiplication by the maximal order $\\mathcal{O}$ of an imaginary quadratic field.\n\nThis conjecture incorporates natural refinements of both ${\\rm BSD}(A_k)$(i) and ${\\rm BSD}(A_k)$(ii) and is supported by numerical evidence obtained by Gross and Buhler in \\cite{GrossBuhler} and by theoretical evidence obtained by Rubin in \\cite{rubin2}.\n\nIn a different direction one can study the leading coefficients of the Hasse-Weil-Artin $L$-series $L(A,\\psi,z)$ that are obtained from $A$ and finite dimensional complex characters $\\psi$ of the absolute Galois group $G_k$ of $k$.\n\nIn this setting, general considerations led Deligne and Gross to the expectation that for any finite dimensional character $\\chi$ of $G_k$ over a number field $E$ the order of vanishing ${\\rm ord}_{z=1}L(A,\\sigma\\circ\\chi,z)$ at $z=1$ of $L(A,\\sigma\\circ\\chi,z)$ should be independent of the choice of an embedding $\\sigma: E \\to \\CC$. This prediction in turn led them naturally to the conjecture that for each complex character $\\psi$ one should have\n\\begin{equation}\\label{dg equality} {\\rm ord}_{z=1}L(A,\\psi,z) = {\\rm dim}_\\CC\\bigl( \\Hom_{\\CC[\\Gal(F\/k)]}(V_\\psi,\\CC\\otimes_\\ZZ A^t(F))\\bigr)\\end{equation}\n(cf. \\cite[p.127]{rohrlich}). Here $F$ is any finite Galois extension of $k$ such that $\\psi$ factors through the projection $G_k \\to \\Gal(F\/k)$ and $V_\\psi$ is any $\\CC[\\Gal(F\/k)]$-module of character $\\psi$ (see also Conjecture \\ref{conj:ebsd}(ii) below).\n\nThis prediction is a natural generalization of ${\\rm BSD}(A_k)$(i) and is known to have important, and explicit, consequences for ${\\rm ord}_{z=1}L(A,\\psi,z)$ (see, for example, the recent article \\cite{bisatt} of Bisatt and Dokchitser).\n\nIn addition, for rational elliptic curves $A$ and characters $\\psi$ for which $L(A,\\psi,1)$ does not vanish, there is by now strong evidence for the conjecture of Deligne and Gross.\n\nSuch evidence has been obtained by Bertolini and Darmon \\cite{BD} in the setting of ring-class characters of imaginary quadratic fields, by Kato \\cite{kato} in the setting of linear characters of $\\QQ$ (in this regard see also Rubin \\cite[\\S8]{rubin}), by Bertolini, Darmon and Rotger \\cite{bdr} for odd, irreducible two-dimensional\nArtin representations of $\\QQ$ and by Darmon and Rotger \\cite{dr} for certain self-dual Artin representations of $\\QQ$ of dimension at most four.\n (We also recall in this context that, in the setting of \\cite{bdr}, recent work of Kings, Loeffler and Zerbes \\cite{klz} proves the finiteness of components of the $p$-primary part of the Tate-Shafarevich group of $A$ over $F$ for a large set of primes $p$.)\n\nWrite $\\mathcal{O}_\\psi$ for the ring of integers of the number field generated by the values of $\\psi$. 
Then, as a refinement of the conjectural equality (\\ref{dg equality}), and a natural analogue of ${\\rm BSD}(A_k)$(ii) relative to $\\psi$, it would be of interest to understand a precise conjectural formula in terms of suitable `$\\psi$-components' of the standard algebraic invariants of $A$ for the fractional $\\mathcal{O}_\\psi$-ideal that is generated by the product of the leading coefficient of $L(A,\\psi,z)$ at $z=1$ and an appropriate combination of `$\\psi$-isotypic' periods and regulators.\n\nSuch a formula might also reasonably be expected to lead to concrete predictions concerning the behaviour of natural arithmetic invariants attached to the abelian variety.\n\nFor example, in the recent article of Dokchitser, Evans and Wiersema \\cite{vdrehw}, inexplicit versions of such a formula have been shown to lead, under suitable hypotheses, to predictions concerning the non-triviality of Tate-Shafarevich groups and the existence of points of infinite order on $A$ over extension fields of $k$.\n\nHowever, the formulation of an explicit such conjecture has hitherto been straightforward only if one avoids the $p$-primary support of such fractional ideals for primes $p$ that divide the degree of the extension of $k$ that corresponds to the kernel of $\\psi$.\n\nIn addition, such a conjectural formula would not itself take account of any connections that might exist between the leading coefficients of $L(A,\\psi,z)$ for characters $\\psi$ that are not in the same orbit under the action of $G_\\QQ$.\n\nIn this direction, Mazur and Tate \\cite{mt} have in the special case that $k =\\QQ$, $A$ is an elliptic curve and $\\psi$ is linear predicted an explicit family of such congruence relations that refine ${\\rm BSD}(A_k)$(ii).\n\nThese congruences rely heavily on an explicit formula in terms of modular symbols for the values $L(A,\\psi,1)$ for certain classes of tamely ramified Dirichlet characters $\\psi$ that Mazur had obtained in \\cite{mazur79}. 
They are expressed in terms of the discriminants of integral group-ring valued pairings constructed by using the geometrical theory of bi-extensions and are closely linked to earlier work of Mazur, Tate and Teitelbaum in \\cite{mtt} regarding the formulation of $p$-adic analogues of ${\\rm BSD}(A_k)$(ii).\n\nThe conjecture of Mazur and Tate has in turn motivated much subsequent work and the formulation of several new conjectures involving the values $L(A,\\psi,1)$.\n\nSuch conjectures include the congruence relations that are formulated by Bertolini and Darmon in \\cite{bert} and \\cite{bert2} and involve a natural notion of `derived height pairings' and links to the Galois structure of Selmer modules that are predicted by Kurihara in \\cite{kuri}.\n\nIt has, however, proved to be much more difficult to formulate explicit refinements of ${\\rm BSD}(A_k)$(ii) that involve congruence relations between the values of derivatives of Hasse-Weil-Artin $L$-series.\n\nIn this direction, Darmon \\cite{darmon} has used the theory of Heegner points to formulate an analogue of the Mazur-Tate congruence conjecture for the first derivatives of Hasse-Weil-Artin $L$-series that arise from rational elliptic curves and ring class characters of imaginary quadratic fields.\n\nHowever, aside from this example, the only other such explicit study that we are aware of is due to Kisilevsky and Fearnley who in \\cite{kisilevsky} and \\cite{kisilevsky2} formulated, and studied numerically, conjectures for the `algebraic parts' of the leading coefficients of Hasse-Weil-Artin $L$-series that arise from rational elliptic curves and certain restricted families of Dirichlet characters.\n\n\\subsubsection{}In a more general setting, the formulation by Bloch and Kato \\cite{bk} of the `Tamagawa number conjecture' for the motive $h^1(A_k)(1)$ offers a different approach to the formulation of ${\\rm BSD}(A_k)$.\n\nIn particular, the subsequent re-working of this conjecture by Fontaine and Perrin-Riou in \\cite{fpr}, and its `equivariant' extension to motives with coefficients, as described by Flach and the first author in \\cite{bufl99}, in principle provides a systematic means of studying refined versions of ${\\rm BSD}(A_k)$.\n\nIn this setting it is known, for example, that the conjectures formulated by Gross in \\cite{G-BSD} are equivalent to the equivariant Tamagawa number conjecture for the motive $h^1(A_k)(1)$ with respect to the coefficient ring $\\mathcal{O}$ (cf. \\cite[\\S4.3, Rem. 10]{bufl99}).\n\nTo study Hasse-Weil-Artin $L$-series it is convenient to fix a finite Galois extension $F$ of $k$ of group $G$.\n\nThen the equivariant Tamagawa number conjecture for $h^1(A_{F})(1)$ with respect to the integral group ring $\\ZZ[G]$ is formulated as an equality of the form\n\\begin{equation}\\label{etnc eq} \\delta(L^\\ast(A_{F\/k},1)) = \\chi(h^1(A_{F})(1),\\ZZ[G]).\\end{equation}\n\\noindent{}Here $\\delta$ is a canonical homomorphism from the unit group $\\zeta(\\br [G])^\\times$ of the centre of $\\RR[G]$ to the relative algebraic $K$-group of the ring extension $\\bz [G] \\subseteq \\br\n[G]$ and $L^\\ast(A_{F\/k},1)$ is an element of $\\zeta(\\RR[G])^\\times$ that is defined using the leading coefficients $L^\\ast(A,\\psi,1)$ for each irreducible complex character $\\psi$ of $G$. 
Also, $\\chi(h^1(A_{F})(1),\\ZZ[G])$ is an adelic Euler characteristic that is constructed by\ncombining virtual objects (in the sense of Deligne) for each prime $p$ of the compactly supported \\'etale cohomology of the $p$-adic Tate modules of $A$ together with the Neron-Tate height pairing and period isomorphisms for $A$ over $F$, as well as an analysis of the finite support cohomology groups introduced by Bloch and Kato.\n\nThe equality (\\ref{etnc eq}) is known to constitute a strong and simultaneous refinement of the conjectures ${\\rm BSD}(A_L)$ as $L$ ranges over the intermediate fields of $F\/k$.\n\nHowever, the rather technical, and inexplicit, nature of this equality means that it has proved to be very difficult to interpret in a concrete way, let alone to verify either theoretically or numerically.\n\nFor example, in the technically most demanding case in which $A$ has strictly positive rank over $F$ it has still only been verified numerically in a small number of cases by Navilarekallu in \\cite{tejaswi}, by Bley in \\cite{Bley1} and \\cite{Bley2}, by Bley and the second author in \\cite{bleymc} and by Wuthrich and the present authors in \\cite{bmw}.\n\nIn addition, the only theoretical evidence for the conjecture in this setting is its verification for certain restricted dihedral families of the form $F\/\\QQ$ where $F$ is an unramified abelian extension of an imaginary quadratic field (this is the main result of \\cite{bmw} and relies on the theorem of Gross and Zagier). In particular, the restriction on ramification that the latter result imposes on $F\/k$ means that many of the more subtle aspects of the conjecture are avoided.\n\nTo proceed we note that the conjectural equality (\\ref{etnc eq}) naturally decomposes into `components', one for each rational prime $p$, in a way that will be made precise in Appendix \\ref{consistency section}, and that each such $p$-component (which for convenience we refer to as `(\\ref{etnc eq})$_p$' in the remainder of this introduction) is itself of some interest.\n\nFor example, if $A$ has good ordinary reduction at $p$, then the compatibility result proved by Venjakob and the first named author in~\\cite[Th. 8.4]{BV2} shows, modulo the assumed non-degeneracy of classical $p$-adic height pairings, that the equality (\\ref{etnc eq})$_p$ is a consequence of the main conjecture of\nnon-commutative Iwasawa theory for $A$, as formulated by Coates et al in \\cite{cfksv} with respect to any compact $p$-adic Lie extension of $k$ that contains the cyclotomic $\\ZZ_p$-extension of $F$.\n\nThis means that the study of (\\ref{etnc eq}), and of its more explicit consequences, is directly relevant to attempts to properly understand the content of the main conjecture of non-commutative Iwasawa theory. It also shows that the $p$-adic congruence relations that are proved numerically by Dokchitser and Dokchitser in \\cite{dokchitsers} are related to the equality (\\ref{etnc eq}).\n\nTo study congruences in a more general setting we fix an embedding of $\\RR$ into the completion $\\CC_p$ of the algebraic closure of $\\QQ_p$. 
Then the long exact sequence of relative $K$-theory implies that the equality (\\ref{etnc eq})$_p$ determines the image of $L^\\ast(A_{F\/k},1)$ in $\\zeta(\\CC_p[G])^\\times$ modulo the image under the natural reduced norm map of the Whitehead group $K_1(\\ZZ_p[G])$ of $\\ZZ_p[G]$.\n\nIn view of the explicit description of the latter image that is obtained by Kakde in \\cite{kakde} or, equivalently, by the methods of Ritter and Weiss in \\cite{rw}, this means that (\\ref{etnc eq}) is essentially equivalent to an adelic family of (albeit inexplicit) congruence relations between the leading coefficients $L^\\ast(A,\\psi, 1)$, suitably normalised by a product of explicit equivariant regulators and periods, as $\\psi$ varies over the set of irreducible complex characters of $G$.\n\nThis is also the reason why the study of congruence relations between suitably normalised derivatives of Hasse-Weil-Artin $L$-series should be related to the construction of `$p$-adic $L$-functions' in the setting of non-commutative Iwasawa theory.\n\n\\subsubsection{}The main aim of the present article is then to develop general techniques that will allow one to understand the above congruence relations in a more explicit way, and in a much wider setting, than has previously been possible.\n\nIn this way we are led to the formulation (in Conjecture \\ref{conj:ebsd}) of a seemingly definitive refinement of the Birch and Swinnerton-Dyer formula (\\ref{bsd equality}) in the setting of Hasse-Weil-Artin $L$-series. We then derive a range of concrete consequences of this conjecture that are amenable to explicit investigation, either theoretically or numerically, in cases that (for the first time) involve a thoroughgoing mixture of difficult archimedean considerations that are related to refinements of the conjectural equality (\\ref{dg equality}) of Deligne and Gross, and of delicate $p$-adic congruence relations that are related to aspects of non-commutative Iwasawa theory.\n\nIn particular, we shall show that this family of predictions both refines and extends the explicit refinements of (\\ref{dg equality}) that were recalled in \\S\\ref{history}. 
It also gives insight into the more subtle aspects of the conjectural equality (\\ref{etnc eq}), and hence (via the results of \\cite{BV2}) of the main conjecture of non-commutative Iwasawa theory, that go well beyond the sort of concrete congruence conjectures that have been considered previously in connection to the central conjectures of either \\cite{bufl99} or \\cite{cfksv}.\n\nWe also believe that some of the general results obtained here can help contribute towards establishing a proper framework for the subsequent investigation of these important questions.\n\n\n\n\\subsection{The main contents}\n\n\\subsubsection{}\n\n\n\n\n\n\n\nAs a key part of our approach, we shall first associate two natural notions of Selmer complex to the $p$-adic Tate module of an abelian variety.\n\n The `classical Selmer complex' that we define in \\S\\ref{selmer section} is closely related to the `finite support cohomology' that was introduced by Bloch and Kato in \\cite{bk} and, as a result, its cohomology can be explicitly described in terms of Mordell-Weil groups and Selmer groups.\n\n Nevertheless, this complex is not well-suited to certain $K$-theoretical calculations since it is not always `perfect' over the relevant $p$-adic group ring.\n\n For this reason we shall in \\S\\ref{selmer section} also associate a notion of `Nekov\\'a\\v r-Selmer complex' to certain choices of $p$-adic submodules of the groups of semi-local points.\n\n This construction is motivated by the general approach of Nekov\\'a\\v r in \\cite{nek} and gives a complex that is always perfect and has cohomology that can be described in terms of the Selmer modules studied by Mazur and Rubin in \\cite{MRkoly}.\n\n Such Nekov\\'a\\v r-Selmer complexes will then play an important role in several subsequent $K$-theoretical computations.\n\nIn \\S\\ref{selmer section} we also explain how a suitably compatible family over all primes $p$ of $p$-adic modules of semi-local points, or a `perfect Selmer structure' for $A$ and $F\/k$ as we shall refer to it, gives rise to a canonical perfect complex of $G$-modules.\n\nWe shall then show that such structures naturally arise from a choice of global differentials and compute the cohomology groups of the associated Selmer complexes.\n\nIn \\S\\ref{ref bsd section} we formulate the Birch and Swinnerton-Dyer Conjecture for the variety $A$ and Galois extension $F$ of $k$, or `${\\rm BSD}(A_{F\/k})$' as we shall abbreviate it.\n\nUnder the assumed validity of an appropriate case of the Generalized Riemann Hypothesis, we shall first associate a $K_1$-valued leading coefficient element to the data $A$ and $F\/k$.\n\nAfter fixing a suitable choice of global differentials we can also associate a $K_1$-valued `period' element to $A$ and $F\/k$.\n\nThe central conjecture of this article then asserts that the image under the natural connecting homomorphism of the quotient of these $K_1$-valued invariants is equal to the Euler characteristic in a relative $K$-group of a pair comprising a Nekov\\'a\\v r-Selmer complex constructed from the given set of differentials and the classical N\\'eron-Tate height pairing for $A$ over $F$.\n\nThis conjectural equality also involves a small number of `Fontaine-Messing' correction terms that we use to\ncompensate for the choice of a finite set of places of $k$ that is necessary to state ${\\rm BSD}(A_{F\/k})$.\n\n\nIt can be directly shown that this conjecture recovers ${\\rm BSD}(A_{k})$ in the case that $F=k$, is consistent in several key respects and has good 
functorial properties under change of Galois extension $F\/k$.\n\nThe conjecture can also be interpreted as a natural analogue of the `refined Birch and Swinnerton-Dyer Conjecture' for abelian varieties over global function fields that was recently formulated, and in some important cases proved, by Kakde, Kim and the first author in \\cite{bkk}.\n\nIn \\S\\ref{k theory period sect} and \\S\\ref{local points section} we shall then concentrate on proving several technical results that will subsequently help us to derive explicit consequences from the assumed validity of ${\\rm BSD}(A_{F\/k})$.\n\nIn \\S\\ref{k theory period sect} these results include establishing the precise link between $K_1$-valued periods, classical periods, Galois resolvents and suitably modified Galois-Gauss sums.\n\n\nIn \\S\\ref{local points section} we shall prepare for subsequent $K$-theoretical computations with classical Selmer complexes by studying the cohomological-triviality of local points on ordinary abelian varieties.\n\nIn particular, we shall use these results to define a natural $K$-theoretical invariant of the twist matrix of such a variety.\n\n We shall then give a partial computation of these invariants and also explain how, in the case of elliptic curves, the (assumed) compatibility under unramified twist of suitable cases of the local epsilon constant conjecture (as formulated in its most general form by Fukaya and Kato in \\cite{fukaya-kato}) leads to an explicit description of the invariants.\n\nIn \\S\\ref{tmc} we shall impose several mild hypotheses on both the reduction types of $A$ and the ramification invariants of $F\/k$ that together ensure that the classical Selmer complex defined in \\S\\ref{selmer section} is perfect over the relevant $p$-adic group ring.\n\nWorking under these hypotheses, we shall then combine the results of \\S\\ref{k theory period sect} and \\S\\ref{local points section} together with a significant strengthening of the main computations that are made by Wuthrich and the present authors in \\cite{bmw} to derive a more explicit interpretation of ${\\rm BSD}(A_{F\/k})$.\n\nThese results are in many respects the technical heart of this article and rely heavily on the subtle, and still for the most part conjectural, arithmetic properties of wildly ramified Galois-Gauss sums.\n\nThe $K$-theoretical computations in \\S\\ref{tmc} also constitute a natural equivariant refinement and generalisation of several earlier computations in this area including those that are made by Venjakob in~\\cite[\\S3.1]{venjakob}, by the first author in \\cite{ltav}, by Bley in~\\cite{Bley1}, by Kings in~\\cite[Lecture 3]{kings} and by the second author in \\cite{dmc}.\n\nIn \\S\\ref{ecgs} we shall discuss concrete consequences of ${\\rm BSD}(A_{F\/k})$ concerning both the explicit Galois structure of Selmer complexes and modules and the formulation of precise refinements of the Deligne-Gross Conjecture.\n\nIn particular, in this section we shall address a problem explicitly raised by Dokchitser, Evans and Wiersema in \\cite{vdrehw} (see, in particular Remark \\ref{evans}).\n\nIn \\S\\ref{congruence sec} and \\S\\ref{mrsconjecturesection} we shall then specialise to consider abelian extensions $F\/k$ and combine our approach with general techniques recently developed by Sano, Tsoi and the first author in \\cite{bst} in order to derive from ${\\rm BSD}(A_{F\/k})$ several explicit congruence relations between the suitably normalized derivatives of Hasse-Weil-Artin $L$-series.\n\nIn 
\\S\\ref{comparison section} we then prove that the pairing constructed by Mazur and Tate in \\cite{mt} using the theory of bi-extensions coincides with the inverse of a canonical `Nekov\\'a\\v r height pairing' that we define by using Bockstein homomorphisms arising naturally from Galois descent considerations.\n\nThis comparison result relies, in part, on earlier results of Bertolini and Darmon \\cite{bert2} and of Tan \\cite{kst} and is, we believe, of some independent interest.\n\nThe relations that are discussed in \\S\\ref{congruence sec} and \\S\\ref{mrsconjecturesection} often take a very explicit form (see, for example, the discussion in \\S\\ref{explicit examples intro} below) and, when combined with the results of \\S\\ref{comparison section}, can be seen to extend and refine the earlier conjectures of Mazur and Tate \\cite{mt} and Darmon \\cite{darmon0} amongst others.\n\nThis approach also shows that for certain cyclic and dihedral extensions $F\/k$ the key formula that is predicted by ${\\rm BSD}(A_{F\/k})$ is equivalent to the validity of a family of explicit congruence relations that simultaneously involve both the N\\'eron-Tate and Mazur-Tate height pairings.\n\nIn this way we shall for the first time render refined versions of the Birch and Swinnerton-Dyer Conjecture accessible to numerical verification in cases in which they involve an intricate mixture of both archimedean phenomena and delicate $p$-adic congruences.\n\nIn particular, in \\S\\ref{mrsconjecturesection} we give details of several such numerical verifications of the `$p$-component' of ${\\rm BSD}(A_{F\/k})$ for primes $p$ that divide the degree of $F\/k$ that Werner Bley has been able to perform by using this approach (see, in particular, Remark \\ref{bleyexamples rem} and Examples \\ref{bleyexamples}).\n\nIn \\S\\ref{mod sect} and \\S\\ref{HHP} we shall then specialise to consider applications of our general approach in two classical settings.\n\nFirstly, in \\S\\ref{mod sect} we consider rational elliptic curves over fields that are both abelian and tamely ramified over $\\QQ$. In this case\nwe can use the theory of modular symbols to give an explicit reinterpretation of ${\\rm BSD}(A_{F\/\\QQ})$ and thereby describe precise conditions under which the conjecture is valid.\n\nAs a concrete application of this result we then use it to deduce from Kato's theorem \\cite{kato} that for every natural number $n$ there are infinitely many primes $p$ and, for each such $p$, infinitely many abelian extensions $F\/\\QQ$ for which the $p$-component of ${\\rm BSD}(A_{F\/\\QQ})$ is valid whilst the degree and discriminant of $F\/\\QQ$ are each divisible by at least $n$ distinct primes and the Sylow $p$-subgroup of $\\Gal(F\/\\QQ)$ has exponent at least $p^n$ and rank at least $n$. 
This result is a considerable strengthening of the main result of Bley in \\cite{Bley3}.\n\nThen in \\S\\ref{HHP} we consider abelian extensions of imaginary quadratic fields and elliptic curves that satisfy the Heegner hypothesis.\n\nThe main result of this section is a significant extension of the main result of Wuthrich and the present authors in \\cite{bmw} and relies on Zhang's generalization of the theorem of Gross and Zagier relating first derivatives of Hasse-Weil-Artin $L$-series to the heights of Heegner points.\n\nIn this section we shall also point out an inconsistency in the formulation of a conjecture of Bradshaw and Stein in \\cite{BS} regarding Zhang's formula and offer a possible correction.\n\n\nThe article also contains two appendices. In Appendix \\ref{consistency section} we use techniques developed by Wuthrich and the present authors in \\cite{bmw} to explain the precise link between our central conjecture ${\\rm BSD}(A_{F\/k})$ and the conjectural equality (\\ref{etnc eq}).\n\nThis technical result may perhaps be of interest in its own right but also allows us to deduce from the general theory of equivariant Tamagawa numbers that our formulation of ${\\rm BSD}(A_{F\/k})$ is consistent in several key respects.\n\nFinally, in Appendix \\ref{exp rep section} we make explicit certain standard constructions in homological algebra relating to Poitou-Tate duality and also describe a general construction of algebraic height pairings from Bockstein homomorphisms.\n\nThe results of Appendix \\ref{exp rep section} are for the most part routine but nevertheless play an important role in the arguments that we use to compare height pairings in \\S\\ref{comparison section}.\n\n\\subsubsection{}\\label{explicit examples intro}To end the introduction we shall give some concrete examples of the sort of congruence predictions that result from our approach (all taken from the more general material given in \\S\\ref{congruence sec} and \\S\\ref{mrsconjecturesection}).\n\nTo do this we fix a finite abelian extension of number fields $F\/k$ of group $G$ and an elliptic curve $A$ over $k$.\n\nWe also fix an odd prime $p$ that does not divide the order of the torsion subgroup of $A(F)$ and an isomorphism of fields $\\CC\\cong \\CC_p$ (that we do not explicitly indicate in the sequel). We write $\\widehat{G}$ for the set of irreducible complex characters of $G$.\n\nThe first prediction concerns the values at $z=1$ of Hasse-Weil-Artin $L$-series. 
To state it we set $F_p := \\QQ_p\\otimes_\\QQ F$ and write ${\\rm log}_{A,p}$ for the formal group logarithm of $A$ over $\\QQ_p\\otimes_\\QQ k$ and $\\Sigma(k)$ for the set of embeddings $k \\to \\CC$.\n\nFor each subset $x_\\bullet = \\{x_{\\sigma}: \\sigma \\in \\Sigma(k)\\}$ of $A(F_p)$ and each character $\\psi$ in $\\widehat{G}$ we then define a `$p$-adic logarithmic resolvent' by setting\n\n\\begin{equation}\\label{log resol abelian} \\mathcal{LR}_\\psi(x_\\bullet) := {\\rm det}\\left(\\bigl(\\sum_{g \\in G} \\hat \\sigma(g^{-1}({\\rm log}_{A,p}(x_{\\sigma'})))\\cdot \\psi(g)\\bigr)_{\\sigma,\\sigma' \\in \\Sigma(k)}\\right),\\end{equation}\nwhere we fix an ordering of $\\Sigma(k)$ and an extension $\\hat \\sigma$ to $F$ of each $\\sigma$ in $\\Sigma(k)$.\n\nNow if $S$ is any finite set of places of $k$ that contains all archimedean places, all that ramify in $F$, all at which $A$ has bad reduction and all above $p$, and $L_{S}(A,\\check\\psi,z)$ is the $S$-truncated $L$-series attached to $A$ and the contragredient $\\check\\psi$ of $\\psi$, then our methods predict that for any $x_\\bullet$ the sum\n\n\\begin{equation}\\label{first predict} \\sum_{\\psi \\in \\widehat{G}}\\frac{L_{S}(A,\\check\\psi,1)\\cdot \\mathcal{LR}_\\psi(x_\\bullet)}{\\Omega^\\psi_A\\cdot w_\\psi}\\cdot e_\\psi \\end{equation}\nbelongs to $\\ZZ_p[G]$ and annihilates the $p$-primary part $\\sha(A_{F})[p^\\infty]$ of the Tate-Shafarevich group of $A$ over $F$. Here $e_\\psi$ denotes the idempotent $|G|^{-1}\\sum_{g \\in G}\\check\\psi(g)g$ of $\\CC[G]$ and the periods $\\Omega^\\psi_A$ and Artin root numbers $w_\\psi$ are as explicitly defined in \\S\\ref{k theory period sect2}.\n\nIn particular, if one finds for some choice of $x_\\bullet$ that the sum in (\\ref{first predict}) is a unit of $\\ZZ_p[G]$, then this implies that $\\sha(A_F)[p^\\infty]$ should be trivial.\n\nMore generally, the fact that each sum in (\\ref{first predict}) should belong to $\\ZZ_p[G]$ implies that for every $g$ in $G$, and every set of local points $x_\\bullet$, there should be a congruence\n\\[ \\sum_{\\psi \\in \\widehat{G}}\\psi(g)\\frac{L_{S}(A,\\check\\psi,1)\\cdot \\mathcal{LR}_\\psi(x_\\bullet)}{\\Omega_A^\\psi\\cdot w_\\psi}\n \\equiv 0 \\,\\,\\,({\\rm mod}\\,\\, |G|\\cdot \\ZZ_p).\\]\n\nIn concrete examples these congruences are strong restrictions on the values $L_{S}(A,\\check\\psi,1)$ that can be investigated numerically but cannot be deduced by solely considering Birch and Swinnerton-Dyer type formulas for individual Hasse-Weil-Artin $L$-series\n\n\nIn general, our analysis leads to a range of congruence predictions that are both finer than the above and also involve the values at $z=1$ of higher derivatives of Hasse-Weil-Artin $L$-series, suitably normalised by a product of explicit regulators and periods.\n\nTo give an example of this sort of prediction, we shall focus on the simple case that $F\/k$ is cyclic of degree $p$ (although entirely similar predictions can be made in the setting of cyclic extensions of arbitrary $p$-power order and also for certain dihedral families of extensions).\n\nIn this case, under certain natural, and very mild, hypotheses on $A$ and $F\/k$ relative to $p$ there exist non-negative integers $m_0$ and $m_1$ with the property that the pro-$p$ completion $A(F)_p$ of $A(F)$ is isomorphic as a $\\ZZ_p[G]$-module to a direct sum of $m_0$ copies of $\\ZZ_p$ and $m_1$ copies of $\\ZZ_p[G]$.\n\nIn particular, if we further assume $m_0=2$ and $m_1= 1$, then we may fix points $P_{0}^1$ and 
$P_{0}^2$ in $A(k)$ and $P_1$ in $A(F)$ such that $A(F)_p$ is the direct sum of the $\\ZZ_p[G]$-modules generated by $P_{0}^1,$ $P_{0}^2$ and $P_1.$ (In Example \\ref{wuthrich example} the reader will find explicit examples of such pairs $A$ and $F\/k$ for the prime $p=3$.)\n\n\nThen, writing $L^{(1)}_{S}(A,\\check\\psi,1)$ for the value at $z=1$ of the first derivative of $L_{S}(A,\\check\\psi,z)$, our methods predict that, under mild additional hypotheses, there should exist an element $x$ of $\\ZZ_p[G]$ that annihilates $\\sha(A_{F})[p^\\infty]$ and is such that\n\\begin{equation}\\label{second predict} \\sum_{\\psi \\in \\widehat{G}}\\frac{L^{(1)}_{S}(A,\\check\\psi,1)\\cdot\\tau^*(\\QQ,\\psi)}{\\Omega^\\psi_A\\cdot i^{r_2}}\\cdot e_\\psi = x\\cdot \\sum_{g\\in G}\\langle g(P_1),P_1\\rangle_{A_F}\\cdot g^{-1},\\end{equation}\nwhere $\\langle -,-\\rangle_{A_F}$ denotes the N\\'eron-Tate height pairing of $A$ relative to $F$, $\\tau^*(\\QQ,\\psi)$ is the (modified, global) Galois-Gauss sum of the character of $G_\\QQ$ that is obtained by inducing $\\psi$ and $r_2$ is the number of complex places of $k$.\n\nTo be more explicit, we write $R_A$ for the determinant of the N\\'eron-Tate regulator matrix of $A$ over $k$ with respect to the ordered $\\QQ$-basis $\\{P_{0}^1,P_{0}^2,\\sum_{g\\in G}g(P_1)\\}$ of $\\QQ\\cdot A(k)$ and for each non-trivial $\\psi$ in $\\widehat{G}$ we define a non-zero complex number by setting\n\\[ h^\\psi(P_1):=\\sum_{g\\in G}\\langle g(P_1),P_1\\rangle_{A_F}\\cdot\\psi(g)^{-1}.\\]\n\nWe finally write $S_{\\rm r}$ for the set of places of $k$ that ramify in $F$, $d_k$ for the discriminant of $k$ and $I_p(G)$ for the augmentation ideal of $\\ZZ_p[G]$. Then, under mild hypotheses, our methods predict that there should be containments\n\\[\n\\sum_{\\psi\\neq {\\bf 1}_G}\\frac{L_{S_{\\rm r}}^{(1)}(A,\\check\\psi,1)\\cdot\\tau^*(\\QQ,\\psi)}{\\Omega^\\psi_{A}\\cdot i^{r_2} \\cdot h^\\psi(P_1)}\\cdot e_\\psi \\in I_p(G)^2\\,\\,\\text{ and }\\,\\, \\frac{L_{S_{\\rm r}}^{(3)}(A,1)\\sqrt{|d_k|}}{\\Omega_{A}\\cdot R_{A}}\\in \\ZZ_p,\\]\nand a congruence modulo $I_p(G)^3$ of the form\n\\begin{equation}\\label{examplecongruent}\n\\sum_{\\psi\\neq {\\bf 1}_G}\\frac{L_{S_{\\rm r}}^{(1)}(A,\\check\\psi,1)\\cdot\\tau^*(\\QQ,\\psi)}{ \\Omega^\\psi_{A}\\cdot i^{r_2} \\cdot h^\\psi(P_1)}\\cdot\\! 
e_\\psi\n \\equiv \\!\\frac{L_{S_{\\rm r}}^{(3)}(A,1)\\sqrt{|d_k|}}{\\Omega_{A}\\cdot R_{A}}\\cdot\\det\\left(\\begin{array}{cc}\n\\langle P_{0}^1,P_{0}^1\\rangle^{\\rm MT} & \\langle P_{0}^1,P_{0}^2\\rangle^{\\rm MT}\n\\\\\n\\langle P_{0}^2,P_{0}^1\\rangle^{\\rm MT} & \\langle P_{0}^2,P_{0}^2\\rangle^{\\rm MT}\n\\end{array}\\right)\\!.\n\\end{equation}\nHere $L_{S_{\\rm r}}^{(3)}(A,1)$ denotes the value at $z=1$ of the third derivative of $L_{S_{\\rm r}}(A,z)$ and\n\\[ \\langle\\,,\\rangle^{\\rm MT}:A(k)\\times A(k)\\to I_p(G)\/I_p(G)^2\\]\nis the canonical pairing that Mazur and Tate define in \\cite{mt0} by using the geometrical theory of bi-extensions.\n\nFurther, if $\\sha(A_F)[p^\\infty]$ is trivial, then the $p$-component of ${\\rm BSD}(A_{F\/k})$ is valid if and only if (\\ref{examplecongruent}) holds and, in addition, the $p$-component of the Birch and Swinnerton-Dyer Conjecture is true for $A$ over both $k$ and $F$.\n\n\n\nWe remark that even in the simplest possible case that $k = \\QQ$ and $p=3$, these predictions strongly refine those made by Kisilevsky and Fearnley in \\cite{kisilevsky} and cannot be deduced by simply considering leading term formulas for individual Hasse-Weil-Artin $L$-series.\n\n\n\n\n\n\n\\subsection{General notation} For the reader's convenience we give details here of some of the general notation and terminology that will be used throughout the article.\n\n\\subsubsection{}We write $|X|$ for the cardinality of a finite set $X$.\n\nFor an abelian group $M$ we write $M_{\\rm tor}$ for its torsion subgroup and $M_{\\rm tf}$ for the quotient of $M$ by $M_{\\rm tor}$.\n\nFor a prime $p$ and natural number $n$ we write $M[p^n]$ for the subgroup $\\{m \\in M: p^nm =0\\}$ of the Sylow $p$-subgroup $M[p^{\\infty}]$ of $M_{\\rm tor}$.\n\nWe set $M_p := \\ZZ_p\\otimes_\\ZZ M$, write $M^\\wedge_p$ for the pro-$p$-completion $\\varprojlim_n M\/p^n M$ of $M$ and denote the Pontryagin dual $\\Hom(M,\\QQ\/\\ZZ)$ of $M$ by $M^\\vee$.\n\nIf $M$ is finite of exponent dividing $p^m$, then $M^\\vee$ identifies with $\\Hom_{\\ZZ_p}(M,\\QQ_p\/\\ZZ_p)$ and we shall (without explicit comment) use the canonical identification $\\QQ_p\/\\ZZ_p=\\varinjlim_n \\ZZ\/p^n\\ZZ$ to identify elements of $M^\\vee$ with their canonical image in the linear dual $\\Hom_{\\ZZ\/p^m\\ZZ}(M,\\ZZ\/p^m\\ZZ)$.\n\nIf $M$ is finitely generated, then for a field extension $E$ of $\\QQ$ we shall often abbreviate $E\\otimes_\\ZZ M$ to $E\\cdot M$.\n\n\nIf $M$ is a $\\Gamma$-module for some group $\\Gamma$, then we always endow $M^\\vee$ with the natural contragredient action of $\\Gamma$.\n\n\nWe recall that if $\\Gamma$ is finite, then a $\\Gamma$-module $M$ is said to be `cohomologically-trivial' if for all subgroups $\\Delta$ of $\\Gamma$ and all integers $i$ the Tate cohomology group $\\hat H^i(\\Delta,M)$ vanishes.\n\n\\subsubsection{}For any ring $R$ we write $R^\\times$ for its multiplicative group and $\\zeta(R)$ for its centre.\n\nUnless otherwise specified we regard all $R$-modules as left $R$-modules. We write ${\\rm Mod}(R)$ for the abelian category of $R$-modules and ${\\rm Mod}^{\\rm fin}(R)$ for the abelian subcategory of ${\\rm Mod}(R)$ comprising all $R$-modules that are finite.\n\nWe write $D(R)$ for the derived category of complexes of $R$-modules. 
If $R$ is noetherian, then we write $D^{\\rm perf}(R)$ for the full triangulated subcategory of $D(R)$ comprising complexes that are `perfect' (that is, isomorphic in $D(R)$ to a bounded complex of finitely generated projective $R$-modules).\n\nFor a natural number $n$ we write $\\tau_{\\le n}$ for the exact truncation functor on $D(R)$ with the property that for each object $C$ in $D(R)$ and each integer $i$ one has\n\\[ H^i(\\tau_{\\le n}(C)) = \\begin{cases} H^i(C), &\\text{if $i \\le n$}\\\\\n 0, &\\text{otherwise.}\\end{cases}\\]\n\n\\subsubsection{}For a Galois extension of number fields $L\/K$ we set $G_{L\/K} := \\Gal(L\/K)$. We also fix an algebraic closure $K^c$ of $K$ and set $G_K := G_{K^c\/K}$.\n\nFor each non-archimedean place $v$ of a number field we write $\\kappa_v$ for its residue field and denote its absolute norm $|\\kappa_v|$ by ${\\rm N}v$.\n\nWe write the dual of an abelian variety $A$ as $A^t$ and usually abbreviate its dimension ${\\rm dim}(A)$ to $d$.\n\nIf $A$ is defined over a number field $k$, then for each extension $F$ of $k$ we write the Tate-Shafarevich group of $A$ over $F$ as $\\sha(A_F)$.\n\nWe shall also use the following notation regarding sets of places of $k$.\n\n\\begin{itemize}\n\\item[-] $S_k^\\RR$ is the set of real archimedean places of $k$;\n\\item[-] $S_k^\\CC$ is the set of complex archimedean places of $k$;\n\\item[-] $S_k^\\infty (= S_k^\\RR \\cup S_k^\\CC$) is the set of archimedean places of $k$;\n\\item[-] $S_k^f$ is the set of non-archimedean places of $k$;\n\\item[-] $S_k^v$ is the set of places of $k$ that extend a place $v$ of a given subfield of $k$. In particular,\n\\item[-] $S_k^p$ is the set of $p$-adic places of $k$ for a given prime number $p$;\n\\item[-] $S_k^F$ is the set of places of $k$ that ramify in a given extension $F$ of $k$;\n\\item[-] $S_k^A$ is the set of places of $k$ at which a given abelian variety $A$ has bad reduction.\n\\end{itemize}\n\n For a fixed set of places $S$ of $k$ we also write $S(F)$ for the set of places of $F$ which lie above a place in $S$.\n\n\nFor a place $v$ of $k$ we set $G_{v} := G_{k_v^c\/k_v}$. We also write $k_v^{\\rm un}$ for the maximal unramified extension of $k$ in $k_v^c$ and set $I_{v} := G_{k_v^c\/k_v^{\\rm un}}$.\n\nWe fix a place $w$ of $F$ above $v$ and a corresponding embedding $F\\to k_v^c$. We write $G_w$ and $I_w$ for the images of $G_{v}$ and $I_{v}$ under the induced homomorphism $G_{v} \\to G$. 
We also fix a lift $\\Phi_v$ to $G$ of the Frobenius automorphism in $G_w\/I_w$.\n\nWe write $\\Sigma(k)$ for the set of embeddings $k \\to \\CC$.\n\n\\subsection{Acknowledgements} We are very grateful to St\\'ephane Vigui\\'e for his help with aspects of the argument presented in \\S\\ref{ptduality} and to Werner Bley for providing us with the material in \\S\\ref{ell curve sect} and for pointing out a sign-error in an earlier version of the argument in \\S\\ref{comparison section}.\n\nWe are also grateful to both Werner Bley and Christian Wuthrich for their interest, helpful correspondence and tremendous generosity regarding numerical computations.\n\nIn addition, we would like to thank Rob Evans, Masato Kurihara, Jan Nekov\\'a\\v r, Takamichi Sano, Kwok-Wing Tsoi and Stefano Vigni for helpful discussions and correspondence.\n\nFinally, it is a great pleasure to thank Dick Gross for his strong encouragement regarding this project and for several insightful remarks.\n\nThe second author acknowledges financial support from the Spanish Ministry of Economy and Competitiveness, through the `Severo Ochoa Programme for Centres of Excellence in R\\&D' (SEV-2015-0554) as well as through project MTM2016-79400-P.\n\n\n\\section{Selmer complexes}\\label{selmer section}\n\nIn this section we define the Selmer complexes that play a key role in the conjecture we shall later formulate and establish some of their key properties.\n\nAt the outset we fix a finite set of places $S$ of $k$ with\n\\[ S_k^\\infty\\cup S_k^F \\cup S_k^A\\subseteq S.\\]\n\nWe also write $\\Sigma(k)$ for the set of embeddings $k \\to \\CC$ and $\\Sigma_\\sigma(F)$ for each $\\sigma$ in $\\Sigma(k)$ for the set of embeddings $F \\to \\CC$ that extend $\\sigma$.\n\nFor each $v$ in $S_k^\\infty$ we fix a corresponding embedding $\\sigma_v$ in $\\Sigma(k)$ and an embedding $\\sigma'_v$ in $\\Sigma_{\\sigma_v}(F)$.\n\nWe then write $Y_{v,F}$ for the module $\\prod_{\\Sigma_{\\sigma_v}(F)}\\ZZ$ endowed with its natural action of $G\\times G_v$ (via which $G$ and $G_v$ respectively act via pre-composition and post-composition with the embeddings in $\\Sigma_{\\sigma_v}(F)$).\n\nFor each $v$ in $S_k^\\infty$ we set\n\\[ H_v(A_{F\/k}) := H^0(k_v,Y_{v,F}\\otimes_{\\ZZ}H_1((A^t)^{\\sigma_v}(\\CC),\\ZZ)),\\]\nregarded as a $G$-module via the given action on $Y_{v,F}$. 
We note that this $G$-module is free of rank $2d$ if $v$ is in $S_k^\\CC$ and spans a free $\\ZZ[1\/2][G]$-module of rank $d$ if $v$ is in $S_k^\\RR$.\n\nWe then define a $G$-module by setting\n\\[ H_\\infty(A_{F\/k}) := \\bigoplus_{v \\in S_k^\\infty}H_v(A_{F\/k}).\\]\n\nFinally we write $\\Sigma_k(F)$ for the set of $k$-embeddings $F \\to k^c$ and $Y_{F\/k}$ for the module $\\prod_{\\Sigma_k(F)}\\ZZ$, endowed with its natural action of $G\\times G_k$.\n\n\n\n\n\n\\subsection{Classical Selmer complexes}\\label{p-adiccomplexes} In this section we fix a prime number $p$.\n\n\n\\subsubsection{}We first record a straightforward (and well-known) result regarding pro-$p$ completions that will be useful in the sequel.\n\nWe let $B$ denote either $A$ or its dual variety $A^t$ and write $T_p(B)$ for the $p$-adic Tate module of $B$.\n\n\\begin{lemma}\\label{v not p} For each non-archimedean place $w'$ of $F$ the following claims are valid.\n\\end{lemma}\n\\begin{itemize}\n\\item[(i)] If $w'$ is not $p$-adic then the natural Kummer map $B(F_{w'})^\\wedge_p \\to H^1(F_{w'},T_p(B))$ is bijective.\n\\item[(ii)] There exists a canonical short exact sequence\n\\[ 0 \\to H^1(\\kappa_{w'}, T_p(B)^{I_{w'}}) \\to B(F_{w'})^\\wedge_p \\to H^0(F_{w'}, H^1(I_{w'}, T_{p}(B))_{\\rm tor})\\to 0.\\]\n\\end{itemize}\n\n\\begin{proof} We note first that if $w'$ does not divide $p$, then the module $H^1(F_{w'},T_p(B))$ is finite. This follows, for example, from Tate's local Euler characteristic formula, the vanishing of $H^0(F_{w'},T_p(B))$ and the fact that local duality identifies $H^2(F_{w'},T_p(B))$ with the finite module $H^0(F_{w'},T_p(B^t)\\otimes_{\\ZZ_p}\\QQ_p\/\\ZZ_p)^\\vee$.\n\nGiven this observation, claim (i) is obtained directly upon passing to the inverse limit over $m$ in the natural Kummer theory exact sequence\n\\begin{equation}\\label{kummerseq} 0 \\to B(F_{w'})\/p^m \\to H^1(F_{w'},T_p(B)\/p^m) \\to H^1(F_{w'},B)[p^m]\\to 0.\\end{equation}\n\n\nClaim (ii) is established by Flach and the first author in \\cite[(1.38)]{bufl95} (after recalling the fact that the group denoted $H^1_f(F_{w'},T_p(B))_{\\rm BK}$ in loc. cit. is equal to the image of $B(F_{w'})^\\wedge_p$ in $H^1(F_{w'},T_p(B))$ under the injective Kummer map).\n\\end{proof}\n\n\\begin{remark}\\label{Tamagawa remark}{\\em The cardinality of each module $H^0(F_{w'}, H^1(I_{w'}, T_{p}(B))_{\\rm tor})$ that occurs in Lemma \\ref{v not p}(ii) is the maximal power of $p$ that divides the Tamagawa number of $B$ at $w'$. }\\end{remark}\n\n\n\n\n\n\n\n\n\n\n\n\\subsubsection{}If $B$ denotes either $A$ or $A^t$, then for any subfield $E$ of $k$ and any place $v$ in $S_E^f$ we obtain a $G$-module by setting\n\\[ B(F_v) := \\prod_{w' \\in S_F^v}B(F_{w'}).\\]\n\nFor later purposes we note that if $E = k$, then this module is isomorphic to the module of $G_w$-coinvariants\n$Y_{F\/k}\\otimes_{\\ZZ[G_w]} B(F_w)$ of the tensor product $Y_{F\/k}\\otimes_{\\ZZ} B(F_w)$, upon which $G$ acts only the first factor but $G_k$ acts diagonally on both.\n\nIn a similar way, the $p$-adic Tate module of the base change of $B$ through $F\/k$ is equal to\n\\[ T_{p,F}(B) := Y_{F\/k,p}\\otimes_{\\ZZ_p}T_p(B)\\]\n(where, again, $G$ acts only on the first factor of the tensor product whilst $G_k$ acts on both). 
We set $V_{p,F}(B) := \\QQ_p\\cdot T_{p,F}(B)$.\n\n\nWe can now introduce a notion of Selmer complex that will play an important role in the sequel.\n\n\\begin{definition}\\label{bkdefinition}{\\em For any finite subset $\\Sigma$ of $S_k^f$ that contains each of $S_k^p, S_k^F\\cap S_k^f$ and $S_k^A$, the `classical $p$-adic Selmer complex' ${\\rm SC}_{\\Sigma,p}(A_{F\/k})$ for the data $A, F\/k$ and $\\Sigma$ is the mapping fibre of the morphism\n\\begin{equation}\\label{bkfibre}\n\\tau_{\\le 3}(R\\Gamma(\\mathcal{O}_{k,S_k^\\infty\\cup \\Sigma},T_{p,F}(A^t))) \\oplus \\left(\\bigoplus_{v\\in\\Sigma} A^t(F_v)^\\wedge_p\\right)[-1] \\xrightarrow{(\\lambda,\\kappa)} \\bigoplus_{v \\in \\Sigma} R\\Gamma (k_v, T_{p,F}(A^t))\n\\end{equation}\nin $D(\\ZZ_p[G])$. Here $\\lambda$ is the natural diagonal localisation morphism and $\\kappa$ is induced by the Kummer theory maps\n$A^t(F_v)^\\wedge_p\\to H^1(k_v,T_{p,F}(A^t))$ (and the fact that $H^0(k_v, T_{p,F}(A^t))$ vanishes for all $v$ in $\\Sigma$).\n\n}\n\\end{definition}\n\n\\begin{remark}{\\em If $p$ is odd, then $R\\Gamma(\\mathcal{O}_{k,S_k^\\infty\\cup \\Sigma},T_{p,F}(A^t)))$ is acyclic in degrees greater than two and so the natural morphism $\\tau_{\\le 3}(R\\Gamma(\\mathcal{O}_{k,S_k^\\infty\\cup \\Sigma},T_{p,F}(A^t))) \\to R\\Gamma(\\mathcal{O}_{k,S_k^\\infty\\cup \\Sigma},T_{p,F}(A^t)))$ in $D(\\ZZ_p[G])$ is an isomorphism. In this case the truncation functor $\\tau_{\\le 3}$ can therefore be omitted from the above definition.}\\end{remark}\n\nThe following result shows that the Selmer complex ${\\rm SC}_{\\Sigma,p}(A_{F\/k})$ is independent, in a natural sense, of the choice of the set of places $\\Sigma$.\n\nFor this reason, in the sequel we shall usually write ${\\rm SC}_{p}(A_{F\/k})$ in place of ${\\rm SC}_{\\Sigma,p}(A_{F\/k})$.\n\n\\begin{lemma}\\label{independenceofsigma} Let $\\Sigma$ and $\\Sigma'$ be any finite subsets of $S_k^f$ as in Definition \\ref{bkdefinition} with $\\Sigma\\subseteq\\Sigma'$. Then there is a canonical isomorphism ${\\rm SC}_{\\Sigma',p}(A_{F\/k})\\to {\\rm SC}_{\\Sigma,p}(A_{F\/k})$ in $D(\\ZZ_p[G])$.\n\\end{lemma}\n\n\\begin{proof} We recall that the compactly supported cohomology complex\n\\[ R\\Gamma_{c,\\Sigma}:=R\\Gamma_c(\\mathcal{O}_{k,S_k^\\infty\\cup\\Sigma},T_{p,F}(A^t))\\]\nis defined to be the mapping fibre of the diagonal localisation morphism\n\\begin{equation}\\label{compactloc} R\\Gamma(\\mathcal{O}_{k,S_k^\\infty\\cup\\Sigma},T_{p,F}(A^t)) \\to \\bigoplus_{v \\in S_k^\\infty\\cup\\Sigma}R\\Gamma(k_v,T_{p,F}(A^t))\\end{equation}\nin $D(\\ZZ_p[G])$.\n\nWe further recall that $R\\Gamma_{c,\\Sigma}$ is acyclic outside degrees $1, 2$ and $3$ (see, for example, \\cite[Prop. 
1.6.5]{fukaya-kato}) and that\n $R\\Gamma(k_v,T_{p,F}(A^t))$ for each $v$ in $\\Sigma$ is acyclic outside degrees $1$ and $2$ and hence that the natural morphisms\n\\[ \\tau_{\\le 3}(R\\Gamma_{c,\\Sigma}) \\to R\\Gamma_{c,\\Sigma}\\,\\,\\text{ and } \\,\\,\\tau_{\\le 3}(R\\Gamma(k_v,T_{p,F}(A^t))) \\to R\\Gamma(k_v,T_{p,F}(A^t))\\]\nin $D(\\ZZ_p[G])$ are isomorphisms.\n\nUpon comparing these facts with the definition of ${\\rm SC}_{\\Sigma,p}(A_{F\/k})$ one deduces the existence of a canonical exact triangle in $D(\\ZZ_p[G])$ of the form\n\\begin{equation}\\label{comparingtriangles}R\\Gamma_{c,\\Sigma}\\to {\\rm SC}_{\\Sigma,p}(A_{F\/k})\\to \\left(\\bigoplus_{ v\\in \\Sigma}A^t(F_v)_p^\\wedge\\right)[-1]\\oplus \\tau_{\\le 3}(R\\Gamma_\\infty) \\to R\\Gamma_{c,\\Sigma}[1],\n\\end{equation}\nwhere we abbreviate $\\bigoplus_{ v\\in S_k^\\infty} R\\Gamma(k_v,T_{p,F}(A^t))$ to $R\\Gamma_\\infty$.\n\n\nIn addition, by the construction of \\cite[(30)]{bufl99}, there is a canonical exact triangle in $D(\\ZZ_p[G])$ of the form\n\\begin{equation}\\label{independencetriangle}\\bigoplus\\limits_{ v\\in \\Sigma'\\setminus\\Sigma}R\\Gamma(\\kappa_v,T_{p,F}(A^t)^{I_v})[-1]\\to R\\Gamma_{c,\\Sigma'}\\to R\\Gamma_{c,\\Sigma}\\to\\bigoplus\\limits_{ v\\in \\Sigma'\\setminus\\Sigma}R\\Gamma(\\kappa_v,T_{p,F}(A^t)^{I_v}).\\end{equation}\n\nFinally we note that, since the choice of $\\Sigma$ implies that $A$ has good reduction at each place $v$ in $\\Sigma'\\setminus\\Sigma$, the module $H^0(F_{w'}, H^1(I_{w'}, T_{p}(A^t))_{\\rm tor})$ vanishes for every $w'$ in $S_F^v$.\n\nThus, since for each $v$ in $\\Sigma'\\setminus\\Sigma$ the complex $R\\Gamma(\\kappa_v,T_{p,F}(A^t)^{I_v})$ is acyclic outside degree one, the exact sequences in Lemma \\ref{v not p}(ii) induce a canonical isomorphism\n\\begin{equation}\\label{firstrow}A^t(F_v)_p^\\wedge[-1]\\to R\\Gamma(\\kappa_v,T_{p,F}(A^t)^{I_v})\\end{equation}\nin $D(\\ZZ_p[G])$.\n\nThese three facts combine to give a canonical commutative diagram in $D(\\ZZ_p[G])$ of the form\n\n\\begin{equation*}\\label{complexesdiag}\\xymatrix{\n\\bigoplus\\limits_{ v\\in \\Sigma'\\setminus\\Sigma}A^t(F_v)_p^\\wedge[-2] \\ar@{^{(}->}[d] \\ar[r]^{\\hskip-0.4truein\\sim} &\n\\bigoplus\\limits_{ v\\in \\Sigma'\\setminus\\Sigma}R\\Gamma(\\kappa_v,T_{p,F}(A^t)^{I_v})[-1] \\ar[d] &\n\\\\\n\\bigoplus\\limits_{ v\\in \\Sigma'}A^t(F_v)_p^\\wedge[-2]\\oplus \\tau_{\\le 3}(R\\Gamma_\\infty)[-1] \\ar@{->>}[d] \\ar[r] &\nR\\Gamma_{c,\\Sigma'} \\ar[d] \\ar[r] &\n{\\rm SC}_{\\Sigma',p}(A_{F\/k})\n\\\\\n\\bigoplus\\limits_{ v\\in \\Sigma}A^t(F_v)_p^\\wedge[-2]\\oplus \\tau_{\\le 3}(R\\Gamma_\\infty)[-1] \\ar[r] &\nR\\Gamma_{c,\\Sigma} \\ar[r] &\n{\\rm SC}_{\\Sigma,p}(A_{F\/k}).\n}\\end{equation*}\nHere the first row is induced by the isomorphisms (\\ref{firstrow}), the second and third rows by the triangles (\\ref{comparingtriangles}) for $\\Sigma'$ and $\\Sigma$ respectively, the first column is the obvious short exact sequence and the second column is given by the triangle (\\ref{independencetriangle}).\n\nIn particular, since all rows and columns in this diagram are exact triangles, its commutativity implies the existence of a canonical isomorphism in $D(\\ZZ_p[G])$ from\n${\\rm SC}_{\\Sigma',p}(A_{F\/k})$ to ${\\rm SC}_{\\Sigma,p}(A_{F\/k})$, as required.\n\\end{proof}\n\nTaken together, Lemma \\ref{v not p}(ii) and Remark \\ref{Tamagawa remark} imply that if $p$ is odd and the Tamagawa numbers of $A$ over $F$ are divisible by $p$, then the complex ${\\rm SC}_{p}(A_{F\/k})$ differs slightly 
from the `finite support cohomology' complex $R\Gamma_f(k,T_{p,F}(A))$ that was defined (for odd $p$) and played a key role in the article \cite{bmw} of Wuthrich and the present authors.\n\nFor such $p$ we have preferred to use ${\rm SC}_{p}(A_{F\/k})$ rather than $R\Gamma_f(k,T_{p,F}(A))$ in this article since it is more amenable to certain explicit constructions that we have to make in later sections.\n\nFor the moment, we record only the following facts about ${\rm SC}_{p}(A_{F\/k})$ that will be established in Propositions \ref{explicitbkprop} and \ref{explicitbkprop2} below. We write $\Sel_p(A_{F})$ for the classical $p$-primary Selmer group of $A$ over $F$. Then ${\rm SC}_{p}(A_{F\/k})$ is acyclic outside degrees one, two and three and, assuming the Tate-Shafarevich group $\sha(A_F)$ of $A$ over $F$ to be finite, there are canonical identifications for each odd $p$ of the form\n\begin{equation}\label{bksc cohom} H^i({\rm SC}_{p}(A_{F\/k})) = \begin{cases} A^t(F)_p, &\text{if $i=1$,}\\\n\Sel_p(A_F)^\vee, &\text{if $i=2$,}\\\nA(F)[p^{\infty}]^\vee, &\text{if $i=3$,}\end{cases}\end{equation}\nwhilst for $p=2$ there is a canonical identification $H^1({\rm SC}_{2}(A_{F\/k})) = A^t(F)_2$ and a canonical homomorphism $\Sel_2(A_F)^\vee \to H^2({\rm SC}_{2}(A_{F\/k}))$ with finite kernel and cokernel, and the module $H^3({\rm SC}_{2}(A_{F\/k}))$ is finite.\n\n\begin{remark}\label{indeptremark}{\em A closer analysis of the argument in Lemma \ref{independenceofsigma} shows that, with respect to the identifications (\ref{bksc cohom}) that are established (under the hypothesis that $\sha(A_F)$ is finite) in Proposition \ref{explicitbkprop} below, the isomorphism ${\rm SC}_{\Sigma',p}(A_{F\/k})\to {\rm SC}_{\Sigma,p}(A_{F\/k})$ constructed in Lemma \ref{independenceofsigma} induces the identity map on all degrees of cohomology.}\end{remark}\n\n\n\n\subsection{Nekov\'a\v r-Selmer complexes} In this section we again fix a prime number $p$.\n\n\subsubsection{}Whilst the modules that occur in (\ref{bksc cohom}) are the primary objects of interest in the theory of abelian varieties, the complex ${\rm SC}_{p}(A_{F\/k})$ is not always well-suited to our purposes since, except in certain special cases (that will be discussed in detail in \S\ref{tmc}), it does not belong to $D^{\rm perf}(\ZZ_p[G])$.\n\nFor this reason, we find it convenient to introduce the following alternative notion of Selmer complexes.\n\nThis construction is motivated by the general approach developed by Nekov\'a\v r in \cite{nek}.\n\n\begin{definition}\label{selmerdefinition}{\em Fix $\ZZ_p[G]$-submodules $X$ of $A^t(F_p)^\wedge_p$ and $X'$ of $H_{\infty}(A_{F\/k})_p$. Then the `Nekov\'a\v r-Selmer complex' ${\rm SC}_{S}(A_{F\/k};X,X')$ of the data $(A,F,S,X,X')$ is the mapping fibre of the morphism\n\begin{equation}\label{fibre morphism}\nR\Gamma(\mathcal{O}_{k,S\cup S_k^p},T_{p,F}(A^t)) \oplus X[-1] \oplus X'[0] \xrightarrow{(\lambda, \kappa_1,\kappa_2)} \bigoplus_{v \in S \cup S_k^p} R\Gamma (k_v, T_{p,F}(A^t))\n\end{equation}\n in $D(\ZZ_p[G])$. 
Here $\\lambda$ is again the natural diagonal localisation morphism, $\\kappa_1$ is the morphism\n\n \\[ X[-1]\\rightarrow \\bigoplus_{v \\in S_k^p}R\\Gamma (k_v, T_{p,F}(A^t))\\]\n\n induced by the sum over $v$ of the local Kummer maps (and the fact each group $H^0(k_v,T_{p,F}(A^t))$ vanishes) and $\\kappa_2$ is the morphism\n\n\\[ X'[0] \\to \\bigoplus_{v \\in S_k^\\infty}R\\Gamma (k_v, T_{p,F}(A^t))\\]\nthat is induced by the canonical comparison isomorphisms\n\\begin{equation}\\label{cancompisom} Y_{v,F,p}\\otimes_{\\ZZ} H_1((A^t)^{\\sigma_v}(\\CC),\\ZZ) \\cong Y_{F\/k,p}\\otimes_{\\ZZ_p}T_{p}(A^t)= T_{p,F}(A^t) \\end{equation}\nfor each $v$ in $S_k^\\infty$\n}\\end{definition}\n\n\n\nIn the next result we establish the basic properties of these Nekov\\'a\\v r-Selmer complexes. In this result we shall write ${\\rm Mod}^\\ast(\\ZZ_p[G])$ for the category ${\\rm Mod}(\\ZZ_p[G])$ in the case that $p$ is odd and for the quotient of ${\\rm Mod}(\\ZZ_2[G])$ by its subcategory ${\\rm Mod}^{\\rm fin}(\\ZZ_2[G])$ in the case that $p = 2$.\n\n\n\n\\begin{proposition}\\label{prop:perfect} Let $X$ be a finite index $\\ZZ_p[G]$-submodule of $A^t(F_p)^\\wedge_p$ that is cohomologically-trivial as a $G$-module.\n\nLet $X'$ be a finite index projective $\\ZZ_p[G]$-submodule of $H_\\infty(A_{F\/k})_p$, with $X' = H_\\infty(A_{F\/k})_p$ if $p$ is odd.\n\nThen the following claims are valid.\n\\begin{itemize}\n\\item[(i)] ${\\rm SC}_{S}(A_{F\/k};X,X')$ is an object of $D^{\\rm perf}(\\ZZ_p[G])$ that is acyclic outside degrees one, two and three.\n\\item[(ii)] $H^3({\\rm SC}_{S}(A_{F\/k};X,X'))$ identifies with $A(F)[p^{\\infty}]^\\vee$.\n\\item[(iii)] If $\\sha(A_F)$ is finite, then in ${\\rm Mod}^\\ast(\\ZZ_p[G])$ there exists a canonical injective homomorphism\n\\[ H^1({\\rm SC}_{S}(A_{F\/k};X,X')) \\to A^t(F)_p \\]\nthat has finite cokernel and a canonical surjective homomorphism\n\\[ H^2({\\rm SC}_{S}(A_{F\/k};X,X')) \\to {\\rm Sel}_p(A_{F})^\\vee\\]\nthat has finite kernel.\n\\end{itemize}\n\\end{proposition}\n\n\\begin{proof} Set $C_{S} := {\\rm SC}_{S}(A_{F\/k};X,X')$.\n\nThen, by comparing the definition of $C_{S}$ as the mapping fibre of (\\ref{fibre morphism}) with the definition of the compactly supported cohomology complex $R\\Gamma_c(A_{F\/k}) := R\\Gamma_c(\\mathcal{O}_{k,S\\cup S_k^p},T_{p,F}(A^t))$ as the mapping fibre of the morphism (\\ref{compactloc})\none finds that there is an exact triangle in $D(\\ZZ_p[G])$ of the form\n\\begin{equation}\\label{can tri} R\\Gamma_c(A_{F\/k})\\to C_{S} \\to X[-1] \\oplus X'[0]\\to R\\Gamma_c(A_{F\/k})[1].\\end{equation}\n\nTo derive claim (i) from this triangle it is then enough to recall (from, for example, \\cite[Prop. 
1.6.5]{fukaya-kato}) that $R\\Gamma_c(A_{F\/k})$ belongs to $D^{\\rm perf}(\\ZZ_p[G])$ and is acyclic outside degrees one, two and three and note that both of the $\\ZZ_p[G]$-modules $X$ and $X'$ are finitely generated and cohomologically-trivial.\n\nThe above triangle also gives a canonical identification\n\\begin{equation}\\label{artinverdier} H^3(C_{S}) \\cong H^3(R\\Gamma_c(A_{F\/k})) \\cong H^0(k,T_{p,F}(A)\\otimes_{\\ZZ_p}\\QQ_p\/\\ZZ_p)^\\vee = A(F)[p^{\\infty}]^\\vee\\end{equation}\nwhere the second isomorphism is induced by the Artin-Verdier Duality Theorem.\n\nIn a similar way, if we set $\\Sigma:=(S\\cap S_k^f)\\cup S_k^p$ and abbreviate the classical $p$-adic Selmer complex ${\\rm SC}_{\\Sigma,p}(A_{F\/k})$ to $C'_\\Sigma$, then a direct comparison of the definitions of $C_{S}$ and $C_\\Sigma'$ shows that $C_{S}$ is isomorphic in $D(\\ZZ_p[G])$ to the mapping fibre of the morphism\n\\begin{equation}\\label{selmer-finite tri} C'_\\Sigma \\oplus X[-1] \\oplus X'[0]\n\\xrightarrow{(\\lambda', \\kappa'_1,\\kappa_2)}\n \\bigoplus_{v \\in \\Sigma} A^t(F_v)^\\wedge_p[-1] \\oplus \\bigoplus_{v \\in S_k^\\infty}\\tau_{\\le 3}(R\\Gamma(k_v,T_{p,F}(A^t)))\\end{equation}\nwhere $\\lambda'$ is the canonical morphism\n\\[ C'_\\Sigma \\to \\bigoplus_{v \\in \\Sigma} A^t(F_v)^\\wedge_p[-1]\\]\ndetermined by the definition of of $C'_\\Sigma$ as the mapping fibre of (\\ref{bkfibre}),\nand the morphism $$\\kappa'_1: X[-1]\\rightarrow \\bigoplus_{v \\in S_k^p} A^t(F_v)^\\wedge_p[-1]$$ is induced by the given inclusion\n $X \\subseteq A^t(F_p)^\\wedge_p$.\n\n\nThis description of $C_{S}$ gives rise to a canonical long exact sequence of $\\ZZ_p[G]$-modules\n\n\\begin{multline}\\label{useful1} 0 \\to {\\rm cok}(H^0(\\kappa_2)) \\to H^1(C_{S}) \\to H^1(C_\\Sigma') \\\\\n \\to (A^t(F_p)^\\wedge_p\/X) \\oplus\\bigoplus_{v \\in (S\\cap S_k^f)\\setminus S_k^p} A^t(F_v)^\\wedge_p \\oplus \\bigoplus_{v \\in S_k^\\infty}H^1(k_v,T_{p,F}(A^t))\\\\ \\to H^2(C_{S})\\to H^2(C_\\Sigma') \\to \\bigoplus_{v \\in S_k^\\infty}H^2(k_v,T_{p,F}(A^t)). 
\\end{multline}\n\nIn addition, for each $v \\in S_k^\\infty$ the groups $H^1(k_v,T_{p,F}(A^t))$ and $H^2(k_v,T_{p,F}(A^t))$ vanish if $p$ is odd and are finite if $p=2$, whilst our choice of $X'$ ensures that ${\\rm cok}(H^0(\\kappa_2))$ is also a finite group of $2$-power order.\n\nClaim (iii) therefore follows upon combining the above sequence with the identifications of $H^1(C_\\Sigma')$ and $H^2(C_\\Sigma')$ given in (\\ref{bksc cohom}) for odd $p$, and in the subsequent remarks for $p=2$, that are valid whenever $\\sha(A_F)$ is finite.\n\n\\end{proof}\n\n\n\\begin{remark}\\label{mrselmer}{\\em If $p$ is odd, then the proof of Proposition \\ref{prop:perfect} shows that the cohomology group\n$H^1({\\rm SC}_{S}(A_{F\/k};X,H_\\infty(A_{F\/k})_p))$ coincides with the Selmer group $H^1_{\\mathcal{F}_X}(k,T_{p,F}(A^t))$ in the sense of Mazur and Rubin \\cite{MRkoly}, where $\\mathcal{F}_X$ is the Selmer structure with $\\mathcal{F}_{X,v}$ equal to the image of $X$ in $H^1(k_v,T_{p,F}(A^t))$ for $v\\in S_k^p$ and equal to $0$ for $v \\in S\\setminus S_k^p$.} \\end{remark}\n\n\n\\subsection{Perfect Selmer structures and integral complexes}\\label{perfect selmer integral} We write $\\ell(v)$ for the residue characteristic of a non-archimedean place $v$ of $k$.\n\n\n\\begin{definition}\\label{pgss def}{\\em A `perfect Selmer structure' for the pair $A$ and $F\/k$ is a collection\n\\[ \\mathcal{X} := \\{\\mathcal{X}(v): v \\}\\]\nover all places $v$ of $k$ of $G$-modules that satisfy the following conditions.\n\n\\begin{itemize}\n\\item[(i)] For each $v$ in $S_k^\\infty$ the module $\\mathcal{X}(v)$ is projective and a submodule of $H_v(A_{F\/k})$ of finite $2$-power index.\n\\item[(ii)] For each $v$ in $S_k^f$ the module $\\mathcal{X}(v)$ is cohomologically-trivial and a finite index $\\ZZ_{\\ell(v)}[G]$-submodule of $A^t(F_v)^\\wedge_{\\ell(v)}$.\n\\item[(iii)] For almost all (non-archimedean) places $v$ one has $\\mathcal{X}(v) = A^t(F_v)^\\wedge_{\\ell(v)}.$\n\\end{itemize}\nWe thereby obtain a projective $G$-submodule\n\\[ \\mathcal{X}(\\infty) := \\bigoplus_{v \\in S_k^\\infty}\\mathcal{X}(v)\\]\nof $H_\\infty(A_{F\/k})$ of finite $2$-power index and, for each rational prime $\\ell$, a finite index cohomologically-trivial $\\ZZ_\\ell[G]$-submodule\n\\[ \\mathcal{X}(\\ell) := \\bigoplus_{v\\in S_k^\\ell}\\mathcal{X}(v)\\]\nof $A^t(F_\\ell)^\\wedge_{\\ell}$.}\n\\end{definition}\n\n\n\\begin{remark}{\\em The conditions (ii) and (iii) in Definition \\ref{pgss def} are consistent since if $\\ell$ does not divide $|G|$, then any $\\ZZ_{\\ell}[G]$-module is automatically cohomologically-trivial for $G$.} \\end{remark}\n\nIn the following result we write $X_\\ZZ(A_F)$ for the `integral Selmer group' of $A$ over $F$ defined by Mazur and Tate in \\cite{mt}.\n\nWe recall that, if the Tate-Shafarevich group $\\sha(A_F)$ is finite, then $X_\\ZZ(A_F)$ is a finitely generated $G$-module and there exists an isomorphism of $\\hat \\ZZ[G]$-modules\n\\[ \\hat\\ZZ\\otimes_\\ZZ X_\\ZZ(A_F) \\cong {\\rm Sel}(A_F)^\\vee\\]\nthat is unique up to automorphisms that induce the identity map on both the submodule $X_\\ZZ(A_F)_{\\rm tor} = \\sha(A_F)^\\vee$ and quotient module $X_\\ZZ(A_F)_{\\rm tf} = \\Hom_\\ZZ(A(F), \\ZZ)$. 
(Here $\\hat\\ZZ$ denotes the profinite completion of $\\ZZ$).\n\nWe identify ${\\rm Mod}^{\\rm fin}(\\ZZ_2[G])$ as an abelian subcategory of ${\\rm Mod}(\\ZZ[G])$ in the obvious way and write ${\\rm Mod}^\\ast(\\ZZ[G])$ for the associated quotient category.\n\n\\begin{proposition}\\label{prop:perfect2} Assume that $\\sha(A_F)$ is finite. Then for any perfect Selmer structure $\\mathcal{X}$ for $A$ and $F\/k$\n there exists a complex $C_S(\\mathcal{X}) = {\\rm SC}_{S}(A_{F\/k};\\mathcal{X})$ in $D^{\\rm perf}(\\ZZ[G])$ that is unique up to isomorphisms in $D^{\\rm perf}(\\Z[G])$ that induce the identity map in all degrees of cohomology and has all of the following properties.\n\\begin{itemize}\n\\item[(i)] For each prime $\\ell$ there is a canonical isomorphism in $D^{\\rm perf}(\\ZZ_\\ell[G])$\n\\[ \\ZZ_\\ell\\otimes_\\ZZ C_S(\\mathcal{X}) \\cong {\\rm SC}_{S}(A_{F\/k};\\mathcal{X}(\\ell),\\mathcal{X}(\\infty)_\\ell).\\]\n\\item[(ii)] $C_S(\\mathcal{X})$ is acyclic outside degrees one, two and three.\n\\item[(iii)] There is a canonical identification $H^3(C_S(\\mathcal{X})) = (A(F)_{\\rm tor})^\\vee$.\n\n\\item[(iv)] In ${\\rm Mod}^\\ast(\\ZZ[G])$ there exists a canonical injective homomorphism\n\\[ H^1(C_S(\\mathcal{X})) \\to A^t(F) \\]\nthat has finite cokernel and a canonical surjective homomorphism\n\\[ H^2(C_S(\\mathcal{X})) \\to X_\\ZZ(A_F)\\]\nthat has finite kernel.\n\\item[(v)] If $\\mathcal{X}(v)\\subseteq A^t(F_v)$ for all $v$ in $S\\cap S_k^f$ and $\\mathcal{X}(v) = A^t(F_v)^\\wedge_{\\ell(v)}$ for all $v\\notin S$, then there exists an exact sequence in ${\\rm Mod}^\\ast(\\ZZ[G])$ of the form\n\\[ 0 \\to H^1(C_S(\\mathcal{X})) \\to A^t(F) \\xrightarrow{\\Delta_{S,\\mathcal{X}}} \\bigoplus_{v \\in S\\cap S_k^f}\\frac{A^t(F_v)}{\\mathcal{X}(v)} \\to\nH^2(C_S(\\mathcal{X})) \\to X_\\ZZ(A_F)\\to 0\\]\nin which $\\Delta_{S,\\mathcal{X}}$ is the natural diagonal map.\n\\end{itemize}\n\\end{proposition}\n\n\\begin{proof} We write $\\hat \\ZZ$ for the profinite completion of $\\ZZ$ and for each prime $\\ell$ set $C_S(\\ell) := {\\rm SC}_{S}(A_{F\/k};\\mathcal{X}(\\ell),\\mathcal{X}(\\infty)_\\ell)$.\n\nTo construct a suitable complex $C_S(\\mathcal{X})$ we shall use the general result of \\cite[Lem. 3.8]{bkk} with the complex $\\widehat C$ in loc. cit. taken to be the object $\\prod_\\ell C_S(\\ell)$ of $D(\\hat \\ZZ[G])$.\n\nIn fact, since $\\mathcal{X}$ satisfies the conditions (i) and (ii) in Definition \\ref{pgss def}, Proposition \\ref{prop:perfect}(i) implies that each complex $C_S(\\ell)$ belongs to $D^{\\rm perf}(\\ZZ_\\ell[G])$ and is acyclic outside degrees one, two and three and so to apply \\cite[Lem. 
3.8]{bkk} it is enough to specify for each $j \\in \\{1,2,3\\}$ a finitely generated $G$-module $M^j$ together with an isomorphism of $\\hat \\Z[G]$-modules of the form $\\iota_j: \\hat \\Z\\otimes_\\Z M^j \\cong \\prod_\\ell H^j(C_S(\\ell))$.\n\nBy Proposition \\ref{prop:perfect}(ii) it is clear that one can take $M^3 = A(F)_{\\rm tor}^\\vee$ and $\\iota_3$ the canonical identification induced by the decomposition $A(F)_{\\rm tor}^\\vee = \\prod_\\ell A(F)[\\ell^\\infty]^\\vee$.\n\nTo construct suitable modules $M^1$ and $M^2$,\n\n\nwe note first that the proof of Proposition \\ref{prop:perfect}(iii) combines with the fact that $\\mathcal{X}$ satisfies condition (iii) in Definition \\ref{pgss def} to give rise to a homomorphism of $\\hat\\ZZ[G]$-modules\n\\[ \\prod_\\ell H^1(C_S(\\ell)) \\xrightarrow{\\theta_1} \\hat\\ZZ\\otimes_\\ZZ A^t(F) \\]\nwith the property that $\\ker(\\theta_1)$ is finite of $2$-power order and ${\\rm cok}(\\theta_1)$ is finite, and to a diagram of homomorphisms of $\\hat\\ZZ[G]$-modules\n\\begin{equation}\\label{derived diag}\n\\prod_\\ell H^2(C_S(\\ell)) \\xrightarrow{\\theta_2} \\prod_\\ell H^2({\\rm SC}_{\\Sigma_\\ell,\\ell}(A_{F\/k})) \\xleftarrow{\\theta_3} \\hat\\ZZ\\otimes_\\ZZ X_{\\ZZ}(A_F)\\end{equation}\nin which $\\ker(\\theta_2)$ is finite whilst ${\\rm cok}(\\theta_2), \\ker(\\theta_3)$ and ${\\rm cok}(\\theta_3)$ are all finite of $2$-power order. Here for each prime number $\\ell$ we have also set $\\Sigma_\\ell:=(S\\cap S_k^f)\\cup S_k^\\ell$.\n\nIt is then straightforward to construct a commutative (pull-back) diagram of $G$-modules\n\\begin{equation}\\label{useful2 diagrams} \\begin{CD}\n M^1 @> >> A^t(F)\\\\\n @V \\iota_{11} VV @VV\\iota_{12} V\\\\\n \\prod_\\ell H^1(C_S(\\ell)) @> \\theta_1>> \\hat\\ZZ\\otimes_\\ZZ A^t(F)\\end{CD}\\end{equation}\nin which $M^1$ is finitely generated, the upper horizontal arrow has finite kernel of $2$-power order and finite cokernel, the morphism $\\iota_{12}$ is the natural inclusion and the morphism $\\iota_{11}$ induces an isomorphism of $\\hat\\ZZ[G]$-modules $\\iota_1$ of the required sort.\n\nIn a similar way, there is a pull-back diagram of $G$-modules\n\\begin{equation*} \\begin{CD}\n M_2 @> \\theta_2' >> \\theta_3(X_{\\ZZ}(A_F))\\\\\n @V \\iota_{21} VV @VV\\iota_{22} V\\\\\n \\prod_\\ell H^2(C_S(\\ell)) @> \\theta_2 >> \\prod_\\ell H^2({\\rm SC}_{\\Sigma_\\ell,\\ell}(A_{F\/k}))\\end{CD}\\end{equation*}\nin which $M_2$ is finitely generated, $\\iota_{22}$ is the natural inclusion, $\\ker(\\theta_2')$ is finite, ${\\rm cok}(\\theta_2')$ is finite of $2$-power order and the morphism $\\iota_{21}$ induces a short exact sequence\n\\[0\\to \\hat \\ZZ\\otimes_\\ZZ M_2 \\to \\prod_\\ell H^2(C_S(\\ell)) \\to M_2' \\to 0 \\]\nin which $M_2'$ is finite of $2$-power order. 
Then, since $\\hat \\ZZ$ is a flat $\\ZZ$-module, one has\n\\[ {\\rm Ext}^1_{G}(M_2',M_2) = \\hat\\ZZ\\otimes_\\ZZ{\\rm Ext}^1_{G}(M_2',M_2) = {\\rm Ext}^1_{\\hat \\ZZ[G]}(M_2',\\hat\\ZZ\\otimes_\\ZZ M_2)\\]\nand so there exists an exact commutative diagram of $G$-modules\n\\[ \\begin{CD}\n0 @> >> \\hat \\ZZ\\otimes_\\ZZ M_2 @> \\iota_{21} >> \\prod_\\ell H^2(C_S(\\ell)) @> >> M_2' @> >> 0\\\\\n& & @A AA @A\\iota_2 AA @\\vert\\\\\n0 @> >> M_2 @> >> M^2 @> >> M_2' @> >> 0\\end{CD}\\]\nin which the left hand vertical arrow is the natural inclusion and $M^2$ is finitely generated.\n\nIt is then clear that $\\iota_2$ induces an isomorphism $\\hat\\ZZ\\otimes_\\ZZ M^2 \\cong \\prod_\\ell H^2(C_S(\\ell))$ and that the diagram\n\\[ M^2 \\xleftarrow{\\iota_{21}} M_2 \\xrightarrow{\\theta_2'} \\theta_3(X_{\\ZZ}(A_F)) \\xleftarrow{\\theta_3} X_{\\ZZ}(A_F)\\]\nconstitutes a morphism in ${\\rm Mod}^\\ast(\\ZZ[G])$. This morphism is surjective, has finite kernel and lies in a commutative diagram in ${\\rm Mod}^\\ast(\\ZZ[G])$\n\\begin{equation}\\label{derived diag2} \\begin{CD} M^2 @> >> X_\\ZZ(A_F)\\\\\n @V \\iota_2 VV @VV V\\\\\n \\prod_{\\ell}H^2(C_S(\\ell)) @> >> \\hat\\ZZ\\otimes_\\ZZ X_\\ZZ(A_F)\\end{CD}\\end{equation}\nin which the right hand vertical arrow is the inclusion map and the lower horizontal arrow corresponds to the diagram (\\ref{derived diag}).\n\nThese observations show that we can apply \\cite[Lem. 3.8]{bkk} in the desired way in order to obtain a complex $C_S(\\mathcal{X})$ in $D^{\\rm perf}(\\ZZ[G])$ that has $H^j({\\rm SC}_{S}(A_{F\/k};\\mathcal{X})) = M^j$ for each $j$ in $\\{1,2,3\\}$ and satisfies all of the stated properties in claims (i)-(iv).\n\nTurning to claim (v) we note that the given conditions on the modules $\\mathcal{X}(v)$ imply that for each $v$ in $S\\cap S_k^f$ there is a direct sum decomposition of finite modules\n\\[ \\frac{A^t(F_v)}{\\mathcal{X}(v)} = \\frac{A^t(F_v)^\\wedge_{\\ell(v)}}{\\mathcal{X}(v)} \\oplus \\bigoplus_{\\ell \\not= \\ell(v)}A^t(F_v)^\\wedge_\\ell\\]\nand hence also a direct sum decomposition over all primes $\\ell$ of the form\n\\[ \\bigoplus_{v \\in S\\cap S_k^f} \\frac{A^t(F_v)}{\\mathcal{X}(v)} = \\bigoplus_\\ell \\left( \\bigoplus_{v \\in S_k^\\ell}\\frac{A^t(F_v)^\\wedge_{\\ell(v)}}{\\mathcal{X}(v)} \\oplus \\bigoplus_{v \\in (S\\cap S_k^f)\\setminus S_k^\\ell}A^t(F_v)^\\wedge_\\ell\\right).\\]\n\nThis shows that the kernel and cokernel of the map $\\Delta_{S,\\mathcal{X}}$ in claim (v) respectively coincide with the intersection over all primes $\\ell$ of the kernel and the direct sum over all primes $\\ell$ of the cokernel of the diagonal map\n\\[ A^t(F) \\to \\bigoplus_{v \\in S_k^\\ell}\\frac{A^t(F_v)^\\wedge_\\ell}{\\mathcal{X}(v)} \\oplus \\bigoplus_{v \\in (S\\cap S_k^f)\\setminus S_k^\\ell}A^t(F_v)^\\wedge_\\ell\\]\nthat occurs in the sequence (\\ref{useful1}) (with $p$ replaced by $\\ell$ and $X$ by $\\mathcal{X}(\\ell)$).\n\nGiven this fact, the exact sequence follows from the commutativity of the diagrams (\\ref{useful2 diagrams}) and (\\ref{derived diag2}) and the exactness of the sequence (\\ref{useful1}).\n\\end{proof}\n\n\\subsection{Global differentials and perfect Selmer structures}\\label{perf sel sect}\n\nWith a view to the subsequent formulation (in \\S\\ref{statement of conj section}) of our central conjecture we explain how a choice of global differentials gives rise to a natural perfect Selmer structure for $A$ and $F\/k$.\n\nIn the sequel we shall for a natural number $m$ write $[m]$ for the (ordered) set of 
integers $i$ that satisfy $1 \\le i\\le m$.\n\n\n\\subsubsection{}\\label{gamma section}For each $v$ in $S_k^\\RR$ we fix ordered $\\ZZ$-bases\n\\[ \\{\\gamma_{v,a}^+: a\\in [d]\\}\\]\nof $H_1((A^t)^{\\sigma_v}(\\CC),\\ZZ)^{c=1}$ and\n\\[ \\{\\gamma_{v,a}^-: a\\in [d]\\}\\]\nof $H_1((A^t)^{\\sigma_v}(\\CC),\\ZZ)^{c=-1}$, where $c$ denotes complex conjugation.\n\nFor each $v$ in $S_k^\\CC$ we fix an ordered $\\ZZ$-basis\n\\[ \\{\\gamma_{v,a}: a\\in [2d]\\}\\]\nof $H_1((A^t)^{\\sigma_v}(\\CC),\\ZZ)$.\n\n\nFor each $v$ in $S_k^\\infty$ we then fix $\\tau_v\\in G$ with $\\tau_v(\\sigma_v')=c\\circ\\sigma_v'$ and write $H_v(\\gamma_\\bullet)$ for the free $G$-module with basis\n\\begin{equation}\\label{gamma basis}\\begin{cases} \\{ (1+\\tau_v)\\sigma_v'\\otimes \\gamma^+_{v,a} + (1-\\tau_v)\\sigma_v'\\otimes \\gamma^-_{v,a}: a \\in [d]\\}, &\\text{ if $v$ is real}\\\\\n \\{\\sigma_v'\\otimes\\gamma_{v,a}:a \\in [2d]\\}, &\\text{ if $v$ is complex.}\\end{cases}\\end{equation}\n\nThe direct sum\n\\[ H_\\infty(\\gamma_\\bullet) := \\bigoplus_{v \\in S_k^\\infty}H_v(\\gamma_\\bullet)\\]\nis then a free $G$-submodule of $H_\\infty(A_{F\/k})$ of finite $2$-power index.\n\nTo specify an ordered $\\ZZ[G]$-basis of $H_\\infty(\\gamma_\\bullet)$ we fix an ordering of $S_k^\\infty$ and then order the union of the sets\n (\\ref{gamma basis}) lexicographically.\n\n\\subsubsection{}\\label{perf sel construct}We next fix a N\\'eron model $\\mathcal{A}^t$ for $A^t$ over $\\mathcal{O}_k$ and, for each non-archimedean place $v$ of $k$, a N\\'eron model $\\mathcal{A}_v^t$ for $A^t_{\/k_v}$ over $\\mathcal{O}_{k_v}$.\n\nFor any subfield $E$ of $k$ and any non-archimedean place $v$ of $E$ we set $\\mathcal{O}_{F,v}:=\\prod_{w'\\in S_F^v}\\mathcal{O}_{F_{w'}}$.\n\nFor each non-archimedean place $v$ of $k$ we then set\n\\begin{equation}\\label{mathcalD} \\mathcal{D}_F(\\mathcal{A}^t_v) := \\mathcal{O}_{F,v}\\otimes_{\\mathcal{O}_{k_v}}\\Hom_{\\mathcal{O}_{k_v}}(H^0(\\mathcal{A}_v^t,\\Omega^1_{\\mathcal{A}_v^t}), \\mathcal{O}_{k_v}).\\end{equation}\n\n\nWe finally fix an ordered $\\QQ[G]$-basis $\\omega_\\bullet$ of the space of invariant differentials\n\\[ H^0(A^t_F,\\Omega^1_{A^t_F}) \\cong F\\otimes_k H^0(A^t,\\Omega^1_{A^t})\\]\nand write $\\mathcal{F}(\\omega_\\bullet)$ for the $G$-module generated by the elements of $\\omega_\\bullet$. 
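We note that, since $\omega_\bullet$ is a $\QQ[G]$-basis, the module $\mathcal{F}(\omega_\bullet)$ is free over $\ZZ[G]$ on the set $\omega_\bullet$, so that\n\[ \mathcal{F}(\omega_\bullet) = \bigoplus_{\omega \in \omega_\bullet}\ZZ[G]\cdot \omega \cong \ZZ[G]^{d\cdot [k:\QQ]},\]\nwhere the rank computation follows from the normal basis theorem for $F\/k$.\n\n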
In the sequel we often identify $\\omega_\\bullet$ with its dual ordered $\\QQ[G]$-basis in $\\Hom_{F}(H^0(A^t_F,\\Omega^1_{A^t_F}),F)$ and $\\mathcal{F}(\\omega_\\bullet)$ with the $G$-module generated by this dual basis.\n\nIn the sequel, for any subfield $E$ of $k$ and any place $v$ in $S_E^f$ we set $F_v:=\\prod_{w'\\in S_F^v}F_{w'}$.\n\nFor each non-archimedean place $v$ of $k$ we write $\\mathcal{F}(\\omega_\\bullet)_v$ for the $\\ZZ_{\\ell(v)}$-closure of the image of $\\mathcal{F}(\\omega_\\bullet)$ in $F_{v}\\otimes_k\\Hom_{k}(H^0(A^t,\\Omega^1_{A^t}), k)$ and\n\\begin{equation*}\\label{classical exp} {\\rm exp}_{A^t,F_v}: F_v\\otimes_k \\Hom_k(H^0(A^t,\\Omega^1_{A^t}),k) \\cong \\Hom_{F_v}(H^0(A^t_{F_v},\\Omega^1_{A^t_{F_v}}),F_v) \\cong \\QQ_{\\ell(v)}\\cdot A^t(F_v)^\\wedge_{\\ell(v)}\\end{equation*}\nfor the exponential map of $A^t_{F_v}$ relative to some fixed $\\mathcal{O}_{k_v}$-basis of $H^0(\\mathcal{A}_v^t,\\Omega^1_{\\mathcal{A}_v^t})$.\n\nThen, if necessary after multiplying each element of $\\omega_\\bullet$ by a suitable natural number, we may, and will, assume that the following conditions are satisfied:\n\n\\begin{itemize}\n\\item[(i$_{\\omega_\\bullet}$)] for each $v$ in $S_k^f$ one has $\\mathcal{F}(\\omega_\\bullet)_{v}\\subseteq \\mathcal{D}_F(\\mathcal{A}_v^t)$;\n\\item[(ii$_{\\omega_\\bullet}$)] for each $v$ in $S\\cap S_k^f$, the map ${\\rm exp}_{A^t,F_v}$ induces an isomorphism of $\\mathcal{F}(\\omega_\\bullet)_{v}$ with a submodule of $A^t(F_v)$.\n\\end{itemize}\n\n\n\nWe then define $\\mathcal{X}=\\mathcal{X}_S(\\omega_\\bullet) = \\mathcal{X}_S(\\{\\mathcal{A}^t_v\\}_v,\\omega_\\bullet,\\gamma_\\bullet)$ to be the perfect Selmer structure for $A$, $F\/k$ and $S$ that has the following properties:\n\\begin{itemize}\n\\item[(i$_\\mathcal{X}$)] If $v\\in S_k^\\infty$, then $\\mathcal{X}(v) = H_v(\\gamma_\\bullet)$.\n\\item[(ii$_\\mathcal{X}$)] If $v \\in S\\cap S_k^f$, then $\\mathcal{X}(v) = {\\rm exp}_{A^t,F_v}(\\mathcal{F}(\\omega_\\bullet)_v)$.\n\\item[(iii$_\\mathcal{X}$)] If $v \\notin S$, then $\\mathcal{X}(v) = A^t(F_v)^\\wedge_{\\ell(v)}$.\n\\end{itemize}\n\n\\begin{remark}{\\em This specification does define a perfect Selmer structure for $A$ and $F\/k$ since if $v$ does not belong to $S$, then the $G$-module $A^t(F_v)^\\wedge_{\\ell(v)}$ is cohomologically-trivial (by Lemma \\ref{useful prel}(ii) below).} \\end{remark}\n\n\\begin{remark}\\label{can structure groups}{\\em The perfect Selmer structure $\\mathcal{X}_S(\\omega_\\bullet) = \\mathcal{X}_S(\\{\\mathcal{A}^t_v\\}_v,\\omega_\\bullet,\\gamma_\\bullet)$ defined above satisfies the conditions of Proposition \\ref{prop:perfect2}(v). 
As a consequence, if one ignores finite modules of $2$-power order, then the cohomology modules of the Selmer complex\n$C_S(\\mathcal{X}(\\omega_\\bullet)) = {\\rm SC}_{S}(A_{F\/k};\\mathcal{X}_S(\\omega_\\bullet))$ can be described as follows: \\\n\n\\noindent{} - $H^1(C_S(\\mathcal{X}(\\omega_\\bullet)))$ is the submodule of $A^t(F)$ comprising all elements $x$ with the property that, for each $v$ in $S$, the image of $x$ in $A^t(F_v)$ belongs to the subgroup ${\\rm exp}_{A^t,F_v}(\\mathcal{F}(\\omega_\\bullet)_v)$.\n\n\\noindent{} - $H^2(C_S(\\mathcal{X}(\\omega_\\bullet)))$ is an extension of the integral Selmer group $X_\\ZZ(A_F)$ by the (finite) cokernel of the diagonal map $A^t(F) \\to \\bigoplus_{v \\in S}\\bigl(A^t(F_v)\/{\\rm exp}_{A^t,F_v}(\\mathcal{F}(\\omega_\\bullet)_v)\\bigr)$.\n\n\\noindent{} - $H^3(C_S(\\mathcal{X}(\\omega_\\bullet)))$ is equal to $(A(F)_{\\rm tor})^\\vee$.}\\end{remark}\n\n\n\n\n\n\n\\section{The refined Birch and Swinnerton-Dyer Conjecture}\\label{ref bsd section}\n\nIn this section we formulate (as Conjecture \\ref{conj:ebsd}) a precise refinement of the Birch and Swinnerton-Dyer Conjecture.\n\n\\subsection{Relative $K$-theory} For the reader's convenience, we first quickly review some relevant facts of algebraic $K$-theory.\n\n\\subsubsection{}\\label{Relative $K$-theory}\n\nFor a Dedekind domain $R$ with field of fractions $F$, an $R$-order $\\mathfrak{A}$ in a finite dimensional separable $F$-algebra $A$ and a field extension $E$ of $F$ we set $A_E := E\\otimes_F A$.\n\nThe relative algebraic $K_0$-group $K_0(\\mathfrak{A},A_E)$ of the ring inclusion $\\mathfrak{A}\\subset A_E$ is described explicitly in terms of generators and relations by Swan in \\cite[p. 215]{swan}.\n\nFor any extension field $E'$ of $E$ there exists a canonical commutative diagram\n\\begin{equation} \\label{E:kcomm}\n\\begin{CD} K_1(\\mathfrak{A}) @> >> K_1(A_{E'}) @> \\partial_{\\mathfrak{A},A_{E'}} >> K_0(\\mathfrak{A},A_{E'}) @> \\partial'_{\\mathfrak{A},A_{E'}} >> K_0(\\mathfrak{A})\\\\\n@\\vert @A\\iota AA @A\\iota' AA @\\vert\\\\\nK_1(\\mathfrak{A}) @> >> K_1(A_E) @> \\partial_{\\mathfrak{A},A_E} >> K_0(\\mathfrak{A},A_E) @> \\partial'_{\\mathfrak{A},A_E} >> K_0(\\mathfrak{A})\n\\end{CD}\n\\end{equation}\nin which the upper and lower rows are the respective long exact sequences in relative $K$-theory of the inclusions $\\mathfrak{A}\\subset A_E$ and $\\mathfrak{A}\\subset A_{E'}$ and both of the vertical arrows are injective and induced by the inclusion $A_E \\subseteq A_{E'}$. (For more details see \\cite[Th. 
15.5]{swan}.)\n\n\nIn particular, if $R = \ZZ$ and for each prime $\ell$ we set $\mathfrak{A}_\ell := \ZZ_\ell\otimes_\ZZ \mathfrak{A}$ and $A_\ell:=\n\QQ_\ell\otimes _\QQ A$, then we can regard each group $K_0(\mathfrak{A}_\ell,A_\ell)$ as a subgroup of $K_0(\mathfrak{A},A)$ by means of the canonical composite homomorphism\n\begin{equation}\label{decomp}\n\bigoplus_\ell K_0(\mathfrak{A}_\ell,A_\ell) \cong K_0(\mathfrak{A},A)\subset K_0(\mathfrak{A},A_\RR),\n\end{equation}\nwhere $\ell$ runs over all primes, the isomorphism is as described in the discussion following \cite[(49.12)]{curtisr} and the inclusion is induced by the relevant case of $\iota'$.\n\nFor an element $x$ of $K_0(\mathfrak{A},A)$ we write $(x_\ell)_\ell$ for its image in $\bigoplus_\ell K_0(\mathfrak{A}_\ell,A_\ell)$ under the isomorphism in (\ref{decomp}).\n\nThen, if $G$ is a finite group and $E$ is a field of characteristic zero, taking reduced norms over the semisimple algebra $E[G]$ induces (as per the discussion in \cite[\S 45A]{curtisr}) an injective homomorphism\n\[ {\rm Nrd}_{E[G]}: K_1(E[G]) \to \zeta(E[G])^\times. \]\nThis homomorphism is bijective if $E$ is either algebraically closed or a non-archimedean local field.\n\n\n\n\subsubsection{}\label{nad sec} We shall also use a description of $K_0(\mathfrak{A},A_E)$ in terms of the formalism of `non-abelian determinants' that is given by Fukaya and Kato in \cite[\S1]{fukaya-kato}.\n\nWe recall, in particular, that any pair comprising an object $C$ of $D^{\rm perf}(\mathfrak{A})$ and a morphism of non-abelian determinants $\theta: {\rm Det}_{A_E}(E\otimes_R C) \to {\rm Det}_{A_E}(0)$ gives rise to a canonical element of $K_0(\mathfrak{A},A_E)$ that we shall denote by $\chi_\mathfrak{A}(C,\theta)$.\n\nIf $E\otimes_RC$ is acyclic, then one obtains in this way a canonical element $\chi_\mathfrak{A}(C,0)$ of $K_0(\mathfrak{A},A_E)$.\n\nMore generally, if $E\otimes_RC$ is acyclic outside of degrees $a$ and $a+1$ for any integer $a$, then a choice of isomorphism of $A_E$-modules $h: E\otimes_RH^a(C) \cong E\otimes_RH^{a+1}(C)$ gives rise to a morphism $h^{\rm det}: {\rm Det}_{A_E}(E\otimes_R C) \to {\rm Det}_{A_E}(0)$ of non-abelian determinants and we set\n\[ \chi_\mathfrak{A}(C,h) := \chi_\mathfrak{A}(C,h^{\rm det}).\]\n\nWe recall the following general result concerning these elements (which follows directly from \cite[Lem. 1.3.4]{fukaya-kato}) since it will be used often in the sequel.\n\n\begin{lemma}\label{fk lemma} Let $C_1 \to C_2 \to C_3 \to C_1[1]$ be an exact triangle in $D^{\rm perf}(\mathfrak{A})$ that satisfies the following two conditions:\n\begin{itemize}\n\item[(i)] there exists an integer $a$ such that each $C_i$ is acyclic outside degrees $a$ and $a+1$;\n\item[(ii)] there exists an exact commutative diagram of $A_E$-modules\n\[\begin{CD}\n0 @> >> E\otimes_R H^a(C_1) @> >> E\otimes_R H^a(C_2) @> >> E\otimes_R H^a(C_3) @> >> 0\\\n@. 
@V h_1VV @V h_2VV @V h_3VV \\\n0 @> >> E\otimes_R H^{a+1}(C_1) @> >> E\otimes_R H^{a+1}(C_2) @> >> E\otimes_R H^{a+1}(C_3) @> >> 0\end{CD}\]\nin which each row is induced by the long exact cohomology sequence of the given exact triangle and each map $h_i$ is bijective.\n\end{itemize}\n\nThen in $K_0(\mathfrak{A},A_E)$ one has $\chi_\mathfrak{A}(C_2,h_2) = \chi_\mathfrak{A}(C_1,h_1) + \chi_\mathfrak{A}(C_3,h_3)$.\n\end{lemma}\n\n\n\n\begin{remark}\label{comparingdets}{\em If $\mathfrak{A}$ is commutative, then $K_0(\mathfrak{A},A_E)$ identifies with the multiplicative group of invertible $\mathfrak{A}$-submodules of $A_E$. If, in this case, $C$ is acyclic outside degrees one and two, then for any isomorphism of $A_E$-modules $h: E\otimes_RH^1(C)\to E\otimes_RH^2(C)$ one finds that the element $\chi_{\mathfrak{A}}(C,h)$ defined above corresponds under this identification to the inverse of the ideal $\vartheta_{h}({\rm Det}_{\mathfrak{A}}(C))$ that is defined in \cite[Def. 3.1]{bst}.}\end{remark}\n\nFor convenience, we shall often abbreviate the notations $\chi_{\ZZ[G]}(C,h)$ and $\chi_{\ZZ_p[G]}(C,h)$ to $\chi_G(C,h)$ and $\chi_{G,p}(C,h)$ respectively.\n\nWhen the field $E$ is clear from context, we also write $\partial_{G}$, $\partial'_{G}$, $\partial_{G,p}$ and $\partial'_{G,p}$ in place of $\partial_{\ZZ[G],E[G]}$, $\partial'_{\ZZ[G],E[G]}$, $\partial_{\ZZ_p[G],E[G]}$ and $\partial'_{\ZZ_p[G],E[G]}$ respectively.\n\n\n\n\n\n\n\n\subsection{Statement of the conjecture}\label{statement of conj section}\n\nIn the sequel we fix a finite set of places $S$ of $k$ as in \S\ref{selmer section}. We also fix cycles $\gamma_\bullet$ and differentials $\omega_\bullet$ as in \S\ref{perf sel sect}.\n\n\n\subsubsection{}\n\nWe write $\Omega_{\omega_\bullet}(A_{F\/k})$ for the element of $K_1(\RR[G])$ that is represented by the matrix of the canonical `period' isomorphism of $\RR[G]$-modules\n\begin{multline*} \RR\otimes_\ZZ H_\infty(\gamma_\bullet) = \RR\otimes_\ZZ \bigoplus_{v \in S_k^\infty}H^0(k_v,Y_{v,F}\otimes_{\ZZ}H_1((A^t)^{\sigma_v}(\CC),\ZZ))\\\n \cong \RR\otimes_{\QQ} \Hom_F(H^0(A_F^t,\Omega^1_{A_F^t}),F),\end{multline*}\nwith respect to the ordered $\ZZ[G]$-basis of $H_\infty(\gamma_\bullet)$ specified in \S\ref{gamma section} and the ordered $\QQ[G]$-basis $\omega_\bullet$ of $ \Hom_F(H^0(A_F^t,\Omega^1_{A_F^t}),F)$.\n\nThis element $\Omega_{\omega_\bullet}(A_{F\/k})$ constitutes a natural `$K$-theoretical period' and can be explicitly computed in terms of the classical periods that are associated to $A$ (see Lemma \ref{k-theory period} below).\n\n\nTo take account of the local behaviour of the differentials $\omega_\bullet$ we define a $G$-module\n\[ \mathcal{Q}(\omega_\bullet)_S := \bigoplus_{v \notin S} \mathcal{D}_F(\mathcal{A}_v^t)\/\mathcal{F}(\omega_\bullet)_v,\]\nwhere $v$ runs over all places of $k$ that do not belong to $S$.\n\nIt is easily seen that almost all terms in this direct sum vanish and hence that $\mathcal{Q}(\omega_\bullet)_S$ is finite. 
This $G$-module is also cohomologically-trivial since $\\mathcal{D}_F(\\mathcal{A}_v^t)$ and $\\mathcal{F}(\\omega_\\bullet)_v$ are both free $\\ZZ_{\\ell(v)}[G]$-modules for each $v$ outside $S$.\n\nWe can therefore define an object of $D^{\\rm perf}(\\ZZ[G])$ by setting\n\\[ {\\rm SC}_{S,\\omega_\\bullet}(A_{F\/k}) := {\\rm SC}_S(A_{F\/k},\\mathcal{X}_S(\\omega_\\bullet)) \\oplus \\mathcal{Q}(\\omega_\\bullet)_S[0],\\]\nwhere we abbreviate the perfect Selmer structure $\\mathcal{X}_S(\\{\\mathcal{A}^t_v\\}_v,\\omega_\\bullet,\\gamma_\\bullet)$ defined by the conditions (i$_\\mathcal{X}$), (ii$_\\mathcal{X}$) and (iii$_\\mathcal{X}$) in \\S\\ref{perf sel sect} to $\\mathcal{X}_S(\\omega_\\bullet)$.\n\nWe next write\n \\[\nh_{A,F}: A(F)\\times A^t(F) \\to \\RR\n\\]\nfor the classical N\\'eron-Tate height-pairing for $A$ over $F$.\n\nThis pairing is non-degenerate and hence, assuming $\\sha(A_{F})$ to be finite, it combines with the properties of the Selmer complex\n${\\rm SC}_{S}(A_{F\/k},\\mathcal{X}(\\omega_\\bullet))$ established in Proposition \\ref{prop:perfect}(ii) to induce a canonical isomorphism of $\\RR[G]$-modules\n\\begin{multline*} \\label{height triv}\nh_{A,F}': \\RR\\otimes_\\ZZ H^1({\\rm SC}_{S,\\omega_\\bullet}(A_{F\/k})) = \\RR\\otimes_\\ZZ A^t(F)\\\\ \\cong \\RR\\otimes_\\ZZ\\Hom_\\ZZ(A(F),\\ZZ) = \\RR\\otimes_\\ZZ H^2({\\rm SC}_{S,\\omega_\\bullet}(A_{F\/k})).\\end{multline*}\nThis isomorphism then gives rise via the formalism recalled in \\S\\ref{nad sec} to a canonical element\n\\[ \\chi_{G}({\\rm SC}_{S,\\omega_\\bullet}(A_{F\/k}),h_{A,F}) := \\chi_{G}({\\rm SC}_{S,\\omega_\\bullet}(A_{F\/k}),h'_{A,F})\\]\nof the relative algebraic $K$-group $K_0(\\ZZ[G],\\RR[G])$.\n\nOur conjecture will predict an explicit formula for this element in terms of Hasse-Weil-Artin $L$-series.\n\n\n\n\\subsubsection{}For every prime $\\ell$ the reduced norm maps ${\\rm Nrd}_{\\QQ_\\ell[G]}$ and ${\\rm Nrd}_{\\CC_\\ell[G]}$ discussed in \\S\\ref{Relative $K$-theory} are bijective and so there exists a composite homomorphism\n\\begin{equation}\\label{G,O hom} \\delta_{G,\\ell}: \\zeta(\\CC_\\ell[G])^\\times \\to K_1(\\CC_\\ell[G]) \\xrightarrow{\\partial_{\\ZZ_\\ell[G],\\CC_\\ell[G]}}\nK_0(\\ZZ_\\ell[G],\\CC_\\ell[G]) \\end{equation}\nin which the first map is the inverse of ${\\rm Nrd}_{\\CC_\\ell[G]}$. This homomorphism maps $\\zeta(\\QQ_\\ell[G])^\\times$ to the subgroup $K_0(\\ZZ_\\ell[G],\\QQ_\\ell[G])$ of $K_0(\\ZZ[G],\\QQ[G])$.\n\nIf now $v$ is any place of $k$ that does not belong to $S$, then $v$ is unramified in $F\/k$ and so the finite $G$-modules $$\\kappa_{F_v}:=\\prod_{w'\\in S_F^v}\\kappa_{F_{w'}}\\,\\,\\,\\, \\text{ and } \\,\\,\\,\\,\\tilde A^t_v(\\kappa_{F_v}):=\\prod_{w'\\in S_F^v}\\tilde A^t(\\kappa_{F_{w'}})$$ are both cohomologically-trivial by Lemma \\ref{useful prel}(i) below. 
Here for any place $w'$ in $S_F^v$, $\\tilde A^t$ denotes the reduction of $A^t_{\/F_{w'}}$ to $\\kappa_{F_{w'}}$.\n\nFor any such $v$ we may therefore define an element of the subgroup $K_0(\\ZZ_{\\ell(v)}[G],\\QQ_{\\ell(v)}[G])$ of $K_0(\\ZZ[G],\\QQ[G])$ by setting\n\\begin{equation}\\label{localFM} \\mu_{v}(A_{F\/k}) := \\chi_{G,\\ell(v)}\\bigl(\\kappa_{F_v}^d[0]\\oplus\\tilde A^t_v(\\kappa_{F_v})_{\\ell(v)}[-1],0\\bigr)-\\delta_{G,\\ell(v)}(L_v(A,F\/k))\\end{equation}\nwhere $L_v(A,F\/k)$ is the element of $\\zeta(\\QQ[G])^\\times$ that is equal to the value at $z=1$ of the $\\zeta(\\CC[G])$-valued $L$-factor at $v$ of the motive $h^1(A_{F})(1)$, regarded as defined over $k$ and with coefficients $\\QQ[G]$, as discussed in \\cite[\\S4.1]{bufl99}.\n\nThe sum\n\\[ \\mu_{S}(A_{F\/k}) := \\sum_{v\\notin S}\\mu_{v}(A_{F\/k})\\]\nwill play an important role in our conjecture.\n\nWe shall refer to this sum as the `Fontaine-Messing correction term' for the data $A, F\/k$ and $S$ since, independently of any conjecture, the theory developed by Fontaine and Messing in \\cite{fm} implies that $\\mu_{v}(A_{F\/k})$ vanishes for all but finitely many $v$ and hence that $\\mu_S(A_{F\/k})$ is a well-defined element of $K_0(\\ZZ[G],\\QQ[G])$. (For details see Lemma \\ref{fm} below).\n\n\\subsubsection{}We write $\\widehat{G}$ for the set of irreducible complex characters of $G$. In the sequel, for each $\\psi$ in $\\widehat{G}$ we fix a $\\CC[G]$-module $V_\\psi$ of character $\\psi$.\n\nWe recall that a character $\\psi$ in $\\widehat{G}$ is said to be `symplectic' if the subfield of\n$\\bc$ that is generated by the values of $\\psi$ is totally real\nand $\\End_{\\br [G]}(V_\\psi)$ is isomorphic to the division ring\nof real Quaternions. We write $\\widehat{G}^{\\rm s}$ for the subset of\n$\\widehat{G}$ comprising such characters.\n\nFor each $\\psi$ in $\\widehat{G}$ we write $\\check\\psi$ for its contragredient character and\n\\[ e_\\psi:=\\frac{\\psi(1)}{|G|}\\sum_{g\\in G}\\psi(g^{-1})g\\]\nfor the associated central primitive idempotents of $\\CC[G]$.\n\nThese idempotents induce an identification of $\\zeta(\\CC[G])$ with $\\prod_{\\widehat{G}}\\CC$ and we write $x = (x_\\psi)_\\psi$ for the corresponding decomposition of each element $x$ of $\\zeta(\\CC[G])$.\n\nFor each $\\psi$ in $\\widehat{G}$ we write $L_{S}(A,\\psi,z)$ for the Hasse-Weil-Artin $L$-series of $A$ and $\\psi$, truncated by removing the Euler factors corresponding to places in $S$.\n\nWe can now state the central conjecture of this article.\n\n\\begin{conjecture}\\label{conj:ebsd}\nThe following claims are valid.\n\\begin{itemize}\n\\item[(i)] The group $\\sha(A_F)$ is finite.\n\\item[(ii)] For all $\\psi$ in $\\widehat{G}$ the function $L(A,\\psi,z)$ has an analytic continuation to $z=1$ where it has a zero of order\n $\\psi(1)^{-1}\\cdot {\\rm dim}_{\\CC}(e_\\psi(\\CC\\otimes_\\ZZ A^t(F)))$.\n\\item[(iii)] For all $\\psi$ in $\\widehat{G}^{\\rm s}$ the leading coefficient $L^*_S(A,\\psi,1)$ at $z=1$ of the function $L_S(A,\\psi,z)$ is a strictly positive real number. 
In particular, there exists a unique element $L^*_{S}(A_{F\/k},1)$ of $K_1(\\RR[G])$ with\n\\[ {\\rm Nrd}_{\\RR[G]}(L_S^*(A_{F\/k},1))_\\psi = L_S^*(A,\\check\\psi,1)\\]\nfor all $\\psi$ in $\\widehat{G}$.\n\\item[(iv)] In $K_0(\\ZZ[G],\\RR[G])$ one has\n\\[ \\partial_G\\left(\\frac{L_S^*(A_{F\/k},1)}{\\Omega_{\\omega_\\bullet}(A_{F\/k})}\\right) = \\chi_G({\\rm SC}_{S,\\omega_\\bullet}(A_{F\/k}),h_{A,F}) + \\mu_{S}(A_{F\/k}).\\]\n\\end{itemize}\n\\end{conjecture}\n\nIn the sequel we shall refer to this conjecture as the `Birch and Swinnerton-Dyer Conjecture for the pair $(A,F\/k)$' and abbreviate it to ${\\rm BSD}(A_{F\/k})$.\n\n\n\n\\begin{remark}{\\em The assertion of ${\\rm BSD}(A_{F\/k})$(i) is the celebrated Shafarevich-Tate conjecture. The quantity $\\psi(1)^{-1}\\cdot {\\rm dim}_{\\CC}(e_\\psi(\\CC\\otimes_\\ZZ A^t(F)))$ is equal to the multiplicity with which the character $\\psi$ occurs in the rational representation $\\QQ\\otimes_\\ZZ A^t(F)$ of $G$ (and hence to the right hand side of the equality (\\ref{dg equality})) and so the assertion of ${\\rm BSD}(A_{F\/k})$(ii) coincides with a conjecture of Deligne and Gross (cf. \\cite[p. 127]{rohrlich}).}\\end{remark}\n\n\\begin{remark}{\\em Write $\\tau$ for complex conjugation. Then, by the Hasse-Schilling-Maass Norm Theorem (cf. \\cite[(7.48)]{curtisr}), the image of ${\\rm Nrd}_{\\RR[G]}$ is the subset of $\\prod_{\\widehat{G}}\\CC^\\times$ comprising $x$ with the property that $x_{\\psi^\\tau} = \\tau(x_\\psi)$ for all $\\psi$ in $\\widehat{G}$ and also that $x_\\psi$ is a strictly positive real number for all $\\psi$ in $\\widehat{G}^{\\rm s}$. This means that the second assertion of ${\\rm BSD}(A_{F\/k})$(iii) follows immediately from the first assertion, the injectivity of ${\\rm Nrd}_{\\RR[G]}$ and the fact that $L_S^*(A,\\psi^\\tau,1) = \\tau(L_S^*(A,\\psi,1))$ for all $\\psi$ in $\\widehat{G}$.\n\nThe first assertion of ${\\rm BSD}(A_{F\/k})$(iii) is itself motivated by the fact that if $\\psi$ belongs to $\\widehat{G}^{\\rm s}$, and $[\\psi]$ denotes the associated Artin motive over $k$, then one can show that $L^*_S(A,\\psi,1)$ is a strictly positive real number whenever the motive $h^1(A)\\otimes [\\psi]$ validates the `Generalized Riemann Hypothesis' discussed by Deninger in \\cite[(7.5)]{den}. However, since this fact does not itself provide any more evidence for ${\\rm BSD}(A_{F\/k})$(iii) we omit the details.} \\end{remark}\n\n\\begin{remark}\\label{weaker BSD}{\\em It is possible to formulate a version of ${\\rm BSD}(A_{F\/k})$ that omits claim (iii) and hence avoids any possible reliance on the validity of the Generalized Riemann Hypothesis. To do this we recall that the argument of \\cite[\\S4.2, Lem. 9]{bufl99} constructs a canonical `extended boundary homomorphism' of relative $K$-theory $\\delta_G: \\zeta(\\RR[G])^\\times \\to K_0(\\ZZ[G],\\RR[G])$ that lies in a commutative diagram\n\n\\[ \\xymatrix{\nK_1(\\RR[G]) \\ar@{^{(}->}[d]^{{\\rm Nrd}_{\\RR[G]}} \\ar[rr]^{\\hskip -0.2truein\\partial_G} & & K_0(\\ZZ[G],\\RR[G])\\\\\n\\zeta(\\RR[G])^\\times . 
\\ar[urr]^{\\delta_G}}\\]\n\nHence, to obtain a version of the conjecture that omits claim (iii) one need only replace the term on the left hand side of the equality in claim (iv) by the difference\n\\[ \\delta_G\\bigl(\\calL_S^*(A_{F\/k},1)\\bigr) - \\partial_G\\bigl(\\Omega_{\\omega_\\bullet}(A_{F\/k})\\bigr)\\]\nwhere $\\calL_S^*(A_{F\/k},1)$ denotes the element of $\\zeta(\\RR[G])^\\times$ with $\\calL_S^*(A_{F\/k},1)_\\psi = L_S^*(A,\\check\\psi,1)$ for all $\\psi$ in $\\widehat{G}$.}\\end{remark}\n\n\n\\begin{remark}\\label{rbsd etnc rem}{\\em The approach developed by Wuthrich and the present authors in \\cite[\\S4]{bmw} can be extended to show that\n the weaker version of ${\\rm BSD}(A_{F\/k})$ discussed in the last remark is equivalent to the validity of the equivariant Tamagawa number conjecture for the pair $(h^1(A_F)(1),\\ZZ[G])$, as formulated in \\cite[Conj. 4]{bufl99} (for details see Appendix A). Taken in conjunction with the results of Venjakob and the first author in \\cite{BV2}, this observation implies that the study of ${\\rm BSD}(A_{F\/k})$ and its consequences is relevant if one wishes to properly understand the content of the main conjecture of non-commutative Iwasawa theory, as formulated by Coates et al in \\cite{cfksv}.}\\end{remark}\n\n\\begin{remark}\\label{cons1}{\\em If, for each prime $\\ell$, we fix an isomorphism of fields $\\CC\\cong \\CC_\\ell$, then the exactness of the lower row in (\\ref{E:kcomm}) with $\\mathfrak{A} = \\ZZ_\\ell[G]$ and $A_E = \\CC_\\ell[G]$ implies that the equality in ${\\rm BSD}(A_{F\/k})$(iv) determines the image of $(L^*_S(A,\\psi, 1))_{\\psi\\in \\widehat{G}}$ in $\\zeta(\\CC_\\ell[G])^\\times$ modulo the image under the reduced norm map ${\\rm Nrd}_{\\QQ_\\ell[G]}$ of $K_1(\\ZZ_\\ell[G])$. 
In view of the explicit description of the latter image that is obtained by Kakde in \\cite{kakde} (or, equivalently, by the methods of Ritter and Weiss in \\cite{rw}), this means ${\\rm BSD}(A_{F\/k})$(iv) implicitly incorporates families of congruence relations between the leading coefficients $L^*_S(A,\\psi, 1)$ for varying $\\psi$ in $\\widehat{G}$.}\\end{remark}\n\n\n\n\n\\begin{remark}\\label{consistency remark}{\\em The formulation of ${\\rm BSD}(A_{F\/k})$ is consistent in the following respects.\n\\begin{itemize}\n\\item[(i)] Its validity is independent of the choices of set $S$ and ordered $\\QQ[G]$-basis $\\omega_\\bullet$.\n\n\\item[(ii)] Its validity for the pair $(A,F\/k)$ implies its validity for $(A_E,F\/E)$ for any intermediate field $E$ of $F\/k$ and for $(A,E\/k)$ for any such $E$ that is Galois over $k$.\n\n\\item[(iii)] Its validity for the pair $(A,k\/k)$ is equivalent, up to sign, to the Birch and Swinnerton-Dyer Conjecture for $A$ over $k$.\n\\end{itemize}\nEach of these statements can be proven directly but also follows from the observation in Remark \\ref{rbsd etnc rem} (see Remark \\ref{consistency} for more details).}\\end{remark}\n\n\n\\begin{remark}{\\em A natural analogue of ${\\rm BSD}(A_{F\/k})$ has been formulated, and in some important cases proved, in the setting of abelian varieties over global function fields by Kakde, Kim and the first author in \\cite{bkk}.} \\end{remark}\n\n\n\n\n\n\n\nMotivated at least in part by Remark \\ref{rbsd etnc rem}, our main aim in the rest of this article will be to describe, and in important special cases provide evidence for, a range of explicit consequences that would follow from the validity of ${\\rm BSD}(A_{F\/k})$.\n\n\\subsection{$p$-components}\\label{pro-p sect} To end this section we show that the equality in ${\\rm BSD}(A_{F\/k})$(iv) can be checked by considering separately `$p$-primary components' for each prime $p$.\n\nFor each prime $p$ and each isomorphism of fields $j: \\CC\\cong \\CC_p$, the inclusion $\\RR \\subset \\CC$ combines with the functoriality of $K$-theory to induce a homomorphism\n\\[ K_1(\\RR[G]) \\to K_1(\\CC_p[G])\\]\nand also pairs with the inclusion $\\ZZ \\to \\ZZ_p$ to induce a homomorphism\n\\[ K_0(\\ZZ[G],\\RR[G]) \\to K_0(\\ZZ_p[G],\\CC_p[G]).\\]\nIn the sequel we shall, for convenience, use $j_\\ast$ to denote both of these homomorphisms as well as the inclusion $\\zeta(\\RR[G])^\\times \\to \\zeta(\\CC_p[G])^\\times$ and isomorphism $\\zeta(\\CC[G])^\\times\\cong \\zeta(\\CC_p[G])^\\times$ that are induced by the action of $j$ on coefficients.\n\n\n\\begin{lemma}\\label{pro-p lemma} Fix $\\omega_\\bullet$ and $S$ as in ${\\rm BSD}(A_{F\/k})$. 
Then, to prove the equality in ${\\rm BSD}(A_{F\/k})$(iv) it suffices to prove, for every prime $p$ and every isomorphism of fields $j:\\CC\\cong \\CC_p$, that\n\\begin{multline}\\label{displayed pj} \\partial_{G,p}\\left(\\frac{j_*(L_S^*(A_{F\/k},1))}{j_*(\\Omega_{\\omega_\\bullet}(A_{F\/k}))}\\right) = \\chi_{G,p}({\\rm SC}_S(A_{F\/k},\\mathcal{X}(p),\\mathcal{X}(\\infty)_p),h^j_{A,F})\\\\ +\\chi_{G,p}( \\mathcal{Q}(\\omega_\\bullet)_{S,p} [0],0) + \\mu_{S}(A_{F\/k})_p,\\end{multline}\nwhere we write $\\mathcal{X}$ for $\\mathcal{X}_S(\\omega_\\bullet)$ and $h^{j}_{A,F}$ for $\\CC_p\\otimes_{\\RR,j}h^{{\\rm det}}_{A,F}$\n\\end{lemma}\n\n\\begin{proof} We consider the diagonal homomorphism of abelian groups\n\\begin{equation}\\label{local iso} K_0(\\ZZ[G],\\RR[G]) \\xrightarrow{(\\prod j_*)_p} \\prod_p\\left(\\prod_{j: \\CC\\cong \\CC_p} K_0(\\ZZ_p[G],\\CC_p[G])\\right),\\end{equation}\nwhere the products run over all primes $p$ and all choices of isomorphism $j$.\n\nThe key fact that we shall use is that this map is injective. This fact is certainly well-known but, given its importance, we shall, for completeness, prove it.\n\nWe consider the exact sequences that are given by the lower row of (\\ref{E:kcomm}) with $\\mathfrak{A}= R[G]$ and $A_E = E[G]$ for the\npairs $(R,E)=(\\ZZ,\\QQ)$, $(\\ZZ,\\RR)$, $(\\ZZ_p,\\QQ_p)$ and\n$(\\ZZ_p,\\CC_p)$ and the maps between these sequences which are\ninduced by the obvious inclusions and by an embedding\n$j:\\RR\\to\\CC_p$.\n\nBy an easy diagram chase one obtains a\ncommutative diagram of short exact sequences\n\\begin{equation*}\n\\xymatrix{\n0 \\ar[r] & K_0(\\ZZ[G],\\QQ[G]) \\ar[r] \\ar[d] & K_0(\\ZZ[G],\\RR[G]) \\ar[r] \\ar[d] &\nK_1(\\RR[G])\/K_1(\\QQ[G]) \\ar[r] \\ar[d] & 0 \\\\\n0 \\ar[r] & K_0(\\ZZ_p[G],\\QQ_p[G]) \\ar[r] & K_0(\\ZZ_p[G],\\CC_p[G]) \\ar[r] &\nK_1(\\CC_p[G])\/K_1(\\QQ_p[G]) \\ar[r] & 0.\n}\n\\end{equation*}\nTherefore it suffices to show that the maps\n\\begin{equation}\n\\label{equation_K_injectivity_left}\nK_0(\\ZZ[G],\\QQ[G])\\to\\prod_{p,j} K_0(\\ZZ_p[G],\\QQ_p[G])\n\\end{equation}\nand\n\\begin{equation}\n\\label{equation_K_injectivity_right}\nK_1(\\RR[G])\/K_1(\\QQ[G])\\to\\prod_{p,j}\nK_1(\\CC_p[G])\/K_1(\\QQ_p[G])\n\\end{equation}\nare injective. The injectivity of (\\ref{equation_K_injectivity_left})\nfollows immediately from the relevant case of the isomorphism in (\\ref{decomp}).\n\nLet $x\\in K_1(\\RR[G])$ be such that for all $p$ and all $j$ one has\n\\[ j_*(x)\\in K_1(\\QQ_p[G])\\subseteq K_1(\\CC_p[G]).\\]\n\nWe now use the (injective) maps ${\\rm Nrd}_{\\RR[G]}$ and ${\\rm Nrd}_{\\QQ[G]}$ to identify $K_1(\\RR[G])$ and $K_1(\\QQ[G])$ with $\\im({\\rm Nrd}_{\\RR[G]})$ and $\\im({\\rm Nrd}_{\\QQ[G]})$ respectively.\n\nThen, $x=\\sum_{g\\in G} c_gg$ is an element of $\\im({\\rm Nrd}_{\\RR[G]})$ such that\n\\begin{equation}\n\\label{equation_lemma_K_injectivity}\nj_*(x)=\\sum_{g\\in G} j(c_g)g\\in\\zeta(\\QQ_p[G])^\\times.\n\\end{equation}\n\nWe claim that $\\sum_{g\\in G} c_gg\\in\\QQ[G]$. Let $g\\in G$ and consider the\ncoefficient $c_g$.\n\nIf, firstly, $c_g$ was transcendental over $\\QQ$, then\nthere would be an embedding $j:\\RR\\to\\CC_p$ such that\n$j(c_g)\\not\\in\\QQ_p$, thereby contradicting\n(\\ref{equation_lemma_K_injectivity}).\n\nTherefore $c_g$ is algebraic\nover $\\QQ$. 
Now $j(c_g)\\in\\QQ_p$ for all $p$ and embeddings\n$j$ implies that all primes are completely split in the\nnumber field $\\QQ(c_g)$ and therefore $\\QQ(c_g)=\\QQ$.\n\nHence $x$ belongs to $\\im({\\rm Nrd}_{\\RR[G]})\\cap\\QQ[G]$ which, by the Hasse-Schilling-Maass Norm Theorem, is equal to $\\im({\\rm Nrd}_{\\QQ[G]})$.\n\nThis shows the injectivity of (\\ref{equation_K_injectivity_right}) and hence also of the map (\\ref{local iso}).\nThe injectivity of (\\ref{local iso}) in turn implies that the equality of ${\\rm BSD}(A_{F\/k})$(iv) is valid if and only if its image under each maps $j_*$ is valid.\n\nSet $\\mathcal{X} := \\mathcal{X}_S(\\omega_\\bullet)$. Then\n\\begin{multline*} j_*(\\chi_{G}({\\rm SC}_{S,\\omega_\\bullet}(A_{F\/k}),h_{A,F})) = \\chi_{G,p}(\\ZZ_p\\otimes_\\ZZ {\\rm SC}_{S,\\omega_\\bullet}(A_{F\/k}), \\CC_p\\otimes_{\\RR,j}h_{A,F}))\\\\\n=\\chi_{G,p}({\\rm SC}_S(A_{F\/k},\\mathcal{X}(p),\\mathcal{X}(\\infty)_p),h^j_{A,F}) +\\chi_{G,p}( \\mathcal{Q}(\\omega_\\bullet)_{S,p}[0],0),\\end{multline*}\nwhere the first equality is by definition of the map $j_*$ and the second by Proposition \\ref{prop:perfect2}(i).\n\nGiven this, the claim follows from the obvious equality $j_*(\\mu_{S}(A_{F\/k})) = \\mu_{S}(A_{F\/k})_p$ and the commutativity of the diagram\n\\begin{equation*}\\label{commute K thry} \\begin{CD} K_1(\\RR[G]) @> \\partial_G >> K_0(\\ZZ[G],\\RR[G])\\\\\n@VV j_{*} V @VV j_{*}V\\\\\nK_1(\\CC_p[G]) @> \\partial_{G,p} >> K_0(\\ZZ_p[G],\\CC_p[G]).\\end{CD}\\end{equation*}\n\\end{proof}\n\n\n\\begin{remark}{\\em In the sequel we shall say, for any given prime $p$, that the `$p$-primary component' {\\rm BSD}$_p(A_{F\/k})$(iv) of the equality in {\\rm BSD}$(A_{F\/k})$(iv) is valid if for every choice of isomorphism of fields $j:\\CC\\cong \\CC_p$ the equality (\\ref{displayed pj}) is valid.} \\end{remark}\n\n\\section{Periods and Galois-Gauss sums}\\label{k theory period sect}\n\nTo prepare for arguments in subsequent sections, we shall now explain the precise link between the $K$-theoretical period $\\Omega_{\\omega_\\bullet}(A_{F\/k})$ that occurs in ${\\rm BSD}(A_{F\/k})$ and the classical periods that are associated to $A$ over $k$.\n\n\\subsection{Periods and Galois resolvents}\\label{k theory period sect2} At the outset we fix an ordered $k$-basis $\\{\\omega'_j: j \\in [d]\\}$ of $H^0(A^t,\\Omega^1_{A^t})$.\n\nFor each $v$ in $S_\\RR^k$ we then set\n\\[ \\Omega_{A,v}^+ := {\\rm det}\\left(\\left(\\int_{\\gamma_{v,a}^{+}}\\sigma_{v,*}(\\omega'_b)\\right)_{a,b}\\right)\\]\nand\n\n\\[ \\Omega_{A,v}^- := {\\rm det}\\left(\\left(\\int_{\\gamma_{v,a}^{-}}\\sigma_{v,*}(\\omega'_b)\\right)_{a,b}\\right),\\]\nwhere the elements $\\gamma_{v,a}^+$ and $\\gamma_{v,a}^-$ of $H_1((A^t)^{\\sigma_v}(\\CC),\\ZZ)$ are as specified in \\S\\ref{gamma section} and in both matrices $(a,b)$ runs over $[d]\\times [d]$.\n\nFor each $v$ in $S_k^\\CC$ we also set\n\\[ \\Omega_{A,v} := {\\rm det}\\left(\\left(\\int_{\\gamma_{v,a}}\\sigma_{v,*}(\\omega'_b),c\\!\\left(\\int_{\\gamma_{v,a}}\\sigma_{v,*}(\\omega'_b)\\right)\\right)_{a,b}\\right)\\]\nwhere the elements $\\gamma_{v,a}$ of $H_1((A^t)^{\\sigma_v}(\\CC),\\ZZ)$ are again as specified in \\S\\ref{gamma section} and $(a,b)$ runs over $[2d]\\times [d]$.\n\nWe note that, by explicitly computing integrals, the absolute values of these determinants can be explicitly related to the periods that occur in the classical formulation of the Birch and Swinnerton-Dyer conjecture (see, for example, Gross \\cite[p. 
224]{G-BSD}).\n\nFor each archimedean place $v$ of $k$ and character $\\psi$ we then set\n\\[\\Omega^\\psi_{A,v} := \\begin{cases} \\Omega_{A,v}^{\\psi(1)}, &\\text{ if $v \\in S_k^\\CC$,}\\\\\n (\\Omega^+_{A,v})^{1-\\psi_v^-(1)}(\\Omega^-_{A,v})^{\\psi_v^-(1)}, &\\text{ if $v \\in S_k^\\RR$}\\end{cases} \\]\nwith\n\\[\\psi_v^-(1) := \\psi(1) - {\\rm dim}_\\CC(H^0(G_w,V_\\psi)),\\]\nwhere again $V_\\psi$ is a fixed choice of $\\CC[G]$-module of character $\\psi$.\n\nFor each $\\psi$ we set\n\\[ \\Omega_A^\\psi := \\prod_{v \\in S_k^\\infty}\\Omega^\\psi_{A,v}\\]\nand we then finally define an element of $\\zeta(\\CC[G])^\\times$ by setting\n\\begin{equation}\\label{period def} \\Omega_A^{F\/k} := \\sum_{\\psi \\in \\widehat{G}}\\Omega^\\psi_A\\cdot e_\\psi.\\end{equation}\n\nFor each $v$ in $S_k^\\RR$, resp. in $S_k^\\CC$, we also set\n\\[ w_{v,\\psi} := \\begin{cases} i^{\\psi^-_v(1)}, &\\text{if $v\\in S_k^\\RR$,}\\\\\n i, &\\text{if $v\\in S_k^\\CC$.}\\end{cases}\\]\n\nFor each character $\\psi$ we then set\n\\[ w_{\\psi} := \\prod_{v \\in S_k^\\infty}w_{v,\\psi}\\]\nand then define an element of $\\zeta(\\CC[G])^\\times$ by setting\n\\begin{equation}\\label{root number def} w_{F\/k} := \\sum_{\\psi\\in \\widehat{G}} w_\\psi\\cdot e_\\psi .\\end{equation}\n\n\n\n\\begin{lemma}\\label{k-theory period} Set $n := [k:\\QQ]$. Fix an ordered $\\QQ[G]$-basis $\\{z_i: i \\in [n]\\}$ of $F$ and write $\\omega_\\bullet$ for the (lexicographically ordered) $\\QQ[G]$-basis $\\{ z_i\\otimes \\omega'_j: i \\in [n], j \\in [d]\\}$ of $H^0(A_F^t,\\Omega^1_{A_F^t})$. Then in $\\zeta(\\RR[G])^\\times$ one has\n\\[ {\\rm Nrd}_{\\RR[G]}(\\Omega_{\\omega_\\bullet}(A_{F\/k})) = \\Omega_A^{F\/k}\\cdot w_{F\/k}^d\\cdot {\\rm Nrd}_{\\QQ[G]}\n\\left( \\left( (\\sum_{g \\in G} \\hat \\sigma(g^{-1}(z_i)))\\cdot g\\right)_{\\sigma\\in \\Sigma(k),i\\in [n]}\\right)^{-d} \\]\nwhere we have fixed an extension $\\hat\\sigma$ to $\\Sigma(F)$ of each embedding $\\sigma$ in $\\Sigma(k)$.\\end{lemma}\n\n\\begin{proof} This follows from the argument of \\cite[Lem. 
4.5]{bmw}.\\end{proof}\n\n\n\\subsection{Galois resolvents and Galois-Gauss sums}\n\nUnder suitable conditions, one can also choose the $\\QQ[G]$-basis $\\{z_i: i \\in [n]\\}$ of $F$ so that the reduced norm of the Galois resolvent matrix that occurs in Lemma \\ref{k-theory period} can be explicitly described in terms of Galois-Gauss sums.\n\nBefore explaining this we first recall the relevant notions of Galois-Gauss sums.\n\n\\subsubsection{}\\label{mod GGS section}\n\nThe `global Galois-Gauss sum of $F\/k$' is the element\n\\[ \\tau(F\/k) :=\\sum_{\\psi \\in \\widehat{G}}\\tau(\\QQ,\\psi)\\cdot e_\\psi\\]\nof $\\zeta(\\QQ^c[G])^\\times$.\n\nHere we regard each character $\\psi$ of $G$ as a character of $G_k$ via the projection $G_k \\to G$ and then write\n$\\tau(\\QQ,\\psi)$ for the global Galois-Gauss sum (as defined by Martinet in \\cite{martinet})\nof the induction of $\\psi$ to $G_\\QQ$.\n\n\nTo define suitable modifications of these sums\nwe then define the `unramified characteristic' of $v$ at each character $\\psi$ in $\\widehat{G}$ by setting\n\\[ u_{v,\\psi} := {\\rm det}(-\\Phi_v^{-1}\\mid V_\\psi^{I_w})\\in \\QQ^{c,\\times}.\\]\n\nFor each character $\\psi$ in $\\widehat{G}$ we set\n\\[ u_\\psi := \\prod_{v\\in S_k^F}u_{v,\\psi}.\\]\n\nWe then define elements of $\\zeta(\\QQ[G])^\\times$ by setting\n\\begin{equation}\\label{u def} u_v(F\/k) := \\sum_{\\psi\\in \\widehat{G}}u_{v,\\psi}\\cdot e_\\psi\\end{equation}\nand\n\\[ u_{F\/k} := \\prod_{v\\in S_k^F}u_v(F\/k) = \\sum_{\\psi\\in \\widehat{G}}u_\\psi\\cdot e_\\psi.\\]\n\nWe finally define the `modified global Galois-Gauss sum of $\\psi$' to be the element\n\\[ \\tau^\\ast(\\QQ,\\psi) := u_\\psi\\cdot \\tau(\\QQ,\\psi)\\]\nof $\\QQ^c$, and the `modified global Galois-Gauss sum of $F\/k$' to be the element\n\\[ \\tau^\\ast(F\/k) := u_{F\/k}\\cdot \\tau(F\/k) = \\sum_{\\psi\\in \\widehat{G}}\\tau^\\ast(\\QQ,\\psi)\\cdot e_\\psi \\]\nof $\\zeta(\\QQ^c[G])^\\times$.\n\n\n\n\\begin{remark}{\\em The modified Galois-Gauss sums $\\tau^\\ast(\\QQ,\\psi)$ defined above play a key role in the proof of the main results of classical Galois module theory, as discussed by Fr\\\"ohlich in \\cite{frohlich}. 
In Lemma \\ref{imprimitive GGS} below, one can also find a more concrete reason as to why such terms should arise naturally in the setting of leading term conjectures.}\n\\end{remark}\n\n\\subsubsection{} The next result shows that under mild hypotheses the Galois-resolvent matrix that occurs in Lemma \\ref{k-theory period} can be explicitly interpreted in terms of the elements $\\tau^\\ast(F\/k)$ introduced above.\n\n\\begin{proposition}\\label{lms}The following claims are valid.\n\n\\begin{itemize}\n\\item[(i)] For any ordered $\\QQ[G]$-basis $\\omega_\\bullet$ of $H^0(A_F^t,\\Omega^1_{A_F^t})$ there exists an element $u(\\omega_\\bullet)$ of $\\zeta(\\QQ[G])^\\times$ such that\n\\[ {\\rm Nrd}_{\\RR[G]}(\\Omega_{\\omega_\\bullet}(A_{F\/k})) = u(\\omega_\\bullet)\\cdot \\Omega_A^{F\/k}\\cdot w_{F\/k}^d\\cdot \\tau^\\ast(F\/k)^{-d}.\\]\n\n\\item[(ii)] Fix a prime $p$ and set $\\mathcal{O}_{F,p} := \\ZZ_p\\otimes_\\ZZ\\mathcal{O}_F.$ Then if no $p$-adic place of $k$ is wildly ramified in $F$, there is an ordered $\\ZZ_p[G]$-basis $\\{z^p_{i}\\}_{i \\in [n]}$ of $\\mathcal{O}_{F,p}$ for which one has\n\\[ {\\rm Nrd}_{\\QQ_p[G]}\\left( \\left( (\\sum_{g \\in G} \\hat \\sigma(g^{-1}(z^p_i)))\\cdot g\\right)_{\\sigma\\in \\Sigma(k),i\\in [n]}\\right) = \\tau^\\ast(F\/k).\\]\n\\end{itemize}\n\\end{proposition}\n\n\\begin{proof} It is enough to prove claim (i) for any choice of $\\QQ[G]$-basis $\\omega_\\bullet$. Then, choosing $\\omega_\\bullet$ as in Lemma\n\\ref{k-theory period}, the latter result implies that it is enough to prove that the product\n\\[ {\\rm Nrd}_{\\QQ_p[G]}\\left( \\left( (\\sum_{g \\in G} \\hat \\sigma(g^{-1}(z_i)))\\cdot g\\right)_{\\sigma\\in \\Sigma(k),i\\in [n]}\\right)\\cdot \\tau^\\ast(F\/k)^{-1}\\]\nbelongs to $\\zeta(\\QQ[G])^\\times$ and this follows from the argument used by Bley and the first author to prove \\cite[Prop. 3.4]{bleyburns}.\n\nTurning to claim (ii) we note that if no $p$-adic place of $k$ is wildly ramified in $F$, then the $\\ZZ_p[G]$-module $\\mathcal{O}_{F,p}$ is free of rank $n$ (by Noether's Theorem) and so we may fix an ordered $\\ZZ_p[G]$-basis $z^p_\\bullet := \\{z^p_{i}: i \\in [n]\\}$.\n\nThe matrix\n\\[ M(z^p_\\bullet) := \\left( (\\sum_{g \\in G} \\hat \\sigma(g^{-1}(z^p_b)))\\cdot g\\right)_{\\sigma\\in \\Sigma(k),b\\in [n]}\\]\nin ${\\rm GL}_{n}(\\CC_p[G])$ then represents, with respect to the bases $z^p_\\bullet$ of $F_p$ and $\\{\\hat \\sigma: \\sigma \\in \\Sigma(k)\\}$ of $Y_{F\/k,p}$, the isomorphism of $\\CC_p[G]$-modules\n\\[ \\mu_{F,p}: \\CC_p\\otimes_{\\QQ_p} F_p \\cong \\CC_p\\otimes_{\\ZZ_p}Y_{F\/k,p}\\]\nthat sends each $z\\otimes f$ to $(z\\hat\\sigma(f))_{\\sigma\\in\\Sigma(k)}$.\n\nHence one has\n\\begin{align*} \\delta_{G,p}\\bigl({\\rm Nrd}_{\\CC_p[G]}\\bigl(M(z^p_\\bullet)\\bigr)\\bigr) = \\, &\\partial_{G,p}\\bigl([M(z^p_\\bullet)]\\bigr)\\\\\n = \\, &[\\mathcal{O}_{F,p}, Y_{F\/k,p}; \\mu_{F,p}]\\\\\n = \\, &\\delta_{G,p}(\\tau^\\ast(F\/k)),\\end{align*}\nwhere $[M(z^p_\\bullet)]$ denotes the class of $M(z^p_\\bullet)$ in $K_1(\\CC_p[G])$ and the last equality follows from the proof of \\cite[Th. 
7.5]{bleyburns}.\n\nNow the exact sequence of relative $K$-theory implies that the kernel of $\\delta_{G,p}$ is equal to the image of $K_1(\\ZZ_p[G])$ under the map ${\\rm Nrd}_{\\QQ_p[G]}$.\n\nIn addition, the ring $\\ZZ_p[G]$ is semi-local and so the natural map ${\\rm GL}_{n}(\\ZZ_p[G]) \\to K_1(\\ZZ_p[G])$ is surjective.\n\nIt follows that there exists a matrix $U$ in ${\\rm GL}_n(\\ZZ_p[G])$ with\n\\[ {\\rm Nrd}_{\\CC_p[G]}(M(z^p_\\bullet))\\cdot {\\rm Nrd}_{\\CC_p[G]}(U) = \\tau^\\ast(F\/k)\\]\nand so it suffices to replace the basis $z^p_\\bullet$ by its image under the automorphism of $\\mathcal{O}_{F,p}$ that corresponds to the matrix $U$. \\end{proof}\n\nTaken together, Lemma \\ref{k-theory period} and Proposition \\ref{lms}(ii) give an explicit interpretation of the $K$-theoretical periods that occur in the formulation of {\\rm BSD}$(A_{F\/k})$.\n\nHowever, the existence of $p$-adic places that ramify wildly in $F$ makes the situation more complicated and this leads to technical difficulties in later sections.\n\n\n\n\n\n\\section{Local points on ordinary varieties}\\label{local points section}\n\nIn \\S\\ref{tmc} we will impose several mild hypotheses on the reduction types of $A$ and the ramification invariants of $F\/k$ which together ensure that the classical Selmer complex is perfect over $\\ZZ_p[G]$. Under these hypotheses, we will then give a more explicit interpretation of the equality in ${\\rm BSD}(A_{F\/k})$(iv).\n\nAs a necessary preparation for these results, in this section we shall establish several preliminary results concerning the properties of local points on varieties with good ordinary reduction.\n\n\n\\subsection{Cohomological-triviality} For this purpose we assume to be given a finite Galois extension $N\/M$ of $p$-adic fields and set $\\Gamma := G_{N\/M}$. We fix a Sylow $p$-subgroup $\\Delta$ of $\\Gamma$. We write $\\Gamma_0$ for the inertia subgroup of $\\Gamma$ and set $N_0 := N^{\\Gamma_0}$.\n\nWe also assume to be given an abelian variety $B$, of dimension $d$, over $M$ that has good reduction and write $\\tilde B$ for the corresponding reduced variety.\n\n\\begin{lemma}\\label{useful prel} The following claims are valid.\n\\begin{itemize}\n\\item[(i)] If $N\/M$ is unramified, then the $\\Gamma$-modules $B(N)$, $\\tilde B(\\kappa_{N})$ and $\\kappa_N$ are cohomologically-trivial.\n\\item[(ii)] If $N\/M$ is at most tamely ramified, then the $\\ZZ_p[\\Gamma]$-modules $B(N)^\\wedge_p$ and $\\tilde B(\\kappa_{N})[p^\\infty]$ are cohomologically-trivial.\n\\item[(iii)] If the variety $B$ is ordinary and $\\tilde B(\\kappa_{N^\\Delta})[p^\\infty]$ vanishes, then the $\\ZZ_p[\\Gamma]$-module $B(N)^\\wedge_p$ is cohomologically-trivial.\n\\item[(iv)] Assume that $B$ is ordinary and write $u$ for the twist matrix (in ${\\rm GL}_{d}(\\ZZ_p)$) of its formal group over the completion of $M^{\\rm un}$. If $\\tilde B(\\kappa_{N^\\Delta})[p^\\infty]$ vanishes, then $B(N)^\\wedge_p$ is torsion-free, and hence projective over $\\ZZ_p[\\Gamma]$, if and only if for any non-trivial $d$-fold vector $\\underline{\\zeta}$ of $p$-th roots of unity in $N^{\\rm un}$ one has\n\\[ \\Phi_N(\\underline{\\zeta}) \\not= \\underline{\\zeta}^u,\\]\nwhere $\\Phi_N$ is the Frobenius automorphism in $G_{N^{\\rm un}\/N}$. In particular, this is the case if every $p$-power root of unity in $N^{\\rm un}$ belongs to $N$.\n\\end{itemize}\n\\end{lemma}\n\n\\begin{proof} A standard Hochschild-Serre spectral sequence argument combines with the criterion of \\cite[Thm. 
9]{cf} to show that claim (i) is valid provided that each of the modules $B(N)$, $\\tilde B(\\kappa_{N})$ and $\\kappa_N$ is cohomologically-trivial with respect to every subgroup $C$ of $\\Gamma$ of prime order (see the proof of \\cite[Lem. 4.1]{bmw} for a similar argument).\n\nWe therefore fix a subgroup $C$ of $\\Gamma$ that has prime order. Now cohomology over $C$ is periodic of order 2 and each of the modules $B(N)$, $\\tilde B(\\kappa_{N})$ and $\\kappa_N$ span free $\\QQ[\\Gamma]$-modules.\nIt thus follows from \\cite[Cor. to Prop. 11]{cf} that the Herbrand Quotient with respect to $C$ of each of these modules is equal to 1.\nTo prove claim (i) it is enough to show that the natural norm maps $B(N)\\to B(N^C)$, $\\tilde B(\\kappa_{N}) \\to \\tilde B(\\kappa_{N^C})$ and $\\kappa_{N}\\to \\kappa_{N^C}$ are surjective.\n\nSince the extension $N\/N^C$ is unramified, this surjectivity is well-known for the module $\\kappa_N$ and for the modules $B(N)$ and $\\tilde B(\\kappa_{N})$ it follows directly from the result of Mazur in \\cite[Cor. 4.4]{m}.\n\nTo prove claim (ii) we assume that $N\/M$ is tamely ramified. In this case the order of $\\Gamma_0$ is prime to $p$ and so the same standard Hochschild-Serre spectral sequence argument as in claim (i) implies claim (ii) is true if the modules $B(N_0)^\\wedge_p = (B(N)^\\wedge_p)^{\\Gamma_0}$ and $\\tilde B(\\kappa_{N})[p^\\infty] = \\tilde B(\\kappa_{N})[p^\\infty]^{\\Gamma_0}$ are cohomologically-trivial with respect to every subgroup $C$ of $\\Gamma\/\\Gamma_0$ of order $p$. Since $N_0\/N_0^C$ is unramified, this follows from the argument in claim (i).\n\nIn a similar way, to prove claim (iii) one is reduced to showing that if $\\tilde B(\\kappa_{N^\\Delta})[p^\\infty]$ vanishes, then for each subgroup $C$ of $\\Gamma_0$ of order $p$, the norm map ${\\rm N}_C: B(N)^\\wedge_p \\to B(N^C)^\\wedge_p$ is surjective.\n\nNow the main result of Lubin and Rosen in \\cite{LR} implies that the cokernel of ${\\rm N}_C$ is isomorphic to the cokernel of the natural action of ${\\rm I}_d-u$ on the direct sum of $d$-copies of $C$ and from the proof of \\cite[Th. 2]{LR} one knows that ${\\rm det}({\\rm I}_d-u)$ is a $p$-adic divisor of $|\\tilde B(\\kappa_N)|$.\n But if $\\tilde B(\\kappa_{N^\\Delta})[p^\\infty]$ vanishes, then $\\tilde B(\\kappa_{N})[p^\\infty]$ also vanishes (as $\\Delta$ is a $p$-group) and so\n ${\\rm det}({\\rm I}_d-u)$ is a $p$-adic unit. It follows that ${\\rm cok}(N_C)$ vanishes, as required to prove claim (iii).\n\nTo prove claim (iv) we assume $\\tilde B(\\kappa_{N^\\Delta})[p^\\infty]$ vanishes. Then claim (ii) implies $B(N)^\\wedge_p$ is a projective $\\ZZ_p[\\Gamma]$-module if and only if $B(N)^\\wedge_p[p^\\infty]$ vanishes. In addition, from the lemma in \\cite[\\S1]{LR} (with $L = K = N$), we know that the group $B(N)^\\wedge_p[p^\\infty]$ is isomorphic to the subgroup of $(N^{{\\rm un},\\times})^d$ comprising $p$-torsion elements $\\underline{\\eta}$ which satisfy\n$\\Phi_N(\\underline{\\eta}) = \\underline{\\eta}^u$.\n\nThis directly implies the first assertion of claim (iv) and the second assertion then follows because ${\\rm det}({\\rm I}_d-u)$ is a $p$-adic unit and so $u\\not\\equiv 1$ (mod $p$). 
\\end{proof}\n\n\\begin{remark}{\\em A more general analysis of the cohomological properties of formal groups was recently given by Ellerbrock and Nickel in \\cite{ellerbrocknickel}.}\\end{remark}\n\n\\subsection{Twist matrices and $K$-theory}\\label{twist inv prelim}\n\nIn this section we fix an abelian variety $B$ over $M$ of dimension $d$.\nWe assume that $B$ has good ordinary reduction and is such that $\\tilde B(\\kappa_{N^\\Delta})[p^\\infty]$ vanishes.\n\nWe shall then use Lemma \\ref{useful prel} to define a natural invariant in $K_0(\\ZZ_p[\\Gamma],\\QQ_p[\\Gamma])$ of the twist matrix of $B$ that will play an important role in the explicit interpretation of the equality in ${\\rm BSD}(A_{F\/k})$(iv) that will be given in \\S \\ref{tmc} below.\n\nAt the outset we recall that the complex $R\\Gamma(N,\\ZZ_p(1))$ belongs to $D^{\\rm perf}(\\ZZ_p[\\Gamma])$. Hence, following Lemma \\ref{useful prel}(iii), we obtain a complex in $D^{\\rm perf}(\\ZZ_p[\\Gamma])$ by setting\n\\[ C_{B,N}^{\\bullet} := R\\Gamma(N,\\ZZ_p(1))^d[1] \\oplus B(N)^\\wedge_p[-1].\\]\n\nThis complex is acyclic outside degrees zero and one. In addition, Kummer theory gives an identification $H^1(N,\\ZZ_p(1)) = (N^\\times)^\\wedge_p$ and the invariant map ${\\rm inv}_N$ of $N$ an isomorphism $ H^2(N,\\ZZ_p(1)) \\cong \\ZZ_p$.\n\nWe next fix a choice of isomorphism of $\\QQ_p[\\Gamma]$-modules $\\lambda_{B,N}$ which lies in a commutative diagram\n\\begin{equation}\\label{lambda diag}\\begin{CD}\n0 @> >> \\QQ_p\\cdot (U^{(1)}_{N})^d @> \\subset >> \\QQ_p\\cdot H^0(C_{B,N}^{\\bullet}) @> ({\\rm val}_N)^d >> \\QQ_p^d @> >> 0\\\\\n@. @V {\\rm exp}_{B,N}VV @V \\lambda_{B,N} VV @V \\times f_{N\/M}VV\\\\\n0 @> >> \\QQ_p \\cdot B(N)^\\wedge_p @> \\subset >> \\QQ_p\\cdot H^1(C_{B,N}^{\\bullet}) @> {\\rm can} >> \\QQ_p^d @> >> 0.\\end{CD}\\end{equation}\nHere $U_N^{(1)}$ is the group of 1-units of $N$, ${\\rm val}_N: \\QQ_p\\cdot (N^\\times)^\\wedge_p\\to \\QQ_p$ is the canonical valuation map on $N$, $f_{N\/M}$ is the residue degree of $N\/M$, `{\\rm can}' is induced by ${\\rm inv}_N$ and ${\\rm exp}_{B,N}$ is the composite isomorphism\n\\[ \\QQ_p\\cdot (U^{(1)}_{N})^d \\cong N^d \\cong \\QQ_p\\cdot B(N)^\\wedge_p\\]\nwhere the first isomorphism is induced by the $p$-adic logarithm on $N$ and the second by the exponential map of the formal group of $B$ over $N$.\n\nWe now introduce a useful general convention: for each element $x$ of $\\zeta(\\CC_p[\\Gamma])$ we write $^\\dagger x$ for the unique element of $\\zeta(\\CC_p[\\Gamma])^\\times$ with the property that for each $\\mu$ in $\\widehat{\\Gamma}$ one has\n\\begin{equation}\\label{dagger eq} e_\\mu (^\\dagger x) = \\begin{cases} e_\\mu x, &\\text{ if $e_\\mu x\\not= 0$,}\\\\\n e_\\mu, &\\text{ otherwise.}\\end{cases}\\end{equation}\n(This construction is written as $x \\mapsto ^*\\!\\! x$ in \\cite{bleyburns,breuning2}).\n\nWe then define an element\n\\[ c_{N\/M} := \\frac{^\\dagger((|\\kappa_M|-\\Phi_{N\/M})e_{\\Gamma_0})}{^\\dagger((1-\\Phi_{N\/M})e_{\\Gamma_0})}\\]\nof $\\zeta(\\QQ[\\Gamma])^\\times$. Here and in the sequel, $\\Phi_{N\/M}$ is a fixed lift to $\\Gamma$ of the Frobenius automorphism in $\\Gamma\/\\Gamma_0$ and, for any subgroup $J$ of $\\Gamma$, $e_{J}$ denotes the idempotent $(1\/|J|)\\sum_{\\gamma\\in J}\\gamma$.\n\nWe finally obtain our desired element of $K_0(\\ZZ_p[\\Gamma],\\QQ_p[\\Gamma])$ by setting\n\\[ R_{N\/M}(\\tilde B) := \\chi_{\\Gamma,p}(C_{B,N}^{\\bullet},\\lambda_{B,N}) +d\\cdot\\delta_{\\Gamma,p}(c_{N\/M}). 
\\]\n\n\n\\begin{proposition}\\label{basic props} Assume $B$ is ordinary and $\\tilde B(\\kappa_{N^\\Delta})[p^{\\infty}]$ vanishes.\n\\begin{itemize}\n\\item[(i)] $R_{N\/M}(\\tilde B)$ depends only upon $N\/M$ and the reduced variety $\\tilde B$.\n\\item[(ii)] $R_{N\/M}(\\tilde B)$ has finite order.\n\\item[(iii)] If $N\/M$ is tamely ramified, then\n $R_{N\/M}(\\tilde B)$ vanishes.\n\\end{itemize}\n\\end{proposition}\n\n\\begin{proof} We set $C^\\bullet := C_{B,N}^\\bullet$, $\\lambda := \\lambda_{B,N}$ and $f := f_{N\/M}$, and also write $\\wp = \\wp_N$ for the maximal ideal in the valuation ring of $N$.\n\nThen, whilst $\\lambda$ can be chosen in many different ways to ensure that (\\ref{lambda diag}) commutes, it is straightforward to check that $\\chi_{\\Gamma,p}(C^{\\bullet},\\lambda)$ is independent of this choice. The fact that this element depends only on the (twist matrix of the) reduced variety $\\tilde B$ follows from Lemma \\ref{twist dependence} below. This proves claim (i).\n\nIt is convenient to prove claim (iii) before claim (ii) and so we assume $N\/M$ is tamely ramified.\n\nWe write $\\hat B$ for the formal group of $B$. In this case Lemma \\ref{ullom}(i) below implies that for each natural number $n$ the $\\ZZ_p[\\Gamma]$-modules $U^{(n)}:= \\mathbb{G}_m(\\wp^n)$ and $V^{(n)}:= \\hat B(\\wp^n)$ are cohomologically-trivial and there exist exact triangles in $D^{\\rm perf}(\\ZZ_p[\\Gamma])$ of the form\n\\begin{equation*}\\label{use tri}\n\\begin{cases} (U^{(n)})^d[0]\\oplus V^{(n)}[-1]\\xrightarrow{\\alpha} C^{\\bullet}\\to C_{\\alpha}^\\bullet \\to (U^{(n)})^d[1]\\oplus V^{(n)}[0]\\\\\nU^{(1)}[0] \\xrightarrow{\\beta} R\\Gamma(N,\\ZZ_p(1))[1] \\to C^\\bullet_{N,1} \\to U_N^1[1]\\\\\n(U^{(1)}\/U^{(n)})^d[0] \\xrightarrow{\\gamma} C_{\\alpha}^\\bullet \\to (C^\\bullet_{N,1})^d\\oplus (V^{(1)}\/V^{(n)})[-1] \\to (U^{(1)}\/U^{(n)})^d[1].\\end{cases}\\end{equation*}\nHere $\\alpha$ is the unique morphism such that $H^0(\\alpha)$ and $H^1(\\alpha)$ are respectively induced by the inclusions $U^{(n)}\\subset (N^\\times)^\\wedge_p$ and $V^{(n)} \\subseteq B(N)^\\wedge_p$ and so that the cohomology sequence of the first triangle induces identifications of $H^0(C_{\\alpha}^\\bullet)$ and $H^1(C_{\\alpha}^\\bullet)$ with $((N^\\times)^\\wedge_p\/U^{(n)})^d$ and $\\ZZ_p^d \\oplus V^{(1)}\/V^{(n)}$; $\\beta$ is the unique morphism so that $H^0(\\beta)$ is induced by the inclusion $U^{(1)} \\subset (N^\\times)^\\wedge_p$ and so the cohomology sequence of the second triangle induces identifications of $H^0(C_{N,1}^\\bullet)$ and $H^1(C_{N,1}^\\bullet)$ with $(N^\\times)^\\wedge_p\/U^{(1)}$ and $\\ZZ_p$ respectively; $\\gamma$ is the unique morphism so that $H^{0}(\\gamma)$ is the inclusion $(U^{(1)}\/U^{(n)})^d \\subset H^0(C_{\\alpha}^\\bullet)$.\n\nIn particular, if $n$ is sufficiently large, then we may apply Lemma \\ref{fk lemma} to the first and third of the above triangles to deduce that\n\n\\begin{align}\\label{interm} &\\chi_{\\Gamma,p}(C^{\\bullet},\\lambda)\\\\ = \\,&\\chi_{\\Gamma,p}((U^{(n)})^d[0]\\oplus V^{(n)}[-1],{\\rm exp}_{B,N}) + \\chi_{\\Gamma,p}(C_{\\alpha}^{\\bullet},\\lambda_{\\alpha})\\notag\\\\\n= \\,&\\chi_{\\Gamma,p}(C_{\\alpha}^{\\bullet},\\lambda_{\\alpha})\\notag\\\\\n= \\,& \\chi_{\\Gamma,p}((U^{(1)}\/U^{(n)})^d[0],0) + d\\cdot \\chi_{\\Gamma,p}(C_{N,1}^{\\bullet},f\\cdot{\\rm val}_N) + \\chi_{\\Gamma,p}((V^{(1)}\/V^{(n)})[-1],0)\\notag\\\\\n= \\,& \\chi_{\\Gamma,p}((U^{(1)}\/U^{(n)})^d[0],0) + d\\cdot \\chi_{\\Gamma,p}(C_{N,1}^{\\bullet},f\\cdot{\\rm 
val}_N) - \\chi_{\\Gamma,p}((V^{(1)}\/V^{(n)})[0],0)\\notag\\\\\n= \\, &d\\cdot \\chi_{\\Gamma,p}(C_{N,1}^{\\bullet},f\\cdot{\\rm val}_N),\\notag\n\\end{align}\nwhere we write $\\lambda_{\\alpha}$ for the isomorphism of $\\QQ_p[\\Gamma]$-modules\n\\[ \\QQ_p\\cdot H^0(C_{\\alpha}^\\bullet) = \\QQ_p\\cdot ((N^\\times)^\\wedge_p\/U^{(n)})^d \\cong \\QQ_p^d = \\QQ_p\\cdot H^1(C_{\\alpha}^\\bullet)\\]\nthat is induced by the map $f\\cdot {\\rm val}_N$ and the second and last equalities in (\\ref{interm}) follow from Lemma \\ref{ullom}.\n\nBut $$\\chi_{\\Gamma,p}(C_{N,1}^{\\bullet},f\\cdot {\\rm val}_N) = \\chi_{\\Gamma,p}(C_{N,1}^{\\bullet},{\\rm val}_N) + \\delta_{\\Gamma,p}(^\\dagger(f\\cdot e_\\Gamma))$$ whilst from \\cite[Th. 4.3]{bleyburns} one has\n\n\\[ \\chi_{\\Gamma,p}(C_{N,1}^{\\bullet},{\\rm val}_N) = -\\delta_{\\Gamma,p}(c_{N\/M}\\cdot ^\\dagger\\!(f\\cdot e_\\Gamma))= - \\delta_{\\Gamma,p}(c_{N\/M}) - \\delta_{\\Gamma,p}(^\\dagger\\!(f\\cdot e_\\Gamma)).\\]\nClaim (iii) is thus obtained by substituting these facts into the equality (\\ref{interm}).\n\n\nTo deduce claim (ii) from claim (iii) we recall that an element $\\xi$ of $K_0(\\ZZ_p[\\Gamma],\\QQ_p[\\Gamma])$ has finite order if and only if for every cyclic subgroup $\\Upsilon$ of $\\Gamma$ and every quotient $\\Omega = \\Upsilon\/\\Upsilon'$ of\norder prime to $p$ one has $(q^\\Upsilon_{\\Omega}\\circ\\rho^{\\Gamma}_\\Upsilon)(\\xi) = 0$. Here\n$$\\rho^{\\Gamma}_\\Upsilon:K_0(\\ZZ_p[\\Gamma],\\QQ_p[\\Gamma])\\to K_0(\\ZZ_p[\\Upsilon],\\QQ_p[\\Upsilon])$$ is the natural restriction map,\n$$q^\\Upsilon_{\\Omega}:K_0(\\ZZ_p[\\Upsilon],\\QQ_p[\\Upsilon])\\to K_0(\\ZZ_p[\\Omega],\\QQ_p[\\Omega])$$\nmaps the class of a triple $(P,\\phi,Q)$ to the class of $(P^{\\Upsilon'},\\phi^{\\Upsilon'},Q^{\\Upsilon'})$, and the stated general fact is proved in \\cite[Thm. 4.1]{ewt}.\n\nSince the extension $N^{\\Upsilon'}\/N^\\Upsilon$ is tamely ramified, it is thus enough to show that\n\\[ (q^\\Upsilon_{\\Omega}\\circ\\rho^{\\Gamma}_\\Upsilon)(R_{N\/M}(\\tilde B))=R_{N^{\\Upsilon'}\/N^\\Upsilon}(\\tilde B).\\]\n\nThis is proved by a routine computation in relative $K$-theory that uses the same ideas as in \\cite[Rem. 
2.9]{breuning2}.\n In fact, the only point worth mentioning explicitly in this regard is that if $\\Gamma'$ is normal in $\\Gamma$, and we set $N' := N^{\\Gamma'}$, then the natural projection isomorphism $\\iota:\\ZZ_p[\\Gamma\/\\Gamma']\\otimes^{\\mathbb{L}}_{\\ZZ_p[\\Gamma]}R\\Gamma(N,\\ZZ_p(1)) \\cong R\\Gamma(N',\\ZZ_p(1))$ in $D^{\\rm perf}(\\ZZ_p[\\Gamma\/\\Gamma'])$ gives a commutative diagram of (trivial) $\\QQ_p[\\Gamma\/\\Gamma']$-modules\n\n\\[ \\begin{CD} \\QQ_p\\cdot H^2(N,\\ZZ_p(1))^{\\Gamma'} @> {\\rm inv}_N >> \\QQ_p\\\\\n@V H^2(\\iota)VV @VV \\times f_{N\/N'}V\\\\\n\\QQ_p\\cdot H^2(N',\\ZZ_p(1)) @> {\\rm inv}_{N'}>> \\QQ_p.\\end{CD}\\]\n\\end{proof}\n\n\\begin{lemma}\\label{ullom} If $N\/M$ is tamely ramified, the following claims are valid for all natural numbers $a$.\n\\begin{itemize}\n\\item[(i)] The $\\ZZ_p[\\Gamma]$-modules $U^{(a)}$ and $V^{(a)}$ are cohomologically-trivial.\n\\item[(ii)] One has $d\\cdot \\chi_{\\Gamma,p}((U^{(1)}\/U^{(a)})[0],0) = \\chi_{\\Gamma,p}((V^{(1)}\/V^{(a)})[0],0)$.\n\\item[(iii)] For all sufficiently large $a$ one has $\\chi_{\\Gamma,p}((U^{(a)})^d[0]\\oplus V^{(a)}[-1],{\\rm exp}_{B,N})=0$.\n\\end{itemize}\n\\end{lemma}\n\n\\begin{proof} The key fact in this case is that for every integer $i$ the $\\ZZ_p[\\Gamma]$-module $\\wp^i$ is cohomologically-trivial (by Ullom \\cite{Ullom}).\n\nIn particular, if we write $i_0$ for the least integer with $i_0 \\ge e\/(p-1)$, where $e$ is the ramification degree of $N\/\\QQ_p$, then for any integer $a \\ge i_0$ the formal logarithm ${\\rm log}_{B}$ and $p$-adic exponential map restrict to give isomorphisms of $\\ZZ_p[\\Gamma]$-modules\n\\begin{equation}\\label{iso1} V^{(a)}\\cong (\\wp^{a})^d,\\,\\,\\,\\,\\,\\,\\,\\,\\wp^a \\cong U^{(a)}\\end{equation}\nand so the $\\ZZ_p[\\Gamma]$-modules $V^{(a)}$ and $U^{(a)}$ are cohomologically-trivial.\n\nIn addition, for all $a$ the natural isomorphisms\n\\begin{equation}\\label{iso2} U^{(a)}\/U^{(a+1)} \\cong\\wp^a\/\\wp^{a+1},\\,\\,\\,\\,\\,\\,\\,\\, \\bigl(\\wp^a\/\\wp^{a+1}\\bigr)^d \\cong V^{(a)}\/V^{(a+1)}\\end{equation}\nimply that these quotient modules are also cohomologically-trivial. By using the tautological exact sequences for each $a < i_0$\n\\begin{equation}\\label{filter1} \\begin{cases} &0 \\to U^{(a+1)}\/U^{(i_0)} \\to U^{(a)}\/U^{(i_0)} \\to U^{(a)}\/U^{(a+1)} \\to 0,\\\\\n &0 \\to V^{(a+1)}\/V^{(i_0)} \\to V^{(a)}\/V^{(i_0)} \\to V^{(a)}\/V^{(a+1)} \\to 0\\end{cases}\\end{equation}\none can therefore deduce (by a downward induction on $a$, starting at $i_0$) that all modules $U^{(a)}$ and $V^{(a)}$ are cohomologically-trivial. 
This proves claim (i).\n\nIn addition, by repeatedly using the exact sequences (\\ref{filter1}) and isomorphisms (\\ref{iso2}) one computes that $d\\cdot \\chi_{\\Gamma,p}((U^{(1)}\/U^{(a)})[0],0)$ is equal to\n\n\\begin{align*} d\\cdot\\sum_{b=1}^{b=a-1}\\chi_{\\Gamma,p}((U^{(b)}\/U^{(b+1)})[0],0) = \\, &\\sum_{b=1}^{b=a-1}\\chi_{\\Gamma,p}(\\bigl((U^{(b)}\/U^{(b+1)})\\bigr)^d[0],0)\\\\\n = \\, &\\sum_{b=1}^{b=a-1}\\chi_{\\Gamma,p}((V^{(b)}\/V^{(b+1)})[0],0)\\\\\n = \\, &\\chi_{\\Gamma,p}((V^{(1)}\/V^{(a)})[0],0),\\end{align*}\nas required to prove claim (ii).\n\nFinally, claim (iii) is a direct consequence of the isomorphisms (\\ref{iso1}).\n\\end{proof}\n\n\n\n\\begin{lemma}\\label{twist dependence} Let $B$ and $B'$ be abelian varieties over $M$, of the same dimension $d$, that have good ordinary reduction and are such that $\\tilde B(\\kappa_{N^\\Delta})[p^\\infty]$ and $\\tilde B'(\\kappa_{N^\\Delta})[p^{\\infty}]$ both vanish. Then the following claims are valid.\n\n\\begin{itemize}\n\\item[(i)] The $\\ZZ_p[\\Gamma]$-modules $B(N)^\\wedge_p$ and $B'(N)^\\wedge_p$ are cohomologically-trivial and the formal group logarithms induce an isomorphism of $\\QQ_p[\\Gamma]$-modules\n\\[ \\QQ_p\\cdot B(N)^\\wedge_p \\xrightarrow{{\\rm log}_{B,N}} N^d \\xrightarrow{{\\rm exp}_{B',N}} \\QQ_p\\cdot B'(N)^\\wedge_p.\\]\n\n\\item[(ii)] If the reduced varieties $\\tilde B$ and $\\tilde B'$ are isomorphic, then in $K_0(\\ZZ_p[\\Gamma],\\QQ_p[\\Gamma])$ one has\n\\[ \\chi_{\\Gamma,p}(B(N)^\\wedge_p[0] \\oplus B'(N)^\\wedge_p[-1], {\\rm exp}_{B',N}\\circ {\\rm log}_{B,N}) = 0.\\]\n\\end{itemize}\n\\end{lemma}\n\n\\begin{proof} Claim (i) follows directly from Lemma \\ref{useful prel}(iii).\n\nTo prove claim (ii) we write $M^{\\rm un}$ for the maximal unramified extension of $M$, $\\hat M^{\\rm un}$ for its completion and $\\mathcal{O}$ for the valuation ring of $\\hat M^{\\rm un}$. We write $\\varphi_M$ for the Frobenius automorphism in $G_{M^{\\rm un}\/M}$.\n\nThen the formal group $\\hat B$ of $B$ is toroidal and so there exists an isomorphism of formal groups $f_1: \\hat B \\cong \\mathbb{G}_m^d$ over $\\mathcal{O}$. If we let $\\varphi_M$ act on the coefficients of $f_1$, then one has $f_1^{\\varphi_M} = u\\circ f_1$, where $u$ is the `twist matrix' of $B$. Thus $u$ belongs to ${\\rm GL}_d(\\ZZ_p)$ and depends only on $\\tilde B$ (by the argument of Mazur in \\cite[p. 216]{m}).\n\nIn particular, if $\\tilde B'$ is isomorphic to $\\tilde B$, then there exists an isomorphism of formal groups $f_2: \\hat B' \\cong \\mathbb{G}_m^d$ over $\\mathcal{O}$ for which one also has $f_2^{\\varphi_M} = u\\circ f_2$.\n\nWe now consider the isomorphism $\\phi:= f_2^{-1}\\circ f_1$ from $\\hat B$ to $\\hat B'$ over $\\mathcal{O}$. 
We fix an element $x$ of $\\mathcal{O}_N\\mathcal{O}^{\\rm un}$ and an element $g$ of $G_{N^{\\rm un}\/M}$ whose image in $G_{M^{\\rm un}\/M}$ is an integral power $\\varphi_M^a$ of $\\varphi_M$.\n\n\nThen one has \n\\begin{multline*} g(\\phi(x)) = \\phi^g(g(x)) = ((f_2^{\\varphi_M^a})^{-1}\\circ f_1^{\\varphi_M^a})(g(x))\\\\ = ((u^a\\circ f_2)^{-1}\\circ (u^a\\circ f_1))(g(x)) =\n(f_2^{-1}\\circ f_1)(g(x)) = \\phi(g(x)).\\end{multline*}\n\nThis means that $\\phi$ is an isomorphism of $\\ZZ_p[[G_{N^{\\rm un}\/M}]]$-modules and so restricts to give an isomorphism of $\\Gamma$-modules\n\\[ \\hat B(N) = \\hat B(N^{\\rm un})^{G_{N^{\\rm un}\/N}} \\cong \\hat B'(N^{\\rm un})^{G_{N^{\\rm un}\/N}} = \\hat B'(N).\\]\n\nUpon passing to pro-$p$-completions, and noting that the groups $\\tilde B(\\kappa_{N})[p^\\infty]$ and $\\tilde B'(\\kappa_{N})[p^\\infty]$ vanish, we deduce that $\\phi$ induces an isomorphism of $\\ZZ_p[\\Gamma]$-modules\n\\[ \\phi_p: B(N)^\\wedge_p = \\hat B(N)^\\wedge_p\\cong \\hat B'(N)^\\wedge_p = B'(N)^\\wedge_p.\\]\n\nThere is also a commutative diagram of formal group isomorphisms\n\\[\n\\begin{CD} \\hat B @> f_1 >> \\mathbb{G}_m^d @> (f_2)^{-1} >> \\hat B' \\\\\n@V {\\rm log}_B VV @V {\\rm log}_{\\mathbb{G}_m} VV @VV {\\rm log}_{B'}V\\\\\n\\mathbb{G}_a^d @> \\times f_1'(0) >> \\mathbb{G}_a^d @> \\times f_2'(0)^{-1} >> \\mathbb{G}_a^d\\end{CD}\\]\n\nTaken together with the isomorphism $\\phi_p$, this diagram implies that the element\n\\[ \\chi_{\\Gamma,p}(B(N)^\\wedge_p[0] \\oplus B'(N)^\\wedge_p[-1], {\\rm exp}_{B',N}\\circ {\\rm log}_{B,N}) \\]\nis equal to the image under $\\partial_{\\Gamma,p}$ of the automorphism of the $\\QQ_p[\\Gamma]$-module $N^d$ that corresponds to the matrix\n $f_2'(0)^{-1}f_1'(0)$.\n\nIt is thus enough to note that, since the latter matrix belongs to ${\\rm GL}_d(\\mathcal{O}_M)$ it is represented by a matrix in ${\\rm GL}_{d[M:\\QQ_p]}(\\ZZ_p[\\Gamma])$ and so belongs to the kernel of $\\partial_{\\Gamma,p}$, as required. \\end{proof}\n\n\n\n\n\n\\subsection{Elliptic curves}\\label{ell curve sect} If $B$ is an elliptic curve over $\\QQ_p$, then it is possible in certain cases to formulate a precise conjectural formula for the elements $R_{N\/M}(\\tilde B)$ defined above.\n\nThis aspect of the theory will be considered in detail elsewhere. 
However, to give a brief idea of the general approach we fix an isomorphism of formal groups $f: \\hat{B} \\lra \\mathbb{G}_m$ as in the proof of Lemma \\ref{twist dependence}.\n\nThen, with $\\varphi$ denoting the Frobenius automorphism in $G_{\\Qu_p^{\\rm un} \/ \\Qp}$, the twist matrix of $\\tilde B$ is the unique element $u$ of $\\ZZ_p^\\times$ for which the composite $f^\\varphi\\circ f^{-1}$ is equal to the endomorphism $[u]_{\\mathbb{G}_m}$ of $\\mathbb{G}_m$.\n\n\\begin{lemma} $\\hat{B}$ is a Lubin-Tate formal group with respect to the parameter $u^{-1}p$.\n\\end{lemma}\n\\begin{proof} By using the equalities\n\\[\nf^\\varphi \\circ [u^{-1}p]_{\\hat{B}} \\circ f^{-1} = [u^{-1}p]_{\\mathbb{G}_m} \\circ f^\\varphi \\circ f^{-1} = [p]_{\\mathbb{G}_m}\n\\]\none computes that\n\n\\begin{eqnarray*}\n [u^{-1}p]_{\\hat{B}} &=& \\left( f^\\varphi \\right)^{-1} \\circ [p]_{\\mathbb{G}_m} \\circ f \\\\\n &\\equiv& \\left( f^\\varphi \\right)^{-1} \\circ X^p \\circ f \\pmod{p} \\\\\n&=& \\left( f^{-1} \\right)^{\\varphi} \\left( f(X)^p \\right) \\\\\n&=& \\left( f^{-1}(f(X)) \\right)^p \\\\\n&=& X^p.\n\\end{eqnarray*}\n\nThus, since it is well known that $ [u^{-1}p]_{\\hat{B}} \\equiv u^{-1}p X \\pmod{\\deg 2}$, it follows that $[u^{-1}p]_{\\hat{B}} $ is a Lubin-Tate power series with respect to $u^{-1}p$, as claimed.\n\\end{proof}\n\n\n\nWe write $\\chi^{\\rm ur}$ for the restriction to $G_M$ of the character\n\\[\n\\chi_\\Qp^{\\rm ur} \\colon G_\\Qp \\lra \\Ze_p^\\times, \\quad \\varphi \\mapsto u^{-1}.\n\\]\n\nWe assume that the restriction of $\\chi^{\\rm ur}$ to $G_N$ is non-trivial and write $T$ for the (unramified) twist $\\Zp(\\chi^{\\rm ur})(1)$ of the representation $\\Zp(1)$.\n\nThen, by \\cite[Prop.~2.5]{IV} or \\cite[Lem. 3.2.1]{BC2}, the complex $R\\Gamma(N, T)$ is acyclic outside\ndegrees one and two and there are canonical identifications\n\\[\n H^i(N, T) = \\begin{cases} \\hat{B}(\\frp_N), &\\text{ if $i=1$,}\\\\\n \\bigl( \\Zp \/ p^{\\omega_N} \\Zp \\bigr) (\\chi^{ur}),&\\text{ if $i=2$,}\\end{cases}\\]\nwhere $\\omega_N$ denotes the $p$-adic valuation of the element $1 - \\chi^{ur}(\\varphi^{f_{N\/\\Qp}})$.\n\nThese explicit descriptions allow one to interpret $R_{N\/M}(\\tilde B)$ in terms of differences between elements that occur in the formulations of the local epsilon constant conjecture for the representations $\\ZZ_p(1)$ and $T$, as studied by Benois and Berger \\cite{benoisberger}, Bley and Cobbe \\cite{BC2} and Izychev and Venjakob \\cite{IV}.\n\nIn this way one finds that the (assumed) compatibility of these conjectures for the representations $\\ZZ_p(1)$ and $T$ implies the following equality\n\n\\begin{equation}\\label{curve local eps conj} R_{N\/M}(\\tilde B) = \\delta_{\\Gamma,p}\\bigl(\\bigl(\\sum_{\\chi}u^{f_{M\/\\Qp}(s_M\\chi(1) + m_\\chi)}e_\\chi\\bigr)\n\\frac{^\\dagger((1 - (u\\cdot\\varphi^{-1})^{f_{M\/\\Qp}})e_{\\Gamma_0})}\n{^\\dagger((|\\kappa_M| - (u\\cdot\\varphi^{-1})^{-f_{M\/\\Qp}})e_{\\Gamma_0})}\\bigr)\n\\end{equation}\nwhere the conductor of each character $\\chi$ is $\\pi_M^{m_\\chi}\\calO_M$ and the different of $M\/\\QQ_p$ is $\\pi_M^{s_M}\\calO_M$.\n\nIn particular, the results of \\cite{BC2} imply that the equality (\\ref{curve local eps conj}) is unconditionally valid for certain natural families of wildly ramified extensions $N\/M$.\n\n\\section{Classical Selmer complexes and refined BSD}\\label{tmc}\n\nIn this section we study ${\\rm BSD}(A_{F\/k})$ under the assumption that $A$ and $F\/k$ satisfy the following list of hypotheses.\n\nIn 
this list we fix an {\\em odd} prime number $p$ and an intermediate field $K$ of $F\/k$ such that $\\Gal(F\/K)$ is a Sylow $p$-subgroup of $G$:\n\\begin{itemize}\n\\item[(H$_1$)] The Tamagawa number of $A_{K}$ at each place in $S_K^A$ is not divisible by $p$;\n\\item[(H$_2$)] $S_K^A \\cap S_K^p = \\emptyset$ (that is, no place of bad reduction for $A_{K}$ is $p$-adic);\n\\item[(H$_3$)] For all $v$ in $S_K^p$ above a place in $S_k^F$ the reduction of $A_{K}$ at $v$ is ordinary and $A(\\kappa_v)[p^\\infty]$ vanishes;\n\\item[(H$_4$)] For all $v$ in $S_K^f\\setminus S_K^p$ above a place in $S_k^F$ the group $A(\\kappa_v)[p^\\infty]$ vanishes;\n\\item[(H$_5$)] $S_k^A\\cap S_k^F = \\emptyset$ (that is, no place of bad reduction for $A$ is ramified in $F$);\n\\item[(H$_6$)] $\\sha(A_F)$ is finite.\n\\end{itemize}\n\n\\begin{remark}\\label{satisfying H} {\\em For a fixed abelian variety $A$ over $k$ and extension $F\/k$ the hypotheses (H$_1$) and (H$_2$) are clearly satisfied by all but finitely many odd primes $p$,\n(H$_4$) and (H$_5$) constitute a mild restriction on the ramification of $F\/k$ and (H$_6$) coincides with the claim of ${\\rm BSD}(A_{F\/k})$(i). However, the hypothesis~(H$_3$) excludes the case that is called `anomalous' by Mazur in~\\cite{m} and, for a given $A$, there may be infinitely many primes $p$ for which there are $p$-adic places $v$ at which $A$ has good ordinary reduction but $A(\\kappa_v)[p]$ does not vanish.\nNevertheless, it is straightforward to describe examples of abelian varieties $A$ for which there are only finitely many such anomalous places -- see, for example, the result of Mazur and Rubin in~\\cite[Lem. A.5]{mr}.}\n\\end{remark}\n\n\\begin{remark} {\\em The validity of each of the hypotheses listed above is equivalent\nto the validity of the corresponding hypothesis with $A$ replaced by $A^t$ and we will often use this fact without explicit comment.}\n\\end{remark}\n\nIn this section we first verify and render fully explicit the computation (\\ref{bksc cohom}) of the cohomology of the Selmer complex introduced in Definition \\ref{bkdefinition}, thereby extending the computations given by Wuthrich and the present authors in \\cite[Lem. 4.1]{bmw}.\n\nSuch an explicit computation will be useful in the proof of the main result of \\S\\ref{comparison section} below. 
We shall also use Lemma \\ref{useful prel} to ensure that, under the hypotheses listed above, this complex belongs to the category $D^{\\rm perf}(\\ZZ_p[G])$.\n\nIn the main result of this section we shall then re-interpret ${\\rm BSD}(A_{F\/k})$ in terms of invariants that can be associated to the classical Selmer complex under the above listed hypotheses.\n\n\\subsection{The classical Selmer complex}\\label{explicitbk} We fix an odd prime number $p$ and a finite set of non-archimedean places $\\Sigma$ of $k$ with\n\\[ S_k^p\\cup (S_k^F\\cap S_k^f) \\cup S_k^A \\subseteq \\Sigma.\\]\n\nFor any such set $\\Sigma$ the classical Selmer complex ${\\rm SC}_{\\Sigma,p}(A_{F\/k})$ is defined as the mapping fibre of the morphism (\\ref{bkfibre}) in $D(\\ZZ_p[G])$.\n\nWe further recall from Lemma \\ref{independenceofsigma} that this complex is, in a natural sense, independent of the choice of $\\Sigma$ and so will be abbreviated to ${\\rm SC}_{p}(A_{F\/k})$.\n\nIn the next result we describe consequences of Lemma \\ref{useful prel} for this complex and also give a description of its cohomology that will be useful in the computations that are carried out in \\S \\ref{comparison section} below.\n\n\\begin{proposition}\\label{explicitbkprop}\n\nSet $C:= {\\rm SC}_{p}(A_{F\/k})$.\nThen the following claims are valid.\n\\begin{itemize}\n\\item[(i)] The complex $C$ is acyclic outside degrees one, two and three and there is a canonical identification $H^3(C)=A(F)[p^{\\infty}]^\\vee$ and a canonical inclusion of $H^1(C)$ into $H^1\\bigl(\\mathcal{O}_{F,S_k^\\infty(F)\\cup \\Sigma(F)},T_{p}(A^t)\\bigr)$.\n\\item[(ii)] Assume that $A$, $F\/k$ and $p$ satisfy the hypotheses (H$_1$)-(H$_5$). Then for every non-archimedean place $v$ of $k$ the $G$-modules $A^t(F_v)^\\wedge_p$ and $\\ZZ_p\\otimes_\\ZZ A^t(F_v)$ are cohomologically-trivial. In addition, the module $A^t(F_v)^\\wedge_p$ vanishes for every place $v$ in $S_k^F\\setminus S_k^p$.\n\nIn particular, the complex $C$ belongs to $D^{\\rm perf}(\\ZZ_p[G])$.\n\\item[(iii)] Assume that $A$, $F\/k$ and $p$ satisfy the hypotheses (H$_1$)-(H$_5$). Then for each normal subgroup $J$ of $G$ there is a natural isomorphism in $D^{\\rm perf}(\\ZZ_p[G\/J])$ of the form\n\\[ \\ZZ_p[G\/J]\\otimes^{\\mathbb{L}}_{\\ZZ_p[G]}{\\rm SC}_{p}(A_{F\/k}) \\cong {\\rm SC}_{p}(A_{F^J\/k}).\\]\n\\item[(iv)] If $\\sha(A_F)$ is finite, then $H^1(C)$ identifies with the image of the injective Kummer map $A^t(F)_p\\to H^1\\bigl(\\mathcal{O}_{F,S_k^\\infty(F)\\cup \\Sigma(F)},T_{p}(A^t)\\bigr)$ and there is a canonical isomorphism of $H^2(C)$ with $\\Sel_p(A_F)^\\vee$ (that is described in detail in the course of the proof below).\n \\end{itemize}\\end{proposition}\n\n\n\n\\begin{proof}\nThroughout this argument we abbreviate the rings $\\mathcal{O}_{k,S_k^\\infty\\cup\\Sigma}$ and $\\mathcal{O}_{F,S_k^\\infty(F)\\cup \\Sigma(F)}$ to $U_{k}$ and $U_{F}$ respectively.\n\nSince the complexes $R\\Gamma (k_v, T_{p,F}(A^t))$ for $v$ in $\\Sigma$ are acyclic in degrees greater than two, the first assertion of claim (i) follows directly from the definition of $C$ as the mapping fibre of the morphism (\\ref{bkfibre}).\n\nIn addition, the description of the complex ${\\rm SC}_{S_k^\\infty\\cup\\Sigma}(A_{F\/k},X)$ (for any module $X$ as in Proposition \\ref{prop:perfect}) as the mapping fibre of the morphism (\\ref{selmer-finite tri}) in $D(\\ZZ_p[G])$ also implies that $H^3(C)$ is canonically isomorphic to $H^3({\\rm SC}_{S_k^\\infty\\cup\\Sigma}(A_{F\/k},X))$. 
Proposition \\ref{prop:perfect}(ii) thus implies that $H^3(C)$ identifies with $A(F)[p^\\infty]^\\vee$.\n\nFinally, the explicit definition of $C$ as a mapping fibre (combined with Lemma \\ref{v not p}(i)) also gives an associated canonical long exact sequence\n\\begin{multline}\\label{longexact}0 \\to H^1(C) \\to H^1\\bigl(U_F,T_{p}(A^t)\\bigr) \\to\n\\bigoplus\\limits_{w'\\in S_F^p}T_p\\bigl(H^1(F_{w'},A^t)\\bigr) \\stackrel{\\delta}{\\to} H^2(C)\\\\\n \\to H^2\\bigl(U_{F},T_{p}(A^t)\\bigr) \\to \\bigoplus\\limits_{ w'\\in \\Sigma(F)}H^2\\bigl(F_{w'},T_{p}(A^t)\\bigr) \\to\n H^3(C) \\to 0\n\\end{multline}\nin which the third and sixth arrows are the canonical maps induced by localisation. In this sequence each term $T_p\\bigl(H^1(F_{w'},A^t)\\bigr)$ denotes the $p$-adic Tate module of $H^1(F_{w'},A^t)$, which we have identified with the quotient of $H^1(F_{w'},T_{p}(A^t))$ by the image of $A^t(F_{w'})_p^\\wedge$ under the canonical Kummer map.\n\n\nIn particular, the sequence (\\ref{longexact}) gives a canonical inclusion $H^1(C) \\subseteq H^1\\bigl(U_{F},T_{p}(A^t)\\bigr)$ and this completes the proof of claim (i).\n\n\n\n\n\nTurning to claim (ii) we note first that if $v$ does not belong to $S_k^A\\cup S_k^F$ then the cohomological-triviality of $A^t(F_v)^\\wedge_p$ follows directly from Lemma \\ref{useful prel}(ii).\n\nIn addition, if $v$ is $p$-adic, then $A^t(F_v)^\\wedge_p$ is cohomologically-trivial as a consequence of Lemma \\ref{useful prel}(ii) and (iii) and the given hypotheses (H$_2$) and (H$_3$).\n\nIt suffices therefore to consider the $G$-modules $A^t(F_v)^\\wedge_p$ for places in $(S_k^A\\cup S_k^F)\\setminus S_k^p$. For each such $v$ we write $C_v(A_F)$ for the direct sum over $w'$ in $S_k^v$ of the modules $H^0(F_{w'}, H^1(I_{w'}, T_{p}(A^t))_{\\rm tor})$ that occur in the exact sequence of Lemma \\ref{v not p}(ii).\n\nNow if $v$ belongs to $S_k^A$, then (H$_5$) implies $v$ is unramified in $F\/k$ and so the $\\ZZ_p[G]$-module\n$$T_{p,F}(A^t)^{I_v}\\cong\\ZZ_p[G]\\otimes_{\\ZZ_p}T_p(A^t)^{I_v}$$\nis free. In this case therefore, the natural exact sequence\n$$0\\to T_{p,F}(A^t)^{I_v}\\stackrel{1-\\Phi_v^{-1}}{\\longrightarrow}T_{p,F}(A^t)^{I_v}\\to H^1(\\kappa_v,T_{p,F}(A^t)^{I_v})\\to 0$$\nimplies that the $G$-module $H^1(\\kappa_v,T_{p,F}(A^t)^{I_v})$ is cohomologically-trivial.\n\nSince the conditions (H$_1$) and (H$_5$) combine in this case to imply that $C_v(A_F)$ vanishes (as in the proof of \\cite[Lem. 4.1(ii)]{bmw}) the cohomological-triviality of $A^t(F_v)^\\wedge_p$ therefore follows from the exact sequence in Lemma \\ref{v not p}(ii).\n\nFinally, we claim that $A^t(F_v)^\\wedge_p$ vanishes for each $v$ that belongs to $S_k^F\\setminus S_k^p$. To see this we note that, in this case, (H$_5$) implies $v$ does not belong to $S_k^A$ so that $C_v(A_F)$ vanishes whilst the conditions (H$_4$) and (H$_5$) also combine (again as in the proof of \\cite[Lem. 4.1(i)]{bmw}) to imply $H^1(\\kappa_v,T_{p,F}(A^t)^{I_v})$ vanishes. From the exact sequence of Lemma \\ref{v not p}(ii) we can therefore deduce that $A^t(F_v)^\\wedge_p$ vanishes, as claimed.\n\nAt this stage we have proved that for every non-archimedean place $v$ of $k$, the $G$-module $A^t(F_v)^\\wedge_p$ is cohomologically-trivial. 
Since each $\\ZZ_p[G]$-module $A^t(F_v)^\\wedge_p$ is finitely generated this implies that each complex $A^t(F_v)^\\wedge_p[-1]$ is an object of $D^{\\rm perf}(\\ZZ_p[G])$.\n\nGiven this fact, the final assertion of claim (ii) is a consequence of the definition of $C$ as the mapping fibre of (\\ref{bkfibre}) and the fact that, since $p$ is odd, the complexes $R\\Gamma(U_k,T_{p,F}(A^t))$ and $R\\Gamma (k_v, T_{p,F}(A^t))$ for each $v$ in $\\Sigma$ each belong to $D^{\\rm perf}(\\ZZ_p[G])$ (as a consequence, for example, of \\cite[Prop. 1.6.5(2)]{fukaya-kato}).\n\nTo complete the proof of claim (ii) we fix a non-archimedean place $v$ of $k$ and consider instead the $G$-module $\\ZZ_p\\otimes_\\ZZ A(F_v)$. We recall that there exists a short exact sequence of $G$-modules of the form\n\\begin{equation*}\\label{finalassertion}0\\to\\mathcal{O}_{F,v}^d\\to A(F_v)\\to C\\to 0\\end{equation*}\nin which the group $C$ is finite. From this exact sequence one may in turn derive short exact sequences\n\\begin{equation}\\label{completions}0\\to((\\mathcal{O}_{F,v})^\\wedge_p)^d\\to A(F_v)^\\wedge_p\\to C^\\wedge_p\\to 0\\end{equation} and\n\\begin{equation}\\label{tensorproducts}0\\to(\\ZZ_p\\otimes_\\ZZ\\mathcal{O}_{F,v})^d\\to\\ZZ_p\\otimes_\\ZZ A(F_v)\\to \\ZZ_p\\otimes_\\ZZ C\\to 0.\\end{equation}\n\nWe assume first that $v$ is $p$-adic. In this case the canonical maps $\\ZZ_p\\otimes_\\ZZ\\mathcal{O}_{F,v}\\to(\\mathcal{O}_{F,v})^\\wedge_p$ and $\\ZZ_p\\otimes_\\ZZ C\\to C^\\wedge_p$ are bijective and hence the exactness of the above sequences implies that the canonical map $\\ZZ_p\\otimes_\\ZZ A(F_v)\\to A(F_v)^\\wedge_p$ is also an isomorphism. The $G$-module $\\ZZ_p\\otimes_\\ZZ A(F_v)$ is thus cohomologically-trivial, as required.\n\nWe finally assume that $v$ is not $p$-adic. In this case, the exact sequence (\\ref{completions}) gives an isomorphism $$A(F_v)^\\wedge_p\\cong C^\\wedge_p=\\ZZ_p\\otimes_\\ZZ C$$ and thus from the exact sequence (\\ref{tensorproducts}) we derive a short exact sequence\n$$0\\to(\\ZZ_p\\otimes_\\ZZ\\mathcal{O}_{F,v})^d\\to\\ZZ_p\\otimes_\\ZZ A(F_v)\\to A(F_v)^\\wedge_p\\to 0.$$\nSince we have already established the cohomological-triviality of $A(F_v)^\\wedge_p$, we know that the $G$-module $\\ZZ_p\\otimes_\\ZZ A(F_v)$ is cohomologically-trivial if and only if the $G$-module $\\ZZ_p\\otimes_\\ZZ\\mathcal{O}_{F,v}$ is cohomologically-trivial. But the latter module is naturally a $\\QQ$-vector-space, and therefore is indeed cohomologically-trivial. 
This completes the proof of claim (ii).\n\nTurning to claim (iii) we note that the cohomological-triviality of the $\\ZZ_p[G]$-module $A^t(F_v)^\\wedge_p$ for each $v$ in $\\Sigma$ (as is proved by claim (ii) under the given hypotheses) implies that there are natural isomorphisms in $D(\\ZZ_p[G\/J])$ of the form\n\\begin{align*} \\ZZ_p[G\/J]\\otimes^{\\mathbb{L}}_{\\ZZ_p[G]}A^t(F_v)^\\wedge_p[-1] \\cong\\, &(\\ZZ_p[G\/J]\\otimes_{\\ZZ_p[G]}A^t(F_v)^\\wedge_p)[-1]\\\\\n\\cong\\, &H_0(J,A^t(F_v)^\\wedge_p)[-1]\\\\\n\\cong\\, &H^0(J,A^t(F_v)^\\wedge_p)[-1]\\\\\n= \\, & A^t(F^J_v)^\\wedge_p[-1],\\end{align*}\nwhere the third isomorphism is induced by the map sending each element $x$ of $A^t(F_v)^\\wedge_p$ to its image under the action of $\\sum_{g \\in J}g$.\n\nThe existence of the isomorphism in claim (iii) is then deduced by combining these isomorphisms together with the explicit definitions of the complexes ${\\rm SC}_{p}(A_{F\/k})$ and ${\\rm SC}_{p}(A_{F^J\/k})$ as mapping fibres and the fact (recalled, for example, from \\cite[Prop. 1.6.5(3)]{fukaya-kato}) that there are standard Galois descent isomorphisms in $D(\\ZZ_p[G\/J])$ of the form\n\\begin{equation}\\label{global descent} \\ZZ_p[G\/J]\\otimes^{\\mathbb{L}}_{\\ZZ_p[G]}R\\Gamma(U_k,T_{p,F}(A^t)) \\cong R\\Gamma(U_k,T_{p,F^J}(A^t))\\end{equation}\nand\n\\begin{equation}\\label{local descent} \\ZZ_p[G\/J]\\otimes^{\\mathbb{L}}_{\\ZZ_p[G]}R\\Gamma (k_v, T_{p,F}(A^t))\\cong R\\Gamma (k_v, T_{p,F^J}(A^t))\\end{equation}\nfor each $v$ in $\\Sigma$.\n\nTo prove claim (iv) we assume $\\sha(A_F)$ is finite and first prove that\nthe image of $H^1(C)$ in $H^1\\bigl(U_{F},T_{p}(A^t)\\bigr)$ coincides with the image of the injective Kummer map $$\\kappa:A^t(F)_p\\to H^1\\bigl(U_{F},T_{p}(A^t)\\bigr).$$\nIn order to do so,\nwe identify ${\\rm cok}(\\kappa)$ with the $p$-adic Tate module $T_p\\bigl(H^1\\bigl(U_F,A^t\\bigr)\\bigr)$ of $H^1\\bigl(U_F,A^t\\bigr)$.\n\nIt is clear that any element of ${\\rm im}(\\kappa)$ is mapped to the image of $A^t(F_{w'})_p^\\wedge$ in $H^1\\bigl(F_{w'},T_{p}(A^t)\\bigr)$ by localising at any place $w'$ in $S_F^p$.\n\nThe exactness of (\\ref{longexact}) therefore implies that ${\\rm im}(\\kappa)$ is contained in $H^1(C)$ and furthermore that we have a commutative diagram\nwith exact rows\n\\begin{equation}\\label{sha diag} \\xymatrix{\n0 \\ar[r] & A^t(F)_p \\ar[r] \\ar[d]^{\\kappa} & H^1\\bigl(U_F,T_{p}(A^t)\\bigr) \\ar[r] \\ar@{=}[d] & T_p\\bigl(H^1\\bigl(U_F,A^t\\bigr)\\bigr) \\ar[r] \\ar[d] & 0\\\\\n0 \\ar[r] & H^1(C) \\ar[r] & H^1\\bigl(U_F,T_{p}(A^t)\\bigr) \\ar[r] & \\bigoplus\\limits_{w'\\in S_F^p}T_p\\bigl(H^1(F_{w'},A^t)\\bigr),\n}\\end{equation}\nwhere the right-most vertical arrow is induced by the localisation maps.\nBut the assumed finiteness of $\\sha(A_F)$ (combined with \\cite[Ch. I, Cor 6.6]{milne}) then implies that this arrow is injective, and therefore the Snake Lemma implies that $\\im(\\kappa)=H^1(C)$, as required.\n\n\n\nTo conclude the proof of claim (iv) we use the canonical exact triangle\n\\begin{multline}\\label{compacttriangle}R\\Gamma_c\\bigl(U_F,T_p(A^t)\\bigr)\\to R\\Gamma\\bigl(U_k,T_{p,F}(A^t)\\bigr)\\to \\bigoplus\\limits_{ v\\in S_k^\\infty\\cup\\Sigma} R\\Gamma(k_v,T_{p,F}(A^t))\\\\ \\to R\\Gamma_c\\bigl(U_F,T_p(A^t)\\bigr)[1]\\end{multline}\nin $D(\\ZZ_p[G])$. 
We also write $\\Delta$ for the canonical composite homomorphism\n$$\\bigoplus\\limits_{ v\\in \\Sigma}A^t(F_v)_p^\\wedge\\to\\bigoplus\\limits_{ w'\\in \\Sigma(F)}H^1(F_{w'},T_{p}(A^t))\\to H^2_c(U_F,T_p(A^t)),$$\nwith the first arrow given by the local Kummer maps and the second arrow given by the long exact cohomology sequence associated to the triangle (\\ref{compacttriangle}). We then claim that there is a canonical commutative diagram\n\n\\begin{equation}\\label{Selmerdiagram}\\xymatrix{\nH^2_c\\bigl(U_F,T_p(A^t)\\bigr) \\ar[d] \\ar@{=}[r] &\n H^2_c\\bigl(U_F,T_{p}(A^t)\\bigr) \\ar[d] \\ar[r]^-{w\\circ s^{-1}} &\nH^1\\bigl(U_F,A[p^\\infty]\\bigr)^\\vee\n \\ar[d] \\\\\nH^2(C) \\ar^-{\\sim}[r] &\n\\cok(\\Delta) \\ar[r]^-{\\sim} &\n\\Sel_p(A_F)^\\vee\n.\n}\\end{equation}\nHere the second and third vertical arrows are the canonical projection maps and $w$ and $s$ are the isomorphisms defined in (\\ref{themapw}) and (\\ref{themaps}) in Appendix \\ref{exp rep section} below.\n\nThe composition of the horizontal arrows in the bottom row of the diagram (\\ref{Selmerdiagram}) will then define the desired canonical isomorphism of $H^2(C)$ with $\\Sel_p(A_F)^\\vee$.\n\nTo verify the existence of the diagram (\\ref{Selmerdiagram}) we use the canonical exact sequence\n\\begin{multline}\\label{comparingsequence}0\\to\\bigoplus\\limits_{ v\\in S_k^\\infty} H^0(k_v,T_{p,F}(A^t)) \\to H^1_c\\bigl(U_F,T_p(A^t)\\bigr)\\to H^1(C)\\\\ \\to\\bigoplus\\limits_{ v\\in \\Sigma}A^t(F_v)_p^\\wedge\\stackrel{\\Delta}{\\to}H^2_c\\bigl(U_F,T_p(A^t)\\bigr)\\to H^2(C)\\to 0\\end{multline} associated to the exact triangle (\\ref{comparingtriangles}).\n(Here we have used the fact that, as $p$ is odd, the group $H^i(k_v,T_{p,F}(A^t))$ vanishes for every $v$ in $S_k^\\infty$ and every $i > 0$.)\n\nThis exact sequence induces the desired canonical isomorphism of $H^2(C)$ with $\\cok(\\Delta)$ and, by construction, the last map occurring in the sequence gives a vertical map making the first square of the diagram (\\ref{Selmerdiagram}) commute.\n\nIt is finally straightforward, using the commutativity of the diagram in Corollary \\ref{Tatepoitouexplicit} below, to deduce that the isomorphism $$w\\circ s^{-1}:H^2_c\\bigl(U_{F},T_{p}(A^t)\\bigr) \\to\nH^1\\bigl(U_{F},A[p^\\infty]\\bigr)^\\vee$$ induces an isomorphism $$\\cok(\\Delta)\\stackrel{\\sim}{\\to}\\Sel_p(A_F)^\\vee.$$ This induced isomorphism completes the construction of the diagram (\\ref{Selmerdiagram}) and thus also the proof of claim (iv).\\end{proof}\n\n\n\n\n\n\nIn the next result we shall (exceptionally for \\S\\ref{tmc}) consider the prime $2$ and describe an analogue of Proposition \\ref{explicitbkprop} in this case.\n\n\\begin{proposition}\\label{explicitbkprop2} The following claims are valid for the complex $C:= {\\rm SC}_{2}(A_{F\/k})$.\n\\begin{itemize}\n\\item[(i)] $C$ is acyclic outside degrees one, two and three.\n\\item[(ii)] If $\\sha(A_F)$ is finite, then $H^1(C)$ identifies with the image of the injective Kummer map $A^t(F)_2\\to H^1\\bigl(\\mathcal{O}_{F,S_k^\\infty(F)\\cup \\Sigma(F)},T_{2}(A^t)\\bigr)$ and there exists a\n canonical homomorphism $\\Sel_2(A_F)^\\vee \\to H^2(C)$, the kernel and cokernel of which are both finite.\n\\item[(iii)] The module $H^3(C)$ is finite.\n\\end{itemize}\n\\end{proposition}\n\n\\begin{proof} Claim (i) is established by the same argument that is used to prove the first assertion of Proposition \\ref{explicitbkprop}(i).\n\nIn a similar way, the analysis concerning the diagram (\\ref{sha diag}) is also valid in the case 
$p=2$ and proves the first assertion of claim (ii).\n\nTo prove the remaining claims we set $U_F := \\mathcal{O}_{F,S_k^\\infty(F)\\cup \\Sigma(F)}$ and note that the long exact cohomology sequence of the exact triangle (\\ref{comparingtriangles}) gives rise in this case to an exact sequence\n\\begin{multline*} \\bigoplus\\limits_{ v\\in \\Sigma}A^t(F_v)_2^\\wedge \\oplus\\bigoplus\\limits_{ v\\in S_k^\\infty} H^1(k_v,T_{2,F}(A^t)) \\to H^2_c\\bigl(U_F,T_2(A^t)\\bigr) \\to H^2(C)\\\\\n\\to \\bigoplus\\limits_{ v\\in S_k^\\infty} H^2(k_v,T_{2,F}(A^t)) \\to H^3_c(U_F,T_{2}(A^t)) \\to H^3(C) \\to \\bigoplus_{ v\\in S_k^\\infty} H^3(k_v,T_{2,F}(A^t)).\\end{multline*}\nIn addition, for each $v$ in $S_k^\\infty$ and each $j \\in \\{1,2,3\\}$ the group $H^j(k_v,T_{2,F}(A^t))$ is finite.\n\nGiven these facts, the second assertion of claim (ii) is a consequence of Artin-Verdier Duality (just as with the analogous assertion in Proposition \\ref{explicitbkprop}(iv)) and claim (iii) follows directly from the isomorphism (\\ref{artinverdier}).\\end{proof}\n\n\n\n\\subsection{Statement of the main result}\\label{somr tmc sec} We continue to assume that the hypotheses (H$_1$)-(H$_6$) are satisfied.\n\nIn this case, for any isomorphism of fields $j:\\CC\\cong\\CC_p$, the isomorphism\n\\[ h^{j}_{A,F}:=\\CC_p\\otimes_{\\RR,j}h_{A,F}^{{\\rm det}}\\]\ncombines with the explicit descriptions given in Proposition \\ref{explicitbkprop} to give a canonical element\n\\[ \\chi_{G,p}({\\rm SC}_p(A_{F\/k}),h^{j}_{A,F})\\]\nof $K_0(\\ZZ_p[G],\\CC_p[G])$. By Lemma \\ref{independenceofsigma} (and Remark \\ref{indeptremark}) this element is in particular independent of the choice of set $\\Sigma$ with respect to which ${\\rm SC}_p(A_{F\/k})={\\rm SC}_{\\Sigma,p}(A_{F\/k})$ is defined. 
In the rest of \\S\\ref{tmc} we may and will thus set $$\\Sigma:=S_k^p\\cup (S_k^F\\cap S_k^f) \\cup S_k^A.$$\n\nOur aim in the rest of \\S\\ref{tmc} is to interpret ${\\rm BSD}_p(A_{F\/k})$(iv) in terms of an explicit description of\n this element.\n\n\n\n\\subsubsection{} At the outset we note that Hypotheses (H$_2$) and (H$_3$) imply that for each $v$ in $S_k^p$ the restriction $A^t_v$ of $A^t$ to $k_v$ satisfies the conditions that are imposed on $B$ in Proposition \\ref{basic props} and hence that the element $R_{F_w\/k_v}(\\tilde A^t_v)$ of $K_0(\\ZZ_p[G_w],\\QQ_p[G_w])$ is well-defined.\n\nWe write $d$ for ${\\rm dim}(A)$ and then define an element of $K_0(\\ZZ_p[G],\\QQ_p[G])$ by setting\n\n\\[ R_{F\/k}(\\tilde A^t_v) := {\\rm ind}^G_{G_w}(d\\cdot R_{F_w\/k_v}+ R_{F_w\/k_v}(\\tilde A^t_v)).\\]\nHere ${\\rm ind}^G_{G_w}$ is the induction homomorphism $K_0(\\ZZ_p[G_w],\\QQ_p[G_w])\\to K_0(\\ZZ_p[G],\\QQ_p[G])$ and $R_{F_w\/k_v}$ is the canonical element of $K_0(\\ZZ_p[G_w],\\QQ_p[G_w])$ that is defined by Breuning in \\cite{breuning2} (and will be explicitly recalled in the course of the proof of Proposition \\ref{heavy part} below).\n\nIn the sequel we will fix a finite set of places $S$ of $k$ as in \\S\\ref{selmer section} (and hence as in the statement of Conjecture \\ref{conj:ebsd}).\n\nWe abbreviate $S_k^F\\cap S_k^f$ to $S_{\\rm r}$ and set $S_{p,{\\rm r}} := S_k^p\\cap S_k^F$.\nWe shall also write $S_{p,{\\rm w}}$ and $S_{p,{\\rm t}}$ for the (disjoint) subsets of $S_{p,{\\rm r}}$ comprising places that are respectively wildly and tamely ramified in $F$ and $S_{p,{\\rm u}}$ for the set $S_k^p\\setminus S_{p,{\\rm r}}$ of $p$-adic places in $k$ that do not ramify in $F$.\n\nFor each place $v$ in $S_k^p$ and each character $\\psi$ in $\\widehat{G}$ we define a non-zero element\n\\[ \\varrho_{v,\\psi} := {\\rm det}({\\rm N}v\\mid V_\\psi^{I_w})\\]\nof $\\QQ^c$.\n\nWe then define an invertible element of $\\zeta(\\CC[G])$ by setting\n\\begin{equation}\\label{bkcharelement}\n \\mathcal{L}^*_{A,F\/k} := \\sum_{\\psi \\in \\widehat{G}} \\frac{L^{*}_{S_{\\rm r}}(A,\\check{\\psi},1)\\cdot \\tau^{\\ast}(\\QQ,\\psi)^d\\cdot \\prod_{v\\in S_{p,{\\rm r}}}\\varrho_{v,\\psi}^d}{\\Omega_A^\\psi\\cdot w_\\psi^d}\\cdot e_\\psi\\end{equation}\nwhere, for each $\\psi$ in $\\widehat{G}$, the period $\\Omega_A^\\psi$ and root number $w_\\psi$ are as defined in \\S\\ref{k theory period sect2}, the modified global Galois-Gauss sum $\\tau^{\\ast}(\\QQ,\\psi)$ is as defined in \\S \\ref{mod GGS section} and the Hasse-Weil-Artin $L$-series $L_{S_{\\rm r}}(A,\\check{\\psi},z)$ is truncated by removing only the Euler factors corresponding to places in $S_{\\rm r}$.\n\n\n\n\\subsubsection{}\n\nWe can now state the main result of this section. In order to do so we use the homomorphism\n$$\\delta_{G,p}:\\zeta(\\CC_p[G])^\\times\\to K_0(\\ZZ_p[G],\\CC_p[G])$$ defined in (\\ref{G,O hom}) and also the local Fontaine-Messing correction $\\mu_v(A_{F\/k})$ terms defined in (\\ref{localFM}).\n\n\\begin{theorem}\\label{bk explicit} Assume $A$, $F\/k$ and $p$ satisfy all of the hypotheses (H$_1$)-(H$_6$).\n\n\nThen the equality of ${\\rm BSD}_p(A_{F\/k})$(iv) is valid if and only if for every isomorphism of fields $j:\\CC\\cong \\CC_p$ one has\n\\[ \\delta_{G,p}(j_\\ast(\\mathcal{L}^*_{A,F\/k}))=\\chi_{G,p}({\\rm SC}_p(A_{F\/k}),h^{j}_{A,F}) + \\sum_{v \\in S_{p,{\\rm w}}}R_{F\/k}(\\tilde A^t_v)+\n\\sum_{v\\in S_{p,{\\rm u}}^*} \\mu_v(A_{F\/k}). 
\\]\nHere $S^*_{p,{\rm u}}$ is the subset of $S_{p,{\rm u}}$ comprising places that divide the different of $k\/\QQ$.\n\\end{theorem}\n\n\\begin{remark}\\label{emptysets}{\\em If the sets $S_{p,{\rm w}}$ and $S^*_{p,{\rm u}}$ are empty, then the above result implies that the equality in ${\rm BSD}_p(A_{F\/k})$(iv) is valid if and only if, for any choice of $j$, one has\n\\[ \\delta_{G,p}(j_\\ast(\\mathcal{L}^*_{A,F\/k}))=\\chi_{G,p}({\rm SC}_p(A_{F\/k}),h^{j}_{A,F}).\\]\nThis is, in particular, the case if $p$ is unramified in $F\/\\QQ$ and, in this way, Theorem \\ref{bk explicit} recovers the results of Wuthrich and the present authors in \\cite[Prop. 4.2 and Th. 4.3]{bmw}. More generally, the above equality is predicted whenever no $p$-adic place of $k$ is wildly ramified in $F$ and, in addition, $p$ is unramified in $k\/\\QQ$ (as is obviously the case if $k = \\QQ$) and this case will play an important role in the special settings considered in \\S\\ref{mod sect} and \\S\\ref{HHP}. }\\end{remark}\n\n\\begin{remark}\\label{breuning remark}{\\em In \\cite[Conj. 3.2]{breuning2} Breuning has conjectured that the terms $R_{F_w\/k_v}$ should always vanish. In \\cite{breuning} and \\cite{breuning2} he has proved this conjecture for all tamely ramified extensions, for all abelian\nextensions of $\\QQ_p$ with $p$ odd, for all $S_3$-extensions of $\\QQ_p$ and for certain families of\ndihedral and quaternion extensions. If $p$ is odd, then Bley and Debeerst \\cite{bleydebeerst} have also given an algorithmic proof of the conjecture for all Galois extensions of $\\QQ_p$ of degree at most $15$. More recently, Bley and Cobbe \\cite{BC} have proved the conjecture for certain natural families of wildly ramified extensions. }\\end{remark}\n\n\n\\begin{remark}{\\em If $A$ is an elliptic curve, then Remark \\ref{breuning remark} combines with the equality in (\\ref{curve local eps conj}) to give a completely explicit description of the elements $R_{F\/k}(\\tilde A_v)$. However, whilst the results of \\cite{BC2} imply that this description is unconditionally valid for certain families of wildly ramified extensions, it is, in general, conjectural.}\\end{remark}\n\n\n\\begin{remark}\\label{bsdinvariants}{\\em If $F=k$ then it can be shown that the element (\\ref{bkcharelement}) is equal to the product $(-1)^d\\cdot(L^\\ast(A,1)\/\\Omega_A)\\cdot (\\sqrt{|d_k|})^{d}$ with $$\\Omega_A=\\prod_{v\\in S_k^\\CC}\\Omega_{A,v}\\cdot\\prod_{v\\in S_k^\\RR}\\Omega_{A,v}^+,$$ where the classical periods $\\Omega_{A,v}$ and $\\Omega_{A,v}^+$ are as defined in \\S\\ref{k theory period sect2}.}\n\\end{remark}\n\n\n\\subsection{The proof of Theorem \\ref{bk explicit}}\n\n\n\\subsubsection{}\\label{clever peiods}\n\nIn view of Lemma \\ref{pro-p lemma} it is enough for us to fix a field isomorphism $j:\\CC\\cong \\CC_p$ and show that the displayed equality in Theorem \\ref{bk explicit} is equivalent to (\\ref{displayed pj}).\n\n\nTaking advantage of Remark \\ref{consistency remark}(i), we first specify the set $S$ to be equal to $S^\\infty_k\\cup S_k^F\\cup S_k^A$. 
We shall next use the approach of \\S\\ref{k theory period sect} to make a convenient choice of differentials $\\omega_\\bullet$.\n\nFor each $v$ in $S_k^p$ we set\n\\[ \\mathcal{D}_v := \\Hom_{\\mathcal{O}_{k_v}}(H^0(\\mathcal{A}_v^t,\\Omega^1_{\\mathcal{A}_v^t}), \\mathcal{O}_{k_v}),\\]\nwhere the N\\'eron models $\\mathcal{A}^t_v$ are as fixed at the beginning of \\S\\ref{perf sel sect}.\n\nFor each such $v$ we also fix a free (rank one) $\\mathcal{O}_{k_v}[G]$-submodule $\\mathcal{F}_v$ of $F_v = k_v\\otimes_k F$ and we assume that for each $v \\in S_{p,{\\rm u}}$ one has\n\\[ \\mathcal{F}_v = \\mathcal{O}_{F,v} = \\mathcal{O}_{k_v}\\otimes_{\\mathcal{O}_{k}}\\mathcal{O}_{F}.\\]\n\nWe then set\n\\[ \\Delta(\\mathcal{F}_v) := \\mathcal{F}_v\\otimes_{\\mathcal{O}_{k_v}}\\mathcal{D}_v.\\]\n\nFor each place $w'$ in $S_F^p$ we write $\\Sigma(F_{w'})$ for the set of $\\QQ_p$-linear embeddings $F_{w'} \\to \\QQ_p^c$, we define a $\\ZZ_p[G_{w'}]$-module $Y_{F_{w'}} := \\prod_{\\sigma \\in \\Sigma(F_{w'})}\\ZZ_p$ (upon which $G_{w'}$ acts via precomposition with the embeddings) and write\n\\[ \\pi_{F_{w'}}: \\QQ_p^c\\otimes_{\\ZZ_p}F_{w'} \\to \\QQ_p^c\\otimes_{\\ZZ_p}Y_{F_{w'}}\\]\nfor the isomorphism of $\\QQ_p^c[G_{w'}]$-modules that sends each element $\\ell\\otimes f$ to $(\\ell\\otimes \\sigma(f))_\\sigma$.\n\nFor each $v$ in $S_k^p$ we then consider the isomorphism of $\\QQ_p[G]$-modules\n\\[ \\pi_{F_v}: \\QQ_p^c\\otimes_{\\ZZ_p}F_v = \\prod_{w'\\in S_F^v}(\\QQ_p^c\\otimes_{\\ZZ_p}F_{w'}) \\xrightarrow{(\\pi_{F_{w'}})_{w'}} \\QQ_p^c\\otimes_{\\ZZ_p}\\bigoplus_{w' \\in S_F^v}Y_{F_{w'}} = \\QQ_p^c\\otimes_{\\ZZ_p}Y_{F_v}, \\]\nwhere we set $Y_{F_v} := \\bigoplus_{w'}Y_{F_{w'}}$.\n\nAfter fixing an embedding of $\\QQ^c$ into $\\QQ_p^c$ we obtain an induced identification of $\\bigoplus_{v \\in S_k^p}Y_{F_v}$ with the module $Y_{F,p} := \\bigoplus_{\\Sigma(F)}\\ZZ_p$, upon which $G$ acts via pre-composition on the embeddings.\n\nWe next fix\n\\begin{itemize}\n\\item[$\\bullet$] an ordered $k$-basis $\\{\\omega'_j:j \\in [d]\\}$ of $H^0(A^t,\\Omega^1_{A^t})$ that generates over $\\mathcal{O}_{k,p}$ the module\n $\\mathcal{D}_p := \\prod_{v \\in S_k^p}\\mathcal{D}_v$, and\n\\item[$\\bullet$] an ordered $\\ZZ_p[G]$-basis $\\{z_b:b \\in [n]\\}$ of $\\mathcal{F}_p := \\prod_{v\\in S_k^p}\\mathcal{F}_v$.\n\\end{itemize}\n\nThen the (lexicographically ordered) set\n\\[ \\omega_\\bullet:= \\{ z_b\\otimes \\omega'_j: b \\in [n], j \\in [d]\\}\\]\nis a $\\QQ_p[G]$-basis of $H^0(A_F^t,\\Omega^1_{A_F^t}) = F\\otimes_kH^0(A^t,\\Omega^1_{A^t})$ and the\n\narguments of Lemma \\ref{k-theory period} and Proposition \\ref{lms} combine to show that\n\\begin{equation}\\label{norm resolvents} \\partial_{G,p}\\left(j_*(\\Omega_{\\omega_\\bullet}(A_{F\/k}))\\right)=\\delta_{G,p}\\left(j_*(\\Omega_A^{F\/k}\\cdot w_{F\/k}^d)\\right) + \\sum_{v\\in S_k^p}d\\cdot[\\mathcal{F}_v,Y_{F_v};\\pi_{F_v}]\\end{equation}\nin $K_0(\\ZZ_p[G],\\CC_p[G])$.\n\n\\subsubsection{}Now, if necessary, we can multiply $\\mathcal{F}$ by a sufficiently large power of $p$ in order to ensure that for every $v$ in $S_{p,{\\rm r}}$ the following two conditions are satisfied.\n\n\\begin{itemize}\n\\item[$\\bullet$] the $p$-adic exponential map induces a (well-defined) injective homomorphism from $\\mathcal{F}_v$ to $(F_v^\\times)^\\wedge_p$;\n\\item[$\\bullet$] the formal group exponential ${\\rm exp}_{A^t,F_v}$ that arises from the differentials $\\{\\omega'_j:j \\in [d]\\}$ induces an isomorphism of $\\Delta(\\mathcal{F}_v)$ with a submodule 
of $A^t(F_v)^\\wedge_p$.\n\\end{itemize}\n\nFor each $v$ in $S_k^p$ we now set\n\\[ X(v) := \\begin{cases}{\\rm exp}_{A^t,F_v}(\\Delta(\\mathcal{F}_v)), &\\text{ if $v \\in S_{p,{\\rm r}}$}\\\\\nA^t(F_v)^\\wedge_p, &\\text{ if $v \\in S_{p,{\\rm u}}$.}\n\\end{cases}\\]\nThen it is clear that, for any choice of $\\gamma_\\bullet$ as in \\S \\ref{perf sel sect} and our specific choices of $S$ and $\\omega_\\bullet$, the module $X(v)$ coincides with $\\mathcal{X}(v)$ for $\\mathcal{X}=\\mathcal{X}_S(\\{\\mathcal{A}_v^t\\}_v,\\omega_\\bullet,\\gamma_\\bullet)$.\n\nThe description of the complex\n\\[ C_{X(p)} := {\\rm SC}_{S}(A_{F\/k};X(p),H_\\infty(A_{F\/k})_p)\\]\nas the mapping fibre of the morphism (\\ref{selmer-finite tri}) gives rise to an exact triangle\n\\[ C_{X(p)} \\to {\\rm SC}_p(A_{F\/k}) \\oplus X(p)[-1] \\xrightarrow{(\\lambda', \\kappa'_1)}\n\\bigoplus_{v \\in S_k^p\\cup S_k^A} A^t(F_v)_p^\\wedge[-1]\\to C_{X(p)}[1].\\]\nHere we have used the fact that, for $v\\in S_{\\rm r}\\setminus S_k^p$, the module $A^t(F_v)_p^\\wedge$ vanishes by Proposition \\ref{explicitbkprop}(ii).\n\nFurther, since Proposition \\ref{explicitbkprop}(ii) implies that the $\\ZZ_p[G]$-modules\n\\[ X(p):= \\prod_{v\\in S_k^p}X(v) \\,\\,\\text{ and }\\,\\, \\bigoplus_{v \\in S_k^p\\cup S_k^A} A^t(F_v)_p^\\wedge\\]\nare cohomologically-trivial and that the complex ${\\rm SC}_p(A_{F\/k})$ is perfect, Proposition \\ref{prop:perfect}(i) implies that this is a triangle in $D^{\\rm perf}(\\ZZ_p[G])$.\n\nBy applying Lemma \\ref{fk lemma} to this exact triangle, we can therefore deduce that there is in $K_0(\\ZZ_p[G],\\CC_p[G])$ an equality\n\n\\begin{align}\\label{first comp} &\\chi_{G,p}(C_{X(p)},h^{j}_{A,F})\\\\\n =\\, &\\chi_{G,p}({\\rm SC}_p(A_{F\/k}),h^{j}_{A,F}) - \\sum_{v \\in S_k^A}\\chi_{G,p}(A^t(F_v)_p^\\wedge[-1],0)\\notag\\\\\n &\\hskip 2truein - \\sum_{v\\in S_k^p}\\chi_{G,p}(X(v)[0]\\oplus A^t(F_v)^\\wedge_p[-1],{\\rm id})\\notag\\\\\n= \\, & \\chi_{G,p}({\\rm SC}_p(A_{F\/k}),h^{j}_{A,F}) - \\sum_{v \\in S_k^A}\\chi_{G,p}(A^t(F_v)_p^\\wedge[-1],0)\\notag\\\\\n&\\hskip 2truein - \\sum_{v\\in S_{p,{\\rm r}}}\\chi_{G,p}(\\Delta(\\mathcal{F}_v)[0]\\oplus A^t(F_v)^\\wedge_p[-1],{\\rm exp}_{A^t,F_v})\\notag\\\\\n= \\, & \\chi_{G,p}({\\rm SC}_p(A_{F\/k}),h^{j}_{A,F}) - \\delta_{G,p}\\Bigl(\\prod_{v\\in S_k^A} L_v(A,F\/k)\\Bigr)\\notag\\\\\n&\\hskip 2truein - \\sum_{v\\in S_{p,{\\rm r}}}\\chi_{G,p}(\\Delta(\\mathcal{F}_v)[0]\\oplus A^t(F_v)^\\wedge_p[-1],{\\rm exp}_{A^t,F_v})\\notag\\end{align}\nwhere the last equality holds because, for each $v\\in S_k^A$, one has\n\\[ \\chi_{G,p}(A^t(F_v)_p^\\wedge[-1],0)= \\delta_{G,p}\\Bigl( L_v(A,F\/k)\\Bigr).\\]\nThis equality in turn follows upon combining the argument that gives \\cite[(13)]{bmw} with the exactness of the sequence of Lemma \\ref{v not p}(ii) for each place $w'$ of $F$ above a place in $S_k^A$ and the fact that the third term occurring in each of these sequences vanishes, as verified in the course of the proof of Proposition \\ref{explicitbkprop}(ii).\n\n\n\\subsubsection{}The equalities (\\ref{norm resolvents}) and (\\ref{first comp}) lead us to consider for each place $v$ in $S_k^p$ the element\n\\[ c(F\/k,\\tilde A^t_v) := \\begin{cases} d\\cdot[\\mathcal{F}_v,Y_{F_v};\\pi_{F_v}]-\\chi_{G,p}(\\Delta(\\mathcal{F}_v)[0]\\oplus A^t(F_v)^\\wedge_p[-1],{\\rm exp}_{A^t,F_v}), &\\text{if $v \\in S_{p,{\\rm r}}$,}\\\\\nd\\cdot[\\mathcal{O}_{F,v},Y_{F_v};\\pi_{F_v}], &\\text{if $v \\in S_{p,{\\rm u}}$}.\\end{cases}\\]\n\nIt is straightforward to check that 
for each $v \\in S_{p,{\\rm r}}$ this element is independent of the choice of $\\mathcal{F}_v$ and that Lemma \\ref{twist dependence} implies its dependence on $A$ is restricted to (the twist matrix of) the reduction of $A^t$ at $v$.\n\nThe key step, however, in the proof of Theorem \\ref{bk explicit} is the computation of this element in terms of local Galois-Gauss sums that is described in the next result.\n\nFor each place $v$ in $S_k^f$ we define an\n`equivariant local Galois-Gauss sum' by setting\n\\[ \\tau_v(F\/k) := \\sum_{\\psi \\in \\widehat{G}}\\tau(\\QQ_{\\ell(v)},\\psi_v)\\cdot e_\\psi\\in \\zeta(\\QQ^c[G])^\\times.\\]\nHere $\\psi_v$ denotes the restriction of $\\psi$ to $G_w$ and $\\tau(\\QQ_{\\ell(v)},\\psi_v)$ is the Galois-Gauss sum (as defined in \\cite{martinet}) of the induction to $G_{\\QQ_{\\ell(v)}}$ of the character of $G_{k_v}$ that is obtained by composing $\\psi_v$ with the natural projection $G_{k_v} \\to G_w$.\n\nWe also define a modified local Galois-Gauss sum by setting\n\\[ \\tau_v^p(F\/k) := \\begin{cases} \\varrho_v(F\/k)\\cdot u_v(F\/k)\\cdot \\tau_v(F\/k), &\\text{ if $v \\in S_{p,{\\rm r}}$,}\\\\\n u_v(F\/k)\\cdot \\tau_v(F\/k), &\\text{ otherwise,}\\end{cases}\\]\nwhere we set\n\\begin{equation}\\label{varrho def} \\varrho_{v}(F\/k) := \\sum_{\\psi\\in \\widehat{G}}\\varrho_{v,\\psi}\\cdot e_\\psi\\end{equation}\nand the element $u_v(F\/k)$ of $\\zeta(\\QQ[G])^\\times$ is as defined in (\\ref{u def}).\n\nFinally, for each $p$-adic place $v$ of $k$ we set\n\n\\[ U_v(F\/k) := {\\rm ind}^{G}_{G_w}(U_{F_w\/k_v}),\\]\nwhere $U_{F_w\/k_v}$ is the `unramified term' in $K_0(\\ZZ_p[G_w],\\QQ_p^c[G_w])$ that is defined by Breuning in \\cite[\\S2.5]{breuning2}.\n\n\\begin{proposition}\\label{heavy part} For each place $v$ in $S_k^p$ and any choice of $j$ one has\n\\[c(F\/k,\\tilde A^t_v) = d\\cdot \\delta_{G,p}(j_*(\\tau_v^p(F\/k))) + d\\cdot U_v(F\/k) - R_{F\/k}(\\tilde A^t_v)\\]\nin $K_0(\\ZZ_p[G],\\QQ_p[G])$.\n\\end{proposition}\n\n\\begin{proof} The term $\\delta_{G,p}(j_*(\\tau^p_v(F\/k)))$ is independent of the choice of $j$ by \\cite[Lem. 2.2]{breuning2}. To prove the claimed equality we consider separately the cases $v \\in S_{p,{\\rm r}}$ and $v \\in S_{p,{\\rm u}}$.\n\nWe assume first that $v$ belongs to $S_{p,{\\rm r}}$. Then for every place $w'$ in $S_F^v$ we consider the corresponding perfect complex of $\\ZZ_p[G_{w'}]$-modules $R\\Gamma(F_{w'},\\ZZ_p(1))$ as described in \\S \\ref{twist inv prelim} and obtain a perfect complex of $\\ZZ_p[G]$-modules\n$$R\\Gamma(F_v,\\ZZ_p(1)):=\\prod_{w'\\in S_F^v}R\\Gamma(F_{w'},\\ZZ_p(1)).$$\n\nSince Kummer theory canonically identifies the cohomology in degree one of this complex with $(F_v^\\times)_p^\\wedge$ we may define an additional perfect complex of $\\ZZ_p[G]$-modules $C^\\bullet_{\\mathcal{F}_v}$ through the exact triangle\n\\[ \\mathcal{F}_v[0] \\xrightarrow{\\alpha_v} R\\Gamma(F_v,\\ZZ_p(1))[1] \\to C^\\bullet_{\\mathcal{F}_v} \\to \\mathcal{F}_v[1] \\]\nin $D^{\\rm perf}(\\ZZ_p[G])$, with $H^0(\\alpha_v)$ induced by the $p$-adic exponential map ${\\rm exp}_p$.\n\nWe write $f$ for the residue degree of our fixed place $w$ in $S_F^v$. 
Then the long exact cohomology sequence of the above triangle implies that the normalised valuation map ${\\rm val}_{F\/k,v} := (f\\cdot({\\rm val}_{F_{w'}}))_{w'\\in S_F^v}$\ninduces an isomorphism of $\\QQ_p[G]$-modules\n\\begin{multline*} \\QQ_p \\cdot H^0(C^\\bullet_{\\mathcal{F}_v}) \\cong \\QQ_p\\cdot ((F_v^\\times)^\\wedge_p\/{\\rm exp}_p(\\mathcal{F}_v)) \\xrightarrow{ {\\rm val}_{F\/k,v}}\\prod_{w'\\in S_F^v}\\QQ_p \\\\\n\\xleftarrow{ ({\\rm inv}_{F_{w'}})_{w'}} \\QQ_p\\cdot \\prod_{w'\\in S_F^v}H^2(F_{w'},\\ZZ_p(1)) \\cong \\QQ_p\\cdot H^1(C^\\bullet_{\\mathcal{F}_v})\\end{multline*}\nwhich, by abuse of notation, we also denote by ${\\rm val}_{F\/k,v}$.\n\nIn addition, the chosen differentials $\\{\\omega'_a: a \\in [d]\\}$ induce an isomorphism of\n $\\mathcal{O}_{k_v}$-modules $\\mathcal{D}_{v}\\cong \\mathcal{O}^d_{k_v}$\nand hence an isomorphism of $\\mathcal{O}_{k_v}[G]$-modules $\\omega_{v,*}: \\Delta(\\mathcal{F}_v) \\cong \\mathcal{F}_v^d$.\n\nIn particular, if we write $C_{A^t,F}^{v,\\bullet}$ for the complex $\\prod_{w'\\in S_F^v} C_{A^t_v,F_{w'}}^\\bullet$, where each complex $C_{A^t_v,F_{w'}}^\\bullet$ is as defined at the beginning of \\S\\ref{twist inv prelim}, then there exists a canonical exact triangle\n\\[ \\Delta(\\mathcal{F}_v)[0]\\oplus A^t(F_v)^\\wedge_p[-1] \\xrightarrow{\\iota_v} C_{A^t,F}^{v,\\bullet} \\xrightarrow{\\iota'_v} C^{\\bullet,d}_{\\mathcal{F}_v} \\to \\Delta(\\mathcal{F}_v)[1]\\oplus A^t(F_v)^\\wedge_p[0]\\]\nin $D^{\\rm perf}(\\ZZ_p[G])$.\nHere $C^{\\bullet,d}_{\\mathcal{F}_v}$ denotes the product of $d$ copies of $C^{\\bullet}_{\\mathcal{F}_v}$, $H^0(\\iota_v)$ is the composite map $({\\rm exp}_p)^d\\circ \\omega_{v,*}$ and $H^1(\\iota_v)$ is the identity map between\n$A^t(F_v)^\\wedge_p = H^1(A^t(F_v)^\\wedge_p[-1])$ and the direct summand $A^t(F_v)^\\wedge_p$ of $H^1(C_{A^t,F}^{v,\\bullet})$.\n\nThe long exact cohomology sequence of this triangle also gives an exact commutative diagram of $\\QQ_p[G]$-modules\n\\[\\minCDarrowwidth1em\\begin{CD} 0 @> >>\\! \\QQ_p\\cdot \\Delta(\\mathcal{F}_v)\\! @> \\QQ_p\\otimes_{\\ZZ_p}H^0(\\iota_v) >>\\! \\QQ_p\\cdot H^0(C_{A^t,F}^{v,\\bullet})\\! @> \\QQ_p\\otimes_{\\ZZ_p}H^0(\\iota'_v) >> \\!\\QQ_p\\cdot H^0(C^{\\bullet,d}_{\\mathcal{F}_v})\\! @> >>\\! 0\\\\\n@. @V{\\rm exp}_{A^t,F_v} VV @V \\lambda^v_{A^t,F} VV @V ({\\rm val}_{F\/k,v})^d VV \\\\\n0 @> >>\\! \\QQ_p\\cdot A^t(F_v)_p^\\wedge\\! @> \\QQ_p\\otimes_{\\ZZ_p}H^1(\\iota_v) >>\\! \\QQ_p\\cdot H^1(C_{A^t,F}^{v,\\bullet})\\! @> \\QQ_p\\otimes_{\\ZZ_p}H^1(\\iota'_v)>> \\!\\QQ_p\\cdot H^1(C^{\\bullet, d}_{\\mathcal{F}_v}) @> >>\\! 
0,\\end{CD}\\]\nin which $\\lambda^v_{A^t,F} = (\\lambda_{A^t_v,F_{w'}})_{w'\\in S_F^v}$, where each map $\\lambda_{A^t_v,F_{w'}}$ is fixed as in diagram (\\ref{lambda diag}).\n\nAfter recalling the definition of $R_{F_w\/k_v}(\\tilde A^t_v)$ and applying Lemma \\ref{fk lemma} to this commutative diagram one can therefore derive an equality\n\\begin{align*}\\label{third}\n&\\chi_{G,p}(\\Delta(\\mathcal{F}_v)[0]\\oplus A^t(F_v)^\\wedge_p[-1],{\\rm exp}_{A^t,F_v})\\\\\n= \\, &\\chi_{G,p}(C_{A^t,F}^{v,\\bullet},\\lambda_{A^t,F}^v) - d\\cdot\\chi_{G,p}(C^\\bullet_{\\mathcal{F}_v},{\\rm val}_{F\/k,v})\\notag\\\\\n= \\, &{\\rm ind}^G_{G_w}(\\chi_{G_w,p}(C_{A^t_v,F_{w}}^\\bullet,\\lambda_{A^t_v,F_{w}})) - d\\cdot\\chi_{G,p}(C^\\bullet_{\\mathcal{F}_v},{\\rm val}_{F\/k,v})\\notag\\\\\n= \\, &{\\rm ind}^G_{G_w}(R_{F_w\/k_v}(\\tilde A^t_v))-{\\rm ind}^G_{G_w}(d\\cdot\\delta_{G_w,p}(c_{F_w\/k_v})) - d\\cdot\\chi_{G,p}(C^\\bullet_{\\mathcal{F}_v},{\\rm val}_{F\/k,v})\\notag\\\\\n=\\, & {\\rm ind}^G_{G_w}(R_{F_w\/k_v}(\\tilde A^t_v)) - d(\\chi_{G,p}(C^\\bullet_{\\mathcal{F}_v},{\\rm val}_{F\/k,v})+\\delta_{G,p}(c_{F_w\/k_v})).\\notag\\end{align*}\n\nIt follows that\n\\[ c(F\/k,\\tilde A^t_v) = d\\cdot[\\mathcal{F}_v,Y_{F_v};\\pi_{F_v}]-{\\rm ind}^G_{G_w}(R_{F_w\/k_v}(\\tilde A^t_v)) + d(\\chi_{G,p}(C^\\bullet_{\\mathcal{F}_v},{\\rm val}_{F\/k,v})+\\delta_{G,p}(c_{F_w\/k_v}))\\]\nand hence that the claimed result is true in this case if one has\n\\begin{multline}\\label{wanted at last} [\\mathcal{F}_v,Y_{F_v};\\pi_{F_v}] + \\chi_{G,p}(C^\\bullet_{\\mathcal{F}_v},{\\rm val}_{F\/k,v})+\\delta_{G,p}(c_{F_w\/k_v})\n\\\\ = \\delta_{G,p}(j_*(\\tau^p_v(F\/k))) + U_v(F\/k) - {\\rm ind}^G_{G_w}(R_{F_w\/k_v}).\\end{multline}\n\nTo prove this we note that the very definition of $R_{F_w\/K_v}$ in \\cite[\\S3.1]{breuning2} implies that\n\\begin{multline*} {\\rm ind}^G_{G_w}(R_{F_w\/k_v})\\\\ = \\delta_{G,p}(j_*(\\tau_v(F\/k))) - [\\mathcal{F}_v,Y_{F_v};\\pi_{F_v}] - \\chi_{G,p}(C^\\bullet_{\\mathcal{F}_v},{\\rm val}'_{F\/k,v}) + U_v(F\/k) - \\delta_{G,p}(m_{F_w\/k_v}),\\end{multline*}\nwhere ${\\rm val}'_{F\/k,v}$ denotes the isomorphism of $\\QQ_p[G]$-modules\n\\[ \\QQ_p\\cdot H^1(C^\\bullet_{\\mathcal{F}_v})\\cong \\QQ_p\\cdot H^2(C^\\bullet_{\\mathcal{F}_v})\\]\nthat is induced by the maps ($({\\rm val}_{F_{w'}}))_{w'\\in S_F^v}$ and we use the element\n\\[ m_{F_w\/k_v}:= \\frac{^\\dagger(f\\cdot e_{G_w})\\cdot \\, ^\\dagger((1- \\Phi_v\\cdot {\\rm N}v^{-1})e_{I_w})}{^\\dagger((1-\\Phi_v^{-1})e_{I_w})}\\]\nof $\\zeta(\\QQ[G])^\\times$. (Here we use the notational convention introduced in (\\ref{dagger eq}). In addition, to derive the above formula for ${\\rm ind}^G_{G_w}(R_{F_w\/k_v})$ we have relied on \\cite[Prop. 2.6]{breuning2} and the fact that in loc. cit. Breuning uses the `opposite' normalization of Euler characteristics to that used here, so that the term $\\chi_{G,p}(C^\\bullet_{\\mathcal{F}_v},{\\rm val}'_{F\/k,v})$ appears in the corresponding formulas in loc. cit. 
with a negative sign.)\n\nTo deduce the required equality (\\ref{wanted at last}) from this formula it is then enough to note that\n\\[ \\chi_{G}(C^\\bullet_{\\mathcal{F}_v},{\\rm val}'_{F\/k,p})\n= \\chi_{G}(C^\\bullet_{\\mathcal{F}_v},{\\rm val}_{F\/k,p}) - \\delta_{G,p}(^\\dagger(f\\cdot e_{G_w})),\\]\nand that an explicit comparison of definitions shows that\n\\[ j_*(\\tau^p_v(F\/k))\\cdot m_{F_w\/k_v} = j_*(\\tau_v(F\/k))\\cdot ^\\dagger(f\\cdot e_{G_w})\\cdot c_{F_w\/k_v}.\\]\n\nTurning now to the case $v\\in S_{p,{\\rm u}}$ we only need to prove that for each such place $v$ one has\n\\[ d\\cdot[\\mathcal{O}_{F,v},Y_{F_v};\\pi_{F_v}] = d\\cdot \\delta_{G,p}(j_*(u_v(F\/k)\\cdot\\tau_v(F\/k))) + d\\cdot U_v(F\/k) - R_{F\/k}(\\tilde A^t_v).\\]\n\nNow, since each such $v$ is unramified in $F\/k$ the term $R_{F\/k}(\\tilde A^t_v)$ vanishes (as a consequence of Proposition \\ref{basic props}(iii) and Remark \\ref{breuning remark}) and so it is enough to note that\n\\[ [\\mathcal{O}_{F,v},Y_{F_v};\\pi_{F_v}] = \\delta_{G,p}((u_v(F\/k)\\cdot \\tau_v(F\/k))) + U_v(F\/k),\\]\nas is shown in the course of the proof of \\cite[Th. 3.6]{breuning2}.\n\\end{proof}\n\n\n\n\\subsubsection{}We next record a result concerning the decomposition of global Galois-Gauss sums as a product of local Galois-Gauss sums.\n\n\\begin{lemma}\\label{gauss} In $K_0(\\ZZ_p[G],\\QQ^c_p[G])$ one has\n\\[ \\delta_{G,p}(j_*(\\tau^\\ast(F\/k)\\cdot \\prod_{v \\in S_{p,{\\rm r}}}\\varrho_v(F\/k))) = \\sum_{v \\in S_{k}^p}(\\delta_{G,p}(j_*(\\tau_v^p(F\/k))) + U_v(F\/k)).\\]\n\\end{lemma}\n\n\\begin{proof} We observe first that the difference $\\xi$ of the left and right hand sides of this claimed equality belongs to $K_0(\\ZZ_p[G],\\QQ_p[G])$.\n\nThis follows from the fact that both the term\n\\[ \\delta_{G,p}(j_*(\\tau^\\ast(F\/k))) - \\sum_{v \\in S_k^p}[\\mathcal{F}_v,Y_{F_v};\\pi_{F_v}],\\]\nand for each $v \\in S_k^p$ the term\n\\[ \\delta_{G,p}(j_*(\\tau_v^p(F\/k))) + U_v(F\/k) - [\\mathcal{F}_v,Y_{F_v};\\pi_{F_v}],\\]\nbelong to $K_0(\\ZZ_p[G],\\QQ_p[G])$ (the former as a consequence of \\cite[Prop. 3.4 and (34)]{bleyburns} and the latter as a consequence of \\cite[Prop. 3.4]{breuning2}).\n\nThus, by Taylor's Fixed Point Theorem for group determinants (as discussed in \\cite[Chap. 8]{Taylor}) it is enough for us to show that $\\xi$ belongs to the kernel of the natural homomorphism $\\iota: K_0(\\ZZ_p[G],\\QQ_p^c[G]) \\to K_0(\\mathcal{O}^t_p[G],\\QQ_p^c[G])$ where $\\mathcal{O}_p^t$ is the valuation ring of the maximal tamely ramified extension of $\\QQ_p$ in $\\QQ_p^c$.\n\nNow from \\cite[Prop. 2.12(i)]{breuning2} one has $\\iota(U_v(F\/k)) = 0$ for all $v$ in $S_k^p$. In addition, for each non-archimedean place $v$ of $k$ that is not $p$-adic the vanishing of $\\iota(\\delta_{G,p}(j_*(u_v(F\/k)\\cdot \\tau_v(F\/k))))$ is equivalent to the result proved by Holland and Wilson in\n\\cite[Th. 
3.3(b)]{HW3} (which itself relies crucially on the work of Deligne and Henniart in \\cite{deligne-henniart}).\n\nThe vanishing of $\\iota(\\xi)$ is thus a consequence of the fact that the classical decomposition of global Galois-Gauss sums as a product of local Galois-Gauss sums implies that\n\\[ \\tau^\\ast(F\/k)\\cdot \\prod_{v \\in S_{p,{\\rm r}}}\\varrho_v(F\/k) = \\prod_{v}\\tau^p_v(F\/k)\\]\nwhere $v$ runs over all places of $k$ that divide the discriminant of $F$, since for any place $v$ that does not ramify in $F$ one has $\\tau(\\QQ_{\\ell(v)},\\psi_v) = 1$ for all $\\psi$ in $\\widehat{G}$.\n\\end{proof}\n\n\\subsubsection{}We can now complete the proof of Theorem \\ref{bk explicit}.\n\nTo this end we note first that the definition (\\ref{bkcharelement}) of $\\mathcal{L}^*_{A,F\/k}$ implies that\n\n\\begin{align*} &\\delta_{G,p}(j_\\ast(\\mathcal{L}^*_{A,F\/k})) - \\partial_{G,p}\\left(\\frac{j_*(L_S^*(A_{F\/k},1))}{j_*(\\Omega_{\\omega_\\bullet}(A_{F\/k}))}\\right)\\\\\n=\\, &\\delta_{G,p}\\bigl(\\prod_{v\\in S_k^A} L_v(A,F\/k)\\bigr) + d\\cdot\\bigl(\\delta_{G,p}(j_*(\\tau^\\ast(F\/k)\\cdot \\prod_{v \\in S_{p,{\\rm r}}}\\varrho_v(F\/k))) - \\sum_{v\\in S_k^p}[\\mathcal{F}_v,Y_{F_v};\\pi_{F_v}]\\bigr)\\\\\n= \\, &\\delta_{G,p}\\bigl(\\prod_{v\\in S_k^A} L_v(A,F\/k)\\bigr) + d\\cdot\\sum_{v \\in S_{k}^p}\\bigl(\\delta_{G,p}(j_*(\\tau_v^p(F\/k))) + U_v(F\/k) - [\\mathcal{F}_v,Y_{F_v};\\pi_{F_v}]\\bigr)\n\\\\\n= \\, &\\delta_{G,p}\\bigl(\\prod_{v\\in S_k^A} L_v(A,F\/k)\\bigr) +\\sum_{v \\in S_{k}^p} \\bigl(c(F\/k,\\tilde A^t_v) - d\\cdot [\\mathcal{F}_v,Y_{F_v};\\pi_{F_v}]\\bigr) +\\sum_{v \\in S_{k}^p}R_{F\/k}(\\tilde A^t_v)\\\\\n= \\, &\\chi_{G,p}({\\rm SC}_p(A_{F\/k}),h^{j}_{A,F})+\\sum_{v \\in S_{k}^p}R_{F\/k}(\\tilde A^t_v) - \\chi_{G,p}(C_{X(p)},h^{j}_{A,F})\\\\\n= \\, &\\chi_{G,p}({\\rm SC}_p(A_{F\/k}),h^{j}_{A,F})+\\sum_{v \\in S_{p,{\\rm w}}}R_{F\/k}(\\tilde A^t_v) - \\chi_{G,p}(C_{X(p)},h^{j}_{A,F}).\\end{align*}\nHere the first equality follows from (\\ref{norm resolvents}), the second from Lemma \\ref{gauss}, the third from Proposition \\ref{heavy part}, the fourth from the definition of the terms $c(F\/k,\\tilde A^t_v)$ and the equality (\\ref{first comp}) and the last from the fact that $R_{F\/k}(\\tilde A^t_v)$ vanishes for each $v \\in S_k^p\\setminus S_{p,{\\rm w}}$ as consequence of Proposition \\ref{basic props}(iii) and Remark \\ref{breuning remark}.\n\nIt is thus sufficient to show that the above displayed equality implies that the equality (\\ref{displayed pj}), with set of places $S$ and differentials $\\omega_\\bullet$ chosen as in \\S\\ref{clever peiods}, is equivalent to the equality stated in Theorem \\ref{bk explicit}.\n\nBut this is true since our choice of periods $\\omega_\\bullet$ and lattices $\\mathcal{F}_v$ as in \\S\\ref{clever peiods} implies that the module $Q(\\omega_\\bullet)_{S,p}$ vanishes and because\n\n\\begin{align*} \\mu_S(A_{F\/k})_p =\\, &\\sum_{v \\in S_k^p\\setminus S}\\mu_v(A_{F\/k})\\\\\n =\\, &\\sum_{v \\in S_{p,{\\rm u}}}\\mu_v(A_{F\/k})\\\\\n =\\, &\\sum_{v \\in S^\\ast_{p,{\\rm u}}} \\mu_v(A_{F\/k})\\end{align*}\nwhere the last equality follows from Lemma \\ref{fm} below.\n\nThis completes the proof of Theorem \\ref{bk explicit}.\n\n\\begin{lemma}\\label{fm} For any place $v\\in S_k^A$ for which the residue characteristic $\\ell(v)$ is unramified in $F$, the term\n $\\mu_{v}(A_{F\/k})$ vanishes. \\end{lemma}\n\n\\begin{proof} We fix a place $v$ as in the statement of the lemma and set $p:= \\ell(v)$. 
We write $\\mathcal{O}_{F_v}$ for the integral closure of $\\ZZ_p$ in $F_v$ and set $\\wp_{F_v} := p\\cdot\\mathcal{O}_{F_v}$.\n\nThen, since $p$ does not ramify in $F\/\\QQ$, the $\\ZZ_p[G]$-modules $\\mathcal{O}_{F_v}$ and $\\wp_{F_v}$ are projective and $\\wp_{F_v}$ is the direct sum of the maximal ideals of the valuation rings in each field component of $F_v=\\prod_{w'\\in S_F^v}F_{w'}$.\n\nWe use the canonical comparison isomorphism of $\\QQ_p[G]$-modules\n\\[ \\nu_v: \\Hom_{F_v}(H^0(A^t_{F_v},\\Omega^1_{A^t_{F_v}}),F_v) \\cong {\\rm DR}_v(V_{p,F}(A^t))\/F^0\\]\nand the exponential map ${\\rm exp}_{\\rm BK}: {\\rm DR}_v(V_{p,F}(A^t))\/F^0\\to H^1_f(k,V_{p,F}(A^t))$ of Bloch and Kato.\n\nWe recall, in particular, that in this case there is a natural identification of spaces $H^1_f(k,V_{p,F}(A^t)) = \\QQ_p\\cdot A^t(F_v)^\\wedge_p$ under which the composite ${\\rm exp}_{\\rm BK}\\circ\\nu_v$ sends the free $\\ZZ_p[G]$-lattice $\\mathcal{D}_F(\\mathcal{A}_v^t)$ defined in (\\ref{mathcalD}) to ${\\rm exp}_{A^t,F_v}((\\mathcal{O}_{F_v})^d)$, where ${\\rm exp}_{A^t,F_v}$ is the classical exponential map of $A^t$ over $F_{v}$ (cf. the result of Bloch and Kato in \\cite[Exam. 3.11]{bk}).\n\nIn particular, since $A^t$ has good reduction over the absolutely unramified algebra $F_{v}$, the theory of Fontaine and Messing \\cite{fm} implies (via the proof of \\cite[Lem. 3.4]{bmw}) that there exists an exact sequence of $\\ZZ_p[G]$-modules\n\\[ 0 \\to {\\rm exp}_{A^t,F_v}((\\mathcal{O}_{F_v})^d) \\xrightarrow{\\subseteq} A^t(F_v)^\\wedge_p \\to N \\to 0\\]\nwhere $N$ is a finite module that has finite projective dimension and is such that\n\\[ \\chi_{G,p}(N[-1],0) = \\delta_{G,p}(L_v(A,F\/k))\\]\nin $K_0(\\ZZ_p[G],\\QQ_p[G])$.\n\nOn the other hand, since $F_{v}$ is absolutely unramified, the map ${\\rm exp}_{A^t,F_v}$ restricts to give a short exact sequence of $\\ZZ_p[G]$-modules\n\\[ 0 \\to \\wp_{F_v}^d \\xrightarrow{{\\rm exp}_{A^t,F_v}} A^t(F_v)^\\wedge_p \\to \\tilde A^t_v(\\kappa_{F_v})_p \\to 0.\\]\n\nBy comparing these exact sequences, and noting that $\\mathcal{O}_{F_v}\/\\wp_{F_v}$ identifies with the ring $\\kappa_{F_v}$, one obtains a short exact sequence of $\\ZZ_p[G]$-modules\n\\[ 0 \\to \\kappa_{F_v}^d \\to \\tilde A^t_v(\\kappa_{F_v})_p \\to N \\to 0\\]\nin which each term is both finite and of finite projective dimension.\n\nThus, upon taking Euler characteristics of this exact sequence, one finds that\n\\begin{align*} \\delta_{G,p}(L_v(A,F\/k)) =\\,&\\chi_{G,p}(N[-1],0)\\\\\n = \\, &\\chi_{G,p}(\\tilde A^t_v(\\kappa_{F_v})_p[-1],0) - \\chi_{G,p}(\\kappa_{F_v}^d[-1],0)\\\\\n = \\, &\\chi_{G,p}(\\tilde A^t_v(\\kappa_{F_v})_p[-1],0) + \\chi_{G,p}(\\kappa_{F_v}^d[0],0)\\\\\n =\\, &\\chi_{G,p}(\\kappa_{F_v}^d[0]\\oplus\\tilde A^t_v(\\kappa_{F_v})_p[-1],0),\\end{align*}\n\n\\noindent{}and hence that the element $\\mu_{v}(A_{F\/k})$ vanishes, as required.\n\\end{proof}\n\n\n\\section{Euler characteristics and Galois structures}\\label{ecgs}\n\nIn this section we consider consequences of ${\\rm BSD}(A_{F\/k})$ concerning both the Galois structure of Selmer complexes and modules and the formulation of refinements of the Deligne-Gross Conjecture.\n\n\n\n\\subsection{Galois structures of Selmer complexes}\\label{Galoiscomplexes} In this section we fix a finite set $S$ of places of $k$ as described at the beginning of \\S\\ref{selmer section} as well as data $\\{\\mathcal{A}^t_v\\}_v$ and $\\omega_\\bullet$ as in \\S\\ref{perf sel construct}. 
We then write\n\\[ \\Upsilon = \\Upsilon(\\{\\mathcal{A}^t_v\\}_v,\\omega_\\bullet,S)\\]\nfor the finite set of non-archimedean places $v$ of $k$ that are such that either $v$ belongs to $S$ or divides the different of $k\/\\QQ$ or the lattice $\\mathcal{F}(\\omega_\\bullet)_{v}$ differs from $\\mathcal{D}_F(\\mathcal{A}_v^t)$.\n\nWe then consider the perfect Selmer structure $\\mathcal{X}_{\\Upsilon}(\\omega_\\bullet)$ that is defined in \\S\\ref{perf sel construct}.\n\n\n\\begin{proposition}\\label{gec} If ${\\rm BSD}(A_{F\/k})$ is valid, then for any data $S$, $\\{\\mathcal{A}^t_v\\}_v$ and $\\omega_\\bullet$ as above the following claims are valid.\n\n\\begin{itemize}\n\\item[(i)] The Selmer complex ${\\rm SC}_{\\Upsilon}(A_{F\/k};\\mathcal{X}_{\\Upsilon}(\\omega_\\bullet))$ is represented by a bounded complex of finitely generated free $G$-modules.\n\\item[(ii)] Set $\\ZZ' := \\ZZ[1\/2]$. If neither of the groups $A(F)$ and $A^t(F)$ has an element of odd order, then the $\\ZZ'[G]$-module $\\ZZ'\\otimes_\\ZZ H^2({\\rm SC}_{\\Upsilon}(A_{F\/k};\\mathcal{X}_{\\Upsilon}(\\omega_\\bullet)))$ has a presentation with the same number of generators and relations.\n\\end{itemize}\n\\end{proposition}\n\n\\begin{proof} We set $C_{\\omega_\\bullet} := {\\rm SC}_{\\Upsilon}(A_{F\/k};\\mathcal{X}_{\\Upsilon}(\\omega_\\bullet))$ and write $\\chi_G(C_{\\omega_\\bullet})$ for its Euler characteristic in $K_0(\\ZZ[G])$.\n\nThen, the definition of $\\Upsilon$ implies immediately that the module $\\mathcal{Q}(\\omega_\\bullet)_\\Upsilon$ vanishes and also combines with Lemma \\ref{fm} to imply that $\\mu_{\\Upsilon}(A_{F\/k})$ vanishes.\n\nThe vanishing of $\\mathcal{Q}(\\omega_\\bullet)_\\Upsilon[0]$ implies ${\\rm SC}_{\\Upsilon,\\omega_\\bullet}(A_{F\/k}) = C_{\\omega_\\bullet}$ and hence that\n\\[ \\partial'_G(\\chi_G({\\rm SC}_{\\Upsilon,\\omega_\\bullet}(A_{F\/k}),h_{A,F})) = \\chi_G(C_{\\omega_\\bullet}),\\]\nwhere $\\partial'_{G}$ denotes the canonical connecting homomorphism $K_0(\\ZZ[G],\\RR[G]) \\to K_0(\\ZZ[G])$ of relative $K$-theory.\n\nThen, given the vanishing of $\\mu_{\\Upsilon}(A_{F\/k})$, the equality in ${\\rm BSD}(A,F\/k)$(iv) implies that\n\\[ \\chi_G(C_{\\omega_\\bullet}) = \\partial'_G\\bigl(\\partial_G(L_\\Upsilon^*(A_{F\/k},1)\/\\Omega_{\\omega_\\bullet}(A_{F\/k}))).\\]\n\nHowever, the exactness of the lower row of diagram (\\ref{E:kcomm}), with $\\mathfrak{A} = \\ZZ[G]$ and $A_E = \\RR[G]$, implies that the composite homomorphism $\\partial'_G\\circ \\partial_G$ is zero and so it follows that the Euler characteristic $\\chi_G(C_{\\omega_\\bullet})$ must vanish.\n\nNow, by a standard resolution argument, we may fix a bounded complex of finitely generated $G$-modules $C^\\bullet$ that is isomorphic in $D(\\ZZ[G])$ to $C_{\\omega_\\bullet}$ and is such that, for some integer $a$, all of the following properties are satisfied: $C^i = 0$ for all $i < a$; $C^a$ is projective of rank (over $\\ZZ[G]$) at least two; $C^{i}$ is free for all $i \\not= a$.\n\nFrom the vanishing of $\\chi_G(C_{\\omega_\\bullet}) = \\chi_G(C^\\bullet)$ it then follows that the class of $C^a$ in $K_0(\\ZZ[G])$ coincides with that of a free $G$-module.\n\nThus, since the rank over $\\ZZ[G]$ of $C^a$ is at least two, we may conclude from the Bass Cancellation Theorem (cf. 
\\cite[(41.20)]{curtisr}) that $C^a$ is a free $G$-module, as required to prove claim (i).\n\nTurning to claim (ii), we note that if $\\ZZ'\\otimes_\\ZZ A(F)$ and $\\ZZ'\\otimes_\\ZZ A^t(F)$ are torsion-free, then Proposition \\ref{prop:perfect2} implies that the complex $C'_{\\omega_\\bullet} := \\ZZ'\\otimes_\\ZZ C_{\\omega_\\bullet}$ is acyclic outside degrees one and two and that $H^1(C'_{\\omega_\\bullet})$ is torsion-free.\n\nThis implies that $C'_{\\omega_\\bullet}$ is isomorphic in $D^{\\rm perf}(\\ZZ'[G])$ to a complex of finitely generated $\\ZZ'[G]$-modules of the form $(C')^1 \\xrightarrow{d'} (C')^2$ where $(C')^1$ is projective and $(C')^2$ is free.\n\nThe vanishing of the Euler characteristic of $C'_{\\omega_\\bullet}$ then implies, by the same argument as in claim (i), that the module $(C')^1$ is free.\n\nIn addition, the fact that the $\\RR[G]$-modules generated by $H^1(C'_{\\omega_\\bullet})$ and $H^2(C'_{\\omega_\\bullet})$ are isomorphic implies that the free modules $(C')^1$ and $(C')^2$ must have the same rank.\n\nGiven this, claim (ii) follows from the tautological exact sequence\n\\[ (C')^1 \\xrightarrow{d'} (C')^2 \\to \\ZZ'\\otimes_\\ZZ H^2({\\rm SC}_{\\Upsilon}(A_{F\/k};\\mathcal{X}_{\\Upsilon}(\\omega_\\bullet))) \\to 0.\\]\n\\end{proof}\n\n\\begin{remark}{\\em An explicit description of the module $\\ZZ'\\otimes_\\ZZ H^2({\\rm SC}_{\\Upsilon}(A_{F\/k};\\mathcal{X}_{\\Upsilon}(\\omega_\\bullet)))$ that occurs in Proposition \\ref{gec}(ii) can be found in Remark \\ref{can structure groups}.}\\end{remark}\n\n\n\n\n\\subsection{Refined Deligne-Gross-type conjectures}\n\nIn this section we address a problem raised by Dokchitser, Evans and Wiersema in \\cite[Rem. 14]{vdrehw} by explaining how ${\\rm BSD}(A_{F\/k})$ leads to an explicit formula for the fractional ideal that is generated by the product of the leading coefficients of Hasse-Weil-Artin $L$-series\n by a suitable combination of `isotypic' periods and regulators. (See, in particular, Remark \\ref{evans} below.)\n\n\\subsubsection{}We fix a character $\\psi$ in $\\widehat {G}$. 
We also then fix a subfield $E$ of $\\bc$ that is both Galois and of finite degree over\n$\\QQ$ and also large enough to ensure that, with\n$\\mathcal{O}$ denoting the ring of algebraic integers of $E$, there exists a finitely generated ${\\mathcal\nO}[G]$-lattice $T_\\psi$ that is free over $\\mathcal{O}$ and such that the $\\bc[G]$-module\n$V_\\psi:= \\bc \\otimes_{\\mathcal{O}}T_\\psi$ has character $\\psi$.\n\n\nWe then obtain a left, respectively right,\nexact functor from the category of $G$-modules to the category of $\\mathcal{O}$-modules by setting\n\n\\begin{align*} X^\\psi &:= \\Hom_{{\\mathcal O}}(T_\\psi,{\\mathcal O}\n\\otimes_{\\ZZ} X)^G,\n\\\\ X_\\psi &:= \\Hom_{{\\mathcal O}}(T_\\psi,{\\mathcal O}\\otimes_{\\ZZ} X)_G,\n\\end{align*}\nwhere the $\\Hom$-sets are endowed with the natural diagonal\n$G$-action.\n\nIt is easily seen that for any $G$-module $X$ there is a natural isomorphism of $\\mathcal{O}$-modules\n\\begin{equation}\\label{func iso} \\Hom_\\ZZ(X,\\ZZ)_\\psi \\cong \\Hom_\\mathcal{O}(X^{\\check{\\psi}},\\mathcal{O}).\\end{equation}\n\nFor a given odd prime number $p$, each maximal ideal $\\mathfrak p$ of $\\mathcal{O}$ that divides $p$ and each $\\mathcal{O}$-module $X$ we set $X_\\mathfrak{p} := \\mathcal{O}_\\mathfrak{p}\\otimes_{\\mathcal{O}}X$.\n\nWe also write $I(\\mathcal{O}_\\mathfrak{p})$ for the multiplicative group of invertible $\\mathcal{O}_\\mathfrak{p}$-submodules of $\\CC_p$ and we use the composite homomorphism of abelian groups\n\\[ \\rho_{\\mathfrak{p}}^{\\psi}: K_0(\\ZZ_p [G],\\CC_p[G]) \\to K_0(\\mathcal{O}_\\mathfrak{p} ,\\CC_p) \\xrightarrow{\\iota_\\mathfrak{p}} I(\\mathcal{O}_\\mathfrak{p}).\\]\nHere the first map is induced by the composite functor $X \\mapsto X^\\psi\\to (X^\\psi)_\\mathfrak{p}$ and $\\iota_\\mathfrak{p}$ is the canonical isomorphism induced by\n the upper row of (\\ref{E:kcomm}) with $\\A = {\\mathcal O}_\\mathfrak{p}$ and $E' = \\bc_p$ and the canonical\nisomorphisms $K_1(\\bc_p) \\xrightarrow{\\sim} \\bc_p^\\times$ and\n$K_1({\\mathcal O}_\\mathfrak{p}) \\xrightarrow{\\sim} {\\mathcal O}_\\mathfrak{p}^\\times$.\n\nFor any finite ${\\mathcal O}$-module $X$ we also set\n\\[ {\\rm char}_{\\mathfrak{p}}(X) := \\mathfrak{p}^{{\\rm length}_{{\\mathcal O}_{{\\mathfrak\np}}}(X_{\\mathfrak p})}.\\]\n\n\\subsubsection{}\\label{explicit ec section} Using the isomorphism (\\ref{func iso}), we define $R^\\psi_A$ to be the determinant, with respect to a choice of $\\mathcal{O}$-bases of $A^t(F)^{\\psi}$ and $A(F)^{\\check\\psi}$ of the isomorphism of $\\CC$-spaces\n\\[ h^\\psi_{A,F}: \\CC\\cdot A^t(F)^{\\psi} \\cong \\CC\\cdot \\Hom_\\ZZ(A(F),\\ZZ)^\\psi \\cong \\CC\\cdot \\Hom_\\mathcal{O}(A(F)^{\\check\\psi},\\mathcal{O})\\]\nthat is induced by the N\\'eron-Tate height pairing of $A$ relative to $F$.\n\nMotivated by \\cite[Def. 
12]{vdrehw}, we then define a non-zero complex number by setting\n\\[ \\mathcal{L}^\\ast(A,\\psi) := \\frac{L^\\ast(A,\\check\\psi,1)\\cdot \\tau^\\ast(\\QQ,\\psi)^d}{\\Omega_A^\\psi\\cdot w^d_\\psi\\cdot R_A^\\psi}.\\]\n\nFinally, after recalling the integral Selmer group $X_\\ZZ(A_F)$ of $A$ over $F$ that is defined by Mazur and Tate \\cite{mt} (and discussed in \\S\\ref{perfect selmer integral}), we note that if $\\sha(A_F)$ is finite then the kernel $\\sha_\\psi(A_F)$ of the natural surjective homomorphism of $\\mathcal{O}$-modules\n\\[ X_\\ZZ(A_F)_\\psi \\to \\Hom_\\ZZ(A(F),\\ZZ)_\\psi \\cong \\Hom_\\mathcal{O}(A(F)^{\\check\\psi},\\mathcal{O})\\]\nis finite.\n\nWe can now state the main result of this section.\n\n\\begin{proposition}\\label{ref deligne-gross} If ${\\rm BSD}(A_{F\/k})$ is valid, then so are the following claims.\n\\begin{itemize}\n\\item[(i)] For every $\\omega$ in $G_\\QQ$ one has $\\mathcal{L}^\\ast(A,\\omega\\circ \\psi) = \\omega(\\mathcal{L}^\\ast(A,\\psi))$. In particular, the complex number $\\mathcal{L}^\\ast(A,\\psi)$ belongs to $E$.\n\n\\item[(ii)] Assume that no place of $k$ at which $A$ has bad reduction is ramified in $F$. Then for every odd prime $p$ that satisfies the conditions (H$_1$)-(H$_4$) listed in \\S\\ref{tmc} and for which neither $A(F)$ nor $A^t(F)$ has a point of order $p$, and every maximal ideal $\\mathfrak{p}$ of $\\mathcal{O}$ that divides $p$, there is an equality of fractional $\\mathcal{O}_\\mathfrak{p}$-ideals\n\n\\[ \\mathcal{L}^\\ast(A,\\psi)\\cdot \\mathcal{O}_{\\mathfrak{p}} = \\frac{{\\rm char}_\\mathfrak{p}(\\sha_\\psi(A_F))\\cdot \\prod_{v\\in S^*_{p,{\\rm u}}}\\rho_\\mathfrak{p}^\\psi(\\mu_v(A_{F\/k}))}{|G|^{r_{A,\\psi}}\\cdot \\prod_{v \\in S_k^p\\cap S_k^F}\\varrho_\\psi^d\\cdot\\prod_{v\\in S_k^F\\cap S_k^f}P_v(A,\\check\\psi,1)}.\\]\nHere $$r_{A,\\psi}:={\\rm dim}_\\CC(\\CC\\cdot A^t(F)^\\psi)$$\nwhile $S^*_{p,{\\rm u}}$ is the set of $p$-adic places of $k$ that are unramified in $F$ but divide the different of $k\/\\QQ$ and, for every $v\\in S_k^F\\cap S_k^f$, $P_v(A,\\check\\psi,1)$ is the value at $z=1$ of the Euler factor at $v$ of the $\\check\\psi$-twist of $A$.\n\\end{itemize}\n\\end{proposition}\n\n\\begin{proof} The first assertion of claim (i) is equivalent to asserting that the element\n\\[ \\mathcal{L}^\\ast := \\sum_{\\psi \\in \\widehat{G}}\\mathcal{L}^\\ast(A,\\psi)\\cdot e_\\psi\\]\nbelongs to the subgroup $\\zeta(\\QQ[G])^\\times$ of $\\zeta(\\RR[G])^\\times$.\n\nRecalling that $\\zeta(\\QQ[G])^\\times$ is the full pre-image under $\\delta_G$ of the subgroup $K_0(\\ZZ[G],\\QQ[G])$ of $K_0(\\ZZ[G],\\RR[G])$, it is therefore enough to prove that $\\delta_G(\\mathcal{L}^\\ast)$ belongs to $K_0(\\ZZ[G],\\QQ[G])$.\n\nTo do this we fix any basis of differentials $\\omega_\\bullet$ as in the statement of ${\\rm BSD}(A_{F\/k})$ and write $\\mathcal{L}^\\ast$ as a product $(\\mathcal{L}^\\ast_1)^{-1}\\cdot \\mathcal{L}^\\ast_2\\cdot (\\mathcal{L}^\\ast_3)^{-1}$ with\n\\[ \\mathcal{L}^\\ast_1 := {\\rm Nrd}_{\\RR[G]}(\\Omega_{\\omega_\\bullet}(A_{F\/k}))^{-1} \\cdot \\sum_{\\psi\\in \\widehat{G}} \\Omega_A^\\psi\\cdot w^d_\\psi\\cdot\\tau^\\ast(\\QQ,\\psi)^{-d}\\cdot e_\\psi,\\]\n\\[ \\mathcal{L}^\\ast_2 := {\\rm Nrd}_{\\RR[G]}(\\Omega_{\\omega_\\bullet}(A_{F\/k}))^{-1}\\cdot \\sum_{\\psi \\in \\widehat{G}}L^\\ast(A,\\check\\psi,1)\\cdot e_\\psi,\\]\nand\n\\[ \\mathcal{L}^\\ast_3 := \\sum_{\\psi\\in \\widehat{G}}R_A^\\psi\\cdot e_\\psi.\\]\n\nProposition \\ref{lms}(i) implies $\\mathcal{L}^\\ast_1$ belongs to 
$\\zeta(\\QQ[G])^\\times$. In addition, for any set of places $S$ as in the statement of ${\\rm BSD}(A_{F\/k})$, the element $\\mu_S(A_{F\/k})$ belongs to $K_0(\\ZZ[G],\\QQ[G])$. We next note that $\\delta_G(\\mathcal{L}^\\ast_2)$ differs from the left hand side of the equality in ${\\rm BSD}(A_{F\/k})$(iv) by\n$$\\sum_{v\\in S\\cap S_k^f}\\delta_G(L_v(A,F\/k)),$$ where $L_v(A,F\/k)\\in\\zeta(\\QQ[G])^\\times$ is the equivariant Euler factor of $(A,F\/k)$ at $v$ (see Appendix \\ref{consistency section} below).\n\nThis difference belongs to $K_0(\\ZZ[G],\\QQ[G])$ and therefore the validity of ${\\rm BSD}(A_{F\/k})$ implies that $$\\delta_G(\\mathcal{L}^\\ast_2)- \\chi_G({\\rm SC}_{S,\\omega_\\bullet}(A_{F\/k}),h_{A,F})$$ also belongs to $K_0(\\ZZ[G],\\QQ[G])$.\n\nTo prove $\\mathcal{L}^\\ast \\in \\zeta(\\QQ[G])^\\times$ it is therefore enough to note that $\\delta_G(\\mathcal{L}^\\ast_3)$ also differs from $\\chi_G({\\rm SC}_{S,\\omega_\\bullet}(A_{F\/k}),h_{A,F})$ by an element of $K_0(\\ZZ[G],\\QQ[G])$ (as one verifies by straightforward computation).\n\nThe second assertion of claim (i) is true since if $\\omega$ is any element of $G_{\\QQ^c\/E}$, then $\\omega\\circ \\psi = \\psi$ and so $\\omega(\\mathcal{L}^\\ast(A,\\psi)) = \\mathcal{L}^\\ast(A,\\omega\\circ\\psi) = \\mathcal{L}^\\ast(A,\\psi)$.\n\nTurning to claim (ii) we note that the given hypotheses imply that the data $A, F\/k$ and $p$ satisfy the conditions of Theorem \\ref{bk explicit}. To prove claim (ii) it is therefore enough to show that, if the difference between the left and right hand sides of the equality in Theorem \\ref{bk explicit} belongs to $K_0(\\ZZ_p[G],\\QQ_p[G])$, then its image under $\\rho_{\\mathfrak{p}}^{\\psi}$ is the equality in claim (ii). Since this image is independent of the choice of isomorphism $j:\\CC\\cong\\CC_p$ we will omit it from all notations.\n\nThe group $I(\\mathcal{O}_\\mathfrak{p})$ is torsion-free and so Proposition \\ref{basic props}(ii) implies that each term $R_{F\/k}(\\tilde A^t_v)$ that occurs in Theorem \\ref{bk explicit} belongs to $\\ker(\\rho_{\\mathfrak{p}}^{\\psi})$.\n\nThus, since $\\mathcal{L}^\\ast(A,\\psi)$ differs from the element $\\mathcal{L}^\\ast_{A,\\psi}$ defined in (\\ref{bkcharelement}) by the equality\n\\[ \\mathcal{L}^\\ast(A,\\psi) = \\mathcal{L}^\\ast_{A,\\psi}\\cdot (R_A^\\psi)^{-1}\\prod_{v \\in S_k^p\\cap S_k^F}\\varrho_{\\psi}^{-d}\\cdot\\prod_{v\\in S_k^F\\cap S_k^f}P_v(A,\\check\\psi,1)^{-1}\\]\nthe claimed result will follow if we can show that $\\rho_{\\mathfrak{p}}^{\\psi}$ sends the element\n\\[ \\chi_{G,p}( {\\rm SC}_p(A_{F\/k}),h_{A,F})-\\delta_{G,p}(\\mathcal{L}^\\ast_3)\\]\nof $K_0(\\ZZ_p[G],\\QQ_p[G])$ to the ideal $|G|^{-r_{A,\\psi}}\\cdot {\\rm char}_\\mathfrak{p}(\\sha_\\psi(A_F))$.\n\nNow, under the given hypotheses, Proposition \\ref{explicitbkprop} implies that the complex $C := {\\rm SC}_p(A_{F\/k})$ is acyclic outside degrees one and two and has cohomology $A^t(F)_p$ and $X_\\ZZ(A_F)_p=\\Sel_p(A_F)^\\vee$ in these respective degrees.\n\nIn particular, since $A^t(F)_p$ is torsion-free we can fix a representative of $C$ of the form $C^1 \\xrightarrow{d} C^2$, where $C^1$ and $C^2$ are free $\\ZZ_p[G]$-modules of the same rank.\n\nThen the tautological exact sequence\n\\begin{equation} 0 \\rightarrow H^1(C) \\xrightarrow{\\iota} C^1 \\xrightarrow{d}\nC^2 \\xrightarrow{\\pi} H^2(C) \\rightarrow\n0\\label{tatseq}\\end{equation}\ninduces a commutative diagram of ${\\mathcal O}_p$-modules with exact rows\n\\[\\begin{CD}\n@. @. 
C^1_{\\psi} @> d_\\psi >> C^2_{\\psi} @> \\pi_\\psi\n>> H^2(C)_\\psi @> >> 0\\\\ @. @. @V {t^1_\\psi} VV @V\n{t^2_\\psi} VV \\\\ 0 @> >> H^{1}(C)^\\psi @> \\iota^\\psi >>\n C^{1,\\psi} @> d^\\psi >> C^{2,\\psi}.\\end{CD}\\]\nEach vertical morphism $t^i_\\psi$ here is induced by sending each $x$ in $\\Hom_{{\\mathcal O}_p}(T_{\\psi,p}, {\\mathcal O}_p\\otimes_{\\ZZ_p}C^i)$ to $\\sum_{g \\in G}g(x)$ and is bijective since the $\\ZZ_p[G]$-module $C^i$ is free.\n\nThis diagram gives rise to an exact sequence of ${\\mathcal O}_\\mathfrak{p}$-modules\n\\begin{equation}\\label{scal}\n0\\rightarrow\nH^{1}(C)^\\psi_\\mathfrak{p}\n\\xrightarrow{\\iota^\\psi}C_\\mathfrak{p}^{1,\\psi}\\xrightarrow{d^\\psi}\nC_\\mathfrak{p}^{2,\\psi} \\xrightarrow{\\pi_\\psi\\circ (t^2_\\psi)^{-1}} H^2(C)_{\\psi,\\mathfrak{p}}\n\\rightarrow 0\\end{equation}\nwhich in turn implies that\n\\begin{equation}\\label{firstform} \\rho_{\\mathfrak{p}}^{\\psi}(\\chi_{G,p}( {\\rm SC}_p(A_{F\/k}),h_{A,F})) = \\iota_\\mathfrak{p}\\bigl(\\chi_{\\mathcal{O}_\\mathfrak{p}}(C^{\\bullet,\\psi}_{\\mathfrak{p}}, \\tilde h^\\psi)\\bigr).\\end{equation}\nHere $C^{\\bullet,\\psi}_{\\mathfrak{p}}$ denotes the complex $C^{1,\\psi}_\\mathfrak{p} \\xrightarrow{d^\\psi} C^{2,\\psi}_\\mathfrak{p}$ and $\\tilde h^\\psi$ the composite isomorphism of $\\CC_p$-modules\n\\[ \\CC_p\\cdot H^1(C^{\\bullet,\\psi}_{\\mathfrak{p}}) \\cong \\CC_p\\cdot (A^t(F)^\\psi)_\\mathfrak{p} \\xrightarrow{h^\\psi} \\Hom_{\\CC_p}(\\CC_p\\cdot (A(F)^{\\check\\psi})_\\mathfrak{p},\\CC_p) \\cong \\CC_p\\cdot H^2(C^{\\bullet,\\psi}_{\\mathfrak{p}})\\]\nin which the first and third isomorphisms are induced by the maps in (\\ref{scal}) and $h^\\psi$ is induced by the isomorphism $h^\\psi_{A,F}$.\n\nGiven the definition of each term $R_A^\\psi$ it is, on the other hand, clear that\n\\begin{equation}\\label{secondform} \\rho^\\psi_\\mathfrak{p}(\\delta_{G,p}(\\mathcal{L}^\\ast_3)) = \\iota_\\mathfrak{p}\\bigl(\\chi_{\\mathcal{O}_\\mathfrak{p}}(H^1(C^{\\bullet,\\psi}_{\\mathfrak{p}})[-1]\\oplus H^2(C^{\\bullet,\\psi}_{\\mathfrak{p}})\n_{\\rm tf}[-2], h^\\psi)\\bigr).\\end{equation}\n\nNow, since $\\mathcal{O}_\\mathfrak{p}$ is a discrete valuation ring it is straightforward to construct an exact triangle in $D(\\mathcal{O}_\\mathfrak{p})$ of the form\n\\[ H^2(C^{\\bullet,\\psi}_{\\mathfrak{p}})\n_{\\rm tor}[-2] \\to C^{\\bullet,\\psi}_{\\mathfrak{p}} \\to H^1(C^{\\bullet,\\psi}_{\\mathfrak{p}})[-1]\\oplus H^2(C^{\\bullet,\\psi}_{\\mathfrak{p}})\n_{\\rm tf}[-2] \\to H^2(C^{\\bullet,\\psi}_{\\mathfrak{p}})\n_{\\rm tor}[-1]\\]\nand Lemma \\ref{fk lemma} applies to this triangle to imply that\n\\begin{align}\\label{thirdform}\n &\\chi_{\\mathcal{O}_\\mathfrak{p}}(C^{\\bullet,\\psi}_{\\mathfrak{p}}, \\tilde h^\\psi) - \\chi_{\\mathcal{O}_\\mathfrak{p}}(H^1(C^{\\bullet,\\psi}_{p})[-1]\\oplus H^2(C^{\\bullet,\\psi}_{\\mathfrak{p}})\n_{\\rm tf}[-2], h^\\psi)\\notag\\\\\n=\\,&\\chi_{\\mathcal{O}_\\mathfrak{p}}(H^2(C^{\\bullet,\\psi}_{\\mathfrak{p}})\n_{\\rm tf}[-1]\\oplus H^2(C^{\\bullet,\\psi}_{\\mathfrak{p}})\n_{\\rm tf}[-2], \\tilde h^\\psi\\circ ( h^\\psi)^{-1}) +\\chi_{\\mathcal{O}_\\mathfrak{p}}(H^2(C^{\\bullet,\\psi}_{\\mathfrak{p}})\n_{\\rm tor}[-2], 0).\\end{align}\n\nNext we note the definition of $\\sha_\\psi(A_F)$ ensures $\\sha_\\psi(A_F)_\\mathfrak{p} = H^2(C^{\\bullet,\\psi}_{\\mathfrak{p}})\n_{\\rm tor}$ and hence that\n\\begin{equation}\\label{fourthform}\\iota_\\mathfrak{p}\\bigl(\\chi_{\\mathcal{O}_\\mathfrak{p}}(H^2(C^{\\bullet,\\psi}_{\\mathfrak{p}})\n_{\\rm tor}[-2], 
0)\\bigr) = {\\rm char}_\\mathfrak{p}(\\sha_\\psi(A_F)).\\end{equation}\n\nIn addition, after identifying both $\\CC_p\\cdot H^2(C)_{\\psi,\\mathfrak{p}}$ and $\\CC_p\\cdot H^2(C)^\\psi_{\\mathfrak{p}}$ with $e_\\psi(\\CC_p\\cdot H^2(C)_{\\mathfrak{p}})$ in the natural way, the map $t^2_\\psi$ that occurs in (\\ref{scal}) induces upon the latter space the map given by multiplication by $|G|$.\n\nTo derive claim (ii) from the displayed formulas (\\ref{firstform}), (\\ref{secondform}), (\\ref{thirdform}) and (\\ref{fourthform}) it is thus enough to note that\n\\[ \\dim_{\\CC_p}(\\CC_p\\cdot H^2(C^{\\bullet,\\psi}_{\\mathfrak{p}}))=\\dim_{\\CC_p}(\\CC_p\\cdot (A^t(F)^\\psi)_\\mathfrak{p})=r_{A,\\psi},\\]\nand hence that\n\\begin{multline*} \\chi_{\\mathcal{O}_\\mathfrak{p}}(H^2(C^{\\bullet,\\psi}_{\\mathfrak{p}})\n_{\\rm tf}[-1]\\oplus H^2(C^{\\bullet,\\psi}_{\\mathfrak{p}})\n_{\\rm tf}[-2], \\tilde h^\\psi\\circ ( h^\\psi)^{-1})\\\\ = \\chi_{\\mathcal{O}_\\mathfrak{p}}(H^2(C^{\\bullet,\\psi}_{\\mathfrak{p}})\n_{\\rm tf}[-1]\\oplus H^2(C^{\\bullet,\\psi}_{\\mathfrak{p}})\n_{\\rm tf}[-2], \\cdot |G|^{-1})\\end{multline*}\nis sent by $\\iota_\\mathfrak{p}$ to the ideal generated by $|G|^{-r_{A,\\psi}}$.\n\\end{proof}\n\n\\begin{remark}\\label{evans}{\\em Fix a Galois extension $F$ of $k = \\QQ$ and an elliptic curve $A$ whose conductor $N_A$ is prime to the discriminant $d_F$ of $F$ and is such that $A(F)$ is finite. Then for each $\\psi$ in $\\widehat{G}$ one has $r_{A,\\psi}=0$, and hence $R_A^\\psi = 1$, so that the complex number $\\mathcal{L}^\\ast(A,\\psi)$ agrees up to a unit of $\\mathcal{O}$ with the element $\\mathcal{L}(A,\\psi)$ that is defined in \\cite[Def. 12]{vdrehw}. Now fix an odd prime $p$ that is prime to $d_F$, to $N_A$, to the order of $A(F)$, to the order of the group of points of the reduction of $A$ at any prime divisor of $d_F$ and to the Tamagawa number of $A_F$ at each place of bad reduction. Then the data $A, F\/k$ and $p$ satisfy the hypotheses of Proposition \\ref{ref deligne-gross}(ii) and the sets $S^*_{p,{\\rm u}}$ and $S_k^p\\cap S_k^F$ are empty (the former since $k = \\QQ$). The explicit formula in the latter result therefore simplifies to give\n\\[ \\mathcal{L}(A,\\psi)\\cdot \\mathcal{O}_{\\mathfrak{p}} = {\\rm char}_\\mathfrak{p}(\\sha_\\psi(A_F)) \\cdot\\prod_{\\ell\\mid d_F}P_\\ell(A,\\check\\psi,1)^{-1}\\]\nwhere in the product $\\ell$ runs over all prime divisors of $d_F$. This formula shows that, in any such case, the fractional $\\mathcal{O}$-ideal generated by $\\mathcal{L}(A,\\psi)$ should depend crucially on the structure of $\\sha(A_F)$ as a $G$-module, as already suggested in this context by Dokchitser, Evans and Wiersema in \\cite[Rem. 40]{vdrehw}. In particular, this observation is both consistent with, and helps to clarify, the result of [loc. cit., Th. 37(2)]. 
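We note, finally, that the term ${\\rm char}_\\mathfrak{p}(\\sha_\\psi(A_F))$ occurring here can be made more concrete. Indeed, since $A(F)$ is finite in the present setting, the group $\\Hom_\\ZZ(A(F),\\ZZ)$ vanishes and so the definition of $\\sha_\\psi(A_F)$ given in \\S\\ref{explicit ec section} directly implies that\n\\[ \\sha_\\psi(A_F) = X_\\ZZ(A_F)_\\psi.\\]\nIn addition, since $A(F)\\otimes_\\ZZ(\\QQ_p\/\\ZZ_p)$ also vanishes, one has $\\Sel_p(A_F) = \\sha(A_F)[p^\\infty]$ and hence, via the equality $X_\\ZZ(A_F)_p = \\Sel_p(A_F)^\\vee$ recalled in the proof of Proposition \\ref{ref deligne-gross}, the ideal ${\\rm char}_\\mathfrak{p}(\\sha_\\psi(A_F))$ is indeed an invariant of the $\\ZZ_p[G]$-module $\\sha(A_F)[p^\\infty]$. 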
}\n\\end{remark}\n\n\n\\section{Abelian congruence relations and module structures}\\label{congruence sec}\n\nIn both this and the next section we apply the general results of Sano, Tsoi and the first author in \\cite{bst} to derive from the assumed validity of ${\\rm BSD}_p(A_{F\/k})$(iv) families of $p$-adic congruences that can be much more explicit than those discussed in Remark \\ref{cons1}.\n\nIn particular, in this section we will focus on congruence relations that express links to the Galois structures of Selmer and Tate-Shafarevich groups.\n\n\n\nIn contrast to \\S\\ref{tmc}, in this section we are not required to assume any of the hypotheses (H$_1$)-(H$_5$).\n\nHowever, we will now, unless explicitly stated otherwise, restrict to the case that $G$ is abelian and hence will not distinguish between the leading coefficient element $L_S^*(A_{F\/k},1)$ in $K_1(\\RR[G])$ and its reduced norm $\\sum_{\\psi \\in \\widehat G}L_S^*(A,\\check\\psi,1)\\cdot e_\\psi$ in $\\RR[G]^\\times$.\n\nAn extension of the results of this section to the general (non-abelian) setting will be given in the upcoming article \\cite{dmckwt}.\n\n\\subsection{Statement of the main result}\\label{8.1}\n\nThroughout this section we give ourselves a fixed odd prime $p$ and an isomorphism of fields $\\CC\\cong\\CC_p$ (that we will usually not mention).\n\nWe also fix a finite set $S$ of places of $k$ with\n\\[ S_k^\\infty\\cup S_k^p\\cup S_k^F \\cup S_k^A\\subseteq S.\\]\n\nWe write $x\\mapsto x^\\#$ for the $\\CC$-linear involution of $\\CC[G]$ that inverts elements of $G$. We also set $n := [k:\\QQ]$ and write $d$ for the dimension of $A$.\n\n\\subsubsection{}In order to state the main result of this section we must first extend the definition of logarithmic resolvents given in (\\ref{log resol abelian}) to the setting of abelian varieties.\n\nTo do this we do not require $F\/k$ to be abelian but we do assume that we are given an ordered $k$-basis $\\{\\omega'_j: j \\in [d]\\}$ of $H^0(A^t,\\Omega^1_{A^t})$ and we use this basis to define a classical period $\\Omega_A^{F\/k}$ in $\\CC[G]^{\\times}$ as in (\\ref{period def}).\n\n\nFor each index $j$ we then write ${\\rm log}_{\\omega'_j}$ for the formal logarithm of $A^t$ over $F_p$ that is defined with respect to $\\omega_j'$.\n\nWe also fix an ordering of $\\Sigma(k)$. 
We write $\\CC_p[G]^{nd}$ for the direct sum of $nd$ copies of $\\CC_p[G]$ and fix a bijection between the standard basis of this module and the lexicographically-ordered direct product $[d]\\times \\Sigma(k)$.\n\nThen for any ordered subset\n\\begin{equation}\\label{setx} x_\\bullet:= \\{x_{(i,\\sigma)}: (i,\\sigma) \\in [d]\\times\\Sigma(k)\\}\\end{equation}\nof $A^t(F_p)^\\wedge_p$ we define a logarithmic resolvent element of $\\zeta(\\CC_p[G])$ by setting\n\\[ \\mathcal{LR}^p_{A^t_{F\/k}}(x_\\bullet) := {\\rm Nrd}_{\\QQ^c_p[G]}\\left(\\bigl(\\sum_{g \\in G} \\hat \\sigma(g^{-1}({\\rm log}_{\\omega_j'}(x_{(j',\\sigma')})))\\cdot g \\bigr)_{(j,\\sigma),(j',\\sigma')}\\right) \\]\nwhere the indices $(j,\\sigma)$ and $(j',\\sigma')$ run over $[d]\\times \\Sigma(k)$ and ${\\rm Nrd}_{\\QQ^c_p[G]}(-)$ denotes the reduced norm of the given matrix in ${\\rm M}_{dn}(\\QQ_p^c[G])$.\n\nIt is clear that if $A$ is an elliptic curve (so $d=1$) and $F\/k$ is abelian, then the `$\\psi$-component' of this definition agrees with (\\ref{log resol abelian}).\n\n\\subsubsection{}\\label{statementstructure} For each non-archimedean place $v$ of $k$ that does not ramify in $F\/k$ and at which $A$ has good reduction we define an element of $\\QQ [G]$ by setting\n\\[ P_v(A_{F\/k},1) := 1-\\Phi_v\\cdot a_v + \\Phi_v^2\\cdot {\\rm N}v^{-2}.\\]\nHere $a_v$ is the integer $1 + {\\rm N}v - | A(\\kappa_v)|$\n\nFor a non-negative integer $a$ we write $\\widehat{G}_{A,(a)}$ for the subset of $\\widehat{G}$ comprising characters $\\psi$ for which the $L$-series $L(A,\\psi,z)$ vanishes at $z=1$ to order at least $a$. This definition ensures that the $\\CC[G]$-valued function\n\\[ L^{(a)}_{S}(A_{F\/k},z) := \\sum_{\\psi \\in \\widehat{G}_{A,(a)}}z^{-a}L_S(A,\\check\\psi,z)\\cdot e_\\psi\\]\nis holomorphic at $z=1$.\n\nFor each $a$ we also define idempotents of $\\QQ[G]$ by setting\n\\[ e_{(a)} = e_{F,(a)} := \\sum_{\\psi \\in \\widehat{G}_{A,(a)}}e_\\psi\\]\nand\n\\[ e_{a} = e_{F,a}:= \\sum_{\\psi \\in \\widehat{G}_{A,(a)}\\setminus \\widehat{G}_{A,(a+1)}}e_\\psi\\]\n(so that $e_{(a)} = \\sum_{b \\ge a}e_b$).\n\nThe N\\'eron-Tate height pairing for $A$ over $F$ induces a canonical isomorphism of $\\RR[G]$-modules\n\\[ h_{A_{F\/k}}:\\RR\\cdot A^t(F) \\cong \\Hom_\\RR(\\RR\\cdot A(F),\\RR) = \\RR\\cdot \\Hom_{\\ZZ[G]}(A(F),\\ZZ[G]).\\]\nFor each non-negative integer $a$ this pairing combines with our fixed isomorphism of fields $\\CC\\cong \\CC_p$ to induce an isomorphism of $\\CC_p[G]$-module\n\n\\begin{equation}\\label{athpower} {\\rm ht}^{a}_{A_{F\/k}}: \\CC_p\\cdot {\\bigwedge}^a_{\\ZZ_p[G]}\\Hom_{\\ZZ_p[G]}(A(F)_p,\\ZZ_p[G]) \\cong \\CC_p\\cdot{\\bigwedge}^a_{\\ZZ_p[G]} A^t(F)_p.\\end{equation}\n\n\nIn the following result we shall also use (the scalar extension of) the canonical `evaluation' pairing $${\\bigwedge}^a_{\\ZZ_p[G]} A^t(F)_p\\times{\\bigwedge}^a_{\\ZZ_p[G]} \\Hom_{\\ZZ_p[G]}(A^t(F)_p,\\ZZ_p[G])\\to \\ZZ_p[G]$$\nand write ${\\rm Fit}^a_{\\ZZ_p[G]}(\\Sel_p(A_F)^\\vee)$ for the $a$-th Fitting ideal of the $\\ZZ_p[G]$-module $\\Sel_p(A_F)^\\vee$.\n\n\nThe proof of this result will be given in \\S\\ref{proof of big conj}.\n\n\n\\begin{theorem}\\label{big conj} Fix an ordered maximal subset $x_\\bullet:= \\{x_{(i,\\sigma)}: (i,\\sigma) \\in [d]\\times\\Sigma(k)\\}$ of $A^t(F_p)^\\wedge_p$ that is linearly independent over $\\ZZ_p[G]$ and a finite non-empty set $T$ of places of $k$ that is disjoint from $S$\n\nIf ${\\rm BSD}(A_{F\/k})$ is valid, then for any non-negative integer $a$, any subsets $\\{\\theta_j: j 
\\in [a]\\}$ and $\\{\\phi_i: i\\in [a]\\}$ of $\\Hom_{\\ZZ_p[G]}(A^t(F)_p,\\ZZ_p[G])$ and $\\Hom_{\\ZZ_p[G]}(A(F)_p,\\ZZ_p[G])$ respectively, and any element $\\alpha$ of $\\ZZ_p[G]\\cap \\ZZ_p[G]e_{(a)}$, the product\n\\begin{equation}\\label{key product} \\alpha^{1+2a}\\cdot (\\prod_{v \\in T}P_v(A_{F\/k},1)^\\#) \\cdot \\frac{L^{(a)}_{S}(A_{F\/k},1)}{\\Omega_A^{F\/k}\\cdot w_{F\/k}^d}\\cdot \\mathcal{LR}^p_{A^t_{F\/k}}(x_\\bullet)\\cdot (\\wedge_{j=1}^{j=a}\\theta_j)({\\rm ht}^{a}_{A_{F\/k}}(\\wedge_{i=1}^{i=a}\\phi_i))\\end{equation}\nbelongs to ${\\rm Fit}^a_{\\ZZ_p[G]}(\\Sel_p(A_F)^\\vee)$ and annihilates $\\sha(A^t_{F})[p^\\infty]$.\n\\end{theorem}\n\n\nWe remark on several ways in which this result either simplifies or becomes more explicit.\n\n\\begin{remarks}\\label{more explicit rem}{\\em \\\n\n\\noindent{}(i) If $A(F)$ does not contain an element of order $p$, then our methods will show that the prediction in Theorem \\ref{big conj} should remain true if the term $\\prod_{v \\in T}P_v(A_{F\/k},1)^\\#$ is omitted. For more details see Remark \\ref{omit T} below.\n\n\\noindent{}(ii) If one fixes a subset $\\{\\phi_i: i \\in [a]\\}$ of $\\Hom_{\\ZZ_p[G]}(A(F)_p,\\ZZ_p[G])$ of cardinality $a$ that generates a free direct summand of rank $a$, then our approach combines with \\cite[Th. 3.10(ii)]{bst} to suggest that, as the subset $\\{\\theta_j: j \\in [a]\\}$ varies, elements of the form (\\ref{key product}) can be used to give an explicit description of the $a$-th Fitting ideal ${\\rm Fit}^a_{\\ZZ_p[G]}(\\Sel_p(A_F)^\\vee)$.}\\end{remarks}\n\n\\begin{remark}\\label{e=1 case}{\\em In special cases one can either show, or is led to predict, that the idempotent $e_{(a)}$ is equal to $1$ and hence that the element $\\alpha$ in (\\ref{key product}) can be taken to be $1$.\n This is, for example, the case if $a = 0$, since each function $L(A, \\psi,z)$ is holomorphic at $z=1$. In the setting of abelian extensions of $\\QQ$, this case will be considered in detail in \\S\\ref{mod sect}. This situation can also arise naturally in cases with $a > 0$, such as the following.\n\n\\noindent{}(i) If $F$ is a ring class field of an imaginary quadratic field $k$ and suitable hypotheses are satisfied by an elliptic curve $A\/\\QQ$ and the extension $F\/\\QQ$, then the existence of a Heegner point in $A(F)$ with non-zero trace to $A(k)$ combines with the theorem of Gross and Zagier to imply that $e_{(1)} = 1$. This case will be considered in detail in \\S\\ref{HHP}.\n\n\\noindent{}(ii) As a generalization of (i), if $F$ is a generalized dihedral extension of a field $F'$, $k$ is the unique quadratic extension of $F'$ in $F$, all $p$-adic places split completely in $k\/F'$ and the rank of $A(k)$ is odd, then the result of Mazur and Rubin in \\cite[Th. B]{mr2} combines with the prediction of ${\\rm BSD}(A_{F\/k})$(ii) to imply that $e_{(1)} = 1$.\n\n\\noindent{}(iii) Let $F$ be a finite extension of $k$ inside a $\\ZZ_p$-extension $k_\\infty$, set $\\Gamma:= G_{k_\\infty\/k}$ and write $r_\\infty$ for the corank of ${\\rm Sel}_p(A_{k_\\infty})$ as a module over the Iwasawa algebra $\\ZZ_p[[\\Gamma]]$. 
Then, if $A$, $F\/k$ and $p$ satisfy all of the hypotheses listed at the beginning of \\S\\ref{tmc}, one can show that the inverse limit $\\varprojlim_{F'}A^t(F')_p$, where $F'$ runs over all finite extensions of $F$ in $k_\\infty$, is a free $\\ZZ_p[[\\Gamma]]$-module of rank $r_\\infty$ and this in turn combines with the prediction of ${\\rm BSD}(A_{F\/k})$(ii) to imply that $e_{(r_\\infty)}=1$.\n}\n\\end{remark}\n\n\\begin{remark}\\label{new remark SC}{\\em Under suitable additional hypotheses it is also possible to obtain more explicit versions of the containments predicted by Theorem \\ref{big conj}. To describe an example, assume that neither $A(F)$ nor $A^t(F)$ has a point of order $p$, that $p$ is unramified in $k$, that all $p$-adic places of $k$ are at most tamely ramified in $F$ and that $A$, $F\/k$ and $p$ satisfy the hypotheses (H$_1)$-(H$_5$) that are listed in \\S\\ref{tmc}. Then, after taking account of the equality in Remark \\ref{emptysets}, the argument that is used to prove Theorem \\ref{big conj} can be directly applied to the Selmer complex ${\\rm SC}_p(A_{F\/k})$ rather than to the complex $C_{S,X}$ that occurs in \\S\\ref{proof of big conj}. In this way one finds that ${\\rm BSD}(A_{F\/k})$ predicts under the above hypotheses that for any given non-negative integer $a$ and any data\n $\\alpha$, $\\{\\theta_j:j\\in[a]\\}$ and $\\{\\phi_i:i\\in[a]\\}$ as in Theorem \\ref{big conj}, the product\n\\begin{equation}\\label{key product2} \\alpha^{1+2a}\\cdot(e_{F,a}\\cdot\\mathcal{L}_{A,F\/k}^*)\\cdot (\\wedge_{j=1}^{j=a}\\theta_j)({\\rm ht}^{a}_{A_{F\/k}}(\\wedge_{i=1}^{i=a}\\phi_i))\\end{equation}\nshould belong to ${\\rm Fit}^a_{\\ZZ_p[G]}(\\Sel_p(A_F)^\\vee)$ and annihilate $\\sha(A_F^t)[p^\\infty]$. Here $\\mathcal{L}_{A,F\/k}^*$ is the element defined in (\\ref{bkcharelement}) and hence is related to the values\nof $L$-functions that are truncated only at the places in $S_k^f\\cap S_k^F$ rather than at all places in $S$ (as in the expression (\\ref{key product})).\n}\\end{remark}\n\n\\begin{remark}\\label{integrality rk}{\\em To obtain concrete congruences from the result of Theorem \\ref{big conj} or its variant in Remark \\ref{new remark SC} one can, for example, proceed as follows. 
The stated results imply that the product expression $\\mathcal{L}$ in either (\\ref{key product}) or (\\ref{key product2}) belongs to $\\ZZ_p[G]$ and hence that for every $g$ in $G$ one has\n\\[ \\sum_{\\psi \\in \\widehat{G}_{A,a}}\\psi(g)\\mathcal{L}_\\psi \\equiv 0 \\,\\,\\,({\\rm mod}\\,\\, |G|\\cdot \\ZZ_p).\\]\nHere each element $\\mathcal{L}_\\psi$ of $\\CC_p$ is defined by the equality $\\mathcal{L} = \\sum_{\\psi\\in \\widehat{G}_{A,a}}\\mathcal{L}_\\psi\\cdot e_\\psi$ and so can be explicitly related to the value at $z=1$ of the function $z^{-a}L(A,\\check\\psi,z)$.}\\end{remark}\n\n\\subsection{Explicit regulator matrices}Motivated by Remark \\ref{e=1 case}, we consider in more detail the case that $e_{(a)} = 1$.\n\nIn this case, there exist $a$-tuples in $A^t(F)$ and $A(F)$ that are each linearly independent over $\\QQ[G]$ and this fact implies that the expressions $(\\wedge_{j=1}^{j=a}\\theta_j)({\\rm ht}^a_{A_{F\/k}}(\\wedge_{i=1}^{i=a}\\phi_i))$ in Theorem \\ref{big conj} can be interpreted in terms of classical N\\'eron-Tate heights.\n\nTo state the result we use the following notation: for ordered $a$-tuples $P_\\bullet = \\{P_i: i \\in [a]\\}$ of $A^t(F)$ and $Q_\\bullet = \\{Q_j: j \\in [a]\\}$ of $A(F)$ we define a matrix in ${\\rm M}_a(\\RR[G])$ by setting\n\\begin{equation}\\label{regulatormatrix} h_{F\/k}(P_\\bullet, Q_\\bullet) := (\\sum_{g \\in G}\\langle g(P_i),Q_j\\rangle_{A_F}\\cdot g^{-1})_{1\\le i,j\\le a},\\end{equation}\nwhere $\\langle -,-\\rangle_{A_F}$ denotes the N\\'eron-Tate height pairing for $A$ over $F$.\n\n\\begin{lemma}\\label{height pairing interp} Fix a natural number $a$ such that $e_{(a)} =1$ and then choose ordered $a$-tuples $P_\\bullet$ of $A^t(F)$ and $Q_\\bullet$ of $A(F)$ that are each linearly independent over $\\QQ[G]$. Then the matrix $e_a\\cdot h_{F\/k}(P_\\bullet, Q_\\bullet)$ belongs to ${\\rm GL}_a(\\RR[G]e_a)$ and one has\n\\begin{multline*} e_a\\cdot\\left\\{(\\wedge_{j=1}^{j=a}\\theta_j)({\\rm ht}^a_{A_{F\/k}}(\\wedge_{i=1}^{i=a}\\phi_i))\\mid \\theta_j\\in \\Hom_{\\ZZ[G]}(A^t(F),\\ZZ[G]),\\phi_i\\in \\Hom_{\\ZZ[G]}(A(F),\\ZZ[G])\\right\\}\\\\\n= {\\rm det}(e_a\\cdot h_{F\/k}(P_\\bullet, Q_\\bullet))^{-1}\\cdot{\\rm Fit}^0_{\\ZZ[G]}((A^t(F)_{\\rm tf}\/\\langle P_\\bullet\\rangle)^\\vee_{\\rm tor})\\cdot{\\rm Fit}^0_{\\ZZ[G]}((A(F)_{\\rm tf}\/\\langle Q_\\bullet\\rangle)^\\vee_{\\rm tor}) \\end{multline*}\nwhere $\\langle P_\\bullet\\rangle$ and $\\langle Q_\\bullet\\rangle$ denote the $G$-modules that are generated by $P_\\bullet$ and $Q_\\bullet$.\n \\end{lemma}\n\n\\begin{proof} We write $N(P_\\bullet)$ and $N(Q_\\bullet)$ for the quotients of $A^t(F)_{\\rm tf}$ and $A(F)_{\\rm tf}$ by $\\langle P_\\bullet\\rangle$ and $\\langle Q_\\bullet\\rangle$. 
Then, by taking $\\ZZ$-linear duals of the tautological short exact sequence\n\\[ 0 \\to \\langle P_\\bullet\\rangle \\xrightarrow{\\iota_{P_\\bullet}} A^t(F)_{\\rm tf} \\to N(P_\\bullet) \\to 0\\]\none obtains an exact sequence $$A^t(F)^\\ast \\xrightarrow{\\iota_{P_\\bullet}^\\ast} \\langle P_\\bullet\\rangle^\\ast \\to N(P_\\bullet)_{\\rm tor}^\\vee \\to 0$$ and hence an equality\n\\[ \\im({\\bigwedge}^a_{\\ZZ[G]}\\iota_{P_\\bullet}^\\ast) = {\\rm Fit}^0_{\\ZZ[G]}(N(P_\\bullet)_{\\rm tor}^\\vee).\\]\n\nIn the same way one derives an equality\n\\[ \\im({\\bigwedge}^a_{\\ZZ[G]}\\iota_{Q_\\bullet}^\\ast) = {\\rm Fit}^0_{\\ZZ[G]}(N(Q_\\bullet)_{\\rm tor}^\\vee).\\]\n\nSince the maps $e_{a}(\\QQ\\otimes_\\ZZ\\iota_{P_\\bullet}^\\ast)$ and $e_{a}(\\QQ\\otimes_\\ZZ\\iota_{Q_\\bullet}^\\ast)$ are bijective, these equalities imply that the lattice\n\\[ e_a\\cdot\\left\\{(\\wedge_{j=1}^{j=a}\\theta_j)({\\rm ht}^a_{A_{F\/k}}(\\wedge_{i=1}^{i=a}\\phi_i))\\mid \\theta_j\\in A^t(F)^\\ast,\\phi_i\\in A(F)^\\ast\\right\\}\\]\nis equal to the product\n\\begin{multline*} \\left\\{e_a(\\wedge_{j=1}^{j=a}\\theta_j)({\\rm ht}^a_{A_{F\/k}}(e_a(\\wedge_{i=1}^{i=a}\\phi_i)))\\mid \\theta_j\\in \\langle P_\\bullet\\rangle^\\ast,\\phi_i\\in \\langle Q_\\bullet\\rangle^\\ast\\right\\}\\\\\n\\times {\\rm Fit}^0_{\\ZZ[G]}(N(P_\\bullet)_{\\rm tor}^\\vee)\\cdot{\\rm Fit}^0_{\\ZZ[G]}(N(Q_\\bullet)_{\\rm tor}^\\vee). \\end{multline*}\n\nThis implies the claimed result since, writing $P_i^\\ast$ and $Q_j^\\ast$ for the elements of $\\langle P_\\bullet\\rangle^\\ast$ and $\\langle Q_\\bullet\\rangle^\\ast$ that are respectively dual to $P_i$ and $Q_j$, one has\n\\begin{multline*} \\left\\{e_a(\\wedge_{j=1}^{j=a}\\theta_j)({\\rm ht}^a_{A_{F\/k}}(e_a(\\wedge_{i=1}^{i=a}\\phi_i)))\\mid \\theta_j\\in \\langle P_\\bullet\\rangle^\\ast,\\phi_i\\in \\langle Q_\\bullet\\rangle^\\ast\\right\\}\\\\\n= \\ZZ[G]\\cdot (e_a\\wedge_{i=1}^{i=a}P_i^\\ast)({\\rm ht}^a_{A_{F\/k}}(e_a\\wedge_{i=1}^{i=a}Q^\\ast_i))\\end{multline*}\nand\n\\begin{align*} e_a(\\wedge_{i=1}^{i=a}P_i^\\ast)({\\rm ht}^a_{A_{F\/k}}(e_a\\wedge_{i=1}^{i=a}Q^\\ast_i)) =\\, &{\\rm det}\\bigl( (e_aP_i^\\ast(h^{-1}_{A_{F\/k}}(e_a Q^\\ast_j)))_{1\\le i, j\\le a}\\bigr)\\\\\n =\\, &{\\rm det}(e_a\\cdot h_{F\/k}(P_\\bullet, Q_\\bullet))^{-1}.\\end{align*}\nWe note that the last equality is true because the definition of ${\\rm ht}^a_{A_{F\/k}}$ implies that for every $j$ one has\n\\[ h^{-1}_{A_{F\/k}}(e_a Q^\\ast_j) = \\sum_{b=1}^{b=a} ((e_a\\cdot h_{F\/k}(P_\\bullet, Q_\\bullet))^{-1})_{bj}P_b.\\]\n\\end{proof}\n\n\\subsection{Bounds on logarithmic resolvents}\\label{p-adic sec} The result of Lemma \\ref{height pairing interp} means that in many cases the only term in the product expression (\\ref{key product}) that is not explicitly understood is the logarithmic resolvent of the chosen semi-local points.\n\nWe shall now partly address this issue. For subsequent purposes (related to the upcoming article \\cite{dmckwt}) we do not here require $G$ to be abelian.\n\n\n\n\\subsubsection{} We start by deriving an easy consequence of the arguments in Proposition \\ref{lms} and Lemma \\ref{ullom}. For each natural number $i$ we set $\\wp_{F_p}^i:=\\prod_{w'\\in S_F^p}\\wp_{F_{w'}}^i$, where $\\wp_{F_{w'}}$ denotes the maximal ideal in the valuation ring of $F_{w'}$. 
We also set $\\hat A^t(\\wp_{F_p}^i):=\\prod_{w'\\in S_F^p}\\hat A^t_{w'}(\\wp_{F_{w'}}^i)$, where $\\hat A^t_{w'}$ denotes the formal group of $A^t_{\/F_{w'}}$.\n\n\\begin{proposition}\\label{explicit log resolve} If all $p$-adic places are tamely ramified in $F\/k$ and $\\hat A^t(\\wp_{F_p})$ is torsion-free, then there exists an ordered $\\ZZ_p[G]$-basis $x_\\bullet$ of $\\hat A^t(\\wp_{F_p})$ for which one has\n\\[ \\mathcal{LR}^p_{A^t_{F\/k}}(x_\\bullet) = \\bigl(\\tau^\\ast(F\/k)\\cdot \\prod_{v \\in S_k^p}\\varrho_v(F\/k)\\bigr)^d,\\]\nwhere the elements $\\varrho_v(F\/k)$ of $\\zeta(\\QQ[G])^\\times$ are as defined in (\\ref{varrho def}).\n\\end{proposition}\n\n\\begin{proof} Since all $p$-adic places of $k$ are tamely ramified in $F$ Lemma \\ref{ullom} implies that the $\\ZZ_p[G]$-module $\\hat A^t(\\wp^{i}_{F_p})$ is cohomologically-trivial for all $i$. Hence, if $\\hat A^t(\\wp_{F_p})$ is torsion-free, then it is a projective $\\ZZ_p[G]$-module (by \\cite[Th. 8]{cf}) and therefore free of rank $nd$ (since $\\QQ_p\\otimes_{\\ZZ_p}\\hat A^t(\\wp_{F_p})$ is isomorphic to $F^d_p$).\n\nIn this case we fix an ordered basis $x_\\bullet$ of $\\hat A^t(\\wp_{F_p})$ and regard it as a $\\QQ_p^c[G]$-basis of $\\QQ_p^c\\otimes_{\\ZZ_p}\\hat A^t(\\wp_{F_p})$.\n\nWe also regard $\\{(i,\\hat\\sigma): (i,\\sigma) \\in [d]\\times\\Sigma(k)\\}$ as a $\\QQ_p^c[G]$-basis of the direct sum $(\\QQ_p^c\\cdot Y_{F\/k,p})^d$ of $d$ copies of $\\QQ_p^c\\cdot Y_{F\/k,p}$.\n\nThen, with respect to these bases, the matrix\n\\begin{equation}\\label{log resolve matrix} \\bigl(\\sum_{g \\in G} \\hat \\sigma(g^{-1}({\\rm log}_{\\omega_j'}(x_{(j',\\sigma')})))\\cdot g \\bigr)_{(j,\\sigma),(j',\\sigma')}\\end{equation}\nrepresents the composite isomorphism of $\\QQ^c_p[G]$-modules\n\\[ \\mu': \\QQ_p^c\\otimes_{\\ZZ_p}\\hat A^t(\\wp_{F_p}) \\xrightarrow{({\\rm log}_{\\omega_j'})_{j\\in [d]}}\n(\\QQ_p^c\\otimes_\\QQ F)^d \\xrightarrow{(\\mu)_{i \\in [d]}} (\\QQ_p^c\\cdot Y_{F\/k,p})^d,\\]\nwhere $\\mu$ is the isomorphism $\\QQ_p^c\\otimes_\\QQ F \\cong \\QQ_p^c\\cdot Y_{F\/k,p}$ that sends each element $\\lambda\\otimes f$ to $(\\lambda\\cdot \\hat\\sigma(f))_{\\sigma\\in \\Sigma(k)}$. 
This fact implies that\n\\begin{equation}\\label{eq 1}\\delta_{G,p}(\\mathcal{LR}^p_{A^t_{F\/k}}(x_\\bullet)) = [\\hat A^t(\\wp_{F_p}) , Y^d_{F\/k,p}; \\mu'] \\end{equation}\nin $K_0(\\ZZ_p[G],\\QQ^c_p[G])$.\n\nIn addition, since for any large enough integer $i$ the image of $\\hat A^t(\\wp^i_{F_p})$ under each map ${\\rm log}_{\\omega_j'}$ is equal to $\\wp^i_{F,p}$, the `telescoping' argument of Lemma \\ref{ullom} implies that\n\\begin{align}\\label{eq 2} [\\hat A^t(\\wp_{F_p}) , Y^d_{F\/k,p}; \\mu'] = \\, &d\\cdot [\\wp_{F_p} , Y_{F\/k,p}; \\mu]\\\\\n = \\, &d\\cdot [\\mathcal{O}_{F,p}, Y_{F\/k,p}; \\mu] + d\\cdot \\chi_{G,p}\n \\bigl((\\mathcal{O}_{F,p}\/\\wp_{F_p})[0],0\\bigr).\\notag\n \\end{align}\n\nNext we note that if $f_v$ is the absolute residue degree of a $p$-adic place $v$, then the normal basis theorem for\n$\\mathcal{O}_{F_w}\/\\wp_{F_w}$ over the field with $p$ elements implies that there exists a short\nexact sequence of $G_w\/I_w$-modules\n\n\\[ 0 \\to \\ZZ [G_w\/I_w] ^{f_{v}} \\xrightarrow{\\times p} \\ZZ[G_w\/I_w]^{f_{v}} \\to \\mathcal{O}_{F_w}\/\\wp_{F_w} \\to 0.\\]\nBy using these sequences (for each such $v$) one computes that\n\n\\begin{equation*}\\label{eq 3} \\chi_{G,p}\\bigl((\\mathcal{O}_{F,p}\/\\wp_{F_p})[0],0\\bigr) = \\delta_{G,p}\\bigl(\\prod_{v \\in S_k^p}\\varrho_v(F\/k)\\bigr).\\end{equation*}\n\n\nUpon combining this equality with (\\ref{eq 1}) and (\\ref{eq 2}) we deduce that\n\\[ \\delta_{G,p}(\\mathcal{LR}^p_{A^t_{F\/k}}(x_\\bullet)\\cdot (\\prod_{v \\in S_k^p}\\varrho_v(F\/k))^{-d}) = d\\cdot [\\mathcal{O}_{F,p}, Y_{F\/k,p}; \\mu]\\]\nand from here one can deduce the claimed result by using the argument of Proposition \\ref{lms}.\\end{proof}\n\n\n\\begin{remark}\\label{new add}{\\em Assume that all $p$-adic places are at most tamely ramified in $F\/k$, that neither $\\widehat{A^t}(\\wp_{F_p})$ nor $A(F)$ has an element of order $p$ and that $e_{(a)} = 1$ so there exist ordered $a$-tuples $P_\\bullet$ and $Q_\\bullet$ as in Lemma \\ref{height pairing interp}. Then Proposition \\ref{explicit log resolve}, Lemma \\ref{height pairing interp} and Remark \\ref{more explicit rem}(i) combine with Theorem \\ref{big conj} to imply ${\\rm BSD}_p(A_{F\/k})$(iv) predicts that any element in the set\n\\begin{equation*}\\label{explicit ann}\n\\frac{L^{(a)}_{S}(A_{F\/k},1)\\cdot \\bigl(\\tau^\\ast(F\/k)\\cdot \\prod_{v \\in S_k^p}\\varrho_v(F\/k)\\bigr)^d}{\\Omega_A^{F\/k}\\cdot w_{F\/k}^d\\cdot {\\rm det}(h_{F\/k}(P_\\bullet, Q_\\bullet))} \\cdot {\\rm Fit}^0_{\\ZZ[G]}((A^t(F)\/\\langle P_\\bullet\\rangle)^\\vee_{\\rm tor})\\cdot{\\rm Fit}^0_{\\ZZ[G]}((A(F)\/\\langle Q_\\bullet\\rangle)^\\vee_{\\rm tor})\\end{equation*}\nbelongs to ${\\rm Fit}^a_{\\ZZ_{p}[G]}({\\rm Sel}_{p}(A_{F})^\\vee)$ and annihilates $\\sha(A^t_{F})[p^\\infty]$}.\\end{remark}\n\n\\begin{remark}\\label{new add2}{\\em To obtain a variant of the prediction in Remark \\ref{new add} that may in some cases be more amenable to numerical investigation assume that $p$ is unramified in $k$, that all $p$-adic places of $k$ are at most tamely ramified in $F$, that neither $A(F)$ nor $A^t(F)$ has an element of order $p$, that $e_{(a)} = 1$ and that $A$, $F\/k$ and $p$ satisfy the hypotheses (H$_1$)-(H$_5$). 
Then by using Remark \\ref{new remark SC} in place of Theorem \\ref{big conj} the same approach as in Remark \\ref{new add} allows one to show that under these hypotheses the prediction in Remark \\ref{new add} should be true after replacing the terms $L^{(a)}_{S}(A_{F\/k},1)$ and $S_k^p$ that occur in the given displayed expression by $L^{(a)}_{S_{\\rm r}}(A_{F\/k},1)$ and $S_{p,{\\rm r}}$ respectively, where, as in (\\ref{bkcharelement}), we set $S_{\\rm r} := S_{k}^f\\cap S_k^F$ and $S_{p,{\\rm r}}:= S_k^p\\cap S_k^F$.\n}\\end{remark}\n\n\\begin{example}\\label{wuthrich example}{\\em Christian Wuthrich kindly supplied us with the following concrete applications of Remark \\ref{new add}. Set $k = \\QQ$ and $K = \\QQ(\\sqrt{229})$ and write $F$ for the Galois closure of the field $L = \\QQ(\\alpha)$ with $\\alpha^3-4\\alpha+1 = 0$. Then $K \\subset F$ and the group $G := G_{F\/k}$ is dihedral of order six. Let $A$ denote either of the curves 3928b1 (with equation $y^2 = x^3-x^2 + x + 4$) or 5864a1 (with equation $y^2 = x^3-x^2 -24x + 28$). Then ${\\rm rk}(A_\\QQ)= 2$, ${\\rm rk}(A_K) = {\\rm rk}(A_L) = 3$ and ${\\rm rk}(A_F) = 5$ and, since $\\sha_3(A_K)$ vanishes (as can be shown via a computation with Heegner points on the quadratic twist of $A$), these facts combine with \\cite[Cor. 2.10(i)]{bmw0} to imply the $\\ZZ_{3}[G]$-module $A(F)_{3}$ is isomorphic to $\\ZZ_{3}[G](1+\\tau) \\oplus \\ZZ_{3}\\oplus \\ZZ_{3}$, with $\\tau$ the unique non-trivial element in $G_{F\/L}$. In particular, if we set $\\Gamma := G_{F\/K}$, then we can choose a point $P$ that generates over $\\ZZ_{3}[\\Gamma]$ a free direct summand of $A(F)_{3}$. In addition, ${\\rm Fit}^1_{\\ZZ_{3}[\\Gamma]}({\\rm Sel}_{3}(A_{F})^\\vee)$ is contained in\n\\[ {\\rm Fit}^1_{\\ZZ_{3}[\\Gamma]}(\\ZZ_{3}[G](1+\\tau) \\oplus \\ZZ_{3}\\oplus \\ZZ_{3}) = {\\rm Fit}^0_{\\ZZ_{3}[\\Gamma]}(\\ZZ_{3}\\oplus \\ZZ_{3}) = I_{3}(\\Gamma)^2,\\]\nwhere $I_{3}(\\Gamma)$ is the augmentation ideal of $\\ZZ_{3}[\\Gamma]$. Finally, we note that $3$ splits in $K$ and is unramified in $F$ so that for each $3$-adic place $v$ of $K$ the element $\\varrho_v(F\/K)$ is equal to $3$. After taking account of these facts, Remark \\ref{new add} (with $F\/k$ taken to be $F\/K$, $a$ to be $1$ and $P_1 = Q_1$ to be $P$) shows ${\\rm BSD}_3(A_{F\/k})$(iv) predicts that\n\\[ 9\\cdot \\tau^\\ast(F\/K)\\cdot\\frac{L^{(1)}_{S}(A_{F\/K},1)}{\\Omega_A^{F\/k}} = x\\cdot \\sum_{g \\in G}\\langle g(P),P\\rangle_{A_F}\\cdot g^{-1}\\]\nfor an element $x$ of $I_{3}(\\Gamma)^2$ that annihilates the $3$-primary component of $\\sha(A_{F})$. Here we have also used the fact that $w_{F\/K}=1$ because each of the real places $v$ of $K$ has trivial decomposition group in $\\Gamma$ so $\\psi^-_v(1)=1-1=0$ and thus $w_\\psi=1$ for each $\\psi\\in\\widehat\\Gamma$.}\n\\end{example}\n\n\\subsubsection{}It is possible to prove less precise versions of Proposition \\ref{explicit log resolve} without making any hypotheses on ramification and to thereby obtain more explicit versions of the prediction that is made in Theorem \\ref{big conj}.\n\nFor example, if $F_p^\\times$ has no element of order $p$, $\\mathfrak{M}$ is any choice of maximal $\\ZZ_p$-order in $\\QQ_p[G]$ that contains $\\ZZ_p[G]$ and $x$ is any element of $\\zeta(\\ZZ_p[G])$ such that $x\\cdot \\mathfrak{M} \\subseteq \\ZZ_p[G]$, then one can deduce from the result of \\cite[Cor. 
7.8]{bleyburns} that for every element $y$ of the ideal\n\\[ \\zeta(\\ZZ_p[G])\\cdot p^d\\cdot x^{1+2d}((1-e_G)+ |G|\\cdot e_G)\\]\nthere exists an ordered $nd$-tuple $x(y)_\\bullet$ of points in $A^t(F_p)$ for which one has\n\\[ \\mathcal{LR}^p_{A^t_{F\/k}}(x(y)_\\bullet) = y\\cdot \\bigl(\\tau^\\ast(F\/k)\\cdot \\prod_{v \\in S_k^p}\\varrho_v(F\/k)\\bigr)^d\\]\nin $\\zeta(\\QQ^c[G])$.\n\nHowever, we shall not prove this result here both because it is unlikely to be the best possible `bound' that one can give on logarithmic resolvents in terms of Galois-Gauss sums and also because, from the perspective of numerical investigations, the following much easier interpretation of logarithmic resolvents in terms of Galois resolvents is likely to be more helpful.\n\n\\begin{lemma} Let $i_0$ be the least integer with $i_0 > e_{F,p}\/(p-1)$, where $e_{F,p}$ is the maximal absolute ramification degree of any $p$-adic place of $F$. Let $z$ be an element of $F$ that belongs to the $i_0$-th power of every prime ideal of $\\mathcal{O}_F$ above $p$.\n\nThen, for any integral basis $\\{a_i: i \\in [n]\\}$ of $\\mathcal{O}_k$, there exists an ordered $nd$-tuple $x_\\bullet$ of points in $A^t(F_p)$ for which one has\n\\[ \\mathcal{LR}^p_{A^t_{F\/k}}(x_\\bullet) = {\\rm Nrd}_{\\QQ_p^c[G]}\\left( \\bigl(\\sum_{g \\in G} \\hat \\sigma(g^{-1}(z\\cdot a_i))\\cdot g\\bigr)_{\\sigma\\in \\Sigma(k),i\\in [n]}\\right)^d. \\]\n\\end{lemma}\n\n\\begin{proof} The definition of $i_0$ ensures that the formal group logarithm of $A^t$ over $F_p$ gives an isomorphism of $\\hat A^t(\\wp_{F_p}^{i_0})$ with a direct sum $(\\wp_{F_p}^{i_0})^d$ of $d$ copies of $\\wp_{F_p}^{i_0}$ (cf. \\cite[Th. 6.4(b)]{silverman}). Here the individual copies of $\\wp_{F_p}^{i_0}$ in the sum are parametrised by the differentials $\\{\\omega_j': j \\in [d]\\}$ that are used to define ${\\rm log}_{A^t}$.\n\nThe choice of $z$ also implies that $Z := \\{z\\cdot a_i: i \\in [n]\\}$ is a subset of $\\wp_{F_p}^{i_0}$. We may therefore choose a pre-image $x_\\bullet$ in $\\hat A^t(\\wp_{F_p}^{i_0})$ of the ordered $nd$-tuple in $(\\wp_{F_p}^{i_0})^d$ that is obtained by placing a copy of $Z$ in each of the $d$ direct summands.\n\nFor these points $x_\\bullet$ the interpretation of the matrix (\\ref{log resolve matrix}) that is given in the proof of Proposition \\ref{explicit log resolve} shows that it is equal to a $d\\times d$ diagonal block matrix with each diagonal block equal to the matrix\n\\[ \\bigl(\\sum_{g \\in G} \\hat \\sigma(g^{-1}(z\\cdot a_i))\\cdot g\\bigr)_{\\sigma\\in \\Sigma(k),i\\in [n]}.\\]\n\nSince $\\mathcal{LR}^p_{A^t_{F\/k}}(x_\\bullet)$ is defined to be the reduced norm of the matrix (\\ref{log resolve matrix}) the claimed equality is therefore clear.\\end{proof}\n\n\\subsection{The proof of Theorem \\ref{big conj}}\\label{proof of big conj}\n\n\\subsubsection{}We start by proving a technical result that is both necessary for the proof of Theorem \\ref{big conj} and will also be of further use in the sequel.\n\nIn this result we use the terminology of `characteristic elements' from \\cite[Def. 3.1]{bst}.\n\n\\begin{lemma}\\label{modifiedlemma} Fix an ordered subset $x_\\bullet$ of $A^t(F_p)^\\wedge_p$ as in Theorem \\ref{big conj}. Write $X$ for the $\\ZZ_p[G]$-module generated by $x_\\bullet$ and $C_{S,X}$ for the Selmer complex ${\\rm SC}_S(A_{F\/k};X,H_\\infty(A_{F\/k})_p)$ from Definition \\ref{selmerdefinition}. 
Then the following claims are valid.\n\n\\begin{itemize}\n\\item[(i)] The module $H^1(C_{S,X})$ is torsion-free.\n\\item[(ii)] For any finite non-empty set of places $T$ of $k$ that is disjoint from $S$, there exists an exact triangle in $D^{\\rm perf}(\\ZZ_p[G])$ of the form\n\\begin{equation}\\label{modifiedtriangle}\\bigoplus_{v\\in T}R\\Gamma(\\kappa_v,T_{p,F}(A^t)(-1))[-2]\\to C_{S,X}\\stackrel{\\theta}{\\to} C_{S,X,T}\\to\\bigoplus_{v\\in T}R\\Gamma(\\kappa_v,T_{p,F}(A^t)(-1))[-1]\\end{equation}\nin which $C_{S,X,T}$ is acyclic outside degrees one and two and there are canonical identifications of $H^1(C_{S,X,T})$ with $H^1(C_{S,X})$ and of $\\Sel_p(A_F)^\\vee$ with a subquotient of $H^2(C_{S,X,T})$ in such a way that $\\QQ_p\\cdot\\Sel_p(A_F)^\\vee=\\QQ_p\\cdot H^2(C_{S,X,T})$.\n \\item[(iii)] Following claim (ii) we write\n\\[ h^{T}_{A,F}: \\CC_p\\cdot H^1(C_{S,X,T}) \\to \\CC_p\\cdot H^2(C_{S,X,T})\\]\nfor the isomorphism $(\\CC_p\\otimes_{\\ZZ_p}H^2(\\theta))\\circ(\\CC_p\\otimes_{\\RR,j}h_{A,F})$ of $\\CC_p[G]$-modules. Then, if ${\\rm BSD}_p(A_{F\/k})$(iv) is valid, there exists a characteristic element $\\mathcal{L}$ for $(C_{S,X},h_{A,F})$ in $\\CC_p[G]^\\times$ with the property that for any non-negative integer $a$ one has\n\\[ e_a\\cdot\\mathcal{L}^{-1}=\\frac{L^{(a)}_{S}(A_{F\/k},1)}{\\Omega_A^{F\/k}\\cdot w_{F\/k}^d}\\cdot \\mathcal{LR}^p_{A^t_{F\/k}}(x_\\bullet).\\]\n\nIn addition, in this case the element\n\\[ \\mathcal{L}_T := (\\prod_{v \\in T}P_v(A_{F\/k},1)^\\#)^{-1}\\cdot\\mathcal{L}\\]\nis a characteristic element for $(C_{S,X,T},h^{T}_{A,F})$.\n\\end{itemize}\n\\end{lemma}\n\\begin{proof}\n\nSince $p$ is odd there exists a homomorphism of $\\ZZ_p[G]$-modules $\\phi$ from the module $H_\\infty(A_{F\/k})_p = \\bigoplus_{v\\in S_k^\\infty}H^0(k_v,T_{p,F}(A^t))$ to $A^t(F_p)^\\wedge_p$ that sends the ordered $\\ZZ_p[G]$-basis of $H_\\infty(A_{F\/k})_p$ specified at the end of \\S\\ref{gamma section} to $x_\\bullet$.\n\nThen, comparing the triangle (\\ref{can tri}) with the construction of \\cite[Prop. 2.8 (ii)]{bst} immediately implies that $C_{S,X}$ is isomorphic in $D^{\\rm perf}(\\ZZ_p[G])$ to the complex that is denoted in loc. cit. by $C_\\phi(T_{p,F}(A^t))$.\n\nGiven this, claim (i) follows directly from \\cite[Prop. 2.8 (ii)]{bst} and claim (ii) from \\cite[Prop. 2.8 (iii)]{bst} with $C_{S,X,T}:=C_{\\phi,T}(T_{p,F}(A^t))$.\n\nTurning to claim (iii) we note that each place $v$ in $T$ is not $p$-adic, does not ramify in $F\/k$ and is of good reduction for $A$.\n\nEach complex $R\\Gamma(\\kappa_v,T_{p,F}(A^t)(-1))$ is therefore well-defined. 
Since these complexes are acyclic outside degree one, where they have finite cohomology, we may therefore apply Lemma \\ref{fk lemma} to the triangle (\\ref{modifiedtriangle}) to deduce that\n\\begin{align*}\\chi_{G,p}(C_{S,X},h_{A,F})-\\chi_{G,p}(C_{S,X,T},h^{T}_{A,F})= \\, &\\sum_{v\\in T}\\chi_{G,p}(R\\Gamma(\\kappa_v,T_{p,F}(A^t)(-1))[-2],0)\\\\\n=\\, &\\delta_{G,p}({\\det}_{\\QQ_p[G]}(1-\\Phi_v|\\QQ_p\\cdot T_{p,F}(A^t)(-1)))\\\\\n=\\, &\\delta_{G,p}(P_v(A_{F\/k},1)^\\#)\n\\end{align*}\nin $K_0(\\ZZ_p[G],\\CC_p[G])$.\n\nThus if $\\mathcal{L}$ is a characteristic element of $(C_{S,X},h_{A,F})$, then $(\\prod_{v \\in T}P_v(A_{F\/k},1)^\\#)^{-1}\\cdot\\mathcal{L}$ is a characteristic element for $(C_{S,X,T},h^{T}_{A,F})$, as claimed in the final assertion of claim (iii).\n\nIt is thus enough to deduce from ${\\rm BSD}_p(A_{F\/k})$(iv) the existence of a characteristic element $\\mathcal{L}$ of $(C_{S,X},\\CC_p\\otimes_{\\RR}h_{A,F})$ with the required interpolation property.\n\nNow, since $S$ contains all $p$-adic places of $k$, the module $\\mathcal{Q}(\\omega_\\bullet)_{S,p}$ vanishes and the $p$-primary component of the term $\\mu_{S}(A_{F\/k})$ is also trivial.\n\nIn addition, as the validity of {\\rm BSD}$_p(A_{F\/k})$(iv) is independent of the choice of global periods, we can assume firstly that $\\omega_\\bullet$ is the set $\\{ z_i\\otimes \\omega'_j: i \\in [n], j \\in [d]\\}$ fixed in Lemma \\ref{k-theory period} and secondly that the image of $\\mathcal{F}(\\omega_\\bullet)_p$ under the formal group exponential ${\\rm exp}_{A^t,F_p}$ (defined with respect to the differentials $\\{\\omega_j': j \\in [d]\\}$) is contained in $X$.\n\nThen the assumed validity of the equality (\\ref{displayed pj}) in this case combines with the equality in Lemma \\ref{k-theory period} to imply that the element\n\n\\begin{equation}\\label{char el 1} \\frac{L_S^*(A_{F\/k},1)}{{\\rm Nrd}_{\\RR[G]}(\\Omega_{\\omega_\\bullet}(A_{F\/k}))}= \\frac{L_S^*(A_{F\/k},1)}{ \\Omega_A^{F\/k}\\cdot w_{F\/k}^{d}}\\cdot {\\rm det}\\left( \\left( \\sum_{g \\in G} \\hat \\sigma(g^{-1}(z_i))\\cdot g\\right)_{\\sigma\\in \\Sigma(k),i\\in [n]}\\right)^{d}\\end{equation}\nof $\\zeta(\\CC_p[G])^\\times$ is the inverse of a characteristic element of $(C_{S,\\omega_\\bullet},h_{A,F})$.\n\nHere we write $C_{S,\\omega_\\bullet}$ for the Selmer complex ${\\rm SC}_S(A_{F\/k};\\mathcal{X}(p),H_\\infty(A_{F\/k})_p)$ where $\\mathcal{X}$ is the perfect Selmer structure $\\mathcal{X}_S(\\{\\mathcal{A}^t_v\\}_v,\\omega_\\bullet,\\gamma_\\bullet)$ defined in \\S\\ref{perf sel sect}. 
(In addition, the fact that it is the inverse of a characteristic element results from a comparison of our chosen normalisation of non-abelian determinants with that of \\cite[(10)]{bst}, as described in Remark \\ref{comparingdets}.)\n\nIn particular, since $\\mathcal{X}(p)$ is by definition equal to ${\\rm exp}_{A^t,F_p}(\\mathcal{F}(\\omega_\\bullet)_p)$ and hence, by assumption, contained in $X$, a comparison of the definitions of $C_{S,\\omega_\\bullet}$ and $C_{S,X}$ shows that there is an exact triangle\n\\[ C_{S,\\omega_\\bullet} \\to C_{S,X} \\to \\bigl(X\/\\mathcal{X}(p))[-1] \\to C_{S,\\omega_\\bullet}[1]\\]\nin $D^{\\rm perf}(\\ZZ_p[G])$.\n\nSince the product\n\n\\[ \\xi := \\mathcal{LR}^p_{A^t_{F\/k}}(x_\\bullet)\\cdot {\\rm det}\\left( \\left( \\sum_{g \\in G} \\hat \\sigma(g^{-1}(z_i))\\cdot g\\right)_{\\sigma\\in \\Sigma(k),i\\in [n]}\\right)^{-d}\\]\nis equal to the determinant of a matrix that expresses a basis of the free $\\ZZ_p[G]$-module $\\mathcal{X}(p)$ in terms of the basis $x_\\bullet$ of $X$, the above triangle implies that the product\n\\[ \\frac{L_S^*(A_{F\/k},1)}{ \\Omega_A^{F\/k}\\cdot w_{F\/k}^{d}}\\cdot \\mathcal{LR}^p_{A^t_{F\/k}}(x_\\bullet)\\]\nof $\\xi$ and the element (\\ref{char el 1}) is the inverse of a characteristic element for $(C_{S,X},h_{A,F})$.\n\nThe claimed interpolation formula is thus a consequence of the fact that $L_S^{(a)}(A_{F\/k},1)$ is equal to $e_a\\cdot L_S^*(A_{F\/k},1)$. \\end{proof}\n\n\n\\subsubsection{}We are now ready to prove Theorem \\ref{big conj}.\n\nTo do this we will apply the general result of \\cite[Th. 3.10(i)]{bst} to the complex $C_{S,X,T}$, isomorphism $h^{T}_{A,F}$ and characteristic element $\\mathcal{L}_T$ constructed in Lemma \\ref{modifiedlemma}.\n\n In order to do so, we fix an ordered subset $\\Phi:=\\{\\phi_i: i \\in [a]\\}$ of $\\Hom_{\\ZZ_p[G]}(A(F)_p,\\ZZ_p[G])$ of cardinality $a$. We fix a pre-image $\\phi_i'$ of each $\\phi_i$ under the surjective composite homomorphism\n\\[ H^2(C_{S,X})\\to\\Sel_p(A_F)^\\vee\\to \\Hom_{\\ZZ_p[G]}(A(F)_p,\\ZZ_p[G]),\\]\nwhere the first arrow is the canonical map from Proposition \\ref{prop:perfect}(iii) and the second is induced by the canonical short exact sequence\n\\begin{equation}\\label{sha-selmer}\n \\xymatrix{0 \\ar[r] & \\sha(A_F)[p^\\infty]^\\vee \\ar[r] & \\Sel_p(A_F)^\\vee \\ar[r]& \\Hom_{\\ZZ_p}(A(F)_p,\\ZZ_p)\\ar[r] & 0.}\n\\end{equation}\n\nWe set $\\Phi':=\\{\\phi'_i:i \\in [a]\\}$ and consider the image $H^2(\\theta)(\\Phi')$ of $\\Phi'$ in $H^2(C_{S,X,T})$, where $\\theta$ is the morphism that occurs in the triangle (\\ref{modifiedtriangle}) (so that $H^2(\\theta)$ is injective).\n\n\nWe next write $\\iota:H^1(C_{S,X,T})=H^1(C_{S,X})\\to A^t(F)_p$ for the canonical homomorphism in Proposition \\ref{prop:perfect}(iii).\n\nThen, with $\\mathcal{L}_T$ the element specified in Lemma \\ref{modifiedlemma}(iii), a direct comparison of the definitions of $h_{A,F}^{T}$ and ${\\rm ht}_{A_{F\/k}}^{a}$ shows that the `higher special element' that is associated via \\cite[Def. 
3.3]{bst} to the data $(C_{S,X,T},h^{T}_{A,F},\\mathcal{L}_T,H^2(\\theta)(\\Phi'))$ coincides with the pre-image under the bijective map $\\CC_p\\cdot\\bigwedge_{\\ZZ_p[G]}^a\\iota$ of the element\n\\begin{equation}\\label{hse interpret}(\\prod_{v \\in T}P_v(A_{F\/k},1)^\\#)\\cdot \\frac{L^{(a)}_{S}(A_{F\/k},1)}{\\Omega_A^{F\/k}\\cdot w_{F\/k}^d}\\cdot \\mathcal{LR}^p_{A^t_{F\/k}}(x_\\bullet)\\cdot {\\rm ht}^{a}_{A_{F\/k}}(\\wedge_{i=1}^{i=a}\\phi_i).\\end{equation}\n(Here we have also used the fact that, since ${\\rm BSD}(A_{F\/k})$(ii) is assumed to be valid, the idempotent $e_a$ defined here coincides with the idempotent denoted $e_a$ in \\cite[\\S3.1]{bst} for the complex $C_{S,X,T}$.)\n\nTo proceed we fix an ordered subset $\\{\\theta_j: j \\in [a]\\}$ of $\\Hom_{\\ZZ_p[G]}(A^t(F)_p,\\ZZ_p[G])$ and identify it with its image under the injective map\n\\[ \\Hom_{\\ZZ_p[G]}(A^t(F)_p,\\ZZ_p[G]) \\to \\Hom_{\\ZZ_p[G]}(H^1(C_{S,X,T}),\\ZZ_p[G])\\]\ninduced by $\\iota$. We also set $\\mathfrak{A} := \\ZZ_p[G]e_{(a)}$ and $M := \\mathfrak{A}\\otimes_{\\ZZ_p[G]}H^2(C_{S,X,T})$.\n\nThen the above interpretation of the higher special element in terms of the product (\\ref{hse interpret}) combines with the general result of \\cite[Th. 3.10(i)]{bst} to imply that for any element $\\alpha$ of $\\ZZ_p[G]\\cap\\mathfrak{A}$ and any element $y$ of $\\ZZ_p[G]$ that annihilates ${\\rm Ext}^2_{\\mathfrak{A}}(M,\\mathfrak{A})$\nthe product\n\\[\\alpha \\cdot y^a \\cdot(\\prod_{v \\in T}P_v(A_{F\/k},1)^\\#)\\cdot \\frac{L^{(a)}_{S}(A_{F\/k},1)}{\\Omega_A^{F\/k}\\cdot w_{F\/k}^d}\\cdot \\mathcal{LR}^p_{A^t_{F\/k}}(x_\\bullet)\\cdot (\\wedge_{j=1}^{j=a}\\theta_j)({\\rm ht}^{a}_{A_{F\/k}}(\\wedge_{i=1}^{i=a}\\phi_i))\\]\nboth belongs to ${\\rm Fit}^a_{\\ZZ_p[G]}(\\Sel_p(A_F)^\\vee)$ and annihilates $(\\Sel_p(A_F)^\\vee)_{\\rm tor}$.\n\nIn addition, the exact sequence (\\ref{sha-selmer}) identifies $(\\Sel_p(A_F)^\\vee)_{\\rm tor}$ with $\\sha(A_F)[p^\\infty]^\\vee$ and the Cassels-Tate pairing identifies $\\sha(A_F)[p^\\infty]^\\vee$ with $\\sha(A^t_F)[p^\\infty]$.\n\nTo deduce the result of Theorem \\ref{big conj} from here, it is therefore enough to show that $\\alpha^2$ annihilates ${\\rm Ext}^2_{\\mathfrak{A}}(M,\\mathfrak{A})$.\n\nTo do this we use the existence of a convergent first quadrant cohomological spectral sequence\n\\[ E_2^{pq} = {\\rm Ext}_{\\mathfrak{A}}^p(M,{\\rm Ext}^q_{\\ZZ_p[G]}(\\mathfrak{A},\\mathfrak{A})) \\Rightarrow {\\rm Ext}^{p+q}_{\\ZZ_p[G]}(M,\\mathfrak{A})\\]\n(cf. \\cite[Exer. 
5.6.3]{weibel}).\n\nIn particular, since the long exact sequence of low degree terms of this spectral sequence gives an exact sequence of $\\ZZ_p[G]$-modules\n\\[ \\Hom_{\\ZZ_p[G]}(M,{\\rm Ext}^1_{\\ZZ_p[G]}(\\mathfrak{A},\\mathfrak{A})) \\to {\\rm Ext}_{\\mathfrak{A}}^2(M,\\mathfrak{A}) \\to {\\rm Ext}^{2}_{\\ZZ_p[G]}(M,\\mathfrak{A}),\\]\nwe find that it is enough to show that the element $\\alpha$ annihilates both ${\\rm Ext}^1_{\\ZZ_p[G]}(\\mathfrak{A},\\mathfrak{A})$ and ${\\rm Ext}^{2}_{\\ZZ_p[G]}(M,\\mathfrak{A})$.\n\nTo verify this we write $\\mathfrak{A}^\\dagger$ for the ideal $\\{x \\in \\ZZ_p[G]: x\\cdot e_{(a)} = 0\\}$ so that there is a natural short exact sequence of $\\ZZ_p[G]$-modules $0 \\to \\mathfrak{A}^\\dagger \\to \\ZZ_p[G] \\to \\mathfrak{A} \\to 0$.\n\nThen by applying the exact functor ${\\rm Ext}^\\bullet_{\\ZZ_p[G]}(-,\\mathfrak{A})$ to this sequence one obtains a surjective homomorphism\n\\[ \\Hom_{\\ZZ_p[G]}(\\mathfrak{A}^\\dagger,\\mathfrak{A}) \\twoheadrightarrow {\\rm Ext}^1_{\\ZZ_p[G]}(\\mathfrak{A},\\mathfrak{A}).\\]\n\nIn addition, since $\\ZZ_p[G]$ is Gorenstein, by applying the exact functor ${\\rm Ext}^{\\bullet}_{\\ZZ_p[G]}(M,-)$ to the above sequence one finds that there is a natural isomorphism\n\\[ {\\rm Ext}^{3}_{\\ZZ_p[G]}(M,\\mathfrak{A}^\\dagger) \\cong {\\rm Ext}^{2}_{\\ZZ_p[G]}(M,\\mathfrak{A}).\\]\n\nTo complete the proof of Theorem \\ref{big conj} it is thus enough to note that the left hand modules in both of the last two displays are annihilated by $\\alpha$ since the definition of $\\mathfrak{A}^\\dagger$ implies immediately that $\\alpha\\cdot \\mathfrak{A}^\\dagger = 0$.\n\n\\begin{remark}\\label{omit T}{\\em If $A(F)$ does not contain an element of order $p$, then \\cite[Th. 3.10(i)]{bst} can be directly applied to the complex $C_{S,X}$ rather than to the auxiliary complex $C_{S,X,T}$. 
This shows that, in any such case, the prediction in Theorem \\ref{big conj} should remain true if the term $\\prod_{v \\in T}P_v(A_{F\/k},1)^\\#$ is omitted.}\n\\end{remark}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Abelian congruence relations and height pairings}\\label{mrsconjecturesection}\n\n\n\n\n\n\n\nIn this section we continue to investigate the $p$-adic congruence relations between the leading coefficients of Hasse-Weil-Artin $L$-series that are encoded by ${\\rm BSD}_p(A_{F\/k})$(iv) in the case that $F\/k$ is abelian.\n\nMore concretely in \\S\\ref{mtchd} we will show that, beyond the integrality properties that are discussed in Theorem \\ref{big conj} and Remark \\ref{integrality rk}, elements of the form (\\ref{key product}) can be expected to satisfy additional congruence relations in the integral augmentation filtration that involve Mazur-Tate regulators.\n\n\nThen in \\S \\ref{cycliccongssection} we specialise to the case of cyclic extensions in order to make these additional congruence relations fully explicit.\n\nIn particular, in this way we render the equality ${\\rm BSD}_p(A_{F\/k})$(iv) amenable to (numerical) verification even in cases in which it incorporates a thorough-going mixture of both archimedean phenomena and delicate $p$-adic congruences.\n\nFinally, in \\S\\ref{dihedral} we explain how these results extend to certain families of non-abelian extensions.\n\nThroughout this section, just as in \\S\\ref{8.1}, we give ourselves a fixed odd prime $p$ and isomorphism of fields $j:\\CC\\cong\\CC_p$ (explicit mention of which we usually omit), a finite set $S$ of places of $k$ with\n\\[ S_k^\\infty\\cup S_k^p\\cup S_k^F \\cup S_k^A\\subseteq S\\]\nand a fixed ordered $k$-basis $\\{\\omega_j'\\}_{j\\in[d]}$ of $H^0(A^t,\\Omega^1_{A^t})$ with associated classical period $\\Omega_A^{F\/k}$.\n\nExcept in \\S\\ref{dihedral} we shall always assume in this section that $F\/k$ is abelian. In addition, we shall always assume that $p$ is chosen so that neither $A(F)$ nor $A^t(F)$ has a point of order $p$.\n\n\n\n\\subsection{A Mazur-Tate conjecture for higher derivatives}\\label{mtchd}\n\nIn this section we formulate a Mazur-Tate type conjecture for higher derivatives of Hasse-Weil-Artin $L$-series. We then show that, under the hypotheses listed in \\S \\ref{tmc}, this conjecture would follow from the validity of ${\\rm BSD}(A_{F\/k})$.\n\n\\subsubsection{}\n\nWe first quickly recall the construction of canonical height pairings of Mazur and Tate \\cite{mt0}.\n\nTo do this we fix a subgroup $J$ of $G$ and set $E := F^J$.\nWe recall that the subgroups of `locally-normed elements' of $A(E)$ and $A^t(E)$ respectively are defined by setting \\begin{equation}\\label{localnorms}U_{F\/E}:=\\bigcap_v \\bigl(A(E)\\cap N_{F_w\/E_v}(A(F_w))\\bigr),\\,\\,\\,\\,U^t_{F\/E}:=\\bigcap_v \\bigl(A^t(E)\\cap N_{F_w\/E_v}(A^t(F_w))\\bigr).\\end{equation}\nHere each intersection runs over all (finite) primes $v$ of $E$ and $w$ is a fixed prime of $F$ above $v$. In addition, $N_{F_w\/E_v}$ denotes the norm map of $F_w\/E_v$ and each intersection of the form $A(E)\\cap N_{F_w\/E_v}(A(F_w))$, resp. $A^t(E)\\cap N_{F_w\/E_v}(A^t(F_w))$, takes place inside $A(E_v)$, resp. $A^t(E_v)$.\n\nWe recall from Lemma \\ref{useful prel}(i) that each of the expressions displayed in (\\ref{localnorms}) is in general a finite intersection of subgroups of $A(E)$, resp. 
of $A^t(E)$, and that the subgroups $U_{F\/E}$ and $U^t_{F\/E}$ have finite index in $A(E)$ and $A^t(E)$ respectively.\n\nWe note for later use that, whenever $A$, $F\/k$ and $p$ satisfy the hypotheses (H$_1$)-(H$_5$) listed in \\S \\ref{tmc}, then Proposition \\ref{explicitbkprop}(ii) (together with the duality of these hypotheses) implies that $$U_{F\/E,p}:=\\ZZ_p\\otimes_{\\ZZ}U_{F\/E}\\,\\,\\text{ and }\\,\\,U^t_{F\/E,p}:=\\ZZ_p\\otimes_{\\ZZ}U^t_{F\/E}$$ are equal to $A(E)_p$ and to $A^t(E)_p$ respectively (for every given subgroup $J$ of $G$).\n\nIn general, Mazur and Tate \\cite{mt0} construct, by using the theory of biextensions, a canonical height pairing \\begin{equation}\\label{tanpairing}\\langle\\,,\\rangle^{\\rm MT}_{F\/E}:U^t_{F\/E}\\otimes_\\ZZ U_{F\/E}\\to J.\\end{equation}\nThis pairing will be a key ingredient of our conjectural congruence relations. To formulate our conjecture we must first describe how to make reasonable choices of points on which to evaluate the Mazur-Tate pairing.\n\n\\begin{definition}\\label{separablepair}{\\em Fix a subgroup $J$ of $G$ and set $E:=F^J$. We define a `$p$-separable choice of points of $A$ for $F\/E$ of rank $(a,a')$' (with $a'\\geq a\\geq 0$) to be a pair $(\\mathcal{Y},\\mathcal{Y}')$ chosen as follows.\n\nLet $\\mathcal{Y}= \\{y_i: i \\in [a]\\}$ be any ordered finite subset of $A(F)_p$ that generates a free $\\ZZ_p[G]$-direct-summand $Y$ of $A(F)_p$ of rank equal to $|\\mathcal{Y}|=a$. Then $\\Tr_J(Y)=Y^J$ is a $\\ZZ_p[G\/J]$-direct-summand of $A(E)_p$\n\nWe then let $$\\mathcal{Y}'=\\Tr_J(\\mathcal{Y})\\cup\\{w_i:a\n>> I_p(G)\\otimes_{\\ZZ_p[G]}P @> \\subseteq >> P @> \\Tr_{G} >>\n P^G @> >> 0\\\\\n@. @VV {\\rm id}\\otimes_{\\ZZ_p[G]}\\Theta V @VV \\Theta V @VV\\Theta^{G} V\\\\\n0 @>\n>> I_p(G)\\otimes_{\\ZZ_p[G]}P @> \\subseteq >> P @> \\Tr_{G} >>\n P^G @> >> 0\\\\\n@. @VV ({\\rm id}\\otimes_{\\ZZ_p[G]}\\pi)_{G} V \\\\ @.\nI_p(G)\/I_p(G)^2\\otimes_{\\ZZ_p} (A(F)_p^*)_{G}.\n\\end{CD}\\]\n\nIn addition, the equality (\\ref{matrixPsi}) implies that\n$$\\iota(\\phi^{-1}(P^t_{0,k}))=\\iota(\\sum_{l\\in[m_0]} \\Psi_{(0,l),(0,k)} P^t_{0,l})=\\sum_{l\\in[m_0]} \\Psi_{(0,l),(0,k)}\\cdot\\Tr_G(b_{0,l})$$\nand so, since $$P^*_{0,l}(P_{0,j})=\\begin{cases}1,\\,\\,\\,\\,\\,\\,\\,\\,l=j,\\\\\n0,\\,\\,\\,\\,\\,\\,\\,\\,l\\neq j,\\end{cases}$$\nwe can finally use the above diagram to compute that\n\\begin{align*}-\\langle P^t_{0,k},P_{0,j}\\rangle_{F\/k}^{\\rm MT}=&\\left(({\\rm id}\\otimes_{\\ZZ_p[G]}\\pi)_{G}\\left(\\Theta(\\Psi_{(0,j),(0,k)}\\cdot b_{0,j})\\right)\\right)(P_{0,j})\\\\\n=&\\left(({\\rm id}\\otimes_{\\ZZ_p[G]}\\pi)_{G}\\left((\\sigma-1)\\Psi_{(0,j),(0,k)}\\cdot b_{0,j}\\right)\\right)(P_{0,j})\\\\\n=&\\left(\\left((\\sigma-1)+I_p(G)^2\\right)\\otimes\\Psi_{(0,j),(0,k)}\\cdot P^*_{0,j}\\right)(P_{0,j})\\\\\n=&\\Psi_{(0,j),(0,k)}\\cdot(\\sigma-1)+I_p(G)^2.\n\\end{align*}\nHere the first equality is given by Theorem \\ref{thecomptheorem}. 
This verifies the equalities (\\ref{PsicomputesMT}) and thus completes the proof of Theorem \\ref{mequals1}.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Dihedral congruence relations}\\label{dihedral} With a view to extending the classes of extensions $F\/k$ for which the equality of ${\\rm BSD}_p(A_{F\/k})$(iv) can be made fully explicit we consider the case that $F\/k$ is generalized dihedral of order $2p^n$.\n\nWe recall that this means the Sylow $p$-subgroup $P$ of $G$ is abelian and of index two and that the conjugation action of any lift to $G$ of the generator of $G\/P$ inverts elements of $P$. We write $K$ for the unique quadratic extension of $k$ in $F$.\n\nIn this setting we shall show that, in certain situations, the validity of ${\\rm BSD}_p(A_{F\/k})$(iv) can be checked by verifying congruences relative to the abelian extension $F\/K$.\n\nIn order to state the precise result we fix a finite Galois extension $E$ of $\\QQ$ in $\\bc$ that is large enough to ensure that, with $\\mathcal{O}$ denoting the ring of algebraic integers of $E$, there exists for each character $\\psi$ of $\\widehat{G}$ a finitely generated $\\mathcal{O}[G]$-lattice that is free over $\\mathcal{O}$ and spans a $\\bc[G]$-module of character $\\psi$.\n\nFor each $\\psi$ in $\\widehat{G}$ we recall the non-zero complex number $\\mathcal{L}^\\ast(A,\\psi)$ defined in \\S\\ref{explicit ec section}.\n\n\\begin{proposition}\\label{dihedral prop} Let $F\/k$ be generalized dihedral of degree $2p^n$ as above. Assume that $\\sha(A_F)$ is finite and that no place of $k$ at which $A$ has bad reduction is ramified in $F$. Assume also that $p$ satisfies the conditions (H$_1$)-(H$_4$) listed in \\S\\ref{tmc} and that neither $A(K)$ nor $A^t(K)$ has a point of order $p$.\n\nThen the equality of ${\\rm BSD}_p(A_{F\/k})$(iv) is valid if the following three conditions are satisfied.\n\n\\begin{itemize}\n\\item[(i)] For every $\\psi$ in $\\widehat{G}$ and $\\omega$ in $G_\\QQ$, one has $\\mathcal{L}^\\ast(A,\\omega\\circ \\psi) = \\omega(\\mathcal{L}^\\ast(A,\\psi))$.\n\\item[(ii)] For every $\\psi$ in $\\widehat{G}$ and every prime ideal $\\mathfrak{p}$ of $\\mathcal{O}$ that divides $p$, the explicit formula for $\\mathcal{L}^\\ast(A, \\psi)\\cdot \\mathcal{O}_\\mathfrak{p}$ that is given in Proposition \\ref{ref deligne-gross}(ii) is valid.\n\\item[(iii)] The equality of ${\\rm BSD}_p(A_{F\/K})$(iv) is valid.\n\\end{itemize}\n\\end{proposition}\n\n\\begin{proof} Since $F\/K$ is an extension of $p$-power degree, the assumption that neither $A(K)$ nor $A^t(K)$ has a point of order $p$ implies that neither $A(F)$ nor $A^t(F)$ has a point of order $p$.\n\nHence, in this case, the given assumptions imply that the data $A$, $F\/k$ and $p$ satisfy all of the hypotheses of Proposition \\ref{ref deligne-gross}.\n\nIn particular, if we write $\\xi$ for the difference between the left and right hand sides of the equality in Theorem \\ref{bk explicit}, then the argument of Proposition \\ref{ref deligne-gross} shows that the assumed validity of the given conditions (i) and (ii) implies that $\\xi$ belongs to\n $K_0(\\ZZ_p[G],\\QQ_p[G])$ and also to the kernel of the homomorphism $\\rho_\\mathfrak{p}^\\psi$ for every $\\psi$ in $\\widehat{G}$ and every prime ideal $\\mathfrak{p}$ of $\\mathcal{O}$ that divides $p$.\n\nThese facts combine with the general result of \\cite[Th. 4.1]{ewt} to imply that $\\xi$ belongs to the finite group $K_0(\\ZZ_p[G],\\QQ_p[G])_{\\rm tor}$.\n\nWe next recall from \\cite[Lem. 
5.12(ii)]{bmw} that, since $G$ is assumed to be dihedral, the natural restriction map ${\\rm res}^G_P:K_0(\\ZZ_p[G],\\QQ_p[G])_{\\rm tor} \\to K_0(\\ZZ_p[P],\\QQ_p[P])$ is injective.\n\nIt follows that $\\xi$ vanishes, and hence by Theorem \\ref{bk explicit} that ${\\rm BSD}_p(A_{F\/k})$(iv) is valid, if the element ${\\rm res}^G_P(\\xi)$ vanishes.\n\nTo complete the proof, it is therefore enough to note that the functorial behaviour of the conjecture ${\\rm BSD}(A_{F\/k})$ under change of extension, as described in Remark \\ref{consistency remark}(ii) (and justified via Remark \\ref{consistency}(ii)), implies that ${\\rm res}^G_P(\\xi)$ vanishes if and only if the equality of ${\\rm BSD}_p(A_{F\/K})$(iv) is valid. \\end{proof}\n\n\\begin{remark}{\\em If $P$ is cyclic, then Proposition \\ref{dihedral prop} shows that in certain situations the validity of ${\\rm BSD}_p(A_{F\/k})$(iv) for the non-abelian extension $F\/k$ can be checked by verifying the relevant cases of the refined Deligne-Gross Conjecture formula in Proposition \\ref{ref deligne-gross}(ii) together with explicit congruences for the cyclic extension $F\/K$ of the form that are discussed in \\S\\ref{cycliccongssection}. In addition, if $P$ is cyclic, then the main result of Yakovlev in \\cite{yakovlev2} can be used to show that if the groups $\\sha(A_{F'})[p^\\infty]$ vanish for all proper subfields of $F$ that contain $K$, then the $\\ZZ_p[G]$-module $A(F)_p$ is a `trivial source module' and so has a very explicit structure. }\\end{remark}\n\n\\begin{example}{\\em For the examples described in Example \\ref{wuthrich example} the field $F$ is a dihedral extension of $k = \\QQ$ of degree $6$ and both of the given elliptic curves $A$ satisfy all of the hypotheses that are necessary to apply Proposition \\ref{dihedral prop} (in the case $p=3$ and $n=1$). In this way one finds that the validity of ${\\rm BSD}_3(A_{F\/\\QQ})$(iv) implies, and if $\\sha(A_F)[3^\\infty]$ vanishes is equivalent to, the validity of the relevant cases of the refined Deligne-Gross Conjecture together with the validity of an explicit congruence of the form described in Theorem \\ref{mequals1} for the cyclic extension $F\/K$ (and with $m_1= 1$ and $m_0 = 2$). Unfortunately, however, since $F\/\\QQ$ is of degree $6$ it seems that for the given curves $A$ the latter congruences are at present beyond the range of numerical investigation via the methods that have been used to verify the cases discussed in Example \\ref{bleyexamples}. }\\end{example}\n\n\n\\subsection{The proof of Theorem \\ref{rbsdimpliesmt}}\\label{mrsproof}\n\nWe fix a subgroup $J$ of $G$ and set $E := F^J$ and a maximal subset $x_\\bullet$ of $A^t(F_p)^\\wedge_p$ that is linearly independent over $\\ZZ_p[G]$.\n\nTo study the relationship between ${\\rm BSD}(A_{F\/k})$ and Conjecture \\ref{mrsconjecture} we set $X:=\\langle x_\\bullet\\rangle_{\\ZZ_p[G]}$ and consider both of the complexes $C_{S,X}:={\\rm SC}_S(A_{F\/k};X,H_\\infty(A_{F\/k})_p)$ and $C_{S,X,J}:={\\rm SC}_S(A_{E\/k};X^J,H_\\infty(A_{E\/k})_p)$.\n\nThen the definition of $C_{S,X}$ as the mapping fibre of the morphism (\\ref{fibre morphism}), the well-known properties of \\'etale cohomology under Galois descent and the fact that $X$ is a free $\\ZZ_p[G]$-module (see also (\\ref{global descent}), (\\ref{local descent}) and the argument that precedes them) imply that the object $$(C_{S,X})_J:=\\ZZ_p[G\/J]\\otimes_{\\ZZ_p[G]}^{\\mathbb{L}}C_{S,X}$$ of $D^{\\rm perf}(\\ZZ_p[G\/J])$ is isomorphic to $C_{S,X,J}$. 
This fact combines with \\cite[Lem. 3.33]{bst} to give canonical identifications\n$$H^1(C_{S,X})^J=H^1((C_{S,X})_J)=H^1(C_{S,X,J})$$ and\n$$H^2(C_{S,X})_J=H^2((C_{S,X})_J)=H^2(C_{S,X,J}).$$\n\nIn addition, our assumption that neither $A(F)$ nor $A^t(F)$ has a point of order $p$ combines with Proposition \\ref{prop:perfect} to imply that,\nin the terminology of \\cite{bst}, the complex $C_{S,X}$ is a `strictly admissible' complex of $\\ZZ_p[G]$-modules. The approach of \\S 3.4.1 in loc. cit. therefore gives a `Bockstein homomorphism' of $\\ZZ_p[G\/J]$-modules \\begin{equation}\\label{htc}{\\rm ht}^{C_{S,X}}:H^1(C_{S,X})^J\\to \\mathcal{I}_p(J)\/\\mathcal{I}_p(J)^2\\otimes_{\\ZZ_p[G\/J]}H^2(C_{S,X,J}).\\end{equation}\n\nFor any $p$-separable choice of points $(\\mathcal{Y},\\mathcal{Y}')$ of $A$ for $F\/E$ of rank $(a,a')$\nand each index $i$ with $a< i \\le a'$ we then use the dual point $w_i^*$ to construct a composite homomorphism of $\\ZZ_p[G\/J]$-modules \\begin{equation}\\label{projection}H^2(C_{S,X,J})\\to\\Sel_p(A_E)^\\vee\\to A(E)_p^*\\to (Y')^*\\to\\ZZ_p[G\/J].\\end{equation} Here the first arrow is the canonical homomorphism of Proposition \\ref{prop:perfect}(iii), the second arrow is the canonical homomorphism occurring in (\\ref{sha-selmer}), the third arrow is the natural restriction map and the fourth arrow maps an element of $(Y')^*$ to its coefficient at the basis element $w_i^*$.\n\nUpon composing ${\\rm ht}^{C_{S,X}}$ with each map (\\ref{projection}) we thereby obtain, for each index $i$ with $a 1$.\n\nIn this case, if we set ${\\rm T}_{c,c'} := \\sum_{g \\in G_{K_c\/K_{c'}}}g$, then the norm-compatibility of Heegner points implies that\n\\begin{equation}\\label{nc heegner} {\\rm T}_{c,c'}(y_c) = a_{c,c'}\\cdot y_{c'}.\\end{equation}\n\nThis implies that $e_{\\psi}(y_c) = (h_{c'}\/h_{c})a_{c,c'}\\cdot e_\\phi(y_{c'})$ and $e_{\\check\\psi}(y_c) = (h_{c'}\/h_{c})a_{c,c'}\\cdot e_{\\check\\phi}(y_{c'})$\nand hence\n\\begin{align*} h_c\\langle e_{\\psi}(y_c),e_{\\check\\psi}(y_c)\\rangle _{K_c} =\\, &h_c(h_{c'}\/h_{c})^2(a_{c,c'})^2\\cdot \\langle e_{\\phi}(y_{c'}),e_{\\check\\phi}(y_{c'})\\rangle _{K_c}\\\\\n=\\, &h_c(h_{c'}\/h_{c})(a_{c,c'})^2\\cdot \\langle e_{\\phi}(y_{c'}),e_{\\check\\phi}(y_{c'})\\rangle _{K_{c'}}\\\\\n=\\, &(a_{c,c'})^2\\cdot h_{c'}\\langle e_{\\phi}(y_{c'}),e_{\\check\\phi}(y_{c'})\\rangle _{K_{c'}},\\end{align*}\nwhere the second equality follows from the general result of \\cite[Chap. VIII, Lem. 5.10]{silverman}. This proves claim (i).\n\nClaim (ii) follows directly from claim (i) and the fact that the terms $L'(A_{K},\\check{\\psi},1)$ and $(\\Omega^+\\Omega^-C)\/(c_\\psi\\sqrt{|d_K|})$ that occur in (\\ref{e-def}) do not change if one replaces $\\psi$ by $\\phi$.\n\nTo prove claim (iii) we set $\\epsilon_{A,\\phi}:= \\epsilon_{A,\\psi}$. Then, since both $e_\\phi(z_{c'}) = e_\\phi(y_{c'})$ and $e_{\\check\\phi}(z'_{c'}) = e_{\\check\\phi}(y_{c'})$, an explicit comparison of the equalities (\\ref{e-def}) and (\\ref{u-def}) shows that it suffices to show that (\\ref{u-def}) remains valid if one replaces $c$ by $c'$ and $\\psi$ by $\\phi$.\n\nWe now set $d:=c\/c'$. 
By a routine computation using (\\ref{nc heegner}) one then finds that\n\\begin{equation}\\label{trace heegner} {\\rm T}_{c,c'}(z_c) = \\prod_{\\ell \\mid d} (\\ell + 1 - a_\\ell)\\cdot z_{c'} \\,\\,\\,\\text{ and }\\,\\,\\,\n{\\rm T}_{c,c'}(z'_c) = \\prod_{\\ell \\mid d} (\\ell + 1 + a_\\ell)\\cdot z'_{c'}.\\end{equation}\n\nIn addition, for each prime divisor $\\ell$ of $c$ one has\n\\begin{equation}\\label{trace heegner2} |A(\\kappa_{(\\ell)})| = (\\ell + 1 - a_\\ell)(\\ell + 1 + a_\\ell)\\end{equation}\nand so\n\n\\begin{align*} \\frac{h_c}{c^2}\\langle e_{\\psi}(z_c),e_{\\check\\psi}(z'_c)\\rangle _{K_c} &= \\frac{h_c}{c^2}(\\prod_{\\ell \\mid d}(\\ell + 1 - a_\\ell)(\\ell + 1 + a_\\ell))\\langle e_{\\phi}(z_{c'}),e_{\\check\\phi}(z'_{c'})\\rangle _{K_c}\\\\\n&= \\bigl(\\prod_{\\ell \\mid d}\\frac{|A(\\kappa_{(\\ell)})|}{\\ell^2}\\bigr)\\frac{h_{c'}}{(c')^2}\\langle e_{\\phi}(z_{c'}),e_{\\check\\phi}(z'_{c'})\\rangle _{K_{c'}},\\end{align*}\nwhere the second equality uses \\cite[Chap. VIII, Lem. 5.10]{silverman} and the fact $c = c'\\cdot\\prod_{\\ell\\mid d}\\ell$.\n\n\nThe right hand side of (\\ref{u-def}) therefore changes by a factor of $(\\prod_{\\ell \\mid d}|A(\\kappa_{(\\ell)})|\/\\ell^{2})^{-1}$ if one replaces $c$ by $c'$ and $\\psi$ by $\\phi$.\n\nTo show that the left hand side of (\\ref{u-def}) changes by the same factor we note that\n\\begin{align*} L'_{c'}(A_{K},\\check{\\phi},1)L'_{c}(A_{K},\\check{\\psi},1)^{-1} =\\, &L'_{c'}(A_{K},\\check{\\psi},1)L'_{c}(A_{K},\\check{\\psi},1)^{-1}\\\\ =\\, &\\prod_{\\ell\\mid d}P_\\ell(A_{K},\\check{\\psi},1)^{-1}\\end{align*}\nwhere $P_\\ell(A_{K},\\check{\\psi},t)$ denotes the Euler factor at (the unique prime of $K$ above) $\\ell$ of the $\\psi$-twist of $A$.\n\nNow $K_c$ is a dihedral extension of $\\QQ$ and so any prime $\\ell$ that is inert in $K$ must split completely in the maximal subextension of $K_c$ in which it is unramified. In particular, for each prime divisor $\\ell$ of $d$ this implies that $P_\\ell(A_{K},\\check{\\psi},t)$ coincides with the Euler factor $P_\\ell(A_{K},t)$ at $\\ell$ of $A_{K}$ and hence that\n\\[ P_\\ell(A_{K},\\check{\\psi},1) = P_\\ell(A_{K},1) = \\frac{|A(\\kappa_{(\\ell)})|}{{\\rm N}_{K\/\\QQ}(\\ell)} = \\frac{|A(\\kappa_{(\\ell)})|}{\\ell^2},\\]\nas required.\n\\end{proof}\n\n\\subsubsection{}\\label{bs conj section}\n\nIf $c=1$, then for each character $\\psi$ in $\\widehat{G_c}$ one has $c_\\psi = 1$ and the results of Gross and Zagier in \\cite[see, in particular, \\S I, (6.5) and the discussion on p. 310]{GZ} imply directly that $\\epsilon_{A,c,\\psi}=1$.\n\nIn addition, for $c > 1$ the work of Zhang in \\cite{zhang01, zhang} implies for each $\\psi$ in $\\widehat{G_c}$ a formula for the algebraic number $\\epsilon_{A,c,\\psi}$.\n\nHowever, as observed by Bradshaw and Stein in \\cite[\\S2]{BS}, this formula is difficult to make explicit and is discussed in the literature in several mutually inconsistent ways.\n\nIn particular, it is explained in loc. cit. that the earlier articles of Hayashi \\cite{hayashi} and Jetchev, Lauter and Stein \\cite{JLS} together contain three distinct formulas for the elements $\\epsilon_{A,c,\\psi}$ that are mutually inconsistent and all apparently incorrect.\n\nIn an attempt to clarify this issue, in \\cite[Conj. 
6]{BS} Bradshaw and Stein conjecture that for every non-trivial character $\\psi$ in $\\widehat{G_c^+}$ one should have\n\\begin{equation}\\label{bs conj} \\epsilon_{A,c,\\psi} = 1,\\end{equation}\nand Zhang has asserted that the validity of this conjecture can indeed be deduced from his results in \\cite{zhang} (see, in particular, \\cite[Rem. 7]{BS}).\n\nHowever, if $c > 1$, then Lemma \\ref{independence}(ii) implies that $\\epsilon_{A,c,\\psi}$ is not always equal to $\\epsilon_{A,c_\\psi,\\psi}$ and hence that the conjectural equalities (\\ref{bs conj}) are in general mutually compatible only if one restricts to characters $\\psi$ with $c_\\psi = c$.\n\nFor further comments in this regard see Remark \\ref{bs conj rem} below.\n\n\\subsection{Heegner points and refined BSD} In this section we interpret the complex numbers $\\epsilon_{A,\\psi}$ defined above in terms of our refined Birch and Swinnerton-Dyer Conjecture.\n\n\n\nWe define an element of $\\CC[G_c]$ by setting\n\\[ \\epsilon_{A,c} := \\sum_{\\psi\\in \\widehat{G_c}}\\epsilon_{A,\\psi}\\cdot e_\\psi.\\]\nLemma \\ref{independence}(iii) combines with the properties (\\ref{stark ec}) to imply that $\\epsilon_{A,c}$ belongs to $\\QQ[G_c]$.\n\nWe also define an element of $\\QQ[G_c]^\\times$ by setting\n\\[ u_{K,c}:= (-1)^{n(c)}\\sum_{\\psi\\in \\widehat{G_c}}(-1)^{n(c_\\psi)}\\cdot e_\\psi,\\]\nwhere $n(d)$ denotes the number of rational prime divisors of a natural number $d$.\n\n\\begin{theorem}\\label{h-rbsd} Let $F$ be an abelian extension of $K$ of conductor $c$ and set $G := G_{F\/K}$. Fix an odd prime $p$ and assume that all of the following conditions are satisfied:\n\\begin{itemize}\n\\item[$\\bullet$] the data $A$, $F\/K$ and $p$ satisfy the hypotheses (H$_1$)-(H$_6$) listed in \\S\\ref{tmc}.\n\\item[$\\bullet$] $A(F)$ has no point of order $p$.\n\\item[$\\bullet$] The trace to $K$ of $y_1$ is non-zero.\n\\item[$\\bullet$] $p$ is unramified in $K$.\n\\end{itemize}\nSet $z_{F} := {\\rm Tr}_{K_c\/F}(z_c)$ and $z'_{F} := {\\rm Tr}_{K_c\/F}(z'_c)$. Then the following claims are valid.\n\n\\begin{itemize}\n\\item[(i)]\nIf ${\\rm BSD}_p(A_{F\/K})$(iv) is valid, then every element of\n\\[ {\\rm Fit}^0_{\\ZZ_p[G]}\\left( \\bigl(A(F)_p\/\\langle z_F\\rangle\\bigr)^\\vee\\right)\\cdot {\\rm Fit}^0_{\\ZZ_p[G]}\\left(\\bigl( A(F)_p\/\\langle z'_F\\rangle\\bigr)^\\vee\\right)\\cdot C\\cdot u_{K,c}\\cdot\\epsilon_{A,c} \\]\nbelongs to ${\\rm Fit}^1_{\\ZZ_p[G]}({\\rm Sel}_p(A_{F})^\\vee)$ and annihilates $\\sha(A_{F})[p^\\infty]$.\n\n\\item[(ii)] Assume that $F\/K$ is of $p$-power degree and that $p$ does not divide the trace to $K$ of $y_1$. Then ${\\rm BSD}_p(A_{F\/K})$(iv) is valid if and only if one has $${\\rm Fit}^0_{\\ZZ_p[G]}(\\sha(A_{F})[p^\\infty]) = \\ZZ_p[G]\\cdot C\\cdot u_{K,c}\\cdot \\epsilon_{A,c}.$$\n\\end{itemize}\n\\end{theorem}\n\n\n\\begin{proof} Since the extension $F\/K$ is tamely ramified, we shall derive claim (i) as a consequence of the observation in Remark \\ref{new add2}.\n\n\n\nWe first note that the assumed non-vanishing of the trace to $K$ of $y_1=z_1$ combines with the trace compatibilities in (\\ref{trace heegner}) to imply that the elements $e_\\psi(z_c)$ and $e_\\psi(z'_c)$ are non-zero for all $\\psi$ in $\\widehat{G}$.\n\nTaken in conjunction with (\\ref{u-def}), this fact implies directly that each function $L_{S_{\\rm r}}(A,\\check{\\psi},z)$ vanishes to order one at $z=1$, where as in Remark \\ref{new add2} we have set $S_{\\rm r}=S_k^f\\cap S_k^F$. 
It also combines with the main result of Bertolini and Darmon in \\cite{BD} to imply that the $\\ZZ_p[G]$-modules generated by $z_c$ and $z_c'$ each have finite index in $A(K_c)$. This implies, in particular, that the idempotent $e_{(1)}$ is equal to $1$.\n\n\nSince every prime divisor of $c$ is inert in $K$ and then splits completely in the maximal subextension of $K_c$ in which it is unramified, the conductor of each character $\\psi$ in $\\widehat{G_c}$ is a divisor $c_\\psi$ of $c$ and the unramified characteristic $u_\\psi$ defined in \\S\\ref{mod GGS section} is equal to $(-1)^{n(c) + n(c_\\psi)}$.\n\nBy using \\cite[(21)]{bmw} one can then compute that for every $\\psi$ in $\\widehat{G_c}$ one has\n\\begin{align}\\label{gauss sums_eq}\n\\tau^\\ast \\bigl(\\QQ,\\psi\\bigr)\\cdot w_\\psi^{-1}=\\, &u_\\psi\\cdot \\tau \\bigl(\\QQ,\\psi\\bigr)\\cdot w_\\psi^{-1}\n\\\\ =\\, &u_\\psi\\sqrt{\\vert d_{K}\\vert} \\sqrt{{\\rm N}c_\\psi}\\notag\\\\\n =\\, &(-1)^{n(c) + n(c_\\psi)}c_\\psi\\sqrt{\\vert d_{K}\\vert}.\\notag\\end{align}\n\nIn addition, for each $\\psi$ in $\\widehat{G}$ one has\n\\[ \\Omega_A^\\psi= \\Omega^+\\Omega^-\\]\nand\n\\begin{align*} e_\\psi\\cdot h_{F\/K}(z_F,z_F') =\\, &\\sum_{g \\in G}\\langle g(z_F),z_F'\\rangle_{F}\\cdot \\psi(g)^{-1}e_\\psi\\\\\n=\\, & |G|\\langle e_\\psi(z_F),z_F'\\rangle_{F}\\cdot e_\\psi\\\\\n=\\, & |G|\\langle e_\\psi(z_F),e_{\\check\\psi}(z_F')\\rangle_{F} \\cdot e_\\psi\\\\\n=\\, & |G|(|G|\/h_c)\\langle e_\\psi(z_F),e_{\\check\\psi}(z_F')\\rangle_{K_c} \\cdot e_\\psi\\\\\n=\\, & h_c\\cdot\\langle e_\\psi(z_c),e_{\\check\\psi}(z_c')\\rangle_{K_c} \\cdot e_\\psi,\\end{align*}\nwhere in the last equality $\\psi$ and $\\check\\psi$ are regarded as characters of $G_c$.\n\nSetting $S_{p,{\\rm r}}=S_{\\rm r}\\cap S_k^p$ one may also explicitly compute, for $\\psi\\in\\widehat{G}$, that $m_\\psi:=\\prod_{v\\in S_{p,{\\rm r}}}\\varrho_{v,\\psi}$ is equal to $p^2$ if $p$ divides $c$ but not $c_\\psi$ and is equal to $1$ otherwise. We use this explicit description to extend the definition of $m_\\psi$ to all characters $\\psi\\in\\widehat{G_c}$.\n\nThese facts combine with (\\ref{u-def}) to imply that for any $\\psi\\in\\widehat{G}$ one has\n\n\\begin{align}\\label{explicit lt}&\\left(\\frac{L^{(1)}_{S_{\\rm r}}(A_{F\/K},1)\\cdot\\tau^*(F\/K)\\cdot\\prod_{v\\in S_{p,{\\rm r}}}\\varrho_{v}(F\/k)}{\\Omega_A^{F\/K}\\cdot w_{F\/k}\\cdot h_{F\/K}(z_F,z_F')}\\right)_\\psi\\\\= \\, &\\frac{L'_{c}(A_K,\\check{\\psi},1)\\cdot\\tau^*(\\QQ,\\psi)\\cdot m_\\psi}{\\Omega_A^\\psi\\cdot w_\\psi\\cdot h_c\\cdot\\langle e_\\psi(z_c),e_{\\check\\psi}(z_c')\\rangle_{K_c}}\\notag\\\\\n = \\, & \\frac{L'_{c}(A_{K},\\check{\\psi},1)(-1)^{n(c)+ n(c_\\psi)}c_\\psi\\sqrt{|d_K|}}{\\Omega^+\\Omega^-\\cdot h_c\\cdot\\langle e_\\psi(z_c),e_{\\check\\psi}(z_c')\\rangle_{K_c}} \\cdot m_\\psi\\notag\\\\\n = \\, & (-1)^{n(c)+ n(c_\\psi)}\\epsilon_{A,\\psi}\\cdot C \\cdot (c_\\psi\/c)^2m_\\psi \\notag\\\\% \\cdot h_c \\langle e_{\\chi}(z_c),e_{\\check\\chi}(z'_c)\\rangle_{K_c}\\notag\\\\\n = \\, & (-1)^{n(c)}(-1)^{n(c_\\psi)}\\epsilon_{A,\\psi}\\cdot C \\cdot (c_\\psi\/c)^2 m_\\psi. \\notag\n \\end{align}\n\nTo deduce claim (i) from Remark \\ref{new add2} it is thus sufficient to show that the sum\n\\begin{equation}\\label{unit sum} \\sum_{\\psi\\in\\widehat{G_c}}(c_\\psi\/c)^2m_\\psi\\cdot e_\\psi\\end{equation}\nbelongs to $\\ZZ_p[G_c]^\\times$. 
This fact follows from the result of Lemma \\ref{bley lemma} with $A=G_c$ and $i=2$ (so that $n_\\psi^i=m_\\psi$) and, for each positive divisor $d$ of $c$, with the subgroup $H_d$ of $G_c$ specified to be $G_{K_c\/K_d}$. Indeed, this choice of subgroups satisfies the assumption (ii) of Lemma \\ref{bley lemma} because (\\ref{explicit iso}) implies that $|G_{K_c\/K_d}|$ is equal to $\\prod_{\\ell\\mid (c\/d)}(\\ell+1)$.\n\n\nTo prove claim (ii) it suffices to show that, under the given hypotheses, the equality in Theorem \\ref{bk explicit} is valid if and only if one has ${\\rm Fit}^0_{\\ZZ_p[G]}(\\sha(A_{F})[p^\\infty]) = \\ZZ_p[G]\\cdot C\\cdot u_{K,c}\\cdot\\epsilon_{A,c}.$\n\nNow, in Theorem \\ref{bk explicit}, the set $S_{p,{\\rm w}}$ is empty since $F$ is a tamely ramified extension of $k =K$ and the set $S_{p,{\\rm u}}^\\ast$ is empty since we are assuming that $p$ is unramified in $K$.\n\nWe next note that $z_1 = z'_1 = y_1$ and hence that (\\ref{trace heegner}) implies\n\\[ {\\rm Tr}_{F\/K}(z_F) = {\\rm Tr}_{K_c\/K}(z_c) = \\mu_c\\cdot{\\rm Tr}_{K_1\/K}(z_1) = \\mu_c\\cdot {\\rm Tr}_{K_1\/K}(y_1)\\]\nwith $\\mu_c = \\prod_{\\ell \\mid c} (\\ell + 1 - a_\\ell)$ and, similarly, that\n\\[ {\\rm Tr}_{F\/K}(z'_F) = \\mu_c'\\cdot {\\rm Tr}_{K_1\/K}(y_1)\\]\nwith $\\mu'_c = \\prod_{\\ell \\mid c} (\\ell + 1 +a_\\ell)$.\n\nIn particular, since (\\ref{trace heegner2}) implies that $\\mu_c\\cdot \\mu'_c = \\prod_{\\ell\\mid c}|A(\\kappa_{(\\ell)})|$, our assumption that the hypotheses (H$_3$) and (H$_4$) hold for $F\/K$ means that $\\mu_c\\cdot \\mu'_c$ is not divisible by $p$. This fact in turn combines with our assumption that ${\\rm Tr}_{K_1\/K}(y_1)$ is not divisible by $p$ in $A(K)$ to imply that neither ${\\rm Tr}_{F\/K}(z_F)$ nor ${\\rm Tr}_{F\/K}(z'_F)$ is divisible by $p$ in $A(K)_p$.\n\n Since our hypotheses imply that $A(K)_p = A(F)_p^{G}$ is a free $\\ZZ_p$-module of rank one, it follows that ${\\rm Tr}_{F\/K}(z_F)$ and ${\\rm Tr}_{F\/K}(z'_F)$ are both $\\ZZ_p$-generators of $A(F)_p^G$, and hence, by Nakayama's Lemma, that $A(F)_p$ is itself a free rank one $\\ZZ_p[G]$-module that is generated by both $z_F$ and $z'_F$.\n\nTaken in conjunction with the explicit descriptions of cohomology given in (\\ref{bksc cohom}) (that are valid under the present hypotheses), these facts imply that the Euler characteristic that occurs in Theorem \\ref{bk explicit} can be computed as\n\\[ \\chi_{G,p}({\\rm SC}_p(A_{F\/k}),h_{A,F}) = {\\rm Fit}^0_{\\ZZ_p[G]}(\\sha(A_F)[p^\\infty])\\cdot h_{F\/k}(z_F,z_F'),\\]\nwhere we have identified $K_0(\\ZZ_p[G],\\CC_p[G])$ with the multiplicative group of invertible $\\ZZ_p[G]$-lattices in $\\CC_p[G]$ (as in Remark \\ref{comparingdets}).\n\nWhen combined with the equality (\\ref{explicit lt}) these facts imply that the product $$\\mathcal{L}^*_{A,F\/K}\\cdot h_{F\/k}(z_F,z_F')^{-1},$$ where $\\mathcal{L}^*_{A,F\/K}$ is the leading term element defined in (\\ref{bkcharelement}), is equal to the projection to $\\ZZ_p[G]$ of $C\\cdot u_{K,c}\\cdot\\epsilon_{A,c}$ multiplied by the sum (\\ref{unit sum}).\n\nClaim (ii) is therefore a consequence of Theorem \\ref{bk explicit} and the fact that, as already observed above, the sum (\\ref{unit sum}) belongs to $\\ZZ_p[G_c]^\\times$.\n\\end{proof}\n\n\n\n \n\n\n\n\n\n\n\n\\begin{remark}\\label{bs conj rem}{\\em If $p$ is prime to all factors in $C$, then the hypotheses of Theorem \\ref{h-rbsd}(ii) combine with an argument of Kolyvagin to imply $\\sha(A\/K)[p^\\infty]$ vanishes (cf. \\cite[Prop. 2.1]{gross_koly}). 
This fact combines with the projectivity of $A(F)^\\ast$ to imply $\\sha(A\/F)[p^\\infty]$ vanishes and hence, via Theorem \\ref{h-rbsd}(ii), that ${\\rm BSD}_p(A_{F\/K})$(iv) is valid if and only if the product $u_{K,c}\\cdot\\epsilon_{A,c}$ projects to a unit of $\\ZZ_p[G]$. In the case that $F\/K$ is unramified, this observation was used by Wuthrich and the present authors to prove the main result of \\cite{bmw}. In the general case, it is consistent with an affirmative answer to the question of whether for every $\\psi$ in $\\widehat{G}$ one should always have\n\\[ \\epsilon_{A,\\psi} = (-1)^{n(c_\\psi)}?\\]\nWe observe that such an equality would, if valid, constitute a functorially well-behaved, and consistent, version of the conjecture of Bradshaw and Stein that was discussed in \\S\\ref{bs conj section}.}\\end{remark}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\n\\textit{Chandra} X-ray observatory has found strong X-ray emission from large scale jets of \nmany radio loud quasars. Some of them, most notorious, PKS 0637-752 \n\\citep{schwartz2000,chartes2000} and 3C 273 \\citep{sambruna2001,marshall2001}, \nare hard to explain \nby synchrotron radiation from a high energy extension of radio-optical \nemitting electrons, \nsince the observed X-ray flux is far above the extension from radio-optical \nspectra and since the X-ray spectrum is harder than the optical one.\nThe most conventional synchrotron self-Compton (SSC) model requires \na very small magnetic field strength,\nwhich means a large departure from equi-partition between \nmagnetic field and relativistic electrons with an enormous jet power.\nThe latter has been regarded unlikely. \nIt is then considered that inverse Compton (IC) scattering of cosmic microwave \nbackground (CMB) photons may explain the X-ray emission provided that the jet is \nrelativistic and makes a small angle with the line of \nsight \\citep{tavecchio2000,cgc2001}.\n\\deleted{In this model, however, the intrinsic jet length becomes as large as Mpcs, \nrarely seen in the parent population of radio galaxies. }\nBoth SSC and IC\/CMB models predict that high energy extension of the \nX-ray spectrum reaches GeV energy range which can be detected with \\textit{Fermi} LAT \n\\citep{georganopoulos2006}. \nRecently, \\textit{Fermi} LAT observations have been reported to put upper limits of \nthe $\\gamma$-ray flux for the jets of 3C 273 \\citep{mg2014}\nand PKS 0637-752 \\citep{meyer2015}. \nThey are an order of magnitude lower than the SSC and IC\/CMB predictions \nso that these models are incompatible with observations. \n\nIn this situation, several alternative models should be considered. \nWithin the leptonic scenario, a separate population of electrons from \nthose emitting radio-optical photons, extending up to around 100 TeV is considered. \nThose electrons emit X-ray photons by synchrotron radiation. \nThe existence of such an electron \npopulation is rather ad-hoc although not impossible. \nIts origin is usually considered a separate particle acceleration process \nfrom that for the radio-optical emitting population.\nHowever, it is hard to imagine such an efficient acceleration mechanism, \nsince the high energy end of radio-optical \nemitting electrons is determined by the balance of acceleration and radiative cooling. 
\nActually, for PKS 0637-752, radio, optical, and X-ray emissions are spatially coincident and \nconcentrated in a few bright knots, the size of which is an order of kpc, \nand the broadband spectra imply a break around $10^{12}$-$10^{13}$ Hz which \nis naturally due to radiative cooling across the source.\n\nAlternatively, relativistic hadrons may be responsible for the X-ray emission.\n\\cite{bg2016} and earlier \\cite{aharonian2002} considered proton synchrotron emission \nassuming magnetic field stronger than 10 mG and a large energy density of \naccelerated protons up to more than 10 PeV. \nAlthough they argued that the proton and Poynting powers are modest, it is due to \ntheir adopted time scale. They assumed that the confinement time is as long as \n$10^7$-$10^8$ years,\nwhich is a few orders of magnitude longer than the light \ncrossing time of $10^3$-$10^4$ years. \nSince the jet does not involve a large inertia nor a large confining pressure, \nwe regard that the relevant time scale should be similar to the latter. \nThen, the required proton and Poynting powers become \nan order of $L_p \\approx 10^{50}$ erg s$^{-1}$ \nand $L_\\mathrm{Poy} \\approx 10^{49}$ erg s$^{-1}$, respectively.\nAt the same time, the energy density of relativistic electrons becomes\nabout six orders of magnitude smaller than that of the Poynting power. \n\nRelativistic protons also contribute to X-ray and $\\gamma$-ray emission \nthrough the production of high energy electrons and positrons by\nBethe-Heitler and photo-pion processes. \nThe latter processes were considered before the \\textit{Chandra} era by \\cite{mkb91}\nfor a possible mechanism of the X-ray emission from hot spots of radio galaxies.\n\\cite{aharonian2002} also examined these processes for the knot A1 of 3C 273 and \nconcluded that they are too inefficient.\nBut, the efficiency of these processes depends on the size of the knot as well as \nthe infrared photon spectrum.\n\\added{In fact, the energy density of mm and sub-mm radiation \nin \\cite{aharonian2002} is smaller than our model presented below.}\nThe proton models have been discussed for more compact emission regions of blazars and \nBethe-Heitler process can contribute to the same order, depending on the \nsoft photon spectral shape \\citep{pm2015}.\nThus, in this paper, we consider both Bethe-Heitler and photo-pion processes \nand examine if these processes can explain the X-ray emission from PKS 0762-752.\n\nIn Section \\ref{sec:estimate} we make a rough estimate of physical quantities \nof the X-ray emitting large scale jet of PKS 0637-752, whose redshift is 0.651. \nIn Section \\ref{sec:model} we formulate the problem, \nand in Section \\ref{sec:results} numerical results are presented. \nIn Section \\ref{sec:conclusion} we draw conclusions.\n \n\n\\section{Rough Estimate} \\label{sec:estimate}\n\nWe first make a rough estimate of the physical quantities in order to \ncapture the essence of the problem. For simplicity, \nwe assume a single uniform sphere of radius $R=R_\\mathrm{kpc}$ kpc for the emission region\nand ignore effects of relativistic beaming and redshift for the time being. \nAlthough the emission region is divided into a few knots in reality, \nwe here treat a combined emission region.\n\n\nObserved spectra at radio through optical frequencies suggest that \nthe synchrotron emission from primary electrons has a peak at infrared band \naround $10^{12}$ Hz with a luminosity $L_\\mathrm{syn}$ about \n$3\\times 10^{44}$ erg s$^{-1}$. 
\nThus, energy density of synchrotron photons $u_\\mathrm{syn}$ is about \n\\begin{equation}\n u_\\mathrm{syn}=\\frac{3L_\\mathrm{syn}}{4\\pi R^2c}\n \\approx 3 \\times 10^{-10}R_\\mathrm{kpc}^{-2} \\,\\, \\text{erg} \\,\\, \\text{cm}^{-3} .\n\\end{equation}\nThe number density of photons at radio frequencies may be approximated by \n\\begin{equation}\n \\nu n_\\nu\\approx 10^6 \\left(\\frac{\\nu}{10^{10} \\, \\text{Hz}}\\right)^{-0.75}\n R_\\mathrm{kpc}^{-2} \\,\\, \\text{cm}^{-3} \n\\end{equation}\nand that at optical frequencies \n\\begin{equation}\n \\nu n_\\nu\\approx 10^{2}\\left(\\frac{\\nu}{10^{14} \\, \\text{Hz}}\\right)^{-1.25}\n R_\\mathrm{kpc}^{-2} \\,\\, \\text{cm}^{-3} .\n\\end{equation}\nFor the magnetic field strength of $B=B_\\mathrm{mG}$ mG, \nthe energy density of magnetic field is \n\\begin{equation}\n u_\\mathrm{mag}=4 \\times 10^{-8} B_\\mathrm{mG}^2 \\,\\, \\text{erg} \\,\\, \\text{cm}^{-3} ,\n\\end{equation}\nand the Poynting power is estimated as \n\\begin{equation}\n L_\\mathrm{Poy}=\\pi R^2 u_\\mathrm{mag}c \n \\approx 4\\times 10^{46}B_\\mathrm{mG}^{2}R_\\mathrm{kpc}^{2} \n \\,\\, \\text{erg} \\,\\, \\text{s}^{-1} .\n\\end{equation}\n\nThe Lorentz factor of electrons ranges from \n\\replaced{$\\gamma_{e , \\mathrm{min}}\\approx 3 \\times 10^3B_\\mathrm{mG}^{-0.5}$}\n{$\\gamma_{e , \\mathrm{min}}\\approx 2 \\times 10^3B_\\mathrm{mG}^{-0.5}\n (\\nu_\\mathrm{min}\/10^{10} \\, \\mathrm{Hz})^{0.5}$}\nto $\\gamma_{e, \\mathrm{max}}\\approx 10^6B_\\mathrm{mG}^{-0.5}$\nwith a broken power law spectrum. \nThe power law index of electrons is tentatively taken as 2.5 at low energies \nand 3.5 at high energies in accordance with the above photon spectra. \n\\deleted{The energy density of electrons is governed by the low energy end}\n\\added{The peak luminosity of $3 \\times 10^{44}$ erg s$^{-1}$ at $10^{12}$ Hz\nis emitted by primary electrons with \n$\\gamma_\\mathrm{br} \\sim 2 \\times 10^{4} B_\\mathrm{mG}^{-0.5}$ below which \nthe number spectrum is given by $n_e(\\gamma_e) = K_e \\gamma_e^{-2.5}$.\nOn the other hand, $L_\\mathrm{syn}$ is given by \n$\\sim (4 \\pi R^3\/3) c \\sigma_\\mathrm{T} u_\\mathrm{mag} \\gamma_\\mathrm{br}^2 \nn_e(\\gamma_\\mathrm{br}) \\gamma_\\mathrm{br}$,\nwhere $\\sigma_\\mathrm{T}$ is the Thomson cross section.\nFrom these relations we write $K_e$ in terms of $\\gamma_\\mathrm{br}$, $R$, and $B$.\nThe electron energy density is now given by\n$u_e \\sim 2 m_e c^2 K_e \\gamma_\\mathrm{min}^{-0.5}$ \n}\nand estimated as \n\\begin{equation}\n u_e \\approx 8 \\times 10^{-10} B_\\mathrm{mG}^{-1.5} R_\\mathrm{kpc}^{-3}\n \\left(\\frac{\\nu_\\mathrm{min}}{10^{10} \\mathrm{Hz} }\\right )^{-0.25} \n \\,\\, \\text{erg} \\,\\, \\text{cm}^{-3} . \n \\label{eq:ue}\n\\end{equation}\nIt may seem that \\replaced{inverse Compton}{SSC} luminosity can be large,\nif the magnetic field strength is smaller than 0.1 mG for a typical source size of 1 kpc. \nHowever, in this case, most of the inverse Compton \nemission is produced in the MeV-GeV range. 
\nTo reproduce the observed X-ray flux, magnetic field needs to be as small as 0.01 mG.\nIn this case, the kinetic power of electrons given by\n\\begin{equation}\n L_e= \\frac{4\\pi R^3u_e}{3}\\frac{c}{3R}\n \\approx 3 \\times 10^{44} B_\\mathrm{mG}^{-1.5} R_\\mathrm{kpc}^{-1}\n \\left(\\frac{\\nu_\\mathrm{min}}{10^{10} \\text{Hz} }\\right)^{-0.25}\n \\,\\, \\text{erg} \\,\\, \\text{s}^{-1} \n\\end{equation}\nwould become very large, \n\\added{i.e., $L_e \\sim 3 \\times 10^{47}$ erg s$^{-1}$ for $B = 0.01$ mG\nand $\\nu_\\mathrm{min} = 10$ GHz.}\nHere, we take the escape time of electrons as $3R\/c$. \nHistorically, for this reason, the beamed IC\/CMB model was proposed, \nbut as noted in Section \\ref{sec:intro}, this model is now regarded unlikely. \nThe minimum power for explaining the radio-optical flux \nis realized at $B_\\mathrm{mG}\\approx 0.2$ for $R_\\mathrm{kpc}=1$\nwith $L_\\mathrm{Poy} \\approx L_e\\approx 2 \\times 10^{45} \\, \\text{erg} \\,\\, \\text{s}^{-1}$.\n\n\n\n\\added{Theoretically,} \nthe break energy of the electron energy distribution, \\added{$\\gamma_b$,} \nis determined by the \nbalance between synchrotron cooling and escape; if we equate the cooling time \nwith escape time, we obtain\n\\begin{equation}\n \\gamma_b \\approx 3 \\times 10^3B_\\mathrm{mG}^{-2}R_\\mathrm{kpc}^{-1} .\n\\end{equation}\nIf the break corresponds to the break of radio-optical spectrum at $10^{12}$ Hz, \nwe obtain the field strength of around 0.2 mG for $R=1 \\, \\text{kpc}$.\nThese considerations lead to $B_\\mathrm{mG}\\approx 0.1-0.3$ as an appropriate choice.\n \nThe maximum possible Lorentz factor of electrons is estimated by equating the \ncooling time with the \ngyrotime and given by \n\\begin{equation}\n \\gamma_{e, \\mathrm{lim}} \\approx 10^9 B_\\mathrm{mG}^{-1\/2} .\n\\end{equation}\nThus, in principle it is possible to obtain $\\gamma_e$ \nas large as $10^8$ with which electrons emit synchrotron X-rays. \nHowever, since the observed X-ray spectrum is much flatter than the optical spectrum, \nsuch a high energy population should be separate from radio-optical emitting one and\nthe acceleration mechanism should be very efficient and distinct. \nAlternatively, such electrons may be supplied from photo-hadronic processes.\nIt should be noted that AGN jets are composed of protons and electrons\/positrons and \nthe inertia is likely to be dominated by protons \\added{\\citep{uchiyama2005}}, \nwhile the existence of electron-positron pairs is also suggested\nby various analysis of observations. \nProton acceleration \\added{may} also naturally \\deleted{takes} \\added{take} place \nand in principle the maximum energy of \nprotons can be as large as $10^{20}$ eV for 1 mG field and 1 kpc size. \n\nTwo photo-hadronic processes can provide secondary high energy electrons\/positrons, i.e., \nphoto-pion production process and Bethe-Heitler process. \nThe former is through strong interaction with the cross section of about \n$3\\times 10^{-28} \\,\\, \\mathrm{cm}^2$ \nand the threshold energy of\n\\begin{equation}\n \\gamma_{p, \\mathrm{th}} \\approx m_\\pi c^2 \\epsilon_\\mathrm{soft}^{-1}\n = 3\\times 10^{12} \\left(\\frac{\\nu_\\mathrm{soft}}{10^{10} \\mathrm{Hz}}\\right)^{-1},\n\\end{equation}\nwhere $\\epsilon_\\mathrm{soft}=h \\nu_\\mathrm{soft}$ is the energy of a target photon. \nThus, for $\\gamma_p=10^{10}$, the energy of target photons should be larger than \n$3\\times 10^{12}$ Hz\nwith the number density about $10^5 \\,\\, \\text{cm}^{-3}$ for $R_\\mathrm{kpc}=1$. 
\nThus, a proton interaction probability is around 0.1 for rectilinear propagation. \nCharged pions decay to produce electrons and positrons, while neutral pions decay into \ntwo $\\gamma$-rays, which interact with soft photons to produce electron-positron pairs. \n\\added{About 5 \\% of the inelastic energy goes into electron\/positrons.\nThis is because pion mass is about 15 \\% of proton mass, so that about 15 \\% of\nproton energy goes to pions near the threshold.\nThis energy is further distributed to 4 leptons almost equally,\n\\cite[e.g.,][]{ka2008,dm2009}.}\n\\deleted{Considering that about 5 \\% of the inelastic energy goes into electron\/positrons, }\n\\added{Considering this,}\nwe estimate 0.5 \\% of the proton power can be used to produce electrons \nand positrons with the Lorentz factor of around $2.5 \\times 10^{11}$, \nwhich subsequently radiate synchrotron radiation below the TeV energy \nregion for $B=1 \\,\\text{mG}$. \nWhile photons with energy higher than TeV is optically thick to \nphoton-photon pair production, most synchrotron photons are emitted below TeV and \npair cascade process does not much develop. \nSince these electrons\/positrons rapidly cool to make \nthe energy distribution of a power law with an index of $-2$, \nthe resultant photon energy spectrum is a power law with an index of $-0.5$\nand the X-ray luminosity is four orders of magnitudes smaller than the TeV luminosity. \nThus, if we would explain the X-ray observations with this mechanism, \nthe predicted GeV luminosity becomes around \n$3 \\times 10^{47} \\, \\text{erg} \\,\\, \\text{s}^{-1}$, \nwhich exceeds the \\textit{Fermi} LAT upper limit. \nRequired proton power is uncomfortably large \namounting to $3 \\times 10^{51} \\, \\, \\text{erg} \\,\\, \\text{s}^{-1}$.\nHigher energy protons exacerbate the problem and lower energy protons \nhave a negligibly small interaction probability. \n\n\nBethe-Heitler process has a larger cross section of \n$3\\times 10^{-27} \\, \\text{cm}^2$ and the lower threshold energy of \n\\begin{equation}\n \\gamma_{p, \\mathrm{th}}= m_e c^2\\epsilon_\\mathrm{soft}^{-1}\n = 3\\times 10^{10} \\left(\\frac{\\nu_\\mathrm{soft}}{10^{10} \\, \\mathrm{Hz}}\\right)^{-1} ,\n\\end{equation}\nbut with a lower efficiency of energy transfer to electrons and positrons.\nFor $\\gamma_p =10^{10}$, target photon energy is larger than $3\\times 10^{10}$ Hz \nwith the number density $5\\times 10^5 \\,\\, \\text{cm}^{-3}$ for $R_\\mathrm{kpc}=1$, \nand on average a proton produces 5 electron-positron pairs before it escapes from the region.\nTaking the efficiency of 0.001, 0.5\\% of the proton power can be used to \npair production, roughly the same order as the photo-pion production. \nIn this case, however, the injection Lorentz factor of electrons and positrons is around \n\\deleted{$10^{10}$} \\added{$\\gamma_e \\sim \\gamma_p = 10^{10}$} \nand the synchrotron frequency is peaked at 100MeV for $B=1$ mG. \nSince the resultant synchrotron spectrum becomes a power law with an index of $-0.5$,\nthe X-ray luminosity is about two orders of magnitudes lower than the 100 MeV \nluminosity. \nThe required proton power is around $10^{49} \\,\\, \\text{erg} \\, \\, \\text{s}^{-1}$, \nwhich is very large but not inconceivable considering the Poynting power is \nan order of $4 \\times 10^{46} \\,\\, \\text{erg} \\,\\, \\text{s}^{-1}$. 
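As a rough cross-check of the numbers quoted above (a sketch assuming rectilinear propagation over a path length $R = 1$ kpc $\\approx 3\\times 10^{21}$ cm, with the target photon densities and cross sections given in this section), the expected number of interactions per proton is\n\\begin{equation}\n n_\\mathrm{soft}\\,\\sigma\\, R \\approx 10^{5}\\times 3\\times 10^{-28}\\times 3\\times 10^{21} \\approx 0.1\n\\end{equation}\nfor the photo-pion process, and $\\approx 5\\times 10^{5}\\times 3\\times 10^{-27}\\times 3\\times 10^{21} \\approx 5$ for Bethe-Heitler pair production, consistent with the interaction probability of about 0.1 and the five pairs per proton adopted above.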
\nWhen the magnetic field strength is 0.1 mG, \nthe required proton power is $3\\times 10^{48} \\,\\, \\text{erg} \\,\\, \\text{s}^{-1}$,\n\\added{because the peak frequency of synchrotron radiation by secondary leptons\ndecreases as $B$ decreases.}\nThe power of primary electrons becomes \n $10^{46} \\,\\, \\text{erg} \\,\\, \\text{s}^{-1}$ while the Poynting power \nis $4 \\times 10^{44} \\,\\, \\text{erg} \\,\\, \\text{s}^{-1}$. \nThus we regard $B = 0.1 \\,\\text{mG}$ as a best guess. \n\\added{The proton power $\\sim 10^{49}$ erg s$^{-1}$ estimated above\n is $\\sim 120 L_\\mathrm{Edd}$,\n where $L_\\mathrm{Edd}$ is the Eddington luminosity with the black hole mass\n $M_\\mathrm{BH} = 6.5 \\times 10^8 M_\\sun$ \\citep{ljg2006}.\n It is to be noted that $M_\\mathrm{BH} = 7.8 \\times 10^9 M_\\sun$ has been \n reported by \\cite{gcj2001}. For this value of $M_\\mathrm{BH}$, \n the proton power is $\\sim 10 L_\\mathrm{Edd}$.}\n\n\nSomewhat lower energy protons also contribute to Bethe-Heitler process; although the \ninteraction probability becomes lower, the injection Lorentz factor also becomes smaller, \nwhich makes the synchrotron peak lower.\nFor example, for $\\gamma_p=10^9$, the target photon energy is above $3\\times 10^{11}$ Hz\nwith the number density of $10^5 \\,\\, \\text{cm}^{-3}$, \nand the interaction probability of a proton becomes 1. \nThus 0.1\\% of the proton power is available. \nThe injection Lorentz factor of electrons\/positrons is also $10^9$, which emit \nsynchrotron radiation at 1 MeV for $B=1 \\, \\text{mG}$ or 0.1 MeV for $B=0.1 \\, \\text{mG}$.\nSo the proton power of $3 \\times 10^{48} \\,\\, \\text{erg} \\,\\, \\text{s}^{-1}$ \ngives rise to the observed X-ray luminosity. \n\nIt is to be noted that for a larger $R_\\mathrm{kpc}$ the interaction probability \nof protons becomes small and the required proton power accordingly increases; \na larger $R_\\mathrm{kpc}$ is unfavorable for the photo-hadronic model. \nFor these parameters, proton synchrotron luminosity is a few orders of magnitude \nsmaller than the synchrotron luminosity of secondary pairs. \nIts peak is at $10^{19}$ Hz for $B=0.1\\, \\text{mG}$.\n\nSince the resultant emission spectra depend on details of the secondary \nelectron\/positron spectra and resultant radiative cooling, \nwe numerically investigate such details in the next section. \nBased on the present investigation, in this paper, \nwe concentrate on Bethe-Heitler process.\n\n\n\\section{The Model} \\label{sec:model}\n\nWe adopt a single zone model in which the size of the emission region is $R$\naround 1 kpc, the magnetic field is $B$ around 0.1 mG, and the proton spectrum is \n\\begin{equation}\n n_p \\added{(\\gamma_p)} = K_p \\gamma_p^{-p} ,\n \\label{eq:proton-spec}\n\\end{equation}\nfor $\\gamma_{p, \\mathrm{min}}\\le \\gamma_p \\le \\gamma_{p, \\mathrm{max}}$. \nCanonically we take $p=2$, $\\gamma_{p, \\mathrm{min}}=1$,\nand $\\gamma_{p, \\mathrm{max}}=10^{10}$. \nSince photo-hadronic processes work only for large values of $\\gamma_p$, \nthe value of $\\gamma_{p, \\mathrm{min}}$ does not affect the resultant \nspectrum but only affects the energy density of protons \\added{logarithmically}.\nIf the proton energy distribution is concentrated in the range near $\\gamma_{p, \\mathrm{max}}$,\nthe power requirement below will be relieved by an order of magnitude. 
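To make the dependence on the spectral index explicit (a minimal sketch using only the spectrum above and $m_p c^2 \\approx 1.5\\times 10^{-3}$ erg), the energy density integral\n\\begin{equation}\n u_p = m_p c^2 K_p \\int_{\\gamma_{p, \\mathrm{min}}}^{\\gamma_{p, \\mathrm{max}}} \\gamma_p^{1-p}\\, d\\gamma_p\n\\end{equation}\nevaluates to $m_p c^2 K_p \\ln(\\gamma_{p, \\mathrm{max}}\/\\gamma_{p, \\mathrm{min}})$ for $p=2$ and to approximately $m_p c^2 K_p \\gamma_{p, \\mathrm{max}}^{2-p}\/(2-p)$ for $p<2$, so that for $p=2$ each decade of $\\gamma_p$ carries a comparable energy, whereas for $p<2$ the energy is dominated by protons near $\\gamma_{p, \\mathrm{max}}$.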
\nNote that the estimates of the proton power and proton energy density given in the \nprevious section are those for such a high energy population \nand an order of magnitude lower than those for $\\gamma_{p, \\mathrm{min}}=1$.\nIn contrast $\\gamma_{p, \\mathrm{max}}$ critically affects the results.\nThe energy density of protons is \n\\begin{equation}\n u_p =10^{-3} K_p \\ln(\\gamma_{p, \\mathrm{max}}\/\\gamma_{p, \\mathrm{min}}) \\,\\,\n \\text{erg} \\,\\, \\text{cm}^{-3} .\n\\end{equation}\nThe proton power is \n\\begin{equation}\n L_p = \\frac{4\\pi R^3 u_p}{3}\\frac{c}{3R}\n \\approx 4 \\times 10^{50} K_p\n \\ln(\\gamma_{p, \\mathrm{max}}\/\\gamma_{p, \\mathrm{min}}) R_\\mathrm{kpc}^2 \n \\,\\, \\text{erg} \\,\\, \\text{s}^{-1},\n\\end{equation}\nwhere we take the escape time of $3R\/c$.\n\n\nThe target photon spectra of photo-pion and Bethe-Heitler processes \nare based on the observed radio to optical photons,\ntaking into account of the cosmological redshift of 0.651.\nPrimary electrons responsible for the radio-optical emission are \ndetermined so as to reproduce the radio-optical spectra. \nWe also calculate the inverse Compton scattering of primary electrons \noff radio-optical photons, \nwhose flux is generally orders of magnitude short of X-ray observations.\nIn numerical calculations we solve the kinetic equation of primary electrons\nwith the injection spectrum given by \n$q_\\mathrm{inj}(\\gamma_e) = K_e \\gamma_e^{-\\alpha_e} \\exp(-\\gamma_e\/\\gamma_{e, 0})$,\nwhere $\\gamma_e$ is the Lorentz factor of electrons and $K_e$, $\\alpha_e$, \nand $\\gamma_{e, 0}$ are parameters to fit the observed radio-optical spectrum.\nSince we consider only mildly relativistic beaming if any, we ignore IC\/CMB. \nAlthough the number density of CMB photons is larger than \nthat of radio-optical synchrotron photons when $R_\\mathrm{kpc}$ is larger than 10, \nthe efficiency of those processes becomes small for large $R$, \nso that our treatment is justified.\n \n\nThe emission by high energy leptons produced by the hadronic processes is \ncalculated for the lepton spectrum\nobtained by solving the kinetic equation given by\n\\begin{equation}\n \\frac{d n_e (\\gamma_e)}{d t}=q_\\mathrm{BH}(\\gamma_e) + q_\\mathrm{pp}(\\gamma_e)\n -\\frac{c n_e (\\gamma_e)}{3R}\n -\\frac{d~}{d\\gamma_e} [\\dot{\\gamma}_e n_e (\\gamma_e)] ,\n \\label{eq:e-kinetic-hadronic}\n\\end{equation}\nwhere $n_e(\\gamma_e)$ is the lepton density per unit interval of $\\gamma_e$.\nThe lepton injection rate is denoted by $q_\\mathrm{BH}(\\gamma_e)$ for Bethe-Heitler process, \nwhich is calculated according to the formulation given by \\cite{ka2008},\n\\added{using a given proton spectrum with equation (\\ref{eq:proton-spec}).}\nLepton production via photo-pion processes is denoted by $q_\\mathrm{pp}(\\gamma_e)$.\nHowever, we do not include this term in numerical calculations \nbecause the leptons produced by photo-pion processes do not contribute much \nto the X-ray emission.\nHere $\\dot{\\gamma}_e$ denotes radiative cooling through synchrotron radiation \nand inverse Compton scattering.\nWe also set the lepton escape time to be $3 R\/c$.\nUsing the steady solution of equation (\\ref{eq:e-kinetic-hadronic}), \nwe calculate the emission spectra through synchrotron radiation \nand inverse Compton scattering. \n\n\n\n\\section{Results} \\label{sec:results}\n\n\\begin{figure}[ht!] 
\n \\centering\\includegraphics[scale=0.5,clip]{figure1.eps}\n \\caption{The production spectrum of electrons and positrons \n for \\added{$p = 2$ and} $K_p = 1$ \\added{cm$^{-3}$.}\n The synchrotron radiation spectrum by primary electrons \n for $R = 1$ kpc and $B = 0.1$ mG (Fig. \\ref{fig:1kpc-no-beaming-exp})\n is used as a target photon spectrum in lepton production.\n The black line shows the pair production rate by Bethe-Heitler process,\n the red and blue lines show the electron and positron production rates, \n respectively, by photo-pion process.\n \\label{fig:pair-production-spec}}\n\\end{figure}\n\n\\begin{figure}[ht!] \n \\centering\\includegraphics[scale=0.5,clip]{figure2.eps}\n \\caption{The lepton energy spectrum for $R = 1$ kpc, $B = 0.1$ mG,\n \\added{and $p=2$}.\n The spectrum is calculated by equation (\\ref{eq:e-kinetic-hadronic})\n for a steady state.\n The injection of leptons is by Bethe-Heitler process.\n \\label{fig:lepton-energy-spec}}\n\\end{figure}\n\n\n\n\nIn Figure \\ref{fig:pair-production-spec} we show the production spectrum of electrons \nand positrons through Bethe-Heitler and photo-pion processes\nfor a photon field relevant to PKS 0637-752 jet for the fiducial case $R_\\mathrm{kpc} = 1$\nand $B_\\mathrm{mG} = 0.1$ with \\added{$p =2$} and $K_p=1$ \\added{cm$^{-3}$}.\n\\added{The pair production rate is calculated based on \\cite{ka2008}.\nThe target photons are synchrotron radiation by primary electrons,\nthe spectra of which are shown in Figure \\ref{fig:1kpc-no-beaming-exp} \nfor various parameters.}\n\\added{Figure \\ref{fig:pair-production-spec} and Table \\ref{table:prod-rate-2-18} below\ndo not include pair production via the decay of neutral pions.\nThe gamma-rays produced by the decay of neutral pions have energy comparable to\nthe energy of electrons\/positrons produced by charged pions.\nIn collisions with soft photons the gamma-rays produce electron-positron pairs, \nand these pairs contribute mainly to TeV emission.}\nAs we described in Section \\ref{sec:estimate}, the production spectrum\nby Bethe-Heitler process has a rather broad number spectrum \ncentered on $10^6<\\gamma_e<10^{10}$,\nwhile those through photo-pion production have a \\deleted{peaked spectrum}\n\\added{peak} at $\\gamma_e \\approx 10^{11}$-$10^{12}$.\nThe energy injection rate for both processes concentrates on the high energy ends,\nwith the Bethe-Heitler process being an order of magnitude larger than that for the\nphoto-pion production.\n\\added{For $\\gamma_e \\gtrsim 10^{11.5}$, \nphoto-pion processes dominate the Bethe-Heitler process.\nLeptons with $\\gamma_e \\gtrsim 10^{11.5}$ emit synchrotron radiation at \n$\\nu \\gtrsim 10^{25}$ Hz and do not contribute to X-ray emission that is our\ninterest in this paper.}\n\n\nThe resultant steady state electron\/positron energy spectrum is shown \nin Figure \\ref{fig:lepton-energy-spec}\nfor $R_\\mathrm{kpc}=1$, $B_\\mathrm{mG}=0.1$, \\added{and $p=2$}.\nThe value of $K_p$ is taken as $8.8 \\times 10^{-3}$ \\added{cm$^{-3}$} to reproduce \nthe observed X-ray flux.\nThe proton power amounts to $5 \\times 10^{49} \\,\\, \\text{erg} \\,\\, \\text{s}^{-1}$.\nAlthough this seems to be too large, it can be reduced by an order of magnitude \nif $\\gamma_{p, \\mathrm{min}}$ is large enough or if the spectral index of protons \nis less than 2, i.e., when the energy of relativistic protons is \nconcentrated in the high energy end. 
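As a rough illustration of the former option (using the expression for $L_p$ in Section \\ref{sec:model} at fixed $K_p$), raising $\\gamma_{p, \\mathrm{min}}$ from $1$ to $10^9$ gives\n\\begin{equation}\n \\frac{L_p(\\gamma_{p, \\mathrm{min}}=10^9)}{L_p(\\gamma_{p, \\mathrm{min}}=1)} = \\frac{\\ln 10}{\\ln 10^{10}} = 0.1 ,\n\\end{equation}\nwhile leaving the protons with $\\gamma_p \\gtrsim 10^9$ that dominate the photo-hadronic injection unchanged.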
\nWe tabulate in Table \\ref{table:prod-rate-2-18}\nthe pair production rates for $p = 2$ and 1.8.\n\n\nThe resultant photon spectra are shown in Figure \\ref{fig:1kpc-no-beaming-exp}.\nAs is seen, while an overall spectral shape is well reproduced, \noptical flux at $5 \\times 10^{14}$ Hz tends to be overproduced.\nAt this frequency primary electrons and secondary pairs equally contribute and \nthe resultant combined flux is a factor of two higher than the observed one.\nThis is due to radiative cooling of secondary pairs and rather a general feature.\nFor a range of the magnetic field strength this feature persists in the present model.\nPossible way out from this problem is discussed later.\nWhen the magnetic field becomes smaller, inverse Compton scattering \nof radio-optical synchrotron photons by primary electrons (SSC) becomes larger.\nIf the magnetic field is as small as 20$\\mu$G, SSC can work as an X-ray emission \nmechanism. \nIn this case, however, the electron power is as large \nas $3 \\times 10^{47} \\, \\text{erg} \\,\\, \\text{s}^{-1}$ with a large deviation \nfrom equi-partition.\nFurthermore, excessive production of optical photons by IC is inevitable.\nThus, SSC model does not work.\n\nThe secondary pairs scatter the synchrotron photons emitted by the primary electrons. \nThis IC component does not affect the emission below\n$\\sim 10^{24} \\, \\text{Hz}$ but the peak in $\\nu F_\\nu$ appears in the TeV band.\nThe IC component is not much affected by the magnetic field strength\nbecause the spectra of the (radio-optical) soft photons and the secondary pairs are fixed.\nIt is \\deleted{, however,} to be noted that \nour model does not include the absorption of $\\gamma$-rays\nby extragalactic background light.\n\\added{The TeV bump will be hard to be observed by CTA,\nbecause the expected flux is lower than the lower limit of CTA.}\n\n\n\\floattable\n\\begin{deluxetable}{c|rrrrrr}\n \\tablecaption{Pair Production Rate for $R = 1$ kpc and $B = 0.1$ mG\n \\label{table:prod-rate-2-18}}\n \\tablewidth{0pt}\n \\tablehead{\n \\colhead{} & \\multicolumn{2}{c}{$p=2$} \n & \\colhead{} & \\multicolumn{3}{c}{$p=1.8$}\n \\\\\n \\cline{2-3}\n \\cline{5-7}\n \\colhead{} & \\colhead{$\\dot{n}_\\pm$} & \\colhead{$q_\\pm$}\n & \n \\colhead{} & \\colhead{} & \\colhead{$\\dot{n}_\\pm$} & \\colhead{$q_\\pm$}\n \\\\\n \\colhead{} & \\colhead{(cm$^{-3}$ s$^{-1}$)} & \\colhead{(erg cm$^{-3}$ s$^{-1}$)} \n &\n \\colhead{} &\\colhead{} & \\colhead{(cm$^{-3}$ s$^{-1}$)} & \\colhead{(erg cm$^{-3}$ s$^{-1}$)} \n }\n \\startdata\n B-H & $9.4 \\times 10^{-21}$ \\phn & $4.0 \\times 10^{-18}$ \\phn \n & & \n & $4.8 \\times 10^{-19}$ \\phn & $3.2 \\times 10^{-16}$ \\phn \n \\\\ \n $e^-$ & $2.6 \\times 10^{-25}$ \\phn & $5.4 \\times 10^{-20}$ \\phn \n & & \n & $1.9 \\times 10^{-23}$ \\phn & $3.9 \\times 10^{-18}$ \\phn \n \\\\ \n $e^+$ & $2.4 \\times 10^{-24}$ \\phn & $5.5 \\times 10^{-19}$ \\phn \n & &\n & $1.5 \\times 10^{-22}$ \\phn & $4.0 \\times 10^{-17}$ \\phn \n \\\\ \n \\enddata\n \\tablecomments{\n $\\dot{n}_\\pm$: the electron\/positron production rate per unit volume,\n $q_\\pm$: the energy production rate per unit volume in electron\/positron production.\n B-H: Bethe-Heitler pair production, $e^-$ and $e^+$: photo-pion processes.\n $K_p = 1$ \\added{cm$^{-3}$}, $\\gamma_{p, \\mathrm{min}}=1$,\n and $\\gamma_{p, \\mathrm{max}} = 10^{10}$ are assumed.\n All the values in the table are proportional to $K_p$.\n }\n\\end{deluxetable}\n\n\n\\begin{figure}[t!]\n 
\\centering\\includegraphics[scale=0.5,clip]{figure3.eps}\n \\caption{The emission spectrum of PKS 0637-0752.\n The black filled circles and crosses are data taken from \\cite{meyer2015} \n and the crosses show the \\textit{Fermi} upper limit.\n Models are calculated for $R =1$ kpc.\n \\added{Solid lines are synchrotron radiation and SSC by primary electrons.\n The bumps above $\\sim 10^{15}$ - $10^{16}$ Hz of the solid lines are SSC components.}\n The blue solid line is for $B=0.05$ mG, the black solid line is \n for $B=0.1$ mG, and the magenta solid line for $B=0.2$ mG.\n \\added{Dashed and dot-dashed lines are emission by secondary pairs.}\n The \\deleted{black} dashed line is for $p=2$ and $K_p = 8.8\\times 10^{-3}$ \n \\added{cm$^{-3}$}\n and the \\deleted{black} dot-dashed line is for $p=1.8$ and $K_p = 1.3 \\times 10^{-4}$\n \\added{cm$^{-3}$}.\n These spectra \\added{(dashed and dot-dashed)} are calculated with $B=0.1 \\,\\text{mG}$.\n The bumps above $\\sim 10^{26} \\, \\text{Hz}$ are produced by inverse Compton scattering \n of radio-optical photons off the secondary pairs produced by the Bethe-Heitler process.\n \\label{fig:1kpc-no-beaming-exp}}\n\\end{figure}\n\n\nWe also examined a smaller size of $R_\\mathrm{kpc}=0.1$ and applied to \nthe knot WK8.9 as shown in Figure \\ref{fig:01kpc-spec}.\nWhen the magnetic field is $B_\\mathrm{mG}=0.3$, $K_p=0.18$ \\added{cm$^{-3}$} can \nreproduce the radio-optical and X-ray spectra, although \noverproduction of optical photons still persists. \nThe proton power is around $10^{49} \\, \\, \\text{erg} \\,\\, \\text{s}^{-1}$.\n\nFor a larger value of $R$, the required proton power increases; for example \nfor $R_\\mathrm{kpc} = 5$, $K_p =10^{-3}$ \\added{cm$^{-3}$} is needed amounting to \n$L_p =10^{50} \\,\\, \\text{erg}\\,\\, \\text{s}^{-1}$.\nThus, as for the emission region size, a small radius is favored in the energetics of protons. \nThese numerical results are consistent with a rather simple and optimistic estimate made \nin the previous section.\nThe predicted photon spectra show a roll-over at 10 MeV-GeV range \nand are compatible with the reported \\textit{Fermi} upper limits.\nSince the spectra are rather flat, the real problem is in the low energy \nend of the synchrotron emission. \nWe mostly skipped the contribution from electrons\/positrons of photo-pion origin.\nThey contribute mostly in the TeV range\nwith roughly similar luminosity to X-rays so that they do not affect the X-ray spectrum \nand the \\textit{Fermi} upper limit.\nWhen the maximum energy of protons is not so large, this component \ncan be totally ignored. 
\n\n\nSince the overproduction of optical flux is rather general, \nwe consider the reduction of the emission from the primary electrons.\nIn the above models we assumed the injection spectrum with the exponential cutoff.\nWhen we assume the power-law injection spectrum without the exponential cutoff,\nthe optical emission is mainly from the secondary pairs.\nOur numerical result is shown in Figure \\ref{fig:1kpc-no-exp-cutoff}.\nSuch an abrupt super-exponential cutoff of primary electron \nenergy distribution may not be unlikely,\nwhen the acceleration is limited by cooling \\citep{krm98}.\n\nAn alternative idea to reduce the overproduction of the optical flux is \ntaking into account a mild relativistic beaming.\nIn this case, the photo-hadronic rates become less frequent\nby a factor of $\\delta^{3.75}$ due to the beaming effects for the same value \nof $\\gamma_p$,\n\\added{where $\\delta$ is the beaming factor.}\n\\added{(The scaling of $\\delta^{3.75}$ is obtained as follows.\nThe observed frequency $\\nu_\\mathrm{obs}$ and the frequency in the source \n$\\nu_s$ are related by $\\nu_\\mathrm{obs} = \\delta \\nu_s$. \nThe photon density at $\\nu_\\mathrm{obs}$ is given by\n$\\nu_\\mathrm{obs} n_{\\nu_\\mathrm{obs}} = \\delta^3 \\nu_s n_{\\nu_s}\n= A \\nu_\\mathrm{obs}^{-0.75} = A \\delta^{-0.75} \\nu_s^{-0.75}$, where $A$ is a constant.\nThen $\\nu_s n_{\\nu_s} = A \\delta^{-3.75} \\nu_s^{-0.75}$ follows.)\n}\nHowever, the source frame X-ray luminosity also decreases by a factor of $\\delta^4$, \nso that the required proton power in the source frame does not much change,\n\\added{i.e., proportional to $\\delta^{0.25}$.}\nThe required kinetic power of protons increases by a factor of $\\delta^2$,\nif we set the bulk Lorentz factor $\\Gamma$ of the knot equal to $\\delta$.\nThe results for $R_\\mathrm{kpc}=1$ and $B_\\mathrm{mG}=0.1$ are shown \nin Figure \\ref{fig:beaming-spec} for $\\delta = 3$.\nAs is seen, the overproduction of the optical flux can be \navoided in this case as well.\nThe appropriate value of $K_p=10^{-2}$ \\added{cm$^{-3}$} is similar to the non-beamed case,\nso that the required proton power becomes as high as \n$10^{51} \\,\\, \\text{erg} \\,\\, \\text{s}^{-1}$, which seems to be unlikely.\nHowever, we note that milder value of $\\delta$ may reproduce the observed spectra , \nif the uncertainty of the ultraviolet flux exists by a factor of 1.5 or so.\n\n\n\n\\begin{figure}[ht!] \n \\centering\\includegraphics[scale=0.5,clip]{figure4.eps}\n \\caption{Emission spectrum of the knot WK8.9 (the data are shown by filled green\n circles taken from \\cite{meyer2015} ).\n The size of the emission region is assumed to be $0.1$ kpc.\n The red lines \\added{(solid and dashed)} are for $B = 0.3$ mG, \n \\added{and} the black lines \\added{(solid and dashed)} are for $B=0.4$ mG.\n \\deleted{and the green dot-dashed line is for $B=1$ mG.}\n \\added{For $B = 1$ mG, only synchrotron radiation and SSC by primary electrons is shown\n by the green dot-dashed line.}\n To calculate the emission from pairs produced by Bethe-Heitler process, \n $K_p = 0.18$ and 0.2 \\added{cm$^{-3}$} are assumed for $B=0.3$ and 0.4 mG, respectively.\n \\label{fig:01kpc-spec}}\n\\end{figure}\n\n\n\\begin{figure}[ht!] 
\n \\centering\\includegraphics[scale=0.5,clip]{figure5.eps}\n \\caption{\n The injection of primary electrons without exponential cutoff\n is assumed for $R = 1$ kpc.\n The black line is for $B=0.1$ mG, the blue line is for $B=0.2$ mG, and the magenta line \n is for $B = 0.3$ mG.\n The red line is the emission from pairs produced by Bethe-Heitler pair production for\n $B= 0.1$ mG and $K_p=8 \\times 10^{-3}$ \\added{cm$^{-3}$}.\n \\label{fig:1kpc-no-exp-cutoff}}\n\\end{figure}\n\n\n\\begin{figure} [ht!] \n \\centering\\includegraphics[scale=0.5,clip]{figure6.eps}\n \\caption{Beaming factor $\\delta = 3$ is assumed for $R =1$ kpc and $B = 0.1$ mG.\n To calculate X-ray emission, $K_p = 1.1 \\times 10^{-2}$ \\added{cm$^{-3}$} is assumed.\n \\label{fig:beaming-spec}}\n\\end{figure}\n\n\n \n\\section{Conclusion} \\label{sec:conclusion}\n\nWe examined the photo-hadronic model of the X-ray emission from large scale jets \nof radio loud quasars specifically PKS 0637-752 and have found that Bethe-Heitler process is \neffective for the high energy electron\/positron injection. \nElectrons and positrons from photo-pion production mainly radiate\nmulti-TeV photons by synchrotron radiation and do not much contribute to X-ray emission.\nFor an appropriate choice of parameters such as $R=1$ kpc and $B=0.1$ mG, \nthe required proton power is an order of $10^{49} \\, \\text{erg} \\,\\, \\text{s}^{-1}$,\n\\added{which is $\\sim 120 L_\\mathrm{Edd}$ for $M_\\mathrm{BH} = 6.5 \\times 10^8 M_\\sun$},\nwhen the energy density of protons is concentrated in the region \nof $\\gamma_p =10^{9}$-$10^{10}$. \nCooling tail of these electrons and positrons radiate optical synchrotron emission,\nseparate from the primary electrons. \nTo avoid the overproduction of optical-ultraviolet flux,\neither the energy distribution of the primary electrons has a super-exponential cutoff or \na mild degree of the relativistic beaming effect ($\\delta \\sim 3$) appears.\nFor the latter case, the required proton power tends to be large.\nThe Poynting and primary electron powers remain moderate. \nBecause the value of the beaming factor is not strongly constrained\n\\citep[e.g.,][]{meyer2015}, further work is needed to determine\nwhich mechanism is applicable to reduce the optical synchrotron emission.\n\nProton synchrotron radiation is a few orders of magnitude smaller than \nthe photo-hadronic model prediction.\nThus, photo-proton model is an alternative option to \nexplain the strong X-ray emission from large scale jets. \n\n\n\\added{The black hole mass of PKS0627-752 given by \\cite{ljg2006} is \n$6.5 \\times 10^8 M_\\odot$. 
\nOn the other hand, \\cite{gcj2001} gives $7.8 \\times 10^9 M_\\odot$.\nThe Eddington luminosity is \n$\\sim 8.2 \\times 10^{46}$ erg s$^{-1}$ and $\\sim 9.8 \\times 10^{47}$ erg s$^{-1}$ \nfor $M_\\mathrm{BH} = 6.5 \\times 10^8 M_\\odot$ and $7.8 \\times 10^9 M_\\odot$,\nrespectively.\nThus $10^{49}$ erg s$^{-1}$ is $\\sim 10$ - $100$ times larger than the Eddington \nluminosity.\nAs for other AGNs, very luminous AGNs have been observed.\nFor example, \\cite{gfv2009} showed that S5 0014+813 has\n$\\nu L_\\nu \\sim 10^{48}$ erg s$^{-1}$ in the optical and this corresponds to \n$0.17 L_\\mathrm{Edd}$ for $M_\\mathrm{BH} = 4 \\times 10^{10} M_\\sun$.\nSome authors, e.g., \\cite{sn2016}, on the other hand, \nperformed radiation magnetohydrodynamical simulation of super-Eddington mass accretion.\nIn view of uncertainties of mass estimation and theoretical possibility \nof super-Eddington jet power, we believe our model is still viable, \nalthough the required proton power is very large.\n}\n\nFinally, our model predicts TeV emission by inverse Compton scattering of radio-optical\nphotons off pairs produced by the Bethe-Heitler process.\nEmission by electrons\/positrons produced by photo-pion processes will also \ncontribute to TeV emission.\n\n\n\n\\acknowledgments\n\\added{We are grateful to the referee for useful comments that improved the manuscript \nconsiderably. }\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:introduction}\n\nEntanglement shared among multiple parties is acknowledged as one of the fundamental resources driving the second quantum revolution~\\cite{DowlingMilburn2003}, for instance, as a basis of quantum network proposals~\\cite{EppingKampermannMacchiavelloBruss2017, PivoluskaHuberMalik2018, RibeiroMurtaWehner2018, BaeumlAzuma2017}, as a key resource for improved quantum sensing~\\cite{Toth2012} and quantum error correction~\\cite{Scott2004} or as generic ingredient in quantum algorithms~\\cite{BrussMacchiavello2011} and measurement-based quantum computation~\\cite{RaussendorfBriegel2001, BriegelRaussendorf2001}. Yet, its detection and characterisation are complicated by several factors: among them, the computational hardness of deciding whether any given system even exhibits any entanglement at all~\\cite{Gurvits2004} as well as the fact that the usual paradigm of local operations and classical communication (LOCC) lead to infinitely many types of entanglement~\\cite{VerstraeteDehaeneDeMoorVerschelde2002, OsterlohSiewert2005, DeVicenteSpeeKraus2013, SchwaigerSauerweinCuquetDeVicenteKraus2015, DeVicenteSpeeSauerweinKraus2017, SpeeDeVicenteSauerweinKraus2017, SauerweinWallachGourKraus2018} already for single copies of multipartite states. 
Significant effort has thus been devoted to devising practical means of entanglement certification from limited experimental data~\\cite{TothGuehne2005b, FriisVitaglianoMalikHuber2019}.\n\n\nOne of the principal challenges for the characterisation of multipartite entanglement lies in distinguishing between \\emph{partial separability} and its counterpart, \\emph{genuine multipartite entanglement} (GME)\\footnote{Note that the term was also coined for multipartite pure states with exclusively non-vanishing $n$-tangle in Ref.~\\cite{OsterlohSiewert2005}.}.\nHere, a multipartite state is called \\emph{partially separable} if it can be decomposed as a mixture of \\emph{partition-separable} states, i.e., of states separable with respect to some (potentially different) partitions of the parties into two or more groups, whereas any state that cannot be decomposed in this way has GME (see Fig.~\\ref{fig:GME structure} and Table~\\ref{tab:term}). One may further classify partially separable states as $k$-separable states according to the maximal number $k$ of tensor factors that all terms in the partially separable decomposition can be factorised into. If a state admits a decomposition where each term is composed of at least two tensor factors ($k=2$), the state is called \\emph{biseparable}. Thus, every partially separable state is $k$-separable for some $k\\geq2$, and hence (at least) biseparable.\nThis distinction arises naturally when considering the resources required to create a specific state:\nany biseparable state can be produced via LOCC in setups where all parties share classical randomness and subsets of parties share entangled states.\nOne of the counter-intuitive features of partially separable states is the possibility for bipartite entanglement across every possible bipartition\\footnote{An explicit example of a $k\/2$-separable (and thus biseparable) $k$-qubit state (for even $k$) with the bipartite entanglement between all neighbouring qubits in a linear arrangement can be found in~\\cite[footnote 30]{FriisMartyEtal2018}.}.\nConsequently, the notion of bipartite entanglement across partitions is insufficient to capture the notion of partial separability, and conventional methods, such as positive maps~\\cite{HorodeckiMPR1996, Peres1996}, cannot be straightforwardly applied to reveal GME (with new concepts for positive maps derived for that purpose in~\\cite{HuberSengupta2014, ClivazHuberLamiMurta2017}), which results in additional challenges compared to the \\textemdash~relatively \\textemdash~simpler scenario of detecting bipartite or partition entanglement (e.g., as in~\\cite{RodriguezBlancoBermudezMuellerShahandeh2021}).\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{flower2.png}\n\\vspace*{-8mm}\n\\caption{\\textbf{GME and (partial) separability for three qubits}. All three-qubit states separable with respect to~one of the three bipartitions, $\\mathcal{A}_{1}|\\mathcal{A}_{2}\\mathcal{A}_{3}$ (yellow), $\\mathcal{A}_{2}|\\mathcal{A}_{1}\\mathcal{A}_{3}$ (darker green), and $\\mathcal{A}_{3}|\\mathcal{A}_{1}\\mathcal{A}_{2}$ (background), form convex sets, whose intersection (turquoise) contains (but is not limited to) all fully separable states $\\mathcal{A}_{1}|\\mathcal{A}_{2}|\\mathcal{A}_{3}$ (dark blue). The convex hull of these partition-separable states contains all partially separable (the same as biseparable for tripartite systems) states. All states that are not biseparable are GME\\@. 
States with $k$-copy activatable GME are contained in the set of biseparable but not partition-separable states and are conjectured to form the lighter green areas, with those states for which GME is activatable for higher values of $k$ farther away from the border between GME and biseparability.\nThe horizontal line represents the family of isotropic GHZ states $\\rho(p)$, containing the maximally mixed state ($p=0$) and the GHZ state ($p=1$). The values $p\\suptiny{0}{0}{(k)}_{\\mathrm{GME}}$ indicate $k$-copy GME activation thresholds, which we discuss in the following.}\n\\label{fig:GME structure}\n\\end{figure}\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{GME_activation_3b.pdf}\n\\vspace*{-4mm}\n\\caption{\\textbf{Activation of GME from biseparable states}. (a) Separable bipartite states remain separable, no matter how many copies are shared, e.g., if $\\rho\\subtiny{0}{0}{\\mathcal{A}_{1}\\mathcal{A}_{2}}$ and $\\rho\\subtiny{0}{0}{\\mathcal{B}_{1}\\mathcal{B}_{2}}$ are separable with respect to~the bipartitions $\\mathcal{A}_{1}|\\mathcal{A}_{2}$ and $\\mathcal{B}_{1}|\\mathcal{B}_{2}$, then so is $\\rho\\subtiny{0}{0}{\\mathcal{A}_{1}\\mathcal{A}_{2}}\\otimes\\rho\\subtiny{0}{0}{\\mathcal{B}_{1}\\mathcal{B}_{2}}$. (b) In contrast, the joint state of multiple copies of biseparable states, e.g., $\\rho\\subtiny{0}{0}{\\mathcal{A}_{1},\\mathcal{A}_{2},\\ldots,\\mathcal{A}_{N}}$, $\\rho\\subtiny{0}{0}{\\mathcal{B}_{1},\\mathcal{B}_{2},\\ldots,\\mathcal{B}_{N}}$, and $\\rho\\subtiny{0}{0}{\\mathcal{C}_{1},\\mathcal{C}_{2},\\ldots,\\mathcal{C}_{N}}$, can be GME with respect to~the partition $\\mathcal{A}_{1}\\mathcal{B}_{1}\\mathcal{C}_{1}|\\mathcal{A}_{2}\\mathcal{B}_{2}\\mathcal{C}_{2}|\\ldots|\\mathcal{A}_{\\!N}\\mathcal{B}_{\\!N}\\mathcal{C}_{\\!N}$.\n\\label{fig:GME activation}\n}\n\\end{figure}\n\n\\begin{table*}\n\\centering\n\\caption{\\label{tab:term}Summary of terminology on GME in this paper.}\n\\begin{tabular}{@{}ll@{}}\n\\toprule\nTerm & Meaning\\\\\n\\midrule\n$k$-separable &\\parbox{12cm}{convex combination of pure states, each of which is a product of at least $k$ projectors}\\\\\nbiseparable & synonymous with $2$-separable\\\\\npartially separable& $k$-separable for some $k>1$\\\\\npartition-separable & \\parbox{12cm}{separable for a specific partition of the multipartite Hilbert space, i.e., a convex combination of projectors, each of which is a product with respect to the same partition into subsystems}\\\\\nmultipartite entangled & entangled across all bipartitions\\\\\ngenuine multipartite entangled& non-biseparable\\\\\n\\bottomrule\n\\end{tabular}\n\\end{table*}\n\nAn assumption inherent in the definitions above is that all parties locally act only on a single copy of the distributed state. \nHowever, in many experiments where quantum states are distributed among (potentially distant) parties, multiple independent but identically prepared copies of states are (or at least, can be) shared. For instance, exceptionally high visibilities of photonic states can only be achieved if each detection event stems from almost identical quantum states~\\cite{JoshieEtAl2020,WengerowskyJoshiSteinlechnerHuebelUrsin2018}. 
Adding noise to the channel then produces the situation we focus on in this article: multiple copies of noisy quantum states produced in a laboratory~\\cite{Ecker-Huber2019,HuEtAl2020}.\nEven limited access to quantum memories or signal delays then allows one to act on multiple copies of the distributed states, which is a recurring theme also in research on quantum networks~\\cite{YamasakiPirkerMuraoDuerKraus2018,NavascuesWolfeRossetPozasKerstjens2020, KraftDesignolleRitzBrunnerGuehneHuber2021}.\nCharacterising properties of GME in multi-copy scenarios is thus not only of fundamental theoretical interest but also crucial for practical applications that require GME to be distributed, such as conference key agreement~\\cite{MurtaGrasselliKampermannBruss2020}.\n\nHowever, we demonstrate here that, unlike the distinction between separable and entangled states, the distinction between biseparability and GME is not maintained in the transition from one to many copies; i.e., partial separability is not a tensor-stable concept.\nAs we show, for $N$ parties $1,\\ldots,N$, there exist multipartite quantum states $\\rho\\subtiny{0}{0}{\\mathcal{A}_{1},\\mathcal{A}_{2},\\ldots,\\mathcal{A}_{N}}$ that are biseparable, but which can be \\emph{activated} in the sense that sharing two copies results in a GME state, i.e., such that the joint state $\\rho\\subtiny{0}{0}{\\mathcal{A}_{1},\\mathcal{A}_{2},\\ldots,\\mathcal{A}_{N}}\\otimes \\rho\\subtiny{0}{0}{\\mathcal{B}_{1},\\mathcal{B}_{2},\\ldots,\\mathcal{B}_{N}}$ of two identical copies (labelled $\\mathcal{A}$ and $\\mathcal{B}$, respectively) is not biseparable with respect to the partition $\\mathcal{A}_{1}\\mathcal{B}_{1}|\\mathcal{A}_{2}\\mathcal{B}_{2}|\\ldots|\\mathcal{A}_{N}\\mathcal{B}_{N}$. (See Fig.~\\ref{fig:GME activation}.)\nThat such activation of GME is in principle possible had previously only been noted in~\\cite{HuberPlesch2011}, where it was observed that two copies of a particular four-qubit state that is itself almost fully separable can become GME\\@. \nHere, we systematically investigate this phenomenon of \\emph{multi-copy GME activation}. 
As the first main result, we show that the property of biseparability is not tensor stable in general by identifying a family of $N$-qubit isotropic Greenberger-Horne-Zeilinger (GHZ) states with two-copy activatable GME for all $N$.\nWe further demonstrate the existence of biseparable states within this family for which two copies are not enough to activate GME, but three copies are.\nMoreover, we show that the bound for partition-separability coincides with the asymptotic (in terms of the number of copies) GME-activation bound for isotropic GHZ states.\n\nMulti-copy GME activation is particularly remarkable \\textemdash~and may appear surprising at first \\textemdash~because it is in stark contrast to bipartite entanglement:\nTwo copies of states separable with respect to a fixed partition always remain partition-separable and can never become GME\\@.\nHowever, from the perspective of entanglement distillation \\textemdash~the concentration of entanglement from many weakly entangled (copies of) states to few strongly entangled ones \\textemdash~such an activation seems more natural.\nAfter all, if one party shares bipartite maximally entangled states with each other party, these could be used to establish any GME state among all $N$ parties via standard teleportation, thus distributing GME by sharing only two-party entangled states.\nNevertheless, such a procedure would require at least $N-1$ copies of these bipartite entangled states (in addition to a local copy of the GME state to be distributed), and already the example from~\\cite{HuberPlesch2011} suggests that one does not have to go through first distilling bipartite entangled pairs, followed by teleportation, but two copies can naturally feature GME already.\nWhile we have seen that the phenomenon of GME activation is more than just distillation, one may still be tempted to think that distillable entanglement is required for GME activation.\nIt is known that there exist bound entangled states \\textemdash~entangled states that do not admit distillation of entanglement no matter how many copies are provided.\nIn particular, all entangled states with positive partial transpose (PPT) across a given cut are undistillable since any number of copies is also PPT\\@.\nOne might thus suspect that GME activation should not be possible for biseparable states that are PPT across every cut and hence have no distillable entanglement (even if multiple parties are allowed to collaborate).\nAs another main result, we show that this is not the case by constructing a biseparable state that is PPT with respect to~every cut, yet two copies of the state are indeed GME\\@. \nTogether, our results thus support the following conjectures:\n\\begin{enumerate}[(i)]\n\\item{\\label{conjecture i}\nThere exists a hierarchy of states with $k$-copy activatable GME, i.e., for all $k\\geq2$ there exists a biseparable but not partition-separable state $\\rho$ such that $\\rho^{\\otimes k-1}$ is biseparable, but $\\rho^{\\otimes k}$ is GME\\@.\n}\n\\item{\\label{conjecture ii}\nGME may be activated for any biseparable but not partition-separable state (light green areas in Fig.~\\ref{fig:GME structure}) of any number of parties as $k\\rightarrow\\infty$.}\n\\end{enumerate}\n\nIn the following, we first provide the formal definitions for biseparability and GME in Sec.~\\ref{sec:sep and gme} before turning to the family of $N$-qubit isotropic GHZ states in Sec.~\\ref{sec:GME of isotropic GHZ states}. 
For all biseparable states in this family, we provide upper bounds on the minimal number of copies required to activate GME in Sec.~\\ref{sec:Multi-copy GME criterion}. In Sec.~\\ref{sec:Hierarchy of k-copy activatable states}, we then consider the case of three qubits ($N=3$), for which we can show that the bound on three-copy GME activation is tight in the sense that we identify all states in the family for which one requires at least three copies to activate GME, while two copies remain biseparable, and can also show that GME can indeed be activated for any biseparable but not partition-separable state in this family. Moreover, in Sec.~\\ref{sec:GME activation of PPT entangled states}, we construct an explicit example for two-copy GME activation from biseparable states with no distillable bipartite entanglement. Finally, we discuss the implications of our results and open questions in Sec.~\\ref{sec:Conclusion and Outlook}.\n\n\n\\section{Definitions of biseparability \\& GME}\\label{sec:sep and gme}\nWe summarise the formal definitions of biseparability and GME in this paper.\n(See also Table~\\ref{tab:term} for the summary of the definitions here.)\nFormally, a pure quantum state of an $N$-partite system with Hilbert space $\\mathcal{H}\\suptiny{0}{0}{(N)}=\\bigotimes_{i=1}^{N}\\mathcal{H}_{i}$ is separable with respect to~a $k$-partition $\\{\\mathcal{A}_{1},\\mathcal{A}_{2},\\ldots,\\mathcal{A}_{k}\\}$, with $\\mathcal{A}_{i}\\subset\n\\{1,2,3,\\ldots,N\\}$ and $\\bigcup_{i=1}^{k} \\mathcal{A}_{i}= \\{1,2,3,\\ldots,N\\}$ such that $\\bigotimes_{i=1}^{k}\\mathcal{H}_{\\mathcal{A}_{i}}=\\mathcal{H}\\suptiny{0}{0}{(N)}$, if it can be written as\n\\begin{align}\n \\ket{\\Phi\\suptiny{0}{0}{(k)}} &=\\,\\bigotimes\\limits_{i=1}^{k}\\,\\ket{\\phi_{\\mathcal{A}_{i}}},\\quad\\ket{\\phi_{\\mathcal{A}_{i}}}\\in\\mathcal{H}_{\\mathcal{A}_{i}}\n \\,.\\label{pure}\n\\end{align}\nWhen generalising to density matrices, it is common not to specify all possible partitions, but to use the notion of \\emph{$k$-separability} instead: \nA density operator is called \\emph{$k$-separable} if it can be decomposed as a convex sum of pure states that are all separable with respect to~\\emph{some} $k$-partition, i.e., if it is of the form (see, e.g., the review~\\cite{FriisVitaglianoMalikHuber2019})\n\\begin{align}\n \\rho\\suptiny{0}{0}{(k)} &=\\,\n \\sum\\limits_{i} p_{i} \n \\ket{\\Phi_i\\suptiny{0}{0}{(k)}}\\!\\!\\bra{\\Phi_i\\suptiny{0}{0}{(k)}}\n \\,.\\label{ksep}\n\\end{align}\nNote that the lack of tensor stability of partial separability shown in the following also implies that the related concept of $k$-producibility~\\cite{GuehneToth2009,Szalay2019} is not tensor stable. Crucially, each $\\ket{\\Phi_i\\suptiny{0}{0}{(k)}}$ may be $k$-separable with respect to~a different $k$-partition. Consequently, $k$-separability does not imply separability of $\\rho\\suptiny{0}{0}{(k)}$ with respect to~a specific partition, except when $\\rho\\suptiny{0}{0}{(k)}$ is a pure state or when $k=N$. In the latter case the state is called \\emph{fully separable}.\nTo make this distinction more explicit, we refer to all (at least) biseparable states that are actually separable with respect to~some bipartition as \\emph{partition-separable}.\nAt the other end of this separability spectrum one encounters \\emph{biseparable states} ($k=2$), while all states that are not at least biseparable (formally, $k=1$) are called \\emph{genuinely $N$-partite entangled}. 
We will here use the term GME for the case $k=1$.\nThe operational reason for this definition of GME is easily explained: any biseparable state of the form of Eq.~(\\ref{ksep}) can be created by $N$ parties purely by sharing partition-separable states of the form of Eq.~(\\ref{pure}) and some classical randomness. \nIn addition, this conveniently results in a convex notion of biseparability (as illustrated for the example in Fig.~\\ref{fig:GME structure}) amenable to entanglement witness techniques, which inherently rely on convexity.\n\n\n\\section{GME of isotropic GHZ states}\\label{sec:GME of isotropic GHZ states}\nTo overcome the difficulty in analysing GME, the crucial technique here is to use states in $X$-form, i.e., those with nonzero entries of density operators only on the main diagonal and main anti-diagonal with respect to~the computational basis.\nLet us now consider a family of mixed $N$-qubit states, \\emph{isotropic GHZ states}, given by\n\\begin{align}\n \\rho(p) &=\\,p\\,\\ket{\\mathrm{GHZ}_{N}\\!}\\!\\!\\bra{\\mathrm{GHZ}_{N}\\!}\\,+\\,(1-p)\\,\\tfrac{1}{2^{N}}\\mathds{1}_{2^{N}}\\,,\n \\label{eq:GHZ with white noise}\n\\end{align}\nobtained as convex combination of the $N$-qubit maximally mixed state $\\tfrac{1}{2^{N}}\\mathds{1}_{2^{N}}$ and a pure \n$N$-qubit GHZ state\n\\begin{align}\n \\ket{\\mathrm{GHZ}_{N}\\!} &=\\,\\tfrac{1}{\\sqrt{2}}\\bigl(\\ket{0}^{\\otimes N}+\\ket{1}^{\\otimes N}\\bigr).\n\\end{align}\nwith real mixing parameter $p\\in[-1\/(2^{N}-1),1]$.\nSince states in this family are in $X$-form with respect to~the $N$-qubit computational basis, we can straightforwardly calculate the \\emph{genuine multipartite} (GM) \\emph{concurrence}, an entanglement measure for a multipartite state defined in terms of a polynomial of elements of its density matrix~\\cite{HashemiRafsanjaniHuberBroadbentEberly2012,MaChenChenSpenglerGabrielHuber2011}. For any $N$-qubit density operator $\\rho_{X}$ in $X$-form, i.e., \n\\begin{align}\n \\rho_{X}=\\begin{pmatrix} \\tilde{a} & \\tilde{z}\\,\\tilde{d} \\\\ \\tilde{d}\\,\\tilde{z}^{\\dagger} & \\tilde{d}\\,\\tilde{b}\\,\\tilde{d} \\end{pmatrix},\n\\end{align}\nwhere $\\tilde{a}=\\diag\\{a_{1},\\ldots,a_{n}\\}$, $\\tilde{b}=\\diag\\{b_{1},\\ldots,b_{n}\\}$, and $\\tilde{z}=\\diag\\{z_{1},\\ldots,z_{n}\\}$ are diagonal $n\\times n$ matrices with $n=2^{N-1}$, $a_{i},b_{i}\\in\\mathbb{R}$ and $z_{i}\\in\\mathbb{C}$ for all $i=1,2,\\ldots,n$, and $\\tilde{d}=\\operatorname{antidiag}\\{1,1,\\ldots,1\\}$ is antidiagonal,\nthe GM concurrence is given by\n\\begin{align}\n C_{\\mathrm{GM}}(\\rho_{X}) &=\\,2\\max\\bigl\\{0,\\max_{i}\\{|z_{i}|-\\sum\\limits_{j\\neq i}^{n}\\sqrt{a_{j}b_{j}}\\}\\bigr\\},\n \\label{eq:GM concurrence}\n\\end{align}\nand provides a necessary and sufficient condition for GME whenever $C_{\\mathrm{GM}}>0$. 
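The GM concurrence of Eq.~(\\ref{eq:GM concurrence}) is also straightforward to evaluate numerically. The following minimal sketch (Python with NumPy; purely illustrative, with arbitrarily chosen values of $N$ and $p$) reads off $\\tilde{a}$, $\\tilde{b}$, and $\\tilde{z}$ from a density matrix in $X$-form and applies Eq.~(\\ref{eq:GM concurrence}) to the isotropic GHZ states defined above.
\\begin{verbatim}
import numpy as np

def gm_concurrence_x_form(rho):
    # Eq. (GM concurrence): a_j and b_j are the diagonal entries paired by the
    # anti-diagonal; z_j couples basis state j to basis state d-1-j.
    d = rho.shape[0]
    n = d // 2
    a = np.array([rho[j, j].real for j in range(n)])
    b = np.array([rho[d - 1 - j, d - 1 - j].real for j in range(n)])
    z = np.array([rho[j, d - 1 - j] for j in range(n)])
    vals = [abs(z[i]) - sum(np.sqrt(a[j] * b[j]) for j in range(n) if j != i)
            for i in range(n)]
    return 2.0 * max(0.0, max(vals))

def isotropic_ghz(N, p):
    # rho(p) = p |GHZ_N><GHZ_N| + (1 - p) * identity / 2**N
    d = 2 ** N
    ghz = np.zeros(d)
    ghz[0] = ghz[-1] = 1.0 / np.sqrt(2.0)
    return p * np.outer(ghz, ghz) + (1.0 - p) * np.eye(d) / d

for N in (3, 4):
    for p in (0.3, 0.45, 0.6):
        print(N, p, round(gm_concurrence_x_form(isotropic_ghz(N, p)), 4))
\\end{verbatim}
The printed values vanish for $p$ below the GME threshold derived below and are positive above it.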
\nIn the case of the state $\\rho(p)$ from Eq.~(\\ref{eq:GHZ with white noise}), we have $a_{i}=b_{i}=\\tfrac{1-p}{2^{N}}+\\delta_{i1}\\tfrac{p}{2}$ and $z_{i}=\\delta_{i1}\\tfrac{p}{2}$, such that\n\\begin{align}\n C_{\\mathrm{GM}}\\bigl[\\rho(p)\\bigr] &=\\,\\max\\{0,|p|-(1-p)(1-2^{1-N})\\}.\n\\end{align}\nThus, $\\rho(p)$ is GME if and only if\n\\begin{align}\n p &>\\,p\\suptiny{0}{0}{(1)}_{\\mathrm{GME}}(N)\\,\\coloneqq\\,\\frac{2^{N-1}-1}{2^{N}-1}\\,,\n\\end{align}\ni.e., if and only if $p$ surpasses the single-copy threshold $p\\suptiny{0}{0}{(1)}_{\\mathrm{GME}}$.\nConversely, we can be certain that $\\rho(p)$ is not GME for $p\\leq (2^{N-1}-1)\/(2^{N}-1)$, and hence at least biseparable.\n\n\n\\section{Multi-copy GME criterion}\\label{sec:Multi-copy GME criterion}\nOur first goal is then to check if two copies of $\\rho(p)$ are GME\\@.\nSince the GM concurrence is an entanglement monotone, $C_\\mathrm{GM}\\bigl[\\rho(p)^{\\otimes k}\\bigr]$ is monotonically non-decreasing as $k$ increases~\\cite{MaChenChenSpenglerGabrielHuber2011};\nthat is, if we have $C_\\mathrm{GM}\\bigl[\\rho(p)\\bigr]=0$ for $\\rho(p)$ in $X$-form, it holds that $C_\\mathrm{GM}\\bigl[\\rho(p)^{\\otimes 2}\\bigr]\\geq 0$ in general.\nHowever, \nusing $C_\\mathrm{GM}\\bigl[\\rho(p)^{\\otimes 2}\\bigr]> 0$\nas a necessary and sufficient criterion for GME \nis not an option in this case, \nsince ${\\rho(p)}^{\\otimes 2}$ may not be of $X$-form even if a single copy is, and we therefore generally cannot directly calculate $C_\\mathrm{GM}\\bigl[\\rho(p)^{\\otimes 2}\\bigr]$. \nThe crucial idea here is to make use of the fact that stochastic LOCC (SLOCC) can never create GME from a biseparable state.\n\nTo construct a sufficient GME criterion,\nwe therefore use a map $\\mathcal{E}_{\\circ}$ implementable via SLOCC~\\cite{LamiHuber2016}, which, for any two density operators $\\rho$ and $\\sigma$ acting on $\\mathcal{H}$,\nmaps the state $\\rho\\otimes\\sigma$ acting on $\\mathcal{H}^{\\otimes 2}$ to\n\\begin{align}\n \\mathcal{E}_{\\circ}[\\rho\\otimes\\sigma] &=\\,\\frac{\\rho\\circ\\sigma}{\\textnormal{Tr}(\\rho\\circ\\sigma)}\\quad\\text{on }\\mathcal{H},\n\\end{align}\nwhere the right-hand side is a density operator acting on $\\mathcal{H}$, and ``$\\circ$'' denotes the Hadamard product (or Schur product), i.e., the component-wise multiplication of the two matrices.\nWhat is useful for us here is that the Hadamard product of two $X$-form matrices results in an $X$-form matrix. Consequently, we can directly calculate the GM concurrence for the state resulting from applying the `\\emph{Hadamard-product map}' $\\mathcal{E}_{\\circ}$ to two copies of an originally biseparable state.\nIf the GM concurrence of $\\mathcal{E}_{\\circ}[\\rho(p)^{\\otimes 2}]$ is nonzero, we can conclude that two copies of $\\rho(p)$ are GME, even if a single copy is not.\nTo decide whether $\\mathcal{E}_{\\circ}[\\rho(p)^{\\otimes 2}]$ is GME or not, i.e., whether the GM concurrence is nonzero or not, we can ignore the normalization and just consider $\\rho(p)\\circ\\rho(p)=\\rho(p)^{\\circ 2}$. Moreover, in the maximization over the index $i$ in Eq.~(\\ref{eq:GM concurrence}), the maximum is obtained for $i=1$. 
We can thus conclude that $\\rho(p)^{\\otimes 2}$ is GME if\n\\begin{align}\n |z_{1}^{2}|-\\sum\\limits_{j\\neq 1}^{n}\\sqrt{a_{j}^{2}b_{j}^{2}}\\,=\\,\\tfrac{p^{2}}{4}-(2^{N-1}-1)\\bigl(\\tfrac{1-p}{2^{N}}\\bigr)^{2}\\,>\\,0,\n \\label{eq:nonzero GM concurrence 2 copies}\n\\end{align}\nwhich translates to the condition $p\/(1-p)>\\sqrt{2^{N-1}-1}\/2^{N-1}$, and in turn can be reformulated as the condition\n\\vspace*{-2mm}\n\\begin{align}\n p &>\\,p\\suptiny{0}{0}{(2)}_{\\mathrm{GME}}(N)\\,\\coloneqq\\,\\frac{\\sqrt{2^{N-1}-1}}{2^{N-1}+\\sqrt{2^{N-1}-1}}.\n \\label{eq:GME treshhold 2 copies}\n\\end{align}\nAs we see, we have $p\\suptiny{0}{0}{(1)}_{\\mathrm{GME}}>p\\suptiny{0}{0}{(2)}_{\\mathrm{GME}}$ for all $N\\geq3$, confirming that \\emph{there exist biseparable states}, with values $p\\leq p\\suptiny{0}{0}{(1)}_{\\mathrm{GME}}$, for which two copies are GME, that is, for which $p>p\\suptiny{0}{0}{(2)}_{\\mathrm{GME}}$. \n\nMoreover, we can now concatenate multiple uses of the SLOCC map $\\mathcal{E}_{\\circ}$. For instance, we can identify the threshold value $p\\suptiny{0}{0}{(3)}_{\\mathrm{GME}}$ of $p$ at which the state $\\mathcal{E}_{\\circ}\\bigl[\\rho(p)\\otimes\\mathcal{E}_{\\circ}[\\rho(p)^{\\otimes 2}]\\bigr]$ resulting from $2$ applications of $\\mathcal{E}_{\\circ}$ to a total of $3$ copies of $\\rho(p)$ is GME, or, more generally, the corresponding threshold value $p\\suptiny{0}{0}{(k)}_{\\mathrm{GME}}$ for which $k$ copies result in a GME state after applying the map $\\mathcal{E}_{\\circ}$ a total of $k-1$ times. From Eq.~(\\ref{eq:nonzero GM concurrence 2 copies}) it is easy to see that these threshold values are obtained as\n\\begin{align}\n p\\suptiny{0}{0}{(k)}_{\\mathrm{GME}}(N)\\,\\coloneqq\\,\\frac{\\sqrt[k]{2^{N-1}-1}}{2^{N-1}+\\sqrt[k]{2^{N-1}-1}}.\n\\end{align}\n\n\n\\section{Hierarchy of $k$-copy activatable states}\\label{sec:Hierarchy of k-copy activatable states}\nThe threshold values $p\\suptiny{0}{0}{(k)}_{\\mathrm{GME}}$ provide upper bounds on the minimal number of copies required to activate GME\\@: a value $p$ satisfying $p\\suptiny{0}{0}{(k)}_{\\mathrm{GME}}<p\\leq p\\suptiny{0}{0}{(k-1)}_{\\mathrm{GME}}$ guarantees that $k$ copies suffice to activate GME\\@. Since $p\\suptiny{0}{0}{(k)}_{\\mathrm{GME}}(N)$ decreases monotonically with $k$ towards the limiting value $p_{\\mathrm{crit}}(N)=1\/(2^{N-1}+1)$, every state of this family with $p>p_{\\mathrm{crit}}$ features $k$-copy activatable GME, at least asymptotically as $k\\rightarrow\\infty$, and is thus also not partition-separable. This leads us to our second conjecture, also repeated here for convenience:\n\n\\noindent\n\\emph{Conjecture~(\\ref{conjecture ii}):\\ \nGME may be activated for any biseparable but not partition-separable state of any number of parties as $k\\rightarrow\\infty$.}\n\nConjecture~(\\ref{conjecture ii}) holds for isotropic GHZ states.
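As a quick numerical illustration of this hierarchy of thresholds (the following sketch merely tabulates the closed-form expression above), one may list $p\\suptiny{0}{0}{(k)}_{\\mathrm{GME}}(N)$ for increasing $k$ together with the limiting value $p_{\\mathrm{crit}}(N)=1\/(2^{N-1}+1)$:
\\begin{verbatim}
def p_gme(N, k):
    # k-copy GME-activation threshold for the N-qubit isotropic GHZ family
    r = (2 ** (N - 1) - 1) ** (1.0 / k)
    return r / (2 ** (N - 1) + r)

for N in (3, 4, 5):
    thresholds = [round(p_gme(N, k), 4) for k in (1, 2, 3, 10, 100)]
    p_crit = 1.0 / (2 ** (N - 1) + 1)   # limit of p_gme(N, k) as k -> infinity
    print(N, thresholds, round(p_crit, 4))
\\end{verbatim}
The thresholds decrease monotonically with $k$ and approach $p_{\\mathrm{crit}}$ from above, consistent with Conjecture~(\\ref{conjecture ii}) for this family.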
But does it hold in general?\n\n\n\\section{GME activation from PPT entangled states}\\label{sec:GME activation of PPT entangled states}\nA situation where one might imagine Conjecture~(\\ref{conjecture ii}) to fail is that of biseparable (but not partition-separable) states with PPT entanglement across every bipartition, as discussed in Sec.~\\ref{sec:introduction}.\nFor isotropic GHZ states, however, the PPT criterion across every cut coincides exactly with the threshold $p_{\\mathrm{crit}}$ for biseparability (and GME activation), as one can confirm by calculating the eigenvalues of the partial transpose of $\\rho(p)$ (see Appendix~\\ref{appendix:PPT criterion for isotropic GHZ states}).\nWe thus turn to a different family of states, for which this is not the case.\n\nSpecifically, as we show in detail in Appendix~\\ref{appendix:PPT entangled GME activation}, we construct a family of biseparable three-party states \n\\begin{align}\n \\rho_{\\mathcal{A}_{1}\\mathcal{A}_{2}\\mathcal{A}_{3}\n } &=\\,\n \\sum\\limits_{\\substack{i,j,k=1\\\\ i\\neq j\\neq k\\neq i}}^{3}p_{i}\\ \\rho_{\\mathcal{A}_{i}}\\otimes\\rho_{\\mathcal{A}_{j}\\mathcal{A}_{k}}\\suptiny{0}{0}{\\mathrm{PPT}}\n\\end{align}\nwhere the $\\rho_{\\mathcal{A}_{j}\\mathcal{A}_{k}}\\suptiny{0}{0}{\\mathrm{PPT}}$ are (different) two-qutrit states with PPT entanglement across the respective cuts $\\mathcal{A}_{j}|\\mathcal{A}_{k}$ for $j\\neq k\\in\\{1,2,3\\}$ and $\\sum_{i}p_{i}=1$. Via LOCC, three copies (labelled $\\mathcal{A}$, $\\mathcal{B}$, and $\\mathcal{C}$, respectively) of this state $\\rho_{\\mathcal{A}_{1}\\mathcal{A}_{2}\\mathcal{A}_{3}}$ can be converted to what we call \\emph{PPT-triangle states} of the form\n\\begin{equation}\n \\rho_{\\mathcal{A}_{2}\\mathcal{A}_{3}}\\suptiny{0}{0}{\\mathrm{PPT}}\\otimes \\rho_{\\mathcal{B}_{1}\\mathcal{B}_{3}}\\suptiny{0}{0}{\\mathrm{PPT}}\\otimes \\rho_{\\mathcal{C}_{1}\\mathcal{C}_{2}}\\suptiny{0}{0}{\\mathrm{PPT}}.\n\\end{equation}\nUsing a GME witness based on the lifted Choi map (cf.~\\cite{HuberSengupta2014, ClivazHuberLamiMurta2017}), we show that there exists a parameter range where these PPT-triangle states are GME.\nThis proves that GME activation is possible even from biseparable states with only PPT entanglement across every bipartition.\n\n\n\\section{GME activation and shared randomness}\n\nProvided that our conjectures are true, incoherent mixing (access to shared randomness) can lead to situations where the number of copies needed for GME activation is reduced. In the extreme case, and this is true even based only on the results already proven here (and thus independently of whether or not the conjectures turn out to be true), the probabilistic combination of partition-separable states (without activatable GME) can result in a state \\textemdash\\ a biseparable isotropic GHZ state \\textemdash\\ which has activatable GME. Although this may at first glance appear to be at odds with the usual understanding of bipartite entanglement, which cannot arise from forming convex combinations of separable states, we believe this can be understood rather intuitively if we view incoherent mixing as a special case of a more general scenario in which one may have any amount of information on the states that are shared between different observers.
As an example, consider the following situation:\n\nThree parties, labelled, $1$, $2$ and $3$, share two identical (as in, the system and its subsystems have the same Hilbert space dimensions and are represented by the same physical degrees of freedom) tripartite quantum systems, labelled $\\mathcal{A}$ and $\\mathcal{B}$, in the states $\\rho_{\\mathcal{A}_{1}|\\mathcal{A}_{2}\\mathcal{A}_{3}}$ and $\\rho_{\\mathcal{B}_{1}\\mathcal{B}_{2}|\\mathcal{B}_{3}}$, respectively, where we assume that $\\rho_{\\mathcal{A}_{1}|\\mathcal{A}_{2}\\mathcal{A}_{3}}$ is separable with respect to the bipartition $\\mathcal{A}_{1}|\\mathcal{A}_{2}\\mathcal{A}_{3}$ and $\\rho_{\\mathcal{B}_{1}\\mathcal{B}_{2}|\\mathcal{B}_{3}}$ is separable with respect to the bipartition $\\mathcal{B}_{1}\\mathcal{B}_{2}|\\mathcal{B}_{3}$. Clearly, both of these systems and states individually are biseparable, but if the parties have full information about which system is which, e.g., the first system is $A$ and the second system is $B$, then the joint state $\\rho_{\\mathcal{A}_{1}|\\mathcal{A}_{2}\\mathcal{A}_{3}}\\otimes\\rho_{\\mathcal{B}_{1}\\mathcal{B}_{2}|\\mathcal{B}_{3}}$ can be GME with respect to the partition $\\mathcal{A}_{1}\\mathcal{B}_{1}|\\mathcal{A}_{2}\\mathcal{B}_{2}|\\mathcal{A}_{3}\\mathcal{B}_{3}$. In this sense, two biseparable systems can yield one GME system. Now, let us suppose that the parties do not have full information which system is in which state. For simplicity, let us assume that either system may be in either state with the same probability $\\tfrac{1}{2}$. Then the state of either of the systems is described by the convex mixture $\\rho_{\\mathrm{mix}}=\\tfrac{1}{2}\\rho_{A_{1}|A_{2}A_{3}}+\\tfrac{1}{2}\\rho_{B_{1}B_{2}|B_{3}}$, where we have kept the labels $A$ and $B$, but they now refer to the same subsystems, i.e., $A_{i}=B_{i}$ for all~$i$. The state $\\rho_{\\mathrm{mix}}$ may not be partition separable anymore, but is certainly still biseparable. In particular, it may have activatable GME, even though neither $\\rho_{A_{1}|A_{2}A_{3}}$ nor $\\rho_{B_{1}B_{2}|B_{3}}$ do. For the sake of the argument let us assume that the latter is indeed the case and that GME is activated for $2$ copies in this case, such that $\\rho_{\\mathrm{mix}}^{\\otimes 2}$ is GME. That means, if one has access to both systems, $A$ and $B$, even without knowing which system is in which state, one would end up with GME. 
However, the additional randomness with respect to the case where one knows exactly which state which system is in results in an increased entropy of $\\rho_{\\mathrm{mix}}^{\\otimes 2}$ with respect to $\\rho_{\\mathcal{A}_{1}|\\mathcal{A}_{2}\\mathcal{A}_{3}}\\otimes\\rho_{\\mathcal{B}_{1}\\mathcal{B}_{2}|\\mathcal{B}_{3}}$, and thus represents a disadvantage with respect to the latter case.\n\nIn general, it is therefore not problematic that the conjectures, if true, would imply that incoherent mixtures of $k$-activatable states may result in $k'$-activatable states with $k'0$, for any coupling strength above\n\\begin{equation}\n\\lambda_0=\\sqrt{\\omega\\omega_0}.\n\\label{critical_point_eta=0}\n\\end{equation}\nConsidering zero temperature, as we do in this paper, $\\lambda_0$ has the significance of a critical coupling strength, where for $\\lambda\\le\\lambda_0$ the photon number is zero in the ground state, while it follows the formula\n\\begin{equation}\n\\frac{\\langle a^\\dagger a\\rangle_0}N=\\frac{\\omega_0}{4\\omega}\\frac{\\lambda^4-\\lambda_0^4}{\\lambda^2\\lambda_0^2}\n\\label{eqn:photon_number_eta=0}\n\\end{equation}\nwhen $\\lambda>\\lambda_0$. Soon after the rigorous calculation of Hepp and Lieb, the same result was derived by Wang and Hioe \\cite{wang&hioe_1973} using a simpler method [see their Eq.~(40)].\n\\subsection{Counter-rotating terms}\n\\label{sec:Dicke_counter_rotating}\nThe method of Wang and Hioe readily generalizes to an interaction without the rotating-wave approximation: $aJ_++a^\\dagger J_-\\to (a+a^\\dagger)(J_-+J_+)$. The calculation, made by Hepp and Lieb \\cite{hepp&lieb_1973b} and Carmichael \\emph{et al.} \\cite{carmichael_etal_1973}, retains the phase transition and the form of Eq.~(\\ref{eqn:photon_number_eta=0}), but unlike in the rotating-wave approximation, the state of nonzero photon number now assigns a definite phase to the field, and the critical coupling is changed to $\\sqrt{\\omega\\omega_0}\/2$. In fact Hepp and Lieb \\cite{hepp&lieb_1973b} consider a Hamiltonian generalized in the form\n\\begin{eqnarray}\nH_\\eta&=&\\omega a^\\dagger a+\\omega_0J_z\\notag\\\\\n&&+\\frac{\\lambda}{\\sqrt N}(aJ_++a^\\dagger J_-)+\\eta\\frac{\\lambda}{\\sqrt N}(a^\\dagger J_++aJ_-),\n\\label{eqn:hamiltonian_counter_rotating}\n\\end{eqnarray}\nwith $\\eta$ a parameter. We let $\\eta$ vary from 0 to 1 and show (Sec.~\\ref{sec:epsilon=0}) that there are actually two critical coupling strengths marking transitions to states of definite phase:\n\\begin{equation}\n\\lambda_\\eta^\\pm=\\frac1{1\\pm\\eta}\\sqrt{\\omega\\omega_0}.\n\\label{eqn:critical_points_kappa=0}\n\\end{equation}\nMoreover, photon numbers for solutions bifurcating from both critical points, $\\lambda_\\eta^+$ and $\\lambda_\\eta^-$, follow the same form, that of Eq.~(\\ref{eqn:photon_number_eta=0}):\n\\begin{equation}\n\\frac{\\langle a^\\dagger a\\rangle_\\eta^\\pm}N=\\frac{\\omega_0}{4\\omega}\\frac{\\lambda^4-(\\lambda_\\eta^\\pm)^4}{\\lambda^2(\\lambda_\\eta^\\pm)^2}.\n\\label{eqn:photon_number_kappa=0}\n\\end{equation}\nThe transition at $\\lambda_\\eta^+$ corresponds to the extension of the Dicke phase transition of Ref.~\\cite{hepp&lieb_1973a} discussed in Refs.~\\cite{hepp&lieb_1973b} and \\cite{carmichael_etal_1973}: the zero photon state becomes unstable and is replaced by a stable state of nonzero photon number. 
The transition at $\\lambda_\\eta^-$, not identified before to our knowledge, marks a restabilization of the zero photon state and the birth of an unstable state of nonzero photon number. It provides the fulcrum upon which the unification of the coherently driven extension of the Dicke phase transition and the breakdown of photon blockade turns.\n\\subsection{Dissipative realization}\n\\label{sec:dissipative_realization}\nWhile Dicke's paper \\cite{dicke_1954} generated enormous interest in superradiance as a transient, away-from-equilibrium process \\cite{gross&haroche_1982}, the Dicke quantum phase transition of Hepp and Lieb was, for many years, largely seen as academic---beyond the reach of experiments due to a needed coupling strength on the order of the transition frequency, and, on the theory side, suspect because of approximations used in the Dicke model \\cite{rzazewski_etal_1975,bialynicki-birula&rzazewski_1979,gawedzki&rzazewski_1981,rzazewski&wodkiewicz_1991}. Dissipative realizations of the Dicke Hamiltonian as an effective Hamiltonian overcome these obstacles by replacing a transition from a ground to an excited state by one between a pair of ground states. Specifically, we have the scheme introduced by Dimer \\emph{et al.} \\cite{dimer_etal_2007,baden_etal_2014} in mind; although there are essentially parallel setups, where internal states are replaced by momentum states of a Bose-Einstein condensate \\cite{baumann_etal_2010,baumann_etal_2011}.\n\nWe consider a pair of Raman transitions between states $|1\\rangle$ and $|2\\rangle$---the two-state system---as sketched in Fig.~\\ref{fig:fig1}, where one leg of each transition is driven by a laser field, with amplitudes and frequencies $\\Omega_{1,2}$ and $\\omega_{1,2}$, and the other creates and annihilates cavity photons of frequency $\\omega$, with coupling strength to the cavity mode $g$. Adopting this configuration, with the excited states (not shown) adiabatically eliminated, and in an interaction picture---free Hamiltonian $\\omega_+a^\\dagger a+\\omega_-J_z$, $\\omega_{\\pm}=(\\omega_1\\pm\\omega_2)\/2$---an effective Hamiltonian is realized in the form of Eq.~(\\ref{eqn:hamiltonian_counter_rotating}):\n\\begin{eqnarray}\nH_\\eta^\\prime&=&\\Delta a^\\dagger a+\\Delta_0J_z\\notag\\\\\n&&+\\frac{\\lambda}{\\sqrt N}(aJ_++a^\\dagger J_-)+\\eta\\frac{\\lambda}{\\sqrt N}(a^\\dagger J_++aJ_-),\n\\label{eqn:hamiltonian_raman_model}\n\\end{eqnarray}\nwith effective frequencies\n\\begin{eqnarray}\n\\Delta&=&\\omega-\\omega_+=\\frac{\\delta_1+\\delta_2}2,\\\\\n\\label{eqn:delta}\n\\Delta_0&=&\\omega_0-\\omega_-=\\frac{\\delta_1-\\delta_2}2,\n\\label{eqn:delta_zero}\n\\end{eqnarray}\nwhere $\\delta_1$ and $\\delta_2$ are Raman detunings (Fig.~\\ref{fig:fig1}), and the coupling constants $\\lambda$ and $\\eta\\lambda$ follow from the strength of the Raman coupling (see Ref.~\\cite{dimer_etal_2007}). 
We consider an initial state $|0\\rangle|1\\rangle$, with $|0\\rangle$ the cavity mode vacuum, in which case the Raman driving is a source of photons through the counter-rotating interaction, an external drive that is off-set by the cavity loss; thus, the dissipative realization of the generalized Dicke Hamiltonian, Eq.~(\\ref{eqn:hamiltonian_counter_rotating}), is modeled by the master equation\n\\begin{equation}\n\\frac{d\\rho}{dt}=-i[H_\\eta^\\prime,\\rho]+\\kappa{\\mathcal L}[a]\\rho,\n\\label{eqn:master_equation}\n\\end{equation}\nwhere $\\kappa$ is the loss rate and ${\\mathcal L}[\\xi]\\,\\cdot=2\\xi \\cdot \\xi^\\dagger-\\xi^\\dagger \\xi\\cdot-\\cdot \\xi^\\dagger\\xi$. We show (Sec.~\\ref{sec:epsilon=0}) that in the presence of dissipation, for $\\eta<\\eta_\\kappa$,\n\\begin{equation}\n\\eta_\\kappa\\equiv\\frac{\\kappa}{|\\Delta|}\\left[1+\\sqrt{1+\\frac{\\kappa^2}{\\Delta^2}}\\mkern3mu\\right]^{-1},\n\\label{eqn:eta_critical}\n\\end{equation}\nthere is no critical coupling strength, while for $\\eta\\geq\\eta_\\kappa$, there are two that for $\\kappa\\to0$ reduce to Eq.~(\\ref{eqn:critical_points_kappa=0}):\n\\begin{equation}\n\\lambda_\\eta^\\pm\\equiv\\frac{\\sqrt{|\\Delta\\Delta_0|}}{1-\\eta^2}\\left[1+\\eta^2\\mp2\\eta\\sqrt{1-\\frac{(1-\\eta^2)^2}{4\\eta^2}\\frac{\\kappa^2}{\\Delta^2}}\\mkern3mu\\right]^{1\/2}.\n\\label{eqn:lambda_critical}\n\\end{equation}\nPhoton numbers generalizing Eq.~(\\ref{eqn:photon_number_kappa=0}) are recovered from the mean-field steady state in Sec.~\\ref{sec:epsilon=0} [Eq.~(\\ref{eqn:photon_number_epsilon=0})].\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=2.75in]{figure1.pdf}\n\\end{center}\n\\caption{Schematic of the open system realization of the model Hamiltonian, Eq.~(\\ref{eqn:hamiltonian_raman_model}). A pair of ground states, denoted $|1\\rangle$ and $|2\\rangle$, are coupled to an optical cavity mode, frequency $\\omega$, via far-from-resonance Raman transitions, where bold dashed arrows represent external laser drives while the transfer of photons to and from the cavity mode, coupling strength $g$, is represented by bold solid arrows; $\\Omega_{1,2}$ and $\\omega_{1,2}$ are drive amplitudes and frequencies, and $\\delta_{1,2}$ are detunings; excited states are assumed far from resonance and not shown.}\n\\label{fig:fig1}\n\\end{figure}\n\n\\subsection{Extended model with coherent drive}\nEquations~(\\ref{eqn:hamiltonian_raman_model}) and (\\ref{eqn:master_equation}) set out a driven and dissipative model where the driving of the field mode is mediated by externally driven Raman transitions; the dissipative realization of the effective rotating and counter-rotating interactions amounts to a \\emph{nonlinear} driving of the field mode. In studies of the so-called breakdown of photon blockade \\cite{alsing&carmichael_1991,carmichael_2015,armen_etal_2009,fink_etal_2017}, the mode is subject to a coherent drive, i.e., \\emph{linear} driving by an external field. We now extend our model by adding a coherent drive of amplitude $\\sqrt N\\epsilon$ and frequency $\\omega_d$---a detuning $\\omega_d-\\omega_+$ in the interaction picture of Eq.~(\\ref{eqn:hamiltonian_raman_model}). 
Choosing $\\omega_1$ and $\\omega_2$ so that $\\omega_+=\\omega_d$, the master equation then becomes\n\\begin{equation}\n\\frac{d\\rho}{dt}=-i[H_\\eta^\\prime,\\rho]-i\\sqrt N\\epsilon[a^\\dagger+a,\\rho]+\\kappa{\\mathcal L}[a]\\rho,\n\\label{eqn:master_equation_drive}\n\\end{equation}\nwhere, from Eq.~(\\ref{eqn:delta}), $\\Delta=\\omega-\\omega_d$ is now the detuning of the field mode from the drive.\n\nThe next section explores the parameter dependence of the mean-field steady states of Eq.~(\\ref{eqn:master_equation_drive}). In particular, we connect the breakdown of photon blockade, realized for $\\eta=0$, to the coherently driven extension of the Dicke quantum phase transition. We show that an $\\eta$-dependent critical point organizes behavior as a function of drive strength; we then establish a link through the previously unreported phase of the generalized model presented in Ref.~\\cite{hepp&lieb_1973b}, i.e., the second critical coupling strength $\\lambda_\\eta^-$.\n\n\\section{Mean-Field Steady States}\n\\label{sec:mean-field}\nThe mean-field Maxwell-Bloch equations derived from the master equation, Eq.~(\\ref{eqn:master_equation_drive}), are:\n\\begin{eqnarray}\n\\frac{d\\alpha}{dt}&=&-(\\kappa+i\\Delta)\\alpha-i\\frac{\\lambda}{\\sqrt N}\\frac12(\\beta+\\eta\\beta^*)-i\\sqrt N\\epsilon,\n\\label{eqn:mean-field_alpha}\\\\\n\\frac{d\\beta}{dt}&=&-i\\Delta_0\\beta+2i\\frac{\\lambda}{\\sqrt N}(\\alpha+\\eta\\alpha^*)\\zeta,\n\\label{eqn:mean-field_beta}\\\\\n\\frac{d\\zeta}{dt}&=&-i\\frac{\\lambda}{\\sqrt N}\\left[(\\alpha\\beta^*-\\alpha^*\\beta)-\\eta(\\alpha\\beta-\\alpha^*\\beta^*)\\right],\n\\label{eqn:mean-field_zeta}\n\\end{eqnarray}\nwith $\\alpha\\equiv\\langle a\\rangle$, $\\beta\\equiv2\\langle J_-\\rangle$, and $\\zeta\\equiv2\\langle J_z\\rangle$. 
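Before turning to the steady-state analysis, we note that Eqs.~(\\ref{eqn:mean-field_alpha})-(\\ref{eqn:mean-field_zeta}) are also convenient for direct numerical integration. The following minimal sketch (Python with NumPy and a fixed-step fourth-order Runge--Kutta update; the parameter values are arbitrary illustrative choices, not those of the figures) propagates $(\\alpha,\\beta,\\zeta)$ from the initial condition $\\alpha=\\beta=0$, $\\zeta=-N$ and monitors the conserved spin length $\\zeta^2+|\\beta|^2=N^2$ as an accuracy check.
\\begin{verbatim}
import numpy as np

# illustrative parameters in units of lambda = 1 (arbitrary choices)
lam, eta, kappa, Delta, Delta0, eps, N = 1.0, 0.2, 0.2, 2.0, 2.0, 0.2, 100

def rhs(y):
    alpha, beta, zeta = y
    dalpha = (-(kappa + 1j * Delta) * alpha
              - 1j * (lam / np.sqrt(N)) * 0.5 * (beta + eta * np.conj(beta))
              - 1j * np.sqrt(N) * eps)
    dbeta = (-1j * Delta0 * beta
             + 2j * (lam / np.sqrt(N)) * (alpha + eta * np.conj(alpha)) * zeta)
    dzeta = -1j * (lam / np.sqrt(N)) * (
        (alpha * np.conj(beta) - np.conj(alpha) * beta)
        - eta * (alpha * beta - np.conj(alpha) * np.conj(beta)))
    return np.array([dalpha, dbeta, dzeta])

y = np.array([0.0, 0.0, -N], dtype=complex)   # alpha = beta = 0, zeta = -N
dt = 0.01
for _ in range(30000):                        # integrate to t = 300 / lambda
    k1 = rhs(y); k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2); k4 = rhs(y + dt * k3)
    y += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

alpha, beta, zeta = y                         # zeta stays real up to integration error
print("alpha/sqrt(N)         =", alpha / np.sqrt(N))
print("zeta/N                =", zeta.real / N)
print("(zeta^2+|beta|^2)/N^2 =", (zeta.real**2 + abs(beta)**2) / N**2)
\\end{verbatim}
Such brute-force integration provides an independent check on the steady-state solutions constructed below and on their stability.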
We first outline a general approach to their steady state solution, where, introducing intensive variables\n\\begin{equation}\n\\bar\\alpha\\equiv\\alpha\/\\sqrt N,\\qquad\\bar\\beta\\equiv\\beta\/N,\\qquad\\bar\\zeta\\equiv\\zeta\/N,\n\\label{eqn:scaling}\n\\end{equation}\nEqs.~(\\ref{eqn:mean-field_alpha}) and (\\ref{eqn:mean-field_beta}) require\n\\begin{eqnarray}\n\\bar\\beta_x&=&2\\lambda\\frac{1+\\eta}{\\Delta_0}\\bar\\alpha_x\\bar\\zeta,\n\\label{eqn:steady-state_beta1}\\\\\n\\bar\\beta_y&=&2\\lambda\\frac{1-\\eta}{\\Delta_0}\\bar\\alpha_y\\bar\\zeta,\n\\label{eqn:steady-state_beta2}\n\\end{eqnarray}\nwith $\\bar\\alpha_x$ and $\\bar\\alpha_y$ satisfying the simultaneous equations:\n\\begin{eqnarray}\n\\kappa\\bar\\alpha_x-\\left[\\Delta+\\lambda^2\\frac{(1-\\eta)^2}{\\Delta_0}\\bar\\zeta\\right]\\bar\\alpha_y&=&0,\n\\label{eqn:steady-state_alpha1}\\\\\n\\kappa\\bar\\alpha_y+\\left[\\Delta+\\lambda^2\\frac{(1+\\eta)^2}{\\Delta_0}\\bar\\zeta\\right]\\bar\\alpha_x&=&-\\epsilon.\n\\label{eqn:steady-state_alpha2}\n\\end{eqnarray}\nWe may then solve Eqs.~(\\ref{eqn:steady-state_beta1})--(\\ref{eqn:steady-state_alpha2}) for $|\\bar\\beta|^2$ in terms of $\\bar\\zeta$ and impose the conservation law $\\bar\\zeta^2+|\\bar\\beta|^2=1$; hence we find an autonomous equation satisfied by $\\bar\\zeta$,\n\\begin{equation}\n(1-\\bar\\zeta^2)[P(\\bar\\zeta)]^2=\\frac{4\\epsilon^2}{\\lambda^2(1+\\eta)^2}\\bar\\zeta^2Q(\\bar\\zeta),\n\\label{eqn:6th-order_polynomial}\n\\end{equation}\nwith $P(\\bar\\zeta)$ and $Q(\\bar\\zeta)$ both quadratic:\n\\begin{equation}\nP(\\bar\\zeta)=\\bar\\zeta^2+2\\frac{\\Delta\\Delta_0(1+\\eta^2)}{\\lambda^2(1-\\eta^2)^2}\\bar\\zeta+\\frac{\\Delta_0^2(\\kappa^2+\\Delta^2)}{\\lambda^4(1-\\eta^2)^2},\n\\label{eqn:p_quadratic}\n\\end{equation}\nand\n\\begin{equation}\nQ(\\bar\\zeta)=\\bar\\zeta^2+2\\frac{\\Delta\\Delta_0}{\\lambda^2(1-\\eta)^2}\\bar\\zeta+\\frac{\\Delta_0^2\\kappa^2}{\\lambda^4(1-\\eta^2)^2}+\\frac{\\Delta^2\\Delta_0^2}{\\lambda^4(1-\\eta)^4}.\n\\label{eqn:q_quadratic}\n\\end{equation}\nSteady-state solutions for $\\bar\\zeta$ are seen to be roots of a 6th-order polynomial, with a possible six distinct solutions for any setting of the parameters: $\\eta$, $\\Delta$, $\\Delta_0$, $\\lambda$, $\\epsilon$, and $\\kappa$. In the following, for the most part, we set $\\Delta_0=\\Delta$ and keep $\\kappa\/\\lambda$ fixed; we then explore the parameter dependence in the $(\\Delta\/\\lambda,\\epsilon\/\\lambda)$-plane for different choices of $\\eta$. 
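In practice, the roots of Eq.~(\\ref{eqn:6th-order_polynomial}) are conveniently found numerically. A minimal sketch (Python with NumPy; the parameter values in the sweep are arbitrary illustrative choices, and the construction assumes $\\eta<1$) assembles the polynomial from $P(\\bar\\zeta)$ and $Q(\\bar\\zeta)$ and returns the real roots with $|\\bar\\zeta|\\leq1$:
\\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as npoly

def zeta_roots(lam, eta, kappa, Delta, Delta0, eps):
    # Real roots with |zeta| <= 1 of the 6th-order polynomial for zeta-bar;
    # coefficient lists are given in increasing powers of zeta.  Assumes eta < 1.
    P = [Delta0**2 * (kappa**2 + Delta**2) / (lam**4 * (1 - eta**2)**2),
         2 * Delta * Delta0 * (1 + eta**2) / (lam**2 * (1 - eta**2)**2),
         1.0]
    Q = [Delta0**2 * kappa**2 / (lam**4 * (1 - eta**2)**2)
         + Delta**2 * Delta0**2 / (lam**4 * (1 - eta)**4),
         2 * Delta * Delta0 / (lam**2 * (1 - eta)**2),
         1.0]
    lhs = npoly.polymul([1.0, 0.0, -1.0], npoly.polymul(P, P))   # (1 - z^2) P(z)^2
    rhs = (4 * eps**2 / (lam**2 * (1 + eta)**2)
           * npoly.polymul([0.0, 0.0, 1.0], Q))                  # prefactor * z^2 Q(z)
    roots = npoly.polyroots(npoly.polysub(lhs, rhs))
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.sort(real[np.abs(real) <= 1 + 1e-9])

# sweep the detuning at fixed eta and kappa with a weak drive (illustrative values)
for Delta in (1.2, 0.84, 0.6, 0.18):
    z = zeta_roots(lam=1.0, eta=0.2, kappa=0.1, Delta=Delta, Delta0=Delta, eps=0.05)
    print(Delta, len(z), np.round(z, 3))
\\end{verbatim}
Counting the acceptable roots point by point in the $(\\Delta\/\\lambda,\\epsilon\/\\lambda)$-plane is one way of tracing out the regions discussed in the remainder of this section.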
To start, we recover the results summarized in Secs.~\\ref{sec:Dicke_rotating_wave} and \\ref{sec:Dicke_counter_rotating} from our general solution scheme.\n\\subsection{Zero drive: $\\epsilon=0$}\n\\label{sec:epsilon=0}\nIn the absence of a coherent drive, the right-hand side of Eq.~(\\ref{eqn:6th-order_polynomial}) is zero, and the 6th-order polynomial satisfied by $\\bar\\zeta$ reduces to\n\\begin{equation}\n(1-\\bar\\zeta^2)[P(\\bar\\zeta)]^2=0.\n\\label{eqn:6th-order_polynomial_epsilon=0}\n\\end{equation}\nEquations~(\\ref{eqn:steady-state_alpha1}) and (\\ref{eqn:steady-state_alpha2}) are replaced by the homogeneous system\n\\begin{equation}\n\\left(\n\\begin{matrix}\n\\Delta_0\\kappa&-\\Delta\\Delta_0-\\lambda^2(1-\\eta)^2\\bar\\zeta\\\\\n\\noalign{\\vskip4pt}\n\\Delta\\Delta_0+\\lambda^2(1+\\eta)^2\\bar\\zeta&\\Delta_0\\kappa\n\\end{matrix}\n\\mkern3mu\\right)\\mkern-4mu\\left(\n\\begin{matrix}\n\\bar\\alpha_x\\\\\n\\noalign{\\vskip4pt}\n\\bar\\alpha_y\n\\end{matrix}\n\\right)=0.\n\\label{eqn:steady-state_alpha_epsilon=0}\n\\end{equation}\nNoting then that the determinant of this homogeneous system is $\\lambda^4(1-\\eta^2)^2P(\\bar\\zeta)$, the condition for nontrivial solutions for $\\bar\\alpha$ is $P(\\bar\\zeta)=0$. Thus, the roots $\\bar\\zeta=\\pm1$ of Eq.~(\\ref{eqn:6th-order_polynomial_epsilon=0}) correspond to the trivial solution, $\\bar\\alpha=0$, while the roots of $P(\\bar\\zeta)=0$,\n\\begin{equation}\n\\bar\\zeta_\\pm=-\\frac{\\Delta\\Delta_0}{\\lambda^2(1-\\eta^2)^2}\\mkern-2mu\\left[1+\\eta^2\\mp2\\eta\\sqrt{1-\\frac{(1-\\eta^2)^2}{4\\eta^2}\n\\frac{\\kappa^2}{\\Delta^2}}\\mkern3mu\\right],\n\\label{eqn:nontrivial_zeta_epsilon=0}\n\\end{equation}\nyield nontrivial solutions for $\\bar\\alpha$. The latter are physically acceptable if $\\bar\\zeta_\\pm$ are real and $|\\bar\\zeta_\\pm|\\leq1$; the first condition is satisfied if $\\eta\\geq\\eta_\\kappa$, $\\eta_\\kappa$ defined in Eq.~(\\ref{eqn:eta_critical}), and the second gives the critical coupling strengths, $\\lambda_\\eta^\\pm$, defined in Eq.~(\\ref{eqn:lambda_critical}); for $\\eta\\geq\\eta_\\kappa$ and $\\lambda_\\eta^+\\leq\\lambda\\leq\\lambda_\\eta^-$, $\\bar\\zeta_+$ is the only acceptable root, while $\\bar\\zeta_+$ and $\\bar\\zeta_-$ are both acceptable if $\\lambda\\geq\\lambda_\\eta^-$.\n\nNote that $\\Delta$ and $\\Delta_0$ are detunings and therefore two cases arise, one with $\\Delta\\Delta_0$ positive and $\\bar\\zeta_\\pm<0$, and the other with $\\Delta\\Delta_0$ negative and $\\bar\\zeta_\\pm>0$. Considering steady states only, there is no physical difference between the cases as a quick inspection of Eqs.~(\\ref{eqn:mean-field_alpha})-(\\ref{eqn:mean-field_zeta}) shows---simply reverse the signs of $\\Delta_0$ and $\\bar\\zeta$ in Eq.~(\\ref{eqn:mean-field_beta}); steady state stability can change, though.
We always illustrate results with $\\Delta_0=\\Delta$, whence $\\Delta\\Delta_0$ is positive.\n\nBy eliminating $\\Delta_0\\kappa$ from the homogeneous system, Eq.~(\\ref{eqn:steady-state_alpha_epsilon=0}), we may solve for\n\\begin{eqnarray}\n(\\bar\\alpha_x^\\pm)^2&=&-|\\bar\\alpha_\\pm|^2\\frac{\\Delta\\Delta_0+\\lambda^2(1-\\eta)^2\\bar\\zeta_\\pm}{4\\lambda^2\\eta\\bar\\zeta_\\pm},\n\\label{eqn:nontrivial_alphax_epsilon=0}\\\\\n(\\bar\\alpha_y^\\pm)^2&=&+|\\bar\\alpha_\\pm|^2\\frac{\\Delta\\Delta_0+\\lambda^2(1+\\eta)^2\\bar\\zeta_\\pm}{4\\lambda^2\\eta\\bar\\zeta_\\pm},\n\\label{eqn:nontrivial_alphay_epsilon=0}\n\\end{eqnarray}\nand hence, using Eqs.~(\\ref{eqn:steady-state_beta1}) and (\\ref{eqn:steady-state_beta2}), and the conservation law $\\bar\\zeta^2+|\\bar\\beta|^2=1$, find\n\\begin{equation}\n|\\bar\\alpha_\\pm|^2=-\\frac{\\Delta_0}{4\\Delta}\\frac{1-\\bar\\zeta_\\pm^2}{\\bar\\zeta_\\pm}.\n\\label{eqn:photon_number_epsilon=0}\n\\end{equation}\nThis result gives back Eq.~(\\ref{eqn:photon_number_kappa=0}), with $\\omega\\to\\Delta$ and $\\omega_0\\to\\Delta_0$, when $\\kappa=0$.\n\nFigure \\ref{fig:fig2} displays four cross-sections of the parameter space for $\\epsilon=0$ and $\\Delta_0=\\Delta$, each subdivided according to the number of distinct steady-state solutions. Frames (a) and (c) apply to the non-dissipative model ($\\kappa=0$), while frames (b) and (d) include cavity mode loss. Two complementary perspectives are provided: first, in frames (a) and (b), where the cut is the ($\\lambda\/\\Delta$,$\\eta$)-plane, and then, in frames (c) and (d), where the ($\\Delta\/\\lambda$,$\\eta$)-plane is shown. The first view envisages the coupling strength $\\lambda$, at fixed detuning $\\Delta$, as the control parameter, the historical view suggested by Refs.~\\cite{hepp&lieb_1973a,wang&hioe_1973,hepp&lieb_1973b,carmichael_etal_1973}; the second envisages $\\Delta$ as the control parameter, with $\\lambda$ fixed, which is more natural for experiments in optics and the perspective carried through the remainder of the paper. To connect with Secs.~\\ref{sec:Dicke_rotating_wave} and \\ref{sec:Dicke_counter_rotating}, we note the following points:\n\\begin{enumerate}[(i)]\n\\item\nThe Dicke quantum phase transition in the rotating-wave approximation, originally proposed by Hepp and Lieb \\cite{hepp&lieb_1973a}, maps to the line $\\eta=0$ in frames (a) and (c). The critical point $\\lambda\/\\Delta=\\Delta\/\\lambda=1$ marks a transition from the trivial solution to one with photon number $|\\bar\\alpha_\\pm|^2=(\\lambda^4-\\Delta^4)\/4\\lambda^2\\Delta^2$ [Eqs.~(\\ref{eqn:photon_number_eta=0}) and (\\ref{eqn:photon_number_epsilon=0})], where $\\bar\\zeta_\\pm=-\\Delta^2\/\\lambda^2$ is a double root of $P(\\bar\\zeta)=0$; $\\bar\\beta\/\\bar\\alpha=-2\\Delta\/\\lambda$, but there is no preferred phase for $\\bar\\beta$, since Eqs.~(\\ref{eqn:nontrivial_alphax_epsilon=0}) and (\\ref{eqn:nontrivial_alphay_epsilon=0}) reduce to the tautology $0=0$.\n\\item\nThe $\\eta=0$ transition does not occur in the presence of dissipation, as in frames (b) and (d) the $\\eta=0$ axis bounds only the $R_2$ region.\n\\item\nThe critical point on the line $\\eta=0$ [frames (a) and (c)] splits into a pair of critical points when $\\eta>0$, subdividing the plane into regions of two, three, and four distinct solutions (two, four, and six solutions when double roots of $[P(\\bar\\zeta)]^2=0$ are considered).
The transition at $\\lambda_{\\eta=1}^+=\\Delta\/2$ from region $R_2$ to $R_3$ recovers the renormalized critical point \\cite{carmichael_etal_1973} when the rotating-wave approximation is lifted---the $R_2\/R_3$ boundary carries that renormalization through as a function of $\\eta$. To our knowledge, the critical point defining the $R_3\/R_4$ boundary has not been reported before, although Hepp and Lieb do discuss a model that embraces our inclusion of the parameter $\\eta$ \\cite{hepp&lieb_1973b}. The transition between regions $R_3$ and $R_4$ is central to the unification we present with a coherent drive included (Sec.~\\ref{sec:coherent_drive_intermediate_eta}).\n\\item\nContrasting the situation in (i), nontrivial solutions in regions $R_3$ and $R_4$ assign $\\bar\\beta$ and $\\bar\\alpha$ a definite phase, through Eqs.~(\\ref{eqn:steady-state_beta1}), (\\ref{eqn:steady-state_beta2}), (\\ref{eqn:nontrivial_alphax_epsilon=0}), and (\\ref{eqn:nontrivial_alphay_epsilon=0}).\n\\item\nWhile the map from frame (b) to frame (d) appears straightforward, the map from frame (c) to frame (d) is not: a diagram with two boundaries at fixed $\\eta$ now acquires three, as the $R_2\/R_4$ boundary bends up to meet $\\eta=1$. This follows from the term $\\kappa^2\/\\Delta^2$ under the square root in Eq.~(\\ref{eqn:nontrivial_zeta_epsilon=0}): when $\\kappa\\neq0$, $\\bar\\zeta_\\pm$ are complex for $\\eta>\\eta_\\kappa$, a $\\Delta$-dependent condition at fixed $\\kappa$ [Eq.~(\\ref{eqn:eta_critical})].\n\\end{enumerate}\n\nFigure \\ref{fig:fig3} further illustrates the parameter dependence of the mean-field steady states in the absence of a drive. The symmetrical presentation of the phase diagram in frame (a) is modelled after Ref.~\\cite{carmichael_2015} (Figs.~1 and 2) and carried through in Figs.~\\ref{fig:fig4}, \\ref{fig:fig5}, and \\ref{fig:fig7}. Frames (b)-(e) show steady states and their stability as a function of detuning for $\\eta=0.2$ and $\\eta=0.6$; they illustrate how the regions in frame (a) interconnect as solutions track smoothly with the changing detuning and bifurcate at boundaries:\n\\begin{description}\n\\item[Region $R_2$]\nSolutions $\\bar\\zeta=\\pm1$ only; the solution $\\bar\\zeta=-1$ ($+1$) is stable (unstable). Two solutions in total.\n\\item[Region $R_3$]\nSolutions $\\bar\\zeta=\\pm1$ and the root $\\bar\\zeta_+$ of $P(\\bar\\zeta)=0$; the solutions $\\bar\\zeta=\\pm1$ are both unstable and $\\bar\\zeta^+$ is stable. Three solutions in total.\n\\item[Region $R_4$]\nSolutions $\\bar\\zeta=\\pm1$ and the roots $\\bar\\zeta_+$ and $\\bar\\zeta_-$ of $P(\\bar\\zeta)=0$; the solutions $\\bar\\zeta=-1$ (+1) and $\\bar\\zeta_+$ ($\\bar\\zeta_-$) are stable (unstable). Four solutions in total.\n\\end{description}\n\\noindent\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=1.7in]{figure2a.pdf}\\includegraphics[width=1.7in]{figure2b.pdf}\n\\vskip0.1in\n\\includegraphics[width=1.7in]{figure2c.pdf}\\includegraphics[width=1.7in]{figure2d.pdf}\n\\end{center}\n\\caption{Mean-field phase diagram for zero drive and $\\Delta_0=\\Delta$: (a) $\\kappa\/\\Delta=0$, (b) $\\kappa\/\\Delta=0.7$, (c) $\\kappa\/\\lambda=0$, and (d) $\\kappa\/\\lambda=0.1$. 
The cut through parameter space is the $(\\eta,\\lambda\/\\Delta)$-plane in (a) and (b), and the $(\\eta,\\Delta\/\\lambda)$-plane in (c) and (d).}\n\\label{fig:fig2}\n\\end{figure}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=3.4in]{figure3a.pdf}\n\\vskip0.1in\n\\includegraphics[width=1.65in]{figure3b.pdf}\\hskip0.075in\\includegraphics[width=1.65in]{figure3c.pdf}\n\\vskip0.1in\n\\includegraphics[width=1.65in]{figure3d.pdf}\\hskip0.075in\\includegraphics[width=1.65in]{figure3e.pdf}\n\\end{center}\n\\caption{Mean-field steady states for zero drive and $\\Delta_0=\\Delta$: $\\kappa\/\\lambda=0.1$ and $\\eta=0.2$ [(b),(c)] and $\\eta=0.6$ [(d),(e)]. The two sweeps through the phase diagram are indicated by dashed lines in (a); solid red (dashed blue) lines indicate locally stable (unstable) steady states in (b)-(e).}\n\\label{fig:fig3}\n\\end{figure}\n\n\\subsection{Critical drive strength: $\\Delta_0=0$}\n\\label{sec:critical_drive}\nWe turn now to the dependence on the coherent drive strength, where we begin by identifying the critical point that organizes behavior as function of $\\epsilon$. To this end, we must first give special consideration to $\\Delta_0=0$, a limit not readily recovered from our general solution scheme, due to the $\\Delta_0$ in the denominator of Eqs.~(\\ref{eqn:steady-state_beta1}) and (\\ref{eqn:steady-state_beta2}); we essentially review an analysis presented by Alsing and Carmichael \\cite{alsing&carmichael_1991}, but extended here to arbitrary $\\eta$.\n\nFrom Eqs.~(\\ref{eqn:p_quadratic}) and (\\ref{eqn:q_quadratic}), when $\\Delta_0=0$, $P(\\bar\\zeta)=Q(\\bar\\zeta)=\\bar\\zeta^2$, and the 6th-order polynomial satisfied by $\\bar\\zeta$ becomes\n\\begin{equation}\n(1-\\bar\\zeta^2)\\bar\\zeta^4=\\left(\\epsilon\/\\epsilon_{\\rm crit}\\right)^2\\bar\\zeta^4,\n\\label{eqn:6th-order_polynomial_Delta_0=0}\n\\end{equation}\nwith\n\\begin{equation}\n\\epsilon_{\\rm crit}\\equiv\\frac12\\lambda(1+\\eta),\n\\label{eqn:critical_drive}\n\\end{equation}\nwhere the significance of $\\epsilon_{\\rm crit}$ as a critical drive strength is elaborated below.\nEquations (\\ref{eqn:steady-state_beta1}) and (\\ref{eqn:steady-state_beta2}) carry over in the form\n\\begin{equation}\n\\bar\\alpha_x\\bar\\zeta=\\bar\\alpha_y\\bar\\zeta=0,\n\\label{eqn:alpha_zeta_Delta_0=0}\n\\end{equation}\nand Eqs.~(\\ref{eqn:steady-state_alpha1}) and (\\ref{eqn:steady-state_alpha2}) as\n\\begin{eqnarray}\n\\kappa\\bar\\alpha_x-\\Delta\\bar\\alpha_y-\\lambda\\frac12(1-\\eta)\\bar\\beta_y&=&0,\n\\label{eqn:steady-state_alpha1_Delta_0=0}\\\\\n\\kappa\\bar\\alpha_y+\\Delta\\bar\\alpha_x+\\lambda\\frac12(1+\\eta)\\bar\\beta_x&=&-\\epsilon.\n\\label{eqn:steady-state_alpha2_Delta_0=0}\n\\end{eqnarray}\nWorking then from Eq.~(\\ref{eqn:alpha_zeta_Delta_0=0}), we can identify two distinct classes of solutions, one holding below $\\epsilon_{\\rm crit}$ and the other above.\n\n\\subsubsection{Solutions with $\\bar\\alpha_x=\\bar\\alpha_y=0$ ($\\epsilon\\leq\\epsilon_{\\rm crit}$)}\nEquation (\\ref{eqn:alpha_zeta_Delta_0=0}) may be satisfied with $\\bar\\alpha_x=\\bar\\alpha_y=0$, which, from Eqs.~(\\ref{eqn:steady-state_alpha1_Delta_0=0}) and (\\ref{eqn:steady-state_alpha2_Delta_0=0}), requires\n\\begin{equation}\n\\bar\\beta_x=-\\epsilon\/\\epsilon_{\\rm crit},\\qquad\\bar\\beta_y=0,\n\\end{equation}\nand hence, from the conservation law $\\bar\\zeta^2+|\\bar\\beta|^2=1$,\n\\begin{equation}\n\\bar\\zeta=\\pm\\sqrt{1-\\left(\\epsilon\/\\epsilon_{\\rm 
crit}\\right)^2}.\n\\label{eqn:zeta_below_Delta_0=0}\n\\end{equation}\nThe same result follows directly from Eq.~(\\ref{eqn:6th-order_polynomial_Delta_0=0}) under the assumption $\\bar\\zeta\\neq0$. This solution is physically acceptable for $\\epsilon\\leq\\epsilon_{\\rm crit}$, though larger drives require Eq.~(\\ref{eqn:alpha_zeta_Delta_0=0}) to be satisfied in another way.\n\n\\subsubsection{Solutions with $\\bar\\zeta=0$ ($\\epsilon\\geq\\epsilon_{\\rm crit}$)}\nEquation (\\ref{eqn:alpha_zeta_Delta_0=0}) may also be satisfied with $\\bar\\zeta=0$, which leaves only the phase of $\\bar\\beta$ to be determined:\n\\begin{equation}\n\\bar\\beta=e^{i\\phi}.\n\\end{equation}\nFrom Eq.~(\\ref{eqn:mean-field_zeta}), the phase of $\\bar\\alpha$ must satisfy\n\\begin{equation}\n{\\rm Im}\\big[\\bar\\alpha(e^{-i\\phi}-\\eta e^{i\\phi})\\big]=0,\n\\end{equation}\nand also, from Eq.~(\\ref{eqn:mean-field_alpha}),\n\\begin{equation}\n\\bar\\alpha=-i\\frac{\\epsilon+\\epsilon_{\\rm crit}(e^{i\\phi}+\\eta e^{-i\\phi})\/(1+\\eta)}{\\kappa+i\\Delta}.\n\\end{equation}\nThe phase $\\phi$ is therefore a solution of the transcendental equation\n\\begin{equation}\n\\epsilon\\cos\\phi+\\epsilon_{\\rm crit}=\\frac{\\Delta\\sin\\phi}{\\kappa(1-\\eta^2)}[\\epsilon(1+\\eta)^2+\\epsilon_{\\rm crit}4\\eta\\cos\\phi].\n\\end{equation}\nIf we then take $\\Delta=0$ as well as $\\Delta_0=0$ (and $\\eta\\neq1$), we arrive at the much simpler equation\n\\begin{equation}\n\\phi=\\cos^{-1}(-\\epsilon_{\\rm crit}\/\\epsilon),\n\\end{equation}\nwith solution $\\phi=\\pi$ for $\\epsilon=\\epsilon_{\\rm crit}$ and two solutions for the phase of $\\bar\\beta$ above $\\epsilon_{\\rm crit}$. This prediction of a bistability in phase above $\\epsilon_{\\rm crit}$ recovers the so-called Spontaneous Dressed-State Polarization of Alsing and Carmichael \\cite{alsing&carmichael_1991} (see also \\cite{kilin&krinitskaya_1991}) but generalized to $\\eta\\neq0$.\n\n\\subsection{Rotating-wave approximation with coherent drive: $\\eta=0$}\n\\label{sec:rotating_wave_coherent_drive_eta=0}\nWe now begin to lay out the connection between the breakdown of photon blockade and the coherently driven extension of the Dicke quantum phase transition. In this section, we introduce the breakdown of photon blockade as the coherently driven extension of Sec.~\\ref{sec:epsilon=0} in the limit $\\eta=0$. In so doing, we introduce a completely new region of nontrivial steady states, one disconnected and distinct from regions $R_3$ and $R_4$ of Figs.~\\ref{fig:fig2} and \\ref{fig:fig3}. 
What follows recovers results from Ref.~\\cite{carmichael_2015}.\n\nReturning to the 6th-order polynomial satisfied by $\\bar\\zeta$, Eq.~(\\ref{eqn:6th-order_polynomial}), with $\\eta$ zero, $Q(\\bar\\zeta)=P(\\bar\\zeta)$, and the polynomial takes the simpler form\n\\begin{equation}\n(1-\\bar\\zeta^2)[P(\\bar\\zeta)]^2=\\bar\\epsilon^2\\bar\\zeta^2P(\\bar\\zeta),\n\\label{eqn:6th-order_polynomial_eta=0}\n\\end{equation}\nwith\n\\begin{equation}\nP(\\bar\\zeta)=(\\bar\\Delta_0\\bar\\kappa)^2+(\\bar\\Delta_0\\bar\\Delta+\\bar\\zeta)^2,\n\\label{eqn:p_quadratic_eta=0}\n\\end{equation}\nwhere we have introduced parameters scaled by $\\epsilon_{\\rm crit}$:\n\\begin{equation}\n\\bar\\epsilon\\equiv\\epsilon\/\\epsilon_{\\rm crit},\\qquad (\\bar\\kappa,\\bar\\Delta,\\bar\\Delta_0)\\equiv(\\kappa,\\Delta,\\Delta_0)\/2\\epsilon_{\\rm crit}.\n\\label{eqn:scaled_parameters}\n\\end{equation}\nThe roots of $P(\\bar\\zeta)=0$ are nonphysical (complex) when $\\eta=0$ [Eq.~(\\ref{eqn:nontrivial_zeta_epsilon=0})] and therefore $P(\\bar\\zeta)$ may be cancelled on both sides of Eq.~(\\ref{eqn:6th-order_polynomial_eta=0}), which means there are at most four distinct solutions.\n\nTurning then to the field, the homogeneous system, Eq.~(\\ref{eqn:steady-state_alpha_epsilon=0}), is replaced by\n\\begin{equation}\n\\left(\n\\begin{matrix}\n\\bar\\kappa&-\\bar\\Delta-\\bar\\Delta_0^{-1}\\bar\\zeta\\\\\n\\noalign{\\vskip2pt}\n\\bar\\Delta+\\bar\\Delta_0^{-1}\\bar\\zeta&\\bar\\kappa\n\\end{matrix}\n\\mkern3mu\\right)\\mkern-4mu\\left(\n\\begin{matrix}\n\\bar\\alpha_x\\\\\n\\noalign{\\vskip2pt}\n\\bar\\alpha_y\n\\end{matrix}\n\\right)=\\left(\n\\begin{matrix}\n0\\\\\n\\noalign{\\vskip2pt}\n-\\bar\\epsilon\/2\n\\end{matrix}\n\\right)\n\\label{eqn:inhomogeneous_system_eta=0}\n\\end{equation}\nwith solution for the field amplitude ($\\bar\\Delta_0\\neq0$)\n\\begin{equation}\n\\bar\\alpha=-i\\frac{\\bar\\epsilon\/2}{\\bar\\kappa+i\\left(\\bar\\Delta+\\bar\\Delta_0^{-1}\\bar\\zeta\\right)}.\n\\label{eqn:steady-state_alpha3_eta=0}\n\\end{equation}\nThus, the field mode responds to coherent driving as a resonator in the presence of a nonlinear dispersion, where the dispersion is defined by solutions to Eq.~(\\ref{eqn:6th-order_polynomial_eta=0}). If we then note that $P(\\bar\\zeta)=\\bar\\Delta_0^2\\bar\\epsilon^2\/4|\\bar\\alpha|^2$ [Eqs.~(\\ref{eqn:p_quadratic_eta=0}) and (\\ref{eqn:steady-state_alpha3_eta=0})], whence, from Eq.~(\\ref{eqn:6th-order_polynomial_eta=0}),\n\\begin{equation}\n\\bar\\zeta=\\pm\\frac{|\\bar\\Delta_0|}{\\left(\\bar\\Delta_0^2+4|\\bar\\alpha|^2\\right)^{1\/2}},\n\\end{equation}\nwe recover the autonomous equation of state for the field mode \\cite{carmichael_2015}:\n\\begin{equation}\n\\bar\\alpha=-i\\frac{\\bar\\epsilon\/2}{\\bar\\kappa+i\\left[\\bar\\Delta\\pm\\hbox{sgn}\\left(\\bar\\Delta_0)(\\bar\\Delta_0^2+4|\\bar\\alpha|^2\\right)^{-1\/2}\\right]}.\n\\label{eqn:state_equation_eta=0}\n\\end{equation}\n\nFigure \\ref{fig:fig4} illustrates the results for mean-field steady states obtained from Eqs.~(\\ref{eqn:6th-order_polynomial_eta=0}) and (\\ref{eqn:state_equation_eta=0}) when $\\Delta_0=\\Delta$. 
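A compact way to generate such solution branches numerically (a purely illustrative sketch with arbitrarily chosen parameter values, not necessarily the procedure behind Fig.~\\ref{fig:fig4}) is to solve the reduced quartic $(1-\\bar\\zeta^2)P(\\bar\\zeta)=\\bar\\epsilon^2\\bar\\zeta^2$ for $\\bar\\zeta$ and then read off $\\bar\\alpha$ from Eq.~(\\ref{eqn:steady-state_alpha3_eta=0}):
\\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as npoly

def branches_eta0(kb, Db, D0b, epsb):
    # Barred quantities are scaled by 2*eps_crit = lambda (eta = 0).
    # P(z) = (D0b*kb)^2 + (D0b*Db + z)^2, in increasing powers of z.
    P = [(D0b * kb)**2 + (D0b * Db)**2, 2 * D0b * Db, 1.0]
    quartic = npoly.polysub(npoly.polymul([1.0, 0.0, -1.0], P),
                            [0.0, 0.0, epsb**2])     # (1 - z^2) P(z) - epsb^2 z^2
    roots = npoly.polyroots(quartic)
    zetas = roots[np.abs(roots.imag) < 1e-9].real
    zetas = np.sort(zetas[np.abs(zetas) <= 1 + 1e-9])
    alphas = -1j * (epsb / 2) / (kb + 1j * (Db + zetas / D0b))
    return zetas, np.abs(alphas)**2

# photon-number branches along a detuning sweep (illustrative values)
for Db in (1.2, 1.0, 0.8, 0.2):
    z, n = branches_eta0(kb=0.02, Db=Db, D0b=Db, epsb=0.6)
    print(Db, np.round(z, 3), np.round(n, 3))
\\end{verbatim}
The number of real roots in $[-1,1]$, two or four, is what distinguishes the regions shown in frame (a) of Fig.~\\ref{fig:fig4}.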
The phenomenology follows that mapped out in Fig.~4 of Ref.~\\cite{carmichael_2015}, where regions of two and four distinct solutions [frame (a)] interconnect through the frequency pulling of vacuuum Rabi resonances located at $\\Delta\/2\\epsilon_{\\rm crit}=\\pm1$ for $\\epsilon\/\\epsilon_{\\rm crit}\\to0$:\n\\begin{description}\n\\item[Region $R_2^a$]\nTwo solutions that approach $\\bar\\zeta=\\pm1$ in the limit of zero drive; the solution approaching $\\bar\\zeta=-1$ ($+1$) is stable (unstable). Two solutions in total.\n\\item[Region $R_4$]\nTwo solutions that approach $\\bar\\zeta=\\pm1$ in the limit of zero drive and two additional solutions that arise from the bistable folding of the solution that approaches $\\bar\\zeta=-1$; the solution approaching $\\bar\\zeta=-1$ ($+1$) is stable (unstable), and the two additional solutions are stable and unstable. Four solutions in total.\n\\item[Region $R_2^b$]\nTwo solutions that approach $\\bar\\zeta=\\pm1$ in the limit of large detuning; the solution approaching $\\bar\\zeta=-1$ ($+1$) is stable (unstable). Two solutions in total.\n\\end{description}\nWe emphasize that regions $R_2^a$ and $R_2^b$ comprise a single connected region of two distinct solutions in frame (a) of Fig.~\\ref{fig:fig4}; region $R_4$ does not touch the $\\Delta\/2\\epsilon_{\\rm crit}$ axis, although it comes close when $\\kappa\/\\lambda$ is small. We note also that regions $R_4$ of Fig.~\\ref{fig:fig3} and $R_4$ of Fig.~\\ref{fig:fig4} are distinct and do not share a common boundary; their interface occurs away from $\\eta=0$ and is discussed in Sec.~\\ref{sec:coherent_drive_intermediate_eta}.\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=3.4in]{figure4a.pdf}\n\\vskip0.1in\n\\includegraphics[width=1.65in]{figure4b.pdf}\\hskip0.075in\\includegraphics[width=1.65in]{figure4c.pdf}\n\\vskip0.1in\n\\includegraphics[width=1.65in]{figure4d.pdf}\\hskip0.075in\\includegraphics[width=1.65in]{figure4e.pdf}\n\\vskip0.1in\n\\includegraphics[width=1.65in]{figure4f.pdf}\\hskip0.075in\\includegraphics[width=1.65in]{figure4g.pdf}\n\\end{center}\n\\caption{Mean-field steady states for $\\eta=0$ and $\\Delta_0=\\Delta$: $\\kappa\/\\lambda=0.02$ and $\\epsilon\/\\epsilon_{\\rm crit}=0.6$ [(b),(c)], $\\epsilon\/\\epsilon_{\\rm crit}=1.0$ [(d),(e)], and $\\epsilon\/\\epsilon_{\\rm crit}=1.2$ [(f),(g)]. The three sweeps through the phase diagram are indicated by dashed lines in (a); solid red (dashed blue) lines indicate stable (unstable) steady states in (b)-(g); dashed black lines demark the range of bistability in (c).}\n\\label{fig:fig4}\n\\end{figure}\n\n\\subsection{The quantum Rabi Hamiltonian with coherent drive: $\\eta=1$}\n\\label{sec:Dicke_coherent_drive_eta=1}\nTaking now the opposite limit, $\\eta=1$, we meet with a region of nontrivial steady states that is contiguous with $R_3$ of Figs.~\\ref{fig:fig2} and \\ref{fig:fig3}. The new region supports four distinct solutions, while $R_3$ supports only three. 
Nonetheless, the boundary forms a continuous interface since one solution in $R_3$ corresponds to a double root of Eq.~(\\ref{eqn:6th-order_polynomial_epsilon=0})---a root of $[P(\\bar\\zeta)]^2=0$; the coherent drive lifts this degeneracy and splits one distinct solution into two.\n\nIn order to avoid the divergence of $P(\\bar\\zeta)$ and $Q(\\bar\\zeta)$ as $\\eta\\to1$, we take Eqs.~(\\ref{eqn:p_quadratic}) and (\\ref{eqn:q_quadratic}) over in the form\n\\begin{equation}\n(1-\\eta)^2P(\\bar\\zeta)=4\\bar\\Delta\\bar\\Delta_0\\bar\\zeta+4\\bar\\Delta_0^2(\\bar\\kappa^2+\\bar\\Delta^2),\n\\end{equation}\nand\n\\begin{equation}\n(1-\\eta)^4Q(\\bar\\zeta)=16\\bar\\Delta^2\\Delta_0^2,\n\\end{equation}\nin which case the 6th-order polynomial in $\\bar\\zeta$, Eq.~(\\ref{eqn:6th-order_polynomial}), simplifies as\n\\begin{equation}\n(1-\\bar\\zeta^2)\\mkern-2mu\\left[\\bar\\zeta+\\frac{\\bar\\Delta_0}{\\bar\\Delta}(\\bar\\kappa^2+\\bar\\Delta^2)\\right]^2=\\bar\\epsilon^2\\bar\\zeta^2,\n\\label{eqn:6th-order_polynomial_eta=1}\n\\end{equation}\nagain a 4th-order polynomial with two or four physically acceptable solutions. In the $\\bar\\epsilon\\to0$ limit, the range of four solutions is confined by the inequality\n\\begin{equation}\n\\frac{|\\bar\\Delta_0|}{|\\bar\\Delta|}(\\bar\\kappa^2+\\bar\\Delta^2)\\leq1,\n\\end{equation}\nwhich recovers the $\\lambda_{\\eta\\to1}^+$ threshold of Eq.~(\\ref{eqn:lambda_critical}). Note also that, as advertised, the root $\\bar\\zeta=-(\\bar\\Delta_0\/\\bar\\Delta)(\\bar\\kappa^2+\\bar\\Delta^2)$ on the $\\bar\\epsilon=0$ boundary is a double root; thus the region $R_4$ of Fig.~\\ref{fig:fig5}---four distinct roots in the interior---interfaces continuously with the three distinct roots of region $R_3$ in Figs.~\\ref{fig:fig2} and \\ref{fig:fig3}.\n\nTurning to the field, from Eqs.~(\\ref{eqn:steady-state_alpha1}) and (\\ref{eqn:steady-state_alpha2}), Eq.~(\\ref{eqn:inhomogeneous_system_eta=0}) ($\\eta=0$) is replaced by\n\\begin{equation}\n\\left(\n\\begin{matrix}\n\\bar\\kappa&-\\bar\\Delta\\\\\n\\noalign{\\vskip2pt}\n\\bar\\Delta+\\bar\\Delta_0^{-1}\\bar\\zeta&\\bar\\kappa\n\\end{matrix}\n\\mkern3mu\\right)\\mkern-4mu\\left(\n\\begin{matrix}\n\\bar\\alpha_x\\\\\n\\noalign{\\vskip2pt}\n\\bar\\alpha_y\n\\end{matrix}\n\\right)=\\left(\n\\begin{matrix}\n0\\\\\n\\noalign{\\vskip2pt}\n-\\bar\\epsilon\/2\n\\end{matrix}\n\\right),\n\\label{eqn:inhomogeneous_system_eta=1}\n\\end{equation}\nwhere the coupling through $\\bar\\zeta$ is no longer symmetrical in the off-diagonals of the matrix on the left-hand side, and is therefore not serving the function of a nonlinear dispersion. Indeed, the physical interpretation for $\\eta=1$ says the coupling through $\\bar\\zeta$ belongs on the right-hand side of Eq.~(\\ref{eqn:inhomogeneous_system_eta=1}) where it acts as a nonlinear drive. 
The interpretation is made particularly clear if we write\n\\begin{equation}\n\\bar\\beta=\\bar\\Delta_0^{-1}2\\bar\\alpha_x\\bar\\zeta,\n\\end{equation}\nEqs.~(\\ref{eqn:steady-state_beta1}) and (\\ref{eqn:steady-state_beta2}), and then, from $\\bar\\zeta^2+|\\bar\\beta|^2=1$,\n\\begin{equation}\n\\bar\\zeta=\\pm|\\bar\\Delta_0|(\\bar\\Delta_0^2+4\\bar\\alpha_x^2)^{-1\/2}.\n\\end{equation}\nNow, moving the term $\\Delta_0^{-1}\\bar\\alpha_x\\bar\\zeta$ to the right-hand side of Eq.~(\\ref{eqn:inhomogeneous_system_eta=1}), the equation is rewritten as\n\\begin{equation}\n\\left(\n\\begin{matrix}\n\\bar\\kappa&-\\bar\\Delta\\\\\n\\noalign{\\vskip2pt}\n\\bar\\Delta&\\bar\\kappa\n\\end{matrix}\n\\mkern3mu\\right)\\mkern-4mu\\left(\n\\begin{matrix}\n\\bar\\alpha_x\\\\\n\\noalign{\\vskip2pt}\n\\bar\\alpha_y\n\\end{matrix}\n\\right)=\\left(\n\\begin{matrix}\n0\\\\\n\\noalign{\\vskip2pt}\n-\\bar\\epsilon\/2\\mp\\bar\\alpha_x(\\bar\\Delta_0^2+4\\bar\\alpha_x^2)^{-1\/2}\n\\end{matrix}\n\\right),\n\\label{eqn:alpha_eta=1}\n\\end{equation}\nwhere, if we can assume $4\\bar\\alpha_x^2\\gg\\bar\\Delta_0^2$, we find two solutions with the amplitude of the coherent drive simply changed from $\\bar\\epsilon$ to $\\bar\\epsilon\\pm1$:\n\\begin{equation}\n\\bar\\alpha=-i\\frac{(\\bar\\epsilon\\pm1)\/2}{\\bar\\kappa+i\\bar\\Delta},\n\\label{eqn:alpha_eta=1_approx}\n\\end{equation}\nand $\\bar\\zeta=\\pm|\\bar\\Delta_0|\/|\\bar\\alpha_x|$, $\\bar\\beta=\\pm{\\rm sgn}(\\bar\\Delta_0){\\rm sgn}(\\bar\\alpha_x)$.\n\nMore generally, Fig.~\\ref{fig:fig5} shows the dependence of mean-field steady states on drive amplitude and detuning for $\\eta=1$ and $\\bar\\Delta_0=\\bar\\Delta$; frames (b)-(g) illustrate results for three sweeps through a parameter space that divides into just two separate regions [frame (a)]:\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=3.4in]{figure5a.pdf}\n\\vskip0.1in\n\\includegraphics[width=1.65in]{figure5b.pdf}\\hskip0.075in\\includegraphics[width=1.65in]{figure5c.pdf}\n\\vskip0.1in\n\\includegraphics[width=1.65in]{figure5d.pdf}\\hskip0.075in\\includegraphics[width=1.65in]{figure5e.pdf}\n\\vskip0.1in\n\\includegraphics[width=1.65in]{figure5f.pdf}\\hskip0.075in\\includegraphics[width=1.65in]{figure5g.pdf}\n\\end{center}\n\\caption{Mean-field steady states for $\\eta=1$ and $\\Delta_0=\\Delta$: $\\kappa\/\\lambda=0.02$ and $\\epsilon\/\\epsilon_{\\rm crit}=0.6$ [(b),(c)], $\\epsilon\/\\epsilon_{\\rm crit}=1.0$ [(d),(e)], and $\\epsilon\/\\epsilon_{\\rm crit}=1.2$ [(f),(g)]. The three sweeps through the phase diagram are indicated by dashed lines in (a); solid red (dashed blue) lines indicate stable (unstable) steady states in (b)-(g); dashed black lines demark the range of bistability in (c).}\n\\label{fig:fig5}\n\\end{figure}\n\\begin{description}\n\\item[Region $R_4$]\nTwo solutions that approach $\\bar\\zeta=\\pm1$ in the limit of zero drive and two that approach the root $\\bar\\zeta_+=-\\bar\\kappa^2-\\bar\\Delta^2$ of $[P(\\bar\\zeta)]^2=0$; the solutions that approach $\\bar\\zeta=\\bar\\zeta_+$ ($\\pm1$) are stable (unstable); the solution that approaches $\\bar\\zeta=-1$ links in a closed loop to one of the solutions approaching $\\bar\\zeta_+$. Four solutions in total.\n\\item[Region $R_2^b$]\nTwo solutions that approach $\\bar\\zeta=\\pm1$ in the limit of large detuning; the solution approaching $\\bar\\zeta=-1$ ($+1$) is stable (unstable). 
Two solutions in total.\n\\end{description}\nWe note the following additional points:\n\\begin{enumerate}[(i)]\n\\item\nTwo of the four solutions in region $R_4$ are consistent with the assumption adopted above Eq.~(\\ref{eqn:alpha_eta=1_approx}) [frame (c) of Fig.~\\ref{fig:fig5}] so long as $\\bar\\kappa\\ll1$; the remaining two solutions satisfy Eq.~(\\ref{eqn:alpha_eta=1}) but do not admit the approximation leading to Eq.~(\\ref{eqn:alpha_eta=1_approx}).\n\\item\nThe boundary between regions $R_4$ and $R_2^b$ in frame (a) of Fig.~\\ref{fig:fig5} follows the curve\n\\begin{equation}\n\\bar\\epsilon=\\left\\{1-\\left[\\frac{|\\bar\\Delta_0|}{|\\bar\\Delta|}(\\bar\\kappa^2+\\bar\\Delta^2)\\right]^{2\/3}\\right\\}^{3\/2}.\n\\end{equation}\nThe boundary is a line of double roots of Eq.~(\\ref{eqn:6th-order_polynomial_eta=1}), and the curve may be found by equating derivatives on the left- and right-hand sides of this equation.\n\\item\nThe critical point $\\epsilon_{\\rm crit}$ [Eq.~(\\ref{eqn:critical_drive})] organizes behavior as a function of drive strength and detuning in much the same way as it does for $\\eta=0$.\n\\item\nThe closed loop in frame (b) of Fig.~\\ref{fig:fig5} is similar to the loop in frame (b) of Fig.~\\ref{fig:fig4}; both shrink with increasing drive strength to eventually vanish at the critical point---frames (d) of Figs.~\\ref{fig:fig4} and \\ref{fig:fig5}. Note, though, that the stabilities are interchanged; this change is clearly reflected in the accompanying plots of $|\\bar\\alpha|^2$ [frames (c) of Figs.~\\ref{fig:fig4} and \\ref{fig:fig5}].\n\\item\nThe stable solutions displayed in frames (c), (e), and (g) of Fig.~\\ref{fig:fig5} are all single nearly Lorentzian peaks; the splitting in the corresponding frames of Fig.~\\ref{fig:fig4} does not occur.\n\\end{enumerate}\n\n\\subsection{Intermediate regime: $0<\\eta<1$}\n\\label{sec:coherent_drive_intermediate_eta}\nSummarizing what we have learned: with no counter-rotating interaction, the dissipative Dicke system shows no phase transition as a function of coupling strength [$\\eta=0$ in frames (b) and (d) of Fig.~\\ref{fig:fig1}], although the breakdown of photon blockade takes place in the presence of a coherent drive (Fig.~\\ref{fig:fig4}); the dissipative system does, however, show the standard phase transition when $\\eta=1$, where it is deformed by a coherent drive and vanishes with increasing drive strength at a renormalized photon-blockade-breakdown critical point (Fig.~\\ref{fig:fig5}).\n\nIn this section we unify these limiting cases by letting $\\eta$ vary continuously between 0 and 1. We show how the previously unreported phase of the Dicke system, i.e., region $R_4$ of Figs.~\\ref{fig:fig2} and \\ref{fig:fig3}, underlies this unification.\n\nWe begin with the interface between frame (a) of Fig.~\\ref{fig:fig3} and frame (a) of Fig.~\\ref{fig:fig5}, where regions of three and four distinct solutions connect on the boundary $\\bar\\epsilon=0$, $\\eta=1$: moving off the boundary with a perturbation $\\bar\\epsilon\\to\\delta\\bar\\epsilon$ lifts the degeneracy of a double root of Eq.~(\\ref{eqn:6th-order_polynomial_epsilon=0}), and thus provides the link between regions. Something similar is encountered on the $\\bar\\epsilon=0$ boundary with $\\eta_\\kappa<\\eta<1$ (e.g., along the lines $\\eta=0.6$ and $\\eta=0.2$ in Fig.~\\ref{fig:fig3}); however, now two regions, $R_3$ and $R_4$, link to contiguous regions under the perturbation $\\bar\\epsilon\\to\\delta\\bar\\epsilon$. 
Since $R_4$ accommodates \\emph{two} double roots of Eq.~(\\ref{eqn:6th-order_polynomial_epsilon=0}), we predict its linkage to a contiguous region of six distinct solutions in the presence of a coherent drive.\n\nWe illustrate this situation in Fig.~\\ref{fig:fig6} where we plot the function $\\sqrt{1-\\bar\\zeta^2}P(\\bar\\zeta)$---the square root of the left-hand side of Eq.~(\\ref{eqn:6th-order_polynomial_epsilon=0})---for four detunings along the $\\eta=0.2$ sweep of Fig.~\\ref{fig:fig3}: frames (a), (b), (c), (d) refer, in sequence, to points in regions $R_2$, $R_3$, $R_4$, $R_2$ along the sweep---moving inwards from either end; they show examples of two, three, four, and again two distinct roots. The trivial roots, $\\bar\\zeta=\\pm1$, appear in every plot, while the additional roots [frames (b) and (c)] are double roots of $[P(\\bar\\zeta)]^2=0$. The perturbation $\\bar\\epsilon\\to\\delta\\bar\\epsilon$ replaces each dashed line in the figure by a pair of curves $\\pm\\delta\\bar\\epsilon|\\bar\\zeta|\\sqrt{Q(\\bar\\zeta)}$, and thus lifts the degeneracy of each double root. [It is readily shown that $Q(\\bar\\zeta)>0$.]\n\nFigure~\\ref{fig:fig7} shows how the results displayed in Figs.~\\ref{fig:fig3}-\\ref{fig:fig5} are generalized for $\\eta=0.2$ and $\\bar\\Delta_0=\\bar\\Delta$. The parameter space is now divided into a larger number of regions, integrating those already met in the three limiting cases:\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=1.7in]{figure6a.pdf}\\includegraphics[width=1.7in]{figure6b.pdf}\n\\vskip0.1in\n\\includegraphics[width=1.7in]{figure6c.pdf}\\includegraphics[width=1.7in]{figure6d.pdf}\n\\end{center}\n\\caption{The square root of the left-hand side of Eq.~(\\ref{eqn:6th-order_polynomial_epsilon=0}) as a function of $\\bar\\zeta$ for $\\eta=0.2$ and $\\Delta_0=\\Delta$: $\\kappa\/2\\epsilon_{\\rm crit}=1\/12$ and $|\\Delta|\/2\\epsilon_{\\rm crit}=1.0$, $0.7$, $0.5$, $0.15$ [(a)-(d)]. Zeros of this function (crossings of the black dashed lines) are roots of Eq.~(\\ref{eqn:6th-order_polynomial_epsilon=0}).}\n\\label{fig:fig6}\n\\end{figure}\n\n\\begin{description}\n\\item[Region $R_2^a$]\nTwo solutions that approach $\\bar\\zeta=\\pm1$ in the limit of zero drive; the solution approaching $\\bar\\zeta=-1$ ($+1$) is stable (unstable). Two solutions in total.\n\\item[Region $R_6$]\nTwo solutions that approach $\\bar\\zeta=\\pm1$ in the limit of zero drive and four additional solutions---two that approach each of the double roots, $\\bar\\zeta_\\pm$, of $[P(\\bar\\zeta)]^2=0$. The solutions approaching $\\bar\\zeta=-1$ and $\\bar\\zeta_+$ ($+1$ and $\\bar\\zeta_-$) are stable (unstable). Six solutions in total.\n\\item[Region $R_4^a$]\nTwo solutions that approach $\\bar\\zeta=\\pm1$ in the limit of zero drive and two that approach the double root $\\bar\\zeta_+$ of $[P(\\bar\\zeta)]^2=0$; the solutions approaching $\\bar\\zeta_+$ ($\\pm1$) are stable (unstable). Four solutions in total.\n\\item[Region $R_4^b$]\nTwo solutions that approach $\\bar\\zeta=\\pm1$ in the limit of zero drive and two additional solutions that arise from the bistable folding of the solution that approaches $\\bar\\zeta=-1$; the solution approaching $\\bar\\zeta=-1$ ($+1$) is stable (unstable), and the two additional solutions are one stable\/unstable. Four solutions in total.\n\\item[Region $R_2^b$]\nTwo solutions that approach $\\bar\\zeta=\\pm1$ in the limit of large detuning; the solution approaching $\\bar\\zeta=-1$ ($+1$) is stable (unstable). 
Two solutions in total.\n\\end{description}\n\nFrames (b)-(e) of Fig.~\\ref{fig:fig7} show how the corresponding plots in Fig.~\\ref{fig:fig3} change when the degeneracy of the double roots ($\\bar\\epsilon=0$) is lifted ($\\bar\\epsilon\\neq0$) to link regions $R_3$ and $R_4$ of Fig.~\\ref{fig:fig3} to regions $R_4^a$ and $R_6$, respectively, of Fig.~\\ref{fig:fig7}. (Note, however, that $\\kappa\/\\lambda$ takes different values in the figures, so region boundaries do not line up.) The change is clearly seen, for example, comparing frame (b) of Fig.~\\ref{fig:fig3} with frames (b) and (d) of Fig.~\\ref{fig:fig7}: a single stable upper branch---Fig.~\\ref{fig:fig3}---is split into two stable upper branches---Fig.~\\ref{fig:fig7}; and a single unstable branch connecting upper and lower stable branches in Fig.~\\ref{fig:fig3} splits into two unstable branches in Fig.~\\ref{fig:fig7} [near overlapping dashed lines in frame (d)]. In this way features met separately in the limiting cases of Figs.~\\ref{fig:fig4} and \\ref{fig:fig5} are linked together.\n\nFinally, for larger amplitudes of the drive---e.g., adding sweeps at $\\bar\\epsilon=0.6$, 1.0, and 1.2 in frame (a) of Fig.~\\ref{fig:fig7}---mean-field steady states follow the breakdown of photon blockade, as in frames (b)-(g) of Fig.~\\ref{fig:fig4}.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=3.4in]{figure7a.pdf}\n\\vskip0.1in\n\\includegraphics[width=1.65in]{figure7b.pdf}\\hskip0.075in\\includegraphics[width=1.65in]{figure7c.pdf}\n\\vskip0.1in\n\\includegraphics[width=1.65in]{figure7d.pdf}\\hskip0.075in\\includegraphics[width=1.65in]{figure7e.pdf}\n\\end{center}\n\\caption{Mean-field steady states for $\\eta=0.2$ and $\\Delta_0=\\Delta$: $\\kappa\/\\lambda=0.02$ and $\\epsilon\/\\epsilon_{\\rm crit}=0.2$; [(d),(e)] expands the view in [(b),(c)]. The sweep through the phase diagram is indicated by the dashed line in (a); solid red (dashed blue and magenta) lines indicate stable (unstable) steady states in (b)-(e); dashed black lines demark the range of bistability or tristability in (c).}\n\\label{fig:fig7}\n\\end{figure}\n\n\\section{Quantum Fluctuations: One Two-State System}\n\\label{sec:quantum_fluctuations}\nWhile the mean field analysis may be highly suggestive of what to expect from an experimental realization of our generalized Jaynes-Cummings-Rabi model, an account in these terms is incomplete---fluctuations are neglected. We encounter coexisting steady states, for example, and although both are stable under small perturbations when Maxwell-Bloch equations are solved, what of the stability once quantum fluctuations are introduced?\n\nIt is beyond the scope of this work to address questions like this in any detail. We limit ourselves here to a few observations about the full quantum treatment for the case $N=1$, where a number of calculations are feasible, some analytical and some numerical, to parallel results for the breakdown of photon blockade \\cite{carmichael_2015}. 
While it may seem that $N=1$ takes us very far from a many-particle limit where contact with mean-field results may be made, this is not generally the case: it is shown in Ref.~\\cite{carmichael_2015} that the many-photon limit is a strong-coupling limit, and many of the figures from Sec.~\\ref{sec:mean-field} have photon numbers ranging in the hundreds for $N=1$---after the scaling of Eq.~(\\ref{eqn:scaling}) is undone.\n\nIn this section, we show that the $\\eta$-dependence of the critical drive strength (Sec.~\\ref{sec:critical_drive}) follows from the quasi-energy spectrum, extending the previous calculation of the spectrum for $\\eta=0$~\\cite{alsing_etal_1992} to the general case. We then address the role of multi-photon resonances in the limit of small $\\eta$, where we uncover behavior similar to multi-photon blockade \\cite{shamailov_etal_2010} under weak coherent driving, but only for even numbers of photons absorbed. Finally, we use quantum trajectories to explore the accessibility of co-existing mean-field steady states in the presence of fluctuations.\n\n\\subsection{Quasienergies for $\\Delta_0=\\Delta=0$}\n\nEver since the seminal work of Jaynes and Cummings \\cite{jaynes&cummings_1963} (see also \\cite{paul_1963})), the energy spectrum of a single two-state system interacting with one mode of the radiation field has been a fundamental element of quantum optics models and physical understanding. The level scheme is remarkably simple when compared with extensions to the quantum Rabi model \\cite{braak_2011} and generalizations to include a counter-rotating interaction after the manner of Sec.~\\ref{sec:Dicke_counter_rotating} \\cite{tomka_etal_2014}. Alsing \\emph{et al}. \\cite{alsing_etal_1992} showed that the simplicity carries over to the driven Jaynes-Cummings Hamiltonian when the two-state system and radiation mode are resonant with the drive. In this case, a Bogoliubov transformation diagonalizes the interaction picture Hamiltonian, so that quasienergies are recovered. The critical drive $\\epsilon_{\\rm crit}$ is then the point at which all quasienergies collapse to zero. In this section we show that the method employed by Alsing \\emph{et al}. carries through for arbitrary $\\eta$, and the collapse to zero reproduces Eq.~(\\ref{eqn:critical_drive}).\n\nWe consider the Hamiltonian $H_\\eta^{\\prime\\prime}=H_\\eta^\\prime+\\sqrt N\\epsilon(a^\\dagger+a)$, where $H_\\eta^\\prime$ is given by Eq.~(\\ref{eqn:hamiltonian_raman_model}). 
Taking the coherent drive on resonance and considering just one two-state system, the Hamiltonian is\n\\begin{equation}\nH_\\eta^{\\prime\\prime}=\\lambda(a\\sigma_++a^\\dagger\\sigma_-)+\\eta\\lambda(a^\\dagger\\sigma_++a\\sigma_-)+\\epsilon(a^\\dagger+a).\n\\label{eqn:hamiltonian_zero_detuning_one_atom}\n\\end{equation}\nWe seek solutions to the eigenvalue problem $H_\\eta^{\\prime\\prime}|\\psi_E\\rangle=E|\\psi_E\\rangle$, where $E$ is a quasienergy and\n\\begin{equation}\n|\\psi_E\\rangle=|\\psi^{(1)}_E\\rangle|1\\rangle+|\\psi^{(2)}_E\\rangle|2\\rangle,\n\\label{eqn:eigenket}\n\\end{equation}\nwith the kets $|\\psi^{(1,2)}_E\\rangle$ expanded over the Fock states, $|n\\rangle$, $n=1,2,\\ldots$, of the field mode; we must find allowed values of $E$ and the corresponding field kets.\n\nIt is straightforward to show that the field kets satisfy the homogeneous system of equations\n\\begin{equation}\n\\left(\n\\begin{matrix}\n\\epsilon(a^\\dagger+a)-E&\\lambda(a^\\dagger+\\eta a)\\\\\n\\noalign{\\vskip4pt}\n\\lambda(\\eta a^\\dagger+a)&\\epsilon(a^\\dagger+a)-E\n\\end{matrix}\n\\mkern2mu\\right)\\mkern-4mu\\left(\n\\begin{matrix}\n|\\psi^{(1)}_E\\rangle\\\\\n\\noalign{\\vskip4pt}\n|\\psi^{(2)}_E\\rangle\n\\end{matrix}\n\\right)=0,\n\\label{eqn:quasienergy_homogeneous_system}\n\\end{equation}\nwhence multiplication on the left by ${\\rm diag}(\\eta a^\\dagger+a,a^\\dagger+\\eta a)$ takes us to the coupled equations:\n\\begin{eqnarray}\n-\\epsilon(1-\\eta)|\\psi^{(1)}_E\\rangle&=&[\\epsilon(a^\\dagger+a)-E](\\eta a^\\dagger+a)|\\psi^{(1)}_E\\rangle\\notag\\\\\n\\noalign{\\vskip2pt}\n&&+\\lambda[aa^\\dagger+\\eta(a^{\\dagger 2}+a^2)+\\eta^2a^\\dagger a]|\\psi^{(2)}_E\\rangle,\\notag\\\\\n&&\\label{eqn:eigenket_again1}\\\\\n\\epsilon(1-\\eta)|\\psi^{(2)}_E\\rangle&=&[\\epsilon(a^\\dagger+a)-E](a^\\dagger+\\eta a)|\\psi^{(2)}_E\\rangle\\notag\\\\\n\\noalign{\\vskip2pt}\n&&+\\lambda[a^\\dagger a+\\eta(a^{\\dagger 2}+a^2)+\\eta^2aa^\\dagger]|\\psi^{(1)}_E\\rangle.\\notag\\\\\n&&\\label{eqn:eigenket_again2}\n\\end{eqnarray}\nWe then use Eq.~(\\ref{eqn:quasienergy_homogeneous_system}) to substitute for $(\\eta a^\\dagger+a)|\\psi^{(1)}_E\\rangle$ and $(a^\\dagger+\\eta a)|\\psi^{(2)}_E\\rangle$, respectively, on the right-hand sides of Eqs.~(\\ref{eqn:eigenket_again1}) and (\\ref{eqn:eigenket_again2}), and thus obtain the more compact form:\n\\begin{eqnarray}\n\\left(O(E)-\\lambda^2\\frac{1-\\eta^2}2\\right)|\\psi^{(1)}_E\\rangle-\\epsilon\\lambda(1-\\eta)|\\psi^{(2)}_E\\rangle&=&0,\\mkern20mu\n\\label{eqn:eigenket_yet_again1}\\\\\n\\noalign{\\vskip2pt}\n\\left(O(E)+\\lambda^2\\frac{1-\\eta^2}2\\right)|\\psi^{(2)}_E\\rangle+\\epsilon\\lambda(1-\\eta)|\\psi^{(1)}_E\\rangle&=&0,\n\\label{eqn:eigenket_yet_again2}\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\nO(E)&=&\\lambda^2(1+\\eta^2)\\frac{a^\\dagger a+aa^\\dagger}2+\\lambda^2\\eta\\left(a^{\\dagger 2}+a^2\\right)\\notag\\\\\n&&-\\left[\\epsilon\\left(a^\\dagger+a\\right)-E\\right]^2.\n\\label{eqn:o_operator}\n\\end{eqnarray}\nSince the coefficients of the second terms on the left-hand side are constants, Eqs.~(\\ref{eqn:eigenket_yet_again1}) and (\\ref{eqn:eigenket_yet_again2}) can now be readily uncoupled, and yield the autonomous 
equation\n\\begin{equation}\nO_+(E)O_-(E)|\\psi^{(1,2)}_E\\rangle=0,\n\\label{eqn:oplus_by_ominus_equation}\n\\end{equation}\nwhere\n\\begin{equation}\nO_{\\pm}(E)=O(E)\\pm\\lambda^2\\frac{1-\\eta^2}2\\Lambda^{1\/2},\n\\label{eqn:oplusminus_operator}\n\\end{equation}\nwith\n\\begin{equation}\n\\Lambda=1-\\frac1{(1+\\eta)^2}\\frac{4\\epsilon^2}{\\lambda^2}.\n\\label{eqn:Lambda_definition}\n\\end{equation}\n\nNote now that $O_+(E)$ and $O_-(E)$ commute, and so the general solution to Eq.~(\\ref{eqn:oplus_by_ominus_equation}) expands as\n\\begin{equation}\n|\\psi_{E}^{(1,2)}\\rangle=c_+^{(1,2)}|\\psi_E^{(+)}\\rangle+c_-^{(1,2)}|\\psi_E^{(-)}\\rangle,\n\\label{eqn:oplus_by_ominus_solution}\n\\end{equation}\nwhere $|\\psi_E^{(+)}\\rangle$ and $|\\psi_E^{(-)}\\rangle$ satisfy\n\\begin{equation}\nO_{\\pm}(E)|\\psi_E^{(\\pm)}\\rangle=0.\n\\label{eqn:oplusminus_equation}\n\\end{equation}\nMoreover, the operators $O_{\\pm}(E)$ are quadratic in creation and annihilation operators, so the diagonalization may be completed by a Bogoliubov transformation: introduce parameters $\\nu$, $\\xi$, $\\alpha(E)$, and $\\mu_\\pm(E)$, such that\n\\begin{equation}\nO_\\pm(E)=\\nu U^\\dagger[\\xi,\\alpha(E)]\\frac{a^\\dagger a+aa^\\dagger}2U[\\xi,\\alpha(E)]+\\mu_\\pm(E),\n\\label{eqn:oplusminus_unitary}\n\\end{equation}\nwhere the unitary $U[\\xi,\\alpha(E)]\\equiv D[\\alpha(E)]S(\\xi)$ executes a displacement and then a squeeze,\n\\begin{equation}\na\\buildrel U\\over\\to[a+\\alpha(E)]\\cosh\\xi+[a^\\dagger+\\alpha(E)]\\sinh\\xi,\n\\label{eqn:a_transform}\n\\end{equation}\nwhence, from Eq.~(\\ref{eqn:oplusminus_equation}),\n\\begin{equation}\n\\left(\\frac{a^\\dagger a+aa^\\dagger}2+\\frac{\\mu_\\pm(E)}\\nu\\right)\\!\\left\\{U[\\xi,\\alpha(E)]|\\psi_E^{(\\pm)}\\rangle\\right\\}=0.\n\\end{equation}\nThe number operator now acts on the left-hand side, and $|\\psi_E^{(+)}\\rangle$ and $|\\psi_E^{(-)}\\rangle$ are displaced and squeezed Fock states:\n\\begin{equation}\n|\\psi_{E_{n_\\pm}}^{(\\pm)}\\rangle=U^\\dagger[\\xi,\\alpha(E_{n_\\pm})]|n_\\pm\\rangle,\n\\end{equation}\n$n_\\pm=0,1,2,\\ldots$, where allowed quasienergies are indexed by the integers $n_\\pm$ and must satisfy\n\\begin{equation}\nn_\\pm+\\frac12+\\frac{\\mu_\\pm(E_{n_\\pm})}\\nu=0.\n\\label{eqn:quasienergy_constraint}\n\\end{equation}\nIt remains to equate terms on both sides of Eq.~(\\ref{eqn:oplusminus_unitary}) to fix the parameters of the Bogoliubov transformation, which yields\n\\begin{equation}\n\\nu=\\lambda^2(1-\\eta^2)\\Lambda^{1\/2},\\qquad\n\\xi=\\frac12\\ln\\left(\\frac{1+\\eta}{1-\\eta}\\Lambda^{1\/2}\\right),\n\\label{eqn:paramters1}\n\\end{equation}\nand\n\\begin{eqnarray}\n\\alpha(E)&=&\\frac{2\\epsilon E}{\\lambda^2(1+\\eta)^2}\\Lambda^{-1},\\\\\n\\noalign{\\vskip4pt}\n\\mu_\\pm(E)&=&\\pm\\frac\\nu2-E^2\\Lambda^{-1},\n\\label{eqn:parameters2}\n\\end{eqnarray}\nand thus the allowed quasienergies follow from\n\\begin{equation}\nn_\\pm+\\frac12\\pm\\frac12-E_{n_{\\pm}}^2\\frac1{\\lambda^2(1-\\eta^2)}\\Lambda^{-3\/2}=0.\n\\label{eqn:energies1}\n\\end{equation}\n\nEquation (\\ref{eqn:energies1}) is the targeted result, which reveals the generalized critical drive strength. 
It is helpful, however, for clarity, to recognize that $n_+$ and $n_-$ provide a double coverage of the nonnegative integers---traced to the \\emph{two} components on the right-hand side of Eq.~(\\ref{eqn:oplus_by_ominus_solution})---and to replace $n_\\pm$ by a single index $n$: first, associate $n=0$ with $n_-=0$, from which Eq.~(\\ref{eqn:energies1}) yields the quasienergy\n\\begin{equation}\nE_0=0,\n\\label{eqn:energy_zero}\n\\end{equation}\nwith corresponding ket\n\\begin{equation}\n|\\psi_{E_0}^{(-)}\\rangle=U^\\dagger[\\xi,\\alpha(E_0)]|0\\rangle;\n\\end{equation}\nand second, associate $n=1,2,\\dots$ with both $n_+=n-1$ and $n_-=n$, both of which, when substituted in Eq.~(\\ref{eqn:energies1}), yield the quasienergy doublet\n\\begin{equation}\nE_{n,\\pm}=\\pm\\lambda\\sqrt n\\sqrt{1-\\eta^2}\\Lambda^{3\/4},\n\\label{eqn:quasienergy_doublet}\n\\end{equation}\nalthough with distinct corresponding kets:\n\\begin{eqnarray}\n|\\psi_{E_{n,\\pm}}^{(+)}\\rangle&=&U^\\dagger[\\xi,\\alpha(E_{n,\\pm})]|n-1\\rangle,\\\\\n\\noalign{\\vskip2pt}\n|\\psi_{E_{n,\\pm}}^{(-)}\\rangle&=&U^\\dagger[\\xi,\\alpha(E_{n,\\pm})]|n\\rangle.\n\\end{eqnarray}\n\nIt is clear from Eq.~(\\ref{eqn:quasienergy_doublet}) that all quasienergies collapse to zero for $n$ finite and $\\Lambda=0$, a condition that returns, from Eq.~(\\ref{eqn:Lambda_definition}), the critical drive strength $\\epsilon_{\\rm crit}$ [Eq.~(\\ref{eqn:critical_drive})]. From this fully quantum mechanical point of view, $\\epsilon_{\\rm crit}$ marks a transition from a discrete quasienergy spectrum to a continuous one; the continuous side is recovered from the limit $\\Lambda\\to0$, $n\\to\\infty$, $\\sqrt n\\Lambda^{3\/4}$ constant. Note that a continuous spectrum is also recovered in the limit $\\eta\\to1$, $n\\to\\infty$, $\\sqrt n\\sqrt{1-\\eta^2}$ constant. A continuous spectrum is expected for $\\eta=1$, since if we set $\\eta=1$ in Eq.~(\\ref{eqn:quasienergy_homogeneous_system}), $E$ is an eigenvalue of the quadrature operator $a^\\dagger+a$.\n\nThe coefficients $c_\\pm^{(1,2)}$ [Eq.~(\\ref{eqn:oplus_by_ominus_solution})] follow from Eqs.~(\\ref{eqn:quasienergy_homogeneous_system}) and (\\ref{eqn:eigenket_yet_again1}), and normalization (see Ref.~\\cite{alsing_etal_1992}).\n\n\\subsection{Multi-photon resonance}\nWith the focus on just one two-state system, Figs.~\\ref{fig:fig4}, \\ref{fig:fig5}, and \\ref{fig:fig7} show photon numbers ranging from zero to a few thousand, and although numbers are smaller in Fig.~\\ref{fig:fig3}, the range is similar when $\\kappa\/\\lambda$ is set to $0.02$ instead of $0.1$. While we might expect mean-field theory to be broadly reliable for thousands, even hundreds of photons, it will surely miss important features when photon numbers are small. Indeed, photon blockade is a photon by photon effect, underpinned, not by a mean-field nonlinearity, but by a strongly anharmonic ladder of few-photon excited states; it breaks down through multi-photon absorption, where, in Fig.~4 of Ref.~\\cite{carmichael_2015}, for example, multi-photon resonances dominate the response to weak driving and the mean-field story of dispersive bistability is not picked up until $\\epsilon\/\\epsilon_{\\rm crit}\\sim0.4$.\n\nRecall now that in its dissipate realization (Sec.~\\ref{sec:dissipative_realization}) our generalized model involves not one, but two external\ndrives---a linear drive of strength $\\epsilon$, and a second, nonlinear drive of strength $\\eta$. 
We show now that the multi-photon response to weak driving carries over, with minor modification, from linear to nonlinear driving.\n\nReinstating detuning and setting $\\Delta_0=\\Delta$, we consider the Hamiltonian $H^{\\prime\\prime\\prime}_\\eta=\\Delta a^\\dagger a+\\Delta\\sigma_z+H^{\\prime\\prime}_\\eta$, where $H^{\\prime\\prime}_\\eta$ is given by Eq.~(\\ref{eqn:hamiltonian_zero_detuning_one_atom}). It is convenient for clarity, however, to adopt an interaction picture, where we define\n\\begin{eqnarray}\nH_\\eta^{\\prime\\prime}(t)&\\equiv&U_0^\\dagger(t)H^{\\prime\\prime}_\\eta U_0(t)\\notag\\\\\n\\noalign{\\vskip2pt}\n&=&H_{\\rm JC}+H_{\\epsilon}(t)+H_\\eta(t),\n\\end{eqnarray}\n$U_0(t)\\equiv\\exp[-i\\Delta(a^\\dagger a+\\sigma_z)t]$, and thus isolate the Jaynes-Cummings interaction,\n\\begin{equation}\nH_{\\rm JC}=\\lambda(a\\sigma_++a^\\dagger\\sigma_-),\n\\end{equation}\nwhich is perturbed by the linear drive\n\\begin{equation}\nH_\\epsilon(t)=\\epsilon(a^\\dagger e^{i\\Delta t}+ae^{-i\\Delta t}),\n\\end{equation}\nand the nonlinear drive\n\\begin{equation}\nH_\\eta(t)=\\eta\\lambda(a^\\dagger\\sigma_+e^{2i\\Delta t}+a\\sigma_-e^{-2i\\Delta t}).\n\\end{equation}\nWe also recall the eigenvalues and eigenkets of $H_{\\rm JC}$:\n\\begin{equation}\nE_0^{\\rm JC}=0,\\qquad E_{n,\\pm}^{\\rm JC}=\\pm\\lambda\\sqrt n,\n\\end{equation}\n$n=1,2,\\ldots$, and\n\\begin{eqnarray}\n|E_0^{\\rm JC}\\rangle&=&|0\\rangle|1\\rangle,\\\\\n|E_{n,\\pm}^{\\rm JC}\\rangle&=&\\frac1{\\sqrt2}\\left(|n\\rangle|1\\rangle\\pm|n-1\\rangle|2\\rangle\\right),\n\\end{eqnarray}\nwhere the first (second) ket refers to the field mode (two-state system) in each product on the right-hand side.\n\nNote now that the perturbation $H_\\epsilon(t)$ has non-zero matrix elements between neighboring pairs of kets in the $n$-step sequence\n\\begin{equation}\n|E_0^{\\rm JC}\\rangle\\rightarrow|E_{1,\\pm}^{\\rm JC}\\rangle\\rightarrow\\cdots\\rightarrow|E_{n-1,\\pm}^{\\rm JC}\\rangle\\rightarrow|E_{n,\\pm}^{\\rm JC}\\rangle,\n\\end{equation}\n$n=1,2,\\ldots$, while $H_\\eta(t)$ has non-zero matrix elements between pairs of kets in the $n\/2$-step sequence\n\\begin{equation}\n|E_0^{\\rm JC}\\rangle\\rightarrow|E_{2,\\pm}^{\\rm JC}\\rangle\\rightarrow\\cdots\\rightarrow|E_{n-2,\\pm}^{\\rm JC}\\rangle\\rightarrow|E_{n,\\pm}^{\\rm JC}\\rangle,\n\\end{equation}\n$n=2,4,\\ldots$. There are thus matrix elements to mediate multi-photon transitions from $|E_0^{\\rm JC}\\rangle$ to $|E_{n,\\pm}^{\\rm JC}\\rangle$ driven by either perturbation, but with the qualification that $H_\\eta(t)$ can only drive those with even $n$; resonance is achieved under the condition\n\\begin{equation}\n\\Delta=\\pm\\lambda\/\\sqrt n,\n\\end{equation}\nwhich is met either by $n$ steps of $\\Delta$ off-setting $\\pm\\lambda\\sqrt n$, or $n\/2$ steps of $2\\Delta$.\n\nFrame (a) of Fig.~\\ref{fig:fig8} illustrates the breakdown of photon blockade from a fully quantum mechanical point of view; we identify up to six multi-photon resonances before they begin to merge and wash out due to power broadening at higher drives. 
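For orientation, the resonance condition above places the first six resonances at\n\begin{equation}\n\frac{\Delta}{\lambda}=\pm\frac1{\sqrt n}\simeq\pm1,\ \pm0.71,\ \pm0.58,\ \pm0.50,\ \pm0.45,\ \pm0.41\qquad(n=1,\ldots,6),\n\end{equation}\nof which only the even-$n$ members are available to the nonlinear drive [frame (b) of Fig.~\ref{fig:fig8}].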
This figure displays quantum corrections, for $N=1$, to the mean-field results of Fig.~\ref{fig:fig4}, where at high drives---$\epsilon\/\epsilon_{\rm crit}=0.40$ and $0.48$---the layout of frame (a) of Fig.~\ref{fig:fig4} begins to appear with the photon number averaged over fluctuation-driven switching between the pair of coexisting mean-field steady states.\n\n\begin{figure}[t]\n\begin{center}\n\includegraphics[width=3.4in]{figure8a.pdf}\n\vskip0.1in\n\includegraphics[width=3.4in]{figure8b.pdf}\n\end{center}\n\caption{Steady-state photon number expectation computed from the master equation, Eq.~(\ref{eqn:master_equation_drive}), for $N=1$, $\Delta_0=\Delta$, and $\kappa\/\lambda=0.02$: (a) $\eta=0$ and $\epsilon\/\epsilon_{\rm crit}=0.08$, $0.16$, $0.24$, $0.32$, $0.40$, $0.48$ (lower to upper) and (b) $\epsilon\/\epsilon_{\rm crit}=0$ and $\eta=0.04$, $0.08$, $0.12$, $0.16$, $0.20$, $0.24$ (lower to upper); successive curves are displaced upwards by $0.2$ and $0.3$ in (a) and (b), respectively.}\n\label{fig:fig8}\n\end{figure}\n\nFrame (b) of Fig.~\ref{fig:fig8} shows a similar plot for driving through the nonlinear perturbation $H_\eta(t)$. Once again multi-photon resonances are seen, but only three of the previous six---those corresponding to the absorption of two, four, and six photons. The figure in this case adds quantum corrections to the mean-field results of Fig.~\ref{fig:fig3} (but note that $\kappa\/\lambda$ is $0.02$ in Fig.~\ref{fig:fig8} and $0.1$ in Fig.~\ref{fig:fig3}).\n\n\subsection{Quantum induced switching between mean-field steady states}\nWhile multi-photon resonances are completely beyond the scope of mean-field results, Fig.~\ref{fig:fig8} does provide a hint of mean-field predictions once photon numbers rise above two or three, where, in the vicinity of zero detuning, we see clear evidence of regions $R_2^a$ in Fig.~\ref{fig:fig4} and $R_2$ in Fig.~\ref{fig:fig3}. In this section, we use quantum trajectory simulations to further trace connections between the mean-field theory and a full quantum treatment.\n\nNote, first, that unlike the common situation for phase transitions of light, where the many-photon limit is a weak-coupling limit (Secs.~IVA and IVC of Ref.~\cite{carmichael_2015}), the photon number for our generalized Jaynes-Cummings-Rabi model scales with $N(\lambda\/\kappa)^2$---i.e., the many-photon limit is a strong-coupling limit; this is seen, for example, from Eq.~(\ref{eqn:alpha_eta=1_approx}), which, undoing the scaling of Eqs.~(\ref{eqn:scaling}) and (\ref{eqn:scaled_parameters}), reads\n\begin{equation}\n|\alpha|^2=N\left(\frac{\lambda}{\kappa}\right)^2\frac{(1+\eta)^2}4\frac{(\bar\epsilon\pm 1)^2}{1+(\Delta\/\kappa)^2}.\n\label{eqn:photon_number_eta=1_unscaled}\n\end{equation}\nThe scaling is also apparent from a comparison between frames (c) and (e) of Fig.~\ref{fig:fig3}, and frames (c), (e), and (f) of Figs.~\ref{fig:fig4} and \ref{fig:fig5}, and frame (c) of Fig.~\ref{fig:fig7}: with $\lambda\/\kappa=10$ in Fig.~\ref{fig:fig3}, photon numbers range from 4 to 40, while with five times larger coupling in Figs.~\ref{fig:fig4}, \ref{fig:fig5}, and \ref{fig:fig7} they range in the hundreds and thousands; indeed, frames (c), (e), and (f) of Fig.~\ref{fig:fig5} rise to reach photon numbers of $6.4\times10^3$, $10^4$, and $1.21\times10^4$, respectively, at zero detuning [Eq.~(\ref{eqn:photon_number_eta=1_unscaled})]. 
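The quoted values are reproduced directly from Eq.~(\ref{eqn:photon_number_eta=1_unscaled}) if $\bar\epsilon$ is identified with $\epsilon\/\epsilon_{\rm crit}$ (the scaled drive used throughout): with $N=1$, $\eta=1$, $\Delta=0$, and $\lambda\/\kappa=50$,\n\begin{equation}\n|\alpha|^2=\left(\frac{\lambda}{\kappa}\right)^2(\bar\epsilon+1)^2=2500\times\{1.6^2,2.0^2,2.2^2\}=\{6.4\times10^3,10^4,1.21\times10^4\}\n\end{equation}\nfor $\epsilon\/\epsilon_{\rm crit}=0.6$, $1.0$, and $1.2$, respectively.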
Such high numbers can be reached with just one two-state system, since, when the coupling is strong, there is no need for a large value of $N$ to offset a weak nonlinearity per photon.\n\nAmongst the many effects of quantum fluctuations, in the following we target just two: first, mean-field steady states that are stable under Maxwell-Bloch equations are expected to be metastable in the presence of quantum fluctuations; and, second, isolated stable steady states---e.g., the lower state in frames (b) and (c) of Fig.~\\ref{fig:fig5} [the minus sign in Eq.~(\\ref{eqn:photon_number_eta=1_unscaled})]---might be accessed via quantum fluctuations. These effects are illustrated in Figs.~\\ref{fig:fig9} and \\ref{fig:fig10}, where we plot quantum trajectories of the photon number expectation while the detuning is slowly swept, from negative to positive. The coupling $\\lambda\/\\kappa=10$ is used in Fig.~\\ref{fig:fig9} in order to keep the maximum photon number relatively low, while the larger value in Fig.~\\ref{fig:fig10} maps to the mean-field results of Fig.~\\ref{fig:fig7}.\n\n\\begin{figure}[htpb!]\n\\begin{center}\n\\includegraphics[width=3.4in]{figure9a.pdf}\n\\vskip0.2in\n\\includegraphics[width=3.4in]{figure9b.pdf}\n\\vskip0.2in\n\\includegraphics[width=3.4in]{figure9c.pdf}\n\\end{center}\n\\caption{Sample quantum trajectories as a function of scanned detuning and steady-state $Q$ functions for $N=1$, $\\Delta_0=\\Delta$, $\\epsilon\/\\epsilon_{\\rm crit}=0.2$, $\\kappa\/\\lambda=0.1$, and $\\eta=1$, $0.8$, $0.6$ (top to bottom); in all frames the detuning is scanned from $\\Delta\/\\lambda=-1$ to $+1$ in a time $\\kappa T=6\\times10^4$. Two sample scans are plotted in each frame (solid yellow and cyan lines) against the background of mean-field steady states (solid red and dashed blue curves). The inset $Q$ functions are for detunings $\\Delta\/2\\epsilon_{\\rm crit}=0$ (left) and $\\Delta\/2\\epsilon_{\\rm crit}=0.04$, $0.015$, $0.04$ (top to bottom) (right).}\n\\label{fig:fig9}\n\\end{figure}\n\nFigure \\ref{fig:fig9} presents a sequence of plots illustrating the role of quantum fluctuations as we move away from the limit of the coherently driven extension of the Dicke phase transition of Sec.~\\ref{sec:Dicke_coherent_drive_eta=1} into the intermediate regime of Sec.~\\ref{sec:coherent_drive_intermediate_eta}. Beginning with $\\eta=1$, the upper frame shows quantum trajectories tracking the two mean-field curves plotted from Eq.~(\\ref{eqn:photon_number_eta=1_unscaled}). Both trajectories (yellow and cyan lines) start on the left by following the higher mean-field branch, but quantum fluctuations allow the isolated [see frames (b) and (c) of Fig.~\\ref{fig:fig5}] lower branch to be accessed too. The two branches correspond to fields that are $\\pi$ out of phase in the imaginary direction at zero detuning---inset $Q$ function to the left---and rotate to eventually align with the real axis as the detuning is changed---inset $Q$ function to the right.\n\nSimilar results are plotted for $\\eta=0.8$ and $\\eta=0.6$ in the middle and bottom frames, respectively. Once again, mean-field curves are faithfully followed over segments of the path, but the switching between branches is more common. 
The most prominent feature, however, is the dramatic loss of stability around zero detuning: although the mean-field analysis finds a stable steady state at zero photon number [region $R_2^a$ in frame (a) of Fig.~\\ref{fig:fig5}], the full quantum treatment yields fluctuations spanning the two previously stable coherent states; the fluctuations are particularly apparent from the inset $Q$ functions in the middle frame of Fig~{\\ref{fig:fig9}}. The spikes that accompany switches between branches are not numerical artifacts; they are decaying oscillations---evidence of a spiraling trajectory for the field amplitude in the approach to the new locally stable state.\n\nFigure \\ref{fig:fig10} presents the results of two detuning scans for $\\lambda\/\\kappa=50$ and $\\eta=0.2$, corresponding to the parameters of Fig.~\\ref{fig:fig7}. In one scan the quantum trajectory follows the highest branch of stable mean-field solutions all the way up to its maximum. Much more commonly, though, the trajectory switches between this branch and the vacuum state in the region of $\\Delta\/2\\epsilon_{\\rm crit}=\\pm 0.1$, as illustrated by the second scan. In this region the quantum fluctuations show clear evidence of the three coexisting stable mean-field steady states illustrated in frame (e) of Fig.~\\ref{fig:fig7} (region $R_6$)---inset $Q$ function to the right.\n\\begin{figure}[htpb!]\n\\begin{center}\n\\includegraphics[width=3.4in]{figure10.pdf}\n\\end{center}\n\\caption{As in Fig.~{\\ref{fig:fig9}} but for $\\kappa\/\\lambda=0.02$ and $\\eta=0.2$, and with the detuning scanned from $\\Delta\/\\lambda=-0.6$ to $+0.6$ in a time $\\kappa T=6\\times10^4$. The inset $Q$ functions are for detunings $\\Delta\/2\\epsilon_{\\rm crit}=0$ (left) and $\\Delta\/2\\epsilon_{\\rm crit}=0.12$ (right).}\n\\label{fig:fig10}\n\\end{figure}\n\n\\section{Conclusions}\n\\label{sec:conclusions}\n\\noindent\nWe have generalized the dissipative extension \\cite{dimer_etal_2007} of the Dicke model \\cite{dicke_1954} of light interacting with matter in two directions, thus linking the superradiant phase transition of Hepp and Lieb \\cite{hepp&lieb_1973a,hepp&lieb_1973b} to the breakdown of blockade \\cite{carmichael_2015,fink_etal_2017}. Although the former was originally approached through exact calculations in the thermodynamic limit for $N$ two-state systems in thermal equilibrium, and the latter as a phenomenon of single systems, both might be engineered in many- and one-two-state-system versions, with the same underlying mean-field phenomenology and where the central issue of photon number in the presence of dissipation is governed not by the number of two-state systems only, but also the ratio of coupling strength to photon loss \\cite{carmichael_2015}---even one two-state system can control many photons in cavity and circuit QED \\cite{armen_etal_2009,fink_etal_2017}.\n\nWe adopted a generalization introduced by Hepp and Lieb \\cite{hepp&lieb_1973b}, and taken up in a number of recent publications \\cite{stepanov_etal_2008,schiro_etal_2012,tomka_etal_2014,xie_etal_2014,tomka_etal_2015,wang_etal_2016,moroz_2016,kirton_etal_2018}, where the interaction Hamiltonian is made from a sum of rotating and counter-rotating terms of variable relative strength; in this way we span the continuum from the Jaynes-Cummings to the quantum Rabi interaction. We also added direct driving of the field mode, since that, not the counter-rotating interaction, creates photons in the breakdown of photon blockade. 
We analyzed mean-field steady states as a function of adjustable parameters for this extended model and found that a common critical drive strength, $\\epsilon_{\\rm crit}=\\lambda(1+\\eta)\/2$, links the superradiant phase transition to the breakdown of photon blockade---$\\lambda$ is the coupling strength and $\\eta$ the relative strength of counter-rotating to rotating interactions. More generally, we found that the extended phase diagram moves from a region of pure superradiant character into the region of broken blockade, passing through a phase that although present in the generalized model of Hepp and Lieb \\cite{hepp&lieb_1973b} is not identified in that work.\n\nWe then carried our analysis beyond mean-field steady states to a fully quantum treatment for the limiting case of one two-state system: we extended a prior calculation of quasi-energies \\cite{alsing_etal_1992} to the generalized Hamiltonian---resonant driving of the field mode and no dissipation---and obtained numerical results with both detuning and photon loss included. The quasi-energy spectrum for one two-state system was shown to be singular at $\\epsilon_{\\rm crit}$, where it undergoes a transition from discrete to continuous, and numerical simulations broadly support mean-field results, though expanding the view from earlier work \\cite{carmichael_2015,shamailov_etal_2010} of multi-photon resonances at weak drive and exhibiting quantum-fluctuation-induced switching amongst locally stable mean-field steady states.\n\nThe aim of this study has been to uncover connections between different dissipative quantum phase transition for light and we have left many directions untouched; for example, a broader investigation of a very rich parameter space and the fully quantum treatment. We expect future work on the theoretical side will fill the gaps and hope that experiments in the spirit of Refs.~\\cite{baumann_etal_2010,baumann_etal_2011,baden_etal_2014,fink_etal_2017,armen_etal_2009} will prove feasible.\n\n\\section*{Acknowledgments}\nThis work was supported by the Marsden fund of the RSNZ. Quantum trajectory simulations were carried out on the NeSI Pan Cluster at the University of Auckland, supported by the Center for eResearch, University of Auckland.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec-Intro}\nA number of different high-energy, $\\geq$ 1GeV, neutrino sources have been proposed in literature, that include active galactic nuclei (AGNs)\\citep{ste91,sza94,nell93, ato01,alv04}, gamma-ray bursts (GRBs)\\citep{wax97,der03,raz04b,mur06,gup07}, supernova remnants \\citep{alv02,cos05} and core collapse supernovae \\citep{wax01, wan07}, although long duration GRBs have been found to be tightly connected with core-collapse supernovae \\citep{hjo03,sta03}. Properties of neutrino fluxes, energy range, shape of the energy spectra and flavor content depend on physical conditions in the sources. \nNeutrinos are useful for studying sources, especially when photons cannot escape directly. They could be the only prompt signatures of the \"hidden\" sources. These have been associated to core collapse of massive stars leading to supernovae (SNe) of type Ib,c and II with mildly relativistic jets emitted by a central engine, a black hole or a highly magnetized neutron star. Depending on the initial density and metallicity, the pre-supernova star could have different radii. 
Type Ic supernovae are believed to be He stars with radius $R_\star\approx 10^{11}$ cm, while supernovae of type II and Ib are thought to have a radius of $R_\star\approx 3\times10^{12}$ cm. \\\nRecently, IceCube reported the detection of two neutrino-induced events with energies between 1 and 10 PeV \citep{aar13}. These events have been discussed as having an extragalactic origin, for instance, GRBs \citep{cho12} and low-luminosity GRBs \citep{liu13}. On the other hand, high-energy neutrinos are produced in the decay of charged pions and muons when energetic protons in the jet interact with synchrotron thermalized photons or nucleons\/mesons (pp, pn)\/($\pi$, K) in the shocks. For internal shocks, the synchrotron radiation and the number density of particles can be calculated with enough accuracy if we know the distribution of the magnetic field and the particle momentum in the shocked region. These quantities are calculated using the energy equipartition hypothesis through the equipartition parameters: electron equipartition ($\epsilon_e=U_e\/U$) and magnetic equipartition ($\epsilon_B=U_B\/U$) \citep{mes98}. Many authors \citep{barn12,fra12,sac12,kum10,she10} have estimated these parameters to be $\epsilon_e\simeq$ 0.1 and $10^{-4}\leq \epsilon_B\leq 0.1$, to obtain a good description of more than a dozen GRBs.\\\nOn the other hand, the neutrino flavor ratio is expected to be, at the source, $\phi^0_{\nu_e}:\phi^0_{\nu_\mu}:\phi^0_{\nu_\tau}$=1 : 2 : 0 and on Earth (due to neutrino oscillations between the source and Earth) $\phi^0_{\nu_e}:\phi^0_{\nu_\mu}:\phi^0_{\nu_\tau}$=1 : 1 : 1 and $\phi^0_{\nu_e}:\phi^0_{\nu_\mu}:\phi^0_{\nu_\tau}$=1 : 1.8 : 1.8 for neutrino energies less and greater than 100 TeV, respectively, for gamma-ray bursts ($\phi^0_{\nu_l}$ is the sum of $\nu_l$ and $\bar{\nu}_l$) \citep{kas05}. It has also been pointed out that measurements of deviations from this standard flavor ratio of astrophysical high-energy neutrinos may probe new physics \citep{lea95,ath00, kas05}. \nAs is well known, neutrino properties are modified when neutrinos propagate in a medium. Depending on their flavor, neutrinos interact via neutral and\/or charged currents; for instance, $\nu_e$ interacts with electrons via both neutral and charged currents, whereas $\nu_\mu(\nu_\tau)$ interacts only via the neutral current. This induces a coherent effect in which maximal conversion of $\nu_e$ into $\nu_\mu (\nu_\tau)$ takes place. The resonant conversion of neutrinos from one flavor to another due to this medium effect is known as the Mikheyev-Smirnov-Wolfenstein effect \citep{wol78}. \nThe resonance conditions for high-energy neutrinos in hidden jets have been studied in the literature \citep{men07,raz10, sah10}. Recently, \citet{2013arXiv1304.4906O} studied three-flavor neutrino oscillations on the surface of the star for neutrino energies in the range (0.1 - 100) TeV. They found that those neutrinos generated on the surface with energies of less than 10 TeV could oscillate. Unlike previous studies, we show that these sources are capable of generating PeV neutrinos, pointing to them as possible progenitors of the PeV-energy neutrinos first observed with IceCube \citep{aar13}. 
In addition, we perform a full analysis of the resonance conditions (for two and three flavors) for neutrinos produced at different places in the star, estimating the flavor ratios on Earth.\\\nIn this paper we both show that PeV neutrinos can be produced in hidden jets and estimate the flavor ratio of high-energy neutrinos expected on Earth. Firstly, we compute the energy range of neutrinos produced by the cooling of hadrons and mesons accelerated in a mildly relativistic jet. We then take different matter density profiles to show that neutrinos may oscillate resonantly, depending on the neutrino energy and the neutrino mixing parameters. Finally, we discuss our results in the failed-jet framework.\n\section{Jet dynamics}\nFor the internal shocks, we consider a mildly relativistic shock propagating with bulk Lorentz factor $\Gamma_b=10^{0.5}\Gamma_{b,0.5}$. Behind the shock, the comoving number density of particles and the energy density are $n'_e=n'_p=1\/(8\,\pi\,m_p\,c^5)\,\Gamma_b^{-4}\,E_j\,t^{-2}_{\nu,s}\,t^{-1}_j=3.1\times10^{18}$ cm$^{-3}\,\,t^{-2}_{\nu,s}$ and $n'_p m_p c^2$, respectively, where we have taken the set of typical values for which the jet drills into but hardly breaks through the stellar envelope: the jet kinetic energy $E_j=10^{51.5} E_{j,51.5}$ erg, the variability time scale of the central object $t_\nu=t_{\nu, {\rm s}}\,{\rm s}$ with $t_{\nu,{\rm s}}$= 0.1 and 0.01, and the jet duration $t_j=10\,t_{j,1}$ s \citep{raz05,and05,2013MNRAS.432..857M}. We assume that electrons and protons are accelerated in the internal shocks to a power-law distribution $N(\gamma_j) d\gamma_j\propto \gamma_j^{-p} d\gamma_j$. The internal shocks due to shell collisions take place at a radius $r_j=2\Gamma_b^2\,c\,t_\nu= 6 \times 10^{11}\,\rm{cm}\,\Gamma^2_{0.5}\,t_{v,s}$. Electrons, with minimum energy $E_{e,m}=\frac{p-2}{p-1} \epsilon_e\,m_p c^2 \Gamma_b$ and maximum energy limited by the dynamic time scale $t'_{dyn}\simeq t_\nu\Gamma_b$, cool down rapidly by synchrotron radiation in the presence of the magnetic field given by
The radiated photon energies by electron synchrotron emission with energy $E_e$ is $E_{syn,\\gamma}=eB'\/(\\hbar m_e^3c^5) E^2_e$, and also the opacity to Thomson scattering by these photons is\n\\begin{eqnarray}\n\\tau_{th}'&=&\\frac{\\sigma_T}{4\\pi\\,m_p\\,c^4} \\Gamma_b^{-3}\\,E_j\\,t^{-1}_\\nu\\,t^{-1}_j\\cr\n&=&3.9\\times 10^5\\, \\Gamma_{b,0.5}^{-3}\\,E_{j,51.5}\\,t^{-1}_{j,1}\\,t^{-1}_{\\nu,s}\\,.\n\\label{opde}\n\\end{eqnarray}\nDue to the large Thomson optical depth, synchrotron photons will thermalize to a black body temperature, therefore the peak energy is given by\n\\begin{eqnarray}\nE'_{\\gamma}\\sim k_B\\,T_{\\gamma}&=&\\biggl(\\frac{15(\\hbar\\,c)^3}{8\\pi^4\\,c^3}\\biggr)^{1\/4}\\,\\epsilon_e^{1\/4}\\, E_j^{1\/4}\\,\\Gamma^{-1}_b\\,t_v^{-1\/2}\\,t^{-1\/4}_j\\cr\n&=&1.36\\,{\\rm keV}\\, E_{j,51.5}^{1\/4}\\,\\Gamma^{-1}_{b,0.5}\\,t^{-1\/4}_{j,1}\\,\\epsilon^{1\/4}_{e,-1}\\,t_{\\nu,s}^{-1\/2}\\,,\n\\label{enph}\n\\end{eqnarray}\nand the number density of thermalized photons is \n\\begin{eqnarray}\n\\eta'_\\gamma&=&\\frac{2\\,\\zeta(3)}{\\pi^2\\,(c\\,\\hbar)^3}\\,\\biggl(\\frac{15\\,\\hbar\\,\\epsilon_e\\, E_j}{8\\pi^4\\,\\Gamma^{4}_b\\,t_v^{2}\\,t_j} \\biggr)^{3\/4}\\cr\n&=&2.86 \\times 10^{23} {\\rm cm^{-3}}\\, E^{3\/4}_{j,51.5}\\,\\Gamma^{-3}_{b,0.5}\\,t^{-3\/4}_{j,1}\\,\\epsilon^{3\/4}_{e,-1}\\,t_{\\nu,s}^{-3\/2}\\,.\n\\label{denph}\n\\end{eqnarray}\nAlthough keV photons can hardly escape due to the high optical depth, they are able to interact with relativistic protons accelerated in the jet, producing high-energy neutrinos via charged pion decay. The pion energies depend on the proton energy and characteristics of the jet. \n\\section{Hadronic model}\nProtons accelerated in internal shocks, on the one hand, radiate photons by synchrotron radiation and also scatter the internal photons by inverse Compton (IC) scattering, and on the other hand, interact with thermal keV photons and hadrons by p$\\gamma$ and p-hadron interactions. The optical depths for p$\\gamma$ and p-hadron interactions are\n\\begin{eqnarray}\n\\tau'_{p\\gamma}&=&\\frac{4\\,\\zeta(3)\\sigma_{p\\gamma}}{\\pi^2\\,(c\\,\\hbar)^3}\\,\\biggl(\\frac{15\\,\\hbar\\,\\epsilon_e\\, E_j}{8\\pi^4\\,\\Gamma^{8\/3}_b\\,t_v^{2\/3}\\,t_j} \\biggr)^{3\/4}\\cr\n&=&3.19\\times 10^6\\, E^{3\/4}_{j,51.5}\\,\\Gamma^{-2}_{b,0.5}\\,t^{-3\/4}_{j,-1}\\,\\epsilon^{3\/4}_{e,-1}\\,t_{\\nu,s}^{-1\/2}\\,,\n\\label{optpg}\n\\end{eqnarray}\nand \n\\begin{eqnarray}\n\\tau'_{pp}&=&\\frac{\\sigma_{pp}}{4\\,\\pi\\,m_p\\,c^5}\\, E_j\\, \\Gamma^{-3}_b\\,t_v^{-1}\\,t_j^{-1} \\cr\n&=&1.77\\times 10^4\\, E_{j,51.5}\\, \\Gamma^{-3}_{b,0.5}\\,t_{j,1}^{-1}\\,t_{\\nu,s}^{-1}\\,,\n\\label{optpp}\n\\end{eqnarray}\nrespectively. Due to the optical depths for p$\\gamma$ and p-hadron interactions are very high, p$\\gamma$ and p-hadron are effective, although p-hadron interactions are more effective at lower energy than p$\\gamma$ interactions \\citep{raz04b}.\n\\subsection{Cooling time scales}\nThe shock acceleration time for an energy proton, $E'_p$, is\n\\begin{eqnarray}\nt'_{acc}&=&\\frac{2\\pi\\xi}{c} r_L =\\frac{2\\pi\\xi\\,c^{1\/2}\\,B'_{c,p}}{m_p^2}\\,E'_p\\,\\epsilon^{-1\/2}_B\\, E^{-1\/2}_j\\,\\Gamma^{2}_b\\,t_v\\,t^{1\/2}_j\\cr\n&=&2.04\\times 10^{-12}{\\rm s}\\,E'_p\\,\\xi\\, E^{-1\/2}_{j,51.5}\\,\\Gamma^{2}_{b,0.5}\\,t^{1\/2}_{j,1}\\,\\epsilon^{-1\/2}_B\\,t_{\\nu,s}\\,,\n\\label{tacc}\n\\end{eqnarray}\nwhere $r_L$ is the Larmor's radius and $\\xi$ is a factor of equality. 
The acceleration time, $t'_{acc}$, gives an account of the maximum proton energy achieved, when it is compared with the maximum cooling time scales. In the following subsections we are going to calculate the cooling time scales for protons and mesons.\n\n\n\\subsubsection{Proton cooling time scales}\n\n\nThe cooling time scale for proton synchrotron radiation is\n\\begin{eqnarray}\nt'_{p,syn}&=&\\frac{E'_p}{(dE'_p\/dt)_{syn}}=\\frac{6\\pi\\,m_p^4\\,c^6}{\\sigma_T\\,\\beta^2\\,m_e^2\\,E'_p}\\,\\epsilon^{-1}_B\\, E^{-1}_j\\,\\Gamma^{4}_b\\,t^2_v\\,t_j\\cr\n&=& 38.3\\, {\\rm s}\\,E'^{-1}_{p,9}\\, E^{-1}_{j,51.5}\\,\\Gamma^{4}_{b,0.5}\\,t_{j,1}\\,\\epsilon^{-1}_B\\,t^2_{\\nu,s}\\,.\n\\label{tsyn}\n\\end{eqnarray}\nProtons in the shock region can upscatter the thermal keV photons $E'_{IC,\\gamma}\\sim\\gamma^2_p\\,E'_\\gamma$ with peak energy and density given in eqs.(\\ref{enph}) and (\\ref{denph}). The IC cooling time scale in the Thomson regimen is\n\\begin{eqnarray}\nt'^{th}_{p,ic}&=&\\frac{E'_p}{(dE'_p\/dt)^{th}_{ic}}=\\frac{m_p^4\\,c^4\\,\\pi^6(c\\,\\hbar)^2}{5\\,\\sigma_T\\,\\beta^2\\,m_e^2\\,\\zeta(3)\\,E'_p}\\,\\epsilon^{-1}_e\\, E^{-1}_j\\,\\Gamma^{4}_b\\,t^2_v\\,t_j\\cr\n&=& 383.1\\, {\\rm s}\\,E'^{-1}_{p,9}\\, E^{-1}_{j,51.5}\\,\\Gamma^{4}_{b,0.5}\\,t_{j,1}\\,\\epsilon^{-1}_{e,-1}\\,t^2_{\\nu,s}\\, .\n\\label{tic}\n\\end{eqnarray}\nAlso, the IC cooling time scale in the Klein-Nishina (KN) regimen, $E'_pE'_\\gamma\/m_p^2c^4=\\Gamma_{KN}$ with ($\\Gamma_{KN}=1$), is\n\\begin{eqnarray}\nt'^{KN}_{p,ic}&=&\\frac{E'_p}{(dE'_p\/dt)^{KN}_{ic}}=\\frac{3\\pi^4(c\\,\\hbar)^3 \\,E'_p\\,\\epsilon^{-1\/2}_e\\, E^{-1\/2}_j\\,\\Gamma^{2}_b\\,t_v\\,t^{1\/2}_j }{2\\sqrt{30\\hbar}\\,\\sigma_T\\,\\beta^2\\,m_e^2\\,c^5\\,\\zeta(3)}\\cr\n&=&5.15 \\times 10^{-10} \\, {\\rm s}\\,E'_{p,9}\\,E^{-1\/2}_{j,51.5}\\,\\Gamma^{2}_{b,0.5}\\,t^{1\/2}_{j,1}\\,\\epsilon^{-1\/2}_{e,-1}\\,t_{\\nu,s}\\,.\n\\end{eqnarray}\nOn the other hand, protons could upscatter thermal photons according to Bethe-Heitler (BH) process. The proton energy loss is taken away by the pairs produced in this process. 
The cooling time scale for BH scattering is \n\begin{eqnarray}\nt'_{BH}&=&\frac{E'_p}{(dE'_p\/dt)_{BH}}=\frac{E'_p}{n'\,c\sigma_{BH}\Delta E'_p}\cr\n&=&\frac{E'_p\,(m^2_p\,c^4+2E'_pE'_\gamma)^{1\/2}}{2n'_\gamma\,m_e\,c^3\sigma_{BH}(E'_p+E'_\gamma)}\,,\n\end{eqnarray}\nwhere $\sigma_{BH}=\alpha r^2_e\, ((28\/9) \,\ln[2E'_p\,E'_\gamma\/(m_pm_ec^4)]-106\/9)$.\nThe cooling time scale due to pion production in p$\gamma$ interactions is \citep{ste68,bec09}\n\begin{eqnarray}\nt'_{p\gamma}&=&\frac{\pi^2\,(c\,\hbar)^3}{0.3\,c\,\sigma_{p\gamma}\,\zeta(3)}\,\biggl(\frac{8\pi^4\,\Gamma^{4}_b\,t_v^{2}\,t_j}{15\,\hbar\,\epsilon_e\, E_j} \biggr)^{3\/4}\cr\n&=&1.32\times10^{-5}\,\,{\rm s}\, E^{-3\/4}_{j,51.5}\Gamma^{3}_{b,0.5}\,t^{3\/4}_{j,1} \,\epsilon^{-3\/4}_{e,-1}\,t_{\nu,s}^{3\/2}\,,\n\end{eqnarray}\nand that for p-hadron interactions is \citep{der03,der09}\n\begin{eqnarray}\nt'_{pp}&=&\frac{10\,\pi\,m_p\,c^4}{\sigma_{pp}}\, E_j^{-1}\, \Gamma^{4}_b\,t_v^{2}\,t_j \cr \n&=&4.47\times 10^{-4}\,{\rm s}\,E_{j,51.5}^{-1}\, \Gamma^{4}_{b,0.5}\,t_{j,1} \,t_{\nu,s}^{2}\,.\n\end{eqnarray}\nIn Figs.~\ref{ptime_r1} and \ref{ptime_r2} we have plotted the proton cooling time scales for magnetic equipartition parameters in the range $10^{-4}\leq \epsilon_B\leq 0.1$ and internal shocks taking place at r=$6\times 10^{9}$ cm and r=$6\times 10^{10}$ cm, respectively.\n\subsubsection{Meson cooling time scales}\nHigh-energy charged pions and kaons produced by p-hadron and p$\gamma$ interactions ($p+\gamma\/p \to X+\pi^{\pm}\/K^{\pm}$) radiate in the presence of the magnetic field (Eq.~\ref{mfield}). Therefore, their cooling time scales are \n\begin{eqnarray}\nt'_{\pi^+, syn}&=& \frac{E'_{\pi^+}}{(dE'_{\pi^+}\/dt)} \simeq \frac{6\pi c^6 m^4_{\pi^+}}{\sigma_T\,\beta^2\,m_e^2}\,\epsilon^{-1}_B\, E^{-1}_j\,\Gamma^{2}_b\,t^2_v\,t_j \,E'^{-1}_{\pi^+}\cr\n&=&1.9\times 10^{-2} \,{\rm s}\, E'^{-1}_{\pi^+,9}\,E^{-1}_{j,51.5}\,\Gamma^{2}_{b,0.5}\,t_{j,1}\,\epsilon^{-1}_B\,t^2_{\nu,s}\,,\n\end{eqnarray}\nand\n\begin{eqnarray}\nt'_{K^+, syn}&=& \frac{E'_{k^+}}{(dE'_{k^+}\/dt)} \simeq \frac{6\pi c^6 m^4_{k^+}}{\sigma_T\,\beta^2\,m_e^2}\,\epsilon^{-1}_B\, E^{-1}_j\,\Gamma^{2}_b\,t^2_v\,t_j \,E'^{-1}_{k^+}\cr\n&=&2.94 \,{\rm s}\,E'^{-1}_{k^+,9}\, E^{-1}_{j,51.5}\,\Gamma^{2}_{b,0.5}\,t_{j,1}\,\epsilon^{-1}_B\,t^2_{\nu,s}\,.\n\end{eqnarray}\nAs protons can collide with the secondary pions and kaons ($\pi^+ p$ and $K^+p$), the corresponding cooling time scale is given by\n\begin{eqnarray}\nt'_{had}&=&\frac{10\,\pi\,m_p\,c^4}{\sigma_{(pK\/p\pi^+)}}\, E_j^{-1}\, \Gamma^{4}_b\,t_v^{2}\,t_j \cr \n&=& 4.47\times 10^{-4} \,{\rm s}\, E_{j,51.5}^{-1}\, \Gamma^{4}_{b,0.5}\,t_{j,1}\,t_{\nu,s}^{2}\,.\n\end{eqnarray}\nHere we have used the cross-section $\sigma_{(pK^+\/p\pi^+)} \approx 3\times 10^{-26}$ cm$^2$. 
Because the mean lifetime of these mesons may be comparable with the synchrotron and hadronic time scales in some energy range, it is necessary to consider the cooling time scales related to their mean lifetime, which are given by\n\begin{eqnarray}\nt'_{\pi^+,dec}&=&\frac{E'_{\pi^+}}{m_{\pi^+}c^2}\,\tau_{\pi^+}\cr\n&=&1.87\times 10^{-7} \,{\rm s}\,E'_{\pi^+,9}\,,\n\end{eqnarray}\nand\n\begin{eqnarray}\nt'_{K^+,dec}&=&\frac{E'_{K^+}}{m_{K^+}c^2}\tau_{K^+}\cr\n&=&2.51\times 10^{-8} \,{\rm s}\,E'_{K^+,9}\,,\n\end{eqnarray}\nwhere $\tau_{\pi^+\/K^+}$ is the mean lifetime of $\pi^+\/K^+$ and $E'_{\pi^+\/K^+,9}\equiv E'_{\pi^+\/K^+}\/(10^9~{\rm eV})$.\\\nIn Figs.~\ref{mtime_r1} and \ref{mtime_r2} we have plotted the meson cooling time scales for internal shocks taking place at r=$6\times 10^{9}$ cm and r=$6\times 10^{10}$ cm and for the magnetic equipartition parameter in the range $10^{-4}\leq \epsilon_B\leq 0.1$.\n\subsection{Neutrino production}\nThe single-pion production channels are $p+\gamma\to n+\pi^+$ and $p+\gamma\to p+ \pi^0$, where the relevant pion decay chains are $\pi^0\to 2\gamma$, $\pi^+\to \mu^++\nu_\mu\to e^++\nu_e+\bar{\nu}_\mu+\nu_\mu$ and $\pi^-\to \mu^-+\bar{\nu}_\mu\to e^-+\bar{\nu}_e+\nu_\mu+\bar{\nu}_\mu$ \citep{der03}, so that the threshold neutrino energy from p$\gamma$ interactions is \n\begin{eqnarray}\nE'_{\nu,\pi}&=&2.5\times 10^{-2}\biggl(\frac{8\pi^4}{15\hbar}\biggr)^{1\/4}\,\,\frac{(m^2_\Delta-m_p^2)}{(1-\cos\theta)}\cr\n&&\hspace{3.4cm}\times\, \epsilon^{-1\/4}_e\, E^{-1\/4}_j\,\Gamma_b\,t^{1\/2}_v\,t^{1\/4}_j\cr\n&=& 9.72 \,{\rm TeV}\, E^{-1\/4}_{j,51.5}\,\Gamma_{b,0.5}\,t^{1\/4}_{j,1}\, \epsilon^{-1\/4}_{e,-1}\,t^{1\/2}_{\nu,s}\,.\n\end{eqnarray}\nComparing the cooling time scales, we can estimate the neutrino break energy for each process. 
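Here we note an assumption consistent with the numerical prefactors quoted below: the neutrino is taken to carry roughly one quarter of the energy of the parent pion and roughly one half of the energy of the parent kaon,\n\begin{equation}\nE'_\nu\simeq\frac{1}{4}E'_{\pi^+},\qquad E'_\nu\simeq\frac{1}{2}E'_{K^+},\n\end{equation}\nas is customary for the charged-pion decay chain written above and for $K^+\to\mu^++\nu_\mu$.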
Equaling $t_{acc}\\simeq t'_{p,syn}$, we can approximately estimate the maximum proton energy\n\\begin{eqnarray}\nE'_{p,max}&=&\\biggl(\\frac{3\\,e\\,m_p^4\\,c^{11\/2}}{\\sigma_T\\,\\xi\\,\\beta^2\\,m_e^2}\\biggr)^{1\/2}\\,\\epsilon^{-1\/4}_B\\, E^{-1\/4}_j\\,\\Gamma_b\\,t^{1\/2}_v\\,t^{1\/4}_j\\cr\n&=&4.3\\times 10^{3} \\,{\\rm GeV}\\, E^{-1\/4}_{j,51.5}\\,\\Gamma_{b,0.5}\\,t^{1\/4}_{j,1}\\,\\epsilon^{-1\/4}_B\\,t^{1\/2}_{\\nu,s}\\,.\n\\end{eqnarray}\nFrom the condition of the synchrotron cooling time scales for mesons ($t'_{\\pi^+,syn}=t'_{had}$ and $t'_{K^+,syn}=t'_{had}$), one may roughly define the neutrino break energies as\n\\begin{eqnarray}\nE'_{\\nu,\\pi^+syn}&=&0.15\\times \\frac{m^4_{\\pi^+}\\,c^2\\,\\sigma_{pp}}{m_p\\,\\sigma_T\\,\\beta^2\\,m_e^2}\\epsilon^{-1}_B\\cr\n&=&10.5 \\,{\\rm GeV}\\, \\epsilon^{-1}_{B}\\,,\n\\end{eqnarray}\nand\n\\begin{eqnarray}\nE'_{\\nu,k^+syn}&=&0.3\\times \\frac{m^4_{k^+}\\,c^2\\,\\sigma_{pp}}{m_p\\,\\sigma_T\\,\\beta^2\\,m_e^2}\\epsilon^{-1}_B\\cr\n&=&3.28 \\,{\\rm TeV}\\,\\epsilon^{-1}_{B}\\,.\n\\end{eqnarray}\nFrom the lifetime condition of cooling time scale ($t'_{\\pi^+,dec}=t'_{had}$ and $t'_{K^+,dec}=t'_{had}$), one again we can obtain the neutrino break energies, which for these cases are \n\\begin{eqnarray}\nE'_{\\nu, \\pi^+lt}&=&2.5\\frac{\\pi\\,m_p\\,m_{\\pi^+}\\,c^6}{\\sigma_{pp}}\\,\\tau^{-1}_{\\pi^+} \\,E^{-1}_j\\,\\Gamma^{4}_b\\,t^2_v\\,t_j\\cr\n&=&0.6 \\,{\\rm TeV}\\,E^{-1}_{j,51.5}\\,\\Gamma^{4}_{b,0.5}\\,t_{j,1}\\,t^2_{\\nu,s}\\,,\n\\end{eqnarray}\nand\n\\begin{eqnarray}\nE'_{\\nu, k^+lt}&=&5\\frac{\\pi\\,m_p\\,m_{k^+}\\,c^6}{\\sigma_{pp}}\\,\\tau^{-1}_{k^+} \\,E^{-1}_j\\,\\Gamma^{4}_b\\,t^2_v\\,t_j\\cr\n&=&8.92 \\,{\\rm TeV}\\,E^{-1}_{j,51.5}\\,\\Gamma^{4}_{b,0.5}\\,t_{j,1}\\,t^2_{\\nu,s}\\,.\n\\end{eqnarray}\nIt is important to say that muons may be suppressed by electromagnetic energy losses and in that case would not contribute much to high-energy neutrino production. The ratio $\\frac{t'_{\\pi^+\/K^+,cool}}{t'_{\\pi^+\/K^+,dec}}$, where $t'_{\\pi^+\/K^+,cool}=\\frac{t'_{\\pi^+\/K^+,em}\\cdot \\,\\,\\,t'_{\\pi^+\/K^+,had}} {t'_{\\pi^+\/K^+,em}\\,\\,+\\, \\,\\,t'_{\\pi^+\/K^+,had} }$, determines the suppression of mesons before they decay to neutrinos \\citep{raz05}. \\\\\nIn fig. \\ref{prod_neu} we have plotted the neutrino energy created by distinct interaction processes at different distances, $6\\times 10^{9}$ cm (above) and $6\\times 10^{10}$ cm (below), as a function of the magnetic equipartition parameter.\n\\section{Density profile of the source}\nAnalytical and numerical models of density distribution in a pre-supernova have shown a decreasing dependence on radius $\\rho\\propto r^{-n}$, with n=3\/2 - 3 above the core, being 3\/2 and 3 convective and radiative envelopes respectively \\citep{woo93, shi90, arn91}. In particular, distributions with $\\rho \\propto r^{-3}$ and $\\rho \\propto r^{-17\/7}$ have been proposed to describe simple blast wave distributions \\citep{bet90, che89}. Following \\cite{men07}, we use three models of density profile; Model [A], Model [B] and Model [C].\\\\ Model [A] ,\n\\begin{equation}\n\\hspace{0.7cm}{\\rm [A]} ~~\\rho(r) = 4.0\\times 10^{-6} \\left( \\frac{R_\\star}{r} -1\\right)^3 ~{\\rm g~cm}^{-3}\\,,\n\\label{dens-pro-A} \\\\\n\\end{equation}\ncorresponds to a polytropic hydrogen envelope with $\\rho(r)\\propto r^{-3}$, scaling valid in the range $r_{jet}\\geq r \\geq R_\\star$. 
Model [B], \n\\begin{eqnarray}\n&&{\\rm [B]}~~\\rho(r) = 3.4\\times 10^{-5}\\, (R_\\star\/r)^{17\/7} ~{\\rm g~cm}^{-3}\\,,\\cr\n&&\\hspace{2.8cm}10^{10.8} ~{\\rm cm}< r < r_b\\,,\n\\label{dens-pro-B}\n\\end{eqnarray}\nis a power-law fit with an effective polytropic index $n_{eff}=17\/7$ as done for SN 1987A \\citep{che89}. Here $r_j \\sim 10^{10.8} ~{\\rm cm} $ is the radius of the inner border of the envelope, where the density is $\\rho=0.4$ g cm$^{-3}$. Taking the number of electrons per nucleon to be $Y_e=0.5$, we obtain the number density of electrons $N_e=N_A\\,\\rho(r)\\, Y_e=$1.2$\\times$10$^{23}$ cm$^{-3}$. Model [C], \n \\begin{eqnarray}\n{\\rm [C]} ~~\\rho(r) = 6.3\\times 10^{-6} {\\it A}\n\\left( \\frac{R_\\star}{r} -1 \\right)^{n_{\\rm eff}} \n~{\\rm g~cm}^{-3} \\cr\n(n_{\\rm eff}, {\\it A})=\n\\cases{ \n(2.1,20) ~;~ &$10^{10.8} ~{\\rm cm}< r < 10^{11} ~{\\rm cm}$ \\cr\n(2.5,1) ~;~ &$r > 10^{11} ~{\\rm cm}$\\,, \\cr\n}\n\\label{dens-pro-C}\n\\end{eqnarray}\nincludes a sharp drop in density at the edge of the helium core \\citep{mat99}. \n\\section{Neutrino Mixing}\nIn the following subsections we describe the neutrino oscillations in matter (along the jet, for the three density profiles given in Section 4) and in vacuum (on the path to Earth). We will use the best-fit parameters for two-neutrino mixing (from solar, atmospheric and accelerator neutrino experiments) and for three-neutrino mixing.\nThe best-fit values from the solar, atmospheric, reactor and accelerator neutrino experiments are given as follows.\\\\\n\\textbf{Solar Neutrinos} are electron neutrinos produced in the thermonuclear reactions which generate the solar energy. The Sudbury Neutrino Observatory (SNO) was designed to measure the flux of neutrinos produced by $^8$B decays in the sun, so-called $^8$B neutrinos, and to study neutrino oscillations, as proposed by \\cite{che85}. A two-flavor neutrino oscillation analysis gave the following parameters: $\\delta m^2=(5.6^{+1.9}_{-1.4})\\times 10^{-5}\\,{\\rm eV^2}$ and $\\tan^2\\theta=0.427^{+0.033}_{-0.029}$ \\citep{aha11}.\\\\\n\\textbf{Atmospheric Neutrinos} are muon and electron neutrinos produced mainly in the decay chain $\\pi\\to \\mu+\\nu_\\mu$ followed by $\\mu\\to e+\\nu_\\mu+\\nu_e$. The Super-Kamiokande (SK) observatory detects interactions of neutrinos with electrons or nuclei in water via the water Cherenkov technique. Under a two-flavor disappearance model with separate mixing parameters for neutrinos and antineutrinos, the following parameters were found for the SK-I + II + III data: $\\delta m^2=(2.1^{+0.9}_{-0.4})\\times 10^{-3}\\,{\\rm eV^2}$ and $\\sin^22\\theta=1.0^{+0.00}_{-0.07}$ \\citep{abe11a}.\\\\\n \\textbf{Reactor Neutrinos} are produced in nuclear reactors. The Kamioka Liquid scintillator Anti-Neutrino Detector (KamLAND) was initially designed to detect reactor neutrinos and was later adapted to measure $^7$Be solar neutrinos. A two-neutrino oscillation analysis gives $\\delta m^2=(7.9^{+0.6}_{-0.5})\\times 10^{-5}\\,{\\rm eV^2}$ and $\\tan^2\\theta=0.4^{+0.10}_{-0.07}$ \\citep{ara05,shi07,mit11}\\footnote{This value was obtained using a global analysis of data from KamLAND and solar-neutrino experiments.}.\\\\\n\\textbf{Accelerator Neutrinos} are mostly produced by $\\pi$ decays (and some K decays), with the pions produced by the scattering of the accelerated protons on a fixed target. The beam can contain both $\\mu$- and e-neutrinos and antineutrinos. 
There are two categories: long- and short-baseline experiments.\\\\\nLong-baseline experiments with accelerator beams run with a baseline of about a hundred kilometers. The K2K experiment was designed to measure neutrino oscillations using a man-made beam with well controlled systematics, complementing and confirming the measurement made with atmospheric neutrinos. It found $\\delta m^2=(2.8^{+0.7}_{-0.9})\\times 10^{-3}\\,{\\rm eV^2}$ and $\\sin^22\\theta=1.0$ \\citep{ahn06}.\\\\\nShort-baseline experiments with accelerator beams run with a baseline of hundreds of meters. The Liquid Scintillator Neutrino Detector (LSND) was designed to search for $\\nu_\\mu\\to\\nu_e$ oscillations using $\\nu_\\mu$ from $\\pi^+$ decay in flight \\citep{ath96, ath98}. This region of parameter space has been partly tested by the Karlsruhe Rutherford Medium Energy Neutrino (KARMEN) experiment \\citep{arm02} and by MiniBooNE. \\cite{chu02} found two well-defined regions of oscillation parameters, with either $\\delta m^2 \\approx 7\\, {\\rm eV^2}$ or $\\delta m^2 < 1\\, {\\rm eV^2}$, compatible with both the LSND and KARMEN experiments at the corresponding confidence levels. The MiniBooNE experiment was specifically designed to verify the LSND neutrino data. It is currently running at Fermilab and is searching for $\\nu_e (\\bar{\\nu}_e)$ appearance in a $\\nu_\\mu(\\bar{\\nu}_\\mu)$ beam. Although MiniBooNE found no evidence for an excess of $\\nu_e$ candidate events above 475 MeV in the $\\nu_\\mu\\to\\nu_e$ study, a 3.0$\\sigma$ excess of electron-like events was observed below 475 MeV \\citep{agu09,agu10,agu07}. In addition, in the $\\bar{\\nu}_\\mu\\to\\bar{\\nu}_e$ study, MiniBooNE found evidence of oscillations in the 0.1 to 1.0 eV$^2$ range, consistent with the LSND results \\citep{ath96, ath98}.\\\\\nCombining solar, atmospheric, reactor and accelerator parameters, the best-fit values of three-neutrino mixing are as follows.\n\nFor $\\sin^2\\theta_{13} < 0.053$ \\citep{aha11}:\n \\begin{equation}\n \\Delta m_{21}^2= (7.41^{+0.21}_{-0.19})\\times 10^{-5}\\,{\\rm eV^2}; \\hspace{0.1cm} \\tan^2\\theta_{12}=0.446^{+0.030}_{-0.029}\\,,\n \\end{equation}\nand for $\\sin^2\\theta_{13} < 0.04$ \\citep{wen10}:\n\\begin{equation}\n\\Delta m_{23}^2=(2.1^{+0.5}_{-0.2})\\times 10^{-3}\\,{\\rm eV^2}; \\hspace{0.1cm} \\sin^2\\theta_{23}=0.50^{+0.083}_{-0.093}\\,.\n\\label{3parosc}\n\\end{equation}\n\\subsection{Neutrino oscillation inside the jet}\nWhen neutrino oscillations take place in matter, a resonance can occur that dramatically enhances the flavor mixing and can lead to maximal conversion from one neutrino flavor to another. This resonance depends on the effective potential, the density profile of the medium, and the oscillation parameters. As $\\nu_e$ is the only flavor that interacts with the electrons of the medium via charged currents (CC), the effective potential can be obtained by calculating the difference between the CC and neutral-current (NC) contributions \\citep{kuo89}.\n\\subsubsection{Two-Neutrino Mixing}\nIn this subsection, we will consider the neutrino oscillation process $\\nu_e\\leftrightarrow \\nu_{\\mu, \\tau}$. 
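For the numerical estimates below it is convenient to record the combinations that actually enter the oscillation formulas; for instance, using the elementary identity $\\sin^22\\theta=4\\tan^2\\theta\/(1+\\tan^2\\theta)^2$ (stated here only for convenience), the solar best-fit value $\\tan^2\\theta=0.427$ corresponds to $\\sin^22\\theta\\simeq 0.84$, while $\\sin^2\\theta_{23}=0.50$ in eq. (\\ref{3parosc}) corresponds to maximal mixing, $\\sin^22\\theta_{23}=1$. 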
The evolution equation for the propagation of neutrinos in the above medium is given by\n\\begin{equation}\ni\n{\\pmatrix {\\dot{\\nu}_{e} \\cr \\dot{\\nu}_{\\mu}\\cr}}\n={\\pmatrix\n{V_{eff}-\\Delta \\cos 2\\theta & \\frac{\\Delta}{2}\\sin 2\\theta \\cr\n\\frac{\\Delta}{2}\\sin 2\\theta & 0\\cr}}\n{\\pmatrix\n{\\nu_{e} \\cr \\nu_{\\mu}\\cr}},\n\\end{equation}\nwhere $\\Delta=\\delta m^2\/2E_{\\nu}$, $V_{eff}=\\sqrt 2G_F\\, N_e$ is the effective potential, $E_{\\nu}$ is the neutrino energy, and $\\theta$ is the neutrino mixing angle. For anti-neutrinos one has to replace $N_e$ by $-N_e$. The conversion probability for a given time $t$ is\n\\begin{equation}\nP_{\\nu_e\\rightarrow {\\nu_{\\mu}{(\\nu_\\tau)}}}(t) = \n\\frac{\\Delta^2 \\sin^2 2\\theta}{\\omega^2}\\sin^2\\left (\\frac{\\omega t}{2}\\right\n),\n\\label{prob}\n\\end{equation}\nwith\n\\begin{equation}\n\\omega=\\sqrt{(V_{eff}-\\Delta \\cos 2\\theta)^2+\\Delta^2 \\sin^2\n 2\\theta}.\n\\end{equation}\nThe oscillation length for the neutrino is given by\n\\begin{equation}\nL_{osc}=\\frac{L_v}{\\sqrt{\\cos^2 2\\theta (1-\\frac{V_{eff}}{\\Delta \\cos 2\\theta})^2+\\sin^2 2\\theta}},\n\\label{osclength}\n\\end{equation}\nwhere $L_v=2\\pi\/\\Delta$ is the vacuum oscillation length. If the density of the medium is such that the condition $\\sqrt2 G_F\\,N_e=\\Delta\\,\\cos2\\theta$ is satisfied, the resonant condition, \n\n\\begin{equation}\nV_{eff,R}=\\Delta \\cos 2\\theta\\,,\n\\label{reso2d}\n\\end{equation}\ncan come about, therefore the resonance length can be written as\n\\begin{equation}\nL_{res}=\\frac{L_v}{\\sin 2\\theta}.\n\\label{oscres}\n\\end{equation}\nCombining eqs (\\ref{oscres}) and (\\ref{reso2d}) we can obtain the resonance density as a function of resonance length\n{\\scriptsize \n\\begin{equation}\\label{p1}\n\\textbf{$\\rho_R$}=\n\\cases{\n\\frac{3.69\\times 10^{-4}}{E_{\\nu,TeV}}\t\\, \\biggl[ 1- E_{\\nu,TeV}^2\\biggl( \\frac{4.4 \\times 10^{12} \\,cm}{l_r}\\biggr)^2 \\biggr]^{1\/2}{\\rm gr\/cm^3} & {\\rm sol.} \\,, \\nonumber\\cr\n\\frac{1.39\\times 10^{-2}}{E_{\\nu,TeV}}\t\\, \\biggl[ 1- E_{\\nu,TeV}^2\\biggl( \\frac{1.18 \\times 10^{11}\\,cm}{l_r}\\biggr)^2 \\biggr]^{1\/2} {\\rm gr\/cm^3} & {\\rm atmosp.}\\,,\\nonumber\\cr\n\\frac{3.29}{E_{\\nu,TeV}}\t \\, \\biggl[ 1- E_{\\nu,TeV}^2\\biggl( \\frac{4.9 \\times 10^8\\,cm}{l_r}\\biggr)^2 \\biggr]^{1\/2} {\\rm gr\/cm^3} & {\\rm accel.}\\,, \\cr\n}\n\\end{equation}\n}\n\\noindent where sol, atmosp. and accel. correspond to solar, atmospheric and accelerator parameters.\\\\\nIn addition of the resonance condition, the dynamics of this transition must be determined by adiabatic conversion through the adiabaticity parameter \n \\begin{equation}\n\\gamma\\equiv \\frac{\\delta m^2}{2E}\\sin2 \\theta\\,\\tan2 \\theta\\frac{1}{\\mid \\frac1\\rho\\, \\frac {d\\rho}{dr}\\mid}_R\\,,\n\\end{equation}\nwith $\\gamma\\gg$ 1 or the flip probability given by\n\\begin{equation}\n P_f= e^{-\\pi\/2\\,\\gamma}\\,,\n \\label{flip}\n \\end{equation}\n where $\\rho$ is given by eqs. (\\ref{dens-pro-A}), (\\ref{dens-pro-B}) and (\\ref{dens-pro-C}).\n\n\n\\subsubsection{Three-neutrino Mixing}\nTo determine the neutrino oscillation probabilities we have to solve the evolution equation of the neutrino system in the matter. 
In a three-flavor framework, this equation is given by\n\\begin{equation}\ni\\frac{d\\vec{\\nu}}{dt}=H\\vec{\\nu},\n\\end{equation}\nand the state vector in the flavor basis is defined as\n\\begin{equation}\n\\vec{\\nu}\\equiv(\\nu_e,\\nu_\\mu,\\nu_\\tau)^T.\n\\end{equation}\nThe effective Hamiltonian is\n\\begin{equation}\nH=U\\cdot H^d_0\\cdot U^\\dagger+diag(V_{eff},0,0),\n\\end{equation}\nwith\n\\begin{equation}\nH^d_0=\\frac{1}{2E_\\nu}diag(-\\Delta m^2_{21},0,\\Delta m^2_{32}),\n\\end{equation}\nwhere $V_{eff}$ is the same effective potential given in the two-neutrino mixing subsection and $U$ is the three-neutrino\nmixing matrix given by \\cite{gon03,akh04,gon08,gon11}\n\\begin{equation}\nU =\n{\\pmatrix\n{\nc_{13}c_{12} & s_{12}c_{13} & s_{13}\\cr\n-s_{12}c_{23}-s_{23}s_{13}c_{12} & c_{23}c_{12}-s_{23}s_{13}s_{12} & s_{23}c_{13}\\cr\ns_{23}s_{12}-s_{13}c_{23}c_{12} &-s_{23}c_{12}-s_{13}s_{12}c_{23} & c_{23}c_{13}\\cr\n}},\n\\end{equation}\nwhere $s_{ij}=\\sin\\theta_{ij}$, $c_{ij}=\\cos\\theta_{ij}$, and we have taken the Dirac phase $\\delta=0$. For anti-neutrinos one has to replace $U$ by $U^*$. The different neutrino probabilities are given as\n{\\scriptsize \n\\begin{eqnarray}\nP_{ee}&=&1-4s^2_{13,m}c^2_{13,m}S_{31}\\,,\\nonumber\\\\\nP_{\\mu\\mu}&=&1-4s^2_{13,m}c^2_{13,m}s^4_{23}S_{31}-4s^2_{13,m}s^2_{23}c^2_{23}S_{21}-4\nc^2_{13,m}s^2_{23}c^2_{23}S_{32}\\,,\\nonumber\\\\\nP_{\\tau\\tau}&=&1-4s^2_{13,m}c^2_{13,m}c^4_{23}S_{31}-4s^2_{13,m}s^2_{23}c^2_{23}S_{21}-4\nc^2_{13,m}s^2_{23}c^2_{23}S_{32}\\,,\\nonumber\\\\\nP_{e\\mu}&=&4s^2_{13,m}c^2_{13,m}s^2_{23}S_{31}\\,,\\nonumber\\\\\nP_{e\\tau}&=&4s^2_{13,m}c^2_{13,m}c^2_{23}S_{31}\\,,\\nonumber\\\\\nP_{\\mu\\tau}&=&-4s^2_{13,m}c^2_{13,m}s^2_{23}c^2_{23}S_{31}+4s^2_{13,m}s^2_{23}c^2_{23}S_{21}+4\nc^2_{13,m}s^2_{23}c^2_{23}S_{32}\\,,\\nonumber\\\\\n\\end{eqnarray}\n}\nwhere\n\\begin{equation}\n\\sin\n2\\theta_{13,m}=\\frac{\\sin2\\theta_{13}}{\\sqrt{(\\cos2\\theta_{13}-2E_{\\nu}V_e\/\\delta\n m^2_{32})^2+(\\sin2\\theta_{13})^2}},\n\\end{equation}\nand\n\\begin{equation}\nS_{ij}=\\sin^2\\biggl(\\frac{\\Delta\\mu^2_{ij}}{4E_{\\nu}}L\\biggr).\n\\end{equation}\nHere $\\Delta\\mu^2_{ij}$ are given by \n\\begin{eqnarray}\n\\Delta\\mu^2_{21}&=&\\frac{\\Delta\n m^2_{32}}{2}\\biggl(\\frac{\\sin2\\theta_{13}}{\\sin2\\theta_{13,m}}-1\\biggr)-E_{\\nu}V_e\\,,\\nonumber\\\\\n\\Delta\\mu^2_{32}&=&\\frac{\\Delta\n m^2_{32}}{2}\\biggl(\\frac{\\sin2\\theta_{13}}{\\sin2\\theta_{13,m}}+1\\biggr)+E_{\\nu}V_e\\,,\\nonumber\\\\\n\\Delta\\mu^2_{31}&=&\\Delta m^2_{32} \\frac{\\sin2\\theta_{13}}{\\sin2\\theta_{13,m}}\\,,\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\sin^2\\theta_{13,m}&=&\\frac12\\biggl(1-\\sqrt{1-\\sin^22\\theta_{13,m}}\\biggr)\\,,\\nonumber\\\\\n\\cos^2\\theta_{13,m}&=&\\frac12\\biggl(1+\\sqrt{1-\\sin^22\\theta_{13,m}}\\biggr)\\,.\n\\end{eqnarray}\nThe oscillation length for the neutrino is given by\n\\begin{equation}\nL_{osc}=\\frac{L_v}{\\sqrt{\\cos^2 2\\theta_{13} (1-\\frac{2 E_{\\nu} V_e}{\\delta m^2_{32} \\cos 2\\theta_{13}} )^2+\\sin^2 2\\theta_{13}}},\n\\label{osclength3}\n\\end{equation}\nwhere $L_v=4\\pi E_{\\nu}\/\\delta m^2_{32}$ is the vacuum oscillation length. 
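To get a feeling for the scales involved, a direct numerical evaluation of this definition (restoring $\\hbar c$; no additional input is used) gives, for $E_\\nu=1$ TeV and $\\delta m^2_{32}=2.1\\times 10^{-3}\\,{\\rm eV^2}$,\n\\begin{equation}\nL_v=\\frac{4\\pi E_{\\nu}}{\\delta m^2_{32}}\\simeq 2.48\\,{\\rm km}\\,\\frac{(E_\\nu\/{\\rm GeV})}{(\\delta m^2_{32}\/{\\rm eV^2})}\\simeq 1.2\\times 10^{11}\\,{\\rm cm}\\,,\n\\end{equation}\nwhich is comparable to the characteristic length appearing in the atmospheric entry of eq. (\\ref{p1}) and to the distances ($10^{11}-10^{12}$ cm) at which the oscillation probabilities are evaluated below. 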
From the resonance condition, $\\sqrt2 G_F\\,N_e=\\Delta \\cos2\\theta_{13}$, the resonance length and density are related as\n\\begin{equation} \n\\rho_R=\\frac{1.9\\times 10^{-2}}{E_{\\nu,TeV}}\t\\, \\biggl[ 1- E_{\\nu,TeV}^2\\biggl( \\frac{8.2 \\times 10^{10} \\,cm}{l_r}\\biggr)^2 \\biggr]^{1\/2}{\\rm gr\/cm^3}\\,.\n\\label{p2}\n\\end{equation}\nOn the other hand, generalizing the adiabaticity parameter, $\\gamma$, to three-neutrino mixing, it can be written as\n \\begin{equation}\n\\gamma\\equiv \\frac{\\delta m_{32}^2}{2E}\\sin2 \\theta_{13}\\,\\tan2 \\theta_{13}\\frac{1}{\\mid \\frac1\\rho\\, \\frac {d\\rho}{dr}\\mid}_R\\,,\n\\end{equation}\nwith the flip probability given by eq. (\\ref{flip}).\n\\subsection{Neutrino Oscillation from Source to Earth}\nBetween the surface of the star and the Earth, the flavor ratio $\\phi^0_{\\nu_e}:\\phi^0_{\\nu_\\mu}:\\phi^0_{\\nu_\\tau}$ is affected by full three-flavor mixing, which is calculated as follows. The probability for a neutrino to oscillate from a flavor state $\\alpha$ to a flavor state $\\beta$ after a time $t$, starting from the emission of the neutrino at the star at t=0, is given as\n\\begin{eqnarray}\nP_{\\nu_\\alpha\\to\\nu_\\beta} &=&\\mid < \\nu_\\beta(t) | \\nu_\\alpha(t=0) > \\mid^2\\cr\n&=&\\delta_{\\alpha\\beta}-4 \\sum_{j>i}\\,U_{\\alpha i}U_{\\beta i}U_{\\alpha j}U_{\\beta j}\\,\\sin^2\\biggl(\\frac{\\delta m^2_{ij} L}{4\\, E_\\nu} \\biggr)\\,.\n\\end{eqnarray}\nUsing the set of parameters given in eq. (\\ref{3parosc}), we can write the mixing matrix\n\\begin{equation}\nU =\n{\\pmatrix\n{\n0.816669\t & 0.544650 & 0.190809\\cr\n -0.504583 & 0.513419 &\t 0.694115\\cr\n 0.280085 & -0.663141 & 0.694115\\cr\n}}\\,.\n\\end{equation}\nAveraging the $\\sin^2$ term in the probability to $\\sim 0.5$ for large distances L \\citep{lea95}, the probability matrix for a neutrino flavor vector of ($\\nu_e$, $\\nu_\\mu$, $\\nu_\\tau$)$_{source}$ changing to a flavor vector ($\\nu_e$, $\\nu_\\mu$, $\\nu_\\tau$)$_{Earth}$ is given as\n\\begin{equation}\n{\\pmatrix\n{\n\\nu_e \\cr\n\\nu_\\mu \\cr\n\\nu_\\tau \\cr\n}_{Earth}}\n=\n{\\pmatrix\n{\n0.534143\t & 0.265544\t & 0.200313\\cr\n 0.265544\t & 0.366436\t & 0.368020\\cr\n 0.200313\t & 0.368020\t & 0.431667\\cr\n}}\n{\\pmatrix\n{\n\\nu_e \\cr\n\\nu_\\mu \\cr\n\\nu_\\tau \\cr\n}_{source}}\n\\label{matrixosc}\n\\end{equation}\nfor distances much larger than the size of the solar system. \n\\section{Results and Discussions}\nWe have considered the core collapse of massive stars leading to supernovae (SNe) of type Ib,c and II with mildly relativistic jets. Although this mildly relativistic jet may not be able to break through the stellar envelope, electrons and protons are expected to be accelerated in the internal shocks, and then to be cooled down by synchrotron radiation, inverse Compton and hadronic processes (p$\\gamma$ and p-hadron\/meson). Photons from electron synchrotron radiation, thermalized to a peak energy of some keV, serve as a cooling mechanism for accelerated protons by means of p$\\gamma$ interactions. Another proton cooling mechanism considered here is p-p interactions, due to the high number density of protons (3.1 $\\times 10^{20}$ cm$^{-3} \\leq n'_p \\leq $ 3.1 $\\times 10^{22}$ cm$^{-3}$ ) \\citep{raz05}. In p$\\gamma$ and p-p interactions, high-energy pions and kaons are created, which in turn interact with protons by $\\pi$-p and $K$-p interactions, producing another hadronic\/meson cooling mechanism. 
To illustrate the degree and energy region of efficiency of each cooling process, we have plotted the proton (figures \\ref{ptime_r1} and \\ref{ptime_r2}) and meson (figures \\ref{mtime_r1} and \\ref{mtime_r2}) time scales when internal shocks take place at $6\\times 10^{9}$ cm and $6\\times 10^{10}$ cm and, the magnetic field lies in the range 3.4$\\times 10^7$ G $\\leq B' \\leq$ 1.1$\\times 10^{10}$ G. Comparing the time scales in figures \\ref{ptime_r1} and \\ref{ptime_r2}, one can observe that the maximum proton energy is when the acceleration and synchrotron time scales are equal; it happens when proton energy is in the range $10^{15} eV \\leq E'_p \\leq 10^{16}$ eV which corresponds to internal shocks at $6\\times 10^9$ cm with $B' = 1.1\\times 10^{10}$ G and $6\\times 10^{10}$ cm with $B'=3.4\\times 10^7$ G, respectively. In figs. \\ref{mtime_r1} and \\ref{mtime_r2}, one can see that hadronic time scales are equal to other time scales at different energies. For instance, internal shocks at $6\\times 10^{10}$ cm and $B'=1.1\\times 10^{9}$ G, the time scales of pion synchrotron emission and hadronic are equal for pion energy $\\sim 5\\times 10^{11}$ eV. Computing the break meson energies for which time scales are equal to each other, we can estimate the break neutrino energies. From the equality of kaon\/pion lifetime and synchrotron cooling time scales we obtain the break neutrino energies $\\sim$(24\/179) GeV and $\\sim$428 GeV\/69 TeV, respectively. Also, considering p$\\gamma$ interactions the threshold neutrino energy $\\sim$ 3 TeV is obtained. Taking into account the distances of internal shocks ($6\\times 10^{9}$ cm and $6\\times 10^{10}$ cm)\nwe have plotted the neutrino energy as a function of the magnetic equipartition parameter in the range 0.1$\\leq \\epsilon_B\\leq 10^{-4}$ (3.4$\\times 10^7$ G $\\leq B' \\leq$ 1.1$\\times 10^{10}$ G). As shown in the fig. \\ref{prod_neu}, neutrino energy between 1 - 10 PeV can be generated for $\\epsilon_B$ between 3.5$\\times 10^{-3}$ and 4.1$\\times 10^{-4}$, that corresponds to a magnetic field in the range 2.02$\\times 10^8$ (2.02$\\times 10^9$) G - 6.9$\\times 10^7$ (6.9$\\times 10^8$) G at $6\\times 10^{9}$ cm and $6\\times 10^{10}$ cm from the central engine, respectively. Under this scenario, chocked jets are bright in high-energy neutrinos and dark in gamma rays.\\\\\nOn the other hand, taking into account the range of neutrino energy (24 GeV$\\leq E_\\nu\\leq$ 69 TeV), internal shocks at a distance of 6$\\times 10^{10}$ cm, strength of magnetic field of 1.1$\\times 10^{10}$ G and considering three models of density profile (see section III. eqs. \\ref{dens-pro-A}, \\ref{dens-pro-B} and \\ref{dens-pro-C}) of a pre-supernova star, we present a full description of two- and three-flavor neutrino oscillations. Based on these models of density profiles we calculate the effective potential, the resonance condition and, the resonance length and density. From the resonance condition, we obtain the resonance density ($\\rho_R$) as a function of resonance length ($l_R$) for two (eq. \\ref{p1}) and three flavors (eq. \\ref{p2}). We overlap the plots of the density profiles as a function of distance with the resonance conditions (resonance density as a function of resonance length). They are shown in Fig \\ref{twoflavor} (two flavors) and in Fig. \\ref{threeflavor} (three flavors). For two flavors, we have taken into account solar (top), atmospheric (middle) and accelerator (bottom) parameters of neutrino experiments. 
Using solar parameters, the resonance length is in the range $\\sim (10^{11} - 10^{14.2})$ cm and resonance density in $\\sim (10^{-2} - 10^{-4})$g\/cm$^3$. As can be seen, neutrinos with energy 24 GeV are the only ones that meet the resonance condition for all models of density profiles while neutrinos of energy 178 GeV meet marginally the resonance condition just for the model [B]. Neutrinos with other energy cannot meet the resonance condition. Using atmospheric parameters, the resonance length lies in the range $\\sim (10^{9.1} - 10^{13.3})$ cm and the resonance density in $\\sim (10^{1} - 10^{-4})$g\/cm$^3$. As shown, neutrinos in the energy range of 178 GeV - 3 TeV can oscillate many times before leaving the source. Although the resonance length of neutrino with energy 24 GeV is smaller than star radius, the resonance density is greater than other models. Using accelerator parameters, the resonance length is less than $\\sim 10^{10.2}$ cm and the resonance density lies in the range $\\sim (10^{2} - 10^{2})$g\/cm$^3$. Although the resonance length is smaller than the star radius for two flavors, the one that meets the resonance density is the neutrino energy 69 TeV. For three flavors, the range of resonance length is $\\sim (10^{9} - 10^{12.5})$ cm and resonance density is $\\sim (0.9 - 10^{-4})$g\/cm$^3$, presenting a similar behavior to that described by means of atmospheric parameters. \nAs the dynamics of resonant transitions is not only determined by the resonance condition, but also by adiabatic conversion, we plot the flip probability as a function of neutrino energy for two (fig \\ref{twoflip}) and three flavors (fig \\ref{threeflip}). Dividing the plots of flip probabilities in three regions of less than 0.2, between 0.2 and 0.8 and greater than 0.8, we have that in the first case (P$_\\gamma \\leq$ 0.2), a pure adiabatic conversion occurs, the last case (P$_\\gamma \\geq$ 0.8) is a strong violation of adiabaticity and the intermediate region 0.2 $<$ P$_\\gamma$ $<$ 0.8 represents the transition region \\citep{dig00}. In Fig. \\ref{twoflip}, the top, middle and bottom plots are obtained using solar, atmospheric and accelerator parameters of neutrino oscillations, respectively. As shown in top figure, the pure adiabatic conversion occurs when neutrino energy is less than $5\\times 10^{11}$ eV for model [A] and [C] and, $\\sim 10^{12}$ eV for model [B] and, the strong violation of adiabaticity is given for neutrino energy greater than $6\\times 10^{12}$ eV in the three profiles. In the middle figure, one can see that independently of the profile, neutrinos with energy of less than E$_\\nu$=10$^{14}$ eV can have pure adiabatic conversions. In the bottom figure, the three models of density profiles have the same behavior for the whole energy range. Neutrinos with energy less than $\\sim 10^{11.3}$ eV and greater than $\\sim 10^{13.2}$ eV present conversion adiabatically pure and strong violation, respectively. In fig. \\ref{threeflip}, the flip probability for three flavors are plotted. The energy range for each region of P$_\\gamma$ changes marginally according to the model of density profile. Neutrinos with E$\\sim 10^{12}$ eV are capable of having pure adiabatic conversion in [B] but not in [A] or [C]. The strong violation of adiabaticity begins when the neutrino energies are E$\\sim 10^{13}$ eV and E$\\sim 10^{13.8}$ eV, for [A] and [C], respectively. \\\\\nOn the other hand, we have also plotted (fig. 
\\ref{proen}) the oscillation probabilities for three flavors as a function of energy when neutrinos keep moving at a distance of r=$10^{11}$ cm (above) and r=$10^{12}$ cm (below) from the core. In the top figure, the survival probability of electron neutrino, P$_{ee}$, is close to one regardless of neutrino energy, therefore the conversion probabilities P$_{\\mu e}$ and P$_{\\tau e}$ are close to zero, as shown. Depending on the neutrino energy, the survival probability of muon and tau neutrino, P$_{\\mu \\mu}$ and P$_{\\tau \\tau}$, oscillates between zero and one. For example, for E$\\sim 430$ GeV, the conversion probability of muon P$_{\\mu \\tau}$ is close to zero while the survival probability of muon and tau neutrino, P$_{\\mu \\mu}$ and P$_{\\tau \\tau}$, are close to one, and for E$\\sim 1$ TeV probabilities change dramatically, being P$_{\\mu \\tau}\\sim 1$ and P$_{\\tau \\tau}$=P$_{\\mu \\mu}\\sim$ 0. In the bottom figure, neutrinos are moving along the jet at r=10$^{12}$ cm and although the survival and conversion probabilities have similar behaviors to those moving to r=10$^{11}$ cm, they are changing faster. To have a better understanding, we have separated all probabilities and plotted them in fig. \\ref{prosep}.\nFrom up to down, the probabilities of electron neutrino and survival probability of muon neutrino are shown in the first and second graph, respectively, and the conversion and survival probability of tau neutrino are plotted in the third and four graph, respectively. Moreover, we have plotted in figs. \\ref{prob_dist} and \\ref{prob_dist2} the oscillation probabilities as a function of distance, when neutrinos are produced at a radius $6\\times 10^{9}$ cm and $6\\times 10^{10}$ cm, respectively, and continue to propagate along the jet. We take into account four neutrino energies $E_\\nu$=178 GeV, $E_\\nu$=428 GeV, $E_\\nu$=3 TeV and $E_\\nu$=69 TeV. \nAs shown, as neutrino energy increases, the probabilities oscillate less. For instance, when an electron neutrino with energy $E_\\nu$=178 GeV propagates along the jet, the survival probability of electron changes from one at $\\sim 8\\times 10^{10}$ cm to zero at $\\sim 9.5\\times 10^{10}$ cm. For $E_\\nu$=428 GeV(3 TeV), the survival probabilities change from one at $9.1\\times 10^{10}$ ($6.0\\times 10^{10}$) cm to zero at $1.8\\times 10^{11}$($3.5\\times 10^{11}$) cm and for $E_\\nu$=69 TeV, the probability is constant in this range (greater than $\\sim 10^{12}$ cm). In the last case, neutrino does not oscillate to another flavor during its propagation.\nFinally, considering a flux ratio for $\\pi$, K and $\\mu$ decay of 1: 2: 0, the density profile [A] and oscillation probabilities at three distances (10$^{11}$ cm, 10$^{11.5}$ cm and 10$^{12}$ cm), we show in table 1 the flavor ratio on the surface of star. Also, computing the vacuum oscillation effects between the source and Earth (Eq. \\ref{matrixosc}), we estimate and show in table \\ref{flaratio} the flavor ratio expected on Earth when neutrinos emerge from the star at L=(10$^{11}$, 10$^{11.5}$ and 10$^{12}$) cm . 
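As a concrete illustration of how the flavor ratios on Earth follow from eq. (\\ref{matrixosc}) (a simple matrix--vector multiplication; the numbers are taken from the tables below and no additional input is used), consider $E_\\nu=68.5$ TeV and L=10$^{11}$ cm: applying the probability matrix of eq. (\\ref{matrixosc}) to the surface ratio 0.999:1.997:0.003 gives\n\\begin{eqnarray}\n\\phi^0_{\\nu_e}&\\simeq& 0.534\\times 0.999+0.266\\times 1.997+0.200\\times 0.003\\simeq 1.065\\,,\\nonumber\\\\\n\\phi^0_{\\nu_\\mu}&\\simeq& 0.998\\,,\\hspace{0.5cm} \\phi^0_{\\nu_\\tau}\\simeq 0.936\\,,\n\\end{eqnarray}\nreproducing the corresponding entry of table \\ref{flaratio}. 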
\n\n\n\\begin{table}\n\\begin{center}\\renewcommand{\\tabcolsep}{0.2cm}\n\\renewcommand{\\arraystretch}{0.89}\n\\begin{tabular}{|c|c|c|c|c|c|}\\hline\n$E_{\\nu}$ &$\\phi_{\\nu_e}:\\phi_{\\nu_\\mu}:\\phi_{\\nu_\\tau}$ &$\\phi_{\\nu_e}:\\phi_{\\nu_\\mu}:\\phi_{\\nu_\\tau}$&$\\phi_{\\nu_e}:\\phi_{\\nu_\\mu}:\\phi_{\\nu_\\tau}$ \\\\\n(TeV)&(L=10$^{11}$ cm)&(L=10$^{11.5}$ cm)&(L=10$^{12}$ cm)\\\\ \\hline\n\n0.024 & 0.946:1.949:0.115 & 0.697:1.405:0.899 & 0.881:1.578:0.541 \\\\\\hline\n\n0.178 & 0.510:1.814:0.676 & 0.987:1.386:0.627 & 0.507:1.807:0.686 \\\\\\hline\n\n0.428 & 0.983:1.589:0.428 & 0.659:1.871:0.524 & 0.538:1.721:0.741\\\\\\hline\n\n3 & 0.896:1.212:0.892 & 0.502:1.753:0.744 & 0.501:1.762:0.737 \\\\\\hline\n\n68.5 & 0.999:1.997:0.003 & 0.998:1.972:0.030 & 0.979:1.746:0.275 \\\\\\hline\n\n\\end{tabular}\n\\label{tatm}\n\\end{center}\n\\caption{\\small\\sf The flavor ratio on the surface of source for five neutrino energies (E$_{\\nu}$=24 GeV, 178 GeV, 428 GeV, 3 TeV and 68.5 TeV), leaving the star to three distances L=10$^{11}$ cm, 10$^{11.5}$ cm, and 10$^{12}$ cm. }\n\\label{flaratio}\n\\end{table}\n\n\n\\begin{table}\n\\begin{center}\\renewcommand{\\tabcolsep}{0.2cm}\n\\renewcommand{\\arraystretch}{0.89}\n\\begin{tabular}{|c|c|c|c|c|c|}\\hline\n$E_{\\nu}$ &$\\phi^0_{\\nu_e}:\\phi^0_{\\nu_\\mu}:\\phi^0_{\\nu_\\tau}$ &$\\phi^0_{\\nu_e}:\\phi^0_{\\nu_\\mu}:\\phi^0_{\\nu_\\tau}$&$\\phi^0_{\\nu_e}:\\phi^0_{\\nu_\\mu}:\\phi^0_{\\nu_\\tau}$ \\\\\n(TeV)&(L=10$^{11}$ cm)&(L=10$^{11.5}$ cm)&(L=10$^{12}$ cm)\\\\ \\hline\n\n0.024 & 1.046:1.008:0.956 & 0.925:1.031:1.045 & 0.998:1.011:0.991 \\\\\\hline\n\n0.178 & 0.889:1.049:1.062 & 1.021:1.000:0.978 & 0.888:1.049:1.063 \\\\\\hline\n\n0.428 & 1.033:1.000:0.966 & 0.954:1.053:1.047 & 0.893:1.046:1.061 \\\\\\hline\n\n3 & 0.979:1.010:1.011 & 0.883:1.049:1.067 & 0.883:1.050:1.067 \\\\\\hline\n\n68.5 & 1.065:0.998:0.936 & 1.063:0.999:0.939 & 1.042:1.001:0.957\\\\\\hline\n\n\\end{tabular}\n\\label{tatm}\n\\end{center}\n\\caption{\\small\\sf The flavor ratio expected on Earth for five neutrino energies (E$_{\\nu}$=24 GeV, 178 GeV, 428 GeV, 3 TeV and 68.5 TeV), leaving the star to three distances L=10$^{11}$ cm, 10$^{11.5}$ cm, and 10$^{12}$ cm.}\n\\label{flaratio}\n\\end{table}\n\\section{Summary and conclusions}\nWe have done a wide description of production channels of high-energy neutrinos in a middle relativistic hidden jet and also shown that neutrinos with energies between 1 - 10 PeV can be generated. Taking into account a particular range of neutrino energies generated in the internal shocks at a distance of 6$\\times 10^{10}$ cm and with a distribution of magnetic field 1.1$\\times 10^{10}$ G, we have shown their oscillations between flavors along the jet for three models of density profiles. For two neutrinos mixing, we have used the fit values of neutrino oscillation parameters from solar, atmospheric, and accelerator experiments and analyzing the resonance condition we found that the resonance lengths are the largest and resonance densities are the smallest for solar parameters and using accelerator parameters we have obtained the opposite situation, the resonance lengths are the smallest and resonance densities are the largest. 
The most favorable condition for high-energy neutrinos to oscillate resonantly before leaving the source is obtained with the atmospheric parameters, and these conversions would be purely adiabatic.\nFor three-neutrino mixing, we have calculated the flavor ratio on the surface of the source as well as that expected on Earth. Our analysis shows that deviations from 1:1:1 are obtained at different energies and places along the jet, as given in table 2. From the analysis of the flip probability we also show that whether neutrinos oscillate depends on their energy and on the parameters of the neutrino experiments. As a particular case, when the three-flavor parameters are considered (fig. \\ref{threeflip}), we find that neutrinos with energies above 10 TeV can hardly oscillate, in agreement with the result of \\citet{2013arXiv1304.4906O}.\\\\ As shown, depending on the flavor ratio obtained on Earth, we could differentiate the progenitor and its density profile at different depths in the source, as well as understand similar features between lGRBs and core-collapse supernovae. Distinct arrival times of the neutrino flavor ratios will provide constraints on the density profile at different places in the star \\citep{bar12}. \nThese observations, in detectors such as IceCube, ANTARES and KM3NeT, would be compelling evidence that choked jets are bright in neutrinos \\citep{abb12, abb13, pra10,lei12}.\nThe number of sources with hidden jets may be much larger than that of sources with successful jets, being limited only by the ratio of the type Ib\/c and type II SN rates to the GRB rate. Within 10 Mpc, the rate of core-collapse supernovae is $\\sim$1 - 3 yr$^{-1}$, with a large contribution from galaxies at around 3 - 4 Mpc. At larger distances several neutrino events are still expected in IceCube, and the supernova rate is $\\geq$ 10 yr$^{-1}$ within 20 Mpc \\citep{and05}. Recently, \\citet{tab10} calculated the events expected in DeepCore and the neutrino-induced cascades in km$^3$ detectors for neutrino energies $\\leq$ 10 GeV and $\\leq$ a few TeV, respectively, and forecast that $\\sim$ 4 events in DeepCore and $\\sim$ 6 neutrino-induced cascades in IceCube\/KM3NeT would be expected. An extension of this calculation up to higher energies should be done to correlate the expected events from these sources with the number of PeV neutrinos observed with IceCube \\citep{aar13}. \\\\ \nInterference effects in the detector from atmospheric neutrino oscillations are very small (less than 10 \\%), due to the short path traveled by neutrinos in comparison with cosmological distances \\citep{men07}.\n\n\\section*{Acknowledgements}\n\nWe thank the referee for a critical reading of the paper and valuable suggestions. We also thank B. Zhang, K. Murase, William H. Lee, Fabio de Colle, Enrique Moreno and Antonio Marinelli for useful discussions. 
NF gratefully acknowledges a Luc Binette-Fundaci\\'on UNAM Posdoctoral Fellowship.\n \n \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\\label{intro}\n\nWe study transition fronts for one-dimensional reaction-diffusion equations\nwith \\emph{compactly-perturbed ignition-monostable reactions}.\nConsider the evolution PDE\n\\begin{equation} \\label{eq:main}\nu_t = u_{xx} + f(x,u),\\quad (t,x)\\in \\mathbb{R}\\times \\mathbb{R},\n\\end{equation}\nwhere the nonlinearity $f$ satisfies the following on $\\mathbb{R}\\times [0,1]$:\n\\begin{enumerate}[label=(F\\arabic*)]\n \\setlength\\itemindent{10pt}\n\\item $f\\ge 0$ is Lipschitz continuous with $\\gamma:=\\mbox{Lip}(f)$, and $f(x,0)=f(x,1) = 0$ for all $x\\in \\mathbb{R}$;\n \\label{item:general}\n \n\\item there exists $L>0$ such that $f(x,u)\\equiv f_0(u)$ for all $|x|\\ge L$, where $f_0$ is an ignition reaction with $f_0\\equiv 0$ on $[0,\\theta_0]\\cup \\{1\\}$, $f_0>0$ on $(\\theta_0 ,1)$, and $f_0$ is non-increasing on $[1-\\theta_1,1]$ for some $\\theta_0,\\theta_1\\in (0,1)$;\n \\label{item:ignition}\n \n\\item \\label{item:regularity} the (right hand) derivative $a(x):=f_u(x,0)\\ge 0$ exists, and for all $\\varepsilon>0$, there exists $\\zeta=\\zeta(\\varepsilon)\\in(0,\\theta_0)$ such that\n \\begin{equation*}\n \n (1-\\varepsilon)a(x)u\\leq f(x,u)\\leq (a(x)+\\varepsilon)u\\quad \\mbox{for }(x,u)\\in \\mathbb{R}\\times [0,\\zeta].\n \\end{equation*}\n\\end{enumerate}\nAs described above, $f$ is obtained by perturbing a homogeneous ignition reaction $f_0$ locally on the interval $[-L,L]$ with an inhomogeneous monostable reaction.\nIn the present work, we are interested in how such perturbation affects the existence of transition fronts.\n\nThe PDE \\eqref{eq:main} and its variations are widely used to model a host of natural processes, including thermal, chemical, and ecological dynamics.\nBy \\ref{item:general}, $u\\equiv 0, 1$ are two equilibrium solutions of \\eqref{eq:main}.\nTherefore, one is usually interested in the transition from the (unstable) state $u\\equiv 0$ to the (stable) state $u\\equiv 1$.\n\\emph{Transition fronts} are a class of solutions that model this phenomenon.\nThey are global-in-time solutions $u:\\mathbb{R}^2 \\to (0,1)$ of \\eqref{eq:main} satisfying\n\\begin{equation} \\label{item:trans_lims}\n\\lim_{x\\to-\\infty} u(t,x)=1, \\quad\\lim_{x\\to+\\infty} u(t,x)=0\\quad \\mbox{for all }t\\in \\mathbb{R}\n\\end{equation}\nand the bounded front width condition, that is, for all $\\mu \\in (0,\\frac 12 )$,\n\\begin{equation} \\label{item:trans_width}\n\\sup _{t\\in \\mathbb{R}} L_\\mu(t):=\\sup_{t\\in\\mathbb{R}}\\, \\mbox{diam} \\{x\\in\\mathbb{R}|\\; \\mu\\le u(t,x)\\le 1-\\mu\\}<\\infty.\n\\end{equation}\nThis definition was introduced in \\cite{BH12, Matano, Sen}.\n\nThe study of transition fronts has seen much activity since the seminal works by Fisher \\cite{F37} and Kolmogorov, Petrovskii, and Piskunov \\cite{KPP37}, who first studied \\emph{traveling fronts} for \\eqref{eq:main} with \\emph{homogeneous Fisher-KPP reactions}.\nHere, traveling fronts are transition fronts of the form $u(t,x)=U(x-ct)$ for some speed $c\\in \\mathbb{R}$ and profile $U$ with $\\lim_{y\\to-\\infty}U(y)=1$, $\\lim_{y\\to\\infty}U(y)=0$,\nand Fisher-KPP reactions are those $f$ satisfying \\ref{item:general}, $f'(0)>0$, and $00$ on $(0,1)$), although $c_*\\ge 2\\sqrt{f'(0)}$ in general (e.g., see \\cite{AW78}).\nIn contrast, for \\emph{homogeneous ignition} (defined as in 
\\ref{item:ignition}) and \\emph{bistable reactions} (the same as ignition except $f<0$ on $(0,\\theta_0)$ and $\\int _0^1 f(u)du>0$), there is only one speed $c_*>0$ which gives rise to a unique (up to translation) traveling front. The unique speed $c_*$ will be called \\emph{the spreading speed of }$f$.\n\nOver decades, the study of transition fronts extended to spatially periodic reactions (in which case fronts have time-periodic profiles, and are known as \\emph{pulsating fronts}).\nInstead of surveying the vast literature, let us refer to the review articles by Berestycki \\cite{B03} and Xin \\cite{X00}, and the references therein.\nThe development in general inhomogeneous media is considerably more recent.\nThe first existence result was obtained by Vakulenko and Volpert \\cite{VV11} for small perturbations of homogeneous bistable reactions.\nLater, Mellet, Roquejoffre, and Sire \\cite{MARS10} proved the existence of fronts for ignition reactions of the form $f(x,u)=a(x)f_0(u)$, where $f_0$ is ignition, and $a(x)$ is bounded with $\\inf _{\\mathbb{R}}a(x)>0$, which need not be close to being constant (see also \\cite{NR09} for the case of random media, relying on the notion of generalized random traveling waves developed in \\cite{Sen}). Zlato\\v{s} then extended these results (along with uniqueness and stability) to general inhomogeneous ignition and mixed ignition-bistable media \\cite{Z13, ZPreprint}.\n\n\nTransition fronts has also been investigated in inhomogeneous Fisher-KPP media by several authors. As far as Fisher-KPP reactions are concerned, a strong inhomogeneity in the reaction may prevent existence of transition fronts, while a weak inhomogeneity gives rise to them.\nThis is translated into the following result proved by Nolen, Roquejoffre, Ryzhik and Zlato\\v{s} \\cite{Zlatos} for reactions satisfying $0< f(x,u)\\le a(x)u$ for all $(x,u)\\in \\mathbb{R}\\times (0,1)$, with $a(x):=f_u(x,0)$, $a_-:=\\inf_{x\\in \\mathbb{R}}a(x)>0$, and $a(x)-a_-\\in C_c(\\mathbb{R})$.\nThey found that when the inhomogeneity of $f$ is strong, in the sense that the principal eigenvalue $\\lambda$ of the operator $\\partial_{xx}+a(x)$ satisfies $\\lambda>2a_-$, any non-constant global-in-time solution $u$ of \\eqref{eq:main} is \\emph{bump-like} (i.e. 
$u(t,x)\\le C_t e^{-c|x|}$), preventing the existence of transition fronts.\nThis in fact is the first known example of a reaction function $f$ such that \\eqref{eq:main} does not admit any transition front.\n\nMoreover, in the same work, they also show that the existence criterion is (almost) sharp.\nIn the case of a weak localized inhomogeneity $\\lambda<2a_-$, for each $c\\in (2\\sqrt{a_-},\\lambda\/\\sqrt{\\lambda-a_-})$ the PDE \\eqref{eq:main} admits a transition front with \\emph{global mean speed} $c$, in the sense that if $X(t):=\\sup \\{x\\in\\mathbb{R}:u(t,x)=\\frac 12\\}$, then\n\\begin{equation}\\label{gms}\n \\lim_{t-s\\to\\infty}\\frac{X(t)-X(s)}{t-s}=c.\n\\end{equation}\nTo construct a front, they find an appropriate pair of ordered global-in-time super- and sub-solutions $w\\ge v$ that propagate with speed $c$, and recover a front $u$ between them as a locally uniform limit along a subsequence of solutions $(u_n)_{n\\in {\\mathbb{N}}}$ of the Cauchy problem \\eqref{eq:main} with initial data $u_n(-n,\\cdot)=w(-n,\\cdot)$.\nThe same method was deployed and extended by Zlato\\v{s} \\cite{Z12} and by Tao, Zhu, and Zlato\\v{s} \\cite{TZZ13} to prove the existence of fronts for general inhomogeneous KPP and monostable reactions when $a(x)-a_-$ is not compactly supported.\n\nIn the present paper, we modify the approach from \\cite{Zlatos} to establish a similar sharp existence criterion for reactions satisfying Hypothesis (F).\nAs mentioned, such $f$ is obtained by locally perturbing the ignition reaction $f_0$ with a monostable reaction.\nWe therefore show that a strong perturbation in the reaction prevents the existence of fronts, while a weak perturbation admits them.\nThe existence criterion in our case is determined by the spreading speed of the reaction $f_0$ and the supremum of the spectrum of the operator $\\partial_{xx}+a(x)$.\nThe spreading speed of $f_0$ is the unique number $c_0>0$ such that the following ODE admits a unique (up to translation) solution:\n\\begin{equation}\n \\label{tfeq}\n U''+c_0U' +f_0(U)=0,\\quad \\lim_{x\\to -\\infty} U(x)=1,\\quad \\lim_{x\\to\\infty} U(x)=0.\n\\end{equation}\nOn the other hand, the supremum of the spectrum of $\\partial_{xx}+a(x)$ is given by\n\\begin{equation*}\n \n \\lambda := \\sup \\sigma (\\partial_{xx}+a(x))= \\sup_{{\\psi \\in H^1(\\mathbb{R})}:\\,||\\psi||_{L^2}=1} {\\int_\\mathbb{R} (-[\\psi'(x)]^2 + a(x) [\\psi (x)]^2)dx}.\n\\end{equation*}\nSince $a(x)\\ge 0$ is compactly supported by \\ref{item:ignition}, the essential spectrum of $\\partial_{xx}+a(x)$ is $(-\\infty,0]$, which implies $\\lambda\\ge 0$.\nIf $\\lambda>0$ (i.e. 
$a\\not\\equiv 0$), it is in fact the principal eigenvalue.\nThen a corresponding $L^\\infty$-normalized principal eigenfunction $\\psi$ exists, is unique, and satisfies\n\\begin{equation}\n \\label{eq:eigen}\n \\psi '' + a(x)\\psi = \\lambda \\psi,\\quad \\psi>0,\\quad ||\\psi||_{L^\\infty}=1.\n\\end{equation}\n\nThe main results of the present work are stated as follows.\n\n\\begin{theorem}\n\t\\label{thm:nonex}\n\tLet $f$ satisfy \\ref{item:general}--\\ref{item:regularity} for some $f_0$ and $a$, $c_0$ be the spreading speed of $f_0$, $\\lambda$ be the supremum of the spectrum of $\\partial_{xx}+a(x)$,\n\tand assume $\\lambda > c_0^2$.\n\t\n\t\\begin{enumerate}\n\t\t\\item All entire solutions $u$ of \\eqref{eq:main} with $00$ such that $u(t,x)\\le C_te^{-c|x|}$ for all $(t,x)\\in \\mathbb{R}^2$.\n\t\tIn particular, \\eqref{eq:main} does not admit a transition front solution.\n\t\t\n\t\t\\item Assume \\ref{item:regularity} is replaced by the following: there exists $\\zeta \\in (0,\\theta_0)$ such that $f(x,u) = a(x)u$ for $u\\in [0,\\zeta]$.\n\t\tThen a nonzero bump-like solution of \\eqref{eq:main} exists, is unique (up to a time-shift) among all solutions with $0c_0^2$ (proof of Theorem \\ref{thm:nonex})}\n\\label{sec:non}\n\nAs mentioned above, the methods of this section are based on those found in \\cite{Zlatos}.\nIn particular, Theorem \\ref{thm:nonex}, Lemmas \\ref{lem:refined}, \\ref{lem:new}, and their proofs are similar to Theorem 1.2, Lemmas 3.1, 3.2 \\cite{Zlatos}.\nThe primary difference can be found in the proof of Lemma \\ref{lem:new}.\n\nThroughout this section, we assume $f,\\gamma,f_0, \\theta_0,\\theta_1, L, a$ are all from (F), and $\\lambda > c_0^2$.\nFor $\\varepsilon\\in (0,1)$, let $\\lambda_\\varepsilon$ be the principal eigenvalue of the differential operator $\\partial_{xx}+(1-\\varepsilon)a(x)$.\nSince $\\lim_{\\varepsilon\\to 0^+} \\lambda_\\varepsilon=\\lambda>c_0^2$, we may fix $\\varepsilon>0$ such that $\\lambda_\\varepsilon>c_0^2$, and\nlet $\\zeta = \\zeta(\\varepsilon)$ be given in \\ref{item:regularity}.\nFor $M>0$, we let $\\lambda_M = \\lambda_{\\varepsilon,M}$ be the Dirichlet principal eigenvalue of $\\partial_{xx}+(1-\\varepsilon)a(x)$ on $[-M,M]$, and $\\psi_M\\in C^2([-M,M])$ be the corresponding $L^\\infty$-normalized eigenfunction:\n\\begin{gather}\n\t\\label{TS:psi}\n\t\\psi_M'' + (1-\\varepsilon)a(x) \\psi_M = \\lambda_M \\psi_M \\mbox{ on }(-M,M),\\\\\n\t\\psi_M(\\pm M)=0,\\quad ||\\psi_M||_{L^\\infty}=1. \\nonumber\n\\end{gather}\nNote that $\\psi_M>0$ on $(-M,M)$ and $\\lim_{M\\to\\infty} \\lambda_M = \\lambda_\\varepsilon>c_0^2$.\nSo we may again fix $M\\ge L$ large so that $\\lambda_M>c_0^2$. 
Finally, all constants involved depend on $c_0, M, \\psi_M, \\lambda_M, \\zeta, \\gamma, \\theta_0$.\n\nIn the following, let $u\\in (0,1)$ be an entire solution of \\eqref{eq:main} with $\\inf _{(t,x)\\in \\mathbb{R}}u=0$.\nIn the proofs, we will frequently use the parabolic Harnack inequality for $u$.\nTherefore, for $R,\\sigma>0$, we let $k=k(R,\\sigma)>0$ denote the Harnack constant such that\n\\begin{equation}\n \\label{h-const}\n\t\\min_{|x-x_0|\\le R} u(t+\\sigma,x) \\ge k \\max _{|x-x_0|\\le R} u(t,x) \\ge k u(t,x_0)\n\\end{equation}\nholds for any $x_0\\in \\mathbb{R}$.\nWe begin with the following simple fact.\n\n\\begin{lemma}\n\t\\label{lem:limit}\n\tThe solution $u$ satisfies $\\lim_{t\\to-\\infty}u(t,x)=0$ locally uniformly.\n\\end{lemma}\n\n\\begin{proof}\n\tBy the Harnack inequality, it suffices to show the limit for $x=0$.\n\tAssume the contrary, so that there exists $\\alpha \\in (0,1)$ and a sequence of times $\\{t_n\\}$ with $t_n\\searrow -\\infty$ such that $u(t_n,0)\\geq \\alpha$.\n\tLet $k=k(M,1)$ be the Harnack constant from \\eqref{h-const},\n\t$\\theta:=\\min\\{k\\alpha, \\zeta\\}$, and extend the eigenfunction $\\psi_M$ from \\eqref{TS:psi} continuously to $\\mathbb{R}$ by setting $\\psi_M\\equiv 0 $ on $[-M,M]^c$.\n\tSince $||\\psi_M||_{L^\\infty}=1$, \\eqref{h-const} (with $(R,\\sigma,x_0,t)=(M,1,0,t_n)$) implies\n\t\\begin{equation}\n\t\t\\label{eq:Harnack_bd}\n\t\tu(t_n+1,x)\\geq \\theta\\psi_M(x)\\quad \\mbox{for all }x\\in {\\mathbb{R}}.\n\t\\end{equation}\n\t\n\tNow let $v:\\mathbb{R}^+\\times \\mathbb{R} \\to [0,1]$ be the solution to the Cauchy problem of \\eqref{eq:main} with initial data $v(0,x) = \\theta\\psi_M(x)$.\n\tWe claim that $v_t\\ge 0$.\n\tBy the comparison principle, it suffices to show $v(s,\\cdot)\\ge \\theta\\psi_M$ for all $s\\ge 0$.\n\tClearly this holds for all $x\\in [-M,M]^c$ because $\\psi_M\\equiv 0$ in this region.\n\tIf $x\\in [-M,M]$ instead, observe that $w(t,x):=\\theta\\psi_M(x)$ is a (stationary) sub-solution of \\eqref{eq:main} by (F3) and \\eqref{TS:psi}. So the comparison principle shows that $v(s,x)\\ge \\theta\\psi_M(x)$ for $(s,x)\\in \\mathbb{R}^+\\times [-M,M]$.\n\tThis implies $v_t\\ge 0$.\n\tLet $v_\\infty(x):= \\lim_{t\\to\\infty}v(t,x)$, which satisfies $v_\\infty '' +f(x,v_\\infty)=0$ on $\\mathbb{R}$ by parabolic regularity.\n\tSince $f\\ge 0$, this forces $v_\\infty \\equiv \\beta$ for some constant $\\beta \\in [\\theta ,1]$.\n\tNow fix $s\\in \\mathbb{R}$.\n\tBy the comparison principle and \\eqref{eq:Harnack_bd}, for all large $n$\n\t\\begin{equation*}\n\t\tu(s,x)\\ge v(s-t_n-1,x)\\quad \\mbox{for all }x\\in \\mathbb{R}.\n\t\\end{equation*}\n\tLetting $n\\to\\infty$, we find that $u(s,\\cdot)\\ge \\beta >0$ for all $s\\in \\mathbb{R}$, contradicting $\\inf_{(t,x)\\in \\mathbb{R}^2} u=0$. 
Therefore, we must have $\\lim_{t\\to-\\infty}u(t,0)=0$.\n\\end{proof}\n\nWith Lemma \\ref{lem:limit}, after an appropriate time translation we may now assume\n\\begin{gather}\n\t\\label{eq:zetabound}\n\tu(0,0)\\leq \\frac \\zeta 2 \\psi_M(0).\n\\end{gather}\nIn the coming two lemmas, we establish some important bounds on $u$, which play a crucial role in the proof of Theorem \\ref{thm:nonex}.\n\n\\begin{lemma}\n\t\\label{lem:refined}\n\tFor any $c\\in (c_0, \\sqrt{ \\lambda_M} )$, there exists $C_0>0$ (independent of $u$) such that\n\t\\begin{equation}\n\t\t\\label{eq:refined_bound}\n\t\tu(t, x) \\leq C_0 u(0,0) e^{c_0(x+ct)}, \\quad \\text{for }t\\le -1,\\; x\\in [M,\\sqrt{\\lambda_M}(-t-1)-M-1].\n\t\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\n\tDenote $u_0 := u(0,0)>0$, $\\psi_0 := \\psi_M(0)>0$, and $D\\subset \\mathbb{R}^2$ the region described in \\eqref{eq:refined_bound}.\n\tTo show \\eqref{eq:refined_bound}, we will prove the following estimate for some $C_0'>0$ (independent of $u)$:\n\t\\begin{equation}\n\t\t\\label{TS:bound2.2}\n\t\tu(t, x) \\leq C'_0 u_0 \\sqrt{|t|} e^{\\sqrt{\\lambda_M}(x+\\sqrt{\\lambda_M}t)},\\quad \\text{for }(t,x)\\in D.\n\t\\end{equation}\n\tOne can easily show that \\eqref{eq:refined_bound} follows from this with $C_0 := C'_0 \\sup_{t \\leq 0}\\sqrt{|t|}e^{c_0(\\sqrt{\\lambda_M} - c)t}$ (which is finite because $\\sqrt{\\lambda_M}>c$).\n\t\n\tWe prove \\eqref{TS:bound2.2} by contradiction.\n\tLet $k=k(1,1)$ be the Harnack constant from \\eqref{h-const}, and\n\tsuppose there is $(t',x_0)\\in D$ so that \\eqref{TS:bound2.2} does not hold with $C_0'$ given by\n\t\\[ C_0':=\\frac{\\sqrt {4 \\pi}}{k\\psi_0}e^{\\lambda_M+\\sqrt{\\lambda_M}(M+1)}. \\]\n\tLet $t_0:=t'+1\\le 0$ and \n\t\\begin{gather*}\n\t\t\\beta:=\\frac {x_0+M+1}{2|t_0|\\sqrt{\\lambda_M}},\\quad \n\t\t\\eta:= C'_0ku_0\\sqrt{|t'|}e^{\\sqrt{\\lambda_M}(x_0+\\sqrt \\lambda_Mt') }.\n\t\\end{gather*}\n\tObserve that $\\beta \\in (0,\\frac 12 ]$ as $(t',x_0)\\in D$.\n\tAlso, by \\eqref{h-const} (with $(R,\\sigma,t)=(1,1,t')$) and the opposite of \\eqref{TS:bound2.2} we have $u(t_0,\\cdot)\\ge \\eta \\chi_{[x_0,x_0+1]}$.\n\tApplying the comparison principle ($u$ is a super-solution to the standard heat equation as $f\\ge 0$), for all $x\\in [-M,M]$ we have\n\t\\begin{align}\n\t\tu(t_0+\\beta |t_0|,x) &\\ge \\frac {\\eta}{\\sqrt{4\\pi\\beta |t_0|}} \\int _{x_0}^{x_0+1} e^{-\\frac{(x-y)^2}{4\\beta |t_0|}} dy\\ge \\frac{\\eta}{\\sqrt{4\\pi \\beta |t_0|}} e^{-\\frac{(x_0+M+1)^2}{4\\beta|t_0|}} \\nonumber \\\\\n\t\t&\\ge {2u_0}{\\psi_0}^{-1} e^{\\sqrt{\\lambda_M}(x_0+M+1+\\sqrt{\\lambda_M}t_0)-\\frac{(x_0+M+1)^2}{4\\beta|t_0|}} \\nonumber \\\\\n\t\t& = {2u_0}{\\psi_0}^{-1} e^{\\lambda_M(t_0+\\beta|t_0|)}.\n\t\t\\label{TS:2.2.3}\n\t\\end{align}\n\tHere, the second inequality is due to $-M\\le x \\le y\\le x_0+1$, for then $0\\le y-x\\le M+x_0+1$.\n\tNow let $v(t,x):=2u_0\\psi_0^{-1}e^{\\lambda _M t} \\psi_M(x)$, which by \\eqref{TS:psi} satisfies\n\t\\begin{equation}\\label{pdineq-23}\n\t\tv_t = v_{xx} + (1-\\varepsilon )a(x) v\\quad \\mbox{for all }(t,x)\\in\\mathbb{R}\\times (-M,M),\\quad v(t,\\pm M)=0.\n\t\\end{equation}\n\tFrom \\eqref{TS:2.2.3}, $||\\psi_M||_{L^\\infty}=1$, and \\eqref{eq:zetabound}, we also have\n\t\\begin{equation}\\label{ineq-27}\n\t\t\\begin{split}\n\t\t\tu(t_0+\\beta|t_0|,x)&\\ge v(t_0+\\beta|t_0|,x), \\\\\n\t\t\tv(t,x)\\le v(0,x)&=2u_0\\psi_0^{-1}\\psi_M(x)\\le \\zeta,\n\t\t\\end{split}\n\t\\end{equation}\n\twhere the latter holds for all $t\\le 0$ and $x\\in 
\\mathbb{R}$.\n\tThe latter with \\eqref{pdineq-23} and \\ref{item:regularity} shows that $v$ is a sub-solution of \\eqref{eq:main} on $\\mathbb{R}^- \\times (-M,M)$.\n\tHence, \\eqref{ineq-27} and the comparison principle (note that $t_0+\\beta |t_0|\\le 0$ as $\\beta\\in (0,\\frac 1 2]$ and $t_0\\le 0$) yield\n\t\\begin{equation*}\n\t\tu(0,x) \\ge v(0 , x)=2u_0\\psi_0^{-1}\\psi_M(x) \\quad \\mbox{for } x\\in[-M,M].\n\t\\end{equation*}\n\tLetting $x=0$ yields the contradiction $u_0\\ge 2u_0$ (as $u_0>0$).\n\tTherefore \\eqref{TS:bound2.2} holds.\n\\end{proof}\n\nWith the estimate \\eqref{eq:refined_bound}, we now further refine the bound for $u$ to show that it is bump-like for all large negative time.\n\n\\begin{lemma}\n\t\\label{lem:new}\n\tUnder the same assumptions as Lemma \\ref{lem:refined}, there exist $C>0$ and $\\tau <0$ (both independent of $u$) such that\n\t\\begin{equation}\n\t\t\\label{TS:new}\n\t\tu(t,x)\\le C u(0,0) e^{-c_0|x|+c_0ct},\\quad \\mbox{for } t\\le \\tau,\\, |x|\\ge M.\n\t\\end{equation}\n\\end{lemma}\n\n\\begin{remark}\nWe follow the argument in the proof of Theorem 1.2 in \\cite{Zlatos}.\nThe fundamental difference lies in the definition of the super-solution $w=w_1+w_2^{s}$.\nIn \\cite{Zlatos}, $w_2^{s}$ is chosen to be an exponential function (see the definition of $v_{t_0}$ in Section 3 \\cite{Zlatos}).\nIn our case, $w_2^{s}$ is a (leftward-moving) traveling front of a perturbed reaction $f_\\delta$ (defined in the proof).\n\\end{remark}\n\n\\begin{proof}\n\tFirst of all, it suffices to prove \\eqref{TS:new} for the case $x\\ge M$.\n\tThis is because $\\tilde u(t,x):=u(t,-x)$ is still a global solution to \\eqref{eq:main} with $\\tilde f(x,u):=f(-x,u)$ in place of $f(x,u)$.\n\tClearly, $\\tilde f$ and $\\tilde u$ satisfy (F) and \\eqref{eq:zetabound} respectively.\n\tApplying \\eqref{TS:new} to $\\tilde u$ and $x\\ge M$, we find \\eqref{TS:new} for $u$ and $x\\le -M$.\n\t\n\tFix $c\\in (c_0,\\sqrt{\\lambda_M})$.\n\tFor $\\delta>0$ small, consider the perturbation of $f_0$ given by\n\t\\begin{equation*}\n\t\tf_\\delta(u) := \\max_{v\\in [u-\\delta,u+\\delta ]}f_0(v),\n\t\\end{equation*}\n\twhich is a Lipschitz ignition reaction with $\\text{\\rm{supp}}(f_\\delta)= [\\theta_0 -\\delta,1+\\delta]$.\n\tLet $(U_\\delta,c_\\delta)$ be the traveling front solution to the following problem:\n\t\\begin{gather}\n\t\t\\label{eq:tfeq2}\n\t\tU''_\\delta - c_\\delta U' + f_\\delta (U_\\delta ) = 0,\\quad\n\t\t\\lim_{y\\to -\\infty} U_{\\delta}(y) = 0,\\quad \\lim_{y\\to \\infty} U_{\\delta}(y) = 1+\\delta.\n\t\\end{gather}\n\tNote that $U_\\delta$ is leftward moving, so $U_\\delta'>0$.\n\tBy a simple argument using phase plane analysis and the uniqueness\/stability of solutions to ODEs, one can easily show that $c_\\delta>c_0$ and $\\lim_{\\delta \\searrow 0}c_{\\delta}= c_0$.\n\tThus we may fix $\\delta \\in (0,\\theta_0)$ so that $c_\\delta \\in ( c_0, \\sqrt {c_0c})$.\n\tLet $C_0$ be the constant from Lemma \\ref{lem:refined} and $u_0:=u(0,0)$.\n\tSince $f_\\delta\\equiv 0$ on $[0,\\theta_0-\\delta]$ and \\eqref{eq:tfeq2}, we can specify the translation of $U_\\delta$ so that\n\t\\begin{equation}\\label{Udel-trans}\n\t\tU_\\delta(x)=Ae^{c_\\delta x}\\mbox{ whenever }U_\\delta(x)\\le \\theta_0-\\delta,\\mbox{ where }A:=C_0u_0 e^{(c_0-c_\\delta )M}.\n\t\\end{equation}\n\tFor $s\\in \\mathbb{R}$, define\n\t\\begin{align*}\n\t\tw_1(t,x) := C_0u_0e^{-c_0(x-2M-ct)},\\quad w_2^{s}(t,x) := U_\\delta\\rb {x + c_\\delta t + \\rb{\\frac {c c_0}{c_\\delta} - 
c_\\delta}s}.\n\t\\end{align*}\n\tLet $\\tau:=\\min\\{T_0,T_1\\}$, where $T_0$, $T_1$ are given by\n\t\\begin{equation*}\n\t\tT_0:= -\\frac {2M+1}{\\sqrt {\\lambda_M}}-1 ,\\qquad C_0 e^{c_0(M+cT_1)}=\\delta.\n\t\\end{equation*}\n\tHere, $T_0$ is defined so that the interval from \\eqref{eq:refined_bound} is non-empty for all $t\\le T_0$.\n\tBy Lemma \\ref{lem:refined},\n\t\\begin{equation}\n\t\t\\label{TS:comp1}\n\t\tu(t,M)\\leq w_1(t,M)\\quad \\mbox{for }t\\le \\tau.\n\t\\end{equation}\n\tWe also claim that for all sufficiently negative $s\\le 0$,\n\t\\begin{equation}\n\t\t\\label{TS:comp2}\n\t\tu(s,x)\\le w_2^{s}(s,x) \\quad \\mbox{for }x\\in [M,\\infty).\n\t\\end{equation}\n\tWe postpone the proof of this claim to first show \\eqref{TS:new}.\n\t\n\tLet $w:=w_1+w_2^{s}$ and $D_{s} := [s,\\tau]\\times [M,\\infty)$.\n\tThen $w$ is a super-solution to \\eqref{eq:main} on $D_{s}$.\n\tAfter all, $w_1(t,x)\\leq w_1(T_1,M)= u_0\\delta\\le \\delta$ for all $(t,x) \\in D_{s}$, so\n\t\\begin{align*}\n\t\tw_t-w_{xx}&=\\partial_t w_1-\\partial_{xx}w_1+\\partial_tw_2^{s}-\\partial_{xx}w_2^{s}\\nonumber \\\\\n\t\t&=c_0(c-c_0)w_1+f_\\delta(w_2^{s})\\geq f_\\delta(w_2^{s})\\nonumber \\\\\n\t\t&\\ge f_0(w)=f(x,w).\n\t\n\t\\end{align*}\n\tThe last inequality follows from $0< w_1\\leq \\delta$ on $D_{s}$ and the definition of $f_\\delta$.\n\tBy \\eqref{TS:comp1}, \\eqref{TS:comp2}, and the comparison principle, we have $u\\leq w_1+w^{s}_2$ on $D_{s}$, which holds for all large negative $s$.\n\tObserve that the argument of $U_\\delta$ in the definition of $w^{s}_2$ tends to $-\\infty$ as $s\\searrow -\\infty$, since $c_\\delta <\\sqrt {cc_0}$.\n\tHence, $w^{s}_2\\searrow 0$, and $u(t,x)\\le w_1(t,x)$ for all $(t,x)\\in (-\\infty,\\tau]\\times [M,\\infty)$.\n\tThis is \\eqref{TS:new} for $x\\ge M$ if we set $C:=C_0e^{2c_0M}$.\n\t\n\tIt remains to prove \\eqref{TS:comp2}.\n\tLet $\\xi_0 \\in \\mathbb{R}$ satisfy $C_0u_0e^{c_0\\xi_0}=\\theta_0-\\delta$, and define\n\t\\begin{equation*}\n\t\tW(s):= w_2^{s}(s,\\xi_0-cs) = U_\\delta (\\xi_0+c(c_0c_\\delta^{-1}-1)s),\n\t\\end{equation*}\n\twhich is continuous and satisfies $W(-\\infty)=U_\\delta(\\infty)=1+\\delta$ (as $c_0 u(s,x)$.\n\tConsider $x\\in [M,\\xi_0-cs)$. By \\eqref{s0-def} and \\eqref{eq:refined_bound}, it suffices to show that $w_2^s(s,x)\\ge C_0u_0e^{c_0(x+cs)}$.\n\tAssume the contrary that it does not hold for some $x_0\\in [M,\\xi_0-cs)$.\n\tIt then follows from the definition of $\\xi_0$ that\n\t\\begin{equation}\\label{cont-05}\n\t\tw_2^s(s,x_0)< C_0u_0e^{c_0(x_0+cs)}\\le C_0u_0e^{c_0\\xi_0}=\\theta_0-\\delta.\n\t\\end{equation}\n\tOn the other hand, by our translation for $U_\\delta$ from \\eqref{Udel-trans}, $c_00$.\n\tNext we prove that this solution is unique up to time translation. 
Let $\\tilde u\\in (0,1)$ be another solution of \\eqref{eq:main} with $\\inf _{(t,x)\\in \\mathbb{R}^2}\\tilde u=0$.\n\tBy Lemmas \\ref{lem:limit} and \\ref{lem:new}, $\\tilde u(t,\\cdot)\\to 0$ uniformly as $t\\to-\\infty$.\n\tTherefore, after a time-shift we may assume\n\t\\begin{equation}\\label{sup-trans}\n\t\t\\sup _{(t,x)\\in \\mathbb{R}^-\\times \\mathbb{R}}\\tilde u(t,x)\\le \\frac \\zeta 2 \\psi_M(0).\n\t\\end{equation}\n\tLet $\\tilde \\varphi(t,x):=\\tilde u(t,x)$ for all $t\\le 0$, and propagate forward in time as the solution of \\eqref{eq:linearized} with $\\tilde \\varphi(0,x)=\\tilde u(0,x)$.\n\tSince $f(x,u)=a(x)u$ for $u\\in [0,\\zeta]$ by the assumption, $\\tilde \\varphi$ is an entire solution of \\eqref{eq:linearized}.\n\tBy \\cite[Proposition 2.5]{HP07}, we have $\\tilde \\varphi= q \\varphi$ for some $q>0$, provided that Conditions (A), (H1), and (2.12) from \\cite{HP07} are met (which will be shown shortly).\n\tTherefore, we have $\\tilde \\varphi(t,\\cdot) = \\varphi(t-T,\\cdot)$ with $T:=-\\lambda^{-1}\\log({q}{\\zeta}^{-1})$.\n\tSince $u\\equiv \\varphi$, $\\tilde u \\equiv \\tilde \\varphi$ for all $t\\le 0$, it clearly follows $\\tilde u(t,\\cdot)=u(t-T,\\cdot)$, which shows the uniqueness of solution.\n\t\n\tIt remains to check all the conditions from \\cite{HP07}.\n\tNote that (A) follows from $0\\le a\\le \\gamma\\chi_{[-L,L]}$, and (H1) holds for the PDE \\eqref{eq:linearized} because $\\lambda >0$.\n\tTo show \\cite[(2.12)]{HP07}, we will prove that\n\t\\begin{equation}\n\t\t\\label{TS:b1}\n\t\t\\sup_{x\\in\\mathbb{R}} \\tilde \\varphi(s,x)\\le K \\tilde \\varphi(s,0) \\quad \\mbox{for all }s\\in \\mathbb{R},\n\t\\end{equation}\n\tfor some $K>0$ independent in time.\n\tConsider the above for $s\\le 0$\n\tLet $\\tilde \\varphi^s(t,x):= \\tilde \\varphi(t+s,x)$, which again satisfies \\eqref{eq:zetabound} (by \\eqref{sup-trans}).\n\tFollow from Lemma \\ref{lem:new}, $\\tilde \\varphi^s$ satisfies the estimate \\eqref{TS:new}. With $t=\\tau\\le 0$, we find that\n\t\\begin{equation*}\n\t\t\\sup_{|x|\\ge M}\\tilde \\varphi^s(\\tau,x)\\le C \\tilde\\varphi^s(0,0).\n\t\\end{equation*}\n\tOn the other hand, by the Harnack inequality \\eqref{h-const}, we have $\\max_{|x|\\le M}\\tilde \\varphi^s(\\tau,x)\\le k^{-1}\\tilde \\varphi^s(0,0)$, where $k=k(M,-\\tau)$.\n\tHence, $\\sup_{x\\in \\mathbb{R}}\\tilde \\varphi^s(\\tau,x)\\le A\\varphi^s(0,0)$ with $A:=\\max\\{C,k^{-1}\\}$.\n\tApplying the comparison principle (noting that $w(t,x)=A\\varphi^s(0,0)e^{\\gamma (t-\\tau)}$ is a super-solution to \\eqref{eq:linearized}), we find $\\sup_{x\\in \\mathbb{R}}\\tilde\\varphi^s (0,x)\\le A e^{-\\gamma \\tau} \\tilde\\varphi^s(0,0)$, which is \\eqref{TS:b1} with $K:=Ae^{-\\gamma\\tau}$.\n\t\n\tNow consider \\eqref{TS:b1} with $s>0$.\n\tDecompose $\\tilde \\varphi(0,x)=\\alpha\\psi(x)+\\psi^\\perp(x)$, where $\\psi$, $\\psi^\\perp$ are orthogonal in $L^2(\\mathbb{R})$ (recalling that $\\psi$ is the eigenfunction from \\eqref{eq:eigen}).\n\tLet $\\phi(t,x) := e^{-\\lambda t}\\tilde\\varphi(t,x)$, which by \\eqref{eq:linearized} satisfies\n\t\\begin{equation*}\n\t\n\t\t\\phi_t = (\\partial_{xx}+a(x)-\\lambda)\\phi.\n\t\\end{equation*}\n\tSince the principal eigenvalue $0$ of $\\partial_{xx}+a(x)-\\lambda$ is isolated, it is well-known that $\\phi (t,\\cdot)\\to \\alpha \\psi $ uniformly as time progresses. 
This clearly implies \\eqref{TS:b1} for $s>0$, as desired.\n\\end{proof}\n\n\\section{Existence for $\\lambda 0$, let $\\lambda^\\varepsilon>0,\\psi^\\varepsilon> 0$ be the principal eigenvalue and (normalized) eigenfunction of the differential operator $\\partial_{xx} + a(x) + 2\\varepsilon\\chi_{[-L,L]}(x)$, which satisfy\n\\begin{gather*}\n\\partial_{xx}\\psi^\\varepsilon+[a(x)+2\\varepsilon\\chi_{[-L,L]}(x)] \\psi ^\\varepsilon = \\lambda^\\varepsilon \\psi^\\varepsilon \\mbox{ on }\\mathbb{R}\\\\\n\\lim_{|x|\\to\\infty}\\psi^\\eps(x)=0,\\quad ||\\psi^\\eps||_{L^\\infty}=1.\n\\end{gather*}\nSince $a(x)=0$ for $|x|\\ge L$ and $||\\psi^\\varepsilon||_{L^\\infty}=1$, $\\psi^\\varepsilon$ satisfies the exponential bound\n\\begin{equation}\n\\label{TS:psi-exp}\n\\psi^\\eps (x) \\le \\min\\{1,e^{-\\sqrt{\\lambda^\\varepsilon}(|x|-L)}\\}\\quad \\mbox{for all }x\\in \\mathbb{R}.\n\\end{equation}\nNote also that $\\varepsilon\\mapsto \\lambda^\\varepsilon$ is increasing and continuous, with $\\lambda^0 = \\lambda$ and $\\lim_{\\varepsilon \\to \\infty}\\lambda^\\varepsilon \\to \\infty$, so we may fix $\\varepsilon>0$ such that $\\lambda^{\\varepsilon}\\in (\\frac{c_0^2}{16},c_0^2)$, and let $\\zeta = \\zeta(\\varepsilon)\\in (0,\\theta_0)$ be from \\ref{item:regularity}.\nLet $U$ be the unique traveling front of $f_0$ in the sense of \\eqref{tfeq} with $U(0)=\\frac {\\theta_0} 2$.\nGiven $y\\in \\mathbb{R}$, we define\n\\begin{equation}\n\\label{TS:v1-def}\nv^{y}(t,x) := U(x-c_0t + y),\n\\end{equation}\nwhich satisfies \\eqref{eq:main} with $f(x,u)$ replaced by $f_0(u)$.\nFinally, let \\begin{equation}\n\\label{TS:om-eta-def}\n\\omega := \\inf_{x\\in[-L,L]} \\psi^\\varepsilon(x),\\quad \\eta: = c_0\\inf \\left\\{-U'(s);\\;s\\in \\mathbb{R},\\,U(s)\\in \\left[\\frac{\\theta_0}{2},1-\\theta_1\\right]\\right\\}.\n\\end{equation}\nAll the constants involved in this section will depend on $\\gamma, \\theta_0,\\theta_1, L, \\varepsilon, \\zeta, U, c_0, \\omega,$ and $\\eta$.\n\nWe begin with the construction of sub- and super-solutions for $t\\le 0$.\n\\begin{lemma}\n\t\\label{lem:supersub}\n\t\\begin{enumerate}\n\t\t\\item For all $y\\ge L$, $v^y$ given in \\eqref{TS:v1-def} is a sub-solution to \\eqref{eq:main} on $(-\\infty,0)\\times \\mathbb{R}$.\n\t\t\\item There exists $y_0\\ge L+c_0\\beta(0)$, such that $w$ given as follows is a super-solution to \\eqref{eq:main} on $(-\\infty,0)\\times \\mathbb{R}$\\emph{:}\n\t\t\\begin{gather*}\n\t\tw(t,x) := v^{y_0}(t+\\beta(t),x) + \\phi(t,x),\\\\\n\t\t\\beta(t):= \\frac{\\gamma\\zeta}{4\\sqrt{\\lambda^\\eps }c_0\\eta}e^{2\\sqrt{\\lambda^\\eps }c_0 t},\\quad\n\t\t\\phi(t,x):= \\frac \\zeta 2 e^{\\sqrt{\\lambda^\\eps }c_0 t}\\psi^\\varepsilon(x).\n\t\t\\end{gather*}\n\t\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\n\t(i) Abbreviate $v=v^y$, recalling that $v$ satisfies \\eqref{eq:main} with $f(x,u)\\equiv f_0(u)$.\n\tThen we must show that\n\t\\begin{equation}\n\t\\label{eq:sub1}\n\tv_t-v_{xx}-f(x,v)=f_0(v)-f(x,v)\\leq 0\\quad \\mbox{for }(t,x)\\in (-\\infty,0)\\times \\mathbb{R}.\n\t\\end{equation}\n\tFor $x\\in [-L,L]^c$, we have $f(x,v)\\equiv f_0(v)$ by \\ref{item:ignition}, and the above holds trivially.\n\tIf $x\\in [-L,L]$ instead, by $U'<0$ we have $v(t,x)=U(x-c_0t+y)\\le U(y-L)\\le U(0)= \\frac {\\theta_0}2$.\n\tTherefore $f_0(v)=0$, and \\eqref{eq:sub1} holds.\n\t\n\t(ii) Let $y_0\\in \\mathbb{R}$ be the unique number such that\n\t\\begin{equation}\\label{y0-def}\n\tU(y_0- c_0\\beta(0)-L) = \\min \\cb{\\frac\\zeta 2, \\frac{\\varepsilon 
\\omega \\zeta }{2(\\gamma +\\varepsilon )}}.\n\t\\end{equation}\n\tClearly, $y_0\\ge L+c_0\\beta(0)$ because $U(0)=\\frac {\\theta_0}{2} >\\frac{\\zeta}{2}$ and $U'<0$.\n\tAbbreviate $v^{y_0} = v$. Then we must show for all $(t,x)\\in (-\\infty,0)\\times \\mathbb{R}$ that\n\t\\begin{align}\n\tw_t-w_{xx}-f(x,w)\n\t=&f_0(v) + v_t\\,\\beta' + \\sqrt{\\lambda^\\eps }\\left(c_0-\\sqrt{\\lambda^\\eps }\\right)\\phi\\nonumber \\\\\n\t& +[a(x)+2\\varepsilon\\chi_{[-L,L]}(x)] \\phi - f(x,w)\\geq 0,\n\t\\label{eq:super1}\n\t\\end{align}\n\twhere $v$ and $v_t$ are evaluated at $(t+\\beta(t),x)$ in what follows.\n\tConsider first $x\\in [-L,L]$.\n\tSince the first four terms in \\eqref{eq:super1} are nonnegative, it suffices to show that $[a(x)+2\\varepsilon]\\phi\\geq f(x,w)$.\n\tNote that $\\phi(t,x)\\le \\frac{\\zeta}{2}$ as the eigenfunction $\\psi^\\varepsilon$ satisfies $||\\psi^\\varepsilon||_{L^\\infty}=1$.\n\tMoreover, by $U'<0$, $\\beta '>0$ and \\eqref{y0-def}, we have\n\t\\begin{equation*}\n\tv(t+\\beta(t),x)\\le U(y_0-c_0\\beta(0)-L)\\le \\frac{\\zeta}{2}.\n\t\\end{equation*}\n\tTherefore $w=v + \\phi \\le \\zeta$, and $f(x,w)\\leq\\left[a(x)+\\varepsilon\\right]w$ by \\ref{item:regularity}.\n\tOn the other hand, by \\eqref{tfeq} and $f_0\\equiv 0$ on $[0,\\theta_0]$, we have $U(y)=\\frac{\\theta_0}{2}e^{-c_0y}$ for $y\\ge 0$.\n\t\\eqref{y0-def} then implies\n\t\\begin{align*}\n\tv(t+\\beta(t),x)&\\le U(y_0-c_0\\beta(0)-L-c_0t)\\\\\n\t&=U(y_0-c_0\\beta(0)-L)e^{c_0^2 t} \\le \\frac {\\varepsilon \\omega \\zeta}{2(\\gamma+\\varepsilon )} e^{c_0^2t}.\n\t\\end{align*}\n\tCombining this, $\\sqrt{\\lambda^\\varepsilon}0$ such that\n\t\t$\\tilde w$ given as follows is a super-solution to \\eqref{eq:main}\n\t\ton $(0,\\infty)\\times \\mathbb{R}$\\emph{:}\n\t\t\\begin{gather*}\n\t\t\\tilde w(t,x):= v^{y_1} (t+\\beta_1(t),x)+\\phi_1 (t,x),\\\\\n\t\t\\beta_1 (t):=B_1(1-e^{-c_0^2t\/8}),\\quad \\phi_1(t,x) := e^{-\\frac{c_0}{4} (x-L -\\frac{c_0}{2} t)}.\n\t\t\\end{gather*}\n\t\t\\item For all $y\\in \\mathbb{R}$, there exists $B_2=B_2(y)>0$ such that\n\t\t$\\tilde v^y$ given as follows is a sub-solution to \\eqref{eq:main}\n\t\ton $(0,\\infty)\\times \\mathbb{R}$\\emph{:}\n\t\t\\begin{gather*}\n\t\t\\tilde v^y(t,x):= v^{y} (t+\\beta_2(t),x) -\\phi_2 (t,x),\\\\\n\t\t\\beta_2 (t):=B_2 e^{-c_0^2t\/8},\\quad \\phi_2(t,x):= \\frac{16\\gamma }{c_0^2} e^{-\\frac{c_0}{4} (x-L-\\frac{c_0}{2}t)}.\n\t\t\\end{gather*}\n\t\\end{enumerate}\n\n\\end{lemma}\n\n\\begin{proof}\n\t(i) Let $y_1,\\ell,B_1\\in \\mathbb{R}$ be defined as follows:\n\t\\begin{equation}\n\t\\label{TS:3.2-def}\n\t\\phi_1(0,-y_1)=\\frac{\\theta_0}{2},\\quad U(\\ell)=1-\\theta_1,\\quad B_1 := \\frac{8\\gamma }{c_0^2 \\eta }e^{-\\frac{c_0}{4}(\\ell-y_1 -L)}.\n\t\\end{equation}\n\tNote that $y_1\\le -L$ because $\\phi_1(0,L)=1$. 
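Indeed, $\\phi_1(0,\\cdot)$ is decreasing and, by \\eqref{TS:3.2-def}, $\\phi_1(0,-y_1)=\\frac{\\theta_0}{2}<1=\\phi_1(0,L)$, so necessarily $-y_1\\ge L$. 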
As before, we abbreviate $v=v^{y_1}$, and we need to show for all $(t,x)\\in(-\\infty,0)\\times \\mathbb{R}$ that\n\t\\begin{equation}\n\t\\label{eq:tgreater}\n\t\\tilde w_t - \\tilde w_{xx} - f(x,\\tilde w) = v_t\\,\\beta_1' + \\frac{c_0^2}{16}\\phi_1+f_0(v)- f(x,\\tilde w)\\geq 0,\n\t\\end{equation}\n\twhere $v$ and $v_t$ are evaluated at $(t+\\beta_1(t),x)$ for the remainder of this part.\n\tNote that the first three terms are nonnegative.\n\tIf $x\\le L$, then \\eqref{eq:tgreater} holds by $f(x,\\tilde w) =0$ as $\\tilde w(t,x)\\geq \\phi_1(t,x)\\ge 1$.\n\tNow consider $x>L$, noting that $f(x,\\tilde w) = f_0(\\tilde w)$.\n\tWe again consider three cases for the value of $v$.\n\tWhen $v\\geq 1-\\theta_1$, we have $f_0(v)-f_0(\\tilde w)\\geq 0$ because $f_0$ is non-increasing on $\\left[1-\\theta_1,\\infty\\right)$.\n\t\\eqref{eq:tgreater} follows.\n\tIf $v\\leq \\frac{\\theta_0}{2}$, then \\eqref{TS:v1-def} and $U(0)=\\frac {\\theta_0}{2}$ imply $x-c_0t \\ge -y_1$.\n\tTherefore,\n\t\\begin{equation*}\n\t\\phi_1(t,x)\\le \\phi_1(2t,x)=\\phi_1(0,x-c_0t)\\le \\phi_1(0,-y_1)=\\frac{\\theta_0}{2}.\n\t\\end{equation*}\n\tIt follows that $\\tilde w= v+\\phi_1 \\le \\theta_0$, so $f_0(\\tilde w)=0$, and \\eqref{eq:tgreater} holds again.\n\tFinally, suppose $v\\in [\\frac{\\theta_0}{2}, 1-\\theta_1] $.\n\tFrom \\eqref{TS:3.2-def}, we again have $x-c_0t \\geq \\ell-y_1 $.\n\tUsing the definition of $\\eta$ from \\eqref{TS:om-eta-def},\n\t\\begin{align*}\n\t|f_0(v)-f_0(\\tilde w)|&\\leq \\gamma \\phi_1(t,x) = \\gamma e^{-\\frac{c_0}{4} (x-L -\\frac{c_0}{2} t)}\\\\\n\t&\\leq \\gamma e^{-\\frac{c_0 ^2}{8}t-\\frac{c_0}{4}(\\ell-y_1 -L)} \\le v_t \\frac{\\gamma}{\\eta} e^{-\\frac{c^2_0}{8}t-\\frac{c_0}{4}(\\ell-y_1 -L)} = v_t\\,\\beta_1'.\n\t\\end{align*}\n\t\\eqref{eq:tgreater} again follows, so $\\tilde w$ is a super-solution to \\eqref{eq:main} on $(0,\\infty)\\times \\mathbb{R}$.\n\t\n\t(ii) Define $\\tilde \\theta,\\tilde \\eta,\\tilde \\ell, B_2=B_2(y)$ as follows:\n\t\\begin{align*}\n\t\\tilde \\theta := \\min\\cb{\\frac{\\theta_0}{2},\\frac{\\theta_1}{2},\\frac{c_0^2 \\theta_1}{32\\gamma}},&\\quad \\tilde \\eta:= c_0\\inf\\left\\{-U'(s):s\\in \\mathbb{R},\\,U(s)\\in \\left[\\til \\theta,1-\\til \\theta\\right]\\right\\},\\\\\n\tU({\\til \\ell})= 1-\\til \\theta,&\\quad B_2:= \\frac{2^7 \\gamma^2}{c_0^4\\tilde \\eta }e^{-\\frac{c_0}{4}\\left({\\til \\ell}-y -L\\right)}.\n\t\\end{align*}\n\tAgain abbreviate $v=v^{y}$, $\\tilde v=\\tilde v^y$.\n\tWe will show for all $(t,x)\\in (0,\\infty)\\times \\mathbb{R}$ that\n\t\\begin{align}\n\t\\tilde v_t- \\tilde v_{xx} - f(x, \\tilde v ) = v_t\\,\\be_2' - \\frac{c_0^2}{16}\\phi_2 + f_0(v) - f(x, \\tilde v)\\leq 0,\n\t\\label{eq:subsol}\n\t\\end{align}\n\twhere $v$ and $v_t$ are evaluated at $(t+\\be_2(t),x)$ for the rest of the proof.\n\tNote that $f_0(v)$ is the only term in \\eqref{eq:subsol} which is potentially positive.\n\tConsider $x\\in [-L,L]$, where we have $\\phi_2(t,x)\\geq \\phi_2(0,L) = {16\\gamma}{c_0}^{-2}$.\n\tThen \\eqref{eq:subsol} follows from\n\t\\begin{equation*}\n\t- \\frac{c_0^2}{16}\\phi_2 + f_0(v) \\le -\\gamma+f_0(v) \\leq 0.\n\t\\end{equation*}\n\tNow consider $x\\in[-L,L]^c$, where $f(x,\\tilde v) = f_0(\\tilde v)$.\n\t\\eqref{eq:subsol} holds whenever $v \\leq \\til \\theta(\\le \\theta_0)$, as $f_0(v)=0$.\n\tIf $v\\in[\\til \\theta, 1 - \\til \\theta]$, then $x\\ge c_0 t+{\\til \\ell}-y$ because $U(\\tilde \\ell)=1-\\tilde \\theta$. 
It then follows from $v_t \\ge \\tilde \\eta>0 $ that\n\t\\begin{align*}\n\t\\abs{f_0(v) - f_0(\\tilde v)} &\\leq \\gamma \\phi_2(t,x) = \\frac{16\\gamma^2}{c_0^2}e^{-\\frac{c_0}{4}(x-L-\\frac{c_0}{2}t)}\\le \\frac{16\\gamma^2}{c_0^2} e^{-\\frac{c_0^2}{8}t-\\frac{c_0}{4}({\\til \\ell}-y-L)} \\\\\n\t&\\le v_t\\frac{16\\gamma^2}{c_0^2\\tilde \\eta}e^{-\\frac{c_0^2}{8}t-\\frac{c_0}{4}({\\til \\ell}-y-L)} = -v_t\\,\\be_2',\n\t\\end{align*}\n\twhich implies \\eqref{eq:subsol}.\n\tFinally, consider $v \\geq 1 - \\til \\theta(\\ge 1-\\frac{\\theta_1}{2})$.\n\tIf $\\phi_2 \\leq \\frac{\\theta_1}{2}$ then $\\tilde v \\geq 1 - \\theta_1$.\n\tSince $f_0$ is non-increasing on $[1 - \\theta_1,\\infty)$, $f_0( v) \\leq f_0( \\tilde v)$, and \\eqref{eq:subsol} holds again.\n\tIf $\\phi_2\\geq \\frac{\\theta_1}{2}$, then\n\t\\begin{equation*}\n\t\\frac{c_0^2}{16}\\phi_2\\geq \\frac{c_0^2\\theta_1}{32}\\geq \\gamma \\til \\theta \\geq \\gamma \\abs{v - 1} \\geq f_0(v).\n\t\\end{equation*}\n\tHence \\eqref{eq:subsol} holds in all cases, as claimed.\n\\end{proof}\n\nWith the requisite super- and sub-solutions in place, we may now establish Theorem \\ref{thm:ex}.\n\n\\begin{proof}[Proof of Theorem \\ref{thm:ex}]\n\tIn what follows, $y_0,y_1, B_1, \\beta, \\phi, w,\\beta_1, \\phi_1,\\tilde w,$ and $\\phi_2$ are constants and functions defined in Lemmas \\ref{lem:supersub}(ii) and \\ref{lem:sup-t+}, and let $v=v^{y_0}$ be given as in \\eqref{TS:v1-def}.\n\t\n\tThe construction of an entire solution $u$ follows the standard procedure.\n\tFor $n\\in \\mathbb N$, let $u_n$ be the solution to \\eqref{eq:main} on $(-n,\\infty)\\times \\mathbb{R}$ with initial data $u_n(-n,x) = v(-n,x)$.\n\tFirst, observe that $u_n$ is increasing in time.\n\tIndeed, since $v$ is a sub-solution of \\eqref{eq:main} on $(-\\infty,0)\\times \\mathbb{R}$ with $v_t>0$ by Lemma \\ref{lem:supersub}, the comparison principle implies $u_n(t,\\cdot)> u_n(0,\\cdot)$ for all $t> 0$.\n\tIt follows thast $\\partial_t u_n> 0$ by the maximum principle.\n\tMoreover, since $v(-n,\\cdot)\\le w(-n,\\cdot)$ by their definition, Lemma \\ref{lem:supersub} and the comparison principle ensure that\n\t\\begin{equation*}\n\tv(t,\\cdot)\\le u_n(t,\\cdot)\\le w(t,\\cdot)\\quad\\mbox{for all }t\\in [-n,0].\n\t\\end{equation*}\n\tBy parabolic regularity, we obtain an increasing in time entire solution $u$ to \\eqref{eq:main} as a locally uniform limit along a subsequence of $(u_n)$ satisfying\n\t\\begin{equation}\n\t\\label{TS:ex-comp}\n\tv(t,\\cdot)\\le u(t,\\cdot)\\le w(t,\\cdot)\\quad \\mbox{for all }t\\le 0.\n\t\\end{equation}\n\t\n\tNext, we check that $u$ fulfills \\eqref{item:trans_lims}.\n\tIt obviously holds for $t\\le 0$ by \\eqref{TS:ex-comp} and the limit behavior of $v,w$ at $\\pm \\infty$.\n\tFor $t>0$, the first limit of \\eqref{item:trans_lims} still holds because $u_t>0$ and $u<1$.\n\tTo prove the second limit condition, we first claim that\n\t\\begin{equation}\n\t\\label{TS:ex-comp1}\n\tu(0,x)\\le \\min\\{ 1, w(0,x) \\} \\le \\tilde w(0,x).\n\t\\end{equation}\n\tFrom this, Lemma \\ref{lem:sup-t+}(i), and the comparison principle,\n\t\\begin{equation}\n\t\\label{TS:ex-comp3}\n\tu(t,\\cdot)\\le \\tilde w(t,\\cdot)\\quad \\mbox{for all }t\\ge 0.\n\t\\end{equation}\n\tThe second limit of \\eqref{item:trans_lims} then follows from $\\lim_{x\\to\\infty}\\tilde w(t,x) =0$ and $u>0$.\n\tNow consider \\eqref{TS:ex-comp1}.\n\tThe first inequality is simply \\eqref{TS:ex-comp} with $t=0$.\n\tFor the second, when $x\\le L$, $\\tilde w(0,x)\\ge \\phi_1(0,x) 
\\ge 1$.\n\tFor $x>L$, \\eqref{TS:psi-exp}, $\\lambda^\\varepsilon >\\frac{c_0^2}{16}$, $U'<0$, and $y_1\\le y_0-c_0\\beta(0)$ (by Lemmas \\ref{lem:supersub}(ii) and \\ref{lem:sup-t+}(i)) imply\n\t\\begin{align*}\n\tw(0,x) &= U(x+y_0-c_0\\beta(0)) + \\frac{\\zeta}{2}\\psi^\\varepsilon(x)\\le U(x+y_1) + e^{-\\frac{c_0}{4}(x-L)} = \\tilde w(0,x).\n\t\\end{align*}\n\tTherefore \\eqref{TS:ex-comp1} holds.\n\tThis completes the proof of \\eqref{item:trans_lims}.\n\t\n\tIt remains to show the bounded front width condition \\eqref{item:trans_width}.\n\tFix $\\mu \\in (0,\\frac 12 )$ and let\n\t\\begin{equation*}\n\tX^-(t):=\\inf \\{x\\in \\mathbb{R}:u(t,x)\\le 1-\\mu\\},\\quad X^+(t):=\\sup \\{x\\in \\mathbb{R}:u(t,x)\\ge \\mu\\},\n\t\\end{equation*}\n\tfor then we have $L_\\mu(t) = X^+(t)-X^-(t)$.\n\tWe will show that $L_\\mu(t)$ is uniformly bounded in $t\\in \\mathbb{R}$ by considering these three cases: $t\\le 0$, $t> t_\\mu,$ and $t\\in (0,t_\\mu]$, where $t_\\mu$ will be defined shortly.\n\tFor the first case $t<0$, let\n\t\\begin{gather*}\n\t\\rho_-:= U^{-1}(1-\\mu),\\quad \n\t \\rho_+:= \\max\\left\\{ U^{-1}\\left (\\frac \\mu 2\\right), \\frac 1{\\sqrt{\\lambda^\\eps }}\\left |\\log \\frac \\mu 2\\right|+L+y_0-c_0\\beta(0) \\right\\}.\n\t\\end{gather*}\n\tFor all $x< c_0 t + \\rho_--y_0$, by \\eqref{TS:ex-comp} we have\n\t\\begin{equation*}\n\tu(t,x)\\ge v(t,x) = U(x-c_0t +y_0)> U(\\rho_-)= 1-\\mu.\n\t\\end{equation*}\n\tTherefore $X^-(t) \\ge c_0t +\\rho_--y_0$.\n\tNow, suppose $x> c_0t+\\rho_+ -y_0 +c_0\\beta(0)$.\n\tCombining \\eqref{TS:psi-exp}, $||\\psi^\\varepsilon||_{L^\\infty}=1$, \\eqref{TS:ex-comp} and the definition of $\\rho_+$, we compute\n\t\\begin{align*}\n\tu(t,x) &\\le w(t,x) = U(x-c_0(t+\\beta(t))+y_0)+\\frac \\zeta 2 e^{c_0\\sqrt{\\lambda^\\eps }t}\\psi^\\eps(x) \\\\\n\t&\\le U(x-c_0(t+\\beta(0))+y_0)+ \\frac \\zeta 2e^{-\\sqrt{\\lambda^\\eps }(x-L-c_0t)} \\\\\n\t&< U(\\rho_+)+ e^{-\\sqrt{\\lambda^\\eps }(\\rho_+-L-y_0+c_0\\beta(0))}\\le \\mu .\n\t\\end{align*}\n\tThis implies $X^+(t) \\le c_0t +\\rho_+ - y_0+c_0\\beta(0)$, so\n\t\\begin{equation}\n\t\\label{TS:L-1}\n\tL_\\mu(t) \\le c_0\\beta(0)+\\rho_+ - \\rho_-\\quad \\mbox{for }t\\le 0.\n\t\\end{equation}\n\t\n\tWe now define $t_\\mu$.\n\tGiven $y\\in \\mathbb{R}$, let $\\bar v^y(t,x):= v^{y}(t,x)- \\phi_2(t,x)$, where $v^y,\\phi_2$ are from \\eqref{TS:v1-def} and Lemma \\ref{lem:sup-t+}(ii).\n\tOne can easily verify that $M(y):=\\sup_{x\\in \\mathbb{R}} \\bar v^{y}(0,x)$ is continuous, non-increasing in $y\\in \\mathbb{R}$, $\\lim_{y\\to-\\infty} M(y)=1$, $\\lim_{y\\to\\infty}M(y)=0$, and the supremum is achieved somewhere if $M(y)>0$.\n\tSo we may fix $y_2=y_2(\\mu)$ such that $M(y_2)=1-\\mu$.\n\tFor the remainder of the proof, we abbreviate $\\bar v = \\bar v ^{y_2}$ and let $B_2=B_2(y_2)$, $\\be_2$, and $\\tilde v=\\tilde v^{y_2}$ be from Lemma \\ref{lem:sup-t+}(ii).\n\tLet $x_\\mu\\in \\mathbb{R}$ be a maximizer so that $\\bar v (0,x_\\mu)=1-\\mu$, and define\n\t\\begin{equation}\\label{t-mu-def}\n\tt_\\mu := \\inf \\cb{t>0: u(t,\\cdot)\\ge \\max \\{\\tilde v(0,\\cdot), (1-\\mu)\\chi_{(-\\infty,x_\\mu]}\\} }\\ge 0.\n\t\\end{equation}\n\tWe claim that $t_\\mu$ is finite.\n\tAfter all, we have $\\tilde v(0,\\cdot)\\le (1-\\mu')\\chi_{I}$ for some bounded interval $I\\subset \\mathbb{R}$ and $\\mu'\\in (0,1)$.\n\tRecall that $u_t>0$ and the limit condition \\eqref{item:trans_lims} holds.\n\tTherefore, we have $u(t,\\cdot)\\nearrow 1$ uniformly on each $(-\\infty,R)$, $R\\in \\mathbb{R}$, which implies 
$t_\\mu<\\infty$.\n\t\n\tNow consider the front width for $t>t_\\mu$.\n\tCombining \\eqref{t-mu-def}, Lemma \\ref{lem:sup-t+}(ii), the comparison principle, and $\\be_2>0$, we have\n\t\\begin{equation}\n\t\\label{TS:ex-comp2}\n\tu(t_\\mu+t,\\cdot)\\ge \\tilde v(t,\\cdot)\\ge \\bar v(t,\\cdot)\\quad \\mbox{for all }t\\ge 0.\n\t\\end{equation}\n\tRecall $B_1$ from Lemma \\ref{lem:sup-t+}(i) and set\n\t\\begin{equation*}\n\t\\tilde\\rho_+ := \\max \\cb{U^{-1}\\rb{\\frac \\mu 2}, \\frac{4}{c_0} \\abs{\\log\\frac{\\mu}{2}} +y_1+L-c_0B_1}.\n\t\\end{equation*}\n\tThen $X^+(t) \\le c_0(t+B_1) +\\tilde \\rho_+-y_1$.\n\tIndeed, for $x> c_0(t+B_1)+\\tilde\\rho_+-y_1$, \\eqref{TS:ex-comp3} and $\\beta_1< B_1$ imply\n\t\\begin{align*}\n\tu(t,x)&\\le \\tilde w(t,x) = U(x-c_0(t+\\beta_1(t))+y_1) + e^{-\\frac{c_0}{4} (x-L-\\frac{c_0}{2}t)}\\\\\n\t&< U(\\tilde\\rho_+)+ e^{-\\frac{c_0}{4}(c_0B_1+\\tilde \\rho_+-y_1-L)} \\le \\mu.\n\t\\end{align*}\n\tOn the other hand, we claim that $X^-(t)\\ge c_0(t-t_\\mu)+x_\\mu$, implying:\n\t\\begin{equation}\n\t\\label{TS:L-2}\n\tL_\\mu(t) \\le c_0(t_\\mu+ B_1) +\\tilde \\rho_+ -y_1 - x_\\mu,\\quad \\text{for }t\\in (t_\\mu,\\infty).\n\t\\end{equation}\n\tTo prove the claimed bound, it suffices to check that $u(t,x)\\ge 1-\\mu$ for all $x< c_0(t-t_\\mu)+x_\\mu$.\n\tIf $x\\le x_\\mu$, then $u_t>0$ and \\eqref{t-mu-def} show that $u(t,x)> u(t_\\mu ,x)\\ge 1-\\mu$.\n\tNow consider $x\\in (x_\\mu, c_0(t-t_\\mu)+x_\\mu)$.\n\tNote that $v^{y_2}(t,x_\\mu+c_0t)=U(x_\\mu+y_2)$, while $t\\mapsto \\phi_2(t,x_\\mu+c_0t)$ is decreasing.\n\tHence, their difference $\\bar v (t,x_\\mu+c_0t) $ is increasing in $t$, and\n\t\\begin{equation*}\n\t\\bar v(t,x_\\mu+c_0t)> \\bar v(0,x_\\mu)=1-\\mu\\quad \\mbox{ for all }t>0.\n\t\\end{equation*}\n\tLet $t_*:=c_0^{-1}(x-x_\\mu)\\in (0,t-t_\\mu)$.\n\tThen by $u_t> 0$ and \\eqref{TS:ex-comp2}, it follows that\n\t\\begin{equation*}\n\tu(t,x) > u(t_\\mu+t_*,x)=u(t_\\mu+t_*,x_\\mu+c_0t_*)\\ge \\bar v(t_*,x_\\mu+c_0t_*) > 1-\\mu.\n\t\\end{equation*}\n\tThis proves the claim.\n\t\n\tFinally, consider $t\\in (0,t_\\mu]$.\n\tSince $u_t>0$, the width is bounded by\n\t\\begin{equation*}\n\tL_\\mu(t)\\le X^+(t_\\mu)-X^-(0) \\le c_0 (t_\\mu+B_1) +\\tilde\\rho_+-\\rho_- +y_0-y_1.\n\t\\end{equation*}\n\tWith this, \\eqref{TS:L-1}, and \\eqref{TS:L-2}, $L_\\mu(t)$ is uniformly bounded for all $t\\in \\mathbb{R}$.\n\tThis concludes the proof of \\eqref{item:trans_width}.\n\tTherefore $u$ is an increasing-in-time transition front solution of \\eqref{eq:main}. \n\tIt also obviously holds from the comparisons \\eqref{TS:ex-comp}, \\eqref{TS:ex-comp3} and \\eqref{TS:ex-comp2} that $u$ has a global mean speed $c_0$. \n\tThis completes the proof of Theorem \\ref{thm:ex}. \n\\end{proof}\n\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nGeneralized Nash equilibrium problems (GNEPs) have been widely studied in the literature \\cite{Ros65, facchinei2007, facchinei2010} and such a strong interest is motivated by numerous applications ranging from economics to engineering \\cite{pavel2007,kulkarni2012}. In a GNEP, each agent seeks to minimize his own cost function, under local and coupled feasibility constraints. In fact, both the cost function and the constraints depend on the strategies chosen by the other agents. Due to the presence of these shared constraints, the search for generalized Nash equilibria is usually a quite challenging task. 
\n\nFor the computation of a GNE, various algorithms have been proposed, both distributed \\cite{belgioioso2018,yi2019}, and semi-decentralized \\cite{facchinei2010,belgioioso2017}. When dealing with coupling constraints, a common principle is the focus on a special class of equilibria, which reflect some notion of fairness among the agents. This class is known as \\emph{variational equilibria} (v-GNE) \\cite{Deb52,facchinei2010}. Besides fairness considerations, v-GNE is computationally attractive since it can be formulated in terms of variational inequality, which makes it possible to solve them via operator splitting techniques \\cite{BauCom16,facchinei2007}. A recent breakthrough along these lines is the distributed, preconditioned, forward-backward (FB) algorithm conceived in \\cite{yi2019} for strongly-monotone games. The key lesson from \\cite{yi2019} is that the FB method cannot be directly applied to GNEPs, thus a suitable preconditioning is necessary. From a technical perspective, the FB operator splitting requires that the pseudo-gradient mapping of the game is strongly monotone, an assumption which is not always satisfied\n\nIn this paper we investigate two distributed algorithmic schemes for computing a v-GNE. Motivated by the need to relax the strong monotonicity assumption on the pseudo-gradient of the game, we first investigate a distributed \\textit{forward-backward-forward} (FBF) algorithm \\cite{tseng2000}. We show that a suitably constructed FBF operator splitting guarantees not only fully distributed computation, but also convergence to a v-GNE under the mere assumption of monotonicity of the involved operators. This enables us to drop the strong monotonicity assumption, which is the main advantage with respect to the FB-based splitting methods \\cite{yi2019}.\nAs a second condition, in order to exploit the structure of the monotone inclusion defining the v-GNE problem, we also investigate the \\textit{forward-backward-half-forward} (FBHF) algorithm \\cite{briceno2018}. We would like to point out that both our algorithms are distributed in the sense that each agent needs to know only his local cost function and its local feasible set, and there is no central coordinator that updates and broadcasts the dual variables. The latter is the main difference with semi-decentralized schemes for aggregative games \\cite{Gram17,belgioioso2017}.\n\n\nCompared with the FB and the FBHF algorithms, the FBF requires less restrictive assumptions to guarantee convergence, i.e., plain monotonicity of the pseudo-gradient mapping. On the other hand, the FBF algorithm requires two evaluations of the pseudo-gradient mapping, which means that the agents must communicate at least twice at each iterative step. Confronted with the FBF algorithm, our second proposal, the FBHF algorithm requires only one evaluation of the pseudo-gradient mapping, but needs strong monotonicity to provide theoretical convergence guarantees. Effectively, the FBHF algorithm is guaranteed to converge under the same assumptions as the preconditioned FB \\cite{yi2019}.\n\n\n\n\n\n\n\\paragraph*{Notation}\n${\\mathbb{R}}$ indicates the set of real numbers and $\\bar{\\mathbb{R}}={\\mathbb{R}}\\cup\\{+\\infty\\}$. $\\mathbf{0}_N$ ($\\mathbf{1}_N$) is the vector of N zeros (ones). The Euclidean inner product and norm are indicated with $\\inner{\\cdot,\\cdot}$ and $\\norm{\\cdot}$, respectively.\nLet $\\Phi$ be a symmetric, positive definite matrix, $\\Phi \\succ 0$. 
The induced inner product is $\\inner{\\cdot,\\cdot}_{\\Phi}:=\\inner{\\Phi\\cdot,\\cdot}$, and the associated norm is $\\norm{\\cdot}_{\\Phi}:=\\inner{\\cdot,\\cdot}_{\\Phi}^{1\/2}$. We call $\\mathcal H_\\Phi$ the Hilbert space with norm $\\norm{\\cdot}_\\Phi$. \nGiven a set $\\mathcal X\\subseteq{\\mathbb{R}}^n$, the normal cone mapping is denoted with $\\NC_{\\mathcal{X}}(x)$. $\\Id$ is the identity mapping. Given a set-valued operator $A$, the graph of $A$ is the set $\\gr(A)=\\{(x,y)\\vert y\\in Ax\\}$ The set of zeros is $\\Zer A=\\{x\\in{\\mathbb{R}}^n \\mid 0\\in Ax\\}$. \nThe resolvent of a maximally monotone operator $A$ is the map $\\mathrm{J}_{A}=(\\Id+A)^{-1}$. If $g:{\\mathbb{R}}^{n}\\to(-\\infty,\\infty]$ is a proper, lower semi-continuous, convex function, its subdifferential is the maximal monotone operator $\\partial g(x)$. The proximal operator is defined as $\\operatorname{prox}^{\\Phi}_{g}(v)=\\operatorname{J}_{\\Phi\\partial g}(v)$ \\cite{BauCom16}.\nGiven $x_{1}, \\ldots, x_{N} \\in {\\mathbb{R}}^{n}, \\boldsymbol{x} :=\\operatorname{col}\\left(x_{1}, \\dots, x_{N}\\right)=\\left[x_{1}^{\\top}, \\dots, x_{N}^{\\top}\\right]^{\\top}$.\n\n\\section{Mathematical Setup: The Monotone Game and Variational Generalized Nash Equilibria}\n\\label{sec:problem}\nWe consider a game with $N$ agents where each agent chooses an action $x_{i}\\in{\\mathbb{R}}^{n_i}$, $i\\in\\mathcal I=\\{1,\\dots,N\\}$. \n\nEach agent $i$ has an extended-valued local cost function $J_{i}: {\\mathbb{R}}^n \\to (-\\infty,\\infty]$ of the form \n\\vspace{-.15cm}\\begin{equation}\\label{eq:f}\nJ_{i}(x_{i}, \\boldsymbol x_{-i}):=f_{i}(x_{i},\\boldsymbol x_{-i}) + g_{i}(x_{i}).\n\\vspace{-.15cm}\\end{equation}\nwhere $\\boldsymbol x_{-i}=\\operatorname{col}(\\{x_j\\}_{j\\neq i})$ is the vector of all decision variables except for $x_i$, and $g_{i}:{\\mathbb{R}}^{n_{i}}\\to(-\\infty,\\infty]$ is a local idiosyncratic costs function which is possibly non-smooth. Thus, the function $J_{i}$ in \\eqref{eq:f} has the typical splitting into smooth and non-smooth parts.\n\\begin{standassumption}[Local cost]\nFor each $i\\in\\mathcal I$, the function $g_i$ in \\eqref{eq:f} is lower semicontinuous and convex.\nFor each $i\\in\\mathcal I$, $\\dom(g_{i})=\\Omega_i$ is a closed convex set.\n\\hfill\\small$\\blacksquare$\n\\end{standassumption}\n\n\nExamples for the local cost function are indicator functions to enforce set constraints, or penalty functions that promote sparsity, or other desirable structure. \n\nFor the function $f_i$ in \\eqref{eq:f}, we assume convexity and differentiability, as usual in the GNEP literature \\cite{facchinei2010}.\n\\begin{standassumption}[Local convexity]\n\\label{ass:IC}\nFor each $i \\in \\mathcal{I}$ and for all $\\boldsymbol{x}_{-i} \\in {\\mathbb{R}}^{n-n_i}$, the function $f_{i}(\\cdot, \\boldsymbol{x}_{-i})$ in \\eqref{eq:f} is convex and continuously differentiable. \n\\hfill$\\blacksquare$\n\\end{standassumption}\n\n\n\nWe assume that the game displays joint convexity with affine coupling constraints defining the collective feasible set \n\\vspace{-.15cm}\\begin{equation}\\label{eq:coupling}\n\\boldsymbol{\\mathcal{X}}:=\\left\\{\\boldsymbol x \\in\\boldsymbol\\Omega \\mid A \\boldsymbol{x}-b \\leq {\\boldsymbol{0}}_{m}\\right\\}\n\\vspace{-.15cm}\\end{equation}\nwhere $A:=[A_1,\\dots, A_N]\\in{\\mathbb{R}}^{m\\times n}$ and $b:=\\sum_{i=1}^{N}b_{i}\\in{\\mathbb{R}}^m$. 
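For instance, as in the networked Cournot game considered later in our case study, the coupling constraint can encode shared market capacities: if the rows of $A_i$ select the quantities that agent $i$ offers in each of the $m$ markets, then \\eqref{eq:coupling} reads\n\\vspace{-.15cm}\\begin{equation*}\n\\textstyle\\sum_{i=1}^{N} A_i x_i \\leq b,\n\\vspace{-.15cm}\\end{equation*}\nthat is, the aggregate quantity offered in each market cannot exceed its capacity. 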
\nEffectively, each matrix $A_i\\in{\\mathbb{R}}^{m\\times n_i}$ defines how agent $i$ is involved in the coupling constraints, thus we consider it to be private information of agent $i$. \nThen, for each $i$, given the strategies of all other agents $\\boldsymbol x_{-i}$, the feasible decision set is\n\\vspace{-.15cm}\\begin{equation}\n\\mathcal{X}_{i}(\\boldsymbol{x}_{-i}) := \\left\\{y_i \\in \\Omega_i \\mid \\, A_i y_i \\leq b-\\textstyle\\sum_{j \\neq i}^{N} A_j x_j\\right\\}.\n\\vspace{-.15cm}\\end{equation}\n\nNext, we assume a constraint qualification condition.\n\n\\begin{standassumption}\\label{ass_X}\n(\\textit{Constraint qualification})\nThe set $\\boldsymbol{\\mathcal{X}}$ in \\eqref{eq:coupling} satisfies Slater's constraint qualification. \n\\hfill\\small$\\blacksquare$\n\\end{standassumption}\nThe aim of each agent is to solve its local optimization problem\n\\vspace{-.15cm}\\begin{equation}\\label{game}\n\\forall i\\in\\mathcal I: \\quad\\left\\{\\begin{array}{cl}\n\\min_{x_i \\in \\Omega_i} & J_{i}(x_i, \\boldsymbol x_{-i}) \\\\ \n\\text { s.t. } & A_i x_i \\leq b-\\sum_{j \\neq i}^{N} A_j x_j.\n\\end{array}\\right.\n\\vspace{-.15cm}\\end{equation}\n\n\n\n\n\nThus, the solution concept for such a competitive scenario is the generalized Nash equilibrium \\cite{Deb52,facchinei2010}. \n\n\\begin{definition} (\\textit{Generalized Nash equilibrium})\nA collective strategy $\\boldsymbol x^{\\ast}=\\operatorname{col}(x_{1}^{\\ast},\\ldots,x_{N}^{\\ast})\\in \\boldsymbol{\\mathcal{X}}$\nis a generalized Nash equilibrium of the game in \\eqref{game} if, for all $i\\in\\mathcal I$,\n\\vspace{-.15cm}\\begin{equation*}\nJ_i(x^*_i,\\boldsymbol x^*_{-i}) \\leq \\inf\\{ J_i(y,\\boldsymbol x^*_{-i}) \\, \\mid \\, y\\in\\mathcal X_i(\\boldsymbol x_{-i})\\}.\\vspace{-.4cm}\n\\vspace{-.15cm}\\end{equation*}\n\\hfill\\small$\\blacksquare$\n\\end{definition}\nTo derive optimality conditions characterizing GNE, we define agent $i$'s Lagrangian function as\n$\\mathcal L_{i}(x_{i},\\lambda_i, \\boldsymbol x_{-i}):=J_{i}(x_i, \\boldsymbol x_{-i})+\\lambda_i^{\\top}(A\\boldsymbol{x}-b)$\nwhere $\\lambda_i\\in{\\mathbb{R}}^m_{\\geq 0}$ is the Lagrange multiplier associated with the coupling constraint $A \\boldsymbol{x} \\leq b$. Thanks to the sum rule of the subgradient for Lipschitz continuous functions \\cite[\\S 1.8]{Cla98}, we can write the subgradient of agent $i$ as \n$ \\partial_{x_{i}}J_{i}(x_{i},\\boldsymbol x_{-i})=\\nabla_{x_{i}} f_{i}(x_{i},\\boldsymbol x_{-i})+\\partial g_{i}(x_{i})$. 
Therefore, \nUnder Assumption \\ref{ass_X}, the Karush--Kuhn--Tucker (KKT) theorem ensures the existence of a pair $(x^*_{i},\\lambda^*_i)\\in\\Omega_{i}\\times\\mathbb{R}^{m}_{\\geq 0}$, such that \n\\begin{equation}\\label{KKT_game}\n\\forall i\\in\\mathcal I:\\begin{cases}\n\\mathbf{0}_{n_i}\\in \\nabla_{x_{i}} f_{i}(x^*_{i};\\boldsymbol x^*_{-i})+\\partial g_{i}(x^*_{i})+A^{\\top}_{i}\\lambda_i^*\\\\\n\\mathbf{0}_{m}\\in \\NC_{\\mathbb{R}^{m}_{\\geq0}}(\\lambda_i^*)-(A\\boldsymbol{x}^*-b).\n\\end{cases}\n\\vspace{-.15cm}\\end{equation}\n\nWe conclude the section by postulating a standard assumption for GNEP's \\cite{facchinei2010}, and inclusion problems in general \\cite{BauCom16}, concerning the monotonicity and Lipschitz continuity of the mapping that collects the partial gradients $\\nabla_{i} f_{i}$.\n\n\\begin{standassumption}[Monotonicity]\n\\label{ass:GM}\nThe mapping \n\\vspace{-.15cm}\\begin{equation}\\label{eq:F}\nF(\\boldsymbol x):= \\mathrm{col}\\left( \\nabla_{x_1}f_{1}(\\boldsymbol x),\\ldots,\\nabla_{x_N}f_{N}(\\boldsymbol x)\\right)\n\\vspace{-.15cm}\\end{equation}\nis monotone on $\\boldsymbol\\Omega$, i.e., for all $\\boldsymbol x,\\boldsymbol y\\in\\boldsymbol\\Omega$\n$\\langle F(\\boldsymbol x)-F(\\boldsymbol y),\\boldsymbol x-\\boldsymbol y\\rangle\\geq 0.$\nand $\\frac{1}{\\beta}$-Lipschitz continuous, $\\beta > 0$, i.e., for all $\\boldsymbol x,\\boldsymbol y\\in\\boldsymbol\\Omega$,\\vspace{-.15cm}\n$\\norm{F(\\boldsymbol x)-F(\\boldsymbol y)} \\leq \\tfrac{1}{\\beta}\\norm{\\boldsymbol x-\\boldsymbol y}.\n\\hfill\\small$\\blacksquare$\n\\end{standassumption}\n\\vspace{.15cm}\nAmong all possible GNEs of the game, this work focuses on the computation of a \\emph{variational GNE} (v-GNE) \\cite[Def. 3.10]{facchinei2010}, i.e. a GNE in which all players share consensus on the dual variables:\n\\vspace{-.15cm}\\begin{equation}\\label{KKT_VI}\n\\forall i\\in\\mathcal I:\\begin{cases}\n\\mathbf{0}_{n_i}\\in \\nabla_{x_{i}}f_{i}(x^{*}_{i};\\boldsymbol{x}^{*}_{-i})+\\partial g_{i}(x^{*}_{i})+A_{i}^{\\top}\\lambda^{\\ast}\\\\\n\\mathbf{0}_{m}\\in \\NC_{\\mathbb{R}^{m}_{\\geq0}}(\\lambda^*)-(A\\boldsymbol{x}^*-b).\\\\\n\\end{cases}\n\\vspace{-.15cm}\\end{equation}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Distributed Generalized Nash equilibrium seeking via Operator Splitting}\nIn this section, we present the proposed distributed algorithms. We allow each agent to have information on his own local problem data only, i.e., $J_{i},\\Omega_{i}, A_{i}$ and $b_{i}$. We let each agent $i$ control its local decision $x_{i}$, and a local copy $\\lambda_{i}\\in\\mathbb{R}^{m}_{\\geq0}$ of dual variables, as well as a local auxiliary variable $z_{i}\\in\\mathbb{R}^{m}$ used to enforce consensus of the dual variables.\nTo actually reach consensus on the dual variables, we let the agents exchange information via an undirected weighted \\emph{communication graph}, represented by its weighted adjacency matrix $\\boldsymbol W = [w_{i,j}]_{i,j}\\in{\\mathbb{R}}^{N\\times N}$. We assume $w_{ij}>0$ iff $(i,j)$ is an edge in the communication graph. 
The set of neighbours of agent $i$ in the graph is $\\mathcal{N}_{i}^{\\lambda}=\\{j |w_{i,j}>0\\}$.\n\\vspace{.15cm}\n\\begin{standassumption}[Graph connectivity]\\label{ass:graph}\nThe matrix $\\mathbf{W}$ is symmetric and irreducible.\\hfill\\small$\\blacksquare$\n\\end{standassumption}\nDefine the weighted Laplacian as $\\mathbf{L}:=\\diag\\left\\{(\\mathbf{W}\\mathbf{1}_{N})_{1}, \\dots, (\\mathbf{W}\\mathbf{1}_{N})_{N}\\right\\}-\\mathbf{W}$. It holds that $\\mathbf{L}^{\\top}=\\mathbf{L}$, $\\ker(\\mathbf{L})=\\Span(\\mathbf{1}_{N})$ and that, given Standing Assumption \\ref{ass:graph}, $\\mathbf{L}$ is positive semi-definite with real and distinct eigenvalues $0=s_{1}0$.\n\\hfill\\small$\\blacksquare$\n\\end{assumption}\n\nTo ensure the cocoercivity condition, we refer to the following result.\n\n\\begin{lemma}\\cite[Lem. 5 and Lem. 7]{yi2019}\\label{lemma_coco}\nLet $\\Phi\\succ0$ and $F$ as in \\eqref{eq:F} satisfy Assumption \\ref{ass:Hstrong}. Then, the following hold:\n\\begin{itemize}\n\\item[(i)] $\\mathcal A$ is $\\theta$-cocoercive with $\\theta\\leq\\min\\{1\/2\\Delta,\\eta\\beta^2\\}$.\n\\item[(ii)] $\\Phi^{-1}\\mathcal A$ is $\\alpha\\theta$-cocoercive with $\\alpha=1\/\\abs{\\Phi^{-1}}$. \n\\hfill\\small$\\blacksquare$\n\\end{itemize}\n\\end{lemma}\n\nWe recall that convergence to a v-GNE has been demonstrated in \\cite[Th. 3]{yi2019}, if the step sizes in \\eqref{eq:phi} are chosen small enough \\cite[Lem. 6]{yi2019}. \n\n\n\\subsection{Forward-backward-forward splitting}\n\\label{sec:FBF}\nIn this section, we propose our distributed forward-backward-forward (FBF) scheme, Algorithm \\ref{FBF_algo}.\n\n\\begin{algorithm}\n\\caption{Distributed Forward Backward Forward}\\label{FBF_algo}\nInitialization: $x_i^0 \\in \\Omega_i, \\lambda_i^0 \\in {\\mathbb{R}}_{\\geq0}^{m},$ and $z_i^0 \\in {\\mathbb{R}}^{m} .$\\\\\nIteration $k$: Agent $i$\\\\\n($1$) Receives $x_j^k$ for $j \\in \\mathcal{N}_{i}^{J}$, $ \\lambda_j^k$ and $z_{j,k}$ for $j \\in \\mathcal{N}_{i}^{\\lambda}$ then updates\n$$\\begin{aligned}\n&\\tilde x_i^k=\\operatorname{prox}^{\\rho_i}_{g_{i}}[x_i^k-\\rho_{i}(\\nabla_{x_{i}} f_{i}(x_i^k,\\boldsymbol x_{-i}^k)-A_{i}^{T} \\lambda_i^k)]\\\\\n&\\tilde z_i^k=z_i^k+\\sigma_{i} \\sum\\nolimits_{j \\in \\mathcal{N}_{i}^{\\lambda}} w_{i,j}(\\lambda_i^k-\\lambda_j^k)\\\\\n&\\tilde\\lambda_i^k=\\operatorname{proj}_{{\\mathbb{R}}^m_{\\geq 0}}\\{\\lambda_i^k-\\tau_{i}(A_{i}x_i^k-b_{i})\\\\\n&\\quad\\quad+\\tau\\sum\\nolimits_{j \\in \\mathcal{N}_{i}^{\\lambda}} w_{i,j}[(z_{i}^{k}-z_j^k)-(\\lambda_i^k-\\lambda_j^k)]\\}\n\\end{aligned}$$\n($2$) Receives $\\tilde x_j^k$ for $j \\in \\mathcal{N}_{i}^{J}$, $ \\tilde \\lambda_j^k$and $\\tilde z_{j}^{k}$ for $j \\in \\mathcal{N}_{i}^{\\lambda}$ then updates\n$$\\begin{aligned}\n&x_i^{k+1}=\\tilde x_i^k-\\rho_{i}(\\nabla_{x_{i}} f_{i}(x_i^k,\\boldsymbol x_{-i}^k)-\\nabla_{\\tilde x_{i}} f_{i}(\\tilde x_i^k,\\tilde {\\boldsymbol{x}}_{-i}^k))\\\\\n&\\quad\\quad\\quad-\\rho_iA_{i}^{T} (\\lambda_i^k-\\tilde \\lambda_{i,k})\\\\\n&z_i^{k+1}=\\tilde z_i^k+\\sigma_{i} \\sum\\nolimits_{j \\in \\mathcal{N}_{i}^{\\lambda}} w_{i,j}[(\\lambda_i^k-\\lambda_j^k)-(\\tilde\\lambda_i^k-\\tilde\\lambda_j^k)]\\\\\n&\\lambda_i^{k+1}=\\tilde{\\lambda}_i^{k}+\\tau_iA_i(\\tilde{x}_{i}^{k}-x_{i}^{k})\\\\\n&\\quad\\quad\\quad-\\tau_i\\sum\\nolimits_{j \\in \\mathcal{N}_{i}^{\\lambda}} w_{i,j}[(z_i^k-z_j^k)-(\\tilde z_i^k-\\tilde z_j^k)]\\\\\n&\\quad\\quad\\quad+\\tau_i\\sum\\nolimits_{j \\in \\mathcal{N}_{i}^{\\lambda}} 
w_{i,j}[(\\lambda_{i,k}-\\lambda_j^k)-(\\tilde\\lambda_i^k-\\tilde\\lambda_j^k)]\\\\\n\\end{aligned}$$\n\\end{algorithm}\n\nIn compact form, the FBF algorithm generates two sequences $(\\boldsymbol u^{k},\\boldsymbol v^{k})_{k\\geq 0}$ as follows: \n\\vspace{-.15cm}\\begin{equation}\\label{FBF}\n\\begin{aligned}\n\\boldsymbol u^{k}&=J_{\\Psi^{-1} \\mathcal C}(\\boldsymbol v^{k}-\\Psi^{-1} \\mathcal D \\boldsymbol v^{k})\\\\\n\\boldsymbol v^{k+1}&=\\boldsymbol u^{k}+\\Psi^{-1} (\\mathcal D\\boldsymbol v^{k}-\\mathcal D\\boldsymbol u^{k}).\n\\end{aligned}\n\\vspace{-.15cm}\\end{equation}\n\nIn \\eqref{FBF}, $\\Psi$ is the block-diagonal matrix of the step sizes: \n\\vspace{-.15cm}\\begin{equation}\\label{Psi}\n\\Psi=\\operatorname{diag}(\\rho^{-1},\\sigma^{-1}, \\tau^{-1}),\n\\vspace{-.15cm}\\end{equation}\n\nWe recall that $\\mathcal D=\\mathcal A+\\mathcal B$ is single-valued, maximally monotone and Lipschitz continuous by Lemma \\ref{lemma_op}. Each iteration differs from the scheme in \\eqref{eq:FB} by one additional forward step and the fact that the resolvent is now defined in terms of the operator $\\mathcal C$ only. Writing the coordinates as $\\boldsymbol u^{k}=(\\tilde{\\boldsymbol{x}}^{k},\\tilde{\\boldsymbol z}^{k},\\tilde{\\boldsymbol \\lambda}^{k})$ and $\\boldsymbol v^{k}=(\\boldsymbol{x}^{k},\\boldsymbol z^{k},\\boldsymbol \\lambda^{k})$, the updates are explicitly given in Algorithm \\ref{FBF_algo}.\n\nFBF operates on the splitting $\\mathcal C+\\mathcal D$ and it can be compactly written as the fixed-point iteration\n$\\boldsymbol v^{k+1}=T_{\\text{FBF}} \\, \\boldsymbol v^{k},$\nwhere the mapping $T_{\\text{FBF}}$ is defined as \n\\vspace{-.15cm}\\begin{equation}\\label{eq:T_FBF}\nT_{\\text{FBF}}:=\\Psi^{-1}\\mathcal D+(\\Id-\\Psi^{-1}\\mathcal D)\\circ \\operatorname{J}_{\\Psi^{-1}\\mathcal C}\\circ(\\Id-\\Psi^{-1}\\mathcal D). \n\\vspace{-.15cm}\\end{equation}\n\n\nTo ensure convergence of Algorithm \\ref{FBF_algo} to a v-GNE of the game in \\eqref{game}, we need the next assumption.\n\n \n\\begin{assumption}\\label{step_FBF}\n$\\abs{\\Psi^{-1}} < 1\/L_{\\mathcal D}$, with $\\Psi$ as in \\eqref{Psi} and $L_{\\mathcal D}$ being the Lipschitz constant of $\\mathcal D$ as in Lemma \\ref{lemma_op}.\n\\hfill\\small$\\blacksquare$\n\\end{assumption}\n\n\n\n\n\n\n\\begin{theorem}\\label{theo_FBF}\nLet Assumption \\ref{step_FBF} hold. The sequence $(\\boldsymbol x^k,\\boldsymbol \\lambda^k)$ generated by Algorithm \\ref{FBF_algo} converges to \n$\\operatorname{zer}(\\mathcal A+\\mathcal B+\\mathcal C)$, thus the primal variable converges to a v-GNE of the game in \\eqref{game}.\n\\end{theorem}\n\\begin{proof}\nThe fixed-point iteration with $T_{\\text{FBF}}$ as in \\eqref{eq:T_FBF} can be derived from \\eqref{FBF} by substituting $\\boldsymbol u_k$.\nTherefore, the sequence $(\\boldsymbol x^k,\\boldsymbol\\lambda^k)$ generated by Algorithm \\ref{FBF_algo} converges to a v-GNE by \\cite[Th.26.17]{BauCom16} and \\cite[Th.3.4]{tseng2000} since $\\Psi^{-1}\\mathcal A$ is monotone by Lemma \\ref{lemma_mono} and $\\mathcal A+\\mathcal B+\\mathcal C$ is maximally monotone by Lemma \\ref{lemma_op}. See Appendix \\ref{sec:FBF} for details.\n\\end{proof}\n \n\nWe emphasize that Algorithm \\ref{FBF_algo} does not require strong monotonicity (Assumption \\ref{ass:Hstrong}) of the pseudo-gradient mapping $F$ in \\eqref{eq:F}.\nMoreover, we note that the FBF algorithm requires two evaluations of the individual gradients, which requires computing the operator $\\mathcal D$ twice per iteration. 
At the level of the individual agents, this means that we need two communication rounds per iteration in order to exchange the necessary information. Compared with the FB algorithm, the non-strong monotonicity assumption comes at the price of increased communications at each iteration.\n\n\n\\subsection{Forward-backward-half forward splitting}\n\\label{sec:FBHF}\n\nShould the strong monotonicity condition (Assumption \\ref{ass:Hstrong}) be satisfied, an alternative to the FB is the \\emph{forward-backward-half-forward} (FBHF) operator splitting, developed in \\cite{briceno2018}. Thus, our second GNE seeking algorithm is a distributed FBHF, described in Algorithm \\ref{FBHF_algo}.\n\n\\begin{algorithm}\n\\caption{Distributed Forward Backward Half Forward}\\label{FBHF_algo}\nInitialization: $x_i^0 \\in \\Omega_i, \\lambda_i^0 \\in {\\mathbb{R}}_{\\geq0}^{m},$ and $z_i^0 \\in {\\mathbb{R}}^{m} .$\\\\\nIteration $k$: Agent $i$\\\\\n($1$) Receives $x_j^k$ for $j \\in \\mathcal{N}_{i}^{J}$, $ \\lambda_j^k$ and $z_{j,k}$ for $j \\in \\mathcal{N}_{i}^{\\lambda}$ then updates\n$$\\begin{aligned}\n&\\tilde x_i^k=\\operatorname{prox}^{\\rho_i}_{g_{i}}[x_i^k-\\rho_{i}(\\nabla_{x_{i}} f_{i}(x_i^k,\\boldsymbol x_{-i}^k)-A_{i}^{T} \\lambda_i^k)]\\\\\n&\\tilde z_i^k=z_i^k+\\sigma_{i} \\sum\\nolimits_{j \\in \\mathcal{N}_{i}^{\\lambda}} w_{i,j}(\\lambda_i^k-\\lambda_j^k)\\\\\n&\\tilde\\lambda_i^k=\\operatorname{proj}_{{\\mathbb{R}}^m_{\\geq 0}}\\{\\lambda_i^k-\\tau_{i}(A_{i}x_i^k-b_{i})\\\\\n&\\quad\\quad+\\tau\\sum\\nolimits_{j \\in \\mathcal{N}_{i}^{\\lambda}} w_{i,j}[(z_{i}^{k}-z_j^k)-(\\lambda_i^k-\\lambda_j^k)]\\}\n\\end{aligned}$$\n($2$) Receives $ \\tilde \\lambda_j^k$and $\\tilde z_{j,k}$ for $j \\in \\mathcal{N}_{i}^{\\lambda}$ then updates\n$$\\begin{aligned}\n&x_i^{k+1}=\\tilde x_i^k+\\rho_iA_{i}^{T} (\\lambda_i^k-\\tilde \\lambda_{i,k})]\\\\\n&z_i^{k+1}=\\tilde z_i^k+\\sigma_{i} \\sum\\nolimits_{j \\in \\mathcal{N}_{i}^{\\lambda}} w_{i,j}[(\\lambda_i^k-\\lambda_j^k)-(\\tilde\\lambda_i^k-\\tilde\\lambda_j^k)]\\\\\n&\\lambda_i^{k+1}=\\tilde{\\lambda}_i^{k}+\\tau_iA_i(\\tilde{x}_{i}^{k}-x_{i}^{k})\\\\\n&\\quad\\quad\\quad-\\tau_i\\sum\\nolimits_{j \\in \\mathcal{N}_{i}^{\\lambda}} w_{i,j}[(z_i^k-z_j^k)-(\\tilde z_i^k-\\tilde z_j^k)]\\\\\n\\end{aligned}$$\n\\end{algorithm}\nIn compact form, the FBHF algorithm reads as\n\\vspace{-.15cm}\\begin{equation}\\label{FBHF}\n\\begin{aligned}\n\\boldsymbol u^{k} & = \\mathrm{J}_{\\Psi^{-1}\\mathcal C}(\\boldsymbol v^{k}-\\Psi^{-1} (\\mathcal A+\\mathcal B) \\boldsymbol v^{k}) \\\\\n\\boldsymbol v^{k+1} & =\\boldsymbol u^{k}+\\Psi^{-1}(\\mathcal B\\boldsymbol v^k- \\mathcal B\\boldsymbol u^k).\n\\end{aligned}\n\\vspace{-.15cm}\\end{equation}\nWe note that the iterates of FBHF are similar to those of the FBF, but the second forward step requires the operator $\\mathcal B$ only. \nMore simply, we can write the FBHF as the fixed-point iteration\n$\\boldsymbol v^{k+1}=T_{\\text{FBHF}}\\boldsymbol v^{k},$\nwhere \n\\vspace{-.15cm}\\begin{equation}\\label{eq:T_FBHF}\nT_{\\text{FBHF}}=(\\Id-\\Psi^{-1}\\mathcal B)\\circ \\operatorname{J}_{\\Psi^{-1}\\mathcal C}\\circ (\\Id-\\Psi^{-1}\\mathcal D)+\\Psi^{-1}\\mathcal B. 
\n\\vspace{-.15cm}\\end{equation}\n\nAlso in this case, we have a bound on the step sizes.\n\n \n\\begin{assumption}\\label{step_FBHF}\n$|\\Psi^{-1}| \\leq \\min\\{2\\theta_{\\mathcal A},1\/L_{\\mathcal B}\\}$,\nwith $\\theta_{\\mathcal A}$ as in Lemma \\ref{lemma_coco} and $L_{\\mathcal B}$ as in Lemma \\ref{lemma_op}.\n\\hfill\\small$\\blacksquare$\n\\end{assumption}\n \n\nWe note that in Assumption \\ref{step_FBHF}, the step sizes in $\\Psi$ can be chosen larger than those in Assumption \\ref{step_FBF}, since the upper bound is related to the Lipschitz constant of the operator $\\mathcal B$ only, and not to $L_{\\mathcal D}=L_{\\mathcal A}+L_{\\mathcal B}$ as for the FBF (Assumption \\ref{step_FBF}). A similar comparison can be done with respect to the FB algorithm. Intuitively, larger step sizes should be beneficial in terms of convergence speed.\n\nWe can now establish our convergence result for the FBHF algorithm.\n\n \n\\begin{theorem}\nLet Assumptions \\ref{ass:Hstrong} and \\ref{step_FBHF} hold. The sequence $(\\boldsymbol x^k,\\boldsymbol \\lambda^k)$ generated by Algorithm \\ref{FBHF_algo} converges to $\\operatorname{zer}(\\mathcal A+\\mathcal B+\\mathcal C)$, thus the primal variable converges to \na v-GNE of the game in \\eqref{game}. \\hfill\\small$\\blacksquare$\n\\end{theorem}\n\n\\begin{proof}\nAlgorithm \\ref{FBHF_algo} is the fixed-point iteration in \\eqref{eq:T_FBHF} whose convergence is guaranteed by \\cite[Th. 2.3]{briceno2018} under Assumption \\ref{step_FBHF} because $\\Psi^{-1}\\mathcal A$ is cocoercive by Lemma \\ref{lemma_coco}. See Appendix \\ref{app:FBHF} for details.\n\\end{proof}\n\n\n\n\n\n\n\\section{Case study and numerical simulations}\nWe consider a networked Cournot game with market capacity constraints \\cite{yi2019}.\nAs a numerical setting, we use a set of 20 companies and 7 markets, similarly to \\cite{yi2019}. Each company $i$ has a local constraint $x_i\\in(0,\\delta_i)$ where each component of $\\delta_i$ is randomly drawn from $[1, 1.5]$. The maximal capacity of each market $j$ is $b_j$, randomly drawn from $[0.5, 1]$. The local cost function of company $i$ is $c_i(x_i) = \\pi_i\\sum_{j=1}^{n_i} ([x_i]_j)^2 + r_i^\\top x_i$, where $[x_i]_j$ indicates the $j$-th component of $x_i$.\nFor all $i\\in\\mathcal I$, $\\pi_i$ is randomly drawn from $[1, 8]$, and the components of $r_i$ are randomly drawn from $[0.1, 0.6]$. Notice that $c_i(x_i)$ is strongly convex with Lipschitz continuous gradient. The price is taken as a linear function $P= \\bar P-DA\\boldsymbol x$ where each component of $\\bar P =\\operatorname{col}(\\bar P_1,\\dots,\\bar P_7)$ is randomly drawn from $[2,4]$ while the entries of $D=\\operatorname{diag}(d_1,\\dots,d_7)$ are randomly drawn from $[0.5,1]$. Recall that the cost function of company $i$ is influenced by the variables of the agents selling in the same market. This information can be retrieved from \\cite[Fig. 1]{yi2019}. Since $c_i(x_i)$ is strongly convex with Lipschitz continuous gradient and the prices are linear, the pseudo-gradient of $f_i$ is strongly monotone. The communication graph $\\mathcal G^\\lambda$ for the dual variables is a cycle graph with the addition of the edges $(2,15)$ and $(6,13)$. As local cost functions $g_i$ we use indicator functions. In this way, the proximal step is a projection onto the local constraint sets.\n\nThe aim of these simulations is to compare the proposed schemes.\nThe step sizes are taken differently for every algorithm. 
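Note that, by \\eqref{Psi}, $\\Psi^{-1}=\\operatorname{diag}(\\rho,\\sigma, \\tau)$ is a diagonal matrix collecting the local step sizes, so its norm $|\\Psi^{-1}|$ is simply the largest of the $\\rho_i$, $\\sigma_i$ and $\\tau_i$; Assumptions \\ref{step_FBF} and \\ref{step_FBHF} thus translate into explicit upper bounds on the individual step sizes. 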
In particular, we take $\\rho_{\\text{FB}}$, $\\sigma_{\\text{FB}}$ and $\\tau_{\\text{FB}}$ as in \\cite[Lem. 6]{yi2019}, $\\rho_{\\text{FBF}}$, $\\sigma_{\\text{FBF}}$ and $\\tau_{\\text{FBF}}$ such that Assumption \\ref{step_FBF} is satisfied, and $\\rho_{\\text{FBHF}}$, $\\sigma_{\\text{FBHF}}$ and $\\tau_{\\text{FBHF}}$ such that Assumption \\ref{step_FBHF} holds. We select them as large as possible.\n\nThe initial points $\\lambda_i^0$ and $z_i^0$ are set to 0 while the local decision variables $x_i^0$ are randomly taken in the feasible sets.\n\nThe plots in Fig. \\ref{distance_sol} show the performance parameter $\\frac{\\norm{\\boldsymbol{x}^{k+1}-\\boldsymbol{x}^{*}}}{\\norm{\\boldsymbol{x}^{*}}}$, that is, the convergence to a solution $\\boldsymbol x^*$, and the CPU time (in seconds) used by each algorithm. We run 10 simulations, changing the parameters of the cost function to show that the results are replicable. The darker line represents the average path towards the solution.\n\n\nThe plot in Fig. \\ref{distance_sol} shows that, with suitable parameters, convergence to a solution is faster with the FBF algorithm which, however, is computationally more expensive than the FB and FBHF algorithms.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=.22]{ave_sol.eps}\\hspace{-.2cm}\n\\includegraphics[scale=.22]{ave_sec.eps}\n\\caption{Relative distance from v-GNE (left) and cumulative CPU time (right).}\\label{distance_sol}\n\\end{figure}\n\\vspace{-.2cm}\n\n\n\n\\section{Conclusion}\n\nThe FBF and the FBHF splitting methods generate distributed equilibrium seeking algorithms for solving generalized Nash equilibrium problems. Compared to the FB, the FBF has the advantage of converging without the strong monotonicity assumption. This comes at the price of increased communications between the agents. If strong monotonicity holds, an alternative to the FBF is the FBHF which, in our numerical experience, is less computationally expensive than the FBF.\n\n\\section{Appendix}\n\\subsection{Convergence of the forward-backward-forward}\\label{app:FBF}\nWe show the convergence proof for the FBF. From now on, $\\mathsf{H}={\\mathbb{R}}^n\\times{\\mathbb{R}}^{mN}\\times{\\mathbb{R}}^{mN}$ and $\\operatorname{fix}(T)=\\{x\\in\\mathsf{H}:Tx=x\\}$.\n\\begin{proposition}\nIf Assumption \\ref{step_FBF} holds, $\\operatorname{fix}(T_{\\text{FBF}})=\\mathcal Z$. \n\\end{proposition}\n\\begin{proof}\nWe first show that $\\mathcal Z\\subseteq \\operatorname{fix}(T_{\\text{FBF}})$. Let $u^{\\ast}\\in\\mathcal Z$: \n\\begin{equation*}\n\\begin{aligned}\n0\\in \\mathcal Cu^{\\ast} +\\mathcal Du^{\\ast} & \\Leftrightarrow -\\mathcal Du^{\\ast} \\in \\mathcal Cu^{\\ast} \\\\\n&\\Leftrightarrow u^{\\ast}=J_{\\Psi^{-1}\\mathcal C}(u^{\\ast}-\\Psi^{-1}\\mathcal Du^{\\ast})\\\\\n&\\Leftrightarrow \\Psi^{-1}\\mathcal Du^{\\ast}=\\Psi^{-1}\\mathcal DJ_{\\Psi^{-1}\\mathcal C}(u^{\\ast}-\\Psi^{-1}\\mathcal Du^{\\ast})\\\\\n&\\Leftrightarrow u^{\\ast}=T_{\\text{FBF}}u^{\\ast}. \n\\end{aligned}\n\\end{equation*}\nConversely, let $u^{\\ast}\\in\\operatorname{fix}(T_{\\text{FBF}})$. 
Then \n$u^{\\ast}-J_{\\Psi^{-1}\\mathcal C}(u^{\\ast}-\\Psi^{-1}\\mathcal Du^{\\ast})=\n\\Psi^{-1}\\mathcal Du^{\\ast}-\\Psi^{-1}\\mathcal DJ_{\\Psi^{-1}\\mathcal C}(u^{\\ast}-\\Psi^{-1}\\mathcal Du^{\\ast})$\nans\n\\vspace{-.15cm}\\begin{equation*}\\begin{aligned}\n&\\norm{u^{\\ast}-J_{\\Psi^{-1}\\mathcal C}(u^{\\ast}-\\Psi^{-1}\\mathcal Du^{\\ast})}\\leq\\\\\n&\\leq \\alpha^{-1}\\norm{\\mathcal Du^{\\ast} -\\mathcal DJ_{\\Psi^{-1}\\mathcal C}(u^{\\ast}-\\Psi^{-1}\\mathcal Du^{\\ast})}\\\\\n&\\leq \\tfrac{L}{\\alpha}\\norm{u^{\\ast}-J_{\\Psi^{-1}\\mathcal C}(u^{\\ast}-\\Psi^{-1}\\mathcal Du^{\\ast})}.\n\\end{aligned}\\vspace{-.15cm}\\end{equation*}\nHence,\n$u^{\\ast}=J_{\\Psi^{-1}\\mathcal C}(u^{\\ast}-\\Psi^{-1}\\mathcal Du^{\\ast})$. \n\\end{proof}\n\n\\begin{proposition}\nFor all $u^{\\ast}\\in\\operatorname{fix}(T_{\\text{FBF}})$ and $v\\in\\mathsf{H}$, there exists $\\varepsilon\\geq 0$ such that \n\\vspace{-.15cm}\\begin{equation}\\label{eq:Fejer}\n\\norm{T_{\\text{FBF}}v-u^{\\ast}}^{2}_{\\Psi}= \\norm{v-u^{\\ast}}^{2}_{\\Psi}-\\left(1-(L\/\\alpha)^{2}\\right)\\norm{u-v}^{2}_{\\Psi}-2\\varepsilon.\n\\vspace{-.15cm}\\end{equation}\n\\end{proposition}\n\\begin{proof}\nLet $u^{\\ast}\\in\\operatorname{fix}(T_{\\text{FBF}})$ and $u=J_{\\Psi^{-1}\\mathcal C}(v-\\Psi^{-1}\\mathcal D v),v^{+}=T_{\\text{FBF}}v$, for $v\\in\\mathsf{H}$ arbitrary. Then, \n\\vspace{-.18cm}\\begin{equation*}\\begin{aligned}\n\\norm{v-u^{\\ast}}_{\\Psi}^{2}&=\\norm{v-u+u-v^{+}+v^{+}-u^{\\ast}}^{2}_{\\Psi}\\\\\n&=\\norm{v-u}^{2}_{\\Psi}+\\norm{u-v^{+}}^{2}_{\\Psi}+\\norm{v^{+}-u^{\\ast}}^{2}_{\\Psi}\\\\\n&+2\\inner{v-u,u-u^{\\ast}}_{\\Psi}+2\\inner{u-v^{+},v^{+}-u^{\\ast}}_{\\Psi}.\n\\end{aligned}\\vspace{-.18cm}\\end{equation*}\nSince, \n$2\\inner{u-v^{+},v^{+}-u^{\\ast}}_{\\Psi}=2\\inner{u-v^{+},v^{+}-u}_{\\Psi}\n+2\\inner{u-v^{+},u-u^{\\ast}}_{\\Psi}=-2\\norm{u-v^{+}}_{\\Psi}^{2}+2\\inner{u-v^{+},u-u^{\\ast}}_{\\Psi}.$\nThis gives \n$\\norm{v-u^{\\ast}}_{\\Psi}^{2}=\\norm{v-u}^{2}_{\\Psi}-\\norm{u-v^{+}}_{\\Psi}^{2}+\\norm{v^{+}-u^{\\ast}}_{\\Psi}^{2}+2\\inner{u-u^{\\ast},v-v^{+}}_{\\Psi}. 
$\nBy definition of the updates, we have for $\\bar{v}\\equiv \\mathcal Dv,\\bar{u}\\equiv \\mathcal Du,\\hat{v}\\in \\mathcal Cu$, the identities\n$u+\\Psi^{-1}\\hat{v}=v-\\Psi^{-1}\\bar{v}$ and $v^{+}=u+\\Psi^{-1}(\\bar{v}-\\bar{u}).$\nFurthermore, since $0\\in \\mathcal Du^{\\ast} +\\mathcal Cu^{\\ast} $, there exists $\\hat{v}^{\\ast}\\in \\mathcal Cu^{\\ast} $ and $\\bar{u}^{\\ast}\\equiv \\mathcal Du^{\\ast} $ such that\n$0=\\bar{u}^{\\ast}+\\hat{v}^{\\ast}.$\nIt follows that \n$v-v^{+}=v-u-\\Psi^{-1}(\\bar{v}-\\bar{u})=\\Psi^{-1}(\\hat{v}+\\bar{u}).$\nHence, \n\\vspace{-.18cm}\\begin{equation*}\n\\begin{aligned}\n\\norm{v-u^{\\ast}}_{\\Psi}^{2}=&\\norm{v-u}^{2}_{\\Psi}-\\norm{u-v^{+}}_{\\Psi}^{2}+\\norm{v^{+}-u^{\\ast}}_{\\Psi}^{2}+\\\\\n&+2\\inner{u-u^{\\ast},\\hat{v}+\\bar{u}}\\\\\n=&\\norm{v-u}^{2}_{\\Psi}-\\norm{u-v^{+}}_{\\Psi}^{2}+\\norm{v^{+}-u^{\\ast}}_{\\Psi}^{2}+\\\\\n&+2\\inner{\\hat{v}-\\hat{v}^{\\ast}-\\bar{u}^{\\ast}+\\bar{u},u-u^{\\ast}}.\n\\end{aligned}\n\\vspace{-.18cm}\\end{equation*}\nSince $(u,\\hat{v}),(u^{\\ast},\\hat{v}^{\\ast})\\in\\gr(\\mathcal C)$ and $(u^{\\ast},\\bar{u}^{\\ast}),(u,\\bar{u})\\in\\gr(\\mathcal D)$, it follows from the monotonicity that \n$\\varepsilon:=\\inner{\\hat{v}-\\hat{v}^{\\ast}-\\bar{u}^{\\ast}+\\bar{u},u-u^{\\ast}}\\geq 0.$\nFinally, observe that $u-v^{+}=\\Psi^{-1}(\\mathcal Du-\\mathcal Dv)$, and that \n\\vspace{-.15cm}\\begin{equation*}\\begin{aligned}\n&\\norm{\\Psi^{-1}(\\mathcal Du-\\mathcal Dv)}_{\\Psi}^{2}=\\inner{\\Psi^{-1}(\\mathcal Du-\\mathcal Dv),\\mathcal Du-\\mathcal Dv}\\\\\n&\\leq \\lambda_{\\max}(\\Psi^{-1})\\norm{\\mathcal Du-\\mathcal Dv}^{2}\\leq L^{2}\\lambda_{\\max}(\\Psi^{-1})\\norm{u-v}^{2}\\\\\n&\\leq L^{2}\\tfrac{\\lambda_{\\max}(\\Psi^{-1})}{\\lambda_{\\min}(\\Psi)}\\norm{u-v}^{2}_{\\Psi}.\n\\end{aligned}\\vspace{-.15cm}\\end{equation*}\nSince $\\alpha=1\/\\lambda_{\\max}(\\Psi^{-1})=\\lambda_{\\min}(\\Psi)$, it follows from the Lipschitz continuity of the operator $\\mathcal D$ that\n$\\norm{u-v^{+}}_{\\Psi}^{2}\\leq (L\/\\alpha)^{2}\\norm{u-v}^{2}_{\\Psi} $\nand the statement is proven.\n\\end{proof}\n\\begin{corollary}\nIf $L\/\\alpha<1$, the map $T_{\\text{FBF}}:\\mathsf{H}\\to\\mathsf{H}$ is quasinonexpansive in the Hilbert space $(\\mathsf{H},\\inner{\\cdot,\\cdot}_{\\Psi})$, i.e. \n\\vspace{-.15cm}\\begin{equation*}\n\\forall v\\in\\mathsf{H}\\; \\forall u^{\\ast}\\in\\operatorname{fix}(T_{\\text{FBF}}) \\;\\norm{T_{\\text{FBF}}v-u^{\\ast}}_{\\Psi}\\leq\\norm{v-u^{\\ast}}_{\\Psi}.\n\\vspace{-.15cm}\\end{equation*}\n\\end{corollary}\n\\begin{proposition}\nIf Assumption \\ref{step_FBF} holds, the sequence generated by the FBF algorithm, $(v^{k})_{k\\geq 0}$, is bounded in norm, and all its accumulation points are elements in $\\mathcal Z$.\n\\end{proposition}\n\\begin{proof}\nFrom \\eqref{eq:Fejer} we deduce that $(v^{k})_{k\\geq 0}$ is Fej\\'{e}r monotone with respect to $\\operatorname{fix}(T_{\\text{FBF}})=\\mathcal Z$. Therefore, it is bounded in norm. It remains to show that all accumulation points are in $\\mathcal Z$. By an obvious abuse of notation, let $(v^{k})_{k\\geq 0}$ denote a converging subsequence with limit $u^{\\ast}$. From \\eqref{eq:Fejer} it follows $\\norm{u^{k}-v^{k}}_{\\Psi}\\to 0$, and hence $\\norm{u^{k}-v^{k}}\\to 0$ as $k\\to\\infty$. By continuity, it therefore follows as well $\\norm{\\mathcal Du^k-\\mathcal Dv^k}\\to 0$ as $k\\to\\infty$. 
Since $u^{k}=J_{\\Psi^{-1}\\mathcal C}(v^{k}-\\Psi^{-1}\\mathcal Dv^{k})$, it follows that $w^{k}:=\\Psi(v^{k}-u^{k})+\\mathcal Du^k-\\mathcal Dv^k\\in \\mathcal Du^k+\\mathcal Cu^{k}.$\nSince $w^{k}\\to 0$ and the operator $\\mathcal C+\\mathcal D$ is maximally monotone by Lemma \\ref{lemma_op} and has a closed graph \\cite[Lem. 3.2]{tseng2000}, we conclude $0\\in \\mathcal Du^{\\ast} +\\mathcal Cu^{\\ast} $. Hence, $u^{\\ast}\\in\\mathcal Z$.\n\\end{proof}\n\n\\subsection{Convergence of the forward-backward-half-forward}\n\\label{app:FBHF}\nWe here provide the convergence proof for the FBHF.\n\\begin{proposition}\nIf Assumption \\ref{step_FBHF} holds, the sequence generated by the FBHF algorithm converges to $\\mathcal Z$.\n\\end{proposition}\n\\begin{proof}\nLet $u^{\\ast}\\in\\mathcal Z$ and $v\\in\\mathsf{H}$ be arbitrary, and set $w:=v-\\Psi^{-1}\\mathcal Dv$, $u:=J_{\\Psi^{-1}\\mathcal C}(w)$ and $v^{+}:=T_{\\text{FBHF}}v=u+\\Psi^{-1}(\\mathcal Bv-\\mathcal Bu)$.\nSince $w-u\\in\\Psi^{-1}\\mathcal C u$, it follows that $(u,w-u)\\in\\gr(\\Psi^{-1}\\mathcal C)$. Additionally, $0\\in \\mathcal Du^{\\ast} +\\mathcal Cu^{\\ast} $, implying that $(u^{\\ast},-\\Psi^{-1}\\mathcal Du^{\\ast})\\in \\gr(\\Psi^{-1}\\mathcal C)$. Monotonicity of the involved operators implies that \n$\\inner{u-u^{\\ast},u-w-\\Psi^{-1}\\mathcal Du^{\\ast}}_{\\Psi}\\leq 0,\\text{ and }\n\\inner{u-u^{\\ast},\\Psi^{-1}(\\mathcal Bu^{\\ast} -\\mathcal Bu)}_{\\Psi}\\leq 0.$\nUsing these two inequalities, we see \n\\vspace{-.15cm}\\begin{equation*}\\begin{aligned}\n&\\inner{u-u^{\\ast},u-w-\\Psi^{-1}\\mathcal Bu}_{\\Psi}=\\inner{u-u^{\\ast},\\Psi^{-1}\\mathcal Au^{\\ast}}_{\\Psi}\\\\\n&+\\inner{u-u^{\\ast},u-w-\\Psi^{-1}\\mathcal Du^{\\ast}}_{\\Psi}\\\\\n&+\\inner{u-u^{\\ast},\\Psi^{-1}(\\mathcal Bu^{\\ast} -\\mathcal Bu)}_{\\Psi}\\leq \\inner{u-u^{\\ast},\\Psi^{-1}\\mathcal Au^{\\ast}}_{\\Psi}\n\\end{aligned}\\vspace{-.15cm}\\end{equation*}\nTherefore, \n\\vspace{-.15cm}\\begin{equation}\\label{step}\n\\begin{aligned}\n&2\\inner{u-u^{\\ast},\\Psi^{-1}(\\mathcal Bv-\\mathcal Bu)}_{\\Psi}=\\\\\n&2\\inner{u-u^{\\ast},\\Psi^{-1}\\mathcal Bv+w-u}_{\\Psi}+2\\inner{u-u^{\\ast},u-w-\\Psi^{-1}\\mathcal Bu}_{\\Psi}\\\\\n&\\leq 2\\inner{u-u^{\\ast},\\Psi^{-1}\\mathcal Bv+w-u}_{\\Psi}+2\\inner{u-u^{\\ast},\\Psi^{-1}\\mathcal Au^{\\ast}}_{\\Psi}\\\\\n&=2\\inner{u-u^{\\ast},\\Psi^{-1}\\mathcal Dv+w-u}_{\\Psi}\\\\\n&+2\\inner{u-u^{\\ast},\\Psi^{-1}(\\mathcal Au^{\\ast}-\\mathcal Av)}_{\\Psi}\\\\\n&=2\\inner{u-u^{\\ast},v-u}_{\\Psi}+2\\inner{u-u^{\\ast},\\Psi^{-1}(\\mathcal Au^{\\ast}-\\mathcal Av)}_{\\Psi},\n\\end{aligned}\n\\vspace{-.15cm}\\end{equation}\nwhere in the last equality we have used the identity $w=v-\\Psi^{-1}\\mathcal Dv$. 
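In the next step we will use the cosine formula, i.e., the elementary identity\n\\vspace{-.15cm}\\begin{equation*}\n2\\inner{u-u^{\\ast},v-u}_{\\Psi}=\\norm{v-u^{\\ast}}_{\\Psi}^{2}-\\norm{u-u^{\\ast}}_{\\Psi}^{2}-\\norm{v-u}_{\\Psi}^{2},\n\\vspace{-.15cm}\\end{equation*}\nobtained by expanding $\\norm{(v-u)+(u-u^{\\ast})}_{\\Psi}^{2}$. 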
Using the cosine formula, \n\\eqref{step} becomes \n\\vspace{-.15cm}\\begin{equation}\n\\begin{aligned}\\label{eq:step}\n&2\\inner{u-u^{\\ast},\\Psi^{-1}(\\mathcal Bv-\\mathcal Bu)}_{\\Psi}\\leq \\norm{v-u^{\\ast}}_{\\Psi}^{2}-\\norm{u-u^{\\ast}}_{\\Psi}^{2}\\\\\n&-\\norm{v-u}_{\\Psi}^{2}+2\\inner{u-u^{\\ast},\\Psi^{-1}(\\mathcal Au^{\\ast}-\\mathcal Av)}_{\\Psi}.\n\\end{aligned}\n\\vspace{-.15cm}\\end{equation}\nThe cocoercivity of $\\Psi^{-1}\\mathcal A$ in $(\\mathsf{H},\\inner{\\cdot,\\cdot}_{\\Psi})$ gives for all $\\varepsilon>0$\n\\vspace{-.15cm}\\begin{equation*}\\begin{aligned}\n&2\\inner{u-u^{\\ast},\\Psi^{-1}(\\mathcal Au^{\\ast}-\\mathcal Av)}_{\\Psi}=\\\\\n&2\\inner{v-u^{\\ast},\\Psi^{-1}(\\mathcal Au^{\\ast}-\\mathcal Av)}_{\\Psi}+2\\inner{u-v,\\Psi^{-1}(\\mathcal Au^{\\ast}-\\mathcal Av)}_{\\Psi}\\\\\n&\\leq -2\\alpha\\theta\\norm{\\Psi^{-1}(\\mathcal Au^{\\ast}-\\mathcal Av)}_{\\Psi}^{2}+2\\inner{u-v,\\Psi^{-1}(\\mathcal Au^{\\ast}-\\mathcal Av)}_{\\Psi}\\\\\n&=-2\\alpha\\theta\\norm{\\Psi^{-1}(\\mathcal Au^{\\ast}-\\mathcal Av)}_{\\Psi}^{2}+\\tfrac{1}{\\varepsilon}\\norm{\\Psi^{-1}(\\mathcal Av-\\mathcal Au^{\\ast})}_{\\Psi}^{2}\\\\\n&+\\varepsilon\\norm{v-u}^{2}_{\\Psi}-\\varepsilon\\norm{v-u-\\tfrac{1}{\\varepsilon}\\Psi^{-1}(\\mathcal Av-\\mathcal Au^{\\ast})}^{2}_{\\Psi}\\\\\n&=\\varepsilon\\norm{v-u}^{2}_{\\Psi}-\\left(2\\alpha\\theta-\\tfrac{1}{\\varepsilon}\\right)\\norm{\\Psi^{-1}(\\mathcal Av-\\mathcal Au^{\\ast})}_{\\Psi}^{2}\\\\\n&-\\varepsilon\\norm{v-u-\\tfrac{1}{\\varepsilon}\\Psi^{-1}(\\mathcal Av-\\mathcal Au^{\\ast})}^{2}_{\\Psi}.\n\\end{aligned}\\vspace{-.15cm}\\end{equation*}\nCombining this estimate with (\\ref{eq:step}), we see \n\\vspace{-.15cm}\\begin{equation*}\\begin{aligned}\n&2\\inner{u-u^{\\ast},\\Psi^{-1}(\\mathcal Bv-\\mathcal Bu)}_{\\Psi}\\leq \\norm{v-u^{\\ast}}_{\\Psi}^{2}-\\norm{u-u^{\\ast}}_{\\Psi}^{2}\\\\\n&-\\norm{v-u}_{\\Psi}^{2}+\\varepsilon\\norm{v-u}^{2}_{\\Psi}-\\left(2\\alpha\\theta-\\tfrac{1}{\\varepsilon}\\right)\\norm{\\Psi^{-1}(\\mathcal Av-\\mathcal Au^{\\ast})}_{\\Psi}^{2}\\\\\n&-\\varepsilon\\norm{v-u-\\tfrac{1}{\\varepsilon}\\Psi^{-1}(\\mathcal Av-\\mathcal Au^{\\ast})}^{2}_{\\Psi}.\n\\end{aligned}\\vspace{-.15cm}\\end{equation*}\nTherefore, \n\\vspace{-.15cm}\\begin{equation*}\\begin{aligned}\n&\\norm{v^{+}-u^{\\ast}}^{2}_{\\Psi\n=\\norm{u+\\Psi^{-1}(\\mathcal Bv-\\mathcal Bu)-u^{\\ast}}^{2}_{\\Psi}\\\\\n&=\\norm{u-u^{\\ast}}^{2}_{\\Psi}+2\\inner{u-u^{\\ast},\\Psi^{-1}(\\mathcal Bv-\\mathcal Bu)}_{\\Psi}\\\\\n&+\\norm{\\Psi^{-1}(\\mathcal Bv-\\mathcal Bu)}_{\\Psi}^{2}\\\\\n&\\leq \\norm{u-u^{\\ast}}^{2}_{\\Psi}+\\norm{\\Psi^{-1}(\\mathcal Bv-\\mathcal Bu)}_{\\Psi}^{2}-\\norm{u-u^{\\ast}}_{\\Psi}^{2}\\\\\n&-\\hspace{-.1cm}\\left(2\\alpha\\theta-\\tfrac{1}{\\varepsilon}\\right)\\norm{\\Psi^{-1}(\\mathcal Av-\\mathcal Au^{\\ast})}_{\\Psi}^{2}+\\norm{v-u^{\\ast}}_{\\Psi}^{2}\\hspace{-.1cm}-\\hspace{-.1cm}\\norm{v-u}_{\\Psi}^{2}\\\\\n&+\\varepsilon\\norm{v-u}^{2}_{\\Psi}-\\varepsilon\\norm{v-u-\\tfrac{1}{\\varepsilon}\\Psi^{-1}(\\mathcal Av-\\mathcal Au^{\\ast})}^{2}_{\\Psi}.\n\\end{aligned}\\vspace{-.15cm}\\end{equation*}\nSince, \n$\\norm{\\Psi^{-1}(\\mathcal Bv-\\mathcal Bu)}^{2}_{\\Psi}\\leq (L\/\\alpha)^{2}\\norm{v-u}^{2}_{\\Psi},$\nthe above reads as\n\\vspace{-.15cm}\\begin{equation*}\n\\begin{aligned}\n\\norm{T_{\\text{FBHF}}v-u^{\\ast}}_{\\Psi}^{2}\\leq& 
\\norm{v-u^{\\ast}}^{2}_{\\Psi}-L^{2}\\left(\\tfrac{1-\\varepsilon}{L^{2}}-\\tfrac{1}{\\alpha^{2}}\\right)\\norm{v-u}^{2}_{\\Psi}\\\\\n&-\\tfrac{1}{\\alpha\\varepsilon}\\left(2\\theta\\varepsilon-\\tfrac{1}{\\alpha}\\right)\\norm{\\Psi^{-1}(\\mathcal Av-\\mathcal Au^{\\ast})}_{\\Psi}^{2}\\\\\n&-\\varepsilon\\norm{v-u-\\tfrac{1}{\\varepsilon}\\Psi^{-1}(\\mathcal Av-\\mathcal Au^{\\ast})}^{2}_{\\Psi}.\n\\end{aligned}\n\\vspace{-.15cm}\\end{equation*}\nIn order to choose the largest interval for $1\/\\alpha$ ensuring that the second and third terms are negative, we set\n$\\chi\\leq\\min\\{2\\theta,1\/L\\}$.\nThen,\n\\vspace{-.15cm}\\begin{equation*}\\begin{aligned}\n\\norm{T_{\\text{FBHF}}v-u^{\\ast}}_{\\Psi}^{2}\\leq& \\norm{v-u^{\\ast}}^{2}_{\\Psi}-L^{2}\\left(\\chi^{2}-\\tfrac{1}{\\alpha^{2}}\\right)\\norm{v-u}^{2}_{\\Psi}\\\\\n&-\\tfrac{2\\theta}{\\alpha\\chi}\\left(\\chi-\\tfrac{1}{\\alpha}\\right)\\norm{\\Psi^{-1}(\\mathcal Av-\\mathcal Au^{\\ast})}_{\\Psi}^{2}\\\\\n&-\\tfrac{\\chi}{2\\theta}\\norm{v-u-\\tfrac{2\\theta}{\\chi}(\\Psi^{-1}(\\mathcal Av-\\mathcal Au^{\\ast}))}^{2}_{\\Psi}.\n\\end{aligned}\\vspace{-.15cm}\\end{equation*}\nFrom here, we obtain convergence of the sequence $(v^{k})_{k\\geq 0}$ as a consequence of \\cite[Thm. 2.3]{briceno2018} for $1\/\\alpha\\in(0,\\chi)$.\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe 2014 Nobel Prize in Chemistry was awarded for development of methods of super-resolution optical imaging, which, in particular, relied on single optical emitters imaging and localization \\cite{Betzig2006science,Dickson1997nature}. Despite a tremendous progress of nanoscopic fluorescence-based imaging, which has made possible through those pioneering work, identification of single emitters remains to be a challenge \\cite{Mortensen2010natmeth} and often relies on ultra-bright emission, which is not always affordable for biological systems, and some prior knowledge of the system, which is often not available. Thus, it would be highly desirable to be able to characterize individual emitters and quantify their presence in any given imaging volume. This can be generalized to a broader fundamental problem of counting the number of emitters in a sample from optically collected data, which has significant implications beyond the commonly used fluorescence imaging. For example,\nis it possible to determine the number of emitters contributing to a fluorescence or Raman signal, on the basis of imaging data alone? Furthermore, if it is possible, what is the limit to which we may determine the number of emitters? In particular we want to be able to determine the number of molecules, which might be in the range of 10, 100, or 1,000. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{schematic.pdf}\n \\caption{Schematic showing the concept to determine the number of emitters $M$ in one sample with unknown detection probability $P$. A pulsed laser excites the sample and emitted photons are measured using a the photon-number-resolving detector, PNRD. a histogram of the number of photons in each collected pulse in analysed using a Maximum Likelihood Estimator, which enables the determination of both $M$ and $P$. Such discrimination is not possible using only classical intensity measurements.}\n \\label{fig:schematic}\n\\end{figure}\n\nThe problem with determination of the number of emitters from intensity is fundamental. 
For a classical fluorescence measurement, we observe an intensity, which we may express as $I = M I_0$, where $I$ is the total intensity, $M$ the number of emitters and $I_0$ is the intensity of each emitter, where we assume each emitter has the same emission intensity, for simplicity. However, without \\emph{a priori} knowledge of $I_0$ or $M$, it is not possible to discriminate between cases where there are more dimmer emitters, or fewer brighter emitters. Additional practical confouding issues include uncertainty in the probability of collecting light emitted from the emitters, which may vary due to experimental conditions.\n\nQuantum mechanically, however, there is the prospect for distinguishing different emitter configurations. It is known that photon anti-bunching signals (Hanbury Brown and Twiss experiment) can be used to distinguish the number of emitters \\cite{Monticone2014prl}. This technique is typically used when the number of emitters is few (e.g. to determine the difference between 1, 2 or 3 emitters) \\cite{Worboys2020pra, Davin2021arxiv, Chen2017nphoton, Schwartz2013nanoletter}. The Hanbury Brown and Twiss (HBT) experiment, in its simplest form, uses two SPDs that sample the same optical field of view via a beamsplitter \\cite{Stevens2013book}. By observing the signal in coincidence, some information about the number of emitters can be obtained, which leads to the well-known result for the background-free, equal brightness HBT signal at coincidence:\n\\begin{align}\n g^{(2)}(0) = 1 - \\frac{1}{M}.\n\\end{align}\nOne way to improve determination of the number of emitters is to increase the number of SPDs. This approach leads to considerable complexity due to the increased number of beamsplitters and coincidence electronics that is required \\cite{Steven2014oe}. \n\nMeasurement of photon number has traditionally been a difficult task. Originally, this task was performed using single photon detectors (SPD) such as photomultiplier tubes, and later avalanche photodiodes. Such devices permit a binary measurement of the number of photons: they measure either 0 photons, or more than 0 photons, but a single device typically does not allow for a more sophisticated determination of the number of photons. Alternatively, new generations of photon number resolving (PNR) detectors are becoming available. PNR detectors have the ability to perform a direct projective measurement of the number of photons in a pulse of light. Compared with non-PNR detection, PNR detection can provide more information about noise and receiver imperfections \\cite{Becerra2015natphotonics}. Several techniques have been applied in realising photon number resolving \\cite{Provaznik2020oe, Thekkadath2020thesis}, including multiplexed APD \\cite{Kardynal2008nphoton}, CMOS image sensors \\cite{Ma2017optica}, superconductor nanowire \\cite{Cahall2017optica}, and superconducting transition-edge sensor (TES) \\cite{Schmidt2018LowTemPhys}. Additionally, there are multipixel photon counters (MPPC) that have the ability to distinguish from one up to 10 photons \\cite{Kalashnikov2011oe}. Superconducting transition-edge sensors have recently been reported to resolve photon numbers up to 16 with the efficiency of over 90\\% \\cite{Morais2020arXiv}. A study has reported a 24-pixel PNR detector based on superconducting nanowires that achieves the detection of $n=0-24$ photons \\cite{Mattioli2016oe}. 
Given the increase in technology it is is expected that this upper limit will soon be exceeded, and the availability of such detectors will become more widespread. It is therefore timely to see the effects that such detectors will have on the determination of the number of emitters in an unknown sample. \n\nHere we show that photon number resolving measurements enable the determination of emitter number more generally. The schematic is shown in Fig.\\ref{fig:schematic}. We theoretically determine the photon number probability distribution for $M$ emitters, with photon detection probability $p$. On the basis of this, we show maximum likelihood estimation and the Cramer-Rao lower bound for the simultaneous determination of both the number of emitters and the probability of detection. This analysis enables us to provide scaling laws for the number of experiments required to distinguish between different configurations. \n\nThis paper is organised as follows: We first discuss the photon statistics from an ensemble of $M$ classically identical emitters (ie emitters with the same emission probability in the same field of view with the same emission properties such as polarisation and wavelength, although we stress that the emitters are assumed to be not quantum indistinguishable). We then show the maximum likelihood determination of the number of emitters and photon detection probability for particular cases, as a function of the number of experiments. Lastly we present the Cramer-Rao lower bound for the scaling.\n\n\n\\section{Photon number resolving detection probabilities}\n\nWe are concerned with the problem of simultaneously determining the number of emitters, and the collection probability for a number of emitters. We consider an experimental configuration where $M$ (unknown) emitters are excited by a short pulse laser, and the fluorescence signal collected confocally. Each emitter is assumed to emit no more than one photon per excitation pulse, and we assume that the probability of detecting a photon from each emitter in that pulse is $p$. The photon resolving detector performs a projective measurement in the photon number basis, and we may write down the binomially distributed probability of detecting $N$ photons from the $M$ emitters as \n\n\\begin{align}\n\\mathcal{P}(N|M,p) = \\frac{M!}{\\left(M-N\\right)! N!} p^N \\left(1 - p\\right)^{M-N}. \\label{eq:P(N)}\n\\end{align}\n\n\nWe can explore Eq.~\\ref{eq:P(N)} in various limits, however to address the original problem, the clearest case to consider is where we have a known (measured) brightness, but where the actual number of emitters and their probability of emission is unknown. Therefore, we set $\\lambda = M p$, so that $\\lambda$ is the expected number of photons emitted per experiment, where each experiment is a Bernouilli trial. Note that although $N$ is quantised, $\\lambda$ is not. \n\nEq.~\\ref{eq:P(N)} in terms of $\\lambda$ becomes\n\\begin{align}\n \\mathcal{P}(N|\\lambda,M) = \\frac{M!}{\\left(M-N\\right)! N!} \\left(\\frac{\\lambda}{M}\\right)^N \\left(1 - \\frac{\\lambda}{M}\\right)^{M-N}.\n\\end{align}\nThis result should be compared with the standard Poisson distribution, which is expected in the limit $M\\rightarrow \\infty$\n\\begin{align}\n \\lim_{M\\rightarrow\\infty}\\mathcal{P}(N|M,p) = \\frac{\\lambda^N e^{-\\lambda}}{N!}.\n\\end{align}\nAnalytical results for this are shown in Fig.~\\ref{fig:poissonDistri}. Fig.~\\ref{fig:poissonDistri}(a) shows the probability of obtaining $N$ photons for different $M$. 
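To make this comparison concrete, the following short {\tt R} sketch (an illustration only, not the code used to produce the figures in this paper; it relies solely on the base functions {\tt dbinom} and {\tt dpois}) evaluates Eq.~\ref{eq:P(N)} at fixed $\lambda = Mp = 20$ for several values of $M$ and compares the result with the Poisson limit.
\begin{verbatim}
# Illustrative sketch: photon-number distributions P(N | M, p) at fixed
# mean brightness lambda = M*p, compared with the Poisson limit.
lambda <- 20
N      <- 0:60
for (M in c(30, 100, 1000, 100000)) {
  p  <- lambda / M                     # detection probability per emitter
  PN <- dbinom(N, size = M, prob = p)  # binomial photon statistics
  cat("M =", M, ":  P(N = 20) =", round(PN[N == 20], 4), "\n")
}
cat("Poisson limit:  P(N = 20) =", round(dpois(20, lambda), 4), "\n")
\end{verbatim}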
As shown in Fig.~\\ref{fig:poissonDistri}(a), the greatest change in $\\mathcal{P}(N)$ occurs at $N\\approx \\lambda$, although it is clear that the \\emph{entire} distribution provides information about $M$. Hence it is important that any photon resolving detector should be at least able to detect $\\lambda$ photons for maximum ability to determine $M$.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[trim=0cm 5cm 0cm 5cm, width=\\textwidth]{Poisson.pdf}\n\\caption{(a) Series of curves showing the distribution of probability distribution function for the number of photons, $N$, for different $M$ and $p$ such that $\\lambda = Mp = 20$. The dotted line shows the limit for a Poisson distribution. As $M$ increases the peak broadens, and approaches the Poisson limit. By measuring the distribution, the number of emitters should be distinguishable, however it is important to stress that the differences are small, and noise will make sure determination difficult. (b) The peak of the probability distribution, at $N = \n\\lambda$, is the point that shows the largest dependence on number of emitters however as this curve shows, even variation of $M$ over three orders of magnitude only leads to a change in the probability of $N=20$ photon events from $\\mathcal{P}(N = 20| M = 30, \\lambda = 20) = 15.3\\%$ to $\\mathcal{P}(N = 20| M = 10^5, \\lambda = 20) = 8.88\\%$}\n\\label{fig:poissonDistri}\n\\end{figure}\n\n\nTo explore the determination of both $M$ and $p$, we begin by generating synthetic data obtained by sampling Eq.~\\ref{eq:P(N)} for a finite number of numerical experiments. This yields a histogram of events, such as that shown in Fig.~\\ref{fig:BarFig}. This data was generated on the basis of $\\nu = 100$ experiments, with $M=40$ atoms and probability of detection $p=0.2$. Also shown is the probability distribution function, $\\mathcal{P}(N)$ under the same circumstances. As the number of experiments increases, the synthetic data and probability distribution function should converge. \n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{BarFig.pdf}\n \\caption{Plot showing relative frequency data (bars) and expected (ideal) probability of detecting a number of photons given $M=40$ emitters, each with a detection probability of $p=0.2$. The synthetic data was generated on the basis of 100 experiments and assumes no noise other than random fluctuations provided by Eq.~\\ref{eq:P(N)}.\n }\n \\label{fig:BarFig} \n\\end{figure}\n\nTo address the issue of how well we can determine the model parameters using the data, we turn to maximum likelihood estimation. For a given set of data $\\bm{N} = \\left(N_1, N_2, ...N_{\\nu}\\right)$, where the $N_i$ correspond to the number of photons resolved in experiment $i$, which are independent and identically distributed with probability mass function $\\mathcal{P}(N_i|\\bm{\\theta})$, where $\\bm{\\theta} = \\left(M,p\\right)\\in\\mathbb{Z}^+\\bigtimes[0,1]$ is the vector of parameters to be estimated. Furthermore, let $\\bm{\\theta}_0$ be the ground truth and $L(\\bm{\\theta}|N_i)$ the associated likelihood function of $\\bm{\\theta}$ given data $N_i$. \n\nThen we can write the joint log likelihood function as follows:\n\\begin{align}\n\\ell\\left(\\bm{\\theta}\\big|\\mathbf{N}\\right) = \\log L\\left(\\bm{\\theta}\\big|\\mathbf{N}\\right)=\\sum_{i = 1}^{\\nu}\\log L\\left(\\bm{\\theta}\\big|{N}_i\\right),\n\\end{align}\nwhere $\\nu$ is the number of experiments. 
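For concreteness, the following minimal {\tt R} sketch (an assumed illustration, not the code used for the results reported here) generates synthetic data as in Fig.~\ref{fig:BarFig} and maximises $\ell\left(\bm{\theta}\big|\mathbf{N}\right)$, anticipating the estimator defined formally below. Because $M$ is integer-valued, the maximisation can be carried out by profiling: for each candidate $M$, the maximising $p$ is available in closed form as $\hat{p}(M)=\bar{N}/M$, so only a one-dimensional search over $M$ is required; the upper end of the search range is an arbitrary illustrative choice.
\begin{verbatim}
# Illustrative sketch: simulate nu experiments and maximise the joint log
# likelihood over (M, p) by profiling p for each candidate integer M.
set.seed(1)
M0 <- 40; p0 <- 0.2; nu <- 100
N  <- rbinom(nu, size = M0, prob = p0)  # one photon count per experiment

profile_loglik <- function(M, N) {
  p_hat <- mean(N) / M                  # closed-form MLE of p for a given M
  sum(dbinom(N, size = M, prob = p_hat, log = TRUE))
}

M_grid <- max(N):300                    # M cannot be below the largest count;
                                        # upper limit chosen for illustration
ll     <- sapply(M_grid, profile_loglik, N = N)
M_hat  <- M_grid[which.max(ll)]
p_hat  <- mean(N) / M_hat
c(M_hat = M_hat, p_hat = p_hat)
\end{verbatim}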
\n\nAccordingly, the MLE of $\\bm{\\theta}$ is given by\n\\begin{align}\n\\hat{\\bm{\\theta}}=\\mathop{\\arg \\max}\\limits_{\\bm{\\theta}\\in\\mathbb{Z}^+\\bigtimes [0,1]} \\ell\\left(\\bm{\\theta}\\big|\\mathbf{N}\\right).\n\\end{align}\nAs $\\nu$ increases, from the consistency of MLE \\cite{rao1973linear}, we expect that the $\\hat{\\bm{\\theta}}$ approaches $\\bm{\\theta}_0$. \n\nIt is easier to determine sample brightness than the number of emitters. This accords with our classical intuition, namely that on the basis of intensity-only measurements it should be \\emph{only} possible to determine the mean brightness, and \\emph{impossible} to determine the number of emitters (ie few bright emitters should be indistinguishable from many dim emitters). It is therefore useful to transform our parameters from $\\bm{\\theta}=(M,p)$ to $\\bm{\\beta}=(\\lambda, \\xi)$ where $\\xi = M\/p$. With this parameterisation, the probability distribution function becomes\n\\begin{align}\n\\mathcal{P}(N|\\bm{\\beta})=\\mathcal{P}(N|\\lambda,\\xi)=\\frac{(\\sqrt{\\lambda\\xi})!}{(\\sqrt{\\lambda\\xi}-N)!N!} \\left(\\frac{\\lambda}{\\xi}\\right)^{N}\\left[1-\\left(\\frac{\\lambda}{\\xi}\\right)\\right]^{\\sqrt{\\lambda\\xi}-N}\\label{pdf2}\n\\end{align}\n\n\n\n\\section{Uncertainty of the estimates: Cramer Rao lower bound}\n\nTo obtain the scaling laws for estimating $M$, and $p$, we now proceed to calculate the Cramer-Rao lower bound (CRLB). The Cramer-Rao Lower Bound (CRLB) gives a lower estimate for the variance of an unbiased estimator. The Fisher Information Matrix (FIM)\\cite{Nishiyama2019arxiv,Ly2017jmp} is required to calculate the CRLB. To do this, we need to find the derivative of (\\ref{pdf2}) w.r.t $\\lambda$ and $\\xi$. However the likelihood function $L(\\bm{\\beta}|N)$ is not differentiable since $\\sqrt{\\lambda\\xi}\\in\\mathbb{Z}^+$. To implement the derivative we use the $x!=x\\Gamma(x)$ to transfer $(\\sqrt{\\lambda\\xi})!$ into a continuous function with respect to $\\lambda$ and $\\xi$. Additionally, we have $\\left[x\\Gamma(x)\\right]'=\\Gamma(x)+x\\Gamma(x)\\psi(x),$\nwhere $\\psi(\\cdot)$ is digamma function.\n\nLet $\\bar{L}(\\bm{\\beta}|N)$ be the approximated likelihood function (after replacing the factorial term associated to $\\lambda$ and $\\xi$ by the interpolation function), then \n\\begin{align}\n &\\frac{\\partial \\bar{L}(\\bm{\\beta}|N)}{\\partial\\lambda}\\notag\\\\ \n=&\\sqrt{\\frac{\\xi}{\\lambda }}\\alpha_1\\left\\{\\xi \\lambda ^2+N^2 \\sqrt{\\xi \\lambda }-N \\left[\\left(\\sqrt{\\xi \\lambda }-1+\\lambda \\right) \\sqrt{\\xi \\lambda }+\\lambda \\right]+\\left(\\lambda -\\sqrt{\\xi \\lambda }\\right)\\alpha_2\\right\\}\\\\\n&\\frac{\\partial \\bar{L}(\\bm{\\beta}|N)}{\\partial\\xi}\\notag\\\\ \n=&\\lambda\\alpha_1\\left[-\\lambda \\sqrt{\\xi \\lambda }-N^2+N \\left(-\\sqrt{\\frac{\\lambda }{\\xi }}+\\sqrt{\\xi \\lambda }+\\lambda +1\\right)+\\left(\\sqrt{\\frac{\\lambda }{\\xi }}-1\\right)\\alpha_2\\right]\n\\end{align}\nwhere\n\\begin{align}\n \\alpha_1&=\\frac{\\Gamma \\left(\\sqrt{\\xi \\lambda }\\right) \\left(\\frac{\\lambda }{\\xi }\\right)^{N\/2} \\left(1-\\sqrt{\\frac{\\lambda }{\\xi }}\\right)^{\\sqrt{\\xi \\lambda }-N-1}}{2 N! 
\\sqrt{\\xi \\lambda } \\left(\\sqrt{\\xi \\lambda }-N\\right)^2 \\Gamma \\left(\\sqrt{\\xi \\lambda }-N\\right)}\\\\\n \\alpha_2&= \\left(\\xi \\lambda -N \\sqrt{\\xi \\lambda }\\right) \\left[\\log \\left(1-\\sqrt{\\frac{\\lambda }{\\xi }}\\right)+\\psi\\left(\\sqrt{\\xi \\lambda }\\right)-\\psi \\left(\\sqrt{\\xi \\lambda }-N\\right)\\right]\n\\end{align}\nThen the $(i,j)$-th element of the FIM, $\\mathbf{I}_N(\\bm{\\beta})_{i,j}$, $i,j=1,2$, is\n\\begin{align}\n\\mathbf{I}_N(\\bm{\\beta})_{1,1}&=\\sum_{N=0}^n\\left\\{\\left[\\frac{ \\partial \\bar{f}(\\lambda,\\xi|N)}{ \\partial \\lambda}\\right]^2\\frac{1}{f(\\lambda,\\xi|N)}\\right\\}\\label{I11_2}\\\\\n\\mathbf{I}_N(\\bm{\\beta})_{2,2}&=\\sum_{N=0}^n\\left\\{\\left[\\frac{ \\partial \\bar{f}(\\lambda,\\xi|N)}{ \\partial \\xi}\\right]^2\\frac{1}{f(\\lambda,\\xi|N)}\\right\\}\\label{I22_2}\\\\\n\\mathbf{I}_N(\\bm{\\beta})_{1,2}=\\mathbf{I}_N(\\bm{\\beta})_{2,1}&=\\sum_{N=0}^n\\left[\\frac{ \\partial \\bar{f}(\\lambda,\\xi|N)}{ \\partial \\lambda}\\frac{ \\partial \\bar{f}(\\lambda,\\xi|N)}{ \\partial \\xi}\\frac{1}{f(\\lambda,\\xi|N)}\\right]\\label{I21_2}\n\\end{align}\n\n\nEquivalently, the FIM for $\\bm{\\theta}=(M,p)$ is\n\\begin{align}\n \\mathbf{I}_N(\\bm{\\theta})_{1,1}=\\sum_{N=0}^n\\left\\{\\left[\\frac{ \\partial L(\\bm{\\theta}|N)}{ \\partial M}\\right]^2\\frac{1}{L(\\bm{\\theta}|N)}\\right\\},\n \\label{eq:I11}\n\\end{align}\nwhere $L(\\bm{\\theta}|N)$ is again not differentiable since it is discrete in $M$. We can find an approximated $f(M,p|N)$, i.e., $\\bar{f}(M,p|N)$, using a similar method to that used to obtain $\\bar{f}(\\lambda,\\xi|N)$. Then we have\n\\begin{align}\n&\\frac{ \\partial \\bar{f}(M,p|N)}{ \\partial M}\\notag\\\\\n=&\\frac{\\Gamma (M) p^N (1-p)^{M-N} }{N! (M-N)^2 \\Gamma (M-N)}\\left\\{M (N-M) \\left[\\psi(M-N)-\\psi(M)-\\log (1-p)\\right]-N\\right\\}\n\\end{align}\n\n\nSimilarly, we have \n\\begin{align}\n\\mathbf{I}_N(\\bm{\\theta})_{2,2}=\\sum_{N=0}^n\\left\\{\\left[\\frac{ \\partial f(M,p|N)}{ \\partial p}\\right]^2\\frac{1}{f(M,p|N)}\\right\\},\\label{I22}\n\\end{align}\nand\n\\begin{align}\n\\mathbf{I}_N(\\bm{\\theta})_{2,1}&=\\mathbf{I}_N(\\bm{\\theta})_{1,2}\\notag\\\\\n&\\approx\\sum_{N=0}^n\\left\\{\\frac{ \\partial \\bar{f}(M,p|N)}{ \\partial M}\\frac{ \\partial f(M,p|N)}{ \\partial p}\\frac{1}{f(M,p|N)}\\right\\},\\label{I12}\n\\end{align}\nwhere \n\\begin{align}\n\\frac{ \\partial f(M,p|N)}{ \\partial p}=-\\frac{M! p^{N-1} (1-p)^{M-N-1} (M p-N)}{N! (M-N)!}\n\\end{align}\nThe CRLB is given by the inverse of the FIM,\n\\begin{align}\n\\mathbf{C} = \\mathbf{I}_N(\\bm{\\theta})^{-1}\\big|_{M=M_0,p=p_0}.\n\\end{align}\nGiven that there are $\\nu$ i.i.d. experiments, the underlying Cramer-Rao lower bound is\n\\begin{align}\n\\mathbf{C}_{\\nu} = \\frac{\\mathbf{C}}{\\nu} = \\frac{1}{\\nu}\\mathbf{I}_N(\\bm{\\theta})^{-1}\\big|_{M=M_0,p=p_0}\\label{eq:CRLB}\n\\end{align}\n\nWe proceeded to compare our maximum likelihood simulations with the CRLB. The parameters $(M,p)$ are estimated using an increasing number of experiments $\\nu$. For each $\\nu$, we performed $500$ independent Monte Carlo simulations. By performing an ensemble of numerical experiments, we could compare the estimated values in the $(M,p)$ space with a 2D confidence region, i.e., 
the CRLB 95\\% error ellipse (the region within which the estimated value $\\bm{\\theta}=(M,p)$ falls with probability 95\\%).\nThe simulation results are shown in Fig.~\\ref{fig:CRLB} with ground truth $p_0=0.2$ and $M_0=40$. \n\nFig.~\\ref{fig:CRLB} shows a series of maximum likelihood determinations of the number of emitters and probability of detection per emitter, for ground truth $\\bm{\\theta_0} = (M,p) = (40,0.2)$. We show the results in $\\left(\\lambda,\\xi\\right)$ space and the data converted back into $\\left(M,p\\right)$ space, for an increasing number of experiments $\\nu$. Each point represents the maximum likelihood determination and the solid curve is the 95\\% confidence region. Observe that in $\\left(\\lambda,\\xi\\right)$ space we obtain a standard error ellipse in Fig.\\ref{fig:CRLB} (b), whereas in $\\left(M,p\\right)$ space in Fig.\\ref{fig:CRLB} (a), the ellipse is transformed according to the relations $p = \\sqrt{\\lambda\/\\xi}$ and $M = \\sqrt{\\lambda \\xi}$. The shape of the error region in $\\left(M,p\\right)$ space is a consequence of the classical ambiguity between more, dimmer emitters and fewer, brighter emitters. Nevertheless, as can be seen, by applying quantum measurements, some bounding on the number of emitters can be obtained, with increasing certainty as the number of experiments increases. For simplicity, we have not enforced the requirement that CRLBs of $\\left(M,p\\right)$ and $\\left(\\lambda,\\xi\\right)$ are positive, although these values are strictly positive in the simulation data, hence the maximum likelihood and CRLB values do not agree for small $\\nu$. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=.8\\textwidth]{CRLB_MP_LX.pdf}\n \\caption{ The Monte-Carlo simulation results (blue dots) using maximum likelihood and the Cramer-Rao lower bound (red), shown as a 95\\% confidence ellipse, in both (a) ($M,p$) space and (b) ($\\lambda,\\xi$) space. $p_0=0.2$ and $M_0=40$.}\n \\label{fig:CRLB}\n\\end{figure}\n\n\nTo assess the performance of the maximum likelihood estimator, we compare the variances of the estimated $M$ and $p$ with the CRLB. Fig.\\ref{fig:scalinglaw} presents two configurations, $\\bm{\\theta_0} = \\left(40,0.2\\right)$ and $\\bm{\\theta_0} = \\left(100,0.1\\right)$, together with the CRLB in both $(M,p)$ and $(\\lambda,\\xi)$ space. Both show an asymptotic approach to the CRLB. A log-log scaling law is observed here, i.e., $\\log(\\text{Variance})$ scales linearly with $-\\log(\\nu)$. In principle, the variance of an estimator cannot be lower than the CRLB; however, for small $\\nu$ the estimator is biased because there are too few data, and the CRLB only holds for unbiased estimators.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=.8\\textwidth]{ScalingLaw.pdf}\n \\caption{The scaling of the Cramer-Rao lower bound (red) and the Monte-Carlo simulation results (blue) using maximum likelihood. Ground truth (a) $p_0=0.2$, $M_0=40$ (b) $p_0=0.1$, $M_0=100$. (a)i-ii, (b)i-ii present variances in $(M,p)$ space and (a)iii-iv, (b)iii-iv are variances in ($\\lambda,\\xi$) space. It can be observed that the sampled variance of $p$ is slightly smaller than the CRLB when the number of experiments is small. 
This is because the MLE is biased at those points due to the small amount of data \\cite{WANG201579}.}\n \\label{fig:scalinglaw}\n\\end{figure}\n\n\nFig.\\ref{fig:NexpMap} (a) shows the number of experiments required to meet the CRLB criterion, which we define here as a relative variance of $M$ of $\\mathrm{Var}[M]\/M=1\\%$. Here the lower bound of the variance of $M$ is the corresponding CRLB element in the matrix of Eq.~\\ref{eq:CRLB}. The contours show the expected photon number $\\lambda=Mp$. In the top half of the map, where the probability $p$ is roughly larger than 0.5, the number of experiments required to achieve the CRLB criterion is relatively small ($<10^6$), even for emitter numbers as large as $10^3$. In the bottom half of the map, where $p<0.5$, the number of experiments required increases dramatically as $p$ decreases. For such low brightness (detection probability) and large emitter numbers $M$, the number of measurements required to determine $M$ is several orders of magnitude higher than in the high-brightness scenario.\n \n Along one contour with a fixed $\\lambda$, $\\nu$ increases as $M$ increases and $p$ decreases, which means that for a detected photon number distribution with its peak located at $\\lambda$ (similar to Fig.\\ref{fig:BarFig}), more measurements are required to resolve many low-brightness emitters than a few brighter emitters. Fig.\\ref{fig:NexpMap} (b) shows the relationship between $\\nu$ and $M$ along contours with fixed $\\lambda$, from $\\lambda=5$ to 50. The curves show an elbow shape as the number of emitters becomes large. To the left of the elbow, the required number of measurements $\\nu$ increases dramatically with the number of emitters. To the right of the elbow, e.g., $M=200$, the small-$\\lambda$ curves lie above the large-$\\lambda$ ones. This is because a small $\\lambda$ indicates a small probability of detecting photons from each emitter, which results in more measurements being required to determine the number of emitters.\n \nWe now consider the example of quantitative fluorescence. If we consider a sample of 1,000 fluorophores in a field of view with a photon collection probability of $1\\%$ for each emitter, then the number of measurements required to achieve a determination of the number of emitters with a relative variance of $1\\%$ is around $1.96 \\times 10^9$. Photon number resolving measurements can be performed on timescales of order a microsecond \\cite{Morais2020arXiv}. This means that the length of time required to determine the number of identical (but unknown) fluorescent emitters is of order $\\sim$30min. \n\n\\begin{figure}\n \\centering\n \\includegraphics[trim=0cm 5cm 0cm 5cm, width=\\textwidth]{NuMap.pdf}\n \\caption{(a) Number of experiments required to achieve the CRLB criterion with the relative variance of $M$: {$\\mathrm{Var}[M]\/M=1\\%$}. Here the lower bound of the variance of $M$ is the CRLB($M$) element in the matrix of Eq.\\ref{eq:CRLB}. The contours show $\\lambda=Mp$. (b) The relationship between $\\nu$ and $M$ along contours with fixed $\\lambda$, from $\\lambda=5$ to 50, generated by extracting the $\\nu$ values along contours in (a). }\n \\label{fig:NexpMap}\n\\end{figure}\n\n\n\\section{Conclusions}\nWe have shown that photon number resolving measurements can help to identify the number of emitters in a field of view, even without \\emph{a priori} knowledge of the brightness of the emitters. 
Our results enable the prediction of the number of experiments required for a particular variance to be achieved. As the number of emitters increases, the photon distribution approaches Poissonian, and in this limit, resolution of the number of emitters becomes increasingly difficult.\n\nOur results show the idealised case, of equal brightness emitters. Naturally, variations in the emission probability (for example by particles locate in different parts of the optical point spread function, or with different local environments) will lead to increased number of experiments. Nevertheless, our analysis is likely to guide future experiments in quantitative tests of biological pathways. Practical systems will also have to contend with variations in photo-bleaching of emitters that may limit the practically achievable number of experiments. As such, our results provide an opportunity to bound the expected sample variance, and hence to give limits on the number of emitters that might be contributing to a signal - bounds that are not possible to impose given the current limits of classical fluorescence based imaging. \n\n\n\\section*{Acknowledgements}\nThis work was funded by the Air Force Office of Scientific Research (FA9550-20-1-0276). ADG also acknowledges funding from the Australian Research Council (CE140100003 and FT160100357).\nVVY acknowledges partial support from the National Science Foundation (NSF) (DBI-1455671, ECCS-1509268, CMMI-1826078), the Air Force Office of Scientific Research (AFOSR) (FA9550-15-1-0517, FA9550-20-1-0366, FA9550-20-1-0367), Army Medical Research Grant (W81XWH2010777), the National Institutes of Health (NIH) (1R01GM127696-01, 1R21GM142107-01), the Cancer Prevention and Research Institute of Texas (CPRIT) (RP180588).\n\n\n\\section*{Reference}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbljt b/data_all_eng_slimpj/shuffled/split2/finalzzbljt new file mode 100644 index 0000000000000000000000000000000000000000..370df77e51b9e8fb2a34117639c6bcb306f30303 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbljt @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nMotivated by the desire for greater efficiency in drug development and the low success rates in confirmatory (Phase 3) studies, methodological research on adaptive designs and interest in their application has grown tremendously over the last 30 years. In an adaptive design, accumulating data can be used to modify the course of the trial. Several possible adaptations can be considered in interim analyses, for example, adaptive randomization for dose finding, dropping and\/or adding treatment arms, sample size re-estimation, and early stopping for safety, futility or efficacy, to name a few.\n\nValidity and integrity are two major considerations in adaptive designs (Dragalin, 2006). Because data from one stage of the trial can inform the design of future stages of the trial, careful steps need to be taken to maintain the validity of the trial, i.e., control of the Type I error probability and minimization of bias. To maintain trial integrity, it is important that all adaptations be pre-planned, prior to the unblinded examination of data, and that all trial personnel other than those responsible for making the adaptations are blind to the results of any interim analysis (Food and Drug Administration, 2019). 
It is also important to ensure consistency in trial conduct among the different stages.\n\nA general method for hypothesis testing in experiments with adaptive interim analyses based on combining stage-wise $p$-values was proposed by Bauer and K\\\"{o}hne (1994). The basic idea behind the construction of a combination test in a two-stage adaptive design is to transform the stage-wise test statistics to $p$-values, with independence of the $p$-values following from the conditional invariance principle (Brannath {\\it{et al}}., 2007, 2012; Wassmer and Brannath, 2016), regardless of the adaptation performed after the first stage. The principle holds as long as the null distribution of the first-stage $p$-value ($p_1$) as well as the conditional distribution of the second-stage $p$-value ($p_2$) given $p_1$ are stochastically larger than the $U(0,1)$ distribution (the so-called ``p-clud\" property). A specified combination function is used to combine the $p$-values obtained before and after the preplanned adaptation of the design into a single global test statistic. An extension of combination tests to allow more flexibility regarding the number of stages and the choice of decision boundaries was provided by Brannath {\\it{et al}}. (2002).\n\nIn dose-response studies, a component of the MCP-Mod procedure (Bretz {\\it{et al}}., 2005) has gained popularity for the purpose of detecting a proof-of-concept (PoC) signal in learning-phase trials. The procedure consists of specifying a set of candidate dose-response models, determining the optimal contrast statistic for each candidate model, and using the maximum contrast as the overall test statistic. Other authors have considered extensions of this procedure to adaptive dose-response designs. Miller (2010) investigated a two-stage adaptive dose-response design for PoC testing incorporating adaptation of the dosages, and possibly the contrast vectors. He developed an adaptive multiple contrast test (AMCT) that combines the multiple contrast test statistics across two stages under the assumption that the variance is known. Franchetti {\\it{et al}}. (2013) extended the MCP-Mod procedure to a two-stage dose-response design with a pre-specified rule of adding and\/or dropping dosage groups in Stage 2 based on the Stage 1 results. The PoC test uses Fisher's (1932) combination method to combine the two stage-wise $p$-values, each obtained by applying the MCP-Mod procedure to the data from each stage. This method includes a restrictive requirement of equal total sample sizes for each stage. Also, the authors claimed that the independence of the two stage-wise $p$-values is potentially compromised if the number of dosages used in Stage 2 is not the same as that used in Stage 1 and proposed a method for assigning weights to the different dosage groups to deal with this problem. We do not believe that such weighting is necessary as long as the statistic used to combine the stage-wise $p$-values (Fisher's, in this case) does not include weights that depend on the Stage 1 data.\n\nEarly work related to adaptive designs for dose-response testing includes a general procedure with multi-stage designs proposed by Bauer and R\\\"{o}hmel (1995), in which dosage adaptations were performed at interim analyses. Other goals of adaptive dose-response studies include determining if any dosage yields a clinically relevant benefit, estimating the dose-response curve, and selecting a target dosage for further study (Dragalin {\\it{et al}}., 2010). 
Several model-based adaptive dose-ranging designs that utilize principles of optimal experimental design to address these objectives were studied by Dragalin {\\it{et al}}. (2010). Bornkamp {\\it{et al}}. (2011) proposed a response-adaptive dose-finding design under model uncertainty, which uses a Bayesian approach to update the parameters of the candidate dose-response models and model probabilities at each interim analysis.\n\nIn this article, we propose new methods to address the specific objective of detecting a PoC signal in adaptive dose-response studies with normally-distributed outcomes. We extend the MCP-Mod procedure to include generalized multiple contrast tests (GMCTs; Ma and McDermott, 2020) and apply them to adaptive designs; we refer to these as adaptive generalized multiple contrast tests (AGMCTs). These tests are introduced in Section \\ref{Adaptive Generalized Multiple Contrast Tests}. In Section \\ref{Adaptive Multiple Contrast Test} we extend the AMCT of Miller (2010) to accommodate more flexible adaptations and to the important case where the variance is unknown using the conditional rejection probability (CRP) principle (M\\\"{u}ller and Sch\\\"{a}fer, 2001, 2004). Numerical examples are provided in Section \\ref{Numerical Example} to illustrate the application of the AGMCTs and AMCT. In Section \\ref{Simulation studies}, we conduct simulation studies to evaluate the operating characteristics of the various methods as well as the corresponding tests for non-adaptive designs. The conclusions are given in Section \\ref{Conclusion}.\n\n\n\\section{Adaptive Generalized Multiple Contrast Tests}\\label{Adaptive Generalized Multiple Contrast Tests}\n\nIn this section, we propose a two-stage adaptive design in which we use data from Stage 1 to get a better sense of the true dose-response model and make adaptations to the design for Stage 2. We then use data from both Stage 1 and Stage 2 to perform an overall test to detect the PoC signal. The rationale is to overcome the problem of potential model misspecification at the design stage.\n\n\\subsection{General Procedure}\nWe consider the case of a normally distributed outcome variable. Suppose that there are $n_{i1}$ subjects in dosage group $i$ in Stage 1, $i=1,\\ldots,k_1$. Denote the first stage data as $\\pmb{Y}_1=(Y_{111},\\ldots,Y_{1 n_{11} 1},\\ldots, $ $Y_{k_1 1 1},\\ldots,Y_{k_1 n_{k_1 1}1})^\\prime.$\nThe statistical model is\n$$Y_{ij1}=\\mu_i +\\epsilon_{ij1},\\quad\\epsilon_{ij1}\\stackrel{iid}{\\sim} N(0, \\sigma^2),\\quad i=1,\\ldots, k_1,\\ j=1,\\ldots, n_{i1}.$$\nThe true mean configuration is postulated to follow some dose-response model $\\mu_i=f(d_i,\\pmb{\\theta})$, where $d_i$ is the dosage in the $i^{\\text{th}}$ group, $i=1,\\ldots,k_1$. The dose-response model is restricted to be of the form $f(\\cdot;\\pmb{\\theta})=\\theta_0+\\theta_1 f^0(\\cdot;\\pmb{\\theta}^0)$, where $f^0(\\cdot;\\pmb{\\theta}^0)$ is a standardized dose-response model indexed by a parameter vector $\\pmb{\\theta}^0$ (Thomas, 2017). A candidate set of $M$ dose-response models $f_m(\\cdot,\\pmb{\\theta})$, $m=1,\\ldots,M$, including values for $\\pmb{\\theta}$, is pre-specified. 
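As a brief, hedged illustration of what such a pre-specified candidate set amounts to in practice (the dosages and parameter guesses below are hypothetical and are not taken from any particular trial), the following {\tt R} sketch evaluates a few standardized models $f^0(\cdot;\pmb{\theta}^0)$ at the Stage 1 dosages; the resulting standardized mean vectors are the only model input needed to derive the optimal contrasts described next.
\begin{verbatim}
# Hypothetical Stage 1 dosages and guessed standardized candidate models.
doses <- c(0, 0.05, 0.2, 0.6, 1)
candidate_models <- list(
  emax      = function(d) d / (0.2 + d),   # guessed ED50 = 0.2
  linear    = function(d) d,
  quadratic = function(d) d - 0.6 * d^2    # guessed quadratic coefficient
)
mu0 <- sapply(candidate_models, function(f) f(doses))  # k1 x M matrix
round(mu0, 3)
\end{verbatim}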
For each candidate model, an optimal contrast is determined to maximize the power to detect differences among the mean responses; the contrast coefficients are chosen to be perfectly correlated with the mean responses if that model is correct (Bretz {\\it{et al}}., 2005; Pinheiro {\\it{et al}}., 2014).\n\nFor each candidate model, the following hypothesis is tested:\n$$H_{0m 1}: \\sum_{i=1}^{k_1} c_{mi1} \\mu_i=0,\\quad\\text{vs.}\\quad H_{1m 1}: \\sum_{i=1}^{k_1} c_{mi1} \\mu_i>0,\\quad m=1,\\ldots, M,$$\nwhere $c_{m11},\\ldots, c_{mk_11}$ are the optimal contrast coefficients associated with the $m^{\\text{th}}$ candidate model in Stage 1. The multiple contrast test statistics are\n$$T_{m1}=\\sum_{i=1}^{k_1} c_{mi1} \\bar{Y}_{i1}\\Bigg\/\\left(S_1\\sqrt{\\sum_{i=1}^{k_1}\\frac{c_{mi1}^2}{n_{i1}}}\\right),\\quad m=1,\\ldots, M,$$\nwhere $\\bar{Y}_{i1}=\\sum_{j=1}^{n_{i1}} Y_{ij1}\/n_{i1}$ and the pooled variance estimator is $S_1^2=\\sum_{i=1}^{k_1}\\sum_{j=1}^{n_{i1}} (Y_{ij1}-\\bar{Y}_{i1})^2\/\\nu_1$, where $\\nu_1=\\sum_{i=1}^{k_1} n_{i1}-k_1$. The joint null distribution of $(T_{11},\\ldots,T_{M1})^\\prime$ is multivariate $t$ (with $\\nu_1$ degrees of freedom) with common denominator and correlation matrix having elements\n$$\\rho_{m m^{\\prime}1}=\\sum_{i=1}^{k_1}\\frac{c_{mi1} c_{m^{\\prime} i1}}{n_{i1}}\\Bigg\/\\sqrt{\\sum_{i=1}^{k_1}\\frac{c_{m i1}^2}{n_{i1}}\\sum_{i=1}^{k_1}\\frac{c_{m^{\\prime} i1}^2}{n_{i1}}}, \\quad m, m^{\\prime}=1,\\ldots, M.$$\n\nLet $p_{m1}=1-\\mathcal{T}_{\\nu_1} (T_{m1})$ be the $p$-values derived from $T_{m1}$, $m=1, \\ldots, M$, where $\\mathcal{T}_{\\nu_1} (\\cdot)$ is the cumulative distribution function of the $t$ distribution with $\\nu_1$ degrees of freedom. We consider three combination statistics to combine the $M$ dependent one-sided $p$-values in Stage 1 (Ma and McDermott, 2020):\n\\begin{enumerate}[(i)]\n\\item Tippett's (1931) combination statistic,\n$$\\Psi_{T1}=\\min_{1 \\leq m \\leq M} \\ p_{m1};$$\n\\item Fisher's (1932) combination statistic,\n$$\\Psi_{F1}=-2 \\sum_{m=1}^M \\log (p_{m1});$$\n\\item Inverse normal combination statistic (Stouffer, 1949),\n$$\\Psi_{N1}=\\sum_{m=1}^M \\Phi^{-1} (1-p_{m1}).$$\n\\end{enumerate}\n\n\nNote that the use of Tippett's combination statistic is equivalent to the original MCP-Mod procedure; the use of different combination statistics results in a generalization of the MCP-Mod procedure, yielding GMCTs (Ma and McDermott, 2020). When the $p$-values are independent, these statistics have simple null distributions. In our case the $p$-values are dependent, but the correlations among $T_{11},\\ldots,T_{M1}$ are known. For Tippett's combination method, one can obtain multiplicity-adjusted $p$-values from $T_{m1}$, $m=1, \\ldots, M$, given the correlation structure using the {\\tt{mvtnorm}} package in {\\tt{R}}. A PoC signal is established in Stage 1 if the minimum adjusted $p$-value $p_{\\text{min, adj}1}< \\alpha$ (Bretz {\\it{et al}}., 2005). For Fisher's and the inverse normal combination methods, excellent approximations to the null distributions of $\\Psi_{F1}$ and $\\Psi_{N1}$ have been developed (Kost and McDermott, 2002), enabling computation of the overall $p$-value $p_1$ for Stage 1 using a GMCT (Ma and McDermott, 2020).\n\nAfter obtaining the Stage 1 data, we make design adaptations and determine the optimal contrasts for the updated models in Stage 2 (see Sections \\ref{Adapting the Candidate Dose-Response Models} and \\ref{Adapting the Dosage Groups} below). 
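To make the Stage 1 computations concrete before describing the adaptations, the following hedged {\tt R} sketch (hypothetical dosages, group sample sizes, candidate models and summary data; the only package assumed is {\tt mvtnorm}) derives the optimal contrasts from standardized candidate mean vectors, forms the statistics $T_{m1}$ and their correlation matrix, and computes the Tippett (MCP-Mod) Stage 1 $p$-value. The Fisher and inverse normal combinations would additionally require the null-distribution approximations cited above, or simulation from the same multivariate $t$ distribution.
\begin{verbatim}
library(mvtnorm)

# Hypothetical Stage 1 design and standardized candidate mean vectors.
doses <- c(0, 0.05, 0.2, 0.6, 1)
n1    <- rep(15, length(doses))
mu0   <- cbind(emax   = doses / (0.2 + doses),
               linear = doses,
               quad   = doses - 0.6 * doses^2)

# Optimal contrast for a one-way layout: c_i proportional to
# n_i * (mu0_i - weighted mean of mu0), rescaled to unit length.
opt_contrast <- function(m0, n) {
  c0 <- n * (m0 - sum(n * m0) / sum(n))
  c0 / sqrt(sum(c0^2))
}
C1 <- apply(mu0, 2, opt_contrast, n = n1)   # k1 x M contrast matrix

# Hypothetical Stage 1 summary data: group means and pooled SD.
ybar1 <- c(0.10, 0.20, 0.40, 0.70, 0.80)
s1    <- 1.0
nu1   <- sum(n1) - length(doses)

T1 <- colSums(C1 * ybar1) / (s1 * sqrt(colSums(C1^2 / n1)))
R1 <- cov2cor(t(C1 / n1) %*% C1)            # correlations rho_{m m' 1}

# Tippett (MCP-Mod) Stage 1 p-value from the multivariate t distribution.
p1 <- 1 - pmvt(lower = rep(-Inf, ncol(C1)),
               upper = rep(max(T1), ncol(C1)),
               corr = R1, df = nu1)[1]
p1
\end{verbatim}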
We then conduct a GMCT in Stage 2 and obtain the second-stage $p$-value $p_2$. Under the overall null hypothesis $H_0: \\mu_1= \\cdots =\\mu_{k^*}$, where $k^*$ is the total number of unique dosage groups in Stages 1 and 2 combined, the independence of the stage-wise $p$-values $p_1$ and $p_2$ can be established using the conditional invariance principle (Brannath {\\it{et al}}., 2007). To perform the overall PoC test in the two-stage adaptive design, we combine $p_1$ and $p_2$ using one of the above combination statistics.\n\nA procedure that ignores the adaptation, i.e., that simply pools the data from Stage 1 and Stage 2 and applies a GMCT to the pooled data as if no adaptation had been performed, would substantially increase the Type I error probability.\n\n\n\\subsection{Adapting the Candidate Dose-Response Models}\\label{Adapting the Candidate Dose-Response Models}\nHere and in Section \\ref{Adapting the Dosage Groups} below, we consider adaptations for the second stage that are arguably most relevant for PoC testing, namely those of the candidate dose-response models and the dosages to be studied. The choice of the candidate dose-response models and dosages for Stage 1 would depend on prior knowledge from pre-clinical or early-stage clinical experience with the investigative agent. If there is great uncertainty concerning the nature of the dose-response relationship, it would seem sensible to select a more diverse set of candidate dose-response models with pre-specified parameters when the trial begins.\n\nAfter collecting the Stage 1 data, these data can be used to estimate $\\pmb{\\theta}$ for each of the $M$ candidate dose-response models and adapt each of the models by substituting $\\pmb{\\hat{\\theta}}$ for the original specification (guess) of $\\pmb{\\theta}$. The optimal contrast vectors can be constructed for each of the updated models $f_m(\\cdot,\\pmb{\\hat{\\theta}})$, $m=1,\\ldots,M$, for use in Stage 2.\n\n\n\nA potential problem occurs when the true dose-response model differs markedly from some of the specified candidate models and if those candidate models are nonlinear models with several unknown parameters. In such cases there can be a failure to fit the models using the Stage 1 data. To handle this problem, one can consider fall-back approaches to determine the corresponding contrasts to be used in Stage 2. These include using isotonic regression (Robertson {\\it{et al}}., 1988), imposing reasonable bounds on the nonlinear parameters during model-fitting (as is done in the {\\tt{R}}-package {\\tt{DoseFinding}} to ensure the existence of the maximum likelihood estimates), and retaining the Stage 1 contrast for use in Stage 2. 
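One such fall-back can be sketched as follows (a hedged illustration with hypothetical Stage 1 dosages and group means; for brevity the $E_{\max}$ model is fitted to the group means, whereas in practice one would fit the individual responses, e.g.\ with the model-fitting routines of the {\tt DoseFinding} package mentioned above): the $E_{\max}$ model is fitted with a bounded $ED_{50}$, and if the fit fails, isotonic regression supplies the fitted means from which the updated Stage 2 contrast is computed.
\begin{verbatim}
# Hypothetical Stage 1 dosages and group means.
doses <- c(0, 0.05, 0.2, 0.6, 1)
ybar1 <- c(0.12, 0.25, 0.38, 0.71, 0.79)

# Try to fit the E_max model with ED50 restricted to a plausible range.
fit <- tryCatch(
  nls(ybar1 ~ e0 + emax * doses / (ed50 + doses),
      start = list(e0 = 0, emax = 1, ed50 = 0.2),
      algorithm = "port",
      lower = c(-Inf, -Inf, 0.001 * max(doses)),
      upper = c( Inf,  Inf, 1.5   * max(doses))),
  error = function(e) NULL)

# Fall back to isotonic regression if the nonlinear fit fails.
mu_hat <- if (!is.null(fit)) fitted(fit) else isoreg(doses, ybar1)$yf

# Updated Stage 2 contrast (equal group sizes): centred fitted means,
# rescaled to unit length.
c2 <- mu_hat - mean(mu_hat)
c2 <- c2 / sqrt(sum(c2^2))
round(c2, 3)
\end{verbatim}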
Different strategies can be used for different models in cases where more than one model cannot be fit using the Stage 1 data.\n\nSpecifically, consider the following 5 candidate dose-response models:\n\\begin{itemize}\n \\item[] $E_{\\max}$ model: $f_1(d,\\pmb{\\theta})=E_0+E_{\\max} d\/ (ED_{50}+d)$\n \\item[] Linear-log model: $f_2 (d,\\pmb{\\theta})=\\theta_0+\\theta_1\\log(5d+1)$\n \\item[] Linear model: $f_3 (d,\\pmb{\\theta})=\\theta_0+\\theta_1 d$\n \\item[] Quadratic model: $f_4 (d,\\pmb{\\theta})=\\theta_0+\\theta_1 d + \\theta_2 d^2$\n \\item[] Logistic model: $f_5 (d,\\pmb{\\theta})=E_0+E_{\\max} \/[1 + \\exp\\{(ED_{50}-d)\/ \\delta\\}]$\n\\end{itemize}\nAmong these 5 candidate models, the $E_{\\max}$ and Logistic models are the ones that may fail to converge since the others can be expressed as linear models in $d$ (or a simple function of $d$). A possible fall-back strategy could be as follows: if only one of the $E_{\\max}$ and Logistic models fails to converge in Stage 1, isotonic regression is used to generate the corresponding contrast for use in Stage 2; if both the $E_{\\max}$ and Logistic models fail to converge in Stage 1, then isotonic regression is used to generate the corresponding contrast for the Logistic model and the same contrast that was used in Stage 1 is used in Stage 2 for the $E_{\\max}$ model (see Section \\ref{Numerical Example, AGMCT} for a numerical example).\n\nAnother potential concern arises if the data from Stage 1 suggest that there is a negative dose-response relationship, i.e., that higher dosages are associated with worse outcomes. In this case, the adapted contrast associated with the linear model, say, in Stage 2 would be the negative of that used in Stage 1. If a similar dose-response pattern is observed in Stage 2, then the contrast associated with the linear model would incorrectly indicate (possibly strong) evidence against the null hypothesis. One way to avoid this problem would be to not adapt the dose-response models in such a case, but instead to consider adapting the dosage groups by retaining only dosages, if any, that appear to be associated with increasing sample means (see Section \\ref{Adapting the Dosage Groups} below).\n\nIdeally, of course, it would be required to pre-specify the measures that would be taken to deal with the problems noted above (non-convergence of non-linear models, negative dose-response relationship) prior to examination of the data.\n\nOne could also consider different numbers of candidate models (or contrast vectors) in Stage 1 and Stage 2. One non-model-based option, for example, would be to use a single contrast in Stage 2 based on the sample means of the dosage groups from Stage 1. We found that this strategy, while intuitively appealing, yielded tests with reduced power, likely due to the reliance on a single contrast combined with the uncertainty associated with estimation of the means of each dosage group in Stage 1. One could also consider a small number of other contrasts based on values that are within the bounds of uncertainty reflected in the sample means, though how to choose these contrasts is somewhat arbitrary.\n\n\n\\subsection{Adapting the Dosage Groups}\\label{Adapting the Dosage Groups}\n\nAdaptation of the dosage groups in Stage 2, including the number of dosage groups, could also be considered. 
One would have to establish principles for adding and\/or dropping dosages; for example, dropping active dosages that appear to be less efficacious than placebo or that appear to be less efficacious than other active dosages, or adding a dosage (within a safe range) when there appears to be no indication of a dose-response relationship in Stage 1. Relevant discussion of these issues can be found in Bauer and R\\\"{o}hmel (1995), Miller (2010), and Franchetti {\\it{et al}}. (2013).\n\nTo illustrate this type of adaptation, we create an example dosage adaptation rule to drop the active dosage groups that appear to be less efficacious than placebo and the adjacent group. Suppose that there are $k_1$ dosage groups in Stage 1 and denote the dosage vector in Stage 1 as $\\pmb{d}_{\\text{Stage1}}=(d_{11},\\ldots, d_{k_11})^\\prime$, where $d_{11}=0$ (placebo group). We will select $k_2$ dosage groups from the $k_1$ Stage 1 dosage groups, $k_2\\leq k_1$. Denote the dosage vector in Stage 2 as $\\pmb{d}_{\\text{Stage2}}=(d_{12}, \\ldots, d_{k_22})^\\prime$, where $d_{12}=0$ (placebo group). The example dosage adaptation rule is as follows:\n\\begin{itemize}\n \\item[] \\textbf{Step 1}: Always select the placebo group to be included in Stage 2, i.e., $d_{12}=d_{11}=0$.\n\n \\item[] \\textbf{Step 2}: Consider the difference in the means between each active dosage group and the placebo group in Stage 1.\n\nDenote $\\hat{\\Delta}_{21}=\\bar{Y}_{21}-\\bar{Y}_{11},\\ldots,\\hat{\\Delta}_{k_11}=\\bar{Y}_{k_11}-\\bar{Y}_{11}$. If there exists dosage group(s) $i$, $i=2,\\ldots, k_1$, such that $\\hat{\\Delta}_{i 1}<-\\delta$, where $\\delta\\ge 0$, then we remove dosage(s) $d_{i 1}$ from consideration; however, if $\\hat{\\Delta}_{i 1}<-\\delta$ for all $i=2,\\ldots, k_1$, then we stop the trial at the interim analysis and fail to reject $H_0$.\n\n \\item[] \\textbf{Step 3}: Consider the differences in the means between two adjacent dosage groups among the remaining dosage groups, ordered from smallest to largest.\n\nAfter Steps 1 and 2, we have selected $d_{11}$ (placebo) into Stage 2 and have several remaining dosage groups $d_{\\tilde{2}1},\\ldots,d_{\\tilde{k}1}$, where $\\tilde{k}\\leq k_1$.\n\nWe first examine the difference in the means between dosages $d_{11}$ and $d_{\\tilde{2}1}$. If $\\hat{\\Delta}_{\\tilde{2}1}=\\bar{Y}_{\\tilde{2}1}-\\bar{Y}_{11} > -\\delta$, then $d_{\\tilde{2}1}$ is selected to be included in Stage 2, i.e., $d_{22}=d_{\\tilde{2}1}$; otherwise, $d_{\\tilde{2}1}$ is discarded and we proceed to the next possible dosage $d_{\\tilde{3}1}$.\n\nIf $d_{\\tilde{2}1}$ is selected to be included in Stage 2, then we proceed to compare the means between dosages $d_{\\tilde{2}1}$ and $d_{\\tilde{3}1}$. If $\\hat{\\Delta}_{\\tilde{3} \\tilde{2}}=\\bar{Y}_{\\tilde{3}1}-\\bar{Y}_{\\tilde{2}1}> -\\delta$, then $d_{\\tilde{3}1}$ is selected to be included in Stage 2, i.e., $d_{32}=d_{\\tilde{3}1}$; otherwise, $d_{\\tilde{3}1}$ is discarded. However, if $d_{\\tilde{2}1}$ is discarded, then the means should be compared between dosages $d_{11}$ and $d_{\\tilde{3}1}$, since these are now adjacent dosages among those remaining.\n\nThis procedure is repeated until the last possible dosage $d_{\\tilde{k}1}$ is reached and its associated mean is compared with that of the remaining adjacent dosage. 
This results in a final number $k_2 \\leq \\tilde{k}$ of dosage groups selected to be included in Stage 2, i.e., $\\pmb{d}_{\\text{Stage2}}=(d_{12}, \\ldots, d_{k_22})^\\prime$.\n\\end{itemize}\n\nHere we consider the threshold of adaptive dosing $\\delta=0$, which simply considers the difference between two sample means and retains the dosage with the larger sample mean. This threshold might be strict since it does not consider the variability of the difference between two sample means. An alternative threshold could be $\\delta=\\sqrt{\\text{var}(\\bar{Y}_{i1}-\\bar{Y}_{i^\\prime 1})}$, $i,i^\\prime=1, \\ldots, k_1$, which retains a dosage with a mean that is no more than one standard error lower than the mean of the adjacent dosage (or placebo). Users are free to choose their own threshold $\\delta$ based on considerations specific to their problem.\n\n\n\n\n\nWe emphasize that this is just one possible rule to adapt the dosage groups for Stage 2, and this rule only considers dropping dosages at the end of Stage 1. One could consider different adaptation rules that allow adding and\/or dropping dosages at the end of Stage 1, i.e., $k_2$ does not need to be less than or equal to $k_1$, and some of the dosage groups selected in Stage 2 may differ from those included in Stage 1. Also, as in Miller (2010), such a rule is based on heuristic considerations and is relatively easy to communicate to non-statisticians. Mercier {\\it{et al}}. (2015) provide an approach to selecting dosages for Stage 2 based on the hypothetical dose-response shape (out of several pre-specified) that correlates highest with the data observed in Stage 1.\n\nOne can adapt both the candidate dose-response models and the dosage groups in Stage 2. The optimal contrast vectors for Stage 2 would then be determined by the updated candidate dose-response models with parameters $\\pmb{\\hat{\\theta}}$ and the adapted dosages $\\pmb{d}_{\\text{Stage2}}$. The overall p-value for Stage 2, $p_2$, would be obtained from a GMCT that uses the updated optimal contrast vectors. We incorporate this strategy in our simulation studies below. It should be noted that if one adapts only the candidate dose-response models and not the dosage groups, the contrasts for the Linear and Linear-log models would not change based on the Stage 1 data. This would not be the case if one also adapted the dosage groups.\n\n\\section{Adaptive Multiple Contrast Test}\\label{Adaptive Multiple Contrast Test}\n\n\\subsection{Known Variance Case}\\label{AMCT, Known Variance Case}\n\nInstead of combining the stage-wise $p$-values $p_1$ and $p_2$, each based on a GMCT, Miller (2010) suggested combining the test statistics for each candidate dose-response model across the two stages, and then derving an overall $p$-value from a multiple contrast test applied to those statistics, assuming a known variance $\\sigma^2$. For each candidate model, we have\n$$Z_m=\\left(\\sum_{i=1}^{k_1} c_{mi1} \\bar{Y}_{i1}+\\sum_{i=1}^{k_2} c_{mi2} \\bar{Y}_{i2}\\right)\\Bigg\/\\sigma\\sqrt{\\sum_{i=1}^{k_1} \\frac{c^2_{mi1}}{n_{i1}}+\\sum_{i=1}^{k_2}\\frac{c^2_{mi2}}{n_{i2}}},\\quad m= 1,\\ldots,M.$$\nSince $k_2$, $c_{mi2}$, and $n_{i2}$ can depend on the interim data (adaptation), the null distribution of $Z_m$ is not standard normal in general.\n\nIn order to control the Type I error probability of the overall test, Miller (2010) applies a conditional error approach based on the conditional rejection probability (CRP) principle (M\\\"{u}ller and Sch\\\"{a}fer, 2001, 2004). 
Computation of the conditional Type I error probability requires pre-specification of what Miller (2010) calls a ``base test\", i.e., pre-specified values for the contrast coefficients ($c^*_{mi2}$), number of dosage groups ($k_2^*$), and group sample sizes ($n^*_{i2}$) in Stage 2, $i=1, \\ldots, k^*_2$, $m=1,\\ldots, M$. There is not a clear best strategy for choosing these pre-specified values. Miller (2010) considers an example where all possible Stage 2 designs can be enumerated and have $k_1=k_2$ and $n_{i1}=n_{i2}$, $i=1,\\ldots,k_1$, and the pre-specified values involving $c_{mi2}^*$, $i=1,\\ldots,k_2$, $m=1,\\ldots,M$, are averaged over the possible Stage 2 designs. More generally one cannot enumerate all possible Stage 2 designs, so in the development below we pre-specify $c^*_{mi2}=c_{mi1}$, $k^*_2=k_1$, and $n^*_{i2}=n_{i1}$, $i=1,\\ldots,k_2$, $m=1,\\ldots,M$. Since the dosages can also be adapted, we suggest pre-specifying $\\pmb{d}^*_{\\text{Stage2}}=\\pmb{d}_{\\text{Stage1}}=(d_{11},\\ldots,d_{k_1 1})^\\prime$. One can think of this ``base test\" as one that is based on a study that uses the same design in Stage 2 as was used in Stage 1.\n\nThe $Z$-statistics for the base test are\n$$Z^*_m=\\sum_{i=1}^{k_1} c_{mi1}\\left(\\bar{Y}_{i1}+\\bar{Y}_{i2}\\right)\\Bigg\/\\sigma\\sqrt{2\\sum_{i=1}^{k_1}\\frac{c_{mi1}^2 }{n_{i1}}},\\quad m=1,\\ldots,M.$$\nUnder $H_0$, the joint distribution of $\\pmb{Z}^*=(Z^*_1,\\ldots,Z^*_M)^\\prime$ is multivariate normal with mean $\\pmb{0}$ and covariance matrix $\\pmb{R}^*=(\\rho_{m m^{\\prime}1})$, $m, m^{\\prime}=1,\\ldots, M$. One can then obtain the non-adaptive $\\alpha$-level critical value $u^*_{1-\\alpha}$ based on the null distribution of $Z_{\\max}^*=\\max \\{\\pmb{Z}^*\\}$ using the {\\tt{R}}-package {\\tt{mvtnorm}}.\n\nIn order to obtain the conditional Type I error probability $A=P_{H_0}(Z^*_{\\max}\\ge u^*_{1-\\alpha}\\,|\\,\\pmb{Y}_1)$, where $\\pmb{Y}_1$ are the Stage 1 data, it can be seen that the conditional distribution of $\\pmb{Z}^*$ given $\\pmb{Y}_1=\\pmb{y}_1$ is multivariate normal with mean vector\n$$\\left(\\sum_{i=1}^{k_1} c_{1i1}\\bar{y}_{i1}\\Bigg\/\\sigma\\sqrt{2\\sum_{i=1}^{k_1}\\frac{c_{1i1}^2}{n_{i1}}},\\ldots,\\sum_{i=1}^{k_1} c_{Mi1}\\bar{y}_{i1}\\Bigg\/\\sigma\\sqrt{2\\sum_{i=1}^{k_1}\\frac{c_{Mi1}^2 }{n_{i1}}}\\right)^\\prime$$\nand covariance matrix $\\pmb{R_2}^*=\\pmb{R}^*\/2$, where $\\bar{y}_{i1}=\\sum_{j=1}^{n_{i1}} y_{ij1}\/n_{i1}$, $i=1,\\ldots,k_1$. Hence, the conditional Type I error probability is\n$$A=P_{H_0} (Z^*_{\\max}\\geq u^*_{1-\\alpha}\\,|\\,\\pmb{Y}_1)=1- P_{H_0} (\\pmb{Z}^* \\leq (u^*_{1-\\alpha},\\ldots,u^*_{1-\\alpha})^\\prime\\,|\\,\\pmb{Y}_1),$$\nwhich can be obtained using the {\\tt{pmvnorm}} function in the {\\tt{R}}-package {\\tt{mvtnorm}}.\n\nIn general, the interim analysis at the end of Stage 1 could yield adapted values of $c_{mi2}$, $k_2$, and $n_{i2}$ for Stage 2 and, hence, the adapted $Z$-statistics $Z_m$, $m=1, \\ldots, M$. Denote $\\pmb{Z}=(Z_1,\\ldots,Z_M)^\\prime$ and $Z_{\\max}=\\max \\{\\pmb{Z}\\}$. 
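\n\nFor illustration, the computation of $u^*_{1-\alpha}$ and $A$ for the base test can be sketched in a few lines of code. This is only a schematic outline rather than the implementation used here, which relies on the {\tt{qmvnorm}} and {\tt{pmvnorm}} functions in the {\tt{R}}-package {\tt{mvtnorm}}; the names are placeholders, with {\tt{c1}} denoting the $M \times k_1$ matrix of Stage 1 contrast coefficients, {\tt{ybar1}} and {\tt{n1}} the Stage 1 group means and sample sizes, and {\tt{R}} the matrix $\pmb{R}^*$.\n\begin{verbatim}\n# Schematic computation of the non-adaptive critical value and the\n# conditional Type I error A for the base test (known variance).\nimport numpy as np\nfrom scipy.stats import multivariate_normal\nfrom scipy.optimize import brentq\n\ndef crit_value(R, alpha=0.05):\n    # equicoordinate critical value of max{Z*} under H0\n    M = R.shape[0]\n    mvn = multivariate_normal(mean=np.zeros(M), cov=R)\n    return brentq(lambda u: mvn.cdf(np.full(M, u)) - (1 - alpha), 0.0, 10.0)\n\ndef conditional_error(c1, ybar1, n1, sigma, R, u):\n    # A = P_H0(max{Z*} >= u | Stage 1 data)\n    scale = sigma * np.sqrt(2.0 * (c1**2 \/ n1).sum(axis=1))\n    mean = (c1 @ ybar1) \/ scale\n    mvn = multivariate_normal(mean=mean, cov=R \/ 2.0)\n    return 1.0 - mvn.cdf(np.full(len(mean), u))\n\end{verbatim}\nThe adapted statistics are handled with the same machinery, using the adapted contrasts and sample sizes together with the conditional covariance matrix given below.\n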
The adaptive critical value $\\tilde{u}_{1-\\alpha}$ can be obtained by solving the equation\n$$\\tilde{A}=P_{H_0}(Z_{\\max}\\geq\\tilde{u}_{1-\\alpha}\\,|\\,\\pmb{Y}_1)=1- P_{H_0}(\\pmb{Z} \\leq (\\tilde{u}_{1-\\alpha},\\ldots,\\tilde{u}_{1-\\alpha})^\\prime\\,|\\,\\pmb{Y}_1)= A,$$\nwhere the conditional distribution of $\\pmb{Z}$ given $\\pmb{Y}_1$ is multivariate normal with mean vector\n$$\\left(\\sum_{i=1}^{k_1} c_{1i1}\\bar{y}_{i1}\\Bigg\/\\sigma\\sqrt{\\sum_{i=1}^{k_1}\\frac{c_{1i1}^2}{n_{i1}}+\\sum_{i=1}^{k_2}\\frac{c_{1i2}^2}{n_{i2}}},\\ldots,\\sum_{i=1}^{k_1} c_{Mi1}\\bar{y}_{i1}\\Bigg\/\\sigma\\sqrt{\\sum_{i=1}^{k_1}\\frac{c_{Mi1}^2}{n_{i1}}+\\sum_{i=1}^{k_2}\\frac{c_{Mi2}^2}{n_{i2}}}\\right)^{\\prime}$$\nand covariance matrix $\\pmb{\\tilde{R}}=(\\text{cov}(Z_m,Z_{m^\\prime}\\,|\\,\\pmb{Y}_1))$, $m,m^\\prime=1,\\ldots,M$, where\n$$\\text{cov}(Z_m,Z_{m^\\prime}\\,|\\,\\pmb{Y}_1 )={\\sum_{i=1}^{k_2}}\\frac{c_{mi2} c_{m^\\prime i2}}{n_{i2}} \\Bigg\/\n\\sqrt{\\left( {\\sum_{i=1}^{k_1}}\\frac{c_{mi1}^2}{n_{i1}}+ {\\sum_{i=1}^{k_2}}\\frac{c_{mi2}^2}{n_{i2}}\\right)\\left( {\\sum_{i=1}^{k_1}}\\frac{c_{m^\\prime i1}^2}{n_{i1}}+ {\\sum_{i=1}^{k_2}} \\frac{c_{m^\\prime i2}^2}{n_{i2}}\\right)}.$$\nUse of $\\tilde{u}_{1-\\alpha}$ as the critical value for the AMCT controls the Type I error probability at level $\\alpha$ (M\\\"{u}ller and Sch\\\"{a}fer, 2001, 2004; Miller, 2010).\n\n\\subsection{Unknown Variance Case}\\label{AMCT, Unknown Variance Case}\n\nMiller (2010) briefly discusses the possibility of extending the AMCT to accommodate estimation of the variance $\\sigma^2$, the complication being that the conditional Type I error probability depends on the unknown variance. Posch {\\it{et al}}. (2004) developed methods to calculate the conditional Type I error probability for the one sample $t$-test given the interim data, but the authors only consider the univariate case and the approach does not directly apply to either the single contrast test or the multiple contrast test.\n\nIn this subsection, we extend the AMCT to the unknown variance case by considering the combined $T$-statistics\n$$T_m=\\frac{\\displaystyle{\\sum_{i=1}^{k_1}} c_{mi1}\\bar{Y}_{i1}+\\displaystyle{\\sum_{i=1}^{k_2}} c_{mi2} \\bar{Y}_{i2}}{S\\sqrt{\\displaystyle{\\sum_{i=1}^{k_1}} \\frac{c^2_{mi1}}{n_{i1}}+\\displaystyle{\\sum_{i=1}^{k_2}}\\frac{c^2_{mi2}}{n_{i2}}}}=\\frac{\\sigma Z_m}{S},\\quad m= 1, \\ldots ,M,$$\nwhere the pooled variance estimator is\n$$S^2=\\left(\\sum_{i=1}^{k_1}\\sum_{j=1}^{n_{i1}} (Y_{ij1}-\\bar{Y}_{i1})^2+\\sum_{i=1}^{k_2}\\sum_{j=1}^{n_{i2}} (Y_{ij2}-\\bar{Y}_{i2})^2\\right)\\Bigg\/\\left(\\sum_{i=1}^{k_1} n_{i1}-k_1+\\sum_{i=1}^{k_2} n_{i2}-k_2\\right).$$\nAs in Section \\ref{AMCT, Known Variance Case}, we pre-specify $c^*_{mi2}=c_{mi1}$, $k^*_2=k_1$, $n^*_{i2}=n_{i1}$, and $\\pmb{d}^*_{\\text{Stage2}}=\\pmb{d}_{\\text{Stage1}}$, $i=1,\\ldots,k^*_2$, $m=1,\\ldots,M$. 
The $T$-statistics for the base test are\n$$T^*_m=\\sum_{i=1}^{k_1} c_{mi1} (\\bar{Y}_{i1}+\\bar{Y}_{i2})\\Bigg\/S^*\\sqrt{2\\sum_{i=1}^{k_1}\\frac{c^2_{mi1}}{n_{i1}}}=\\frac{\\sigma Z^*_m }{S^*},\\quad m=1,\\ldots,M,$$\nwhere\n$$S^{*2}=\\sum_{i=1}^{k_1}\\sum_{j=1}^{n_{i1}}\\left[(Y_{ij1}-\\bar{Y}_{i1})^2+(Y_{ij2}-\\bar{Y}_{i2})^2\\right]\\Bigg\/(2\\nu_1),\\text{ where }\\nu_1=\\sum_{i=1}^{k_1} n_{i1}-k_1.$$\nSince $S^{*2}$ is independent of $Z^*_m$ and $2\\nu_1 S^{*2}\/\\sigma^2 \\sim \\chi^2_{2 \\nu_1}$, the null joint distribution of $\\pmb{T}^*=(T_1^*, \\ldots, T_M^*)^\\prime$ is multivariate $t$ with $2\\nu_1$ degrees of freedom and correlation matrix $\\pmb{R}^*$. The non-adaptive $\\alpha$-level critical value $c^*_{1-\\alpha}$ can then be obtained using the {\\tt{qmvt}} function in the {\\tt{R}}-package {\\tt{mvtnorm}}.\n\nThe main difficulty in the unknown variance case is that the approach outlined in Section \\ref{AMCT, Known Variance Case} cannot be employed because the conditional distribution of $T_m^*$ given $\\pmb{Y}_1$ is not central $t$ under $H_0$. We develop the conditional Type I error probability as follows. Denote\n$$T_m^*\\,|\\,\\pmb{Y}_1=\n\\frac{\\displaystyle{\\sum_{i=1}^{k_1}} c_{mi1} (\\bar{y}_{i1}+\\bar{Y}_{i2})}{\\sqrt{\\displaystyle{\\sum_{i=1}^{k_1}}\\frac{c^2_{mi1}}{n_{i1}}}\\sqrt{ \\displaystyle{\\sum_{i=1}^{k_1}\\sum_{j=1}^{n_{i1}}}\\left\\{ (y_{ij1}-\\bar{y}_{i1})^2+ (Y_{ij2}-\\bar{Y}_{i2})^2\\right\\} \\Bigg\/\\nu_1}}=\\frac{U^*_m}{\\displaystyle{\\sqrt{\\frac{V^*}{\\nu_1}+q^*}}},\\quad m=1,\\ldots, M,$$\nwhere\n$$U_m^*=\\frac{\\displaystyle{\\sum_{i=1}^{k_1}} c_{mi1} (\\bar{y}_{i1}+\\bar{Y}_{i2})}{\\sigma\\sqrt{\\displaystyle{\\sum_{i=1}^{k_1}}\\frac{c^2_{mi1}}{n_{i1}}}},\\quad V^*=\\sum_{i=1}^{k_1}\\sum_{j=1}^{n_{i1}} (Y_{ij2}-\\bar{Y}_{i2})^2 \\Big\/\\sigma^2,$$\nand the constant\n$$q^*= {\\sum_{i=1}^{k_1}\\sum_{j=1}^{n_{i1}}} (y_{ij1}-\\bar{y}_{i1})^2\\Big\/(\\nu_1\\sigma^2).$$\n\nUnder $H_0$, the joint distribution of $(U_1^*,\\ldots,U_M^*)^\\prime$ is multivariate normal with mean vector $(b^*_1,\\ldots, b^*_M)^\\prime$ and variance-covariance matrix $\\pmb{R}^*$, where\n$$b^*_m=\\sum_{i=1}^{k_1} c_{mi1}\\bar{y}_{i1}\\Bigg\/\\sigma\\sqrt{\\sum_{i=1}^{k_1}\\frac{c^2_{mi1}}{n_{i1}}},\\quad m=1,\\ldots,M.$$\nSince $V^*\\sim\\chi^2_{\\nu_1}$ and is independent of $(U_1^*, \\ldots,U_M^*)^\\prime$, the joint density function of $(U_1^*,\\ldots,U_M^*,V^*)^\\prime$ is\n\\begin{eqnarray*}\n& &f_{(U_1^*,\\ldots,U_M^*,V^*)}(u_1^*,\\ldots,u_M^*,v^*)=\\frac{1}{(2 \\pi)^{M\/2} |\\pmb{R}^*|^{1\/2}}\\frac{1}{\\Gamma(\\nu_1\/2) 2^{\\nu_1\/2}}\\times\n\\\\[0.5\\baselineskip]\n& &(v^*)^{\\nu_1\/2-1}e^{-v^*\/2}\\exp\\left \\{-\\frac{1}{2}(u^*_1-b^*_1,\\ldots,u^*_M-b^*_M)(\\pmb{R}^*)^{-1}(u^*_1-b^*_1,\\ldots,u^*_M-b^*_M)^\\prime\\right \\},\n\\end{eqnarray*}\nwhere $\\Gamma (\\cdot)$ is the Gamma function. Now make the transformation\n$$T_m^*\\,|\\,\\pmb{Y}_1=\\frac{U_m^*}{\\displaystyle{\\sqrt{\\frac{V^*}{\\nu_1}+q^*}}},\\quad m=1,\\ldots,M,\\quad\\text{and}\\quad\nW^*=V^*$$\nwith Jacobian $(W^*\/\\nu_1+q^*)^{M\/2}$. 
The joint density function of $\\pmb{T}^*\\,|\\,\\pmb{Y}_1$ is\n\\begin{eqnarray*}\n& &\\displaystyle{f_{\\pmb{T}^*\\,|\\,\\pmb{Y}_1}\\left((t_1^*,\\ldots,t_M^*)\\,|\\,\\pmb{y}_1 \\right)} \\\\\n&=&\\frac{1}{(2\\pi)^{M\/2} |\\pmb{R}^*|^{1\/2}}\\frac{1}{\\Gamma (\\nu_1\/2) 2^{\\nu_1\/2}}\n\\int_0^{+ \\infty}\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{M\/2}(w^*)^{\\nu_1\/2-1} e^{-w^*\/2}\\times \\\\\n& &\\exp\\Bigg[-\\frac{1}{2}\\left\\{t_1^*\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{1\/2}-b^*_1,\\ldots,t_M^*\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{1\/2}-b^*_M\\right \\}(\\pmb{R}^*)^{-1} \\\\\n& &\\left \\{t_1^*\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{1\/2}-b^*_1,\\ldots,t_M^*\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{1\/2}-b^*_M\\right\\}^\\prime\\, \\Bigg ] dw^*\n\\end{eqnarray*}\nWe then obtain the conditional Type I error probability\n\\begin{eqnarray*}\nA&=&1- P_{H_0}\\left(\\pmb{T}^*\\leq (c^*_{1-\\alpha},\\ldots,c^*_{1-\\alpha})^\\prime\\,|\\,\\pmb{Y}_1\\right) \\\\\n&=&1- \\int\\cdots\\int_{(t_1^*,\\ldots, t_M^*) \\leq (c^*_{1-\\alpha},\\ldots,c^*_{1-\\alpha})} f_{\\pmb{T}^*\\,|\\,\\pmb{Y}_1}\\left((t_1^*,\\ldots,t_M^*)\\,|\\,\\pmb{y}_1\\right)\\ d t_1^* \\cdots d t_M^*.\n\\end{eqnarray*}\n\nAfter making the adaptations at the interim analysis, from the conditional distribution of $\\pmb{T}=(T_1, \\ldots, T_M)^\\prime$ given $\\pmb{Y}_1$, the adaptive critical value $\\tilde{c}_{1-\\alpha}$ can be determined as a solution to the following equation:\n\\begin{eqnarray*}\n\\tilde{A}&=&1-P_{H_0}\\left(\\pmb{T} \\leq (\\tilde{c}_{1-\\alpha},\\ldots,\\tilde{c}_{1-\\alpha})^\\prime\\,|\\,\\pmb{Y}_1\\right) \\\\\n&=&1- \\int\\cdots\\int_{(t_1,\\ldots,t_M) \\leq (\\tilde{c}_{1-\\alpha},\\ldots,\\tilde{c}_{1-\\alpha})} f_{\\pmb{T}\\,|\\,\\pmb{Y}_1}\\left((t_1,\\ldots,t_M)\\,|\\,\\pmb{y}_1\\right)\\ d t_1 \\cdots d t_M=A,\n\\end{eqnarray*}\nwhere\n\\begin{eqnarray*}\n&\\displaystyle f_{\\pmb{T}\\,|\\,\\pmb{Y}_1}\\left((t_1, \\ldots, t_M)\\,|\\,\\pmb{y}_1\\right)=\\displaystyle\\frac{1}{(2\\pi)^{M\/2} |\\pmb{\\tilde{R}}|^{1\/2}}\\displaystyle \\frac{1}{\\Gamma (\\nu_2\/2) 2^{\\nu_2\/2}}\\int_0^{+ \\infty}\\left(\\frac{w}{\\nu}+q\\right)^{M\/2} w^{\\nu_2\/2-1} e^{-w\/2}\\times \\\\\n&\\exp\\Bigg[-\\displaystyle\\frac{1}{2}\\left\\{t_1\\left(\\frac{w}{\\nu}+q\\right)^{1\/2}-b_1,\\ldots,t_M\\left(\\frac{w}{\\nu}+q\\right)^{1\/2}-b_M\\right\\}\\pmb{\\tilde{R}}^{-1} \\\\\n&\\displaystyle\\left\\{t_1\\left(\\frac{w}{\\nu}+q\\right)^{1\/2}-b_1,\\ldots,t_M\\left(\\frac{w}{\\nu}+q\\right)^{1\/2}-b_M\\right\\}^\\prime\\, \\Bigg] dw,\\\\\n&\\nu_2=\\displaystyle{\\sum_{i=1}^{k_2}} n_{i2}-k_2, \\quad \\nu=\\nu_1+\\nu_2, \\quad q=\\displaystyle{\\sum_{i=1}^{k_1}\\sum_{j=1}^{n_{i1}}} (y_{ij1}-\\bar{y}_{i1})^2\\Big\/(\\nu\\sigma^2),\n\\end{eqnarray*}\nand\n\\[b_m=\\displaystyle{\\sum_{i=1}^{k_1}} c_{mi1}\\bar{y}_{i1}\\Bigg\/\\sigma\\sqrt{\\displaystyle{\\sum_{i=1}^{k_1}}\\frac{c^2_{mi1}}{n_{i1}}+\\displaystyle{\\sum_{i=1}^{k_2}}\\frac{c^2_{mi2}}{n_{i2}}},\\quad m=1,\\ldots,M.\\]\n$H_0$ is rejected if $T_{\\max}=\\max \\{\\pmb{T}\\}\\geq\\tilde{c}_{1-\\alpha}$. Use of the critical value $\\tilde{c}_{1-\\alpha}$ provides control of the Type I error probability at level $\\alpha$ according to the CRP principle (M\\\"{u}ller and Sch\\\"{a}fer, 2001, 2004).\n\n\n\\section{Numerical Example}\\label{Numerical Example}\n\n\n\\subsection{Adaptive Generalized Multiple Contrast Tests}\\label{Numerical Example, AGMCT}\n\nTo illustrate the adaptive generalized multiple contrast tests (AGMCTs), we generated a numerical example. 
The example data set is available as Supporting Information. Suppose that there are $k_1=5$ dosage groups in Stage 1, with $\\pmb{d}_{\\text{Stage1}}=(0,0.05,0.20,0.60,1.00)^\\prime$. The total sample sizes in two stages are the same ($N_1=N_2=120$) and the group sample sizes are equal in Stage 1 ($n_{11}=\\cdots =n_{51}=N_1\/5=24$). The $M=5$ candidate dose-response models with the original specifications of $\\pmb{\\theta}$ are shown in Table \\ref{Table.original candidate models}.\n\nWe assume that the true dose-response model is the $E_{\\max}$ 2 model:\n$$f_{E_{\\max}2}(d,\\pmb{\\theta})=E_0+E_{\\text{max}} d\/ (ED_{50}+d)=0.2+0.6 d\/(0.1+d).$$\nWe generate the Stage 1 data from a multivariate normal distribution with mean $f_{E_{\\max}2}(\\pmb{d}_{\\text{Stage1}},\\pmb{\\theta})=$ $(0.20, 0.40, 0.60, 0.71, 0.75)^\\prime$ and covariance matrix $\\sigma^2\\pmb{I}=1.478^2\\pmb{I}$. The sample mean and variance estimates from the Stage 1 data are $\\bar{\\pmb{y}}_1=(0.52, 0.47, 1.09, 1.70, 0.45)^\\prime$ and $s^2_1=1.58^2$, respectively.\n\nThe optimal contrast vectors in Stage 1 based on the $M=5$ candidate dose-response models in Table \\ref{Table.original candidate models} are as follows.\n\\begin{align*}\n&E_{\\max}: \\pmb{c}_{11}=(-0.64, -0.36, 0.06, 0.41, 0.53)^\\prime, \\\\\n&\\text{Linear-log}: \\pmb{c}_{21}=(-0.54, -0.39, -0.08, 0.37, 0.64)^\\prime, \\\\\n&\\text{Linear}: \\pmb{c}_{31}=(-0.44, -0.38, -0.20, 0.27, 0.74)^\\prime, \\\\\n&\\text{Quadratic}: \\pmb{c}_{41}=(-0.57, -0.36, 0.16, 0.71, 0.07)^\\prime, \\\\\n&\\text{Logistic}: \\pmb{c}_{51}=(-0.40, -0.39, -0.31, 0.50, 0.59)^\\prime.\\end{align*}\nAfter conducting three different GMCTs using Tippett's, Fisher's, and inverse normal combination statistics, we obtain the following Stage 1 p-values: $p_{T1}=0.005$, $p_{F1}=0.047$, and $p_{N1}=0.06$.\n\nWe then adapt the candidate dose-response models and the dosage groups. We fit the 5 original candidate dose-response models using the Stage 1 data. Unfortunately, the Logistic model failed to converge on a solution so we replaced it with isotonic regression. Also, we use the dosage adaptation rule described in Section \\ref{Adapting the Dosage Groups} with $\\delta=0$ to drop the active dosage groups that appear to be less efficacious than placebo or the adjacent dosage. Finally, we obtain $k_2=3$ dosage groups in Stage 2: $\\pmb{d}_{\\text{Stage2}}=(0,0.20,0.60)^\\prime$ and $n_{12}=n_{22}=n_{32}=N_2\/3=40$.\n\nThe optimal contrast vectors in Stage 2 based on the adapted dose-response models and dosage groups are as follows:\n\\begin{align*}\n&E_{\\max}: \\pmb{c}_{12}=(-0.433, -0.383, 0.816)^\\prime,\\\\\n&\\text{Linear-log}: \\pmb{c}_{22}=(-0.707, 0.000, 0.707)^\\prime,\\\\\n&\\text{Linear}: \\pmb{c}_{32}=(-0.617, -0.154, 0.772)^\\prime, \\\\\n&\\text{Quadratic}: \\pmb{c}_{42}=(-0.766, 0.137, 0.629)^\\prime,\\\\\n&\\text{Isotonic regression}: \\pmb{c}_{52}=(-0.816, 0.408, 0.408)^\\prime.\\end{align*}\n\nThe Stage 2 data are then generated from a multivariate normal distribution with mean $f_{E_{\\max}2}(\\pmb{d}_{\\text{Stage2}},\\pmb{\\theta})=$ $(0.20, 0.60, 0.71)^\\prime$ and covariance matrix $\\sigma^2\\pmb{I}=1.478^2\\pmb{I}$. The sample mean and variance estimates from the Stage 2 data under adaptation are $\\bar{\\pmb{y}}_2=(-0.09, 0.77, 0.73)^\\prime$ and $s^2_2=1.52^2$, respectively. 
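\n\nFor reference, the two across-stage combination functions used in this example can be sketched as follows; this is only a schematic illustration, and the equal weights in the inverse normal combination are one possible choice rather than a requirement.\n\begin{verbatim}\n# Illustrative two-stage p-value combinations.\nimport numpy as np\nfrom scipy.stats import norm, chi2\n\ndef fisher_combination(p1, p2):\n    # overall p-value of Fisher's product criterion\n    stat = -2.0 * (np.log(p1) + np.log(p2))\n    return 1.0 - chi2.cdf(stat, df=4)\n\ndef inverse_normal_combination(p1, p2, w1=np.sqrt(0.5), w2=np.sqrt(0.5)):\n    # overall p-value of the weighted inverse normal criterion\n    z = w1 * norm.ppf(1.0 - p1) + w2 * norm.ppf(1.0 - p2)\n    return 1.0 - norm.cdf(z)\n\end{verbatim}\nThe stage-wise GMCT $p$-values entering these combinations are obtained next.\n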
After conducting three different GMCTs using Tippett's, Fisher's, and inverse normal combination statistics, we obtain the following Stage 2 p-values: $p_{T2}=0.005$, $p_{F2}=0.008$, and $p_{N2}=0.008$. The p-values from Stage 1 and Stage 2 are then combined using Fisher's combination statistic and the inverse normal combination statistic. The combination statistics and resulting overall p-values are shown in Table \\ref{num.results AGMCT}.\n\n\\subsection{Adaptive Multiple Contrast Test}\\label{Numerical Example, AMCT}\n\n\\subsubsection{Known Variance Case}\nWe use the same simulated data as in Section \\ref{Numerical Example, AGMCT} to illustrate the adaptive multiple contrast test (AMCT) for the known variance case (for purposes of this illustration, we use $\\sigma^2=1.478^2$). We first obtain the non-adaptive critical value $u^*_{1-\\alpha}$. The joint null distribution of $\\pmb{Z}^*=(Z_1^*,\\ldots,Z_5^*)^\\prime$ is multivariate normal with mean $\\pmb{0}$ and covariance matrix $\\pmb{R}^*$, where\n\\begin{eqnarray*}\n\\pmb{R}^*=\\left(\\begin{array}{ccccc}\n1 &0.977 &0.912& 0.842& 0.896 \\\\\n0.977 &1& 0.977& 0.750& 0.956 \\\\\n0.912 &0.977& 1& 0.602& 0.957 \\\\\n0.842 &0.750& 0.602& 1& 0.715 \\\\\n0.896 &0.956& 0.957& 0.715& 1\n\\end{array}\\right).\n\\end{eqnarray*}\n\nThe value of $u^*_{1-\\alpha}$ is obtained using the {\\tt{qmvnorm}} function in the {\\tt{R}}-package {\\tt{mvtnorm}}, resulting in $u^*_{1-\\alpha}=1.968$. We then calculate the conditional mean of $\\pmb{Z}^*$ given $\\pmb{Y}_1$,\n$$\\left(\\frac{\\displaystyle{\\sum_{i=1}^{k_1}} c_{1i1} \\bar{y}_{i1}}{\\sigma\\sqrt{2\\displaystyle{\\sum_{i=1}^{k_1}}\\frac{c_{1i1}^2}{n_{i1}}}},\\ldots,\n\\frac{\\displaystyle{\\sum_{i=1}^{k_1}} c_{Mi1} \\bar{y}_{i1}}{\\sigma\\sqrt{2\\displaystyle{\\sum_{i=1}^{k_1}}\\frac{c_{Mi1}^2}{n_{i1}}}}\\right)^\\prime=(1.19, 0.87, 0.42, 2.22, 0.92)^\\prime,$$\nand the conditional covariance matrix $\\pmb{R_2}^*=\\pmb{R}^*\/2$. The conditional error is obtained using the {\\tt{pmvnorm}} function in the {\\tt{R}}-package {\\tt{mvtnorm}} as\n$$A=1- P_{H_0}\\left(\\pmb{Z}^* \\leq (u^*_{1-\\alpha},\\ldots, u^*_{1-\\alpha})^{\\prime}\\,|\\,\\pmb{Y}_1\\right)=0.64.$$\n\nAfter adapting the dose-response models and dosage groups as in Section \\ref{Numerical Example, AGMCT} above, we obtain the conditional distribution of $\\pmb{Z}\\,|\\,\\pmb{Y}_1$, which is multivariate normal with mean\n$$\n\\left(\\frac{\\displaystyle{\\sum_{i=1}^{k_1}} c_{1i1} \\bar{y}_{i1}}{\\sigma\\sqrt{\\displaystyle{\\sum_{i=1}^{k_1}}\\frac{c_{1i1}^2}{n_{i1}}+\\displaystyle{\\sum_{i=1}^{k_2}}\\frac{c_{1i2}^2}{n_{i2}}}},\\ldots,\\frac{\\displaystyle{\\sum_{i=1}^{k_1}} c_{Mi1} \\bar{y}_{i1}}{\\sigma\\sqrt{\\displaystyle{\\sum_{i=1}^{k_1}}\\frac{c_{Mi1}^2}{n_{i1}}+\\displaystyle{\\sum_{i=1}^{k_2}} \\frac{c_{Mi2}^2}{n_{i2}}}},\\right)^{\\prime} \\\\[0.5\\baselineskip]=(1.33, 0.98, 0.47, 2.48, 1.03)^{\\prime}\n$$\nand covariance matrix\n\\begin{eqnarray*}\n\\pmb{\\tilde{R}}=\\left(\\begin{array}{ccccc}\n0.375& 0.331& 0.358& 0.297& 0.199 \\\\\n0.331& 0.375& 0.368& 0.370& 0.325 \\\\\n0.358& 0.368& 0.375& 0.351& 0.283 \\\\\n0.297& 0.370& 0.351& 0.375& 0.352 \\\\\n0.199& 0.325& 0.283& 0.352& 0.375\n\\end{array}\\right).\n\\end{eqnarray*}\n\nFinally, we obtain the adaptive critical value $\\tilde{u}_{1-\\alpha}=2.263$ and the combined test statistics $\\pmb{Z}=(Z_1,\\ldots, Z_M)^\\prime=(2.22, 2.50, 1.78, 4.15, 2.83)^\\prime$. 
We reject $H_0$ since $Z_{\\max}=4.15 \\geq \\tilde{u}_{1-\\alpha}$.\n\n\\subsubsection{Unknown Variance Case}\n\nTo illustrate the AMCT in the unknown variance case (Section \\ref{AMCT, Unknown Variance Case}), we use the same example data as in Section \\ref{Numerical Example, AGMCT} for $M=2$ candidate dose-response models. Here, we only consider the $E_{\\max}$ and Linear-log candidate dose-response models in Table \\ref{Table.original candidate models}. Other settings are the same as in Section \\ref{Numerical Example, AGMCT}, including the optimal contrasts for\nboth Stage 1 and Stage 2, and the adapted dosage groups for Stage 2.\n\nWe first obtain the non-adaptive critical value $c^*_{1-\\alpha}$. The joint null distribution of $\\pmb{T}^*=(T_1^*,T_2^*)^\\prime$ is bivariate $t$ with degrees of freedom $2 \\nu_1$ and correlation matrix $\\pmb{R}^*$, where $\\nu_1=N_1-5=115$ and\n\\begin{eqnarray*}\n\\pmb{R}^*=\\left(\\begin{array}{cc}\n1 &0.977\\\\\n0.977 &1\n\\end{array}\\right).\n\\end{eqnarray*}\nThe value of $c^*_{1-\\alpha}$ is obtained using the {\\tt{qmvt}} function in the {\\tt{R}}-package {\\tt{mvtnorm}}, resulting in $c^*_{1-\\alpha}=1.732$.\n\nWe then obtain the conditional error by numerically calculating the three-dimensional integral below using the {\\tt{adaptIntegrate}} function in the {\\tt{R}}-package {\\tt{cubature}}.\n\\begin{eqnarray*}\nA&=&1- \\frac{1}{(2\\pi)^{M\/2} |\\pmb{R}^*|^{1\/2}}\\frac{1}{\\Gamma (\\nu_1\/2) 2^{\\nu_1\/2}} \\int_0^{+ \\infty} \\int_{-\\infty}^{c^*_{1-\\alpha}} \\int_{-\\infty}^{c^*_{1-\\alpha}} \\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{M\/2}(w^*)^{\\nu_1\/2-1} e^{-w^*\/2}\\times\\\\\n& &\\exp\\Bigg[-\\frac{1}{2}\\left\\{t_1^*\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{1\/2}-b^*_1, t_2^*\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{1\/2}-b^*_2\\right\\}(\\pmb{R}^*)^{-1} \\\\\n& & \\left\\{t_1^*\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{1\/2}-b^*_1, t_2^*\\left(\\frac{w^*}{\\nu_1}+q^*\\right)^{1\/2}-b^*_2\\right\\}^\\prime\\, \\Bigg] dw^* \\ dt_1^* \\ dt_2^* = 0.198.\n\\end{eqnarray*}\n\nAfter adapting the dose-response models and dosage groups at the end of Stage 1, we consider the conditional distribution of $\\pmb{T}\\,|\\,\\pmb{Y}_1$. The adaptive critical value $\\tilde{c}_{1-\\alpha}$ can be obtained by solving the following equation using a bisection algorithm:\n\n\\begin{eqnarray*}\n\\tilde{A}&=& \\frac{1}{(2\\pi)^{M\/2} |\\pmb{\\tilde{R}}|^{1\/2}}\\frac{1}{\\Gamma (\\nu_2\/2) 2^{\\nu_2\/2}} \\int_0^{+ \\infty} \\int_{-\\infty}^{\\tilde{c}_{1-\\alpha}} \\int_{-\\infty}^{\\tilde{c}_{1-\\alpha}} \\left(\\frac{w}{\\nu}+q\\right)^{M\/2} w^{\\nu_2\/2-1} e^{-w\/2}\\times \\\\\n& &\\exp\\Bigg[-\\frac{1}{2}\\left\\{t_1\\left(\\frac{w}{\\nu}+q\\right)^{1\/2}-b_1, t_2\\left(\\frac{w}{\\nu}+q\\right)^{1\/2}-b_2\\right\\}\\pmb{\\tilde{R}}^{-1} \\\\\n& & \\left\\{t_1\\left(\\frac{w}{\\nu}+q\\right)^{1\/2}-b_1, t_2\\left(\\frac{w}{\\nu}+q\\right)^{1\/2}-b_2\\right\\}^\\prime\\, \\Bigg] dw \\ dt_1\\ dt_2 = A ,\n\\end{eqnarray*}\nwhere the covariance matrix $\\pmb{\\tilde{R}}$ is\n\\begin{eqnarray*}\n\\pmb{\\tilde{R}}=\\left(\\begin{array}{cc}\n0.375 &0.331\\\\\n0.331 &0.375\n\\end{array}\\right).\n\\end{eqnarray*}\n\nFinally, we obtain the adaptive critical value $\\tilde{c}_{1-\\alpha}=1.802$ with tolerance $10^{-7}$. 
The combined test statistics are $\\pmb{T}=(T_1,T_2)^\\prime=(2.11, 2.38)^\\prime$ and we reject $H_0$ since $T_{\\max}=2.38 \\geq \\tilde{c}_{1-\\alpha}$.\n\n\n\\section{Simulation studies}\\label{Simulation studies}\n\n\nIn this section, we conduct simulation studies to compare the operating characteristics of the AGMCTs with those of the AMCT in the setting of a design that adapts both the candidate dose-response models and the dosage groups based on data from Stage 1. We also compare these with the operating characteristics of the corresponding tests in a non-adaptive design.\n\nAssume $k_1=5$ and $\\pmb{d}_{\\text{Stage1}}=(0,0.05,0.20,0.60,1.00)^\\prime$. The total sample size is the same for each of the two stages ($N_1=N_2$) and the group sample sizes within each stage are equal, with $N_1=N_2=60$, 120, 180, and 240. The $M=5$ candidate dose-response models with the original specifications of $\\pmb{\\theta}$ are shown in Table \\ref{Table.original candidate models}. The outcome for each patient is distributed as $N(\\mu(d),\\sigma^2)$, where the true mean configuration $\\mu(d)$ follows one of the eight different dose-response models in Table \\ref{Sim.eight dose-response models}, and $\\sigma=1.478$. The dose-response curves for the five candidate models and the eight true dose-response models are shown in Figure \\ref{fig: candidate and true model}.\n\nFor the (true) $E_{\\max}$ 2 and Double-logistic models, the optimal contrasts are highly correlated with those of the candidate models. In contrast, for the (true) $E_{\\max}$ 3, Exponential 1, Exponential 2, Quadratic 2, Step and Truncated-logistic models, the optimal contrasts are not highly correlated with those of the candidate models (Figure \\ref{fig: True vs. candidate}).\n\nFor the AGMCTs, we use three GMCTs to combine the $M=5$ dependent $p$-values \\emph{within} each stage: Tippett's ($T$), Fisher's ($F$) and inverse normal ($N$) combination methods (Ma and McDermott, 2020). The same GMCT is used in both Stage 1 and Stage 2. To perform the overall test, only the inverse normal ($\\Psi_N$) combination statistic is used to combine $p_1$ and $p_2$ \\emph{across} stages since our preliminary simulation studies showed that, in general, using $\\Psi_N$ to combine $p_1$ and $p_2$ yielded greater power than using $\\Psi_F$. The reason for this is that under the alternative hypothesis, $p_1$ and $p_2$ both tend to be small and the rejection region of $\\Psi_N$ is larger than that of $\\Psi_F$ when $p_1$ and $p_2$ are both small (Wassmer and Brannath 2016, Section 6.2).\n\n\n\nFor the AGMCTs, we report the results of the operating characteristics for both the known and unknown variance cases. The results for the corresponding GMCTs in a non-adaptive design are also reported. For the AMCT, the simulation studies of the operating characteristics are presented only for the known variance case. The corresponding test in a non-adaptive design is just the MCP-Mod procedure, which is equivalent to the GMCT based on Tippett's combination method in a non-adaptive design.\n\nAll dosage adaptations are made according to the example rule described in Section \\ref{Adapting the Dosage Groups}. 
To deal with the problems outlined in Section \\ref{Adapting the Candidate Dose-Response Models} above, if only one of the $E_{\\max}$ and Logistic models fails to converge in Stage 1, isotonic regression is used to generate the corresponding contrast for use in Stage 2; if both the $E_{\\max}$ and Logistic models fail to converge in Stage 1, then isotonic regression is used to generate the corresponding contrast for the Logistic model and the same contrast that was used in Stage 1 is used in Stage 2 for the $E_{\\max}$ model. Also, if there is a negative dose-response relationship suggested by the Stage 1 data (i.e., a negative estimated slope in the Linear model), no adaptation of the dose-response models is performed for Stage 2 and we only adapt the dosage groups.\n\nAll estimated values of Type I error probability and power are based on 10,000 replications of the simulations. The Type I error probabilities for the AGMCTs and the AMCT (Tables \\ref{sim: type I error, known variance} and \\ref{sim: type I error, unknown variance} in the Appendix) agree with theory that the tests being considered all exhibit control of the Type I error probability at $\\alpha=0.05$; all values fall within the 95\\% confidence interval (0.0457, 0.0543).\n\nFor the known variance case, the power curves of the competing tests are shown in Figure \\ref{fig: Power AGMCTs and AMCT, known variance}. When the optimal contrasts associated with the true dose-response models are highly correlated with those of the candidate models ($E_{\\max}$ 2 and Double-logistic models), the AGMCTs and the AMCT are, in general, slightly less powerful than the corresponding tests in a non-adaptive design. When the optimal contrasts associated with the true dose-response models are not highly correlated with those of the candidate models ($E_{\\max}$ 3, Exponential 1, Exponential 2, Quadratic 2, Step and Truncated-logistic models), however, the AGMCTs and AMCT are more powerful than the corresponding tests in a non-adaptive design. Another observation is that the overall performance of the AMCT is the best among all the adaptive designs.\n\nFor the unknown variance case, the power curves of the competing tests are shown in Figure \\ref{fig: Power AGMCTs, unknown variance}. The overall results for these comparisons are very similar to those for the known variance case.\n\n\\section{Conclusion}\\label{Conclusion}\n\nIn this article, we extend the MCP-Mod procedure with GMCTs (Bretz {\\it{et al}}., 2005; Ma and McDermott, 2020) to two-stage adaptive designs. We perform a GMCT within each stage and combine the stage-wise $p$-values using a specified combination method to test the overall null hypothesis of no dose-response relationship. We also consider and extend an alternative AMCT approach proposed by Miller (2010), which uses the maximum standardized stratified contrast across Stage 1 and Stage 2 as the test statistic. One issue that deserves further exploration is how to best determine the ``base test\" for the AMCT. Our development in Sections \\ref{AMCT, Known Variance Case} and \\ref{AMCT, Unknown Variance Case} is based on pre-specification of the contrasts, number of candidate dose-response models, and group sample sizes to be the same in Stage 2 as they were in Stage 1. While this is not necessarily the best choice, in the absence of the ability to enumerate all possible two-stage designs being considered, it might be quite reasonable in practice. 
An issue that remains unresolved is that of efficiently computing the conditional error and adaptive critical value for the AMCT when the variance is unknown since these involve multidimensional integrals that can take a long time to compute.\n\nSimulation studies demonstrate that the AGMCTs and AMCT are generally more powerful for PoC testing than the corresponding tests in a non-adaptive design if the true dose-response model is, in a sense, not ``close'' to the models included in the initial candidate set. This might occur, for example, if the selection of the candidate set of dose-response models is not well informed by evidence from preclinical and early-phase studies. This is consistent with intuition: if the dose-response models are badly misspecified at the design stage, using data from Stage 1 to get a better sense of the true dose-response model and using data from both Stage 1 and Stage 2 to perform an overall test for $H_0$ should result in increased power. On the other hand, if the true dose-response model is ``close\" to the models specified in the initial candidate set, the non-adaptive design is sufficient to detect the PoC signal. In this case, the adaptive design does not provide any benefit and results in a small loss of efficiency.\n\nComparisons among the different AGMCTs and the AMCT did not reveal major differences in their operating characteristics in general. Differences among the AGMCTs tended to be larger in the setting of a non-adaptive design (Ma and McDermott, 2020). In principle, the AGMCTs proposed here for two-stage adaptive designs could be extended to multiple stages, although the circumstances under which that would be beneficial are not clear.\n\nFinally, we note that baseline covariates can easily be incorporated into the AGMCTs, as outlined in Section 2.3 of Ma and McDermott (2020).\\\\\n\n\\setstretch{1}\n\n\\makeatletter\n\\renewcommand{\\@biblabel}[1]{#1.}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nAdvances in maritime robotics over the last two decades have fostered an emergence of unmanned surface vehicles (USVs). These autonomous boats range from small vessels used for automated inspection of dangerous areas and automation of repetitive tasks like bathymetry or environmental control, to massive cargo and transport ships. This next stage of maritime automation holds a potential to transform maritime-related tasks and will likely impact the global economy. The safety of autonomous navigation systems hinges on their environment perception capability, in particular obstacle detection, which is responsible for timely reaction and collision avoidance.\n\n\nCameras as low-power and information rich sensors are particularly appealing due to their large success in perception for autonomous cars~\\cite{Cordts2016Cityscapesa,Chen2018Encoder}. However, recent works~\\cite{Bovcon2019Mastr,Bovcon2020MODS} have shown that methods developed for autonomous cars do not translate well to USVs due to the specifics of the maritime domain. As a result, several approaches that exploit the domain specifics for improved detection accuracy have been recently proposed~\\cite{Bovcon2021WaSR,Cane2019Evaluating,Yao2021Shoreline,Zust2021SLR}. Since everything but water can be an obstacle, classical detectors for individual obstacle classes cannot address all obstacle types. 
State-of-the-art methods~\cite{Bovcon2021WaSR} instead cast maritime obstacle detection as an anomaly segmentation problem by segmenting the image into the water, sky and obstacle categories.\n\begin{figure}[ht]\n\centering\n\includegraphics[width=\linewidth]{figures\/motivation.pdf}\n\caption{Single-frame obstacle detection methods (top right) struggle to distinguish between object reflections and true objects. However, reflections exhibit a distinctive temporal pattern compared to true objects (bottom left). WaSR-T (bottom right) considers the temporal context from recent frames to learn these patterns and increase segmentation robustness.}\n\label{fig:motivation}\n\end{figure}\n\nDespite significant advances reported in the recent maritime benchmark~\cite{Bovcon2020MODS}, the state-of-the-art is still challenged by the reflective properties of the water surface, which cause object reflections and sun glitter. In fact, given a single image, it is quite difficult to distinguish a reflected object or a spot of sun glitter from a true obstacle (Figure~\ref{fig:motivation}). This results in a number of false positive detections, which in practice lead to frequent and unnecessary slowdowns of the boat, rendering current camera-based obstacle detection methods impractical.\n\n\nWe note that while correctly classifying reflections from a single image is challenging, the problem might become simpler when considering the temporal context. As illustrated in Figure~\ref{fig:motivation}, due to water dynamics, the reflection appearance is not locally static, like that of an obstacle, but undergoes warped deformations. Based on this insight, we propose a new maritime obstacle segmentation network WaSR-T, which is our main contribution. WaSR-T introduces a new temporal context module that allows the network to extract the temporal context from a sequence of frames to differentiate objects from reflections. To the best of our knowledge, this is the first deep maritime obstacle detection architecture with a temporal component.\n\n\begin{figure*}\n\centering\n\includegraphics[width=\linewidth]{figures\/architecture.pdf}\n\caption{Overview of WaSR-T (left). Target frame and preceding context frames are fed into a shared encoder producing per-frame feature maps $X_F$ and $M_F$. The Temporal Context Module (right) extracts the temporal information from per-frame embeddings using a 3D convolution. The resulting temporal context embeddings $C_V$ are combined with target frame embeddings $X_V$ and fed into the decoder which predicts the target frame segmentation.}\n\label{fig:architecture}\n\end{figure*}\n\n\nWe also observe that the challenging maritime mirroring and glitter scenes are underrepresented in the standard training sets. We therefore extend the existing single-frame maritime segmentation training dataset MaSTr1325~\cite{Bovcon2019Mastr} with corresponding preceding frames and introduce additional training images representing challenging reflection conditions, which is our secondary contribution. To maintain the notation convention, we name the extended dataset MaSTr1478. Experiments show that the dataset extension delivers significant performance improvement. 
Results on the recent maritime benchmark MODS~\\cite{Bovcon2020MODS} show that, compared to the single-frame WaSR~\\cite{Bovcon2021WaSR}, the proposed \\mbox{WaSR-T} reduces the number of false positive detections by 30\\% with a low computational overhead and sets a new state-of-the-art in maritime obstacle detection.\n\nIn summary, our main contributions are: (i) WaSR-T, a temporal extension of WaSR~\\cite{Bovcon2021WaSR} that leverages the temporal context for increased robustness and (ii) MaSTr1478, an extension of the existing single-frame training dataset~\\cite{Bovcon2019Mastr} with challenging reflection scenes that facilitates the training of temporal maritime segmentation networks. The new dataset and the WaSR-T source code will be publicly released to facilitate further research of temporal features in maritime obstacle detection.\n\n\n\n\n\n\n\n\\section{Related work}\n\n\nSemantic segmentation has become a common approach for obstacle detection in the marine domain~\\cite{Bovcon2019Mastr,Cane2019Evaluating,Bovcon2020MODS}, as it can address both dynamic (\\eg boats, swimmers, buoys) and static obstacles (\\eg shore, piers, floating fences) in a unified way by posing the problem as anomaly segmentation. Recently, several specialized networks for the marine domain have been proposed for this task~\\cite{Bovcon2021WaSR,Yao2021Shoreline,Qiao2022Automated}. These methods address reflections and increase detection robustness in multiple ways, including regularization techniques~\\cite{Yao2021Shoreline}, specialized loss functions~\\cite{Bovcon2021WaSR} and obstacle-oriented training regimes~\\cite{Zust2021SLR}. \n\nHowever, robustness to reflections is still lacking and causes comparatively low performance within the 15m area near the boat~\\cite{Bovcon2020MODS}, where segmentation errors are most critical. In practice, obstacle detection methods receive frames sequentially, thus the temporal component of the data is also available and could be used to distinguish between reflections and objects. So far, the additional temporal information has not yet been explored in context of maritime obstacle detection.\n\nIn other domains with similar access to sequential image data, effort has been made to harness the temporal information to improve the segmentation performance. Some approaches investigate the use of temporal information only during training to improve the temporal consistency of single-frame networks. \\cite{Varghese2021Unsupervised} and \\cite{Liu2020Efficient} achieve this by propagating the segmentation masks in consecutive frames by optical flow.\n\nIncorporating temporal information into the network for improved prediction has been explored as well,\nwith attention-based approaches being the most prevalent method. In video object segmentation~\\cite{Oh2019Video,Li2020Fast,Duke2021SSTVOS} attention is used to aggregate the information from features and segmentation masks of previous \"memory\" frames based on the attention between the target and memory features. However, these methods are designed mainly for propagating initial segmentation masks of large foreground objects over the video sequence and are not directly suitable for general purpose discriminative semantic segmentation required for obstacle detection.\n\nSimilarly, in video semantic segmentation~\\cite{Wang2021Temporal,Yuan2021CSANet} attention-based approaches are used to aggregate the temporal information from recent frames to improve general purpose semantic segmentation. 
\\cite{Yuan2021CSANet} additionally introduces auxiliary losses, which guide the learning of attention based on inter-frame consistency. Instead of a global attention which aggregates information from semantically similar regions from past frames, we propose a convolutional approach to facilitate the learning of local temporal texture, which is characteristic for reflections.\n\n\\section{Temporal context for obstacle detection}\n\n\nGiven a target frame $X \\in \\mathbb{R}^{3 \\times H \\times W}$, the task of the segmentation-based obstacle detection method is to predict the segmentation mask, \\ie to classify each location in $X$ as either water, sky or obstacle. We propose using the temporal context to improve the prediction accuracy. Our network (Figure~\\ref{fig:architecture}), denoted as \\mbox{WaSR-T}, is based on the state-of-the-art single-frame network for maritime obstacle detection\nWaSR~\\cite{Bovcon2021WaSR}. \nWe design WaSR-T to encode the discriminative temporal information about local appearance changes of a region over $T$ preceding context frames $M \\in \\mathbb{R}^{T \\times 3 \\times H \\times W}$. The temporal context is added to the high-level features at the deepest level of the network as shown in Figure~\\ref{fig:architecture}.\n\nFollowing \\cite{Oh2019Video} and \\cite{Wang2021Temporal}, the target and context frames are first individually encoded with a shared encoder network, producing per-frame feature maps $X_F \\in \\mathbb{R}^{N \\times H \\times W}$ and $M_F \\in \\mathbb{R}^{T \\times N \\times H \\times W}$, where $N$ is the number of channels.\nThe Temporal Context Module (Section~\\ref{sec:method\/temporal_descriptors}) then extracts dicriminative temporal context embeddings from per-frame features. Finally, the temporal context embeddings are concatenated with target frame embeddings and fed into a decoder network. Following \\cite{Bovcon2021WaSR}, the decoder gradually merges the information with multi-level features of the target frame (\\ie skip connections) and outputs the final target frame segmentation.\n\n\n\\subsection{Temporal Context Module}\\label{sec:method\/temporal_descriptors}\n\nThe Temporal Context Module (TCM) extracts the temporal information from embeddings of the context and target frames and combines it with embeddings of the target frame using concatenation (Figure~\\ref{fig:architecture}). For this reason, the number of input channels to the decoder doubles compared to the single-frame network. Thus, in order to preserve the structure and number of input channels to the decoder, TCM first reduces the dimensionality of per-frame feature maps $X_F$ and $M_F$ accordingly -- a shared $1 \\times 1$ convolutional layer is used to project the per-frame feature maps into $N\/2$ dimensional per-frame embeddings $X_V$ and $M_V$ as shown in Figure~\\ref{fig:architecture}.\n\nTo extract the temporal information from a sequence of frame embeddings, attention-based approaches~\\cite{Oh2019Video,Duke2021SSTVOS,Wang2021Temporal} are often utilized, as they allow aggregation of information from semantically similar regions across multiple frames to account for movement and appearance changes of objects. 
Reflections, however, often feature significant local texture changes as demonstrated in Figure~\ref{fig:motivation}.\nThus, instead of globally aligning semantically similar regions using attention mechanisms, we utilize a spatio-temporal convolution to extract the local texture changes.\n\nFirst, we stack the context and target frame embeddings $M_V$ and $X_V$ into a spatio-temporal context volume $C \in \mathbb{R}^{(T+1) \times N\/2 \times H \times W}$. Then a 3D convolution layer is used to extract discriminative spatio-temporal features from $C$. To account for minor inter-frame object and camera movements, a kernel size of $(T+1) \times 3 \times 3$ is used to capture temporal information in a local spatial region around locations in the context volume. We apply padding in the spatial dimensions to preserve the spatial size of the output context features $C_V \in \mathbb{R}^{N\/2 \times H \times W}$.\n\n\subsection{Efficient inference}\n\nDuring training, for each input image $X$, WaSR-T needs to extract all per-frame context embeddings $M_F$ in addition to target frame embeddings $X_V$. However, during inference the frames are passed to the network sequentially, thus recent frame embeddings can be stored in memory and feature extraction only needs to be performed on the newly observed target frame. Specifically, WaSR-T stores a buffer of the $T$ most recent frame embeddings $X_V$ in memory and uses them as the context frame embeddings $M_V$ in TCM. The memory buffer is initialized with $T$ copies of the $X_V$ embeddings of the first frame in the sequence. Using sequential inference, the efficiency of WaSR-T is not significantly impacted compared to single-frame methods, differing only due to the temporal context extraction in TCM.\n\n\section{Experiments}\n\n\begin{figure*}\n\centering\n\includegraphics[width=\linewidth]{figures\/cmp_real_compressed.pdf}\n\caption{Qualitative results on MODS (top) and web-sourced sequences (bottom) reveal that in WaSR-T the handling of reflections and sun glitter is significantly improved compared to WaSR, resulting in a smaller number of FP detections and increased temporal consistency.}\n\label{fig:cmp_mods}\n\end{figure*}\n\n\subsection{Implementation details}\n\nWaSR-T follows the architecture of WaSR~\cite{Bovcon2021WaSR} and applies ResNet101 as the feature encoder. In a preliminary study we observed that in contrast to WaSR, the inertial measurements (IMU) do not bring improvements in our temporal extension. Therefore the IMU is not used in the decoder for simplicity. We apply the original WaSR training procedure, i.e., the water separation loss function, hyper-parameters, optimizers, learning rate schedule and image augmentation. We set the number of past frames in the temporal context module to $T=5$. Because of training memory constraints, the backbone gradients are restricted to the current and previous frame. WaSR-T is trained for 100 epochs on 2$\times$NVIDIA Titan A100S GPUs with a minibatch size of 4 per GPU.\n\nThe networks in our experiments are trained on the training set MaSTr1478 (Section~\ref{sec:mastr1478}) and tested on the most recent maritime obstacle detection benchmark MODS~\cite{Bovcon2020MODS}, which contains approximately 100 annotated sequences captured under various conditions. 
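\n\nTo make the implementation concrete, the temporal context module can be summarized by the following simplified PyTorch-style sketch. The module and variable names are ours and several details are omitted, so the snippet illustrates the design described above rather than reproducing the released code.\n\begin{verbatim}\n# Simplified sketch of the Temporal Context Module (TCM).\nimport torch\nimport torch.nn as nn\n\nclass TemporalContextModule(nn.Module):\n    def __init__(self, channels, t=5):\n        super().__init__()\n        # shared 1x1 projection of per-frame features to N\/2 channels\n        self.project = nn.Conv2d(channels, channels \/\/ 2, kernel_size=1)\n        # spatio-temporal convolution over the (T+1) stacked embeddings\n        self.temporal = nn.Conv3d(channels \/\/ 2, channels \/\/ 2,\n                                  kernel_size=(t + 1, 3, 3),\n                                  padding=(0, 1, 1))\n\n    def forward(self, x_f, m_f):\n        # x_f: (B, N, H, W) target features; m_f: (B, T, N, H, W) context\n        b, t, n, h, w = m_f.shape\n        x_v = self.project(x_f)\n        m_v = self.project(m_f.flatten(0, 1)).view(b, t, -1, h, w)\n        c = torch.cat([m_v, x_v.unsqueeze(1)], dim=1)   # (B, T+1, N\/2, H, W)\n        c_v = self.temporal(c.transpose(1, 2)).squeeze(2)\n        return torch.cat([x_v, c_v], dim=1)             # fed to the decoder\n\end{verbatim}\nAt inference time, the projected context embeddings would simply be taken from the memory buffer described above rather than recomputed from the raw context frames.\n\n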
The evaluation protocol reflects the detection performance meaningful for practical USV navigation and separately evaluates the detection of obstacle-water edge for static obstacles and the detection of dynamic obstacles. The water-edge detection robustness ($\\mu_R$) is computed from the ground truth edge, while dynamic obstacle detection is evaluated in terms of true-positive (TP), false-positive (FP) and false-negative (FN) detections, and summarized by the F1 measure, precision (Pr) and recall (Re). A dynamic obstacle counts as detected (TP) if the coverage of the segmentation inside the ground truth bounding box is sufficient, otherwise the obstacle counts as undetected (FN). Predicted segmentations outside of the ground truth bounding boxes count as false positive detections. Detection performance is reported over the entire visible navigable area and separately within a 15m \\textit{danger zone} from the USV, where the detection performance is critical for immediate collision prevention.\n\n\\subsection{Temporally extended training dataset MaSTr1478}\\label{sec:mastr1478}\n\nTo facilitate the training of temporal networks, we extended the recent MaSTr1325~\\cite{Bovcon2019Mastr} dataset, which contains 1325 fully segmented images recorded by a USV. First, the dataset was extended by adding $T=5$ preceding frames for each annotated frame, to allow learning of the temporal context. We noticed that while MaSTr1325 is focused on the broader challenges in maritime segmentation, it contains relatively few examples of challenging reflections and glitter. We have thus extended the original dataset with additional 153 images (including their preceding frames) and use the codename \\textit{MaSTr1478} for this new dataset. The additional images were obtained from online videos or were additionally recorded by us to represent difficult scenarios for current single-frame methods, where the temporal information is important for accurate prediction, such as object mirroring, reflections and sun glitter. Examples are shown in Figure~\\ref{fig:dataset}. The frames are labeled with per-pixel ground truth following~\\cite{Bovcon2019Mastr}. To emphasize the challenging conditions, the training samples in the training batches are sampled from the original MaSTr1325 images and the additional images with equal probability. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/ds_examples.jpg}\n\\caption{Examples of the additional training sequences in the MaSTr1478 with object reflections, sun glitter and low-light conditions.}\n\\label{fig:dataset}\n\\end{figure}\n\n\n\n\\subsection{Comparison with state of the art}\n\nWe compare WaSR-T with single-frame state-of-the-art segmentation methods (DeepLabV3+~\\cite{Chen2018Encoder}, BiSeNet~\\cite{Yu2018Bisenet}, RefineNet~\\cite{Lin2017RefineNet}, WaSR~\\cite{Bovcon2021WaSR}), which scored as top performers on the recent maritime obstacle detection benchmark MODS~\\cite{Bovcon2020MODS}, as well as with state-of-the-art segmentation methods that rely on temporal information. For the latter we considered the video object segmentation method STM~\\cite{Oh2019Video} and a recent video semantic segmentation method TMANet~\\cite{Wang2021Temporal}, which use memory attention to encode the temporal information from past frames. 
Since a relatively simple backbone is used in the original STM, we extended it to the same backbone and decoder architecture as used in WaSR~\\cite{Bovcon2021WaSR}.\n\n\n\nResults in Table~\\ref{tab:sota} show that multi-frame methods outperform the single-frame networks in detection precision (particularly within the danger zone), and except from TMANet, preserve a high recall.\nWaSR-T outperforms the original WaSR by 1.8 points in precision and 0.9 points in the overall F1, while substantially outperforming it within the danger zone resulting in a 6.0 points F1 score improvement. This is primarily due to reduction of false positives (see Figures \\ref{fig:cmp_mods} and \\ref{fig:cmp_davimar}), which is reflected in a 10.5 point improvement of the Pr score within the danger zone. \\mbox{WaSR-T} also outperforms the temporal state-of-the-art networks especially inside the danger zone, resulting in approximately 2 points performance improvement of danger-zone F1 score. \n\nIn terms of speed, the new temporal module does not substantially increase the computation. The original WaSR runs at 15 FPS, while WaSR-T runs at approximately 10 FPS, which matches the sequence acquisition framerate.\n\nDespite the large improvements in robustness to reflections, WaSR-T also shares some limitations (\\eg detection of thin objects) with existing methods as shown in Figure~\\ref{fig:cmp_failure}. For example, the temporal context is still not able to fully address reflections in rare situations where the water is completely still and the temporal texture changes cannot be observed. We aim to tackle these challenges in our future work. \n\n\\begin{table}[]\n \\centering\n \\caption{Comparison of SOTA single-frame and multi-frame methods on MODS in terms of water-edge detection robustness ($\\mu_R$), precision, recall and F1 score for obstacle detection. 
Danger-zone performance is reported in parentheses.}\n \\label{tab:sota}\n \\begin{tabular}{lcccc}\n \n method & $\\mu_R$ & Pr & Re & F1 \\\\\n \\midrule\n DeepLabV3+~\\cite{Chen2018Encoder} & 96.8 & 80.1 (18.6) & \\textbf{92.7} (\\textbf{98.4}) & 86.0 (31.3) \\\\\n BiSeNet~\\cite{Yu2018Bisenet} & 97.4 & 90.5 (53.7) & 89.9 (97.0) & 90.2 (69.1) \\\\\n RefineNet~\\cite{Lin2017RefineNet} & 97.3 & 89.0 (45.1) & 93.0 (98.1) & 91.0 (61.8) \\\\\n WaSR~\\cite{Bovcon2021WaSR} & 97.8 & 95.1 (80.3) & 91.9 (96.2) & 93.5 (87.6) \\\\\n \\midrule\n TMANet~\\cite{Wang2021Temporal} & 98.3 & 96.4 (90.0) & 85.1 (93.0) & 90.4 (91.5) \\\\\n STM~\\cite{Oh2019Video} & \\textbf{98.4} & 96.3 (86.2) & 92.5 (96.4) & \\textbf{94.4} (91.0) \\\\\n WaSR-T & \\textbf{98.4} & \\textbf{96.9} (\\textbf{90.8}) & 92.0 (96.5) & \\textbf{94.4} (\\textbf{93.6}) \\\\\n \n \\end{tabular}\n\\end{table}\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/cmp_davimar.pdf}\n\\caption{Qualitative results on challenging inland water sequences demonstrates large improvements of WaSR-T in terms of practical robustness to reflections.}\n\\label{fig:cmp_davimar}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/cmp_failure.pdf}\n\\caption{Failure cases of both methods include small objects hiding in reflections (column 1), reflections on very still water (column 2), thin objects (column 3) and challenging water-land boundaries (columns 4 and 5).}\n\\label{fig:cmp_failure}\n\\end{figure*}\n\n\\subsection{Analysis of the alternative temporal aggregation methods}\n\nNext, we analyzed alternatives to the feature fusion in the temporal context module proposed in Section~\\ref{sec:method\/temporal_descriptors}: (i) pixel-wise average pooling of temporal features (window size of $T + 1 \\times 1 \\times 1$) and (ii) local average pooling of temporal features ($T + 1 \\times 3 \\times 3$). Table~\\ref{tab:temp_agg} shows that, compared to singe-frame WaSR, the simple pixel-wise temporal average pooling of context features already improves the performance over single-frame inference by 0.8 points (overall) and 1.9 points (danger zone) in F1. Increasing the pooling window size to a local window does not improve performance. In contrast, the 3D convolution approach described in Section~\\ref{sec:method\/temporal_descriptors} is able to learn discriminative local temporal relations and increases the F1 by an additional 0.2 points overall, and by 3.5 points inside the danger zone. The improvement is primarily on the account of substantial reduction of false positive detections.\n\\begin{table}[htb]\n \\centering\n \\caption{WaSR-T performance with different temporal aggregation methods in terms of water-edge detection robustness ($\\mu_R$), number of FP detections and F1 score. 
Performance inside the danger-zone is reported in parentheses.}\n \\label{tab:temp_agg}\n \\begin{tabular}{lccc}\n \n aggregation & $\\mu_R$ & FP & F1 \\\\\n \\midrule\n Single-frame & 97.8 & 2492 (629) & 93.5 (87.6) \\\\\n Avg pool ($T+1 \\times 1 \\times 1$) & \\textbf{98.4} & 1771 (474) & 94.2 (90.1) \\\\\n Avg pool ($T+1 \\times 3 \\times 3$) & 98.3 & 2152 (537) & 93.5 (89.2) \\\\\n \n \n 3D Convolution & \\textbf{98.4} & \\textbf{1540} (\\textbf{261}) & \\textbf{94.4} (\\textbf{93.6}) \\\\\n \n \\end{tabular}\n\\end{table}\n\n\\subsection{Influence of the temporal and spatial context size}\n\nTo gain further insights, we analyzed the influence of temporal context module parameters, i.e., the temporal context length $T$ and spatial kernel size. Table~\\ref{tab:abl} shows that utilizing even a single temporal context frame (i.e., $T=1$) significantly improves the performance over single-frame inference ($T=0$) by decreasing the number of false positive detections by 30\\% overall and 39\\% inside the danger zone.\nIncreasing the temporal context length $T$ further, brings consistent, but smaller improvements in reduction of FP detections and danger-zone F1 scores.\n\n\nThe spatial context size, determined by the kernel size of the 3D convolution of the temporal context module also importantly affects the performance. Using $1 \\times 1$ spatial kernel size encodes only pixel-wise temporal relations, which negatively impacts the performance inside the danger-zone within which the objects are typically large. Increasing the kernel size to $3 \\times 3$ addresses this issue, while the performance does not improve with further increasing the spatial context size.\n\\begin{table}[htb]\n \\centering\n \\caption{Influence of parameters in WaSR-T in terms of water-edge detection robustness ($\\mu_R$), number of FP detections and F1 score. Performance inside the danger-zone is reported in parentheses.}\n \\label{tab:abl}\n \\begin{tabular}{cccc}\n \n $T$ & $\\mu_R$ & FP & F1 \\\\\n \\midrule\n 0 & 97.8 & 2492 (629) & 93.5 (87.6) \\\\\n 1 & 98.4 & 1745 (383) & 94.2 (91.5) \\\\\n 3 & \\textbf{98.6} & 1606 (323) & 94.0 (92.6) \\\\\n 5 & 98.4 & \\textbf{1540} (\\textbf{261}) & \\textbf{94.4} (\\textbf{93.6}) \\\\\n \\midrule\n kernel size & & & \\\\\n \\midrule\n $1 \\times 1$ & 98.1 & \\textbf{1456} (357) & \\textbf{94.6} (92.0) \\\\\n $3 \\times 3$ & \\textbf{98.4} & 1540 (\\textbf{261}) & 94.4 (\\textbf{93.6}) \\\\\n $5 \\times 5$ & 98.3 & 1639 (318) & 94.2 (92.6) \\\\\n \n \\end{tabular}\n\\end{table}\n\n\n\\subsection{Influence of the extended MaSTr1478}\n\nFinally, several experiments were performed to evaluate the contribution of the extended training dataset MaSTr1478. In particular, how much performance improvement is brought by the temporal extension and how much by the new scenes with reflections and glitter. The results in Table~\\ref{tab:wasr_comp} show that the single-frame WaSR does not benefit from the additional sequences in MaSTr1478. While the overall detection performance improves by 0.1 points F1, the performance decreases by 0.6 points inside the danger zone. \nUsing only temporally extended MaSTr1325 does not improve WaSR-T performance. However, considering also the new sequences in MaSTr1478, the performance improves substantially. We observe a 41\\% overall reduction in the number of FP detections and a 53\\% reduction of FPs inside the danger zone. The overall performance is thus increased by 1.0 F1 points overall and by 5.4 F1 points inside the danger zone. 
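\n\nThe equal-probability sampling of the original MaSTr1325 images and the additional reflection scenes (Section~\ref{sec:mastr1478}), which underlies the MaSTr1478 results above, can be realized with a weighted sampler; the snippet below is only an illustrative sketch and is not taken from the released training code.\n\begin{verbatim}\n# Equal-probability sampling of the two training subsets.\nimport torch\nfrom torch.utils.data import WeightedRandomSampler\n\nn_orig, n_extra = 1325, 153\nweights = torch.cat([torch.full((n_orig,), 0.5 \/ n_orig),\n                     torch.full((n_extra,), 0.5 \/ n_extra)])\nsampler = WeightedRandomSampler(weights, num_samples=n_orig + n_extra,\n                                replacement=True)\n# pass `sampler` to the DataLoader built over the concatenated dataset\n\end{verbatim}\n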
\n\nFigure~\\ref{fig:cmp_mods} provides qualitative results. In contrast to \\mbox{WaSR-T}, the single-frame WaSR is unable to correctly segment regions of water containing reflections and glitter, despite using the reflection-specific training examples of MaSTr1478. We conclude that both the new scenes and the temporal extension allow WaSR-T to learn the temporal appearance of water and are responsible for the improved segmentation.\n\n\n\n\\begin{table}[htb]\n    \\centering\n    \\caption{Influence of training dataset extensions in terms of water-edge detection robustness ($\\mu_R$), number of FP detections and F1 score. Performance inside the danger-zone is reported in parentheses.}\n    \\label{tab:wasr_comp}\n    \\begin{tabular}{lccc}\n    \n         model & $\\mu_R$ & FP & F1 \\\\\n         \\midrule\n         WaSR (MaSTr1325) & 97.2 & 2625 (561) & 93.4 (88.2) \\\\\n         WaSR (MaSTr1478) & 97.8 & 2492 (629) & 93.5 (87.6) \\\\\n         WaSR-T (MaSTr1325) & 97.5 & 2273 (655) & 93.7 (87.3) \\\\\n         WaSR-T (MaSTr1478) & \\textbf{98.4} & \\textbf{1540} (\\textbf{261}) & \\textbf{94.4} (\\textbf{93.6}) \\\\\n    \n    \\end{tabular}\n\\end{table}\n\n\\section{Conclusion}\n\nWe presented WaSR-T, a novel maritime obstacle detection network that harnesses the temporal context to improve obstacle detection by segmentation on water regions with ambiguous appearance. We also extended the well-known training dataset MaSTr1325~\\cite{Bovcon2019Mastr} by including preceding images for each training image and added 153 new training images with challenging scenes containing object mirroring and glitter -- the new dataset is called MaSTr1478. Experiments show that the new images and the temporal extension lead to a substantial improvement in maritime obstacle detection. WaSR-T outperforms single-frame maritime obstacle detection networks as well as other networks that use temporal contexts and sets a new state-of-the-art on the maritime obstacle detection benchmark MODS~\\cite{Bovcon2020MODS}. \n\n\n\n\n\\addtolength{\\textheight}{-12cm} \n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nThe task of generating textual descriptions of images tests a machine's ability to understand visual data and interpret it in natural language. \nIt is a fundamental research problem lying at the intersection of natural language processing, computer vision, and cognitive science.\nFor example, single-image captioning~\\citep{farhadi2010every, kulkarni2013babytalk, vinyals2015show, xu2015show} has been extensively studied.\n\n\\begin{figure}[htbp]\n    \\centering\n    \\includegraphics[width=.9\\linewidth]{images\/overview.pdf}\n    \\caption{Overview of the visual comparison task and our motivation. The key is to understand both images and compare them. 
Explicit semantic structures can be compared between images and used to generate comparative descriptions aligned to the image saliency.}\n \\label{fig:task}\n\\end{figure}\n\nRecently, a new intriguing task, visual comparison, along with several benchmarks ~\\citep{jhamtani2018learning, tan2019expressing, park2019robust, forbes2019neural} has drawn increasing attention in the community.\nTo complete the task and generate comparative descriptions, a machine should understand the visual differences between a pair of images (see \\cref{fig:task}).\nPrevious methods~\\cite{jhamtani2018learning} often consider the pair of pre-trained visual features such as the ResNet features~\\cite{he2016deep} as a whole, and build end-to-end neural networks to predict the description of visual comparison directly.\nIn contrast, humans can easily reason about the visual components of a single image and describe the visual differences between two images based on their semantic understanding of each one. \nHumans do not need to look at thousands of image pairs to describe the difference of new image pairs, as they can leverage their understanding of single images for visual comparison. \n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=\\textwidth]{images\/model.pdf}\n\\caption{Our \\textsc{L2C} model. It consists of a segmentation encoder, a graph convolutional module, and an LSTM decoder with an auxiliary loss for single-image captioning. Details are in \\cref{sec:method}.}\n\\label{fig:model}\n\\end{figure*}\n\nTherefore, we believe that visual differences should be learned by understanding and comparing every single image's semantic representation.\nA most recent work~\\cite{zhang2020diagnosing} conceptually supports this argument, where they show that low-level ResNet visual features lead to poor generalization in vision-and-language navigation, and high-level semantic segmentation helps the agent generalize to unseen scenarios. \n\nMotivated by humans, we propose a Learning-to-Compare (\\textsc{L2C}) method that focuses on reasoning about the semantic structures of individual images and then compares the difference of the image pair. \nOur contributions are three-fold: \n\\begin{itemize}\n \\setlength\\itemsep{-0.2em}\n \\item We construct a structured image representation by leveraging image segmentation with a novel semantic pooling, and use graph convolutional networks to perform reasoning on these learned representations.\n \\item We utilize single-image captioning data to boost semantic understanding of each image with its language counterpart.\n \\item Our \\textsc{L2C} model outperforms the baseline on both automatic evaluation and human evaluation, and generalizes better on the testing image pairs.\n\\end{itemize}\n\n\\section{\\textsc{L2C} Model}\n\\label{sec:method}\nWe present a novel framework in \\cref{fig:model}, which consists of three main components. \nFirst, a \\emph{segmentation encoder} is used to extract structured visual features with strong semantic priors.\nThen, a \\emph{graph convolutional module} performs reasoning on the learned semantic representations. \nTo enhance the understanding of each image, we introduce a \\emph{single-image captioning auxiliary loss} to associate the single-image graph representation with the semantic meaning conveyed by its language counterpart.\nFinally, a decoder generates the visual descriptions comparing two images based on differences in graph representations. 
\nAll parameters are shared for both images and both tasks.\n\n\\subsection{Semantic Representation Construction}\nTo extract semantic visual features, we utilize pre-trained fully convolutional networks (FCN)~\\citep{long2015fully} with ResNet-101 as the backbone. \nAn image $\\mathcal{I}$ is fed into the ResNet backbone to produce a feature map $\\mathcal{F}$ $\\in \\mathbb{R}^{D\\times H\\times W}$, which is then forwarded into an FCN head that generates a binary segmentation mask $\\mathcal{B}$ for the bird class. \nHowever, the shapes of these masks are variable for each image, and simple pooling methods such as average pooling and max pooling would lose information about the spatial relations within the mask.\n\nTo address this issue and enable efficient aggregation over the area of interest (the masked area), we add a module after the ResNet to cluster each pixel within the mask into $K$ classes. Feature map $\\mathcal{F}$ is forwarded through this pooling module to obtain a confidence map $\\mathcal{C}$ $\\in \\mathbb{R}^{K\\times H\\times W}$, whose entry at each pixel is a $K$-dimensional vector that represents the probability distribution over the $K$ classes.\n\nThen a set of nodes $V = \\{v_1, ..., v_K\\}, v_k \\in \\mathbb{R}^D$ is constructed as follows: \n\\begin{equation}\n    v_k= \\sum_{i, j} \\mathcal{F} \\odot \\mathcal{B} \\odot \\mathcal{C}_k\n\\end{equation}\nwhere $i=1,\\ldots,H$, $j=1,\\ldots,W$, $\\mathcal{C}_k$ is the $k$-th probability map and $\\odot$ denotes element-wise multiplication.\n\nTo enforce local smoothness, i.e., that pixels in a neighborhood are more likely to belong to the same class, we employ the total variation norm as a regularization term:\n\\begin{equation}\n    \\mathcal{L}_{TV} = \\sum_{i,j}|\\mathcal{C}_{i+1,j}-\\mathcal{C}_{i,j}|+|\\mathcal{C}_{i,j+1}-\\mathcal{C}_{i,j}|\n\\end{equation}\n\n\\subsection{Comparative Relational Reasoning}\nInspired by recent advances in visual reasoning and graph neural networks ~\\citep{chen2018iterative, li2019visual}, we introduce a relational reasoning module to enhance the semantic representation of each image.\nA fully-connected visual semantic graph $G = (V, E)$ is built, where $V$ is the set of nodes, each containing a regional feature, and $E$ is constructed by measuring the pairwise affinity between every two nodes $v_i, v_j$ in a latent space:\n\\begin{equation}\n    A(v_i, v_j) = (W_i v_i)^T (W_j v_j)\n\\end{equation}\nwhere $W_i, W_j$ are learnable matrices, and $A$ is the constructed adjacency matrix. \n\nWe apply Graph Convolutional Networks (GCN) ~\\citep{kipf2016semi} to perform reasoning on the graph.\nAfter the GCN module, the output $V^o = \\{v_1^o, ..., v_K^o\\}, v_k^o \\in \\mathbb{R}^D$ is a relationship-enhanced representation of a bird.\nFor the visual comparison task, we compute the difference of every pair of corresponding visual nodes from the two sets, denoted as $V^o_{diff} = \\{v_{diff,1}^o, ..., v_{diff,K}^o\\}, v_{diff,k}^o = v_{k,1}^o - v_{k, 2}^o \\in \\mathbb{R}^D$.\n\n\\subsection{Learning to Compare while Learning to Describe}\nAfter obtaining relation-enhanced semantic features, we use a Long Short-Term Memory (LSTM) ~\\citep{hochreiter1997long} to generate captions. \nAs discussed in \\cref{sec:intro}, semantic understanding of each image is key to solving the task. However, there is no single dataset that contains both visual comparison and single-image annotations.\nHence, we leverage two datasets from similar domains to facilitate training. One is for visual comparison, and the other is for single-image captioning. 
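\n\nBefore turning to the training procedure, the following PyTorch sketch summarizes the semantic pooling, the total-variation regularizer and one simplified graph-convolution step described in the two subsections above. The tensor shapes, the softmax normalization of the adjacency matrix and the single-layer GCN are simplifying assumptions, not the authors' implementation.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\ndef semantic_pool(feat, mask, conf):\n    # feat: (D, H, W) features, mask: (H, W) binary segmentation,\n    # conf: (K, H, W) soft assignment of every pixel to K classes\n    masked = feat * mask\n    return torch.einsum('dhw,khw->kd', masked, conf)   # v_k, (K, D)\n\ndef tv_loss(conf):\n    # total-variation term enforcing local smoothness of the map C\n    dy = (conf[:, 1:, :] - conf[:, :-1, :]).abs().sum()\n    dx = (conf[:, :, 1:] - conf[:, :, :-1]).abs().sum()\n    return dx + dy\n\nclass GraphReasoning(nn.Module):\n    # One simplified graph-convolution step over the K node features.\n    def __init__(self, dim=512):\n        super().__init__()\n        self.wi = nn.Linear(dim, dim, bias=False)\n        self.wj = nn.Linear(dim, dim, bias=False)\n        self.gcn = nn.Linear(dim, dim)\n\n    def forward(self, nodes):                           # nodes: (K, D)\n        adj = torch.softmax(self.wi(nodes) @ self.wj(nodes).t(), dim=-1)\n        return F.relu(self.gcn(adj @ nodes))\n\nreason = GraphReasoning()          # parameters shared for both images\nmask = torch.ones(32, 32)\nconf = torch.rand(9, 32, 32)\nv1 = reason(semantic_pool(torch.randn(512, 32, 32), mask, conf))\nv2 = reason(semantic_pool(torch.randn(512, 32, 32), mask, conf))\nv_diff = v1 - v2                   # node-wise difference fed to the LSTM\n\\end{verbatim}\n\n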
Alternate training is utilized such that, for each iteration, two mini-batches of images from both datasets are sampled independently and fed into the encoder to obtain visual representations $V^o$ (for single-image captioning) or $V^o_{diff}$ (for visual comparison).\n\nThe LSTM takes $V^o$ or $V^o_{diff}$ with the previous output word embedding $y_{t-1}$ as input, updates the hidden state from $h_{t-1}$ to $h_t$, and predicts the word for the next time step.\nThe generation process of bi-image comparison is learned by maximizing the log-likelihood of the predicted output sentence. The loss function is defined as follows:\n\\begin{equation}\n    \\mathcal{L}_{diff}=-\\sum_t {\\log P(y_{t}|y_{1:t-1}, V^o_{diff})}\n\\end{equation}\nA similar loss is applied for single-image captioning:\n\\begin{equation}\n    \\mathcal{L}_{single}=-\\sum_t {\\log P(y_{t}|y_{1:t-1}, V^o)}\n\\end{equation}\n\nOverall, the model is optimized with a mixture of the cross-entropy losses and the total variation loss:\n\\begin{equation}\n    \\begin{split}\n    \\mathcal{L}_{loss} = \\mathcal{L}_{diff} + \\mathcal{L}_{single} + \\lambda \\mathcal{L}_{TV}\n    \\end{split}\n\\end{equation}\nwhere $\\lambda$ is an adaptive factor that weighs the total variation loss.\n\n\n\\section{Experiments}\n\\subsection{Experimental Setup}\n\\paragraph{Datasets} \nThe Birds-to-Words dataset (B2W) contains 3347 image pairs, each with around 5 descriptions of the visual difference. This leads to 12890\/1556\/1604 captions for the train\/val\/test splits. Since B2W contains only visual comparisons, we use the CUB-200-2011 dataset (CUB)~\\citep{wah2011caltech}, which consists of single-image captions, as an auxiliary dataset to facilitate the training of semantic understanding. \nCUB has 8855\/2933 images of birds for the train\/val splits, and each image has 10 captions.\n\n\\paragraph{Evaluation Metrics}\nPerformance is first evaluated with three automatic metrics\\footnote{\\url{https:\/\/www.nltk.org}}: BLEU-4~\\citep{papineni2002bleu}, ROUGE-L~\\citep{lin-2004-rouge}, and CIDEr-D~\\citep{vedantam2015cider}. Each generated description is compared to all five reference paragraphs. Note that for this particular task, researchers observe that CIDEr-D is susceptible to common patterns in the data (see \\cref{tab:main} for evidence), and ROUGE-L is anecdotally correlated with higher-quality descriptions (which is noted in previous work~\\citep{forbes2019neural}). 
Hence we consider ROUGE-L as the major metric for evaluating performances.\nWe then perform a human evaluation to further verify the performance.\n\n\\begin{table*}[t]\n\\small\n\\centering\n\\setlength{\\tabcolsep}{8pt}\n\\begin{tabular}{l rrrrr rrrrr}\n\\toprule\n & \\multicolumn{3}{c}{\\textbf{Validation}} & \\multicolumn{3}{c}{\\textbf{Test}}\\\\\n\\cmidrule(lr){2-4} \\cmidrule(lr){5-7}\nModel & BLEU-4 $\\uparrow$ & ROUGE-L $\\uparrow$ & CIDEr-D $\\uparrow$ & BLEU-4 $\\uparrow$ & ROUGE-L $\\uparrow$ & CIDEr-D $\\uparrow$ \\\\\n\\toprule\nMost Frequent & 20.0 & 31.0 & \\textbf{42.0} & 20.0 & 30.0 & \\textbf{43.0} \\\\\nText-Only & 14.0 & 36.0 & 5.0 & 14.0 & 36.0 & 7.0 \\\\\nNeural Naturalist & 24.0 & 46.0 & 28.0 & 22.0 & 43.0 & 25.0 \\\\\nCNN+LSTM & 25.1 & 43.4 & 10.2 & 24.9 & 43.2 & 9.9 \\\\\n\\midrule \n\\textsc{L2C} [B2W] & 31.9 & 45.7 & 15.2 & 31.3 & 45.3 & 15.1 \\\\\n\\textsc{L2C} [CUB+B2W] & \\textbf{32.3} & \\textbf{46.2} & 16.4 & \\textbf{31.8} & \\textbf{45.6} & 16.3 \\\\\n\\midrule\nHuman & 26.0 & 47.0 & 39.0 & 27.0 & 47.0 & 42.0 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Results for visual comparison on the Birds-to-Words dataset~\\citep{forbes2019neural}. \\textit{Most Frequent} produces only the most observed description in the dataset: ``the two animals appear to be exactly the same\". \\textit{Text-Only} samples captions from the training data according to their empirical distribution. \\textit{Neural Naturalist} is a transformer model in ~\\citet{forbes2019neural}. \\textit{CNN+LSTM} is a commonly-used CNN encoder and LSTM decoder model.\n}\n\\label{tab:main}\n\\end{table*}\n\n\n\\paragraph{Implementation Details}\nWe use Adam as the optimizer with an initial learning rate set to 1e-4. The pooling module to generate $K$ classes is composed of two convolutional layers and batch normalization, with kernel sizes 3 and 1 respectively. We set $K$ to 9 and $\\lambda$ to 1. The dimension of graph representations is 512. The hidden size of the decoder is also 512. The batch sizes of B2W and CUB are 16 and 128. Following the advice from ~\\citep{forbes2019neural}, we report the results using models with the highest ROUGE-L on the validation set, since it could correlate better with high-quality outputs for this task.\n\n\n\\subsection{Automatic Evaluation}\nAs shown in \\cref{tab:main}, first, L2C[B2W] (training with visual comparison task only) outperforms baseline methods on BLEU-4 and ROUGE-L. Previous approaches and architectures failed to bring superior results by directly modeling the visual relationship on ResNet features.\nSecond, joint learning with a single-image caption L2C[B2W+CUB] can help improve the ability of semantic understanding, thus, the overall performance of the model.\nFinally, our method also has a smaller gap between validation and test set compared to \\textit{neural naturalist}, indicating its potential capability to generalize for unseen samples.\n\n\\begin{table}\n\\small\n\\centering\n\\begin{tabular}{c c|c|c}\n\\toprule\nChoice (\\%) & L2C & CNN+LSTM & Tie \\\\\n\\midrule\nScore & \\textbf{50.8} & 39.4 & 9.8 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Human evaluation results. 
We present workers with two generations by L2C and CNN+LSTM for each image pair and let them choose the better one.\n}\n\\label{tab:human}\n\\end{table}\n\n\\subsection{Human Evaluation}\nTo fully evaluate our model, we conduct a pairwise human evaluation on Amazon Mechanical Turk with 100 image pairs randomly sampled from the test set; each sample was assigned to 5 workers to reduce human variance. Following~\\citet{wang2018arel}, for each image pair, workers are presented with two paragraphs from different models and asked to choose the better one based on text quality\\footnote{We instruct the annotators to consider two perspectives: relevance (the text describes the content of the two images) and expressiveness (the text is grammatically and semantically correct).}. As shown in \\cref{tab:human}, \\textsc{L2C} outperforms \\textsc{CNN+LSTM}, which is consistent with the automatic metrics.\n\n\n\\subsection{Ablation Studies} \n\n\\paragraph{Effect of Individual Components}\nWe perform ablation studies to show the effectiveness of semantic pooling, the total variation loss, and graph reasoning, as shown in \\cref{tab:ablation}.\nFirst, without semantic pooling, the model degrades to average pooling, and the results show that semantic pooling better preserves the spatial relations in the visual representations. \nMoreover, the total variation loss can further boost the performance by injecting the local smoothness prior.\nFinally, the results without the GCN are lower than those of the full L2C model, indicating that graph convolutions can efficiently model relations among visual regions.\n\n\\begin{table}[t]\n\\small\n\\centering\n\\setlength{\\tabcolsep}{2pt}\n\\begin{tabular}{l rrr}\n\\toprule\n & \\multicolumn{3}{c}{\\textbf{Validation}}\\\\\n\\cmidrule(lr){2-4}\nModel & BLEU-4 $\\uparrow$ & ROUGE-L $\\uparrow$ & CIDEr-D $\\uparrow$ \\\\\n\\toprule\nL2C & \\textbf{31.9} & \\textbf{45.7} & \\textbf{15.2} \\\\\n\\midrule \n$-$ Semantic Pooling & 24.5 & 43.2 & 7.2 \\\\\n$-$ TV Loss & 29.3 & 44.8 & 13.6 \\\\\n$-$ GCN & 30.2 & 43.5 & 10.7 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Ablation study on the B2W dataset. We individually remove Semantic Pooling, the total variation (TV) loss, and the GCN to test their effects.\n}\n\\label{tab:ablation}\n\\end{table}\n\n\\begin{figure}[t]\n    \\centering\n    \\includegraphics[width=.8\\linewidth]{images\/robust.pdf}\n    \\caption{Sensitivity test on the number of classes $K$.}\n    \\label{fig:robust}\n\\end{figure}\n\n\\paragraph{Sensitivity Test}\nWe analyze the model performance for varying values of $K$ (the number of classes of the confidence map $\\mathcal{C}$), as shown in \\cref{fig:robust}. Empirically, we found the results to be comparable when $K$ is small. \n\n\n\\section{Conclusion}\nIn this paper, we present a learning-to-compare framework for generating visual comparisons. \nOur segmentation encoder with semantic pooling and graph reasoning constructs structured image representations. \nWe also show that learning to describe visual differences benefits from understanding the semantics of each image.\n\n\\section*{Acknowledgments}\nThe research was partly sponsored by the U.S. Army Research Office and was accomplished under Contract Number W911NF19-D-0001 for the Institute for Collaborative Biotechnologies. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. 
Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction \\label{intro}}\nWR124 ($\\equiv$BAC209) is a Galactic massive star characterized by a very high heliocentric recession velocity of $\\sim$175 km~s$^{-1}$ \\citep{1982A&A...116...54S}, and it is regarded to be among the fastest moving massive stars in the Galaxy \\citep{1982A&A...114..135M}. It was classified by \\citet{1938PASP...50..350M} as a nitrogen-sequence Wolf-Rayet star (WN) and later as Population I WN8 star \\citep{1969ApJ...157.1245S}. \\\\ \n \nWR stars are thought to be a late stage in the evolution of stars more massive than 25~$M_{\\mathrm{\\sun}}$ and they are characterized by significant stellar winds with high mass-loss rate and terminal velocity. Many WRs are surrounded by nebular emission, some of which are members of a class of object called \\textit{ring nebulae}. The structure of this type of nebula is attributed to a continual process of mass loss from the exciting WR star, which sweeps the surrounding interestellar gas into a shell \\citep{1965ApJ...142.1033J}. The study of nebulae around WRs gives us clues to the mass-loss history of massive stars, as well as to the chemical enrichment of the interstellar medium (ISM).\\\\\n\n\\begin{table*}[t]\n\t\t \\caption{Main physical parameters of WR\\,124 and M1-67.} \n\t\t\\label{table:parameter} \n\t\t\\centering \n \t\\begin{tabular}{ l l l l l}\t\\hline\n\t\t\t\t\\\\\n Object & Parameter & Value & Reference \\\\ \\hline \\hline \\\\\n\t\t\tWR\\,124 & ($\\alpha$,$\\delta$) (J2000) & (19:11:30.88, +16:51:38.16) & \\citet{1997ESASP1200.....P} \\\\ \t\n\t\t\t& Spectral type & WN 8 & \\citet{1969ApJ...157.1245S} \\\\\n\t\t & $v_{\\mathrm{\\infty}}$ (km~s$^{-1}$) & 710 & \\citet{2001NewAR..45..135V} \\\\\n\t\t & $T_{\\mathrm{eff}}$ (kK) & 44.7 & Hamann et al. (2006) \\\\\n\t\t\t& Distance (kpc )& 4-5 & Crawford \\& Barlow (1991) \\\\\n\t\t\t& $R_{\\mathrm{G}} $ (kpc )& 8-10& Esteban et al. (1992) \\\\\n\t\t\t&\t $v_{\\mathrm{hel}}$ (km~s$^{-1}$) & 175 & Solf \\& Carsenty (1982) \\\\\n \t\t& $M_{\\mathrm{v}} $ (mag) & -7.22 & Hamann et al. (2006) \\\\\n & $E_{\\mathrm{b-v}} $ (mag) & 1.08 & Hamann et al. (2006) \\\\ \t \t\t\n \\\\ \n\t\t\tM1-67 & H${\\alpha}$ diameter (arcsec) & 110-120 & \\citet{1998ApJ...506L.127G} \\\\\n\t\t\t&\t $v_{\\mathrm{hel}}$ (km~s$^{-1}$) & 150-185 & \\citet{1981ApJ...249..586C} \\\\\n\t\t\t&\t $v_{\\mathrm{exp}}$ (km~s$^{-1}$) & 46 & Sirianni et al. (1998) \\\\\n\t\t\t&$ M_{\\mathrm{ionized}}$ ($M_{\\mathrm{\\sun}}$) & 1.73 & \\citet{1998ApJ...506L.127G} \\\\\n\t\t\t\t\\hline\t\t\\\\ \n\t\t\t\\end{tabular}\n\t\t\\end{table*}\n\nM1-67 \\citep[$\\equiv$Sh2-80,][]{1959ApJS....4..257S} is a bright nebula surrounding WR124, and it shows a clumpy and irregular distribution of gas that is mostly condensed in bright knots and filaments. It was first reported by \\citet{1946PASP...58..305M} during an H${\\alpha}$ objective prism survey. Classification of the nebula and its distance have been subjects of debate in the past years. Although it was first considered an H{\\sc ii} region \\citep{1959ApJS....4..257S}, the classification of M1-67 has been alternating between a planetary nebula (PN) and a ring nebula. \\citet{1964PASP...76..241B} adopted a distance of 0.9~kpc and suggested that M1-67 might be a PN since both star and nebula have high radial velocity. 
Studies from optical, infrared, and radio data by \\citet{1975ApL....16..165C} prompted its classification as ring nebula around a WR, with a distance of 4.33~kpc (in agreement with \\citet{1979RMxAA...4..271P} estimations). Nevertheless, \\citet{1985A&A...145L..13V} supported the PN status based on the energy distribution in the far infrared. The issue was settled by \\citet{1991A&A...244..205E} and \\citet{1991A&A...249..518C}. The detailed abundance analysis of the nebula by \\citet{1991A&A...244..205E} revealed nitrogen enhancement and oxygen deficiency, which is typical of material ejected in a previous evolutionary phase, and pointed to a progenitor more massive than those usually associated with PN central stars. \\citet{1991A&A...249..518C} estimated a distance between 4 kpc and 5 kpc using the interstellar Na{\\sc i}D$_{2}$ absorption spectrum of the star, ruling out the PN nature. Recently, \\citet{2010ApJ...724L..90M} have used a comprehensive model of the nebular expansion to estimate a distance of 3.35~kpc. Currently, M1-67 is classified as an ejected type WR ring-nebula.\n\nAlthough M1-67 shows an apparent spherical symmetry, ground-based coronographic images revealed a bipolar structure \\citep{1995IAUS..163...78N}. The emission lines seems to be caused by condensations of gas in clumps and radial filaments \\citep{1998ApJ...506L.127G}. One of the most striking characteristics of the nebula is the virtual absence of optical oxygen emission lines \\citep{1978ApJ...219..914B,1991A&A...244..205E}. Nevertheless, \\citet{1981ApJ...249..586C} reported a bright spot of [O{\\sc iii}]$\\lambda$5007\\AA{} 15\\arcsec\\, to the NE of the central star. Spectroscopic investigations of the physical conditions and abundances of the nebular shell have shown that the ionized gas is nitrogen-enriched and oxygen-depleted, suggesting that O has been processed into N mainly via the ON cycle \\citep{1991A&A...244..205E}. This implies that M1-67 is almost completely composed of stellar material that is poorly mixed with the surrounding ISM. The long-slit spectroscopy of M1-67 established that the bulk of the nebula is expanding at $v_{\\mathrm{exp}}$=42-46~km~s$^{-1}$ \\citep{1982A&A...116...54S,1998A&A...335.1029S} and \\citet{1981ApJ...249..586C} confirmed the high heliocentric velocity of the nebula $v_{\\mathrm{hel}}$=150-185~km~s$^{-1}$, which is comparable to the velocity of the star. The main parameters of the central star, WR124, and the nebula, M1-67, are summarized in Table \\ref{table:parameter}. \n\nMany studies have tried to disentangle the geometry and dynamics of M1-67 and its interaction with WR124. \\citet{1982A&A...116...54S} proposed a simple expanding \\textquotedblleft empty\\textquotedblright shell with condensation of stellar material that was ejected by the high-velocity parent star; indeed, the leading edge of the shell is considerably brighter than the trailing part. \\citet{1998A&A...335.1029S} found two components in the environment of the central star and interpreted them as the consequence of two different events in the past: a spherical hollow shell of 92\\arcsec\\, in diameter expanding at 46~km~s$^{-1}$ and a bipolar outflow with a semi-dimension of 48\\arcsec\\, and a velocity of 88~km~s$^{-1}$ with respect to the expansion centre. On the other hand, some authors explained the asymmetry as the result of a possible low-mass companion for WR124 \\citep{1981ApJ...249..586C,1982A&A...114..135M}. 
\\\\\n\nDespite the important findings of recent years, some relevant aspects of the evolution and formation of the ring nebula associated with WR124 remain unknown. In particular, a 2D study of the ionization structure of the nebula covering all the morphologies and\/or the structural components can shed light on the formation process of the nebula from the ejecta of the central star. The late spectral type of the ionizing WR star (WN8) also makes it a remarkable object to study, as does the degree of homogeneity in the chemical composition of its ejecta. \\\\\n\nTo do this, we included M1-67 in our programme of integral field spectroscopy (IFS) observations to compare the 2D structure with the integrated properties of certain selected areas and with models of WR evolution. The paper is organized as follows. First, we describe the observations and data reduction in Sect. \\ref{obsandred}. Then, we present the 2D results for morphology, ionization structure, and kinematics in Sect. \\ref{2d}. In Sect. \\ref{1d} we show the physical conditions and chemical abundances of eight selected areas. We perform a study of M1-67 in the mid-infrared range by analysing the IRS spectrum and the 24$\\,\\mu$m MIPS image from Spitzer in Sect. \\ref{ir}. In Sect. \\ref{discussion} we discuss the chemical composition of M1-67, the observed structure, and its relation with the evolution of the central WR star. Finally, a summary of the main conclusions is given in Sect. \\ref{conclusions}.\n\n\n\n\\section{Observation and data reduction \\label{obsandred}} \n\\begin{figure*}\n\\centering\n\\includegraphics[width=14cm]{m167_INT.pdf}\n\\caption{Narrow-band image of M1-67 in H${\\alpha}$+continuum taken with the Wide Field Camera at the Isaac Newton Telescope. North is up and east left. Red hexagons show the two zones of our IFS observations: \\emph{Edge} to the NE (left) and \\emph{Centre} to the SW (right).}\n\\label{fig:rgb}\n\\end{figure*}\n\nThe observations were carried out on July 5, 2005 using the Potsdam Multi-Aperture Spectrograph instrument (PMAS) \\citep{2005PASP..117..620R} in PPAK mode (PMAS fibre Package, \\citealt{2006PASP..118..129K}) at the 3.5~m telescope of the Centro Astron\\'omico Hispano Alem\\'an (CAHA) at the observatory of Calar Alto (Almer\\'ia, Spain). \nThe PPAK fibre bundle consists of 382 fibres with a diameter of 2.7 arcsec. The 331 science fibres are concentrated in a hexagonal bundle covering a field of view (FoV) of $74\\arcsec \\times 65\\arcsec$. The surrounding sky is sampled by 36 fibres distributed in six bundles located following a circle at about 90\\arcsec \\,from the centre. There are 15 fibres for calibration purposes too \\citep[see Fig. 5 in][]{2006PASP..118..129K}. We used the V300 grating, covering the spectral range from 3660 to 7040 \\AA{} with a dispersion of 1.67~\\AA{}\/pix, giving a spectral resolution of FWHM$\\sim$8.7 \\AA{} (R = $\\lambda\/\\Delta\\lambda\\sim$ 660) at 5577\\AA{}. The weather was photometric throughout the observations, with typical subarcsecond seeing. \\\\\n\nTo choose the regions of M1-67 to be mapped, we resorted to the narrow-band images observed by our group at the Isaac Newton Telescope (INT) with the Wide Field Camera (WFC). The first PPAK pointing (called \\emph{Centre}) was centred on the WR star and covers almost the whole nebula. The second zone (called \\emph{Edge}) was selected to study the NE edge of the object, containing nebular emission and surrounding medium. Both regions can be seen in Fig. \\ref{fig:rgb}. 
Table \\ref{table:log} shows the observational log for M1-67.\n\nBias frames, continuum, arcs, and one spectrophotometric standard star (Hz\\,44) were also acquired during the observations.\\\\\n\n\\begin{table*}\n\\caption{M1-67 PPAK observational log.} \n\\label{table:log} \n\\centering \n\\begin{tabular}{l c c c c c c }\n\\hline\nZone & Coordinates (J2000) &Grating & Spectral range & Exp. time & Airmass & Date \\\\\n& ($\\alpha$,$\\delta$) & & ( \\AA{} ) & (s) & &\\\\\n\\hline\\hline\n\\\\\nCentre & (19:11:30.9 , +16:51:39.2) & V300 & 3640-7040 & 3 $\\times$ 30 & 1.08 & July, 5, 2005 \\\\\nEdge & (19:12:14.8 , +16:52:12.9) & V300 & 3640-7040 & 3 $\\times$ 450 & 1.07 &July, 5, 2005 \\\\\n\\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\nThe data were reduced using the R3D software \\citep{2006AN....327..850S} in combination with IRAF\\footnote{The Image Reduction and Analysis Facility IRAF is distributed by the National Optical Astronomy Observatories, which are operated by Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. Website: http:\/\/iraf.noao.edu\/.} and the Euro3D packages \\citep{2004AN....325..167S}. The reduction consisted of the standard steps for fibre-fed IFS observations.\n\nAt first, a master bias frame was created and subtracted from all the images. The different exposures taken at the same position on the sky were combined to reject cosmic rays using IRAF routines. A trace mask was generated from a continuum-illuminated exposure, identifying the location of each spectrum on the detector along the dispersion axis. Then, each spectrum was extracted from the science and standard star frames, co-adding the flux within an aperture of five pixels at the location of the spectral peak in the raw data using the tracing information, and storing it in a 2D image called row-stacked spectrum (RSS) \\citep{2004AN....325..167S}. We checked that the contamination from flux coming from adjacent fibres using this aperture was negligible \\citep{2004PASP..116..565B,2006AN....327..850S}. For a given aperture and $FWHM\\sim(0.5\\times~aperture)$, we found a level of cross-talk that was always $<$10$\\%$. This seems to be an acceptable compromise between maximizing the recovered flux and minimizing the cross-talk. \n\nDistortion and dispersion solutions were obtained using a He calibration-lamp exposure and applied to the science data to perform the wavelength calibration. The accuracy achieved was better than $\\sim$0.1~\\AA{} (rms) for the arc exposures. Corrections to minimize the differences between fibre-to-fibre transmission throughput were also applied, creating a fibre flat from the exposure of a continuum source. Observations of the spectrophotometric standard star Hz\\,44 were used to perform the flux calibration. \n\nThe sky emission was determined using the science data spectra obtained throughout the additional fibres for sampling the sky. As explained above, the second pointing was made at the edge of M1-67, and some of its sky bundles are located within an area containing signals from the nebula. We inspected the 36 sky-fibres of each pointing, selecting those that did not show nebular emission. The spectra of all the selected fibres were combined with a mean in a single spectrum, and a 2D spectrum was created by copying the combined spectrum in each fibre. 
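\n\nThis sky-combination step can be sketched as follows, assuming the row-stacked spectra are held in a NumPy array; the actual reduction was performed with R3D and IRAF, so the snippet is only illustrative.\n\\begin{verbatim}\nimport numpy as np\n\ndef build_sky_frame(rss, sky_fibres):\n    # rss: (n_fibres, n_pix) row-stacked spectra of one pointing\n    # sky_fibres: indices of sky fibres free of nebular emission\n    sky_mean = rss[sky_fibres].mean(axis=0)      # combined sky spectrum\n    return np.tile(sky_mean, (rss.shape[0], 1))  # replicated 2D frame\n\n# usage (illustrative): sky = build_sky_frame(science_rss, good_sky)\n\\end{verbatim}\n\n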
These sky spectra were then subtracted from every science spectrum, each pointing with its own sky.\n\nFinally, considering the wavelength range and the airmass of the observations, and using \\citet{1982PASP...94..715F}, we estimated that offsets due to the differential atmospheric refraction (DAR) were always smaller than one third of the fibre diameter. Correction for DAR was not necessary in our data.\n\n\n\n\\section{Two-dimensional analysis \\label{2d}} \nTo perform a detailed analysis of the 2D structure of the nebula, we built easy-to-use datacubes of the two reduced pointings with three dimensions: two spatial and one spectral. We studied all the interpolation routines included in the E3D software and verified which one conserves the flux and the apparent morphology observed in the spaxels. Finally, we generated our cubes with the linear Delaunay interpolation and a pixel size of 1.5$\\times$1.5~arcsec$^{2}$.\\\\\n\nIn a first attempt to understand the morphology of the two observed zones, we extracted images by sliding the cubes at different wavelength ranges. They are presented in Fig. \\ref{fig:morphology_all} on a logarithmic scale.\n\n\\begin{figure}\n\\includegraphics[width=9cm]{morph_O3.pdf}\n\\includegraphics[width=9cm]{morph_ha.pdf}\n\\includegraphics[width=9cm]{morph_S2.pdf}\n\\caption{Interpolated images of M1-67 of the two observed regions: in the left column the edge pointing and in the right the central one. In each row we represent the flux (including continuum) integrated in a wavelength range. Top: range 5006\\AA{}-5014\\AA{} including [O{\\sc iii}]$\\lambda$5007\\AA{}. Middle: range 6562\\AA{}-6590\\AA{} including H${\\alpha}$ and [N{\\sc ii}]$\\lambda$6584\\AA{}. Bottom: range 6729\\AA{}-6737\\AA{} including [S{\\sc ii}]$\\lambda \\lambda$6717,6731$\\AA{}$. \nAll the maps are represented on logarithmic scales with units of $\\log$(erg~cm$^{-2}$~s$^{-1}$). The size of the hexagon side is 38\\arcsec. In all the maps, north is up and east to the left (see Fig. \\ref{fig:rgb}).}\n\\label{fig:morphology_all}\n\\end{figure}\n\nIn the 5006\\AA{}-5014\\AA{} range, which includes the [O{\\sc iii}]$\\lambda$5007\\AA{} line, no significant extended emission can be observed in either region, supporting previous studies that revealed no oxygen emission in M1-67. Several spots appear in the FoV (including the central WR124 star) with fluxes lower than\n$\\sim10^{-17}$~erg~cm$^{-2}$~s$^{-1}$, but we checked that their nature was not nebular. They are probably stars in our line of sight. We gave special attention to the spot described by \\citet{1981ApJ...249..586C} at 15\\arcsec\\ to the NE of the star. Although we can observe some emission, we cannot confirm that it comes from the nebula. A more detailed analysis of this spot is performed in Sect. \\ref{1d} by means of the integrated spectrum. \n\nAs for other lines from the central pointing maps (H${\\alpha}$, [N{\\sc ii}], and [S{\\sc ii}]), most of the emission seems to be concentrated in at least five knots distributed in the NE-SW direction without counterpart in the [O{\\sc iii}]$\\lambda$5007\\AA{} image. In addition, two regions with very faint surface brightness (or even no emission) can be seen at the opposite sides (NW and SE). This orientation agrees with the bipolar structure observed by \\citet{1995IAUS..163...78N} and \\citet{1998A&A...335.1029S} with coronographic studies. H${\\alpha}$ and [S{\\sc ii}] emission shows a discontinuity in the edge pointing of the nebula with higher surface brightness in its SW area. 
When we move to the NE, the emission decreases until it disappears. The purple coloured area reaches non-negligible emission up to H${\\alpha}\\sim10^{-16}$~erg~cm$^{-2}$~s$^{-1}$ per pixel (1 pixel = 2.25~arcsec$^2$).\\\\\n\n\n\n\n\\subsection{2D study of the emission-line maps \\label{maps}}\nMaps were created from the cubes by fitting the emission lines in each spatial element following the methodology presented in \\citet{2012A&A...541A.119F}. Basically, we performed a Gaussian fit to the emission lines using our own routine, which returns maps of the flux, centre, and FWHM among other properties. To prevent contamination by low signal-to-noise (S\/N) data, we masked out all pixels with S\/N lower than 5. During the creation of the S\/N masks, we visually inspected all the pixels, rejecting interactively those with non-nebular spectra or with contamination from the central WR and other field stars.\n\nFor both regions (centre and edge pointings), maps of parameters from the Gaussian fitting were generated for seven emission lines: H${\\gamma}$, H${\\beta}$, H${\\alpha}$, [N{\\sc ii}]$\\lambda \\lambda$6548,6584\\AA{}, and [S{\\sc ii}]$\\lambda \\lambda$6717,6731$\\AA{}$. Although our spectral range includes the [O{\\sc ii}]$\\lambda \\lambda$3726,3728\\AA{} lines, their automatic fit was not considered because these lines are faint and placed at the edge of the CCD, where the distortion correction bent and deformed them.\n\nAll the emission line maps were reddening-corrected using the reddening coefficient, c(H${\\beta}$), map (each pointing was corrected with its own c(H${\\beta}$) map). To determine this coefficient we resorted to the H${\\alpha}$\/H${\\beta}$ line ratio. We analysed the maps of the three Balmer lines detected in our wavelength spectral range (H${\\gamma}$, H${\\beta}$, and H${\\alpha}$) and decided to discard the H${\\gamma}$ flux because the S\/N was lower than 5 in $\\sim$20$\\%$ of the spaxels. However, we checked that both derivations were consistent in the spaxels with good S\/N. We used an intrinsic Balmer emission line ratio of H${\\alpha}$\/H${\\beta}$ =3.03 obtained from the public software of \\citet{1995MNRAS.272...41S}, assuming Case B recombination with an electron density of $n_{\\mathrm{e}}\\sim1000$ cm$^{-3}$ \\citep{1991A&A...244..205E} and an electron temperature of $T_{\\mathrm{e}}\\sim 7000$ K (the mean value between the estimations of \\citealt{1978ApJ...219..914B} ($\\sim$7500 K) and \\citealt{1991A&A...244..205E} ($\\sim$6000 K)). Statistical frequency distributions of the reddening coefficient were also created for the two maps, taking the mean error ($\\sim$0.1) as the bin size. Figure \\ref{fig:reddening} shows the spatial distribution of the two derived c(H${\\beta}$) maps and their corresponding histograms.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{chbeta_paper.pdf}\n\\caption{Spatial structure of the derived c(H${\\beta}$) maps and their corresponding statistical frequency distributions with a binning of 0.1. On the left the edge pointing and on the right the central one. Orientation and sizes of maps are as in Fig. \\ref{fig:morphology_all}.}\n\\label{fig:reddening}\n\\end{figure}\n\nThe structure of the reddening map of the central region is mostly uniform with values ranging from 1.3 to 2.5 and a mean value of $\\sim$1.85$\\pm$0.10; the histogram reveals that the most probable value for c(H${\\beta}$) in this zone is 1.90. Single pixels with very high or very low values have a large error in the coefficient estimations. 
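\n\nFor reference, the Balmer-decrement derivation of c(H${\\beta}$) described above can be sketched as follows; the value adopted for the reddening curve at H${\\alpha}$ relative to H${\\beta}$, $f(\\mathrm{H}\\alpha)=-0.32$, is an assumption that depends on the chosen parametrisation of the \\citet{1989ApJ...345..245C} law.\n\\begin{verbatim}\nimport numpy as np\n\ndef c_hbeta(f_ha, f_hb, intrinsic=3.03, f_halpha=-0.32):\n    # f_ha, f_hb: observed (uncorrected) Halpha and Hbeta fluxes\n    return np.log10((f_ha \/ f_hb) \/ intrinsic) \/ (-f_halpha)\n\n# placeholder fluxes, for illustration only\nf_ha = np.array([12.0e-16, 9.8e-16])\nf_hb = np.array([1.5e-16, 1.3e-16])\nc_map = c_hbeta(f_ha, f_hb)\n\\end{verbatim}\n\n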
Big holes in the map correspond to the masked pixels.\n\nThe derived reddening coefficient map of the edge pointing has a less homogeneous structure, with a c(H${\\beta}$) mean value of $\\sim$2.11$\\pm$0.08 over the 1.7-2.8 range. It is interesting to notice that all the pixels with c(H${\\beta}$)$>$2.5 are placed in the NW area, where the discontinuity of the H${\\alpha}$ image is observed (Fig. \\ref{fig:morphology_all}). In this region, pixels with c(H${\\beta}$)$>$2.5 were inspected individually; after checking the S\/N of the Balmer lines and the c(H${\\beta}$) errors we decided not to mask them and to pay special attention to the rest of properties derived there. We study this region in detail in the 1D analysis (Sect. \\ref {1d}). The statistical frequency distribution of the reddening coefficient for the edge pointing gives 2.0 as the most probable value. If we exclude values higher than 2.5, the distribution can be fitted by a Gaussian function. \n\nTo compare our results with the literature, we estimated the extinction as $A_{\\mathrm{v}}=2.145 \\times c(H{\\beta})$, using the \\citet{1989ApJ...345..245C} extinction law with $R_{\\mathrm{v}}=3.1$ and the colour excess as $E(B-V)=0.692 \\times c(H{\\beta})$. The mean values derived were $A_{\\mathrm{v}}$=3.9 and E(B-V)=1.3 for the central pointing, and $A_{\\mathrm{v}}$=4.5 and E(B-V)=1.5 for the edge. The reddening coefficients derived from our data are higher than those estimated by \\citet{1991A&A...244..205E} ($\\sim$1.35), whereas the $A_{\\mathrm{v}}\\sim3.8$ obtained by \\citet{1981ApJ...249..586C} and $E(B-V)\\sim1.35$ from \\citet{1975ApL....16..165C} agree with our values.\\\\\n\nThe electron density (n$_{\\mathrm{e}}$) maps were produced from the [S{\\sc ii}]$\\lambda\\lambda$6717\/6731 ratios using the IRAF package TEMDEN based on a five-level statistical equilibrium model \\citep{1987JRASC..81..195D,1995PASP..107..896S}. Using these maps, we created statistical frequency distribution of the electron density with a binning of 100~cm$^{-3}$ (low density limit). They are shown with the derived n$_{\\mathrm{e}}$ maps in Fig. \\ref{fig:density}. In general terms, the values of n$_{\\mathrm{e}}$ presented in these maps are in good agreement with values reported in the literature. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{density_paper.pdf}\n\\caption{Electron density, n$_{\\mathrm{e}}$, maps derived from the [S{\\sc ii}]$\\lambda\\lambda$6717\/6731 line ratios in units of cm$^{-3}$. Orientations and sizes as in Fig. \\ref{fig:morphology_all}. On the bottom the statistical frequency distributions with a binning of 100~cm$^{-3}$. The edge pointing is on the left, and the central on the right. The white lines across the maps (from NE to SW) represent the direction along which the cuts were extracted to study the radial variation of n$_{\\mathrm{e}}$. See text for details.}\n\\label{fig:density}\n\\end{figure}\n\nThe histogram for the central pointing shows elements distributed in a wide range of density from $\\sim$200 to $\\sim$3000~cm$^{-3}$ with 1000~cm$^{-3}$ as the most probable n$_{\\mathrm{e}}$. The mean value of the distribution is 1008~cm$^{-3}$. Some isolated pixels appear very intense in the image with densities as high as 3000~cm$^{-3}$, but with large errors. The density distribution follows the low-ionization emission elements supporting the idea of a bipolar structure. 
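\n\nAs an aside, the same [S{\\sc ii}] diagnostic can be evaluated outside IRAF, for instance with the PyNeb package; the call below is only a sketch that assumes PyNeb's documented Atom.getTemDen interface.\n\\begin{verbatim}\nimport pyneb as pn\n\ns2 = pn.Atom('S', 2)                  # five-level S+ model atom\nratio = 1.05                          # observed I(6717) \/ I(6731)\nne = s2.getTemDen(ratio, tem=7000.,   # T_e ~ 7000 K, as adopted above\n                  wave1=6716, wave2=6731)\nprint(ne)                             # electron density in cm^-3\n\\end{verbatim}\n\n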
It is interesting to notice that the knots with the highest surface brightness correspond to the densest zones.\n\nThe histogram for the edge pointing shows a distribution close to a Gaussian centred on 500~cm$^{-3}$. The map ranges from 100~cm$^{-3}$ to 1000~cm$^{-3}$ with a mean value of 507~cm$^{-3}$. As happens in the other region, some pixels (7) show higher densities (up to 1000~cm$^{-3}$), and we removed them from our estimations. The majority of the pixels with a high reddening coefficient (c(H${\\beta}$)$>$2.5) were rejected by the S\/N mask for the sulphur line, but the unmasked pixels present a mean density of 613~cm$^{-3}$.\n\nThe morphological analyses showed that the bright knots are aligned in a preferred axis along the NE-SW direction with a bipolar structure (see Fig. \\ref{fig:morphology_all}); to check that the electron density is related to the bipolarity, we performed a cut in the density maps along this direction (see cuts in Fig. \\ref{fig:density}); the density profiles obtained are presented in Fig. \\ref{fig:dens_rad}. For them, we performed four fits using the least-squares method: the first from the star towards the SW, the second from the star towards the NE including pixels from the two pointings (centre and edge), and the last two fits from the star towards the NE, but differentiating the two pointings (see Fig. \\ref{fig:dens_rad}). It can be seen that the density decreases when we move away from the WR star. In addition, the fits show a symmetric gradient in the central points with a tendency to flatten out towards the ends.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{neVSr.pdf}\n\\caption{Radial variation of the electron density (in cm$^{-3}$) with distance (in pc) along the direction of bipolarity (from NE to SW). We consider negative radius from the star towards the NE and positive from the star towards the SW. Lines indicate least-squares fits: solid lines correspond to pixels differentiating the two pointings, and the dashed line represents the fit along the direction star-NE including pixels from both pointings.}\n\\label{fig:dens_rad}\n\\end{figure}\n\n\n\n\\subsection{Emission line relations \\label{diagdiag}}\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{ratio_SH.pdf}\n\\includegraphics[width=9cm]{ratio_NS.pdf}\n\\includegraphics[width=9cm]{ratio_NH.pdf}\n\\caption{Derived maps of the emission line ratios of the two pointings: edge (left) and centre (right). Top: [S{\\sc ii}]$\\lambda \\lambda$6717,6731\/H${\\alpha}$. Middle: [N{\\sc ii}]$\\lambda $6584\/[S{\\sc ii}]$\\lambda \\lambda$6717,6731. Bottom: [N{\\sc ii}]$\\lambda$6584\/H${\\alpha}$. Orientations and sizes as in Fig. \\ref{fig:morphology_all}.}\n\\label{fig:ratio_maps}\n\\end{figure}\n\nFigure \\ref{fig:ratio_maps} shows maps of the emission line ratios for the two pointings. Their mean values are summarized in Table \\ref{table:line_ratios}. All the intensities presented in the table and figure are reddening-corrected. \n\nIn both regions the [S{\\sc ii}]$\\lambda\\lambda$6717,6731\/H${\\alpha}$ map presents an inhomogeneous and patchy structure. [S{\\sc ii}] lines are fainter than H${\\alpha}$ in all the spaxels, with a maximum logarithmic ratio of -1.3 to the north of the edge pointing. Some isolated pixels show higher values, but they lie at the limits of the masked region and are therefore unreliable. 
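\n\nThe line-ratio maps discussed in this section can be reproduced from the reddening-corrected flux maps with a simple masking scheme, as in the sketch below; the array names are placeholders and the snippet is only illustrative.\n\\begin{verbatim}\nimport numpy as np\n\ndef ratio_map(num, den, snr_num, snr_den, snr_min=5.0):\n    # keep only spaxels where both lines reach the S\/N threshold\n    good = (snr_num >= snr_min) & (snr_den >= snr_min)\n    out = np.full(num.shape, np.nan)\n    out[good] = np.log10(num[good] \/ den[good])\n    return out\n\n# e.g. log([S II]6717+6731 \/ Halpha):\n# s2_ha = ratio_map(f6717 + f6731, f_halpha, snr_s2, snr_ha)\n\\end{verbatim}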
\n\nThe distribution of the [N{\\sc ii}]$\\lambda $6584\/[S{\\sc ii}]$\\lambda \\lambda$6717,6731 map presents a structure opposite to [S{\\sc ii}]\/H${\\alpha}$ in both regions. The [N{\\sc ii}] emission is stronger than [S{\\sc ii}], reaching $\\log$([N{\\sc ii}]\/[S{\\sc ii}])=1.4 in areas close to the ISM.\n\nStudying the [N{\\sc ii}]$\\lambda$6584\/H${\\alpha}$ maps led to more interesting results. The central pointing shows positive values, except in some regions in the direction of the bipolarity, where [N{\\sc ii}] and H${\\alpha}$ fluxes are equal. In the edge pointing, regions with different ratios are clearly separated. In most of the pixels, [N{\\sc ii}]$\\ge$H${\\alpha}$ with an increasing ratio towards the side. To the north, an area can be seen where H${\\alpha}$ $>$[N{\\sc ii}]; this region possesses the highest derived c(H${\\beta}$), and it was masked in the sulphur maps because of its low S\/N ($<$5). The NW area has the highest ratio, possibly produced by the contamination of a nearby field star.\\\\\n\n\\begin{table}[!h]\n\\caption{Mean values of the emission line ratio maps}\n\\label{table:line_ratios} \n\\centering \n\\begin{tabular}{l c c}\n\\hline\nLine ratios & Edge & Centre \\\\\n\\hline\n\\hline \\\\\n$\\log$([S{\\sc ii}]$\\lambda\\lambda$6717,6731\/H${\\alpha}$)& -1.03 & -1.01 \\\\\n$\\log$([N{\\sc ii}]$\\lambda$6584\/[S{\\sc ii}]$\\lambda \\lambda$6717,6731)& 1.15 & 1.07 \\\\\n$\\log$([N{\\sc ii}]$\\lambda$6584\/H${\\alpha}$)& 0.07 & 0.06 \\\\\n\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nTo understand the differences in the [N{\\sc ii}]$\\lambda$6584\/H${\\alpha}$ ratio in the edge pointing, we complement the study by generating statistical frequency distributions of the ratio map and plotting all the spaxels from the emission line maps in the diagram [N{\\sc ii}]$\\lambda$6584 vs. H${\\alpha}$ (Fig. \\ref{fig:NHa}). The diagram [N{\\sc ii}]$\\lambda$6584 vs. H${\\alpha}$ shown in Fig. \\ref{fig:NHa}a presents a double behaviour. We considered two lines with unity slope as upper and lower limits. Points above the upper line are pixels where the [N{\\sc ii}] emission is stronger than H${\\alpha}$, while below the lower line they have the opposite behaviour. Points between these two lines are pixels with $\\log$(H${\\alpha}$)=$\\log$([N{\\sc ii}]) $\\pm$ 0.05. Then, we located the points of the diagram in the FoV of PPAK, taking these limits into account, to identify their spatial locations (see Fig. \\ref{fig:NHa}b); they appear grouped. The statistical frequency distribution of the [N{\\sc ii}]\/H${\\alpha}$ map shows a bimodal distribution, as we can see in Fig. \\ref{fig:NHa}c. When we identified the spaxels of the three regions defined above, we found that the left peak (centred at $\\sim$-0.3) includes all the points below the lower limit, and the right peak (centred at $\\sim$0.1) includes the points of the two other zones. We can conclude that at least two spatial regions exist in this pointing: one with [N{\\sc ii}]$\\geq$H${\\alpha}$ to the SW and another one to the north with [N{\\sc ii}]$<$H${\\alpha}$. All the pixels with c(H${\\beta}$)$>$2.5 are included in the second region, along with the spectra with very low S\/N of the sulphur lines.\\\\\n\nFor the central pointing, the same analysis shows that [N{\\sc ii}] follows H${\\alpha}$ for all the points along a one-to-one relation of unit slope. Relations between the other lines were also studied by means of two diagrams ([N{\\sc ii}]$\\lambda$6584 vs. 
[S{\\sc ii}]$\\lambda\\lambda$6717,6731 and [S{\\sc ii}]$\\lambda\\lambda$6717,6731 vs. H${\\alpha}$), showing strong correlations in both the pointings. The statistical frequency distribution of all the emission line ratios showed single peaks with distributions close to Gaussian functions, except [N{\\sc ii}]$\\lambda\\lambda$6584\/H${\\alpha}$ on the edge.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{NHa_paper.pdf}\n\\caption{Relations between [N{\\sc ii}] and H${\\alpha}$ for the edge pointing. Colours help us locate points spatially: red corresponds to points with $\\log$(H${\\alpha}$)=$\\log$([N{\\sc ii}]) $\\pm$ 0.05, blue to points with $\\log$(H${\\alpha}$)$>\\log$([N{\\sc ii}]), and green to points where $\\log$(H${\\alpha}$)$<\\log$([N{\\sc ii}]). From the top to the bottom: (a) $\\log$([N{\\sc ii}]$\\lambda$6584) vs. $\\log$(H${\\alpha}$). All the spaxels of the intensity maps (in units of $\\log$(erg~cm$^{-2}$~s$^{-1}$) are represented in the diagram with crosses. Black lines with unitary slope represent the limits. (b) PPAK FoV of the edge pointing with the zones defined in plot \\emph{a}. (c) Statistical frequency distributions of the $\\log$([N{\\sc ii}]\/H${\\alpha}$) map. Black solid line represents the distributions of all the spaxels, and coloured dashed lines represent the regions defined above. See text for details.}\n\\label{fig:NHa}\n\\end{figure}\n\n\n\n\n\\subsection{The radial velocity field \\label{kinematics}}\nLimitations on the instrument resolution prevented us from carrying out an exhaustive analysis of the kinematics of M1-67. Nevertheless, the resolution was sufficient for studying the distribution of the radial velocity field and relating it to the morphology and ionization structure.\n\nUsing the central wavelength of the Gaussian fit performed in the cubes, we created radial velocity maps for the two observed regions. Two corrections were carried out over the measured radial velocities. First, we estimated the error in the wavelength calibration by comparing the wavelength of a sky emission line with its theoretical value, we obtained a difference of -0.303~\\AA{} ($\\sim$-16~km~s$^{-1}$ for [OI]$\\lambda$5577\\AA{}), and this zero point was added to the measured velocities. Then, we translated maps into the local standard of rest (LSR) and corrected for the Earth's motions, taking coordinates and universal time of the observations into account.\n\nWith the corrected radial velocity fields of H${\\alpha}$, we scaled the measured velocities using the overlapping region of the two pointings to avoid deviations. Then, we calculated the total mean velocity, obtaining a value of 139~km~s$^{-1}$, and we established it as the heliocentric velocity of the nebula. This velocity is in very good agreement with the 137~km~s$^{-1}$ obtained by \\citet{1998A&A...335.1029S}. We present the relative radial velocity field of H${\\alpha}$ for the two regions mosaicked in Fig. \\ref{fig:ha_velocity}.\\\\\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=8cm]{vradial.pdf}\n\\caption{Relative radial velocity field derived for H${\\alpha}$ in units of km~s$^{-1}$. Zero is the global mean velocity (139~km~s$^{-1}$), see text for details. The two pointings are mosaicked. The red cross marks the position of the central star. North is up and east left (see Fig. 
\\ref{fig:rgb}).}\n\\label{fig:ha_velocity}\n\\end{figure}\n\nPrevious kinematic studies \\citep{1982A&A...116...54S,1998A&A...335.1029S} have found two components (one redshifted and another blueshifted) supporting the idea of a shell in expansion. With the low resolution of our data we cannot resolve both components, and the velocity field shown is dominated by the radial velocity of the brightest knots, a kind of intensity-weighted radial velocity distribution. Despite the low resolution, a study of the overall structure of both regions can be carried out. The gas of the nebula seems to move faster near the WR star, decreasing its relative velocity when moving away from the centre. The velocity field changes its tendency (increasing) in the \\textquotedblleft peculiar\\textquotedblright ~zone towards the north of the edge pointing, where other properties were also found to differ from the rest of the nebula.\n\n\nFigure \\ref{fig:hist_vr} shows the statistical frequency distributions of the radial velocity maps with a binning of 5~km~s$^{-1}$. To consolidate the differences found in the diagram [N{\\sc ii}]$\\lambda$6584 vs. H${\\alpha}$ for the edge pointing (Fig. \\ref{fig:NHa}), we represented pixels from the two regions separately. The region where [N{\\sc ii}]$\\geq$H${\\alpha}$ presents a Gaussian distribution that covers a wide range in velocity, suggesting that some regions are moving away from us and others towards us. The distribution of the regions where the H${\\alpha}$ emission is higher than [N{\\sc ii}] is narrower and it is centred near zero velocity.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{hist_vr.pdf}\n\\caption{Statistical frequency distributions of the radial velocity of H${\\alpha}$ relative to the heliocentric velocity, with a binning of 5~km~s$^{-1}$. The central region is on the top with all the spaxels represented. On the bottom the edge pointing: the black solid line represents all the pixels, the short-dashed red line is the region where [N{\\sc ii}]$\\geq$H${\\alpha}$, and the long-dashed blue line represents the regions with [N{\\sc ii}]$<$H${\\alpha}$.}\n\\label{fig:hist_vr}\n\\end{figure}\n\n\n\n\n\\section{Properties of the integrated spectra \\label{1d}} \n\nWe created 1D spectra by combining fibres to describe the integrated properties of several interesting zones. Eight integrated spectra were generated over the two pointings (see Fig. \\ref{fig:integratedregions}).\\\\\n\nThe regions selected over the central pointing are:\n\n-\\textit{Region 1} (R1): examining the emission of the low-ionization elements shown in Fig. \\ref{fig:morphology_all}, three bright knots appear to the south of the nebula. Eight fibres over these knots were selected and combined to create a single spectrum. The offset from central star is $\\Delta\\alpha\\sim$4.05$\\arcsec$, $\\Delta\\delta\\sim$13.5$\\arcsec$.\n\n-\\textit{Region 2} (R2): we combined three spaxels to the north of the star coinciding with another isolated knot. The offset from central star is $\\Delta\\alpha\\sim$1.35$\\arcsec$, $\\Delta\\delta\\sim$14.85$\\arcsec$.\n\n-\\textit{Region 3} (R3): we chose those fibres placed to the east of the star in a zone where an extended emission is seen in H${\\alpha}$, taking care not to include light from any star. 
The offset from central star is $\\Delta\\alpha\\sim$12.15$\\arcsec$, $\\Delta\\delta\\sim$4.05$\\arcsec$.\n\n-\\textit{Region 4} (R4): we were interested in analysing a large region in the NW masked in the 2D analysis where all the emission line maps showed S\/N lower than 5. The fourth integrated spectrum was created there to check that there is emission in this area. The offset from central star is $\\Delta\\alpha\\sim$14.85$\\arcsec$, $\\Delta\\delta\\sim$14.85$\\arcsec$.\\\\\n\nThe regions selected over the edge pointing are:\n\n-\\textit{Region 5} (R5): nine fibres were selected on the south of the edge pointing, close to the discontinuity. This spectrum belongs to the region showed in Fig. \\ref{fig:NHa} b where [N{\\sc ii}] is stronger than H${\\alpha}$. The offset from central star is $\\Delta\\alpha\\sim$31.05$\\arcsec$, $\\Delta\\delta\\sim$16.2$\\arcsec$.\n\n-\\textit{Region 6} (R6): we combined several spaxels at the SW limit of the FoV to check that [N{\\sc ii}]$\\sim$H${\\alpha}$ in this area, as we found in Sect. \\ref{diagdiag}. The offset from central star is $\\Delta\\alpha\\sim$14.85$\\arcsec$, $\\Delta\\delta\\sim$20.25$\\arcsec$.\n\n-\\textit{Region 7} (R7): in a faint region to the north of the edge pointing where interesting properties were obtained in the 2D analysis: some pixels show c(H${\\beta}$)$>$2.5, the S\/N of the sulphur lines is very low so they were masked in several maps, the [N{\\sc ii}]\/H${\\alpha}$ ratio has its minimum values, and the kinematic study revealed that here the radial velocity increases opposite to the general trend. Seven spaxels were combined in this region to analyse the properties in detail. The offset from central star is $\\Delta\\alpha\\sim$27$\\arcsec$, $\\Delta\\delta\\sim$40.5$\\arcsec$.\n\n-\\textit{Region 8} (R8): six fibres were selected on the left of the discontinuity to checked whether this region has nebular emission. The offset from central star is $\\Delta\\alpha\\sim$47.25$\\arcsec$, $\\Delta\\delta\\sim$52.65$\\arcsec$.\\\\\n\n \n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{integrated_regions.pdf}\n\\caption{H${\\alpha}$ images of the two areas of M1-67 observed with PPAK. Boxes represent the eight regions where the integrated spectra were generated. For the offsets of each region from the central star (green cross), see the text. Orientations and sizes are as in Fig. \\ref{fig:morphology_all}. Edge on the left and centre on the right.}\n\\label{fig:integratedregions}\n\\end{figure}\n\nIn addition, another three integrated spectra were extracted to perform several tests. From the central spaxel of the central pointing FoV, we obtained the spectrum of WR124 (\\textit{Region WR}). The other two were extracted at $\\sim$15\\arcsec\\ to the NE of the star (common region in both pointings) to study the zone where \\citet{1981ApJ...249..586C} found emission in [O{\\sc iii}]$\\lambda$5007\\AA{} (\\textit{Regions S1 and S2} in the central and edge pointings, respectively). Figure \\ref{fig:integratedspectra} shows six representative 1D spectra from the 11 created. \\\\\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\textwidth]{integrated_all.pdf}\n\\caption{Examples of integrated spectra. From left to right and top to bottom: (a) Whole spectrum of \\textit{Region 3}. (b) Spectrum of \\textit{Region 5} in the range of H${\\alpha}$. (c) \\textit{Region 7}, in same range as \\textit{b}, where the absence of sulphur lines can be seen. 
(d) Spectrum of \\textit{Region 5} centred on the [N{\\sc ii}]$\\lambda$5755\\AA{} emission line, used to calculate the electron temperature. (e) Zoom over the spectrum in \\textit{Region S2} without any emission in the [O{\\sc iii}]$\\lambda$5007\\AA{} emission line. (f) Whole spectrum of WR124 obtained from the central spaxel. }\n\\label{fig:integratedspectra}\n\\end{figure*}\n\n\nFluxes of the main emission lines were measured by fitting Gaussian functions using SPLOT within IRAF. All the measured fluxes are in units of erg~cm$^{-2}$~s$^{-1}$ per fibre (area of fibre $\\sim$5.7~arcsec$^2$). Statistical errors were estimated using the formula presented in \\citet{2003MNRAS.346..105P}:\n\\begin{equation}\n\\sigma_{\\mathrm{1}}=\\sigma_{\\mathrm{c}} N^{1\/2} [1+EW\/(N\\Delta)]^{1\/2}\n\\end{equation}\nwhere $\\sigma_{\\mathrm{1}}$ represents the error on the observed line flux, $N$ is the number of pixels used to measure the line, $EW$ the line equivalent width, $\\sigma_{\\mathrm{c}}$ the standard deviation in the continuum close to the line of interest, and $\\Delta$ represents the dispersion in \\AA{}\/pix.\\\\\n\nWe derived the reddening coefficient, c(H${\\beta}$), from the H${\\alpha}$\/H${\\beta}$ and H${\\gamma}$\/H${\\beta}$ line ratios using the procedure described in Sect. \\ref{maps}. In \\textit{Region 7}, H${\\gamma}$ was measured with low S\/N, and only the other two Balmer lines were used to estimate c(H${\\beta}$). The reddening coefficients agree with the values obtained in the 2D study. Table \\ref{table:all_lines} lists the reddening-corrected fluxes of the emission lines measured in every zone, labelled with their standard identification. The third column reports the adopted reddening curve, using the extinction law of \\citet{1989ApJ...345..245C} with $R_{\\mathrm{V}}=3.1$. Errors in the emission line intensities were derived by propagating the observational errors in the fluxes and the reddening constant uncertainties. The estimated fluxes and errors were normalized to $F(H{\\beta})=100$. The values obtained for c(H${\\beta}$) are also presented in the last row of Table \\ref{table:all_lines}. \\\\\n\nFive integrated spectra deserve special attention. First, the R4 spectrum, created in the NW dark area of the central pointing, only showed three emission lines: H${\\alpha}$, [N{\\sc ii}]$\\lambda$6548\\AA{}, and [N{\\sc ii}]$\\lambda$6584\\AA{}. We deduce that, in areas outside the bipolar structure, a faint, but not negligible, emission exists that comes from the nebular gas rather than from the ISM. We estimated the H${\\beta}$ flux by means of the reddening coefficient: assuming c(H${\\beta}$)=1.87 (the mean value of the other integrated spectra of this pointing), we performed the inverse process of the extinction correction and obtained F(H${\\beta}$)=3.21$\\times$10$^{-16}$~erg~cm$^{-2}$~s$^{-1}$. \n\nThe spectrum of R8 does not show any emission; thus, physical and chemical properties could not be estimated here. We did not include this region in the tables. We also extracted a spectrum of the WR star (\\textit{Region WR}), shown in Fig.~\\ref{fig:integratedspectra}f, but we do not perform a detailed analysis of it here. Finally, the study performed over \\textit{Regions S1 and S2} revealed typical nebular spectra (very similar to R4), but we did not find any emission of the [O{\\sc iii}]$\\lambda$5007\\AA{} line, as seen in Fig. 
\\ref{fig:integratedspectra}e.\n\n\n\n\n\\subsection{Physical properties and chemical abundances \\label{prop_and_ab}}\nElectron density (n$_{\\mathrm{e}}$) was calculated from the [S{\\sc ii}]$\\lambda\\lambda$6717\/6731 line ratio using the IRAF package TEMDEN. The derived density ranges from $\\sim$1500~cm$^{-3}$ near the star, to $\\sim$650~cm$^{-3}$ towards the edge. These values are consistent with our 2D maps and with previous studies \\citep{1991A&A...244..205E,1998A&A...335.1029S}. \\\\\n\nElectron temperature, T$_{\\mathrm{e}}$, can be derived using the line ratio R$_{\\mathrm{N2}}$: \n\\begin{equation}\nR_{\\mathrm{N2}}={I([\\mathrm{N \\textsc{ii}}]\\lambda 6548)+I([\\mathrm{N \\textsc{ii}}]\\lambda 6584) \\over I([\\mathrm{N \\textsc{ii}}]\\lambda 5755)}.\n\\end{equation}\nThe [N{\\sc ii}]$\\lambda$5755\\AA{} auroral line that appears close to the \\textquotedblleft sky\\textquotedblright\\, line Hg{\\sc i} 5770\\AA{}, was detected in two zones (R5 and R6). We measured this line again in the spectra before sky subtraction and conclude that the flux of [N{\\sc ii}]$\\lambda$5755\\AA{} line in R6 is contaminated by the Hg{\\sc i} emission line, thus not reliable. We obtained a direct estimate of T$_{\\mathrm{e}}$([N{\\sc ii}]) from R$_{\\mathrm{N2}}$ only for R5.\\\\\n\nTo reinforce the validity of the chemical abundances estimations and to provide ionization correction factors (ICFs) for those species whose ionizations stages were not all observed in the optical spectrum, we performed photoionization models of R5. To do so, we used the code CLOUDY v.10 \\citep{1998PASP..110..761F}, assuming a central ionizing source from a WR star atmosphere \\citep{2002MNRAS.337.1309S} with $Z = 0.008$ and an effective temperature of the star of 45~000 K which are, respectively, the closest values to the measured total metallicity of the gas and the estimated temperature of WR124 \\citep{2006A&A...457.1015H}. \n\nWe considered a spherical geometry putting the gas at a distance of 1~pc from the star and assumed a constant density of 700~cm$^{-3}$, a value similar to the one derived from [S{\\sc ii}] emission lines. The model that fits the emission line intensities of [O{\\sc ii}], [O{\\sc iii}], He{\\sc i}, [N{\\sc ii}], and [S{\\sc ii}] better in R5 was obtained by varying the ionization parameter (U) and the relative chemical abundances of He, O, N, and S. The emission lines from this model are listed in Table \\ref{table:all_lines}, while the derived physical properties and the ionic and total chemical abundances are listed in Table \\ref{table:paramyabun}. The ICFs obtained were ICF(N$^{+}$)=1.21 and ICF(S$^{+}$)=1.58. Regarding the resulting geometry, the final radius is 1.22~pc, which is of the same order of magnitude as the apparent size of the nebula in the images. \\\\\n\nTo estimate chemical abundances, electron density and electron temperature are required. We used T$_{\\mathrm{e}}$([N{\\sc ii}]) as temperature representative of the low ionization ions, S$^{+}$, N$^{+}$, and O$^{+}$, and T$_{e}$([O{\\sc iii}]) for deriving the O$^{2+}$ and He$^{+}$ abundances. In those zones where the electron temperature was not calculated, we adopted the value of R5. In previous studies, T$_{\\mathrm{e}}$([N{\\sc ii}]) ranges from 5900~K \\citep{1998A&A...335.1029S} to 8000~K \\citep{1981ApJ...249..586C}; maybe, the supposition of T$_{\\mathrm{e}}$=8200~K leads to our abundances being underestimated. 
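For readers who wish to reproduce the density and temperature diagnostics described at the beginning of this subsection outside IRAF, the following minimal sketch uses the PyNeb package as a stand-in for TEMDEN (this is an assumption for illustration only: the analysis in this paper was carried out with TEMDEN, and the line ratios in the sketch are illustrative numbers, not our measurements):
\\begin{verbatim}
# Sketch of the [S II] density and [N II] temperature diagnostics,
# using PyNeb instead of the IRAF task TEMDEN (assumed alternative).
import pyneb as pn

S2 = pn.Atom('S', 2)      # [S II]
N2 = pn.Atom('N', 2)      # [N II]

r_s2 = 1.05               # illustrative I(6717)/I(6731)
r_n2 = 90.0               # illustrative R_N2 = (I(6548)+I(6584))/I(5755)

# Electron density from [S II], assuming T_e ~ 8200 K
n_e = S2.getTemDen(r_s2, tem=8200., to_eval='L(6717)/L(6731)')

# Electron temperature from R_N2, adopting the density just derived
T_e = N2.getTemDen(r_n2, den=n_e, to_eval='(L(6548)+L(6584))/L(5755)')

print(n_e, T_e)
\\end{verbatim}
Any other diagnostic ratio can be evaluated in the same way through the \\texttt{to\\_eval} argument.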
Since the photoionization model predicts T$_{\\mathrm{e}}$([N{\\sc ii}])$\\sim$8550~K and T$_{e}$([O{\\sc iii}])$\\sim$8330~K in R5, we considered T$_{\\mathrm{e}}$([N{\\sc ii}])$\\sim$T$_{e}$([O{\\sc iii}]) in the estimations. To infer abundances in R4 and R7, where sulphur lines were not measured, we adopted the electron density of R5, n$_{e}$=631~cm$^{-3}$. We checked that variations in density do not affect this estimation. \\\\\n\nIonic abundances were derived from the forbidden-to-hydrogen emission line ratios using the functional forms given by \\citet{2008MNRAS.383..209H}, which are based on the IRAF package IONIC. We used equations from \\citet{2004ApJ...617...29O} to obtain the singly ionized helium abundance. To determine the total abundance of O\/H we added the two ionic abundances (O\/H~$\\sim$ O$^{+}$\/H$^{+}$ + O$^{2+}$\/H$^{+}$). The total N\/H and S\/H abundances were inferred using the ICFs obtained in the photoionization model, X\/H$\\sim$(X$^{+}$\/H) $\\times$ ICF(X$^{+}$). In the case of helium abundances we used the relation between X(O$^{2+}$)=O$^{2+}$\/(O$^{2+}$+O$^{+}$) and ICF(He$^{+}$+He$^{++}$) from \\citet[Fig. 7]{2007ApJ...662...15I} and we deduced that ICF(y$^{+}$)$\\gg$1. Since our helium measurements are uncertain, we do not venture to estimate the total helium abundances.\\\\\n\nIn R5 all the useful emission lines were measured and the abundances were determined as explained above. In the rest of the regions, we did not measure all the lines necessary to calculate abundances, and we resorted to the empirical parameter N2S2 \\citep{2009MNRAS.398..949P} to estimate N\/O from the nitrogen and sulphur emission lines:\n\n\\begin{equation}\n\\log(N\/O)=1.26\\times N2S2 - 0.86\n\\end{equation}\nwhere\n\n\\begin{equation}\nN2S2=\\log \\left ({I([\\mathrm{N \\textsc{ii}}]\\lambda 6584) \\over I([\\mathrm{S\\textsc{ii}}] \\lambda\\lambda 6717,6731)} \\right).\n\\end{equation}\n\nBeforehand, we estimated N\/O in R5 with the N2S2 parameter and checked that the result was in good agreement with the value obtained with the direct method. In Table \\ref{table:paramyabun} we present all the ionic and total abundances, with their corresponding errors, derived for the integrated spectra. We discuss the results in Sect. \\ref{chemical}.\n\n\n\n\n\\section{Infrared study \\label{ir}}\nTo enhance the morphological and chemical analysis, a study in the mid-infrared was performed. We obtained IRS (Infrared Spectrograph, \\citealt{2004ApJS..154...18H}) data in mapping mode and the MIPS (Multiband Imaging Photometer, \\citealt{2004ApJS..154...25R}) 24$\\,\\mu$m image from the Spitzer Heritage Archive (SHA)\\footnote{Website: sha.ipac.caltech.edu\/applications\/Spitzer\/SHA}. M1-67 was already studied in the infrared range by \\citet{1985A&A...145L..13V}, who presented the energy distribution of the central star WR\\,124 and flux densities, finding thermal emission of dust at T$_{\\mathrm{c}}\\sim$100~K.\\\\\n\nFigure \\ref{fig:spitz_24micr} shows the MIPS 24$\\mu$m image of M1-67. This image has already been presented by \\cite{2010MNRAS.405.1047G}. In a nebula, the 24$\\mu$m emission can be mainly due to two factors: the presence of the [O{\\sc iv}]25.90$\\mu$m line from highly ionized gas, or warm dust. Since M1-67 presents a low degree of ionization, we deduce that the observed emission shown in Fig. \\ref{fig:spitz_24micr} traces the warm dust distribution of the nebula. 
The emission has an elliptical shape along the NE-SW direction, in very good agreement with the bipolar axis observed in Fig. \\ref{fig:morphology_all}, thus suggesting that the structure is composed of a mixture of ionized gas and warm dust. Furthermore, an external and spherical structure can be seen extending around the ellipsoidal shell. This faint bubble is not seen in our optical images.\\\\\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{spitzer24micr.pdf}\n\\caption{MIPS 24$\\mu$m image of M1-67. North is up and east left. Boxes indicate the two regions where IR spectra were obtained. Contours represent the H${\\alpha}$ emission derived from Fig. \\ref{fig:rgb}.}\n\\label{fig:spitz_24micr}\n\\end{figure}\n\nFor the spectroscopic observations with the low-resolution short-low (SL) and long-low (LL) modules, basic calibrated data (BCD, pipeline version 18.18) were processed and analysed with the CUBISM software \\citep{2007ApJ...656..770S}. Data were background-subtracted using averaged off-source observations and flux-calibrated. Bad pixels were removed with the automatic correction routine within CUBISM and a datacube was assembled for each module. CUBISM allows the extraction of 1D spectra over polygonal apertures: given the different spatial coverage of the SL and LL modules, we chose two apertures (with an area of $\\sim$60~arcsec$^2$) on the outskirts of the nebula, observed by both modules. The spectra from the different modules were stitched together, ignoring the noisy region at the red end of each order. We called them Regions A and B (see Fig. \\ref{fig:spitz_24micr}). In Fig. \\ref{fig:spitz_spec} we present the spectrum obtained in Region B.\\\\\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{spectrum_ir.pdf}\n\\caption{Infrared spectrum obtained in Region B. The most relevant lines are indicated.}\n\\label{fig:spitz_spec}\n\\end{figure}\n\nWe measured the most important lines by fitting Gaussian functions with IRAF. Errors were calculated as explained before (Sect. \\ref{1d}). Fluxes and their corresponding errors are presented for the two regions in Table \\ref{table:infrared}.\\\\\n\nAssuming the electron temperature of \\textit{Region 5} (T$_{e}$=8200~K) and an electron density n$_{e}$=600~cm$^{-3}$, we inferred the chemical abundances. To obtain the fluxes relative to H${\\beta}$ we used the theoretical ratio of H(7-6)\/H${\\beta}$=0.0109 from \\citet{1995MNRAS.272...41S}. The ionic abundances, Ne$^{+}$\/H$^{+}$, Ne$^{2+}$\/H$^{+}$, S$^{2+}$\/H$^{+}$, and S$^{3+}$\/H$^{+}$, were inferred by using the IRAF package IONIC. We estimated the total neon abundance by adding the two ionic abundances, Ne\/H$\\sim$Ne$^{+}$\/H$^{+}$+Ne$^{2+}$\/H$^{+}$. To derive the total S\/H abundance we needed to add the S$^{+}$\/H$^{+}$ contribution from the optical spectra. To do so, we compared the regions from which the IR and the optical 1D spectra were taken. Noticing the proximity between Region A and R3, we approximated the total sulphur abundance in spectrum A as S\/H$\\sim$(S$^{+}$\/H$^{+}$)$_{R3}$ + (S$^{++}$\/H$^{+}$)$_{A}$. The 1D spectrum nearest to B is R4, but in R4 we did not measure sulphur lines. Since the S$^{+}$\/H$^{+}$ abundance is similar in all the integrated spectra, we considered the mean value, (S$^{+}$\/H$^{+}$)$_{mean}$=6.17, so that the total sulphur abundance in B can be written as S\/H$\\sim$(S$^{+}$\/H$^{+}$)$_{mean}$ + (S$^{++}$\/H$^{+}$)$_{B}$. We assumed that S$^{3+}$\/H$^{+}$ is negligible. 
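Since the ionic abundances are quoted on the logarithmic 12+log(X\/H) scale, this sum has to be carried out in linear units. A minimal sketch of the operation for Region B, using the mean optical value quoted above and, for the sake of the example, the 18.71$\\mu$m [S{\\sc iii}] value of Table \\ref{table:abinfrared} (the exact combination of the two [S{\\sc iii}] lines adopted for the tabulated total is not detailed here), is:
\\begin{verbatim}
# Combining ionic abundances given as 12+log(X/H) into a total abundance.
import math

def to_linear(x):          # 12+log(X/H)  ->  X/H
    return 10.0 ** (x - 12.0)

def to_log(x):             # X/H  ->  12+log(X/H)
    return 12.0 + math.log10(x)

s_plus_mean = 6.17         # (S+/H+)_mean from the optical spectra
s_pp_B = 6.44              # (S++/H+)_B from the 18.71 um [S III] line

total_S_B = to_log(to_linear(s_plus_mean) + to_linear(s_pp_B))
print(round(total_S_B, 2))  # ~6.6, consistent with the tabulated S/H in B
\\end{verbatim}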
Results are presented in Table \\ref{table:abinfrared} and discussed in Sect. \\ref{chemical}.\n\n\n\\begin{table}[h!]\n\\caption{Lines measured over the two spectra studied in the infrared range. Integrated fluxes are in units of 10$^{-5}$~erg~cm$^{-2}$~s$^{-1}$. }\n\\label{table:infrared} \n\\centering \n\\begin{tabular}{l c c c}\n\\hline\n&&\\multicolumn{2}{c}{F($\\lambda$)} \\\\\n\\cline{3-4}\nLine & $\\lambda$~($\\mu$m) & Region A & Region B \\\\\n\\hline \\hline \\\\\n{[}S{\\sc iv}] & 10.51 & ... & 4.1 $\\pm$ 0.5 \\\\\nH(7-6) & 12.37 & 4.9 $\\pm$ 0.5 & 5.1 $\\pm$ 1.0 \\\\\n{[}Ne{\\sc ii}] & 12.81 & 121.7 $\\pm$ 3.2 & 105.3 $\\pm$ 3.5 \\\\\n{[}Ne{\\sc iii}] & 15.56 & 5.0 $\\pm$ 0.4 & 1.1 $\\pm$ 0.3 \\\\\n{[}S{\\sc iii}] & 18.71 & 133.9 $\\pm$ 5.5 & 99.2 $\\pm$ 1.9 \\\\\n{[}S{\\sc iii}] & 33.48 & 156.6 $\\pm$ 5.3 & 135.2 $\\pm$ 4.7 \\\\\n\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[h!]\n\\caption{Ionic and total chemical abundances estimated in Regions A and B with infrared spectroscopy.}\n\\label{table:abinfrared} \n\\centering \n\\begin{tabular}{l c c}\n\\hline\n& Region A & Region B \\\\\n\\hline \\hline \\\\\n12+log(Ne$^{+}$\/H$^{+})$ & 7.56 $\\pm$ 0.04 & 7.47 $\\pm$ 0.08 \\\\\n12+log(Ne$^{2+}$\/H$^{+})$ & 5.85 $\\pm$ 0.05 & 5.18 $\\pm$ 0.16 \\\\\n12+log(S$^{2+}$(18.71$\\mu$m)\/H$^{+}$) & 6.59 $\\pm$ 0.09 & 6.44 $\\pm$ 0.17\\\\\n12+log(S$^{2+}$(33.48$\\mu$m)\/H$^{+}$) & 6.64 $\\pm$ 0.16 & 6.56 $\\pm$ 0.21\\\\\n12+log(S$^{3+}$\/H$^{+}$) & ... & 4.41 $\\pm$ 0.19 \\\\\n\\\\\n12+log(Ne\/H) & 7.57 $\\pm$ 0.04 & 7.48 $\\pm$ 0.08\\\\\n12+log(S\/H)$\\dagger$ & 6.72 $\\pm$ 0.07 & 6.63 $\\pm$ 0.11 \\\\\n\\hline\n\\end{tabular}\n \\begin{list}{}{}\n\t\t\t\\item {$\\dagger$} Assuming S$^{+}$\/H$^{+}$ from the optical spectroscopy.\\\\\n\t\t\\end{list}\n\n\\end{table}\n\n\n\n\n\\section{Discussion \\label{discussion}}\nWe included M1-67 in our IFS observational programme to provide answers to some questions that still surround this object: the degree of gas homogeneity (both kinematic and chemical), the stellar evolutionary phase in which the gas originated, the interaction with the ISM, the influence of the spectral type of the star at the WR stage, etcetera. To do this, we put together our results for the optical (1D + 2D) and infrared analysis, and we complemented them with theoretical models of stellar evolution and previous kinematic studies of this nebula.\n\n\n\\subsection{Chemical content of M1-67 \\label{chemical}}\nThe chemical abundances derived from the 1D optical and infrared studies, presented in Tables \\ref{table:abinfrared} and \\ref{table:paramyabun}, give us relevant information on the chemical content across the nebula. To compare the derived abundances with the expected ISM values at the location of the nebula, we use the solar values from \\citet{2009ARA&A..47..481A} as our primary reference. For the sake of consistency, as a reference for M1-67 we consider here gas abundances derived following the same methodology, i.e. from collisional emission lines of H{\\sc ii} regions. We adopted the chemical abundances of the prototypical H{\\sc ii} region M\\,42 as a reference (\\citealt{2007A&A...465..207S,2011MNRAS.412.1367T} with t$^{2}$=0, and references therein). 
Then, we corrected the t$^{2}$=0 abundances from the effect of the radial abundance gradient of the Milky Way \\citep{2006ApJS..162..346R} to the galactocentric radius of M1-67\\footnote{Assumed R$_{G}\\sim$10 kpc as the the galactocentric distance of the representative ISM at the location of M1-67 \\citep{1992A&A...259..629E} and taking the distance from the Sun to M\\,42 into account (d$\\sim$0.414\\,kpc, \\citealt{2007A&A...474..515M}).}. We considered the constant ratio $\\log \\mathrm{(Ne\/O)}$=-0.73$\\pm$0.08 since they are products of the same nucleosynthesis. After these corrections the expected ISM abundances to be compared with M1-67 are 12+$\\log \\mathrm{(O\/H)}\\sim$8.42$\\pm$0.03, 12+$\\log \\mathrm{(N\/H)}\\sim$7.54$\\pm$0.09, 12+$\\log \\mathrm{(S\/H)}\\sim$6.99$\\pm$0.12, and 12+$\\log \\mathrm{(Ne\/H)}\\sim$7.69$\\pm$0.09.\\\\\n\n\nFirst of all, it can be observed that our derived oxygen abundances in R5 and R6 (12+$\\log \\mathrm{(O\/H)}\\sim$7.73$\\pm$0.06, 7.67$\\pm$0.07, respectively) are substantially lower than the expected value by factors $\\sim$10 with respect to the solar reference, and $\\sim$7 with respect to the ISM. This result implies that in the M1-67 nebula oxygen is strongly under-abundant. Comparing the derived N\/H abundance with the expected ISM value, we find that nitrogen is strongly enriched in M1-67 (factor $\\ge$ 6). \n\n\nOverall, this chemical composition can be seen in all the nebular regions observed; the N\/O ratio appears extremely high due to the effect of both nitrogen enhancement and oxygen deficiency. This fact can be understood when assuming we are seeing regions composed of material processed in the CNO cycle. This result for N\/O abundance is consistent with previous 1D studies \\citep{1991A&A...244..205E}, but here it has been extended across the whole (2D) nebular geometry and physical conditions. The only region where the N\/H abundance is close to the ISM expected value is R7 (the region with different properties in the 2D analysis, see Sect. \\ref{2d}).\n\nWe did not estimate the total helium abundances since our helium lines are very faint and the measures uncertain. Nonetheless, given the low limit of the value of He{\\sc i} ($<$0.03), the absence of He{\\sc ii} and the ICF inferred from \\citet{2007ApJ...662...15I} (ICF(y$^{+}$)$\\gg$1), we deduced that in M1-67 the largest part of helium is unseen and in neutral form. \\\\\n\nThe analysis of the chemical abundances obtained here is reinforced by the information derived from the infrared study. The infrared spectrum allowed us to derive the sulphur and neon abundances for the main ionic species, Ne$^{+}$, Ne$^{++}$, S$^{++}$, and S$^{+3}$. The total neon abundance derived in M1-67 is consistent within the errors with the expected ISM abundance for the two apertures (Table \\ref{table:abinfrared}). The noble gas neon is not expected to suffer nucleosynthetic transformation in the stellar interior, and its abundance should be preserved.\n\nIn the case of sulphur, the derivation of the total abundance requires the contribution of the optical S$^{+}$ to be added to the ionic fractions derived from the infrared. Once this approximation has been assumed, the total abundance of S\/H obtained is close to, though still slightly lower than, the expected ISM value at the galactocentric distance of M1-67. 
Thus we cannot rule out the possibility that the nebular material could be slightly sulphur-poor: either a certain degree of depletion on dust or maybe a nucleosynthetic origin (or both) could be at work as reported for some planetary nebulae \\citep{2012ApJ...749...61H}.\\\\\n\nTaking the abundance ratios into account, we can obtain clear indications of the excitation degree of the nebula. The values N$^{+}$\/N~$\\sim$1 and O$^{+}$\/O$^{++}$~$>$1 from the optical and the derived ratios of Ne$^{+}$\/Ne$^{++}$ and S$^{++}$\/S$^{3+}$ from the IR study point to the very low ionization degree of the gas in M1-67. The ionization parameter obtained from the photoionization model of R5, $\\log\\mathrm{(U)}=-3.84$, is fully consistent with this very low excitation observed.\\\\\n\n\nTo provide a summary of the chemical abundances obtained across the nebula in the optical and infrared ranges, we have grouped regions with similar physical and chemical properties (whenever possible). In Table \\ref{table:summary} we show the results: $<$1,2,3$>$ represents the average of R1, R2, and R3, $<$5,6$>$ the average of R5 and R6, and $<$A,B$>$ the average of zones A and B from the IR study. In these cases the corresponding parameters were estimated as the mean weighted by the error in each zone. The two last columns represent the expected ISM values and solar abundances from \\citet{2009ARA&A..47..481A}, respectively.\n\n\\begin{table*}\n\t\t \\caption{Summary of inferred properties in M1-67.} \n\t\t\\label{table:summary} \n\t\t\\centering \n\t\t\\begin{tabular}{l c c c c c c c}\n\t\t\\hline\n\t\t& $<$1,2,3$>$ & 4 & $<$5,6$>$ & 7 & $<$A,B$>$ & ISM$^{a}$ & Solar$^{b}$ \\\\\n\t\t\\hline\n\t\t\\hline\n \t\t\\\\\n\t\tc(H${\\beta}$) & 1.87 $\\pm$ 0.01 & 1.87 $\\pm$ 0.01 & 1.90 $\\pm$ 0.02 & 2.15 $\\pm$ 0.04 & ... & ... & ... \\\\\n\t\tn$_{\\mathrm{e}}$([S{\\sc ii}]) (cm$^{-3}$)& 1581 $\\pm$ 49 & ... & 677 $\\pm$ 62 & ... & ... & ... & ...\\\\\n\t\t12+log(O\/H) & ... &... & 7.70 $\\pm$ 0.03 & ... & 8.28 $\\pm$ 0.09 $^{c}$ & 8.42$\\pm$ 0.03 & 8.69 $\\pm$ 0.05\\\\\n\t\t12+log(S\/H) & 6.35 $\\pm$ 0.02 & ... & 6.40 $\\pm$ 0.02 & ... & 6.69 $\\pm$ 0.04 & 6.99$\\pm$ 0.12 & 7.12 $\\pm$ 0.03\\\\\n\t\t12+log(N\/H) & 8.13 $\\pm$ 0.01 & 8.36 $\\pm$ 0.03 & 8.21 $\\pm$ 0.03 & 7.92 $\\pm$ 0.03 & ... & 7.54 $\\pm$ 0.09 & 7.83 $\\pm$ 0.05 \\\\\n\t\t12+log(Ne\/H) & ... & ... & ... & ... & 7.55 $\\pm$ 0.04 & 7.69 $\\pm$ 0.09 & 7.93 $\\pm$ 0.10\\\\\n\t\t$\\Delta$(log(N\/H))$^{d}$ & 0.59 $\\pm$ 0.09 & 0.82 $\\pm$ 0.09 & 0.67 $\\pm$ 0.10 & 0.38 $\\pm$ 0.09 & ... & ...& ... \\\\\n\t\t$\\Delta$(log(O\/H))$^{d}$& ... & ... & -0.72 $\\pm$ 0.04 & ... & -0.14 $\\pm$ 0.09 $^{c}$ & ... & ... \\\\\n\t\t\\hline\n\t\t\\end{tabular}\n \\begin{list}{}{}\n\t\t\t\\item {$^{a}$} Expected ISM abundances at R$_{G}\\sim$10 kpc. \\\\\n\t\t\t\\item {$^{b}$} Solar abundances from \\citet{2009ARA&A..47..481A}.\\\\\n\t\t\t\\item {$^{c}$} Estimated assuming $\\log \\mathrm{(Ne\/O)}$=-0.73$\\pm$0.08.\\\\\t\n\t\t\t\\item {$^{d}$} Variations with respect to the expected ISM abundance. \\\\ \n\t\t\\end{list}\n\t\t\\end{table*}\n\n\n\n\n\n\\subsection{M1-67 structure \\label{structure}}\nAlthough the first observations of M1-67 showed a nearly spherical shape, the high contrast achieved by coronographic studies in the inner regions made a bipolar symmetry clearly visible \\citep{1995IAUS..163...78N}. 
Owing to the field of view of our PPAK observations, we cannot detect this bipolarity; however, the narrow-band images from the INT and the interpolated maps from PPAK (see Figs. \\ref{fig:rgb} and \\ref{fig:morphology_all}) show that the bright knots are aligned along a preferred axis with \\textquotedblleft holes\\textquotedblright ~in the perpendicular direction. The integrated spectrum of R4 confirms that the emission in the holes is very faint (i.e. H${\\beta}$ was not detected). Furthermore, the MIPS image from Spitzer (Fig. \\ref{fig:spitz_24micr}) also reveals the bipolar appearance at 24$\\mu$m, suggesting that the ionized gas is mixed with warm dust. We emphasize that the knots are not only regions with high surface brightness but also very dense areas where the [N{\\sc ii}]\/H${\\alpha}$ and [N{\\sc ii}]\/[S{\\sc ii}] ratios show the maximum values.\\\\\n\nWe support the idea of a preferred axis; but is the bipolarity the footprint of an ejection from the star? Looking at the radial velocity map of Fig. \\ref{fig:ha_velocity}, we can see that the velocity decreases when we move away from the centre (except in the far NE where the gas has peculiar properties, see below). This agrees with the studies from \\citet{1981ApJ...249..586C}, who predict a faster movement near the star, and \\citet{1998A&A...335.1029S}, who discuss the idea of a bipolar outflow. The spatial distribution of the electron density shows similar behaviour: the mean values of maps and integrated spectra of the central pointing are higher than at the edge ($\\sim$1500~cm$^{-3}$ and $\\sim$650~cm$^{-3}$, respectively). The electron density also decreases along the radial cut seen in Fig. \\ref{fig:dens_rad}, with a symmetric gradient in two directions (from the centre to NE and to SW) and flattening towards the edges. Both analyses lead us to think that the preferred axis is not only morphological, but is also the footprint of a mechanism that could have expelled material in the past; later, the interaction with the ISM diluted and decelerated the gas.\\\\\n\nLeaving aside for a moment the discussion of bipolarity, there is another striking morphological feature in this object. The IR study at 24$\\mu$m reveals a spherical bubble surrounding the bipolar structure. Kinematic studies from \\citet{1998A&A...335.1029S} show two different motions in the environment of WR124: a bipolar outflow and an external spherical hollow shell expanding into the ISM. In our narrow-band images from the INT, this bubble is not detected, possibly because the material is diluted in the ISM and very weak in the optical range. A simple sketch of the proposed structure of M1-67 is presented in Figure \\ref{fig:sketch}: an inner region with bipolar or elliptical shape along the direction NE-SW surrounded by an external spherical bubble.\\\\\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{sketch.pdf}\n\\caption{Sketch showing the structure of M1-67 around the central star WR124: the bipolar axis along the direction NE-SW and the spherical bubble.}\n\\label{fig:sketch}\n\\end{figure}\n\nThe study of the edge pointing region suggests that the gas in the NE possesses different properties from the gas of the bipolar outflow. 
In summary, the properties found for this region are: a) the largest reddening coefficient of the nebula, with c(H${\\beta})>$2.5; b) the only area where we measure [N{\\sc ii}]$<$H${\\alpha}$ and with the smallest N\/H abundance estimated, close to the solar neighbourhood value; c) an increase rather than a decrease in the relative radial velocity; d) the absence of [S{\\sc ii}]$\\lambda\\lambda$6717,6731 emission lines; e) the minimum H${\\beta}$ flux measured (R7) and the lack of [O{\\sc iii}]$\\lambda$5007 or helium lines. The presence of these properties puzzles us, but here we propose a possible scenario to explain the origin of this region. The nitrogen abundance of our \\textquotedblleft peculiar\\textquotedblright ~region points towards material not processed in the CNO cycle (e.g. ISM or MS bubble), while the morphology (Fig. \\ref{fig:rgb}) and kinematics (Fig. \\ref{fig:ha_velocity}) suggest that it does not belong to the bipolar ejection. When looking at the bow-shock simulations of \\citet{2003A&A...398..181V} and taking the external IR bubble into account, it is possible that the high velocity of the runaway WR124 causes a paraboloid-like bow shock around the star that sweeps up the surrounding medium, so that we are seeing the remnant of this bow shock placed in our line of sight. We should bear in mind that the peculiar region is spatially close to the small reversed bow-shock-like structures found by \\citet{2001ApJ...562..753G} at the NE periphery of M1-67.\n\n\n\n\\subsection{M1-67: a consequence of the evolution of the central star WR124 \\label{evolution}}\nThe theory of evolution of massive stars can help us explain the observed structure. We compare the stellar parameters of the central star in M1-67 (effective temperature and luminosity from \\citealt{2006A&A...457.1015H}) with the stellar evolution models from STARS \\citep{1971MNRAS.151..351E,1995MNRAS.274..964P,2004MNRAS.353...87E}, \\citet{2003A&A...404..975M}, and the most recent models from \\citet{2012A&A...537A.146E} to estimate the initial mass of the WR star. Despite small discrepancies, all the models predict an initial mass for WR124 of 60-80~M$\\sun$ (J. Toal\\'a, \\textit{private communication}). The evolutionary scenario for a single massive star with 60~M$\\sun<$M$_{\\mathrm{i}}<$90~M$\\sun$ follows the sequence O-Of\/WNL$\\longleftrightarrow$LBV-WNL-WCL-SN \\citep{2011BSRSL..80..266M}. After spending a normal life as O stars on the main sequence (MS), such stars evolve towards cooler temperatures, becoming luminous blue variables (LBVs) \\citep{1994PASP..106.1025H}. These stars undergo extremely strong mass loss (up to 10$^{-3\\ldots -4}~$M$\\sun$~yr$^{-1}$) through winds and occasionally giant eruptions, and thus peel off parts of their stellar envelope to form small LBV nebulae (LBVN) \\citep{1995ApJ...448..788N}. LBV stars lose their mass so fast that they rapidly evolve away from the LBV stage to become WR stars. With an initial mass range of 60-80~M$\\sun$, we can derive that the central star in M1-67 has experienced an LBV phase instead of a red or a yellow supergiant phase before becoming a WR star. 
This idea is in good agreement with previous studies of the nature of M1-67 based on different observational approaches: M1-67 is very likely the imprint of a previous LBV wind instead of a classical red super-giant (RSG) wind-blown nebula \\citep{1998ApJ...506L.127G, 2003A&A...398..181V}.\\\\\n\nThe spectral type of the central star (WN8) tells us that it is a \\textquotedblleft young\\textquotedblright ~Wolf-Rayet and that it has most likely entered the WR phase recently. Under this hypothesis, we propose that the WR winds have not had enough time to substantially interact with the previous nebular material and, therefore, the layers and observed features originate in stellar material ejected during the MS and\/or LBV phases. Considering a representative expansion velocity and the linear size of the nebula, we estimate that the ejection happened $\\sim$5$\\times$10$^{4}$~yr ago (an order-of-magnitude sketch of this estimate is given below). This value is slightly higher than the LBV phase duration ($\\sim$1.3$\\times$10$^{4}$~yr, \\citealt{1996A&A...305..229G}), thus supporting the hypothesis that the star has recently entered the WR phase.\\\\\n\nTaking the physical sizes and morphologies from hydrodynamical simulations of a 60 M$\\sun$ star as a reference \\citep{1996A&A...305..229G}, it is possible that the external bubble of M1-67 contains material expelled during the MS phase, which is very tenuous in the optical because of the dilution with the ISM. \\citet{Castor1975} and \\citet{Weaver1977} both built models that derive analytical solutions for the dynamic evolution of shock bubbles created by the interaction between the ISM and the stellar wind in the MS phase.\\\\ \n\nSeveral observational reasons have led us to think that the bipolar ejection (or axis of preference) is composed of material ejected during the LBV stage. First, the abundances in the knots along this axis show enrichment in nitrogen and deficiency in oxygen, a behaviour typical of CNO-processed material in phases after the MS stage\\footnote{We should bear in mind that models that include rotation \\citep{Meynet2005} and recent observations of O stars in the LMC \\citep{2012A&A...537A..79R} have revealed that some stars of the MS stage can also show CNO-processed material.}. It is common in observations of LBVN to find very intense [N{\\sc ii}] emission and an absence of [O{\\sc iii}] \\citep{1995ApJ...448..788N,1998ApJ...503..278S,2002A&A...393..503W}, which is also indicative of a low effective temperature and a low degree of excitation. Furthermore, many of these nebulae show clumpy radial structures (not multiple shells) and morphologies with preferred axes \\citep{1993ApJ...410L..35C}. The presence of a bipolar ejection in M1-67 enhances the similarity of the nebula to other LBVN, which almost all display some degree of bipolarity \\citep{1995ApJ...448..788N, 2001RvMA...14..261W}. In short, M1-67 shows the general properties of LBV nebulae: linear size, total ionized gas, velocity field, IR emission, chemical abundances, line intensities, and dynamical characteristics; this clearly points to an LBV progenitor \\citep[among others]{1995ApJ...448..788N, 2001ApJ...551..764L,2011BSRSL..80..440W}.\n\nThe idea of M1-67 being made up of material ejected during the LBV stage was suggested in the past by \\citet{1998A&A...335.1029S} based on the total mass of ionized gas, the expansion velocity, and the linear size of the nebula. 
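As a back-of-the-envelope illustration of the kinematic age quoted above, $t\\sim R\/v_{\\mathrm{exp}}$, the sketch below uses purely illustrative values for the radius and the expansion velocity (they are not the values adopted in our estimate); it simply shows that sizes of the order of 1~pc and velocities of a few tens of km~s$^{-1}$ yield ages of the order of 10$^{4}$-10$^{5}$~yr:
\\begin{verbatim}
# Order-of-magnitude kinematic age, t ~ R / v_exp.
# Radius and expansion velocity are illustrative assumptions only.
PC_IN_KM = 3.086e13        # km per parsec
YR_IN_S = 3.156e7          # seconds per year

radius_pc = 1.0            # assumed linear scale of the ejecta
v_exp_km_s = 20.0          # assumed representative expansion velocity

age_yr = radius_pc * PC_IN_KM / v_exp_km_s / YR_IN_S
print(f"kinematic age ~ {age_yr:.1e} yr")   # ~5e4 yr
\\end{verbatim}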
Also \\citet{1998ApJ...506L.127G} explain the clumpy appearance of M1-67 by assuming the interaction of winds in a previous LBV phase.\\\\\n\nOur study depicts M1-67 as a nebula with two regions: an external spherical bubble with material likely produced during the MS and an inner nearly elliptical region along the NE-SW direction produced due to an ejection in the LBV phase. We are observing a WR nebula with LBVN appearance.\\\\\n\n\n\n\n\\section{Summary and conclusions \\label{conclusions}}\n\\renewcommand {\\labelenumi} {\\arabic {enumi}$)$}\n\\renewcommand {\\labelenumii} {$\\bullet$}\nIn this work, we have presented the first integral field spectroscopy study of the ring-nebula M1-67 around the Wolf-Rayet star WR124 in the optical range with PPAK. Two regions of the nebula were observed and analysed by means of 2D and 1D studies. We also obtained and analysed IR spectroscopic data and the MIPS 24$\\mu$m image of M1-67 from Spitzer. In the following, we present the main results derived from this work.\n\n\\begin{enumerate}\n\\item We obtain maps from the emission lines that allow us to perform a detailed study of the 2D structure of the nebula:\n\\begin{enumerate}\n\\item Interpolated maps from the main emission lines show a clumpy structure with bright knots aligned along a preferred axis in the NE-SW direction. The [O{\\sc iii}]$\\lambda$5007\\AA{} emission is absent over the whole nebula.\n\\item The spatial distribution of the reddening coefficient maps, c(H${\\beta}$), presents slight variations between the two pointings. In the central region c(H${\\beta}$) ranges from 1.3 to 2.5 with a mean value of $\\sim$1.85, while in the edge pointing the mean is 2.11, ranging from 1.7 to 2.8.\n\\item Electron density maps, n$_{e}$, derived from the [S{\\sc ii}]$\\lambda\\lambda$6717\/6731 ratios, show a non-uniform structure. Knots with higher surface brightness in H${\\alpha}$ possess the highest densities. We also find that density decreases with increasing the distance from the star showing a symmetric gradient.\n\\item We analysed the ionization structure by means of line ratios maps. In particular, the [N{\\sc ii}]\/H${\\alpha}$ map of the edge pointing field reveals two behaviours, thus defining two spatially well delimited regions: one in the NE with [N{\\sc ii}]$<$H${\\alpha}$ and the second one with [N{\\sc ii}]$\\ge$H${\\alpha}$.\n\\item With radial velocity maps we studied the kinematics of the nebula. The derived heliocentric velocity for M1-67 is $\\sim$139~km~s$^{-1}$, in agreement with previous results. The relative radial velocity seems to decrease as it moves away from the central star along the preferred axis. \\\\\n\\end{enumerate}\n\n\\item We derived the physical parameters and chemical abundances of M1-67 using the integrated spectra of eight regions:\n\\begin{enumerate}\n\\item The electron densities inferred on the central region present higher values than on the edge ($\\sim$1500~cm$^{-3}$ and $\\sim$650~cm$^{-3}$, respectively). This result agrees with the radial variations of the 2D study.\n\\item We derived an electron temperature of $\\sim$8200~K in R5 by using our measurement of the [N{\\sc ii}]$\\lambda$5755\\AA{} emission line.\n\\item The chemical abundances show, in all the studied areas, an enrichment in nitrogen and a deficiency in oxygen. 
The nitrogen enhancement in each region is different, suggesting an inhomogeneous chemical enrichment.\\\\\n\\end{enumerate}\n\n\\item The 24$\\mu$m image reveals an inner bipolar-like structure in the NE-SW direction and an outer faint spherical bubble interacting with the surrounding ISM. From the low-resolution mid-IR spectroscopic data, we measured the main emission lines and estimated ionic and total chemical abundances, verifying the low ionization degree of the gas. \\\\\n\n\\item Overall, this study revealed the clumpy structure of M1-67 with knots aligned along a preferred axis and with \\textquotedblleft holes\\textquotedblright~ along the perpendicular direction. The gas along this bipolar axis possesses a low ionization degree, and it is well mixed with warm dust. The optical analysis of these knots revealed chemical abundances typical of material processed in the CNO cycle, suggesting that the material comes from an evolved stage of the star. The radial variations in electron density and velocity indicate that the gas of the bipolar feature was ejected by the star. \\\\\n\n\\item A region placed to the NE of the nebula shows different kinematic, chemical, and morphological properties. We propose that this region comprises the remnant of a bow shock caused by the runaway WR124, with ISM material mixed with the MS bubble.\\\\\n\\end{enumerate}\n\nBased on our observational results and taking theoretical models from the literature into account (e.g. \\citealt{1996A&A...305..229G}), we propose a scenario in which the central star has recently entered the WR phase. This implies that the interaction of WR winds with previous surrounding material is not visible yet. After comparing our results with stellar evolution models and taking the inferred initial mass of the star (60~M$\\sun < $ M$_{i} <$ 80~M$\\sun$) into account, we deduced that the central star experienced an LBV stage before becoming a WR star. The bipolar material observed belongs to an ejection during the LBV stage, since the morphology, kinematics, and chemistry are in good agreement with previous studies of LBV nebulae.\\\\\n\n\n\n\n\n\\begin{acknowledgements}\nThis work is supported by the Spanish Ministry of Science and Innovation (MICINN) under the grant BES-2008-008120. This work has been partially funded by the projects AYA2010-21887-C04-01 of the Spanish PNAYA, CSD2006 - 00070 \"1st Science with GTC\" from the CONSOLIDER 2010 programme of the Spanish MICINN, and TIC114 of the Junta de Andaluc\\'ia. We thank J. Toal\\'a for providing estimations for the initial mass of the WR star and for useful suggestions. We are also very grateful to M. Fern\\'andez-Lorenzo, A. Monreal-Ibero, K. 
Weis, and the ESTALLIDOS collaboration for their useful comments and scientific support.\n\\end{acknowledgements}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nOpen source projects have an increasing importance in modern software development.\nFor example, several open source projects are daily used by millions of users.\nHowever, it is very important to continually attract more participants and contributors to these projects, in order to increase the chances of long-term success~\\cite{comino2007}.\nParticularly, several channels can be used to promote open source software, helping to keep the interest of the community and also to attract new members.\n\nIn this article, we investigate the most common channels used by developers to promote open source projects.\nWe manually inspected a large set of popular projects on GitHub, which is the world's largest collection of open source software, with around 27 million users and 77 million repositories~\\cite{githubsearch}.\nOur contributions include: (i) data about the promotion channels frequently used by popular open source projects; (ii) a comparison on the use of promotion channels by popular projects and by random ones; and (iii) an analysis of the impact of promotion on Hacker News, a popular news aggregation site, in the popularity of the studied projects.\nOur findings help practitioners to understand the importance of using promotion channels in the open source development context.\n\n\\section{Study Design}\n\\label{sec:design}\n\nTo reveal the most common promotion channels used by developers, we manually inspected the documentation of the top-100 projects with most stars on GitHub (stars is a popular feature to manifest interest or satisfaction with GitHub projects~\\cite{icsme2016}).\nWe restricted our analysis to popular projects because they have a large number of users and therefore need better and efficient ways to communicate with users and also to attract new contributors.\n\nFigure~\\ref{fig:repos-overview} shows the distribution of the number of stars of the projects considered in this study.\nThis number ranges from 291,138 stars ({\\sc \\mbox{freeCodeCamp\/freeCodeCamp}}) to 23,322 stars ({\\sc \\mbox{tiimgreen\/github-cheat-sheet}}).\nThe considered projects are primarily developed on 17 programming languages; JavaScript is the most common one (40 projects), followed by Python (9 projects) and Go (5 projects).\nFurthermore, 14 projects only include markdown files with documentation purposes (e.g., projects with tutorials, books, awesome lists, etc).\nFinally, regarding the project owners, 69 are organizational accounts and 31 are user accounts.\\medskip\n\n\\begin{figure}[!ht]\n\t\\center\n\t\\includegraphics[width=0.65\\textwidth,keepaspectratio,trim={0 2em 0 2em},clip]{images\/repos_overview.pdf}\n\t\\caption{Number of GitHub stars of the analyzed projects}\n\t\\label{fig:repos-overview}\n\\end{figure}\n\nFor each of these 100 projects, the first author of this paper initially inspected their READMEs on GitHub to identify the channels used to promote the projects and to keep the users up-to-date with important information about them.\nFor example, the following sentence is available on the README of {\\sc adobe\/brackets}: \\aspas{\\it You can see some screenshots of Brackets on the \\underline{wiki}, intro videos on \\underline{YouTube}, and news on the Brackets \\underline{blog}}.\nIn this case, wiki and YouTube are used to support users whereas blog 
is a channel used to disseminate news about {\\sc Brackets}.\nThus, only blog is considered a promotion channel in our study.\nNext, we inspected the projects' website, for those projects having one.\nWe navigated through the site pages, searching for more channels used to promote the projects.\n\nAfter this manual inspection, the following promotion channels emerged:\n\n\\begin{itemize}\n\t\\item {\\bf Blogs}, which are used, for example, to publish announcements of new software versions, upcoming events, and improvements.\n\n\t\\item {\\bf Events and Users Meetings:} Organizing events and supporting users meetings are other strategies commonly followed to promote projects. On events the initiative usually comes from the development team or from the organization that supports the project, whereas on user meeting the initiative comes from the users, usually from a specific region or country. We rely on Meetup (\\url{https:\/\/meetup.com}) to discover users meetings.\n\n\t\\item {\\bf Twitter, Facebook, and Google+}, which are also used to connect the projects to users. We considered only official accounts, which are explicitly advertised on the project documentation or are verified by the social network (e.g., \\url{https:\/\/support.twitter.com\/articles\/20174631}).\n\n\t\\item {\\bf Newsletter and RSS feeds}, which refer to e-mails with the most relevant news about the projects and RSS feeds.\n\n\\end{itemize}\n\nIn addition, we found that developers use Q\\&A forums (e.g., StackOverflow), discussion groups (e.g., Google Groups), and messaging tools (e.g., IRC and Slack) to promote their projects.\nHowever, these channels are mostly used to discuss the projects and to provide answers to common questions raised by users.\nFor example, from the 155 topics opened in 2017 in the {\\sc adobe\/brackets} discussion group at Google Groups, only eight (5.1\\%) are related to announcements of new versions, mostly pre-releases for community testing.\nMoreover, from almost 500 topics on {\\sc facebook\/react} official forum, we could not identify any announcement related to the project development.\nThus, in this study, we do not consider forums, discussion groups, and messaging tools as promotion channels.\n\n\\section{Results}\n\\label{results}\n\n\n\\subsection{What are the most common promotion channels?}\n\\label{sec:results:rq1}\n\nFigure~\\ref{fig:rq1} presents the most common promotion channels used by the top-100 projects on GitHub.\nThe most common channel is Twitter, which is used by 56 projects.\nThe second one is Users Meetings (41 projects), followed by Blogs (38 projects), Events (33 projects), and RSS feeds (33 projects).\nThe least common channels are Facebook and Google+, which are used by 18 and 7 projects, respectively.\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=0.75\\textwidth,keepaspectratio,trim={0 1em 0 3em},clip]{images\/channels_binary.pdf}\n\t\\caption{Most common promotion channels}\n\t\\label{fig:rq1}\n\\end{figure}\n\nFigure~\\ref{fig:rq1_2} shows the distribution of the number of promotion channels per project.\nAlmost one third of the projects (32 projects) do not use any channel.\nBy contrast, more than half of the projects (55 projects) use at least two promotion channels.\nThe highest number of promotion channels is seven, which is the case of {\\sc \\mbox{facebook\/react}}, {\\sc \\mbox{facebook\/react-native}}, {\\sc \\mbox{meteor\/meteor}}, {\\sc \\mbox{golang\/go}}, {\\sc \\mbox{ionic-team\/ionic}}, {\\sc \\mbox{angular\/angular}}, and 
{\\sc adobe\/\\\\brackets}.\nWe also found that Blog and Twitter is the most frequent combination of channels (35 projects).\nOther frequent combinations include, for example, Blog and RSS (31 projects), Events and Users Meetings (31 projects), and Twitter, Events and User Meetings (31 projects).\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=0.725\\textwidth,keepaspectratio,trim={0 1em 0 3em},clip]{images\/channels_binary_histogram.pdf}\n\t\\caption{Number of promotion channels per project}\n\t\\label{fig:rq1_2}\n\\end{figure}\n\n\n\\subsection{How often do developers promote their projects?}\n\\label{sec:results:rq2}\n\nIn this second question, we investigate how often developers promote their projects on blogs and social networks.\nFor blogs, we calculate the promotion frequency as the number of posts on the last 12 months.\nFor social networks, we could not retrieve all posts for all projects because their APIs restrict the search to a recent period (e.g., last seven days for Twitter and last 100 posts for Facebook).\nThus, in this case, we only classified each social network account in two distinct groups: active and inactive.\nAn {\\em active} account has at least three posts on the last three months; otherwise, it is considered an {\\em inactive} account.\nThis classification was performed by manually counting the number of posts on the social network pages.\n\nFigure~\\ref{fig:rq2} presents the distribution of the number of blog posts on the last 12 months.\nThe number ranges from 1 ({\\sc nylas\/nylas-mail}) to 1,300 ({\\sc freeCodeCamp\/freeCodeCamp}); the first, second, and third quartile values are 7, 19, and 54 posts, respectively.\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=.325\\linewidth,keepaspectratio,trim={0 2em 0 2em},clip,page=2]{images\/activity_blog.pdf}\n\t\\caption{Distribution of the number of posts on the last 12 months (outliers are omitted)}\n\t\\label{fig:rq2}\n\\end{figure}\n\nTable~\\ref{tab:rq2:social} lists the activity status of the Twitter, Facebook, and Google+ accounts.\nWe found that 83.9\\% of the projects that use Twitter have an active account; 55.6\\% of the projects have an active Facebook account and only 28.6\\% have an active Google+ account.\n\n\\begin{table}[!ht]\n \\caption{Active Twitter, Facebook, and Google+ accounts}\n \\label{tab:rq2:social}\n \\centering\n \\begin{tabular}{@{}ccrr@{}}\n \\toprule\n \\multicolumn{1}{c}{\\bf Channel} && \\multicolumn{1}{c}{\\bf Active (\\%)} & \\multicolumn{1}{c}{\\bf Inactive (\\%)} \\\\\n \\midrule\n Twitter && 47 (83.9\\%) & 9 (16.1\\%) \\\\\n Facebook && 10 (55.6\\%) & 8 (44.4\\%) \\\\\n Google+ && 2 (28.6\\%) & 5 (71.4\\%) \\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\nFinally, we investigate the characteristics of the user meeting groups promoted on Meetup (such meetings are the 3rd most common promotion channel studied in this article).\nA Meetup group is a local community of people that is responsible for organizing meeting events~\\cite{meetupgroup}. \nThese groups are identified by topics to help members find them. 
\nHere, we rely on these topics to collect meetups about the studied open source projects, along with their locations (i.e., city and country).\nFor example, the topic for {\\sc jquery\/jquery} is {\\em jquery} and a summary of the meeting groups about this topic can be found at \\url{https:\/\/www.meetup.com\/topics\/jquery\/all}.\nFigure~\\ref{fig:rq3_meetups} presents the distribution of the number of groups, cities, and countries of the projects with meetings registered at Meetup. For groups, the values ranges from 2 to 2,261 groups; considering the cities, the values range from 2 to 725; finally, for countries, the values range from 2 to 96. The maximum values always refer to {\\sc torvalds\/linux}. In other words, {\\sc torvalds\/linux} has 2,261 meetup groups, which are spread over 725 cities from 96 countries.\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\begin{subfigure}{.325\\textwidth}\n\t \\centering\n\t \\includegraphics[width=.95\\linewidth,keepaspectratio,trim={1.5em 2em 1em 2em},clip,page=2]{images\/meetups.pdf}\n\t\t\\caption{Groups}\n\t \\label{fig:rq3_meetups_sub1}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.325\\textwidth}\n\t \\centering\n\t \\includegraphics[width=.95\\linewidth,keepaspectratio,trim={1.5em 2em 1em 2em},clip,page=3]{images\/meetups.pdf}\n\t\t\\caption{Cities}\n\t \\label{fig:rq3_meetups_sub2}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.325\\textwidth}\n\t \\centering\n\t \\includegraphics[width=.95\\linewidth,keepaspectratio,trim={1.5em 2em 1em 2em},clip,page=4]{images\/meetups.pdf}\n\t\t\\caption{Countries}\n\t \\label{fig:rq3_meetups_sub3}\n\t\\end{subfigure}%\n\t\\caption{Number of groups, cities, and countries of the user meetings}\n\t\\label{fig:rq3_meetups}\n\\end{figure}\n\n\n\\subsection{How popular and random projects differ on the usage of promotion channels?}\n\\label{sec:results:rq3}\n\nIn Section~\\ref{sec:results:rq1}, we investigated the most common promotion channels used by popular GitHub projects.\nIn this section, we contrast the usage of promotion channels by these projects and by a random sample of GitHub projects.\nFor this purpose, we randomly selected 100 projects from the top-5,000 repositories by number of stars and manually inspected their documentation using the same methodology reported in Section~\\ref{sec:design}.\nThe number of stars of this random sample ranges from 2,297 stars ({\\sc uber-archive\/image-diff}) to 22,558 ({\\sc vsouza\/awesome-ios}).\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=0.75\\textwidth,keepaspectratio,trim={0 1em 0 2em},clip]{images\/channels_random.pdf}\n\t\\caption{Most common promotion channels used by random projects}\n\t\\label{fig:rq3}\n\\end{figure}\n\nFigure~\\ref{fig:rq3} compares the usage of promotion channels by the random projects and by the most popular ones.\nIn the random sample, the number of projects using the investigated promotion channels is significantly lower compared to the most popular ones.\nHowever, by applying the Spearman's rank correlation test, we found a strong correlation between the number of projects using the promotion channels on each group ($rho =$ 0.904 and \\emph{p-value} $<$ 0.01).\nFor example, Twitter is also the most used promotion channel among the random projects (31 projects), followed by Blogs (17 projects) and RSS (13 projects).\nCompared to the most popular projects, Users meetings and Newsletter are less common (13 and 6 projects, respectively).\nFinally, Facebook and Google+ also have a very limited usage (7 and 4 projects, 
respectively).\n\n\n\\subsection{What is the impact of promotion on Hacker News?}\n\\label{results:rq4}\n\nAfter publishing content on blogs, Twitter, etc., open source developers can also promote this content on social news aggregator sites. These sites aggregate contents from distinct sources for easing viewing by a large public.\nThe most popular and important example is Hacker News (\\url{https:\/\/news.ycombinator.com}), which is dedicated to Computer Science and related technologies content. Hacker News posts just include a title and the URL of the promoted content (e.g.,~a blog post about a new version of an open source project). Any user registered in the site can post a link on Hacker News, i.e., not necessarily the links are posted by the contributors of an open source project, for example. Other Hacker News users can discuss the posts and upvote\nthem. An upvote is similar to a {\\em like} in social networks; posts are listed on Hacker News according to the number of upvotes.\nIn this research question, we use Hacker News due to its popularity; posts that reach the front page of the site receive for example 10-100K page views, in one or two days (\\url{https:\/\/goo.gl\/evyP4w}). Furthermore, Hacker News\nprovides a public API, which allows search and metadata collection.\n\nFor each popular project considered in our study (100 projects), we searched for Hacker News posts with a URL referencing the project sites or pages, including GitHub pages (READMEs, issues, etc). As result, we found 3,019 posts on Hacker News referencing content from 96 studied projects (i.e., only four projects are never referenced on Hacker News).\nFigure~\\ref{fig:rq4_overview} presents the distributions of the number of posts per project, upvotes, and comments.\nThe number of posts ranges from 1 to 298 posts per project ({\\sc rails\/rails}); the first, second, and third quartile values are 4, 10, and 43 posts, respectively.\nRegarding their upvotes, the most popular post is about {\\sc appple\/swift} (\\aspas{\\em Swift is Open Source}), with 1,824 upvotes; the quartile values are 2, 3, and 12 upvotes, respectively.\nFinally, the highest number of comments is 760, about a GitHub issue opened for Microsoft Visual Studio (\\aspas{\\em VS Code uses 13\\% CPU when idle due to blinking cursor rendering}); the quartile values are 0, 0, and 2 comments, respectively. \nOn the one hand, these results show that most Hacker News posts do not attract attention. \nBy contrast, a small number of posts attract a lot of attention. For example, the top-10\\% posts have at least 132 upvotes. 
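The threshold that separates these highly upvoted posts from the rest is simply the 90th percentile of the per-post upvote distribution. A minimal sketch of this computation is shown below (the input file name and variable names are hypothetical; the input is the list of upvote counts collected through the Hacker News API):
\\begin{verbatim}
# Quartiles and top-10% "success" threshold of the upvote distribution.
import numpy as np

# Hypothetical input: one upvote count per collected Hacker News post.
upvotes = np.loadtxt("hn_upvotes.txt")

q1, q2, q3 = np.percentile(upvotes, [25, 50, 75])
threshold = np.percentile(upvotes, 90)      # top-10% cut-off
successful = upvotes[upvotes >= threshold]

print(q1, q2, q3, threshold, len(successful))
\\end{verbatim}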
These posts are called {\\em successful posts} in this investigation.\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\begin{subfigure}{.325\\textwidth}\n\t \\centering\n\t \\includegraphics[width=.95\\linewidth,keepaspectratio,trim={0 2em 1em 2em},clip,page=2]{images\/hn_posts_overview.pdf}\n\t\t\\caption{Posts}\n\t \\label{fig:rq4_overview_sub1}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.325\\textwidth}\n\t \\centering\n\t \\includegraphics[width=.95\\linewidth,keepaspectratio,trim={0 2em 1em 2em},clip,page=4]{images\/hn_posts_overview.pdf}\n\t\t\\caption{Upvotes}\n\t \\label{fig:rq4_overview_sub2}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.325\\textwidth}\n\t \\centering\n\t \\includegraphics[width=.95\\linewidth,keepaspectratio,trim={0 2em 1em 2em},clip,page=6]{images\/hn_posts_overview.pdf}\n\t\t\\caption{Comments}\n\t \\label{fig:rq4_overview_sub3}\n\t\\end{subfigure}%\n\t\\caption{Number of posts, upvotes, and comments (outliers are omitted)}\n\t\\label{fig:rq4_overview}\n\\end{figure}\n\nFigure~\\ref{fig:rq4_stars_before_after} shows boxplots with the number of GitHub stars gained by projects covered by successful posts, in the first three days before and after the publication date on Hacker News. The intention is to investigate the impact of a successful promotion on Hacker News, by comparing the number of stars gained before and after each successful post publication. On the median, the projects covered by successful posts gained 74 stars in the first three days before their appearance on Hacker News; in the first three days after the publication, the projects gained 138 stars. Therefore, Hacker News has a positive impact on the project's popularity, measured by GitHub stars. \nIndeed, the distributions are statistically different, according to the one-tailed variant of the Mann-Whitney U test (p-value $\\leq 0.05$). By computing Cliff's delta, we found a {\\em medium} effect size ($d = -0.372$).\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=0.4\\textwidth,keepaspectratio,trim={0 0 0 2em},clip,page=2]{images\/hn_stars_before_after.pdf}\n\t\\caption{Number of GitHub stars received by projects covered by successful Hacker News posts in the first three days before and after the post publication}\n\t\\label{fig:rq4_stars_before_after}\n\\end{figure}\n\nFinally, we inspected the titles of each successful post, aiming to categorize the post purpose. The most common category includes posts announcing new releases of open source projects (44.9\\%; e.g., \\aspas{\\em Angular 2 Final Released}). Other popular categories include posts promoting articles or reports about the projects (25.4\\%; e.g., \\aspas{\\em Vue.js vs.~React}), announcing the first release of a project (16.5\\%; e.g., \\aspas{\\em YouTube-dl: Open-source YouTube downloader}), highlighting new project features (10.6\\%; e.g., \\aspas{\\em Git and GitHub Integration Comes to Atom}) and open sourcing products (1.6\\%; e.g., \\aspas{\\em Visual Studio Code is now open source}).\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Related Work}\n\\label{sec:related}\n\nAlthough open source software has been exhaustively explored recently, little is known about how developers promote these projects.\nThe main exception is a work conducted by Bianco et al. 
where the authors analyze marketing and communication strategies of three companies that develop open source software~\\cite{Bianco12}.\nBy means of interviews, they found that websites and product launch events are adopted by the three organizations; however, the organizations differ considerably in the use of other communication channels, mainly when promoting the projects in open source communities and among industrial users.\n\nMost communication channels investigated in this paper are also explored in other studies, but with different intentions.\nSinger et al. report a qualitative study focused on discovering the benefits that Twitter brings to developers~\\cite{singer2014}.\nThey found that Twitter adopters use it to stay aware of industry changes, for learning, and for building relationships.\nBy correlating the blogging and committing behavior of developers, Pagano and Maalej observed an intensive use of blogs, frequently detailing activities described shortly before in commit messages~\\cite{pagano2011}.\nBajic and Lyons analyze how software companies use social media techniques to gather feedback from users collectively~\\cite{Bajic2011}.\nTheir results suggest that startups use social media mainly for competitive advantage, while established organizations use it to monitor the buzz among their users.\nBy studying a successful software development company, Hansson et al. identified that user meetings and newsletters are adopted to include and increase the participation of users in the development process~\\cite{Hansson2006}.\nFinally, Aniche et al. conduct a study to understand how developers use modern news aggregator sites (Reddit and Hacker News)~\\cite{aniche2018}. According to their results, the two main reasons for posting links on these sites are to promote one's own work and to share relevant content.\n\n\n\\section{Conclusion and Practical Implications}\n\\label{sec:conclusion}\n\nIn this paper, we investigated the most common promotion channels used by popular GitHub projects. This investigation supports the following practical recommendations for open source project managers and leaders:\n\n\\begin{enumerate}\n\\item Promotion is an important aspect of open source project management, which should be emphasized by project leaders. For example, most popular GitHub projects (two thirds) use at least one promotion channel; half of the projects invest in two channels. By contrast, the use of promotion channels is less common among projects with lower popularity. \n\n\\item Open source project managers should consider the use of Twitter (47 projects among the top-100 most popular GitHub projects have active Twitter accounts), Users meetings (which are organized or supported by 41 projects), and blogs (which are used by 38 projects).\n\n\\item Open source project managers should also consider promotion on social news aggregator sites. Successful posts on Hacker News may have an important impact on the popularity of GitHub projects. 
However, only 10\\% of the Hacker News posts about the studied projects have had some success\n\\end{enumerate}\n\n\n\n\\section*{Acknowledgments}\n\n\\noindent Our research is supported by CAPES, FAPEMIG, and CNPq.\n\n\\small\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbsso b/data_all_eng_slimpj/shuffled/split2/finalzzbsso new file mode 100644 index 0000000000000000000000000000000000000000..25eb44d9c44325c0d1bda6b712ed21b8e0062b2b --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbsso @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\\par With the fast development of deep learning techniques, face recognition systems (FRSs) have become a popular technique for identifying and verifying people due to the ease of capturing biometrics from the face. In our daily lives, one of the most relevant applications of FRS is the Automatic Border Control system, which can quickly verify the identity of a person with his electronic machine-readable travel document (eMRTD) \\cite{icao20159303} by comparing the face image of the traveler with a reference in the database. Although high-accuracy FRS can effectively distinguish an individual from others, it is vulnerable to adversarial attacks that conceal the real identity. Recent research found that attacks based on morphed faces \\cite{ferrara2014magic,scherhag2017vulnerability} pose a serious security risk in various applications. \n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=.75\\linewidth]{image\/fingerprint.eps}\n\\caption{Few-shot learning for morphing attack fingerprinting (MAF), a multiclass extension of MAD. Each class (morphing attack model) of the training set contains a few examples. After training, the model can classify unseen test samples for each class.}\n\\label{fig:MAF}\n\\end{figure*}\n\n\\par Morphing attacks were first introduced in 2014 \\cite{ferrara2014magic}. The morphed face is combined by two or more bona fide faces, and it was shown that commercial face recognition software tools are highly vulnerable to such attacks. In a further study \\cite{ferrara2016effects}, the authors showed that the images of morphed faces are realistic enough to fool human examiners. With the emergence of face morphing generation techniques \\cite{gimp,damer2018morgan,zhang2020mipgan,karras2019style} and numerous easy-to-use face morphing softwares (e.g., MorphThing \\cite{morphthing}, 3Dthis Face Morph \\cite{3dmorph}, Face Swap Online \\cite{faceswap}, Abrosoft FantaMorph \\cite{fanta}, FaceMorpher \\cite{morpher}), there is an imminent need to protect FRS security by detecting morphing attacks \\cite{scherhag2019face}. \n\nSome morphing attack detection (MAD) approaches have been developed since 2018 (for a recent review, see \\cite{venkatesh2021face}). They can be categorized into two types: single image-based (S-MAD) and differential image-based (D-MAD) \\cite{raja2020morphing}. The deep face representation for D-MAD has been studied in \\cite{scherhag2020deep}; existing S-MAD methods can be further classified into two subtypes \\cite{venkatesh2020face}: model-based (using handcraft characteristics) and deep learning-based. Noise-based photo-response non-uniformity (PRNU) methods \\cite{debiasi2018prnu,debiasi2018prnu2,scherhag2019detection,zhang2018face} represent the former subtype due to its popularity and outstanding performance. 
Originally proposed for camera identification, PRNU turns out to be useful for detecting the liveness of face photos. For the latter subtype, Noiseprint \\cite{cozzolino2018noiseprint} used a CNN to learn the important features, with the objective of improving detection performance and supporting fingerprinting applications. \n\n\\par Despite rapid progress, existing MAD methods are often constructed on a small training dataset and a single modality, which makes them lack good generalization properties \\cite{raja2020morphing,venkatesh2020face}. The performance of existing MAD methods might be satisfactory for predefined morphing attack models, but degrades rapidly when deployed in the real world and faced with newly evolved attacks. Although it is possible to alleviate this problem by fine-tuning the existing MAD model, the cost of collecting labeled data for every new morphing attack is often formidable. Furthermore, we argue that MAD alone is not sufficient to address the increased security risk facing FRS. A more aggressive countermeasure than MAD is to formulate the problem of morphing attack fingerprinting (MAF); that is, we aim at a multiclass classification of morphing attack models, as shown in Fig. \\ref{fig:MAF}.\n\n\\par Based on the above observations, we propose to formulate MAF as a few-shot learning problem in this paper. Conventional few-shot learning (FSL) \\cite{snell2017prototypical} learns from a few examples of each class and predicts the class label of new test samples. Similarly, we train the detector using data from both predefined models and new attack models (only a few samples are required) to predict unknown new test samples. This task is named the few-shot MAD (FS-MAD) problem. Unlike existing MAD research, few-shot MAF (FS-MAF) aims at learning general discriminative features, which can be generalized from predefined to new attack models. The problem of few-shot MAF is closely related to camera identification (ID) \\cite{lukas2006digital}, camera model fingerprinting \\cite{cozzolino2018noiseprint}, and GAN fingerprinting (a.k.a. model attribution \\cite{yu2019attributing}) in the literature. The main contributions of this paper are summarized below.\n\n$\\bullet$ Problem formulation of few-shot learning for MAD\/MAF. We challenge the widely accepted assumptions of the MAD community, including those of NIST's FRVT MORPH competition. The generalization property of MAD\/MAF methods will be as important as the optimization of recognition accuracy. \n\n$\\bullet$ Feature-level fusion for MAD applications. Although both PRNU and Noiseprint have shown promising performance in camera identification applications, no one has demonstrated their complementary nature in the open literature. We believe that this work is the first to combine them through feature-level fusion and to study the optimal fusion strategy.\n\n$\\bullet$ Design of a fusion-based FSL method with adaptive posterior learning (APL) for MAD\/MAF. By adaptively combining the most surprising observations encountered by PRNU and Noiseprint, we can achieve a good generalization property by optimizing the performance of FS-MAD\/FS-MAF at the system level. \n\n$\\bullet$ Construction of a large-scale benchmark dataset to support MAD\/MAF research. More than 20,000 images with varying spatial resolution have been collected from various sources. Extensive experimental results have justified the superior generalization performance of FS-MAD and FS-MAF over all other competing methods. 
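\n\nTo make the few-shot formulation sketched in Fig. \\ref{fig:MAF} concrete, the short Python sketch below shows one way of drawing $N$-way $K$-shot episodes from a pool of labeled examples, where a label is either bona fide or one of the morphing attack models; the function name and the default parameter values are illustrative only and are not part of the proposed system.\n\\begin{verbatim}\nimport random\nfrom collections import defaultdict\n\ndef sample_episode(samples, n_way=9, k_shot=5, n_query=15, seed=0):\n    # samples: list of (feature, class_label) pairs, e.g. fused noise features\n    # labeled as bona fide, OpenCV, FaceMorpher, StyleGAN2, ...\n    rng = random.Random(seed)\n    by_class = defaultdict(list)\n    for feat, label in samples:\n        by_class[label].append(feat)\n    classes = rng.sample(sorted(by_class), n_way)\n    support, query = [], []\n    for c in classes:\n        pool = rng.sample(by_class[c], k_shot + n_query)\n        support += [(f, c) for f in pool[:k_shot]]   # few labeled examples\n        query += [(f, c) for f in pool[k_shot:]]     # unseen test samples\n    return support, query\n\\end{verbatim}\nIn the protocols defined in Section \\ref{result}, $K$ is 1 or 5 (1-shot and 5-shot learning), and the hybrid benchmark contains nine classes (bona fide plus eight morphing models).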
\n\n\n\\section{Related Work}\n\\label{related}\n\n\\subsection{Morphing Attack Detection (MAD)}\n\\noindent \\textbf{Model-based S-MAD}. Residual noise feature-based methods are designed to analyze pixel discontinuity, which may be greatly affected by the morphing process. Generally, noise patterns are extracted by subtracting the given image from a denoised version of the same image using different models, such as the deep multiscale context aggregate network (MS-CAN) \\cite{venkatesh2020detecting}. The most popular should be sensor noise patterns, such as PRNU. Recently, both PRNU-based \\cite{zhang2018face,debiasi2018prnu,debiasi2018prnu2,scherhag2019detection} and scale-space ensemble approaches \\cite{raja2020morphing,raja2017transferable} have been studied. \n\n\n\\noindent \\textbf{Learning-based S-MAD}. Along with rapid advances in deep learning, many methods have considered the extraction of deep learning features for detection. The use of a convolutional neural network (CNN) has reported promising results \\cite{8897214}. Most works are based on pre-trained networks and transfer learning. Commonly adopted deep models contain AlexNet \\cite{krizhevsky2012imagenet}, VGG16 \\cite{simonyan2014very}, VGG19 \\cite{simonyan2014very,raja2017transferable}, GoogleNet \\cite{szegedy2015going}, ResNet \\cite{he2016deep}, etc. In addition, several self-design models were also proposed. More recently, a deep residual color noise pattern was proposed for MAD in \\cite{venkatesh2019morphed}; and an attention-based deep neural network (DNN) \\cite{aghdaie2021attention} was studied, focusing on the salient regions of interest (ROI) that have the most spatial support for the morph detector decision function.\n\n\n\\noindent\\textbf{Learning-based D-MAD}. The presented D-MAD methods mainly focus on feature differences and demorphing. For feature difference-based methods, features of the suspected image and the live image are subtracted and further classified. Texture information, 3D information, gradient information, landmark points, and deep feature information (ArcFace \\cite{scherhag2020deep}, VGG19 \\cite{seibold2020accurate}) are the most popular features used. The authors in \\cite{scherhag2018detecting} computed distance-based and angle-based features of landmark points for analysis. In \\cite{singh2019robust}, a robust method using diffuse reflectance in a deep decomposed 3D shape was proposed. Fusion methods were commonly adopted by concatenating hand-crafted Local Binary Pattern Histogram (LBPH) and transferable deep CNN features \\cite{damer2019multi}, or concatenating feature vectors extracted from texture descriptors, keypoint extractors, gradient estimators and deep neural networks \\cite{scherhag2018towards}. More recently, a discriminative DMAD method in the wavelet subband domain was developed to discern the disparity between a real and a morphed image.\n\n\n\\subsection{Few-Shot Learning (FSL)}\nFew-shot learning addresses the challenge with the generalization property of deep neural networks, i.e., how can a model quickly generalize after only seeing a few examples from each class? Early approaches include meta-learning models \\cite{ravi2016optimization} and deep metric learning techniques \\cite{snell2017prototypical}. 
More recent advances have explored new directions such as the relation network \\cite{sung2018learning}, meta-transfer learning \\cite{sun2019meta}, adaptive posterior learning (APL) \\cite{ramalho2019adaptive}, and cluster-based object seeker with shared object concentrator (COSOC) \\cite{luo2021rectifying}.\n\n\n\\subsection{Camera and Deepfake Fingerprinting}\n\\par PRNU, as a model-based device fingerprint, has been used to perform multiple digital forensic tasks, such as device identification \\cite{cozzolino2020combining}, device linking \\cite{salazar2021evaluation}, forgery localization \\cite{lin2020prnu}, detection of digital forgeries \\cite{lugstein2021prnu}. It can find any type of forgery, irrespective of its nature, since the lack of PRNU is seen as a possible clue of manipulation. Furthermore, PRNU-based MAD methods \\cite{debiasi2018prnu,debiasi2018prnu2,scherhag2019detection,zhang2018face} also confirm the usefulness of the sensor fingerprint in MAD.\nIn recent years, PRNU has been applied successfully in MAD \\cite{debiasi2018prnu2,debiasi2018prnu,scherhag2019detection}. The method in \\cite{debiasi2018prnu2} shows that region-based PRNU spectral analysis reliably detects morphed face images, while it fails if image post-processing is applied to generated morphs. Based on previous work, a PRNU variance analysis was performed in \\cite{debiasi2018prnu}. It focused on local variations of face images, which can be useful as a reliable indicator for image morphing. The work in \\cite{scherhag2019detection} proposed an improved version of the scheme based on the previous PRNU variance analysis in image blocks. Another work \\cite{marra2019gans} showed that each GAN model leaves a specific fingerprint in the generated images, just as the PRNU traces left by different cameras in real-world photos.\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=1.0\\linewidth]{image\/pipeline.eps}\n\\vspace{-0.75cm}\n\\caption{An overview of the proposed system (FBC-APL) for few-shot MAF (FS-MAF). It consists of factorized bilinear coding (FBC) and adaptive posterior learning (APL) modules. The output contains the probability that the input image will be classified into one of the known morphing models.}\n\\label{fig:pipeline}\n\\end{figure*}\n\n\n\\section{Methodology}\n\\label{funda}\n\nMorphing attack fingerprinting (MAF) refers to the multiclass generalization of the existing binary MAD problem. In addition to detecting the presence of morphing attacks, we aim at finer-granularity classification about the specific model generating the face morph. It is hypothesized that different attack models inevitably leave fingerprints in morphed images (conceptually similar to the sensor noise fingerprint left by different camera models \\cite{lukas2006digital}).\nFig. \\ref{fig:pipeline} shows the overall system consisting of two stages: feature fusion through factorized bilinear coding (FBC) and few-shot learning (FSL) for MAF. We will first elaborate on fusion-based MAD in detail and then discuss the extension to few-shot MAF.\n\n\n\n\\subsection{Fusion-based Single-Image MAD}\n\\par Noise is often embedded in the image data during acquisition or manipulation. The uniqueness of the noise pattern is determined by the physical source or an artificial algorithm, which can be characterized as a statistical property to reveal the source of the noise \\cite{popescu2004statistical}. 
The noise of the sensor pattern was first used for the MAD task by performing a facial quantification statistics analysis, which confirmed its effectiveness \\cite{zhang2018face}. Here, we consider two types of sensor noise patterns: Photo Response Non-Uniformity (PRNU) \\cite{fridrich2009digital} and Noiseprint \\cite{cozzolino2018noiseprint}.\n\n\\noindent \\textbf{Photo Response Non-Uniformity (PRNU)}. PRNU originates from slight variations between individual pixels during photoelectric conversion in digital image sensors \\cite{lukas2006digital}. Different image sensors embed this weak signal into acquired images as a unique signature. Although the weak signal itself is mostly imperceptible to the human eye, its uniqueness can be characterized by statistical techniques and exploited by sophisticated fingerprinting methods such as PRNU \\cite{fridrich2009digital}. \nThis systemic and individual pattern, which plays the role of a sensor fingerprint, has proven robust to various innocent image processing operations such as JPEG compression. Although PRNU is stochastic in nature, it is a relatively stable component of the sensor over its lifetime. \n\nPRNU has been widely studied in camera identification because it is not related to image content and is present in every image acquired by the same camera. Most recently, PRNU has been proposed as a promising tool for detecting morphed face images \\cite{debiasi2018prnu,debiasi2018prnu2}.\nThe spatial feature of PRNU can be extracted using the approach presented by Fridrich \\cite{fridrich2009digital}. For each image $I$, the residual noise ${W}_{I}$ is estimated as described in Equation \\eqref{eq1}:\n\\begin{equation}\n\\label{eq1}\n\\vspace{-0.1in}\n{W}_{I} = I - F(I) \n\\end{equation}\nwhere $F$ is a denoising function that filters the noise from the sensor pattern. The clever design of the mapping function $F$ (e.g., wavelet-based filter \\cite{lukas2006digital}) makes PRNU an effective tool for various forensic applications.\n\n\n\\noindent \\textbf{Noiseprint}. Unlike model-based PRNU, data-driven or learning-based methods tackle the problem of camera identification by assuming the availability of training data. Instead of mathematically constructing unique signatures, Noiseprint \\cite{cozzolino2018noiseprint} attempts to learn the embedded noise pattern from the training data. A popular learning methodology adopted by Noiseprint is to construct a Siamese network \\cite{bertinetto2016fully}. The Siamese network is trained with pairs of image patches that come from the same or different cameras in an unsupervised manner. Similarly to PRNU, Noiseprint has shown clear traces of camera fingerprints. It should be noted that Noiseprint has performed better than PRNU when cropped image patches become smaller, implying the benefit of exploiting spatial diversity \\cite{cozzolino2018noiseprint}.\n\n\\par To the best of our knowledge, Noiseprint has not been proposed for MAD in the open literature. Existing deep learning-based S-MADs often use pre-trained networks such as VGG-face \\cite{raja2020morphing}. Our empirical study shows that morphing-related image manipulation leaves evident traces in Noiseprint, suggesting the feasibility of Noiseprint-based MAD. Moreover, morphed faces are often manipulated across the face, whose spatial diversity can be exploited by cropping image patches using Noiseprint. To justify this claim, Fig. 
\\ref{fig:featurefig} (d) presents the Noiseprint comparison between bona fide and morphed faces averaged over 1,000 examples. Visual inspection clearly shows that the areas around the eyes and nose have more significant (bright) traces than the bona fide faces. In contrast, Fig. \\ref{fig:featurefig} (c) shows the comparison of the extracted PRNU patterns with the same experimental setting. Similar visual differences between bona fide and morphed faces can be observed; more importantly, PRNU and Noiseprint demonstrate complementary patterns (low vs. high frequency) begging for fusion.\n\n\\noindent \\textbf{Feature Fusion Strategy}. Fusion methods are usually based on multiple feature representations or classification models. Taking advantage of diversity, the strategy of combining classifiers \\cite{kittler1998combining} has shown improved recognition performance compared to single-mode approaches. Recent work has shown that fusion methods based on Dempster-Shafer theory can improve the performance of face morphing detectors \\cite{makrushin2019dempster}. However, previous work \\cite{makrushin2019dempster} only considered ensemble models of the scale space and pre-trained CNN models. For the first time, we propose to combine PRNU and Noiseprint using a recently developed similarity-based fusion method, called factorized bilinear coding (FBC) \\cite{gao2020revisiting}.\n\nFBC is a sparse coding formulation that generates a compact and discriminative representation with substantially fewer parameters by learning a dictionary $\\boldsymbol{B}$ to capture the structure of the entire data space. It can preserve as much information as possible and activate as few dictionary atoms as possible. Let $\\boldsymbol{x}_i$, $\\boldsymbol{y}_j$ be the two features extracted from PRNU and Noiseprint, respectively. The key idea behind FBC is to encode the extracted features based on sparse coding and to learn a dictionary $\\boldsymbol{B}$ with $k$ atoms by matrix factorization. Specifically, the sparsity FBC opts to encode the two input features $(\\boldsymbol{x}_i, \\boldsymbol{y}_j)$ in the FBC code $\\boldsymbol{c}_v$ by solving the following optimization problem:\n\n\\begin{equation}\n\\underset{{{\\boldsymbol{c}}_{v}}}{\\mathop{\\min }}\\,\\bigg|\\bigg|{{\\boldsymbol{x}}_{i}}\\boldsymbol{y}_{j}^{\\top}-\\sum\\limits_{l=1}^{k}{c_{v}^{l}}{{\\boldsymbol{U}}_{l}}\\boldsymbol{V}_{l}^{\\top}\\bigg|{{\\bigg|}^{2}}+\\lambda||{{\\boldsymbol{c}}_{v}}|{{|}_{1}}\n\\end{equation}\nwhere $\\lambda$ is a trade-off parameter between the reconstruction error and the sparsity. The dictionary atom $b_l$ of $\\boldsymbol{B}$ is factorized into $\\boldsymbol{U}_{l}\\boldsymbol{V}_{l}^{\\top}$ where \n $\\boldsymbol{U}_{l}$ and $\\boldsymbol{V}_{l}^{\\top}$ are low-rank matrices. The $l_1$ norm $|| \\cdot ||_1$ is used to impose the sparsity constraint on $\\boldsymbol{c}_{v}$. In essence, the bilinear feature $\\boldsymbol{x}_{i}\\boldsymbol{y}_{j}^{\\top}$ is reconstructed by $\\sum\\limits_{l=1}^{k}{c_{v}^{l}} \\boldsymbol{U}_{l}\\boldsymbol{V}_{l}^{\\top}$\nwith $\\boldsymbol{c}_{v}$ being the FBC code and $c_v^l$ representing the $l$-th element of $\\boldsymbol{c}_{v}$.\n\nThis optimization can be solved using well-studied methods such as LASSO \\cite{tibshirani1996regression}. 
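For illustration only, the sketch below solves this sparse-coding step for a single feature pair with scikit-learn's \\texttt{Lasso}; the shapes and the regularization value are assumptions made for the example, whereas in the proposed system the FBC module is trained together with a CNN backbone as described in Section \\ref{result}.\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.linear_model import Lasso\n\ndef fbc_code(x, y, U, V, lam=1e-2):\n    # x: (d1,) and y: (d2,) feature vectors (e.g. PRNU and Noiseprint);\n    # U: (k, d1, r) and V: (k, d2, r) hold the low-rank factors of the\n    # k dictionary atoms, so atom l is B_l = U[l] @ V[l].T of shape (d1, d2).\n    k = U.shape[0]\n    target = np.outer(x, y).ravel()           # vec(x y^T)\n    atoms = np.stack([(U[l] @ V[l].T).ravel() for l in range(k)], axis=1)\n    # LASSO: min_c ||target - atoms @ c||^2 + lam*||c||_1 (up to sklearn scaling)\n    fit = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(atoms, target)\n    return fit.coef_                          # the k-dimensional FBC code c_v\n\\end{verbatim}\n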
With two groups of features $\\{\\boldsymbol{x}_i\\}_{i=1}^m$ and $\\{\\boldsymbol{y}_j\\}_{j=1}^n$ at our disposal, we first calculate all FBC codes $\\{\\boldsymbol{c}_v\\}_{v=1}^N$ and then fuse them by the operation $max$ to achieve global representation $\\boldsymbol{z}$: \n\\begin{equation}\n \\boldsymbol{z}=max\\left\\{\\boldsymbol{c}_{v}\\right\\}_{i=1}^{N}.\n \\label{eq:3}\n\\end{equation}\nThe entire FBC module is shown in Fig. \\ref{fig:fbc}.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=1.0\\linewidth]{image\/fbc.eps}\n\\vspace{-0.65cm}\n\\caption{The architecture of the FBC module to combine PRNU and Noiseprint. $\\tilde{\\boldsymbol{U}}$ and $\\tilde{\\boldsymbol{V}}$ replace $\\boldsymbol{U}$ and $\\boldsymbol{V}$ to avoid numerically unstable matrix inversion operations; $\\boldsymbol{P}$ is a fixed binary matrix.}\n\\label{fig:fbc}\n\\end{figure}\n\n\n\n\\subsection{Few-shot learning for Morphing Attack Fingerprinting}\n\\par Based on the FBC-fused feature $\\boldsymbol{z}$, we construct a few-shot learning module as follows. Inspired by recent work on adaptive posterior learning (APL) \\cite{ramalho2019adaptive}, we have redesigned the FSL module to adaptively select feature vectors of any size (e.g., FBC-fused feature) as input. This newly designed module consists of three parts: an encoder, a decoder, and an external memory store. The encoder is used to generate a compact representation for the incoming query data; the memory saves the previously seen representation by the encoder; the decoder aims at generating a probability distribution over targets by analyzing the query representation and pairwise data returned from the memory block. Next, we will elaborate on the design of these three components.\n\n\\noindent\\textbf{Encoder}. The encoder can convert input data of any size to a compact embedding with low dimensionality. It is implemented by a convolutional network, which is composed of a single first convolution to map the input to 64 feature channels, followed by 15 convolutional blocks. Each block is made up of a batch normalization step, followed by a ReLU activation and a convolutional layer with kernel size 3. For every three blocks (one combo), the convolution contains a stride 2 to down-sample the image. All layers have 64 features. Finally, the feature is flattened to a 1D vector and passed through Layer Normalization, generating a 64-dimensional embedding as an encoded representation.\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=1.0\\linewidth]{image\/MAD_APL_module2.eps}\n\\vspace{-0.4cm}\n\\caption{(a) APL training procedure for iterations. We train the APL module on a sequence of episodes ($x_t$, $y_t$), where $x_t$ is the FBC feature and $y_t$ is the true label. At first, the memory is empty; at each iteration, a batch of samples is fed to the module, and a prediction is made. Cross-entropy loss L($\\hat{y}_t$, $y_t$) is calculated and a gradient update step is performed to minimize the loss in that batch alone. The loss is also fed to the memory controller so that the network can decide whether to write to memory. (b) and (c) show the behavior of the accuracy and memory size in a 9-class training scenario. APL stops writing to memory after having about 7 examples per class for classification.}\n\\label{fig:fscnn}\n\\end{figure*}\n\n\n\\noindent\\textbf{Memory}. The external memory store is a database to store experiences. It is key-value data. Each row represents the information for one data point. 
Each column is decomposed into an embedding (encoded representation) and a true label. The memory store is managed by a controller that decides which embeddings can be written into the memory while at the same time tries to minimize the amount of written embeddings. During the writing process, a quantity metric surprise is defined. The higher the probability that the model assigns to the true class correctly, the less surprised it will be. If the confidence in the prediction in the correct class is smaller than the probability assigned by a uniform prediction, the embedding should be written into memory. During the querying process, the memory is queried for the k-nearest-neighbors of the embeddings of queries from the encoder. The distance metric used to calculate the proximity between points is an open choice, and here we use two types (euclidean distance and cosine distance). Both the full-row data for each of the neighbors and query embeddings are concatenated and fed to the decoder. \n\n\\begin{figure\n\\centering\n\\includegraphics[width=0.8\\linewidth]{image\/db_sample.eps}\n\\caption{Face samples in five merged datasets. (a) FERET-Morphs (bona fide faces come from FERET \\cite{feret}), (b) FRGC-Morphs (bona fide faces come from FRGC V2.0 \\cite{frgc}), (c) FRLL-Morphs (bona fide faces come from Face Research Lab London Set (FRLL) \\cite{amslraw}), (d) CelebA-Morphs (bona fide faces come from CelebA \\cite{liu2015deep}), and (e) Doppelg\u00e4nger Morphs (bona fide faces come from the Web collection).}\n\\label{fig:sample}\n\\end{figure}\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.95\\linewidth]{image\/self_sample.eps}\n\\caption{Some sample pairs of bona-fide face images from the Doppelg\u00e4nger dataset (note that these look-alike pairs do not have biological connections).}.\n\\label{fig:self}\n\\end{figure*}\n\n\n\\noindent\\textbf{Decoder}. The decoder takes the concatenation of query embedding, recalled neighbor embeddings from memory, labels, and distances as input. The architecture is a self-attention-based relational feedforward module. It processes each of the neighbors individually by comparing them with the query and then does a cross-element comparison with a self-attention module before reducing the activations with an attention vector calculated from neighbor distances. The self-attention blocks are repeated five times in a residual manner. The resulting tensors are called activation tensors. In addition, the distances between neighbors and the query are passed through a softmax layer to generate an attention vector, which is summed with the activation tensor on the first axis to obtain the final logit result for classification. The self-attention block comprises a multihead attention layer, a multihead dot product attention (MHDPA) layer \\cite{santoro2018relational} for cross-element comparison, and a nonlinear multilayer perceptron (MLP) layer to process each element individually. \n\n\n\n\\noindent\\textbf{Training}. During APL training, as shown in Fig. \\ref{fig:fscnn} (a), the query data (that is, the FBC-fused feature vector $\\boldsymbol{z}$) are passed through the encoder to generate an embedding, and this representation is used to query an external memory store. At first, the memory is empty; at each training episode, a batch of examples is fed to the model, and a prediction is made. Cross-entropy loss is used to be fed to the memory controller to decide whether to write to memory. 
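A minimal sketch of the external memory store, with the surprise-based write rule and the nearest-neighbor query described above, is given below; the class and method names are illustrative, and only the memory logic (not the encoder or decoder) is shown.\n\\begin{verbatim}\nimport numpy as np\n\nclass EpisodicMemory:\n    # Key-value store: each row holds an encoder embedding and its true label.\n    def __init__(self, num_classes):\n        self.num_classes = num_classes\n        self.keys, self.labels = [], []\n\n    def maybe_write(self, embedding, probs, true_label):\n        # Surprise rule: write only when the probability assigned to the\n        # true class is below that of a uniform prediction.\n        if probs[true_label] * self.num_classes < 1.0:\n            self.keys.append(np.asarray(embedding, dtype=float))\n            self.labels.append(int(true_label))\n\n    def query(self, embedding, k=5):\n        # Return the k nearest stored rows (Euclidean distance here;\n        # cosine distance is the other option mentioned above).\n        K = np.stack(self.keys)\n        d = np.linalg.norm(K - np.asarray(embedding, dtype=float), axis=1)\n        idx = np.argsort(d)[:k]\n        return K[idx], np.array(self.labels)[idx], d[idx]\n\\end{verbatim}\n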
After the query is searched in memory, the returned memory contents, as well as the query, are fed to the decoder for classification. Figs. \\ref{fig:fscnn} (b) and (c) show the behavior (accuracy and memory size) of APL during a single episode. The accuracy of APL increases as it sees more samples and saturates at some point, indicating that the additional inputs do not surprise the module anymore. In the case of the 9-class classification scenario, we have observed that about 7 examples per class are sufficient to reach performance saturation.\n\n\\noindent\\textbf{Morphing Attack Fingerprinting}. Both PRNU \\cite{lukas2006digital} and Noiseprint \\cite{cozzolino2018noiseprint} were originally proposed for the identification of camera models, which is known to be a fingerprint in image forensics. The duality between image generation in the cyber and physical worlds inspires us to extend the existing problem formulation of binary MAD \\cite{debiasi2018prnu,debiasi2018prnu2,scherhag2019detection,zhang2018face} into multiclass fingerprinting. Different camera models (e.g., Sony vs. Nikon) are analogous to varying face morphing methods (e.g., LMA \\cite{damer2018morgan} vs. StyleGAN2 \\cite{karras2020analyzing}); therefore, it is desirable to go beyond MAD by exploring the feasibility of distinguishing one morphing attack from another. Fortunately, the system shown in Fig. \\ref{fig:pipeline} easily lends itself to generalization from binary to multiclass classification by resetting the hyperparameters, like the number of classes, the data path for each class, etc. To learn a discriminative FBC feature for fingerprinting, multiclass labeled data for training and testing should be prepared to be fed to the FBC module for retraining. When the FBC feature is available, it will be fed to the APL module for multiclass classification.\n\n\\section{Experiments}\n\\label{result}\n\n\n\\begin{table}[t]\n\\begin{center}\n\\small\n\\caption{The hybrid face morphing benchmark database consists of five image sources and 3-6 different morphing methods for each.}\n\\label{tab:dbinfo}\n\\vspace{-0.3cm}\n\\begin{threeparttable}\n\\begin{tabular}{ l |l c c}\n\\hline\\noalign{\\smallskip}\nDatabase & Subset & \\#Images & Resolution \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n\\multirow{4}{*}{FERET-Morphs} &\tbona fide \\cite{feret} &\t576 & 512x768\\\\\n\t& FaceMorpher \\cite{sarkar2020vulnerability} &\t529 & 512x768\\\\\n\t& OpenCV \\cite{sarkar2020vulnerability} &529 & 512x768\\\\\n\t& StyleGAN2 \\cite{sarkar2020vulnerability} & 529 & 1024x1024 \\\\\n\\hline\n\\multirow{4}{*}{FRGC-Morphs} & bona fide \\cite{frgc} & \t964 & 1704x2272\\\\\n\t& FaceMorpher \\cite{sarkar2020vulnerability} &\t964 & 512x768\\\\\n\t& OpenCV \\cite{sarkar2020vulnerability} & 964 & 512x768\\\\\n\t& StyleGAN2 \\cite{sarkar2020vulnerability} & 964 & 1024x1024\\\\\n\\hline\n\\multirow{7}{*}{FRLL-Morphs} & bona fide \\cite{amslraw} & 102+1932 & 413x531\\\\\n\t& AMSL \\cite{neubert2018extended} & 2175 & 413x531 \\\\\n\t& FaceMorpher \\cite{sarkar2020vulnerability} &\t1222 & 431x513 \\\\\n\t& OpenCV \\cite{sarkar2020vulnerability}& 1221 & 431x513 \\\\\n\t& LMA &768 & 413x531\\\\\n\t& WebMorph \\cite{sarkar2020vulnerability} & 1221 & 413x531\\\\\n\t& StyleGAN2 \\cite{sarkar2020vulnerability} & 1222 & 1024x1024 \\\\\n\\hline\n\\multirow{4}{*}{CelebA-Morphs*} & bona fide \\cite{liu2015deep} & 2989 & 128x128 \\\\\n\t& MorGAN \\cite{damer2018morgan}& 1000 & 64x64\\\\\n\t& CIEMorGAN \\cite{damer2019realistic} & 1000 & 128x128 
\\\\\n\t& LMA \\cite{damer2018morgan} & 1000 & 128x128 \\\\\n\\hline\n\\multirow{4}{*}{Doppelg\u00e4nger} & bona fide & 306 & 1024x1024 \\\\\n\t& FaceMorpher &\t150 & 1024x1024 \\\\\n\t& OpenCV &\t153 & 1024x1024\\\\\n\t& StyleGAN2\t& 153 & 1024x1024 \\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\n\\begin{tablenotes}\n\\small\n\\item * means only the cropped faces from raw images are used; no facial cropping is used for other datasets. The raw number of bona fide images in FRLL-Morphs is 102. Based on the raw faces, data augmentation is implemented to obtain extra 1932 images. \n\\end{tablenotes}\n\\end{threeparttable}\n\\vspace{-0.2in}\n\\end{center}\n\\end{table}\n\n\n\\subsection{Large-scale Morphing Benchmark Dataset}\n\\noindent \\textbf{Benchmark Dataset Description.} To simulate the amount and distribution of data in real-world applications, we have combined five datasets to build a large-scale evaluation benchmark for detecting and fingerprinting few-shot morphing attacks. It contains four publicly available datasets, namely, FERET-Morphs \\cite{feret,sarkar2020vulnerability}, FRGC-Morphs \\cite{frgc,sarkar2020vulnerability}, FRLL-Morphs \\cite{amslraw,neubert2018extended,sarkar2020vulnerability}, and CelebA-Morphs \\cite{liu2015deep,damer2018morgan,damer2019realistic}. We also generated a new dataset with high-resolution faces collected from the Web, named Doppelg\u00e4nger Morphs, which contains morphing attacks from three algorithms and satisfies the so-called Doppelg\u00e4nger constraint \\cite{rottcher2020finding} (that is, look-alike faces without biological connections, refer to Fig. \\ref{fig:self}). A total of more than 20,000 images (6,869 bona fide faces and 15,764 morphed faces) have been collected, as shown in Table \\ref{tab:dbinfo}. Eight morphing algorithms are involved, including five landmark-based methods, OpenCV \\cite{opencv}, FaceMorpher \\cite{facemorpher}, LMA \\cite{damer2018morgan}, WebMorph \\cite{webmorph}, and AMSL \\cite{neubert2018extended}, and three adversarial generative networks based, including MorGAN \\cite{damer2018morgan}, CIEMorGAN \\cite{damer2019realistic}, and StyleGAN2 \\cite{karras2020analyzing}. Fig. \\ref{fig:sample} provides some cropped face samples with real faces and morphed faces from different morphing algorithms in these five datasets. To the best of our knowledge, this is one of the largest and most diverse face morphing benchmarks that can be used for MAD and MAF evaluations. \n\n\n\\noindent \\textbf{Evaluation Protocols.}\nBased on the large-scale dataset collected for few-shot MAD and MAF benchmarks, we have designed the evaluation protocols for each task as follows:\n\n$\\bullet$ Protocol FS-MAD (few-shot MAD). This protocol is designed for the few-shot binary classification (bona fide\/morphed). Training data comes from predefined types and a few (1 or 5) samples per new type. The test data come from new types. Here, the predefined types in our experiment contain five types of morphing results generated by FaceMorpher \\cite{facemorpher}, OpenCV \\cite{opencv}, WebMorph \\cite{webmorph}, StyleGAN2 \\cite{karras2020analyzing}, and AMSL \\cite{neubert2018extended}, and their corresponding bona fide faces. Faces of these types are from the FERET-Morphs, FRGC-Morphs, FRLL-Morphs, and Doppelg\u00e4nger-Morphs datasets. The morphing faces generated by LMA \\cite{damer2018morgan}, MorGAN \\cite{damer2018morgan}, and CIEMorGAN \\cite{damer2019realistic}, and their corresponding bona fide faces, are treated as new types. 
Faces of these types are from the CelebA-Morphs dataset.\n \n\n$\\bullet$ Protocol FS-MAF (few-shot MAF). This protocol is designed for multiclass fingerprint classification on the hybrid large-scale benchmark and for five separate morph datasets. Each morphing type and bona fide type are treated as different categories, namely FERET-Morphs, FRGC-Morphs, CelebA-Morphs, and Doppelg\u00e4nger datasets all with 4 classes, FRLL-Morphs with 7 classes, and the hybrid with 9 classes. For each data set, the data are split according to the rule of 8: 2. Training data consist of 1 and 5 images per class for 1 shot and 5-shot learning, respectively. The testing data contains non-overlapping data with the training in each dataset. To reduce the bias of the imbalanced distribution of the data, a similar number of faces is maintained for each class in each test set. \n\n\\begin{table}[!t]\n\\begin{center}\n\\caption{Traditional MAD performance (Accuracy-\\%) comparison of different feature-level fusion methods. NP - Noiseprint; CN - Concatenation; CC - Convex Compression; $\\bot$ - spatial; $\\square$ - spectral.}\n\\vspace{-0.3cm}\n\\label{tab:toyexp}\n\\begin{tabular}{ l c c c c c c }\n\\hline\\noalign{\\smallskip}\n{Feature} & CN & Sum & Max & CC & FBC (ours) \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\nPRNU $\\bot$+PRNU $\\square$ & 83.78 & 84.23 & 83.78 & 84.23 & 84.42 \\\\\nNP $\\bot$ + NP $\\square$ & 89.19 & 89.64 & 89.64 & 89.64 & 96.40\\\\\nPRNU $\\bot$ + NP $\\square$ & 89.19 & 89.19 & 89.64 & 89.19 & 89.59\\\\\nPRNU $\\square$ + NP $\\bot$ & 83.78 & 84.23 & 83.78 & 85.59 & 86.04\\\\\nPRNU $\\square$. + NP $\\square$ & 86.94 & 85.59 & 85.59 & 86.94 & 84.68 \\\\\nPRNU $\\bot$ + NP $\\bot$ & \\textbf{91.44} & \\textbf{91.89} & \\textbf{91.89} & \\textbf{94.59} & \\textbf{96.85}\\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{table}[!t]\n\\begin{center}\n\\caption{Performance (\\%) comparison of few-shot MAD. Accu. - Accuracy.}\n\\vspace{-0.3cm}\n\\label{tab:madfs}\n\\resizebox{.95\\linewidth}{!}{\n\\begin{tabular}{ l |c c c |c c c}\n\\hline\\noalign{\\smallskip}\n & \\multicolumn{3}{c}{1-shot} & \\multicolumn{3}{|c}{5-shot} \\\\\nMethod & Accu. & D-EER & ACER & Accu. &\tD-EER &\tACER \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\nXception \\cite{chollet2017xception} & 66.50 & 32.50 & 33.50 & 73.25 & 27.00 & 26.75 \\\\\nMobileNetV2 \\cite{sandler2018mobilenetv2} & 67.00 & 36.50 & 33.00 & 71.25 & 29.00 & 28.75 \\\\\nNasNetMobile \\cite{zoph2018learning} & 59.00 & 40.50 & 41.00 & 66.25 & 35.00 & 33.75 \\\\\nDenseNet121 \\cite{huang2017densely} &68.25 & 31.50 & 31.75 & 73.50 & 24.50 & 26.50 \\\\\nArcFace \\cite{deng2019arcface} & 58.00 & 41.00 & 42.00 & 62.25 &\t37.50 & 37.75 \\\\\n\\hline\nRaghavendra. et al. 
\\cite{raghavendra2017face} & 49.25 & 48.00 & 50.75 & 46.75 & 47.50 & 53.25 \\\\\nMB-LBP \\cite{scherhag2020face} & 61.00 & 38.50 & 39.00 & 69.25 & 31.00 & 30.75 \\\\\nFS-SPN \\cite{zhang2018face} & 51.50 & 45.00 & 48.50 & 58.25 & 43.50 & 41.75 \\\\\t\nPipeline Footprint \\cite{neubert2018reducing} & 54.25 & 44.50 &45.75 &\t60.25 &\t38.50 &\t39.75 \\\\\nPRNU Analysis \\cite{debiasi2018prnu} & 56.50 & 57.00 & 43.50 & 64.25 & 66.70 & 35.75 \\\\\nInception-MAD \\cite{damer2022privacy} & 62.00 & 34.50 & 38.00 & 67.75 & 32.50 & 32.25 \\\\\nMixFaceNet-MAD \\cite{damer2022privacy} & 76.10 & 27.50 & 28.00 & 82.16 & 24.50 & 24.25 \\\\\nNoiseprint-SVM \\cite{cozzolino2018noiseprint} & 53.75 & 50.50 & 46.25 & 61.25 & 38.50 & 38.75 \\\\\n\\hline\nMeta-Baseline \\cite{chen2021meta} & 60.45 & - & - & 71.38 & - & - \\\\\nCOSOC \\cite{luo2021rectifying} & 66.89 & -&-& 74.54 &-&- \\\\\n\\hline\n\\textbf{FBC-APL} & \\textbf{99.25} & \\textbf{1.50} & \\textbf{0.75} & \\textbf{99.75} & \\textbf{0.50} & \\textbf{0.25} \\\\\n\\noalign{\\smallskip} \\hline\n\\end{tabular}}\n\\end{center}\n\\end{table}\n\n\n\\subsection{Experimental Settings}\n\\noindent \\textbf{Data Preprocessing}. Dlib face detector \\cite{king2009dlib} is used to detect and crop the face region. The cropped face is normalized according to the coordinates of the eye and resized to a fixed size of $270\\times270$ pixels. The feature extraction of PRNU and Noiseprint is performed on the processed faces, respectively. The resulting vector dimension for each type of feature is 72,900 ($270\\times270$).\n\n\\noindent \\textbf{Performance Metrics}. Following previous MAD studies \\cite{raja2020morphing,scherhag2020deep}, we report performance using four metrics, including: (1) Accuracy; (2) D-EER; (3) ACER; (4) Confusion Matrix. Detection Equal-Error-Rate(D-EER) is the error rate for which both BPCER and APCER are identical. Average Classification Error Rate (ACER) is calculated by the mean of the APCER and BPCER values. Attack Presentation Classification Error Rate (APCER) reports the proportion of morph attack samples incorrectly classified as bona fide presentation, and the Bona Fide Presentation Classification Error Rate (BPCER) refers to the proportion of bona fide samples incorrectly classified as morphed samples. Both APCER and BPCER are commonly used in previous studies of MAD \\cite{raja2020morphing,scherhag2020deep}.\n\n\n\\subsection{Comparison of Feature Extraction and Fusion Strategies}\nFirst, we show the visual comparison of extracted features by different methods.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.9\\columnwidth]{image\/avg_feature.eps}\n\\caption{Average of (a) MB-LBP, (b) FS-SPN, (c) PRNU and (b) Noiseprint features over 1000 randomly selected face images. Left: bona fide; right: morphed faces.}\n\\label{fig:featurefig}\n\\end{figure}\n\n\\par We first compare different feature-level fusion strategies to combine PRNU and Noiseprint patterns, including element-wise operation (sum\/max), convex compression (CC) \\cite{norouzi2013zero}, vector concatenation, and our factorized bilinear coding (FBC) method \\cite{gao2020revisiting}. We consider the features in both the spatial and the spectral domains. The PRNU and Noiseprint features extracted from the images are treated as spatial features. The spectral features are obtained by applying the discrete Fourier transform to the spatial features. Any two types of feature are fused to perform traditional MAD tasks on a subset of the test data. 
Therefore, six different fusion features are generated. For concatenation, the final dimension of the feature is 145,800. For sum, max, and CC, it is 72,900. The fusion feature of FBC is as compact as 2,048 dimensions. All generated features are fed into the SVM with a linear kernel for binary classification. As shown in Table \\ref{tab:toyexp}, the fusion of spatial features of PRNU and Noiseprint performs best for the six features, which can be attributed to the fact that the two patterns in the spatial domain contain more discriminative features (as shown in Fig.~\\ref{fig:featurefig}). Furthermore, our FBC-based fusion achieves the highest accuracy among the five fusion strategies.\n\n\n\\begin{table*}[!t]\n\\small\n\\caption{Accuracy(\\%) of 1-shot MAF classification on single and hybrid datasets.}\n\\vspace{-0.3cm}\n\\label{tab:maf1}\n\\resizebox{\\linewidth}{!}{\n\\begin{tabular}{ l |c c c c c c}\n\\hline\n\\multirow{2}{*}{Method} & FERET-Morphs & FRGC-Morphs & FRLL-Morphs & CelebA-Morphs & Doppelg\u00e4nger & Hybrid \\\\\n& 4-class & 4-class & 7-class & 4-class & 4-class & 9-class \\\\\n\\hline\nXception \\cite{chollet2017xception} & 29.47&\t25.26&\t17.68&\t16.67&\t21.05&\t15.11\\\\\nMobileNetV2 \\cite{sandler2018mobilenetv2} & 31.58&\t33.68&\t31.30&\t55.19&\t25.26&\t17.33\\\\\nNasNetMobile \\cite{zoph2018learning} & 32.63&\t27.37&\t22.61&\t19.26&\t23.16&\t12.88\\\\\nDenseNet121 \\cite{huang2017densely} & 46.32&\t26.32&\t22.03&\t47.04&\t23.16&\t19.33\\\\\nArcFace \\cite{deng2019arcface} & 29.33&\t39.64&\t26.12&\t28.33&\t18.03&\t15.22\\\\\n\\hline\nRaghavendra. et al. \\cite{raghavendra2017face} & 38.95&\t43.16&\t29.28&\t89.63&\t31.58&\t11.11 \\\\ \nMB-LBP \\cite{scherhag2020face} & 33.95 &\t33.42&\t34.59&\t34.50 &\t21.31&\t14.89 \\\\\nFS-SPN \\cite{zhang2018face} & 25.41&\t31.22&\t23.71&\t61.50 &\t32.79&\t29.44 \\\\\t\nPipeline Footprint \\cite{neubert2018reducing} & 26.32&\t29.47&\t29.28&\t25.93&\t25.26&\t21.89 \\\\\nPRNU Analysis \\cite{debiasi2018prnu} & 34.74 & 26.32 & 11.01 & 37.04 &\t25.26 &\t18.56 \\\\\nInception-MAD \\cite{damer2022privacy} & 23.16 &\t30.53 &\t20.00\t& 44.81 & 29.47 & 21.78 \\\\\nMixFaceNet-MAD \\cite{damer2022privacy} & 36.84 & 37.89 & 35.94 & 57.04 & 49.47 & 33.56 \\\\\nNoiseprint-SVM \\cite{cozzolino2018noiseprint} & 50.53 & 43.16 & 22.61 & 84.44 & 31.58 & 22.00 \\\\\n\\hline\nMeta-Baseline \\cite{chen2021meta} & 51.05 & 51.44 & 34.77 & 61.43 & 33.43 & 53.46 \\\\\nCOSOC \\cite{luo2021rectifying} & 54.58 & 64.37 & 35.22 & 63.19 & 34.30 & 59.55 \\\\\n\\hline\nFBC & 96.93 & 98.83 & 94.06 & 99.50 & 56.67 & 96.11 \\\\\nFBC-all & 98.11 & 99.48 & 98.42 & 100 & 84.17 & 96.78 \\\\\n\\textbf{FBC-APL} & \\textbf{98.82} & \\textbf{99.61} & \\textbf{98.24} & \\textbf{99.67} & \\textbf{91.67} & \\textbf{98.11} \\\\\n\\hline\n\\end{tabular}}\n\\end{table*}\n\n\n\\begin{table*}[!t]\n\\small\n\\caption{Accuracy(\\%) of 5-shot MAF classification on single and hybrid datasets.}\n\\vspace{-0.3cm}\n\\label{tab:maf5}\n\\resizebox{\\linewidth}{!}{\n\\begin{tabular}{ l |c c c c c c}\n\\hline\n\\multirow{2}{*}{Method} & FERET-Morphs & FRGC-Morphs & FRLL-Morphs & CelebA-Morphs & Doppelg\u00e4nger & Hybrid \\\\\n& 4-class & 4-class & 7-class & 4-class & 4-class & 9-class \\\\\n\\hline\nXception \\cite{chollet2017xception} & 46.32& 43.16&\t31.01&\t73.70&\t28.42&\t43.67\\\\\nMobileNetV2 \\cite{sandler2018mobilenetv2} & 55.79 & 53.68 &\t40.00 & 89.26 & 26.32 & 54.56 \\\\\nNasNetMobile \\cite{zoph2018learning} & 48.42 & 40.00 & 24.35 &\t67.41&\t27.37&\t37.33\\\\\nDenseNet121 
\\cite{huang2017densely} & 54.74 & 55.79 &\t36.23&\t89.26&\t25.26\t&53.33\\\\\nArcFace \\cite{deng2019arcface} & 44.34 & 50.91 & 33.81 & 39.67 & 20.49 & 29.11 \\\\\n\\hline\nRaghavendra. et al. \\cite{raghavendra2017face} & 45.26 & 61.05 & 31.59 & 42.96 &\t28.42 &\t11.11 \\\\\nMB-LBP \\cite{scherhag2020face} & 69.28 & 74.87 & 42.67 & 63.00\t& 26.23 & 42.11 \\\\\nFS-SPN \\cite{zhang2018face} & 41.34 & 41.97 & 26.91 & 82.67 & 27.04 & 43.89 \\\\\t\nPipeline Footprint \\cite{neubert2018reducing} & 45.26 & 61.05 & 31.59 &\t42.96 & 28.42 & 37.78 \\\\\nPRNU Analysis \\cite{debiasi2018prnu} & 53.68 & 32.63 & 29.86 & 78.15 & 26.32 & 39.22 \\\\\nInception-MAD \\cite{damer2022privacy} & 50.53 &\t51.58 &\t37.39 &\t82.59 &\t29.47 & 44.00 \\\\\nMixFaceNet-MAD \\cite{damer2022privacy} & 63.16\t& 63.68 & 53.48 & 82.59 & 33.68 & 51.00 \\\\\nNoiseprint-SVM \\cite{cozzolino2018noiseprint} & 69.47 & 69.47 & 57.39 & 87.41 & 37.89 & 51.89 \\\\\n\\hline\nMeta-Baseline \\cite{chen2021meta} & 60.60 & 64.72 & 50.74 & 81.42 & 36.80 & 61.98 \\\\\nCOSOC \\cite{luo2021rectifying} & 65.98 & 75.04 & 54.90 & 89.60 & 41.81 & 72.62 \\\\\n\\hline\nFBC & 97.64 & 99.09 & 96.94 & 99.50 & 65.83 & 96.22 \\\\\nFBC-all & 98.11 & 99.48 & 98.42 & 100 & 84.17 & 96.78 \\\\\n\\textbf{FBC-APL} & \\textbf{98.82} & \\textbf{99.61} & \\textbf{98.24} & \\textbf{99.67} & \\textbf{96.67} & \\textbf{98.22} \\\\\n\\hline\n\\end{tabular}\n}\n\\end{table*}\n\n\\begin{figure*}[h]\n\\centering\n\\includegraphics[width=1.0\\linewidth]{image\/confusion_matrix.eps}\n\\vspace{-0.75cm}\n\\caption{Confusion matrix of few-shot MAF classification on hybrid dataset.}\n\\label{fig:confus}\n\\end{figure*}\n\n\n\\subsection{Few-shot Learning for MAD}\n\n\n\nWe extend the traditional MAD problem to a few-shot learning problem. First, the PRNU and Noiseprint features are extracted, respectively. Then an FBC module (VGG-16 \\cite{simonyan2014very} as the backbone) is trained as a binary classifier for feature fusion, taking PRNU and Noiseprint features from the entire training set (all images of predefined types) as input. Based on the pre-trained FBC module, 2,048-dimensional fusion representations are generated and then fed to the APL module for binary few-shot learning using the cross-entropy loss. Here, the Euclidean distance is used to query the top five nearest neighbors of the memory component. The APL output is a tuple of the probability distribution for each class. The results in terms of accuracy, D-EER, and ACER are shown in Table \\ref{tab:madfs}. Two methods based on FSL \\cite{luo2021rectifying,chen2021meta}, two methods based on face recognition (FR) \\cite{schroff2015facenet,deng2019arcface}, several popular deep models pre-trained \\cite{chollet2017xception,sandler2018mobilenetv2,zoph2018learning,huang2017densely} on ImageNet \\cite{deng2009imagenet}, and eight current MAD methods \\cite{raghavendra2017face,scherhag2020face,zhang2018face,neubert2018reducing,debiasi2018prnu,damer2022privacy,cozzolino2018noiseprint}, are adopted for comparison. Due to the effective fusion of two complementary patterns (i.e., PRNU and Noiseprint) and the APL module, our proposed FBC-APL clearly outperforms other competing methods by a large margin.\n\n\n\n\n\n\\subsection{Few-shot Learning for MAF}\nUnlike the few-shot MAD problem, in MAF, the FBC module uses ResNet50 \\cite{he2016deep} as the backbone and is pre-trained as a nine-class classifier using all the training data (about 80\\%) of the collected database. 
The FBC fusion feature obtained from the training samples is then fed to the APL module for multiclass few-shot learning. A cosine similarity score is adopted to compute the similarity between queries and the data stored in memory to find the three nearest neighbors. From Tables \\ref{tab:maf1} and \\ref{tab:maf5}, one can see that our FBC-APL has achieved outstanding performance, and some results are even better than the FBC-all method, which uses FBC features from all training data to fit SVM for classification. To better illustrate the effectiveness of the proposed FBC-FSL method, we have compared the confusion matrix for nine different classes (including bona fide and eight different morphing models), as shown in Fig. \\ref{fig:confus}.\n\n\n\n\\subsection{Discussions and Limitations}\nWhy did the proposed method outperform other competing methods by a large margin? We believe there are three contributing reasons. First, PRNU and Noiseprint feature maps as shown in Fig. \\ref{fig:featurefig} have shown better discriminative capability than others; meanwhile, their complementary property makes fusion an efficient strategy for improving the accuracy. Second, we have specifically taken the few-shot constraints into the design (i.e., the adoption of APL module) while other competing approaches often assume numerous training samples. Third, from binary MAD to multi-class MAF, our FBC fusion strategy is more effective on distinguishing different classes as shown in Fig. \\ref{fig:confus}. Note that we have achieved unanimously better results than other methods across six different datasets, as shown in Table \\ref{tab:maf5}, which justifies the good generalization property of our approach.\n\nThe overall pipeline in Fig. \\ref{fig:pipeline} can be further optimized by end-to-end training. In our current implementation, the three steps are separated, that is, the extraction of PRNU and Noiseprint features, FBC-based fusion, and APL-based FSL. From the perspective of network design, end-to-end training could further improve the performance of the FBC-APL model. Moreover, there are still smaller and more challenging datasets for morphing attacks in the public domain. Validation of the generalization property for the FBC-APL model remains to be completed, especially when novel face morphing attacks (e.g., adversarial morphing attack \\cite{wang2020amora}, transformer-based, and 3D reconstruction-based face morphing) are invented. Finally, we have not considered the so-called post-morphing process \\cite{damer2021pw} where the print and scan operations are performed when issuing a passport or identity document.\n\n\n\n\\section{Conclusion and Future Work}\n\\label{con}\n\\par Face morphing attacks pose a serious security threat to FRS. In this work, we proposed a few-shot learning framework for the detection of non-reference morphing attacks and fingerprinting problems based on factorized bilinear coding of two types of camera fingerprint feature, PRNU and Noiseprint. Additionally, a large-scale database is collected that contains five types of face dataset and eight different morphing methods to evaluate the proposed few-shot MAD and fingerprinting problem. The results show outstanding performance of the proposed fusion-based few-shot MAF framework on our newly collected large-scale morphing dataset. \nWe note that face-morphing attack and defense research is likely to coevolve in the future. 
Future work on the attack side will include the invention of more powerful morphing attacks, such as GANformer-based \\cite{hudson2021generative} and diffusion model-based \\cite{dhariwal2021diffusion}. Consequently, defense models that include MAD and MAF could focus on the study of the feasibility of detecting novel attacks and morphed face images from printed and scanned image data. In practical applications, optimizing differential morphing attack detection with live trusted capture is also an interesting new research direction.\n\n\n\\section*{Acknowledgments}\nThis work was partially supported by the NSF Center for Identification (CITeR) awards 20s14l and 21s3li.\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Acknowledgements}\nThe IceCube collaboration gratefully acknowledges the support from the following agencies and institutions: USA {\\textendash} U.S. National Science Foundation-Office of Polar Programs,\nU.S. National Science Foundation-Physics Division,\nU.S. National Science Foundation-EPSCoR,\nWisconsin Alumni Research Foundation,\nCenter for High Throughput Computing (CHTC) at the University of Wisconsin{\\textendash}Madison,\nOpen Science Grid (OSG),\nExtreme Science and Engineering Discovery Environment (XSEDE),\nFrontera computing project at the Texas Advanced Computing Center,\nU.S. Department of Energy-National Energy Research Scientific Computing Center,\nParticle astrophysics research computing center at the University of Maryland,\nInstitute for Cyber-Enabled Research at Michigan State University,\nand Astroparticle physics computational facility at Marquette University;\nBelgium {\\textendash} Funds for Scientific Research (FRS-FNRS and FWO),\nFWO Odysseus and Big Science programmes,\nand Belgian Federal Science Policy Office (Belspo);\nGermany {\\textendash} Bundesministerium f{\\\"u}r Bildung und Forschung (BMBF),\nDeutsche Forschungsgemeinschaft (DFG),\nHelmholtz Alliance for Astroparticle Physics (HAP),\nInitiative and Networking Fund of the Helmholtz Association,\nDeutsches Elektronen Synchrotron (DESY),\nand High Performance Computing cluster of the RWTH Aachen;\nSweden {\\textendash} Swedish Research Council,\nSwedish Polar Research Secretariat,\nSwedish National Infrastructure for Computing (SNIC),\nand Knut and Alice Wallenberg Foundation;\nAustralia {\\textendash} Australian Research Council;\nCanada {\\textendash} Natural Sciences and Engineering Research Council of Canada,\nCalcul Qu{\\'e}bec, Compute Ontario, Canada Foundation for Innovation, WestGrid, and Compute Canada;\nDenmark {\\textendash} Villum Fonden and Carlsberg Foundation;\nNew Zealand {\\textendash} Marsden Fund;\nJapan {\\textendash} Japan Society for Promotion of Science (JSPS)\nand Institute for Global Prominent Research (IGPR) of Chiba University;\nKorea {\\textendash} National Research Foundation of Korea (NRF);\nSwitzerland {\\textendash} Swiss National Science Foundation (SNSF);\nUnited Kingdom {\\textendash} Department of Physics, University of Oxford.\n\\section{Investigation of the significance of TXS 0506+056}\n\\label{sec:TXS_significance_investigation}\nThe significance of TXS 0506+056 found by this multi-flare algorithm is smaller than the (single-flare) time-dependent significance that was determined in \\cite{IceCube:2018cha}. 
The goal of this Appendix is to show that the decrease in significance is due only to the different event selection of the sample used in this analysis, and not to the different likelihood algorithms. It is mainly related to 2 cascade events that are rejected in the new event selection presented in~\\citep{Aartsen:2019fau}. This was also discussed by IceCube in~\\citep{Abbasi:2021bvk}. In fact, the new selection focuses on muon tracks in order to achieve the best angular resolution for the point-source search.\n\nThe differences between this analysis and the one described in \\cite{IceCube:2018cha} are mainly of three types. These are investigated using the analysis described in this letter and the one presented in \\cite{IceCube:2018cha} to find out how much each of them contributes to the change in significance of TXS 0506+056. The results are summarized in Table~\\ref{tab:TXS_comparisons}.\n\n\\paragraph{\\textbf{Different datasets:}}\n As mentioned also in Section \\ref{sec:detector}, the event selections used to produce the dataset analyzed in \\cite{IceCube:2018cha} and the one analyzed in this work (from~\\cite{Aartsen:2019fau}) are different. According to the internal IceCube nomenclature, the two datasets are referred to as \\MA{{\\tt PSTracks v2}} and \\MA{{\\tt PSTracks v3}}, respectively. In some cases the different event selection results in slightly different reconstructed energies and local angles. An extensive and detailed description of the two datasets can be found in~\\cite{Abbasi:2021bvk}.\n \n The significance of TXS 0506+056 is estimated on \\MA{{\\tt PSTracks v2}} and \\MA{{\\tt PSTracks v3}} by applying the multi-flare algorithm to the years 2012-2015 (containing only one of the two flares detected by this analysis). We observe the same drop in significance (from $4.0~\\sigma$ in \\MA{{\\tt PSTracks v2}} to $2.6~\\sigma$ in \\MA{{\\tt PSTracks v3}}) described in~\\cite{Abbasi:2021bvk}. The significance observed for \\MA{{\\tt PSTracks v3}} increases to $3.4~\\sigma$ if the two high-energy events, present in \\MA{{\\tt PSTracks v2}} but absent in \\MA{{\\tt PSTracks v3}}, are added by hand to the dataset. It is also worth noticing that the pre-trial significance observed for \\MA{{\\tt PSTracks v2}} with the multi-flare algorithm does not differ from the pre-trial significance reported in \\citep{IceCube:2018cha}, which was obtained with a single-flare algorithm.\n\n \\paragraph{\\textbf{Different algorithms:}}\n The multi-flare algorithm has been developed for this analysis and applied for the first time in this work. \n This is a crucial difference between this work and the one presented in \\cite{IceCube:2018cha}, since a multi-flare likelihood can in principle involve more fit parameters than a single-flare likelihood. The increased parameter space of the fit may thus degrade the sensitivity. This degradation was avoided by requiring a pre-selection of candidate flares with $\\mathrm{TS}\\ge2$ (see Section \\ref{sec:analysis} and Appendix~\\ref{sec:multi-flare_algorithm}).\n \n Other minor improvements between the two analyses concern: a Gaussian integral factor, included in the marginalization term to correct for boundary effects; and the time PDF normalization, set to 1 across each IceCube sample by considering only the up times of the detector (in \\cite{IceCube:2018cha} it was set to 1 over an infinite range, regardless of the up times). 
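As an illustration of this up-time-restricted normalization, the snippet below integrates a Gaussian flare profile over a list of detector up-time intervals; the interval format and function names are our own assumptions, introduced only for illustration.
\\begin{verbatim}
import math

def gaussian_cdf(t, t0, sigma):
    # Cumulative distribution of a Gaussian flare profile centred at t0.
    return 0.5 * (1.0 + math.erf((t - t0) / (math.sqrt(2.0) * sigma)))

def uptime_normalisation(t0, sigma, uptimes):
    """Fraction of the Gaussian time profile contained in the detector up times.

    Dividing the raw Gaussian PDF by this factor makes the time PDF integrate
    to 1 over the live time only, instead of over an infinite range.
    """
    return sum(gaussian_cdf(stop, t0, sigma) - gaussian_cdf(start, t0, sigma)
               for start, stop in uptimes)

# Toy example (the MJD intervals are illustrative, not real IceCube up times):
uptimes = [(56000.0, 56180.0), (56185.0, 56360.0)]
print(uptime_normalisation(t0=56170.0, sigma=30.0, uptimes=uptimes))  # < 1
\\end{verbatim}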
The results, shown in Table~\\ref{tab:TXS_comparisons} for the single- and multi-flare algorithm applied to the 2012-2015 data, suggest that the multi-flare algorithm is not responsible for the drop of the significance, when applied to the same dataset. \n\n \n \\paragraph{\\textbf{Different strategies for combining independent samples:}} \n The third and last potential source of change in significance is due to the different strategies adopted to combine the IceCube samples.\n Since the 10-year data sample of IceCube concerns different IceCube detector configurations, triggers and event cuts, this analysis is based on the maximization of the joint likelihood defined as the product of the likelihoods of each IceCube sample (see Section \\ref{sec:analysis}). The strategy adopted in~\\cite{IceCube:2018cha}, instead, consists in maximizing the likelihood of each IceCube sample, picking up the most significant p-value and reporting it as post-trial after correcting for the look-elsewhere effect. Such a correction is made by penalizing the most significant $p$-value by the ratio of the livetime of the sample with the most significant $p$-value to the total time. To investigate this difference, the single-flare algorithm is applied to \\MA{{\\tt PSTracks v3}}. To reproduce the analysis in~\\citep{IceCube:2018cha}, the TS is maximized only across the 3 years between 2012-2015 (containing the most significant flare) and the $p$-value is penalized by the ratio of 10 years to 3 years, adopting the same logic described in \\cite{IceCube:2018cha}. In the analysis presented in this letter, the whole 10-year data are analyzed with a single joint likelihood (as described in Section \\ref{sec:analysis} but without the multiple flare feature), and the same penalization of the $p$-value is not needed in this case. 
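Schematically, and in our own notation, this penalization amounts to a linear trial correction of the form
\\[
p_{\\mathrm{post}} \\simeq \\min\\!\\left(1,\\; p_{\\mathrm{pre}}\\,\\frac{T_{\\mathrm{live}}^{\\mathrm{tot}}}{T_{\\mathrm{live}}^{\\mathrm{sample}}}\\right) \\simeq p_{\\mathrm{pre}}\\times\\frac{10~\\mathrm{yr}}{3~\\mathrm{yr}},
\\]
whereas no such factor is needed when the ten years are combined in a single joint likelihood.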
As seen in Table~\\ref{tab:TXS_comparisons}, it can be stated that the results obtained in the two cases are comparable and that the strategy adopted to combine the different samples is not responsible for a substantial change in significance.\n\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{>{\\centering\\arraybackslash}m{5cm} >{\\centering\\arraybackslash}m{3.5cm} >{\\centering\\arraybackslash}m{3.5cm}}\n \\multicolumn{3}{c}{TXS 0506+056 change in significance}\\\\\n \\hline\n \\hline\n \\multirow{3}{*}{\\parbox{4.2cm}{\\centering Different datasets (multi-flare, 2012-2015 only)}} & \\multirow{2}{*}{\\parbox{3.5cm}{\\centering \\MA{{\\tt PSTracks v2}}\\\\(\\cite{IceCube:2018cha})}} & \\multirow{2}{*}{\\parbox{3.5cm}{\\centering \\MA{{\\tt PSTracks v3}}\\\\(This work)}}\\\\\n & & \\\\\n & $4.0~\\sigma$ & $2.6~\\sigma$ \\\\[3pt]\n \\hline\n \\multirow{6}{*}{\\parbox{3cm}{\\centering Different algorithms (2012-2015 only)}} &\\multirow{2}{*}{\\parbox{3.5cm}{\\centering Single-flare\\\\(\\cite{IceCube:2018cha})}} & \\multirow{2}{*}{\\parbox{3.5cm}{\\centering Multi-flare\\\\(This work)}}\\\\\n & & \\\\\n & \\multicolumn{2}{c}{\\MA{{\\tt PSTracks v2}}}\\\\\n & $4.0~\\sigma$ & $4.0~\\sigma$ \\\\\n & \\multicolumn{2}{c}{\\MA{{\\tt PSTracks v3}}}\\\\\n & $2.7~\\sigma$ & $2.6~\\sigma$ \\\\\n \\hline\n \\multirow{3}{*}{\\parbox{5cm}{\\centering Strategy of sample combination (single-flare, \\MA{{\\tt PSTracks v3}})}} & \\multirow{2}{*}{\\parbox{3.5cm}{Separate likelihoods\\\\(\\centering\\cite{IceCube:2018cha})}} & \\multirow{2}{*}{\\parbox{3.5cm}{\\centering Joint likelihood\\\\(This work)}}\\\\\n & & \\\\\n & $2.2~\\sigma$ (post-trial) & $2.3~\\sigma$ \\\\\n \\hline\n \\hline\n\\end{tabular}\n\\caption{Results of the comparison between the significance obtained for TXS 0506+056 when using an analysis with features similar to the one in \\cite{IceCube:2018cha} and the one presented in this paper. When testing the impact of different datasets, the years 2012-2015 are analyzed with the multi-flare algorithm. \nWhen testing the impact of a different strategy in the combination of the samples, the single-flare algorithm is used on the dataset \\MA{{\\tt PSTracks v3}}: in one case only the IceCube sample containing the known flare is analyzed and the p-value penalized, adopting the same logic as in~\\cite{IceCube:2018cha}; in the other case all the 10-year samples are combined in a joint likelihood, as described in Section~\\ref{sec:analysis}, and no penalization is needed.}\n\\label{tab:TXS_comparisons}\n\\end{table}\n\n\n\n\n\\section{\\textit{A posteriori} comparisons with the time-integrated analysis}\n\\label{app:variab}\n\nThe results of these time-dependent analyses, despite unveiling new features of the source catalog, partly overlap with the results of the time-integrated search~\\citep{Aartsen:2019fau}. In fact, the time-dependent and time-integrated analyses are based on similar likelihood functions, sharing the same space and energy PDFs, but the time-dependent analysis distinguishes itself by adding a time PDF. This time-dependent analysis was planned together with the time-integrated analysis, and it was not developed based on the time-integrated unblinded results. Nonetheless, one might wonder how the results of the time-dependent analysis can be interpreted in the light of the prior knowledge of the time-integrated results. To address such a question, two tests are proposed in this Appendix. 
A first test estimates the time variability of the four most significant sources of the time-integrated analysis. A second test estimates the probability of obtaining the observed pre-trial significance of $3.8~\\sigma$ from a time-dependent binomial test (see Section~\\ref{sec:results}) on the source catalog, under the assumption that the neutrino excess observed by the time-integrated analysis~\\citep{Aartsen:2019fau} does not have any time structure. Both tests exploit the same approach, based on producing pseudo-realizations of the data by randomizing the time of the events and, unlike for the standard time-dependent analysis, keeping fixed the associated equatorial coordinates.\n\n\\paragraph{\\bf Time-variability test:}\nThis test aims at quantifying the time variability of the highly-significant events detected from the directions of NGC 1068, TXS 0506+056, PKS 1424+240 and GB6 J1542+6129 and at testing the compatibility of their arrival times with a flat distribution.\nThis test is sensitive only to the time information of the events and is unavoidably less sensitive than the time-dependent search described in Section~\\ref{sec:analysis} (referred to as the standard time-dependent analysis), which is sensitive to energy, space and time information. Moreover, the significance of the likelihood method using the three variables at the same time is not equivalent to the product of likelihood methods that use one variable at a time. \n\nThe null (or background) hypothesis of the time-variability test assumes that the time-integrated signal-like events (i.e. the events with the highest time-integrated signal-over-background ratio, which contribute most to the significance around each source direction) are not clustered in time. Pseudo-realizations of the data for this null hypothesis (also called background samples, which are used to count trials) are obtained similarly to the standard time-dependent analysis: events in a declination band around the location of the tested sources are selected and assigned a new time taken from a real up time of the detector. This procedure destroys any time correlation among events. However, while the standard analysis keeps the local coordinates of an event (azimuth and zenith) fixed and recalculates the right ascension using the new randomized time, the time-variability test freezes the equatorial coordinates of the events at the measured values, and randomizes the azimuth (notably, at the South Pole the zenith angle corresponds to an equatorial coordinate and does not depend on time). This method guarantees that the same time-integrated signal-like events from the direction of a given source are present in the background sample with randomized times. On the other hand, this method flattens out the sub-daily modulation of the event rate in local coordinates due to the increased reconstruction efficiency along azimuth directions where more strings are aligned. As described in Section~\\ref{sec:analysis}, in the standard analysis this sub-daily modulation of the event rate is taken into account by using a correction in local coordinates to the background PDF. The azimuth dependency of the reconstruction efficiency is averaged out for flares longer than $\\sigma_T=0.2$ days as a consequence of the Earth's rotation, while it might induce a change of up to 5\\% in the TS for flares as short as $\\sigma_T = 10^{-3}$ days. 
Given that the variability observed for the four most significant time-integrated sources was beyond a flare duration of $\\sigma_T\\gg 1$ day, a lower limit $\\sigma_T^{min} = 0.2$ days is used for this time-variability test.\n\nWhereas for the standard analysis signal samples are produced by injecting Monte Carlo events on top of the background events, for the time-variability test $n_s$ events among real signal-like events are selected in the data and their times are sampled from a Gaussian distribution. The real signal-like events, potentially usable for signal injection in the time-variability test, are randomly chosen among the $2\\hat{N}_s^{t-int}$ events with the highest time-integrated signal-over-background ratio, where $\\hat{N}_s^{t-int}$ is the best-fit number of signal-like events reported by the time-integrated analysis~\\citep{Aartsen:2019fau}.\n\nThe likelihood in Eq.~\\ref{eq:10-year-likelihood} is maximized on the background and signal samples of the time-variability test and the corresponding TS distributions (for illustration at the location of NGC 1068) are shown in Fig.~\\ref{fig:ts_comparison}, for comparison with the same distributions for the standard analysis. For both analyses the separation of the signal and background TS is better for shorter flares (left plots) than longer ones (right plots). A notable feature concerns the background TS distributions in blue. For the standard analysis the TS distribution has a characteristic spike in the first bin populated by under-fluctuations set to zero. On the other hand, the TS distribution for the time-variability test is on average shifted towards larger values of TS, showing a more signal-like behavior. This is a consequence of preserving the same time-integrated space and energy variables of signal-like events in the background sample with the method described above. \nIt is to be noted that the time-integrated analysis in~\\cite{Aartsen:2019fau} fits a spectral index of NGC 1068 of 3.16, while the best-fit spectral index for the time-dependent analysis is harder, namely 2.8 (see Tab.~\\ref{tab:PS_results1}). As a consequence of preserving the spatial and energy information of the events, the background and signal samples of the time-variability test (used to make the distributions in the last row in Fig.~\\ref{fig:time-variability_comparison}) have a varying spectral index centered around 2.8. Notably, about 89\\% of the spectral indices of the 100,000 generated background samples are contained between $\\gamma^f=2$ and of $\\gamma^f=3$. Hence, these values of the spectral indices are used for the signal injection in the standard analysis when comparing the TS distributions of the standard analysis with the same distributions of the time-variability test in Fig.~\\ref{fig:time-variability_comparison}. In general, for harder spectral indices and the same flare duration $\\sigma_T^f$, the time-variability test characterizes the difference between background and signal less powerfully than the standard analysis. In fact, in the time-variability test the coordinates of the events are frozen to the true values, hence the differences between the spatial and energy PDFs of signal and background are not exploited, unlike for the standard analysis. 
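To make the background randomization of the time-variability test concrete, the snippet below re-draws event times uniformly from the detector up times while keeping the recorded equatorial coordinates and energies fixed; the data layout and function names are illustrative assumptions, not the actual analysis code.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def scramble_times(events, uptimes):
    """Background pseudo-realization for the time-variability test.

    `events` is a structured array with fields 'ra', 'dec', 'energy', 'time';
    `uptimes` is a list of (start, stop) MJD intervals. Only the times are
    re-drawn, which destroys any time clustering among the events.
    """
    scrambled = events.copy()
    starts = np.array([u[0] for u in uptimes])
    stops = np.array([u[1] for u in uptimes])
    durations = stops - starts
    # Pick an up-time interval with probability proportional to its duration,
    # then draw a uniform time inside it.
    idx = rng.choice(len(uptimes), size=len(events), p=durations / durations.sum())
    scrambled['time'] = starts[idx] + durations[idx] * rng.random(len(events))
    return scrambled
\\end{verbatim}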
\n\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[width=.95\\linewidth]{figures\/TSdistributions.png}\n\t\\caption{Comparison of the TS distributions for signals of different intensity $n_s$ and for the background between the standard analysis (first and second row) and the time-variability test (third row) at the location of NGC 1068. The left plots are made for a flare duration of $\\sigma_T=1$~d, the right plots for 100~d. Spectral indices of $\\gamma^f=2$ (first row) and $\\gamma^f=3$ (second row) are used for the injected signal in the standard analysis.}\n\t\\label{fig:ts_comparison}\n\\end{figure}\n\n \nTo complete the comparison between the standard time-dependent analysis and the time-variability test, the sensitivity and $5~\\sigma$ DP at the location of NGC 1068 are shown for the two analyses in Fig.~\\ref{fig:time-variability_comparison}. The times of signal events are sampled from a Gaussian distribution with fixed mean $t_0=58000$ MJD and variable width $\\sigma_T$. This plot can be understood by observing the TS distributions in Figure~\\ref{fig:ts_comparison}.\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[width=.6\\linewidth]{figures\/sens_DP_time-variability_VS_std-ana.png}\n\t\\caption{Comparison of the sensitivity (dashed lines) and $5~\\sigma$ DP (solid lines) of the standard analysis (blue and orange, respectively for $\\gamma=2$ and $\\gamma=3$) and the time-variability test (green lines) in terms of the time-integrated flux per flare $F_0^f$ described in equation~\\ref{eq:time-integrated_flux}. These curves are produced at the location of NGC 1068 under the hypothesis of a single signal flare. Notice that the reconstructed value of the spectral index for NGC 1068 in \\cite{Aartsen:2019fau} is 3.16.}\n\t\\label{fig:time-variability_comparison}\n\\end{figure}\n\nFor each of the four aforementioned sources, the likelihood in Eq.~\\ref{eq:10-year-likelihood} is maximized on the real data and an observed TS is reported. A pre-trial p-value for the time-variability test is then evaluated by counting the fraction of generated background samples with TS larger than the observed TS. The post-trial p-value of each source is obtained by applying a Sidak correction (\\cite{Abdi2007}) to the pre-trial p-values with penalty factor 4 (the number of sources). The results of this test are shown in Table~\\ref{tab:time_var_analysis}. None of the four sources shows a significant time variability for the signal-like neutrino events. \n\n\\begin{table}\n\t\\centering\n\t\\begin{tabular}{>{\\centering\\arraybackslash}m{2.8cm} >{\\centering\\arraybackslash}m{2.8cm} >{\\centering\\arraybackslash}m{2.8cm} }\n\t\t\\multicolumn{3}{c}{Time-variability results}\\\\\n\t\t\\hline\n\t\t\\hline\n\t\tSource & Pre-trial p-value & post-trial p-value\\\\[3pt] \\hline\n\t\t\\vspace{3pt}\n\t\tNGC 1068 & 0.13 & 0.43 \\\\[3pt]\n\t\tTXS 0506+056 & 0.24 & 0.67\\\\ [3pt]\n\t\tPKS 1424+240 & 0.33 & 0.80 \\\\[3pt]\n\t\tGB6 J1542+6129 & 0.029 & 0.11 \\\\[3pt]\n\t\t\\hline\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Results of the time-variability test applied to the four most significant sources of the time-integrated analysis of Ref.~\\cite{Aartsen:2019fau}. The table shows the p-values before (pre-trial) and after (post-trial) the Sidak correction with penalty factor 4. 
As described in this Appendix, the time-variability test only assesses the time distribution of the recorded events, by comparing with simulated samples in which the event directions and energies remain fixed as recorded, but times are randomized according to a uniform distribution.}\n\t\\label{tab:time_var_analysis}\n\\end{table}\n\nIt is worth noticing the case of M87: this source was an under-fluctuation for the time-integrated analysis, with no signal-like events identified in \\citep{Aartsen:2019fau}, but it shows up as the most significant source of the catalog when a time-dependent analysis is performed. Although a time-variability test is not made in this case, with $\\hat{n}_s=3$ signal-like neutrino events in a time scale of $\\hat{\\sigma}_T=2.0$ minutes, almost the entire significance of this source is expected to come from the time variability of the detected events.\n\n\\paragraph{\\bf Posterior time-dependent binomial test:} The second test determines the \\textit{a posteriori} probability that the time-dependent binomial test (see Section~\\ref{sec:analysis} referred to as \"standard\" in this Appendix) produces a pre-trial significance as high or higher than the observed value of $3.8~\\sigma$, in the assumption that the time-integrated neutrino excess is steady in time (background hypothesis). To do so, the same binomial test described in Section~\\ref{sec:analysis} is repeated on the list of p-values of the Northern sources. The per-source p-values are computed in the same way, by comparing the TS of each source with a distribution of TS from fully-scrambled (randomized times and recalculated right ascensions) background samples at the respective declination. As a matter of fact, the binomial p-value of the data for this test (referred to as \"posterior binomial test\") is the same as reported in Section~\\ref{sec:results} ($3.8\\sigma$). Nevertheless, the difference between the standard and the posterior binomial test is in the realization of the background samples used to translate the binomial p-value into a post-trial p-value.\n\nIn the posterior binomial test, background pseudo-realizations of the data for all the Northern sources of the catalog are obtained in the same way as described for the time-variability test: the times of the events are randomized while the equatorial coordinates are fixed at the recorded values, such that the spatial correlations among the events are preserved and the time-integrated information is effectively incorporated in the background hypothesis. For each pseudo-realization of the Northern catalog, the likelihood in Eq.~\\ref{eq:10-year-likelihood} is maximized at the location of each source and the corresponding TS is converted into a pre-trial p-value as described in Section~\\ref{sec:analysis}, by comparison with a distribution of TS from fully-scrambled background samples at the same declination. The lower limit on the flare duration $\\sigma_T^{min}$ is removed in this test to allow a proper comparison with the standard time-dependent binomial test. As a consequence, the azimuth-dependent correction to the background spatial PDF is neglected. However, this is a minor correction that has an impact at most of 5\\% only for time scales of the flares as short as $\\sim10^{-3}$ days.\n\nOnce a pre-trial p-value is computed for all the sources in a particular pseudo-realization of the Northern catalog, the binomial test is performed on this set of p-values, resulting in a background binomial p-value $P_{bin}$. 
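For reference, a minimal sketch of such a binomial scan is given below; it assumes the conventional definition in which, for each $k$, one computes the probability of finding at least $k$ of the $N$ per-source p-values at or below the $k$-th smallest one (details such as the treatment of under-fluctuating sources are omitted here).
\\begin{verbatim}
import math

def binomial_scan(p_values):
    """Return (k_best, P_bin) minimizing the binomial probability of observing
    at least k p-values at or below the k-th smallest one among N sources.

    A sketch of a standard binomial scan, not the exact analysis convention.
    """
    p_sorted = sorted(p_values)
    n = len(p_sorted)
    best_k, best_pbin = 1, 1.0
    for k, p_k in enumerate(p_sorted, start=1):
        # Probability of >= k successes in n Bernoulli trials with probability p_k.
        p_bin = sum(math.comb(n, j) * p_k**j * (1.0 - p_k)**(n - j)
                    for j in range(k, n + 1))
        if p_bin < best_pbin:
            best_k, best_pbin = k, p_bin
    return best_k, best_pbin
\\end{verbatim}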
This method is then repeated on many pseudo-realizations of the Northern catalog to produce the distribution of background binomial p-values for the posterior binomial test shown in blue in Fig.~\\ref{fig:binomial_p-value_distr}. These p-values are the typical binomial p-values that the binomial test produces if the neutrino events of the data (including the time-integrated excess) have no time structure. For comparison, the orange histogram in Fig.~\\ref{fig:binomial_p-value_distr} is the distribution of background binomial p-values for the standard binomial test, used in Section~\\ref{sec:results} to calculate the post-trial binomial p-value in the assumption that the time-integrated information is also randomized. Note that when the time-integrated information is preserved, the overall distribution is shifted towards higher values of $-\\log_{10}(P_{bin})$ as a consequence of including the additional information about the time-integrated excess in the background samples.\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[width=.8\\linewidth]{figures\/binomial_test_comparison.png}\n\t\\caption{Background distribution of the binomial p-value $P_{bin}$ for the posterior (blue) and standard (orange) binomial test. For the posterior binomial test, the background sample is produced by randomizing the time of the events while keeping fixed the equatorial coordinates; for the standard analysis, the equatorial coordinates are recalculated (assuming fixed local coordinates) after the time is randomized.}\n\t\\label{fig:binomial_p-value_distr}\n\\end{figure}\n\nFinally, the probability that the time-dependent binomial test produces a more significant result than the one observed in the real data ($3.8~\\sigma$ pre-trial), given the prior knowledge about the time-integrated excess and under the assumption that the neutrino events do not have any time correlation, is estimated by counting the fraction of background binomial p-values of the posterior binomial test that are more significant than the observed result. Such estimation leads to a probability of $0.9\\%$.\n\n\\section{Multi-flare algorithm}\n\\label{sec:multi-flare_algorithm}\n\n\nThe multi-flare algorithm aims at determining the number of flares to fit in the data. This is done by evaluating the TS of clusters of events with the highest signal-over-background ratio of the spatial and energy PDFs and selecting as candidate flares those that pass a given TS threshold. On the one hand, a high value of TS threshold is required to suppress background fluctuations (fake flares), on the other hand a low value is desired to avoid the rejection of signal flares of low intensity.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=.80\\linewidth]{figures\/background_reco_flares_VS_declinations.png} \n\t\\caption{Fraction of trials in which, under the null hypothesis, 1 single flare (blue line) or more than 1 flare (orange line) are reconstructed as a function of the declination if a TS threshold of 2 is applied to select the candidate flares.}\n\t\\label{fig:bkg_reco_flares}\n\\end{figure}\n\nThis multi-flare algorithm selects as candidate flares the cluster of events with the highest TS and all additional clusters of events passing a TS threshold of 2. The choice of this threshold ensures a high efficiency in rejecting fake flares, with a frequency of multiple flare reconstruction under the null hypothesis of $\\lesssim 0.1\\%$ as shown in Fig.~\\ref{fig:bkg_reco_flares}. 
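A minimal sketch of this selection rule is given below; the data structure and the handling of the threshold are illustrative assumptions, not the analysis implementation.
\\begin{verbatim}
def select_candidate_flares(clusters, ts_threshold=2.0):
    """Keep the highest-TS cluster plus any other cluster with TS >= threshold.

    `clusters` is a list of (cluster_id, ts) pairs.
    """
    if not clusters:
        return []
    best = max(clusters, key=lambda c: c[1])
    others = [c for c in clusters if c is not best and c[1] >= ts_threshold]
    return [best] + others
\\end{verbatim}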
Such a high rejection efficiency makes it possible to preserve a sensitivity and a DP comparable to those of the single-flare algorithm, as shown in Fig.~\\ref{fig:single_VS_multi_sensDP} at the declination of TXS 0506+056. Note additionally that if only one candidate flare is selected, the multi-flare algorithm is completely equivalent to the single-flare algorithm. \n\n\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=.49\\linewidth]{figures\/sensitivity_singleflare_VS_multiflare.png}\n\t\\includegraphics[width=.49\\linewidth]{figures\/5sigmaDP_singleflare_VS_multiflare.png}\n\t\\caption{Sensitivity (left) and discovery potential (right) of the single-flare (orange lines) and multi-flare (blue lines) algorithm as a function of the flare duration $\\sigma_T$. Sensitivity and discovery potential are evaluated at the declination of TXS 0506+056 under the hypothesis of a single signal flare with a spectrum $E^{-2}$ (solid lines) and $E^{-3}$ (dashed lines). The bottom plots show the ratio of the multi-flare to single-flare curves above.}\n\t\\label{fig:single_VS_multi_sensDP}\n\\end{figure}\n\nTo quantify the goodness of the multi-flare reconstruction, two quantities are introduced: the multi-flare efficiency, defined as the fraction of trials in which all the signal flares are identified (regardless of whether additional fake flares are also reconstructed), and the multi-flare purity, defined as the fraction of trials in which no fake flares are reconstructed (regardless of whether all the signal flares are identified). The former is an indicator of how often the algorithm is able to identify \\textit{all} the signal flares injected in the data; the latter is an indicator of how well the algorithm is able to reject fake flares. Note that a partially reconstructed flare is considered a fake flare in the estimation of efficiency and purity. These two quantities are shown in Fig.~\\ref{fig:efficiency_and_purity} under the hypothesis of two flares of equal intensity as a function of the time-integrated flux of each flare, for spectra $E^{-2}$ and $E^{-3}$ and for some values of $\\sigma_T$. The efficiency asymptotically reaches the value of 1: if the signal is strong enough, the algorithm is always able to identify it. However, the flux required to reach such an asymptotic \\textit{plateau} depends on the parameters of the flare (spectral index $\\gamma$ and flare duration $\\sigma_T$), and notably in extreme cases (soft spectra, long flare duration) the convergence is very slow, as a consequence of the high TS threshold. Nevertheless, note that in such extreme cases the flare intensity is mostly below the sensitivity level. The purity also tends to an asymptotic \\textit{plateau} at $\\gtrsim 95\\%$, at a rate that depends on the flare parameters.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=.49\\linewidth]{figures\/working_point_efficiency_gamma2.png} \n\t\\includegraphics[width=.49\\linewidth]{figures\/working_point_purity_gamma2.png}\n\t\\includegraphics[width=.49\\linewidth]{figures\/working_point_efficiency_gamma3.png} \n\t\\includegraphics[width=.49\\linewidth]{figures\/working_point_purity_gamma3.png}\n\t\\caption{Efficiency (left plots) and purity (right plots) under the hypothesis of two flares of equal intensity as a function of the time-integrated flux of each flare. 
Efficiency and purity are calculated for a spectrum $E^{-2}$ (top plots, declination of TXS 0506+056) and $E^{-3}$ (bottom plots, declination of NGC 1068) and for some values of $\\sigma_T$ (see legend). Efficiency is defined as the fraction of trials in which \\textit{all} the injected flares are correctly reconstructed (no matter if additional fake flares are also reconstructed); purity is defined as the fraction of trials in which no fake flares are reconstructed. Note that a partially reconstructed flare is considered as a fake flare when calculating the efficiency and purity.}\n\t\\label{fig:efficiency_and_purity}\n\\end{figure}\n\n\n\\section{Sensitivity, discovery potential and upper limits}\n\\label{sec:sens_DP_upLims}\n\nThe sensitivity and discovery potential (DP) are evaluated by injecting a fake signal in the dataset and looking at the signal-like TS distributions. The sensitivity is defined as the signal flux required such that the resulting TS is greater than the background median in 90\\% of the trials; the $5~\\sigma$ DP is defined as the signal flux required such that the resulting TS is greater than the $5~\\sigma$ threshold of the background TS distribution in 50\\% of the trials. The sensitivity and $5~\\sigma$ discovery potential (DP) of the multi-flare analysis as a function of the declination are shown in Fig.~\\ref{fig:sensDP} for a single (left) and a double (right) signal flare hypothesis. In the latter case, the intensity and spectral shape of the two flares are the same.\n\nThe sensitivity and $5~\\sigma$ DP are expressed in terms of a time-integrated flux:\n\\begin{equation}\n \\label{eq:time-integrated_flux}\n F = \\int_{T_{live}}E^2\\Phi(E,t)dt=\\sum_{f=\\mathrm{flares}}F_0^f\\left(\\frac{E}{\\mathrm{TeV}}\\right)^{2-\\gamma^f},\n\\end{equation}\nwhere $F_0^f$ is the time-integrated flux normalization factor of the $f$-th flare, independent of the flare duration $\\sigma_T^f$ and carrying the units of an energy divided by an area, and $\\Phi(E,t)$ is the overall flux, defined as the sum of the flux of all the flares from a single direction:\n\\begin{equation}\n \\label{eq:flux_definition}\n \\Phi(E,t)=\\sum_{f=\\mathrm{flares}}\\frac{F_0^f}{\\sqrt{2\\pi}\\sigma_T^f}\\left(\\frac{E}{\\mathrm{TeV}}\\right)^{-\\gamma^f}G^f(t|t_0^f,\\sigma_T^f).\n\\end{equation}\nIn Eq. \\ref{eq:flux_definition}, $G^f(t|t_0^f,\\sigma_T^f)=\\exp\\left[-\\frac{1}{2}\\left(\\frac{t-t_0^f}{\\sigma_T^f}\\right)^2\\right]$ is the Gaussian time profile of the $f$-th flare.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=.49\\linewidth]{figures\/timeInt_sensitivity_singleflare.png} \n\t\\includegraphics[width=.49\\linewidth]{figures\/timeInt_sensitivity_doubleflare.png} \n\t\\caption{Sensitivity (dashed lines) and $5~\\sigma$ DP (solid lines) of the multi-flare analysis vs declination, expressed in terms of the flux normalization factor per flare $F_0^f$ defined in Eq.~\\ref{eq:time-integrated_flux}, under the hypothesis of a single (left plot) and a double (right plot) signal flare. \n\n\tThe assumed energy dependence of the flares has a spectral index of $\\gamma^f = 2$ and $\\gamma^f = 3$ (see labels), and a flare duration of $\\sigma_T^f = 1$~day (blue lines) and $\\sigma_T^f = 100$~days (orange lines). 
The double-flare case\n\tassumes two identical and well-separated flares.}\n\t\\label{fig:sensDP}\n\\end{figure}\n\nSensitivities and DPs are shown in Fig.~\\ref{fig:sensDP} for two different hypotheses of the spectral index of the flares ($\\gamma^f=2$ and $\\gamma^f=3$) and two different flare durations ($\\sigma_T^f=1$ day and $\\sigma_T^f=100$ days). In the double-flare case, two identical and well-separated flares are assumed, with the same spectral index $\\gamma^f$, flare duration $\\sigma_T^f$ and time-integrated flux normalization per flare $F_0^f$.\n\nThe 90\\% confidence level (CL) upper limits on the flux of each source of the catalog are defined as the flux required to produce a TS distribution that exceeds the unblinded TS of the respective source in 90\\% of the trials. These upper limits are expressed in terms of a time-integrated flux by means of the factor $F_{90\\%}$, defined as:\n\n\\begin{equation}\n \\label{eq:time-int_flux_upLims}\n F = F_{90\\%}\\sum_{f=\\mathrm{flares}}\\left(\\frac{E}{\\mathrm{TeV}}\\right)^{2-\\gamma^f}.\n\\end{equation}\n\nIn the case of TXS 0506+056, the only observed multi-flare source of the catalog, the upper limits are evaluated assuming the same flare intensity for the two flares. Accordingly, only one global factor $F_{90\\%}$ appears in Eq.~\\ref{eq:time-int_flux_upLims}.\n\nThe upper limits for the sources of the catalog that do not under-fluctuate, together with the coordinates, maximum-likelihood parameters and pre-trial p-values, are reported in Table~\\ref{tab:PS_results1}. To calculate these upper limits, a spectral index $\\gamma^f=2$ in Eq. \\ref{eq:time-int_flux_upLims} is assumed for all the flares, whereas the flare time $t_0^f$ and duration $\\sigma_T^f$ are taken as the maximum-likelihood parameters. Only one flare is injected for each source, except for TXS 0506+056, for which two flares are injected, according to the maximum-likelihood results.\n\n\\setlength\\LTleft{-3cm}\n\\begin{center}\n\t\\begin{longtable}{>{\\centering\\arraybackslash}m{3.1cm} >{\\centering\\arraybackslash}m{1.0cm} >{\\centering\\arraybackslash}m{1.0cm} >{\\centering\\arraybackslash}m{1.1cm} >{\\centering\\arraybackslash}m{1.0cm} >{\\centering\\arraybackslash}m{2.3cm} >{\\centering\\arraybackslash}m{2.2cm} >{\\centering\\arraybackslash}m{1.5cm} >{\\centering\\arraybackslash}m{1.9cm}}\n\t\\hline\n\t\t\\multicolumn{9}{|c|}{catalog results}\\\\ \\hline\n\t\tSource & R.A. & $\\delta$ & $\\hat{n}_s$ & $\\hat{\\gamma}$ & $\\hat{t}_0$ & $\\hat{\\sigma}_T$ & $-\\log_{10}(p_{loc})$ & $F_{90\\%}\\times10^{4}$\\\\\n\t\t & [ deg ] & [ deg ] & & & [ MJD ] & [ days ] & & [ TeV cm$^{-2}$ ] \\\\ [3pt]\n\t\t \\midrule\n\t\t\\endfirsthead\n\t\t\\midrule\n\t\tSource & R.A. 
& $\\delta$ & $\\hat{n}_s$ & $\\hat{\\gamma}$ & $\\hat{t}_0$ & $\\hat{\\sigma}_T$ & $-\\log_{10}(p_{loc})$ & $F_{90\\%}\\times10^{4}$\\\\\n\t\t & [ deg ] & [ deg ] & & & [ MJD ] & [ days ] & & [ TeV cm$^{-2}$ ] \\\\[3pt]\n\t\t\\midrule\n\t\t\\endhead\n\t\tS5 0716+71 & 110.49 & 71.34 & -- & -- & -- & -- & -- & --\\\\\n\t\tS4 1749+70 & 267.15 & 70.10 & -- & -- & -- & -- & -- & --\\\\\n\t\tM82 & 148.95 & 69.67 & 27.8 &4.0 & 57395.8 & 200.0 & 1.51 & 5.7\\\\\n\t\t1ES 1959+650 & 300.01 & 65.15 & 3.9 &3.3 & 55028.4 & $1.8\\times10^{-1}$ & 2.21 & 3.8\\\\\n\t\t\\textbf{GB6 J1542+6129} & \\textbf{235.75} & \\textbf{61.50} & $\\mathbf{23.7^{+9.7}_{-7.9}}$ & $\\mathbf{2.7^{+0.5}_{-0.3}}$ & $\\mathbf{57740^{+80}_{-60}}$ & $\\mathbf{147^{+110}_{-25}}$ & \\textbf{2.67} & \\textbf{5.3}\\\\\n\t\tPG 1246+586 & 192.08 & 58.34 & -- & -- & -- & -- & -- & --\\\\\n\t\tTXS 1902+556 & 285.80 & 55.68 & 3.2 &4.0 & 54862.5 & 6.0 & 0.46 & 3.6\\\\ \n\t\t4C +55.17 & 149.42 & 55.38 & 11.2 &3.6 & 58303.7 & 59.7 & 1.00 & 2.5\\\\ \n\t\tS4 1250+53 & 193.31 & 53.02 & 6.1 &2.2 & 55062.9 & 35.9 & 0.74 & 3.7\\\\ \n\t\t1ES 0806+524 & 122.46 & 52.31 & 6.5 &3.1 & 55248.3 & 43.3 & 0.39 & 2.8\\\\ \n\t\t1H 1013+498 & 153.77 & 49.43 & 3.1 &2.2 & 58053.6 & $2.7\\times10^{-1}$ & 0.41 & 1.2\\\\ \n\t\tB3 1343+451 & 206.40 & 44.88 & 4.0 &2.7 & 57856.5 & $2.8\\times10^{-1}$ & 0.49 & 1.2\\\\ \n\t\tMG4 J200112+4352 & 300.30 & 43.89 & 11.6 &2.0 & 56776.2 & 105.9 & 1.00 & 2.6\\\\\n\t\t3C 66A & 35.67 & 43.04 & -- & -- & -- & -- & -- & --\\\\\n\t\tS4 0814+42 & 124.56 & 42.38 & 3.4 &2.6 & 56301.3 & 3.1 & 0.47 & 1.3\\\\ \n\t\tBL Lac & 330.69 & 42.28 & 3.8 &4.0 & 54637.6 & 5.6 & 0.48 & 2.5\\\\\n\t\t2HWC J2031+415 & 307.93 & 41.51 & 18.8 & 3.4 & 58056.8 & 114.0 & 0.93 & 2.4\\\\\n\t\tNGC 1275 & 49.96 & 41.51 & -- & -- & -- & -- & -- & --\\\\\n\t\tB3 0609+413 & 93.22 & 41.37 & 8.7 &1.7 & 56736.2 & 163.7 & 0.90 & 2.5\\\\ \n\t\tM31 & 10.82 & 41.24 & 8.6 &2.3 & 57900.7 & 16.4 & 1.29 & 2.1\\\\\n\t\tTXS 2241+406 & 341.06 & 40.96 & 3.8 &2.9 & 55334.5 & $2.5\\times10^{-1}$ & 0.55 & 1.7\\\\\n\t\tGamma Cygni & 305.56 & 40.26 & 5.8 &1.5 & 57336.8 & 13.0 & 0.95 & 1.8\\\\\n\t\tMkn 501 & 253.47 & 39.76 & -- & -- & -- & -- & -- & --\\\\\n\t\tB3 0133+388 & 24.14 & 39.10 & -- & -- & -- & -- & -- & --\\\\\n\t\tMkn 421 & 166.12 & 38.21 & 2.9 &2.1 & 54875.0 & $7.6\\times10^{-1}$ & 1.23 & 2.8\\\\\n\t\t4C +38.41 & 248.82 & 38.14 & 6.2 &2.1 & 56751.6 & 9.0 & 0.60 & 1.5\\\\ \n\t\tMG2 J201534+3710 & 303.92 & 37.19 & 3.9 &1.3 & 57326.7 & 129.4 & 0.45 & 1.8\\\\ \n\t\tMGRO J2019+37 & 304.85 & 36.80 & 4.2 &1.4 & 57330.9 & 135.0 & 0.40 & 1.7\\\\\n\t\tB2 0218+357 & 35.28 & 35.94 & -- & -- & -- & -- & -- & --\\\\\n\t\tB2 2114+33 & 319.06 & 33.66 & -- & -- & -- & -- & -- & --\\\\\n\t\tB2 1520+31 & 230.55 & 31.74 & 5.0 &2.4 & 55999.0 & 2.7 & 0.66 & 1.2\\\\\n\t\tNGC 598 & 23.52 & 30.62 & 4.9 &1.8 & 56520.7 & 33.0 & 0.75 & 1.7\\\\\n\t\tPG 1218+304 & 185.34 & 30.17 & 2.0 &2.4 & 54647.8 & $2.1\\times10^{-2}$ & 1.12 & 2.1\\\\ \n\t\tB2 1215+30 & 184.48 & 30.12 & 2.0 &2.4 & 54647.8 & $2.1\\times10^{-2}$ & 1.21 & 2.2\\\\\n\t\tTon 599 & 179.88 & 29.24 & 2.0 &1.7 & 55024.2 & $3.0\\times10^{-3}$ & 0.45 & 1.2\\\\\n\t\tMG2 J043337+2905 & 68.41 & 29.10 & -- & -- & -- & -- & -- & --\\\\\n\t\t4C +28.07 & 39.48 & 28.80 & -- & -- & -- & -- & -- & --\\\\\n\t\tW Comae & 185.38 & 28.24 & 3.1 &3.4 & 55682.4 & 1.5 & 0.49 & 1.2\\\\\n\t\tTXS 0141+268 & 26.15 & 27.09 & -- & -- & -- & -- & -- & --\\\\\n\t\tON 246 & 187.56 & 25.30 & -- & -- & -- & -- & -- & --\\\\\n\t\t1ES 0647+250 & 102.70 & 25.06 & -- & -- & -- 
& -- & -- & --\\\\\n\t\tPKS 1441+25 & 220.99 & 25.03 & 4.1 &1.7 & 56994.7 & 105.6 & 0.69 & 1.8\\\\ \n\t\tPKS 1424+240 & 216.76 & 23.80 & 17.9 &2.8 & 57182.6 & 200.0 & 1.00 & 2.2\\\\\n\t\tS2 0109+22 & 18.03 & 22.75 & 4.6 &4.0 & 55153.2 & $9.2\\times10^{-1}$ & 0.93 & 1.6\\\\\n\t\tCrab nebula & 83.63 & 22.01 & -- & -- & -- & -- & -- & --\\\\\n\t\t4C +21.35 & 186.23 & 21.38 & 2.0 &2.3 & 55690.3 & $1.2\\times10^{-3}$ & 0.64 & 0.9\\\\\n\t\tTXS 0518+211 & 80.44 & 21.21 & -- & -- & -- & -- & -- & --\\\\\n\t\tRGB J2243+203 & 340.99 & 20.36 & 11.2 &3.6 & 57300.1 & 33.0 & 0.81 & 1.5\\\\ \n\t\tOJ 287 & 133.71 & 20.12 & 3.6 &2.6 & 56416.8 & $8.4\\times10^{-1}$ & 0.72 & 1.0\\\\ \n\t\tPKS 1717+177 & 259.81 & 17.75 & 2.0 &3.3 & 54587.2 & $2.0\\times10^{-1}$ & 0.45 & 1.3\\\\ \n\t\tOX 169 & 325.89 & 17.73 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0735+17 & 114.54 & 17.71 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0235+164 & 39.67 & 16.62 & -- & -- & -- & -- & -- & --\\\\\n\t\t3C 454.3 & 343.50 & 16.15 & 5.1 &2.0 & 56119.1 & 200.0 & 0.46 & 1.3\\\\\n\t\t4C +14.23 & 111.33 & 14.42 & 3.1 &2.0 & 58076.6 & 1.2 & 0.81 & 1.0\\\\\n\t\tPSR B0656+14 & 104.95 & 14.24 & -- & -- & -- & -- & -- & --\\\\\n\t\t\\textbf{M87} & \\textbf{187.71} & \\textbf{12.39} & $\\mathbf{3.0^{+2.0}_{-1.4}}$ &$\\mathbf{4.0^{+0.9}_{-0.9}}$ & $\\mathbf{57730.031^{+0.001}_{-0.001}}$ & $\\mathbf{1.4^{+1.3}_{-0.4}\\times10^{-3}}$ & \\textbf{3.35} & \\textbf{0.9}\\\\\n\t\t1H 1720+117 & 261.27 & 11.88 & -- & -- & -- & -- & -- & --\\\\\n\t\tCTA 102 & 338.15 & 11.73 & -- & -- & -- & -- & -- & --\\\\\n\t\tPG 1553+113 & 238.93 & 11.19 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 2032+107 & 308.85 & 10.94 & -- & -- & -- & -- & -- & --\\\\\n\t\tMG1 J021114+1051 & 32.81 & 10.86 & 2.8 &2.1 & 56179.2 & $8.9\\times10^{-1}$ & 0.52 & 0.9\\\\ \n\t\t1RXS J194246.3+1 & 295.70 & 10.56 & 4.2 &3.4 & 54904.8 & 24.3 & 0.51 & 1.4\\\\\n\t\tPKS 1502+106 & 226.10 & 10.50 & 9.8 &2.5 & 55509.5 & 21.6 & 1.97 & 1.8\\\\ \n\t\tOT 081 & 267.87 & 9.65 & 9.7 &2.9 & 57751.6 & 45.7 & 0.79 & 1.3\\\\\n\t\tRX J1931.1+0937 & 292.78 & 9.63 & -- & -- & -- & -- & -- & --\\\\\n\t\tOG +050 & 83.18 & 7.55 & -- & -- & -- & -- & -- & --\\\\\n\t\tMGRO J1908+06 & 287.17 & 6.18 & 2.9 &2.1 & 57045.2 & 4.6 & 0.63 & 0.9\\\\\n\t\tPKS 0019+058 & 5.64 & 6.14 & -- & -- & -- & -- & -- & --\\\\\n\t\t\\multirow{2}{*}{\\textbf{TXS 0506+056}} & \\multirow{2}{*}{\\textbf{77.35}} & \\multirow{2}{*}{\\textbf{5.70}} &$\\mathbf{10.0^{+5.2}_{-4.2}}$ & $\\mathbf{2.2^{+0.3}_{-0.3}}$ & $\\mathbf{57000^{+30}_{-30}}$ & $\\mathbf{62^{+27}_{-27}}$ & \\multirow{2}{*}{\\textbf{2.77}} & \\multirow{2}{*}{\\textbf{1.7}}\\\\ & & & $\\mathbf{7.6^{+6.1}_{-5.8}}$ & $\\mathbf{2.6^{+0.5}_{-0.6}}$ & $\\mathbf{58020^{+40}_{-40}}$ & $\\mathbf{42^{+42}_{-28}}$ & & \\\\\n\t\tPKS 0502+049 & 76.34 & 5.00 & 2.7 &2.0 & 57072.1 & 1.2 & 0.81 & 0.9\\\\\n\t\tMG1 J123931+0443 & 189.89 & 4.73 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0829+046 & 127.97 & 4.49 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 1502+036 & 226.26 & 3.44 & 2.0 &2.9 & 54606.9 & $3.4\\times10^{-1}$ & 0.53 & 1.2\\\\\n\t\tHESS J1857+026 & 284.30 & 2.67 & 3.6 &2.3 & 54984.4 & $2.0\\times10^{-1}$ & 0.71 & 0.9\\\\\n\t\t3C 273 & 187.27 & 2.04 & -- & -- & -- & -- & -- & --\\\\\n\t\tOJ 014 & 122.87 & 1.78 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0215+015 & 34.46 & 1.74 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0736+01 & 114.82 & 1.62 & -- & -- & -- & -- & -- & --\\\\\n\t\t4C +01.02 & 17.16 & 1.59 & -- & -- & -- & -- & -- & --\\\\\n\t\t4C +01.28 & 164.61 & 1.56 & -- & -- & -- & -- & 
-- & --\\\\\n\t\tGRS 1285.0 & 283.15 & 0.69 & 6.5 &2.8 & 54808.6 & 87.3 & 0.39 & 1.9\\\\\n\t\tPKS 0422+00 & 66.19 & 0.60 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS B1130+008 & 173.20 & 0.58 & -- & -- & -- & -- & -- & --\\\\\n\t\tPMN J0948+0022 & 147.24 & 0.37 & 2.0 &2.4 & 55610.7 & $4.3\\times10^{-4}$ & 0.90 & 0.6\\\\ \n\t\tHESS J1852-000 & 283.00 & 0.00 & 5.4 &2.8 & 54751.9 & 100.3 & 0.38 & 1.9\\\\\n\t\t\\textbf{NGC 1068} & \\textbf{40.67} & \\textbf{-0.01} & $\\mathbf{23.0^{+8.7}_{-7.9}}$ &$\\mathbf{2.8^{+0.3}_{-0.3}}$ & $\\mathbf{56290^{+90}_{-80}}$ & $\\mathbf{198^{+64}_{-64}}$ & \\textbf{2.65} & \\textbf{1.9}\\\\\n\t\tHESS J1849-000 & 282.26 & -0.02 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0440-00 & 70.66 & -0.29 & 6.2 &2.6 & 57896.8 & 66.8 & 0.51 & 0.9\\\\\n\t\tPKS 1216-010 & 184.64 & -1.33 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0420-01 & 65.83 & -1.33 & -- & -- & -- & -- & -- & --\\\\\n\t\tNVSS J190836-012 & 287.20 & -1.53 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0336-01 & 54.88 & -1.77 & -- & -- & -- & -- & -- & --\\\\\n\t\tS3 0458-02 & 75.30 & -1.97 & 4.6 &2.5 & 56974.6 & $7.0\\times10^{-1}$ & 0.65 & 0.7\\\\ \n\t\tNVSS J141826-023 & 214.61 & -2.56 & 3.7 &2.9 & 57733.0 & $3.4\\times10^{-1}$ & 0.44 & 0.6\\\\\n\t\tPKS 2320-035 & 350.88 & -3.29 & 10.8 &3.2 & 56176.8 & 160.2 & 0.57 & 1.1\\\\\n\t\tHESS J1843-033 & 280.75 & -3.30 & -- & -- & -- & -- & -- & --\\\\[3pt]\n\t\t\\midrule\n\t\tPKS 1329-049 & 203.02 & -5.16 & -- & -- & -- & -- & -- & --\\\\\n\t\tHESS J1841-055 & 280.23 & -5.55 & -- & -- & -- & -- & -- & --\\\\\n\t\t3C 279 & 194.04 & -5.79 & -- & -- & -- & -- & -- & --\\\\\n\t\tHESS J1837-069 & 279.43 & -6.93 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0805-07 & 122.07 & -7.86 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 1510-089 & 228.21 & -9.10 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0048-09 & 12.68 & -9.49 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0727-11 & 112.58 & -11.69 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 2233-148 & 339.14 & -14.56 & 2.0 &2.8 & 54877.5 & $2.6\\times10^{-3}$ & 1.04 & 12.0\\\\ \n\t\tNGC 253 & 11.90 & -25.29 & 4.1 &2.5 & 56511.7 & 22.7 & 0.52 & 8.7\\\\\n\t\tNGC 4945 & 196.36 & -49.47 & 2.0 &1.9 & 54739.8 & $2.4\\times10^{-1}$ & 0.63 & 55.3\\\\\n\t\tLMC & 80.00 & -68.75 & -- & -- & -- & -- & -- & --\\\\\n\t\tSMC & 14.50 & -72.75 & -- & -- & -- & -- & -- & --\\\\\n\t\t\\hline\\hline\n\t\n\t\t\\caption{Coordinates (Right Ascension R.A. and declination $\\delta$), maximum-likelihood flare parameters, logarithm of the local pre-trial p-values $p_{loc}$ of the sources of the catalog and the 90\\% CL upper limits on the time-integrated flux $F_{90\\%}$ (in units of TeV cm$^{-2}$) defined in equation~\\ref{eq:time-int_flux_upLims} for an $E^{-2}$ spectrum. Under-fluctuating results are shown with hyphens. For the four sources that give rise to the $3.0~\\sigma$ excess of the binomial test in the Northern hemisphere (highlighted in bold), the fit parameters are shown with the confidence interval at $68\\%$ CL. A line is used to separate the Northern from Southern sources. The parameters of the flare from TXS 0506+056 at 58020 MJD and related to the neutrino alert ($n_s=7.6$, $\\gamma=2.6$, $\\sigma_T=42$ days) are different from those reported in \\cite{IceCube:2018cha}, when the data available for analysis extended up to 40 days after the central time of the flare. 
This analysis includes 7 additional months and reconstructs a longer, more significant flare associated with the same alert.}\n\t\t\\label{tab:PS_results1}\n\t\\end{longtable}\n\\end{center}\n\n\\section{Estimation of the single-flare significance of TXS 0506+056}\n\\label{sec:singleflare_significance}\n\nThis Appendix is intended to describe how the single-flare significances of the two flares of TXS 0506+056, that are shown in Fig.~\\ref{fig:best_fit_flares}, are estimated.\n\nBy factorizing the background PDF, the multi-flare likelihood ratio $\\Lambda_{mf}^{-1}$ in Eq.~\\ref{eq:teststatistic} can be written as follows:\n\\begin{equation}\n \\label{eq:likelihood_ratio_simple}\n \\Lambda^{-1}_{mf}=\\frac{\\mathcal{L}(\\vec{\\hat{n}}_s, \\vec{\\hat{\\gamma}}, \\vec{\\hat{t}}_0,\\vec{\\hat{\\sigma}}_T)}{\\mathcal{L}(n_s=0)}=\\prod_{j=\\mathrm{sample}}\\prod_{i=\\mathrm{1}}^{N_j}\\left(1+\\sum_{f=\\mathrm{flares}}\\mathcal{F}^f_{i,j}\\right), \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\mathcal{F}_{i,j}^f \\coloneqq\\frac{n_s^f(\\mathcal{S}^f_{i,j}\/\\mathcal{B}_{i,j}-1)}{N}.\n\\end{equation}\nThe single-flare signal and background PDFs in Eq.~\\ref{eq:likelihood_ratio_simple} are the same as in Eq.~\\ref{eq:multi-likelihood}, but for the sake of clarity here they explicitly show the flare ($f$), event ($i$) and sample ($j$) indices. In addition, the dependency on the parameters, being the same as in Eq.~\\ref{eq:multi-likelihood}, is omitted to simplify the notation.\n\n\nFor TXS 0506+056 there are two identified flares, thus $\\sum_f \\mathcal{F}^f_{i,j}=\\mathcal{F}^1_{i,j}+\\mathcal{F}^2_{i,j}$. In addition, when an event $i$ does not contribute significantly to $\\mathcal{F}^f_{i,j}$, then $\\mathcal{F}^f_{i,j}\\sim10^{-6}\\text{--}10^{-4}$. Since an event can contribute significantly only to one flare, the crossed terms $\\mathcal{F}^1_{i,j}\\mathcal{F}^2_{i,j}$ can be neglected and it is meaningful to retain only terms at first order in $\\mathcal{F}^f_{i,j}$. Based on these observations, the likelihood ratio in Eq. \\ref{eq:likelihood_ratio_simple} can be well approximated as:\n\\begin{equation}\n \\Lambda^{-1}_{mf}=\\prod_{j=\\mathrm{sample}}\\prod_{i=\\mathrm{1}}^{N_j}\\left(1+\\mathcal{F}^1_{i,j}+\\mathcal{F}^2_{i,j}\\right)\\simeq\n \\prod_{j=\\mathrm{sample}}\\prod_{i=\\mathrm{1}}^{N_j}\\left(1+\\mathcal{F}^1_{i,j}\\right)\\left(1+\\mathcal{F}^2_{i,j}\\right)=\\left(\\Lambda_{sf}^{f=1}\\right)^{-1}\\left(\\Lambda_{sf}^{f=2}\\right)^{-1}.\n\\end{equation}\nThus, it can be factorized into single-flare components, that are equivalent to the multi-flare likelihood ratio when only one flare is considered. This result can be easily generalised to $N_f>2$ flares.\n\nSuch a factorization can be exploited to disentangle the contribution of each flare to the multi-flare TS in Eq. 
\\ref{eq:teststatistic}:\n\\begin{equation}\n \\mathrm{TS}\\simeq-2\\log\\left[\\frac{1}{2}\\prod_{f=\\mathrm{flares}}\\left(\\frac{T_{live}}{\\hat{\\sigma}_T^fI\\left[\\hat{t}_0^f,\\hat{\\sigma}_T^f\\right]}(\\Lambda_{sf}^f)^{-1}\\right)\\right]=-2\\sum_{f=\\mathrm{flares}}\\log\\left[\\left(\\frac{1}{2}\\right)^{1\/N_f}\\frac{T_{live}}{\\hat{\\sigma}_T^fI\\left[\\hat{t}_0^f,\\hat{\\sigma}_T^f\\right]}(\\Lambda_{sf}^f)^{-1}\\right]=\\sum_{f=\\mathrm{flares}}\\mathrm{TS}_{sf}^{f},\n\\end{equation}\nwhere $\\mathrm{TS}_{sf}^f$ is the contribution of the $f$-th flare to the multi-flare TS and can be interpreted as a single-flare TS.\n\nThe single-flare significance $\\sigma_{sf}^f$ can be obtained in the same way as the multi-flare significance, but using the single-flare TS instead of the multi-flare TS. Assuming that the two flares of TXS 0506+056 are independent, one might expect to retrieve the multi-flare TS by summing the single-flare TS linearly and to retrieve the multi-flare significance $\\sigma_{mf}$ by summing the single-flare significances in quadrature. Although this is effectively observed for the TS, the summation in quadrature of the single-flare significances results in a mismatch of nearly 2.5\\% with respect to the multi-flare significance. To correct for this mismatch, the single-flare significance is redefined as $\\sigma^{\\prime f}_{sf}$ through the following relation:\n\\begin{equation}\n \\frac{\\sigma^{\\prime 1}_{sf}}{\\sigma^{\\prime 2}_{sf}}=\\frac{\\sigma^1_{sf}}{\\sigma_{sf}^2}, \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\sqrt{\\left(\\sigma^{\\prime 1}_{sf}\\right)^2+\\left(\\sigma^{\\prime 2}_{sf}\\right)^2}=\\sigma_{mf}.\n\\end{equation}\n\nFor TXS 0506+056 this method is used to disentangle the single-flare significances $\\sigma^{\\prime f}_{sf}$ of the two flares shown in Fig.~\\ref{fig:best_fit_flares}.\n\n\\section{Discussion on the multi-messenger context}\n\\label{sec:MM}\n\nAs shown in Section~\\ref{sec:results}, M87 is the most significant source of the catalog, exhibiting 3 events over a time scale of minutes with a post-trial significance of $1.7~\\sigma$. It is one of the closest ($z=0.00436$) potential cosmic ray accelerators, hosting a supermassive black hole of $6.5\\times10^9M_\\odot$. Its jet, hosted by a large elliptical Fanaroff-Riley type I radio galaxy in the Virgo cluster, was already observed more than a century ago~\\citep{blanford_agn}.\nM87 has been observed in the $>100$~GeV energy region: VERITAS detected a flare extending beyond 350~GeV with a spectral index at the peak of $2.19 \\pm 0.07$ \\citep{Aliu_2012} in Apr. 2010. In a 2008 flare, a clear correlation between the X-ray emission and the TeV one was observed \\cite{Acciari_2008,Albert:2008kb}. Previous positive detections were reported by HEGRA in 1998\/99 above 700 GeV~\\citep{2003A&A...403L...1A}, and up to $\\sim 10$~TeV by H.E.S.S. in 89 hours of observation between 2003 and 2006, showing variability on the time scale of a few days in the 2005 high state, associated with the Schwarzschild radius of M87 \\cite{Aharonian_2006}. Recently, MAGIC reported the results of the monitoring of M87 for 156 hours in 2012-15 \\cite{MAGIC2020}. It is worth noting that HAWC set an upper limit above 2 TeV with 760 d of data. The non-observation of gamma-rays at $>$~TeV energies may indicate a cut-off in the spectrum. Such a cut-off may differ for neutrinos, which are less affected by the absorption in the source and by the extra-galactic background light. 
\n\nThe gamma-ray observations from M87 are summarized in Fig.~\\ref{fig:MM}, together with the 10-year time-integrated upper limits on the neutrino flux estimated in~\\cite{Aartsen:2019fau} for a spectrum of the form $dN\/dE\\sim E^{-2}$. \n\nPrecise radio observations \\cite{Sikora_2016} indicate a persistent central ridge structure, namely a spine flow in the interior of the M87 jet, in addition to the well-known limb-brightening profile; this structure needs further measurements. A composite structure of the jet has also been hypothesized for TXS 0506+056, based on observations taken months after the detection of the IceCube high-energy event that triggered its multi-wavelength observations. With millimeter-VLBI it was observed that the core jet expands in size with apparent super-luminal velocity \\cite{Ros:2019bgo}. This can be interpreted as deceleration due to proton loading from jet-star interactions in the inner host galaxy and\/or a spine-sheath structure of the jet \\cite{2005A&A...432..401G,Tavecchio:2008be}. This sort of spine-sheath structure has been advocated as a possible explanation for a neutrino flux higher than the gamma-ray one, and was also suggested for TXS 0506+056 by MAGIC \\citep{2018ApJ...863L..10A}, while single-zone models struggle to explain the 2014-2015 flare of TXS 0506+056 (see e.g. \\cite{Murase_2018,Zhang:2019htg,2018ApJ...864...84K}).\n\nOther models, e.g. \\cite{Inoue:2019yfs,Murase_2020}, have been revised to explain the more recent observations of IceCube on NGC 1068 \\citep{Aartsen:2019fau}. These models focus on the higher observed flux of IceCube neutrino events in the $\\sim 1-50$~TeV region with respect to the level of gamma-ray fluxes observed at lower energy by Fermi and the limits of MAGIC. The super-hot coronal plasma around the super-massive black hole accelerates protons, which carry a few percent of the thermal energy, through plasma turbulence \\cite{Murase_2020} or shock acceleration \\cite{Inoue:2019yfs}, leading to the production of neutrinos and gamma rays. The environment is dense enough to prevent the escape of $\\gg$ 100 MeV gamma rays, while $\\sim \\mathrm{MeV}$ gamma rays would result from their cascading down to lower energies.\nFurther insights will be needed from both messengers and across all wavelengths to better constrain the structure of the jets and the acceleration mechanisms in one or multiple zones.\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[width=0.9\\linewidth]{figures\/MM_plot.png} \n\t\\caption{The gamma-ray flux in the steady state of the source observed between 2012 and 2015 \\cite{MAGIC2020} is shown with black (Fermi-LAT) and blue (MAGIC) dots. The higher dashed lines are flux levels observed during flares (see references in the text). The purple line with downward arrows corresponds to the 10-year time-integrated upper limits taken from~\\cite{Aartsen:2019fau}, with an assumed spectrum $dN\/dE\\sim E^{-2}$.}\n\\label{fig:MM}\n\\end{figure}\n\n\n\n\\section{Results} \\label{sec:results}\n\nThe point-source search identifies M87 as the most significant source in the Northern hemisphere, with a pre-trial p-value of $p_{loc}=4.6\\times10^{-4}$, which becomes $4.3\\times10^{-2}$ ($1.7~\\sigma$) post-trial. In the Southern hemisphere, the most significant source is PKS 2233-148, with a pre-trial p-value of $p_{loc}=0.092$ and a post-trial p-value of $0.72$. TXS 0506+056 is the only source of the catalog for which 2 flares are found. 
The time profiles of the neutrino flares reconstructed by this analysis at the location of each source, together with their pre-trial significance $\\sigma_{loc}^f$, are visualized in Fig.~\\ref{fig:best_fit_flares}. For the sake of clarity, the flare significance is denoted as $\\sigma_{loc}^f$ while the overall multi-flare significance is referred to as $\\sigma_{loc}=\\sqrt{\\sum_f\\sigma_{loc}^{f2}}$. For single-flare sources (all but TXS 0506+056) the flare and multi-flare significances coincide. \n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{Gaussians_TXS_2weights.png} \n \\caption{Pre-trial flare significance $\\sigma_{loc}^f$ for the sources of the catalog. For all sources a single flare has been found, except for TXS 0506+056 for which 2 flares are found. In this case, the pre-trial significance of each individual flare is calculated as described in Appendix \\ref{sec:singleflare_significance}. The sources of the catalog with multi-flare pre-trial significance $\\sigma_{loc}\\ge2$ are labeled with their names.}\n \n \\label{fig:best_fit_flares}\n\\end{figure}\n\nThe cumulative distributions of pre-trial p-values at the location of the sources of the catalog, used as inputs to the population study, are shown in Fig.~\\ref{fig:pvalues_distribution}.\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=.49\\linewidth]{figures\/cumulative_pvals_north_distribution.png} \n \\includegraphics[width=.49\\linewidth]{figures\/cumulative_pvals_south_distribution.png} \n \\caption{Cumulative distributions of the pre-trial p-values of the sources of the catalog in the Northern (left) and Southern (right) hemispheres. The cumulative p-values of the unblinded data are shown in red and compared to the background expectations in blue.}\n \\label{fig:pvalues_distribution}\n\\end{figure}\n\nThe pre-trial binomial p-value is shown in Fig.~\\ref{fig:binomial_test} as a function of the source index $k$. The smallest binomial p-value is selected in each hemisphere and converted into a post-trial binomial p-value as described in Section~\\ref{sec:analysis}. In the Northern hemisphere the smallest pre-trial binomial p-value is $7.3\\times10^{-5}$ ($3.8~\\sigma$) when $k=4$ sources are considered (M87, TXS 0506+056, GB6 J1542+6129, NGC 1068), corresponding to a post-trial p-value of $1.6\\times 10^{-3}$ ($3.0~\\sigma$). In the Southern hemisphere the smallest pre-trial binomial p-value is 0.71, obtained by $k=1$ source (PKS 2233-148) and corresponding to a post-trial p-value of $0.89$.\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=.47\\linewidth]{figures\/northern_binomial_pval_noBkg.png} \n \\includegraphics[width=.49\\linewidth]{figures\/southern_binomial_pval_noBkg.png} \n \\caption{Pre-trial binomial p-value $P_{bin}(k)$ as a function of the source index $k$ in the Northern (left) and Southern (right) hemispheres. The edge with the under-fluctuating sources, with binomial p-value set to 1, is shown in blue.}\n \\label{fig:binomial_test}\n\\end{figure}\n\nThe results of the two searches are summarized in Table~\\ref{tab:summary_results}. 
Since no significant time-dependent excess is found, upper limits on the neutrino emission from the sources of the catalog are estimated as discussed in Appendix~\\ref{sec:sens_DP_upLims}, using Eqs.~\\ref{eq:time-integrated_flux} and~\\ref{eq:flux_definition}.\n\n\\begin{table}\n\\centering\n\\begin{tabular}{>{\\centering\\arraybackslash}m{2.8cm} >{\\centering\\arraybackslash}m{2.5cm} >{\\centering\\arraybackslash}m{1.7cm} >{\\centering\\arraybackslash}m{2.5cm}}\n \\multicolumn{4}{c}{Summary of the results}\\\\\n \\hline\n \\hline\n \\multirow{2}{*}{Analysis} & \\multirow{2}{*}{Hemisphere} & \\multicolumn{2}{c}{p-value}\\\\ & & Pre-trial & Post-trial \\\\[3pt] \\hline\n \\multirow{2}{*}{Point-source} & North & $4.6\\times10^{-4}$ & $4.3\\times10^{-2}$ ($1.7~\\sigma$)\\\\ & South & $9.2\\times 10^{-2}$ & 0.72\\\\ [3pt] \\hline\n \\multirow{2}{*}{Binomial test} & North & $7.3\\times10^{-5}$ & $1.6\\times10^{-3}$ ($3.0~\\sigma$) \\\\ & South & $0.71$ & $0.89$\\\\\n \\hline\n \\hline\n\\end{tabular}\n\\caption{Summary of the results of the two analyses: for the point-source search, the results of the most significant sources in the Northern (M87) and Southern (PKS 2233-148) hemispheres are reported.}\n\\label{tab:summary_results}\n\\end{table}\n\\section{Introduction}\n\\label{sec:intro}\n\nMore than 100 years after their discovery, the origin and acceleration processes of cosmic rays (CRs) remain unsolved. Relevant hints exist, one being provided by a neutrino event detected by IceCube with a most probable energy of 290 TeV, which triggered follow-up gamma-ray observations~\\citep{IceCube:2018dnn}. Within the 50\\% containment region of the arrival direction of the IceCube event, these observations identified an object classified as a BL Lac, though possibly a Flat-Spectrum Radio Quasar (FSRQ) \\citep{Padovani:2019xcv}, at redshift $z = 0.34$, known as TXS 0506+056. It was in a flaring state \\citep{IceCube:2018dnn}, with a chance correlation between the neutrino event and the photon counterpart rejected at the $3~\\sigma$ level. The intriguing aspect of the possible coincidence between the neutrino event and the gamma-ray flare hints at TXS 0506+056 being a potential CR source. Additionally, in the analysis of the data prior to the event alert, IceCube found a neutrino flare of 110-day duration in 2014\/2015~\\citep{IceCube:2018cha} at a significance of $3.7~\\sigma$, if a Gaussian time window is assumed. In this case, no clear flare has been identified in available gamma-ray data from TXS 0506+056~\\citep{Aartsen:2019gxs,Glauch:2019emd}. \nThe total contribution of the observed TXS 0506+056 neutrino flares to the diffuse astrophysical flux observed by IceCube~\\citep{Aartsen:2013jdh,Aartsen:2014gkd,Aartsen:2016xlq,Aartsen:2017mau} is at most a few percent~\\citep{IceCube:2018cha}.\nIn addition, time-integrated upper limits on stacked catalogs of classes of sources (e.g. tidal disruption events \\citep{Stein:2019ivm}, blazars \\citep{Aartsen:2016lir}, gamma ray bursts \\citep{Aartsen:2017wea}, compact binary mergers \\citep{Aartsen:2020mla} and pulsar wind nebulae \\citep{Aartsen:2020eof}) \nconstrain their contribution to the measured diffuse flux. 
While these limits depend on assumptions about the emission of such classes of sources, such as their spectral shapes and their uniformity within the class, they indicate that there might be a mixture of contributing classes and still unidentified contributors.\n\nRecently, IceCube performed another analysis on neutrino sources: a time-integrated search for point-like neutrino source signals using ten years of data~\\citep{Aartsen:2019fau}. This search uses a maximum-likelihood (ML) method to test the locations of a catalog of 110 selected sources and the full sky. Intriguingly, both searches find the hottest spot to be a region including the Seyfert II galaxy NGC 1068, with a significance reported from the catalog search of $2.9~\\sigma$. Additionally, a population study of the catalog revealed a $3.3~\\sigma$-level incompatibility of the neutrino events from the directions of four Northern sources with respect to the estimated background: NGC 1068, TXS 0506+056, PKS 1424+240 and GB6 J1542+6129.\n\n\nTo fully investigate this catalog of sources, this letter shows the results of a complementary time-dependent study. Time-dependent searches are particularly interesting not only because of their better sensitivity compared to time-integrated searches for flares of duration $\\lesssim 200$ d, due to the suppression of the time-constant background of atmospheric neutrinos, but also because flare events are particularly suitable periods for neutrino production in blazars. In fact, the injection rate of accelerated protons and the density of target photon fields for photo-meson interactions can be noticeably increased during flaring periods of blazars, leading to an enhanced neutrino luminosity $L_\\nu \\propto L^{1.5\\text{--}2}_\\gamma$ (see \\cite{Zhang:2019htg} and references therein), where $L_\\gamma$ is the photon luminosity. \nApart from the aforementioned evidence of the 2014\/2015 flare from the direction of TXS 0506+056, other IceCube time-dependent searches did not find any significant excess. Nevertheless, they constrained specific emission models \\citep{Abbasi:2020dfi} or set upper limits on the neutrino emission from selected sources \\citep{Aartsen:2015wto}. Triggered searches adopt lightcurves or flare directions from gamma-ray experiments, while sky scans search for the largest flares anywhere in the sky.\nIn this paper, we extend these searches to a multiple-flare scan based on a ML method. \n\\section{Conclusions} \\label{sec:conclusions}\n\nThe time-dependent point-source search presented in this letter identified M87 as the most significant source in the Northern hemisphere, with $\\hat{n}_s=3$ signal-like neutrino events in a time window of $\\hat{\\sigma}_T=2.0$ minutes and with a soft spectrum ($\\hat{\\gamma}=3.95$). The post-trial significance of M87 is found to be $1.7~\\sigma$. Because of the very short time separation between the events, the time-dependent search is more sensitive than the time-integrated one, which explains the absence of significant signals in previous IceCube time-integrated analyses that had included M87. For the case of~\\cite{OSullivan:2019rpq}, a smaller data sample from Apr. 26, 2012 to May 11, 2017 was used. 
The difference in significance is due to small changes in the event reconstruction and angular uncertainty estimation between the two samples.\n\nThis analysis also identifies the two known flares at the location of TXS 0506+056, one corresponding to the most significant flare at $\\sim 57000$ MJD \\citep{IceCube:2018cha} and the other related to the high-energy event alert IceCube-170922A detected on 22 Sep. 2017 \\citep{IceCube:2018dnn}. Although these two flares are consistently identified, the significance of the result at the location of TXS 0506+056 is lower than the one reported in~\\citep{IceCube:2018cha}. This is due to the new data selection \\citep{Abbasi:2021bvk} described in Section~\\ref{sec:detector}, which introduces an energy reconstruction different from the one used in the past~\\citep{Abbasi:2021bvk}. Further information about the reduced significance of TXS 0506+056 resulting from this analysis is provided in Appendix~\\ref{sec:TXS_significance_investigation}.\n\n\n\nThe time-dependent binomial test of the Northern hemisphere suggests an incompatibility at $3.0~\\sigma$ significance of the neutrino events from four sources with respect to the overall Northern background expectation. Of the four most significant sources in the Northern hemisphere, three are common with the time-integrated analysis~\\citep{Aartsen:2019fau}, namely NGC 1068, TXS 0506+056, GB6 J1542+6129, whereas a fourth source (M87) is different and shows strong time-dependent behavior. However, the results of the time-dependent and time-integrated binomial tests partly overlap, as both share the same space and energy PDFs in the likelihood definition in Eq.~\\ref{eq:multi-likelihood} and both select the same three out of four sources. For this reason, although a time-dependent structure of the data is suggested by the binomial test, a time-independent scenario cannot be excluded by this analysis (see Appendix~\\ref{app:variab} for a further discussion).\n\n\nNo significant result is found in the Southern hemisphere. This is consistent with the lower sensitivity due to the substantially larger background of atmospheric muons in the Southern hemisphere.\n\\section{Data analysis methods}\n\\label{sec:analysis}\n\nThe presented analyses are based on an unbinned ML method similar to previous IceCube analyses, extended to allow the detection of multiple flares and to handle different IceCube samples (IC40, IC59, IC79, IC86-I, IC86-II-VII) with different detector configurations. 
Since each IceCube sample is independent, the total 10-year likelihood $\\mathcal{L}$ is defined as the product of the likelihoods of the individual IceCube samples $\\mathcal{L}_j$:\n\\begin{equation}\n \\label{eq:10-year-likelihood}\n \\mathcal{L}(\\vec{n}_s, \\vec{\\gamma}, \\vec{t}_0, \\vec{\\sigma}_T)=\\prod_{j=\\mathrm{sample}}\\mathcal{L}_j(\\vec{n}_{s,j}, \\vec{\\gamma}, \\vec{t}_0, \\vec{\\sigma}_T),\n\\end{equation}\nwhere $\\mathcal{L}_j$ is defined as\n\n\\begin{equation}\n\\label{eq:multi-likelihood}\n\\mathcal{L}_j(\\vec{n}_{s,j}, \\vec{\\gamma}, \\vec{t}_0, \\vec{\\sigma}_T) = \\prod_{i=1}^{N_j}\\left[\\frac{\\sum_{f=\\mathrm{flares}}n_{s,j}^f\\mathcal{S}_j(|\\vect{x_s}-\\vect{x_i}|,\\sigma_i, E_i,t_i; \\gamma^f, t_0^f, \\sigma_T^f)}{N_j}+\\left(1-\\frac{\\sum_fn_{s,j}^f}{N_j}\\right)\\mathcal{B}_j(\\sin\\delta_i, E_i)\\right] .\n\\end{equation}\n\nFor each flare $f$, the likelihood in Eq.~\\ref{eq:10-year-likelihood} is a function of four parameters described below: the total number of signal-like events in the flare $n_s^f$, the flare spectral index $\\gamma^f$, the flaring time $t_0^f$ and the flare duration $\\sigma_T^f$. They are denoted with an arrow in the likelihood arguments to indicate that there are as many sets of these four parameters as the number of flares. For each flare $f$, $n_{s,j}^f$ in Eq.~\\ref{eq:multi-likelihood} denotes the partial contribution of the $j$-th sample to the total number of signal-like events in that flare, such that $n_s^f=\\sum_j n_{s,j}^f$. Such a partial contribution $n_{s,j}^f$ is estimated from the relative effective area of the IceCube configuration of the $j$-th sample (determined by Monte Carlo simulations of the detector and varying with spectral index and declination) and the fraction of the $f$-th flare that overlaps with the data-taking period of the $j$-th sample.\n\nFor each IceCube sample $j$, with $N_j$ total events, the likelihood in Eq.~\\ref{eq:multi-likelihood} is constructed from a single-flare signal probability density function (PDF) $\\mathcal{S}_j$, weighted by $n_{s,j}^f$ and summed over all flares from a source (multi-flare signal PDF), and a background PDF $\\mathcal{B}_j$. The single-flare signal PDF and the background PDF are the product of space, energy and time PDFs, as also described in \\cite{Aartsen:2015wto}. The spatial signal PDF assumes a cluster of events distributed according to a 2D Gaussian around the source position $\\vect{x_s}$, with $\\sigma_i$ being the estimated angular uncertainty on the $\\vect{x_i}$ position of the $i$-th event. For the signal energy PDF, which depends on the declination $\\delta_i$ and the energy proxy $E_i$ of the events (the energy as measured by IceCube from visible light released in the detector by muon tracks), an unbroken power law $\\propto E^{-\\gamma^f}$ is used. The spectral index $\\gamma^f$ is bounded within $1\\le\\gamma^f\\le4$ and can be different for each flare $f$. The signal time PDF of each flare $f$ is provided by a one-dimensional Gaussian $\\propto \\exp{[-(t_i-t_0^f)^2\/(2\\sigma_T^{f2})]}$, where $t_i$ is the time of the $i$-th event. Its normalization is such that the integral of the time PDF across the up times of each IceCube sample is 1. The central time of each Gaussian flare $t_0^f$ is constrained within the 10-year period of the analyzed data and the flare duration $\\sigma_T^f$ cannot exceed an upper limit of 200 days, above which time-integrated searches are more sensitive than time-dependent ones. 
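\n\nAs a purely illustrative sketch (not the analysis code used by IceCube), the snippet below shows how the per-sample multi-flare likelihood of Eq.~\\ref{eq:multi-likelihood} could be assembled in Python, assuming that the spatial and energy signal PDF values, the background PDF values and the event times have already been computed; all function and variable names are placeholders.\n\\begin{verbatim}
import numpy as np

def gaussian_time_pdf(t, t0, sigma_t, norm=1.0):
    # One-dimensional Gaussian flare time profile; `norm` stands in for the
    # normalization over the sample up-times, assumed precomputed here.
    return norm * np.exp(-(t - t0)**2 / (2.0 * sigma_t**2)) / (
        np.sqrt(2.0 * np.pi) * sigma_t)

def sample_log_likelihood(ns_j, space_energy_sig, t, bkg, flares):
    # ns_j             : per-flare signal events attributed to this sample
    # space_energy_sig : (N_events, N_flares) spatial*energy signal PDF values
    # t                : event times, shape (N_events,)
    # bkg              : background PDF values, shape (N_events,)
    # flares           : list of (t0, sigma_t) pairs, one per flare
    n_events = len(t)
    sig = np.zeros(n_events)
    for k, (t0, sigma_t) in enumerate(flares):
        sig += ns_j[k] * space_energy_sig[:, k] * gaussian_time_pdf(t, t0, sigma_t)
    per_event = sig / n_events + (1.0 - np.sum(ns_j) / n_events) * bkg
    return np.sum(np.log(per_event))
\\end{verbatim}\n\n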
For computational efficiency, the signal time PDF of each flare is truncated at $\\pm 4\\sigma_T^f$, where the flare can be considered concluded.\n\nThe spatial background PDF is obtained through a data-driven method by scrambling the time of the events and correcting the right ascension accordingly, assuming fixed local coordinates (azimuth, zenith). It depends only on the declination $\\delta_i$ of the events and it is uniform in right ascension. Due to the natural tendency of the reconstruction to be more efficient if the direction of the source is aligned with the strings of the detector, an azimuth-dependent correction is applied to the spatial background PDF. Such a correction is relevant for time scales shorter than one day, whereas it is negligible for longer time scales, since any azimuth dependency is averaged out by the Earth's rotation. The background energy PDF is taken from scrambled data as well, and it is fully described in~\\cite{Aartsen:2013uuv}. It depends on the declination $\\delta_i$ and the energy proxy $E_i$ of the events. The background time PDF is uniform, as expected for atmospheric muons and neutrinos if seasonal sinusoidal variations are neglected. The maximal amplitude of these variations is 10\\% for the downgoing muons produced in the polar atmosphere and smaller for atmospheric neutrinos coming from all latitudes \\citep{Gaisser:2013lrk}.\n\nThe test statistic (TS) is defined as:\n\\begin{equation}\n\\label{eq:teststatistic}\n\\mathrm{TS}=-2\\ln\\left[\\frac{1}{2}\\left(\\prod_{f=\\mathrm{flares}}\\frac{T_{live}}{\\hat{\\sigma}_T^f I\\left[\\hat{t}_0^f, \\hat{\\sigma}_T^f\\right]}\\right)\\times\\frac{\\mathcal{L}(\\vec{n}_s=\\vec{0})}{\\mathcal{L}(\\vec{\\hat{n}}_s, \\vec{\\hat{\\gamma}}, \\vec{\\hat{t}}_0, \\vec{\\hat{\\sigma}}_T)}\\right] ,\n\\end{equation} \nwhere the parameters that maximize the likelihood function in Eq.~\\ref{eq:10-year-likelihood} are denoted with a hat and $\\mathcal{L}(\\vec{n}_s=\\vec{0})$ is the background likelihood, obtained from Eq.~\\ref{eq:10-year-likelihood} by setting $n_s^f=0$ for all the flares.\nThe likelihood ratio is multiplied by a marginalization term intended to penalize short flares, similar to the one used in previous time-dependent single-flare IceCube analyses to correct a natural bias of the likelihood towards selecting short flares. This was discussed in~\\cite{Braun:2009wp} for the single-flare analysis. For the multi-flare analysis, the numerical factor $1\/2$ in the equation above is chosen such that the marginalization term has the same form as the single-flare one when the true hypothesis is a single flare.\n\nThe average probability distribution $\\left< P_n(t) \\right>$ is obtained by averaging $P_n(t)$ over all possible initial coin states. However, we observe that we get exactly the same result by only taking into account any pair of orthogonal coin states. This is due to the fact that the average probability distribution resulting from two walks starting with any two orthogonal coin states at the origin is equal to the one resulting from the evolution of a completely mixed coin state. (The resulting distribution is symmetric since the completely mixed coin state at the origin is reflection invariant.) Also, in the long-time limit, the bound states stay in the vicinity of the origin, whereas the extended states get spread over the infinite position space, yielding probabilities going to zero. 
Based on these facts, we can obtain an analytic expression to estimate the long-time behaviour of $\\left< P_n \\right>$ by projecting the evolved state onto the bound subspace and averaging the corresponding probabilities over two orthogonal initial states, such that\n\\begin{eqnarray}\n\\left< P_0 \\right > &=& \\frac{1}{2}\\left[(1-\\lambda_+^2)^2 + (1-\\lambda_-^2)^2\\right]~~\\textrm{and} \\\\\n\\left< P_n \\right >\n&=&\n\\frac{1}{4}\n[\\lambda_+^{2|n|-2} (1+\\lambda_+^2) (1-\\lambda_+^2)^2\n\\nonumber \\\\\n&+&\n\\lambda_-^{2|n|-2} (1+\\lambda_-^2) (1-\\lambda_-^2)^2],\n\\label{eq:Pn}\n\\end{eqnarray}\nwhere $n\\neq 0$ and non-zero probabilities appear for even (odd) sites only after an even (odd) number of steps. To quantify the localisation, we utilize the participation ratio of the averaged probability distribution, which is given by\n\\begin{equation}\n\\mathrm{PR} = \\sum_n \\left< P_n \\right>^2.\n\t\\label{eq:PR}\n\\end{equation}\nFor a uniform probability distribution over $N$ sites, PR yields its minimum value $\\sim N^{-1}$. At the other extreme of localisation at one site, PR takes its maximum value of one. In figure~\\ref{fig:average_loc}, the numerical results for the PR (green solid curve) and $\\left< P_0 \\right>$ (orange dashed curve) for $150$ steps are shown. Both of them are calculated by using the average probability distribution $\\left< P_n \\right>$, which is averaged over a pair of orthogonal initial coin states as we mentioned before. We also provide the analytic prediction of PR (black dots) for the long-time behaviour using (\\ref{eq:Pn}) and (\\ref{eq:PR}), which slightly differs from its numerical simulation, whereas we omit that of $\\left< P_0 \\right>$ for clarity since it exactly fits the numerical data. First of all, both curves exhibit similar behaviour with respect to $\\phi$, and the behaviour of $\\left< P_0 \\right>$ points out that localisation occurs around the impurity site. They get maximized at $\\phi=\\pi$ and vanish at the standard quantum walk limit $\\phi=0,2\\pi$. The kinks at $\\phi=\\pi\/2,3\\pi\/2$ are due to bound states appearing or disappearing in this model as discussed previously. This behaviour matches exactly that of the effective localisation length determined by the bound states in figure~\\ref{fig:bandStr}(b), which consequently shows that the localisation properties of the walk in the long-time limit are determined by the number and character of the stationary bound states. The slight difference between the numerical and analytical results of PR stems from the finite number of time steps in the numerical simulation and the fact that the contribution from the extended states is completely excluded in the analytical expression. As a consequence of this, the numerical data stays above the analytical prediction. For example, as we approach the standard walk case, the wavefunction for a finite-step walk stays relatively ``localised'' in comparison to that of the long-time case, which spreads infinitely over the position space without any localisation. Hence, in this limit the numerical prediction would also approach zero for the standard walk. The very good agreement between the numerical and analytical results in figure~\\ref{fig:average_loc} implies that the effect of the extended states on the PR is negligible even after $150$ steps.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.85]{fig4.pdf}\n\\caption{The numerical results for the participation ratio (PR) and the average probability at the origin $\\left< P_0 \\right>$ with respect to $\\phi$ after $150$ steps. 
The analytical prediction for PR (black dots) is also provided.\n\\label{fig:average_loc}}\n\\end{figure}\n\n\\subsection{Non-Markovianity}\n\nWe now turn our attention to the non-Markovian behaviour of the dynamics of the coin for the quantum walk with a phase impurity. As mentioned before, we are interested in the effects of localised bound states and their symmetry on the degree of non-Markovianity of the reduced coin evolution. In order to quantify the amount of memory effects in the open system dynamics from different perspectives, we will comparatively study two well-established measures of quantum non-Markovianity that are based on the information flow dynamics between the coin and the spatial degrees of freedom.\n\nLet us first briefly discuss how to characterize the non-Markovian nature of an open system evolution and identify the existence of possible memory effects in the dynamics. Assume that we have a quantum map $\\Lambda(t,0)$, i.e., a completely positive trace preserving (CPTP) map describing the evolution of the open quantum system. A dynamical map is said to be divisible if it satisfies the decomposition rule $\\Lambda(t,0) = \\Lambda(t,s) \\Lambda(s,0)$, where $\\Lambda(t,s)$ is a CPTP map for all $s\\leq t$. Markovian or so-called memoryless dynamical maps are recognized as the ones that satisfy this decomposition rule. On the other hand, when the divisibility rule is violated, i.e., when $\\Lambda(t,s)$ is not a CPTP map or when it does not even exist, then the dynamical map $\\Lambda$ is said to be non-divisible and the evolution it describes non-Markovian. The concept of divisibility can also be discussed in the context of discrete dynamics, such as quantum walks, where $t,s \\in \\mathbb{N}$~\\cite{luoma15}.\n\nThe first non-Markovianity measure that we utilize in our work is known as the Breuer-Laine-Piilo (BLP) measure~\\cite{breuer09}, which is based on the idea of distinguishability of two open system states under a given dynamical evolution. In this approach, the changes in the distinguishability between two arbitrary initial states of the open system during the dynamics are interpreted as the information flow between the open system and its environment. In particular, if the distinguishability between the initial states decreases monotonically in time throughout the evolution, the dynamics is said to be Markovian, since in this case information flows from the open system to its environment in a monotonic fashion. However, if the distinguishability temporarily increases during the dynamics, then this is understood as a back-flow of information from the environment to the open system, giving rise to non-Markovian memory effects. The distinguishability of two systems can be quantified through the trace distance between their density matrices $\\rho_1$ and $\\rho_2$ as\n\\begin{equation}\nD(\\rho_1, \\rho_2)\\!=\\!\n\\frac{1}{2}\n||\\rho_1\\!-\\!\\rho_2||_1\n\\!=\\!\n\\frac{1}{2}\n\\Tr \\left[(\\rho_1\\!-\\!\\rho_2)^{\\dagger} (\\rho_1\\!-\\!\\rho_2)\\right]^{1\/2}\n\\label{eq:trace_dist}\n\\end{equation}\nwhich acquires its maximum value of one when the states $\\rho_1$ and $\\rho_2$ are orthogonal. At this point, we should stress that since CPTP maps are contractions for the trace distance, the BLP measure vanishes for divisible maps, resulting in a memoryless evolution. However, we also emphasize that it is possible for the trace distance to decrease monotonically for certain non-divisible maps as well. 
Therefore, as is well known in the recent literature, even though widely used as a measure for non-Markovianity on its own, the BLP measure is actually a witness for the non-divisibility of quantum dynamical maps. The BLP measure can be expressed in discrete time as \\cite{luoma15}\n\\begin{equation}\n{\\cal{N}}\n=\n\\max_{\\rho_{1,2}}\n\\sum_{t, \\Delta D>0} \\Delta D_t\n=\n\\sum_{t} \\Delta D_t \\Theta(\\Delta D_t),\n\\label{eq:nonmarkov}\n\\end{equation}\nwhere $\\Theta(x)$ denotes the Heaviside step function,\n\\begin{equation}\n\\Delta D_t\n=\nD(\\rho_{1,t}, \\rho_{2,t})-D(\\rho_{1,t-1}, \\rho_{2,t-1}),\n\\end{equation}\nand the maximization is carried out over all possible initial state pairs. It has been shown that the pair which maximizes the sum in (\\ref{eq:nonmarkov}) is a pair of orthogonal states~\\cite{wissmann12}. In our analysis, we study the reduced system dynamics of a pair of such initial states, namely, $\\ket{\\psi_{S,A}}$ introduced before, with opposite reflection symmetry, which will later be revealed as the optimal initial state pair maximizing the BLP measure.\n\nThe time evolution of $\\rho^\\mathrm{coin}_{S,A}$ is particularly easy to visualize because the parametrization $\\rho^\\mathrm{coin}_t = (I + \\vec{r}_t\\cdot \\vec{\\sigma})\/2$ has only one non-zero component, i.e., $r_{x,t}$, throughout the time evolution, which is shown in figure~\\ref{fig:spinxoscillations} for representative values of the phase $\\phi$. For $\\phi=0$, which gives the standard quantum walk, both $r^S_{x,t}$ (black dotted line in figure~\\ref{fig:spinxoscillations}(a)) and $r^A_{x,t}=-r^S_{x,t}$ (black dotted line in figure~\\ref{fig:spinxoscillations}(b)) undergo damped oscillations with a period of four steps as the steady-state is reached. Since the oscillations are out of phase for these orthogonal initial states, the trace distance between such states also oscillates in time with decreasing amplitude (black dotted line in figure~\\ref{fig:spinxoscillations}(c)). Therefore, even though there is a back-flow of information from the environment to the open system in the standard walk, the damping in the oscillations shows that the information flow between the two subsystems reduces and eventually vanishes in time~\\cite{hinarejos2014}. For non-zero values of $\\phi$, oscillations in the initial state component $r^{A(S)}_{x,t}$ arise depending on the overlap with the bound states. When $\\phi=\\pi\/4$, the oscillations in $r^A_{x,t}$ die out very quickly, whereas oscillations with period two between the sublattice symmetric pair of localised states survive for $r^S_{x,t}$, as shown by the blue dot-dashed line in figure~\\ref{fig:spinxoscillations}(a)-(b). For $\\phi=\\pi\/2$, similar oscillations exist, except that they die out more slowly for $r^A_{x,t}$, which has a finite overlap with the emerging reflection anti-symmetric bound state, whereas oscillations continue with higher amplitudes for $r^S_{x,t}$, since the reflection symmetric bound state becomes more localised for this value of $\\phi$. At $\\phi=\\pi$, where bound states of both parities exist, oscillations in $r_{x,t}$ occur with higher amplitudes for both of the initial states in comparison with the other shown phase values.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.90]{fig5.pdf}\n\\caption{Oscillations in the reduced coin density matrices starting from $\\ket{\\Psi_\\text{S}}$ in (a) and from $\\ket{\\Psi_\\text{A}}$ in (b) as a function of time for representative values of the phase parameter $\\phi$. 
The trace distance of these coin states $D(\\rho_S,\\rho_A)=|r_{x,A}-r_{x,S}|$ is shown in (c) and the oscillating behaviour gives rise to a non-zero BLP measure.} \n\\label{fig:spinxoscillations}\n\\end{figure}\n\nHaving obtained the time dependence of $\\rho^\\mathrm{coin}_{S,A}$, we calculate the trace distance $D(\\rho_S, \\rho_A) = |r_{S,x}-r_{A,x}|$ and display our findings in figure~\\ref{fig:spinxoscillations}(c) as a function of time for representative values of $\\phi$. In contrast to the standard quantum walk, where the trace distance oscillations die out in time, we find that they survive for non-zero $\\phi$, as at least one of $r^{S,A}_{x,t}$ keeps oscillating in time. However, we should keep in mind that the value of the trace distance also depends on the mean values $\\overline{r^{S,A}_{x,t}}$ about which the oscillations take place. For example, when $\\phi=\\pi\/2$ we get oscillations in $D(\\rho_S,\\rho_A)$ with smaller amplitudes than in $r^S_{x,t}$, which will be of importance in our later discussions.\n\nAs the persistent oscillations in the trace distance play a crucial role for the evaluation of the BLP measure in our model, the oscillation means $\\overline{r^{S,A}_{x,t}}$ and the oscillation amplitudes are plotted in figure~\\ref{fig:oscillations}(a). Comparison with figure~\\ref{fig:bandStr}(c) reveals that, as the overlap between one of the initial states and the bound states increases, $\\overline{r^{S,A}_{x,t}}$ converges to the $r_x$ of the corresponding bound state and oscillations appear. For the interval $\\phi \\in (\\pi \/2, 3 \\pi \/2)$, $\\overline{r^{S,A}_{x,t}}$ becomes the same as $r_x$ in the long time limit. The difference in $\\overline{r^{S,A}_{x,t}}$ approaches zero at $\\phi \\sim 0.6 \\pi$ and $\\phi \\sim 1.4 \\pi$; this, together with the fact that essentially only one of $r^{S,A}_{x,t}$ oscillates about their common mean, yields very small values for the trace distance. For other values of $\\phi$, the trace distance is mainly determined by the oscillations in $r^{S,A}_{x,t}$. Since the period of the oscillations is two time steps due to the sublattice symmetry, the changes in the trace distance can be obtained by subtracting the value at an even time step from that at the neighbouring odd time step, which is plotted in figure~\\ref{fig:oscillations}(b) at three different times. These plots clearly demonstrate that the trace distance oscillations quickly converge to their long time limit. As the bound states get more localised for certain $\\phi$ values and also the overlap of the initial states with them increases, so does the amplitude of the oscillations in the trace distance.\n\nTo evaluate the BLP measure, we maximize the sum of the positive increases in the trace distance over all possible orthogonal pairs of initial states starting at the impurity site, which is shown in figure~\\ref{fig:oscillations}(c) as a function of $\\phi$ for three increasing values of time. The result reveals that the pair $\\ket{\\psi_{S,A}}$ that we used for the preceding analysis actually maximizes the sum in the BLP measure in the long-time limit. In contrast to the standard walk, the initial states maximizing the BLP measure are equal superpositions of symmetric and anti-symmetric states, and these states do not change under other decoherence mechanisms~\\cite{hinarejos2014}. \nNear $\\phi=0,\\pi\/2, 3\\pi\/2, 2\\pi$, where bound states are weakly localised, we find that other orthogonal pairs actually maximize the BLP measure. However, these regions get smaller as we consider longer time evolutions. 
The sudden drop in the BLP measure at $\\phi=\\pi\/2,3\\pi\/2$ is related to the fact that the oscillations take place about similar mean values. More importantly, we establish that the BLP measure of non-Markovianity increases with the emergence of bound states and reaches its maximum value at $\\phi=\\pi$, when the number and localisation of the bound states assume their maximum, as demonstrated by the effective localisation length in figure~\\ref{fig:bandStr}(b). The relation between non-Markovianity and localisation is also apparent when comparing the BLP curve with the average PR shown in figure~\\ref{fig:average_loc}, which is maximum at $\\phi=\\pi$.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.80]{fig6.pdf}\n\\caption{(a) Long-time limit time average of the reduced coin density matrix parameter $r_x$ for reflection symmetric ($\\ket{\\Psi_\\text{S}}$) and anti-symmetric ($\\ket{\\Psi_\\text{A}}$) initial states as a function of $\\phi$. (The time average is taken over 100 steps between $t=400$ and $t=500$.) Instantaneous values at even and odd time steps are shown by square and triangle markers, respectively. (b) Trace distance oscillation amplitudes between initial states $\\ket{\\Psi_\\text{S}}$ and $\\ket{\\Psi_\\text{A}}$ at different times show that they quickly converge to their long-time limit values for all $\\phi$. (c) BLP measure $\\mathcal{N}$ (\\ref{eq:nonmarkov}) at three different times. The maximization is performed over all the initial coin states for quantum walks starting at the impurity site. The linear increase in time reflects trace distance oscillations with constant amplitude. (See (b).) \n\\label{fig:oscillations}}\n\\end{figure}\n\nNext, we consider the Rivas-Huelga-Plenio (RHP)~\\cite{rivas10} measure of non-Markovianity, which is based on the dynamics of entanglement between the system of interest and an ancillary system. The ancillary system $A$ is assumed to have no dynamics of its own and is completely isolated, so that any initial entanglement between the system and the ancilla can be affected by the open system dynamics only. In fact, similar to the BLP measure, this measure is also a witness for the violation of divisibility. Considering the fact that no entanglement measure $E$ can increase under local CPTP maps, it is rather straightforward to observe that, for a divisible map,\n\\begin{equation}\n\tE[(\\Lambda(t,0) \\otimes I) \\rho_{\\mathrm{coin},A}] \\leq E[(\\Lambda(s,0) \\otimes I) \\rho_{\\mathrm{coin},A}]\n\\end{equation}\nfor all times $0\\leq s \\leq t$. Hence, any increase in the entanglement between the open system and its ancilla can be understood as a signature of non-Markovian memory effects in the time evolution. In other words, while the entanglement contained in $\\rho_{\\mathrm{coin},A}$ decreases monotonically for all Markovian processes, non-Markovian behaviour in the dynamics can be captured through a temporary increase of the entanglement. In the same spirit as the BLP measure, one can then measure the degree of non-Markovianity using the following quantity:\n\\begin{equation}\n{\\cal{I}}^{(E)}\n=\n\\max_{\\rho_{CA}}\n\\sum_{t,\\Delta E_\\mathrm{CA}>0}\n\\Delta E_{\\mathrm{CA},t}\n\\end{equation}\nwhere $E_\\mathrm{CA}$ denotes the entanglement between the coin and a two-level ancillary system. For any entanglement measure $E_\\mathrm{CA}$, the RHP measure is found by maximizing ${\\cal{I}}^{(E)}$ over all initial reduced density matrices $\\rho_{CA}$ of the composite coin-ancilla system. 
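\n\nAs a minimal sketch of how the two discrete non-Markovianity sums above can be accumulated, assuming precomputed time series of the trace distance and of the coin-ancilla entanglement (the maximization over initial states is left out), one could write, in Python:\n\\begin{verbatim}
import numpy as np

def positive_increments_sum(series):
    # Sum of the positive step-to-step increments of a time series;
    # applied to the trace distance it gives the BLP-type sum, and
    # applied to the concurrence it gives the RHP-type sum.
    increments = np.diff(np.asarray(series, dtype=float))
    return increments[increments > 0].sum()

# Hypothetical usage: D_t and C_t would come from the simulated walk.
# N_blp = positive_increments_sum(D_t)   # trace distance series
# I_rhp = positive_increments_sum(C_t)   # coin-ancilla concurrence series
\\end{verbatim}\n\n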
In order to calculate this measure, we start the evolution from the composite initial state $|\\Phi^+ \\rangle \\vert 0 \\rangle = \\frac{1}{\\sqrt{2}}(|\\leftarrow \\rangle_C|\\downarrow\\rangle_A+|\\rightarrow \\rangle_C |\\uparrow \\rangle_A)\\vert 0\\rangle$ and use concurrence~\\cite{wooters97} as the entanglement measure. It has been shown that, when concurrence is used as the entanglement measure, the optimal initial state maximizing the RHP measure is a Bell state for a single qubit interacting with an environment~\\cite{neto16}. \n\nFigure~\\ref{fig:non_makovianity}(a) shows the variation of the concurrence in time, which is calculated from the reduced coin-ancilla state after tracing out the spatial degrees of freedom of the walker during the evolution. For the standard quantum walk, the entanglement oscillations with a period of four steps are damped and slowly die out with time. Therefore, the RHP measure accumulates a finite amount of non-Markovianity in the long-time limit, similar to the behaviour of the BLP measure for the standard walk. On the other hand, in contrast to the BLP measure, the nature of the bound states emerging with non-zero phase $\\phi$ plays a key role for the coin-ancilla entanglement. In the presence of reflection symmetric or anti-symmetric bound states only, the concurrence dies out very quickly. This is due to the fact that the symmetric and anti-symmetric states couple to different environmental degrees of freedom. For example, with only symmetric bound states present, the symmetric part of the coin-position state remains mostly localised in the vicinity of the impurity site whereas the anti-symmetric part moves away from the origin. Hence, the coin-ancilla entanglement is quickly destroyed upon tracing out the environmental degrees of position, as the coin-ancilla state becomes an incoherent mixture. An example of this situation is displayed in figure~\\ref{fig:non_makovianity}(a) for $\\phi=\\pi\/3$. It is only when both reflection symmetric and anti-symmetric stationary states exist that some entanglement can survive, showing non-decaying oscillations. These oscillations are due to the finite dimension of the bound state subspace, and the frequencies of the concurrence oscillations can easily be obtained from the quasi-energy differences. Such a case is displayed in figure~\\ref{fig:non_makovianity}(a) for $\\phi=\\pi$ with two dominant periods. One period is two steps, due to the sublattice symmetric bound states with quasi-energy difference $\\pi$, and another one is approximately ten steps, due to the quasi-energy difference of $\\Delta E \\approx 0.205\\pi$ between reflection symmetric and anti-symmetric states. The latter dependence again shows the importance of bound states of both parities for the RHP measure. The energy difference $\\Delta E$ does not change much as $\\phi$ changes in the domain of four bound states unless one group of bound states is very weakly bound. (See figure~\\ref{fig:bandStr}.)\n\nUsing the time evolution of the coin-ancilla entanglement as shown in figure~\\ref{fig:non_makovianity}(a), we evaluate the RHP measure for all values of the impurity phase $\\phi$. The results are plotted in figure~\\ref{fig:non_makovianity}(b) for three increasing values of the final time. The amount of non-Markovianity measured by the RHP measure drastically depends on whether the reflection symmetric and anti-symmetric bound states are both supported for a given $\\phi$ or not. 
In the interval $\\phi \\in (0, \\pi\/2)$, where only the symmetric bound states exist, the concurrence vanishes quickly in time, since the coin-ancilla Bell state can only be supported if both symmetric and anti-symmetric bound states exist. Therefore, the coupling of the symmetric and anti-symmetric coin states to different environmental degrees of freedom completely destroys the Bell state of the coin-ancilla system and results in a vanishing value for the RHP measure. A similar situation occurs in the interval $\\phi \\in (3\\pi\/2, 2\\pi)$, where only reflection anti-symmetric bound states exist and the coin-ancilla entanglement is destroyed. In the interval $\\phi \\in (\\pi\/2, 3\\pi\/2)$, where bound states of both symmetries exist, the coin-ancilla entanglement is more robust and the RHP measure captures the non-Markovianity, increasing linearly with $t$ in the long-time limit due to the non-decaying oscillations in the coin-ancilla entanglement. In this $\\phi$ interval, the RHP measure displays the same behaviour as seen for the BLP measure in figure~\\ref{fig:oscillations}(c). \n\n\\section{\\label{sec:conc}Conclusion}\n\nWe have provided a comprehensive and systematic analysis of non-Markovianity in a quantum walk model with a phase impurity in relation to the phenomenon of localisation. At the heart of the analysis lies the manifestation of bound states emerging due to the existence of the phase impurity at the starting site of the walker. We have first presented a technique to analytically obtain the bound states of the model, making use of the transfer matrix method. These bound states emerge in one or two sublattice symmetric pairs possessing definite reflection symmetry. With this knowledge at hand, we have explored the localisation properties of the walker in the position space. To this end, we have adopted two initial-state-independent quantities to measure the degree of localisation, namely, the effective localisation length for all eigenstates and an average participation ratio after time evolution over all initial states starting at the impurity site. Our analysis clearly demonstrates that the degree of localisation of the walker is directly determined by the properties of the bound states.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.8]{fig7.pdf}\n\\caption{(a) Concurrence between the coin and the ancilla qubit as a function of time for representative values of the phase parameter $\\phi$. When bound states with both positive and negative reflection parity exist, the concurrence shows oscillations. (See the text for the involved frequencies.) (b) Concurrence-based RHP measure as a function of $\\phi$ at three different time steps, showing a linear increase with time for $\\phi \\in (\\pi\/2,3\\pi\/2)$. The RHP measure has a vanishing value when well-formed bound states of only positive or negative reflection parity exist.\n\\label{fig:non_makovianity}}\n\\end{figure}\n\nMore importantly, our main contribution in this work is the unveiling of an intrinsic relation between the emergence of bound states and the degree of non-Markovianity of the dynamics of the walker. In order to study non-Markovian behaviour in the time evolution of the walker, after tracing out the spatial degrees of freedom, we have utilized two distinct measures of quantum non-Markovianity, i.e., the BLP and the RHP measures, based on the dynamics of trace distance and entanglement, respectively. 
These measures help us to understand, from different perspectives, the information flow between the principal coin system and the position system forming the environment. We show that, in the presence of spatial decoherence in the form of a phase impurity, the BLP measure is optimized by the eigenstates of the coin operator for almost all values of the phase $\\phi$. Note that when one has decoherence in terms of broken links instead, the degree of decoherence does not change the optimal state maximizing the BLP measure~\\cite{hinarejos2014}. Our investigation also proves that the phase impurity amplifies the degree of non-Markovianity quantified by the BLP measure.\nThe underlying reason behind this behaviour is the oscillations in the state of the coin, which essentially take place between the sublattice symmetric bound state components with a period of two steps. Then, in general, an increasing overlap between the initial state and the bound states implies a greater degree of non-Markovianity. However, also note that when the time averages of the reduced coin states corresponding to two orthogonal initial states are close to each other, the BLP measure drops abruptly.\n\nNext, we employed the RHP measure to analyse the degree of non-Markovianity in the dynamics of the walker. When the coin state is initially maximally entangled with an ancillary system, the amount of entanglement is known to oscillate in time for the standard walk. However, our examination demonstrates that, in the presence of a phase impurity, if the bound subspace supports bound states of only one reflection parity, the coin-ancilla entanglement vanishes after a few time steps and the RHP measure becomes very small compared to the standard walk case. On the other hand, when both reflection symmetric and anti-symmetric bound states are present, the entanglement oscillations are persistent in time, leading to high values of the RHP measure. Thus, while the RHP measure is generally in good agreement with the BLP measure when both even and odd parity bound states exist, the RHP measure fails to reliably detect the non-Markovian behaviour when only symmetric or anti-symmetric bound states are present. Most importantly, as can be clearly seen from both measures, maximum non-Markovianity is reached where our localisation measures determined by the bound states become maximal as well.\nThe relationship between non-Markovianity and localisation has been discussed in random static disorder models~\\cite{lorenzo2017quantum,kumar2018}, where non-Markovianity increases with disorder.\nWe observe a more nuanced relation between bound states and non-Markovianity, as discussed above. \n\nWe would like to indicate that the experimental realization of the model we presented here is quite feasible with today's technology. The time-multiplexing quantum walk employs laser light pulses going successively around a fiber loop, where the position space is effectively encoded in the time domain from the point of view of the detectors \\cite{schreiber2010}. The main advantage of this setup is its scalability and its long coherence times, i.e., it only requires a fixed number of optical elements to realize the quantum walk for a relatively large number of steps. The recent developments in the setup allow deterministic out-coupling of the light pulses from any site by utilizing electro-optic modulators \\cite{nitsche2018}. 
It is also possible to introduce arbitrary phases specific to any site by programming the electro-optic modulators accordingly, which would allow the realization of the model provided here \\cite{schreiber2011, nitsche2016}. \n\nAs a concluding remark and as future work, it would be interesting to study whether the oscillations due to the bound states remain robust in the case of many-body interactions with more degrees of freedom in the context of quantum walks.\n \n\n\\ack{\n\\.{I}.Y. is supported by M\\v{S}MT under Grant No. RVO 14000 and the Czech Science Foundation under Grant No. GA CR 19-15744Y.\nG.K. is supported by the BAGEP Award of the Science\nAcademy and the TUBA-GEBIP Award of the Turkish Academy of Sciences. G.K. is also supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under\nGrant No. 117F317.\nB.D. and A.L.S. are supported by Istanbul Technical University Scientific Research Projects Department (ITU BAP No. 40881). A.L.S. would like to acknowledge useful discussions with {\\c S}.E. Kocaba{\\c s} at earlier stages of this work.\n}\n\n\\section*{References}\n\n\\section*{Background \\& Summary}\n\nMany assessments of future electricity demand in India project large increases in electricity consumption from the adoption of air conditioning technologies in the buildings sector over the next two decades \\cite{weo20, ev, iea}. This large growth is likely to make India among the top nations in terms of electricity consumption, implying that technology choices related to energy consumption and production in India are likely to have a significant impact on global climate change mitigation efforts. Additionally, the Indian government has been pushing for the transportation sector's electrification, starting with two- and three-wheel vehicles, which is likely to further increase overall electricity demand. As of 2020 in India, there are 152,000 registered electric vehicles \\cite{ev}. Air conditioning (AC) related electricity demand accounted for 32.7 TWh, contributing less than 2.5\\% of the total demand, in 2019 \\cite{iea}. However, both air conditioning and transport electrification are anticipated to introduce structural changes in the temporal and spatial trends in electricity consumption patterns, which have important ramifications for long-term resource planning for the electricity sector \\cite{teri}. This paper presents a bottom-up approach to estimate electricity consumption in India for various scenarios of technology and policy adoption, with a specific focus on providing aggregated consumption estimates as well as spatio-temporally resolved consumption profiles that would be relevant for regional and national electricity system planning studies. The approach enables quantifying the impact of various growth and technology adoption scenarios on the quantity and pattern of electricity consumption. The datasets detailed in this paper include annual energy consumption at India's state, regional, and national levels, as visualized in Fig. \\ref{fig:demand}, as well as underlying consumption profiles at an hourly time resolution. The annual energy consumption is forecast in five-year increments to 2050. Fig. \\ref{fig:summary_results} shows one scenario of the national electricity demand forecast. In addition to the snapshot of annual consumption, hourly load profiles are developed at the same resolution, as seen in Fig. \\ref{fig:profile_results}. 
\n\nThe forecasting is divided into two steps: business-as-usual and technology. The business-as-usual component is a statistical model that extrapolates from the data it is trained on, i.e., historical electricity demand. The technology model is a bottom-up approach that adds new loads to the total demand. Among new loads, we focus on residential and commercial cooling as well as various electric vehicles (EVs). Key insights from cooling\\cite{iea} and EV\\cite{ev} studies highlighting peak demand development motivate the need for demand forecasting at an hourly resolution. Cooling demand, due mainly to split-unit air conditioning installations, is expected to increase the peak-to-mean ratio (also sometimes referred to as the \"peakiness\") of electricity demand in India as well as shift the timing of peak demand from evenings to midnight\\cite{iea}. While electric vehicles do not constitute a large portion of the total demand, certain charging schemes can contribute significantly to the peak demand\\cite{ev}. Numerous energy demand forecasts for India have recently been published as decadal snapshots \\cite{weo20, teri, brookings}; however, demand at an hourly granularity has not been presented in these studies. Our approach enables quantifying the impact of different technology and structural elements, such as adopting energy-efficient vs. baseline cooling technology or work-place charging vs. home charging for EVs, on the hourly electricity consumption profiles. These insights and the accompanying data sets are essential to carry out generation and transmission expansion as well as distribution network planning, and are thus critical for sustainable energy infrastructure development in the Indian context. \n\n\nSimilar to other forecasting studies, we model gross domestic product (GDP) growth \\cite{mospi} to be the main econometric driver of the business-as-usual demand forecasting, and thus three scenarios are introduced: slow, stable, and rapid GDP growth. We examine two AC load scenarios: energy-efficient equipment and baseline equipment, per the International Energy Agency's Future of Cooling study \\cite{iea}. Finally, we evaluate three EV charging mechanisms: home, work, and public charging. This brings the total number of data sets, spanning three input dimensions, to 18 scenarios. Technology adoption growth has been correlated with economic growth under the assumption that new technologies are adopted faster when the economy is growing faster, and vice versa. We present two cooling scenarios to highlight the difference between energy-efficient and regular air conditioning units and to bring attention to the need for policies and programs that favor energy-efficient cooling unit sales. Furthermore, we present various EV charging mechanisms to inspect the demand impacts that electric vehicle charging can have on the electric grid at different times. The produced data can be used as input to electricity infrastructure planning at both the distribution and transmission levels. \n\n\\section*{Methods}\nFig. \\ref{fig:schematic} illustrates the major steps of our proposed demand forecasting approach. We use two models to estimate future electricity demand in India. In the first model --- business-as-usual --- we use a linear regression model to project daily peak demand and total consumption on a regional basis; this is the business-as-usual scenario. We then add natural variation to the projections by finding the error between the training data and the model results and scaling it for every region based on seasonality. 
Then we fit the projected peak and total consumption to an annual hourly load profile for 2015 \\cite{shakti} featuring an evening peak \\cite{ivan}. In the second model --- the technology model --- we take AC and EV adoption into account as an additive component on top of the business-as-usual predictions. GDP data, which is an independent variable in the model, is chosen to be the main driver of growth of the business-as-usual scenario as well as of technology adoption rates. The input data used are publicly available and are referenced in Table \\ref{table:data}.\n\n\\subsection*{Input data processing}\n\nAlthough GDP is widely used for forecasting energy demand, it is particularly important in the case of India, where economic growth is expected to ramp up over the next few decades, similar to the recent trends in China \\cite{mckinsey}. We based our demand forecast on GDP projections from a PricewaterhouseCoopers (PwC) report \\cite{pwc}, which projects India's GDP to grow from 3.6 trillion USD in 2020 to 28 trillion USD in 2050. Considering the historical national GDP data for India starting in 1990, we fit and project an exponential curve for rapid growth and a Gompertz curve for slow growth \\cite{gompertz}, as detailed in Table \\ref{table:gdp}. We use PwC's projections to define the stable GDP growth scenario. Curve fitting and projection results are illustrated in Supplementary Fig. \\ref{fig:sup-gdp}. The rapid growth scenario produces an annual average growth rate of 9.5\\%; PwC's growth rates start at 7.8\\% in the first projected decade and end at 6.2\\% in the final projected decade. The slow growth scenario starts at a 7.2\\% growth rate in the first projected decade and ends at 3.9\\% in the final projected decade. To break down the regional energy consumption projections to the state level, we use the ratio of the GDP per capita of the corresponding state to the GDP per capita of the region it is in. For each GDP growth scenario, we fit the same functions to state-wise data to produce GDP forecasts at the same resolution. GDP per capita at the state level is computed using the projected GDP data and state-level population projections \\cite{ssrn}.\n\n\\subsubsection*{GDP dependence and limitation}\n\nRelating growth in electricity demand to GDP is a strong generalization; however, it is not a novel one in the case of India. A strong correlation between economic growth and energy consumption has been established in the Indian context in this study and other studies \\cite{eia}, given data from the past two decades \\cite{mospi}. We recognize that GDP as a metric of economic growth has several limitations, particularly related to projecting how economic growth is distributed among society within a state or nation. This may be the strongest limitation of the data we are presenting in the manuscript. However, the lack of historical records and long-term projections of alternative open-access economic data at the desired spatial and temporal resolution limits the development of a framework to project energy consumption with other metrics. While GDP and energy consumption growths may differ in the long run, there is an evident correlation between the two that can be used to estimate long-run energy consumption growth. Deviating from linear regression may yield better results; however, data scarcity again limits the development of more complex models. 
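\n\nAs an illustration of the curve-fitting step described above, the following Python sketch fits an exponential and a Gompertz curve to a GDP series with scipy; the series shown is a placeholder, not the historical data or the exact code behind the published projections.\n\\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, a, b):
    # Rapid-growth form: constant exponential growth rate b.
    return a * np.exp(b * t)

def gompertz(t, a, b, c):
    # Slow-growth form: growth that saturates towards an asymptote a.
    return a * np.exp(-b * np.exp(-c * t))

# Placeholder historical series: years since 1990, GDP in trillion USD.
years = np.arange(0, 30)
gdp = 0.32 * np.exp(0.08 * years)

p_exp, _ = curve_fit(exponential, years, gdp, p0=[0.3, 0.07])
p_gom, _ = curve_fit(gompertz, years, gdp, p0=[40.0, 5.0, 0.03], maxfev=20000)

# Project both curves to 2050 (year index 60) for the rapid and slow scenarios.
future = np.arange(0, 61)
rapid, slow = exponential(future, *p_exp), gompertz(future, *p_gom)
\\end{verbatim}\n\n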
Furthermore, this manuscript motivates the need for more bottom-up projections and not just regression models because historical consumption cannot infer consumption trends from new demand sources such as cooling and EVs.\n\nAdditionally, since the Future of Cooling study by the International Energy Agency relies on GDP forecasts developed by the International Monetary Fund\\cite{iea}, we elected to use a similar metric. We intentionally develop a large bandwidth of projection scenarios to mitigate the limitation of an individual snapshot representing a singular assumption. The motivation behind presenting the described results is ability to compare different scenarios and post-analyze the demand growth and the trade-offs. To produce a large bandwidth of growth scenarios we needed to use a straightforward metric that has enough historical data to produce various fitted curves for projections.\n\n\\subsection*{Business-as-usual model}\n\nThe business as usual projections are modeled with a linear regression considering weather and economic growth features. The ground truth historical daily peak and total consumption for each electric grid were obtained from the Power System Operation Corporation (POSOCO) for 2014-2019 \\cite{posoco}. The GDP used in the model was obtained, as explained in the previous section. Weather data was secured from the NASA Merra-2 data set \\cite{nasa}. The choice of features for the regression model is limited to GDP and weather variation due to the limitation in availability of data, both historical and future projections, at the desired spatial and temporal resolution. GDP is identified as a long-term parameter driving growth in year over year demand projections as highlighted in Fig. \\ref{fig:longrun}. Weather data is identified as a short-term parameter driving seasonal variation within a year's demand projections as highlighted in Fig. \\ref{fig:shortrun}. Previous parametric analysis on these features and their coefficient for short and long term demand forecasting in both time and frequency domain \\cite{meia} reinforce their use as features for the business-as-usual regression model. We present detailed outcomes for the Southern region, with further details available in \\cite{meia}.\n\n\\subsubsection*{NASA Merra 2 data acquisition}\n\nFor each of the five electric grid demand regions highlighted in right panel of Fig. \\ref{fig:demand}, the largest cities in each region were identified using population data made available by the United Nations\\cite{pop}. Then, the city's latitude and longitude were used to pull down the corresponding environmental data from the Nasa Merra-2 data set. The cities used for each of the five regions are listed here:\n\\begin{itemize}\n \\item Northern: Delhi, Jaipur, Lucknow, Kanpur, Ghaziabad, Ludhiana, Agra\n \\item Western: Mumbai, Ahmadabad, Surat, Pune, Nagpur, Thane, Bhopal, Indore, Pimpri-Chinchwad\n \\item Eastern: Kolkata, Patna, Ranchi (Howrah was ignored because the environmental factors are the same as Kolkata)\n \\item Southern: Hyderabad, Bangalore, Chennai, Visakhapatnam, Coimbatore, Vijayawada, Madurai\n \\item Northeast: Guwahati, Agartala, Imphal\n\\end{itemize}\n\nFrom the NASA set, 11 variables were included for each city: specific humidity, temperature, eastward wind, and northward wind (all 2m above the surface and 10m above the surface - eight total variables), precipitable ice water, precipitable liquid water, and precipitable water vapor. 
In particular, the instantaneous two-dimensional collection \"inst1\\_2d\\_asm\\_Nx (M2I1NXASM)\" from NASA was used. Detailed descriptions of these variables are available in the MERRA-2 file specification provided by NASA\\cite{nasa}. The environmental variables available from the NASA MERRA-2 dataset are given on an hourly basis. The daily minimum, daily maximum, and daily average were calculated for each of the 11 variables for each day.\n\n\\subsubsection*{Forecasts}\nThe business-as-usual demand forecasting problem was divided into ten separate problems, corresponding to one problem each for peak and total consumption for each of the five regional grids shown in Fig. \\ref{fig:demand}. To ensure the model would not overfit the data, the model was trained with Elastic Net \\cite{scikit-learn} to regularize the results, and validated on held-out 2019 data. An L1 ratio (Lasso) of 0.9 was chosen to minimize the error on the 2019 validation set. Then all of the models were trained with the 0.9 L1 ratio on the full dataset.\n\n\\subsubsection*{Addition of natural variation}\nThis step aimed to match the statistical characteristics of an actual load year with those of the projected years; the year 2019 was used to derive the differences. Natural variation was estimated by a distribution characterized by the mean and standard deviation of the differences (in absolute value). Then, a natural variation adjustment was added to each projected day (with a random true\/false bit determining positive or negative variation). The noise was calculated separately for each region and for peak demand and daily consumption. The natural variation (noise) vectors used are available on the GitHub repository for this paper \\cite{git}. This part of the process is non-deterministic and replication of the results requires using the same natural variation vectors used in our projections.\n\n\\subsubsection*{Hourly profiles}\nThe statistical inference model presented above forecasts daily consumption driven by state-level economic parameters and weather data. The produced projections are at a daily resolution. We downscaled the data to hourly load profiles based on the 2015 hourly load profile data \\cite{shakti}. The result of the regression model is at the regional level; the state-wise breakdown is pro-rated based on the ratio of state-wise to region-wise GDP per capita projections for the respective year. To do so, we tag each day of the year by the month it corresponds to and whether it is a weekday or weekend. We cluster demand for each hour by month and day type. Each hour of the day then has its own cluster of demand data from 2015, based on the assumption that the same hour of the day for a given month and the same day type will exhibit similar demand behavior. This biases the construction of the profiles to demand patterns from 2015 only. To minimize the impact of this bias, we use the historical weather data\\cite{nasa} of the testing data years (2014-2019) for each day to simulate daily temperature variations that are reflected in higher or lower demand. We sample weather data for each day and compare it to 2015, and subsequently use the normalized difference to scale the demand on a daily basis. Finally, we sample demand for each hour of the year from the corresponding cluster (defined by month and weekend or weekday) and scale it accordingly.
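\nAs a minimal sketch of the regression step described in the Forecasts subsection above, the following Python code trains an Elastic Net model with an L1 ratio of 0.9 using \\texttt{scikit-learn}; the feature matrix here is a random placeholder standing in for the GDP and weather features, so the numbers carry no physical meaning.\n\\begin{verbatim}\n# Sketch: Elastic Net regression for one business-as-usual problem\n# (daily peak or daily total consumption of one regional grid).\nimport numpy as np\nfrom sklearn.linear_model import ElasticNet\nfrom sklearn.metrics import r2_score\n\nrng = np.random.default_rng(0)\nX = rng.normal(size=(2000, 45))   # placeholder GDP + weather features\ny = X @ rng.normal(size=45) + rng.normal(scale=0.1, size=2000)\n\ntrain, valid = slice(0, 1600), slice(1600, 2000)   # hold out \"2019\"\nmodel = ElasticNet(alpha=0.1, l1_ratio=0.9)        # L1 ratio of 0.9\nmodel.fit(X[train], y[train])\nprint(\"validation R^2:\", r2_score(y[valid], model.predict(X[valid])))\n\n# After validation, refit on the full dataset before projecting forward.\nmodel.fit(X, y)\n\\end{verbatim}\n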
Constructing the hourly load profile and fitting them to match the projected daily consumption and the projected daily peak demand then becomes a trivial exercise of sampling and fitting from the corresponding clusters and weather data space. The 2015 hourly demand data used in this study is documented in detail elsewhere and has been used in projecting demand for supply-side modeling efforts \\cite{ivan}. Limited availability of complete hourly data at state and regional level in India biases the hourly profiles to the 2015 datasets. However, the business-as-usual projections are for existing demands composed mainly of lighting and appliance at the residential level and large daytime loads at the commercial level \\cite{usaid}. Our approach implicitly assumes that energy consumption trends for these loads will follow historical patterns and therefore sampling from a given year with post-processed noise variation can yield reasonable results.\n\n\\subsubsection*{Impact of Climate change on business-as-usual demand}\nAs per the International Energy Agency (IEA) World Energy Outlook (WEO) 2019\\cite{weo19} only 5\\% of households in India currently own air conditioning units and 2.6\\% of commercial building energy use is from space cooling. Historically, electricity consumption in India has been driven by lighting and appliances in the residential sector \\cite{usaid} with commercial and industrial sector contributing via larger daytime loads. Since cooling demand is not historically available in the data that the business-as-usual regression model is learning from, there is no parametric value to projecting increase in temperatures since there is no evident correlation between temperature increase and lighting or appliance use. Moreover, since space cooling is a small percentage of current electricity demand in India, no major trends can be identified given the limited daily training data that is being used for the business-as-usual regression. It is then safe to assume that weather remains constant for the business-as-usual demand.\n\n\\subsection*{Technology model}\nSince a regression model can only produce forecasts of data it can learn from, additional bottom-up processing must be carried out to get a full picture of India's demand in the future. We identify trends and data points at the state level of the country to build a regional profile as well as the national one. \n\n\\subsubsection*{Cooling}\nCooling is divided into two main categories: residential and commercial. The ratio of commercial to residential consumption is computed from state-level data \\cite{stats} and is used as the ratio of commercial to residential cooling demand. Using the IEA's baseline and efficient cooling projections from the Future of Cooling study \\cite{iea}, we use the annual sales and unit types to calculate the energy consumption and growth rate at a national level and pro-rate it down to state level given GDP per capita. Surveyed hourly demand profiles \\cite{usaid} are indicators of behavioral cooling energy consumption patterns as exemplified in Supplementary Fig. \\ref{fig:sup-ac-res} and \\ref{fig:sup-ac-com}. The survey produce various profiles given climate seasons, household income and size. 
We apply a time-domain convolution of these profiles to generate a representative profile for each state for the various climates and seasons.\n\nWe can generate the air conditioning demand profiles for two weather seasons (winter and summer) by convolution of the sample profiles to generate a smooth aggregated demand profile. Moreover, coincidence factors must be applied to properly estimate the simultaneity of the demand and its peak. Two coincidence factors are identified: weekday and weekend, values are extracted from a Reference Network Model Toolkit \\cite{5504171}. We break down the national cooling demand to residential and commercial at state level by identifying state-level sector size and growth trends. Scaling the profiles to match the projected cooling energy demand produces hourly energy consumption profiles from residential and commercial cooling. Aggregating the appropriate states together will produce the same results at the regional level.\n\nMore importantly, the IEA's future of cooling study \\cite{iea} stresses the usage of Cooling Degree Days (CDD) to project cooling demand dependency on temperature. The unit consumption pattern and projections of capacity for India's share of global cooling demand is based on growth in electrification, urbanization as well as Purchasing Power Parity. The IEA future of cooling study estimates that a 1-degree Celsius increase in decadal average temperature in 2050 will to lead to 25\\% more CDD and a 2-degree Celsius increase will lead to 50\\% more CDD. Climate change impacts are considered in the unit sales and energy consumption data used from the IEA's future of cooling study. In our analysis, we use IEA's 50\\% increase in CDD to model cooling demand in 2050. For prior periods, we interpolate CDD between 2018 and 2050 to model cooling demand. The increase in CDD and the addition of noise variation are introduced for the purpose of modeling the projected increase in peak demand due to climate change. Specifically, this analysis does not consider frequency nor forecast of extreme weather events.\n\n\\subsubsection*{Electric vehicles}\nThe second component of the technology model projects EV demand in India. The data presented here considered electric two, three, and four-wheel vehicles. Two-wheelers, being the dominating vehicle in terms of annual sales in India \\cite{vehicle_sales}, are expected to be electrified first, followed by the three-wheelers and regular cars \\cite{ey}. The Indian government has set a goal of converting 100\\% of two-wheeler sales and 30\\% of all vehicle sales to electric by 2030 \\cite{nitiaayog}, so the starting point is vehicle sales at the state level \\cite{vehicle_sales}. Using the regression equations of the corresponding GDP growth scenarios, we can project car sales with the EV targets by 2030 met in the rapid growth scenario. From vehicle sales and conversion rates, we get an estimate of the number of EV that will require charging. From a market survey on the average commute distance of vehicles in urban areas and rural areas \\cite{ey}, long and short-range battery capacity and EV energy can be estimated. We introduce a mix of EV sales starting with short-range as the dominant market product and shifting to long-range, a market-dominant market in 2050. This trends reflects the current economic competitiveness of short-range EVs vs. 
existing internal combustion engine vehicles as well as the long-term competitiveness of long-range EVs with declining battery costs.\n\nSimilar to the construction of the cooling profiles, a coincidence factor must be implemented, so as not to over-predict peak EV charging demand. Since this is a new consumption behavior and given the relatively small batteries of two-wheelers and three-wheelers, it is assumed that every vehicle needs to charge every other day on average for urban drivers and every day for rural ones. This yields an average daily consumption from EV charging. As shown in Supplementary Fig. \\ref{fig:sup-ev-profiles}, three different charging profiles --- home, work, public --- are identified in an EV pilot project study in Mexico City \\cite{berkeley}. While Mexico and India differ greatly in many socio-economic aspects, the hourly EV charging profiles were collected for a pilot project that deployed electric two-wheelers and small sedans in the metropolitan area of Mexico City. This presents two synergies enabling the usage of the charging profiles in India: under the assumptions that EV deployment in India will be more prevalent in urban areas and that smaller vehicles (two-wheelers and three-wheelers) will be converted first, the charging data collected in \\cite{berkeley} are a suitable fit for potential EV charging schemes in India. Energy consumption is computed from vehicle sales, projections, and electrification conversion. That calculated energy is then distributed according to the chosen charging profile. A time-domain convolution of the profiles is applied to smooth the peakiness of the total constructed hourly time series.\n\n\\subsubsection*{Data Dependence}\n\nThe technology model relies heavily on surveyed data to produce the representative hourly profiles for cooling and electric vehicle demands at the state level. This is indeed a limitation, and our projections assume that future technology adopters will behave just like initial adopters. In the absence of a better alternative at a similar spatial and temporal resolution, the bottom-up modeling effort provides a reasonable estimate of the temporal patterns expected from these new demand sources. For the hourly sample cooling profiles, the main assumption is that cooling consumption depends only on weather and econometric patterns. Specifically, we apply a weighted-sum convolution of the income-level cooling profiles based on the states' GDP per capita ranking. For the total cooling demand at the national level, we depend on the air conditioning unit sales projections as well as the breakdown of unit energy consumption under the baseline and efficient scenarios of the IEA's Future of Cooling report \\cite{iea}. We pro-rate residential cooling at the state level using the GDP per capita projections. For commercial cooling we use the state-wise sector growth trends \\cite{energystatistics}. A sanity check for this breakdown is to sum the residential and commercial state-wise cooling demand and compare it to the IEA's all-India annual cooling electricity consumption projections to 2050; the difference is highlighted in Supplementary Fig. \\ref{fig:sup-ac_compare} and \\ref{fig:sup-cooling}.
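\nTo make the construction described above concrete, the following Python sketch scales a normalized daily charging profile to a projected EV charging energy, reports a coincident peak, and smooths the series with a short convolution; every number and the profile shape are placeholders for illustration, not values from our data sets.\n\\begin{verbatim}\n# Sketch: hourly EV charging series for one day from a normalized profile.\nimport numpy as np\n\nn_ev = 2.0e6                # projected EVs requiring charging (placeholder)\nkwh_per_session = 2.5       # energy per charging session (placeholder)\nsessions_per_day = 0.5      # urban vehicles charge every other day (see text)\n\ndaily_energy_mwh = n_ev * kwh_per_session * sessions_per_day \/ 1e3\n\n# Normalized \"home charging\" shape over 24 hours (placeholder, sums to 1).\nprofile = np.zeros(24)\nprofile[18:23] = [0.10, 0.20, 0.30, 0.25, 0.15]\n\nhourly_mw = daily_energy_mwh * profile       # naive aggregation\ncoincidence = 0.6                            # weekday factor (placeholder)\ncoincident_peak_mw = coincidence * hourly_mw.max()\n\n# Smooth the constructed series with a short moving-average convolution.\nhourly_mw_smooth = np.convolve(hourly_mw, np.ones(3) \/ 3.0, mode=\"same\")\n\\end{verbatim}\n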
Regarding the EV profiles, while there are alternative choices of charging schemes, we identified the synergies with the Berkeley study \\cite{berkeley} to be best reflective of the bookend EV charging scenarios across India.\n\n\\section*{Data Records}\nThe data are uploaded on Zenodo \\cite{marc_barbar_2020_4564581} and are available for download at \\href{https:\/\/doi.org\/10.5281\/zenodo.4564581}{https:\/\/doi.org\/10.5281\/zenodo.4564581}. The path leading to a CSV file indicates the scenario corresponding to the results of that file. The folder hierarchy is organized as follows:\n\n\\begin{enumerate}\n \\item GDP Growth: slow, stable, rapid\n \\item EV charging: home, work, public\n \\item Cooling: baseline, efficient\n \\item Type: detailed, summary\n\\end{enumerate}\n\nThe \\textit{detailed} results are tables of the itemized hourly demand profile of each considered scenario; all files contain 8760 rows (the number of hours in a year). The \\textit{summary} results are tables of the itemized annual energy consumption for the considered years; all files contain seven rows (the number of considered future years). Both file types are itemized the same way as per Table \\ref{table:headers}. The path of each file is the reference to the specific scenario the data in the tables represent. For example, the \\textit{SR.csv} file under \\textit{slow\/home\/efficient\/summary} is the summary file for the case of slow economic growth, home EV charging and energy-efficient air conditioning consumption.\n\n\\section*{Technical Validation}\nThe business-as-usual statistical model is validated using standard statistical metrics when backtesting is applied. Further details on the backtesting are available elsewhere \\cite{meia}. For the technology model, we compare our estimates to the IEA's WEO \\cite{weo20,weo19,weo18,weo17} and Brookings India \\cite{brookings}. Furthermore, our EV projections compare favorably against those of the IEA's Global Electric Vehicle Outlook 2020 \\cite{ev}.\n\n\\subsection*{Back testing}\n\nDaily consumption and peak are projected for all five regions; we show the daily consumption back tests of the Southern Region in Fig. \\ref{fig:regression}. More results can be found on the GitHub repository. It is important to note that the regression model captures the organic growth of the historical demand as well as the seasonal variation in demand, but is not accurate at predicting daily variation. This shortcoming can be attributed to the small training dataset that is available. To compensate for this shortcoming, we add additional noise variation as discussed earlier in the Methods section. We compare the R-squared value of the regression-only time series versus the regression-and-noise time series, as shown in Table \\ref{table:r2}. Additionally, selected parameter performance metrics of the model for the Southern Region are presented in Table \\ref{table:params}. The model's independent variables are the historic temperature and humidity data at 2 m and 10 m above the surface for the selected cities and the GDP data for the state. Various weather parameters will have a higher coefficient than GDP, since the latter is not as granular a metric, but it will still be factored in for longer-term growth as interpreted by its Fourier component \\cite{meia}.\n\n\\subsection*{Cross-comparison}\nSupplementary Fig. \\ref{fig:sup-stated-weo} and \\ref{fig:sup-sustainable-weo} compare the forecasting results to the WEO 2020 projections of India's energy demand to 2040.
Our band of projections is notably wider due to the large number of scenarios that are combined to forecast energy demand. We further compare our results to Brookings India's study in Supplementary Fig. \\ref{fig:sup-brookings}. We also compare our electric vehicle projections to those of the Global EV Outlook in Supplementary Fig. \\ref{fig:sup-ev}. Finally, we compare our air conditioning demand contribution to the peak demand to the Future of Cooling study in Supplementary Fig. \\ref{fig:sup-ac_compare}.\n\n\\subsection*{COVID-19 pandemic impact on year 2020}\nThe COVID-19 pandemic has drastically affected the global population in various ways. Energy consumption dropped severely as people were advised to stay at home. While it is not possible to project such \"Black Swan\" events from historical data, their long-term effects can be modeled as delayed growth under various recovery schemes. Fig. \\ref{fig:comparison} shows that our projections for the month of January 2020 align with the realized demand, which is prior to the global outbreak of COVID-19. Evidently, there is a strong mismatch in the following months as the outbreak developed into a global pandemic. However, in the later part of the year, signs of recovery are noticed where the historical daily consumption once again reaches projected levels.\n\nThe impact of extreme events on energy consumption are difficult to predict at a granular level. Our projections are at a five year increment so that such yearly variations are smoothed out and the regression towards the mean phenomenon is observed. Moreover, the recovery from extreme events and their long-term impact can depend on many factors: economic, social, scientific and more. Without modeling those events in detail, projected growth can model the long-term average growth rate. In case of a negative extreme event, a smaller growth rate can model the long-term impact caused by the slow down. Similarly, a positive extreme event can be modeled as larger growth rate to include the long-term impact by the rapid growth. With signals of a fast recovery in total daily consumption for most regions, we elected to disregard projections that model long-term COVID-19 pandemic impact to avoid confirmation bias. Moreover, there is little data to support projections modeling a long-term impact on Indian energy consumption. We believe that the model and data presented in this paper are valid beyond the COVID-19 pandemic.\n\n\\section*{Usage Notes}\n\nThe format of the results is comma-separated values (CSV). 
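\nAs a usage sketch, a single scenario file can be loaded with \\texttt{pandas} once the archive has been downloaded and unpacked locally; the detailed file path below assumes the same region naming as the summary example given in the Data Records section, and the column layout follows Table \\ref{table:headers}.\n\\begin{verbatim}\n# Sketch: load one scenario (slow GDP growth, home EV charging,\n# efficient cooling) for the Southern Region from the unpacked archive.\nimport pandas as pd\n\nsummary = pd.read_csv(\"slow\/home\/efficient\/summary\/SR.csv\")    # 7 annual rows\ndetailed = pd.read_csv(\"slow\/home\/efficient\/detailed\/SR.csv\")  # 8760 hourly rows\nprint(summary.shape, detailed.shape)\n\\end{verbatim}\n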
All the results are available on the Zenodo Open-Access repository \\cite{marc_barbar_2020_4564581}.\n\n\\section*{Code availability}\n\nThe code used in the generation of the data sets is open-sourced on Github repository \\cite{git}.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n \\input{tex\/11_intro}\n\n\\section{Background}\n\t\\subsection{Generative Design}\\label{sec:sec31}\t\n\t \\input{tex\/21_gd}\n\t\\subsection{Exploration and Optimization with Non-Objective Criteria}\\label{sec:sec32}\t\n \\input{tex\/22_mcx}\n\n\\section{Method}\n \\input{tex\/31_method}\n \t \n\\section{Benchmarks}\n \\subsection{Setup}\n \\input{tex\/41_exp1_setup}\n \\subsection{Result} \n \\input{tex\/42_exp1_result}\n\\section{Case Study}\n \\subsection{Setup}\n \\input{tex\/51_exp2_setup}\n \\subsection{Result}\n \\input{tex\/52_exp2_result}\n \n\\section{Discussion}\n \\input{tex\/60_discussion}\n \n\\section{Acknowledgements}\n \\input{tex\/62_ack} \n \n\\section{Supplemental Material and Code}\nSupplemental material and code available at:\\\\ \\href{https:\/\/github.com\/agaier\/tdomino_ppsn}{https:\/\/github.com\/agaier\/tdomino\\_ppsn}\n\n\n\n\n\n\\subsubsection{Benchmark Functions}\\hfill\\\\\n\\indent\\textit{RastriginMOO}. To judge the performance of T-DominO on Multi-Objective QD problems, we test on a version of RastriginMOO as introduced in~\\cite{moo_qd}. The Rastrigin function is a classic optimization benchmark, often used to test QD algorithms because it contains many local minima~\\cite{cmame,cully2021multi}. Here it is converted into a multiobjective benchmark by optimizing a pair of Rastrigin functions with shifted centers. We use a 10-D version with constants added so that every discovered bin has a positive effect on the aggregate QD Score. These objectives can be explicitly defined as:\n\\begin{align}\n \\begin{cases}\n f_1(\\mathbf{x}) = 200 - (\\sum\\limits_{i=1}^n [(x_i - \\textcolor{blue}{\\lambda_1})^2 - 10\\cos (2\\pi (x_i - \\textcolor{blue}{\\lambda_1}))]) \\\\\n f_2(\\mathbf{x}) = 200 - (\\sum\\limits_{i=1}^n [(x_i - \\textcolor{blue}{\\lambda_2})^2 - 10\\cos (2\\pi (x_i - \\textcolor{blue}{\\lambda_2}))])\n \\end{cases} \n\\end{align}\nwhere $\\lambda_1 = 0.0$ and $\\lambda_2 = 2.2$ for $f_1$ and $f_2$. All parameters are limited to the range $[-2, 2]$, with the feature space defined by the first two parameters.\n\n\\textit{ZDT3}.\nWhen spread across the objective space is desired, objectives themselves could be used as features. This use case is demonstrated with the ZDT3 benchmark, a 30 variable problem from the ZDT MOO benchmark problem suite ~\\cite{zitzler2000comparison} whose hallmark is a set of disconnected Pareto-optimal fronts, and whose first parameter is value of the first objective. Parameter ranges span 0-1 with the first two parameters used as features, enforcing a spread of solutions across the range of the first objective.\n\n\\textit{DTLZ3}. To illustrate T-DominO's bias toward balanced solutions we analyze its performance on DTLZ3, a many-objective benchmark with a tunable number of objectives and variables\\cite{deb2002scalable}. We test with 10 parameters and 5 objectives, with the 6th and 7th parameters use as features.\\footnote{The first $n$ parameters are explicitly linked to the first $n$ objectives as in ZDT3 -- later parameters are used to avoid explicitly exploring the objective space.}. 
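\nAs a minimal sketch of the RastriginMOO objective pair defined above (10-D, $\\lambda_1 = 0.0$, $\\lambda_2 = 2.2$, parameters limited to $[-2, 2]$), the following Python snippet evaluates both objectives and the feature descriptor for a single candidate solution.\n\\begin{verbatim}\n# Sketch: the shifted-Rastrigin objective pair used in RastriginMOO.\nimport numpy as np\n\ndef shifted_rastrigin(x, lam):\n    d = np.asarray(x) - lam\n    return 200.0 - np.sum(d**2 - 10.0 * np.cos(2.0 * np.pi * d))\n\ndef rastrigin_moo(x):\n    return shifted_rastrigin(x, 0.0), shifted_rastrigin(x, 2.2)\n\nx = np.random.uniform(-2.0, 2.0, size=10)   # 10-D, limited to [-2, 2]\nf1, f2 = rastrigin_moo(x)\nfeatures = x[:2]   # feature space: the first two parameters\n\\end{verbatim}\n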
\n\n\n\\subsubsection{Baseline Approaches}\\hfill\\\\\n\\indent\\textit{ME Single.} MAP-Elites~\\cite{mapelites} optimizing only a single objective is used to establish an upper and lower bound of the performance we can expect from MAP-Elites. Blind to the second objective, we can expect it to find the top performing solutions for the first. Equally important, the exploration of all bins without regard to the performance on the second objective establishes a floor for performance -- the performance we could expect for having any solution in the bin.\n\n\n\\textit{ME Sum.} \nWe compare the T-DominO objective with MAP-Elites~\\cite{mapelites} combining multiple objectives in the most naive way -- simply adding them. Our benchmarks all have well-scaled objectives, but this is typically not the case in practice. To simulate this difficulty we use a weighted sum, with each additional objective's values increased by an order of magnitude (e.g.\\ $\\times$1, $\\times$10, $\\times$100, ...). \n\n\\textit{NSGA-II.}\nNSGA-II~\\cite{nsga2} is used as a benchmark for conventional multi-objective optimization without feature space exploration, reaching near the Pareto front on these simple benchmarks. Though it is not our goal to compete with MOO algorithms, they provide a useful metric to contextualize the difference between exploratory approaches and pure optimizers.\n\n\\subsubsection{Settings.} In all MAP-Elites approaches the feature space is partitioned into a 20x20 grid, with 2 CMA-ME improvement emitters~\\cite{cmame} performing optimization. T-DominO was computed using the neighbors from 4 bins away, using a history of the 10 most recent elites in each bin. Hyperparameters for NSGA-II were kept comparable: a population of 400 matched the 400 bins of the MAP-Elites grids, with the same number of new solutions generated per generation for the same number of generations. A standard implementation of NSGA-II from the PyMoo library~\\cite{pymoo} is used, as well as the library's formulations of the ZDT3 and DTLZ benchmarks, whose exact formulations are included in the Supplemental \\ref{sssec:zdt3}. The PyRibs~\\cite{pyribs} library was used as a basis for all MAP-Elites experiments, with T-DominO implemented as a specialized archive type. All experiments were replicated 30 times; additional plots are provided in the Supplemental.\n\n\n\\section{Supplemental Material}\nUpon publication, all supplemental material, along with all source code used to produce the results in this paper, will be published online.\n\n\n\\subsection{Building Layout Objectives, Features, and Constraints}\n\\input{tex\/53_exp2_table}\n\n\\newpage\n\\subsection{Wave Function Collapse Tiles Set and Seed Examples}\n\\input{tex\/fig_supp_01_tiles}\n\n\\newpage\n\\subsection{Single Building Example Outputs of Wave Function Collapse}\n\\input{tex\/fig_supp_02_example}\n\n\\newpage\n\\subsection{QD Score}\n\\input{tex\/fig_supp_03_qdscore}\n\n\\newpage\n\\subsection{MOO Benchmark Functions}\n\\input{tex\/supp_moo_obj}\n\n\n\\subsubsection{ZDT3}\\label{sssec:zdt3}\nThe ZDT3 benchmark objective function is defined as:\n\n$\n\\begin{aligned}\nf_{1}(x) &=x_{1} \\\\\\\\\ng(x) &=1+\\frac{9}{n-1} \\sum_{i=2}^{n} x_{i} \\\\\\\\\nh\\left(f_{1}, g\\right) &=1-\\sqrt{f_{1} \/ g}-\\left(f_{1} \/ g\\right) \\sin \\left(10 \\pi f_{1}\\right) \\\\\\\\\n0 & \\leq x_{i} \\leq 1 \\quad i=1, \\ldots, n\n\\end{aligned}\n$\n\n\n\\subsubsection{DTLZ3}\\label{sssec:dltz3}\nThe DTLZ3 benchmark objective function is defined as:\n\nMin.
$f_{1}(\\mathbf{x})=\\left(1+g\\left(\\mathbf{x}_{M}\\right)\\right) \\cos \\left(x_{1} \\pi \/ 2\\right) \\cdots \\cos \\left(x_{M-2} \\pi \/ 2\\right) \\cos \\left(x_{M-1} \\pi \/ 2\\right)$,\n\nMin. $f_{2}(\\mathbf{x})=\\left(1+g\\left(\\mathbf{x}_{M}\\right)\\right) \\cos \\left(x_{1} \\pi \/ 2\\right) \\cdots \\cos \\left(x_{M-2} \\pi \/ 2\\right) \\sin \\left(x_{M-1} \\pi \/ 2\\right)$,\n\nMin. $f_{3}(\\mathbf{x})=\\left(1+g\\left(\\mathbf{x}_{M}\\right)\\right) \\cos \\left(x_{1} \\pi \/ 2\\right) \\cdots \\sin \\left(x_{M-2} \\pi \/ 2\\right)$,\n\n$\\vdots \\quad \\vdots$\n\nMin. $f_{M}(\\mathbf{x})=\\left(1+g\\left(\\mathbf{x}_{M}\\right)\\right) \\sin \\left(x_{1} \\pi \/ 2\\right)$,\n\nwith $g\\left(\\mathbf{x}_{M}\\right)=100\\left[\\left|\\mathbf{x}_{M}\\right|+\\sum_{x_{i} \\in \\mathbf{x}_{M}}\\left(x_{i}-0.5\\right)^{2}-\\cos \\left(20 \\pi\\left(x_{i}-0.5\\right)\\right)\\right]$,\n$0 \\leq x_{i} \\leq 1, \\quad$ for $i=1,2, \\ldots, n$.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbvew b/data_all_eng_slimpj/shuffled/split2/finalzzbvew new file mode 100644 index 0000000000000000000000000000000000000000..c20fb8b12f7db362f1a9580fead34dfa3093a7b0 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbvew @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\nThe Epoch of Reionization is the interval of time during which the cosmic gas evolves from an almost completely neutral state (neglecting the recombination leftovers) to an ionized state.\nThis ionization process is believed to happen due to the onset of star formation at redshifts $z\\simeq 12$, and it is believed to last until $z\\simeq 6$.\nSeveral astrophysical observables (quasars~\\cite{Fan:2005es,Becker:2014oga}, Lyman $\\alpha$ emitters~\\cite{Stark:2010qj,Treu:2013ida,Pentericci:2014nia,Schenker:2014tda,Tilvi:2014oia}, $\\gamma$ ray bursts~\\cite{Wang:2015ira,Gallerani:2009aw}) seem to agree with this hypothesis.\nHowever, the precise details of the overall reionization process still remain obscure.\nThe main reason is that the currently available most precise information on the reionization period comes from Cosmic Microwave Background (CMB) measurements through a redshift-integrated quantity.\nDuring reionization, the number density of free electrons which can scatter the CMB, $n_e$, increases. As a consequence, the reionization optical depth $\\tau$ increases according to a line of-sight integral of $n_e$, generating a suppression of the CMB peaks at any scale within the horizon at the reionization period.\nThis suppression, however, can be easily compensated with an enhancement of the primordial power spectrum amplitude, $A_{\\rm s}$.\nA much better and cleaner measurement of $\\tau$ can be obtained via measurements of the CMB polarization, which is linearly affected by reionization (see e.g. Refs.~\\cite{Kaplinghat:2002vt,Haiman:2003ea,Holder:2003eb,Hu:2003gh} for seminal works and \\cite{Reichardt:2015cos} for a recent review). 
The latest measurements of the Planck collaboration provide a value of $\\tau = 0.055 \\pm 0.009$~\\cite{Aghanim:2016yuo, Adam:2016hgk} based exclusively on the CMB polarization spectrum.\nThis value of $\\tau$ is in much better agreement than previous WMAP~\\cite{Hinshaw:2012aka} and Planck~\\cite{Ade:2015xua} estimates with observations of Lyman-$\\alpha$ (Ly-$\\alpha$) emitters at $z\\simeq 7$~\\cite{Stark:2010qj,Treu:2013ida,Pentericci:2014nia,Schenker:2014tda,Tilvi:2014oia}, which require that reionization is complete by $z\\simeq 6$.\nEven if cosmological and astrophysical tests of the reionization process now seem to agree, the measurement of $\\tau$ provides only integrated information on the free electron fraction $x_e$, and not on its precise redshift evolution.\nConsequently, the same measured value of $\\tau$ may correspond to very different reionization histories.\n\nTraditionally, the most commonly exploited model for the time evolution of the free electron fraction, $x_e(z)$, uses a step-like transition, implemented via a hyperbolic tangent~\\cite{Lewis:2008wr}.\nModel-independent attempts have been carried out in several works in the past~\\cite{Hu:2003gh,Mortonson:2007hq,Mortonson:2007tb,Mortonson:2008rx,Mortonson:2009qv,Mortonson:2009xk,Mitra:2010sr,Lewis:2006ym,Pandolfi:2010dz,Pandolfi:2010mv} and also more recently~\\cite{Heinrich:2016ojb,Hazra:2017gtx,Mitra:2017oxx}, based either on a redshift-node decomposition of $x_e(z)$ or on a Principal Component Analysis (PCA) of the CMB polarization angular power spectrum.\nMore concretely, using the latter approach, the authors of \\cite{Heinrich:2016ojb} claimed that Planck 2015 data favors a high-redshift ($z>15$) component to the reionization optical depth.\nThe quoted $2\\sigma$ evidence would come from the excess in power in the low multipole range of the Planck 2015 CMB polarization spectrum.\nAccording to their results, the functional form of the usual step-like model precludes a priori such an early component in the reionization history of our universe.\nHowever, the authors of \\cite{Hazra:2017gtx}, using a different method, which implements reionization through a non-parametric reconstruction that uses a Piecewise Cubic Hermite Interpolating Polynomial (\\texttt{PCHIP}), find only marginal evidence for extended reionization histories.\nSince an early component in the reionization history $x_e(z)$ (or, in other words, a high redshift contribution to the reionization optical depth $\\tau$) may either imply the need for a high-redshift population of ionizing sources (a hypothesis that will be tested by the future James Webb Space Telescope~\\cite{Gardner:2006ky}),\nor give hints about a possible energy injection from dark matter annihilations or decays~\\cite{Pierpaoli:2003rz,Mapelli:2006ej,Natarajan:2008pk,Natarajan:2009bm,Belikov:2009qx,Huetsi:2009ex,Cirelli:2009bb,Kanzaki:2009hf,Natarajan:2010dc,Giesen:2012rp,Diamanti:2013bia,Lopez-Honorez:2013lcm,Lopez-Honorez:2016sur,Poulin:2016nat,Poulin:2015pna},\nor accreting massive primordial black holes~\\cite{Ricotti:2007au,Horowitz:2016lib,Ali-Haimoud:2016mbv,Blum:2016cjs,Poulin:2017bwe},\nit is mandatory to robustly establish what current data prefer, regardless of the model used to describe the redshift evolution of the free electron fraction.
\n\nHere we first analyze several possible parameterizations for reionization (PCA with several fiducial cosmologies and the \\texttt{PCHIP}\\ method)\nand explore the corresponding constraints on the reionization history of the universe.\nWe shall then exploit tools related to model selection among competing models, using both the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), which will allow us to quantitatively decide which model is currently preferred and whether or not there exists an indication for an early reionization component in our universe.\n\nThe structure of the paper is as follows.\nWe start by discussing the different reionization approaches that we shall test against current data in Sec.~\\ref{sec:histories}.\nIn Sec.~\\ref{sec:data} we describe the cosmological observations exploited in our numerical analyses, whose results are shown in Sec.~\\ref{sec:results}.\nOur conclusions are summarized in Sec.~\\ref{sec:conclusions}.\n\n\n\\section{Reionization histories}\n\\label{sec:histories}\n\nIn the following, we will derive the constraints on the reionization history of our universe from cosmological observations, exploring several possible scenarios and focusing on a possible early reionization component.\nFor that, we shall exploit the reionization optical depth:\n\\begin{equation}\n\\tau(z) = \\int_z^{\\infty} dz' \\frac{c ~dt'}{dz'} (n_{\\rm e}(z')- n_{\\rm e, 0}(z'))\\sigma_{\\rm T}\\,~,\n\\label{eq:cumtau}\n\\end{equation}\nwhere $n_{\\rm e}(z)=n_{\\rm H}(0)(1+z)^3x_{\\rm e}(z)$ and $n_{\\rm e,0}(z)=n_{\\rm H}(0)(1+z)^3x_{\\rm e, 0}(z)$, with $n_{\\rm H}(0)$ the number density of hydrogen at present, $x_{\\rm e}(z)$ the free electron fraction and $x_{e,0}(z)$ the free electron fraction leftover from the recombination epoch (see e.g.\\ \\cite{Kolb:1990vq,2009fflr,2010gfe}). Therefore, Eq.~\\eqref{eq:cumtau} just accounts for the cumulative Compton optical depth after recombination, subtracting the pre-reionization contribution.\n\n\\subsection{Canonical scenarios}\n\\label{subsec:canonical}\nWe start by describing the free electron fraction by means of the simplest and most commonly exploited parameterizations in the literature, i.e.\\ the so-called \\emph{redshift-symmetric} and \\emph{redshift-asymmetric} parameterizations (see e.g.~\\cite{Adam:2016hgk}).
\n\n\\begin{itemize}\n\n\\item \\emph{Redshift-symmetric} parameterization.\n\nThe most economical and widely employed approach to describe the reionization process in our universe assumes that the free electron fraction follows a step-like function, taking the recombination leftover value at high redshifts and becoming close to one at low redshifts, and being described by the hyperbolic tangent function~\\cite{Lewis:2008wr}\n\\begin{equation}\nx_e^{\\rm tanh}(z) = \\frac{1+f_{\\rm He}}{2} \\left(1+ \\tanh \\left[ \\frac{y(z_{\\rm{re}})-y(z)}{\\Delta y} \\right] \\right),\n\\label{eqn:tanh}\n\\end{equation}\nwhere $y(z)=(1+z)^{3\/2}$, $\\Delta y=3\/2(1+z_{\\rm{re}})^{1\/2}\\Delta z$, and $\\Delta z$ is the width of the transition, fixed in the following to $\\Delta z=0.5$.\nThis parameterization is named ``redshift symmetric'' because the redshift interval between the beginning of reionization and its half completion equals the corresponding one between half completion and the reionization offset, and it is the default one implemented in Boltzmann solver codes such as \\texttt{CAMB}~\\footnote{\\href{http:\/\/camb.info}{http:\/\/camb.info}}~\\cite{Lewis:1999bs}.\nThis parameterization, as well as the following ones, also accounts for the first ionization of helium $f_{\\rm He}=n_{\\rm{He}}\/n_{\\rm{H}}$, assumed to happen at the same time as that of hydrogen.\nThe full helium reionization is modeled via another hyperbolic tangent function with $z_{\\rm{re,He}}=3.5$ and $\\Delta z=0.5$.\nTherefore, the only free parameter in this simple approach is the reionization redshift $z_{\\rm{re}}$.\nWhen this redshift-symmetric parameterization is used as the fiducial model in our PCA analyses (see next subsection), we fix $z_{\\rm{re}}=8.8$, following the results quoted in Ref.~\\cite{Adam:2016hgk}.\n\n\\item \\emph{Redshift-asymmetric} reionization.\n\nBesides the previous case, alternative reionization parameterizations with a non-redshift-symmetric transition have been proposed in the literature.\nOne of the most flexible choices, which shows good agreement with current measurements from quasars, Ly$\\alpha$ emitters and star-forming galaxies, is represented by a power law, described via three parameters~\\cite{Adam:2016hgk,Douspis:2015nca}:\n\\begin{equation}\n x_e^{asym}(z) =\n \\begin{cases}\n\t1+f_{\\rm He} & \\mbox{for } z < z_{\\rm end}, \\\\\\\\\n\t\\left(1+f_{\\rm He}\\right)\\left(\\frac{z_{\\rm early}-z}{z_{\\rm early}-z_{\\rm end}}\\right)^{\\alpha} & \\mbox{for } z_{\\rm end} < z < z_{\\rm early}, \\\\\\\\\n\t0 & \\mbox{for } z > z_{\\rm early}.\n \\end{cases}\n \\label{eqn:asym}\n\\end{equation}\nFollowing Planck 2016 reionization analyses~\\cite{Adam:2016hgk}, when using this redshift-asymmetric model as a fiducial model in our PCA analyses, we shall fix the redshift at which the first sources in our universe switch on, $z_{\\rm early} = 20$, the redshift at which reionization is fully complete, $z_{\\rm end} = 6$, and the exponent $\\alpha = 6.10$.
\n\\end{itemize}\n\n\\subsection{Principal Component Analysis (PCA)}\nThe second method we follow here to model the reionization process is the Principal Component Analysis (PCA) approach of Refs.~\\cite{Hu:2003gh,Mortonson:2007hq,Mortonson:2007tb,Mortonson:2008rx,Mortonson:2009qv,Mortonson:2009xk,Mitra:2010sr}, exploited more recently in Refs.~\\cite{Heinrich:2016ojb,Mitra:2017oxx}.\nFollowing these previous works, we discretize the redshift range from $z_{\\rm{min}}=6$ to $z_{\\rm{max}}=30$ in $N_z$ bins of width of $\\delta z = 0.25$.\nWe set the ionization fraction to $x_e=0$ for $z \\geq z_{\\rm{max}}$, when the reionization processes have not started yet, while for $z \\leq 6$ we assume fully ionized hydrogen and singly ionized helium, i.e.\\ $x_e=1+f_{\\rm He}$.\nThe full helium reionization is modeled as aforementioned.\nThis approach makes use of the Fisher information matrix~\\cite{Tegmark:1996bz}, that we compute as:\n\\begin{equation}\nF_{ij} = \\sum_{\\ell=2}^{\\ell_{\\rm max}}\\frac{1}{\\sigma_{ C_{\\ell}}^2}\n \\frac{\\partial C_{\\ell}}{\\partial x_e(z_i)}\n \\frac{\\partial C_{\\ell}}{\\partial x_e(z_j)} =\\sum_{\\ell=2}^{\\ell_{\\rm max}}\\left(\\ell+\\frac{1}{2}\\right)\n \\frac{\\partial \\ln C_{\\ell}}{\\partial x_e(z_i)}\n \\frac{\\partial \\ln C_{\\ell}}{\\partial x_e(z_j)} \\,,\n\\label{eq:fisher}\n\\end{equation}\nwhere the $C_{\\ell}$ are the components of the large angle $EE$ polarization spectrum.\nThe sum above is truncated at $\\ell_{\\rm max}=100$, because the reionization imprint is mostly located in the lowest modes of the CMB polarization spectrum.\nIn Eq.~\\eqref{eq:fisher} we have used the well-known result for the cosmic variance: $\\sigma_{ C_{\\ell}}^2 = C_{\\ell}^2\\, 2\/(2\\ell+1)$.\nHaving the Fisher matrix, we can diagonalize it and find that the eigenfunctions are the principal components $S_{\\mu}(z)$ and the eigenvalues are proportional to the inverse of the estimated variance of each eigenmode, $\\sigma^2_{\\mu}$.\nUsing the normalization of Ref.~\\cite{Mortonson:2007hq}, we can write the Fisher matrix as\n\\begin{equation}\nF_{ij}=\\frac{1}{(N_z+1)^2}\\sum_{\\mu=1}^{N_z}\n \\frac{1}{\\sigma^2_{\\mu}}S_{\\mu}(z_i) S_{\\mu}(z_j)~.\n\\label{eq:fisher2}\n\\end{equation}\nWe sort the different eigenfunctions in order to have the smallest uncertainties at the lowest modes, being therefore the $\\mu=1$ case the best constrained mode.\nDue to completeness and orthogonality of the principal components, the following properties are fulfilled:\n\\begin{align}\n\\int_{z_{\\rm min}}^{z_{\\rm max}} dz \\, S_{\\mu}(z)S_{\\nu}(z)&=(z_{\\rm max}-z_{\\rm min})\\delta_{\\mu\\nu} \\, , \\\\\n\\sum_{\\mu=1}^{N_z} S_{\\mu}(z_i)S_{\\mu}(z_j)&= (N_z+1) \\delta_{ij}\\,.\n\\end{align}\nSince the width of the bins is chosen to be sufficiently small, in practice we can replace the integrals over redshift by discrete sums.\nOne of the ideas behind the PCA approach is that one can write redshift-dependent quantities such as the ionization fraction as a linear combination of the principal components.\nSince the lowest modes have the smallest uncertainties, we truncate the sum, using only the first 5 principal components, following Ref.~\\cite{Mortonson:2007hq}.\nWe apply the PCA analysis to the ionization history in two different ways, which are explained below.\n\n\\begin{itemize}\n\n\\item \\textbf{Case A}\n\nIn the first PCA approach, named \\textbf{PCA-A} in what follows, the reionization history reads as\n\\begin{equation}\nx_e^A (z) = x^{\\rm{fid}}_{\\rm e} (z)+ 
\\sum_\\mu m_{\\mu}^{A} S_\\mu (z)~.\n\\label{eq:pca_a}\n\\end{equation}\nGiven a fiducial model $x^{\\rm{fid}}_{\\rm e} (z)$, and knowing the amplitudes derived from the Fisher matrix (see Eq.~\\eqref{eq:fisher}), one can recover an arbitrary ionization history using a PCA analysis.\nThis is the standard approach adopted in Refs.~\\cite{Mortonson:2007hq,Heinrich:2016ojb} in order to constrain the ionization history with CMB data.\nFollowing \\cite{Mortonson:2007hq}, we can derive upper and lower bounds for each amplitude $m_{\\mu}$:\n\\begin{equation}\nm_{\\mu}^{\\pm} = \\int_{z_{\\rm min}}^{z_{\\rm max} } dz \\frac{S_{\\mu}(z)[x_e^{\\rm max} -2 x_e^{\\rm fid}(z)]\n\\pm x_e^{\\rm max} | S_{\\mu}(z)|}{2(z_{\\rm max}-z_{\\rm min})}~.\n\\label{eq:mbounds}\n\\end{equation}\nAdditionally, in order to guarantee physical ionization histories, the choice of our amplitudes $m_{\\mu}$ has to fulfill the condition $0 \\leq x_e(z) \\leq 1+f_{\\rm He}$ at any redshift $z$~\\footnote{Notice that this constraint for physicality is stronger than that followed in Ref.~\\cite{Heinrich:2016ojb}, as any unphysical model will be retained for the Monte Carlo analyses.}.\n\n\\item \\textbf{Case B}\n\nIn the second of our PCA analyses, named \\textbf{PCA-B}, we choose a different approach to the standard PCA analysis described above, in which the free electron fraction is given by the fiducial model plus the PCA decomposition.\nHere, we exploit the functional form of the fiducial model in order to test other possible reionization parameterizations.\nFollowing this idea, for the redshift-symmetric, \\textit{tanh} description, we insert the PCA decomposition inside the argument of the hyperbolic tangent:\n\\begin{equation}\nx_e^{B,tanh}(z) = \\frac{1+f_{\\rm He}}{2} \\left(1+ \\tanh \\left[ \\frac{y(z_{\\rm{re}})-y(z)}{\\Delta y} + \\sum_\\mu m_\\mu^B S_\\mu (z) \\right] \\right)~.\n\\label{eqn:tanh_b}\n\\end{equation}\nNotice that we recover the fiducial \\textit{tanh} model by setting the amplitudes $m_{\\mu}$ to $0$.\nWe perform an analogous replacement for the redshift-asymmetric parameterization, inserting the PCA decomposition in the exponent of the power law:\n\\begin{equation}\n x_e^{B,asym}(z) =\n \\begin{cases}\n\t1+f_{\\rm He} & \\mbox{for } z < z_{\\rm end}, \\\\\\\\\n\t\\left(1+f_{\\rm He}\\right)\\left(\\frac{z_{\\rm early}-z}{z_{\\rm early}-z_{\\rm end}}\\right)^{\\alpha+\\sum_\\mu m_\\mu^B S_\\mu (z)} & \\mbox{for } z_{\\rm end} < z < z_{\\rm early}, \\\\\\\\\n\t0 & \\mbox{for } z > z_{\\rm early}.\n \\end{cases}\n \\label{eqn:asym_b}\n\\end{equation}\nWe take for the specific parameters of the \\textit{tanh} and \\textit{asym} cases the fiducial values given in Sec.~\\ref{subsec:canonical}.\n\n\\end{itemize}\n\n\\subsection{\\texttt{PCHIP}}\nThe third and last method we adopt in order to describe the reionization history is based on a non-parametric form for the free electron fraction $x_e(z)$, which is described using the function values $x_e(z_i)$ in a number $n$ of fixed redshift points $z_1,\\ \\ldots,\\ z_n$.\nFollowing the procedure adopted for the PCA analyses, we fix the function to be a constant both at low redshifts ($z\\leq6$) and at high redshifts ($z\\geq30$).\nThe first and the last redshift nodes we use to parameterize the function at intermediate redshifts are therefore $z_1=6$ and $z_n=30$, where we also want the function to be continuous:\nas a consequence, the values $x_e(z_1)=1+f_{\\rm He}$ and $x_e(z_n)=0$ are fixed\nand the number of varying parameters that describe $x_e(z)$ is always $n-2$.\nWe consider a case with a total of $n=7$ nodes (5 free parameters),\nlocated at redshifts\n\\begin{equation}\\label{eq:nodes7}\n z_i \\in \\{6,\\,7,\\,8.5,\\,10,\\,13,\\,20,\\,30\\}\\,,\n\\end{equation}\nin order to have the same number of free parameters as in the PCA cases.\n\nThe
function $x_e(z)$ at $z\\neq z_i$ is computed through an interpolation among its values in the nodes.\nWe employ the\n``piecewise cubic Hermite interpolating\npolynomial'' (\\texttt{PCHIP})~\\cite{Fritsch:1980,Fritsch:1984}\nin a very similar way to Refs.~\\cite{Gariazzo:2014dla,DiValentino:2015zta,Gariazzo:2015qea,DiValentino:2016ikp},\nwhere the \\texttt{PCHIP}\\ function was adopted to describe the power spectrum\nof initial curvature perturbations, or the more recent work of \\cite{Hazra:2017gtx}, where the \\texttt{PCHIP}\\ method has also been used to model the evolution of $x_e(z)$.\nThe idea behind the \\texttt{PCHIP}\\ function is similar to that of the natural cubic spline,\nwith the difference that the monotonicity of the series of interpolating points\nmust be preserved.\nSpurious oscillations that may be introduced by the standard spline\nare avoided by imposing a condition on the first derivative of the function in the nodes,\nwhich must be zero if there is a change in the monotonicity\nof the point series.\nA more detailed discussion on the \\texttt{PCHIP}\\ function can be found in the appendix of Ref.~\\cite{Gariazzo:2014dla}.\n\nSummarizing, the free electron fraction in the \\texttt{PCHIP}\\ case is described by:\n\\begin{equation}\\label{eq:xe_pchip}\n x_e(z) =\n \\begin{cases}\n 1+f_{\\rm He}\n & \\mbox{for } z \\leq z_1, \\\\\n \\texttt{PCHIP}(z;\\ x_e(z_1),\\ \\ldots,\\ x_e(z_n))\n & \\mbox{for } z_1 < z < z_n, \\\\\n 0\n & \\mbox{for } z \\geq z_n,\n \\end{cases}\n\\end{equation}\nwhere $n$ will be 7 and the redshifts $z_i$ are reported in Eq.~\\eqref{eq:nodes7}.\n\nFor the values of the function in the varying nodes,\nwhich are the free reionization parameters in our Markov Chain Monte Carlo analyses,\nwe impose a linear prior $0 \\leq x_e(z_i) \\leq 1+f_{\\rm He}$,\nwith $i=2,\\ \\ldots,\\ n-1$.\nThis ensures that the free electron fraction is always positive and smaller than its value today.\nThe value of the reionization optical depth $\\tau$ \nthat we report in our results is derived from Eq.~\\eqref{eq:cumtau}.\n\n\\section{Cosmological data}\n\\label{sec:data}\nWe use Planck satellite 2015 measurements of the CMB temperature,\npolarization, and cross-correlation spectra~\\cite{Adam:2015rua,Ade:2015xua}\nto derive the constraints on the possible reionization histories~\\footnote{%\nWe make use of the publicly available Planck likelihoods~\\cite{Aghanim:2015xee}, see \\href{http:\/\/www.cosmos.esa.int\/web\/planck\/pla}{www.cosmos.esa.int\/web\/planck\/pla}.\n}.\nMore precisely, we exploit both\nthe high-$\\ell$ ($30 \\leq \\ell \\leq 2508$) and\nthe low-$\\ell$ ($2 \\leq \\ell \\leq 29$) $TT$\nlikelihoods\nbased on the reconstructed CMB maps\nand\nwe include the Planck\npolarization likelihoods in the low-multipole regime\n($2 \\leq \\ell \\leq 29$), plus the high-multipole ($30 \\leq \\ell \\leq 1996$) $EE$ and $TE$ likelihoods~\\footnote{The latest reionization constraints from the Planck collaboration do not consider the TE data in the analyses, due to its larger cosmic variance and its weaker dependence on the reionization optical depth, when compared to EE measurements, see \\cite{Adam:2016hgk}.}. \nAll these CMB likelihood functions depend on several nuisance parameters\n(e.g.\\ residual foreground contamination, calibration, and\nbeam-leakage~\\cite{Ade:2015xua,Aghanim:2015xee}),\nwhich have been properly considered and marginalized over. 
\nTo derive constraints on the reionization history and related parameters, we have modified the Boltzmann equations solver \\texttt{CAMB} code \\cite{Lewis:1999bs} and apply\nMarkov Chain Monte Carlo (MCMC) methods by means of an adapted version of the \\texttt{CosmoMC} package~\\cite{Lewis:2002ah}.\nAs for current constraints, we consider a minimal version of the $\\Lambda$CDM model, described by the following set of parameters: \n\\begin{equation}\\label{parameterPPS}\n\\{\\omega_{\\rm{b}},\\,\\omega_{\\rm{c}},\\, \\Theta_{\\rm{s}},\\,\\ln{(10^{10} A_{\\rm{s}})},\\,n_{\\rm{s}}\\}~,\n\\end{equation}\nwhere $\\omega_{\\rm{b}}\\equiv\\Omega_{\\rm{b}}h^2$ and $\\omega_{\\rm{c}}\\equiv\\Omega_{\\rm{c}}h^2$\nrepresent the physical baryon and cold dark matter energy densities, $\\Theta_{\\rm{s}}$\nis the angular scale of recombination, $A_{\\rm{s}}$ is the primordial power spectrum amplitude and $n_{\\rm s}$ the spectral index.\nNotice that we do not have $\\tau$ among the parameters included in our analyses, as $\\tau$ is a derived parameter.\nInstead, we will add the additional parameters describing the PCA and \\texttt{PCHIP}\\ reionization models, that will lead to the constraints presented in what follows. \n\n\\section{Results}\n\\label{sec:results}\nFigure~\\ref{fig:tau} shows the most relevant results from our analyses of Planck 2015 temperature and polarization data assuming different reionization histories.\nAs aforementioned, we shall focus on the cumulative redshift distribution function of the reionization optical depth, Eq.~\\eqref{eq:cumtau}.\nA large departure from $0$ at redshifts $z>10$ would indicate evidence for an early reionization contribution, and therefore for non-standard reionization sources as, for instance, energy injection from dark matter annihilations or from matter accretion on massive primordial black holes.\nNotice that the PCA-A method of Ref.~\\cite{Heinrich:2016ojb}, in which the PCA decomposition is added linearly to a fiducial $x^{\\rm{fid}}_{\\rm{e}}(z)$, leads \\emph{always} to an early contribution to the optical depth $\\tau$, i.e.\\ $\\tau$ is significantly different from 0 at $z>10$, in contrast to standard reionization scenarios.\nFurthermore, the presence of this early contribution is independent of the fiducial model,\nas we can see from\nthe four PCA-A cases depicted in Fig.~\\ref{fig:tau}, which provide the same predictions at $z>10$, differing only mildly at small redshifts, regardless whether the fiducial model is a constant function or it depends on the redshift instead. 
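\nFor illustration, the cumulative optical depth of Eq.~\\eqref{eq:cumtau} can be evaluated numerically for a \\texttt{PCHIP}-interpolated free electron fraction as in the Python sketch below; the node amplitudes, the cosmological parameters and the hydrogen density are illustrative placeholders, and the small pre-reionization leftover $x_{e,0}(z)$ is neglected.\n\\begin{verbatim}\n# Sketch: cumulative optical depth tau(z=0) from a PCHIP-interpolated x_e(z).\nimport numpy as np\nfrom scipy.interpolate import PchipInterpolator\nfrom scipy.integrate import quad\n\nsigma_T = 6.6524587e-29       # Thomson cross section [m^2]\nc_light = 2.99792458e8        # speed of light [m\/s]\nn_H0 = 0.19                   # hydrogen number density today [m^-3] (approx.)\nH0 = 67.0e3 \/ 3.0857e22       # Hubble constant [s^-1]\nOm, OL = 0.31, 0.69           # flat LCDM, placeholder values\n\nnodes = [6, 7, 8.5, 10, 13, 20, 30]\nvalues = [1.08, 0.90, 0.60, 0.30, 0.10, 0.02, 0.0]   # placeholder amplitudes\nxe_nodes = PchipInterpolator(nodes, values)\n\ndef x_e(z):\n    if z <= 6:\n        return 1.08           # 1 + f_He, fully reionized (approx.)\n    if z >= 30:\n        return 0.0\n    return float(xe_nodes(z))\n\ndef integrand(z):\n    Hz = H0 * np.sqrt(Om * (1 + z)**3 + OL)\n    n_e = n_H0 * (1 + z)**3 * x_e(z)\n    return c_light * sigma_T * n_e \/ ((1 + z) * Hz)\n\ntau0, _ = quad(integrand, 0.0, 30.0, points=[6.0], limit=200)\nprint(\"tau(z=0) ~\", tau0)\n\\end{verbatim}\n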
\n\n\\begin{figure}[t]\n\\centering \n\\includegraphics[width=0.85\\textwidth]{reioPCHIP_pol_sm_bands_taue_new.pdf}\n\\caption{\\label{fig:tau} Cumulative redshift evolution of the reionization optical depth $\\tau(z)$ for several possible reionization scenarios.\nThe black thin solid and dot-dashed lines illustrate the PCA-A scenario for the case of two fiducial models constant in redshift.\nThe two upper dot-dashed lines refer also to the PCA-A parameterization but with redshift-dependent fiducial models.\nThe two lower colored solid lines depict the PCA-B scenarios, while the thick solid black line and the blue contours show the mean value and the $1$, $2$ and $3\\sigma$ allowed regions within the \\texttt{PCHIP}\\ prescription.}\n\\end{figure}\n\nIn order to unravel the origin of this early reionization component present when using the PCA-A description, several tests have been carried out.\nFirstly, we have eliminated the physical limits in the PCA amplitudes, finding very similar results.\nSecondly, we have simulated mock Planck data with the hyperbolic tangent description and then fitted these data to a PCA-A modeling, using different fiducial models.\nWe always find two bumps in the recovered $x_e$, see Fig.~\\ref{fig:xe}, one located between $z=10$ and $z=15$ and a second one located between $z=20$ and $z=25$.\nUpcoming measurements from the Planck satellite could disentangle if this early reionization component is truly indicated by the data or instead it is due to the adopted modeling or to other effects (i.e.\\ systematics).\n\nFurthermore, this early reionization component is definitely absent when other possible reionization histories are used in the analyses. \nFor instance, in the case of PCA-B parameterizations (see Eqs.~\\eqref{eqn:tanh_b} and \\eqref{eqn:asym}), there is no early reionization contribution, as $\\tau(z)$ is negligibly small for $z>10$.\nThe same happens for the \\texttt{PCHIP}\\ method, in which the mean reconstructed value of $\\tau(z)$ is also very small at high redshifts, showing little evidence for an early reionization component (see also Ref.~\\cite{Hazra:2017gtx}).\nNotice that the value of $\\tau$ today is smaller in the PCA-B approaches than in the PCA-A and \\texttt{PCHIP}\\ descriptions.\nHowever, this behavior is the expected one, as the PCA-B scenarios are very close to those explored by the Planck collaboration in Ref.~\\cite{Adam:2016hgk}, where it was found that the current value of $\\tau$ is $0.058\\pm 0.012$ for the hyperbolic tangent case, in perfect agreement with our findings here, even if we make use of the 2015 Planck likelihood only (the mean value is $\\tau=0.068$ for the very same model).\nThe differences between the PCA-A and PCA-B cases can be understood from the fact that the case B imposes a more restrictive functional form on the ionization history.\n \n\\begin{figure}\n\\centering \n\\includegraphics[width=0.85\\textwidth]{reioPCHIP_pol_sm_bands_xe_new_xe.pdf}\n\\caption{\\label{fig:xe}\nFree electron fraction as a function of the redshift for several possible reionization scenarios.\nLine styles and colors are the same as in Fig.\\ref{fig:tau}.}\n\\end{figure} \n\nThe findings above are fully consistent with our limits on the free electron fraction $x_{e}(z)$ at a given redshift.\nFigure~\\ref{fig:xe} shows the free electron fraction for the \\texttt{PCHIP}\\ parameterization together with the other PCA-A and PCA-B models explored here.\nThe color coding is identical to that used in Fig.~\\ref{fig:tau}. 
\nNotice that for the PCA-A models the free electron fraction is almost constant in the redshift interval $z=10-30$, as a consequence of the choice of the fiducial model, and therefore there will always be an early reionization component \\emph{within this approach}.\nHowever, when considering either the \\texttt{PCHIP}\\ or the PCA-B models, the free electron fraction is significantly smaller than $0.2$ for redshifts above $z=15$ and it is almost negligible above $z=20$.\nTherefore, the fact that current CMB observations need an early contribution to reionization is highly questionable, as it strongly depends on the framework used to analyze the data.\nUsing Planck CMB temperature and polarization data within the \\texttt{PCHIP}\\ analysis,\nwe find $x_e<0.90$, $<0.49$ and $<0.13$ at $2\\sigma$ in the nodes at $z=10$, $13$ and $20$, respectively.\nFluctuations in the lower $1\\sigma$ limits shown in Fig.~\\ref{fig:xe}\nare numerical artifacts that appear when computing the error bands at intermediate positions between the fixed \\texttt{PCHIP}\\ nodes and cannot be considered as significant.\nFigure~\\ref{fig:pchipamplitudes} shows the $68\\%$ and $95\\%$~CL allowed regions for the amplitudes of the \\texttt{PCHIP}\\ nodes, i.e.\\ the $x_{\\rm e} (z)$ at the redshifts listed in Eq.~\\eqref{eq:nodes7},\nfrom the Planck CMB measurements considered here.\nA quick inspection of Fig.~\\ref{fig:pchipamplitudes} tells us that all the amplitudes are perfectly compatible with a vanishing value.\nOnly one of them, $m_5$, the node corresponding to $z=13$, shows a very mild departure from $0$. However, this mild departure is far from being a significant effect, as it barely appears at $1\\sigma$.\nWe can therefore conclude that there is no evidence for a high redshift component in $x_{\\rm e}(z)$.\nNotice also from Fig.~\\ref{fig:pchipamplitudes} that, in general, the \\texttt{PCHIP}\\ amplitudes are anti-correlated among themselves.\nWe also illustrate the derived distribution for the value of the reionization optical depth, $\\tau_{\\rm PC}$, which is significantly correlated with the nodes at the higher redshifts.\nEven a modest increase of $x_{\\rm e}$ at $z=13$ or at $z=20$ would imply a significant shift towards larger values of the current reionization optical depth. \n\n\\begin{figure}\n\\centering \n\\includegraphics[width=0.85\\textwidth]{pchipNodes}\n\\caption{\\label{fig:pchipamplitudes} $68\\%$ and $95\\%$~CL allowed regions from the Planck CMB measurements considered here on the amplitudes in the \\texttt{PCHIP}\\ approach, together with the one-dimensional posterior probability distributions.}\n\\end{figure} \n\nIn order to further assess our findings above, we adopt here two information criteria \nwhich have been widely exploited\nin astrophysical and cosmological contexts (see Refs.~\\cite{Liddle:2007fy,Trotta:2008qt} for details), namely the frequentist Akaike Information Criterion (AIC)\n\\begin{equation}\n\\textrm{AIC}\\equiv -2 \\ln \\mathcal{L }_{\\rm{max}} +2k~,\n\\end{equation}\nwhich establishes that the penalty term between competing models is twice the number of free parameters in the model, $k$; and the Bayesian Information Criterion (BIC)\n\\begin{equation}\n\\textrm{BIC}\\equiv -2 \\ln \\mathcal{L }_{\\rm{max}} +k\\ln N~,\n\\end{equation}\nin which the penalty is proportional to the number of free parameters in the model times the logarithm of the number of data points $N$.\nThe best model is the one minimizing either the AIC or the BIC criteria. 
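\nFor concreteness, the $\\Delta$AIC values quoted below follow from simple arithmetic once the $-2\\ln \\mathcal{L}_{\\rm{max}}$ values and the parameter counts are fixed; the short snippet below reproduces them, assuming one reionization parameter for the \\emph{tau-only} case and five for the extended descriptions, as stated in the text.\n\\begin{verbatim}\n# Delta(AIC) with respect to the tau-only model, using the -2 ln L_max values\n# quoted in the text; k is the number of reionization parameters of each model.\ndef aic(chi2, k):\n    return chi2 + 2.0 * k\n\nchi2_tau_only = 12956.2                  # tau-only, k = 1\nmodels = {\n    'PCA-A, x_e fid = 0.15': 12954.0,\n    'PCA-A, x_e fid = 0.05': 12953.2,\n    'PCA-A, tanh fiducial':  12956.5,\n    'PCA-A, asym fiducial':  12958.3,\n    'PCHIP':                 12954.5,\n}\nfor name, chi2 in models.items():\n    delta = aic(chi2, 5) - aic(chi2_tau_only, 1)\n    print(f'{name:24s} Delta(AIC) = {delta:+.1f}')\n\\end{verbatim}\nThe first two entries reproduce the $\\Delta$AIC $=5.8$ and $5.0$ quoted below, and the remaining differences are consistent with the strong and decisive evidence levels discussed in the text.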
\nFollowing Ref.~\\cite{Liddle:2007fy}, the significance against a given model will be judged based on the Jeffreys' scale, which will characterize a difference $\\Delta$AIC (BIC)$>5$ ($>10$) as a strong (decisive) evidence against the cosmological model with higher AIC (BIC) value.\n\nAdopting first the AIC prescription, we shall compare the different models explored here to the standard scenario, in which reionization is described via just only one parameter, $\\tau$. \nThis \\emph{tau-only} cosmological model gives $-2\\ln \\mathcal{L }_{\\rm{max}}=12956.2$~\\cite{Ade:2015xua}.\nAs a comparison, the PCA-A case with constant fiducial model $x_{e}=0.15$ ($0.05$) provides $-2\\ln \\mathcal{L }_{\\rm{max}}=12954.0$ ($12953.2$). \nNotice that both the PCA-A cases have a higher AIC value than the \\emph{tau-only} cosmology because of the larger number of parameters.\nThe values for $\\Delta$AIC are $\\Delta$AIC $=5.8$ and $5$, respectively, and therefore there is strong evidence against these possible reionization histories.\nAlso within the PCA-A description, we get $-2\\ln \\mathcal{L }_{\\rm{max}}=12956.5$ ($12958.3$) in the PCA-A \\emph{tanh} (\\emph{asym}) fiducial approach.\nThese two models also provide a larger AIC than the \\emph{tau-only} scenario, and again, there will be strong (decisive) evidence against the PCA-A \\emph{tanh} (\\emph{asym}), in favor of the simplest and most economical \\emph{tau-only} reionization paradigm.\nIn the case of the \\texttt{PCHIP}\\ approach, our results lead to $-2\\ln \\mathcal{L}_{\\rm{max}}=12954.5$, which \nalso indicates strong preference for the \\emph{tau-only} scheme. We point out that all the reported values of $-2\\ln \\mathcal{L }_{\\rm{max}}$\nare taken from the corresponding MCMC chains, and not from a specific minimization algorithm.\nFor this reason, they may not be extremely precise and they must be considered only as fair estimates of the true values of each $-2\\ln \\mathcal{L }_{\\rm{max}}$, with possible errors of order unity, as estimated from the different parallel MCMC chains.\nIn the case of the PCA-B parameterizations,\nthe difference in the minimum $-2\\ln \\mathcal{L }_{\\rm{max}}$ from the different MCMC parallel chains is too large to give even a fair estimate of the true minimum,\nand we decide not to claim any evidence against these two descriptions, for the reasons listed above.\nHowever, we expect that these two models are equally good in fitting the CMB data, at a comparable level with respect to the \\emph{tau-only} scenario, as their reionization histories are extremely close to the standard cosmological framework, see Fig.~\\ref{fig:xe}.\nNevertheless, given the fact that the number of parameters in the PCA-B scheme is larger, the \\emph{tau-only} reionization description, with current data, will always be favored over the PCA-B parameterization.\n\nWe can also compare the different reionization descriptions among themselves using the BIC approach, as all of them have the same number of free parameters (five in total) and also the same number of data points.\nThe result of comparing the PCA-A and \\texttt{PCHIP}\\ scenarios among themselves will always give very weak or inconclusive answers, as none of them in particular is preferred over the other possible formulations.\n\n\n\\section{Conclusions}\n\\label{sec:conclusions}\nUnraveling the reionization period, which is still a poorly known period in the evolution of our universe, is one of the most important goals of current and future cosmological probes.\nThis 
is a mandatory step, not only towards a complete understanding of star formation and evolution, but also to answer questions such as the nature of the dark matter component~\\cite{Barkana:2001gr,Yoshida:2003rm,Somerville:2003sh,Yue:2012na,Pacucci:2013jfa,Mesinger:2013nua,Schultz:2014eia,Dayal:2015vca,Lapi:2015zea,Bose:2016hlz,Bose:2016irl,Corasaniti:2016epp,Menci:2016eui,Lopez-Honorez:2017csg,Villanueva-Domingo:2017lae}, constraining dark matter properties or the abundance of accreting massive primordial black holes~\\cite{Pierpaoli:2003rz,Mapelli:2006ej,Natarajan:2008pk,Natarajan:2009bm,Belikov:2009qx,Huetsi:2009ex,Cirelli:2009bb,Kanzaki:2009hf,Natarajan:2010dc,Giesen:2012rp,Diamanti:2013bia,Lopez-Honorez:2013lcm,Lopez-Honorez:2016sur,Poulin:2016nat,Poulin:2015pna,Ricotti:2007au,Horowitz:2016lib,Ali-Haimoud:2016mbv,Blum:2016cjs,Poulin:2017bwe}.\nCurrently, the most accurate measurement of the reionization period comes from Cosmic Microwave Background data through a redshift-integrated quantity: the reionization optical depth $\\tau$.\nThe latest measurements of the Planck collaboration provide a value of $\\tau = 0.055 \\pm 0.009$~\\cite{Aghanim:2016yuo, Adam:2016hgk}, which shows a very good agreement with observations of Lyman-$\\alpha$ emitters at $z\\simeq 7$~\\cite{Stark:2010qj,Treu:2013ida,Pentericci:2014nia,Schenker:2014tda,Tilvi:2014oia}.\nHowever, this measured value of $\\tau$ may correspond to very different reionization histories.\n\nThe most commonly exploited model for the time evolution of the free electron fraction, $x_e(z)$, uses a step-like transition, implemented via a hyperbolic tangent~\\cite{Lewis:2008wr}.\nRecently, there have been several studies in the literature claiming that Planck 2015 data may prefer a high-redshift ($z>15$) component to the reionization optical depth, implying a clear departure from the hyperbolic tangent picture.\nHere we consider a number of possible reionization scenarios, some of them previously explored in the literature, such as the Principal Component Analysis (PCA) approach of Refs.~\\cite{Hu:2003gh,Mortonson:2007hq,Mortonson:2007tb,Mortonson:2008rx,Mortonson:2009qv,Mortonson:2009xk,Mitra:2010sr,Heinrich:2016ojb}, or the \\texttt{PCHIP}\\ framework~\\cite{Hazra:2017gtx}. \nWe find that the claimed need for an early reionization component from present data is highly debatable, as it is only motivated by a particular set of reionization descriptions.\nIn other possible reionization prescriptions, equally allowed by data, we do not find such a preference.\nTo assess this, we have applied the frequentist Akaike Information Criterion (AIC), which provides an unbiased model comparison method.\nThe AIC results show that there is strong evidence from current data against more complicated reionization scenarios, always favoring the minimal scenario with the symmetric hyperbolic tangent function and described by one single parameter, the reionization optical depth $\\tau$. In other words, current Planck CMB analyses are unable to provide more information beyond that based on a single value of the $\\tau$. Upcoming data from the Planck mission will help in further disentangling the reionization history of our universe. \n\n\n\\acknowledgments\nThis work makes use of the publicly available \\texttt{CosmoMC}~\\cite{Lewis:2002ah} and \\texttt{CAMB}~\\cite{Lewis:1999bs} codes and of the Planck data release 2015 Likelihood Code~\\cite{Aghanim:2015xee}. 
OM and PVD would like to thank the Fermilab Theoretical Physics Department for hospitality.\nOM and PVD are supported by PROMETEO II\/2014\/050, by the Spanish Grants SEV-2014-0398 and FPA2014--57816-P of MINECO and by the European Union's Horizon 2020 research and innovation program under the Marie Sk\\l odowska-Curie grant agreements No.\\ 690575 and 674896. \nThe work of SG was supported by the Spanish grants\nFPA2014-58183-P,\nMultidark CSD2009-00064 and\nSEV-2014-0398 (MINECO),\nand PROMETEOII\/2014\/084 (Generalitat Valenciana).\n\nThis manuscript has been authored in part by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. This work made extensive use of the NASA Astrophysics Data System and {\\tt arXiv.org} preprint server.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe ALICE experiment at the LHC is dedicated to the study of strongly interacting matter under extreme conditions, i.e. high temperature, which can be reached in heavy-ion collisions. In such collisions, the formation of a Quark-Gluon Plasma (QGP) is expected. Dielectrons are produced at all stages of the collision and therefore carry information about the whole evolution of the system. Since they do not interact strongly with the medium, they are a prime probe to study the properties of the QGP. Dielectrons stem from decays of pseudoscalar and vector mesons, from semi-leptonic decays of correlated open-charm and open-beauty hadrons and from internal conversions of direct photons. In heavy-ion collisions, additional sources are expected, i.e. thermal radiation from the QGP and hadron gas. The medium introduces modifications of the vector meson properties, in particular the short-lived $\\rho$, related to chiral symmetry restoration. In addition, the initial conditions of the collisions are expected to change compared to elementary collisions due to modifications of the parton distribution functions in nuclei. The latter can be studied in proton-lead (p--Pb) collisions, whereas pp collisions provide an important vacuum baseline. It is crucial to first understand the dielectron production in vacuum to single out the signal characteristics of the QGP. Moreover, proton-proton (pp) collisions can also be used to study the heavy-flavour (HF) and direct photon production.\n\nIn the following, the steps of the data analysis are explained and the first measurements of the dielectron production in pp collisions at $\\sqrt{s} = 7$\\,TeV are presented and discussed~\\cite{ref-ee}.\n\n\n\\section{Data analysis and results}\n\n\nThe analysis is performed with pp data taken during the first data-taking period of the LHC in 2010 with the ALICE detector. The integrated luminosity of the data sample is $L_{\\rm int} = 6.0\\pm0.2$\\,nb$^{-1}$.\nAfter identifying electrons in the ALICE detector it is not a priori clear which electrons belong to the same pair. We follow a statistical approach to obtain the final spectrum. 
The electrons and positrons are combined to an opposite-sign spectrum (OS), which includes not only the signal but also background, that can be purely combinatorial or have some residual correlation from jets or cross pairs from double Dalitz decays. This background is estimated by the same-sign spectrum (SS). Residual acceptance differences for OS and SS pairs are estimated with mixed events and taken into account during the subtraction of the background. Finally, the background-subtracted spectrum is corrected for tracking and particle identification inefficiencies within the ALICE acceptance ($p_{\\rm T,e} > 0.2$\\,GeV\/$c$, $ \\eta_{\\rm e}<0.8 $).\n\nIn Fig. 1 the measured dielectron cross section as a function of $m_{\\rm ee}$ is compared to a so-called hadronic cocktail, which includes all known sources of dielectron production from hadron decays and uses parameterisations of measured spectra as input when available. Where no measurements are available $m_{\\rm T}$-scaling~\\cite{ref-mt-scaling} is applied. The HF contributions are simulated using the Perugia2011 tune of PYTHIA~6~\\cite{ref-pythia,ref-pythia2011}. The resulting dielectron pairs from the hadron decays are then filtered through the ALICE acceptance.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.355]{.\/2018-09-03-2018-09-03-invmassintegrated.pdf}\n\\caption{Dielectron cross section as a function of $m_{\\rm ee}$ compared to a cocktail of known hadronic sources.}\n\\end{figure}\n\nA good agreement is observed between the cocktail and the data. The charm contribution already dominates the spectrum for $m_{\\rm ee} \\geq 0.5$\\,GeV\/$c^{2}$. The very large heavy-flavour contribution makes the measurement of thermal radiation from the medium in heavy-ion collisions very challenging at LHC energies. To separate the heavy-flavour background from thermal radiation from the QGP in a future heavy-ion run in the intermediate-mass range (IMR, $\\phi < m_{\\rm ee} < J\/\\psi$), an additional variable, the pair-distance-of-closest-approach ($\\rm DCA_{ee}$), is added to the traditional analysis as a function of $m_{\\rm ee}$ and $p_{\\rm T,ee}$. $\\rm DCA_{ee}$~is defined as:\n\\begin{equation}\n{\\rm DCA_{ee}} = \\sqrt{\\frac{({\\rm DCA_{{\\it xy},1}}\/\\sigma_{xy{ \\rm ,1}})^{2}+({\\rm DCA_{{\\it xy},2}}\/\\sigma_{xy,2})^{2}}{2}}\n\\end{equation}\n\nHere ${\\rm DCA}_{xy,i}$ is the closest distance between the reconstructed electron track and the primary collision vertex in the transverse plane. Its resolution is estimated from the covariance matrix of the track reconstruction and denoted as $\\sigma_{xy,i}$. In the case of weak decays, the decay electron candidates do not point to the vertex which leads to a broader DCA distribution than for tracks from prompt decays.\nThis can be seen in Fig. 2 and Fig. 3, where the $\\rm DCA_{ee}$~spectra are shown for two invariant mass regions. Fig. 2 shows the mass region between the $\\pi^{0}$ and the $\\phi$ mass. The light flavour template is taken from the $\\pi^{0}$ shape, normalised to the expected contribution from all light flavour sources. Fig. 3 shows the mass region around the $J\/\\psi$ mass peak. 
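\nAs a simple illustration of how the pair variable defined above is evaluated, the following schematic snippet computes $\\rm DCA_{ee}$ from the per-track transverse distances of closest approach and their resolutions; the numerical values are invented placeholders and the snippet is not part of the ALICE analysis software.\n\\begin{verbatim}\n# Pair distance-of-closest-approach from two track DCAs (illustrative values).\nimport numpy as np\n\ndef pair_dca(dca1, sig1, dca2, sig2):\n    # quadratic mean of the resolution-normalised single-track DCAs\n    return np.sqrt(0.5 * ((dca1 / sig1)**2 + (dca2 / sig2)**2))\n\nprint(pair_dca(0.002, 0.0018, 0.001, 0.0017))   # prompt-like pair, small DCA_ee\nprint(pair_dca(0.015, 0.0019, 0.011, 0.0018))   # displaced pair, large DCA_ee\n\\end{verbatim}\nPrompt pairs cluster at small values of this quantity, while pairs from weak heavy-flavour decays populate the tail, as discussed below.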
In both mass regions we can see a clear peak which can be described by the expected prompt contributions, whereas the tail of the spectrum is described by the broader contributions from charm and beauty.\n\\begin{figure}\n\\centering\n \\begin{minipage}{0.47\\textwidth}\n \\includegraphics[scale=0.35]{.\/2018-May-09-resonancedca.pdf}\n \\caption{Dielectron spectrum as a function of $\\rm DCA_{ee}$~for $0.14 < m_{\\rm ee} < 1.1$\\,GeV\/$c^2$~\\cite{ref-ee}.}\n \\end{minipage}\n \\begin{minipage}{0.47\\textwidth}\n \\includegraphics[scale=0.35]{.\/2018-May-09-jpsidca.pdf}\n \\caption{Dielectron spectrum as a function of $\\rm DCA_{ee}$~for $2.7 < m_{\\rm ee} < 3.3$\\,GeV\/$c^2$~\\cite{ref-ee}.}\n \\end{minipage}\n\\end{figure}\nIn Fig. 3 the $J\/\\psi$ from $B$-mesons can be seen in addition to the open HF contributions.\nIn the so-called intermediate mass region, located between the $\\phi$ and $J\/\\psi$ in the mass spectrum, the dominant contribution is from open HF.\nThe dielectron cross section as function of $p_{\\rm T,ee}$ and $\\rm DCA_{ee}$~is compared to a hadronic cocktail using PYTHIA 6 Perugia0~\\cite{ref-pythia2011} to estimate the $\\rm c\\bar{c}$ and $\\rm b\\bar{b}$ contributions in the left and right panels of Fig. 4, respectively.\n\\begin{figure}\n\\centering\n \\begin{minipage}{0.47\\textwidth}\n \\includegraphics[scale=0.35]{.\/2018-May-09-heavyflavourptee.pdf}\n \\end{minipage}\n \\begin{minipage}{0.47\\textwidth}\n \\includegraphics[scale=0.35]{.\/2018-May-09-heavyflavourdca.pdf}\n \\end{minipage}\n\\caption{Dielectron cross section as a function of $p_{\\rm T,ee}$ (left) and $\\rm DCA_{ee}$~(right) in the IMR compared to a cocktail calculated with PYTHIA~6~\\cite{ref-ee}.}\n\\end{figure}\nThe data are well described by the hadronic cocktail within the statistical and systematic uncertainties. The contribution from $\\rm c\\bar{c}$ dominates the dielectron yield at low $p_{\\rm T,ee}$ and relatively small $\\rm DCA_{ee}$, whereas the $\\rm b\\bar{b}$ becomes relevant at high $p_{\\rm T,ee}$ and large $\\rm DCA_{ee}$.\nTo investigate the processes of heavy-quark production we changed the generator from PYTHIA to POWHEG, switching from leading order in the HF quark generation to next-to-leading order. To quantify the differences the total $\\rm c\\bar{c}$ and $\\rm b\\bar{b}$ cross sections are extracted from the data by fitting the results two-dimensionally as a function of $p_{\\rm T,ee}$ and $m_{\\rm ee}$ and one-dimensionally as a function of $\\rm DCA_{ee}$~in the IMR allowing the contributions of the two HF components to vary. The results are shown in the left and right panels of Fig. 5 for PYTHIA and POWHEG\\cite{ref-powheg}, respectively.\nBoth fits give consistent results for a given MC event generator. The uncertainties are fully correlated between the cross sections extracted with PYTHIA and POWHEG. 
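\nThe extraction described above is essentially a two-template fit; purely as an illustration of the idea (with invented histograms and uncertainties, and ignoring correlations), such a fit can be written as follows.\n\\begin{verbatim}\n# Toy two-template fit: scale c-cbar and b-bbar templates to a measured spectrum.\n# All numbers are invented; the real analysis fits (m_ee, p_T,ee) and DCA_ee.\nimport numpy as np\n\ny     = np.array([12.0, 9.0, 6.5, 4.8, 3.9, 3.2])   # measured yield per bin\nsigma = np.array([1.1, 0.9, 0.8, 0.7, 0.6, 0.6])    # its uncertainty\nt_cc  = np.array([10.0, 7.0, 4.5, 2.8, 1.8, 1.2])   # c-cbar template\nt_bb  = np.array([1.0, 1.2, 1.5, 1.7, 1.9, 2.0])    # b-bbar template\n\nA = np.vstack([t_cc / sigma, t_bb / sigma]).T        # weighted least squares\nb = y / sigma\n(s_cc, s_bb), *rest = np.linalg.lstsq(A, b, rcond=None)\nprint(f'scale factors: c-cbar {s_cc:.2f}, b-bbar {s_bb:.2f}')\n\\end{verbatim}\nMultiplying the fitted scale factors by the reference cross sections of the event generator then gives an estimate of the total $\\sigma_{\\rm c\\bar{c}}$ and $\\sigma_{\\rm b\\bar{b}}$.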
Significant model dependences are observed, which reflect the different rapidity distribution of charm quarks and the different $p_{\\rm T,ee}$ spectra of the $\\rm c\\bar{c}$ and $\\rm b\\bar{b}$ contributions predicted by the two models.\n\\begin{figure}\n\\centering\n \\begin{minipage}{0.47\\textwidth}\n \\includegraphics[trim={0, 0, 0, 1.5cm},clip,scale=0.357]{.\/2018-May-09-oneSigmaPythiaDCA0to8.pdf}\n \\end{minipage}\n \\begin{minipage}{0.47\\textwidth}\n \\includegraphics[trim={0, 0, 0, 1.5cm},clip,scale=0.357]{.\/2018-May-09-oneSigmaPowhegDCA0to8.pdf}\n \\end{minipage}\n\\caption{Total $\\rm c\\bar{c}$ and $\\rm b\\bar{b}$ cross sections with their systematic and statistical uncertainties, extracted from a fit of the measured dielectron yield from heavy-flavour hadron decays in ($m_{\\rm ee}$, $p_{\\rm T,ee}$) and in $\\rm DCA_{ee}$ with PYTHIA (left) and POWHEG (right), compared to published cross sections (lines)~\\cite{ref-ee}.}\n\\end{figure}\nThe results are compared to independent measurements of $\\sigma_{\\rm c\\bar{c}}$~\\cite{ref-ccbar} and $\\sigma_{\\rm b\\bar{b}}$~\\cite{ref-bbbar} from single heavy-flavour particle spectra and found to be consistent within the large uncertainties. Once these uncertainties are reduced, the dielectron measurements can give further constraints on the MC event generators aiming to reproduce the heavy-flavour production mechanisms.\n\n\\section{Conclusion}\n\nTo summarise, ALICE measured the dielectron cross sections as a function of $m_{\\rm ee}$, $p_{\\rm T,ee}$, and $\\rm DCA_{ee}$~in pp collisions at $\\sqrt{s} = 7$\\,TeV. The hadronic cocktail is in good agreement with the measured dielectron cross sections in the three discussed observables, which suggests a good understanding of the dielectron cross section in the ALICE acceptance. We show that $\\rm DCA_{ee}$~makes it possible to separate prompt from non-prompt dielectron pairs, and thus will be a key tool to determine the average temperature of the QGP formed in heavy-ion collisions in the future. In the heavy-flavour sector we report a significant dependence of the extracted total charm and beauty cross sections on whether PYTHIA or POWHEG is used, which reflects the sensitivity of the dielectron measurement to the underlying heavy-flavour production mechanisms implemented in the models.\n\n\n\n\\bibliographystyle{unsrt} \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec-1}\n\nThis technical report is an extension of the paper of the same title, which is to appear at MUCOCOS'13. The technical report proves correctness of the ELB-tree operations' semantics and that the operations are lock-free.\n\nThe following is a brief summary of the design of the data structure, which is detailed in Section 3 of the paper.\nAll ELB-trees have a permanent root node $r$ with a single child.\nELB-trees are $k$-ary leaf-oriented search trees, or multiway search trees, so internal nodes have up to $k$ children and $k-1$ keys. An ELB-tree contains a set $E_r$ of integer keys in the range $(0;2^{63})$. The key 0 is reserved. Keys have an additional read-only bit: when the read-only bit is set, the key cannot be written to. ELB-trees offer three main operations:\n\\begin{itemize}\n\\item Search($e_1$, $e_2$) returns a key $e$ from $E_r$ satisfying $e_1 \\le e \\le e_2$, if such a key exists. Otherwise it returns $0$.\n\\item Remove($e_1$, $e_2$) removes and returns a key $e$ from $E_r$ satisfying $e_1 \\le e \\le e_2$, if such a key exists. 
Otherwise it returns $0$.\n\\item Insert($e$) adds $e$ to $E_r$, if $e$ was not in $E_r$ before.\n\\end{itemize}\nELB-trees can also be used as dictionaries or priority queues by storing values in the least significant bits of the keys.\n\nThe operations of ELB-trees cannot generally be expressed as atomic operations, as they occur over a time interval. As a consequence, series of concurrent operations cannot generally be expressed as occurring serially, that is, the semantics are not linearizable.\nHowever, the set $E_r$ is atomic.\n$E_r$ is the union of the keys in the leaf nodes of the ELB-tree.\nThe keys in internal nodes guide tree search.\n\nSection 2 provides formal definitions for terms used throughout the proof.\nThe proof starts in Section 3 by proving that ELB-trees are leaf-oriented search trees.\nWe prove by induction that \nELB-trees are leaf-oriented search trees initially, and that all operations maintain that property.\nThe inductive step is assisted by two significant subproofs:\n\\begin{enumerate}\n\\item Rebalancing does not change the keys in $E_r$.\n\\item The keys in leaf nodes are within a permanent range.\n\\end{enumerate}\n\nThese properties hold due to the behavior of rebalancing.\nThe first subproof shows that rebalancing is deterministic, even when concurrent.\nThe second shows that leaf nodes have a range of keys they may contain and that this range never changes.\n\nGiven these properties, Section 4 derives the operations' semantics.\nSection 5 follows up by proving that the operations are lock-free.\nFirst we prove that some operation has made progress whenever a node is rebalanced.\nNext we prove that some operation has made progress whenever any part of an operation is restarted.\n\nSection 6 concludes the technical report with a summary.\n\\section{Definitions}\n\\label{sec-2}\n\nThis section introduces definitions used in the following proofs of the ELB-trees' properties. The definitions start with the terms used, before moving on to the contents and properties of nodes. Finally, the initial state of ELB-trees is formally defined.\n\nLet $L$ be the set of leaf nodes, $I$ the set of internal nodes, and $T$ the set of points in time. The sets are disjoint.\n\nNodes contain:\n\\begin{description}\n\\item[$C_i (t)$] list of children of internal node $i$ at time $t$\n\\item[$S_i (t)$] list of keys in internal node $i$ at time $t$\n\\item[$E_n (t)$] keys represented by the node $n$ at time $t$, where:\\begin{center}$E_n (t) =$ \\begin{math} \\left\\{\n \\begin{array}{lr}\n $Non-zero keys in $n & : n \\in L \\\\\n \\bigcup _{c \\in C_n (t)} E_c (t) & : n \\in I\n \\end{array}\n \\right. 
\\end{math}\\end{center}\n\\end{description}\n\nThe following node properties can be derived from their content:\n\\begin{description}\n\\item[$D_n (t)$] the descendants of node $n$ at time $t$: \\\\ $D_n (t)$ = \\begin{math} \\left\\{\n \\begin{array}{lr}\n \\emptyset & : n \\in L\\\\\n C_n (t) \\cup \\bigcup _{d \\in C_n(t)} D_d (t) & : n \\in I\n \\end{array}\n \\right.\\end{math} \\\\ $n$ is reachable when $reachable_n (t) \\equiv n \\in (\\{r\\} \\cup D_r (t))$\n\\item[$parent_n (t)$] the parents of node $n$: \\\\ $parent_n (t) = \\{i \\in reachable_r (t) | n \\in C_i (t)\\}, t \\in T$\n\\end{description}\n\nInitially $r$ has one child $C_r (0) = \\langle ic \\rangle$, and one grandchild $C_{ic} (0) = \\\\ \\langle ln \\rangle$.\nThe grandchild is an empty leaf node $E_{ln} (0) = \\emptyset \\wedge E_r (0) = \\emptyset$.\n\n\\section{Search tree proof}\n\\label{sec-3}\n\nThis section proves that ELB-trees are $k$-ary leaf-oriented search trees.\nIn such a tree, all nodes except the root have one parent, and all internal nodes have strictly ordered keys.\nSpecifically, the $i$'th key in a node provides an upper bound for the $i$'th child of the node, and a lower bound for the $i + 1$'th child.\nThe key ordering is formally expressed as: \n\\begin{center} $W_i (t) \\equiv \\forall j \\in [0;\\left\\vert S_i (t) \\right\\vert). E_{{C_i(t)} _j} (t) \\subseteq (0; {S_i}_j] \\wedge E_{{C_i (t)} _{j+1}} (t) \\subseteq ({S_i} _j; 2^{63})$ \\end{center}\nThe tree property is formally expressed as:\n\\begin{center} $\\forall n. reachable_n (t) \\Rightarrow (\\left\\vert parent_n (t) \\right\\vert = 1 \\vee n = r)$ \\end{center}\nThe properties are proven inductively, but doing so requires several intermediate steps.\nTo begin with, we will show that the behavior of rebalancing of search trees is deterministic, and does not change $E_r$.\n\n\\begin{lemma}\\label{ro-reb}\nUnbalanced nodes and their parent are read-only while rebalancing. \\end{lemma}\n\\begin{proof} While finding the nodes involved in rebalancing, they are made read-only: internal nodes are made read-only by setting their status field, and\nleaf nodes are made read-only by setting the read-only bit of all their keys, see Figure~16 in the paper.\\end{proof}\n\n\\begin{lemma}\\label{reb-nodes}\nIf $W_r$ holds and the unbalanced nodes' parent is still reachable, all threads can find the nodes involved in a rebalancing from the status field of the unbalanced nodes' grandparent. \\end{lemma}\n\\begin{proof} The status field stores the key of the unbalanced node and its parent.\nSince $W_r$ holds, the nodes can be found by searching for the key in the grandparent and parent of the unbalanced node.\\end{proof}\n\n\\begin{lemma}\\label{inv-detreb}\nRebalancing completes deterministically exactly once, if $W_r$ holds. \\end{lemma}\n\\begin{proof} Rebalancing finds the involved nodes (Lemma \\ref{reb-nodes}) and decides how to rebalance (Lemma \\ref{ro-reb}) deterministically. The parent is replaced, and the grandparent's status field is cleared using ABA safe CAS operations, see Section~3b of the paper. The grandparent has the status field \\{*,*,*,STEP2\\} when replacing the parent, ensuring that the grandparent is reachable when replacing the parent node.\\end{proof}\n\n\\begin{lemma}\\label{Er-reb}\n$E_r (t)$ does not change when rebalancing, if $W_r$ holds. 
\\end{lemma}\n\\begin{proof} The content of balanced nodes and their new parent is copied from the old nodes, while their content is read-only (Lemma \\ref{ro-reb}).\\end{proof}\n\nThe preceding lemmas show that rebalancing is well-behaved in search trees. The following lemmas will show that all operations maintain the tree property and $W_r$.\n\n\\begin{lemma}\\label{inv-tree}\nAll operations maintain the tree property, if $W_r$ holds. \\end{lemma}\n\\begin{proof} $descendants_n$ only changes when rebalancing. Specifically, $descendants_n$ changes when replacing an internal node $op$ with a new node $np$. \nThe children of $op$ had $op$ as their only parent, so all the children that $np$ and $op$ share will have $np$ as their only parent after rebalancing. The new children have $np$ as their only parent, because they have just been introduced, and the descendants of the new nodes have their parents replaced. Formally: \\begin{center} $(\\forall c \\in C_{op} (t_1). parent_c (t_1) = \\{op\\}) \\Rightarrow \\forall c \\in C_{np} (t_2). parent_c (t_2) = \\{np\\}$ \\end{center}\\end{proof}\n\n\\begin{lemma}\\label{lrange}\nLeaf nodes $l$ have a permanent range $R_l$ of keys they may contain, if $W_r$ holds.\\end{lemma}\n\\begin{proof} The bounds of $R_l$ are given by the keys of the ancestors of $l$. The ancestors change deterministically when $W_r$ holds (Lemma \\ref{inv-detreb}). Although the ancestors may change, their replacements use the same keys. \nInternal node keys are only introduced or removed when splitting and merging nodes, which results in two or three new nodes. \nWhen rebalancing results in two new nodes, the new parent has one less key. When rebalancing results in three new nodes, the new parent has one updated or additional key, which the old parent did not have. The updated or new key is copied from the unbalanced nodes, so it only affects the new nodes. \\end{proof}\n\n\\begin{lemma}\\label{res-si}\nIf $W_r$ holds, the leaf node $l$ reached by $Search(e, e)$ satisfies: $W_r \\Rightarrow e \\in R_l$. \\end{lemma}\n\\begin{proof} Search visiting a node $n$ where $\\neg reachable_n (t)$ eventually restarts, so a terminating search only visits reachable nodes in the tree (Lemma \\ref{inv-tree}). Search of reachable nodes when $W_r$ holds is regular $k$-ary tree search.\\end{proof}\n\n\\begin{lemma}\\label{res-sl}\nIf $W_r$ holds, searching the leaf node $l$ from $t_{l1}$ to $t_{l2}$ must read the keys $O(t_{l1}, t_{l2}) \\cap R_l$. \\end{lemma}\n\\begin{proof} $l$ is read after a memory barrier, ensuring that $O(t_{l1}, t_{l2}) \\cap R_l$ are read.\\end{proof}\n\n\n\\begin{lemma}\\label{inv-Wr}\nAll writes to the tree maintain $W_r$. Formally: \\begin{center} $\\forall t_1, t_2 \\in T . (t_1 \\le t_2 \\wedge W_r (t_1)) \\Rightarrow W_r (t_2)$ \\end{center}\\end{lemma}\n\\begin{proof} Writes to the tree can be classified into key insertion, key removal, and rebalancing. \nRebalancing maintains $W_r$ (Lemma \\ref{lrange}).\nKey removal and insertion only affect the keys in the tree.\n$remove(e_1, e_2, t_1, t_2)$ removes a key from a leaf node $l$, which maintains $W_r$. \n$insert(e, t_1, t_2)$ inserts into leaf nodes for which $\\forall t \\in T. W_r (t) \\Rightarrow e \\in R_l$ (Lemma \\ref{res-si}), which maintains $W_r$. \\end{proof}\n\n\\begin{theorem}\\label{lost}\nELB-trees are leaf-oriented search trees. \\end{theorem}\n\\begin{proof} ELB-trees are trees and $W_r$ holds initially. 
All operation on ELB-trees maintains the tree property (Lemma \\ref{inv-tree}) and $W_r$ (Lemma \\ref{inv-Wr}).\\end{proof}\n\nThis section proves that ELB-trees are leaf-oriented search trees. Such proofs are sufficient to derive the semantics of concurrent searches and serial insertions and removals. The next section will derive the semantics of the concurrent operations, which requires a few additional lemmas.\n\n\\section{Correctness}\n\\label{sec-4}\n\nThis section derives the semantics of the operations. But first we will introduce some terms to reason about the results of such operations. Let: \n\\begin{description}\n\\item[$search(e_1, e_2, t_1, t_2)$] be the result of a search operation matching against keys $e \\in [e_1;e_2]$ starting at $t_1$ and ending at $t_2$;\n\\item[$remove(e_1, e_2, t_1, t_2)$] be the result of a remove operation matching against keys $e \\in [e_1;e_2]$ starting at $t_1$ and ending at $t_2$;\n\\item[$insert(e, t_1, t_2)$] be an insert $e$ operation starting at $t_1$ and ending at $t_2$;\n\\item[$O(t_1, t_2)$] be the keys that were in $E_r$ at all times during $[t_1;t_2)$: \\begin{center} $O(t_1, t_2) = \\left\\{ e | \\forall t \\in [t_1;t_2) . e \\in E_r (t) \\right\\}$; and \\end{center}\n\\item[$U(t_1, t_2)$] be the keys that were in $E_r$ at some time during $[t_1;t_2)$: \\begin{center} $U(t_1, t_2) = \\left\\{ e | \\exists t \\in [t_1;t_2). e \\in E_r (t) \\right\\}$. \\end{center}\n\\end{description}\n\nWe first prove properties of search operations, then derive the operations' semantics:\n\n\\begin{lemma}\\label{sl}\nSearching a set of leaf nodes $RL$ from $t_1$ to $t_2$ reads the keys $\\bigcup _{l \\in RL} R_l \\cap O(t_1, t_2)$. \\end{lemma}\n\\begin{proof}\nThe search reads the keys $\\bigcup _ {l \\in RL} R_l \\cap O(t_{l1}, t_{l2})$ (Lemma \\ref{res-sl}). $\\forall l \\in RL . O(t_{l1}, t_{l2}) \\subseteq O(t_1, t_2)$ holds, as any key in the tree during $t_1$ to $t_2$ must have been in the tree for all fragments of that duration.\n\\end{proof}\n\n\\begin{theorem} $search(e_1, e_2, t_1, t_2)$ can only return $0$ (fail) if there are no matching entries in $E_r$ at all times during $[t_1, t_2)$: \\begin{center} $search(e_1, e_2, t_1, t_2) = 0 \\Rightarrow [e_1; e_2] \\cap O(t_1, t_2) = \\emptyset$ \\end{center} \\end{theorem}\n\\begin{proof}\n$search(e_1, e_2, t_1, t_2) = 0$ implies that a set of leaf nodes $RL$ have been searched, where $[e_1;e_2] \\subseteq \\bigcup _{l \\in RL} R_l$. If there was an key in $[e_1;e_2] \\cap O(t_1, t_2)$ it would have been read (Theorem \\ref{lost}, Lemma \\ref{sl}).\\end{proof}\n\n\\begin{theorem} Successful searches return a matching key that was in $E_r$ at some point in time during $[t_1;t_2)$: \\begin{center}$e = search(e_1, e_2, t_1, t_2) \\Rightarrow (e \\in U(t_1, t_2) \\wedge e \\in [e_1; e_2])$\\end{center} \\end{theorem}\n\\begin{proof} Successful searches return a key $e$ that was read from a leaf. Since $e$ was read it must have been in $E_r$ (Lemma \\ref{sl}).\\end{proof}\n\n\n\\begin{theorem} Remove can only return $0$ (fail) if there are no matching entries in $E_r$ at all times during $[t_1, t_2)$:\n\\begin{center} $remove(e_1, e_2, t_1, t_2) = 0 \\Rightarrow O(t_1, t_2) \\cap [e_1 ; e_2] = \\emptyset$. 
\\end{center}\\end{theorem}\n\\begin{proof} Terminating remove operations that return $0$ have searched a set of leaves $RL$ satisfying $[e_1; e_2] \\subseteq \\bigcup _{l \\in RL} R_l$ (Lemma \\ref{sl}), so any keys in $O(t_1, t_2) \\cap [e_1;e_2]$ would have been read.\\end{proof}\n\n\\begin{theorem} Successful remove operations remove a matching key $e$ from $E_r$ that was in $E_r$ at some point in time during $[t_1;t_2)$: \\begin{center} $e = remove(e_1, e_2, t_1, t_2) \\ne 0 \\Rightarrow$ \\ $(e_1 \\le e \\le min(O(t_1, t_2) \\cap [e_1 ; e_2]) \\le e_2 \\wedge e \\in U(t_1, t_2))$ \\end{center} \\end{theorem}\n\\begin{proof} Terminating remove operations have searched a set of leaves $RL$ satisfying $[e_1; e] \\subseteq \\bigcup _{l \\in RL} R_l$ (Lemma \\ref{sl}). Any keys smaller than $e$ in $O(t_1, t_2) \\cap [e_1;e_2]$ would have been read.\\end{proof}\n\n\\begin{theorem} $insert(e, t_1 , t_2)$ adds $e$ to $E_r$, if $e \\notin U (t_1 , t_2 )$. \\end{theorem}\n\\begin{proof} Insert operations terminate when they use a successful CAS operation to write the key into an empty key of a leaf node $l$ where $e \\in R_l$ (Lemma \\ref{res-si}). The CAS operation's success implies the key is not read-only, and hence $reachable_l (t_2)$.\\end{proof}\n\nTheorems 2-6 can be summarized as: \\\\\n$e = search( e_1, e_2, t_1, t_2 ) \\Rightarrow$ \\begin{math} \\left\\{\n \\begin{array}{lr}\n O(t_1, t_2) \\cap [e_1 ; e_2] = \\emptyset & : e = 0 \\\\\n e_1 \\le e \\le e_2 \\wedge e \\in U(t_1, t_2) & : e \\neq 0\n \\end{array}\n \\right. \\end{math}\n\\\\$e = remove(e_1, e_2, t_1, t_2) \\Rightarrow$ \\begin{math} \\left\\{\n \\begin{array}{lr}\n O(t_1, t_2) \\cap [e_1 ; e_2] = \\emptyset & : e = 0 \\\\\n e_1 \\le e \\le min([e_1; e_2] \\cap O(t_1, t_2)) \\\\ ~ \\wedge e \\in U(t_1, t_2) & \\raisebox{11pt}{$: e \\neq 0$}\n \\end{array}\n \\right. \\end{math}\n\\ $insert(e, t_1, t_2)$ adds $e$ to $E_r$, if $e \\notin U(t_1, t_2)$.\n\n\n\\section{Lock-freedom}\n\\label{sec-5}\n\nLock-freedom guarantees that as long as some thread is working on an operation $o_1$, some operation $o_2$ is coming closer to terminating. In this case we say $o_1$ is causing progress, and $o_2$ is making progress. The operations $o_1$ and $o_2$ can be different.\nFor ELB-trees, this means that whenever a thread is searching, inserting, or removing, some thread must be making progress. \nThe following is proof that the operations are lock-free:\n\n\\begin{lemma}\\label{ter-op}\nOperations eventually terminate or restart part of their operation. \\end{lemma}\n\\begin{proof} The operations' algorithms contain loops for the following: node search, tree search, rebalancing, and updating keys in leaves. The algorithms are given in the paper~\\cite{bkp13}. Without concurrency, they iterate up to K, tree height, tree height, and 1 times, respectively. With concurrency, tree search, rebalancing, and key update loops may restart part of their operation.\\end{proof}\n\n\\begin{lemma}\\label{lf-rebl}\nRebalancing leaf nodes causes progress. \\end{lemma}\n\\begin{proof} If the nodes are written to between deciding to rebalance and rebalancing, some operation has made progress.\nIf there are no writes, the size of the first node is either D or S, resulting in balanced nodes of $size \\in [min(2 S, 0.5 D); D-1]$. Such nodes can be removed from and inserted into at least once before requiring additional rebalancing. 
As such, every time a rebalancing completes, one operation has made progress.\\end{proof}\n\n\\begin{lemma}\\label{lf-rebi}\nRebalancing internal nodes causes progress. \\end{lemma}\n\\begin{proof} Rebalancing internal nodes leads to child nodes that can be rebalanced at least once. Each leaf rebalancing causes progress (Lemma \\ref{lf-rebl}), hence each internal rebalancing causes progress. \\end{proof}\n\n\\begin{theorem}\\label{lf-s}\nSearch causes progress. \\end{theorem}\n\\begin{proof} Search eventually terminates, similar to $k$-ary tree search, or rebalances a node (Lemma \\ref{res-si}). In the first case, the search operation is making progress. In the second case, some operation is making progress (Lemma \\ref{lf-rebl}, Lemma \\ref{lf-rebi}).\\end{proof}\n\n\\begin{theorem}\\label{lf-ri}\nRemove and insert operations cause progress. \\end{theorem}\n\\begin{proof} The operations proceed as searches followed by writes to leaf nodes. The leaf node write takes a bounded number of steps, as each key may be read once, but the steps can be restarted due to rebalancing, or other insertions and removals terminating. In the first case, some operation is nearing termination, and in the second case some operation terminated (Lemma \\ref{lf-rebl}, Lemma \\ref{lf-rebi}).\\end{proof}\n\n\\section{Conclusion}\n\\label{sec-6}\n\nThis technical report has introduced, proved, and derived properties of ELB-trees. ELB-trees have been proven to be leaf-oriented search trees. Their operations' semantics have been derived as:\\\\\n$e = search( e_1, e_2, t_1, t_2 ) \\Rightarrow$ \\begin{math} \\left\\{\n \\begin{array}{lr}\n O(t_1, t_2) \\cap [e_1 ; e_2] = \\emptyset & : e = 0 \\\\\n e_1 \\le e \\le e_2 \\wedge e \\in U(t_1, t_2) & : e \\neq 0\n \\end{array}\n \\right. \\end{math}\n\\\\$e = remove(e_1, e_2, t_1, t_2) \\Rightarrow$ \\begin{math} \\left\\{\n \\begin{array}{lr}\n O(t_1, t_2) \\cap [e_1 ; e_2] = \\emptyset & : e = 0 \\\\\n e_1 \\le e \\le min([e_1; e_2] \\cap O(t_1, t_2)) \\\\ ~ \\wedge e \\in U(t_1, t_2) & \\raisebox{11pt}{$: e \\neq 0$}\n \\end{array}\n \\right. \\end{math}\n\\ $insert(e, t_1, t_2)$ adds $e$ to $E_r$, if $e \\notin U(t_1, t_2)$.\nFinally, the operations have been proven to be lock-free.\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nOur notation is standard (e.g., see \\cite{Bol98}, \\cite{CDS80}, and\n\\cite{HoJo88}); in particular, all graphs are defined on the vertex set\n$\\left\\{ 1,2,...,n\\right\\} =\\left[ n\\right] $ and $G\\left( n,m\\right) $\nstands for a graph with $n$ vertices and $m$ edges. We write $\\Gamma\\left( u\\right) $ for the set of neighbors of the vertex $u$ and set $d\\left( u\\right) =\\left\\vert \\Gamma\\left( u\\right) \\right\\vert .$ Given a graph $G$\nof order $n,$ we assume that the eigenvalues of the adjacency matrix of $G$\nare ordered as $\\mu\\left( G\\right) =\\mu_{1}\\left( G\\right) \\geq...\\geq\n\\mu_{n}\\left( G\\right) $. As usual, $\\overline{G}$ denotes the complement of\na graph $G$ and $\\omega(G)$ stands for the clique number of $G.$\n\nNosal \\cite{Nos70} showed that for every graph $G$ of order $n,$\n\\begin{equation}\nn-1\\leq\\mu\\left( G\\right) +\\mu\\left( \\overline{G}\\right) <\\sqrt{2}n.\n\\label{Nosin}%\n\\end{equation}\nQuite a lot of attention has been given to the second of these inequalities. 
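\nBoth bounds in (\\ref{Nosin}) are easy to check numerically; the following short script is a sketch of such a check (the use of \\texttt{numpy} and of Erd\\H{o}s--R\\'enyi random graphs with edge probability $1\/2$ is our choice here, not part of the original argument).\n\\begin{verbatim}\n# Check n - 1 <= mu(G) + mu(complement of G) < sqrt(2) n on a random graph.\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn = 60\nA = np.triu((rng.random((n, n)) < 0.5).astype(float), k=1)\nA = A + A.T                                  # adjacency matrix of G\nAc = np.ones((n, n)) - np.eye(n) - A         # adjacency matrix of the complement\n\ntotal = np.linalg.eigvalsh(A).max() + np.linalg.eigvalsh(Ac).max()\nprint(total, n - 1 <= total < np.sqrt(2) * n)\n\\end{verbatim}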
In\n\\cite{Nik02} it was shown that%\n\\begin{equation}\n\\mu\\left( G\\right) +\\mu\\left( \\overline{G}\\right) \\leq\\sqrt{\\left(\n2-\\frac{1}{\\omega(G)}-\\frac{1}{\\omega(\\overline{G})}\\right) n\\left(\nn-1\\right) }, \\label{Nikin}%\n\\end{equation}\nimproving earlier results in \\cite{Hon95}, \\cite{HoSh00}, \\cite{Li96}, and\n\\cite{Zho97}. Unfortunately inequality (\\ref{Nikin}) is not much better then\n(\\ref{Nosin}) when both $\\omega(G)$ and $\\omega(\\overline{G})$ are large\nenough. Thus, it is natural to ask whether $\\sqrt{2}$ in (\\ref{Nosin}) can be\nreplaced by a smaller absolute constant for $n$ sufficiently large. In this\nnote we answer this question in the positive but first we state a more general problem.\n\n\\begin{problem}\nFor every $1\\leq k\\leq n$ find%\n\\[\nf_{k}\\left( n\\right) =\\max_{v\\left( G\\right) =n}\\left\\vert \\mu_{k}\\left(\nG\\right) \\right\\vert +\\left\\vert \\mu_{k}\\left( \\overline{G}\\right)\n\\right\\vert .\n\\]\n\n\\end{problem}\n\nIt is difficult to determine precisely $f_{k}\\left( n\\right) $ for every $n$\nand $k,$ so at this stage it seems more practical to estimate it\nasymptotically. In this note we show that\n\\begin{equation}\n\\frac{4}{3}n-2\\leq f_{1}\\left( n\\right) <\\left( \\sqrt{2}-c\\right) n\n\\label{mainin1}%\n\\end{equation}\nfor some $c>8\\times10^{-7}$ independent of $n.$ For $f_{2}\\left( n\\right) $\nwe give the following tight bounds%\n\\begin{equation}\n\\frac{\\sqrt{2}}{2}n-3\\left(\n\\sqrt{2}-\\varepsilon\\right) n.\n\\]\nWriting $A\\left( G\\right) $ for the adjacency matrix of $G,$ we have\n\\begin{equation}\n\\sum_{i=1}^{n}\\mu_{i}^{2}\\left( G\\right) =tr\\left( A^{2}\\left( G\\right)\n\\right) =2e\\left( G\\right) , \\label{basin}%\n\\end{equation}\nimplying that%\n\\[\n\\mu_{1}^{2}\\left( G\\right) +\\mu_{n}^{2}\\left( G\\right) +\\mu_{1}^{2}\\left(\n\\overline{G}\\right) +\\mu_{n}^{2}\\left( \\overline{G}\\right) \\leq2e\\left(\nG\\right) +2e\\left( \\overline{G}\\right) \\left( 1-\\frac{\\varepsilon}{\\sqrt{2}}\\right)\n^{2}n^{2}>\\left( 1-2\\varepsilon\\right) n^{2}%\n\\]\nwe find that\n\\begin{equation}\n\\left\\vert \\mu_{n}\\left( G\\right) \\right\\vert +\\left\\vert \\mu_{n}\\left(\n\\overline{G}\\right) \\right\\vert \\leq\\sqrt{2\\left( \\mu_{n}^{2}\\left(\nG\\right) +\\mu_{n}^{2}\\left( \\overline{G}\\right) \\right) }<\\sqrt\n{4\\varepsilon}n^{2}, \\label{in1}%\n\\end{equation}\nand so, $\\mu_{n}\\left( G\\right) +\\mu_{n}\\left( \\overline{G}\\right)\n>-\\sqrt{4\\varepsilon}n.$ We thus have $\\sqrt{4\\varepsilon}n^{4}\\geq\ns^{2}\\left( G\\right) .$ On the other hand, by (\\ref{prpin1}) and in view of\n$s\\left( G\\right) =s\\left( \\overline{G}\\right) ,$ we see that\n\\[\n\\mu_{1}\\left( G\\right) +\\mu_{1}\\left( \\overline{G}\\right) \\leq\nn-1+2\\sqrt{s\\left( G\\right) }\\frac{4n}{3}-2.\n\\]\nThis gives some evidence for the following conjecture.\n\n\\begin{conjecture}%\n\\[\nf_{1}\\left( n\\right) =\\frac{4n}{3}+O\\left( 1\\right) .\n\\]\n\n\\end{conjecture}\n\nWe conclude this section with an improvement of the lower bound in\n(\\ref{Nosin}). 
Using the first of inequalities (\\ref{prpin1}) we obtain\n\\begin{align*}\n\\mu_{1}\\left( G\\right) +\\mu_{1}\\left( \\overline{G}\\right) & \\geq\nn-1+\\frac{s^{2}\\left( G\\right) }{2n^{2}}\\left( \\frac{1}{\\sqrt{2e\\left(\nG\\right) }}+\\frac{1}{\\sqrt{2e\\left( \\overline{G}\\right) }}\\right) \\geq\\\\\n& \\geq n-1+\\sqrt{2}\\frac{s^{2}\\left( G\\right) }{n^{3}}.\n\\end{align*}\n\n\n\\section{A class of graphs}\n\nIn this section we shall describe a class of graphs that give the right order\nof $f_{2}\\left( G\\right) $ and, we believe, also of $f_{n}\\left( G\\right)\n.$\n\nLet $n\\geq4$. Partition $\\left[ n\\right] $ in $4$ classes $A,B,C,D$ so that\n$\\left\\vert A\\right\\vert \\geq\\left\\vert B\\right\\vert \\geq\\left\\vert\nC\\right\\vert \\geq\\left\\vert D\\right\\vert \\geq\\left\\vert A\\right\\vert -1.$ Join\nevery two vertices inside $A$ and $D,$ join each vertex in $B$ to each vertex\nin $A\\cup C,$ join each vertex in $D$ to each vertex in $C.$ Write $G\\left(\nn\\right) $ for the resulting graph.\n\nNote that if $n$ is divisible by $4,$ the sets $A,B,C,D$ have equal\ncardinality and we see that $G\\left( n\\right) $ is isomorphic to its complement.\n\nOur main goal to the end of this section is to estimate the eigenvalues of\n$G\\left( n\\right) .$ Write $ch\\left( A\\right) $ for the characteristic\npolynomial of a matrix $A.$ The following general theorem holds.\n\n\\begin{theorem}\n\\label{thch}Suppose $G$ is a graph and $V\\left( G\\right) =\\cup_{i=1}%\n^{k}V_{i}$ is a partition in sets of size $n$ such that\n\n(i) for all $1\\leq i\\leq k,$ either $e\\left( V_{i}\\right) =\\binom{n}{2}$ or\n$e\\left( V_{i}\\right) =0$;\n\n(ii) for all $1\\leq i\\frac{\\sqrt{2}}{2}n-3,\n\\]\nso all we need to prove is that $f_{2}\\left( n\\right) \\leq n\/\\sqrt{2}$.\n\nBy (\\ref{basin}) we have\n\\begin{equation}\n\\mu_{1}^{2}\\left( G\\right) +\\mu_{2}^{2}\\left( G\\right) +\\mu_{n}^{2}\\left(\nG\\right) +\\mu_{1}^{2}\\left( \\overline{G}\\right) +\\mu_{2}^{2}\\left(\n\\overline{G}\\right) +\\mu_{n}^{2}\\left( \\overline{G}\\right) \\leq n\\left(\nn-1\\right) . \\label{in2}%\n\\end{equation}\nBy Weyl's inequalities (\\cite{HoJo88}, p. 181), for every graph $G$ of order\n$n,$ we have\n\\[\n\\mu_{2}\\left( G\\right) +\\mu_{n}\\left( \\overline{G}\\right) \\leq\\mu\n_{2}\\left( K_{n}\\right) =-1.\n\\]\nHence, using $\\mu_{2}\\geq0$ and $\\mu_{n}\\leq-1$ we obtain\n\\[\n\\mu_{2}^{2}\\left( G\\right) \\leq\\mu_{n}^{2}\\left( \\overline{G}\\right)\n+2\\mu_{n}\\left( \\overline{G}\\right) +1<\\mu_{n}^{2}\\left( \\overline\n{G}\\right) .\n\\]\nHence, from (\\ref{in2}) and $\\mu_{1}\\left( G\\right) +\\mu_{1}\\left(\n\\overline{G}\\right) \\geq n-1,$ we find that\n\\[\n\\frac{\\left( n-1\\right) ^{2}}{2}+2\\mu_{2}^{2}\\left( G\\right) +2\\mu_{2}%\n^{2}\\left( \\overline{G}\\right) \\leq\\mu_{1}^{2}\\left( G\\right) +\\mu_{2}%\n^{2}\\left( G\\right) +\\mu_{n}^{2}\\left( G\\right) +\\mu_{1}^{2}\\left(\n\\overline{G}\\right) +\\mu_{2}^{2}\\left( \\overline{G}\\right) +\\mu_{n}%\n^{2}\\left( \\overline{G}\\right) \\leq n\\left( n-1\\right) .\n\\]\nAfter some algebra, we deduce that\n\\[\n\\mu_{2}\\left( G\\right) +\\mu_{2}\\left( \\overline{G}\\right) \\leq\\frac\n{\\sqrt{2}}{2}n,\n\\]\ncompleting the proof of inequalities (\\ref{mainin2}).\n\n\\section{Bounds on $f_{n}\\left( n\\right) $}\n\nIn this section we shall prove inequalities (\\ref{mainin3}). 
From (\\ref{in4}),\nas above, we have%\n\\[\nf_{n}\\left( n\\right) >\\frac{\\sqrt{2}}{2}n-3.\n\\]\nWe believe that, in fact, the following conjecture is true.\n\n\\begin{conjecture}%\n\\[\nf_{n}\\left( n\\right) =\\frac{\\sqrt{2}n}{2}+O\\left( 1\\right) .\n\\]\n\n\\end{conjecture}\n\nHowever, we can only prove that $f_{n}\\left( n\\right) <\\left( \\sqrt{3}\/2\\right) n$, which is implied by the following theorem.\n\n\\begin{theorem}\nFor every graph $G$ of order $n,$\n\\[\n\\mu_{n}^{2}\\left( G\\right) +\\mu_{n}^{2}\\left( \\overline{G}\\right)\n\\leq\\frac{3}{8}n^{2}.\n\\]\n\n\\end{theorem}\n\n\\begin{proof}\nIndeed, suppose $\\left( u_{1},...,u_{n}\\right) $ and $\\left( w_{1},...,w_{n}\\right) $ are eigenvectors to $\\mu_{n}\\left( G\\right) $ and\n$\\mu_{n}\\left( \\overline{G}\\right) .$ Let\n\\[\nU=\\left\\{ i:u_{i}>0\\right\\} ,\\text{ \\ \\ \\ }W=\\left\\{ i:w_{i}>0\\right\\} .\n\\]\nSetting $V=\\left[ n\\right] ,$ we clearly have $\\mu_{n}^{2}\\left( G\\right)\n\\leq\\left\\vert E_{G}\\left( U,V\\backslash U\\right) \\right\\vert $ and $\\mu_{n}^{2}\\left(\n\\overline{G}\\right) \\leq\\left\\vert E_{\\overline{G}}\\left( W,V\\backslash W\\right) \\right\\vert $.\nSince $E_{G}\\left( U,V\\backslash U\\right) \\cap E_{\\overline{G}}\\left(\nW,V\\backslash W\\right) =\\varnothing,$ we see that the graph\n\\[\nG^{\\prime}=\\left( V,E_{G}\\left( U,V\\backslash U\\right) \\cup E_{\\overline\n{G}}\\left( W,V\\backslash W\\right) \\right)\n\\]\nis at most $4$-colorable and hence $G^{\\prime}$ contains no $5$-cliques. By\nTur\\'{a}n's theorem (e.g., see \\cite{Bol98}), we obtain $e\\left( G^{\\prime\n}\\right) \\leq\\left( 3\/8\\right) n^{2},$ completing the proof.\n\\end{proof}\n\n\\section{Bounds on $f_{k}\\left( n\\right) ,$ $2\\sqrt{2m\/k}$ then\n\\[\n\\sum_{i=1}^{n}\\mu_{i}^{2}\\left( G\\right) \\geq\\left( n-k\\right) \\mu_{k}%\n^{2}\\left( G\\right) >2m\\frac{n-k}{k}>2m,\n\\]\na contradiction. Hence, $\\left\\vert \\mu_{k}\\left( G\\right) \\right\\vert\n\\leq\\sqrt{2e\\left( G\\right) \/k},$ and, by symmetry, $\\left\\vert \\mu\n_{k}\\left( \\overline{G}\\right) \\right\\vert \\leq\\sqrt{2e\\left( \\overline\n{G}\\right) \/k}.$ Now\n\\[\n\\left\\vert \\mu_{k}\\left( G\\right) \\right\\vert +\\left\\vert \\mu_{k}\\left(\n\\overline{G}\\right) \\right\\vert \\leq\\sqrt{2e\\left( G\\right) \/k}%\n+\\sqrt{2e\\left( \\overline{G}\\right) \/k}\\leq\\sqrt{\\frac{2}{k}n\\left(\nn-1\\right) }<\\sqrt{\\frac{2}{k}}n,\n\\]\nproving inequality (\\ref{in5}). The proof of inequality (\\ref{in6}) goes along\nthe same lines, so we will omit it.\n\\end{proof}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nJanus solutions in string\/M-theory were originally introduced in the context of type IIB supergravity as a simple deformation of the $\\,\\textrm{AdS}_{5} \\times \\textrm{S}^{5}\\,$ background involving a non-trivial dilaton profile \\cite{Bak:2003jk}. The deformation breaks the $\\,\\textrm{SO}(2,4)\\,$ isometries of $\\,\\textrm{AdS}_{5}\\,$ to the $\\,\\textrm{SO}(2,3)\\,$ isometries of $\\,\\textrm{AdS}_{4}\\,$, but preserves the $\\,\\textrm{SO}(6)\\,$ isometries of the round $\\,\\textrm{S}^{5}\\,$. Soon after, a holographic interpretation of the solutions in \\cite{Bak:2003jk} was proposed in terms of a planar $\\,(1+2)$-dimensional interface in super Yang--Mills (SYM) separating two half-spaces with different coupling constants \\cite{Clark:2004sb}. The supersymmetric Janus was constructed in \\cite{Clark:2005te} using a 5D effective SO(6) gauged supergravity approach. 
Its ten-dimensional incarnation was put forward in \\cite{DHoker:2006vfr}, which provided the gravity dual of the $\\,\\mathcal{N}=1\\,$ supersymmetric interface first anticipated in \\cite{Clark:2004sb} and then constructed in \\cite{DHoker:2006qeo}. The $\\,\\mathcal{N}=1\\,$ supersymmetric Janus turns out to break the symmetry of the original (non-supersymmetric) Janus down to at least $\\,\\textrm{SU(3)} \\subset \\textrm{SO}(6)\\,$. The $\\,\\mathcal{N}=4\\,$ Janus solution with $\\,\\textrm{SO}(4)\\,$ symmetry was constructed in \\cite{DHoker:2007zhm}. However it was only recently that the $\\,\\mathcal{N}=2\\,$ supersymmetric Janus with $\\,\\textrm{SU}(2) \\times \\textrm{U}(1)\\,$ symmetry was constructed in five and ten dimensions \\cite{Bobev:2020fon}, thus completing the list of Janus solutions dual to the SYM interfaces scrutinised in \\cite{DHoker:2006qeo}.\n\n\nJanus solutions have been much less investigated in the context of M-theory. The first examples were constructed in \\cite{DHoker:2009lky} (and generalised in \\cite{Bachas:2013vza}) as $\\,\\mathcal{N}=4\\,$ deformations of the $\\,\\textrm{AdS}_{4} \\times \\textrm{S}^{7}\\,$ background preserving a subgroup $\\,\\textrm{SO}(4) \\times \\textrm{SO}(4) \\subset \\textrm{SO}(8)\\,$ of the isometries of the round $\\,\\textrm{S}^{7}\\,$. This time the deformation breaks the $\\,\\textrm{SO}(2,3)\\,$ isometries of $\\,\\textrm{AdS}_{4}\\,$ to the $\\,\\textrm{SO}(2,2)\\,$ isometries of $\\,\\textrm{AdS}_{3}\\,$. The M-theory Janus solutions can still be holographically understood as $\\,(1+1)$-dimensional interfaces in ABJM theory \\cite{Aharony:2008ug} despite the absence of a dilaton field in the M-theory context \\cite{DHoker:2009lky,Bobev:2013yra,Kim:2018qle}. Interestingly, it was shown in \\cite{Bobev:2013yra} that the $\\,\\textrm{SO}(4) \\times \\textrm{SO}(4)\\,$ symmetric Janus can be alternatively found using a 4D effective SO(8) gauged supergravity description. Using this 4D approach, an $\\,\\mathcal{N}=1\\,$ supersymmetric and $\\,\\textrm{SU}(3) \\times \\textrm{U}(1)^2\\,$ symmetric Janus was constructed in \\cite{Bobev:2013yra} using numerical methods, for which 11D uplift formuli were provided in \\cite{Pilch:2015dwa}. More numerical Janus solutions were also presented in \\cite{Bobev:2013yra} by studying the $\\,\\textrm{G}_{2}$-invariant sector of the SO(8) gauged supergravity.\\footnote{See \\cite{Karndumri:2020bkc} for a numerical study of Janus solutions in the one-parameter family of $\\omega$-deformed SO(8) gauged supergravities \\cite{Dall'Agata:2012bb}. See also \\cite{Suh:2018nmp,Karndumri:2021pva} for a similar study in the context of massive IIA compactified on $\\,\\textrm{S}^{6}\\,$ and its effective description in terms of the ISO(7) gauged supergravity \\cite{Guarino:2015jca,Guarino:2015qaa,Guarino:2015vca}.}\n\n\nAmongst the various interesting questions raised in the discussion section of \\cite{DHoker:2009lky} we will provide a positive answer to that of whether exact M-theory Janus solutions exist with no supersymmetry. 
We will use the four-dimensional SO(8) gauged supergravity that arises upon reduction of eleven-dimensional supergravity on $\\,\\textrm{S}^{7}\\,$ \\cite{deWit:1982ig,deWit:1986oxb} and construct non-supersymmetric, yet analytic and regular, AdS$_{3}$-sliced domain-wall solutions of the form\n\\begin{equation}\n\\label{metric_ansatz_intro}\nds_{4}^{2} = d\\mu^{2} + e^{2 A(\\mu)} \\, ds_{\\textrm{AdS}_{3}}^{2} \\ ,\n\\end{equation}\nfor which the metric function $\\,A(\\mu)\\,$ depends arbitrarily on three real constants $\\,\\alpha_{i} \\in \\mathbb{R}\\,$ with $\\,{i=1,2,3}\\,$. The geometry is supported by three complex scalar fields $\\,z_{i}(\\alpha_{i},\\beta_{i};\\mu)\\,$ which depend on three additional compact parameters, or phases $\\,\\beta_{i} \\in [0,2\\pi]\\,$, and develop non-trivial profiles along the radial coordinate $\\,\\mu\\,$ transverse to the domain-wall. The effective 4D gauge coupling $\\,g\\,$ -- which relates to the inverse radius of $\\,\\textrm{S}^{7}\\,$ -- and the set of real parameters $\\,(\\alpha_{i},\\beta_{i})\\,$ fully determine a particular Janus configuration.\n\n\n\nThe Janus parameters $\\,(\\alpha_{i},\\beta_{i})\\,$ specify the boundary values of the complex scalars at $\\,{\\mu \\rightarrow \\pm \\infty}\\,$. In particular, the parameters $\\,\\beta_{i}\\,$ encode the source\/VEV and bosonic\/fermionic nature of the dual operators turned on on each side of the interface living at the boundary. A generic choice of Janus parameters breaks all the supersymmetries and the $\\,\\textrm{S}^{7}\\,$ isometry group down to its Cartan subgroup $\\,\\textrm{U}(1)^4 \\subset \\textrm{SO}(8)\\,$. On the contrary, the very special choice $\\,\\alpha_{i}=0\\,$ $\\forall i\\,$ trivialises the Janus and the maximally supersymmetric AdS$_{4}$ vacuum of the SO(8) supergravity that uplifts to the $\\,\\textrm{AdS}_4 \\times \\textrm{S}^7\\,$ Freund--Rubin background of eleven-dimensional supergravity with a round $\\,\\textrm{S}^{7}\\,$ metric is recovered \\cite{Freund:1980xh}. Interestingly, (super) symmetry enhancements occur upon suitable identifications between the parameters. For instance, the supersymmetric Janus of \\cite{DHoker:2009lky,Bobev:2013yra} with $\\,\\textrm{SO}(4) \\times \\textrm{SO}(4)\\,$ symmetry is recovered upon setting two of the $\\,\\alpha_{i}\\,$ parameters to zero. In this work we will pay special attention to the Janus with $\\,{\\alpha_{1}=\\alpha_{2}=\\alpha_{3}}\\,$ and $\\,{\\beta_{1}=\\beta_{2}=\\beta_{3}}\\,$ which is non-supersymmetric and features an $\\,{\\text{SU}(3) \\times\\text{U}(1)^2}\\,$ symmetry enhancement. We will present the uplift of this 4D Janus to eleven-dimensions providing, to the best of our knowledge, the first example of an exact M-theory Janus with no supersymmetry. \n\n\n\n\nIn addition to the Janus, we will construct another class of analytic solutions -- we refer to them as flows to Hades following standard terminology in the literature -- which are non-supersymmetric and display a singularity at $\\,\\mu =0\\,$ where the $\\,e^{2 A(\\mu)}\\,$ factor in (\\ref{metric_ansatz_intro}) shrinks to zero size and the complex scalars run to the boundary of moduli space. 
Some similar curved-sliced \\cite{Bobev:2013yra} and flat-sliced \\cite{Cvetic:1999xx,Pope:2003jp,Pilch:2015vha,Pilch:2015dwa} singular flows have been constructed within the $\\,\\textrm{SO}(8)\\,$ gauged supergravity and argued to holographically describe an interface between a superconformal ABJM phase and a non-conformal phase with potentially interesting physics.\\footnote{The scalar potential of the maximal $\\,\\textrm{SO}(8)\\,$ gauged supergravity is bounded above by its value at the maximally supersymmetric AdS$_{4}$ vacuum thus satisfying the \\textit{good} condition of \\cite{Gubser:2000nd}.} In their simplest realisation, these flat-sliced singular flows in M-theory are the analogue of the type IIB flows to the Coulomb branch of $\\,\\mathcal{N}=4\\,$ SYM investigated in \\cite{Cvetic:1999xx,Freedman:1999gp,Freedman:1999gk,Gubser:2000nd}. \n\n\n\nThere are similarities and differences between the Janus and the Hades. As for the Janus, the Hades solutions depend on a set of six parameters $\\,(\\alpha_{i},\\beta_{i})\\,$. Unlike for the Janus, no supersymmetric limit of the Hades parameters can be taken, and the very special choice $\\,\\alpha_{i}=0\\,$ $\\forall i\\,$ does not trivialise the Hades to recover AdS$_{4}$. Instead, a special class of Hades flows -- we will refer to them as \\textit{ridge flows} adopting the terminology of \\cite{Pilch:2015vha} -- appears in this limit. As before, we will concentrate on the simple case with $\\,{\\alpha_{1}=\\alpha_{2}=\\alpha_{3}} \\equiv \\alpha\\,$ and $\\,{\\beta_{1}=\\beta_{2}=\\beta_{3}}\\equiv \\beta\\,$ for which the flows to Hades feature an $\\,{\\text{SU}(3) \\times\\text{U}(1)^2}\\,$ symmetry, and present their uplift to eleven-dimensional supergravity. \n\n\n\nSpecial attention will then be paid to the $\\,{\\text{SU}(3) \\times\\text{U}(1)^2}\\,$ symmetric ridge flows with $\\,\\alpha=0\\,$ for which there is just one free parameter left, \\textit{i.e.} the phase $\\,\\beta \\in [0,2\\pi]\\,$. This phase specifies the boundary values of the complex scalars at $\\,\\mu \\rightarrow \\infty\\,$ and, therefore, the source\/VEV and bosonic\/fermionic nature of the dual operators turned on on the ultraviolet (UV) side of the conformal interface. On the infrared (IR) side $\\,\\mu \\rightarrow 0\\,$ of the interface, the four-dimensional solution becomes singular and the dual field theory is expected to enter the non-conformal phase. Interestingly, the parameter $\\,\\beta\\,$ determining the boundary conditions of the complex scalars is associated with a $\\,\\textrm{U}(1)_{\\xi}\\,$ duality symmetry of the four-dimensional supergravity Lagrangian. However, as originally noticed in \\cite{Pope:2003jp} for a class of conventional flat-sliced RG-flows (see also \\cite{Pilch:2015vha,Pilch:2015dwa}), the $\\,\\textrm{U}(1)_{\\xi}\\,$ changes the physics of the ridge flows once they are uplifted to eleven dimensions: it takes metric modes into three-form gauge field modes. \n\n\nWe will illustrate this phenomenon by analysing in some detail the simple cases of setting $\\,\\beta= \\frac{\\pi}{2}\\,$ and $\\,\\beta=0\\,$. The corresponding ridge flows are triggered from the UV solely by bosonic VEV's or fermionic sources, respectively. The resulting M-theory ridge flows will be shown to be drastically different as far as the persistence of the singularity and the presence of magnetic M5-brane sources in the background are concerned. 
Setting $\\,\\beta= \\frac{\\pi}{2}\\,$ produces a singular M-theory background without magnetic M5-brane sources, akin to the (flat-sliced) Coulomb branch flows constructed in \\cite{Cvetic:1999xx}. Modifying the phase $\\,\\beta\\,$ by acting with $\\,\\textrm{U}(1)_{\\xi}\\,$ turns out to induce a transformation on the eleven-dimensional backgrounds that parallels the dielectric rotation of Coulomb branch flows investigated in \\cite{Pope:2003jp,Pilch:2015vha,Pilch:2015dwa}. We will look in detail at the limiting case $\\,\\beta=0\\,$ and conclude that the $\\,\\textrm{U}(1)_{\\xi}\\,$ transformation totally polarises M2-branes into M5-branes when flowing from the UV to the IR, leaving no M2-branes. We will provide some evidence for this phenomenon to occur also at generic values of $\\,\\beta\\,$.\n\n\nThe paper is organised in four sections plus appendices. In Section~\\ref{sec:Janus} we present our multi-parametric $(\\alpha_{i},\\beta_{i})$-families of analytic Janus and Hades solutions and discuss the ridge flow limit of the latter. We investigate the various possibilities of (super) symmetry enhancement depending on the choice of $\\,\\alpha_{i}\\,$ parameters, as well as the various possibilities of boundary conditions for the complex scalars (sources\/VEV's of dual operators) depending on the choice of $\\,\\beta_{i}\\,$ parameters. In Section~\\ref{sec:Uplift_11D} we present the uplift of the Janus and Hades solutions with $\\,\\textrm{SU}(3) \\times \\textrm{U}(1)^2\\,$ symmetry to eleven-dimensional supergravity. We then focus on the ridge flows and discuss some eleven-dimensional aspects of the solutions, like the presence of singularities or the characterisation of the M2\/M5-branes sourcing the backgrounds, as a function of the parameter $\\,\\beta\\,$. We summarise the results and conclude in Section~\\ref{sec:conclusions}. Two additional appendices accompany the main text; they contain technical results regarding the BPS equations as well as some relevant uplift formulae for the STU model. This is the subsector of the four-dimensional maximal $\\,\\textrm{SO}(8)\\,$ gauged supergravity within which we have constructed all the solutions presented in this work.\n\n\n\n\n\\section{Four-dimensional Janus and Hades}\n\\label{sec:Janus}\n\n\n\\subsection{The model}\n\n\nOur starting point is the $\\mathcal{N}=2$ gauged STU supergravity in four dimensions \\cite{Cvetic:1999xp}. This theory has a gauge group $\\textrm{U}(1)^4$, the maximal Abelian subgroup of $\\textrm{SO}(8)$, and can be embedded into the maximal $\\mathcal{N}=8$ $\\textrm{SO}(8)$-gauged supergravity \\cite{deWit:1982ig} as its $\\textrm{U}(1)^4$ invariant sector \\cite{Cvetic:1999xp}. The field content consists of the $\\mathcal{N}=2$ supergravity multiplet coupled to three vector multiplets. 
Upon setting vector fields to zero, the bosonic Lagrangian reduces to an Einstein-scalar model given by\n\\begin{equation}\n\\label{Lagrangian_model_U1^4_Einstein-scalars}\n\\begin{array}{lll}\n\\mathcal{L} & = & \\left( \\dfrac{R}{2} - V \\right) * 1 - \\dfrac{1}{4} \\displaystyle\\sum_{i=1}^3 \\left[\nd\\varphi_{i} \\wedge* d\\varphi_{i} + e^{2 \\varphi_{i}} \\, d\\chi_{i} \\wedge* d\\chi_{i} \\right] \\\\[4mm]\n& = & \\left( \\dfrac{R}{2} - V \\right) * 1 - \\displaystyle\\sum_{i=1}^{3}\n\\dfrac{1}{\\left( 1-|\\tilde{z_{i}}|^{2} \\right) ^{2}} \\, d\\tilde{z}_{i}\n\\wedge* d\\tilde{z}_{i}^{*} \\ .\n\\end{array}\n\\end{equation}\nIn passing from the first line to the second one in (\\ref{Lagrangian_model_U1^4_Einstein-scalars}) we have changed the parameterisation of the scalar fields $\\,z_{i}\\,$ ($i=1,2,3$) in the vector multiplets -- which serve as coordinates in the scalar coset geometry $[\\textrm{SL}(2)\/\\textrm{SO}(2)]^3$ -- from the upper-half plane to the unit-disk parameterisation via the field redefinition\n\\begin{equation}\n\\label{ztilde&z}\n\\tilde{z}_{i}=\\frac{z_{i}-i}{z_{i}+i} \n\\hspace{10mm} \\textrm{ with } \\hspace{10mm} \nz_{i} = -\\chi_{i} + i \\, e^{-\\varphi_{i}} \\ .\n\\end{equation}\nThe non-trivial scalar potential in the Lagrangian (\\ref{Lagrangian_model_U1^4_Einstein-scalars}) is given by\n\\begin{equation}\n\\label{V_U1^4}\nV = - \\tfrac{1}{2} \\, g^{2} \\sum_{i} \\left( 2 \\, \\cosh\\varphi_{i} + \\chi\n_{i}^{2} \\, e^{\\varphi_{i}} \\right) = g^{2} \\, \\left( 3 - \\sum_{i} \\frac{2}{1-|\\tilde{z}_{i}|^{2}} \\right) \\ ,\n\\end{equation}\nwhere $\\,g\\,$ is the gauge coupling in the gauged four-dimensional supergravity. From (\\ref{V_U1^4}) one immediately sees that only $|\\tilde{z}_{i}|$ enter the\npotential. As a result, the Lagrangian (\\ref{Lagrangian_model_U1^4_Einstein-scalars}) is\ninvariant under the three $\\,\\textrm{U}(1)_{\\xi_{i}}\\,$ shifts of $\\,\\arg\\tilde{z}_{i}\\,$, namely, $\\,\\delta_{\\xi_{i}}\\tilde{z}_{i} = i \\, \\xi_{i} \\, \\tilde{z}_{i}\\,$, with constant parameters $\\,\\xi_{i}\\,$. However, as we will see shortly, the phases $\\,\\arg\\tilde{z}_{i}\\,$ will play a central role when discussing boundary conditions for Janus- and Hades-like solutions in this supergravity model.\n\n\n\nIn this work we will investigate Janus-like solutions for which the space-time metric takes the form\n\\begin{equation}\n\\label{metric_ansatz}\nds_{4}^{2} = d\\mu^{2} + e^{2 A(\\mu)} \\, d\\Sigma^{2} \\ ,\n\\end{equation}\nwith $\\,\\mu\\in (-\\infty, \\infty)\\,$ or $\\,\\mu\\in [0, \\infty)\\,$ being the coordinate along which space-time is foliated with $\\Sigma$ slices, and $A(\\mu)$ being a scale function. The line element $\\,d\\Sigma^{2}\\,$ describes a globally AdS$_{3}$ space-time of radius $\\,\\ell= 1\\,$. The second-order Euler-Lagrange equations for the scalar fields that follow from the Lagrangian (\\ref{Lagrangian_model_U1^4_Einstein-scalars}) read\n\\begin{equation}\n\\label{EOM_scalars}\n\\begin{array}{rll}\n\\varphi_{i} '' - e^{2 \\varphi_{i}} \\,(\\chi_{i}')^2 + 3 \\, A' \\, \\varphi_{i}' + g^2 \\, \\left( 2 \\, \\sinh\\varphi_{i} + e^{\\varphi_{i}} \\, \\chi_{i}^2\\right) & = & 0 \\ , \\\\[2mm]\n\\chi_{i}'' + \\left( 3 \\, A' + 2 \\, \\varphi_{i}' \\right) \\chi_{i}' + 2 \\, g^2 \\, e^{-\\varphi_{i}} \\, \\chi_{i} & = & 0 \\ ,\n\\end{array}\n\\end{equation}\nwith $i=1,2,3$ and where primes denote derivatives with respect to the coordinate $\\,\\mu\\,$. 
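Before turning to the Einstein equations, let us record a quick cross-check of the two parameterisations used above. The following minimal \\texttt{sympy} sketch (purely illustrative and not part of the derivation; all variable names are ours) verifies, field by field, that the two expressions for the scalar potential in (\\ref{V_U1^4}) agree under the map (\\ref{ztilde&z}):
\\begin{verbatim}
import sympy as sp

varphi, chi, g = sp.symbols('varphi chi g', real=True)

z   = -chi + sp.I*sp.exp(-varphi)          # upper-half-plane variable z = -chi + i e^{-varphi}
zt  = (z - sp.I)/(z + sp.I)                # unit-disk variable
zt2 = sp.simplify(zt*sp.conjugate(zt))     # |ztilde|^2

# single-field contribution to the potential in the two parameterisations,
# splitting the constant term 3 evenly among the three fields
V_half_plane = -sp.Rational(1, 2)*g**2*(2*sp.cosh(varphi) + chi**2*sp.exp(varphi))
V_unit_disk  = g**2*(1 - 2/(1 - zt2))

print(sp.simplify((V_half_plane - V_unit_disk).rewrite(sp.exp)))   # expected output: 0
\\end{verbatim}
An analogous check applies to the kinetic terms in the two lines of (\\ref{Lagrangian_model_U1^4_Einstein-scalars}).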
The Einstein equations impose two additional independent equations given by\n\\begin{equation}\n\\label{EOM_Einstein}\n\\begin{array}{rll}\n1 - e^{2 A} \\Big[ A'' + \\frac{1}{4} \\displaystyle\\sum_{i} \\Big( \\, (\\varphi_{i}')^2 + e^{2 \\varphi_{i}} \\, (\\chi_{i}')^2 \\Big) \\, \\Big] &=& 0 \\ , \\\\[2mm]\n2 + e^{2 A} \\Big[ A'' + 3 \\, (A')^2 - \\tfrac{1}{2} \\, g^{2} \\displaystyle\\sum_{i} \\left( 2 \\, \\cosh\\varphi_{i} + \\chi_{i}^2 \\, e^{\\varphi_{i}} \\right) \\Big] & = & 0 \\ .\n\\end{array}\n\\end{equation}\nWe will now present analytic and multi-parametric families of Janus and Hades solutions to this system of second-order differential equations.\n\n\n\n\\subsection{Multi-parametric Janus solutions}\n\nThe second-order equations of motion in (\\ref{EOM_scalars}) and (\\ref{EOM_Einstein}) have a multi-parametric family of analytic Janus solutions. The scale factor in the space-time metric is given by\n\\begin{equation}\n\\label{A(mu)_func_U1^4}\ne^{2A(\\mu)} = {(g k)}^{-2} \\cosh^2(g\\mu) \\ ,\n\\end{equation}\nand\n\\begin{equation}\n\\label{k_factor}\nk^2= 1 + \\sum_{i}\\sinh^{2}\\alpha_{i} \\, \\ge \\, 1\n\\hspace{10mm} \\text{ with } \\hspace{10mm} \\alpha_{i} \\in \\mathbb{R} \\ .\n\\end{equation}\nUsing the unit-disk parameterisation in (\\ref{ztilde&z}) to describe the scalar fields in the three vector multiplets, they acquire simple $\\mu$-dependent profiles of the form\n\\begin{equation}\n\\label{Janus_solution_U1^4_ztil}\n\\tilde{z}_{i}(\\mu) = e^{i \\beta_{i}}\\, \\frac{\\sinh\\alpha_{i}}{\\cosh\\alpha_{i} + i \\, \\sinh(g \\mu) } \n\\hspace{10mm} \\text{ with } \\hspace{10mm} \\beta_{i} \\in [0,2\\pi] \\ ,\n\\end{equation}\nso that $\\,|\\tilde{z}_{i}(0)|=\\tanh\\alpha_{i}\\,$. Eqs (\\ref{A(mu)_func_U1^4})-(\\ref{Janus_solution_U1^4_ztil}) describe a multi-parametric family of Janus solutions parameterised by $3+3$ arbitrary real constants $(\\alpha_{i},\\beta_{i})$. Importantly, the presence of non-trivial axions $\\,\\textrm{Im}\\tilde{z}_{i}\\,$ (spin $0$ pseudo-scalars) turns out to be crucial for the existence of regular Janus solutions, as first noticed in \\cite{Bobev:2013yra}. Parametric plots of the complex scalars $\\,\\tilde{z}_{i}(\\mu)\\,$ in (\\ref{Janus_solution_U1^4_ztil}) are displayed in Figure~\\ref{fig:ztilde_U1^4}. The real $\\,\\textrm{Re}\\tilde{z}_{i}\\,$ and imaginary $\\,\\textrm{Im}\\tilde{z}_{i}\\,$ components of $\\,\\tilde{z}_{i}\\,$ are shown in Figure~\\ref{fig:Rez&Imz}. Note the special limiting case of $\\,\\alpha_{i} \\gg 1\\,$ (\\textit{i.e.} $\\tanh\\alpha_{i} \\approx 1$) for which the flows become singular. In this limit, the complex scalar $\\,\\tilde{z}_{i}\\,$ gets to the boundary of the moduli space, which is located at $\\,|\\tilde{z}_{i}|=1\\,$ in the unit-disk parameterisation of the Lagrangian (\\ref{Lagrangian_model_U1^4_Einstein-scalars}), and the scalar potential in (\\ref{V_U1^4}) diverges.\n\n\n\nOn the other hand, the value $\\,\\alpha_{i}=0\\,$ is certainly special. At this value an AdS$_{4}$ maximally supersymmetric solution with radius $\\,L_{\\text{AdS}_{4}}={g}^{-1}\\,$ is recovered with the scalars being fixed at the constant value $\\,\\tilde{z}_{i}=0\\,$. This AdS$_{4}$ vacuum uplifts to the $\\,\\textrm{AdS}_4 \\times \\textrm{S}^7\\,$ Freund--Rubin background of eleven-dimensional supergravity with a round $\\,\\textrm{S}^{7}\\,$ metric \\cite{Freund:1980xh}. 
Moreover, it describes the near-horizon geometry of a stack of M2-branes and is holographically dual to the three-dimensional ABJM theory \\cite{Aharony:2008ug}. When evaluated at this AdS$_{4}$ vacuum, the three $\\textrm{U}(1)^{4}$ invariant complex scalars have a normalised mass\n\\begin{equation}\n\\label{m^2L^2_AdS4}\nm_{i}^2 L^2 = -2 \\ ,\n\\end{equation}\nthus lying within the mass range $\\, -9\/4 < m_{i}^2 \\, L^2 < - 5\/4\\,$ for which two possible quantisations of scalar fields in AdS$_{4}$ exist \\cite{Klebanov:1999tb}: the mode with conformal dimension $\\,\\Delta_{i}=\\Delta_{-}=1\\,$ and the mode with conformal dimension $\\,\\Delta_{i}=\\Delta_{+}=2\\,$ (where $\\,\\Delta_{\\pm}\\,$ are the two roots of $\\,m_{i}^2 \\, L^2=(\\Delta_{i}-3)\\Delta_{i}\\,$) can be interpreted as the source and the VEV of the corresponding dual operators (standard quantisation) or \\textit{vice versa} (alternative quantisation). However, as shown in \\cite{Breitenlohner:1982bm}, proper scalars $\\,\\textrm{Re}\\tilde{z}_{i}\\,$ and pseudo-scalars $\\,\\textrm{Im}\\tilde{z}_{i}\\,$ must be quantised in exactly opposite ways in order to preserve maximal supersymmetry. Moreover, only the choice of proper scalars having alternative quantisation yields a perfect matching between the scaling dimensions of the supergravity modes and those of the dual operators in the M2-brane theory \\cite{Bobev:2011rv} (see footnote~\\ref{footnote:operators}). \n\n\n\nThe class of Janus solutions in (\\ref{A(mu)_func_U1^4})-(\\ref{Janus_solution_U1^4_ztil}) depends on the set of parameters $\\,g\\,$ and $\\,(\\alpha_{i},\\beta_{i})\\,$. As discussed in \\cite{Bobev:2013yra}, the four-dimensional gauge coupling $\\,g\\,$ sets the scale of the asymptotic AdS$_{4}$ vacuum and, via the AdS\/CFT correspondence, the number of M2-branes as well as the rank of the Chern--Simons gauge groups in ABJM theory. The parameters $\\,\\alpha_{i}\\,$ set the height of the bump, \\textit{i.e.} $\\,|\\tilde{z}_{i}(0)|=\\tanh\\alpha_{i}\\,$, and therefore the strength of the coupling between the (1+1)-dimensional defect and the three-dimensional ambient field theory. The parameters $\\,\\beta_{i}\\,$ set the boundary conditions of the bulk scalars at $\\,\\mu \\rightarrow \\pm \\infty\\,$ and, again via the AdS\/CFT correspondence (see footnote~\\ref{footnote:operators}), the specific linear combinations of bosonic and fermionic bilinear operators that are activated in the field theory. We will analyse the possible choices of boundary conditions in detail in Section~\\ref{sec:boundary conditions}.\n\n\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.50\\textwidth]{Plots\/ztilde_plot.pdf} \n\\put(0,90){$\\textrm{Re}\\tilde{z}_{i}$} \\put(-105,210){$\\textrm{Im}\\tilde{z}_{i}$}\n\\put(-114,101){{\\color{red}{$\\bullet$}}}\n\\end{center}\n\\caption{Parametric plot of $\\,\\tilde{z}_{i}(\\mu)\\,$ in (\\ref{Janus_solution_U1^4_ztil}) for the Janus solutions with $\\,\\alpha_{i}=1\\,$ and $\\,\\beta_{i}=\\frac{n\\pi}{4}\\,$ with $\\,n=0,\\ldots,7\\,$. 
The central red point at $\\,\\tilde{z}_{i}=0\\,$ $\\,\\forall i\\,$ corresponds to the maximally supersymmetric AdS$_{4}$ vacuum and describes the asymptotic values at $\\,\\mu\\rightarrow \\pm\\infty\\,$.}\n\\label{fig:ztilde_U1^4}\n\\end{figure}\n\n\n\n\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{Plots\/Plot_a=1_b=0.pdf} \n\\hspace{5mm}\n\\includegraphics[width=0.45\\textwidth]{Plots\/Plot_a=10_b=0.pdf}\n\\put(-307,15){\\small{$\\alpha_{i}=1 \\, , \\, \\beta_{i}=0$}}\\put(-87,15){\\small{$\\alpha_{i} \\gg 1 \\, , \\, \\beta_{i}=0$}}\n\\\\[5mm]\n\\includegraphics[width=0.45\\textwidth]{Plots\/Plot_a=1_b=Pi4.pdf} \n\\hspace{5mm}\n\\includegraphics[width=0.45\\textwidth]{Plots\/Plot_a=10_b=Pi4.pdf} \n\\put(-307,22){\\small{$\\alpha_{i}=1 \\, , \\, \\beta_{i}=\\frac{\\pi}{4}$}}\\put(-87,22){\\small{$\\alpha_{i} \\gg 1 \\, , \\, \\beta_{i}=\\frac{\\pi}{4}$}}\n\\\\[5mm]\n\\includegraphics[width=0.45\\textwidth]{Plots\/Plot_a=1_b=Pi2.pdf} \n\\hspace{5mm}\n\\includegraphics[width=0.45\\textwidth]{Plots\/Plot_a=10_b=Pi2.pdf} \n\\put(-307,15){\\small{$\\alpha_{i}=1 \\, , \\, \\beta_{i}=\\frac{\\pi}{2}$}}\\put(-87,15){\\small{$\\alpha_{i} \\gg 1 \\, , \\, \\beta_{i}=\\frac{\\pi}{2}$}}\n\\put(-412,-8){\\scriptsize{$-\\infty$}}\\put(-323,-8){\\scriptsize{$0$}}\\put(-238,-8){\\scriptsize{$\\infty$}}\n\\put(-330,-22){\\small{$g \\mu$}}\n\\put(-190,-8){\\scriptsize{$-\\infty$}}\\put(-101,-8){\\scriptsize{$0$}}\\put(-16,-8){\\scriptsize{$\\infty$}}\n\\put(-109,-22){\\small{$g \\mu$}}\n\\end{center}\n\\caption{Plots of $\\,\\textrm{Re}\\tilde{z}_{i}\\,$ (blue dotted line), $\\,\\textrm{Im}\\tilde{z}_{i}\\,$ (orange dashed line) and $\\,A'(\\mu)\\,$ (green solid line) as a function of the radial coordinate $\\,g \\mu \\in (-\\infty , \\infty)\\,$ for different values of the Janus parameters $\\,(\\alpha_{i},\\beta_{i})\\,$. The limit $\\,\\alpha_{i} \\gg 1\\,$ (\\textit{i.e.} $\\tanh\\alpha_{i} \\approx 1$) renders the Janus solution singular. In this limit, $\\,\\tilde{z}_{i}\\,$ gets to the boundary of the moduli space which is located at $\\,|\\tilde{z}_{i}|=1\\,$ in the unit-disk parameterisation of (\\ref{ztilde&z}).}\n\\label{fig:Rez&Imz}\n\\end{figure}\n\n\n\nLastly, a study of the supersymmetry preserved by this family of solutions is presented in the Appendix~\\ref{app:susy}. The BPS equations (\\ref{BPS_A}) and (\\ref{BPS_scalars}) are not satisfied by the Janus solution in (\\ref{A(mu)_func_U1^4})-(\\ref{Janus_solution_U1^4_ztil}) for generic values of $(\\alpha_{i} , \\beta_{i})$ thus implying that such a solution is generically non-supersymmetric. 
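Although generically non-supersymmetric, the configuration (\\ref{A(mu)_func_U1^4})-(\\ref{Janus_solution_U1^4_ztil}) does solve the full second-order system (\\ref{EOM_scalars})-(\\ref{EOM_Einstein}). A minimal numerical cross-check along the following lines can be helpful when reproducing these solutions (the snippet is a sketch, not part of the derivation, and the parameter values are arbitrary illustrative choices):
\\begin{verbatim}
import sympy as sp

mu = sp.symbols('mu', real=True)
g  = 1                                                  # gauge coupling, set to 1 here
alphas = [sp.Rational(3, 10), sp.Rational(7, 10), sp.Rational(6, 5)]   # illustrative
betas  = [sp.Rational(1, 5), sp.Rational(4, 5), sp.Rational(5, 2)]     # illustrative

k2 = 1 + sum(sp.sinh(a)**2 for a in alphas)
A  = sp.log(sp.cosh(g*mu)) - sp.log(g*sp.sqrt(k2))      # e^{2A} = (g k)^{-2} cosh^2(g mu)

eqs, kin, pot = [], 0, 0
for a, b in zip(alphas, betas):
    zt = sp.exp(sp.I*b)*sp.sinh(a)/(sp.cosh(a) + sp.I*sp.sinh(g*mu))
    z  = sp.I*(1 + zt)/(1 - zt)                         # back to z = -chi + i e^{-varphi}
    chi, phi = -sp.re(z), -sp.log(sp.im(z))
    eqs.append(phi.diff(mu, 2) - sp.exp(2*phi)*chi.diff(mu)**2
               + 3*A.diff(mu)*phi.diff(mu) + g**2*(2*sp.sinh(phi) + sp.exp(phi)*chi**2))
    eqs.append(chi.diff(mu, 2) + (3*A.diff(mu) + 2*phi.diff(mu))*chi.diff(mu)
               + 2*g**2*sp.exp(-phi)*chi)
    kin += phi.diff(mu)**2 + sp.exp(2*phi)*chi.diff(mu)**2
    pot += 2*sp.cosh(phi) + chi**2*sp.exp(phi)

eqs.append(1 - sp.exp(2*A)*(A.diff(mu, 2) + sp.Rational(1, 4)*kin))
eqs.append(2 + sp.exp(2*A)*(A.diff(mu, 2) + 3*A.diff(mu)**2 - sp.Rational(1, 2)*g**2*pot))

# residuals of (EOM_scalars) and (EOM_Einstein); they should vanish up to round-off
for eq in eqs:
    print(max(abs(complex(eq.subs(mu, pt).evalf())) for pt in (-1.3, -0.2, 0.4, 2.1)))
\\end{verbatim}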
However, as we will see in a moment, some supersymmetry can be restored upon suitable choice of $(\\alpha_{i} , \\beta_{i})$, namely, upon suitable adjustment of the Janus boundary conditions.\n\n\n\n\\subsection{Janus with (super) symmetry enhancements}\n\\label{sec:Janus_sym_enhancement}\n\nSpecific choices of the parameters $\\,(\\alpha_{i},\\beta_{i})\\,$ translate into various (super) symmetry enhancements of the general Janus solution in (\\ref{A(mu)_func_U1^4}) and (\\ref{Janus_solution_U1^4_ztil}).\n\n\\subsubsection{\\texorpdfstring{$\\text{SO}(4) \\times\\text{SO}(4)$}{SO(4)xSO(4)} symmetry enhancement}\n\\label{sec:Janus_SO4xSO4}\n\nSetting two of the vector multiplets to zero, \\textit{e.g.} $\\,\\tilde{z}_{2}=\\tilde{z}_{3}=0$, via\n\\begin{equation}\n\\label{alpha_2_3=0}\n\\alpha_{2} = \\alpha_{3} = 0 \\ ,\n\\end{equation}\nand renaming $\\tilde{z}_{1} \\equiv\\tilde{z}\\,$, the $\\text{SO}(4) \\times\\text{SO}(4)$ invariant sector of the SO(8) gauged supergravity investigated in Section~$5$ of \\cite{Bobev:2013yra} is recovered upon the identification $\\tilde{z}=z_{\\text{there}}$. The Lagrangian (\\ref{Lagrangian_model_U1^4_Einstein-scalars}) reduces to\n\\begin{equation}\n\\label{Lagrangian_model_SO4xSO4}\n\\begin{array}{lll}\n\\mathcal{L} & = & \\left( \\frac{R}{2} - V \\right) * 1 - \\frac{1}{4} \\left[ (d\\varphi)^{2} + e^{2 \\varphi} \\, (d\\chi)^{2} \\right] \\\\[2mm]\n& = & \\left( \\frac{R}{2} - V \\right) * 1 - \\dfrac{1}{\\left( 1-|\\tilde{z}|^{2} \\right) ^{2}} \\, d\\tilde{z} \\wedge* d\\tilde{z}^{*} \\ ,\n\\end{array}\n\\end{equation}\nand the scalar potential in (\\ref{V_U1^4}) simplifies to\n\\begin{equation}\n\\label{V_SO4xSO4}\nV = - \\tfrac{1}{2} \\, g^{2} \\left( 4 + 2 \\cosh\\varphi+ \\chi^{2} \\, e^{\\varphi} \\right) = - g^{2} \\, \\dfrac{3 - |\\tilde{z}|^{2}}%\n{1-|\\tilde{z}|^{2}} \\ .\n\\end{equation}\nThe Janus solution then reads\n\\begin{equation}\n\\label{Janus_solution_SO4xSO4^4_alt}\nds_{4}^{2} = d\\mu^{2}+ e^{2 A(\\mu)} \\, d\\Sigma^{2} \n\\hspace{10mm} , \\hspace{10mm} \n\\tilde{z}(\\mu) = e^{i \\beta} \\,\n\\frac{\\sinh\\alpha}{\\cosh\\alpha + i \\, \\sinh(g \\mu) } \\ ,\n\\end{equation}\nwith\n\\begin{equation}\n\\label{A(mu)_func_SO4xSO4}\ne^{2A(\\mu)} = (g k)^{-2} \\cosh^2(g\\mu)\n\\hspace{8mm} \\textrm{ and } \\hspace{8mm}\nk= \\cosh \\alpha \\ge 1\\ ,\n\\end{equation}\nwhere $(\\alpha, \\beta) = (\\alpha_{1} , \\beta_{1})$. This solution precisely matches the one presented in Section~$5$ of \\cite{Bobev:2013yra} upon the identification $\\cosh\\alpha=(1-a^{2}_{\\text{there}})^{-\\frac{1}{2}}$. As noticed therein, the Janus solution is half-BPS and preserves $\\,16\\,$ real supercharges. From a holographic perspective, the $(1+1)$-dimensional defect dual to the AdS$_{3}$ factor in the geometry features $(4,4)$ supersymmetry and therefore has an $\\,\\textrm{SO}(4)_{\\textrm{R}} \\times \\textrm{SO}(4)_\\textrm{R}\\,$ R-symmetry group. We have explicitly verified that, when selecting the minus sign in (\\ref{Janus_solution_SO4xSO4^4_alt}), the Janus solution satisfies the 1\/2-BPS equations (\\ref{BPS_A}) and (\\ref{BPS_scalars}) for the eight gravitino mass terms (superpotentials) of the maximal theory (see Footnote~\\ref{Footnote:axions}). 
Finally, the original M-theory supersymmetric Janus with $\\text{SO}(4) \\times\\text{SO}(4)$ symmetry was presented in \\cite{DHoker:2009lky}.\n\n\n\n\n\\subsubsection{\\texorpdfstring{$\\text{SU}(3) \\times\\text{U}(1)^2$}{SU(3)xU(1)2} symmetry enhancement}\n\nIdentifying the three vector multiplets, namely $\\,\\tilde{z}_{1}=\\tilde{z}_{2}=\\tilde{z}_{3} \\equiv\\tilde{z}\\,$, so that\n\\begin{equation}\n\\alpha_{1} = \\alpha_{2} = \\alpha_{3} \\equiv\\alpha\n\\hspace{5mm} , \\hspace{5mm}\n\\beta_{1} = \\beta_{2} = \\beta_{3} \\equiv\\beta\\ ,\n\\end{equation}\nthe $\\text{SU}(3) \\times\\text{U}(1)^2$ invariant sector of Section~$6$ of \\cite{Bobev:2013yra} (see also \\cite{Pilch:2015dwa} for the 11D uplift) is recovered upon the identification $\\tilde{z}=z_{\\text{there}}$. The\nLagrangian (\\ref{Lagrangian_model_U1^4_Einstein-scalars}) simplifies to\n\\begin{equation}\n\\label{Lagrangian_model_SU3xU1xU1}\n\\begin{array}{lll}\n\\mathcal{L} & = & \\left( \\frac{R}{2} - V \\right) * 1 - \\frac{3}{4} \\left[(d\\varphi)^{2} + e^{2 \\varphi} \\, (d\\chi)^{2} \\right] \\\\[2mm]\n& = & \\left( \\frac{R}{2} - V \\right) * 1 - \\dfrac{3}{\\left( 1-|\\tilde{z}|^{2} \\right) ^{2}} \\, d\\tilde{z} \\wedge* d\\tilde{z}^{*} \\ ,\n\\end{array}\n\\end{equation}\nand the scalar potential in (\\ref{V_U1^4}) reduces to\n\\begin{equation}\n\\label{V_SU3xU1^2}\nV = - \\tfrac{3}{2} \\, g^{2} \\left( 2 \\cosh\\varphi+ \\chi^{2} \\, e^{\\varphi} \\right) = - 3 \\, g^{2} \\, \\dfrac{1+|\\tilde{z}|^{2}}{1-|\\tilde{z}|^{2}} \\ .\n\\end{equation}\nThe Janus solution takes the form\n\\begin{equation}\n\\label{Janus_solution_SU3xU1xU1}\nds_{4}^{2} = d\\mu^{2}+ e^{2 A(\\mu)} \\, d\\Sigma^{2} \n\\hspace{10mm} , \\hspace{10mm} \n\\tilde{z}(\\mu) = e^{i \\beta} \\, \\frac{\\sinh\\alpha}{\\cosh\\alpha + i \\, \\sinh(g \\mu) } \\ ,\n\\end{equation}\nwith\n\\begin{equation}\n\\label{A(mu)_func_SU3xU1xU1}\ne^{2A(\\mu)} = (g k)^{-2} \\cosh^2(g\\mu) \n\\hspace{8mm} \\textrm{ and } \\hspace{8mm}\nk^2= 1 + 3 \\sinh^{2}\\alpha \\ge 1\\ .\n\\end{equation}\nThis provides an analytic solution in the $\\text{SU}(3) \\times\\text{U}(1)^2 $ invariant sector of the SO(8) maximal supergravity investigated in Section~$6$ of \\cite{Bobev:2013yra}. The solution (\\ref{Janus_solution_SU3xU1xU1})-(\\ref{A(mu)_func_SU3xU1xU1}) satisfies the second-order equations of motion in (\\ref{EOM_scalars}) and (\\ref{EOM_Einstein}). However, we have verified that the BPS equations (\\ref{BPS_A}) and (\\ref{BPS_scalars}) are not satisfied for any of the eight gravitino mass terms (superpotentials) in the maximal SO(8) gauged supergravity, so the solution is non-supersymmetric.\n\n\n\n\\subsection{Janus geometry and boundary conditions}\n\nLet us discuss the geometry of the multi-parametric family of Janus solutions presented in the previous sections. 
Introducing embedding coordinates in $\\mathbb{R}^{2,3}$, the $k$-family of Janus metrics in (\\ref{metric_ansatz}) and (\\ref{A(mu)_func_U1^4}) corresponds to\n\\begin{equation}\n\\label{embedding_coordinates}\n\\begin{array}{lll}\nX_{0} &=& (g k)^{-1} \\, \\dfrac{\\cos\\tau}{\\cos\\eta} \\, \\cosh(g \\mu) \\ , \\\\[6mm]\nX_{4} &=& (g k)^{-1} \\, \\dfrac{\\sin\\tau}{\\cos\\eta} \\, \\cosh(g \\mu) \\ , \\\\[6mm]\nX_{1} &=& (g k)^{-1} \\, \\tan\\eta \\cos\\theta \\, \\cosh(g \\mu) \\ , \\\\[6mm]\nX_{2} &=& (g k)^{-1} \\, \\tan\\eta \\sin\\theta \\, \\cosh(g \\mu) \\ , \\\\[6mm]\nX_{3} &=& g^{-1} \\, i \\, \\textrm{E}(i g \\mu \\, ; \\, k^{-2}) \\ ,\n\\end{array}\n\\end{equation}\nwith \n\\begin{equation}\nk^2 = 1 + \\sum_{i}\\sinh^{2}\\alpha_{i} \\, \\ge \\, 1 \\ ,\n\\end{equation}\nand $\\textrm{E}(i g \\mu \\, ; \\, k^{-2})$ being the incomplete elliptic integral of the second kind. The solution describes the hyper-surface\n\\begin{equation}\n\\label{hypersurface_X}\n-X_{0}^2 - X_{4}^2 + X_{1}^2 + X_{2}^2 +(g k)^{-2} \\sinh^2(g \\mu) = - (g k)^{-2} \\ ,\n\\end{equation}\nwhere the term $\\,(g k)^{-2} \\sinh^2(g \\mu)\\,$ is implicitly given in terms of $X_{3}$ by the last relation in (\\ref{embedding_coordinates}). For $\\,k=1\\,$ one has that $\\,i \\, \\textrm{E}(i g \\mu \\, ; \\,1) = - \\sinh(g \\mu)\\,$ and (\\ref{hypersurface_X}) reduces to the hyperboloid describing AdS$_{4}$.\n\n\\subsubsection{Global coordinates and boundary structure}\n\n\nLet us perform a change of coordinates that will help us to understand the Janus geometry in (\\ref{metric_ansatz}) and (\\ref{A(mu)_func_U1^4})-(\\ref{k_factor}), especially its boundary structure. We start by performing a change of radial coordinate to make its range compact\n\\begin{equation}\n\\tilde{\\mu} = 2 \\, k \\, \\textrm{arctan} \\left[ \\tanh \\left( \\frac{g \\, \\mu}{2} \\right)\\right] \\ ,\n\\end{equation}\nand then choose global coordinates to describe the AdS$_{3}$ slicing in (\\ref{metric_ansatz}). The Janus metric in (\\ref{metric_ansatz}) and (\\ref{A(mu)_func_U1^4})-(\\ref{k_factor}) then becomes conformal to $\\,\\mathbb{R} \\times \\textrm{S}^{3}\\,$\n\\begin{equation}\n\\label{Janus_metric_original}\nds_{4}^{2} = \\frac{(g k)^{-2} }{\\cos^2\\left( \\frac{\\tilde{\\mu}}{k}\\right) \\cos^{2}\\eta} \\, \\left( - d\\tau^2 + \\cos^{2}\\eta \\, d\\tilde{\\mu}^{2} + d\\eta^2 + \\sin^2\\eta \\, d\\theta^2\\right) \\ ,\n\\end{equation}\nwith\n\\begin{equation}\n\\label{global_coords_ranges}\n\\tau \\in (-\\infty \\, , \\infty)\n\\hspace{5mm} \\textrm{ , } \\hspace{5mm}\n\\tilde{\\mu} \\in [-\\frac{\\pi k}{2} \\, , \\frac{\\pi k}{2}]\n\\hspace{5mm} \\textrm{ , } \\hspace{5mm}\n\\eta \\in [0 \\, , \\frac{\\pi}{2}] \n\\hspace{5mm} \\textrm{ , } \\hspace{5mm}\n\\theta \\in [0 \\, , 2 \\pi] \\ .\n\\end{equation}\nThese are the global coordinates used to describe the original type IIB Janus solution in \\cite{Bak:2003jk,Clark:2004sb}. The geometry (\\ref{Janus_metric_original}) has a boundary that consists of two hemi-spheres of $\\,\\textrm{S}^2\\,$ at $\\,\\tilde{\\mu} = \\pm \\tilde{\\mu}_{0}\\,$, with $\\, \\tilde{\\mu}_{0}=\\frac{\\pi k}{2} \\,$, joined at the $\\textrm{S}^1$ equator at $\\,\\eta = \\frac{\\pi}{2}\\,$. 
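For reference, the change of radial coordinate above is nothing but the Gudermannian of $\\,g\\mu\\,$ rescaled by $\\,k\\,$, so that
\\begin{equation}
\\cos\\left( \\frac{\\tilde{\\mu}}{k}\\right) = \\frac{1}{\\cosh(g \\mu)}
\\hspace{10mm} \\textrm{ and } \\hspace{10mm}
d\\tilde{\\mu} = \\frac{k \\, g}{\\cosh(g \\mu)} \\, d\\mu \\ ,
\\end{equation}
which is all that is needed to bring (\\ref{metric_ansatz}) with (\\ref{A(mu)_func_U1^4}) to the form (\\ref{Janus_metric_original}). In particular, the two hemi-spheres in the boundary correspond to the vanishing of the conformal factor $\\,\\cos(\\tilde{\\mu}\/k)\\,$ at $\\,\\tilde{\\mu} = \\pm \\tilde{\\mu}_{0}\\,$, which is reached as $\\,g \\mu \\rightarrow \\pm\\infty\\,$.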
Lastly, using the new radial coordinate $\\,\\tilde{\\mu}\\,$, the profiles for the complex scalars in (\\ref{Janus_solution_U1^4_ztil}) become\n\\begin{equation}\n\\label{Janus_solution_U1^4_new}\n\\tilde{z}_{i}(\\tilde{\\mu}) = e^{i \\beta_{i}}\\, \\frac{\\sinh\\alpha_{i}}{\\cosh\\alpha_{i} + i \\, \\tan\\left(\\frac{\\tilde{\\mu}}{k}\\right) } \\ ,\n\\end{equation}\nso that $\\,\\tilde{z}_{i}(\\tilde{\\mu}) \\rightarrow 0\\,$ when approaching the two hemi-spheres of $\\,\\textrm{S}^2\\,$ at $\\,\\tilde{\\mu} \\rightarrow \\pm \\tilde{\\mu}_{0} \\,$ in the Janus boundary. Note that $\\,\\textrm{arg}\\left[\\tilde{z}_{i}(\\tilde{\\mu}_{0})\\right] - \\textrm{arg}\\left[\\tilde{z}_{i}(-\\tilde{\\mu}_{0})\\right] = \\pi$, thus creating an interface discontinuity at the $\\,\\textrm{S}^{1}\\,$ equator where the defect lives.\n\n\n\\subsubsection{\\texorpdfstring{AdS$_{3}$}{AdS3} slicing and boundary conditions}\n\\label{sec:boundary conditions}\n\n\nIn order to investigate the boundary conditions of the family of Janus solutions in (\\ref{A(mu)_func_U1^4})-(\\ref{Janus_solution_U1^4_ztil}) we will perform a regular change of radial coordinate\n\\begin{equation}\n\\label{new_coordinate_Janus}\n\\rho= \\sinh({g} \\mu) \n\\hspace{10mm} , \\hspace{10mm}\nd\\mu = g^{-1} \\, \\dfrac{d\\rho}{\\sqrt{\\rho^2+1}} \\ ,\n\\end{equation}\nso that the family of Janus solutions in (\\ref{A(mu)_func_U1^4})-(\\ref{Janus_solution_U1^4_ztil}) becomes\\footnote{The Ricci scalar constructed from the metric (\\ref{Janus_U1^4_rho_1}) reads\n\\begin{equation}\n\\label{Janus_Ricci}\nR(\\rho) = - 6 \\, g^{2} \\left( \\, 1 + \\frac{\\rho^2 + k^2}{\\rho^2 + 1} \\, \\right) \\ ,\n\\end{equation}\nthus ensuring regularity of the Janus geometry within the whole range $\\,\\rho \\in ( -\\infty , \\infty)\\,$.}\n\\begin{equation}\n\\label{Janus_U1^4_rho_1}\nds_{4}^{2}=\\frac{1}{{g}^{2}} \\left( \\frac{d\\rho^{2}}{\\rho^{2}+1} + \n\\frac{\\rho^{2}+1 }{ k^2} \\, d\\Sigma^{2} \\right)\n\\hspace{5mm} , \\hspace{5mm}\nd\\Sigma^{2} = \\frac{1}{\\cos^{2}\\eta}\\left( - d\\tau^2 + d\\eta^2 + \\sin^2\\eta \\, d\\theta^2\\right) \\ ,\n\\end{equation}\nwith\n\\begin{equation}\n\\label{global_coords_ranges_2}\n\\tau \\in (-\\infty \\, , \\infty)\n\\hspace{5mm} \\textrm{ , } \\hspace{5mm}\n\\rho \\in (-\\infty \\, , \\infty)\n\\hspace{5mm} \\textrm{ , } \\hspace{5mm}\n\\eta \\in [0 \\, , \\frac{\\pi}{2}] \n\\hspace{5mm} \\textrm{ , } \\hspace{5mm}\n\\theta \\in [0 \\, , 2 \\pi] \\ ,\n\\end{equation}\nand\n\\begin{equation}\n\\label{Janus_U1^4_rho_2}\n\\tilde{z}_{i}(\\rho) = e^{i \\beta_{i}} \\, \\frac{\\sinh\\alpha_{i}}{\\cosh\\alpha_{i} + i \\, \\rho} \\ .\n\\end{equation}\nThe Janus geometry (\\ref{Janus_U1^4_rho_1}) has a three-dimensional conformal boundary at $\\,\\rho \\rightarrow \\pm \\infty\\,$ that is conformal to $\\,\\mathbb{R} \\times \\textrm{S}^{2}\\,$ with a $k$-dependent prefactor $\\,(g k)^{-2} \\, \\rho^2\\,$. This is the geometry we will use to analyse the asymptotic behaviour of the $\\,\\textrm{U}(1)^4\\,$ invariant complex scalars (\\ref{Janus_U1^4_rho_2}).\n\n\n\nWhen approaching the maximally supersymmetric AdS$_4$ vacuum dual to ABJM theory\\footnote{\\label{footnote:operators}The 35 pseudo-scalars and 35 proper scalars of the maximal supergravity multiplet are dual to single-trace deformations of ABJM theory \\cite{Aharony:2008ug}. 
More concretely, pseudo-scalars are dual to fermionic bilinears $\\,\\mathcal{O}_{F}=\\textrm{Tr}(\\psi^{\\dot{A}} \\psi^{\\dot{B}}) - \\frac{1}{8} \\delta^{\\dot{A}\\dot{B}} \\textrm{Tr}(\\psi^{\\dot{C}} \\psi^{\\dot{C}} )\\,$ with $\\,\\dot{A}=1,\\ldots,8\\,$ and $\\,\\textrm{dim}(\\mathcal{O}_{F})=2\\,$. Proper scalars are dual to bosonic bilinears $\\,\\mathcal{O}_{B}=\\textrm{Tr}(X^{A} X^{B}) - \\frac{1}{8} \\delta^{AB} \\textrm{Tr}(X^{C}X^{C})\\,$ with $\\,A=1,\\ldots,8\\,$ and $\\,\\textrm{dim}(\\mathcal{O}_{B})=1\\,$.}, the asymptotic behaviour of (\\ref{Janus_U1^4_rho_2}) around the endpoints $\\,\\rho \\rightarrow \\pm \\infty\\,$ of the Janus solution reads\n\\begin{equation}\n\\label{source&vevs_zt}\n\\tilde{z}_{i}(\\rho) = \\dfrac{\\tilde{z}_{i,0}}{\\rho} + \\dfrac{\\tilde{z}_{i,1}}{\\rho^2} + \\mathcal{O}\\left( \\dfrac{1}{\\rho^3}\\right) \n\\hspace{10mm} \\textrm{ with } \\hspace{10mm} \ni=1,2,3 \\ , \n\\end{equation}\nin terms of normalisable modes $\\,\\tilde{z}_{i,0}\\,$ with $\\,\\Delta_{i}=1\\,$ specified by the parameters $\\,(\\alpha_{i} , \\beta_{i})\\,$,\n\\begin{equation}\n\\label{zt_0}\n\\tilde{z}_{i,0} = \\sinh\\alpha_{i} \\, e^{i (\\beta_i - \\frac{\\pi}{2})} \\ ,\n\\end{equation}\nas well as normalisable modes $\\,\\tilde{z}_{i,1}\\,$ with $\\,\\Delta_{i}=2\\,$. These modes satisfy a set of $\\alpha_{i}$-dependent algebraic relations\n\\begin{equation}\n\\label{zt_1}\n\\tilde{z}_{i,1} - i \\cosh\\alpha_{i} \\, \\tilde{z}_{i,0} = 0 \\ .\n\\end{equation}\nThe on-shell relations (\\ref{zt_1}) will help us to characterise the deformations in the field theory dual of the Janus solution upon appropriate manipulation of boundary terms and finite counterterms. \n\n\n\nIn order to discuss the boundary conditions (\\ref{source&vevs_zt})--(\\ref{zt_1}) in more detail, we will resort to an expansion of $\\,\\text{Re}\\tilde{z}_{i}\\,$ (proper scalars) and $\\,\\text{Im}\\tilde{z}_{i}\\,$ (pseudo-scalars) around $\\rho\\rightarrow\\pm\\infty$. This yields\n\\begin{equation}\n\\label{source&vevs_rho_f}\n\\begin{array}{llll}\n\\text{Re}\\tilde{z}_{i}(\\rho) & = & \\dfrac{a^{(v)}_{i,0}}{\\rho} + \\dfrac{a^{(s)}_{i,1}}{\\rho^2} + \\mathcal{O}\\left( \\dfrac{1}{\\rho^3}\\right) & , \\\\[4mm]\n\\text{Im}\\tilde{z}_{i}(\\rho) & = & \\dfrac{b^{(s)}_{i,0}}{\\rho} + \\dfrac{b^{(v)}_{i,1}}{\\rho^2} + \\mathcal{O}\\left( \\dfrac{1}{\\rho^3}\\right) & ,\n\\end{array}\n\\end{equation}\nso that\n\\begin{equation}\n\\label{zToab}\n\\tilde{z}_{i,0} = a^{(v)}_{i,0} + i \\, b^{(s)}_{i,0}\n\\hspace{5mm} , \\hspace{5mm}\n\\tilde{z}_{i,1} = a^{(s)}_{i,1} + i \\, b^{(v)}_{i,1} \\ ,\n\\end{equation}\nwith\n\\begin{equation}\n\\label{ab_definition}\na^{(v)}_{i,0}=\\sinh\\alpha_{i} \\, \\sin\\beta_i\n\\hspace{5mm} , \\hspace{5mm}\nb^{(s)}_{i,0} = - \\sinh\\alpha_{i} \\, \\cos\\beta_i\n\\ .\n\\end{equation}\nThe algebraic relations in (\\ref{zt_1}) then become\n\\begin{equation}\n\\label{ab_algebraic}\na^{(s)}_{i,1} + \\cosh\\alpha_{i} \\,\\, b^{(s)}_{i,0} = 0\n\\hspace{5mm} , \\hspace{5mm}\nb^{(v)}_{i,1} - \\cosh\\alpha_{i} \\,\\, a^{(v)}_{i,0} = 0 \\ .\n\\end{equation}\nNote that the independent parameters specifying the boundary conditions in (\\ref{ab_definition}) are $\\,(\\alpha_{i} , \\beta_{i})\\,$. 
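For reference, the coefficients in (\\ref{zt_0}) and (\\ref{zt_1}) follow from a direct large-$\\rho$ expansion of (\\ref{Janus_U1^4_rho_2}),
\\begin{equation}
\\tilde{z}_{i}(\\rho) = e^{i \\beta_{i}} \\, \\frac{\\sinh\\alpha_{i}}{i \\rho} \\left( 1 - \\frac{\\cosh\\alpha_{i}}{i \\rho} + \\ldots \\right) = \\underbrace{\\sinh\\alpha_{i} \\, e^{i (\\beta_i - \\frac{\\pi}{2})}}_{\\tilde{z}_{i,0}} \\, \\frac{1}{\\rho} + \\underbrace{i \\, \\cosh\\alpha_{i} \\, \\tilde{z}_{i,0}}_{\\tilde{z}_{i,1}} \\, \\frac{1}{\\rho^2} + \\mathcal{O}\\left( \\frac{1}{\\rho^3}\\right) \\ ,
\\end{equation}
so that (\\ref{zt_1}) and (\\ref{ab_algebraic}) simply state that the $\\,\\Delta_{i}=2\\,$ modes are fixed in terms of the $\\,\\Delta_{i}=1\\,$ modes along these flows.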
As a consequence, the coefficients in the expansions (\\ref{source&vevs_rho_f}) obey the following two sets of algebraic relations \n\\begin{equation}\n\\dfrac{\\left(a^{(s)}_{i,1}\\right)^2}{\\left(b^{(s)}_{i,0}\\right)^2} = 1+ |\\tilde{z}_{i,0} |^2\n\\hspace{8mm} , \\hspace{8mm}\n\\dfrac{\\left(b^{(v)}_{i,1}\\right)^2}{\\left(a^{(v)}_{i,0}\\right)^2} = 1+ |\\tilde{z}_{i,0} |^2 \\ .\n\\end{equation}\nLastly, following \\cite{Bobev:2011rv} (see also \\cite{Bobev:2013yra}), we have attached the labels ``source\" $\\,^{(s)}\\,$ and ``VEV\" $\\,^{(v)}\\,$ to the modes in (\\ref{source&vevs_rho_f}) to highlight that, in order to preserve maximal supersymmetry, proper scalars should feature the alternative quantisation and pseudo-scalars the standard quantisation. Note that setting $\\,\\beta_{i}=\\pm\\frac{\\pi}{2}\\,$ switches off the sources in (\\ref{source&vevs_rho_f}) leaving only the VEV's. This is in agreement with the standard AdS\/CFT prescription and renders $\\,\\tilde{z}_{i,0}\\,$ in (\\ref{zt_0}) real.\n\n\n\n\n\n\\subsubsection{Janus solutions and boundary conditions}\n\n\n\nLet us compute the on-shell variation of the Lagrangian (\\ref{Lagrangian_model_U1^4_Einstein-scalars}). A standard computation yields the boundary term\n\\begin{equation}\n\\label{deltaS}\n\\delta S = \\displaystyle\\sum_{i} \\delta S_{i} = \\displaystyle\\sum_{i} \\int d^{4}x \\,\\, \\partial_{\\mu} \\theta^{\\mu}_{i} = - \\displaystyle\\sum_{i} \\int_{\\partial M}d^{3}x \\frac{\\sqrt{-h}}{\\left( 1-\\left\\vert\n\\tilde{z}_{i}\\right\\vert ^{2}\\right)^{2}} \\, N^{\\mu} \\, ( \\partial_{\\mu } \\tilde{z}_{i} \\, \\delta\\tilde{z}_{i}^{\\ast } + \\textrm{c.c.} ) \\ ,\n\\end{equation}\nwhere \n\\begin{equation}\n\\theta^{\\mu}_{i} \\equiv - \\frac{\\sqrt{-g}}{\\left( 1-\\left\\vert\n\\tilde{z}_{i}\\right\\vert^{2}\\right)^{2}} \\, g^{\\mu \\nu } \\left( \\partial_{\\nu} \\tilde{z}_{i} \\,\\,\n\\delta\\tilde{z}_{i}^{\\ast} + \\textrm{c.c.} \\right) \\ , \n\\end{equation}\nand c.c stands for complex conjugation. In (\\ref{deltaS}) we have introduced the standard foliation $\\,g_{\\mu \\nu }=h_{\\mu \\nu }+N_{\\mu} N_{\\nu }\\,$ with $\\,N_{\\mu }=\\sqrt{g_{\\rho \\rho }} \\, \\delta _{\\mu }^{\\rho }\\,$ being the vector normal to the AdS$_{3}$ leaves. \n\nPlugging into (\\ref{deltaS}) the asymptotic expansion of the scalars in (\\ref{source&vevs_zt}) around $\\,\\rho \\rightarrow \\pm \\infty\\,$, and using the asymptotic form of the metric (\\ref{Janus_U1^4_rho_1}), we encounter the well known linearly divergent term. In order to regularise the above boundary action and have a well-defined variational principle we introduce, for each complex field $\\,\\tilde{z}_{i}\\,$, the counter-term\n\\begin{equation}\n\\label{counterterm}\nS_{\\textrm{ct},i} = - \\, g \\lim_{\\rho \\rightarrow \\pm\\infty } \\, \\int_{\\partial M} \nd^{3}x \\sqrt{-h} \\,\\,\n\\tilde{z}_{i} \\, \\tilde{z}_{i}^{\\ast } \\ ,\n\\end{equation}\nso that\n\\begin{equation}\n\\label{boundary_contributions}\n\\delta S_{i} + \\delta S_{\\textrm{ct},i} = g^{-2} \\, k^{-3} \\int_{\\partial M} \\left( \\tilde{z}_{i,1} \\, \\delta \\tilde{z}_{i,0}^{\\ast } + \\tilde{z}_{i,1}^{\\ast} \\, \\delta \\tilde{z}_{i,0} \\right) \\, d\\Sigma \\ ,\n\\end{equation}\nin terms of the volume element at the boundary $\\,d\\Sigma=\\sqrt{-\\gamma} \\, d^{3}x \\,$ with $\\, \\sqrt{-\\gamma} =\\sin\\eta \\, \\cos^{-3}\\eta\\,$. 
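To sketch how (\\ref{boundary_contributions}) arises (schematically, keeping only the terms that survive at the $\\,\\rho \\rightarrow +\\infty\\,$ end), one can use $\\,\\sqrt{-h} \\simeq g^{-3} \\, k^{-3} \\, \\rho^{3} \\, \\sqrt{-\\gamma}\\,$, $\\,N^{\\mu} \\partial_{\\mu} \\simeq g \\, \\rho \\, \\partial_{\\rho}\\,$ and the expansion (\\ref{source&vevs_zt}) to obtain
\\begin{equation}
\\begin{array}{lll}
\\delta S_{i} & \\simeq & \\dfrac{1}{g^{2} \\, k^{3}} \\displaystyle\\int_{\\partial M} \\Big[ \\, \\rho \\, \\delta |\\tilde{z}_{i,0}|^{2} + \\big( 2 \\, \\tilde{z}_{i,1} \\, \\delta \\tilde{z}_{i,0}^{\\ast} + \\tilde{z}_{i,0} \\, \\delta \\tilde{z}_{i,1}^{\\ast} + \\textrm{c.c.} \\big) \\, \\Big] \\, d\\Sigma \\ , \\\\[4mm]
\\delta S_{\\textrm{ct},i} & \\simeq & - \\dfrac{1}{g^{2} \\, k^{3}} \\displaystyle\\int_{\\partial M} \\Big[ \\, \\rho \\, \\delta |\\tilde{z}_{i,0}|^{2} + \\delta \\big( \\tilde{z}_{i,0} \\, \\tilde{z}_{i,1}^{\\ast} + \\textrm{c.c.} \\big) \\, \\Big] \\, d\\Sigma \\ ,
\\end{array}
\\end{equation}
so that the linearly divergent pieces cancel between the two contributions and the finite remainder reproduces (\\ref{boundary_contributions}).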
Substituting the scalar mode parameterisation of (\\ref{zToab}) into the boundary contributions in (\\ref{boundary_contributions}) one obtains\n\\begin{equation}\n\\label{boundary_contributions_ab}\n\\delta S_i + \\delta S_{\\textrm{ct},i} = 2 \\, g^{-2} \\, k^{-3} \\int_{\\partial M} \\left( a^{(s)}_{i,1} \\, \\delta a^{(v)}_{i,0} + b^{(v)}_{i,1} \\, \\delta b^{(s)}_{i,0} \\right) \\, d\\Sigma \\ ,\n\\end{equation}\nwith\n\\begin{equation}\n\\label{k_factor_alt}\nk^2= 1 + \\sum_{i}\\sinh^{2}\\alpha_{i} = 1 + \\sum_{i} |\\tilde{z}_{i,0} |^2 \\, \\ge \\, 1 \\ .\n\\end{equation}\nIn order to remove the $\\,k^{-3}\\,$ factor in (\\ref{boundary_contributions_ab}) we could rescale the radial coordinate as $\\,\\hat{\\rho} = k \\, \\rho\\,$ or, instead, perform the non-linear mode redefinitions\n\\begin{equation}\n\\label{rescaled_ab}\n\\hat{a}^{(v)}_{i,0} = k^{-1} \\, a^{(v)}_{i,0} \n\\hspace{5mm} , \\hspace{5mm}\n\\hat{a}^{(s)}_{i,1} = k^{-2} \\, a^{(s)}_{i,1}\n\\hspace{5mm} , \\hspace{5mm}\n\\hat{b}^{(s)}_{i,0} = k^{-1} \\, b^{(s)}_{i,0} \n\\hspace{5mm} , \\hspace{5mm}\n\\hat{b}^{(v)}_{i,1} = k^{-2} \\, b^{(v)}_{i,1} \\ .\n\\end{equation}\nFollowing the latter prescription, the boundary contribution in (\\ref{boundary_contributions_ab}) becomes\n\\begin{equation}\n\\label{boundary_contributions_ab_hat}\n\\delta S_{i} + \\delta S_{\\textrm{ct},i} \\, = \\, 2 \\, g^{-2} \\int_{\\partial M} \\left( \\hat{a}^{(s)}_{i,1} \\, \\delta \\hat{a}^{(v)}_{i,0} + \\hat{b}^{(v)}_{i,1} \\, \\delta \\hat{b}^{(s)}_{i,0} \\right) \\, d\\Sigma \\ ,\n\\end{equation}\nand, due to the alternative quantisation featured by the proper scalars, we must add an extra boundary term such that\n\\begin{equation}\n\\label{boundary_contributions_ab_hat_final}\n\\delta S_{i} + \\delta S_{\\textrm{ct},i} - \\delta \\left( 2 \\, g^{-2} \\int_{\\partial M} \\hat{a}^{(s)}_{i,1} \\, \\hat{a}^{(v)}_{i,0} \\right) \\, = \\, 2 \\, g^{-2} \\int_{\\partial M} \\left( \\hat{b}^{(v)}_{i,1} \\, \\delta \\hat{b}^{(s)}_{i,0} - \\hat{a}^{(v)}_{i,0} \\, \\delta \\hat{a}^{(s)}_{i,1} \\right) \\, d\\Sigma \\ .\n\\end{equation}\nHaving a well-defined variational principle therefore requires $\\,\\delta \\hat{b}^{(s)}_{i,0} = \\delta \\hat{a}^{(s)}_{i,1} = 0\\,$. Recalling from (\\ref{ab_definition})-(\\ref{ab_algebraic}) that \n\\begin{equation}\nb^{(s)}_{i,0} = - \\sinh\\alpha_{i} \\, \\cos\\beta_i \n\\hspace{10mm} \\textrm{ and } \\hspace{10mm}\na^{(s)}_{i,1} = - \\cosh\\alpha_{i} \\,\\, b^{(s)}_{i,0} \\ ,\n\\end{equation}\nwe conclude that sources are generically present in the boundary theory of the Janus ($\\alpha_{i} \\neq 0$) except for the particular choice of boundary conditions $\\,\\beta_{i}= \\pm \\frac{\\pi}{2}\\,$. This implies that every choice of $\\,(\\alpha_{i}, \\beta_{i})\\,$ with $\\,\\beta_{i} \\neq \\pm \\frac{\\pi}{2}\\,$ corresponds to a different theory with a different value of the sources in the variational principle. 
On the contrary, when $\\,\\beta_{i}= \\pm \\frac{\\pi}{2}\\,$, the sources are zero on-shell and the boundary theory is unique.\n\n\n\n\n\n\n\n\\subsection{Multi-parametric Hades solutions}\n\n\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.50\\textwidth]{Plots\/boundary_Hades_straight.pdf} \n\\put(10,95){$\\textrm{Re}\\tilde{z}_{i}$} \\put(-105,228){$\\textrm{Im}\\tilde{z}_{i}$}\n\\put(-112.5,107.5){{\\color{red}{$\\bullet$}}}\n\\end{center}\n\\caption{Parametric plot of $\\,\\tilde{z}_{i}(\\rho)\\,$ in (\\ref{Hades_U1^4_rho_2}) for the Hades solutions with $\\,\\alpha_{i}=1\\,$ (blue-solid lines) and the ridge flows with $\\,\\alpha_{i}=0\\,$ (brown-dashed lines) upon setting $\\,\\beta_{i}=\\frac{n\\pi}{4}\\,$ with $\\,n=0,\\ldots,7\\,$. The central red point at $\\,\\tilde{z}_{i}=0\\,$ $\\,\\forall i\\,$ corresponds to the maximally supersymmetric AdS$_{4}$ vacuum and describes the asymptotic values at $\\,\\rho\\rightarrow \\infty\\,$. The boundary circle at $\\,|\\tilde{z}_{i}|=1\\,$ corresponds to the singularity at $\\,\\rho = 1\\,$.}\n\\label{fig:Hades_ztilde_U1^4}\n\\end{figure}\n\n\n\n\nStarting from the field equations in (\\ref{EOM_scalars})-(\\ref{EOM_Einstein}) and performing a change of radial coordinate\n\\begin{equation}\n\\label{new_coordinate_Hades}\n\\rho= \\cosh({g} \\mu) \n\\hspace{10mm} , \\hspace{10mm}\nd\\mu = g^{-1} \\, \\dfrac{d\\rho}{\\sqrt{\\rho^2-1}} \\ ,\n\\end{equation}\nwe find a new class of singular solutions of the form\n\\begin{equation}\n\\label{Hades_U1^4_rho_1}\nds_{4}^{2}=\\frac{1}{{g}^{2}} \\left( \\frac{d\\rho^{2}}{\\rho^{2}-1} + \n\\frac{\\rho^{2}-1 }{ k^2} \\, d\\Sigma^{2} \\right)\n\\hspace{10mm} \\textrm{ with } \\hspace{10mm}\nk^2= - 1 + \\sum_{i} \\cosh^2\\alpha_{i} \\ ,\n\\end{equation}\nand\n\\begin{equation}\n\\label{Hades_U1^4_rho_2}\n\\tilde{z}_{i}(\\rho) = e^{i \\beta_{i}}\\, \\frac{\\cosh\\alpha_{i}}{\\sinh\\alpha_{i} + i \\rho} \\ .\n\\end{equation}\nThese solutions are defined in the domain $\\,\\rho \\in [1,\\infty)\\,$ and feature a singularity at $\\,\\rho=1\\,$ where the change of radial coordinate in (\\ref{new_coordinate_Hades}) is ill-defined, the warping factor in front of the AdS$_{3}$ piece in the geometry collapses to zero size and $\\,|\\tilde{z}_{i}(1)|=1\\,$ (see Figure~\\ref{fig:Hades_ztilde_U1^4}). More concretely, the Ricci scalar constructed from the metric (\\ref{Hades_U1^4_rho_1}) reads\n\\begin{equation}\n\\label{Hades_Ricci}\nR(\\rho) = - 6 \\, g^{2} \\left( \\, 1 + \\frac{\\rho^2 + k^2}{\\rho^2 - 1} \\, \\right) \\ ,\n\\end{equation}\nand becomes singular at $\\,\\rho= 1\\,$. An analysis of the BPS equations (\\ref{BPS_scalars}) shows that the flows in (\\ref{Hades_U1^4_rho_1})-(\\ref{Hades_U1^4_rho_2}) turn out to be non-supersymmetric. We will refer to these singular solutions as flows to Hades. This term was coined for singular (flat-sliced) domain-walls dual to conventional RG-flows in \\cite{Freedman:1999gp,Gubser:2000nd}.\n\n\nAs previously done for the Janus solution, let us expand $\\,\\text{Re}\\tilde{z}_{i}\\,$ (proper scalars) and $\\,\\text{Im}\\tilde{z}_{i}\\,$ (pseudo-scalars) around $\\,\\rho\\rightarrow \\infty\\,$. 
One finds\n\\begin{equation}\n\\label{Hades_source&vevs_rho}\n\\begin{array}{llll}\n\\text{Re}\\tilde{z}_{i}(\\rho) & = & \\dfrac{a^{(v)}_{i,0}}{\\rho} + \\dfrac{a^{(s)}_{i,1}}{\\rho^2} + \\mathcal{O}\\left( \\dfrac{1}{\\rho^3}\\right) & , \\\\[4mm]\n\\text{Im}\\tilde{z}_{i}(\\rho) & = & \\dfrac{b^{(s)}_{i,0}}{\\rho} + \\dfrac{b^{(v)}_{i,1}}{\\rho^2} + \\mathcal{O}\\left( \\dfrac{1}{\\rho^3}\\right) & ,\n\\end{array}\n\\end{equation}\nwith\n\\begin{equation}\n\\label{Hades_ab_vevs}\na^{(v)}_{i,0}=\\cosh\\alpha_{i} \\, \\sin\\beta_i\n\\hspace{5mm} , \\hspace{5mm}\nb^{(v)}_{i,1} = \\sinh\\alpha_{i} \\,\\, a^{(v)}_{i,0} \\ ,\n\\end{equation}\nand\n\\begin{equation}\n\\label{Hades_ab_sources}\nb^{(s)}_{i,0} = - \\cosh\\alpha_{i} \\, \\cos\\beta_i\n\\hspace{5mm} , \\hspace{5mm}\na^{(s)}_{i,1} = - \\sinh\\alpha_{i} \\,\\, b^{(s)}_{i,0} \\ .\n\\end{equation}\nDifferent choices of the Hades parameters $\\,(\\alpha_{i},\\beta_{i})\\,$ translate into different boundary conditions in the expansions (\\ref{Hades_source&vevs_rho}). Note that the boundary theory has sources in (\\ref{Hades_ab_sources}) generically activated except if setting $\\,\\beta_{i}=\\pm\\frac{\\pi}{2}\\,$.\n\n\n\n\\subsubsection{Ridge flows with \\texorpdfstring{$\\,\\alpha_{i}=0\\,$}{alpha=0}}\n\\label{sec:ridge_4D}\n\n\nUnlike for the Janus solutions, setting $\\,\\alpha_{i}=0\\,$ does not recover a regular AdS$_{4}$ vacuum. Instead, the complex scalars in (\\ref{Hades_U1^4_rho_2}) reduce to\n\\begin{equation}\n\\label{Ridge_U1^4_rho_2}\n\\tilde{z}_{i}(\\rho) = \\rho^{-1} \\, e^{i \\left( \\beta_{i} - \\frac{\\pi}{2} \\right)} \\ ,\n\\end{equation}\nand \\textit{ridge flows} of the type investigated in \\cite{Pilch:2015dwa,Pilch:2015vha} appear with constant $\\,\\textrm{arg}\\tilde{z}_{i} = \\beta_{i} - \\frac{\\pi}{2}\\,$ and $\\,k^2=2\\,$ in the singular geometry (\\ref{Hades_U1^4_rho_1}). The $\\,\\rho \\rightarrow \\infty\\,$ expansion in (\\ref{Hades_source&vevs_rho}) and the boundary conditions in (\\ref{Hades_ab_vevs})-(\\ref{Hades_ab_sources}) also simplify drastically\n\n\\begin{equation}\n\\label{Ridge_source&vevs_rho}\n\\begin{array}{llll}\n\\text{Re}\\tilde{z}_{i}(\\rho) & = & \\dfrac{a^{(v)}_{i,0}}{\\rho} + \\dfrac{a^{(s)}_{i,1}}{\\rho^2} + \\mathcal{O}\\left( \\dfrac{1}{\\rho^3}\\right) & , \\\\[4mm]\n\\text{Im}\\tilde{z}_{i}(\\rho) & = & \\dfrac{b^{(s)}_{i,0}}{\\rho} + \\dfrac{b^{(v)}_{i,1}}{\\rho^2} + \\mathcal{O}\\left( \\dfrac{1}{\\rho^3}\\right) & ,\n\\end{array}\n\\end{equation}\nwith\n\\begin{equation}\n\\label{Ridge_ab_definition}\na^{(v)}_{i,0}= \\sin\\beta_i\n\\hspace{8mm} , \\hspace{8mm}\nb^{(s)}_{i,0} = - \\cos\\beta_i\n\\hspace{8mm} , \\hspace{8mm}\na^{(s)}_{i,1} = 0\n\\hspace{8mm} , \\hspace{8mm}\nb^{(v)}_{i,1} = 0 \n\\ .\n\\end{equation}\nTwo special cases are immediately identified. Setting $\\,\\beta_{i}=0,\\pi\\,$ renders $\\,\\tilde{z}_{i}(\\rho)\\,$ purely imaginary and the ridge flow from the maximally supersymmetric AdS$_{4}$ vacuum at $\\,\\rho \\rightarrow \\infty\\,$ is triggered by the source modes $\\,b^{(s)}_{i,0}\\,$ of the pseudo-scalars dual to fermion bilinears. On the contrary, setting $\\,\\beta_{i}=\\pm\\frac{\\pi}{2}\\,$ renders $\\,\\tilde{z}_{i}(\\rho)\\,$ purely real and the ridge flow is triggered by the VEV modes $\\,a^{(v)}_{i,0}\\,$ of the proper scalars dual to boson bilinears. As we will see in Section~\\ref{sec:Uplift_11D}, the uplift of these special ridge flows to eleven dimensions will be very different. 
This is to be contrasted with the situation in four dimensions where the Lagrangian (\\ref{Lagrangian_model_U1^4_Einstein-scalars}) is invariant under constant shifts of $\\,\\beta_{i}\\,$. Note also that a shift of the form $\\,\\beta_{i} \\rightarrow \\beta_{i} + \\pi\\,$ amounts to a reflection $\\,\\rho \\rightarrow -\\rho\\,$ in the respective field $\\,\\tilde{z}_{i}\\,$ in (\\ref{Ridge_U1^4_rho_2}) while leaving the Hades metric in (\\ref{Hades_U1^4_rho_1}) invariant. Since the domain of the radial coordinate is fixed to $\\,\\rho \\in [1,\\infty)\\,$, the shift $\\,\\beta_{i} \\rightarrow \\beta_{i} + \\pi\\,$ generically generates a new solution. \n\n\n\n\n\nA fundamental difference between our ridge flows in (\\ref{Hades_U1^4_rho_1}) and (\\ref{Ridge_U1^4_rho_2}) and the ones investigated in \\cite{Pope:2003jp,Pilch:2015dwa,Pilch:2015vha} is that the ones there have a flat-sliced geometry. Therefore they correspond to conventional holographic RG-flows. Our solutions have an AdS$_{3}$-slicing of the geometry, instead. It was further shown in \\cite{Pilch:2015dwa,Pilch:2015vha} that, for the flat-sliced solutions, only a set of discrete values of $\\,\\textrm{arg}\\tilde{z}_{i}\\,$ was compatible with supersymmetry. However, if relaxing supersymmetry, any value of $\\,\\textrm{arg}\\tilde{z}_{i}\\,$ was permitted. In our non-supersymmetric ridge flows, any possible value of $\\,\\beta_{i}\\,$ is permitted too. Generic flows to Hades with $\\,\\alpha_{i} \\neq 0\\,$ and ridge flows with $\\,\\alpha_{i} = 0\\,$ are depicted in Figure~\\ref{fig:Hades_ztilde_U1^4}.\n\n\n\n\n\\subsubsection{Hades with (super) symmetry enhancements}\n\n\n\nAs already discussed for the Janus solutions in Section~\\ref{sec:Janus_sym_enhancement}, imposing identifications between the complex fields $\\,\\tilde{z}_{i}(\\rho)\\,$ translates into different patterns of (super) symmetry enhancements. For example, non-supersymmetric Hades solutions with $\\,\\textrm{SU}(3) \\times \\textrm{U}(1)^2\\,$ symmetry are obtained upon identifying the three complex scalars, namely, upon setting $\\,{\\alpha_{1}=\\alpha_{2}=\\alpha_{3}}\\,$ and $\\,{\\beta_{1}=\\beta_{2}=\\beta_{3}}\\,$ in the general Hades solution (\\ref{Hades_U1^4_rho_1})-(\\ref{Hades_U1^4_rho_2}).\n\n\n\n\nSupersymmetric Hades solutions with an AdS$_{3}$ slicing have previously been constructed in \\cite{Bobev:2013yra} within the $\\,\\textrm{SO}(4) \\times \\textrm{SO}(4)\\,$ invariant sector of the $\\,\\textrm{SO}(8)\\,$ gauged supergravity. As discussed in Section~\\ref{sec:Janus_SO4xSO4}, this sector of the theory is recovered upon setting two of the three complex fields $\\,\\tilde{z}_{i}\\,$ to zero, \\textit{i.e.}, $\\,\\tilde{z}_{2}(\\rho)=\\tilde{z}_{3}(\\rho)=0\\,$. However, it is easy to see that this cannot be achieved by tuning the parameters $\\,(\\alpha_{i},\\beta_{i})\\,$ in (\\ref{Hades_U1^4_rho_2}) to any real value. Instead, one must set two complex fields to zero from the start and search for solutions of the field equations. 
In this manner, one finds Hades solutions of the form \n\\begin{equation}\n\\label{Hades_no_ridge_SO(4)xSO(4)_rho_1}\nds_{4}^{2}=\\frac{1}{{g}^{2}} \\left( \\frac{d\\rho^{2}}{\\rho^{2}-1} + \n\\frac{\\rho^{2}-1 }{ k^2} \\, d\\Sigma^{2} \\right) \n\\hspace{10mm} \\textrm{ with } \\hspace{10mm}\nk^2= \\sinh^2\\alpha_{1}\n\\ ,\n\\end{equation}\nand\n\\begin{equation}\n\\label{Hades_no_ridge_SO(4)xSO(4)_rho_2}\n\\tilde{z}_{1}(\\rho) = e^{i \\beta_{1}}\\, \\frac{\\cosh\\alpha_{1}}{\\sinh\\alpha_{1} + i \\rho} \n\\hspace{8mm} , \\hspace{8mm} \\tilde{z}_{2}(\\rho)=\\tilde{z}_{3}(\\rho)=0 \\ ,\n\\end{equation}\nwhich turn out to solve the BPS equations in (\\ref{BPS_A})-(\\ref{BPS_scalars}). It is worth emphasising that these supersymmetric Hades with $\\,\\textrm{SO}(4) \\times \\textrm{SO}(4)$ symmetry do not belong to the same class of solutions as the non-supersymmetric Hades in (\\ref{Hades_U1^4_rho_1})-(\\ref{Hades_U1^4_rho_2}). Also, they do not admit a ridge flow limit since setting $\\,\\alpha_{1}=0\\,$ implies having a pathological ($k^2=0$) warping of AdS$_{3}$ in the geometry (\\ref{Hades_no_ridge_SO(4)xSO(4)_rho_1}).\n\n\n\n\\section{Uplift to eleven-dimensional supergravity}\n\\label{sec:Uplift_11D}\n\nIn this section we present the uplift to eleven-dimensional supergravity of the Janus and Hades solutions constructed within the four-dimensional SO(8) gauged supergravity. We use the conventions of \\cite{Gauntlett:2002fz} according to which the Lagrangian of eleven-dimensional supergravity \\cite{Cremmer:1978km} takes the form\n\\begin{equation}\n\\mathcal{L}_{11} = \\hat{R} \\, \\text{vol}_{11} - \\tfrac{1}{2} \\, \\hat{F}_{(4)} \\wedge*_{11} \\hat{F}_{(4)} - \\tfrac{1}{6} \\, \\hat{A}_{(3)} \\wedge\\hat{F}_{(4)} \\wedge\\hat{F}_{(4)} \\ .\n\\end{equation}\nA consistent background is then subject to the source-less Bianchi identity\n\\begin{equation}\n\\label{BI_F4}\nd\\hat{F}_{(4)} = 0 \\ ,\n\\end{equation}\nas well as the equations of motion\n\\begin{equation}\n\\label{EOM_11D}\n\\begin{array}\n[c]{rll}%\nd(*_{11} \\hat{F}_{(4)}) + \\frac{1}{2} \\, \\hat{F}_{(4)} \\wedge\\hat{F}_{(4)} & = & 0 \\ ,\\\\[2mm]%\n\\hat{R}_{MN} - \\frac{1}{12} \\left( \\hat{F}_{MPQR} \\, \\hat{F}_{N}{}^{PQR} - \\frac{1}{12} \\, \\hat{F}_{PQRS} \\, \\hat{F}^{PQRS} \\, \\hat{G}_{MN} \\right) & = & 0 \\ .\n\\end{array}\n\\end{equation}\nThe equation of motion for $\\,\\hat{F}_{(4)}\\,$ in (\\ref{EOM_11D}) can be used to introduce the dual flux \n\\begin{equation}\n\\label{F7_definition}\n\\hat{F}_{(7)} \\equiv *_{11} \\hat{F}_{(4)} + \\tfrac{1}{2} \\hat{A}_{(3)} \\wedge \\hat{F}_{(4)} \\ ,\n\\end{equation}\nwhich therefore obeys the Bianchi identity $\\,d\\hat{F}_{(7)}=0\\,$. The flux in (\\ref{F7_definition}) determines the conserved Page charge of M2-branes in the background\\footnote{We have set the string length to unity, \\textit{i.e.}, $\\,\\ell_{s} = 1\\,$.}\n\\begin{equation}\n\\label{M2_brane_charge}\nN_{2} = \\frac{1}{(2 \\pi)^{6}} \\int_{M_{7}} \\hat{F}_{(7)} = \\frac{1}{(2 \\pi)^{6}} \\int_{M_{7}} *_{11} \\hat{F}_{(4)} + \\tfrac{1}{2} \\hat{A}_{(3)} \\wedge \\hat{F}_{(4)} \\ ,\n\\end{equation}\nwhere $\\,M_{7}\\,$ is the internal space. The contribution $\\,*_{11} \\hat{F}_{(4)}\\,$ comes from electric M2-branes and the contribution $\\,{\\tfrac{1}{2} \\hat{A}_{(3)} \\wedge \\hat{F}_{(4)}}\\,$ originates from magnetic M5-branes. 
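For completeness, the closure quoted above follows directly from (\\ref{BI_F4}) and the first equation in (\\ref{EOM_11D}): using $\\,\\hat{F}_{(4)} = d\\hat{A}_{(3)}\\,$ one finds
\\begin{equation}
d\\hat{F}_{(7)} = d(*_{11} \\hat{F}_{(4)}) + \\tfrac{1}{2} \\, \\hat{F}_{(4)} \\wedge \\hat{F}_{(4)} - \\tfrac{1}{2} \\, \\hat{A}_{(3)} \\wedge d\\hat{F}_{(4)} = 0 \\ ,
\\end{equation}
consistent with (\\ref{M2_brane_charge}) defining a conserved (Page) charge.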
\n\n\n\n\\subsection{\\texorpdfstring{$\\text{SU}(3) \\times\\text{U}(1)^2\\,$}{SU(3) x U(1)2} invariant sector}\n\\label{sec:11D_Janus}\n\nThe eleven-dimensional uplift of the $\\,\\text{SU}(3) \\times\\text{U}(1)^2\\,$ invariant sector of the maximal SO(8) supergravity has been worked out in \\cite{Pilch:2015dwa,Azizi:2016noi} (see also \\cite{Larios:2019kbw}). To describe the internal geometry, we will closely follow the Appendix~B.2 of \\cite{Larios:2019kbw} and use intrinsic coordinates on $\\,\\textrm{S}^{7}\\,$ adapted to its seven-dimensional Sasaki--Einstein structure. In these coordinates, the round metric on $\\,\\textrm{S}^{7}\\,$ takes the form\n\\begin{equation}\n\\label{metric_round_S7}\nds_{7}^{2} = ds_{\\mathbb{CP}_{3}}^{2} + \\left( d\\psi_{-}+ \\sigma_{-} \\right)^2 \\ ,\n\\end{equation}\nwhere $\\,ds_{\\mathbb{CP}_{3}}^{2}\\,$ is the Fubini-Study line element (normalised as in \\cite{Larios:2019kbw})\n\\begin{equation}\n\\label{metric_round_CP3}\nds_{\\mathbb{CP}_{3}}^{2} = d\\tilde{\\alpha}^{2} + \\cos^{2} \\tilde{\\alpha} \\, \\big( \\, ds^{2}_{\\mathbb{CP}_{2}} + \\sin^{2} \\tilde{\\alpha} \\, (d\\tau_{-} + \\sigma)^{2} \\, \\big)\n\\hspace{6mm} \\textrm{ with } \\hspace{6mm}\n\\sigma_{-} = \\cos^{2} \\tilde{\\alpha} \\, (d\\tau_{-} + \\sigma) \\ .\n\\end{equation}\nThe ranges of the angles in (\\ref{metric_round_S7})-(\\ref{metric_round_CP3}) are $\\,\\tilde{\\alpha} \\in[0,\\frac\n{\\pi}{2}]\\,$, $\\,\\tau_{-} \\in[0,2 \\pi]\\,$ and $\\,\\psi_{-} \\in[0,2 \\pi]\\,$. Moreover, $\\,\\sigma\\,$ in (\\ref{metric_round_CP3}) is the one-form on $\\,\\mathbb{CP}_{2}\\,$ such that $\\,d\\sigma=2\\boldsymbol{J}\\,$ with $\\,\\boldsymbol{J}\\,$ being the K\\\"ahler form on $\\,\\mathbb{CP}_{2}\\,$. The round metric in (\\ref{metric_round_S7}) occurs when the scalar field in the four-dimensional Lagrangian (\\ref{Lagrangian_model_SU3xU1xU1}) vanishes, \\textit{i.e.}, $\\,\\tilde{z}=0\\,$, and the $\\,\\textrm{AdS}_{4} \\times \\textrm{S}^{7}\\,$ Freund--Rubin vacuum of eleven-dimensional supergravity is recovered \\cite{Freund:1980xh}. However, whenever non-vanishing, the scalar $\\,\\tilde{z}\\,$ in (\\ref{Janus_solution_SU3xU1xU1}) inflicts a deformation on the Freund--Rubin vacuum so that a new background is generated which displays a smaller $\\,\\text{SU}(3) \\times \\text{U}(1)^{2} \\subset \\textrm{SO}(8)\\,$ isometry group.\n\n\n\nWe are encoding the breaking of isometries caused by $\\,\\tilde{z}\\,$ into a set of metric functions $\\,f\\,$'s and flux functions $\\,h\\,$'s. The eleven-dimensional metric takes the form\n\\begin{equation}\n\\label{11D_metric}\n\\begin{array}{lll}\nd\\hat{s}_{11}^{2} & = & \\frac{1}{2} \\, f_{1} \\, ds_{4}^{2} + 2 \\, g^{-2} \\Big[ f_{2} \\, d\\tilde{\\alpha}^{2} + \\cos^{2} \\tilde{\\alpha} \\, \\big( \\, f_{3} \\, \\, ds^{2}_{\\mathbb{CP}_{2}} + \\sin^{2} \\tilde{\\alpha} \\, f_{4} \\, (d\\tau_{-} + \\sigma)^{2} \\, \\big)\\\\[2mm]\n& + & f_{5} \\, \\big( d\\psi_{-} + \\cos^{2} \\tilde{\\alpha} \\, f_{6} \\, (d\\tau_{-} + \\sigma) \\big)^{2} \\Big] \\ ,\n\\end{array}\n\\end{equation}\nwith $\\,ds_{4}^{2}\\,$ given in (\\ref{Janus_solution_SU3xU1xU1})-(\\ref{A(mu)_func_SU3xU1xU1}). Note that the eleven-dimensional metric (\\ref{11D_metric}) displays an $\\,\\text{SU}(3) \\times \\text{U}(1)_{\\tau_{-}} \\times\\text{U}(1)_{\\psi_{-}}\\,$ symmetry. 
The $\,\textrm{SU}(3)\,$ factor accounts for the $\,\mathbb{CP}_{2}\,$ isometries and the two $\,\textrm{U}(1)\,$ factors correspond to shifts along the angles $\,\tau_{-}\,$ and $\,\psi_{-}\,$, hence the attached labels. The various metric functions in (\ref{11D_metric}) depend on the complex scalar $\tilde{z}$ in\n(\ref{Janus_solution_SU3xU1xU1}) and on the angle $\,\tilde{\alpha} \,$ on S$^{7}$. They are given by\n\begin{equation}\n\label{f_functions}\n\begin{array}{c}\nf_{1}^{3} = \dfrac{ (1+\tilde{z}) (1+\tilde{z}^{*})}{(1-|\tilde{z}|^{2})^{3}} \, H^{2}\n\hspace{5mm} , \hspace{5mm}\nf_{2}^{3\/2} = \dfrac{H}{(1+\tilde{z}) (1+\tilde{z}^{*})}\n\hspace{5mm} , \hspace{5mm}\nf_{3}^{3} = \dfrac{(1+\tilde{z}) (1+\tilde{z}^{*})}{H} \ , \\[6mm]\nf_{4}^{3\/2} = \dfrac{(1-|\tilde{z}|^{2})^{3}}{ (1+\tilde{z}) (1+\tilde{z}^{*})} \, H \, K^{-\frac{3}{2}} \n\hspace{5mm} , \hspace{5mm}\nf_{5}^{3\/2} = \dfrac{1}{ (1+\tilde{z}) (1+\tilde{z}^{*})} \, H^{-2} \, K^{\frac{3}{2}} \ , \\[8mm]\nf_{6} = \Big[ (1+\tilde{z}) (1+\tilde{z}^{*}) \, H + ( \tilde{z} - \tilde{z}^{*})^{2} \cos(2\tilde{\alpha}) \Big] \, K^{-1} \ ,\n\end{array}\n\end{equation}\nwith\n\begin{equation}\nH = 1+|\tilde{z}|^{2} - ( \tilde{z} +\tilde{z}^{*}) \cos(2\tilde{\alpha}) \hspace{8mm} \text{ and } \hspace{8mm} K = 1+|\tilde{z}|^{4} - 2 \, |\tilde{z}|^{2} \, \cos(4\tilde{\alpha}) \ .\n\end{equation}\nThe round metric on $\,\textrm{S}^{7}\,$ is recovered from (\ref{11D_metric}) upon setting $\,\tilde{z}=0\,$, which implies that all the metric functions reduce to $\,H=K=f_{1,\ldots,6}=1\,$. The part of the internal geometry in the upper line of (\ref{11D_metric}) then reconstructs the $\,\mathbb{CP}_{3}\,$ metric in (\ref{metric_round_CP3}). \n\n\n\nThe eleven-dimensional four-form flux takes a lengthier expression given in terms of three-, one- and zero-form deformations in four dimensions which we collectively denote $\,h$'s. 
Adopting the terminology of \\cite{Pilch:2015dwa}, the four-form flux naturally splits as\n\\begin{equation}\n\\label{11D_F4}\n\\hat{F}_{(4)} = \\hat{F}_{(4)}^{\\textrm{st}} + \\hat{F}_{(4)}^{\\textrm{tr}} \\ ,\n\\end{equation}\nwith\n\\begin{equation}\n\\label{11D_F4_st}\n\\hat{F}_{(4)}^{\\textrm{st}} =\n-\\frac{1}{2\\sqrt{2}} \\, g \\, h_{1} \\, \\text{vol}_{4} + \\frac{1}{\\sqrt{2}} \\, g^{-1} \\, \\sin(2 \\tilde{\\alpha}) \\,\\, h^{(3)}_{2} \\wedge d\\tilde{\\alpha} \\ ,\n\\end{equation}\nand\n\\begin{equation}\n\\label{11D_F4_tr}\n\\begin{array}{lll}\n\\hat{F}_{(4)}^{\\textrm{tr}} &=& - 2 \\sqrt{2} \\, g^{-3} \\Big[ \\, \\sin(2 \\tilde{\\alpha}) \\, h^{(1)}_{3} \\wedge d\\tilde{\\alpha} \\wedge d\\psi_{-} \\wedge(d\\tau_{-}+\\sigma)\\\\[2mm]\n& + & \\cos^{4} \\tilde{\\alpha} \\,\\, h^{(1)}_{4} \\wedge (d\\tau_{-} + \\sigma) \\wedge\\boldsymbol{J} + \\cos^{2} \\tilde{\\alpha} \\, \\cos(2 \\tilde{\\alpha}) \\,\\, h_{5}^{(1)} \\wedge d\\psi_{-} \\wedge\\boldsymbol{J}\\\\[2mm]\n& + & \\sin(2 \\tilde{\\alpha}) \\, h_{6} \\, d\\tilde{\\alpha} \\wedge d\\psi_{-} \\wedge\\boldsymbol{J} + \\cos^{4} \\tilde{\\alpha} \\,\\, h_{7} \\, \\boldsymbol{J} \\wedge\\boldsymbol{J}\\\\[2mm]\n& + & \\cos^{2}\\tilde{\\alpha} \\, \\sin(2 \\tilde{\\alpha}) \\,\\, h_{8} \\, d\\tilde{\\alpha} \\wedge(d\\tau_{-} + \\sigma) \\wedge\\boldsymbol{J} \\, \\Big] \\ .\n\\end{array}\n\\end{equation}\nFor the space-time part in (\\ref{11D_F4_st}) we have introduced a zero-form\n\\begin{equation}\n\\label{h1_func}\nh_{1} = \\dfrac{1}{(1-|\\tilde{z}|^{2})} \\Big( \\, 3 \\, (1+|\\tilde{z}|^{2}) + ( \\tilde{z} +\\tilde{z}^{*}) \\, (1 - 2 \\cos(2 \\tilde{\\alpha}) ) \\, \\Big) \\ , \n\\end{equation}\nand a three-form\n\\begin{equation}\n\\label{h2_func}\n\\begin{array}{lll}\nh^{(3)}_{2} & = & \\dfrac{1}{(1-|\\tilde{z}|^{2})^{2}} \\Big( (\\tilde{z}^{2}-1) *_{4} d\\tilde{z}^{*} + ((\\tilde{z}^{*})^{2}-1) *_{4} d\\tilde{z} \\Big) \\ ,\n\\end{array}\n\\end{equation}\nwhich has legs along the AdS$_{3}$ factor in the external geometry\\footnote{The Hodge dual $\\,*_{4}\\,$ is defined in four-dimensions using the metric $\\,ds^{2}_{4}\\,$ in (\\ref{Janus_solution_SU3xU1xU1})-(\\ref{A(mu)_func_SU3xU1xU1}).}. 
For the transverse part in (\\ref{11D_F4_tr}) we have introduced a set of one-forms\n\\begin{equation}\n\\label{11D_F4_one-forms}\n\\begin{array}{lll}\nh^{(1)}_{3} & = & \\dfrac{i}{2} \\left( \\dfrac{d\\tilde{z}^{*} }{(1+\\tilde{z}^{*})^{2}} - \\dfrac{d\\tilde{z} }{(1+\\tilde{z})^{2}} \\right) \\ ,\\\\[4mm]\nh^{(1)}_{4} & = & h^{(1)}_{5} \\,\\, = \\,\\, i \\, H^{-2} \\, \\Big( \\left( 1 - 2 \\cos(2 \\tilde{\\alpha}) \\, \\tilde{z}^{*} + (\\tilde{z}^{*})^{2} \\right) \\, d\\tilde{z} - \\left( 1 - 2 \\cos(2 \\tilde{\\alpha}) \\, \\tilde{z} + \\tilde{z}^{2} \\right) \\, d\\tilde{z}^{*}\n\\Big) \\ ,\n\\end{array}\n\\end{equation}\ntogether with zero-forms\n\\begin{equation}\n\\begin{array}{lll}\nh_{6} & = & i \\, 4 \\, H^{-2} \\, (\\tilde{z}^{*}-\\tilde{z}) \\dfrac{(1+|\\tilde{z}|^{2})}{(1+\\tilde{z})(1+\\tilde{z}^{*})} \\Big( 1+ |\\tilde{z}|^{2} + (\\tilde{z}+\\tilde{z}^{*}) \\sin^{2}\\tilde{\\alpha} \\Big) \\ ,\\\\[4mm]\nh_{7} & = & -i \\, 2 \\, H^{-1} \\, (\\tilde{z}^{*}-\\tilde{z}) \\ ,\\\\[4mm]\nh_{8} & = & i \\, 2 \\, H^{-2} \\, (\\tilde{z}^{*}-\\tilde{z}) \\, \\Big( 1+ |\\tilde{z}|^{2} + (\\tilde{z}+\\tilde{z}^{*}) \\sin^{2}\\tilde{\\alpha} \\Big) \\ .\n\\end{array}\n\\end{equation}\nThe zero-forms $\\,h_{6}\\,$, $\\,h_{7}\\,$ and $\\,h_{8}\\,$ determine the purely internal components in (\\ref{11D_F4_tr}) and vanish if $\\,\\tilde{z}^{*}=\\tilde{z}\\,$. Also the one-form deformations in (\\ref{11D_F4_one-forms}) vanish in this case so that $\\,\\hat{F}_{(4)}^{\\textrm{tr}}=0\\,$. Lastly, the entire eleven-dimensional flux in (\\ref{11D_F4}) preserves an $\\,\\text{SU}(3)\\times\\text{U}(1)_{\\tau_{-}} \\times\\text{U}(1)_{\\psi_{-}}\\,$ symmetry since there is no explicit dependence on the angle $\\,\\psi_{-}\\,$ and, moreover, the two-form $\\,\\boldsymbol{J}\\,$ on $\\,\\mathbb{CP}_{2}\\,$ is not charged under $\\,\\text{U}(1)_{\\tau_{-}}\\,$.\n\n\nTo complete the uplift, the above quantities must be evaluated at the value of the complex scalar $\\,\\tilde{z} \\equiv \\tilde{z}_{1}=\\tilde{z}_{2}=\\tilde{z}_{3}\\,$ both for the Janus (\\ref{Janus_U1^4_rho_2}) and Hades (\\ref{Hades_U1^4_rho_2}) solutions. 
We have explicitly verified that the resulting eleven-dimensional backgrounds in (\\ref{11D_metric}) and (\\ref{11D_F4}) satisfy the source-less Bianchi identity and equations of motion in (\\ref{BI_F4}) and (\\ref{EOM_11D}), respectively.\n\n\n\\subsection{\\texorpdfstring{$\\text{SU}(3) \\times\\text{U}(1)^2\\,$}{SU(3) x U(1)2} symmetric Janus}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.25\\textwidth]{Plots\/f1_beta0.pdf} \n\\put(-100,100){$f_{1}$} \\hspace{10mm}\n\\includegraphics[width=0.25\\textwidth]{Plots\/f2_beta0.pdf} \n\\put(-100,100){$f_{2}$} \\hspace{10mm}\n\\includegraphics[width=0.25\\textwidth]{Plots\/f3_beta0.pdf} \n\\put(-100,100){$\\cos^2\\tilde{\\alpha} \\, f_{3}$}\n\\vspace{5mm}\n\\includegraphics[width=0.25\\textwidth]{Plots\/f4_beta0.pdf} \n\\put(-100,100){$\\cos^2\\tilde{\\alpha} \\, \\sin^2\\tilde{\\alpha} \\,f_{4}$} \\hspace{10mm}\n\\includegraphics[width=0.25\\textwidth]{Plots\/f5_beta0.pdf} \n\\put(-100,100){$f_{5}$} \\hspace{10mm}\n\\includegraphics[width=0.25\\textwidth]{Plots\/f6_beta0.pdf} \n\\put(-100,100){$\\cos^2\\tilde{\\alpha} \\, f_{5}\\, f_{6}$}\n\\end{center}\n\\caption{Regular metric functions in (\\ref{11D_metric}) for the Janus solution with $\\alpha=1$ and $\\beta=0$.}\n\\label{fig:f_functions_beta0}\n\\end{figure}\n\nWe have performed the explicit uplift of the analytic and non-supersymmetric Janus solution in (\\ref{Janus_solution_SU3xU1xU1})-(\\ref{A(mu)_func_SU3xU1xU1}). The resulting eleven-dimensional backgrounds are everywhere regular and depend on the choice of parameters $\\,(\\alpha, \\beta)\\,$ specifying the boundary conditions (\\ref{ab_definition})-(\\ref{ab_algebraic}) of the four-dimensional Janus solution. Plots of the functions entering the metric (\\ref{11D_metric}) for $\\,\\beta=0\\,$ and $\\,\\beta=\\frac{\\pi}{2}\\,$ are depicted in Figure~\\ref{fig:f_functions_beta0} and Figure~\\ref{fig:f_functions_betaPi}. These two choices respectively activate only sources or VEV's in the Janus boundary conditions (\\ref{ab_definition})-(\\ref{ab_algebraic}). In addition, the scalar $\\,\\tilde{z}\\,$ in the $\\text{SU}(3) \\times\\text{U}(1)^2\\,$ symmetric Janus solution of (\\ref{Janus_solution_SU3xU1xU1}) is necessarily complex so that no limit to a real Janus solution exists even in the general case of (\\ref{Janus_solution_U1^4_ztil}). 
This further implies that all the $\,h\,$ functions (and also three- and one-forms) entering $\,\hat{F}_{(4)}^{\textrm{st}}\,$ in (\ref{11D_F4_st}) and $\,\hat{F}_{(4)}^{\textrm{tr}}\,$ in (\ref{11D_F4_tr}) are generically activated.\n\n\begin{figure}[t]\n\begin{center}\n\includegraphics[width=0.25\textwidth]{Plots\/f1_betaPi2.pdf} \n\put(-100,100){$f_{1}$} \hspace{10mm}\n\includegraphics[width=0.25\textwidth]{Plots\/f2_betaPi2.pdf} \n\put(-100,100){$f_{2}$} \hspace{10mm}\n\includegraphics[width=0.25\textwidth]{Plots\/f3_betaPi2.pdf} \n\put(-100,100){$\cos^2\tilde{\alpha} \, f_{3}$}\n\vspace{5mm}\n\includegraphics[width=0.25\textwidth]{Plots\/f4_betaPi2.pdf} \n\put(-100,100){$\cos^2\tilde{\alpha} \, \sin^2\tilde{\alpha} \,f_{4}$} \hspace{10mm}\n\includegraphics[width=0.25\textwidth]{Plots\/f5_betaPi2.pdf} \n\put(-100,100){$f_{5}$} \hspace{10mm}\n\includegraphics[width=0.25\textwidth]{Plots\/f6_betaPi2.pdf} \n\put(-100,100){$\cos^2\tilde{\alpha} \, f_{5}\, f_{6}$}\n\end{center}\n\caption{Regular metric functions in (\ref{11D_metric}) for the Janus solution with $\alpha=1$ and $\beta=\frac{\pi}{2}$.}\n\label{fig:f_functions_betaPi}%\n\end{figure}\n\n\n\nIn order to compute the M$2$-brane charge for the $\,\textrm{SU}(3) \times \textrm{U}(1)^2\,$ symmetric Janus, we first note that the dual seven-form flux can be expressed as\n\begin{equation}\n\label{F7_general}\n\hat{F}_{(7)} = d\hat{\alpha} \wedge h^{(6)} + \ldots \ , \n\end{equation}\nwith $\,h^{(6)}=\frac{1}{2} \boldsymbol{J} \wedge \boldsymbol{J} \wedge d\tau_{-} \wedge d\psi_{-}\,$ being the volume form of $\,M_{6}\,$ spanned by $\,(\mathbb{CP}_{2}, \tau_{-}, \psi_{-})$, and $\,\hat{\alpha}\,$ playing the role of an ``adapted'' angular coordinate threaded by the flux. This adapted coordinate is in general a complicated function\n\begin{equation}\n\label{hat_alpha}\n\hat{\alpha}=\hat{\alpha}(\rho,\tilde{\alpha} \, ; \,\alpha,\beta) \ ,\n\end{equation}\nthat depends on the original coordinates $\,(\rho,\tilde{\alpha})\,$ as well as on the Janus parameters $\,(\alpha,\beta)\,$. Lastly, the ellipsis in (\ref{F7_general}) stands for additional terms with legs on the AdS$_3$ piece of the geometry which do not play a relevant role when computing M$2$-brane charges. Therefore, all the relevant information regarding M$2$-brane charges gets encoded in the one-form $\,d\hat{\alpha}\,$ as it defines an adapted angular direction. It is important to highlight that, when taking the limit $\,\rho \rightarrow \pm \infty\,$, one finds that $\,d\hat{\alpha} \propto \sin(2\tilde{\alpha}) \cos^4\tilde{\alpha} \, d\tilde{\alpha}\,$ no longer depends on the Janus parameters $\,(\alpha,\beta)\,$. In this limit, the dual seven-form flux threads the $\,\textrm{S}^7\,$ as required by the asymptotic $\,\textrm{AdS}_{4} \times \textrm{S}^{7}\,$ geometry of the flow. 
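\n\nAs an aside, profiles like those displayed in Figures~\ref{fig:f_functions_beta0} and \ref{fig:f_functions_betaPi} are straightforward to reproduce numerically from the Janus scalar profile and the metric functions in (\ref{f_functions}). A minimal Python sketch (included only as an illustration and restricted to $f_{1}$, the remaining functions being analogous) reads\n\begin{verbatim}\nimport numpy as np\n\n# SU(3)xU(1)^2 Janus scalar: z(rho) = exp(i*beta)*sinh(alpha)\/(cosh(alpha)+i*rho)\ndef z_janus(rho, alpha=1.0, beta=0.0):\n    return np.exp(1j*beta)*np.sinh(alpha)\/(np.cosh(alpha) + 1j*rho)\n\n# H and f_1 of eq. (f_functions); the remaining metric functions are analogous\ndef f1(z, atil):\n    z2 = np.abs(z)**2\n    H  = 1 + z2 - 2*np.real(z)*np.cos(2*atil)\n    return (np.abs(1 + z)**2 * H**2 \/ (1 - z2)**3)**(1.0\/3.0)\n\nrho, atil = np.linspace(-6.0, 6.0, 241), np.pi\/4\nvals = f1(z_janus(rho), atil)\nprint(vals[0], vals[-1])   # both close to 1: the round S^7 is recovered at large |rho|\n\end{verbatim}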
\n\n\n\nThe computation of the M$2$-brane charge in the Janus solution gives\n\begin{equation}\n\label{N2_Janus}\nN_2 = \frac{1}{(2\pi)^6}\int_{\Gamma \times \textrm{M}_{6}} \hat{F}_{(7)} = \frac{1}{32 \pi^2}\int_{\partial\Gamma} \hat{\alpha} = \frac{1}{4\pi^2 g^6} \ ,\n\end{equation}\nwhere the relevant curves $\,\Gamma$'s threaded by the purely internal part of the seven-form flux in (\ref{F7_general}) are specified by their tangent vector field $\,\boldsymbol{v} = (\boldsymbol{v}_{\mu},\boldsymbol{v}_{\tilde{\alpha}}) = (g \sqrt{\rho^2+1} \, \partial_{\rho}\hat{\alpha} \,,\, \partial_{\tilde{\alpha}}\hat{\alpha})\,$\footnote{Note that $\,\boldsymbol{v}_{\mu}=\partial_{\mu}\hat{\alpha}=g \sqrt{\rho^2+1} \, \partial_{\rho}\hat{\alpha}\,$ as a consequence of the change of radial coordinate in (\ref{new_coordinate_Janus}).}. For the Janus, all the curves $\,\Gamma\,$ start at $\,\tilde{\alpha}=0\,$ and end at $\,\tilde{\alpha}=\frac{\pi}{2}\,$ pointing in the $\,\tilde{\alpha}\,$ direction on $\,\textrm{S}^{7}\,$ -- see Figure~\ref{fig:vec_field} for an illustration of such curves in various examples --. Since the $\,N_{2}\,$ charge in (\ref{N2_Janus}) is independent of $\,\Gamma\,$ and also of the Janus parameters $\,(\alpha,\beta)$, it matches that of the $\textrm{AdS}_{4} \times \textrm{S}^7\,$ background controlling the asymptotic behaviour of the (regular) Janus solutions at $\,\rho \rightarrow \pm \infty\,$.\n\n\nLastly, it is also interesting to compute the volume of the internal manifold $\,\textrm{vol}_7\,$ along the Janus flow as a function of the radial coordinate $\,\rho\,$ and the Janus parameters $\,(\alpha,\beta)\,$. The result is a lengthy and not very illuminating expression, which we have evaluated and plotted in Figure~\ref{Fig:Janus_S7} for various choices of the Janus parameters. The behaviour is akin to that of a wormhole: the $\,\textrm{S}^7\,$ is a non-contractible seven-manifold whose volume does not vanish anywhere in the flow along the radial direction $\,\rho\,$. Moreover, for a given value of $\,\alpha\,$, there is a range of the parameter $\,\beta\,$ for which the eleven-dimensional Janus features two throats (see right plot in Figure~\ref{Fig:Janus_S7}).\n\n\n\n\n\begin{figure}[t]\n\begin{center}\n\includegraphics[width=0.45\textwidth]{Plots\/Shell_Janus.pdf} \n\put(-35,30){$\textrm{Re}\tilde{z}$}\n\put(-150,10){$\textrm{Im}\tilde{z}$}\n\hspace{8mm}\n\includegraphics[width=0.42\textwidth]{Plots\/Throats_Janus.pdf} \n\put(0,-5){$\rho$}\n\put(-100,135){$\frac{3 \, g^7}{ \, 2^{7\/2} \pi^{4}} \, \textrm{vol}_7$}\n\end{center}\n\caption{Left: Volume of the internal seven-sphere as a function of the complex scalar $\,\tilde{z}\,$ (orange dome). Examples of regular Janus flows (loops) are superimposed. Right: Volume of the internal seven-sphere as a function of the radial coordinate $\,\rho\,$ for the regular Janus solutions. The parameters of the curves are: $\,\alpha=1\,$ and $\,\beta=-\frac{\pi}{2}\,$ (blue line), $\,\beta=\pi\,$ (black line) and $\,\beta=\frac{17}{16}\pi\,$ (green line). 
Note the presence of two minima (throats) in the black and green lines.}\n\\label{Fig:Janus_S7}\n\\end{figure}\n\n\n\n\n\n\\subsection{\\texorpdfstring{$\\text{SU}(3) \\times\\text{U}(1)^2\\,$}{SU3 x U(1)2} symmetric Hades and ridge flows}\n\nSetting $\\,\\alpha \\equiv \\alpha_{1}=\\alpha_{2}=\\alpha_{3}\\,$ and $\\,\\beta \\equiv \\beta_{1}=\\beta_{2}=\\beta_{3}\\,$ enhances the symmetry of the general Hades solution in (\\ref{Hades_U1^4_rho_1})-(\\ref{Hades_U1^4_rho_2}) from $\\,\\textrm{U}(1)^4\\,$ to $\\,\\text{SU}(3) \\times\\text{U}(1)^2\\,$. Setting $\\,\\alpha \\neq 0\\,$ renders the running of the scalar field (\\ref{Hades_U1^4_rho_2}) along the flow intrinsically complex, as it happened for the Janus case. This again implies that all the $\\,h\\,$ functions (and also three- and one-forms) entering $\\,\\hat{F}_{(4)}^{\\textrm{st}}\\,$ in (\\ref{11D_F4_st}) and $\\,\\hat{F}_{(4)}^{\\textrm{tr}}\\,$ in (\\ref{11D_F4_tr}) are generically activated.\n\n\n\n\n\n\n\n\nThe decomposition of the seven-form flux $\\,\\hat{F}_{(7)} \\,$ in (\\ref{F7_general}) is still at work for the Hades solutions. The computation of the M$2$-brane charge gives\n\\begin{equation}\n\\label{N2_Hades}\nN_2 = \\frac{1}{(2\\pi)^6}\\int_{\\Gamma \\times \\textrm{M}_{6}} \\hat{F}_{(7)} = \\frac{1}{32 \\pi^2}\\int_{\\partial\\Gamma} \\hat{\\alpha} = \\frac{1}{4\\pi^2 g^6} \\ ,\n\\end{equation}\nso that it matches the one of the $\\textrm{AdS}_{4} \\times \\textrm{S}^7\\,$ background controlling the asymptotic behaviour of the Hades solutions at $\\,\\rho \\rightarrow \\infty\\,$. Some examples of Hades flows on the $\\,\\tilde{z}\\,$ complex plane are displayed in Figure~\\ref{fig:Shell_Hades} and superimposed on the volume of the internal seven-sphere.\n\n\n\n\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.50\\textwidth]{Plots\/Shell_Hades.pdf} \n\\put(-80,20){$\\textrm{Re}\\tilde{z}$}\n\\put(-210,60){$\\textrm{Im}\\tilde{z}$}\n\\end{center}\n\\caption{Volume of the internal seven-sphere (orange dome) as a function of the complex scalar $\\,\\tilde{z}\\,$. Examples of singular Hades flows are superimposed with $\\,(\\alpha,\\beta)=(0,0)\\,$ (green straight line), $\\,{(\\alpha,\\beta)=(0,\\frac{\\pi}{2})}\\,$ (blue straight line), $\\,{(\\alpha,\\beta)=(0,-\\frac{\\pi}{2})}\\,$ (red straight line), $\\,{(\\alpha,\\lambda)=(2,\\frac{\\pi}{2}})\\,$ (blue curved line) and $\\,{(\\alpha,\\lambda)=(2,-\\frac{\\pi}{2}})\\,$ (red curved line).}\n\\label{fig:Shell_Hades}\n\\end{figure}\n\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[width=0.40\\textwidth]{Plots\/1-form_janus_beta_0.pdf} \n\\put(-190,88){$\\tilde{\\alpha}$} \n\\put(-88,-10){$\\rho$}\n\\hspace{12mm}\n\\includegraphics[width=0.40\\textwidth]{Plots\/1-form_janus_beta_pi.pdf} \n\\put(-190,88){$\\tilde{\\alpha}$} \n\\put(-88,-10){$\\rho$}\n\\\\[5mm]\n\\includegraphics[width=0.40\\textwidth]{Plots\/1-form_ridge_beta_minus_pihalf.pdf} \n\\put(-190,88){$\\tilde{\\alpha}$} \n\\put(-88,-10){$\\rho$}\n\\hspace{12mm}\n\\includegraphics[width=0.40\\textwidth]{Plots\/1-form_ridge_beta_pihalf.pdf} \n\\put(-190,88){$\\tilde{\\alpha}$} \n\\put(-88,-10){$\\rho$}\n\\end{center}\n\\caption{Plots of the vector field $\\,\\boldsymbol{v} = (\\,g \\sqrt{\\rho^2 \\pm 1} \\, \\partial_{\\rho}\\hat{\\alpha} \\,,\\, \\partial_{\\tilde{\\alpha}}\\hat{\\alpha}\\,)\\,$ on the strip spanned by $\\,(\\rho,\\tilde{\\alpha})\\,$. 
The $\\,+\\,$ sign must be chosen for the Janus solutions whereas the $\\,-\\,$ sign corresponds to the Hades solutions as a consequence of the change of radial coordinate $\\,d\\rho = g \\, \\sqrt{\\rho^2 \\pm 1} \\, d\\mu \\,$. Top-Left: Janus flow with $\\,\\alpha=1\\,$ and $\\,\\beta=0\\,$. Top-Right: Janus flow with $\\,\\alpha=1\\,$ and $\\,\\beta=\\pi\\,$. Bottom-Left: Ridge flow with $\\,\\beta = -\\frac{\\pi}{2}\\,$. Bottom-Right: Ridge flow with $\\,\\beta = \\frac{\\pi}{2}\\,$.}\n\\label{fig:vec_field}\n\\end{figure}\n\n\n\\subsubsection*{Ridge flows and singularities}\n\n\nIn order to investigate the possible eleven-dimensional resolution of the four-dimensional Hades singularity at $\\,\\rho=1\\,$, we will look at the metric (\\ref{11D_metric}) and analyse the relevant function\n\\begin{equation}\n\\label{Omega_func}\n\\Omega \\equiv f_{1}^{\\frac{1}{2}} \\, e^{A} \\ , \n\\end{equation}\nlying in front of the $\\,\\textrm{AdS}_{3}\\,$ factor of the eleven-dimensional metric describing the world-volume of the (curved) M2-branes in the UV. For simplicity, we will take the limiting case of $\\,\\alpha=0\\,$ and focus on the ridge flows with\n\\begin{equation}\n\\label{Ridge_SU3xU1xU1_rho}\nds_{4}^{2}=\\frac{1}{{g}^{2}} \\left( \\frac{d\\rho^{2}}{\\rho^{2}-1} + \n\\frac{\\left( \\rho^{2}-1\\right) }{2} \\, d\\Sigma^{2} \\right)\n\\hspace{8mm} \\textrm{ and } \\hspace{8mm}\n\\tilde{z}(\\rho) = \\rho^{-1} \\, e^{i \\left( \\beta - \\frac{\\pi}{2} \\right)} \\ .\n\\end{equation}\nRemarkably, for these flows, the four-dimensional singularity at $\\,\\rho = 1\\,$ gets resolved when uplifting the solutions to eleven dimensions provided $\\,\\beta \\neq \\pm \\frac{\\pi}{2}\\,$. \n\n\n\n\nThe explicit computation of the $\\,\\Omega\\,$ factor in (\\ref{Omega_func}) for the ridge flows yields\n\\begin{equation}\n\\label{Omega_func_ridge}\n\\Omega = (2 g)^{-1} \\left( 1 + \\rho^2 + 2 \\, \\rho \\, \\sin\\beta \\right)^{\\frac{1}{6}} \\left( 1 + \\rho^2 - 2 \\, \\rho \\, \\sin\\beta \\, \\cos(2 \\tilde{\\alpha})\\right)^{\\frac{1}{3}} \\ .\n\\end{equation}\nEvaluating (\\ref{Omega_func_ridge}) at $\\,\\rho=1\\,$ where the four-dimensional singularity is located, one concludes that $\\,\\Omega\\,$ vanishes at $\\,(\\beta,\\tilde{\\alpha})=(\\frac{\\pi}{2},0)\\,$ as well as at $\\,(\\beta,\\tilde{\\alpha})=(-\\frac{\\pi}{2},\\tilde{\\alpha})\\,$ $\\,\\forall \\tilde{\\alpha}\\,$. In other words, the pathology at $\\,\\rho=1\\,$ persists for $\\,\\beta=\\pm \\frac{\\pi}{2}\\,$ and it either localises at $\\,\\tilde{\\alpha}=0\\,$ or gets delocalised along the interval $\\,\\tilde{\\alpha} \\in [0, \\frac{\\pi}{2}]\\,$.\\footnote{A similar class of conventional (flat-sliced) RG-flows with $\\,{\\text{SU}(3) \\times\\text{U}(1)^2}\\,$ symmetry was constructed in \\cite{Pilch:2015vha}. For the sake of comparison, there is a redefinition of the relevant parameter given by $\\,\\zeta_{\\tiny{\\cite{Pilch:2015vha}}}=\\beta-\\frac{\\pi}{2}\\,$. The singularity of the ridge flows we study here would be similar to that of a (yet to be constructed) non-supersymmetric generalisation of the flows in \\cite{Pilch:2015vha} with $\\,\\cos(3\\zeta)=+1\\,$.} We will look at some limiting examples of ridge flows in order to illustrate their main physical implications.\n\n\n\\subsubsection*{$\\circ\\,$ Singular $\\,\\boldsymbol{\\beta=\\pm\\frac{\\pi}{2}\\,}$ ridge flows:}\n\n\nThe scalar in (\\ref{Ridge_SU3xU1xU1_rho}) becomes real when setting $\\,\\beta=\\frac{\\pi}{2}\\,$. 
The eleven-dimensional geometry gets simplified to\n\\begin{equation}\n\\label{11D_metric_ridge_beta_pi\/2}\n\\begin{array}{lll}\nds_{11}^2 &=& \\dfrac{f_{-}^{\\frac{2}{3}}}{g^2} \\dfrac{(\\rho+1)^\\frac{2}{3}}{4} \\left[ \\, ds_{\\textrm{AdS}_{3}}^2 +\n\\dfrac{2 \\, d\\rho^2}{(\\rho^2-1)^2} + 8 \\dfrac{d\\tilde{\\alpha}^2}{(\\rho+1)^{2}} \\right. \\\\[6mm]\n& + & \\left. \\dfrac{8}{f_{-}} \\cos^2\\tilde{\\alpha} \\, \\left( ds^2_{\\mathbb{CP}^2} + \\dfrac{(\\rho-1)^2}{f_{+}} \\, \\sin^{2} \\tilde{\\alpha} \\, (d\\tau_{-} + \\sigma)^{2} \\right) \\right. \\\\[6mm]\n& + & \\left. \n\\dfrac{8}{f_{-}} \\, \\left( \\dfrac{f_{+}^{\\frac{1}{2}}}{\\rho+1} d\\psi_{-} + \\, \\dfrac{\\rho+1}{f_{+}^{\\frac{1}{2}}} \\, \\cos^{2} \\tilde{\\alpha} \\, (d\\tau_{-} + \\sigma) \\right)^{2} \\, \\right] \\ ,\n\\end{array}\n\\end{equation}\nin terms of the functions\n\\begin{equation}\n\\label{f+-_functions}\nf_{\\pm}= (\\rho \\pm 1)^2 \\mp 4 \\, \\rho \\, \\sin^2 \\tilde{\\alpha} \\ .\n\\end{equation}\nMoreover, since the scalar in (\\ref{Ridge_SU3xU1xU1_rho}) becomes real, one has that\n\\begin{equation}\n\\label{F4tr_ridge_beta_pi\/2}\n\\hat{F}_{(4)}^{\\textrm{tr}}=0 \\ , \n\\end{equation}\nin (\\ref{11D_F4_tr}). The non-vanishing contribution to the three-form gauge potential in this case is given by \n\\begin{equation}\n\\label{Ast_ridge_beta_pi\/2}\n\\hat{A}_{(3)}^{\\textrm{st}} = \\frac{\\rho \\, (3+\\rho+\\rho^2) - 2 \\, (\\rho^2-1) \\cos(2\\tilde{\\alpha})}{8 \\, g^3} \\, \\textrm{vol}_{\\textrm{AdS}_{3}} \\ ,\n\\end{equation}\nproducing a space-time four-form flux in (\\ref{11D_F4_st}) of the form\n\\begin{equation}\n\\label{F4st_ridge_beta_pi\/2}\n\\hat{F}_{(4)}^{\\textrm{st}}=d\\hat{A}_{(3)}^{\\textrm{st}} = \\frac{1}{2 g^3} \\left( \n\\frac{3 + \\rho \\, (2 + 3 \\, \\rho) - 4 \\, \\rho \\, \\cos(2\\tilde{\\alpha})}{4} \\, d\\rho \n+ (\\rho^2-1) \\, \\sin(2\\tilde{\\alpha}) \\, d\\tilde{\\alpha} \\right) \\wedge \\textrm{vol}_{\\textrm{AdS}_{3}}\\ . \n\\end{equation}\nTwo facts suggest an interpretation of this flow as a Coulomb branch type flow very much along the line of \\cite{Cvetic:1999xx}. Firstly, this singular ridge flow lies in the purely proper scalar sector of maximal supergravity as a consequence of $\\,\\beta=\\frac{\\pi}{2}\\,$. Namely, it is triggered from the UV solely by the VEV of the proper scalar dual to the boson bilinears. Secondly, the internal flux in (\\ref{F4tr_ridge_beta_pi\/2}) vanishes all along the flow so there are no magnetic M5-branes sourcing $\\,\\hat{F}_{(7)}\\,$.\n\n\nLet us now investigate the four-dimensional singularity at $\\,\\rho=1\\,$ from a higher-dimensional perspective. To study the eleven-dimensional geometry around $\\,\\rho=1\\,$ it is convenient to look at the Ricci scalar which, in this case, takes the form\n\\begin{equation}\n\\label{Ricci_beta_pi\/2}\n\\hat{R}(\\rho) = g^2 \\, \\frac{ (\\rho -1)^2}{3 \\, (\\rho +1)^{\\frac{2}{3}} f_{-}^{\\frac{8}{3}}} \\,\\, r(\\rho,\\tilde{\\alpha}) \\ ,\n\\end{equation}\nin terms of the negative-definite function\n\\begin{equation}\n\\begin{array}{lll}\nr(\\rho,\\tilde{\\alpha}) &=& \n-(9 \\rho^4 + 12 \\rho^3 + 32 \\rho^2 + 16 \\rho + 11)\n+ 8 \\, \\rho \\, (3 \\rho^2 + 2 \\rho + 3) \\cos (2 \\tilde{\\alpha}) \\\\[2mm]\n& & -2 \\, (\\rho -1) (3 \\rho +1) \\, \\cos(4 \\tilde{\\alpha}) \\ .\n\\end{array}\n\\end{equation}\nThe Ricci scalar in (\\ref{Ricci_beta_pi\/2}) becomes singular at $\\,(\\rho,\\tilde{\\alpha})=(1,0)\\,$. 
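\n\nMore concretely, approaching this point along the locus $\,\tilde{\alpha}=0\,$, where $\,f_{-}=(\rho-1)^{2}\,$ and $\,r(1,0)=-16\,$, the Ricci scalar (\ref{Ricci_beta_pi\/2}) blows up as\n\begin{equation}\n\hat{R} \, \approx \, -\frac{g^{2}}{3} \left( \frac{2}{\rho-1} \right)^{\frac{10}{3}}\n\hspace{8mm} \textrm{ when } \hspace{8mm}\n\rho \rightarrow 1 \ .\n\end{equation}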
On the other hand, the evaluation of the four-form flux in (\\ref{F4st_ridge_beta_pi\/2}) around the singular value $\\,\\rho=1\\,$ is more subtle. The change of radial coordinate in (\\ref{new_coordinate_Hades}) becomes ill-defined and one must resort to the original coordinate $\\,\\mu\\,$ in (\\ref{metric_ansatz}) using $\\,d\\rho = g \\, \\sqrt{\\rho^2-1} \\, d\\mu \\,$. Then, it becomes clear from (\\ref{F4st_ridge_beta_pi\/2}) that\n\\begin{equation}\n\\left. \\hat{F}_{(4)}^{\\textrm{st}} \\right|_{\\rho = 1} = 0 \\ .\n\\end{equation}\n\n\n\n\nIt is also instructive to look at the flux $\\,\\hat{F}_{(7)} = d\\hat{\\alpha} \\wedge h^{(6)} + \\ldots\\,$ by analysing the expression of the adapted angular variable $\\,\\hat{\\alpha}\\,$. In this case it takes the form\n\\begin{equation}\n\\hat{\\alpha}(\\rho,\\tilde{\\alpha}) = - 8 \\, g^{-6} \\, f_{-}^{-1} (\\rho-1)^2 \\cos^6\\tilde{\\alpha} \\ , \n\\end{equation}\nwith $\\,f_{-}\\,$ given in (\\ref{f+-_functions}). A plot of the curves $\\,\\Gamma\\,$ is presented in Figure~\\ref{fig:vec_field} (bottom-right plot). Note that not all of them start at $\\,\\tilde{\\alpha}=0\\,$ and end at $\\,\\tilde{\\alpha}=\\frac{\\pi}{2}\\,$. There are curves that start at $\\,\\tilde{\\alpha}=0\\,$ but end at some value $\\,0 < \\tilde{\\alpha} < \\frac{\\pi}{2}\\,$ when reaching the singularity at $\\,\\rho = 1\\,$. These curves display a strong singularity bending: the one-form $\\,d\\hat{\\alpha}\\,$ interpolates between being aligned with the $\\,\\textrm{S}^{7}\\,$ direction $\\,d\\tilde{\\alpha}\\,$ at $\\,\\rho \\rightarrow \\infty\\,$ and being aligned with the non-compact direction $\\,d\\rho\\,$ when reaching the singularity at $\\,\\rho =1\\,$.\n\nFinally, recalling the result in Section~\\ref{sec:ridge_4D}, setting $\\,\\beta=-\\frac{\\pi}{2}\\,$ amounts to a reflection of the radial coordinate $\\,\\rho \\rightarrow -\\rho\\,$ (which implies an exchange $\\,f_{+} \\leftrightarrow f_{-}\\,$) while keeping the domain $\\,\\rho \\in [1,\\infty)\\,$. This reflection drastically modifies the eleven-dimensional geometry in (\\ref{11D_metric_ridge_beta_pi\/2}) and (\\ref{f+-_functions}) which becomes singular at $\\,\\rho=1\\,$ for any value of the angular coordinate within the interval $\\,\\tilde{\\alpha} \\in [0,\\frac{\\pi}{2}]\\,$. This can also be viewed in the eleven-dimensional Ricci scalar which reads\n\\begin{equation}\n\\label{Ricci_beta_-pi\/2}\n\\hat{R}(\\rho) = g^2 \\, \\frac{ (\\rho+1)^2}{3 \\, (\\rho -1)^{\\frac{2}{3}} f_{+}^{\\frac{8}{3}}} \\,\\, r(-\\rho,\\tilde{\\alpha}) \\ .\n\\end{equation}\nSince there is no special value of $\\,\\tilde{\\alpha}\\,$ as far as singularities are concerned, the $\\,\\Gamma\\,$ curves constructed from the adapted angular variable \n\\begin{equation}\n\\hat{\\alpha}(\\rho,\\tilde{\\alpha}) = - 8 \\, g^{-6} \\, f_{+}^{-1} (\\rho+1)^2 \\cos^6\\tilde{\\alpha} \\ , \n\\end{equation}\ndo not display any bending when approaching $\\,\\rho=1\\,$. These curves are presented in Figure~\\ref{fig:vec_field} (bottom-left plot). 
Lastly, the three-form gauge potential at $\\,\\beta=-\\frac{\\pi}{2}\\,$ is given by\n\\begin{equation}\n\\label{Ast_ridge_beta_minus_pi\/2}\n\\hat{A}_{(3)}^{\\textrm{st}} = \\frac{\\rho \\, (3-\\rho+\\rho^2) + 2 \\, (\\rho^2-1) \\cos(2\\tilde{\\alpha})}{8 \\, g^3} \\, \\textrm{vol}_{\\textrm{AdS}_{3}} \\ .\n\\end{equation}\n\n\n\n \n \n \n\n\\subsubsection*{$\\circ\\,$ Regular $\\,\\boldsymbol{\\beta=0,\\pi\\,}$ ridge flows:}\n\nThe scalar in (\\ref{Ridge_SU3xU1xU1_rho}) becomes purely imaginary when setting $\\,\\beta=0\\,$. As a result, this ridge flow is triggered from the UV solely by the source mode of the pseudo-scalar dual to the fermion bilinears. \n\n\n\nThe eleven-dimensional metric reduces in this case to\n\\begin{equation}\n\\label{11D_metric_ridge_beta_0}\n\\begin{array}{lll}\nds_{11}^2 &=& \\dfrac{\\rho^2+1}{4 \\, g^2} \\left[ \\, ds_{\\textrm{AdS}_{3}}^2 +\n\\dfrac{2 \\, d\\rho^2}{(\\rho^2-1)^2} + 8 \\dfrac{d\\tilde{\\alpha}^2}{\\rho^2+1} \\right. \\\\[6mm]\n& + & \\left. 8 \\, \\cos^2\\tilde{\\alpha} \\, \\left( \\dfrac{1}{\\rho^2+1} ds^2_{\\mathbb{CP}^2} + \\dfrac{1}{j_{2}} \\dfrac{(\\rho^2-1)^2}{\\rho^2+1} \\, \\sin^{2} \\tilde{\\alpha} \\, (d\\tau_{-} + \\sigma)^{2} \\right) \\right. \\\\[6mm]\n& + & \\left. \n\\dfrac{8 \\, j_{1}}{(\\rho^2+1)^3} \\, \\left( \\sqrt{\\dfrac{j_{2}}{j_{1}}} \\, d\\psi_{-} + \\sqrt{\\dfrac{j_{1}}{j_{2}}} \\, \\cos^{2} \\tilde{\\alpha} \\, (d\\tau_{-} + \\sigma) \\right)^{2} \\, \\right] \\ ,\n\\end{array}\n\\end{equation}\nin terms of the two functions\n\\begin{equation}\n\\label{j1_j2_functions}\nj_{1} = (\\rho^2 + 1)^2 - 4 \\, \\rho^2 \\, \\cos(2 \\tilde{\\alpha}) \n\\hspace{8mm} , \\hspace{8mm}\nj_{2} = (\\rho^2 + 1)^2 - 4 \\, \\rho^2 \\, \\cos^2(2 \\tilde{\\alpha}) \\ .\n\\end{equation}\nThe four-form flux in (\\ref{11D_F4}) comes with both space-time and transverse contributions. The former is given by\n\\begin{equation}\n\\label{Ast_ridge_beta_0}\n\\hat{F}_{(4)}^{\\textrm{st}}=d\\hat{A}_{(3)}^{\\textrm{st}}\n\\hspace{10mm} \\textrm{ with } \\hspace{10mm}\n\\hat{A}_{(3)}^{\\textrm{st}} = \\frac{\\rho \\, (3+\\rho^2)}{8 \\, g^3} \\, \\textrm{vol}_{\\textrm{AdS}_{3}} \\ ,\n\\end{equation}\nwhereas the latter reads\n\\begin{equation}\n\\hat{F}_{(4)}^{\\textrm{tr}}=d\\hat{A}_{(3)}^{\\textrm{tr}} \\ ,\n\\end{equation}\nwith\n\\begin{equation}\n\\label{Atr_ridge_beta_0}\n\\begin{array}{lll}\n\\hat{A}_{(3)}^{\\textrm{tr}} &=& - \\dfrac{4 \\sqrt{2}}{g^{3}} \\dfrac{\\rho}{\\rho^2 +1} \\Big[ \\frac{1}{2} \\sin(2\\tilde{\\alpha}) \\, d\\tilde{\\alpha} \\wedge (d\\tau_{-} + \\sigma) \\wedge d\\psi_{-} \\\\[2mm]\n&& \\qquad\\qquad\\qquad + \\cos^4\\tilde{\\alpha} \\, \\boldsymbol{J} \\wedge (d\\tau_{-} + \\sigma) + \\cos^2\\tilde{\\alpha} \\, \\cos(2\\tilde{\\alpha}) \\, \\boldsymbol{J} \\wedge d\\psi_{-} \\Big] \\ .\n\\end{array}\n\\end{equation}\nThis signals the presence of both electric M2-branes and magnetic M5-branes at a generic point along the flow.\n\n\n\n\nIn order to investigate the four-dimensional singularity at $\\,\\rho=1\\,$ from a higher-dimensional perspective we will look again at the eleven-dimensional Ricci scalar. It reads\n\\begin{equation}\n\\label{Ricci_scalar_Hades_beta=0}\n\\hat{R}(\\rho) = g^2 \\, \\left(1 + \\rho^2\\right)^{-3} \\, \\left(1 + 3 \\, \\rho^2\\right) \\, \\left[ 1 + \\rho^2 \\left( 8 - \\rho^2 \\right) \\right] \\ ,\n\\end{equation}\nand becomes this time independent of the angular variable $\\,\\tilde{\\alpha}\\,$. 
The Ricci scalar in (\\ref{Ricci_scalar_Hades_beta=0}) features no singularity within the domain $\\,\\rho \\in [1 , \\infty )\\,$. It has a boundary value $\\,\\hat{R}(\\infty) = -3 \\, g^2\\,$ and changes smoothly until reaching the finite value $\\,\\hat{R}(1)=4 \\, g^2\\,$, thus making the eleven-dimensional solution regular. The space-time (\\ref{Ast_ridge_beta_0}) and transverse (\\ref{Atr_ridge_beta_0}) components of the three-form gauge potential are both non-zero when approaching the IR region at $\\,\\rho =1\\,$. However, recalling again the change of radial coordinate $\\,d\\rho = g \\, \\sqrt{\\rho^2-1} \\, d\\mu \\,$, it follows from (\\ref{Ast_ridge_beta_0}) that\n\\begin{equation}\n\\left. \\hat{F}_{(4)}^{\\textrm{st}} \\right|_{\\rho = 1} = 0 \\ .\n\\end{equation}\nTherefore, only magnetic M5-branes source the geometry in the deep IR. The same behaviour was observed for the similar, but flat-sliced, $\\,\\textrm{SU}(3) \\times \\textrm{U}(1)^2\\,$ invariant flows constructed in \\cite{Pilch:2015vha}. Such flows were argued to describe how M2-branes in the UV totally dissolve along the flow into magnetic M5-branes, leaving no M2-branes at the core of the regular flows.\\footnote{The same type of behaviour was also observed in the flat-sliced dielectric flows with $\\,\\textrm{SO}(4) \\times \\textrm{SO}(4)\\,$ symmetry of \\cite{Pope:2003jp}, although the M2-branes do not totally polarise into M5-branes at the core of these flows.} Moving back to the original radial coordinate\n\\begin{equation}\n\\rho = \\cosh(g \\, \\mu) \n\\hspace{15mm} \\textrm{ with } \\hspace{15mm}\\mu \\in [0,\\infty) \\ ,\n\\end{equation}\nand expanding around $\\,\\mu = 0\\,$ one arrives at\n\\begin{equation}\n\\label{11D_metric_ridge_beta_0_IR}\n\\begin{array}{lll}\n\\left. ds_{11}^2 \\, \\right|_{\\textrm{IR}} & \\approx & \\dfrac{1}{4 \\, g^2} \\left[ \\, \\left(\\dfrac{4}{(g\\mu)^2} + \\dfrac{2}{3} + \\dfrac{4}{15}(g\\mu)^2 + \\ldots \\right) d(g\\mu)^2 + \\left( 2 + (g \\mu)^2 + \\ldots \\right)\\, ds_{\\textrm{AdS}_{3}}^2 \\right. \\\\[6mm]\n& + & \\left. \n\\left( \\dfrac{(g\\mu)^4}{2} - \\dfrac{(g\\mu)^6}{6} + \\ldots \\right) \\, \\Big( (d\\tau_{-} + \\sigma) + 2 \\cos(2\\tilde{\\alpha}) \\, \\left( d\\psi_{-} + \\frac{1}{2} (d\\tau_{-} + \\sigma) \\right) \\Big)^{2} \\, \\right. \\\\[6mm]\n& + &\n\\left. 8 \\, \\left( d\\tilde{\\alpha}^2 + \\cos^2\\tilde{\\alpha} \\, ds^2_{\\mathbb{CP}^2} + \\sin^2(2 \\tilde{\\alpha}) \\, \\left(d\\psi_{-} + \\frac{1}{2}(d\\tau_{-} + \\sigma) \\right)^2 \\right) \\right] \\ .\n\\end{array}\n\\end{equation}\nNote that the $\\,\\mu$-dependent part of the metric only involves the first two lines in (\\ref{11D_metric_ridge_beta_0_IR}). This $\\,\\mu$-dependent part describes a five-dimensional section of the eleven-dimensional geometry that involves the original four coordinates of the ridge flow and an additional $\\,\\textrm{S}^1\\,$ that is non-trivially fibered over a six-dimensional manifold. The latter is described by the last line in (\\ref{11D_metric_ridge_beta_0_IR}). Ignoring this fibration, the five-dimensional section of the geometry verifies $\\,R^{\\textrm{(\\tiny{5D})}}_{\\mu\\nu}= \\frac{1}{5} \\, R^{\\textrm{(\\tiny{5D})}} \\, g^{\\textrm{(\\tiny{5D})}}_{\\mu\\nu}\\,$ with $\\,R^{\\textrm{(\\tiny{5D})}} = -20 \\, g^2 < 0\\,$ at leading order in the radial coordinate $\\,\\mu\\,$. 
Therefore, up to \nthe non-trivial fibration over the six-dimensional manifold, this ridge flow develops a five-dimensional Einstein geometry in the deep IR.\n\n\n\nThe regularity of the ridge flow at $\\,\\beta=0\\,$ is also reflected in the flux $\\,\\hat{F}_{(7)} = d\\hat{\\alpha} \\wedge h^{(6)} + \\ldots\\,$. The adapted angular variable $\\,\\hat{\\alpha}\\,$ simplifies in this case to\n\\begin{equation}\n\\hat{\\alpha}(\\tilde{\\alpha}) = - 8 \\, g^{-6} \\, \\cos^6\\tilde{\\alpha} \\ , \n\\end{equation}\nso it is independent of $\\,\\rho\\,$. Therefore, all the $\\,\\Gamma\\,$ curves start at $\\,\\tilde{\\alpha}=0\\,$, end at $\\,\\tilde{\\alpha}=\\frac{\\pi}{2}\\,$ and flow parallel to the $\\,\\textrm{S}^7\\,$ angular direction $\\,\\tilde{\\alpha}\\,$ without displaying any bending or pathological behaviour.\n\n\n\n\n\n\n\nFinally, as discussed in Section~\\ref{sec:ridge_4D}, setting $\\,\\beta=\\pi\\,$ amounts to a shift $\\,\\rho \\rightarrow -\\rho\\,$ in the four-dimensional ridge flow solution while keeping the domain $\\,\\rho \\in [1,\\infty)\\,$. This reflection of the radial coordinate leaves the eleven-dimensional metric in (\\ref{11D_metric_ridge_beta_0}) and (\\ref{j1_j2_functions}) invariant. The three-form gauge potential in (\\ref{Ast_ridge_beta_0}) and (\\ref{Atr_ridge_beta_0}) simply flips its sign.\n\n\n\n\n\n\n\n\n\n\\section{Summary and discussion}\n\\label{sec:conclusions}\n\n\nIn this paper we have presented new analytic families of $\\,\\textrm{AdS}_{3} \\times \\mathbb{R}\\,$ Janus and $\\,\\textrm{AdS}_{3} \\times \\mathbb{R}^{+}\\,$ Hades solutions in the $\\,\\mathcal{N}=2\\,$ gauged STU-model in four dimensions \\cite{Cvetic:1999xp}. This supergravity model corresponds to the $\\textrm{U}(1)^{4}$ invariant sector of the maximal SO(8) gauged supergravity that arises upon reduction of eleven-dimensional supergravity on a seven sphere. \n\n\nThe Janus solutions turn out to be surprisingly simple. Using a radial coordinate $\\,\\rho \\in (-\\infty \\, , \\infty)\\,$, the geometry is given by \n\\begin{equation}\n\\label{Janus_metric_conclus}\ng^{2} \\, ds_{4}^{2} = \\frac{d\\rho^{2}}{\\rho^{2}+1} + \n\\frac{ \\rho^{2} + 1 }{ k^2} \\,ds_{\\textrm{AdS}_{3}}^{2} \\ ,\n\\end{equation}\nin terms of the supergravity gauge coupling $\\,g\\,$ and three constant parameters $\\, \\alpha_{i} \\in \\mathbb{R} \\,$. The latter enter (\\ref{Janus_metric_conclus}) through the specific combination\n\\begin{equation}\n\\label{k_factor_Janus_conclus}\nk^2= 1 + \\sum_{i=1}^{3} \\sinh^{2}\\alpha_{i} \\, \\ge \\, 1 \\ .\n\\end{equation}\nThe Janus geometry (\\ref{Janus_metric_conclus}) is supported by $\\rho$-dependent profiles for the three complex scalars in the STU-model. Using the unit-disk parameterisation of the SL(2)\/SO(2) scalar coset, they adopt the form\n\\begin{equation}\n\\label{Janus_scalar_conclus}\n\\tilde{z}_{i}(\\rho) = e^{i \\beta_{i}} \\, \\frac{\\sinh\\alpha_{i}}{\\cosh\\alpha_{i} + i \\, \\rho}\n\\hspace{8mm} \\textrm{ with } \\hspace{8mm} i=1,2,3 \\ ,\n\\end{equation}\nand depend on three additional phases $\\, \\beta_{i} \\in [0,2 \\pi] \\,$. The result is then a six-parameter family $\\,(\\alpha_{i},\\beta_{i})\\,$ of Janus solutions in the STU-model which are everywhere regular for arbitrary choices of the parameters $\\,(\\alpha_{i},\\beta_{i})\\,$. These are generically non-supersymmetric solutions (they solve second-order equations of motion) but there is a supersymmetry enhancement when two $\\,\\alpha_{i}\\,$ parameters are set to zero. 
In this limit the supersymmetric Janus with $\\,\\textrm{SO}(4) \\times \\textrm{SO}(4)\\,$ symmetry of \\cite{Bobev:2013yra} is recovered. The very special choice $\\,\\alpha_{i}=0\\,$ $\\forall i \\,$ sets the three scalars to zero. In this limit the maximally supersymmetric AdS$_{4}$ vacuum of the $\\,\\textrm{SO}(8)\\,$ supergravity is recovered which uplifts to the Freund--Rubin $\\,\\textrm{AdS}_{4} \\times \\textrm{S}^{7}\\,$ vacuum of eleven-dimensional supergravity \\cite{Freund:1980xh}. Note that this vacuum controls the asymptotic behaviour of the Janus solutions at $\\,\\rho \\rightarrow \\pm \\infty\\,$.\\footnote{The Janus solutions in (\\ref{Janus_metric_conclus})-(\\ref{Janus_scalar_conclus}) might resemble the ``boomerang RG flows\" studied in \\cite{Donos:2017ljs} within the STU-model. These are flows in supergravity both starting and ending at the maximally supersymmetric AdS$_{4}$ vacuum of the SO(8) gauged supergravity, thus being also relevant for ABJM theory. However the Ansatz for the scalar fields in \\cite{Donos:2017ljs} explicitly breaks translation invariance in the spatial directions of the dual field theory. This is not the case for the Janus solutions (\\ref{Janus_metric_conclus})-(\\ref{Janus_scalar_conclus}) which have no dependence on the spatial directions of AdS$_3$.} It is also worth emphasising that the Janus solutions in (\\ref{Janus_metric_conclus})-(\\ref{Janus_scalar_conclus}) are everywhere regular and genuinely \\textit{axionic} in nature: $\\,\\textrm{Im}\\tilde{z}_{i}(\\rho) \\neq 0\\,$ for the solution to exist. This fact makes the study of similar solutions in the Euclidean theory (where pseudo-scalars pick up an extra factor of $\\,i\\,$ with respect to proper scalars) interesting in the AdS\/CFT spirit of \\cite{Arkani-Hamed:2007cpn,Bobev:2020pjk}. This could help to understand instanton-like solutions in the context of M-theory, as it has been done for the type IIB non-extremal D-instantons \\cite{Bergshoeff:2004fq,Bergshoeff:2004pg,Bergshoeff:2005zf} (see also \\cite{Hertog:2017owm}), and perhaps to shed new light on axionic wormholes in M-theory. This issue certainly deserves further investigation.\n\n\nThe Hades solutions are closely related to the Janus solutions and turn out to be very simple too. Using this time a radial coordinate $\\,\\rho \\in [1 \\, , \\infty)$, the geometry is given by \n\\begin{equation}\n\\label{Hades_metric_conclus}\ng^{2} \\, ds_{4}^{2} = \\frac{d\\rho^{2}}{\\rho^{2} - 1} + \n\\frac{ \\rho^{2} - 1 }{ k^2} \\,ds_{\\textrm{AdS}_{3}}^{2} \\ ,\n\\end{equation}\nwith\n\\begin{equation}\n\\label{k_factor_Hades_conclus}\nk^2= -1 + \\sum_{i=1}^3 \\cosh^2\\alpha_{i} \\ ,\n\\end{equation}\nand the scalar profiles read\n\\begin{equation}\n\\label{Hades_scalar_conclus}\n\\tilde{z}_{i}(\\rho) = e^{i \\beta_{i}}\\, \\frac{\\cosh\\alpha_{i}}{\\sinh\\alpha_{i} + i \\rho} \n\\hspace{8mm} \\textrm{ with } \\hspace{8mm} i=1,2,3 \\ .\n\\end{equation}\nUnlike the Janus, the Hades solutions are singular at $\\,\\rho=1\\,$ and do not possess a supersymmetric limit upon tuning of the parameters $\\,(\\alpha_{i},\\beta_{i})\\,$. Still the maximally supersymmetric AdS$_{4}$ vacuum controls the asymptotic behaviour of the Hades at $\\,\\rho \\rightarrow \\infty\\,$. 
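\n\nIt is also worth noticing that the four-dimensional singularity is mirrored by the behaviour of the scalars themselves: from (\ref{Hades_scalar_conclus}) one finds\n\begin{equation}\n|\tilde{z}_{i}(1)|^{2} = \frac{\cosh^{2}\alpha_{i}}{\sinh^{2}\alpha_{i}+1} = 1\n\hspace{8mm} \textrm{ with } \hspace{8mm} i=1,2,3 \ ,\n\end{equation}\nso that the three scalars reach the boundary of the unit disk in the SL(2)\/SO(2) parameterisation precisely at $\,\rho=1\,$, independently of the parameters $\,(\alpha_{i},\beta_{i})\,$.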
The special limit $\,\alpha_{i}=0\,$ $\forall i \,$ drastically simplifies the Hades solutions, giving rise to the so-called ridge flows (see Figure~\ref{fig:Hades_ztilde_U1^4}).\n\n\nBeing obtained within the $\textrm{U}(1)^4$ invariant sector of the massless $\,\mathcal{N}=8\,$ supergravity multiplet in four dimensions, the analytic Janus solutions in (\ref{Janus_metric_conclus})-(\ref{Janus_scalar_conclus}) generalise the supersymmetric ones with $\,\text{SO}(4) \times\text{SO}(4)\,$ symmetry constructed in \cite{Bobev:2013yra}. The non-supersymmetric Hades solutions in (\ref{Hades_metric_conclus})-(\ref{Hades_scalar_conclus}) are genuinely new and cannot be continuously connected with the supersymmetric Hades with $\,\text{SO}(4) \times\text{SO}(4)\,$ symmetry of \cite{Bobev:2013yra} upon tuning of $\,\alpha_{i}\,$. In addition, the Janus and Hades solutions presented in this work can be readily uplifted to eleven-dimensional supergravity using the general results for the oxidation of the STU-model worked out in \cite{Cvetic:1999xp,Azizi:2016noi} and the uplift building blocks collected in the Appendix~\ref{app:general_uplift}. Instead of uplifting the general $\textrm{U}(1)^4$ symmetric Janus and Hades solutions, and for the sake of simplicity, we have restricted ourselves to the case \n\begin{equation}\n\alpha_{1}=\alpha_{2}=\alpha_{3}=\alpha\n\hspace{10mm} \textrm{ and } \hspace{10mm}\n\beta_{1}=\beta_{2}=\beta_{3}=\beta\n\end{equation}\nfor which a larger symmetry group $\,\textrm{SU}(3) \times \textrm{U}(1)^2 \subset \textrm{SO}(8)\,$ is preserved by the solutions. The Janus solutions are non-supersymmetric and fully regular, both in four and eleven dimensions, for arbitrary values of the parameters $\,(\alpha,\beta)\,$. The four-dimensional singularity of the Hades may or may not be cured when the solutions are uplifted to eleven dimensions, depending on the choice of parameters $\,(\alpha,\beta)\,$. For example, in the ridge flow limit $\,\alpha=0\,$, the choice $\,\beta=0,\pi\,$ eliminates the singularity whereas, for $\,\beta=\pm\frac{\pi}{2}\,$, the singularity remains either localised or delocalised in the internal space. It would be interesting to understand the ultimate fate of the singularity in the general Hades solution with $\,\textrm{U}(1)^4\,$ symmetry, as well as to investigate the process of taking the ridge flow limit sequentially on the three scalars $\,\tilde{z}_{i}\,$. It would also be interesting to further investigate a possible holographic interpretation of these more general flows as interfaces connecting an $\,\mathcal{N}=8\,$ Chern--Simons matter theory to new (non-)conformal phases.\n\n\n\n\nSome open questions and follow-up directions regarding the Janus and Hades presented in this work are immediately envisaged. The first one is the issue of the stability, both perturbative and non-perturbative, of the general class of non-supersymmetric Janus and Hades with $\textrm{U}(1)^4$ symmetry. These solutions can be viewed as AdS$_{3}$ vacua in M-theory, so it would be interesting to investigate their stability in light of the Weak Gravity and Swampland conjectures \cite{ArkaniHamed:2006dz,Ooguri:2016pdq}. In this respect, and unlike for the Hades, the Janus solutions presented here are continuously connected (in parameter space) to the supersymmetric, and thus stable, Janus solutions with $\,{\textrm{SO}(4) \times \textrm{SO}(4)}\,$ symmetry of \cite{Bobev:2013yra}. 
This could help in improving the stability properties of the generic non-supersymmetric Janus solution at least within some region in the parameter space $\,(\alpha_{i},\beta_{i})\,$. Along this line, it would also be interesting to perform a probe brane analysis as a first step towards assessing the non-perturbative stability of the solutions. \n\n\nThe second issue is to understand the higher-dimensional brane picture of the various flows constructed in this work. For a related class of flat-sliced ridge flows, it was shown in \cite{Pilch:2015vha} (motivated by \cite{Pope:2003jp}) that the M2-branes in the UV totally polarise into a $\,(1+3)$-dimensional intersection of M5-branes in the IR, generating an AdS$_{5}$ metric at the core of the flow that is non-trivially fibered over a six-dimensional manifold.\footnote{The appearance of a new strongly-coupled IR phase on the M2-brane involving an extra dimension was argued in \cite{Pilch:2015vha} to originate from charged solitons that become massless, very much in the spirit of (massless) type IIA string theory and 11D supergravity.} This phenomenon was signalled by the vanishing of the space-time flux component at the IR endpoint of the flow. In our ridge flows with $\,\textrm{SU}(3) \times \textrm{U}(1)^{2}\,$ symmetry, the expression of the space-time four-form flux (\ref{11D_F4_st}) at generic $\,\beta\,$ is given by\n\begin{equation}\n\label{F4st_ridge_beta_general}\n\hat{F}_{(4)}^{\textrm{st}} = \frac{1}{2 g^3} \left( \n\frac{3 \, (1+\rho^2) + 2 \, \rho \, \sin\beta \, (1- 2 \cos(2\tilde{\alpha}))}{4} \, d\rho \n+ \sin\beta \, (\rho^2-1) \, \sin(2\tilde{\alpha}) \, d\tilde{\alpha} \right) \wedge \textrm{vol}_{\textrm{AdS}_{3}} \ ,\n\end{equation}\nso that\n\begin{equation}\n\left. \hat{F}_{(4)}^{\textrm{st}} \right|_{\rho = 1} = 0 \ ,\n\end{equation}\nin the deep IR by virtue of the change of radial coordinate $\,d\rho = g \, \sqrt{\rho^2-1} \, d\mu \,$. This suggests a possible interpretation in terms of non-supersymmetric dielectric flows with M2-branes being polarised into intersecting M5-branes. Also, in the case of $\,\beta=0\,$, we have shown the appearance of a five-dimensional geometry in the IR non-trivially fibered over a six-dimensional manifold along the lines of \cite{Pilch:2015vha}. The generalisation to ridge and Hades flows with $\,\textrm{U}(1)^4\,$ symmetry also deserves further investigation.\n\n\nThe third issue has to do with the holographic interpretation of the general Janus and Hades solutions in terms of non-supersymmetric interfaces in the field theory living at the boundary. We have made manifest the strong correlation between the choice of Janus\/Hades parameters $\,(\alpha_{i},\beta_{i})\,$ (\textit{i.e.} boundary conditions for the complex scalars $\,\tilde{z}_{i}\,$), the possible emergence of supersymmetry, the source\/VEV and bosonic\/fermionic nature of the dual operators that are turned on in the interface and the (dis)appearance of gravitational singularities. But much work remains to be done to better understand and characterise the physics of the non-supersymmetric interfaces we have presented. \n\n\nFinally, it would also be very interesting to construct charged solutions generalising the Janus and Hades constructed in this work, as well as to investigate the effect of including hypermultiplets in the setup, thus going beyond the STU-model. 
We plan to come back to these and related issues in the future.\n\n\n\n\n\section*{Acknowledgements}\n\nWe are grateful to Ant\'on Faedo, Carlos Hoyos and Anayeli Ram\'irez for conversations. The research of AA is supported in part by the Fondecyt Grants 1210635, 1221504 and 1181047 and by the FAPESP\/ANID project 13231-7. The work of AG and MCh-B is partially supported by the AEI through the Spanish grant PGC2018-096894-B-100 and by FICYT through the Asturian grant SV-PA-21-AYUD\/2021\/52177.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdbiq b/data_all_eng_slimpj/shuffled/split2/finalzzdbiq new file mode 100644 index 0000000000000000000000000000000000000000..6378ce8a132facacd1d39cc8a24d688b69067ff5 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdbiq @@ -0,0 +1,5 @@ +{"text":"\section{Introduction}\nThis paper is devoted to a spectral analysis of the biharmonic operator subject to Neumann boundary conditions on a domain which undergoes a singular perturbation.\nThe focus is on planar dumbbell-shaped domains $\Omega_{\epsilon}$, with $\epsilon >0$, described in Figure~\ref{fig: dumbbell}. Namely,\ngiven two bounded smooth domains $\Omega_L, \Omega_R$ in $\numberset{R}^2$ with $\Omega_L\cap \Omega_R=\emptyset $ such that $\partial \Omega_L \supset \{(0,y)\in \numberset{R}^2 : -1< y< 1\}$ and $\partial \Omega_R \supset \{(1,y)\in \numberset{R}^2 : -1< y< 1\}$, we set $\Omega = \Omega_L \cup \Omega_R$ and define the dumbbell domain $\Omega_\epsilon = \Omega \cup R_\epsilon \cup L_\epsilon$, for $\epsilon >0$ small enough. Here\n$ R_\epsilon \cup L_\epsilon$ is a thin channel connecting $\Omega_L$ and $\Omega_R$ defined by\n\begin{equation}\n\label{def: R_eps}\nR_\epsilon = \{(x,y)\in\numberset{R}^2 : x\in(0,1), 0< y< \epsilon g(x) \},\n\end{equation}\n\[ L_\epsilon = (\{0\} \times (0, \epsilon g(0))) \cup (\{1\}\times (0, \epsilon g(1))), \]\nwhere $g \in C^2[0,1]$ is a positive real-valued function. Note that $\Omega_\epsilon$ collapses to the limit set $\Omega_0 = \Omega \cup ([0,1] \times \{0\})$ as $\epsilon \to 0$.\n\nWe consider the eigenvalue problem\n\begin{equation} \label{PDE: main problem_eigenvalues}\n\begin{cases}\n\Delta^2u - \tau \Delta u + u = \lambda \, u, &\textup{in $\Omega_\epsilon$,}\\\n(1-\sigma) \frac{\partial^2 u}{\partial n^2} + \sigma \Delta u = 0, &\textup{on $\partial \Omega_\epsilon$,}\\\n\tau \frac{\partial u}{\partial n} - (1-\sigma)\, \Div_{\partial \Omega_\epsilon}(D^2u \cdot n)_{\partial \Omega_\epsilon} - \frac{\partial(\Delta u)}{\partial n} = 0, &\textup{on $\partial \Omega_\epsilon$,}\n\end{cases}\n\end{equation}\nwhere $\tau \geq 0$, $\sigma \in (-1,1)$ are fixed parameters, and we analyse the behaviour of the eigenvalues and of the corresponding eigenfunctions as $\epsilon \to 0$. 
Here $\\Div_{\\partial \\Omega_\\epsilon}$ is the tangential divergence operator, and $(\\cdot)_{\\partial \\Omega_\\epsilon}$ is the projection on the tangent line to $\\partial \\Omega_\\epsilon$.\n The corresponding Poisson problem reads\n\\begin{equation} \\label{PDE: main problem}\n\\begin{cases}\n\\Delta^2u - \\tau \\Delta u +u= f, &\\textup{in $\\Omega_\\epsilon$},\\\\\n(1-\\sigma) \\frac{\\partial^2 u}{\\partial n^2} + \\sigma \\Delta u = 0, &\\textup{on $\\partial \\Omega_\\epsilon$},\\\\\n\\tau \\frac{\\partial u}{\\partial n} - (1-\\sigma) \\Div_{\\partial \\Omega_\\epsilon}(D^2u \\cdot n)_{\\partial \\Omega_\\epsilon} - \\frac{\\partial(\\Delta u)}{\\partial n} = 0, &\\textup{on $\\partial \\Omega_\\epsilon$},\n\\end{cases}\n\\end{equation}\nwith datum\n$f \\in L^2(\\Omega_\\epsilon)$.\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.7\\textwidth]{omega_eps_crop}\n\\caption{The dumbbell domain $\\Omega_\\epsilon$.}\n\\label{fig: dumbbell}\n\\end{figure}\n\n Since $\\partial\\Omega_{\\epsilon}$ has corner singularities at the junctions $(0,0)$, $(0,\\epsilon g(0))$, $(1,0)$, $(1,\\epsilon g(1))$ and $H^{4}$\nregularity does not hold around those points, we shall always understand problems \\eqref{PDE: main problem_eigenvalues}, \\eqref{PDE: main problem},\n(as well as analogous problems) in a weak (variational) sense, in which case only $H^2$ regularity is required.\n\nNamely, the variational formulation of problem \\eqref{PDE: main problem} is the following: find $u\\in H^2(\\Omega_\\epsilon)$ such that\n\\begin{equation} \\label{PDE: main problem weak}\n\\int_{\\Omega_\\epsilon} (1-\\sigma) D^2u : D^2\\varphi + \\sigma \\Delta u \\Delta \\varphi + \\tau \\nabla u \\cdot \\nabla \\varphi +u\\varphi \\,dx = \\int_{\\Omega_\\epsilon} f \\varphi\\,dx\\, ,\n\\end{equation}\nfor all $\\varphi \\in H^2(\\Omega_\\epsilon)$. The quadratic form associated with the left-hand side of (\\ref{PDE: main problem weak}) - call it $B_{\\Omega_{\\epsilon}}(u, \\varphi )$ - is coercive for all $\\tau \\geq 0$ and $\\sigma \\in (-1,1)$, see e.g. \\cite{ChAppl}, \\cite{Ch}.\nIn particular, by standard spectral theory this quadratic form allows to define a non-negative self-adjoint operator $T=(\\Delta^2 - \\tau \\Delta +I)_{N(\\sigma )}$ in $L^2(\\Omega_{\\epsilon})$ which plays the role of the classical operator $\\Delta^2 - \\tau \\Delta +I$ subject to the boundary conditions above.\nMore precisely, $T$ is uniquely defined by the relation\n$$ B_{\\Omega_{\\epsilon}}(u, \\varphi )=_{L^2(\\Omega_{\\epsilon})} \\, , $$\nfor all\n$ u,\\varphi \\in H^2(\\Omega_{\\epsilon})$. In particular the domain of the square root $T^{1\/2}$ of $T$ is $H^2(\\Omega_{\\epsilon})$ and\n a function $u$ belongs to the domain of $T$ if and only if\n$u\\in H^2(\\Omega_{\\epsilon})$\nand there exists $f\\in L^2(\\Omega_{\\epsilon})$ such that\n$B_{\\Omega_{\\epsilon}}(u, \\varphi )= _{L^2(\\Omega_{\\epsilon})} $ for all $\\varphi \\in H^2(\\Omega_{\\epsilon})$, in which case\n$Tu=f$. 
We refer to \\cite[Chp.~4]{Daviesbook} for a general introduction to the variational approach to the spectral analysis of partial differential operators on non-smooth domains.\n\n The operator $T$ is densely defined and its eigenvalues and eigenfunctions are exactly those of problem (\\ref{PDE: main problem_eigenvalues}).\nMoreover, since the embedding $H^2(\\Omega_{\\epsilon} )\\subset L^2(\\Omega_{\\epsilon} )$ is compact, $(\\Delta^2 - \\tau \\Delta +I)_{N(\\sigma )}$ has compact resolvent, hence the spectrum is discrete and consists of a divergent increasing sequence of positive eigenvalues\n$\\lambda_{n}(\\Omega_{\\epsilon}),\\ n\\in \\numberset{N}$, with finite multiplicity (here each eigenvalue is repeated as many times as its multiplicity).\n\nProblem (\\ref{PDE: main problem_eigenvalues}) arises in linear elasticity in connection with the Kirchhoff-Love model for the study of vibrations and deformations of free plates, in which case $\\sigma $ represents the\n Poisson ratio of the material and $\\tau$ the lateral tension. In this sense, the dumbbell domain $\\Omega_{\\epsilon}$ could represent a plate and $R_{\\epsilon }$\n a part of it which degenerates to the segment $[0,1] \\times \\{0\\}$.\n\nWe note that problem (\\ref{PDE: main problem_eigenvalues}) can be considered as a natural fourth order version of the corresponding eigenvalue problem for the\nNeumann Laplacian $-\\Delta_N$, namely\n\\begin{equation} \\label{PDE: second problem_eigenvalues}\n\\begin{cases}\n-\\Delta u + u = \\lambda \\, u, &\\textup{in $\\Omega_\\epsilon$,}\\\\\n\\frac{\\partial u}{\\partial n} = 0, &\\textup{on $\\partial \\Omega_\\epsilon$,}\\\\\n\\end{cases}\n\\end{equation}\nthe variational formulation of which reads\n\n\\begin{equation} \\label{PDE: second problem weak}\n\\int_{\\Omega_\\epsilon} Du \\cdot D \\varphi + u \\varphi \\,dx =\\lambda \\int_{\\Omega_\\epsilon} u \\varphi\\,dx ,\n\\end{equation}\nwhere the test functions $\\varphi$ and the unknown $u$ are considered in $H^1(\\Omega_{\\epsilon})$.\n\nAlthough the terminology used in the literature to refer to boundary value problems for fourth order operators is sometimes a bit misleading, we emphasise\nthat the formulation of problems \\eqref{PDE: main problem_eigenvalues}, \\eqref{PDE: main problem} is rather classical, see e.g. \\cite[Example~2.15]{necas}\nwhere problem \\eqref{PDE: main problem} with $\\tau =0$ is referred to as the Neumann problem for the biharmonic operator. Moreover, we point out that a number of recent papers devoted to the analysis of \\eqref{PDE: main problem_eigenvalues} have confirmed that problem (\\ref{PDE: main problem_eigenvalues})\ncan be considered as the natural Neumann problem for the biharmonic operator, see \\cite{arlacras}, \\cite{ArrLamb}, \\cite{BuosoProv}, \\cite{BuosoProvCH}, \\cite{bulacompl}, \\cite{ChAppl}, \\cite{Ch}, \\cite{Prov}.\nWe also refer to \\cite{GazzGS} for an extensive discussion on boundary value problems for higher order elliptic operators.\n\nIt is known that the eigenelements of the Neumann Laplacian on a typical dumbbell domain as above have a singular behaviour, see \\cite{ArrPhD}, \\cite{Arr1}, \\cite{Arr2}, \\cite{ACJdE}, \\cite{ACL}, \\cite{AHH}, and the references therein. 
For example, it is known that not all the eigenvalues of $-\\Delta_N$ on $\\Omega_{\\epsilon}$ converge to the eigenvalues of $-\\Delta_N$ in $\\Omega$; indeed, some of the eigenvalues of the dumbbell domain are asymptotically close to the eigenvalues of a boundary value problem defined in the channel $R_\\epsilon$. This allows the appearance in the limit of extra eigenvalues associated with an ordinary differential equation in the segment $(0,1)$, which are generally different from the eigenvalues of $-\\Delta_N$ in $\\Omega$.\nSuch singular behaviour reflects a general characteristic of boundary value problems with Neumann boundary conditions, the stability of which requires rather strong assumptions on the admissible domain perturbations, see e.g., \\cite{ACJdE}, \\cite{ArrLamb}, \\cite{lalaneu}. We refer to \\cite[p.~420]{C-H} for a classical counterexample.\n\n\nThe aim of the present paper is to clarify how Neumann boundary conditions affect the spectral behaviour of the operator $\\Delta^2-\\tau \\Delta $ on dumbbell domains, by extending the validity of some results known for the second order operator $-\\Delta_N$ to the fourth-order operator $(\\Delta^2 - \\tau \\Delta)_{N(\\sigma )}$.\n\nFirst of all, we prove that the eigenvalues of problem (\\ref{PDE: main problem_eigenvalues})\ncan be asymptotically decomposed into two families of eigenvalues as\n\\begin{equation}\\label{dec}\n(\\lambda_n(\\Omega_\\epsilon))_{n\\geq 1} \\approx (\\omega_k)_{k\\geq 1} \\cup (\\theta^\\epsilon_l)_{l\\geq 1}, \\ \\ {\\rm as }\\ \\epsilon \\to 0,\n\\end{equation}\n where $(\\omega_k)_{k\\geq 1}$ are the eigenvalues of problem\n\\begin{equation} \\label{PDE: Omega}\n\\begin{cases}\n\\Delta^2 w - \\tau \\Delta w + w = \\omega_k\\, w, &\\text{in $\\Omega$},\\\\\n(1-\\sigma) \\frac{\\partial^2 w}{\\partial n^2} + \\sigma \\Delta w = 0, &\\textup{on $\\partial \\Omega$},\\\\\n\\tau \\frac{\\partial w}{\\partial n} - (1-\\sigma) \\Div_{\\partial \\Omega}(D^2w \\cdot n)_{\\partial \\Omega} - \\frac{\\partial(\\Delta w)}{\\partial n} = 0, &\\textup{on $\\partial \\Omega$,}\n\\end{cases}\n\\end{equation}\nand $(\\theta^\\epsilon_l)_{l\\geq 1}$ are the eigenvalues of problem\n\\begin{equation} \\label{PDE: R_eps}\n\\begin{cases}\n\\Delta^2 v - \\tau \\Delta v + v = \\theta^\\epsilon_l\\, v, &\\text{in $R_\\epsilon$},\\\\\n(1-\\sigma) \\frac{\\partial^2 v}{\\partial n^2} + \\sigma \\Delta v = 0, &\\textup{on $\\Gamma_\\epsilon$},\\\\\n\\tau \\frac{\\partial v}{\\partial n} - (1-\\sigma) \\Div_{\\Gamma_\\epsilon}(D^2v \\cdot n)_{\\Gamma_\\epsilon} - \\frac{\\partial(\\Delta v)}{\\partial n} = 0, &\\textup{on $\\Gamma_\\epsilon$,}\\\\\nv = 0 = \\frac{\\partial v}{\\partial n}, &\\text{on $L_\\epsilon$.}\n\\end{cases}\n\\end{equation}\nThe decomposition \\eqref{dec} is proved under the assumption that a certain condition on $R_\\epsilon$, called H-Condition, is satisfied. We provide in particular a simple condition on the profile function $g$ which guarantees the validity of the H-Condition.\n\nThus, in order to analyse the behaviour of $\\lambda_n(\\Omega_\\epsilon)$ as $\\epsilon \\to 0$, it suffices to study $\\theta^\\epsilon_l$ as $\\epsilon \\to 0$. To do so, we need to pass to the limit in the variational formulation of problem \\eqref{PDE: R_eps}. Since the domain $R_\\epsilon$ collapses to a segment as $\\epsilon \\to 0$, we use thin domain techniques in order to find the appropriate limiting problem. 
As in the case of the Laplace operator, the limiting problem depends on the shape of the channel $R_\\epsilon$ via the profile function $g(x)$. More precisely, it can be written as follows:\n\\begin{equation}\\label{ODE: limit problem}\n\\begin{cases}\n\\frac{1 - \\sigma^2}{g} (gh'')'' - \\frac{\\tau}{g}(gh')' + h = \\theta h, &\\text{in $(0,1)$,}\\\\\nh(0)=h(1)=0,&\\\\\nh'(0)=h'(1)=0.&\n\\end{cases}\n\\end{equation}\nThis allows us to prove convergence results for the eigenvalues and eigenfunctions of problem \\eqref{PDE: main problem_eigenvalues}. The precise statement can be found in Theorem~\\ref{lastthm}.\nRoughly speaking, Theorem~\\ref{lastthm} establishes the following alternative:\n\\begin{itemize} \\item[(A)] either $\\lambda_n(\\Omega_\\epsilon) \\to \\omega_k$, for some $k\\geq 1$, in which case the corresponding eigenfunctions converge in $\\Omega$ to the eigenfunctions associated with $\\omega_k$;\n\\item[(B)] or $\\lambda_n(\\Omega_\\epsilon) \\to \\theta_l$ as $\\epsilon \\to 0$ for some $l\\in \\numberset{N}$, in which case the corresponding eigenfunctions behave in $R_\\epsilon$ like the eigenfunctions\nassociated with $ \\theta_l$.\n\\end{itemize}\nMoreover, all eigenvalues $\\omega_k$ and $\\theta_l$ are reached in the limit by the eigenvalues $\\lambda_n(\\Omega_{\\epsilon})$.\n\nWe find it remarkable that for $\\sigma\\ne 0$ the limiting equation in (\\ref{ODE: limit problem}) is distorted by the coefficient $1-\\sigma^2\\ne 1$. This phenomenon\nshows that the dumbbell problem for our fourth order problem \\eqref{PDE: main problem_eigenvalues} with $\\sigma \\ne 0$ is significantly different from the second order problem \\eqref{PDE: second problem_eigenvalues} considered in the literature.\n\nWe also note that the Dirichlet problem for the operator $\\Delta^2u - \\tau \\Delta u + u$, namely\n\\begin{equation} \\label{PDE: dir}\n\\begin{cases}\n\\Delta^2u - \\tau \\Delta u + u = \\lambda \\, u, &\\textup{in $\\Omega_\\epsilon$,}\\\\\n u = 0, &\\textup{on $\\partial \\Omega_\\epsilon$,}\\\\\n \\frac{\\partial u}{\\partial n} = 0, &\\textup{on $\\partial \\Omega_\\epsilon$}\n\\end{cases}\n\\end{equation}\nis stable in the sense that its eigenelements converge to those of the operator $\\Delta^2- \\tau \\Delta + I$ in $\\Omega$ as $\\epsilon\\to 0$. In other words, as for the Laplace operator, in the case of Dirichlet boundary conditions, no eigenvalues from the channel $R_{\\epsilon}$ appear in the limit as $\\epsilon \\to 0$. In fact, it is well known that Dirichlet eigenvalues on thin domains diverge to $+\\infty$ as $\\epsilon \\to 0$, because of the Poincar\\'e inequality.\n\nIn order to prove our results, we study the convergence of the resolvent operators $(\\Delta^2 - \\tau \\Delta +I)_{N(\\sigma )}^{-1}$ and this is done by using the notion of $\\mathcal{E}$-convergence, which is a useful tool in the analysis of boundary value problems defined on variable domains, see e.g., \\cite{ACL}, \\cite{arlacras}, \\cite{ArrLamb}.\n\n\nWe point out that, although many papers in the literature have been devoted to the spectral analysis of second order operators with either Neumann or Dirichlet boundary conditions on dumbbell domains, see \\cite{Arr1}, \\cite{Arr2}, \\cite{Jimbo1}, \\cite{Jimbo2} and references therein, very little seems to be known about these problems for higher order operators. 
We refer to \\cite{taylor} for a recent analysis of\nthe dumbbell problem in the case of elliptic systems subject to Dirichlet boundary conditions.\n\nFinally, we observe that it would be interesting to provide precise rates of convergence for the eigenvalues $\\lambda_n(\\Omega_{\\epsilon})$ and the corresponding eigenfunctions as $\\epsilon \\to 0$ in the spirit of the asymptotic analysis performed e.g., in \\cite{Arr2}, \\cite{Gady1}, \\cite{Gady2}, \\cite{Gady3}, \\cite{Gady4}, \\cite{Jimbo1}, \\cite{Jimbo2} for second order operators. However, in case of higher order operators, this seems a challenging problem and is not addressed here.\n\n\nThe paper is organized as follows. In Section \\ref{sec: decomposition} we prove the asymptotic decomposition \\eqref{dec} of the eigenvalues $\\lambda_n(\\Omega_\\epsilon)$. This is achieved in several steps. In Theorem \\ref{thm: upper bound} we provide a suitable upper bound for the eigenvalue $\\lambda_n(\\Omega_\\epsilon)$. Then, in Definition~\\ref{def: H condition} we introduce an assumption on the shape of the channel $R_\\epsilon$, called H-Condition, which is needed to prove a lower bound for $\\lambda_n(\\Omega_\\epsilon)$ as $\\epsilon \\to 0$, see Theorem~\\ref{thm: lower bound}. Finally, we collect the results of the section in Theorem \\ref{thm: eigenvalues decomposition} to deduce a convergence result for the eigenvalues and the eigenfunctions of problem \\eqref{PDE: main problem_eigenvalues} under the assumption that the H-Condition holds. In Section \\ref{sec: proof H condition regular dumbbells} we show that a wide class of regular dumbbell domains satisfy the H-Condition. In Section \\ref{sec: thin plates} we study the convergence of the solutions of problem \\eqref{PDE: R_eps} as $\\epsilon \\to 0$, we identify the limiting problem in $(0,1)$, and we prove the spectral convergence of problem \\eqref{PDE: R_eps} to problem \\eqref{ODE: limit problem}.\nFinally, in Section \\ref{conclusionsec} we combine the results of the previous sections and prove Theorem~\\ref{lastthm}.\n\n\n\n\n\n\n\n\n\n\\section{Decomposition of the eigenvalues} \\label{sec: decomposition}\nThe main goal of this section is to prove the decomposition of the eigenvalues of problem \\eqref{PDE: main problem_eigenvalues} into the two families of eigenvalues coming from \\eqref{PDE: Omega} and \\eqref{PDE: R_eps}. First of all we note that, since $\\Omega_{\\epsilon} $, $\\Omega $ and $R_{\\epsilon }$ are sufficiently regular, by standard spectral theory for differential operators it follows that the operators associated with the quadratic forms appearing in the weak formulation of problems \\eqref{PDE: main problem_eigenvalues}, \\eqref{PDE: Omega}, \\eqref{PDE: R_eps} have compact resolvents. Thus, the spectra of such problems are discrete and consist of positive eigenvalues of finite multiplicity. 
The eigenpairs of problems \\eqref{PDE: main problem_eigenvalues}, \\eqref{PDE: Omega}, \\eqref{PDE: R_eps} will be denoted by $(\\lambda_n(\\Omega_\\epsilon), \\varphi_n^\\epsilon)_{n \\geq 1}$, $(\\omega_n, \\varphi_n^\\Omega)_{n \\geq 1}$, $(\\theta_n^\\epsilon, \\gamma_n^\\epsilon)_{n\\geq 1}$ respectively, where the three families of eigenfunctions $\\varphi_n^\\epsilon$, $\\varphi_n^\\Omega$, $\\gamma_n^\\epsilon$ are complete orthonormal bases of the spaces $L^2(\\Omega_{\\epsilon})$, $L^2(\\Omega )$, $L^2(R_{\\epsilon})$ respectively.\nMoreover, we set $(\\lambda_n^\\epsilon)_{n\\geq 1} = (\\omega_k)_{k \\geq 1} \\cup (\\theta_l^\\epsilon)_{l\\geq 1}$, where it is understood that the eigenvalues are arranged in increasing order and repeated according to their multiplicity. In particular, if $\\omega_k = \\theta_l^\\epsilon$ for some $k,l \\in \\numberset{N}$, then such an eigenvalue is repeated in the sequence $(\\lambda_n^\\epsilon)_{n \\geq 1}$ as many times as the sum of the multiplicities of $\\omega_k$ and $\\theta_l^\\epsilon$. Let us note explicitly that the order in the sequence $(\\lambda_n^\\epsilon)_{n\\geq 1}$ depends on $\\epsilon$. For each $\\lambda_n^\\epsilon$ we define the function $\\phi^\\epsilon_n \\in H^2(\\Omega) \\oplus H^2(R_\\epsilon)$ in the following way:\n\\begin{equation}\n\\label{def: phi_n 1}\n\\phi^\\epsilon_n = \\begin{cases}\n \\varphi_k^\\Omega, &\\text{in $\\Omega$},\\\\\n 0, &\\text{in $R_\\epsilon$},\n \\end{cases}\n\\end{equation}\nif $\\lambda_n^\\epsilon = \\omega_k$, for some $k \\in \\numberset{N}$; otherwise\n\\begin{equation}\n\\label{def: phi_n 2}\n\\phi^\\epsilon_n=\\begin{cases}\n 0, &\\text{in $\\Omega$},\\\\\n \\gamma_l^\\epsilon, &\\text{in $R_\\epsilon$},\n \\end{cases}\n\\end{equation}\nif $\\lambda_n^\\epsilon = \\theta_l^\\epsilon$, for some $l \\in \\numberset{N}$. We observe that in the case $\\lambda_n^\\epsilon= \\omega_k = \\theta_l^\\epsilon$ for some $k,l \\in \\numberset{N}$, with $\\omega_k$ of multiplicity $m_1$ and $\\theta_l^\\epsilon$ of multiplicity $m_2$, we agree to order the eigenvalues (and the corresponding functions $\\phi^\\epsilon_n$) by listing first the $m_1$ eigenvalues $\\omega_k$, then the remaining $m_2$ eigenvalues $\\theta_l^\\epsilon$.\n\nNote that $(\\phi^\\epsilon_i, \\phi^\\epsilon_j)_{L^2(\\Omega_\\epsilon)} = \\delta_{ij}$ where $\\delta_{ij}$ is the Kronecker symbol, that is $\\delta_{ij}=0$ for $i\\ne j$ and $\\delta_{ij}=1$ for $i=j$. Note also that, although the functions $\\phi_n^\\epsilon$ defined by \\eqref{def: phi_n 2} belong to $H^2(\\Omega_\\epsilon)$ (due to the Dirichlet boundary conditions imposed on $L_\\epsilon$), the functions $\\phi_n^\\epsilon$ defined by \\eqref{def: phi_n 1} do not lie in $H^2(\\Omega_\\epsilon)$.\nTo bypass this problem we define a sequence of functions in $H^2(\\Omega_\\epsilon)$ by setting\n\\[\n\\xi_n^\\epsilon =\n\\begin{cases}\nE\\varphi_k^\\Omega, &\\text{if $\\lambda_n^\\epsilon = \\omega_k$,}\\\\\n\\phi^\\epsilon_n, &\\text{if $\\lambda_n^\\epsilon = \\theta_l^\\epsilon$},\n\\end{cases}\n\\]\nwhere $E$ is a linear continuous extension operator mapping $H^2(\\Omega)$ to $H^2(\\numberset{R}^N)$. Then it is easy to verify that for fixed $i,j$, we have $(\\xi^\\epsilon_i, \\xi^\\epsilon_j)_{L^2(\\Omega_\\epsilon)}=\\delta_{ij}+o(1)$ as $\\epsilon \\to 0$. 
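Indeed, one can argue as follows (we only sketch the straightforward verification): splitting the scalar product as\n\\[\n(\\xi^\\epsilon_i, \\xi^\\epsilon_j)_{L^2(\\Omega_\\epsilon)} = (\\xi^\\epsilon_i, \\xi^\\epsilon_j)_{L^2(\\Omega)} + (\\xi^\\epsilon_i, \\xi^\\epsilon_j)_{L^2(R_\\epsilon)}\\, ,\n\\]\nthe first summand equals $\\delta_{ij}$ when both functions are of the form $E\\varphi_k^\\Omega$ and vanishes otherwise, while the second summand equals $\\delta_{ij}$ when both functions are of the form \\eqref{def: phi_n 2} and is $o(1)$ otherwise, since $\\norma{E\\varphi_k^\\Omega}_{L^2(R_\\epsilon)} = o(1)$ as $\\epsilon \\to 0$ (by the absolute continuity of the Lebesgue integral, because $|R_\\epsilon| \\to 0$) and $\\norma{\\gamma_l^\\epsilon}_{L^2(R_\\epsilon)}=1$. 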
Then for fixed $n$ and for $\\epsilon$ small enough, $\\xi_1^\\epsilon,\\ldots,\\xi_n^\\epsilon$ are linearly independent.\n\nNow we prove an upper bound for the eigenvalues $\\lambda_n(\\Omega_\\epsilon)$.\n\\begin{theorem}[Upper bound] \\label{thm: upper bound}\nLet $n\\geq 1$ be fixed. The eigenvalues $\\lambda_n^\\epsilon$ are uniformly bounded in $\\epsilon$ and\n\\begin{equation} \\label{eq: upper bound}\n\\lambda_n(\\Omega_\\epsilon) \\leq \\lambda_n^\\epsilon + o(1), \\quad \\text{as $\\epsilon \\to 0$.}\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\n\nThe fact that $\\lambda_n^\\epsilon$ remains bounded as $\\epsilon \\to 0$ is an easy consequence of the inequality \\begin{equation} \\label{eq: boundedness lambda_n^eps}\n\\lambda_n^\\epsilon \\leq \\omega_n < \\infty,\n\\end{equation}\nwhich holds by definition of $\\lambda_n^\\epsilon$.\nIn the sequel we write $\\perp$ to denote the orthogonality in $L^2$, and $[f_1, \\dots, f_m]$ for the linear span of the functions $f_1, \\dots, f_m$.\n\nBy the variational characterization of the eigenvalues $\\lambda_n(\\Omega_\\epsilon)$ we have\n\\begin{multline} \\label{eq: lambda_n(Omega_eps)var}\n\\lambda_n(\\Omega_\\epsilon) = \\min \\left\\{ \\frac{\\displaystyle \\int_{\\Omega_\\epsilon} (1-\\sigma) |D^2\\psi|^2 + \\sigma |\\Delta \\psi|^2 + \\tau |\\nabla \\psi|^2 + |\\psi|^2 }{\\displaystyle\\int_{\\Omega_\\epsilon} |\\psi|^2} \\right.\\\\\n\\left. : \\text{$\\psi \\in H^2(\\Omega_\\epsilon)$, $\\psi \\not\\equiv 0$ and $\\psi \\perp \\varphi_1^\\epsilon, \\dots, \\varphi_{n-1}^\\epsilon$} \\right\\}.\n\\end{multline}\nSince the functions $\\xi^\\epsilon_1,\\dots,\\xi^\\epsilon_n$ are linearly independent, by a dimension argument there exists $\\xi^\\epsilon \\in [\\xi^\\epsilon_1,\\dots,\\xi^\\epsilon_n]$ such that $\\norma{\\xi^\\epsilon}_{L^2(\\Omega_\\epsilon)}=1$, and $\\xi^\\epsilon \\perp \\varphi_1^\\epsilon, \\dots, \\varphi_{n-1}^\\epsilon$.\n\nWe can write\n$ \\xi^\\epsilon = \\sum_{i=1}^n \\alpha_i \\xi_i^\\epsilon$,\nfor some $\\alpha_1,\\dots, \\alpha_n \\in \\numberset{R}$ depending on $\\epsilon$ such that $\\sum_{i=1}^n \\alpha_i^2 = 1 + o(1)$ as $\\epsilon \\to 0$. By using $\\xi^\\epsilon$ as a test function in \\eqref{eq: lambda_n(Omega_eps)var} we get\n\n\\begin{equation} \\label{proof: computationsRQ}\n\\begin{split}\n&\\lambda_n(\\Omega_\\epsilon) \\leq \\int_{\\Omega_\\epsilon} (1-\\sigma) |D^2\\xi^\\epsilon|^2 + \\sigma |\\Delta \\xi^\\epsilon|^2 + \\tau |\\nabla \\xi^\\epsilon|^2 +|\\xi^\\epsilon|^2\\\\\n&= \\sum_{i=1}^n \\alpha_i^2 \\biggl( \\int_{\\Omega_\\epsilon} (1-\\sigma) |D^2\\xi_i^\\epsilon|^2 + \\sigma |\\Delta\\xi_i^\\epsilon|^2 + \\tau |\\nabla \\xi_i^\\epsilon|^2 + |\\xi_i^\\epsilon|^2 \\biggr) \\\\\n&+ \\sum_{i\\neq j}\\alpha_i\\alpha_j \\biggl( \\int_{\\Omega_\\epsilon} (1-\\sigma) (D^2\\xi^\\epsilon_i : D^2\\xi_j^\\epsilon) + \\sigma \\Delta \\xi_i^\\epsilon \\Delta\\xi_j^\\epsilon + \\tau \\nabla\\xi_i^\\epsilon \\cdot \\nabla \\xi_j^\\epsilon + \\xi_i^\\epsilon \\xi_j^\\epsilon \\biggr).\n\\end{split}\n\\end{equation}\n\nBy definition of $\\xi_i^\\epsilon$ and the absolute continuity of the Lebesgue integral, we have\n{\\small\\[\\int_{\\Omega_\\epsilon} (1-\\sigma) |D^2\\xi_i^\\epsilon|^2 + \\sigma |\\Delta\\xi_i^\\epsilon|^2 + \\tau |\\nabla \\xi_i^\\epsilon|^2 + |\\xi_i^\\epsilon|^2=\n\\begin{cases}\n\\omega_k+o(1), & \\hbox{if }\\textup{$\\exists\\, k$ s.t. $\\lambda_i^\\epsilon=\\omega_k$} ,\\\\\n\\theta_\\epsilon^l, &\\hbox{if }\\textup{$\\exists\\, l$ s.t. 
$\\lambda_i^\\epsilon=\\theta_\\epsilon^l$},\n\\end{cases}\n\\]}\nwhich implies that\n$\\int_{\\Omega_\\epsilon} (1-\\sigma) |D^2\\xi_i^\\epsilon|^2 + \\sigma |\\Delta\\xi_i^\\epsilon|^2 + \\tau |\\nabla \\xi_i^\\epsilon|^2 + |\\xi_i^\\epsilon|^2\\leq \\lambda_n^\\epsilon+o(1).$\n\nNote that\n\\[\n\\begin{split}\n&\\sum_{i\\neq j}\\alpha_i\\alpha_j \\biggl( \\int_{\\Omega_\\epsilon} (1-\\sigma) (D^2\\xi^\\epsilon_i : D^2\\xi_j^\\epsilon) + \\sigma \\Delta \\xi_i^\\epsilon \\Delta\\xi_j^\\epsilon + \\tau \\nabla\\xi_i^\\epsilon \\cdot \\nabla \\xi_j^\\epsilon + \\xi_i^\\epsilon \\xi_j^\\epsilon \\biggr)=o(1)\n\\end{split}\n\\]\n\nHence,\n$\\lambda_n(\\Omega_\\epsilon)\\leq \\sum_{i=1}^n \\alpha_i^2 ( \\lambda_n^\\epsilon+o(1))+o(1)\\leq \\lambda_n^\\epsilon+o(1)$\nwhich concludes the proof of \\eqref{eq: upper bound}.\n\\end{proof}\n\n\\begin{remark}\nNote that the shape of the channel $R_\\epsilon$ does not play any role in establishing the upper bound. The only fact needed is that the measure of $R_\\epsilon$ tends to $0$ as $\\epsilon \\to 0$.\n\\end{remark}\n\nIn the sequel we shall provide a lower bound for the eigenvalues $\\lambda_n(\\Omega_\\epsilon)$. Before doing so, let us introduce some notation.\n\n\\begin{definition}\\label{definitionNorm}\nLet $\\sigma \\in (-1,1)$, $\\tau \\geq 0$. We denote by $H^2_{L_\\epsilon}(R_\\epsilon)$ the space obtained as the closure in $H^2 (R_\\epsilon)$ of $C^{\\infty}(\\overline{R_{\\epsilon}})$ functions which vanish in a neighbourhood of $L_\\epsilon$.\nFurthermore, for any Lipschitz bounded open set $U$ we define\n\\[\n[f]_{H^2_{\\sigma,\\tau}(U)} = \\bigl|(1-\\sigma) \\norma{D^2 f}_{L^2(U)}^2 + \\sigma \\norma{\\Delta f}_{L^2(U)}^2 + \\tau \\norma{\\nabla f}_{L^2(U)}^2 + \\norma{f}_{L^2(U)}^2 \\bigr|^{1\/2}\\, ,\n\\]\nfor all $f \\in H^2(U)$.\n\\end{definition}\n\nNote the functions $u$ in $ H^2_{L_\\epsilon}(R_\\epsilon) $ satisfy the conditions $u=0$ and $\\nabla u =0$ on $L_{\\epsilon}$ in the sense of traces.\n\n\n\\begin{proposition} \\label{prop: convergence eigenprojections}\nLet $n \\in \\numberset{N}$ be such that the following two conditions are satisfied:\n\\begin{enumerate}[label=(\\roman*)]\n\\item For all $i=1,\\dots,n$,\n\\begin{equation} \\label{prop: lambda_i}\n\\abs{\\lambda_i^\\epsilon - \\lambda_i(\\Omega_\\epsilon)}\\to 0 \\quad \\quad \\text{as $\\epsilon \\to 0$,}\n\\end{equation}\n\\item There exists $\\delta>0$ such that\n\\begin{equation} \\label{prop: lambda_n+1}\n\\lambda_n^\\epsilon \\leq \\lambda_{n+1}(\\Omega_\\epsilon) - \\delta\n\\end{equation}\nfor any $\\epsilon >0$ small enough.\n\\end{enumerate}\nLet $P_n$ be the projector from $L^2(\\Omega_\\epsilon)$ onto the linear span $[\\phi_1^\\epsilon,\\dots,\\phi_n^\\epsilon]$ defined by\n\\begin{equation}\nP_n g = \\sum_{i=1}^n (g, \\phi_i^\\epsilon)_{L^2(\\Omega_\\epsilon)} \\phi_i^\\epsilon\\, ,\n\\end{equation}\nfor all $g\\in L^2(\\Omega_\\epsilon)$, where $\\phi_i^\\epsilon$ is defined in \\eqref{def: phi_n 1}, \\eqref{def: phi_n 2}. 
Then\n\\begin{equation} \\label{prop: thesis}\n\\norma{\\varphi_i^\\epsilon - P_n \\varphi_i^\\epsilon}_{H^2(\\Omega) \\oplus H^2(R_\\epsilon)} \\to 0,\n\\end{equation}\nas $\\epsilon \\to 0$, for all $i=1,\\dots,n$.\n\\end{proposition}\n\\begin{proof}\nBy \\eqref{eq: upper bound} and \\eqref{eq: boundedness lambda_n^eps} we can extract a subsequence from both the sequences $(\\lambda_i^\\epsilon)_{\\epsilon>0}$ and $(\\lambda_i(\\Omega_\\epsilon))_{\\epsilon>0}$ such that\n\\[\n\\lambda_i^{\\epsilon_k} \\to \\lambda_i,\\ \\ {\\rm and}\\ \\\n\\lambda_i(\\Omega_{\\epsilon_k}) \\to \\widehat{\\lambda}_i,\n\\]\nas $k\\to \\infty$, for all $i=1,\\dots,n+1$.\\\\\nBy assumption we have $\\lambda_i = \\widehat{\\lambda}_i$ for all $i=1,\\dots,n$. Thus, by passing to the limit as $\\epsilon \\to 0$ in \\eqref{eq: upper bound} (with $n$ replaced by $n+1$) and in \\eqref{prop: lambda_n+1}, we get\n\\[ \\lambda_n \\leq \\widehat{\\lambda}_{n+1} - \\delta \\leq \\lambda_{n+1} - \\delta. \\]\n\nWe rewrite $\\lambda_1,\\dots,\\lambda_n$ without repetitions due to multiplicity in order to get a new sequence\n\\begin{equation} \\label{proof: nonoverlappeigenvalues}\n\\widetilde{\\lambda}_1< \\widetilde{\\lambda}_2<\\dots< \\widetilde{\\lambda}_s = \\lambda_n\n\\end{equation}\nand set $\\widetilde{\\lambda}_{s+1}:= \\widehat{\\lambda}_{n+1} \\leq \\lambda_{n+1}$. Thus, by assumption \\eqref{prop: lambda_n+1} we have that\n\\begin{equation} \\label{proof: nonoverlappeigenvalues2}\n\\widetilde{\\lambda}_s < \\widetilde{\\lambda}_{s+1}.\n\\end{equation}\nFor each $r=1,\\dots,s$, let $\\widetilde{\\lambda}_r = \\lambda_{i_r} = \\dots = \\lambda_{j_r}$, for some $i_r \\leq j_r$, $i_r, j_r \\in \\{1,\\dots,n \\}$, where it is understood that $j_r - i_r + 1$ is the multiplicity of $\\widetilde{\\lambda}_r$. 
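To fix the notation with a simple example: if $n=3$ and $\\lambda_1 < \\lambda_2 = \\lambda_3$, then $s=2$, $\\widetilde{\\lambda}_1 = \\lambda_1$ with $i_1=j_1=1$, and $\\widetilde{\\lambda}_2 = \\lambda_2 = \\lambda_3$ with $i_2=2$, $j_2=3$. 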
Furthermore, we define the eigenprojector $Q_r$ from $L^2(\\Omega_\\epsilon)$ onto the linear span $[\\varphi_{i_r}^\\epsilon, \\dots, \\varphi_{j_r}^\\epsilon]$ by\n\\begin{equation} \\label{proof: def Q_r}\nQ_r g = \\sum_{i=i_r}^{j_r} (g, \\varphi_{i}^\\epsilon)_{L^2(\\Omega_\\epsilon)} \\varphi_{i}^\\epsilon.\n\\end{equation}\nWe now proceed to prove the following\\\\ \\smallskip\n\n\\noindent\\emph{Claim:} $\\norma{\\xi_i^{\\epsilon_k} - Q_r \\xi_i^{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})} \\to 0$ as $k \\to \\infty$, for all $i_r \\leq i \\leq j_r$ and $r \\leq s$.\n\n\\noindent Let us prove it by induction on $1 \\leq r \\leq s$.\\\\\nIf $r=1$, we define the function\n\\[\n\\chi_{\\epsilon_k} = \\xi_i^{\\epsilon_k} - Q_1 \\xi_i^{\\epsilon_k} = \\xi_i^{\\epsilon_k} - \\sum_{l=1}^{j_1} (\\xi_i^{\\epsilon_k}, \\varphi_l^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} \\varphi_l^{\\epsilon_k}.\n\\]\nThen $\\chi_{\\epsilon_k} \\in H^2(\\Omega_{\\epsilon_k})$, $(\\chi_{\\epsilon_k} , \\varphi_l^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})}= 0$ for all $l=1,\\dots, j_1$ and by the min-max representation of $\\lambda_{j_1+1}(\\Omega_{\\epsilon_k})$ we have that\n\\begin{equation} \\label{proof: bigger lambda_2}\n[\\chi_{\\epsilon_k}]^2_{H^2_{\\sigma,\\tau}(\\Omega_{\\epsilon_k})} \\geq \\lambda_{j_1+1}(\\Omega_{\\epsilon_k}) \\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} \\geq \\widetilde{\\lambda}_2 \\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} - o(1).\n\\end{equation}\nOn the other hand, it is easy to prove by definition of $\\chi_{\\epsilon_k}$ that\n\\begin{multline}\n\\int_{\\Omega_{\\epsilon_k}} (1-\\sigma) \\big(D^2\\chi_{\\epsilon_k} : D^2 \\psi\\big) + \\sigma \\Delta \\chi_{\\epsilon_k} \\Delta\\psi + \\tau \\nabla\\chi_{\\epsilon_k}\\cdot \\nabla \\psi + \\chi_{\\epsilon_k}\\psi \\, dx\\\\\n= \\lambda_1(\\Omega_{\\epsilon_k})\\int_{\\Omega_{\\epsilon_k}} \\chi_{\\epsilon_k} \\psi \\, dx + o(1)\n\\end{multline}\nfor all $\\psi\\in H^2(\\Omega_{\\epsilon_k})$. This in particular implies that\n\\begin{equation} \\label{proof: chi_eps equality}\n[\\chi_{\\epsilon_k}]^2_{H^2_{\\sigma,\\tau}(\\Omega_{\\epsilon_k})} = \\lambda_1(\\Omega_{\\epsilon_k}) \\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} +o(1)\n\\end{equation}\nand consequently,\n\\begin{equation} \\label{proof: less lambda_1}\n[\\chi_{\\epsilon_k}]^2_{H^2_{\\sigma,\\tau}(\\Omega_{\\epsilon_k})} \\leq \\widetilde{\\lambda}_1 \\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} + o(1).\n\\end{equation}\nHence, inequalities \\eqref{proof: bigger lambda_2}, \\eqref{proof: less lambda_1} imply that\n\\[\n\\widetilde{\\lambda}_2 \\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} - o(1) \\leq \\widetilde{\\lambda}_1 \\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} + o(1),\n\\]\nwhich implies that $\\norma{\\chi_{\\epsilon_k}}_{L^2(\\Omega_{\\epsilon_k})} = o(1)$ (otherwise we would have $\\widetilde{\\lambda}_2 \\leq \\widetilde{\\lambda}_1 + o(1)$, against \\eqref{proof: nonoverlappeigenvalues}). 
Finally, equation \\eqref{proof: chi_eps equality} implies that $[\\chi_{\\epsilon_k}]_{H^2_{\\sigma,\\tau}(\\Omega_{\\epsilon_k})} = o(1)$, so that also $\\norma{\\chi_{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})}= o(1)$.\n\nLet $r>1$ and assume by induction hypothesis that\n\\begin{equation} \\label{proof: ind_hyp}\n\\norma{\\xi^{\\epsilon_k}_i - Q_t \\xi^{\\epsilon_k}_i}_{H^2(\\Omega_{\\epsilon_k})} \\to 0\n\\end{equation}\nas $k \\to \\infty$, for all $i_t \\leq i \\leq j_t$ and for all $t=1,\\dots,r-1$. We have to prove that \\eqref{proof: ind_hyp} holds also for $t=r$. Let $i_r \\leq i \\leq j_r$ and let $\\chi_{\\epsilon_k} = \\xi_i^{\\epsilon_k} - Q_r \\xi_i^{\\epsilon_k}$. Then\n\\begin{equation} \\label{proof: almost orthog}\n(\\chi_{\\epsilon_k}, \\varphi_h^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} \\to 0 \\quad \\text{as $k \\to \\infty$, for all $h=1,\\dots,j_r$ }.\n\\end{equation}\nIndeed, if $h \\in \\{i_r, \\dots, j_r\\}$ then by definition of $\\chi_{\\epsilon_k}$, $(\\chi_{\\epsilon_k}, \\varphi_h^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} = 0$. Otherwise, if $h < i_r$, note that the function $\\varphi_h^{\\epsilon_k}$ satisfies\n\\begin{multline*}\n\\int_{\\Omega_{\\epsilon_k}} (1-\\sigma) \\left( D^2\\varphi_h^{\\epsilon_k} : D^2\\psi \\right) + \\sigma \\Delta \\varphi_h^{\\epsilon_k} \\Delta \\psi + \\tau \\nabla \\varphi_h^{\\epsilon_k} \\nabla\\psi + \\varphi_h^{\\epsilon_k} \\psi\\, dx \\\\\n= \\lambda_h(\\Omega_{\\epsilon_k}) \\int_{\\Omega_{\\epsilon_k}} \\varphi_h^{\\epsilon_k} \\psi\\,dx\\, ,\n\\end{multline*}\nfor all $\\psi \\in H^2(\\Omega_{\\epsilon_k})$, briefly\n$\nB_{\\Omega_{\\epsilon_k}}(\\varphi_h^{\\epsilon_k}, \\psi) = \\lambda_h(\\Omega_{\\epsilon_k})(\\varphi_h^{\\epsilon_k}, \\psi)_{L^2(\\Omega_{\\epsilon_k})}\\, ,\n$\nfor all $\\psi \\in H^2(\\Omega_{\\epsilon_k})$, where $B_U$ denotes the quadratic form associated with the operator\n$\\Delta^2-\\tau\\Delta +I$\non an open set $U$. Similarly,\n$\nB_{\\Omega_{\\epsilon_k}}(\\xi_i^{\\epsilon_k}, \\psi) = \\lambda_i^{\\epsilon_k}(\\xi_i^{\\epsilon_k}, \\psi)_{L^2(\\Omega_{\\epsilon_k})} + o(1)\n$\nfor all $\\psi \\in H^2(\\Omega_{\\epsilon_k})$. 
Thus,\n$\\lambda_h(\\Omega_{\\epsilon_k})(\\varphi_h^{\\epsilon_k}, \\xi_i^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} = \\lambda_i^{\\epsilon_k}(\\xi_i^{\\epsilon_k}, \\varphi_h^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} + o(1)\n$\nwhich implies\n\\begin{equation}\n\\label{proof: difference eigenvalues}\n( \\lambda_h(\\Omega_{\\epsilon_k}) - \\lambda_i^{\\epsilon_k}) (\\varphi_h^{\\epsilon_k}, \\xi_i^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} = o(1)\n\\end{equation}\nand since $( \\lambda_h(\\Omega_{\\epsilon_k}) - \\lambda_i^{\\epsilon_k}) \\to (\\widetilde{\\lambda}_h - \\widetilde{\\lambda}_i) \\neq 0$ by assumption, by \\eqref{proof: difference eigenvalues} we deduce that $(\\varphi_h^{\\epsilon_k}, \\xi_i^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} = o(1)$ as $\\epsilon_k \\to 0$, for all $h=1,\\dots,j_r$, which implies \\eqref{proof: almost orthog}.\n\nAs in the case $r=1$ we may deduce that\n\\begin{equation} \\label{proof: bigger lambda_r+1}\n[\\chi_{\\epsilon_k}]^2_{H^2_{\\sigma,\\tau}(\\Omega_{\\epsilon_k})} \\geq \\widetilde{\\lambda}_{r+1} \\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} - o(1).\n\\end{equation}\nOn the other hand, by definition of $\\chi_{\\epsilon_k}$ we have\n\\begin{equation} \\label{proof: less lambda_r}\n[\\chi_{\\epsilon_k}]^2_{H^2_{\\sigma,\\tau}(\\Omega_{\\epsilon_k})} \\leq \\widetilde{\\lambda}_r \\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} + o(1).\n\\end{equation}\nBy \\eqref{proof: bigger lambda_r+1}, \\eqref{proof: less lambda_r} and \\eqref{proof: nonoverlappeigenvalues} it must be $\\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} = o(1)$ and by \\eqref{proof: less lambda_r} we deduce that $[\\chi_{\\epsilon_k}]^2_{H^2_{\\sigma,\\tau}(\\Omega_{\\epsilon_k})} = o(1)$, hence $\\norma{\\chi_{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})} \\to 0$,\nas $k\\to \\infty$. This concludes the proof of the Claim.\\\\\n\nNow define the projector $\\widetilde{Q}_n$ from $L^2(\\Omega_\\epsilon)$ into the linear span $[\\varphi_1^{\\epsilon}, \\dots, \\varphi_n^{\\epsilon}]$ by\n\\[\n\\widetilde{Q}_n g = \\sum_{i=1}^n (g,\\varphi_i^\\epsilon)_{L^2(\\Omega_\\epsilon)} \\varphi_i^\\epsilon.\n\\]\nThen, as a consequence of the Claim we have that\n\\begin{equation}\n\\label{convergence xi}\n\\norma{\\xi_i^{\\epsilon_k} - \\widetilde{Q}_n \\xi_i^{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})} \\to 0\n\\end{equation}\nas $k \\to \\infty$, for all $i=1,\\dots,n$. Indeed for all indexes $i=1,\\dots,n$ there exists $1 \\leq r\\leq s$ such that $i_r \\leq i \\leq j_r$; let assume for simplicity that $r=1$. Then we have $\\norma{\\xi_i^{\\epsilon_k} - Q_1 \\xi_i^{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})} \\to 0$ as $k\\to \\infty$; and also\n\\[\n\\norma{\\xi_i^{\\epsilon_k} - \\widetilde{Q}_n \\xi_i^{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})} \\leq \\norma{\\xi_i^{\\epsilon_k} - Q_1 \\xi_i^{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})} + \\sum_{l > j_1}^n \\big\\lvert(\\xi_i^{\\epsilon_k}, \\varphi_l^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})}\\big\\rvert \\norma{\\varphi_l^{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})}\n\\]\nand the right-hand side tends to 0 as $k \\to \\infty$ because $\\norma{\\varphi_l^{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})}$ is uniformly bounded in $k$ and $(\\xi_i^{\\epsilon_k}, \\varphi_l^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} \\to 0$ as $k\\to \\infty$ (to see this it is sufficient to argue as in the proof of \\eqref{proof: difference eigenvalues}). 
Moreover, since $\\norma{\\xi_i^{\\epsilon_k} - \\phi_i^{\\epsilon_k}}_{H^2(\\Omega) \\oplus H^2(R_{\\epsilon_k})} \\to 0$ as $k \\to \\infty$ for all $i=1,\\dots,n$, we also have $\\norma{\\phi_i^{\\epsilon_k} - \\widetilde{Q}_n \\phi_i^{\\epsilon_k}}_{H^2(\\Omega) \\oplus H^2(R_{\\epsilon_k})} \\to 0$\nas $k \\to \\infty$, for all $i=1,\\dots,n$. Thus $(\\widetilde{Q}_n \\phi_1^{\\epsilon_k}, \\dots,$ $ \\widetilde{Q}_n \\phi_n^{\\epsilon_k} )$ is a basis in $(L^2(\\Omega_{\\epsilon_k})^n)$ for $[\\varphi_1^{\\epsilon_k}, \\dots, \\varphi_n^{\\epsilon_k}]$. Hence,\n$\n\\varphi_i^{\\epsilon_k} = \\sum_{l=1}^n a_{li}^{\\epsilon_k} \\widetilde{Q}_n \\phi_l^{\\epsilon_k}\n$\nfor some coefficients $a_{li}^{\\epsilon_k} = (\\varphi_i^{\\epsilon_k}, \\phi_l^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} + o(1)$ as $k\\to\\infty$. Then for all $i =1,\\dots,n$ we have\n\\begin{multline*}\n\\norma{\\varphi_i^{\\epsilon_k} - P_n \\varphi_i^{\\epsilon_k}}_{H^2(\\Omega) \\oplus H^2(R_{\\epsilon_k}\\!)}\\\\\n = \\bigg\\lVert \\sum_{l=1}^n (\\varphi_i^{\\epsilon_k}, \\phi_l^{\\epsilon_k})_{L^2} [\\phi_l^{\\epsilon_k} - \\widetilde{Q}_n \\phi_l^{\\epsilon_k}] + o(1) \\sum_{l=1}^n \\widetilde{Q}_n \\phi_l^{\\epsilon_k} \\bigg \\rVert_{H^2(\\Omega) \\oplus H^2(R_{\\epsilon_k}\\!)}\n\\end{multline*}\nand the right-hand side tends to 0 as $k \\to \\infty$.\n\\end{proof}\n\n\\begin{remark}\n\\label{rmk: orthogonal matrix A}\nIn the previous proof one could prove that the matrix\n$A = (a_{li} ^{\\epsilon_k} )_{l,i=1,\\dots,n} $\nis almost orthogonal, in the sense that $A A^t = A^t A = \\mathbb{I} + o(1)$ as $k \\to \\infty$. To prove this it is sufficient to show that the matrix\n $\\tilde A= \\bigl((\\phi^{\\epsilon_k}_l, \\varphi^{\\epsilon_k}_m)_{L^2(\\Omega_{\\epsilon_k})}\\bigr)_{l,m=1,\\dots,n}$\nis almost orthogonal. Let $l$ be fixed and note that\n$\n\\phi^{\\epsilon_k}_l = \\sum_{m=1}^n (\\phi^{\\epsilon_k}_l, \\varphi_m^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} \\varphi^{\\epsilon_k}_m + (\\mathbb{I}-\\widetilde{Q}_m) \\phi^{\\epsilon_k}_l,\n$\nhence, by \\eqref{convergence xi} we deduce that\n\\begin{equation}\\label{almost orthogonal matrix}\n\\delta_{li} = (\\phi^{\\epsilon_k}_l, \\phi^{\\epsilon_k}_i)_{L^2(\\Omega_{\\epsilon_k})} = \\sum_{m=1}^n (\\phi^{\\epsilon_k}_l, \\varphi^{\\epsilon_k}_m)_{L^2(\\Omega_{\\epsilon_k})} (\\varphi^{\\epsilon_k}_m, \\phi^{\\epsilon_k}_i)_{L^2(\\Omega_{\\epsilon_k})} + o(1)\\, ,\n\\end{equation}\nas $k \\to \\infty$.\nNote that we can rewrite \\eqref{almost orthogonal matrix} as $\\tilde A \\tilde A^t = \\mathbb{I} + o(1)$, and in a similar way we also get that $\\tilde A^t \\tilde A = \\mathbb{I} + o(1)$, concluding the proof.\n\\end{remark}\n\nIn the sequel we shall need the following lemma.\n\n\\begin{lemma} \\label{lemma: equation for chi} Let $1 \\leq i \\leq j \\leq n$. 
Assume that $\\widehat \\lambda \\in \\numberset{R}$ is such that, possibly passing to a subsequence, $\\lambda_m(\\Omega_\\epsilon )\\to \\widehat{\\lambda}$ as $\\epsilon \\to 0$ for all $m \\in \\{i, \\dots, j \\}$.\nIf $\\chi_{\\epsilon} \\in [\\varphi_i^{\\epsilon}, \\dots, \\varphi_j^{\\epsilon}]$, $\\norma{\\chi_{\\epsilon}}_{L^2(\\Omega_{\\epsilon})}=1$ and $\\chi_{\\epsilon}|_{\\Omega} \\rightharpoonup \\chi$ in $H^2(\\Omega)$\nthen\n\\begin{equation}\n\\label{eq: equation for chi}\n\\int_{\\Omega} (1-\\sigma) (D^2 \\chi : D^2 \\psi) + \\sigma \\Delta \\chi \\Delta \\psi + \\tau \\nabla \\chi \\cdot\\nabla \\psi + \\chi \\psi\\, dx = \\widehat{\\lambda} \\int_{\\Omega} \\chi \\psi\\, dx\\, ,\n\\end{equation}\nfor all $\\psi \\in H^2(\\Omega)$.\n\\end{lemma}\n\\begin{proof}\nSince $\\chi_{\\epsilon} \\in [\\varphi_i^{\\epsilon}, \\dots, \\varphi_j^{\\epsilon}]$ and $\\norma{\\chi_{\\epsilon}}_{L^2(\\Omega_{\\epsilon})}=1$ there exist coefficients $(a_l(\\epsilon))_{l=i}^j$ such that\n$\n\\chi_{\\epsilon} = \\sum_{l=i}^j a_l(\\epsilon) \\varphi_l^{\\epsilon}$ and $\\sum_{l=i}^j a_l^2(\\epsilon) = 1.$\nNote that for all $m \\in \\{i, \\dots, j \\}$, possibly passing to a subsequence, there exists $\\widehat{\\varphi}_m \\in H^2(\\Omega)$ such that $\\varphi_m^{\\epsilon}|_{\\Omega} \\rightharpoonup \\widehat{\\varphi}_m$ in $H^2(\\Omega)$. Since $\\chi_{\\epsilon}|_{\\Omega} \\rightharpoonup \\chi$ in $H^2(\\Omega)$ by assumption, we get that $\\chi = \\sum_{l=i}^j a_l \\widehat{\\varphi}_l$ in $\\Omega$ for some coefficients $(a_l)_{l=i}^j$. Let $\\psi \\in H^2(\\Omega)$ be fixed and consider an extension $\\widetilde{\\psi} = E \\psi \\in H^2(\\numberset{R}^N)$. Then\n\\begin{equation} \\label{proof: lemma chi}\n\\begin{split}\n&\\int_{\\Omega_{\\epsilon}} (1-\\sigma) \\bigl(D^2\\chi_{\\epsilon} : D^2 \\widetilde{\\psi}\\bigr) + \\sigma \\Delta \\chi_{\\epsilon} \\Delta \\widetilde{\\psi} + \\tau \\nabla \\chi_{\\epsilon} \\nabla \\widetilde{\\psi} + \\chi_{\\epsilon}\\widetilde{\\psi}\\\\\n&= \\sum_{l=i}^j a_l(\\epsilon) \\biggl[ \\int_{\\Omega_{\\epsilon}} (1-\\sigma) \\bigl(D^2 \\varphi_l^{\\epsilon} : D^2 \\widetilde{\\psi}\\bigr) + \\sigma \\Delta \\varphi_l^{\\epsilon} \\Delta \\widetilde{\\psi} + \\tau \\nabla \\varphi_l^{\\epsilon} \\nabla \\widetilde{\\psi} + \\varphi_l^{\\epsilon}\\widetilde{\\psi} \\biggr]\\\\\n&= \\sum_{l=i}^j a_l(\\epsilon) \\lambda_l(\\Omega_{\\epsilon}) \\int_{\\Omega_{\\epsilon}} \\varphi_l^{\\epsilon} \\widetilde{\\psi}.\n\\end{split}\n\\end{equation}\nThen it is possible to pass to the limit in both sides of \\eqref{proof: lemma chi} by splitting the integrals over $\\Omega_{\\epsilon}$ into an integral over $R_{\\epsilon}$ (that tends to $0$ as $\\epsilon \\to 0$) and an integral over $\\Omega$. Moreover, the integrals over $\\Omega$ will converge to the corresponding integrals in \\eqref{eq: equation for chi} as $\\epsilon \\to 0$, because of the weak convergence of $\\chi_{\\epsilon}$ in $H^2(\\Omega)$ and the strong convergence of $E\\psi$ to $\\psi$ in $H^2(\\Omega)$.\n\\end{proof}\n\n\nWe proceed to prove the lower bound for $\\lambda_n(\\Omega_\\epsilon)$.\nTo do so, we need to add an extra assumption on the shape of $\\Omega_{\\epsilon}$. 
Hence,\n we introduce the following condition in the spirit of what is known for the Neumann Laplacian (see e.g., \\cite{ArrPhD}, \\cite{Arr1}, \\cite{AHH}).\n\n\\begin{definition}[H-Condition]\n\\label{def: H condition}\nWe say that the family of dumbbell domains $\\Omega_\\epsilon$, $\\epsilon>0$, satisfies the H-Condition if, given functions $u_\\epsilon \\in H^2(\\Omega_\\epsilon)$ such that $\\norma{u_\\epsilon}_{H^2(\\Omega_\\epsilon)} \\leq R$ for all $\\epsilon>0$, there exist functions $\\bar{u}_\\epsilon \\in H^2_{L_\\epsilon}(R_\\epsilon)$ such that\n\\begin{enumerate}[label=(\\roman*)]\n\\item $\\norma{u_\\epsilon - \\bar{u}_\\epsilon }_{L^2(R_\\epsilon)} \\to 0$ as $\\epsilon \\to 0$,\n\\item $[\\bar{u}_\\epsilon]^2_{H^2_{\\sigma, \\tau}(R_\\epsilon)} \\leq [u_\\epsilon]^2_{H^2_{\\sigma, \\tau}(\\Omega_\\epsilon)} + o(1)$ as $\\epsilon \\to 0$.\n\\end{enumerate}\n\\end{definition}\n\nRecall that $[\\cdot ]_{H^2_{\\sigma,\\tau}}$ is defined above in Definition \\ref{definitionNorm}. We will show in Section \\ref{sec: proof H condition regular dumbbells} that a wide class of channels $R_\\epsilon$ satisfies the H-Condition.\n\n\n\\begin{theorem}[Lower bound] \\label{thm: lower bound}\nAssume that the family of dumbbell domains $\\Omega_\\epsilon$, $\\epsilon>0$, satisfies the H-Condition. Then for every $n\\in \\numberset{N}$ we have $\\lambda_n(\\Omega_\\epsilon) \\geq \\lambda_n^\\epsilon - o(1)$ as $\\epsilon \\to 0$.\n\\end{theorem}\n\\begin{proof}\nBy Theorem \\ref{thm: upper bound} and its proof we know that both $\\lambda_i(\\Omega_\\epsilon)$ and $\\lambda_i^\\epsilon$ are uniformly bounded in $\\epsilon$. Then, for each subsequence $\\epsilon_k$ we can find a subsequence (which we still call $\\epsilon_k$), sequences of real numbers $(\\lambda_i)_{i\\in \\numberset{N}}$, $(\\widehat{\\lambda}_i)_{i\\in \\numberset{N}}$, and sequences of $H^2(\\Omega)$ functions $(\\phi_i)_{i \\in \\numberset{N}}$, $(\\widehat{\\varphi}_i)_{i \\in \\numberset{N}}$, such that the following conditions are satisfied:\n\\begin{enumerate}[label=(\\roman*)]\n\\item $\\lambda_i^{\\epsilon_k} \\longrightarrow \\lambda_i$, for all $i \\geq 1$;\n\\item $\\lambda_i(\\Omega_{\\epsilon_k}) \\longrightarrow \\widehat{\\lambda}_i$, for all $i \\geq 1$;\n\\item $\\xi^{\\epsilon_k}_i|_{\\Omega} \\longrightarrow \\phi_i $ strongly in $H^2(\\Omega)$, for all $i \\geq 1$;\n\\item $\\varphi_i^{\\epsilon_k}|_{\\Omega} \\longrightarrow \\widehat{\\varphi}_i$ weakly in $H^2(\\Omega)$, for all $i \\geq 1$.\n\\end{enumerate}\nNote that $(iii)$ follows immediately by recalling that $\\xi^{\\epsilon_k}_i|_{\\Omega}$ is either zero or coincides with one of the eigenfunctions $\\varphi_k^\\Omega$. Then $(iv)$ is deduced from the estimate\n$\\norma{\\varphi_i^{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})} \\leq c\\, \\lambda_i(\\Omega_{\\epsilon_k})$ and from the boundedness of the sequence $\\lambda_i(\\Omega_{\\epsilon_k})$, $k \\in \\numberset{N}$.\n\nWe plan to prove that $\\widehat{\\lambda}_i = \\lambda_i$ for all $i\\geq 1$. We do it by induction.\nFor $i=1$ we clearly have $\\lambda_1 = \\lambda_1(\\Omega) = 1 = \\lambda_1(\\Omega_{\\epsilon_k})$ for all $k$; hence, passing to the limit as $k\\to\\infty$ in the last equality we get $\\lambda_1 = \\widehat{\\lambda}_1$.\nThen, we assume by the induction hypothesis that $\\widehat{\\lambda}_i = \\lambda_i$ for all $i=1,\\dots,n$ and we prove that $\\widehat{\\lambda}_{n+1} = \\lambda_{n+1}$. 
There are two possibilities: either $\\lambda_n = \\lambda_{n+1}$ or $\\lambda_n < \\lambda_{n+1}$. In the first case we deduce by \\eqref{eq: upper bound} that\n\\[\n\\lambda_n = \\widehat{\\lambda}_n \\leq \\widehat{\\lambda}_{n+1} \\leq \\lambda_{n+1} = \\lambda_n,\n\\]\nhence all the inequalities are equalities and in particular $\\widehat{\\lambda}_{n+1} = \\lambda_{n+1}$.\nConsequently, we can assume without loss of generality that $\\lambda_n < \\lambda_{n+1}$. In this case we must have $\\widehat{\\lambda}_{n+1} \\in [\\lambda_n, \\lambda_{n+1}]$ because $\\lambda_n=\\widehat{\\lambda}_n$ and $\\lambda_n(\\Omega_{\\epsilon_k}) \\leq \\lambda_{n+1}(\\Omega_{\\epsilon_k}) \\leq \\lambda_{n+1}^{\\epsilon_k} + o(1)$ as $k\\to \\infty$. Let $r= \\max\\{\\lambda_i : i\\leq n\\}=\\lambda_n$. By means of the H-Condition, of Proposition \\ref{prop: convergence eigenprojections} and of Lemma \\ref{lemma: equation for chi}, one can then exclude the possibility $r \\leq \\widehat{\\lambda}_{n+1} < \\lambda_{n+1}$, so that $\\widehat{\\lambda}_{n+1} = \\lambda_{n+1}$ and the induction step is proved. Since the limit values do not depend on the chosen subsequence, we conclude that $\\lambda_n(\\Omega_\\epsilon) \\geq \\lambda_n^\\epsilon - o(1)$ as $\\epsilon \\to 0$.\n\\end{proof}\n\nIn order to state the decomposition result we need one more notion. We say that a family of positive numbers $x_\\epsilon$, $\\epsilon>0$, divides the spectrum of the operators $(\\Delta^2 - \\tau \\Delta + I)_{N(\\sigma)}$, $\\epsilon>0$, with compact resolvents in $L^2(\\Omega_\\epsilon)$ if there exist $\\delta, M, N, \\epsilon_0 > 0$ such that\n\\begin{align}\n[x_\\epsilon - \\delta, x_\\epsilon + \\delta] \\cap \\{ \\lambda_n^\\epsilon \\}_{n=1}^\\infty = \\emptyset,& \\quad\\forall \\epsilon < \\epsilon_0\\\\\nx_\\epsilon \\leq M,& \\quad\\forall \\epsilon < \\epsilon_0\\\\\nN(x_{\\epsilon}) := \\#\\{ \\lambda_i^{\\epsilon} : \\lambda_i^{\\epsilon} \\leq x_\\epsilon\\}\\leq N < \\infty.\n\\end{align}\nIf $x_\\epsilon$ divides the spectrum we define the projector $P_{x_\\epsilon}$ from $L^2(\\Omega_\\epsilon)$ onto the linear span $[\\phi_1^{\\epsilon}, \\dots, \\phi_{N(x_\\epsilon)}^{\\epsilon}]$ of the first $N(x_\\epsilon)$ eigenfunctions by\n\\[\nP_{x_\\epsilon} g = \\sum_{i=1}^{N(x_\\epsilon)} (g,\\phi_i^\\epsilon)_{L^2(\\Omega_\\epsilon)} \\phi_i^\\epsilon\\, ,\n\\]\nfor all $g\\in L^2(\\Omega_\\epsilon)$. Then, recalling Theorem \\ref{thm: upper bound} and Theorem \\ref{thm: lower bound} we deduce the following.\n\n\\begin{theorem}[Decomposition of the eigenvalues] \\label{thm: eigenvalues decomposition}\nLet $\\Omega_\\epsilon$, $\\epsilon>0$, be a family of dumbbell domains satisfying the H-Condition. Then the following statements hold:\n\\begin{enumerate}[label =(\\roman*)]\n\\item $\\lim_{\\epsilon \\to 0}\\, \\abs{\\lambda_n(\\Omega_\\epsilon) - \\lambda_n^\\epsilon} = 0$, for all $n\\in \\numberset{N} $.\n\n\\item For any $x_\\epsilon$ dividing the spectrum,\n $\\lim_{\\epsilon \\to 0}\\, \\norma{\\varphi^\\epsilon_{r_\\epsilon} - P_{x_\\epsilon} \\varphi^\\epsilon_{r_\\epsilon}}_{H^2(\\Omega) \\oplus H^2(R_\\epsilon)} = 0$, for all $r_\\epsilon = 1,\\dots, N(x_\\epsilon)$.\\end{enumerate}\n\\end{theorem}\n\n\n\n\n\n\n\n\n\n\\section{Proof of the H-Condition for regular dumbbells}\n\\label{sec: proof H condition regular dumbbells}\nThe goal of this section is to prove that the H-Condition holds for regular dumbbell domains. More precisely, we will consider channels $R_\\epsilon$ such that the profile function $g$ has the following monotonicity property:\\vspace{8pt}\\\\\n (MP): \\textit{ there exists $\\delta \\in ]0, 1\/2[$ such that $g$ is decreasing on $[0,\\delta)$ and increasing on $(1-\\delta, 1]$. 
} \\vspace{8pt}\\\\\n\\noindent If (MP) is satisfied then the set $A_{\\epsilon} = \\{ (x,y) \\in \\numberset{R}^2 : x \\in (0,\\delta) \\cup (1-\\delta, 1), 0<y<\\epsilon g(x) \\}$ is contained in the union of the rectangles $(0,\\delta)\\times(0,\\epsilon g(0))$ and $(1-\\delta,1)\\times(0,\\epsilon g(1))$; this elementary observation allows one to reflect functions defined on the channel $R_\\epsilon$ into $\\Omega$ near the junctions. The main result of this section is the following theorem.\n\n\\begin{theorem} \\label{thm: (MP) implies (H)}\nAssume that the profile function $g$ satisfies (MP). Then the family of dumbbell domains $\\Omega_\\epsilon$, $\\epsilon>0$, satisfies the H-Condition.\n\\end{theorem}\n\nIn order to define the functions $\\bar{u}_\\epsilon$ required by the H-Condition we need some preparation. For $\\gamma>\\beta>0$ we define the function $f_{\\gamma, \\beta} \\in C^{1,1}(0,1)$ by setting\n\\begin{equation}\nf=f_{\\gamma,\\beta}(x) =\n\\begin{cases} -\\epsilon^\\gamma \\Big(\\frac{x}{\\epsilon^\\beta}\\Big)^2 + (\\epsilon^\\beta+2\\epsilon^\\gamma) \\Big( \\frac{x}{\\epsilon^\\beta} \\Big) - \\epsilon^\\gamma, & x\\in (0,\\epsilon^\\beta), \\\\\n\\qquad x, & x\\in (\\epsilon^\\beta, 1).\n\\end{cases}\n\\end{equation}\nNote that $f$ is a $C^{1,1}$-diffeomorphism from $(0, \\epsilon^\\beta)$ onto $(-\\epsilon^\\gamma, \\epsilon^\\beta)$. Then,\n\\[\nf'(x) =\n\\begin{cases} 1+2 \\epsilon^{\\gamma-\\beta} \\, (1-\\frac{x}{\\epsilon^\\beta}), & x\\in (0,\\epsilon^\\beta), \\\\\n\\qquad 1, & x\\in (\\epsilon^\\beta, 1),\n\\end{cases}\n\\]\nand\n\\[\nf''(x) =\n\\begin{cases} - 2 \\epsilon^{\\gamma-2\\beta}, & x\\in (0,\\epsilon^\\beta), \\\\\n\\qquad 0, & x\\in (\\epsilon^\\beta, 1),\n\\end{cases}\n\\]\nwhich implies that $|f'(x)-1|\\leq 2 \\epsilon^{\\gamma-\\beta}$, for all $x\\in (0,1)$, and $|f''(x)|\\leq 2 \\epsilon^{\\gamma-2\\beta}$, for all $x\\in (0,1)$. Thus,\n if $\\gamma>\\beta$ then\n\\begin{equation}\n\\label{eq: asymptotics f'}\nf'(x) = 1 + o(1) \\quad \\hbox{ as } \\epsilon\\to 0.\n\\end{equation}\n\nFor any $\\theta \\in (0,1)$, we define the following sets:\n\\begin{align*}\n&K_\\epsilon^\\theta = \\{ (x,y) \\in \\Omega: - \\epsilon^\\theta < x < 0,\\, 0 < y < \\epsilon g(0) \\}\\, , \\\\\n&\\Gamma_\\epsilon^\\theta = \\{ (-\\epsilon^\\theta, y) : 0 < y < \\epsilon g(0) \\}\\, , \\\\\n&J_\\epsilon^\\theta = \\{ (x,y) \\in R_\\epsilon : 0 < x < \\epsilon^\\theta,\\, 0 < y < \\epsilon g(x) \\}\\, .\n\\end{align*}\nMoreover, given functions $u_\\epsilon \\in H^2(\\Omega_\\epsilon)$ such that $\\norma{u_\\epsilon}_{H^2(\\Omega_\\epsilon)} \\leq R$ for all $\\epsilon>0$, we fix a cut-off function $\\chi_\\epsilon^\\gamma \\in C^\\infty(\\numberset{R})$ of the variable $x$ with $\\chi_\\epsilon^\\gamma = 0$ on $(-\\infty, -\\epsilon^\\gamma]$, $\\chi_\\epsilon^\\gamma = 1$ on $[0, +\\infty)$, $|(\\chi_\\epsilon^\\gamma)'| \\leq c_1 \\epsilon^{-\\gamma}$, $|(\\chi_\\epsilon^\\gamma)''| \\leq c_2 \\epsilon^{-2\\gamma}$, and we set\n\\begin{equation} \\label{def: u bar}\n\\bar{u}_\\epsilon(x,y) =\n\\begin{cases}\n(u_\\epsilon \\chi_\\epsilon^\\gamma)(f(x),y), & (x,y) \\in J_\\epsilon^\\beta,\\\\\nu_\\epsilon(x,y), & (x,y) \\in R_\\epsilon \\setminus J_\\epsilon^\\beta,\n\\end{cases}\n\\end{equation}\nwhere $(u_\\epsilon \\chi_\\epsilon^\\gamma)(x,y) = u_\\epsilon(x,y)\\, \\chi_\\epsilon^\\gamma(x)$. Since $f(0^+)=-\\epsilon^\\gamma$ and $\\chi_\\epsilon^\\gamma$ vanishes together with its first derivative at $-\\epsilon^\\gamma$, the function $\\bar{u}_\\epsilon$ vanishes together with its gradient at the junction (near $x=1$ the construction is analogous), so that $\\bar{u}_\\epsilon \\in H^2_{L_\\epsilon}(R_\\epsilon)$.\n\nThe following proposition is the key tool in the proof of Theorem \\ref{thm: (MP) implies (H)}.\n\n\\begin{proposition} \\label{prop: sym arg}\nLet $u_\\epsilon \\in H^2(\\Omega_\\epsilon)$ be such that $\\norma{u_\\epsilon}_{H^2(\\Omega_\\epsilon)} \\leq R$ for all $\\epsilon > 0$. Then, with the notation above and for $0<\\theta<\\frac{1}{3}$, we have\n\\begin{equation}\n \\norma{u_\\epsilon}_{L^2(J_\\epsilon^\\theta)} = O(\\epsilon^{2\\theta}), \\quad \\norma{\\nabla u_\\epsilon}_{L^2(J_\\epsilon^\\theta)}=O(\\epsilon^\\theta), \\hbox{ as } \\epsilon\\to 0\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nWe define the function $u_\\epsilon^s \\in H^2(J_\\epsilon^\\theta)$ by setting\n\\[\nu_\\epsilon^s (x,y) = -3 u_\\epsilon(-x,y) + 4 u_\\epsilon \\Bigl(-\\frac{x}{2}, y \\Bigr)\n\\]\nfor all $(x,y) \\in J_\\epsilon^\\theta$. The function $u_\\epsilon^s$ can be viewed as a higher order reflection of $u_\\epsilon$ with respect to the $y$-axis. Let us note that we can estimate the $L^2$ norm of $u^s_\\epsilon$, of its gradient and of its derivatives of order 2, in the following way:\n\\begin{align}\n&\\norma{u_\\epsilon^s}_{L^2(J_\\epsilon^\\theta)} \\leq C \\norma{u_\\epsilon}_{L^2(K_\\epsilon^\\theta)}, \\label{proof: ineq 1} \\\\\n&\\norma{\\nabla u_\\epsilon^s}_{L^2(J_\\epsilon^\\theta)} \\leq C \\norma{\\nabla u_\\epsilon}_{L^2(K_\\epsilon^\\theta)}, \\label{proof: ineq 2} \\\\\n&\\norma{D^\\alpha u_\\epsilon^s}_{L^2(J_\\epsilon^\\theta)} \\leq C\\norma{D^\\alpha u_\\epsilon}_{L^2(K_\\epsilon^\\theta)}, \\label{proof: ineq 3}\n\\end{align}\nfor any multiindex $\\alpha$ of length $2$ and for some constant $C$ independent of $\\epsilon$. To obtain the three inequalities above, we are using that the image of $K_\\epsilon^\\theta$ under the reflexion about the $y$-axis contains $J_\\epsilon^\\theta$. 
This is a consequence of (MP).\nSince the $L^2$ norms on the right-hand sides of the inequalities above are taken on a subset of $\\Omega$, we can improve the estimate of \\eqref{proof: ineq 1} and \\eqref{proof: ineq 2} using H\\\"older's inequality and Sobolev embeddings to obtain\n\\begin{equation}\\label{sobolev-1}\n\\norma{u_\\epsilon}_{L^2(K_\\epsilon^\\theta)} \\leq |K_\\epsilon^\\theta|^{1\/2} \\norma{u_\\epsilon}_{L^\\infty(\\Omega)} \\leq c \\bigl( \\epsilon^{\\theta + 1}\\bigr)^{1\/2} \\norma{u_\\epsilon}_{H^2(\\Omega)}\n\\end{equation}\nand in a similar way\n\\begin{equation}\\label{sobolev-2}\n\\norma{\\nabla u_\\epsilon}_{L^2(K_\\epsilon^\\theta)} \\leq |K_\\epsilon^\\theta|^{\\frac{1}{2} - \\frac{1}{p}} \\norma{\\nabla u_\\epsilon}_{L^p(\\Omega)} \\leq c \\bigl(\\epsilon^{\\theta + 1}\\bigr)^{\\frac{1}{2} - \\frac{1}{p}} \\norma{u_\\epsilon}_{H^2(\\Omega)}\n\\end{equation}\nfor any $2 0$, where we have used \\eqref{bound-second-derivatives}. Hence we rewrite inequality \\eqref{proof: Poincare ineq} in the following way:\n\\begin{equation} \\label{proof: decay ineq psi_eps}\n\\Big \\lVert \\frac{\\partial \\psi_\\epsilon}{\\partial x_i} \\Big \\rVert_{L^2(J_\\epsilon^\\theta)} \\leq \\frac{2}{\\pi} \\epsilon^\\theta (C R + o(1)) = O(\\epsilon^\\theta)\n\\end{equation}\nas $\\epsilon \\to 0$, for $i=1,2$.\n\nFinally, by the inequalities \\eqref{proof: decay ineq nabla u eps^s}, \\eqref{proof: decay ineq psi_eps} we deduce that\n\\begin{equation}\n\\begin{split}\n\\norma{\\nabla u_\\epsilon}_{L^2(J_\\epsilon^\\theta)} &\\leq \\norma{\\nabla \\psi_\\epsilon}_{L^2(J_\\epsilon^\\theta)} + \\norma{\\nabla u^s_\\epsilon}_{L^2(J_\\epsilon^\\theta)}\\\\\n&\\leq O(\\epsilon^\\theta) + C \\bigl(\\epsilon^{\\theta + 1}\\bigr)^{\\frac{1}{2} - \\frac{1}{p}} \\norma{u_\\epsilon}_{H^2(\\Omega)}\\leq O(\\epsilon^\\theta),\n\\end{split}\n\\end{equation}\nwhere we have used that $(\\theta + 1) (1\/2 - 1\/p) > \\theta$ for large enough $p$.\n\nIt remains to prove that $\\norma{u_\\epsilon}_{L^2(J_\\epsilon^\\theta)}= O(\\epsilon^{2\\theta})$ as $\\epsilon \\to 0$. We can repeat the argument for $u_\\epsilon$ instead of $\\partial_{x_i} u_\\epsilon$, with the difference that now we can improve the decay of $\\norma{\\psi_\\epsilon}_{L^2(J_\\epsilon^\\theta)}$ by using the one-dimensional Poincar\\'{e} inequality twice. More precisely we have that\n\\[\n\\norma{\\psi_\\epsilon}_{L^2(J_\\epsilon^\\theta)} \\leq \\Big(\\frac{2}{\\pi}\\Big)^2 \\epsilon^{2\\theta} \\bigg \\lVert \\frac{\\partial^2 \\psi_\\epsilon}{\\partial x^2} \\bigg \\rVert_{L^2(J_\\epsilon^\\theta)}\n\\]\nfrom which we deduce\n$\n\\norma{\\psi_\\epsilon}_{L^2(J_\\epsilon^\\theta)} = O(\\epsilon^{2\\theta})\n$\nas $\\epsilon \\to 0$. Hence,\n\\begin{equation}\\label{eq:estimate ueps}\n\\begin{split}\n\\norma{u_\\epsilon}_{L^2(J_\\epsilon^\\theta)} &\\leq \\norma{\\psi_\\epsilon}_{L^2(J_\\epsilon^\\theta)} + \\norma{u^s_\\epsilon}_{L^2(J_\\epsilon^\\theta)}\n\\leq O(\\epsilon^{2\\theta}) + C \\epsilon^{\\frac{\\theta + 1}{2}} \\norma{u_\\epsilon}_{H^2(\\Omega)} = O(\\epsilon^{2\\theta})\n\\end{split}\n\\end{equation}\nas $\\epsilon \\to 0$, concluding the proof.\n\\end{proof}\n\nWe can now give a proof of Theorem \\ref{thm: (MP) implies (H)}.\n\\begin{proof}[Proof of Theorem \\ref{thm: (MP) implies (H)}]\nLet $u_\\epsilon\\in H^2(\\Omega_\\epsilon)$ be such that $\\norma{u_\\epsilon}_{H^2(\\Omega_\\epsilon)}\\leq R$ for any $\\epsilon > 0$. 
We prove that the H-Condition holds if we choose $\\overline{u}_\\epsilon$ as in \\eqref{def: u bar} with $\\gamma <1\/3$. Note that $u_\\epsilon \\equiv \\overline{u}_\\epsilon$ on $R_\\epsilon \\setminus J_\\epsilon^\\beta$. Let us first estimate $\\norma{\\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)}$. By a change of variable and by \\eqref{eq: asymptotics f'} we deduce that\n\\begin{equation}\n\\begin{split}\n\\norma{\\overline{u}_\\epsilon}^2_{L^2(J_\\epsilon^\\beta)} &= \\int_0^{\\epsilon^\\beta} \\int_0^{\\epsilon g(x)} |(u_\\epsilon \\chi_\\epsilon^\\gamma)(f(x),y)|^2\\, dy dx\\\\\n&= \\int_{-\\epsilon^\\gamma}^{\\epsilon^\\beta} \\int_0^{\\epsilon g(f^{-1}(z))} |(u_\\epsilon \\chi_\\epsilon^\\gamma)(z,y)|^2 |f'(f^{-1}(z))|^{-1}\\, dy dz\\\\\n&\\leq (1+o(1)) \\int_{-\\epsilon^\\gamma}^{\\epsilon^\\beta} \\int_0^{\\epsilon g(f^{-1}(z))} |(u_\\epsilon \\chi_\\epsilon^\\gamma)(z,y)|^2 dy dz\\\\\n&\\leq (1+o(1)) \\norma{u_\\epsilon}^2_{L^2(Z_\\epsilon^\\gamma)},\n\\end{split}\n\\end{equation}\nwhere $Z_\\epsilon^\\gamma = \\{ (x,y) \\in \\Omega_\\epsilon : -\\epsilon^\\gamma < x < \\epsilon^\\beta, 0 < y < \\epsilon g(f^{-1}(x)) \\}$. Note that, since $f(x) \\leq x$ on $(0,\\epsilon^\\beta)$ and the function $g$ is non-increasing on $[0,\\delta)$ by (MP), we have $Z_\\epsilon^\\gamma\\subset K_\\epsilon^{\\gamma}\\cup J_\\epsilon^\\beta$. Hence,\n\\begin{equation} \\label{proof: estimate overline(u)}\n\\norma{\\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)}^2 \\leq (1+o(1))( \\norma{u_\\epsilon}_{L^2(K^\\gamma_\\epsilon)}^2+ \\norma{u_\\epsilon}_{L^2(J_\\epsilon^\\beta)}^2).\n\\end{equation}\nNote that the last summand in the right-hand side of \\eqref{proof: estimate overline(u)} behaves as $O(\\epsilon^{4\\beta})$ as $\\epsilon \\to 0$ because of Proposition \\ref{prop: sym arg}. Also by \\eqref{sobolev-1} with $\\theta$ replaced by $\\gamma$, we get\n\\[\n\\norma{u_\\epsilon}_{L^2(K^\\gamma_\\epsilon)} \\leq c \\epsilon^{\\frac{\\gamma+1}{2}} \\norma{u_\\epsilon}_{H^2(\\Omega)}.\n\\]\n\n\\noindent Thus,\n\\[\n\\norma{\\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)}^2 \\leq (1+o(1)) \\bigl(O(\\epsilon^{4\\beta}) + O(\\epsilon^{\\gamma+1})\\bigr) = O(\\epsilon^{4\\beta})\n\\]\nas $\\epsilon \\to 0$. We then have by Proposition \\ref{prop: sym arg} that\n\\[\n\\norma{u_\\epsilon - \\overline{u}_\\epsilon}_{L^2(R_\\epsilon)} = \\norma{u_\\epsilon - \\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)} \\leq \\norma{u_\\epsilon}_{L^2(J_\\epsilon^\\beta)} + \\norma{\\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)} = O(\\epsilon^{2\\beta})\n\\]\nas $\\epsilon \\to 0$. This concludes the proof of $(i)$ in the H-Condition.\n\nIn order to prove $(ii)$ from Definition \\ref{def: H condition}, we first need to compute $\\norma{\\nabla \\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)}$ and $\\norma{D^2 \\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)}$. 
We have\n\\[\n\\begin{split}\n&\\frac{\\partial \\overline{u}_\\epsilon}{\\partial x} (x,y)= \\Bigg[\\bigg(\\frac{\\partial u_\\epsilon }{\\partial x} \\chi_\\epsilon^\\gamma\\bigg) (f(x),y) + (u_\\epsilon (\\chi_\\epsilon^\\gamma)')(f(x),y) \\Bigg] f'(x)\\\\\n&\\frac{\\partial \\overline{u}_\\epsilon}{\\partial y} (x,y)= \\bigg(\\frac{\\partial u_\\epsilon}{\\partial y} \\chi_\\epsilon^\\gamma\\bigg)(f(x),y).\n\\end{split}\n\\]\nHence,\n\\begin{equation}\n\\label{proof: gradient estimate}\n\\begin{split}\n\\norma{\\nabla \\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)} &\\leq \\norma{f'}_{L^\\infty} \\bigl( \\norma{\\nabla u_\\epsilon (f(\\cdot),\\cdot)}_{L^2(J_\\epsilon^\\beta)} + \\norma{(u_\\epsilon (\\chi_\\epsilon^\\gamma)')(f(\\cdot),\\cdot)}_{L^2(J_\\epsilon^\\beta)}\\bigr)\\\\\n&\\leq \\norma{f'}_{L^\\infty} \\norma{f'}_{L^\\infty}^{-1\/2}\\bigl(\\norma{\\nabla u_\\epsilon}_{L^2(K_\\epsilon^\\gamma \\cup J_\\epsilon^\\beta)} + c_1 \\norma{\\epsilon^{-\\gamma} u_\\epsilon}_{L^2(K_\\epsilon^\\gamma)}\\bigr)\\\\\n&\\leq (1 + o(1)) \\bigl(\\norma{\\nabla u_\\epsilon}_{L^2(K_\\epsilon^\\gamma)} + \\norma{\\nabla u_\\epsilon}_{L^2(J_\\epsilon^\\beta)} + c_1 \\epsilon^{-\\gamma} \\norma{u_\\epsilon}_{L^2(K_\\epsilon^\\gamma)}\\bigr)\n\\end{split}\n\\end{equation}\nwhere we have used the definition of $\\chi_\\epsilon^\\gamma$ and the change of variables $(f(x), y) \\mapsto (x, y)$. By Proposition \\ref{prop: sym arg} we know that $\\norma{\\nabla u_\\epsilon}_{L^2(J_\\epsilon^\\beta)} = O(\\epsilon^{\\beta})$ as $\\epsilon \\to 0$. Moreover, by \\eqref{sobolev-1}, \\eqref{sobolev-2} with $\\theta$ replaced by $\\gamma$, we deduce that\n\\[\n\\norma{u_\\epsilon}_{L^2(K_\\epsilon^\\gamma)} = O(\\epsilon^{ \\frac{\\gamma+1}{2}} ), \\quad\\quad \\norma{\\nabla u_\\epsilon}_{L^2(K_\\epsilon^\\gamma)} = O(\\epsilon^{\\gamma_p}),\n\\]\nfor any $p < \\infty$, where we have set\n\\[\n\\gamma_p = \\biggl(\\frac{1}{2} - \\frac{1}{p}\\biggr)(\\gamma + 1).\n\\]\nFinally, we deduce by \\eqref{proof: gradient estimate} that\n\\begin{equation} \\label{proof: gradient estimate final}\n\\norma{\\nabla \\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)} \\leq (1+o(1)) (O(\\epsilon^{\\gamma_p}) + O(\\epsilon^{\\beta}) + \\epsilon^{-\\gamma} O(\\epsilon^{\\gamma_p}) ) = O(\\epsilon^{\\beta})\n\\end{equation}\nbecause $\\gamma_p-\\gamma>\\beta$, for sufficiently large $p$ (note that $\\beta < (1-\\gamma )\/2$ for $\\gamma < 1\/3$).\n\n\nWe now estimate the $L^2$ norm of $D^2 \\overline{u}_\\epsilon$. In order to simplify our notation we write $F(x,y) = (f(x),y)$, $\\chi_\\epsilon^\\gamma = \\chi$, $\\bar u_\\epsilon=\\bar u$, $u_\\epsilon=u$ and we use the subindex notation for the partial derivatives, that is, $u_x=\\frac{\\partial u}{\\partial x}$ and so on. 
First, note that\n\\begin{equation}\\label{2D-eq1}\n\\begin{split}\n&\\bar u_{xx} = \\Big[\\Big(u_{xx} \\chi + 2 u_x \\chi' + u \\chi''\\Big) \\circ F \\Big] \\cdot |f'|^2 + \\Big[\\Big(u_x \\chi + u \\chi' \\Big) \\circ F\\Big] \\cdot f'', \\\\\n&\\bar u_{xy} = \\Big[\\Big( u_{xy} \\chi + u_y \\chi' \\Big) \\circ F \\Big] \\cdot f', \\\\\n&\\bar u_{yy} =\\Big( u_{yy} \\chi \\Big) \\circ F,\n\\end{split}\n\\end{equation}\nand we may write\n\\begin{equation*}\n\\bar u_{xx} = [u_{xx} \\chi \\circ F]\\cdot |f'|^2+ R_1, \\quad \\bar u_{xy} = [u_{xy} \\chi \\circ F] \\cdot f'+ R_2, \\quad \\bar u_{yy} = u_{yy} \\chi \\circ F.\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\begin{split}\n&R_1= \\Big[\\Big( 2 u_x \\chi' + u \\chi''\\Big) \\circ F \\Big] \\cdot |f'|^2 + \\Big[\\Big(u_x \\chi + u \\chi' \\Big) \\circ F\\Big] \\cdot f'', \\\\\n&R_2 = u_y \\chi' \\circ F \\cdot f'.\n\\end{split}\n\\end{equation*}\n\nWe now show that $\\|R_1\\|_{L^2(J_\\epsilon^\\beta)}=o(1)$, $\\|R_2\\|_{L^2(J_\\epsilon^\\beta)}=o(1)$ as $\\epsilon\\to 0$. For this, we will prove that each single term in $R_1$ and $R_2$ is $o(1)$ as $\\epsilon\\to 0$. Recall that $f'(x)=1+o(1)$ and $f''(x)=o(1)$, $\\chi'=O(\\epsilon^{-\\gamma})$ and $\\chi''=O(\\epsilon^{-2\\gamma})$ for $x\\in (0,\\epsilon^\\beta)$. By a change of variables, by the Sobolev Embedding Theorem and the definition of $\\chi$ it is easy to deduce that\n\\begin{align*}\n&\\norma{(u_x \\chi')\\circ F}_{L^2(J_\\epsilon^\\beta)} \\leq (1+o(1)) \\norma{u_x \\chi'}_{L^2(K^\\gamma_\\epsilon)} \\leq C R \\epsilon^{\\gamma_p - \\gamma} = O(\\epsilon^\\beta)\\\\\n&\\norma{(u \\chi'')\\circ F}_{L^2(J_\\epsilon^\\beta)} \\leq c_2 (1+o(1)) \\norma{u \\epsilon^{-2\\gamma}}_{L^2(K_\\epsilon^\\gamma)} \\leq C R \\epsilon^{\\frac{1-3\\gamma}{2}}\\\\\n&\\norma{(u_y \\chi')\\circ F}_{L^2(J_\\epsilon^\\beta)} \\leq c_1 (1+o(1)) \\norma{\\epsilon^{-\\gamma} u_y}_{L^2(K^\\gamma_\\epsilon)} \\leq C R \\epsilon^{\\gamma_p - \\gamma}= O(\\epsilon^\\beta)\\, .\n\\end{align*}\nBy \\eqref{proof: gradient estimate final} we also have\n\\begin{equation}\n\\norma{(u_x \\chi + u \\chi' ) \\circ F}_{L^2(J_\\epsilon^\\beta)} \\leq (1+o(1)) \\norma{\\nabla \\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)} = O(\\epsilon^\\beta).\n\\end{equation}\nHence the $L^2$ norms of $R_1$, $R_2$ vanish as $\\epsilon \\to 0$. In particular,\n\\begin{equation*}\n\\norma{D^2 \\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)} = (1 + o(1))\\norma{D^2 u_\\epsilon}_{L^2(K_\\epsilon^\\gamma \\cup J_\\epsilon^\\beta)} + O(\\epsilon^{\\frac{1-3\\gamma}{2}}) + O(\\epsilon^\\beta),\n\\end{equation*}\nas $\\epsilon\\to 0$. In a similar way we can also prove that\n\\begin{equation*}\n\\norma{\\Delta \\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)} = (1 + o(1))\\norma{\\Delta u_\\epsilon}_{L^2(K_\\epsilon^\\gamma \\cup J_\\epsilon^\\beta)} + O(\\epsilon^{\\frac{1-3\\gamma}{2}}) + O(\\epsilon^\\beta),\n\\end{equation*}\nas $\\epsilon\\to 0$. 
Hence,\n\\begin{multline}\n\\label{proof: channel energy}\n(1-\\sigma) \\norma{D^2 \\overline{u}_\\epsilon}^2_{L^2(J_\\epsilon^\\beta)} + \\sigma \\norma{\\Delta \\overline{u}_\\epsilon}^2_{L^2(J_\\epsilon^\\beta)} + \\tau \\norma{\\nabla \\overline{u}_\\epsilon }^2_{L^2(J_\\epsilon^\\beta)}\\\\\n =(1-\\sigma)\\norma{D^2 u_\\epsilon}^2_{L^2(K_\\epsilon^\\gamma \\cup J_\\epsilon^\\beta)} + \\sigma \\norma{\\Delta u_\\epsilon}^2_{L^2(K_\\epsilon^\\gamma \\cup J_\\epsilon^\\beta)} + o(1).\n\\end{multline}\nBy adding to both handsides of \\eqref{proof: channel energy} {\\small$(1-\\sigma)\\norma{D^2 \\overline{u}_\\epsilon}^2_{L^2(R_\\epsilon \\setminus J_\\epsilon^\\beta)}$, $\\sigma \\norma{\\Delta \\overline{u}_\\epsilon}^2_{L^2(R_\\epsilon \\setminus J_\\epsilon^\\beta)}$} and the lower order term $\\tau \\norma{\\nabla \\overline{u}_\\epsilon }^2_{L^2(R_\\epsilon \\setminus J_\\epsilon^\\beta)}$, and keeping in account that $\\overline{u}_\\epsilon \\equiv u_\\epsilon$ on $R_\\epsilon \\setminus J_\\epsilon^\\beta$ we deduce that\n\\begin{multline}\\label{mon}\n(1-\\sigma) \\norma{D^2 \\overline{u}_\\epsilon}^2_{L^2(R_\\epsilon)} + \\sigma \\norma{\\Delta \\overline{u}_\\epsilon}^2_{L^2(R_\\epsilon)} + \\tau \\norma{\\nabla \\overline{u}_\\epsilon }^2_{L^2(R_\\epsilon)}\\\\\n = (1-\\sigma)\\norma{D^2 u_\\epsilon}^2_{L^2(K_\\epsilon^\\gamma \\cup R_\\epsilon)} + \\sigma\\norma{\\Delta u_\\epsilon}^2_{L^2(K_\\epsilon^\\gamma \\cup R_\\epsilon)} + \\tau \\norma{\\nabla u_\\epsilon }^2_{L^2(R_\\epsilon \\setminus J_\\epsilon^\\beta)} + o(1)\\\\\n\\leq (1-\\sigma)\\norma{D^2 u_\\epsilon}^2_{L^2(\\Omega_\\epsilon)} + \\sigma \\norma{\\Delta u_\\epsilon}^2_{L^2(\\Omega_\\epsilon)} + \\tau \\norma{\\nabla u_\\epsilon }^2_{L^2(\\Omega_\\epsilon)} + o(1),\n\\end{multline}\nas $\\epsilon \\to 0$, concluding the proof of $(ii)$ in the H-Condition.\n Note that in \\eqref{mon}, we have used the monotonicity of the quadratic form with respect to inclusion of sets. Such property is straightforward\nfor $\\sigma \\in [0,1)$. In the case $\\sigma \\in (-1,0)$ it follows by observing that\n\\begin{multline*}\n(1-\\sigma) \\bigl[ u^2_{xx} + 2 u^2_{xy} + u^2_{yy} \\bigr] + \\sigma \\bigl[ u^2_{xx} + 2 u_{xx} u_{yy} + u^2_{yy} \\bigr]\\\\\n \\geq u^2_{xx} + u^2_{yy} + \\sigma (u^2_{xx} + u^2_{yy} ) = (1+\\sigma) (u^2_{xx} + u^2_{yy} ) > 0,\n\\end{multline*}\nfor all $u \\in H^2(\\Omega_\\epsilon)$.\n\\end{proof}\n\n\n\n\n\n\n\n\\section{Asymptotic analysis on the thin domain}\n\\label{sec: thin plates}\nThe purpose of this section is to study the convergence of the eigenvalue problem \\eqref{PDE: R_eps} as $\\epsilon \\to 0$. 
Since the thin domain $R_\\epsilon$ is shrinking to the segment $(0,1)$ as $\\epsilon \\to 0$, we plan to identify the limiting problem in $(0,1)$ and to prove that the resolvent operator of problem \\eqref{PDE: R_eps} converges as $\\epsilon \\to 0$ to the resolvent operator of the limiting problem in a suitable sense which guarantees the spectral convergence.\n\nMore precisely, we shall prove that the the limiting eigenvalue problem in $[0,1]$ is\n\\begin{equation}\\label{classiceigenode}\n\\begin{cases}\n\\frac{1-\\sigma^2}{g} (gh'')''- \\frac{\\tau}{g}(gh')' + h = \\theta h, &\\text{in $(0,1)$,}\\\\\nh(0)=h(1)=0,&\\\\\nh'(0)=h'(1)=0.&\n\\end{cases}\n\\end{equation}\nNote that the weak formulation of (\\ref{classiceigenode}) is\n\\[\n(1-\\sigma^2)\\int_0^1 h''\\psi''gdx+\\tau \\int_0^1h'\\psi'gdx+\\int_0^1h\\psi g dx=\\theta \\int_0^1h\\psi g dx,\n\\]\nfor all $\\psi\\in H^2_0(0,1)$, where $h$ is to be found in the Sobolev space $H^2_0(0,1)$. In the sequel, we shall denote by $L^2_g(0,1)$ the Hilbert space $L^2((0,1); g(x)dx)$.\n\n\\subsection{Finding the limiting problem}\n\\label{subsection: finding limit prb}\n\nIn order to use thin domain techniques in the spirit of \\cite{HR}, we need to fix a reference domain $R_1$ and pull-back the eigenvalue problem defined on $R_{\\epsilon}$ onto $R_1$ by means of a suitable diffeomorphism.\n\nLet $R_1$ be the rescaled domain obtained by setting $\\epsilon = 1$ in the definition of $R_\\epsilon$ (see \\eqref{def: R_eps}). For any fixed $\\epsilon >0$, let $\\Phi_\\epsilon$ be the map from $R_1$ to $R_\\epsilon$ defined by $\\Phi_\\epsilon(x',y') = (x', \\epsilon y')= (x,y)$ for all $(x',y') \\in R_1$. We consider the composition operator $T_\\epsilon$ from $L^2(R_\\epsilon; \\epsilon^{-1}dxdy)$ to $L^2(R_1)$ defined by\n\\[\nT_\\epsilon u(x',y') = u \\circ \\Phi_\\epsilon (x', y') = u(x', \\epsilon y')\\, ,\n\\]\nfor all $u \\in L^2(R_\\epsilon)$, $(x',y') \\in R_1$. 
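\n\nAlthough it is not needed for the proofs, the limiting problem \\eqref{classiceigenode} can easily be explored numerically. The sketch below (a rough illustration, not taken from the original computations) uses a standard finite-difference discretization in the special case $g\\equiv 1$, where \\eqref{classiceigenode} reduces to $(1-\\sigma^2)h''''-\\tau h''+h=\\theta h$ with the clamped conditions $h(0)=h(1)=h'(0)=h'(1)=0$; the grid size and the values of $\\sigma$, $\\tau$ are arbitrary sample choices.\n\\begin{verbatim}\nimport numpy as np\n\nsigma, tau = 0.3, 1.0\nn = 400                          # number of interior grid points\ndx = 1.0 / (n + 1)\n\n# fourth difference with clamped ends: the ghost values coming from\n# h(0) = h'(0) = 0 modify the first and last diagonal entries to 7/dx^4\nD4 = np.diag(6.0 * np.ones(n))\nD4 += np.diag(-4.0 * np.ones(n - 1), 1) + np.diag(-4.0 * np.ones(n - 1), -1)\nD4 += np.diag(np.ones(n - 2), 2) + np.diag(np.ones(n - 2), -2)\nD4 /= dx**4\nD4[0, 0] = D4[-1, -1] = 7.0 / dx**4\n\n# standard second difference (the Dirichlet part of the clamped conditions)\nD2 = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)\nD2 /= dx**2\n\nA = (1.0 - sigma**2) * D4 - tau * D2 + np.eye(n)\ntheta = np.sort(np.linalg.eigvalsh(A))\nprint(theta[:4])\n# sanity check for tau = 0: theta_1 should approach 1 + (1 - sigma^2)*k^4\n# with k = 4.7300407..., the first clamped-beam frequency parameter\n\\end{verbatim}\n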
We also endow the spaces $H^2(R_1)$ and $H^2(R_\\epsilon)$ with the norms defined by\n\\begin{multline}\n\\|\\varphi\\|_{H^2_{\\epsilon, \\sigma, \\tau}(R_1)}^2 =\\int_{R_1} \\Bigg((1-\\sigma) \\Bigg[ \\abs*{\\frac{\\partial^2 \\varphi}{\\partial x^2}}^2 + \\frac{2}{\\epsilon^2}\\abs*{\\frac{\\partial^2 \\varphi}{\\partial x \\partial y}}^2 + \\frac{1}{\\epsilon^4} \\abs*{\\frac{\\partial^2 \\varphi}{\\partial y^2}}^2 \\Bigg]\\\\\n+ \\sigma \\abs*{\\frac{\\partial^2 \\varphi}{\\partial x^2} + \\frac{1}{\\epsilon^2}\\frac{\\partial^2 \\varphi}{\\partial y^2}}^2 + \\tau \\Bigg[ \\abs*{\\frac{\\partial \\varphi}{\\partial x}}^2 + \\frac{1}{\\epsilon}\\abs*{\\frac{\\partial \\varphi}{\\partial y}}^2 \\Bigg] + \\abs{\\varphi}^2\\, \\Bigg)dxdy\\, ,\n\\end{multline}\n\n\\begin{multline}\n\\|\\varphi\\|_{H^2_{\\sigma, \\tau}(R_\\epsilon)}^2 =\\int_{R_\\epsilon} \\Bigg((1-\\sigma) \\Bigg[ \\abs*{\\frac{\\partial^2 \\varphi}{\\partial x^2}}^2 + 2\\abs*{\\frac{\\partial^2 \\varphi}{\\partial x \\partial y}}^2 + \\abs*{\\frac{\\partial^2 \\varphi}{\\partial y^2}}^2 \\Bigg]\\\\\n+ \\sigma \\abs*{\\frac{\\partial^2 \\varphi}{\\partial x^2}+ \\frac{\\partial^2 \\varphi}{\\partial y^2}}^2 + \\tau \\Bigg[ \\abs*{\\frac{\\partial \\varphi}{\\partial x}}^2 + \\abs*{\\frac{\\partial \\varphi}{\\partial y}}^2 \\Bigg] + \\abs{\\varphi}^2\\,\\Bigg) dxdy\\, .\n\\end{multline}\nIt is not difficult to see that if $\\varphi\\in H^2(R_\\epsilon)$ then\n\\[\\|T_\\epsilon \\varphi\\|_{H^2_{\\epsilon,\\sigma,\\tau}(R_1)}^2= \\epsilon^{-1} \\|\\varphi\\|_{H^2_{\\sigma,\\tau}(R_\\epsilon)}^2.\\]\n\nWe consider the following Poisson problem with datum $f_\\epsilon \\in L^2(R_\\epsilon)$:\n\\begin{equation} \\label{PDE: R_eps f_eps}\n\\begin{cases}\n\\Delta^2 v_\\epsilon - \\tau \\Delta v_\\epsilon + v_\\epsilon = f_\\epsilon, &\\text{in $R_\\epsilon$},\\\\\n(1-\\sigma) \\frac{\\partial^2 v_\\epsilon}{\\partial n_\\epsilon^2} + \\sigma \\Delta v_\\epsilon = 0, &\\textup{on $\\Gamma_\\epsilon$},\\\\\n\\tau \\frac{\\partial v_\\epsilon}{\\partial n_\\epsilon} - (1-\\sigma) \\Div_{\\partial \\Omega_\\epsilon}(D^2v_\\epsilon \\cdot n_\\epsilon)_{\\partial \\Omega_\\epsilon} - \\frac{\\partial(\\Delta v_\\epsilon)}{\\partial n_\\epsilon} = 0, &\\textup{on $\\Gamma_\\epsilon$,}\\\\\nv = 0 = \\frac{\\partial v_\\epsilon}{\\partial n_\\epsilon}, &\\text{on $L_\\epsilon$.}\n\\end{cases}\n\\end{equation}\nNote that the energy space associated with Problem \\eqref{PDE: R_eps f_eps} is exactly $H^2_{L_{\\epsilon}}(R_\\epsilon)$.\nBy setting $\\tilde{v}_\\epsilon = v_\\epsilon (x', \\epsilon y')$, $\\tilde{f}_\\epsilon = f(x', \\epsilon y')$ and pulling-back problem (\\ref{PDE: R_eps f_eps}) to $R_1$ by means of $\\Phi_\\epsilon$, we get the following equivalent problem in $R_1$ in the unknown $\\tilde{v}_\\epsilon $ (we use again the variables $(x,y)$ instead of $(x',y')$ to simplify the notation):\n{\\small \\begin{equation} \\label{PDE: R_1}\n\\begin{cases}\n\\frac{\\partial^4 \\tilde{v}_\\epsilon}{\\partial x^4} + \\frac{2}{\\epsilon^2} \\frac{\\partial^4 \\tilde{v}_\\epsilon}{\\partial x^2 \\partial y^2} + \\frac{1}{\\epsilon^4} \\frac{\\partial^4 \\tilde{v}_\\epsilon}{\\partial y^4} - \\tau \\Big( \\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial x^2} + \\frac{1}{\\epsilon^2} \\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial y^2} \\Big) + \\tilde{v}_\\epsilon = \\tilde{f}_\\epsilon, &\\text{in $R_1$},\\\\\n(1-\\sigma) \\Big( \\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial x^2}\\tilde{n}_x^2 + \\frac{2}{\\epsilon} 
\\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial x \\partial y}\\tilde{n}_x \\tilde{n}_y + \\frac{1}{\\epsilon^2}\\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial y^2}\\tilde{n}_y^2 \\Big) + \\sigma \\Big( \\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial x^2} + \\frac{1}{\\epsilon^2} \\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial y^2} \\Big)= 0, &\\textup{on $\\Gamma_1$},\\\\\n\\tau \\Big(\\frac{\\partial \\tilde{v}_\\epsilon}{\\partial x} \\tilde{n}_x + \\frac{1}{\\epsilon} \\frac{\\partial \\tilde{v}_\\epsilon}{\\partial y} \\tilde{n}_y \\Big) - (1-\\sigma) \\Div_{\\Gamma_{1,\\epsilon}}(D_\\epsilon^2 \\tilde{v}_\\epsilon \\cdot \\tilde{n}){_{\\Gamma_{1,\\epsilon}} }- \\nabla_\\epsilon(\\Delta_\\epsilon \\tilde{v}_\\epsilon) \\cdot \\tilde{n} = 0, &\\textup{on $\\Gamma_{1}$,}\\\\\n\\tilde{v}_\\epsilon = 0 = \\frac{\\partial \\tilde{v}_\\epsilon}{\\partial x} n_x + \\frac{1}{\\epsilon}\\frac{\\partial \\tilde{v}_\\epsilon}{\\partial y} \\tilde{n}_y , &\\text{on $L_1$.}\n\\end{cases}\n\\end{equation}}\nHere $\\tilde{n} = (\\tilde{n}_x, \\tilde{n}_y) = (n_x, \\epsilon^{-1}n_y)$ and the operators $\\Delta_\\epsilon, \\nabla_\\epsilon$ are the standard differential operators associated with $(\\partial_x, \\epsilon^{-1} \\partial_y)$. Moreover,\n\\[\\Div_{\\Gamma_{1,\\epsilon}} F = \\frac{\\partial F_1}{\\partial x} + \\frac{1}{\\epsilon}\\frac{\\partial F_2}{ \\partial y} - \\tilde{n}_\\epsilon \\nabla_{\\!\\epsilon} F\\, \\tilde{n}_\\epsilon\\]\nand $(F)_{\\Gamma_{1,\\epsilon}} = F - (F, \\tilde{n})\\, \\tilde{n}$ for any vector field $F =(F_1,F_2)$.\n\nAssume now that the data $f_{\\epsilon }$, $\\epsilon>0$ are such that $(\\tilde{f}_\\epsilon)_{\\epsilon>0}$ is an equibounded family in $L^2(R_1)$, i.e.,\n\\begin{equation} \\label{hypotesis on f eps}\n\\int_{R_1} \\abs{\\tilde{f}_\\epsilon}^2\\,dxdy' \\leq c, \\quad \\textup{or equivalently} \\quad \\int_{R_\\epsilon} \\abs*{f_\\epsilon}^2 dxdy \\leq c \\epsilon\\, ,\n\\end{equation}\nfor all $\\epsilon>0$, where $c$ is a positive constant not depending on $\\epsilon$.\n\nWe plan to pass to the limit in \\eqref{PDE: R_1} as $\\epsilon \\to 0$ by arguing as follows. If $\\tilde{v}_\\epsilon \\in H^2_{L_1}(R_1)$ is the solution to problem \\eqref{PDE: R_1}, then we have the following integral equality\n\\small\n\\begin{multline}\n\\label{eq: weak formulation R1}\n(1-\\sigma) \\int_{R_1} \\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial x^2} \\frac{\\partial^2 \\varphi }{\\partial x^2} + \\frac{2}{\\epsilon^2} \\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial x \\partial y} \\frac{\\partial^2 \\varphi }{\\partial x \\partial y} + \\frac{1}{\\epsilon^4} \\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial y^2} \\frac{\\partial^2 \\varphi }{\\partial y^2} dx\\\\\n+ \\sigma \\int_{R_1} \\Big(\\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial x^2} + \\frac{1}{\\epsilon^2} \\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial y^2} \\Big) \\Big( \\frac{\\partial^2 \\varphi }{\\partial x^2} + \\frac{1}{\\epsilon^2} \\frac{\\partial^2 \\varphi }{\\partial y^2} \\Big) dx\\\\\n + \\tau \\int_{R_1} \\frac{\\partial \\tilde{v}_\\epsilon }{\\partial x} \\frac{\\partial \\varphi }{\\partial x} + \\frac{1}{\\epsilon^2} \\frac{\\partial \\tilde{v}_\\epsilon }{\\partial y} \\frac{\\partial \\varphi }{\\partial y} dx + \\int_{R_1} \\tilde{v}_\\epsilon \\varphi dx = \\int_{R_1} \\tilde{f}_\\epsilon \\varphi dx\n\\end{multline}\n\\normalsize\nfor all $\\varphi \\in H^2_{L_1}(R_1)$. 
By choosing $\\varphi = \\tilde{v}_\\epsilon$ we deduce the following apriori estimate:\n\\small\n\\begin{multline}\n\\label{ineq: apriori ineq tilde v eps}\n(1-\\sigma) \\int_{R_1} \\abs*{\\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial x^2}}^2 + \\frac{2}{\\epsilon^2} \\abs*{\\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial x \\partial y}}^2 + \\frac{1}{\\epsilon^4} \\abs*{\\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial y^2}}^2 dx + \\sigma \\int_{R_1} \\abs*{\\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial x^2} + \\frac{1}{\\epsilon^2} \\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial y^2}}^2 dx\\\\\n+ \\tau \\int_{R_1} \\abs*{\\frac{\\partial \\tilde{v}_\\epsilon }{\\partial x}}^2 + \\frac{1}{\\epsilon^2} \\abs*{\\frac{\\partial \\tilde{v}_\\epsilon }{\\partial y}}^2 dx + \\int_{R_1} \\abs{\\tilde{v}_\\epsilon}^2 dx \\leq \\frac{1}{2} \\int_{R_1} \\abs{\\tilde{f}_\\epsilon}^2\\,dx + \\frac{1}{2} \\int_{R_1} \\abs{\\tilde{v}_\\epsilon}^2\\,dx\n\\end{multline}\n\\normalsize\nfor all $\\epsilon>0$. This implies that $\\norma{\\tilde{v}_\\epsilon}_{H^2_{\\epsilon,\\sigma, \\tau}(R_1)} \\leq C$ for all $\\epsilon> 0$, in particular $\\norma{\\tilde{v}_\\epsilon}_{H^2(R_1)} \\leq C(\\sigma, \\tau)$ for all $\\epsilon>0$; hence, there exists $v \\in H^2(R_1)$ such that, up to a subsequence\n$\\tilde{v}_\\epsilon \\to v$, weakly in $H^2(R_1)$, strongly in $H^1(R_1)$. Moreover from \\eqref{ineq: apriori ineq tilde v eps} we deduce that\n\\begin{align}\n\\label{ineq: decay y derivatives 1} &\\norma*{\\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial x \\partial y}}_{L^2(R_1)} \\leq C \\epsilon, \\quad\\quad \\norma*{\\frac{\\partial \\tilde{v}_\\epsilon}{\\partial y}}_{L^2(R_1)} \\leq C \\epsilon , \\\\\n\\label{ineq: decay y derivatives 2}&\\norma*{\\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial y^2}}_{L^2(R_1)} \\leq C \\epsilon^2, \\end{align}\nfor all $\\epsilon > 0$, hence there exists $u \\in L^2(R_1)$ such that, up to a subsequence\n\\begin{equation}\\label{weakly-to-u}\n\\frac{1}{\\epsilon^2} \\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial y^2} \\rightharpoonup u, \\hbox{ weakly in }L^2(R_1)\n\\end{equation}\n as $\\epsilon \\to 0$. By \\eqref{ineq: decay y derivatives 1} we deduce that the limit function $v$ is constant in $y$. Indeed, if we choose any function $\\phi \\in C^\\infty_c(R_1)$, then\n\\[\n\\int_{R_1} v \\frac{\\partial \\phi}{ \\partial y} = \\lim_{\\epsilon \\to 0} \\int_{R_1} \\tilde{v}_\\epsilon \\frac{\\partial \\phi}{ \\partial y} = - \\lim_{\\epsilon \\to 0} \\int_{R_1} \\frac{\\partial \\tilde{v}_\\epsilon}{\\partial y} \\phi = 0,\n\\]\nhence $\\frac{\\partial v}{\\partial y} = 0$ and then $v(x,y) \\equiv v(x)$ for almost all $(x,y) \\in R_1$. This suggests to choose test functions $\\psi$ depending only on $x$ in the weak formulation \\eqref{eq: weak formulation R1}. Possibly passing to a subsequence, there exists $f \\in L^2(R_1)$ such that\n\\[\n\\tilde{f}_\\epsilon \\rightharpoonup f \\quad \\quad \\text{in $L^2(R_1)$, as $\\epsilon \\to 0$}.\n\\]\nLet $\\psi \\in H^2_0(0,1)$. Then $\\psi \\in H^2(R_1)$ (here it is understood that the function is extended to the whole of $R_1$ by setting $\\psi(x,y) = \\psi(x)$ for all $(x,y) \\in R_1$) and clearly $\\psi \\equiv 0$ on $L_1$. 
Use $\\psi$ as a test function in \\eqref{eq: weak formulation R1}, pass to the limit as $\\epsilon \\to 0$ and consider \\eqref{weakly-to-u} to get\n\\begin{equation} \\label{eq: limit probl weak x}\n\\int_0^1 \\Big( \\frac{\\partial^2 v}{\\partial x^2} \\frac{\\partial^2 \\psi}{\\partial x^2} + \\sigma \\mathcal{M}(u) \\frac{\\partial^2 \\psi}{\\partial x^2} + \\tau \\frac{\\partial v}{\\partial x} \\frac{\\partial \\psi}{\\partial x} + v \\psi \\Big) g(x)\\, dx = \\int_0^1 \\mathcal{M}(f) \\psi g(x)\\, dx\n\\end{equation}\nfor all $\\psi \\in H^2_0(0,1)$.\nHere, the averaging operator $\\mathcal{M} $ is defined from $L^2(R_1)$ to $L^2_g(0,1)$ by\n\\[\n\\mathcal{M} h (x) = \\frac{1}{ g(x)} \\int_0^{ g(x)} h(x,y)\\, dy\\, ,\n\\]\nfor all $h\\in L^2(R_1)$ and for almost all $x \\in (0,1)$.\n\nFrom \\eqref{eq: limit probl weak x} we deduce that\n\\[\n\\frac{1}{g} (v'' g)'' + \\frac{\\sigma}{g} (\\mathcal{M}(u) g)'' - \\frac{\\tau}{g} (v' g)' + v = \\mathcal{M}(f), \\quad\\quad \\text{in (0,1)},\n\\]\nwhere the equality is understood in the sense of distributions.\\\\\nComing back to \\eqref{eq: weak formulation R1} we may also choose test functions $\\varphi(x,y) = \\epsilon^2 \\zeta(x,y)$, where $\\zeta \\in H_{L_1}^2(R_1)$. Using \\eqref{ineq: decay y derivatives 1}, \\eqref{ineq: decay y derivatives 2} and letting $\\epsilon \\to 0$ we deduce\n\\begin{equation*}\n(1-\\sigma) \\int_{R_1} u \\frac{\\partial^2 \\zeta}{ \\partial y^2} + \\sigma \\int_{R_1}\\Big( \\frac{\\partial^2 v}{\\partial x^2} \\frac{\\partial^2 \\zeta}{\\partial y^2} + u \\frac{\\partial^2 \\zeta}{\\partial y^2} \\Big) = 0\n\\end{equation*}\nwhich can be rewritten as\n\\begin{equation}\n\\label{eq: identity 2 order deriv}\n\\int_{R_1} \\Big( u + \\sigma \\frac{\\partial^2 v}{\\partial x^2} \\Big) \\frac{\\partial^2 \\zeta}{\\partial y^2} = 0\n\\end{equation}\nfor all $\\zeta \\in H_{L_1}^2(R_1)$. In particular this holds for all $\\zeta \\in C^\\infty_c(R_1)$, hence there exists the second order derivative\n\\begin{equation} \\label{eq: second derivative yy = 0}\n\\frac{\\partial^2}{\\partial y^2}\\Big( u + \\sigma \\frac{\\partial^2 v}{\\partial x^2} \\Big) = 0.\n\\end{equation}\nHence, $u(x,y) + \\sigma \\frac{\\partial^2 v}{\\partial x^2} = \\psi_1(x) + \\psi_2(x) y$ for almost all $(x,y) \\in R_1$ and for some functions $\\psi_1, \\psi_2 \\in L^2(R_1)$, and then \\eqref{eq: identity 2 order deriv} can be written as\n\\begin{equation} \\label{eq: limit probl weak y}\n \\int_{R_1} (\\psi_1(x)+y\\psi_2(x)) \\frac{\\partial^2 \\zeta}{\\partial y^2} = 0\n\\end{equation}\nIntegrating twice by parts in $y$ in equation \\eqref{eq: limit probl weak y} we deduce that\n\\begin{equation}\n\\label{eq: boundary identity}\n- \\int_{\\partial R_1} \\psi_2(x) \\zeta n_y dS + \\int_{\\partial R_1} (\\psi_1(x)+y\\psi_2(x)) \\frac{\\partial \\zeta}{\\partial y} n_y dS = 0\n\\end{equation}\nfor all $\\zeta \\in H_{L_1}^2(R_1)$. We are going to choose now particular functions $\\zeta$ in \\eqref{eq: boundary identity}. Consider first $b=\\frac{1}{2}\\min_{x\\in [0,1]} g(x)>0$ so that the rectangle $(0,1)\\times (0,b)\\subset R_1$ and consider a function $\\eta=\\eta(y)$ with $\\eta\\in C^\\infty(0,b)$ such that $\\eta(y)=1+\\alpha y$ for $00$, be a family of Hilbert spaces. 
We assume the existence of a family of linear operators $\\mathcal{E}_\\epsilon \\in \\mathcal{L}(\\mathcal{H}_0, \\mathcal{H}_\\epsilon)$, $\\epsilon >0$, such that\n\\begin{equation}\n\\label{def: basic property E_eps}\n\\norma{\\mathcal{E}_\\epsilon u_0}_{\\mathcal{H}_\\epsilon} \\to \\norma{u_0}_{\\mathcal{H}_0},\\ \\ {\\rm as}\\ \\epsilon\\to 0,\n\\end{equation}\nfor all $u_0 \\in \\mathcal{H}_0$.\n\n\n\n\\begin{definition} Let $\\mathcal{H}_\\epsilon$ and $\\mathcal{E}_\\epsilon$ be as above.\n\\begin{enumerate}[label =(\\roman*)]\n\\item Let $u_\\epsilon\\in \\mathcal{H}_\\epsilon$, $\\epsilon >0$. We say that $u_\\epsilon$ $\\mathcal{E}$-converges to $u$ as $\\epsilon \\to 0$ if $\\norma{u_\\epsilon - \\mathcal{E}_\\epsilon u}_{\\mathcal{H}_\\epsilon} \\to 0$ as $\\epsilon \\to 0$. We write $u_\\epsilon \\overset{E}{\\longrightarrow} u$.\n\\item Let $ B_\\epsilon \\in \\mathcal{L}(\\mathcal{H}_\\epsilon)$, $\\epsilon >0$. We say that $B_\\epsilon$ $\\mathcal{E}\\E$-converges to a linear operator $B_0 \\in \\mathcal{L}(\\mathcal{H}_0)$ if $B_\\epsilon u_\\epsilon \\overset{E}{\\longrightarrow} B_0 u$ whenever $u_\\epsilon \\overset{E}{\\longrightarrow} u\\in \\mathcal{H}_0$. We write $B_\\epsilon \\overset{EE}{\\longrightarrow} B_0$.\n\\item Let $ B_\\epsilon \\in \\mathcal{L}(\\mathcal{H}_\\epsilon)$, $ \\epsilon >0$. We say that $B_\\epsilon$ compactly converges to $B_0 \\in \\mathcal{L}(\\mathcal{H}_0)$ (and we write $B_\\epsilon \\overset{C}{\\longrightarrow} B_0$) if the following two conditions are satisfied:\n \\begin{enumerate}[label=(\\alph*)]\n \\item $B_\\epsilon \\overset{EE}{\\longrightarrow} B_0$ as $\\epsilon \\to 0$;\n \\item for any family $u_\\epsilon \\in \\mathcal{H}_{\\epsilon}$, $\\epsilon>0$, such that $\\norma{u_\\epsilon}_{\\mathcal{H}_\\epsilon}=1$ for all $\\epsilon \\in (0,1)$, there exists a subsequence $B_{\\epsilon_k}u_{\\epsilon_k}$ of $B_\\epsilon u_\\epsilon$ and $\\bar{u} \\in \\mathcal{H}_0$ such that $B_{\\epsilon_k}u_{\\epsilon_k} \\overset{E}{\\longrightarrow} \\bar{u}$ as $k \\to \\infty$.\n \\end{enumerate}\n\\end{enumerate}\n\\end{definition}\n\nFor any $\\epsilon \\geq 0$, let $A_\\epsilon$ be a (densely defined) closed, nonnegative differential operator on $\\mathcal{H}_\\epsilon$ with domain $\\mathscr{D}(A_\\epsilon) \\subset \\mathcal{H}_\\epsilon$. We assume for simplicity that $0$ does not belong to the spectrum of $A_{\\epsilon}$ and that\n\\[\n\\textup{(H1): $A_\\epsilon$ has compact resolvent $B_\\epsilon := A_\\epsilon^{-1}$ for any $\\epsilon \\in [0,1)$,}\n\\]\nand\n\\[\n\\textup{(H2): $B_\\epsilon \\overset{C}{\\longrightarrow} B_0 $, as $\\epsilon \\to 0$.}\n\\]\nGiven an eigenvalue $\\lambda$ of $A_0$ we consider the generalized eigenspace $S(\\lambda, A_0) := Q(\\lambda, A_0)\\mathcal{H}_0$, where\n\\[\nQ(\\lambda, A_0) = \\frac{1}{2\\pi i} \\int_{|\\xi - \\lambda| = \\delta} (\\xi \\mathbb{I} - A_0)^{-1}\\, d\\xi\n\\]\nand $\\delta > 0$ is such that the disk $\\{\\xi \\in \\mathbb{C} : |\\xi - \\lambda| \\leq \\delta \\}$ does not contain any eigenvalue except for $\\lambda$. 
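\n\nIn finite dimensions the projection $Q(\\lambda, A_0)$ can be computed directly by discretizing the contour integral. The toy sketch below (a diagonal matrix standing in for $A_0$, trapezoidal rule on the circle $|\\xi-\\lambda|=\\delta$) is only meant to illustrate that $Q$ is a projection whose rank equals the multiplicity of $\\lambda$.\n\\begin{verbatim}\nimport numpy as np\n\nA = np.diag([1.0, 1.0, 3.0, 6.0])   # eigenvalue 1.0 has multiplicity 2\nlam, delta, m = 1.0, 0.5, 400       # circle of radius delta, m quadrature nodes\nI = np.eye(4, dtype=complex)\n\nQ = np.zeros((4, 4), dtype=complex)\nfor k in range(m):                  # trapezoidal rule on the circle\n    t = 2.0 * np.pi * k / m\n    xi = lam + delta * np.exp(1j * t)\n    dxi = 1j * delta * np.exp(1j * t) * (2.0 * np.pi / m)\n    Q += np.linalg.solve(xi * I - A, I) * dxi\nQ /= 2.0j * np.pi\n\nprint(np.allclose(Q @ Q, Q))            # Q is (numerically) a projection\nprint(np.round(np.real(np.trace(Q))))   # rank = multiplicity of lambda, here 2\n\\end{verbatim}\n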
In a similar way, if (H1),(H2) hold, then we can define $S(\\lambda, A_{\\epsilon}) := Q(\\lambda, A_{\\epsilon})\\mathcal{H}_{\\epsilon}$, where\n\\[\nQ(\\lambda, A_\\epsilon) = \\frac{1}{2\\pi i} \\int_{|\\xi - \\lambda| = \\delta} (\\xi \\mathbb{I} - A_\\epsilon)^{-1}\\, d\\xi.\n\\]\nThis definition makes sense because for $\\epsilon$ small enough $(\\xi \\mathbb{I} - A_\\epsilon)$ is invertible for all $\\xi$ such that $|\\xi - \\lambda| = \\delta$, see \\cite[Lemma 4.9]{ACJdE}. Then the following theorem holds.\n\n\\begin{theorem}\n\\label{thm: E conv -> spectral conv}\nLet $A_\\epsilon$, $A_0$ be operators as above satisfying conditions (H1), (H2). Then the operators $A_{\\epsilon}$ {\\rm are spectrally convergent} to $A_0$ as $\\epsilon\\to 0$, i.e., the following statements hold:\n\\begin{enumerate}[label=(\\roman*)]\n\\item If $\\lambda_0$ is an eigenvalue of $A_0$, then there exists a sequence of eigenvalues $\\lambda_\\epsilon$ of $A_\\epsilon$ such that $\\lambda_\\epsilon \\to \\lambda_0$ as $\\epsilon \\to 0$. Conversely, if $\\lambda_\\epsilon$ is an eigenvalue of $A_\\epsilon$ for all $\\epsilon >0$, and $\\lambda_\\epsilon \\to \\lambda_0$, then $\\lambda_0$ is an eigenvalue of $A_0$.\n\\item There exists $\\epsilon_0 > 0$ such that the dimension of the generalized eigenspace $S(\\lambda_0, A_\\epsilon)$ equals the dimension of $S(\\lambda_0, A_0)$, for any eigenvalue $\\lambda_0$ of $A_0$, for any $\\epsilon \\in [0,\\epsilon_0)$.\n\\item If $\\varphi_0 \\in S(\\lambda_0, A_0)$ then for any $\\epsilon >0$ there exists $\\varphi_\\epsilon \\in S(\\lambda_0, A_\\epsilon)$ such that $\\varphi_\\epsilon \\overset{E}{\\longrightarrow} \\varphi_0$ as $\\epsilon \\to 0$.\n\\item If $\\varphi_\\epsilon \\in S(\\lambda_0, A_\\epsilon)$ satisfies $\\norma{\\varphi_\\epsilon}_{\\mathcal{H}_\\epsilon} = 1$ for all $\\epsilon >0$, then $\\varphi_\\epsilon $, $\\epsilon >0$, has an $\\mathcal{E}$-convergent subsequence whose limit is in $S(\\lambda_0, A_0)$.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nSee \\cite[Theorem 4.10]{ACL}.\n\\end{proof}\n\nWe now apply Theorem \\ref{thm: E conv -> spectral conv} to problem \\eqref{PDE: R_eps}. To do so, we consider the following Hilbert spaces\n\\[\n\\mathcal{H}_\\epsilon = L^2(R_\\epsilon; \\epsilon^{-1}dxdy),\\ \\ {\\rm and}\\ \\ \\mathcal{H}_0 = L^2_g(0,1),\n\\]\nand we denote by $\\mathcal{E}_\\epsilon$ the extension operator from $L^2_g(0,1)$ to $L^2(R_\\epsilon; \\epsilon^{-1}dxdy)$, defined by\n\\begin{equation}\n\\label{def: extension}\n(\\mathcal{E}_\\epsilon v)(x,y) = v(x),\n\\end{equation}\nfor all $v \\in L^2_g(0,1)$, for almost all $(x,y) \\in R_\\epsilon$. Clearly $\\norma{E_\\epsilon u_0}_{(R_\\epsilon; \\epsilon^{-1}dxdy)} = \\norma{u_0}_{L^2_g(0,1)}$, hence $\\mathcal{E}_\\epsilon$ trivially satisfies property \\eqref{def: basic property E_eps}.\n\nWe consider the operators $A_{\\epsilon}=(\\Delta^2 - \\tau \\Delta +I)_{ L_{\\epsilon }}$, $A_0=(\\Delta^2 - \\tau \\Delta +I)_{D }$ on $\\mathcal{H}_{\\epsilon}$ and $\\mathcal{H}_0$ respectively, associated with the eigenvalue problems \\eqref{PDE: R_eps} and \\eqref{ODE: limit problem}, respectively. Namely, $(\\Delta^2 - \\tau \\Delta +I)_{ L_{\\epsilon }}$ is the operator $\\Delta^2 - \\tau \\Delta +I$ on $R_{\\epsilon}$ subject to Dirichlet boundary\nconditions on $L_{\\epsilon}$ and Neumann boundary conditions on $\\partial R_{\\epsilon}\\setminus L_{\\epsilon }$ as described in \\eqref{PDE: R_eps}. 
Similarly, $(\\Delta^2 - \\tau \\Delta +I)_{D }$\nis the operator $\\Delta^2 - \\tau \\Delta +I$ on $(0,1)$ subject to Dirichlet boundary conditions as described in \\eqref{ODE: limit problem}.\n\nThen we can prove the following\n\\begin{theorem}\\label{spectfin} The operators $(\\Delta^2 - \\tau \\Delta +I)_{ L_{\\epsilon }}$ spectrally converge to\\\\ $(\\Delta^2 - \\tau \\Delta +I)_{D }$ as $\\epsilon \\to 0$, in the sense of Theorem~\\ref{thm: E conv -> spectral conv}.\n\\end{theorem}\n\n\\begin{proof}\nIn view of Theorem \\ref{thm: E conv -> spectral conv}, it is sufficient to prove the following two facts:\n\\begin{enumerate}[label=(\\arabic*)]\n\\item if $f_\\epsilon \\in L^2(R_\\epsilon; \\epsilon^{-1}dxdy)$ is such that $\\epsilon^{-1\/2}\\norma{f_\\epsilon}_{L^2(R_\\epsilon)} = 1$ for any $\\epsilon >0$, and $v_\\epsilon$ is the corresponding solutions of Problem \\eqref{PDE: R_eps f_eps}, then there exists a subsequence $\\epsilon_k \\to 0$ as $k \\to \\infty$ and $\\bar{v} \\in L^2_g(0,1)$ such that $v_{\\epsilon_k}$ $\\mathcal{E}$-converge to $\\bar{v}$ as $k \\to \\infty$.\n\\item if $f_\\epsilon \\in L^2(R_\\epsilon; \\epsilon^{-1}dxdy)$ and $f_\\epsilon \\overset{E}{\\longrightarrow} f$ as $\\epsilon \\to 0$, then the corresponding solutions $v_\\epsilon$ of Problem \\eqref{PDE: R_eps f_eps} $\\mathcal{E}$-converge to the solution of Problem \\eqref{ODE: auxiliary problem sigma2} with datum $f$.\n\\end{enumerate}\nNote that (1) follows immediately from the computations in Section \\ref{subsection: finding limit prb}. Indeed, if $f_\\epsilon \\in L^2(R_\\epsilon; \\epsilon^{-1}dxdy)$ is as in (1), up to a subsequence, $\\tilde{f}_\\epsilon \\rightharpoonup f$ in $L^2(R_1)$, which implies that $\\tilde{v}_\\epsilon \\rightharpoonup v_0\\in H_0^2(0,1)$ in $H^2(R_1)$, where $v_0$ is the solution of Problem \\eqref{ODE: auxiliary problem sigma2}. This implies that $\\norma{v_\\epsilon - \\mathcal{E} v_0}_{L^2(R_\\epsilon; \\epsilon^{-1}dxdy)} \\to 0$, hence (1) is proved.\\\\\nIn order to show (2) we take a sequence of functions $f_\\epsilon \\in L^2(R_\\epsilon; \\epsilon^{-1}dxdy)$ and $f\\in L^2_g(0,1)$ such that $\\epsilon^{-1\/2}\\norma{f_\\epsilon - \\mathcal{E}_\\epsilon f}_{L^2(R_\\epsilon)} \\to 0$ as $\\epsilon \\to 0$. After a change of variable, this is equivalent to $\\norma{\\tilde{f}_\\epsilon - \\mathcal{E} f}_{L^2(R_1)} \\to 0$ as $\\epsilon \\to 0$. Arguing as in Section \\ref{subsection: finding limit prb}, one show that the $\\tilde{v}_\\epsilon \\rightharpoonup v \\in L^2_g(0,1)$ in $H^2(R_1)$ and that $v$ solves problem \\eqref{ODE: auxiliary problem sigma2}. Hence $\\norma{\\tilde{v}_\\epsilon - \\mathcal{E} v}_{L^2(R_1)} \\to 0$ as $\\epsilon \\to 0$, or equivalently, $\\norma{v_\\epsilon - \\mathcal{E}_\\epsilon v}_{L^2(R_\\epsilon; \\epsilon^{-1}dxdy)} \\to 0$ as $\\epsilon \\to 0$, proving (2).\n\n\n\\end{proof}\n\n\n\n\n\n\n\\section{Conclusion}\\label{conclusionsec}\n\nRecall that the eigenpairs of problems \\eqref{PDE: main problem_eigenvalues}, \\eqref{PDE: Omega} are denoted by $(\\lambda_n(\\Omega_\\epsilon), \\varphi_n^\\epsilon)$, $(\\omega_n, \\varphi_n^\\Omega)_{n\\geq1}$ respectively, where the two families of eigenfunctions $\\varphi_n^\\epsilon$, $\\varphi_n^\\Omega$ are complete orthonormal bases of the spaces $L^2(\\Omega_{\\epsilon})$, $L^2(\\Omega )$, respectively. 
Denote now by $(h_n, \\theta_n)_{n\\geq 1}$ the eigenpairs of problem $\\eqref{ODE: limit problem}$\nwhere the eigenfunctions $h_n$ define an orthonormal basis of the space $L^2_g(0,1)$.\nIn the spirit of the definition of $\\lambda_n^{\\epsilon} $ given in Section 2, we set now $(\\lambda_n^0)_{n\\geq 1} = (\\omega_k)_{k \\geq 1} \\cup (\\theta_l )_{l\\geq 1}$, where it is understood that the eigenvalues are arranged in increasing order and repeated according to their multiplicity. For each $\\lambda_n^0$ we define the function $\\phi_n^0 \\in H^2(\\Omega) \\oplus H^2(R_\\epsilon)$ in the following way:\n\\begin{equation*}\n\\phi^0_n =\n\\begin{cases}\n \\varphi_k^\\Omega, &\\text{in $\\Omega$}\\\\\n 0, &\\text{in $R_\\epsilon$},\n\\end{cases}\n\\end{equation*}\nif $\\lambda_n^0 = \\omega_k$, for some $k \\in \\numberset{N}$; otherwise\n\\begin{equation*}\n\\phi^0_n=\n\\begin{cases}\n 0, &\\text{in $\\Omega$}, \\\\\n \\epsilon^{-1\/2}\\mathcal{E}_\\epsilon h_l, &\\text{in $R_\\epsilon$}\n\\end{cases}\n\\end{equation*}\nif $\\lambda_n^\\epsilon = \\theta_l$, for some $l \\in \\numberset{N}$ (here we agree to order the eigenvalues and the eigenfunctions following the same rule used in the definition of $\\lambda_n^{\\epsilon}$ and $\\phi_n^{\\epsilon }$ in Section 2).\n\nFinally, if $x>0$ divides the spectrum $\\lambda_n(\\Omega_{\\epsilon})$ for all $\\epsilon >0 $ sufficiently small (see the end of Section 2) and $N(x)$ is the number of eigenvalues with $\\lambda_n(\\Omega_{\\epsilon})\\le x$ (counting their multiplicity), we define the projector $P_{x}^0$ from $L^2(\\Omega_\\epsilon)$ onto the linear span $[\\phi_1^{0}, \\dots, \\phi_{N(x)}^{0}]$ by setting\n\\[\nP_{x}^0 u = \\sum_{i=1}^{N(x)} (u,\\phi_i^0)_{L^2(\\Omega_\\epsilon)} \\phi_i^0\n\\]\nfor all $u\\in L^2(\\Omega_\\epsilon)$. (Note that choosing $x$ independent of $\\epsilon$ is possible by the limiting behaviour of the eigenvalues.) Then, using Theorems \\ref{thm: eigenvalues decomposition} and \\ref{spectfin} we deduce the following.\n\n\\begin{theorem} \\label{lastthm}\nLet $\\Omega_\\epsilon$, $\\epsilon>0$, be a family of dumbbell domains satisfying the H-Condition. Then the following statements hold:\n\\begin{enumerate}[label =(\\roman*)]\n\\item $\\lim_{\\epsilon \\to 0}\\, \\abs{\\lambda_n(\\Omega_\\epsilon) - \\lambda_n^0} = 0$, for all $n\\in \\numberset{N} $.\n\\item For any $x$ dividing the spectrum,\n $\\lim_{\\epsilon \\to 0}\\, \\norma{\\varphi^\\epsilon_{n} - P^0_{x} \\varphi^\\epsilon_{n}}_{H^2(\\Omega) \\oplus L^2(R_\\epsilon)} = 0$, for all $n = 1,\\dots, N(x)$.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nThe convergence of the eigenvalues follows directly by Theorems \\ref{thm: eigenvalues decomposition} and \\ref{spectfin}. Indeed, by Theorem \\ref{thm: eigenvalues decomposition} we know that $|\\lambda_n(\\Omega_\\epsilon) - \\lambda_n^\\epsilon| \\to 0$ as $\\epsilon \\to 0$. If $\\lambda_n^\\epsilon = \\omega_k$ for some $k\\in \\numberset{N}$, for all sufficiently small $\\epsilon$, then we are done; otherwise, if $\\lambda_n^\\epsilon = \\theta_l^\\epsilon$ for some $l\\in \\numberset{N}$, definitely in $\\epsilon$, by Theorem \\ref{spectfin} we deduce that $\\theta_l^\\epsilon \\to \\theta_l$ as $\\epsilon \\to 0$, hence $|\\lambda_n(\\Omega_\\epsilon) - \\theta_l| \\leq |\\lambda_n(\\Omega_\\epsilon) - \\theta_l^\\epsilon| + |\\theta_l^\\epsilon - \\theta_l| \\to 0 $ as $\\epsilon \\to 0$.\\\\\nConsider now the convergence of the eigenfunctions. 
By Theorems~\\ref{thm: E conv -> spectral conv},~\\ref{spectfin} it follows that for any $\\epsilon >0$ there exists an orthonormal sequence of generalized eigenfunctions $\\delta_j^{\\epsilon }$ in $L^2(R_{\\epsilon}, \\epsilon^{-1}dxdy)$ associated with the eigenvalues\n$\\theta_j^{\\epsilon}$ of problem \\eqref{PDE: R_eps} such that for every $j\\in \\numberset{N}$\n\\begin{equation}\\label{lastthm1}\n\\norma{\\delta^\\epsilon_j - \\mathcal{E}_{\\epsilon} h_j}_{L^2(R_\\epsilon, \\epsilon^{-1}dxdy )} \\to 0,\n\\end{equation}\nas $\\epsilon \\to 0$. Recall that a generalized eigenfunction is an element of a generalized eigenspace, see Section~\\ref{sec: spectral convergence}. We set\n$\n\\gamma_j^\\epsilon =\\epsilon^{-1\/2}\\delta^\\epsilon_j\n$\nand we note that $\\gamma_j^{\\epsilon}$ is a sequence of generalized eigenfunctions of Problem \\eqref{PDE: R_eps} which is orthonormal in $L^2(R_{\\epsilon})$, as required in Theorem~\\ref{thm: eigenvalues decomposition}. Thus by Theorem~\\ref{thm: eigenvalues decomposition} $(ii)$, we deduce that\n\\small{\n\\begin{equation*}\n\\begin{split}\n&\\norma*{\\varphi_n^{\\epsilon} - \\sum^{N(x) }_{i=1} (\\varphi_n^\\epsilon, \\epsilon^{-1\/2}\\mathcal{E}_\\epsilon h_i)_{L^2(R_\\epsilon)} \\epsilon^{-1\/2} \\mathcal{E}_\\epsilon h_i}_{L^2(R_\\epsilon)} \\leq \\norma*{\\varphi_n^{\\epsilon} - \\sum^{N(x)}_{i=1} (\\varphi_n^\\epsilon, \\gamma_i^\\epsilon)_{L^2(R_\\epsilon)} \\gamma_i^\\epsilon }_{L^2(R_\\epsilon)}\\\\\n&+ \\norma*{\\sum^{N(x)}_{i=1} (\\varphi_n^\\epsilon, \\gamma_i^\\epsilon)_{L^2(R_\\epsilon)} \\gamma_i^\\epsilon - \\sum^{N(x)}_{i=1} (\\varphi_n^\\epsilon, \\epsilon^{-1\/2}\\mathcal{E}_\\epsilon h_i)_{L^2(R_\\epsilon)} \\epsilon^{-1\/2}\\mathcal{E}_\\epsilon h_i }_{L^2(R_\\epsilon)}\\\\\n&\\leq o(1) + \\norma*{\\sum^{N(x)}_{i=1} (\\varphi_n^\\epsilon, \\epsilon^{-1\/2}\\mathcal{E}_\\epsilon h_i)_{L^2(R_\\epsilon)} ( \\gamma_i^\\epsilon -\\epsilon^{-1\/2}\\mathcal{E}_\\epsilon h_i) }_{L^2(R_\\epsilon)} + \\norma*{\\sum^{N(x)}_{i=1} (\\varphi_n^\\epsilon, \\gamma_i^\\epsilon - \\epsilon^{-1\/2}\\mathcal{E}_\\epsilon h_i)_{L^2(R_\\epsilon)} \\gamma_i^\\epsilon}_{L^2(R_\\epsilon)}\\\\\n\\end{split}\n\\end{equation*}}\n\\normalsize\nHence, \n\\begin{equation}\\label{lastthm2}\n\\begin{split}\n&\\norma*{\\varphi_n^{\\epsilon} - \\sum^{N(x) }_{i=1} (\\varphi_n^\\epsilon, \\epsilon^{-1\/2}\\mathcal{E}_\\epsilon h_i)_{L^2(R_\\epsilon)} \\epsilon^{-1\/2} \\mathcal{E}_\\epsilon h_i}_{L^2(R_\\epsilon)}\\\\ \n&\\leq o(1) + C \\sum^{N(x)}_{i=1} \\norma{\\gamma_i^\\epsilon - \\epsilon^{-1\/2}\\mathcal{E}_\\epsilon h_i}_{L^2(R_\\epsilon)} =o(1) + C \\sum^{N(x)}_{i=1} \\norma{\\delta _i^\\epsilon - \\mathcal{E}_\\epsilon h_i}_{L^2(R_\\epsilon ,\\epsilon^{-1}dxdy )}.\n\\end{split}\n\\end{equation}\nSince the right-hand side of the last inequality in \\eqref{lastthm2} goes to zero as $\\epsilon \\to 0$ by (\\ref{lastthm1}), we conclude that $\\lim_{\\epsilon \\to 0}\\, \\norma{\\varphi^\\epsilon_{n} - P^0_{x} \\varphi^\\epsilon_{n}}_{L^2(R_\\epsilon)} = 0$. Finally, the fact that $\\lim_{\\epsilon \\to 0}\\, \\norma{\\varphi^\\epsilon_{n} - P^0_{x} \\varphi^\\epsilon_{n}}_{H^2(\\Omega)} = 0$ follows directly from Theorem \\ref{thm: eigenvalues decomposition}.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\subsection*{Acknowledgment}\nThe first author is partially supported by grants MTM2012-31298, MTM2016-75465, ICMAT Severo Ochoa project SEV-2015-0554, MINECO, Spain and Grupo de Investigaci\\'on CADEDIF, UCM. 
The third author acknowledges financial support from the INDAM - GNAMPA project 2016 ``Equazioni differenziali con applicazioni alla meccanica\". The second and third authors are also members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\\`{a} e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).\\\\\nThe second and the third authors are very thankful to the Departamento de Matem\\'atica Aplicada of the Universidad Complutense de Madrid for the warm hospitality received on the occasion of their visits. The authors are thankful to an anonymous referee for pointing out a number of items in the reference list.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn semiconductors or insulators with broken inversion symmetry, an intrinsic electromechanical\ncoupling between stresses and electric polarizations can be observed, which is called piezoelectric effect.\nTwo-dimensional (2D) materials can show unique properties compared to their bulk counterparts, and the reduction in dimensionality of 2D\nmaterials can often eliminate inversion symmetry, which allows these materials to become piezoelectric\\cite{q4}.\nIt has been theoretically reported that many 2D\nmaterials break inversion symmetry\nand hence can exhibit piezoelectricity, such as group IIA and IIB metal oxides, group-V binary semiconductors, transition metal dichalchogenides (TMD), Janus TMD and group III-V semiconductors\\cite{q7,q7-2,q7-3,q7-3-1,q7-3-2,q9,q10,q11,q12,qr,nr,nr1}. A majority of structures have piezoelectric coefficients greater than a typical value of bulk piezoelectric materials (5 pm\/V). Significantly, the monolayer SnSe,\nSnS, GeSe and GeS with puckered structure possess giant piezoelectricity, as high as 75-251 pm\/V\\cite{q10}, which may have huge potential application in the field of sensors, actuators and energy harvesters.\n The different crystal symmetry can induce a only in-plane piezoelectricity like TMD monolayers\\cite{q9}, both in-plane and out-of-plane piezoelectricity for example 2D Janus monolayers\\cite{q7,q7-3}, or a pure out-of-plane piezoelectricity such as penta-graphene\\cite{q7-4}. It has been proved that strain may be a effective strategy to tune piezoelectric properties of 2D materials\\cite{r1,r3}.\nExperimentally discovered piezoelectricity of $\\mathrm{MoS_2}$\\cite{q5,q6}, MoSSe\\cite{q8} and $\\mathrm{In_2Se_3}$\\cite{q8-1} has triggered an intense interest in piezoelectric properties of 2D materials.\n\n\\begin{figure*}\n \n \\includegraphics[width=16.0cm]{Fig1.eps}\n \\caption{(Color online)The crystal structure of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MA_2Z_4}$ including top and side views. The purple balls represent M atoms, and the blue balls for A atoms, and the yellow balls for Z atoms. These crystal structure can be divided into three categories: A ($\\alpha_1$, $\\alpha_2$), B ($\\alpha_3$, $\\alpha_4$) and C ($\\alpha_5$, $\\alpha_6$) according to the relative positions of M and A atoms. The different categories can be connected by translation operation, and the different structures in the same category can be related by mirror or rotation operations. The green lines represent mirror face, translation direction or rotation axis.}\\label{t0}\n\\end{figure*}\n\n\nIt is meaningful to explore piezoelectricity of new 2D family. 
Recently, the layered\n2D $\\mathrm{MoSi_2N_4}$ has been experimentally achieved by chemical vapor deposition (CVD)\\cite{msn}, which possesses semiconducting behavior, high strength and excellent ambient stability. In rapid sequence, 2D\n$\\mathrm{WSi_2N_4}$ has also been synthesized by CVD. In the wake of $\\mathrm{MSi_2N_4}$ (M=Mo and W), $\\mathrm{MA_2Z_4}$ family are constructed with twelve different structures ($\\alpha_i$ and $\\beta_i$ ($i$=1 to 6)) by intercalating $\\mathrm{MoS_2}$-type $\\mathrm{MZ_2}$ monolayer into InSe-type $\\mathrm{A_2Z_2}$ monolayer\\cite{m20}. The $\\mathrm{MA_2Z_4}$ family spans a wide range of properties from semiconductor to topological insulator to Ising superconductor upon the number of valence electrons (VEC). Intrinsic piezoelectricity in monolayer $\\mathrm{XSi_2N_4}$ (X=Ti, Zr, Hf, Cr, Mo and W) with $\\alpha_1$ phase are studied by the first principle calculations\\cite{m21}, and the independent in-plane piezoelectric constants $d_{11}$ is predicted to be 0.78 pm\/V-1.24 pm\/V. The valley-dependent properties of monolayer $\\mathrm{MoSi_2N_4}$, $\\mathrm{WSi_2N_4}$ and $\\mathrm{MoSi_2As_4}$ have been investigated by the first-principle calculations\\cite{g1}. The structural, mechanical, thermal, electronic, optical and photocatalytic properties of $\\mathrm{MoSi_2N_4}$ are studied by using hybrid density functional theory (HSE06-DFT)\\cite{g2}.\n\nIn this work, the role of crystal structure on intrinsic piezoelectricity in monolayer $\\mathrm{MSi_2N_4}$ (M=Mo and W) are studied by using density functional perturbation theory (DFPT)\\cite{pv6}. It is interesting to note that the same structural dependence on $d_{11}$ and $e_{11}$ between monolayer $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ is observed. Calculated results show that the atomic arrangement of $\\mathrm{A_2Z_2}$ double layers has important effect on the in-plane piezoelectric polarization of $\\mathrm{MSi_2N_4}$ (M=Mo and W) monolayers.\nFinally, we investigate the intrinsic piezoelectricity of monolayer $\\alpha_1$- and $\\alpha_2$- $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) expect $\\mathrm{CrGe_2N_4}$. It is found that the $\\mathrm{MA_2P_4}$ have more stronger piezoelectricity than $\\mathrm{MA_2N_4}$. So,\n experimentally synthesizing monolayer $\\mathrm{MA_2Z_4}$ containing P atoms is very promising for energy harvesting and piezoelectric sensing.\n\n\n\n\n\nThe rest of the paper is organized as follows. In the next\nsection, we shall give our computational details and methods about piezoelectric coefficients.\n In the third section, we perform symmetry analysis for elastic and piezoelectric coefficients of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MA_2Z_4}$. In the fourth sections, we shall present main results and analysis. Finally, we shall give our conclusions in the fifth section.\n\\begin{table}\n\\centering \\caption{The optimized lattice constants of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MSi_2N_4}$ (M=Mo and W) using GGA ($\\mathrm{{\\AA}}$). 
}\\label{tab0}\n \\begin{tabular*}{0.48\\textwidth}{@{\\extracolsep{\\fill}}ccccccc}\n \\hline\\hline\nName & $\\alpha_1$ & $\\alpha_2$ & $\\alpha_3$ & $\\alpha_4$ & $\\alpha_5$ & $\\alpha_6$ \\\\\\hline\\hline\n $\\mathrm{MoSi_2N_4}$ &2.91& 2.90 & 2.84 & 2.84 & 2.86 & 2.85\\\\\n $\\mathrm{WSi_2N_4}$ &2.91 & 2.90 & 2.84 & 2.84 & 2.87 & 2.85\\\\ \\hline\\hline\n\\end{tabular*}\n\\end{table}\n\n\n\\section{Computational detail}\nBased on the density functional theory (DFT)\\cite{1}, our simulations are carried out as implemented\nin the plane-wave code VASP\\cite{pv1,pv2,pv3}. The exchange-correlation functional is treated within popular generalized gradient\napproximation of Perdew, Burke and Ernzerhof (GGA-PBE)\\cite{pbe} to perform the structural relaxation and the calculations of the elastic and\npiezoelectric tensors. For energy band calculations, the spin orbital coupling (SOC)\nis also taken into account due to containing early transition metal.\nProjector-augmented wave pseudopotentials are used with a cutoff energy of 500 eV for plane-wave expansions.\nA vacuum spacing of more than 20 $\\mathrm{{\\AA}}$ is adopted to prevent any interactions\nbetween the adjacent periodic images of the 2D monolayers. The total energy convergence criterion is set\nto $10^{-8}$ eV, and the\natomic positions are optimized until all components of\nthe forces on each atom are reduced to values less than 0.0001 $\\mathrm{eV.{\\AA}^{-1}}$.\nWe calculate the coefficients of elastic stiffness tensor $C_{ij}$ by using strain-stress relationship (SSR) and the piezoelectric stress coefficients $e_{ij}$ by DFPT method\\cite{pv6}.\nA Monkhorst-Pack mesh of 15$\\times$15$\\times$1 in the first Brillouin zone\nis sampled for $C_{ij}$, and 9$\\times$16$\\times$1 for $e_{ij}$.\nThe 2D elastic coefficients $C^{2D}_{ij}$\n and piezoelectric stress coefficients $e^{2D}_{ij}$\nhave been renormalized by the the length of unit cell along z direction ($Lz$): $C^{2D}_{ij}$=$Lz$$C^{3D}_{ij}$ and $e^{2D}_{ij}$=$Lz$$e^{3D}_{ij}$.\nHowever, the $d_{ij}$ is independent of $Lz$.\n\n\\begin{figure*}\n \n \\includegraphics[width=12cm]{Fig2.eps}\n \\caption{(Color online) The energy band structures of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ using GGA+SOC, and the VBM and CBM are connected by red arrow. }\\label{band}\n\\end{figure*}\n\n\\begin{figure}\n \n \\includegraphics[width=7cm]{Fig3.eps}\n \\caption{(Color online)The energy band gaps of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ using GGA+SOC.}\\label{band1}\n\\end{figure}\n\n\n\n\\begin{figure}\n \n \\includegraphics[width=7cm]{Fig4.eps}\n \\caption{(Color online) The elastic constants $C_{ij}$ of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$.}\\label{c}\n\\end{figure}\n\\begin{figure}\n \n \\includegraphics[width=7cm]{Fig5.eps}\n \\caption{(Color online) The piezoelectric stress coefficients $e_{11}$, the ionic contribution and electronic contribution to $e_{11}$ of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$. }\\label{e}\n\\end{figure}\n\n\\begin{table}\n\\centering \\caption{The $d_{11}$ of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MSi_2N_4}$ (M=Mo and W) using GGA (pm\/V). 
}\\label{tab0-1}\n \\begin{tabular*}{0.48\\textwidth}{@{\\extracolsep{\\fill}}ccccccc}\n \\hline\\hline\nName & $\\alpha_1$ & $\\alpha_2$ & $\\alpha_3$ & $\\alpha_4$ & $\\alpha_5$ & $\\alpha_6$ \\\\\\hline\\hline\n $\\mathrm{MoSi_2N_4}$ &1.15 & 0.65& 0.34 & -1.98 & 3.53 & 1.32\\\\\n $\\mathrm{WSi_2N_4}$ &0.78 &0.25 & -0.07 & -2.05 & 2.91 & 0.88\\\\ \\hline\\hline\n\\end{tabular*}\n\\end{table}\n\n\\section{Symmetry Analysis}\n The piezoelectric stress tensors $e_{ijk}$ and strain tensor $d_{ijk}$ is defined as:\n \\begin{equation}\\label{pe0}\n e_{ijk}=\\frac{\\partial P_i}{\\partial \\varepsilon_{jk}}=e_{ijk}^{elc}+e_{ijk}^{ion}\n \\end{equation}\nand\n \\begin{equation}\\label{pe0-1}\n d_{ijk}=\\frac{\\partial P_i}{\\partial \\sigma_{jk}}=d_{ijk}^{elc}+d_{ijk}^{ion}\n \\end{equation}\nwhere $P_i$, $\\varepsilon_{jk}$ and $\\sigma_{jk}$ are polarization vector, strain and stress, respectively.\nThe $e_{ijk}^{elc}$ ($d_{ijk}^{elc}$) is the clamped-ion\npiezoelectric tensors resulting from the pure electronic contribution. The relaxed-ion\npiezoelectric tensors $e_{ijk}$ ($d_{ijk}$) is obtained from the sum of ionic\nand electronic contributions. The $d_{ijk}$ can be connected with $e_{ijk}$ by the elastic stiffness tensor $C_{ijkl}$.\nBy employing the frequently used Voigt notation (11$\\rightarrow$1,\n22$\\rightarrow$2, 33$\\rightarrow$3, 23$\\rightarrow$4, 31$\\rightarrow$5 and 12$\\rightarrow$6),\nthe elastic tensor $C_{ijkl}$, piezoelectric tensors $e_{ijk}$ and $d_{ijk}$ become into $C_{ij}$ (6$\\times$6 matrix), $e_{ij}$ (3$\\times$6 matrix) and $d_{ij}$ (3$\\times$6 matrix). The symmetry of crystal\nstructure will further reduce the number of independent $C_{ij}$, $e_{ij}$ and $d_{ij}$ tensors.\n\nBy intercalating $\\mathrm{MoS_2}$-type $\\mathrm{MZ_2}$ monolayer into InSe-type $\\mathrm{A_2Z_2}$ monolayer,\nsix $\\alpha_i$ and six $\\beta_i$ ($i$=1 to 6) $\\mathrm{MA_2Z_4}$ monolayers can be constructed\\cite{m20}.\nThe six $\\alpha_i$ have the same $P\\bar{6}m2$ space group due to inserting 2H-$\\mathrm{MoS_2}$-type $\\mathrm{MZ_2}$ monolayer into $\\alpha$-InSe-type $\\mathrm{A_2Z_2}$ double layers, which break inversion symmetry. The six $\\beta_i$ are built by intercalating 1T-$\\mathrm{MoS_2}$-type $\\mathrm{MZ_2}$ monolayer into $\\beta$-InSe-type $\\mathrm{A_2Z_2}$ double layers with the same $P\\bar{3}m1$ space group, which keep inversion symmetry.\nTherefore, $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MA_2Z_4}$ monolayers are piezoelectric.\n\n The six $\\alpha_i$ geometric structures of the $\\mathrm{MA_2Z_4}$ monolayer are\nplotted in \\autoref{t0}. All considered six $\\alpha_i$ crystal structures have the same $\\bar{6}m2$ point group. Only the in-plane piezoelectric effect is allowed in\nmonolayer $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MA_2Z_4}$, when a uniaxial in-plane strain is applied. For 2D semiconductors, in general,\nin-plane stresses and strains are only allowed,\nwhile the out-of-plane is strain\/stress free\\cite{q9,q10,q11}. 
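\n\nTo make the Voigt contraction concrete, the short sketch below (illustrative only, with an arbitrary value of $e_{11}$) builds the $3\\times6$ matrix $e_{ij}$ from a rank-3 tensor in which only $e_{111}=-e_{122}=-e_{212}$ are nonzero; keeping the in-plane columns 1, 2 and 6 then reproduces the matrix form displayed next.\n\\begin{verbatim}\nimport numpy as np\n\n# Voigt map used in the text: 11->1, 22->2, 33->3, 23->4, 31->5, 12->6\nVOIGT = {(0, 0): 0, (1, 1): 1, (2, 2): 2, (1, 2): 3, (2, 1): 3,\n         (0, 2): 4, (2, 0): 4, (0, 1): 5, (1, 0): 5}\n\ndef to_voigt(e_full):\n    e = np.zeros((3, 6))\n    for i in range(3):\n        for j in range(3):\n            for k in range(3):\n                e[i, VOIGT[(j, k)]] = e_full[i, j, k]\n    return e\n\ne11 = 1.0                        # arbitrary value, for illustration\ne_full = np.zeros((3, 3, 3))\ne_full[0, 0, 0] = e11\ne_full[0, 1, 1] = -e11\ne_full[1, 0, 1] = e_full[1, 1, 0] = -e11\nprint(to_voigt(e_full))          # rows: P_1, P_2, P_3; columns: Voigt 1..6\n\\end{verbatim}\n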
And then the $e_{ij}$, $d_{ij}$ and $C_{ij}$ can be written as:\n \\begin{equation}\\label{pe1}\n \\left(\n \\begin{array}{ccc}\n e_{11} &-e_{11} & 0 \\\\\n 0 &0 & -e_{11}\\\\\n 0 & 0 & 0 \\\\\n \\end{array}\n \\right)\n \\end{equation}\n \\begin{equation}\\label{pe1}\n \\left(\n \\begin{array}{ccc}\n d_{11} & -d_{11} & 0 \\\\\n 0 &0 & -2d_{11} \\\\\n 0 & 0 & 0 \\\\\n \\end{array}\n \\right)\n \\end{equation}\n \\begin{equation}\\label{pe1}\n \\left(\n \\begin{array}{ccc}\n C_{11} & C_{12} &0 \\\\\n C_{12} & C_{11} &0 \\\\\n 0 & 0 & \\frac{C_{11}-C_{12}}{2} \\\\\n \\end{array}\n \\right)\n \\end{equation}\n The forms of these piezoelectric and stiffness constants are the same as those for TMD monolayers\\cite{q9,q11} due to the same point group.\nBy $e_{ik}$=$d_{ij}C_{jk}$, the only in-plane $d_{11}$ is found to be:\n\\begin{equation}\\label{pe2-7}\n d_{11}=\\frac{e_{11}}{C_{11}-C_{12}}\n\\end{equation}\n\n\n\n\\begin{table*}\n\\centering \\caption{The optimized lattice constants of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) expect $\\mathrm{CrGe_2N_4}$ using GGA ($\\mathrm{{\\AA}}$). }\\label{tab1}\n \\begin{tabular*}{0.96\\textwidth}{@{\\extracolsep{\\fill}}cccccccccccc}\n \\hline\\hline\nName & $\\mathrm{CrSi_2N_4}$ & $\\mathrm{MoSi_2N_4}$ & $\\mathrm{WSi_2N_4}$ & $\\mathrm{MoGe_2N_4}$ & $\\mathrm{WGe_2N_4}$ & $\\mathrm{CrSi_2P_4}$&$\\mathrm{MoSi_2P_4}$&$\\mathrm{WSi_2P_4}$&$\\mathrm{CrGe_2P_4}$&$\\mathrm{MoGe_2P_4}$&$\\mathrm{WGe_2P_4}$ \\\\\\hline\\hline\n $\\alpha_1$ &2.84& 2.91 & 2.91 & 3.02 & 3.02& 3.42 & 3.47 & 3.47 &3.50 & 3.54 &3.54\\\\\n $\\alpha_2$ &2.84 & 2.90 & 2.90 & 3.01 &3.01 & 3.41 & 3.45 & 3.46 &3.49 & 3.53 & 3.53\\\\ \\hline\\hline\n\\end{tabular*}\n\\end{table*}\n\n\n\\section{Main calculated results}\n Firstly, we discuss the structural relation among six $\\alpha_i$ crystal structures of $\\mathrm{MA_2Z_4}$.\n According to the relative positions of M and A atoms, the six $\\alpha_i$ crystal structure can be divided into three categories: A ($\\alpha_1$, $\\alpha_2$), B ($\\alpha_3$, $\\alpha_4$) and C ($\\alpha_5$, $\\alpha_6$).\nThe different categories can be connected by translation operation. The $\\alpha_3$ can be attained by translating $\\mathrm{A_2Z_2}$ double layers of $\\alpha_2$ along the green line of top view of $\\alpha_2$ in \\autoref{t0} with the transfixion of $\\mathrm{MZ_2}$ monolayer.\n The $\\alpha_6$ can be attained from $\\alpha_4$ by similar translation operation. The different structures in the same category can be related by mirror or rotation operations. The $\\alpha_2$ can be built by mirroring $\\mathrm{A_2Z_2}$ double layers of $\\alpha_1$ with respect to the vertical surface defined by two green lines of top and side views of $\\alpha_1$. The $\\alpha_4$ ($\\alpha_6$) can be constructed by rotating the $\\mathrm{A_2Z_2}$ double layers of $\\alpha_3$ ($\\alpha_5$) with $\\pi$\/3 along the vertical axis defined by linking two A atoms.\n\n \\begin{figure}\n \n \\includegraphics[width=7cm]{Fig6.eps}\n \\caption{(Color online) The piezoelectric strain coefficients $d_{11}$ of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$. }\\label{d}\n\\end{figure}\n\n\n\n\n\n\n\\begin{table*}\n\\centering \\caption{The $d_{11}$ of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) expect $\\mathrm{CrGe_2N_4}$ using GGA (pm\/V). 
}\\label{tab1-1}\n \\begin{tabular*}{0.96\\textwidth}{@{\\extracolsep{\\fill}}cccccccccccc}\n \\hline\\hline\nName & $\\mathrm{CrSi_2N_4}$ & $\\mathrm{MoSi_2N_4}$ & $\\mathrm{WSi_2N_4}$ & $\\mathrm{MoGe_2N_4}$ & $\\mathrm{WGe_2N_4}$ & $\\mathrm{CrSi_2P_4}$&$\\mathrm{MoSi_2P_4}$&$\\mathrm{WSi_2P_4}$&$\\mathrm{CrGe_2P_4}$&$\\mathrm{MoGe_2P_4}$&$\\mathrm{WGe_2P_4}$ \\\\\\hline\\hline\n $\\alpha_1$ &1.24&\t1.15&\t0.78&\t1.85&\t1.31&\t6.03&\t4.91&\t4.16&\t6.12&\t5.27&\t4.36\\\\\n $\\alpha_2$ &1.42&\t0.65&\t0.25&\t0.75&\t0.26&\t3.96&\t2.64&\t1.65&\t5.06&\t3.87&\t2.77\\\\ \\hline\\hline\n\\end{tabular*}\n\\end{table*}\n\n\n It has been proved that monolayer $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) expect $\\mathrm{CrGe_2N_4}$ with 34 VEC are non-magnetic with $\\alpha_1$ or $\\alpha_2$ crystal structure, and are both dynamically and thermodynamically\\cite{m20}. A piezoelectric material should be a semiconductor for\nprohibiting current leakage. Only $\\mathrm{MSi_2N_4}$ (M=Mo and W) monolayers are semiconductors for all six $\\alpha_i$ crystal structures.\nSo, we mainly study the structure effect on intrinsic piezoelectricity of $\\mathrm{MSi_2N_4}$ (M=Mo and W). The\nstructural parameters of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MSi_2N_4}$ (M=Mo and W) are optimized, and the lattice constants are listed in \\autoref{tab0}.\nIt is found that the lattice constants between $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ with the same phase are almost the same.\nThe size of these lattice constants can also be classified into A, B and C, which declare that the relative positions of M and A atoms determine lattice constants.\n\nNext, we use optimized crystal structures to investigate their electronic structures.\n Although the SOC has little effects on the energy band gaps of $\\mathrm{MSi_2N_4}$ (M=Mo and W) monolayers, the SOC can produce observed spin-orbit splitting in the valence bands at K point\\cite{m21}. Because the energy band outlines between $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ are very similar, only the energy bands of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ are shown in \\autoref{band} using GGA+SOC. It is clearly seen that they all are indirect gap semiconductors, and the gap range is 0.18 eV to 1.99 eV. The position of conduction band minimum (CBM) for all six $\\alpha_i$ is at K point, except for $\\alpha_6$ at M point. The valence band maximum (VBM) of $\\alpha_1$, $\\alpha_2$ and $\\alpha_5$ is at $\\Gamma$ point, while the one of $\\alpha_3$, $\\alpha_4$ and $\\alpha_6$ is slightly off $\\Gamma$ point, and at one point along the $\\Gamma$-K line.\nThe energy band gaps of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ using GGA+SOC are plotted \\autoref{band1}. It is clearly seen that the structural dependence of band gap of $\\mathrm{WSi_2N_4}$ is the same with one of $\\mathrm{MoSi_2N_4}$, and the gap ranges from 0.08 eV to 2.37 eV. Therefore, it is very effective to tune the electronic structures of $\\mathrm{MSi_2N_4}$ (M=Mo and W) monolayers by translating or rotating $\\mathrm{Si_2N_2}$ bilayer.\n\\begin{figure}\n \n \\includegraphics[width=7cm]{Fig7.eps}\n \\caption{(Color online)The energy band gaps of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) expect $\\mathrm{CrGe_2N_4}$ using GGA+SOC. 
The direct band gap is marked by \"D\", and the unmarked one is indirect band gap.}\\label{band2}\n\\end{figure}\n\n\n\n\n\n\nTo calculate the $d_{11}$, two independent elastic stiffness coefficients ($C_{11}$ and $C_{12}$) of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ are attained by SSR, which are plotted in \\autoref{c}, together with $C_{11}$-$C_{12}$. For six structures, all calculated elastic coefficients of $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ satisfy the Born stability criteria\\cite{ela}, which means that they all are mechanically stable. Similar structural dependence of $C_{11}$, $C_{12}$ and $C_{11}$-$C_{12}$ can be observed between $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$.\nIt is found that the $C_{12}$ of two structures in the same category are very close, if the two structures are connected by mirror operation ($\\alpha_1$ and $\\alpha_2$). However, the $C_{12}$ has obvious difference for two structures related by rotation operation ($\\alpha_3$ and $\\alpha_4$ or $\\alpha_5$ and $\\alpha_6$). The $\\alpha_4$ ($\\alpha_5$) has the larger\n $C_{12}$ than $\\alpha_3$ ($\\alpha_6$). Between $\\alpha_4$ ($\\alpha_3$) and $\\alpha_5$ ($\\alpha_6$), the difference is only the position of Si atom.\nIt is found that the $C_{11}$-$C_{12}$ of $\\alpha_1$, $\\alpha_2$, $\\alpha_4$ and $\\alpha_5$ are close, and $\\alpha_3$ and $\\alpha_6$ have the larger $C_{11}$-$C_{12}$, which is against the $d_{11}$ according to \\autoref{pe2-7}. These elastic stiffness coefficients are very larger than ones of\n TMD monolayers\\cite{q9,q11}, which indicates that\n$\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ are not easy to be deformed.\n\\begin{figure}\n \n \\includegraphics[width=8cm]{Fig8.eps}\n \\caption{The energy band structures of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{CrGe_2P_4}$ using GGA+SOC.}\\label{band-c}\n\\end{figure}\n\n\n\n\n\n\n\n Another key physical quantity $e_{11}$ of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ are calculated to attain $d_{11}$. Their piezoelectric coefficients $e_{11}$ along with the ionic contribution and electronic contribution to $e_{11}$ are shown \\autoref{e}.\n It is clearly seen that the similar structural dependence between $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ can be observed.\n It is found that the ionic contribution of two structures connected by mirror or rotation operations in the same category has opposite sign. In the different category, the ionic contribution of two structures connected by translation operation has the same sign.\n Calculated results show that the ionic contribution and electronic contribution have opposite sign for all $\\alpha_i$ except $\\alpha_5$.\n For A and B categories, the electronic contribution has similar structural dependence with ionic contribution.\n In the C category, the rotation operation gives rise to the identical signs for electronic contribution from $\\alpha_6$ to $\\alpha_5$.\nIn considered six structures, the $e_{11}$ of $\\alpha_5$ has the largest value, which is due to superposed ionic contribution and electronic contribution. The $e_{11}$ with $\\alpha_5$ phase is 13.95$\\times$$10^{-10}$ C\/m for $\\mathrm{MoSi_2N_4}$, and\t12.17$\\times$$10^{-10}$ C\/m for $\\mathrm{WSi_2N_4}$. 
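\n\nThe unit bookkeeping behind \\autoref{pe2-7} can be summarized in a few lines; the sketch below is only illustrative (the value of $Lz$ and the $C_{ij}$ used in the last line are placeholders rather than the calculated ones, which are reported in the figures).\n\\begin{verbatim}\n# d_11 = e_11 / (C_11 - C_12), with the 2D quantities obtained from the 3D\n# DFPT outputs through the renormalization described above:\n#   e2D = Lz * e3D  [C/m],   C2D = Lz * C3D  [N/m]\nLz = 25.0e-10                        # placeholder cell length along z (m)\n\ndef to_2d(q3d):\n    return Lz * q3d                  # C/m^2 -> C/m  or  N/m^2 -> N/m\n\ndef d11(e11_2d, c11_2d, c12_2d):\n    return e11_2d / (c11_2d - c12_2d)    # result in m/V\n\n# consistency check for alpha_1-MoSi2N4: e_11 = 4.40e-10 C/m together with\n# d_11 = 1.15 pm/V (d_11 table) implies C_11 - C_12 = e_11/d_11 ~ 3.8e2 N/m,\n# i.e. a much stiffer sheet than typical TMD monolayers\nprint(4.40e-10 / 1.15e-12)               # ~ 383 N/m\nprint(d11(4.40e-10, 5.0e2, 1.2e2) * 1e12, 'pm/V (illustrative C_ij)')\n\\end{verbatim}\n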
These $e_{11}$ values are much larger than those of 2D TMD, metal oxide, III-V\nsemiconductor and Janus TMD materials\\cite{q7,q9,q11}.\nThe $e_{11}$ of the experimentally synthesized $\\alpha_1$-$\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ is 4.40$\\times$$10^{-10}$ C\/m and 3.14$\\times$$10^{-10}$ C\/m, respectively, which\nis comparable to the values of most 2D materials, such as TMD and Janus TMD materials\\cite{q7,q9,q11}.\nUsing the calculated $C_{11}$-$C_{12}$ and $e_{11}$, the $d_{11}$ can be obtained according to \\autoref{pe2-7}; the results are shown in \\autoref{d}. From $\\alpha_1$ to $\\alpha_6$, $d_{11}$ and $e_{11}$ show a very similar structural dependence. For the $\\alpha_5$ phase, $d_{11}$ has the largest value: 3.53 pm\/V for $\\mathrm{MoSi_2N_4}$ and 2.91 pm\/V for $\\mathrm{WSi_2N_4}$.\n For the experimentally synthesized $\\alpha_1$-$\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$, the $d_{11}$ is 1.14 pm\/V and 0.78 pm\/V, respectively, which is smaller than that of 2D TMDs\\cite{q9,q11} because of the very large $C_{11}$-$C_{12}$. The related $d_{11}$ are listed in \\autoref{tab0-1}.\n\\begin{figure}\n \n \\includegraphics[width=7cm]{Fig9.eps}\n \\caption{(Color online) The elastic constants $C_{ij}$ of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$.}\\label{c1}\n\\end{figure}\n\n\n\n\nFor the $\\alpha_1$ and $\\alpha_2$ phases, the monolayers $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$ are all semiconductors at the GGA+SOC level. The energy band gaps of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$ using GGA+SOC are plotted in \\autoref{band2}. It is found that the gap of $\\mathrm{CrGe_2P_4}$ is very small: 0.008 eV for the $\\alpha_1$ phase and 0.061 eV for the $\\alpha_2$ phase. To show unambiguously that they are semiconductors, the energy band structures of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{CrGe_2P_4}$ using GGA+SOC are shown in \\autoref{band-c}.\n For the same material, the gap of the $\\alpha_2$ phase is larger than that of the $\\alpha_1$ phase. For $\\mathrm{MA_2N_4}$, the gap increases as M changes from Cr to Mo to W, while the gap of $\\mathrm{MA_2P_4}$ first increases and then decreases. A further reason to consider both phases is that their enthalpies of formation are very close\\cite{m20}.\nSo, we investigate the intrinsic piezoelectricity of the 11 materials in both the $\\alpha_1$ and $\\alpha_2$ phases.\nThe optimized lattice constants are listed in \\autoref{tab1}; for a given material, the lattice constants of the $\\alpha_1$ and $\\alpha_2$ phases are\nalmost the same, because both phases belong to the same A class.\nAs the elements change from Cr to Mo to W, from Si to Ge, and from N to P, the lattice constants of both the $\\alpha_1$ and $\\alpha_2$ phases increase, owing to the increasing atomic radii.\n\n\n\n\\begin{figure}\n \n \\includegraphics[width=7cm]{Fig10.eps}\n \\caption{(Color online) The piezoelectric stress coefficients $e_{11}$, together with the ionic and electronic contributions to $e_{11}$, of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$. }\\label{e1}\n\\end{figure}\n
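As an aside, the conversion from $e_{11}$ and $C_{11}$-$C_{12}$ to $d_{11}$ via \\autoref{pe2-7} amounts to a one-line unit conversion. The short Python sketch below illustrates it; the value of $C_{11}$-$C_{12}$ used there is not quoted in the text (it only appears in the figures) and is merely back-solved from the $\\alpha_1$-$\\mathrm{MoSi_2N_4}$ numbers above, so it should be read as an illustration rather than as calculated data.

\\begin{verbatim}
# Sketch of d_11 = e_11 / (C_11 - C_12), with explicit unit bookkeeping.
# The elastic constant below is a hypothetical, back-solved value
# (illustration only); e_11 = 4.40e-10 C/m and d_11 = 1.14 pm/V are the
# quoted alpha_1-MoSi2N4 numbers.

def d11_pm_per_V(e11_in_1e10_C_per_m, c11_minus_c12_in_N_per_m):
    e11 = e11_in_1e10_C_per_m * 1e-10        # C/m
    d11 = e11 / c11_minus_c12_in_N_per_m     # C/N = m/V
    return d11 * 1e12                        # pm/V

print(d11_pm_per_V(4.40, 386.0))             # ~1.14 pm/V for alpha_1-MoSi2N4
\\end{verbatim}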
\nThe elastic constants $C_{ij}$ of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$ are plotted in \\autoref{c1}. For all studied materials, the $C_{11}$-$C_{12}$ of the $\\alpha_2$ phase is larger than that of the $\\alpha_1$ phase, owing to a larger $C_{11}$ and a smaller $C_{12}$. The $C_{ij}$ of the P-containing $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ are much smaller than those of the N-containing ones, which favors a large $d_{11}$. The piezoelectric stress coefficients $e_{11}$ of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$, together with the ionic and electronic contributions to $e_{11}$, are shown in \\autoref{e1}. When M changes from Cr to Mo to W with the same A and Z atoms, the electronic contribution of $\\mathrm{MA_2Z_4}$ in the $\\alpha_1$ phase decreases, while that of the $\\alpha_2$ phase becomes more negative. It is found that the absolute value of the electronic contribution of all $\\mathrm{MA_2Z_4}$ in the $\\alpha_2$ phase is smaller than that in the $\\alpha_1$ phase. The ionic contribution of all materials in the $\\alpha_2$ phase is positive, the same sign as the electronic contribution in the $\\alpha_1$ phase.\n As M changes from Cr to Mo to W with the same A and Z atoms, the ionic contribution of $\\mathrm{MA_2Z_4}$ in the $\\alpha_2$ phase decreases, while that of the $\\alpha_1$ phase becomes more negative, except for $\\mathrm{CrSi_2N_4}$. It is clearly seen that the $e_{11}$ values of all materials are positive. For the same M and A atoms, the $e_{11}$ of the P-containing $\\mathrm{MA_2Z_4}$ is larger than that of the N-containing one in both the $\\alpha_1$ and $\\alpha_2$ phases. For the $\\alpha_1$ phase, $e_{11}$ ranges from 3.14$\\times$$10^{-10}$ C\/m to 9.31$\\times$$10^{-10}$ C\/m, while the range for the $\\alpha_2$ phase is 0.85$\\times$$10^{-10}$ C\/m to 7.39$\\times$$10^{-10}$ C\/m.\n \\begin{figure}\n \n \\includegraphics[width=7cm]{Fig11.eps}\n \\caption{(Color online) The piezoelectric strain coefficients $d_{11}$ of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$. }\\label{d1}\n\\end{figure}\n\n\n Finally, the piezoelectric strain coefficients $d_{11}$ of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$ are plotted in \\autoref{d1}. The related $d_{11}$ are also summarized in \\autoref{tab1-1}. For the $\\alpha_1$ phase, $d_{11}$ ranges from 0.78 pm\/V to 6.12 pm\/V, and for the $\\alpha_2$ phase from 0.25 pm\/V to 5.06 pm\/V. The trend of $d_{11}$ across the materials is very similar to that of $e_{11}$. It is clearly seen that the P-containing $\\mathrm{MA_2Z_4}$ monolayers show a superior piezoelectric response because of their high $d_{11}$.\n Most of their $d_{11}$ values are larger than the $d_{33}$ = 3.1 pm\/V of the familiar bulk piezoelectric wurtzite GaN\\cite{zh1}.\nSo, it is highly recommended to synthesize the P-containing $\\mathrm{MA_2Z_4}$ monolayers, such as $\\alpha_1$-$\\mathrm{CrSi_2P_4}$, $\\alpha_1$-$\\mathrm{MoSi_2P_4}$, $\\alpha_1$-$\\mathrm{CrGe_2P_4}$, $\\alpha_1$-$\\mathrm{MoGe_2P_4}$ and $\\alpha_2$-$\\mathrm{CrGe_2P_4}$.
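To make the last statement concrete, the following short Python sketch simply screens the $d_{11}$ values of \\autoref{tab1-1} (in pm\/V) for entries that are greater than or close to 5 pm\/V; the 4.9 pm\/V cutoff is only a convenient way to encode ``close to 5'' and is not a number taken from the paper.

\\begin{verbatim}
# Screening of the d_11 values listed in the table above (pm/V).
d11 = {
    "alpha_1": {"CrSi2N4": 1.24, "MoSi2N4": 1.15, "WSi2N4": 0.78,
                "MoGe2N4": 1.85, "WGe2N4": 1.31, "CrSi2P4": 6.03,
                "MoSi2P4": 4.91, "WSi2P4": 4.16, "CrGe2P4": 6.12,
                "MoGe2P4": 5.27, "WGe2P4": 4.36},
    "alpha_2": {"CrSi2N4": 1.42, "MoSi2N4": 0.65, "WSi2N4": 0.25,
                "MoGe2N4": 0.75, "WGe2N4": 0.26, "CrSi2P4": 3.96,
                "MoSi2P4": 2.64, "WSi2P4": 1.65, "CrGe2P4": 5.06,
                "MoGe2P4": 3.87, "WGe2P4": 2.77},
}

threshold = 4.9   # "greater than or close to 5 pm/V"
for phase, values in d11.items():
    for name, d in sorted(values.items(), key=lambda kv: -kv[1]):
        if d >= threshold:
            print(phase, name, d)
# prints the five recommended candidates: alpha_1-CrGe2P4, alpha_1-CrSi2P4,
# alpha_1-MoGe2P4, alpha_1-MoSi2P4 and alpha_2-CrGe2P4
\\end{verbatim}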
\n\n\n\\section{Conclusion}\nWe have demonstrated a strong structure effect on the intrinsic piezoelectricity of septuple-atomic-layer $\\mathrm{MSi_2N_4}$ (M=Mo and W)\nthrough first-principles simulations. The same structural dependence of $d_{11}$ and $e_{11}$, together with the ionic and electronic contributions to $e_{11}$, is found for the $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ monolayers, and the $\\alpha_5$ phase has large piezoelectric coefficients. The intrinsic piezoelectricity of the monolayers $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$ in the $\\alpha_1$ and $\\alpha_2$ phases is also explored, and the $\\mathrm{MA_2P_4}$ monolayers have a stronger piezoelectric polarization than the N-containing $\\mathrm{MA_2Z_4}$ monolayers.\nThe largest $d_{11}$ among the $\\mathrm{MA_2N_4}$ materials is only 1.85 pm\/V, while the largest $d_{11}$ of $\\mathrm{MA_2P_4}$ reaches 6.12 pm\/V.\n Among the 22 studied materials, the $d_{11}$ of monolayer $\\alpha_1$-$\\mathrm{CrSi_2P_4}$, $\\alpha_1$-$\\mathrm{MoSi_2P_4}$, $\\alpha_1$-$\\mathrm{CrGe_2P_4}$, $\\alpha_1$-$\\mathrm{MoGe_2P_4}$ and $\\alpha_2$-$\\mathrm{CrGe_2P_4}$ are greater than or close to 5 pm\/V.\n These $d_{11}$ of $\\mathrm{MA_2P_4}$ compare favorably with the piezoelectric coefficients of\nfamiliar bulk piezoelectrics such as $\\alpha$-quartz ($d_{11}$ = 2.3\npm\/V), wurtzite GaN ($d_{33}$ = 3.1 pm\/V) and wurtzite AlN ($d_{33}$ = 5.1 pm\/V)\\cite{zh1,zh2}.\nOur work provides valuable guidance for\nexperimental synthesis efforts, and we hope that it will stimulate further\nresearch interest in the $\\mathrm{MA_2Z_4}$ family, especially regarding\nits applications in the field of piezoelectricity.\n\n\n\n\n\\section{Data availability}\nThe data that support the findings of this study are available from the corresponding author upon reasonable request.\n\n\n\n\\begin{acknowledgments}\nThis work is supported by the Natural Science Foundation of Shaanxi Provincial Department of Education (19JK0809). We are grateful to the Advanced Analysis and Computation Center of China University of Mining and Technology (CUMT) for the award of CPU hours and WIEN2k\/VASP software to accomplish this work.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\\bigskip\nThere have been a number of recent investigations into the \n{\\em chiral odd} structure functions of the nucleon. As in the case \nof the polarized structure functions there are two quantities of \ninterest at leading twist: The transverse spin chiral odd structure\nfunction $h_T(x,Q^2)$ and the longitudinal spin chiral odd structure \nfunction $h_L(x,Q^2)$. Within the context of the operator product \nexpansion (OPE) the analysis in terms of twist reveals that the \ntransverse chiral odd structure function $h_T(x,Q^2)$ is purely twist--2, \nwhile the longitudinal structure function $h_L(x,Q^2)$ contains \nboth twist--2 and twist--3 contributions.\nAccordingly, the decomposition of $h_L(x,Q^2)$ into twist--2 \nand twist--3 ($\\overline{h}_L(x,Q^2)$) pieces is given by\n\\begin{eqnarray}\nh_L(x,Q^2)=2x\\, \\int_{x}^1\\ dy\\frac{h_T(y,Q^2)}{y^2}+ \n\\overline{h}_L(x,Q^2)\\ .\n\\label{hL}\n\\end{eqnarray}\nAs a reminder we note that the kinematics are defined such that $q$ denotes \nthe momentum transferred to a nucleon of momentum $p$. In the Bjorken limit, {\\it i.e.} $Q^2=-q^2\\to\\infty$ with $x=Q^2\/2p\\cdot\\! q$ fixed,\nthe leading twist contributions to the nucleon structure\nfunctions dominate the $1\/Q^2$ expansion. 
The additional and \nimportant logarithmic dependence on $Q^2$, which is associated \nwith soft gluon emission, is included via the evolution program \nof perturbative quantum--chromo--dynamics (QCD).\n\nWhile the chiral odd structure functions are not directly accessible \nin deep inelastic lepton nucleon scattering (DIS) there is the well known \nproposal at {\\em RHIC} to extract the quark transversality distributions\n$h_T^{(a)}(x,Q^2)$ ($a$ being the flavor index) from Drell--Yan \ndilepton--production resulting from transversely polarized proton \nbeams \\cite{Ra79}. Unfortunately dilepton production processes \nare difficult to extract from proton--proton collisions as the purely\nhadronic processes dominate. Furthermore this experiment will provide\nonly the product of the chiral odd distributions for quarks and \nantiquarks. As the latter are presumably small these flavor distributions \nare not easily measurable in the Drell--Yan process. In the light of these \ndisadvantages it has recently been pointed out that the transversality \ndistributions may also be measured in the fragmentation region of \nDIS \\cite{Ja98}. The key observation is that these distribution \nfunctions can be extracted from an asymmetry in the two meson production \nin the special case that this two meson state (like $\\pi^+\\pi^-$) is a \nsuperposition of different $C$--parity states, as {\\it e.g.} $\\sigma$ \nand $\\rho$. Then the phases in the final state interactions do not \nvanish on the average and the differential cross section is proportional \nto the product of chiral odd distributions and the interference \nfragmentation functions. The latter describe the emission and \nsubsequent absorption of a two pion intermediate state from quarks \nof different helicity. In case these fragmentation functions are not \nanomalously small the chiral odd distribution functions can then be \nobtained from DIS processes\\footnote{The relevant fragmentation and \ndistribution functions depend on different kinematical variables: the \ntwo meson state momentum fraction and the Bjorken variable, respectively.} \nlike $eN\\to e^\\prime \\pi^+\\pi^-X$ with the nucleon $N$ being transversely \npolarized. Assuming isospin covariance for the fragmentation functions \nthese DIS processes will provide access to the charge squared weighted \nchiral odd distribution functions \\cite{Ja98}. Such processes \nshould be measurable in the transversely polarized target experiments at \n{\\it HERMES}. Knowledge of the chiral odd structure \nfunctions will serve to complete our picture of the spin structure\nof the nucleon as they correspond to the distribution of the quark \ntransverse spin in a nucleon which is transversely polarized \n\\cite{Ja96}. With these data being expected in the near future \nit is, of course, interesting to understand the structure of \nthe nucleon from the theoretical point of view. As we are still \nlacking a bound state wave function for nucleon in terms of quarks \nand gluons, {\\it i.e.} computed from first principles in QCD,\nit is both mandatory and fruitful to investigate these chiral odd \nflavor distributions and their charge weighted average\nnucleon structure functions within hadronic models of the\nnucleon \\cite{Ja92,St93,Ba97,Sc97,Sc97a,Ka97,Po96}. 
\n\nIn the context of the spin structure of the nucleon chiral soliton\nmodels are particularly interesting as they provide an explanation\nfor the small magnitude of the quark spin contribution to the proton \nspin, {\\it i.e.} the vanishingly small matrix element of the singlet \naxial current \\cite{We96}. In these models the nucleon is described as a \nnon--perturbative field configuration in some non--linear effective \nmeson theory \\cite{Sk61,Ad83,Al96}. Unfortunately in many of these soliton \nmodels the evaluation of structure functions is infeasible due to the \nhighly non--linear structure of the current operators and the inclusion \nof higher derivative operators which complicates the current commutation \nrelations. However, it has recently been recognized that the soliton \nsolution \\cite{Al96} which emerges after bosonization \\cite{Eb86} of \nthe Nambu--Jona--Lasinio (NJL) \\cite{Na61} chiral quark model can be \nemployed to compute nucleon structure functions \\cite{We96a,We97}. In \norder to project this soliton configuration onto nucleon states with good \nspin and flavor a cranking procedure must be employed \\cite{Ad83,Re89} \nwhich implements significant $1\/N_C$ contributions ($N_C$ is the number \nof color degrees of freedom.). When extracting the structure functions \nfrom the NJL chiral soliton model the full calculation which also\nincludes effects of the vacuum polarized by the background soliton is \nquite laborious. In addition we are still lacking a regularization \nprescription of the vacuum contribution to the structure functions\nwhich is derived from the action functional and which yields algebraic \nexpressions for their moments which are {\\em consistent} with those for \nthe static nucleon properties. Fortunately it is known that the \ndominant contribution to \nstatic nucleon properties stems from the single quark level which has the \nlowest energy eigenvalue (in magnitude) and is strongly bound by the \nsoliton \\cite{Al96}. This is particularly the case for spin related \nquantities. Hence it is a reasonable approximation to consider\nonly the contribution of this level to the structure functions. In \nthe proceeding section the NJL chiral soliton model together with the \nabove mentioned approximation, which we will call {\\it valence quark \napproximation}\\footnote{This notation refers to the valence quark in the \nNJL chiral soliton model and should not be confused with the valence quark\nin the parton model.} will be described in more detail. \n\nThe NJL model for the quark flavor dynamics incorporates spontaneous\nbreaking of chiral symmetry in a dynamic fashion. Hence the quark fields \nwhich built up the soliton self--consistently \\cite{Re88} are {\\em \nconstituent quarks} with a constituent quark mass of several hundred \n{\\rm MeV}. Keeping this in mind we calculate both the {\\em effective}\nconstituent quark distributions and in turn the corresponding leading twist\ncontributions to nucleon structure functions ({\\it cf.} eq (\\ref{chgw}))\nat a low scale $Q_0^2$. In the language of Feynman diagrams \nthe DIS processes are described by a constituent quark of the nucleon \nabsorbing a quanta of the external source. In the Bjorken limit the quark \nthen propagates highly off--shell before emitting a quanta of the external \nsource. 
The intermediate quark may propagate forward and backward.\nHence the complete structure functions acquire contributions from \nboth distributions where the intermediate constituent quark moves \nforward and backward. We will focus on nucleon structure functions which\nare defined as the sum over the charge--weighted flavor distributions \n\\cite{Ja92}\n\\begin{eqnarray}\nh_{T\/L}^{(\\pm)}(x,Q_0^2)=\\frac{1}{2} \\sum_a \ne_{a}^2 h^{(a,\\pm)}_{T\/L}(x,Q_0^2) \\ ,\n\\label{chgw}\n\\end{eqnarray}\nin analogy to those of the chiral even spin polarized and unpolarized \nnucleon structure functions \\cite{Ja98,Ja96}. Here $a$ represents a \nquark label, while $(\\pm)$ refers to the forward $(+)$ and backward\n$(-)$ propagating intermediate constituent quarks. Furthermore $e_{a}$ \ndenotes the charge fraction of the considered quark flavor $a$. The \ncomplete chiral odd structure functions are finally obtained as the sum\n\\begin{eqnarray}\nh_{T\/L}(x,Q_0^2)=h_{T\/L}^{(+)}(x,Q_0^2)+h_{T\/L}^{(-)}(x,Q_0^2)\\ .\n\\label{defintro}\n\\end{eqnarray}\nThe calculation of the flavor distributions $h^{(a)}_{T\/L}$ in the \nvalence approximation to the NJL chiral soliton model \\cite{We96a,We97} \nis summarized in section 3.\n\nFurther it is important to note that when considering model structure \nfunctions the OPE implies that the initial conditions,\n$\\mu^2=Q_0^2$, for the evolution is, \n{\\it a priori}, a free parameter in any baryon model \\cite{Sc91}. \nFor the model under consideration it has previously been determined to \n$Q_0^2\\approx0.4{\\rm GeV}^2$ by studying the evolution dependence of \nthe model prediction for the unpolarized structure functions \\cite{We96a}. \nIn a subsequent step to compute the chiral odd structure functions we \nemploy a leading order evolution program \\cite{Ba97,Ka97} to obtain the \nchiral odd structure functions at a larger scale, {\\it e.g.} \n$Q^2\\approx 4{\\rm GeV}^2$ relevant to the experimental conditions. This \nevolution program incorporates the leading logarithmic corrections to \nthe leading twist pieces. The evolution procedure as applied to our \nmodel structure functions will be explained in section 4.\n\nThe numerical results for the chiral odd structure functions are\npresented in section 5 while concluding remarks are contained in \nsection 6. Technical details on the model calculations and the QCD \nevolution procedure are relegated to appendices. Let us also mention\nthat there has been a previous calculation of $h_T(x,Q_0^2)$ \\cite{Po96} \nwhich, however, ignored both the projection onto good nucleon states \nand the QCD evolution. Furthermore in that calculation an (arbitrary) \nmeson profile was employed rather than a self--consistent soliton \nsolution to the static equations of motion. 
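As a concrete illustration of the charge weighting in eq (\\ref{chgw}), and only as an illustration since the arrays below are placeholders rather than model output, a proton structure function can be assembled from two-flavor distributions as follows; the neutron then follows by interchanging the up and down distributions via isospin symmetry.

\\begin{verbatim}
# Minimal sketch of eq. (chgw) for two light flavors (placeholder inputs).
import numpy as np

x   = np.linspace(1e-3, 0.999, 200)
h_u = np.exp(-6.0 * (x - 0.30) ** 2)          # hypothetical h^(u)(x), not model output
h_d = -0.3 * np.exp(-8.0 * (x - 0.25) ** 2)   # hypothetical h^(d)(x)

e_u2, e_d2 = (2.0 / 3.0) ** 2, (1.0 / 3.0) ** 2

h_proton  = 0.5 * (e_u2 * h_u + e_d2 * h_d)   # eq. (chgw)
h_neutron = 0.5 * (e_u2 * h_d + e_d2 * h_u)   # isospin: u <-> d
\\end{verbatim}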
\n\n\\bigskip\n\\section{The NJL--Model Chiral Soliton}\n\\bigskip\n \nBefore continuing with the discussion of the chiral odd structure\nfunctions, we will review the issue of the \nchiral soliton in the NJL model.\n\nThe Lagrangian of the NJL model in terms of quark degrees of freedom \nreads \\cite{Na61,Eb86}\n\\begin{eqnarray}\n{\\cal L} = \\bar q (i\\partial \\hskip -0.5em \/ - m^0 ) q +\n 2G_{\\rm NJL} \\sum _{i=0}^{3}\n\\left( (\\bar q \\frac {\\tau^i}{2} q )^2\n +(\\bar q \\frac {\\tau^i}{2} i\\gamma _5 q )^2 \\right) .\n\\label{NJL}\n\\end{eqnarray}\nHere $q$, $\\hat m^0$ and $G_{\\rm NJL}$ denote the quark field, the \ncurrent quark mass and a dimensionful coupling constant, respectively.\nThis model is motivated as follows:\nIntegrating out the gluon fields from QCD yields a current--current \ninteraction mediated by one gluon exchange to leading order\nin powers of the quark current. Replacing the gluon mediating \npropagator with a local contact interaction and \nperforming the appropriate Fierz--transformations yields the \nLagrangian (\\ref{NJL}) in leading order of $1\/N_C$ \\cite{Ca87,Re90}, \nwhere $N_C$ refers to the number of color degrees of freedom. Although\nonly a subset of possible non--perturbative gluonic modes are \ncontained in the contact interaction term in eq (\\ref{NJL}) \nit is important to stress that gluonic effects are contained in the \nmodel (\\ref{NJL}). Furthermore the NJL model embodies the approximate \nchiral symmetry of QCD and has to be understood as an effective \n(non--renormalizable) theory of the low--energy quark flavor dynamics.\n\nApplication of functional bosonization techniques \\cite{Eb86} to the \nLagrangian (\\ref{NJL}) yields the mesonic action\n\\begin{eqnarray}\n{\\cal A}&=&{\\rm Tr}_\\Lambda\\log(D)+\\frac{1}{4G_{\\rm NJL}}\n\\int d^4x\\ {\\rm tr}\n\\left(m^0\\left(M+M^{\\dag}\\right)-MM^{\\dag}\\right)\\ , \n\\label{bosact} \\\\\nD&=&i\\partial \\hskip -0.5em \/-\\left(M+M^{\\dag}\\right)\n-\\gamma_5\\left(M-M^{\\dag}\\right)\\ ,\n\\label{dirac}\n\\end{eqnarray}\nwhere $M=S+iP$ comprises composite scalar ($S$) and pseudoscalar ($P$) \nmeson fields which appear as quark--antiquark bound states. \nFor regularization, which is indicated by the cut--off $\\Lambda$, we \nwill adopt the proper--time scheme \\cite{Sch51}. The free parameters \nof the model are the current quark mass $m^0$, the coupling constant \n$G_{\\rm NJL}$ and the cut--off $\\Lambda$. The equation of motion for \nthe scalar field $S$ may be considered as the gap--equation for the \norder parameter $\\langle {\\bar q} q\\rangle$ of chiral symmetry breaking. \nThis equation relates the vacuum expectation value \n$\\langle M\\rangle=m{\\mbox{{\\sf 1}\\zr{-0.16}\\rule{0.04em}{1.55ex}\\zr{0.1}}}$ to the model parameters $m^0$, $G_{\\rm NJL}$ \nand $\\Lambda$. For apparent reasons $m$ is called the {\\em constituent} \nquark mass. The occurrence of this vacuum expectation value reflects the \nspontaneous breaking of chiral symmetry and causes the pseudoscalar fields \nto emerge as (would--be) Goldstone bosons. Expanding ${\\cal A}$ to quadratic \norder in $P$ (around $\\langle M\\rangle$) these parameters are related to \nphysical quantities; that is, the pion mass, $m_\\pi=135{\\rm MeV}$ and the \npion decay constant, $f_\\pi=93{\\rm MeV}$. 
This leaves one undetermined \nparameter which we choose to be the constituent quark mass \\cite{Eb86}.\n\nThe NJL model chiral soliton \\cite{Al96,Re88} is given \nby a non--perturbative meson configuration which is assumed of the \nhedgehog type\n\\begin{eqnarray}\nM_{\\rm H}(\\mbox{\\boldmath $x$})=m\\ {\\rm exp}\n\\left(i\\mbox{\\boldmath $\\tau$}\\cdot{\\hat{\\mbox{\\boldmath $x$}}}\n\\Theta(r)\\right)\\ .\n\\label{hedgehog}\n\\end{eqnarray}\nIn order to compute the functional trace in eq (\\ref{bosact}) for this \nstatic configuration we express the \nDirac operator (\\ref{dirac}) in terms of a Hamiltonian\noperator $h$, {\\it i.e.} $D=i\\beta(\\partial_t-h)$ with\n\\begin{eqnarray}\nh=\\mbox{\\boldmath $\\alpha$}\\cdot\\mbox{\\boldmath $p$}+m\\ \\beta\\\n{\\rm exp}\\left(i\\gamma_5\\mbox{\\boldmath $\\tau$}\n\\cdot{\\hat{\\mbox{\\boldmath $x$}}}\\Theta(r)\\right)\\ .\n\\label{hamil}\n\\end{eqnarray}\nWe denote the eigenvalues and eigenfunctions of $h$ by \n$\\epsilon_\\mu$ and $\\Psi_\\mu$, respectively. Explicit expressions for \nthese wave--functions are displayed in appendix B of ref \\cite{Al96}. \nIn the proper--time regularization scheme the energy functional of \nthe NJL model is found to be \\cite{Re89,Al96}, \n\\begin{eqnarray}\nE[\\Theta]=\n\\frac{N_C}{2}\\epsilon_{\\rm v}\n\\left(1+{\\rm sgn}(\\epsilon_{\\rm v})\\right)\n&+&\\frac{N_C}{2}\\int^\\infty_{1\/\\Lambda^2}\n\\frac{ds}{\\sqrt{4\\pi s^3}}\\sum_\\nu{\\rm exp}\n\\left(-s\\epsilon_\\nu^2\\right)\n\\nonumber \\\\* && \\hspace{1.5cm}\n+\\ m_\\pi^2 f_\\pi^2\\int d^3r \\left(1-{\\rm cos}\\Theta(r)\\right) .\n\\label{efunct}\n\\end{eqnarray}\nThe subscript ``${\\rm v}$\" denotes the valence quark level. This state \nis the distinct level bound in the soliton background, {\\it i.e.}\n$-m<\\epsilon_{\\rm v}1$ although the contributions for $x>1$ \nare very small. \n\\begin{figure}[ht]\n\\centerline{\n\\epsfig{figure=updown.400.ps,height=8.5cm,width=8.0cm,angle=270}\n\\hspace{-0.5cm}\n\\epsfig{figure=updown.450.ps,height=8.5cm,width=8.0cm,angle=270}}\n\\caption{\\label{fig_htud}\nThe valence quark approximation of the transverse chiral--odd nucleon \ndistribution function as a function of Bjorken--$x$ for the up and down \nquark flavor content in the rest frame. For comparison also the model \ncalculation \\protect\\cite{We97} for the twist--2 polarized structure \nfunction $g_1(x,Q_0^2)$ is shown for the respective flavor channels.\nTwo values of the constituent quark mass are considered:\n$m=400 {\\rm MeV}$ (left panel) and $m=450 {\\rm MeV}$ (right panel).}\n\\end{figure}\n\nThe calculation of nucleon \nstructure functions in the Bjorken limit, however, singles out \nthe null plane, $\\xi^+=0$. This condition can be satisfied upon \ntransformation to the infinite momentum frame (IMF) even for models \nwhere the nucleon emerges as a (static) localized object \\cite{Hu77}. \nFor the quark soliton model under consideration this transformation \ncorresponds to performing a boost in the space of the collective \ncoordinate $\\bbox{x}_0$, {\\it cf.} eq (\\ref{cht}). Upon this boost \nto the IMF we have observed \\cite{Ga97} that the common problem of \nimproper support for the structure functions, {\\it i.e.} non--vanishing \nstructure functions for $x>1$, is cured along the line suggested by \nJaffe \\cite{Ja80} some time ago. The reason simply is that the Lorentz \ncontraction associated with the boost to the IMF maps the infinite line \nexactly onto the interval $x\\in [0,1[$. 
In addition we have observed that \nthis Lorentz contraction effects the structure functions also at small \nand moderate $x$. Incorporating these results for the general set \nof leading twist structure functions within the NJL--chiral soliton model\nyields the following form for the forward and backward\nmoving intermediate quark\nstate contributions to the chiral odd transverse\nspin structure function, $h^{(\\pm)}_T\\left(x,Q^2\\right)$,\n\\begin{eqnarray}\n\\hspace{-0.3cm}\nh^{(\\pm)}_T(x)&=&\\pm N_C\\frac{M}{\\pi(1-x)}\n\\int_{p_{\\rm min}}^\\infty \\hspace{-0.2cm} pdp d\\varphi \\\n\\nonumber \\\\ && \\hspace{1.0cm}\n\\times\\langle N |\\tilde{\\psi}^\\dagger (\\bbox{p}_{\\mp})\n\\left(1\\mp\\alpha_3\\right)\\gamma_{\\perp}\\gamma_5{\\cal Q}^2\n\\tilde{\\psi}(\\bbox{p}_{\\mp})|N\\rangle\n\\Big|_{{\\rm cos}\\theta=-\n{\\textstyle \\frac{M\\ {\\rm ln}(1-x)\\pm\\epsilon_{\\rm v}}{p}}} \\ .\n\\label{htp}\n\\end{eqnarray}\nIn general the resulting relation between structure functions \nin the IMF and the rest frame (RF) reads\n\\begin{eqnarray}\nf_{\\rm IMF}(x)=\\frac{\\Theta(1-x)}{1-x} f_{\\rm RF}\n\\Big(-{\\rm ln}(1-x)\\Big)\\ .\n\\label{fboost}\n\\end{eqnarray}\nOf course, in the context of the chiral odd structure functions \n$f_{\\rm RF}$ is to be identified with the expressions in \neqs (\\ref{ht11},\\ref{hl11},\\ref{hltnjl}). As will be recognized \nshortly the solution to\nthe proper support problem is essential in order to \napply the evolution program of perturbative QCD.\n\\begin{figure}[ht]\n\\centerline{\n\\epsfig{figure=updownpr.400.ps,height=8.5cm,width=8.0cm,angle=270}\n\\hspace{-0.5cm}\n\\epsfig{figure=updownpr.450.ps,height=8.5cm,width=8.0cm,angle=270}}\n\\caption{\\label{fig_htudpr}\nSame as figure \\protect\\ref{fig_htud} \nin the IMF (\\protect\\ref{fboost}).}\n\\end{figure}\nThe chiral odd and polarized structure functions resulting from this\ntransformation are shown in figure \\ref{fig_htudpr}.\n\nIn order to include the logarithmic corrections to the \ntwist--2 pieces of the chiral odd structure functions we \napply the well--established GLAP procedure \\cite{Gr72}.\nFor the transverse component $h_T(x,Q^2)$ this is \nstraightforward as it is pure twist--2. For the longitudinal\npiece $h_L(x,Q^2)$ one first has to extract the twist--2\ncomponent through $h_T(x,Q^2)$ namely,\n$h_L^{(2)}(x,Q^2)=2x\\, \\int_{x}^1\\ dy\\,h_T(y,Q^2)\/y^2$.\n\nWe simultaneously denote by $h^{(2)}$ the twist--2 parts of $h_T$ \nand $h_L$. To leading order (in $\\alpha_{QCD}(Q^2)$) the variations\nof the structure functions from a change $\\delta t$ of the \nmomentum scale is given by\n\\begin{eqnarray}\nh^{(2)}(x,t+\\delta{t})=h^{(2)}(x,t)\\ \n+ \\frac{dh^{(2)}(x,t)}{dt} \\ \\delta t\\ ,\n\\label{h2var}\n\\end{eqnarray}\nwhere $t={\\rm log}\\left(Q^2\/\\Lambda_{QCD}^2\\right)$. The variation\n(\\ref{h2var}) is essentially due to the emission and absorption of \nsoft gluons. 
The explicit expression for the evolution differential \nequation is given by the convolution integral,\n\\begin{eqnarray}\n\\frac{d\\, h^{(2)}(x,t)}{dt} =\\frac{\\alpha_{QCD}(t)}{2\\pi}\nC_{R}(F)\\int^1_{x}\\ \\frac{dy}{y}P_{qq}^h\\left(y\\right)\nh^{(2)}\\left(\\frac{x}{y},t\\right)\n\\label{convl}\n\\end{eqnarray}\nwhere the leading order splitting function \\cite{Ar90,Ba97} is\ngiven by,\n\\begin{eqnarray} \nP_{qq}^{h}\\left(z\\right)=\n\\frac{4}{3}\\left[\\frac{2}{\\left(1-z\\right)_+}-2\n+\\frac{3}{2}\\ \\delta(z-1)\\right]\n\\end{eqnarray}\nand $C_R(f)=\\left(n_f^2-1\\right)\/2n_f$ for $n_f$ active flavors,\n$\\alpha_{QCD}(t)=4\\pi\/\\left[b_0\\log\\left(Q^2\/ \\Lambda^2\\right)\\right]$\nand $b_0=(11N_C-2n_f)\/3$.\nEmploying the ``+\" prescription yields for three light flavors and\n$N_C=3$\n\\begin{eqnarray}\n\\frac{d h^{(2)}(x,t)}{dt}&=&\\frac{\\alpha_{QCD}(t)}{2\\pi}\n\\left\\{\\ \\left(2 + \\frac{8}{3}\\log(1-x)\\right)h^{(2)}(x,t)\n\\right.\n\\nonumber \\\\*&& \\hspace{-0.7cm} \n\\left. \n+\\, \\frac{8}{3}\\int^{1}_{x}\\ \\frac{dy}{y}\\left[\n\\frac{1}{1-y}\\left(h^{(2)}(\\frac{x}{y},t)-yh^{(2)}(x,t)\\right)\n- h^{(2)}(\\frac{x}{y},t)\\right]\\right\\}\\ .\n\\label{evhtw2}\n\\end{eqnarray}\nAs indicated above, the structure functions must vanish at the boundary \n$x=1$ in order to cancel the divergence of the logarithm in eq \n(\\ref{evhtw2}) and thus for the GLAP procedure to be applicable. This \nmakes the projection of the rest frame structure functions mandatory.\nThe variation of the structure functions for finite intervals \nin $t$ is straightforwardly obtained by iteration of these \nequations, {\\it i.e.} as a solution to the differential \nequation (\\ref{evhtw2}). As discussed previously the initial value \nfor integrating the differential equation is given by the scale \n$Q_0^2$ at which the model is defined. It should be emphasized that \nthis scale essentially is a new parameter of the model. For a given \nconstituent quark mass $m$ we adjust $Q_0^2$ to maximize the \nagreement of the predictions with the experimental data on \npreviously \\cite{We96a} calculated unpolarized structure functions for \nelectron--nucleon DIS: $F_2^{ep}-F_2^{en}$. For the constituent \nquark mass $m=400{\\rm MeV}$ we have obtained $Q_0^2\\approx0.4{\\rm GeV}^2$.\nNote that this value of $Q_0^2$ is indeed (as it should) smaller than \nthe ultraviolet cut--off of the underlying NJL soliton model as \n$\\Lambda^2\\approx 0.56{\\rm GeV}^2$. The latter quantity indicates the range \nof validity of the model. In figure \\ref{fig_ht2p}a we compare the un--evolved, \nprojected, proton structure function $h_T^{p}\\left(x,Q_0^2\\right)$ with \nthe one evolved from $Q_0^2=0.4{\\rm GeV}^2$ to $Q^2=4.0{\\rm GeV}^2$. As \nexpected the evolution pronounces the structure function at low $x$. \n\nThis change towards small $x$ is a generic feature of the projection \nand evolution process and presumably not very sensitive to the \nprescription applied here. In particular, choosing a projection \ntechnique \\cite{Tr97} alternative to (\\ref{fboost}) may easily be \ncompensated by an appropriate variation of the scale $Q_0^2$. In \nfigure \\ref{fig_ht2p}b the same calculation for $h_L^{(2)}(x,Q^2)$ is \npresented.\n\nIn the evolution of the twist--2 pieces we have restricted ourselves\nto the leading order in $\\alpha_s$ because for the twist--3 piece of\n$h_L$, the necessary ingredients are not known in next--to--leading \norder. Even the leading order evolution is only known in the large \n$N_C$ limit. 
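For orientation, a rough numerical sketch of the two steps just described is given below: the rest frame distribution is first projected to the IMF according to eq (\\ref{fboost}) and then propagated by a single Euler step of eq (\\ref{evhtw2}). The input profile, grid and step size are placeholders, and this is not the code used to produce the figures.

\\begin{verbatim}
import numpy as np

def trapz(f, t):
    # simple trapezoid rule, to keep the sketch self-contained
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)

def project_to_imf(f_rf, x):
    # f_IMF(x) = f_RF(-ln(1-x)) / (1-x) for 0 <= x < 1, cf. eq. (fboost)
    return f_rf(-np.log(1.0 - x)) / (1.0 - x)

def dh_dt(h, x, alpha_s):
    # right-hand side of eq. (evhtw2), evaluated on the grid x by quadrature
    rhs = np.empty_like(x)
    for i, xi in enumerate(x):
        y = np.linspace(xi, 1.0 - 1e-6, 400)
        h_xy = np.interp(xi / y, x, h)                    # h(x/y, t)
        integrand = ((h_xy - y * h[i]) / (1.0 - y) - h_xy) / y
        rhs[i] = alpha_s / (2.0 * np.pi) * (
            (2.0 + 8.0 / 3.0 * np.log(1.0 - xi)) * h[i]
            + 8.0 / 3.0 * trapz(integrand, y))
    return rhs

x  = np.linspace(1e-3, 1.0 - 1e-3, 300)
h0 = project_to_imf(lambda w: np.exp(-w), x)   # placeholder rest-frame profile
h1 = h0 + 0.1 * dh_dt(h0, x, alpha_s=0.5)      # one crude Euler step, delta t = 0.1
\\end{verbatim}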
It should be noted that such an approach seems \nparticularly suited for soliton models which equally utilize large \n$N_C$ arguments. As pointed out by Balitskii et al. \\cite{Bal96} the \nadmixture of independent quark and quark--gluon operators contributing \nto the twist--3 portion ${\\overline{h}}_L(x,Q^2)$ grows with $n$ \nwhere $n$ refers to the $n^{\\rm th}$ moment,\n${\\cal M}_n\\left[ \\overline{h}_L(Q^2)\\right]$ of $h_L(x,Q^2)$.\nHowever, much like the case with\nthe spin--polarized structure function, $g_2(x,Q^2)$ \\cite{Ali91}\nin the $N_C\\rightarrow \\infty$ limit the quark operators of \ntwist--3 decouple from the quark--gluon operators of the same twist.\nThen the anomalous dimensions $\\gamma_n$ which govern the \nlogarithmic $Q^2$ dependence of ${\\cal M}_n$ can be computed. Once the \n$\\gamma_n$'s are known an evolution kernel can be constructed that \n``propagates'' the the twist--3 part $\\overline{h}(x,Q^2)$ in momentum\n\\begin{eqnarray}\n\\overline{h}_L(x,Q^2)&=&\\int_x^1 \\frac{dy}{y} b(x,y;Q^2,Q_0^2)\n\\overline{h}_L(y,Q_0^2)\\ .\n\\label{evkern}\n\\end{eqnarray}\nWe relegate the detailed discussion of the kernel $b(x,y;Q^2,Q_0^2)$,\nwhich is obtained by inverting the $Q^2$ dependence of ${\\cal M}_n$,\nto appendix C. In figure \\ref{fig_h2bllp}a we show the evolution of \n$\\overline{h}_L(x)$. Again we used $Q_0^2=0.4{\\rm GeV}^2$ and \n$Q^2=4.0{\\rm GeV}^2$.\n\nAs discussed in ref \\cite{Bal96} the merit of this \napproach is that to leading order in $N_C$ the knowledge of \n$h_L(x,Q^2)$ at one scale is sufficient to predict it at any arbitrary \nscale, which is not the case at finite $N_C$.\\footnote{As noted in \n\\cite{Bal96}, next to leading order corrections are estimated to go \nlike $O\\left(1\/N^2_c\\times{\\rm ln}(n)\/\\, n\\right)$ at large $n$.}\nThus $h_L(x,Q^2)$ obeys a generalized GLAP evolution equation. \nThis finally enables us (in much the same manner as was the case \nfor $g_2(x,Q^2)$ in \\cite{We97}) to compute the longitudinal chiral odd \nstructure function $h_L(x,Q^2)$ by combining the separately evolved \ntwist--2 and twist--3 components together. The result for \n$Q_0^2=0.4{\\rm GeV}^2$ and $Q^2=4.0{\\rm GeV}^2$ is shown in figure\n\\ref{fig_h2bllp}b. We recall that the only ingredients have been the leading \ntwist pieces of the chiral odd structure functions at the model \nscale $Q_0$.\\footnote{A feature of $h_L(x)$ compared with $g_2(x)$ \nis that as $h_L(x)$ does not mix with gluon distributions \nowing to its chiral-odd nature and its $Q^2$ evolution is given by \n(\\ref{mom}), (\\ref{adm}) even for the flavor singlet piece.} \n\n\\bigskip\n\\section{Discussion of the Numerical Results}\n\\bigskip\nIn this section we discuss the results of the chiral-odd structure\nfunctions calculated from eqs (\\ref{hT0})--(\\ref{hL1}) for constituent\nquark masses $m=400 {\\rm MeV}$ and $m=450 {\\rm MeV}$. In figure\n\\ref{fig_htud} we have shown the up and down quark contributions\nto the transverse chiral odd structure function of the proton. Figure\n\\ref{fig_htudpr} displays them boosted to the IMF. We observe\nthat these structure functions are always smaller (in magnitude) than\nthe twist--2 polarized structure function $g_1$ with the same flavor\ncontent. This relation is also known from the bag model \\cite{Ja92}.\nSimilar to the confinement model calculation of Barone {\\it et al.}\n\\cite{Ba97} we find that $h_T^{(d)}(x)$ is negative at small $x$. 
In\ncontrast to $g_1^{(d)}(x)$, however, it might change sign although\nthe positive contribution appears to be small and diminishing with\nincreasing constituent quark mass.\n\nAs already indicated in the introduction the DIS processes which are \nsensitive to these distributions will provide access to the charge \nweighted combinations thereof. We will hence concentrate on this flavor \ncontent. In any event, as we will be discussing both, the proton and \nthe neutron chiral odd distributions, other flavor combinations can \nstraightforwardly be extracted by disentangling the isoscalar \nand isovector pieces in eq (\\ref{qsquare}). In \nconnection with the chiral--odd transverse nucleon structure function \nwe also calculate its zeroth moment which is referred to as the isoscalar \nand isovector nucleon tensor charges \\cite{Ja92},\n\\begin{eqnarray}\n\\Gamma^S_{T}(Q^2) &=& \\frac{18}{5} \\int_0^1\\, \n\\left[ dx\\ h_T^p\\left(x,Q^2\\right)\\\n+ h_T^n\\left(x,Q^2\\right)\\right]\n\\label{gtens} \\\\\n\\Gamma^V_{T}(Q^2) &=& 6 \\int_0^1\\, \\left[ dx\\ h_T^p\\left(x,Q^2\\right)\\ \n- h_T^n\\left(x,Q^2\\right)\\right] \n\\label{gtenv}\n\\end{eqnarray}\nat both the low scale, $Q_0^2=0.4 {\\rm GeV}^2$ and a scale commensurate\nwith experiment, $Q^2= 4 {\\rm GeV}^2$. Of course, for the neutron we \nhave to reverse the signs of the isovector pieces in eq (\\ref{hltnjl}).\nIn eqs (\\ref{gtens}) and (\\ref{gtenv}) the normalization factors are \ndue to the separation into isosinglet and isovector contributions, \n{\\it cf.} eq (\\ref{qsquare}). Note that due to \n$\\int_0^1 dz P_{qq}^h(z)\\ne0$ the tensor charge is not protected against \nlogarithmic corrections. Our results for the valence quark approximation \nare summarized in Table 1. For completeness we also add the vacuum \ncontribution to the tensor charges at the model scale $Q_0^2$. Their \nanalytic expressions are given in appendix D. Obviously this \nvacuum contribution is negligibly small. This is a strong\njustification of the valence quark approximation to the chiral \nodd structure functions. \n\\begin{table}[ht]\n\\caption{\\label{tab_1}\nNucleon tensor charges calculated from eqs (\\ref{gtens}) and\n(\\ref{gtenv}) as a function of the constituent quark mass $m$ in the\nNJL chiral--soliton model. The momentum scales are $Q_0^2=0.4{\\rm GeV}^2$\nand $Q^2=4.0{\\rm GeV}^2$. The numbers in parenthesis in the respective\nupper rows include the negligible contribution from the polarized quark\nvacuum. We compare with results from the Lattice \\protect\\cite{Ao97},\nQCD sum rules \\protect\\cite{He95}, the constituent quark model with\nGoldstone boson effects \\protect\\cite{Su97} and a quark soliton model \ncalculation \\protect\\cite{Ki96} including multiplicative $1\/N_C$ corrections \nviolating PCAC in the similar case of the axial vector current \n\\protect\\cite{Al93}. Finally the predictions from the confinement model \nof ref \\protect\\cite{Ba97} with the associated momentum scales \n(in ${\\rm GeV}^2$) are shown.}\n~ \\vskip0.1cm\n\\centerline{\n\\renewcommand{\\arraystretch}{1.5}\n\\begin{tabular}{c|lll|llll|ll}\n$m$ ({\\rm MeV}) &~~~350 &~~~400 &~~~450 \n& Lat. 
& ~SR & ~CQ & ~QS & ~$Q^2$ & CM\n\\\\ \\hline\n$\\Gamma^S_T(Q_0^2)$ & 0.80 (0.82)\n& 0.72 (0.76) & 0.67 (0.72)\n& 0.61 & 0.61 & 1.31 & 0.69 & 0.16 & 0.90 \\\\\n$\\Gamma^S_T(Q^2) $ & 0.73 & 0.65 & 0.61 \n&\\multicolumn{4}{c|}{no scale attributed} \n&25.0 & 0.72\\\\\n\\hline\n$\\Gamma^V_T(Q_0^2)$ & 0.88 (0.89) \n& 0.86 (0.87) & 0.86 (0.85) \n& 1.07 & 1.37 & 1.07 & 1.45 & 0.16 & 1.53 \\\\\n$\\Gamma^V_T(Q^2) $ & 0.80 & 0.78 & 0.77 \n&\\multicolumn{4}{c|}{no scale attributed}\n&25.0 & 1.22 \\\\\n\\end{tabular}}\n\\renewcommand{\\arraystretch}{1.0}\n\\end{table}\nA further justification comes from a recent\nstudy of the Gottfried sum rule within the same model \\cite{Wa98}.\nAlso in that case the contribution of the distorted quark vacuum\nto the relevant structure function turned out to be negligibly\nsmall.\n\nBesides justifying the valence quark approximation for the chiral \nodd distributions table \\ref{tab_1} contains the comparison to other \nmodel calculations of the nucleon tensor charges. We note that in obtaining \nthe isovector tensor charge $\\Gamma_T^V$ we have omitted contributions \nwhich are suppressed by $1\/N_C$ ({\\it cf.} appendix D). These contributions\narise when one adopts a non--symmetric ordering of the operators in \nthe space of the collective operators \\cite{Ki96}. The main reason for \ntaking the symmetric ordering is that in the case of the isovector axial \ncharge, $g_A$, any non--symmetric ordering of the collective operators \nleads to a sizable violation of PCAC unless the meson profile is not \nmodified \\cite{Al93}. These multiplicative $1\/N_C$ corrections \\cite{Da94} \nmay be the reason why our predictions for $\\Gamma_T^V$ are somewhat lower \nthan those of other models. In the case of the flavor singlet component, \nwhich does not have such corrections, our results compare nicely with \nother model calculations except for the constituent quark model of \nref \\cite{Su97}.\n\nIn figure \\ref{fig_htnp} we display the transverse chiral odd proton \n$h_T^{p}\\left(x,Q_0^2\\right)$ and neutron \n$h_T^{n}\\left(x,Q_0^2\\right)$ structure functions at the low momentum \nscale $Q_0^2$, while in figure \\ref{fig_hlnp} we do the same for the \ncorresponding chiral odd longitudinal structure functions \n$h_L^{p}\\left(x,Q_0^2\\right)$\nand $h_L^{n}\\left(x,Q_0^2\\right)$.\n\\begin{figure}[ht]\n\\centerline{\n\\epsfig{figure=hTp.ps,height=8.5cm,width=8.0cm,angle=270}\n\\hspace{-0.5cm}\n\\epsfig{figure=hTn.ps,height=8.5cm,width=8.0cm,angle=270}}\n\\caption{\\label{fig_htnp}\nThe valence quark approximation of the chiral--odd\nnucleon structure functions as a function of Bjorken--$x$.\nLeft panel: $h_{T}^{p}\\left(x ,Q_0^2\\right)$ for constituent\nquark masses $m=400 {\\rm MeV}$ (solid line) and\n$m=450 {\\rm MeV}$ (long--dashed line).\nRight panel: $h_{T}^{n}\\left(x,Q_0^2\\right)$.}\n\\end{figure}\n\\begin{figure}[ht]\n\\centerline{\n\\epsfig{figure=hLp.ps,height=8.5cm,width=8.0cm,angle=270}\n\\hspace{-0.5cm}\n\\epsfig{figure=hLn.ps,height=8.5cm,width=8.0cm,angle=270}}\n\\caption{\\label{fig_hlnp}\nThe valence quark approximation of the chiral--odd\nnucleon structure functions as a function of Bjorken--$x$.\nLeft panel: $h_{L}^{p}\\left(x ,Q_0^2\\right)$ for constituent\nquark masses $m=400 {\\rm MeV}$ (solid line)\nand $m=450 {\\rm MeV}$ (long--dashed line).\nRight panel: $h_{L}^{n}\\left(x,Q_0^2\\right)$.}\n\\end{figure}\nWe observe that the structure \nfunctions $h_{T}^{N}(x,Q_0^{2})$ and $h_{L}^{N}(x,Q_0^{2})$ are \nreasonably localized in the interval $0\\le 
x\\le1$. In particular, this\nis the case for the chiral odd structure functions of the neutron. \nNevertheless a projection as in eq (\\ref{fboost}) is required to \nimplement Lorentz covariance. In addition the computed structure functions \nexhibit a pronounced maximum at $x\\approx0.3$ which is smeared out when the \nconstituent quark mass $m$ increases. This can be understood as follows:\nIn our chiral soliton model the constituent mass serves as a coupling\nconstant of the quarks to the chiral field (see eqs (\\ref{bosact})\nand (\\ref{hamil})). The valence quark level becomes more strongly bound \nas the constituent quark mass increases. Hence the lower components of \nthe valence quark wave--function increase with $m$ and relativistic \neffects become more important. This effect results in the above \nmentioned broadening of the maximum.\n\nAs discussed above a sensible comparison with (eventually available)\ndata requires either to evolve the model results upward according to\nthe QCD renormalization group equations or to compare the model \nresults with a low momentum scale parameterization of the leading \ntwist pieces of the structure functions. The latter requires the \nknowledge of the structure functions at some scale in the whole \ninterval $x\\in[0,1[$. At present no such data are available for \nthe chiral odd structure functions $h_T(x)$ and $h_L(x)$. Therefore \nand in anticipation of results from {\\em RHIC} and or {\\em HERMES} we \napply leading order evolution procedures to evolve the structure \nfunction from the model scale, $Q_0^2=0.4 {\\rm GeV}^2$ to \n$Q^2=4{\\rm GeV}^2$. In Figs. \\ref{fig_ht2p}a and \\ref{fig_ht2p}b we \ndisplay the results of the two step process of projection and evolution \nfor the twist--2 transverse structure function, $h_T^{p}(x,Q^2)$ and \n$h_L^{p(2)}(x,Q^2)$, respectively for a constituent quark mass\nof $m=400 {\\rm MeV}$. 
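Returning briefly to the tensor charges of eqs (\\ref{gtens}) and (\\ref{gtenv}), their numerical evaluation from tabulated structure functions is a straightforward quadrature. The arrays in the following sketch are placeholder shapes and not the model output shown in the figures.

\\begin{verbatim}
import numpy as np

def trapz(f, t):
    # simple trapezoid rule
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)

x    = np.linspace(1e-4, 1.0, 500)
hT_p = 0.9 * np.sqrt(x) * (1.0 - x) ** 3   # placeholder h_T^p(x,Q^2), not model output
hT_n = 0.1 * np.sqrt(x) * (1.0 - x) ** 4   # placeholder h_T^n(x,Q^2)

gamma_T_S = 18.0 / 5.0 * trapz(hT_p + hT_n, x)   # eq. (gtens)
gamma_T_V = 6.0 * trapz(hT_p - hT_n, x)          # eq. (gtenv)
print(gamma_T_S, gamma_T_V)
\\end{verbatim}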
\n\\begin{figure}[ht]\n\\centerline{\n\\epsfig{figure=hTpe.ps,height=8.5cm,width=8.0cm,angle=270}\n\\hspace{-0.5cm}\n\\epsfig{figure=h2Lpe.ps,height=8.5cm,width=8.0cm,angle=270}}\n\\caption{\\label{fig_ht2p}\nLeft panel: The evolution of $h_{T}^{p}\\left(x ,Q^{2}\\right)$\nfrom $Q^2_0=0.4 {\\rm GeV}^2$ (solid line) to $Q^2=4 {\\rm GeV}^2$ \n(long--dashed line) for the constituent quark mass $m=400 {\\rm MeV}$.\nRight panel: The evolution of the twist--2 contribution to the \nlongitudinal chiral odd structure function,\n$h_{L}^{p(2)}\\left(x ,Q^{2}\\right)$\nfrom $Q^2_0=0.4 {\\rm GeV}^2$ (solid line) to\n$Q^2=4 {\\rm GeV}^2$ (long--dashed line) for $m=400 {\\rm MeV}$.}\n\\end{figure}\nIn figure \\ref{fig_h2bllp} we present the evolution of \n$h_L^{p}(x)$ along with its decomposition into terms of the leading \ntwist--2 contribution, $2x \\int_{x}^1\\ dy h^p_T(y,Q^2)\/y^2$, and the \nremaining twist--3 piece, $\\overline{h}^p_L(x,Q^2)$.\n\\begin{figure}[ht]\n\\centerline{\n\\epsfig{figure=h2bLpe.ps,height=8.5cm,width=8.0cm,angle=270}\n\\hspace{-0.5cm}\n\\epsfig{figure=hLpe.ps,height=8.5cm,width=8.0cm,angle=270}}\n\\caption{\\label{fig_h2bllp}\nLeft panel (\\protect\\ref{fig_h2bllp}a): \nThe evolution of the twist--3 contribution to the longitudinal \nchiral odd structure function, $\\overline{h}_L^p(x,Q^2)$\nalong with the corresponding twist--2 piece,\n$h_{L}^{p(2)}\\left(x ,Q^{2}\\right)$.\nRight panel (\\protect\\ref{fig_h2bllp}b): The evolution\nof $h_{L}^{p}\\left(x ,Q^{2}\\right)=h_{L}^{p(2)}\\left(x ,Q^{2}\\right)\n+\\overline{h}_L^p(x,Q^2)$ from $Q^2_0=0.4 {\\rm GeV}^2$ (solid line) to\n$Q^2=4 {\\rm GeV}^2$ (long--dashed line) for the constituent\nquark mass $m=400 {\\rm MeV}$.}\n\\end{figure}\nAs in the case of the polarized structure\nfunction, $g_2(x,Q^2)$, the non--trivial twist--3\npiece arises as a result of the binding of the constituent \nquarks through the pion fields acting as effective non--perturbative \ngluonic modes. The twist--3 contribution is evolved according to the \nlarge $N_C$ scheme \\cite{Bal96,Ali91,Io95} outlined in the preceding\nsection (and in Appendix C). Similarly in Figs. 
\\ref{fig_ht2n} and \n\\ref{fig_h2blln} we display the projection and \nevolution procedure to the twist--2 and 3 contribution to the neutron \nstructure functions, $h_L^{n(2)}(x,Q^2)$ and $\\overline{h}_L^n(x,Q^2)$,\nrespectively.\n\\begin{figure}[ht]\n\\centerline{\n\\epsfig{figure=hTne.ps,height=8.5cm,width=8.0cm,angle=270}\n\\hspace{-0.5cm}\n\\epsfig{figure=h2Lne.ps,height=8.5cm,width=8.0cm,angle=270}}\n\\caption{\\label{fig_ht2n}\nLeft panel:The evolution\nof $h_{T}^{n}\\left(x ,Q^{2}\\right)$\nfrom $Q^2_0=0.4 {\\rm GeV}^2$ (solid line) to\n$Q^2=4 {\\rm GeV}^2$ (long--dashed line) for the constituent\nquark mass $m=400 {\\rm MeV}$.\nRight panel: The evolution\nof the twist--2 contribution to the longitudinal chiral odd\nstructure function,\n$h_{L}^{n(2)}\\left(x ,Q^{2}\\right)$\nfrom $Q^2_0=0.4 {\\rm GeV}^2$ (solid line) to\n$Q^2=4 {\\rm GeV}^2$ (long--dashed line) for $m=400 {\\rm MeV}$.}\n\\end{figure}\n\\begin{figure}[ht]\n\\centerline{\n\\epsfig{figure=h2bLne.ps,height=8.5cm,width=8.0cm,angle=270}\n\\hspace{-0.5cm}\n\\epsfig{figure=hLne.ps,height=8.5cm,width=8.0cm,angle=270}}\n\\caption{\\label{fig_h2blln}\nLeft panel: The evolution of the twist--3\ncontribution to the longitudinal chiral odd\nstructure function, $\\overline{h}_L^n(x,Q^2)$\nalong with the corresponding twist--2 piece,\n$h_{L}^{n(2)}\\left(x ,Q^{2}\\right)$.\nRight panel: The evolution\nof $h_{L}^{n}\\left(x ,Q^{2}\\right)=h_{L}^{n(2)}\\left(x ,Q^{2}\\right)\n+\\overline{h}_L^n(x,Q^2)$\nfrom $Q^2_0=0.4 {\\rm GeV}^2$ (solid line) to\n$Q^2=4 {\\rm GeV}^2$ (long--dashed line) for the constituent\nquark mass $m=400 {\\rm MeV}$.}\n\\end{figure}\n\nBesides the absolute magnitudes, the major difference between the chiral \nodd structure functions of the proton and the neutron is that the latter \ndrop to zero at a lower value of $x$. As can be observed from figure \n\\ref{fig_htnp} this is inherited from the model chiral odd structure \nfunction at the low momentum scale and can be linked to the smallness of \nthe down quark component of $h_T$, {\\it cf.} figure \\ref{fig_htud}. \nApparently the projection and evolution program does not alter this \npicture.\n\nWe would also like to compare our results from the NJL chiral soliton\nmodel to those obtained in other approaches. A MIT bag model calculation of \nthe isovector contribution $6(h_T^{p}-h_T^{n})$ has been presented \nin ref \\cite{Ja92}. In shape ({\\it e.g.} position of the maximum) that \nresult is quite similar to ours. However, the absolute value is a bit \nlarger in the MIT bag model. This reflects the fact that in the MIT bag \nmodel the isovector combinations of the axial and tensor charges turn \nout to be bigger than in the present model. Additionally, the QCD evolution \nof the MIT bag model prediction for $h_T$ has been studied in \nref \\cite{St93} utilizing the Peierls--Yoccoz projection as in ref \\cite{Sc91}. \nIn that case the maximum at $x\\approx0.5$ gets shifted to a value as low \nas $x=0.2$. Also the structure function becomes rather broad at the large \nscale. The fact that in that calculation the evolution effects are more \npronounced than in the present approach is caused by the significantly \nlower scale ($\\mu_{\\rm bag}=0.08{\\rm GeV}^2$) used in ref \\cite{St93}.\nOn the other hand our results \nare quite different to those obtained in the QCD sum rule approach of \nref \\cite{Io95}. The sum rule approach essentially predicts $h_T$ to be \nconstant in the interval $0.31$. 
This \ncan be cured by Lorentz boosting to the infinite momentum frame which \nis particularly suited for DIS processes. Although the un--boosted\nstructure functions are negligibly small at $x>1$ the transformation \nto this frame is essential and has sizable effects on the structure\nfunctions at moderate $x$. However, the most important \nissue when comparing the model predictions to (not yet available)\nexperimental data is the observation that the model represents \nQCD at a low momentum scale $Q_0^2$. {\\it A priori} this scale \nrepresents an additional parameter to the model calculation \nwhich, for consistency, has to be smaller than the ultraviolet \ncut--off of the model $\\Lambda^2=0.56{\\rm GeV}^2$. For the model \nunder consideration we previously fixed $Q_0^2$ when studying the \nunpolarized structure functions and found $Q_0^2=0.4{\\rm GeV}^2$. \nThe important logarithmic corrections\nto the model structure functions are then obtained within a generalized\nGLAP evolution program. In this context we have restricted ourselves\nto a leading order (in $\\alpha_{\\rm QCD}$) calculation because \nthe anomalous dimensions, which govern the QCD evolution, for the \ntwist--3 piece of the longitudinal part of the chiral odd structure \nare only known to that order. As the full evolution to the longitudinal\nstructure function involves both twist--2 and twist--3 pieces this \nrestriction is consistent. We have seen that the QCD evolution of the \nchiral odd structure function leads to sizable enhancements at low $x$, \n{\\it i.e.} in the region $0.01\\le x\\le 0.10$. In this respect the present \nsituation is similar to that for the polarized structure functions.\nA difference to the polarized structure function is that the lowest moment\nis not protected against logarithmic corrections, even at leading order\nin $\\alpha_{\\rm QCD}$. For the nucleon tensor charge we thus find a \nreduction of about 10\\% upon evolution to $Q^2=4.0{\\rm GeV}^2$.\nWe have also compared the neutron and proton chiral odd structure \nfunctions. This has been achieved by the inclusion of the $1\/N_C$\ncranking corrections. In absolute value the proton structure functions\nare about twice as large as those of the neutron. Furthermore the \nneutron structure functions drop to zero at a lower value of $x$.\nThese two effects can be linked to the down quark component of the \ntransverse nucleon chiral odd distribution functions being significantly \nsmaller than the component with up--quark quantum numbers. We have also \nobserved that neither of these features is effected by the evolution \nprogram.\n\n\\bigskip\n\\section*{Acknowledgements}\n\\bigskip\nThis work has been supported in part by the\nDeutsche Forschungsgemeinschaft (DFG) under contract Re 856\/2-3,\nand by the U.S. Department of Energy (D.O.E.) under\ncontract DE--FE--02--95ER40923.\nOne of us (LG) is grateful to G. R. Goldstein for helpful comments\nand to K. A. Milton for encouragement and support.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nFinding the spectrum of a transition matrix is a very popular subject in graph theory and Markov chain theory. There are only a few techniques known to describe the exact spectrum of a Markov chain, and they usually work under very specific conditions, such as when the Markov chain is a random walk on a finite group, generated only by a conjugacy class \\cite{DiaSha}. 
Most well-known examples where a transition matrix has been diagonalized usually rely on combination of advanced representation theory, Fourier analysis, and combinatorial arguments \\cite{DSaliola}, \\cite{Hough}, \\cite{Star}, \\cite{hyp}, \\cite{hyp2}, \\cite{hyp3}, \\cite{Brown}, \\cite{Pike}. But even in most of these cases, there is no description of what an eigenbasis of the transition matrix would look like, which in general is needed as well in order to understand the transition matrix. \n\nIn this work, we present the full spectrum of the simple random walk on complete, finite $d-$ary trees and a corresponding eigenbasis, and we use this information to produce a lower bound for the interchange process on the trees, which we conjecture is sharp. Consider the complete, $d-$ary tree $\\mathcal T_h$ of height $h$, which has $n = 1+d+\\dots+d^{h} = \\frac{d^{h+1}-1}{d-1}$ vertices. We study the simple random walk on $\\mathcal T_h$ ,whose transition matrix is denoted by $Q_h$, according to which when we are at the root we stay fixed with probability $1\/(d+1)$, or we move to a child with probability $1\/(d+1)$ each. When we are at a leaf, we stay fixed with probability $d\/(d+1)$ otherwise we move to the unique parent with probability $1\/(d+1)$. For any other node, we choose one of the neighbors with probability $1\/(d+1)$. \n\nThis is a well studied Markov chain. Aldous \\cite{Aldous} proved that the cover time is asymptotic to $2 h^2 d^{h+1} \\log h\/(h-1)$. The order of the spectral gap and the mixing time of this Markov chain have been widely known for a long time. In fact, the random walk on $\\mathcal T_h$ is one of the most famous examples of a random walk not exhibiting cutoff (see Example 18.6 of \\cite{AMPY4}). However, finding the exact value of the spectral gap has been an open question for years, let alone finding the entire spectrum and an eigenbasis of the transition matrix $Q_h$.\n\n\nWe denote by $\\rho$ the root of $\\mathcal T_h$, by $V(\\mathcal T_h)$ the vertex set of $\\mathcal T_h$, and by $E(\\mathcal T_h)$ the set of edges of $\\mathcal T_h$. Let $\\ell: V(\\mathcal T_h) \\rightarrow [0,\\ldots, h]$ denote the distance from the root. For every node $v$, let $\\mathcal T^v$ be the complete $d-$ary subtree rooted at $v$, namely consisting of $v$ and all vertices of $V( \\mathcal T_h)$ that are descendants of $v$ in $\\mathcal T_h$. Let $\\mathcal T_i^v$ be the complete $d-$ary subtree of $\\mathcal T^v$ rooted at the $i-$th child of $v$.\n\nThe next theorem includes the first result of this paper, presenting the eigenvalues and an eigenbasis of $Q_h$.\n\\begin{theorem}\\label{thm:spectrum} \n\t\\begin{enumerate} [label = (\\alph*)]\n\t\t\\item $Q_h$ is diagonalizable with $1$ being an eigenvalue with multiplicity 1. Every other eigenvalue $\\lambda\\neq 1$ of $Q_h$ is of the form\n\t\t\\begin{equation}\\label{eq:lambda:x:thm}\n\t\t\\lambda = \\frac{d}{d+1}\\left (x+ \\frac{1}{xd}\\right ),\n\t\t\\end{equation}\n\t\twhere $x\\neq \\pm \\frac{1}{\\sqrt d}$ is a solution of one of the following $h+1$ equations:\n\t\t\\begin{equation} \\label{eq:x:sym:thm}\n\t\td^{h+1}x^{2h+2} = 1\n\t\t\\end{equation}\n\t\tand \n\t\t\\begin{equation} \\label{eq:x:antisym}\n\t\td^{k+2} x^{2k+4}- d^{k+2} x^{2k+3}+ dx -1 = 0,\\quad \\text{for some } 0 \\leq k \\leq h-1.\n\t\t\\end{equation}\n\t\t\n\t\tReversely, each solution $x\\neq \\pm \\frac{1}{\\sqrt d}$ of these equations corresponds to an eigenvalue $\\lambda$ according to \\eqref{eq:lambda:x:thm}. 
\n\t\tFor each of these equations, if $x$ is a solution then so is $\\frac{1}{xd}$. Both $x$ and $\\frac{1}{xd}$ correspond to the same $\\lambda$. The correspondence between $x$ and $\\lambda$ is $2$-to-$1$.\n\t\t\n\t\t\\item \\label{thm:spectrum:eigenvector} For each solution $x\\neq \\pm \\frac{1}{\\sqrt d}$ of \\eqref{eq:x:sym:thm}, an eigenvector $f_{\\lambda}$ with respect to $\\lambda$ is given by the formula \n\t\t\\begin{equation}\\label{eq:thm:spectrum:sym}\n\t\tf_{\\lambda}(v) = \\frac{dx^{2}-x}{dx^{2}-1} x^i + \\frac{x-1}{dx^{2}-1}\\frac{1}{d^{i} x^{i}} \\quad\\text{for every $v$ with $\\ell(v)=i$, $0\\le i\\le h$}.\n\t\t\\end{equation}\n\t\t\n\t\tFor each $0 \\leq k \\leq h-1$, each solution $x\\neq \\pm \\frac{1}{\\sqrt d}$ of \\eqref{eq:x:antisym}, each $v \\in V(\\mathcal T_h)$ such that $\\ell(v)= h-1-k$, and each $j\\in [1, \\dots, d-1]$, an eigenvector $f_{v, j, j+1}$ with respect to $\\lambda$ is given by the formula \n\t\t\\begin{equation}\\label{eq:thm:spectrum:antisym}\n\t\tf_{ v, j, j+1}(w) = \n\t\t\\begin{cases}\n\t\t& \\frac{dx^{i+2}}{dx^{2}-1} - \\frac{1}{(dx^{2}-1)d^{i} x^{i}} \\quad \\mbox{ for } w\\in \\mathcal T^v_j,\\mbox{ where } i= \\ell(w) -h+k,\\\\\n\t\t& -\\frac{dx^{i+2}}{dx^{2}-1} + \\frac{1}{(dx^{2}-1)d^{i }x^{i}} \\quad \\mbox{ for } w\\in \\mathcal T^v_{j+1}, \\mbox{ where } i= \\ell(w) -h+k,\\\\\n\t\t& 0, \\mbox{ otherwise.} \n\t\t\\end{cases}\n\t\t\\end{equation}\n\t\t\\item The collection of these eigenvectors together with the all-1 vector form an eigenbasis of $Q_h$.\n\t\\end{enumerate}\n\n\\end{theorem}\n\n\n\nIn Lemma \\ref{lm:sym:antisym} and Figure \\ref{fig:sym:antisym}, we describe and illustrate the eigenvectors in more detail.\n\nThe idea behind the proof is to consider appropriate projections of the random walk. For example, let $X_t$ be the state of the random walk at time $t$ and let $Y_t$ be the distance of $X_t$ from the root. Then $Y_t$ is a Markov chain on $[0,h],$ whose eigenvalues are also eigenvalues of $Q_h$. Also, the eigenvectors of $Y_t$ lift to give the eigenvectors presented in \\eqref{eq:thm:spectrum:sym}. This computation is not going to give us the full spectrum, however. \n\nFor example, in the case of the binary tree, another type of projection to consider is as follows. We consider the process $W_t,$ which is equal to $-Y_t$ if $X_t \\in \\mathcal T^{\\rho}_1$ and equal to $Y_t$ otherwise. The second largest eigenvalue can be derived by this new process, while the eigenvectors are of the form presented in \\eqref{eq:thm:spectrum:antisym}. The reason why this is the right process to study is hidden in the mixing time of the random walk on $\\mathcal T_h$. A coupling argument roughly says that we have to wait until $X_t$ reaches the root $\\rho$. The first time that $X_t$ hits $\\rho$ is captured by $W_t$, since $W_t$ is a Markov chain on $[-h,h]$, where the bias is towards the ends and away from zero. The projections that we consider form birth and death processes, whose mixing properties have been thoroughly studied by Ding, Lubetzky, and Peres \\cite{DLP}. To capture the entire spectrum, our method is to find in each eigenspace a well-structured eigenvector, which occurs by considering an appropriate projection.\n\nOur analysis has immediate applications to card shuffling, namely the interchange process on $\\mathcal T_h$, and to the exclusion process. We enumerate the nodes in $V (\\mathcal T _h)$ and we assign cards to the nodes. At time zero, card $i$ is assigned to node $i$. 
The interchange process on $\\mathcal T_h$ chooses an edge $e$ uniformly at random and then flips a fair coin. If heads, it interchanges the cards on the ends of $e$; if tails, it stays fixed. A configuration of the deck corresponds to an element of the symmetric group.\n\nLet $g \\in S_n$. Let $P$ be the transition matrix of the interchange process on the complete, finite $d-$ary tree $\\mathcal T_h$ and let $P^t_{id}(g)$ be the probability that we are at $g$ after $t$ steps, given that we start at the identity. We define the total variation distance between $P^t_{id}$ and the uniform measure $U$ to be \n\\begin{equation}\n\td(t)= \\frac{1}{2} \\sum_{x \\in S_n} \\left \\vert P^t_{id}(x) -\\frac{1}{n!}\\right \\vert. \\nonumber\n\\end{equation}\n\nA celebrated result concerning the interchange process was the proof of Aldous' conjecture \\cite[Theorem 1.1]{CLR}, which states that the spectral gap of $P$ is the same as the spectral gap of the Markov chain performed by the ace of spades. Adjusting our computations, we now get the following result.\n\\begin{theorem}\\label{thm:lowerbound}\n\tFor the interchange process on the complete $d$-ary tree of depth $h$, we have that \n\t\\begin{itemize}\n\t\t\\item[(a)] The spectral gap of the transition matrix is $\\frac{(d-1)^{2}}{2(n-1) d^{h+1}} + O \\left (\\frac{\\log_{d} n}{n^{3}}\\right )$, \n\t\t\\item[(b)] If $t=\\frac{1}{d-1}n^{2}\\log n- \\frac{1}{d-1}n^2 \\log \\left( \\frac{1}{\\varepsilon} \\right) + O\\left ( n^{2}\\right ) $, then \n\t\t$$d(t) \\geq 1- \\varepsilon,$$\n\t\twhere $\\varepsilon$ is any positive constant. \n\t\\end{itemize}\n\\end{theorem}\nThis is already much faster than the interchange process on the path, another card shuffle that uses $n-1$ transpositions, which Lacoin \\cite{Lacoin} recently proved exhibits cutoff at $\\frac{1}{2\\pi^2} n^3 \\log n$. We conjecture that the lower bound in part $(b)$ of Theorem \\ref{thm:lowerbound} is sharp and that the interchange process on $\\mathcal T_h$ exhibits cutoff at $\\frac{1}{d-1} n^{2}\\log n$.\n\n\nWe can also get lower bounds for the mixing time of another well-studied process, the exclusion process on the complete $d$-ary tree. This is a famous interacting particle system, in which at time zero $k \\leq n\/2$ nodes of the tree are occupied by indistinguishable particles. At time $t$, we pick an edge uniformly at random and we exchange the contents of its two endpoints. Computations similar to those in the proof of Theorem \\ref{thm:lowerbound} give that if $t=\\frac{1}{d-1}n^{2}\\log k- \\frac{1}{d-1}n^2 \\log \\left( \\frac{1}{\\varepsilon} \\right) + o\\left ( n^{2} \\log k\\right ) $, then \n$$d(t) \\geq 1- \\varepsilon,$$\nwhere $\\varepsilon>0$ is a constant. Combining Oliveira's result \\cite{OL} with Theorem \\ref{thm:lowerbound} $(b)$, we get that the order of the mixing time of the exclusion process on the complete $d-$ary tree is $n^2 \\log k$. \n\nAs potential open questions, we suggest trying to find the spectrum, or just the exact value of the spectral gap, for the simple random walk on finite Galton-Watson trees or for the frog model as presented in \\cite{Jon}.\n \n \n\\section{The spectrum of $Q_h$}\nThis section is devoted to the proof of Theorem \\ref{thm:spectrum}.\n\n\nLet $\\lambda$ be an eigenvalue of $Q_h$ and let $E(\\lambda)$ be the corresponding eigenspace. 
We first show that there exists an eigenvector in $E(\\lambda)$ that has the form described in Theorem \\ref{thm:spectrum} \\ref{thm:spectrum:eigenvector}.\n\\begin{lemma}\\label{lm:sym:antisym}\nThe eigenspace $E(\\lambda)$ contains an eigenvector $f$ that has one of the following forms:\n\\begin{enumerate}\n\\item[(a)] [Completely symmetric] $f(v)=f(w)$ for every $v,w \\in V(\\mathcal T_h) $ such that $\\ell(v)=\\ell(w)$. In this case we will call $f$ completely symmetric for $\\mathcal T_h$;\n\\item[(b)] [Pseudo anti-symmetric] There is a node $v$ and $i,j \\in \\{ 1,\\ldots, d\\}$ such that $f(w)=0$ for every $w \\notin V(\\mathcal T_i^v\\cup \\mathcal T_j^v)$, $f \\vert_{\\mathcal T_i^v}$ and $f \\vert_{\\mathcal T_j^v}$ are completely symmetric, and $f \\vert_{\\mathcal T_i^v}=-f \\vert_{\\mathcal T_j^v}$. We call such $f$ pseudo anti-symmetric.\n\\end{enumerate}\n\\end{lemma}\nThe following illustrations explain what the described eigenvectors look like for binary trees.\n\n\\tikzset{every tree node\/.style={minimum width=2em,draw,circle},\n blank\/.style={draw=none},\n edge from parent\/.style=\n {draw,edge from parent path={(\\tikzparentnode) -- (\\tikzchildnode)}},\n level distance=1.5cm}\n \n\n\\begin{figure}[H] \n\t\\centering\n\t\\begin{minipage}{.5\\textwidth}\n\t\t\\centering\n\\Tree\n[.$y_0$ \n[.$y_1$ \n[.$y_2$\n[.$y_3$ ]\n[.$y_3$ ]\n]\n[.$y_2$\n[.$y_3$ ]\n[.$y_3$ ]\n]]\n[.$y_1$\n[.$y_2$ \n[.$y_3$ ]\n[.$y_3$ ]]\n[.$y_2$ \n[.$y_3$ ]\n[.$y_3$ ]]]\n]\n\t\\end{minipage}%\n\t\\begin{minipage}{.5\\textwidth}\n\t\t\\centering\n\\Tree\n[.0 \n[.0 \n[.$y_0$ \n[.$y_1$ ]\n[.$y_1$ ]]\n[.-$y_0$ \n[.-$y_1$ ]\n[.-$y_1$ ]]]\n[.0 \n[.0 \n[.0 ]\n[.0 ]]\n[.0 \n[.0 ]\n[.0 ]\n]]\n]\n\t\\end{minipage}\t\n\\caption{Completely symmetric eigenvectors (left) and Pseudo anti-symmetric eigenvectors (right)}\n\\label{fig:sym:antisym}\n\t\\end{figure}\n\n\n\n\\begin{proof}\nAssume that $E(\\lambda)$ does not contain a completely symmetric eigenvector. Let $f$ be a nonzero element of $E(\\lambda)$. Since $f$ is not completely symmetric, there exist vertices of the same level at which $f$ takes different values. Let $v$ be a vertex with the largest $l(v)$ such that there are at least two of its children, say the $i$-th and $j$-th children, at which $f$ has different values. For example, if there are two leaves $u$ and $w$ at which $f(u)\\neq f(w)$ that have the same parent $v'$ then we simply take $v$ to be $v'$. \n\nBy the choice of $v$, $f\\vert _{\\mathcal T_k^v}$ is completely symmetric for all $k\\in [d]$. Indeed, let $u$ be the $k$-th child of $v$. We have $\\mathcal T_k^v = \\mathcal T^{u}$. By the choice of $v$, $f$ takes the same value at all children of $u$. Let $u_1, u_2$ be two arbitrary children of $u$. Again by the choice of $v$, $f$ takes the same value, denoted by $f_1$, at all children of $u_1$, and the same value, denoted by $f_2$, at all children of $u_2$. Since $f$ is an eigenvector of $Q_h$, \n\\begin{equation}\\label{key}\n\\lambda f(u_1) = \\frac{d}{d+1} f_1+\\frac{1}{d+1} f(u) \\quad \\text{and}\\quad \\lambda f(u_2) = \\frac{d}{d+1} f_2+\\frac{1}{d+1} f(u). \\nonumber\n\\end{equation}\nSince $f(u_1) = f(u_2)$, $f_1 = f_2$. Thus, $f$ takes the same value at all grandchildren of $u$. Repeating this argument shows that $f\\vert _{\\mathcal T^u}$ is completely symmetric.\n\nConsider the vector $g$ obtained from $f$ by switching its values on $\\mathcal T^{v}_{i}$ and $\\mathcal T^{v}_{j}$. 
More specifically, $g\\vert _{\\mathcal T^{v}_{i}} = f\\vert _{\\mathcal T^{v}_{j}}$, $g\\vert _{\\mathcal T^{v}_{j}} = f\\vert _{\\mathcal T^{v}_{i}}$, and $g = f$ elsewhere.\n \n By the symmetry of the tree and the matrix $Q_h$, $g$ also belongs to $E(\\lambda)$. So is $f-g$, which we denote by $h$. Observe that $h$ is an eigenvector that is 0 everywhere except on $\\mathcal T^{v}_{i} \\cup \\mathcal T^{v}_{j}$ and $h \\vert_{\\mathcal T_i^v}=f\\vert_{\\mathcal T_i^v} - f\\vert_{\\mathcal T_j^v}=-h \\vert_{\\mathcal T_j^v}$. Moreover, $h$ is completely symmetric when restricted to $\\mathcal T^{v}_{i}$ and $ \\mathcal T^{v}_{j}$ because both $f$ and $g$ are, as seen above. Thus, $h \\in E(\\lambda)$ and is pseudo anti-symmetric.\n\\end{proof}\n\n\n\n\n\\subsection{Completely symmetric eigenvectors}\\label{subsection:sym}\n\nIn this section, we describe completely symmetric eigenvectors. We shall show that the completely symmetric eigenvectors of $Q_h$ are given by the formula \\eqref{eq:thm:spectrum:sym} and correspond to $\\lambda$ and $x$ satisfying \\eqref{eq:lambda:x:thm} and \\eqref{eq:x:sym:thm} as in Theorem \\ref{thm:spectrum}.\n\n\nSince a completely symmetric eigenvector of $Q_h$ has the same value at every node of the same level (see Figure \\ref{fig:1}), we can project it onto the path $[0, h]$ and obtain an eigenvector of the corresponding random walk on the path.\n \n\\begin{figure}[H]\\label{figure:sym:3}\n\t\\centering\n\\begin{tikzpicture}\n\\Tree\n[.$y_0$ \n [.$y_{1}$ \n [.$y_{2}$ \n [.$\\ldots$ ]\n [.$\\ldots$ ]]\n [.$y_{2}$ \n [.$\\ldots$ ]\n [.$\\ldots$ ]]]\n [.$y_{1}$ \n [.$y_2$ \n [.$\\ldots$ ]\n [.$\\ldots$ ]]\n [.$y_2$ \n [.$\\ldots$ ]\n [.$\\ldots$ ]\n ]]\n] \n\\end{tikzpicture}\n\\caption{Completely symmetric eigenvectors}\n\\label{fig:1}\n\\end{figure}\n\n \n\\begin{lemma} \\label{lm:sym}\n\tThere are exactly $h+1$ linearly independent completely symmetric eigenvectors of $Q_h$.\n\\end{lemma}\n\\begin{proof} Each symmetric eigenvector of $Q_h$ corresponds one-to-one to an eigenvector of the following projection onto the path $[0, h]$ with transition matrix $R_h$:\n\t\\begin{itemize}\n\t\t\\item $R_h(0, 1) = \\frac{d}{d+1}, R_h(0, 0) = \\frac{1}{d+1}$,\n\t\t\\item $R_h(l, l-1) = \\frac{1}{d+1}$, $R_h(l, l+1) = \\frac{d}{d+1}$ for all $1\\le l\\le h-1$,\n\t\t\\item $R_h(h, h-1) = \\frac{1}{d+1}$, $R_h(h, h) = \\frac{d}{d+1}$,\n\t\\end{itemize}\n\n\tSince $R_h$ is a reversible transition matrix with stationary distribution $\\pi := [1, d, d^{2}, \\dots, d^{h}]$, the matrix $A:= D^{1\/2} R_h D^{-1\/2}$ is symmetric where $D$ is the diagonal matrix with $D(x, x)= \\pi(x)$. Therefore, $A$ is diagonalizable and so is $R_h$. In other words, $R_h$ has $h+1$ independent real eigenvectors. This implies that $Q_h$ has $h+1$ linearly independent completely symmetric eigenvectors.\n\\end{proof}\n\n\\begin{lemma}\\label{lm:sym:detail}\n\tThe matrix $R_n$ has 1 as an eigenvalue with multiplicity 1. Each of the remaining $h$ eigenvalues $\\lambda\\neq 1$ of $R_n$ is of the form \n\t$$\\lambda = \\frac{d}{d+1}\\left (x+ \\frac{1}{xd}\\right )$$\n\twhere $x \\neq \\pm \\frac{1}{\\sqrt{d}}$ is a non-real solution of the equation\n\t\\begin{equation} \n\td^{h+1}x^{2h+2} = 1.\\nonumber\n\t\\end{equation}\n\tThis equation has exactly $2h$ such solutions. If $x$ is a solution, so is $\\frac{1}{xd}$. There is a 2-to-1 correspondence between $x$ and $\\lambda$. 
An eigenvector $y = (y_0, y_1, \\dots, y_h)$ of $R_h$ with respect to $\\lambda$ is given by \n\t\\begin{equation} \n\ty_i = \\frac{dx^{2}-x}{dx^{2}-1} x^i + \\frac{x-1}{dx^{2}-1}\\frac{1}{d^{i} x^{i}} \\quad\\text{for every $0\\le i\\le h$}.\\nonumber\n\t\\end{equation}\n\tThe vector $f:\\mathcal T_h\\to \\mathbb R$ that takes value $y_i$ at all nodes of depth $i$ is an eigenvector of $Q_h$ with respect to $\\lambda$.\n\\end{lemma}\n\\begin{proof}\n\tLet $\\lambda$ be an eigenvalue of $R_h$ and $y = (y_0, y_1, \\dots, y_h)$ be an eigenvector corresponding to $\\lambda$. We have\n\t\\begin{enumerate}[label=(R\\arabic{*}), ref=R\\arabic{*}]\n\t\t\\item\\label{eq:R:0} $\\frac{d}{d+1} y_{1} +\\frac{1}{d+1} y_{0} = \\lambda y_0$,\n\t\t\\item\\label{eq:R:i} $\\frac{1}{d+1} y_{i-1} +\\frac{d}{d+1} y_{i+1} = \\lambda y_i$ for all $1\\le i\\le h-1$,\n\t\t\\item\\label{eq:R:h} $\\frac{1}{d+1} y_{h-1} +\\frac{d}{d+1} y_{h} = \\lambda y_h$.\n\t\\end{enumerate}\n\nSince $y$ is not the zero vector, the above equations imply that $y_0\\neq 0$. Without loss of generality, we assume $y_0=1$. \n\t\n\tLet $x_1, x_2$ be the solutions to the characteristic equation of \\eqref{eq:R:i}: \n\t$$\\frac{1}{d+1} - \\lambda x + \\frac{d}{d+1}x^{2} = 0$$\n\tor equivalently\n\t\\begin{equation}\\label{eq:x:lambda:1}\n\td x^{2} - (d+1)\\lambda x+1 = 0.\n\t\\end{equation}\n\t\n\tBy \\eqref{eq:x:lambda:1}, we have\n\t$$x_1 x_2 = \\frac{1}{d}$$\n\tand \n\t\\begin{equation} \n\t\\lambda = \\frac{d}{d+1}(x_1+ x_2) = \\frac{d}{d+1}\\left (x_1+ \\frac{1}{x_1 d}\\right ) .\\label{eq:lambda:x}\n\t\\end{equation}\n\t\n\t\n\t\n\tIf $x_1\\neq x_2$ then we can write $y_0 = \\alpha_1 - \\alpha_2$, $y_1 = \\alpha_1 x_1 -\\alpha_2 x_2$ for some $\\alpha_1, \\alpha_2$. We show that for all $0\\le i\\le h$,\n\t\\begin{equation}\\label{eq:recurrent:y:1}\n\ty_i = \\alpha_1 x_1^{i} - \\alpha_2 x_2^{i}.\n\t\\end{equation}\n\tIndeed, assuming that \\eqref{eq:recurrent:y:1} holds for $y_0, \\dots, y_i$ for some $1\\le i\\le h-1$ then by \\eqref{eq:x:lambda:1},\n\t\\begin{eqnarray}\n\t\\lambda y_i - \\frac{1}{d+1} y_{i-1} = \\alpha_1 x_1^{i-1}\\left (\\lambda x_1 - \\frac{1}{d+1}\\right )-\\alpha_2 x_2^{i-1}\\left (\\lambda x_2 - \\frac{1}{d+1}\\right ) = \\frac{d}{d+1} \\left (\\alpha_1 x_1^{i+1}- \\alpha_2 x_2^{i+1}\\right ).\\nonumber\n\t\\end{eqnarray}\n\tThus, by \\eqref{eq:R:i}, \n\t\\begin{equation}\\label{key}\n\t\\frac{d}{d+1} y_{i+1} = \\frac{d}{d+1} \\alpha_1 x_1^{i+1}- \\frac{d}{d+1} \\alpha_2 x_2^{i+1}\\nonumber\n\t\\end{equation}\n\tand so\n\t\\begin{equation}\\label{key}\n\ty_{i+1} = \\alpha_1 x_1^{i+1}- \\alpha_2 x_2^{i+1}.\\nonumber\n\t\\end{equation}\n\tThus, \\eqref{eq:recurrent:y:1} also holds for $y_{i+1}$ and hence, for all $y_0, \\dots, y_h$. 
\n\t\n\tSimilarly, by \\eqref{eq:R:h}, we get\n\t\\begin{eqnarray}\n\t\\frac{d}{d+1} y_h &=&\\lambda y_h - \\frac{1}{d+1} y_{h-1} = \\alpha_1 x_1^{h-1}\\left (\\lambda x_1 - \\frac{1}{d+1}\\right )-\\alpha_2 x_2^{h-1}\\left (\\lambda x_2 - \\frac{1}{d+1}\\right )\\nonumber\\\\\n\t& =& \\frac{d}{d+1} \\left (\\alpha_1 x_1^{h+1}- \\alpha_2 x_2^{h+1}\\right ).\\nonumber\n\t\\end{eqnarray}\n\tThus, \n\t\\begin{equation}\\label{eq:R:h:1}\n\t\\alpha_1 x_1^{h+1}- \\alpha_2 x_2^{h+1} = \\alpha_1 x_1^{h}- \\alpha_2 x_2^{h}\n\t\\end{equation}\n\tas they are both equal to $y_h$.\n\t\n\t\n\tBy \\eqref{eq:recurrent:y:1}, \\eqref{eq:R:0} becomes\n\t\\begin{equation}\\label{eq:R:0:1}\n\td(\\alpha_1x_1 - \\alpha_2 x_2) = \\left (xd+\\frac{1}{x} - 1\\right ) (\\alpha_1 - \\alpha_2).\n\t\\end{equation}\n\t\n\t\n\t\n\tFor simplicity, we write $\\alpha = \\alpha_1$ and $x = x_1$. By \\eqref{eq:recurrent:y:1} for $i=0$, we get\n\t$$\\alpha_2 = \\alpha-1.$$\n\t\n\n\tEquations \\eqref{eq:R:0:1} becomes\n\t\\begin{equation} \n\td\\alpha x - \\frac{\\alpha-1}{x} = dx+\\frac{1}{x} - 1\\nonumber\n\t\\end{equation}\n\twhich gives\n\t\\begin{equation}\\label{eq:R:0:2}\n\t\\alpha_1 = \\alpha = \\frac{dx^{2}-x}{dx^{2}-1} \\quad\\text{and}\\quad \\alpha_2 = \\alpha-1 = \\frac{1-x}{dx^{2}-1}. \n\t\\end{equation}\n\t\n \n\t\n\tPlugging \\eqref{eq:R:0:2} into \\eqref{eq:R:h:1} and taking into account $x_2 = \\frac{1}{xd}$, we get\n\t\\begin{equation}\\label{key}\n\t(dx-1)(x-1)(d^{h+1}x^{2h+2}-1) = 0.\\nonumber\n\t\\end{equation}\n\t\n\tIf $x = 1$ then $\\alpha_2 = \\alpha-1 = 0$ by \\eqref{eq:R:0:2}. And so, $y = \\alpha(1, \\dots, 1)$ which is an eigenvector of the eigenvalue 1. Since $\\lambda\\neq 1$, $x\\neq 1$. If $x=\\frac{1}{d}$ then $x_2 = \\frac{1}{xd} = 1$. By the symmetry of $x_1$ and $x_2$, this also corresponds to $\\lambda=1$ which is not the case. \n\t\n\tThus, $x$ satisfies\n\t\\begin{equation} \n\td^{h+1}x^{2h+2}-1=0.\\nonumber\n\t\\end{equation}\n\t \n\t\n\tThis equation has $2h$ non-real solutions and 2 real solutions $\\pm \\frac{1}{\\sqrt{d}}$. For each non-real solution $x_1$, observe that $x_2:=\\frac{1}{dx_1}$ is also a non-real solution. Note that $x_1\\neq x_2$ and by setting $\\lambda$ and $y$ as in \\eqref{eq:lambda:x} and \\eqref{eq:recurrent:y:1} with $\\alpha_1$ and $\\alpha_2$ as in \\eqref{eq:R:0:2}, one can check that $y$ is indeed an eigenvector corresponding to $\\lambda$. Thus, these $2h$ non-real solutions correspond to exactly $h$ eigenvalues $\\lambda\\neq 1$ of $R_n$. Since $R_n$ has exactly $h+1$ eigenvalues, these are all.\n\\end{proof}\n\n\n\\subsection{Pseudo anti-symmetric eigenvectors}\\label{subsection:anti}\nIn this section, we describe pseudo anti-symmetric eigenvectors. We shall show that the pseudo anti-symmetric eigenvectors of $Q_h$ are given by the formula \\eqref{eq:thm:spectrum:antisym} and correspond to $\\lambda$ and $x$ satisfying \\eqref{eq:lambda:x:thm} and \\eqref{eq:x:antisym} as in Theorem \\ref{thm:spectrum}.\n\n\nConsider a pseudo anti-symmetric eigenvector $f$ with node $v$ and indices $i, j$ as described in Lemma \\ref{lm:sym:antisym} (see Figure \\ref{fig:sym:antisym}). Let $k = h-\\ell(v)-1\\in [0, h-1]$. As in Figure \\ref{fig:sym:antisym} and Figure \\ref{fig:anti}, let $y=(y_0, y_1, \\dots, y_k)$ where $y_0$ is the value of $f$ at the $i$-th child of $v$, which is denoted by $u$, $y_1$ is the value of $f$ at the children of $u$ and so on. With these notations, we also write $f$ as $f_{y, v, i, j}$. 
Observe that $y$ is an eigenvector of the following matrix $S_k$:\n\\begin{itemize}\n\t\\item $S_k(0, 1) = \\frac{d}{d+1}$,\n\t\\item $S_k(l, l-1) = \\frac{1}{d+1}$, $S_k(l, l+1) = \\frac{d}{d+1}$ for all $1\\le l\\le k-1$,\n\t\\item $S_k(k, k-1) = \\frac{1}{d+1}$, $S_k(k, k) = \\frac{d}{d+1}$.\n\\end{itemize}\n\nReversely, for any eigenvector $y$ of $S_k$, for any node $v$ at depth $h-k-1$ and for any choice of $i, j\\in [1, d]$ with $i\\neq j$, we can lift it to a pseudo anti-symmetric eigenvector $f_{y, v, i, j}$.\n\n\n\\begin{figure}[H]\t\\centering\n\t\\begin{tikzpicture}\n\t\\Tree\n\t[.0\n\t[.$0$ \n\t[.$y_0$ \n\t[.$y_1$ \n\t[.$y_2$ ]\n\t[.$y_2$ ]]\n\t[.$y_1$ \n\t[.$y_2$ ]\n\t[.$y_2$ ]]]\n\t[.$-y_{0}$ \n\t[.$-y_1$ \n\t[.$-y_2$ ]\n\t[.$-y_2$ ]]\n\t[.$-y_1$ \n\t[.$-y_2$ ]\n\t[.$-y_2$ ]\n\t]]\n\t]\n\t[.$0$ \n\t[.$0$ \n\t[.$0$ \n\t[.$0$ ]\n\t[.$0$ ]]\n\t[.$0$ \n\t[.$0$ ]\n\t[.$0$ ]]]\n\t[.$0$ \n\t[.$0$ \n\t[.$0$ ]\n\t[.$0$ ]]\n\t[.$0$ \n\t[.$0$ ]\n\t[.$0$ ]\n\t]]\n\t]\n\t]\n\t\\end{tikzpicture}\n\t\\caption{Pseudo anti-symmetric eigenvectors}\\label{fig:anti}\n\n\\end{figure}\n\n \\begin{lemma} \\label{lm:antisym}\n\tFor each $k\\in [0, h-1]$, $S_k$ has $k+1$ eigenvectors. For each eigenvector $y$ of $S_k$ and for each $v$ with $l(v) = h-k-1$, there are $d-1$ linearly independent pseudo anti-symmetric eigenvectors of $Q_h$ of the form $f_{y,v,i,j}$.\n\\end{lemma}\n\\begin{proof}\n\tSince $S_k$ differs from $R_k$ only at the $(0, 0)$ entry, it also satisfies the equation $\\pi(x) S_k(x, y) = \\pi(y)S_k(y, x)$ where $\\pi = [1, d, d^{2}, \\dots, d^{k}]$. Thus, like $R_k$, the matrix $DS_k D^{-1}$ is symmetric where $D$ is the diagonal matrix with $D(x, x) = \\pi(x)^{1\/2}$. By symmetry, $D S_k D^{-1}$ has $k+1$ eigenvalues and so does $S_k$. \n\t\n\tFor each eigenvector $y$ of $S_k$, we create $d-1$ independent vectors $f_{y, v, i, i+1}$ for $1\\le i\\le d-1$. It is clear that any $f_{y, v, i, j}$ can be written as a linear combination of these vectors. This completes the proof.\n\\end{proof}\nWe now describe the eigenvectors of $S_k$.\n\\begin{lemma}\\label{lm:anti:detail}\n\tEach of the $k+1$ eigenvalue $\\lambda$ of $S_k$ is of the form \n\t$$\\lambda = \\frac{d}{d+1}\\left (x+ \\frac{1}{dx}\\right )$$\n\twhere $x\\neq \\pm \\frac{1}{\\sqrt d}$ is a solution of the equation\n\t\\begin{equation} \n\td^{k+2} x^{2k+4}- d^{k+2} x^{2k+3}+ dx -1 = 0.\\nonumber\n\t\\end{equation}\n\tThis equation has $2k+2$ solutions that differ from $\\frac{1}{\\sqrt d}$. If $x$ is a solution, so is $\\frac{1}{dx}$. There is a 2-to-1 correspondence between $x$ and $\\lambda$. An eigenvector $y = (y_0, y_1, \\dots, y_k)$ of $S_k$ with respect to $\\lambda$ is given by \n\t\\begin{equation} \n\ty_i = \\frac{dx^{i+2}}{dx^{2}-1} - \\frac{1}{(dx^{2}-1)d^{i} x^{i}} \\quad\\text{for every $0\\le i\\le k$}.\\nonumber\n\t\\end{equation}\n\\end{lemma}\n\n\n\\begin{proof}\n\tLet $\\lambda$ be an eigenvalue of $S_k$ and $y = (y_0, y_1, \\dots, y_k)$ be an eigenvector corresponding to $\\lambda$. 
We have\n\t\\begin{enumerate}[label=(S\\arabic{*}), ref=S\\arabic{*}]\n\t\t\\item\\label{eq:S:0} $\\frac{d}{d+1} y_{1} = \\lambda y_0$,\n\t\t\\item\\label{eq:S:i} $\\frac{1}{d+1} y_{i-1} +\\frac{d}{d+1} y_{i+1} = \\lambda y_i$ for all $1\\le i\\le k-1$,\n\t\t\\item\\label{eq:S:k} $\\frac{1}{d+1} y_{k-1} +\\frac{d}{d+1} y_{k} = \\lambda y_k$.\n\t\\end{enumerate}\n\t\n\tAs before, we let $x_1, x_2$ be the solutions to the equation \n\t$$\\frac{1}{d+1} - \\lambda x + \\frac{d}{d+1}x^{2} = 0.$$\n\tBy exactly the same argument as in the proof of Lemma \\ref{lm:sym:detail}, we derive by setting $y_0=1$ that \n\t$$y_i = \\alpha_1 x_1^{i} - \\alpha_2 x_2^{i}$$\nwhere\n\t$$\\alpha_1 = \\frac{dx^{2}}{dx^{2}-1}\\quad\\text{and}\\quad \\alpha_2 =\\frac{1}{dx^{2}-1}$$\n\tand $x_1$ and $x_2$ satisfy\n\t\\begin{equation}\\label{eq:x:2}\n\td^{k+2} x^{2k+4}- d^{k+2} x^{2k+3}+ dx -1 = 0\n\t\\end{equation}\n\t\n \tNote that, $x = \\pm \\frac{1}{\\sqrt d}$ are solutions of \\eqref{eq:x:2}. The remaining $2k+2$ solutions split into pairs $(x, \\frac{1}{dx})$ of distinct components. For each of these pairs, let $x_1 := x$ and $x_2:=\\frac{1}{dx}$. We have $x_1\\neq x_2$ and by setting $\\lambda$ and $y$ as in \\eqref{eq:lambda:x} and \\eqref{eq:recurrent:y:1} with $\\alpha_1 = \\frac{dx^{2}}{dx^{2}-1}$ and $\\alpha_2 =\\frac{1}{dx^{2}-1}$, one can check that $y$ is indeed an eigenvector corresponding to $\\lambda$. Thus, these $2k+2$ solutions correspond to exactly $k+1$ eigenvalues $\\lambda$ of $S_k$. Since $S_k$ has exactly $k+1$ eigenvalues, these are all.\n\\end{proof}\n\n\n\\subsection{Proof of Theorem \\ref{thm:spectrum}}\\label{subsection:proof:spectrum}\nThe following lemma shows that we can retrieve all eigenvectors of $Q_h$ from completely symmetric and pseudo anti-symmetric eigenvectors. Let $\\mathcal A_{S_k}$ be the eigenbasis of $S_k$ as described in Lemma \\ref{lm:anti:detail} and $\\mathcal B$ be a collection of $h+1$ independent completely symmetric eigenvectors of $Q_h$ as in Lemma \\ref{lm:sym:detail}. Let \n$$\\mathcal A: = \\lbrace f_{y, v, i, i+1}, v \\in V(\\mathcal T_{h-1}), y \\in \\mathcal A_{S_{h-\\ell(v)-1}}, i \\in [d-1] \\rbrace.$$\n\\begin{lemma}\\label{lm:spanning}\n\tThe collection $\\mathcal A \\cup \\mathcal B$ is an eigenbasis for $Q_h$.\n\\end{lemma}\n\nAssuming Lemma \\ref{lm:spanning}, we now put everything together to complete the proof of Theorem \\ref{thm:spectrum}.\n\\begin{proof}[Proof of Theorem \\ref{thm:spectrum}]\n\tThe first part of the theorem follows from Lemmas \\ref{lm:sym:detail} and \\ref{lm:anti:detail}. As seen in Lemma \\ref{lm:sym:detail}, the set $\\mathcal B$ in Lemma \\ref{lm:spanning} consists of eigenvectors as in \\eqref{eq:thm:spectrum:sym} and the all-1 vector. By Lemmas \\ref{lm:antisym} and \\ref{lm:anti:detail}, the set $\\mathcal A$ consists of eigenvectors as in \\eqref{eq:thm:spectrum:antisym}. That gives the second part. Finally, the third part follows from Lemma \\ref{lm:spanning}.\n\\end{proof}\n \n\nBefore proving Lemma \\ref{lm:spanning}, we make the following simple observation. For a rooted-tree $T$ that is not necessarily regular, recall that a vector $f: T\\to \\mathbb R$ is said to be \\textit{completely symmetric} if $f(u) = f(v)$ for all pairs of vertices $u, v$ at the same level. 
A vector $f$ is said to be \\textit{energy-preserving} if for all level $l$, \n$$\\sum_{v\\in T: l(v)=l} f(v)=0.$$\n\\begin{observation}\\label{obs}\n\tFor any rooted-tree $T$ and any vector $f: T\\to \\mathbb R$, if $f$ is both energy-preserving and completely symmetric then it is the zero vector.\n\\end{observation}\n\n\n \\begin{proof}[Proof of Lemma \\ref{lm:spanning}]\nFirst of all, we check that their number is equal to $n$. By Lemmas \\ref{lm:sym} and \\ref{lm:antisym}, the total number of vectors is\n\\begin{align*}\nh+1+ \\sum_{k=0}^{h-1} (k+1)(d-1) d^{h-k-1} \n\\end{align*}\nwhere $d^{h-k-1}$ is the number of nodes $v$ of depth $h-k-1$. By algebraic manipulation, this number is exactly $\\frac{d^{h+1} -1}{d-1}=n$.\n\n\nWe will now prove that the vectors considered are linearly independent. Assume that there exist coefficients $c_{y,v,i}$ and $c_g$ such that\n\\begin{equation} \n\\sum c_{y,v,i} f_{y,v,i,i+1} + \\sum_{g\\in \\mathcal B} c_g g = 0\\nonumber\n\\end{equation}\nwhere the first sum runs over all $v \\in V(\\mathcal T_{h-1}), y \\in \\mathcal A_{S_{h-\\ell(v)-1}}, i \\in [d-1]$. We need to show that $ c_{y,v,i}$ and $c_g$ are all 0.\n\nSince pseudo anti-symmetric vectors are energy-preserving on $\\mathcal T_{h}$, the sum $\\sum_{g\\in \\mathcal B} c_g g = -\\sum c_{y,v,i} f_{y,v,i,i+1}$ is both completely symmetric and energy-preserving. And so, by Observation \\ref{obs},\n\\begin{equation}\\label{eq:indep:sum}\n\\sum c_{y,v,i} f_{y,v,i,i+1} = \\sum_{g\\in \\mathcal B} c_g g = 0\n\\end{equation}\nBy the independence of vectors in $\\mathcal B$, we conclude that $c_g = 0$ for all $g\\in \\mathcal B$. \n \nWe now prove by induction on the vertices of $v\\in V(\\mathcal T_{h-1})$ and $i\\in [d-1]$ that $c_{y,v,i} = 0$ for all $y\\in \\mathcal A_{S_{h-\\ell(v)-1}}$. For this induction, we shall use the natural ordering of pairs $(v, i)$ as follows.\n$$(v, i)< (v', i')\\quad \\text{if and only if} \\quad l(v)< l(v') \\text{ or } l(v) = l(v') \\text{ and } i< i'.$$ \n \n \n For the base case, which is for $v := \\rho$ and $i:=1$, from \\eqref{eq:indep:sum}, we have\n \\begin{equation}\\label{key}\nF_{\\rho, 1}:= \\sum_{y\\in \\mathcal A_{S_{h-1}}} c_{y,\\rho,1} f_{y,\\rho,1,2} = -\\sum c_{y,u,j} f_{y,u,j,j+1} \\nonumber\n \\end{equation}\n where the second sum runs over all $u \\in V(\\mathcal T_{h-1})$ and $j \\in [d-1]$ with $(\\rho, 1)< (u, j)$ and all $y \\in \\mathcal A_{S_{h-\\ell(v)-1}}$. Note that when restricting on the subtree $\\mathcal T_{1}^{\\rho}$, $F_{\\rho, 1}$ is a completely symmetric vector because all of the $f_{y,\\rho,1,2}$ are completely symmetric. Likewise, $F_{\\rho, 1}$ is energy-preserving on $\\mathcal T_{1}^{\\rho}$, because of the vectors $ f_{y,u,j,j+1}$. By Observation \\ref{obs}, $F_{\\rho, 1}=0$ on $\\mathcal T_{1}^{\\rho}$. Since the $f_{y,\\rho,1,2}$ are only supported on $\\mathcal T_{1}^{\\rho}\\cup \\mathcal T_{2}^{\\rho}$ and $f_{y,\\rho,1,2}\\vert_{\\mathcal T_{1}^{\\rho}} = -f_{y,\\rho,1,2}\\vert_{\\mathcal T_{2}^{\\rho}} $, so is $F_{\\rho, 1}$. Therefore, $F_{\\rho, 1} = 0$ on $\\mathcal T_{2}^{\\rho}$ and thus on $\\mathcal T_{h}$. 
So, \n \\begin{equation}\\label{key}\n\\sum_{y\\in \\mathcal A_{S_{h-1}}} c_{y,\\rho,1} f_{y,\\rho,1,2} = 0. \\nonumber\n \\end{equation}\n By the independence of vectors in $ \\mathcal A_{S_{h-1}}$, we conclude that $c_{y,\\rho,1} = 0$ for all $y\\in \\mathcal A_{S_{h-1}}$, establishing the base case.\n \n For the induction step, assume that for some $(v, i)$, it is proven that $c_{y,w, k} = 0$ for all $(w, k)< (v, i)$ and $y\\in \\mathcal A_{S_{h-\\ell(w)-1}}$. We now show that $c_{y,v, i} = 0$ for all $y\\in \\mathcal A_{S_{h-\\ell(v)-1}}$. By this assumption, the left-most side in \\eqref{eq:indep:sum} reduces to\n \\begin{equation} \\label{eq:indep:induction}\n \\sum c_{y,u, j} f_{y,u, j, j+1} = 0 \n \\end{equation}\n where the sum runs over all $(u, j)\\ge (v, i)$. Our argument is now similar to the base case. From \\eqref{eq:indep:induction}, we have\n \\begin{equation} \n F_{v, i}:=\\sum_{y\\in \\mathcal A_{S_{h-\\ell(v)-1}}} c_{y,v,i} f_{y,v,i,i+1} = -\\sum_{y, (v, i)<(u, j)} c_{y,u,j} f_{y,u,j,j+1}.\\nonumber\n \\end{equation}\n Similarly to the base case, when restricting on the subtree $\\mathcal T_{i}^{v}$, $F_{v, i}$ is both completely symmetric and energy-preserving on $\\mathcal T_{i}^{v}$. By Observation \\ref{obs}, $F_{v, i}=0$ on $\\mathcal T_{i}^{v}$. This leads to $F_{v, i} = 0$ on $\\mathcal T_{i+1}^{v}$ and thus $F_{v, i}=0$ on $\\mathcal T_{h}$. So, \n \\begin{equation}\\label{key}\n \\sum_{y\\in \\mathcal A_{S_{h-\\ell(v)-1}}} c_{y,v,i} f_{y,v,i,i+1} =0. \\nonumber\n \\end{equation}\n By the independence of vectors in $ \\mathcal A_{S_{h-\\ell(v)-1}}$, we conclude that $c_{y,v, i} = 0$ for all $y\\in \\mathcal A_{S_{h-\\ell(v)-1}}$, establishing the induction step and thus finishing the proof.\n\\end{proof}\n\n\n\n\\section{Proof of Theorem \\ref{thm:lowerbound}}\n\\subsection{Proof of Theorem \\ref{thm:lowerbound}(a)}\nConsider the interchange process on $\\mathcal T_h$. Let $Q_h'$ be the transition matrix of the ace of spades. In other words, $Q_h'$ is the transition matrix of any fixed card on the tree. \nBy \\cite[Theorem 1.1]{CLR}, the spectral gap of the interchange process on the complete $d$-ary tree of depth $h$ is the same as the spectral gap of $Q_h'$. We note that\n\\begin{equation}\\label{key}\nQ_h' = \\frac{2n-d-3}{2(n-1)} I_n + \\frac{d+1}{2(n-1)} Q_h.\n\\end{equation}\nTherefore, the spectral gap of $Q_h'$ is $\\frac{d+1}{2(n-1)}$ times the spectral gap of $Q_h$.\n\n\n\nThus Theorem \\ref{thm:lowerbound} (a) is deduced from the following.\n\\begin{lemma}\\label{lm:Q_h:gap} For sufficiently large $h$, the spectral gap of $Q_h$ is equal to $1 - \\lambda_2$, where $\\lambda_2$ is the second largest eigenvalue of $Q_h$. Moreover,\n\\begin{equation}\\label{eq:lm:gap}\n\\lambda_2 = 1- \\frac{(d-1)^{2}}{(d+1)\\cdot d^{h+1}} + O \\left (\\frac{\\log_{d} n}{n^{2}}\\right ).\n\\end{equation}\n\\end{lemma} \n\n \n\n\n\nTo prove Lemma \\ref{lm:Q_h:gap}, we shall use Theorem \\ref{thm:spectrum}. 
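Although it is not used anywhere in the proofs, Lemma \\ref{lm:Q_h:gap} is easy to test numerically on small trees. The following Python sketch (an illustration only, relying on NumPy) builds $Q_h$ for one choice of $d$ and $h$, compares its spectrum with the multiset of eigenvalues predicted by Theorem \\ref{thm:spectrum} (with the multiplicities used in the proof of Lemma \\ref{lm:spanning}), and prints the spectral gap next to the leading term of \\eqref{eq:lm:gap}.
\\begin{verbatim}
import numpy as np

d, h = 2, 8                                   # illustration only; any d >= 2, h >= 1
n = (d ** (h + 1) - 1) // (d - 1)             # number of vertices of the tree

# transition matrix Q_h, vertices labelled 0,...,n-1 in breadth-first (heap) order
Q = np.zeros((n, n))
for v in range(n):
    for w in range(d * v + 1, d * v + d + 1): # children of v
        if w < n:
            Q[v, w] = Q[w, v] = 1 / (d + 1)
for v in range(n):
    Q[v, v] = 1 - Q[v].sum()                  # holding probability at root and leaves

numeric = np.linalg.eigvalsh(Q)               # Q is symmetric; increasing order

def lams(coeffs):
    # divide out the two roots x = +-1/sqrt(d) and map the remaining roots to
    # eigenvalues; x and 1/(dx) give the same eigenvalue (a 2-to-1 map), so the
    # sorted list is thinned out by a factor of two
    quotient, _ = np.polydiv(coeffs, [d, 0.0, -1.0])
    xs = np.roots(quotient)
    return list(np.sort(np.real(d / (d + 1) * (xs + 1 / (d * xs))))[::2])

predicted = [1.0]
# roots of d^{h+1} x^{2h+2} = 1: h further eigenvalues, each of multiplicity one
predicted += lams([d ** (h + 1)] + [0] * (2 * h + 1) + [-1])
# roots of d^{k+2} x^{2k+4} - d^{k+2} x^{2k+3} + d x - 1 = 0 for k = 0,...,h-1:
# k+1 eigenvalues, each carried by (d-1) d^{h-k-1} independent eigenvectors
for k in range(h):
    predicted += ((d - 1) * d ** (h - k - 1)) * lams(
        [d ** (k + 2), -d ** (k + 2)] + [0] * (2 * k + 1) + [d, -1])

predicted = np.sort(predicted)
print(np.max(np.abs(numeric - predicted)))    # discrepancy between the two spectra
print(1 - numeric[-2], (d - 1) ** 2 / ((d + 1) * d ** (h + 1)))
\\end{verbatim}
The first printed number should be negligible, as it only reflects numerical error, while the two numbers in the last line should agree up to the lower order term in \\eqref{eq:lm:gap}.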
Let $\\lambda$ be an eigenvalue of $Q_h$ and $x$ be a solution of\n\\begin{equation} \\label{eq:x:lambda:1:1}\nd x^{2} - (d+1)\\lambda x+1 = 0\n\\end{equation}\nwhich we have encountered in \\eqref{eq:x:lambda:1}.\n\nNote that if $\\lambda^{2}\\ge \\frac{4d}{(d+1)^{2}}$ then this equation has two real solutions both of which have the same sign as $\\lambda$.\n \nSince the equation \\eqref{eq:x:sym:thm} only has nonreal solutions except $x = \\pm \\frac{1}{\\sqrt d}$, combining this observation with Theorem \\ref{thm:spectrum}, each eigenvalue $\\lambda^{2}\\ge \\frac{4d}{(d+1)^{2}}$ is given by Equation \\eqref{eq:lambda:x:thm} for some $x \\neq \\pm \\frac{1}{\\sqrt d}$ satisfying\n \\begin{equation} \\label{eq:x:anti:k}\n d^{k+1} x ^{2k+2}- d^{k+1} x ^{2k+1}+ dx -1 = 0,\n \\end{equation}\nfor some $k\\in [1, h]$ which is simply Equation \\eqref{eq:x:antisym} (with $k$ being shifted for notational convenience). \n \n \n We shall show the following\n \\begin{lemma}\\label{lm:bound:lambda}\n \t\\begin{enumerate} [label = (\\alph*)]\n \t\t\\item For all $k\\in [1, h]$, Equation \\eqref{eq:x:anti:k} has no solutions in $\\left (-\\infty, -\\frac{1}{\\sqrt d}\\right )$. There are no eigenvalues of $Q_h$ less than $-\\sqrt \\frac{4d}{(d+1)^{2}}$.\n\\item \t There exists a constant $h_0>0$ such that for all $k\\ge h_0$, the largest solution $x$ of \\eqref{eq:x:anti:k} satisfies\n \t\\begin{equation}\\label{eq:bound:x}\n \t1 - \\frac{a}{d^{k+1}} < x<1 - \\frac{d-1}{d^{k+1}} \\quad\\text{where}\\quad a=d-1 + \\frac{2(d-1)^{2}(k+1)}{d^{k+1}}.\n \t\\end{equation}\n \tFurthermore, for $k=h$, the eigenvalue that corresponds to this $x$ satisfies\n \t\\begin{equation}\\label{eq:bound:lambda}\n \t\\left |\\lambda - \\left (1-\\frac{(d-1)^{2}}{(d+1)\\cdot d^{h+1}}\\right )\\right | = O\\left (\\frac{\\log_{d} n}{n^{2}}\\right ).\n \t\\end{equation}\n \\end{enumerate}\n \\end{lemma}\n \n Assuming Lemma \\ref{lm:bound:lambda}, we conclude that for sufficiently large $h$, the largest $x$ that satisfies one of the equations \\eqref{eq:x:anti:k} for some $k$ in $[1, h]$ satisfies\n \\begin{equation} \n 1 - \\frac{a}{d^{h+1}} < x<1 - \\frac{d-1}{d^{h+1}} \\quad\\text{where}\\quad a=d-1 + \\frac{2(d-1)^{2}(h+1)}{d^{h+1}}.\\nonumber\n \\end{equation}\nSince the right-hand side of \\eqref{eq:lambda:x:thm} is increasing in $x$ for $x\\ge \\frac{1}{\\sqrt d}$, the second largest eigenvalue $\\lambda_2$ of $Q_h$ corresponds to such $x$ and so it satisfies \\eqref{eq:bound:lambda}, proving \\eqref{eq:lm:gap}. By the first part of Lemma \\ref{lm:bound:lambda}, there are no eigenvalues of $Q_h$ whose absolute value is larger than $\\lambda_2$. This proves Lemma \\ref{lm:Q_h:gap}.\n \n \\begin{proof}[Proof of Lemma \\ref{lm:bound:lambda}]\n \tLet $f(x) = d^{k+1} x^{2k+2} - d^{k+1} x^{2k+1} + dx-1$. \n \t\n \tTo prove part (a), for all $x< -\\frac{1}{\\sqrt d}$, we have\n \t$$d^{k+1}x^{2k+2}> 1\\quad\\text{and}\\quad - d^{k+1} x^{2k+1} > - dx$$\n \tand so $f$ has no roots in $\\left (-\\infty, -\\frac{1}{\\sqrt d}\\right )$. Assume that there were an eigenvalue $\\lambda<-\\sqrt \\frac{4d}{(d+1)^{2}}$. By the argument right before \\eqref{eq:x:anti:k}, Equation \\eqref{eq:x:lambda:1:1} has two negative solutions $x_1 0.\\nonumber\n \t\\end{equation}\n \tThus, $f$ is increasing on the interval $[1 - \\frac{1}{2k+2} , \\infty)$ which contains $[1 - \\frac{a}{d^{k+1}}, 1 - \\frac{d-1}{d^{k+1}}]$ for sufficiently large $k$. 
Thus, to prove \\eqref{eq:bound:x}, it suffices to show that \n \t\\begin{equation}\\label{eq:derivative:test}\n \tf\\left (1 - \\frac{a}{d^{k+1}}\\right )<00,\\nonumber\n \t\\end{eqnarray}\n \tproving the \\eqref{eq:derivative:test}. \n \t\n \tWe have shown that there exists a solution $x = 1-\\alpha$ where $\\frac{d-1}{d^{k+1}}\\le \\alpha \\le \\frac{a}{d^{k+1}}$. Let $\\lambda$ be the eigenvalue corresponding to $x$ as in \\eqref{eq:lambda:x}. We have\n \t\\begin{equation}\\label{key}\n \t\\frac{d+1}{d}\\lambda = 1 - \\alpha+\\frac{1}{d(1-\\alpha)} \\in \\left (1 - \\alpha+\\frac{1}{d} (1 +\\alpha), 1 - \\alpha+\\frac{1}{d} (1 +\\alpha+2\\alpha^{2})\\right ). \\nonumber \n \t\\end{equation}\n \tIn other words, \n \t\\begin{equation}\\label{key}\n \t\\frac{d+1}{d}\\lambda \\in\\left (\\frac{d+1}{d} -\\frac{d-1}{d} \\alpha , \\frac{d+1}{d} -\\frac{d-1}{d} \\alpha +\\frac{2}{d}\\alpha^{2}\\right ). \\nonumber\n \t\\end{equation}\n \tUsing the bounds $\\frac{d-1}{d^{k+1}}\\le \\alpha \\le \\frac{a}{d^{k+1}}$, we obtain\n \t\\begin{equation}\\label{key}\n \t\\lambda - \\left (1-\\frac{(d-1)^{2}}{(d+1)\\cdot d^{k+1}}\\right ) \\le \\frac{2}{d+1}\\alpha^{2}\\le \\frac{2a^{2}}{(d+1)\\cdot d^{2k+2}}\\le \\frac{2}{(d+1)\\cdot d^{2k+1}}\\nonumber \n \t\\end{equation}\n \tand\n \t\\begin{equation}\\label{key}\n \t\\lambda - \\left (1-\\frac{(d-1)^{2}}{(d+1)\\cdot d^{k+1}}\\right ) \\ge -\\frac{d-1}{d+1}\\alpha+\\frac{(d-1)^{2}}{(d+1)\\cdot d^{k+1}} \\ge -\\frac{2(d-1)^{3}(k+1)}{(d+1)\\cdot d^{2k+2}}\\ge - \\frac{2(k+1)}{d^{2k}}.\\nonumber \n \t\\end{equation}\n \tThus, for $k=h$,\n \t\\begin{equation} \n \t\\left |\\lambda - \\left (1-\\frac{(d-1)^{2}}{(d+1)\\cdot d^{h+1}}\\right )\\right | \\le \\frac{2(h+1)}{d^{2h}} .\\nonumber\n \t\\end{equation}\n \tThese bounds together with the equation $n = \\frac{d^{h+1}-1}{d-1} \\in (d^{h}, 2d^{h})$ give \\eqref{eq:bound:lambda}.\n \\end{proof}\n \n \n \\subsection{Proof of Theorem \\ref{thm:lowerbound} (b)}\n For the proof of the lower bound, we will use Wilson's lemma.\n \\begin{lemma}[Lemma 5, \\cite{Wilson}]\\label{W}\n \tLet $\\varepsilon, R$ be positive numbers and $0<\\gamma< 2-\\sqrt{2} $. Let $F: X\\to \\mathbb R$ be a function on the state space $X$ of a Markov chain $(C_t)$ such that \n \t$$\\expect{F(C_{t+1})\\vert C_t) }= (1 - \\gamma )F(C_t), \\quad \\expect{\\left [F(C_{t+1})- F(C_{t})\\right ]^2 \\vert C_t} \\leq R,$$ and \n \t$$t \\leq \\frac{ \\log \\max_{x\\in X}F(x) + \\frac{1}{2} \\log( \\gamma \\varepsilon\/(4R))}{-\\log (1 - \\gamma )}.\n \t$$ Then the total variation distance from stationarity at time $t$ is at least $1-\\varepsilon$.\n \\end{lemma}\n \n \n \\begin{proof}[Proof of Theorem \\ref{thm:lowerbound} (b)]\n Let $0 j > m$ such that $\\lnot\\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post}(s_j,s_{j+1})$ holds.\n Furthermore, by construction of $\\mathcal{G}_{post}$, all states $s_j$ with $n > j > m$ satisfy $\\varphi$.\n Hence $\\varphi \\land \\lnot\\mathfrak{S}^{post}_{\\texttt{{SAFE}}{}}(s_j,s_{j+1})$ holds, which implies that $\\lnot \\mathfrak{S}_{\\texttt{{SAFE}}}(s_j,s_{j+1})$ holds. We conclude that $\\lnot\\mathfrak{S}_{\\texttt{{SAFE}}{}}$ is a necessary subgoal in $\\mathcal{G}^I$.\n\t\n\tWe now show (c). 
Since we return in line~\\ref{line:thirdreturn}, we have $\\operatorname{Unsat}(\\operatorname{Enf}(F,\\mathcal{G}))$ and, by induction hypothesis, $\\operatorname{Unsat}(\\operatorname{Enf}(\\lnot\\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post},\\mathcal{G}_{post}))$.\n As the transition relation of $\\mathcal{G}_{post}$ is restricted to $\\varphi$, this implies $\\operatorname{Unsat}(\\operatorname{Enf}(\\varphi \\land \\lnot\\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post},\\mathcal{G}_{post}))$.\n We also have $F \\implies \\lnot \\varphi$ and $(\\varphi \\land \\lnot\\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post}) \\implies \\varphi$. As $(\\mathit{Safe}_{post} \\lor \\mathit{Reach}_{post}) \\implies (\\mathit{Safe} \\lor \\mathit{Reach})$ holds, we can apply \\Cref{lem:unsat_sum} to conclude $\\operatorname{Unsat}(\\operatorname{Enf}(F \\lor (\\varphi\\land\\lnot\\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post})),\\mathcal{G})$, which implies $\\operatorname{Unsat}(\\operatorname{Enf}(\\neg \\mathit{Safe} \\lor F \\lor (\\varphi\\land\\lnot\\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post})),\\mathcal{G}) = \\operatorname{Unsat}(\\operatorname{Enf}(\\lnot\\mathfrak{S}_{\\texttt{{SAFE}}{}}),\\mathcal{G}))$.\n\t\t\n\t\\medskip\n\t{\\it Case 4: $\\operatorname{Reach}(\\mathcal{G})$ returns in line~\\ref{line: last return} and the {\\upshape\\textbf{if}} statement in line~\\ref{line:transback} is false}.\n\tBy induction hypothesis we assume that the recursive calls in lines~\\ref{line: recursion1} and \\ref{line: recursion2} returned tuples $(R_{post}, \\mathfrak{S}_{\\texttt{{REACH}}{}}^{post}, \\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post})$ and $(R_{pre}, \\mathfrak{S}_{\\texttt{{REACH}}{}}^{pre}, \\mathfrak{S}_{\\texttt{{SAFE}}{}}^{pre})$ satisfying properties (a)--(c) above for $\\mathcal{G}_{post}$ and $\\mathcal{G}_{pre}$. We now show these properties in $\\mathcal{G}$ for $R\\lor R_{pre}$, and \n\t\\begin{align*}\n\t\t\\mathfrak{S}_{\\texttt{{REACH}}{}} &= \\;\\;(\\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}{}}^{post}) \\implies \\mathfrak{S}_{\\texttt{{REACH}}{}}^{post}) \\\\\n & \\land \\;\\; ((\\neg \\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}{}}^{post}) \\land \\operatorname{Pre}(F)) \\implies F) \\\\\n & \\land \\;\\; ((\\neg \\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}{}}^{post}) \\land \\neg \\operatorname{Pre}(F)) \\implies \\mathfrak{S}_{\\texttt{{REACH}}{}}^{pre}) \\\\\n & \\land \\;\\; (\\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}{}}^{post}) \\lor \\operatorname{Pre}(F) \\lor \\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}{}}^{pre})) \\\\\n\t\t\\mathfrak{S}_{\\texttt{{SAFE}}{}} &= (\\neg \\varphi \\implies \\mathfrak{S}_{\\texttt{{SAFE}}{}}^{pre}) \\land (\\varphi \\implies \\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post}),\n\t\\end{align*} \n\twith $F= C \\land \\mathit{R}_{post}[\\var\/\\varp]$.\n\nWe first show that (a) $\\mathfrak{S}_{\\texttt{{REACH}}{}}$ is winning for \\texttt{{REACH}}{} from states satisfying $R\\lor R_{pre} \\lor \\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}})$. 
For states in $R$ this is trivial, so let $\\rho = s_0 s_1 \\ldots$ be a play in $\\mathcal{G}$ conforming to $\\mathfrak{S}_{\\texttt{{REACH}}{}}$ such that $R_{pre}(s_0)$ holds.\n Our first claim is that if there exists $k \\in \\mathbb{N}$ such that $\\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}}^{post})(s_k)$ holds, then $\\rho$ must be winning for \\texttt{{REACH}}{}.\n This is due to the fact that $\\mathfrak{S}_{\\texttt{{REACH}}}^{post}$ is winning in $\\mathcal{G}_{post}$ from all states satisfying $\\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}}^{post})$, which allows us to use~\\Cref{lem:gpost}.\n To argue that $\\mathfrak{S}_{\\texttt{{REACH}}}^{post}$ keeps playing according to $\\mathfrak{S}_{\\texttt{{REACH}}}^{post}$ once such a state is reached, we observe that if a symbolic reachability strategy $\\mathfrak{S}$ wins from $s$, then $\\operatorname{Pre}(\\mathfrak{S})$ holds in any state in $S_{\\texttt{{REACH}}}$ reachable from $s$ via a play prefix conforming to $\\mathfrak{S}$, by definition.\n \n Now we show that such a position $k$ must exist.\n First, for $j \\in \\mathbb{N}$ such that $(\\neg \\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}}^{post}) \\land \\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G})))(s_j)$ holds, the transition $(s_j,s_{j+1})$ must satisfy $F$.\n This is because if $s_j \\in S_{\\texttt{{SAFE}}}$, then all outgoing transitions from $s_j$ satisfy $F$.\n Otherwise, it follows by the fact that $\\rho$ conforms to $\\mathfrak{S}_{\\texttt{{REACH}}}$.\n As $\\operatorname{Post}(F) \\equiv R_{post}[\\var\/\\varp]$ and $\\mathfrak{S}_{\\texttt{{REACH}}}^{pos}$ wins from all states satisfying $R_{post}$ by assumption, it follows that $s_{j+1}$ satisfies $\\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}}^{post})$.\n\n As long as $\\rho$ visits only states satisfying $(\\neg \\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G})) \\land \\neg \\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}}^{post}))$, the strategy $\\mathfrak{S}_{\\texttt{{REACH}}}$ prescribes to play according to $\\mathfrak{S}_{\\texttt{{REACH}}}^{pre}$.\n By assumption, this strategy is winning for \\texttt{{REACH}}{} in $\\mathcal{G}_{pre}$, and hence the play $\\rho$ eventually visits a state in $\\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G}))$.\n As above, the play is guaranteed to stay in $\\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}}^{pre})$ until that position.\n\n The above argument also shows that $\\mathfrak{S}_{\\texttt{{REACH}}}$ is winning for all states satisfying $\\operatorname{Pre}(F) \\lor \\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}}^{post}) \\lor \\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}}^{pre})$, which is implied by $\\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}})$.\n Also, $\\mathfrak{S}_{\\texttt{{REACH}}} \\implies (\\mathit{Reach} \\lor \\mathit{Safe})$ is valid, as the corresponding statements hold for the pre- and post-strategies, and $F \\implies (\\mathit{Safe} \\lor \\mathit{Reach})$ is valid.\n\n\n\n\t\n\n\t \n\n\t\n\tNext we show that (b) $\\lnot \\mathfrak{S}_{\\texttt{{SAFE}}{}}$ is a necessary subgoal in $\\mathcal{G}^I$.\n\tNo player can play back from $\\mathcal{G}_{post}$ to $\\mathcal{G}_{pre}$ without $\\texttt{{REACH}}$ having already won in $\\mathcal{G}_{post}$. 
We first show that under this condition,\n \\[\\neg \\mathfrak{S}_{\\texttt{{SAFE}}} = (\\neg \\varphi \\land \\neg \\mathfrak{S}_{\\texttt{{SAFE}}{}}^{pre}) \\lor (\\varphi \\land \\neg \\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post})\\]\n qualifies as a necessary subgoal in $\\mathcal{G}^I$. For this, consider the necessary subgoal $C$.\n For any play $\\rho = s_0 s_1 \\ldots$ with $n \\in \\mathbb{N}$ such that $\\mathit{Goal}(s_n)$ there is some $k \\in \\mathbb{N}$ with $k < n$ and $C(s_k,s_{k+1})$. As $F$ characterizes a subset of $C$, we check two cases: Either (1) $\\lnot F(s_k,s_{k+1})$ or (2) $F(s_k,s_{k+1})$. In case (1), we have $\\lnot R_{post}(s_{k+1})$ and because of our assumption that no transition of the game satisfies $\\varphi \\land \\neg \\varphi'$, for all $j \\in \\mathbb{N}$ with $k < j < n: \\varphi(s_j)$. It follows by induction hypothesis that there is some $l \\in \\mathbb{N}$ such that $\\neg \\mathfrak{S}_{\\texttt{{SAFE}}}^{post}(s_l,s_{l+1})$.\n In case (2) we use that $\\mathfrak{S}_{\\texttt{{SAFE}}}^{pre}$ plays only moves available in $\\mathcal{G}_{pre}$, and hence $\\mathfrak{S}_{\\texttt{{SAFE}}}^{pre} \\implies \\neg F$ is valid.\n Furthermore $F \\implies \\neg \\varphi$, as $F$ characterizes a subset of the subgoal $C$.\n Hence we can conclude that $F \\implies (\\neg \\varphi \\land \\neg \\mathfrak{S}_{\\texttt{{SAFE}}}^{pre})$ is valid.\n It follows that $\\neg \\mathfrak{S}_{\\texttt{{SAFE}}}$ qualifies as necessary subgoal.\n\n Finally we show (c) that $\\operatorname{Unsat}(\\operatorname{Enf}(\\neg \\mathfrak{S}_{\\texttt{{SAFE}}},\\mathcal{G}))$ holds.\n We have $(\\mathit{Safe}_{pre} \\lor \\mathit{Reach}_{pre}) \\implies (\\mathit{Safe} \\lor \\mathit{Reach})$ and $(\\mathit{Safe}_{post} \\lor \\mathit{Reach}_{post}) \\implies (\\mathit{Safe} \\lor \\mathit{Reach})$. As $\\operatorname{Pre}(\\neg \\varphi \\land \\neg \\mathfrak{S}_{\\texttt{{SAFE}}}^{pre}) \\land \\operatorname{Pre}(\\varphi \\land \\neg \\mathfrak{S}_{\\texttt{{SAFE}}}^{post})$ is cleary unsatisfiable, we can again apply Lemma \\ref{lem:unsat_sum} to infer $\\operatorname{Unsat}(\\operatorname{Enf}(\\neg \\mathfrak{S}_{\\texttt{{SAFE}}},\\mathcal{G}))$.\n This uses that any transitions reachable in $\\mathcal{G}_{post}$ has to satisfy $\\varphi$ in this case.\n\t\n\t\\medskip\n\t{\\it Case 5: $\\operatorname{Reach}(\\mathcal{G})$ returns in line~\\ref{line: last return} and the {\\upshape\\textbf{if}} statement in line~\\ref{line:transback} is true}. 
\n\n\tWe assume that both recursive calls terminated and, by induction, returned triples $(R_{post},\\mathfrak{S}_{\\texttt{{REACH}}}^{post},\\mathfrak{S}_{\\texttt{{SAFE}}}^{post})$ and $(R_{pre},\\mathfrak{S}_{\\texttt{{REACH}}}^{pre},\\mathfrak{S}_{\\texttt{{SAFE}}}^{pre})$ satisfying (a)-(c).\n\n (a) is shown exactly as in Case 4.\n\n\tFor (b) we first observe that by setting $\\varphi$ to $\\texttt{false}$ (see line~\\ref{line:tpostfalse}) in this case we get $\\mathfrak{S}_{\\texttt{{SAFE}}} = \\mathfrak{S}_{\\texttt{{SAFE}}}^{pre}$.\n We show that $\\neg \\mathfrak{S}_{\\texttt{{SAFE}}} = \\neg \\mathfrak{S}_{\\texttt{{SAFE}}}^{pre}$ is a necessary subgoal in $\\mathcal{G}_I$.\n The transition predicate $F$ in line~\\ref{line: recursion1} is a sufficient subgoal by induction hypothesis, but due to the restriction on the post-game, we cannot conclude that states in $\\operatorname{Post}(C)$ that are not in $\\operatorname{Post}(F)$ are winning for \\texttt{{SAFE}}{}.\n\tBy adding all transitions to $\\mathit{Goal}$ (line~\\ref{line: E2}) we get that $F$ in line~\\ref{line: recursion2} is a necessary and sufficient subgoal (clearly, any winning play must go through $\\mathit{Goal} [\\var\/\\varp]$).\n\tAs we have ensured that $F$ is necessary, we know for all plays $\\rho = s_0 s_1 \\ldots$ with some $n \\in \\mathbb{N}$ such that $\\mathit{Goal}(s_n)$ there is some $k \\in \\mathbb{N}$ with $k < n$ and $F(s_k,s_{k+1})$. As in Case 4 we may conclude that $F \\implies \\neg \\mathfrak{S}_{\\texttt{{SAFE}}}^{pre}$.\nIt follows that $\\neg \\mathfrak{S}_{\\texttt{{SAFE}}}$ is a necessary subgoal in $\\mathcal{G}_I$.\n\nFor (c) we observe that $\\operatorname{Unsat}(\\operatorname{Pre}(\\operatorname{Enf}(\\neg \\mathfrak{S}_{\\texttt{{SAFE}}}^{pre},\\mathcal{G}_{pre})))$ holds by induction hypothesis, which directly implies $\\operatorname{Unsat}(\\operatorname{Pre}(\\operatorname{Enf}(\\neg \\mathfrak{S}_{\\texttt{{SAFE}}}^{pre},\\mathcal{G})))$. 
This concludes the argument for the final case, and the proof is complete.\n\t\\qed\n\n\\end{proof}\n\n\n\\terminationfinite*\n\\begin{proof}\n\tWe denote by $\\operatorname{size}(\\mathcal{G})$ the number of concrete transitions of $\\mathcal{G}$, formally: $\\operatorname{size}(\\mathcal{G}) = |\\{ (s,s') \\in S \\times S \\mid (\\mathit{Safe} \\lor \\mathit{Reach})(s,s') \\text{ is valid}\\}|$.\n\tIf the domains of all variables are finite, then so is $\\operatorname{size}(\\mathcal{G})$.\n\tWe assume that this is the case and show that the subgames on which $\\operatorname{Reach}(\\mathcal{G})$ recurses are strictly smaller in this measure.\n\tThis is enough to guarantee termination.\n\t\n\tThe first subgame is constructed in line~\\ref{line:gpost} and takes the form:\n\t\\[\\mathcal{G}_{post} = \\langle \\operatorname{Post}(\\mathit{C})[\\varp\/\\var],\\mathit{Safe} \\land \\varphi, \\mathit{Reach} \\land \\varphi, \\mathit{Goal} \\rangle.\\]\n\tThe important restriction of this game is that both safety and reachability player transitions have the additional precondition $\\varphi$.\n\tWe may assume that $\\operatorname{Enf}(C,\\mathcal{G})$ is satisfiable, as otherwise the algorithm does not reach line~\\ref{line:gpost}.\n\tThen, in particular, $C$ is satisfiable, by the definition of $\\operatorname{Enf}(C,\\mathcal{G})$.\n\tBut $C = \\operatorname{Instantiate}(\\varphi,\\mathcal{G}) = (\\mathit{Safe} \\lor \\mathit{Reach}) \\land \\neg \\varphi \\land \\varphi'$, which means that there exist states $s,s'$ such that $ (\\mathit{Safe} \\lor \\mathit{Reach})(s,s')$, $\\neg \\varphi(s)$, and $\\varphi(s')$ are all valid. \n\tThis transition from $s$ to $s'$ in $\\mathcal{G}$ is excluded in $\\mathcal{G}_{post}$, and as no new transitions are included, it follows that $\\operatorname{size}(\\mathcal{G}_{post}) < \\operatorname{size}(\\mathcal{G})$.\n\t\n\tThe second subgame is constructed in line~\\ref{line:gpre2} and takes the form:\n\t\\[\\mathcal{G}_{pre} = \\langle I,\\mathit{Safe} \\land \\lnot F,\\mathit{Reach} \\land \\lnot F, \\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G}))\\rangle.\\]\n We may assume that $F \\land (\\mathit{Safe} \\lor \\mathit{Reach})$ is satisfiable, as otherwise the algorithm would not have moved past line~\\ref{line:safeavoidsE}.\n Observe that if $F$ is changed in line~\\ref{line: E2} then it is only extended and hence satisfiability is preserved.\n As no transition satisfying $F$ exists in $\\mathcal{G}_{pre}$ it follows that $\\operatorname{size}(\\mathcal{G}_{pre}) < \\operatorname{size}(\\mathcal{G})$.\n This concludes the proof.\n\t\\qed\n\\end{proof}\n\n\\terminationbisim*\n\\begin{proof}\n\tLet $S_1,\\ldots,S_n$ be the bisimulation classes of $\\mathcal{G}$, and $\\psi_1, \\ldots, \\psi_n \\in \\cal L(\\mathcal{V})$ be the formulas that define them.\n\tWe define\n\t\\[\\operatorname{size}(\\mathcal{G}) = |\\{(S_i,S_j) \\mid (\\mathit{Safe} \\lor \\mathit{Reach}) \\land \\psi_i \\land \\psi_j' \\text{ is satisfiable} \\}|,\\]\n\twhich equals the number of transitions in the bisimulation quotient of $\\mathcal{G}$ under $\\sim$.\n\tOur aim is to show that $\\operatorname{Reach}(\\cdot)$ terminates for all subgames that are considered in any recursive call of $\\operatorname{Reach}(\\mathcal{G})$.\n\t\n\tTo this end, we show that $\\operatorname{Reach}(\\mathcal{G})$ terminates for all reachability games $\\mathcal{G} = \\langle \\mathit{Init}, \\mathit{Safe}, \\mathit{Reach}, \\mathit{Goal} \\rangle$ such that\n\t\\begin{itemize}\n\t\\item 
$\\operatorname{size}(\\mathcal{G})$ is finite,\n \\item the relation $\\sim$ is a bisimulation on $\\mathcal{G}$, and\n\t\\item $\\mathit{Goal}$ is equivalent to a disjunction of formulas $\\psi_i$.\n\t\\end{itemize}\n\tWe show this by induction on $\\operatorname{size}(\\mathcal{G})$.\n\t\n\tLet $\\mathcal{G} = \\langle \\mathit{Init}, \\mathit{Safe}, \\mathit{Reach}, \\mathit{Goal} \\rangle$ satisfy these conditions, and assume that $\\operatorname{size}(\\mathcal{G}) = 0$.\n\tThen it follows that $\\mathit{Safe} \\lor \\mathit{Reach}$ is unsatisfiable.\n\tThis is because if any $(s_1,s_2)$ would satisfy $\\mathit{Safe} \\lor \\mathit{Reach}$, then in particular $(\\mathit{Safe} \\lor \\mathit{Reach}) \\land \\psi_i \\land \\psi_j'$ would be satisfied by $(s_1,s_2)$, where we assume $s_1 \\in \\S_i$ and $s_2 \\in \\S_j$.\n\tIt follows that $\\operatorname{Unsat}(\\operatorname{Enf}(C,\\mathcal{G}))$ in line~\\ref{line: cpre} is true, as $\\operatorname{Enf}(C,\\mathcal{G}) \\implies (\\mathit{Safe} \\lor \\mathit{Reach})$ is valid for any $C$.\n\tBut then Algorithm~\\ref{alg:algreach} terminates on input $\\mathcal{G}$.\n\t\n\tNow suppose that we have $\\mathcal{G}$ with $\\operatorname{size}(\\mathcal{G}) > 0$.\n\tIf the algorithm does not return in lines~\\ref{line:ret1} or~\\ref{line:safety wins}, we have to consider the first subgame\n\t\\[\\mathcal{G}_{post} = \\langle \\operatorname{Post}(C)[\\varp\/\\var], \\mathit{Safe} \\land \\varphi, \\mathit{Reach} \\land \\varphi, \\mathit{Goal} \\rangle,\\]\n\twhich is constructed in line~\\ref{line:gpost}.\n\tWe may assume that for some $I \\subseteq \\{1,\\ldots,n\\}$ we have $\\varphi \\equiv \\bigvee_{i \\in I} \\psi_i$, due to our assumption on the function $\\operatorname{Interpolate}$.\n\tHence the effect of restricting all transitions to $\\varphi$ is to remove all transitions in states not in $\\bigcup \\{S_i \\mid i \\in I\\}$, which are exactly the states in $\\bigcup \\{S_i \\mid i \\in \\{1,\\ldots,n\\} \\setminus I\\}$.\n\tIt is clear that $\\sim$ is still a bisimulation in the resulting game, and that the goal states are preserved.\n\tTo see that $\\operatorname{size}(\\mathcal{G}_{post}) < \\operatorname{size}(\\mathcal{G})$ we may assume that $\\operatorname{Unsat}(\\operatorname{Enf}(C,\\mathcal{G}))$ is false, otherwise we would have returned in line~\\ref{line:safety wins}.\n\tThen, in particular, there is a transition in $\\mathcal{G}$ satisfying $\\neg \\varphi$, which means that there is a pair $S_i,S_j$ such that $(\\mathit{Safe} \\lor \\mathit{Reach}) \\land \\neg \\varphi \\land \\psi_i \\land \\psi_j'$ is satisfiable.\n\tThis is cleary unsatisfiable when replacing $(\\mathit{Safe} \\lor \\mathit{Reach})$ by $(\\mathit{Safe} \\lor \\mathit{Reach}) \\land \\varphi$.\n\tHence, $\\operatorname{size}(\\mathcal{G}_{post}) < \\operatorname{size}(\\mathcal{G})$.\n\tAs a result, we can apply the induction hypothesis to conclude that the recursive call $\\operatorname{Reach}(\\mathcal{G}_{post})$ in line~\\ref{line: recursion1} terminates.\n\t\n\tNow let us consider the second subgame $\\mathcal{G}_{pre}$, as constructed in line~\\ref{line:gpre2}.\n\tFirst, we observe that $F \\equiv (\\mathit{Safe} \\lor \\mathit{Reach}) \\land \\neg \\varphi \\land \\varphi' \\land R_{post}[\\var\/\\varp]$, where $R_{post}$ is a state predicate characterizing the initial winning states of $\\mathcal{G}_{post}$ (this uses~\\Cref{thm:partcorr}).\n\tAs $\\sim$ is a bisimulation on $\\mathcal{G}_{post}$, it follows 
by~\\Cref{lem:bisimpreservesreach} that $R_{post}$ is equivalent to a disjunction of formulas $\\psi_i$.\n\tAs a consequence, we can equivalently write $F$ as $\\phi_1 \\land (\\phi_2[\\var\/\\varp])$ for two formulas $\\phi_1,\\phi_2 \\in \\cal L(\\mathcal{V})$ that are both equivalent to disjunctions of $\\psi_i$.\n\tBy~\\Cref{lem:preenfbisim} it follows that $\\operatorname{Pre}(\\operatorname{Enf}(E,\\mathcal{G}))$ is also equivalent to a disjunction of $\\psi_i$.\n\t\n\tRestricting transitions to $\\neg F$ in $\\mathcal{G}_{pre}$ has the effect of removing all transitions from states in $\\bigcup\\{S_i \\mid \\psi_i \\implies \\phi_1 \\text{ is valid}\\}$ to states in $\\bigcup\\{S_i \\mid \\psi_i \\implies \\phi_2 \\text{ is valid}\\}$.\n\tIt is clear that $\\sim$ is still a bisimulation in the resulting game.\n\tFurthermore, as $\\operatorname{Enf}(F,\\mathcal{G})$ is satisfiable, there is at least one such transition in $\\mathcal{G}$.\n\tIt follows that $\\operatorname{size}(\\mathcal{G}_{pre}) < \\operatorname{size}(\\mathcal{G})$ and hence the algorithm terminates by induction hypothesis.\n\t\\qed\n\\end{proof}\n\n\\begin{lemma}\n\t\\label{lem:preenfbisim}\n\tLet $\\sim$ be a bisimulation on $\\mathcal{G}$ which is also an equivalence relation, and $S_1,\\ldots, S_n$ be its equivalence classes.\n\tAssume that $S_1,\\ldots, S_n$ are defined by $\\psi_1,\\ldots, \\psi_n \\in \\cal L(\\mathcal{V})$.\n\tLet $\\phi_1 \\land (\\phi_2[\\var\/\\varp]) \\in \\cal L(\\mathcal{V} \\cup \\mathcal{V'})$ be such that both $\\phi_1,\\phi_2$ are equivalent to disjunctions of formulas $\\psi_i$.\n\t\n\tThen, $\\operatorname{Pre}(\\operatorname{Enf}(\\phi_1 \\land (\\phi_2[\\var\/\\varp]),\\mathcal{G}))$ is equivalent to a disjunction of formulas $\\psi_i$.\n\\end{lemma}\n\\begin{proof}\n\tWe show that if there exists a state in $S_i$ that satisfies $\\operatorname{Pre}(\\operatorname{Enf}(\\phi_1 \\land (\\phi_2[\\var\/\\varp]),\\mathcal{G}))$, then so do all states in $S_i$.\n\tLet $s_1 \\in S_i$ be such that $\\operatorname{Pre}(\\operatorname{Enf}(\\phi_1 \\land (\\phi_2[\\var\/\\varp]),\\mathcal{G}))(s_1)$ is valid.\n\t\n\tWe make a case distinction on whether $s_1 \\in S_{\\texttt{{REACH}}}$ holds.\n\tIf so, then there exists a state $q_1$ such that $(\\mathit{Reach} \\land \\phi_1 \\land (\\phi_2[\\var\/\\varp]))(s_1,q_1)$ is valid.\n\tIn particular, $\\phi_1(s_1)$ and $\\phi_2(q_1)$ are both valid.\n\tAssuming that $q_1 \\in S_j$ holds, both $\\psi_i \\implies \\phi_1$ and $\\psi_j \\implies \\phi_2$ are valid, as both $\\phi_1$ and $\\phi_2$ are equivalent to disjunctions of $\\psi$-formulas (which have pairwise disjoint sets of models).\n\tNow take any other state $s_2 \\in S_i$.\n\tAs $s_1 \\sim s_2$ and $\\mathit{Reach}(s_1,q_1)$ is valid, there exists a state $q_2 \\in S_j$ such that $\\mathit{Reach}(s_2,q_2)$ is valid.\n\tFurthermore, as $\\psi_i(s_2)$ and $\\psi_j(q_2)$ are both valid, so is $(\\mathit{Reach} \\land \\phi_1 \\land (\\phi_2[\\var\/\\varp]))(s_2,q_2)$.\n\tHence, $\\operatorname{Pre}(\\operatorname{Enf}(\\phi_1 \\land (\\phi_2[\\var\/\\varp]),\\mathcal{G}))(s_2)$ is valid.\n\t\n\tNow assume that $s_1 \\in S_{\\texttt{{SAFE}}}$.\n\tThen, for all states $q_1$ such that $\\mathit{Safe}(s_1,q_1)$ is valid, $(\\phi_1 \\land (\\phi_2[\\var\/\\varp]))(s_1,q_1)$ holds.\n\tWhenever this is the case, and $q_1 \\in S_j$ holds, it follows that $\\psi_j \\implies \\phi_2$ is valid.\n\n\tNow take any other state $s_2 \\in S_i$ and assume, for contradiction, that there exists a 
$q_2$ such that $\\mathit{Safe}(s_2,q_2)$ is valid, but not $(\\phi_1 \\land (\\phi_2[\\var\/\\varp]))(s_2,q_2)$.\n\tAssuming $q_2 \\in S_j$, we have that $\\psi_j \\land \\phi_2$ is unsatisfiable.\n\tAs $s_1 \\sim s_2$ holds, we find $q_1$ such that $\\mathit{Safe}(s_1,q_1) \\land \\psi_j(q_1)$ is valid.\n By the previous reasoning, this would imply that $\\psi_j \\implies \\phi_2$ is valid.\n\tThis is a contradiction as $\\psi_j$ is satisfiable.\n\t\\qed\n\\end{proof}\n\n\\section{Conclusion}\nOur work is a step towards the fully automated synthesis of software. \nIt targets symbolically represented reachability games which are expressive enough to model a variety of problems, from common game benchmarks to program synthesis problems. \nThe presented approach exploits causal information in the form of \\emph{subgoals}, which are parts of the game that the reachability player needs to pass through in order to win.\nHaving computed a subgoal, which can be done using Craig interpolation, the game is split along the subgoal and solved recursively.\nAt the same time, the algorithm infers a structured symbolic strategy for the winning player.\nThe evaluation of our prototype implementation \\textsc{CabPy} shows that our approach is practically applicable and scales much better than previously available tools on several benchmarks. \nWhile termination is only guaranteed for games with finite bisimulation quotient, the experiments demonstrate that several infinite games can be solved as well.\n\nThis work opens up several interesting questions for further research.\nOne concerns the quality of the returned strategies.\nDue to its compositional nature, at first sight it seems that our approach is not well-suited to handle global optimization criteria, such as reaching the goal in fewest possible steps.\nOn the other hand, the returned strategies often involve only a few key decisions and we believe that therefore the strategies are often very sparse, although this has to be further investigated.\nWe also plan to automatically extract deterministic strategies from the symbolic ones~\\cite{Bloem,Ehlers} we currently consider.\n\nAnother question regards the computation of subgoals. \nThe performance of our algorithm is highly influenced by which interpolant is returned by the solver.\nIn particular this affects the number of subgames that have to be solved, and how complex they are.\nWe believe that template-based interpolation \\cite{template_interpolation} could be a promising candidate to explore with the goal to compute good interpolants. \nThis could be combined with the possibility for the user to provide templates or expressive interpolants directly, thereby benefiting from the user's domain knowledge.\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\n\\section{Motivating Example} \\label{sec:motivation}\n\nConsider the scenario that an expensive painting is displayed in a large exhibition room of a museum.\nIt is secured with an alarm system that is controlled via a control panel on the opposite side of the room.\nA security guard is sleeping at the control panel and occasionally wakes up to check whether the alarm is still armed.\nTo steal the painting, a thief first needs to disable the alarm and then reach the painting before the alarm has been reactivated. 
We model this scenario as a two-player game between a safety player (the guard) and a reachability player (the thief) in the theory of linear arithmetic.\nThe moves of both players, their initial positions, and the goal condition are described by the formulas:\n\n\n\\iffalse\n\\begin{figure}[t]\n\t\\centering\n\t\\scalebox{0.4}{\n\t\t\\begin{tikzpicture}[node distance=2cm]\n\t\t\\node(mona) at (10,5){\\def\\svgwidth{1cm}\\input{monalisa_color.pdf_tex}};\n\t\t\\node(thief) at (0,0){\\def\\svgwidth{1.15cm}\\input{thief_color.pdf_tex}};\n\t\t\\node(guard) at (0,10){\\def\\svgwidth{2cm}\\input{guard2_color.pdf_tex}};\n\t\t\\draw[step=2.0,black,thin] (-0.5,-0.5) grid (10.5,10.5);\n\t\t\\node(guard) at (-1,0.0){$0$};\n\t\t\\node(guard) at (0,-1){$0$};\n\t\t\\node(guard) at (10.0,-1){$10$};\n\t\t\\node(guard) at (-1,10.0){$10$};\n\t\t\\end{tikzpicture}}\n\t\\caption{The Mona Lisa problem. The painting is secured with an alarm the thief has to disable first in order to remain undetected. The sleeping guard will occasionally wake up to check whether the alarm is still on.}\n\t\\label{exp_monalisa_pic}\n\\end{figure}\n\\fi\n\n\n\\begin{align*}\n\\mathit{Init} &\\equiv && \\lnot \\mathbf{r} \\land x = 0 \\land y = 0 \\land p = 0 \\land a = 1 \\land t = 0, &&&\\\\\n\\mathit{Guard} &\\equiv &&\\neg\\mathbf{r} \\land \\mathbf{r}' \\land x' = x \\land y' = y \\land p' = p &&&\\\\\n& &&\\land ((t' = t - 1 \\land a' = a)\\lor (t \\leq 0 \\land t' = 2)),&&&(\\text{sleep or wake up})\\\\\n\\mathit{Thief} &\\equiv && \\mathbf{r} \\land \\neg\\mathbf{r}' \\land t' = t \\\\\n\t&&&\\land x + 1 \\ge x' \\ge x - 1 \\land y + 1 \\ge y' \\ge y - 1&&&(\\text{move})\\\\\n& &&\\land (x' \\neq 0 \\lor y' \\neq 10 \\implies a' = a)&&&(\\text{alarm off})\\\\\n& &&\\land (x' \\neq 10 \\lor y' \\neq 5 \\lor a = 1 \\implies p' = p),&&&(\\text{steal})\\\\\n\\mathit{Goal} &\\equiv && \\lnot \\mathbf{r} \\land p = 1. &&&\n\\end{align*} \n\nThe thief's position in the room is modeled by two coordinates $x,y \\in \\mathbb{R}$ with initial value $(0,0)$, and with every transition the thief can move some bounded distance. \nNote that we use primed variables to represent the value of variables after taking a transition.\nThe control panel is located at $(0,10)$ and the painting at $(10,5)$. \nThe status of the alarm and the painting are described by two boolean variables $a,p \\in \\{0,1\\}$. \nThe guard wakes up every two time units, modeled by the variable $t \\in \\mathbb{R}$. \nThe variables $x,y$ are bounded to the interval $[0,10]$ and $t$ to $[0,2]$. \nThe boolean variable \\textbf{r} encodes who makes the next move. \nIn the presented configuration, the thief needs more time to move from the control panel to the painting than the guard will sleep. \nIt follows that there is a winning strategy for the guard, namely, to always reactivate the alarm upon waking up.\n\nAlthough it is intuitively fairly easy to come up with this strategy for the guard, it is surprisingly hard for game solving tools to find it. The main obstacle is the infinite state space of this game.\nOur approach for solving games represented in this logical way imitates \\emph{causal reasoning}: \nHumans observe that in order for the thief to steal the painting (i.e., the effect $p=1$), a transition must have been taken whose source state does not satisfy the pre-condition of (steal) while the target state does. \nPart of this cause is the condition $a=0$, i.e., the alarm is off. 
Recursively, in order for the effect $a=0$ to happen, a transition setting $a$ from $1$ to $0$ must have occurred, and so on. \n\nOur approach captures these cause-effect relationships through the notion of \\emph{necessary subgoals}, which are essential milestones that the reachability player has to transition through in order to achieve their goal.\nThe first necessary subgoal corresponding to the intuitive description above is\n$$C_1 = (\\mathit{Guard} \\lor \\mathit{Thief}) \\land p \\neq 1 \\land p' = 1.$$\nIn this case, it is easy to see that $C_1$ is also a \\emph{sufficient subgoal}, meaning that all successor states of $C_1$ are winning for the thief. Therefore, it is enough to solve the game with the modified objective to reach those predecessor states of $C_1$ from which the thief can \\emph{enforce} $C_1$ being the next move (even if it is not their turn). Doing so recursively produces the necessary subgoal\n$$C_2 = (\\mathit{Guard} \\lor \\mathit{Thief}) \\land a \\neq 0 \\land a' = 0,$$\nmeaning that some transition must have caused the effect that the alarm is disabled. However, $C_2$ is \\emph{not} sufficient, which can be seen by recursively solving the game spanning from successor states of $C_2$ to $C_1$. This computation has an important caveat: After passing through $C_2$, it may happen that $a$ is reset to $1$ at a later point (in this particular case, this constitutes precisely the winning strategy of the safety player), which means that there is no canonical way to slice the game along this subgoal into smaller parts. Hence the recursive call solves the game from $C_2$ to $C_1$ \\emph{subject to} the bold assumption that any move from $a = 0$ to $a' = 1$ is winning for the guard. This generally underapproximates the winning states of the thief. Remarkably, we show that this approximation is enough to build winning strategies for \\emph{both} players from their respective winning regions. In this case, it allows us to infer that moving through $C_2$ is always a losing move for the thief. However, at the same time, any play reaching $\\mathit{Goal}$ has to move through $C_2$. It follows that the thief loses the global game.\n\nWe evaluated our method on several configurations of this game, which we call \\emph{Mona Lisa}. The results in Section \\ref{sec:experiments} support our conjecture that the room size has little influence on the time our technique needs to solve the game.\n\n\\section{Case Studies}\n\\label{sec:experiments}\n\nIn this section we evaluate our approach on a number of case studies. \nOur prototype \\textsc{CabPy}{}\\footnote[2]{The source code of \\textsc{CabPy}{} and our experimental data are both available at \\url{https:\/\/github.com\/reactive-systems\/cabpy}. We provide a virtual machine image with \\textsc{CabPy}{} already installed for reproducing our evaluation~\\cite{VM}.} is written in Python and implements the game solving part of the presented algorithm. \nExtending it to return a symbolic strategy using the ideas outlined above is straightforward.\nWe compared our prototype with \\textsc{SimSynth} \\cite{FarzanK17}, the only other readily available tool for solving linear arithmetic games. \nThe evaluation was carried out with Ubuntu 20.04, a 4-core Intel\\textsuperscript{\\textregistered} Core\\texttrademark~i5 2.30GHz processor, and 8GB of memory. 
\\textsc{CabPy}{} uses the PySMT~\\cite{pysmt2015} library as an interface to the MathSAT5~\\cite{mathsat5} and Z3~\\cite{z3solver} SMT solvers.\nOn all benchmarks, the timeout was set to 10 minutes. In addition to the winner, we report the runtime and the number of subgames our algorithm visits. Both may vary with different SMT solvers or in different environments.\n\\subsection{Game of Nim}\n\nGame of Nim is a classic game from the literature \\cite{Bouton1901} and is played on a number of heaps of stones. Both players take turns choosing a single heap and removing at least one stone from it. We consider the version where the player that removes the last stone wins. Our results are shown in \\Cref{exp_nim}. In instances with three heaps or more we bounded the domains of the variables in the instance description, by specifying that no heap exceeds its initial size or goes below zero.\n\nFollowing the discussion in \\Cref{sec:termination}, we need to bound the domains to ensure the termination of our tool on these instances.\nRemarkably, bounding the variables was not necessary for instances with only two heaps, where our tool \\textsc{CabPy}{} scales to considerably larger instances than \\textsc{SimSynth}.\nWe did not add the same constraints to the input of \\textsc{SimSynth}{}, as for \\textsc{SimSynth}{} this resulted in longer runtimes rather than shorter.\nIn Game of Nim, there are no natural necessary subgoals that the safety player can locally control.\n\nThe results (see~\\Cref{exp_nim}) demonstrate that our approach is not completely dependent on finding the right interpolants and is in particular also competitive when the reachability player wins the game. We suspect that \\textsc{SimSynth}{} performs worse in these cases because the safety player has a large range of possible moves in most states, and inferring the win of the reachability player requires the tool to backtrack and try out all of them.\n\n\\begin{figure}[tbp]\n\t\\centering\n\t{\\def\\arraystretch{1.1}\\tabcolsep=5pt\n \\small\n\t\t\\begin{tabular}{|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{2}{c|}{\\textsc{CabPy}} & \\textsc{SimSynth} & \\\\\n\t\t\t\\hline\n\t\t\tHeaps & Subgames & Time(s) &Time(s) & Winner\\\\\n\t\t\t\\hline\n\t\t\t(4,4) & 19 & 1.50 & 10.44 & $\\texttt{{REACH}}$\\\\\n\t\t\t(4,5) & 23 & 1.92 & 12.74 & $\\texttt{{SAFE}}$\\\\\n\t\t\t(5,5) & 23 & 1.99 & 85.75 & $\\texttt{{REACH}}$\\\\\n\t\t\t(5,6) & 27 & 2.90 & 91.66 & $\\texttt{{SAFE}}$\\\\\n\t\t\t(6,6) & 28 & 3.04 & Timeout & $\\texttt{{REACH}}$\\\\\n\t\t\t(6,7) & 31 & 3.76 & Timeout & $\\texttt{{SAFE}}$\\\\\n\t\t\t(20,20) & 88 & 94.85 & Timeout & $\\texttt{{REACH}}$\\\\\n\t\t\t(20,21) & 94 & 113.04 & Timeout & $\\texttt{{SAFE}}$\\\\\n\t\t\t(30,30) & 128 & 364.13 & Timeout & $\\texttt{{REACH}}$\\\\\n\t\t\t(30,31) & 135 & 404.02 & Timeout & $\\texttt{{SAFE}}$\\\\\\hline\n\t\t\t(3,3,3)b & 23 & 13.63 & 2.85 & $\\texttt{{SAFE}}$\\\\\n\t\t\t(1,4,5)b & 32 & 7.00 & 289.85 & $\\texttt{{REACH}}$\\\\\n\t\t\t(4,4,4)b & 33 & 50.55 & 24.39 & $\\texttt{{SAFE}}$\\\\\n\t\t\t(2,4,6)b & 38 & 19.77 & Timeout & $\\texttt{{REACH}}$\\\\\n\t\t\t(5,5,5)b & 33 & 127.89 & 162.50 & $\\texttt{{SAFE}}$\\\\\n\t\t\t(3,5,6)b & 40 & 86.56 & Timeout & $\\texttt{{REACH}}$\\\\\\hline\n\t\t\t(2,2,2,2)b & 39 & 84.79 & 213.79 & $\\texttt{{REACH}}$\\\\\n\t\t\t(2,2,2,3)b & 41 & 102.01 & Timeout & $\\texttt{{SAFE}}$\\\\\n\t\t\t\\hline\n\t\\end{tabular}}\n\t\\caption{Experimental results for the Game of Nim. 
The notation $(h_1,\\ldots, h_n)$ denotes the instance played on $n$ heaps, each of which consists of $h_i$ stones. Instances marked with b indicate that the variable domains were explicitly bounded in the input for \\textsc{CabPy}{}.}\n\t\\label{exp_nim}\n\\end{figure}\n\n\n\\begin{figure}[tbp]\n\t\\centering\n\t{\\def\\arraystretch{1.1}\\tabcolsep=5pt\n \\small\n\t\t\\begin{tabular}{|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{2}{c|}{\\textsc{CabPy}} & \\textsc{SimSynth} & \\\\\n\t\t\t\\hline\n\t\t\t$r$ & Subgames & Time(s) &Time(s) & Winner\\\\\n\t\t\t\\hline\n\t\t\t10 & 10 & 0.57 & 3.93 & $\\texttt{{SAFE}}$\\\\\n\t\t\t20 & 20 & 1.23 & 20.48 & $\\texttt{{SAFE}}$\\\\\n\t\t\t40 & 40 & 3.42 & 121.96 & $\\texttt{{SAFE}}$\\\\\n\t\t\t60 & 60 & 7.36 & Timeout & $\\texttt{{SAFE}}$\\\\\n\t\t\t80 & 80 & 17.72 & Timeout & $\\texttt{{SAFE}}$\\\\\n\t\t\t100 & 100 & 26.36 & Timeout & $\\texttt{{SAFE}}$\\\\\n\t\t\t\\hline\n\t\\end{tabular}}\n\t\\caption{Experimental results for the Corridor game. The safety player controls the door between rooms $r-1$ and $r$.}\n\t\\label{exp_corridor}\n\\end{figure}\n\n\\subsection{Corridor}\n\nWe now consider an example that demonstrates the potential of our method in case the game structure contains natural bottlenecks. Consider a corridor of $100$ rooms arranged in sequence, i.e., each room $i$ with $0 \\leq i < 100$ is connected to room $i+1$ with a door. The objective of the reachability player is to reach room 100 and they are free to choose valid values from $\\mathbb{R}^2$ for the position in each room at every other turn. The safety player controls some door to a room $r \\leq 100$. Naturally, a winning strategy is to prevent the reachability player from passing that door, which is a natural bottleneck and necessary subgoal on the way to the last room.\n\n The experimental results are summarized in Figure \\ref{exp_corridor}. We evaluated several versions of this game, increasing the length from the start to the controlled door. The results confirm that our causal synthesis algorithm finds the trivial strategy of closing the door quickly. This is because Craig interpolation focuses the subgoals on the room number variable while ignoring the movement in the rooms in between, as can be seen by the number of considered subgames. \\textsc{SimSynth}, which tries to generalize a strategy obtained from a step-bounded game, struggles because the tool solves the games that happen between each of the doors before reaching the controlled one.\n\n\\subsection{Mona Lisa}\n\nThe game described in Section \\ref{sec:motivation} between a thief and a security guard is very well suited to further assess the strength and limitations of both our approach as well as of \\textsc{SimSynth}{}. We ran several experiments with this scenario, scaling the size of the room and the sleep time of the guard, as well as trying a scenario where the guard does not sleep at all. Scaling the size of the room makes it harder for \\textsc{SimSynth}{} to solve this game with a forward unrolling approach, while our approach extracts the necessary subgoals irrespective of the room size. However, scaling the guard's sleep time makes it harder to solve the subgame between the two necessary subgoals, while it only has a minor effect on the length of the unrolling needed to stabilize the play in a safe region, as done by \\textsc{SimSynth}.\n\n The results in Figure \\ref{exp_monalisa} support this conjecture. 
The size of the room has \\emph{almost no effect at all} on both the runtime of \\textsc{CabPy}{} and the number of considered subgames. However, as the results for a sleep value of 4 show, the employed combination of quantifier elimination and interpolation introduces some instability in the produced formulas. This means we may get different Craig interpolants and slice the game with more or less subgoals. Therefore, we see a lot of potential in optimizing the interplay between the employed tools for quantifier elimination and interpolation. The phenomenon of the runtime being sensitive to these small changes in values is also seen with \\textsc{SimSynth}, where a longer sleep time sometimes means a faster execution.\n\n\\begin{figure}[tbp]\n\t\\centering\n {\\def\\arraystretch{1.1}\\tabcolsep=5pt\n \\small\n\t\t\\begin{tabular}{|c|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t& & \\multicolumn{2}{c|}{\\textsc{CabPy}} & \\textsc{SimSynth} & \\\\\n\t\t\t\\hline\n\t\t\tSize & Sleep & Subgames & Time(s) & Time(s) & Winner\\\\\n\t\t\t\\hline\n\t\t\t$10\\times10$ & - & 7 & 0.61 & 4.79 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$20\\times20$ & - & 7 & 0.60 & 25.26 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$40\\times40$ & - & 7 & 0.61 & 157.62 & $\\texttt{{SAFE}}$\\\\\n\t\t\t\\hline\n\t\t\t$10\\times10$ & 1 & 10 & 4.22 & 20.31 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$20\\times20$ & 1 & 11 & 4.34 & 36.44 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$40\\times40$ & 1 & 11 & 4.65 & 226.14 & $\\texttt{{SAFE}}$\\\\\n\t\t\t\\hline\n\t\t\t$10\\times10$ & 2 & 13 & 5.88 & 7.40 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$20\\times20$ & 2 & 14 & 5.98 & 60.00 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$40\\times40$ & 2 & 13 & 5.92 & 270.48 & $\\texttt{{SAFE}}$\\\\\n\t\t\t\\hline\n\t\t\t$10\\times10$ & 3 & 18 & 26.58 & 13.94 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$20\\times20$ & 3 & 17 & 26.19 & 115.53 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$40\\times40$ & 3 & 18 & 27.85 & 290.12 & $\\texttt{{SAFE}}$\\\\\n\t\t\t\\hline\n\t\t\t$10\\times10$ & 4 & 30 & 175.27 & 13.96 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$20\\times20$ & 4 & 22 & 204.79 & 60.08 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$40\\times40$ & 4 & 27 & 123.95 & 319.47 & $\\texttt{{SAFE}}$\\\\\n\t\t\t\\hline\n\t\\end{tabular}}\n\t\\caption{Experimental results for the Mona Lisa game.}\n\t\\label{exp_monalisa}\n\\end{figure}\n\n\\subsection{Program Synthesis}\n\nLastly, we study two benchmarks that are directly related to program synthesis. \nThe first problem is to synthesize a controller for a thermostat by filling out an incomplete program, as described in \\cite{BeyeneCPR14}. A range of possible initial values of the room temperature $c$ is given, e.g., $20.8 \\leq c \\leq 23.5$, together with the temperature dynamics which depend on whether the heater is on (variable $o \\in \\mathbb{B}$).\nThe objective for $\\texttt{{SAFE}}$ is to control the value of $o$ in every round such that $c$ stays between $20$ and $25$. This is a common benchmark for program synthesis tools and both \\textsc{CabPy}{} and \\textsc{SimSynth}{} solve it quickly.\nThe other problem relates to Lamport's bakery algorithm\\cite{Lamport1974}. We consider two processes using this protocol to ensure mutually exclusive access to a shared resource. The game describes the task of synthesizing a scheduler that violates the mutual exclusion. This essentially is a model checking problem, and we study it to see how well the tools can infer a safety invariant that is out of control of the safety player. 
For our approach, this makes no difference, as both players may play through a subgoal and the framework is well suited to find a safety invariant. The forward unrolling approach of \\textsc{SimSynth}, however, seems to explore the whole state space before inferring safety, and fails to find an invariant before a timeout. \n\n\\begin{figure}[tbp]\n\t\\centering\n\t{\\def\\arraystretch{1.1}\\tabcolsep=5pt\n \\small\n\t\t\\begin{tabular}{|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{2}{c|}{\\textsc{CabPy}} & \\textsc{SimSynth} & \\\\\n\t\t\t\\hline\n\t\t\tName & Subgames & Time(s) &Time(s) & Winner\\\\\n\t\t\t\\hline\n\t\t\tThermostat & 6 & 0.44 & 0.39 & $\\texttt{{SAFE}}$ \\\\\n\t\t\tBakery & 46 & 18.25 & Timeout & $\\texttt{{SAFE}}$ \\\\\n\t\t\t\\hline\n\t\\end{tabular}}\n\t\\caption{Experimental results for program synthesis problems.}\n\t\\label{exp_synth}\n\\end{figure}\n\n\n\n\n\\section{Introduction}\nTwo-player games are a fundamental model in logic and verification due to their connection to a wide range of topics such as decision procedures, synthesis and control~\\cite{Harding05,Alur15,Alur16,BLOEM2012911,BLOEM20073,6225075}. \nAlgorithmic techniques for \\emph{finite-state} two-player games have been studied extensively for many acceptance conditions~\\cite{GraedelTW2002}.\nFor \\emph{infinite-state} games most problems are directly undecidable. \nHowever, infinite state spaces occur naturally in domains like software synthesis~\\cite{RyzhykCKLH09} and cyber-physical systems~\\cite{JessenRLD07}, and hence handling such games is of great interest. \nAn elegant classification of infinite-state games that can be algorithmically handled, depending on the acceptance condition of the game, was given in~\\cite{deAlfaroHM2001}.\nThe authors assume a symbolic encoding of the game in a very general form.\nMore recently, incomplete procedures for solving infinite-state two-player games specified using logical constraints were studied~\\cite{BeyeneCPR14,FarzanK17}.\nWhile~\\cite{BeyeneCPR14} is based on automated theorem-proving for Horn formulas and handles a wide class of acceptance conditions, the work in~\\cite{FarzanK17} focusses on reachability games specified in the theory of linear arithmetic, and uses sophisticated decision procedures for that theory.\n\nIn this paper, we present a novel technique for solving logically represented reachability games based on the notion of \\emph{subgoals}.\nA \\emph{necessary} subgoal is a transition predicate that is satisfied at least once on every play that reaches the overall goal.\nIt represents an intermediate target that the reachability player must reach in order to win. Subgoals open up game solving to the study of cause-effect relationships in the form of counterfactual reasoning~\\cite{counterfactual}: If a cause (the subgoal) had not occurred, then the effect (reaching the goal) would not have happened.\nThus for the safety player, a necessary subgoal provides a chance to win the game based on local information:\nIf they control all states satisfying the pre-condition of the subgoal, then any strategy that in these states picks a transition outside of the subgoal is winning. \nFinding such a necessary subgoal may let us conclude that the safety player wins without ever having to unroll the transition relation.\n\nOn the other hand, passing through a necessary subgoal is in general not enough for the reachability player to win. 
We call a subgoal \\emph{sufficient} if indeed the reachability player has a winning strategy from every state satisfying the post-condition of the subgoal.\nDual to the description in the preceding paragraph, sufficient subgoals provide a chance for the reachability player to win the global game as they must merely reach this intermediate target. The two properties differ in one key aspect: While necessity of a subgoal only considers the paths of the game arena, for sufficiency the game structure is crucial. \n\nWe show how Craig interpolants can be used to compute necessary subgoals, making our methods applicable to games represented by any logic that supports interpolation. In contrast, determining whether a subgoal is sufficient requires a partial solution of the given game. This motivates the following recursive approach. We slice the game along a necessary subgoal into two parts, the pre-game and the post-game.\nIn order to guarantee these games to be smaller, we solve the post-game under the assumption that the considered subgoal was bridged \\emph{for the last time}.\nWe conclude that the safety player wins the overall game if they can avoid all initial states of the post-game that are winning for the reachability player.\nOtherwise, the pre-game is solved subject to the winning condition given by the sufficient subgoal consisting of these states. \nThis approach does not only determine which player wins from each initial state, but also computes symbolically represented winning strategies with a causal structure. \nWinning safety player strategies induce necessary subgoals that the reachability player cannot pass, which constitutes a cause for their loss. Winning reachability player strategies represent a sequence of sufficient subgoals that will be passed, providing an explanation for the win.\n\nThe Python-based implementation \\textsc{CabPy}{} of our approach was used to compare its performance to \\textsc{SimSynth} \\cite{FarzanK17}, which is, to the best of our knowledge, the only other available tool for solving linear arithmetic reachability games. Our experiments demonstrate that our algorithm is competitive in many case studies. We can also confirm the expectation that our approach heavily benefits from qualitatively expressive Craig interpolants. It is noteworthy that like \\textsc{SimSynth}{} our approach is fully automated and does not require any input in the form of hints or templates. \nOur contributions are summarized as follows:\n\\begin{itemize}[topsep=0.5ex]\n\t\\item We introduce the concept of \\emph{necessary} and \\emph{sufficient subgoals} and show how Craig interpolation can be used to compute necessary subgoals (\\Cref{sec:subgoals}).\n\t\\item We describe an algorithm for solving logically represented two-player reachability games using these concepts.\n We also discuss how to compute representations of winning strategies in our approach (\\Cref{sec:gamesolving}).\n\t\\item We evaluate our approach experimentally through our Python-based tool \\textsc{CabPy}, demonstrating a competitive performance compared to the previously available tool \\textsc{SimSynth}{} on various case studies (\\Cref{sec:experiments}).\n\\end{itemize}\n\n\\textbf{Related Work. 
} \nThe problem of solving linear arithmetic games is addressed in~\\cite{FarzanK17} using an approach that relies on a dedicated decision procedure for quantified linear arithmetic formulas, together with a method to generalize safety strategies from truncated versions of the game that end after a prescribed number of rounds.\nOther approaches for solving infinite-state games include deductive methods that compute the winning regions of both players using proof rules~\\cite{BeyeneCPR14}, \npredicate abstraction where an abstract controlled predecessor operation is used on the abstract game representation~\\cite{WalkerR14}, \nand symbolic BDD-based exploration of the state space~\\cite{Edelkamp2002}. \nAdditional techniques are available for finite-state games, e.g., generalizing winning runs into a winning strategy for one of the players~\\cite{NarodytskaLBRW14}. \n\nOur notion of subgoal is related to the concept of landmarks as used in planning~\\cite{HoffmannPS11}. Landmarks are milestones that must be true on every successful plan, and they can be used to decompose a planning task into smaller sub-tasks. \nLandmarks have also been used in a game setting to prevent the opponent from reaching their goal using counter-planning~\\cite{PozancoEFB18}. \nWhenever a planning task is unsolvable, one method to find out why is checking hierarchical abstractions for solvability and finding the components causing the problem~\\cite{SreedharanSSK19}. \n\nCausality-based approaches have also been used for model checking of multi-threaded concurrent programs~\\cite{DBLP:conf\/concur\/KupriyanovF13,DBLP:conf\/cav\/KupriyanovF14}. \nIn our approach, we use Craig interpolation to compute the subgoals. \nInterpolation has already been used in similar contexts before, for example to extract winning strategies from game trees~\\cite{EenLNR15} or to compute new predicates to refine the game abstractions~\\cite{SlicingAbstractions}. \nIn ~\\cite{FarzanK17}, interpolation is used to synthesize concrete winning strategies from so called \\emph{winning strategy skeletons}, which describe a set of strategies of which at least one is winning.\n\n\n\n\\section{Preliminaries}\n\\label{sec:prelims}\nWe consider two-player reachability games defined by formulas in a given logic $\\cal L$.\nWe let $\\cal L(\\mathcal{V})$ be the $\\cal L$-formulas over a finite set of variables $\\mathcal{V}$, also called \\emph{state predicates} in the following.\nWe call $\\mathcal{V'} = \\{\\mathit{v'} \\mid \\mathit{v} \\in \\mathcal{V}\\}$ the set of \\emph{primed variables}, which are used to represent the value of variables after taking a transition.\nTransitions are expressed by formulas in the set $\\cal L(\\mathcal{V} \\cup \\mathcal{V'})$, called \\emph{transition predicates}.\nFor some formula $\\varphi \\in \\cal L(\\mathcal{V})$, we denote the substitution of all variables by their primed variant by $\\varphi[\\var\/\\varp]$. Similarly, we define $\\varphi[\\varp\/\\var]$.\n\nFor our algorithm we will require the satisfiability problem of $\\cal L$-formulas to be decidable and \\emph{Craig interpolants} \\cite{Craig1957} to exist for any two mutually unsatisfiable formulas.\nFormally, we assume there is a function $\\operatorname{Sat} : \\cal L(\\mathcal{V}) \\to \\mathbb{B}$ that checks the satisfiability of some formula $\\varphi \\in \\cal L(\\mathcal{V})$ and an unsatisfiability check $\\operatorname{Unsat} : \\cal L(\\mathcal{V}) \\rightarrow \\mathbb{B}$. 
\nFor interpolation, we assume that there is a function $\\operatorname{Interpolate} : \\cal L(\\mathcal{V}) \\times \\cal L(\\mathcal{V}) \\to \\cal L(\\mathcal{V})$ computing a \\emph{Craig interpolant} for mutually unsatisfiable formulas: If $\\varphi ,\\psi \\in \\cal L(\\mathcal{V})$ are such that $\\operatorname{Unsat}(\\varphi\\land\\psi)$ holds, then $\\psi \\implies \\operatorname{Interpolate}(\\varphi,\\psi)$ is valid, $\\operatorname{Interpolate}(\\varphi,\\psi)\\land \\varphi$ is unsatisfiable, and $\\operatorname{Interpolate}(\\varphi,\\psi)$ only contains variables shared by $\\varphi$ and $\\psi$.\n\nThese functions are provided by many modern \\emph{Satisfiability Modulo Theories} (SMT) solvers, in particular for the theories of linear integer arithmetic and linear real arithmetic, which we will use for all our examples. Note that interpolation is usually only supported for the quantifier-free fragments of these logics, while our algorithm will introduce existential quantifiers.\nTherefore, we resort to quantifier elimination wherever necessary, for which there are known procedures for both linear integer arithmetic and linear real arithmetic formulas~\\cite{presburger1929uber,Monniaux2008}.\n\nIn order to distinguish the two players, we will assume that a Boolean variable called $\\mathbf{r} \\in \\mathcal{V}$ exists, which holds exactly in the states controlled by the reachability player.\nFor all other variables $v \\in \\mathcal{V}$, we let $\\mathcal{D}(v)$ be the domain of $v$, and we define $\\mathcal{D} = \\bigcup \\{\\mathcal{D}(v) \\mid v \\in \\mathcal{V}\\}$.\nIn the remainder of the paper, we consider the variables $\\mathcal{V}$ and their domains to be fixed.\n\n\\begin{definition}[Reachability Game]\n\tA reachability game is defined by a tuple $\\G = \\langle \\init, \\safe, \\reach, \\goal \\rangle${}, where $\\mathit{Init} \\in \\cal L(\\mathcal{V})$ is the \\emph{initial condition}, $\\mathit{Safe} \\in \\cal L(\\mathcal{V} \\cup \\mathcal{V'})$ defines the transitions of player $\\texttt{{SAFE}}$, $\\mathit{Reach} \\in \\cal L(\\mathcal{V} \\cup \\mathcal{V'})$ defines the transitions of player $\\texttt{{REACH}}$ and $\\mathit{Goal} \\in \\cal L(\\mathcal{V})$ is the \\emph{goal condition}.\n\n We require the formulas $(\\mathit{Safe} \\implies \\neg \\mathbf{r})$ and $(\\mathit{Reach} \\implies \\mathbf{r})$ to be valid.\n\\end{definition}\n\nA \\emph{state} $s$ of $\\mathcal{G}$ is a valuation of the variables $\\mathcal{V}$, i.e., a function $s\\colon\\mathcal{V} \\to \\mathcal{D}$ that satisfies $s(v) \\in \\mathcal{D}(v)$ for all $v \\in \\mathcal{V}$.\nWe denote the set of states by $S$, and we let $S_\\texttt{{SAFE}}$ be the states $s$ such that $s(\\mathbf{r}) = \\texttt{false}$, and $S_\\texttt{{REACH}}$ be the states $s$ such that $s(\\mathbf{r}) = \\texttt{true}$. The variable $\\mathbf{r}$ determines whether \\texttt{{REACH}}{} or \\texttt{{SAFE}}{} makes the move out of the current state, and in particular $\\mathit{Safe} \\land \\mathit{Reach}$ is unsatisfiable. 
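\n\nTo make these definitions concrete, the following sketch encodes a small example game tuple $\\langle \\mathit{Init}, \\mathit{Safe}, \\mathit{Reach}, \\mathit{Goal} \\rangle$ with the PySMT library; the snippet is purely illustrative (it is not code from \\textsc{CabPy}{}), and the toy game and all identifiers are ad hoc choices for this example.\n
\\begin{verbatim}\n# Illustrative sketch: a reachability game over r (turn marker) and an\n# integer counter x, written as PySMT formulas over V and V'.\n
from pysmt.shortcuts import Symbol, And, Or, Not, Equals, Plus, GE, Int, is_sat\nfrom pysmt.typing import BOOL, INT\n\n
r, rp = Symbol("r", BOOL), Symbol("r_prime", BOOL)  # r marks REACH states\nx, xp = Symbol("x", INT), Symbol("x_prime", INT)\n\n
Init = And(Not(r), Equals(x, Int(0)))\n# SAFE moves only in states where r does not hold and hands the turn to REACH.\nSafe = And(Not(r), rp, Equals(xp, x))\n# REACH moves in states where r holds, hands the turn back, and may increment x.\nReach = And(r, Not(rp), Or(Equals(xp, Plus(x, Int(1))), Equals(xp, x)))\nGoal = GE(x, Int(3))\n\n
# Side conditions of the definition, checked via unsatisfiability:\nassert not is_sat(And(Safe, r))        # Safe  implies not r\nassert not is_sat(And(Reach, Not(r)))  # Reach implies r\n\\end{verbatim}\n
In this toy game the reachability player wins by incrementing $x$ on every turn; the point of the sketch is only the shape of the encoding, not the game itself.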
\n\nGiven a state predicate $\\varphi \\in \\cal L(\\mathcal{V})$, we denote by $\\varphi(s)$ the closed formula we get by replacing each occurrence of variable $v \\in \\mathcal{V}$ in $\\varphi$ by $s(v)$.\nSimilarly, given a transition predicate $\\tau \\in \\cal L(\\mathcal{V} \\cup \\mathcal{V'})$ and states $s,s'$, we let $\\tau(s,s')$ be the formula we obtain by replacing all occurrences of $v \\in \\mathcal{V}$ in $\\tau$ by $s(v)$, and all occurrences of $v' \\in \\mathcal{V'}$ in $\\tau$ by $s'(v)$. For replacing only $v \\in \\mathcal{V}$ by $s(v)$, we define $\\tau(s)\\in\\cal L(\\mathcal{V}')$. A \\emph{trap state} of $\\mathcal{G}$ is a state $s$ such that $(\\mathit{Safe} \\lor \\mathit{Reach})(s)\\in\\cal L(\\mathcal{V}')$ is unsatisfiable (i.e., $s$ has no outgoing transitions).\n\nA \\emph{play} of $\\mathcal{G}$ starting in state $s_0$ is a finite or infinite sequence of states $\\rho = s_0 s_1 s_2 \\ldots \\in \\S^+ \\cup \\S^\\omega$ such that for all $i < \\operatorname{len}(\\rho)$ either $\\mathit{Safe}(s_i,s_{i+1})$ or $\\mathit{Reach}(s_i,s_{i+1})$ is valid, and if $\\rho$ is a finite play, then $s_{\\operatorname{len}(\\rho)}$ is required to be a trap state.\nHere, $\\operatorname{len}(s_0\\ldots s_n) = n$ for finite plays, and $\\operatorname{len}(\\rho) = \\infty$ if $\\rho$ is an infinite play. The set of plays of some game $\\G = \\langle \\init, \\safe, \\reach, \\goal \\rangle${} is defined as $\\operatorname{Plays}(\\mathcal{G}) = \\{\\rho = s_0 s_1 s_2 \\ldots \\mid \\rho\\text{ is a play in } \\mathcal{G} \\text{ s.t. } \\mathit{Init}(s_0)\\text{ holds} \\}$.\n$\\texttt{{REACH}}$ \\emph{wins} some play $\\rho = s_0 s_1 \\ldots$ if the play reaches a goal state, i.e., if there exists some integer $0\\leq k \\leq\\operatorname{len}(\\rho)$ such that $\\mathit{Goal}(s_k)$ is valid.\nOtherwise, $\\texttt{{SAFE}}$ wins play $\\rho$.\nA \\emph{reachability strategy} $\\sigma_{\\mathit{R}}$ is a function $\\sigma_{\\mathit{R}} : \\S^*S_\\texttt{{REACH}} \\to \\S$ such that if $\\sigma_{\\mathit{R}}(\\omega s) =s'$ and $s$ is not a trap state, then $\\mathit{Reach}(s,s')$ is valid. \nWe say that a play $\\rho = s_0 s_1 s_2 \\ldots$ is \\emph{consistent} with $\\sigma_{\\mathit{R}}$ if for all $i$ such that $s_i(\\mathbf{r}) = \\texttt{true}$ we have $s_{i+1} = \\sigma_{\\mathit{R}}(s_0 \\ldots s_i)$.\nA reachability strategy $\\sigma_{\\mathit{R}}$ is \\emph{winning} from some state $s$ if $\\texttt{{REACH}}$ wins every play consistent with $\\sigma_{\\mathit{R}}$ starting in $s$. We define \\emph{safety strategies} $\\sigma_{\\mathit{S}}$ for $\\texttt{{SAFE}}$ analogously. We say that a player \\emph{wins in or from a state}~$s$ if they have a winning strategy from $s$. Lastly, $\\texttt{{REACH}}$ \\emph{wins the game} $\\mathcal{G}$ if they win from some initial state.\nOtherwise, $\\texttt{{SAFE}}$ wins.\n\nWe often project a transition predicate $T$ onto the source or target states of transitions satisfying $T$, which is taken care of by the formulas $\\operatorname{Pre}(\\mathit{T})=\\exists \\mathcal{V'}.\\:\\mathit{T}$ and $\\operatorname{Post}(\\mathit{T})=\\exists \\mathcal{V}.\\:\\mathit{T}$.\nThe notation $\\exists \\mathcal{V}$ (resp. 
$\\exists \\mathcal{V'}$) represents the existential quantification over all variables in the corresponding set.\nGiven $\\varphi \\in \\cal L(\\mathcal{V})$, we call the set of transitions in $\\mathcal{G}$ that move from states not satisfying $\\varphi$, to states satisfying $\\varphi$, the \\emph{instantiation} of $\\varphi$, formally:\n\\[\\operatorname{Instantiate}(\\varphi,\\mathcal{G})=(\\mathit{Safe} \\lor \\mathit{Reach}) \\land \\lnot\\varphi\\land\\varphi'.\\]\n\n\\section{Subgoals}\n\\label{sec:subgoals}\n\nWe formally define the notion of subgoals.\nLet $\\mathcal{G} = \\langle \\mathit{Init}, \\mathit{Safe}, \\mathit{Reach}, \\mathit{Goal} \\rangle$ be a fixed reachability game throughout this section, where we assume that $\\mathit{Init} \\land \\mathit{Goal}$ is unsatisfiable.\nWhenever this assumption is not satisfied in our algorithm, we will instead consider the game $\\mathcal{G}' = \\langle \\mathit{Init} \\land \\neg \\mathit{Goal}, \\mathit{Safe}, \\mathit{Reach}, \\mathit{Goal} \\rangle$ which does satisfy it.\nAs states in $\\mathit{Init} \\land \\mathit{Goal}$ are immediately winning for \\texttt{{REACH}}, this is not a real restriction.\n\n\\begin{definition}[Enforceable transitions]\n\tThe set of \\emph{enforceable transitions} relative to a transition predicate $T \\in \\cal L(\\mathcal{V} \\cup \\mathcal{V'})$ is defined by the formula\n\t\\[\\operatorname{Enf}(\\mathit{T},\\mathcal{G})=\\; (\\mathit{Safe} \\lor \\mathit{Reach}) \\land \\mathit{T} \\land \\lnot \\exists \\mathcal{V'}.\\:\\big(\\mathit{Safe}\\land\\lnot\\mathit{T}\\big).\\]\n\t\n\\end{definition}\n\nThe enforceable transitions operator serves a purpose similar to the \\emph{controlled predecessors} operator commonly known in the literature, which is often used in a backwards fixed point computation, called \\emph{attractor construction} \\cite{Thomas95}. For both operations, the idea is to determine controllability by \\texttt{{REACH}}{}. The main difference is that we do not consider the whole transition relation, but only a predetermined set of transitions and check from which predecessor states the post-condition of the set can be enforced by \\texttt{{REACH}}{}. These include all transitions in $T$ controlled by \\texttt{{REACH}}{} and additionally transitions in $T$ controlled by \\texttt{{SAFE}}{} such that \\emph{all other transitions} in the origin state of the transition also satisfy $T$. 
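\n\nFor concreteness, the four operators $\\operatorname{Pre}$, $\\operatorname{Post}$, $\\operatorname{Instantiate}$, and $\\operatorname{Enf}$ can be spelled out directly as formula manipulations over $\\mathcal{V} \\cup \\mathcal{V'}$. The following sketch (our own illustration, not code from \\textsc{CabPy}{}) does so with PySMT, where the argument \\texttt{phi\\_primed} stands for $\\varphi$ with all variables replaced by their primed copies, obtained for instance by formula substitution.\n
\\begin{verbatim}\n# Sketch: Pre, Post, Instantiate, and Enf as PySMT formula constructions.\nfrom pysmt.shortcuts import And, Or, Not, Exists\n\n
def pre(T, primed_vars):\n    return Exists(primed_vars, T)        # Pre(T)  = exists V'. T\n\ndef post(T, unprimed_vars):\n    return Exists(unprimed_vars, T)      # Post(T) = exists V. T\n\n
def instantiate(phi, phi_primed, safe, reach):\n    # (Safe or Reach) and (not phi) and phi'\n    return And(Or(safe, reach), Not(phi), phi_primed)\n\n
def enf(T, safe, reach, primed_vars):\n    # (Safe or Reach) and T and not(exists V'. (Safe and not T))\n    return And(Or(safe, reach), T,\n               Not(Exists(primed_vars, And(safe, Not(T)))))\n\\end{verbatim}\n
As noted in the preliminaries, the existential quantifiers introduced here are removed by quantifier elimination before interpolation is applied.\n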
The similarity with the controlled predecessor is exemplified by the following lemma:\n\\begin{lemma}\n \\label{lem:enf}\n Let $T$ be a transition predicate, and suppose that all states satisfying $\\operatorname{Post}(T)[\\varp\/\\var]$ are winning for \\texttt{{REACH}}{} in $\\mathcal{G}$.\n Then all states in $\\operatorname{Pre}(\\operatorname{Enf}(T,\\mathcal{G}))$ are winning for \\texttt{{REACH}}{} in $\\mathcal{G}$.\n\\end{lemma}\n\\begin{proof}\n Clearly, all states in $\\operatorname{Pre}(\\operatorname{Enf}(T,\\mathcal{G}))$ that are under the control of \\texttt{{REACH}}{} are winning for \\texttt{{REACH}}{}, as in any such state they have a transition satisfying $T$ (observe that $\\operatorname{Enf}(T,\\mathcal{G}) \\implies T$ is valid), which leads to a winning state by assumption.\n\n So let $s$ be a state satisfying $\\operatorname{Pre}(\\operatorname{Enf}(T,\\mathcal{G}))$ that is under the control of \\texttt{{SAFE}}{}.\n As $\\operatorname{Pre}(\\operatorname{Enf}(T,\\mathcal{G}))(s)$ is valid, $s$ has a transition that satisfies $T$ (in particular, $s$ is not a trap state).\n Furthermore, we know that there is no $s' \\in \\S$ such that $\\mathit{Safe}(s,s')\\land\\lnot\\mathit{T}(s,s')$ holds, and hence there is no transition satisfying $\\lnot\\mathit{T}$ from $s$. Since $\\operatorname{Post}(T)[\\varp\/\\var]$ is winning for \\texttt{{REACH}}{}, it follows that from $s$ player \\texttt{{SAFE}}{} cannot avoid playing into a winning state of \\texttt{{REACH}}{}.\n \\qed\n\\end{proof}\n\nWe now turn to a formal definition of \\emph{necessary subgoals}, which intuitively are sets of transitions that appear on every play that is winning for \\texttt{{REACH}}{}. \n\\begin{definition}[Necessary subgoal]\\label{necessary_subgoal}\n\tA \\emph{necessary subgoal} $C \\in \\cal L(\\mathcal{V} \\cup \\mathcal{V'})$ for~$\\mathcal{G}$ is a transition predicate such that for every play $\\rho = s_0 s_1 \\ldots$ of $\\mathcal{G}$ and $n \\in \\mathbb{N}$ such that $\\mathit{Goal}(s_{n})$ is valid, there exists some $k < n$ such that $C(s_k,s_{k+1})$ is valid.\n\\end{definition}\n\nNecessary subgoals provide a means by which winning safety player strategies can be identified, as formalized in the following lemma.\n\n\\begin{lemma}\\label{prop_safestrat}\n\tA safety strategy $\\sigma_{\\mathit{S}}$ is winning in $\\mathcal{G}$ if and only if there exists a necessary subgoal $\\mathit{C}$ for $\\mathcal{G}$ such that for all plays $\\rho = s_0 s_1 \\ldots$ of $\\mathcal{G}$ consistent with~$\\sigma_{\\mathit{S}}$ there is no $n \\in \\mathbb{N}$ such that $C(s_n,s_{n+1})$ holds. \n\\end{lemma}\n\\begin{proof}\n ``$\\implies$''. The transition predicate $\\mathit{Goal}[\\var\/\\varp]$ (i.e., transitions with endpoints satisfying $\\mathit{Goal}$) is clearly a necessary subgoal. If $\\sigma_{\\mathit{S}}$ is winning for \\texttt{{SAFE}}, then no play consistent with $\\sigma_{\\mathit{S}}$ contains a transition in this necessary subgoal. \\\\\n \\noindent ``$\\Longleftarrow$''. Let $C$ be a necessary subgoal such that no play consistent with $\\sigma_{\\mathit{S}}$ contains a transition of $C$. Then by \\Cref{necessary_subgoal} no play consistent with $\\sigma_{\\mathit{S}}$ contains a state satisfying $\\mathit{Goal}$. Hence $\\sigma_{\\mathit{S}}$ is a winning strategy for \\texttt{{SAFE}}.\n\t\\qed\n\\end{proof}\n\nOf course, the question remains how to compute non-trivial subgoals. 
Indeed, using $\\mathit{Goal}$ as outlined in the proof above provides no further benefit over a simple backwards exploration (see~\\Cref{rem:attractor} in the following section).\n\nIdeally, a subgoal should represent an interesting key decision to focus the strategy search.\nAs we show next, Craig interpolation allows us to extract partial causes for the mutual unsatisfiability of $\\mathit{Init}$ and $\\mathit{Goal}$ and can in this way provide necessary subgoals. \nRecall that a Craig interpolant $\\varphi$ between $\\mathit{Init}$ and $\\mathit{Goal}$ is a state predicate that is implied by $\\mathit{Goal}$, and unsatisfiable in conjunction with $\\mathit{Init}$. \nIn this sense, $\\varphi$ describes an observable \\emph{effect} that must occur if $\\texttt{{REACH}}{}$ wins, and the concrete transition that instantiates the interpolant \\emph{causes} this effect.\n\n\\begin{proposition}\\label{prop_necessary}\n\tLet $\\varphi$ be a Craig interpolant for $\\mathit{Init}$ and $\\mathit{Goal}$. Then the transition predicate $\\operatorname{Instantiate}(\\varphi,\\mathcal{G})$ is a necessary subgoal.\n\\end{proposition}\n\\begin{proof}\n As $\\varphi$ is an interpolant, it holds that $\\mathit{Goal} \\implies \\varphi$ is valid and $\\mathit{Init} \\land \\varphi$ is unsatisfiable.\n Consider any play $\\rho = s_0 s_1 \\ldots$ of $\\mathcal{G}$ such that $\\mathit{Goal}(s_n)$ is valid for some $n \\in \\mathbb{N}$.\n It follows that $\\lnot \\varphi(s_0)$ and $\\varphi(s_n)$ are both valid.\n Consequently, there is some $0 \\leq i < n$ such that $\\lnot \\varphi(s_i)$ and $\\varphi(s_{i+1})$ are both valid.\n As all pairs $(s_k,s_{k+1})$ satisfy either $\\mathit{Safe}$ or $\\mathit{Reach}$, it follows that $\\big(\\operatorname{Instantiate}(\\varphi,\\mathcal{G})\\big)(s_i,s_{i+1})$ is valid.\n Hence, $\\operatorname{Instantiate}(\\varphi,\\mathcal{G})$ is a necessary subgoal.\n\t\\qed\n\\end{proof}\n\nWhile avoiding a necessary subgoal is a winning strategy for \\texttt{{SAFE}}{}, reaching a necessary subgoal is in general not sufficient to guarantee a win for \\texttt{{REACH}}{}.\nThis is because there might be some transitions in the necessary subgoal that produce the desired effect described by the Craig interpolant, but that trap \\texttt{{REACH}}{} in a region of the state space where they cannot enforce some other necessary effect to reach the goal. \nFor the purpose of describing a set of transitions that is guaranteed to be winning for the reachability player, we introduce \\emph{sufficient subgoals}.\n\n \\begin{definition}[Sufficient subgoal]\n A transition predicate $\\mathit{F}\\in\\cal L(\\mathcal{V}\\cup\\mathcal{V'})$ is called a \\emph{sufficient subgoal} if $\\texttt{{REACH}}$ wins from every state satisfying $\\operatorname{Post}(\\mathit{F})[\\varp\/\\var]$.\n \\end{definition}\n\n\\begin{example}\n\tConsider the Mona Lisa game $\\mathcal{G}$ described in Section \\ref{sec:motivation}.\n\t\\[C_1 = (\\mathit{Guard} \\lor \\mathit{Thief}) \\land p \\neq 1 \\land p' = 1\\]\n\tqualifies as a sufficient subgoal, because $\\texttt{{REACH}}$ wins from every successor state, as all those states satisfy $\\mathit{Goal}$. \n\tAlso, every play reaching $\\mathit{Goal}$ eventually passes $C_1$, and hence $C_1$ is also necessary. 
On the other hand, \n\t\\[C_2 = (\\mathit{Guard} \\lor \\mathit{Thief}) \\land a \\neq 0 \\land a' = 0\\]\n\tis only a necessary subgoal in $\\mathcal{G}$, because $\\texttt{{SAFE}}$ wins from some (in fact all) states satisfying $\\operatorname{Post}(C_2)$.\n\t\n\\end{example}\n \nIf the set of transitions in the necessary subgoal $C$ that lead to winning states of \\texttt{{REACH}}{} is definable in $\\cal L$, then we call the transition predicate $F$ that defines it the \\emph{largest sufficient subgoal} included in $C$. \nIt is characterized by the properties (1) $F \\implies C$ is valid, and (2) if $F'$ is such that $F \\implies F'$ is valid, then either $F \\equiv F'$, or $F'$ is not a sufficient subgoal. Since $C$ is a necessary subgoal and $F$ is maximal with the properties above, \\texttt{{REACH}}{} needs to see a transition in $F$ eventually in order to win. This balance of necessity and sufficiency allows us to partition the game along $F$ into a game that happens after the subgoal and one that happens before.\n\n\\begin{proposition}\n\t\t\\label{lem:slicing}\n\t Let $C$ be a necessary subgoal, and $F$ be the largest sufficient subgoal included in $C$. Then \\texttt{{REACH}}{} wins from an initial state $s$ in $\\mathcal{G}$ if and only if \\texttt{{REACH}}{} wins from $s$ in the pre-game \n \\[\\mathcal{G}_{pre} = \\langle \\mathit{Init}, \\mathit{Safe} \\land \\neg F, \\mathit{Reach} \\land \\neg F, \\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G})) \\rangle.\\]\n\\end{proposition}\n\\begin{proof}\n ``$\\implies$''. Suppose that \\texttt{{REACH}}{} wins in $\\mathcal{G}$ from $s$ using strategy $\\sigma_R$. Assume for a contradiction that \\texttt{{SAFE}}{} wins in $\\mathcal{G}_{pre}$ from $s$ using strategy $\\sigma_S$. Consider strategy $\\sigma'_S$ such that $\\sigma_{\\mathit{S}}'(\\omega s') = \\sigma_{\\mathit{S}}(\\omega s')$ if $(\\mathit{Safe} \\land \\lnot F)(s')$ is satisfiable, and else $\\sigma_{\\mathit{S}}'(\\omega s') = \\sigma_{\\mathit{S}}''(\\omega s')$, where $\\sigma_{\\mathit{S}}''$ is an arbitrary safety player strategy in $\\mathcal{G}$. Let $\\rho = s_0s_1\\ldots$ be the (unique) play of $\\mathcal{G}$ consistent with both $\\sigma_R$ and $\\sigma'_S$, where $s_0 = s$. Since $\\sigma_R$ is winning in $\\mathcal{G}$ and $C$ is a necessary subgoal in $\\mathcal{G}$, there must exist some $m\\in\\mathbb{N}$ such that $C(s_m, s_{m+1})$ is valid. Let $m$ be the smallest such index. Since $F \\implies C$, we know for all $0 \\leq k < m$ that $\\lnot F (s_k,s_{k+1})$ holds. Hence, there is a play $\\rho' = s_0s_1\\ldots s_m \\ldots$ in $\\mathcal{G}_{pre}$ consistent with $\\sigma_S$. The state $s_{m+1}$ is winning for \\texttt{{REACH}}{} in $\\mathcal{G}$, as it is reached on a play consistent with the winning strategy $\\sigma_R$. Hence, we know that $F(s_m, s_{m+1})$ holds, because $F$ is the largest sufficient subgoal included in $C$.\n If $(\\mathit{Reach} \\land F)(s_m, s_{m+1})$ held, we would have that $\\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G}))(s_m)$ holds: a contradiction with $\\rho'$ being consistent with $\\sigma_S$, which we assumed to be winning in $\\mathcal{G}_{pre}$. It follows that $(\\mathit{Safe} \\land F)(s_m, s_{m+1})$ holds. We can conclude that $(\\mathit{Safe} \\land \\lnot F)(s_m)$ is unsatisfiable (i.e., $s_m$ is a trap state in $\\mathcal{G}_{pre}$), because in all other cases $\\texttt{{SAFE}}$ plays according to $\\sigma_{\\mathit{S}}$, which cannot choose a transition satisfying $F$. 
However, this implies that $\\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G})(s_m)$ holds, again a contradiction with $\\rho'$ being consistent with winning strategy $\\sigma_S$.\n\n\t\\noindent ``$\\Longleftarrow$''. If $\\texttt{{REACH}}$ wins in $\\mathcal{G}_{pre}$ they have a strategy $\\sigma_{\\mathit{R}}$ such that every play consistent with $\\sigma_{\\mathit{R}}$ reaches the set $\\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G}))$.\n As $F$ is a sufficient subgoal, the states $\\operatorname{Post}(F)$ are winning for \\texttt{{REACH}}{} by definition.\n It follows by~\\Cref{lem:enf} that all states satisfying $\\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G}))$ are winning in $\\mathcal{G}$.\n Combining $\\sigma_{R}$ with a strategy that wins in all these states yields a winning strategy for \\texttt{{REACH}}{} in $\\mathcal{G}$.\n \\qed\n\\end{proof}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdxje b/data_all_eng_slimpj/shuffled/split2/finalzzdxje new file mode 100644 index 0000000000000000000000000000000000000000..f07762b05c9c786454ca04cd88b1c071310aff35 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdxje @@ -0,0 +1,5 @@ +{"text":"\\section{\\label{}}\n\n\n\\section{Introduction}\n\\label{sec:Introduction}\n\nWe consider the problem of Bayesian inference from cosmological data, in the common scenario where we can generate synthetic data through forward simulations, but where the exact likelihood function is intractable. The generative process can be extremely general: it may be a noisy non-linear dynamical system involving an unrestricted number of latent variables. Likelihood-free inference methods, also known as approximate Bayesian computation \\citep[ABC, see][for reviews]{Marin2011,Lintusaari2017a} replace likelihood calculations with data model evaluations. In recent years, they have emerged as a viable alternative to likelihood-based techniques, when the simulator is sufficiently cheap. Applications in cosmology include measuring cosmological parameters from type Ia supernovae \\citep{Weyant2013} and weak lensing peak counts \\citep{Lin2015}, analysing the galaxy halo connection \\citep{Hahn2017}, inferring the photometric and size evolution of galaxies \\citep{Carassou2017}, measuring cosmological redshift distributions \\citep{Kacprzak2018}, estimating the ionising background from the Lyman-$\\alpha$ and Lyman-$\\beta$ forests \\citep{Davies2018}.\n\nIn its simplest form, ABC takes the form of likelihood-free rejection sampling and involves forward simulating data from parameters drawn from the prior, then accepting parameters when the discrepancy (by some measure) between simulated data and observed data is smaller than a user-specified threshold $\\varepsilon$. Such an approach tends to be extremely expensive since many simulated data sets get rejected, due to the lack of knowledge about the relation between the model parameters and the corresponding discrepancy. 
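\n\nAs a point of reference, the scheme just described can be summarised by the following schematic sketch; the simulator, prior sampler and discrepancy measure are problem-specific placeholders, and the function itself is ours rather than part of any particular package.\n
\\begin{verbatim}\n# Schematic likelihood-free rejection sampling (ABC).\nimport numpy as np\n\n
def abc_rejection(observed, prior_sample, simulator, discrepancy,\n                  epsilon, n_accept, rng=None):\n    rng = rng or np.random.default_rng()\n    accepted = []\n    while len(accepted) < n_accept:\n        theta = prior_sample(rng)      # draw parameters from the prior\n        d = simulator(theta, rng)      # forward simulate synthetic data\n        if discrepancy(d, observed) < epsilon:\n            accepted.append(theta)     # keep theta if the match is close enough\n    return np.array(accepted)\n\\end{verbatim}\n
The acceptance rate of this loop drops rapidly as the threshold $\\varepsilon$ is tightened, which is precisely why so many simulations are wasted.\n\n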
Variants of likelihood-free rejection sampling such as Population (or Sequential) Monte Carlo ABC \\citep[\\textsc{pmc}-\\textsc{abc} or \\textsc{smc}-\\textsc{abc}, see][for implementations aimed at astrophysical applications]{Akeret2015,Ishida2015,Jennings2017} improve upon this scheme by making the proposal adaptive; however, they do not use a probabilistic model for the relation between parameters and discrepancies (also known as a surrogate surface), so that their practical use usually necessitates $\\mathcal{O}(10^4-10^6)$ evaluations of the simulator. \n\nIn this paper, we address the challenging problem where the number of simulations is extremely limited, e.g. to a few thousand, rendering the use of sampling-based ABC methods impossible. To this end, we use Bayesian optimisation for likelihood-free inference \\citep[{\\textsc{bolfi}},][]{GutmannCorander2016}, an algorithm which combines probabilistic modelling of the discrepancy with optimisation to facilitate likelihood-free inference. Since it was introduced, {\\textsc{bolfi}} has been applied to various statistical problems in science, including inference of the Ricker model \\citep{GutmannCorander2016}, the Lotka-Volterra predator-prey model and population genetic models \\citep{Jaervenpaeae2018}, pathogen spread models \\citep{Lintusaari2017a}, atomistic structure models in materials \\citep{Todorovic2017}, and cognitive models in human-computer interaction \\citep{Kangasraeaesioe2017}. This work aims at introducing {\\textsc{bolfi}} in cosmological data analysis and at presenting its first cosmological application. We focus on computable parametric approximations to the true likelihood (also known as synthetic likelihoods), rendering the approach completely $\\varepsilon$-free. Recently, \\citet{Jaervenpaeae2017} introduced an acquisition function for Bayesian optimisation (the expected integrated variance), specifically tailored to perform efficient and accurate ABC. We extend their work by deriving the expression of the expected integrated variance in the parametric approach. This acquisition function measures the expected uncertainty in the estimate of the {\\textsc{bolfi}} posterior density, which is due to the limited number of simulations, over the future evaluation of the simulation model. The next simulation location is proposed so that this expected uncertainty is minimised. As a result, high-fidelity posterior inferences can be obtained with orders of magnitude fewer simulations than with likelihood-free rejection sampling. As examples, we demonstrate the use of {\\textsc{bolfi}} on the problems of summarising Gaussian signals and inferring cosmological parameters from the Joint Lightcurve Analysis (JLA) supernovae data set \\citep{Betoule2014}.\n\nThe structure of this paper is as follows. In section \\ref{sec:Inference of simulator-based statistical models}, we provide a review of the formalism for the inference of simulator-based statistical models. In section \\ref{sec:Regression and Optimisation for likelihood-free inference}, we describe {\\textsc{bolfi}} and discuss the regression and optimisation strategies. In particular, we provide the optimal acquisition rule for ABC in the parametric approach to likelihood approximation. Applications are given in section \\ref{sec:Applications}. The developed method is discussed in section \\ref{sec:Discussion} in the context of cosmological data analysis. Section \\ref{sec:Conclusion} concludes the paper. 
Mathematical details and descriptions of the case studies are presented in the appendices. \n\n\\section{Inference of simulator-based statistical models}\n\\label{sec:Inference of simulator-based statistical models}\n\n\\subsection{Simulator-based statistical models}\n\\label{ssec:Simulator-based statistical models}\n\n\\begin{figure}[h]\n\\begin{center}\n\\begin{tikzpicture}\n\t\\pgfdeclarelayer{background}\n\t\\pgfdeclarelayer{foreground}\n\t\\pgfsetlayers{background,main,foreground}\n\n\t\\tikzstyle{probability}=[draw, thick, text centered, rounded corners, minimum height=1em, minimum width=1em, fill=green!20]\n\t\\tikzstyle{variabl}=[draw, thick, text centered, circle, minimum height=1em, minimum width=1em]\n\n\t\\def0.7{0.7}\n\t\\def2.0{2.0}\n\n\n \\node (thetaprobaii) [probability]\n {$\\mathpzc{P}(\\boldsymbol{\\uptheta})$};\n \\path (thetaprobaii.south)+(0,-0.7) node (thetaii) [variabl]\n {$\\boldsymbol{\\uptheta}$};\n \\path (thetaii.south)+(0,-0.7) node (dprobaii) [probability]\n {$\\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\uptheta})$};\n \\path (dprobaii.south)+(0,-0.7) node (dii) [variabl]\n {$\\textbf{d}$};\n \n \n \\path (thetaprobaii.west)+(-2.0,0) node (thetaprobai) [probability]\n {$\\mathpzc{P}(\\boldsymbol{\\uptheta})$};\n \\path (thetaprobai.south)+(0,-0.7) node (thetai) [variabl]\n {$\\boldsymbol{\\uptheta}$};\n \\path (thetai.south)+(0,-0.7) node (di) [variabl]\n {$\\textbf{d}$};\n\n\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (thetaprobaii) -- (thetaii);\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (thetaii) -- (dprobaii);\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (dprobaii) -- (dii);\n\n\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (thetaprobai) -- (thetai);\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (thetai) -- (di);\t\n\n\n\\end{tikzpicture}\n\\end{center}\n\\caption{Hierarchical representation of the exact Bayesian problem for simulator-based statistical models of different complexities: a deterministic simulator (left), and a stochastic simulator (right).\\label{fig:BHM_exact}}\n\\end{figure}\n\nSimulator-based statistical models (also known as generative models) can be written in a hierarchical form (figure \\ref{fig:BHM_exact}), where $\\boldsymbol{\\uptheta}$ are the parameters of interest, and $\\textbf{d}$ the simulated data. $\\mathpzc{P}(\\boldsymbol{\\uptheta})$ is the prior probability distribution of $\\boldsymbol{\\uptheta}$ and $\\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\uptheta})$ is the sampling distribution of $\\textbf{d}$ given $\\boldsymbol{\\uptheta}$.\n\nThe simplest case (figure \\ref{fig:BHM_exact}, left) is when the simulator is a deterministic function of its input and does not use any random variable, i.e.\n\\begin{equation}\n\\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\uptheta}) = \\updelta_\\mathrm{D}(\\textbf{d} - \\boldsymbol{\\hat{\\mathrm{d}}}(\\boldsymbol{\\uptheta})) ,\\label{eq:Dirac_deterministic_simulator}\n\\end{equation}\nwhere $\\updelta_\\mathrm{D}$ is a Dirac delta distribution and $\\boldsymbol{\\hat{\\mathrm{d}}}$ a deterministic function of $\\boldsymbol{\\uptheta}$.\n\nIn a more generic scenario (figure \\ref{fig:BHM_exact}, right), the simulator is stochastic, in the sense that the data are drawn from an overall (but often unknown analytically) probability distribution function (pdf) $\\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\uptheta})$. Equation \\eqref{eq:Dirac_deterministic_simulator} does not hold in this case. 
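To make the distinction concrete, the two cases can be mimicked by the following minimal sketch (in Python; the functions and numbers are purely illustrative and are not part of any model used in this paper): a deterministic simulator always returns the same $\\textbf{d}$ for a given $\\boldsymbol{\\uptheta}$, whereas a stochastic simulator returns a new draw from $\\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\uptheta})$ at each call.
\\begin{verbatim}
import numpy as np

def deterministic_simulator(theta):
    # Deterministic data model: the same theta always yields the same d.
    return np.array([theta[0], theta[0] ** 2])

def stochastic_simulator(theta, rng):
    # Stochastic data model: d is a draw from P(d|theta); repeated calls
    # with the same theta scatter around the deterministic prediction.
    return deterministic_simulator(theta) + rng.normal(0.0, 0.1, size=2)

rng = np.random.default_rng(0)
theta = np.array([1.5])
print(deterministic_simulator(theta))    # identical at every call
print(stochastic_simulator(theta, rng))  # different at every call
print(stochastic_simulator(theta, rng))
\\end{verbatim}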
The scatter between different realisations of $\\textbf{d}$ given the same $\\boldsymbol{\\uptheta}$ can have various origins. In the simplest case, it only reflects the intrinsic uncertainty, which is of interest. More generically, additional nuisance parameters can be at play to produce the data $\\textbf{d}$ and will contribute to the uncertainty. This ``latent space'' can often be hundred-to-multi-million dimensional. Simulator-based cosmological models are typically of this kind: although the physical and observational processes simulated are repeatable features about which inferences can be made, the particular realisation of Fourier phases of the data is entirely noise-driven. Ideally, phase-dependent quantities should not contribute to any measure of match or mismatch between model and data.\n\n\\subsection{The exact Bayesian problem}\n\\label{ssec:The exact Bayesian problem}\n\nThe inference problem is to evaluate the probability of $\\boldsymbol{\\uptheta}$ given $\\textbf{d}$,\n\\begin{equation}\n\\mathpzc{P}(\\boldsymbol{\\uptheta}|\\textbf{d}) = \\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\uptheta}) \\, \\frac{\\mathpzc{P}(\\boldsymbol{\\uptheta})}{\\mathpzc{P}(\\textbf{d})},\n\\label{eq:exact_problem_Bayes}\n\\end{equation}\nfor the observed data $\\textbf{d}_\\mathrm{O}$, i.e.\n\\begin{equation}\n\\mathpzc{P}(\\boldsymbol{\\uptheta}|\\textbf{d})_\\mathrm{|\\textbf{d}=\\textbf{d}_O} = \\mathcal{L}(\\boldsymbol\\uptheta) \\, \\frac{\\mathpzc{P}(\\boldsymbol{\\uptheta})}{Z_\\textbf{d}} ,\n\\end{equation}\nwhere the exact likelihood for the problem is defined as\n\\begin{equation}\n\\mathcal{L}(\\boldsymbol\\uptheta) \\equiv \\mathpzc{P}(\\textbf{d}|\\boldsymbol\\uptheta)_\\mathrm{|\\textbf{d}=\\textbf{d}_O} .\n\\end{equation}\nIt is generally of unknown analytical form. 
The normalisation constant is $Z_\\textbf{d} \\equiv \\mathpzc{P}(\\textbf{d})_\\mathrm{|\\textbf{d}=\\textbf{d}_O}$, where $\\mathpzc{P}(\\textbf{d})$ is the marginal distribution of $\\textbf{d}$.\n\n\\subsection{Approximate Bayesian computation}\n\\label{ssec:Approximate Bayesian computation}\n\n\\begin{figure}[h]\n\\begin{center}\n\\begin{tikzpicture}\n\t\\pgfdeclarelayer{background}\n\t\\pgfdeclarelayer{foreground}\n\t\\pgfsetlayers{background,main,foreground}\n\n\t\\tikzstyle{probability}=[draw, thick, text centered, rounded corners, minimum height=1em, minimum width=1em, fill=green!20]\n\t\\tikzstyle{variabl}=[draw, thick, text centered, circle, minimum height=1em, minimum width=1em]\n\n\t\\def0.7{0.7}\n\n \\node (thetaproba) [probability]\n {$\\mathpzc{P}(\\boldsymbol{\\uptheta})$};\n \\path (thetaproba.south)+(0,-0.7) node (theta) [variabl]\n {$\\boldsymbol{\\uptheta}$};\n \\path (theta.south)+(0,-0.7) node (dproba) [probability]\n {$\\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\uptheta})$};\n \\path (dproba.south)+(0,-0.7) node (d) [variabl]\n {$\\textbf{d}$};\n \\path (d.south)+(0,-0.7) node (phiproba) [probability]\n {$\\mathpzc{P}(\\boldsymbol{\\Phi}|\\textbf{d})$};\n \\path (phiproba.south)+(0,-0.7) node (phi) [variabl]\n {$\\boldsymbol{\\Phi}$};\n\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (thetaproba) -- (theta);\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (theta) -- (dproba);\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (dproba) -- (d);\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (d) -- (phiproba);\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (phiproba) -- (phi);\n\n\\end{tikzpicture}\n\\end{center}\n\\caption{Hierarchical representation of the approximate Bayesian inference problem for simulator-based statistical models, with a compression of the raw data to a set of summary statistics.\\label{fig:BHM_approx}}\n\\end{figure}\n\nInference of simulator-based statistical models is usually based on a finite set of simulated data $\\textbf{d}_{\\boldsymbol{\\uptheta}}$, generated with parameter value $\\boldsymbol{\\uptheta}$, and on a measurement of the discrepancy between simulated data and observed data $\\textbf{d}_\\mathrm{O}$. This discrepancy is used to define an approximation to the exact likelihood $\\mathcal{L}(\\boldsymbol{\\uptheta})$. The approximation happens on multiple levels.\n\nOn a physical and statistical level, the approximation consists of compressing the full data $\\textbf{d}_\\mathrm{O}$ to a set of summary statistics $\\boldsymbol{\\Phi}_\\mathrm{O}$ before performing inference. Similarly, simulated data $\\textbf{d}_{\\boldsymbol{\\uptheta}}$ are compressed to simulated summary statistics $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$. This can be seen as adding a layer to the Bayesian hierarchical model (figure \\ref{fig:BHM_approx}). The purpose of this operation is to filter out the information in $\\textbf{d}$ that is not deemed relevant to the inference of $\\boldsymbol{\\uptheta}$, so as to reduce the dimensionality of the problem. Ideally, $\\boldsymbol{\\Phi}$ should be \\textit{sufficient} for parameters $\\boldsymbol{\\uptheta}$, i.e. formally $\\mathpzc{P}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}) = \\mathpzc{P}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi},\\textbf{d})$ or equivalently $\\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\Phi},\\boldsymbol{\\uptheta}) = \\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\Phi})$, which happens when the compression is lossless. 
However, sufficient summary statistics are generally unknown or even impossible to design; therefore the compression from $\\textbf{d}$ to $\\boldsymbol{\\Phi}$ will usually be lossy. The approximate inference problem to be solved is now $\\mathpzc{P}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}) = \\mathpzc{P}(\\boldsymbol{\\Phi}|\\boldsymbol{\\uptheta}) \\, \\dfrac{\\mathpzc{P}(\\boldsymbol{\\uptheta})}{\\mathpzc{P}(\\boldsymbol{\\Phi})}$ for the observed summary statistics $\\boldsymbol{\\Phi}_\\mathrm{O}$, i.e.\n\\begin{equation}\n\\mathpzc{P}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi})_\\mathrm{|\\boldsymbol{\\Phi}=\\boldsymbol{\\Phi}_O} = L(\\boldsymbol\\uptheta) \\, \\frac{\\mathpzc{P}(\\boldsymbol{\\uptheta})}{Z_{\\boldsymbol{\\Phi}}} .\n\\label{eq:approx_problem_Bayes}\n\\end{equation}\nIn other words, $\\mathcal{L}(\\boldsymbol{\\uptheta})$ is replaced by\n\\begin{equation}\nL(\\boldsymbol{\\uptheta}) \\equiv \\mathpzc{P}(\\boldsymbol\\Phi|\\boldsymbol\\uptheta)_\\mathrm{|\\boldsymbol{\\Phi}=\\boldsymbol{\\Phi}_O} ,\n\\label{eq:L_theta}\n\\end{equation}\nand $Z_\\textbf{d}$ by $Z_{\\boldsymbol{\\Phi}} \\equiv \\mathpzc{P}(\\boldsymbol{\\Phi})_{|\\boldsymbol{\\Phi}=\\boldsymbol{\\Phi}_\\mathrm{O}}$. Inference of model \\ref{fig:BHM_approx} gives\n\\begin{equation}\n\\mathpzc{P}(\\boldsymbol\\uptheta, \\textbf{d} | \\boldsymbol\\Phi) \\propto \\mathpzc{P}(\\boldsymbol\\Phi|\\textbf{d}) \\, \\mathpzc{P}(\\textbf{d}|\\boldsymbol\\uptheta) \\, \\mathpzc{P}(\\boldsymbol\\uptheta),\n\\label{eq:BHM_approx_expansion}\n\\end{equation}\nwith, after marginalisation over $\\textbf{d}$,\n\\begin{equation}\n\\mathpzc{P}(\\boldsymbol\\uptheta | \\boldsymbol\\Phi) = \\int \\mathpzc{P}(\\boldsymbol\\uptheta, \\textbf{d} | \\boldsymbol\\Phi) \\, \\mathrm{d}\\textbf{d} .\n\\label{eq:BHM_approx_marginalisation}\n\\end{equation}\nTherefore, the approximate likelihood $L(\\boldsymbol{\\uptheta})$ must satisfy\n\\begin{equation}\nL(\\boldsymbol{\\uptheta}) \\propto \\int \\mathpzc{P}(\\boldsymbol\\Phi|\\textbf{d})_\\mathrm{|\\boldsymbol{\\Phi}=\\boldsymbol{\\Phi}_O} \\, \\mathpzc{P}(\\textbf{d}|\\boldsymbol\\uptheta) \\, \\mathrm{d}\\textbf{d} .\n\\label{eq:BHM_approx_likelihood}\n\\end{equation}\nIn many cases, the compression from $\\textbf{d}$ to $\\boldsymbol{\\Phi}$ is deterministic, i.e.\n\\begin{equation}\n\\mathpzc{P}(\\boldsymbol{\\Phi}|\\textbf{d}) = \\updelta_\\mathrm{D}(\\boldsymbol{\\Phi} - \\boldsymbol{\\hat{\\Phi}}(\\textbf{d})) ,\n\\label{eq:Dirac_compression}\n\\end{equation}\nwhich simplifies the integral over $\\textbf{d}$ in equations \\eqref{eq:BHM_approx_marginalisation} and \\eqref{eq:BHM_approx_likelihood}.\n\nOn a practical level, $L(\\boldsymbol{\\uptheta})$ is still of unknown analytical form (which is a property of $\\mathpzc{P}(\\boldsymbol\\Phi|\\boldsymbol\\uptheta)$ inherited from $\\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\uptheta})$ in model \\ref{fig:BHM_approx}). Therefore, it has to be approximated using the simulator. We denote by $\\widehat{L}^N(\\boldsymbol{\\uptheta})$ an estimate of $L(\\boldsymbol{\\uptheta})$ computed using $N$ realisations of the simulator. 
The limiting approximation, in the case where infinite computer resources were available, is denoted by $\\widetilde{L}(\\boldsymbol{\\uptheta})$, such that\n\\begin{equation}\n\\widehat{L}^N(\\boldsymbol{\\uptheta}) \\xrightarrow[N \\rightarrow \\infty]{} \\widetilde{L}(\\boldsymbol{\\uptheta}) .\n\\end{equation}\nNote that $\\widetilde{L}(\\boldsymbol{\\uptheta})$ can be different from $L(\\boldsymbol{\\uptheta})$, depending on the assumptions made to construct $\\widehat{L}^N(\\boldsymbol{\\uptheta})$. These are discussed in section \\ref{ssec:Computable approximations of the likelihood}.\n\n\\subsection{Computable approximations of the likelihood}\n\\label{ssec:Computable approximations of the likelihood}\n\n\\subsubsection{Deterministic simulators}\n\\label{sssec:Deterministic simulators}\n\nThe simplest possible case is when the simulator does not use any random variable, i.e. $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$ is an entirely deterministic function of $\\boldsymbol{\\uptheta}$ (see figure \\ref{fig:BHM_exact}, left). Equivalently, all the conditional probabilities appearing in equation \\eqref{eq:BHM_approx_expansion} reduce to Dirac delta distributions given by equations \\eqref{eq:Dirac_deterministic_simulator} and \\eqref{eq:Dirac_compression}. In this case, one can directly use the approximate likelihood given by equation \\eqref{eq:L_theta}, complemented by an assumption on the functional shape of $\\mathpzc{P}(\\boldsymbol{\\Phi}|\\boldsymbol{\\uptheta})$.\n\n\\subsubsection{Parametric approximations and the synthetic likelihood}\n\\label{sssec:Parametric approximations and the synthetic likelihood}\n\nWhen the simulator is not deterministic, the pdf $\\mathpzc{P}(\\boldsymbol{\\Phi}|\\boldsymbol{\\uptheta})$ is unknown analytically. Nonetheless, in some situations, it may be reasonably assumed to follow specific parametric forms.\n\nFor example, if $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$ is obtained through averaging a sufficient number of independent and identically distributed variables contained in $\\textbf{d}$, the central limit theorem suggests that a Gaussian distribution is appropriate, i.e. $\\widetilde{L}(\\boldsymbol{\\uptheta}) = \\exp\\left[\\tilde{\\ell}(\\boldsymbol{\\uptheta})\\right]$ with \n\\begin{equation}\n-2 \\tilde{\\ell}(\\boldsymbol{\\uptheta}) \\equiv \\log \\left| 2\\pi \\boldsymbol{\\Sigma}_{\\boldsymbol{\\uptheta}} \\right| + (\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\upmu}_{\\boldsymbol{\\uptheta}})^\\intercal \\boldsymbol{\\Sigma}_{\\boldsymbol{\\uptheta}}^{-1} (\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\upmu}_{\\boldsymbol{\\uptheta}}),\n\\end{equation}\nwhere the mean and covariance matrix,\n\\begin{equation}\n\\boldsymbol{\\upmu}_{\\boldsymbol{\\uptheta}} \\equiv \\mathrm{E}\\left[ \\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}} \\right] \\enskip \\mathrm{and} \\enskip \\boldsymbol{\\Sigma}_{\\boldsymbol{\\uptheta}} \\equiv \\mathrm{E}\\left[ (\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}-\\boldsymbol{\\upmu}_{\\boldsymbol{\\uptheta}}) (\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}-\\boldsymbol{\\upmu}_{\\boldsymbol{\\uptheta}})^\\intercal \\right],\n\\end{equation}\ncan depend on $\\boldsymbol{\\uptheta}$. This is an approximation of $L(\\boldsymbol{\\uptheta})$, unless the summary statistics $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$ are indeed Gaussian-distributed. 
$\\boldsymbol{\\upmu}_{\\boldsymbol{\\uptheta}}$ and $\\boldsymbol{\\Sigma}_{\\boldsymbol{\\uptheta}}$ are generally unknown, but can be estimated using the simulator: given a set of $N$ simulations $\\lbrace \\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}^{(i)} \\rbrace$, drawn independently from $\\mathpzc{P}(\\boldsymbol{\\Phi}|\\boldsymbol{\\uptheta})$, one can define\n\\begin{equation}\n\\boldsymbol{\\hat{\\upmu}}_{\\boldsymbol{\\uptheta}} \\equiv \\mathrm{E}^N\\left[ \\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}} \\right] \\enskip \\mathrm{and} \\enskip \\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}} \\equiv \\mathrm{E}^N\\left[ (\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}-\\boldsymbol{\\hat{\\upmu}}_{\\boldsymbol{\\uptheta}}) (\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}-\\boldsymbol{\\hat{\\upmu}}_{\\boldsymbol{\\uptheta}})^\\intercal \\right],\n\\label{eq:mean_covariance_empirical}\n\\end{equation}\nwhere $\\mathrm{E}^N$ stands for the empirical average over the set of simulations. A computable approximation of the likelihood is therefore $\\widehat{L}^N(\\boldsymbol{\\uptheta}) = \\exp\\left[ \\hat{\\ell}^N(\\boldsymbol{\\uptheta}) \\right]$, where\n\\begin{equation}\n-2 \\hat{\\ell}^N(\\boldsymbol{\\uptheta}) \\equiv \\log \\left| 2\\pi \\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}} \\right| + (\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\hat{\\upmu}}_{\\boldsymbol{\\uptheta}})^\\intercal \\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}^{-1} (\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\hat{\\upmu}}_{\\boldsymbol{\\uptheta}}).\n\\label{eq:synthetic_likelihood}\n\\end{equation}\nDue to the approximation of the expectation $\\mathrm{E}$ with an empirical average $\\mathrm{E}^N$, both $\\boldsymbol{\\hat{\\upmu}}_{\\boldsymbol{\\uptheta}}$ and $\\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}$ become random objects. The approximation of the likelihood $\\widehat{L}^N(\\boldsymbol{\\uptheta})$ is therefore a random function with some intrinsic uncertainty itself, and its computation is a stochastic process. This is further discussed using a simple example in section \\ref{ssec:Summarising Gaussian signals}.\n\nThe approximation given in equation \\eqref{eq:synthetic_likelihood}, known as the synthetic likelihood \\citep{Wood2010,Price2017}, has already been applied successfully to perform approximate inference in several scientific fields. However, as pointed out by \\citet{SellentinHeavens2016}, for inference from Gaussian-distributed summaries $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$ with an estimated covariance matrix $\\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}$, a different parametric form, namely a multivariate $t$-distribution, should rather be used. The investigation of a synthetic $t$-likelihood is left to future work.\n\nIn section \\ref{ssec:Summarising Gaussian signals} and appendix \\ref{apx:Summarising Gaussian signals}, we extend previous work on the Gaussian synthetic likelihood and introduce a Gamma synthetic likelihood for the case where the $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$ are (or can be assumed to be) Gamma-distributed. 
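For reference, the Gaussian synthetic likelihood of equations \\eqref{eq:mean_covariance_empirical} and \\eqref{eq:synthetic_likelihood} translates directly into a few lines of numpy. The sketch below is indicative only and assumes a user-supplied \\texttt{simulator} function returning a vector of summary statistics $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$ for a given $\\boldsymbol{\\uptheta}$.
\\begin{verbatim}
import numpy as np

def gaussian_synthetic_loglike(theta, phi_obs, simulator, N, rng):
    # Draw N simulated summary statistics Phi_theta^(i) at parameter theta.
    phi_sim = np.array([simulator(theta, rng) for _ in range(N)])
    # Empirical mean and covariance; bias=True matches the empirical
    # average E^N (division by N) used in the text.
    mu_hat = phi_sim.mean(axis=0)
    Sigma_hat = np.atleast_2d(np.cov(phi_sim, rowvar=False, bias=True))
    # Gaussian synthetic log-likelihood l_hat^N(theta).
    diff = phi_obs - mu_hat
    sign, logdet = np.linalg.slogdet(2.0 * np.pi * Sigma_hat)
    return -0.5 * (logdet + diff @ np.linalg.solve(Sigma_hat, diff))
\\end{verbatim}
The value returned is $\\hat{\\ell}^N(\\boldsymbol{\\uptheta})$; as emphasised above, repeated calls at the same $\\boldsymbol{\\uptheta}$ return different values, since the estimator is itself a stochastic process.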
\n\n\\subsubsection{Non-parametric approximations and likelihood-free rejection sampling}\n\\label{sssec:Non-parametric approximations and likelihood-free rejection sampling}\n\nAn alternative to assuming a parametric form for $L(\\boldsymbol{\\uptheta})$ is to replace it by a kernel density estimate of the distribution of a discrepancy between simulated and observed summary statistics, i.e.\n\\begin{equation}\n\\widetilde{L}(\\boldsymbol{\\uptheta}) \\equiv \\mathrm{E}\\left[ \\kappa(\\Delta_{\\boldsymbol{\\uptheta}}) \\right],\n\\end{equation}\nwhere $\\Delta_{\\boldsymbol{\\uptheta}}$ is a non-negative function of $\\boldsymbol{\\Phi}_\\mathrm{O}$ and $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$ (usually of $\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$) which can also possibly depend on $\\boldsymbol{\\uptheta}$ and any variable used internally by the simulator, and the kernel $\\kappa$ is a non-negative, univariate function independent of $\\boldsymbol{\\uptheta}$ (usually with a maximum at zero). A computable approximation of the likelihood is then given by\n\\begin{equation}\n\\widehat{L}^N(\\boldsymbol{\\uptheta}) \\equiv \\mathrm{E}^N\\left[ \\kappa(\\Delta_{\\boldsymbol{\\uptheta}}) \\right] .\n\\end{equation}\n\nFor likelihood-free inference, $\\kappa$ is often chosen as the uniform kernel on the interval $\\left[ 0, \\varepsilon \\right)$, i.e. $\\kappa(u) \\propto \\chi_{\\left[ 0, \\varepsilon \\right)}(u)$, where $\\varepsilon$ is called the threshold and the indicator function $\\chi_{\\left[ 0, \\varepsilon \\right)}$ equals one if $u \\in \\left[ 0, \\varepsilon \\right)$ and zero otherwise. This yields\n\\begin{equation}\n\\widetilde{L}(\\boldsymbol{\\uptheta}) \\propto \\mathpzc{P}(\\Delta_{\\boldsymbol{\\uptheta}} \\leq \\varepsilon) \\quad \\mathrm{and} \\quad \\widehat{L}^N(\\boldsymbol{\\uptheta}) \\propto \\mathpzc{P}^N(\\Delta_{\\boldsymbol{\\uptheta}} \\leq \\varepsilon),\n\\label{eq:approximate_likelihood_acceptance}\n\\end{equation}\nwhere $\\mathpzc{P}^N(\\Delta_{\\boldsymbol{\\uptheta}} \\leq \\varepsilon)$ is the empirical probability that the discrepancy is below the threshold. $\\widehat{L}^N(\\boldsymbol{\\uptheta})$ can be straightforwardly evaluated by running simulations, computing $\\Delta_{\\boldsymbol{\\uptheta}}$ and using $\\Delta_{\\boldsymbol{\\uptheta}} \\leq \\varepsilon$ as a criterion for acceptance or rejection of proposed samples. Such an approach is often simply (or mistakenly) referred to as approximate Bayesian computation (ABC) in the astrophysics literature, although the more appropriate and explicit denomination is likelihood-free rejection sampling \\citep[see e.g.][]{Marin2011}.\n\nIt is interesting to note that the parametric approximate likelihood approach of section \\ref{sssec:Parametric approximations and the synthetic likelihood} can be embedded into the non-parametric approach. Indeed, $\\Delta_{\\boldsymbol{\\uptheta}}$ can be defined as\n\\begin{equation}\n\\Delta^{\\textbf{C}_{\\boldsymbol{\\uptheta}}}_{\\boldsymbol{\\uptheta}} \\equiv \\log|2\\pi \\textbf{C}_{\\boldsymbol{\\uptheta}}| + (\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}})^\\intercal \\textbf{C}_{\\boldsymbol{\\uptheta}}^{-1} (\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}})\n\\end{equation}\nfor some positive semidefinite matrix $\\textbf{C}_{\\boldsymbol{\\uptheta}}$. 
The second term is the square of the Mahalanobis distance, which includes the Euclidean distance as a special case, when $\\textbf{C}_{\\boldsymbol{\\uptheta}}$ is the identity matrix. Using an exponential kernel $\\kappa(u) = \\exp(-u\/2)$ and $\\textbf{C}_{\\boldsymbol{\\uptheta}} = \\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}$ gives $\\widetilde{L}(\\boldsymbol{\\uptheta}) = \\mathrm{E}\\left[ \\kappa(\\Delta_{\\boldsymbol{\\uptheta}}^{\\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}}) \\right]$ and $\\widehat{L}^N(\\boldsymbol{\\uptheta}) = \\mathrm{E}^N\\left[ \\kappa(\\Delta_{\\boldsymbol{\\uptheta}}^{\\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}}) \\right]$ with\n\\begin{eqnarray}\n-2 \\log\\left[\\kappa(\\Delta_{\\boldsymbol{\\uptheta}}^{\\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}}) \\right] & = & \\log \\left| 2\\pi \\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}} \\right| \\\\\n& & + (\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}})^\\intercal \\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}^{-1} (\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}), \\nonumber\n\\end{eqnarray}\nthe form of which is similar to equation \\eqref{eq:synthetic_likelihood}. In fact, \\citet[][proposition 1]{GutmannCorander2016} show that the synthetic likelihood satisfies\n\\begin{eqnarray}\n-2\\tilde{\\ell}(\\boldsymbol{\\uptheta}) & = & J(\\boldsymbol{\\uptheta}) + \\mathrm{constant}, \\quad \\mathrm{and}\\label{eq:l_J_proposition_1}\\\\\n-2\\hat{\\ell}^N(\\boldsymbol{\\uptheta}) & = & \\widehat{J}^N(\\boldsymbol{\\uptheta}) + \\mathrm{constant},\\label{eq:l_J_proposition_2}\n\\end{eqnarray}\nwhere \n\\begin{equation}\nJ(\\boldsymbol{\\uptheta}) \\equiv \\mathrm{E}\\left[ \\Delta_{\\boldsymbol{\\uptheta}}^{\\textbf{C}_{\\boldsymbol{\\uptheta}}} \\right]\n\\label{eq:def_J}\n\\end{equation}\nand\n\\begin{equation}\n\\widehat{J}^N(\\boldsymbol{\\uptheta}) \\equiv \\mathrm{E}^N\\left[ \\Delta_{\\boldsymbol{\\uptheta}}^{\\textbf{C}_{\\boldsymbol{\\uptheta}}} \\right]\n\\label{eq:def_J_N}\n\\end{equation}\nare respectively the expectation and the empirical average of the discrepancy $\\Delta_{\\boldsymbol{\\uptheta}}^{\\textbf{C}_{\\boldsymbol{\\uptheta}}}$, for $\\textbf{C}_{\\boldsymbol{\\uptheta}}= \\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}$.\n\n\\section{Regression and Optimisation for likelihood-free inference}\n\\label{sec:Regression and Optimisation for likelihood-free inference}\n\n\\subsection{Computational difficulties with likelihood-free rejection sampling}\n\\label{ssec:Computational difficulties with likelihood-free rejection sampling}\n\nWe have seen in section \\ref{ssec:Computable approximations of the likelihood} that computable approximations $\\widehat{L}^N(\\boldsymbol{\\uptheta})$ of the likelihood $L(\\boldsymbol{\\uptheta})$ are stochastic processes, due to the use of simulations to approximate intractable expectations. In the most popular ABC approach, i.e. likelihood-free rejection sampling (see section \\ref{sssec:Non-parametric approximations and likelihood-free rejection sampling}), the expectations are approximated by empirical probabilities that the discrepancy is below the threshold $\\varepsilon$. 
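For concreteness, likelihood-free rejection sampling with the uniform kernel of equation \\eqref{eq:approximate_likelihood_acceptance} can be sketched in a few lines; \\texttt{prior\\_sample}, \\texttt{simulator} and \\texttt{discrepancy} are hypothetical user-supplied functions, and the sketch is only meant to fix ideas.
\\begin{verbatim}
import numpy as np

def rejection_abc(phi_obs, prior_sample, simulator, discrepancy,
                  epsilon, n_accept, rng):
    # Likelihood-free rejection sampling with a uniform kernel:
    # keep theta whenever the discrepancy is below the threshold epsilon.
    accepted, n_simulations = [], 0
    while len(accepted) < n_accept:
        theta = prior_sample(rng)        # draw from the prior P(theta)
        phi_sim = simulator(theta, rng)  # simulated summary statistics
        n_simulations += 1
        if discrepancy(phi_obs, phi_sim) < epsilon:
            accepted.append(theta)
    return np.array(accepted), n_simulations
\\end{verbatim}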
While this approach allows inference of simulator-based statistical models with minimal assumptions, it suffers from several limitations that can make its use impossible in practice.\n\\begin{enumerate}\n\\item It rejects most of the proposed samples when $\\varepsilon$ is small, leading to a computationally inefficient algorithm.\n\\item It does not make assumptions about the shape or smoothness of the target function $L(\\boldsymbol{\\uptheta})$, hence accepted samples cannot ``share'' information in parameter space.\n\\item It uses a fixed proposal distribution (typically the prior $\\mathpzc{P}(\\boldsymbol{\\uptheta})$) and does not make use of already accepted samples to update the proposal of new points.\n\\item It aims at equal accuracy for all regions in parameter space, regardless of the values of the likelihood.\n\\end{enumerate}\n\nTo overcome these issues, the proposed approach follows closely \\citet{GutmannCorander2016}, who combine regression of the discrepancy (addressing issues 1 and 2) with Bayesian optimisation (addressing issues 3 and 4) in order to improve the computational efficiency of inference of simulator-based models. In this work, we focus on parametric approximations of the likelihood; we refer to \\citet{GutmannCorander2016} for a treatment of the non-parametric approach.\n\n\\subsection{Regression of the discrepancy}\n\\label{ssec:Regression of the discrepancy}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{GP_illustration.pdf} \n\\caption{Illustration of Gaussian process regression in one dimension, for the target test function $f: \\theta \\mapsto 2 - \\exp\\left[-(\\theta - 2)^2\\right] - \\exp\\left[-(\\theta - 6)^2\/10\\right] - 1\/ (\\theta^2 + 1)$ (dashed line). Training data are acquired (red dots); they are subject to a Gaussian observation noise with standard deviation $\\sigma_\\mathrm{n} = 0.03$. The blue line shows the mean prediction $\\mu(\\theta)$ of the Gaussian process regression, and the shaded region the corresponding $2\\sigma(\\theta)$ uncertainty. Gaussian processes allow interpolating and extrapolating predictions in regions of parameter space where training data are absent.\\label{fig:GP_illustration}}\n\\end{center}\n\\end{figure}\n\nThe standard approach to obtain a computable approximate likelihood relies on empirical averages (equations \\eqref{eq:mean_covariance_empirical} and \\eqref{eq:def_J_N}). However, such sample averages are not the only way to approximate intractable expectations. Equations \\eqref{eq:l_J_proposition_1} and \\eqref{eq:def_J} show that, up to constants and the sign, $\\tilde{\\ell}(\\boldsymbol{\\uptheta})$ can be interpreted as a regression function with the model parameters $\\boldsymbol{\\uptheta}$ (the ``predictors'') as the independent input variables and the discrepancy $\\Delta_{\\boldsymbol{\\uptheta}}$ as the response variable. Therefore, in the present approach, we consider an approximation of the intractable expectation defining $J(\\boldsymbol{\\uptheta})$ in equation \\eqref{eq:def_J} based on a regression analysis of $\\Delta_{\\boldsymbol{\\uptheta}}$, instead of sample averages. 
Explicitly, we consider\n\\begin{equation}\n\\widehat{J}^{(\\mathrm{t})}(\\boldsymbol{\\uptheta}) \\equiv \\mathrm{E}^{(\\mathrm{t})}\\left[ \\Delta_{\\boldsymbol{\\uptheta}}^{\\textbf{C}_{\\boldsymbol{\\uptheta}}} \\right],\n\\label{eq:def_J_t}\n\\end{equation}\nwhere the superscript $(\\mathrm{t})$ stands for ``training'' and the expectation $\\mathrm{E}^{(\\mathrm{t})}$ is taken under the probabilistic model defined in the following.\n\nInferring $J(\\boldsymbol{\\uptheta})$ via regression requires a training data set $\\lbrace (\\boldsymbol{\\uptheta}^{(i)} , \\Delta_{\\boldsymbol{\\uptheta}}^{(i)}) \\rbrace\\vspace*{-2pt}$ where the discrepancies are computed from the simulated summary statistics $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}^{(i)}$. Building this training set requires running simulations, but does not involve an accept\/reject criterion as does likelihood-free rejection sampling (thus addressing issue 1, see section \\ref{ssec:Computational difficulties with likelihood-free rejection sampling}). A regression-based approach also allows incorporating a smoothness assumption about $J(\\boldsymbol{\\uptheta})$. In this way, samples of the training set can ``share'' the information of the computed $\\Delta_{\\boldsymbol{\\uptheta}}$ in the neighbourhood of $\\boldsymbol{\\uptheta}$ (thus addressing issue 2). This suggests that fewer simulated data are needed to reach a certain level of accuracy when learning the target function $J(\\boldsymbol{\\uptheta})$.\n\nIn this work, we rely on Gaussian process (GP) regression in order to construct a prediction for $J(\\boldsymbol{\\uptheta})$. There are several reasons why this choice is advantageous for likelihood-free inference. First, GPs are a general-purpose regressor, able to deal with a large variety of functional shapes for $J(\\boldsymbol{\\uptheta})$, including potentially complex non-linear or multi-modal features. Second, GPs provide not only a prediction (the mean of the regressed function), but also the uncertainty of the regression. This is useful for actively constructing the training data via Bayesian optimisation, as we show in section \\ref{ssec:Acquisition rules}. Finally, GPs allow extrapolating the prediction into regions of the parameter space where no training points are available. These three properties are shown in figure \\ref{fig:GP_illustration} for a multi-modal test function subject to observation noise.\n\nWe now briefly review Gaussian process regression. Suppose that we have a set of $t$ training points, $(\\boldsymbol{\\Theta}, \\textbf{f}) \\equiv \\lbrace (\\boldsymbol{\\uptheta}^{(i)}, f^{(i)} = f(\\boldsymbol{\\uptheta}^{(i)})) \\rbrace$, of the function $f$ that we want to regress. We assume that $f$ is a Gaussian process with prior mean function $m(\\boldsymbol{\\uptheta})$ and covariance function $\\kappa(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}')$, also known as the kernel \\citep[see][]{RasmussenWilliams2006}. 
The joint probability distribution of the training set is therefore $\\mathpzc{P}(\\textbf{f}|\\boldsymbol{\\Theta}) \\propto \\exp\\left[ \\ell(\\textbf{f}|\\boldsymbol{\\Theta}) \\right]$, where the exponent $\\ell(\\textbf{f}|\\boldsymbol{\\Theta})$ is\n\\begin{equation}\n- \\frac{1}{2} \\sum_{i,j=1}^t \\left[f^{(i)}-m(\\boldsymbol{\\uptheta}^{(i)})\\right]^\\intercal \\kappa(\\boldsymbol{\\uptheta}^{(i)},\\boldsymbol{\\uptheta}^{(j)})^{-1} \\left[f^{(j)}-m(\\boldsymbol{\\uptheta}^{(j)})\\right] .\n\\end{equation}\nThe mean function $m(\\boldsymbol{\\uptheta})$ and the kernel $\\kappa(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}')$ define the functional shape and smoothness allowed for the prediction. Standard choices are respectively a constant and a squared exponential (the radial basis function, RBF), subject to additive Gaussian observation noise with variance $\\sigma_\\mathrm{n}^2$. Explicitly, $m(\\boldsymbol{\\uptheta}) \\equiv C$ and \n\\begin{equation}\n\\kappa(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}') \\equiv \\sigma_f^2 \\exp\\left[ -\\frac{1}{2} \\sum_p \\left( \\frac{\\theta_p - \\theta_p'}{\\lambda_p} \\right)^2 \\right] + \\sigma_\\mathrm{n}^2 \\, \\updelta_\\mathrm{K}(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}').\n\\end{equation}\nThe $\\theta_p$ and $\\theta_p'$ are the components of $\\boldsymbol{\\uptheta}$ and $\\boldsymbol{\\uptheta}'$, respectively. In the last term, $\\updelta_\\mathrm{K}(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}')$ is one if and only if $\\boldsymbol{\\uptheta} = \\boldsymbol{\\uptheta}'$ and zero otherwise. The hyperparameters are $C$, the $\\lambda_p$ (the length scales controlling the amount of correlation between points, and hence the allowed wiggliness of $f$), $\\sigma_f^2$ (the signal variance, i.e. the marginal variance of $f$ at a point $\\boldsymbol{\\uptheta}$ if the observation noise was zero), and $\\sigma_\\mathrm{n}^2$ (the observation noise). For the results of this paper, GP hyperparameters were learned from the training set using L-BFGS \\citep{L-BFGS}, a popular optimiser for machine learning, and updated every time the training set was augmented with ten samples.\n\nThe predicted value $f_\\star$ at a new point $\\boldsymbol{\\uptheta}_\\star$ can be obtained from the fact that $(\\lbrace \\boldsymbol{\\Theta}, \\boldsymbol{\\uptheta}_\\star \\rbrace, \\lbrace \\textbf{f} , f_\\star \\rbrace)$ form jointly a random realisation of the Gaussian process $f$. Thus, the target pdf $\\mathpzc{P}(f_\\star|\\textbf{f}, \\boldsymbol{\\Theta}, \\boldsymbol{\\uptheta}_\\star)$ can be obtained from conditioning the joint pdf $\\mathpzc{P}(\\textbf{f},f_\\star | \\boldsymbol{\\Theta}, \\boldsymbol{\\uptheta}_\\star)$ to the values of the training set $\\textbf{f}$. 
The result is \\citep[see][section 2.7]{RasmussenWilliams2006}\n\\begin{eqnarray}\n\\mathpzc{P}(f_\\star|\\textbf{f}, \\boldsymbol{\\Theta}, \\boldsymbol{\\uptheta}_\\star) & \\propto & \\exp\\left[ -\\frac{1}{2} \\left( \\frac{f_\\star - \\mu(\\boldsymbol{\\uptheta}_\\star)}{\\sigma(\\boldsymbol{\\uptheta}_\\star)} \\right)^2 \\right], \\label{eq:GP_posterior_predictive_distribution}\\\\\n\\mu(\\boldsymbol{\\uptheta}_\\star) & \\equiv & m(\\boldsymbol{\\uptheta}_\\star) + \\uline{\\textbf{K}}_\\star^\\intercal \\uuline{\\textbf{K}}^{-1} (\\textbf{f} - \\textbf{m}), \\label{eq:GP_mean}\\\\\n\\sigma^2(\\boldsymbol{\\uptheta}_\\star) & \\equiv & K_{\\star\\star} - \\uline{\\textbf{K}}_\\star^\\intercal \\uuline{\\textbf{K}}^{-1} \\uline{\\textbf{K}}_\\star, \\label{eq:GP_variance}\n\\end{eqnarray}\nwhere we use the definitions\n\\begin{eqnarray}\nK_{\\star\\star} & \\equiv & \\kappa(\\boldsymbol{\\uptheta}_\\star, \\boldsymbol{\\uptheta}_\\star), \\label{eq:GP_notation_def_1}\\\\\n\\textbf{m} & \\equiv & \\left(m(\\boldsymbol{\\uptheta}^{(i)})\\right)^\\intercal \\quad \\mathrm{for}~\\boldsymbol{\\uptheta}^{(i)} \\in \\boldsymbol{\\Theta}, \\label{eq:GP_notation_def_2}\\\\\n\\uline{\\textbf{K}}_\\star & \\equiv & \\left(\\kappa(\\boldsymbol{\\uptheta}_\\star, \\boldsymbol{\\uptheta}^{(i)})\\right)^\\intercal \\quad \\mathrm{for}~\\boldsymbol{\\uptheta}^{(i)} \\in \\boldsymbol{\\Theta}, \\label{eq:GP_notation_def_3}\\\\\n(\\uuline{\\textbf{K}})_{ij} & \\equiv & \\kappa(\\boldsymbol{\\uptheta}^{(i)}, \\boldsymbol{\\uptheta}^{(j)}) \\quad \\mathrm{for}~\\lbrace \\boldsymbol{\\uptheta}^{(i)}, \\boldsymbol{\\uptheta}^{(j)} \\rbrace \\in \\boldsymbol{\\Theta}^2. \\label{eq:GP_notation_def_4}\n\\end{eqnarray}\n\n\\subsection{Bayesian optimisation}\n\\label{ssec:Bayesian optimisation}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=0.49\\textwidth]{BO_illustration_11.pdf} \n\\includegraphics[width=0.49\\textwidth]{BO_illustration_12.pdf} \\\\\n\\includegraphics[width=0.49\\textwidth]{BO_illustration_13.pdf} \n\\includegraphics[width=0.49\\textwidth]{BO_illustration_14.pdf} \n\\caption{Illustration of four consecutive steps of Bayesian optimisation to learn the test function of figure \\ref{fig:GP_illustration}. For each step, the top panel shows the training data points (red dots) and the regression (blue line and shaded region). The bottom panel shows the acquisition function (the expected improvement, solid green line) with its maximiser (dashed green line). The next acquisition point, i.e. where to run a simulation to be added to the training set, is shown in orange; it differs from the maximiser of the acquisition function by a small random number. The acquisition function used is the expected improvement, aiming at finding the minimum of $f$. Hyperparameters of the regression kernel are optimised after each acquisition. As can be observed, Bayesian optimisation implements a trade-off between exploration (evaluation of the target function where the variance is large, e.g. after 12 points) and exploitation (evaluation of the target function close to the predicted minimum, e.g. after 11, 13, and 14 points). \\label{fig:BO_illustration}}\n\\end{center}\n\\end{figure*}\n\nThe second major ingredient of the proposed approach is Bayesian optimisation, which allows the inference of the regression function $J(\\boldsymbol{\\uptheta})$ while avoiding unnecessary computations. 
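Before describing the optimisation strategy itself, we note that the prediction step it repeatedly queries, equations \\eqref{eq:GP_mean} and \\eqref{eq:GP_variance}, is straightforward to transcribe numerically. The sketch below uses the constant-mean, squared-exponential kernel of section \\ref{ssec:Regression of the discrepancy} with fixed, purely illustrative hyperparameters; it is indicative only, since in practice the hyperparameters are learned by maximising the GP marginal likelihood, e.g. with L-BFGS as described above.
\\begin{verbatim}
import numpy as np

def rbf_kernel(A, B, sigma_f, lam):
    # Squared-exponential kernel between the rows of A and B,
    # with one length scale lam[p] per parameter dimension.
    d2 = (((A[:, None, :] - B[None, :, :]) / lam) ** 2).sum(axis=-1)
    return sigma_f ** 2 * np.exp(-0.5 * d2)

def gp_predict(theta_star, Theta, f, C, sigma_f, lam, sigma_n):
    # GP posterior mean and variance at the points theta_star, given the
    # training set (Theta, f), constant prior mean C and noise sigma_n.
    K = (rbf_kernel(Theta, Theta, sigma_f, lam)
         + sigma_n ** 2 * np.eye(len(Theta)))
    K_star = rbf_kernel(Theta, theta_star, sigma_f, lam)
    alpha = np.linalg.solve(K, f - C)
    mu = C + K_star.T @ alpha
    var = (sigma_f ** 2 + sigma_n ** 2
           - np.sum(K_star * np.linalg.solve(K, K_star), axis=0))
    return mu, var
\\end{verbatim}
A Cholesky factorisation of $\\uuline{\\textbf{K}}$ would be preferred in any serious implementation; \\texttt{np.linalg.solve} is kept here for readability.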
Bayesian optimisation allows active construction of the training data set $\\lbrace (\\boldsymbol{\\uptheta}^{(i)} , \\Delta_{\\boldsymbol{\\uptheta}}^{(i)}) \\rbrace$, updating the proposal of new points using the regressed $\\widehat{J}^{(\\mathrm{t})}(\\boldsymbol{\\uptheta})$ (thus addressing issue 3 with likelihood-free rejection sampling, see section \\ref{ssec:Computational difficulties with likelihood-free rejection sampling}). Further, since we are mostly interested in the regions of the parameter space where the variance of the approximate posterior is large (due to its stochasticity), the acquisition rules can prioritise these regions, so as to obtain a better approximation of $J(\\boldsymbol{\\uptheta})$ there (thus addressing issue 4).\n\nBayesian optimisation is a decision-making framework under uncertainty, for the automatic learning of unknown functions. It aims at gathering training data in such a manner as to evaluate the simulator as few times as possible while revealing as much information as possible about the target function and, in particular, the location of the optimum or optima. The method proceeds by iteratively picking predictors to be probed (i.e. simulations to be run) in a manner that trades off \\textit{exploration} (parameters for which the outcome is most uncertain) and \\textit{exploitation} (parameters which are expected to have a good outcome for the targeted application). In many contexts, Bayesian optimisation has been shown to obtain better results with fewer simulations than grid search or random search, due to its ability to reason about the usefulness of simulations before they are run \\citep[see][for a review]{Brochu2010}. Figure \\ref{fig:BO_illustration} illustrates Bayesian optimisation in combination with Gaussian process regression, applied to finding the minimum of the test function of figure \\ref{fig:GP_illustration}.\n\nIn the following, we give a brief overview of the elements of Bayesian optimisation used in this paper. In order to add a new point to the training data set $(\\boldsymbol{\\Theta}, \\textbf{f}) \\equiv \\lbrace (\\boldsymbol{\\uptheta}^{(i)}, f^{(i)} = f(\\boldsymbol{\\uptheta}^{(i)})) \\rbrace$, Bayesian optimisation uses an acquisition function $\\mathcal{A}(\\boldsymbol{\\uptheta})$ that estimates how useful the evaluation of the simulator at $\\boldsymbol{\\uptheta}$ will be in order to learn the target function. The acquisition function is constructed from the posterior predictive distribution of $f$ given the training set $(\\boldsymbol{\\Theta}, \\textbf{f})$, i.e. from the mean prediction $\\mu(\\boldsymbol{\\uptheta})$ and the uncertainty $\\sigma(\\boldsymbol{\\uptheta})$ of the regression analysis (equations \\eqref{eq:GP_mean} and \\eqref{eq:GP_variance}). The optimum of the acquisition function in parameter space determines the next point $\\boldsymbol{\\uptheta}_\\star \\equiv \\mathrm{argopt}_{\\boldsymbol{\\uptheta}} \\mathcal{A}(\\boldsymbol{\\uptheta})$ to be evaluated by the simulator ($\\mathrm{argopt} = \\mathrm{argmax}$ or $\\mathrm{argmin}$ depending on how the acquisition function is defined), so that the training set can be augmented with $(\\boldsymbol{\\uptheta}_\\star, f(\\boldsymbol{\\uptheta}_\\star))$. The acquisition function is a scalar function whose evaluation should be reasonably inexpensive, so that its optimum can be found by simple search methods such as gradient descent. \n\nThe algorithm needs to be initialised with an initial training set. 
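Schematically, the elements above combine into a simple loop that alternates between fitting the GP, optimising the acquisition function, and augmenting the training set with a new simulation. In the sketch below, \\texttt{fit\\_gp} and \\texttt{optimise\\_acquisition} are hypothetical helpers standing for the GP regression of section \\ref{ssec:Regression of the discrepancy} and for the acquisition rules of section \\ref{ssec:Acquisition rules}; the construction of the initial design \\texttt{theta\\_init} is discussed next.
\\begin{verbatim}
def acquisition_loop(phi_obs, simulator, discrepancy, theta_init,
                     n_acquisitions, rng):
    # Generic Bayesian-optimisation loop for likelihood-free inference.
    Theta = list(theta_init)
    Delta = [discrepancy(phi_obs, simulator(th, rng)) for th in Theta]
    for _ in range(n_acquisitions):
        gp = fit_gp(Theta, Delta)                   # hypothetical helper
        theta_next = optimise_acquisition(gp, rng)  # e.g. EI or ExpIntVar
        Delta.append(discrepancy(phi_obs, simulator(theta_next, rng)))
        Theta.append(theta_next)
    return fit_gp(Theta, Delta)   # final surrogate for the discrepancy
\\end{verbatim}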
In numerical experiments, we found that building this initial set by drawing from the prior (as would typically be done in likelihood-free rejection sampling) can result in difficulties with the first iterations of Gaussian process regression. Uniformly-distributed points within the boundaries of the GP are also a poor choice, as they will result in an uneven initial sampling of the parameter space. To circumvent this issue, we build the initial training set using a low-discrepancy quasi-random Sobol sequence \\citep{Sobol1967}, which covers the parameter space more evenly.\n\n\\subsection{Expressions for the approximate posterior}\n\\label{ssec:Expressions for the approximate posterior}\n\nAs discussed in section \\ref{ssec:Regression of the discrepancy}, using $\\Delta_{\\boldsymbol{\\uptheta}}^{\\textbf{C}_{\\boldsymbol{\\uptheta}}}$ as the regressed quantity directly gives an estimate of $J(\\boldsymbol{\\uptheta})$ in equation \\eqref{eq:def_J}. The response variable is thus $f(\\boldsymbol{\\uptheta}) \\equiv \\Delta_{\\boldsymbol{\\uptheta}}^{\\textbf{C}_{\\boldsymbol{\\uptheta}}}$ and the regression then gives\n\\begin{equation}\n\\widehat{J}^{(\\mathrm{t})}(\\boldsymbol{\\uptheta}) = \\mu(\\boldsymbol{\\uptheta}).\n\\label{eq:J_t_equals_mu}\n\\end{equation}\n\nIn the parametric approach to likelihood approximation, this is equivalent to an approximation of $-2\\tilde{\\ell}(\\boldsymbol{\\uptheta}) = -2\\log \\widetilde{L}(\\boldsymbol{\\uptheta})$ (see equation \\eqref{eq:l_J_proposition_1}). The expectation of the (unnormalised) approximate posterior is therefore directly given as (see equation \\eqref{eq:approx_problem_Bayes})\n\\begin{equation}\n\\mathrm{E}^{(\\mathrm{t})} \\left[ \\mathpzc{P}_{\\textsc{bolfi}}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}_\\mathrm{O}, \\textbf{f}, \\boldsymbol{\\Theta}) \\right] \\equiv \\mathpzc{P}(\\boldsymbol{\\uptheta}) \\exp\\left( -\\frac{1}{2} \\mu(\\boldsymbol{\\uptheta}) \\right),\n\\label{eq:approximate_posterior_expectation}\n\\end{equation}\nwhere $\\mathpzc{P}_{\\textsc{bolfi}}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}_\\mathrm{O}, \\textbf{f}, \\boldsymbol{\\Theta}) \\approx Z_{\\boldsymbol{\\Phi}} \\times \\mathpzc{P}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi})_{|\\boldsymbol{\\Phi}=\\boldsymbol{\\Phi}_\\mathrm{O}}$.\n\nThe estimate of the variance of $f(\\boldsymbol{\\uptheta})$ can also be propagated to the approximate posterior, giving\n\\begin{equation}\n\\mathrm{V}^{(\\mathrm{t})} \\left[ \\mathpzc{P}_{\\textsc{bolfi}}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}_\\mathrm{O}, \\textbf{f}, \\boldsymbol{\\Theta}) \\right] \\equiv \\frac{\\mathpzc{P}(\\boldsymbol{\\uptheta})^2}{4} \\exp\\left[ -\\mu(\\boldsymbol{\\uptheta}) \\right] \\sigma^2(\\boldsymbol{\\uptheta}) .\n\\label{eq:approximate_posterior_variance}\n\\end{equation}\nDetails of the computations can be found in appendix \\ref{sapx:Expressions for the approximate posterior}.\n\nExpressions for the {\\textsc{bolfi}} posterior in the non-parametric approach with the uniform kernel can also be derived \\citep[][lemma 3.1]{Jaervenpaeae2017}. 
As this paper focuses on the parametric approach, we refer to the literature for the former case.\n\n\\subsection{Acquisition rules}\n\\label{ssec:Acquisition rules}\n\n\\subsubsection{Expected improvement}\n\\label{sssec:Expected improvement}\n\nStandard Bayesian optimisation uses acquisition functions that estimate how useful the next evaluation of the simulator will be in order to find the minimum or minima of the target function. While several other choices are possible \\citep[see e.g.][]{Brochu2010}, in this work we discuss the acquisition function known as \\textit{expected improvement} (EI). The \\textit{improvement} is defined by $I(\\boldsymbol{\\uptheta}_\\star) = \\max\\left[\\min(\\textbf{f}) - f(\\boldsymbol{\\uptheta}_\\star), 0\\right]$, and the expected improvement is $\\mathrm{EI}(\\boldsymbol{\\uptheta}_\\star) \\equiv \\mathrm{E}^{(\\mathrm{t})}\\left[ I(\\boldsymbol{\\uptheta}_\\star) \\right]$, where the expectation is taken with respect to the random observation assuming decision $\\boldsymbol{\\uptheta}_\\star$. For a Gaussian process regressor, this evaluates to \\citep[see][section 2.3]{Brochu2010}\n\\begin{equation}\n\\mathrm{EI}(\\boldsymbol{\\uptheta}_\\star) \\equiv \\sigma(\\boldsymbol{\\uptheta}_\\star) \\left[ z\\Phi(z) + \\phi(z) \\right], \\, \\mathrm{with}~z \\equiv \\frac{\\min(\\textbf{f}) - \\mu(\\boldsymbol{\\uptheta}_\\star)}{\\sigma(\\boldsymbol{\\uptheta}_\\star)},\n\\label{eq:EI}\n\\end{equation}\nor $\\mathrm{EI}(\\boldsymbol{\\uptheta}_\\star) \\equiv 0$ if $\\sigma(\\boldsymbol{\\uptheta}_\\star)=0$, where $\\phi$ and $\\Phi$ denote respectively the pdf and the cumulative distribution function (cdf) of the unit-variance zero-mean Gaussian. The decision rule is to select the location $\\boldsymbol{\\uptheta}_\\star$ that maximises $\\mathrm{EI}(\\boldsymbol{\\uptheta}_\\star)$.\n\nThe EI criterion can be interpreted as follows: since the goal is to find the minimum of $f$, a reward equal to the improvement $\\min(\\textbf{f}) - f(\\boldsymbol{\\uptheta}_\\star)$ is received if $f(\\boldsymbol{\\uptheta}_\\star)$ is smaller than all the values observed so far, otherwise no reward is received. The first term appearing in equation \\eqref{eq:EI} is maximised when evaluating at points with high uncertainty (exploration); and, at fixed variance, the second term is maximised by evaluating at points with low mean (exploitation). The expected improvement therefore automatically captures the exploration-exploitation trade-off as a result of the Bayesian decision-theoretic treatment.\n\n\\subsubsection{Expected integrated variance}\n\\label{sssec:Expected integrated variance}\n\nAs pointed out by \\citet{Jaervenpaeae2017}, in Bayesian optimisation for approximate Bayesian computation, the goal should not be to find the minimum of $J(\\boldsymbol{\\uptheta})$, but rather to minimise the expected uncertainty in the estimate of the approximate posterior over the future evaluation of the simulator at $\\boldsymbol{\\uptheta}_\\star$. Consequently, they propose an acquisition function, known as the \\textit{expected integrated variance} (ExpIntVar or EIV in the following) that selects the next evaluation location to minimise the expected variance of the future posterior density $\\mathpzc{P}_{\\textsc{bolfi}}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}_\\mathrm{O}, \\textbf{f}, \\boldsymbol{\\Theta}, \\boldsymbol{\\uptheta}_\\star)$ over the parameter space. The framework used is Bayesian decision theory. 
Formally, the loss due to our uncertain knowledge of the approximate posterior density can be defined as\n\\begin{equation}\n\\mathpzc{L}\\left[ \\mathpzc{P}_{\\textsc{bolfi}}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}_\\mathrm{O}, \\textbf{f}, \\boldsymbol{\\Theta}) \\right] = \\int \\mathrm{V}^{(\\mathrm{t})}\\left[ \\mathpzc{P}_{\\textsc{bolfi}}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}_\\mathrm{O}, \\textbf{f}, \\boldsymbol{\\Theta}) \\right] \\, \\mathrm{d}\\boldsymbol{\\uptheta},\n\\end{equation}\nand the acquisition rule is to select the location $\\boldsymbol{\\uptheta}_\\star$ that minimises\n\\begin{equation}\n\\begin{split}\n& \\mathrm{EIV}(\\boldsymbol{\\uptheta}_\\star) \\equiv \\mathrm{E}^{(\\mathrm{t})} \\left[ \\mathpzc{L}\\left[ \\mathpzc{P}_{\\textsc{bolfi}}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}_\\mathrm{O}, \\textbf{f}, \\boldsymbol{\\Theta}, f_\\star, \\boldsymbol{\\uptheta}_\\star) \\right] \\right] \\\\\n& = \\int \\mathpzc{L}\\left[ \\mathpzc{P}_{\\textsc{bolfi}}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}_\\mathrm{O}, \\textbf{f}, \\boldsymbol{\\Theta}, f_\\star, \\boldsymbol{\\uptheta}_\\star) \\right] \\mathpzc{P}(f_\\star|\\textbf{f}, \\boldsymbol{\\Theta}, \\boldsymbol{\\uptheta}_\\star) \\, \\mathrm{d}f_\\star\n\\end{split}\n\\end{equation}\nwith respect to $\\boldsymbol{\\uptheta}_\\star$, where we have to marginalise over the unknown simulator output $f_\\star$ using the probabilistic model $\\mathpzc{P}(f_\\star|\\textbf{f}, \\boldsymbol{\\Theta}, \\boldsymbol{\\uptheta}_\\star)$ (equations \\eqref{eq:GP_posterior_predictive_distribution}--\\eqref{eq:GP_variance}).\n\n\\citet[][proposition 3.2]{Jaervenpaeae2017} derive the expressions for the expected integrated variance for a GP model in the non-parametric approach. In appendix \\ref{apx:Derivations of the mathematical results}, we extend this work and derive the ExpIntVar acquisition function and its gradient in the parametric approach. The result is the following: under the GP model, the expected integrated variance after running the simulation model with parameter $\\boldsymbol{\\uptheta}_\\star$ is given by\n\\begin{equation}\n\\mathrm{EIV}(\\boldsymbol{\\uptheta}_\\star) = \\int \\frac{\\mathpzc{P}(\\boldsymbol{\\uptheta})^2}{4} \\exp\\left[ -\\mu(\\boldsymbol{\\uptheta}) \\right] \\left[ \\sigma^2(\\boldsymbol{\\uptheta}) - \\tau^2(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}_\\star) \\right] \\, \\mathrm{d}\\boldsymbol{\\uptheta},\n\\label{eq:EIV}\n\\end{equation}\nwith\n\\begin{equation}\n\\tau^2(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}_\\star) \\equiv \\dfrac{\\mathrm{cov}^2(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}_\\star)}{\\sigma^2(\\boldsymbol{\\uptheta}_\\star)},\n\\label{eq:def_tau}\n\\end{equation}\nwhere $\\mathrm{cov}(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}_\\star) \\equiv \\kappa(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}_\\star) - \\uline{\\textbf{K}}^\\intercal \\uuline{\\textbf{K}}^{-1}\\vspace{-4pt} \\uline{\\textbf{K}}_\\star$ is the GP posterior predicted covariance between the evaluation point $\\boldsymbol{\\uptheta}$ in the integral and the candidate location for the next evaluation $\\boldsymbol{\\uptheta}_\\star$. 
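On a one-dimensional regular grid, equations \\eqref{eq:EIV} and \\eqref{eq:def_tau} can be transcribed directly as in the indicative sketch below; the \\texttt{gp} object is a hypothetical stand-in exposing the posterior mean, variance and train--test covariance of the regressed discrepancy (the vector $\\uline{\\textbf{K}}$ entering $\\mathrm{cov}(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}_\\star)$ is defined just after the sketch).
\\begin{verbatim}
import numpy as np

def expected_integrated_variance(theta_star, grid, prior_grid, gp):
    # EIV(theta_star) on a regular one-dimensional grid of parameters.
    mu = gp.mean(grid)                      # mu(theta) on the grid
    var = gp.variance(grid)                 # sigma^2(theta)
    cov = gp.covariance(grid, theta_star)   # cov(theta, theta_star)
    tau2 = cov ** 2 / gp.variance(theta_star)
    integrand = 0.25 * prior_grid ** 2 * np.exp(-mu) * (var - tau2)
    return integrand.sum() * (grid[1] - grid[0])

# The next evaluation point minimises the criterion over candidates:
# theta_next = min(candidates,
#                  key=lambda t: expected_integrated_variance(
#                      t, grid, prior_grid, gp))
\\end{verbatim}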
Note that in addition to the notations given by equations \\eqref{eq:GP_notation_def_1}--\\eqref{eq:GP_notation_def_4}, we have introduced the vector\n\\begin{equation}\n\\uline{\\textbf{K}} \\equiv \\left(\\kappa(\\boldsymbol{\\uptheta}, \\boldsymbol{\\uptheta}^{(i)})\\right)^\\intercal \\quad \\mathrm{for}~\\boldsymbol{\\uptheta}^{(i)} \\in \\boldsymbol{\\Theta}.\n\\label{eq:GP_notation_def_5}\n\\end{equation}\n\nIt is of interest to examine when the integrand in equation \\eqref{eq:EIV} is small. As for the EI (equation \\eqref{eq:EI}), optimal values are found when the mean of the discrepancy $\\mu(\\boldsymbol{\\uptheta})$ is small or the variance $\\sigma^2(\\boldsymbol{\\uptheta})$ is large. This effect is what yields the trade-off between exploitation and exploration for the ExpIntVar acquisition rule. However, unlike in standard Bayesian optimisation strategies such as the EI, the trade-off is a non-local process (due to the integration over the parameter space), and also depends on the prior, so as to minimise the uncertainty in the posterior (and not likelihood) approximation.\n\nComputing the expected integrated variance requires integration over the parameter space. In this work, the integration is performed on a regular grid of $50$ points per dimension within the GP boundaries. In high dimension, the integral can become prohibitively expensive to compute on a grid. As discussed by \\citet{Jaervenpaeae2017}, it can then be evaluated with Monte Carlo or quasi-Monte Carlo methods such as importance sampling.\n\nIn numerical experiments, we have found that the ExpIntVar criterion (like any acquisition function for Bayesian optimisation) has some sensitivity to the initial training set. In particular, the initial set (built from a Sobol sequence or otherwise) should sample the GP domain sufficiently well, and the GP domain should encompass the prior. This ensures that the prior volume is never wider than the training data. Under this condition, in agreement with \\citet{Jaervenpaeae2017}, we have found that ExpIntVar is stable, in the sense that it produces consistent {\\textsc{bolfi}} posteriors over different realisations of the initial training data set and simulator outputs.\n\n\\subsubsection{Stochastic versus deterministic acquisition rules}\n\\label{sssec:Stochastic versus deterministic acquisition rules}\n\nThe above rules do not guarantee that the selected $\\boldsymbol{\\uptheta}_\\star$ is different from a previously acquired $\\boldsymbol{\\uptheta}^{(i)}$. \\citet[][see in particular appendix C]{GutmannCorander2016} found that this can result in a poor exploration of the parameter space, and proposed to add a stochastic element to the decision rule in order to avoid getting stuck at one point. In some experiments, we followed this prescription by adding an ``acquisition noise'' of strength $\\sigma_\\mathrm{a}^p$ to each component of the optimiser of the acquisition function. More precisely, $\\boldsymbol{\\uptheta}_\\star$ is sampled from the Gaussian distribution $\\mathpzc{G}(\\boldsymbol{\\uptheta}_\\mathrm{opt}, \\textbf{D})$, where $\\boldsymbol{\\uptheta}_\\mathrm{opt} \\equiv \\mathrm{argopt}_{\\boldsymbol{\\uptheta}} \\mathcal{A}(\\boldsymbol{\\uptheta})$ and $\\textbf{D}$ is the diagonal covariance matrix of components $(\\sigma_\\mathrm{a}^p)^2$. 
The $\\sigma_\\mathrm{a}^p$ are chosen to be of order $\\lambda_p\/10$.\n\nFor a more extensive discussion and comparison of various stochastic and deterministic acquisition rules, the reader is referred to \\citet{Jaervenpaeae2017}.\n\n\\section{Applications}\n\\label{sec:Applications}\n\nIn this section, we show the application of {\\textsc{bolfi}} to several case studies. In particular, we discuss the simulator and the computable approximation of the likelihood to be used, and compare {\\textsc{bolfi}} to likelihood-free rejection sampling in terms of computational efficiency. In all cases, we show that {\\textsc{bolfi}} reduces the number of required simulations by several orders of magnitude.\n\nIn section \\ref{ssec:Summarising Gaussian signals}, we discuss the toy problem of summarising Gaussian signals (i.e. inferring the unknown mean and\/or variance of Gaussian-distributed data). In section \\ref{ssec:Supernova cosmology}, we show the first application of {\\textsc{bolfi}} to a real cosmological problem using actual observational data: the inference of cosmological parameters from supernovae data. For each test case, we refer to the corresponding section in the appendices for the details of the data model and inference assumptions.\n\n\\subsection{Summarising Gaussian signals}\n\\label{ssec:Summarising Gaussian signals}\n\nA simple toy model can be constructed from the general problem of summarising Gaussian signals with unknown mean, or with unknown mean and variance. This example allows for the comparison of {\\textsc{bolfi}} and likelihood-free rejection sampling to the true posterior conditional on the full data, which is known analytically. All the details of this model are given in appendix \\ref{apx:Summarising Gaussian signals}.\n\n\\subsubsection{Unknown mean, known variance}\n\\label{sssec:Unknown mean, known variance}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{Gaussian_mean_illustration.pdf} \n\\caption{Illustration of {\\textsc{bolfi}} for a one-dimensional problem, the inference of the unknown mean $\\mu$ of a Gaussian. \\textit{Lower panel}. The discrepancy $\\Delta_\\mu$ (i.e. twice the negative log-likelihood) is a stochastic process due to the limited computational resources. Its mean and the $2\\sigma$ credible interval are shown in red. The dashed red line shows one realisation of the stochastic process as a function of $\\mu$. Simulations at different $\\mu$ are shown as black dots. {\\textsc{bolfi}} builds a probabilistic model for the discrepancy, the mean and $2\\sigma$ credible interval of which are shown in blue. \\textit{Upper panel}. The expectation of the (rescaled) {\\textsc{bolfi}} posterior and its $2\\sigma$ credible interval are shown in comparison to the exact posterior for the problem. The dashed red line shows the posterior obtained from the corresponding realisation of the stochastic process of the lower panel. \\label{fig:Gaussian_mean_illustration}}\n\\end{center}\n\\end{figure}\n\nWe first consider the problem, already discussed by \\citet{GutmannCorander2016}, where the data $\\textbf{d}$ are a vector of $n$ components drawn from a Gaussian with unknown mean $\\mu$ and known variance $\\sigma^2_\\mathrm{true}$. The empirical mean $\\Phi^1$ is a sufficient summary statistic for the problem of inferring $\\mu$. The distribution of simulated $\\Phi^1_\\mu$ takes a simple form, $\\Phi^1_\\mu \\sim \\mathpzc{G}\\left( \\mu, \\sigma^2_\\mathrm{true}\/n \\right)$. 
Using here the true variance, the discrepancy and synthetic likelihood are\n\\begin{equation}\n\\Delta^1_\\mu = -2 \\hat{\\ell}^N_1(\\mu) = \\log \\left(\\frac{2\\pi \\sigma^2_\\mathrm{true}}{n} \\right) + n\\frac{(\\Phi^1_\\mathrm{O}-\\hat{\\mu}^1_\\mu)^2}{\\sigma^2_\\mathrm{true}},\n\\end{equation}\nwhere $\\hat{\\mu}^1_\\mu$ is an average of $N$ realisations of $\\Phi^1_\\mu$. In figure \\ref{fig:Gaussian_mean_illustration} (lower panel), the black dots show simulations of $\\Delta^1_\\mu$ for different values of $\\mu$. We have $\\hat{\\mu}^1_\\mu \\sim \\mathpzc{G}\\left( \\mu, \\sigma^2_\\mathrm{true}\/(Nn) \\right)$, therefore the stochastic process defining the discrepancy can be written\n\\begin{equation}\n\\Delta^1_\\mu = \\log \\left(\\frac{2\\pi \\sigma^2_\\mathrm{true}}{n} \\right) + n\\frac{(\\Phi^1_\\mathrm{O}- \\mu -g )^2}{\\sigma^2_\\mathrm{true}}, \\quad g \\sim \\mathpzc{G}\\left(0, \\sigma^2_g \\right),\n\\end{equation}\nwhere $\\sigma^2_g \\equiv \\sigma^2_\\mathrm{true}\/(Nn)$. Each realisation of $g$ gives a different mapping $\\mu \\mapsto \\Delta^1_\\mu$. In figure \\ref{fig:Gaussian_mean_illustration}, we show one such realisation in the lower panel, and the corresponding approximate posterior in the upper panel. Using the percent point function (inverse of the cdf) of the Gaussian $\\mathpzc{G}\\left(0, \\sigma^2_g \\right)$, we also show in red the mean and $2\\sigma$ credible interval of the true stochastic process.\n\nThe GP regression using the simulations shown as the training set is represented in blue in the lower panel of figure \\ref{fig:Gaussian_mean_illustration}. The corresponding {\\textsc{bolfi}} posterior and its variance, defined by equations \\eqref{eq:approximate_posterior_expectation} and \\eqref{eq:approximate_posterior_variance}, are shown in purple in the upper panel. The uncertainty in the estimate of the posterior (shaded purple region) is due to the limited number of available simulations (and not to the noisiness of individual training points). It is the expectation of this uncertainty under the next evaluation of the simulator which is minimised in parameter space by the ExpIntVar acquisition rule.\n\n\\subsubsection{Unknown mean and variance}\n\\label{sssec:Unknown mean and variance}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{Gaussian_mean_variance.pdf} \n\\caption{Prior and posterior for the joint inference of the mean and variance of Gaussian signals. The prior and exact posterior (from the analytic solution) are Gaussian-inverse-Gamma distributed and shown in blue and orange, respectively. In the left panel, the approximate rejection-sampling posterior, based on $5,000$ samples accepted out of $\\sim 350,000$ simulations, is shown in green. It loosely encloses the exact posterior. In the right panel, the approximate {\\textsc{bolfi}} posterior, based on $2,500$ simulations only, is shown in red. It is a much finer approximation of the exact posterior. For all distributions, the $1\\sigma$, $2\\sigma$ and $3\\sigma$ contours are shown.\\label{fig:Gaussian_mean_variance}}\n\\end{center}\n\\end{figure*}\n\nWe now consider the problem where the full data set $\\textbf{d}$ is a vector of $n$ components drawn from a Gaussian with unknown mean $\\mu$ and unknown variance $\\sigma^2$. The aim is the two-dimensional inference of $\\boldsymbol{\\uptheta} \\equiv (\\mu, \\sigma^2)$. 
Evidently, the true likelihood $\\mathcal{L}(\\mu, \\sigma^2)$ for this problem is the Gaussian characterised by $(\\mu, \\sigma^2)$. The Gaussian-inverse-Gamma distribution is the conjugate prior for this likelihood. It is described by four parameters. Adopting a Gaussian-inverse-Gamma prior characterised by $(\\alpha, \\beta, \\eta, \\lambda)$ yields a Gaussian-inverse-Gamma posterior characterised by $(\\alpha', \\beta', \\eta', \\lambda')$ given by equations \\eqref{eq:Gaussian_analytic_solution_alpha}--\\eqref{eq:Gaussian_analytic_solution_lambda}. This is the analytic solution to which we compare our approximate results.\n\nFor the numerical approach, we forward model the problem using a simulator that draws from the prior, simulates $N = 10$ realisations of the Gaussian signal, and compresses them to two summary statistics, the empirical mean and variance, respectively $\\Phi^1$ and $\\Phi^2$. The graphical probabilistic model is given in figure \\ref{fig:BHM_Gaussian_model}. It is a noise-free simulator without latent variables (of the type given by figure \\ref{fig:BHM_exact}, right) completed by a deterministic compression of the full data. Note that the vector $\\boldsymbol{\\Phi} \\equiv (\\Phi^1 , \\Phi^2)$ is a sufficient statistic for the inference of $(\\mu, \\sigma^2)$. To perform likelihood-free inference, we also need a computable approximation $\\widehat{L}^N(\\mu, \\sigma^2)$ of the true likelihood. We derive such an approximation in section \\ref{sapx:Derivation of the Gaussian-Gamma synthetic likelihood for likelihood-free inference} using a parametric approach, under the assumptions (exactly verified in this example) that $\\Phi^1$ is Gaussian-distributed and $\\Phi^2$ is Gamma-distributed. We name it the Gaussian-Gamma synthetic likelihood.\n\nThe posterior obtained from likelihood-free rejection sampling is shown in green in figure \\ref{fig:Gaussian_mean_variance} (left) in comparison to the prior (in blue) and the analytic posterior (in orange). It was obtained from $5,000$ accepted samples using a threshold of $\\varepsilon = 4$ on $-2\\hat{\\ell}^N$. The entire run required $\\sim 350,000$ forward simulations in total, the vast majority of which have been rejected. The rejection-sampling posterior is a fair approximation to the true posterior, unbiased but broader, as expected from a rejection-sampling method. \n\nFor comparison, the posterior obtained via {\\textsc{bolfi}} is shown in red in figure \\ref{fig:Gaussian_mean_variance} (right). {\\textsc{bolfi}} was initialised using a Sobol sequence of $20$ members to compute the original surrogate surface, and Bayesian optimisation with the ExpIntVar acquisition function and acquisition noise was run to acquire $230$ more samples. As can be observed, {\\textsc{bolfi}} allows very precise likelihood-free inference; in particular, the $1\\sigma$, $2\\sigma$ and $3\\sigma$ contours (the latter corresponding to the $0.27\\%$ least likely events) of the analytic posterior are reconstructed almost perfectly. The overall cost to get these results is only $2,500$ simulations with {\\textsc{bolfi}} versus $\\sim 350,000$ with rejection sampling (for a poorer approximation of the analytic posterior), which corresponds to a reduction by $2$ orders of magnitude. 
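\n\nTo make the ingredients of this comparison explicit, the following sketch sets the toy problem up end-to-end in Python: a simulator drawing $N$ Gaussian data sets at a given $(\\mu, \\sigma^2)$, their compression to the empirical mean and variance, a discrepancy built from a synthetic likelihood, and a GP surrogate for that discrepancy. It is illustrative only: it uses a plain Gaussian synthetic likelihood for the two summaries, randomly drawn initial points and the generic GP regressor of \\texttt{scikit-learn}, rather than the Gaussian-Gamma synthetic likelihood, the Sobol initialisation and the specific GP model employed above, and the data size and ``true'' parameter values are placeholders.\n\\begin{verbatim}
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
n, N = 100, 10                       # data size, simulations per theta
d_obs = rng.normal(2.0, 1.5, n)      # "observed" data (placeholder truth)
phi_obs = np.array([d_obs.mean(), d_obs.var(ddof=1)])

def simulator(theta):
    # Draw N Gaussian data sets at theta = (mu, sigma2); return summaries.
    mu, sigma2 = theta
    d = rng.normal(mu, np.sqrt(sigma2), size=(N, n))
    return np.column_stack([d.mean(axis=1), d.var(axis=1, ddof=1)])

def discrepancy(theta):
    # -2 log of a crude Gaussian synthetic likelihood of the summaries,
    # up to an additive constant.
    phi = simulator(theta)
    mean, cov = phi.mean(axis=0), np.cov(phi.T) + 1e-9 * np.eye(2)
    r = phi_obs - mean
    return float(r @ np.linalg.solve(cov, r) + np.log(np.linalg.det(cov)))

# Initial training set: random draws over a plausible parameter box.
thetas = np.column_stack([rng.uniform(0.0, 4.0, 20),
                          rng.uniform(0.5, 5.0, 20)])
deltas = np.array([discrepancy(t) for t in thetas])

# GP surrogate for the discrepancy, to be refined by further acquisitions.
gp = GaussianProcessRegressor(ConstantKernel() * RBF([1.0, 1.0]),
                              normalize_y=True)
gp.fit(thetas, deltas)
mean_pred, std_pred = gp.predict(np.array([[2.0, 2.25]]), return_std=True)
\\end{verbatim}\nFrom this point on, {\\textsc{bolfi}} alternates between refitting the GP and acquiring new simulations with an acquisition rule, and the approximate posterior and its uncertainty follow from equations \\eqref{eq:approximate_posterior_expectation} and \\eqref{eq:approximate_posterior_variance}.\n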
\n\n\\subsection{Supernova cosmology}\n\\label{ssec:Supernova cosmology}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{Supernovae.pdf} \n\\caption{Prior and posterior distributions for the joint inference of the matter density of the Universe, $\\Omega_\\mathrm{m}$, and the dark energy equation of state, $w$, from the JLA supernovae data set. The prior and exact posterior distribution (obtained from a long MCMC run requiring $\\sim 6 \\times 10^6$ data model evaluations) are shown in blue and orange, respectively. In the left panel, the approximate rejection-sampling posterior, based on $5,000$ samples accepted out of $\\sim 450,000$ simulations, is shown in green. In the right panel, the approximate {\\textsc{bolfi}} posterior, based on $6,000$ simulations only, is shown in red. For all distributions, the $1\\sigma$, $2\\sigma$ and $3\\sigma$ contours are shown. \\label{fig:Supernovae}}\n\\end{center}\n\\end{figure*}\n\nIn this section, we present the first application of {\\textsc{bolfi}} to a cosmological inference problem. Specifically, we perform an analysis of the Joint Lightcurve Analysis (JLA) data set, consisting of the B-band peak apparent magnitudes $m_\\mathrm{B}$ of $740$ type Ia supernovae (SN Ia) with redshift $z$ between $0.01$ and $1.3$ \\citep{Betoule2014}: $\\textbf{d}_\\mathrm{O} \\equiv \\left( m_{\\mathrm{B},\\mathrm{O}}^k \\right)$ for $k \\in \\llbracket 1,740 \\rrbracket$. The details of the data model and inference assumptions are given in appendix \\ref{apx:Supernova cosmology}. For the purpose of validating {\\textsc{bolfi}}, we assume a Gaussian synthetic likelihood (see section \\ref{sapx:Discrepancy}), allowing us to demonstrate the fidelity of the {\\textsc{bolfi}} posterior against the exact likelihood-based solution obtained via Markov Chain Monte Carlo (MCMC). This analysis can also be compared to the proof of concept for another likelihood-free method, {\\textsc{delfi}} \\citep[Density Estimation for Likelihood-Free Inference,][]{Papamakarios2016,Alsing2018}, as the assumptions are very similar.\n\nAs described in appendix \\ref{apx:Supernova cosmology}, the full problem is six dimensional; however, in this work, we focus on the inference of the two physically relevant quantities, namely $\\Omega_\\mathrm{m}$ (the matter density of the Universe) and $w$ (the equation of state of dark energy, assumed constant), and marginalise over the other four (nuisance) parameters ($\\alpha$, $\\beta$, $M_\\mathrm{B}$, $\\delta\\hspace{-0.1em}M$). We assume a Gaussian prior,\n\\begin{equation}\n\\begin{pmatrix}\n\\Omega_\\mathrm{m} \\\\\nw\n\\end{pmatrix} \\sim\n\\mathpzc{G}\\left[\n\\begin{pmatrix}\n0.3 \\\\\n-0.75\n\\end{pmatrix},\n\\begin{pmatrix}\n0.4^2 & -0.24 \\\\\n-0.24 & 0.75^2\n\\end{pmatrix}\n\\right],\n\\label{eq:SNe_prior_Omegam_w}\n\\end{equation}\nwhich is roughly aligned with the direction of the well-known $\\Omega_\\mathrm{m}-w$ degeneracy. 
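\n\nFor reference, the prior of equation \\eqref{eq:SNe_prior_Omegam_w} is straightforward to reproduce numerically: its correlation coefficient is $-0.24\/(0.4\\times 0.75)=-0.8$, which quantifies the tilt along the degeneracy direction. The snippet below (illustrative only) encodes it and draws samples from it, which could serve, for instance, as proposal draws for a likelihood-free rejection sampler.\n\\begin{verbatim}
import numpy as np

# Mean vector and covariance matrix of the Gaussian prior on (Omega_m, w),
# as quoted in the text.
prior_mean = np.array([0.3, -0.75])
prior_cov = np.array([[0.4**2, -0.24],
                      [-0.24,  0.75**2]])

rng = np.random.default_rng(42)
prior_samples = rng.multivariate_normal(prior_mean, prior_cov, size=10_000)

# Correlation coefficient of the prior (about -0.8).
rho = prior_cov[0, 1] / np.sqrt(prior_cov[0, 0] * prior_cov[1, 1])
\\end{verbatim}\n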
We generated $10^6$ samples (out of $\\sim 6\\times 10^6$ data model evaluations) of the posterior for the exact six-dimensional Bayesian problem via MCMC \\citep[performed using the \\textsc{emcee} code,][]{Foreman-Mackey2013}, ensuring sufficient convergence to characterise the $3\\sigma$ contours of the distribution.\\footnote{The final Gelman-Rubin statistic \\citep{Gelman1992} was $R -1 \\leq 5 \\times 10^{-4}$ for each of the six parameters.} The prior and the exact posterior are shown in blue and orange, respectively, in figure \\ref{fig:Supernovae}.\n\nFor likelihood-free inference, the simulator takes as input $\\Omega_\\mathrm{m}$ and $w$ and simulates $N$ realisations of the magnitudes $m_\\mathrm{B}$ of the 740 supernovae at their redshifts. Consistently with the Gaussian likelihood used in the MCMC analysis, we assume a Gaussian synthetic likelihood with a fixed covariance matrix $\\textbf{C}$. The observed data $\\textbf{d}_\\mathrm{O}$ and the covariance matrix $\\textbf{C}$ are shown in figure \\ref{fig:JLA_Hubble_correlation}. \n\nThe approximate posterior obtained from likelihood-free rejection sampling is shown in green in figure \\ref{fig:Supernovae}. It was obtained from $5,000$ accepted samples using a (conservative) threshold of $\\varepsilon = 650$ on $\\Delta_{(\\Omega_\\mathrm{m},w)}$, chosen so that the acceptance ratio was not below $0.01$. The entire run required $\\sim 450,000$ simulations in total. The approximate posterior obtained via {\\textsc{bolfi}} is shown in red in figure \\ref{fig:Supernovae}. {\\textsc{bolfi}} was initialised with a Sobol sequence of $20$ samples, and $100$ acquisitions were performed according to the ExpIntVar criterion, without acquisition noise. The {\\textsc{bolfi}} posterior is a much finer approximation to the true posterior than the one obtained from likelihood-free rejection sampling. It is remarkable that only $100$ acquisitions are enough to learn the non-trivial banana shape of the posterior. Only the $3\\sigma$ contour \\citep[which is usually not shown in cosmology papers, e.g.][]{Betoule2014} notably deviates from the MCMC posterior. This is due to the fact that we used one realisation of the stochastic process defining $\\Delta_{(\\Omega_\\mathrm{m},w)}$ and only $N=50$ realisations per $(\\Omega_\\mathrm{m},w)$; the marginalisation over the four nuisance parameters is therefore partial, yielding slightly smaller credible contours. However, a better approximation could be obtained straightfowardly, if desired, by investing more computational resources (increasing $N$), without requiring more acquisitions. \n\nAs we used $N=50$, the total cost for {\\textsc{bolfi}} is $6,000$ simulations. This is a reduction by $\\sim 2$ orders of magnitude with respect to likelihood-free rejection sampling ($\\sim 450,000$ simulations) and $3$ orders of magnitude with respect to MCMC sampling of the exact posterior ($6 \\times 10^6$ simulations). It is also interesting to note that our {\\textsc{bolfi}} analysis required a factor of $\\sim 3$ fewer simulations than the recently introduced \\textsc{delfi}\\ procedure \\citep{Alsing2018}, which used $20,000$ simulations drawn from the prior for the analysis of the JLA.\\footnote{A notable difference is that {\\textsc{delfi}} allowed the authors to perform the joint inference of the six parameters of the problem, whereas we only get the distribution of $\\Omega_\\mathrm{m}$ and $w$. 
However, since these are the only two physically interesting parameters, inference of the nuisance parameters is not deemed crucial for this example.}\n\n\\section{Discussion}\n\\label{sec:Discussion}\n\n\\subsection{Benefits and limitations of the proposed approach for cosmological inferences}\n\\label{ssec:Benefits and limitations of the proposed approach for cosmological inferences}\n\nAs noted in the introduction, likelihood-free rejection sampling, when at all viable, is extremely costly in terms of the number of required simulations. In contrast, the {\\textsc{bolfi}} approach relies on a GP probabilistic model for the discrepancy, and therefore allows the incorporation of a smoothness assumption about the approximate likelihood $L(\\boldsymbol{\\uptheta})$. The smoothness assumption allows simulations in the training set to ``share'' information about their value of $\\Delta_{\\boldsymbol{\\uptheta}}$ in the neighbourhood of $\\boldsymbol{\\uptheta}$, which suggests that fewer simulations are needed to reach a certain level of accuracy. Indeed, the number of simulations required is typically reduced by $2$ to $3$ orders of magnitude, for a better final approximation of the posterior, as demonstrated by our tests in section \\ref{sec:Applications} and in the statistical literature \\citep[see][]{GutmannCorander2016}. \n\nA second benefit of {\\textsc{bolfi}} is that it actively acquires training data through Bayesian optimisation. The trade-off between computational cost and statistical performance is still present, but in a modified form: the trade-off parameter is the size of the training set used in the regression. Within the training set, the user is free to choose which areas of the parameter space should be prioritised, so as to approximate the regression function more accurately there. In contrast, in ABC strategies that rely on drawing from a fixed proposal distribution (often the prior), or variants such as \\textsc{pmc}-\\textsc{abc}, a fixed computational cost needs to be paid per value of $\\boldsymbol{\\uptheta}$ regardless of the value of $\\Delta_{\\boldsymbol{\\uptheta}}$. \n\nFinally, by focusing on parametric approximations to the exact likelihood, the approach proposed in this work is totally ``$\\varepsilon$-free'', meaning that no threshold (which is often regarded as an unappealing \\textit{ad hoc} element) is required. Like likelihood-based techniques, the parametric version of {\\textsc{bolfi}} has the drawback that assuming a wrong form for the synthetic likelihood or misestimating its parameters (such as the covariance matrix) can potentially bias the approximate posterior and\/or lead to an underestimation of credible regions. Nevertheless, massive data compression procedures can make the assumptions going into the choice of a Gaussian synthetic likelihood (almost) true by construction (see section \\ref{sssec:Data compression}).\n\nOf course, regressing the discrepancy and optimising the acquisition function are not free of computational cost. However, the run-time for realistic cosmological simulation models can be hours or days. In comparison, the computational overhead introduced by {\\textsc{bolfi}} is negligible.\n\nLikelihood-free inference should also be compared to existing likelihood-based techniques for cosmology such as Gibbs sampling or Hamiltonian Monte Carlo (e.g. 
\\citealp{Wandelt2004,Eriksen2004} for the cosmic microwave background; \\citealp{Jasche2010b,Jasche2015,Jasche2015BORGSDSS} for galaxy clustering; \\citealp{Alsing2016} for weak lensing). The principal difference between these techniques and {\\textsc{bolfi}} lies in its likelihood-free nature. Likelihood-free inference has particular appeal for cosmological data analysis, since encoding complex physical phenomena and realistic observational effects into forward simulations is much easier than designing an approximate likelihood which incorporates these effects and solving the inverse problem. While the numerical complexity of likelihood-based techniques typically requires to approximate complex data models in order to access required products (conditionals or gradients of the pdfs) and to allow for sufficiently fast execution speeds, {\\textsc{bolfi}} performs inference from full-scale black-box data models. In the future, such an approach is expected to allow previously infeasible analyses, relying on a much more precise modelling of cosmological data, including in particular the complicated systematics they experience. However, while the physics and instruments will be more accurately modelled, the statistical approximation introduced with respect to likelihood-based techniques should be kept in mind.\n\nOther key aspects of {\\textsc{bolfi}} for cosmological data analysis are the arbitrary choice of the statistical summaries and the easy joint treatment of different data sets. Indeed, as the data compression from $\\textbf{d}$ to $\\boldsymbol{\\Phi}$ is included in the simulator (see section \\ref{ssec:Approximate Bayesian computation}), summary statistics do not need to be quantities that can be physically modelled (such as the power spectrum) and can be chosen robustly to model misspecification. For example, for the microwave sky, the summaries could be the cross-spectra between different frequency maps; and for imaging surveys, the cross-correlation between different bands. Furthermore, joint analyses of correlated data sets, which is usually challenging in likelihood-based approaches (as they require a good model for the joint likelihood) can be performed straightforwardly in a likelihood-free approach. \n\nImportantly, as a general inference technique, {\\textsc{bolfi}} can be embedded into larger probabilistic schemes such as Gibbs or Hamiltonian-within-Gibbs samplers. Indeed, as posterior predictive distributions for conditionals and gradients of GPs are analytically tractable, it is easy to obtain samples of the {\\textsc{bolfi}} approximate posterior for use in larger models. {\\textsc{bolfi}} can therefore allow parts of a larger Bayesian hierarchical model to be treated as black boxes, without compromising the tractability of the entire model. \n\n\\subsection{Possible extensions}\n\\label{ssec:Possible extensions}\n\n\\subsubsection{High-dimensional inference}\n\\label{sssec:High-dimensional inference}\n\nIn this proof-of-concept paper, we focused on two-dimensional problems. Likelihood-free inference is in general very difficult when the dimensionality of the parameter space is large, due to the curse of dimensionality, which makes the volume exponentially larger with $\\mathrm{dim}~\\boldsymbol{\\uptheta}$. In {\\textsc{bolfi}}, this difficulty manifests itself in the form of a hard regression problem which needs to be solved. 
The areas in the parameter space where the discrepancy is small tend to be narrow in high dimension, therefore discovering these areas becomes more challenging as the dimension increases. The optimisation of GP kernel parameters, which control the shapes of allowed features, also becomes more difficult. Furthermore, finding the global optimum of the acquisition function becomes more demanding (especially with the ones designed for ABC such as ExpIntVar, which have a high degree of structure -- see figure \\ref{fig:Supernovae_acquisition}, bottom right panel).\n\nNevertheless, \\citet{Jaervenpaeae2017} showed on a toy simulation model (a Gaussian) that up to ten-dimensional inference is possible with {\\textsc{bolfi}}. As usual cosmological models do not include more than ten free physical parameters, we do not expect this limitation to be a hindrance. Any additional nuisance parameter or latent variable used internally by the simulator (such as $\\alpha$, $\\beta$, $M_\\mathrm{B}$, $\\delta\\hspace{-0.1em}M$ in supernova cosmology, see section \\ref{ssec:Supernova cosmology}) can be automatically marginalised over, by using $N$ realisations per $\\boldsymbol{\\uptheta}$. Recent advances in high-dimensional implementation of the synthetic likelihood \\citep{Ong2017} and high-dimensional Bayesian optimisation \\citep[e.g.][]{Wang2013:BOH:2540128.2540383,Kandasamy2015} could also be exploited. In future work, we will address the problem of high-dimensional likelihood-free inference in a cosmological context.\n\n\\subsubsection{Scalability with the number of acquisitions and probabilistic model for the discrepancy}\n\\label{sssec:Scalability with the number of acquisitions and probabilistic model for the discrepancy}\n\nIn addition to the fundamental issues with high-dimensional likelihood-free inference described in the previous section, practical difficulties can be met.\n\nGaussian process regression requires the inversion of a matrix $\\uuline{\\textbf{K}}\\vspace{-4pt}$ of size $t \\times t$, where $t$ is the size of the training set. The complexity is $\\mathcal{O}(t^3)$, which limits the size of the training set to a few thousand. Improving GPs with respect to this inversion is still subject to research \\citep[see][chapter 8]{RasmussenWilliams2006}. For example, ``sparse'' Gaussian process regression reduces the complexity by introducing auxiliary ``inducing variables''. Techniques inspired by the solution to the Wiener filtering problem in cosmology, such as preconditioned conjugate gradient or messenger field algorithms could also be used \\citep{Elsner2013,KodiRamanah2017,Papez2018}. Another strategy would be to divide the regression problem spatially into several patches with a lower number of training points \\citep{Park2017}. Such approaches are possible extensions of the presented method.\n\nIn the GP probabilistic model employed to model the discrepancy, the variance depends only on the training locations, not on the obtained values (see equation \\eqref{eq:GP_variance}). Furthermore, a stationary kernel is assumed. However, depending on the simulator, the discrepancy can show heteroscedasticity (i.e. its variance can depend on $\\boldsymbol{\\uptheta}$ -- see e.g. figure \\ref{fig:Gaussian_mean_illustration}, bottom panel). 
Such cases could be handled by non-stationary GP kernels or different probabilistic models for the discrepancy, allowing a heteroscedastic regression.\n\n\\subsubsection{Acquisition rules}\n\\label{sssec:Acquisition rules}\n\nAs shown in our examples, attention should be given to the selection of an efficient acquisition rule. Although standard Bayesian optimisation strategies such as the EI are reasonably effective, they are usually too greedy, focusing nearly all the sampling effort near the estimated minimum of the discrepancy and gathering too little information about other regions in the domain (see figure \\ref{fig:Supernovae_acquisition}, bottom left panel). This implies that, unless the acquisition noise is high, the tails of the posterior will not be as well approximated as the modal areas. In contrast, the ExpIntVar acquisition rule, derived in this work for the parametric approach, addresses the inefficient use of resources in likelihood-free rejection sampling by directly targeting the regions of the parameter space where improvement in the estimation accuracy of the approximate posterior is needed most. In our experiments, ExpIntVar seems to correct -- at least partially -- for the well-known effect in Bayesian optimisation of overexploration of the domain boundaries, which becomes more problematic in high dimension.\n\nAcquisition strategies examined so far in the literature \\citep[see][for a comparative study]{Jaervenpaeae2017} have focused on single acquisitions and are all ``myopic'', in the sense that they reason only about the expected utility of the next acquisition, and the number of simulations left in a limited budget is not taken into account. Improvement of acquisition rules enabling batch acquisitions and non-myopic reasoning are left to future extensions of {\\textsc{bolfi}}.\n\n\\subsubsection{Data compression}\n\\label{sssec:Data compression}\n\nIn addition to the problem of the curse of dimensionality in parameter space, discussed in section \\ref{sssec:High-dimensional inference}, likelihood-free inference usually suffers from difficulties in the measuring the (mis)match between simulations and observations if the data space also has high dimension. As discussed in section \\ref{ssec:Approximate Bayesian computation}, simulator-based models include a data compression step. The comparison in data space can be made more easily if $\\mathrm{dim}~\\boldsymbol{\\Phi}$ is reduced. In future work, we will therefore aim at combining {\\textsc{bolfi}} with massive and (close to) optimal data compression strategies. These include \\textsc{moped} \\citep{Heavens2000}, the score function \\citep{AlsingWandelt2018}, or information-maximising neural networks \\citep{Charnock2018}. Using such efficient data compression techniques, the number of simulations required for inference with {\\textsc{bolfi}} will be reduced even more, and the number of parameters treated could be increased.\n\nParametric approximations to the exact likelihood depend on quantities that have to be estimated using the simulator (typically for the Gaussian synthetic likelihood, the inverse covariance matrix of the summaries). Unlike supernova cosmology where the covariance matrix is easily obtained, in many cases it is prohibitively expensive to run enough simulations to estimate the required quantities, especially when they vary with the model parameters. 
In this context, massive data compression offers a way forward, reducing enormously the number of required simulations and making the analysis feasible when otherwise it might be essentially impossible \\citep{Heavens2017,Gualdi2018}.\n\nAn additional advantage of several data compression strategies is that they support the choice of a Gaussian synthetic likelihood. Indeed, the central limit theorem (for \\textsc{moped}) or the form of the network's reward function (for information-maximising neural networks) assist in giving the compressed data a near-Gaussian distribution. Furthermore, testing the Gaussian assumption for the synthetic likelihood will be far easier in a smaller number of dimensions than in the original high-dimensional data space.\n\n\\subsection{Parallelisation and computational efficiency}\n\\label{ssec:Parallelisation and computational efficiency}\n\nWhile MCMC sampling has to be done sequentially, {\\textsc{bolfi}} lends itself to more parallelisation. In an efficient strategy, a master process performs the regression and decides on acquisition locations, then dispatches simulations to be run by different workers. In this way, many simulations can be run simultaneously in parallel, or even on different machines. This allows fast application of the method and makes it particularly suitable for grid computing. Extensions of the probabilistic model and of the acquisition rules, discussed in section \\ref{sssec:Scalability with the number of acquisitions and probabilistic model for the discrepancy} and \\ref{sssec:Acquisition rules}, would open the possibility of doing asynchronous acquisitions. Different workers would then work completely independently and decide on their acquisitions locally, while just sharing a pool of simulations to update their beliefs given all the evidence available.\n\nWhile the construction of the training set depends on the observed data $\\boldsymbol{\\Phi}_\\mathrm{O}$ (through the acquisition function), simulations can nevertheless be reused as long as summaries $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$ are saved. This means that if one acquires new data $\\boldsymbol{\\Phi}_\\mathrm{O}'$, the existing $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$ (or a subset of them) can be used to compute the new discrepancy $\\Delta_{\\boldsymbol{\\uptheta}}(\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}, \\boldsymbol{\\Phi}_\\mathrm{O}')$. Building an initial training set in this fashion can massively speed up the inference of $\\mathpzc{P}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi})_{\\boldsymbol{\\Phi}=\\boldsymbol{\\Phi}_\\mathrm{O}'}$, whereas likelihood-based techniques would require a new MCMC.\n\n\\subsection{Comparison to previous work}\n\\label{ssec:Comparison to previous work}\n\nAs discussed in the introduction, likelihood-free rejection sampling is not a viable strategy for various problems that {\\textsc{bolfi}} can tackle. In recent work, an other algorithm for scalable likelihood-free inference in cosmology \\citep[{\\textsc{delfi}},][]{Papamakarios2016,Alsing2018} was introduced. The approach relies on estimating the joint probability $\\mathpzc{P}(\\boldsymbol{\\uptheta},\\boldsymbol{\\Phi})$ via density estimation. This idea also relates to the work of \\citet{Hahn2018}, who fit the sampling distribution of summaries $\\mathpzc{P}(\\boldsymbol{\\Phi}|\\boldsymbol{\\uptheta})$ using Gaussian mixture density estimation or independent component analysis, before using it for parameter estimation. 
This section discusses the principal similarities and differences.\n\nThe main difference between {\\textsc{bolfi}} and {\\textsc{delfi}} is the data acquisition. Training data are actively acquired in {\\textsc{bolfi}}, contrary to {\\textsc{delfi}} which, in the simplest scheme, draws from the prior. The reduction in the number of simulations for the inference of cosmological parameters (see section \\ref{ssec:Supernova cosmology}) can be interpreted as the effect of the Bayesian optimisation procedure in combination with the ExpIntVar acquisition function. Using a purposefully constructed surrogate surface instead of a fixed proposal distribution, {\\textsc{bolfi}} focuses the simulation effort to reveal as much information as possible about the target posterior. In particular, its ability to reason about the quality of simulations before they are run is an essential element. Acquisition via Bayesian optimisation almost certainly remains more efficient than even the \\textsc{pmc} version of {\\textsc{delfi}}, which learns a better proposal distribution but still chooses parameters randomly. In future cosmological applications with simulators that are expensive and\/or have a large latent space, an active data acquisition procedure could be crucial in order to provide a good model for the noisy approximate likelihood in the interesting regions of parameter space, and to reduce the computational cost. This comes at the expense of a reduction of the parallelisation potential: with a fixed proposal distribution (like in {\\textsc{delfi}} and unlike in {\\textsc{bolfi}}), the entire set of simulations can be run at the same time.\n\nThe second comment is related to the dimensionality of problems which can be addressed. Like {\\textsc{delfi}}, {\\textsc{bolfi}} relies on a probabilistic model to make ABC more efficient. However, the quantities employed differ, since in {\\textsc{delfi}} the relation between the parameters $\\boldsymbol{\\uptheta}$ and the summary statistics $\\boldsymbol{\\Phi}$ is modelled (via density estimation), while {\\textsc{bolfi}} focuses on the relation between the parameters $\\boldsymbol{\\uptheta}$ and the discrepancy $\\Delta_{\\boldsymbol{\\uptheta}}$ (via regression). Summary statistics are multi-dimensional while the discrepancy is a univariate scalar quantity. Thus, {\\textsc{delfi}} requires to solve a density estimation problem in $\\mathrm{dim}~\\boldsymbol{\\uptheta} + \\mathrm{dim}~\\boldsymbol{\\Phi}$ (which equals $2 \\times \\mathrm{dim}~\\boldsymbol{\\uptheta}$ if the compression from \\citealp{AlsingWandelt2018} is used), while {\\textsc{bolfi}} requires to solve a regression problem in $\\mathrm{dim}~\\boldsymbol{\\uptheta}$. Both tasks are expected to become more difficult as $\\mathrm{dim}~\\boldsymbol{\\uptheta}$ increases (a symptom of the curse of dimensionality, see section \\ref{sssec:High-dimensional inference}), but the upper limits on $\\mathrm{dim}~\\boldsymbol{\\uptheta}$ for practical applications may differ. Further investigations are required to compare the respective maximal dimensions of problems that can be addressed by {\\textsc{bolfi}} and {\\textsc{delfi}}.\n\nFinally, as argued by \\citet{Alsing2018}, {\\textsc{delfi}} readily provides an estimate of the approximate evidence. 
In contrast, as in likelihood-based techniques, integration over parameter space is required with {\\textsc{bolfi}} to get\n\\begin{equation}\nZ_{\\boldsymbol{\\Phi}} = \\left(\\int \\mathpzc{P}(\\boldsymbol{\\Phi}|\\boldsymbol{\\uptheta}) \\, \\mathrm{d}\\boldsymbol{\\uptheta} \\right)_{\\boldsymbol{\\Phi}=\\boldsymbol{\\Phi}_\\mathrm{O}}.\n\\end{equation}\nHowever, due to the GP model, the integral can be more easily computed, using the same strategies as for the integral appearing in ExpIntVar (see section \\ref{sssec:Expected integrated variance}): only the GP predicted values are required at discrete locations on a grid (in low dimension) or at the positions of importance samples. A potential caveat is that {\\textsc{delfi}} has only been demonstrated to work in combination with the score function \\citep{AlsingWandelt2018}, which is necessary to reduce the dimensionality of $\\boldsymbol{\\Phi}$ before estimating the density.\\footnote{In contrast, section \\ref{ssec:Supernova cosmology} showed, for the same supernovae problem, that {\\textsc{bolfi}} can still operate if the comparison is done in the full $740$-dimensional data space.} The score function produces summaries that are only sufficient up to linear order in the log-likelihood. However, in ABC, care is required to perform model selection if the summary statistics are insufficient. Indeed, \\citet[][equation 1]{Robert2011} show that, in such a case, the approximate Bayes factor can be arbitrarily biased and that the approximation error is unrelated to the computational effort invested in running the ABC algorithm. Moreover, sufficiency for models $\\mathcal{M}_1$ and $\\mathcal{M}_2$ alone, or even for both of them -- even if approximately realised via Alsing \\& Wandelt's procedure -- does not guarantee sufficiency to compare the two different models $\\mathcal{M}_1$ and $\\mathcal{M}_2$ \\citep{Didelot2011}. As the assumptions behind {\\textsc{bolfi}} do not necessarily necessitate to reduce $\\mathrm{dim}~\\boldsymbol{\\Phi}$ ($\\Delta_{\\boldsymbol{\\uptheta}}$ is always a univariate scalar quantity, see above), these difficulties could be alleviated with {\\textsc{bolfi}} by carefully designing sufficient summary statistics for model comparison within the black-box simulator, if they exist.\n\n\\section{Conclusion}\n\\label{sec:Conclusion}\n\nLikelihood-free inference methods allow Bayesian inference of the parameters of simulator-based statistical models with no reference to the likelihood function. This is of particular interest for data analysis in cosmology, where complex physical and observational processes can usually be simulated forward but not handled in the inverse problem. \n\nIn this paper, we considered the demanding problem of performing Bayesian inference when simulating data from the model is extremely costly. We have seen that likelihood-free rejection sampling suffers from a vanishingly small acceptance rate when the threshold $\\varepsilon$ goes to zero, leading to the need for a prohibitively large number of simulations. This high cost is largely due to the lack of knowledge about the functional relation between the model parameters and the discrepancy. As a response, we have described a new approach to likelihood-free inference, {\\textsc{bolfi}}, that uses regression to infer this relation, and optimisation to actively build the training data set. 
A crucial ingredient is the acquisition function derived in this work, with which training data are acquired such that the expected uncertainty in the final estimate of the posterior is minimised.\n\nIn case studies, we have shown that {\\textsc{bolfi}} is able to precisely recover the true posterior, even far in its tails, with as few as $6,000$ simulations, in contrast to likelihood-free rejection sampling or likelihood-based MCMC techniques which require orders of magnitude more simulations. The reduction in the number of required simulations accelerated the inference massively.\n\nThis study opens up a wide range of possible extensions, discussed in section \\ref{ssec:Possible extensions}. It also allows for novel analyses of cosmological data from fully non-linear simulator-based models, as required e.g. for the cosmic web \\citep[see the discussions in][]{Leclercq2015ST,Leclercq2016CIT,Leclercq2017DMSHEET}. Other applications may include the cosmic microwave background, weak gravitational lensing or intensity mapping experiments. We therefore anticipate that {\\textsc{bolfi}} will be a major ingredient in principled, simulator-based inference for the coming era of massive cosmological data.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nStrong infrared behavior is a characteristic feature of the inflationary physics \\cite{w1} (for reviews see e.g. \\cite{r1,r2}). The modes are continuously pushed from subhorizon to superhorizon regimes, which enlarges short distance quantum effects on cosmologically interesting scales. Most of the time, the loop corrections contain infrared infinities that must properly be handled by viable physical reasoning. The so called infrared logarithms\\footnote{In the presence of the entropy perturbations the infrared loop divergences are power law rather than logarithmic, see \\cite{pl}.} show up in loops as a reminiscent of the peculiar infrared behavior (see e.g. \\cite{il1,il2,il3,il4,il5,il6,il7,il8,il9,il10,il11}). \n\nIn this paper, we apply cosmological perturbation theory to the standard scalar slow-roll inflationary model in the minisuperspace approximation. The minisuperspace theory is expected to capture the dynamics of the zero modes of the full theory. Besides, it is free from loop infinities and renormalization issues, which are intricate in the presence of gravity, and it still contains special features related to the gauge invariance and nonlinearities. Therefore the results obtained in the minisuperspace approximation must shed light on some crucial questions in the full theory and our aim here is to examine the appearance of the infrared logarithms. \n\nWe first consider the single scalar field model and obtain the {\\it complete} gauge fixed action for the curvature perturbation $\\zeta$. The corresponding Hamiltonian can be expanded in powers of the momentum conjugate to $\\zeta$, which becomes an expansion in the inverse powers of the background scale factor of the universe $a_B(t)$. Asymptotically at late times, the quadratic momentum term, which is still nonlinear in $\\zeta$, dominates the dynamics. In this case, $\\zeta$ is conserved and no $\\ln a_B$ behavior appears, which is consistent with \\cite{zc1,zc2}. We then add a self-interacting spectator scalar to the system. This time there arises a specific asymptotically dominant interaction term, which yields an $(\\ln a_B)^n$ correction to $\\zeta$ in the $n$'th order perturbation theory. 
The emergence of this infrared logarithm is similar to what has been observed in field theory calculations. For models with a large number of e-folds such a correction may invalidate the perturbation theory. On the other hand, in the minisuperspace theory a nonperturbative argument shows that asymptotically $\\zeta$ has actually a slowly evolving $\\ln a_B$ correction to the constant mode, which indicates that other infrared logarithms involving higher powers of $\\ln a_B$ might be artifacts of the perturbation theory. \n\n\\section{Single Scalar Slow-roll Inflation}\n\nWe start from the following minisuperspace action:\n\\begin{equation}\\label{1}\nS=L^3\\int dt\\, a^3 \\, \\left\\{\\frac{1}{N} \\left[-6\\frac{\\dot{a}^2}{a^2}+\\fr12 \\dot{\\Phi}^2\\right]-NV(\\Phi)\\right\\},\n\\end{equation}\nwhere the dot denotes the time derivative, $N$ is the lapse function, $a(t)$ is the scale factor of the universe, $\\Phi$ is the inflaton and $V(\\Phi)$ is the inflaton potential (we set the reduced Planck mass $M_p=1$. The proper $M_p$ factors can easily be reinstated by dimensional analysis as we will do below). The minisuperspace action \\eq{1} can be obtained from the usual Einstein-Hilbert action by setting $N^i=0$, $h_{ij}=a^2\\delta_{ij}$, where $N$, $N^i$ and $h_{ij}$ refer to the standard ADM decomposition of the metric, and by assuming that the variables $N$, $a$ and $\\Phi$ depend only on time. The parameter $L$ denotes the size of the comoving spatial coordinates and the factor $L^3$ in \\eq{1} arises from their integration, reducing the field theory to a quantum mechanical system. \n\nThe action \\eq{1} is invariant under a local time transformation with the parameter $k^0$:\n\\begin{equation}\\label{2}\n\\delta N=k^0\\dot{N}+N\\dot{k}^0,\\hs{10}\\delta\\Phi=k^0\\dot{\\Phi},\\hs{10}\\delta a=k^0\\dot{a}.\n\\end{equation}\nTo fix the gauge invariance one may define the background field variables $a_B$ and $\\Phi_B$ obeying\n\\begin{equation}\\label{3}\n6H_B^2=\\fr12\\dot{\\Phi}_B^2+V(\\Phi_B),\\hs{10}\\dot{H}_B=-\\fr14\\dot{\\Phi}_B^2,\n\\end{equation}\nwhere $H_B=\\dot{a}_B\/a_B$. Next, one may introduce the fluctuation fields $\\zeta$ and $\\phi$ as\n\\begin{equation}\\label{4}\na=a_Be^{\\zeta},\\hs{10}\\Phi=\\Phi_B+\\phi,\n\\end{equation}\nand impose the gauge $\\phi=0$. After algebraically solving the lapse $N$ from its own equation of motion one may obtain \n\\begin{equation}\\label{5}\nS=-2L^3\\int dt\\,a_B^3V_B\\,e^{3\\zeta}\\,\\left[1+\\frac{12H_B}{V_B}\\dot{\\zeta}+\\frac{6}{V_B}\\dot{\\zeta}^2\\right]^{1\/2},\n\\end{equation}\nwhere $V_B=V(\\Phi_B)$. We take the background \\eq{3} to be a slow-roll inflationary solution with $\\dot{\\Phi}_B<0$. \n\nAssuming that the physical size of the pre-inflationary patch is determined by the initial Hubble parameter $H_i$, one has\n\\begin{equation}\\label{6}\na_B(t_i)L=\\frac{1}{H_i},\n\\end{equation}\nwhere $t_i$ is the initial time of inflation. Normalizing the scale factor as \n\\begin{equation}\\label{7}\na_B(t_i)=\\frac{1}{H_i}\n\\end{equation}\ncorresponds to setting $L=1$, which eliminates the unphysical comoving scale from the equations. 
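\n\nFor orientation, the background equations \\eq{3} with the normalisation \\eq{7} are easy to integrate numerically. The Python sketch below does so for an assumed quadratic potential $V(\\Phi)=m^2\\Phi^2\/2$ and illustrative parameter values (the potential, the mass and the initial field value are placeholders, since the discussion here keeps $V$ generic); it evolves the field equation $\\ddot{\\Phi}_B+3H_B\\dot{\\Phi}_B+V'(\\Phi_B)=0$ that follows from differentiating the first equation in \\eq{3} and using the second.\n\\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

m = 1.0e-5                         # placeholder inflaton mass (M_p = 1)
V  = lambda phi: 0.5 * m**2 * phi**2
dV = lambda phi: m**2 * phi

def H(phi, phidot):
    # Constraint 6 H_B^2 = phidot^2/2 + V(phi).
    return np.sqrt((0.5 * phidot**2 + V(phi)) / 6.0)

def rhs(t, y):
    phi, phidot, ln_a = y
    return [phidot, -3.0 * H(phi, phidot) * phidot - dV(phi), H(phi, phidot)]

phi0 = 20.0                                  # placeholder initial field value
phidot0 = -dV(phi0) / (3.0 * H(phi0, 0.0))   # slow-roll initial velocity
ln_a0 = -np.log(H(phi0, phidot0))            # a_B(t_i) = 1/H_i

sol = solve_ivp(rhs, (0.0, 2.0e6), [phi0, phidot0, ln_a0],
                rtol=1.0e-8, atol=1.0e-12)
n_efolds = sol.y[2, -1] - ln_a0              # ln a_B(t) - ln a_B(t_i)
\\end{verbatim}\nThe slowly varying background quantities $H_B$, $V_B$ and $\\dot{\\Phi}_B$ used in the fluctuation analysis below can be read off from such a solution.\n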
\n\nIt is instructive to repeat the gauge fixing procedure in the Hamiltonian formulation where the minisuperspace action \\eq{1} can be written as \n\\begin{eqnarray}\n&&S=\\int dt\\left[\\hat{P}_\\zeta \\dot{\\hat{\\zeta}}+P_\\Phi\\dot{\\Phi}-NH\\right],\\nonumber\\\\\n&&H=-\\frac{1}{24}e^{-3\\hat{\\zeta}}\\hat{P}_\\zeta^2+\\fr12e^{-3\\hat{\\zeta}}P_\\Phi^2+e^{3\\hat{\\zeta}}V(\\Phi)\\label{8},\n\\end{eqnarray}\nand $\\hat{\\zeta}=\\ln a$. Expanding the variables around their background values\n\\begin{eqnarray}\n&&\\hat{\\zeta}=\\ln a_B+\\zeta,\\hs{10}\\Phi=\\Phi_B+\\phi,\\hs{10}N=1+n,\\nonumber\\\\\n&&\\hat{P}_\\zeta=-12a_B^3H_B+P_\\zeta,\\hs{10}P_\\Phi=a_B^3\\dot{\\Phi}_B+P_\\phi,\\label{9}\n\\end{eqnarray} \nthe action becomes\n\\begin{equation}\nS=\\int dt \\left[P_\\zeta\\dot{\\zeta}+P_\\phi\\dot{\\phi}-H_F-nC\\right],\\label{10}\n\\end{equation}\nwhere $H_F$ is the fluctuation Hamiltonian involving all variables but $n$ and $C$ is the constraint given by\n\\begin{equation}\\label{11}\nC=-\\frac{1}{24a_B^3}e^{-3\\zeta}\\left(-12a_B^3H_B+P_\\zeta\\right)^2+\\frac{1}{2a_B^3}e^{-3\\zeta}\\left(a_B^3\\dot{\\Phi}_B+P_\\phi\\right)^2+a_B^3e^{3\\zeta}V(\\Phi_B+\\phi).\n\\end{equation}\nAfter imposing the gauge $\\phi=0$, one may solve\\footnote{In solving $P_\\phi$ one should keep in mind that $a_B^3\\dot{\\Phi}_B+P_\\phi<0$ since we are making an expansion around a background solution with $\\dot{\\Phi}_B<0$.} the constraint $C=0$ for $P_\\phi$, which would give the reduced action for $\\zeta$ and $P_\\zeta$ \n\\begin{equation}\\label{12}\nS=\\int\\, dt\\, P_\\zeta\\dot{\\zeta}-\\dot{\\Phi}_B\\left[\\frac{1}{12}\\left(P_\\zeta-12a_B^3H_B\\right)^2-2a_B^6V_Be^{6\\zeta}\\right]^{1\/2}+H_BP_\\zeta+6a_B^3V_B\\zeta.\n\\end{equation}\nIn the phase space path integral quantization, this procedure corresponds to the Faddeev-Popov gauge fixing\n\\begin{equation}\\label{13}\n\\delta(\\phi)\\delta(C) \\det\\left\\{\\phi,C\\right\\}=\\delta(\\phi)\\delta(C)\\frac{\\partial C}{\\partial\\phi}=\\delta(\\phi)\\delta(P_\\phi-P_\\phi^*),\n\\end{equation}\nwhere $P_\\phi^*$ is the solution of $C=0$. One can check that the two actions \\eq{5} and \\eq{12} are related by the Legendre transformation exchanging the Lagrangian and the Hamiltonian. \n\nOne may see that a constant $\\zeta$ solves the equations of motion that follows from \\eq{5} provided that the background equations \\eq{3} are satisfied. At first, this is not obvious from \\eq{5} since it contains a pure $\\zeta$ term with no time derivatives when the square root in \\eq{5} is expanded. In any case, it is possible to add \\eq{5} a total derivative term so that\n\\begin{eqnarray}\nS&=&-2\\int dt\\,a_B^3V_B\\,e^{3\\zeta}\\,\\left[1+\\frac{12H_B}{V_B}\\dot{\\zeta}+\\frac{6}{V_B}\\dot{\\zeta}^2\\right]^{1\/2}+2\\int dt \\,a_B^3\\,e^{3\\zeta}\\left[V_B+6H_B\\dot{\\zeta}\\right],\\label{14}\\\\\n&=&\\int dt\\, a_B^3\\,e^{3\\zeta}\\left[\\frac{3\\dot{\\Phi}_B^2}{V_B}\\dot{\\zeta}^2-\\frac{18H_B\\dot{\\Phi}_B^2}{V_B^2}\\dot{\\zeta}^3+. . . \\right].\\label{15}\n\\end{eqnarray}\nIt is now clear from \\eq{15} that the equation of motion involves only the time derivatives of $\\zeta$ and a constant mode is a trivial solution. Note that by normalization \\eq{7}, the scale factor $a_B$ has mass dimension $-1$. In the Hamiltonian language the extra surface term added in \\eq{14} corresponds to a canonical transformation $P_\\zeta\\to P_\\zeta+12a_B^3H_B-12 a_B^3H_B e^{3\\zeta}$ as compared to \\eq{12}. 
The Hamiltonian of \\eq{14} can be found as \n\\begin{eqnarray}\nH&=&-a_B^3\\dot{\\Phi}_B^2e^{3\\zeta}\\left[1-\\frac{2H_B}{a_B^3\\dot{\\Phi}_B^2}e^{-3\\zeta}P_\\zeta+\\frac{1}{12a_B^6\\dot{\\Phi}_B^2}e^{-6\\zeta}P_\\zeta^2\\right]^{1\/2}-H_B P_\\zeta+a_B^3\\dot{\\Phi}_B^2e^{3\\zeta},\\label{16}\\\\\n&=&\\frac{V_B}{12a_B^3\\dot{\\Phi}^2}e^{-3\\zeta}P_\\zeta^2+\\frac{H_BV_B}{12a_B^6\\dot{\\Phi}_B^4}e^{-6\\zeta}P_\\zeta^3+. . . \\label{17}\n\\end{eqnarray}\nwhere the dotted terms are suppressed with more powers of the background scale factor. Evidently, the first term in \\eq{17} dominates the dynamics at late times in inflation. \n\nThe interaction picture operators are governed by the free Hamiltonian\n\\begin{equation}\\label{18}\nH_0=\\frac{V_B}{12a_B^3\\dot{\\Phi}_B^2}P_\\zeta^2.\n\\end{equation}\nTheir time evolution can be found as\n\\begin{eqnarray}\n&&\\zeta_I(t)=\\zeta_i+\\fr16\\int_{t_i}^t dt'\\frac{V_B(t')}{a_B(t')^3\\dot{\\Phi}_B(t')^2}\\,P_i,\\nonumber\\\\\n&&P_{\\zeta I}=P_i,\\label{19}\n\\end{eqnarray}\nwhere $\\zeta_i$ and $P_i$ are the initial time independent (Schr\\\"{o}dinger) operators obeying $[\\zeta_i,P_i]=i$. \n\nAn operator in the Heisenberg picture $O_H$ can be related to the corresponding interaction picture operator $O_I$ by\n\\begin{equation}\nO_H=U_I^\\dagger O_I U_I,\\label{20}\n\\end{equation}\nwhere $i\\dot{U}_I=H_IU_I$, $U(t_i)=I$ and $H_I$ is the interaction Hamiltonian in the interaction picture. As shown by Weinberg \\cite{il1}, \\eq{20} can be expanded as\n\\begin{equation}\\label{21}\nO_H(t)=O_I(t)-i\\int_{t_i}^t dt'\\,[O_I(t),H_I(t')]-\\int_{t_i}^t dt''\\int_{t''}^t dt'\\,[[O_I(t),H_I(t')],H_I(t'')]+. . . \n\\end{equation}\nwhere the dotted terms contain more nested commutators of $O_I$ with $H_I$. Eq. \\eq{21} can be used as the basis for the in-in perturbation theory. From \\eq{17} the interaction Hamiltonian can be determined as\n\\begin{equation}\\label{22}\nH_I=\\frac{V_B}{24a_B^3\\dot{\\Phi}_B^2}\\left\\{\\left(e^{3\\zeta_I}-1\\right),P_{\\zeta I}^2\\right\\}+. . .\n\\end{equation}\nwhere we apply symmetric ordering to make $H_I$ Hermitian. \n\nOne may approximate the time integrals during slow-roll inflation by taking (note the normalization \\eq{7}) \n\\begin{equation}\na_B\\simeq \\frac{1}{H_B}e^{H_B(t-t_i)}\\label{23}\n\\end{equation}\nand by treating the slowly changing variables $H_B$, $V_B$ and $\\dot{\\Phi}_B$ as constants. Using \\eq{22} in \\eq{21} for $\\zeta$, one finds that at the end of inflation after $N$ e-folds \n\\begin{equation}\\label{24}\n\\zeta_H=\\zeta_i+\\frac{H_B^2}{12M_p^2\\epsilon}P_i+. . .+O\\left(e^{-3N}\\right),\n\\end{equation}\nwhere the slow-roll parameter is defined as\n\\begin{equation}\n\\epsilon=\\frac{3\\dot{\\Phi}_B^2}{V_B}\\simeq-\\frac{\\dot{H}_B}{H_B},\\label{25}\n\\end{equation}\nand dots denote time independent but nonlinear terms in $\\zeta_i$ and $P_i$ coming from the lower limits of the time integrals in \\eq{21} at $t_i$. Consequently, one sees that at late times $\\zeta_H$ exponentially asymptotes to a constant operator and no infrared logarithms appear.\n\nWe observe that neglecting all but the first term in \\eq{17}, which are exponentially suppressed at late times, gives an explicitly integrable system. 
Namely, the (classical) equations corresponding to the Hamiltonian\n\\begin{equation}\nH=\\frac{V_B}{12a_B^3\\dot{\\Phi}_B^2}e^{-3\\zeta}P_\\zeta^2,\\label{26}\n\\end{equation}\ncan be integrated to get\n\\begin{eqnarray}\n&&\\zeta(t)=\\zeta_i+\\fr23 \\ln\\left[1+P_i e^{-3\\zeta_i}\\int_{t_i}^t dt'\\frac{V_B}{4a_B^3\\dot{\\Phi}_B^2}\\right],\\nonumber\\\\\n&&P_\\zeta(t)=P_i+P_i^2e^{-3\\zeta_i}\\int_{t_i}^t dt'\\frac{V_B}{4a_B^3\\dot{\\Phi}_B^2}.\\label{27}\n\\end{eqnarray}\nIn the quantum theory \\eq{27} should be true for Heisenberg operators provided that operator orderings are resolved in a suitable way. Eq. \\eq{27} shows that the asymptotic change of $\\zeta$ compared to its initial value is determined by the dimensionless parameter $H^2\/(M_p^2\\epsilon)$. \n\nTill now in our discussion we have focused on the evolution of the Heisenberg operator $\\zeta_H$. As for the initial state it is natural to take a minimum uncertainty Gaussian wave function $\\psi(\\zeta_i)$, which has zero mean $<\\zeta_i>=0$ and the deviations $<\\zeta_i^2>=\\sigma^2$, $<P_i^2>=1\/4\\sigma^2$. Although this choice can be motivated from the field theory side, the value of the deviation $\\sigma$ cannot be directly deduced from the field theory, which has a continuous spectrum of wave numbers and the zero mode is not isolated (unless the space is compact). On the other hand, one must also note that the validity of the perturbation theory actually depends on the initial state. Choosing $\\sigma$ to be extremely small would yield a large momentum that may invalidate the perturbative expansion in $P_\\zeta$, at least at early times during inflation when the exponential suppression is not effective yet. \n\nIn showing the constancy of $\\zeta$ in single scalar inflationary models, the consistency condition in the squeezed limit \\cite{c1,c2}, hence the choice of the Bunch-Davies vacuum, plays an important role, see \\cite{c3}. In the minisuperspace model, no such property is needed since the time independence of $\\zeta$ becomes an operator statement, i.e. the Heisenberg picture $\\zeta_H$ exponentially approaches a constant operator as in \\eq{24}. We anticipate this should also be the case in field theory since at late times the semi-classical approximation becomes excellent \\cite{cl} and $\\zeta$ is conserved in the classical theory.\\footnote{Indeed, one naturally expects that some form of minisuperspace description of superhorizon modes, which is similar to the one considered here, must be valid at late times. However, such an approximation, if it exists at all, is only possible in a suitable gauge that allows a smooth soft limit, which is not the case for the standard $\\zeta$-gauge because the shift $N^i$ is non-local.} Therefore, the constancy of $\\zeta$ in cosmological perturbation theory must hold not just for the Bunch-Davies vacuum but for a wider range of states. \n\n\\section{Adding a Spectator}\n\nWe have seen in the previous section that the $\\zeta$ self-interactions cannot yield infrared logarithms in the minisuperspace perturbation theory. From \\eq{19} and \\eq{23} one sees that $[\\zeta_I(t),\\zeta_I(t')]\\propto 1\/a_B^3$, thus the time integrals in the perturbative series in \\eq{21} can produce an infrared logarithm provided that $H_I\\propto a_B^3$. To produce such an interaction term one may add a self-interacting {\\it massless} spectator scalar $\\varphi$ which has the potential $V(\\varphi)$. 
It is easy to repeat the gauge fixing in the presence of the spectator to get the following gauge fixed action: \n\\begin{equation}\nS=-2\\int dt\\,a_B^3V_B\\,e^{3\\zeta}\\,\\left[1+\\frac{V(\\varphi)}{V_B}\\right]^{1\/2}\\left[1+\\frac{12H_B}{V_B}\\dot{\\zeta}+\\frac{6}{V_B}\\dot{\\zeta}^2-\\frac{1}{2V_B}\\dot{\\varphi}^2\\right]^{1\/2}.\\label{28}\n\\end{equation}\nBy expanding the square roots one may obtain the free Lagrangian and various interactions, where the quadratic spectator action is given by\n\\begin{equation}\\label{29}\nS=\\fr12\\,\\int \\,dt\\, a_B^3\\,\\dot{\\varphi}^2.\n\\end{equation}\nHence, the interaction picture spectator operators evolve like \n\\begin{eqnarray}\n&&\\varphi_I=\\varphi_i+\\int_{t_i}^t \\frac{dt'}{a_B(t')^3}P_{\\varphi i},\\nonumber\\\\\n&&P_{\\varphi I}=P_{\\varphi i},\\label{30}\n\\end{eqnarray}\nwhere $P_{\\varphi I}$ is the momentum conjugate to $\\varphi_I$, and $\\varphi_i$ and $P_{\\varphi i}$ are time independent initial operators obeying $[\\varphi_i,P_{\\varphi i}]=i$. \n\nAmong the interactions that follow from \\eq{28} we focus on the following one\n\\begin{equation}\nH_I=a_B^3V(\\varphi_I)\\left(e^{3\\zeta_I}-1\\right),\\label{31}\n\\end{equation}\nwhich would potentially yield infrared logarithms in the perturbation theory as noted above. Indeed, from the first order correction in \\eq{21} one may find\n\\begin{equation}\\label{32}\n\\zeta_H=\\zeta_I-\\fr12 \\int_{t_i}^t dt'a_B(t')^3\\,V(\\varphi_I(t'))e^{3\\zeta_I(t')}\\int_{t'}^t\\,dt''\\frac{V_B}{a_B^3\\dot{\\Phi}_B^2}.\n\\end{equation}\nUsing \\eq{23} one can get a late time expansion of the interaction picture operators so that at the end of inflation after $N$ e-folds one has \n\\begin{eqnarray}\n&&\\zeta_I=\\zeta_i+\\frac{H_B^2}{12M_p^2\\epsilon}P_{i}+O\\left(e^{-3N}\\right),\\nonumber\\\\\n&& \\varphi_I=\\varphi_i+\\frac{H_B^2}{3}P_{\\varphi i}+O\\left(e^{-3N}\\right).\\label{33}\n\\end{eqnarray}\nUtilizing this expansion in \\eq{32} gives\n\\begin{equation}\n\\zeta_H(t)=\\zeta_c+\\frac{1}{12H_B^2M_p^2\\epsilon}\\left[1-3\\ln\\left(\\frac{a_B(t)}{a_B(t_i)}\\right)\\right]\\,V(\\varphi_c)\\,e^{3\\zeta_c}+O\\left(e^{-3N}\\right),\\label{34}\n\\end{equation}\nwhere the constant operators $\\zeta_c$ and $\\varphi_c$ are defined from \\eq{33} by\n\\begin{eqnarray}\n&&\\zeta_c=\\zeta_i+\\frac{H_B^2}{12M_p^2\\epsilon}P_{i},\\nonumber\\\\\n&&\\varphi_c=\\varphi_i+\\frac{H_B^2}{3}P_{\\varphi i}.\\label{344}\n\\end{eqnarray}\nAs it is anticipated, the interaction \\eq{31} yields an infrared logarithm in the first order perturbation theory. \n\nThe above calculation hints how one should handle the higher order perturbative corrections. Namely, one should first evaluate the commutators in \\eq{21} using\n\\begin{equation}\n[\\zeta_I(t),\\zeta_I(t')]=\\frac{i}{6}\\int_t^{t'}dt''\\frac{V_B}{a_B^3\\dot{\\Phi}_B^2},\\hs{10}[\\varphi_I(t),\\varphi_I(t')]=i\\int_t^{t'}\\frac{dt''}{a_B^3}.\\label{35}\n\\end{equation}\nOne can then apply the late time expansion of the interaction picture operators given in \\eq{33} and calculate the time integrals of the leading order terms. 
Using this strategy one may obtain the following second order correction to $\\zeta_H$:\n\\begin{equation}\n\\left[\\frac{V(\\varphi_c)^2}{16M_p^4H_B^2\\epsilon^2}e^{6\\zeta_c}+\\frac{V'(\\varphi_c)^2}{36M_p^2H_B^2\\epsilon}\\left(e^{6\\zeta_c}-e^{3\\zeta_c}\\right)\\right]\\left[1-2\\ln\\left(\\frac{a_B(t)}{a_B(t_i)}\\right)+\\fr32 \\ln^2\\left(\\frac{a_B(t)}{a_B(t_i)}\\right)\\right],\\label{36}\n\\end{equation}\nwhere $V'(\\varphi)=dV\/d\\varphi$. Note that \\eq{36} contains a different type of infrared logarithm, i.e. a log square. \n\nIt is possible to argue that the interaction \\eq{31} yields the factor $\\ln^n(a_B(t)\/a_B(t_i))$ in the $n$'th order perturbation theory. In the $n$'th term of \\eq{21} there are $n$ factors of $a_B^3$ coming from the interaction Hamiltonian and there are $n$ factors of $a_B^{-3}$ coming from $n$-commutators. At late times, these cancel each other and the interaction picture operators asymptote to constant operators as in \\eq{33}. Hence, to leading order one ends up with an $n$-dimensional time integral of a constant operator giving $(t-t_i)^n=H_B^n\\ln^n(a_B(t)\/a_B(t_i))$. \n\nThese findings are consistent with what has been observed in the field theory calculations \\cite{il1,il2,il3}. In the minisuperspace approximation one can further make a nonperturbative estimate as follows: Using the asymptotic form \\eq{33}, the interaction Hamiltonian converges to\n\\begin{equation}\\label{37}\nH_I=a_B^3V(\\varphi_c)\\left(e^{3\\varphi_c}-1\\right)\\left\\{1+O\\left(e^{-3N}\\right)\\right\\}.\n\\end{equation}\nOne can check that at late times the commutator $[H_I(t),H_I(t')]$ is suppressed by a huge factor related to the number of e-folds as compared to the product $H_I(t)H_I(t')$. Therefore, to a very good approximation $H_I(t)$ becomes a self-commuting operator of its argument after some time $t_m$ corresponding to, say, 10 e-folds (the fact that $\\zeta$ has a similar property has been used to argue the classicality of the cosmological perturbations \\cite{cl}). The unitary interaction picture evolution operator can be decomposed like\n\\begin{equation}\nU_I(t,t_i)=U_2(t,t_m)U_1(t_m,t_i).\\label{38}\n\\end{equation}\nSince $H_I(t)$ can be treated as a self-commuting operator when $t>t_m$, one may approximate\n\\begin{equation}\nU_2(t,t_m)=Te^{-i\\int_{t_m}^t dt'H_I(t')}\\simeq e^{-i\\int_{t_m}^t dt'H_I(t')}.\\label{39}\n\\end{equation}\nFurthermore one has\n\\begin{equation}\n\\zeta_H=U_I^\\dagger\\zeta_IU_I=U_1^\\dagger U_2^\\dagger\\zeta_I U_2U_1.\\label{40}\n\\end{equation}\nTo proceed we note \n\\begin{equation}\n[\\zeta_I(t),H_I(t')]=3[\\zeta_I(t),\\zeta_I(t')]H_I(t'),\\label{41}\n\\end{equation}\nthus using \\eq{39} one may find\n\\begin{eqnarray}\n[\\zeta_I(t),U_2]&&\\simeq\\fr12 \\int_{t_m}^tdt'a_B(t')^3V(\\varphi(t'))e^{3\\zeta_I(t')}\\int_t^{t'}dt''\\frac{V_B(t'')}{a_B(t'')^3\\dot{\\Phi}_B(t'')^2}\\,U_2\\nonumber\\\\\n&&\\simeq \\frac{1}{12H_B^2M_p^2\\epsilon}\\left[1-3\\ln\\left(\\frac{a_B(t)}{a_B(t_i)}\\right)\\right]V(\\varphi_c)e^{3\\varphi_c}\\,U_2.\\label{42}\n\\end{eqnarray}\nSo \\eq{40} becomes\n\\begin{equation}\n\\zeta_H\\simeq U_1^\\dagger\\zeta_cU_1+\\frac{1}{12H_B^2M_p^2\\epsilon}\\left[1-3\\ln\\left(\\frac{a_B(t)}{a_B(t_i)}\\right)\\right]U_1^\\dagger V(\\varphi_c)e^{3\\varphi_c}U_1.\\label{43}\n\\end{equation}\nThe unitary operator $U_1$ only mixes the operators up to time $t_m$ and its action merely produces constant operators since these do not depend on the final time. 
As a result, in \\eq{43} we are able to extract the leading order time dependence of $\\zeta$, which is a single infrared logarithm of the form $\\ln a_B$, and other corrections are exponentially suppressed. On dimensional grounds one may estimate that (in expectation values) $V(\\varphi_c)\\propto H_B^4$, therefore the infrared logarithm correction is suppressed by the factor $H^2\/(M_p^2\\epsilon)$, which is generically small in realistic models. \n\nThe above nonperturbative argument shows that the $(\\ln a_B)^n$ behavior for $n>1$ that arises in the $n$'th order perturbation theory may be an artifact of that approximation. For $t3H_B\/2$. Using \\eq{30m} in \\eq{32}, one may then see that the integrand in \\eq{32} does {\\it not} approach to a time independent operator yielding the infrared logarithm as in the case of a massless spectator, but instead it becomes an exponentially decreasing function of $t'$ whose integral gives a smaller correction for larger mass. \n\n\\section{Conclusions}\n\nIn this paper we investigate the appearance of the infrared logarithms in the cosmological perturbation theory by studying the scalar slow-roll inflationary model in the minisuperspace approximation, which simplifies the field theoretical system involving gravity to a quantum mechanical one. The minisuperspace theory is still highly nontrivial because of the nonlinearities and the local gauge invariance related to the time reparametrizations. We obtain the complete gauge fixed action for the curvature perturbation $\\zeta$, both in the single scalar case and when a self-interacting spectator is added. The full action can be expanded around the inflationary background yielding an infinite number of interaction terms. \n\nIn our analysis we focus on the time evolution of the Heisenberg operators, which can be calculated using in-in perturbation theory. Thus, our findings are state independent provided that the expectation values do not break down the series expansion. We verify that in the single scalar case no infrared logarithms appear and $\\zeta$ exponentially asymptotes to a constant operator. In the presence of a spectator we find that the $n$'th order perturbation theory gives an infrared logarithm of the form $(\\ln a_B)^n$. Note that supposing the existence of a spectator is not unnatural for inflation; in any model where Higgs is not the inflaton, it actually becomes a self-interacting spectator scalar.\n\nIn the minisuperspace approximation it is possible to examine the time evolution of the Heisenberg operators nonperturbatively. Following some time after the beginning of inflation, the interaction picture operators including the interaction Hamiltonian become nearly self-commuting at different times. This allows one to extract the leading order time evolution of $\\zeta$ where all other corrections are exponentially suppressed. In the presence of a spectator, this leading order correction turns out to be a single infrared logarithm $\\ln a_B$. It would be interesting to generalize this argument to field theory to understand the structure of the infrared logarithms in the cosmological perturbation theory. \n\nOne usually attributes the emergence of infrared logarithms to loop effects. This is natural since in field theory calculations they normally appear in loop corrections. However, loops do not exist in the minisuperspace approach yet we still encounter infrared logarithms implied by the Heisenberg picture equations of motion. 
This indicates that neither the loops nor the modes running in them are the primary reason for the existence of infrared logarithms. Indeed, consider as an example the three point function $\left< \phi(k_1)\phi(k_2)\phi(k_3)\right>$ of a self-interacting {\it massless} test scalar field. It is not difficult to see that this correlation function is time dependent {\it at tree level} because of a cubic interaction term $H_I=g\, a^3 \int d^3 x\,\phi^3$, even at late times when $k_1$, $k_2$ and $k_3$ become superhorizon. Choosing the vacuum, and correspondingly the mode functions, determines the precise form of this time dependence, e.g. for the Bunch-Davies vacuum one gets an infrared logarithm. The crucial point is that the cubic interaction induces nontrivial superhorizon evolution which is {\it not} exponentially suppressed. In single field inflation, one may see that $\zeta$ self-interactions cannot produce such effects mainly because of the shift symmetry (as shown above, pure $\zeta$ interactions containing no derivatives, which are potentially dangerous, actually disappear after integration by parts). Nevertheless, in the presence of a spectator field there are interactions which yield non-negligible superhorizon evolution both in classical and quantum theories. We expect these conclusions to hold irrespective of the gauge conditions or possible explicit non-localities present in the action. \n\nHence, the emergence of infrared logarithms or some other form of nontrivial superhorizon motion has a dynamical origin related to suitable interactions. Remember that in the minisuperspace model, solving the unitary evolution nonperturbatively gives only a single infrared logarithm as opposed to perturbation theory, and this shows the importance of determining the dynamics correctly. On the other hand, the initial state chosen is also crucial in fixing the exact form of the superhorizon time dependence. Of course, loops also arise from such interactions sandwiched between states. As discussed in \cite{proj}, some corrections related to superhorizon modes are projection effects that define the mapping between the physical scales of inflation and the post-inflationary universe, and these disappear in observable quantities. One way of understanding projection effects is to keep in mind that the decomposition of the metric into background and fluctuation parts is introduced for computational convenience. Strictly, the {\it full} metric must be used in relating the comoving and physical scales. In the minisuperspace theory any constant piece of $\zeta$ will disappear as a projection artifact but a time dependent part represents a real physical effect, modifying for instance the Hubble expansion rate. \n\n\n\begin{acknowledgments}\nThis work, which has been done at C.\.{I}.K. Ferizli, Sakarya, Turkey without any possibility of using references, is dedicated to my friends at rooms C-1 and E-10 who made my stay bearable at hell for 440 days between 7.10.2016 and 20.12.2017. I am also indebted to the colleagues who show support in these difficult times. \n\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\section{Introduction}\n\n\emph{INTEGRAL} (INTErnational Gamma Ray Astrophysics Laboratory, \citet{Winetal03}) was launched in 2002 and since then has performed high-quality observations in the energy band from 3 keV up to $\sim 10$ MeV. 
The \emph{INTEGRAL} payload consists of two main soft gamma-ray instruments (the imager IBIS \citep{Ubeetal03}, and the spectrometer SPI \citep{Vedetal03}) and two monitors (in X-rays JEM-X \citep{Lunetal03}, and in optical OMC \citep{Masetal03}). The wide field-of-view of the imager IBIS provides an ideal opportunity to survey the sky in hard X-rays.\\\n\nDuring its first 6 years in orbit, \emph{INTEGRAL} has covered nearly the whole sky. The observational data have been mainly used to study the soft gamma-ray emission from the Galactic plane (GP) \citep{Bouetal05, Krietal06} through the Galactic plane scans and the Galactic centre (GC) \citep{Beletal04, Revetal04, Beletal06, Krietal06} through the Galactic centre deep exposure programme. A number of papers have already presented general surveys \citep{Bazetal06, Biretal06} of the sky as well as of specific regions \citep{Gotetal05, Haretal06, Moletal04} and population types \citep{Baretal06, Basetal06, Sazetal07, Becetal06, Becetal09}.\\\n\nThe majority of the classified sources detected by \emph{INTEGRAL} are either low- and high-mass X-ray binaries (LMXBs and HMXBs) or AGNs \citep{Bodetal07}. However, a significant fraction of the detected sources remain unidentified. A special approach to population classification is required for the GC region to resolve the population types because of the high density of sources. Fortunately, the physics of the sources may help us to unveil their type. Indeed, the bulk of the \emph{INTEGRAL} sources are accreting systems that are expected to be intrinsically variable on multiple timescales depending on the source type and the nature of the variability. For instance, X-ray binaries (XRBs) may exhibit variability on timescales that range from milliseconds (supporting the idea that emission originates close to the compact object in the inner accretion radius) to hours and days, indicating that the variability can originate throughout the accretion flow at multiple radii and propagate inwards to modulate the central X-ray emission \citep{AreUtt05}. This idea is supported by the known correlation between millisecond\/second and hour\/day scale variability in XRBs \citep{Utt03}. LMXBs may exhibit flaring behavior with an increase in both emission intensity and hardness over a period of a few hundred to a few thousand seconds. X-ray bursts with rise times of a few seconds and decay times of hundreds of seconds or even several hours \citep{Baretal02} are also common to these objects. On the other hand, HMXBs are known to exhibit variability on timescales ranging from a fraction of a day up to several days, generated by the clumpiness of the stellar wind accreting onto the compact object \citep{Ducetal09}. Hour-long outbursts caused by variable accretion rates are observed in supergiant fast X-ray transients, a sub-class of HMXBs discovered by INTEGRAL \citep{Rometal2009}. Owing to their larger size, AGNs of different types exhibit day-to-month(s) variability depending on the black hole mass \citep{IshCou09}. Gamma-ray loud blazars have variability timescales in the range from $10^{1.6}$ to $10^{5.6}$~s \citep{LiaLiu03}. 
Therefore, a list of \\emph{INTEGRAL} sources with quantitative measurements of their variability would be an important help to classifying the unidentified sources and more detailed studies of their physics.\\\\\n\nThe variability of \\emph{INTEGRAL} sources was addressed in the latest 4th IBIS\/ISGRI survey catalog paper \\citep{Biretal09} when the authors performed the so-called \\textit{bursticity} analysis intended to facilitate the detection of variable sources.\\\\\n\nHere we present a catalog of \\emph{INTEGRAL} variable sources identified in a large fraction of the archival public data. In addition to standard maps produced by the standard data analysis software, we compiled a $\\chi^2$ all-sky map and applied the newly developed method to measure the fractional variability of the sources detected by the IBIS\/ISGRI instrument onboard \\emph{INTEGRAL}. The method is sensitive to variability on timescales longer than those of single ScW exposures ($\\approx 2000$ seconds), i.e., to variability on timescales of hour(s)-day(s)-month(s). The catalog is compiled from the sources detected in the variability map. In addition, we implemented an online service providing the community with all-sky maps in the 20-40, 40-100, and 100-200 keV energy bands generated during the course of this research.\\\\\n\nIn the following, we describe the data selection procedure and the implemented data analysis pipeline (Sect.~\\ref{sec:datana}). In Sect.~\\ref{sec:method}, we outline our systematic approach to the detection of variability in \\emph{INTEGRAL} sources and describe our detection procedure in Sect.~\\ref{sec:detect}. We compile the variability catalog in Sect.~\\ref{sec:catvar}. In Sect.~\\ref{sec:skyview}, we briefly describe the implemented all-sky map online service. We make some concluding remarks in Sect.~\\ref{sec:conclu}.\n\n\\section{Data and analysis}\n\\label{sec:datana}\n\n\\subsection{Data selection and filtering}\n\nSince its launch, \\emph{INTEGRAL} has performed over 800 revolutions each lasting for three days. We utilized the ISDC Data Centre for Astrophysics \\citep{Couetal03} archive\\footnote{http:\/\/isdc.unige.ch} to obtain all public data available up to June 2009 and the Offline Scientific Analysis (OSA) v. 7.0 to process the data. \\\\\n\n\\emph{INTEGRAL} data are organized into science windows (ScWs), each being an individual observation that can be either of pointing or slew type. Each observation (pointing type) lasts 1 -- 3 ksecs. For our analysis, we chose all pointing ScWs with an exposure time of at least 1 ksec. We filtered out revolutions up to and including 0025 belonging to the early performance verification phase, observations taken in staring mode, and ScWs marked as bad time intervals in instrument characteristics data including ScWs taken during solar flares and radiation belt passages. Finally, after the reconstruction of sky images we applied the following statistical filtering. We calculated the standard deviation of the pixel distribution for each ScW and found the mean value of standard deviations for the whole data set. We then rejected all the ScWs in which the standard deviation exceeded the mean for the whole data set by more than 3$\\sigma$. We assumed the distribution of standard deviations of individual and independent ScWs to be normal. While calculating standard deviations in individual ScWs, image pixels were assumed to be independent. Thus, the filtering procedure allowed us to remove all ScWs affected by a high background level. 
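Schematically, this last selection step amounts to the following clipping procedure (the snippet is purely illustrative: the array of per-ScW standard deviations is a placeholder and does not correspond to any OSA tool):\n\begin{verbatim}\nimport numpy as np\n\n# per-ScW standard deviations of the reconstructed sky-image pixel\n# distributions (dummy values: 50 quiet ScWs plus one noisy ScW)\nrng = np.random.default_rng(0)\nscw_std = np.append(rng.normal(1.0, 0.05, 50), 3.5)\n\nmean_std = scw_std.mean()     # mean over the whole data set\nscatter = scw_std.std()       # scatter of the per-ScW standard deviations\n\n# reject ScWs whose pixel scatter exceeds the mean by more than 3 sigma,\n# i.e. ScWs affected by an anomalously high background level\ngood = scw_std <= mean_std + 3.0 * scatter\nprint(np.flatnonzero(~good))  # index of the rejected ScW\n\end{verbatim}\n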
In the end, 43~724 unique pointing-type ScWs were selected for the analysis, giving us a total exposure time of 80.0~Msec and a more than 95 percent sky coverage.\n\n\\subsection{Instrument and background}\n\nIn the present study we use only the low-energy detector layer of the IBIS coded-mask instrument, called ISGRI (\\emph{INTEGRAL} Soft Gamma Ray Imager, \\citet{Lebetal03}), which consists of 16~384 independent CdTe pixels. It features an angular resolution of $12^{\\prime}$ (FWHM) and a source location accuracy of $\\sim$1 arcmin, depending on the signal significance \\citep{Groetal03}. Its field of view (FOV) is $29^{\\circ} \\times 29^{\\circ}$. The fully-coded part of the FOV (FCFOV), i.e., the area of the sky where the detector is fully illuminated by the hard X-ray sources, is $9^{\\circ} \\times 9^{\\circ}$. It operates in the energy range between 15 keV and 1 MeV.\\\\\n\nOver short timescales, the variability of the background of the instrument is assumed to be smaller than the statistical uncertainties. However, this is not the case for mosaic images constructed from long exposures. In general, it is assumed that the mean ISGRI background in each individual pixel changes very little with time, and therefore the standard OSA software provides only one background map for the entire mission. During the construction of the all-sky map, we noted that the quality of the mosaics of the extragalactic sky region depends on the time period over which the data were taken. We therefore, concluded that the long-term variation in the background of the instrument \\citep{Lebetal05} significantly affects the extragalactic sky mosaic. On the other hand, in the GC and inner GP regions (l$\\left\\lbrace-90;90\\right\\rbrace$, b$\\left\\lbrace -20;20\\right\\rbrace$) the standard background maps provided by OSA provide better results (noise distributions are narrower). This might be because of the large number of bright sources and the Galactic ridge emission \\citep{Krietal06}, although we leave this question open for the future research.\\\\\n\nTo produce time-dependent background maps, we extracted raw detector images for each \\emph{INTEGRAL} revolution (3 days) and calculated the mean count rate in each individual pixel during the corresponding time period. To remove the influence of the bright sources on the neighboring background, we fitted and removed these sources from the raw detector images, i.e., in each ScW we constructed a model of the source pattern on the detector (pixel illumination fraction, PIF) and fitted the raw detector images using the model\n\\begin{equation}\nS_{k,l}=\\sum_{i=1}^M f_i \\times PIF_{k,l}+B,\n\\end{equation}\nwhere $S_{k,l}$ are the detector count rate, $PIF_{k,l}$ are the respective pattern model of source $i$ in the detector pixel with coordinates $(k,l)$, $f_i, i=1..M$ is the flux of source $i$ in the given ScW, and $B$ is the mean background level. This procedure was applied to all the detected sources in the FOV. The stability of the fitting procedure was tested using a large set ($>1~000$) of simulated ScWs with variable source fluxes. The results of the fit were normally distributed around the expected source flux, and therefore we can conclude that our procedure is sufficiently accurate to remove the point sources\nfrom the construction of the background maps. 
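For illustration, such a per-ScW fit can be written as a linear least-squares problem (all arrays and values below are placeholders rather than real IBIS\/ISGRI data):\n\begin{verbatim}\nimport numpy as np\n\nnpix, nsrc = 16384, 3                      # detector pixels, modelled sources\nrng = np.random.default_rng(1)\npif = rng.uniform(0.0, 1.0, (npix, nsrc))  # PIF of each source in each pixel\ndetector = (pif @ np.array([2.0, 0.5, 1.2])\n            + 5.0 + rng.normal(0.0, 0.1, npix))   # simulated detector counts\n\n# design matrix: one column per source pattern plus a constant column for B\ndesign = np.column_stack([pif, np.ones(npix)])\ncoeff, *rest = np.linalg.lstsq(design, detector, rcond=None)\nfluxes, background = coeff[:-1], coeff[-1]\nprint(fluxes, background)   # recovers the simulated source fluxes and B\n\end{verbatim}\n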
The results of the fitting procedure were then used to create a transformed detector image, $\\hat S_{k,l}$, defined as\n\\begin{equation}\n\\hat S_{k,l}=S_{k,l}-\\sum_{i=1}^M f_i \\times PIF_{k,l}.\n\\end{equation}\nBackground maps were then constructed by averaging the transformed detector images of a given data set.\n\nFrom our time-dependent background maps, we found that the shape of the ISGRI background varies with time, in particular after each solar flare. A long-term change in the background was noticed as well. This result agrees with the findings of \\citet{Lebetal05}. To take these variations into account, we generated background maps for each spacecraft revolution and in the image reconstruction step applied them to the extragalactic sky region.\n\nBesides the real physical background of the sky, there is also artificial component, because IBIS\/ISGRI is a coded-mask instrument with a periodic mask pattern. Therefore, the deconvolution of ISGRI images creates structures of fake sources that usually appear around bright sources. Apart from the periodicity of the mask, insufficient knowledge of the response function leads to residuals in the deconvolved sky images. The orientation of the spacecraft changes from one observation of the real source to another, so fake sources and structures around the real source contribute to the noise level of the local background. To reduce this contribution, we used a method described in Sect.~\\ref{subsec:imarec}.\\\\\n\n\\subsection{Image reconstruction}\n\\label{subsec:imarec}\nAfter producing the background maps as described in the previous subsection, we started the analysis of the data using the standard Offline Scientific Analysis (OSA) package, version 7.0, distributed by ISDC \\citep{Couetal03}. For image reconstruction, we used a modified version of the method described in \\citet{Ecketal08}. It is known that screws and glue strips attaching the IBIS mask to the supporting structure can introduce systematic effects in the presence of very bright sources \\citep{Nevetal09}. To remove these effects, we identified the mask areas where screws and glue absorb incoming photons, and we disregarded the pixels illuminated by these mask areas for the 11 brightest sources in the hard X-ray band. No more than 1\\% of the detector area was disregarded for each of the brightest sources. For weaker sources, the level of systematic errors produced by the standard OSA software was found to be consistent with the noise, so the modified method was not required. Finally, we summed all the processed images weighting by variances to create the all-sky mosaic. For this work, we produced mosaics in 3 energy bands (20-40, 40-100, and 100-200 keV). Both our all-sky map images and corresponding exposure maps are available online and we direct the reader to our online web service\\footnote{http:\/\/skyview.virgo.org.ua}. As an example, we provide here the image of the inner part (36$^{\\circ}$ by 12$^{\\circ}$) of the Galaxy in the 20-40 keV energy band (see Fig.~\\ref{fig:gc}).\n\n\\begin{figure*}[!t]\n\\begin{center}\n\\includegraphics[width=0.40\\textwidth]{fig1a.eps}\n\\includegraphics[width=0.14\\textwidth]{fig1b.eps}\n\\includegraphics[width=0.40\\textwidth]{fig1c.eps}\n\\caption{Lightcurves and variability map of HMXB 4U~1700-377 and LMXB GX~349+2. 
The solid line indicates the mean flux of the sources during the observation time, the dotted line shows the mean flux minus $S_{int}$, the dashed line shows the mean flux plus $S_{int}$.}\n\\label{fig:lc}\n\\end{center}\n\\end{figure*}\n\n\\section{Method of variability detection}\n\\label{sec:method}\nThe variability of \\emph{INTEGRAL} sources can be analyzed in a standard way by studying the inconsistency of the detected signal with that expected from a constant source by performing the $\\chi^2$ test. Here we consider introducing a variability measurement for the \\emph{INTEGRAL} sources and show how to apply it to the specific case of the coded-mask instrument. For an alternative approach based on the maximum likelihood function for the determination of intrinsic variability of X-ray sources the reader is referred to \\citet{Almetal00} and \\citet{Becetal07}.\n\nThe \\emph{INTEGRAL} data are naturally organized by pointings (ScW) with average duration of $\\sim 1-3$~ksec. Therefore, the simplest way to detect the variability of a source on ksec and longer timescales is to analyse the evolution of the flux from the source on a ScW-by-ScW basis. We define $F_i$ and $\\sigma_i^2$ to be the flux and the variance of a given source, respectively, in the $i$-th ScW. The weighted mean flux from the source is then given by\n\\begin{equation}\n\\langle F \\rangle=\\frac {\\sum_{i=1}^N \\frac {F_i}{\\sigma_i^2}} {\\sum_{i=1}^N \\frac{1}{\\sigma_i^2}},\n\\end{equation}\nwhere $N$ is the total number of ScWs. The variance of the source's flux, which is the mean squared deviation of the flux from its mean value during the observation time, is given by\n\\begin{equation}\nS_{tot}^2 = \\frac {\\sum_{i=1}^N \\frac{(F_i - \\langle F \\rangle)^2}{\\sigma_i^2}} {\\sum_{i=1}^N \\frac{1}{\\sigma_i^2}} = \\chi^2\\sigma^2,\n\\label{SV}\n\\end{equation}\nwhere $\\chi^2 = \\sum_{i=1}^N \\frac{(F_i - \\langle F \\rangle)^2}{\\sigma_i^2}$ and $\\sigma^2 = \\left(\\sum_{i=1}^N \\frac{1}{\\sigma_i^2}\\right)^{-1}$ is the variance of the weighted mean flux.\n\nHowever, in addition to intrinsic variance of the source, this value includes the uncertainty in the flux measurements during individual ScWs, i.e., the contribution of the noise. If the source variance is caused only by the noise, i.e., $F_i = \\langle F \\rangle \\pm \\sigma_i$, Eq.~(\\ref{SV}) is given by $S_{noise}^2 = N \\sigma^2$. To eliminate the noise contribution, we can subtract the noise term of the variance from the source variance and derive the \\emph{intrinsic variance} of the source\n\n\\begin{equation}\nS_{int}^2 = \\chi^2\\sigma^2 - N\\sigma^2.\n\\label{intvar}\n\\end{equation}\nWhen all measurement errors are equal ($\\sigma_i = \\sigma_0$, $\\sigma^2 = \\sigma_{0}^2\/N$), our case reduces to the method used by \\cite{Nanetal97}\n\n\\begin{equation}\nS_{int}^2 = \\frac {1}{N} \\sum_{i=1}^N (F_i - \\overline{F})^2 - \\sigma_{0}^2,\n\\end{equation}\nwhere $\\overline{F}$ is the unweighted mean flux and $S_{int}^2$ is called the \\emph{excess variance}. 
In the absence of measurement errors, our case reduces to the standard definition of the variance\n\n\\begin{equation}\nS_{int}^2 = \\frac {1}{N} \\sum_{i=1}^N (F_i - \\overline{F})^2.\n\\end{equation}\n\nGiven that different sources have different fluxes, the variability of sources can be quantified by using the normalized measure of variability, which we call here the \\emph{fractional variability}\n\\begin{equation}\nV = \\frac {S_{int}}{\\langle F \\rangle}.\n\\label{simplefracvar}\n\\end{equation}\nHowever, in reality, if one were to apply the above method to detect the variable sources in a crowded field (i.e., containing many sources) of a coded-mask instrument such as IBIS, one would infer {\\it all} the detected sources to be highly variable. This is because in coded-mask instruments, each source casts a shadow of the mask on the detector plane. If there are several sources in the field of view, each of them produces a shadow that is spread over the whole detector plane. Some detector pixels are illuminated by more than one source. If the signal in a detector pixel is variable, one can tell, only with a certain probability, which of the sources illuminating this pixel is responsible for the variable signal. Thus, in a coded-mask instrument, the presence of bright variable sources in the field of view introduces an ``artificial'' variability for all the other sources illuminating the same pixels. Since the overlap between the PIF of the bright variable source and the sources at different positions on the sky varies with the position on the sky, one is also unable to determine in advance the level of this ``artificial'' variability in a given region of the deconvolved sky image.\n\nTo overcome this difficulty, one has to measure the variability of the flux not only directly in the sky pixels at the position of the source of interest, but also in the background pixels around the source. Obviously, the ``artificial'' variability introduced by the nearby bright sources is similar in the adjacent background pixels to that in the pixel(s) at the source position. Therefore, one can produce the variability map for the whole sky and compare the values of variability at the position of the source of interest to the mean values of variability in the adjacent background pixels. The variable sources should be visible as local excesses in the variability map of the region of interest. If a source can be localized in the variability image, then the true fractional variability of the source is calculated as\n\n\\begin{equation}\nV_r = \\frac {\\sqrt{S_{int,s}^2 - S_{int,b}^2}} {\\langle F_s \\rangle - \\langle F_b \\rangle},\n\\label{fracvar}\n\\end{equation}\nwhere the subscript $b$ represents the values of the background in the area adjacent to the source and the subscript $s$ the values taken from the source position.\\\\\n\nTo illustrate the method, we present the lightcurves (Fig.~\\ref{fig:lc}) of two objects that are typical bright \\emph{INTEGRAL} sources: the HMXB 4U~1700-377, which is a very bright and very variable source ($V_{r} \\simeq 104$~\\%), and the LMXB GX~349+2, which is a moderately bright and variable source ($V_{r} \\simeq 45$~\\%). The solid line indicates the mean flux of the sources, $\\langle F \\rangle$. We can see that the mean flux deviation (dotted lines), calculated as the square root of the intrinsic variance, $S_{int}^2$, measures the average flux variation of the sources during the corresponding time. 
However, we note that in the present case we consider calculations based solely on a lightcurve. If one wishes to obtain a fractional variability value dividing the mean flux deviation by the mean flux, one will obtain the $V$ value, but not $V_{r}$, i.e., the contribution of bright variable neighbor sources is not treated properly. It is impossible to extract the variability of the background, $S_{int,b}$, and the mean background flux $\\left\\langle F_b \\right\\rangle$ using the source lightcurve only. A number of lightcurves of the neighboring pixels should also be compiled to estimate $S_{int,b}$ and $\\left\\langle F_b \\right\\rangle$. This number should be sufficiently high to obtain good estimates. Therefore, an all-sky approach is justified. In the current example, no sources are much brighter in the vicinity of the ones considered that could strongly affect them, so the difference between $V$ and the catalog $V_r$ value is around 3\\%, but for the weak sources in the vicinity of bright ones the difference would be higher.\n\n\\begin{figure*}[!th]\n\\begin{center}\n\\includegraphics[width=0.95\\textwidth]{fig2a.eps}\n\\includegraphics[width=0.95\\textwidth]{fig2b.eps}\n\\caption{Inner parts (36$^{\\circ}$ by 12$^{\\circ}$) of the INTEGRAL\/ISGRI all-sky maps in Galactic coordinates, Aitoff projection. The significance image (top) in the 20-40 keV energy band, has square root scaling. The bottom image shows the corresponding intrinsic variance map, and also has square root scaling. The circle shows the inner 4$^{\\circ}$ for which the variability background extraction was made from the box region.}\n\\label{fig:gc}\n\\end{center}\n\\end{figure*}\n\nLooking at Eq.~\\ref{fracvar}, one can indeed see that the effect of ``artificial'' fractional variability is strong for moderate and faint sources in the vicinity of the bright variable sources, while for bright sources the effect is small. The ``artificial'' variability introduced by the bright sources in their vicinity ($S_{int,b}^2$ for the surrounding sources) is in range from a fraction of a percent up to a few percent of their own variability (dictated by the PIF accuracy). When we consider the persistent source in the vicinity of the bright variable source, $S_{int,s}^2$ is defined by $S_{int,b}^2$ only (i.e., by variability introduced by the bright variable source). For moderate or faint sources, $S_{int,b}$ may well be comparable to their own flux, and if we apply Eq.~\\ref{simplefracvar} directly we will infer substantial fractional variability, which may well be between ten and fifty percent, or even higher. The bright sources are less sensitive to this effect because $S_{int,b}$ will be only a small fraction of their flux. We checked these conclusions by performing simulations of a moderate persistent source ($F = 1$~ct\/s) in the vicinity of a bright variable source ($\\langle F \\rangle = 20$~ct\/s). By applying Eq.~\\ref{simplefracvar} directly to measure the fractional variability of a moderate source, we found that $V \\simeq 25$\\% while Eq.~\\ref{fracvar} infered that the source was constant, i.e., $V_r \\simeq 0$\\%. 
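\n\nIn pseudo-code, the quantities entering Eqs.~(\ref{intvar}), (\ref{simplefracvar}), and~(\ref{fracvar}) can be evaluated as follows (the per-ScW fluxes and variances are simulated placeholders for the pipeline output of a single source and an adjacent background pixel):\n\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(2)\nn_scw = 500\nsigma2 = rng.uniform(0.2, 0.4, n_scw)**2           # per-ScW flux variances\nflux_src = (5.0 + rng.normal(0.0, 1.0, n_scw)      # intrinsic variability\n                + rng.normal(0.0, np.sqrt(sigma2)))\nflux_bkg = rng.normal(0.0, np.sqrt(sigma2))        # adjacent background pixel\n\ndef mean_and_intrinsic_variance(flux, sigma2):\n    w = 1.0 \/ sigma2\n    mean_f = np.sum(w * flux) \/ np.sum(w)   # weighted mean flux\n    var_w = 1.0 \/ np.sum(w)                 # variance of the weighted mean\n    chi2 = np.sum((flux - mean_f)**2 \/ sigma2)\n    return mean_f, chi2 * var_w - len(flux) * var_w   # mean flux, S_int^2\n\nf_s, s2_s = mean_and_intrinsic_variance(flux_src, sigma2)\nf_b, s2_b = mean_and_intrinsic_variance(flux_bkg, sigma2)\nv_r = np.sqrt(s2_s - s2_b) \/ (f_s - f_b)    # true fractional variability\nprint(round(100.0 * v_r, 1))                # roughly 20 per cent here\n\end{verbatim}\n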
\n\n\\begin{figure*}[!th]\n\\begin{center}\n\\includegraphics[width=0.61\\textwidth]{fig3a.eps}\n\\includegraphics[width=0.37\\textwidth]{fig3b.eps}\n\\includegraphics[width=0.61\\textwidth]{fig3c.eps}\n\\includegraphics[width=0.37\\textwidth]{fig3d.eps}\n\\caption{Top: the distributions of the intrinsic variability in pixels of $3.5^{\\circ} \\times 3.5^{\\circ}$ area (shown nearby) centered on the IGR~J18450-0435. In green is $B_1$ distribution in the range $(min, 2m-min)$ here representing the local intrinsic variance background, in red is the sources contribution, f(x) is the gaussian distribution. Bottom: the $3.5^{\\circ} \\times 3.5^{\\circ}$ area (shown nearby) centered on the GC source 1A~1743-288. In green is $B_1$ distribution in the range $(min, 2m-min)$, in blue is $B_2$ distribution from the empty region in GC here representing the local intrinsic variance background, in red and f(x) are same as above.}\n\\label{fig:sintbhist}\n\\end{center}\n\\end{figure*}\n\nIn the course of simulations, we also checked the behavior of the ``artificial'' variability in the case a moderate persistent source situated at the ghost position of the bright variable source. We considered two situations: a) when the mosaic image consisted mostly of ScWs in a configuration being determined almost entirely by the spacecraft orientation, which remained constant (i.e., sky region of the Crab), and b) when the mosaic image contained only a chance fraction of ScWs in that specific configuration because of different spacecraft orientations (i.e., sky region of the Cyg X-1). The simulations showed that the flux and therefore the variability measurement of the mosaic deviated from the input source parameters in case a) only, while in case b), there were no significant deviations. As expected in case a), the moderate source was affected. This was caused by the coincidence of the sources shadowgrams. The deconvolution procedure was unable to distinguish the sources correctly on the ScW level, therefore the mosaic results were affected. We detected an incorrect flux and variability for the moderate source. In reality, this particular simulated situation is very rare (see Sect.~\\ref{sec:catvar} for discussion of this situation in real case). Even if the constant orientation of some observation is kept, different observation patterns applied during the observation will significantly reduce the undesirable effect considered in situation a). \n\\\\\n\n\\section{The detection of variable sources}\n\\label{sec:detect}\n\nFor our study, we used the latest distributed \\emph{INTEGRAL} reference catalog\\footnote{http:\/\/isdc.unige.ch\/?Data+catalogs} \\citep{Ebietal03} version 30 and selected the sources with ISGRI\\_FLAG == 1, i.e., all the sources ever detected by IBIS\/ISGRI.\\\\\n\nBased on the aforementioned method, we compiled the instrinsic variance maps ($S_{int}^2$) of the \\emph{INTEGRAL} sky in three energy bands (20-40, 40-100, and 100-200 keV). This was accomplished by performing pixel operations following Eq.~(\\ref{intvar}) on the constructed all-sky mosaic maps of $\\chi^2$, $\\sigma^2$ (variance), and $N$, which is the map showing the number of ScWs used in a given pixel for the all-sky mosaic. 
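In terms of the mosaic pixel arrays this is a simple per-pixel operation, schematically (with illustrative array names; one set of maps per energy band):\n\begin{verbatim}\nimport numpy as np\n\n# chi2_map, var_map, n_map: all-sky mosaics of chi^2, sigma^2 and N\n# (tiny random placeholders here instead of the real FITS images)\nrng = np.random.default_rng(4)\nn_map = rng.integers(500, 2000, (64, 64)).astype(float)\nvar_map = np.full((64, 64), 1.0e-4)\nchi2_map = n_map + rng.normal(0.0, np.sqrt(2.0 * n_map))\n\ns_int2_map = chi2_map * var_map - n_map * var_map   # intrinsic variance map\n\end{verbatim}\n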
As an example, the instrinsic variance image of the inner region of our Galaxy is given in Fig.~\\ref{fig:gc} (see our online service for all-sky maps).\\\\\n\nAfter compiling an intrinsic variance map in each band, we calculated the local (or background) intrinsic variance, $S_{int,b}^2$, and its scatter, $\\Sigma$, in the region around each catalog source. This was performed by the following scheme. First, the values of the mean $m$, the minimum $min$, and their difference were calculated in a square of $3.5^{\\circ} \\times 3.5^{\\circ}$ centered on the source position. We then assumed that the pixel values in the corresponding area are distributed normally and there is the always-positive contribution from the sources in the field. Since the sources occupy a small fraction of the considered sky region, the initial mean value, $m$, is almost unaffected by their presence because of their small contribution. To reject the source contribution and to obtain the parameters of the normal component, we calculated the mean and the standard deviation in the range $(min, 2m-min)$. The newly found mean value is $S_{int,b}^2$ in Eq.~\\ref{fracvar} and the standard deviation shows its scatter $\\Sigma$. The detectability of the sources in the intrinsic variance map is then defined by the condition that $S_{int}^2 \\geq S_{int,b}^2 + 3 \\Sigma$. For an illustration (see top of Fig.~\\ref{fig:sintbhist}) we present a region around INTEGRAL source IGR~J18450-0435 with respective distributions. The green solid area is the distribution in the range $(min, 2m-min)$ with the mean value representing $S_{int,b}^2$ for the current source, and in red the always-positive contribution from the sources in the field is given. The distribution in the range $(min, 2m-min)$ is well fitted by the Gaussian shown with a dashed line. The top of Fig.~\\ref{fig:sintbhist} justifies well the approximate rejection of the source contribution to the overall pixel value distribution in the field. Applying this rejection procedure allows us to obtain the true scatter in the background rather than the scatter in the overall distribution (including sources), which is obviously higher.\n\nThe detection of variable sources in the innermost region of our Galaxy (circle of 4$^\\circ$ from the GC, see Fig.~\\ref{fig:gc}) was performed differently because it is a crowded field and therefore a large number of sources contribute to the intrinsic variance background of each other. In contrast to the individual source case, the sources in the inner GC region occupy a significant fraction of the region around the source of interest and therefore influence the $m$ value significantly. This results in improper estimation of the background distribution if one applies the rejection procedure based on $(min, 2m-min)$ range ($B_1$ distribution at the bottom of Fig.~\\ref{fig:sintbhist}). In place of calculating the intrinsic variance background and its scatter for each GC source individually, these values were therefore estimated from an empty field near the GC (box at the bottom image of Fig.~\\ref{fig:gc}) and assumed to be equal for all the sources in the GC region. 
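In pseudo-code, the per-source background estimate and detection criterion described above read (the patch below is a purely illustrative stand-in for a $3.5^{\circ} \times 3.5^{\circ}$ cut-out of the intrinsic-variance mosaic):\n\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(3)\npatch = rng.normal(1.0, 0.1, (100, 100))   # background-like pixels\npatch[40:43, 50:53] += 4.0                 # a few source-like pixels\ns_int2_src = patch[41, 51]                 # value at the source position\n\nm, lo = patch.mean(), patch.min()\nclipped = patch[(patch > lo) & (patch < 2.0 * m - lo)]  # reject source tail\ns_int2_bkg = clipped.mean()    # local intrinsic-variance background\nbig_sigma = clipped.std()      # its scatter, Sigma\n\nprint(s_int2_src >= s_int2_bkg + 3.0 * big_sigma)   # detection criterion\n\end{verbatim}\n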
The $B_2$ distribution shown at Fig.~\\ref{fig:sintbhist} (bottom) is accurately determined and well fitted by the Gaussian shown by the dashed line.\\\\\n\n\\section{The catalog of variable sources}\n\\label{sec:catvar}\n\nThe search for variable sources from the reference catalog was performed on the maps generated from ScWs divided into three equal subsequent time periods (maps 1,2, and 3, approximately 2 years each). This was done to detect transient sources that would be difficult (or even impossible) to detect on the map integrated over the whole time period (map T). The search was performed separately on each map (1,2,3) and in each energy band. The results of the search were put into one list from which the sources were selected. Finally, the search was performed on the total map (T) to find sources that were detected only on the map integrated over the whole time period and identify the sources that were active only during specific time periods. The resulting catalog of variable sources detected by \\emph{INTEGRAL} can be found in Table~\\ref{varcat}.\n\n\\begin{figure}[!t]\n\\begin{center}\n\\includegraphics[width=0.99\\columnwidth]{fig4.eps}\n\\caption{Number of significantly variable sources detected in different energy bands classified by types.}\n\\label{chart:poptypes}\n\\end{center}\n\\end{figure}\n\nFigure~\\ref{chart:poptypes} shows the number of significantly variable sources from our catalog for each source type (HMXB, LMXB, AGN, unidentified, and miscellaneous). The majority of the variable sources ($\\sim$76\\%) in all energy bands are Galactic X-ray binaries. We see that in the 100-200 keV band there are four times more LMXBs than HMXBs. The remaining variable sources are AGNs ($\\sim$10\\%), unidentified ($\\sim$7.5\\%), and other ($\\sim$6.5\\%) source types (cataclysmic variables, gamma-ray bursts, and pulsars). The number of significantly variable sources decreases with energy for each population type, which only reflects the sensitivity of the instruments.\n\nThe distribution of the variability of sources from Table~\\ref{varcat} is presented in Fig.~\\ref{chart:histo}. The variability distribution is approximately normal with one evident outlier, the gamma-ray burst IGR J00245+6251 \\citep{Vesetal05}. However, this is mainly caused by the upper limit to the mean flux of this source being too low. The gamma-ray burst IGR J00245+6251 is detected in all three energy bands. Figure~\\ref{fig:V1-V2} shows the fractional variability in the 40-100 keV band versus the variability in the 20-40 keV band. The majority of the sources that are found to be variable in both the 20-40 keV and 40-100 keV energy bands exhibit nearly equal variability in both bands. \n\nTo show the detection threshold for the variability of a source, we compiled a diagram (see Fig.~\\ref{fig:logV-logF}) where we plot the fractional variability versus flux for all detected variable sources along with the upper limit to the fractional variability versus flux for all the reference catalog sources detected in our significance map in 20-40 keV energy band. The upper limit was determined by substituting $S_{int,s}^2$ with $S_{int,b}^2 + 3 \\Sigma$ in Eq.~(\\ref{fracvar}). Although we chose all the sources detected in our significance map, we used the mean flux of the source because, unlike significance, it is an exposure-independent value. 
According to our definition, the fractional variability is also an exposure-independent value, so we plot it versus the exposure-independent mean flux to clearly see the detection threshold. We can see that starting from a limiting flux ($6.2\\times10^{-11}$~ergs\/cm$^2$s or $\\sim10$~mCrab) nearly all catalog sources are found to be variable. The majority of the bright sources are binary systems, which explains why nearly all of them are found to be variable.\n\n\\begin{figure}[!t]\n\\begin{center}\n\\includegraphics[width=0.99\\columnwidth]{fig5.eps}\n\\caption{Number of sources versus fractional variability of the sources detected in different energy bands.}\n\\label{chart:histo}\n\\end{center}\n\\end{figure}\n\n\\begin{table*}[!h]\n\\begin{center}\n\\caption{The transient sources detected at the intrinsic variance map and not detected at the significance map. }\n\\begin{tabular}{|l|l|l|l|l|l|}\n\\hline\nName & Class & $V_{r,notnorm} \\pm Err$ (c\/s) & Exposure (ksec) & Map & Band\\\\\n\\hline\n\\input{varnstab.tex}\n\\hline\n\\end{tabular}\n\\label{transou}\n\\end{center}\n\\end{table*}\n\nWe also found a number of variable sources that have no counterparts in the significance map, which means that we were unable to measure the mean flux of these sources as it is compatible with the background mean flux. Hence, we are not able to give their fractional variability value, which is a normalized value and therefore tends to infinity with infinite errors. For these sources, we provide a 3-$\\sigma$ lower limit to their fractional variability by taking a 3-$\\sigma$ upper limit on their mean flux. Most of the sources are transient, and sometimes (in a specific observation period in a given energy band) the source is not detected in the significance map because the sensitivity of the instrument decreases with energy. Therefore, we can see that the variability map provides a tool to detect transient or faint but variable sources that would be missed in mosaics averaged over long timescales. To illustrate the detectability of the sources in the variability map, we provide a list (see Table~\\ref{transou}) of the sources that are smeared out in the significance map because of their high exposures. The values given in the table are not normalized variability values, $V_{r,notnorm} = \\sqrt{S_{int,s}^2 - S_{int,b}^2}$ along with their 3-$\\sigma$ errors, $Err$. To verify that the sources that are absent in the significance maps but detected in the variability maps are not the result of the low detection threshold, we ran the same detection procedures on the mock catalog of 2500 false sources distributed randomly and uniformly over the sky. The test detected 11 of 2500 false sources seen in variability maps and absent from the significance maps, compared to 8 of 576 in the case of the real catalog. This means that our detection criteria are rather strict.\n\n\\begin{figure}[!t]\n\\begin{center}\n\\includegraphics[width=0.99\\columnwidth]{fig6.eps}\n\\caption{Fractional variability of sources in the 20-40 keV band versus fractional variability in the 40-100 keV band.}\n\\label{fig:V1-V2}\n\\end{center}\n\\end{figure}\n\nWe comment on the inclusion of Crab in our catalog. It is known to be a constant source that is commonly used as a ``standard candle'' in high-energy astrophysics. There are two reasons why it appears in the catalog. The first is the deterioration of the detector electronics onboard the spacecraft. The loss of detector gain is around 10\\% over the entire mission. 
Although this loss is partially compensated by the software, it introduces around 3-5\\% variability in our method. The remaining variability can be ascribed to systematic errors in the spacecraft alignment \\citep{Waletal03}, which for OSA 7.0 are of the order of 7 arcsec \\citep{Ecketal09}, hence slightly different positions of the Crab are found in each observation. Since Crab is a very bright source, its Gaussian profile is very steep. When the peak is slightly offset, we measure a sharp decrease in the flux at the catalog position of Crab. The combination of the two effects leads to an artifical variability of Crab in all energy bands. A similar effect occurs in the pixels adjacent to the catalog position of Crab.\n\\begin{figure}[!t]\n\\begin{center}\n\\includegraphics[width=0.99\\columnwidth]{fig7.eps}\n\\caption{Fractional variability (V) versus flux (F) for all significantly variable sources (red crosses) from Table~\\ref{varcat}. Energy band is 20-40 keV. For comparison, the green crosses show the fractional variability detection threshold (3-$\\sigma$) versus flux for all reference catalog sources detected in the significance map. The sources not detected at the variability map are indicated by blue stars and are coincident with their green cross counterparts. Pink squares indicate 3-$\\sigma$ lower limits to the fractional variability of the sources that are not detected in the significance map.}\n\\label{fig:logV-logF}\n\\end{center}\n\\end{figure}\nWhen the peak of the PSF is found at a slightly displaced position, we find a sharp increase in the flux in this pixel. Our interpretation is confirmed by the image of Crab in the variability map, where the closest pixels to the source are found to be very variable, creating a ring-like structure. Moreover, it has been found that the observed position of Crab does not coincide with the position of the pulsar \\citep{Ecketal09}, which thus explains such an effect at the expected source position. To determine the influence of this effect on other sources, we inspected the 11 brightest sources in the 20-40 keV band and looked for a similar situation. In the case of Cyg~X-1 and Sco~X-1, the same effect, albeit weaker, was also seen. However, the derived value of their variability was not found to be affected by this effect. This effect contributes mainly to the variability of the pixels around the catalog position of these sources. Cyg X-1 and Sco X-1 are intrinsically very variable so the value extracted from the source position is much higher than for surrounding pixels, which is the opposite of the situation found in the Crab. 
For all the other sources, the influence of the misalignment was found to be negligible.\n\n\\begin{table*}[!h]\n\\begin{center}\n\\caption{Sources with additional error induced by the ``ghosts''.}\n\\begin{scriptsize}\n\\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|l|l|l|l|}\n\n\\hline\n\\multicolumn{1}{|l|}{Name} & \\multicolumn{5}{c|}{20-40 keV} &\\multicolumn{5}{c|}{40-100 keV} &\\multicolumn{5}{c|}{100-200 keV} \\\\\n\\cline{2-16}\n\\multicolumn{1}{|l|}{} & $V_r$ & $V_{r,err}$ & $G_{err}$ & Gexp & Exp & $V_r$ & $V_{r,err}$ & $G_{err}$ & Gexp & Exp & $V_r$ & $V_{r,err}$ & $G_{err}$ & Gexp & Exp \\\\\n\n\\hline\n\\input{ghostab.tex}\n\\hline\n\\end{tabular}\n\\label{ghostab}\n\\end{scriptsize}\n\\end{center}\n\\hspace{-6mm} Here, $V_r$ is the true fractional variability of the source, $V_{r,err}$ is the fractional variability error same as in catalog, $G_{err}$ is the error induced by the source, Gexp, in ksec, is the exposure time the source was affected by the ``ghost'', Exp, in ksec, is the total exposure time.\n\\end{table*}\n\nFinally, we performed a test to find cases in which a source is coincident with the ghost of another source within one ScW (a case described in Sect.~\\ref{sec:method}). We considered all the reference catalog sources and searched for ``ghost-source'' pairs in individual ScWs from the list used for our all-sky maps. If one of the sources in a given pair was present in our variability catalog, the pair was selected for further analysis. According to our simulations, if two sources are in the ghosts of each other, the fainter one loses up to half of its flux to the flux of the brighter one. If one of the sources is substantially brighter than the other, the relative distortion of the flux of the bright source is minor. Therefore, the flux of the source is significantly distorted if its ``ghost companion'' is brighter or comparable in brightness to the source itself. In the latter case, the ``ghost companion'' is also affected. Thus, we assume that if the source in the ghost is more than ten times fainter than its ``ghost companion'', its contaminating influence is negligible, whereas its own flux is contaminated significantly. After adopting this condition, we obtained a list of the sources affected by such position coincidences and the exposure times for each of them during which they where affected. For nearly all of the sources from the list, the fraction of exposures with distorted flux is less than 1\\%, which infers a relative uncertainty in the average flux and fractional variability of the same order. This is much smaller than the error set on variability in our catalog and, as can be seen from the Fig.~\\ref{fig:logV-logF} is smaller than the variability detection threshold even for the brighest sources. Nonetheless, a number of sources have larger than 1\\% errors induced by the ghosts, which we indicate with a $^{g}$ superscript in the catalog and provide a Table~\\ref{ghostab} where both errors are given. As can be seen, in all cases the ghost induced error is much smaller than the catalog variability error.\n\n\n\\section{The All-Sky online}\n\\label{sec:skyview}\n\nTo make our results available to the community, we decided to take advantage of the SkyView interface \\citep{Mcgetal98} (i.e., a Virtual Observatory service available online\\footnote{http:\/\/skyview.gsfc.nasa.gov} developed at and hosted by the High Energy Astrophysics Science Archive Research Centre (HEASARC)). 
SkyView provides a straightforward interface where users can retrieve images of the sky at all wavelengths from radio to gamma-rays \\citep{Mcgetal98}. SkyView uses NED and SIMBAD name resolvers to translate names into positions and is connected to the HEASARC catalog services. The user can retrieve images in various coordinate systems, projections, scalings, and sizes as ordinary FITS as well as standard JPEG and GIF file formats. The output image is centered on the queried object or sky position. SkyView is also available as a standalone Java application \\citep{Mcgetal97}. The ease-of-use of this system allowed us to incorporate INTEGRAL\/ISGRI variability and significance all-sky maps into the SkyView interface. We developed a simple web interface for the SkyView Java application and have made all-sky mosaics available online\n\n\n\\section{Concluding remarks}\n\\label{sec:conclu}\n\nOur study of variability of the \\emph{INTEGRAL} sky has found that 202 sources exhibit significantly variable hard X-ray emission. To compile the catalog of variable sources, we have developed and implemented a method to detect variable sources, and compiled all-sky variability maps. A search for variable sources from the \\emph{INTEGRAL} reference catalog was carried out in three energy bands: 20-40, 40-100, and 100-200 keV. The variable sources were detected in all studied energy bands, although their number sharply decreases with increasing energy. A number of sources were detected only during specific time periods and not detected on the map integrated over the whole time period. These sources are most likely transient. On the other hand, some sources were found to be variable only on the total variability map. This means that they might be persistent and not extremely variable sources.\n\nWe found that around 76\\% of all variable sources of our catalog are binary systems and nearly 24\\% of variable sources are either AGNs, unidentified, or other source types. The variability measurements of our catalog sources have rather normal distributions in all energy bands. We found that in the majority of cases the variability of the source in the first band correlates with its variability in the second band. We derived the limits to the fractional variability value to be detected as a function of the source flux (Fig.~\\ref{fig:logV-logF}). We also found that variability map can be a tool to detecting transient or faint but variable sources that would be missed in mosaics averaged over long timescales. In a forthcoming paper, we will discuss in more detail the properties of the variable sources detected during this study in order to gain some physical insights into the population of hard X-ray sources.\\\\\n\nFinally, we emphasize that the sky maps generated during this study represent 6 years of \\emph{INTEGRAL} operation in orbit. In addition to the variability maps, we have compiled significance maps in three energy bands (20-40, 40-100, and 100-200 keV). All our maps are available as an online service to the community using the SkyView engine. 
\n\n\section*{Acknowledgements}\n\nBased on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Switzerland, Spain), Czech Republic and Poland, and with participation of Russia and the USA.\n\nThis work was supported by the Swiss National Science Foundation and the Swiss Agency for Development and Cooperation in the framework of the programme SCOPES - Scientific co-operation between Eastern Europe and Switzerland. The computational part of the work was done at VIRGO.UA\footnote{http:\/\/virgo.org.ua} and BITP\footnote{http:\/\/bitp.kiev.ua} computing resources.\n\nWe are grateful to the anonymous referee for the critical remarks which helped us improve the paper.\n\nIT acknowledges the support from the INTAS YSF grant No.~06-1000014-6348. \n\n\section*{Appendix: Error Calculation on $V_r$}\n\nWe use the standard error propagation formula to find the error, $\sigma_{V_r}$, of the function $V_r = f(S_{int,s}^2=a, S_{int,b}^2=b, F_{s}=c, F_{b}=d)$, so that:\n\begin{equation}\n\sigma_{V_r} = \sigma_{f} = \sqrt{\left(\frac{\partial f}{\partial a}\sigma_a\right)^2 + \left(\frac{\partial f}{\partial b}\sigma_b\right)^2 + \left(\frac{\partial f}{\partial c}\sigma_c\right)^2 + \left(\frac{\partial f}{\partial d}\sigma_d\right)^2},\n\label{VERR}\n\end{equation}\nwhere $\sigma_a = \sigma_b = \Sigma$ and $\sigma_c = \sigma_d = \sigma$ for a given source. By substituting the appropriate values into Eq.~\ref{VERR} and taking the derivatives, we find:\n\begin{equation}\n\sigma_{V_r} = \sqrt{\frac{\Sigma^2}{2\left(F_s - F_b\right)^2 \left(S_{int,s}^2 - S_{int,b}^2 \right)} + \frac{2 \sigma^2 \left(S_{int,s}^2 - S_{int,b}^2 \right) }{\left(F_s - F_b\right)^4}}.\n\end{equation}\n\n\n\bibliographystyle{aa}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Band Structure Calculations}\nTo support these claims, we performed extensive ab-initio numerical simulations of adatom-decorated stanene. For all of our simulations, we used the high-buckled, free-standing structure of stanene shown in Fig.~\ref{fig:struc_BC}, with buckling height $\delta$=0.859\AA~, and in-plane lattice parameter $a=4.676$\AA,~as found by relaxing the structure of free-standing stanene using DFT. We decorated stanene with Zn adatoms at each of the four structural sites for the buckled honeycomb lattice: hollow (H), bridge (B), valley (V), and top (T)~\cite{naqvi2017exploring}, and relaxed the height of the adatoms. Because the phenomena in which we are interested require sublattice site symmetry breaking, we primarily focus on the V and T adatom sites for the remainder of this work.\n\nTo determine the stability of Zn atoms in the V and T positions, we used density functional theory to calculate the adsorption energy of Zn at each site using the definition $E_{ads} = E_{Zn+stanene} - E_{stanene} - E_{Zn}$. We found the adsorption energies to be $E_{ads}^{V} = -0.404$ eV and $E_{ads}^{T} = -0.545$ eV. To explore the possibility that the adatoms would migrate from the V and T positions, we also performed both a nudged elastic band calculation~\cite{henkelman2000climbing} to determine the most favorable pathway for adatom transport away from the V or T sites, and a dimer method calculation~\cite{henkelman1999dimer} to precisely determine the activation barriers for Zn migration. 
These calculations indicated that the Zn atoms on V and T sites are most likely to move to the H site, with diffusion barriers $E^{V}_{barrier}$ = 0.008 eV and $E^{T}_{barrier}$ = 0.011 eV. This is similar to other reports of barrier heights for the migration of adatoms on stanene for the V\/T $\\rightarrow$ H processes~\\cite{mortazavi2016staneneAdatomNaLi} and indicates that the adatoms will be slow to diffuse at temperatures below $\\sim$100K. These migration barriers, while limiting stability at higher temperature, would also allow the adatoms to be manipulated by STM techniques more easily. However, for cases where a higher operating temperature is desirable, we identify other candidate adatoms with higher diffusion barriers in the supplemental material.\n \nNext we calculated the band structure for bare stanene and stanene decorated with Zn adatoms at the T or V sites, shown in Fig~\\ref{fig:struc_BC}a-c. Bare stanene has massive Dirac cones with negative band gaps of magnitude $E_g^{bare} = 0.073$ eV at the $\\v{K}$ and $\\v{K}'$ points. When decorated with Zn adatoms in either position, the degenerate Dirac cones are spin-split, resulting in a smaller $E_g^{V}=0.095$ eV gap for V decoration and a larger $E_g^{T}=0.398$ eV gap for T decoration. Crucially, we find that the decoration leaves the bands away from the Fermi level qualitatively unchanged, ensuring that the significant physical changes in the material can be captured by the topological indices near the Fermi-level. \n\nWe repeated this analysis for stanene decorated at the T and V sites with each element in rows 2 through 5 of the periodic table. We find that nearly all elements produce bands that differ significantly from bare stanene. Additionally, many elements that do not qualitatively change the band structure of stanene end up doping the system to result in a metallic character. We provide more details of the viability of these adatom species and how they compare to Zn in the supplementary material.\n\n\n\n\\section{QSH and QVH Indicators}\nUsing the results of the ab-initio calculations, we generated tight-binding parameters via the maximally-localized Wannier function procedure. From the resulting Hamiltonians we calculate the Berry curvature for bare and decorated stanene~\\cite{marzari2012maximally}. As shown by the coloration of the band structures in Fig.~\\ref{fig:struc_BC}b and c, with red and blue representing positive and negative Berry curvature, both decorations produce equal and opposite Berry curvature concentrations at the $\\v{K}$ and $\\v{K}'$ points, indicating that one set of bands at each of these points was inverted by the Zn decoration. \n\nThe origin and consequences of these band inversions are best understood via a low-energy effective model for the massive Dirac cones at the $\\v{K}$ and $\\v{K}'$ points in stanene~\\cite{yao_spin-orbit_2007, Liu11, molle_buckled_2017}:\n\\begin{equation}\n \\begin{split}\n H=\\hbar v_F(\\eta k_x\\tau^x+ k_y\\tau^y)+\\eta\\tau^z\\sigma^z\\lambda_{SO}+\\Delta\\tau^z,\n \\end{split}\n \\label{eqn:ham_tb}\n\\end{equation}\nwhere $\\tau^\\alpha$ and $\\sigma^\\alpha$ are Pauli matrices for the sublattice and spin degrees of freedom respectively, $v_F$ is the Fermi velocity, $\\eta=+1$ for $\\v{K}$ and $\\eta=-1$ for $\\v{K}'$, and $\\lambda_{SO}$ is the spin-orbit coupling strength. 
The final term describes a staggered potential of strength $\\Delta$ between the sublattice sites generated by the adatom decoration.\n\nIn the absence of the staggered potential $\\Delta$, this model describes the QSH phase realized by bare stanene. Because the Berry curvature distribution is concentrated around the $\\v{K}$ and $\\v{K}'$ points, and the $z$-component of the spin is conserved, we can define spin-valley resolved Chern numbers, which are protected by time-reversal symmetry and spin-conservation, by integrating the Berry curvature of a particular spin around $\\v{K}$ or $\\v{K}'$~\\cite{ezawa_monolayer_2015}. We obtain $C_{K\\uparrow}=C_{K'\\uparrow}=\\pm1\/2$ and $C_{K\\downarrow}=C_{K'\\downarrow}=\\mp1\/2$, with the signs dependent on the sign of $\\lambda_{SO}$. In terms of these spin-valley resolved indices, the total Chern number $C \\in \\mathbb{Z}$ and the spin Chern number $C_s \\in \\mathbb{Z}_2$ are\n\\begin{equation}\n \\begin{aligned}\n C &= C_{K\\uparrow} + C_{K\\downarrow} + C_{K'\\uparrow} + C_{K'\\downarrow} = 0 \\\\\n C_s &= \\frac{1}{2}(C_{K\\uparrow} - C_{K\\downarrow} + C_{K'\\uparrow} - C_{K'\\downarrow}) = 1 \\text{ mod } 2.\n \\end{aligned}\n\\end{equation}\nAccording to the bulk-boundary correspondence, we expect that interfaces between regions with different spin Chern numbers host gapless helical modes that carry a spin current\\cite{kanemele05, Bernevig06, hasan_colloquium_2010, qi_topological_2011}. Pairs of helical modes are not protected by time-reversal symmetry, so the spin Chern number is defined modulo $2$, $C_s\\in\\mathbbm{Z}_2$, and interfaces either have zero or one pair of stable helical modes.\n\nNow when we consider the adatom decoration we find that a sufficiently large positive (negative) sublattice potential $\\Delta$ induces a spin-valley resolved band inversion and changes the signs of $C_{K,\\uparrow}$ and $C_{K',\\downarrow}$ ($C_{K,\\downarrow}$ and $C_{K',\\uparrow}$)~\\cite{Ni12, ezawa_monolayer_2015}. The resulting Chern and spin Chern numbers both vanish, but we can instead assign a translation symmetry protected \\emph{momentum} vector charge to each valley.\nThis charge is equal to the vector describing the position of the valley in momentum space and defines the valley vector index: $\\vec{C}_{v}=\\hbar\\vec{K}(C_{K}-C_{K'})=2\\hbar\\vec{K}$.\nSystems with a non-vanishing $\\vec{C}_v$ are called quantum valley Hall (QVH) insulators~\\cite{xiao_valley-contrasting_2007, ren_topological_2016}. \n\nInterestingly, a change in the QVH index across an interface is accompanied by a translation symmetry protected current~\\cite{Xiao07} carrying \\emph{lattice momentum} along the interface. \nThe amount of momentum transported along the edge is characterized by a scalar quantity $C_v$, equal to the projection of $\\vec{C}_v$ onto the unit vector $\\vec{\\tau}$ tangential to the interface.\nIn particular, a straight open edge satisfying $\\vec{\\tau} \\cdot \\vec{C}_v = 0$ projects the valleys on top of each other, resulting in a trivial edge as indicated by the vanishing scalar index $C_v=0$. In contrast, when $\\vec{\\tau}$ is parallel to $\\vec{C}_v$, we obtain $C_v=2\\hbar|\\vec{K}|$. It is customary to drop the factor of momentum $\\hbar|\\vec{K}|$ from the definition of $C_v$ altogether and work with the dimensionless valley index $C_{v}=C_{K}-C_{K'}=\\pm2$. 
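\nTo make this sign bookkeeping explicit, the short numerical sketch below (our own illustration, not part of the published calculations) evaluates, for each valley $\\eta$ and $\\sigma^z$ eigenvalue $s=\\pm1$, the Dirac mass $m_{\\eta s}=\\eta s\\lambda_{SO}+\\Delta$ of Hamiltonian~(\\ref{eqn:ham_tb}), the corresponding local gap $2|m_{\\eta s}|$, and the resulting index bookkeeping; $\\lambda_{SO}$ is set to half of the $0.073$ eV gap of bare stanene and the values of $\\Delta$ are purely illustrative:\n\\begin{verbatim}\n# Sketch (ours) of the mass and index bookkeeping of the k.p model above.\n# For valley eta = +1 (K) or -1 (K') and spin s = +1 (up) or -1 (down), the\n# 2x2 Dirac block has mass m = eta*s*lambda_SO + Delta and local gap 2*|m|.\n# The half-integer Chern contributions below use one fixed sign convention;\n# only the relative sign changes, C_s (mod 2) and |C_v| are convention-free.\nimport numpy as np\n\nlam_so = 0.5 * 0.073              # eV, half of the bare-stanene gap\nfor Delta in (0.0, 0.02, 0.10):   # eV, illustrative staggered potentials\n    C = {}\n    for eta in (+1, -1):\n        for s in (+1, -1):\n            m = eta * s * lam_so + Delta\n            C[(eta, s)] = -0.5 * eta * np.sign(m)\n            print('Delta=%5.3f  eta=%+d  s=%+d  gap=%.3f eV' % (Delta, eta, s, 2 * abs(m)))\n    C_tot = C[(1, 1)] + C[(1, -1)] + C[(-1, 1)] + C[(-1, -1)]\n    C_spin = 0.5 * (C[(1, 1)] - C[(1, -1)] + C[(-1, 1)] - C[(-1, -1)]) % 2\n    C_val = (C[(1, 1)] + C[(1, -1)]) - (C[(-1, 1)] + C[(-1, -1)])\n    print('  -> C = %+.0f,  C_s = %.0f (mod 2),  C_v = %+.0f' % (C_tot, C_spin, C_val))\n\\end{verbatim}\nFor $|\\Delta|<\\lambda_{SO}$ such a sketch returns $C_s=1$ and $C_{v}=0$, while for $|\\Delta|>\\lambda_{SO}$ it returns $C_s=0$ and $|C_{v}|=2$; which decoration corresponds to $C_{v}=+2$ or $C_{v}=-2$ depends on the sign convention chosen in the code.\n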
We list the values of the Chern numbers and valley index realized by bare and decorated stanene in Table~\\ref{tab:indices}.\n\n\\begin{table}[t]\n \\centering\n \\def1.25{1.25}\n \\begin{tabular}{| c | c c c | c |}\n \\hline\n Phase & $C$ & $C_{s}$ & $C_{v}$ & Decoration pattern \\\\\n \\hline\n \\hline\n \n \\multirow{2}{*}{QVH} & 0 & 0 & 2 & Zn at V \\\\\n \\cline{2-5}\n & 0 & 0 & $-2$ & Zn at T \\\\\n \\hline\n QSH & 0 & 1 & 0 & No adatom decoration \\\\\n \\hline\n \\end{tabular}\n \\caption{The Chern number, $C$, spin-Chern number, $C_s$, and valley-Chern number, $C_{v}$, for each topologically nontrivial phase realized by Hamiltonian (\\ref{eqn:ham_tb}), along with the corresponding adatom decoration patterns.}\n \\label{tab:indices}\n\\end{table}\n \n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{images\/figure2.pdf}\n \\caption{\n %\n \\textbf{Interfacial edge modes from the QSH-QVH structure.}\n %\n The (a, b, d, e) band structure and (c, f) representative edge state probability distributions for the QSH-QVH ribbon for zigzag (top) and armchair (bottom) orientations. In (a, b, d, e), the thin gray and thick black lines represent the bulk and edge states, respectively. (b, e) zoom in on the near-gap region at the $\\v{K}'$ point. The red dots marks a spin-up edge state propagating down the right interface and the blue dots marks a spin-up edge state propagating up the left interface. The line plots in (c) and (f) represent the probability density of the edge states indicated by the red and blue dots integrated over the plane perpendicular to the width of the ribbon. The shapes of the probability density plots for the zigzag interfaces differ because the adatom decoration makes the two interfaces asymmetric. An in-plane view of the interface structure, with colored atoms corresponding to those in Fig.~\\ref{fig:struc_BC} is shown in the bottom row of (c) and (f). The locking of the spin and valley degrees of freedom at each interface is the hallmark of the QSH-QVH edge state.\n %\n }\n \\label{fig:qsh_qvh}\n\\end{figure}\n\n\\section{First-Principles Interface Calculations}\nWith this understanding of the bulk properties of decorated and bare stanene we can now consider interfaces between different spatial domains. Three interfaces can be constructed from the three phases realized by bare and decorated stanene: two distinct QSH-QVH interfaces where the spin Chern number changes from 1 to 0 and the valley Chern number changes from 0 to $\\pm2$, and a QVH-QVH interface where the spin Chern number is zero on both sides while the valley Chern number changes from $-2$ to $2$. As discussed above, the QVH-QVH interface is sensitive to the orientation of the interface relative to the valley separation, $\\mathbf{K}-\\mathbf{K}'$, so we consider only ``zigzag'' QVH-QVH interfaces for which the edge is perpendicular to the valley separation. The QSH-QVH interfaces are insensitive to the edge orientation because the change in $C_s$ does not depend on the valley degree of freedom, so we consider both zigzag and armchair interfaces for this case. \n\nNow let us consider the possible interface states. At QVH-QVH interfaces, the valley Chern number changes from $\\mp2$ to $\\pm2$, indicating that four gapless edge modes will appear. Each valley hosts two chiral modes, with the chirality determined by the valley such that a net momentum current will flow along the interface. 
At QSH-QVH interfaces, the spin Chern number changes by one, and the valley Chern number changes by two, producing a pair of oppositely propagating spin-valley polarized modes. The modes in each valley are of opposite spin and opposite chirality, resulting in both spin and momentum currents at the interface.\n\nTo determine the characteristics of these interface modes in decorated stanene, we performed large-scale first principles electronic structure simulations of stanene nanoribbons decorated to produce QVH-QSH and QVH-QVH interfaces. The translation-invariant direction points in the $\\vec{b}$ and $\\frac{1}{2}\\vec{a} + \\vec{b}$ directions to realize zigzag and armchair interfaces, respectively. The unit vectors $\\vec{a}$ and $\\vec{b}$ point along the primitive lattice vectors, as shown in Fig.~\\ref{fig:struc_BC}. To create topological interfaces, we selectively decorated domains in the transverse direction (the x-axis in Fig.~\\ref{fig:qsh_qvh}c, \\ref{fig:qsh_qvh}f, and \\ref{fig:qvh_qvh}c). We used periodic boundary conditions in the transverse direction to eliminate spurious interfaces with the vacuum, forming two topological interfaces per ribbon. All zigzag ribbons were 145.67~\\AA ~wide and the armchair ribbon was 149.64~\\AA ~wide.\n\n To make the most of finite computational resources, the relative sizes of the decoration domains were chosen to minimize wavefunction overlap between the exponentially decaying interface states. Accordingly, the zigzag QSH-QVH ribbon is T-decorated on 10 unit cells and bare on 26 unit cells, the armchair QSH-QVH ribbon is T-decorated on 10 unit cells and bare on 22 unit cells, and the QVH-QVH ribbon is T-decorated on 10 unit cells and V-decorated on 26 unit cells. We note that the overlap of the interface wavefunctions in the bulk of the ribbon leads to undesired gaps in the interface spectrum produced by finite-size effects. We show in the supplementary material that these interface spectrum gaps vanish for sufficiently wide ribbons, and we find that the interface states decay exponentially into the insulating bulk with a decay length roughly determined by the ratio of the Fermi velocity to the bulk gap. \n\nThe resulting band structure and interface wavefunction plots are shown in Fig.~\\ref{fig:qsh_qvh} for the QSH-QVH zigzag and armchair ribbons. Each interface in the zigzag ribbon hosts a helical pair of spin-valley locked modes that produce both equilibrium spin and momentum currents on the interface. Each interface of the armchair ribbon also hosts a helical pair of modes, but since the valleys in this case are projected to the $\\Gamma$ point of the Brillouin zone the interfaces only carry spin current, not momentum current. The configurations of edge states we find agree with the $\\v{k}\\cdot\\v{p}$ model predictions of the previous section. As mentioned above, both ribbons have a small gap in the interface state spectrum, $E_g \\approx 0.03$ eV, that originates from the overlap and hybridization of the wavefunctions on the two interfaces and would vanish for a larger system. The decay lengths of the interface states in the zigzag ribbon are $\\lambda_T=5.95$ \\AA{} and $\\lambda_0=32.4$ \\AA{} in the T-decorated and bare regions, respectively. 
The decay lengths in the armchair ribbon are $\\lambda_T=6.55$ \\AA{} and $\\lambda_0=35.7$ \\AA{}.\n\n\\begin{figure}\n    \\centering\n    \\includegraphics[width=1.0\\textwidth]{images\/figure3.pdf}\n    \\caption{\n    %\n    \\textbf{Interfacial edge modes from the QVH-QVH structure.}\n    %\n    The (a, b) band structure and (c) representative edge state probability densities of the QVH-QVH interface with a zigzag orientation. In (a, b), the thin gray and thick black lines represent the bulk and edge states, respectively. (b) Zoom in on the near-gap region at the $\\v{K}'$ point. The red and purple dots mark states propagating along the right interface and the blue and yellow dots mark edge states propagating along the left interface. The line plots in (c) represent the probability density integrated over the plane perpendicular to the width of the ribbon. The shapes of the probability density plots for each interface differ because the adatom decoration makes the two interfaces asymmetric. An in-plane view of the structure, with colored atoms corresponding to those in Fig.~\\ref{fig:struc_BC}, is shown in the bottom row of (c). Each valley contributes two unpolarized edge modes to each interface, as indicated by the change of the valley Chern number by four across each interface.\n    %\n    }\n    \\label{fig:qvh_qvh}\n\\end{figure}\n\nThe results for the QVH-QVH ribbon are shown in the same format as the QSH-QVH ribbon in Fig.~\\ref{fig:qvh_qvh}. Each interface hosts two chiral modes from each valley, with the valleys contributing modes of opposite chirality. This leads to the equilibrium edge momentum current predicted above. In this case the decay lengths of the edge states are $\\lambda_T=5.68$ \\AA{} and $\\lambda_V=23.8$ \\AA{} in the T- and V-decorated regions, respectively. The gaps in the interface state spectrum resulting from wavefunction overlap are $E_g \\approx 0.02$ eV and $0.006$ eV. We report two gaps here because there are two sets of interface states at each valley for this ribbon. For interfaces with a finite projection onto the valley separation direction, the momentum carried by the edge is reduced. In the extreme case of an armchair interface, the valleys exactly overlap, the edge carries no momentum current, and any local perturbation can gap out the interface states.\n \n\\section{Technological Applications}\nThe above calculations demonstrate that decorating stanene with Zn adatoms presents a uniquely promising platform for technological applications. The topological domains can be patterned with a high degree of control by manipulating the adatom positions with an STM tip, permitting the fabrication of many topological devices, two of which are depicted schematically in Fig.~\\ref{fig:devices}. Furthermore, the interface states residing at domain walls are localized on the scale of tens of nanometers, which permits extremely dense packing of features. One of the first proposals for an application of topological edge modes was designer interconnect networks, which are a possible solution to the ``interconnect bottleneck'', wherein scattering and parasitic capacitance in interconnects leads to signal delays that prohibit further miniaturization of semiconductor devices~\\cite{george_chiral_2012}. The minimum metal pitch, or center to center distance between interconnects, of the current semiconductor manufacturing technology node is 24 nm to 36 nm~\\cite{ITRS}. 
At this scale, grain boundary and defect scattering leads to large resistances that inhibit the performance of traditional copper interconnects~\\cite{graham_resistivity_2010}. Considering the edge state decay lengths obtained in our simulations, the minimum pitch that could be achieved with Zn-decorated stanene interconnects is also on the order of tens of nanometers. However, the topological protection of the interface modes eliminates the issue of scattering, drastically improving performance with no compromise on feature size.\n\nThe interface modes of Zn-decorated stanene also have many applications beyond the world of conventional electronics. The fields of spin- and valley-tronics attempt to process information by exploiting the spin and valley degrees of freedom, rather than the charge degree of freedom~\\cite{bader_spintronics_2010, vitale_valleytronics_2018}. Topological interface modes are useful for engineering spin- and valleytronic devices such as waveguides, splitters, valves, and filters~\\cite{li_valley_2018, ezawa_topological_2013, xu_manipulating_2017,yang_topological_2020, qiao_spin-polarized_2011, jana_robust_2021}, and STM manipulation of Zn adatoms on stanene provides an ideal platform to fabricate the precise geometries of such devices. The same is true of electron quantum optics devices, such as valley Hall beam splitters, Mach-Zehnder interferometers, and Fabry-Perot resonators~\\cite{jo_quantum_2021, rickhaus_transport_2018, wei_mach-zehnder_2017}. This approach to engineering topological interface modes is also well suited to fabricating quantum computing gates out of helical edge states decorated with magnetic impurities~\\cite{niyazov_coherent_2020, chen_quantum_2014}, as STM manipulation of adatoms can be used to both create the edge states \\emph{and} deposit magnetic impurities.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.75\\textwidth]{images\/figure4}\n \\caption{\n %\n \\textbf{Schematic drawing of example devices.}\n %\n A schematic showing two possible devices constructed via adatom decoration of stanene. The blue spheres are Sn tin atoms and the red and yellow spheres are Zn adatoms located at T and V sites, respectively. The blue, red, and yellow shading is a guide to the eye, indicating the regions that are bare, T-decorated, and V-decorated, respectively. The red and yellow arrows indicate quantum spin Hall edge modes, the color determined by the decoration site of the quantum valley Hall region. The left side of the image shows two densely-packed chiral interconnects constructed by decorating a thin region of stanene with Zn adatoms. The right side of the image shows a Mach-Zehnder interferometer built out of the edge modes of two adjacent T- and V-decorated regions.\n %\n }\n \\label{fig:devices}\n\\end{figure}\n\n\\section{Conclusion}\nWe have demonstrated that sublattice-selective decoration of stanene with Zn adatoms is an excellent platform for engineering topological interface modes. Because Zn adatoms bond relatively weakly to stanene, they act as an ideal sublattice potential and induce a QSH to QVH transition in stanene. The weak nature of the bond also does not transfer significant charge to stanene and permits STM manipulation of the adatoms allowing detailed patterning of topological interfaces. Importantly, the Zn-Sn bond is also strong enough for the decoration to remain stable at liquid nitrogen temperatures. 
The combined result of these effects is a platform suitable for fabricating arbitrary networks of topological interface modes without any of the deleterious effects that plague existing proposals for topological devices. These ideal topological interface-state networks have applications in semiconductor devices, spintronics, valleytronics, quantum electron optics, and even quantum computing. Implementing this technique is possible with existing fabrication and STM technology and can lead to transformative advances in topological device engineering.\n\n\\section*{Methods}\n The investigation of all possible adatom species in rows two through five of the periodic table was completed using JDFTx~\\cite{sundararaman2017jdftx} to take advantage of GPU functionality. These calculations were carried out with ONCV pseudopotentials~\\cite{hamann2013optimized, van2018pseudodojo} using the Perdew-Burke-Ernzerhof (PBE)~\\cite{pbe} exchange-correlation functional, a 1090 eV plane-wave energy cutoff, a 15x15x1 $\\Gamma$-centered k-mesh, and Methfessel-Paxton smearing of $\\sigma = 0.0272$ eV. \n \n The electronic structure of the decorated interface structures was determined via the Vienna Ab initio Simulation Package (VASP)~\\cite{vasp1,vasp2,vasp3} using the PBE functional with the projector-augmented wave (PAW)~\\cite{paw_pseudopotentials} potentials provided by VASP. The calculations were performed on a 1x15x1 $\\Gamma$-centered k-mesh with a plane-wave energy cutoff of 450 eV and Methfessel-Paxton smearing of $\\sigma = 0.01$ eV.\n \n All calculations included spin-orbit coupling, 20 \\AA{} of z-axis vacuum between periodic images, and the many-body dispersion (MBD) van der Waals correction~\\cite{ambrosetti2014long}. Probability density data was visualized using the pawpyseed package~\\cite{pawpyseed}.\n\n\\nolinenumbers\n\n\\bibliographystyle{naturemag}\n\\footnotesize{","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n The collisions of heavy ions at ultra-relativistic energies are performed to create and\nstudy bulk strongly interacting matter at high temperatures. \nThe data from RHIC and LHC provide strong evidence of the formation of a new state\nof matter known as quark gluon\nplasma (QGP) in these collisions \\cite{Proceedings:2019drx}.\n\nThe light charged hadron and jet \ntransverse momentum ($p_{\\rm{T}}$) spectra give insight into the particle production\nmechanism in pp collisions. The partonic energy loss is reflected in these\nparticles when measured in heavy ion collisions \ndue to jet-quenching \\cite{Wang:2003aw} which measures the opacity of the medium.\nA modified power law distribution \\cite{Tsallis:1987eu, Biro:2008hz, Khandai:2013gva} describes\nthe $p_{\\rm{T}}$ spectra of the hadrons in pp collisions in terms of a\npower index $n$ which determines the initial production in partonic\ncollisions. 
In Ref.\\cite{Saraswat:2017kpg}, the power law function is applied to heavy ion\ncollisions as well which includes the transverse\nflow in low $p_{\\rm{T}}$ region and the in-medium energy loss (also in terms of power law)\nin high $p_{\\rm{T}}$ region.\n\n\n\n The spectra of hadrons are measured in both pp and AA collisions and\nnuclear modification factor ($R_{\\rm{AA}}$) is obtained.\nThe energy loss of partons can be connected to horizontal shift in the scaled hadron \nspectra in AA with respect to pp spectra as done by PHENIX measurement \\cite{Adler:2006bw}.\nTheir measurements of neutral pions upto $p_{\\rm T} \\sim 10$ GeV\/$c$ are consistent with the\nscenario where the momentum shift $\\Delta p_{\\rm T}$ is proportional to $p_{\\rm T}$.\n In similar approach, the authors in Ref.~\\cite{Wang:2008se}, extracted the fractional energy loss\nfrom the measured nuclear modification factor of hadrons as a function of $p_{\\rm{T}}$ below\n10 GeV\/$c$ in AuAu collisions at $\\sqrt{s_{\\rm{NN}}}$ = 200 GeV. They also considered that \nthe energy loss increases linearly with $p_{\\rm T}$.\n In recent PHENIX work \\cite{Adare:2015cua},\nfractional energy loss was obtained in the hadron spectrum measured upto $p_{\\rm T}=20$ GeV\/$c$\nin heavy ions collisions at RHIC and LHC energy and is not found to be constant.\nThis means that a constant fractional energy loss (energy loss varying linearly with $p_{\\rm T}$)\ncan be applicable only to low $p_{\\rm T}$ RHIC measurements.\n\n There are many recent studies which use so-called shift formalism to study the energy loss.\nThe work in Ref.\\cite{Spousta:2015fca} is based on \nshift formalism and describes the transverse momentum ($p_{\\rm{T}}$), rapidity ($y$)\nand centrality dependences of the measured jet nuclear modification factor ($R_{\\rm{AA}}$)\nin PbPb collisions at LHC.\nThey assume that the energy loss is given by a power law in terms of $p_{\\rm T}$, the value of power\nindex is obtained between 0.4 to 0.8 by fitting the $R_{\\rm{AA}}$ as a function of $p_{\\rm{T}}$\nand centralities.\n They also found that the energy loss linearly increases with number of participants. \nUsing the same method they study the magnitude and the colour charge dependence of the\nenergy loss in PbPb collisions at LHC energies using the measured data of the inclusive\njet suppression~\\cite{Spousta:2016agr}.\n The authors of the Ref.\\cite{Ortiz:2017cul} work on inclusive charged particle spectra\nmeasured in the range ($5 < p_{\\rm T} < 20$ GeV\/$c$) in heavy ion collisions at RHIC and LHC.\nThey assume that the energy loss linearly increases with $p_{\\rm T}$ and pathlength. \n\n\n\n\n\n\nThere are detailed calculations of energy loss of partons in the hot medium\n[see e.g. Refs.~\\cite{Wang:1994fx,Baier:1996kr}.\n Phenomenological models tend to define simple dependence of the radiative energy\nloss of the parton on the energy of the parton inside the medium [for a discussions\nsee Ref.~\\cite{Baier:2000mf}]. The energy loss can be characterized in terms of \ncoherence length $l_{\\rm{coh}}$, which is associated with the formation time of\ngluon radiation by a group of scattering centres. 
If $l_{\\rm{coh}}$ is less than \nthe mean free path $(\\lambda)$ of the parton, the energy loss is proportional to the\nenergy of the parton.\nIf $l_{\\rm{coh}}$ is greater than $\\lambda$ but less than the path length ($L$) of the\nparton ($\\lambda < l_{\\rm{coh}} < L)$, the energy loss is proportional to the square\nroot of the energy of the parton.\nIn the complete coherent regime, $l_{\\rm{coh}} > L$, the energy loss per unit length\nis independent of energy but proportional to the parton path length implying that\n$\\Delta p_T$ is proportional to the square of the pathlength.\n There is a nice description of charged particle spectra at RHIC and LHC using such a\nprescription by dividing the $p_{\\rm T}$ spectra into three regions \\cite{De:2011fe, De:2011aa}.\n For low and intermediate energy partons, $\\Delta p_T$ is assumed to be linearly\ndependent on $L$ \\cite{Muller:2002fa}. The work in Ref.~\\cite{Betz:2011tu} studies \nthe energy loss of jets in terms of an exponent of the number of participants.\n It should be remembered that the fragmentation changes the momentum between the partonic\nstage (at which energy is lost) and hadron formation.\n There are models which say that softening occurs at the fragmentation stage \ndue to color dynamics [See e.g. Ref.~\\cite{Beraudo:2012bq}].\n \n\n In general, one can assume that the energy loss of partons in the hot medium as a function of\nparton energy is in the form of a power law where the power index ranges from 0 (constant) to\n1 (linear). Guided by these considerations, in the present work, the $p_{\\rm T}$ loss has been assumed\nas a power law with different power indices in three different $p_{\\rm T}$ regions.\nThe energy loss in different collision centralities is described in terms of a fractional power\nof the number of participants.\n The $p_{\\rm T}$ distributions in pp collisions are fitted with a modified power law and\n$R_{\\rm AA}$ in PbPb collisions can be obtained using an effective shift ($\\Delta p_{\\rm T}$) in the\n$p_{\\rm T}$ spectrum measured at different centralities.\n The power indices and the boundaries of the three $p_{\\rm T}$ regions are obtained by fitting the\nmeasured $R_{\\rm{AA}}$ of charged particles and jets in PbPb collisions at\n$\\sqrt{s_{\\rm NN}}$ = 2.76 and 5.02 TeV over a large transverse momentum ($p_{\\rm T}$) and\ncentrality range.\nThe shift $\\Delta p_{\\rm{T}}$ includes the medium effect, mainly energy loss of the parent\nquark inside the plasma.\n The shift $\\Delta p_T$ can be approximately understood as the partonic energy loss in the\ncase of jets, while in the case of hadrons it is not as simple due to complicated correlations.\nOften we refer to the shift $\\Delta p_{\\rm{T}}$ as the energy loss.\n\n\n\n\\section{Nuclear Modification Factor and Energy Loss}\n\nThe nuclear modification factor $R_{\\rm{AA}}$ of hadrons is defined as the ratio\nof the yield of the hadrons in AA collisions to the yield in pp collisions, with a\nsuitable normalization\n\\begin{equation}\nR_{\\rm{AA}} (p_{\\rm{T}}, b) = \\frac{1}{T_{\\rm{AA}}} {\\frac{d^2N^{AA}(p_{\\rm{T}}, b)}{dp_{\\rm{T}}dy}}\/\n{\\frac{d^2\\sigma^{pp}(p_{\\rm{T}}, b)}{dp_{\\rm{T}}dy}}~.\n\\label{raa_definition}\n\\end{equation}\nHere, $T_{\\rm{AA}}$ is the nuclear overlap function which can be calculated from the\nnuclear density distribution. 
High $p_{\\rm T}$ partons traversing the medium loose energy and\ncause the suppression of hadrons at high $p_{\\rm T}$ indicated by value of $R_{\\rm{AA}}$\nwhich is less than one.\n The transverse momentum distribution of hadrons in pp collisions\ncan be described by the Hagedorn function which is a QCD-inspired summed power\nlaw \\cite{Hagedorn:1983wk} given as\n\\begin{equation}\n\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\frac{d^2\\sigma^{\\rm{pp}}}{dp_{\\rm{T}}dy}\n= A_n~2\\pi p_{\\rm{T}} ~\\Bigg(1 + \\frac{p_{\\rm{T}}}{p_{0}}\\Bigg)^{-n}~.\n\\label{Hag}\n\\end{equation}\nwhere $n$ is the power and $A_n$ and $p_{0}$ are other parameters which are obtained\nby fitting the experimental pp spectrum.\n The yield in the AA collision can be obtained by shifting the spectrum by \n$\\Delta p_{\\rm T}$ as\n\\begin{eqnarray}\n \\,\\,\\,\\,\\, \\frac{1}{T_{\\rm{AA}}}\\frac{d^{2}N^{\\rm{AA}}}{dp_{\\rm{T}}dy}\n & = \\frac{d^{2}\\sigma^{\\rm{pp}}(p'_{\\rm{T}} = p_{\\rm{T}} + \\Delta p_{\\rm{T}})}{dp'_{\\rm{T}}dy}\n \\frac{dp'_{\\rm{T}}}{dp_{\\rm{T}}} \\nonumber \\\\\n & = \\frac{d^{2}\\sigma^{\\rm{pp}}(p'_{\\rm{T}})}{dp'_{\\rm{T}}dy}\n \\Bigg(1 + \\frac{d(\\Delta p_{\\rm{T}})}{dp_{\\rm{T}}}\\Bigg)~.\n\\label{shiftRAA}\n\\end{eqnarray}\nThe reasoning behind writing Eq.~\\ref{shiftRAA} lies in the assumption that particle yield at\na given $p_{\\rm{T}}$ in AA collisions would have been equal to the yield in pp collisions\nat $p_{\\rm{T}} + \\Delta p_{\\rm{T}}$. The shift $\\Delta p_{\\rm{T}}$ includes the medium effect,\nmainly energy loss of parent quark inside the plasma.\n\nThe nuclear modification factor $R_{\\rm{AA}}$ can be obtained as \n\\begin{eqnarray}\nR_{\\rm{AA}} = \\left(1 + { \\Delta p_{\\rm{T}} \\over p_{0}+p_{\\rm{T}} } \\right)^{-n} \\,\\,\n\\left({p_{\\rm{T}} + \\Delta p_{\\rm{T}} \\over p_{\\rm{T}}}\\right) \\, \n\\left(1 + {d(\\Delta p_{\\rm{T}}) \\over dp_{\\rm{T}}}\\right)\n\\label{nmf_raa_fitting_function}\n\\end{eqnarray}\nThe energy loss given by $p_{T}$ loss, $\\Delta p_{T}$ can be extracted by fitting the\nexperimental data on $R_{\\rm AA}$\nwith Eq.~\\ref{nmf_raa_fitting_function}.\nThe $\\Delta p_{\\rm T}$ and its derivative will go as input in the above equation\nand can be assumed to be in the form of the power law\nwith different values of power indices in three different $p_{\\rm T}$ regions as follows\n\n\n\n\\begin{eqnarray}\n\\Delta p_{\\rm T} = \\left\\{\n\\begin{array}{l}\na_1~(p_{\\rm T} - C_1)^{\\alpha_1} ~~~ {\\rm for} ~~~ p_{\\rm T} < p_{\\rm T_1}~~~, \\\\\na_2~(p_{\\rm T} - C_2)^{\\alpha_2} ~~~ {\\rm for} ~~~ p_{\\rm T_1} \\leq p_{\\rm T} < p_{\\rm T_2}~~,\\\\\na_3~(p_{\\rm T} - C_3)^{\\alpha_3} ~~~ {\\rm for} ~~~ p_{\\rm T} \\geq p_{\\rm T_2}~.\n\\end{array}\n\\right\\}\n\\label{Equation_Two}\n\\end{eqnarray}\n\n\nThe parameter $a_1$ in our work contains the pathlength dependence. The pathlength $L$\n scales as the square root of number of participants as $\\sqrt{N_{\\rm part}}$.\n For low and intermediate energy partons, $\\Delta p_T$ can be assumed to be\nlinearly dependent on $L$ \\cite{Muller:2002fa}. If the scattering happens\nin complete coherent regime where the whole medium acts as one coherent source of radiation, \nthe $\\Delta p_T$ approaches quadratic dependence on $L$.\n The work in Ref.~\\cite{Betz:2011tu} studies \nthe energy loss of jets in terms of exponent of the number of participants.\nWithout complicating the calculations we can assume that\n$a_1 = M \\, (N_{\\rm{part}}\/(2A))^\\beta$. 
The exponent $\\beta$ is obtained separately\nfor each dataset. \n The parameter $M$ relies on the energy density of the medium depending on the\ncollision energy but has a same value for all centralities.\nThe boundaries of the $p_{\\rm T}$ regions $p_{{\\rm{T}}_{1}}$, $p_{{\\rm{T}}_{2}}$ and the power\nindices $\\alpha_{1}$, $\\alpha_{2}$ and $\\alpha_{3}$ in the three different regions are\nused as free parameters while fitting the $R_{\\rm{AA}}$ measured at different centralities\nsimultaneously.\n The parameter $C_{1}$ is fixed to a suitable value to choose a lower $p_{\\rm T}$ cutoff\n and the parameters $C_{2}$, $C_{3}$, $a_2$ and $a_3$ are obtained by assuming the function\n and its derivative to be continuous at boundaries.\n\nDemanding that the function in Eq.~\\ref{Equation_Two} to be continuous\nat $p_{\\rm{T}} = p_{{\\rm{T}}_{1}}$ and at $p_{\\rm{T}} = p_{{\\rm{T}}_{2}}$ we obtain\n\\begin{equation}\n a_{2} = a_{1}~ \\frac{(p_{{\\rm{T}}_{1}} - C_{1})^{\\alpha_{1}}}{(p_{{\\rm{T}}_{1}} - C_{2})^{\\alpha_{2}}}~~.\n\\label{Equation_Three}\n\\end{equation}\n\n\\begin{equation}\n a_{3} = a_{2}~ \\frac{(p_{{\\rm{T}}_{2}} - C_{2})^{\\alpha_{2}}}{(p_{{\\rm{T}}_{2}} - C_{3})^{\\alpha_{3}}}~~.\n\\label{Equation_Four}\n\\end{equation}\n Demanding that at $p_{\\rm{T}} = p_{{\\rm{T}}_{1}}$, the derivative of Eq.~\\ref{Equation_Two} is continuous.\n\\begin{equation}\n a_{1}~\\alpha_{1}~(p_{{\\rm{T}}_{1}} - C_{1})^{(\\alpha_{1}-1)} = \n a_{2}~\\alpha_{2}~(p_{{\\rm{T}}_{1}} - C_{2})^{(\\alpha_{2}-1)}~~,\n\\label{Equation_Seven}\n\\end{equation}\nUsing the value of $a_{2}$ from Eq.~\\ref{Equation_Three}\n\\begin{equation}\n\\frac{\\alpha_{1}}{(p_{{\\rm{T}}_{1}} - C_{1})} = \\frac{\\alpha_{2}}{(p_{{\\rm{T}}_{1}} - C_{2})}~~,\n\\label{Equation_Eight}\n\\end{equation}\n\n\\begin{equation}\nC_{2} = p_{{\\rm{T}}_{1}} - \\frac{\\alpha_{2}}{\\alpha_{1}}(p_{{\\rm{T}}_{1}} -C_{1})~.\n\\label{Equation_Nine}\n\\end{equation}\nSimilarly, demanding the derivative to be continuous at\n$p_{\\rm{T}} = p_{{\\rm{T}}_{2}}$, we get $C_{3}$ \n\\begin{equation}\nC_{3} = p_{{\\rm{T}}_{2}} - \\frac{\\alpha_{3}}{\\alpha_{2}}(p_{{\\rm{T}}_{2}} -C_{2})~.\n\\label{Equation_Ten}\n\\end{equation}\nIn case of jets we consider only one region as the data starts from very high $p_T$\nabove 40 GeV\/$c$.\n\n\n\\section{Results and Discussions}\n\n\nFigure~\\ref{Figure1_charged_particles_pT_ALICE_spectra_Tsallis_fit_pp_276TeV} shows the\ninvariant yields of the charged particles as a function of the transverse momentum $p_{\\rm{T}}$\nfor pp collisions at $\\sqrt{s}$ = 2.76 TeV measured by the ALICE\nexperiment \\cite{Abelev:2013ala}. The solid curve is the Hagedorn distribution fitted\nto the $p_{\\rm{T}}$ spectra with the parameters given in \nTable~\\ref{table0_charged_particles_jet_pT_spectra_tsallis_fitting_parameters_276_502_TeV}. \n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.60\\linewidth]{Figure1_charged_particles_pT_ALICE_spectra_Tsallis_fit_pp_276TeV.pdf}\n\\caption{The invariant yields of the charged particles as a function of transverse momentum \n$p_{\\rm{T}}$ for pp collision at $\\sqrt{s}$ = 2.76 TeV measured by the ALICE experiment\n \\cite{Abelev:2013ala}. 
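\n\nBefore turning to the PbPb measurements, it is useful to spell out how such fitted pp parameters enter the analysis. The short sketch below (a schematic illustration of Eqs.~\\ref{Hag}, \\ref{nmf_raa_fitting_function} and \\ref{Equation_Two}--\\ref{Equation_Ten}, not the actual fitting code) assembles the three-region shift, the continuity relations and the resulting $R_{\\rm{AA}}$ for a single centrality class. The parameter values are those extracted below for charged particles at 2.76 TeV, except $N_{\\rm part}$, which is set to an illustrative value for a central class:\n\\begin{verbatim}\n# Schematic (ours) of how the extracted parameters enter the R_AA fitting\n# function: Hagedorn pp spectrum, three-region shift Delta pT, continuity\n# conditions fixing C2, a2, C3, a3, and the resulting R_AA.\nn, p0 = 7.26, 1.02                 # Hagedorn parameters, charged particles, pp 2.76 TeV\nM, beta, A = 0.75, 0.58, 208       # energy-loss scale, centrality exponent, Pb mass number\npT1, pT2, C1 = 5.03, 29.0, 1.0     # region boundaries and low-pT cutoff (GeV/c)\nal1, al2, al3 = 0.97, 0.22, 0.05   # power indices of the three regions\nN_part = 383.0                     # illustrative number of participants (central class)\n\na1 = M * (N_part / (2 * A)) ** beta\nC2 = pT1 - (al2 / al1) * (pT1 - C1)               # derivative continuous at pT1\na2 = a1 * (pT1 - C1) ** al1 / (pT1 - C2) ** al2   # shift continuous at pT1\nC3 = pT2 - (al3 / al2) * (pT2 - C2)               # derivative continuous at pT2\na3 = a2 * (pT2 - C2) ** al2 / (pT2 - C3) ** al3   # shift continuous at pT2\n\ndef delta_pT(pT):                  # piecewise power-law shift, valid for pT > C1\n    if pT < pT1:\n        a, al, C = a1, al1, C1\n    elif pT < pT2:\n        a, al, C = a2, al2, C2\n    else:\n        a, al, C = a3, al3, C3\n    return a * (pT - C) ** al, a * al * (pT - C) ** (al - 1.0)\n\ndef R_AA(pT):                      # nuclear modification factor from the shifted spectrum\n    s, ds = delta_pT(pT)\n    return (1 + s / (p0 + pT)) ** (-n) * (pT + s) / pT * (1 + ds)\n\nfor pT in (2.0, 5.0, 10.0, 30.0, 100.0):\n    print('pT = %6.1f GeV/c   Delta_pT = %5.2f GeV/c   R_AA = %4.2f'\n          % (pT, delta_pT(pT)[0], R_AA(pT)))\n\\end{verbatim}\nThe printed numbers only illustrate how the fitted parameters enter Eq.~\\ref{nmf_raa_fitting_function}; the actual values are obtained from fits to the measured $R_{\\rm{AA}}$ described next.\n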
The solid curve is the fitted Hagedorn function.}\n\\label{Figure1_charged_particles_pT_ALICE_spectra_Tsallis_fit_pp_276TeV}\n\\end{figure}\n\n\\begin{table}[ht]\n \\caption{Parameters for the Hagedorn function obtained by fitting the\ntransverse momentum spectra of charged particles and jets measured in pp \ncollisions at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 and 5.02 TeV.}\n\\begin{center}\n\\scalebox{0.8}{\n\\begin{tabular}{| c || c | c | c | c |} \n\\hline\nParameters & \\multicolumn{2}{c|}{Charged particles}\n & \\multicolumn{2}{c|}{Jets } \\\\ \\hline\n & $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV & $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV \n & $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV & $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV \\\\ \\hline \\hline\n $n$ & 7.26 $\\pm$ 0.08 & 6.70 $\\pm$ 0.14 & 8.21 $\\pm$ 1.55 & 7.90 $\\pm$ 0.50 \\\\\\hline\n $p_{0}$ (GeV\/$c$) & 1.02 $\\pm$ 0.04 & 0.86 $\\pm$ 0.16 & 18.23 $\\pm$ 1.69 & 19.21 $\\pm$ 3.20 \\\\\\hline\n$\\chi^{2}\/\\rm{NDF}$ & 0.15 & 0.06 & 0.23 & 0.95 \\\\\\hline \n\\end{tabular}}\n\\end{center}\n\\label{table0_charged_particles_jet_pT_spectra_tsallis_fitting_parameters_276_502_TeV}\n\\end{table}\n\n\n\nFigure~\\ref{Figure2_charged_particles_RAA_ALICE_spectra_com_fit_PbPb_276TeV} shows the nuclear\nmodification factor $R_{\\rm{AA}}$ of the charged particles as a function of the transverse\nmomentum $p_{\\rm{T}}$ for different centrality classes in PbPb collisions at $\\sqrt{s_{\\rm{NN}}}$\n= 2.76 TeV measured by the ALICE experiment \\cite{Abelev:2012hxa}. The solid lines are the\nfunction given by Eq.~\\ref{nmf_raa_fitting_function}. The modeling of centrality dependence using\n$N_{\\rm part}^\\beta$ with $\\beta=0.58$ gives a very good description of the data. \n The extracted parameters of the shift $\\Delta p_{\\rm{T}}$ \nobtained by fitting the $R_{\\rm{AA}}$ measured in different centrality classes of PbPb\ncollisions at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV are given in\nTable~\\ref{table1_charged_particles_raa_fitting_parameter_276_502_TeV}\nalong with value of $\\chi^{2}\/\\rm{NDF}$. It shows that the $\\Delta p_{\\rm{T}}$ increases\nalmost linearly ($p_{\\rm{T}}^{0.97}$) upto $p_{\\rm{T}} \\simeq$ 5 GeV\/$c$ in confirmation with\nearlier studies. 
After that it increases slowly with power $\\alpha=0.224$ upto\na $p_{\\rm{T}}$ value 29 GeV\/$c$ and then\nbecomes constant for higher values of $p_{\\rm{T}}$.\n\n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{Figure2_charged_particles_RAA_ALICE_spectra_com_fit_PbPb_276TeV.pdf}\n\\caption{The nuclear modification factor $R_{\\rm{AA}}$ of the charged particles as a \nfunction of transverse momentum $p_{\\rm{T}}$ for various centrality classes in PbPb \ncollisions at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV measured by the ALICE experiment \\cite{Abelev:2012hxa}.\n The solid curves are the $R_{\\rm{AA}}$ fitting function\n(Eq.~\\ref{nmf_raa_fitting_function}).} \n\\label{Figure2_charged_particles_RAA_ALICE_spectra_com_fit_PbPb_276TeV}\n\\end{figure}\n\n\n\\begin{table}[ht]\n \\caption[]{The extracted parameters of the shift $\\Delta p_{\\rm{T}}$ obtained by fitting the charged\n particles $R_{\\rm{AA}}$\nmeasured in different centrality classes of PbPb collisions at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 and 5.02 TeV.}\n\\label{table1_charged_particles_raa_fitting_parameter_276_502_TeV}\n\\begin{center}\n\\begin{tabular}{| c || c | c |} \\hline\n ~ Parameters & $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV & $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV \\\\ \\hline\\hline\n~ $M$ & 0.75 $\\pm$ 0.02 & 0.80 $\\pm$ 0.038 \\\\ \\hline \n~ $p_{{\\rm{T}}_{1}}$ (GeV\/$c$) & 5.03 $\\pm$ 0.15 & 5.10 $\\pm$ 0.22 \\\\ \\hline \n~ $p_{{\\rm{T}}_{2}}$ (GeV\/$c$) & 29.0 $\\pm$ 0.1 & 22.2 $\\pm$ 4.1 \\\\ \\hline \n~ $C_{1}$ (GeV\/$c$) & 1.0 (fixed) & 1.0 \\\\ \\hline \n~ $\\alpha_{1}$ & 0.97 $\\pm$ 0.02 & 0.95 $\\pm$ 0.04 \\\\ \\hline \n~ $\\alpha_{2}$ & 0.22 $\\pm$ 0.02 & 0.22 $\\pm$ 0.03 \\\\ \\hline \n~ $\\alpha_{3}$ & 0.05 $\\pm$ 0.13 & 0.05 $\\pm$ 0.10 \\\\ \\hline \n~ $\\frac{\\chi^{2}}{\\rm{NDF}}$ & 0.35 & 0.38 \\\\ \\hline \n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\n Figure~\\ref{Figure3_charged_particles_com_fit_Del_pT_PbPb_276TeV} shows the energy loss\n$\\Delta p_{\\rm{T}}$ of the charged particles as a function of the transverse momentum\n$p_{\\rm{T}}$ for different centrality classes in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$\n= 2.76 TeV. The $\\Delta p_{\\rm{T}}$ is obtained from Eq.~\\ref{Equation_Two}\nwith parameters given in \nTable~\\ref{table1_charged_particles_raa_fitting_parameter_276_502_TeV}.\n The $\\Delta p_{\\rm{T}}$ increases from peripheral to the most\n central collision regions as per $N_{\\rm part}^{0.58}$.\n The figure shows that the $\\Delta p_{\\rm{T}}$ increases \nalmost linearly upto $p_{\\rm{T}} \\sim 5$ GeV\/$c$. After that it\nincreases slowly upto a $p_{\\rm{T}}$ value 29 GeV\/$c$ and then\nbecomes constant for higher values of $p_{\\rm{T}}$.\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.60\\linewidth]{Figure3_charged_particles_com_fit_Del_pT_PbPb_276TeV.pdf}\n\\caption{The energy loss $\\Delta p_{\\rm{T}}$ of the charged particles as a function of \ntransverse momentum $p_{\\rm{T}}$ in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV for \ndifferent centrality classes.}\n\\label{Figure3_charged_particles_com_fit_Del_pT_PbPb_276TeV}\n\\end{figure}\n\n\n Figure~\\ref{Figure4_charged_particles_cms_pT_spectra_pp_502TeV} shows the invariant\nyields of the charged particles as a function of the transverse momentum $p_{\\rm{T}}$\nfor pp collisions at $\\sqrt{s}$ = 5.02 TeV measured by the CMS experiment\n\\cite{Khachatryan:2016odn}. 
The solid lines are the Hagedorn function fitted to\nthe measured $p_{\\rm{T}}$ spectra the parameters of which are given in \nTable~\\ref{table0_charged_particles_jet_pT_spectra_tsallis_fitting_parameters_276_502_TeV}. \n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.60\\linewidth]{Figure4_charged_particles_cms_pT_spectra_pp_502TeV.pdf}\n\\caption{The invariant yields of the charged particles as a function of transverse momentum \n$p_{\\rm{T}}$ for pp collision at $\\sqrt{s}$ = 5.02 TeV measured by the CMS experiment \n\\cite{Khachatryan:2016odn}. The solid curve is the fitted Hagedorn function.}\n\\label{Figure4_charged_particles_cms_pT_spectra_pp_502TeV}\n\\end{figure}\n\n\nFigure~\\ref{Figure5_charged_particles_cms_RAA_spectra_com_fit_PbPb_502TeV} shows the\nnuclear modification factor $R_{\\rm{AA}}$ of the charged particles as a function of the\ntransverse momentum $p_{\\rm{T}}$ for different centrality classes in PbPb collisions at\n$\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV measured by the CMS experiment \\cite{Khachatryan:2016odn}.\nThe solid curves are the $R_{\\rm{AA}}$ fitting function (Eq.~\\ref{nmf_raa_fitting_function}).\n Here also the modeling of centrality dependence using \n$N_{\\rm part}^\\beta$ with $\\beta=0.58$ gives a good description of the data. \n The extracted parameters of the shift $\\Delta p_{\\rm{T}}$ \nobtained by fitting the $R_{\\rm{AA}}$ measured in different centrality classes of PbPb\ncollisions at $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV are given in\nTable~\\ref{table1_charged_particles_raa_fitting_parameter_276_502_TeV}\nalong with the value of $\\chi^{2}\/\\rm{NDF}$. It shows that the $\\Delta p_{\\rm{T}}$ increases\nalmost linearly ($p_{\\rm{T}}^{0.96}$) similar to the case at 2.76 TeV for $p_{\\rm T}$ upto 5.1 GeV\/$c$.\nAfter that it increases slowly with power $\\alpha=0.22$ upto a $p_{\\rm{T}}$ value 22.2 GeV\/$c$ and then\nbecomes constant for higher values of $p_{\\rm{T}}$ right upto 160 GeV\/$c$.\n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{Figure5_charged_particles_cms_RAA_spectra_com_fit_PbPb_502TeV.pdf}\n\\caption{The nuclear modification factor $R_{\\rm{AA}}$ of the charged particles as a function \nof transverse momentum $p_{\\rm{T}}$ for various centrality classes in PbPb collisions at \n$\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV measured by the CMS experiment \\cite{Khachatryan:2016odn}. The\nsolid lines are the $R_{\\rm{AA}}$ fitting function (Eq.~\\ref{nmf_raa_fitting_function}).}\n\\label{Figure5_charged_particles_cms_RAA_spectra_com_fit_PbPb_502TeV}\n\\end{figure}\n \n\nFigure~\\ref{Figure6_charged_particles_com_fit_Del_pT_PbPb_502TeV} shows the energy loss\n$\\Delta p_{\\rm{T}}$ of the charged particles as a function of the transverse momentum\n$p_{\\rm{T}}$ for different centrality classes in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ =\n5.02 TeV. 
The $\\Delta p_{\\rm{T}}$ is obtained from Eq.~\\ref{Equation_Two} with the\nparameters given in \nTable~\\ref{table1_charged_particles_raa_fitting_parameter_276_502_TeV}.\nThe $\\Delta p_{\\rm{T}}$ becomes constant for $p_{\\rm{T}}$ in the range 22 GeV\/$c$ to 160 GeV\/c\nand increases as the collisions become more central.\n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.60\\linewidth]{Figure6_charged_particles_com_fit_Del_pT_PbPb_502TeV.pdf}\n\\caption{The energy loss $\\Delta p_{\\rm{T}}$ of the charged particles as a function of \ntransverse momentum $p_{\\rm{T}}$ in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV for \ndifferent centrality classes.}\n\\label{Figure6_charged_particles_com_fit_Del_pT_PbPb_502TeV}\n\\end{figure}\n\n\nFigure~\\ref{Figure7_charged_particles_Del_pT_cen_0_5_PbPb_276_502TeV} shows the comparison of energy\nloss $\\Delta p_{\\rm{T}}$ of the charged particles as a function of the transverse momentum\n$p_{\\rm{T}}$ for 0 - 5 $\\%$ centrality class in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ =\n2.76 and at 5.02 TeV. The $\\Delta p_{\\rm{T}}$ at 5.02 TeV is similar but slightly more than\nthat at 2.76 TeV. \n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.60\\linewidth]{Figure7_charged_particles_Del_pT_cen_0_5_PbPb_276_502TeV.pdf}\n\\caption{The energy loss $\\Delta p_{\\rm{T}}$ of the charged particles as a function \nof transverse momentum $p_{\\rm{T}}$ in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 \nand 5.02 TeV for 0 - 5 $\\%$ centrality.}\n\\label{Figure7_charged_particles_Del_pT_cen_0_5_PbPb_276_502TeV}\n\\end{figure}\n\n\nFigure~\\ref{Figure8_jet_atlas_pT_spectra_pp_276TeV} shows the yields of the\njets as a function of the transverse momentum $p_{\\rm{T}}$ for pp collisions at $\\sqrt{s}$\n= 2.76 TeV measured by the ATLAS experiment~\\cite{Aad:2014bxa}. The solid curve is the\nHagedorn distribution fitted to the $p_{\\rm{T}}$ spectra, the parameters of which\nare given in \nTable~\\ref{table0_charged_particles_jet_pT_spectra_tsallis_fitting_parameters_276_502_TeV}. \n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.60\\linewidth]{Figure8_jet_atlas_pT_spectra_pp_276TeV.pdf}\n\\caption{The yields of the jets as a function of transverse momentum \n$p_{\\rm{T}}$ for pp collision at $\\sqrt{s}$ = 2.76 TeV measured by the ATLAS experiment \n \\cite{Aad:2014bxa}. The solid curve is the fitted Hagedorn distribution.}\n\\label{Figure8_jet_atlas_pT_spectra_pp_276TeV}\n\\end{figure}\n\n\nFigure~\\ref{Figure9_Jet_particles_cms_RAA_spectra_com_fit_PbPb_276TeV} shows the nuclear\nmodification factor $R_{\\rm{AA}}$ of the jets as a function of the transverse\nmomentum $p_{\\rm{T}}$ for different centrality classes in PbPb collisions at $\\sqrt{s_{\\rm{NN}}}$\n= 2.76 TeV measured by the ATLAS experiment \\cite{Aad:2014bxa}. \nThe solid curves are the $R_{\\rm{AA}}$ fitting function (Eq.~\\ref{nmf_raa_fitting_function}).\n Here the modeling of centrality dependence using $N_{\\rm part}^\\beta$ with $\\beta=0.60$\ngives a good description of the data. 
\n The extracted parameters of the energy loss\nobtained by fitting the $R_{\\rm{AA}}$ measured in different centrality classes of PbPb\ncollisions at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV are given in\nTable~\\ref{Table2_Jet_raa_fitting_parameter_PbPb_276_502_TeV}\nalong with the value of $\\chi^{2}\/\\rm{NDF}$.\nIt shows that the $\\Delta p_{\\rm{T}}$ increases as $p_{\\rm{T}}^{0.76}$\nat all the values of $p_{\\rm T}$ measured for jets.\n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{Figure9_Jet_particles_cms_RAA_spectra_com_fit_PbPb_276TeV.pdf}\n\\caption{The nuclear modification factor $R_{\\rm{AA}}$ of jets as a function of\ntransverse momentum $p_{\\rm{T}}$ for various centrality classes in PbPb collisions at \n$\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV measured by the ATLAS experiment \\cite{Aad:2014bxa}. The solid\ncurves are the $R_{\\rm{AA}}$ fitting function given by Eq.~\\ref{nmf_raa_fitting_function}.}\n\\label{Figure9_Jet_particles_cms_RAA_spectra_com_fit_PbPb_276TeV}\n\\end{figure}\n\n\n\n\\begin{table}[ht]\n \\caption[]{The extracted parameters of the shift $\\Delta p_{\\rm{T}}$ obtained by fitting the jet $R_{\\rm{AA}}$\n measured in different centrality classes of PbPb collisions at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 and\n 5.02 TeV.}\n\\label{Table2_Jet_raa_fitting_parameter_PbPb_276_502_TeV}\n\\begin{center}\n\\begin{tabular}{| c || c | c |} \\hline\n~ Parameters~ & $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV & $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV \\\\ \\hline\\hline\n~ $M$ & 0.33 $\\pm$ 0.1 & 0.40 $\\pm$ 0.12 \\\\ \\hline \n~ $C$ (GeV\/$c$) & -55.1 $\\pm$ 22.7 & -119 $\\pm$ 15 \\\\ \\hline \n~ $\\alpha$ & 0.76 $\\pm$ 0.08 & 0.72 $\\pm$ 0.01 \\\\ \\hline \n~ $\\frac{\\chi^{2}}{\\rm{NDF}}$ & 0.30 & 0.25 \\\\ \\hline \n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\nFigure~\\ref{Figure10_Jet_particles_Del_pT_PbPb_276TeV} shows the shift $\\Delta p_{\\rm{T}}$\nof the jets as a function of the transverse momentum $p_{\\rm{T}}$\nfor different centrality classes in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV.\nThe $\\Delta p_{\\rm{T}}$ is obtained from Eq.~\\ref{Equation_Two} using the \nparameters given in Table~\\ref{Table2_Jet_raa_fitting_parameter_PbPb_276_502_TeV}.\nThe $\\Delta p_{\\rm{T}}$ increases from \nperipheral to the most central collision regions.\n The figure shows that the $\\Delta p_{\\rm{T}}$ increases almost linearly \nat all the values of $p_{\\rm T}$ measured for jets.\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.60\\linewidth]{Figure10_Jet_particles_Del_pT_PbPb_276TeV.pdf}\n\\caption{The shift $\\Delta p_{\\rm{T}}$ of the jets as a function of transverse \nmomentum $p_{\\rm{T}}$ in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV for different \ncentrality classes.}\n\\label{Figure10_Jet_particles_Del_pT_PbPb_276TeV}\n\\end{figure}\n\n\n\nFigure~\\ref{Figure11_jet_yield_pp_502tev} shows the yields of the jets\nas a function of the transverse momentum $p_{\\rm{T}}$ for pp collisions at $\\sqrt{s}$\n= 5.02 TeV measured by the ATLAS experiment \\cite{Aaboud:2018twu}. The solid curve\nis the Hagedorn distribution with the parameters given in \nTable~\\ref{table0_charged_particles_jet_pT_spectra_tsallis_fitting_parameters_276_502_TeV}. 
\n\n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.60\\linewidth]{Figure11_jet_yield_pp_502tev.pdf}\n\\caption{The yields of the jets as a function of transverse \nmomentum $p_{\\rm{T}}$ for pp collision at $\\sqrt{s}$ = 5.02 TeV measured by the \nATLAS experiment \\cite{Aaboud:2018twu}. The solid curve is the fitted Hagedorn \ndistribution.}\n\\label{Figure11_jet_yield_pp_502tev}\n\\end{figure}\n\n\n\nFigure~\\ref{Figure12_Jet_particles_cms_RAA_spectra_com_fit_PbPb_502TeV} shows the\nnuclear modification factor $R_{\\rm{AA}}$ of the jets as a function of the\ntransverse momentum $p_{\\rm{T}}$ for different centrality classes in PbPb collisions\nat $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV measured by the ATLAS experiment \\cite{Aaboud:2018twu}.\n The solid curves are the $R_{\\rm{AA}}$ fitting function (Eq.~\\ref{nmf_raa_fitting_function}).\n The modeling of centrality dependence is done with $N_{\\rm part}^\\beta$ and the\nvalue of exponent is obtained as $\\beta=0.75$. \n The extracted parameters of the energy loss\nobtained by fitting the $R_{\\rm{AA}}$ measured in different centrality classes of PbPb\ncollisions at $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV are given in\nTable~\\ref{Table2_Jet_raa_fitting_parameter_PbPb_276_502_TeV},\nalong with the value of $\\chi^{2}\/\\rm{NDF}$.\nIt shows that the $\\Delta p_{\\rm{T}}$ increases as $p_{\\rm{T}}^{0.72}$\nat all the values of $p_{\\rm T}$ measured for jets similar to the case of jets at \n$\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV\n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{Figure12_Jet_particles_cms_RAA_spectra_com_fit_PbPb_502TeV.pdf}\n\\caption{The nuclear modification factor $R_{\\rm{AA}}$ of the jets as a function\nof transverse momentum $p_{\\rm{T}}$ for various centrality classes in PbPb collisions at\n$\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV measured by the ATLAS experiment \\cite{Aaboud:2018twu}. 
\nThe solid curves are the $R_{\\rm{AA}}$ fitting function given by Eq.~\\ref{nmf_raa_fitting_function}.}\n\\label{Figure12_Jet_particles_cms_RAA_spectra_com_fit_PbPb_502TeV}\n\\end{figure}\n\n\n\nFigure~\\ref{Figure13_Jet_particles_Del_pT_PbPb_502TeV} shows the energy loss\n$\\Delta p_{\\rm{T}}$ of the jets as a function of the transverse momentum $p_{\\rm{T}}$\nfor different centrality classes in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV.\nThe $\\Delta p_{\\rm{T}}$ is obtained from Eq.~\\ref{Equation_Two} with the \nparameters given in Table~\\ref{Table2_Jet_raa_fitting_parameter_PbPb_276_502_TeV}.\nThe $\\Delta p_{\\rm{T}}$ increases from peripheral to the most central collision regions.\nThe figure shows that the $\\Delta p_{\\rm{T}}$ increases almost linearly\nat all the values of $p_{\\rm T}$ measured for jets.\n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.61\\linewidth]{Figure13_Jet_particles_Del_pT_PbPb_502TeV.pdf}\n\\caption{The energy loss $\\Delta p_{\\rm{T}}$ of the jets as a function of transverse \nmomentum $p_{\\rm{T}}$ in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV for different\ncentrality classes.}\n\\label{Figure13_Jet_particles_Del_pT_PbPb_502TeV}\n\\end{figure}\n\n\n\nFigure~\\ref{Figure14_jet_Del_pT_cen_0_10_PbPb_276_502TeV} shows the energy loss\n$\\Delta p_{\\rm{T}}$ of the jets as a function of the transverse momentum $p_{\\rm{T}}$ \nin the most central (0-10\\%) PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 and 5.02 TeV.\nThese are compared with the $\\Delta p_{\\rm{T}}$ obtained for charged particles in\nthe 0-5\\% centrality class of PbPb collision\nat $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV.\nThe energy loss $\\Delta p_{\\rm{T}}$ in case of jets for both the energies increases\nwith $p_{\\rm{T}}$. The values of $\\Delta p_{\\rm{T}}$ for jets at 5.02 TeV is more than\nthat at 2.76 TeV. This behaviour at high $p_{\\rm{T}}$ is very different from the\nenergy loss of charged particles which becomes constant in these $p_{\\rm{T}}$ regions.\nThe modeling of centrality dependence of energy loss has been done using\n$N_{\\rm part}^\\beta$.\nFor charged particles, the centrality dependence of $p_T$ shift is found to\nbe $N_{\\rm part}^{0.58}$ which corresponds to $L^{1.18}$.\nThe centrality dependence for jets at $\\sqrt{s_{\\rm NN}}$ = 2.76 TeV is found to be\n$N_{\\rm part}^{0.60}$. \nIn case of jets at 5 TeV, the centrality dependence of energy loss is found to be\n$N_{\\rm part}^{0.75}$ corresponding to $L^{1.5}$ which means that the jets even at\nvery high energy are still away from complete coherent regime. \n\n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.61\\linewidth]{Figure14_jet_Del_pT_cen_0_10_PbPb_276_502TeV.pdf}\n\\caption{The energy loss $\\Delta p_{\\rm{T}}$ of the jets as a function of transverse \nmomentum $p_{\\rm{T}}$ in the most central PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 and 5.02 TeV. 
\nThe $\\Delta p_{\\rm{T}}$ obtained for charged particles in the most central PbPb collision\nat $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV is also shown.}\n\\label{Figure14_jet_Del_pT_cen_0_10_PbPb_276_502TeV}\n\\end{figure}\n\n\n\n\\clearpage\n\n\\section{Conclusions}\n\nWe presented a study of partonic energy loss with $p_T$ shift extracted from the measured\n$R_{\\rm{AA}}$ of charged particles and jets in PbPb collisions at $\\sqrt{s_{\\rm NN}}$ = 2.76\nand 5.02 TeV in wide transverse momentum and centrality range.\n The functional form of energy loss given by\n$\\Delta p_{\\rm T}$ has been assumed as power law with different power indices\nin three different $p_{\\rm T}$ regions driven by physics considerations.\nThe power indices and the boundaries of three $p_{\\rm T}$ regions are obtained by\nfitting the experimental data of $R_{\\rm{AA}}$ as a function of $p_{\\rm T}$ and centrality.\n The energy loss for light \ncharged particles is found to increase linearly with $p_{\\rm T}$ in low $p_{\\rm T}$ region\nbelow 5-6 GeV\/$c$ and approaches a constant value in high $p_{\\rm T}$ region above 25 GeV\/$c$\nwith an intermediate power law connecting the two regions.\n The $\\Delta p_{\\rm{T}}$ at 5.02 TeV is similar but slightly more than\nthat at 2.76 TeV. \nIn case of jets we consider only one $p_T$ region and it is found that for jets, the\nenergy loss increases almost linearly even at very\nhigh $p_{\\rm T}$.\n The modeling of centrality dependence of energy loss has been done using\n$N_{\\rm part}^\\beta$.\n For charged particles, the centrality dependence of $p_T$ shift is found to\nbe $N_{\\rm part}^{0.58}$ which corresponds to $L^{1.18}$.\n The centrality dependence for jets at $\\sqrt{s_{\\rm NN}}$ = 2.76 TeV goes as\n$N_{\\rm part}^{0.60}$. \nIn case of jets at 5 TeV, the centrality dependence of energy loss is found to be \n$N_{\\rm part}^{0.75}$ corresponding to $L^{1.5}$ which means that the jets even at\nvery high energy are still away from complete coherent regime. \n\n\n\n\\ \\\\\n\n\\noindent\n{\\bf References}\n\n\\noindent\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzeppb b/data_all_eng_slimpj/shuffled/split2/finalzzeppb new file mode 100644 index 0000000000000000000000000000000000000000..a3ec4541e6f275403a0e690443ae1be31045133e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzeppb @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\n\\subsection{Motivation and problem description}\n\n\n\\IEEEPARstart{T}o study the coordination and control features of a group\ntask, the multiple groups' performances must be fitted together. \n An enduring postulate in organization science\nis that coordination and control cannot be achieved strictly by the\nauthority structure, but must also entail informal communication and\ninfluence networks that link the members of different task-oriented groups;\nwe focus on formation of such network structures. As the size of a\nconnected social network increases, multigroup formations that are\ndistinguishable clusters of individuals become a characteristic and\nimportant feature of network topology. The connectivity of multigroup\nnetworks may be based on edge bundles connecting multiple individuals in\ntwo disjoint groups, bridges connecting two individuals in two disjoint\ngroups, or co-memberships. A large-scale network may include instances of\nall of these connectivity modalities. 
We set up populations of multiple\ngroups and propose a dynamic model for formation of these intergroup\nconnectivity structures.\n\nOur economic dynamical model explains and predicts whether a network\nevolves into different coordination and control structures. Medium and\nlarge scale organizations adopt these multigroup structures to tackle\ncomplex nested tasks. Among the multitude of possible coordination and\ncontrol structures, we study formation of multigroup connectivity\nstructures shown in Fig.~\\ref{fig:schematic}, which are familiar constructs\nin the field of social network science.\n\\begin{figure}[h]\n\t\\begin{center} \n\t\t\\subfloat[Co-memberships]{\\includegraphics[height=.89in]{ControlStructure-3}\\label{fig:control-structure-3}}\\qquad\n\t\t\\subfloat[Edge Bundles]{\\includegraphics[height=.8in]{ControlStructure-2}\\label{fig:control-structure-2}}\\qquad\n\t\t\\subfloat[Bridges]{\\includegraphics[height=.8in]{ControlStructure-1}\\label{fig:control-structure-1}}\\\\\n\t\t\\caption{\\small Schematic illustration of the three possible control\nand coordination structures}\\label{fig:schematic}\n\t\\end{center}\n\\end{figure}\nFor this purpose we apply a game-theoretic framework in which strategic agents take actions based on the rate or importance of coordination problems. In other words, a value is assigned to the coordination problem between any two distinct groups, so that all control and coordination problems among groups are described by a square non-negative matrix, as illustrated in Fig.~\\ref{fig:matrix-F}.\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.3\\linewidth , height=0.3\\linewidth]{matrixGrayscale.png}\\\\\n\\caption{$F$ = frequency\/importance of intergroup coordination problem}\\label{fig:matrix-F} \n\\end{figure} \nIn our setting, agents are myopic, self-interested, and have thorough knowledge of graph topology and the utility they acquire from any other agent. \n\n\n\\subsection{Related literature}\n\nBridge, edge bundle, and co-membership connectivity models have been studied extensively in~\\cite{SM-PA-FB-NEF:17c}, where implications of these structures are investigated and generative models are proposed for each. These prototypical structures can mitigate coordination and control loss in an organization. Coordination and control importance of bridge connected structure, in which\ncommunication between subgroups are based on single contact edges, is the emphasis of the~\\cite{MSG:73, MT-DK:10}, and~\\cite{WS-TE:08} models. \nCoordination and control importance of the redundant ties structure, in which multiple redundant contact edges connect pairs of groups, is the emphasis\nof~\\cite{NEF:98}, Chapter~8, ~\\cite{NEF:83}, and~\\cite{HCW-SAB-RLB:76}. Co-membership intersection\nstructures, in which subgroups have common members, is the emphasis of the linking-pin\nmodel by Likert~\\cite{RL:67}, as well as~\\cite{BC-JAH:04} and~\\cite{SPB-DSH:11}.~\\cite{XZ-CW-YS-LP-HZ:17} and \\cite{JY-JL:12} propose a community detection algorithm for overlapping networks.\n\nJackson and Wolinsky introduced a strategic network formation model in their seminal paper~\\cite{MOJ-AW:96}. They studied pairwise stability, where bilateral agreement is required for link formation. Homogeneity and common knowledge of current network to all players are two assumptions in this model. Jackson and Watts studied strategic network formation in a dynamic framework in~\\cite{MOJ-AW:02}. 
The network formation model we present in this work is closely related to~\\cite{MOJ-AW:96} and~\\cite{MOJ-AW:02}. Jackson and Rogers examined an economic model of network formation in \\cite{MOJ-BWR:05} where agents benefit from indirect relationships. They showed that small-world features necessarily emerge for a wide set of parameters.\n\nIn \\cite{VB-SG:00}, Bala and Goyal proposed a dynamic model to study Nash and strict Nash stability. In their model, starting from any initial network, each player with some positive probability plays a best response (or randomizes across them when there is more than one); otherwise the player exhibits inertia. A Markov chain on the state space of all networks is defined whose absorbing states are strict Nash networks. The authors proved that starting from any network, the dynamic process converges to a strict Nash network (i.e., the empty network or a center-sponsored star) with probability 1.\n\nIn \\cite{NO-FV:13}, Olaizola and Valenciano extended the model in~\\cite{VB-SG:00} and studied network formation under linking constraints. An exogenous link-constraining system specifies the admissible links. Players in the same component of the link-constraining network have common knowledge of that component. This model collapses to the unrestricted setting in~\\cite{VB-SG:00} (when the underling constraining network is complete graph). The set of Nash networks is a subset of Bala and Goyal's unrestricted Nash network sets.\n\nIn the network formation game by Chasparis and Shamma in~\\cite{GCC-JSS:13} and~\\cite{GCC-JSS:08}, agents form and sever unidirectional links with other nodes, and stable networks are characterized through the notion of Nash equilibrium. Pagan and D{\\\"o}rfler~\\cite{NP-FD:19} studied network formation on directed weighted graphs and considered two notions of stability: Nash equilibrium to model purely selfish actors, and pairwise-Nash stability which combines the selfish attitude with the possibility of coordination among agents. McBride dropped the common knowledge assumption and studied the effects of limited perception (each player perceives the current network only up to a certain distance) in \\cite{MMB:06}. Song and van der Schaar \\cite{YS-MVDS:15} studied a dynamic network formation model with incomplete information.\n\nCommunity networks and their growth into potential socially robust\nstructures is studied in~\\cite{LM:19}. Bringmann et al. analyzed the\nevolution of large networks to predict link creation among the nodes\nin~\\cite{BB-MB-FB-AG:10}. \\cite{YJ-YW-XJ-ZZ-XC:17} studied link\ninference problem in heterogeneous information networks by proposing a\nknapsack-constrained inference method.\n\n\\subsection{Statement of contribution} \nWe consider a strategic network formation game described by a cost of maintaining links, a benefit of having connections, and an importance of coordination problems among pre-specified groups. Our setup is a heterogeneous generalization of the famous connection model. For this game, we study the resulting multi-group structures that are pairwise stable and socially efficient.\n\n\nFor this game, we also introduce a formation dynamics whereby \nlink formations require mutual consent and link removals can be initiated unilaterally. 
We study the conditions that give rise to formation of multigroup structures, as well as conditions which cause the multigroup structures be stable and\/or efficient.\nOur contributions are as follows:\n \n\nWe introduce certain threshold functions and provide bounds based on these functions to study pairwise stable and efficient structures. We also investigate the convergence of Formation Dynamics. For our analysis, we first focus on the structure of each group and formation of intra-connections. We particularly study the conditions which result in each group being a clique, and present results on pairwise stability, efficiency, and convergence of these cliques.\n\nWe then focus on the interconnections among those cliques. We present results on the pairwise stability and convergence to disjoint union of cliques for multigroup structures of arbitrary sizes. The rest of the analysis for density of interconnections is divided into two sections: two-group connectivity structures and multigroup connectivity structures. \n\nFor the two group structures, we provide a complete characterization of full ranges of parameters for stability and efficiency. We present results on the pairwise stability and efficiency of minimally connected, redundantly connected, and maximally connected structures. We identify the ranges of parameter in which the efficient and the pairwise stable structure overlap and those in which they have a conflict.\n\nWe then investigate the multigroup structures. We study the pairwise stability of minimally connected cliques along arbitrary interconnection structures. We show that for the special case of the interconnection being a star graph, it is possible to identify the boundaries of parameters for stability of all interconnections being minimally connected. We also present results on formation of redundancies and for efficiency.\n\n\\subsection{Preliminaries}\n\n\nEach undirected graph is identified with the pair $\\mathcal{(V, E)}$. The set of graph nodes $\\mathcal{V} \\neq \\emptyset$ represents individuals or groups of individuals in a social network. $|\\mathcal{V}|=n$ is the size of the network. The pair $(i,j)$ is called an edge and it indicates the interaction between the two individuals $i$ and $j$. The set of graph edges $\\mathcal{E}$ represents the social interactions or ties among all individuals. Throughout this paper, since the individuals are unchangeable, we refer to the network $\\mathcal{(V, E)}$ simply as $\\mathcal{E}$. \n\nThe density of a graph is given by the ratio of the number of its observed to possible edges, $ \\ddfrac{2|\\mathcal{E}|}{n(n-1)}$. In a complete graph every pair of distinct nodes is connected by an edge. We denote the complete graph of size $n$ by $K_n$. A clique is a subset of vertices of a graph in which every two distinct vertices are adjacent. We say two graphs are adjacent if they differ in precisely one edge. A path of length $k$ is a sequence of nodes $i_{1}i_{2}\\dots i_{k}$ such that $\\{(i_{s},i_{s+1}) \\} \\in \\mathcal{E}$. A walk of minimum length between two nodes is the shortest path. $d_{ij}(\\mathcal{E})$ denotes the distance between nodes $i$ and $j$, which is defined as the length of the shortest path beginning at $i$ and ending at $j$. \n\n\\section{Multigroup Network Formation Model}\n\\label{sec:model}\nConsider a society of $n$ individuals $\\mathcal{V}$, divided into $m$ groups. 
The set of $m$ groups is denoted by $\\until{m}, m \\leq n$.\n$P= \\{ P_1, \\dots, P_m\\}$ represents the partitioning of individuals into\nthe groups and is a set partition of size $n$, i.e, $\\mathcal{V}=\n\\bigcup \\limits_{\\gamma=1}^{m} P_{\\gamma}$, and $\\bigcap \\limits_{\\gamma=1}^{m} P_{\\gamma} =\n\\emptyset$. We use the shorthand notation $s_{\\gamma}= |P_{\\gamma}|$\ndenoting the size of group $\\gamma$. Throughout this paper, we assume that\n$s_{\\gamma} \\geq 3$ for all $\\gamma \\in \\until{m}$.\n\n\\textit{Group coordination importance matrix (data)}: is given as $F \\in\n\\mathbb{R}^{m \\times m}$, where $ 0 \\leq F_{\\alpha \\beta} \\leq 1$ for\n$\\alpha, \\beta \\in \\until{m}$ represents importance\/frequency of coordination\nproblem between groups $\\alpha$ and $\\beta$. We assume $F$ is a symmetric\nmatrix with diagonal entries equal to $1$.\n\n\\textit{Individual coordination importance matrix}: $\\hat{F} \\in \\mathbb{R}^{n \\times n}$, is obtained from $F$ and the partition $P$, i.e., $\\hat{F}= f(F, P)$. We construct $\\hat{F}$ as follows:\n \n \\begin{equation*} \n \\hat{F}_{ij} =\\begin{cases}\n F_{\\alpha \\beta}, & i \\in P_{\\alpha}, j \\in P_{\\beta}, i \\neq j \\\\\n 0, & i=j. \n \\end{cases}\n \\end{equation*}\n \n For the setting where groups are all of equal size $s$, one can write\n \\begin{equation*}\n \\hat{F}=F \\otimes \\vectorones[s] \\vectorones[s]^T -I_n \n \\end{equation*}\n \n\nAt edge set $\\mathcal{E}$, the payoff function for individual $i \\in\n\\mathcal{V}$ is \n\\begin{equation}\\label{payoff-function}\n U_i (\\mathcal{E}) = \\sum_{k=1}^n \\hat{F}_{ik} \\delta^{d_{ik}\n (\\mathcal{E})} - \\sum\\nolimits_{k \\in N_i(\\mathcal{E}) } c,\n\\end{equation}\nwhere $d_{ik}(\\mathcal{E})$ is the number of steps from individual $i$\nto $k$, $\\delta < 1$ is the one-hop benefit, and $c$ is the cost of each\nlink. The value of network $\\mathcal{E}$ is defined as the sum of all individuals' payoffs, i.e., $v(\\mathcal{E})=\\sum_{i=1}^{n}U_{i}(\\mathcal{E})$, and it indicates the social welfare. For a given society $\\mathcal{V}$ and value function $v$, $\\mathcal{E}^*$ is an \\emph{efficient structure} if its social welfare(value) is maximized over all possible edge sets on $\\mathcal{V}$, i.e., $\\mathcal{E}^*= \\arg \\max\\limits_{\\mathcal{E}} v(\\mathcal{E})$. Given the pair $(i, j)$ in network $\\mathcal{E}$, we say that individual $i$ \\emph{benefits from edge} $\\{(i,j)\\}$ if\n$\n U_i \\big (\\mathcal{E}\\cup \\{( i,j) \\} \\big ) > U_i \\big(\\mathcal{E} \\setminus \\{( i,j) \\}\n \\big).\n$\n\n\\textit{Formation Dynamics}: Time periods are represented with countable\ninfinite set $\\mathbb{N}= \\{ 1, 2, \\dots, t, \\dots \\}$. 
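As an aside before turning to the dynamics, the short Python sketch below evaluates the payoff of equation (\\ref{payoff-function}) and the ``benefits from edge'' test defined above for a toy society of two groups; the helper names (\\texttt{build\\_Fhat}, \\texttt{bfs\\_distances}, \\texttt{payoff}), the example partition, and the parameter values are illustrative assumptions and not part of the model itself.
\\begin{verbatim}
from collections import deque

def build_Fhat(F, P):
    # F_hat[i][j] = F[group(i)][group(j)] for i != j, and 0 on the diagonal
    group = {i: g for g, members in enumerate(P) for i in members}
    n = len(group)
    return [[0.0 if i == j else F[group[i]][group[j]] for j in range(n)]
            for i in range(n)]

def bfs_distances(E, n, source):
    # shortest-path lengths from `source`; unreachable nodes are simply absent
    adj = {i: set() for i in range(n)}
    for i, j in E:
        adj[i].add(j)
        adj[j].add(i)
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def payoff(i, E, Fhat, delta, c):
    # U_i(E): decayed benefits from reachable agents minus the cost of i's links
    dist = bfs_distances(E, len(Fhat), i)
    benefit = sum(Fhat[i][k] * delta ** d for k, d in dist.items() if k != i)
    return benefit - c * sum(1 for e in E if i in e)

def benefits_from_edge(i, j, E, Fhat, delta, c):
    # "i benefits from edge (i,j)": U_i(E + (i,j)) > U_i(E - (i,j))
    e = (min(i, j), max(i, j))   # each edge is stored once, as an ordered pair
    return (payoff(i, set(E) | {e}, Fhat, delta, c)
            > payoff(i, set(E) - {e}, Fhat, delta, c))

# Toy example: two groups of three, two inner cliques plus one bridge (2, 3).
P = [{0, 1, 2}, {3, 4, 5}]
F = [[1.0, 0.4], [0.4, 1.0]]
Fhat = build_Fhat(F, P)
E = {(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)}
print(payoff(2, E, Fhat, delta=0.5, c=0.1))                 # about 1.1
print(benefits_from_edge(2, 3, E, Fhat, delta=0.5, c=0.1))  # True
\\end{verbatim}
Storing each edge once as the ordered pair $(\\min(i,j), \\max(i,j))$ ensures that the cost term counts each of $i$'s links exactly once.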
In each period, a\npair $(i,j)$ is uniformly randomly selected and is added to, or removed\nfrom, the network $\\mathcal{E}$ according to the following rules:\n\\begin{itemize}\n\n\\item if $\\{(i,j)\\} \\notin \\mathcal{E}$, then it is added when its\n addition is marginally beneficial to the pair of individuals (i.e.,\n either both individuals benefit or one individual is indifferent and the\n other benefits); the edge $(i,j)$ is not added when its addition causes\n a drop in the payoff of either or both individuals or both individuals\n are indifferent towards it; and\n\\item if $\\{(i,j)\\} \\in \\mathcal{E}$, then $(i,j)$ is removed when its\n removal benefits at least one of the two individuals; no action is taken\n when both sides are either indifferent or benefit from the existence of\n the edge.\n\\end{itemize} \n \n\n\n\n\\begin{definition}(Pairwise Stability)\\label{def:pairwise_stable}\nA network $\\mathcal{E}$ is \\emph{pairwise stable} if,\n\\begin{equation*}\n\\begin{aligned}\n& \\text{for all } \\{(i,j)\\}\\in\\mathcal{E},\\\\\n& \\quad U_i(\\mathcal{E}) \\geq U_i(\\mathcal{E} \\setminus \\{(i,j)\\} ) \n\\text{ and } U_j(\\mathcal{E}) \\geq U_j(\\mathcal{E} \\setminus \\{( i,j) \\}); \\\\\n & \\text{and } \\text{for all } \\{(i,j)\\} \\notin \\mathcal{E}, \\\\\n &\\quad \\text{if } U_i(\\mathcal{E})U_j(\\mathcal{E}\\cup\\{(i,j)\\}).\n\\end{aligned}\n\\end{equation*}\n\\end{definition}\n \\begin{remark}\\label{remark:pairwise-stable}\nAccording to Definition~\\ref{def:pairwise_stable}, if the edge $(i,j)$ belongs to the pairwise stable network, removing it results in a loss for $i$ or $j$; and if the edge $(i,j)$ does not belong to the pairwise stable network, adding it makes no difference or causes loss for $i$ or $j$. \n \\end{remark}\n \n \n\\begin{definition}\n$\\mathcal{E}'$ defeats $\\mathcal{E}$ if either $\\mathcal{E}'=\\mathcal{E} \\setminus\n\\{(i,j)\\}$ and $U_i(\\mathcal{E}')>U_i(\\mathcal{E})$, or $\\mathcal{E}'=\\mathcal{E} \\cup \\{(i,j)\\}$\nand $U_i(\\mathcal{E}') \\geq U_i(\\mathcal{E})$ and $U_j(\\mathcal{E}') \\geq U_j(\\mathcal{E})$ with at least one inequality holding strictly. \n\\end{definition}\\label{def:improving-path}\n\n\\begin{lemma}\\label{lem:stable-by-dynamics}\nA network is pairwise stable if and only if it does not change under Formation Dynamics.\n\\end{lemma}\n\\begin{proof}\nTo prove necessity, we refer to Remark~\\ref{remark:pairwise-stable}. According to the definition, if a network is pairwise stable, no network can defeat it, i.e., no links can be added to or severed from it. To show sufficiency, note that a network not being changed by Formation Dynamics, implies that: \n\\begin{enumerate}\n\\item adding a link makes no difference or causes loss for at least one individual;\n\\item removing a link results in loss for at least one individual. \n\\end{enumerate}\nTherefore, the network is pairwise stable. \n\\end{proof}\t\n\n\nAccording to Lemma~\\ref{lem:stable-by-dynamics}, if there exists some time $t^*$\nsuch that from $t^*$ on, no additional links are added to or severed from a network by Formation Dynamics, then the network has reached the pairwise stable structure.\n\nWe define the following terms that we will frequently use throughout this paper indicating the density of the interconnections among the groups. \n\\begin{definition}\nWe say that a society of individuals consists of {\\it the disjoint union of groups} if there exists no interconnection among any pairs of groups. 
\nFor a connected pair, we say it is\n\\begin{enumerate}\n\\item {\\it minimally connected} if there exists exactly one interconnection among the pair;\n\\item {\\it redundantly connected} if there exist at least two interconnections among the pair;\n\\item {\\it maximally connected} if all of the possible interconnections among the pair of groups exist.\n\\end{enumerate}\n\\end{definition}\nFig. \\ref{fig:density_interC} represents a schematic illustration of the terms discussed above.\n\\begin{remark}\nA minimally connected pair corresponds to the bridge connection (Fig.~\\ref{fig:control-structure-1}), redundantly connected to the ridge connection (Fig.~\\ref{fig:control-structure-2}), and maximally connected to a full co-membership connection (Fig.~\\ref{fig:control-structure-3}.)\n\\end{remark}\n\\begin{figure}[h]\n\t\\begin{center} \t\n\t\\includegraphics[width=0.99\\linewidth]{interconnection_density}\\label{fig:}\n\t\t\\caption{\\small Schematic illustration of interconnection densities}\\label{fig:density_interC}\n\t\\end{center}\n\\end{figure}\n\nWe next define the Price of Anarchy (PoA) as a measure of how the efficiency of a system degrades due to the selfish behavior of its individuals. It is calculated as follows:\n\\[\nPoA=\\dfrac{\\max_{\\mathcal{E}}v(\\mathcal{E})}{\\min_{p.w. stable\\mathcal{E}}v(\\mathcal{E})}.\n\\]\n\nThroughout this paper we use the following threshold functions\n$y_1(s, \\delta)$, $y_2(s, \\delta)$, and $y_3(\\delta)$ defined by\n\\begin{equation*}\n\\begin{aligned}\n & y_1(s, \\delta)= {\\delta + \\big(s-1\\big) \\delta^2},\\\\\n & y_2(s, \\delta) = {\\delta - \\delta^2 + \\big(s-1\\big) \\delta^2 - \\big(s-1\\big) \\delta^3} = { \\big (1-\\delta \\big )y_1(s)},\\qquad \\\\\n & y_3(\\delta)={\\delta - \\delta^2}.\n\\end{aligned}\n\\end{equation*}\nIn what follows we will often suppress the argument $\\delta$ in the interest of simplicity.\n\nUnder the conditions $0<\\delta<1$ and $s\\geq 3$, we claim that,\n\\begin{equation*}\n 0 y_3$.\nPlots of these three threshold functions for $0<\\delta<1$, where $s=3$ are depicted in\nFig.~\\ref{fig:thresholds}.\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.6\\linewidth]{y.pdf}\n \\caption{Plots of $y_1$, $y_2$, and $y_3$ for $s=3$.}\\label{fig:thresholds} \n\\end{figure} \n In what follows we provide bounds based on these functions to study pairwise stable and efficient structures, and investigate the convergence of the Formation Dynamics when possible.\n\n\n\\section{Results on Formation of Disjoint Cliques}\n\n We first study the inner structure of each group in a pairwise stable network. Throughout this paper, we assume that the dynamics does not start with an initial state containing any interconnection. We define the invariant set of all subgraphs of disjoint cliques as $S= \\Big \\{ \\bigcup \\limits_{\\gamma=1}^{m} \\mathcal{E}_{\\gamma} \\ | \\ \\mathcal{E}_{\\gamma} \\subset \\mathcal{E}_{ K_{s_{\\gamma}}} \\}$ where $\\mathcal{E}_{\\gamma}$ indicates the inner-network of group $P_{\\gamma}$. \n\n \n\\begin{theorem}[Formation of Cliques: Pairwise Stability, Efficiency, Convergence]\\label{thm: Pairwise Stability of Cliques}\nConsider $n$ individuals partitioned into groups $P_1,\\dots,P_m$. Then, each one of these $m$ groups is a clique in the pairwise stable and in the efficient structure if and only if $c1$ is the distance between $i$ and $j$ in $\\mathcal{E} \\setminus \\{(i,j)\\}$. 
Since $\\delta-\\delta^2>c$, we have \n $\\delta-c>\\delta^2 >\\dots >\\delta^n$; meaning that all\n agents prefer direct links to any indirect link. Thus, if agents $i$ and\n $j$ in group $P_\\alpha$ are not directly connected, they will form a link\n and each will gain at least $(\\delta-c)-\\delta^{d_{ij}}>0$,\n i.e.,\n \\[\n \\begin{aligned} \n &\\text{for all } \\{(i,j)\\} \\notin \\mathcal{E},\\quad i, j\n \\in P_{\\alpha}, \\ i \\neq j \\\\\n & \\quad U_i(\\mathcal{E}) U_i(\\mathcal{E} \\setminus \\{(i,j)\\}), \\text{ and }\n U_j(\\mathcal{E}) > U_j(\\mathcal{E} \\setminus \\{( i,j) \\}).\n \\end{aligned}\n \\]\n Thus, each group forms a clique and no intra-connection is removed after being formed, and according to Lemma \\ref{lem:stable-by-dynamics}, these $m$ groups are cliques in the pairwise stable structure.\nTo prove necessity, assume we have a pairwise stable clique. \nFor $P_\\alpha$ to remain a clique, all pairs of nodes belonging to the same group should prefer to keep one-hop links rather than having links with larger lengths, and thus $\\delta-c>\\delta^2>\\delta^3> \\dots$. This proves the claim that each group $P_\\alpha$ is a clique if and only if $c<\\delta-\\delta^2$. Convergence of dynamics to cliques can be obtained directly from the same argument.\n\n We now continue by first proving that if $c<\\delta - \\delta^2$, in the efficient structure each group is a clique. From the analysis above, when $c<\\delta-\\delta^2$, we have:\n\\[\n\\begin{aligned}\nv\\big(\\mathcal{E}\\cup\\{(i,j)\\}\\big)&-v\\big(\\mathcal{E}\\setminus\\{(i,j)\\}\\big) \\\\\n & \\geq U_i\\big(\\mathcal{E}\\cup\\{(i,j)\\}\\big)+U_j\\big(\\mathcal{E}\\cup\\{(i,j)\\}\\big)\n \\\\\n &\\quad-U_i\\big(\\mathcal{E}\\setminus\\{(i,j)\\}\\big) -U_j\\big(\\mathcal{E}\\setminus\\{(i,j)\\}\\big)\\\\\n & \\geq 2(\\delta -c-\\delta^{2})>0\n\\end{aligned}\n\\] \nwhich holds for each pair $(i,j)$ belonging to the same group, meaning that each group is a clique in the efficient structure. \nWe next prove necessity for efficiency: assume $\\mathcal{E}$ is the efficient structure and each group is a clique, i.e., $\\{(i,j)\\}\\in \\mathcal{E}$ for any two individuals $i, j$, $(i \\neq j)$ from the same group. Then, we have:\n\\[\n\\begin{aligned}\nv(\\mathcal{E})&-v(\\mathcal{E}\\setminus\\{(i,j)\\})\\\\\n & =U_i(\\mathcal{E})+U_j(\\mathcal{E})-U_i(\\mathcal{E}\\setminus\\{(i,j)\\})-U_j(\\mathcal{E}\\setminus\\{(i,j)\\})\\\\\n &=2(\\delta -c-\\delta^{2})>0,\n\\end{aligned}\n\\]\nwhich results in $c U_{\\hat{i}}(\\mathcal{E} \\setminus \\{ (\\hat{i},j)\\} ) \n \\iff F_{12} > \\dfrac{c}{y_2(s_2)}.$\n From \n$\n U_{j}(\\mathcal{E} \\cup \\{ (\\hat{i},j)\\} )= (s_{2} -1) \\delta +2 F_{12} \\delta + (s_{2} -2) F_{12} \\delta^2 +(s_{2} +1) c\n$\nand \n$\n U_{j}(\\mathcal{E} \\setminus \\{ (\\hat{i},j )\\} )= (s_{2} -1) \\delta + F_{12} \\delta + (s_{2} -1) F_{12} \\delta^2 +s_{2} c\n$,\nwe obtain: \n$ U_j(\\mathcal{E} \\cup \\{ (\\hat{i},j) \\} ) > U_j(\\mathcal{E} \\setminus \\{ (\\hat{i},j)\\} ) \n \\iff F_{12} > \\dfrac{c}{y_{3}}.\n$\nThen, from $y_2(s)>y_{3}>0$, we conclude that an additional interconnection $\\{(\\hat i, j)\\}$ is added and maintained if and only if $F_{12} > \\dfrac{c}{y_{3}}$. \nSimilarly, an additional interconnection $\\{( i,\\hat j)\\}$ is added and maintained if and only if $F_{12}> \\dfrac{c}{y_{3}}$. 
\nFor $\\hat i \\neq i, \\hat j\\neq j$, using a similar argument, we obtain that $U_{\\hat i}(\\mathcal{E} \\cup \\{ (\\hat{i},\\hat j ) \\} )> U_{\\hat{i}}(\\mathcal{E} \\setminus \\{ (\\hat{i},\\hat j )\\} ) \n \\iff F_{12} > \\dfrac{c}{y_2(s_2)}$ and \n $U_{\\hat j} (\\mathcal{E} \\cup \\{ (\\hat{i},\\hat j) \\} )> U_{\\hat j} (\\mathcal{E} \\setminus \\{( \\hat{i},\\hat j )\\} ) \n \\iff F_{12}> \\dfrac{c}{y_2(s_1)}$; which means that an additional interconnection $\\{(\\hat i, \\hat j)\\} $ is added and maintained if and only if $ F_{12} > \\max\\limits_{s \\in\\{s_{1}, s_{2}\\}} \\dfrac{c}{ y_2(s)} $ (strictly holds when $s_{1} = s_{2}$). Thus, we conclude that at least two interconnections are added and maintained if and only if $F_{12} > \\min\\left\\{\\dfrac{c}{y_{3}},\\max\\limits_{s \\in\\{s_{1}, s_{2}\\}} \\dfrac{c}{ y_2(s)}\\right\\}=\\max\\limits_{s \\in\\{s_{1}, s_{2}\\}} \\dfrac{c}{ y_2(s)}$ (strictly holds when $s_{1} = s_{2}$.) \nTherefore, the network contains precisely one interconnection if and only if $ \\max\\limits_{s \\in\\{s_{1}, s_{2}\\}} \\dfrac{c}{ y_1(s)} < \n F_{12} < \\max\\limits_{s \\in\\{s_{1}, s_{2}\\}} \\dfrac{c}{ y_2(s)} $. Moreover, from the moment when two group form cliques and this unique interconnection builds, the network will not change. According to Lemma \\ref{lem:stable-by-dynamics}, this concludes the proof of statement~\\ref{fact:bridge-2groups}. \n\nTo prove~\\ref{fact:comember-2groups}, assume that $F_{12}> \\max\\limits_{s \\in\\{s_{1}, s_{2}\\}} \\dfrac{c}{ y_2(s)} $. We have shown above that $\\mathcal{E}$ contains at least two interconnections between two cliques. As a result, for any agent $\\hat i$ from $P_{1}$ and $\\hat j$ from $P_{2}$, the distance between $\\hat i$ and $\\hat j$ in $\\mathcal{E} \\setminus \\{(\\hat i, \\hat j)\\}$ is equal to either 2 or 3. If it is equal to 2, then $U_{\\hat i}(\\mathcal{E} \\cup \\{ (\\hat{i}, \\hat j ) \\} -U_{\\hat{i}}(\\mathcal{E} \\setminus \\{ (\\hat{i}, \\hat j )\\}=U_{\\hat j}(\\mathcal{E} \\cup \\{( \\hat{i}, \\hat j )\\} -U_{\\hat{j}}(\\mathcal{E} \\setminus \\{ (\\hat{i}, \\hat j )\\}=F_{12}(\\delta-\\delta^{2})-c$; and if it is equal to 3, then $U_{\\hat i}(\\mathcal{E} \\cup \\{( \\hat{i}, \\hat j )\\} -U_{\\hat{i}}(\\mathcal{E} \\setminus \\{ (\\hat{i}, \\hat j) \\}=U_{\\hat j}(\\mathcal{E} \\cup \\{ (\\hat{i}, \\hat j )\\} -U_{\\hat{j}}(\\mathcal{E} \\setminus \\{ (\\hat{i}, \\hat j )\\}=F_{12}(\\delta-\\delta^{3})-c$. Interconnection $\\{(\\hat i, \\hat j)\\}$ is added and maintained if and only if $U_{\\hat i}(\\mathcal{E} \\cup \\{ (\\hat{i},\\hat j ) \\} -U_{\\hat{i}}(\\mathcal{E} \\setminus \\{ (\\hat{i}, \\hat j) \\}> 0$ and $U_{\\hat j}(\\mathcal{E} \\cup \\{( \\hat{i}, \\hat j) \\} -U_{\\hat{j}}(\\mathcal{E} \\setminus \\{ (\\hat{i}, \\hat j )\\}>0$. \nThus, we conclude that $\\{(\\hat i, \\hat j)\\}$ is added and maintained if and only if $F_{12}>\\max\\left\\{\\dfrac{c}{\\delta-\\delta^{2}},\\dfrac{c}{\\delta-\\delta^{3}}\\right\\}$. Since $\\max\\left\\{\\dfrac{c}{\\delta-\\delta^{2}},\\dfrac{c}{\\delta-\\delta^{3}}\\right\\}=\\dfrac{c}{\\delta-\\delta^{2}}$ for $0<\\delta<1$, $\\{(\\hat i, \\hat j)\\}$ is added and maintained maintained if and only if $F_{12}>\\dfrac{c}{\\delta-\\delta^{2}}=\\dfrac{c}{y_{3}}$. Therefore, the network will not be changed when all agents link with each other. 
By Lemma \\ref{lem:stable-by-dynamics}, this concludes the proof of~\\ref{fact:comember-2groups}.\n\nFrom statements \\ref{fact:bridge-2groups} and \\ref{fact:comember-2groups}, we know that the pairwise stable structure contains at least 2 but not fully numbers of interconnections if $\\max\\limits_{s \\in\\{s_{1}, s_{2}\\}} \\dfrac{c}{ y_2(s)} < F_{12} < \\dfrac{c}{ y_3 }$. Suppose that $(i_{1},j_{1}), \\dots, (i_{k-1},j_{k-1})$ are $k-1$ interconnections between $P_{1}$ and $P_{2}$. Take agents $\\hat{i} $ from $P_{1}$ and $\\hat j$ from $P_{2}$. Similar to the analysis in the proof of statement~\\ref{fact:bridge-2groups}, we have the following two cases:\n\\begin{enumerate}[(a)]\n\\item For $\\hat{i} \\notin \\{i_{1},\\dots,i_{k-1}\\}, \\hat j\\notin \\{j_{1},\\dots, j_{k-1}\\}$, we have\n\\[\n\\begin{aligned}\n U_{\\hat i}(\\mathcal{E} \\cup \\{ (\\hat{i},j) \\} )&- U_{\\hat{i}}(\\mathcal{E} \\setminus \\{ (\\hat{i},j) \\} )\\\\\n &\\quad = F_{12}(y_{2}(s_{2})-(k-2)\\delta y_{3}), \\text{ and}\\\\\n U_{\\hat j}(\\mathcal{E} \\cup \\{ (\\hat{i},j) \\} ) &- U_{\\hat{i}}(\\mathcal{E} \\setminus \\{ (\\hat{i},j) \\} )\\\\\n&\\quad = F_{12}(y_{2}(s_{1})-(k-2)\\delta y_{3}),\n\\end{aligned}\n\\]\nimplying \n\\[\\begin{aligned}\nU_{\\hat i}(\\mathcal{E} \\cup \\{ (\\hat{i},j) \\} )&>U_{\\hat{i}}(\\mathcal{E} \\setminus \\{ (\\hat{i},j)\\} ), \\text {~and~} \\\\\nU_{\\hat j}(\\mathcal{E} \\cup \\{ (\\hat{i},j) \\} )&> U_{\\hat{j}}(\\mathcal{E} \\setminus \\{ (\\hat{i},j)\\} ) \\\\\n \\iff \\qquad F_{12} &> \\max\\limits_{s \\in\\{s_{1}, s_{2}\\}} \\dfrac{c}{y_2(s)-(k-2)\\delta y_{3}}.\n \\end{aligned}\\]\n\\item For $\\hat{i} \\in \\{i_{1},\\dots,i_{k-1}\\}, \\hat j\\notin \\{j_{1},\\dots, j_{k-1}\\}$ or $\\hat{i} \\notin \\{i_{1},\\dots,i_{k-1}\\}, \\hat j\\in \\{j_{1},\\dots, j_{k-1}\\}$, we have \n\\[\\begin{aligned}\n U_{\\hat i}(\\mathcal{E} \\cup \\{ (\\hat{i},j) \\} )&> U_{\\hat{i}}(\\mathcal{E} \\setminus \\{ (\\hat{i},j)\\} ) \n \\text {~ and~}\\\\\n U_{\\hat j}(\\mathcal{E} \\cup \\{ (\\hat{i},j) \\} )&>U_{\\hat{j}}(\\mathcal{E} \\setminus \\{ (\\hat{i},j)\\} ) \\\\\n \\iff \\qquad F_{12} &> \\dfrac{c}{y_{3}}. \n \\end{aligned}\\]\n\\end{enumerate}\nTherefore, we conclude then $k-th$ interconnection is added and maintained if and only if $F_{12} >\\max\\limits_{s \\in\\{s_{1}, s_{2}\\}} \\dfrac{c}{y_2(s)-(k-2)\\delta y_{3}}.$ Likewise, the $(k+1)-th$ interconnection is added and maintained if and only if $F_{12} > \\max\\limits_{s \\in\\{s_{1}, s_{2}\\}} \\dfrac{c}{y_2(s)-(k-1)\\delta y_{3}}.$ It follows that the unique pair-wise stable structure has exact $k~ (2\\leq k\\leq \\min\\{s_{1},s_{2}\\})$ interconnections if and only if $\\max\\limits_{s \\in\\{s_{1}, s_{2}\\}} \\dfrac{c}{ y_2(s)-(k-2)\\delta y_{3}}< F_{12} < \\max\\limits_{s \\in\\{s_{1}, s_{2}\\}} \\dfrac{c}{ y_2(s)-(k-1)\\delta y_{3}}$. \nThis concludes the proof of~\\ref{fact:redundant-2groups}.\n\n Finally, we complete the proof of Theorem~\\ref{thm:existence-convergence-two} by proving the convergence statement. Since $cc$. We introduce the undirected graph $\\mathcal T=(\\mathcal V_{P}, \\mathcal{E}_{\\mathcal T} )$, whose nodes represent groups and $(\\alpha,\\beta)\\in \\mathcal{E}_{\\mathcal T} $ if there exists at least one connection between $P_{\\alpha}$ and $P_{\\beta}$.\n\n\\begin{theorem}[Sufficiency Condition for Minimally Connected Cliques]\\label{thm:multigroup_stability}\nConsider $n$ individuals partitioned into groups $P_1,\\dots,P_m$ of sizes $s_{1},\\dots, s_{m}$ respectively. 
\nAssume that $cc, \\\\\n & \\sum _{\\lambda\\neq \\beta, \\lambda=1}^{m} F_{\\beta \\lambda} \n ( \\delta^{d'_{\\beta\\lambda}}-\\delta^{d_{\\beta\\lambda}})(1+(s_{\\lambda}-1)\\delta)>c, \\\\\n & F_{\\alpha \\beta} <\\max_{s\\in \\{s_{\\alpha},s_{\\beta}\\}}\\frac{c}{y_{2}(s)}; \\text{ and}\n \\end{aligned}\n \\end{equation}\n \\item \\label{fact:tree-equal} for all $(\\alpha, \\beta) \\notin \\mathcal{E}_{\\mathcal T}$, $\\alpha, \\beta \\in \\until{m} $, \\\\\n \\begin{equation}\\label{utility-change-2}\n \\begin{aligned}\n \\sum _{\\lambda\\neq \\alpha, \\lambda=1}^{m} F_{\\alpha \\lambda} \n ( \\delta^{d'_{\\alpha\\lambda}}-\\delta^{d_{\\alpha\\lambda}})(1+(s_{\\lambda}-1)\\delta)\\max_{s\\in \\{s_{\\alpha},s_{\\gamma}\\}}\\frac{c}{y_{1}(s)}; \\text{ and}\n \\end{aligned}\n \\end{equation*}\n \\item for all $(\\alpha, \\beta) \\in \\until{m} $, $(\\alpha, \\beta \\neq \\gamma)$, \\\\\n \\begin{equation*}\n \\begin{aligned}\n & F_{\\alpha \\beta} <\\max_{s\\in \\{s_{\\alpha},s_{\\beta}\\}}\\frac{c}{y_{2}(s)}.\n \\end{aligned}\n \\end{equation*}\n\\end{enumerate} \n\\end{corollary}\n\n\nIn the following example, we illustrate that due to randomness in choosing the pair of players, Formation Dynamic does not always converge to a unique stable structure even for the same initial network structure and matrix $F$.\n\n\n\\begin{example}\\label{ex:multigroup-convergence}\nConsider the case where we have $\\dfrac{c}{y_{1}(s)} \\dfrac{c}{y_1(s)}.\n\\]\n Consequently, we obtain \n \\[U_i\\big( \\mathcal{E} \\cup \\{4,5\\} -U_i(\\mathcal{E} \\setminus \\{ 4,5\\}) \\big)>0\\]\n which means that the connection (4,5) is formed. Now since we have $F_{\\alpha \\beta}< \\dfrac{c}{y_2(s)}$, no connected triad and thereby, no additional links will be formed. Also no link will be removed. Therefore, the final structure in Fig.~\\ref{fig:dynamics_ex1}, which is a ring, is stable.\n\n\\end{enumerate}\n\\end{example}\n \\begin{figure}[h]\n\t\\begin{center} \n\t\t\\subfloat[Process A]{\\includegraphics[width=0.9\\linewidth]{dynamics_ex2}\\label{fig:dynamics_ex2}}\\quad\n\t\t\\subfloat[Process B]{\\includegraphics[width=0.9\\linewidth]{dynamics_ex1}\\label{fig:dynamics_ex1}}\n\t\t\\caption{The processes of Example \\ref{ex:multigroup-convergence}. At each step, shaded nodes represent the groups which the selected individuals belong to, and the outcome of the game (action taken regarding link addition, link removal, or indifference) is represented in the next.}\\label{fig:dynamics_ex}\n\t\\end{center}\n\\end{figure}\n\n\nExample \\ref{ex:multigroup-convergence} shows that, based on the order of the sequence of selected pairs, we can have two or possibly more convergent stable structures, and therefore, the convergence results cannot be generalized and the convergent structure is not always unique.\n\n\nFrom Theorem \\ref{thm: Pairwise Stability of Cliques} we know that each group forms a clique. We now analyze the interconnections among those cliques. Theorem~\\ref{statements on redundant and comembers} addresses the redundancy of interconnections.\n\n\n\\begin{theorem}[Formation of Redundancies]\\label{statements on redundant and comembers}\n Consider $n$ individuals partitioned into groups $P_1,\\dots,P_m$ of sizes $s_1, s_2, \\dots s_m$. 
Suppose that $c \\max\\limits_{s \\in\\{s_{\\alpha}, s_{\\beta}\\}} \\dfrac{c}{ y_2(s)}$, and \n \\item\\label{fact:comember} maximal interconnections between $P_{\\alpha}$ and $P_{\\beta}$ will be formed and never removed, if $\\dfrac{c}{ y_3 }\\frac{T_{CDW}}{2}$, where the CDW gap is expected not to be completely open yet\\cite{Gruner1988}. This violation of the Kohler rule can be attributed both to the reconstruction of the Fermi surface due to nesting and to presence of more than one type of carriers in the CDW state\\cite{McKenzie1998, Yasuzuka2005}. We suggest that a stronger manifestation of the deviation from the MR scaling could be observed at temperatures in the close vicinity of $T_{CDW}$ as in tungsten bronzes also showing Peierls transition \\cite{Kolincio20162}. This range is, however, beyond the scope of our experimental equipment.\n\n\\subsection{Hall effect}\n\\begin{figure*} [ht!]\n \\includegraphics[angle=0,width=2.1\\columnwidth]{panel2.eps}\n \\caption{\\label{Panel2} (a)-(b) Magnetic field dependence of Hall resistivity $\\rho_{yx}$ in YNiC$_2$ (a) and LuNiC$_2$(b). The plots have been vertically shifted for clarity and the vertical scale applies to the plot for corresponding to $T$ = 1.9 K. (c)-(d) Hall conductivity $\\sigma_{xy}$ in YNiC$_2$ (c) and LuNiC$_2$ (d). The black solid lines are representative fits to the experimental data with equation \\ref{EQsigmaxyAP}. (e)-(f) The results of the analysis of Hall resistivity and conductivity: mobilities $\\mu_H$, $\\mu_{ext}$ and concentrations $n_H$, $n_{eff}$ plotted as a function of temperature for YNiC$_2$ (e) and LuNiC$_2$ (f). The legend for (a), (b), (c) and (d) is displayed in panel (c).}\n\\end{figure*}\n\nTo explore the evolution of carrier concentrations, we have examined the Hall effect for both compounds. The thermal dependence of the Hall resistivity ($\\rho_{yx}$) is depicted in Fig. \\ref{Panel1}e (YNiC$_2$) and \\ref{Panel1}f (LuNiC$_2$). For YNiC$_2$, $\\frac{\\rho_{yx}}{B}$ is almost temperature independent above $T_{CDW}$. At this characteristic temperature, the Hall resistivity shows an abrupt downturn, indicating the loss of free electrons due to the CDW condensation. The presumed lock-in transition is indicated by a kink in $\\frac{\\rho_{yx}}{B}(T)$. At lower temperatures, the Hall resistance shows a minimum and then returns to less negative values. Previously, such an effect was observed in magnetic $R$NiC$_2$, and was attributed both to the suppression of charge density wave by the magnetic ordering and to the onset of the anomalous component of the Hall effect\\cite{Kim2012, Kolincio20161, Kolincio2017, Lei2017}. Due to the absence of long range magnetism in YNiC$_2$, these two terms appear to be irrelevant in this case.\nAt temperatures below $T_1$, the $\\frac{\\rho_{yx}}{B}(T)$ curves do not superimpose into a single line which suggests that in the CDW state, $\\rho_{yx}$ is not linear with $B$. \n\nFor LuNiC$_2$ the Peierls temperature $T_{CDW}\\simeq$ 450 K\\cite{Roman2018_1, Steiner2018}, thus at 400 K, which is the maximum temperature limit of our experiment, the system is already in the charge density wave state. All the curves reveal a kink at $T\\simeq$ 355 K. Its origin is not clear, however, while this weak anomaly is not detected by other measurements, it might result from the experimental artifact instead of being truly intrinsic to the sample. Another scenario is, that this anomaly originates from the Lu$_4$Ni$_2$C$_5$ impurity phase. 
Similarly to YNiC$_2$, the sign of $\\rho_{yx}$ is negative in the whole temperature range, indicating the dominance of electrons. This is not the only similarity between the $\\frac{\\rho_{yx}}{B}$ curves for both compounds. Here we also find that for LuNiC$_2$ the Hall resistivity is also driven to more negative values as the free electrons are condensed in the CDW state, which is followed by the return of $\\rho_{yx}$ to close to zero at lower temperatures. We find that the $\\frac{\\rho_{yx}}{B}$ superimpose at temperatures above approximately 250 K. At lower temperatures, the plots do not coincide with each other, indicating a nonlinearity of $\\rho_{yx}(B)$ also in LuNiC$_2$. Similarly to the case of YNiC$_2$, further temperature decrease leads to the upturn of the Hall resistivity, which also cannot be attributed to magnetic ordering. A plausible scenario to explain these features is the existence of more than one type of electronic carrier, originating from unnested pockets remaining in the Fermi surface after imperfect nesting, a situation characteristic of quasi-2D metals showing charge density wave\\cite{Monceau2012, Gruner2000}. \n\nTo obtain a more detailed picture of the electronic parameters, we have examined the magnetic field dependence of $\\rho_{yx}$. The results of field sweeps at constant temperatures, shown in the Fig. \\ref{Panel2}a for YNiC$_2$ and \\ref{Panel2}b for LuNiC$_2$ reveal a visible deviation of Hall signal from linearity.\nIn the absence of long range magnetic interactions or ordering, this effect is a clear manifestation of the multiband character of electrical conductivity\\cite{Akiba2017, Li2016, Liu2017, Luo2015, Wang2014}. \nIn the two-band model, the Hall resistivity is expressed with equation (\\ref{EQHall})\\cite{Hurd1972}:\n\\begin{equation}\n\\label{EQHall}\n\\frac{\\rho_{yx}}{B}=\\frac{1}{e}\\frac{n_h\\mu_h^2-n_e\\mu_e^2+(n_h-n_e)\\mu_e^2\\mu_h^2B^2}{(n_h\\mu_h+n_e\\mu_e)^2+(n_h-n_e)^2\\mu_h^2\\mu_e^2B^2}\n\\end{equation}\nwhere $n_h$, $n_e$, $\\mu_h$ and $\\mu_e$ are respectively concentrations and mobilities corresponding to two (hole and electron) conduction channels. The direct $\\rho_{yx}$ fit with eq. (\\ref{EQHall}) gives four dependent parameters, which may lead to misguiding conclusions\\cite{Rotella20151}. However, the high field limit of this equation gives an approximate measure of the effective carrier concentration $n_{eff}$\\cite{Sun2014}, which will be discussed in section D:\n\\begin{equation}\n\\label{EQHallHF}\n\\frac{\\rho_{yx}}{B}=\\frac{1}{e}\\frac{1}{n_h-n_e}=\\frac{1}{e}\\frac{1}{n_{eff}}\n\\end{equation}\n\\subsection{Multiband conductivity}\nMore detailed information can be extracted by transforming components of resistivity tensor $\\rho_{yx}$ and $\\rho_{xx}$ to obtain Hall conductivity $\\sigma_{xy}$ via the following equation:\n\n\\begin{equation}\n\\label{EQsigmaxy}\n\\sigma_{xy}(B)=\\frac{\\rho_{yx}}{\\rho_{yx}^2+\\rho_{xx}^2}\n\\end{equation}\n\n\nIn the multiband system, $\\sigma_{xy}$ is a superposition of the terms originating from subsequent contributing bands. 
Equation (\\ref{EQsigmaxy}) can be then rewritten as\\cite{Lin2016}:\n\n\\begin{equation}\n\\label{EQsigmaxymulti}\n\\sigma_{xy}(B)=\\sum_{i}\\frac{\\sigma_i\\mu_iB}{1+\\mu_i^2B^2}\n\\end{equation}\n\nHall conductivity is commonly used to determine the electronic parameters, since the extremum of $\\sigma_{xy}(B)$ is a direct measure (or at least a good approximation in a multiband system) of the dominant mobility $\\mu_{ext}$ calculated from the inverse of the magnetic field $B_{ext}$, at which $\\sigma_{xy}$ peaks\\cite{Liang2014}:\n\n\\begin{equation}\n\\label{EQmuinv}\n\\mu_{ext}=\\frac{1}{B_{ext}}\n\\end{equation}\n\nThe Hall conductivity for both compounds is negative in the whole temperature range and at low temperatures shows a minimum, which for YNiC$_2$ is visibly sharper than for LuNiC$_2$. The position of this minimum shifts from high fields to lower values of $B$ as temperature is lowered. For YNiC$_2$, the {$B_{ext}$ is clearly defined, while the broad extremum seen in LuNiC$_2$ precludes the precise determination of the peak position. Since the direct fitting of $\\sigma_{xy}$ with equation \\ref{EQsigmaxymulti} assuming one hole and one electron bands once again requires using four dependent parameters, for further analysis we have used an approach\\cite{Takahashi2011, Ishiwata2013}, in which we have assumed the existence of a single band with high mobility carriers and the remaining band(s) to show significantly lower mobility:\n\n\\begin{equation}\n\\label{EQsigmaxyAP}\n\\sigma_{xy}(B)=n_{xy}e\\mu_{xy}^2B \\left( \\frac{1}{1+\\mu_{xy}^2B^2} +C_{xy} \\right)\n\\end{equation}\n\n\n\nEquation \\ref{EQsigmaxyAP} allows the estimation of the mobility $\\mu_{xy}$, and concentration $n_{xy}$ of this single 'fast' band (pocket), while other 'slower' bands contribute to $C_{xy}$ parameter. The typical fits are shown by solid lines in panels c and d of Fig. \\ref{Panel2} respectively. We have found that $\\sigma_{xy}$ can be reasonably well-described with equation (\\ref{EQsigmaxyAP}) despite of the fact that the zero field values of $\\rho_{xx}$ can be significantly increased due to the polycrystalline character of the samples. \n\n\\begin{figure} [ht!]\n \\includegraphics[angle=0,width=1.1\\columnwidth]{Cpar.eps}\n \\caption{\\label{Cpar} $C_{xy}$ parameters resulting from least square fit of $\\sigma_{xx}$ with equation \\ref{EQsigmaxyAP} for YNiC$_2$ (red color) and LuNiC$_2$ (blue color).}\n \\end{figure}\n\nThe parameters derived from this procedure, as well as the values of $n_{eff}$ and $\\mu_{ext}$, are summarized in Fig. \\ref{Panel2}e for YNiC$_2$ and \\ref{Panel2}f, for LuNiC$_2$. The mobilities $\\mu_{ext}$ and $\\mu_{xy}$ coincide with each other for YNiC$_2$, and both quantities reach very large values of $7\\cdot 10^3$ cm$^2$ V$^{-1}$ s$^{-1}$ at $T$ = 1.9 K. \nThe electronic mobility $\\mu_{xy}$ of LuNiC$_2$ is twice as small as in the case YNiC$_2$, yet still considerable. The coincidence of $\\mu_{xy}$ and $\\mu_{ext}$ is an additional argument for the correctness of the value calculated from $\\sigma_{xy}$. It shall be, however, noted that, next to the increase of the residual resistivity, the polycrystalline samples character is expected also to substantially lower the electronic mobility in comparison with the single crystal.\n\n As seen in Fig. \\ref{Panel2}e and f, for both compounds, the concentration of the carriers originating from the high mobility band increases as temperature is lowered. 
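As a concrete illustration of this procedure, the sketch below performs a least-squares fit of a Hall-conductivity curve with the single high-mobility-band form of equation (\\ref{EQsigmaxyAP}); the synthetic data, the parameter values, the initial guess, and the use of \\texttt{scipy.optimize.curve\\_fit} are illustrative assumptions and do not reproduce the analysis of the measured $\\sigma_{xy}(B)$.
\\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

E_CHARGE = 1.602176634e-19      # elementary charge (C)

def sigma_xy_model(B, n25, mu_xy, C_xy):
    # single "fast" band plus a low-mobility background:
    # sigma_xy(B) = n e mu^2 B [ 1/(1 + mu^2 B^2) + C_xy ],  n = n25 * 1e25 m^-3
    n_xy = n25 * 1e25
    return n_xy * E_CHARGE * mu_xy**2 * B * (1.0 / (1.0 + (mu_xy * B)**2) + C_xy)

# Placeholder "measured" curve; in practice B and sigma_xy(B) are obtained from
# rho_yx and rho_xx via sigma_xy = rho_yx / (rho_yx^2 + rho_xx^2).  Signs are
# simplified here (a positive, electron-like amplitude is used).
B = np.linspace(0.1, 9.0, 60)                               # field (T)
truth = sigma_xy_model(B, 2.0, 0.5, 0.05)                   # mu in m^2/Vs
rng = np.random.default_rng(0)
data = truth * (1.0 + 0.02 * rng.standard_normal(B.size))   # 2% noise

popt, pcov = curve_fit(sigma_xy_model, B, data, p0=(1.0, 0.3, 0.1))
n25_fit, mu_fit, C_fit = popt
print(f"n_xy  = {n25_fit * 1e25:.2e} m^-3")
print(f"mu_xy = {mu_fit * 1e4:.0f} cm^2/Vs, C_xy = {C_fit:.3f}")
print(f"extremum of sigma_xy expected near B = 1/mu_xy = {1.0 / mu_fit:.1f} T")
\\end{verbatim}
The reciprocal of the fitted mobility also locates the approximate position of the extremum of $\\sigma_{xy}(B)$, consistent with equation (\\ref{EQmuinv}).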
\n The growth of $n_{xy}$ is concomitant with the decrease of the effective carrier concentration $n_{eff}$ below the Peierls temperature. This is consistent with the nesting picture: while the majority of electrons are removed from the conducting band and condense into the CDW, the parallel opening of unnested pockets results in an increase of the high mobility carriers. Interestingly, while the results of the $\\sigma_{xy}$ analysis suggest an electron origin of the carriers described by the concentration $n_{xy}$, the upturn of the Hall resistivity and of $n_{eff}$ at the lowest temperatures can possibly be caused by the existence of holes with a lower mobility, thus contributing only to the $C_{xy}$ parameter in equation (\\ref{EQsigmaxyAP}). The temperature interval in which this effect is observed corresponds to the range in which a turnover of the deviations from Kohler scaling is observed in YNiC$_2$ (inset of Fig. \\ref{Panel1}c).\n \nThe $C_{xy}$ parameter serves as an estimate of the ratio of the conductivities stemming from the 'slow' and 'fast' bands, respectively. The thermal dependence of $C_{xy}$ for both compounds is shown in Fig. \\ref{Cpar}. For both compounds, this parameter is close to unity at high $T$ and decreases as temperature is lowered. $C_{xy}$ reaches $\\simeq$ 0.001 for YNiC$_2$ and $\\simeq$ 0.1 for LuNiC$_2$. A small upturn is seen at low temperatures, which can be associated with the existence of an additional band as suggested above. The relatively low values of $C_{xy}$, especially in the former compound, underline the major role played by the carriers originating from the 'fast' pocket in terms of electronic transport and show that the approximate model used here can describe the properties of YNiC$_2$ and LuNiC$_2$. \n\nThe presence of both electron and hole pockets in the CDW state of LuNiC$_2$ is also consistent with the results of band structure calculations\\cite{Steiner2018}. Owing to the similarities between the Fermi surfaces of YNiC$_2$ \\cite{Hase2009} and other $R$NiC$_2$ compounds showing CDW, it is reasonable to assume the relevance of the same scenario for the Y-bearing compound as well. \n\nThe high mobility of the carriers contained in these pockets is then likely responsible for the high magnitude of MR in both compounds. Opening of such pockets was reported in a number of quasi-2D CDW materials showing strong, yet imperfect Fermi surface nesting, leading to the enhancement of magnetoresistance\\cite{Rotger1994, Rotger1996, Yasuzuka1999, Chen2017},\n thermopower\\cite{Rhyee2009, Rhyee2015} and galvanothermomagnetic properties\\cite{Bel2003, Kolincio20163}.\n\nThis result supports the scenario of strong Fermi surface reconstruction in YNiC$_2$ and LuNiC$_2$, which is possible due to the absence of any competing magnetic ordering, which was responsible for the CDW suppression in the majority of $R$NiC$_2$ family members \\cite{Yamamoto2013, Hanasaki2012, Kolincio20161, Kolincio2017, Hanasaki2017, Lei2017}.\n\n\n\n\n\\subsection{Specific heat}\n\n\nTo complement the results of transport, magnetotransport, and Hall experiments, and to further characterize the CDW transition in YNiC$_2$, we have measured the specific heat $C_p$. Fig. \\ref{CP}a depicts the temperature dependence of the specific heat capacity $C_p(T)$ in the temperature range 1.9 - 300 K. 
At 300 K, $C_p$ reaches approximately 80\\% of the value expected by Dulong-Petit law (3nR $\\sim$ 100 J mol$^{-1}$ K$^{-1}$), suggesting that the Debye temperature for YNiC$_2$ exceeds 300 K. \n\nNo anomalies have been detected at low temperatures, which confirms the absence of bulk superconductivity or magnetic ordering. \nThe specific heat data plotted as $\\frac{C_p}{T} $ vs. $T^2$ presented in Fig. \\ref{CP}b has been fitted to the equation (\\ref{CPeq}) with both sides divided by $T$.\n\n\\begin{equation}\n\\label{CPeq}\nC_p = \\gamma T + \\beta T^3\n\\end{equation}\n\n\\noindent where the first and second terms represent electronic and lattice contributions, respectively.\nThe fit revealed values of Sommerfeld coefficient $\\gamma$ = 1.65(1) mJ mol$^{-1}$K$^{-2}$ and $\\beta$ = 0.326(4) mJ mol$^{-1}$K$^{-4}$, the latter corresponds to the Debye temperature $\\Theta_D$ = 620 K according to:\n\n\\begin{equation}\n\\label{Debye}\n\\Theta_D = \\left( \\frac{12\\pi^4 nR}{5\\beta} \\right)^{\\frac{1}{3}} \n\\end{equation}\n\n\\noindent where R = 8.314 J mol$^{-1}$K$^{-1}$ and $n$ is the number of atoms per formula unit ($n$ = 4 for YNiC$_2$). This value is larger than the $\\Theta_D$ = 456 K reported previously for YNiC$_2$ \\cite{long_heat_2001}. The Debye temperature found here is also larger than the value reported for LaNiC$_2$ ($\\Theta_D$ = 445 K) \\cite{Prathiba2016}. Such behavior can be reasonably explained by a mass relationship: for molar mass of Y smaller than La, one expects higher $\\Theta_D$.\n\n \\begin{figure} [t]\n \\includegraphics[angle=0,width=1.0\\columnwidth]{Cp.eps}\n \\caption{\\label{CP} (a) Specific heat of YNiC$_2$ as a function of temperature. The inset shows an expanded view on the vicinity of the Peierls transition. The anomalies are marked with arrows. Dashed line corresponds to the background subtracted to evaluate the excess specific heat corresponding to the transitions - highlighted with light violet color. The high temperature measurements were performed with Apiezon L grease - see experimental section for details. (b) $\\frac{C_p}{T} (T^2)$ in the low temperature region. Black solid line corresponds to the fit with equation (\\ref{CPeq}), divided by $T$ on both sides.}\n \\end{figure}\n \nResults of the detailed measurements of $C_p(T)$ above room temperature are shown in the inset of Fig. \\ref{CP}a. The Peierls transition is signaled by a small maximum of $C_p(T)$ at $T$ = 310 K, being in rough agreement with the transition temperature $T_{CDW}$ established from resistivity measurements. The relative increase of specific heat at the charge density wave formation temperature denotes $\\frac{\\Delta C_p}{C_p(T_{CDW})} \\simeq 1.1 \\% $, thus is at the same order of magnitude as in canonical CDW systems as NbSe$_3$\\cite{Tomic1981}, K$_{0.9}$Mo$_6$O$_{17}$\\cite{Escribe1984}, or tungsten bronzes\\cite{Chung1993}. \n\nThe mean-field weak coupling description of the Peierls transition predicts the specific heat jump of:\n\n\\begin{equation}\n\\label{BCSeq}\n\\frac{\\Delta C_p}{\\gamma T_{CDW}}=1.43\n\\end{equation}\n\nIn the case of YNiC$_2$, the equation (\\ref{BCSeq}) gives the value of 1.79, slightly larger than the BCS prediction, indicating the relevance of a weak coupling scenario.\n\n\nVisibly stronger and sharper anomaly accompanies the presumed lock-in crossover at $T_1$ = 275 K. Here the specific heat increases by $\\frac{\\Delta C_p}{C_p(T_1)} \\simeq 2.9 \\% $, with $C_p(T_1)$ estimated from the background. 
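To make this low-temperature analysis easy to reproduce, the sketch below fits $C_p/T$ versus $T^2$ according to equation (\\ref{CPeq}), converts the lattice coefficient $\\beta$ into a Debye temperature through equation (\\ref{Debye}), and evaluates a jump ratio of the form of equation (\\ref{BCSeq}); all numerical inputs are placeholder values chosen for illustration rather than a re-analysis of the measured data.
\\begin{verbatim}
import numpy as np

R = 8.314  # gas constant (J mol^-1 K^-1)

def debye_temperature(beta, n_atoms):
    # Theta_D = (12 pi^4 n R / (5 beta))^(1/3), with beta in J mol^-1 K^-4
    return (12.0 * np.pi**4 * n_atoms * R / (5.0 * beta)) ** (1.0 / 3.0)

# Placeholder low-temperature data generated from assumed gamma and beta;
# real input would be the measured C_p(T) below ~10 K.
gamma_true, beta_true = 2.0e-3, 1.0e-4     # J mol^-1 K^-2, J mol^-1 K^-4
T = np.linspace(2.0, 10.0, 40)
Cp = gamma_true * T + beta_true * T**3

# C_p / T = gamma + beta T^2 is a straight line in T^2.
beta_fit, gamma_fit = np.polyfit(T**2, Cp / T, 1)   # slope = beta, intercept = gamma
print(f"gamma   = {gamma_fit * 1e3:.2f} mJ mol^-1 K^-2")
print(f"beta    = {beta_fit * 1e3:.3f} mJ mol^-1 K^-4")
print(f"Theta_D = {debye_temperature(beta_fit, n_atoms=4):.0f} K")

# Specific-heat jump relative to the mean-field BCS value of 1.43.
delta_Cp, T_cdw = 0.9, 300.0               # J mol^-1 K^-1 and K (illustrative)
print(f"Delta C_p / (gamma T_CDW) = {delta_Cp / (gamma_fit * T_cdw):.2f}")
\\end{verbatim}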
The magnitude of this anomaly is noticeably larger than for the features typically observed at the incommensurate-commensurate CDW transformation\\cite{Craven1977, Kuo2004}.\n\n\n\nThe entropy $\\Delta S$ and enthalpy $\\Delta H$ of both anomalies were estimated from the excess specific heat at each transition by integrating the $\\frac{\\Delta C_p}{T}dT$ of and $\\Delta C_pdT$ respectively, after evaluating and subtracting the background values of $C_p$.\nThe integrated regions are highlighted by light violet color in Fig. \\ref{CP}a.\n\\begin{table}[t!]\n\\caption{Thermodynamic parameters: relative increase of specific heat $\\frac{\\Delta C_p}{C_p(T)}$, entropy $\\Delta S$ and enthalpy $\\Delta H$ at transition temperatures $T_{CDW}$ and $T_1$ in YNiC$_2$.}\n\\label{tableHC}\n\\begin{ruledtabular}\n\\begin{tabular}{cccc}\n& $\\frac{\\Delta C_p}{C_p(T)}$ (\\%) & $\\Delta S$ (mJ mol$^{-1}$K$^{-1}$) & $\\Delta H$ (J mol$^{-1}$)\\\\\n \\hline\n$T_{CDW}$ & 1.1 & 30.6 & 9.4\\\\\n$T_1$ & 2.9 & 77.8 & 21.3\\\\\n\\end{tabular}\n\\end{ruledtabular}\n\n\\end{table}\nThe results of the integration of $C_p$ excess accompanying the phase transitions are summarized in Tab. \\ref{tableHC}. While the size of $\\Delta C_p$ step at $T_{CDW}$ stands in agreement with the BCS predictions as well as with the values found in other materials exhibiting a weakly coupled charge density wave, we find an unusualy low value of $\\Delta S$ accompanying this transition. This can be imposed by the high Peierls temperature, resulting in a large denominator of $\\frac{\\Delta C_p}{T}$ and thus small result of the integral. The value of enthalpy however, does not diverge from the typically observed values in CDW systems\\cite{Escribe1984, Wang2006}.\nIn agreement with the comparison of $\\Delta C_p$ jump, for the crossover at $T_1$, the values $\\Delta S$ and $\\Delta H$, are significantly larger than for the Peierls transition at $T_{CDW}$. This result is unexpected, since typically the lock-in transition is not associated with the opening of a new electronic gap, next the one already existing in the CDW state. \nThe sharp peak shape of this anomaly can suggest a large role played by CDW order parameter fluctuations \\cite{McMillan1977, Kuo2001, Kwok1990}. The detailed analysis of crystal structure, as well as of the phonon spectra, performed on a single crystal is required to elucidate this issue.\n\n\n \n\\section{conclusions}\nWe have examined the physical properties of polycrystalline YNiC$_2$ and LuNiC$_2$. The former compound shows at $T_{CDW}$ = 318 K Peierls transition with signatures of BCS - mean field weak coupling scenario, followed by presumed lock-in crossover at $T_1$ = 275 K. The temperatures corresponding to these anomalies, revealed by transport, Hall effect and specific heat measurements, are found to obey the linear scaling with the unit cell volume, observed previously with lanthanide-based $R$NiC$_2$ compounds. \nBoth studied materials show large magnetoresistance in the CDW state, reaching 470 \\% for YNiC$_2$ and 50 \\% for LuNiC$_2$ at $T$ = 1.9 K and $B$ = 9 T. To discuss its origin, we have combined the analysis of thermal and magnetic field depencence of Hall effect and magnetoresistance. 
We have found that the effect standing behind such strong magnetoresistive features in YNiC$_2$ and LuNiC$_2$ is the existence of pockets, including at least one with high mobility carriers, remaining in the Fermi surface after nesting, caused by fully developed CDW transition not interrupted by competing orders such as magnetism or superconductivity.\n\n\\begin{acknowledgments}\n The authors gratefully acknowledge the financial support from National Science Centre (Poland), grant number: UMO-2015\/19\/B\/ST3\/03127. The authors would also like to thank to M. Hirshberger and C. Zhang (both RIKEN), Alain Pautrat (CRISMAT), Helen Walker (ISIS), Nathan Runyon, and Jesse Sprowes for their helpful advice. \n \\end{acknowledgments}\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nA standard model of cosmology is emerging (often dubbed the\nConcordance Model), in which the universe consists of 5\\% ordinary\nbaryonic matter, $\\sim 26$\\% dark matter, and $\\sim 69$\\% dark energy.\n\\cite{Komatsu:2008hk,concordo2} The baryonic content is\nwell-known, both from element abundances produced in primordial\nnucleosynthesis roughly 100 seconds after the Big Bang, and from\nmeasurements of anisotropies in the cosmic microwave background (CMB).\nThe evidence for the existence of dark matter is overwhelming, and\ncomes from a wide variety of astrophysical measurements.\n\n\n\\section{Dark Matter in Galaxies and Clusters}\n\n\\subsection{The Beginnings of the Dark Matter Problem and Rotation Curves}\n\nThe dark matter problem is perhaps the longest outstanding problem in all of modern physics.\nThe puzzle dates back to the 1930's, to the work first of Knut Lundmark in Sweden and shortly after that Fritz Zwicky at Caltech.\nZwicky noticed that galaxies in the Coma Cluster were moving too rapidly to be explained by the stellar\nmaterial in the cluster. He postulated that additional mass in the form of something dark must\nbe providing the gravitational pull to speed up the orbits. Subsequent work continued to find\nsimilar evidence, but it wasn't until the work of Ford and Rubin \\cite{FordRubin1970} in the 1970's \nthat the same unexplained rapid orbits were found\nto exist in every single galaxy. At that point the scientific consensus for dark matter emerged.\nFor a review of dark matter history, see the review of Ref.~\\refcite{Bertone:2016nfn}.\n\nRotation curves of\ngalaxies are flat. The velocities of objects (stars or gas) orbiting\nthe centers of galaxies, rather than decreasing as a function of the\ndistance from the galactic centers as had been expected, remain\nconstant out to very large radii. Similar observations of flat\nrotation curves have now been found for all galaxies studied,\nincluding our Milky Way. The simplest explanation is that galaxies\ncontain far more mass than can be explained by the bright stellar\nobjects residing in galactic disks. This mass provides the force to\nspeed up the orbits. To explain the data, galaxies must have enormous\ndark halos made of unknown `dark matter.' Indeed, more than 95\\% of\nthe mass of galaxies consists of dark matter. This is illustrated in\nFig. 1, where the velocity profile of galaxy NGC 6503 is displayed as\na function of radial distance from the galactic center. The baryonic\nmatter which accounts for the gas and disk cannot alone explain the\ngalactic rotation curve. 
However, adding a dark matter halo allows a\ngood fit to data.\\footnote{It is interesting to note that alternative scenarios without dark matter\nbegan with modified Newtonian dynamics (MOND). \\cite{milgrom} While these models\nhave been shown to fail, particularly by cosmic microwave background observations, \nthey may provide an interesting phenomenological fit on small scales. \\cite{mcgaugh}}\n\nThe limitations of rotation curves are that one can only look out as\nfar as there is light or neutral hydrogen (21 cm), namely to distances\nof tens of kpc. Thus one can see the beginnings of dark matter haloes, but\ncannot trace where most of the dark matter is. The lensing experiments\ndiscussed in the next section go beyond these limitations.\n\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{fig1}\n\\caption{Galactic rotation curve for NGC 6503 showing disk and gas\n contribution plus the dark matter halo contribution needed to match\n the data.}\n\\end{figure}\n\n\\subsection{Gravitational Lensing}\n\nEinstein's theory of general relativity predicts that mass bends, or\nlenses, light. This effect can be used to gravitationally ascertain\nthe existence of mass even when it emits no light. Lensing\nmeasurements confirm the existence of enormous quantities of dark\nmatter both in galaxies and in clusters of galaxies.\n\nObservations are made of distant bright objects such as galaxies or\nquasars. As the result of intervening matter, the light from these\ndistant objects is bent towards the regions of large mass. Hence\nthere may be multiple images of the distant objects, or, if these\nimages cannot be individually resolved, the background object may\nappear brighter. Some of these images may be distorted or sheared.\nThe Sloan Digital Sky Survey used weak lensing (statistical studies of\nlensed galaxies) to conclude that galaxies, including the Milky Way,\nare even larger and more massive than previously thought, and require\neven more dark matter out to great distances. \\cite{Sloan2005} Again, the\npredominance of dark matter in galaxies is observed.\n\nA beautiful example of a strong lens is shown in Fig.~2. The panel\non the right shows a computer reconstruction of a foreground cluster\ninferred by lensing observations made by Tyson et al.\\ \\cite{tyson} using the Hubble\nSpace Telescope. This extremely rich cluster contains many galaxies,\nindicated by the peaks in the figure. In addition to these galaxies,\nthere is clearly a smooth component, which is the dark matter\ncontained in clusters in between the galaxies.\n\nThe key success of the lensing of dark matter to date is the evidence that dark matter\nis seen out to much larger distances than could be probed by rotation\ncurves: the dark matter is seen in galaxies out to 200 kpc from the centers\nof galaxies, in agreement with\n$N$-body simulations. On even larger Mpc scales, there is\nevidence for dark matter in filaments (the cosmic web).\n\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{fig2}\n\\caption{Left: The foreground cluster of galaxies gravitationally\n lenses the blue background galaxy into multiple images. Right: A\n computer reconstruction of the lens shows a\n smooth background component not accounted for by the mass of the\n luminous objects.}\n\\end{figure}\n\n\\subsection{Hot Gas in Clusters}\n\nAnother piece of gravitational evidence for dark matter is the hot gas\nin clusters. Fig.~3 illustrates the Coma Cluster. 
The left panel is\nin the optical, while the right panel is emission in the X-ray\nobserved by ROSAT. \\cite{Coma1997}\n[Note that these two images are not\non the same scale.] The X-ray image indicates the presence of hot\ngas. The existence of this gas in the cluster can only be explained\nby a large dark matter component that provides the potential well to\nhold on to the gas.\n\n\\begin{figure}\n\\includegraphics[width=0.49\\textwidth]{fig3a}\n\\includegraphics[width=0.49\\textwidth]{fig3b}\n\\caption{COMA Cluster: without dark matter, the hot gas would\n evaporate. Left panel: optical image. Right panel: X-ray image from\n ROSAT satellite.}\n\\end{figure}\n\n\n\\subsection{Bullet Cluster}\n\nAn image (shown in Fig.~4) of the Bullet Cluster of galaxies (a cluster formed out\nof a collision of two smaller clusters) taken by the Chandra X-ray\nobservatory shows in pink the baryonic matter; in blue is an image of\nthe dark matter, deduced from gravitational lensing. In the process of\nthe merging of the two smaller clusters, the dark matter has passed\nthrough the collision point, while the baryonic matter slowed due to\nfriction and coalesced to a single region at the center of the new\ncluster. The Bullet Cluster provides clear evidence of the existence\nof two different types\nof matter: baryons and dark matter behave differently.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=7cm]{fig4}\n\\end{center}\n\\caption{The Bullet Cluster: A collision of galactic clusters shows\n baryonic matter (pink) as separate from dark matter (blue), whose\n distribution is deduced from gravitational lensing.}\n\\end{figure}\n\nThus the evidence that most of the mass of galaxies and clusters is\nmade of\nsome unknown component of dark matter is overwhelming.\nAs I've shown, dark matter shows its existence gravitationally in many ways, including rotation\ncurves (out to tens of kpc), gravitational lensing (out to 200\nkpc), hot gas in clusters, and the Bullet Cluster.\n\nAdditionally,\nwithout dark matter, large scale structure could not have formed by the present time\nand we would not exist. Until recombination at $z=1100$, the universe is ionized, baryons are\ntied to photons, and both photons and baryons stream out of structures as they are forming. It is the dark matter\nthat clumps together first, before recombination, and provides the potential wells for the ordinary matter to fall\ninto at a later time. In order for dark matter to initiate the formation of galaxies and clusters,\nit must be cold rather than hot. Hot dark matter would be moving relativistically and would\nstream out of structures in the same way that photons do; hence it was already known in the\n1980s that neutrinos cannot\nprovide the potential wells for structure formation and cannot constitute the dark matter.\nNonrelativistic cold dark matter has become the standard paradigm for the dark matter in the universe.\\footnote{Alternatives do exist including warm dark matter.}\n\nBelow I turn to the cosmic microwave background which\nprovides irrefutable evidence for dark matter.\n\n\n\n\\section{Cosmic Abundances}\nThe cosmic abundances tell a consistent story in which the\npreponderance of the mass in the universe consists of an unknown dark matter\ncomponent. 
The cosmic microwave background provides the most powerful\nmeasurements of the cosmological parameters; primordial\nnucleosynthesis restricts the abundance of baryonic matter; Type IA\nsupernovae provided the first evidence for the acceleration of the\nuniverse, possibly explained by dark energy as the major constituent\nof the cosmic energy density.\n\n\\subsection{The Cosmic Microwave Background}\n\nFurther evidence for dark matter comes from measurements on\ncosmological scales of anisotropies in the cosmic microwave background. \n\\cite{Komatsu:2008hk,concordo2} The CMB is the remnant\nradiation from the hot early days of the universe. The photons\nunderwent oscillations that froze in just before decoupling from the\nbaryonic matter at a redshift of 1100. The angular scale and height\nof the peaks (and troughs) of these oscillations are powerful\nprobes of cosmological parameters, including the total energy density,\nthe baryonic fraction, and the dark matter component, as shown in Fig.~5. The sound\nhorizon at last scattering provides a ruler stick for the geometry of\nthe universe: if the light travels in a straight line (as would be the\ncase for a flat geometry), then the angular scale of the first Doppler\npeak was expected to be found at 1 degree; indeed this is found to be\ncorrect. Thus the geometry is flat, corresponding to an energy\ndensity of the universe of $\\sim 10^{-29} {\\rm gm\/cm}^3$. The height\nof the second peak implies that 5\\% of the total is ordinary atoms,\nwhile matching all the peaks implies that 26\\% of the total is dark matter.\nIndeed the CMB by itself provides irrefutable evidence for dark matter.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=7cm]{planck}\n\\end{center}\n\\caption{Planck's power spectrum of temperature fluctuations in the cosmic microwave background.\nThe fluctuations are shown at different angular scales on the sky. Red dots with error bars are the\nPlanck data. The green curve represents the standard model of cosmology, $\\Lambda$CDM. The\npeak at 1 degree is consistent with a flat geometry of the universe, the height of the second peak with 5\\%, and the second and third peaks with 26\\% dark matter. }\n\\end{figure}\n\n\n\n\\subsection{Primordial nucleosynthesis}\nWhen the universe was a few hundred seconds old, at a temperature of\nten billion degrees, deuterium became stable: $p + n \\rightarrow D +\n\\gamma$. Once deuterium forms, helium and lithium form as well. The\nformation of heavier elements such as C, N, and O must wait a billion\nyears until stars form, with densities high enough for triple\ninteractions of three helium atoms into a single carbon atom. The\npredictions from the Big Bang are 25\\% Helium-4, $10^{-5}$\ndeuterium, and $10^{-10}$ Li-7 abundance by mass. These predictions\nexactly match the data as long as atoms are only 5\\% of the total\nconstituents of the universe.\n\n\\subsection{Dark Energy}\nThe first evidence for the $\\sim$70\\% dark energy in the universe came from\nobservations of distant supernovae (Perlmutter et al.,\n\\cite{sn1999a} Riess et al., \\cite{sn1999b} Riess et\n al.\\ \\cite{sn2004}). The supernovae are dimmer than expected, as\nis most easily explained by an accelerating universe. 
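Before moving on, the flat-geometry energy density quoted in the CMB discussion above ($\\sim 10^{-29}$ gm/cm$^3$) is simply the critical density that follows from the Friedmann equation, $\\rho_c = 3H_0^2/(8\\pi G)$. The short Python sketch below evaluates it for an assumed Hubble constant of $H_0 \\simeq 68$ km/s/Mpc; the specific value of $H_0$ is an illustrative choice, not a number taken from this article.
\\begin{verbatim}
# Critical density rho_c = 3 H0^2 / (8 pi G), in CGS units.
# H0 ~ 68 km/s/Mpc is an assumed, illustrative value.
import math

G   = 6.674e-8             # gravitational constant [cm^3 g^-1 s^-2]
Mpc = 3.086e24             # one megaparsec in cm
H0  = 68.0 * 1.0e5 / Mpc   # Hubble constant in s^-1

rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)
print("rho_c = %.2e g/cm^3" % rho_c)   # ~ 0.9e-29 g/cm^3
\\end{verbatim}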
There are two\ndifferent theoretical approaches currently pursued to explain the dark energy: (i) a vacuum energy such as a\ncosmological constant or time-dependent vacuum \\cite{fafm1987} may be responsible, or (ii) it is possible that\nGeneral Relativity is incomplete and that Einstein's equations need to\nbe modified. \\cite{modgenrel2002a,modgenrel2005,modgenrel2002b,Carroll_etal2004} Note, however, that\nthis dark energy does not resolve or contribute to the question of\ndark matter in galaxies, which remains as puzzling as (if not more than) it was\ntwenty years ago. We now have a concordance model of the universe, in\nwhich roughly a quarter of its content consists of dark matter.\n\n\\section{Dark Matter Candidates}\n\n\\subsection{MACHOs}\n\nTwenty years ago, it seemed reasonable that dark matter might consist of faint stars, substellar objects,\nor stellar remnants (white dwarfs or neutron stars),\ni.e., stars that simply were too faint to have yet been discovered. These fall into the category\nof massive compact halo objects, or MACHOs. Other MACHO candidates would include\n primordial black holes or mirror matter. \\cite{MohapatraTeplitz1999}\n\nA combination of theory and observation has ruled these out\nas solving the dark matter problem of the Milky Way. First, \nRefs.~\\refcite{faintstars,graff1996} used HST data to show that low mass stars could be at most 3\\% of the Milky Way dark matter. Next, a combination of theory plus Hipparcos parallax data was \nused to rule out substellar objects, or brown dwarfs, as the primary constituent of the Galaxy's dark matter. \\cite{browndwarfs} Stellar \nremnants were also potential DM candidates. Bounds on white dwarfs (WD) as dark matter came from many arguments (see Refs.~\\refcite{machoreview,machoreview2} for a review). Stellar precursors of white dwarfs would have produced too much IR radiation that would have swallowed TeV gamma-rays seen from objects like Markarian 451; too large a fraction of the Universe's baryonic mass budget would have been required to produce the progenitor stars of the white dwarfs; WD would have overproduced carbon and nitrogen. From\n these constraints we argued that at most 15\\% of the Milky Way Halo could be\nmade of white dwarfs (Freese et al.,\n\\cite{Freese_etal2000} Fields et al., \\cite{ffgwpb},\nGraff et al.\\ \\cite{ffgwpc}); at that time we disagreed with claims made by\n the MACHO microlensing experiment that 100\\% of the dark matter could be in the\n form of MACHOs (the experiments originally overestimated the MACHO contribution).\n\n Microlensing experiments (the MACHO (Alcock et al.\\\n\\cite{alcock2000}) and EROS experiments (Ansari et al.\\\n\\cite{eros2004})) eventually showed that MACHOs less massive than 0.1 $M_\\odot$\nmake an insignificant contribution to the energy density of the Galaxy. However, there is a possible detection\n(Alcock et al. \\cite{alcock2000}) of a roughly 15\\% halo\nfraction made of $\\sim 0.5 M_\\odot$ objects which might be made of\nstellar remnants such as white dwarfs. These estimates agree with the numbers we found earlier\nfrom a combination of theory and other data sets.
\\cite{machoreview,machoreview2} \nThe white dwarf contribution to the dark matter halo could be significant, yet\nnot enough to explain all of the dark matter of the Milky Way.\n\n\\subsection{Nonbaryonic Dark Matter}\nFrom primordial nucleosynthesis and microwave background data, it has\nbecome clear that dark matter consists of nonbaryonic material.\nThere is a plethora of dark matter candidates.\nOf the many candidates, the most popular are the weakly interacting massive particles\n(WIMPs) and the axions, as these\nparticles have been proposed for other reasons in particle physics.\nThese are discussed further below.\nOrdinary neutrinos are too light to be cosmologically\nsignificant, though sterile neutrinos remain a possibility. Other\ncandidates include primordial black holes (for the latest bounds, see Ref.~\\refcite{Carr:2016drx}),\nself-interacting dark matter, light dark matter,\nasymmetric dark matter, nonthermal WIMPzillas, Q-balls, and many others.\n\n\n \\subsection{Axions}\n\\label{sec:axions}\n\nThe good news is that cosmologists don't need to ``invent'' new\nparticles. Two candidates already exist in particle physics for other\nreasons: axions and WIMPs. Axions arise in the Peccei-Quinn solution to the strong-CP\nproblem in the theory of strong interactions, \\cite{peccei} and are suitable dark matter candidates \\cite{weinberg1978,wilczek1978} if the mass lies in the range $m_a \\sim 10^{-(3-6)}\\,$eV. An upper bound on the axion mass $m_a < 15\\,$meV can be derived from astrophysical considerations, \\cite{raffelt1986, raffelt2008,viaux,miller_bertolami} while a lower bound comes from cosmology \\cite{preskill1983,abbott1983,dine1983} and its value strongly depends on the thermal history of the universe and on the amount of topological defects. \\cite{visinelli2010} An exclusion region $6 \\times 10^{-13}{\\rm\\,eV} < m_a < 2 \\times 10^{-11}{\\rm \\, eV}$ that is independent of the cosmological history and comes from black hole super-radiance has been obtained \\cite{arvanitaki} using aLIGO measurements.\nMicrowave cavity searches \\cite{sikivie1983} allow for direct detection of axions. The Axion Dark Matter eXperiment (ADMX)\\cite{rosenberg} has already excluded a portion of the axion mass range and is currently searching for axions with a mass $\\sim 10^{-5}\\,$eV. A different technique consisting of searching for keV photons from axion-photon conversion in the Sun (through the Primakoff effect) has also been used in the KEK, CAST, and IAXO observatories. Such ``axion helioscopes'' are sensitive to the heavier end of the axion mass window. In addition, new ideas for axion searches include the Cosmic Axion Spin Precession Experiment (CASPEr) \\cite{Budker:2013hfa} and broadband and resonant approaches. \\cite{Kahn:2016aff}\nAxion searches continue to reach into the theoretically best motivated regions of mass and coupling.\n\n\n\\subsection{WIMPs}\n\\label{sec:WIMPs}\n\nWIMPs are \nthought to be good dark matter candidates from particle physics for two reasons.\nThey are defined to be particles that participate in weak interactions (but not strong or electromagnetic)\nand their masses are in the range GeV--10 TeV. \nThese particles, if present in thermal abundance in the early\nuniverse, annihilate with one another so that a predictable number of\nthem remain today.
The relic density of these particles comes out to\nbe the right value:\n\\begin{equation}\n\\Omega_\\chi h^2 = (3 \\times 10^{-27} {\\rm cm}^3\/{\\rm sec})\n\/ \\langle \\sigma v \\rangle_{ann}\\,.\n\\end{equation}\nHere $h$ is the Hubble constant in units of 100 km\/s\/Mpc, and\n the annihilation cross section $\\langle \\sigma v \\rangle_{ann} $\nof weak interaction strength automatically gives the correct abundance of these particles today.\nThis coincidence is known as ``the WIMP miracle'' and is the first reason why\nWIMPs are taken so seriously as dark matter candidates.\n\nSecondly, WIMP candidates automatically exist in models that have been proposed to resolve problems\nin theoretical particle physics. These models contain WIMPs as a byproduct of the theory. \nFor example, WIMP candidates exist in supersymmetric models (SUSY), including the lightest\nneutralino in the minimal supersymmetric standard model.\nSupersymmetry in particle theory is designed to protect particle masses\n(most notably the Higgs mass) from large quantum corrections. As a consequence, each particle we know has a\npartner: the photino is the partner of the photon, the squark is the\nquark's partner, and the selectron is the partner of the electron.\nThe lightest supersymmetric partner is a good dark matter candidate.\nAnother type of WIMP exists in models of universal extra dimensions.\nIn these theories all standard model fields propagate in a higher dimensional\nbulk that is compactified on a space that is TeV$^{-1}$ in extent. Higher\ndimensional momentum conservation in the bulk translates in four dimensions\ninto conservation of Kaluza-Klein (KK) number, which the boundary conditions reduce to a conserved KK parity. The lightest\nKK particle, known as the LKP, does not decay and is a WIMP candidate. \\cite{Servant:2002aq}\nWIMP candidates are well-motivated from the point of view of particle physics and relic density;\nthe key issue now is whether or not nature agrees with our theoretical prejudice.\nThe experimental hunt for WIMPs is ongoing.\n\n\\section{Four Pronged Approach to WIMP Detection}\n\nThere are several ways to search for WIMPs based on their interactions\nwith standard model particles: production at the Large Hadron Collider, scattering in underground direct detection experiments,\nindirect detection of the products of annihilating dark matter, and discovery of dark stars. I will discuss each of these in turn.\n\n\\subsection{Production at the Large Hadron Collider at CERN}\n\nAt the Large Hadron Collider (LHC), protons are collided at a center-of-mass energy of 13 TeV. Two beams travel in opposing directions\naround a 27 kilometer long ring, and then collide in several detectors. The two general purpose detectors ATLAS and CMS\nwere built with the goal of discovering the Higgs, discovering SUSY and dark matter, and discovering the unknown.\nThe first goal of finding the Higgs boson, the last missing piece of the standard model of particle physics,\n was successful as of July 2012 and immediately led to a Nobel Prize for\nHiggs and Englert. The other goals have as yet been elusive.\n\nSUSY dark matter particles could manifest at the LHC in a variety of ways.\nA possible signature would be missing transverse energy as the dark matter\nparticle leaves undetected, together with jets of particles created during the decay chain of SUSY particles emerging from the\ncollision. Such a signature has not yet been seen, leading to ever higher bounds on SUSY particle masses.\nThe minimal supersymmetric standard model (MSSM) has 105 free parameters.
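Before continuing with the collider discussion, it is worth checking the ``WIMP miracle'' estimate numerically. The short Python sketch below simply inverts the relic abundance expression quoted at the beginning of this section, taking its numerical prefactor at face value; it illustrates the scaling and is not a substitute for a full relic density calculation.
\\begin{verbatim}
# "WIMP miracle" check: Omega_chi h^2 = (3e-27 cm^3/s) / <sigma v>,
# using the prefactor quoted in the text at face value.
def omega_h2(sigma_v):
    """Relic abundance for a thermally averaged cross section in cm^3/s."""
    return 3.0e-27 / sigma_v

for sv in (3.0e-26, 1.0e-25, 1.0e-24):
    print("<sigma v> = %.0e cm^3/s  ->  Omega h^2 = %.3f" % (sv, omega_h2(sv)))
# A weak-scale cross section of ~3e-26 cm^3/s gives Omega h^2 ~ 0.1,
# close to the measured dark matter density Omega_dm h^2 ~ 0.12.
\\end{verbatim}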
If one makes some simplifying assumptions that\nunify all fermion masses $m_{1\/2}$ and all scalar masses $m_0$ at a high scale, then in the resulting constrained minimal\nsupersymmetric model (CMSSM, or MSUGRA), only five parameters remain. The experimental results are often quoted in the context\nof this CMSSM\/MSUGRA. For example, Fig.~6 illustrates the bounds from ATLAS on the supersymmetric parameter space.\nThe remaining parameter space is being pushed to above the TeV scale. However, it is important to note that\nthese bounds apply only to the MSUGRA\/CMSSM.\n\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{atlasbounds}\n\\caption{Bounds on MSUGRA\/CMSSM from 8 TeV ATLAS data. The remaining allowed parameter space is above the lines.}\n\\end{figure}\n\n\n\nThe LHC will never be able to kill even minimal supersymmetry. \\cite{gates} Even in the MSSM, a 25 GeV neutralino currently\nsurvives as a possibility. \\cite{Pierce:2013rda} If the LHC sees nothing, SUSY can survive. It may be at high scale.\nOr, it may be less simple than the assumption that all scalars and all fermions unify at some high scale; e.g. the non-universal\nHiggs model (NUHM) or the non-universal gaugino model (NUGM).\n\nSUSY particles may be\ndiscovered at the LHC as missing transverse energy plus jets in an event. In that case one\nknows that the particles live long enough to escape the detector, but\nit will still be unclear whether they\nare long-lived enough to be the dark matter. Thus\ncomplementary astrophysical experiments are needed.\nProof that the dark matter has been found requires astrophysical particles to be found,\nvia the other prongs of the dark matter search techniques.\n\n\\subsection{Direct Detection Experiments}\n\nDirect detection experiments take advantage of the large number of WIMPs in the Galaxy.\nA WIMP travels through the detector,\n scatters off of a nucleus, and deposits a small amount of energy that may be detected\n The experiments are extraordinarily difficult and the progress has been impressive:\nthe count rates are less than one count\/kg\/day and the energy deposited is O(keV).\n\nThe history of dark matter direct detection began with the ideas and theoretical calculations in the 1980s.\nIn 1984 Drukier and Stodolsky \\cite{stodolsky} proposed neutrino detection via weak scattering off nuclei.\nThen Goodman and Witten \\cite{gw} turned the same approach to dark matter detection.\n Drukier, Freese, and Spergel \\cite{dfs}\nfirst included a Maxwellian distribution of WIMPs in the Galaxy,\ncomputed cross sections for a variety of candidates, and proposed the idea of annual modulation to identify\na WIMP signal. In another paper we further studied the idea of using annual modulation, not only for background rejection but also to tease out a\nsignal even in the presence of overwhelming noise; \\cite{wgould} this is the technique used by the DAMA experiment described below.\nFor reviews, see Refs.~\\refcite{Jungman_etal1996,jkgb,jkgc,Bertone_etal2004,lisanti}.\n\nThe text in the subsequent few paragraphs outlines dark matter direct detection and\n is taken from my review paper with Lisanti and Savage. 
\\cite{lisanti}\nWhen a WIMP strikes a nucleus, the nucleus recoils with energy $E$.\nThe differential recoil rate per unit detector mass is\n\\begin{equation}\\label{eqn:dRdEnr}\n dR\/dE\n = \\frac{n_\\chi}{M} \\Big\\langle v \\frac{d\\sigma}{dE} \\; \\Big\\rangle\n = \\frac{2\\rho_\\chi}{m_\\chi}\n \\int d^3v \\, v f(v,t) \\frac{d\\sigma}{dq^2}(q^2,v) \\, ,\n\\end{equation}\nwhere $n_\\chi = \\rho_\\chi\/m_\\chi$ is the number density of WIMPs, with\n$\\rho_\\chi$ the local dark matter mass density; $f(v,t)$ is the\ntime-dependent WIMP velocity distribution; and\n$\\frac{d\\sigma}{dq^2}(q^2,v)$ is the velocity-dependent differential\ncross-section, with $q^2 = 2 M E$ the momentum exchange in the\nscatter. The differential rate is typically given in units of\ncpd\\,kg$^{-1}$\\,keV$^{-1}$, where cpd is counts per day. Using the\nform of the differential cross-section for the most commonly assumed\ncouplings, to be discussed below,\n\\begin{equation}\\label{eqn:dRdEnr2}\n dR\/dE\n = \\frac{1}{2 m_\\chi \\mu^2} \\, \\sigma(q) \\, \\rho_\\chi \\eta(v_{min}(E),t),\n\\end{equation}\nwhere $\\sigma(q)$ is an effective scattering cross-section and\n\\begin{equation} \\label{eqn:eta}\n \\eta(v_{min},t) = \\int_{v > v_{min}} d^3v \\, \\frac{f(v,t)}{v}\n\\end{equation}\nis the mean inverse speed, with\n\\begin{equation} \\label{eqn:vmin}\n v_{min} =\n \\sqrt{\\frac{M E}{2\\mu^2}}\n \\end{equation}\nThe benefit of writing the recoil spectrum in the\nform of Eqn.(\\ref{eqn:dRdEnr2}) is that the particle physics and astrophysics\nseparate into two factors, $\\sigma(q)$ and $\\rho_\\chi \\eta(v_{min},t)$,\nrespectively.\nIt is traditional to define a form-factor\ncorrected cross-section\n\\begin{equation}\\label{eqn:sigmaq}\n \\sigma(q) \\equiv \\sigma_0 F^2(q) \\, ,\n\\end{equation}\nHere $\\sigma_0$ is the scattering cross-section in the\nzero-momentum-transfer limit and\n$F^2(q)$ is a form factor to account for the finite size of the\nnucleus.\n\nTwo types of interactions are most commonly studied.\nIn spin independent (SI) interactions, the\nscattering is coherent and scales as the atomic mass squared, $A^2$.\nThe SI\ncross-section can be written as\n\\begin{equation} \\label{eqn:sigmaSI2}\n \\sigma_{SI} = \\frac{\\mu^2}{\\mu_p^2} A^2 \\, \\sigma_{p,SI} \\, ,\n\\end{equation}\nwhere $\\mu_p$ is the WIMP-proton reduced mass.\nThe SI cross-section grows rapidly with nuclear mass. 
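To get a feeling for how steep this growth is, the sketch below evaluates the ratio $\\sigma_{SI}/\\sigma_{p,SI} = (\\mu^2/\\mu_p^2)\\,A^2$ from the scaling relation above for a few common target nuclei, at an assumed benchmark WIMP mass of 100 GeV. The numbers ignore the form factor $F^2(q)$ and are meant only as a rough illustration.
\\begin{verbatim}
# Coherent enhancement of the spin-independent WIMP-nucleus cross
# section over the WIMP-proton one: sigma_SI/sigma_p = (mu/mu_p)^2 * A^2
# (zero momentum transfer; nuclear form factor neglected).
def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

m_chi = 100.0     # assumed benchmark WIMP mass [GeV]
m_p   = 0.938     # proton mass [GeV]
amu   = 0.931     # atomic mass unit [GeV]

for name, A in (("Na", 23), ("Ge", 73), ("Xe", 131)):
    mu_N  = reduced_mass(m_chi, A * amu)
    mu_p  = reduced_mass(m_chi, m_p)
    boost = (mu_N / mu_p)**2 * A**2
    print("%-2s (A=%3d): sigma_SI/sigma_p ~ %.1e" % (name, A, boost))
# Heavy targets such as xenon gain factors of order 1e7 relative to
# the WIMP-proton cross section.
\\end{verbatim}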
The explicit\n$A^2$ factor arises from the fact that the\ncontributions to the total SI cross-section of a nucleus is a coherent\nsum over the individual protons and neutrons within.\n\nSpin dependent (SD) scattering is due to the interaction of a WIMP with the spin of the\nnucleus and takes place only in those\ndetector isotopes with an unpaired proton and\/or unpaired neutron.\nThe SD WIMP-nucleus cross-section is\n\\begin{equation} \\label{eqn:sigmaSD}\n \\sigma_{SD} = \\frac{32 \\mu^2}{\\pi} G_{F}^{2} J(J+1) \\Lambda^2 \\, ,\n\\end{equation}\nwhere $G_F$ is the Fermi constant, $J$ is the spin of the nucleus,\n\\begin{equation} \\label{eqn:Lambda}\n \\Lambda \\equiv \\frac{1}{J} \\Big( a_p\\langle S_p \\rangle + a_n \\langle S_n \\rangle\n \\Big) \\, ,\n\\end{equation}\nwhere $\\langle S_p \\rangle$ and $\\langle S_n \\rangle $\n are the average spin contributions from the\nproton and neutron groups, respectively, and $a_p$ ($a_n$) are the\neffective couplings to the proton (neutron) (these need not be the same).\n\nThe dark matter halo in the local neighborhood is most likely\ndominated by a smooth and well-mixed (virialized) component with an\naverage density $\\rho_\\chi \\approx 0.4$~GeV\/cm$^3$.\nThe simplest model for this smooth component is often taken to be the\nstandard halo model (SHM) \\cite{dfs,wgould} of an\nisothermal sphere with an isotropic, Maxwellian velocity distribution\nand rms velocity dispersion $\\sigma_v$. The SHM is written as\n \n \n \n \n \n \n \n\\begin{equation} \\label{eqn:TruncMaxwellian}\n \\widetilde{f}(v) =\n \n \\frac{1}{N_{esc}} \\left( \\frac{3}{2 \\pi \\sigma_v^2} \\right)^{3\/2}\n \\, e^{-3v^2\\!\/2\\sigma_v^2} ,\n \\textrm{for} \\,\\, |v| < v_{esc} \\\\\n \\end{equation}\nand $ \\widetilde{f}(v) = 0 $ otherwise. Here,\n\\begin{equation} \\label{eqn:Nesc}\n N_{esc} = \\textrm{erf}(z) - \\frac{2}{\\sqrt{\\pi}} z e^{-z^2} \\, ,\n\\end{equation}\nwith $z \\equiv v_{esc}\/v_0$, is a normalization factor and\n\\begin{equation} \\label{eqn:vmp}\n v_0 = \\sqrt{2\/3} \\, \\sigma_v\n\\end{equation}\nis the most probable speed, with an approximate value of 235~km\/s\n(see Refs.~\\refcite{Kerr:1986hz,Reid:2009nj,McMillan:2009yr,Bovy:2009dr}).\n\nOur early work \\cite{dfs,wgould} used this Maxwellian dark matter distribution. Although there has been concern\nthat the velocity distribution of the dark matter might deviate significantly from Maxwellian, Refs.~\\refcite{withbaryons,bozorgnia,sloane2016} showed that results obtained for dark matter with a Maxwellian profile are consistent with those obtained when baryons are included in dark matter simulations, though there is as yet possible disagreement for the high velocity tail. We concluded that\nthe Maxwellian approximation\n is a perfectly good approximation when comparing results of dark matter experiments to data.\n\n We also showed \\cite{dfs} that the dark matter signal should experience an annual modulation\n(for a review, see Ref.~\\refcite{lisanti}.)\n As the Sun orbits around the Galactic Center, Earth-based detectors are effectively moving into a ``wind\" of WIMPs.\n The WIMPs are moving in random directions in the Galaxy, and the Sun's motion creates (on the\n average) a relative velocity between us and the WIMPs.\n On top of that, because the Earth is moving around the Sun, the relative velocity of the Earth with the WIMP wind\n varies with the time of year. 
Thus the count rate should modulate sinusoidally with the time of year, peaking in June\n and with a minimum in December.\n We predicted that the annually modulating\nrecoil rate can be approximated by\n\\begin{equation} \\label{eqn:dRdES}\n dR\/dE(E,t) \\approx S_0(E) + S_m(E) \\cos{\\omega(t-t_0)} ,\n\\end{equation}\nwith $|S_m| \\ll S_0$, where $S_0$ is the time-averaged rate, $S_m$ is\nreferred to as the modulation amplitude, $\\omega = 2\\pi$\/year and $t_0$ is the phase of the\nmodulation.\nSince typical backgrounds do not experience the same annual modulation, this effect\ncan be used to tease the signal out of the background. \\cite{wgould}\n\n These first papers convinced experimentalists that they would be able to build detectors sensitive\n enough to search for WIMPs. The detectors must be placed deep underground in order to filter out cosmic rays,\n in underground mines or underneath mountains. The first experimental effort to search for and bound WIMP dark\n matter was Ref.~\\refcite{ahlenavignone}.\n Now, 30 years later, direct detection searches are\n ongoing worldwide, in US, Canada, Europe, Asia, and the South Pole, see Fig.~7.\n\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{undergroundlabs}\n\\caption{Underground dark matter laboratories worldwide (courtesy of M. Tripathi and M. Woods). The CanFranc underground\nlaboratory in Spain is missing from the figure.}\n\\end{figure}\n\n Of all of these experiments, only one, the Italian DAMA experiment, \\cite{Bernabei:2014tea} has positive signal.\n They use NaI crystals in the Gran Sasso tunnel under the Apennine Mountains near Rome.\n The signal they have is the annual modulation we predicted for the WIMP signal. \\cite{dfs,wgould}\n DAMA has observed exactly this annual modulation with the correct phase, see Fig.~8. \n Indeed DAMA has 10 years of cycles corresponding to a 9 $\\sigma$ detection of modulation.\n\n Now the question is, have they detected dark matter? Unfortunately they have not released the data for others to study.\n In addition, no experiment other than DAMA has found any signal at all. Indeed the null results from other experiments\n place strong bounds on the WIMP elastic scattering cross section. Naively it might seem that the other experiments\n rule out the DAMA results as being due to WIMPs. Yet, this may not be true, because all the detectors are made of\n different materials. DAMA is the only experiment to date that uses NaI crystals.\n For example, LUX \\cite{Akerib:2016vxi}\nand XENON \\cite{Aprile:2016swn}\nare made of xenon while CDMS (and SuperCDMS) \\cite{cdms} is made of germanium, which are far heavier nuclei\n than the components of DAMA's NaI crystals. To compare the different experiments, theoretical input is required.\n For example, if one assumes the scattering is SI so that the cross section scales as $A^2$, one can then plot\n the different experiments as in Fig.~9 in the cross section\/ WIMP mass plane. DAMA signals\ncould be due to roughly 10 GeV WIMPs if the scattering is with Na atoms, while the signal would be due to 80 GeV WIMPs\nif the scattering were off of iodine atoms. The higher mass region is in severe conflict with bounds from other\nexperiments, while the lower mass region also appears to be ruled out. However, if one abandons the $A^2$ assumption\nthen this comparison plot is no longer valid. For all known theoretical assumptions it is hard to reconcile the positive results\nof DAMA with the negative results of other experiments. 
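The kinematics behind the two DAMA mass regions can be made concrete with the $v_{min}$ relation given earlier. The Python sketch below evaluates the minimum WIMP speed needed to produce a given nuclear recoil on sodium and on iodine; the benchmark recoil energy and WIMP masses are illustrative choices only, and detector-specific effects such as quenching are ignored.
\\begin{verbatim}
# Minimum WIMP speed for a recoil of energy E_R on a nucleus of mass M:
#   v_min = sqrt(M * E_R / (2 mu^2)),  mu = WIMP-nucleus reduced mass.
# Benchmark recoil energy and WIMP masses below are illustrative only.
import math

c_km_s = 2.998e5       # speed of light [km/s]
amu    = 0.931494      # atomic mass unit [GeV]

def v_min(m_chi, A, E_R_keV):
    m_N = A * amu                      # nuclear mass [GeV]
    mu  = m_chi * m_N / (m_chi + m_N)  # reduced mass [GeV]
    E_R = E_R_keV * 1.0e-6             # recoil energy [GeV]
    return math.sqrt(m_N * E_R / (2.0 * mu**2)) * c_km_s

for m_chi in (10.0, 80.0):                    # WIMP masses [GeV]
    for target, A in (("Na", 23), ("I", 127)):
        v = v_min(m_chi, A, 10.0)             # 10 keV recoil
        print("m_chi = %2.0f GeV on %-2s: v_min ~ %3.0f km/s"
              % (m_chi, target, v))
# Halo WIMP speeds are a few hundred km/s with a cutoff near the escape
# speed, so a ~10 GeV WIMP can produce keV recoils on Na but barely on I,
# while an ~80 GeV WIMP does so easily on either target.
\\end{verbatim}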
Perhaps uncertain nuclear physics may be responsible.\n\\cite{Anand:2014kea,Dent:2015zpa}\n Many alternate explanations to the discovery of DM\nhave been proposed (e.g. radon contamination, muons, etc.) but all have been shown to be wrong.\nThe reason DAMA remains so interesting is that there is no other known explanation of the\nannual modulation they are seeing. \n\n \\begin{figure}\n\\includegraphics[width=\\textwidth]{DAMAdata}\n\\caption{DAMA data (including DAMA\/LIBRA) has a 9 $\\sigma$ detection of annual modulation consistent\nwith WIMPs. \\cite{Bernabei:2014tea}}\n\\end{figure}\n\nWhat is needed are further experimental tests using the same detector material as DAMA (NaI crystals) but in a different\nlocation. These experiments are now taking place: SABRE, \\cite{Froborg:2016ova}\nCOSINE-100 \\cite{COSINE100} (KIMS has joined with dark matter-ICE \\cite{deSouza:2016fxg}), and ANAIS. \\cite{Amare:2015rpa}\n Thus in the next five years there should be either\nconfirmation of DAMA or it will be ruled out.\n\n \\begin{figure}\n\\includegraphics[width=\\textwidth]{SIbounds}\n\\caption{Spin independent scattering bounds from direct detection experiments as shown, as well as regions compatible with DAMA data,\nin the SI elastic scattering cross section vs. WIMP mass plane. Plot taken from Particle Data Book 2015 (PANDA-X and LUX bounds need to be updated).}\n\\end{figure}\n\nI also wanted to mention a new idea we have for dark matter direct detection using DNA (see Fig.~10). We proposed \\cite{DNA} to\nuse nanometer thin sheets of gold (or other material) with $\\sim10^{60}$ strands of DNA attached. When a WIMP hits the\ngold sheet, it knocks a gold atom forward into the DNA. The gold atom then severs whatever DNA strands it hits.\nThe broken strand of DNA then falls down and is collected.\nThe DNA has been carefully constructed to have a well-known sequence of bases (A,G,C...).\nUsing well known biological techniques (PCR and sequencing), the location of the break can be identified.\nThus the track of the recoiling gold nucleus can be reconstructed. Since the distance between the bases in the DNA\nstrand is nanometer in size, this technique provides a nanometer tracker.\nOnce the track of the gold nucleus is known, since the WIMP traveled in roughly the same direction, the direction that\nthe WIMP came in from is also known. This idea thus provides a directional dark matter detector. The importance of this\nis as follows. It allows clear proof of dark matter discovery. We expect ten times as many counts when the detector\nis pointing into the direction of the WIMP wind than when it is pointing in the opposite direction. This head\/tail asymmetry\nwould be hard to explain with any background. Additionally, both the annual and daily (due to Earth's rotation)\n variation of the signal would\nbe detectable and would give superb background rejection. In the long run, a directional detector would allow\nthe discovery of where the WIMPs are in the Galaxy and how they are moving.\n\nA second radically new idea we have proposed for dark matter detection is ``nanobooms\". \\cite{nanobooms}\nThe WIMP sets off a very small explosion when\nit deposits heat in the detector. For example, the detector might consist of thermites. 
Then the WIMP's energy deposit would\ncause the exothermic reaction between a metal and a metal oxide to take place, i.e., there is a small explosion, which can then\nbe detected acoustically, optically, or more likely via gas expansion.\n\n \\begin{figure}\n\\includegraphics[width=\\textwidth]{DNA}\n\\caption{DNA based Dark Matter Directional Detector.\nA WIMP hits the nanometer-thin gold plate, knocks a gold atom into the hanging\nstrands of DNA. Whenever the gold atom strikes a DNA strand, the strand breaks and is collected. Since the base sequence\nof the strands is controlled, sequencing the broken strand allow the location of the break to be identified.\nHence the DNA serves as a tracker\nwith nanometer accuracy. Since the WIMP travels in roughly the same direction as the gold atom, the detector discovers the\ndirection the WIMP came from.}\n\\end{figure}\n\nThe next five years stand to lead to tests of the DAMA annual modulation signal and a confirmation or refutation of WIMP\ndiscovery as well as progress in directional sensitivity.\n\n \\subsection{Indirect Detection}\nWIMP annihilation in today's universe takes place wherever there is an overdensity of WIMPs.\nThe final products of WIMP annihilation are neutrinos, e$^+$\/e$^-$ pairs, and photons.\nAll three of these are being looked for in detectors.\nPromising places to look are the Galactic Center, dwarf galaxies, clusters of galaxies, \\cite{Adams:2016alz}\nand in the case of neutrinos, the Earth and the Sun. The first papers suggesting\nthe latter neutrino searches were by Silk et al.\\, \\cite{SOS} in the Sun; and\nby Freese \\cite{Freese1986} as well as Krauss, Srednicki and\nWilczek \\cite{Krauss_etal1986} in the Earth. As yet no signal of neutrinos due to WIMP annihilation\nin the Sun or Earth \\cite{Aartsen:2016fep}\nhas been found\nin the IceCube\/DeepCore detectors at the South Pole.\n\nThe AMS experiment on board the International Space Station has found an excess of positrons. \\cite{AMS}\n However, this excess is not likely to be due to WIMP annihilation.\nA combination of two papers has shown that such an explanation is extremely unlikely.\nFirst, the work of Lopez, Savage, Spolyar, and Adams \\cite{Lopez:2015uma}\npointed out that\nsuch a positron excess would predict also gamma-rays from dwarf galaxies, which are not seen in the Fermi Gamma Ray\nSpace Telescope (Fermi-LAT) data.\nThey used the bounds on gamma-rays from dwarfs in Fermi-LAT data \nto show that all WIMP annihilation channels are excluded as explanations of AMS data except one (via a mediator to four muons).\nThis latter channel was further examined by Scaffidi et al.\\ \\cite{Scaffidi:2016ind}\nSecond, the Planck satellite\nexamined the effects such an excess would imply for the CMB and ruled out a large swath of parameter space. \\cite{Ade:2015xua}\nThe work of Ref.~\\refcite{Lopez:2015uma} using Fermi-LAT data to rule out a DM explanation of the AMS positron excess\nwas placed on the arXiv a month prior to the Planck bounds.\nIt is far more likely that the AMS positron excess is due to pulsars or other point sources than due to WIMP\nannihilation.\n\nOf great interest over the last few years has been Fermi-LAT's discovery of a gamma-ray excess towards the Galactic Center.\nHooper and Goodenough \\cite{GCexcess} pointed out that it could be from the annihilation of a 40 GeV WIMP.\nMore recent studies of cosmic ray backgrounds have widened the possible range of masses \\cite{agrawal}\nand therefore SUSY\nexplanations of this excess. 
\\cite{Freese:2015ysa}\nHowever, studies \\cite{pointsources} have shown that a point source explanation (e.g., pulsars) is at least as likely as\na dark matter explanation. Though tantalizing, a dark matter explanation of this gamma ray excess will be hard to prove as there\nis much astrophysical competition at the Galactic Center.\n\n\\subsection{Summary of WIMP Searches}\n\nTo summarize the current status of WIMP searches, there is possible evidence for WIMP detection already now, but\nnone of it is certain.\nThe direct detection experiment DAMA has found annual modulation of its signal that would be compatible\nwith a WIMP origin. However, other experiments have null results in conflict with DAMA's result. Since the\nexperiments are made of different detector materials, further tests of the same material as DAMA are now\ntaking place around the world and will result in confirmation or refutation in the next five years.\n\nAs far as indirect detection of WIMP annihilation products, the positron excess seen by AMS likely has a different\norigin than WIMPs. The gamma-ray excess seen from the direction of the Galactic Center by the Fermi Gamma Ray Space Telescope\nis compatible with a WIMP origin but other astrophysical explanations are at least as likely.\n\nTheorists are looking for models in which some of these results are consistent with one another, given a WIMP interpretation.\nWhat will it take for us to believe dark matter has been found? We need a compatible signal in a variety of experiments made\nof different detector materials and all the parties agree.\n\n\n\\section{Dark Stars}\nA fourth prong of the hunt for dark matter is the search to discover dark stars.\nThe first stars to form in the universe, at redshifts $z \\sim 10-50$,\nmay be very unusual; these dark stars are made almost entirely of atomic matter (hydrogen and helium, with only $10^{-3}$ of the mass made of dark matter)\nand yet are powered by dark matter heating rather than by fusion. Dark stars were first proposed by Spolyar, Freese, and Gondolo \\cite{SpolyarFreeseGondolo08} and are reviewed in Ref.~\\refcite{Freese:2015mta}.\n\n As discussed in the last section, WIMP dark matter annihilation in the\nearly universe provides the right abundance today to explain the dark\nmatter content of our universe. This same annihilation process will\ntake place at later epochs in the universe wherever the dark matter\ndensity is sufficiently high to provide rapid annihilation. The first\nstars to form in the universe are a natural place to look for\nsignificant amounts of dark matter annihilation, because they form at\nthe right place and the right time. They form at high redshifts, when\nthe universe was still substantially denser than it is today, and at\nthe high density centers of dark matter haloes.\n\nThe first stars form inside dark matter haloes of $\\sim10^6 M_\\odot$\n(for reviews see e.g., Ripamonti \\& Abel, \\cite{RipamontiAbel05}\nBarkana \\& Loeb, \\cite{BarkanaLoeb01} and Bromm \\& Larson;\n\\cite{BrommLarson03} see also Yoshida et al.\\ \\cite{Yoshida_etal06}).\nOne star is thought to form inside one such dark matter halo. It was our idea to ask, what is the\neffect of the dark matter on these first stars? We studied the behavior of\nWIMPs in the first stars. As our canonical values, we take $m_\\chi =\n100$GeV for the WIMP mass and $\\langle \\sigma v \\rangle_{ann} = 3\n\\times 10^{-26} {\\rm cm^3\/sec}$ for the annihilation cross section\n(motivated above). 
However, the same behavior results for a wide variety\nof WIMP masses and cross sections over many orders of magnitude.\nWe find that the annihilation products of the\ndark matter inside the star can be trapped and\ndeposit enough energy to heat the star and prevent it from further\ncollapse. A new stellar phase results, a dark star, powered\nby dark matter annihilation as long as there is dark matter fuel.\n\n\\subsection{Three Criteria for Dark Matter Heating}\n\n WIMP annihilation produces energy at a rate per\nunit volume\n\\begin{equation}\n Q_{\\rm ann} = \\langle \\sigma v \\rangle_{ann} \\rho_\\chi^2\/m_\\chi\n \\linebreak \\simeq 10^{-29} {{\\rm erg} \\over {\\rm cm^3\/s}} \\,\\,\\, {\\langle\n \\sigma v \\rangle \\over (3 \\times 10^{-26} {\\rm cm^3\/s})} \\left({n \\over {\\rm\n cm^{-3}}}\\right)^{1.6} \\left({100 {\\rm GeV}\\over m_\\chi}\\right)\n\\end{equation}\nwhere $\\rho_\\chi$ is the dark matter energy density inside the star and $n$ is\nthe stellar hydrogen density. Spolyar, Freese and Gondolo\n\\cite{SpolyarFreeseGondolo08} outlined the three key ingredients\nfor dark stars: 1) high dark matter densities, 2) the annihilation\nproducts get stuck inside the star, and 3) dark matter heating wins over other\ncooling or heating mechanisms. These ingredients are required\nthroughout the evolution of the dark stars.\n\n{\\bf First criterion: high dark matter density inside the star.} Dark\nmatter annihilation is a powerful energy source in these first stars\nbecause the dark matter density is high. To find the dark matter density\nprofile, we started with an NFW (Navarro, Frenk \\& White\n\\cite{NavarroFrenkWhite96}) profile for both dark matter and gas in the $10^6\nM_\\odot$ halo. However, we find the same behavior results for even\na completely flat profile; the dark star is born regardless.\nOriginally we used adiabatic contraction ($M(r)r$ =\nconstant) (Blumenthal et al.\\ \\cite{Blumenthal_etal85}) and\nmatched onto the baryon density profiles given by Abel, Bryan \\&\nNorman \\cite{AbelBryanNorman02} and Gao et\nal.\\ \\cite{Gao_etal07} to obtain dark matter profiles; see also Natarajan,\nTan \\& O'Shea \\cite{NatarajanTanO'Shea08} for a recent\ndiscussion. Subsequent to our original work, we have done an exact\ncalculation (which includes radial orbits) (Freese, Gondolo, Sellwood\n\\& Spolyar \\cite{FreeseGondoloSellwoodSpolyar08}) and found that\nour original results were remarkably accurate, to within a factor of\ntwo. At later stages, we also consider possible further enhancements\ndue to capture of dark matter into the star (discussed below).\n\n{\\bf Second Criterion: dark matter annihilation products get stuck\n inside the star}. In the early stages of Population III star formation,\nwhen the gas density is low, most of the annihilation energy is\nradiated away (Ripamonti, Mapelli \\& Ferrara\n\\cite{RipamontiMapelliFerrara06}). However, as the gas collapses\nand its density increases, a substantial fraction $f_Q$ of the\nannihilation energy is deposited into the gas, heating it up at a rate\n$f_Q Q_{\\rm ann}$ per unit volume. While neutrinos escape from the\ncloud without depositing an appreciable amount of energy, electrons\nand photons can transmit energy to the core. We have computed\nestimates of this fraction $f_Q$ as the core becomes more dense. 
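For orientation, the heating rate quoted above can be evaluated directly at the gas densities that appear in the trapping and heating criteria below; the minimal Python sketch here takes the quoted prefactor and scalings at face value and is meant only to give a sense of the numbers.
\\begin{verbatim}
# Dark matter heating rate per unit volume, using the fiducial scaling
# quoted in the text:
#   Q_ann ~ 1e-29 erg/cm^3/s * (<sigma v>/3e-26 cm^3/s)
#           * (n/cm^-3)^1.6 * (100 GeV/m_chi)
def q_ann(n, m_chi=100.0, sigma_v=3.0e-26):
    return 1.0e-29 * (sigma_v / 3.0e-26) * n**1.6 * (100.0 / m_chi)

for n in (1.0e9, 1.0e11, 1.0e13):   # hydrogen number densities [cm^-3]
    print("n = %.0e cm^-3 : Q_ann ~ %.1e erg/cm^3/s" % (n, q_ann(n)))
\\end{verbatim}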
Once\n$n\\sim 10^{11} {\\rm cm}^{-3}$ (for 100 GeV WIMPs), e$^-$ and photons\nare trapped and we can take $f_Q \\sim 2\/3$.\n\n{\\bf Third Criterion: dark matter heating is the dominant heating\/cooling\n mechanism in the star}. We find that, for WIMP mass $m_\\chi =\n100$GeV (1 GeV), a crucial transition takes place when the gas density\nreaches $n> 10^{13} {\\rm cm}^{-3}$ ($n>10^9 {\\rm cm}^{-3}$). Above\nthis density, dark matter heating dominates over all relevant cooling\nmechanisms, the most important being H$_2$ cooling (Hollenbach\n\\& McKee \\cite{HollenbachMcKee79}).\n\nFig.~11 shows evolutionary tracks of the protostar in the\ntemperature-density phase plane with dark matter heating included\n(Yoshida et al.\\ \\cite{Yoshida_etal08}), for two dark matter particle\nmasses (10 GeV and 100 GeV). Moving to the right on this plot is\nequivalent to moving forward in time. Once the black dots are\nreached, dark matter heating dominates over cooling inside the star.\nThe protostar collapses somewhat further until it reaches equilibrium, at which point the\ndark star phase begins. The protostellar core is prevented from\ncooling and collapsing further. The size of the core at this point is\n$\\sim 17$ A.U. and its mass is $\\sim 1 M_\\odot$ for 100 GeV mass\nWIMPs. A new type of object is created, a dark star supported by dark matter\nannihilation rather than fusion.\n\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{naoki-bob}\n\\caption{ Temperature (in degrees K) as a function of hydrogen density\n (in cm$^{-3}$) for the first protostars, with dark matter annihilation\n included, for two different dark matter particle masses (10 GeV and 100 GeV).\n Moving to the right in the figure corresponds to moving forward in\n time. When the ``dots'' are reached, dark matter annihilation wins over H2\n cooling. After that the protostar collapses somewhat further until\n it reaches equilibrium. At that point a dark star is created.}\n\\end{figure}\n\n\\subsection{Building up the Mass}\n\nWe have found the stellar structure of the dark stars\n(hereafter DS) (Freese, Bodenheimer, Spolyar \\& Gondolo\n\\cite{FreeseBodenheimerSpolyarGondolo08}). They accrete mass from the\nsurrounding medium. We built up the DS mass as it grows\nfrom $\\sim 1 M_\\odot$ to possibly become supermassive.\nThe studies were done in two different ways, first assuming polytropic interiors and\nmore recently using the MESA stellar evolution code; the basic results are the same.\n\\cite{Rindler-Daller:2014uja}\nAs the mass increases, the DS radius adjusts\nuntil the dark matter heating matches its radiated luminosity. We find\nsolutions for dark stars in hydrostatic and thermal\nequilibrium. We build up the DS by accreting $1 M_\\odot$ at a time\nwith a variety of possible accretion rates, always\nfinding equilibrium solutions. We find that initially the DS are in\nconvective equilibrium; from $(100-400) M_\\odot$ there is a transition\nto radiative; and heavier DS are radiative. As the DS grows, it pulls\nin more dark matter, which then annihilates. Fig.~12 shows the hydrogen and dark matter density profiles. One can see ``the power of\ndarkness'': although the dark matter constitutes a tiny fraction ($<10^{-3}$)\nof the mass of the DS, it can power the star. 
The reason is that WIMP\nannihilation is a very efficient power source: 2\/3 of the initial\nenergy of the WIMPs is converted into useful energy for the star,\nwhereas only 1\\% of baryonic rest mass energy is useful to a star via\nfusion.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.5\\textwidth]{fig6}\n\\caption{Evolution of a dark star as mass is accreted onto the\n initial protostellar core of 3 M$_\\odot$. The set of upper (lower)\n curves corresponds to the baryonic (dark matter) density profile at different\n masses and times. Note that dark matter constitutes $<10^{-3}$ of the mass of\n the DS.}\n\\end{figure}\n\n\\subsection{Later stages: Capture}\n\nThe dark stars will last as long as the dark matter fuel inside them persists.\nOnce the gravitationally attracted dark matter runs out, the star collapses somewhat,\nat which point the star is dense enough to capture more dark matter.\n\nThe new source of dark matter in the first stars is capture of dark matter particles\nfrom the ambient medium. Any dark matter particle that passes through the\nDS has some probability of interacting with a nucleus in the star\nand being captured. The new particle physics ingredient required\nhere is a significant scattering cross section between the WIMPs\nand nuclei. Whereas the annihilation cross section is\nfixed by the relic density, the scattering cross section is a\nsomewhat free parameter, set only by bounds from direct detection\nexperiments.\nTwo simultaneous papers (Freese, Spolyar \\& Aguirre,\n\\cite{FreeseSpolyarAguirre08} Iocco \\cite{Iocco08}) found the\nsame basic idea: the dark matter luminosity from captured WIMPs can be larger\nthan fusion for the DS. Two uncertainties exist here: the scattering\ncross section, and the amount of dark matter in the ambient medium to capture\nfrom. DS studies following the original papers that include\ncapture have assumed (i) the maximal scattering cross sections allowed by\nexperimental bounds and (ii) ambient dark matter densities that are never depleted.\nWith these assumptions, DS evolution models with dark matter heating after the\nonset of fusion were studied in several papers. \\cite{Taoso_etal08,Yoon_etal08}\n\n\\subsection{Supermassive Dark Stars}\n\nDark stars are very unusual stars --- they are made of atomic matter (hydrogen and helium)\nbut they are powered by dark matter heating (Freese, Bodenheimer, Spolyar \\& Gondolo\n\\cite{FreeseBodenheimerSpolyarGondolo08}). They are very puffy (10 A.U. in size) and cool\n(surface temperatures $\\sim$ 10,000 K). Reionization\nduring this period is likely to be slowed down, as these stars can\nheat the surroundings but not ionize them.\nBecause they are so cool, they can keep accreting matter and growing\nas long as there is dark matter fuel.\nStandard Population III stars are hot, give off ionizing photons, and prevent further accretion\nabove $\\sim 140 M_\\odot$. Dark stars, on the other hand, can keep growing to become supermassive,\neven as massive as $10^7 M_\\odot$ and as bright as $10^{10} L_\\odot$.\nThere should be a variety of dark star masses ranging from a few solar masses all the way up to these\nvery large masses.\n\nFig.~13 shows the Hertzsprung-Russell diagram for dark stars as they grow from $\\sim 1 M_\\odot$ to become supermassive.\nThe two cases of matter being accreted gravitationally and via capture are shown separately.
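The efficiency comparison made earlier in this section, roughly $2/3$ of the WIMP rest-mass energy deposited in the gas versus of order a percent of the rest mass released by hydrogen fusion, is easy to put into numbers. The sketch below uses $0.7\\%$ for the fusion yield (the usual hydrogen-to-helium value, consistent with the ``only 1\\%'' quoted above); it is an order-of-magnitude illustration only.
\\begin{verbatim}
# Energy available per gram of fuel: DM annihilation vs. hydrogen fusion.
c = 2.998e10                      # speed of light [cm/s]
rest_mass = c**2                  # rest-mass energy per gram [erg/g]

e_dm     = (2.0 / 3.0) * rest_mass   # ~2/3 of annihilation energy heats the star
e_fusion = 0.007 * rest_mass         # H -> He releases ~0.7% of the rest mass

print("DM annihilation: %.1e erg/g" % e_dm)      # ~6e20 erg/g
print("H fusion       : %.1e erg/g" % e_fusion)  # ~6e18 erg/g
print("ratio ~ %.0f" % (e_dm / e_fusion))        # about a factor of 100
\\end{verbatim}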
\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.5\\textwidth]{HR}\n\\caption{Hertzsprung-Russell diagram for DSs for a variety of WIMP masses as labeled for the two cases: (i) with gravitationally\nattracted dark matter only (dotted lines), assuming no significant depletion of dark matter due to annihilation, which is equivalent to assuming a replenishment of dark matter due to centrophilic orbits; (ii) with capture (solid lines). Results were obtained assuming polytropic interiors for the DS. The case with capture is for product of scattering cross section times ambient WIMP density\n$\\sigma_c \\rho_\\chi = 10^{-39}$ GeV\/cm$^3$ (the maximum allowed cross section for all WIMP masses and the maximum reasonable ambient density for 100 GeV WIMPs). Once the gravitational dark matter runs out, DSs must first become dense enough in order\nfor dark matter capture to happen. This explains the horizontal lines in the evolution of the case with capture. Labeled are also stellar masses reached by the DS on its way to becoming supermassive. The final DS mass was taken to be $10^5 M_\\odot$ (the baryonic mass inside the initial halo), but could vary from halo to halo, depending on the specifics of the halo mergers (figure taken from Ref.~\\protect\\refcite{HR}).\n}\n\\end{figure}\n\n\\subsection{Dark Stars are Detectable in James Webb Space Telescope}\nSupermassive dark stars may be detectable in the JWST as J, H, or K-band dropouts. Detailed discussion may be found\nin Refs.~\\refcite{HR,ruiz,cosmin}. Comparison of light output with sensitivity of JWST filters is shown in Fig.~14\nfor a $10^6 M_\\odot$ DS. Predictions for numbers of these objects, based on cosmological simulations, is also found in Ref.~\\refcite{HR}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.5\\textwidth]{JWST}\n\\caption{Supermassive Dark Stars in JWST. Spectra for $10^6 M_\\odot$ supermassive DSs formed at redshift $z_{form}=15$ compared with sensitivity of JWST filters. The formation mechanism in this figure is gravitational attraction of dark matter only. The surface temperature $T_{\\rm eff} = 1.9 \\times 10^4$K. The fluxes are shown at z = 15 (dashed line), 10 (solid line) and 5 (dotted line) and compared to the detection limits of NirCam wide passband filters. The colored horizontal lines represent the sensitivity limits for the filters as labeled in the legend for exposure times $10^4$ sec (upper lines) and $10^6$ sec (lower lines). IGM absorption will decrease the observed fluxes for wavelengths shortward of the vertical red lines, which indicate the Lyman-$\\alpha$ line (1216 Angstroms) redshifted from the rest-frame of the star (figure taken from Ref.~\\protect\\refcite{cosmin}).}\n\\end{figure}\n\n\n\n\\subsection{Supermassive Black Holes}\nOnce these supermassive dark stars (SMDS) run out of dark matter fuel, they collapse to black holes.\nThey may provide large seeds for the supermassive black holes that have been found\nat high redshift ($10^9-10^{10} M_\\odot$ BH at $z=6$) and are, as yet,\nunexplained (Li et al., \\cite{Li_etal07} Pelupessy et\nal., \\cite{Pelupessy_etal07} Wu et al.\\ \\cite{WuNature}).\n\n\\subsection{Pulsations} \n\nAn interesting new research direction is the fact that DS pulsate, like all stars. 
As a first step, we used the MESA stellar\nevolution code to calculate the adiabatic pulsation periods of radial $p$-modes (where the\nrestoring force is pressure and those for which there is no angular dependence, so $l = 0$).\nWe found that our DS models pulsate on timescales which range from less than a day to more than two years in their restframes at about $z = 15$, depending on the WIMP mass and overtone number. The pulsation periods are significantly shorter for higher WIMP mass. Converting to the observer frame, the shortest periods we found are less than about 50 days for modes with overtone number\n$n > 6$ and a WIMP mass of 1 TeV (Ref.~\\refcite{Rindler-Daller:2014uja}).\nWe are currently investigating other pulsation modes: nonadiabatic modes and also dark matter density driven modes.\n\n\nIn short, the first stars to form in the universe may be dark stars\npowered by dark matter heating rather than by fusion. Our work indicates that\nthey may become very large (up to $10^7 M_\\odot$) and bright (up to $10^{10} L_\\odot$),\nthereby detectable in upcoming JWST observations.\nThey may provide seeds for the many supermassive black holes found in the universe. \nThe observational possibilities of discovering dark matter by finding these stars with JWST data\nis intriguing. Further, once DS\nare found, one can use them as a tool to study the properties of WIMPs.\n\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{sterileneutrinos}\n\\caption{ Sterile Neutrinos:\nObservations consistent with and bounding the sterile neutrino mass and mixing angle to ordinary neutrinos\n(figure courtesy of K. Abazajian ``Cosmology of Sterile Neutrinos,\" in preparation (2016)).}\n\\end{figure}\n\n\n\\section{Sterile Neutrinos}\n\nAnother intriguing dark matter candidate is a sterile neutrino.\nWhereas the three known neutrino species are far too light\nto constitute dark matter, it is possible that one or more additional neutrino types, that do not\ninteract via the fundamental interactions of the standard model of particle physics\ncould make up the dark matter.\nThese sterile neutrinos could, however, mix with ordinary neutrinos.\n\nIn the past few years, several X-ray astronomy groups \\cite{Bulbul:2014sua,Boyarsky:2014jta}\nhave found evidence for a 3.5 keV line in clusters\nof galaxies and in M31. This line would be consistent with a dark matter origin, corresponding to a 7 keV\nrest mass sterile neutrino with vacuum mixing with active neutrinos\n${\\rm sin}^2 2 \\theta \\sim (2-20) \\times 10^{-11}$. Fig.~15 illustrates some of the observations.\nHowever, others argue against this interpretation, e.g. Ref.~\\refcite{Jeltema:2015mee} claims that the\nline is not seen from the dwarf galaxy DRACO and thus the 7 keV sterile neutrino is ruled out.\nThis is a subject of\ndeep controversy.\n\nTheoretical studies of sterile neutrinos are also ongoing.\nThe sterile neutrino is a singlet under the standard model; it is likely a right handed neutrino.\nThe production of these particles is difficult. If thermal, they tend to overclose the universe.\nOther mechanisms \\cite{Barbieri:1990vx,Dodelson:1993je,Dolgov:2002wy}\n or resonance using a large lepton\nasymmetry \\cite{Canetti:2012vf}\nare difficult but being investigated.\nIn many models the sterile neutrino constitutes warm dark matter, which leads to testable predictions such as the core\/cusp\nof galaxies and the numbers of substructures (objects smaller than our Galaxy), see e.g. 
Ref.~\\refcite{Bozek:2015bdo}.\n\n\\section{What's Hot in Dark Matter}\n\nAs I've discussed, unexplained signals in a variety of data sets point to four hints of possible dark matter detection.\nFirst, the DAMA \\cite{Bernabei:2014tea}\nannual modulation \\cite{dfs} signal could be compatible with a $\\sim$10 GeV WIMP.\nHowever, since other experiments do not see any signal at all, the DAMA results must be checked. Currently three\ndifferent experiments are planning to repeat the DAMA setup with NaI crystals: SABRE, COSINE, and ANAIS.\n\nSecond, the Fermi-LAT $\\gamma$-ray excess from the direction of the Galactic Center could be due to WIMP annihilation.\nHowever, point sources (such as pulsars) constitute another explanation of the excess that is at least as good or better.\n\nThird, the possible 3.5 keV X-ray line from clusters and from M31 could be explained by a 7 keV sterile neutrino, but this \ninterpretation\nis very controversial.\n\nA fourth intriguing signal, not yet mentioned in this article, is the 511 keV $\\gamma$-ray line in INTEGRAL data. \n\\cite{Knodlseder:2003sv,Jean:2003ci}\nThis is seen in the Galactic Bulge out to 6 degrees (3 kpc). There is no clear astrophysical explanation.\nLow mass X-ray binaries were thought to be a compelling explanation, but this possibility is now being ruled out.\nThe explanation for the line could be dark matter annihilation to e$^+$e$^-$ pairs. This would be MeV dark matter.\n\\cite{Boehm:2003bt}\n\nThe future holds interesting studies of these signals as well as the continuing hunt for dark matter.\n\n\\section{Conclusion}\nMost of the mass in the universe is in the form\nof an unknown type of dark matter. The need for dark matter has become more\nand more clear since the 1930s, with evidence from rotation curves,\ngravitational lensing, hot gas in clusters, the Bullet Cluster, structure formation, and\nthe cosmic microwave background. A consensus picture has emerged, in which the dark matter\ncontributes 26\\% of the overall energy density of the universe. Its\nnature is still unknown. At most 15\\% of the dark matter in galaxies can be\nwhite dwarfs (or other MACHO candidates), but most is likely to be an\nexotic particle candidate. Dark matter searches for the best motivated\ncandidates, axions and WIMPs, are ongoing and promising over the next\ndecade.\n\nThe interesting unexplained signals that may herald the discovery of dark matter have\nbeen reviewed: DAMA's annual modulation signal and the\nFermi-LAT gamma-rays from the Galactic Center might be due to WIMPs, a 3.5 keV X-ray line from various\nastrophysical sources is possibly from sterile neutrinos,\nand the 511 keV line in INTEGRAL might be due to MeV dark matter.\nAll of these would require further confirmation in\nother experiments or data sets to be proven correct.\nIn addition, a new line of research on dark stars was reviewed\nwhich suggests that the first stars to exist in the universe\nwere powered by dark matter heating rather than by fusion:\nthe observational possibilities of discovering dark matter by finding these stars with JWST data\n were discussed. The goal of the searches over the next decade is to decipher the nature of the unknown dark matter.\n\n\n\\section{Acknowledgments}\nKF would like to thank Luca Visinelli for commenting on the draft.\nKF acknowledges support through a grant from the Swedish Research Council (Contract No. 638-2013-8993).
KF acknowledges support from DoE grant DE-SC007859 at the University of Michigan.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \n\nExchange anisotropy, or exchange bias, is an interfacial phenomenon between \nferromagnetic and antiferromagnetic domains which results in the shifting and \nbroadening of magnetic hysteresis loops. Exchange bias is believed to result \nfrom the interaction of ferromagnetic (FM) spins with uncompensated\nantiferromagnetic (AFM) spins at the FM\/AFM \ninterface.\\cite{blamire_uncomp,kuch_uncomp,nogues_schuller} \nSince its discovery in partially oxidized Co\/CoO nanoparticles by Meiklejohn \nand Bean,\\cite{mbean} exchange bias has been observed and engineered in\ncore-shell nanoparticles,\\cite{coreshell} thin films,\\cite{thinfilms} and \ngranular composites.\\cite{nanostructure_review} These\narchitectures are utilized because a high proportion of FM spins must be\ninterfacial in order for the AFM switching behavior to appreciably\naffect the FM coercivity. While they achieve a high interface\/volume\nratio, core-shell nanoparticles and thin film architectures\ndo not result in large quantities of exchange-biased material.\nAs an alternative, novel methods of processing exchange biased systems have\nbeen explored, including coevaporation,\\cite{coevap} mechanical\nmilling,\\cite{sort_milling} and spontaneous phase \nseparation.\\cite{separation_alloy} \n\nInitial reports from Sort \\textit{et al.}\\cite{sort} have demonstrated \nhydrogen reduction of Fe$_{0.2}$Cr$_{1.8}$O$_3$ to produce metal\/oxide \ncomposites. Different transition metals reduce sequentially, resulting in\nnanosized Fe particles within micron sized Cr$_2$O$_3$ grains. \nInteraction between the $\\sim$10\\,nm Fe precipitates and the bulk\nCr$_2$O$_3$ provides exchange bias shifts of 10\\,Oe. Reduction\nkinetics of the system CoCr$_2$O$_4$--Co$_3$O$_4$ have been reported by\nBracconi and Dufour\\cite{bracconi}, and Kumar and Mandal\\cite{kumar} have\nproduced Co\/Cr$_2$O$_3$ composites directly from nitrate precursors. \nRecently, Toberer \\textit{et al.}\\cite{toberer_advmat05} have demonstrated \nthat remarkable microstructures with aligned porosity can be\nobserved when the reduction product shares a common oxygen sublattice\nwith the precursor. \n\nHere we report on hydrogen reduction of the system Ni$_x$Mn$_{3-x}$O$_4$ \nto form Ni\/MnO composites with striking microstructures associated\nwith substantial exchange biasing.\nThe Ni particles exhibit bulk saturation magnetization values, and\nexchange bias is observed below the N\\'eel temperature of MnO at $T_N$ = \n119\\,K. Surface and interior particle size analysis reveals\nthat this system produces Ni nanoparticles on the order of 15\\,nm to 30\\,nm. \nSize-dependent exchange bias phenomena are manifested in trends between\nthe Ni content of the precursor spinel and the exchange and coercive\nfields of the reduced composite.\n\n\\section{Experimental}\n\nSingle-phase ceramic monoliths were prepared\nby solid-state reactions of oxalates, similar to that of\nWickham.\\cite{wickham} Oxalates are versatile precursors for \nmixed metal oxides, and have found extensive use in recent years\nto produce substituted binary\\cite{risbud_co-zno,lawes_co-mn-zno} and \nternary\\cite{toberer_chemmat05,toberer_advmat05,toberer_chemmat06} compounds. 
\n\nStoichiometric amounts of\nnickel acetate and manganese acetate [Ni(CH$_2$COOH)$_2$$\\cdot$4H$_2$O and\nMn(CH$_2$COOH)$_2$$\\cdot$4H$_2$O, Aldrich 99\\,\\%] were added to a solution\ncontaining one equivalent of glacial acetic acid. Excess oxalic acid\nmonohydrate [H$_2$(C$_2$O$_4$)$\\cdot$H$_2$O, Fisher 99.9\\,\\%] was mixed in a\nseparate solution and both were stirred at $90^{\\circ}$C. Addition of\nthe oxalic acid to the dissolved acetates results in coprecipitation of\nvery fine, single-phase nickel-manganese oxalates in which the metals are \nmixed on the atomic scale. The oxalate powders, \nNi$_x$Mn$_{3-x}$(C$_2$O$_4$)$_3$$\\cdot$2H$_2$O,\nwere washed with deionized water and dried at $90^{\\circ}$C,\ncalcined in alumina boats in\nair at temperatures ranging from 780$^\\circ$ to $1200^\\circ$C for 10\\,h,\nthen quenched into water to prevent conversion to $\\alpha$-Mn$_2$O$_3$ or\nto NiMnO$_3$. The resulting single-phase Ni$_x$Mn$_{3-x}$O$_4$ spinel\npowder was pressed into pellets at 100\\,MPa and sintered at\n$1325^{\\circ}$C for 24\\,h, then annealed at the previous calcination\ntemperature and water quenched.\n\nReductions were performed in \nalumina boats in a tube furnace under \n5\\,\\% H$_2$\/N$_2$ with a flow rate of approximately 30\\,sccm. Once the gas \nmixture had equilibrated, the specimens, as pellets, were heated at \n$2^{\\circ}$C\/min to 650$^\\circ$C, 700$^\\circ$C, or 725$^\\circ$C, held for \n2\\,h, then cooled at $10^\\circ$C\/min to room temperature. Reduced samples \nwere verified to be Ni\/MnO by x-ray diffraction (XRD, Philips X'Pert with \nCu$K_{\\alpha}$ radiation) and Rietveld refinement using the \\textsc{xnd}\ncode.\\cite{xnd} Composites were characterized by\nthermogravimetic analysis (TGA, Cahn TG-2141), scanning electron\nmicroscopy (SEM, FEI Sirion XL40), focused ion beam milling and\nmicroscopy (FIB, FEI DB235), and SQUID magnetometry (Quantum Design MPMS\n5XL). \n\n\\section{Results and Discussion}\n\n\\begin{figure}\n\\centering\\epsfig{file=fig01.eps,width=7cm}\\\\\n\\caption{X-ray diffraction Rietveld refinements of \n(a) Ni$_{0.3}$Mn$_{2.7}$O$_4$ single-phase tetragonal spinel (hausmannite)\nprecursor, and (b) the \\textit{fcc}-Ni\/rock-salt MnO composite produced by \nreduction of the above spinel in 5\\,\\%H$_2$\/N$_2$.}\n\\label{rietveld10}\n\\end{figure}\n\n\nThe calcining of the single-phase Ni\/Mn oxalates, according to the phase \ndiagram presented by Wickham,\\cite{wickham} results in single phase \nspinel-related compounds that are not all cubic. Wickham\\cite{wickham}\nhas reported that in their high-temperature state, bulk samples \nof Ni$_x$Mn$_{3-x}$O$_4$ with $x$ between 0.15 and 1.00 are cubic spinels \nbefore decomposing into NiMnO$_3$ and $\\alpha$-Mn$_2$O$_3$ in the temperature\nrange of 705$^\\circ$ to 1000$^\\circ$C. Upon\nwater quenching, samples prepared with $x < 1$ and fired at\n$\\geq1000^{\\circ}$C are observed to distort from the high-temperature \ncubic spinel reported by Wickham into single-phase hausmannite-type tetragonal\nspinels in space group $I4_1\/amd$. Slow-cooling, air-quenching, or quenching \nin flowing nitrogen are insufficient to prevent decomposition of the solid\nsolution. \n\nRietveld refinement of the room-temperature XRD pattern for the\nwater-quenched compound Ni$_{0.30}$Mn$_{2.7}$O$_4$ is shown in \nFig.\\,\\ref{rietveld10}(a). 
Only peaks for the hausmannite-type solid solution\nare evident; this is a requirement for the final reduced composite to be \nhomogeneous in terms of the distribution of Ni precipitates.\nThe refinement assumes a ``normal\" spinel, where Ni$^{2+}$ and Mn$^{2+}$ \noccupy the 4b tetrahedral sites. Mn$^{3+}$ in the 8c octahedral \nsites causes a cooperative Jahn-Teller distortion which leads to a loss of \ncubic symmetry.\\cite{goodenough_jt} An accurate determination of the cation\ndistribution may be obtained by neutron diffraction and has been investigated \nby Larson \\textit{et al.}\\cite{cation_dist} When sintered at 1325$^{\\circ}$C, \nsamples with $x$ near 1 partially decompose into mixtures of NiO and\nNi$_{1-\\delta}$Mn$_{2+\\delta}$O$_4$ as described by Wickham,\\cite{wickham} \nbut subsequent annealing at 800$^{\\circ}$C for 72\\,h\nensures the formation of a single-phase tetragonal spinel. Dense pellets\nand micron-sized powder are both suitable precursors for hydrogen\nreduction because the dimensions of the precipitates and pores are\norders of magnitude smaller than the grain size in either case. \nAdequately high oxygen mobility at the reduction temperature allows the \nreaction to permeate the sample regardless of any lack of preexisting porosity.\n\n\\begin{figure}\n\\centering\\epsfig{file=fig02.eps,width=7cm}\\\\\n\\caption{TGA of a Ni$_{0.3}$Mn$_{2.7}$O$_4$ sample shows that reduction \nproceeds by an initial reaction to rocksalt Ni$_{0.1}$Mn$_{0.9}$O solid \nsolution, followed by a reduction of Ni$^{2+}$ into metallic Ni.} \n\\label{tga}\n\\end{figure}\n\nIn all cases, TGA analysis confirms the total amount\nof nickel precipitated (and thus the stoichiometry of the precursor\nspinel) during hydrogen reduction. A TGA weight loss curve for\nNi$_{0.3}$Mn$_{2.7}$O$_4$ is shown in Fig.\\,\\ref{tga}. The\nweight loss curve reveals that the single-phase spinel first reduces to\na rocksalt (Ni$_{0.1}$,Mn$_{0.9}$)O solid solution, followed by\nprecipitation of metallic Ni. This progression is verified by\nthe fact that incompletely reduced samples display an MnO lattice\nparameter that is smaller than the theoretical value, due to Ni substitution.\nX-ray diffraction Rietveld refinement of the final composite\nproduct obtained after reduction in 5\\% H$_2$\/N$_2$ indicate only\nrocksalt MnO and face-centered cubic Ni [Fig.\\,\\ref{rietveld10}(b)]. \n\n\n\\begin{figure}\n\\centering\\epsfig{file=fig03.eps,width=7cm}\\\\\n\\caption{(a) Lattice parameter of MnO obtained by Rietveld refinement \nof samples after hydrogen reduction at varying temperatures. The diagonal \ndotted line is the calculated lattice parameter of a (Ni,Mn)O solid solution, \nwhile the top line represents the desired conversion to pure MnO.\n(b) Magnetic saturation of reduced Ni\/MnO samples increases linearly \nwith the completeness of Ni reduction as determined by $a_{\\rm{MnO}}$\nfrom XRD. All magnetic data concerning coercivity or \nexchange bias was measured from samples with complete Ni reduction and \nthus $M_S \\approx 0.6 \\mu_B$\/Ni.}\n\\label{reducheck}\n\\end{figure}\n\nHigh-spin Mn$^{2+}$ in octahedral coordination has an ionic radius of \n0.83\\,\\AA\\\/ in contrast to octahedral Ni$^{2+}$ which has a radius of only \n0.69\\,\\AA.\\cite{shannon-prewitt} Consequently, when Ni$^{2+}$ enters product\nMnO lattice, there is significant shrinkage of the cell parameter, which can \nbe used to estimate the degree of conversion of the starting phases into pure \nNi\/MnO. 
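As a rough illustration, if one assumes a Vegard-type linear variation of the rock-salt cell parameter with composition, the residual Ni content $y$ of a partially reduced (Ni$_{y}$,Mn$_{1-y}$)O solid solution can be estimated from the refined cell parameter $a_{\\rm{obs}}$ as\n\n\\[ y \\approx \\frac{a_{\\rm{MnO}} - a_{\\rm{obs}}}{a_{\\rm{MnO}} - a_{\\rm{NiO}}} \\]\n\n\\noindent where $a_{\\rm{MnO}}$ = 4.444\\,\\AA\\ is the pure-MnO value quoted below and $a_{\\rm{NiO}}$ $\\approx$ 4.18\\,\\AA\\ is taken as a nominal literature value for rock-salt NiO; this linear interpolation is only a sketch of the estimate, not a refined composition analysis. 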
The MnO lattice parameter obtained from Rietveld refinement is plotted\nin Fig.\\,\\ref{reducheck}(a) as a function of the Ni content in the single-phase\nhausmannite\/spinel precursor. The cell parameter of pure MnO, 4.444\\,\\AA\\,\\ is \nalso indicated as a horizontal dashed line. It is seen that for \nsmall substitution of Ni ($x$ in the starting phases) the reduction \ntemperature must be increased from 650$^\\circ$C to 725$^{\\circ}$C to ensure \ncomplete reduction and avoid the rock-salt (Ni,Mn)O solid solution. \nDepression of the required reduction temperature of Ni$_x$Mn$_{3-x}$O$_4$ as \n$x$ deviates from Mn$_3$O$_4$ is a consequence of the higher ionization \nenergy of Ni$^{2+}$. In other words, more energy is released by reduction \nof Ni$^{2+}$ ions than of Mn$^{2+}$, so the reduction to metal occurs more \nreadily when $x$ is larger. The greater ease of reduction of Ni over Mn\nis suggested by the appropriate Ellingham diagram.\\cite{ellingham} \nThe saturation magnetization $M_S$ of the magnetic\nNi nanoparticle precipitates can be used in tandem with the values of \n$a_{\\rm{MnO}}$ \nobtained from Rietveld refinement to determine the completeness of Ni \nreduction. This is shown in Fig.\\,\\ref{reducheck}(b), where \nagreement is seen between the convergence of $a_{\\rm{MnO}}$ and $M_S$ to their \nrespective theoretical values of 4.444~\\AA\\, and 0.6\\,$\\mu_B$\/Ni for a\ncompletely reduced $x$Ni\/MnO composite, regardless of $x$. \n\n\n\\begin{figure}\n\\centering\\epsfig{file=fig04.eps,width=7cm}\\\\\n\\caption{Representative scanning electron microscope images: \n(a) The as-sintered surface of a dense pellet of Ni$_x$Mn$_{3-x}$O$_4$ with \n$x$ = 0.3. (b) The fracture surface of the material obtained from reducing\nthe sample in (a) at 725$^\\circ$C for 2~h. (c) and (d) are the sample\nin (b) shown at higher magnification. The highly porous and crystalline matrix\nof MnO is seen in (c), and at higher magnification, small Ni particles with\nsizes in the 30\\,nm to 40\\,nm range are seen as bright objects on a darker\nbackground.}\n\\label{semprog}\n\\end{figure}\n\nHydrogen reduction of single-phase oxide monoliths can lead to striking \nhierarchically porous microstructures, which have been characterized by \nToberer \\textit{et al.}\\cite{toberer_advmat05,toberer_chemmat06,toberer_chemmat07}\nAt first glance, low-magnification SEM micrographs of Ni$_x$Mn$_{3-x}$O$_4$\nprecursor spinels and Ni\/MnO reduced samples [Fig.\\,\\ref{semprog}(a)\nand Fig.~\\ref{semprog}(b), respectively] appear nearly identical. However,\nhigher magnification [Fig.\\,\\ref{semprog}(c) and (d)]\nreveals that reduced composites contain aligned pores in rock-salt MnO\ncovered with Ni metal nanoprecipitates. It has been previously \nsuggested \\cite{toberer_advmat05,toberer_chemmat07} that the shared\noxide sublattice of spinel and rocksalt allows the transformation from\none to the other to take place without reconstruction. \nPorosity is introduced during the spinel to rocksalt transformation \nwhile leaving the oxygen framework largely intact. The associated volume loss\ngives rise to a pore structure that can be regarded as negative crystals --\nvoids in crystals that possess the same facets as the crystals themselves do. \n\nAlthough the pores are as small as 20\\,nm, the pore and surface edges are \naligned at right angles over the entire breadth of the 20\\,$\\mu$m grains. 
This\nlong-range alignment implies that the MnO grains are in fact single\ncrystals with the same orientation and extent as the pre-reduction\nspinel grains.\\cite{toberer_advmat05,toberer_chemmat06} \nIncreasing the reduction temperatures should lead to densification \nand closing of the pores in the MnO monolith.\nHowever, in the interest of maintaining\nsmall Ni nanoparticles (and thus a high interface\/volume ratio), and\nbecause the majority of nanoparticles are completely encased in MnO even\nin porous samples, reduction was performed at the lowest temperature\nthat allowed complete Ni precipitation.\n\n\\begin{figure}\n\\centering\\epsfig{file=fig05.eps,width=7cm}\\\\\n\\caption{The mean diameters of the Ni particles on the surface of the MnO matrix\nas a function of the initial Ni content $x$ in Ni$_x$Mn$_{3-x}$O$_4$.\nError bars indicate one standard deviation in the particle diameter. Typically\nat least 30 distinct particle's were counted in preparing the distributions. \nIt is seen that most in most samples, the sizes are somewhat independent of $x$\nand are clustered around 30\\,nm.}\n\\label{size_nifrac}\n\\end{figure}\n\nIf we assume that for the different values of $x$, the number of nuclei are \nthe same, and that increasing $x$ only affects the growth (\\textit{ie.} the \ndiameter) of the particles, then we would expect only a weak dependence \n(changing as $x{^\\frac13}$) of the particle diameter rate on $x$. \nIf we assume that increasing $x$ also increases the number of Ni nuclei \nupon reduction, then average particle diameter would show an even weaker \ndependence on $x$.\nWe have analyzed the Ni particles in the SEM images of the surfaces of the\nmonoliths by using the program \\textsc{imageJ}\\cite{imagej} to prepare \nhistograms of particle size distributions. These are plotted in \nFig.\\,\\ref{size_nifrac} for the different\nmonoliths. It is seen that mean particle diameters \nrange from $\\sim$15\\,nm to 35\\,nm, but there is no clear trend in size,\nat least until a nickel content of $x = 0.60$ is reached. \n\n\\begin{figure}\n\\centering\\epsfig{file=fig06.eps,width=7cm}\\\\\n\\caption{(a) Fracture surface of a reduced sample (700$^\\circ$C, 2\\,h)\nof Ni$_x$Mn$_{3-x}$O$_4$ with $x$ = 0.45, and (b) the surface of a \nFIB-cut sample of the product formed on reduction (700$^\\circ$C, 2\\,h)\nof Ni$_x$Mn$_{3-x}$O$_4$ with $x$ = 0.60.\nThe two images are displayed at the same magnification. It is seen in (a) \nthat different faces of the underlying MnO\nseem to nucleate different particle sizes of \\textit{fcc}-Ni. In (b),\nit is seen that the Ni particles are found within the MnO matrix as well, and\nnot simply on the surface.}\n\\label{fib}\n\\end{figure}\n\nIndeed, in the different monoliths, a clearer correlation is found \nfor Ni particle size with the specific crystallographic face of MnO\nupon which it grows, rather than the starting $x$ value.\nIt is evident in Fig.\\,\\ref{fib}(a) that for a $x = 0.45$\nspecimen, regions can be found which exhibit a wide variety of surface\nparticle sizes and spacings depending on the nucleation environment. \nThe coherent pore structure introduced by reduction produces square or\ntriangular facets seen in Fig.\\,\\ref{fib}(a) which correspond to\nexposed \\{100\\} or \\{111\\} faces.\n\nCross-sections of reduced grains produced by FIB milling, shown in \nFig.\\,\\ref{fib}, reveal that the bulk\nMnO contains Ni nanoparticles of similar dimensions as those on the surface. 
\nPorosity is still prevalent in the bulk of the monoliths as it is in the\nimages of the monolith surface. This is necessary to accommodate the volume loss\nof the structure while retaining the size and alignment of the MnO\ngrains. By a comparison of lattice parameters, and assuming no\nsintering during reduction, the fraction of intragranular porosity\nproduced by the conversion of Ni$_x$Mn$_{3-x}$O$_4$ to $x$Ni\/MnO\nincreases linearly from 16\\% when $x=0$ to 39\\% when $x=0.6$, which is\nin rough agreement with observations of the intragranular pore volume in\nFIB-milled samples. Most Ni nanoparticles observed in\ncross-section [Fig.\\ \\ref{fib}(b) are completely encased within the MnO \nmatrix. Based upon the observed surface density of Ni particles and assuming\n100\\,nm diameter pores, it can be determined\nthat the observed surface Ni particles only constitute about 20\\% of the volume of\nNi that must be precipitated. Therefore, we estimate that\napproximately 80\\% of the Ni grains are encased within the MnO matrix.\n\n\\begin{figure}\n\\centering\\epsfig{file=fig07.eps,width=7cm}\\\\\n\\caption{Magnetization $M$ as a function of magnetic field $H$ \nfor an Ni\/MnO composite with $x = 0.3$ above $T_{N}$ (dashed) and field-cooled \nunder a 50\\,KOe field to 5\\,K (solid). \nExchange bias leads to a broadening (associated with the coercivity $H_C$)\nand shift (associated with the exchange field $H_E$) of the \nfield-cooled loop at 5\\,K.}\n\\label{hys}\n\\end{figure}\n\n\\begin{figure}\n\\centering\\epsfig{file=fig08.eps,height=7cm}\\\\\n\\caption{(a) Coercive field $H_C$ and (b) Exchange-bias field $H_E$ as a \nfunction of the initial Ni content $x$ in the reduced Ni\/MnO composites. \nThe coercivity $H_C$ of the Ni nanoparticles decreases with Ni content for \neach firing temperature in response to the increased fraction of interfacial \nspins as Ni content is decreased. Exchange bias effects become less pronounced \nas well, with increasing Ni content in the oxide precursor.} \n\\label{mag_nifrac}\n\\end{figure}\n\nMagnetic hysteresis loops for an $x = 0.3$ sample (Fig.\\ \\ref{hys}) display\nat 5\\,K, a loop shift $H_E$ characteristic of exchange\nbiased systems. Above the N\\'{e}el temperature of MnO, $T_N$ = 119\\,K,\nthe hysteresis loop is centered about $H = 0$. After field cooling at\n$H$ = 50\\,kOe, the coercive field is broadened\nand shifted $H_E$ = 100\\,Oe in opposition of the cooling field direction.\nThe exchange behavior can be influenced by many\nfactors, including Ni particle size, the amount and orientation of the\nFM-AFM interface, temperature, and the cooling field.\\cite{nogues_schuller}\nWe anticipate that in the size regime studied here (near 20\\,nm) the Ni\nnanoparticles are single-domain magnets and that the coercivity below\nthe blocking temperature should not show a strong size-dependence.\\cite{cullity}\nFig.\\ \\ref{mag_nifrac}(a) shows that as the\nnickel content $x$ increases, $H_C$ decreases for samples reduced at\neither 700$^{\\circ}$C or 725$^{\\circ}$C. \nAt both reduction temperatures, the highest $H_C$ is found for the smallest \n$x$, and the smallest $H_C$ is found for the largest $x$. \n\nAdditionally, the decrease in $H_C$ for $x$ = 0.3 samples reduced at \n725$^{\\circ}$C as opposed to 700$^{\\circ}$C implies increased coalescence \nof Ni particles as the temperature increases. 
We therefore anticipate that \nthe increased coercivity as size is decreased arises from the same\ninterfacial coupling that results in the increased exchange bias.\n\nIn exchange biased nanostructures of spherical FM particles in an AFM\nmatrix, the strength of the exchange field $H_E$ has been suggested to\nvary as\n\n\\[ H_E = \\frac{6 E_A}{M_S d_{\\rm{FM}}} \\] \n\n\\noindent where $E_A$ is the interfacial coupling energy per unit area, \n$M_S$ is the saturation magnetization of the FM, and $d_{\\rm{FM}}$ is the \ndiameter of the FM particle.\\cite{nanostructure_review} Assuming this model\nto be correct, we anticipate that the exchange field $H_E$ should decrease \nwith in increasing ferromagnetic particle size. If, with increasing $x$ in our \nsystems, Ni particle particle size indeed increases, then our results are \nbroadly consistent with this model.\n\n\\begin{figure}\n\\centering\\epsfig{file=fig09.eps,width=7cm}\\\\\n\\caption{Coercive field as a function of exchange field at 5\\,K for the\ndifferent Ni\/MnO composites. For most of the composite systems, $H_C$ varies \nnearly linearly with $H_E$.}\n\\label{hc_xb}\n\\end{figure}\n\nIn Fig.\\,\\ref{hc_xb} we plot the 5\\,K coercivity as a function of the exchange\nfield for the different systems measured, data for which are displayed in \nFig.\\,\\ref{mag_nifrac}. We see that the coercivity varies nearly linearly with\nthe exchange field, with the exception of one outlier. G\\\"okemeijer \n\\textit{et al.}\\cite{PhysRevB.63.174422} have recently measured biasing of \nferromagnets on different CoO surfaces and have concluded that on the \nuncompensated CoO surfaces, exchange biasing, and the associated shift of \nhysteresis is found, but on compensated CoO surfaces, the effect of the \ninterface is simply to increase coercivity. The magnetic structure of MnO\nis not simple\\cite{goodwin:047209} and the architectures described here of\nnearly spherical ferromagnetic particles embedded in an antiferromagnetic\nhost cannot be described in terms of simple interfaces. Given this, we suggest \nthat perhaps both effects, of the uncompensated as well as the compensated\nsurfaces are playing a role, and the linear relation between coercivity and \nexchange is simply an indication of increasing interfacial area between the\ntwo magnetic components. \n\n\\section{Conclusions} \n\nWe have demonstrated that hydrogen reduction of Ni$_x$Mn$_{3-x}$O$_4$\nspinels produces Ni\/MnO composites with significant interfacial area\nbetween antiferromagnetic MnO and ferromagnetic Ni, and associated exchange \nbias. With increasing nickel content $x$, these effects decrease, presumably\nbecause of a decrease in the relative proportion of interfacial spins\nin the ferromagnet. Exchange bias effects at the FM--AFM interface lead to \nan increase in $H_C$ with decreasing Ni content, along with a $1\/x$ dependence \nof $H_E$. A nearly linear relationship is found between $H_C$ and $H_E$ in \nthese systems.\n\n\\section{Acknowledgments}\n\nThis work was supported by the donors of the American Chemical Society \nPetroleum Research Fund, and the National Science Foundation through\na Career Award (DMR 0449354) to RS, and for the use of MRSEC facilities\n(DMR 0520415). MG was supported by a RISE undergraduate fellowship.\n\n\\bibliographystyle{apsrev} ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{introduction}\nSuppose that $A$ is a Banach algebra. 
We denote by $\\textbf{A-mod}$\nand $\\textbf{mod-A}$ the categories of Banach left $A$-modules and\nBanach right $A$-modules, respectively. In the case where $A$ is\nunital, we also denote by $\\textbf{A-unmod}$ the categories of\nunital Banach left $A$-modules. For each $E,F\\in\\textbf{A-mod},$ let\n$_{A}B(E,F)$ be the closed subspace of $B(E,F)$ consisting of the\nleft $A$-module morphisms. An operator $T\\in B(E,F)$ is called {\\it\nadmissible} if $\\text{ker} T$ and $\\text{Im} T$ are closed\ncomplemented subspaces of $E$ and $F$, respectively. It is easy to\nverify that $T$ is admissible if and only if there exists $S\\in\nB(F,E)$ such that $T\\circ S\\circ T=T.$\n\nA Banach left $A$-module $E$ is called {\\it injective} if for each\n$F, K\\in\\textbf{A-mod}$ and admissible monomorphism\n$T\\in_{A}\\hspace{-0.1cm}B(F,K),$ the induced map\n$_{A}B(K,E)\\longrightarrow\\hspace{-0.1cm}_{A}B(F,E)$ is onto. We\nalso say $E\\in\\textbf{mod-A}$ is {\\it flat} if the dual module of\n$E^{*}\\in\\textbf{A-mod}$ is injective with the following left module\naction:\n$$(a\\cdot f)(x)=f(x\\cdot a)\\qquad (a\\in A, x\\in E).$$\nThe notions of injectivity and flatness of Banach algebras were\nintroduced by A. Ya. Helemskii. These notions have been studied for\nvarious classes of Banach modules; see \\cite{Dales. Polyakov},\n\\cite{Helem}, \\cite{Ramsden Paper} and \\cite{white} for more\ndetails. Recently, Ramsden in \\cite{Ramsden Paper} studied\ninjectivity and flatness of Banach modules over semigroup algebras.\nIt is well known that if $A$ is amenable, then every Banach\n$A$-modules is flat but the converse is a long standing open\nproblem. We recall that the answer is positive for some classes of\nBanach algebras associated with locally compact groups such as, the\nclass of group algebras and measure algebras; see \\cite{Dales.\nPolyakov} and \\cite{Ramsden}.\n\nKaniuth, Lau and Pym introduced and studied in \\cite{klp} and\n\\cite{klp2} the notion of $\\phi$-amenability for Banach algebras,\nwhere $\\phi:A\\longrightarrow\\mathbb{C}$ is a character, i.e., a\nnon-zero homomorphism on $A$. Afterwards, Monfared introduced and\nstudied in \\cite{m} the notion of character amenability for Banach\nalgebras. Let $\\Delta(A)$ be the set of all characters of the Banach\nalgebra $A$, and let $\\phi\\in\\Delta(A)$. The Banach algebra $A$ is\ncalled {\\it left $\\phi$-amenable} if for all Banach $A$-bimodules\n$E$ for which the left module action is given by\n$$a\\cdot x=\\phi(a)x \\qquad (a\\in A, x\\in E),$$ every\nderivation $D:A\\longrightarrow E^{*}$ is inner. It is clear that\namenability of $A$ implies $\\phi$-amenability for all\n$\\phi\\in\\Delta(A).$\n\nRecently, Nasr-Isfahani and Soltani Renani in \\cite{Nasr} introduced\nand studied the notion of $\\phi$-injectivity and $\\phi$-flatness for\nBanach modules (see Definition $2.1$). As an important result, it is\nshown in \\cite[Proposition 3.1]{Nasr} that the Banach algebra $A$ is\nleft $\\phi$-amenable if and only if every Banach left $A$-modules\n$E$ is $\\phi$-flat. Indeed, this result gives a positive answer to\nthe above open problem arises by A. Ya. Helemskii in this homology\nsetting based on character $\\phi$. Furthermore, they obtained some\nnecessary and sufficient conditions for $\\phi$-injectivity and\ncharacterized $\\phi$-injectivity of Banach modules in terms of a\ncoretraction problem; see \\cite[Theorem 2.4]{Nasr}.\n\nThis paper is organized as follows. 
In Section 2, after recalling\nsome definitions, we investigate some properties of\n$\\phi$-injectivity for Banach modules. Indeed, we obtain a\nsufficient condition for $\\phi$-injectivity of Banach left\n$A$-modules, in the case where $A$ is a commutative Banach algebra.\nMoreover, we give some hereditary properties of $\\phi$-injectivity\nfor Banach $A$-modules related to the closed ideals of Banach\nalgebra $A$. As the main result, we show that if $J$ is a left\ninvariant complemented ideal in $A$, then $\\phi$-injectivity of $J$\nand $A\/J$ in $\\textbf{A-mod}$ is equivalent to the\n$\\phi$-injectivity of $A$ in $\\textbf{A-mod}$ (Theorem \\ref{Cor:\nDirectsum}). In Section 3, by using the results of Section 2, we\nstudy $\\phi$-injectivity of certain $\\ell^{1}$-semilattice algebras\nand show that $\\ell^{1}(\\mathbb{N}_{\\wedge})$ as a Banach left\n$\\ell^{1}(\\mathbb{N}_{\\wedge})$-module is $\\phi$-injective for each\ncharacter $\\phi,$ although is not injective.\n\n\\section{$\\phi$-injectivity and some hereditary properties}\nFirst, we recall some standard notations that we shall use and\ndefine the notions of $\\phi$-injectivity and $\\phi$-flatness of\nBanach modules.\n\nLet $A$ be a Banach algebra and $E\\in\\textbf{A-mod}.$ Throughout the\npaper, we regard $E$ as a Banach left $A^{\\sharp}$-module (the\nunitization of $A$) with the following left module action:\n$$(a,\\lambda)\\cdot x=a\\cdot x+\\lambda x \\qquad(a\\in A, \\lambda\\in\\mathbb{C}, x\\in E).$$\nMoreover, the space $B(A,E)$ is a Banach $A$-bimodule with the\nfollowing module actions:\n$$(a\\cdot T)(b)=T(ba), \\qquad (T\\cdot a)(b)=T(ab)\\quad (T\\in B(A,E), a, b\\in A).$$\nSuppose that $A$ is a Banach algebra and $\\phi\\in\\Delta(A)$. For\neach $E\\in\\textbf{A-mod}$ we define,\n$$I(\\phi,E)=\\textrm{span}\\{a\\cdot x-\\phi(a)x : a\\in A, x\\in E\\}.$$\nFollowing \\cite{Nasr}, we also consider\n$$_{\\phi}B(A^{\\sharp},E)=\\{T\\in B(A^{\\sharp},E) :\nT(ab-\\phi(b)a)=a\\cdot T(b-\\phi(b)e^{\\sharp})\\:\\:\\text{for all}\\:\\:\na, b\\in A\\},$$ where $e^{\\sharp}=(0,1)$ denotes the unite of\n$A^{\\sharp}.$ It is straightforward to check that\n$_{\\phi}B(A^{\\sharp},E)$ is a closed $A$-submodule of\n$B(A^{\\sharp},E).$ Moreover, we define {\\it the canonical morphism}\n$_{\\phi}\\Pi^{\\sharp}:E\\longrightarrow _{\\phi}B(A^{\\sharp},E)$ as\nfollows:\n$$_{\\phi}\\Pi^{\\sharp}(x)(a)=a\\cdot x\\qquad (x\\in E, a\\in A^{\\sharp}).$$\n\\begin{definition}\nLet $A$ be a Banach algebra, $\\phi\\in \\Delta(A)$ and $E\\in\n\\textbf{A-mod}$. We say that $E$ is {\\it $\\phi$-injective} if, for\neach $F, K\\in \\textbf{A-mod}$ and admissible monomorphism\n$T:F\\longrightarrow K$ with $I(\\phi,K)\\subseteq\\text{Im}(T)$, the\ninduced map $T_{E}:\\hspace{-0.1cm}_{A}B(K,E)\\longrightarrow\\\n_{A}B(F,E)$ defined by $T_{E}(R)=R\\circ T$ is onto.\n\\end{definition}\nThe following theorem gives a characterization of $\\phi$-injectivity\nin terms of a coretraction problem.\n\\begin{theorem}$($\\cite[Theorem 2.4]{Nasr}$)$\nLet $A$ be a Banach algebra and $\\phi\\in\\Delta(A)$. 
For\n$E\\in\\textbf{A-mod}$ the following statements are equivalent.\n\\begin{enumerate}\n\\item[(i)] $E$ is $\\phi$-injective.\n\\item[(ii)] $_{\\phi}\\Pi^{\\sharp}\\in\\hspace{-0.1cm}_{A}B(E,_{\\phi}B(A^{\\sharp},E))$ is a coretraction, $($that is there exists\n$_{\\phi}\\rho^{\\sharp}\\in\\: _{A}B(_{\\phi}B(A^{\\sharp},E), E)$ such\nthat is a left inverse for $_{\\phi}\\Pi^{\\sharp})$.\n\\end{enumerate}\n\\end{theorem}\nA Banach right (left) $A$-module $E$ is $\\phi$-flat if $E^{*}$ is\n$\\phi$-injective as a left (right) $A$-module. It is shown that\nBanach algebra $A$ is left $\\phi$-amenable if and only if each\nBanach left $A$-module $E$ is $\\phi$-flat \\cite[Proposition\n3.1]{Nasr}.\n\nIn this section, we give some hereditary properties of\n$\\phi$-injectivity for certain classes of Banach modules. We also\nconsider some hereditary properties of $\\phi$-injectivity of Banach\nleft $A$-modules with their ideals. We first give a sufficient\ncondition for $\\phi$-injectivity of Banach left $A$-module $E$ in\nthe case where $A$ is a commutative Banach algebra. Following\n\\cite[Definition 1.4.4]{Dales}, {\\it the annihilator} of $E$ is\ndefined by $E^{\\perp}=\\{a\\in A: a\\cdot E=\\{0\\}\\}$.\n\\begin{theorem}\\label{Th: 2}\nLet $A$ be a commutative Banach algebra, $\\phi\\in \\Delta(A)$ and\n$E\\in \\textbf{A-mod}.$ If\n$E^{\\perp}\\cap(\\ker(\\phi))^{c}\\neq\\emptyset$, then $E$ is\n$\\phi$-injective.\n\\end{theorem}\n\\begin{proof}\nLet $a_{0}\\in E^{\\perp}\\cap(\\ker(\\phi))^{c}$. We can assume that\n$\\phi(a_{0})=1$ and define the map\n$_{\\phi}\\rho^{\\sharp}:$$_{\\phi}B(A^{\\sharp},E)\\longrightarrow E$ by\n\\begin{center}\n$_{\\phi}\\rho^{\\sharp}(T)=T(e^{\\sharp}-a_{0})\\hspace{0.5cm}(T\\in$$\n_{\\phi}B(A^{\\sharp},E)).$\n\\end{center}\nHence, for each $x\\in E$ we have\n\\begin{center}\n$_{\\phi}\\rho^{\\sharp}\\circ$$_{\\phi}\\Pi^{\\sharp}(x)=$$_{\\phi}\\Pi^{\\sharp}(x)(e^{\\sharp}-a_{0})=(e^{\\sharp}-a_{0})\\cdot\nx=x.$\n\\end{center}\nTherefore, $_{\\phi}\\rho^{\\sharp}\\circ$$_{\\phi}\\Pi^{\\sharp}=I_{E}$.\nOn the other hand, for each $a\\in A$ and $T\\in$$\n_{\\phi}B(A^{\\sharp},E)$ we have\n\\begin{equation}\n\\begin{split}\n_{\\phi}\\rho^{\\sharp}(a\\cdot T)&=(a\\cdot T)(e^{\\sharp}-a_{0})=T((e^{\\sharp}-a_{0})\\cdot a)\\\\\n&=T(a-a_{0}a)\\\\\n&=T(a\\phi(a_{0})-aa_{0}).\n\\end{split}\n\\end{equation}\nSince $T\\in$$ _{\\phi}B(A^{\\sharp},E)$ we have\n$T(a\\phi(a_{0})-aa_{0})=a\\cdot T(e^{\\sharp}-a_{0}).$ Now, using\n($2.1$) we conclude that\n\\begin{align*}\n_{\\phi}\\rho^{\\sharp}(a\\cdot T)=a\\cdot\nT(e^{\\sharp}-a_{0})=a\\cdot_{\\phi}\\rho^{\\sharp}(T).\n\\end{align*}\nIt follows that $_{\\phi}\\rho^{\\sharp}$ is a left $A$-module\nmorphism. Hence, $E$ is a $\\phi$-injective Banach left $A$-module.\n\\end{proof}\n\\begin{corollary}\\label{Cor: 2} Let $A$ be a commutative Banach algebra, and $J$ be a closed ideal of $A$\nsuch that $\\phi_{|J}\\neq 0$. Then $A\/J$ is $\\phi$-injective as a\nBanach left $A$-module.\n\\end{corollary}\n\\begin{proof}\nSince $\\phi_{|J}\\neq 0$, it is easy to check that\n$(A\/J)^{\\perp}\\cap(\\ker(\\phi))^{c}\\neq \\emptyset$. Now, apply\nTheorem \\ref{Th: 2}.\n\\end{proof}\n\\begin{corollary}\\label{Th: Inj Module}\nLet $A$ be a commutative Banach algebra, $\\phi\\in\\Delta(A)$ and let $E\\in\\textbf{A-mod}$ with $I(\\phi,E)=\\{0\\}$. 
Then for all $\\psi\\in\\Delta(A)\\setminus\\{\\phi\\}$, $E\\in \\textbf{A-mod}$ is $\\psi$-injective.\n\\end{corollary}\n\\begin{proof} Since $\\phi\\neq\\psi$ there exists $a_{0}\\in A$ such that $\\phi(a_{0})=0$ and $\\psi(a_{0})=1$.\nOn the other hand, since $I(\\phi,E)=\\{0\\}$ we conclude that $a_{0}\\in E^{\\perp}\\cap(\\ker(\\psi))^{c}$ and the proof\nis complete.\n\\end{proof}\nNow, we give some hereditary properties of $\\phi$-injectivity of\nBanach modules that we shall use. Recall that $E\\in\\textbf{A-mod}$\nis {\\it faithful} if $ A\\cdot x=\\{0\\}$ implies that $x=0.$\n\\begin{theorem}\\label{Th: Her,of,Id}\nLet $A$ be a Banach algebra,$E\\in\\textbf{A-mod}$, $\\phi\\in\\Delta(A)$ and $J$ be a closed\nideal of $A$ such that $\\phi_{|J}\\neq 0$.\n\\begin{enumerate}\n\\item[(i)] Suppose that $J$ has an identity and $E\\in\\textbf{J-unmod}$. If $E\\in\\textbf{A-mod}$ is\n$\\phi$-injective, then $E\\in\\textbf{J-unmod}$ is\n$\\phi_{|J}$-injective.\n\\item[(ii)] If $E\\in\\textbf{J-mod}$ is $\\phi_{|J}$-injective and\nfaithful, then $E\\in\\textbf{A-mod}$ is $\\phi$-injective.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}(i) Suppose that $E\\in \\textbf{A-mod}$ is $\\phi$-injective. Let $F$ and $K$ be in $\\textbf{J-mod}$ and $T:F\\longrightarrow K$ be an\nadmissible monomorphism with $I(\\phi_{|J},K)\\subseteq \\textrm{Im}T$.\nWe claim that the induced map, $_{J}B(K,E)\\longrightarrow\\\n_{J}B(F,E)$ defined by, $R\\longrightarrow R\\circ T$ is onto. Suppose\nthat $e_{J}$ is the identity of $J$. We can consider $F$ and $K$ as\nBanach left $A$-modules with the following module actions:\n\\begin{align*}\n&a\\bullet f=(ae_{J})\\cdot f\\hspace{0.5cm}(a\\in A, f\\in F),\\\\\n&a\\bullet k=(ae_{J})\\cdot k\\hspace{0.5cm}(a\\in A, k\\in K).\n\\end{align*}\nWe denote these $A$-modules with $\\widetilde{F}$ and\n$\\widetilde{K},$ respectively. Take $W\\in$$_{J}B(F,E)$ and define\nthe map $\\widetilde{W}:\\widetilde{F}\\longrightarrow E$ by\n$\\widetilde{W}(f)=W(f)$. For each $a\\in A$ and $f\\in F$ we have,\n\\begin{align*}\n\\widetilde{W}(a\\bullet f)&=W((ae_{J})\\cdot f)=(ae_{J})\\cdot\nW(f)\\\\\n&=a\\cdot (e_{J}\\cdot W(f))=a\\cdot W(f)\\\\\n&=a\\cdot\\widetilde{W}(f).\n\\end{align*}\nSo $\\widetilde{W}$ is a left $A$-module morphism. Moreover, the map\n$\\widetilde{T}:\\widetilde{F}\\longrightarrow\\widetilde{K}$ defined by\n$\\widetilde{T}(f)=T(f)$ is an admissible monomorphism such that\n\\begin{align*}\nI(\\phi,\\widetilde{K})&=\\textrm{span}\\{a\\bullet k-\\phi(a)k : a\\in A, k\\in K\\}\\\\\n&=\\textrm{span}\\{(ae_{J})\\cdot k-\\phi(ae_{J})k : a\\in A, k\\in K\\}\\\\\n&\\subseteq \\textrm{Im}T=\\textrm{Im}\\widetilde{T}.\n\\end{align*}\nSince $E\\in \\textbf{A-mod}$ is $\\phi$-injective, there exist\n$S\\in$$_{A}B(\\widetilde{K},E)$ such that $S\\circ\n\\widetilde{T}=\\widetilde{W}$. On the other hand, for each $a\\in J$\nand $k\\in K$ we have\n\\begin{equation*}\na\\cdot S(k)=S(a\\bullet k)=S((ae_{J})\\cdot k)=S(a\\cdot k).\n\\end{equation*}\nIt follows that $S\\in$$_{J}B(K,E)$. Now, we conclude that $E\\in\n\\textbf{J-unmod}$ is $\\phi_{|J}$-injective.\n\n(ii) Let $F$ and $K$ be in $\\textbf{A-mod}$ and $T:F\\longrightarrow\nK$ be an admissible monomorphism and take\n$W\\in\\hspace{-0.1cm}_{A}B(F,E)$. So $W\\in\\hspace{-0.1cm}_{J}B(F,E)$\nand there exists $S\\in\\hspace{-0.1cm}_{J}B(K,E)$ such that $S\\circ\nT=W$. 
For each $a\\in J$, $b\\in A$ and $k\\in K$, we have\n\\begin{align*}\na\\cdot(S(b\\cdot k)-b\\cdot S(k))&=a\\cdot S(b\\cdot k)-(ab)\\cdot S(k)\\\\\n&=S(ab\\cdot k)-S(ab\\cdot k)=0.\n\\end{align*}\nSince $E\\in \\textbf{J-mod}$ is faithful, we conclude that $S(b\\cdot\nk)=b\\cdot S(k).$ It follows that $S\\in\\hspace{-0.1cm}_{A}B(K,E)$ and\nthe proof is complete.\n\\end{proof}\n\\begin{corollary}\nLet $A$ be a Banach algebra, $\\phi\\in\\Delta(A)$ and $J$ be a closed\nideal of $A$ with an identity such that $\\phi_{|J}\\neq 0$. Then\n$J\\in\\textbf{A-mod}$ is $\\phi$-injective if and only if\n$J\\in\\textbf{J-mod}$ is $\\phi_{|J}$-injective.\n\\end{corollary}\nB. E. Forrest in \\cite{Forrest} introduced the notion of invariantly\ncomplemented submodules in categories of Banach modules. In the\nsequel, we obtain some results for $\\phi$-injectivity of invariantly\ncomplemented ideals.\n\\begin{definition}(\\cite[Definition\n6.3]{Forrest}) Let $X$ be a Banach left $A$-module and $Y$ be a\nBanach $A$-submodule of $X$. We say that $Y$ is {\\it left\n$($right$)$ invariantly complemented} in $X$ if there exists\n$P\\in\\hspace{-0.1cm}_{A}B(X,Y)$ ($P\\in B_{A}(X,Y)$) such that\n$P^{2}=P$ and $P(X)=Y$.\n\\end{definition}\n\\begin{theorem}\\label{Th: 3} Let $\\{E_{\\alpha}\\}_{\\alpha\\in \\Gamma}$ be a collection of Banach left $A$-modules and consider\n$E=\\ell^{1}-\\bigoplus_{\\alpha\\in \\Gamma} E_{\\alpha}$ as a Banach\nleft $A$-module with the natural module action.\n\\begin{enumerate}\n\\item[(i)] If $E$ is $\\phi$-injective, then for each $\\alpha\\in\\Gamma,$ $E_{\\alpha}$ is $\\phi$-injective.\n\\item[(ii)] Conversely, if $\\Gamma$ is finite and each $E_{\\alpha}$ is $\\phi$-injective, then $E$ is $\\phi$-injective.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof} (i) It is obvious that each $E_{\\alpha}$ is left invariantly complemented in $E$. Hence, for each $\\alpha\\in\\Gamma$,\nlet $P_{\\alpha}\\in$$_{A}B(E,E_{\\alpha})$ such that\n$P_{\\alpha}(E)=E_{\\alpha}$ and $P^{2}_{\\alpha}=P_{\\alpha}$. Also,\nlet $i_{\\alpha}:E_{\\alpha}\\longrightarrow E$ be the natural\nembedding of $E_{\\alpha}$ into $E$.\n\nLet $E$ be $\\phi$-injective. Then there exists\n$_{\\phi}\\rho^{E}\\in$$_{A}B(_{\\phi}B(A^{\\sharp},E),E)$ such that is a\nleft inverse for\n$_{\\phi}\\Pi^{E}:E\\longrightarrow\\hspace{-0.1cm}_{\\phi}B(A^{\\sharp},E)$.\nFor each $\\alpha\\in \\Gamma$, we define the map\n$_{\\phi}\\rho^{\\alpha}:$$_{\\phi}B(A^{\\sharp},E_{\\alpha})\\longrightarrow\nE_{\\alpha}$ by\n\\begin{center}\n$_{\\phi}\\rho^{\\alpha}(T)=P_{\\alpha}\\circ$$_{\\phi}\\rho^{E}(i_{\\alpha}\\circ\nT)\\qquad(T\\in$$_{\\phi}B(A^{\\sharp},E_{\\alpha})).$\n\\end{center}\nWe claim that $_{\\phi}\\rho^{\\alpha}$ is a left $A$-module morphism\nsuch that\n$_{\\phi}\\rho^{\\alpha}\\circ$$_{\\phi}\\Pi^{\\alpha}=I_{E_{\\alpha}}$.\nIndeed, since for each $x\\in E_{\\alpha}$,\n$i_{\\alpha}\\circ$$_{\\phi}\\Pi^{\\alpha}(x)=$$_{\\phi}\\Pi^{E}(i_{\\alpha}(x))$,\nso we have\n\\begin{align*}\n_{\\phi}\\rho^{\\alpha}\\circ_{\\phi}\\Pi^{\\alpha}(x)&=P_{\\alpha}\\circ_{\\phi}\\rho^{E}(i_{\\alpha}\\circ\n_{\\phi}\\Pi^{\\alpha}(x))=P_{\\alpha}\\circ_{\\phi}\\rho^{E}(_{\\phi}\\Pi^{E}(i_{\\alpha}(x)))\\\\\n&=P_{\\alpha}(i_{\\alpha}(x))=x.\n\\end{align*}\nTherefore,\n$_{\\phi}\\rho^{\\alpha}\\circ$$_{\\phi}\\Pi^{\\alpha}=I_{E_{\\alpha}}$. 
On\nthe other hand, since $P_{\\alpha}\\in$$_{A}B(E,E_{\\alpha})$ it is\neasy to see that $_{\\phi}\\rho^{\\alpha}$ is a left $A$-module\nmorphism and the proof is complete.\n\n(ii) Suppose that for each $\\alpha\\in \\Gamma$, $E_{\\alpha}$ is\n$\\phi$-injective. So, for each $\\alpha\\in \\Gamma$ there exists\n$_{\\phi}\\rho^{\\alpha}\\in$$_{A}B(_{\\phi}B(A^{\\sharp},E_{\\alpha}),E_{\\alpha})$\nfor which\n$_{\\phi}\\rho^{\\alpha}\\circ$$_{\\phi}\\Pi^{\\alpha}=I_{E_{\\alpha}}$.\nDefine the map $\\rho:$$_{\\phi}B(A^{\\sharp},E)\\longrightarrow E$ by\n\\begin{center}\n$\\rho(T)=(_{\\phi}\\rho^{\\alpha}(P_{\\alpha}\\circ T))_{\\alpha\\in\n\\Gamma}\\qquad(T\\in$$_{\\phi}B(A^{\\sharp},E)).$\n\\end{center}\nSince $\\Gamma$ is finite, $\\rho$ is well-defined. For each $a\\in A$ and $T\\in$$_{\\phi}B(A^{\\sharp},E)$ we have\n\\begin{align*}\n\\rho(a\\cdot T)&=(_{\\phi}\\rho^{\\alpha}(P_{\\alpha}\\circ (a\\cdot T)))_{\\alpha\\in \\Gamma}=(_{\\phi}\\rho^{\\alpha}(a\\cdot(P_{\\alpha}\\circ T)))_{\\alpha\\in \\Gamma}\\\\\n&=(a\\cdot_{\\phi}\\rho^{\\alpha}(P_{\\alpha}\\circ T))_{\\alpha\\in \\Gamma}=a\\cdot(_{\\phi}\\rho^{\\alpha}(P_{\\alpha}\\circ T))_{\\alpha\\in \\Gamma}\\\\\n&=a\\cdot\\rho(T).\n\\end{align*}\nMoreover, if $x=(x_{\\alpha})_{\\alpha\\in\\Gamma}$ is an arbitrary\nelement of $E$, it is easy to see that\n$P_{\\alpha}\\circ$$_{\\phi}\\Pi^{E}(x)=$$_{\\phi}\\Pi^{\\alpha}(x_{\\alpha})$.\nHence,\n\\begin{align*}\n\\rho\\circ_{\\phi}\\Pi^{E}(x)&=(_{\\phi}\\rho^{\\alpha}(P_{\\alpha}\\circ\n_{\\phi}\\Pi^{E}(x)))_{\\alpha\\in\n\\Gamma}=(_{\\phi}\\rho^{\\alpha}(_{\\phi}\\Pi^{\\alpha}(x_{\\alpha})))_{\\alpha\\in\n\\Gamma}\\\\\n&=(x_{\\alpha})_{\\alpha\\in\\Gamma}=x.\n\\end{align*}\nTherefore, we conclude that $E$ is $\\phi$-injective.\n\\end{proof}\n\\begin{theorem}\\label{Cor: Directsum}\nLet $A$ be a Banach algebra, $\\phi\\in \\Delta(A)$, $B$ be a\nsubalgebra of $A$ and $J$ be a closed left ideal of $A$. Then the\nfollowing assertions holds:\n\\begin{enumerate}\n\\item[(i)] If $B$ is left invariantly complemented in $A$ and $A$ is $\\phi$-injective in\n\\textbf{A-mod}, then $B$ is $\\phi$-injective in \\textbf{A-mod}.\n\\item[(ii)] If $J$ is left invariantly complemented, then $J$ and $A\/J$ are $\\phi$-injective in \\textbf{A-mod}\nif and only if $A$ is $\\phi$-injective in \\textbf{A-mod}.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}(i) Since $B$ is left invariantly complemented in $A$,\nthere exists an onto projection $P\\in$$_{A}B(A,B)$. Hence\n$$A\\cong\\textrm{Im} P\\oplus\\ker P=B\\oplus\\ker P,$$ as a Banach\nleft $A$-module. Therefore, by Theorem \\ref{Th: 3} it follows that\n$B$ is $\\phi$-injective in $\\textbf{A-mod}.$\n\n(ii) Since $J$ is a left invariant complemented ideal in $A,$ there\nexists an onto projection $P\\in$$_{A}B(A,J)$. We claim that $A\\cong\nJ\\oplus \\frac{A}{J}$ as a Banach left $A$-module. To see this,\ndefine the map $T:A\\longrightarrow J\\oplus \\frac{A}{J}$ by\n\\begin{equation*}\nT(a)=(P(a), a+J)\\qquad(a\\in A).\n\\end{equation*}\nFirst, $T$ is a left $A$-module morphism, because for each $a, b\\in\nA$ we have,\n\\begin{align*}\nT(ab)&=(P(ab),ab+J)=(aP(b),a(b+J))\\\\\n&=a\\cdot (P(b),b+J)=a\\cdot T(b).\n\\end{align*}\nOn the other hand, if $a\\in \\textrm{Im}P\\ \\cap\\ \\ker P$, then there\nexists $b\\in A$ such that $P(b)=a$. Hence, $a=P(b)=P(P(b))=P(a)=0$.\nThis follows that $\\textrm{Im}P\\cap \\ker P=\\{0\\}$ and so $T$ is\none-to-one. 
Moreover, $T$ is onto because for each $(a,b+J)\\in\nJ\\oplus \\frac{A}{J}$ if we put $c=a+b-P(b)$, then $T(c)=(a,b+J)$.\nNow, the result follows from Theorem \\ref{Th: 3}.\n\\end{proof}\nAs an application of Theorem \\ref{Cor: Directsum} and Corollary\n\\ref{Cor: 2}, we have the following result for commutative Banach\nalgebras.\n\\begin{corollary}\\label{Cor: A and I. A is Com}\nLet $A$ be a commutative Banach algebra, $\\phi\\in \\Delta(A)$ and let\n$J$ be a closed invariant complemented ideal of $A$ such that\n$\\phi_{|J}\\neq 0$. Then $A\\in\\textbf{A-mod}$ is $\\phi$-injective if\nand only if $J\\in\\textbf{A-mod}$ is $\\phi$-injective.\n\\end{corollary}\n\\section{Applications to semigroup algebras}\nIn this section, we apply our later results to study\n$\\phi$-injectivity of certain commutative semigroup algebras and\ngive some examples of non-injective modules which are\n$\\phi$-injective for each character $\\phi.$ First, we need some\nbasic facts about semigroup algebras.\n\nLet $S$ be a semigroup and let $E(S)=\\{s\\in S : s^{2}=s\\}.$ We say\nthat $S$ is a {\\it semilattice} if $S$ is commutative and $E(S)=S.$\nA {\\it semi-character} on $S$ is a non-zero homomorphism\n$\\widehat{\\phi}:S\\longrightarrow\\{z\\in\\mathbb{C}: |z|\\leq 1\\}.$ The\nspace of semi-characters on $S$ is denoted by $\\Phi_{S}.$ The\nsemi-character $\\widehat{\\phi}_{S}:S\\longrightarrow\\{z\\in\\mathbb{C}:\n|z|\\leq 1\\}$, defined by\n$$\\widehat{\\phi}_{S}(t)=1 \\qquad (t\\in S),$$ is called the {\\it\naugmentation character} on $S$. For each\n$\\widehat{\\phi}\\in\\Phi_{S},$ we associate the map\n$\\phi:\\ell^{1}(S)\\longrightarrow \\mathbb{C}$ defined by\n$$\\phi(f)=\\sum_{s\\in S}\\widehat{\\phi}(s)f(s) \\qquad (f\\in\\ell^{1}(S)).$$\nIt is easy to verify that $\\phi\\in\\Delta(\\ell^{1}(S))$ and every\ncharacter on $\\ell^{1}(S)$ arises in this way. Indeed, we have\n$$\\Delta(\\ell^{1}(S))=\\{\\phi: \\widehat{\\phi}\\in\\Phi_{S}\\}.$$\nWe also define the convolution of two elements $f,g\\in \\ell^{1}(S)$\nby\n$$(f\\ast g)(s)= \\sum_{uv=s}f(u)g(v) \\qquad (s\\in S),$$\nwhere $\\sum_{uv=s}f(u)g(v)=0,$ when there are no elements $u,v\\in S$\nwith $uv=s.$ Then $(\\ell^{1}(S),\\ast,{\\|\\cdot\\|}_{1})$ becomes a\nBanach algebra that is called the {\\it semigroup algebra} of $S.$\nThe following proposition immediately follows from Corollary\n\\ref{Cor: A and I. 
A is Com}.\n\\begin{proposition}\\label{pp}\nLet $S$ be a semilattice, $\\phi\\in\\Delta(\\ell^{1}(S))$ and $I$ be a\nclosed invariant complemented ideal in $\\ell^{1}(S)$ such that\n$\\phi_{|_{I}}\\neq 0.$ Then $\\ell^{1}(S)\\in\\ell^{1}(S)\\textbf{-mod}$\nis $\\phi$-injective if and only if $I\\in\\ell^{1}(S)\\textbf{-mod}$ is\n$\\phi$-injective.\n\\end{proposition}\nLet $\\ell^{1}(\\mathbb{N}_{\\wedge})$ be the semigroup algebra on semigroup $S=(\\mathbb{N},{\\wedge})$ with the following\nproduct:\n\\begin{equation*}\n\\mathbb{N}\\times \\mathbb{N}\\longrightarrow \\mathbb{N},\\quad\n(m,n)\\longrightarrow m\\wedge n=\\min\\{m, n\\}.\n\\end{equation*}\nIt is easy to check that $\\Phi_{S}=\\{\\widehat{\\phi}_{n} :\nn\\in\\mathbb{N}\\},$ where for each $n\\in\\mathbb{N},$\n$$\\begin{array}{lll}\n\\widehat{\\phi}_{n}(m) =\\left\\{\\begin{array}{l} 1 \\quad\\text{if}\\quad\nm\\geq n\n\\\\ 0 \\quad \\text{if} \\quad m< n\n\\end{array}\\qquad (m\\in\\mathbb{N}).\\right.\n\\end{array}$$\nFor each $n\\in \\mathbb{N}$, let $I_{n}$ be the ideal of\n$\\ell^{1}(\\mathbb{N}_{\\wedge})$ generated by the set $\\{\\delta_{1},\n\\delta_{2}, \\delta_{3},\\ldots, \\delta_{n}\\}$. It is easy to see that\n$\\ell^{1}(\\mathbb{N}_{\\wedge})\/I_{n}$ does not have an identity. So\n$I_{n}$ does not have a modular identity and using \\cite[Corollary\n2.2.8 (ii)]{Ramsden}, we conclude that\n$\\ell^{1}(\\mathbb{N}_{\\wedge})\/I_{n}$ is not injective as a Banach\nleft $\\ell^{1}(\\mathbb{N}_{\\wedge})$-module. Furthermore, we recall\nthat $\\ell^{1}(\\mathbb{N}_{\\wedge})$ as a Banach left\n$\\ell^{1}(\\mathbb{N}_{\\wedge})$-module is not injective, because it\ndoes not have a right identity \\cite[Example 4.10]{Dales. Lau}. As\nmentioned above, we regard the map\n$\\phi_{n}:\\ell^{1}(\\mathbb{N}_{\\wedge})\\longrightarrow \\mathbb{C}$\nas a character on $\\ell^{1}(\\mathbb{N}_{\\wedge})$ which is defined\nby\n\\begin{equation*}\n\\phi_{n}(f)=\\sum_{i=n}^{\\infty}f(i)\\qquad(f\\in\n\\ell^{1}(\\mathbb{N}_{\\wedge})).\n\\end{equation*}\nThe following theorem shows that $\\ell^{1}(\\mathbb{N}_{\\wedge})$ as\na left $\\ell^{1}(\\mathbb{N}_{\\wedge})$-module is $\\phi$-injective\nfor each $\\phi\\in \\Delta(\\ell^{1}(\\mathbb{N}_{\\wedge}))$, although\nis not injective.\n\\begin{theorem}\\label{Th: L1Min is 1-inj}\nWith the above notations, we have following assertions:\n\\begin{enumerate}\n\\item[(i)] $\\ell^{1}(\\mathbb{N}_{\\wedge})\/I_{n}$ as a Banach left\n$\\ell^{1}(\\mathbb{N}_{\\wedge})$-module is $\\phi_{n}$-injective, for\neach $n\\in\\mathbb{N}$.\n\\item[(ii)] $\\ell^{1}(\\mathbb{N}_{\\wedge})$ as a Banach left\n$\\ell^{1}(\\mathbb{N}_{\\wedge})$-module is $\\phi_{n}$-injective, for\neach $n\\in\\mathbb{N}$.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}(i) Since $\\ell^{1}(\\mathbb{N}_{\\wedge})$ is commutative and\n$(\\phi_{n})_{|I_{n}}\\neq 0$, by Corollary \\ref{Cor: 2}, it follows\nthat $\\ell^{1}(\\mathbb{N}_{\\wedge})\/I_{n}$ as a Banach left\n$\\ell^{1}(\\mathbb{N}_{\\wedge})$-module is $\\phi_{n}$-injective.\n\n(ii) First, we show that for each $n\\in \\mathbb{N}$, $I_{n}$ is an\ninvariant complemented ideal of $\\ell^{1}(\\mathbb{N}_{\\wedge})$. 
To\nsee this, suppose that the map\n$P_{n}:\\ell^{1}(\\mathbb{N}_{\\wedge})\\longrightarrow I_{n}$ is\ndefined by\n\\begin{equation*}\nP_{n}(f)=\\sum_{i=1}^{n-1}f(i)\\delta_{i}+(\\sum_{i=n}^{\\infty}f(i))\\delta_{n}\\qquad(f\\in\n\\ell^{1}(\\mathbb{N}_{\\wedge})).\n\\end{equation*}\nIt is easy to check that $P_{n}$ is a projection on $I_{n}.$\nMoreover, if $f$ or $g$ belong to $I_{n}$ we have $P_{n}(f\\ast\ng)=f\\ast P_{n}(g)$. Now suppose that $f, g$ are not in $I_{n}$. We\ncan suppose that $f=\\delta_{i}$ and $g=\\delta_{j}$ such that\n$n n\n\\end{array}\\qquad (m\\in\\mathbb{N}).\\right.\n\\end{array}$$\nIn \\cite[Example 5.6]{Ramsden Paper}, it is proved that\n$\\ell^{1}(\\mathbb{N}_{\\vee})$ is not injective as a Banach left\n$\\ell^{1}(\\mathbb{N}_{\\vee})$-module. In the following theorem, we\nshow that $\\ell^{1}(\\mathbb{N}_{\\vee})$ is $\\phi$-injective for each\n$\\phi\\in \\Delta(\\ell^{1}(\\mathbb{N}_{\\vee}))$.\n\\begin{theorem}\\label{aa}\n$\\ell^{1}(\\mathbb{N}_{\\vee})$ as a Banach left\n$\\ell^{1}(\\mathbb{N}_{\\vee})$-module is $\\phi$-injective for each\n$\\phi\\in \\Delta(\\ell^{1}(\\mathbb{N}_{\\vee}))$.\n\\end{theorem}\n\\begin{proof} By \\cite[Corollary 2.2]{Essmaili}, it follows that $\\ell^{1}(\\mathbb{N}_{\\vee})$ is character amenable . Hence,\nfor each $\\phi\\in\\Delta(\\ell^{1}(\\mathbb{N}_{\\vee}))$,\n$\\ell^{1}(\\mathbb{N}_{\\vee})$ is $\\phi$-amenable. On the other hand,\nsince $\\mathbb{N}_{\\vee}$ is weakly cancellative by \\cite[Theorem\n4.6]{Dales. Lau}, we conclude that $c_{0}(\\mathbb{N}_{\\vee})$ is a\nBanach $\\ell^{1}(\\mathbb{N}_{\\vee})$-module . Hence,\n$c_{0}(\\mathbb{N}_{\\vee})$ is $\\phi$-flat as a Banach right\n$\\ell^{1}(\\mathbb{N}_{\\vee})$-module \\cite[Proposition 3.1]{Nasr}.\nThis follows that\n$c_{0}(\\mathbb{N}_{\\vee})^{*}=\\ell^{1}(\\mathbb{N}_{\\vee})$ is\n$\\phi$-injective as a Banach left\n$\\ell^{1}(\\mathbb{N}_{\\vee})$-module.\n\\end{proof}\nFor each $n\\in \\mathbb{N}$, let $J_{n}$ be the closed ideal of\n$A=\\ell^{1}(\\mathbb{N}_{\\vee})$ generated by\n$\\{\\delta_{n},\\delta_{n+1},\\ldots\\}$. It is easy to see that $J_{n}$\nis invariantly complemented in $\\ell^{1}(\\mathbb{N}_{\\vee})$.\nIndeed, it is sufficient to consider the map $Q_{n}:A\\longrightarrow\nJ_{n}$ defined by\n$$Q_{n}(f)=(\\sum_{i=1}^{n}f(i))\\delta_{n}+\\sum_{i=n+1}^{\\infty}f(i)\\delta_{i} \\qquad(f\\in A).$$\nIt is straightforward to check that $Q_{n}$ is an onto projection in\n$_{A}B(A,J_{n})$.\n\nAs a consequence of Corollary \\ref{Cor: A and I. A is Com} and\nTheorem \\ref{aa}, we give the following result.\n\\begin{corollary} For each $n\\in \\mathbb{N}$, $J_{n}$ as a Banach left $\\ell^{1}(\\mathbb{N}_{\\vee})$-module is\n$\\phi$-injective for each\n$\\phi\\in\\Delta(\\ell^{1}(\\mathbb{N}_{\\vee}))$ with $\\phi_{|J_{n}}\\neq\n0$.\n\\end{corollary}\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzeqqt b/data_all_eng_slimpj/shuffled/split2/finalzzeqqt new file mode 100644 index 0000000000000000000000000000000000000000..f27a259caa6198fa21ca512e8a3f070ad829c6ae --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzeqqt @@ -0,0 +1,5 @@ +{"text":"\n\\section{Introduction}\n\nThe field of interpretability aims at providing users and practitioners with techniques meant to explain either globally a trained machine learning model or locally a particular prediction made by a model. 
This can be achieved either by training directly an interpretable model, or in a post hoc approach, using model-agnostic or model-specific interpretability techniques.\n\nThis paper focuses on post hoc surrogate models that globally approximate a machine learning classifier while providing explanations at the local level of each prediction.\nWe are interested in model-agnostic interpretability approaches meant to be applied on standard feature spaces composed of tabular data. Our goal is to explain any type of trained model: the classifier is a black-box left to the discretion of the practitioners.\nWe refer the reader to recent published surveys for a global picture of the interpretability field as for instance~\\citep{Guidotti2018a}.\n\n\\begin{figure}\n \\centering\n \\resizebox*{\\columnwidth}{3.5cm}{\n \\input{illustration_tree_concept.tex}\n }\n \\caption{Concept Tree trained on FRED-MD macroeconomic dataset. Variables are grouped by Concepts to constraint the training of an interpretable surrogate decision tree}\n \\label{fig:my_label}\n\\end{figure}\n\nSurrogate models aiming at providing post hoc interpretability may induce confusion by conveying a false sense of simplicity, especially when subgroups of dependent variables are involved. We refer to dependent variables as variables sharing similar information and possibly generated by a common phenomenon. It may include the various lags of a given time series, various features of a variables, or various measures of a given fact. Surrogate models may arbitrarily select one given variable among a group of dependent variables, thus obscuring the global picture.\nSubsequently, practitioners may better understand a surrogate model that retains the whole set of dependent variables and depicts a bigger picture than a simpler model. \n\nThis paper introduces the idea of \\textit{concept}. A \\textit{concept} is a representation gathering a group of dependent variables. It can be defined using either domain knowledge or statistical properties of dependent variables (such as the Pearson correlation). The use of \\textit{concepts} allows to provide high-level representations that practitioners may find easier to interpret. We contend that \\textit{concept}-based methods may be better suited to human understanding and provide more practitioner-friendly representations of a black-box classifier. \n\nWe substantiate that claim with an application to decision tree surrogates. Decision trees are universally considered interpretable by domain experts~\\cite{freitas2014comprehensible}. We compare standard surrogate tree models to trees whose training is constrained by the grouping of subgroups of variables in \\textit{concepts}. More specifically, we embed the idea of \\textit{concept} in the TREPAN algorithm~\\citep{Craven1996b}, an interpretable decision tree originally instantiating a variant of \\emph{id2-of-3}~\\citep{murphy1991id2} with a mechanism of oracle querying aiming at populating areas of the training set where the fidelity of the surrogate can be improved. In our approach, the \\textit{concepts} are used at each node of the decision tree to constrain the training of the split rule based on \\emph{id2-of-3}. We compare the resulting \\textit{Concept Trees} to the surrogates provided by the original TREPAN algorithm. \n\nThe next section expands on the motivation and formally introduces the idea of \\textit{high-level concepts}. 
Section 3 introduces \\textit{Concept Trees}, a version of the TREPAN algorithm that builds on \\textit{concepts}, and shows that \\textit{Concept Trees} meet the prerequisites of a global-to-local, post-hoc and model-agnostic surrogate. Section 4 assesses both the qualitative and quantitative relevance of our proposition through experiments led on FRED-MD, a monthly macroeconomic database designed for empirical analysis of the US economy \\citep{mccracken2016fred}.\n\n\n\\section{Concept: Grouping Dependent Variables into High-Level Representation of Variables}\n\nIt is often the case that groupings of variables in a given dataset may naturally appear. Such grouping can derive from similar meaning or a similar origin (\\emph{e.g.} unemployment among men, unemployment among women, unemployment among young people...). A grouping can also be the result of multiple transformations applied to a given source of data (such as multiple lags of a time series, or features engineered from the same variable).\n\nIn this work, we consider two types of \\textit{concepts}: expert-defined grouping of features and automatically-defined grouping based on a statistical criterion such as feature correlation.\nExpert-based \\textit{concepts} may be used when domain knowledge is available. Automatically-defined concepts do not require prior domain knowledge.\n\nExploiting the group structure of variables has already been used in the literature to train more accurate sparse models, for instance with \\emph{group-lasso}~\\citep{yuan2006model} or \\emph{sparse-group-lasso}~\\citep{simon2013sparse}. In the latter, improved accuracy is observed with variable groupings such as gene pathways or factor level indicators in categorical data.\nOther machine learning fields also cover the idea of grouping dimensions, such as subspace clustering \\citep{vidal2011subspace}.\n\nIn the field of interpretability, the idea of exploiting a meaningful grouping of features to generate better explanations has emerged, for instance with topic-modeling-based feature compression~\\citep{kim2015scalable} or on image classification with deep learning models~\\citep{Kim2017,Ghorbani2019}.\n\nCorrelated features is a known challenge when building machine learning models and interpreting feature importances~\\cite{Buhlmann2013,Gregorutti2017,Strobl2008,Tolosi2011}. For instance, \\emph{lasso}-based methods for feature selection tend to select only one representative from a group of correlated features and to discard the others~\\cite{Buhlmann2013}. It has been pointed out that correlated features severely impact variable importance measures of random forests~\\cite{Strobl2008, Gregorutti2017}. Also, many feature selection methods suffer from a \\emph{correlation bias}: features belonging to a group of correlated features receive weights inversely proportional to the size of the group~\\cite{Tolosi2011}. This issue creates instability in the feature selection process. Small changes in the training data can result in significant changes in the selected set of features. This instability prevents a robust interpretation of variable importance.\n\nWe propose to use the idea of \\textit{concept} to address both expert-defined grouping of features and automatically (correlated)-defined grouping. 
\\textit{Concepts} are embedded into surrogate models in order to constrain their training, which provides two levels of granularity for the explanations: a high level (concepts) and a finer level (raw variables).\nThe next paragraph offers a formal presentation of the idea of \\textit{concepts}. \n\nWe consider a set of training examples $\\mathbb{X}$ where each example is denoted $x^{(i)}$ with $i \\in [1...|\\mathbb{X}|]$ and associated with a label $y^{(i)}$. The set of training examples $\\mathbb{X}$ is composed of a set of features $j \\in \\mathbb{J}$ and each feature vector is denoted $x_j$ with $j \\in \\mathbb{J}=[1...N]$.\n\nA \\emph{concept} is a subset of features denoted $c_k \\subset \\mathbb{J}$. $K$ concepts $c_k$ co-exist to form the set of concepts $c_k \\in \\mathbb{C}, k \\in [1...K]$. The instantiation of a concept $c_k$ is the process of populating it with dependent features. Every feature $j \\in \\mathbb{J}$ belongs to one concept $c_k$ and one concept only:\n$$c_k \\cap c_l = \\emptyset, \\quad \\forall l \\in [1...K], \\; l \\neq k$$\n\n\\subsection{Expert knowledge concepts}\nThe instantiation of a concept $c_k$ can be either driven by domain knowledge or performed automatically. The former requires that all variables belong to user-defined groups that are meaningful to domain experts. Such variable classifications can sometimes be found in the documentation of a dataset. That is the case for the FRED-MD data, which is used in the experimentation section of this work. The paper accompanying the dataset \\citep{mccracken2016fred} includes in its appendix a table that classifies the 134 monthly macroeconomic indicators into 8 categories: output and income, labour market, housing, consumption orders and inventories, money and credit, interest and exchange rates, prices, and stock markets. Table~\\ref{tab:fred_succint_desc} provides a sample of these categories. \n\n\\begin{table*}[]\n\\centering\n\\caption{Overview of the grouping of variables by concept in the FRED-MD database~\\citep{mccracken2016fred}}\n\\label{tab:fred_succint_desc}\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{@{}ccc@{}}\n\\toprule\n\\thead{Concept 1: Output and Income} & \\thead{Concept 2: Labor Market} & \\thead{Concept 5: Money and Credit} \\\\ \\midrule\nReal Personal Income & Civilian Labor Force & Total Reserves of Depository Institutions \\\\\nReal personal income ex transfer receipts & Civilian Employment & Commercial and Industrial Loans \\\\\nIP: Consumer Goods & Civilian Unemployment Rate & Total Consumer Loans and Leases Outstanding \\\\\n... & ... & ... \\\\ \\bottomrule\n\\end{tabular}\n}\n\\end{table*}\n\n\\subsection{Automatic concepts: a simple approach}\nWhen the user cannot rely on domain knowledge, the set of \\textit{concepts} $\\mathbb{C}$ can be built automatically using a clustering algorithm based on feature correlations. Features (indexed by $j$) can be grouped into a concept $c_k$ using any dependence measure $\\rho$. The most straightforward choice is the Pearson correlation, which measures the linear correlation between variables. 
Assuming the measure takes values in $[-1;1]$ (an absolute value of $1$ meaning that two variables are perfectly dependent), a user-defined threshold $\\epsilon$ is set on the absolute value of the measure of dependence between two features $x_j$ and $x_{j'}$ in order to decide whether these features belong to the same concept $c_k$:\n\n$$\\left|\\rho(x_j,x_{j'})\\right| \\geq \\epsilon, \\quad \\forall \\, j, j' \\in c_k$$\n\nThe clustering algorithm is greedy: at each iteration a feature is tested against all features and existing groups. A feature $j'$ is assigned to a \\textit{concept} $c_k$ if its dependence with each feature in $c_k$ is at least $\\epsilon$:\n$$\\left|\\rho(x_j,x_{j'})\\right| \\geq \\epsilon \\;\\; \\forall j \\in c_k \\;\\Rightarrow\\; c_k \\leftarrow c_k \\cup \\{j'\\}$$\n\nIf a given feature is independent of all the others, it forms a singleton concept.\nThis formalization is also adequate for expert-knowledge grouping. In that case, $\\rho$ and $\\epsilon$ would be the criteria of group assignment by the expert.\n\n\nThe next section explains how the notion of \\textit{concepts} may be used to constrain the training of a decision tree in order to produce more interpretable surrogates. \n\n\n\\section{Concept Tree: Embedding Concepts For More Interpretable Surrogate Decision Trees}\n\nDecision trees are a well-known interpretable machine learning model. A decision tree has a graphical structure, its decisions rely on a sparse subset of features, and features are used in a hierarchical way, thus conveying an intuitive sense of feature importance and providing several levels of explanation granularity~\\citep{freitas2014comprehensible}. Training a decision tree on the training set $\\mathbb{X}$ yields an interpretable classification algorithm, provided that the number of nodes is kept under a certain threshold. The limit on the tree complexity may come at the expense of predictive performance. Decision trees thus appear as good candidate surrogates for black-box classifiers. \n\nA decision tree surrogate is produced as follows. A black-box $b$ is trained on $\\mathbb{X}$ with the true class labels $y^{(i)}$; the surrogate $f$ is then trained on the black-box predictions $\\hat{y}^{(i)} = b(x^{(i)})$. In production, the classification is performed by the black-box while the explanations are provided by the surrogate decision tree. The fidelity of the surrogate is assessed as the proportion of instances where the surrogate makes the same prediction as the black-box classifier. \n\n\nThe TREPAN algorithm is an instance of an interpretable surrogate tree model~\\citep{Craven1996}. It is model-agnostic and aims at mimicking the classification behaviour of a black-box $b$. It queries the black-box with instances to get predictions $\\hat{y}^{(i)} = b(x^{(i)})$ and then fits an interpretable decision tree. The outline of TREPAN is shown in Algorithm~\\ref{alg:trepan}. The querying of extra instances makes it possible to populate the critical areas of the feature space and thus significantly curbs the tendency of decision trees to overfit. \n\n\nTREPAN uses $m-of-n$ decision rules, which are inspired by \\emph{id2-of-3} decision trees~\\citep{murphy1991id2}. To fit an $m-of-n$ decision rule, the set of the $n$ most discriminative tests on the features for the node is selected using the information gain. Then, in order to satisfy the decision rule of a node, an instance must pass at least $m$ tests among the $n$. 
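\n\nTo make the behaviour of such rules concrete, the short sketch below evaluates an $m-of-n$ rule on a single instance. It is only an illustration: the helper names (\\texttt{Test}, \\texttt{satisfies\\_m\\_of\\_n}) are ours and are not part of the original TREPAN implementation, and the thresholded tests are just one possible form of literal.\n\n\\begin{verbatim}\nfrom dataclasses import dataclass\nfrom typing import List\n\n@dataclass\nclass Test:\n    feature: int           # index of the feature the test looks at\n    threshold: float       # split value of the test\n    negated: bool = False  # True encodes a negated literal such as \"not x2\"\n\n    def passes(self, instance) -> bool:\n        outcome = instance[self.feature] <= self.threshold\n        return (not outcome) if self.negated else outcome\n\ndef satisfies_m_of_n(instance, tests: List[Test], m: int) -> bool:\n    # The instance satisfies the rule if at least m of the n tests pass.\n    return sum(t.passes(instance) for t in tests) >= m\n\n# Example: 2-of-{x1, not x2, x3} evaluated on a toy instance\ntests = [Test(0, 0.5), Test(1, 0.5, negated=True), Test(2, 0.5)]\nprint(satisfies_m_of_n([0.2, 0.9, 0.1], tests, m=2))  # True\n\\end{verbatim}\n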
For instance, given three tests $x_1$, $x_2$ and $x_3$, the decision rule $2$-of-$\\{x_1, \\neg x_2, x_3\\}$ is equivalent to the logical expression $(x_1 \\vee \\neg x_2) \\wedge (x_1 \\vee x_3) \\wedge (\\neg x_2 \\vee x_3)$.\nThe parameters $m$ and $n$ are user-defined upper-bounds: their final values are learnt by the node. The $m-of-n$ decision rules are learnt in a greedy way for computational efficiency. For the sake of conciseness and precision, we refer the reader to the original paper~\\citep{Craven1996b} for the outline of the fitting algorithm of an $m-of-n$ decision rule.\n\nWhile the original TREPAN paper is two decades old already, researchers have kept reassessing its relevance up until recently~\\citep{Sarkar2016}.\nExperiments show that TREPAN has a good fidelity to the black-box and a better accuracy on the test set than a decision tree directly trained on the training set $\\mathbb{X}$~\\citep{Craven1996}. This good performance is attributed to the additional-instance-drawing mechanism, which yields a denser support for the fit of a decision rule and thus a better prediction accuracy. \n\nThe $m-of-n$ decision rule structure improves the accuracy and the fidelity of the decision tree as it allows learning more complex decision boundaries. However, it comes at the price of interpretability of both the node's decision rule and the decision tree overall. A practitioner may find it hard to understand all the possible ${n \\choose m}$ combinations of variables at the same time. Moreover, the contrary of an $m-of-n$ literal may be challenging to conceive as soon as $m>1$ and $n>1$.\n\nIn a Concept Tree, the fit of each split rule is constrained by the \\textit{concepts}: at every node, an $m-of-n$ decision rule is fitted separately on the features of each \\textit{concept}, and the candidate rule with the highest information gain is retained, as outlined in Algorithm~\\ref{alg:concepttreenode}. A node therefore only combines tests on variables belonging to the same \\textit{concept}.\n\n\\begin{algorithm}[tb]\n \\caption{Outline of the TREPAN algorithm}\n \\label{alg:trepan}\n\\begin{algorithmic}\n\\STATE {\\bfseries TREPAN}($\\mathbb{X}$, $b$, $max\\_nodes$, $min\\_sample$)\n\\STATE Initialize the root node $R$ with $S_R \\gets \\mathbb{X}$\n\\STATE Add $\\langle R, S_R \\rangle$ to $Queue$\n\\STATE $n\\_nodes = 1$\n\\WHILE{$Queue \\neq \\emptyset$ and $n\\_nodes < max\\_nodes$}\n\\STATE Remove $\\langle N, S_N \\rangle$ from head of $Queue$ \n\\STATE Fit decision rule of node $N$\n\\FOR{each outcome $t$ of the test}\n \\STATE Initialize a child node $C$\n \\STATE $S_c \\gets$ instances of $S_N$ with outcome $t$ for the test\n \\STATE $S_C \\gets S_c \\cup DrawSample(min\\_sample-|S_c|)$\n \\STATE Get labels from black-box $b$ for $S_C$\n \\IF{$C$ is not pure enough}\n \\STATE Add $\\langle C, S_C \\rangle$ to $Queue$\n \\ENDIF\n \\STATE $n\\_nodes = n\\_nodes + 1$\n\\ENDFOR\n\\ENDWHILE\n\n\\STATE {\\bfseries Return} $R$\n\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\begin{algorithm}[tb]\n \\caption{Construction of a Concept Tree decision rule}\n \\label{alg:concepttreenode}\n\\begin{algorithmic}\n \\STATE {\\bfseries ConstructConceptDecisionRule}($X$, $y$, $concepts$)\n \n \\STATE $best\\_candidate \\leftarrow \\emptyset$\n \\STATE $best\\_ig \\leftarrow 0$\n \\FOR{$c \\in concepts$}\n \\STATE $X_c \\leftarrow$ Select features from $X$ belonging to $c$\n \\STATE $candidate \\leftarrow MofNDecisionRule(X_c, y, m, n)$\n \\STATE $ig \\leftarrow$ Compute information gain for $candidate$\n \\IF{$ig > best\\_ig$}\n \\STATE $best\\_ig \\leftarrow ig$\n \\STATE $best\\_candidate \\leftarrow candidate$\n \\ENDIF\n \\ENDFOR\n \n \\STATE {\\bfseries Return} $best\\_candidate$\n \n\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Experimentation: FRED-MD Macroeconomic Database}\n\nThis paper has introduced the ideas of Concept and Concept Tree, whose main objectives are to provide an accurate surrogate $f$ mimicking a black-box classifier $b$ while being as interpretable as possible. The next paragraphs describe experiments made with the FRED-MD dataset~\\citep{mccracken2016fred}, a publicly released macroeconomic database of 134 monthly U.S. indicators with more than 700 instances. 
Interpretability is critical in economics and our experiments show how Concept Trees may match the requirements of the field. \n\nThe experiments are conducted as follows. The classification target is computed from the \\emph{civilian unemployment rate}: if the value for an instance is lower than in the previous period, the label is set to 0, and to 1 otherwise. Domain-knowledge-based concepts are extracted from the FRED-MD official documentation, which classifies variables into 8 subgroups (see Table~\\ref{tab:fred_succint_desc}). \n\nThe competitors are both flavors of Concept Tree (Concept Tree-Expert for expert-defined concepts and Concept Tree-Correlation for automatically-defined concepts) and the original TREPAN.\nSince the Concept Tree and TREPAN have a similar structure, they share the same parameters for the experiments. The maximal number of nodes $max\\_nodes$ is set to 10. For the split rules, the values of $m-of-n$ are set to $1-of-1$, $3-of-3$ and $5-of-5$. The minimal number of samples $min\\_samples$ required to fit a split rule at a node is 100; additional samples are drawn from the fitted distribution if $\\mathbb{X}$ is not large enough. For Concept Tree-Correlation, the threshold $\\epsilon$ on the correlation $\\rho$ is set to $0.9$, such that $\\left|\\rho(x_j,x_{j'})\\right| \\geq 0.9$.\n\nThe black-box $b$ used is a Random Forest with 200 estimators, with the scikit-learn default values for the other parameters.\nOut-of-sample fidelity is computed by 5-fold cross-validation. At each split, the black-box is fitted on the training set and makes predictions for both the training set and the test set. The Concept Tree and TREPAN instances are then fitted on the training set with the black-box predictions as targets, and their fidelities are measured against the black-box predictions made on the test set. Out-of-sample accuracy is assessed using the same procedure. Fidelity measures the proportion of predictions made by the surrogate that match the predictions made by the black-box, while accuracy measures the proportion of predictions made by the surrogate that match the actual value of the target. Interpretability is assessed by economic expert judgement.\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Experimental results: surrogate accuracy and fidelity as a function of the algorithm, the concept type and the split rule}\n\\label{tab:results}\n\\resizebox{\\columnwidth}{!}{%\n\\begin{tabular}{@{}ccc||cc@{}}\n\\toprule\n\\thead{Algorithm} & \\thead{Concept Type} & \\thead{Split Rule} & \\thead{Surr. Accuracy} & \\thead{Surr. 
Fidelity} \\\\ \\midrule \\midrule\nConcept Tree & Expert & \\multirow{ 3}{*}{$1-of-1$} & $63\\% \\pm 4\\%$ & $65\\% \\pm 9\\%$ \\\\\nConcept Tree & Correlation & & $68\\% \\pm 6\\%$ & $71\\% \\pm 6\\%$ \\\\\nTREPAN & \/ & & $\\bm{75\\% \\pm 9\\%}$ & $\\bm{74\\% \\pm 7\\%}$ \\\\ \\hline\nConcept Tree & Expert & \\multirow{ 3}{*}{$3-of-3$} & $69\\% \\pm 9\\%$ & $\\bm{76\\% \\pm 4\\%}$ \\\\\nConcept Tree & Correlation & & $\\bm{72\\% \\pm 11\\%}$ & $75\\% \\pm 5\\%$ \\\\\nTREPAN & \/ & & $68\\% \\pm 8\\%$ & $72\\% \\pm 6\\%$ \\\\ \\hline\nConcept Tree & Expert & \\multirow{ 3}{*}{$5-of-5$} & $\\bm{71\\% \\pm 4\\%}$ & $\\bm{73\\% \\pm 2\\%}$ \\\\\nConcept Tree & Correlation & & $70\\% \\pm 8\\%$ & $71\\% \\pm 8\\%$ \\\\\nTREPAN & \/ & & $67\\% \\pm 5\\%$ & $71\\% \\pm 4\\%$ \\\\\n\\bottomrule\n\\end{tabular}%\n}\n\\end{table}\n\n\\subsection{Results}\nTable~\\ref{tab:results} presents the cross-validated accuracies and fidelities for TREPAN, the Concept Tree with expert-defined concepts and the Concept Tree with automatically defined clusters. The black-box mean accuracy over the folds is $82\\% \\pm 4\\%$.\n\nThe experiments show that Concept Trees provide surrogates whose fidelity and accuracy match the performance of TREPAN trees, while their interpretability may be significantly enhanced. TREPAN leads in terms of accuracy and fidelity for $1-of-1$ nodes, Concept Tree-Expert for $3-of-3$ nodes, and Concept Tree-Correlation for $5-of-5$ nodes; however, the non-negligible standard deviations and the setup of this preliminary experiment (number of folds and datasets) do not allow for a definitive conclusion. Still, the experiment highlights the relevance of the Concept Tree in terms of accuracy and fidelity: as things stand, the Concept Tree is at least as relevant as TREPAN.\n\nFigure \\ref{fig:trees} in the Appendix displays the trees generated by TREPAN (Figure \\ref{fig:tree_trepan}), the Concept Tree with expert-defined concepts (Figure \\ref{fig:concept_expert}) and the Concept Tree with correlation-based defined concepts (Figure \\ref{fig:concept_correlated}). \n\nFrom a macroeconomic point of view, the Concept Tree yields meaningful high-level explanations of the workings of the black-box classifier. The Concept Tree-Expert highlights that Labor Market related variables are the most important in the prediction of the target, followed by Output and Income related variables and Consumption related variables. The Concept Tree-Correlation also sheds light on the importance of nodes referring to Labor Market data. Overall, the Concept Tree enhances the interpretability of surrogate trees by \\textbf{structuring} the explanations.\n\nIn Concept Tree-Expert (based on domain knowledge), explanations are structured by sharing a common \"language\" with users or experts. It provides the big picture with one general \\textit{concept} per node. The detailed analysis of a node is eased because only related, homogeneous variables are assembled. There is an intuitive relation between high-level explanations (concepts) and low-level explanations (raw variables).\n\nIn Concept Tree-Correlation, whose concepts are computed automatically from variable correlations, part of the domain knowledge can be recovered. Concept Tree-Correlation also presents the advantage of gathering dependent variables at each node, avoiding arbitrary choices between correlated variables when building a test.\n\nIn contrast, TREPAN trees use an idiosyncratic language not shared by the practitioner. 
Associations of tests in a TREPAN node generate confusion by gathering variables that are hardly related from a domain-knowledge point of view. Such nodes obstruct the understanding by preventing the user from getting the big picture.\n \nTo illustrate these arguments, we focus on the top three nodes of the trees in Figures \\ref{fig:tree_trepan}, \\ref{fig:concept_expert} and \\ref{fig:concept_correlated}. In the TREPAN tree (Figure \\ref{fig:tree_trepan}), the colored variable names highlight that 8 out of the 9 chosen variables are part of the Labor Market concept. This structure is explicitly displayed by the Concept Tree-Expert (Figure \\ref{fig:concept_expert}) as the concept chosen in the root node, facilitating the interpretation of the tree by referring to a high-level concept. We can also notice in the TREPAN tree that the left child of the root node chose the \\textit{Civilian Employment, gr} feature for its first rule, whereas the right child node chose the \\textit{All Employees: Total nonfarm, gr} feature instead. However, cluster 3 in the Concept Tree-Correlation (Figure \\ref{fig:concept_correlated}) explicitly shows that these features are highly correlated, suggesting that they are interchangeable.\n\n\n\n\n\\section{Conclusion}\n\nThe present paper introduces \\textit{concepts}, a meaningful way to group dependent variables, and Concept Trees, an alternative tree-based surrogate model that provides both high-level and detailed explanations of black-box classifiers. The grouping of variables into \\textit{concepts} makes it possible to overcome the false sense of simplicity conveyed by simpler decision tree surrogates, which may give an artificially high importance to a given variable picked among a set of correlated variables, thus obscuring the bigger picture. The use of \\textit{concepts} also helps practitioners make sense of otherwise cryptic $m-of-n$ literals, by relying on a higher-level representation of the data. Compared to TREPAN, Concept Trees produce surrogates that have a comparable size and are as accurate, but are more easily understandable to a human thanks to a better organization of the information along higher-level representations that significantly enhance the interpretability of the surrogate.\nExperiments were conducted using FRED-MD, a macroeconomic database whose documentation includes a grouping of variables. The Concept Tree was applied to this data using both expert-defined concepts derived from the data documentation and concepts built using a simple correlation-based clustering algorithm. First results show a notable improvement in human-readability while the accuracy and fidelity of the surrogate are preserved.\nFurther research could involve a deeper assessment of our propositions, both quantitatively and qualitatively. It could also be relevant to explore alternative clustering algorithms designed to produce more relevant \\textit{concepts}.\nFurther modifications to the Concept Tree algorithm may improve performance: currently, following TREPAN's principle, one concept can only be used once in a decision path. Since a concept encompasses several variables, the accuracy and fidelity of the surrogate may suffer from this probably too severe constraint.\n\n\n\\bibliographystyle{icml2019}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\chapter[Pupil Plane Phase]{Pupil Plane Phase Apodization}\\label{sec:ppp}\n\n\\author[M. Kenworthy, J. Codona and F. Snik]{Matthew A. Kenworthy, Johanan\nL. Codona\\footnote{Steward Observatory, 933 N. 
Cherry Avenue, Tucson, AZ\n85721, USA} and Frans Snik}\n\n\\address{Leiden Observatory, Leiden University, \\\\\nP.O. Box 9513, 2300 RA Leiden, The Netherlands, \\\\\nkenworthy@strw.leidenuniv.nl}\n\n\\begin{abstract}\n\nPhase apodization coronagraphs are implemented in a pupil plane to\ncreate a dark hole in the science camera focal plane.\nThey are successfully created as ``Apodizing Phase Plates'' (APPs) using\nclassical optical manufacturing, and as ``vector-APPs'' using\nliquid-crystal patterning with essentially achromatic performance.\nThis type of coronagraph currently delivers excellent broadband contrast\n($\\sim$10$^{-5}$) at small angular separations (few $\\lambda\/D$) at\nground-based telescopes, owing to their insensitivity to tip\/tilt\nerrors.\n\n\\end{abstract}\n\n\\body\n\n\\section{Introduction}\n\nPupil-plane apodization techniques (amplitude, phase, or complex) differ\nfrom focal plane mask coronagraphs in that they affect all objects in\nthe field in an identical fashion.\nThe main goal of such pupil-plane coronagraphs is to enforce dark holes\nin the ensuing point spread function (PSF) in which faint companions can\nbe directly detected and characterized.\nSince the star and companion have the same PSF, the halo should be\nsuppressed while preserving the starlight in the core as much as\npossible, {\\it i.e.}~a high Strehl ratio PSF.\nIn this situation, the ``noise'' is governed by the PSF diffraction halo\nplus any diffuse background, while the ``signal'' is contained in the\nPSF core.\n\nThe phase-only ``Apodizing Phase Plate''\\cite{Kenworthy07,\nKenworthy13,Kenworthy10a,Kenworthy10b} (APP) coronagraphs have now been\nsuccessfully applied on-sky at ground-based telescopes.\nThe main benefits of APPs include a high contrast inside the dark hole\n($\\sim$10$^{-4}$--10$^{-6}$), at a small inner working angle $\\sim$$1.5\n\\lambda\/D$, with complete insensitivity to tip\/tilt errors (and\npartially resolved stellar disks) that usually limit focal-plane\ncoronagraphs.\nThis invariance of the PSF additionally enables beam-switching for\nthermal background removal, and observations of multiple star systems.\nWith the introduction of advanced liquid-crystal technology for the\nvector-APP coronagraph\\cite{vAPP,vAPP-prototype,vAPP-MagAO}, it has also\nbecome efficient over spectral bandwidths of more than an octave, at\nwavelengths from 300 to 30,000 nm\\cite{Packham2010}.\nThe extreme phase patterns enabled by liquid-crystal writing techniques\ncan now also produce dark holes with various shapes, including\ncomplementary 180$^\\circ$ D-shaped dark holes and 360$^\\circ$\ndonut-shaped dark holes.\nAs a single pupil-plane optic, the (vector-)APP is easily implemented in\na filter wheel in existing instruments, and is fully compatible with\ncryovacuum (and likely also space-based) operation.\n\n\\section{Theory}\n\nThe 1-D apodization problem has been studied for a long time, including\nslit apodization in spectroscopy and pulse shaping to reduce channel\nbandwidth in telegraphy, by apodizing in amplitude\\cite{Jacquinot64}.\nThe family of functions to describe this are the Slepian functions and\nthe Prolate Spheroidal wavefunction\\cite{slepian1965ast}.\nSince transmission apodization is linear, it can achieve a high degree\nof suppression between the PSF and the halo beyond a selected inner\nworking angle (IWA), and in general the apodizations are complex with\nboth transmission and phase.\nThe accurate manufacture of complex amplitude masks is non-trivial and\ncan result 
in low transmission efficiencies.\n\nPhase-only apodization theory was initially developed for removing\nspeckles generated by residual optical aberrations in high contrast\nimaging experiments\\cite{Malbet95}, where wavefront sensing in the final\nfocal plane of a coronagraph forms a closed loop with a deformable\nmirror (DM) in the optical system.\nA sinusoidal ripple on the DM forms a diffraction grating in the phase\nof the wavefront, generating a pair of speckles that are copies of the\nAiry core of the central PSF.\nThe appropriate choice of spatial phase and amplitude of the ripple\napplied to the DM destructively interferes with speckles generated by\naberrations in the optical system.\nThe same principle can be generalized to cancel out the diffraction\nrings of the PSF itself, as demonstrated on-sky by the addition of coma\ninto an adaptive optics system to cancel out part of the first Airy\nring\\cite{Serabyn07}.\nApodization in phase over a two-dimensional region does not yet have an\nanalytic solution.\nSuperposing many different phase ripples in the pupil plane to suppress\nthe diffraction pattern over a region of interest (ROI - typically\ndefined as a D-shaped region next to the Airy core of the PSF) is\nchallenging, since the speckles add vectorially and interfere with each\nother, making it a nonlinear problem.\nRef.~\\refcite{Codona2007} searched for phase-only apodization solutions\nthrough a modal basis approach. An ROI is defined in a complex\namplitude focal plane, where the diffraction halo is to be minimized.\nA complex amplitude field is defined in the pupil plane, and a Fourier\nimaging operator is defined that maps from the pupil plane into the ROI.\nSingular Value Decomposition of this operator produces a modal basis set\nof complex pupil amplitudes, ordered canonically from the most power\ncontained within the ROI to the least.\nThese modes typically have complex amplitudes in the pupil plane, so\ntheir complex amplitude is normalized to unity to make them phase-only\napodization.\nThese ``antihalo'' modes are subtracted off the complex amplitude of the\npupil plane, and the process is repeated.\nThe antihalo modes extend a short distance beyond the ROI, and if the\nIWA is within the first Airy ring, flux from the core of the PSF is\ndetrimentally removed as well.\nCare is needed to suppress these modes by imposing additional\nconstraints to maximize the PSF core encircled energy.\nIf not properly accounted for, phase wrapping can also occur when the\npeak-to-valley phase apodization is greater than $2\\pi$.\n\nNew algorithms have been developed at Leiden Observatory by Doelman,\nKeller and Por.\nDoelman generates focal plane dark zones using a combination of\nphase-only pupil modes.\\cite{Doelman16}\nA simulated annealing approach is used, where the mode amplitudes are\nrandomly adjusted.\nSolutions that improve the dark region are kept, but worse solutions are\noccasionally accepted as well to escape local minima.\nKeller uses a Gerchberg-Saxton\\cite{Gerchberg72} method, switching\nbetween the pupil plane and focal plane.\nConvergence to a given contrast level is increased by an order of\nmagnitude using Douglas-Rachford operator splitting\\cite{Douglas56}.\nPor\\cite{por2017optimal} generalizes an algorithm by\nCarlotti\\cite{Carlotti2013} for general complex amplitudes in the pupil\nplane.\nStrehl ratio maximisation for this mask is a linear operation solved by\nlarge scale optimizer, and phase-only solutions are naturally found\nthrough this 
approach.\n\n\\section{First generation APPs using classical phase}\n\nThe manufacture of APP solutions requires the variation of phase across\nthe pupil plane of the camera, and the development of free-form optic\nmanufacture with notable departures from sphericity using\ncomputer-controlled diamond turning\\cite{Davis07} encoded the phase\npatterns as variations in the thickness of a high refractive index\ntransmissive substrate.\nFirst light observations of an APP with diamond turned\noptics\\cite{Kenworthy07} demonstrated the viability of the manufacturing\ntechnique and of the theory.\nThe success of the prototype led to APP coronagraphs on the 6.5m MMTO\ntelescope in Arizona\\cite{Kenworthy13} and on the Very Large Telescope\nin Chile\\cite{Kenworthy10a,Kenworthy10b}.\nThe VLT APP led to the first coronagraphic image of the extrasolar\nplanet $\\beta$ Pictoris b\\cite{Quanz10} and the discovery of the\nextrasolar planet HD~100546b\\cite{Quanz12}.\n\nDiamond turning only allows for low spatial frequencies in the azimuthal\ndirection of the cutting tip, and the classical phase plate\nmanufacturing was inherently chromatic.\nAttempts to achromatize the APP using doublets proved highly\nchallenging\\cite{Kenworthy10c}.\n\n\\section{The Vector-APP}\n\nThe main limitations of the APP coronagraph (chromaticity, limited\ncoverage around the star, limited phase pattern accuracy) were solved by\nthe introduction of the vector-APP (vAPP)\\cite{vAPP}.\nIn a similar way as for the Vector Vortex Coronagraph\\cite{VVC}, the\nvAPP replaces the classical phase pattern ($\\phi_{\\textrm{c}}[u,v] =\nn(\\lambda) \\Delta d[u,v]$) with the so-called\nPancharatnam\\cite{Pancharatnam}-Berry\\cite{Berry} phase or ``geometric\nphase''\\cite{Escuti-geometricphase}.\nThe vAPP phase pattern is imposed by a half-wave retarder with a\npatterned fast axis orientation $\\theta[u,v]$.\nThe geometric phase is imprinted on incident beams decomposed according\nto circular polarization state: $\\phi_{\\textrm{g}}[u,v] =\n\\pm2\\cdot\\theta[u,v]$, with the sign depending on the circular\npolarization handedness.\nAs this fast axis orientation pattern does not vary as a function of\nwavelength (with the possible exception of an inconsequential\noffset\/piston term), the geometric phase is strictly achromatic.\nA simple Fraunhofer propagation from the pupil $[u,v]$ to the focal\nplane $[x,y]$ shows that after splitting circular polarization states\nthe two ensuing coronagraphic PSFs are point-symmetric\n($PSF_{\\textrm{L}}[x,y] = PSF_{\\textrm{R}}[-x,-y]$), and therefore, in\nthe case of D-shaped dark holes, delivers complementary PSFs that\nfurnish instantaneous 3$60^\\circ$ search space around each star.\n\nVector-APP devices are produced by applying two breakthrough\nliquid-crystal techniques: any desired phase pattern is applied onto a\nsubstrate glass through a \\textit{direct-write\nprocedure}\\cite{directwrite} that applies the orientation pattern\n$\\theta[u,v]$ by locally polymerizing the alignment layer material in\nthe direction set by the controllable polarization of a scanning UV\nlaser.\nConsecutively, birefringent liquid-crystal layers are deposited on top\nof this alignment layer.\nSeveral self-aligning layers (``\\textit{Multi-Twist Retarders}'';\nMTR\\cite{MTR}) with predetermined parameters (birefringence dispersion,\nthickness, nematic twist) yield a linear retardance that is close to\nhalf-wave over the specified wavelength range.\nThe vAPP can become efficient over a large wavelength range (up to 
more\nthan an octave), while any phase pattern can be written with high\naccuracy.\n\n\\subsection{Prototyping and first on-sky results}\n\nThe first broad-band vAPP device was fully characterized in the lab at\nvisible wavelengths (500--900 nm)\\cite{vAPP-prototype}.\nThe main limitation of the contrast performance inside the dark hole was\nthe occurrence of leakage terms that produced a faint copy of the\nregular PSF on top of the coronagraphic PSFs.\nThese leakage terms are caused by small offsets to the half-wave\nretardance of the vAPP device, and offsets from quarter-wave retardance\nof the quarter-wave plate that, together with a Wollaston prism,\naccomplishes the (broad-band) circular polarization splitting.\nThis issue was resolved with the introduction of the\n``grating-vAPP''\\cite{grating-vAPP}, which implements the circular\npolarization splitting by superimposing a tilt (i.e.~a ``polarization\ngrating''\\cite{Packham2010}) pattern on top of the coronagraphic pupil\nphase pattern, which, by virtue of the properties of the geometric\nphase, very efficiently sends the coronagraphic PSFs into grating orders\n$\\pm$1, and leaves all the leakage terms in the zeroth order.\nThe grating-vAPP also greatly simplifies the optical configuration, as\nall the manipulation takes place within one single (flat) optic.\nThe coronagraphic PSFs are now subject to a lateral grating dispersion\nterm and so the grating-vAPP can only be used in combination with\nnarrow-band filters, although the wavelength range throughout which\nthese filters can be applied can still be very large.\n\nThe first grating-vAPP successfully demonstrated on-sky was installed at\nthe MagAO\/Clio instrument attached to the 6.5-m Magallan-Clay telescope\nin Chile\\cite{vAPP-MagAO} (\\fref{MagAO-vAPPs}a--c).\nThe device was designed and built to operate from 2--5 $\\mu$m, covering\nthe infrared atmospheric K, L and M-bands.\nThe first-light observations demonstrated excellent suppression of the\nstellar diffraction halo in the complementary dark holes (see\n\\fref{MagAO-vAPPs}c).\nDetailed analysis of the data demonstrated a 5-$\\sigma$ contrast for\npoint source detection of $\\sim$$10^{-5}$ at 2.5--7\n$\\lambda\/D$\\cite{vAPP-MagAO}.\nThe contrast performance is greatly enhanced by combining the two\ncomplementary dark holes through a simple rotation-subtraction procedure\nto further suppress the wind-driven starlight halo in the dark holes,\nwhich is caused by finite AO loop speed.\n\\Fref{MagAO-vAPPs}c shows the presence of the leakage term PSF in\nbetween the coronagraphic PSFs, which can be used as an astrometric and\nphotometric reference, in the (frequent) case that the coronagraphic PSF\ncores are saturated.\n\n\\begin{figure}[ht] \\includegraphics[width=\\textwidth]{MagAO-vAPPs}\n\n \\caption{Phase patterns, theoretical and on-sky PSFs (logarithmic\nscale) for the two vAPP devices installed at MagAO.\na) Theoretical phase pattern for a 180$^\\circ$ dark hole covering 2--7\n$\\lambda\/D$, b) the ensuing theoretical PSF, c) the on-sky PSFs at MagAO\nfor the star $\\eta$ Crucis at 3.9 $\\mu$m.\nd) Theoretical phase pattern for a 360$^\\circ$ dark hole covering 3--7\n$\\lambda\/D$, e) the ensuing theoretical PSF, f) the on-sky PSFs at MagAO\nfor the binary star $\\beta$ Centauri at 3.9 $\\mu$m.\nPhase pattern designs by Christoph Keller. 
Data processing by Gilles\nOtten\\cite{vAPP-MagAO}.}\n\n\\label{MagAO-vAPPs}\n\\end{figure}\n\n\\subsection{360 degree APP solutions}\n\nAs part of the algorithm exploration of the APP surface, a family of\nfunctions was found that showed 360 degrees of suppression around the\ncentral star.\nThese solutions have lower Strehl ratios for the star (typically\n20--40\\%) with larger IWA compared to the 180$^\\circ$ dark holes, and\nthese phase pattern solutions are pathological in nature, with rapid\nphase changes over small scales.\nThe advent of liquid-crystal patterning encouraged us to revisit these\n360$^\\circ$ solutions, and test them in the lab and on-sky.\n\\Fref{MagAO-vAPPs}d-f shows the phase pattern and ensuing PSFs for the\nexperimental vAPP device at MagAO.\nThe lower row of figures shows that the liquid-crystal manufacture\nsuccessfully reproduces the complex phase pattern, and this on-sky image\n(\\fref{MagAO-vAPPs}f) shows a fainter binary stellar companion to the\nright of the primary star's PSF.\n\n\\section{Future Directions}\n\nOur team is currently installing different vAPP coronagraphs at several\ninstruments at large telescopes around the world, and working on novel\ndesigns for the future extremely large telescopes.\nForeseeable future developments of the vector-APP as a separate optical\ncomponent, and as integral part of a high-contrast imaging system\ninclude:\n\n\\begin{itemize}\n\n\\item The combination of several grating layers in a\n``\\textit{double-grating-vAPP}'' to recombine the two coronagraphic PSFs\nwith 360$^\\circ$ dark holes to feed an integral-field unit while\nrejecting the leakage terms. \n\n\\item By prescribing a specific retardance profile as a function of\nwavelength, it is possible to build a \\textit{wavelength-selective vAPP}\ndevice, that operates as a regular vAPP coronagraph at the science\nwavelengths, and acts like a regular glass plate at the spectral range\nof a wavefront sensor behind it. \n\n\\item The pupil phase manipulation of the vAPP can be extended by\namplitude manipulation in the pupil to create complex\napodizers\\cite{Carlotti2013}, and by phase\/amplitude masks in the focal\nplane to yield \\textit{hybrid coronagraphy}\\cite{Ruane2015}.\n\n\\item As this technology is likely compatible with operation in space,\nit is opportune to characterize the performance of vAPP-like\ncoronagraphs at the \\textit{extreme contrast levels} ($\\sim$10$^{-9}$)\nof space-based high-contrast imaging.\n\n\\item To adapt the vAPP phase pattern to the observational needs, the\nobserving conditions, and segmented pupils with variable configurations,\nactive liquid-crystal devices will be developed to establish\n``\\textit{adaptive coronagraphy}''. Such a system can then deliver dark\nholes of various geometry and depth, depending on whether the observer\nis interested in detecting exoplanets or characterizing known targets.\n\n\\item As the vAPP relies on polarization splitting, it is possible to\ndesign an optimal system for \\textit{coronagraphic\npolarimetry}\\cite{vAPP-polarimetry}, particularly with the\n360$^\\circ$-designs.\n\n\\item The fact that the vAPP produces several PSFs for the same star at\nthe focal plane makes it an attractive option for implementing\n\\textit{focal-plane wavefront sensing}, for instance through\nphase-diversity techniques. 
Another promising approach involves the\nincorporation of an additional pupil phase pattern which generates pairs\nof PSF copies around the main PSFs, with each pair encoding a wavefront\nerror mode through an intensity difference\\cite{Wilby2016}.\n\n\\end{itemize}\n\n\\section{Acknowledgments}\n\nThe research of FS leading to these results has received funding from\nthe European Research Council under ERC Starting Grant agreement 678194\n(FALCONER).\n\n\\bibliographystyle{ws-rv-van}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nHeavy-flavour hadrons are suitable to probe the conditions of the high-energy-density Quark-Gluon Plasma (QGP) medium formed in ultra-relativistic heavy-ion collisions. Heavy quarks are mainly produced in hard scattering processes in the early stage of Pb--Pb collisions. The time-scale of their production ($\\Delta\\tau< 1\/2m_{c,b}\\sim 0.07~{\\rm fm}\/c~{\\rm for~charm~and}\\sim 0.02~{\\rm fm}\/c~{\\rm for~beauty}$) is, in general, shorter than the formation time of the QGP, $\\tau_{0}\\sim0.1-1$~fm\/$c$. During their propagation through the medium, heavy quarks interact with its constituents and lose energy. QCD energy loss is expected to occur via both inelastic (radiative energy loss via medium-induced gluon radiation)~\\cite{Baier1997265} and elastic (collisional energy loss) processes~\\cite{PhysRevD.44.1298}. The energy loss for quarks is expected to be smaller than for gluons, due to the smaller colour coupling factor of quarks with respect to gluons. In addition, the ``dead-cone effect'' should reduce small-angle gluon radiation for heavy quarks with moderate energy compared to their mass, thus further attenuating the effect of the medium~\\cite{PhysRevD.69.114003}. \n\nThe nuclear modification factor $R_{\\rm AA}(p_{\\rm T})=({\\rm d}N_{\\rm AA}\/{\\rm d}p_{\\rm T})\/(\\langle T_{\\rm AA}\\rangle\\cdot{\\rm d}\\sigma_{\\rm pp}\/{\\rm d}p_{\\rm T})$, where $\\langle T_{\\rm AA}\\rangle$ is the average nuclear overlap function for a given centrality class, is sensitive to the interaction of hard partons with the medium. At large $p_{\\rm T}$, $R_{\\rm AA}$ is expected to be mostly sensitive to the average energy loss of heavy-quarks in the hot and dense medium. The questions whether low-momentum heavy quarks can reach thermal equilibrium with the medium constituents and participate in the collective expansion of the system~\\cite{Batsouli200326,Greco2004202}, and whether heavy quarks can hadronise also via recombination with other quarks from the medium~\\cite{Greco2004202,Andronic200336} are still open. \nThese questions are addressed by studying $R_{\\rm AA}$ at low and intermediate $p_{\\rm T}$ and measuring the azimuthal anisotropy of heavy-flavour hadron production with respect to the reaction plane, defined by the beam axis and the impact parameter of the collision. The hadronisation mechanisms of the c quark are also investigated through the measurement of ${\\rm D}_{\\rm s}^+$ production in nucleus--nucleus collisions compared to that in pp collisions~\\cite{Anastasia}.\n\n\\section{D-meson reconstruction}\n\nThe decays ${\\rm D^0}\\rightarrow {\\rm K^-}\\pi^{+}$, ${\\rm D^+}\\rightarrow {\\rm K^-}\\pi^{+}\\pi^{+}$ and ${\\rm D^{*+}}\\rightarrow {\\rm D^0}\\pi^{+}$, and their charge conjugates, were reconstructed at mid-rapidity ($|y|<0.8$) in minimum-bias Pb--Pb collisions using the ALICE central barrel detectors. 
The D-meson selection was based on the precise reconstruction of the primary and secondary (decay) vertices, which is provided by the Inner Tracking System (ITS).\nCharged pions and kaons were identified using the information provided by the Time Projection Chamber (TPC) and the Time Of Flight (TOF) detectors~\\cite{Abelev:2014ffa}.\nThe reference proton--proton cross section at $\\sqrt{s_{\\rm NN}}=2.76$~TeV, needed to compute $R_{\\rm AA}$, was obtained by a pQCD-based energy scaling of the $p_{\\rm T}$-differential cross section measured at $\\sqrt{s}=7$~TeV~\\cite{ALICE:2011aa}. \n\n\\section{D-meson nuclear modification factor}\n\n\\begin{figure}[!t]\n\\begin{minipage}[t]{0.48\\textwidth}\n\\includegraphics[height=0.92\\textwidth]{2014-May-15-pPbAndPbPb.eps}\n\\caption{\\label{fig:DmesonRAARpA}Average $R_{\\rm pPb}$ of prompt ${\\rm D^0}$, ${\\rm D^+}$ and ${\\rm D^{*+}}$ mesons~\\cite{Abelev:2014hha} compared to the prompt D-meson $R_{\\rm AA}$ in Pb--Pb collisions in the 0--20\\% and 40--80\\% centrality classes~\\cite{ALICE:2012ab}.}\n\\end{minipage}\\hspace{1pc\n\\begin{minipage}[t]{0.48\\textwidth}\n\\includegraphics[height=0.9\\textwidth]{2014-May-16-AverageDMesonsRaa_075_ComparisonWithModels_150514.eps}\n\\caption{\\label{fig:DmesonRAACentral}Average prompt D-meson $R_{\\rm AA}$ in Pb--Pb collisions in the 0--7.5\\% centrality class compared to theoretical models including parton energy loss.}\n\\end{minipage} \n\\end{figure}\n\nA large suppression of the D-meson $R_{\\rm AA}$ (factor 3-5) was observed for $p_{\\rm T}>5$~GeV\/$c$ in central Pb--Pb collisions at $\\sqrt{s_{\\rm NN}}=2.76~{\\rm TeV}$ (Figure~\\ref{fig:DmesonRAARpA})~\\cite{ALICE:2012ab}. The comparison of $R_{\\rm AA}$ with the D-meson nuclear modification factor measured in p--Pb collisions at $\\sqrt{s_{\\rm NN}}=5.02~{\\rm TeV}$~\\cite{Abelev:2014hha} shows that the expected cold nuclear matter effects are smaller than the uncertainties on $R_{\\rm AA}$ for $p_{\\rm T}>3~{\\rm GeV\/}c$. Therefore, the suppression observed in central Pb--Pb collisions is predominantly induced by final-state effects due to the presence of a hot and dense partonic medium. Figure~\\ref{fig:DmesonRAACentral} shows the D-meson $R_{\\rm AA}$ measured in Pb--Pb collisions in the centrality class 0--7.5\\%, compared to theoretical models including charm interactions with the medium constituents.\nThe large suppression observed, e.g. of a factor 6 at $p_{\\rm T}=10~{\\rm GeV\/}c$, is described by the models that include radiative and collisional heavy-quark energy loss.\n\n\\begin{figure}[!t]\n\\begin{minipage}[t]{0.48\\textwidth}\n\\includegraphics[height=0.93\\textwidth]{2015-Jun-25-DmesPions_8to16_CompDjordjevic_200515.eps}\n\\caption{\\label{fig:DmesonPionRAA}$R_{\\rm AA}$ of D mesons~\\cite{Adam:2015nna} and charged pions~\\cite{Abelev2014196} as a function of centrality compared to a pQCD model including mass dependent radiative and collisional energy loss~\\cite{Djordjevic2014298}.}\n\\end{minipage}\\hspace{1pc\n\\begin{minipage}[t]{0.48\\textwidth}\n\\includegraphics[height=0.93\\textwidth]{2015-Jun-26-DmesNonPromptJpsi_8to16_CompDjordjevic_110615.eps}\n\\caption{\\label{fig:DmesonJPsiRAA}D~\\cite{Adam:2015nna} and non-prompt J\/$\\psi$ meson~\\cite{CMSnonprompt} $R_{\\rm AA}$ vs. 
centrality compared to a pQCD model including mass dependent radiative and collisional energy loss~\\cite{Djordjevic2014298}.}\n\\end{minipage} \n\\end{figure}\n\nFigures~\\ref{fig:DmesonPionRAA} and~\\ref{fig:DmesonJPsiRAA} show the D-meson $R_{\\rm AA}$ as a function of centrality (quantified in terms of the average number of participant nucleons in the Pb--Pb collision)~\\cite{Adam:2015nna} along with the $R_{\\rm AA}$ of charged pions~\\cite{Abelev2014196} and non-prompt J\/$\\psi$ mesons measured by the CMS Collaboration~\\cite{CMSnonprompt}, respectively. The focus here is on the study of the parton energy loss; thus, the results are presented for the high-$p_{\\rm T}$ intervals $8-16~{\\rm GeV\/}c$ for the pions and D mesons and $6.5-30~{\\rm GeV\/}c$ for the non-prompt J\/$\\psi$ mesons. The D-meson elliptic flow $v_2$ was also measured in semi-central Pb--Pb collisions and found to be larger than zero and, thus, consistent with the expectations from collective flow.\n\n\\section{Conclusions}\nThe results obtained by ALICE using the data from the LHC Run-1 (2010--2013) indicate a strong suppression of the D-meson production in central Pb--Pb collisions for $p_{\\rm T}>3~{\\rm GeV\/}c$, which is mainly due to the interactions of heavy quarks with the hot and dense medium. \nThe smaller $R_{\\rm AA}$ observed for D mesons with respect to non-prompt J\/$\\psi$ confirms the mass-dependent nature of the energy-loss mechanisms.\nThe non-zero $v_2$ of D mesons and the azimuthal dependence of the ${\\rm D^0}$ $R_{\\rm AA}$ indicate that, during the collective expansion of the medium, the interactions between its constituents and the charm quarks transfer to the latter information on the azimuthal anisotropy of the system. During the LHC Run-2 we expect to collect a data sample larger by a factor 5-10 with respect to Run-1, depending on collision centrality. It will thus be possible to measure the D-meson $R_{\\rm AA}$ and $v_2$ with a better precision and in an extended $p_{\\rm T}$ range.\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{section:intro}\n\nMassive MIMO is one of the most relevant technologies in wireless communications \\cite{marzetta,rusek}. Among the key features of this technology are high spectral efficiency and improved link reliability, making it a key enabler for 5G. Massive MIMO exploits spatial diversity far beyond traditional MIMO systems by employing a large-scale antenna array at the base-station (BS) with hundreds or possibly even thousands of elements. This large number of elements allows for unprecedented spatial resolution and high spectral efficiency, while providing simultaneous service to several users within the same time-frequency resource.\n\nDespite all the advantages of Massive MIMO, there are still challenges from an implementation point of view. One of the most critical ones is sending data from the BS antennas to the central processing unit (CPU) and vice-versa, and the high interconnection throughput it requires. In current set-ups, uplink detection algorithms based on a zero-forcing (ZF) equalizer typically rely on a centralized architecture, shown in Fig. \\ref{fig:BS_centralized}, where baseband samples are collected in the CPU for obtaining channel state information (CSI) and performing the subsequent matrix inversion, which allows data estimation and detection. The same argument is valid for downlink precoding. In order to avoid dedicated links between antenna modules and the CPU, a shared bus is typically used to exchange this data. 
In the case of the LuMaMi testbed \\cite{lumami,lumami2}, the shared bus was reported to support an aggregated data-rate of 384 Gbps, which exceeds base-station internal interface standards such as eCPRI \\cite{ecpri}. Additionally, the pin-count of integrated circuits (IC) limits the number of links the IC can handle simultaneously and thus the throughput. Due to this high data-rate, the power appears as another potential limitation. This combination of factors is considered the main bottleneck in the system and a clear limitation for array scalability. In this paper we address the inter-connection throughput limitation by decreasing its value per link and consequently reducing the impact of the other two (pin-count and power).\n\nThe inter-connection bottleneck has been noted in several previous studies on different architectures for Massive MIMO BSs \\cite{argos,Bertilsson,puglielli,lumami,cavallaro,li_jeon,jeon_li}. As a solution, most of these studies recommend moving to a decentralized approach where uplink estimation and downlink precoding can be performed locally in processing nodes close to the antennas (final detection can still be done in a CPU). However, to achieve that, CSI still needs to be collected in the CPU, where matrix inversion is performed \\cite{argos,Bertilsson,lumami}, imposing an overhead in data shuffling.\n\nThe CSI problem is addressed in \\cite{cavallaro}, where CSI is obtained and used only locally (not shared) for precoding and estimation, with performance close to MMSE. However, this architecture relies on the CPU for exchanging a certain amount of consensus information between the nodes, and this exchange negatively impacts the processing latency and throughput \\cite{li_jeon}, and therefore limits the scalability of this solution. In order to solve these problems, feedforward architectures for detection \\cite{jeon_li} and precoding \\cite{li_jeon} have been proposed recently, where the authors present a partially decentralized (PD) architecture for detection and precoding, which achieves the same results as linear methods (MRC, ZF, L-MMSE), and therefore becomes optimal when $M$ is large enough. Partial Gramian matrices from the antennas are added up before arriving at a processing unit where the Gramian is inverted.\n\nIn \\cite{argos}, a flat-tree structure with daisy-chained nodes was presented. The authors propose conjugate beamforming as a fully decentralized method, with the corresponding penalty in system capacity. The same work also points out that this topology severely compromises latency. A more detailed analysis of latency is thus needed to evaluate the algorithm.\n\nIn this article we propose a fully decentralized architecture and a recursive algorithm for Massive MIMO detection and precoding, which is able to achieve a very low inter-connection data-rate without compromising latency.\nThe proposed algorithm is pipelined so that it runs in a distributed way at the antenna processing units, providing local vectors for estimation\/detection that approximate the zero-forcing solution.\nWe make use of the Coordinate Descent (CD) algorithm, which is detailed in Section \\ref{section:CD}, to compute these vectors.\n\nThere is previous work based on CD, such as \\cite{li_CD}. 
The main difference is that the coordinate update in \\cite{li_CD} is done on a per-user basis, i.e., a different user index is updated at every iteration, while in our proposed method the coordinate update is done on a per-antenna basis, updating all users at once.\n\nWe extend the work presented in \\cite{jesus} and \\cite{muris}, which are also based on a decentralized daisy-chain architecture. The novelties of the present work compared to these two are as follows:\n\\begin{itemize}\n\\item A common strategy for downlink precoding and uplink equalization is presented, in contrast to \\cite{jesus} and \\cite{muris}, which each cover only the uplink or the downlink.\n\\item The algorithm has been modified so that serial processing is only needed when new CSI is estimated. The corresponding filtering phase can be conducted in parallel to reduce latency, in contrast to \\cite{jesus}, where serial processing is always needed, which increases the latency.\n\\item A recommended step-size is provided, in contrast to \\cite{jesus}.\n\\item An analytical expression for the resulting SINR and a complete performance analysis are presented in this paper.\n\\item A complexity analysis from a general point of view (not attached to any specific implementation) is provided, which includes inter-connection data-rate, memory size and latency. In \\cite{jesus}, only inter-connection data-rates are analyzed.\n\\end{itemize}\n\nDecentralized architectures, as shown in Fig. \\ref{fig:BS_decentralized}, have several advantages compared to the centralized counterpart, shown in Fig. \\ref{fig:BS_centralized}. For example, they overcome bottlenecks by finding a more equal distribution of the system requirements among the processing nodes of the system. Apart from this, data localization is a key characteristic of decentralized architectures. In the uplink, the architecture allows data to be consumed as close as possible to where it is generated, minimizing the amount to transfer, and therefore saving throughput and energy. To achieve data localization, processing nodes need to be located near the antennas, where they perform processing tasks locally, such as channel and data estimation. Local CSI is estimated and stored locally in each node, without any need to share it with any other nodes in the system. This approach has been suggested previously in \\cite{argos,Bertilsson,jeon_li,cavallaro,li_jeon,puglielli}, and we take advantage of it in the proposed solution.\n\nThe remainder of the paper is organized as follows. In Section \\ref{section:background} the preliminaries are presented, comprising the system model for uplink and downlink, together with an introduction to linear processing and the ZF method. Section \\ref{section:central_vs_decentral} is dedicated to a comparison between the centralized and decentralized architectures and the reasoning why the latter is needed, together with an overview of the daisy-chain topology. The proposed algorithm, based on CD, is presented in Section \\ref{section:CD}. In Section \\ref{section:analysis}, closed-form expressions of the SIR and SINR are provided for this algorithm, together with the interconnection data-rates, latency and memory requirements of the proposed solution. Finally, Section \\ref{section:conclusions} summarizes the conclusions of this publication. \n\nNotation: In this paper, lowercase, bold lowercase and upper bold face\nletters stand for scalar, column vector and matrix, respectively. 
The\noperations $(.)^T$, $(.)^*$ and $(.)^H$ denote transpose, conjugate and conjugate transpose respectively.\nThe $i$-th element of vector $\\h$ is denoted as $h_{i}$. A vector $\\w$ and a matrix $\\A$ related to the $m$th antenna is denoted by $\\w_m$ and $\\A_{m}$, respectively. $A_{i,j}$ denotes element $(i,j)$ of $\\A$. $\\mathbf{A}_{m}(i,j)$ denotes element $(i,j)$ of the $m$-th matrix in the sequence $\\{\\A_{m}\\}$. The $k$th coordinate vector in $\\mathbb{R}^{K}$ is defined as $\\e_{k}$. Kronecker delta is represented as $\\delta_{ij}$. Probability density function and cumulative density function are denoted respectively as $f_{\\mathbf{X}}(x)$ and $F_{\\mathbf{X}}(x)$. Computational complexity is measured in terms of the number of complex-valued multiplications.\\\\\n\n\\section{Background}\n\\label{section:background}\n\\subsection{System model}\nFor uplink, we consider a scenario with $K$ single-antenna users transmitting to a BS with an antenna array with $M$ elements. Assuming time-frequency-based channel access, a Resource\nElement (RE) represents a unit in the time-frequency grid (also\nnamed subcarrier in OFDM) where the channel is expected to be approximately flat. Under this scenario, the input-output relation is\n\\begin{equation}\n\\yu = \\Hbf\\xu + \\nup,\n\\label{eq:ul_model}\n\\end{equation}\nwhere $\\yu$ is the $M \\times 1$ received vector, $\\xu$ is the transmitted user data vector ($K \\times 1$), $\\Hbf=[\\h_1 \\; \\h_2 \\, \\cdots \\, \\h_M]^{{T}}$ is the channel matrix ($M \\times K$) and $\\nup$ an $M \\times 1$ vector of white, zero-mean complex Gaussian noise. The entries of $\\Hbf$ are i.i.d. zero-mean circularly-symmetric complex-gaussian entries, with rows $\\h_{i} \\sim \\mathcal{CN}(0, \\I)$ for all $i$. The noise covariance at the receiver is $N_{0}\\I$. The average transmitted power is assumed to be equal across all users and we assume, without any loss of generality, a unit transmit power. 
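\n\nAs a small illustration of the uplink model in \\eqref{eq:ul_model}, the snippet below generates one realization of the received vector for a single RE; the values chosen for $M$, $K$ and $N_{0}$ are arbitrary examples and are not taken from this paper.\n\n\\begin{verbatim}\nimport numpy as np\n\nM, K, N0 = 128, 8, 0.1   # example array size, number of users, noise power\nrng = np.random.default_rng(0)\n\ndef crandn(*shape):\n    # i.i.d. CN(0,1) samples\n    return (rng.standard_normal(shape)\n            + 1j * rng.standard_normal(shape)) / np.sqrt(2)\n\nH = crandn(M, K)                        # channel matrix, rows h_m ~ CN(0, I)\nx = np.exp(2j * np.pi * rng.random(K))  # unit-power user symbols\nn = np.sqrt(N0) * crandn(M)             # receiver noise with covariance N0*I\ny = H @ x + n                           # received vector at the M antennas\n\\end{verbatim}\n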
SNR is defined as $\\frac{1}{N_{0}}$ and represents the average \"transmit\" signal-to-noise ratio.\n\nFor downlink, if Time Division Duplex (TDD) is assumed, then according to channel reciprocity principle and by employing reciprocity calibration techniques \\cite{joao}, it is assumed that within the same coherence time, the channel matrix is the same as in the uplink case, and the system model follows\n\\begin{equation}\n\\xdd = \\Hbf^{T}\\yd + \\nd,\n\\label{eq:dl_model}\n\\end{equation}\nfor a RE, where $\\yd$ is the $M \\times 1$ transmitted vector, $\\xdd$ is the received data vector by users ($K \\times 1$), and $\\nd$ samples of noise ($K \\times 1$).\n\nOnce the system model is established, we introduce the linear processing fundamentals used for downlink precoding and uplink estimation.\n\n\\begin{figure*}\\centering\n\t\\footnotesize\n\t\\subfloat[Centralized architecture]{\n\t\t\\psfrag{1}{$1$}\n\t\t\\psfrag{M}{$M$}\n\t\t\\psfrag{RPU}[][][0.7]{$\\mathrm{RPU}$}\n\t\t\\psfrag{RF}[][][0.7]{$\\mathrm{RF}$}\n\t\t\\psfrag{OFDM}[][][0.55]{$\\mathrm{OFDM}$}\n\t\t\\psfrag{CPU}{$\\mathrm{CPU}$}\n\t\t\\psfrag{CHEST}[][][0.6]{$\\mathrm{CHEST}$}\n\t\t\\psfrag{EST}[][][0.6]{$\\mathrm{EST}$}\n\t\t\\psfrag{DET}[][][0.6]{$\\mathrm{DET}$}\n\t\t\\psfrag{DEC}[][][0.6]{$\\mathrm{DEC}$}\n\t\t\\psfrag{Bs}[][][1.0]{$\\text{Base Station}$}\n\t\t\\psfrag{Rc}{$R_\\mathrm{c}$}\t\n\t\t\\includegraphics[width=0.35\\textwidth]{BS_centralized.eps}\n\t\t\\label{fig:BS_centralized}\n\t}\n\t\\subfloat[Decentralized architecture]{\n\t\t\\psfrag{1}{$1$}\n\t\t\\psfrag{M}{$M$}\n\t\t\\psfrag{RPU}[][][0.7]{$\\mathrm{RPU}$}\n\t\t\\psfrag{RF}[][][0.7]{$\\mathrm{RF}$}\n\t\t\\psfrag{OFDM}[][][0.55]{$\\mathrm{OFDM}$}\n\t\t\\psfrag{CPU}{$\\mathrm{CPU}$}\n\t\t\\psfrag{CHEST}[][][0.5]{$\\mathrm{CHEST}$}\n\t\t\\psfrag{EST}[][][0.6]{$\\mathrm{EST}$}\n\t\t\\psfrag{DET}[][][0.6]{$\\mathrm{DET}$}\n\t\t\\psfrag{DEC}[][][0.6]{$\\mathrm{DEC}$}\n\t\t\\psfrag{Bs}{$\\text{Base Station}$}\n\t\t\\psfrag{Rd}{$R_\\mathrm{d}$}\n\t\t\\includegraphics[width=0.35\\textwidth]{BS_decentralized.eps}\n\t\t\\label{fig:BS_decentralized}\n\t}\n\t\n\t\\caption{Comparison between base station receiver chain in centralized and fully decentralized architectures for Massive MIMO uplink. Antenna array with $M$ elements is divided into RPUs, each containing a set of antennas. (a): Centralized architecture. Each RPU has one link to transfer baseband samples to the CPU, where the rest of processing tasks are done. (b): Fully decentralized architecture for detection. Each RPU performs RF, ADC, OFDM, channel estimation (CHEST) and data estimation (EST) locally. Detection (DET) and decoding (DEC) is centralized. RPUs are connected to each other by uni-directional links. Only one RPU has a direct connection with the CPU. Proposed algorithms are executed in EST blocks in parallel mode. The points where the interconnection data-rate is estimated are marked by circles and the value is denoted by $\\mathrm{R}_{c}$ and $\\mathrm{R}_{d}$ for centralized and decentralized respectively. 
The goal is to have $\\mathrm{R}_{d} \\ll \\mathrm{R}_{c}$ without compromising performance and latency.}\n\t\\label{fig:comparison}\n\\end{figure*}\n\n\\subsection{Linear Processing}\nIn this article we focus on linear estimators and precoders, because they show close to optimal performance in Massive MIMO regime while requiring low complexity.\n\nA linear estimator provides $\\hatx^u$, which is an estimate of $\\xu$, by applying an equalizer filter matrix $\\Wbf$ to the vector of observations, $\\yu$:\n\\begin{equation}\n\\begin{split}\n\\hatxu &= \\Wbf^{H} \\yu\\\\\n&= \\sum_{m=1}^{M} \\w_{m}^{*} \\ymu,\\\\\n\\end{split}\n\\label{eq:linear_det}\n\\end{equation}\nwhere $\\Wbf = [\\w_{1} \\; \\w_{2} \\, \\cdots \\, \\w_{M}]^{T}$ is an $M \\times K$ matrix, $\\w_{m}$ is a $K \\times 1$ filter vector related to antenna $m$ and $\\ymu$ the observation at antenna $m$. As it can be seen the estimate $\\hatxu$ is computed by the sum of $M$ partial products. If $\\w_{m}$ is obtained and stored locally in the m$th$ antenna module, then the partial products can be computed with local data only, reducing the amount of data to exchange between nodes. From implementation point of view, the linear estimator relies on the accumulation of all partial results according to \\eqref{eq:linear_det}, which can be done centrally (fusion node) or distributed.\n\nFor downlink, the data vector intended to the users, $\\xd$, is precoded with matrix $\\Pbf$ as\n\\begin{equation}\n\\yd = \\Pbf\\xd,\\\\\n\\label{eq:linear_prec}\n\\end{equation}\nwhere $\\Pbf = [\\p_{1} \\; \\p_{2} \\, \\cdots \\, \\p_M]^{T}$ is an $M \\times K$ matrix, which fulfills a power constraint $\\|\\Pbf\\|_{F}^{2}\\leq P$, such that $P$ is the maximum transmitted power. Particularly for antenna $m$ we have\n\\begin{equation}\n\\ymd = \\p_{m}^T \\xd.\\\\\n\\label{eq:linear_prec_i}\n\\end{equation}\nSimilarly to uplink, if $\\p_{m}$ is obtained and stored locally at the m$th$ antenna module, then $\\ymd$ can be computed only with local data after $\\xd$ is broadcasted to all antennas.\n\nThe zero-forcing (ZF) equalizer, which is one type of linear estimator, constitutes a reference in our analysis. It is defined for uplink estimation as\n\\begin{equation}\n\\Wbf_\\text{ZF}^{H} = (\\Hbf^H \\Hbf)^{-1}\\Hbf^H,\n\\label{eq:W_ZF}\n\\end{equation}\nand $\\Pbf_\\text{ZF}=\\Wbf_\\text{ZF}^{*}$ for the downlink precoding.\n\nZF is able to completely cancel inter-user interference (IUI) and reach the promised spectral efficiency of Massive MIMO. However, as ZF is performed in a central processor, the Gramian matrix $\\Hbf^{H}\\Hbf$ needs to be collected and inverted, which increases the average inter-connection data-rate. The computational load is also increased due to the matrix inversion and posterior matrix multiplication during estimation phase. 
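\nFor reference, the centralized ZF baseline in \\eqref{eq:W_ZF} amounts to the following computation (a minimal NumPy sketch under the notation above, not an optimized implementation):\n\\begin{verbatim}\n# Centralized ZF reference: the full H and its Gramian are gathered at the CPU\nimport numpy as np\n\ndef zf_equalizer(H):\n    # W_ZF^H = (H^H H)^{-1} H^H, with H of size M x K (M >= K)\n    G = H.conj().T @ H                      # K x K Gramian, collected centrally\n    return np.linalg.solve(G, H.conj().T)   # K x M matrix W_ZF^H\n\n# uplink estimate: x_hat = zf_equalizer(H) @ y;  downlink: P_ZF = W_ZF^*\n\\end{verbatim}\n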
Taking this into consideration, we look for methods with IUI-cancellation capabilities but with lower requirements for the system.\n\n\\subsection{Uplink \\& Downlink reciprocity}\nSubstituting \\eqref{eq:ul_model} into \\eqref{eq:linear_det} leads to\n\\begin{equation}\n\\begin{split}\n\\hatxu\n&= \\Eu \\xu + \\zu\\\\\n\\end{split}\n\\label{eq:Eu}\n\\end{equation}\nfor the uplink, where $\\Eu = \\Wbf^{H} \\Hbf$ is a $K \\times K$ matrix containing the equivalent uplink channel with IUI information and $\\mathbf{z}^u$ is the $K \\times 1$ post-equalization noise term.\n\nOn the other hand, in the downlink, substituting \\eqref{eq:linear_prec} into \\eqref{eq:dl_model} leads to\n\\begin{equation}\n\\begin{split}\n\\xdd\n&= \\Ed \\xd + \\nd,\\\\\n\\end{split}\n\\label{eq:Ed}\n\\end{equation}\nwhere $\\Ed = \\Hbf^{T} \\Pbf$ is a $K \\times K$ matrix containing the equivalent downlink channel with IUI information. For the particular case that $\\Pbf^{T} = \\Wbf^{H}$, we have $\\Ed = \\Eu^{T}$, meaning that the two equivalent channels are the transpose of each other, and therefore experience the same IUI cancellation properties.\nFrom this result it is clear that once an equalization matrix $\\Wbf$ is obtained for uplink detection, it can also be applied for downlink precoding with no extra effort. It is interesting to note that, since $\\Pbf^{T} = \\Wbf^{H}$, it follows that $\\p_i = \\w_i^{*}$, so each antenna node can re-use the same vector for detection and precoding, ideally reducing complexity and storage needs by half. That said, in this article we focus mainly on uplink estimation, without limiting the results to the downlink. In practice, the downlink is subject to a constraint on the total transmitted power, which is addressed in Section \\ref{section:analysis}.\n\n\\section{Centralized vs Decentralized}\n\\label{section:central_vs_decentral}\nIn this section we describe the differences between centralized and decentralized Massive MIMO processing and the justification for studying the latter.\n\nUplink estimation based on ZF equalization has two components that should be multiplied: $\\Wbf_\\text{ZF}$ and $\\yu$. The former includes a $K \\times K$ matrix inversion, which is typically done in one place, and for that, CSI from all antennas needs to be collected. Apart from that, the observation data vector, $\\yu$, is also needed for estimation. This vector is $M \\times 1$, which considerably increases the amount of data to transfer and limits the scalability of the array. Based on those considerations, we can think of two possible architectures for the Massive MIMO base-station: centralized and decentralized.\n\nFig. \\ref{fig:BS_centralized} presents an architecture based on a central baseband processing node, where baseband samples are exchanged between the Remote Processing Units (RPUs) and the CPU. Each antenna is connected to receiver and transmitter circuitry, which involves the RF front-end, ADC\/DAC and OFDM processing. For simplicity, only the uplink is represented in this figure. We can identify some tasks that are common to these processing elements across different antennas, such as time synchronization, automatic gain control, local oscillator generation, carrier frequency and sampling rate offset estimation, and phase noise compensation, among others. Therefore, a few antennas (together with their corresponding receivers\/transmitters) can be grouped into one RPU for efficient implementation of such common tasks. 
However, for simplicity, in this work we only analyze the case where each RPU manages one antenna.\n\nDedicated physical links between each RPU and the CPU would easily exceed the number of I\/O connections available in current standards, in addition to increasing the cost of adding new RPUs when needed. To overcome this, we consider that RPUs are connected to the CPU node by a shared bus as shown in Fig. \\ref{fig:BS_centralized}. \n\nEven though this approach can support ZF detection (and precoding) from a functionality point of view, from an implementation point of view it requires a very high inter-connection data-rate in the bus and at the input of the CPU ($R_\\mathrm{c}$ in the figure). As an example, consider a 5G NR-based system with 128 antennas and OFDM as the access technology; the average data-rate can then be calculated as\n\\begin{equation}\nR_{\\mathrm{c}} = \\frac{2w M N_{\\mathrm{u}}}{T_{\\mathrm{OFDM}}},\n\\label{eq:R_central}\n\\end{equation}\nwhere $N_{\\mathrm{u}}$ is the number of active subcarriers, $w$ is the bit-width for the baseband samples (real\/imaginary parts) after FFT, and $T_{\\mathrm{OFDM}}$ is the OFDM symbol duration. For $N_{\\mathrm{u}}=3300$, $w=12$ and $T_{\\mathrm{OFDM}}=1\/120\\mathrm{kHz}$ we obtain $R_{\\mathrm{c}}=1.2 \\mathrm{Tbps}$. This result clearly exceeds the data-rate limit of common interfaces, such as eCPRI \\cite{ecpri} and PCIe, and furthermore, it is proportional to $M$, which clearly limits the scalability of the system.\n\nAs a solution to this limitation, we propose the fully-decentralized architecture for baseband detection and precoding shown in Figure \\ref{fig:BS_decentralized}. We can observe that channel estimation and estimation\/precoding have been moved from the CPU to the RPUs, with detection and decoding remaining in the CPU as the only physical-layer tasks. The benefit of this move is manifold. Firstly, the inter-connection data-rate scales with $K$ instead of $M$. Secondly, the high complexity requirement in the CPU for channel estimation and data estimation\/precoding is now equally distributed among the RPUs, which greatly simplifies the implementation and overcomes the computational bottleneck. Additionally, CSI is obtained and consumed locally in each RPU without the need to exchange it, with the consequent reduction in the required inter-connection data-rate. In addition to the advantages already mentioned, which are common to other decentralized schemes, the architecture proposed in this work achieves an unprecedentedly low inter-connection data-rate by the direct connection of RPUs forming a daisy-chain, where the CPU is at one of the ends.\n\nIn the daisy-chain, depicted in Fig. \\ref{fig:BS_decentralized}, nodes are connected serially to each other by a dedicated connection. All elements in the chain work simultaneously in pipeline mode, processing and transmitting\/receiving to\/from the respective next\/previous neighbor in the chain. The data is passed through the nodes sequentially, being updated at every RPU. There is a unique connection to the root node, where the final estimate is delivered and then detected by the CPU. An important remark is that the average inter-connection data-rate between nodes is the same regardless of the number of elements in the chain. This topology was proposed in \\cite{argos} and further studied in \\cite{jesus} and \\cite{muris} with specific algorithms designed for this topology.\n\nWhen the decentralized architecture in Fig. 
\\ref{fig:BS_decentralized} needs to be deployed, antennas can be collocated in the same physical place or distributed over a large area. These antennas and therefore their corresponding RPUs can behave as nodes in the chain, whilst the CPU remains as the root node. There may be multiple chains in a network. The selection of the RPUs to form a chain may depend on the users they are serving. RPUs which serve the same set of users should be in the same chain, so they can work jointly to cancel IUI. This concept fits very well with the distributed wireless communication system \\cite{DWCS}, the recent cell-free Massive MIMO concept \\cite{cell-free} and the promising large intelligent surface \\cite{lis}.\n\nDecentralized architectures, such as the one shown in Fig. \\ref{fig:BS_decentralized}, require other type of algorithms compared to Fig. \\ref{fig:BS_centralized}. In the next section we introduce our proposed algorithm, which is a method for obtaining $\\w_{m}$ and $\\p_{m}$ as the equalization and precoding vectors, respectively.\n\n\\section{Coordinate Descent}\n\\label{section:CD}\n\nOur proposed algorithm is an iterative algorithm based on the gradient descent (GD) optimization, in which the gradient information is approximated with a set of observations in every step. From this, each antenna can obtain its own equalization\/precoding vector sequentially in a coordinate descent approach. The main advantage of this method is that it does not require access to all observations at each iteration, becoming an ideal choice for large scale distributed systems.\n\n\\subsection{Preliminaries}\nFrom \\eqref{eq:Eu} we know that in the non-IUI case, $\\Eu$ is a diagonal matrix, which is the case when zero-forcing (ZF) is applied. In the general case, IUI is not zero and as consequence $\\Eu$ contains non-zero entries outside the main diagonal.\n\nThe objective is to find a matrix $\\Wbf$, which cancels IUI to a high extent ($\\Eu \\approx \\I$), while fulfilling the following conditions:\n\\begin{itemize}\n\t\\item Uses daisy-chain as a base topology, so we exploit the advantages seen in Section \\ref{section:central_vs_decentral}.\n\t\\item No exchange of CSI between nodes. Only local CSI. \n\t\\item Limited amount of data to pass between antenna nodes. It should depend on $K$ instead of $M$, to enable scalability.\n\t\\item Limit the dependency on the central processing unit in order to reduce data transfer, processing and memory requirements of that unit. One consequence of this is to avoid matrix inversion in the central unit.\n\\end{itemize}\n\n\\subsection{Algorithm formulation}\nThe algorithm setup is that one intends to solve the unconstrained Least Squares (LS) problem in the uplink\n\\begin{equation} \\label{eq:CD_R} \\hatx = \\argmin_{\\mathbf{x}} \\|\\mathbf{y}-\\mathbf{H}\\mathbf{x}\\|^2\n\\end{equation}\nvia a GD approach. The gradient of \\eqref{eq:CD_R} equals $\\nabla_{\\mathbf{x}}=\\Hbf^{H}\\Hbf\\mathbf{x}-\\Hbf^{H}\\mathbf{y}$.\nEven though $\\Hbf^{H}\\Hbf$ and $\\Hbf^{H}\\mathbf{y}$ can be formulated in a decentralized way, the selection of $\\mathbf{x}$ and the product with $\\Hbf^{H}\\Hbf$ is preferably done in a central processing unit to limit latency and inter-connection data-rates. 
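\nTo make this remark concrete, both terms decompose into per-antenna contributions, $\\Hbf^{H}\\Hbf=\\sum_{m}\\h_{m}^{*}\\h_{m}^{T}$ and $\\Hbf^{H}\\mathbf{y}=\\sum_{m}\\h_{m}^{*}y_{m}$, which could in principle be accumulated along the chain; the short sketch below is only meant to illustrate this decomposition, not a proposed mapping to hardware:\n\\begin{verbatim}\n# Per-antenna decomposition of H^H H and H^H y (illustration only)\nimport numpy as np\n\ndef accumulate_terms(H, y):\n    M, K = H.shape\n    G = np.zeros((K, K), dtype=complex)   # running H^H H\n    b = np.zeros(K, dtype=complex)        # running H^H y\n    for m in range(M):                    # e.g., one contribution per antenna\n        h, ym = H[m], y[m]\n        G += np.outer(h.conj(), h)        # h_m^* h_m^T\n        b += h.conj()*ym                  # h_m^* y_m\n    return G, b                           # gradient at x is G @ x - b\n\\end{verbatim}\n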
Following the fully-decentralized approach and the intention to off-load the equalization\/precoding computation from the CPU to the RPUs, we propose a different method.\n\nThe proposed method can be derived as an approximate version of GD that can be operated in a decentralized architecture with minimum CPU intervention. It does so by computing, at each antenna, as much as possible of $\\nabla_{\\mathbf{x}}$ with the information available at the antenna. Then the estimate $\\hatx$ is updated by using a scaled version of the \"local\" gradient and the antenna passes the updated estimate on to the next antenna.\n\nThe above described procedure can, formally, be stated as\n\\begin{equation}\n\\begin{split}\n\\varepsilon_m &= y_{m} - \\h_{m}^{T} \\hatx_{m-1} \\\\\n\\hatx_{m} &= \\hatx_{m-1} + \\mu_m \\h_{m}^{*} \\varepsilon_m,\n\\end{split}\n\\label{eq:CD_sm}\n\\end{equation}\nfor antenna $m$, where $\\mu_m$ is a scalar step-size. The update rule in \\eqref{eq:CD_sm} corresponds to the Kaczmarz method \\cite{kaczmarz}, whose step-size is chosen according to \\cite{censor} as\n\\begin{equation}\n\\mu_{m} = \\frac{\\mu}{\\|\\h_{m}\\|^2},\n\\label{eq:mu_m}\n\\end{equation}\nwhere $\\mu \\in \\mathbb{R}$ is a relaxation parameter. For consistent systems, that is, $\\mathbf{y}=\\Hbf \\mathbf{x}$ (if the SNR is high enough or there is no noise), $\\mu=1$ is optimum and the method converges to the unique solution. Otherwise, when the system is inconsistent, $\\mu$ gives us an extra degree of freedom, which allows us to outperform the $\\mu=1$ case, as we will see in Section \\ref{section:analysis}.\n\nAfter $M$ iterations of \\eqref{eq:CD_sm} we have\n\\begin{equation}\n\\begin{split}\n\\hatx_{M} &= \\prod_{m=1}^{M} \\left( \\I_K - \\mu_{m} \\h_{m}^{*} \\h_{m}^{T} \\right) \\hatx_0 \\\\\n&+ \\sum_{m=1}^{M} \\prod_{i=m+1}^{M} \\left(\\I_K - \\mu_{i} \\h_{i}^{*} \\h_{i}^{T} \\right) \\mu_{m} \\h_{m}^{*} y_m.\n\\nonumber\n\\end{split}\n\\end{equation}\nIf we assume $\\hatx_0 = \\mathbf{0}_{K\\times1}$ \\footnote[1]{If prior information of $\\mathbf{x}$ is available, it can be used here.}, then it is possible to express $\\hatx_M$ as a linear combination of $\\mathbf{y}$, in the same way as \\eqref{eq:linear_det}, and identify\n$\\w_m$ (the equalization vector associated with antenna $m$) as\n\\begin{equation}\n\\w_m = \\left[ \\prod_{i=m+1}^{M} \\left(\\I_K - \\mu_{i} \\h_{i} \\h_{i}^{H} \\right) \\right] \\mu_{m} \\h_{m}.\n\\label{eq:CD_W}\n\\end{equation}\nIf \\eqref{eq:CD_sm} is applied in reverse antenna order ($m=M \\cdots 1$), then we obtain a different estimate. The expression for $\\w_{m}$ when using this alternative approach is\n\\begin{equation}\n\\w_m = \\mu_{m} \\A_{m-1} \\h_{m},\n\\label{eq:CD_W2}\n\\end{equation}\nwhere matrix $\\A_m$ is defined as\n\\begin{equation}\n\\A_m = \\prod_{i=1}^{m} \\left(\\I_K - \\mu_{i} \\h_{i} \\h_{i}^{H} \\right).\n\\label{eq:CD_A_impl}\n\\end{equation}\n\nIt is important to remark that the two approaches lead to different $\\w_{m}$ sequences; however, the overall performance should be the same if the CSI at all antennas shows the same statistical properties (stationarity across antennas).\n\n\n\\subsection{Algorithm design and pseudocode}\n\\label{section:alg}\nIn this subsection we derive an equivalent and more attractive form for computing the weights in \\eqref{eq:CD_W2} in a simple, low-complexity way, suitable for hardware implementation.\n\nThe algorithm description is shown in Algorithm \\ref{algo:CD}. 
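\nAs an informal illustration of the recursion \\eqref{eq:CD_W2}--\\eqref{eq:CD_A_impl} that Algorithm \\ref{algo:CD} formalizes, the following NumPy sketch computes the full set of equalization vectors; the parameter values are arbitrary and the sketch ignores any hardware mapping:\n\\begin{verbatim}\n# Sketch of the recursion w_m = mu_m A_{m-1} h_m,  A_m = A_{m-1} - w_m h_m^H\nimport numpy as np\n\nrng = np.random.default_rng(1)\nM, K, mu = 64, 8, 1.0\nH = (rng.standard_normal((M, K))\n     + 1j*rng.standard_normal((M, K)))/np.sqrt(2)\n\nA = np.eye(K, dtype=complex)             # A_0 = I_K\nW = np.zeros((M, K), dtype=complex)      # row m holds w_m\nfor m in range(M):\n    h = H[m]                             # local CSI at antenna m\n    mu_m = mu/np.linalg.norm(h)**2\n    w = mu_m*(A @ h)                     # w_m = mu_m A_{m-1} h_m\n    A = A - np.outer(w, h.conj())        # A_m = A_{m-1} - w_m h_m^H\n    W[m] = w\n\nE = W.conj().T @ H                       # equivalent channel E^u = W^H H\nprint(np.linalg.norm(E - np.eye(K)))     # residual IUI, small when M >> K\n\\end{verbatim}\nAs discussed below, the final matrix satisfies $\\A_{M}=\\I_{K}-(\\Eu)^{*}$, so the printed norm directly measures the residual IUI.\n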
The vector $\\w_{m}$ is computed in each antenna, while the matrix $\\A_{m-1}$ gets updated according to the recursive rule: $\\A_{m} = \\A_{m-1} - \\w_{m} \\h_{m}^{H}$. Then, $\\w_{m}$ is stored for the detection and precoding phase, and $\\A_{m}$ is passed to the next antenna node for further processing.\n\\IncMargin{1em}\n\\begin{algorithm}[ht]\n\t\\SetKwInOut{Input}{Input}\n\t\\SetKwInOut{Output}{Output}\n\t\\SetKwInOut{Preprocessing}{Preprocessing}\n\t\\SetKwInOut{Init}{Init}\n\t\\Input{ $\\Hbf = \\left[ \\h_{1}, \\h_{2} \\cdots \\h_{M} \\right]^{T}$}\n\t\\Preprocessing{}\n\t$\\A_0 = \\I_K$\\\\\n\t\\For{$m = 1,2,...,M$}{\n\t\t$\\w_m = \\mu_{m} \\A_{m-1} \\h_m$\\\\\n\t\t$\\A_m = \\A_{m-1} - \\w_{m} \\h_{m}^{H}$\n\t}\n\t\\caption{Proposed algorithm}\n\t\\label{algo:CD}\n\t\\Output{$\\Wbf = \\left[ \\w_{1}, \\w_{2} \\cdots \\w_{M} \\right]^{T}$}\n\t\n\\end{algorithm}\\DecMargin{1em}\n\nFrom Algorithm \\ref{algo:CD} we can observe that after $M$ steps we achieve the following expression: $\\A_M = \\I_{K} - \\Eu^{*}$. Then, if perfect IUI cancellation is achieved, $\\Eu=\\I_{K}$ and therefore $\\A_{M} = \\mathbf{0}$. As a consequence we can take $\\|\\A_{m}\\|^{2}$ as a metric for residual IUI. The interpretation of Algorithm \\ref{algo:CD} is as follows. $\\|\\A_{m}\\|$ is reduced by subtracting from $\\A_{m}$ a rank-1 approximation to itself. In order to achieve that, $\\A_{m}$ is projected onto $\\h_{m}$ to obtain $\\w_{m}$, therefore $\\w_{m} \\h^{H}_m$ is the best rank-1 approximation to $\\A_{m}$, having $\\h_{m}$ as vector base. Ideally, if the channel is rich enough, vectors $\\h_{m}$ are weakly correlated and assuming $M$ is large (Massive MIMO scenario) then IUI can be canceled out to a high extent \\footnote[2]{The selection of Coordinate Descent as our method's name is because we consider the vectors $\\{\\w_i\\}$ as the outcome of the method, and these can be seen as coordinates of a cost function to minimize. Such optimization problem can be written as: $\\w_{m} = \\argmin_{z} f(\\w_{1},\\cdots,\\w_{m-1},\\mathbf{z},\\w_{m+1},\\cdots,\\w_{M})$, where $f = \\|\\A_{m-1} - \\mathbf{z} \\h_{m}^{H}\\|_{F}^{2}$, and $\\A_{m-1} = \\I_{K}-\\sum_{i \\neq m} \\w_{i} \\h_{i}^{H}$. Each antenna solves this optimization problem in a sequential fashion, obtaining one coordinate as a result, while keeping the rest fixed. This is valid for single and multiple iterations to the array, which is presented in the next subsection.}.\n\nThe role of step-size $\\mu$ is to control how much IUI is removed at every iteration. High values will tend to reduce IUI faster at the beginning when the amount to remove is high, but will lead to oscillating or unstable residual IUI after some iterations because the steps are too big, so the introduced error dominates. Low values for $\\mu$ will ensure convergence of the algorithm and a relatively good IUI cancellation at the expense of a slower convergence.\n\n\\subsection{Multiple-iterations along the array}\n\\label{sub:multiple-iter}\nRecalling from Section \\ref{section:alg}, Algorithm \\ref{algo:CD} reduces the norm of $\\A$ at each step, providing as a result $\\A_{M}$, which contains the residual IUI after the algorithm is run along the array. It is possible to expand the algorithm and apply $\\A_{M}$ as initial value, $\\A_{0}$ for a new iteration through the array, with the intention of decreasing even more the norm of $\\A$. 
The pseudocode of the expanded version is shown in Algorithm \\ref{algo:CD_multiple}, with $n_{iter}$ iterations, and as can be seen, an increment of $\\w_{m}$ is computed at each iteration. From a topology point of view, it requires an extra connection between the last and the first RPUs, closing the daisy-chain into a ring. It is expected to improve the performance at the expense of increasing the latency.\n\\IncMargin{1em}\n\\begin{algorithm}[ht]\n\t\\SetKwInOut{Input}{Input}\n\t\\SetKwInOut{Output}{Output}\n\t\\SetKwInOut{Preprocessing}{Preprocessing}\n\t\\SetKwInOut{Init}{Init}\n\t\\Input{ $\\Hbf = \\left[ \\h_{1}, \\h_{2} \\cdots \\h_{M} \\right]^{T}$}\n\t\\Preprocessing{}\n\t$\\A_{0,1} = \\I_K$\\\\\n\t$\\w_{m,0} = \\mathbf{0},m=1,...,M$\\\\\n\t\\For{$n = 1,2,...,n_{iter}$}{\n\t\t\\For{$m = 1,2,...,M$}{\n\t\t\t$\\w_{m,n} = \\w_{m,n-1} + \\mu_{m} \\A_{m-1,n} \\h_m$\\\\\n\t\t\t$\\A_{m,n} = \\A_{m-1,n} - \\w_{m,n} \\h_{m}^{H}$\\\\\n\t\t}\n\t\t$\\A_{0,n+1} = \\A_{M,n}$\n\t}\n\t\\caption{Proposed algorithm, multiple iterations}\n\t\\label{algo:CD_multiple}\n\t\\Output{$\\Wbf = \\left[ \\w_{1,n_{iter}}, \\w_{2,n_{iter}} \\cdots \\w_{M,n_{iter}} \\right]^{T}$}\n\t\n\t\\end{algorithm}\\DecMargin{1em}\n\n\n\\section{Analysis}\n\\label{section:analysis}\nIn this section we present an analysis of the proposed solution. The main points are:\n\\begin{itemize}\n\t\\item Performance analysis of the presented solution based on SIR, SINR and BER evaluation, and comparison with other methods. \n\t\\item Complexity and timing analysis, including computational complexity, inter-connection throughput, memory requirements and latency.\n\\end{itemize}\n\nAs was commented in the Introduction, the analysis presented in this section is quite general and does not depend on any specific hardware implementation. The idea is to provide high-level guidelines on algorithm-hardware trade-offs, system parameter selection, and hardware architectures. A more specific analysis can be performed once a dedicated implementation strategy has been decided.\n\n\\subsection{Performance}\n\\label{section:performance}\nIn this subsection we obtain and present different metrics to evaluate and compare the performance of the proposed algorithm. The analysis is divided as follows: derivation of closed-form SIR and SINR expressions, and bit-error-rate (BER) analysis of the proposed algorithm based on ideal and measured channels, with a comparison against other methods such as MF and ZF. The performance analysis that follows is focused on the uplink, but it can be extended to the downlink.\n\n\\subsubsection{SIR \\& SINR}\nSpecifically for user $k$, \\eqref{eq:Eu} reduces to\n\\begin{equation}\n\\hat{x}^{u}_{k} = E_{k,k} x^{u}_{k} + \\sum_{i=1,i \\neq k}^{K} E_{k,i} x^{u}_{i} + z_{k},\n\\nonumber\n\\end{equation}\nwhere the first term represents a scaled version of the desired value, the second one is the interference from other users and the third one is due to noise. 
The signal-to-interference ratio (SIR) for user $k$ is defined as\n\\begin{equation}\n\\text{SIR}_{k} = \\frac{\\E|E_{k,k}|^{2}}{ \\E \\left\\lbrace \\sum_{i=1,i \\neq k}^{K} |E_{k,i}|^2 \\right\\rbrace}.\n\\label{eq:SIR}\n\\end{equation}\n\nAnd for the signal-to-interference-and-noise ratio (SINR) we have\n\\begin{equation}\n\\text{SINR}_{k} = \\frac{\\E|E_{k,k}|^{2}}{\\E \\left\\lbrace \\sum_{i=1,i \\neq k}^{K} |E_{k,i}|^2 \\right\\rbrace + \\E|z_{k}|^2 }.\n\\label{eq:SINR}\n\\end{equation}\n\nA list of parameters and their corresponding values are presented in Table \\ref{table:parameters}, which are used in the following propositions.\n\n\\begin{table}[h!]\n\t\\begin{center}\n\t\t\\caption{Parameters}\n\t\t\\label{table:parameters}\n\t\t\\begin{tabular}{llr}\n\t\t\t\\cline{1-2}\n\t\t\tParameter & Description \\\\\n\t\t\t\\hline\n\t\t\t$\\alpha$ & $1-\\frac{2 \\mu}{K} +\\frac{\\mu^2}{K(K+1)}$\\\\\n\t\t\t\\hline\n\t\t\t$\\beta$ & $\\frac{\\mu^2}{K(K+1)}$ \\\\\n\t\t\t\\hline\n\t\t\t$\\nu$ & $1 - \\frac{\\mu}{K}$ \\\\\n\t\t\t\\hline\n\t\t\t$\\epsilon$ & $1 - \\frac{2\\mu}{K} + \\frac{\\mu^2}{K}$\n\t\t\\end{tabular}\n\t\\end{center}\n\\end{table}\n\nFrom \\eqref{eq:SIR} it is possible to obtain a closed-form expression of the SIR as follows:\n\\begin{theorem}\n\t\\label{prop:SIR}\n\tWith perfect $\\mathrm{CSI}$ and channel model as defined in Section \\ref{section:background}, $\\mathrm{SIR}$ per user in uplink with $\\mathrm{CD}$ algorithm for estimation is\n\n\t\\begin{equation}\n\t\\mathrm{SIR} = \\frac{1 - 2\\nu^{M} + \\alpha^{M} \\left(1-\\frac{1}{K}\\right) + \\epsilon^M \\frac{1}{K} }{\\left(1-\\frac{1}{K}\\right) \\cdot \\left( \\epsilon^{M} - \\alpha^{M} \\right)},\n\t\\end{equation}\n\n\twhich can be simplified in case of relatively large $M$, $K$, and $\\frac{M}{K}$, which is the case of Massive MIMO, as\n\n\t\\begin{equation}\n\t\\mathrm{SIR} \\approx e^{\\mu(2-\\mu)\\frac{M}{K}}.\n\t\\label{eq:SIR_approx}\n\t\\end{equation}\n\n\\end{theorem}\n\\begin{proof}\n\tSee Appendix-\\ref{proof:SIR}.\n\\end{proof}\n\n\\begin{figure*}\\centering\n\t\\subfloat[SINR vs $\\mu$ under different SNR. M=128 and K=16.]{\n\t\t\\includegraphics[width=0.48\\textwidth]{SINR_vs_mu_M128_K16}\n\t\t\\label{fig:SINR_vs_mu_M128_K16}\n\t}\n\t\\subfloat[SINR vs $\\mu$ under different channels. M=128 and K=5. SNR=0dB.]{\n\t\t\\includegraphics[width=0.48\\textwidth]{SINR_vs_mu_sim}\n\t\t\\label{fig:SINR_vs_mu_sim}\n\t}\\\\[-2ex]\n\t\\caption{ }\n\t\\label{fig:SINR_vs_mu}\n\t\\vspace*{-4mm}\n\\end{figure*}\n\nThe maximum value of \\eqref{eq:SIR_approx} is achieved for $\\mu=1$ and the SIR value only depends on the ratio $\\frac{M}{K}$ in an exponential fashion, showing how fast the IUI is canceled as $M$ grows, and therefore ZF is approached. 
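\nBoth the exact expression and the approximation \\eqref{eq:SIR_approx} are straightforward to evaluate numerically, which provides a convenient sanity check; in the short sketch below the values of $M$, $K$ and $\\mu$ are chosen only for illustration:\n\\begin{verbatim}\n# Numerical evaluation of the SIR closed form and its approximation\nimport numpy as np\n\ndef sir_cd(M, K, mu):\n    alpha = 1 - 2*mu/K + mu**2/(K*(K + 1))\n    nu    = 1 - mu/K\n    eps   = 1 - 2*mu/K + mu**2/K\n    num = 1 - 2*nu**M + alpha**M*(1 - 1/K) + eps**M/K\n    den = (1 - 1/K)*(eps**M - alpha**M)\n    return num/den\n\nM, K, mu = 120, 12, 1.0                       # M/K = 10\nprint(10*np.log10(sir_cd(M, K, mu)))          # exact closed form, in dB\nprint(10*np.log10(np.exp(mu*(2 - mu)*M/K)))   # approximation: about 43.4 dB\n\\end{verbatim}\n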
As an example, for a target value of SIR = 40dB, $\\frac{M}{K}=10$ meets the requirement, which is a typical ratio in the Massive MIMO regime.\n\nRegarding the SINR, it can be derived based on the previous results as\n\\begin{theorem}\n\t\\label{prop:SINR}\n\tWith perfect $\\mathrm{CSI}$ and the channel model defined in Section \\ref{section:background}, the $\\mathrm{SINR}$ per user in the uplink with the $\\mathrm{CD}$ algorithm for estimation is given by\n\n\t\\begin{equation}\n\t\\begin{split}\n\t\\mathrm{SINR} = \\frac{1 - 2\\nu^{M} + \\alpha^{M} \\left(1-\\frac{1}{K}\\right) + \\epsilon^M \\frac{1}{K} }{\\left(1-\\frac{1}{K}\\right) \\left(\\epsilon^{M} - \\alpha^{M} \\right) + \\frac{N_{0}}{K-1} \\left( \\frac{\\mu}{2-\\mu}\\right) (1-\\epsilon^{M})},\n\t\\end{split}\n\t\\label{eq:SINR_CD}\n\t\\end{equation}\n\n\twhich can be simplified in the case of relatively large $M$, $K$, and $\\frac{M}{K}$, which is the case of Massive MIMO, as\n\n\t\\begin{equation}\n\t\\mathrm{SINR} \\approx \\left[ e^{-\\mu(2-\\mu)\\frac{M}{K}} + \\frac{1}{K \\cdot \\mathrm{SNR}} \\left( \\frac{\\mu}{2-\\mu}\\right) \\right]^{-1}.\n\t\\label{eq:SINR_CD_limit}\n\t\\end{equation}\n\n\\end{theorem}\n\n\\begin{proof}\n\tSee Appendix-\\ref{proof:SINR}.\n\\end{proof}\n\nThe first term in \\eqref{eq:SINR_CD_limit} represents the SIR, containing the IUI information, while the second one takes into account the post-equalization noise power. For high SNR, the first term is dominant and $\\mathrm{SINR} \\to e^{\\mu(2-\\mu)\\frac{M}{K}}$, which depends on $\\frac{M}{K}$ and $\\mu$, but not on the $\\mathrm{SNR}$. On the other hand, when the SNR is low, the second term is dominant and $\\mathrm{SINR} \\to \\mathrm{SNR} \\cdot K \\left(\\frac{2 - \\mu}{\\mu}\\right)$ as $M$ grows, which grows linearly with $\\mathrm{SNR}$ and $K$ (up to a certain value). This linear dependency on $K$ is due to the fact that the post-equalization noise is equally distributed among the users. While the noise power per antenna remains constant, the portion assigned to each user decays as $K$ grows, so the SINR per user grows linearly. However, as $K$ increases the IUI also grows (the first term in \\eqref{eq:SINR_CD_limit} grows), and both effects cancel out at some point; beyond it, IUI dominates and the SINR decays accordingly.\n\nThe optimal value of $\\mu$, denoted as $\\mu^{*}$, depends on $M$, $K$, and the specific channel. For the i.i.d. case, defined in Section \\ref{section:background}, it is possible to obtain $\\mu^{*}$ by numerical optimization of \\eqref{eq:SINR_CD}. An approximate value, denoted as $\\mu_{0}$, is presented as follows.\n\\begin{theorem}\n\t\\label{prop:mu_init}\n\tA recommended value for $\\mu_{0}$, in the vicinity of $\\mu^{*}$, under CD and the i.i.d. 
channel as defined in Section \\ref{section:background}, is given by\n\n\t\\begin{equation}\n\t\\mu_{0} = \\frac{1}{2} \\frac{K}{M} \\log (4 M \\cdot \\mathrm{SNR} ).\n\t\\label{eq:mu_init}\n\t\\end{equation}\n\n\\end{theorem}\n\n\\begin{proof}\n\tSee Appendix-\\ref{proof:mu_init}.\n\\end{proof}\n\nAs a side result of the analysis performed in this section, we can extract interesting properties of the matrix $\\Wbf$, such as the following one:\n\\begin{theorem}\n\t\\label{prop:W_power}\n\tThe equalization matrix $\\Wbf$ resulting from the $\\mathrm{CD}$ algorithm satisfies the following property for $\\mu \\in [0,2)$\n\n\t\\begin{equation}\n\t\\E \\| \\Wbf \\|^{2}_{F} = \\frac{K}{K-1} \\cdot \\frac{\\mu}{2-\\mu} \\cdot \\left( 1-\\epsilon^{M} \\right).\n\t\\label{eq:W_power}\n\t\\end{equation}\n\n\t\\nonumber\n\\end{theorem}\n\\begin{proof}\n\tSee Appendix-\\ref{proof:W_power}.\n\\end{proof}\n\nThis result is relevant in the downlink, where a transmission power budget is needed. The expression in \\eqref{eq:W_power} is a monotonically increasing function of $\\mu$. It can be shown that the total transmitted mean power is bounded by $4\\frac{M}{K}$, approaching this value as $\\mu \\to 2$. However, as we will see in the next section, the optimal $\\mu$ for the i.i.d. Gaussian channel is within the range $(0,1]$; therefore, for a large enough $K$, we have $\\E \\| \\Wbf \\|^{2}_{F} \\leq 1$, which does not depend on $M$, thus ensuring the scalability of the proposed solution.\\\\\n\nExpression \\eqref{eq:SINR_CD} is plotted in Figure \\ref{fig:SINR_vs_mu_M128_K16}, showing SINR vs $\\mu$ for CD under different SNR values and the step-size according to \\eqref{eq:mu_m}. As expected, the optimal $\\mu$ approaches 1 as the SNR grows. Simulation results show a good match with \\eqref{eq:SINR_CD}. The curve with $\\mu_{0}$ values obtained from \\eqref{eq:mu_init} is also plotted for a wide range of SNR. It is observed that the $\\mu_{0}$ value is reasonably close to the optimum for the SNR range depicted. Furthermore, the result is much closer to ZF than to MRC, whose values are $\\{40.5, 30.5, 20.5, 10.5\\}$dB and $\\{9.0, 9.0, 8.8, 6.8\\}$dB respectively for the different SNR values used in the figure.\n\nFigure \\ref{fig:SINR_vs_mu_sim} shows simulation results for the CD algorithm performance under different channels. For some of them we use a model (i.i.d. and WINNER II), while others are based on real measurements (Rich A and LOS A). For this comparison we use a different $\\frac{M}{K}$ ratio and the step-size according to \\eqref{eq:mu_m}. Rich A is a non-line-of-sight (NLOS) channel, rich in scatterers, while LOS A is a predominantly line-of-sight (LOS) channel. WINNER II is obtained from an NLOS scenario with a uniform linear array at the BS, with $M$ elements separated by $\\lambda$\/2. Users are randomly located in a 300m$\\times$300m area, with the BS at the center. It is noticed that rich channels (i.i.d. and WINNER II) provide better performance. The SINR levels reached by ZF are \\{20.9, 20.9, 19.8, 17.6\\}dB and for MRC they are \\{14.3, 15.2, 7.8, 4.8\\}dB, in both cases for the i.i.d., WINNER II, Rich A and LOS A channels, respectively. It is also noticed that the CD performance lies in between ZF and MRC for these scenarios.\n\nFigure \\ref{fig:SINR_vs_M_over_K_SNR0} shows SINR versus $\\frac{M}{K}$ for $M=128$ and SNR = 0dB. The SINR for CD is shown comparing the effect of using $\\mu^{*}$ and $\\mu_{0}$ according to \\eqref{eq:mu_init}. 
We observe that $\\frac{M}{K} \\approx 10$ (equivalent to $K\\approx12$) is the preferred working point, where the SINR reaches its maximum value and $\\mu_{0}$ gives the same result as $\\mu^{*}$. We also compare the performance with the ZF and MRC algorithms.\n\nAs presented in Subsection \\ref{sub:multiple-iter}, the algorithm can be extended to perform multiple iterations through the array in order to increase the performance. Figure \\ref{fig:SINR_vs_mu_sim_num_iter} shows SINR versus $\\mu$ for different numbers of iterations through the array, together with ZF for comparison. From the figure we can notice that the maximum SINR increases after each iteration, approaching ZF. It is also relevant to note that $\\mu^{*}$ changes with the number of iterations. \n\n\\begin{figure}[t]\\centering\n\t\\includegraphics[width=1\\linewidth]{SINR_vs_M_over_K_SNR0}\n\t\\vspace*{-4mm}\n\t\\caption{SINR (dB) versus $\\frac{M}{K}$ for SNR=0dB and M=128. CD SINR is plotted for the cases where $\\mu^{*}$ (dashed) and $\\mu_{0}$ (solid) are used. i.i.d. channel.}\n\t\\label{fig:SINR_vs_M_over_K_SNR0}\n\\end{figure}\n\n\\begin{figure}\\centering\n\t\\includegraphics[width=1\\linewidth]{SINR_vs_mu_sim_num_iter}\n\t\\vspace*{-4mm}\n\t\\caption{SINR vs. $\\mu$ for $M$=128, $K=16$. 16QAM. i.i.d. channel. SNR=0dB. SINR after a certain number of iterations through the array. ZF added for comparison.}\n\t\\label{fig:SINR_vs_mu_sim_num_iter}\n\\end{figure}\n\n\n\\subsubsection{BER}\nBER versus SNR is shown in Figure \\ref{fig:BER_vs_SNR} under the i.i.d. channel for three different methods: CD, ZF and MRC. CD is shown using two different values for $\\mu$: 1 and $\\mu^*$. The great impact of the selected $\\mu$ is noticeable, highlighting the importance of selecting an appropriate value.\n\nThe effect of non-ideal CSI on the BER is shown in Figure \\ref{fig:BER_vs_SNR_non-ideal-CSI} for ZF and CD (for $\\mu^{*}$). The non-ideal CSI is modeled as ideal CSI plus a noise contribution (complex normally distributed) with a variance equal to $N_{0}$, and therefore it depends inversely on the SNR. No boosting in pilots is used. As can be observed, for SNR$<$0dB the SNR gap is very small and it increases as the SNR increases, in a similar fashion to the ideal-CSI case. For SNR$>$0 the SNR gap in both cases is similar.\n\n\\begin{figure}\\centering\n\t\\includegraphics[width=1\\linewidth]{BER_vs_SNR_M128_K16_16QAM_IID}\n\t\\vspace*{-4mm}\n\t\\caption{BER vs. SNR for $M$=128, $K=16$. 16QAM. i.i.d. channel.}\n\t\\label{fig:BER_vs_SNR}\n\\end{figure}\n\n\\begin{figure}\\centering\n\t\\includegraphics[width=1\\linewidth]{BER_vs_SNR_M128_K16_16QAM_IID_non-ideal-CSI}\n\t\\vspace*{-4mm}\n\t\\caption{BER vs. SNR for $M$=128, $K=16$. 16QAM. i.i.d. channel. Comparison between ideal and non-ideal CSI.}\n\t\\label{fig:BER_vs_SNR_non-ideal-CSI}\n\\end{figure}\n\n\\subsection{Complexity \\& Timing}\nIn this subsection we analyze the complexity of the proposed solution from three different domains: computational complexity (data processing), inter-connection throughput (data movement) and memory (data storage). Timing, in the form of total system latency, is also analyzed.\n\nFor this analysis we assume a frame structure based on OFDM, which contains one dedicated OFDM symbol per frame for channel estimation based on orthogonal pilots, where each pilot is assigned to one of the users consecutively. The other symbols convey users' data. Under the TDD assumption, some of them are used for DL and others for UL. 
We also assume that all RPUs perform IFFT\/FFT in parallel with an output data-rate of $\\frac{N_{\\mathrm{u}}}{T_{\\mathrm{OFDM}}}$.\n\n\\begin{figure*}[ht]\n\t\\footnotesize\n\t\\centering\n\t\\psfrag{P1}{$P_1$}\n\t\\psfrag{P2}{$P_2$}\n\t\\psfrag{P3}{$P_3$}\n\t\\psfrag{PN}{$P_N$}\n\t\\psfrag{D1}{$D_1$}\n\t\\psfrag{D2}{$D_2$}\n\t\\psfrag{D3}{$D_3$}\n\t\\psfrag{DN}{$D_N$}\n\t\\psfrag{M1}{$M_1$}\n\t\\psfrag{M2}{$M_2$}\n\t\\psfrag{M3}{$M_3$}\n\t\\psfrag{MN}{$M_N$}\n\t\\psfrag{C1}{$C_1$}\n\t\\psfrag{C2}{$C_2$}\n\t\\psfrag{C3}{$C_3$}\n\t\\psfrag{CN}{$C_N$}\n\t\\psfrag{W11}{$w_{1}^{(1)}$}\n\t\\psfrag{W12}{$w_{1}^{(2)}$}\n\t\\psfrag{W13}{$w_{1}^{(3)}$}\n\t\\psfrag{W1N}{$w_{1}^{(N)}$}\n\t\\psfrag{W21}{$w_{2}^{(1)}$}\n\t\\psfrag{W22}{$w_{2}^{(2)}$}\n\t\\psfrag{W23}{$w_{2}^{(3)}$}\n\t\\psfrag{W2N}{$w_{2}^{(N)}$}\n\t\\psfrag{WM1}{$w_{M}^{(1)}$}\n\t\\psfrag{WM2}{$w_{M}^{(2)}$}\n\t\\psfrag{WM3}{$w_{M}^{(3)}$}\n\t\\psfrag{WMN}{$w_{M}^{(N)}$}\n\t\\psfrag{M1}{$M_1$}\n\t\\psfrag{M2}{$M_2$}\n\t\\psfrag{M3}{$M_3$}\n\t\\psfrag{MN}{$M_N$}\n\t\\psfrag{ant1}{$1$}\n\t\\psfrag{ant2}{$2$}\n\t\\psfrag{ant3}{$3$}\n\t\\psfrag{antM}{$M$}\n\t\\psfrag{OFDM1}{$\\mathrm{OFDM} 1$}\n\t\\psfrag{OFDM2}{$\\mathrm{OFDM} 2$}\n\t\\psfrag{OFDM3}{$\\mathrm{OFDM} 3$}\n\t\\psfrag{OFDML}{$\\mathrm{OFDM} L$}\n\t\\psfrag{A1}{$\\mathbf{A}_{1}^{(n)}$}\n\t\\psfrag{A2}{$\\mathbf{A}_{2}^{(n)}$}\n\t\\psfrag{A3}{$\\mathbf{A}_{3}^{(n)}$}\n\t\\psfrag{AM1}{$\\mathbf{A}_{M-1}^{(n)}$}\n\t\\psfrag{A11}{$\\mathbf{A}_{1}^{(1)}$}\n\t\\psfrag{A1N}{$\\mathbf{A}_{1}^{(N)}$}\n\t\\psfrag{A21}{$\\mathbf{A}_{2}^{(1)}$}\t\n\t\\psfrag{A2N}{$\\mathbf{A}_{2}^{(N)}$}\n\t\\psfrag{TS}{$\\cdots$}\n\t\\psfrag{TPRB}{$T_{\\mathrm{PRB}}$}\n\t\\includegraphics[width=1.0\\linewidth]{time_diagram.eps}\n\t\\caption{Time diagram representing formulation and filtering\/precoding activities performed in the antenna modules. Each OFDM symbol is split into $N_\\mathrm{PRB}$ blocks ($N$ in the figure) in the same order as data come out of any of the receiver FFT. Those blocks which contains pilots are shown as $P_{i}$, while those carrying data are denoted as $D_{i}$. Channel estimation is performed during $C_{i}$ blocks, while formulation is done in $\\w_{i}$ blocks. Filtering\/precoding data is carried out during the MIMO processing blocks, named $\\mathrm{M}_{i}$. As it can be observed, all antennas perform their tasks simultaneously, while formulation is done sequentially as a matrix $\\A^{(n)}$ passes through the array. In total, $N$ matrices are passed sequentially through antenna $m$, corresponding to $\\A_{m}^{(n)}, n=1 \\cdots N$. $\\w_{i}$ vectors need to be available in the antenna modules before the corresponding data comes out of the receiver FFT so it can be properly processed. Daisy-chain topology exploits the parallelism of the operations by allowing the pipeline of the operations and the fully usage of all dedicated links simultaneously.}\n\t\\label{fig:time_diagram}\n\\end{figure*}\n\nWe can exploit channel correlation based on the Physical Resource Block (PRB) concept in 3GPP. A PRB is a region in frequency-time domain where the channel response is assumed to be approximately constant across all subcarriers within that PRB. Within an OFDM symbol, the number of subcarriers in each PRB and the number of PRB per symbol, defined as $N_{\\mathrm{sc,PRB}}$ and $N_{\\mathrm{PRB}}$ respectively, are related as follows: $N_{\\mathrm{u}} = N_{\\mathrm{PRB}} N_{\\mathrm{sc,PRB}}$. 
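\nAs a quick numerical check with the 5G NR values used later in this section, $N_{\\mathrm{sc,PRB}}=12$ and $N_{\\mathrm{PRB}}=275$ indeed give the assumed number of active subcarriers:\n\\begin{verbatim}\nN_sc_PRB, N_PRB = 12, 275\nprint(N_PRB*N_sc_PRB)   # 3300 active subcarriers (N_u)\n\\end{verbatim}\n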
We define $T_{\\mathrm{PRB}}$ as the time needed by $N_{\\mathrm{sc,PRB}}$ consecutive subcarriers to come out the FFT.\n\nFor each PRB we have a different channel matrix and also MIMO model as in \\eqref{eq:ul_model} and \\eqref{eq:dl_model}. Then, it is required to have a unique set of vectors $\\w_m$ and $\\p_m (m=1...M)$ per antenna, as in \\eqref{eq:linear_det} and \\eqref{eq:linear_prec_i}, for uplink detection and downlink precoding respectively. The phase where these vectors are computed is named $\\textit{formulation}$, while the phase where user's data is processed is named $\\textit{filtering}$ and $\\textit{precoding}$ for UL and DL respectively. To minimize data buffering, formulation needs to be completed before filtering\/precoding starts. This imposes the constraint that the formulation phase needs to be finished within one OFDM symbol, or in other words, all antennas need to obtain these vectors and the matrix $\\A$ needs also to pass through the array within one OFDM symbol. A diagram of the main activities involved and their timing relationship is shown in Figure \\ref{fig:time_diagram}. The analysis assumes that the processing and data transmission are pipelined in each RPU so they concurrently operate.\n\n\\subsubsection{Computational complexity}\n\n\\begin{itemize} \n\\item Formulation phase:\nThe number of complex multiplications needed to formulate one precoding\/filtering vector per antenna are $C_{\\mathrm{form}} \\approx 2K^{2}$, which represents the matrix-vector product to obtain $\\w_{m}$ and the outer product to update $\\A_{m}$ according to algorithm \\ref{algo:CD}. Other possible required operations such as norm, square root or division are assumed to be negligible.\n\n\\item Filtering phase:\nDuring the filtering phase, each RPU performs the required operations for UL detection. Vectors $\\w_{m}$ are applied to all observations (data subcarriers), $y^{u}_{m}$, under the same PRB. The complexity measured in number of complex multiplications per antenna and per $N_{\\mathrm{sc,PRB}}$ subcarriers is $C_{\\mathrm{filt}} = KN_{\\mathrm{sc,PRB}}$.\n\n\\item Precoding phase:\nDuring the precoding phase, each RPU performs the operations required by \\eqref{eq:linear_prec_i}. Similarly to the filtering case, the same vector $\\p_{m}$ is applied to all data vectors $x^{d}_{m}$ under same PRB. The complexity measured in number of complex multiplications per antenna and PRB is $C_{\\mathrm{prec}} = KN_{\\mathrm{sc,PRB}}$.\n\\end{itemize}\n\n\\subsubsection{Inter-connection data-rate}\n\\label{section:data-rate}\n\\begin{itemize} \n\t\\item Formulation phase:\n\tThe average inter-connection data-rate during formulation can be calculated assuming that the average time to complete a transfer of a matrix $\\A$ is $T_{\\mathrm{PRB}}$, which leads to an average rate of\n\n\t\\begin{equation}\n\tR_{\\mathrm{d,form}} = \\frac{2w_{\\A} K^{2} N_{\\mathrm{PRB}}}{T_{\\mathrm{OFDM}}},\n\t\\nonumber\n\t\\end{equation}\n\n\twhere the numerator represents the amount of bits to transfer (all matrices $\\A$ in a symbol) and $w_{\\A}$ is the bit-width of $\\A$ entries (real\/imaginary parts).\n\t\n\t\\item Filtering phase:\n\tPartial filtering results from each RPU are added up through the chain. 
The average inter-connection data-rate per dedicated link can be calculated as\n\n\t\\begin{equation}\n\tR_{\\mathrm{d,filt}} = \\frac{2 w_{\\mathrm{d}} KN_{\\mathrm{u}}}{T_{\\mathrm{OFDM}}},\n\t\\nonumber\n\t\\end{equation}\n\n\twhere $w_{\\mathrm{d}}$ is the bit-width of baseband samples exchanged among RPUs.\n\t\n\t\\item Precoding phase:\n\tIn the precoding phase, the data vectors $\\xd$ are passed through the array for processing. Each node receives a vector which is passed to next node without any required pause (broadcasting). This leads to the same data-rate as in the filtering case.\n\t\n\\end{itemize}\n\n\\subsubsection{Latency}\nThe processing latency in the formulation phase for one antenna is given from next expression\n\\begin{equation}\n\\begin{split}\nT_{\\mathrm{proc,form}} &= \\frac{C_{\\mathrm{form}} T_{\\mathrm{CLK}}}{N_{\\mathrm{mult}}} \\\\\n&\\approx \\frac{2K^{2} T_{\\mathrm{CLK}}}{N_{\\mathrm{mult}}},\n\\end{split}\n\\nonumber\n\\end{equation}\nwhere $N_{\\mathrm{mult}}$ is the number of multipliers available in each RPU that can be used in parallel, $T_{\\mathrm{CLK}}$ is the clock period and we assume that one complex multiplication can be done within one $T_{\\mathrm{CLK}}$. Total latency is expressed as\n\\begin{equation}\n\\begin{split}\nLat_{form} &= M \\cdot T_{\\mathrm{proc, form}} + (N_{\\mathrm{RPU}}-1) \\cdot T_{\\mathrm{trans}},\n\\end{split}\n\\nonumber\n\\end{equation}\nwhere $N_{\\mathrm{RPU}}$ is the number of RPUs in the system, and $T_{\\mathrm{trans}}$ is the transmission latency between two consecutive RPUs. As said before, formulation needs to be finished within one $T_\\mathrm{OFDM}$, therefore the formulation latency is constrained as $Lat_{form} < T_{\\mathrm{OFDM}}$. This leads to an upper limit for M as\n\\begin{equation}\nM < \\frac{T_\\mathrm{OFDM}+T_{\\mathrm{trans}}}{T_{\\mathrm{proc, form}} + \\frac{T_{\\mathrm{trans}}}{M_{\\mathrm{RPU}}}},\n\\nonumber\n\\end{equation}\nwhere $M_{\\mathrm{RPU}}=\\frac{M}{N_{\\mathrm{RPU}}}$ is the number of antennas per RPU, which is considered as a design parameter. We can consider another limit, slightly lower than previous one but easier to extract conclusions as follows\n\\begin{equation}\nM < \\frac{T_\\mathrm{OFDM}}{T_{\\mathrm{proc, form}} + \\frac{T_{\\mathrm{trans}}}{M_{\\mathrm{RPU}}}}.\n\\nonumber\n\\end{equation}\n\nWe analyze three scenarios:\n\\begin{itemize} \n\t\\item $T_{\\mathrm{proc, form}} \\rightarrow 0$: When processing time is reduced, by increasing $N_{\\mathrm{mult}}$ or decreasing $T_{\\mathrm{CLK}}$, then transaction time becomes dominant and a reduction in the number of links allow for higher values of $M$. Formally, the upper value for $M$ scales proportionally to $M_{\\mathrm{RPU}}$ as follows\n\n\t\\begin{equation}\n\tM < M_{\\mathrm{RPU}} \\cdot \\frac{T_\\mathrm{OFDM}}{T_{\\mathrm{trans}}}.\n\t\\nonumber\n\t\\end{equation}\n\n\t\\item $T_{\\mathrm{trans}} \\rightarrow 0$: By decreasing the transaction time the upper limit of $M$ converges to a certain value, which is inversely proportional to the processing time as follows\n\n\t\\begin{equation}\n\tM < \\frac{T_\\mathrm{OFDM}}{T_{\\mathrm{proc, form}}}.\n\t\\nonumber\n\t\\end{equation}\n\n\t\\item $M_{\\mathrm{RPU}} \\gg \\frac{T_{\\mathrm{trans}}}{T_{\\mathrm{proc, form}}}$. 
When $M_{\\mathrm{RPU}}$ increases beyond a certain value, processing time becomes dominant and we obtain the same limit as previous point.\n\\end{itemize}\n\nIn case of filtering, its related processing is done in parallel as soon as data comes out of the FFT. However, partial results needs to be accumulated through the array from RPU 1 to $N_{\\mathrm{RPU}}$. This latency is uniquely due to data transfer through the dedicated links, then\n\\begin{equation}\n\\begin{split}\nLat_{\\mathrm{filt}} &= (N_{\\mathrm{RPU}}-1) \\cdot T_{\\mathrm{trans}}\\\\\n& < Lat_{\\mathrm{form}}\\\\ & < T_{\\mathrm{OFDM}}.\n\\end{split}\n\\label{eq:lat_filt}\n\\end{equation}\n\n\\subsubsection{Memory}\nIn terms of memory requirement, a centralized architecture requires to store the channel matrix $\\mathbf{H}$ fully at the CPU, previous to the inversion. There is a channel matrix per PRB, so CSI storage requires $M_{\\mathrm{H}} = 2 w_{\\mathrm{h}} M K N_{\\mathrm{PRB}}$ bits, where $w_{\\h}$ represents the bit-width of $\\Hbf$ entries (real\/imaginary parts), and in order to store the resulting square matrix, $(\\Hbf^{H}\\Hbf)^{-1}$ requires $M_{\\mathrm{inv}} = 2 w_{\\mathrm{h}} K^{2} N_{\\mathrm{PRB}}$ and therefore the total requirement is: $M_{\\mathrm{central}} = M_{\\mathrm{H}} + M_{\\mathrm{inv}} \\approx M_{\\mathrm{H}}$.\n\nIn the decentralized architecture, each antenna module needs to store the corresponding $\\h$, which gets replaced by $\\mathbf{w}$ after formulation. Both of them requires the same amount of memory if same bit-width is assumed, which is $M_{\\mathrm{w}} = 2 w_{\\mathrm{h}} K N_{\\mathrm{PRB}}$, and the total amount of memory in the system is: $M_{\\mathrm{daisy}} = M \\cdot M_{\\w} \\approx M_{\\mathrm{central}}$. Therefore, the total amount of memory required for $\\Hbf$ and $\\Wbf$ is the same in both systems, however the daisy-chain allows a uniform distribution of the memory requirements across all antenna modules, reducing design complexity, time and cost. As a drawback, we point out the need for data buffering during the filtering phase due to latency in the transfer of partial results, as discussed in the previous subsection (Latency). The buffer size for the RPU closest to the CPU (worst case) can, based on \\eqref{eq:lat_filt}, be obtained as\n\\begin{equation}\nM_{\\mathrm{buffer}} = \\frac{ 2 w_{\\mathrm{d}} K N_{\\mathrm{u}} Lat_{\\mathrm{filt}}}{T_{\\mathrm{OFDM}}},\n\\nonumber\n\\end{equation}\nwhich is shared by all antennas belonging to that RPU.\n\n\\subsection{Comparison}\n\\begin{table}\n\t\\renewcommand{\\arraystretch}{1.3} \n\t\\caption{Inter-connection data-rate comparison for different system parameters [$G\\lowercase{b\/s}$]}\n\t\\label{tab:data-rate}\n\t\\centering\n\t\\begin{tabular}{l*{5}{c}}\n\t\t\\hline\n\t\tScenario & & & & \\\\\n\t\t$M$\t & 32 & 64 & 128 & 256 \\\\\n\t\t$K$\t & 4 & 8 & 12 & 12 \\\\\n\t\t\\hline\n\t\t$R_{\\mathrm{d,form}} $ & 12.67 & 50.69 & 114.05 & 114.05\\\\\n\t\t$R_{\\mathrm{d,filt\/prec}} $ & 38.02 & 76.03 & 114.05 & 114.05\\\\\n\t\t\\hline\n\t\t$R_{\\mathrm{c}} $ & 304.13 & 608.26 & 1216.51 & 2433.02\\\\\n\t\\end{tabular}\n\\end{table}\n\nTable \\ref{tab:data-rate} shows a comparison of interconnection data-rate between daisy-chain and centralized architecture for different scenarios of $M$ and $K$. It is important to remark that $R_{\\mathrm{c}}$ corresponds to the aggregated data\/rate at the shared bus, while $R_{\\mathrm{d}}$ is the average data\/rate in each of the RPU-RPU dedicated links. 
For the centralized case, \\eqref{eq:R_central} is used, while for the daisy-chain case, the data-rates are detailed according to the different tasks (formulation, filtering and precoding) as described in Section \\ref{section:data-rate}. For the numerical results we employ $T_{\\mathrm{CLK}}=1\\mathrm{ns}$ and $w=12$. The rest of the system parameters are as follows, according to the worst case in 5G NR: $N_{\\mathrm{u}}=3300$, $N_{\\mathrm{PRB}}=275$, $N_{\\mathrm{sc,PRB}}=12$ and $T_{\\mathrm{OFDM}}=\\frac{1}{120\\mathrm{kHz}}$. We observe that for the $M=128$ case, the daisy-chain requires $\\sim 10\\%$ of the inter-connection data-rate needed by the centralized case. This number can decrease even further as $\\frac{M}{K}$ grows. As observed, the daisy-chain requires much lower inter-connection data-rates than its centralized counterpart. We remark that the total inter-connection data-rate in the decentralized case, which is $N_{\\mathrm{RPU}} R_{\\mathrm{d,form}}$, may easily exceed the centralized counterpart $R_{\\mathrm{c}}$; however, the decentralized architecture is able to distribute this data-rate equally across all links, considerably reducing the requirements for each of them.\n\n\\begin{table}\n\t\\renewcommand{\\arraystretch}{1.3} \n\t\\caption{Computational complexity comparison for different system parameters [$GOPS$]}\n\t\\label{tab:complexity}\n\t\\centering\n\t\\begin{tabular}{l*{5}{c}}\n\t\t\\hline\n\t\tScenario & & & & \\\\\n\t\t$M$\t & 32 & 64 & 128 & 256 \\\\\n\t\t$K$\t & 4 & 8 & 12 & 12 \\\\\n\t\t\\hline\n\t\t$C_{\\mathrm{d,ant}} $ & 1.58 & 3.17 & 4.75 & 4.75\\\\\n\t\t\\hline\n\t\t$C_{\\mathrm{c}} $ & 50.69 & 202.75 & 608.26 & 1216.51\\\\\n\t\\end{tabular}\n\\end{table}\n\nTable \\ref{tab:complexity} shows a computational complexity comparison between the centralized and decentralized architectures. $C_{\\mathrm{d,ant}}$ represents complex multiplications per second and per antenna in the decentralized case, while $C_{\\mathrm{c}}$ is the computational complexity required by the CPU in the centralized system. In both cases, only filtering\/precoding is taken into account, because formulation depends on how often channel estimation is available. The result of the comparison is meaningful. Even though the total complexity in the decentralized system is approximately equal to the centralized counterpart, that is, $M \\cdot C_{\\mathrm{d,ant}} \\approx C_{\\mathrm{c}}$, our decentralized solution divides the total computational complexity equally among all existing RPUs, considerably relaxing the requirements compared to the CPU in the centralized case. The relatively low number obtained for the daisy-chain allows the employment of cheap and general processing units in each RPU, in contrast to the centralized architecture, where the total complexity requirement falls on the CPU.\n\nNumerical results for latency are shown in Table \\ref{tab:latency} for $N_{\\mathrm{mult}}=8$, $T_{\\mathrm{trans}}=100\\mathrm{ns}$ and $N_{\\mathrm{RPU}}=\\frac{M}{4}$. These design parameters meet the constraint $Lat < T_\\mathrm{OFDM}$ up to $M=128$. For larger arrays there are different solutions: allow the latency to increase and buffer the needed input data (requiring larger memory), group more antennas in each RPU (which reduces the number of links but increases the complexity of the CPU controlling each RPU), and\/or employ low-latency link connections (reducing $T_{\\mathrm{trans}}$ at the expense of higher cost). 
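\nFor reproducibility, the $M=128$, $K=12$ entries of Tables \\ref{tab:data-rate}, \\ref{tab:complexity} and \\ref{tab:latency} can be recomputed directly from the expressions in this section; the short script below uses the parameter values stated above:\n\\begin{verbatim}\n# Recompute the M=128, K=12 table entries from the closed-form expressions\nM, K, N_RPU = 128, 12, 128//4\nw = w_A = w_d = 12                   # bit-widths (real and imaginary parts)\nN_u, N_PRB = 3300, 275               # active subcarriers, PRBs per symbol\nT_OFDM, T_CLK = 1/120e3, 1e-9\nT_trans, N_mult = 100e-9, 8\n\nR_c      = 2*w*M*N_u/T_OFDM          # ~1216.5 Gbps on the centralized bus\nR_d_form = 2*w_A*K**2*N_PRB/T_OFDM   # ~114.0 Gbps per link (formulation)\nR_d_filt = 2*w_d*K*N_u/T_OFDM        # ~114.0 Gbps per link (filtering)\nC_d_ant  = K*N_u/T_OFDM              # ~4.75 G complex mult/s per antenna\nC_c      = M*K*N_u/T_OFDM            # ~608.3 G complex mult/s at the CPU\nT_proc   = 2*K**2*T_CLK/N_mult       # formulation processing time per antenna\nLat      = M*T_proc + (N_RPU - 1)*T_trans   # ~7.7 us, about 0.92 T_OFDM\nprint(R_c, R_d_form, R_d_filt, C_d_ant, C_c, Lat, Lat/T_OFDM)\n\\end{verbatim}\n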
It is relevant to note that the $T_\\mathrm{OFDM}$ value in the table is the worst case, $1\/120\\mathrm{kHz}$.\n\\begin{table}\n\t\\renewcommand{\\arraystretch}{1.3} \n\t\\caption{Latency comparison for different system parameters}\n\t\\label{tab:latency}\n\t\\centering\n\t\\begin{tabular}{l*{5}{c}}\n\t\t\\hline\n\t\tScenario & & & & \\\\\n\t\t$M$\t & 32 & 64 & 128 & 256 \\\\\n\t\t$K$\t & 4 & 8 & 12 & 12 \\\\\n\t\t\\hline\n\t\t$Lat(\\mu s)$ & 0.83 & 2.52 & 7.71 & 15.52\\\\\n\t\t$Lat\/T_{\\mathrm{OFDM}}$ & 0.10 & 0.30 & 0.92 & 1.86\\\\\n\t\\end{tabular}\n\\end{table}\n\nIn Table \\ref{tab:memory} a comparison between both systems from a memory perspective is shown. If $w_{\\h}=12$ and $N_{\\mathrm{PRB}}=275$ are assumed, then for the $M=128$ case, each antenna module in the daisy-chain only needs $\\sim 80 \\mathrm{kbits}$ of memory and each RPU needs at most $354 \\mathrm{kbits}$ for buffering, while in the centralized architecture the central processor requires $\\sim 11 \\mathrm{Mbits}$, which is a challenging number for a cache memory. The memory requirement grows proportionally to $M$ in the centralized system, while that is not the case for $M_{\\w}$. In order to reduce the buffer size we can group more antennas in each RPU, so that all of them share the same buffer memory.\n\\begin{table}\n\t\\renewcommand{\\arraystretch}{1.3} \n\t\\caption{Memory requirement comparison for different system parameters [$kbits$]}\n\t\\label{tab:memory}\n\t\\centering\n\t\\begin{tabular}{l*{5}{c}}\n\t\t\\hline\n\t\tScenario & & & & \\\\\n\t\t$M$\t & 32 & 64 & 128 & 256 \\\\\n\t\t$K$\t & 4 & 8 & 12 & 12 \\\\\n\t\t\\hline\n\t\t$M_{\\w} (ant) $ & 26.4 & 52.8 & 79.2 & 79.2\\\\\n\t\t$M_{\\mathrm{buffer}} (RPU) $ & 26.6 & 114.1 & 353.6 & 718.5\\\\\t\n\t\t\\hline\n\t\t$M_{\\mathrm{H}} $ & 844.8 & 3379.2 & 10137.6 & 20275.2\\\\\n\t\t$M_{\\mathrm{inv}} $ & 105.6 & 422.4 & 950.4 & 950.4\\\\\n\t\\end{tabular}\n\\end{table}\n\n\\section{Conclusions}\n\\label{section:conclusions}\n\nIn this article we proposed an architecture for the Massive MIMO base-station for uplink detection and downlink precoding, which is based on fully distributing the required baseband processing across all antenna modules in the system. The main goal is to reduce the inter-connection data-rate needed to carry out the processing tasks and to enable the scalability needed in Massive MIMO. We continued our previous work on this topic \\cite{jesus}, \\cite{muris} with a detailed introduction to the CD algorithm and its application to the Massive MIMO case. We also presented an extensive analysis of the expected performance of the system, the inter-connection data-rate, complexity, latency and memory requirements. The results show that there is a performance loss compared to ZF but, unlike MF, our proposed method does not have an error floor from which we cannot recover, while the inter-connection data-rate is distributed, avoiding the aggregation of the centralized approach. 
At the same time, complexity and memory requirements per antenna module are easy to meet with commercial off-the-self hardware, which proves the scalability of this solution.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nIn high dimensional linear regression\n\\begin{align*}\ny=X\\beta^\\ast+w,\\quad\\mbox{with noise }w \\sim \\mathcal{N}(0,\\sigma^2 I),\n\\end{align*}\nthe goal is to parsimoniously predict the response $y\\in \\mathbb R^n$ as a linear combination of a large number of covariates $X=(X_1,X_2,\\ldots,X_p)\\in\\mathbb R^{n\\times p}$, and conduct statistical inference on the linear combination coefficients $\\beta^\\ast=(\\beta_1^\\ast,\\ldots,\\beta_p^\\ast)^T\\in\\mathbb R^p$ \\citep{tibshirani1996regression,donoho2006compressed}. By leveraging on certain lower dimensional structure in the regression coefficient vector $\\beta^\\ast\\in\\mathbb R^p$ such as a sparsity constraint $s=\\|\\beta^\\ast\\|_0\\ll n$, where $\\|\\beta^\\ast\\|_0$ counts the number of nonzeros in $\\beta^\\ast$, the number $p$ of covariates is allowed to be substantially larger than the sample size $n$. Due to the intrinsic computational hardness in dealing with the $\\ell_0$ metric reflecting sparsity, people instead use different metrics as surrogates, and cast the estimation problem into various convex or nonconvex optimization problems. Many approaches have been proposed for high dimensional regression by solving certain penalized optimization problem, including basis pursuit \\citep{chen2001atomic}, the Lasso \\citep{tibshirani1996regression}, the Dantzig selector \\citep{candes2007dantzig}, SCAD \\citep{fan2001variable}, MCP \\citep{zhang2010nearly} and so on. In this work, we focus on the recovery of $\\beta^\\ast\\in\\mathbb R^p$ without explicitly specifying a penalty.\n\nRecent work~\\citep{hoff2017lasso} shows that through a change-of-variable (over-parametrization) via Hadamard product parametrization, the Lagrangian (dual) form of the non-smooth convex optimization problem for the Lasso~\\eqref{Eqn:CS}:\n\\begin{equation}\\label{Eqn:CS}\n\\min_{\\beta} \\frac{1}{2n}\\|X\\beta-y\\|^2+\\lambda \\|\\beta\\|_{1}, \\quad\\mbox{with }\\|\\beta\\|_1:\\,=\\sum_{j=1}^p|\\beta_j|,\n\\end{equation}\ncan be reformulated as a smoothed optimization problem at a cost of introducing non-convexity. Due to the smoothness feature, simple and low-cost first-order optimization methods such as gradient descent and coordinate descent can be applied to recover $\\beta^\\ast$. Despite the non-convexity and exponentially many stationary points induced by the change-of-variable, these first-order algorithms exhibit encouraging empirical performance~\\citep{hoff2017lasso}.\n\nIn this work, we consider the same Hadamard product over-parametrization $\\beta = g\\circ l$ as in \\cite{hoff2017lasso}, where $g, \\,l\\in\\mathbb R^p$ and $\\circ$ denotes the Hadamard product (element-wise product). Instead of solving the penalized optimization problem~\\eqref{Eqn:CS}, we consider directly applying the gradient descent to the quadratic loss function\n\\begin{equation}\\label{eq_opt}\nf(g,l)=\\frac{1}{2n}\\,\\|X(g\\circ l)-y\\|^2.\n\\end{equation}\nIn the noiseless case where $\\sigma=0$, minimizing $f(g,\\,l)$ jointly over $(g,\\,l)$ is a highly non-convex optimization problem with exponentially many saddle points. 
To see this, notice that each non-zero pattern of $\\beta$ corresponds to at least one saddle point by choosing $g_j=l_j=0$ for each $j$ such that $\\beta_j=0$. In addition, due to the column rank deficiency of the design matrix $X$ (for example, when $p>n$), there are infinitely many global minimizers of \\eqref{eq_opt} as potential convergent points of the gradient descent. \nInterestingly, we show that despite these seemingly hopeless difficulties, in the noiseless case if we initialize the gradient descent arbitrarily close to $g=l=0$, then under\nthe prominent Restricted Isometry Property (RIP) condition~\\citep{candes2008restricted} on the design matrix $X$, a properly tuned gradient descent converges to least $\\ell_1$-norm solution within error $\\varepsilon$ in $\\mathcal O(\\log\\frac{C}{\\varepsilon})$ iterations, where constant $C$ depends on the RIP constant, step size of the gradient descent, and some other characteristics of the problem. Our proofs borrow ideas from \\cite{li2018algorithmic}, where they prove the algorithmic convergence of matrix factorized gradient descent in the context of noiseless matrix sensing under the RIP.\n\n\nIn high dimensional regression, the usual regularized least square is known to suffer from the so-called saturation phenomenon \\citep{vito2005learning,yao2007early}, where the overall estimation error is dominated by a bias term due to the penalty. In particular, since regularization is artificially introduced for restricting the ``effective size'' of the parameter space, the resulting estimator may be deteriorated and the estimation error cannot fall below the penalty level to adapt to a possibly faster convergence rate. For example, the estimator by solving the Lasso achieves the minimax rate of $\\sqrt{s}\\lambda\\asymp\\sqrt{s \\log p\/n}$. However, this worse-case rate only happens when there exist weak signals, meaning that some nonzero $\\beta^\\ast_j$'s have a borderline magnitude of order $\\sqrt{s\\log p\/n}$. In fact, if all signals are sufficiently strong, or significantly larger this borderline magnitude,\nthen the faster dimension-independent parametric rate $\\sqrt{s\/n}$ is attainable. For regularized approaches such the Lasso, one possible way to remove the penalty-induced bias term (whose order is $\\lambda$) is to refit the model with the selected variables. However, this two stage procedure requires stringent assumptions on the minimal signal strength to guarantee variable selection consistency for the first stage, and will suffer from weak signals. Interestingly, we show that by combining the Hadamard product over-parametrization with early stopping, a widely used regularization technique in boosting \\citep{zhang2005boosting} and nonparametric regression \\citep{raskutti2014early}, our method can overcome the saturation issue and lead to more accurate estimation. 
\nMore precisely, in the presence of random noise $w$ in the linear model, the solution path \nof minimizing the quadratic loss function~\\eqref{eq_opt} as we increase the gradient descent iteration still tends to converge to the least $\\ell_1$-norm solution that will overfit the data.\nFortunately, by terminating the gradient descent updating procedure earlier within a proper number of iterations, we may find a solution that optimally balances between the model complexity (reflected by the increasing $\\ell_1$-norm of the iterate) and goodness fit of the model, akin the bias-variance trade-off.\nIn particular, we show that the estimator can adapt to an optimal convergence rate of $\\sqrt{s\/n}$ when all signals are relatively strong. Generally, when both strong signals and weak signals exist, our estimator attains the rate $\\sqrt{s_1\/n}+\\sqrt{s_2 \\log p\/n}$ (with $s_1, s_2$ denoting thenumber of strong signals and weak signals, respectively).\n\n\nOur result also complements the recent surge of literature on over-parametrization for implicit regularization of the first-order iterative method for non-convex optimization in machine learning. \\cite{gunasekar2017implicit} introduce the phenomenon of implicit regularization in matrix factorization, where they empirically observe the convergence of gradient methods in matrix factorization problem to the minimal nuclear norm solution as the the initial value tends to zero. However, they only provide some heuristic illustration under some hard-to-check assumptions such as the continuity of the solution relative to the change in the initialization. Later, \\cite{li2018algorithmic} rigorously prove the implicit regularization in matrix sensing problem under a matrix RIP condition. Some other very recent works such \\cite{pmlr-v80-gunasekar18a} and \\cite{soudry2018implicit} study implicit regularizations in mirror descent and in classification problems. Note that all above implicit regularization literatures only consider data that are either noiseless (regression) or perfectly separable (classification). To our best knowledge, we are the first to rigorously study and utilize implicit regularization in high dimensional linear regression where responses are noisy. \n\n\nIn a nutshell, we show that through a simple change-of-variable, the non-smooth $\\ell_1$- penalized optimization problem~\\eqref{Eqn:CS} can be transformed to an unconstrained smooth quadratic loss minimization one; moreover, a simple gradient descent initialized near zero efficiently solves this non-convex optimization problem with provable guarantees. Furthermore, our method enjoy several advantages over existing procedures for high dimensional linear regression under sparsity constraints. First, our method is computationally efficient --- its time complexity is $O(np)$, which is linear in both $n$ and $p$. Second, despite the non-convexity nature, our method has a natural initialization that provably leads the optimal solution.\nIn comparison, penalized $M$-estimators based on non-convex penalties such as SCAD and MCP require stringent conditions on their initializations: to obtain good estimators, they require good initial values that are sufficiently closed to the truth (theoretically) or satisfy some restricted strong convexity conditions \\citep{zhao2018pathwise}, otherwise their optimization algorithms will suffer from bad local minima with bad generalization errors. In contrast, our algorithm only requires the initialization to be closed to zero. 
Moreover, unlike penalized approaches such as SCAD and MCP, where both parameters for the noise level and the concavity of the penalty need to be tuned, our method only need to tune the number of iterations. \n\n\n\n\nTo conclude, our main contributions with respect to the relative literatures are as follows:\n\\begin{enumerate}\n\t\\item We propose an estimator by combining early stopping with implicit regularization to overcome the saturation issues in high dimensional regression with explicit regularizations;\n\t\\item Unlike recent implicit regularization literatures that exclusively focus on noiseless data, we are the first to rigorously study the effect of implicit regularization for noisy data;\n\t\\item From computational perspective, we transform the non-smooth optimization problem to an unconstrained smooth quadratic loss minimization problem for which standard optimization tools can be applied.\n\\end{enumerate}\n\n\n\\section{Background and Our Method}\nTo begin with, we formally introduce the setup and notations used throughout the paper. After that, we introduce the intuition for our new implicit regularized algorithm for high dimensional linear regression via Hadamard product parameterization.\n\n\\subsection{Setup and notations}\nRecall that $\\beta^\\ast$ is the unknown $s$-sparse signal in $\\mathbb{R}^{p}$ to be recovered. Let $S\\subset\\{1,\\ldots,p\\}$ denote the index set that corresponds to the nonzero components of $\\beta^\\ast$, and the size $|S|$ of $S$ is then $s$.\nFor two vectors $g, l\\in \\mathbb{R}^{p}$, we call $\\beta = g\\circ l \\in\\mathbb R^p$ as their Hadamard product, whose components are $\\beta_j = g_jl_j$ for $j=1,\\ldots p$. For two vectors $a,b\\in\\mathbb R^p$, we use the notation $a\\geq b$ to indicate element-wise ``great than or equal to''.\nWhen there is no ambiguity, we use $\\beta^2=\\beta\\circ \\beta$ to denote the self-Hadamard product of $\\beta$. For a function $f:\\mathbb R^p\\times \\mathbb R^p \\to \\mathbb R$, $(g,\\,l)\\mapsto f(g,\\,l)$, we use $\\nabla_g f$ and $\\nabla_l f$ to denote its partial derivative relative to $g$ and $l$, respectively. \nFor any index set $J\\subset \\{1,\\ldots,p\\}$ and vector $a\\in \\mathbb R^p$, we use $a_J=(a_j:\\, j\\in J)$ to denote the sub-vector of $a$ formed by concatenating the components indexed by $J$. Let $\\mathbf 1\\in\\mathbb R^p$ denote the vector with all entries as $1$, and $I$ as the identity matrix in $\\mathbb R^p$. Let $I_J$ be the diagonal matrix with one on the $j$th diagonal for $j\\in J$ and $0$ elsewhere. For a vector $a\\in\\mathbb R^p$, we use $\\|a\\|$ to denote its vector-$\\ell_2$-norm, and $\\|a\\|_\\infty=\\max_{j}|a_j|$ its $\\ell_\\infty$-norm. Let $\\mbox{Unif}(a,b)$ to denote the uniform distribution over interval $(a,b)$. For a symmetric matrix $A$, let \n$\\lambda_{\\min}(A)$ denote its smallest eigenvalue. For two sequences $\\{a_n\\}$ and $\\{b_n\\}$, we use the notation $a_n\\lesssim b_n$ or $a_n\\gtrsim b_n$ to mean there exist some constant $c$ and $C$ independent of $n$ such that $a_n \\leq Cb_n$ or $a_n \\geq cb_n$ for all $n<0$, respectively, and $a_n\\asymp b_n$ to mean $a_n\\lesssim b_n$ and $b_n \\lesssim a_n$.\n\n\\subsection{Gradient descent with Hadamard product parametrization}\nAs we mentioned in the introduction, we consider augmenting the $p$-dimensional vector $\\beta$ into two $p$-dimensional vectors $g,\\,l$ through $\\beta=g\\circ l$. 
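In this notation the loss~\\eqref{eq_opt} reads $f(g,l)=(2n)^{-1}\\,\\|X(g\\circ l)-y\\|^2$, and its partial gradients follow from a one-line chain-rule computation:
\\begin{align*}
\\nabla_g f(g,l) = l \\circ \\big[n^{-1}\\,X^{T}\\big(X(g\\circ l)-y\\big)\\big],
\\qquad
\\nabla_l f(g,l) = g \\circ \\big[n^{-1}\\,X^{T}\\big(X(g\\circ l)-y\\big)\\big].
\\end{align*}
These are exactly the quantities used in the updating formulas below.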
Instead of solving the Lasso problem~\\eqref{Eqn:CS} with $\\beta$ replaced with $g\\circ l$, we consider directly applying gradient descent to the quadratic loss function $f(g,l)=(2n)^{-1}\\|X(g\\circ l)-y\\|^2$. \nIn particular, we apply the updating formula $g_{t+1}=g_t-\\eta \\nabla f_g(g_t,\\,l_t)$, $l_{t+1}=l_t-\\eta \\nabla_l f(g_t,l_t)$, with random initial values $g_0$ and $l_0$ chosen close enough to $0$ (notice that $(0,0)$ is a saddle point of the objective function, so we need to apply a small perturbation $\\alpha$ on the initial values).\nThis leads to the following algorithm:\n\\smallskip\n\n\\begin{algorithm}[H]\n\\KwData{Design matrix $X\\in\\mathbb R^{n\\times p}$,\\, measurement vector $y\\in\\mathbb R^n$, initialization magnitude $\\alpha$, step size $\\eta$, and stopping threshold $\\epsilon$;}\n Initialize variables $[g_0]_j\\overset{iid}{\\sim}\\mbox{Unif}(-\\alpha,\\alpha)$, $[l_0]_j\\overset{iid}{\\sim}\\mbox{Unif}(-\\alpha,\\alpha)$ for $j=1,\\ldots,p$, and iteration number $t=0$;\\\\\n \\While{$\\ \\|X(g_t\\circ l_t)-y\\|\/\\sqrt{n}>\\epsilon\\ $}{\n ${g}_{t+1}=g_t-\\eta \\ l_t \\circ \\big[n^{-1}\\,X^{T}\\big(X(g_t\\circ l_t)-y\\big)\\big]$;\\\\ \n $\\, {l}_{t+1}=l_t-\\eta \\ g_t \\circ \\big[n^{-1}\\,X^{T}\\big(X(g_t\\circ l_t)-y\\big)\\big]$; \\\\[0.2em]\n $\\, t=t+1$;\\\\\n}\n \\KwResult{Output the final estimate $\\widehat \\beta=g_t\\circ l_t$;\n }\\label{alg1}\n \\caption{Gradient Descent for linear regression}\n\\end{algorithm}\n\\smallskip\n\nAlgorithm~\\ref{alg1} is the standard gradient descent algorithm, and the iterates $(g_{t+1},l_{t+1})$ tend to converge to a stationary point $(g_\\infty, l_{\\infty})$ of $f(g,l)$ that satisfies the first order optimality condition $\\nabla f_g(g_\\infty,\\,l_\\infty) = 0$ and $\\nabla f_l(g_\\infty,\\,l_\\infty) = 0$. However, stationary points of $f(g,l)$ can be local minimum, local maximum, or saddle points (when the Hessian matrix $\\nabla^2_{g,l} f(g,l)$ contains both positive and negative eigenvalues).\nThe following result provides the optimization landscape of $f(g,l)$, showing that $f(g,l)$ does not have local maximum, all its local minimums are global minimum, and all saddle points are strict. The strict saddle points are saddle points with negative smallest eigenvalues for Hessian matrix.\n\n\\begin{lem}\\label{Lem:global_min}\n$f(g,l)=(2n)^{-1}\\|X(g\\circ l)-y\\|^2$ does not have local maximum, and all its local minimums are global minimum. In particular, $(\\bar g, \\bar l\\,)$ is a global minimum of $f(g,l)$ if and only if\n\\begin{align*}\nX^T\\big(X(\\bar g\\circ \\bar l)-y\\big) = 0.\n\\end{align*}\nIn addition, any saddle point $(g^\\dagger,l^\\dagger)$ of $f(g,l)$ is a strict saddle, that is,\n$\\lambda_{\\min}\\big(\\nabla^2_{g,l} f(g^\\dagger,l^\\dagger)\\big)<0$.\n\\end{lem}\n\n\\noindent According to the first order condition associated with $f(g,l)$\n\\begin{align*}\ng\\circ \\big[X^T\\big(X(g\\circ l)-y\\big)\\big]=l\\circ \\big[X^T\\big(X(g\\circ l)-y\\big)\\big] = 0,\n\\end{align*}\nthere could be exponentially many (at least $2^p-1$) saddle points as a solution to this equation, for example, for those $(g,l)$ satisfying\n\\begin{align*}\ng_A=l_A =0\\in\\mathbb R^{|A|}, \\qquad\\mbox{and}\\qquad \\big[X^T\\big(X(g\\circ l)-y\\big)\\big]_{A^c} = 0\\in \\mathbb R^{p-|A|},\n\\end{align*}\nfor any non-empty subset $A$ of $\\{1,\\ldots,p\\}$. Consequently, the gradient descent algorithm may converge to any of these bad saddle points. 
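Before discussing this issue further, it is useful to have a concrete implementation at hand. The following is a minimal NumPy sketch of Algorithm~\\ref{alg1}; the synthetic design, the sparsity pattern, the step size, the initialization magnitude and the stopping tolerance are illustrative choices of ours rather than recommendations.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 400
beta_true = np.zeros(p)
beta_true[:4] = [-1.0, 2.0, 2.0, 3.0]        # sparse ground truth (illustrative)
X = rng.standard_normal((n, p))
y = X @ beta_true                            # noiseless responses

alpha, eta, eps = 1e-5, 0.1, 1e-6            # initialization scale, step size, tolerance
g = rng.uniform(-alpha, alpha, size=p)       # [g_0]_j ~ Unif(-alpha, alpha)
l = rng.uniform(-alpha, alpha, size=p)       # [l_0]_j ~ Unif(-alpha, alpha)

t = 0
while np.linalg.norm(X @ (g * l) - y) / np.sqrt(n) > eps and t < 100000:
    grad = X.T @ (X @ (g * l) - y) / n       # n^{-1} X^T (X(g o l) - y)
    g, l = g - eta * l * grad, l - eta * g * grad   # simultaneous update of g_t and l_t
    t += 1

beta_hat = g * l                             # final estimate
print(t, np.linalg.norm(beta_hat - beta_true))
\\end{verbatim}
The random (rather than exactly zero) initialization in this sketch is not cosmetic: as just noted, a badly initialized gradient descent may end up at one of the bad saddle points.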
To see this, if we initialize $(g,l)$ in a way such that the components in the index set $A$ are zero, or $[g_0]_A=[l_0]_A=0$, then these components will remain zero forever in the gradient iterations, or $[g_t]_A=[l_t]_A=0$ for all $t>0$. Fortunately, the following result implies that as long as we use a random initialization for $(g,l)$ with continuous pdf over $\\mathbb R^{2p}$ as in Algorithm~\\ref{alg1}, then the gradient descent almost surely converges to a global minimum.\n\n\\begin{lem}\\label{Lem:GD_global_min}\nSuppose the step size $\\eta$ is sufficiently small. Then with probability one, Algorithm~\\ref{alg1} converges to a global minimum of $f(g,l)$.\n\\end{lem}\n\nIn the low-dimensional regime where the design matrix $X$ has full column rank, the solution $\\bar \\beta$ to the normal equation $X^T(X\\beta-y) = 0$ is unique, which is also the least squares estimator. Under this scenario, Lemma~\\ref{Lem:global_min} and Lemma~\\ref{Lem:GD_global_min} together certify that Algorithm~\\ref{alg1} will converge to this optimal least squares estimator. However, in the high-dimensional regime which is the main focus in the paper, the normal equation $X^T(X\\beta-y) = 0$ have infinitely many solutions, and it is not clear which solution the algorithm tends to converge to. For example, if we consider instead applying the gradient descent to the original parameter $\\beta$ in the objective function $(2n)^{-1}\\|X\\beta-y\\|^2$ with initialization $\\beta_0=0$, then the iterates will converge to the minimal $\\ell_2$-norm solution of the normal equation (see below for details). Interestingly, as we will illustrate in the following, under the Hadamard parametrization the gradient descent Algorithm~\\ref{alg1} now tends to converge to the minimal $\\ell_1$-norm solution under certain conditions for initialization, thereby inducing sparsity and naturally facilitating variable selection.\n\n\n\n\\subsection{Gradient descent converges to sparse solution}\\label{Sec:Heuristic}\nIn this subsection, we provide two different perspectives for understanding the following informal statement on the behavior of simple gradient descent for the loss function $f(g,\\,l)$ defined in~\\eqref{eq_opt} under the Hadamard product parameterization $\\beta =g\\circ l$. For simplicity, we assume that the response $y$ in the linear model is perfect, that is, the noise variance $\\sigma^2=0$, throughout this subsection.\n Then in the next subsection, we turn to general noisy observations, and propose methods that lead to optimal estimation when the true regression coefficient $\\beta^\\ast$ is sparse. \n\n\\paragraph{Informal Statement:} \\emph{If we initialize the algorithm to be arbitrarily close to $g=l=0$, then under suitable conditions on design $X$, a simple gradient descent converges to a solution of basis pursuit problem:\n\t\\begin{equation}\\label{bs}\n\t\\min_{\\beta\\in\\mathbb R^p}{\\|\\beta\\|_1} \\quad \\mbox{subject to} \\quad X\\beta =y.\n\t\\end{equation}}\nOur first perspective is based on the $\\ell_2$-norm implicit regularization in linear system, and the second is based on analyzing the gradient dynamical system as the limit of the gradient descent algorithm as the step size $\\eta\\to 0_{+}$. 
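Before turning to these perspectives, note that the least $\\ell_1$-norm solution appearing in the informal statement can be computed directly, which gives a concrete reference point for checking the statement numerically. Below is a small sketch based on the standard linear-programming reformulation of~\\eqref{bs}; the SciPy call and the synthetic data are illustrative, and any LP solver could be substituted.
\\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, p = 50, 200
beta_true = np.zeros(p)
beta_true[:3] = [1.5, -2.0, 1.0]             # illustrative sparse signal
X = rng.standard_normal((n, p))
y = X @ beta_true

# Basis pursuit as an LP in (beta_plus, beta_minus) >= 0:
#   min sum(beta_plus + beta_minus)  s.t.  X (beta_plus - beta_minus) = y
c = np.ones(2 * p)
res = linprog(c, A_eq=np.hstack([X, -X]), b_eq=y, bounds=(0, None), method="highs")
beta_l1 = res.x[:p] - res.x[p:]              # least l1-norm solution of X beta = y
print(np.linalg.norm(beta_l1 - beta_true))
\\end{verbatim}
With this reference point in hand, we return to the two perspectives just mentioned.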
However, both perspectives in this section are heuristic, and formal statements and proofs (based on a different strategy) will be provided in Section~\\ref{Sec:Theory}.\n\n\\paragraph{$\\ell_2$-norm implicit regularization perspective:} Consider the under-determined system $X\\beta=y$, where $X\\in\\mathbb R^{n\\times p}$ has full row rank.\nOur first intuition comes from the fact that a zero-initialized gradient descent algorithm over $\\beta\\in\\mathbb R^p$ for solving \n\\begin{align*}\n\\min_{\\beta\\in\\mathbb R^p} \\frac{1}{2n}\\,\\|X\\beta - y\\|^2:\\,= g(\\beta)\n\\end{align*}\nfinds a minimal $\\ell_2$-norm solution to $X\\beta=y$. \n\nIn fact, we know that any solution to the linear system $X\\beta=y$ takes the form of\n\\begin{align*}\n\\beta = X^{+} y + [I - X^+X]w,\\quad \\mbox{over all }w\\in\\mathbb R^p,\n\\end{align*}\nwhere $X^+=\\lim_{\\lambda\\to 0_+}(X^TX+\\lambda\\, I)^{-1}X^T$ is the Moore-Penrose inverse of $X$. Since $X(I-X^+X)=0$, we have \n\\begin{align*}\n\\|\\beta\\|^2 = \\|X^{+} y\\|^2 + \\| [I - X^+X]w\\|^2 \\geq \\|X^{+} y\\|^2,\n\\end{align*}\nimplying that $X^{+}y$ is the unique solution of $X\\beta=y$ in the column space of $X^T$, which is also the minimal $\\ell_2$-norm solution. \nNow since the gradient updating formula for $\\beta$, $\\beta_{t+1}=\\beta_t-\\eta X^T(X\\beta_t-y)\/n$, implies that the difference $\\beta_t-\\beta_0$ always lies in the column span of $X^T$. Let $\\beta_\\infty:\\,=\\lim_{t\\to\\infty}\\beta_t$ be the limiting point of the gradient algorithm. Then $\\beta_\\infty$ must be a solution to $X\\beta=y$. On the other hand , when $\\beta_0$ is initialized at zero, $\\beta_\\infty$ should also belong to the column span of $X^T$. These two properties combined imply that $\\beta_\\infty$ must be the minimal $\\ell_2$-norm solution $X^{+}y$.\n\nIn high dimensional linear regression with perfect observations, a popular class of penalization methods attempts find the minimal $\\ell_1$-norm solution to $X\\beta=y$ . \nUnder the Hadamard product parameterization $\\beta =g\\circ l$, the fact that gradient descent tends to find the minimal $\\ell_2$-norm solution suggests (this is not rigorous) that the gradient descent algorithm for jointly minimizing $f(g,\\,l)$ over $(g,\\,l)$ tends to converge to a solution to $X(g\\circ l) =y$ with a minimal $\\ell_2$-norm $\\sqrt{\\|g\\|^2+\\|l\\|^2}$. However, a minimal $\\ell_2$-norm solution to $X(g\\circ l) =y$ must satisfy $|g_j|=|l_j|$ for each $j=1,\\ldots,p$ (otherwise we can always construct another solution with strictly smaller $\\ell_2$-norm), which implies $\\sqrt{\\|g\\|^2+\\|l\\|^2} = \\sqrt{2}\\, \\|g\\circ l\\|_1=\\sqrt{2}\\,\\|\\beta\\|_1$. As a consequence, $\\beta_{\\infty}=g_{\\infty}\\circ l_{\\infty}$ should be the minimal $\\ell_1$-norm solution to $X\\beta = y$.\n\nAnother way to understand the difference in the evolutions of gradient descents for $f(g,l)$ and $h(\\beta)$ is by noticing that the gradient $\\nabla_{g_j} f(g,l) = l_j\\cdot \\nabla_{\\beta_j} h(\\beta)\\big|_{\\beta =g\\circ l}$ in the new parametrization, for each $j=1,\\ldots,p$, has an extra multiplicative factor of $l_j$ than the gradient $\\nabla_{\\beta_j} h(\\beta)$ in the usual least squares of minimizing $g(\\beta)$. It is precisely this extra multiplicative factor $l_j$ that helps select important signals (nonzero regression coefficients) and prevent unimportant signals (zero regression coefficients) to grow too fast at the early stage of the evolution when both $g$ and $l$ are close to zero. 
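This early-stage behaviour is easy to observe numerically. The following illustrative experiment (the dimensions, signal values and printing schedule are arbitrary choices of ours) tracks the smallest on-support and the largest off-support coordinate of $\\beta_t=g_t\\circ l_t$ along the iterations of Algorithm~\\ref{alg1}.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 400
beta_true = np.zeros(p)
beta_true[:4] = [-1.0, 2.0, 2.0, 3.0]
X = rng.standard_normal((n, p))
y = X @ beta_true                            # noiseless responses

alpha, eta = 1e-5, 0.1
g = rng.uniform(-alpha, alpha, size=p)
l = rng.uniform(-alpha, alpha, size=p)
support = np.arange(4)

for t in range(1, 801):
    grad = X.T @ (X @ (g * l) - y) / n
    g, l = g - eta * l * grad, l - eta * g * grad
    if t in (50, 100, 200, 400, 800):
        beta_t = g * l
        print(t,
              np.abs(beta_t[support]).min(),              # smallest signal coordinate
              np.abs(np.delete(beta_t, support)).max())   # largest off-support coordinate
\\end{verbatim}
In such runs the off-support coordinates stay at the scale of the initialization while the true signals grow quickly.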
Precisely, as we will show in our theory part (Section~\\ref{Sec:Theory}), under suitable conditions on the model, all unimportant signals remain to stay in a $\\mathcal{O}(p^{-1})$ neighbourhood of zero, while important ones tend to grow exponentially fast. \n\n\n\\paragraph{Gradient dynamical system perspective:} Our second perspective comes from considering the limiting gradient dynamical system of the problem (i.e.~gradient descent with an infinitesimally small step size), which is motivated by the interpretation for matrix factorization problems in~\\cite{gunasekar2017implicit} and \\cite{pmlr-v80-gunasekar18a}. In particular, the behavior of this limiting dynamical system is captured by the ordinary differential equations\n\\begin{equation}\\label{de}\n\\begin{cases} \n\\ \\dot{g}(t)=-\\big[ X^{T}r(t)\\big] \\circ l(t),\\\\ \n\\ \\, \\dot{l}(t)=-\\big[X^{T}r(t)\\big] \\circ g(t),\n\\end{cases}\\quad\\mbox{with initialization}\\quad \n\\begin{cases} \n\\ g(0)=\\alpha\\mathbf 1,\\\\ \n\\ \\, l(0)= 0,\n\\end{cases}\n\\end{equation}\nwhere $r(t)=n^{-1}\\big[X(g(t)\\circ l(t))-y\\big]\\in\\mathbb R^p$, and for simplicity we fixed the initialization. To emphasize the dependence of the solution on $\\alpha$, we instead write $g(t),\\,l(t),\\,r(t)$ as $g(t,\\alpha), \\,l(t,\\alpha),\\,r(t,\\alpha)$.\nFor illustration purposes, we assume that the limiting point of this system is continuous and bounded as the initialization value $\\alpha\\to 0_+$, that is, both limits $g_\\infty=\\lim_{t\\to\\infty, \\alpha\\to 0_+}g(t,\\alpha)$ and $l_\\infty=\\lim_{t\\to\\infty, \\alpha\\to 0_+}l(t,\\alpha)$ exist in $\\mathbb R^p$ and are finite. \n\nLet $s(t,\\alpha)=\\int_0^t r(\\tau,\\alpha) d\\tau\\in\\mathbb R^p$, then simple calculation leads to the relation\n\\begin{equation*}\n\\left[ \\begin{array}{c} g_j(t,\\alpha)+l_j(t,\\alpha) \\\\[0.3em] g_j(t,\\alpha)-l_j(t,\\alpha) \\end{array} \\right] \n= \\alpha\\,\\left[ \\begin{array}{c}\\exp(-X_j^{T}s(t,\\alpha)) \\\\[0.3em] \\exp(X_j^{T}s(t,\\alpha)) \\end{array} \\right],\\quad\\mbox{for each }j=1,\\ldots,p.\n\\end{equation*}\nUnder the aforementioned assumption on the existence of limits as $t\\to\\infty$ and $\\alpha\\to 0_+$, the preceding display implies one of the following three situations for each $j$:\n\\begin{enumerate}\n\t\\item[Case 1:] $g_{j,\\infty}=l_{j,\\infty}\\neq 0$, and \n\t$\\displaystyle \\lim_{t\\to\\infty,\\alpha\\to 0_+}X_j^{T}s(t,\\alpha)\/\\log(1\/\\alpha)= 1$.\n\t\\item[Case 2:] $g_{j,\\infty}=-l_{j,\\infty}\\neq 0$, and \n\t$\\displaystyle \\lim_{t\\to\\infty,\\alpha\\to 0_+}X_j^{T} s(t,\\alpha)\/\\log(1\/\\alpha)= -1$.\n\t\\item[Case 3:] $g_{j,\\infty}=l_{j,\\infty}=0$, and \n\t$\\displaystyle \\lim_{t\\to\\infty,\\alpha\\to 0_+}X_j^{T} s(t,\\alpha)\/\\log(1\/\\alpha) =\\gamma_j\\in[-1,1]$.\n\\end{enumerate}\nDenote $s_\\infty$ as the limit $\\lim_{t\\to\\infty,\\alpha\\to 0_+} s(t,\\alpha)\/\\log(1\/\\alpha)$. Recall $\\beta_\\infty = g_\\infty\\circ l_\\infty$, and the previous three cases can be unified into\n\\begin{equation*}\nX_j^{T}s_\\infty=\n\\begin{cases} \n\\mbox{sign}(\\beta_{j,\\infty}), & \\mbox{if}\\ \\beta_{j,\\infty}\\neq 0, \\\\ \n\\gamma_j\\in [-1,1], & \\mbox{if}\\ \\beta_{j,\\infty}= 0,\n\\end{cases}\\quad\\mbox{for each }j=1,\\ldots,p.\n\\end{equation*}\nThis identity together with the limiting point condition $X\\beta_{\\infty}=y$\ncoincides with the KKT condition for the basis pursuit problem~\\eqref{bs}.\n\n\n\nAgain, this argument is based on the hard-to-check solution continuity assumption. 
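Although it plays no role in our theory, the limiting system~\\eqref{de} can at least be probed numerically with a crude forward-Euler discretization. In the sketch below the problem size, the step size and the time horizon are ad hoc choices, and the printed quantity is the empirical counterpart of $X_j^{T}s_\\infty$ from the three cases above.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, p = 40, 100
beta_true = np.zeros(p)
beta_true[:3] = [1.5, -2.0, 1.0]             # illustrative sparse signal
X = rng.standard_normal((n, p))
y = X @ beta_true

alpha, dt, T = 1e-6, 0.01, 200000
g = np.full(p, alpha)                        # g(0) = alpha * 1
l = np.zeros(p)                              # l(0) = 0
s = np.zeros(n)                              # running integral of the residual r(t)

for _ in range(T):
    r = (X @ (g * l) - y) / n
    s += dt * r
    corr = X.T @ r
    g, l = g - dt * corr * l, l - dt * corr * g   # forward-Euler step of the ODE

stat = X.T @ s / np.log(1.0 / alpha)
# Up to sign, |stat_j| should be close to 1 on the support and at most about 1 elsewhere.
print(np.round(stat[:3], 2), np.round(np.abs(stat[3:]).max(), 2))
\\end{verbatim}
Such a check is, of course, no substitute for the continuity assumption itself.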
In the next section, we provide a formal proof of the result without making this assumption, albeit under a somewhat strong RIP condition on $X$. We conjecture this gradient descent implicit regularization property to be generally true for a much wider class of design matrices (see our simulation section for numerical evidences).\n\n\n\n\\subsection{Gradient descent with early stopping}\nIn this subsection, we consider the general case where the response $y$ contains noise, or $\\sigma^2\\neq 0$. In particular, we propose the use of early stopping, a widely used implicit regularization technique \\citep{zhang2005boosting,raskutti2014early}, to the gradient descent Algorithm~\\ref{alg1}. As the name suggests, we will use certain criterion to decide whether to terminate the gradient descent updating to prevent overfitting of the data. Algorithm~\\ref{alg2} below summarizes a particular stopping criterion widely-used in the machine learning community via validation. In principal, we can also treat the number of iterations as a hyperparameter and repeat this procedure multiple times in same spirits as data splitting and cross validation.\n\n\\smallskip\n\n\\begin{algorithm}[H]\n\t\\KwData{Training design matrix $X\\in\\mathbb R^{n\\times p}$,\\, measurement vector $y\\in\\mathbb R^n$, validation data $X'$, $y'$, initialization magnitude $\\alpha$, step size $\\eta$, and maximal number of iterations $T_{max}$;}\n\tInitialize variables $[g_0]_j\\overset{iid}{\\sim}\\mbox{Unif}(-\\alpha,\\alpha)$, $[l_0]_j\\overset{iid}{\\sim}\\mbox{Unif}(-\\alpha,\\alpha)$ for $j=1,\\ldots,p$, and iteration number $t=0$;\\\\\n\t\\While{$t \\|X'(g_{{\\tilde t}+1}\\circ l_{{\\tilde t}+1})-y'\\|$ or $\\|X'(g_{{\\tilde t}}\\circ l_{{\\tilde t}})-y'\\|$ is minimized over all iterations, then output the final estimate $\\widehat \\beta=g_{{\\tilde t}}\\circ l_{{\\tilde t}}$.\n\t}\\label{alg2}\n\t\\caption{Gradient Descent for Linear Regression with Validation Data} \n\\end{algorithm} \n\n\\smallskip\n\nRecall that in the introduction, we discussed about the saturation issue suffered by explicit penalized methods such as the Lasso. Now we turn to our method and illustrate that it is unaffected, or at least less suffered from the saturation issue. In the next section, we will provide rigorous theory showing that our method can achieve a faster $\\sqrt{s\/n}$ rate of convergence then the typical $\\sqrt{s\\log p\/n}$ rate when all nonzero signals are relatively large.\n\nDue to the connection of our method with the basis pursuit problem~\\eqref{bs}, one may naturally think that our method in the noisy case should be equivalent to a basis pursuit denoising problem:\n\\begin{equation}\\label{bsdn}\n\\min \\|\\beta\\|_1 \\quad \\mbox{subject to} \\quad \\|X \\beta -y\\|_2 \\leq \\epsilon,\n\\end{equation}\nwith some error tolerance level $\\varepsilon$ depending on the stopping criterion, and therefore is equivalent to the Lasso. Surprisingly, a simulation example below shows that the iterate path of the gradient descent Algorithm~\\ref{alg1} contains estimates with much smaller error than the Lasso. Precisely, we adopt the simulation setting S2 in section~\\ref{sec:simu} . As comparisons, we also report the Lasso solution path (as a function of the regularization parameter $\\lambda$) solved by ISTA and FISTA \\citep{beck2009fast}. For our gradient descent algorithm, we set $\\alpha = 10^{-5}$ in the random initialization. 
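As a concrete companion to Algorithm~\\ref{alg2}, the following is a minimal sketch of hold-out early stopping: gradient descent is run on a training set, and the iterate with the smallest residual on a separate validation set is retained. The dimensions, the noise level and the maximal number of iterations are illustrative choices.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n, p, sigma = 200, 500, 0.5                  # illustrative sizes and noise level
beta_true = np.zeros(p)
beta_true[:4] = [-1.0, 2.0, 2.0, 3.0]
X = rng.standard_normal((n, p))
y = X @ beta_true + sigma * rng.standard_normal(n)
Xv = rng.standard_normal((n, p))             # validation data
yv = Xv @ beta_true + sigma * rng.standard_normal(n)

alpha, eta, T_max = 1e-5, 0.1, 3000
g = rng.uniform(-alpha, alpha, size=p)
l = rng.uniform(-alpha, alpha, size=p)

best_err, best_beta = np.inf, g * l
for t in range(T_max):
    grad = X.T @ (X @ (g * l) - y) / n
    g, l = g - eta * l * grad, l - eta * g * grad
    val_err = np.linalg.norm(Xv @ (g * l) - yv)      # validation residual at iteration t
    if val_err < best_err:                           # keep the iterate minimizing it
        best_err, best_beta = val_err, g * l

print(np.linalg.norm(best_beta - beta_true))
\\end{verbatim}
We now examine the resulting iterate path in the simulation setting described above.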
From figure~\\ref{fig:31}, when the iteration number is around $1000$, even though the prediction error in panel~(c) of our algorithm and the Lasso (with an optimally tuned $\\lambda$, see panel~(b) for the entire Lasso solution path), the estimation error in panel~(a) of our method is significantly lower than that of the Lasso, illustrating the occurrence of the saturation phenomenon of the Lasso. Moreover, the stabilized region (iterations $200$--$1000$) of our method GD in panel~(a) is relatively wide, and therefore the performance tends to be robust to the stopping criterion. \n\n\n\n\\begin{figure}[H]\n\t\\begin{subfigure}{0.32\\textwidth}\n\t\t\\includegraphics[width=\\linewidth]{estimation2.jpg}\n\t\t\\caption{Estimation error vs Iteration}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.32\\textwidth}\n\t\t\\includegraphics[width=\\linewidth]{lassopath.jpg}\n\t\t\\caption{Estimation error vs Regularization for Lasso}\n\t\\end{subfigure} \n\t\\begin{subfigure}{0.32\\textwidth}\n\t\t\\includegraphics[width=\\linewidth]{prediction2.jpg}\n\t\t\\caption{Prediction error vs Iteration}\n\t\t\\label{fig:32}\n\t\\end{subfigure} \n\t\\caption{Panel (a) is a log-log plot of standardized estimation error $\\|\\widehat \\beta-\\beta^\\ast\\|^2_2\/\\|\\beta^\\ast\\|^2_2$ versus iteration number $t$. Panel (b) is a log-log plot of standardized estimation error versus regularization parameter $\\lambda$ for Lasso. Panel (c) is a log-log plot of mean prediction error $\\sqrt{\\|\\widehat y-y\\|^2_2\/n}$ versus iteration number $t$.}\\label{fig:31}\n\t\\vspace{-0.7em}\n\\end{figure}\n\n\nNext, let us briefly illustrate why implicit regularization with early stopping works, while explicit regularized methods may fail.\nWe know that early stopping, serving as algorithmic regularization, is based on the intuition that as the iteration number grows, the bias keeps decreasing while the variance increasing. Consequently, the iteration number $T$, acting as an implicit regularization parameter, aims to optimally balance between the bias and the variance, akin to the bias-variance trade-off. In our parametrization, the iteration number $T$ controls the $\\ell_1$ norm of the solution, reflecting the model space complexity. To see this, we plot the $\\ell_1$ norm versus the iteration number, and also the estimation errors versus the $\\ell_1$ norm, all in logarithmic scales, in figure~\\ref{fig:33}. As we expected, as the number of iterations increases, the $\\ell_1$ norm of the iterate also increases. When the logarithm of the iteration number is within $(2.3,3)$, the $\\ell_1$ norm of the estimated coefficients tends to be stabilized at the $\\ell_1$ norm of the true $\\beta^\\ast$ as $0.9$, corresponding to the most accurate estimation region in panel~(a) of figure~\\ref{fig:31}. In contrast, as we can see from panel~(b) of figure~\\ref{fig:33}, the estimation error is very sensitive in the regularization parameter domain --- the region corresponds to smallest estimation accuracy is very narrow, and a small change in the $\\ell_1$ norm in the solution leads to a drastic deterioration in the estimation accuracy. 
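The contrast between tuning over iterations and tuning over $\\lambda$ can be reproduced with a few lines of code. The sketch below (the sizes, noise level, $\\lambda$-grid and iteration counts are arbitrary, and the small soft-thresholding helper is ours) traces the estimation error along the gradient descent path and along a Lasso path in which each $\\lambda$ is solved by ISTA.
\\begin{verbatim}
import numpy as np

def soft(z, c):                                   # soft-thresholding helper for this sketch
    return np.sign(z) * np.maximum(np.abs(z) - c, 0.0)

rng = np.random.default_rng(5)
n, p, sigma = 200, 500, 0.5
beta_true = np.zeros(p)
beta_true[:4] = [-1.0, 2.0, 2.0, 3.0]
X = rng.standard_normal((n, p))
y = X @ beta_true + sigma * rng.standard_normal(n)

# (a) estimation error along the gradient descent path
alpha, eta = 1e-5, 0.1
g = rng.uniform(-alpha, alpha, size=p)
l = rng.uniform(-alpha, alpha, size=p)
gd_err = []
for t in range(2000):
    grad = X.T @ (X @ (g * l) - y) / n
    g, l = g - eta * l * grad, l - eta * g * grad
    gd_err.append(np.linalg.norm(g * l - beta_true))

# (b) estimation error along a Lasso path, each lambda solved by ISTA
step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)      # 1/L with L = lambda_max(X^T X)/n
lasso_err = []
for lam in np.geomspace(1.0, 1e-3, 30):
    b = np.zeros(p)
    for _ in range(500):
        b = soft(b - step * X.T @ (X @ b - y) / n, step * lam)
    lasso_err.append(np.linalg.norm(b - beta_true))

print(min(gd_err), min(lasso_err))
\\end{verbatim}
The qualitative picture should mirror figure~\\ref{fig:33}: the error along the iteration path is flat over a long stretch, whereas it varies sharply along the $\\lambda$ path.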
This numerical example provides evidences of why explicit regularized approaches may suffer from large bias and low accuracy.\n\n\n\n\\begin{figure}[t]\n\t\\begin{subfigure}{0.48\\textwidth}\n\t\t\\includegraphics[width=\\linewidth]{L1IN.jpg}\n\t\t\\caption{$\\ell_1$ norm vs Iteration}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.48\\textwidth}\n\t\t\\includegraphics[width=\\linewidth]{EEL1.jpg}\n\t\t\\caption{Prediction error vs Iteration Comparison}\n\t\\end{subfigure} \n\t\\caption{Panel (a) is a log-log plot of $\\ell_1$ norm of the estimated coefficients versus iteration number $t$. Panel (b) is a log-log plot of standardized estimation error versus $\\ell_1$ norm of the estimated coefficients.} \\label{fig:33}\n\t\\vspace{-0.7em}\n\\end{figure}\n\nFinally, we discuss several commonly used early stopping rules by treating the iteration number as a tuning parameter. \n\n\n\\paragraph{Hold-out or cross validation:} The simplest method is to use hold-out data as validation: for example, we can split the training data into half-half, and then run gradient descent on the first half $D_1$ of the data while calculate the prediction error $R(t)=\\sum_{i\\in D_2}({X^{(i)}}^T (g_t\\circ l_t)-y_i)^2$ for all $t \\leq T_{max} $ on the second half $D_2$, then the final iteration number is decided by (cf.~Algorithm~\\ref{alg2}):\n\t\\begin{equation}\\label{cv}\n\t\\tilde t : = \\arg \\min \\{ t \\leq T_{max} \\,|\\, R ( t+1 ) > R (t) \\} \\quad \\mbox{or}\n\t\\end{equation} \n\t\\begin{equation}\\label{cv2}\n\t\\tilde t : = \\arg \\min \\{ t \\leq T_{max} \\,|\\, R ( t ) = \\min_{\\tau \\leq T_{max}} R (\\tau) \\}. \\quad \n\t\\end{equation} \n\tTo make use of the whole dataset, we can perform cross validation: first split data into $K$ folds, then apply gradient descent on all possible combinations of $K-1$ folds without replacement and evaluate at the corresponding rest $1$ folds. The final risk $R(t)$ can be the sum of all the evaluations on each fold, and the criterion~\\eqref{cv} or~\\eqref{cv2} can be used to obtain the iteration number. Finally we can apply the same iteration number obtained from cross validation to approximate the optimal one for the entire training data.\n\\paragraph{Stein's unbiased risk estimate (SURE):} \\cite{stein1981estimation} suggested the use of degrees of freedom as surrogate to the true prediction error given the standard derivation $\\sigma$. Under our settings, ignoring the second order term of step size $\\eta$, the updating of the prediction error (up to rescaling) $r_t=n^{-1}\\big[X(g_t\\circ l_t) - y\\big]\\in\\mathbb R^n$ through gradient descent can be approximated by (by ignoring second order terms of order $\\eta^2$):\n\t\\begin{equation}\n\tr_{t+1}\\approx [I - 2 \\eta n^{-1} X \\mbox{diag}(|g_t\\circ l_t|) X^T] \\,r_t, \n\t\\end{equation} \n\twhere for a vector $u$, diag$(u)$ denotes the diagonal matrix with components of $u$ in the its diagonals. Define $S_t= \\Pi_{i=1}^{t-1} (I - 2 \\eta n^{-1} X \\mbox{diag}(|g_t\\circ l_t|) X^T)$, then the estimated degrees of freedom at time $t$ can be approximated by $n-\\mbox{trace}(S_t)$. Consequently, the risk based on the $C_p$-type statistic \\citep{efron2004estimation} is\n\t\\begin{equation}\n\tR(t) = \\frac { \\| r_t \\| ^ { 2 } } { n } + \\Big(2-\\frac { 2 \\mbox{trace}(S_t) } { n } \\Big)\\sigma ^ { 2 }.\n\t\\end{equation} \n\tThe total iteration number as a tunign parameter can then be selected by minimizing $R(t)$ in equation~\\eqref{cv} or~\\eqref{cv2} . 
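A small sketch of this $C_p$-type rule is given below; we take $r_t$ to be the unscaled residual $X(g_t\\circ l_t)-y$, which appears to be the scaling under which $R(t)$ behaves like a $C_p$ statistic, and the problem sizes are illustrative.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
n, p, sigma = 100, 300, 0.1                  # illustrative sizes
beta_true = np.zeros(p)
beta_true[:4] = [-1.0, 2.0, 2.0, 3.0]
X = rng.standard_normal((n, p))
y = X @ beta_true + sigma * rng.standard_normal(n)

alpha, eta, T_max = 1e-5, 0.1, 1500
g = rng.uniform(-alpha, alpha, size=p)
l = rng.uniform(-alpha, alpha, size=p)
S = np.eye(n)                                # running product approximating S_t
risk = []
for t in range(T_max):
    resid = X @ (g * l) - y                  # unscaled residual at iteration t
    risk.append(resid @ resid / n + (2 - 2 * np.trace(S) / n) * sigma ** 2)
    S = (np.eye(n) - 2 * eta / n * (X * np.abs(g * l)) @ X.T) @ S
    grad = X.T @ resid / n
    g, l = g - eta * l * grad, l - eta * g * grad

print(int(np.argmin(risk)))                  # iteration selected by the C_p-type criterion
\\end{verbatim}
The sketch treats the noise level $\\sigma$ as known.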
In practice, we can use the plug-in estimator $\\hat{\\sigma}$ to replace the unknown $\\sigma$ in $R(t)$. According to our simulation studies (for example, see figure~\\ref{fig:es}), early stopping based on SURE generally works not as good as the hold-out or cross validation method.\n\t\n\\paragraph{Measure of model complexity:} \\citep{raskutti2014early} proposed an early stopping rule based on local empirical Rademacher complexity of the Reproducing kernel Hilbert space. However, their method can not be directly applied in our case: their stopping rule is based on the eigenvalues of empirical kernel matrix, which is $ n^{-1} X \\mbox{diag}(|g_t\\circ l_t|) X^T$ in our settings. Since our empirical kernel matrix keeps updated throughout the iterations, their method is not directly applicable.\n\t\n\t\\smallskip \n\tIn the end of this subsection, we adopt the simulation framework S1-S4 (only change the standard derivation to $\\sigma=0.1$) in section~\\ref{sec:simu} to compare different early stopping criteria. We record the mean estimation errors averaging over $50$ trials and report the errors in figure~\\ref{fig:es}. From the figure, we can see that cross-validation tends to be more robust than SURE. \t\n\tTherefore, we recommend using hold-out or cross validation to select the iteration number, and will stick to this method in the rest of the paper.\n\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\t\\includegraphics[scale=0.7]{ESrule.jpeg}\n\t\t\\caption{Comparison of the estimation errors for different early stopping rules, `Oracle' stands for the optimal early stopping rule with knowldege on the truth. 'CV' stand for early stopping through $5$ fold cross validation.} \\label{fig:es}\n\t\t\\vspace{-0.7em}\n\t\\end{figure}\n\t\n\n\\subsection{Adaptive step size and variable selection} \\label{sec:2.5}\n\nA nature extension of gradient descent algorithm~\\ref{alg1} is to assign different weights (step sizes) to different coordinates of $\\beta$, which is related to the adaptive Lasso \\citep{zou2006adaptive}. It can be seen from the differential equation interpretation: by inserting a constant weighting matrix $D(\\Omega)=\\mbox{diag}(\\omega_1,...,\\omega_p)$ into the equation~\\eqref{de}, we obtain the limiting dynamical system as\n\\begin{equation*}\n\\begin{cases} \n\\ \\dot{g}(t)=-\\big[D(\\Omega) X^{T}r(t)\\big] \\circ l(t),\\\\ \n\\ \\, \\dot{l}(t)=-\\big[D(\\Omega)X^{T}r(t)\\big] \\circ g(t).\n\\end{cases}\n\\end{equation*}\nBased on similar heuristic analysis as in Section~\\ref{Sec:Heuristic} for the noiseless case, the limiting point of the dynamic system satisfies:\n\\begin{equation*}\nX_j^{T}s_\\infty=\n\\begin{cases} \n\\mbox{sign}(\\beta_{j,\\infty}) \/\\omega_j, & \\mbox{if}\\ \\beta_{j,\\infty}\\neq 0, \\\\ \n\\gamma_j\\in [-\\frac{1}{\\omega_j},\\frac{1}{\\omega_j}], & \\mbox{if}\\ \\beta_{j,\\infty}= 0,\n\\end{cases}\\quad\\mbox{for each }j=1,\\ldots,p.\n\\end{equation*}\nwhich is the KKT condition for the dual form of the adaptive Lasso \n\\begin{align*}\n\\min_{\\beta\\in\\mathbb R^p}\\sum_{j=1}^p\\frac{|\\beta_j|}{w_j} \\qquad\\mbox{subject to } X\\beta =y.\n\\end{align*}\nIn the limiting case when the step size $\\omega_j$ of a particular component $\\beta_j$ tends to $0$, we are equivalently adding an $+\\infty$ when $\\beta_j\\neq 0$. 
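In discrete time, this weighted scheme simply multiplies the common gradient factor by the weights; a possible sketch (the function name and its defaults are ours) is:
\\begin{verbatim}
import numpy as np

def weighted_gd(X, y, w, alpha=1e-5, eta=0.1, T=2000, seed=0):
    """Gradient descent of Algorithm 1 with per-coordinate weights w,
    i.e. the diagonal of D(Omega) applied to the gradient factor."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    g = rng.uniform(-alpha, alpha, size=p)
    l = rng.uniform(-alpha, alpha, size=p)
    for _ in range(T):
        grad = w * (X.T @ (X @ (g * l) - y) / n)   # D(Omega) X^T r
        g, l = g - eta * l * grad, l - eta * g * grad
    return g * l
\\end{verbatim}
As noted above, sending $\\omega_j\\to 0$ effectively freezes $\\beta_j$, which mimics an infinite penalty on that coordinate.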
In contrast, if we apply a larger step size $\\omega_j$ to $\\beta_j$, then $\\beta_j=g_j\\circ l_j$ tend to move faster and more freely in the parameter space, which is equivalent to a smaller penalty on $\\beta_j$.\nThe original paper in \\cite{zou2006adaptive} constructed the weights based on the ordinary least square solution, which requires $n\\geq p$. In practice when $p>n$, we can construct weights through a preprocessing step. For example, variable screening can be applied to introduce sparse weights.\n\n\nTo enable variable selection in our method, we can perform a component-wise hard thresholding operation to the final estimator $\\hat{\\beta}=g_{{\\tilde t}}\\circ l_{{\\tilde t}}$. Based on our theoretical analysis, since our method tries to shrink both weak signals and errors into very small order $p^{-2}$, it is more robust to false detections than other explicit regularizations when the same tuning parameter for noise level is applied. Let us consider a simple example to illustrate the basic idea: we set $n=10$, $p=20$, $X_{ij} \\overset{iid}{\\sim} \\mathbb{ N }(0,1)$ for $i=1,2,...,n$ and $j=1,2,...,p$, $\\beta^*_1=0.5 \\sigma\\sqrt{\\log p\/n}$, $\\beta^*_2=5 \\sigma \\sqrt{\\log p\/n}$, and all other components are zeros in the data generating model $y=X\\beta^* +w$ with $w \\sim \\mathbb{ N }(0,I)$. Since the strength of the first components of truth is weak, it is hard to be detected by all methods we have tried. However, the effect of the weak signals on $y$ still pertains. In particular, when applying the cross validation, traditional penalized methods tends to over-select the predictors, leading a lot of false discoveries. In comparison, due to the implicit regularization our method tend to be more robust to the weak signals---our estimate is typically non-sparse, the effect of the non-detected weak signals can be distributed to all components of the estimated vector, and no component is particularly spiked. As a consequence, our method tends to be more robust to false discoveries after applying the hard thresholding. The variable selection results are shown in figure~\\ref{fi:vsl}. As we can see, the Lasso can not detect the weak signal, and two components, indexed by $6,19$, appear to be false detected through cross validation (note that in Lasso, a soft thresholding has already been applied). In contrast, in our method most unimportant signals remains small. Performing a hard thresholding with the same regularization parameter selected by the Lasso can erase all false detections, leading to the selection of strong signal only. \n\n\\begin{figure}[t]\n\t\\includegraphics[width=\\linewidth]{selection.jpg}\n\t\\caption{The values versus index for truth $\\beta^*$, $\\beta$ estimator through lasso by minimizing the cross validation error in prediction, $\\beta$ estimator through gradient descent by minimizing the cross validation error in prediction and $\\beta$ estimator through `post estimation' selection for gradient descent.} \\label{fi:vsl}\n\t\\vspace{-0.7em}\n\\end{figure}\n\n\n\n\\subsection{Related literatures}\n\\cite{li2018algorithmic} studies the theory for implicit regularization in matrix sensing, which requires the data to be perfect measured and has different geometric structures as linear regression. \\cite{hoff2017lasso} considers the Hadamard product parametrization to optimize the parameters in high-dimensional linear regression. 
However, his method is computational-wise in order to reformulate the non-smooth Lasso optimization problem into a smooth one.\nIn particular, their objective function involves an $\\ell_2$ penalty on $(g,l)$ (which is equivalent to the $\\ell_1$ penalty on $\\beta$), and the solution is precisely the Lasso solution. \n\n\n\n\\section{Theoretical Analysis}\\label{Sec:Theory}\nIn this section, we provide formal statements for characterizing the behavior of the gradient descent algorithm for minimizing the Hadamard product parametrized quadratic loss $f(g,l)$ as defined in \\eqref{eq_opt}. We start with a description of our assumptions. Then we turn to the case of non-negative parameters, where a simpler parametrization $\\beta=u\\circ u$ can be applied, as a warm-up to convey the main proof ideas. Finally, we consider the general signal case and illustrate when the fast parametric root-$n$ rate independent of the dimension can be achieved. All proofs are deferred to the appendix in the supplementary material of the paper.\n\n\\subsection{Assumptions}\nRecall that the underlying true data generating model is $y=X\\beta^*+w$ with $w \\sim \\mathcal{N}(0,\\sigma^2 I)$, where the true parameter $\\beta^\\ast$ is $s$-sparse.\nWithin the $s$ nonzero signal components of $\\beta^\\ast$, we define the index set of strong signals as $S_1=\\{i\\in S: |\\beta^*_i| \\geq 2\\sigma \\log p \\sqrt{\\frac{\\log p}{n}}\\}$ and weak signals as $S_2=\\{i\\in S: |\\beta^*_i| \\leq 2\\sigma \\sqrt{\\frac{\\log p}{n}}\\}$, where $|S_1|=s_1$, $|S_2|=s_2$. According to the information-theoretic limits from \\cite{wainwright2009information}, weak signals of order $\\sigma \\sqrt{\\log p\/n}$ in sparse linear regression are generally impossible to be jointly recovered or selected (but can be detected in terms of the type I\/II error control in hypothesis testings, e.g.~see \\cite{jin2016rare}). Therefore, our primary focus would be the estimation and selection of strong signals.\nWe use the notation $\\theta_{s_1}(\\beta)$ to denote the $s_1$-th largest absolute component value of $\\beta$, and let $m=\\theta_{s_1}(\\beta^\\ast)$, which reflects the minimal strength for strong signals. We also use $\\kappa$ to denote the strong signal-condition number as the ratio between the largest absolute signal value to the smallest strong signal. \nWe will also make use of the notation of Restricted Isometry Property (RIP, \\cite{candes2008restricted}), which is a commonly used assumption (e.g. \\cite{candes2007dantzig}) in the high dimensional linear regression literatures.\n\n\\begin{defin}[Restricted Isometry Property]\n\tA matrix $X\\in \\mathbb{R}^{n\\times p}$ is said to satisfy the $(s,\\delta)$-Restricted Isometry Property (RIP) if for any $s$-sparse vector $u$ in $\\mathbb{R}^{p} $, we have:\n\t\\begin{equation*}\n\t(1-\\delta)\\|u\\|^2 \\leq \\frac{1}{n}\\,\\|Xu\\|^2 \\leq (1+\\delta)\\|u\\|^2\n\t\\end{equation*}\n\\end{defin}\n\\noindent As an easy consequence, if matrix $X$ satisfies $(2s,\\delta)$-RIP, then Euclidean inner-product is also approximately preserved, that is, $\\big|n^{-1}\\,\\langle Xu, Xv\\rangle - \\langle u,\\,v\\rangle \\big| \\leq \\delta\\,\\|u\\|\\cdot\\|v\\|$ holds for any two $s$-sparse vectors $u,\\,v\\in\\mathbb R^p$. 
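Verifying the RIP exactly is computationally intractable, but its content is easy to illustrate: for an i.i.d. Gaussian design one can sample random $s$-sparse vectors and look at how far the empirical ratios deviate from one. The sketch below does only this (a sanity check on randomly drawn sparse directions, not a certificate), with arbitrary sizes.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
n, p, s = 200, 500, 8
X = rng.standard_normal((n, p))              # i.i.d. Gaussian design

ratios, gaps = [], []
for _ in range(2000):
    u = np.zeros(p)
    u[rng.choice(p, size=s, replace=False)] = rng.standard_normal(s)
    v = np.zeros(p)
    v[rng.choice(p, size=s, replace=False)] = rng.standard_normal(s)
    ratios.append(np.linalg.norm(X @ u) ** 2 / (n * np.linalg.norm(u) ** 2))
    gaps.append(abs((X @ u) @ (X @ v) / n - u @ v)
                / (np.linalg.norm(u) * np.linalg.norm(v)))

# For a design satisfying (s, delta)-RIP the ratios lie in [1 - delta, 1 + delta];
# the gaps are the empirical analogue of the inner-product bound above.
print(min(ratios), max(ratios))
print(max(gaps))
\\end{verbatim}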
\n\n\n\\noindent With these preparations, we make the following assumptions on the true parameter $\\beta^\\ast$, design matrix $X$, initialization parameter $\\alpha$ and step size $\\eta$ in Algorithm~\\ref{alg1}.\n\n\\paragraph{Assumption (A): } The true parameter $\\beta^\\ast$ is $s$-sparse, and $s=s_1+s_2$, that is, each nonzero signal in $\\beta^\\ast$ is either weak or strong. In addition, $\\kappa m \\lesssim1$. \n\n\\paragraph{Assumption (B):} The design matrix $X$ satisfies $(s+1,\\delta)$-RIP condition with $\\delta\\lesssim 1 \/(\\kappa \\sqrt{s}\\log\\frac{p}{\\alpha})$.\n\n\\paragraph{Assumption (C):} The initialization parameter $\\alpha$ satisfies $0<\\alpha\\lesssim p^{-1}$, and the step size $\\eta$ satisfies $0<\\eta\\lesssim (\\kappa\\log\\frac{p}{\\alpha})^{-1}$.\n\nSome remarks are in order.\nFirst, our current proof heavily relies on the RIP condition as in Assumption (B), which is satisfied if the predictors are iid and $s\\log p\\ll n$. However, the extensive simulation studies in the next section provide a strong numerical evidence suggesting that our conclusions remain valid even when the RIP condition is violated. We leave the formal theoretical investigation as an open problem for our future studies. Second, Assumption (A) is made mainly for illustrating the possibility of achieving the fast parametric root $n$ rate of convergence when $s_1=0$. In fact, our current proof can still lead to the typical high-dimensional rate of $\\sqrt{s\\log p\/n}$ without introducing the notion of strong and weak signals. And due to space constraint we omit the details.\n\n\n\\subsection{Non-negative Signals}\nTo start with, we demonstrate the key ideas of our proofs and give an analysis of the non-negative case as a warm-up. More specifically, we consider the case when all components of true signal $\\beta^\\ast$ are non-negative. To exploit this non-negativeness, we may instead use the self-Hadamard product parametrization $\\beta = u^2=u\\circ u$ for $u\\in\\mathbb R^p$, and write $\\beta^\\ast=(u^*)^2=u^* \\circ u^*$. Now, we have the optimization problem: \n\\begin{equation*}\n\\min_{u\\in \\mathbb{R}^{p}} f(u)=\\frac{1}{2n}\\,\\|Xu^2-y\\|^2,\n\\end{equation*}\nwith gradient descent updating formula $u_{t+1}=u_t-2\\eta\\, u_t \\circ \\big[n^{-1}X^T(Xu_t^2-y)\\big]$. For technical simplicity, we instead focus on the deterministic initialization $u_0=\\alpha \\mathbf 1\\in\\mathbb R^p$. \nThis case is simpler for the analysis than the general case since components of $u_t$ will not change sign during the iterations, and will always stay away from saddle points.\nWe summarize our result in the following main theorem. Since the non-negative signal constraint resembles the positive semi-definiteness constraint in matrix sensing, our proof utilizes the proof strategy in \\cite{li2018algorithmic} for analyzing matrix factorized gradient descent for noiseless matrix sensing by dividing the convergence into two stages (more details are provided after the theorem).\n\\begin{thm}\\label{thm1}\n\tUnder the above assumptions (A), (B) and (C). Let $\\epsilon=\\max\\{ \\alpha^2, \\sigma^2 \\frac{ Ms_1}{n}, \\sigma^2 \\frac{s_2\\log p}{n}\\}$, $\\tau=\\max \\{\\delta \\alpha, \\sigma \\sqrt{\\frac{\\log p}{n}}\\}$ and any $M\\geq 1$. 
Then there exist positive constants $(c_1,\\,c_2,\\,c_3,\\,c_4,\\,c_5)$ such that for every time $t$ satisfying $c_1\\, \\log(\\frac{p}{\\alpha})\/(\\eta m) \\leq t\\leq c_2 \/(\\eta \\tau)$, with probability at least $1-p^{-c_4}-e^{-c_5\\,Ms}$, the time $t$-iterate $u_t$ satisfies\n\t\\begin{align*}\n\t\\|u_t^2-\\beta^\\ast\\|^2\\leq c_5 \\,\\epsilon ,\n\t\\end{align*}\n\\end{thm}\n This theorem tells us in high dimension linear regression, combining early stopping with implicit regularization can significantly improve the estimation accuracy. In particular, when all signals are strong ($s_1=s$ and $s_2=0$), the estimate $\\hat \\beta =u_t^2$ attains a parametric rate $\\sigma\\sqrt{s_1\/n}$ of convergence that is independent of the dimension $p$. In general when weak signals exist, then the overall rate $\\sigma \\sqrt{\\frac{s_1}{n}}+\\sigma \\sqrt{\\frac{s_2\\log p}{n}}$ depends on the number of weak (strong) signals, which is still minimax-optimal \\citep{zhao2018pathwise}. The same remark also applies to our theory in the general case.\n\n\n\n\n\nOur proof strategy is to divide the convergence into two stages. Recall that $S=\\{j:\\,\\beta_j^\\ast\\neq 0\\}$ is the support set of true signal $\\beta^\\ast$, and $S_1 \\subset S$ corresponds to the subset of all strong signals. In the first ``burn-in'' stage, we show that each component of the strong signal part $u_{t,S_1}$ increases at an exponential rate in $t$ until hitting $\\sqrt{m}\/2$, while the weak signal and error part $u_{t,{S_1}^c}$ remains bounded\nby $\\mathcal O(p^{-1})$. In the second stage, iterate $u_t$ enters a geometric convergence region where $u_t$ converges towards $u^\\ast$ at a linear rate up to some high-order error term, and then stay in a $O(\\epsilon)$ neighborhood of $u^\\ast$ up to the time $\\Theta(1\/\\tau)$. Therefore, the time interval $c_1\\, \\log(\\frac{p}{\\alpha})\/(\\eta m) \\leq t\\leq c_2 \/(\\eta \\delta \\tau)$ would be the theoretical ``best solution region'' corresponding to the stabilized region in figure~\\ref{fig:31}.\n\n\nMore specifically, in the proof we consider the decomposition of $u_t$ into three parts: strong signal part $z_t$, weak signal part $d_t$ and error part $e_t$:\n\\begin{align*}\nu_t=z_t+d_t+e_t, \\quad\\mbox{with}\\quad\nz_t:=I_{S_1} u_t\\in\\mathbb R^p, \\quad\nd_t:=I_{S_2} u_t \\in \\mathbb R^p \\quad\\mbox{and}\\quad\ne_t:=I_{S^c}u_t\\in\\mathbb R^p,\n\\end{align*} \nwhere recall that $I_E$ is the diagonal matrix with ones on the positions indexed by subset $E\\subset\\{1,\\ldots,p\\}$ and zero elsewhere. We use induction to prove the following results characterizing the gradient dynamics in the first stage. Recall that $\\theta_{s_1}(b)$ denote the $s_1$-th largest absolute component value of vector $b\\in\\mathbb R^p$ and $m$ is the minimal strength of the strong signals.\n\n\\begin{pro}[Stage one dynamics]\\label{pro2.1}\n\tUnder assumptions of theorem~\\ref{thm1}, there are constants $(c_7,c_8)$ and $(c_7',c_8')$, such that for each $t0$. In both scenarios, we choose true signal $\\beta^\\ast=(-1,2,2,3)^T\\in\\mathbb R^p$, and set $y=X\\beta^\\ast$. When implementing gradient descent, we choose step size $\\eta=0.2\\ (0.1)$ for the independent (correlated) design, $\\alpha\\in\\{10^{-10},10^{-9},\\ldots,10^{1}\\}$, and stopping threshold $\\epsilon=0.01\\alpha$. Figure~\\ref{fig:200} shows the estimation error $\\|\\widehat \\beta-\\beta^\\ast\\|_2$ versus $\\alpha$ in log-log plots. 
As we can see, they tend to have a linear trend under the log-scale, which is consistent with our theoretical error bound estimate in Section 4.\nIn addition, in the correlated design scenario where the RIP does not hold, the algorithm is still able to recover $\\beta^\\ast$ as $\\alpha\\to0$, albeit under a slower convergence (due to a smaller allowable step size and a larger condition number of $X$). This observation provides evidence to the correctness of our informal statement made at the beginning of Section 3 even without RIP condition. We leave the proof of this conjecture open.\n\n\n\\paragraph{$\\ell_0$-norm minimizer differs from $\\ell_1$-norm minimizer:}\nIn this example, we study the empirical performance of the algorithm when the least $\\ell_1$-norm in the basis pursuit problem~\\eqref{Eqn:CS} is not the sparsest solution of $X\\beta=y$ (the null space property is violated). In particular, we choose\n\\begin{align*}\nX=\\begin{bmatrix} 0.2 & 1 & 0 \\\\ 0.2 & 0 &-1 \\end{bmatrix},\\quad \\beta^\\ast= \\begin{bmatrix} 5\\\\ 0 \\\\0 \\end{bmatrix}, \\quad\\mbox{and}\\quad y=\\begin{bmatrix} 1\\\\ 1 \\end{bmatrix},\n\\end{align*}\nso that $X\\beta^\\ast =y$. It is easy to verify that for this example, the sparsest solution of $X\\beta=y$ is $\\beta^\\ast$, while the least $\\ell_1$-norm solution is $\\beta^\\dagger = [0,1,-1]^T$. We use the same setting as before for implementing the algorithm with $\\alpha\\in\\{10^{-10},10^{-5},10^{-3},10^{-1},10^0,10^1\\}$. Table~\\ref{tb1} reports final outputs $\\beta=(\\beta_1,\\beta_2,\\beta_3)^T$ of the algorithm. Due to our intuition in Section 3, as expected, the algorithm still converges to the least $\\ell_1$-norm solution $\\beta^\\dagger$ instead of the least $\\ell_0$-norm solution $\\beta^\\ast$. Again, the estimation error decreases as the initialization level $\\alpha$ decreases. We conjecture this phenomenon of convergence to the least $\\ell_1$-norm solution to be generally true, and leave a formal proof as a future direction.\n\n\n\\begin{table}[h]\\caption{Convergent point without null space property} \\label{tb1}\n\t\\begin{center}\n\t\t$\\begin{array}{|ccccccc|} \n\t\t\\hline\n\t\t\\alpha & 10^{-10} & 10^{-5} & 10^{-3} & 0.1 & 1 & 10 \\\\ \n\t\t\\hline\n\t\t\\beta_1 & 7.433e-13 & 5.703e-7 & 1.289e-4 & 2.884e-2 & 2.987e-1 & 8.823e-1 \\\\ \n\t\t\\hline\n\t\t1-\\beta_2 & 1.492e-13 & 1.141e-7 & 2.577e-5 & 5.769e-3 & 5.974e-2 & 1.765e-1\\\\\n\t\t\\hline\n\t\t1+\\beta_3 & 1.492e-13 & 1.141e-7 & 2.577e-5 & 5.769e-3 & 5.974e-2 & 1.765e-1 \\\\\n\t\t\\hline \n\t\t\\end{array}$\n\t\\end{center}\n\t\\vspace{-0.7em}\n\\end{table}\n\\subsection{Simulations for Noisy Case}\\label{sec:simu}\n\n\n\n\\paragraph{Comparison with other high dimensional estimators:} We further demonstrate the advantages of our algorithm by considering the following $8$ simulation settings, the sparsity level $s=4$, signals $-1,2,2,3$ and noise level $\\sigma$ with $\\sigma =0.15*\\|\\beta^\\ast\\|$. We generate $3 n$ observations independently and split into $3$ even parts, then use the first part for training, the second part for validation and the final part for testing. The evaluation metric is standardized estimation error $\\|\\widehat \\beta-\\beta^\\ast\\|^2_2\/\\|\\beta^\\ast\\|^2_2$ and mean prediction error $\\sqrt{\\|y-\\hat{y}\\|^2\/n}$ for the test data set. We compare the median of the standardized estimation errors and prediction errors with the Lasso, SCAD and MCP by repeating $50$ times. 
We implement the Lasso using \\textit{glmnet} R package \\citep{friedman2010regularization} while for SCAD and MCP, we use the R package $\\textit{ncvreg}$ \\citep{breheny2011coordinate}. The standard error of medians are calculated by bootstrapping the calculated errors $1000$ times. For our algorithm, we use the initialization $\\alpha=10^{-5}\\times \\+1$. The simulation results in Table~\\ref{table1} and~\\ref{table2} indicate that our methods consistently have the best performance over all explicit penalization-based competitors across all settings.\n\n\\begin{enumerate}\n\t\\item \\textbf{S1:} $n=200$, $p=500$, $\\Sigma_{jk}=1$ for $j=k$ while $\\Sigma_{jk}=0$ for $j\\neq k$;\n\t\\item \\textbf{S2:} $n=200$, $p=500$, $\\Sigma_{jk}=0.1^{|j-k|}$;\n\t\\item \\textbf{S3:} $n=200$, $p=500$, $\\Sigma_{jk}=0.2^{|j-k|}$;\n\t\\item \\textbf{S4:} $n=200$, $p=500$, $\\Sigma_{jk}=0.5^{|j-k|}$;\n\t\\item \\textbf{S5:} $n=200$, $p=2000$, $\\Sigma_{jk}=1$ for $j=k$ while $\\Sigma_{jk}=0$ for $j\\neq k$;\n\t\\item \\textbf{S6:} $n=200$, $p=2000$, $\\Sigma_{jk}=0.1^{|j-k|}$;\n\t\\item \\textbf{S7:} $n=200$, $p=2000$, $\\Sigma_{jk}=0.2^{|j-k|}$;\n\t\\item \\textbf{S8:} $n=200$, $p=2000$, $\\Sigma_{jk}=0.5^{|j-k|}$.\n\\end{enumerate}\n\n\n\n\\begin{table}[H]\n\t\\begin{tabular*}{\\textwidth}{|c@{\\extracolsep{\\fill}}cccccccc|}\n\t\t\n\t\t\\hline\n\t\t&S1 &S2 &S3 &S4 &S5 &S6 &S7 & S8 \\\\\n\t\t\\hline\n\t\t\\multirow{2}{3em}{GD} \n\t\t&\\textbf{0.520}\t&\\textbf{0.448}\t&\\textbf{0.510}\t&\\textbf{0.568}\t&\\textbf{0.385}\t&\\textbf{0.290} &\\textbf{0.465}\t\t&\\textbf{0.460}\n\t\t\n\t\t\\\\\n\t\t&(0.0428)\t&(0.0530)\t&(0.0607)\t&(0.0850)\t&(0.0533)\t&(0.0465)\t&(0.0858)\t&(0.0863) \n\t\t\\\\\n\t\t\\hline\n\t\t\\multirow{2}{3em}{Lasso} \n\t\t&3.11\t&3.10\t&3.42\t&4.51\t&4.62\t&4.04\t&4.40\t&6.98\n\t\t\\\\\n\t\t&(0.219)\t&(0.173)\t&(0.242)\t&(0.274)\t&(0.279)\t&(0.205)\t&(0.306)\t&(0.452)\n\t\t\\\\\n\t\t\\hline\n\t\t\\multirow{2}{3em}{SCAD}\n\t\t&0.613\t&0.533\t&0.650\t&0.691\t&0.519\t&0.401\t&0.595\t&0.646\n\t\t\\\\\n\t\t&(0.0464)\t&(0.0679)\t&(0.0702)\t&(0.103)\t&(0.0527)\t&(0.0574)\t&(0.0837)\t&(0.0776)\n\t\t\\\\\n\t\t\\hline\n\t\t\\multirow{2}{3em}{MCP}\n\t\t&0.628\t&0.552\t&0.594\t&0.733\t&0.484\t&0.405\t&0.595\t&0.708\n\t\t\\\\\n\t\t&(0.0392)\t&(0.0779)\t&(0.0809)\t&(0.0902)\t&(0.0706)\t&(0.0597)\t&(0.0741)\t&(0.0680)\n\t\t\\\\\n\t\t\\hline\n\t\\end{tabular*}\n\t\\caption{Simulation result for median of standardized estimation error of each method, with standard derivation in the parenthesis under the median. 
There are $10^{-3}$ factors for all medians and standard derivations.\n\t\t\\label{table1}}\n\\end{table}\n\n\n\\begin{table}[H]\n\t\\begin{tabular*}{\\textwidth}{|c@{\\extracolsep{\\fill}}cccccccc|}\n\t\t\n\t\t\\hline\n\t\t&S1 &S2 &S3 &S4 &S5 &S6 &S7 & S8 \\\\\n\t\t\\hline\n\t\t\\multirow{2}{3em}{GD} \n\t\t&0.638\t&\\textbf{0.636}\t&0.640\t&\\textbf{0.634}\t&\\textbf{0.646}\t&\\textbf{0.651} &\\textbf{0.641}\t\t&\\textbf{0.642}\n\t\t\n\t\t\\\\\n\t\t&(0.0597)\t&(0.0753)\t&(0.0718)\t&(0.0709)\t&(0.0491)\t&(0.0510)\t&(0.0406)\t&(0.0498) \n\t\t\\\\\n\t\t\\hline\n\t\t\\multirow{2}{3em}{Lasso} \n\t\t&0.676\t&0.672\t&0.671\t&0.685\t&0.693\t&0.699\t&0.696\t&0.708\n\t\t\\\\\n\t\t&(0.0899)\t&(0.0981)\t&(0.0932)\t&(0.0510)\t&(0.0568)\t&(0.0918)\t&(0.0615)\t&(0.0488)\n\t\t\\\\\n\t\t\\hline\n\t\t\\multirow{2}{3em}{SCAD}\n\t\t&0.638\t&0.638\t&\\textbf{0.637}\t&0.637\t&0.650\t&0.654\t&0.643\t&0.647\n\t\t\\\\\n\t\t&(0.0530)\t&(0.0724)\t&(0.0716)\t&(0.0713)\t&(0.0470)\t&(0.0451)\t&(0.0402)\t&(0.0434)\n\t\t\\\\\n\t\t\\hline\n\t\t\\multirow{2}{3em}{MCP}\n\t\t&\\textbf{0.637}\t&0.639\t&0.638\t&0.637\t&0.650\t&0.652\t&0.644\t&0.647\n\t\t\\\\\n\t\t&(0.0516)\t&(0.0756)\t&(0.0684)\t&(0.0753)\t&(0.0475)\t&(0.0441)\t&(0.0435)\t&(0.0408)\n\t\t\\\\\n\t\t\\hline\n\t\\end{tabular*}\n\t\\caption{Simulation result for median of mean prediction error of each method, with standard derivation in the parenthesis under the median. There are $10^{-1}$ factors for all standard derivations.\n\t\t\\label{table2}}\n\t\n\t\n\\end{table}\n\n\\paragraph{Comparison in Variable Selection:} Now we consider variable selection when there exists some weak signals. Suppose the simulation settings are similar with S3 above, with only the true signals changed. Let $s=20$, and the strength of first $4$ signals is $0.5 \\sigma \\sqrt{\\log p\/n}$, while the other $16$ are $ 5 \\sigma \\sqrt{ \\log p \/ n}$, where $\\sigma =1$. Clearly the first $4$ signals are too weak to be selected by all methods. However, since all methods are based minimizing the prediction error, the effect of these weak signals pertains, and may increase the false discovery rate. Under the above settings, we perform a model selection based on minimized prediction errors through $5$-fold cross validation. For our method, we use the same regularization parameter as the lasso to perform hard threshold after estimation. We repeat the process $50$ times and compare variable selection errors. From the figure~\\ref{fi:vse}, we can see our method is more robust to the enhancement of false detection due to failure on detecting weak signals: although the true negative errors of our method is $4$, which means all weak signals can not be detected, the false detections of our methods are closed to zero. For other methods, although sometimes weak signals can be detected, the risk of false detections is high. Overall, our methods perform consistent variable selection for strong signals, and achieve better estimation than the competitors.\n\\begin{figure}[h!]\n\t\\includegraphics[scale=0.7]{0vse.jpeg}\n\t\\caption{Variable selection errors for selected model based on minimized prediction cross validation errors. `fp' stands for false positive when the truth is zero but detected as signal; `tn' stands for true negative when the truth is nonzero but not detected.}\\label{fi:vse}\n\\end{figure}\n\n\n\\subsection{Real Data Analysis}\nWe compare our method with others to analyze the Riboflavin data set \\citep{buhlmann2014high}, which is available in \\textit{hdi} R package. 
The dataset contains $71$ observations of the log-transformed riboflavin production rate versus the logarithms of the expression levels of $4088$ genes. Before estimation, we first perform independence screening \citep{fan2008sure}, ranking the predictors by the strength of their correlation with the response, to reduce the dimension of the feature space to $500$. We then normalize the design matrix and add an intercept column. For evaluation, we split the observations into $50$ training samples and $21$ testing samples, and perform $10$-fold cross validation on the training data to select the number of iterations and the regularization parameters. As before, for our algorithm we use the initial value $\alpha=10^{-5}\times\mathbf{1}$ in all training runs. We record the prediction errors on the testing data and repeat the procedure $50$ times. As Figure~\ref{fi:box} below shows, our method again attains the smallest prediction errors, which suggests that it also achieves the smallest estimation error for this high dimensional linear regression problem.


\begin{figure}[h!]
	\includegraphics[scale=0.7]{realdata.jpeg}
	\caption{Prediction errors on the test data of the Riboflavin data set for each method. The $x$-axis indicates the method used for estimation, and the $y$-axis the testing prediction error $\|y-\hat{y}\|$.}\label{fi:box}
\end{figure}


We also perform variable selection on the whole Riboflavin data set, with the tuning parameters chosen by minimizing the cross-validated estimation error. For our algorithm, once the number of iterations has been determined by cross validation, we run gradient descent on the whole data set with the same initial values and step size for that number of iterations. We use the regularization parameter obtained by the Lasso as the hard-thresholding level on the absolute values of the resulting estimate to perform this `post-estimation' selection. The comparison between the different variable selection methods is given in Table~\ref{tb:3}. From the table we can see that, with the exception of a single variable, every variable detected by our method is also selected by the other methods, illustrating that our method tends to have a lower false positive rate for variable selection without sacrificing estimation and prediction accuracy.



\begin{table}[H]
	\begin{tabular*}{\textwidth}{|c@{\extracolsep{\fill}}cccc|}
		\hline
		&Lasso &SCAD &MCP &GD \\
		\hline
		\multirow{1}{3em}{Lasso}
		&33 & & &
		\\
		\hline
		\multirow{1}{3em}{SCAD}
		&11 &14 & &
		\\
		\hline
		\multirow{1}{3em}{MCP}
		&2 &3 &5 &
		\\
		\hline
		\multirow{1}{3em}{GD}
		&8 &9 &3 &11
		\\
		\hline
		\multirow{1}{3em}{Independent}
		&20 &2 &2 &1
		\\
		\hline
	\end{tabular*}
	\caption{Variable selection results for the Riboflavin data set; each cell reports the number of variables detected by both the row method and the column method. `Independent' is the number of variables detected by the corresponding column method that are not detected by any of the others. \label{tb:3}
	}
\end{table}

\section{Discussion}
In this paper, we illustrated the phenomenon of implicit regularization induced by a Hadamard product change of variables in high dimensional linear regression, and demonstrated that a combination of implicit regularization and early stopping yields better estimation than state-of-the-art penalized approaches with explicit regularization.
However, we still face several important open problems on implicit regularizations as our future directions. First, our theory heavily relies on the RIP condition, which is relatively strong comparing to the restricted eigenvalue condition as the minimal possible assumption on the design in the literature.\nIt would be interesting to investigate whether our results remain valid without the RIP condition.\nSecond, it is interesting to study whether any computationally efficient early stopping rule (rather than cross validation) based on certain date-driven model complexity measure can be applied and provably works.\n\n\n\\section{Proof of the results in the paper}\n\\subsection{Overview}\n\nIn this supplementary material, we provide proofs of the main theorems presented in the paper. \n\n\\subsection{Notation}\nRecall that $\\|v\\|=\\sqrt{\\sum_{j=1}^p v_j^2}$ and $\\|v\\|_\\infty=\\max_{j}|v_j|$ denote the vector $\\ell_2$-norm and $\\ell_\\infty$-norm, respectively. Moreover, $I$ is the identity matrix in $\\mathbb R^p$, and for any subset $S$ of $\\{1,\\ldots,p\\}$, $I_S$ is the diagonal matrix with $1$ on the $j$th diagonals for $j\\in S$ and $0$ elsewhere. We use bold letter $\\mathbf 1\\in\\mathbb R^p$ to denote an all-one vector. $\\theta_s(\\beta)$ denote the $s$-largest component of vector $\\beta\\in\\mathbb R^p$ in absolute value. We use notation $\\lesssim$ and $\\gtrsim$ to denote $\\leq$ and $\\geq$ up to some positive multiplicative constant, respectively. For two vectors $u$ and $v$ of the same dimension, we use $a\\geq b$ and $a\\leq b$ to denote element-wise $\\geq$ and $\\leq$. Denote $\\lambda_{\\max}(A)$ and $\\lambda_{\\min}(A)$ be the maximal and minimal eigenvalues of matrix $A$. Through this document, letters $c$, $c'$ and $c''$ denote some constants whose meaning may change from line to line.\n\n\\subsection{Some Useful Results}\nIn our proof, we will constantly deal with the Hadamard product $u\\circ v$ and the operation $(n^{-1}\\,X^TXu)\\circ v$ for two vectors $u,v\\in\\mathbb R^p$. \nTherefore, we collect some useful properties in this section, some of which are consequences of the RIP condition. \n\nThe first property regarding the Hadamard product is a direct consequence of the H\\\"{o}lder inequality.\n\\begin{lem}\\label{lem2}\n\tFor any two vectors $u$ and $v$ in $\\mathbb{R}^{p}$, we have:\n\t\\begin{equation}\n\t\\|u\\circ v\\| \\leq\\|u\\|\\|v\\|_\\infty.\n\t\\end{equation}\n\\end{lem}\n\\begin{proof}\n\tThis follows since $\\|u\\circ v\\|^2 = \\sum_{j}u_j^2v_j^2 \\leq \\|v\\|_\\infty^2 \\sum_{j}u_j^2=\\|u\\|^2\\|v\\|_\\infty^2$.\n\\end{proof}\nThe second lemma shows that under the RIP, the product $(n^{-1}\\,X^TXu)\\circ v$ can be well-approximated by $u\\circ v$ for all sparse vectors $u\\in\\mathbb R^p$ and any vector $v\\in\\mathbb R^p$. \n\\begin{lem}\\label{lem3}\n\tLet $X$ be a matrix in $\\mathbb{R}^{n\\times p}$ that satisfies $(s+1,\\delta)$-restricted isometry property (see Definition 2.1 in the paper). Then for any $s$-sparse vectors $u$ and any $v$ in $\\mathbb{R}^{p} $, we have:\n\t\\begin{equation}\n\t\\|(n^{-1}\\,X^TXu)\\circ v-u\\circ v\\|_{\\infty} \\leq \\delta\\|u\\|_2\\|v\\|_{\\infty}.\n\t\\end{equation}\n\\end{lem}\n\\begin{proof}\n\tLet $D(v)$ be the diagonal matrix in $\\mathbb{R}^{n\\times p}$ with diagonal elements the same as components of $v$ correspondingly. 
Then $\\|(n^{-1}\\,X^TXu)\\circ v-u\\circ v\\|_{\\infty}$ can be represented as:\n\t\\begin{align*}\n\t\\max_{i=1,2,...,p} |e_i^T D(v) n^{-1}X^TXu-e_i^T D(v) u|,\n\t\\end{align*}\n\twhere $e_i$ is a $p$-dimensional vector whose $i$-th component is $1$ and $0$ elsewhere. Using the fact that $X$ satisfies $(s+1,\\delta)$-RIP and $e_i^T D(v)$ is $1$-sparse, we have (see the remark right after Definition 2.1 in the paper):\n\t\\begin{align*}\n\t\\|(n^{-1}\\,X^TXu)\\circ v-u\\circ v\\|_{\\infty}\\leq \\max_{i=1,2,...,p} \\delta\\|e_i^T D(v)\\|\\|u\\|= \\delta\\|u\\|_2\\|v\\|_{\\infty}.\n\t\\end{align*}\n\\end{proof}\nOur third lemma considers the case when $u$ and $v$ are both arbitrary. \n\\begin{lem}\\label{lem4}\n\tLet $X$ be a matrix in $\\mathbb{R}^{n\\times p}$ that satisfies $(2,\\delta)$-restricted isometry property. Then for any vectors $u,\\, v\\in\\mathbb R^p$, we have: \n\t\\begin{equation}\n\t\\|(n^{-1}\\,X^TXu)\\circ v-u\\circ v\\|_{\\infty} \\leq \\delta\\|u\\|_1\\|v\\|_{\\infty}.\n\t\\end{equation}\n\\end{lem}\n\\begin{proof}\n\tSince we can decompose $u=\\sum_j I_j u$, we have\n\t\\begin{align*}\n\t\\|(n^{-1}\\,X^TXu)\\circ v-u\\circ v\\|_{\\infty}&=\\max_{i=1,2,...,p} |e_i^T D(v)\\, n^{-1}X^TXu-e_i^T D(v) u|\\\\\n\t&\\leq \\sum_{j=1}^{p} \\max_{i=1,2,...,p} |e_i^T D(v)\\, n^{-1}X^TX \\, I_j u-e_i^T D(v)\\, I_j u| \\\\\n\t&\\leq \\sum_{j=1}^{p} \\max_{i=1,2,...,p} \\delta \\|e_i^T D(v)\\| \\|u_j\\| \\\\\n\t&\\leq \\delta\\|u\\|_1\\|v\\|_{\\infty}.\n\t\\end{align*}\n\\end{proof}\nOur fourth and fifth lemma consider the concentration behavior about the noise terms.\n\\begin{lem}\\label{lem6}\n\t$w \\sim \\mathcal{N}(0, \\sigma^2 I_{n \\times n})$, all $\\ell_2$ norm of column vectors of $X_{n \\times s}$ are normalized to $\\sqrt{n}$, $s0$, we can ensure $\\|e_0\\|_{\\infty}\\lesssim 1\/p$, and $\\|z_0\\|_{\\infty}\\lesssim 1$. Now suppose for time $tt>T_1$, since $\\|e_t\\|, \\|d_t\\| \\lesssim 1\/p$ is still controlled, combined with bound~\\eqref{decomp}, we have:\n\t\\begin{align*}\n\t\\|u_t^2-\\beta^\\ast\\|^2 \\leq c (\\alpha^2+\\sigma^2 \\frac{Ms_1}{n} + \\sigma^2 \\frac{s_2 \\log p}{n}).\n\t\\end{align*}\n\\end{proof}\n\n\n\n\\subsection{Proof of Proposition \\ref{pro3.1}}\n\nSimilar to the proof for the nonnegative case, we use induction to show that for each $t\\leq T_1$,\n\\begin{align}\n\\|a_{t,S_1^c}\\|_{\\infty}\\lesssim 1\/p,\\ &&\\|b_{t,S_1^c}\\|_{\\infty}\\lesssim 1\/p \\label{eq3.1.1},\\\\\n\\|a_{t,S_1}\\|_{\\infty}\\lesssim 1, &&\\|b_{t,S_1}\\|_{\\infty}\\lesssim 1 \\label{eq3.1.2},\\\\\n\\|\\beta_{t,S_1}-\\beta^\\ast_{S_1}\\|\\lesssim \\sqrt{s} \\label{eq3.1.3},\n\\end{align}\nwhere the set $S_1^c$ is the union of weak signals and errors. When $t=0$, we have $g_0=\\alpha \\mathbf{1}$, $l_0=0$. Therefore, under the assumption $\\alpha\\lesssim 1\/p$, we have $\\|a_{0,S_1^c}\\|_{\\infty}\\lesssim 1\/p $, $\\|a_{0,S_1}\\|_{\\infty}\\lesssim 1$, and similar bounds for $b$. Now suppose for time $t\\beta_i^*\/2$, then $a_{t,i}^2$ decreases and $b_{t,i}^2$ increases, both in an exponential rate. Overall, the sign of $i$th component $\\beta_{t,i}=a_{t,i}^2-b_{t,i}^2$ of $\\beta_t$ tends to fall to negative in an exponential rate.\n\\end{itemize}\nOur analysis will be based on the resemblance between update rules of $(a_t,b_t)$ and $u_t$ in the nonnegative case. 
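For orientation, the iteration analysed in this section can be summarised by the following sketch (in Python): gradient descent after the change of variables $\beta=a\circ a-b\circ b$, started from a small initialization and stopped early. The multiplicative form of the updates matches the recursions used below; the step size, initialization and iteration count shown are illustrative only, and the sketch plays no role in the proofs.
\begin{verbatim}
import numpy as np

def hadamard_gd(X, y, alpha=1e-5, eta=0.1, n_iter=500):
    # gradient descent in the factors (a, b) of beta = a*a - b*b;
    # the small initialization alpha and early stopping (n_iter) act
    # as the implicit regularizer
    n, p = X.shape
    a = alpha * np.ones(p)
    b = alpha * np.ones(p)
    for _ in range(n_iter):
        g = X.T @ (X @ (a * a - b * b) - y) / n   # n^{-1} X^T (X beta_t - y)
        a, b = a * (1.0 - eta * g), b * (1.0 + eta * g)
    return a * a - b * b
\end{verbatim}
In the experiments reported in the paper, the number of iterations is chosen by cross validation and, when variable selection is required, a hard threshold is applied to the returned estimate.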
Recall that we assumed that the RIP constant $\\delta\\lesssim 1\/(\\kappa \\sqrt{s}\\log\\frac{p}{\\alpha})$, step size $\\eta\\lesssim 1\/(\\kappa \\log\\frac{p}{\\alpha})$, and $T_1=\\Theta(\\frac{ \\log(p\/\\alpha)}{\\eta m})$.\n\\begin{pro}\\label{apro3.2}\n\tUnder assumptions in theorem~\\ref{thm2} and the induction hypothesis~\\eqref{eq3.1.1}-\\eqref{eq3.1.3} at $t0;\\\\\n\t\\beta_{t,i}\\leq (1-c\\,\\eta\\,\\beta^\\ast_{i})^t (-\\alpha^2)+c'\\alpha^2,\\ \\ \\text{if}\\ \\beta^\\ast_{i}<0.\n\t\\end{align*}\n\\end{pro}\n\\begin{proof}\n\tSimilar with the proof of proposition \\ref{pro2.2}, first we approximate $(n^{-1}X^TXu)\\circ v$ by $u\\circ v$ based on the RIP condition via lemmas~\\ref{lem3} and \\ref{lem4},\n\t\\begin{align}\n\t&\\|n^{-1}X^T(X\\beta_t-y)-(I_{S_1}\\beta_{t}-I_{S_1}\\beta^\\ast)\\|_{\\infty} \\nonumber\\\\\n\t\\leq&\\, \\|n^{-1}X^TX (I_{S_1^c}r_{t})\\|_{\\infty}+\\delta\\|r_{t,S_1}\\| +\\|X^Tw\\|_\\infty \n\t\\lesssim \\frac{m}{\\log (p\/\\alpha)}, \\label{eq4.2.1}\n\t\\end{align}\n\timplying that under the condition $ m\\lesssim 1$, we have\n\t\\begin{align*}\n\t&\\|n^{-1}X^TX(\\beta_t-y)\\|_{\\infty}\\\\\n\t\\leq&\\, \\|\\beta_{t,S_1}-\\beta^\\ast_{S_1}\\|_{\\infty} +\n\t\\|n^{-1}X^T(X (I_{S_1^c}r_{t})-w)\\|_{\\infty}+\\delta\\|\\beta_{t,S_1}-\\beta^\\ast_{S_1}\\| \n\t\\lesssim 1, \n\t\\end{align*}\n\twhere the last inequality uses $\\|\\beta_{t,S_1^c}\\|_{\\infty}\\lesssim 1\/p$ and $\\|\\beta_{t,S_1}-\\beta^\\ast_{S_1}\\|\\lesssim \\sqrt{s}$.\n\t\n\tIn order to analyze $\\beta_{t,S_1}=a_{t,S_1}^2-b_{t,S_1}^2$, let us focus on $a_{t,S_1}^2$ and $b_{t,S_1}^2$, separately. According to the updating rule of $a_{t,S_1}$, we have\n\t\\begin{align*}\n\t&a_{t+1,S_1}^2=a_{t,S_1}^2-2\\eta a_{t,S_1}^2 \\circ [n^{-1}X^T(X\\beta_{t}-y)]_{S_1}+\\eta ^2 a_{t,S_1}^2 \\circ [n^{-1}X^T(X\\beta_{t}-y)]_{S_1}^2,\n\t\\end{align*}\n\twhere recall that for a vector $a\\in\\mathbb R^p$, $a_{S_1}$ denote the sub-vector of $a$ with indices in $S_1$. Applying lemmas~\\ref{lem3} with $v=\\mathbf 1$, we obtain\n\t\\begin{align*}\n\t&\\|a_{t+1,S_1}^2-a_{t,S_1}^2-2\\eta a_{t,S_1}^2(\\beta_{t,S_1}-\\beta^\\ast_{S_1})\\|_{\\infty}\\lesssim \\eta\\frac{m}{\\log (p\/\\alpha)}+\\eta^2 \\kappa^2 m^2\\stackrel{(i)}{\\lesssim} \\eta\\frac{m}{\\log (p\/\\alpha)}.\n\t\\end{align*}\n\twhere in step $(i)$ we used $\\eta \\kappa m \\lesssim \\frac{m}{\\log (p\/\\alpha)}$ sand $\\kappa m \\lesssim 1$. Similar to the nonnegative case, since $\\eta m \\leq 1\/2$, $\\frac{m}{\\log (p\/\\alpha)} \\leq 1\/2$, we have $a_{t,i}^2\/a_{t+1,i}^2 \\leq 4$ for $i\\in S_1$. Therefore, we can obtain an element-wise bound for $\\xi_t=(\\xi_{t,i})_{i\\in S_1}$,\n\t\\begin{equation*}\n\t\\xi_{t,i}:\\,=1-a_{t,i}^2\/a_{t+1,i}^2 \\circ(2\\eta ({\\beta}_{t,i}-\\beta^\\ast_i)), \n\t\\end{equation*} \n\tas $\\|\\xi_t\\|_{\\infty}\\lesssim \\eta\\frac{m}{\\log (p\/\\alpha)}$.\n\tEquivalently, we can write\n\t\\begin{align}\n\ta_{t+1,i}^2= a_{t,i}^2 (1-2\\eta ({\\beta}_{t,i}-\\beta^\\ast_i))+\\xi_{t,i} a_{t+1,i}^2. 
\\label{eq4.2.2}\n\t\\end{align} \n\tNow let us divide into two cases depending on the sign of $\\beta^\\ast_i,\\ i\\in S_1$:\n\t\n\t\\emph{Case $\\beta_i^* >0$:} When ${\\beta}_{t,i}-\\beta^\\ast_i\\leq -\\beta^\\ast_i\/2$, since $\\beta^\\ast_i \\geq m$, we have by equation~\\eqref{eq4.2.2},\n\t\\begin{align*}\n\ta_{t+1,i}^2 &\\geq \\frac{a_{t,i}^2(1+\\eta \\beta^\\ast_i)}{1+c\\eta\\frac{m}{\\log (p\/\\alpha)}} \\geq a_{t,i}^2(1+\\eta \\beta^\\ast_i)(1-c\\eta\\frac{\\beta_i^*}{\\log (p\/\\alpha)} )\\geq a_{t,i}^2(1+\\eta \\beta^\\ast_i\/4),\n\t\\end{align*}\n\twhere the last inequality follows since $1\/\\log(p\/\\alpha)\\leq 1\/2$ and $\\eta \\beta^\\ast_i \\leq 1\/2$. Similarly, we can analyze $b_{t,S}^2$ to get\n\t\\begin{align*}\n\tb_{t+1,i}^2 &\\leq \\frac{b_{t,i}^2(1-\\eta \\beta^\\ast_i)}{1-c\\eta\\delta\\sqrt{s}}\\leq b_{t,i}^2(1-\\eta \\beta^\\ast_i)(1+c\\eta\\beta_i^*\/\\log(p\/\\alpha))\\leq b_{t,i}^2(1-\\eta \\beta^\\ast_i\/4).\n\t\\end{align*}\n\tTherefore, $a_{t+1,i}^2$ increases at an exponential rate faster than the noise term $a_{t+1,S_1^c}$ while $b_{t+1,i}^2$ decreases to zero at an exponential rate, and when $a_{t+1,i}$ increases to $\\beta_{i}^\\ast\/2$, $b_{t+1,i}$ decreases to $O( \\alpha^4)$ correspondingly. A combination of these two leads to the first claimed bound for $\\beta_{i}^\\ast>0$.\n\t\n\t\\emph{Case $\\beta_i^* <0$:} The analysis for the case is similar: when ${\\beta}_{t,i}-\\beta^\\ast_i\\geq -\\beta^\\ast_i\/2$, we have:\n\t\\begin{align*}\n\ta_{t+1,i}^2 \\leq a_{t,i}^2(1-\\eta \\beta^\\ast_i\/4),\\quad\\mbox{and}\\quad b_{t+1,i}^2 \\geq b_{t,i}^2(1+\\eta \\beta^\\ast_i\/4),\n\t\\end{align*}\n\twhich leads to the second claimed bound for $\\beta_{i}^\\ast<0$.\n\\end{proof}\n\nAs a consequence of the proof in this step, after at most $T\\geq \\Theta ( \\frac{\\log(m\/\\alpha^2)}{\\eta m})$ iterations, we are guaranteed to have have $|{\\beta}_{T,i}|\\geq |\\beta^\\ast_i|\/2$ with $sign({\\beta}_{T,i})=sign(\\beta^\\ast_i)$ and $\\min\\{a_{T,i}^2, b_{T,i}^2\\} \\leq c \\alpha^4$.\n\n\\subsubsection*{Step 3: Prove Induction Hypothesis}\n\\begin{pro}\\label{apro3.3}\n\tUnder assumptions in theorem~\\ref{thm2}, the induction hypothesis~\\eqref{eq3.1.1}-\\eqref{eq3.1.3} at $t \\beta^*_j - \\sigma \\sqrt{s \/n} \\geq \\lambda$ based on the definition of the strong signals for $j \\in S_1$, while for errors and weak signals $j \\in S_1^c$, we have $\\|\\beta_{t,S_1^c}\\|_{\\infty} \\leq c\\alpha^2 < \\lambda$. 
Consequently, after the component-wise hard thresholding operation at level $\\lambda$, all strong signals remains nonzero while all weak signals and errors become zero.\n\\bibliographystyle{apalike}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzerng b/data_all_eng_slimpj/shuffled/split2/finalzzerng new file mode 100644 index 0000000000000000000000000000000000000000..64c7e5fb9380f7708501c88f6755047f83f3512e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzerng @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction and Main Results}\n\nWe study slow-fast systems driven by fractional Brownian motions (fBm):\n\\begin{alignat}{4}\n dX_t^\\varepsilon&=f(X_t^\\varepsilon,Y_t^\\varepsilon)\\,dt+g(X_t^\\varepsilon, Y_t^\\varepsilon)\\,dB_t, &\\qquad X_0^\\varepsilon&=X_0, \\label{eq:slow}\\\\\n dY_t^\\varepsilon&=\\frac{1}{\\varepsilon}b(X_t^\\varepsilon,Y_t^\\varepsilon)\\,dt+\\frac{1}{\\varepsilon^{\\hat{H}}}\\sigma\\,d\\hat{B}_t, &\\qquad Y_0^\\varepsilon&=Y_0, \\label{eq:fast}\n\\end{alignat}\nwhere $B$ and $\\hat{B}$ are independent fBms on an underlying complete probability space $(\\Omega, {\\mathcal F},\\ensuremath\\mathbb{P})$ with Hurst parameters $H\\in(\\frac12,1)$ and $\\hat{H}\\in(1-H,1)$, respectively. Here, $g:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n\\to\\Lin[m]{d}$ and $\\sigma\\in\\Lin{n}$ is non-degenerate. As the scale parameter $\\varepsilon>0$ is taken to $0$, one hopes that the \\emph{slow motion} $X^\\varepsilon$ is well approximated by an \\emph{effective dynamics} $\\bar{X}$. For $H=\\hat{H}=\\frac12$, this convergence has been studied by myriad authors since the seminal works of Bogolyubov-Mitropol{\\textquotesingle}ski\\u{\\i} \\cite{Bogolyubov1955} and Hasminskii \\cite{Hasminskii1968}, see e.g. the monographs and survey articles \\cite{Freidlin2012,Skorokhod2002,Pavliotis2008,Berglund2006,Liu2012,Li2018} and references therein for a comprehensive overview. It is still a very active research area \\cite{Liu2020,Roeckner2020,Roeckner2020a}.\n\nFor $H,\\hat{H}\\neq\\frac12$, the SDEs \\eqref{eq:slow}--\\eqref{eq:fast} provide a suitable model for economic, medical, and climate phenomena exhibiting a genuinely non-Markovian behavior in both the system and its environment. It is for example very well known that neglecting temporal memory effects in climate modeling by resorting to a diffusion model results in prediction notoriously mismatching observational data \\cite{Ashkenazy2003,Karner2002,Davidsen2010,Barboza2014}. It thus became widely popular to use fBm in climate modeling \\cite{Sonechkin1998,Yuan2014,Eichinger2020}.\n\n\nWhile slow-fast systems with fractional noise have seen a tremendous spike of interest in the last two years \\cite{Bourguin-ailus-Spiliopoulos-typical,Bourguin-Gailus-Spiliopoulos,Hairer2020,Pei-Inaham-Xu, Pei-Inaham-Xu2,Han2021}, all of these works resort to Markovian, strongly mixing fast processes by choosing $\\hat{H}=\\frac12$ in \\eqref{eq:fast}. The main contribution of this article is to establish the convergence $X^\\varepsilon\\to\\bar{X}$ even for a \\emph{non-Markovian} fast dynamics by allowing $\\hat{H}\\neq\\frac12$. It hardly comes as a surprise that this renders the analysis much more delicate and it is not clear at all if an averaging principle can even hold for a fractional, \\emph{non-mixing} environment. 
In fact, the usual assumption in the aforementioned works on Markovian averaging principles is a strong mixing condition with an algebraic rate \\cite{Heunis1994,Abourashchi2010}. This condition is essentially never satisfied for a fractional dynamics \\cite{Bai2016}.\n\nRecent work of Hairer and the first author of this article suggests the following ansatz for the effective dynamics:\n\\begin{equation}\\label{eq:effective_dynamics}\n d\\bar{X}_t=\\bar{f}(\\bar{X}_t)\\,dt+\\bar{g}(\\bar{X}_t)\\,dB_t,\\qquad \\bar{X}_0=X_0,\n\\end{equation}\nwhere $\\bar{f}(x)\\ensuremath\\triangleq\\int f(x,y)\\,\\pi^x(dy)$ and similar for $\\bar{g}$ \\cite{Hairer2020}. For $\\hat{H}=\\frac12$, this work showed that the average is taken with respect to the unique invariant $\\pi^x$ of the fast dynamics with \\emph{frozen} slow input\n\\begin{equation}\\label{eq:frozen_fast}\n dY_t^x=b(x,Y_t^x)\\,dt+\\sigma\\,d\\hat{B}_t.\n\\end{equation} \nFor $\\hat{H}\\neq\\frac12$, it is \\emph{a priori} not clear what $\\pi^x$ should be. We show that it is the one-time marginal of the unique stationary path space law $\\ensuremath\\mathbb{P}_{\\pi^x}\\in\\P\\big(\\ensuremath{\\mathcal{C}}(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)\\big)$, see \\cref{sec:physical_solution} for details. [Here and in the sequel, $\\P(\\ensuremath{\\mathcal{X}})$ denotes the set of Borel probability measures on a Polish space $\\ensuremath{\\mathcal{X}}$.]\n \n\nIn addition to standard regularity requirements ensuring well-posedness of the slow-fast system (see \\cref{cond:feedback} below), we shall impose a contractivity condition on the drift in \\eqref{eq:fast}:\n\\begin{definition}\\label{define-semi-contractive}\n Let $\\lambda, R\\geq 0$ and $\\kappa>0$. We write $\\S(\\kappa, R, \\lambda)$ for the set of Lipschitz continuous functions $b:\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}^n$ satisfying\n \\begin{equation}\\label{eq:semicontractive}\n \\Braket{b(x)-b(y),x-y}\\leq\\begin{cases}\n -\\kappa|x-y|^2, & |x|,|y|\\geq R,\\\\\n \\lambda|x-y|^2, &\\text{otherwise}.\\\\\n \\end{cases}\n\\end{equation}\n\\end{definition}\n\nNote that $\\lambda$ may be smaller than $\\Lip{b}$, whence its prescription is not necessarily redundant. If $b=-\\nabla V$ is a gradient vector field with potential $V$, then \\eqref{eq:semicontractive} is equivalent to $V$ being at most $\\lambda$-concave on $|x|0$. Then there is a number $\\lambda_0>0$ such that, if $b(x,\\cdot)\\in\\S\\big(\\kappa, R,\\lambda_0\\big)$ for every $x\\in\\ensuremath{\\mathbb{R}}^d$, all of the following hold:\n \\begin{itemize}\n \\item For every $x\\in\\ensuremath{\\mathbb{R}}^d$, there exists a unique stationary path space law $\\ensuremath\\mathbb{P}_{\\pi^x}\\in\\P\\big(\\ensuremath{\\mathcal{C}}(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)\\big)$ for the frozen fast dynamics \\eqref{eq:frozen_fast}.\n \\item Let $\\pi^x\\in\\P(\\ensuremath{\\mathbb{R}}^n)$ be the one-time marginal of $\\ensuremath\\mathbb{P}_{\\pi^x}$. 
If \n \\begin{equation*}\n x\\mapsto\\bar{g}(x)\\ensuremath\\triangleq\\int_{\\ensuremath{\\mathbb{R}}^n}g(x,y)\\,\\pi^x(dy)\\in\\ensuremath{\\mathcal{C}}_b^2\\big(\\ensuremath{\\mathbb{R}}^d,\\Lin[m]{d}\\big),\n \\end{equation*}\n then there is a unique pathwise solution to \\eqref{eq:effective_dynamics} and $X^\\varepsilon\\to\\bar{X}$ as $\\varepsilon\\to 0$ in $\\ensuremath{\\mathcal{C}}^\\alpha\\big([0,T],\\ensuremath{\\mathbb{R}}^d\\big)$ in probability for any $T>0$.\n \\end{itemize} \n\\end{theorem}\n\nThe regularity of $\\bar{g}$ not only hinges on the regularity of $g$ but also on the fast dynamics. First we note that the requirement on $\\bar{g}$ clearly holds for a diffusion coefficient depending only on the slow motion $X^\\varepsilon$: \n\\begin{equation*}\n dX_t^\\varepsilon=f(X_t^\\varepsilon,Y_t^\\varepsilon)\\,dt+g(X_t^\\varepsilon)\\,dB_t.\n\\end{equation*}\nAnother class of examples is provided by \\cref{cor:smooth} below. \n\nThe technical core of the proof of \\cref{thm:feedback_fractional} is a quantitative \\emph{quenched ergodic theorem} on the conditional evolution of the process \\eqref{eq:frozen_fast}. We prove this by means of a control argument, which is of independent interest. In fact, it allows us to improve recent work of Panloup and Richard \\cite{Panloup2020} by establishing geometric ergodicity for a class of SDEs driven by additive fractional noise. To our best knowledge, this is the first result achieving an exponential convergence rate for a fractional dynamics (excluding the trivial instance of an everywhere contractive drift).\n\nLet $\\TV{\\mu}\\ensuremath\\triangleq\\sup_A|\\mu(A)|$ denote the total variation norm, $\\ensuremath{\\mathcal{W}}^p$ be the $p$-Wasserstein distance, and $\\ensuremath{\\mathbb{W}}^p$ be the Wasserstein-like metric for generalized initial conditions introduced in \\cref{def:wasserstein}.\n \n\\begin{theorem}[Geometric Ergodic Theorem]\\label{thm:geometric}\nLet $(Y_t)_{t\\geq 0}$ be the solution to the SDE\n\\begin{equation}\\label{eq:sde_intro}\n dY_t=b(Y_t)\\,dt+\\sigma\\,dB_t\n\\end{equation}\nstarted in the generalized initial condition $\\mu$, where $\\sigma\\in\\Lin{n}$ is non-degenerate and $B$ is an fBm with Hurst parameter $H\\in(0,1)$. Then, for any $p\\geq 1$ and any $\\kappa,R>0$, there exists a $\\Lambda=\\Lambda(\\kappa,R,p)>0$ such that, whenever $b\\in\\S\\big(\\kappa,R,\\Lambda\\big)$, there is a unique invariant measure $\\mathcal I_\\pi$ for \\eqref{eq:sde_intro} in the sense of \\cref{initial-condition}. Moreover, \n \\begin{equation}\\label{eq:wasserstein_time_t}\n \\ensuremath{\\mathcal{W}}^p(\\mathcal{L}(Y_t),\\pi)\\leq Ce^{-ct} \\ensuremath{\\mathbb{W}}^p\\big(\\mu,\\mathcal I_{\\pi}\\big) \\qquad \\forall\\, t\\geq 0\n \\end{equation}\n and\n \\begin{equation}\\label{eq:tv_process}\n \\TV{\\L(Y_{\\cdot+t})-\\ensuremath\\mathbb{P}_\\pi}\\leq Ce^{-ct}\\ensuremath{\\mathbb{W}}^1\\big(\\mu,\\mathcal I_\\pi\\big) \\qquad \\forall\\, t\\geq 0,\n \\end{equation}\n where $c,C>0$ are numerical constants independent of $t\\geq 0$ and $\\mu$.\n\\end{theorem}\n\n\nThe work \\cite{Hairer2005} already contained a result on the rate of convergence. There, the author assumed an off-diagonal contraction condition, see \\cref{cond:off_diagonal} below, and obtained an algebraic rate in \\eqref{eq:tv_process}. 
Very recently Panloup and Richard \\cite{Panloup2020} studied $b\\in\\S(\\kappa,R,0)$ for which they found a rate of order $e^{-Dt^\\gamma}$ for some $\\gamma<\\frac23$ in both \\eqref{eq:wasserstein_time_t} and \\eqref{eq:tv_process}. Albeit these works did not require a global Lipschitz condition on the drift for Hurst parameters $H<\\frac12$, we emphasize that they do impose this assumption for $H>\\frac12$ to obtain \\eqref{eq:tv_process}. This is due to the lack of regularity of a certain fractional integral operator. \\Cref{thm:geometric} thus provides a genuine ramification of the results of \\cite{Panloup2020} in the latter case. We note that similarly to the work of Panloup and Richard, the Wasserstein decay \\eqref{eq:wasserstein_time_t} also holds for more general Gaussian driving noises with stationary increments. We shall briefly comment on this in \\cref{sec:geometric_ergodicity}.\n\nWith the spiking interest in numerical methods based on the generalized Langevin equation with memory kernel \\cite{Chak2020,Leimkuhler2020}, \\cref{thm:geometric} and the quenched quantitative ergodic theorem underpinning it can give a better theoretical understanding. A first step would be to derive quantitative estimates on the constants $c$, $C$, and $\\Lambda$; a possible pathway is outlined in \\cref{rem:constant_xi} below. It is an interesting open question if there is indeed a finite threshold value of $\\Lambda$ beyond which the exponential rates \\eqref{eq:wasserstein_time_t}--\\eqref{eq:tv_process} no longer hold. As established by Eberle, such a transition from exponential to sub-exponential rates does not happen in case $H=\\frac12$ \\cite{Eberle2016}.\n\n\\begin{example}\n Let us give an example of a drift not covered by the sub-exponential convergence theorems of \\cite{Panloup2020}. Consider the double-well potential\n \\begin{equation*}\n V(x)=\\alpha|x|^4-\\beta|x|^2\n \\end{equation*}\n for $\\alpha,\\beta>0$. We modify $V$ outside of a compact such that its Hessian is bounded. Set $b=-\\nabla V$. It is clear that $b\\notin\\bigcup_{\\kappa,R>0}\\S(\\kappa,R,0)$ as soon as $\\beta>0$. However, for $\\frac{\\beta}{\\alpha}$ sufficiently small, \\cref{thm:geometric} furnishes an exponential rate of convergence.\n\\end{example}\n\n\\paragraph{Outline of the article.} The next section features a brief overview of preliminary material. In \\cref{sec:convergence}, we prove the quantitative quenched ergodic theorem and deduce \\cref{thm:geometric}. The proof of \\cref{thm:feedback_fractional} is concluded in \\cref{sec:feedback}.\n\\paragraph{Acknowledgements.} We would like to thank the anonymous referees for their careful reading and helpful comments. Partial support from the EPSRC under grant no. EP\/S023925\/1 is also acknowledged.\n\n\\section{Preliminaries}\\label{sec:preliminaries}\n\nRecall that one-dimensional fractional Brownian motion with Hurst parameter $H\\in(0,1)$ is the centered Gaussian process $(B_t)_{t\\geq 0}$ with\n\\begin{equation*}\n \\Expec{(B_t-B_s)^2}=|t-s|^{2H},\\qquad s,t\\geq 0.\n\\end{equation*}\nTo construct $d$-dimensional fBm one lets the coordinates evolve as independent one-dimensional fBms with the same Hurst parameter. 
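We also remark that this covariance already yields a direct (if naive) way to simulate the dynamics \eqref{eq:sde_intro}: sample $B$ on a grid from its Gaussian law and apply an Euler scheme. The following sketch (in Python, for $n=1$) does this for the double-well drift of the example above, without the modification outside a compact set; the Cholesky-based sampler, the grid and the parameters are illustrative only and play no role in the analysis.
\begin{verbatim}
import numpy as np

def sample_fbm(H, n_steps, dt, rng):
    # exact sampling of (B_{t_1},...,B_{t_N}) from the covariance
    # Cov(B_s, B_t) = (s^{2H} + t^{2H} - |t-s|^{2H}) / 2
    t = dt * np.arange(1, n_steps + 1)
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))
    return L @ rng.standard_normal(n_steps)

def euler_fbm_sde(b, sigma, y0, H, T, n_steps, seed=0):
    # Euler scheme for dY_t = b(Y_t) dt + sigma dB_t driven by scalar fBm
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dB = np.diff(np.concatenate(([0.0], sample_fbm(H, n_steps, dt, rng))))
    y = np.empty(n_steps + 1)
    y[0] = y0
    for i in range(n_steps):
        y[i + 1] = y[i] + b(y[i]) * dt + sigma * dB[i]
    return y

a, c = 1.0, 0.5                              # double-well V(x) = a*x^4 - c*x^2
drift = lambda x: -(4 * a * x ** 3 - 2 * c * x)
path = euler_fbm_sde(drift, sigma=1.0, y0=3.0, H=0.7, T=20.0, n_steps=2000)
\end{verbatim}
For long horizons or fine grids the $O(N^2)$ Cholesky factorization would be replaced by a circulant embedding of the increments, but this is immaterial for the discussion here.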
We will make frequent use of the following classical representation of one-dimensional fBm as a fractional integral of a two-sided Wiener process $(W_t)_{t\\in\\ensuremath{\\mathbb{R}}}$, which is due to Mandelbrot and van Ness \\cite{Mandelbrot1968}:\n\\begin{equation}\\label{eq:mandelbrot}\n B_t=\\alpha_H\\int_{-\\infty}^0 (t-u)^{H-\\frac12}-(-u)^{H-\\frac12}\\,dW_u+\\alpha_H\\int_0^t(t-u)^{H-\\frac12}\\,dW_u,\\qquad t\\geq 0.\n\\end{equation}\nHere, $\\alpha_H>0$ is some explicitly known normalization constant and we also write $B_t=\\bar B_t+\\tilde B_t$. \n\n\n\\subsection{Invariant Measures of Fractional SDEs} \\label{sec:physical_solution}\n\nAlbeit being certainly non-Markovian on its own, the solution to \\eqref{eq:sde_intro} can actually be cast as the marginal of an infinite-dimensional Feller process $Z_t\\ensuremath\\triangleq\\big(Y_t,(W_s)_{s\\leq t}\\big)$ with values in $\\ensuremath{\\mathbb{R}}^n\\times\\H_H$. Here, $W$ is the two-sided Wiener process driving the equation through \\eqref{eq:mandelbrot} and $\\H_H$ is a H\\\"older-type space of paths $\\ensuremath{\\mathbb{R}}_-\\to\\ensuremath{\\mathbb{R}}^n$ supporting the Wiener measure $\\ensuremath{\\mathsf{W}}$. More concretely, $\\H_H$ is the closure of the space $\\{f\\in\\ensuremath{\\mathcal{C}}_c^\\infty(\\ensuremath{\\mathbb{R}}_-,\\ensuremath{\\mathbb{R}}^n):\\,f(0)=0\\}$ in the norm\n\\begin{equation*}\n \\|f\\|_{\\H_H}\\ensuremath\\triangleq\\sup_{s,t\\leq 0}\\frac{\\big|f(t)-f(s)\\big|}{|t-s|^{\\frac{1-H}{2}}\\sqrt{1+|t|+|s|}}.\n\\end{equation*} \nTo ensure that this construction actually furnishes a solution to \\eqref{eq:sde_intro}, we of course have to assume that the law of the second marginal of $Z$ coincides with $\\ensuremath{\\mathsf{W}}$ for each time $t\\geq 0$. This motivates the following definition:\n\\begin{definition}[\\cite{Hairer2005}]\\label{initial-condition}\nA measure $\\mu\\in\\P(\\ensuremath{\\mathbb{R}}^n\\times\\H_H)$ with $\\Pi_{\\H_H}^*\\mu=\\ensuremath{\\mathsf{W}}$ is called a \\emph{generalized initial condition}. A generalized initial condition $\\mathcal I_\\pi$, which is invariant for the Feller process $Z$ is called an \\emph{invariant measure} for the SDE \\eqref{eq:sde_intro}. We write $\\pi\\ensuremath\\triangleq\\Pi_{\\ensuremath{\\mathbb{R}}^n}^*\\mathcal I_\\pi$ for the first marginal and $\\ensuremath\\mathbb{P}_\\pi\\in\\P\\big(\\ensuremath{\\mathcal{C}}(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)\\big)$ for the law of the first coordinate of $Z$ when started in $\\mathcal I_\\pi$.\n\\end{definition} \n\nBy only adding the past of the driving noise to the auxiliary process $Z$, Hairer's framework rules out the existence of `unphysical' invariant measures, which frequently occur in the theory of random dynamical systems, see \\cite{Hairer2009} for details.\n\n\nThere are only a few examples for which the invariant measure can be written down explicitly:\n\\begin{example}\\label{ex:disintegration_fou}\n Let $Y$ be the fractional Ornstein-Uhlenbeck process \\cite{Cheridito2003}, that is,\n \\begin{equation*}\n dY_t=-Y_t\\,dt+dB_t.\n \\end{equation*}\n Then it is well known that its invariant measure is given by\n \\begin{equation*}\n \\mathcal I_\\pi(dy,dw)=\\delta_{F(w)}(dy)\\ensuremath{\\mathsf{W}}(dw),\\qquad F(w)\\ensuremath\\triangleq-\\int_{-\\infty}^0 e^s D_Hw(s)\\,ds,\n \\end{equation*}\n where $D_H:\\H_H\\to\\H_{1-H}$ is a continuous linear operator switching between Wiener and fBm paths, see \\cite[Eq. 
(3.6)]{Hairer2005} for the precise definition. The first marginal of $\\mathcal I_\\pi$ and the stationary path space law are given by\n \\begin{equation*}\n \\pi=\\L\\left(\\int_{-\\infty}^0 e^s\\,dB_s\\right)\\quad\\text{and}\\quad\\ensuremath\\mathbb{P}_\\pi=\\L\\left(\\int_{-\\infty}^t e^s\\,dB_s\\right)_{t\\geq 0}.\n \\end{equation*}\n \n\\end{example}\n\n\\begin{remark}\n The invariant measure of \\eqref{eq:sde_intro} is in general not of product form.\n\\end{remark}\n\nSince $\\sigma\\in\\Lin{n}$ is non-degenerate, one can show that there is an isomorphism between the strictly stationary solutions to \\eqref{eq:sde_intro} and the set of invariant measures (provided one quotients the latter by the equivalence relation identifying generalized initial initial conditions which generate the same evolution in the first marginal). It is also not hard to prove the following:\n\\begin{proposition}[\\cite{Hairer2005}]\\label{prop:existence_invariant_measure}\n If $\\sigma\\in\\Lin{n}$ and $b\\in\\S(\\kappa,R,\\lambda)$ for some $\\kappa>0$, $R,\\lambda\\geq 0$, then there exists an invariant measure for \\eqref{eq:sde_intro} in the sense of \\cref{initial-condition}. Moreover, $\\mathcal I_\\pi$ has moments of all orders.\n\\end{proposition}\n\nThe conclusion of \\cref{prop:existence_invariant_measure} actually holds for a merely locally Lipschitz off-diagonal large scale contractive drift (see \\cref{cond:off_diagonal} below). See also \\cite{Hairer2007,Deya2019} for versions for multiplicative noise. Finally, we introduce a Wasserstein-type distance for generalized initial conditions:\n\\begin{definition}\\label{def:wasserstein}\nLet $\\mu$ and $\\nu$ be generalized initial conditions. Let $\\mathscr{C}_{\\Delta}(\\mu,\\nu)$ denote the set of couplings of $\\mu$ and $\\nu$ concentrated on the diagonal $\\Delta_{\\H_H}\\ensuremath\\triangleq\\{(w,w^\\prime)\\in\\H_H^2:\\,w=w^\\prime\\}$. For $p\\geq 1$, we set\n \\begin{equation*}\n \\ensuremath{\\mathbb{W}}^p(\\mu,\\nu)\\ensuremath\\triangleq\\inf_{\\rho\\in\\mathscr{C}_\\Delta(\\mu,\\nu)}\\left(\\int_{(\\ensuremath{\\mathbb{R}}^n\\times\\H_H)^2}|x-y|^p\\,\\rho(dx,dw,dy,dw^\\prime)\\right)^{\\frac1p}.\n\\end{equation*}\n\\end{definition}\n\nNote that clearly $\\ensuremath{\\mathcal{W}}^p\\big(\\Pi_{\\ensuremath{\\mathbb{R}}^n}^*\\mu,\\Pi_{\\ensuremath{\\mathbb{R}}^n}^*\\nu\\big)\\leq \\ensuremath{\\mathbb{W}}^p(\\mu,\\nu)$ and the inequality is strict in general.\n\n\\subsection{Large Scale Contractions}\n\nKnown ergodic theorems on \\eqref{eq:sde_intro} require either a Lyapunov-type stability or a large scale contractivity condition on the drift $b$. The former indicates that once far out, the solutions have the tendency to come back to a neighborhood of the origin. Under this condition, it is conceivable that two distinct solutions can come back from diverging routes, thus allowing to couple them. The Lyapunov stability condition was used in \\cite{Fontbona2017,Deya2019} for multiplicative noise. \n\nA large scale contraction on the other hand will force two solutions to come closer once they have left a ball $B_R$ of sufficiently large radius $R>0$. 
The following two conditions appeared in previous works:

\begin{condition}[Off-diagonal large scale contraction, \cite{Hairer2005}]\label{cond:off_diagonal}
There exist numbers $\tilde \kappa>0$ and $D,\lambda\geq 0$ such that 
\begin{equation}\label{quasi-contr}
 \Braket{b(x)-b(y),x-y}\leq \big(D-\tilde \kappa|x-y|^2\big)\wedge\big(\lambda|x-y|^2\big)\qquad \forall\, x,y\in\ensuremath{\mathbb{R}}^n.
\end{equation}
\end{condition}


\begin{condition}[Large scale contraction, \cite{Panloup2020}]
There exist numbers $R\geq 0$ and $\kappa>0$ such that 
 \begin{equation}\label{contractive}
 \Braket{b(x)-b(y),x-y}\leq -\kappa|x-y|^2 \qquad \forall\, x,y\in \ensuremath{\mathbb{R}}^n\setminus B_R. 
 \end{equation}
\end{condition}

\begin{example}
 The function $b(x)=x-x^3$ is a large scale contraction. Indeed, $\braket{b(x)-b(y),x-y}=\big(1-(x^2+xy+y^2)\big)|x-y|^2\leq\big(1-\tfrac12(x^2+y^2)\big)|x-y|^2\leq-\kappa|x-y|^2$ whenever $|x|,|y|\geq R$ with $R^2\geq 1+\kappa$.
\end{example}

We will later use the following standard result, a slightly weaker version of which was proven in \cite[Lemma 5.1]{Panloup2020}.
\begin{lemma}\label{lem:bigger_ball}
If $b$ is locally Lipschitz continuous and satisfies the large scale contraction condition \eqref{contractive}, then for any $\bar{\kappa}\in(0,\kappa)$, there is an $\bar{R}>0$ such that
 \begin{equation*}
 \braket{b(x)-b(y),x-y}\leq -\bar{\kappa}|x-y|^2 \qquad \forall\, y\in\ensuremath{\mathbb{R}}^n,\, |x|>\bar{R}.
 \end{equation*}
\end{lemma}
\begin{proof}
 Since $\braket{b(x)-b(y),x-y}\leq -{\kappa}|x-y|^2$ for $x$ and $y$ outside of the ball $B_R$, we only need to show that the required contraction holds for $|y|\leq R$ and $|x|>\bar R$. Fix such $x$ and $y$.

Without loss of generality, we may also assume that $\bar{R}\geq R+1$. Then there is a $\beta\in(0,1)$ such that $z_\beta\ensuremath\triangleq (1-\beta)x+\beta y$ has norm $|z_\beta|=R+1$. Since $x-y=\frac{1}{\beta}(x-z_\beta)$ and both $x$ and $z_\beta$ lie outside of $B_R$, we have
 \begin{equation*}
 \braket{b(x)-b(z_\beta),x-y}\leq-\frac{1}{\beta} \kappa|x-z_\beta|^2=-\kappa\beta|x-y|^2.
 \end{equation*}
Let $K\ensuremath\triangleq\Lip[B_{R+1}]{b}$ denote the Lipschitz constant of $b$ on $B_{R+1}$. Since $|z_\beta-y|=(1-\beta)|x-y|$, it holds that
 \begin{align*}
 \braket{b(x)-b(y),x-y}&=\braket{b(x)-b(z_\beta),x-y}+\braket{b(z_\beta)-b(y),x-y}\\
 & \leq -\kappa\beta|x-y|^2+K(1-\beta)|x-y|^2.
 \end{align*}
Since $\beta$ is the proportion of the line segment from $x$ to $y$ lying outside of $B_{R+1}$, we can choose it as close to $1$ as we like by choosing $\bar R$ sufficiently large $\big(\beta=\frac{|x-z_\beta|}{|x-y|}\geq\frac{|x|-R-1}{|x|+R}\geq\frac{\bar{R}-R-1}{\bar{R}+R}\big)$. In particular, $-\kappa\beta+K(1-\beta)\leq-\bar{\kappa}$ once $\bar R$ is large enough, which proves the claim.
\end{proof}
This gives \\eqref{eq:semicontractive} with $\\kappa\\rightsquigarrow\\bar{\\kappa}$, $R\\rightsquigarrow\\bar{R}$, and $\\lambda\\rightsquigarrow \\Lip[B_{\\bar{R}}]{b}$.\n \\item\\label{it:off_diagonal} The off-diagonal large scale contraction condition is weaker than the large scale contraction condition. With the former, there may be no $\\kappa>0$ such that \\eqref{contractive} holds in the region $\\{|x-y| \\leq \\ensuremath{\\frac} D {2\\tilde \\kappa}\\} \\cap \\{|x|\\geq R, |y|\\geq R\\}$. On the other hand, if \\eqref{contractive} holds and $b$ is locally Lipschitz continuous, we can choose any $\\tilde \\kappa<\\kappa$. In fact, denoting the radius from \\cref{lem:bigger_ball} by $\\bar R>0$, one only needs to show \\eqref{quasi-contr} when both $x$ and $y$ are in $B_{\\bar R}$. To this end, we pick $\\lambda=\\Lip[B_{\\bar{R}}]{b}$ and $D\\geq\\sup_{x,y\\in B_{\\bar{R}}}(\\tilde\\kappa+\\lambda)|x-y|^2$.\n\\end{enumerate}\n\\end{remark}\n\n\n\\section{The Conditional Evolution of Fractional Dynamics}\\label{sec:convergence}\n\nTo derive strong $L^p$-bounds on the H\\\"older norm of the slow motion in \\cref{sec:feedback} below, we need to study the conditional distribution of the evolution \\eqref{eq:frozen_fast}. Unlike the Markovian case, the conditioning changes the dynamics and the resulting evolution may \\emph{no longer} solve the original equation. We will show that, in the limit $t\\to\\infty$, the law of the conditioned dynamics still converges to $\\pi^x$, the first marginal of the invariant measure for the fast dynamics with frozen slow input \\eqref{eq:frozen_fast}. The rate of convergence is however slower (only algebraic rather than exponential).\n\n\nLet us first state the regularity assumption imposed in \\cref{thm:feedback_fractional}. For this we introduce a convenient notation, which we shall frequently use in the sequel. We write $a\\lesssim b$ if there is a constant $C>0$ such that $a\\leq C b$. The constant $C$ is independent of any ambient parameters on which $a$ and $b$ may depend.\n\n\\begin{condition}\\label{cond:feedback}\n The drift $b:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}^n$ satisfies the following conditions:\n \\begin{itemize}\n \\item \\emph{Linear growth:}\n \\begin{equation*}\n |b(x,y)|\\lesssim 1+|x|+|y|, \\qquad \\forall\\, x\\in\\ensuremath{\\mathbb{R}}^d,y\\in\\ensuremath{\\mathbb{R}}^n.\n \\end{equation*}\n \\item \\emph{Uniformly locally Lipschitz in the first argument:} For each $R>0$, there is an $L_R>0$ such that\n \\begin{equation*}\n \\sup_{y\\in\\ensuremath{\\mathbb{R}}^n}|b(x_1,y)-b(x_2,y)|\\leq L_R|x_1-x_2|, \\qquad \\forall\\, |x_1|,|x_2|\\leq R.\n \\end{equation*}\n \\item \\emph{Uniformly Lipschitz in the second argument:} There is an $L>0$ such that\n \\begin{equation*}\n \\sup_{x\\in\\ensuremath{\\mathbb{R}}^d}|b(x,y_1)-b(x,y_2)|\\leq L|y_1-y_2|, \\qquad \\forall\\, y_1,y_2\\in\\ensuremath{\\mathbb{R}}^n.\n \\end{equation*}\n \\end{itemize}\n\\end{condition}\n\nLet $({\\mathcal F}_t)_{t\\geq 0}$ be a complete filtration to which $\\hat{B}$ is adapted. 
For any continuous, $({\\mathcal F}_t)_{t\\geq 0}$-adapted, $\\ensuremath{\\mathbb{R}}^d$-valued process $X$ with continuous sample paths, and any $\\varepsilon>0$, the equation\n\\begin{equation}\\label{eq:general_flow}\n d\\Phi_{t}^X=\\frac{1}{\\varepsilon}b\\big(X_t,\\Phi_{t}^X\\big)\\,dt+\\frac{1}{\\varepsilon^{\\hat{H}}}\\sigma\\,d\\hat{B}_t,\\qquad \\Phi_{t}^{X}=y,\n\\end{equation}\nhas a unique global pathwise solution under \\cref{cond:feedback}, see \\cref{lem:comparison} below. The flow $\\Phi_{s,t}^X(y)$ associated with \\eqref{eq:general_flow} is therefore well defined. An important special case of \\eqref{eq:general_flow} is when the extrinsic process is given by a fixed point $x\\in\\ensuremath{\\mathbb{R}}^d$. For this we reserve the notation $\\bar{\\Phi}^x$:\n\\begin{equation}\\label{eq:general_flow-fixed-x}\n d\\bar{\\Phi}_t^x=\\frac{1}{\\varepsilon}b(x,\\bar{\\Phi}_t^x)\\,dt+\\frac{1}{\\varepsilon^{\\hat{H}}}\\sigma\\,d\\hat{B}_t,\\qquad \\bar{\\Phi}_0^x=y.\n\\end{equation}\nWe would like the reader to observe that the dependency of flows on the scale parameter $\\varepsilon>0$ is suppressed in our notation. Note that, by self-similarity, sending $\\varepsilon\\to 0$ in \\eqref{eq:general_flow-fixed-x} is equivalent to keeping $\\varepsilon=1$ fixed and taking $t\\to\\infty$. As the $\\varepsilon$-dependence of the flows \\eqref{eq:general_flow}--\\eqref{eq:general_flow-fixed-x} will play a key r\\^ole in \\cref{sec:feedback}, we choose to introduce a new notation in case $\\varepsilon=1$, which is used throughout the rest of this section: \n\\begin{definition}\\label{def:flow}\n Let $\\ensuremath{\\mathfrak{h}}\\in\\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)\\ensuremath\\triangleq\\big\\{f\\in\\ensuremath{\\mathcal{C}}(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n):\\,f(0)=0\\big\\}$ and $x\\in\\ensuremath{\\mathbb{R}}^d$. We denote the flow of the ordinary differential equation\n \\begin{equation}\\label{eq:ode_solution}\n dy_t=b(x,y_t)\\,dt+d\\ensuremath{\\mathfrak{h}}_t\n \\end{equation}\n by $\\Psi^x_{s,t}(y,\\ensuremath{\\mathfrak{h}})$, where $y\\in\\ensuremath{\\mathbb{R}}^n$ and $0\\leq s\\leq t$. It is given by the solution to the integral equation\n \\begin{equation*}\n \\Psi_{s,t}^x(y,\\ensuremath{\\mathfrak{h}})=y+\\int_s^t b\\big(x,\\Psi_{s,r}^x(y,\\ensuremath{\\mathfrak{h}})\\big)\\,dr+\\ensuremath{\\mathfrak{h}}_t-\\ensuremath{\\mathfrak{h}}_s.\n \\end{equation*}\n We also use the abbreviation $\\Psi^x_{t}\\ensuremath\\triangleq \\Psi^x_{0,t}$.\n\\end{definition}\n\nUnder \\cref{cond:feedback}, \\eqref{eq:ode_solution} is well posed and it follows that $\\Psi_{s,t}^x(y, \\ensuremath{\\mathfrak{h}})=\\Psi_{t-s}^x(y,\\theta_s \\ensuremath{\\mathfrak{h}})$ for each $0\\leq s\\leq t$ and $y\\in\\ensuremath{\\mathbb{R}}^n$, where $\\theta_sf=f(\\cdot+s)-f(\\cdot)$ is the Wiener shift operator on the path space. If $x\\in\\ensuremath{\\mathbb{R}}^d$, $y\\in\\ensuremath{\\mathbb{R}}^n$, or $\\ensuremath{\\mathfrak{h}}\\in\\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)$ are random, we understand \\cref{def:flow} pathwise for each fixed sample $\\omega\\in\\Omega$. 
The solutions to \\eqref{eq:general_flow} and \\eqref{eq:general_flow-fixed-x} are also understood in this sense.\n\n\n\\subsection{Processes with a Locally Independent Increment Decomposition}\\label{sec:increment}\n\nThe derivation of the conditioned evolution relies on the following simple fact: For $t,h\\geq 0$, we have\n\\begin{equation}\\label{eq:increment_decomposition}\n (\\theta_t\\hat{B})_h=\\hat{B}_{t+h}-\\hat{B}_t=\\bar{\\hat{B}}_h^t+\\tilde{\\hat{B}}_h^t,\n\\end{equation}\nwhere, in a slight abuse of notation (the integrand has to be multiplied by the identity matrix),\n\\begin{equation*}\n \\bar{\\hat{B}}_h^t\\ensuremath\\triangleq \\alpha_{\\hat H}\\int_{-\\infty}^t\\left((t+h-u)^{\\hat{H}-\\frac12}-(t-u)^{\\hat{H}-\\frac12}\\right)\\,d\\hat{W}_u,\\quad \\tilde{\\hat{B}}_h^t\\ensuremath\\triangleq\\alpha_{\\hat H}\\int_t^{t+h} (t+h-u)^{\\hat{H}-\\frac12}\\,d\\hat{W}_u.\n\\end{equation*}\nThis decomposition is easily obtained by rearranging \\eqref{eq:mandelbrot}. For any $t\\geq 0$, the two components $\\bar{\\hat{B}}^t$ and $ \\tilde{\\hat{B}}^t$ are independent. We call $\\bar{\\hat{B}}^t$ the \\emph{smooth} part of the increment, whereas $\\tilde{\\hat{B}}^t$ is referred to as the \\emph{rough} part. This terminology is based on the fact that, away from the origin, the process $\\bar{\\hat{B}}^t$ has continuously differentiable sample paths and therefore the `roughness' of $\\hat{B}$ essentially comes from $\\tilde{\\hat{B}}^t$. Indeed, it is not hard to check that $\\tilde{\\hat{B}}^t$ is of precisely the same H\\\"older regularity as $\\hat{B}$. We also observe that $\\tilde{\\hat{B}}^t\\overset{d}{=}\\tilde{\\hat{B}}^0\\ensuremath\\triangleq\\tilde{\\hat{B}}$ for all $t>0$. \n\nThe process $\\tilde{\\hat{B}}$ is---up to a prefactor---known as Riemann-Liouville process (or type-II fractional Brownian motion) and was initially studied by L\\'evy \\cite{Levy1953}. Its use in modelling was famously discouraged in \\cite{Mandelbrot1968} due to its overemphasis of the origin and the `regularized' process \\eqref{eq:mandelbrot} was proposed instead. In fact as we shall see below, the lack of stationarity of the increments of $\\tilde{\\hat{B}}$ complicates the analysis of the conditioned evolution. \n\n\\begin{definition}\\label{def:ind_increment}\nLet $({\\mathcal F}_t)_{t\\geq 0}$ be a complete filtration. An $({\\mathcal F}_t)_{t\\geq 0}$-adapted stochastic process $Z$ is said to have a \\emph{locally independent decomposition of its increments} with respect to $({\\mathcal F}_t)_{t\\geq 0}$ if for any $t\\geq 0$, there exists an increment decomposition of the form\n$$(\\theta_t Z)_h=\\tilde Z^t_h+\\bar Z^t_h, \\qquad h\\geq 0,$$\nwhere $\\bar Z^t \\in {\\mathcal F}_t$ and $\\tilde Z^t$ is independent of ${\\mathcal F}_t$. \n\\end{definition}\n\n\nAs seen in \\eqref{eq:increment_decomposition}, an fBm $\\hat{B}$ has a locally independent decomposition of its increments with respect to any filtration $({\\mathcal F}_t)_{t\\geq 0}$ \\emph{compatible} with $\\hat{B}$. By this we mean that $(\\hat{W}_s)_{s\\leq t}\\in{\\mathcal F}_t$ and $(\\theta_t\\hat{W}_s)_{s\\geq t}$ is independent of ${\\mathcal F}_t$ for any $t\\geq 0$. 
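To make the decomposition \eqref{eq:increment_decomposition} concrete, the following sketch (in Python, for $n=1$) approximates the two parts of an increment by left-point Riemann sums over a common Wiener path; the integral over $(-\infty,t]$ is truncated and the constant $\alpha_{\hat H}$ is omitted, so all numerical choices are illustrative only.
\begin{verbatim}
import numpy as np

def increment_decomposition(H, t, h, dt=1e-3, t_past=50.0, seed=0):
    # approximates Bbar^t_h (smooth part) and Btilde^t_h (rough part);
    # their sum approximates the increment B_{t+h} - B_t
    rng = np.random.default_rng(seed)
    past = np.arange(t - t_past, t, dt)      # truncated grid for (-infty, t)
    future = np.arange(t, t + h, dt)         # grid for [t, t+h)
    dW_past = rng.normal(0.0, np.sqrt(dt), past.size)
    dW_future = rng.normal(0.0, np.sqrt(dt), future.size)
    smooth = np.sum(((t + h - past) ** (H - 0.5)
                     - (t - past) ** (H - 0.5)) * dW_past)
    rough = np.sum((t + h - future) ** (H - 0.5) * dW_future)
    return smooth, rough

bar_B, tilde_B = increment_decomposition(H=0.75, t=2.0, h=1.0)
\end{verbatim}
Conditionally on ${\mathcal F}_t$ the smooth part is known, so only the rough part remains random; this is the mechanism exploited in the two step conditioning argument below.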
\n\n\n\\begin{example}\\label{example-1}\nLet us give some further examples, which will become important later on:\n\\begin{enumerate}\n\\item\\label{it:rough_decomposition} Let $(\\hat W_t)_{t\\geq 0}$ be a Wiener process and \n$ \\tilde{\\hat B}_t\\ensuremath\\triangleq\\alpha_{\\hat H}\\int_0^{t} (t-u)^{H-\\frac12}\\,d\\hat W_u$ be the Riemann-Liouville process. \nThen, for any $t\\geq 0$ and $h\\geq 0$, \n \\begin{align}\n (\\theta_t\\tilde{\\hat B})_h&=\\alpha_{\\hat H}\\int_0^t \\Big( (t+h-u)^{\\hat H-\\frac12}-(t-u)^{\\hat H-\\frac12}\\Big)\\,d\\hat W_u+\\alpha_{\\hat H}\\int_t^{t+h}(t+h-u)^{\\hat H-\\frac12}\\,d\\hat W_u\\nonumber\\\\\n &\\ensuremath\\triangleq Q^t_h+\\tilde{\\hat B}^t_h.\\label{eq:z_t}\n \\end{align}\nThus, $\\tilde{\\hat B}$ admits a locally independent decomposition of its increments with respect to any filtration compatible with $\\hat{B}$.\n\n\\item Another example, given in \\cite{Gehringer-Li-2020, Gehringer-Li-2020-1}, is the stationary fractional Ornstein-Uhlenbeck process $Z_t=\\int_{-\\infty }^t e^{-(t-s)}\\,d\\hat{B}_s$. More generally, it is clear that $Z_t=\\int_{-\\infty }^t \\mathfrak{G}(s,t)\\,d\\hat{B}_s$ with a suitable kernel $\\mathfrak{G}$ also has this property.\n\n\\item\\label{it:smooth_decomposition} Albeit not being a direct instance of \\cref{def:ind_increment}, it is also interesting to observe a \\emph{fractal} property of $\\hat{B}$: The smooth part of the increment has an independent decomposition as $\\bar{\\hat B}_h^t=P_h^t+Q_h^t$, where $Q^t$ was defined in \\eqref{eq:z_t} and\n\\begin{equation*}\n P_h^t\\ensuremath\\triangleq\\alpha_{\\hat H}\\int_{-\\infty}^0\\Big((t+h-u)^{\\hat H-\\frac12}-(t-u)^{\\hat H-\\frac12}\\Big)\\,d\\hat W_u.\n\\end{equation*}\n\\end{enumerate}\n\\end{example}\n\nOur argument for the quenched ergodic theorem will be based on a two step conditioning procedure making use of an explicit representation of the conditioned process. We state it for a general noise with locally independent increments:\n\n\\begin{lemma}\\label{lem:conditioning_general}\nLet $0\\leq s\\leq t0$ we define the set\n\\begin{equation}\\label{eq:omega}\n \\Omega_{\\alpha}\\ensuremath\\triangleq\\Big\\{f\\in \\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)\\cap\\ensuremath{\\mathcal{C}}^2\\big((0,\\infty),\\ensuremath{\\mathbb{R}}^n\\big):\\limsup_{t\\to\\infty}\\left(t^{\\alpha}\\big|\\dot{f}(t)\\big|+t^{1+\\alpha}\\big|\\ddot{f}(t)\\big|\\right)<\\infty\\Big\\}.\n\\end{equation}\nThis space is equipped with the semi-norm \n\\begin{equation*}\n \\|f\\|_{\\Omega_\\alpha}\\ensuremath\\triangleq\\sup_{t\\geq 1}t^{\\alpha}\\big|\\dot{f}(t)\\big|+\\sup_{t\\geq 1}t^{1+\\alpha}\\big|\\ddot{f}(t)\\big|.\n\\end{equation*}\nWe also set $\\Omega_{\\alpha-}\\ensuremath\\triangleq\\bigcap_{\\beta<\\alpha}\\Omega_{\\beta}$. The motivation for this definition stems from the following lemma:\n\n\\begin{lemma}\\label{lem:smooth_part_decay}\n Let $\\varepsilon>0$ and $t\\geq 0$. Then $\\varepsilon^{-\\hat{H}}\\bar{\\hat{B}}^t_{\\varepsilon\\cdot}\\overset{d}{=}\\bar{\\hat{B}}^t\\overset{d}{=}\\bar{\\hat{B}}\\in\\Omega_{(1-\\hat{H})-}$ a.s. and $\\|\\bar{\\hat{B}}\\|_{\\Omega_\\alpha}\\in\\bigcap_{p\\geq 1} L^p$ for any $\\alpha<1-\\hat{H}$.\n\\end{lemma}\n\\begin{proof}\n Let $\\delta\\in\\big(0,1-\\hat H\\big)$. 
It is enough to prove that there is a random variable $C>0$ with moments of all orders such that\n \\begin{equation}\\label{eq:estimate_all_orders}\n \\big|\\dot{\\bar{\\hat{B}}}_t\\big|\\leq \\frac{C}{t^{1-\\hat{H}-\\delta}},\\qquad\\big|\\ddot{\\bar{\\hat{B}}}_t\\big|\\leq\\frac{C}{t^{2-\\hat{H}-\\delta}}\n \\end{equation}\n for all $t\\geq 1$ on a set of probability one. This in turn easily follows from sample path properties of the standard Wiener process. Firstly, we have that\n \\begin{equation*}\n \\dot{\\bar{\\hat{B}}}_t=\\alpha_{\\hat{H}}\\left(\\hat{H}-\\frac12\\right)\\int_{-\\infty}^0(t-u)^{\\hat{H}-\\frac32}\\,dW_u=-\\alpha_{\\hat{H}}\\left(\\hat{H}-\\frac12\\right)\\left(\\hat{H}-\\frac32\\right)\\int_{-\\infty}^0 (t-u)^{\\hat{H}-\\frac52}W_u\\,du\n \\end{equation*}\n since $\\lim_{u\\to-\\infty}(t-u)^{\\hat{H}-\\frac32}W_u=0$. Therefore,\n \\begin{align*}\n \\big|\\dot{\\bar{\\hat{B}}}_t\\big|&\\lesssim \\left(\\sup_{-1\\leq s\\leq 0} |W_s| \\int_{-1}^0 (t-u)^{\\hat{H}-\\frac52}\\,du+\\sup_{s\\leq -1}\\frac{|W_s|}{(t-s)^{\\frac12+\\delta}}\\int_{-\\infty}^{-1} (t-u)^{\\hat{H}-2+\\delta}\\,du\\right)\\\\\n &\\leq C\\left(t^{\\hat{H}-\\frac52}+(t+1)^{\\hat{H}-1+\\delta}\\right).\n \\end{align*}\n The fact that $C$ has moments of all order is an easy consequence of Fernique's theorem. In fact, the Wiener process defines a Gaussian measure on the separable Banach space\n \\begin{equation*}\n \\mathcal{M}^{\\frac12+\\delta}\\ensuremath\\triangleq\\left\\{f\\in\\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n):\\,\\|f\\|_{\\mathcal{M}^{\\frac12+\\delta}}\\ensuremath\\triangleq\\sup_{u\\geq 0}\\frac{|f(u)|}{(1+u)^{\\frac12+\\delta}}<\\infty\\right\\}\n \\end{equation*}\n By Fernique's theorem, the random variable $\\|W\\|_{\\mathcal{M}^{\\frac12+\\delta}}$ has therefore Gaussian tails. The first estimate in \\eqref{eq:estimate_all_orders} follows. The bound on $\\big|\\ddot{\\bar{\\hat{B}}}_t\\big|$ is similar.\n\\end{proof}\n\n\n\\subsection{A Universal Control}\n\nLet $b\\in\\S(\\kappa,R,\\lambda)$, $\\varsigma\\in\\ensuremath{\\mathcal{C}}_0([0,1],\\ensuremath{\\mathbb{R}}^n)$, and $u\\in L^\\infty([0,1],\\ensuremath{\\mathbb{R}}^n)$. Let us consider the following controlled ordinary differential equation:\n\\begin{equation}\\label{eq:controlled_ode}\n x^{\\varsigma,u}(t)=x_0+\\int_0^t b\\big(x^{\\varsigma,u}(s)\\big)\\,ds+\\varsigma(t)+\\int_0^t u(s)\\,ds,\\qquad t\\in[0,1].\n\\end{equation}\nWe think of $\\varsigma$ as an external `adversary' and of $u$ as a control. Since $b$ is Lipschitz continuous, it is standard that there is a unique global solution to \\eqref{eq:controlled_ode}. If $u\\equiv 0$, we adopt the shorthand $x^{\\varsigma}\\ensuremath\\triangleq x^{\\varsigma,0}$.\n\nThe aim of this section is to exhibit an $\\eta\\in(0,1)$ as large as possible so that the following holds: Given $\\bar{R}>0$, there is an $M>0$ such that, for any adversary $\\varsigma\\in\\ensuremath{\\mathcal{C}}_0([0,1],\\ensuremath{\\mathbb{R}}^n)$ and any initial condition $x_0\\in\\ensuremath{\\mathbb{R}}^n$, we can find a control $u\\in L^\\infty([0,1],\\ensuremath{\\mathbb{R}}^n)$ with $|u|_\\infty\\leq M$ ensuring that the occupation time of $x^{\\varsigma,u}$ of the set $\\ensuremath{\\mathbb{R}}^n\\setminus B_{\\bar{R}}$ is at least $\\eta$. 
It is important to emphasize that the sup-norm of the control $|u|_\\infty$ may neither depend on the adversary $\\varsigma$ nor on the initial condition $x_0$ (otherwise the construction of $u$ essentially becomes trivial). We shall actually choose $u$ as concatenation of the zero function and a \\emph{universal} control $\\hat u\\in L^\\infty([0,N^{-1}],\\ensuremath{\\mathbb{R}}^n)$ for a sufficiently large, but universal, $N\\in\\ensuremath{\\mathbb{N}}$.\n\nWe begin with a lemma:\n\n\\begin{lemma}\\label{lem:control_bound}\n There is a constant $C>0$ independent of $\\varsigma$ and $u$ such that, for the solution of \\eqref{eq:controlled_ode},\n \\begin{equation*}\n |x^{\\varsigma,u}(t)-x^{\\varsigma}(t)|^2\\leq C(1+|u|^2_\\infty)t\n \\end{equation*}\n for all $t\\in [0,1]$.\n\\end{lemma}\n\\begin{proof}\n Since $b$ is contractive on the large scale, there are constants $D,\\tilde{\\kappa}>0$ such that\n \\begin{equation*}\n \\braket{b(x)-b(y),x-y}\\leq D-\\tilde{\\kappa}|x-y|^2\n \\end{equation*}\n for all $x,y\\in\\ensuremath{\\mathbb{R}}^n$, see \\cref{rem:large_scall_off_diagonal} \\ref{it:off_diagonal}. Define now $f(t)\\ensuremath\\triangleq e^{\\tilde\\kappa t}\\big|x^{\\varsigma,u}(t)-x^{\\varsigma}(t)\\big|^2$, then\n \\begin{equation*}\n f^\\prime(t)=\\tilde\\kappa f(t)+2e^{\\tilde\\kappa t}\\Braket{b\\big(x^{\\varsigma,u}(t)\\big)-b\\big(x^{\\varsigma}(t)\\big)+u(t),x^{\\varsigma,u}(t)-x^{\\varsigma}(t)}\\leq 2D e^{\\tilde\\kappa}+\\frac{|u(t)|^2}{\\tilde\\kappa}\n \\end{equation*}\n for all $t\\in [0,1]$. Consequently, setting $C\\ensuremath\\triangleq \\max(2D,\\tilde\\kappa^{-1})$, we have\n \\begin{equation*}\n \\big|x^{\\varsigma,u}(t)-x^{\\varsigma}(t)\\big|^2\\leq C \\int_0^t e^{-\\tilde\\kappa(t-s)}\\left(1+|u(s)|^2\\right)\\,ds\n \\end{equation*}\n and the lemma follows at once.\n\\end{proof}\n\nFor a piecewise constant function $u:[0,1]\\to\\ensuremath{\\mathbb{R}}^n$, let $\\ensuremath{\\mathcal{D}}_u\\subset[0,1]$ denote the finite set of discontinuities. We then have the following control result:\n\\begin{proposition}\\label{prop:control}\n\tLet $\\eta<\\frac12$ and $\\bar{R}>0$. Then there is a value $M>0$ such that the following holds true: For each $\\varsigma\\in\\ensuremath{\\mathcal{C}}_0([0,1],\\ensuremath{\\mathbb{R}}^n)$ and each $x_0\\in\\ensuremath{\\mathbb{R}}^n$, we can find a piecewise constant control $u\\in L^\\infty([0,1],\\ensuremath{\\mathbb{R}}^n)$ with $|u|_\\infty+|\\ensuremath{\\mathcal{D}}_u|\\leq M$ such that the occupation time of $x^{\\varsigma,u}$ of the set $\\ensuremath{\\mathbb{R}}^n\\setminus B_{\\bar{R}}$ is greater than or equal to $\\eta$.\n\\end{proposition}\n\n\\begin{proof}\n We prove that there exist an integer $N$ and a control $\\hat u\\in L^\\infty([0,N^{-1}])$ with at most two constant pieces independent of both the initial condition $x_0$ and the adversary $\\varsigma$ such that either\n \\begin{equation*}\n \\mathop{\\mathrm {Leb}} \\Big(\\Big\\{t\\in[0,N^{-1}]:|x^{\\varsigma}(t)|>\\bar{R} \\Big\\}\\Big)\\geq \\ensuremath{\\frac} \\eta N\\quad\\text{or}\\quad\\mathop{\\mathrm {Leb}} \\Big(\\Big\\{t\\in[0,N^{-1}]:|x^{\\varsigma,\\hat u}(t)|>\\bar{R}\\Big\\}\\Big)\\geq \\ensuremath{\\frac} \\eta N.\n \\end{equation*}\n In the former case, we of course choose $u\\equiv 0$, otherwise we let $u=\\hat u$. 
By the flow property of well-posed ordinary differential equations, the solution to \\eqref{eq:controlled_ode} restarted at time $N^{-1}$ solves a similar equation (with new adversary $\\tilde{\\varsigma}(\\cdot)=\\theta_{N^{-1}} \\varsigma \\in\\ensuremath{\\mathcal{C}}_0([0,1-N^{-1}],\\ensuremath{\\mathbb{R}}^n)$ and initial condition $x^{\\varsigma,u}(N^{-1})$). Upon constructing $\\hat u$, we can thus easily deduce the proposition by iterating this construction.\n\n Suppose that the time spent by uncontrolled solution $(x_t^\\varsigma)_{t \\in [0,N^{-1}]}$ in $\\ensuremath{\\mathbb{R}}^n\\setminus B_{\\bar{R}}$ is strictly less than $\\frac{\\eta}{N}$. We let $A_{x_0,\\varsigma}$ be the set of times $t\\in[0,N^{-1}]$ at which $|x^\\varsigma(t)|\\leq \\bar{R}$. Note that $A_{x_0,\\varsigma}$ is the union of a countable number of closed, disjoint intervals. By assumption, we have $\\mathop{\\mathrm {Leb}}(A_{x_0,\\varsigma})>(1-\\eta)N^{-1}$. \n \n For $\\delta\\ensuremath\\triangleq (2N)^{-1}$ and $e$ any fixed unit vector, we define $\\hat u$ to be the piecewise constant function\n \\begin{equation*}\n \\hat u(t)=\\begin{cases}\n \\frac{2\\bar{R}+1}{(1-2\\eta)\\delta}e, & t\\in [0, \\delta],\\\\\n -\\frac{2\\bar{R}+1}{(1-2\\eta)\\delta}e, & t\\in (\\delta, 2\\delta],\n \\end{cases}\n \\end{equation*}\n so that\n \\begin{equation*}\n \\int_0^t \\hat u(s)\\,ds=\\begin{cases}\n \\frac{2\\bar{R}+1}{(1-2\\eta)\\delta}te, & t\\in [0, \\delta],\\\\\n \\frac{2\\bar{R}+1}{(1-2\\eta)\\delta}(2\\delta-t)e, & t\\in (\\delta, 2\\delta].\n \\end{cases}\n \\end{equation*}\n We observe that\n \\begin{equation}\\label{eq:lower_control}\n |x^{\\varsigma,\\hat u}(t)|\\geq \\left |\\int_0^t \\hat u(s)\\,ds \\right|-|x^\\varsigma(t)|-\\Lip{b}\\int_{0}^t\\big|x^{\\varsigma,\\hat u}(s)-x^\\varsigma(s)\\big|\\,ds.\n \\end{equation}\n Moreover, owing to \\cref{lem:control_bound}, we can bound\n \\begin{equation}\\label{eq:lower_control_n}\n \\phantom{\\leq}\\int_{0}^t\\big|x^{\\varsigma,\\hat u}(s)-x^\\varsigma(s)\\big|\\,ds\\leq\\sqrt{C}(1+|\\hat u|_\\infty)\\int_{0}^{2\\delta}\\sqrt{s}\\,ds=\\frac{2\\sqrt{C}}{3 N^{\\frac32}}\\left(1+\\frac{2(2\\bar{R}+1)N}{1-2\\eta}\\right)<\\Lip{b}^{-1},\n \\end{equation}\n provided we choose the integer $N=N(C,\\bar{R},\\eta,\\Lip{b})$ large enough. Define the set $B_{x_0,\\varsigma}\\ensuremath\\triangleq A_{x_0,\\varsigma}\\cap [(1-2\\eta)\\delta,(1+2\\eta)\\delta]$. Combining \\eqref{eq:lower_control} and \\eqref{eq:lower_control_n}, we then certainly have that $|x^{\\varsigma,\\hat u}(t)|>\\bar{R}$ for all $t\\in B_{x_0,\\varsigma}$. Since\n \\begin{equation*}\n \\mathop{\\mathrm {Leb}}(B_{x_0,\\varsigma})\\geq\\frac{(1-\\eta)}{N}-2(1-2\\eta)\\delta=\\frac{\\eta}{N}\n \\end{equation*}\n and $|\\hat u|_\\infty$ as well as $|\\ensuremath{\\mathcal{D}}_{\\hat{u}}|$ only depend on $N$ and $\\bar R$, this finishes the proof.\n\\end{proof}\n\nWe conclude our study of the deterministic controlled ODE \\eqref{eq:controlled_ode} with the following stability result which is proven by a standard Gr\\\"onwall argument:\n\\begin{lemma}\\label{lem:cont_control}\n Let $x^{\\varsigma,u}$ denote the solution to the controlled differential equation \\eqref{eq:controlled_ode} with initial condition $x_0\\in\\ensuremath{\\mathbb{R}}^n$ and control $u\\in L^\\infty([0,1],\\ensuremath{\\mathbb{R}}^n)$. 
Then, for any $w\\in\\ensuremath{\\mathcal{C}}_0([0,1],\\ensuremath{\\mathbb{R}}^n)$, we have the bound\n \\begin{equation*}\n |x^{\\varsigma,u}-\\tilde{x}|_\\infty\\leq e^{\\Lip{b}}\\left|\\int_0^\\cdot u(s)\\,ds-w\\right|_\\infty,\n \\end{equation*}\n where $\\tilde{x}$ is the unique solution to\n \\begin{equation*}\n \\tilde{x}(t)=x_0+\\int_0^t b\\big(\\tilde{x}(s)\\big)\\,ds+w(t)+\\varsigma(t),\\qquad t\\in[0,1].\n \\end{equation*}\n\\end{lemma}\n\n\n\n\\subsection{Exponential Stability of the Conditional Evolution}\nWe now turn to the conditional evolution of \\eqref{eq:general_flow-fixed-x} derived in \\cref{lem:conditioning}. For brevity, we drop the hat on the driving fBm throughout this and the next section. Remember that we have to study SDEs driven by a Riemann-Liouville process \n\\begin{equation*}\n\\tilde{B}_t\\ensuremath\\triangleq\\alpha_H\\int_0^{t} (t-u)^{H-\\frac12}\\,dW_u,\n\\end{equation*}\nwhere $(W_t)_{t\\geq 0}$ is a standard Wiener process. Recall from \\cref{def:flow} that, for $\\varsigma\\in\\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)$, $\\Psi_{s,t}(\\cdot, \\varsigma+\\sigma\\tilde B)$ denotes the solution flow to the equation \n\\begin{equation}\\label{eq:rl_sde}\n dX_t=b(X_t)\\,dt+d\\varsigma_t+\\sigma\\,d\\tilde{B}_t.\n\\end{equation}\nFor brevity, let us henceforth set $\\Psi_{s,t}^{\\varsigma}(\\cdot)\\ensuremath\\triangleq\\Psi_{s,t}(\\cdot,\\varsigma+\\sigma\\tilde B)$.\n\nWe first prove that---starting from any two initial points---the laws of the solutions converge to each other with an exponential rate. This however does not yet imply the convergence of $\\L\\big(\\Psi_t^{\\varsigma}(x)\\big)$ to the first marginal of the invariant measure $\\pi$ of the equation $dX_t=b(X_t)\\,dt+\\sigma\\,dB_t$ since, even if we choose $X_0\\sim\\pi$, we have $\\L\\big(\\Psi_t^{\\varsigma}(X_0)\\big)\\neq\\pi$ for $t>0$ in general. \n\nAs a preparation, we let $\\big(\\ensuremath{\\mathcal{C}}_0([0,1],\\ensuremath{\\mathbb{R}}^n),\\ensuremath{\\mathcal{H}}_H,\\mu_H\\big)$ denote the abstract Wiener space induced by the Gaussian process $(\\tilde B_t)_{t\\in[0,1]}$. Recall that the Cameron-Martin space is given by $\\ensuremath{\\mathcal{H}}_H=\\mathscr{K}_H(H_0^1)$, where\n\\begin{equation*}\n \\mathscr{K}_H f(t)\\ensuremath\\triangleq\\begin{cases}\n \\displaystyle\\alpha_H\\int_0^t (t-s)^{H-\\frac32}f(s)\\,ds, & H>\\frac12,\\\\ \n \\displaystyle\\alpha_H\\frac{d}{dt}\\int_0^t (t-s)^{H-\\frac12}f(s)\\,ds, & H<\\frac12,\n \\end{cases}\\qquad t\\in[0,1],\n\\end{equation*}\nand \n\\begin{equation*}\n H_0^1\\ensuremath\\triangleq\\left\\{f=\\int_0^\\cdot\\dot{f}(s)\\,ds:\\,\\dot{f}\\in L^2([0,1],\\ensuremath{\\mathbb{R}}^n)\\right\\}\n\\end{equation*}\nis the Cameron-Martin space of the standard Wiener process. The inner product on $\\ensuremath{\\mathcal{H}}_H$ is defined by $\\braket{\\mathscr{K}_H f,\\mathscr{K}_H g}_{\\ensuremath{\\mathcal{H}}_H}\\ensuremath\\triangleq\\braket{\\dot{f},\\dot{g}}_{L^2}$. \n\nWe shall make use of the following simple observation:\n\\begin{lemma}\\label{lem:cameron_martin_facts}\n Let $f:[0,1]\\to\\ensuremath{\\mathbb{R}}^n$ be piecewise linear with $f(0)=0$. 
Then, for each $H\\in(0,1)$, $f\\in\\ensuremath{\\mathcal{H}}_H$ and\n \\begin{equation}\\label{eq:cameron_martin_bound}\n \\|f\\|_{\\ensuremath{\\mathcal{H}}_H}\\lesssim|\\dot{f}|_\\infty \\big(1+\\big|\\ensuremath{\\mathcal{D}}_{\\dot{f}}\\big|\\big).\n \\end{equation}\n\\end{lemma}\n\\begin{proof}\n It follows from \\cite[Theorem 5]{Picard2011} (see also \\cite{Samko1993}) that the inverse of $\\ensuremath{\\mathscr{K}}_H$ exists on the set of Lipschitz functions and there is a numerical constant $\\varrho_H>0$ such that $\\ensuremath{\\mathscr{K}}_H^{-1}=\\varrho_H\\ensuremath{\\mathscr{K}}_{1-H}$. Notice also that we have $\\frac{d}{dt}\\ensuremath{\\mathscr{K}}_H^{-1}f=\\ensuremath{\\mathscr{K}}_H^{-1}\\dot{f}$.\n \n Let us first consider the case $H<\\frac12$. The bound \\eqref{eq:cameron_martin_bound} is an immediate consequence of \n \\begin{equation*}\n \\left|\\frac{d}{dt}\\ensuremath{\\mathscr{K}}_H^{-1} f(t)\\right|\\leq\\varrho_H\\int_0^t (t-s)^{-H-\\frac12}\\big|\\dot{f}(s)\\big|\\,ds\\lesssim|\\dot{f}|_\\infty \\qquad\\forall\\,t\\in[0,1].\n \\end{equation*}\n For $H>\\frac12$ we let $\\tau_1,\\dots,\\tau_k$ denote the jump points of $\\dot{f}$ in the interval $[0,t)$. Notice that\n \\begin{align*}\n \\left|\\frac{d}{dt}\\ensuremath{\\mathscr{K}}_H^{-1} f(t)\\right|&\\leq\\varrho_H\\left|\\frac{d}{dt}\\left(\\sum_{i=1}^{k-1}\\int_0^{\\tau_1}(t-s)^{\\frac12-H}\\dot{f}(s)\\,ds+\\cdots+\\int_{\\tau_k}^t (t-s)^{\\frac12-H}\\dot{f}(s)\\,ds\\right)\\right| \\\\\n &\\lesssim |\\dot{f}|_\\infty\\big(1+|\\ensuremath{\\mathcal{D}}_{\\dot{f}}|\\big)t^{\\frac12-H}.\n \\end{align*}\n Since $1-2H>-1$, we obtain\n \\begin{equation*}\n \\|f\\|_{\\ensuremath{\\mathcal{H}}_H}=\\left\\|\\frac{d}{dt}\\ensuremath{\\mathscr{K}}_H^{-1}f\\right\\|_{L^2}\\lesssim|\\dot{f}|_\\infty\\big(1+|\\ensuremath{\\mathcal{D}}_{\\dot{f}}|\\big),\n \\end{equation*}\n as required.\n \n\\end{proof}\n\n\nThe next important lemma lifts the control result of \\cref{prop:control} to solutions of SDEs with additive noise:\n\\begin{lemma}\\label{lem:probabilistic_control}\n Let $b\\in\\S(\\kappa, R,\\lambda)$ and $\\sigma\\in\\Lin{n}$ be invertible. Then, for any $\\bar R>0$ and any $\\eta\\in(0,\\frac12)$, there is constant $\\a_{\\eta,\\bar{R}}>0$ such that the following holds: For each $x\\in\\ensuremath{\\mathbb{R}}^n$ and each $\\varsigma\\in\\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)$, we can find an event $\\ensuremath{\\mathscr{A}}_{x,\\varsigma}$ with $\\ensuremath\\mathbb{P}(\\ensuremath{\\mathscr{A}}_{x,\\varsigma})\\geq\\a_{\\eta,\\bar{R}}$ such that\n \\begin{equation*}\n \\int_0^1 \\mathbf 1_{\\big\\{t: \\big|\\Psi_{t}^{\\varsigma}(x)(\\omega)\\big|>\\bar R\\big\\}}(s)\\,ds > \\eta \\qquad \\forall\\, \\omega \\in \\ensuremath{\\mathscr{A}}_{x,\\varsigma}.\n \\end{equation*}\n\\end{lemma}\n\n\\begin{proof}\n Let $u_{x,\\varsigma}\\in L^\\infty([0,1],\\ensuremath{\\mathbb{R}}^n)$ be the piecewise constant control furnished by \\cref{prop:control} such that the occupation time of $x^{\\varsigma,u_{x,\\varsigma}}$ of the set $\\ensuremath{\\mathbb{R}}^n\\setminus B_{\\bar{R}+1}$ is greater than $\\eta$. We set $U_{x,\\varsigma}\\ensuremath\\triangleq\\int_0^\\cdot u_{x,\\varsigma}(s)\\,ds$ and note that $U_{x,\\varsigma}$ is piecewise linear. 
\\Cref{lem:cont_control} allows us to choose an $\\varepsilon>0$ (independent of $x$ and $\\varsigma$) such that, on the event $\\ensuremath{\\mathscr{A}}_{x,\\varsigma}\\ensuremath\\triangleq\\big\\{\\big|U_{x,\\varsigma}-\\sigma\\tilde{B}\\big|_\\infty\\leq\\varepsilon\\big\\}$, the occupation time of $\\big(\\Psi^{\\varsigma}_{h}(x)\\big)_{h\\in[0,1]}$ of $\\ensuremath{\\mathbb{R}}^n\\setminus B_{\\bar R} $ exceeds $\\eta$. \n\n It remains to show that $\\inf_{x,\\varsigma}\\ensuremath\\mathbb{P}(\\ensuremath{\\mathscr{A}}_{x,\\varsigma})>0$. To this end, we first note that $U_{x,\\varsigma}\\in\\ensuremath{\\mathcal{H}}_H$ by \\cref{lem:cameron_martin_facts}. By the Cameron-Martin formula (see e.g. \\cite{Bogachev1998}), \n \\begin{align*}\n \\ensuremath\\mathbb{P}(\\ensuremath{\\mathscr{A}}_{x,\\varsigma})&\\geq\\ensuremath\\mathbb{P}\\big(\\big|\\sigma^{-1}U_{x,\\varsigma}-\\tilde{B}\\big|_\\infty\\leq|\\sigma|^{-1}\\varepsilon\\big)\\\\\n &=\\exp\\left(-\\frac12\\|\\sigma^{-1}U_{x,\\varsigma}\\|_{\\ensuremath{\\mathcal{H}}_H}^2\\right)\\int_{\\{|x|_\\infty\\leq|\\sigma|^{-1}\\varepsilon\\}}e^{\\braket{x,U_{x,\\varsigma}}_{\\ensuremath{\\mathcal{H}}_H}}\\,\\mu_H(dx).\n \\end{align*}\n Consequently, Jensen's inequality and spherical symmetry give\n \\begin{equation}\\label{eq:quant_lower_bound}\n \\ensuremath\\mathbb{P}(\\ensuremath{\\mathscr{A}}_{x,\\varsigma})\\geq\\exp\\left(-\\frac12\\|\\sigma^{-1}U_{x,\\varsigma}\\|_{\\ensuremath{\\mathcal{H}}_H}^2\\right)\\ensuremath\\mathbb{P}\\big(|\\tilde{B}|_\\infty\\leq|\\sigma|^{-1}\\varepsilon\\big).\n \\end{equation}\n Combining \\cref{prop:control,lem:cameron_martin_facts}, we obtain that $\\sup_{x,\\varsigma}\\|U_{x,\\varsigma}\\|_{\\ensuremath{\\mathcal{H}}_H}\\lesssim M(1+M)$. This concludes the proof.\n\\end{proof}\n\n\\begin{proposition}\\label{prop:conditional_initial_condition_wasserstein}\n Let $\\sigma\\in\\Lin{n}$ be invertible. Then, for any $\\kappa, R>0$ and any $p\\geq 1$, there exists a number $\\Lambda=\\Lambda(\\kappa,R,p)\\in(0,\\kappa)$ such that the following holds: If $b\\in\\S\\big(\\kappa,R, \\Lambda\\big)$, there are constants $c,C>0$ such that, for any $\\varsigma\\in\\ensuremath{\\mathcal{C}}_0([0,1],\\ensuremath{\\mathbb{R}}^n)$,\n \\begin{equation*}\n \\ensuremath{\\mathcal{W}}^p\\Big( \\L\\big( {\\Psi}^{\\varsigma}_t(Y)\\big), \\L\\big( {\\Psi}^{\\varsigma}_t(\\tilde Y)\\big)\\Big)\\leq C\\ensuremath{\\mathcal{W}}^p\\big(\\L(Y),\\L(\\tilde Y)\\big) e^{-c t}\n \\end{equation*}\n for all $t\\geq 0$.\n \\end{proposition}\n\n\\begin{proof} \nWrite $X_t\\ensuremath\\triangleq\\Psi^{\\varsigma}_{t}(Y)$ and $Z_t=\\Psi_{t}^{\\varsigma}(\\tilde Y)$. Let $\\mu_t\\ensuremath\\triangleq\\L(X_t)$ and $\\nu_t\\ensuremath\\triangleq\\L(Z_t)$, thus $(X_t,Z_t)$ is a synchronous coupling of $\\mu_t$ and $\\nu_t$. Our strategy for proving the exponential convergence of $t\\mapsto\\ensuremath{\\mathcal{W}}^p(\\mu_t,\\nu_t)$ is to show that, for any $t>0$, the evolution of $(X_s)_{s\\in[t,t+1]}$ conditional on ${\\mathcal F}_t$ spends a sufficient amount of time in the contractive region $\\{|x|>R\\}$. As noted in \\cref{example-1} \\ref{it:rough_decomposition}, there is an independent increment decomposition $(\\theta_t\\tilde{B})_{h}= Q^t_h+\\tilde{B}^t_h$ for the Riemann-Liouville process. 
Using this and the conditional evolution derived in \\cref{lem:conditioning_general}, we find\n \\begin{align}\n \\Expec{\\big|X_{t+1}-Z_{t+1}\\big|^p}& = \\Expec{\\Expec{\\big|\\Psi^{\\varsigma}_{t,t+1}(X_t)-\\Psi^{\\varsigma}_{t,t+1}(Z_t)\\big|^p\\,\\middle|\\,{\\mathcal F}_t}}\\nonumber\\\\\n &= \\Expec{\\Expec{\\Big|\\Psi_{1}\\big(X_t, \\theta_t\\varsigma+\\sigma\\theta_t\\tilde B\\big)-\\Psi_{1}\\big(Z_t, \\theta_t\\varsigma+\\sigma \\theta_t\\tilde B\\big)\\Big|^p\\,\\middle|\\,{\\mathcal F}_t}}\\nonumber\\\\\n &=\\Expec{\\Expec{\\Big|\\Psi_{1}\\big(X_t, \\theta_t\\varsigma+\\sigma Q^t+\\sigma\\tilde{B}^t\\big)-\\Psi_{1}\\big(Z_t, \\theta_t\\varsigma+\\sigma Q^t+\\sigma\\tilde{B}^t\\big)\\Big|^p\\,\\middle|\\,{\\mathcal F}_t}} \\nonumber\\\\\n &= \\Expec{ \\Expec{\\Big|\\Psi^{\\theta_t\\varsigma+\\ell}_{1} (x)-\\Psi_{1}^{\\theta_t\\varsigma+\\ell}(z)\\Big|^p}\\bigg|_{\\substackal{x&=X_t,z=Z_t,\\\\\\ell&=\\sigma Q^t}}},\\label{two-initial-conditions}\n \\end{align}\n where in the last step we also used that $(\\tilde{B}^t_{h})_{h\\geq 0}\\overset{d}{=}(\\tilde{B}_{h})_{h\\geq 0}$.\n\nBy assumption, the drift $b$ does not expand by more than a factor of $\\Lambda$ on all of $\\ensuremath{\\mathbb{R}}^n$. We therefore have the pathwise estimate\n\\begin{equation}\\label{eq:rl_lipschitz}\n\\big|\\Psi_{s,t}^{\\theta_t\\varsigma+\\ell}(x)-\\Psi_{s,t}^{\\theta_t\\varsigma+\\ell}(z)\\big|^p\\leq e^{p(t-s)\\Lambda} |x-z|^p\n\\end{equation}\nfor all $0\\leq s< t\\leq 1$. Let $\\eta\\in(0,\\frac12)$ and $\\bar\\kappa\\in(0,\\kappa)$ be such that $\\Xi\\ensuremath\\triangleq\\bar\\kappa\\eta-\\Lambda(1-\\eta)>0$ (recall that we assume $\\Lambda<\\kappa$). Let $\\bar R>R$ be the corresponding radius furnished by \\cref{lem:bigger_ball}. For any $x\\in\\ensuremath{\\mathbb{R}}^n$ and any $\\varsigma,\\ell\\in\\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)$, let $\\ensuremath{\\mathscr{A}}_{x,\\theta_t\\varsigma+\\ell}$ be the event from \\cref{lem:probabilistic_control}. 
Recall that $\\ensuremath\\mathbb{P}(\\ensuremath{\\mathscr{A}}_{x,\\theta_t\\varsigma+\\ell})\\geq\\a_{\\eta,\\bar{R}}>0$ and \n\\begin{equation*}\n \\int_0^1 \\mathbf 1_{\\big\\{s: \\big|\\Psi_{s}^{\\theta_t\\varsigma+\\ell}(x)(\\omega)\\big|>\\bar R\\big\\}}(r)\\,dr > \\eta \\qquad \\forall\\, \\omega \\in \\ensuremath{\\mathscr{A}}_{x,\\theta_t\\varsigma+\\ell}.\n\\end{equation*}\nSince $\\Xi>0$, by possibly decreasing $\\Lambda$ we can also ensure that \n\\begin{equation}\\label{eq:lambda}\n 0<\\Lambda < \\frac1p\\log\\left(\\ensuremath{\\frac} {1-\\a_{\\eta,\\bar{R}}e^{-p \\Xi} }{1-\\a_{\\eta,\\bar{R}}}\\right).\n\\end{equation}\nOwing to pathwise continuity of $h\\mapsto\\Psi_h^{\\theta_t\\varsigma+\\ell}(x)$, there are random times $t_1,\\dots,t_{2N(\\omega)}$ such that, for all $\\omega\\in \\ensuremath{\\mathscr{A}}_{x,\\theta_t\\varsigma+\\ell}$,\n\\begin{itemize}\n \\item $0\\leq t_1(\\omega)<\\cdots\\bar R\\big\\}$.\n\\end{itemize}\nTogether with \\eqref{eq:rl_lipschitz} it follows that, on the event $\\ensuremath{\\mathscr{A}}_{x,\\theta_t\\varsigma+\\ell}$,\n\\begin{align*}\n&\\phantom{\\leq}\\big|\\Psi_{1}^{\\theta_t\\varsigma+\\ell}(x)-\\Psi_{1}^{\\theta_t\\varsigma+\\ell}(z)\\big|^p\n= \\Big|\\Psi_{t_{2N},1}^{\\theta_t\\varsigma+\\ell}\\Big(\\Psi_{t_{2N}}^{\\theta_t\\varsigma+\\ell} (x)\\Big)-\\Psi_{t_{2N},1}^{\\theta_t\\varsigma+\\ell}\\Big(\\Psi_{t_{2N}}^{\\theta_t\\varsigma+\\ell} (z)\\Big)\\Big|^p\\\\\n&\\leq e^{p(1-t_{2N})\\Lambda} \\big|\\Psi_{t_{2N}}^{\\theta_t\\varsigma+\\ell}(x)-\\Psi_{t_{2N}}^{\\theta_t\\varsigma+\\ell}(z)\\big|^p\\\\\n&\\leq e^{p (1-t_{2N})\\Lambda} e^{-p (t_{2N}-t_{2N-1})\\bar\\kappa} \\big|\\Psi_{t_{2N-1}}^{\\theta_t\\varsigma+\\ell}(x)-\\Psi_{t_{2N-1}}^{\\theta_t\\varsigma+\\ell}(z)\\big|^p\\\\\n&\\leq\\cdots\\leq \\exp\\left[p\\left(\\Lambda\\sum_{i=0}^N(t_{2i+1}-t_{2i})-\\bar\\kappa\\sum_{i=0}^{N} (t_{2i}-t_{2i-1}) \\right)\\right] |x-z|^p\\\\\n&\\leq e^{ -p\\Xi}|x-z|^p,\n\\end{align*}\nwhere we have set $t_{2N+1}\\ensuremath\\triangleq 1$ for convenience. On the complementary event $\\Omega\\setminus \\ensuremath{\\mathscr{A}}_{x,\\theta_t\\varsigma+\\ell}$, we apply the trivial estimate \\eqref{eq:rl_lipschitz}. Inserting these bounds back into \\eqref{two-initial-conditions}, we conclude that\n\\begin{equation*}\n \\Expec{\\big|X_{t+1}-Z_{t+1}\\big|^p}\\leq\\Big(\\big(1-\\a_{\\eta,\\bar R}\\big)e^{p\\Lambda}+\\a_{\\eta,\\bar R}e^{-p\\Xi}\\Big)\\Expec{|X_t-Z_t|^p}\\ensuremath\\triangleq\\rho\\Expec{|X_t-Z_t|^p}.\n\\end{equation*}\nObserve that $\\rho<1$ by \\eqref{eq:lambda}. Finally, a straight-forward induction shows that\n\\begin{equation}\\label{eq:to_minimize}\n \\ensuremath{\\mathcal{W}}^p\\Big(\\L\\big( {\\Psi}^{\\varsigma}_t(Y)\\big), \\L\\big( {\\Psi}^{\\varsigma}_t(\\tilde Y)\\big)\\Big)\\leq\n \\big\\|X_t-Z_t\\big\\|_{L^p}\\leq e^{\\Lambda} \\rho^{[t]}\\big\\|Y-\\tilde Y\\big\\|_{L^p}\\leq\\frac{e^{\\Lambda}}{\\rho}e^{-|\\log\\rho|t}\\big\\|Y-\\tilde Y\\big\\|_{L^p},\n\\end{equation}\nwhere $[\\cdot]$ denotes the integer part. Minimize over the set of couplings of $\\L(Y)$ and $\\L(\\tilde{Y})$ to conclude the proof.\n\\end{proof}\n\nA more explicit expression for the threshold value $\\Lambda(\\kappa,R,p)$ can be derived by the method outlined in \\cref{rem:constant_xi} below. We abstain from including further details in this work. Let us however introduce the following notation:\n\\begin{definition}\n Let $\\kappa,R>0$ and $p\\geq 1$. 
We abbreviate $\\S_p(\\kappa,R)\\ensuremath\\triangleq\\S\\big(\\kappa,R,\\Lambda(\\kappa,R,p)\\big)$ with the constant from \\cref{prop:conditional_initial_condition_wasserstein}.\n\\end{definition}\n\nBy \\cref{lem:conditioning}, the Wasserstein bound of \\cref{prop:conditional_initial_condition_wasserstein} lifts to bounds on the fast motion with frozen slow input \\eqref{eq:general_flow-fixed-x}. We obtain the following Lipschitz dependence of the flow $\\bar \\Phi$ on the initial value:\n\n\\begin{corollary}\\label{cor:fast_different_initial}\n Let $({\\mathcal F}_t)_{t\\geq 0}$ be a filtration compatible with the fBm $\\hat{B}$. Let $0\\leq s\\leq t$ and let $X$, $Y$, and $\\tilde{Y}$ be ${\\mathcal F}_s$-measurable random variables. Suppose that there are $\\kappa,R>0$ such that $b(x,\\cdot)\\in\\S_1(\\kappa,R)$ for every $x\\in\\ensuremath{\\mathbb{R}}^d$. Then there is a constant $c>0$ such that, for any Lipschitz continuous function $h:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}$,\n \\begin{equation*}\n \\Big|\\Expec{h\\big(X,\\bar{\\Phi}_{s,t}^X(Y)\\big)-h\\big(X,\\bar{\\Phi}_{s,t}^X(\\tilde{Y})\\big)\\,\\middle|\\,{\\mathcal F}_s}\\Big|\\lesssim\\Lip{h}|Y-\\tilde Y|e^{-c\\frac{|t-s|}{\\varepsilon}}.\n \\end{equation*}\n If, in addition, $b(x,\\cdot)\\in\\S_p(\\kappa,R)$ for all $x\\in\\ensuremath{\\mathbb{R}}^d$, then also\n \\begin{equation*}\n \\Big\\|\\bar{\\Phi}_{s,t}^X(Y)-\\bar{\\Phi}_{s,t}^X(\\tilde{Y})\\Big\\|_{L^p}\\lesssim\\|Y-\\tilde Y\\|_{L^p}e^{-c\\frac{|t-s|}{\\varepsilon}}.\n \\end{equation*}\n\\end{corollary}\n\\begin{proof}\n The first estimate is an immediate consequence of \\cref{lem:conditioning} and Kantorovich-Rubinstein duality. The second bound follows from the fact that we used a synchronous coupling in the proof of \\cref{prop:conditional_initial_condition_wasserstein}.\n\\end{proof}\n\nThe proof of \\cref{prop:conditional_initial_condition_wasserstein} shows that its conclusion actually holds if $\\tilde B$ is replaced by another process $Z$ with similar properties:\n\\begin{remark}\\label{rem:Wasserstein-general}\nLet $Z$ be a process with locally independent increment decomposition $\\theta_t Z=\\bar Z^t+\\tilde Z^t$. Assume that\n\\begin{enumerate}\n\\item the ${\\mathcal F}_t$-adapted part $\\bar Z^t$ takes values in $\\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)$ and\n\\item\\label{it:cm_dense} there is a unit vector $e\\in\\ensuremath{\\mathbb{R}}^n$ such that, for each $t\\geq 0$, $\\L\\big((\\tilde Z^t_h\\cdot e)_{h\\in[0,1]}\\big)$ is supported on all of $\\ensuremath{\\mathcal{C}}_0([0,1])$.\n\\end{enumerate} \nThen a statement similar to \\cref{prop:conditional_initial_condition_wasserstein} holds.\n\\end{remark}\n\n\\begin{example}\\label{ex:titmarsh}\n Suppose that $\\tilde{Z}^t_h=\\int_t^{t+h}\\mathfrak{G}(t+h-s)\\,dW_s$ for some kernel $\\mathfrak{G}:\\ensuremath{\\mathbb{R}}_+\\to\\Lin{n}$ which is square integrable at the origin and continuous on $(0,\\infty)$. Then the requirement \\ref{it:cm_dense} in \\cref{rem:Wasserstein-general} holds if $\\int_0^t|\\mathfrak{G}(s)|\\,ds>0$ for each $t>0$. Indeed, this can be shown by a clever application of Titmarsh's convolution theorem as in \\cite[Lemma 2.1]{Cherny2008}.\n\\end{example}\n\nThe example shows that in particular an fBm of any Hurst parameter $H\\in(0,1)$ falls in the regime of \\cref{rem:Wasserstein-general}. 
Hence, we have the following corollary to \\cref{prop:conditional_initial_condition_wasserstein}:\n\\begin{corollary}\\label{conveergence-equilibrium}\n Let $p\\geq 1$ and suppose that $b\\in\\S_p(\\kappa,R)$ for some $\\kappa,R>0$. Let $(X_t)_{t\\geq 0}$ be the solution to \n \\begin{equation}\\label{eq:fbm_sde}\n dX_t=b(X_t)\\,dt+\\sigma\\,dB_t\n \\end{equation}\n started in the generalized initial condition $\\mu$, where $(B_t)_{t\\geq 0}$ is an fBm with Hurst parameter $H\\in(0,1)$ and $\\sigma\\in\\Lin{n}$ is invertible. Then there is a unique invariant measure $\\mathcal I_\\pi\\in\\P(\\ensuremath{\\mathbb{R}}^n\\times\\H_H)$ for the equation \\eqref{eq:fbm_sde} in the sense of \\cref{initial-condition}. Moreover, writing $\\pi=\\Pi_{\\ensuremath{\\mathbb{R}}^n}^*\\mathcal I_\\pi$ for the first marginal, there are constants $c,C>0$ such that\n \\begin{equation}\\label{eq:wasserstein_fbm}\n \\ensuremath{\\mathcal{W}}^p\\big(\\L( X_t),\\pi\\big)\\leq C\\ensuremath{\\mathbb{W}}^p(\\mu,\\mathcal I_\\pi) e^{-ct}\n \\end{equation}\n for all $t\\geq 0$.\n \\end{corollary}\n\\begin{proof}\n By \\cref{prop:existence_invariant_measure}, we know that there is an invariant measure $\\mathcal I_\\pi$ to \\eqref{eq:fbm_sde} with moments of all orders. The Wasserstein estimate \\eqref{eq:wasserstein_fbm} then follows by the very same arguments as in \\cref{prop:conditional_initial_condition_wasserstein}. The only difference is that we now have to specify a generalized initial condition $\\nu\\in\\P\\big((\\ensuremath{\\mathbb{R}}^n\\times\\H_H)^2\\big)$ for the coupling $(X_t,Z_t)$, see \\cref{sec:physical_solution}. Unlike for the conditioned dynamics, we have $Z_t\\sim\\pi$ if we start $Z$ in the invariant measure $\\mathcal I_\\pi$. In order for our previous argument to apply, we need to ensure that the past of the noises in the synchronous coupling coincide. In \\eqref{eq:to_minimize} we can thus only minimize over couplings in the set \n \\begin{equation*}\n \\big\\{\\rho\\in\\P\\big((\\ensuremath{\\mathbb{R}}^n\\times\\H_H)^2\\big):\\,\\rho(\\ensuremath{\\mathbb{R}}^n\\times\\ensuremath{\\mathbb{R}}^n\\times\\Delta_{\\H_H})=1\\big\\},\n \\end{equation*}\n which precisely yields \\eqref{eq:wasserstein_fbm}.\n\\end{proof}\n \n\n\\subsection{Quenched Convergence to the Invariant Measure}\\label{quenched-convergence}\n\nThe other distance, which will play a r\\^ole in \\cref{sec:uniform_bounds} below, is between $\\L\\big(\\Psi_t^\\varsigma(Y)\\big)$ and the stationary law $\\pi$ of the equation \\eqref{eq:fbm_sde}. We stress that---contrarily to the proof of \\cref{conveergence-equilibrium}---we cannot simply start the process in the invariant measure. In fact, the measure $\\pi$ is not stationary for \\eqref{eq:rl_sde} since the increments of $\\tilde{B}$ are not stationary. It is therefore necessary to wait for a sufficient decay of the deterministic `adversary' $\\varsigma$, whence we only find an algebraic rate of convergence. Before we state the result, let us first illustrate that there is indeed no hope for an exponential rate:\n\\begin{example}\n Let\n \\begin{equation*}\n dX_t=-X_t\\,dt+d\\tilde{B}_t,\\qquad dY_t=-Y_t\\,dt+dB_t. 
\n \\end{equation*}\n If we start both $X$ and $Y$ in the generalized initial condition $\\delta_0\\otimes\\ensuremath{\\mathsf{W}}$, then $\\L(X_t)=N(0,\\Sigma_{t}^2)$ and $\\L(Y_t)=N(0,\\bar\\Sigma_{t}^2)$ where\n \\begin{equation*}\n \\Sigma_t^2=\\bar\\Sigma_t^2-\\Expec{\\left|\\int_0^te^{-(t-s)}\\dot{\\bar{B}}_s\\,ds\\right|^2}.\n \\end{equation*}\n In particular, $\\ensuremath{\\mathcal{W}}^2\\big(\\L(X_t),\\L(Y_t)\\big)=|\\Sigma_{t}-\\bar\\Sigma_t|\\gtrsim t^{-(1-\\hat{H})}$ uniformly in $t\\geq 1$. Since it is easy to see that $\\ensuremath{\\mathcal{W}}^2\\big(\\L(Y_t),\\pi\\big)\\lesssim e^{-t}$, it follows that $\\ensuremath{\\mathcal{W}}^2\\big(\\L(X_t),\\pi\\big)\\gtrsim t^{-(1-\\hat{H})}$.\n\\end{example}\n\n\\begin{proposition}\\label{prop:conditional_stationary_wasserstein}\n Suppose that $b\\in\\S_p(\\kappa,R)$ for some $\\kappa,R>0$ and $\\sigma\\in\\Lin{n}$ is invertible. Let $p\\geq 1$, $\\varsigma\\in\\Omega_\\alpha$ for some $\\alpha>0$, and $Y$ be an ${\\mathcal F}_0$-measurable random variable. Then, for each $\\beta<\\min\\big(\\alpha,1-H\\big)$, there is a constant $C>0$ such that\n \\begin{equation}\\label{eq:wasserstein_quenched}\n \\ensuremath{\\mathcal{W}}^p\\big(\\L(\\Psi^\\varsigma_t(Y)), \\pi\\big)\\leq C\\frac{\\big(1+\\|\\varsigma\\|_{\\Omega_\\beta}\\big)\\big(1+\\ensuremath{\\mathcal{W}}^p(\\L(Y),\\pi)\\big)}{t^{\\beta}}\n \\end{equation}\n for all $t>0$.\n\\end{proposition}\n\n\\begin{proof}\n Fix $t\\geq 1$, abbreviate $X\\ensuremath\\triangleq\\Psi_\\cdot^{\\varsigma}(Y)$, and let $Z$ be the stationary solution to the equation \\eqref{eq:fbm_sde}. We assume that $X$ and $Z$ are driven by the same Wiener process. Let us first consider the case $p\\geq 2$. Recall the following locally independent decompositions from \\cref{sec:increment}:\n $$\\theta_t B=\\bar B^t+\\tilde B^t, \\qquad \\theta_t \\tilde B=Q^t+\\tilde B^t.$$\n Remember also that the `smooth' part of the fBm increment can be further decomposed as $\\bar{B}^t=P^t+Q^t$, see \\cref{example-1} \\ref{it:smooth_decomposition}. Therefore, \n \\begin{align}\n \\Expec{\\big|X_{t+1}-Z_{t+1}\\big|^p}& = \\Expec{\\Expec{\\big|\\Psi_{t,t+1}(X_t,\\varsigma+ \\sigma\\tilde B)-\\Psi_{t,t+1}(Z_t,\\sigma B)\\big|^p\\,\\middle|\\,{\\mathcal F}_t}}\\nonumber\\\\\n &=\\Expec{\\Expec{\\Big|\\Psi_{1}\\big(X_t, \\theta_t\\varsigma+\\sigma Q^t+\\sigma \\tilde B^t\\big)-\\Psi_{1}\\big(Z_t,\\sigma P^t+\\sigma Q^t+\\sigma\\tilde B^t\\big)\\Big|^p\\,\\middle|\\,{\\mathcal F}_t}}\\nonumber\\\\\n &=\\Expec{\\Big| \\Psi_1^{\\theta_t \\varsigma+ \\ell} (x) - \\Psi_1^{\\bar\\ell+\\ell} (z) \\Big|^p\\bigg|_{\\substackal{x&=X_t,z=Z_t,\\\\\\ell&=\\sigma Q^t,\\bar{\\ell}=\\sigma P^t}}}\\label{eq:expec_diff}\n \\end{align}\n Write $R_h\\ensuremath\\triangleq\\Psi_h^{\\theta_t \\varsigma+\\ell} (x)$ and $S_h\\ensuremath\\triangleq \\Psi_h^{\\bar\\ell+\\ell} (z) $. Notice that, since $\\varsigma$ and $\\bar\\ell$ are differentiable,\n \\begin{align*}\n \\frac{d}{dh}\\big|R_{h}-S_{h}\\big|^p&=p\\Braket{\\dot{\\varsigma}_{t+h}-\\dot{\\bar{\\ell}}_h+b\\big(R_{h}\\big)-b\\big(S_{h}\\big),R_{h}-S_{h}}\\big|R_{h}-S_{h}\\big|^{p-2}\\\\\n &\\leq p(\\Lambda+\\gamma)\\big|R_{h}-S_{h}\\big|^p\n +\\left(\\frac{p-1}{\\gamma p}\\right)^{p-1}\\left(|\\dot{\\varsigma}_{t+h}|+|\\dot{\\bar{\\ell}}_{h}|\\right)^p\n \\end{align*}\n for any $\\gamma>0$, where $\\Lambda=\\Lambda(\\kappa,R,p)$ is the expansion threshold derived in \\cref{prop:conditional_initial_condition_wasserstein}. 
It follows that, for any $0\\leq h_1\\leq h_2\\leq 1$,\n \\begin{align}\n &\\phantom{\\leq}\\big|R_{h_2}-S_{h_2}\\big|^p\\nonumber\\\\\n &\\leq\\big|R_{h_1}-S_{h_1}\\big|^p e^{p(\\Lambda+\\gamma)(h_2-h_1)}+\\left(\\frac{p-1}{\\gamma p}\\right)^{p-1}\\int_{h_1}^{h_2}e^{p(\\Lambda+\\gamma)(h_2-s)}\\left(|\\dot\\varsigma_{t+s}|+|\\dot{\\bar{\\ell}}_{s}|\\right)^p\\,ds\\nonumber\\\\\n &\\leq \\big|R_{h_1}-S_{h_1}\\big|^p e^{p(\\Lambda+\\gamma)(h_2-h_1)}+C_\\gamma(h_2-h_1),\\label{eq:estimate_waserstein_1}\n \\end{align}\n where we abbreviated\n \\begin{equation*}\n C_\\gamma\\ensuremath\\triangleq \\left(\\frac{p-1}{\\gamma p}\\right)^{p-1}\\left(\\frac{\\|\\varsigma\\|_{\\Omega_\\beta}}{t^{\\beta}}+|\\dot{\\bar{\\ell}}|_\\infty\\right)^p.\n \\end{equation*}\n We now argue similarly to \\cref{prop:conditional_initial_condition_wasserstein}: Pick $\\eta\\in(0,\\frac12)$ and $\\bar{\\kappa}\\in(0,\\kappa)$ such that $\\Xi\\ensuremath\\triangleq \\eta\\bar{\\kappa}-(1-\\eta)\\Lambda>0$. Let $\\bar{R}>0$ be the corresponding constant of \\cref{lem:bigger_ball} and $\\ensuremath{\\mathscr{A}}_{x,\\theta_t\\varsigma+\\ell}$ be the event furnished by \\cref{lem:probabilistic_control}. As before, we write $t_1,\\dots,t_{2N(\\omega)}$ for the random times characterizing the excursions of $(R_h)_{h\\in[0,1]}$ outside of $B_{\\bar R}$, see \\cref{prop:conditional_initial_condition_wasserstein}. By an argument similar to \\eqref{eq:estimate_waserstein_1}, \n \\begin{equation}\\label{eq:estimate_waserstein_2}\n \\big|R_{t_{2i}}-S_{t_{2i}}\\big|^p\\leq \\big|R_{t_{2i-1}}-S_{t_{2i-1}}\\big|^p e^{p(\\gamma-\\bar{\\kappa})(t_{2i}-t_{2i-1})}+C_\\gamma (t_{2i}-t_{2i-1})\n \\end{equation}\n for all $i=1,\\dots,N(\\omega)$ on the set $\\ensuremath{\\mathscr{A}}_{x,\\theta_t\\varsigma+\\ell}$. Combining \\eqref{eq:estimate_waserstein_1} and \\eqref{eq:estimate_waserstein_2}, we further find on this set\n \\begin{align*}\n \\phantom{\\leq}\\big|R_{1}-S_{1}\\big|^p&\\leq e^{p(\\Lambda+\\gamma)(1-t_{2k})}\\big|R_{t_{2k}}-S_{t_{2k}}\\big|^p+C_\\gamma(1-t_{2k})\\\\\n &\\leq e^{p(\\Lambda+\\gamma)(1-t_{2k})}e^{p(\\gamma-\\bar{\\kappa})(t_{2k}-t_{2k-1})}\\big|R_{t_{2k-1}}-S_{t_{2k-1}}\\big|^p+C_\\gamma(1-t_{2k-1})\\\\\n &\\leq\\cdots \\leq e^{p(\\Lambda+\\gamma)(1-\\eta) +p(\\gamma-\\bar \\kappa)\\eta}|x-z|^p+C_\\gamma \n \\leq e^{-p\\big(\\Xi-\\gamma\\big)}|x-z|^p+C_\\gamma \n \\end{align*}\n Choose $\\gamma>0$ sufficiently small such that simultaneously $\\Xi-\\gamma>0$ and \n \\begin{equation*}\n \\rho\\ensuremath\\triangleq \\big(1-\\a_{\\eta,\\bar{R}}\\big)e^{p(\\Lambda+\\gamma)}+\\a_{\\eta,\\bar{R}}e^{-p(\\Xi-\\gamma)}<1.\n \\end{equation*}\n This shows that\n \\begin{equation}\\label{eq:estimate_p_bigger}\n \\Expec{\\big|R_{1}-S_{1}\\big|^p}\\leq\\rho|x-y|^p+C_\\gamma.\n \\end{equation}\n It is clear that the estimate \\eqref{eq:estimate_p_bigger} also holds for $p<2$ with the constant\n \\begin{equation*}\n C_\\gamma=\\frac{1}{(2\\gamma)^{\\frac{p}{2}}}\\left(\\frac{\\|\\varsigma\\|_{\\Omega_\\beta}}{t^{\\beta}}+|\\dot{\\bar{\\ell}}|_\\infty\\right)^p\n \\end{equation*}\n and a slightly increased $\\rho<1$. Since $P^t=\\bar{B}_{t+\\cdot}$, \\cref{lem:smooth_part_decay} and the identity \\eqref{eq:expec_diff} show that\n \\begin{equation*}\n \\Expec{\\big|X_{t+1}-Y_{t+1}\\big|^p}\\leq\\rho\\Expec{\\big|X_{t}-Y_{t}\\big|^p}+\\frac{C\\big(1+\\|\\varsigma\\|_{\\Omega_\\beta}^p\\big)}{t^{p\\beta}}\n \\end{equation*}\n for some numerical constant $C>0$ independent of $t$ and $\\varsigma$. 
Therefore, iterating this bound, we find\n \begin{equation}\label{eq:quenched_wasserstein}\n \Expec{\big|X_{t}-Z_{t}\big|^p}\lesssim e^{-ct}\Expec{|X_0-Z_0|^p}+C\big(1+\|\varsigma\|_{\Omega_\beta}^p\big)\sum_{i=0}^{[t]-2}\frac{\rho^i}{(t-1-i)^{p\beta}}.\n \end{equation}\n The last sum is easily seen to be $\lesssim t^{-p\beta}$ uniformly in $t\geq 2$ and the claim follows at once.\n\end{proof}\n\nBy a strategy inspired by \cite[Section 7]{Panloup2020} (see also \cite{Hairer2005}), we can lift \cref{prop:conditional_stationary_wasserstein} to a total variation bound. Since the exposition of Panloup and Richard does not immediately transfer to the problem at hand, we choose to include the necessary details. Consider the system\n\begin{equation}\label{eq:girsanov_coupling}\n \begin{aligned}[c]\n dX_s&=b(X_s)\, ds +d\varsigma_s+\sigma d\tilde{B}_s,\\\n dZ_s&=b(Z_s)\, ds +\sigma\,dB_s+\sigma\varphi^t(s)\,ds,\n \end{aligned}\n\end{equation}\nwhere $X_0$ is an arbitrary initial condition and $Z$ is the stationary solution of \eqref{eq:fbm_sde}. Our aim is to exhibit an adapted integrable function $\varphi^t:[0,t+1]\to\ensuremath{\mathbb{R}}^n$ which vanishes on $[0,t]$ and ensures that $X_{t+1}=Z_{t+1}$. To this end, we define\n\begin{equation}\label{eq:coupling_function}\n \varphi^t(s)\ensuremath\triangleq\left\{\n \begin{array}{ll}\n \left(2\frac{|X_t-Z_t|^{\frac12}}{|X_s-Z_s|^{\frac12}}+\lambda\right)\sigma^{-1}(X_s-Z_s)-\dot{\bar{B}}_s+\sigma^{-1}\dot{\varsigma}_s, \quad \quad &s\in [t,t+1],\\\n 0, \qquad \qquad &\hbox{ otherwise.}\n \end{array}\right.\n\end{equation}\n\begin{lemma}\label{lem:girsanov}\n Let $t\geq 1$, $\varsigma\in\Omega_\alpha$, $b\in\S(\kappa,R,\lambda)$, \n and consider the system \eqref{eq:girsanov_coupling} with $\varphi^t$ defined in \eqref{eq:coupling_function}. Then $X_{t+1}=Z_{t+1}$ and, for any $\beta<\alpha\wedge(1-H)$,\n\begin{equation}\label{eq:phi_norm}\n |\varphi^t|_\infty\lesssim |X_t-Z_t|+\frac{\|\varsigma\|_{\Omega_\beta}+\|\bar{B}\|_{\Omega_\beta}}{t^{\beta}},\qquad |\dot{\varphi}^t|_\infty\lesssim |X_t-Z_t|^{\frac12}+|X_t-Z_t|+\frac{\|\varsigma\|_{\Omega_\beta}+\|\bar{B}\|_{\Omega_\beta}}{t^{1+\beta}},\n\end{equation}\nwhere the derivative of $\varphi^t$ is understood as right- and left-sided derivative at the boundaries $t$ and $t+1$, respectively.\n\end{lemma}\n\begin{proof}\n The argument is a minor modification of \cite[Lemma 5.8]{Hairer2005}: Abbreviate $f(s)\ensuremath\triangleq|X_s-Z_s|^2$, then\n \begin{equation*}\n f^\prime(s)=2\braket{b(X_s)-b(Z_s)+\dot{\varsigma}_s-\sigma\dot{\bar{B}}_s-\sigma\varphi^t(s),X_s-Z_s}\leq -4|X_t-Z_t|^{\frac12}f(s)^{\frac34}\n \end{equation*}\n since $b\in\S(\kappa,R,\lambda)$. It follows that\n \begin{equation*}\n |X_s-Z_s|^{\frac12}\leq |X_t-Z_t|^{\frac12}-(s-t)|X_t-Z_t|^{\frac12}\qquad\forall\, s\in[t,t+1],\n \end{equation*}\n whence $X_{t+1}=Z_{t+1}$. 
This also implies\n \\begin{equation*}\n \\left|\\frac{d}{ds}\\big(X_s-Z_s\\big)\\right|\\leq\\big(\\Lip{b}+2+\\lambda\\big)|X_t-Z_t|^{\\frac12}|X_s-Z_s|^{\\frac12}\n \\end{equation*}\n and consequently\n \\begin{equation*}\n \\left|\\frac{d}{ds}\\left(\\frac{X_s-Z_s}{|X_s-Z_s|^{\\frac12}}\\right)\\right|\\leq\\frac32\\frac{\\left|\\frac{d}{ds}\\big(X_s-Z_s\\big)\\right|}{|X_s-Z_s|^{\\frac12}}\\lesssim|X_t-Z_t|^{\\frac12}.\n \\end{equation*}\n The bounds \\eqref{eq:phi_norm} follow at once.\n\\end{proof}\n\\begin{remark}\n We stress that the bound on $|\\dot{\\varphi}^t|_\\infty$ only holds for a Lipschitz continuous drift $b$.\n\\end{remark}\n\nIt is now easy to prove the following result:\n\\begin{proposition}\\label{prop:conditional_stationary_total}\n Assume the conditions of \\cref{prop:conditional_stationary_wasserstein} for $p=1$. Then, for any $\\beta<\\alpha\\wedge(1-H)$, it holds that\n \\begin{equation*}\n \\TV{\\L\\big(\\Psi^\\varsigma_t(Y)\\big)-\\pi}\\lesssim {t^{-\\frac{\\beta}{3}}\\big(1+\\|\\varsigma\\|_{\\Omega_\\beta}\\big)\\big(1+\\ensuremath{\\mathcal{W}}^1(\\L(Y),\\pi)\\big)} \\qquad \\forall\\, t>0.\n \\end{equation*}\n\\end{proposition}\n\n\\begin{proof}\n Let $B$ and $B^\\prime$ be $H$-fBms built from underlying two-sided Wiener processes $W$ and $W^\\prime$, see \\eqref{eq:mandelbrot}. Recall that $\\tilde{B}$ is the Riemann-Liouville process associated with $B$. Let $X$ and $Z$ solve \n \\begin{align}\\label{eq:proof_tv_equations}\n \\begin{split}\n dX_s&=b(X_s)\\,ds+d\\varsigma_s+\\sigma d\\tilde{B}_s,\\\\\n dZ_s&=b(Z_s)\\,ds+\\sigma\\,dB^\\prime_s,\n \\end{split}\n \\end{align}\n where $X_0\\overset{d}{=}Y$ and $Z$ is the stationary solution. \n Fix $t>1$.\n We shall use the bound\n \\begin{align}\n \\TV{\\L\\big(\\Psi^\\varsigma_{t+1}(Y)\\big)-\\pi}&=\\inf_{(\\tilde B,B^\\prime)}\\ensuremath\\mathbb{P}\\big(X_{t+1}\\neq Z_{t+1}\\big)\\leq\\inf_{(W,W^\\prime)}\\ensuremath\\mathbb{P}\\big(X_{t+1}\\neq Z_{t+1}\\big) \n \\nonumber\\\\\n &\\leq\\inf_{(W,W^\\prime)}\\ensuremath\\mathbb{P}\\big(X_{t+1}\\neq Z_{t+1},|X_t-Z_t|\\leq\\delta\\big)+ \\inf_{(W,W^\\prime)}\\ensuremath\\mathbb{P}\\big(|X_t-Z_t|>\\delta\\big).\\label{eq:tv_proof}\n \\end{align}\n Taking $W$ and $W'$ equal, we are in the setting of \\cref{prop:conditional_stationary_wasserstein}. The estimate \\eqref{eq:quenched_wasserstein} thus shows that, for any $\\delta\\in(0,1]$,\n\\begin{equation}\n\\label{eq:tv_proof_2}\n\\inf_{(W,W^\\prime)}\\ensuremath\\mathbb{P}\\big(|X_t-Z_t|>\\delta)\\nonumber \\leq \\frac{C\\big(1+\\|\\varsigma\\|_{\\Omega_\\beta}\\big)\\big(1+\\ensuremath{\\mathcal{W}}^1(\\L(Y),\\pi)\\big)}{\\delta t^\\beta}.\n\\end{equation}\n To bound the first term in \\eqref{eq:tv_proof} we exploit the fact that $X_t$ and $Z_t$ are already close so that we can couple them at time $t+1$ with a controlled cost. 
\n Let $\\varphi^t$ be the function from \\cref{lem:girsanov}; in particular $\\varphi^t(s)=0$ for $s\\frac12.\n \\end{cases}\n \\end{equation*}\n In either case, \\eqref{eq:phi_norm} yields\n \\begin{equation*}\n \\int_t^{t+1}|\\psi^t(s)|^2\\,ds\\lesssim \\delta+\\frac{\\|\\varsigma\\|^2_{\\Omega_\\beta}+\\|\\bar{B}\\|^2_{\\Omega_\\beta}}{t^{2\\beta}}\n \\end{equation*}\n on the event $\\{|X_t-Z_t|\\leq \\delta\\}$ and therefore\n \\begin{align*}\n &\\phantom{\\leq}\\inf_{(W,W^\\prime)}\\ensuremath\\mathbb{P}\\big(X_{t+1}\\neq Z_{t+1},|X_t-Z_t|\\leq\\delta\\big) \n \\lesssim\\sqrt{\\delta}+\\frac{\\|\\varsigma\\|_{\\Omega_\\beta}+\\big\\|\\|\\bar{B}\\|_{\\Omega_\\beta}\\big\\|_{L^2}}{t^\\beta}.\n \\end{align*}\nCombining this with \\eqref{eq:tv_proof} and \\cref{lem:smooth_part_decay}, we have proven\n \\begin{equation}\\label{eq:tv_final}\n \\TV{\\L\\big(\\Psi_{t+1}^\\varsigma(Y)\\big)-\\pi}\\lesssim \\big(1+\\|\\varsigma\\|_{\\Omega_\\beta}\\big)\\big(1+\\ensuremath{\\mathcal{W}}^1(\\L(Y),\\pi)\\big)\\left(\\sqrt{\\delta}+\\frac{1}{\\delta t^\\beta}\\right),\n \\end{equation}\n which is minimized for $\\delta=t^{-\\frac{2\\beta}{3}}$.\n\\end{proof}\n\n\nBy duality and \\cref{lem:conditioning}, we obtain the following ergodic theorem as a corollary to \\cref{prop:conditional_stationary_wasserstein,prop:conditional_stationary_total}. It provides the fundamental estimates for our proof of the averaging principle for the fractional slow-fast system with feedback dynamics.\n\\begin{corollary}\\label{cor:total_variation_conditional}\n Let $0\\leq s\\leq t$ and let $X,Y$ be ${\\mathcal F}_s$-measurable random variables. Suppose that there are $\\kappa,R>0$ such that $b(x,\\cdot)\\in\\S_1(\\kappa,R)$ for every $x\\in\\ensuremath{\\mathbb{R}}^d$. Then, for any $\\zeta<1-\\hat{H}$ and\n \\begin{enumerate}\n \\item\\label{it:ergodicity_wasserstein} any Lipschitz function $h:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}$, \n \\begin{equation*}\n \\Big|\\Expec{h\\big(X,\\bar{\\Phi}_{s,t}^X(Y)\\big)-\\bar{h}(X)\\,|\\,{\\mathcal F}_s}\\Big|\\lesssim\\Lip{h}\\Big(1+\\big\\|\\varepsilon^{-\\hat{H}}\\bar{\\hat{B}}_{\\varepsilon\\cdot}^s\\big\\|_{\\Omega_\\zeta}\\Big)\\big(1+|Y|\\big)\\left(1\\wedge\\frac{\\varepsilon^{\\zeta}}{|t-s|^{\\zeta}}\\right).\n \\end{equation*}\n \\item\\label{it:ergodicity_tv} any bounded measurable function $h:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}$,\n \\begin{equation*}\n \\Big|\\Expec{h\\big(X,\\bar{\\Phi}_{s,t}^X(Y)\\big)-\\bar{h}(X)\\,|\\,{\\mathcal F}_s}\\Big|\\lesssim |h|_\\infty\\Big(1+\\big\\|\\varepsilon^{-\\hat{H}}\\bar{\\hat{B}}_{\\varepsilon\\cdot}^s\\big\\|_{\\Omega_\\zeta}\\Big)\\big(1+|Y|\\big)\\left(1\\wedge\\frac{\\varepsilon^{\\frac{\\zeta}{3}}}{|t-s|^{\\frac{\\zeta}{3}}}\\right).\n \\end{equation*}\n \\end{enumerate}\n Here, as usual, $\\bar{h}(x)=\\int_{\\ensuremath{\\mathbb{R}}^n} h(x,y)\\,\\pi^x(dy)$.\n\\end{corollary}\n\n\n\n\n\\subsection{Geometric Ergodicity for SDEs Driven by Fractional Brownian Motion}\\label{sec:geometric_ergodicity}\n\nApplying the arguments of \\cref{prop:conditional_initial_condition_wasserstein,prop:conditional_stationary_total} to the equation \\begin{equation}\\label{eq:sde}\n dY_t=b(Y_t)\\,dt+\\sigma\\,dB_t, \n\\end{equation} \nwe obtain an exponential rate of convergence improving the known results:\n\\begin{proof}[Proof of \\cref{thm:geometric}]\n In \\cref{conveergence-equilibrium} we have already proven the Wasserstein decay 
\eqref{eq:wasserstein_time_t}:\n \begin{equation*}\n \ensuremath{\mathcal{W}}^p(\mathcal{L}(Y_t),\pi)\leq Ce^{-ct}\ensuremath{\mathbb{W}}^p\big(\mu,\pi\big), \qquad \forall\, t\geq 0.\n \end{equation*}\n The total variation rate \eqref{eq:tv_process} then follows by a similar Girsanov coupling as in the proof of \cref{prop:conditional_stationary_total}. In fact, we now consider\n \begin{align*}\n dX_s&=b(X_s)\,ds+\sigma dB_s,\\\n dZ_s&=b(Z_s)\,ds+\sigma\,dB_s +\sigma\varphi^t(s)\,ds,\n \end{align*}\n where $X$ is started in the generalized initial condition $\mu$ and $Z$ is the stationary solution. Let us define\n \begin{equation*}\n \varphi^t(s)\ensuremath\triangleq \left(\frac{4|X_t-Z_t|^{\frac12}}{|X_s-Z_s|^{\frac12}}+\lambda\right)\sigma^{-1}(X_s-Z_s)\mathbf 1_{[t,t+1]}(s).\n \end{equation*}\n It can then be checked similarly to \cref{lem:girsanov} that $X_{t+1}=Z_{t+1}$ and\n \begin{equation*}\n |\varphi^t|_\infty\lesssim |X_t-Z_t|,\qquad |\dot{\varphi}^t|_\infty\lesssim |X_t-Z_t|^{\frac12}+|X_t-Z_t|.\n \end{equation*}\n Consequently, the estimate \eqref{eq:tv_final} becomes\n \begin{equation}\label{eq:geometric_time_t}\n \TV{\L(Y_{t+1})-\pi}\lesssim\ensuremath{\mathbb{W}}^1\big(\mu,\pi\big)\left(\sqrt{\delta}+\frac{e^{-ct}}{\delta}\right)\n \end{equation}\n and choosing $\delta= e^{-\frac{ct}{2}}$ shows a geometric decay of the total variation distance at a fixed time. To get the asserted decay on the path space \eqref{eq:tv_process}, we observe that, by the very same argument as in \cite[Proposition 7.2 (iii)]{Panloup2020}, $\varphi^t$ actually induces a coupling on the path space with a similar cost. Hence, $\TV{\L(Y_{t+\cdot})-\ensuremath\mathbb{P}_\pi}$ is still bounded by a quantity proportional to the right-hand side of \eqref{eq:geometric_time_t} and \eqref{eq:tv_process} follows at once. \n\end{proof}\n\begin{remark}\label{rem:constant_xi}\n The admissible repulsivity strength $\Lambda(\kappa,R,p)$ obtained in the proof of \cref{thm:geometric} is certainly not optimal. We therefore abstain from deriving a quantitative upper bound. Let us however indicate one way to obtain such an estimate: Start from \eqref{eq:quant_lower_bound} in the proof of \cref{lem:probabilistic_control} and recall a standard result (see e.g. \cite[Theorem D.4]{Piterbarg2012}) saying that\n \begin{equation*}\n \ensuremath\mathbb{P}\big(|\tilde{B}|_\infty\leq|\sigma|^{-1}\varepsilon\big)\geq 1-K\big(|\sigma|^{-1}\varepsilon\big)^{\frac{1}{H}}e^{-H(|\sigma|^{-1}\varepsilon)^2}\n \end{equation*}\n for a known numerical constant $K>0$. Finally optimize over all constants involved.\n\end{remark}\n\nLet us finally sketch the main differences for a more general Gaussian driving noise $G$ in equation \eqref{eq:sde}. We assume that $G$ has continuous sample paths and a moving average representation similar to \eqref{eq:mandelbrot} with a kernel $\mathfrak{G}:\ensuremath{\mathbb{R}}\to\Lin{n}$ which vanishes on $(-\infty,0]$, is continuous on $(0,\infty)$, and satisfies\n\begin{equation*}\n\t\int_{-\infty}^t\big|\mathfrak{G}(t-u)-\mathfrak{G}(-u)\big|^2\,du<\infty\n\end{equation*}\nfor each $t>0$. 
Then\n\\begin{equation*}\n\tG_t=\\int_{-\\infty}^t \\mathfrak{G}(t-u)-\\mathfrak{G}(-u)\\,dW_u,\\qquad t\\geq 0,\n\\end{equation*}\nhas the locally independent increment decomposition \n\\begin{equation*}\n\t\\big(\\theta_t G\\big)_h=\\int_{-\\infty}^t\\mathfrak{G}(t+h-u)-\\mathfrak{G}(t-u)\\,dW_u+\\int_t^{t+h}\\mathfrak{G}(t+h-u)\\,dW_u\\ensuremath\\triangleq\\bar{G}^t_h+\\tilde{G}^t_h\n\\end{equation*} \nwith respect to any compatible filtration. Moreover, we require that\n\\begin{equation*}\n \\int_0^{\\delta} |\\mathfrak{G}(u)|\\,du>0\n\\end{equation*}\nfor each $\\delta>0$. We remark that (up to a time-shift) this is certainly implied by the assumptions of Panloup and Richard, see \\cite[Condition $\\boldsymbol{(\\mathrm{C}_2)}$]{Panloup2020}. As we have seen in \\cref{ex:titmarsh}, the Cameron-Martin space of $(\\tilde{G}_h)_{h\\in[0,1]}$ then densely embeds into $\\ensuremath{\\mathcal{C}}_0([0,1],\\ensuremath{\\mathbb{R}}^n)$. Thus \\cref{rem:Wasserstein-general} applies and we obtain a geometric rate in Wasserstein distance, provided that there is a stationary measure for the equation $dY_t=b(Y_t)\\,dt+\\sigma\\,dG_t$.\n\n\n\\section{The Fractional Averaging Principle}\\label{sec:feedback}\n\nLet us remind the reader of the setup of \\cref{thm:feedback_fractional}: We consider the slow-fast system\n\\begin{alignat}{4}\n dX_t^\\varepsilon&=f(X_t^\\varepsilon,Y_t^\\varepsilon)\\,dt+g(X_t^\\varepsilon,Y_t^\\varepsilon)\\,dB_t,& \\qquad&X_0^\\varepsilon=X_0,\\label{eq:slow_feedback_sec}\\\\\n dY_t^\\varepsilon&=\\frac{1}{\\varepsilon}b(X_t^\\varepsilon,Y_t^\\varepsilon)\\,dt+\\frac{1}{\\varepsilon^{\\hat{H}}}\\sigma\\,d\\hat{B}_t,&\\qquad &Y_0^\\varepsilon=Y_0,\\label{eq:fast_feedback_sec}\n\\end{alignat}\ndriven by independent $d$-dimensional and $n$-dimensional fractional Brownian motions $B$ and $\\hat{B}$ with Hurst parameters $H\\in(\\frac12,1)$ and $\\hat{H}\\in(1-H,1)$, respectively. We claim that $X_t^\\varepsilon$ converges to the solution of the na\\\"ively averaged equation \\eqref{eq:effective_dynamics} as $\\varepsilon\\to 0$. \n\nLet us also introduce the following filtrations for later reference:\n\\begin{equation*}\n {\\mathcal G}_t\\ensuremath\\triangleq\\sigma(B_s,s\\leq t),\\quad\\hat{{\\mathcal G}}_t\\ensuremath\\triangleq\\sigma(\\hat{B}_s,s\\leq t),\\quad {\\mathcal F}_t\\ensuremath\\triangleq{\\mathcal G}_t\\vee\\hat{{\\mathcal G}}_t.\n\\end{equation*}\nTo be utterly precise, we actually use the right-continuous completion of ${\\mathcal F}$ in order to ensure that hitting time of an open sets by a continuous, adapted process is a stopping time. Observe that ${\\mathcal F}$ is compatible with the fBm $\\hat{B}$, see \\cref{sec:increment}.\n\nWe shall first convince ourselves that, under the conditions of \\cref{thm:feedback_fractional}, the pathwise solution of the slow-fast system \\eqref{eq:slow_feedback_sec}--\\eqref{eq:fast_feedback_sec} exists globally. 
If the drift vector field $b:\ensuremath{\mathbb{R}}^d\times\ensuremath{\mathbb{R}}^n\to\ensuremath{\mathbb{R}}^n$ in \eqref{eq:fast_feedback_sec} were \emph{globally} Lipschitz continuous, this would be an easy consequence of the standard Young bound \cite{Young1936}:\n\begin{equation}\label{eq:young}\n \left|\int_s^t f_r\,d\ensuremath{\mathfrak{h}}_r\right|\lesssim |f|_{\ensuremath{\mathcal{C}}^\beta}|\ensuremath{\mathfrak{h}}|_{\ensuremath{\mathcal{C}}^\alpha}|t-s|^{\alpha+\beta}+|f_s||\ensuremath{\mathfrak{h}}|_{\ensuremath{\mathcal{C}}^\alpha}|t-s|^\alpha,\n\end{equation}\nprovided that $\alpha+\beta>1$. We shall also prove a bound on the moments of the H\"older norm of the solution for any fixed scale $\varepsilon$. The main technical estimates in the proof of \cref{thm:feedback_fractional} are delegated to \cref{sec:uniform_bounds}, allowing us to easily conclude the argument in \cref{sec:proof} by appealing to L\^e's stochastic sewing lemma \cite{Le2020}. \n\n\subsection{A Solution Theory for the Slow-Fast System}\label{sec:solution_theory}\n\n\nWe shall begin with a deterministic (pathwise) existence and uniqueness result. Fix a terminal time $T>0$ and let $\ensuremath{\mathfrak{h}}=(\ensuremath{\mathfrak{h}}^1,\ensuremath{\mathfrak{h}}^2)\in\ensuremath{\mathcal{C}}^{\alpha_1}([0,T],\ensuremath{\mathbb{R}}^{m})\times\ensuremath{\mathcal{C}}^{\alpha_2}([0,T],\ensuremath{\mathbb{R}}^{n})$, where $\alpha_1>\frac12$ and $\alpha_2>1-\alpha_1$. We consider the Young differential equation\n\begin{equation}\label{eq:ode}\n z(t)=\begin{pmatrix}z^1(t)\\z^2(t)\end{pmatrix}=z_0+\int_0^t \begin{pmatrix}F_1\big(z(s)\big)\\F_2\big(z(s)\big)\end{pmatrix}\,ds+\int_0^t G\big(z(s)\big)\,d\ensuremath{\mathfrak{h}}_s.\n\end{equation}\nWe impose the following assumptions on the data:\n\n\begin{condition}\label{cond:data_ode}\n\leavevmode\n\begin{enumerate}\n \item\label{it:cond_ode_1} $F_1:\ensuremath{\mathbb{R}}^{d}\times\ensuremath{\mathbb{R}}^{n}\to\ensuremath{\mathbb{R}}^{d}$ is bounded and globally Lipschitz continuous.\n \item\label{it:cond_ode_2} $F_2:\ensuremath{\mathbb{R}}^{d}\times\ensuremath{\mathbb{R}}^{n}\to\ensuremath{\mathbb{R}}^{n}$ is locally Lipschitz continuous and of linear growth, that is, $|F_2(z,x)|\lesssim 1+|x|+|z|$ for all $x\in\ensuremath{\mathbb{R}}^n$ and $z\in\ensuremath{\mathbb{R}}^d$. 
Moreover, there are $\\kappa,D>0$ such that\n \\begin{equation*}\n \\Braket{F_2(z, x)-F_2(z,y),x-y}\\leq D- \\kappa|x-y|^2 \\qquad \\forall\\, x,y\\in\\ensuremath{\\mathbb{R}}^n, \\forall \\,z\\in \\ensuremath{\\mathbb{R}}^d.\n\\end{equation*}\n \\item\\label{it:cond_ode_3} $G:\\ensuremath{\\mathbb{R}}^{d}\\times\\ensuremath{\\mathbb{R}}^{n}\\to\\Lin[m+n]{d+n}$ is of the form $G=\\begin{pmatrix}G_1 & 0\\\\ 0 & G_2\\end{pmatrix}$ with $G_1\\in\\Cb{2}\\big(\\ensuremath{\\mathbb{R}}^{d}\\times\\ensuremath{\\mathbb{R}}^{n},\\Lin[m]{d}\\big)$ and $G_2\\in\\Lin{d}$ is constant.\n\\end{enumerate}\n\\end{condition}\n\nOur proof for the well-posedness of \\eqref{eq:ode} and the non-explosiveness is based on the following comparison lemma, versions of which will be of repeated use in the sequel:\n\\begin{lemma}\\label{lem:comparison}\n Let $F_2:\\ensuremath{\\mathbb{R}}^{d}\\times\\ensuremath{\\mathbb{R}}^{n}\\to\\ensuremath{\\mathbb{R}}^{n}$ satisfy \\cref{cond:data_ode} \\ref{it:cond_ode_2} and let $\\varsigma\\in\\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^{n})$, $z\\in\\ensuremath{\\mathcal{C}}(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^{d})$. \n \\begin{enumerate}\n \\item Then for any $x_0\\in\\ensuremath{\\mathbb{R}}^{n}$, there are unique global solutions to\n \\begin{equation*}\n x(t)=x_0+\\int_0^t F_2\\big(z(s),x(s)\\big)\\,ds+\\varsigma_t,\\qquad y(t)=x_0-\\int_0^ty(s)\\,ds+\\varsigma_t.\n \\end{equation*}\n Furthermore, on any finite time interval $[0,T]$, the difference of the solutions satisfies the bound\n \\begin{equation}\\label{eq:solution_difference}\n |x(t)-y(t)|^2\\lesssim\\int_0^t e^{-\\kappa(t-s)}\\big(1+|y(s)|+|z(s)|\\big)^2\\,ds\n \\end{equation}\n for all $t\\in[0,T]$. In particular,\n \\begin{equation}\\label{eq:a_priori_sup}\n |x|_\\infty\\lesssim 1+|x_0|+|\\varsigma|_\\infty+|z|_\\infty.\n \\end{equation}\n\n \\item If, in addition, $\\varsigma\\in\\ensuremath{\\mathcal{C}}^\\alpha([0,T],\\ensuremath{\\mathbb{R}}^{n})$ for some $\\alpha>0$, then $x\\in\\ensuremath{\\mathcal{C}}^\\alpha([0,T],\\ensuremath{\\mathbb{R}}^{n})$ and the following bound holds:\n \\begin{equation}\\label{eq:comparison_apriori} \n |x|_{\\ensuremath{\\mathcal{C}}^{\\alpha}}\\lesssim 1+|x_0|+|z|_\\infty+|\\varsigma|_{\\ensuremath{\\mathcal{C}}^{\\alpha}}.\n \\end{equation}\n \\end{enumerate}\n\\end{lemma}\n\\begin{proof} \n Since $F_2$ is locally Lipschitz, it is clear that uniqueness holds for the equation defining $x$. To see existence, first notice that $\\tilde{x}(t)\\ensuremath\\triangleq x(t)-\\varsigma_t$ solves\n \\begin{equation*}\n \\tilde{x}(t)=x_0+\\int_0^t F_2\\big(z(s),\\tilde{x}(s)+\\varsigma_s\\big)\\,ds.\n \\end{equation*}\n Set $\\Upsilon(s,x) =F_2\\big(z(s), x+\\varsigma_s\\big)$. This function is jointly continuous in $(s,x)$. Therefore, a local solution exists by the Carath\\'eodory theorem. \n\n On the other hand, global existence and uniqueness of $y$ is standard. Consequently, the required non-explosion statement follows easily upon establishing \\eqref{eq:solution_difference}. 
To this end, we first observe that, for all $z\in\ensuremath{\mathbb{R}}^{d}$ and all $x,y\in\ensuremath{\mathbb{R}}^{n}$, the off-diagonal large scale contraction property and the linear growth of $F_2$ furnish the following bound:\n \begin{align*}\n \Braket{F_2(z,x)+y,x-y}&\leq D-\kappa|x-y|^2+\big|F_2(z,y)+y\big|\,|x-y|\\\n &\leq D- \frac{\kappa}{2}|x-y|^2+\frac{C}{\kappa}\big(1+|z|+|y|\big)^2\n \end{align*}\n for some uniform constant $C>0$, where we also used Young's inequality. Consequently, the function $h(t)\ensuremath\triangleq e^{\kappa t}|x(t)-y(t)|^2$ satisfies \n \begin{equation*}\n h^\prime(t)\lesssim e^{\kappa t}\big(1+|y(t)|+|z(t)|\big)^2\n \end{equation*}\n and \eqref{eq:solution_difference} follows at once. \n\n The bound \eqref{eq:comparison_apriori} is an immediate consequence of \eqref{eq:solution_difference} together with the fact that\n \begin{equation*}\n |x|_{\ensuremath{\mathcal{C}}^{\alpha}}\lesssim|F_2(z,x)|_{\infty}T^{1- \alpha}+|\varsigma|_{\ensuremath{\mathcal{C}}^{\alpha}}\lesssim \big(1+|z|_\infty+|x|_{\infty}\big) T^{1-\alpha}+|\varsigma|_{\ensuremath{\mathcal{C}}^{\alpha}}.\qedhere\n \end{equation*}\n\end{proof}\n\nThe announced existence and uniqueness result for \eqref{eq:ode} is as follows:\n\begin{proposition}\label{prop:abstract_ode}\n Under \cref{cond:data_ode}, for any $T>0$ and any $\beta<\alpha_1\wedge\alpha_2$, \eqref{eq:ode} has a unique global solution in $\ensuremath{\mathcal{C}}^{\beta}([0,T],\ensuremath{\mathbb{R}}^{d+n})$. \n\end{proposition}\n\begin{proof}\nOwing to \cref{lem:comparison}, it is enough to derive an \emph{a priori} bound on $|z^1|_{\ensuremath{\mathcal{C}}^{\tilde{\alpha}}}$, $\tilde{\alpha}\in[\beta,\alpha_1)$, to conclude with a standard Picard argument. \n\nLet $\delta\in(0,1)$. By the Young bound \eqref{eq:young}, we see that\n \begin{align*}\n |z^1|_{\ensuremath{\mathcal{C}}^{\tilde{\alpha}}}&\lesssim |F_1|_\infty\delta^{1-\tilde{\alpha}}+\big(\big|G_1(z^1,z^2)\big|_{\ensuremath{\mathcal{C}}^{\tilde{\alpha}\wedge\alpha_2}}+|G_1|_\infty\big)|\ensuremath{\mathfrak{h}}^1|_{\ensuremath{\mathcal{C}}^{\tilde{\alpha}}}\\\n &\lesssim\big(1+|z^1|_{\ensuremath{\mathcal{C}}^{\tilde{\alpha}}}+|z^2|_{\ensuremath{\mathcal{C}}^{\alpha_2}}\big)\big(1+|\ensuremath{\mathfrak{h}}^1|_{\ensuremath{\mathcal{C}}^{\alpha_1}}\big)\delta^{\alpha_1-\tilde{\alpha}},\n \end{align*}\n where the prefactor is proportional to $M\ensuremath\triangleq|F_1|_\infty+|G|_\infty+\Lip{G}$. We may apply \cref{lem:comparison} to $z^2$ to further find\n \begin{equation*}\n |z^1|_{\ensuremath{\mathcal{C}}^{\tilde{\alpha}}}\lesssim\n \big(1+|z^1|_{\ensuremath{\mathcal{C}}^{\tilde{\alpha}}}+|z_0|+|\ensuremath{\mathfrak{h}}^2|_{\ensuremath{\mathcal{C}}^{\alpha_2}}\big)\big(1+|\ensuremath{\mathfrak{h}}^1|_{\ensuremath{\mathcal{C}}^{\alpha_1}}\big)\delta^{\alpha_1-\tilde{\alpha}}.\n \end{equation*}\n Here, we take the H\"older norms of $z^1,z^2$ over the interval $[0,\delta]$, whereas we use the full interval $[0,T]$ for $\ensuremath{\mathfrak{h}}^1$ and $\ensuremath{\mathfrak{h}}^2$. 
For $\\delta>0$ small enough, we therefore get\n \\begin{equation}\\label{eq:iteration}\n |z^1|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}([0,\\delta])}\\lesssim\\big(1+|z_0|+|\\ensuremath{\\mathfrak{h}}^2|_{\\ensuremath{\\mathcal{C}}^{\\alpha_2}}\\big)\\big(1+|\\ensuremath{\\mathfrak{h}}^1|_{\\ensuremath{\\mathcal{C}}^{\\alpha_1}}\\big).\n \\end{equation}\n Combining this with \\cref{lem:comparison}, we can find a constant $C>0$ such that\n \\begin{equation*}\n |z(\\delta)|\\leq|z_0|+|z^1|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}([0,\\delta])}+|z^2|_{\\ensuremath{\\mathcal{C}}^{\\alpha_2}([0,\\delta])}\\leq C\\big(1+|z_0|+|\\ensuremath{\\mathfrak{h}}^2|_{\\ensuremath{\\mathcal{C}}^{\\alpha_2}}\\big)\\big(1+|\\ensuremath{\\mathfrak{h}}^1|_{\\ensuremath{\\mathcal{C}}^{\\alpha_1}}\\big).\n \\end{equation*}\n This bound can now be easily iterated and together with \\eqref{eq:iteration} we see that there is a (increased) constant $C$ such that\n \\begin{equation*}\n |z^1|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}([t,t+\\delta])}\\lesssim\\big(1+|z_t|+|\\ensuremath{\\mathfrak{h}}^2|_{\\ensuremath{\\mathcal{C}}^{\\alpha_2}}\\big)\\big(1+|\\ensuremath{\\mathfrak{h}}^1|_{\\ensuremath{\\mathcal{C}}^{\\alpha_1}}\\big)\\leq C^{\\left[\\frac{t}{\\delta}\\right]+1}\\big(1+|z_0|+|\\ensuremath{\\mathfrak{h}}^2|_{\\ensuremath{\\mathcal{C}}^{\\alpha_2}}\\big)\\big(1+|\\ensuremath{\\mathfrak{h}}^1|_{\\ensuremath{\\mathcal{C}}^{\\alpha_1}}\\big)^{\\left[\\frac{t}{\\delta}\\right]+2}\n \\end{equation*}\n for each $t\\in[0,T-\\delta]$. Since $|\\cdot|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}([0,T])}\\leq 2\\delta^{\\tilde{\\alpha}-1}\\sup_t |\\cdot|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}([t,t+\\delta])}$, we get that\n \\begin{equation}\\label{eq:a_priori}\n |z^1|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}([0,T])}\\leq\\frac{2C^{\\left[\\frac{t}{\\delta}\\right]+1}}{\\delta^{1-\\tilde\\alpha}}\\big(1+|z_0|+|\\ensuremath{\\mathfrak{h}}^2|_{\\ensuremath{\\mathcal{C}}^{\\alpha_2}}\\big)\\big(1+|\\ensuremath{\\mathfrak{h}}^1|_{\\ensuremath{\\mathcal{C}}^{\\alpha_1}}\\big)^{\\left[\\frac{T}{\\delta}\\right]+2}.\n \\end{equation}\n\n Local existence and uniqueness of a solution to \\eqref{eq:ode} is a classical consequence of the Young bound. Indeed, if we define \n \\begin{equation*}\n A_\\delta\\ensuremath\\triangleq\\left\\{f\\in\\ensuremath{\\mathcal{C}}^{\\beta}([0,\\delta],\\ensuremath{\\mathbb{R}}^{d+n}):\\,f(0)=z_0\\text{ and }|f|_{\\ensuremath{\\mathcal{C}}^{\\beta}}\\leq 1\\right\\},\n \\end{equation*}\n then, for $\\delta>0$ small enough, the operator $\\mathcal{A}_\\delta: A_\\delta\\to A_\\delta$,\n \\begin{equation*}\n (\\mathcal{A}_\\delta z)(t)\\ensuremath\\triangleq z_0+\\int_0^t\\begin{pmatrix}F_1\\big(z(s)\\big)\\{\\mathcal F}_2\\big(z(s)\\big)\\end{pmatrix}\\,ds+\\int_0^t G\\big(z(s)\\big)\\,d\\ensuremath{\\mathfrak{h}}_s,\n \\end{equation*}\n is contracting on a complete metric space. 
Abbreviating $\\gamma\\ensuremath\\triangleq\\alpha_1\\wedge\\alpha_2$, this in turn follows from the well-known bounds\n \\begin{align*}\n \\left|\\int_0^\\cdot G\\big(z(s)\\big)\\,d\\ensuremath{\\mathfrak{h}}_s\\right|_{\\ensuremath{\\mathcal{C}}^{\\beta}}&\\lesssim(\\Lip{G}+|G|_\\infty)(|z|_{\\ensuremath{\\mathcal{C}}^{\\beta}}+1)|\\ensuremath{\\mathfrak{h}}|_{\\ensuremath{\\mathcal{C}}^{\\gamma}}\\delta^{\\gamma-\\beta},\\\\\n \\left|\\int_0^\\cdot G\\big(z(s)\\big)-G\\big(\\bar{z}(s)\\big)\\,d\\ensuremath{\\mathfrak{h}}_s\\right|_{\\ensuremath{\\mathcal{C}}^{\\beta}}&\\lesssim (\\Lip{G}+\\Lip{DG})|\\ensuremath{\\mathfrak{h}}|_{\\ensuremath{\\mathcal{C}}^{\\gamma}}\\delta^{\\gamma-\\beta}|z-\\bar{z}|_{\\ensuremath{\\mathcal{C}}^{\\beta}},\\\\\n \\left|\\int_0^\\cdot \\begin{pmatrix}F_1\\big(z(s)\\big)\\\\F_2\\big(z(s)\\big)\\end{pmatrix}\\,ds\\right|_{\\ensuremath{\\mathcal{C}}^{\\beta}}&\\leq \\big(|F_1|_{\\infty;\\,B_{\\delta^{\\beta}}(z_0)}+|F_2|_{\\infty;\\,B_{\\delta^{\\beta}}(z_0)}\\big)\\delta^{1-\\beta},\\\\\n \\left|\\int_0^\\cdot \\begin{pmatrix}F_1\\big(z(s)\\big)-F_1\\big(\\bar{z}(s)\\big)\\\\F_2\\big(z(s)\\big)-F_2\\big(\\bar{z}(s)\\big)\\end{pmatrix}\\,ds\\right|_{\\ensuremath{\\mathcal{C}}^{\\beta}}&\\leq \\big(\\Lip{F_1}+\\Lip[B_{\\delta^{\\beta}}(z_0)]{F_2}\\big)\\delta|z-\\bar{z}|_{\\ensuremath{\\mathcal{C}}^{\\beta}}\n \\end{align*}\n for all $z,\\bar{z}\\in A_\\delta$, where $|\\cdot|_{\\infty;\\,A}$ and $\\Lip[A]{\\cdot}$ denote the respective norms of the function restricted to the set $A$. Here, we also used that $\\max\\big(|z-z_0|_\\infty,|\\bar z-z_0|_\\infty\\big)\\leq\\delta^\\beta$ since $z,\\bar{z}\\in A_\\delta$ by assumption. Consequently, there is a unique solution to \\eqref{eq:ode} in $\\ensuremath{\\mathcal{C}}^{\\beta}([0,\\delta],\\ensuremath{\\mathbb{R}}^{d+n})$. Global existence and uniqueness follow from the \\emph{a priori} estimates \\eqref{eq:comparison_apriori} and \\eqref{eq:a_priori} by a standard maximality argument.\n\\end{proof}\n\nWe now bring the randomness back in the picture. To this end, let $\\alpha>0$, $p\\geq 1$, and $T>0$. We define the space\n\\begin{equation*}\n {\\mathcal B}_{\\alpha,p}([0,T],\\ensuremath{\\mathbb{R}}^d)\\ensuremath\\triangleq\\left\\{X:[0,T]\\times\\Omega\\to\\ensuremath{\\mathbb{R}}^d:\\,X\\text{ is }({\\mathcal F}_t)_{t\\in[0,T]}\\text{-adapted and }\\|X\\|_{{\\mathcal B}_{\\alpha,p}([0,T],\\ensuremath{\\mathbb{R}}^d)}<\\infty\\right\\},\n\\end{equation*} \nwhere we introduced the semi-norm\n\\begin{equation*}\n \\|X\\|_{{\\mathcal B}_{\\alpha,p}([0,T],\\ensuremath{\\mathbb{R}}^d)}\\ensuremath\\triangleq\\sup_{s\\neq t\\in[0,T]}\\frac{\\|X_t-X_s\\|_{L^p}}{|t-s|^\\alpha}.\n\\end{equation*}\nIf the terminal time $T$ and the dimension $d$ are clear from the context, we shall also write ${\\mathcal B}_{\\alpha,p}$ for brevity. By Kolmogorov's continuity theorem, we have the continuous embeddings\n\\begin{equation}\\label{eq:embeddings}\n L^p\\big(\\Omega,\\ensuremath{\\mathcal{C}}^{\\alpha+\\delta}([0,T],\\ensuremath{\\mathbb{R}}^d)\\big)\\hookrightarrow{\\mathcal B}_{\\alpha,p}([0,T],\\ensuremath{\\mathbb{R}}^d)\\hookrightarrow L^p\\big(\\Omega,\\ensuremath{\\mathcal{C}}^{\\alpha-\\delta-\\frac1p}([0,T],\\ensuremath{\\mathbb{R}}^d)\\big)\n\\end{equation}\nfor any $\\delta>0$. 
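By way of illustration, although this will not be needed below: if $B$ is an fBm with Hurst parameter $H$, its increments are Gaussian with variance proportional to $|t-s|^{2H}$, so that, for every $p\\geq 1$,\n\\begin{equation*}\n \\|B_t-B_s\\|_{L^p}\\lesssim|t-s|^{H}\\qquad\\text{and hence}\\qquad\\|B\\|_{{\\mathcal B}_{H,p}}<\\infty,\n\\end{equation*}\nwith an implicit constant depending only on $p$. The second embedding in \\eqref{eq:embeddings}, that is, Kolmogorov's continuity theorem, then shows that the sample paths of $B$ belong to $\\ensuremath{\\mathcal{C}}^{H-\\delta-\\frac1p}$ for every $\\delta>0$ and every $p\\geq 1$, recovering the well-known $\\ensuremath{\\mathcal{C}}^{H-}$-regularity. 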
Finally, let us also introduce the Besov-type space\n\\begin{align*}\n W_0^{\\alpha,\\infty}([0,T],\\ensuremath{\\mathbb{R}}^d)&\\ensuremath\\triangleq \\big\\{f:[0,T]\\to\\ensuremath{\\mathbb{R}}^d:\\,|f|_{\\alpha,\\infty}<\\infty\\big\\},\\\\\n |f|_{\\alpha,\\infty}&\\ensuremath\\triangleq\\sup_{t\\in[0,T]}\\left(|f(t)|+\\int_0^t\\frac{|f(t)-f(s)|}{|t-s|^{\\alpha+1}}\\,ds\\right).\n\\end{align*}\nNualart and R\\u{a}s\\c{c}anu proved the following classical result:\n\\begin{proposition}[{\\cite[Theorem 2.1.II]{Rascanu2002}}]\\label{prop:nualart}\n Let $f:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}^d$ be bounded Lipschitz continuous and $g:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n\\to\\Lin[m]{d}$ be of class $\\ensuremath{\\mathcal{C}}_b^2$. Let $(Y_t)_{t\\in[0,T]}$ be a stochastic process with sample paths in $\\ensuremath{\\mathcal{C}}^{\\gamma}([0,T],\\ensuremath{\\mathbb{R}}^n)$ for some $\\gamma>1-H$ and let $B$ be an fBm with Hurst parameter $H>\\frac12$. Then there is a unique global solution to the equation\n \\begin{equation*}\n X_t=X_0+\\int_0^t f(X_s,Y_s)\\,ds+\\int_0^tg(X_s,Y_s)\\,dB_s\n \\end{equation*}\n and, provided that $X_0\\in L^\\infty$, we also have that\n \\begin{equation*}\n |X|_{\\alpha,\\infty}\\in\\bigcap_{p\\geq 1} L^p\n \\end{equation*}\n for each $\\alpha<\\frac12\\wedge\\gamma$.\n\\end{proposition} \n\n\\begin{corollary}\\label{cor:norm_bound_solution}\n Fix the scale parameter $\\varepsilon>0$ and a terminal time $T>0$. Let $\\alpha0$ is sufficiently small. Instead, we employ \\cref{prop:nualart}: Since $Y^\\varepsilon\\in\\ensuremath{\\mathcal{C}}^{\\hat H-}([0,T],\\ensuremath{\\mathbb{R}}^n)$ by \\cref{lem:comparison}, we see that, for each $\\alpha<\\frac12\\wedge\\hat H$, $|X^\\varepsilon|_{\\alpha,\\infty}\\in\\bigcap_{p\\geq 1} L^p$, It is clear that\n\\begin{equation*}\n W_0^{\\alpha,\\infty}([0,T],\\ensuremath{\\mathbb{R}}^d)\\hookrightarrow \\ensuremath{\\mathcal{C}}^{\\alpha-\\delta}([0,T],\\ensuremath{\\mathbb{R}}^d)\n\\end{equation*}\nfor any $\\delta>0$. Combine this with the continuous embedding \\eqref{eq:embeddings} to conclude \\eqref{eq:b_norm_bound}.\n\\end{proof}\n\n\\begin{remark}\n We finally record that \\cref{prop:abstract_ode,cor:norm_bound_solution} are the only places in the proof of \\cref{thm:feedback_fractional} which require a linear growth of the drift $b$, see \\cref{cond:feedback}. In fact, the remainder of the argument would still work, \\emph{mutatis mutandis}, under the weaker assumption of a polynomially growing drift, i.e., $|b(x,y)|\\lesssim 1+|x|^N+|y|^N$ for some $N\\in\\ensuremath{\\mathbb{N}}$. It is however unclear whether the solution to \\eqref{eq:slow_feedback_sec}--\\eqref{eq:fast_feedback_sec} exists globally in this case.\n\\end{remark}\n\n\n\n\\subsection{Uniform Bounds on the Slow Motions}\\label{sec:uniform_bounds}\n\nOur strategy in proving \\cref{thm:feedback_fractional} is as follows: The integrals in \\eqref{eq:slow_feedback_sec} are approximated by suitable Riemann sums, on which we then aim to establish uniform bounds. 
These estimates translate into bounds on the integrals in view of L\\^e's stochastic sewing lemma \\cite{Le2020}.\n\nFix a terminal time $T>0$ and let $\\mathcal{S}^p$ denote the set of adapted two-parameter processes on the simplex with finite $p^\\text{th}$ moments; in symbols:\n\\begin{equation*}\n \\mathcal{S}^p\\ensuremath\\triangleq\\left\\{A:[0,T]^2\\times\\Omega\\to\\ensuremath{\\mathbb{R}}^d:\\,A_{s,t}=0\\text{ for }s\\geq t\\text{ and }A_{s,t}\\in L^p(\\Omega,{\\mathcal F}_t,\\ensuremath\\mathbb{P})\\text{ for all }s,t\\geq 0\\right\\}.\n\\end{equation*}\nGiven $\\eta,\\bar{\\eta}>0$, we define the spaces\n\\begin{align*}\n H_\\eta^p&\\ensuremath\\triangleq\\left\\{A\\in\\mathcal{S}^p:\\,\\|A\\|_{H_\\eta^p}\\ensuremath\\triangleq\\sup_{0\\leq s\\frac12$, and $\\bar{\\eta}>1$. Suppose that $A\\in H_\\eta^p\\cap\\bar{H}_{\\bar{\\eta}}^p$. Then, for every $0\\leq s\\leq t\\leq T$, the limit\n \\begin{equation*}\n I_{s,t}(A)\\ensuremath\\triangleq\\lim_{|P|\\to 0}\\sum_{[u,v]\\in P}A_{u,v}\n \\end{equation*}\n along partitions $P$ of $[s,t]$ with mesh $|P|\\ensuremath\\triangleq\\max_{[u,v]\\in P}|v-u|$ tending to zero exists in $L^p$. The limiting process $I(A)$ is additive in the sense that $I_{s,u}(A)+I_{u,t}(A)=I_{s,t}(A)$ for all $0\\leq s\\leq u\\leq t\\leq T$. Furthermore, there is a constant $C=C(p,\\eta,\\bar{\\eta})$ such that\n \\begin{equation*}\n \\|I_{s,t}(A)\\|_{L^p}\\leq C\\left(\\vertiii{A}_{\\bar{H}_{\\bar{\\eta}}^p}|t-s|^{\\bar{\\eta}}+\\|A\\|_{H_{\\eta}^p}|t-s|^\\eta\\right)\n \\end{equation*}\n for all $0\\leq s\\leq t\\leq T$. Moreover, if $\\|\\Expec{A_{s,t}\\,|\\,{\\mathcal F}_s}\\|_{L^p}\\lesssim|t-s|^{\\bar{\\eta}}$, then $I(A)\\equiv 0$.\n\\end{proposition}\nRecall our notation of the fast motion's flow from \\eqref{eq:general_flow} and \\eqref{eq:general_flow-fixed-x}, respectively. We are ultimately going to apply \\cref{prop:stochastic_sewing} with the two-parameter process\n\\begin{equation}\\label{eq:riemann_summands}\n A_{s,t}^\\varepsilon\\ensuremath\\triangleq\\int_s^t \\left(g\\Big(X_s^\\varepsilon,\\bar{\\Phi}_{s,r}^{X_s^\\varepsilon}\\big(\\Phi_{0,s}^{X^\\varepsilon}(Y_0)\\big)\\Big)-\\bar{g}\\big(X_s^\\varepsilon\\big)\\right)\\,dB_r,\\quad 0\\leq s1-H$. Let $X$ be an $({\\mathcal F}_t)_{t\\in[0,T]}$-adapted stochastic process with $\\alpha$-H\\\"older sample paths. Moreover assume that $X\\in{\\mathcal B}_{\\alpha,p}$. Let $f:\\ensuremath{\\mathbb{R}}^d\\to\\ensuremath{\\mathbb{R}}$ be a bounded Lipschitz continuous function. Then we have the following bound on the Young integral:\n \\begin{equation*}\n \\left\\|\\int_s^t f(X_r)\\,dB_r\\right\\|_{{\\mathcal B}_{H,p}}\\lesssim\\big(|f|_\\infty+\\Lip{f}\\big)\\big(1+\\|X\\|_{{\\mathcal B}_{\\alpha,p}}\\big),\n \\end{equation*}\n uniformly in $0\\leq s\\frac12$ and let $h:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n \\to\\ensuremath{\\mathbb{R}}$ be a Lipschitz continuous function. Let $p>2$ and $\\alpha>1-H$. Let $X$ be an $\\ensuremath{\\mathbb{R}}^d$-valued, $({\\mathcal F}_t)_{t\\in[0,T]}$-adapted process with $\\sup_{t\\in[0,T]}\\|X_t\\|_{L^p}<\\infty$ and sample paths in $\\ensuremath{\\mathcal{C}}^\\alpha([0,T],\\ensuremath{\\mathbb{R}}^d)$. Let $Y_0\\in L^p$. Define\n \\begin{equation*}\n A_{s,t}\\ensuremath\\triangleq\\int_s^t h\\Big(X_s,\\bar{\\Phi}_{s,r}^{X_s}\\big(\\Phi_{0,s}^{X}(Y_0)\\big)\\Big)\\,dB_r,\n \\end{equation*}\n where the integration is understood in the mixed Wiener-Young sense, see \\eqref{eq:wiener_young}. 
If $A\\in H_\\eta^2\\cap\\bar{H}_{\\bar{\\eta}}^2$ for some $\\eta>\\frac12$ and $\\bar{\\eta}>1$, then, for any $\\varepsilon>0$ and any $0\\leq s\\leq t\\leq T$, \n \\begin{equation*}\n \\lim_{|P|\\to 0} \\sum_{[u,v]\\in P([s,t])} A_{u,v}=\\int_{s}^t h\\big(X_r,\\Phi_{0,r}^{X}(Y_0)\\big)\\,dB_r,\n \\end{equation*} \n where the right-hand side is the Young integral.\n\\end{lemma}\n\\begin{proof}\n We first note that, by \\cref{lem:comparison}, the process $\\Phi_{0,\\cdot}^X(Y_0)$ takes values in $\\ensuremath{\\mathcal{C}}^\\beta([0,T],\\ensuremath{\\mathbb{R}}^d)$ for any $\\beta<\\hat{H}$. The pathwise Young integral $\\int h\\big(X_r,\\Phi_{0,r}^{X}(Y_0)\\big)\\,dB_r$ is thus well defined and is given by the limit of the Riemann sums of\n \\begin{equation*}\n \\tilde{A}_{s,t}\\ensuremath\\triangleq h\\big(X_s,\\Phi_{0,s}^X(Y_0)\\big)(B_t-B_s)\n \\end{equation*}\n along any sequence of partitions. By the last part of \\cref{prop:stochastic_sewing}, it now suffices to show that $\\|A_{s,t}-\\tilde{A}_{s,t}\\|_{L^2}\\lesssim |t-s|^{\\bar{\\eta}}$ for some $\\bar{\\eta}>1$. \n\n To see this, we apply \\cref{lem:wiener_integral_bound} with $\\kappa=0$ to find that, for each $\\beta<\\hat{H}$,\n \\begin{align*}\n &\\phantom{\\leq}\\big\\|A_{s,t}-\\tilde{A}_{s,t}\\big\\|_{L^2}=\\left\\|\\int_s^t \\Big(h\\big(X_s,\\bar{\\Phi}_{s,r}^{X_s}\\big(\\Phi_{0,s}^{X}(Y_0)\\big)\\big)-h\\big(X_s,\\Phi_{0,s}^X(Y_0)\\big)\\Big)\\,dB_r\\right\\|_{L^2}\\\\\n &\\leq\\Big\\|\\sup_{s\\leq r\\leq t}\\Big|h\\big(X_s,\\bar{\\Phi}_{s,r}^{X_s}\\big(\\Phi_{0,s}^{X}(Y_0)\\big)\\big)-h\\big(X_s,\\Phi_{0,s}^X(Y_0)\\big)\\Big|\\Big\\|_{L^p}|t-s|^H\\\\\n &\\leq\\Lip{h}\\Big\\|\\Big|\\bar{\\Phi}_{s,\\cdot}^{X_s}\\big(\\Phi_{0,s}^{X}(Y_0)\\big)\\Big|_{\\ensuremath{\\mathcal{C}}^{\\beta}}\\Big\\|_{L^p}|t-s|^{H+\\beta}.\n \\end{align*}\n Since $H+\\hat{H}>1$, we can conclude with \\cref{lem:comparison,lem:fast_process_moments}.\n\\end{proof}\nOur interest in \\cref{lem:sewing_young} is of course in applying it to the slow motion \\eqref{eq:slow_feedback_sec} and the Riemann summands $A^\\varepsilon_{s,t}$ defined in \\eqref{eq:riemann_summands}. We have already seen in \\cref{cor:norm_bound_solution} that $X^\\varepsilon\\in\\bigcap_{p\\geq 1}{\\mathcal B}_{\\alpha,p}$ for any $\\alpha<\\frac12\\wedge\\hat{H}$. We are therefore left to check that $A^\\varepsilon\\in H_\\eta^p\\cap\\bar{H}_{\\bar{\\eta}}^p$ for some $\\eta>\\frac12$, $\\bar{\\eta}>1$, and $p\\geq 2$. Since these estimates are somewhat technically involved and require longer computations, we devote a subsection to each of the norms $\\|\\cdot\\|_{H_{\\eta}^p}$ and $\\vertiii{\\;\\cdot\\;}_{\\bar{H}_{\\bar{\\eta}}^p}$, respectively. \n\n\n\\subsubsection{Controlling the Increment $A^\\varepsilon_{s,t}$}\\label{sec:sewing_1}\n\nLet $h:\\ensuremath{\\mathbb{R}}^d\\times \\ensuremath{\\mathbb{R}}^n\\to \\ensuremath{\\mathbb{R}}^d$. Recall that write $\\bar h(x)=\\int h(x,y) \\pi^x(dy)$ for its average with respect to the first marginal of the invariant measure of the process $\\bar \\Phi^x$, see \\eqref{eq:general_flow-fixed-x} and \\cref{initial-condition}.\nThe following lemma exploits the convergence rates derived in \\cref{sec:convergence}. [The reader should observe that without further notice we assume that the conditions of \\cref{thm:feedback_fractional} on the drift $b:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}^n$ are in place.]\n\\begin{lemma}\\label{lem:sewing_helper_1}\n Let $q>1$. 
Let $h:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}$ be a bounded measurable function and let $X,Y\\in L^q$ be ${\\mathcal F}_s$-measurable random variables. Then, for any $0\\leq s\\leq t$, any $p\\geq 2$, and any $\\zeta<1-\\hat{H}$, we have that\n \\begin{equation*}\n \\left\\|\\int_s^t \\Big( h\\big(X,\\bar{\\Phi}_{s,r}^X(Y)\\big)-\\bar{h}(X)\\Big) \\,dr\\right\\|_{L^p}\\lesssim |h|_\\infty\\Big(1+\\|Y\\|_{L^q}^{\\frac{1}{p}}+\\|X\\|_{L^q}^{\\frac{1}{p}}\\Big)\\varepsilon^{\\frac{\\zeta}{3p}}|t-s|^{1-\\frac{\\zeta}{3p}}.\n \\end{equation*}\n\\end{lemma}\n\\begin{proof}\nThere is no loss of generality in assuming that $\\bar{h}\\equiv 0$. Notice also that the trivial estimate $\\big\\|\\int_s^t h\\big(X,\\bar{\\Phi}_{s,r}^X(Y)\\big)\\,dr\\big\\|_{L^\\infty}\\leq|h|_\\infty|t-s|$. By interpolation, we can therefore restrict ourselves to the case $p=2$. Clearly,\n \\begin{equation*}\n \\Expec{\\left|\\int_s^t h\\big(X,\\bar{\\Phi}_{s,r}^X(Y)\\big)\\,dr\\right|^2}=2\\int_s^t\\int_s^v\\Expec{h\\big(X,\\bar{\\Phi}_{s,r}^X(Y)\\big)h\\big(X,\\bar{\\Phi}_{s,v}^X(Y)\\big)}\\,dr\\,dv.\n \\end{equation*}\n For $r0$. Moreover, there is a $\\gamma>0$ such that \n \\begin{equation*}\n \\|A\\|_{H_{\\eta}^p}\\lesssim |h|_\\infty\\Big(1+\\sup_{0\\leq t\\leq T}\\|X_t\\|_{L^q}\\Big)\\varepsilon^{\\gamma}.\n \\end{equation*}\n\\end{proposition}\n\\begin{proof}\n Again, we may assume that $\\bar{h}\\equiv 0$ without any loss of generality. Since $X$ is $({\\mathcal F}_t)_{t\\in[0,T]}$-adapted, we can use \\cref{lem:wiener_integral_bound} to obtain that, for $\\tilde{q}>p$ and $\\kappa\\in[0,H-\\frac12)$,\n \\begin{equation*}\n \\|A_{s,t}\\|_{L^p}\\lesssim\\left\\|\\left|h\\Big(X_s,\\bar{\\Phi}_{s,\\cdot}^{X_s}\\big(\\Phi_{0,s}^X(Y_0)\\big)\\Big)\\right|_{-\\kappa}\\right\\|_{L^{\\tilde{q}}}|t-s|^{H-\\kappa}.\n \\end{equation*}\n By \\cref{lem:sewing_helper_1,lem:fast_process_moments}, we obtain \n \\begin{equation*}\n \\left\\|\\int_u^v h\\Big(X_s,\\bar{\\Phi}_{s,r}^{X_s}\\big(\\Phi_{0,s}^X(Y_0)\\big)\\Big)\\,dr\\right\\|_{L^{\\tilde{q}}}\\lesssim |h|_\\infty\\bigg(1+\\|Y_0\\|^{\\frac{1}{\\tilde{q}}}_{L^q}+\\sup_{0\\leq r\\leq s}\\|X_r\\|_{L^q}^{\\frac{1}{\\tilde{q}}}\\bigg)\\varepsilon^{\\frac{\\zeta}{3\\tilde{q}}}|v-u|^{1-\\frac{\\zeta}{3\\tilde{q}}}\n \\end{equation*}\n for all $u,v\\in[s,t]$ and any $\\zeta<1-\\hat{H}$. Therefore, Kolmogorov's continuity theorem shows that\n \\begin{equation*}\n \\left\\|\\left|h\\Big(X_s,\\bar{\\Phi}_{s,\\cdot}^{X_s}\\big(\\Phi_{0,s}^X(Y_0)\\big)\\Big)\\right|_{-\\kappa}\\right\\|_{L^{\\tilde{q}}}\\lesssim |h|_\\infty\\bigg(1+\\|Y_0\\|^{\\frac{1}{\\tilde{q}}}_{L^q}+\\sup_{0\\leq t\\leq T}\\|X_t\\|_{L^q}^{\\frac{1}{\\tilde{q}}}\\bigg)\\varepsilon^{\\frac{\\zeta}{3\\tilde{q}}},\n \\end{equation*}\n provided that we choose $\\tilde{q}>\\kappa^{-1}\\left(1+\\frac{\\zeta}{3}\\right)$, and the final result follows.\n \\end{proof}\n\n\n\\subsubsection{Continuity of the Invariant Measures}\nLet $\\varepsilon>0$ and $s0$. In order to keep the statements of the next lemmas concise, we shall freely absorb quantities independent of $0\\leq s\\leq t$ and $\\varepsilon\\in(0,1]$ into the prefactor hidden beneath $\\lesssim$.\n\n\\begin{lemma}\\label{lem:continuity}\n Let $p\\geq 1$ and suppose that $b(x,\\cdot)\\in\\S_p(\\kappa,R)$ for all $x\\in\\ensuremath{\\mathbb{R}}^d$. Let $X,\\bar{X}\\in L^\\infty$, and $Y\\in L^p$ be ${\\mathcal F}_s$-measurable random variables. 
Then\n \\begin{equation*}\n \\left\\|\\bar{\\Phi}_{s,t}^X(Y)-\\bar{\\Phi}^{\\bar{X}}_{s,t}(Y)\\right\\|_{L^p}\\lesssim\\|X-\\bar{X}\\|_{L^{p}}.\n \\end{equation*}\n\\end{lemma}\n\\begin{proof}\n We abbreviate $\\Lambda\\ensuremath\\triangleq\\Lambda(\\kappa,R,p)$ and observe that, for any $s\\leq u\\leq r$,\n \\begin{align*}\n \\frac{d}{dr}\\Big|\\bar{\\Phi}_{u,r}^{X}(Y)-\\bar{\\Phi}_{u,r}^{\\bar{X}}(Y)\\Big|^2&=\\frac{2}{\\varepsilon}\\Braket{b\\big(X,\\bar{\\Phi}_{u,r}^{X}(Y)\\big)-b\\big(\\bar{X},\\bar{\\Phi}_{u,r}^{\\bar{X}}(Y)\\big),\\bar{\\Phi}_{u,r}^{X}(Y)-\\bar{\\Phi}_{u,r}^{\\bar{X}}(Y)}\\\\\n &\\leq \\frac{2(\\Lambda+1)}{\\varepsilon}\\Big|\\bar{\\Phi}_{u,r}^{X}(Y)-\\bar{\\Phi}_{u,r}^{\\bar{X}}(Y)\\Big|^2+ \\frac{\\Lip[\\|X\\|_{L^\\infty}\\vee\\|\\bar{X}\\|_{L^\\infty}]{b}^2}{2\\varepsilon}|X-\\bar{X}|^2\n \\end{align*}\n with probability $1$. It follows that\n \\begin{equation}\\label{eq:cont_interpolate_1}\n \\Big|\\bar{\\Phi}_{u,r}^X(Y)-\\bar{\\Phi}^{\\bar{X}}_{u,r}(Y)\\Big|\\lesssim\\Lip[\\|X\\|_{L^\\infty}\\vee\\|\\bar{X}\\|_{L^\\infty}]{b}e^{(\\Lambda+1)\\frac{|r-u|}{\\varepsilon}}|X-\\bar{X}|.\n \\end{equation}\n This bound is of course only useful on a time interval with length of order $\\varepsilon$. We therefore expand\n \\begin{equation*}\n \\left\\|\\bar{\\Phi}_{s,t}^X(Y)-\\bar{\\Phi}^{\\bar{X}}_{s,t}(Y)\\right\\|_{L^p}\\leq\\sum_{(t_i,t_{i+1})\\in P([s,t];\\varepsilon)}\\left\\|\\bar{\\Phi}_{t_{i+1},t}^{\\bar{X}}\\big(\\bar{\\Phi}_{s,t_{i+1}}^X(Y)\\big)-\\bar{\\Phi}_{t_{i},t}^{\\bar{X}}\\big(\\bar{\\Phi}_{s,t_{i}}^X(Y)\\big)\\right\\|_{L^p}.\n \\end{equation*}\n \\Cref{cor:fast_different_initial} shows that\n \\begin{align*}\n \\left\\|\\bar{\\Phi}_{t_{i+1},t}^{\\bar{X}}\\big(\\bar{\\Phi}_{s,t_{i+1}}^X(Y)\\big)-\\bar{\\Phi}_{t_{i},t}^{\\bar{X}}\\big(\\bar{\\Phi}_{s,t_{i}}^X(Y)\\big)\\right\\|_{L^p}&\\lesssim\\Big\\|\\bar{\\Phi}_{s,t_{i+1}}^X(Y)-\\bar{\\Phi}_{t_{i},t_{i+1}}^{\\bar{X}}\\big(\\bar{\\Phi}_{s,t_{i}}^X(Y)\\big)\\Big\\|_{L^p} e^{-c\\frac{|t-t_{i+1}|}{\\varepsilon}}\\\\\n & \\lesssim \\|X-\\bar{X}\\|_{L^p}e^{-c\\frac{|t-t_{i+1}|}{\\varepsilon}},\n \\end{align*}\n where the last inequality uses \\eqref{eq:cont_interpolate_1} together with $|t_{i+1}-t_i|\\asymp\\varepsilon$. Consequently,\n \\begin{equation*}\n \\left\\|\\bar{\\Phi}_{s,t}^X(Y)-\\bar{\\Phi}^{\\bar{X}}_{s,t}(Y)\\right\\|_{L^p}\\lesssim\\|X-\\bar{X}\\|_{L^p}\\sum_{(t_i,t_{i+1})\\in P([s,t];\\varepsilon)}e^{-c\\frac{|t-t_{i+1}|}{\\varepsilon}}\\lesssim\\|X-\\bar{X}\\|_{L^p}\n \\end{equation*}\n uniformly in $0\\leq s\\leq t$ and $\\varepsilon\\in(0,1]$. \n\\end{proof}\n\n\\Cref{lem:continuity} implies the local Lipschitz continuity of the invariant measure $\\pi^x$ in the parameter $x\\in\\ensuremath{\\mathbb{R}}^d$:\n\\begin{proposition}\\label{lem:wasserstein_holder}\n Let $p\\geq 1$ and $K>0$. Suppose that $b(x,\\cdot)\\in\\S_p(\\kappa,R)$ for all $x\\in\\ensuremath{\\mathbb{R}}^d$. 
Then\n \\begin{equation*}\n \\ensuremath{\\mathcal{W}}^p(\\pi^{x_1},\\pi^{x_2})\\lesssim |x_1-x_2|,\n \\end{equation*}\n uniformly for $|x_1|,|x_2|\\leq K$.\n\\end{proposition}\n\\begin{proof}\n Owing to \\cref{thm:geometric}, it follows that\n \\begin{equation*}\n \\ensuremath{\\mathcal{W}}^p(\\pi^{x_1},\\pi^{x_2})\\leq\\limsup_{\\varepsilon\\to 0}\\big\\|\\bar{\\Phi}^{x_1}_{0,1}(0)-\\bar{\\Phi}^{x_2}_{0,1}(0)\\big\\|_{L^p}\n \\end{equation*} \n and we conclude with \\cref{lem:continuity}.\n\\end{proof}\n\nThe simple proof of the following corollary is left to the reader.\n\\begin{corollary}\\label{cor:lipschitz_average}\nLet $h:\\ensuremath{\\mathbb{R}}^d\\times \\ensuremath{\\mathbb{R}}^n\\to \\ensuremath{\\mathbb{R}}^d$ be Lipschitz continuous. Then $\\bar{h}:\\ensuremath{\\mathbb{R}}^d\\to\\ensuremath{\\mathbb{R}}^d$ is locally Lipschitz.\n\\end{corollary}\n\\subsubsection{Controlling the Second Order Increment $\\delta A^\\varepsilon_{s,u,t}$}\\label{sec:sewing_2}\n\nUniform bounds on the second order increments are difficult to obtain even for the Markovian fast dynamic. The first technical estimate of this subsection is the following:\n\\begin{lemma}\\label{lem:sewing_helper_2}\n Let $1\\leq p0$ such that \n \\begin{equation*}\n \\Big\\|\\Expec{h\\big(X,\\bar{\\Phi}^{X}_{s,t}(Y)\\big)-h\\big(\\bar{X},\\bar{\\Phi}^{\\bar{X}}_{s,t}(Y)\\big)\\,\\middle|\\,\\mathcal{F}_s}\\Big\\|_{L^p}\\lesssim\\Lip{h}\\big(1+\\|Y\\|_{L^q}\\big)\\|X-\\bar{X}\\|_{L^{p}}^{\\rho}\\left(1\\wedge\\frac{\\varepsilon^\\gamma}{|t-s|^\\gamma}\\right).\n \\end{equation*}\n\\end{lemma}\n\\begin{proof}\n By \\cref{cor:total_variation_conditional} \\ref{it:ergodicity_wasserstein} and H\\\"older's inequality, we certainly have\n \\begin{equation}\\label{eq:sewing_helper_2_interpolate}\n \\Big\\|\\Expec{h\\big(X,\\bar{\\Phi}^{X}_{s,t}(Y)\\big)-h\\big(\\bar{X},\\bar{\\Phi}^{\\bar{X}}_{s,t}(Y)\\big)\\,\\middle|\\,\\mathcal{F}_s}\\Big\\|_{L^p}\\lesssim\\Lip{h}\\big(1+\\|Y\\|_{L^q}\\big)\\left(1\\wedge\\frac{\\varepsilon^{\\zeta}}{|t-s|^{\\zeta}}\\right).\n \\end{equation}\n On the other hand, by the continuity lemma (\\cref{lem:continuity}),\n \\begin{align*}\n &\\phantom{\\lesssim}\\Big\\|\\Expec{h\\big(X,\\bar{\\Phi}^{X}_{s,t}(Y)\\big)-h\\big(\\bar{X},\\bar{\\Phi}^{\\bar{X}}_{s,t}(Y)\\big)\\,\\middle|\\,\\mathcal{F}_s}\\Big\\|_{L^p}\\\\\n &\\lesssim\\Lip{h}\\left(\\|X-\\bar{X}\\|_{L^p}+\\Big\\|\\bar{\\Phi}^{X}_{s,t}(Y)-\\bar{\\Phi}^{\\bar{X}}_{s,t}(Y)\\Big\\|_{L^p}\\right)\\lesssim\\Lip{h}\\|X-\\bar{X}\\|_{L^{p}}.\n \\end{align*}\n Finally, we interpolate this bound with \\eqref{eq:sewing_helper_2_interpolate}.\n\\end{proof}\n\nOur remaining task is to derive an estimate on the distance between $\\Phi^Z_{s,t}$ and $\\bar{\\Phi}_{s,t}^{Z_s}$. This is based on the following version of \\cref{lem:continuity}:\n\\begin{lemma}\\label{lem:continuity_path}\n Let $p\\geq 1$ and suppose that $b(x,\\cdot)\\in\\S_p(\\kappa,R)$ for all $x\\in\\ensuremath{\\mathbb{R}}^d$. Let $Y\\in L^p$ be ${\\mathcal F}_s$-measurable and $Z$ be a continuous process. Assume that $|Z|_\\infty\\in L^\\infty$. 
Then\n \\begin{equation*}\n \\Big\\|\\bar{\\Phi}^{Z_s}_{s,t}(Y)-\\Phi_{s,t}^Z(Y)\\Big\\|_{L^p}\\lesssim\\Big\\|\\sup_{r\\in[s,t]}|Z_r-Z_s|\\Big\\|_{L^{p}}\n \\end{equation*}\n\\end{lemma}\n\\begin{proof}\n The reader can easily check that the very same argument we gave at the beginning of the proof of \\cref{lem:continuity} also shows that, for $0\\leq s\\leq u\\leq r\\leq T$,\n \\begin{align*}\n \\Big|\\bar{\\Phi}_{u,r}^{Z_s}(Y)-\\Phi_{u,r}^Z(Y)\\Big|&\\lesssim\\Lip[\\||Z|_{\\infty}\\|_{L^\\infty}]{b}\\left(\\int_{\\frac{u}{\\varepsilon}}^{\\frac{r}{\\varepsilon}} e^{2(\\Lambda+1)\\left(\\frac{r}{\\varepsilon}-v\\right)}|Z_{\\varepsilon v}-Z_s|^2\\,dv\\right)^{\\frac12}\\\\\n &\\lesssim \\sup_{v\\in[u,r]}|Z_v-Z_s| e^{(\\Lambda+1)\\frac{|r-u|}{\\varepsilon}}.\n \\end{align*}\n The asserted bound then follows along the same lines as \\cref{lem:continuity}.\n\\end{proof}\n\nThe following estimate is now an easy consequence:\n\\begin{lemma}\\label{lem:sewing_helper_3}\n Let $p\\geq 1$ and suppose that $b(x,\\cdot)\\in\\S_p(\\kappa,R)$ for all $x\\in\\ensuremath{\\mathbb{R}}^d$. Let $h:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}$ be Lipschitz continuous. Assume furthermore that $X$ and $Y$ are ${\\mathcal F}_u$- and ${\\mathcal F}_s$-measurable random variables, respectively. Moreover, let $Z\\in{\\mathcal B}_{\\alpha, p}([0,T],\\ensuremath{\\mathbb{R}}^d)$ for some $\\alpha>0$ and assume that $|Z|_\\infty\\in L^\\infty$. Then\n \\begin{equation}\\label{eq:sewing_moderately_3}\n \\left\\|\\Expec{h\\Big(X,\\bar{\\Phi}^{X}_{u,t}\\big(\\bar{\\Phi}_{s,u}^{Z_s}(Y)\\big)\\Big)-h\\Big(X,\\bar{\\Phi}^{X}_{u,t}\\big(\\Phi_{s,u}^{Z}(Y)\\big)\\Big)\\,\\middle|\\,\\mathcal{F}_u}\\right\\|_{L^{p}}\\lesssim \\Lip{h}\\|Z\\|_{{\\mathcal B}_{\\alpha,p}}|u-s|^{\\alpha}e^{-c\\frac{|t-u|}{\\varepsilon}}.\n \\end{equation} \n\\end{lemma}\n\\begin{proof}\n By \\cref{cor:fast_different_initial}, we have that\n \\begin{align*}\n &\\phantom{\\lesssim}\\Big\\|\\Expec{h\\Big(X,\\bar{\\Phi}^{X}_{u,t}\\big(\\bar{\\Phi}_{s,u}^{Z_s}(Y)\\big)\\Big)-h\\Big(X,\\bar{\\Phi}^{X}_{u,t}\\big(\\Phi_{s,u}^{Z}(Y)\\big)\\Big)\\,\\middle|\\,\\mathcal{F}_u}\\Big\\|_{L^p}\\\\\n &\\lesssim \\Lip{h}\\Big\\|\\bar{\\Phi}_{s,u}^{Z_s}(Y)-\\Phi_{s,u}^{Z}(Y)\\Big\\|_{L^p}e^{-c\\frac{|t-u|}{\\varepsilon}}.\n \\end{align*}\n By \\cref{lem:continuity_path},\n \\begin{equation*}\n \\Big\\|\\bar{\\Phi}_{s,u}^{Z_s}(Y)-\\Phi_{s,u}^{Z}(Y)\\Big\\|_{L^p}\\lesssim\\|Z\\|_{{\\mathcal B}_{\\alpha,p}}|u-s|^{\\alpha}.\\qedhere\n \\end{equation*}\n\\end{proof}\n\nFinally, we can establish the second estimate needed for the application of \\cref{prop:stochastic_sewing}:\n\\begin{proposition}\\label{prop:sewing_2}\n Let $1\\leq p1-H$ and $|X|_\\infty\\in L^\\infty$. Define \n \\begin{equation*}\n A_{s,t}\\ensuremath\\triangleq\\int_s^t \\bigg(h\\Big(X_s,\\bar{\\Phi}_{s,r}^{X_s}\\big(\\Phi_{0,s}^X(Y_0)\\big)\\Big)-\\bar{h}(X_s)\\bigg)\\,dB_r,\n \\end{equation*}\n in the mixed Wiener-Young sense, see \\eqref{eq:wiener_young}. Then $A\\in\\bar{H}_{\\bar{\\eta}}^p$ for any $\\bar{\\eta}<\\alpha+H$ and any $\\varepsilon>0$. Moreover, there is a $\\gamma>0$ such that\n \\begin{equation*}\n \\vertiii{A}_{\\bar{H}_{\\bar{\\eta}}^p}\\lesssim\\Lip{h}\\big(1\\vee\\||X|_\\infty\\|_{L^\\infty}\\big)\\big(1\\vee\\|X\\|_{{\\mathcal B}_\\alpha,p}\\big)\\varepsilon^{\\gamma}.\n \\end{equation*}\n\\end{proposition}\n\\begin{proof}\n Fix $1<\\bar{\\eta}<\\alpha+H$ and choose $\\rho\\in(0,1)$ such that $\\bar{\\eta}0$ sufficiently small. 
Here, the last inequality used that, for any $p\\geq 1$, $\\big\\|\\dot{\\bar{B}}_r^u\\big\\|_{L^{p}}\\lesssim |r-u|^{H-1}$ together with the elementary fact\n \\begin{equation*}\n \\int_u^t\\frac{1}{|r-u|^{1-H}}\\left(1\\wedge\\frac{\\varepsilon^\\gamma}{|r-u|^\\gamma}\\right)\\,dr\\lesssim\\varepsilon^\\delta|t-u|^{H-\\delta}\n \\end{equation*}\n for any $\\delta\\in(0,\\gamma]$.\n\n The term $\\rom{2}$ can be handled similarly in view of \\cref{lem:sewing_helper_3}.\n\\end{proof}\n\n\\subsection{Proof of \\cref{thm:feedback_fractional}}\\label{sec:proof}\n\nThe estimates of the previous two subsection furnish the following fundamental estimates:\n\\begin{proposition}\\label{prop:final_control}\n Let $2\\leq p1-H$ such that $X$ has $\\alpha$-H\\\"older sample paths and $X\\in{\\mathcal B}_{\\alpha,p}$. If, in addition, $|X|_\\infty\\in L^\\infty$, then, for any $\\eta0$ such that\n \\begin{equation}\n \\left\\|\\int_0^\\cdot \\Big(h\\big(X_r,\\Phi_{0,s}^X(Y_0)\\big)-\\bar{h}(X_r)\\Big)\\,dB_r\\right\\|_{{\\mathcal B}_{\\eta,p}}\\lesssim\\big(|h|_\\infty+\\Lip{h}\\big)\\big(1+\\||X|_\\infty\\|_{L^\\infty}\\big)\\big(1+\\|X\\|_{{\\mathcal B}_{\\alpha,p}}\\big)\\varepsilon^{\\gamma},\\label{eq:combine_sewing_1}\n \\end{equation}\n and\n \\begin{equation}\\label{eq:combine_sewing_2}\n \\left\\|\\int_0^\\cdot h\\big(X_r,\\Phi_{0,r}^X(Y_0)\\big)\\,dB_r\\right\\|_{{\\mathcal B}_{\\eta,p}}\\lesssim\\big(|h|_\\infty+\\Lip{h}\\big)\\big(1+\\||X|_\\infty\\|_{L^\\infty}\\big)\\big(1+\\|X\\|_{{\\mathcal B}_{\\alpha,p}}\\big),\n \\end{equation}\n uniformly in $0\\leq s0$ and $M>0$, let us define the $({\\mathcal F}_t)_{t\\geq 0}$-stopping time $\\tau_M^\\varepsilon\\ensuremath\\triangleq\\inf\\{t\\geq 0:\\,|X_t^\\varepsilon|>M\\}$. Applying the previous proposition to the slow-fast system \\eqref{eq:slow_feedback_sec}--\\eqref{eq:fast_feedback_sec}, we can deduce relative compactness of the stopped slow motion $X^{\\varepsilon,M}\\ensuremath\\triangleq X^\\varepsilon_{\\cdot\\wedge\\tau_M^\\varepsilon}$:\n\n\\begin{corollary}\\label{cor:tightness}\n Consider the slow-fast system \\eqref{eq:slow_feedback_sec}--\\eqref{eq:fast_feedback_sec} with \\cref{cond:feedback} in place. Let $\\beta<\\frac12\\wedge\\hat{H}$ and $p\\geq 2$. Suppose that there are $\\kappa,R>0$ and $q>p$ such that $b(x,\\cdot)\\in\\S_q(\\kappa,R)$ for each $x\\in\\ensuremath{\\mathbb{R}}^d$. Then, for any $M>0$,\n \\begin{equation*}\n \\sup_{\\varepsilon\\in(0,1]}\\big\\|X^{\\varepsilon,M}\\big\\|_{{\\mathcal B}_{\\beta,p}}<\\infty.\n \\end{equation*}\n\\end{corollary} \n\\begin{proof}\n Recall from \\cref{cor:norm_bound_solution} that, for each $\\varepsilon>0$, there is a unique global solution $X^\\varepsilon$ to \\eqref{eq:slow_feedback_sec} with values in $\\ensuremath{\\mathcal{C}}^{\\alpha}([0,T],\\ensuremath{\\mathbb{R}}^d)$ for some $\\alpha>1-H$. Moreover, since the H\\\"older norm of the stopped solution $X^{\\varepsilon,M}$ is controlled by the H\\\"older norm of $X^\\varepsilon$, the argument of \\cref{cor:norm_bound_solution} also shows that $\\big\\|X^{\\varepsilon,M}\\big\\|_{{\\mathcal B}_{\\beta,p}}<\\infty$ for each $\\beta<\\frac12\\wedge\\hat{H}$ and $p\\geq 1$. Employing \\cref{prop:final_control}, we obtain that, for any $\\gamma0$ sufficiently small, the proof is concluded by a standard iteration argument.\n\\end{proof}\n\nNow we can finish the proof of \\cref{thm:feedback_fractional} by localizing the argument of Hairer and Li. 
To this end, we rely on the following deterministic residue lemma:\n\\begin{lemma}[Residue Lemma]\\label{lem:residue_lemma}\n Let $F:\\ensuremath{\\mathbb{R}}^d\\to\\ensuremath{\\mathbb{R}}^d$ be Lipschitz continuous, $G:\\ensuremath{\\mathbb{R}}^d\\to\\Lin[m]{d}$ be of class $\\ensuremath{\\mathcal{C}}_b^2$, and $\\ensuremath{\\mathfrak{h}}\\in\\ensuremath{\\mathcal{C}}^\\alpha([0,T],\\ensuremath{\\mathbb{R}}^n)$ for some $\\alpha>\\frac12$. Moreover, let $Z,\\bar{Z}\\in\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}([0,T],\\ensuremath{\\mathbb{R}}^d)$ for some $\\tilde{\\alpha}\\in(1-\\alpha,\\alpha]$ with $Z_0=\\bar{Z}_0$. Then there is a constant $C$ depending only on $F$, $G$, and the terminal time $T$ such that\n \\begin{equation*}\n |z-\\bar{z}|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}}\\leq C\\exp\\left(C|\\ensuremath{\\mathfrak{h}}|_{\\ensuremath{\\mathcal{C}}^\\alpha}^{\\frac1\\alpha}+C|Z|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}}^{\\frac{1}{\\tilde{\\alpha}}}+C|\\bar{Z}|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}}^{\\frac{1}{\\tilde{\\alpha}}}\\right)|Z-\\bar{Z}|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}},\n \\end{equation*}\n where $z$ and $\\bar{z}$ are the solutions to the equations\n \\begin{equation*}\n z_t=Z_t+\\int_0^t F(z_s)\\,ds+\\int_0^t G(z_s)\\,d\\ensuremath{\\mathfrak{h}}_s,\\qquad\\bar{z}_t=\\bar{Z}_t+\\int_0^t F(z_s)\\,ds+\\int_0^t F(\\bar{z}_s)\\,d\\ensuremath{\\mathfrak{h}}_s.\n \\end{equation*}\n\\end{lemma}\nAlbeit the statement of \\cref{lem:residue_lemma} is slightly stronger than \\cite[Lemma 2.2]{Hairer2020}, it is straight-forward to show that the very same proof still applies. We therefore omit the details and finally turn to the proof of the main result of this article:\n\n\\begin{proof}[{Proof of \\cref{thm:feedback_fractional}}]\n First observe that, by the assumptions of the theorem and \\cref{cor:lipschitz_average}, there exists a unique global solution to the averaged equation \\eqref{eq:effective_dynamics}, see \\cite{Lyons1998,Lyons2002,Rascanu2002}. We fix $\\bar{\\alpha}\\in(\\alpha,H)$ with $(\\bar\\alpha-\\alpha)^{-1}0$. Consequently, by \\cref{prop:final_control}, we deduce that\n \\begin{align*}\n \\left\\|\\int_0^\\cdot \\Big(g\\big(X_r^{\\varepsilon,M},\\Phi_{0,r}^{X^{\\varepsilon,M}}(Y_0)\\big)-\\bar{g}\\big(X_r^{\\varepsilon,M}\\big)\\Big)\\,dB_r\\right\\|_{{\\mathcal B}_{\\bar{\\alpha},p}}&\\lesssim\\varepsilon^\\gamma,\n \\\\\n \\left\\|\\int_0^\\cdot \\Big(f\\big(X_r^{\\varepsilon,M},\\Phi_{0,r}^{X^{\\varepsilon,M}}(Y_0)\\big)-\\bar{f}\\big(X_r^{\\varepsilon,M}\\big)\\Big)\\,dr\\right\\|_{{\\mathcal B}_{\\bar{\\alpha},p}}&\\lesssim\\varepsilon^\\gamma.\n \\end{align*}\n Therefore, $\\big\\|\\hat{X}^{\\varepsilon,M}-\\bar{X}^{\\varepsilon,M}\\big\\|_{{\\mathcal B}_{\\bar{\\alpha},p}}\\lesssim\\varepsilon^\\gamma$, where\n \\begin{align*}\n \\hat{X}^{\\varepsilon,M}_t&\\ensuremath\\triangleq X_0+\\int_0^t f\\big(X^{\\varepsilon,M}_r,\\Phi_{0,r}^{X^{\\varepsilon,M}}(Y_0)\\big)\\,dr+\\int_0^t g\\big(X^{\\varepsilon,M}_r,\\Phi_{0,r}^{X^{\\varepsilon,M}}(Y_0)\\big)\\,dB_r,\\\\\n \\bar{X}^{\\varepsilon,M}_t&\\ensuremath\\triangleq X_0+\\int_0^t\\bar{f}\\big(X^{\\varepsilon,M}_r\\big)\\,dr+\\int_0^t\\bar{g}\\big(X^{\\varepsilon,M}_r\\big)\\,dB_r.\n \\end{align*}\n In particular, $\\big|\\hat{X}^{\\varepsilon,M}-\\bar{X}^{\\varepsilon,M}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha}\\to 0$ in probability by the embedding \\eqref{eq:embeddings}. 
Note also the decomposition\n \\begin{equation*}\n X_t^{\\varepsilon,M}=\\hat{X}_t^{\\varepsilon,M}-\\bar{X}_t^{\\varepsilon,M}+X_0+\\int_0^t\\bar{f}(X_r^{\\varepsilon})\\,dr+\\int_0^t \\bar{g}(X_r^{\\varepsilon})\\,dB_r,\\quad t\\in[0,\\tau_M^\\varepsilon\\wedge T],\n \\end{equation*}\n whence \\cref{lem:residue_lemma} furnishes the bound\n \\begin{equation}\\label{eq:residue_bound}\n \\big|X^{\\varepsilon}-\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha([0,\\tau_M^\\varepsilon\\wedge T])}\\leq C\\exp\\left(C|B|_{\\ensuremath{\\mathcal{C}}^\\alpha}^{\\frac{1}{\\alpha}}+C\\big|\\hat{X}^{\\varepsilon,M}-\\bar{X}^{\\varepsilon,M}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha}^{\\frac{1}{\\alpha}}\\right)\\big|\\hat{X}^{\\varepsilon,M}-\\bar{X}^{\\varepsilon,M}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha}.\n \\end{equation}\n As we have seen above, for each $M>0$, the right-hand side goes to $0$ in probability as $\\varepsilon\\to 0$. Hence, we also have that $\\big|X^{\\varepsilon}-\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha([0,\\tau_M^\\varepsilon\\wedge T])}\\to 0$ in probability.\n\n On the other hand, note that\n \\begin{align}\n \t\\ensuremath\\mathbb{P}(\\tau_M^\\varepsilonT^{-\\gamma}(M-\\|X_0\\|_{L^\\infty})-1\\right)\\label{eq:split}\n \\end{align}\n for each $\\gamma>0$. By \\cref{prop:nualart}, we know that $\\big|\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\gamma([0,T])}\\in L^1$ provided that $\\gamma<\\frac12$. We fix such a $\\gamma$.\n\n It is now easy to finish the proof. Let $\\delta_1,\\delta_2\\in(0,1)$ be given. Then we can find a $M>0$ such that\n \\begin{equation*}\n \t\\ensuremath\\mathbb{P}\\left(\\big|\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\gamma([0,T])}>T^{-\\gamma}(M-\\|X_0\\|_{L^\\infty})-1\\right)\\leq\\frac{\\delta_2}{2}.\n \\end{equation*}\n For this $M$, we can also find an $\\varepsilon_0>0$ such that\n \\begin{equation*}\n \t\\ensuremath\\mathbb{P}\\left(\\big|X^{\\varepsilon}-\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha([0,\\tau_M^\\varepsilon\\wedge T])}>\\delta_1\\right)\\leq\\frac{\\delta_2}{4}\\qquad\\forall\\,\\varepsilon\\in(0,\\varepsilon_0).\n \\end{equation*}\n The estimate \\eqref{eq:split} therefore yields that\n \\begin{align*}\n \t\\ensuremath\\mathbb{P}\\left(\\big|X^{\\varepsilon}-\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha([0,T])}>\\delta_1\\right)&\\leq\\ensuremath\\mathbb{P}\\left(\\big|X^{\\varepsilon}-\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha([0,\\tau_M^\\varepsilon\\wedge T])}>\\delta_1,\\tau_M^\\varepsilon\\geq T\\right)+\\ensuremath\\mathbb{P}(\\tau_M^\\varepsilon\\delta_1\\right)+\\frac{\\delta_2}{2}\\leq\\delta_2\n \\end{align*}\n for all $\\varepsilon\\in(0,\\varepsilon_0)$. Hence, $\\big|X^{\\varepsilon}-\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha([0,T])}\\to 0$ in probability as $\\varepsilon\\to 0$, as required.\n\\end{proof}\n\n\\begin{remark}\n The proof above shows that we can choose\n \\begin{equation*}\n \\lambda_0=\\inf_{x\\in\\ensuremath{\\mathbb{R}}^d}\\Lambda(\\kappa,R,p)\n \\end{equation*}\n for any $p>\\max\\big(2,(H-\\alpha)^{-1}\\big)$ in \\cref{thm:feedback_fractional}. Here, $\\Lambda$ is the constant from \\cref{prop:conditional_initial_condition_wasserstein}.\n\\end{remark}\n\n\n\\subsection{Smoothness of the Averaged Coefficients}\n\nLet us finally show that an \\emph{everywhere contractive} fast process falls in the regime of \\cref{thm:feedback_fractional}. While smoothness of $\\bar g$ also holds under less restrictive conditions, the proof becomes much more involved. 
To keep this article concise, we chose to report on these results in future work.\n\n\n\\begin{corollary}\\label{cor:smooth}\n Suppose that\n \\begin{itemize}\n \\item $g\\in\\ensuremath{\\mathcal{C}}_b^3\\big(\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n,\\Lin[m]{d}\\big)$,\n \\item there is a $\\kappa>0$ such that $b(x,\\cdot)\\in\\S(\\kappa,0,0)$ for every $x\\in\\ensuremath{\\mathbb{R}}^d$,\n \\item $b\\in\\ensuremath{\\mathcal{C}}^3\\big(\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n,\\ensuremath{\\mathbb{R}}^d\\big)$ is globally Lipschitz continuous and there is an $N\\in\\ensuremath{\\mathbb{N}}$ such that, for each $i,j,k\\in\\{x,y\\}$,\n \\begin{equation*}\n |D^2_{i,j} b(x,y)|+|D^3_{i,j,k}b(x,y)|\\lesssim 1+|y|^N\\qquad\\forall\\,x\\in\\ensuremath{\\mathbb{R}}^d,\\,\\forall\\, y\\in\\ensuremath{\\mathbb{R}}^n.\n \\end{equation*}\n \\end{itemize}\n Then the conclusion of \\cref{thm:feedback_fractional} holds.\n\\end{corollary}\n\\begin{example}\n Let $V\\in\\ensuremath{\\mathcal{C}}^4(\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n)$. If $\\inf_{x,y}D_{y,y}^2 V(x,y)\\geq\\kappa$, $|D^2_{x,y}V|_\\infty+|D^2_{y,y}V|_\\infty<\\infty$, and, for each $i,j,k\\in\\{x,y\\}$,\n \\begin{equation*}\n |D^3_{i,j,y}V(x,y)|+|D^4_{i,j,k,y}V(x,y)|\\lesssim 1+|y|^N\\qquad\\forall\\,x\\in\\ensuremath{\\mathbb{R}}^d,\\,\\forall\\, y\\in\\ensuremath{\\mathbb{R}}^n,\n \\end{equation*}\n then $b=-D_y V$ falls in the regime of \\cref{cor:smooth}. To give a concrete example, we can choose $V(x,y)=\\big(2+\\sin(x)\\big)\\big(y^2+\\sin(y)\\big)$, which furnishes the drift $b(x,y)=-\\big(2+\\sin(x)\\big)\\big(2y+\\cos(y)\\big)$.\n\\end{example}\n\n\n\\begin{proof}[Proof of \\cref{cor:smooth}]\n In order to apply \\cref{thm:feedback_fractional} it is enough to show that, for any $g\\in\\ensuremath{\\mathcal{C}}_b^3(\\ensuremath{\\mathbb{R}}^n)$, the function\n \\begin{equation*}\n \\bar{h}(x)\\ensuremath\\triangleq\\int_{\\ensuremath{\\mathbb{R}}^n}g(y)\\,\\pi^x(dy)\n \\end{equation*}\n is again of class $\\ensuremath{\\mathcal{C}}_b^2(\\ensuremath{\\mathbb{R}}^d)$. To this end, we define $h_t(x)\\ensuremath\\triangleq\\Expec{g(Y_t^x)}$ where $Y^x$ is the solution to the SDE\n \\begin{equation*}\n dY_t^x=b(x,Y_t^x)\\,dt+\\sigma\\,d\\hat{B}\n \\end{equation*}\n started in the generalized initial condition $\\delta_0\\otimes\\ensuremath{\\mathsf{W}}$. Note that $h_t\\to\\bar{h}$ pointwise as $t\\to\\infty$ by \\cref{thm:geometric}. Since $h_t\\in\\ensuremath{\\mathcal{C}}_b^2(\\ensuremath{\\mathbb{R}}^d)$ for each $t\\geq 0$, it thus suffices to show that \n \\begin{equation}\\label{eq:derivative_bound}\n \\sup_{t\\geq 0} \\left(|D h_t|_\\infty+|D^2 h_t|_\\infty\\right)<\\infty\n \\end{equation} \n and both $D h_t$ and $D^2 h_t$ converge locally uniformly along a subsequence. 
By a straight-forward `diagonal sequence' argument, we actually only need to prove uniform convergence on a fixed compact $K\\subset\\ensuremath{\\mathbb{R}}^d$.\n\n Under the assumptions of the corollary, it is easy to see that the mapping $x\\mapsto Y_t^x$ is three-times differentiable for each $t\\geq 0$ and it holds that\n \\begin{align*}\n D_x Y_t^x&=\\int_0^tJ_{s,t}D_x b(x,Y_s^x)\\,ds, \\label{eq:first_derivative}\\\\\n D^2_{x,x} Y_t^x(u\\otimes v)&=\\int_0^tJ_{s,t}\\Big(D_{x,x}^2 b(x,Y_s^x)(u\\otimes v)+2D_{x,y}^2 b(x,Y_s^x)\\big(u\\otimes D_x Y_s^x(v)\\big) \\nonumber\\\\\n &\\phantom{=\\int_0^tJ_{s,t}}+D^2_{y,y}b(x,Y_s^x)\\big(D_x Y_s^x(u)\\otimes D_x Y_s^x(v)\\big)\\Big)\\,ds,\n \\end{align*}\n where $J_{s,t}$ solves the homogeneous problem\n \\begin{equation*}\n J_{s,t}=\\mathrm{id}+\\int_s^t D_yb(x,Y_r^x)J_{s,r}\\,dr.\n \\end{equation*}\n Since $b(x,\\cdot)\\in\\S(\\kappa,0,0)$, it is not hard to see that, for each $x\\in\\ensuremath{\\mathbb{R}}^d$ and $y\\in\\ensuremath{\\mathbb{R}}^n$, $D_yb(x,y)\\leq-\\kappa$ in the sense of quadratic forms. In particular, the operator norm of $J$ satisfies the bound\n \\begin{equation*}\n |J_{s,t}|\\leq e^{-\\kappa(t-s)}.\n \\end{equation*}\n By an argument similar to \\cref{lem:fast_process_moments}, it follows that, for any $p\\geq 1$,\n \\begin{equation*}\n \\sup_{t\\geq 0}\\sup_{x\\in\\ensuremath{\\mathbb{R}}^d}\\big\\|D_xY_t^x\\big\\|_{L^p}<\\infty\\quad\\text{and}\\quad \\sup_{t\\geq 0}\\sup_{x\\in\\ensuremath{\\mathbb{R}}^d}\\big\\|D_{x,x}^2Y_t^x\\big\\|_{L^p}<\\infty.\n \\end{equation*}\n Based on this, it is straight-forward to verify \\eqref{eq:derivative_bound}. Consequently, by the Arzela-Ascoli theorem, there is a subsequence of times along which $Dh$ converges uniformly on $K$. By a similar---albeit more tedious---computation, the reader can easily check that also \n \\begin{equation*}\n \\sup_{t\\geq 0}\\sup_{x\\in\\ensuremath{\\mathbb{R}}^d}\\big\\|D_{x,x,x}^3Y_t^x\\big\\|_{L^p}<\\infty.\n \\end{equation*}\n In particular, $D^3 h$ is uniformly bounded, whence we can pass to a further subsequence along which $D^2 h$ also converges uniformly on $K$. Therefore, $\\bar{h}\\in\\ensuremath{\\mathcal{C}}_b^2(\\ensuremath{\\mathbb{R}}^d)$ as required.\n\\end{proof}\n\n{\n\\footnotesize\n\\bibliographystyle{alpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\t\t\t\t\n\t\tA \\emph{square} $S$ in a matrix $M= \\left( a_{i,j} \\right)$ is a $2 \\times 2$ submatrix of the form \\[S = \\left(\\begin{array}{cc}\n\t\t\ta_{i,j} & a_{i, j+s} \\\\\n\t\t\ta_{i+s,j} & a_{i+s, j+s}\n\t\t\\end{array}\\right).\n\t\t\\]\n\t\t\n\t\tIn 1996 Erickson \\cite{erickson1996introduction} asked for the largest $n$ such that there exists an $n \\times n$ binary matrix $M$ with no squares which have constant entries. An upper bound was first given by Axenovich and Manske \\cite{axenovich2008monochromatic}, before the answer 14 was determined by Bacjer and Eliahou in \\cite{bacher2010extremal}. \n\t\t\n\t\tRecently, Ar\\'evalo, Montejano and Rold\\'an-Pensado \\cite{arevalo2020zero} initiated the study of a zero-sum variant of Erickson's problem. Here we wish to avoid \\emph{zero-sum squares}, squares with entries that sum to $0$.\n\t\t\n\t\tZero-sum problems have been well-studied since the classic Erd\\H{o}s-Ginsburg-Ziv Theorem in 1961 \\cite{erdos1961theorem}. 
Much of the research has been on zero-sum problems in finite abelian groups (see the survey \\cite{gao2006zero} for details), but problems have also been studied in other settings such as on graphs (see e.g. \\cite{caro2016ero, caro2019zero, caro2020zero, bialostocki1993zero}). Of particular relevance is the result of Balister, Caro, Rousseau and Yuster in \\cite{balister2002zero} on submatrices of integer valued matrices where the rows and columns sum to $0 \\mod p$, and the result of Caro, Hansberg and Montejano on zero-sum subsequences in bounded sum $\\{-1,1\\}$-sequences \\cite{caro2019zerosum}. \n\t\t\n\t\tGiven an $n \\times m$ matrix $M = \\left( a_{i,j} \\right)$ define the \\emph{discrepancy} of $M$ as the sum of the entries, that is\n\t\t\\[\\disc(M) = \\sum_{\\substack{1 \\leq i \\leq n\\\\1 \\leq j \\leq m}} a_{i,j}. \\]\n\t\tWe say a square $S$ is a \\emph{zero-sum square} if $\\disc(S) = 0$, or equivalently,\n\t\t\\[a_{i,j} + a_{i, j+s} + a_{i+s,j} + a_{i+s, j+s} = 0.\\]\n\t\t\n\t\tWe will be interested in $\\{-1,1\\}$-matrices $M$ which do not contain any zero-sum squares, and we shall call such matrices \\emph{zero-sum square free}. Clearly matrices with at most one $-1$ are zero-sum square free and, in general, there are many such matrices when the number of $-1$s is low. Instead, we will be interested in matrices which have a similar number of $1$s and $-1$s or, equivalently, matrices with small discrepancy (in absolute value).\n\t\t\n\t\tAn $n \\times m$ $\\{-1,1\\}$-matrix $M = \\left(a_{i,j}\\right)$ is said to be \\emph{$t$-diagonal} for some $0 \\leq t \\leq n +m -1$ if\n\t\t\\[a_{i,j} = \\begin{cases}\n\t\t\t1 & i + j \\leq t + 1,\\\\\n\t\t\t-1 & i + j \\geq t +2.\n\t\t\\end{cases}\\]\n\t\tWe say a matrix $M$ is \\emph{diagonal} if there is some $t$ such that a $t$-diagonal matrix $N$ can be obtained from $M$ by applying vertical and horizontal reflections. \n\t\tDiagonal matrices are of particular interest since they can have low discrepancy, yet they never contain a zero-sum square.\n\t\t\n\t\tAr\\'evalo, Montejano and Rold\\'an-Pensado \\cite{arevalo2020zero} proved that, except when $n \\leq 4$, every $n \\times n$ non-diagonal $\\{-1,1\\}$-matrix $M$ with $|\\disc(M)| \\leq n$ has a zero-sum square. They remark that it should be possible to extend their proof to give a bound of $2n$, and they conjecture that the bound $Cn$ should hold for any $C > 0$ when $n$ is large enough relative to $C$.\n\t\t\n\t\t\\begin{conjecture}[Conjecture 3 in \\cite{arevalo2020zero}]\n\t\t\tFor every $C > 0$ there is a integer $N$ such that whenever $n \\geq N$ the\n\t\t\tfollowing holds: every $n \\times n$ non-diagonal $\\{-1, 1\\}$-matrix $M$ with $|\\disc(M)| \\leq Cn$\n\t\t\tcontains a zero-sum square.\n\t\t\\end{conjecture}\n\t\t\n\t\tWe prove this conjecture in a strong sense with the following theorem.\n\t\t\n\t\t\\begin{theorem}\\label{thm:low-bound}\n\t\t\tLet $n \\geq 5$. Every $n \\times n$ non-diagonal $\\{-1,1\\}$-matrix $M$ with $|\\disc(M)| \\leq n^2\/4$ contains a zero-sum square.\n\t\t\\end{theorem}\n\t\n\t\tThe best known construction for a non-diagonal zero-sum square free matrix has discrepancy close to $n^2\/2$, and our computer experiments suggest that this construction is in fact optimal. Closing the gap between the upper and lower bounds remains a very interesting problem and we discuss it further in Section \\ref{sec:open-problems}. 
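\n\t\t\n\t\tLet us also record, for illustration, why diagonal matrices are indeed zero-sum square free. In a $t$-diagonal matrix the entry $a_{i,j}$ equals $1$ precisely when $i + j \\leq t + 1$, so the four entries of a square with top-left corner $(i,j)$ and side length $s$ are determined by the index sums\n\t\t\\[i + j \\; \\leq \\; i + j + s \\; = \\; i + j + s \\; \\leq \\; i + j + 2s.\\]\n\t\tAs the two middle sums coincide, the number of $1$s among the four entries is $0$, $1$, $3$ or $4$, never exactly $2$, and hence $\\disc(S) \\in \\{-4, -2, 2, 4\\}$ for every square $S$. Since reflections map squares to squares, the same conclusion holds for every diagonal matrix.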
\n\t\t\t\t\n\t\t\n\t\t\\section{Proof}\n\t For $p\\leq r$ and $q \\leq s$ define the \\emph{consecutive submatrix} $M[p:r, q:s]$ by\n\t\t\\[M[p:r, q:s] = \\left(\\begin{array}{cccc}\n\t\t\ta_{p,q} & a_{p, q+1} & \\dotsb & a_{p, s} \\\\\n\t\t\ta_{p+1, q} & a_{p+1, q+1} & \\dotsb & a_{p+1, s} \\\\\n\t\t\t\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\t\t\ta_{r, q} & a_{r, q+1} & \\dotsb & a_{r,s}\n\t\t\\end{array}\n\t\t \\right).\n\t\t \\]\n\t\t Throughout the rest of this paper, we will assume that all submatrices except squares are consecutive submatrices.\n\t\t\n\t\tWe start by stating the following lemma from \\cite{arevalo2020zero} which, starting from a small $t'$-diagonal submatrix $M'$, determines many entries of the matrix $M$. An example application is shown in Figure \\ref{fig:struct}.\n\t\t\n\t\t\n\t\n\t\t\\begin{lemma}[Claim 3 in \\cite{arevalo2020zero}]\n\t\t\t\\label{lem:struct}\n\t\t\tLet $M$ be an $n \\times n $ $\\{-1,1\\}$-matrix with no zero-sum squares, and suppose that there is a submatrix $M' = M[p: p+s, q: q+s]$ which is $t'$-diagonal for some $2 \\leq t' \\leq 2s - 3$. Let $t = t' + p + q -2$ and suppose $t \\leq n$. \t\t\n\t\t\t\\begin{enumerate}\n\t\t\t\t\\item The submatrix \\[N = M[1: \\min(t + \\floor{t\/2}, n), 1:\\min(t + \\floor{t\/2}, n)]\\] is $t$-diagonal.\n\t\t\t\\end{enumerate} \n\t\t\t\n\t\t\tFurthermore, both $a_{i,j} = 1$ and $a_{j,i} = 1$ whenever $t + \\floor{t\/2} < j \\leq t + \\floor{t\/2} + t -2$ and one of the following holds:\n\t\t\t\\begin{enumerate}\n\t\t\t\t \\setcounter{enumi}{1}\n\t\t\t\t\\item $j - t \\leq i \\leq t + \\floor{\\frac{t}{2}}$;\n\t\t\t\t\\item $i \\leq \\floor{\\frac{t}{2}} - \\floor{\\frac{j - t - \\floor{t\/2} -1}{2}}$;\n\t\t\t\t\\item $i = j$.\n\t\t\t\\end{enumerate}\n\t\\end{lemma}\n\n\t\t\\begin{figure}\n\t\t\\centering\n\t\t\\begin{tikzpicture}[scale=0.5\\textwidth\/11cm]\n\t\t\t\n\t\t\t\n\t\t\t\\fill[yellow5] (0,10) rectangle +(5,1);\n\t\t\t\\fill[yellow5] (0,9) rectangle +(4,1);\n\t\t\t\\fill[yellow5] (0,8) rectangle +(1,1);\n\t\t\t\\fill[yellow5] (0,7) rectangle +(1,1);\n\t\t\t\\fill[yellow5] (0,6) rectangle +(1,1);\n\t\t\t\n\t\t\t\\fill[yellow7] (1,8) rectangle +(2,1);\n\t\t\t\\fill[yellow7] (1,7) rectangle +(1,1);\n\t\t\t\n\t\t\t\\fill[blue7] (3, 8) rectangle +(1,1);\n\t\t\t\\fill[blue7] (2, 7) rectangle +(2,1);\n\t\t\t\\fill[blue7] (1, 6) rectangle +(3,1);\n\t\t\t\n\t\t\t\\fill[blue5] (4, 6) rectangle +(1, 4);\n\t\t\t\\fill[blue5] (0, 4) rectangle +(7, 2);\n\t\t\t\\fill[blue5] (5, 6) rectangle +(2, 5);\n\t\t\t\n\t\t\t\\fill[blue3] (7, 10) rectangle +(3, 1);\n\t\t\t\\fill[blue3] (7, 9) rectangle +(2,1);\n\t\t\t\\fill[blue3] (0, 1) rectangle +(1, 3);\n\t\t\t\\fill[blue3] (1,2) rectangle +(1, 2);\n\t\t\t\n\t\t\t\\fill[lightblue5] (3, 3) rectangle +(4, 1);\n\t\t\t\\fill[lightblue5] (4, 2) rectangle +(3, 1);\n\t\t\t\\fill[lightblue5] (5, 1) rectangle +(2, 1);\n\t\t\t\n\t\t\t\\fill[lightblue5] (7,4) rectangle +(1, 4);\n\t\t\t\\fill[lightblue5] (8,4) rectangle +(1, 3);\n\t\t\t\\fill[lightblue5] (9,4) rectangle +(1, 2);\n\t\t\t\n\t\t\t\\fill[lightblue3] (7, 3) rectangle +(1,1);\n\t\t\t\\fill[lightblue3] (8, 2) rectangle +(1,1);\n\t\t\t\\fill[lightblue3] (9, 1) rectangle +(1,1);\n\t\t\t\n\t\t\t\\draw[black] (0,1) -- +(11,0);\n\t\t\t\\draw[black] (0,2) -- +(11,0);\n\t\t\t\\draw[black] (0,3) -- +(11,0);\n\t\t\t\\draw[black] (0,4) -- +(11,0);\n\t\t\t\\draw[black] (0,5) -- +(11,0);\n\t\t\t\\draw[black] (0,6) -- +(11,0);\n\t\t\t\\draw[black] (0,7) -- +(11,0);\n\t\t\t\\draw[black] (0,8) -- +(11,0);\n\t\t\t\\draw[black] (0,9) -- 
+(11,0);\n\t\t\t\\draw[black] (0,10) -- +(11,0);\n\t\t\t\\draw[black, very thick] (0,11) -- +(11,0);\n\t\t\t\n\t\t\t\\draw[very thick, black] (0,0) -- +(0,11);\n\t\t\t\\draw[black] (1,0) -- +(0,11);\n\t\t\t\\draw[black] (2,0) -- +(0,11);\n\t\t\t\\draw[black] (3,0) -- +(0,11);\n\t\t\t\\draw[black] (4,0) -- +(0,11);\n\t\t\t\\draw[black] (5,0) -- +(0,11);\n\t\t\t\\draw[black] (6,0) -- +(0,11);\n\t\t\t\\draw[black] (7,0) -- +(0,11);\n\t\t\t\\draw[black] (8,0) -- +(0,11);\n\t\t\t\\draw[black] (9,0) -- +(0,11);\n\t\t\t\\draw[black] (10,0) -- +(0,11);\n\t\t\t\n\t\t\t\\draw[very thick] (1,6) rectangle +(3,3);\n\t\t\t\n\t\t\t\n\t\t\t\\draw[|-|] (0, 11.5) -- +(5, 0);\n\t\t\t\\draw[-|] (5,11.5) -- +(2,0);\n\t\t\t\\draw[-|] (7, 11.5) -- +(3,0);\n\t\t\t\n\t\t\t\\draw node[above] at (2.5, 11.5) {$t$};\n\t\t\t\\draw node[above] at (6, 11.5) {$\\floor{t\/2}$};\n\t\t\t\\draw node[above] at (8.5, 11.5) {$t-2$};\n\t\t\t\n\t\t\t\n\t\t\\end{tikzpicture}\n\t\t\n\t\t\\caption{The entries known from applying Lemma \\ref{lem:struct}. The yellow squares represent $-1$s and the blue squares represent $1$s. The submatrix $M'$ is show in a darker shade.}\n\t\t\\label{fig:struct}\n\t\\end{figure}\n\n\tNote that we can apply this lemma even when it is a reflection of $M'$ which is $t$-diagonal; we just need to suitably reflect $M$ and potentially multiply by $-1$, and then undo these operations at the end. The matrix $N$ will always contains at least one of $a_{1,1}$, $a_{1,n}$, $a_{n,1}$ and $a_{n,n}$, and if $N$ contains two, then $M$ is diagonal.\n\t\n\tWe will also make use of the following observation. This will be used in conjunction with the above lemma to guarantee the existence of some additional $1$s, which allows us to show a particular submatrix has positive discrepancy. \n\n\t\\begin{observation}\n\t\t\\label{obs:oneof}\n\t\tLet $M$ be an $n \\times n $ $\\{-1,1\\}$-matrix with no zero-sum squares, and suppose that $a_{i,i} = 1$ for every $i \\in [n]$. Then at least one of $a_{i,j}$ and $a_{j,i}$ is 1. In particular, $a_{i,j} + a_{j,i} \\geq 0$ for all $1 \\leq i ,j \\leq n$.\n\t\\end{observation}\n\n\n\n\n\tThe final lemma we will need to prove Theorem \\ref{thm:low-bound} is a variation on Claims 1 and 2 from \\cite{arevalo2020zero}. The main difference between Lemma \\ref{lem:submatrix} and the result used by Ar\\'evalo, Montejano and Rold\\'an-Pensado is that we will always find a square submatrix. This simplifies the proof of Theorem \\ref{thm:low-bound}.\n\t\n\t\\begin{lemma}\n\t\t\\label{lem:submatrix}\n\t\tFor $n \\geq 8$, every $n \\times n $ $\\{-1,1\\}$-matrix $M$ with $|\\disc(M)| \\leq n^2\/4$ has an $n' \\times n'$ submatrix $M'$ with $|\\disc(M')| \\leq (n')^2\/4$ for some $(n-1)\/2 \\leq n' \\leq (n+1)\/2$.\n\t\\end{lemma}\n\t\\begin{proof}\n\t\tWe only prove this in the case $n$ is odd as the case $n$ is even is similar, although simpler.\n\t\tPartition the matrix $M$ into 9 regions as follows. Let the four $(n-1)\/2 \\times (n-1)\/2$ submatrices containing $a_{1,1}$, $a_{1,n}$, $a_{n,1}$ and $a_{n,n}$ be $A_1, \\dots, A_4$ respectively. Let the $(n-1)\/2 \\times 1$ submatrix between $A_1$ and $A_2$ be $B_1$ and define $B_2$, $B_3$ and $B_4$ similarly. Finally, let the central entry be $B_5$. 
The partition is shown in Figure \\ref{fig:regions-part}.\n\t\t\n\t\t\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}{0.45\\textwidth}\n\t\t\\centering\n\t\t\\begin{tikzpicture}[scale=\\textwidth\/10cm]\n\t\t\t\\draw[step=1, very thin, gray] (0,0) grid (9,9);\n\t\t\t\\draw[fill=none, stroke=black, very thick] (0,5) rectangle(4,9);\n\t\t\t\\draw[fill=none, stroke=black, very thick] (5,5) rectangle (9,9);\n\t\t\t\\draw[fill=none, stroke=black, very thick] (0,0) rectangle (4,4);\n\t\t\t\\draw[fill=none, stroke=black, very thick] (5,0) rectangle (9,4);\n\t\t\t\\draw node[fill=white] at (2, 7) {$A_1$};\n\t\t\t\\draw node[fill=white] at (7, 7) {$A_2$};\n\t\t\t\\draw node[fill=white] at (7, 2) {$A_3$};\n\t\t\t\\draw node[fill=white] at (2, 2) {$A_4$};\n\t\t\t\\draw[fill=none, stroke=black, very thick] (4,5) rectangle(5,9);\n\t\t\t\\draw[fill=none, stroke=black, very thick] (5,4) rectangle(9,5);\n\t\t\t\\draw[fill=none, stroke=black, very thick] (4,0) rectangle(5,4);\n\t\t\t\\draw[fill=none, stroke=black, very thick] (0,4) rectangle(4,5);\n\t\t\t\\draw node[fill=white, inner sep=1] at (4.5, 7) {$B_1$};\n\t\t\t\\draw node[fill=white, inner sep=1] at (7, 4.5) {$B_2$};\n\t\t\t\\draw node[fill=white, inner sep=1] at (4.5, 2) {$B_3$};\n\t\t\t\\draw node[fill=white, inner sep=1] at (2, 4.5) {$B_4$};\n\t\t\t\\draw node[fill=white, inner sep=1] at (4.5, 4.5) {$B_5$};\n\t\t\t\n\t\t\\end{tikzpicture}\n\t\t\\caption{}\n\t\t\\label{fig:regions-part}\n\t\\end{subfigure}\\hfil\n\t\\begin{subfigure}{0.45\\textwidth}\n\t\t\\centering\n\t\t\\begin{tikzpicture}[scale=\\textwidth\/10cm]\n\t\t\t\\draw[step=1, very thin, gray] (0,0) grid (9,9);\n\t\t\t\n\t\t\t\\draw[fill=none, black, very thick] (4,0) rectangle (9,5);\n\t\t\t\\draw[fill=none, black, very thick] (0,4) rectangle(5,9);\n\t\t\t\\draw[fill=none, black, very thick] (0,0) rectangle(9,9);\n\t\t\t\\draw node[] at (2.5, 6.5) {$A'_1$};\n\t\t\t\\draw node[] at (6.5, 2.5) {$A'_3$};\n\t\t\t\n\t\t\\end{tikzpicture}\n\t\t\\caption{}\n\t\t\\label{fig:regions-overlap}\n\t\\end{subfigure\n\t\\caption{A subset of the regions used in the proof of Lemma \\ref{lem:submatrix}.}\n\t\\label{fig:regions}\n\\end{figure}\n\t\t\n\tAs these partition the matrix $M$, we have\n\t\\begin{equation}\\label{eqn:part}\n\t\t\\disc(M) = \\disc(A_1) + \\dotsb + \\disc(A_4) + \\disc(B_1) + \\dotsb + \\disc(B_5).\n\t\\end{equation}\n\t\n\tLet the overlapping $(n+1)\/2 \\times (n+1)\/2$ submatrices containing $a_{1,1}$, $a_{1,n}$, $a_{n,1}$ and $a_{n,n}$ be $A_1', \\dots, A_4'$, as indicated in Figure \\ref{fig:regions-overlap}. The submatrices $B_1, \\dots, B_4$ each appear twice in the $A_i'$ and $B_5$ appears four times and, by subtracting these overlapping regions, we obtain a second equation for $\\disc(M)$:\n\t\t\\begin{multline}\\label{eqn:part2}\n\t\t\t\\disc(M) = \\disc(A_1') + \\dotsb + \\disc(A_4')\\\\ - \\disc(B_1) - \\dotsb - \\disc(B_4) - 3 \\disc(B_5).\n\t\t\\end{multline}\n\t\n\tIf any of the $A_i$ or $A_i'$ have $|\\disc(A_i)| \\leq (n-1)^2\/16$ or $|\\disc(A_i')| \\leq (n+1)^2\/16$ respectively, we are done, so we may assume that this is not the case.\n\tFirst, suppose that $\\disc(A_i) > (n-1)^2\/16$ and $\\disc(A_i') > (n+1)^2\/16$ for all $i = 1,2,3,4$. Since $n - 1$ is even and $\\disc(A_i) \\in \\mathbb{Z}$, we must have $\\disc(A_i) \\geq (n-1)^2\/16 + 1\/4$, and similarly, $\\disc(A_i') \\geq (n+1)^2\/16 + 1\/4$. 
Adding the equations (\\ref{eqn:part}) and (\\ref{eqn:part2}) we get the bound\n\t\\[n^2\/2 \\geq 2 \\disc(M) \\geq (n+1)^2\/4 + (n-1)^2\/4 + 2 - 2 \\disc(B_5), \\]\n\twhich reduces to $\\disc(B_5) \\geq 5\/4$. This gives a contradiction since $B_5$ is a single square. Similarly we get a contradiction if, for every $i$, both $\\disc(A_i) < - (n-1)^2\/16$ and $\\disc(A_i') < - (n+1)^2\/16$. \n\t\n\tThis only leaves the case where the discrepancies of two of the 8 submatrices have different signs. If $\\disc(A_i') > (n+1)^2\/16$, then, for $n \\geq 8$, \\[\\disc(A_i) > (n+1)^2\/16 - n > -(n-1)^2\/16,\\] and either $|\\disc(A_i)| \\leq (n-1)^2\/16$, a contradiction, or $\\disc(A_i) > 0$. By repeating the argument when $\\disc(A_i')$ is negative, it follows that $\\disc(A_i)$ and $\\disc(A_i')$ have the same sign for every $i$. In particular, two of the $A_i$ must have different signs, and we can apply an interpolation argument as in \\cite{arevalo2020zero}.\n\t\n\tWithout loss of generality we can assume that $\\disc(A_1) > (n-1)^2\/16$ and $\\disc(A_2) < -(n-1)^2\/16$. Consider the sequence of matrices $N_0, \\dots, N_{(n+1)\/2} $ where \\[N_i = M[1: (n-1)\/2, 1 + i: i + (n-1)\/2].\\]\n\tWe claim that there is a $j$ such that $|\\disc(N_j)| \\leq (n-1)^2\/16$, which would complete the proof of the lemma. By definition, $N_0 = A_1$ and $N_{(n+1)\/2} = A_2$ so there must be some $j$ such that $\\disc(N_{j-1}) > 0$ and $\\disc(N_j) \\leq 0$. Since the submatrices $N_{j-1}$ and $N_{j}$ share most of their entries, $|\\disc(N_{j-1}) - \\disc(N_j)| \\leq (n-1)$, and as $(n-1)^2\/8 > (n-1)$, it cannot be the case that $\\disc(N_{j-1}) > (n-1)^2\/16$ and $\\disc(N_j) < -(n-1)^2\/16$. This means there must be some $j$ such that $|\\disc(N_j)| \\leq (n-1)^2\/16$, as required. \n\t\\end{proof}\n\n\tArmed with the above results, we are now ready to prove our main result, but let us first give a sketch of the proof which avoids the calculations in the main proof.\n\t\n\t\\begin{proof}[Sketch proof of Theorem \\ref{thm:low-bound}]\t\n\t Assume we have an $n\\times n$ $\\{-1,1\\}$-matrix $M$ with $|\\disc(M)| \\leq n^2\/4$ which is zero-sum square free. We will prove the result by induction, so we assume that the result is true for $5 \\leq n' < n$. \n\t \n\t Applying Lemma \\ref{lem:submatrix} gives a submatrix $M'$ with low discrepancy. Since $M'$ must also be zero-sum square free, we know that it is diagonal by the induction hypothesis. Applying Lemma \\ref{lem:struct} then gives us a lot of entries of $M$ and, in particular, a submatrix $N$ with high discrepancy. Since we are assuming that $M$ has low discrepancy, the remainder $M \\setminus N$ of $M$ not in $N$ must either have low discrepancy or negative discrepancy. In both cases we will find $B$, a submatrix of $M$ with low discrepancy. When the discrepancy of $M \\setminus N$ is low, we use an argument similar to the proof of Lemma \\ref{lem:submatrix}, and when the discrepancy of $M \\setminus N$ is negative, we find a positive submatrix using Observation \\ref{obs:oneof} and use an interpolation argument.\n\t \n\t By the induction hypothesis, $B$ must also be diagonal and we can apply Lemma \\ref{lem:struct} to find many entries of $M$. By looking at specific $a_{i,j}$, we will show that the two applications of Lemma \\ref{lem:struct} contradict each other.\n\t\\end{proof}\n\n\tWe now give the full proof of Theorem \\ref{thm:low-bound}, complete with all the calculations. To start the induction, we must check the cases $n < 30$ which is done using a computer. 
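To give an idea of what such a check involves, the two basic tests are easy to write down; the following minimal Python sketch (the helper names are our own, and the enumeration is far too slow beyond very small $n$) keeps all $\\{-1,1\\}$-matrices of a given size that are zero-sum square free with small discrepancy, which would then still have to be tested for being diagonal.\n\\begin{verbatim}\nfrom itertools import product\n\ndef disc(M):\n    # discrepancy = sum of all entries\n    return sum(sum(row) for row in M)\n\ndef has_zero_sum_square(M):\n    # any contiguous square block of side >= 2 summing to zero?\n    n = len(M)\n    for s in range(2, n + 1):\n        for i in range(n - s + 1):\n            for j in range(n - s + 1):\n                if sum(M[i + a][j + b] for a in range(s) for b in range(s)) == 0:\n                    return True\n    return False\n\nn = 4  # illustration only; the actual verification needs n up to 29\ncandidates = []\nfor bits in product((-1, 1), repeat=n * n):\n    M = [list(bits[r * n:(r + 1) * n]) for r in range(n)]\n    if 4 * abs(disc(M)) <= n * n and not has_zero_sum_square(M):\n        candidates.append(M)\n\\end{verbatim}\nIn practice this brute force approach is hopeless for $n$ close to $30$, which is why a SAT solver is used instead. 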
The problem is encoded as a SAT problem using PySAT \\cite{imms-sat18} and checked for satisfiability with the CaDiCaL solver. The code to do this is attached to the arXiv submission.\n\n\t\\begin{proof}[Proof of Theorem \\ref{thm:low-bound}]\n\t\tWe will use induction on $n$. A computer search gives the result for all $n < 30$, so we can assume that $n \\geq 30$ and that the result holds for all $5 \\leq n' < n$. \n\t\t\n\t\tSuppose, towards a contradiction, that $M$ is an $n \\times n$ matrix with no zero-sum squares and $|\\disc(M)| \\leq n^2\/4$. \t\t\n\t\tBy Lemma \\ref{lem:submatrix}, we can find an $n' \\times n'$ submatrix $M' = M[p:p+s, q:q+s]$ with $(n-1)\/2 \\leq n' \\leq (n+1)\/2$ and $|\\disc(M')| \\leq (n')^2\/4$. By the induction hypothesis and our assumption that $M$ doesn't contain a zero-sum square, the matrix $M'$ must be diagonal. By reflecting $M$ and switching $-1$ and $1$ as necessary, we can assume that the submatrix $M'$ is $t'$-diagonal for some $t'$, and that $t := t' + p + q -2 \\leq n$.\n\t\t\n\t\tWe will want to apply Lemma \\ref{lem:struct}, for which we need to check $2 \\leq t' \\leq 2s - 3$. If $t' \\leq 1$ or $t' \\geq 2s - 2$, then the discrepancy of $M'$ is \\[|\\disc(M')| \\geq (n')^2 -1 > (n')^2\/4,\\] which contradicts our choice of $M'$. In fact, since $\\disc(M') \\leq (n')^2\/4$ and $\\disc(M') = (n')^2 - t'(t'+1)$ we find\n\t\t\\begin{equation}\n\t\t\\label{eqn:tbound}\n\t\tt \\geq t' \\geq \\frac{1}{2} \\left( \\sqrt{3(n')^2 + 1} -1 \\right) \\approx 0.433 n.\n\t\t\\end{equation}\t\n\t\n\t\tIf $t + \\floor{t\/2} \\geq n$, the matrix $M$ is $t$-diagonal and we are done, so we can assume that this is not the case, and that $t \\leq 2n\/3$. We will also need the following bound on $2t + \\floor{t\/2} -2$, which follows almost immediately from (\\ref{eqn:tbound}).\n\t\t\n\t\t\\begin{claim}\\label{claim:tgeqnmins1}\n\t\t\tWe have\n\t\t\t\\[2t + \\floor{t\/2} -2 \\geq n - 1.\\]\n\t\t\\end{claim}\n\t\t\\begin{proof}\n\t\t\tSubstituting $n' \\geq (n-1)\/2$ into (\\ref{eqn:tbound}) gives the following bound on $t$.\n\t\t\t\\[t \\geq \\frac{1}{4} \\left( \\sqrt{3n^2 - 6n + 7} - 2 \\right)\\]\n\t\t\tWe now lower bound $\\floor{t\/2}$ by $(t-1)\/2$ to find\n\t\t\t\\begin{align*}\n\t\t\t\t2t + \\floor{t\/2} -2 &\\geq 2t + \\frac{t-5}{2}\\\\\n\t\t\t\t&\\geq \\frac{5}{8} \\sqrt{3n^2 - 6n + 7} - \\frac{15}{4}\n\t\t\t\\end{align*}\n\t\t\tThe right hand side grows like $\\frac{\\sqrt{75}}{8} n$ asymptotically, which is faster than $n$, so the claim is certainly true for large enough $n$. In fact, the inequality $ \\frac{5}{8} \\sqrt{3n^2 - 6n + 7} - \\frac{15}{4} \\geq n -1$ can be solved explicitly to obtain the following bound on $n$:\n\t\t\t\\[n \\geq \\frac{1}{11} \\left( 251 + 20 \\sqrt{166} \\right) \\approx 46.2.\\]\n\t\t\tThis still leaves the values $30 \\leq n \\leq 46$ for which the bounds above are not sufficient. These cases can be checked using a computer.\n\t\t\\end{proof}\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\tLet $k = \\ceil{5n\/6}$ and let $N$ be the $k \\times k$ submatrix in the top left corner which contains $a_{1,1}$, i.e. $N = M[1:k, 1:k]$. We will apply Lemma \\ref{lem:struct} and Observation \\ref{obs:oneof} to guarantee lots of 1s in $N$, and therefore ensure $N$ has large discrepancy. 
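For instance, if $N$ itself happened to be $t$-diagonal, its discrepancy would be $k^2 - t(t+1) \\geq \\frac{25n^2}{36} - \\frac{2n}{3}\\left(\\frac{2n}{3}+1\\right) = \\frac{n^2}{4} - \\frac{2n}{3}$, already essentially as large as the assumed bound $n^2\/4$ on $|\\disc(M)|$. 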
This will mean that the rest of $M$ which is not in $N$ must have low discrepancy, and we can find another diagonal submatrix, $B$.\n\t\t\n\t\t\\begin{claim}\\label{claim:B}\n\t\t\tThere is an $(n-k) \\times (n-k)$ submatrix $B$ which is disjoint from $N$ and with $|\\disc(B)| \\leq (n-k)^2\/4$.\n\t\t\\end{claim}\n\t\t\\begin{proof}\n\t\t\t\t\\begin{figure}\n\t\t\t\t\t\\centering\n\t\t\t\t\t\\begin{subfigure}{0.45\\textwidth}\n\t\t\t\t\t\t\\begin{tikzpicture}[scale=\\textwidth\/12cm]\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\\fill[gray] (10, 0) rectangle (11, 2);\n\t\t\t\t\t\t\t\\fill[gray] (11, 13) rectangle (13, 12);\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\\draw[very thick] (0,0) rectangle (13, 13);\n\t\t\t\t\t\t\t\\draw[very thick] (0, 13) rectangle (11, 2);\n\t\t\t\t\t\t\t\\draw node at (5.5, 7.5) {N};\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\\draw[very thick] (0, 0) rectangle (2, 2);\n\t\t\t\t\t\t\t\\draw node at (1,1) {$B_7$};\n\t\t\t\t\t\t\t\\draw[very thick] (2, 0) rectangle (4, 2);\n\t\t\t\t\t\t\t\\draw node at (3,1) {$B_8$};\n\t\t\t\t\t\t\t\\draw[very thick] (4, 0) rectangle (6, 2);\n\t\t\t\t\t\t\t\\draw node at (5,1) {$B_9$};\n\t\t\t\t\t\t\t\\draw[very thick] (6, 0) rectangle (8, 2);\n\t\t\t\t\t\t\t\\draw node at (7,1) {$B_{10}$};\n\t\t\t\t\t\t\t\\draw[very thick] (8, 0) rectangle (10, 2);\n\t\t\t\t\t\t\t\\draw node at (9,1) {$B_{11}$};\n\t\t\t\t\t\t\t\\draw[very thick] (11, 0) rectangle (13, 2);\n\t\t\t\t\t\t\t\\draw node at (12,1) {$B_1$};\n\t\t\t\t\t\t\t\\draw[very thick] (11, 2) rectangle (13, 4);\n\t\t\t\t\t\t\t\\draw node at (12,3) {$B_2$};\n\t\t\t\t\t\t\t\\draw[very thick] (11, 4) rectangle (13, 6);\n\t\t\t\t\t\t\t\\draw node at (12,5) {$B_3$};\n\t\t\t\t\t\t\t\\draw[very thick] (11, 6) rectangle (13, 8);\n\t\t\t\t\t\t\t\\draw node at (12,7) {$B_4$};\n\t\t\t\t\t\t\t\\draw[very thick] (11, 8) rectangle (13, 10);\n\t\t\t\t\t\t\t\\draw node at (12,9) {$B_5$};\n\t\t\t\t\t\t\t\\draw[very thick] (11, 10) rectangle (13, 12);\n\t\t\t\t\t\t\t\\draw node at (12,11) {$B_6$};\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\\end{tikzpicture}\n\t\t\t\t\t\\end{subfigure}\\hfil\n\t\t\t\t\\caption{The matrix $M$ with the submatrices $N$ and $B_1$, $\\dots$, $B_{11}$. The entries of $M$ which are not in any of the submatrices are shown in grey.}\\label{fig:b-part}\n\t\t\t\t\\end{figure}\n\t\t\t\n\t\t\t\tConsider the 11 $(n-k) \\times (n-k)$ disjoint submatrices of $M$ $B_1 , \\dots, B_{11}$ given by \n\t\t\t\t\\[ B_i = \\begin{cases}\n\t\t\t\t\tM[k: n, n - ik: n - (i-1)k ] & \\text{if $i \\leq 6$}\\\\\n\t\t\t\t\tM[(i- 7)(n-k) : (i-6) (n - k), k : n] & \\text{if $ i > 6$},\n\t\t\t\t\\end{cases}\\]\n\t\t\t\tand shown in Figure \\ref{fig:b-part}. The submatrix $B_1$ contains $a_{n,n}$ and sits in the bottom right of $M$, while the others lie along the bottom and right-hand edges of $M$.\n\t\t\t\t\n\t\t\t\tIf one of the $B_i$ satisfies $|\\disc(B_i)| \\leq (n-k)^2\/4$, we are done, so suppose this is not the case. \n\t\t\t\t\n\t\t\t\tWe start by using Observation \\ref{obs:oneof} to show that $\\disc(B_1) > 0$. Let the entries of $B$ be $b_{i,j}$ where $1 \\leq i,j \\leq n-k$. By Claim \\ref{claim:tgeqnmins1}, $2t + \\floor{t\/2} - 2 \\geq n -1$ and, applying Lemma $1$, $b_{i,i} = 1$ for all $i \\leq n- k - 1$. Further, by Observation \\ref{obs:oneof}, we have $b_{i,j} + b_{j,i} \\geq 0$ for all $1 \\leq i,j \\leq n -k -1$. This means \n\t\t\t\t\\[\\disc(B_1) \\geq (n-k - 1) - (2(n-k) - 1) = - (n-k)\\]\n\t\t\t\t For $(n- k) \\geq 5$, $(n-k) < (n-k)^2\/4$ so we must have $\\disc(B_1) > (n-k)^2\/4$. 
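(Note that $n - k = \\floor{n\/6} \\geq 5$ holds automatically since $n \\geq 30$; for $n = 30$, for instance, $k = 25$, the above estimate reads $\\disc(B_1) \\geq -5$, and $(n-k)^2\/4 = 6.25$.) 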
\n\t\t\t\t \n\t\t\t\t As $\\disc(B_1) > 0 $, if $\\disc(B_i) < 0$ for any $i \\neq 1$, we can use an interpolation argument as in Lemma \\ref{lem:submatrix} to find the claimed matrix. The argument only requires \n\t\t\t\t \\[2(n-k) < \\frac{(n-k)^2}{2}\\]\n\t\t\t\t which is true for $(n-k) > 4$.\n\t\t\t\t \n\t\t\t\tWe must now be in the case where $\\disc(B_i) > (n-k)^2\/4$ for every $i$. The bulk of the work in this case will be bounding the discrepancy of the matrix $N$, and then the discrepancy of $M$. There are $n^2 - k^2 - 11(n-k)^2 \\leq 10(n-k)$ entries of $M$ in the gaps between the $B_i$, i.e. there are at most $10(n-k)$ entries $a_{i,j}$ which are not contained in either $N$ or one of the $B_i$. In particular, we have \n\t\t\t\t\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\disc(M) &\\geq \\disc(N) + \\disc(B_1) + \\dotsb + \\disc(B_{11}) - 10 (n-k) \\notag\\\\\n\t\t\t\t\t&> \\disc(N) + 11 (n-k)^2\/4 - 10(n-k)\\label{eqn:disc}\n\t\t\t\t\\end{align}\n\t\t\n\t\tLet $s = \\min\\left\\{ k, t + \\floor{t\/2} \\right\\}$ so that $M[1:s, 1:s]$ is $t$-diagonal, and let $r = k - s$ be the number of remaining rows. Let $a_1, \\dots, a_4$ be the numbers of 1s in $N$ guaranteed by Lemma \\ref{lem:struct}, and let $a_5$ be the number of additional 1s guaranteed by also applying Observation \\ref{obs:oneof}. This guarantees that at least one of $a_{i,j}$ and $a_{j,i}$ is $1$ for all $(t+2)\/2 \\leq i , j \\leq r$, and $a_5 \\geq r(r-1)$.\n\t\t\n\t\tWe have the following bounds.\n\t\t\\begin{align*}\n\t\ta_1 &= s^2 - \\frac{t(t+1)}{2},\\\\\n\t\ta_2 &= 2 \\sum_{i=1}^r(t-i),\\\\\n\t\ta_3 &= 2 \\sum_{i=1}^r \\left( \\floor{\\frac{t}{2}} - \\floor{\\frac{i-1}{2}} \\right),\\\\\n\t\ta_4 &= r,\\\\\n\t\ta_5 &\\geq r(r-1).\n\t\t\\end{align*}\n\t\t\n\t\tLet us first consider the case where $s = k$, so that $N$ is $t$-diagonal. In this case $a_2 = \\dotsb = a_5 = 0$, and we can easily write down the discrepancy of $N$ as $k^2 - t(t+1)$. Since $k \\geq 5n\/6$, we get the bound\n\t\t\\begin{align*}\n\t\t\t\\disc(N) &\\geq \\frac{25n^2}{36} - t(t+1).\n\t\t\t\\intertext{Substituting this into (\\ref{eqn:disc}) and using the bounds $(n-5)\/6\\leq n - k \\leq n\/6$ we get}\n\t\t\t\\disc(M) &> \\frac{25n^2}{36} - t(t+1) + \\frac{11}{4}\\left( \\frac{n-5}{6} \\right)^2 - \\frac{10n}{6}\\\\\n\t\t\t&= \\frac{1}{144} \\left( 111 n^2 - 350n - 144t^2 - 144t + 275\\right).\n\t\t\t\\intertext{For $n \\geq 4$, the right-hand side is greater than $n^2\/4$ whenever}\n\t\t\tt &< \\frac{1}{12} \\left( \\sqrt{75 n^2 - 350n + 311} - 6 \\right) \\approx 0.721n + o(n).\n\t\t\t\\intertext{Since we have assumed $t \\leq 2n\/3$, we get a contradiction for all sufficiently large $n$. In fact, we get a contradiction for all $n \\geq 39$. The remaining cases need to be checked using exact values for the floor and ceiling functions which we do with the help of a computer.}\n\t\t\\end{align*}\n\t\t\n\t\tNow we consider the case where $s = t + \\floor{t\/2}$ which is very similar, although more complicated. 
To be in this case, we must have $t + \\floor{t\/2} \\leq k$ which implies \n\t\t\\[ t + \\frac{t-1}{2} \\leq \\frac{5(n+1)}{6},\\]\n\t\tand $t \\leq (5n + 8)\/9 \\approx 0.556n$.\n\t\t\n\t\t\\begin{align*}\n\t\t\t\\intertext{Start by using the bounds $(t-1)\/2 \\leq \\floor{t\/2}$ and $\\floor{(i-1)\/2} \\leq (i-1)\/2$ to get}\n\t\t\ta_1 + \\dotsb + a_5 &\\geq \\left( t + \\frac{t-1}{2} \\right)^2 - \\frac{t(t+1)}{2} + r (2t -r - 1) + r(t - 1) \\\\&\\qquad - \\frac{r(r-1)}{2} + r + r(r-1)\\\\\n\t\t\t&= \\frac{7t^2}{4} - 2t - \\frac{r^2}{2} + 3rt - \\frac{5r}{2} + \\frac{1}{4}.\n\t\t\t\\intertext{By definition, $r = k - t - \\floor{t\/2}$, so we get the bounds $5n\/6 - t - t\/2 \\leq r \\leq 5(n+1)\/6 - t - (t-1)\/2$, and substituting these in gives}\n\t\t\ta_1 + \\dotsb + a_5 &\\geq \\frac{7}{4} t^2 - 2t + \\frac{1}{4} - \\frac{1}{2} \\left( \\frac{5(n+1)}{6} - t - \\frac{t-1}{2}\\right)^2 + 3t \\left( \\frac{5n}{6} -t - \\frac{t}{2} \\right)\\\\&\\qquad - \\frac{5}{2} \\left( \\frac{5(n+1)}{6} - t - \\frac{t-1}{2} \\right)\\\\\n\t\t\t&= \\frac{1}{72} \\left( - 25n^2 + 270nt - 230n - 279 t^2 + 270t - 286 \\right) \n\t\t\t\\intertext{Plugging this into (\\ref{eqn:disc}) and using the bounds $5n\/6 \\leq k \\leq 5(n+1)\/6$ we get}\n\t\t\t\\disc(M) &>2 (a_1 + \\dotsb + a_5) - \\left( \\frac{5(n+1)}{6} \\right)^2 + \\frac{11}{4} \\left( \\frac{n-5}{6} \\right)^2 - \\frac{10n}{6}\\\\\n\t\t\t&\\geq \\frac{1}{48} \\left( - 63n^2 + 360nt - 490n - 372t^2 + 360t - 323 \\right).\n\t\t\\end{align*}\n\t\t\t\tWhen $n \\geq 27$, this is greater than $n^2\/4$ whenever\n\t\\begin{align*}\n\t\t&\\frac{1}{186}\\left(90n + 90 - \\sqrt{1125 n^2 - 29370n - 21939}\\right) <\\\\\n\t\t&\\qquad t < \\frac{1}{186}\\left(90n + 90 + \\sqrt{1125 n^2 - 29370n - 21939}\\right),\n \t\\end{align*}\n or approximately,\n \\[0.304n < t < 0.664n.\\]\n We have the bounds \n \\[ \\frac{1}{4} \\left( \\sqrt{3n^2 - 6n + 7} - 2 \\right) \\leq t \\leq \\frac{5n + 8}{9}, \\]\n and so, for $n \\geq 36$, $\\disc(M) > n^2\/4$.\n \n This again leaves a few cases which we check with the help of a computer (although they could feasibly be checked by hand).\t\t\n\\end{proof}\n\t\t\n\t\tGiven a submatrix $B$ as in the above claim we apply the induction hypothesis, noting that $(n-k) \\geq 5$ since $n \\geq 30$, to find that $B$ is diagonal. Let $C$ be the diagonal submatrix obtained from applying Lemma \\ref{lem:struct} to $B$, and let $C$ be $\\ell$-diagonal up to rotation. Note that $\\ell \\geq 3$ as $(n-k) \\geq 5$, and we can assume $\\ell \\leq 2n\/3$ as $M$ is not diagonal.\n\t\t\n\t\tHence, $C$ contains exactly one of $a_{1,1}$, $a_{1,n}$, $a_{n,1}$ and $a_{n,n}$, and we will split into cases based on which one $C$ contains. We will also sometimes need to consider cases for whether the entry is $1$ or $-1$, but in all cases we will find a contradiction.\n\t\t\n\t\tFrom Lemma \\ref{lem:struct} applied to $M'$ and Claim \\ref{claim:tgeqnmins1}, we already know some of the entries and we highlight some important entries in the following claim.\n\t\t\n\t\t\\begin{claim}\\label{claim:particular1s}\n\t\t\tWe have\n\t\t\t\\begin{enumerate}\n\t\t\t\t\\item $a_{j,1} = a_{1, j} = \\begin{cases}\n\t\t\t\t\t1 & t + 1 \\leq j \\leq n-1,\\\\\n\t\t\t\t\t-1 & 1 \\leq j \\leq t,\n\t\t\t\t\\end{cases}$\n\t\t\t\t\\item $a_{2,t} = a_{t,2} = 1$,\n\t\t\t\t\\item $a_{i,i} = 1$ for all $(t+2)\/2 \\leq i \\leq n -1$.\n\t\t\t\\end{enumerate}\n\t\t\\end{claim}\n\t\t\n\t\tSuppose the submatrix $C$ contains $a_{1,1}$ so sits in the top-left corner. 
Since $M[1:t + \\floor{t\/2}, 1:t + \\floor{t\/2}]$ is $t$-diagonal, $C$ must also be $t$-diagonal. As $C$ was found by applying Lemma \\ref{lem:struct} to $B$, it must contain a $-1$ from $B$. Hence, $t \\geq 5n\/6$ which is a contradiction as we assumed that $t \\leq 2n\/3$.\n\t\t\n\t\tSuppose instead that $C$ contains $a_{1,n}$ so sits in the top-right corner. Since $\\ell \\geq 3$, if the corner entry is $-1$, so is the entry $a_{1,n-1}$, but this contradicts Claim \\ref{claim:particular1s}. Suppose instead the corner entry is $1$. Since $C$ is $\\ell$-diagonal up to rotation we have, for all $1 \\leq i, (n - j + 1) \\leq \\ell + \\floor{\\ell\/2}$, \n\t\t\\begin{equation}\\label{eqn:C}\n\t\t\ta_{i,j} = \\begin{cases}\n\t\t\t\t-1 & i + (n -j + 1) \\geq \\ell + 2,\\\\\n\t\t\t\t1 & \\text{otherwise}.\n\t\t\t\\end{cases}\n\t\t\\end{equation}\n\t\t\n\t\tIf $n - \\ell > t$, then $a_{1, n-\\ell} = -1$ by (\\ref{eqn:C}) and $a_{1, n-\\ell} = 1$ by Claim \\ref{claim:particular1s}.\n\t\tSuppose $n - \\ell < t$. Then $a_{1, t} = 1$ by (\\ref{eqn:C}) and $a_{1, t} = -1$ as $M[1:t, 1:t]$ is $t$-diagonal.\n\t\tFinally, when $n-\\ell = t$, we have $a_{2,t} = -1$ by (\\ref{eqn:C}) and $a_{2,t} = 1$ from Claim \\ref{claim:particular1s}. Some illustrative examples of these three cases are shown in Figure \\ref{fig:case-1-n}.\n\t\t\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\begin{subfigure}{0.25\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\begin{tikzpicture}[scale=\\textwidth\/12cm]\n\t\t\t\t\t\n\t\t\t\t\t\\fill[fill=yellow5] (0, 11) rectangle(6, 12);\n\t\t\t\t\t\\fill[fill=blue5] (6, 11) rectangle(11, 12);\n\t\t\t\t\t\\fill[fill=blue5] (5, 10) rectangle(6,11);\n\t\t\t\t\t\\draw[very thin, gray] (0,0) grid (12,12);\n\t\t\t\t\t\\draw[thick] (6,12)\n\t\t\t\t\t\\foreach \\myvar in {6, 5, 4, ..., 0}\n\t\t\t\t\t\t-- (\\myvar, 6 + \\myvar) -- (\\myvar, 5 + \\myvar)\n\t\t\t\t\t} -- (0, 12) -- (6,12);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (9,12)\n\t\t\t\t\t\\foreach \\myvar in {9, 10, ..., 12}\n\t\t\t\t\t\t-- (\\myvar, 21 - \\myvar) -- (\\myvar, 20 - \\myvar)\n\t\t\t\t\t};\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (8,8) rectangle (12, 12);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (8, 12) -- (9,11);\n\t\t\t\t\t\\draw[thick] (9, 12) -- (8,11);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[very thick] (0,0) -- (12,0) -- (12,12) -- (0,12) --(0,0);\n\t\t\t\t\t\n\t\t\t\t\\end{tikzpicture}\n\t\t\t\t\\caption{$n- \\ell > t$}\n\t\t\t\\end{subfigure}\\hfil\n\t\t\t\\begin{subfigure}{0.25\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\begin{tikzpicture}[scale=\\textwidth\/12cm]\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\\fill[fill=yellow5] (0, 11) rectangle(6, 12);\n\t\t\t\t\t\\fill[fill=blue5] (6, 11) rectangle(11, 12);\n\t\t\t\t\t\\fill[fill=blue5] (5, 10) rectangle(6,11);\n\t\t\t\t\t\\draw[very thin, gray] (0,0) grid (12,12);\n\t\t\t\t\t\\draw[thick] (6,12)\n\t\t\t\t\t\\foreach \\myvar in {6, 5, 4, ..., 0}\n\t\t\t\t\t\t-- (\\myvar, 6 + \\myvar) -- (\\myvar, 5 + \\myvar)\n\t\t\t\t\t} -- (0, 12) -- (6,12);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (9,12)\n\t\t\t\t\t\\foreach \\myvar in {5, 6, ..., 12}\n\t\t\t\t\t\t-- (\\myvar, 17 - \\myvar) -- (\\myvar, 16 - \\myvar)\n\t\t\t\t\t};\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (3,3) rectangle (12, 12);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (5, 12) -- (6,11);\n\t\t\t\t\t\\draw[thick] (6, 12) -- (5,11);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[very thick] (0,0) -- (12,0) -- (12,12) -- (0,12) --(0,0);\n\t\t\t\t\t\n\t\t\t\t\\end{tikzpicture}\n\t\t\t\t\\caption{$n- \\ell < 
t$}\n\t\t\t\\end{subfigure}\\hfil\n\t\t\t\\centering\n\t\t\t\\begin{subfigure}{0.25\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\begin{tikzpicture}[scale=\\textwidth\/12cm]\n\t\t\t\t\t\n\t\t\t\t\t\\fill[fill=yellow5] (0, 11) rectangle(6, 12);\n\t\t\t\t\t\\fill[fill=blue5] (6, 11) rectangle(11, 12);\n\t\t\t\t\t\\fill[fill=blue5] (5, 10) rectangle(6,11);\n\t\t\t\t\t\\draw[very thin, gray] (0,0) grid (12,12);\n\t\t\t\t\t\\draw[thick] (6,12)\n\t\t\t\t\t\\foreach \\myvar in {6, 5, 4, ..., 0}\n\t\t\t\t\t\t-- (\\myvar, 6 + \\myvar) -- (\\myvar, 5 + \\myvar)\n\t\t\t\t\t} -- (0, 12) -- (6,12);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (9,12)\n\t\t\t\t\t\\foreach \\myvar in {6, 7, ..., 12}\n\t\t\t\t\t\t-- (\\myvar, 18 - \\myvar) -- (\\myvar, 17 - \\myvar)\n\t\t\t\t\t};\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (3,3) rectangle (12, 12);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (5, 11) -- (6,10);\n\t\t\t\t\t\\draw[thick] (6, 11) -- (5,10);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[very thick] (0,0) -- (12,0) -- (12,12) -- (0,12) --(0,0);\n\t\t\t\t\t\n\t\t\t\t\\end{tikzpicture}\n\t\t\t\t\\caption{$n - \\ell =t$}\n\t\t\t\\end{subfigure\n\t\t\t\\caption{The three cases when $C$ contains $a_{1,n}$ and $a_{1,n} = 1$. The yellow squares represent some of the $a_{i,j}$ which are known to be $-1$ from Claim \\ref{claim:particular1s} and the blue squares those which are $1$. The square which gives the contradiction is marked with a cross.}\n\t\t\t\\label{fig:case-1-n}\n\t\t\\end{figure}\n\t\t\n\t\tThe case where $C$ contains $a_{n,1}$ is done in the same way with the rows and columns swapped.\n\t\t\n\t\tThis leaves the case where $C$ contains $a_{n,n}$. Since $\\ell \\geq 3$, if the entry $a_{n, n}$ equals $-1$, so does the entry $a_{n-1, n-1}$, and this contradicts Claim \\ref{claim:particular1s}. If instead $a_{n,n} = 1$, we consider the entry $a_{i, i}$ where $i = n + 1 - \\ceil{(l+2)\/2}$, which must be $-1$. However, since $\\ell \\leq 2n\/3$, \\[n + 1 - \\ceil{(l+2)\/2}\\geq n + - \\frac{n}{3} - \\frac{1}{2} > \\frac{n}{3} + 1 \\geq \\frac{t+2}{2},\\] and $a_{i,i} = 1$ by Claim \\ref{claim:particular1s}. This final contradiction is shown in Figure \\ref{fig:case-n-n}.\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.4\\textwidth\/12cm]\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\foreach \\myvar in {4,...,11}\n\t\t\t\t\t\\fill[blue5] (\\myvar-1, 12-\\myvar) rectangle +(1,1);}\n\t\t\t\t\n\t\t\t\t\\draw[very thin, gray] (0,0) grid (12,12);\n\t\t\t\t\\draw[thick] (6,12)\n\t\t\t\t\\foreach \\myvar in {6, 5, 4, ..., 0}\n\t\t\t\t\t-- (\\myvar, 6 + \\myvar) -- (\\myvar, 5 + \\myvar)\n\t\t\t\t} -- (0, 12) -- (6,12);\n\t\t\t\t\n\t\t\t\t\\draw[thick] (7,0)\n\t\t\t\t\\foreach \\myvar in {7,8, ..., 12}\n\t\t\t\t\t-- (\\myvar, \\myvar - 7) -- (\\myvar, \\myvar - 6)\n\t\t\t\t};\n\t\t\t\t\n\t\t\t\t\\draw[thick] (5,0) rectangle (12, 7);\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\draw[thick] (8, 3) -- (9,4);\n\t\t\t\t\\draw[thick] (8, 4) -- (9,3);\n\t\t\t\t\n\t\t\t\t\\draw[very thick] (0,0) -- (12,0) -- (12,12) -- (0,12) --(0,0);\n\t\t\t\t\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{The case where $C$ contains $a_{n,n}$ and $a_{n,n} = 1$. The square marked with a cross gives a contradiction.}\n\t\t\t\\label{fig:case-n-n}\n\t\t\\end{figure}\n\t\\end{proof}\n\n\tWe remark that it should be possible to improve the bound $n^2\/4$ using a similar proof provided one can check a large enough base case. Indeed, we believe that all the steps in the above proof hold when the bound is increased to $n^2\/3$, but only when $n$ is large enough. 
For example, Claim \\ref{claim:tgeqnmins1} fails for $n = 127$ and our proof of Claim \\ref{claim:B} fails for $n = 86$. Checking base cases this large is far beyond the reach of our computer check, and some new ideas would be needed here.\n\t \n\\section{Open problems}\\label{sec:open-problems}\n\n\t\tThe main open problem is to determine the correct lower bound for the discrepancy of a non-diagonal $\\{-1,1\\}$-matrix with no zero-sum squares. We have improved the lower bound to $n^2\/4$, but this does not appear to be optimal.\n\n\t\tThe best known construction is the following example by Ar\\'evalo, Montejano and Rold\\'an-Pensado \\cite{arevalo2020zero}. Let $M = \\left(a_{i,j}\\right)$ be given by \\[a_{i,j} = \\begin{cases}\n\t\t\t-1 & \\text{$i$ and $j$ are odd},\\\\\n\t\t\t1 & \\text{otherwise}.\n\t\t\\end{cases}\\]\n\t\tThis has discrepancy $n^2\/2$ when $n$ is even and $(n-1)^2\/2 - 1$ when $n$ is odd. With the help of a computer we have verified that this construction is best possible when $9 \\leq n \\leq 32$, and we conjecture that this holds true for all $n \\geq 9$. In fact, our computer search shows that the above example is the unique zero-sum square free non-diagonal matrix with minimum (in magnitude) discrepancy, up to reflections and multiplying by $-1$. \n\n\t\tWe note that the condition $n \\geq 9$ is necessary, as shown by the $8 \\times 8$ zero-sum square free $\\{-1, 1\\}$-matrix with discrepancy 30 given in Figure \\ref{fig:counter}.\n\n\\begin{restatable}{conjecture}{conj}\n\tLet $n\\geq9$. Every $n \\times n$ non-diagonal $\\{-1, 1\\}$-matrix $M$ with \\[|\\disc(M)| \\leq \\begin{cases}\n\t\t\\frac{n^2}{2} - 1 & \\text{$n$ is even}\\\\\n\t\t\\frac{(n-1)^2}{2} -2 & \\text{$n$ is odd}\n\t\\end{cases}\\]\n\tcontains a zero-sum square.\n\\end{restatable}\n\n\n\n\t\t\t\\begin{figure}\n\t\t\\centering\n\t\t\\begin{tikzpicture}[scale=0.5\\textwidth\/8cm]\n\t\t\t\n\t\t\t\n\t\t\t\\fill[blue5] (0,0) rectangle (1,1);\n\t\t\t\\fill[blue5] (1,0) rectangle (2,1);\n\t\t\t\\fill[yellow5] (2,0) rectangle (3,1);\n\t\t\t\\fill[blue5] (3,0) rectangle (4,1);\n\t\t\t\\fill[blue5] (4,0) rectangle (5,1);\n\t\t\t\\fill[blue5] (5,0) rectangle (6,1);\n\t\t\t\\fill[blue5] (6,0) rectangle (7,1);\n\t\t\t\\fill[blue5] (7,0) rectangle (8,1);\n\t\t\t\n\t\t\t\\fill[blue5] (0,1) rectangle (1,2);\n\t\t\t\\fill[blue5] (1,1) rectangle (2,2);\n\t\t\t\\fill[blue5] (2,1) rectangle (3,2);\n\t\t\t\\fill[blue5] (3,1) rectangle (4,2);\n\t\t\t\\fill[blue5] (4,1) rectangle (5,2);\n\t\t\t\\fill[blue5] (5,1) rectangle (6,2);\n\t\t\t\\fill[blue5] (6,1) rectangle (7,2);\n\t\t\t\\fill[blue5] (7,1) rectangle (8,2);\n\t\t\t\n\t\t\t\\fill[blue5] (0,2) rectangle (1,3);\n\t\t\t\\fill[blue5] (1,2) rectangle (2,3);\n\t\t\t\\fill[blue5] (2,2) rectangle (3,3);\n\t\t\t\\fill[blue5] (3,2) rectangle (4,3);\n\t\t\t\\fill[blue5] (4,2) rectangle (5,3);\n\t\t\t\\fill[blue5] (5,2) rectangle (6,3);\n\t\t\t\\fill[blue5] (6,2) rectangle (7,3);\n\t\t\t\\fill[blue5] (7,2) rectangle (8,3);\n\t\t\t\n\t\t\t\\fill[yellow5] (0,3) rectangle (1,4);\n\t\t\t\\fill[blue5] (1,3) rectangle (2,4);\n\t\t\t\\fill[blue5] (2,3) rectangle (3,4);\n\t\t\t\\fill[blue5] (3,3) rectangle (4,4);\n\t\t\t\\fill[blue5] (4,3) rectangle (5,4);\n\t\t\t\\fill[blue5] (5,3) rectangle (6,4);\n\t\t\t\\fill[blue5] (6,3) rectangle (7,4);\n\t\t\t\\fill[blue5] (7,3) rectangle (8,4);\n\t\t\t\n\t\t\t\\fill[yellow5] (0,4) rectangle (1,5);\n\t\t\t\\fill[yellow5] (1,4) rectangle (2,5);\n\t\t\t\\fill[blue5] (2,4) rectangle (3,5);\n\t\t\t\\fill[blue5] (3,4) rectangle 
(4,5);\n\t\t\t\\fill[blue5] (4,4) rectangle (5,5);\n\t\t\t\\fill[blue5] (5,4) rectangle (6,5);\n\t\t\t\\fill[blue5] (6,4) rectangle (7,5);\n\t\t\t\\fill[blue5] (7,4) rectangle (8,5);\n\t\t\t\n\t\t\t\\fill[yellow5] (0,5) rectangle (1,6);\n\t\t\t\\fill[yellow5] (1,5) rectangle (2,6);\n\t\t\t\\fill[yellow5] (2,5) rectangle (3,6);\n\t\t\t\\fill[blue5] (3,5) rectangle (4,6);\n\t\t\t\\fill[blue5] (4,5) rectangle (5,6);\n\t\t\t\\fill[blue5] (5,5) rectangle (6,6);\n\t\t\t\\fill[blue5] (6,5) rectangle (7,6);\n\t\t\t\\fill[yellow5] (7,5) rectangle (8,6);\n\t\t\t\n\t\t\t\n\t\t\t\\fill[yellow5] (0,6) rectangle (1,7);\n\t\t\t\\fill[yellow5] (1,6) rectangle (2,7);\n\t\t\t\\fill[yellow5] (2,6) rectangle (3,7);\n\t\t\t\\fill[yellow5] (3,6) rectangle (4,7);\n\t\t\t\\fill[blue5] (4,6) rectangle (5,7);\n\t\t\t\\fill[blue5] (5,6) rectangle (6,7);\n\t\t\t\\fill[blue5] (6,6) rectangle (7,7);\n\t\t\t\\fill[blue5] (7,6) rectangle (8,7);\n\t\t\t\n\t\t\t\\fill[yellow5] (0,7) rectangle (1,8);\n\t\t\t\\fill[yellow5] (1,7) rectangle (2,8);\n\t\t\t\\fill[yellow5] (2,7) rectangle (3,8);\n\t\t\t\\fill[yellow5] (3,7) rectangle (4,8);\n\t\t\t\\fill[yellow5] (4,7) rectangle (5,8);\n\t\t\t\\fill[blue5] (5,7) rectangle (6,8);\n\t\t\t\\fill[blue5] (6,7) rectangle (7,8);\n\t\t\t\\fill[blue5] (7,7) rectangle (8,8);\n\t\t\t\n\t\t\t\\draw[step=1, thin] (0,0) grid (8,8);\n\t\t\t\n\t\t\t\\draw[very thick] (0,0) rectangle(8,8);\n\t\t\\end{tikzpicture}\n\t\t\n\t\t\\caption{An $8 \\times 8$ $\\{-1,1\\}$-matrix with no zero-sum squares and discrepancy 30. The yellow squares represent a $-1$ and the blue squares represent a $1$.}\n\t\t\\label{fig:counter}\n\t\\end{figure}\n\n\n\n\tAr\\'evalo, Montejano and Rold\\'an-Pensado prove their result for both $n \\times n$ and $n \\times (n+1)$ matrices, and computational experiments suggest that Theorem \\ref{thm:low-bound} holds for $n \\times (n+1)$ matrices as well. More generally, what is the best lower bound for a general $n \\times m$ matrix when $n$ and $m$ are large?\n\t\n\t\\begin{problem}\n\t\tLet $f(n, m)$ be the minimum $d \\in \\mathbb{N}$ such that there exists an $n \\times m$ non-diagonal $\\{-1,1\\}$ matrix $M$ with $|\\disc(M)| \\leq d$. What are the asymptotics of $f(n,m)$?\n\t\\end{problem}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nAnti-de Sitter (AdS) backgrounds of supergravity are an essential part of the \nAdS\/CFT correspondence \\cite{Maldacena:1997re} and have been studied\nin recent years from varying perspectives. 
On the one hand they can be constructed as compactifications of higher-dimensional supergravities as is the natural\nset up in the AdS\/CFT correspondence.\\footnote{See \\cite{Kehagias:1998gn,Morrison:1998cs} for earlier work and \ne.g.\\ \\cite{Polchinski:2010hw} and references therein for a more recent review.} Alternatively, one can investigate and, if possible, \nclassify their appearance directly in a given supergravity without relating it\nto any compactification.\n\nFor a given AdS background it is also of interest to study its properties\nand in particular its moduli space $\\mathcal{M}$, i.e.\\ the subspace of the scalar field space\nthat is spanned by flat directions of the AdS background.\nThis moduli space has been heavily investigated \nin Minkowskian backgrounds of string theory as it prominently appears\nin its low energy effective theory.\nFor AdS backgrounds much less is known about $\\mathcal{M}$, partly because the defining equations are more involved and furthermore quantum corrections contribute unprotected.\n\n\n\nIn \\cite{deAlwis:2013jaa,Louis:2014gxa} supersymmetric $\\textrm{AdS}_{4}$ vacua \nand their classical supersymmetric moduli spaces\nwere studied in four-dimensional ($d=4$) supergravities \nwith $\\mathcal{N}=1,2,4$ supersymmetry\nwithout considering their relation to higher-dimensional theories.\\footnote{Throughout this paper we only consider $\\textrm{AdS}$ backgrounds that preserve all supercharges of a given supergravity and furthermore only consider the subspace of the moduli space that preserves all these supercharges. This is what we mean by supersymmetric $\\textrm{AdS}$ backgrounds and supersymmetric moduli spaces.}\nFor $\\mathcal{N}=1$ it was found that the supersymmetric moduli space is at best a real submanifold of the original K\\\"ahler field space.\nSimilarly, for $\\mathcal{N}=2$ the supersymmetric moduli space \nis at best a product of a real manifold times a K\\\"ahler manifold\nwhile $\\mathcal{N}=4$ $\\textrm{AdS}$ backgrounds have no supersymmetric moduli space.\n This analysis was repeated for $\\textrm{AdS}_{5}$ vacua in $d=5$ gauged supergravity\nwith 16 supercharges ($\\mathcal{N}=4$) in \\cite{Louis:2015dca} and for $\\textrm{AdS}_{7}$ vacua in $d=7$ gauged supergravity with 16~supercharges in \\cite{Louis:2015mka}. For the $d=5,\\, \\mathcal{N}=4$ theories it was shown that the supersymmetric moduli space is\nthe coset $\\mathcal{M}=SU(1,m)\/(U(1)\\times SU(m))$ while in $d=7$ it was proven that again \nno supersymmetric moduli space exists.\n\n\nIn this paper we focus on supersymmetric $\\textrm{AdS}_{5}$ vacua in $d=5$ gauged \nsupergravities with eight supercharges ($\\mathcal{N}=2$)\ncoupled to an arbitrary number of vector-, tensor- and hypermultiplets. \nA related analysis was carried out in \\cite{Tachikawa:2005tq}\nfor the coupling of Abelian vector multiplets and hypermultiplets.\nWe confirm the results of \\cite{Tachikawa:2005tq} and generalize \nthe analysis by including tensor multiplets and\nnon-Abelian vector multiplets. 
\nIn particular, we show that also in this more general case \nthe unbroken gauge group has to \n be of the form $H\\times U(1)_{R}$\nwhere the $U(1)_R$-factor is gauged by the graviphoton.\nThis specifically forbids unbroken semisimple gauge groups in AdS\nbackgrounds.\n\nIn a second step\nwe study the supersymmetric moduli space $\\mathcal{M}$ \nof the previously obtained $\\textrm{AdS}_{5}$ backgrounds\nand show that it necessarily is a K\\\"ahler submanifold of the quaternionic scalar field space $\\mathcal{T}_H$ spanned by all scalars in the hypermultiplets.\\footnote{This result was also obtained in \\cite{Tachikawa:2005tq}. Our result is more general as we include tensor multiplets and non-Abelian vector multiplets in the analysis.}\nThis is indeed consistent with the AdS\/CFT correspondence where the \nmoduli space $\\mathcal{M}$ is mapped to the conformal manifold of the dual \nsuperconformal field theory (SCFT). For the gauged supergravities considered here\nthe dual theories are $d=4,\\, \\mathcal{N}=1$ SCFTs.\nIn \\cite{Asnin:2009xx} it was indeed shown that \nthe conformal manifold of these SCFTs is a K\\\"ahler manifold. \n\nThe organization of this paper is as follows. In section \\ref{sec:sugra} we briefly review gauged $\\mathcal{N}=2$ supergravities in five dimensions. This will then be used to study the conditions for the existence of supersymmetric $\\textrm{AdS}_{5}$ vacua and determine some of their properties in section~\\ref{sec:vacua}. Finally, in section \\ref{sec:moduli} we compute the conditions on the moduli space of these vacua and show that it is a K\\\"ahler manifold. \n\n\n\\section{Gauged $\\mathcal{N}=2$ supergravity in five dimensions}\\label{sec:sugra}\n\nTo begin with let us review five-dimensional gauged $\\mathcal{N}=2$ supergravity following \\cite{Gunaydin:2000xk,Bergshoeff:2002qk,Bergshoeff:2004kh}.\\footnote{Ref.~\\cite{Bergshoeff:2004kh}\nconstructed the most general version of five-dimensional gauged $\\mathcal{N}=2$ supergravity.} The theory consists of the gravity multiplet with field content\n\\begin{equation}\n\\{g_{\\mu\\nu}, \\Psi_{\\mu}^{\\mathcal{A}}, A_{\\mu}^{0}\\}\\ , \\quad \\mu,\\nu=0,...,4\\ ,\\quad \n\\mathcal{A}=1,2\\ ,\n\\end{equation}\nwhere $g_{\\mu\\nu}$ is the metric of space-time, $\\Psi_{\\mu}^{\\mathcal{A}}$ is\nan $SU(2)_{R}$-doublet of symplectic Majorana gravitini and $A_{\\mu}^{0}$ is the graviphoton. In\nthis paper we consider theories that additionally contain $n_{V}$\nvector multiplets, $n_{H}$ hypermultiplets and $n_{T}$ tensor\nmultiplets. A vector multiplet $\\{A_{\\mu}, \\lambda^{\\mathcal{A}}, \\phi\\}$\ntransforms in the adjoint representation of the gauge group $G$ and contains a vector $A_{\\mu}$, a doublet of gauginos $\\lambda^{\\mathcal{A}}$ and a real scalar~$\\phi$. In $d=5$ a vector is Poincar\\'e dual to an antisymmetric \ntensor field $B_{\\mu\\nu}$ which can carry an arbitrary representation of $G$. This gives rise to tensor multiplets which have the same field content as vector multiplets, but with a two-form instead of a vector. Since vector- and tensor multiplets mix in the Lagrangian, we label their scalars $\\phi^{i}$ by the same index $i,j=1,...,n_{V}+n_{T}$. Moreover, we label the vector fields (including the graviphoton) by $I,J=0,1,...,n_{V}$, the tensor fields by $M,N=n_{V}+1,...,n_{V}+n_{T}$ and also introduce a combined index $\\tilde{I}=(I,M)$. 
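For example, for $n_{V}=2$ and $n_{T}=1$ one has $I,J \\in \\{0,1,2\\}$, $M,N=3$, $\\tilde{I} \\in \\{0,\\ldots,3\\}$ and $i,j \\in \\{1,2,3\\}$. 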
Finally, the $n_{H}$ hypermultiplets\n\\begin{equation}\n\\{q^{u}, \\zeta^{\\alpha}\\}, \\quad u=1,2,...,4n_{H}\\ , \\quad \\alpha=1,2,...,2n_{H}\\ , \n\\end{equation}\ncontain $4n_{H}$ real scalars $q^{u}$ and $2n_{H}$ hyperini $\\zeta^{\\alpha}$.\n\nThe bosonic Lagrangian of $\\mathcal{N}=2$ gauged supergravity in five dimensions reads\\footnote{\nNote that we set the gravitational constant $\\kappa=1$ in this paper.}\n\\cite{Bergshoeff:2004kh}\n\\begin{equation}\\label{eq:Lagrangian}\n\\begin{aligned}\ne^{-1}\\mathcal{L}&=\\tfrac{1}{2}R\n-\\tfrac{1}{4}a_{\\tilde{I}\\tilde{J}}H^{\\tilde{I}}_{\\mu\\nu}H^{\\tilde{J}\\mu\\nu}-\\tfrac{1}{2}g_{ij}\\mathcal{D}_{\\mu}\\phi^{i}\\mathcal{D}^{\\mu}\\phi^{j}-\\tfrac{1}{2}G_{uv}\\mathcal{D}_{\\mu}q^{u}\\mathcal{D}^{\\mu}q^{v}-g^{2}V(\\phi,q)\\\\\n&+\\tfrac{1}{16g}e^{-1}\\epsilon^{\\mu\\nu\\rho\\sigma\\tau}\\Omega_{MN}B^{M}_{\\mu\\nu}\\left(\\partial_{\\rho}B^{N}_{\\sigma\\tau}+2gt_{IJ}^{N}A_{\\rho}^{I}F_{\\sigma\\tau}^{J}+gt_{IP}^{N}A_{\\rho}^{I}B_{\\sigma\\tau}^{P}\\right)\\\\\n&+\\tfrac{1}{12}\\sqrt{\\tfrac{2}{3}}e^{-1}\\epsilon^{\\mu\\nu\\rho\\sigma\\tau}C_{IJK}A_{\\mu}^{I}\\left[F_{\\nu\\rho}^{J}F_{\\sigma\\tau}^{K}+f_{FG}^{J}A_{\\nu}^{F}A_{\\rho}^{G}\\left(-\\tfrac{1}{2}F_{\\sigma\\tau}^{K}+\\tfrac{g^{2}}{10}f_{HL}^{K}A_{\\sigma}^{H}A_{\\tau}^{L}\\right)\\right]\\\\\n&-\\tfrac{1}{8}e^{-1}\\epsilon^{\\mu\\nu\\rho\\sigma\\tau}\\Omega_{MN}t_{IK}^{M}t_{FG}^{N}A_{\\mu}^{I}A_{\\nu}^{F}A_{\\rho}^{G}\\left(-\\tfrac{g}{2}F_{\\sigma\\tau}^{K}+\\tfrac{g^{2}}{10}f_{HL}^{K}A_{\\sigma}^{H}A_{\\tau}^{L}\\right)\n\\ .\n\\end{aligned}\n\\end{equation}\nIn the rest of this section we recall the various ingredients which\nenter this Lagrangian.\nFirst of all $H^{\\tilde{I}}_{\\mu\\nu}=(F_{\\mu\\nu}^{I}, B_{\\mu\\nu}^{M})$\nwhere\n$F_{\\mu\\nu}^{I}=2\\partial_{[\\mu}A_{\\nu]}^{I}+gf_{JK}^{I}A^{J}_{\\mu}A^{K}_{\\nu}$\nare the field strengths with $g$ being the gauge coupling constant.\nThe scalar fields in $\\mathcal{L}$ can be interpreted as coordinate charts from spacetime $M_{5}$ to a target space $\\mathcal{T}$,\n\\begin{equation}\\label{eq:target space}\n\\phi^{i} \\otimes q^{u}: M_{5} \\longrightarrow \\mathcal{T}.\n\\end{equation}\nLocally $\\mathcal{T}$ is a product $\\mathcal{T}_{VT} \\times \\mathcal{T}_{H}$ where the first\nfactor is a projective special real manifold $(\\mathcal{T}_{VT}, g)$ of\ndimension $n_{V}+n_{T}$. It is constructed as a hypersurface in an $(n_{V}+n_{T}+1)$-dimensional real manifold $\\mathcal{H}$ with local coordinates $h^{\\tilde{I}}$. This hypersurface is defined by \n\\begin{equation}\\label{eq:polynomial}\nP(h^{\\tilde{I}}(\\phi))=C_{\\tilde{I}\\tilde{J}\\tilde{K}}h^{\\tilde{I}}h^{\\tilde{J}}h^{\\tilde{K}}=1,\n\\end{equation}\nwhere $P(h^{\\tilde{I}}(\\phi))$ is a cubic homogeneous polynomial with $C_{\\tilde{I}\\tilde{J}\\tilde{K}}$ constant and completely symmetric. Thus $\\mathcal{T}_{VT}=\\{P=1\\}\\subset \\mathcal{H}$. \n\nThe generalized gauge couplings in \\eqref{eq:Lagrangian} correspond to a positive metric on the ambient space $\\mathcal{H}$, given by\n\\begin{equation}\\label{adef}\na_{\\tilde{I}\\tilde{J}}:=-2C_{\\tilde{I}\\tilde{J}\\tilde{K}}h^{\\tilde{K}}+3h_{\\tilde{I}}h_{\\tilde{J}}\\ ,\n\\end{equation}\nwhere\n\\begin{equation}\\label{eq:hlower}\n h_{\\tilde{I}}= C_{\\tilde{I}\\tilde{J}\\tilde{K}}h^{\\tilde{J}}h^{\\tilde{K}}\\ . 
\n\\end{equation}\nThe pullback metric $g_{ij}$ is the (positive) metric on the hypersurface \n$\\mathcal{T}_{VT}$ and is given by\n\\begin{equation}\\label{gpull}\ng_{ij}:=h_{i}^{\\tilde{I}}h_{j}^{\\tilde{J}}a_{\\tilde{I}\\tilde{J}}\\ ,\n\\end{equation}\nwhere\n\\begin{equation}\\label{eq:hder}\nh_{i}^{\\tilde{I}}:=-\\sqrt{\\tfrac{3}{2}}\\,\\partial_{i}h^{\\tilde{I}}(\\phi)\\\n.\n\\end{equation}\nThese quantities satisfy (see Appendix C in \\cite{Bergshoeff:2004kh} for more details)\n\\begin{equation}\nh^{\\tilde{I}}h_{\\tilde{I}}=1\\ ,\\qquad\nh_{\\tilde{I}}h_{i}^{\\tilde{I}}=0\\ ,\\qquad\nh_{\\tilde{I}}h_{\\tilde{J}}+h_{\\tilde{I}}^{i}h_{\\tilde{J}i}=a_{\\tilde{I}\\tilde{J}} \\ ,\n\\label{eq:hmetric}\n\\end{equation}\nwhere we raise and lower indices with the appropriate metrics $a_{\\tilde{I}\\tilde{J}}$ or $g_{ij}$ respectively.\nThe metric $g_{ij}$ induces a covariant derivative which acts on the $h^{\\tilde{I}}_{i}$ via \n\\begin{equation}\\label{eq:covderh}\n\\nabla_{i}h^{\\tilde{I}}_{j}=-\\sqrt{\\tfrac{2}{3}}\\, (h^{\\tilde{I}}g_{ij}+T_{ijk}h^{\\tilde{I}k})\\ ,\n\\end{equation}\nwhere $T_{ijk}:=C_{\\tilde{I}\\tilde{J}\\tilde{K}}h_{i}^{\\tilde{I}}h_{j}^{\\tilde{J}}h_{k}^{\\tilde{K}}$ is a completely symmetric tensor. \n\nThe second factor of $\\mathcal{T}$ in (\\ref{eq:target space}) is a quaternionic K\\\"ahler manifold $(\\mathcal{T}_{H},G, Q)$ of real dimension $4n_{H}$ (see \\cite{Andrianopoli:1996cm} for a more extensive introduction). Here $G_{uv}$ is a Riemannian metric and $Q$ denotes a $\\nabla^{G}$ invariant rank three subbundle $Q\\subset \\text{End} (T\\mathcal{T}_H)$ that is locally spanned by a triplet $J^{n}$, $n=1,2,3$ of almost complex structures which satisfy $J^{1}J^{2}=J^{3}$ and $(J^{n})^{2}=-\\text{Id}$. Moreover the metric $G_{uv}$ is hermitian with respect to all three $J^{n}$ and one defines the associated triplet of two-forms $\\omega^{n}_{uv}:=G_{uw}(J^{n})^{w}_{v}$. In contrast to the K\\\"ahlerian case, the almost complex structures are not parallel but the Levi-Civita connection $\\nabla^{G}$ of $G$ rotates the endomorphisms inside $Q$, i.e. \n\\begin{equation}\\label{nabladef}\n\\nabla J^{n}:=\\nabla^{G}J^{n}-\\epsilon^{npq}\\theta^{p}J^{q}=0\\ .\n\\end{equation}\nNote that $\\nabla$ differs from $\\nabla^{G}$ by an\n$SU(2)$-connection with connection one-forms $\\theta^{p}$.\nFor later use let us note that the metric $G_{uv}$ can be expressed in terms of vielbeins $\\mathcal{U}^{\\alpha\\mathcal{A}}_{u}$ as\n\\begin{equation}\nG_{uv}= C_{\\alpha\\beta}\\epsilon_{\\mathcal{A}\\mathcal{B}}\\mathcal{U}^{\\alpha\\mathcal{A}}_{u}\\mathcal{U}^{\\beta\\mathcal{B}}_{v}\\ ,\n\\end{equation}\nwhere $C_{\\alpha\\beta}$ denotes the flat metric on $Sp(2n_{H},\\mathbb{R})$\nand the $SU(2)$-indices $\\mathcal{A},\\mathcal{B}$ are raised and lowered with $\\epsilon_{\\mathcal{A}\\mathcal{B}}$. 
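As a simple illustration of the very special real geometry introduced above (and not needed in what follows), consider the purely illustrative choice $n_{V}=1$, $n_{T}=n_{H}=0$ with cubic polynomial $P=(h^{0})^{2}h^{1}$. The short \\texttt{sympy} sketch below parametrizes the hypersurface \\eqref{eq:polynomial} by a single scalar $\\phi$, evaluates \\eqref{adef}, \\eqref{eq:hder} and \\eqref{gpull}, and checks the first two identities in \\eqref{eq:hmetric}; the parametrization and all names are of course our own choices.\n\\begin{verbatim}\nimport sympy as sp\n\nphi = sp.symbols('phi', positive=True)\n\n# illustrative cubic P = (h^0)^2 h^1: only C_{001} (and permutations) is nonzero\ndef C(i, j, k):\n    return sp.Rational(1, 3) if sorted((i, j, k)) == [0, 0, 1] else 0\n\nh = [phi, phi**-2]   # solves C_IJK h^I h^J h^K = 1 identically\nh_low = [sum(C(I, J, K) * h[J] * h[K] for J in range(2) for K in range(2))\n         for I in range(2)]                        # h_I = C_IJK h^J h^K\na = sp.Matrix(2, 2, lambda I, J:\n              -2 * sum(C(I, J, K) * h[K] for K in range(2))\n              + 3 * h_low[I] * h_low[J])           # a_IJ\nh_i = [-sp.sqrt(sp.Rational(3, 2)) * sp.diff(h[I], phi) for I in range(2)]\ng = sp.simplify(sum(h_i[I] * h_i[J] * a[I, J]\n                    for I in range(2) for J in range(2)))   # g_{ij}\n\nprint(sp.simplify(sum(h[I] * h_low[I] for I in range(2))))  # expect 1\nprint(sp.simplify(sum(h_low[I] * h_i[I] for I in range(2))))  # expect 0\nprint(g)   # metric g = 3*phi**(-2) > 0\n\\end{verbatim}\n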
\n\nThe gauge group $G$ is specified by the generators $t_{I}$ of its Lie algebra $\\mathfrak{g}$ and the structure constants $f_{IJ}^{K}$,\n\\begin{equation}\n[t_{I},t_{J}]=-f_{IJ}^{K}t_{K}\\ .\n\\end{equation}\nThe vector fields transform in the adjoint representation of the gauge group, i.e.\\ $t_{IJ}^{K}=f_{IJ}^{K}$ while the tensor fields\ncan carry an arbitrary representation.\nThe most general representation for $n_{V}$ vector multiplets and $n_{T}$ tensor multiplets has been found in \\cite{Bergshoeff:2002qk} and is given by\n\\begin{equation}\\label{trep}\n t_{I\\tilde{J}}^{\\tilde{K}}=\n\\begin{pmatrix}\nf_{IJ}^{K} & t_{IJ}^{N}\\\\\n0 & t_{IM}^{N}\\\\\n\\end{pmatrix}.\n\\end{equation}\nWe see that the block matrix $t_{IJ}^{N}$ mixes vector- and tensor\nfields. However the $t_{IJ}^{N}$ are only nonzero if the chosen\nrepresentation of the gauge group is not completely reducible. This\nnever occurs for compact gauge groups but there exist non-compact\ngauge groups containing an Abelian ideal that admit representations\nof this type, see\n\\cite{Bergshoeff:2002qk}. There it is also shown that the construction\nof a generalized Chern-Simons term in the action for vector- and\ntensor multiplets requires the existence of an invertible and\nantisymmetric matrix $\\Omega_{MN}$. In particular, the $t_{I\\tilde J}^{N}$\nare of the form\n\\begin{equation}\\label{eq:Omega}\nt_{I\\tilde{J}}^{N}=C_{I\\tilde{J}P}\\Omega^{PN}\\ .\n\\end{equation}\n\nThe gauge group is realized on the scalar fields via the action of\nKilling vectors $\\xi_{I}$ for the vector- and tensor multiplets and\n$k_{I}$ for the hypermultiplets that satisfy the Lie\nalgebra~${\\mathfrak{g}}$~of~$G$, \n\\begin{equation}\\label{Killingc}\n\\begin{aligned}\n{}[\\xi_{I},\\xi_{J}]^{i}&:=\\xi_I^j\\partial_j \\xi^i_J-\\xi_J^j\\partial_j \\xi^i_I=\n-f_{IJ}^{K}\\, \\xi_{K}^{i}\\ ,\\\\\n[k_{I},k_{J}]^{u}&:=k_I^v\\partial_v k_J^u-k_J^v\\partial_v k_I^u=\n-f_{IJ}^{K}\\,k_{K}^{u}\\ .\n\\end{aligned}\n\\end{equation}\nIn the case of the projective special real manifold, one can obtain an explicit expression for the Killing vectors $\\xi_{I}^{i}$ given by \\cite{Bergshoeff:2004kh}\n\\begin{equation}\\label{eq:VTkilling}\n\\xi_{I}^{i}:= -\\sqrt{\\tfrac{3}{2}}\\,t_{I\\tilde{J}}^{\\tilde{K}}h^{\\tilde{J}}h^{i}_{\\tilde{K}}=-\\sqrt{\\tfrac{3}{2}}\\,t_{I\\tilde{J}}^{\\tilde{K}}h^{\\tilde{J}i}h_{\\tilde{K}}\\ .\n\\end{equation}\nThe second equality is due to the fact that \\cite{Gunaydin:1984ak}\n\\begin{equation}\\label{eq:representation0}\nt_{I\\tilde{J}}^{\\tilde{K}}\\,h^{\\tilde{J}}h_{\\tilde{K}}= 0\\ ,\n\\end{equation}\nand thus \n\\begin{equation}\n0=\\partial_{i}(t_{I\\tilde{J}}^{\\tilde{K}}h^{\\tilde{J}}h_{\\tilde{K}}) = t_{I\\tilde{J}}^{\\tilde{K}}h^{\\tilde{J}}\\partial_{i}h_{\\tilde{K}}+t_{I\\tilde{J}}^{\\tilde{K}}(\\partial_{i}h^{\\tilde{J}})h_{\\tilde{K}}\\ , \n\\end{equation}\nwhich implies\\footnote{Note that the derivative\n $h_{\\tilde{I}i}=\\sqrt{\\tfrac{3}{2}}\\,\\partial_{i}h_{\\tilde{I}}$\nhas an additional minus sign compared to \\eqref{eq:hder} which can be\nshown by lowering the index with $a_{\\tilde{I}\\tilde{J}}$ given in \\eqref{adef}.}\n\\begin{equation}\\label{eq:representation}\nt_{I\\tilde{J}}^{\\tilde{K}}h^{\\tilde{J}}h^{i}_{\\tilde{K}}=t_{I\\tilde{J}}^{\\tilde{K}}h^{\\tilde{J}i}h_{\\tilde{K}}\\ .\n\\end{equation}\n\nThe Killing vectors $k_{I}^u$ on the quaternionic K\\\"ahler\nmanifold $\\mathcal{T}_H$ \\cite{Andrianopoli:1996cm,Alekseevsky:2001if,Bergshoeff:2002qk} have to be triholomorphic which implies 
\n\\begin{equation}\\label{eq:Jinvariance}\n\\nabla_{u}\nk^{I}_{w}(J^{n})_{v}^{w}-(J^{n})_{u}^{w}\\nabla_{w}k^{I}_{v}=2\\epsilon^{npq}\\omega^{p}_{uv}\\mu^{Iq}\\\n.\n\\end{equation}\nHere $\\mu_{I}^{n}$ is a\ntriplet of moment maps which also satisfy\n\\begin{equation}\\label{eq:covdermomentmap}\n\\tfrac{1}{2}\\omega^{n}_{uv}k_{I}^{v}=-\\nabla_{u}\\mu_{I}^{n}\n\\ ,\n\\end{equation}\nand the equivariance condition\n\\begin{equation}\\label{eq:equivariance}\nf_{IJ}^{K}\\mu_{K}^{n}=\\tfrac{1}{2}\\omega_{uv}^{n}k_{I}^{u}k_{J}^{v}-2\\epsilon^{npq}\\mu_{Ip}\\mu_{Jq}\\ .\n\\end{equation}\nFurthermore the covariant derivative of the Killing vectors \nobeys \\cite{D'Auria:2001kv,Alekseevsky:2001if}\n\\begin{equation}\\label{eq:covderkilling}\n\\nabla_{u}k_{Iv} +\\nabla_{v}k_{Iu} = 0\\ ,\\qquad \\nabla_{u}k_{Iv} -\\nabla_{v}k_{Iu} = \\omega^{n}_{uv}\\mu_{nI}+L_{Iuv} \\ ,\n\\end{equation}\nwhere \nthe $L_{Iuv}$ are related to the gaugino mass matrix and commute with\n$J^{n}$.\nFor later use we define\n\\begin{equation}\\label{SLdef}\nS^{n}_{Iuv}:={L}_{Iuw}(J^{n})^{w}_{v}\\ ,\\qquad L_{uv}:=h^{I}L_{Iuv}\\\n,\\qquad S_{uv}^{n}:=h^{I}S_{Iuv}^{n}\\ ,\n\\end{equation}\nwhere the $S^{n}_{Iuv}$ are symmetric in $u,v$ \\cite{Alekseevsky:2001if}. \n\nBefore we proceed let us \nnote that for $n_{H}=0$, i.e.\\ when there are no hypermultiplets,\nconstant Fayet-Iliopoulos (FI) terms can exist which have to satisfy\nthe equivariance condition \\eqref{eq:equivariance}. \nIn this case the first term on the right hand side of\n\\eqref{eq:equivariance}\nvanishes which implies that \n there\nare only two possible solutions \\cite{Bergshoeff:2004kh}. \nIf the gauge group contains an $SU(2)$-factor, the FI-terms have to be\nof the form\n\\begin{equation}\n\\mu_{I}^{n}= c e_{I}^{n}\\ ,\\quad c \\in \\mathbb{R}\\ ,\n\\end{equation}\nwhere the $e_{I}^{n}$ are nonzero constant vectors for $I=1,2,3$ of\nthe $SU(2)$-factor that satisfy\n\\begin{equation}\n \\epsilon^{mnp}e^{m}_{I}e^{n}_{J}=f_{IJ}^{K}e^{p}_{K}\\ .\n\\end{equation}\n The second solution has $U(1)$-factors in the gauge group and the constant moment maps are given by \n\\begin{equation}\\label{eq:AbelianFI}\n \\mu_{I}^{n}=c_{I}e^{n}\\ ,\\quad c_{I}\\in \\mathbb{R}\\ ,\n\\end{equation}\nwhere $e^{n}$ is a constant $SU(2)$-vector and\n$I$ labels the $U(1)$-factors. \n\nFinally, the covariant derivatives of the scalars in \\eqref{eq:Lagrangian} are given by\n\\begin{equation}\\label{eq:covderivatives} \n\\mathcal{D}_{\\mu}\\phi^{i} = \\partial_{\\mu}\\phi^{i} + gA_{\\mu}^{I}\\xi_{I}^{i}(\\phi)\\ , \\qquad \\mathcal{D}_{\\mu} q^{u} = \\partial_{\\mu}q^{u}+gA_{\\mu}^{I}k_{I}^{u}(q)\\ .\n\\end{equation}\nThe scalar potential\n\\begin{equation}\\label{eq:potential}\nV=2g_{ij}W^{i\\mathcal{A}\\mathcal{B}}W_{\\mathcal{A}\\mathcal{B}}^{j}+2g_{ij}\\mathcal{K}^{i}\\mathcal{K}^{j}+2N^{\\alpha}_{\\mathcal{A}}N_{\\alpha}^{\\mathcal{A}}-4S_{\\mathcal{A}\\mathcal{B}}S^{\\mathcal{A}\\mathcal{B}},\n\\end{equation}\nis defined in terms of the couplings\\footnote{Note that the $h^{M}$ in\n the direction of the tensor multiplets do not appear\n explicitly. 
Nevertheless, the couplings can implicitly depend on the\n scalars in the tensor multiplet as they might appear in $h^{I}$\n after solving \\eqref{eq:polynomial}.}\n\\begin{equation}\\label{eq:definitions}\n\\begin{aligned}\nS^{\\mathcal{A}\\mathcal{B}}&:=h^{I}\\mu_{I}^{n}\\sigma_{n}^{\\mathcal{A}\\mathcal{B}}\\ ,\\qquad\nW_{i}^{\\mathcal{A}\\mathcal{B}}:=h^{I}_{i}\\mu^{n}_{I}\\sigma_{n}^{\\mathcal{A}\\mathcal{B}}\\ ,\\\\\n\\mathcal{K}^{i}&:=\\tfrac{\\sqrt{6}}{4} h^{I}\\xi_{I}^{i}\\ ,\\qquad\nN^{\\alpha\\mathcal{A}}:=\\tfrac{\\sqrt{6}}{4} h^{I}k_{I}^{u}\\mathcal{U}_{u}^{\\alpha\\mathcal{A}}\\ .\n\\end{aligned}\n\\end{equation}\nHere $\\sigma^{n}_{\\mathcal{A}\\mathcal{B}}$ are the Pauli matrices with an index lowered by $\\epsilon_{\\mathcal{A}\\mathcal{B}}$, i.e.\n\\begin{equation}\n\\sigma^{1}_{\\mathcal{A}\\mathcal{B}}= \\begin{pmatrix} 1 & 0 \\\\ 0 & -1 \\end{pmatrix}\\\n,\\quad\n\\sigma^{2}_{\\mathcal{A}\\mathcal{B}}= \\begin{pmatrix} -i & 0 \\\\ 0 & -i \\end{pmatrix}\\\n, \\quad\n\\sigma^{3}_{\\mathcal{A}\\mathcal{B}}= \\begin{pmatrix} 0 & -1 \\\\ -1 & 0 \\end{pmatrix}\\\n.\n\\end{equation}\nAs usual the couplings \\eqref{eq:definitions}\nare related to the\nscalar parts of the supersymmetry variations of the fermions via\n\\begin{equation}\\label{susytrans}\n\\begin{aligned}\n\\delta_{\\epsilon}\\psi_{\\mu}^{\\mathcal{A}}&=D_{\\mu}\\epsilon^{\\mathcal{A}}-\\tfrac{ig}{\\sqrt{6}}S^{\\mathcal{A}\\mathcal{B}}\\gamma_{\\mu}\\epsilon_{\\mathcal{B}}+...\\ , \\\\\n\\delta_{\\epsilon}\\lambda^{i\\mathcal{A}}&=g\\mathcal{K}^{i}\\epsilon^{\\mathcal{A}}-gW^{i\\mathcal{A}\\mathcal{B}}\\epsilon_{\\mathcal{B}}+...\\ ,\\\\\n\\delta_{\\epsilon}\\zeta^{\\alpha}&=gN_{\\mathcal{A}}^{\\alpha}\\epsilon^{\\mathcal{A}}+...\\ .\n\\end{aligned}\n\\end{equation}\nHere $\\epsilon^{\\mathcal{A}}$ denote the supersymmetry parameters. This concludes our review of $d=5$ supergravity and we now turn to its possible supersymmetric $\\textrm{AdS}$ backgrounds.\n\n\n\n\\section{Supersymmetric $\\textrm{AdS}_{5}$ vacua}\\label{sec:vacua}\n\nIn this section we determine the conditions that lead to\n $\\textrm{AdS}_{5}$ vacua which preserve all eight supercharges.\nThis requires the vanishing of all fermionic \nsupersymmetry transformations, i.e.\n\\begin{equation}\n\\vev{\\delta_{\\epsilon}\\psi_{\\mu}^{\\mathcal{A}}}=\\vev{\\delta_{\\epsilon}\\lambda^{i\\mathcal{A}}}=\\vev{\\delta_{\\epsilon}\\zeta^{\\alpha}}=0 \\ \n\\end{equation}\nwhere $\\vev{\\ }$ denotes the value of a quantity\nevaluated in the background. 
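Note that, according to \\eqref{susytrans}, the gravitino variation involves $S^{\\mathcal{A}\\mathcal{B}}$, the gaugino variation involves $\\mathcal{K}^{i}$ and $W^{i\\mathcal{A}\\mathcal{B}}$, and the hyperino variation involves $N^{\\alpha}_{\\mathcal{A}}$, so each of these couplings is constrained separately. 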
Using the fact that $W^{i\\mathcal{A}\\mathcal{B}}$ and $\\mathcal{K}^{i}$ are linearly\nindependent \\cite{Gunaydin:2000xk} and \\eqref{susytrans}, this implies the following four conditions,\n\\begin{equation}\\label{eq:conditions}\n\\vev{W_{i}^{\\mathcal{A}\\mathcal{B}}}=0\\ , \\quad \\vev{S_{\\mathcal{A} \\mathcal{B}}}\\,\\epsilon^{\\mathcal{B}}=\\Lambda U_{\\mathcal{A}\\mathcal{B}}\\,\\epsilon^{\\mathcal{B}}\\ ,\\quad \\vev{N^{\\alpha\\mathcal{A}}}=0\\ ,\\quad \\vev{\\mathcal{K}^{i}}=0\\ .\n\\end{equation}\nHere $\\Lambda \\in \\mathbb{R}$ is related to the cosmological constant and\n$U_{\\mathcal{A}\\mathcal{B}}=v_{n}\\sigma_{\\mathcal{A}\\mathcal{B}}^{n}$ for $v\\in S^{2}$ is an $SU(2)$-matrix.\n$U_{\\mathcal{A}\\mathcal{B}}$ appears in the Killing spinor equation for $\\textrm{AdS}_{5}$ which reads \\cite{Shuster:1999zf}\n\\begin{equation}\n \\vev{D_{\\mu}\\epsilon_{\\mathcal{A}}}=\\tfrac{ia}{2}\\, U_{\\mathcal{A}\\mathcal{B}}\\,\\gamma_{\\mu}\\epsilon^{\\mathcal{B}}\\ , \\quad a\\in \\mathbb{R}\\ .\n\\end{equation}\nAs required for an $\\textrm{AdS}$ vacuum, the conditions \\eqref{eq:conditions}\ngive a negative background value for the scalar potential\n$\\vev{V(\\phi,q)}< 0$ which can be seen from (\\ref{eq:potential}).\nUsing the definitions (\\ref{eq:definitions}), we immediately see that \nthe four conditions \\eqref{eq:conditions} can also be formulated as \nconditions on the moment maps and Killing vectors,\n\\begin{equation}\\label{eq:backgroundmomentmaps}\n\\vev{h^{I}_{i}\\mu^{n}_{I}}=0\\ ,\\qquad\n\\vev{h^{I}\\mu^{n}_{I}}=\\Lambda v^{n}\\ ,\\qquad\n\\vev{h^{I}k_{I}^{u}}=0\\ , \\qquad \\vev{h^{I}\\xi_{I}^{i}}=0\\ .\n\\end{equation}\n Note that due to \\eqref{eq:polynomial}, \\eqref{gpull} we need to have\n $\\vev{h^{I}}\\neq0$ for some $I$ and $\\vev{h^{\\tilde I}_i}\\neq0$ for every $i$ and some $\\tilde I$.\\footnote{\nIn particular this can also hold at the\n origin of the scalar field space $\\vev{\\phi^i}=0$, i.e.\\ for unbroken gauge groups.}\n\n\nIn order to solve \\eqref{eq:backgroundmomentmaps} we combine\nthe first two conditions as\n\\begin{equation}\\label{eq:momentummaps}\n \\vev{\\begin{pmatrix}h^{I} \\\\ h^{I}_{i} \\end{pmatrix} \\mu_{I}^{n}} = \\begin{pmatrix}\\Lambda v^{n} \\\\ 0\\end{pmatrix}.\n\\end{equation}\nLet us enlarge these equations to the tensor multiplet indices by introducing $\\mu_{\\tilde{I}}^{n}$ where we keep in mind that $\\mu^{n}_{N}\\equiv0$. Then we use the fact that the matrix $(h^{\\tilde{I}},h^{\\tilde{I}}_{i})$ is invertible in special real geometry (see Appendix C of \\cite{Bergshoeff:2004kh}), so we can multiply (\\ref{eq:momentummaps}) with $(h^{\\tilde{I}},h^{\\tilde{I}}_{i})^{-1}$ to obtain a solution for both equations given by\n\\begin{equation}\n\\vev{\\mu_{\\tilde{I}}^{n}}=\\Lambda v^{n}\\vev{h_{\\tilde{I}}}\\ .\n\\end{equation}\nNote that this condition is non-trivial since it implies that the moment maps point in the same direction in $SU(2)$-space for all $I$.\nFurthermore, using the $SU(2)_{R}$-symmetry we can rotate the vector $v^{n}$\nsuch that $v^{n}=v\\delta^{n3}$ and absorb the constant $v\\in \\mathbb{R}$\ninto $\\Lambda$. \nThus only $\\vev{\\mu_{I}}:=\\vev{\\mu_{I}^{3}}\\neq 0$, $\\forall I$ in the above equation. 
Since by definition $\\vev{\\mu_{N}^{n}}= 0$, this implies\n\\begin{equation}\\label{eq:momentmapsvacuum}\n\\vev{\\mu_{I}}=\\Lambda \\vev{h_{I}}\\ , \\quad \\vev{h_{N}}= 0\\ .\n\\end{equation}\nIn particular, this means that the first two equations in \\eqref{eq:hmetric} hold in the vacuum for only the vector indices, i.e.\\\n\\begin{equation}\\label{eq:hmetricvacuum}\n\\vev{h^{I}h_{I}}=1\\ , \\quad \\vev{h_{I}h^{I}_{i}}=0\\ .\n\\end{equation}\nMoreover due to the explicit form of the moment maps in \\eqref{eq:momentmapsvacuum}, the equivariance condition \\eqref{eq:equivariance} reads in the background\n\\begin{equation}\\label{equivariancevacuum}\n f_{IJ}^{K}\\vev{\\mu_{K}}=\\tfrac{1}{2}\\vev{\\omega^{3}_{uv}k_{I}^{u}k_{J}^{v}}.\n\\end{equation}\n\n\nSince \\eqref{eq:potential} has to hold in the vacuum, $\\vev{h^{I}}\\neq 0$ for some $I$ and thus the background necessarily has non-vanishing moment maps due to \\eqref{eq:momentmapsvacuum}. This in turn implies that part of the $R$-symmetry is gauged, as can be seen from the covariant derivatives of the fermions which always contain a term of the form $A_{\\mu}^{I}\\vev{\\mu_{I}^{3}}$ \\cite{Bergshoeff:2004kh}. More precisely, this combination gauges the $U(1)_{R}\\subset SU(2)_{R}$ generated by $\\sigma^{3}$. From \\eqref{eq:momentmapsvacuum} we infer $A_{\\mu}^{I}\\vev{\\mu_{I}^{3}}=\\Lambda A_{\\mu}^{I}\\vev{h_{I}}$ which can be identified with the graviphoton \\cite{Gunaydin:1984ak}.\n\nWe now turn to the last two equations in \\eqref{eq:backgroundmomentmaps}. Let us first prove that the third equation $\\vev{h^{I}k_{I}^{u}}=0$ implies the fourth $\\vev{h^{I}\\xi_{I}^{i}}=0$. This can be shown by expressing $\\vev{\\xi_{I}^{i}}$ in terms of $\\vev{k_{I}^{u}}$ via the equivariance condition \\eqref{equivariancevacuum}. Note that we learn from \\eqref{eq:VTkilling} that the background values of the Killing vectors on the manifold $\\mathcal{T}_{VT}$ are given by\n\\begin{equation}\\label{xinA}\n \\vev{\\xi_{I}^{i}}=\n-\\sqrt{\\tfrac{3}{2}}\\,\\vev{t_{I\\tilde{J}}^{\\tilde{K}}h^{\\tilde Ji}h_{\\tilde K}}\n=-\\sqrt{\\tfrac{3}{2}}\\,\\vev{f_{IJ}^{K}h^{Ji}h_{K} + t_{IJ}^{N}h^{Ji}h_{N}}\n=-\\sqrt{\\tfrac{3}{2}}\\,\\vev{f_{IJ}^{K}h^{Ji}h_{K}}\n\\ ,\n\\end{equation}\nwhere we used \\eqref{trep} and \\eqref{eq:momentmapsvacuum}. Inserting \\eqref{eq:momentmapsvacuum}, \\eqref{equivariancevacuum} into \\eqref{xinA} one indeed computes\n\\begin{equation}\\label{eq:Killingvacuum}\n\\vev{\\xi_{I}^{i}} =\n-\\sqrt{\\tfrac{3}{2}}\\tfrac{1}{2\\Lambda}\\,\\vev{h^{J}_{i}\\omega_{uv}^{3}k_{I}^{u}k_{J}^{v}}\\\n.\n\\end{equation}\nBut then $\\vev{h^{I}\\xi_{I}^{i}}=0$ is always satisfied if $\\vev{h^{I}k_{I}^{u}}=0$. Moreover this shows that $\\vev{\\xi_{I}^{i}}\\neq0$ is only\npossible for $\\vev{k^u_I}\\neq0$. Note that the reverse is not true in general as can be seen from \\eqref{xinA}.\nWe are thus left with analyzing the third condition in \\eqref{eq:backgroundmomentmaps}.\n\nLet us first note that for $n_{H}=0$ there are no Killing vectors ($k_I^u\\equiv0$) and the third equation in \\eqref{eq:backgroundmomentmaps} is automatically satisfied.\nHowever \\eqref{eq:momentmapsvacuum} can nevertheless hold if the constant FI-terms discussed below \\eqref{SLdef} are of the form given in \\eqref{eq:AbelianFI} and thus only gauge groups with Abelian factors are allowed in this case.\n\nNow we turn to $n_{H}\\neq 0$. 
Note that then $\\vev{h^{I}k_{I}^{u}}=0$ has two possible solutions:\n\\begin{equation}\\begin{aligned}\\label{twocases}\ni)& \\quad \\vev{k_{I}^{u}}=0\\ ,\\quad \\textrm{for all}\\ I\\\\\nii)&\\quad \\vev{k_{I}^{u}}\\neq0 \\ ,\\quad \\textrm{for some}\\ I \\ \\textrm{with}\\ \\vev{h^{I}}\\ \\textrm{appropriately tuned}.\n\\end{aligned}\\end{equation}\nBy examining the covariant derivatives (\\ref{eq:covderivatives}) of the scalars we see that in the first case there is no gauge symmetry breaking by the hypermultiplets while in the second case $G$ is spontaneously broken. \nNote that not all possible gauge groups can remain unbroken in the vacuum. In fact, for case $i)$ the equivariance condition \\eqref{equivariancevacuum} implies\n\\begin{equation}\n f_{IJ}^{K}\\vev{\\mu_{K}}=0\\ .\n\\end{equation}\nThis can only be satisfied if the adjoint representation of ${\\mathfrak{g}}$ has a non-trivial zero eigenvector, i.e.\\ if the center of $G$ is non-trivial (and continuous).\\footnote{For more details on Lie groups and their adjoint representation, see for example \\cite{O'Raifeartaigh:1986vq}.} In particular, this holds for all gauge groups with an Abelian factor but all semisimple gauge groups have to be broken in the vacuum.\n\nIn the rest of this section we discuss the spontaneous symmetry\nbreaking for case $ii)$ and the details\nof the Higgs mechanism.\nLet us first consider the case where only a set of Abelian factors in $G$\nis spontaneously broken, i.e.\\ $\\vev{k^u_I}\\neq0$ for $I$ labeling\nthese Abelian factors.\nFrom \\eqref{xinA} we then learn \n$\\vev{\\xi_{I}^{i}}=0$ and \nthus we only have spontaneous symmetry breaking in the hypermultiplet\nsector\nand the Goldstone bosons necessarily are recruited out of these \nhypermultiplets.\nHence the vector multiplet corresponding to a broken Abelian factor in\n$G$ becomes massive by ``eating'' an entire hypermultiplet. \nIt forms a ``long'' vector multiplet containing the massive vector,\nfour gauginos and four scalars obeying the AdS mass relations.\n\nNow consider spontaneously broken non-Abelian factors of $G$,\ni.e.\\ $\\vev{k^u_I}\\neq0$ for $I$ labeling\nthese non-Abelian factors.\nIn this case we learn from \\eqref{eq:Killingvacuum} \nthat either $\\vev{\\xi_{I}^{i}}=0$ as before or $\\vev{\\xi_{I}^{i}}\\neq 0$.\nHowever the Higgs mechanism is essentially unchanged compared to the Abelian\ncase in that entire hypermultiplets are eaten and all massive vectors\nreside in long multiplets.\\footnote{Note that short BPS vector\n multiplets which exist in this theory cannot appear since the breaking\n necessarily involves the hypermultiplets.} \n\nHowever there always has to exists at least one unbroken generator of\n$G$ which commutes with all other unbroken generators, i.e.\\ the\nunbroken gauge group in the vacuum is always of the form $H\\times\nU(1)_{R}$. To see this, consider the mass matrix $M_{IJ}$ of the gauge\nbosons $A^{I}_{\\mu}$. 
\nDue to \\eqref{eq:covderivatives} and \\eqref{eq:Killingvacuum}, this is given by\n\\begin{equation}\n M_{IJ} = \\vev{G_{uv}k_{I}^{u}k_{J}^{v}}+\\vev{g_{ij}\\xi_{I}^{i}\\xi_{J}^{j}}=\\vev{K_{uv}k^{u}_{I}k^{v}_{J}}\\ .\n\\end{equation}\nHere $K_{uv}$ is an invertible matrix which can be given in terms of $G_{uv}$ and $S_{uv}$ defined in \\eqref{SLdef} as\n\\begin{equation}\n K_{uv} = \\vev{\\left(\\tfrac{5}{8}G_{uv}-\\tfrac{6}{8\\Lambda}S_{uv}\\right)}\\ .\n\\end{equation}\nSince $\\vev{h^{I}k_{I}^{u}}=0$ the mass matrix $M_{IJ}$ has a zero\neigenvector given by $\\vev{h^{I}}$, i.e.\\ the graviphoton\n$\\vev{h^{I}}A_{I}^{\\mu}$ always remains massless in the vacuum. In the\nbackground the commutator of the corresponding Killing vector $h^{I}k_{I}^{u}$ with any other isometry $k_{J}$ is given by\n\\begin{equation}\n \\vev{[h^{I}k_{I}, k_{J}]^{u}} =\n \\vev{h^{I}(k_{I}^{v}\\partial_{v}k_{J}^{u}-k_{J}^{v}\\partial_{v}k_{I}^{u})}=\n -\\vev{h^{I}k_{J}^{v}\\partial_{v}k_{I}^{u}}\\ .\n\\end{equation}\nThis vanishes for $\\vev{k_{J}^{u}}=0$ and thus the $R$-symmetry\ncommutes with every other symmetry generator of the vacuum, i.e.\\ the\nunbroken gauge group is $H \\times U(1)_{R}$. In particular, every\ngauge group $G$ which is not of this form has to be broken $G \\rightarrow H\\times U(1)_{R}$.\n\nLet us close this section with the observation that the number of broken generators is determined by the number of linearly\nindependent $\\vev{k_{I}^{u}}$. This coincides with the number of\nGoldstone bosons $n_{G}$. In fact the $\\vev{k_{I}^{u}}$ form a basis in the\nspace of\nGoldstone bosons $\\mathcal{G}$ and we have $\\mathcal{G}=\\text{span}_{\\mathbb{R}}\\{\\vev{k_{I}^{u}}\\}$ with $\\text{dim}(\\mathcal{G}) = \\rk \\vev{k_{I}^{u}} = n_{G}$.\n\nIn conclusion, we have shown that the conditions for maximally supersymmetric $\\textrm{AdS}_{5}$ vacua are given by\n\\begin{equation}\n \\vev{\\mu_{I}}=\\Lambda\\, \\vev{h_{I}}, \\quad \\vev{h_{M}}=0, \\quad \\vev{h^{I}k_{I}^{u}}=\\vev{h^{I}\\xi_{I}^{i}}=0\\ .\n\\end{equation}\nNote that the tensor multiplets enter in the final result only implicitly since the $h^{I}$ and its derivatives are functions of all scalars $\\phi^{i}$.\nThe first equation implies that a $U(1)_{R}$-symmetry is always gauged\nby the graviphoton while the last equation shows that the unbroken\ngauge group in the vacuum is of the form $H\\times U(1)_{R}$. This\nreproduces the result of \\cite{Tachikawa:2005tq} that the $U(1)_{R}$\nhas to be unbroken and gauged in a maximally supersymmetric $\\textrm{AdS}_{5}$\nbackground. In the dual four-dimensional SCFT this $U(1)_{R}$ is\ndefined by a-maximization. Moreover we discussed that if the gauge\ngroup is spontaneously broken the massive vector multiplets\nare long multiplets. \nFinally, we showed that space of Goldstone bosons is given by\n$\\mathcal{G}=\\text{span}_{\\mathbb{R}}\\{\\vev{k_{I}^{u}}\\}$ which will be used in the next section to compute the moduli space $\\mathcal{M}$ of these vacua.\n\n\n\\section{Structure of the moduli space}\\label{sec:moduli}\n\nWe now turn to the computation of the moduli space $\\mathcal{M}$ of the maximally supersymmetric $\\textrm{AdS}_{5}$ vacua determined in the previous section.\nLet us denote by $\\mathcal{D}$ the space of all possible deformations of the\nscalar fields $\\phi\\rightarrow \\vev{\\phi}+\\delta \\phi$, $q\\rightarrow\n\\vev{q}+\\delta q$ that leave the conditions\n\\eqref{eq:backgroundmomentmaps} invariant. 
However, if the gauge group\nis spontaneously broken the corresponding Goldstone bosons are among\nthese deformations but they should not be counted as moduli. Thus the\nmoduli space is defined as the space of deformations $\\mathcal{D}$ modulo the space of\nGoldstone bosons $\\mathcal{G}$, i.e.\\ $\\mathcal{M}=\\mathcal{D} \/ \\mathcal{G}$. \nIn order to determine $\\mathcal{M}$ we vary (\\ref{eq:backgroundmomentmaps})\nto linear order and characterize the space $\\mathcal{D}$ spanned by $\\delta \\phi$\nand $\\delta q$ that are not fixed.\\footnote{Since we consider the\n variations of the vacuum equations \\eqref{eq:backgroundmomentmaps}\n to first order in the scalar fields, this procedure only gives a\n necessary condition for the moduli space.} We then show that the\nGoldstone bosons also satisfy the equations defining $\\mathcal{D}$ and\ndetermine the quotient $\\mathcal{D} \/ \\mathcal{G}$. \n\nLet us start by varying the second condition of (\\ref{eq:backgroundmomentmaps}). This yields\n\\begin{equation}\n\\vev{\\delta(h^{I}\\mu^{n}_{I})}= \\vev{(\\partial_{i}h^{I})\\,\\mu^{n}_{I}}\\,\\delta\\phi^{i}+\\vev{h^{I}\\nabla_{u}\\mu^{n}_{I}}\\,\\delta q^{u}=-\\tfrac{1}{2}\\vev{\\omega_{uv}^{n}h^{I}k_{I}^{v}}\\delta q^{u}\\equiv 0\\ ,\n\\end{equation}\nwhere we used (\\ref{eq:backgroundmomentmaps}) and\n(\\ref{eq:covdermomentmap}). \nSince this variation vanishes automatically, no conditions are imposed on the scalar field variation.\n\nThe variation of the first condition in (\\ref{eq:backgroundmomentmaps}) gives\n\\begin{equation}\\label{varone}\n\\begin{aligned}\n\\vev{\\delta(h_{i}^{I}\\mu^{n}_{I})}&=\\vev{(\\nabla_{j}h_{i}^{I})\\,\\mu^{n}_{I}}\\,\\delta\\phi^{j}+\\vev{h_{i}^{I}\\nabla_{u}\\mu_{I}^{n}}\\,\\delta q^{u}\\\\\n&=-\\sqrt{\\tfrac{2}{3}}\\vev{\\mu^{n}_{I}(h^{I}g_{ij}+h^{Ik}T_{ijk})}\\,\\delta \\phi^{j}-\\tfrac{1}{2}\\vev{h^{I}_{i}\\omega^{n}_{uv}k^{v}_{I}}\\,\\delta q^{u}\\\\\n&=-\\sqrt{\\tfrac{2}{3}}\\Lambda \\delta^{n3} \\delta\\phi_{i}-\\tfrac{1}{2}\\vev{h^{I}_{i}\\omega^{n}_{uv}k^{v}_{I}}\\,\\delta q^{u}=0\\ ,\n\\end{aligned}\n\\end{equation}\nwhere in the second step we used (\\ref{eq:covderh}), (\\ref{eq:covdermomentmap})\nwhile in the third we used (\\ref{eq:backgroundmomentmaps}). \nFor $n=1,2$ \\eqref{varone} imposes\n\\begin{equation}\n\\langle h^{I}_{i}\\omega_{uv}^{1,2}k^{v}_{I}\\rangle\\, \\delta q^{u} = 0\\ , \\label{eq:12}\n\\end{equation}\nwhile \nfor $n=3$ the deformations $\\delta \\phi_{i}$ can be expressed in terms of $\\delta q^{u}$ as\n\\begin{equation} \\label{eq:deltaphi}\n \\delta \\phi_{i} = -\\sqrt{\\tfrac{3}{2}}\\tfrac{1}{2\\Lambda}\\vev{h_{i}^{I}\\omega_{uv}^{3}k_{I}^{v}}\\, \\delta q^{u}\\ .\n\\end{equation}\nThus all deformations $\\delta \\phi_{i}$ are fixed either to vanish or to be related to $\\delta q^{u}$. In other words, the entire space of deformations can be spanned by scalars in the hypermultiplets only, i.e.\\ $\\mathcal{D}\\subset \\mathcal{T}_{H}$. 
Note that this is in agreement with \\eqref{eq:Killingvacuum} and also $\\mathcal{G} \\subset \\mathcal{T}_{H}$.\n\nFinally, we vary the third condition in (\\ref{eq:backgroundmomentmaps}) to obtain\n\\begin{equation}\n\\vev{\\delta(h^{I}k_{Iu})}=\\vev{\\partial_{i}h^{I}k_{Iu}}\\,\\delta\\phi^{i}+\\vev{h^{I}\\nabla_{v}k_{Iu}}\\,\\delta q^{v}=0.\n\\end{equation}\nInserting \\eqref{eq:deltaphi} and using \\eqref{eq:hmetric}, (\\ref{eq:backgroundmomentmaps}) we find\n\\begin{equation}\\label{eq:Killing1}\n\\big(\\tfrac{1}{2\\Lambda}\\vev{k^{Iu}\\omega^{3}_{vw}k_{I}^{w}} + \\vev{h^{I}\\nabla_{v}k_{I}^{u}}\\big)\\,\\delta q^{v} = 0\\ .\n\\end{equation}\nThus we are left with the two conditions \\eqref{eq:12} and\n \\eqref{eq:Killing1} whose solutions determine $\\mathcal{D}$. For a generic supergravity we will not solve them here in general. However the conditions alone suffice to prove that the moduli space is a K\\\"ahler submanifold of $\\mathcal{T}_H$ as we will now show.\n\nAs a first step we prove that the Goldstone bosons satisfy \\eqref{eq:12} and \\eqref{eq:Killing1}.\nWe know from section~\\ref{sec:vacua} that the Goldstone directions are\nof the form $\\delta q^{u} = c^I\\vev{k_{I}^{u}}$ where $c^I$ are constants.\nInserted into \\eqref{eq:12} we find\n\\begin{equation}\nc^I\\vev{h_{i}^{J}\\omega_{uv}^{1,2}k^{u}_{I}k^{v}_{J}}=2c^I\\vev{h_{i}^{J}f_{IJ}^{K}\\mu_{K}^{1,2}}\n= 0\\ ,\n\\end{equation}\nwhere we used (\\ref{equivariancevacuum}) and the fact that $\\vev{\\mu_{K}^{1,2}}=0$.\nTo show that the Goldstone bosons also satisfy (\\ref{eq:Killing1})\nwe first observe that\n\\begin{equation}\\label{eq:killingalgebra2}\n \\vev{h^{I}(\\nabla_{v}k_{I}^{u})k^{v}_{J}}= \\vev{h^{I}(\\partial_{v}k_{I}^{u})k_{J}^{v}-h^{I}(\\partial_{v}k_{J}^{u})k_{I}^{v}} = -\\vev{h^{I}[k_{I},k_{J}]^{u}} = \\vev{f_{IJ}^{K}h^{I}k_{K}^{u}}\\ ,\n\\end{equation}\nwhere \nin the first step we used \\eqref{eq:backgroundmomentmaps},\nadded a term which vanishes in the\nbackground\n and then in the second step used \\eqref{Killingc}.\nIn addition we need to show\n\\begin{equation}\\label{eq:structureconstants}\n\\vev{f_{IJ}^{K}h^{I}k_{K}^{u}}=\\vev{f_{IJ}^{K}h_{K}k^{Iu}}\\ .\n\\end{equation}\nIndeed, using \\eqref{eq:hmetric} and $\\vev{h^{I}k_{I}^{u}}=0$ we find\n\\begin{equation}\n\\vev{f_{IJ}^{K}h^{I}k_{K}^{u}}=\\vev{f_{IJ}^{K}h^{I}k^{Lu}a_{KL}}=\\vev{f_{IJ}^{K}h^{I}k^{Lu}h_{K}^{i}h_{Li}}\\ .\n\\end{equation}\nInserting \\eqref{eq:representation} evaluated in the vacuum, i.e.\\\n$\\vev{f_{IJ}^{K}h^{J}h_{K}^{i}}=\\vev{f_{IJ}^{K}h^{Ji}h_{K}}$ and using\nagain \\eqref{eq:hmetric}\nwe obtain \n\\begin{equation}\n\\vev{f_{IJ}^{K}h^{I}k_{K}^{u}}\n=\\vev{f_{IJ}^{K}h^{Ii}k^{Lu}h_{K}h_{iL}}=\\vev{f_{IJ}^{K}h_{K}k^{Lu}\\delta^{I}_{L}}=\\vev{f_{IJ}^{K}h_{K}k^{Iu}}\\ ,\n\\end{equation}\nwhich proves \\eqref{eq:structureconstants} as promised.\n\nTurning back to \\eqref{eq:Killing1}, we insert $\\delta q^{u}= c^{I}\n\\vev{k_{I}^{u}}$ and use \\eqref{equivariancevacuum} and \\eqref{eq:killingalgebra2}\nto arrive at\n\\begin{equation}\\label{GBint}\n\\tfrac{1}{2\\Lambda}c^{I}\\vev{k^{Ju}\\omega^{3}_{vw}k_{J}^{w}k_{I}^{v}}+c^{I}\\vev{h^{J}\\nabla_{v}k_{J}^{u}k_{I}^{v}}=\\tfrac{1}{\\Lambda}c^{I}\\vev{k^{Ju}f_{IJ}^{K}\\mu_{K}}+c^{I}\\vev{f_{JI}^{K}h^{J}k_{K}^{u}}\\ .\n\\end{equation}\nUsing again that $\\vev{\\mu_{I}}=\\Lambda \\vev{h_{I}}$ and applying \\eqref{eq:structureconstants}, this 
yields\n\\begin{equation}\n\\tfrac{1}{\\Lambda}c^{I}\\vev{k^{Ju}f_{IJ}^{K}\\mu_{K}}+c^{I}\\vev{f_{JI}^{K}h^{J}k_{K}^{u}}=(f_{JI}^{K}+f_{IJ}^{K})c^{I}\\vev{h^{J}k_{K}^{u}}= 0\\ .\n\\end{equation}\nThus the Goldstone directions $\\delta q^{u}=c^{I}\\vev{k_{I}^{u}}$ leave the vacuum conditions \\eqref{eq:backgroundmomentmaps} invariant and hence $\\mathcal{G} \\subset \\mathcal{D}$.\n\nLet us now consider the moduli space $\\mathcal{M} = \\mathcal{D} \/ \\mathcal{G}$ and show that\n$J^{3}(\\mathcal{M})=\\mathcal{M}$, i.e.\\ $J^{3}$ restricts to an almost complex\nstructure on $\\mathcal{M}$. Concretely we show that the defining equations for the moduli space, \\eqref{eq:12} and \\eqref{eq:Killing1}, are invariant under $J^{3}$. For equations (\\ref{eq:12}) this follows from the fact that $J^{3}$ interchanges the two equations. This can be seen by substituting $\\delta q'^{u} = (J^{3})^{u}_{v}\\delta q^{v}$ and using that $J^{1}J^{2}=J^{3}$ on a quaternionic K\\\"ahler manifold. \n\nTurning to \\eqref{eq:Killing1}, we note that since only\n$\\vev{\\mu_{I}^{3}}\\neq 0$ the covariant derivative\n\\eqref{eq:Jinvariance} of the Killing vectors $k_{I}^{u}$ commutes\nwith $J^{3}$ in the vacuum, i.e.\n\\begin{equation}\n \\vev{\\nabla_{u}k^{I}_{w}(J^{n})_{v}^{w}-(J^{n})_{u}^{w}\\nabla_{w}k^{I}_{v}}=2\\epsilon^{npq}\\vev{\\omega^{p}_{uv}\\mu^{Iq}} = 0\\ .\n\\end{equation}\nThis implies that the second term in \\eqref{eq:Killing1} is invariant\nunder $J^{3}$ and we need to show that this also holds for the first\nterm. In fact, we will show in the following\nthat this term vanishes on the moduli space and is only \nnonzero for Goldstone directions.\n\n\nLet us first note that in general\n$\\rk{\\vev{k_{I}^{u}\\omega_{vw}^{3}k^{wI}}}\\leq\\rk{\\vev{k_{I}^{u}}}=n_{G}$.\nHowever, \n$\\vev{k_{I}^{u}\\omega_{vw}^{3}k^{wI}k_{J}^{v}} \\neq 0$ (as we\nsaw in \\eqref{GBint}) implies that the rank of the two matrices has\nto coincide. This in turn says that the first term in\n\\eqref{eq:Killing1}\ncan only be nonzero in the Goldstone directions and thus has to\nvanish\nfor the directions spanning $\\mathcal{M}$. Thus the whole equation \\eqref{eq:Killing1} is $J^{3}$-invariant on\n$\\mathcal{M}$. \nTherefore we have an almost complex structure\n$\\tilde{J}:=J^{3}\\vert_{\\mathcal{M}}$ and a compatible metric\n$\\tilde{G}:=G\\vert_{\\mathcal{M}}$ on $\\mathcal{M}$. Thus $(\\mathcal{M}, \\tilde{G}, \\tilde{J})$ is an almost hermitian submanifold of the quaternionic K\\\"ahler manifold $(\\mathcal{T}_{H}, G, Q)$. \n\nIn the following we want to use theorem 1.12 of \\cite{Alekseevsky:2001om}: an almost Hermitian submanifold $(M, G, J)$ of a quaternionic K\\\"ahler manifold $(\\tilde{M}, \\tilde{G}, Q)$ is K\\\"ahler if and only if it is totally complex, i.e.\\ if there exists a section $I$ of $Q$ that anticommutes with $J$ and satisfies\n\\begin{equation}\nI(T_{p}M) \\perp T_{p}M \\quad \\forall p\\in M\\ .\n\\end{equation}\nIn particular, this condition is satisfied if the associated fundamental two-form $\\omega_{uw}=G_{uw}I_{v}^{w}$ on $M$ vanishes.\n\nNow let us show that the moduli space $\\mathcal{M}$ actually is totally\ncomplex and hence K\\\"ahler. 
To do so, we use \\eqref{eq:covderkilling}\nand \\eqref{SLdef} to \nnote that in the vacuum\n\\eqref{eq:momentmapsvacuum} \n$ \\vev{\\omega^{3}_{uv}}$ is given by \n\\begin{equation}\\label{eq:omega3}\n \\vev{\\omega^{3}_{uv}}=\\tfrac{2}{\\Lambda}\\vev{h^{I}\\nabla_{u}k_{Iv}-L_{uv}}\\ .\n\\end{equation}\nWe just argued that \n$\\vev{k_{I}^{u}\\omega_{vw}^{3}k^{wI}}$ vanishes on $\\mathcal{M}$\nand thus \\eqref{eq:Killing1} projected onto $\\mathcal{M}$ also implies\n\\begin{equation}\\label{eq:nablaG}\n \\vev{h^{I} \\nabla_{u}k_{vI}}\\vert_{\\mathcal{M}} = 0 \\ .\n\\end{equation}\nSince $\\vev{\\omega^{1}_{uv}}=-\\vev{\\omega^{3}_{uw}(J^{2})_{v}^{w}}$, we can multiply \\eqref{eq:omega3} with $-(J^{2})^{w}_{v}$ from the right and obtain\n\\begin{equation}\n\\vev{\\omega^{1}_{uv}}\\vert_{\\mathcal{M}} =\n\\tfrac{2}{\\Lambda}\\vev{S^{2}_{uv}-h^{I}\n \\nabla_{u}k_{wI}(J^{2})_{v}^{w}}\\vert_{\\mathcal{M}}=0\\ ,\n\\end{equation}\nwhere in the first step we used \\eqref{SLdef}.\nThis expression vanishes due to \n\\eqref{eq:nablaG} and the fact that $S^{2}_{uv}$ is symmetric while\n$\\omega^{1}_{uv}$ is antisymmetric.\nThus $\\mathcal{M}$ is totally complex and in particular $(\\mathcal{M}, \\tilde{G}, \\tilde{J})$ is a K\\\"ahler submanifold. \n\nAs proved in \\cite{Alekseevsky:2001om} a K\\\"ahler submanifold can\nhave at most half the dimension of the ambient quaternionic K\\\"ahler\nmanifold, i.e.\\ $\\text{dim}(\\mathcal{M}) \\leq 2n_{H}$.\\footnote{Applying the same method as in $d=4$, $\\mathcal{N}=2$ this can be checked explicitly \\cite{deAlwis:2013jaa}.}\nNote that in the case of an unbroken gauge group we have $\\mathcal{G} = \\{\\emptyset\\}$ and thus $\\mathcal{D}=\\mathcal{M}$. This is the case of maximal dimension of the moduli space. If the gauge group is now spontaneously broken then additional scalars are fixed by \\eqref{eq:12}. Since $\\mathcal{M}$ is $J^{3}$-invariant, every $\\delta q^{u} \\in \\mathcal{M}$ can be written as $\\delta q^{u} = (J^{3})_{v}^{u}\\delta q'^{v}$ for some $\\delta q'^{u}\\in \\mathcal{M}$. Combined with the fact that $J^{1}J^{2}=J^{3}$ this implies that the two conditions in \\eqref{eq:12} are equivalent on $\\mathcal{M}$. Furthermore we have $\\rk{\\vev{h^{I}_{i}\\omega_{uv}^{1}k_{I}^{v}}}=\\rk{\\vev{k_{u}^{I}}}=n_{G}$ and thus $n_{G}$ scalars are fixed by \\eqref{eq:12}. 
In conclusion, we altogether have\n\\begin{equation}\n\\text{dim}(\\mathcal{M})=\\text{dim}(\\mathcal{D})-\\text{dim}(\\mathcal{G})\\leq (2n_{H}-n_{G})-n_{G}\\ ,\n\\end{equation}\nso the moduli space has at most real dimension $2n_{H}-2n_{G}$.\n\n\n\n\n\\section*{Acknowledgments}\nThis work was supported by the German Science Foundation (DFG) under\nthe Collaborative Research Center (SFB) 676 ``Particles, Strings and the Early\nUniverse'', the Research Training Group (RTG) 1670 ``Mathematics\ninspired by String Theory and Quantum Field Theory'' and the Joachim-Herz Stiftung.\n\nWe have benefited from conversations and correspondence with David Ciupke, Peter-Simon Dieterich, Malte Dyckmanns, Jonathan Fisher, Severin L\\\"ust, Stefan Vandoren and Owen Vaughan.\n\n\n\n\n\n\n\n\n\\newpage \n\n\\providecommand{\\href}[2]{#2}\\begingroup\\raggedright","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nThe theory of artificial neural networks (ANN) represents an open research field setting the stage for the implementation of a statistical mechanical approach in novel interdisciplinary problems, such as the modeling of the collective behavior of the human brain neurons. An important field of application of ANN is represented by the pattern recognition analysis \\cite{Egmont,Wang}, which has received an increasing interest in the literature, witnessed by the extensive application of ANN to tackle complex real-word problems, e.g. in medical diagnosis \\cite{Haya,Jiang,Shayea} and in biological sequences analysis \\cite{Ding,Qian, Condon,Cart}.\nRecent works, in this field, paved also the way to the systematic use of technical tools borrowed from Information Theory and Statistical Mechanics \\cite{McKay,Anand,Tkacik}.\\\\\nIn this paper, in particular, we adopt information theoretic methods \\cite{Kim,Vihn} to classify a sequence of hazelnuts images, and show how our approach allows for improving the performance of pattern recognition procedures performed via ANN algorithms.\nFrom a preliminary statistical analysis on the image histograms, we identify some relevant observables to be used in the implementation of a machine learning algorithm. A special focus of our approach is on the role of \\textit{fluctuations} of the histograms around the corresponding \\textit{mean} distribution. In particular, by making use of various notions of ``distance'' between histograms, we introduce two statistical scales, whose magnitude affects the performance of a machine learning algorithm in disentangling and extracting the distinctive features of the hazelnuts.\\\\\nThe paper is organized as follows.\\\\\nIn Sec. \\ref{sec:sec1} we introduce the two aforementioned statistical scales and discuss their dependence on a quantity referred to as the ``image resolution''. We comment on the need of a large separation between two such scales to obtain an efficient pattern recognition: the lack of a wide separation between them is due to large histograms fluctuations which blur the distinctive features of the hazelnuts, thus hindering a proper classification of the data. \\\\\nIn Sec. \\ref{sec:sec2} we test, then, the prediction of our statistical analysis by employing a machine learning algorithm, known as Support Vector Machines (SVM) \\cite{Haykin,Webb}. 
The numerical results we obtained not only confirm the relevance of the aforementioned scale separation, but also show that the predicted onset of an optimal scale of description can be recovered through the use of a SVM algorithm, provided that its performance is \\textit{averaged} over a sufficiently large set of training samples. \\\\\nConclusions are finally drawn in Sec. \\ref{sec:conc}.\\\\\nThe main results of this work can be summarized as follows:\n\\begin{itemize}\n\t\\item We introduce two typical statistical scales, whose magnitude critically affects the performance of a pattern recognition algorithm based on statistical variables;\n \\item We describe the dependence of such scales on the scale of resolution, thus unveiling the onset of an optimal resolution at which the pattern recognition is favoured;\n \\item We numerically recover the results of the statistical analysis by using a SVM algorithm, and also shed light on the role of \\textit{averaging} the performance of a SVM over sufficiently many training samples.\n\\end{itemize}\n\n\\section{The original set of hazelnut images: a statistical approach}\n\\label{sec:sec1}\n\nIn this work we consider the problem of pattern recognition applied to a sequence of hazelnut images, to be categorized into three different sets: ``good'' ($G$), ``damaged'' ($D$) and ``infected'' ($I$). In the sequel, we will use the shorthand notation $\\mathcal{S}=\\{G,D,I\\}$, and, for any $A \\in \\mathcal{S}$, we will also denote $N_A=card(A)$. Our database consists of a set of $800$ x-ray scanned images, cf. Fig. \\ref{hazelnuts}, with $N_G=750$, $N_D=25$ and $N_I=25$. The analysis outlined below is meant to provide a guiding strategy to assess, and possibly enhance, the performance of pattern recognition methods based on ANN algorithms. The prominent distinctive features of the three sets $G$, $D$ and $I$ are not detectable from a solely visual inspection of the x-ray images. Hence, in order to extract some valuable information, we relied on the computation of the histograms of the hazelnut images, shown in Fig. \\ref{hazelnuts2}.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{S_6.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{S_7.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{S_8.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{A_1.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{A_2.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{A_3.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{C_15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{C_16.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{C_17.pdf}\n\\caption{X-ray scanned images of good hazelnuts (top row), damaged hazelnuts (middle row) and infected hazelnuts (bottom row).}\\label{hazelnuts}\n\\end{figure}\n\nTherefore, for any $A \\in \\mathcal{S}$, we computed the number of pixels, in the image pertaining to the $i$-th hazelnut belonging to the set $A$ (with $i= 1,...,N_{A}$), characterized by the shade of gray $j$ (conventionally running from the value $0$ - black - to $255$ - white). After normalizing wrt the total number of pixels forming the same image, we thus obtained the so-called image histogram $p_i^{(A)}(j)$. 
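For concreteness, a minimal sketch of this histogram computation (in Python with NumPy; the function and variable names are illustrative and do not refer to our actual processing pipeline) reads\n\\begin{verbatim}\nimport numpy as np\n\ndef image_histogram(img):\n    # img: 2D array of 8-bit gray levels, values in 0..255\n    counts = np.bincount(img.ravel(), minlength=256)\n    return counts / img.size   # normalized wrt the total number of pixels\n\\end{verbatim}\n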
We could also compute, then, the mean histogram pertaining to $A$, denoted by $\\overline{p}_i^{(A)}(j)$, which was obtained by averaging over the $N_A$ histograms $p_i^{(A)}(j)$.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{meanhistS.pdf}\n\\includegraphics[width=0.45\\textwidth]{meanhistA.pdf}\\\\\n\\includegraphics[width=0.45\\textwidth]{meanhistC.pdf}\n\\includegraphics[width=0.45\\textwidth]{meanhistTot.pdf}\n\\caption{Image histograms of good hazelnuts (top left), damaged hazelnuts (top right), infected hazelnuts (bottom left). The horizontal axis displays the shades of gray, conventionally running from $0$ to $255$. For each of the three sets, the figures display the (normalized) histograms of single hazelnuts as well as the (normalized) mean histogram. On the bottom right corner, the mean histograms of the three different sets are compared.}\\label{hazelnuts2}\n\\end{figure}\n\nA quantitative characterization of the images can be afforded by introducing various notions of ``distance'' between different histograms \\cite{Cha}: we considered, in particular, the norm in $L^1$, in $L^2$ (euclidean), in $L^\\infty$, the Squared $\\chi^2$ distance and the Jeffrey's divergence \\cite{SHC}. \nIt is worth briefly recalling some basic aspects concerning the latter two notions of distance, borrowed from probability theory.\nThe Squared $\\chi^2$ distance corresponds to the symmetrized version of the Pearson's $\\chi^2$ test \\cite{Plackett}, which, given a histogram $p(j)$ and a reference histogram $q(j)$, defines their relative distance as:\n\\begin{equation}\nd_{\\chi^2}=\\sum_j\\frac{(p(j)-q(j))^2}{q(j)} \\label{chisq} \\quad .\n\\end{equation}\nThus, the quantity $d_{\\chi^2}$ in (\\ref{chisq}) resembles the standard euclidean distance between the two histograms, except that it introduces a weight corresponding to the inverse of the reference histogram.\\\\\nOn the other hand, the Jeffreys' divergence \\cite{Jeffreys} belongs to the Shannon entropy family \\cite{Beck}, and corresponds to the symmetrized version of the Kullback-Leibler (K-L) divergence (or \\textit{relative entropy}) \\cite{KL}, defined as:\n\\begin{equation}\nd_{K-L}(p\\|q)=\\sum_j \\left(p(j)\\log\\left(\\frac{p(j)}{q(j)}\\right) \\right)=H(p,q)-H(p) \\label{KL} \\quad ,\n\\end{equation}\nwhere $H(p,q)$ is the cross entropy of $p$ and $q$, and $H(p)$ is the entropy of $p$ \\cite{Kull,Jay}.\nMore in general, the K-L divergence (\\ref{KL}), is a member of the family of the so-called $f$-divergencies \\cite{Mori,Ali} and stems as a limiting case of the more general R\\'enyi's (or $\\alpha$-) divergence \\cite{Xu}. It is worth recalling its definition: given any two continuous distributions $p$ and $q$, over a space $\\Omega$, with $p$ absolutely continuous wrt $q$, the $f$-divergence of $p$ from $q$ is\n\\begin{equation}\nd_{f}(p\\|q)=\\int_\\Omega f\\left(\\frac{dp}{dq}\\right)dq \\quad ,\n\\end{equation}\nwhere $f$ is a convex function such that $f(1)=0$.\\\\\nThen, for any $A \\in \\mathcal{S}$, we considered the distance (or \\textit{fluctuation}), defined according to the various notions introduced above, between the histogram $p_i^{(A)}(j)$ and the corresponding mean $\\overline{p}_i^{(A)}(j)$. Next, by averaging over the set $A$, one obtains a characteristic ``statistical scale'' (still depending on the chosen notion of distance) characterizing the fluctuations within each set $A$. \nTo clarify the meaning of the entries in Tab. 
\\ref{normtot}, let us illustrate, for instance, the procedure to calculate the quantity $\\langle d \\rangle^{(A)}_2$. To this aim, we introduce the euclidean distance between the histograms $p_i^{(A)}(j)$ and $\\overline{p}_i^{(A)}(j)$:\n\\begin{equation}\nd_{2,i}^{(A)}=\\sqrt{\\sum_{j=1}^{N_g}|p_i^{(A)}(j)-\\overline{p}_i^{(A)}(j)|^2} \\label{eucl}\n\\end{equation}\n\n\\begin{table}[bth]\n\\centering\n\\begin{tabular}{c|c|c|c|c|c|}\n \n & $\\langle d \\rangle^{(A)}_1$ & $\\langle d \\rangle^{(A)}_2$ & $\\langle d \\rangle^{(A)}_\\infty$ & $\\langle d \\rangle^{(A)}_{\\chi^2}$ & $\\langle d \\rangle^{(A)}_{J}$\\\\\n\\hline\n $A =G$ & $0.2079$ & $0.0372$ & $0.0139$ & $0.0495$ & $0.0369$ \\\\\n\\hline\n $A =D$ & $0.2485$ & $0.0488$ & $0.0162$ & $0.0776$ & $0.0477$ \\\\ \n\\hline\n $A =I$ & $0.2097$ & $0.0379$ & $0.0145$ & $0.0435$ & $0.0401$ \\\\\n \\hline\n\\end{tabular}\n %\n\\caption{Typical fluctuation of the histograms of the hazelnuts from the corresponding mean histogram, within each of the sets $G$, $D$, and $I$. The quantities $\\langle d \\rangle^{(A)}$ are evaluated by using different notions of distances: norm in $L^1$, in $L^2$ (euclidean), in $L^\\infty$, Squared $\\chi^2$ distance and Jeffreys divergence.}\n %\n \\label{normtot}\n \\end{table}\n\n\\begin{table}[bth]\n\\centering\n\\begin{tabular}{c|c|c|c|c|c|}\n \n & $\\Delta^{(A,B)}_{1}$ & $\\Delta^{(A,B)}_{2}$ & $\\Delta^{(A,B)}_{\\infty}$ & $\\Delta^{(A,B)}_{\\chi^2}$ & $\\Delta^{(A,B)}_{J}$\\\\\n\\hline\n $A=G, B=D$ & $0.0923$ & $0.0162$ & $0.0036$ & $0.0089$ & $0.0200$ \\\\\n\\hline\n $A=D, B=I$ & $0.0533$ & $0.0090$ & $0.0028$ & $0.0021$ & $0.0030$ \\\\ \n\\hline\n $A=G, B=I$ & $0.0526$ & $0.0115$ & $0.0051$ & $0.0044$ & $0.0124$ \\\\ \n \\hline\n\\end{tabular}\n %\n\\caption{Average distances between between pairs of mean histograms referring to two different sets $A$ and $B$, evaluated, as in Tab \\ref{normtot}, using different notions of distance: norm in $L^1$, in $L^2$ (euclidean), in $L^\\infty$, Squared $\\chi^2$ distance and Jeffreys divergence.}\n %\n \\label{Deltatot}\n \\end{table}\n\nFrom the knowledge of $d_{2,i}^{(A)}$ in (\\ref{eucl}), the quantity $\\langle d \\rangle^{(A)}_2$, shown in Tab. \\ref{normtot}, is then computed by averaging over $A$:\n\\begin{equation}\n\\langle d \\rangle_2^{(A)} =\\frac{1}{N_{A}}\\sum_{i=1}^{N_{A}}d_{2,i}^{(A)} \\label{aver}\n\\end{equation} \nIt is worth noticing, from Tab. \\ref{normtot}, that, no matter of what notion of distance is adopted, the magnitude of the fluctuations is not significantly affected by $N_{A}$.\nThe scale $\\langle d \\rangle^{(A)}$, which, for any $A \\in \\mathcal{S}$, is of the order $\\langle d \\rangle^{(A)}\\simeq 10^{-2}$, can be thus regarded as an intrinsic statistical scale pertaining to the set $A$. \nIt is worth comparing such scale with another statistical scale, denoted by $\\Delta^{(A,B)}$, whose values are listed in Tab. \\ref{Deltatot}. The quantity $\\Delta^{(A,B)}$ is defined as the distance, computed by using the various notions of distance introduced above, between the pair of mean histograms relative to the sets $A$ and $B$, with $(A,B)\\in\\mathcal{S}$ and $A \\neq B$. The symmetric form of the distances introduced above entails, in particular, that $\\Delta^{(A,B)}=\\Delta^{(B,A)}$. 
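To make these definitions concrete, the distances and the scales defined in \\eqref{eucl}, \\eqref{aver} can be evaluated with a few lines of code; the following sketch (Python with NumPy; purely illustrative, with hypothetical array names) computes the euclidean fluctuation scale $\\langle d \\rangle^{(A)}_{2}$, the distance $\\Delta^{(A,B)}_{2}$, the Squared $\\chi^{2}$ distance of \\eqref{chisq} and the Jeffreys divergence, the latter implemented as the symmetrized form of \\eqref{KL} with zero-probability bins excluded as a practical regularization:\n\\begin{verbatim}\nimport numpy as np\n\ndef d2(p, q):              # euclidean distance between two histograms\n    return np.sqrt(np.sum((p - q)**2))\n\ndef d_chi2(p, q):          # Pearson-type distance wrt the reference q\n    m = q > 0              # singular (empty) reference bins are skipped\n    return np.sum((p[m] - q[m])**2 / q[m])\n\ndef d_jeffreys(p, q):      # symmetrized Kullback-Leibler divergence\n    m = (p > 0) & (q > 0)\n    return np.sum((p[m] - q[m]) * (np.log(p[m]) - np.log(q[m])))\n\n# hists_A, hists_B: lists of normalized histograms of two sets A and B\npbar_A = np.mean(hists_A, axis=0)\npbar_B = np.mean(hists_B, axis=0)\nd_scale_A = np.mean([d2(p, pbar_A) for p in hists_A])  # <d>_2^(A)\nDelta_AB = d2(pbar_A, pbar_B)                          # Delta^(A,B)\n\\end{verbatim}\n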
\nA better interpretation of the meaning of the scales $\\langle d \\rangle^{(A)}$ and $\\Delta^{(A,B)}$ can be achieved by noticing that a large value of $\\langle d \\rangle^{(A)}$ mirrors the presence of a considerable amount of noise on top of the mean histogram $\\overline{p}_i^{(A)}(j)$, which thus blurs the distinctive features of the set $A$. On the contrary, a larger value of $\\Delta^{(A,B)}$ reflects a more significant separation between the mean histograms of the two sets $A$ and $B$, which instead favours the pattern recognition. In the sequel of this Section we will focus, therefore, on the ratio of two such scales.\nFrom an inspection of Tabs. \\ref{normtot} and \\ref{Deltatot}, we first observe that $\\Delta^{(A,B)}\\sim\\langle d \\rangle^{(A)}$. That is, the two scales are comparable: the fluctuations, within each set, are comparable with the typical distances between different sets. This entails, hence, that the histograms shown in Fig. \\ref{hazelnuts2} can not be regarded as a useful source of information to perform a pattern recognition.\nA different route can be pursued by just focusing on a selected portion of the original images. This approach is motivated by the assumption that the distinctive features of each of the three sets are mostly contained in the ``nuclei'' of the hazelnuts. We calculated, therefore, the histograms corresponding to the cropped portions of the original images, delimited by the tick red rectangles shown in Fig. \\ref{hazelnuts3}. \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.31\\textwidth]{HistoS1.pdf}\n\\includegraphics[width=0.31\\textwidth]{HistoS6.pdf}\n\\includegraphics[width=0.31\\textwidth]{HistoS12.pdf}\\\\\n\\includegraphics[width=0.31\\textwidth]{HistoA1.pdf}\n\\includegraphics[width=0.31\\textwidth]{HistoA12.pdf}\n\\includegraphics[width=0.31\\textwidth]{HistoA13.pdf}\\\\\n\\includegraphics[width=0.31\\textwidth]{HistoC2.pdf}\n\\includegraphics[width=0.31\\textwidth]{HistoC6.pdf}\n\\includegraphics[width=0.31\\textwidth]{HistoC13.pdf}\n\\caption{Image histograms of good hazelnuts (top row), damaged hazelnuts (middle row) and infected hazelnuts (bottom row). Each image shows the histogram of the entire hazelnut (top histogram) and the histogram referring to the fraction of the image delimited by the thick red rectangles, characterized by $\\epsilon=80$ and $\\rho=2.5$.}\\label{hazelnuts3}\n\\end{figure}\n\nThe red rectangles in Fig. \\ref{hazelnuts3} are identified by the pair of parameters $\\{\\epsilon, \\rho\\}$, where $\\epsilon$, related to the image resolution, is defined as the number of pixels comprised along the horizontal length of the rectangles, while $\\rho$ is the ratio of the number of pixels along the vertical length to the corresponding number of pixels along the horizontal one.\nIn our simulations, the values of the parameters $\\{\\epsilon, \\rho\\}$ were kept constant when calculating the histograms relative to different hazelnut nuclei.\nFigure \\ref{hazelnuts3} refers, for instance, to the case corresponding to $\\epsilon=80$ and $\\rho = 2.5$. \nIn Figs. \\ref{nuclS},\\ref{nuclA} and \\ref{nuclC}, shown is the result of the image processing of the hazelnut nuclei, performed through a noise removal filter (adaptive Wiener filtering) and various edge-detector algorithms. 
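A minimal sketch of how such a cropped region and its histogram can be extracted, given the pair $\\{\\epsilon, \\rho\\}$, is the following (Python with NumPy; for simplicity the rectangle is here centred on the image, an assumption made for illustration only, and the names are placeholders):\n\\begin{verbatim}\nimport numpy as np\n\ndef crop_and_histogram(img, eps, rho):\n    # eps: horizontal size in pixels, rho: vertical-to-horizontal ratio\n    height = int(round(rho * eps))\n    r0 = (img.shape[0] - height) // 2   # centred crop (illustrative choice)\n    c0 = (img.shape[1] - eps) // 2\n    nucleus = img[r0:r0 + height, c0:c0 + eps]\n    counts = np.bincount(nucleus.ravel(), minlength=256)\n    return counts / nucleus.size        # normalized histogram of the crop\n\\end{verbatim}\n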
\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{NucleusS10.pdf}\n\\includegraphics[width=0.45\\textwidth]{NucleusS8.pdf}\\\\\n\\includegraphics[width=0.45\\textwidth]{NucleusS6.pdf}\n\\includegraphics[width=0.45\\textwidth]{NucleusS13.pdf}\n\\caption{Image processing of the hazelnut nuclei belonging to the set $G$, for $\\epsilon=100$ and $\\rho=1.5$, by means of edge-detection algorithms, respectively: Sobel's algorithm (top right figure) , Canny's algorithm (bottom left figure) and Roberts' algorithm (bottom right figure).}\\label{nuclS}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{NucleusA1.pdf}\n\\includegraphics[width=0.45\\textwidth]{NucleusA6.pdf}\\\\\n\\includegraphics[width=0.45\\textwidth]{NucleusA10.pdf}\n\\includegraphics[width=0.45\\textwidth]{NucleusA15.pdf}\n\\caption{Image processing of the hazelnut nuclei belonging to the set $D$, for $\\epsilon=100$ and $\\rho=1.5$, by means of edge-detection algorithms, respectively: Sobel's algorithm (top right figure) , Canny's algorithm (bottom left figure) and Roberts' algorithm (bottom right figure).}\\label{nuclA}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{NucleusC3.pdf}\n\\includegraphics[width=0.45\\textwidth]{NucleusC7.pdf}\\\\\n\\includegraphics[width=0.45\\textwidth]{NucleusC6.pdf}\n\\includegraphics[width=0.45\\textwidth]{NucleusC11.pdf}\n\\caption{Image processing of the hazelnut nuclei belonging to the set $I$, for $\\epsilon=100$ and $\\rho=1.5$, by means of edge-detection algorithms, respectively: Sobel's algorithm (top right figure) , Canny's algorithm (bottom left figure) and Roberts' algorithm (bottom right figure).}\\label{nuclC}\n\\end{figure}\n\nIn Fig. \\ref{hazelnutsnucl}, which is worth comparing with Fig. \\ref{hazelnuts2}, we plotted the mean histograms relative to the cropped images, with $\\epsilon=80$ and $\\rho =2.5$. The question arises, then, as to whether the separation between the two scales $\\langle d \\rangle^{(A)}$ and $\\Delta^{(A,B)}$ is amenable to be enhanced by tuning the two parameters $\\epsilon$ and $\\rho$. \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{meanhistSnucl80r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistAnucl80r25.pdf}\\\\\n\\vspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistCnucl80r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl80r25.pdf}\n\\caption{Image histograms of the hazelnut nuclei belonging to the sets $G$ (top left), $D$ hazelnuts (top right), $I$ hazelnuts (bottom left). For each of the three sets, the figures display the histograms of single hazelnuts as well as the mean histogram in the corresponding set (mean histogram). On the bottom right corner, the mean histograms of the three sets are compared. All the histograms were obtained by setting $\\epsilon = 80$ and $\\rho=2.5$.}\\label{hazelnutsnucl}\n\\end{figure}\n\nWe thus studied the behaviour of the mean histograms, shown in Fig. \\ref{hazelnutsnucl}, as well as of the typical fluctuations occurring in each set, as a function of $\\epsilon$ and $\\rho$: in our simulations, $\\epsilon$ spans a broad range of values, whereas we let $\\rho$ attain the values $1.5$ and $2.5$, cf. Fig. \\ref{rect}. 
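Schematically, this scan over the resolution can be organized as in the following sketch (reusing the illustrative helper functions \\texttt{crop\\_and\\_histogram} and \\texttt{d2} introduced above, and assuming a hypothetical dictionary \\texttt{images} collecting the image arrays of the three sets; the grid of $\\epsilon$ values is also purely illustrative):\n\\begin{verbatim}\nimport numpy as np\n\nfor rho in (1.5, 2.5):\n    for eps in range(20, 101, 10):      # hypothetical grid of resolutions\n        hists = {A: [crop_and_histogram(img, eps, rho) for img in images[A]]\n                 for A in ('G', 'D', 'I')}\n        pbar = {A: np.mean(hists[A], axis=0) for A in hists}\n        d_scale = {A: np.mean([d2(p, pbar[A]) for p in hists[A]])\n                   for A in hists}\n        Delta_GD = d2(pbar['G'], pbar['D'])\n\\end{verbatim}\n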
\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{rect.pdf}\n\\caption{Different values of the scale of resolution: $\\epsilon=80$ (red rectangle), $\\epsilon =60$ (magenta rectangle), $\\epsilon=40$ (blue rectangle), $\\epsilon=20$ (green rectangle). All the colored rectangles shown in the picture are obtained by setting $\\rho =2.5$.}\\label{rect}\n\\end{figure}\n\nIn Fig. \\ref{nuclhist1} and \\ref{nuclhist2}, the mean histograms of the sets $G$, $D$ and $I$ are shown for different values of $\\epsilon$, and for two different values of $\\rho$.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl80r15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl60r15.pdf}\\\\\n\\vspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl40r15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl20r15.pdf}\n\\caption{Mean histograms of the hazelnut nuclei at different scales of resolution: $\\epsilon =80$ (top left), $\\epsilon =60$ (top right), $\\epsilon =40$ (bottom left) and $\\epsilon =20$ (bottom right), with $\\rho=1.5$.}\\label{nuclhist1}\n\\end{figure}\n\nWe focused, in particular, on the investigation of the dependence of the scales $\\langle d \\rangle^{(A)}(\\epsilon; \\rho)$ and $\\Delta^{(A,B)}(\\epsilon; \\rho)$ on the resolution $\\epsilon$. \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl80r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl60r25.pdf}\\\\\n\\vspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl40r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl20r25.pdf}\n\\caption{Mean histograms of the nuclei of the hazelnuts at different scales of description: $\\epsilon =80$ (top left), $\\epsilon =60$ (top right), $\\epsilon =40$ (bottom left) and $\\epsilon =20$ (bottom right), with $\\rho=2.5$.}\\label{nuclhist2}\n\\end{figure}\n\nFigures \\ref{norm1} and \\ref{mean1} illustrate the behaviour of $\\langle d \\rangle^{(A)}$ and $\\Delta^{(A,B)}$ vs. $\\epsilon$ for $\\rho=1.5$, whereas \nFigs. \\ref{norm2} and \\ref{mean2} show the analogous behaviour of $\\langle d \\rangle^{(A)}$ and $\\Delta^{(A,B)}$ vs. $\\epsilon$ for $\\rho=2.5$\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{FluctS15.pdf}\n\\includegraphics[width=0.45\\textwidth]{FluctA15.pdf}\n\\includegraphics[width=0.45\\textwidth]{FluctC15.pdf}\n\\caption{Behaviour of the distances $\\langle d \\rangle^{(A)}$ vs. $\\epsilon$, with $\\rho=1.5$.}\\label{norm1}\n\\end{figure}\n\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{dAS15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{dSC15.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{dAC15.pdf}\n\\caption{Behaviour of the distances $\\Delta^{(A,B)}$ vs. $\\epsilon$, with $\\rho=1.5$.}\\label{mean1}\n\\end{figure}\n\nThe two plots \\ref{mean1} and \\ref{mean2} reveal that reducing $\\epsilon$ leads, on the one hand, to a remarkable increase of $\\Delta^{(A,B)}$, which attains an order of magnitude of about $\\Delta^{(A,B)} \\simeq 10^{-1}$. On the other hand, this effect is counterbalanced by the simultaneous increase of the scale $\\langle d \\rangle^{(A)}$, evidenced in Figs. \\ref{norm1} and \\ref{norm2}, which turns out to be, for both the considered values of $\\rho$, of the same order of magnitude of $\\Delta^{(A,B)}$. 
This is more clearly visible in Fig. \\ref{ratio}, which illustrates the behaviour of the ratio of $\\Delta^{(A,B)}$ to $\\langle d \\rangle^{(A)}$ and to $\\langle d \\rangle^{(B)}$, for different values of $\\epsilon$, obtained by setting $A = G$ and $B = D$.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{FluctS25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{FluctA25.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{FluctC25.pdf}\n\\caption{Behaviour of the distances $\\langle d \\rangle^{(A)}$ vs. $\\epsilon$, with $\\rho=2.5$.}\\label{norm2}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{dAS25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{dSC25.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{dAC25.pdf}\n\\caption{Behaviour of the distances $\\Delta^{(A,B)}$ vs. $\\epsilon$, with $\\rho=2.5$.}\\label{mean2}\n\\end{figure}\n\nThe plots in Fig. \\ref{ratio} confirm that the two scales $\\Delta^{(A,B)}$ and $\\langle d \\rangle^{(A)}$ remain of the same order, also when reducing $\\epsilon$. \nOn the contrary, an efficient pattern recognition, based on the analysis of the image histograms, can be obtained if the ratio $\\Delta^{(A,B)} \/\\langle d \\rangle^{(A)} \\gg 1$, i.e. when the mean statistical distance between different sets overwhelms the typical size of fluctuations characteristic of each set.\nThus, the study of the behaviour of the two latter scales allows one to predict a poor performance of a machine learning algorithm aiming at classifying the hazelnuts on the basis of the image histograms.\nNevertheless, an interesting aspect can be evinced from an inspection of Fig. \\ref{ratio}: despite the similarity of the magnitudes of the two statistical scales, the plot of their ratio vs. $\\epsilon$ yields a non monotonic function. \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{ratioASAr15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{ratioASSr15.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{ratioASAr25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{ratioASSr25.pdf}\n\\caption{Behaviour of the ratio $\\Delta^{(G,D)}\/\\langle d \\rangle^{(G)}$ (left column) and $\\Delta^{(G,D)}\/\\langle d \\rangle^{(D)}$ (right column) vs. $\\epsilon$, for $\\rho=1.5$ (upper row) and $\\rho=2.5$ (lower row).}\\label{ratio}\n\\end{figure}\n\nTo better evidence this point, we plotted, in Fig. \\ref{ratio2}, the ratio of the scale $\\Delta^{(A,B)}$ to the geometric mean $\\sqrt{\\langle d \\rangle^{(A)} \\langle d \\rangle^{(B)}}$, where we set $A=G, B=D$ (left panel) and $A=G, B=I$ (right panel). \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{ratioASr15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{ratioCSr15.pdf}\n\\caption{\\textit{Left panel:} Behaviour of the ratio $\\Delta^{(G,D)}\/\\sqrt{\\langle d \\rangle^{(G)} \\langle d \\rangle^{(D)}}$ vs. $\\epsilon$, for $\\rho=1.5$. \\textit{Right panel:} Behaviour of the ratio $\\Delta^{(G,I)}\/\\sqrt{\\langle d \\rangle^{(G)} \\langle d \\rangle^{(I)}}$ vs. $\\epsilon$, for $\\rho=1.5$.}\\label{ratio2}\n\\end{figure}\n\nIn Fig. \\ref{ratio3}, instead, for reasons to be further clarified in Sec. \\ref{sec:sec2}, we show the results, analogous to those portrayed in Fig. 
\\ref{ratio2}, obtained by merging the two sets $D$ and $I$ into one single set, labeled as $nG$ (``not good'' hazelnuts). The plot in Fig. \\ref{ratio3} shows that, for $\\rho=1.5$, the value $\\epsilon^*=70$ maximizes the ratio of the aforementioned statistical scales wrt almost all the various notions of ``statistical distance'' we considered. In Sec. \\ref{sec:sec2} we will show that such optimal value $\\epsilon^*$, here obtained by only relying on information theoretic methods, can be also recovered by using Support Vector Machines numerical algorithms, by averaging their performance over a set of training samples.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.70\\textwidth]{GnGr15.pdf}\n\\caption{\\textit{Left panel:} Behaviour of the ratio $\\Delta^{(G,nG)}\/\\sqrt{\\langle d \\rangle^{(G)} \\langle d \\rangle^{(nG)}}$ vs. $\\epsilon$, for $\\rho=1.5$. The plot evidences the onset of an optimal scale $\\epsilon^*$ at which the ratio of the statistical scales is maximized.}\\label{ratio3}\n\\end{figure}\n\n\n\\section{Support Vector Machines}\n\\label{sec:sec2}\n\nIn this Section, we discuss the results obtained by elaborating our data through a supervised learning method known as Support Vector Machines (SVM) \\cite{Haykin,Boser, Cortes,Vapnik95,Vapnik98}. The SVM constitute a machine learning algorithm which seeks a separation of a set of data into two classes, by determining the \\textit{best separating hyperplane} (BSH) (also referred to, in the literature, as the ``maximal margin hyperplane'' \\cite{Webb}), cf. Fig. \\ref{SVM}. It is worth recapitulating the basic notions underpinning the numerical algorithm we used.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{SVM.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM2.pdf}\n\\caption{\\textit{Left panel:} Example of a linear discriminant analysis based on the SVM algorithm. Shown are three different hyperplanes: $\\Pi_1$, which does not separate the two classes, $\\Pi_2$ which separates the classes but only with a small margin, and $\\Pi_3$, which corresponds to the best separating hyperplane. \\textit{Right panel:} Illustration of the best separating hyperplane (red straight line), the canonical hyperplanes (black dashed lines), the support vectors (magenta circles) and the margin of separation $\\xi$.}\\label{SVM}\n\\end{figure}\n\nLet $\\{\\textbf{x}\\}$ denote the set of data (input pattern) to be classified, with $\\textbf{x} \\in E\\subseteq\\mathbb{R}^N$, and consider a given training set $\\mathcal{T}=\\{\\textbf{x}_k,d_k\\}_{k=1}^{N_T}$, where $N_T$ denotes the dimensionality of $\\mathcal{T}$. Let, then, $d_k=\\{+1,-1\\}$ denote the \\textit{desired response} parameter corresponding to $\\textbf{x}_k$, whose value depends on which of the two classes $\\textbf{x}_k$ belongs to.\nThe equation of a hyperplane $\\Pi$ in $\\mathbb{R}^N$ reads:\n\\begin{equation}\n\\textbf{w}^T \\cdot \\textbf{x} + b=0 \\quad , \\nonumber\n\\end{equation}\nwith $\\textbf{w}$ and $b$ denoting, respectively, a $N$-dimensional adjustable weight vector and a bias. 
The BHS is the hyperplane characterized by the pair $(\\textbf{w}_o,b_o)$ which, for linearly separable patterns, fulfills the following conditions \\cite{Haykin}:\n\n\\begin{eqnarray}\n\\textbf{w}_o^T \\cdot \\textbf{x}_k+b_o\\ge 1 \\quad \\text{for $d_k = +1$} \\quad ,\\nonumber\\\\\n\\textbf{w}_o^T \\cdot \\textbf{x}_k+b_o\\le -1 \\quad \\text{for $d_k = -1$} \\quad .\\label{suppvec}\n\\end{eqnarray}\n\nThe data points, portrayed in magenta color in the right panel of Fig. \\ref{SVM}, for which Eqs. (\\ref{suppvec}) are satisfied with the equality sign, are called \\textit{support vectors}, and lie on the so-called \\textit{canonical hyperplanes} \\cite{Webb}, represented by the black dashed lines in the right panel of Fig. \\ref{SVM}. Figure \\ref{SVM} also illustrates the so-called \\textit{margin of separation}, defined as the distance $\\xi=1\/\\|\\textbf{w}_o\\|$ between the support vectors and the BSH.\nThe BSH, which maximizes $\\xi$ under the constraints (\\ref{suppvec}), can be found by determining the saddlepoint of the Lagrangian function $d\\mathcal{L}(\\textbf{w},b,\\lambda_1,...,\\lambda_{N_T})=0$, given by:\n\\begin{equation}\n\\mathcal{L}(\\textbf{w},b,\\lambda_1,...,\\lambda_{N_T})=\\frac{1}{2}\\textbf{w}^T\\cdot \\textbf{w}-\\sum_{k=1}^{N_T} \\lambda_k[d_k(\\textbf{w} \\cdot \\textbf{x}_k+b)-1] \\label{lagr} \\quad .\n\\end{equation} \nThe solution of such variational problem is easily found in the form \\cite{Haykin}:\n\\begin{equation}\n\\mathbf{w}_o=\\sum_{k=1}^{N_T} \\lambda_k d_k \\textbf{x}_k \\label{sol1}\n\\end{equation}\nwhere the Lagrange multipliers $\\lambda_k$ satisfy the conditions:\n\\begin{eqnarray}\n\\sum_{k=1}^{N_T} \\lambda_k d_k&=&0 \\quad ,\\nonumber\\\\\n\\lambda_k[d_k(\\textbf{w} \\cdot \\textbf{x}_k+b)-1]&=&0 \\quad \\text{for $k=1,...,N_T$} \\quad ,\\nonumber\n\\end{eqnarray}\n(the latter being known as the ``Karush-Kuhn-Tucker complementarity condition'' \\cite{Webb}) whereas $b_o$ can be determined, once $\\textbf{w}_o$ is known, using Eqs. (\\ref{suppvec}).\nWhen the two classes are not linearly separable, a possible strategy consists in introducing a suitable (nonlinear) function $\\Phi:E\\rightarrow F$ , which makes it possible to map the original pattern inputs into a \\textit{feature space} $F\\subseteq\\mathbb{R}^M$, in which a linear separation can be performed, cf. Fig. \\ref{nlSVM} \\cite{Webb}.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.75\\textwidth]{nlSVM.pdf}\n\\caption{Patterns which are not linearly separable can be mapped, via a function $\\Phi$, into a \\textit{feature space} where a linear separation of the classes can be achieved}\\label{nlSVM}\n\\end{figure}\nThus,by denoting as $\\boldsymbol\\Phi(\\textbf{x})=\\{\\Phi_j(\\textbf{x})\\}_{j=1}^M$ a set of nonlinear transformations from the original input space to the feature space, the corresponding variational problem leads now, in place of Eq. (\\ref{sol1}), to the expression:\n\\begin{equation}\n\\mathbf{w}_o=\\sum_{k=1}^{N_T} \\lambda_k d_k \\boldsymbol\\Phi(\\textbf{x}_k) \\label{sol2} \\quad .\n\\end{equation}\nIn our implementation of the SVM algorithm, we regarded the set $G$ as one of the two classes, whereas the other class, formerly introduced in Sec. \\ref{sec:sec1} and denoted by $nG$, was thought of as given by the union $nG = D \\cup I$.\nWe thus relied on the analysis of the histograms of the hazelnut nuclei, detailed in Sec. \\ref{sec:sec1}. 
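For reference, the optimization described above can be carried out with standard library routines; a minimal sketch (using the scikit-learn implementation purely for illustration, with a large penalty parameter so that the soft-margin solver approximates the hard-margin problem stated above; the array names are placeholders and this is not necessarily the toolchain used to produce the figures below) reads\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.svm import SVC\n\n# X_train: training patterns, d_train: desired responses +1 / -1\nclf = SVC(kernel='linear', C=1.0e3)\nclf.fit(X_train, d_train)\n\nw_o = clf.coef_[0]                  # optimal weight vector\nb_o = clf.intercept_[0]             # optimal bias\nsupport_vectors = clf.support_vectors_\nmargin = 1.0 / np.linalg.norm(w_o)  # margin of separation\nlabels_test = clf.predict(X_test)   # classification of the test set\n\\end{verbatim}\n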
Therefore, we introduced two variables to identify each hazelnut: we set $\\textbf{x}=(x_{mean},x_{max})$, where, for each histogram relative to an hazelnut nucleus, $x_{mean}$ and $x_{max}$ denote, respectively, the \\textit{average} shade of gray and the shade of gray equipped with the highest probability.\nTherefore, in the space spanned by the coordinates $x_{mean}$ and $x_{max}$, and parameterized by the values of $\\epsilon$ and $\\rho$, each hazelnut is represented by a single dot. The resulting distribution of dots, for different values of $\\epsilon$ and $\\rho$, is illustrated in Figs. \\ref{raw3a} and \\ref{raw3b}, which evidence a clustering of points, for both the considered values of $\\rho$, around the bisectrix of the plane. This is readily explained by considering that, when reducing $\\epsilon$, the histograms attain a more and more symmetric shape. \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{SVM_Raw3_20r15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Raw3_40r15.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Raw3_60r15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Raw3_80r15.pdf}\n\\caption{Classification of the data in the 2D space spanned by the values of the observables $x_{mean}$ (horizontal axis) and $x_{max}$ (vertical axis), for $\\epsilon=20$ (top left), $\\epsilon=40$ (top right), $ \\epsilon=60$ (bottom left), and $ \\epsilon=80$ (bottom right), with $\\rho=1.5$.}\\label{raw3a}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{SVM_Raw3_20r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Raw3_40r25.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Raw3_60r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Raw3_80r25.pdf}\n\\caption{Classification of the data in the 2D space spanned by the values of the observables $x_{mean}$ (horizontal axis) and $x_{max}$ (vertical axis), for $\\epsilon=20$ (top left), $\\epsilon=40$ (top right), $ \\epsilon=60$ (bottom left), and $ \\epsilon=80$ (bottom right), with $\\rho=2.5$.}\\label{raw3b}\n\\end{figure}\n\nFurthermore, an inspection of Figs. \\ref{raw3a} and \\ref{raw3b} reveals that the dots corresponding to the sets $D$ and $I$ are nested within the ensemble of points belonging to the set $G$: the classes $G$ and $nG$ are not amenable to be disentangled by a linear SVM regression, as also confirmed by the plots in Figs. \\ref{lin1} and \\ref{lin2}. \nIn each of the two latter figures, the left plot shows the elements of the adopted (randomly selected) training set: green and red symbols identify the elements of the two classes $G$ and $nG$, while the black circles indicate the support vectors. The black line indicates the boundary (best separating hyperplane) detected by the SVM, which sensibly depends on the chosen training set. The right plot, instead, displays all the available data (red and blue crosses represent, respectively, the elements of the classes $G$ and $nG$), complemented by the SVM test set output (red and blue circles). \nThe proper match between the colours of the circles and the crosses would indicate a successfully accomplished separation between the two classes, which, though, is not obtained with our data.\nFurthermore, no remarkable improvement is obtained by attempting a classification of the data by means of a nonlinear SVM algorithm, based on radial basis functions \\cite{Webb}, as shown in Figs. 
\\ref{nonlin1} and \\ref{nonlin2}. \nThe results of this Section, confirm, therefore, the predictions of the statistical analysis outlined in Sec. \\ref{sec:sec1}: the presence of a not linearly separable entanglement between points belonging to different classes can be thus traced back to the lack of a suitable statistical scales separation.\\\\\nThere is another relevant aspect, concerned with the implementation of the SVM algorithm, to be pointed out. \\\\\nWe remark, in fact, that each of the plots shown in Figs. \\ref{lin1} and \\ref{lin2} pertains to a specific training set of data $\\mathcal{T}$.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{SVM_Lineare_20r15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Lineare_40r15.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Lineare_60r15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Lineare_80r15.pdf}\n\\caption{Classification of the data through a linear SVM algorithm. Shown is the 2D space spanned by the values of the observables $x_{mean}$ (horizontal axis) and $x_{max}$ (vertical axis), for $\\epsilon=20$ (top left), $\\epsilon=40$ (top right), $\\epsilon=60$ (bottom left), and $ \\epsilon=80$ (bottom right), with $\\rho=1.5$. In each left subfigure, shown are the training set of data (green and red crosses, denoting, respectively, the elements of the classes $G$ and $nG$), the support vectors (black circles) and the best separating hyperplane (black line). According to the SVM classification,the elements of the class $nG$ are expected to lie on the right of the boundary line. The right sub-figures, instead, display the 2D representation of all the available data (red and blue crosses, denoting, respectively, the elements of $G$ and those of $nG$) and the SVM output (red and blue circles).}\\label{lin1}\n\\end{figure}\n \n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{SVM_Lineare_20r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Lineare_40r25.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Lineare_60r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Lineare_80r25.pdf}\n\\caption{Classification of the data with a linear SVM algorithm, as in Fig. \\ref{lin1}, but with $\\rho=2.5$.}\\label{lin2}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{SVM_nonLineare_20r15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_nonLineare_40r15.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_nonLineare_60r15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_nonLineare_80r15.pdf}\n\\caption{Classification of the data through a nonlinear SVM algorithm (based on radial basis functions) for $\\rho=1.5$ (cf. the caption of Fig. \\ref{lin1}).}\\label{nonlin1}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{SVM_nonLineare_20r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_nonLineare_40r25.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_nonLineare_60r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_nonLineare_80r25.pdf}\n\\caption{Classification of the data with a nonlinear SVM algorithm (based on radial basis functions) for $\\rho=2.5$ (cf. the caption of Fig. 
\\ref{lin1}).}\\label{nonlin2}\n\\end{figure}\n\nWe can introduce, then, the quantity $\\Psi_\\ell(\\mathcal{T}_\\ell;\\epsilon,\\rho)$, relative to the specific training set $\\mathcal{T}_\\ell$, and defined as the ratio of the number of hazelnuts, belonging to the class $G$ and mistakenly classified as belonging to the class $nG$, to the total number of hazelnuts in the database, given by $N_G+N_{nG}$. The function $\\Psi_\\ell$ is an indicator of the performance of the SVM algorithm, and sensibly depends on the structure of the training sample considered in the simulation. \nThus, while the behaviour of $\\Psi_\\ell$, pertaining to single training samples, yields no indication about the onset of an optimal scale $\\epsilon^*$, the average $\\langle \\Psi \\rangle$, given by\n\\begin{equation}\n\\langle \\Psi \\rangle(\\epsilon,\\rho) = \\frac{1}{N_c}\\sum_{\\ell=1}^{N_c}\\Psi_\\ell(\\mathcal{T}_\\ell;\\epsilon,\\rho) \\quad , \\nonumber\n\\end{equation}\nand computed over a sufficiently large number $N_c$ of training samples, attains a minimum precisely at $\\epsilon^*=70$, cf. Fig. \\ref{Psi}. The latter value of $\\epsilon$ corresponds, in fact, to the scale of resolution maximizing the two statistical scales introduced in Sec. \\ref{sec:sec1}, cf. Fig. \\ref{ratio3}. The plot in Fig. \\ref{Psi} confirms, hence, that the onset of an optimum scale $\\epsilon^*$ can be numerically evinced also by means of SVM algorithms, provided that performance of the SVM is \\textit{averaged} over a sufficiently large number of training samples.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{psi.pdf}\n\\caption{Behavior of $\\langle \\Psi \\rangle$, averaged over $N_c=500$ training samples, vs. $\\epsilon$, with $\\rho=1.5$, with error bars (in red).}\\label{Psi}\n\\end{figure}\n\n\\section{Conclusions}\n\\label{sec:conc}\n\nIn this work we performed a statistical analysis on the histograms of a set of hazelnut images, with the aim of obtaining a preliminary estimate of the performance of a machine learning algorithm based on statistical variables. We shed light, in Sec. \\ref{sec:sec1}, on the relevance of two statistical scales, which need to be widely separated to accomplish a successful pattern recognition. The intrinsic lack of such scale separation in our data was also evidenced by the numerical results reported in Sec. \\ref{sec:sec2}, revealing that no exhaustive classification can be achieved through SVM algorithms.\nMoreover, the analysis outlined in Sec. \\ref{sec:sec1} also unveiled the onset of an optimal resolution $\\epsilon^*$, which is expected to optimize the pattern recognition. This observation was also corroborated by the results discussed in Sec. \\ref{sec:sec2}, where the same value $\\epsilon^*$, maximizing the performance of the SVM algorithm, is recovered by averaging over a sufficiently large number of training samples.\nOur results, thus, strengthen the overall perspective that a preliminary estimate of the intrinsic statistical scales of the data constitute a decisive step in the field of pattern recognition and, moreover, pave the way for the further implementation of statistical mechanical techniques aimed at the development of a generation of more refined neural networks algorithms.\n\n\\newpage\n\n{\\bf Acknowledgments}\n\n\\vskip 5pt\n\nWe would like to thank Ferrero and Soremartec for their long-standing support of our research activity. We also thank Dr. A. Boscolo and Dr. L. 
Placentino, for providing us with the set of x-ray images used in this work.\nThis study was funded by ITACA, a project financed by the European Union, the Italian Ministry of Economy and Finance and the Piedmont Region.\n\n\\vskip 10pt\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe rotation frequencies\n$\\nu$ of pulsars generally decrease slowly in time, but occasionally experience sudden increases $\\Delta\\nu$ that are usually accompanied by increases in the absolute value of their spin-down rates, $\\dot{\\nu}$ \\citep{rm69,rd69,sl96}. \nThese spin-up events, known as glitches, are infrequent, not periodic, and cover a wide range of sizes \\citep[from $\\Delta \\nu\/\\nu \\sim 10^{-11}$ to $\\Delta \\nu\/\\nu \\sim 10^{-5}$;][]{elsk11,ymh+13}. \nThe mechanism that generates these events is not completely understood, but they are believed to be caused by angular momentum transfer from an internal neutron superfluid\nto the rest of the neutron star \\citep{ai75}.\n\nThanks to the few long-term monitoring campaigns that keep operating, some since the 1970s \\citep[e.g.][]{hlk+04,ymh+13}, the number of detected glitches has slowly increased, thereby improving the significance of statistical studies in pulsar populations. \n\\citet{ml90}, \\citet{lsg00}, and \\citet{elsk11} showed that the glitch activity $\\dot{\\nu}_{\\rm{g}}$ (defined as the mean frequency increment per unit of time due to glitches) correlates linearly with $|\\dot\\nu|$. \nThey also found that young pulsars (using the characteristic age, $\\tau_c=-\\nu\/2\\dot{\\nu}$, as a proxy for age), which also have the highest $|\\dot\\nu|$, exhibit glitches more often than older pulsars, with rates varying from about one glitch per year to one per decade among the young pulsars. \nUsing a larger and unbiased sample, \\cite{fer+17} confirmed that the size distribution of all glitches in a large and representative sample of pulsars is multi-modal \\citep[recently also seen by][]{ka14b,apj17}, with at least two well-defined classes of glitches: large glitches in a relatively narrow range $\\Delta \\nu \\sim (10-30)\\, \\rm{\\mu Hz}$, and small glitches with a much wider distribution, from $\\sim 10\\,\\mathrm{\\mu Hz}$ down to at least $10^{-4}\\,\\mathrm{\\mu Hz}$. \nFurther, \\cite{fer+17} found that a constant ratio $\\dot\\nu_{\\rm{g}}\/|\\dot\\nu| = 0.010 \\pm 0.001$ is consistent with the behaviour of nearly all rotation-powered pulsars and magnetars.\nThe only exception are the (few) very young pulsars, which have the highest spin-down rates, such as the Crab pulsar (PSR B0531$+$21) and PSR B0540$-$69. 
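For concreteness, the ratio of $\\dot{\\nu}_{\\rm{g}}$ to $|\\dot{\\nu}|$ quoted above can be estimated from a glitch catalogue along the lines of the minimal sketch below. This is only an illustration: the numerical values are made up, and the simple estimator used (cumulative glitch size divided by the observing time span) is one common way of evaluating the glitch activity, not necessarily the exact procedure of the works cited above.

\\begin{verbatim}
# Minimal sketch: glitch activity and its ratio to the spin-down rate,
# computed from a toy glitch catalogue (all numerical values are made up).
import numpy as np

delta_nu = np.array([16.1e-6, 22.3e-6, 19.8e-6])   # glitch sizes (Hz)
t_obs    = 20.0 * 365.25 * 86400.0                 # observing time span (s)
nu_dot   = -1.56e-11                               # spin-down rate (Hz/s)

# Glitch activity: mean frequency increment per unit time due to glitches,
# estimated here as the cumulative glitch size over the time span.
nu_dot_g = delta_nu.sum() / t_obs
print(nu_dot_g, nu_dot_g / abs(nu_dot))
\\end{verbatim}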
\n\nBecause glitches are rare events, the number of known glitches in the vast majority of pulsars is not enough to perform robust statistical analyses on individual bases.\nThis has made people focus on the few objects that have the largest numbers of detected glitches (about 10 pulsars).\nThe statistical distributions of glitch sizes and times between consecutive glitches (waiting times), for the nine pulsars with more than five known glitches at the time, were studied by \\citet{mpw08}.\nThey found that seven out of the nine pulsars exhibited power-law-like size distributions and exponential waiting time distributions.\nThe distributions of the other two (PSRs J0537$-$6910 and B0833$-$45, the Vela pulsar) were\nbetter described by Gaussian functions, setting preferred sizes and time scales.\nThese results have been further confirmed by \\citet{fmh17} and \\citet{hmd18}, who also found that there are at least two main behaviours among the glitching pulsars.\n\nCorrelations between glitch sizes and the times to the nearest glitches, either backward or forward, are naturally expected.\nWe know that glitch activity is driven by the spin-down rate \\citep{fer+17}, which suggests that glitches are the release of some stress that builds up at a rate determined by $|\\dot{\\nu}|$.\nIf the stress is completely released at each glitch, then one should expect a correlation between size and the time since the last glitch.\nConversely, if glitches occur when a certain critical state is reached, one should expect a correlation between size and the time to the next glitch, as longer times would be needed to come back to the critical state after the largest glitches.\nMoreover, if both assumptions were indeed correct, glitches would all be of equal sizes and occur periodically. \nHowever, with the exception of PSR J0537$-$6910 (see below), no other pulsars have shown significant correlations between glitch sizes and the times to the nearest events \\citep[e.g.][]{wmp+00,ywml10,mhf18}.\nThis may be partly due to small-number statistics and might improve in the future, provided a substantial number of pulsars continue to be monitored for glitches.\n\nThe case of PSR J0537$-$6910, however, is very clear.\nWith more than 40 glitches detected in $\\sim 13$\\,yr, the statistical conclusions about its behaviour are much more significant than for any other pulsar.\nAs first reported by \\citet{mmw+06}, its glitch sizes exhibit a strong correlation with the waiting time to the following glitch \\citep[see also][who confirmed the correlation using twice as much data]{aeka18,fagk18}.\n\n\\citet{aeka18} interpret\nthis behaviour as an indication that glitches in this pulsar occur only once some threshold is reached.\nMoreover, this behaviour would imply that not necessarily all the stress is released in the glitches, thereby giving rise to the variety of (unpredictable) glitch sizes observed and the lack of backward time correlation.\n\nIn this work we study the sequence of glitches in the pulsars with at least ten detected events, by characterizing their distributions of glitch sizes and waiting times between successive glitches. 
Also, we test two hypotheses to explain\nwhy most pulsars do not show a correlation between glitch size and time to the following glitch: the effects of undetected small glitches and the possibility that two different classes of glitches are present in each pulsar.\n\n\\section{Pulsars with at least ten detected glitches} \n\\label{s1}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{fig1.pdf}\n\\caption{Upper part of the $P-\\dot{P}$ diagram for all known pulsars. \n\tThe pulsars in our sample have at least ten detected glitches and are labeled with different symbols. \n\tLines of constant spin-down rate $\\dot{\\nu}$ are shown and labeled. \n\t$P$ and $\\dot{P}$ values were taken from the ATNF pulsar catalog \\protect\\footnotemark.\n}\\label{fig1}\n\\end{figure}\n\\footnotetext{\\url{http:\/\/www.atnf.csiro.au\/research\/pulsar\/psrcat}}\nTo date, there are eight pulsars with at least 10 detected glitches (Fig. \\ref{fig1}).\nPSRs J0205$+$6449, B0531$+$21 (the Crab pulsar), B1737$-$30, B1758$-$23, and J0631$+$1036 have been observed regularly by the Jodrell Bank Observatory \\citep[JBO,][]{hlk+04}.\nPSR B1338$-$62 has been observed by the Parkes telescope, and the Vela pulsar has been observed by several telescopes, including Parkes, the Jet Propulsion Laboratory, and others in Australia and Southafrica \\citep[e.g.][]{downs81,mkhr87,ymh+13,buc13}. \nPSR J0537$-$6910 is the only object in our sample not detected in the radio band and was observed for 13 years by the \\textit{Rossi X-ray Timing Explorer} \\citep[RXTE,][]{aeka18,fagk18}. \nGlitch epochs and sizes were taken from the JBO online glitch catalog \\footnote{\\url{http:\/\/www.jb.man.ac.uk\/pulsar\/glitches\/gTable.html}}, where more information and the appropriate references for each measurement can be found.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=18cm]{fig_2.pdf}\n\\caption{Logarithm (base 10) of glitch sizes $\\Delta\\nu$ (with $\\Delta\\nu$ measured in $\\mu$Hz) as a function of the glitch epoch for the pulsars in the sample. \n\tThe gray areas mark periods of time in which there were no observations for more than 3 months. \n\t$N_g$ is the number of glitches detected in the respective pulsar, until 20 April 2019 (MJD 58593). \n\tTo build a continuous sample, in the analyses of the Crab pulsar, we only use the 25 glitches after MJD 45000, when daily observations started \\citep{eas+14}. All panels share the same scale, in both axes.\n\t} \\label{fig2}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=18cm]{fig_3.pdf}\n\\caption{Distribution of $\\log \\Delta \\nu$ (with $\\Delta \\nu$ measured in $\\rm{\\mu Hz}$) for the pulsars in our sample. The orange areas indicate that glitches with $\\Delta\\nu<0.01\\,\\rm{\\mu Hz}$ could be missing due to detectability issues.} \\label{fig3}\n\\end{figure*}\n\nFigures \\ref{fig2} and \\ref{fig3} show that the Vela pulsar and PSR J0537$-$6910 produce glitches of similar sizes, particularly large glitches ($\\Delta \\nu > 10$ $\\mu$Hz), and in fairly regular time intervals. The absence of smaller glitches in these pulsars is not a selection effect, as it is quite unlikely that a considerable amount of glitches with sizes up to $\\Delta \\nu \\sim 10\\,\\rm{\\mu Hz}$, far above the detection limits reported in the literature \\citep[see ][and text below]{wxe+15}, could have gone undetected. 
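As a rough consistency check (not part of the original analysis), one can estimate the phase offset $\\Delta\\phi \\approx \\Delta\\nu \\, \\Delta t$ that an unmodelled frequency step of size $\\Delta\\nu$ accumulates between two observing epochs, taking as reference the 30-day cadence and the rotational noise of 0.01 phases adopted below:

\\begin{verbatim}
# Back-of-the-envelope detectability check (illustrative): phase offset built
# up between observations by an unmodelled frequency step; the step in the
# frequency derivative is ignored.
cadence_s    = 30.0 * 86400.0    # assumed observing cadence (s)
noise_cycles = 0.01              # assumed rotational noise (phases)

for dnu_muHz in (10.0, 1.0, 0.01):
    dphi = dnu_muHz * 1e-6 * cadence_s   # offset in rotational phases
    print(dnu_muHz, "muHz ->", round(dphi, 3), "cycles; noise ~", noise_cycles)
\\end{verbatim}

A $10\\, \\rm{\\mu Hz}$ step offsets the predicted phase by tens of rotations between epochs, whereas a $10^{-2}\\, \\rm{\\mu Hz}$ step produces an offset comparable to the assumed noise level, which is roughly why glitches of the latter size can be missed while much larger ones cannot.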
\nOn the other hand, the rest of the pulsars exhibit irregular waiting times and cover a wider range of sizes ($\\Delta \\nu \\sim 10^{-3}-10$ $\\mu$Hz). \n\nThe cadence of the timing observations varies considerably from pulsar to pulsar (and even with time for individual pulsars), and the sensitivity of the observations, from which the glitch measurements were performed, are also different between different pulsars.\nThis means that the chances of detecting very small glitches are different for each pulsar and that the completeness of the samples towards small events might also be different \\citep{eas+14}.\nNonetheless, in this study we use a single value to represent the glitch size below which samples are likely to be incomplete due to detectability issues.\nFor an observing cadence of 30 days and a rotational noise of 0.01 rotational phases, glitch detection is severely compromised below sizes $\\Delta\\nu \\sim 10^{-2}\\, \\rm{\\mu Hz}$, especially if their frequency derivative steps are larger than $|\\Delta\\dot{\\nu}|\\sim 10^{-15} \\, \\rm{Hz\\, s^{-1}}$ \\citep[see][]{wxe+15}. We use the above numbers to characterize the glitch detection capabilities in this sample of pulsars, but we note that such cadence and rotational noise are rather pessimistic values in some cases.\n\n\\section{Distributions of glitch sizes and times between glitches} \n\\label{distris}\n \nIn the following, we model the distributions of glitch sizes ($\\Delta\\nu$, measured in $\\mu$Hz) and the distributions of times between successive glitches ($\\Delta \\tau$, measured in yr) for each pulsar in our sample. \nFour probability density distributions are considered: Gaussian,\n\\begin{equation}\nM(x|\\mu,\\sigma) = C_{\\rm{Gauss}}\\,\\exp\\left[\\frac{-(x-\\mu)^2}{2\\sigma^2}\\right]\\text{,}\n\\end{equation}\npower-law,\n\\begin{equation}\nM(x|\\alpha) = \\dfrac{\\alpha - 1}{x_{\\rm{min}}}\\left(\\dfrac{x}{x_{\\rm{min}}}\\right)^{-\\alpha}\\text{,}\n\\end{equation}\nlog-normal,\n\\begin{equation}\nM(x|\\mu_{\\rm{L-N}},\\sigma_{\\rm{L-N}}) = \\dfrac{C_{\\rm{L-N}}}{x}\\,\\exp\\left[\\frac{-(\\ln x-\\mu_{\\rm{L-N}})^2}{2\\sigma_{\\rm{L-N}}^2}\\right]\\text{,}\n\\end{equation}\nand exponential,\n\\begin{equation}\nM(x|\\lambda) = \\lambda\\, \\exp\\left[-\\lambda(x-x_{\\rm{min}})\\right]\\text{.}\n\\end{equation}\n\nThe set $\\{\\mu,\\sigma, \\alpha, \\mu_{\\rm{L-N}},\\sigma_{\\rm{L-N}}, \\lambda\\}$ are the fitting parameters. \nAll the distributions are normalized in the range $x_{\\rm{min}}$ to $\\infty$. Formally, $x_{\\rm{min}}$ is given by detection limits. \nHowever, it is not simple to define precise values for $\\Delta \\nu_{\\rm{min}}$ and $\\Delta \\uptau_{\\rm{min}}$ for each pulsar.\nThus we use $\\Delta \\nu_{\\rm{min}} = 10^{-2}\\, \\mu$Hz for the glitch sizes (see previous section), and the smallest interval of time between glitches in each pulsar as $\\Delta \\uptau_{\\rm{min}}$.\n\nFor the Gaussian and log-normal distributions the normalization constants $C_{\\rm{Gauss}}$ and $C_{\\rm{L-N}}$ were found numerically.\nWe use the maximum likelihood technique to obtain the parameters of the models that describe best the data, and use the Akaike Information Criterion \\citep[AIC,][]{aka74} to compare the different models \\citep[see also the Appendix in][]{fer+17}.\n\n\\begin{figure*}\n\\includegraphics[width=18cm]{fig_4.pdf}\n\\caption{Cumulative distribution of glitch sizes and model fits. 
The best-fitting models are indicated by thicker curves.}\\label{fig4}\n\\end{figure*}\n\n\\begin{figure*}\n\\includegraphics[width=18cm]{fig_5.pdf}\n\\caption{Cumulative distribution of waiting times between successive glitches and model fits. The best-fitting models are indicated by thicker curves.}\\label{fig5}\n\\end{figure*}\n\n\\begin{table*\n\\centering\n\\caption{Distributions of glitch sizes: results of the fits and the AIC weights for each model; using glitches with $\\Delta\\nu \\geq 0.01\\, \\rm{\\mu Hz}$.}\n\\label{Table_sizes}\n\\begin{tabular}{@{}lcccccccccc@{}}\n\\toprule \\toprule\nPSR Name & $w^{\\rm{Gauss}}$ & $w^{\\textrm{Power law}}$ & $w^{\\textrm{L-N}}$ & $w^{\\rm{Exp}}$ & $\\hat{\\mu}$ & $\\hat{\\sigma}$ & $\\hat{\\alpha}$ & $\\hat{\\mu}_{\\rm{L-N}}$ & $\\hat{\\sigma}_{\\rm{L-N}}$ & $\\hat{\\lambda}$ \\\\\n & & & & & $\\rm{\\mu Hz}$ & $\\rm{\\mu Hz}$ & & & & $(\\rm{\\mu Hz})^{-1}$\\\\\n\\midrule\nJ0205$+$6449 & $10^{-8}$ & $\\mathbf{0.66}$ & 0.33 & $10^{-5}$ & 15(5) & 20(4) & 1.27(6) & 0.7(7) & 2.5(3) & 0.07(6)\\\\\n\nB0531$+$21 & $10^{-17}$ & 0.02 & $\\mathbf{0.97}$ & $10^{-7}$ & 1.2(5) & 3(1) & 1.4(1) & -1.3(3) & 1.5(2) & 0.8(7)\\\\\n\nJ0537$-$6910 & $\\mathbf{0.96}$ & $10^{-24}$ & $10^{-8}$ & 0.03 & 15(1) & 9.9(9) & 1.19(5) & 2.2(2) & 1.3(2) & 0.063(6)\\\\\n\nJ0631$+$1036 & $10^{-12}$ & $\\mathbf{0.94}$ & 0.05 & $10^{-8}$ & 1(1) & 3(1) & 1.4(1) & -1.9(6) & 2.1(4) & 0.61(4)\\\\\n\nB0833$-$45 & $\\mathbf{0.997}$ & $10^{-13}$ & $10^{-6}$ & 0.002 & 21(2) & 9(1) & 1.2(4) & 2.7(2) & 1.2(4) & 0.05(1)\\\\\n\nB1338$-$62 & $10^{-5}$ & 0.07 & $\\mathbf{0.53}$ & 0.4 & 2.5(5) & 2.7(3) & 1.36(5) & -0.1(3) & 1.6(1) & 0.4(1)\\\\\n\nB1737$-$30 & $10^{-14}$ & $\\mathbf{0.82}$ & 0.17 & $10^{-7}$ & 0.6(2) & 1.0(2) & 1.38(6) & -2.0(3) & 1.9(1) & 1.5(8)\\\\\n\nB1758$-$23 & 0.06 & 0.004 & 0.07 & $\\mathbf{0.866}$ & 0.6(1) & 0.51(8) & 1.3(2) & -1.2(4) & 1.5(3) & 1.7(6)\\\\\n\\bottomrule\n\\end{tabular}\n\\tablefoot{$w^m$ denotes the Akaike weight of the model $m$. \n$\\hat \\mu$ and $\\hat \\sigma$ are the mean and the standard deviation of the Gaussian model, and $\\hat\\alpha$ is the power-law index. \n$\\hat \\mu_{\\rm{L-N}}$ and $\\hat \\sigma_{\\rm{L-N}}$ are the mean and the standard deviation of the log-normal model, respectively. $\\hat \\lambda$ is the rate parameter of the exponential distribution. \nThe values in parentheses correspond to the uncertainty in the last quoted digit and were calculated using the usual bootstrap method. 
We marked in bold the values of $w^m$ for the best models.}\n\\end{table*}\n\n\\begin{table*}\n\\centering\n\\caption{Distributions of waiting times between successive glitches: results of the fits and the AIC weights for each model.}\\label{Table_times}\n\\begin{tabular}{@{}lcccccccccc@{}}\n\\toprule \\toprule\nPSR Name & $w^{\\rm{Gauss}}$ & $w^{\\rm{Power law}}$ & $w^{\\rm{L-N}}$ & $w^{\\rm{Exp}}$ & $\\hat{\\mu}$ & $\\hat{\\sigma}$ & $\\hat{\\alpha}$ & $\\hat{\\mu}_{\\rm{L-N}}$ & $\\hat{\\sigma}_{\\rm{L-N}}$ & $\\hat{\\lambda}$\\\\\n& & & & & yr & yr & & & & yr$^{-1}$\\\\\n\\midrule\n\nJ0205$+$6449 & $0.001$ & $0.40$ & 0.16 & $\\mathbf{0.43}$ & 1.3(4)& 1.4(4) & 1.7(1) & -$0.2(3)$ & 1.0(1) & 0.9(5)\\\\\n\nB0531$+$21 & $10^{-4}$ & $10^{-5}$ & 0.15 & $\\mathbf{0.84}$ & 1.3(2) & 1.3(2) & 1.4(1) & -$0.2(2)$ & 1.0(1) & 0.8(2)\\\\\n\nJ0537$-$6910 & $\\mathbf{0.72}$ & $10^{-10}$ & 0.07 & 0.2 & 0.28(2) & 0.15(1) & 1.64(8) & -1.44(9) & 0.65(6) & 4.3(4)\\\\\n\nJ0631$+$1036 & $10^{-4}$ & $10^{-5}$ & 0.20 & $\\mathbf{0.79}$ & 1.4(4) & 1.7(6) & 1.3(2) & -0.3(3) & 1.2(2) & 0.7(3)\\\\\n\nB0833$-$45 & $\\mathbf{0.993}$ & $10^{-10}$ & $10^{-4}$ & 0.006 & 2.5(2) & 1.2(1) & 1.3(3) & 0.7(2) & 0.9(2) & 0.41(9)\\\\\n\nB1338$-$62 & 0.25 & $10^{-3}$ & 0.20 & $\\mathbf{0.54}$ & 0.88(9) & 0.42(4) & 1.9(2) & -0.3(1) & 0.51(5) & 1.7(3)\\\\\n\nB1737$-$30 & $10^{-5}$ & $10^{-6}$ & 0.17 & $\\mathbf{0.82}$ & 0.9(1) & 0.9(1) & 1.44(7) & -0.6(1) & 1.0(1) & 1.2(2)\\\\\n\nB1758$-$23 & 0.04 & 0.16 & 0.08 & $\\mathbf{0.72}$ & 2.4(4) & 1.4(2) & 2.1(2) & 0.7(1) & 0.61(8) & 0.6(2)\\\\\n\\bottomrule\n\\end{tabular}\n\\tablefoot{$w^m$ denotes the Akaike weights of the model $m$. $\\hat \\mu$ and $\\hat \\sigma$ are the mean and the standard deviation of the Gaussian model, and $\\hat\\alpha$ is the power-law index. $\\hat \\mu_{\\rm{L-N}}$ and $\\hat \\sigma_{\\rm{L-N}}$ are the mean and the standard deviation of the log-normal model, respectively. $\\hat \\lambda$ is the rate parameter of the exponential distribution. The values in parentheses correspond to the uncertainties in the last digit, and were calculated by using the bootstrap method. We marked in bold the values of $w^m$ for the best models.}\n\\end{table*}\n\nFigures \\ref{fig4}-\\ref{fig5} and Tables \\ref{Table_sizes}-\\ref{Table_times} summarize the results of fitting these distributions to each pulsar. There is no single distribution type that can simultaneously describe all the pulsars satisfactorily, for either sizes or waiting times.\nThe size distributions present a large variety (as also found in the model of \\citealt{cm19}): the log-normal distribution gives the best fit for the Crab pulsar and PSR B1338$-$62, power-law for PSRs J0631$+$1036, B1737$-$30, and J0205$+$6449, and exponential for PSRs B1758$-$23.\n\n\nWe also note that PSR J0205+6449 and PSR B1758$-$23 are the pulsars with the fewest recorded glitches in the sample (both have 13 glitches detected), hence we ought to wait and confirm this result once more events are detected.\n\nIn the case of PSRs J0537$-$6910 and B0833$-$45 (Vela), the best fit for both size and waiting time distributions are Gaussian functions.\nTheir size distributions are centered at large sizes $\\Delta \\nu \\approx 15$ and $20\\, \\rm{\\mu Hz}$, respectively, consistent with the peak of large glitches in the combined distribution for all pulsars \\citep{fer+17}. \n\nThe distributions of times between successive glitches offer more homogeneous results. 
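As an illustration of the model comparison behind Tables \\ref{Table_sizes} and \\ref{Table_times}, the sketch below fits the two models that admit closed-form maximum-likelihood estimators (the exponential and the power law, both normalized on $[x_{\\rm{min}},\\infty)$) to made-up waiting times and converts the resulting AIC values into Akaike weights; the truncated Gaussian and log-normal fits, whose normalization constants must be obtained numerically, are omitted from this sketch.

\\begin{verbatim}
# Illustrative maximum-likelihood fits and Akaike weights (toy waiting times).
import numpy as np

def fit_exponential(x, x_min):
    # f(x) = lam * exp(-lam * (x - x_min)) on [x_min, infinity)
    lam = 1.0 / np.mean(x - x_min)
    return lam, len(x) * np.log(lam) - lam * np.sum(x - x_min)

def fit_power_law(x, x_min):
    # f(x) = (alpha - 1) / x_min * (x / x_min)**(-alpha) on [x_min, infinity)
    s = np.sum(np.log(x / x_min))
    alpha = 1.0 + len(x) / s
    return alpha, len(x) * np.log((alpha - 1.0) / x_min) - alpha * s

def akaike_weights(loglikes, n_params):
    aic = 2.0 * np.asarray(n_params, dtype=float) - 2.0 * np.asarray(loglikes)
    w = np.exp(-0.5 * (aic - aic.min()))
    return w / w.sum()

dt = np.array([0.3, 0.5, 0.9, 1.2, 1.4, 2.1, 3.0, 4.2])  # made-up waiting times (yr)
lam, ll_exp  = fit_exponential(dt, dt.min())
alpha, ll_pl = fit_power_law(dt, dt.min())
print(akaike_weights([ll_exp, ll_pl], [1, 1]))
\\end{verbatim}

With these made-up numbers the exponential model receives most of the Akaike weight, mirroring the behaviour found below for the waiting times of most pulsars.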
Besides the case of PSR J0537$-$6910 and the Vela pulsar (best modelled by Gaussian functions), the waiting time distributions for all the other pulsars are best represented by exponential functions.\nThese results are in agreement with \\citet{mpw08,wwty12}, and \\citet{hmd18} for almost all the pulsars studied. \nThe only exception is PSR B1338$-$62, for which \\cite{hmd18} reported a local maximum in the distribution and classified this pulsar as a quasi-periodic glitcher.\n\nIf $\\Delta\\nu_{\\rm{min}}$ is set to the size of the smallest detected glitch in each pulsar (rather than to $10^{-2}\\, \\mu$Hz), the results of the fits are very similar, and give parameters within the uncertainties presented in Table \\ref{Table_sizes}.\n\n\n\\section{Time series correlations: Glitch size and time to the next glitch} \n\\label{s2}\n\nDifferent studies have shown that for PSR J0537$-$6910 the glitch magnitudes $\\Delta \\nu_k$ are strongly correlated with the waiting times to the following glitch $\\Delta \\uptau_{k+1}$ \\citep[][and see Fig. \\ref{fig6}]{mmw+06,aeka18,fagk18}. \n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=18cm]{fig_6.pdf}\n\\caption{Time to next glitch, $\\Delta \\uptau_{k+1}$, as a function of glitch size, $\\Delta \\nu_k$, for all the pulsars in the sample.} \n\\label{fig6}\n\\end{figure*}\n\nWe test whether this correlation is also present in the other pulsars of the sample, and show the results in Table \\ref{dnu_dt_next} and Fig. \\ref{fig6} \\citep[this is fairly consistent with][though we note that the samples of glitches are not exactly the same]{mhf18}. None of them exhibits a correlation as clear as PSR J0537$-$6910. \nHowever, for PSRs J0205$+$6449, J0631$+$1036, B1338$-$62, and B1758$-$23, the Pearson correlation coefficients are larger than $0.5$ and the $p$-values are $\\sim 10^{-3}$, or less. \nTherefore, at $95\\%$ confidence level ($p$-values $ < 0.05$), we can reject the null hypothesis that $\\Delta \\nu_k$ and $\\Delta \\uptau_{k+1}$ are uncorrelated in these pulsars.\nSince the Pearson coefficient can be dominated by outliers, we also compute the Spearman rank correlation coefficient, obtaining similar or even stronger correlations, except for PSR J0631$+$1036.\n\n\\begin{table}\n\\centering\n\\caption{Correlation coefficients between $\\Delta \\nu_k$ and $\\Delta \\uptau_{k+1}$.} \n\\begin{tabular}{@{}lccccc@{}}\n\\toprule \\toprule\nPSR Name & $N_{\\mathrm{g}}$ & $r_p$ & $p_p$ & $r_s$ & $p_s$ \\\\\n\\midrule\nJ0205$+$6449 & 13 & 0.88 & 0.0002 & $0.76$ & 0.004 \\\\\nB0531$+$21 & 25 & -0.10 & 0.62 & -0.12 & 0.57 \\\\\nJ0537$-$6910 & 45 & 0.95 & $10^{-22}$ & 0.95 & $10^{-23}$ \\\\\nJ0631$+$1036 & 17 & 0.93 & $10^{-7}$ & 0.20 & 0.45 \\\\\nB0833$-$45 & 20 & 0.24 & 0.31 & 0.31 & 0.21 \\\\\nB1338$-$62 & 23 & 0.59 & 0.003 & 0.70 & 0.0002\\\\\nB1737$-$30 & 36 & 0.29 & 0.09 & 0.29 & 0.08 \\\\\nB1758$-$23 & 13 & 0.76 & 0.003 & 0.80 & 0.001 \\\\\n\\bottomrule\n\\end{tabular}\n\\tablefoot{The first and second columns contain the names of the pulsars and the respective number of glitches detected, respectively. \nThe third and fourth columns correspond to the Pearson linear correlation coefficient $r_p$ and the respective $p$-value $p_p$. 
\nThe last two columns correspond to the Spearman correlation coefficient $r_s$ and the respective $p$-value $p_s$.\n\t}\\label{dnu_dt_next}\n\\end{table}\n\nIt is also interesting to note that not only for PSR J0537$-$6910, but for all pulsars in the sample except the Crab, both the Pearson and Spearman correlation coefficients are positive. \nThe probability of finding at least six out of seven pulsars having the same sign as our reference case, just by chance, is rather low. \nThe probability of getting exactly $k$ successes among $n$ trials, with $1\/2$ success probability in each trial, is $P(k\\,|\\,n) = {n\\choose k}(1\/2)^n$. \nThus, the probability of getting at least 6 successes in 7 trials is\n\n\\begin{equation}\nP(\\geq 6\\,|\\,7)=P(6\\,|\\,7)+P(7\\,|\\,7)=\\frac{1}{16}=0.0625\\,.\n\\end{equation}\n\nThis low probability suggests that the waiting time to the following glitch is at least partially regulated by the size of the previous glitch.\n\nIn order to explain why the correlation for all other pulsars is much less clear than\nfor PSR J0537$-$6910, we explore two hypotheses, both of which are motivated by noting that most glitches in PSR J0537$-$6910 are large:\n\n\n\\begin{itemize}\n\\item[(I)] The correlation is intrinsically present in the full population of glitches of each pulsar, but glitches below a certain size threshold are not detected, thereby increasing by random amounts the times between the detected ones and worsening the correlation.\\\\\n\n\n\\item[(II)] There are two classes of glitches: glitches above a certain threshold size that follow the correlation, and glitches below the same threshold that are uncorrelated. \n\n\\end{itemize}\n\n\n\n\\subsection{Hypothesis I: Incompleteness of the sample}\n\nIn order to test the first hypothesis, we simulate a hypothetical pulsar with 100 glitches that follow a perfect correlation between $\\Delta\\nu_{k}$ and $\\Delta \\uptau_{k+1}$.\nThe events smaller than a certain value are then removed to understand the effect of their absence in the correlation. \nThe procedure is the following:\n\n\n\\begin{enumerate}\n\n\\item\nGlitch sizes are generated from a power-law distribution given by $dN\/d \\Delta\\nu \\propto \\Delta\\nu^{-\\alpha}$, with power-law index $\\alpha>1$. \nWe choose a power-law distribution because it mainly produces small events, and we want to see the effect of removing a substantial fraction of them.\nSeveral different choices for $\\alpha$ were considered. \nHere we only show the results for $\\alpha = 1.2$ and 1.4, as they generate distributions that resemble some of the ones observed.\n\nThe distributions do not have an upper cutoff, and the lower limit was varied so that, after reducing the sample of glitches (as we explain in step 3 below), the resulting sample covers the typical observed range of glitch sizes ($10^{-2} - 10^2\\, \\rm{\\mu Hz}$).\n\n\\item\nThe time to the next glitch $\\Delta \\uptau_{k+1}$ is computed in terms of the glitch size $\\Delta \\nu_k$ as:\n\\begin{equation}\n\\Delta\\uptau_{k+1}= C\\Delta\\nu_{k}\\, . \n\\label{eq_corr}\n\\end{equation}\nThe value of the proportionality constant $C$ is irrelevant in this case, since we are simulating a generic pulsar.\n\\\\\n\n\\item \nSteps (1) and (2) are repeated until a sequence of 100 glitches is reached. 
\nThen the 80 smallest are removed, thereby leaving a reduced sample of 20 to be analyzed, which is comparable to the number of glitches observed in each of our 8 pulsars.\nThe lower limit for the distribution is computed analytically so that, after reducing the sample of glitches, the final sample covers the typical observed range of glitch sizes ($10^{-2} - 10^2\\, \\rm{\\mu Hz}$).\\\\\n\n\\item \nFinally, we calculate the time interval between each pair of successive glitches in the reduced sample, and determine both the Spearman and Pearson correlation coefficients between $\\Delta \\nu_k$ and $\\Delta \\uptau_{k+1}$.\n\n\\end{enumerate}\n\n\n\nAfter simulating $10^4$ cases, it was found that removing all glitches smaller than a certain value has a minor effect on the correlation. \nRepresentative realizations are shown in Fig. \\ref{hyp1}, where the correlation between $\\Delta \\nu_k$ and $\\Delta \\uptau_{k+1}$ is plotted in log-scale to show more clearly the dispersion produced by the removal of the smallest glitches. \nWe observe that missing small glitches does not substantially worsen the correlation: more than $90\\%$ of the realizations give correlation coefficients $\\geq 0.95$ (both Pearson and Spearman).\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=8cm]{fig_h1_v1.pdf}\n\\caption{Reduced samples of simulated glitches from an assumed parent distribution $dN\/d\\Delta\\nu\\propto \\Delta\\nu^{-\\alpha}$ with a perfect correlation $\\Delta\\tau_{k+1}=C\\Delta\\nu_k$, with $C=0.21\\, \\mathrm{yr\\, \\mu Hz^{-1}}$.\nTop: Resulting correlation between $\\Delta \\nu_k$ and $\\Delta \\uptau_{k+1}$. \nBottom: The corresponding distributions of $\\log \\Delta \\nu$ for the reduced samples of glitches. For both panels, each color (and point marker) represents a typical realization in the simulations, for different power-law exponents as shown in the legends.}\\label{hyp1}\n\\end{figure}\n\nFor $\\alpha>1.4$ the distribution becomes narrower, accumulating towards the lower limit. \nSince a large fraction of the simulated glitches have very similar sizes, after removing the 80 smallest glitches the correlation does worsen, and yields correlation coefficients between $0.4$ and $0.9$, which are similar to those exhibited by the real data.\nHowever, in these cases the distributions of glitch sizes differ strongly from those observed for the pulsars in our sample.\n\nFrom these simulations, we conclude that it is unlikely that the non-detection of all the glitches below a certain detection limit is the explanation for the low observed correlations in pulsars other than PSR J0537$-$6910.\n\n\\subsection{Hypothesis II: Two classes of intrinsically different glitches}\n\nThe second hypothesis states that pulsars exhibit two classes of glitches: larger events, which follow a linear correlation between $\\Delta \\nu_k$ and $\\Delta \\uptau_{k+1}$; and smaller events, for which these variables are uncorrelated.\nWe allow the point of separation between large and small glitches to be different for each pulsar.\n\nTo visualize whether this hypothesis works, correlation coefficients (for the same pair of variables, $\\Delta \\nu_k$ and $\\Delta \\uptau_{k+1}$) were calculated for sub-sets of glitches of the original sample. \nThe sub-sets are defined as all glitches with sizes larger or equal to a given $\\Delta\\nu_\\textrm{min}$. \nCorrelation coefficients as a function of $\\Delta\\nu_\\textrm{min}$ are plotted in Fig. 
\\ref{r_df0_min} for each pulsar.\nVisual inspection of the plots immediately tells us that by removing small glitches no pulsar reaches the level of correlation observed for PSR J0537$-$6910, for both correlation tests.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=180mm]{r_df0_min.pdf}\n\\caption{Pearson (orange squares) and Spearman (blue dots) correlation coefficients for glitches larger or equal than $\\Delta \\nu_\\textrm{min}$. Each panel represents a pulsar in our sample. For each pulsar, the last point in the plot was calculated with its five largest glitches. Note that some pulsars are shown in log-scale for a better visualization.} \n\\label{r_df0_min}\n\\end{figure*}\n\n\n\nIn the following we explore the curves in Fig. \\ref{r_df0_min} in some more detail.\nFor that purpose, Monte Carlo simulations of pulsars with correlated and uncorrelated glitches were performed.\nSince the underlying glitch size distributions of the pulsars in the sample are unknown, we use the measured values of a given pulsar.\nThe following is the procedure for one realization:\n\n\\begin{enumerate}\n\\item The glitches larger than a certain value $\\Delta\\nu^{\\star}$ are chosen in random order and assigned epochs according to their size.\nThe first one is set at an arbitrary epoch and the epochs of the following ones are assigned according to \n\\begin{equation}\n\\Delta\\uptau_{k+1}=\\Delta\\nu_{k}\\cdot 10^{x}\\, ,\n\\label{eq_hyp2}\n\\end{equation}\nwhere $x$ is drawn from a Gaussian distribution centred at $\\bar{x}=\\log(C)$ and with a standard deviation equal to $\\sigma_{\\bar{x}}$. \nThe latter allows us to introduce a dispersion in the correlation of the simulated glitches. \nThe distribution of $\\log(\\Delta\\uptau_{k+1}\/\\Delta\\nu_k)$ for all glitches with $\\Delta\\nu>5\\,\\mu$Hz in PSR J0537$-$6910 can be well modelled by a Gaussian distribution with standard deviation $\\sigma_{0537}=0.085$ (in logarithmic scale, if $\\Delta\\uptau_{k+1}$ is measured in days and $\\Delta\\nu_k$ is measured in $\\mu$Hz). \nIn the simulations, $\\sigma_{\\bar{x}}$ was set either to zero (i.e. $x=\\log(C)$, perfect correlation) or to multiples of $\\sigma_{0537}$. \\\\\n\n\\item The glitches smaller than $\\Delta\\nu^{\\star}$ are distributed randomly over the time span between the first and the last correlated glitches. \nThe resulting waiting times of all, correlated and uncorrelated glitches are then multiplied by a factor that ensures that their sum equals the time in between the first and the last observed glitches. \\\\\n\n\\item Steps 1 and 2 were repeated $10^4$ times for each considered value of $\\Delta \\nu^{\\star}$. 
\n\n\\end{enumerate}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=18cm]{fig_explaining_sim2_v3.pdf}\n\\caption{Correlation coefficients $r_p$ (orange) and $r_s$ (blue) versus $\\Delta\\nu_\\textrm{min}$ for simulated glitches under hypothesis II, and for three $\\Delta\\nu^{\\star}$ cases: left, when all glitches are correlated ($\\Delta\\nu^{\\star}\\sim0$); middle, about half of them are correlated ($\\Delta\\nu^{\\star}=12.39\\,\\mu$Hz); right, none of them is correlated ($\\Delta\\nu^{\\star}=40\\,\\mu$Hz).\nShaded regions represent the values of the $70\\%$ closer to the median of all realizations.\nThe dashed lines show particular realizations.\nThese simulations used the glitch sizes of PSR J0537$-$6910 and $\\sigma_{\\bar{x}}=\\sigma_{0537}$.\nIn all cases the last points in the plots were calculated using the five largest glitches.\n}\n\\label{examples}\n\\end{figure*}\n\n\nThe plots in Fig. \\ref{examples} show the results of simulations using the glitch sizes of PSR J0537$-$6910 and $\\sigma_{\\bar{x}}=\\sigma_{0537}$ for three values of $\\Delta \\nu^{\\star}$.\nThe results are shown via curves of $r$ versus $\\Delta\\nu_\\textrm{min}$, to compare with Fig. \\ref{r_df0_min}.\nThe shaded areas represent the $70\\%$ of the correlation coefficients closer to the median of all realizations.\nWe visually inspected the distributions of $r_p$ and $r_s$ for all possible $\\Delta\\nu_\\textrm{min}$ values, and for many $\\Delta \\nu^{\\star}$ cases.\nIt was verified that the median is sufficiently close to the maximum of the distribution in most cases.\nThough, this tends to fail for the largest $\\Delta\\nu_\\textrm{min}$ values, where the $r_p$ and $r_s$ distributions are rather flat.\nBut this is irrelevant because any conclusion pointing to a case in which only a few glitches are correlated (large $\\Delta\\nu_\\textrm{min}$) would have little statistical value, regardless of the above.\nThus we are confident that the shaded areas effectively cover the most possible outcomes of series of glitches under the assumptions considered.\n\nWe now use the plots in Fig. \\ref{examples} to understand the curves of the correlation coefficients as functions of $\\Delta\\nu_\\textrm{min}$ in Fig. \\ref{r_df0_min}, in the frame of hypothesis~II:\n\n\n\\begin{itemize}\n\n\\item[(a)] If all glitches were correlated, which is the case shown in the leftmost plot in Fig. \\ref{examples}, the correlation coefficients would decrease gradually as $\\Delta\\nu_\\textrm{min}$ increases. \nThis is because a progressive reduction of the sample, starting from the smallest events (i.e. increasing the remaining waiting times by small random amounts), will gradually kill the correlation.\nNote that the correlation coefficients of the simulated glitches start at values just below $1.0$ for the smallest $\\Delta\\nu_\\textrm{min}$, just like the observations of PSR J0537$-$6910.\nThis is because $\\sigma_{\\bar{x}}=\\sigma_{0537}$ in those simulations.\nOnly for $\\sigma_{\\bar{x}}=0$ the simulations would start at correlation coefficients equal to $1.0$.\n\n\\item[(b)] If only glitches above a certain size $\\Delta \\nu^{\\star}$ were correlated, the correlation coefficients would improve as small glitches are eliminated, and the remaining sub-set approaches the one in which all glitches are correlated (as in the middle plot of Fig. 
\\ref{examples}).\nOne would expect a maximum correlation for $\\Delta\\nu_\\textrm{min}\\sim\\Delta \\nu^{\\star}$, and a gradual decrease as $\\Delta\\nu_\\textrm{min}$ increases beyond $\\Delta \\nu^{\\star}$.\n\n\\item[(c)] If there were no correlated glitches, we should expect a rather flat curve of low correlation coefficients oscillating around zero (rightmost plot in Fig. \\ref{examples}).\n\n\\end{itemize}\n\nThe behaviours just described correspond to the general trends exhibited by the shaded areas in Fig. \\ref{examples}, which evolve smoothly with $\\Delta\\nu_\\textrm{min}$. \nHowever, particular realizations show abrupt variations, of both signs, just as the observations do in Fig. \\ref{r_df0_min}.\n\nClearly, PSR J0537$-$6910 is best represented by case (a).\nIndeed, both correlation coefficients for this pulsar are maximum (and very similar) when all glitches are included and they decrease gradually as the smallest glitches are removed (Fig. \\ref{r_df0_min}).\nNonetheless, we note that $r_p$ stays above $0.9$ (and $p_p<3\\times10^{-12}$) for $\\Delta\\nu_\\textrm{min}\\leq7\\,\\mu$Hz, hence it is possible that the smallest glitches are not correlated. \nAnother indication for this possibility is that the six glitches below $5\\,\\mu$Hz fall to the right of the distribution of $\\log(\\Delta\\uptau_{k+1}\/\\Delta\\nu_k)$ for all glitches, and the width of the distribution is reduced considerably (from more than 2 decades to a half decade) when they are removed.\nIn other words, the straight line that best fits the ($\\Delta\\uptau_{k+1}$, $\\Delta\\nu_k$) points passes closer to the origin \\citep[a more physically motivated situation,][]{aeka18}, and the data exhibit a smaller dispersion around this line, when the smallest glitches are not included.\n\n\n\n\nThe pulsars B1338$-$62, and B1758$-$23 may in principle also correspond to case (a).\nAs mentioned at the beginning of section \\ref{s2}, they present mildly significant correlations when all their glitches are considered, and both their $r_p$ and $r_s$ curves in Fig. \\ref{r_df0_min} decrease as $\\Delta\\nu_\\textrm{min}$ increases.\nBy performing simulations with $\\Delta \\nu^{\\star}=0$, and for different values of $\\sigma_{\\bar{x}}$, we find that the correlation coefficients of PSR B1758$-$23 are within the range of $70\\%$ of the possible outcomes if $\\sigma_{\\bar{x}}$ is set to 5-6 times $\\sigma_{0537}$.\n\nFor PSR B1338$-$62 the situation is less clear because the amplitudes of the variations of both $r_p$ and $r_s$ for $\\Delta\\nu_\\textrm{min}<1\\,\\mu$Hz are rather high.\nOne possible interpretation is that all glitches are correlated and the variations are due to the correlation not being perfect (i.e. $\\sigma_{\\bar{x}}\\neq0$).\nWe find that only for $\\sigma_{\\bar{x}}\\geq10\\times\\sigma_{0537}$ the simulations can reproduce such behaviour and the observed values. \nAnother possibility is that $\\Delta \\nu^{\\star}\\sim0.2\\,\\mu$Hz, which could explain the local maxima of $r_p$ and $r_s$ around that value.\nThe maxima and subsequent values can indeed be reproduced with lower levels of noise, $\\sigma_{\\bar{x}}=5\\times\\sigma_{0537}$. 
\nBut for smaller values of $\\Delta\\nu_\\textrm{min}$ most realizations ($>70\\%$) give correlation coefficients below $0.5$, thus they fail at reproducing the observed $0.6$-$0.7$ at $\\Delta\\nu_\\textrm{min}=0$.\n\nIt is clear that Hypothesis II does not apply to this pulsar directly, and that the observations are not consistent with a set of uncorrelated glitches either.\nBased on the lack of glitches with sizes equal or less than $0.1\\,\\mu$Hz after MJD $\\sim$ 50400 (Fig. \\ref{fig2}), we speculate that the sample might be incomplete for glitches smaller than this size after this date\\footnote{This would be a more extreme case than those considered for the Hypothesis I because $0.1\\,\\mu$Hz is a rather high limit.}.\n\n\n\nThe pulsars J0205$+$6449 and J0631$+$1036 also exhibit significant Pearson correlations when all their glitches are considered.\nHowever, their $r_s$ curves tend to increase with $\\Delta\\nu_\\textrm{min}$ rather to decrease.\nAs mentioned before, the Pearson test can be affected by outliers, hence the behaviour we see for $r_p$ is likely due to the very broad size and waiting times distributions and the low numbers of events towards the high ends of the distributions, which produce outlier points for both pulsars (Fig.\\ref{fig6}).\nIt is therefore difficult to conclude anything for PSR J0631+1036. \nMoreover, the observed behaviour is very hard to reproduce by the simulations, even for high levels of noise (we tried up to $\\sigma_{\\bar{x}}=12\\times\\sigma_{0537}$).\nPerhaps its largest glitches ($\\Delta\\nu\\geq0.1\\,\\mu$Hz) are indeed correlated, but the statistics are too low to conclude anything.\n\nFor J0205, however, the Spearman coefficients $r_s$ are rather high ($>0.55$ for all $\\Delta\\nu_\\textrm{min}$) and both coefficients become similar and even higher for $\\Delta\\nu_\\textrm{min}>1\\,\\mu$Hz. \nIt is possible that glitches above this size are correlated in this pulsar.\nWe find that the observed $r_p$ and $r_s$, and their evolution with $\\Delta\\nu_\\textrm{min}$, are within the $70\\%$ of simulations with $\\Delta \\nu^{\\star}=1.3\\,\\mu$Hz and for $\\sigma_{\\bar{x}} = 2\\times\\sigma_{0537}$. \nWe note, however, that in this case the correlation coefficients observed for $\\Delta\\nu_\\textrm{min}\\leq0.1\\,\\mu$Hz are higher than the vast majority of the realizations.\nPerhaps the small glitches are also correlated and follow their own relation, though we did not simulate such scenario.\nWe conclude that the Hypothesis II does not fully explain this pulsar, although the 8 glitches above $1\\,\\mu$Hz appear to be well correlated indeed.\n\n\n\n\nThe Vela pulsar is the only pulsar in the sample that seems well represented by case (b).\nThe highest $r_p=0.68$ has a probability $p_p=0.003$ and is obtained for $\\Delta\\nu_\\textrm{min}\\sim2\\,\\mu$Hz.\nBoth $r_p$ and $r_s$ decline monotonically for larger $\\Delta\\nu_\\textrm{min}$ values. \nThis behaviour suggests that glitches of sizes above $\\sim2\\,\\mu$Hz might indeed be correlated, but the correlation is somewhat noisy.\nThe observed correlation coefficients fall within the middle 70$\\%$ of the realizations if $\\sigma_{\\bar{x}}=2\\times\\sigma_{0537}$ and for $\\Delta\\nu^{\\star}=2$-$10\\,\\mu$Hz.\nThe case $\\Delta\\nu^{\\star}=9.35\\,\\mu$Hz is presented in Fig. 
\\ref{vela_h2}.\nWe prefer this case because simulations for $\\Delta\\nu^{\\star}=2\\,\\mu$Hz tend to fail at reproducing the low correlation coefficients ($\\leq0.4$) observed for the smallest $\\Delta\\nu_\\textrm{min}$.\n\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=18cm]{vela_2_hip2.pdf}\n\\caption{Observations and simulations of the Vela pulsar.\nLeft: Shaded regions indicate the values obtained by the $70\\%$ closer to the median of all realizations. \nThe observations are overlaid using dashed lines.\nCentre: comparison of observations (dashed) and one particular realization.\nRight: $\\Delta\\uptau_{k+1}$ versus $\\Delta\\nu_k$ for the same realization (red triangles) and for the observations (grey dots).\nOrange represents $r_p$ values and blue represents $r_s$ values in all panels.\nThe simulations were performed using $\\sigma_{\\bar{x}}=2\\times\\sigma_{0537}$ and $\\Delta\\nu^{\\star}=9.35\\,\\mu$Hz.}\n\\label{vela_h2}\n\\end{figure*}\n\n\n\nFinally, the cases of PSRs B0531+21 (the Crab) and B1737$-$30 are rather inconclusive.\nThe Crab pulsar is perhaps the pulsar for which case (c) applies the best. \nBoth correlation coefficients are negative or positive, and in both cases stay at relatively low absolute values, which leads to the conclusion that there are no correlated glitches in the Crab pulsar.\nWe note that the high $r_p$ and $r_s$ values observed for $\\Delta\\nu_\\textrm{min}\\sim0.6\\,\\mu$Hz are obtained with the 5-6 largest events and that a linear fit to their $\\Delta \\nu_k - \\Delta\\uptau_{k+1}$ does not pass close to the origin.\n\n \nThe case of B1737$-$30 is more complex.\nThe observations show two $\\Delta\\nu_\\textrm{min}$ values, $0.0015$ and $0.03\\,\\mu$Hz, after which the correlation coefficients decrease with the removal of more small glitches (Fig. \\ref{r_df0_min}). \nThis behaviour is hard to reproduce under Hypothesis II, unless the dispersion of the correlation is increased considerably, to $10\\times\\sigma_{0537}$ or more.\nWe conclude that Hypothesis II does not apply to this pulsar directly and that there is some extra complexity, as the data are also inconsistent with a set of purely uncorrelated glitches.\n\n\n\nSurprisingly, even though no pulsar complies perfectly with Hypothesis II, and the only way in some cases is to increase the dispersion of the correlation ($\\sigma_{\\bar{x}}\\gg\\sigma_{0537}$), there is no pulsar in the sample that is well represented by case (c) (only the Crab, to some extent).\n\nTherefore, the sizes of at least some glitches must be positively correlated with the times to the next glitch in the available datasets.\nThe question is why this correlation is much stronger in PSR J0537$-$6910 than in all other pulsars of our sample.\nCould this be an effect of its particularly high spin-down rate? Or the fact that most of its glitches are large?\nIt could be that the correlations are indeed there, as stated in Hypothesis II, but for some reason exhibit high $\\sigma_{\\bar{x}}$ values. 
\nMaybe the fact that the glitches in PSR J0537$-$6910 occur so frequently ensures that the relationship stays pure.\nBut it could also be that reality was more complex.\nFor instance, it could be that both small and large glitches were correlated, but each of them followed a different law.\n\n\n\n\\section{Other correlations} \n\\label{s3}\n\nWe looked for other correlations between the glitch sizes and the times between them.\nSpecifically, we tried $\\Delta \\nu_k$ vs $\\Delta \\uptau_{k}$ (size of the glitch versus the time since the preceding glitch), and $\\Delta \\nu_k$ vs $\\Delta \\nu_{k-1}$ (size of the glitch versus the size of the previous glitch).\nNo pulsar shows a significant correlation between these quantities (Table \\ref{others_correlations}). \n\n\n\\begin{table*}\n\\caption{Correlation coefficients for the pairs of variables $(\\Delta \\nu_k,\\,\\Delta \\uptau_{k})$, and $(\\Delta \\nu_k,\\, \\Delta \\nu_{k-1})$.} \\label{others_correlations}\n\\small\n \\begin{subtable}{0.47\\textwidth}\n \\begin{tabular*}{\\linewidth}{@{}l \n @{\\extracolsep{\\fill}} SS\n S[table-format=2.2(2)]\n S[table-format=2.2(2)]@{}}\n \\toprule\n \\phantom{Var.} & \n \\multicolumn{4}{c}{$\\Delta \\nu_k$ vs $\\Delta \\uptau_{k}$}\\\\\n \\cmidrule{1-5}\n {PSR Name}& {$r_p$} & {$p_p$} & {$r_s$} & {$p_s$}\\\\\n \\midrule\n J0205$+$6449\\hspace{0.5cm} & 0.16 & 0.60 & 0.44 & 0.15\\\\\n B0531$+$21 & -0.02 & 0.90 & 0.40 & 0.05\\\\\n J0537$-$6910 & -0.08 & 0.60 & -0.12 & 0.41 \\\\[1ex]\n J0631$+$1036 & -0.10 & 0.68 & -0.18 & 0.49 \\\\\n B0833$-$45 & 0.55 & 0.01 & 0.27 & 0.24 \\\\\n B1338$-$62 & -0.30 & 0.16 & -0.18 & 0.41 \\\\[1ex]\n B1737$-$30 & -0.02 & 0.89 & -0.10 & 0.56 \\\\\n B1758$-$23 & -0.02 & 0.94 & -0.04 & 0.89 \\\\\n \\bottomrule\n \\end{tabular*}%\n \n \\end{subtable}%\n \\hspace*{\\fill}%\n \\begin{subtable}{0.47\\textwidth}\n \\begin{tabular*}{\\linewidth}{@{}l \n @{\\extracolsep{\\fill}} SS\n S[table-format=2.2(2)]\n S[table-format=2.2(2)]@{}}\n \\toprule\n \\phantom{Var.}\n & \\multicolumn{4}{c}{$\\Delta \\nu_k$ vs $\\Delta \\nu_{k-1}$}\\\\\n \\cmidrule{1-5}\n {PSR Name}& {$r_p$} & {$p_p$} & {$r_s$} & {$p_s$}\\\\\n \\midrule\n J0205$+$6449\\hspace{0.5cm} & -0.06 & 0.83 & 0.25 & 0.42\\\\\n B0531$+$21 & -0.10 & 0.61 & -0.15 & 0.47\\\\\n J0537$-$6910 & -0.13 & 0.38 & -0.16 & 0.29\\\\[1ex]\n J0631$+$1036 & -0.12 & 0.65 & 0.32 & 0.21 \\\\\n B0833$-$45 & -0.08 & 0.71 & -0.12 & 0.59\\\\\n B1338$-$62 & -0.33 & 0.13 & -0.13 & 0.55\\\\[1ex]\n B1737$-$30 & -0.11 & 0.50 & 0.03 & 0.85\\\\\n B1758$-$23 & -0.02 & 0.92 & -0.04 & 0.89\\\\\n \\bottomrule\n \\end{tabular*}%\n \n \\end{subtable}\n\\tablefoot{The first column contains the names of the pulsars considered in the sample. $r_{n}$ and $p_{n}$ correspond to a correlation coefficient and its $p$-value, respectively. The sub-index $n = p$ denotes the Pearson correlation, and $n=s$ denotes the Spearman correlation.}\n\\end{table*}\n\n\nNearly all pulsars in our sample show negative correlation coefficients (both, Pearson and Spearman) for $\\Delta \\nu_k$ vs $\\Delta \\uptau_{k}$. \nThe only exceptions are the Vela pulsar and PSR J0205$+$6449 (see Table \\ref{others_correlations}). 
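For reference, the lagged correlation tests reported here and in Sec. \\ref{s2} can be reproduced schematically as in the following sketch, which uses made-up glitch sizes and epochs; the Pearson and Spearman tests are taken from scipy.stats.

\\begin{verbatim}
# Minimal sketch of the lagged correlation tests (made-up sizes and epochs).
import numpy as np
from scipy.stats import pearsonr, spearmanr

sizes  = np.array([12.0, 0.5, 20.3, 3.1, 17.8, 0.09, 22.4])   # Delta nu_k (muHz)
epochs = np.array([50100., 50400., 50950., 51600., 51830., 52500., 52640.])
waits  = np.diff(epochs) / 365.25                             # Delta tau (yr)

pairs = {"dnu_k vs dtau_k+1": (sizes[:-1], waits),
         "dnu_k vs dtau_k":   (sizes[1:],  waits),
         "dnu_k vs dnu_k-1":  (sizes[1:],  sizes[:-1])}
for label, (x, y) in pairs.items():
    rp, pp = pearsonr(x, y)
    rs, ps = spearmanr(x, y)
    print(label, round(rp, 2), round(pp, 2), round(rs, 2), round(ps, 2))
\\end{verbatim}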
Our results are in general agreement with \\cite{mhf18},\nwho also found a lack of correlation between $\\Delta \\nu_k$ vs $\\Delta \\uptau_{k}$ for individual pulsars.\n\nFor $\\Delta \\nu_k$ vs $\\Delta \\nu_{k-1}$, in most cases the correlation coefficients are close to zero and the $p$-values are larger than $0.2$, i.e., no individual pulsar shows a significant correlation. However, the results could still be meaningful for the sample as a whole because all the pulsars have negative correlation coefficients, except for the Spearman coefficients for PSRs J0631$+$1036 and B1737$-$30). \nThe probability of getting all Pearson's correlations coefficients of the same sign just by chance, regardless of whether the sign is positive or negative, is $2\\times p_{\\mathrm{binom}}(8|8) = 0.007$. This could establish an interesting constraint on the glitch mechanism: Smaller glitches are somewhat more likely to be followed by larger ones, and vice-versa.\nHowever, this statement has to be confirmed with more data in the future.\n\n\n\n\\section{Discussion} \n\\label{disc}\n\n\\citet{fer+17} found that all pulsars (with the strong exception of the Crab pulsar and PSR B0540$-$69)\nare consistent with a constant ratio between the glitch activity, $\\dot{\\nu}_{\\rm g}$, and the spin-down rate,\n$\\dot\\nu_{\\rm{g}}\/|\\dot\\nu| = 0.010 \\pm 0.001$, i.e., $\\approx 1\\%$ of their spin-down is recovered by the glitches. This fraction has been interpreted as the fraction of the moment of inertia in a superfluid component that transfers its angular momentum to the rest of the star in the glitches \\citep{lel99,aghe12}.\n\\citet{fer+17} used the observed bimodal distribution of glitch sizes to distinguish between large and small glitches, with the boundary at $\\Delta\\nu=10\\, \\mathrm{\\mu Hz}$, and argued that the constant ratio is determined by the large glitches, whose rate, $\\dot N_\\ell$ is also proportional to $|\\dot\\nu|$. In\nthis scenario, the\nmuch lower (sometimes null) glitch activities measured in many low-$|\\dot{\\nu}|$ pulsars are due to \ntheir observation time spans\nnot being long enough to include any large glitches (or any glitch at all).\nInterestingly, the pulsars in our sample (except the Crab) are quite consistent with the constant ratio (Fig. \\ref{fig_discussion}), even those, like PSRs B1338$-$62, B1737$-$30, and B1758$-23$, which do not have any large glitches contributing to their activities.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{discussion.pdf}\n\\caption{$\\dot{\\nu}_g\/|\\dot{\\nu}|$ versus $|\\dot{\\nu}|$ for pulsars in our sample.\nThe dashed-line with the blue region correspond to the constant ratio $\\dot{\\nu}_g\/|\\dot{\\nu}| = 0.010 \\pm\n0.001$, determined by \\citet{fer+17}.\nThe error bars were calculated as described in the latter paper.}\n\\label{fig_discussion}\n\\end{figure}\n\n\nOn the other hand, pulsars with higher spin-down rates also have a larger fraction of large glitches. At the highest spin-down rates ($|\\dot{\\nu}|\\geq 10^{-11}$\\,Hz\\,s$^{-1}$), the production of large glitches becomes comparable and sometimes higher than the production of small glitches, again with the notorious exception of the Crab and PSR B0540$-$69.\nThis trend is also followed by the pulsars in our sample: all large glitches (but one in PSR J0631+1036), are concentrated in PSRs J0205$+$6449, J0537$-$6910, and the Vela pulsar, which are (together with the Crab) the ones with largest $|\\dot{\\nu}|$ values (see Fig. 
\\ref{fig1} and \\ref{fig_discussion}). \n\n\nThus, it seems to be the case that both large and small glitches draw from the same angular momentum reservoir (for all but the very young, Crab-like pulsars), but have different trigger mechanisms, the large ones being produced once a critical state\nis reached, whereas small ones occur in a more random fashion. \nFor reasons still to be understood, the glitch activity of relatively younger, high $|\\dot\\nu|$, Vela-like pulsars is dominated by large glitches, whereas for smaller $|\\dot \\nu|$ the large glitches become less frequent, both in absolute terms and relative to the small ones \\citep{wmp+00,elsk11}. \nIn this context, it is interesting to note that recent long-term braking index measurements\nindicate that Vela-like pulsars move towards the region where PSRs J0631+1036, B1737$-$30, and B1758$-$23 are located on the $P$--$\\dot{P}$ diagram \\citep[][]{els17}.\n\n\n\n\n\n\n\\section{Summary and Conclusions} \n\\label{conc}\n\nWe studied the individual glitching behaviour of the eight pulsars that today have at least ten detected glitches.\nOur main conclusions are the following:\n\n\n\\begin{enumerate}\n\n\\item \nWe confirm the previous result by \\cite{mpw08} and \\cite{hmd18} that, for Vela and PSR J0537$-$6910, the distributions of both their glitch sizes and waiting times are best fitted by Gaussians, indicating well-defined scales for both variables. For all other pulsars studied, the waiting time distribution is best fitted by an exponential (as would be expected for mutually uncorrelated events), but they have a variety of best-fitting size distributions: a power law for PSR J0205+6449, J0631+1036, and B1737$-$30, a log-normal for the Crab and PSR B1338$-$62, and an exponential for PSR B1758$-$23.\n\n\\item \nAll pulsars in our sample, except for the Crab, have positive Spearman and Pearson correlation coefficients for the relation between the size of each glitch, $\\Delta\\nu_k$, and the waiting time to the following glitch, $\\Delta\\tau_{k+1}$. For each coefficient, the probability for this happening by chance is $1\/16=6.25\\%$. \nBoth coefficients also stay positive as the small glitches are removed\n(see Fig. \\ref{r_df0_min}).\n\n\n\\item \nPSR J0537$-$6910 shows by far the strongest correlation between glitch size and waiting time until the following glitch ($r_p=r_s=0.95$, $p$-values $\\lesssim 10^{-22}$). \nAnother three pulsars, PSRs J0205$+$6449, B1338$-$62, and B1758$-$23, have quite significant correlations ($p$-values $\\leq 0.004$ for both coefficients).\n\n\\item\nOur first hypothesis to explain the much weaker correlations in all other pulsars compared to PSR J0537$-$6910, namely missing glitches that are too small to be detected, is very unlikely to be correct. 
Our Monte Carlo simulations show that, for reasonable glitch size distributions, it cannot produce an effect as large as observed.\n\n\\item \nOur alternative hypothesis, namely that there are two classes of glitches, large correlated ones and small uncorrelated ones, comes closer to reproducing the observed relations; notably for PSRs J0205$+$6449 and Vela.\nThe resulting correlations for both pulsars present dispersions that are twice the one observed for PSR J0537$-$6910.\nFor the other pulsars, the required dispersions to accommodate this hypothesis are much larger.\n\n\n\\item\nThe correlation coefficients between the sizes of two successive glitches, $\\Delta\\nu_{k-1}$ and $\\Delta\\nu_k$, as well as between the size of a glitch, $\\Delta\\nu_k$ and the waiting time since the previous glitch, $\\Delta\\tau_k$, are generally not significant in individual pulsars, but they are negative for most cases, suggesting some (weaker) relation also among these variables.\n\n\\item\nExcept for the Crab, all pulsars in our sample are consistent with the constant ratio between glitch activity and spin-down rate, $\\dot\\nu_\\mathrm{g}\/|\\dot\\nu|=0.010\\pm 0.001$ \\citep{fer+17}. This includes cases dominated by large glitches, as well as others with only small glitches. \n\n\\item\nThe previous results suggest that large and small glitches draw their angular momentum from a common reservoir, although they might be triggered by different mechanisms. Large glitches, which dominate at large $|\\dot\\nu|$ (except for the Crab and PSR B0540$-$69), might occur once a certain critical state\nis reached, while small glitches, dominating in older pulsars with lower $|\\dot\\nu|$, occur at essentially random times.\n\n\n\n\n\\end{enumerate}\n\nAll the above is based on the behaviour of the pulsars with the most detected glitches. \nEven though we have shown before that the activity of all pulsars appears to be consistent with one single trend, these pulsars could still be outliers among the general population. \nOnly many more years of monitoring will clarify the universality of these results.\n\n\n\\begin{acknowledgements}\nWe thank Vanessa Graber and Simon Guichandut for valuable comments on the first draft of this article. We are also grateful to Wilfredo Palma for conversations that guided us at the beginning of this work. We also thank Ben Shaw for information regarding the detection of recent glitches and for keeping the glitch catalog up to date.\nThis work was supported in Chile by CONICYT, through the projects ALMA31140029, Basal AFB-170002, and FONDECYT\/Regular 1171421 and 1150411.\nJ.R.F. acknowledges\tpartial support by an NSERC Discovery Grant awarded to A. Cumming at McGill University.\nC.M.E. acknowledges support by the Universidad de Santiago de Chile (USACH).\n\\end{acknowledgements}\n\n\n\n\\bibliographystyle{aa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzewfo b/data_all_eng_slimpj/shuffled/split2/finalzzewfo new file mode 100644 index 0000000000000000000000000000000000000000..4e1cfcbc663747ae9eac7db12ca861739fa6e331 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzewfo @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\nDiscrete statistical models with latent variables and graphical components are widely used in statistics, machine learning, and many real-world applications. 
Examples include noisy-or Bayesian networks in medical diagnosis \\citep{shwe1991qmrdt, halpern2013noisyor}, binary latent skill models in educational cognitive diagnosis \\citep{chen2015qmat, xu2017rlcm, gu2021jmle}, and restricted Boltzmann machines and their variants in machine learning \\citep{hinton2006fast, goodfellow2016deep}.\nGenerally, incorporating latent variables into graphical models can greatly enhance the flexibility of a model. But such flexibility comes at a cost of the increasing model complexity and statistical subtlety, including the identifiability as a fundamental and challenging issue.\nRecognizing the potential non-identifiability caused by latent variables in graphical models, a body of works \\cite[e.g.][]{foygel2012half, evans2014markovian} project away the latent variables and study the property of the induced mixed graph among the observed variables.\nIn some applications, however, the latent variables themselves carry important substantive meanings and inferring the parameters involving the latent is of paramount importance and practical interest \\citep[e.g.][]{bing2020overlap, bing2020detecting}.\nIn this work, we propose a general algebraic technique to investigate identifiability of discrete models with complicated latent and graphical components, characterize the minimal identifiability requirements for a class of such models motivated by diagnostic test applications, and along the way reveal a new geometry about multidimensional latent structures -- the blessing of dependence on identifiability.\n\n\n\nStatistically, a set of parameters for a family of models are said to be identifiable, if distinct values of the parameters correspond to distinct joint distributions of the observed variables.\nIdentifiability is a fundamental prerequisite for valid statistical estimation and inference. \nIn the literature, identifiability of discrete statistical models with latent variables is known to be challenging to study, partly due to their inherent complex nonlinearity.\nFor example, Latent Class Models \\citep[LCMs;][]{lazarsfeld1968latent} are a simplest form of discrete models with latent structure, which assumes a univariate discrete latent variable renders the multivariate categorical responses conditional independent.\nDespite the seemingly simple structure and the popularity of LCMs in social and biomedical applications, their identifiability issues had eluded researchers for decades.\n\\cite{goodman1974} investigated several specific small-dimensional LCMs, some being identifiable and some not.\n\\cite{gyllenberg1994non} proved LCMs with binary responses are not \\textit{strictly identifiable}. \\cite{carreira2000practical} empirically showed the so-called practical identifiability of LCMs using simulations. \nAnd finally, \\cite{allman2009} provided a rigorous statement about the \\textit{generic identifiability} of LCMs, whose proof leveraged Kruskal's Theorem from \\cite{kruskal1977three} on the uniqueness of three-way tensor decompositions. 
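\n\nTo make the connection between latent class models and three-way tensor decompositions concrete, the following minimal numerical sketch (our own illustration, not taken from the works cited above; all dimensions and parameter values are arbitrary) builds the joint probability tensor of a small LCM as a CP decomposition whose factors are the class proportions and the class-conditional response probabilities.\n\\begin{verbatim}\nimport numpy as np\n\n# Latent class model with H = 2 classes and p = 3 observed categorical\n# responses, each with d = 2 categories (illustrative values only).\nH, p, d = 2, 3, 2\nrng = np.random.default_rng(0)\nnu = np.array([0.4, 0.6])            # class proportions\n# theta[j][c, h] = P(y_j = c | latent class = h); columns sum to one\ntheta = [rng.dirichlet(np.ones(d), size=H).T for _ in range(p)]\n\n# Joint pmf tensor: Pi[c1, c2, c3] = sum_h nu[h] * prod_j theta[j][c_j, h],\n# i.e., a rank-H CP decomposition of the p-way probability array.\nPi = np.zeros((d,) * p)\nfor h in range(H):\n    Pi += nu[h] * np.einsum('i,j,k->ijk', theta[0][:, h],\n                            theta[1][:, h], theta[2][:, h])\n\nprint(Pi.sum())   # equals 1, so Pi is a valid joint distribution\n\\end{verbatim}\nIdentifiability of the LCM parameters amounts to uniqueness, up to relabeling the latent classes, of such a CP decomposition, which is exactly what Kruskal-type conditions address.\n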
\n\n\n\n\nTo be concrete, \\textit{strict identifiability} means model parameters are identifiable everywhere in some parameter space $\\mathcal T$.\nA slightly weaker notion, \\textit{generic identifiability}\nformalized and popularized by \n\\cite{allman2009}, allows for a subset $\\mathcal N\\subseteq\\mathcal T$ where non-identifiability may occur, requiring $\\mathcal N$ to have Lebesgue measure zero in $\\mathcal T$; such $\\mathcal N$s are basically zero sets of polynomials of model parameters \\citep{allman2009}.\nIn some cases, these measure-zero subsets may be trivial, such as simply corresponding to the boundary of the parameter space. In some other cases, however, these subsets may be embedded in the interior of the parameter space, or even carries rather nontrivial geometry and interesting statistical interpretation (as is the case in this work under the minimal conditions for generic identifiability).\nA precise characterization of the measure-zero subset where identifiability breaks down is essential to performing correct statistical analysis and hypothesis testing \\citep{drton2009lrt}. But it is often hard to obtain a complete understanding of such sets or to derive sharp conditions for identifiability in complicated latent variable models. These issues become even more challenging when graphical structures are present in latent variable models.\n\n\n\n\nIn this work, we present a general algebraic technique to study the identifiability of discrete statistical models with latent and graphical components, and use it to investigate an interesting class of such models. \nIn the literature, pioneered by \\cite{allman2009}, many existing identifiability results for models involving discrete latent structures leveraged Kruskal's Theorem\n\\citep[e.g.,][]{allman2008covarion, fang2019, culpepper2019ordinal, chen2020slcm, fang2020bifactor}; these studies cover models ranging from phylogenetics in evolutionary biology to psychometrics in education and psychology.\nThese identifiability proofs using Kruskal's Theorem often rely on certain global rank conditions of the tensor formulated under the model.\nIn contrast, we \ncharacterize a useful transformation property of the Khatri-Rao tensor products of arbitrary discrete variables' probability tables.\nWe then use this property to investigate how any specific parameter impacts the zero set of \npolynomials induced by the latent and graphical constraints.\nThis general technique covers as a special case the one in \\cite{xu2017rlcm} for restricted latent class models with binary responses.\nOur approach will unlock possibilities to study identifiability\nat the {finest} possible scale (rather than checking global rank conditions of tensors), and hence help obtain sharp conditions and characterize the aforementioned measure-zero non-identifiable sets.\nIn particular, we will study settings where Kruskal's theorem {does not apply}, demonstrating the power of this technique. \n\n\n\n\n\nWe provide an overview of our results.\nMotivated by epidemiological and educational diagnosis tests, we focus on discrete models with multiple binary latent variables, where\nthe latent-to-observed measurement graph is a forest of star trees. Namely, each latent variable can have several observed noisy proxy variables as children.\nWe allow the binary latent variables to have arbitrary dependencies among themselves for the greatest possible modeling flexibility. \nCall this model the \\textit{Binary Latent cliquE Star foreSt (BLESS)} model. 
\nWe characterize the necessary and sufficient graphical criteria for strict and generic identifiability, respectively, of the BLESS model; this includes identifying both the discrete star-forest structure and the various continuous parameters.\nUnder the minimal conditions for generic identifiability that each latent variable has \\emph{exactly two} observed children, we show that the measure-zero set $\\mathcal N$ in which identifiability breaks down is the independence model of the latent variables.\nThat is, our identifiability condition delivers a deep and somewhat surprising geometry of \\textit{blessing-of-dependence} -- the statistical dependence between latent variables can help restore identifiability.\nMore broadly, this blessing-of-dependence phenomenon has nontrivial connections to and implications on the uniqueness of matrix and tensor decompositions. \nBuilding on the blessing of dependence, we propose a formal statistical hypothesis test of identifiability in the boundary case. In fact, in this case testing identifiability amounts to testing the marginal dependence of the latent variables' observed children. \n\n \nOur results have practical relevance on statistical modeling and real-world applications employing multidimensional latent structures.\nIn many applications, it is intrinsically natural and interpretable to conceptualize each latent construct as presence or absence of some underlying trait.\nExamples include diagnosing the presence\/absence of multiple unobserved disease pathogens of a patient in epidemiology \\citep{wu2016partially, wu2017nested, o2019causes}, and determining the mastery\/deficiency of multiple latent skills of a student in educational testing \\citep{von2005, henson2009, dela2011, george2015cdm}.\nStatistically, such an appealing conceptualization leads to statistical models with\nmultidimensional binary latent variables. \nIn addition, such models are also widely used in machine learning and deep learning\nas building blocks of deep generative models\n\\citep{hinton2006reducing, hinton2006fast, salakhutdinov2009deep}.\nThe models mentioned above often possess unique and curious algebraic structures.\nUnderstanding the statistical properties caused by these algebraic structures will provide valuable insight into scientific and statistical learning practices. This work contributes a new tool and new understanding in this regard.\n\n\n\nThe rest of this paper is organized as follows.\nSection \\ref{sec-setup} introduces the formal setup of the BLESS model and several relevant identifiability notions.\nSection \\ref{sec-main} presents the main theoretical results of identifiability and overviews our general proof technique.\nSection \\ref{sec-test} proposes a statistical hypothesis test of identifiability of the BLESS model under minimal conditions for generic identifiability.\nSection \\ref{sec-prac} presents two real-world examples and Section \\ref{sec-disc} concludes the paper.\n\n\n\n\\section{Model Setup and Identifiability Notions}\\label{sec-setup}\n\\subsection{Binary Latent cliquE Star foreSt (BLESS) model}\nWe next introduce the setup of the BLESS model, the focus of this study.\nWe first introduce some notation. \nFor an integer $m$, denote $[m]=\\{1,\\ldots,m\\}$. 
For a $K$-dimensional vector $\\boldsymbol x=(x_1,\\ldots, x_K)$ and some index $k\\in[K]$, denote the $(K-1)$-dimensional vector by $\\boldsymbol x_{-k} = (x_1,\\ldots,x_{k-1},x_{k+1},\\ldots,x_K)$.\nConsider discrete statistical models with $K$ binary latent variables $\\alpha_1,\\ldots,\\alpha_K\\in\\{0,1\\}$ and $p$ categorical observed variables $y_1,\\ldots,y_p \\in[d]$.\nHere $d\\geq 2$ is the number of categories of each observed variable.\nBoth the latent vector $\\boldsymbol \\alpha = (\\alpha_1,\\ldots,\\alpha_K) \\in \\{0,1\\}^K$ and the observed vector $\\boldsymbol y=(y_1,\\ldots,y_p) \\in [d]^K$ are subject-specific random quantities, and have their realizations for each subject $i$ in a random sample.\nFor two random vectors (or variables) $\\boldsymbol x$ and $\\boldsymbol y$, denote by $\\boldsymbol x \\perp\\!\\!\\!\\perp \\boldsymbol y$ if $\\boldsymbol x$ and $\\boldsymbol y$ are statistically independent, and denote by $\\boldsymbol x \\not\\! \\perp\\!\\!\\!\\perp \\boldsymbol y$ otherwise.\n\n\\begin{figure}[h!]\n\\centering\n\\resizebox{0.4\\textwidth}{!}{\n\\begin{tikzpicture}\n\\def5 {5}\n\\def2.8cm {1.4cm}\n\n\\foreach \\s in {1,...,5}\n{\n\\node (\\s)[draw, circle, minimum size=20pt, inner sep=0pt] at ({90+360\/5 * (\\s - 1)}:2.8cm) {$\\alpha_{\\s}$};\n}\n\\def 11 {10}\n\\def 2.8cm {2.8cm}\n\n\\foreach \\ss in {1,...,11}\n{\n\\node (y\\ss)[draw, circle, minimum size=20pt, inner sep=0pt, fill=black!10] at ({72+360\/11 * (\\ss - 1)}:2.8cm) {$y_{\\ss}$};\n}\n\n\\draw[dotted, thick] (1) -- (2);\n\\draw[dotted, thick] (1) -- (3);\n\\draw[dotted, thick] (1) -- (4);\n\\draw[dotted, thick] (1) -- (5);\n\\draw[dotted, thick] (2) -- (3);\n\\draw[dotted, thick] (2) -- (4);\n\\draw[dotted, thick] (2) -- (5);\n\\draw[dotted, thick] (3) -- (4);\n\\draw[dotted, thick] (3) -- (5);\n\\draw[dotted, thick] (4) -- (5);\n\n\\draw[->, thick] (1) -- (y1);\n\\draw[->, thick] (1) -- (y2);\n\\draw[->, thick] (2) -- (y3);\n\\draw[->, thick] (2) -- (y4);\n\\draw[->, thick] (3) -- (y5);\n\\draw[->, thick] (3) -- (y6);\n\\draw[->, thick] (4) -- (y7);\n\\draw[->, thick] (4) -- (y8);\n\\draw[->, thick] (5) -- (y9);\n\\draw[->, thick] (5) -- (y10);\n\\end{tikzpicture}}\n\n\n\\caption\nBLESS model with $K=5$ latent variables and $p=10$ observed variables.\nAll nodes are discrete random variables, with $\\alpha_k\\in\\{0,1\\}$ latent and $y_j\\in\\{1,\\ldots,d\\}$ observed. Directed edges form the measurement graph and can be equivalently represented as a $10 \\times 5$ graphical matrix $\\mathbf G$. Undirected dotted lines between latent variables $\\alpha_k$'s indicate arbitrary dependence in the latent part.\n}\n\\label{fig-graph1}\n\\end{figure}\n\n\n\nA key structure in the BLESS model is the latent-to-observed \\emph{measurement graph}. \nThis is a bipartite graph with directed edges from the latent $\\alpha_k$'s to the observed $y_j$'s indicating direct statistical dependence.\nThe BLESS model posits that the measurement graph is a forest of star trees; namely,\neach latent variable can have multiple observed proxy variables as \\emph{children}, but each observed variable has exactly one latent \\emph{parent}. \nOn the latent part, we allow the binary latent variables to have arbitrary dependencies for the greatest possible modeling flexibility. 
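\n\nAs a concrete data-structure illustration (our own encoding choice, introduced only for this sketch), the star-forest measurement graph of the example in Figure \\ref{fig-graph1} can be stored as a simple parent-index array, from which the children of each latent variable are immediately recovered.\n\\begin{verbatim}\n# Star forest with K = 5 latent variables and p = 10 observed variables:\n# parent[j] is the (0-based) index of the single latent parent of y_{j+1}.\nK, p = 5, 10\nparent = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]\n\n# Children of each latent variable; every latent variable here has two.\nchildren = {k: [j for j in range(p) if parent[j] == k] for k in range(K)}\nassert all(len(children[k]) >= 1 for k in range(K))\nprint(children)   # {0: [0, 1], 1: [2, 3], 2: [4, 5], 3: [6, 7], 4: [8, 9]}\n\\end{verbatim}\n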
Figure \\ref{fig-graph1} provides a graphical illustration of the BLESS model, where we draw latent variables as white nodes and observed variables as gray nodes.\nAlthough assuming that each observed variable has exactly one latent parent may appear restrictive, we point out that the arbitrary dependence between the latent variables indeed allows the observables to have extremely flexible and rich joint distributions.\nIn Figure \\ref{fig-graph1}, the solid directed edges from the latent to the observed variables form a star-forest-shaped measurement graph, and the dotted undirected edges between all pairs of latent variables indicate arbitrary possible dependence among them. \n\n\nEquivalently, we can represent the bipartite measurement graph from the $K$ latent variables to the $p$ observed children in a $p\\times K$ \\emph{graphical matrix} $\\mathbf G=(g_{j,k})$ with binary entries, where $g_{j,k}=1$ indicates $\\alpha_k$ is the latent parent of $y_j$ and $g_{j,k}=0$ otherwise.\nEach row of $\\mathbf G$ contains exactly one entry of ``1'' due to the star-forest graph structure.\nStatistically, the conditional distribution of $y_j\\mid \\boldsymbol \\alpha$ equals that of $y_j\\mid\\alpha_{k}$ if and only if $g_{j,k}=1$.\nWe can therefore denote the conditional distribution of $y_j$ given the latent variables as follows,\n\\begin{align*}\n\\forall c_j \\in[d],\\quad \\mathbb P(y_j = c_j \\mid \\boldsymbol \\alpha, \\mathbf G) = \n\\mathbb P(y_j = c_j \\mid \\alpha_{k},~ g_{j,k}=1)\n=\n\\begin{cases}\n\\theta^{(j)}_{c_j\\mid 1}, & \\text{if } \\alpha_{k} = 1;\\\\[3mm]\n\\theta^{(j)}_{c_j\\mid 0}, & \\text{if } \\alpha_{k} = 0.\n\\end{cases}\n\\end{align*}\nTo avoid the somewhat trivial non-identifiability issue associated with the sign flipping of each binary latent variable ($\\alpha_k$ flipping between 0 and 1), we assume\n\\begin{align}\\label{eq-flip}\n \\theta^{(j)}_{c_j\\mid 1} > \\theta^{(j)}_{c_j\\mid 0},\\quad c_j=1,\\ldots,d-1,\n\\end{align} \nfor all $j\\in[p]$; this can be understood as fixing the interpretation of $\\alpha_k$ so that possessing the underlying latent trait always increases the response probability of the first $d-1$ non-baseline categories.\nFixing any other order works equally well for our identifiability arguments.\n\n\n\nTo complete the model specification, we need to describe the distribution of the latent variables $\\boldsymbol \\alpha=(\\alpha_1,\\ldots,\\alpha_K)$. As mentioned before, we do not impose any restrictions on the dependence structure among the latent variables, but just adopt the most flexible saturated model. That is, we give each possible binary latent pattern $\\boldsymbol \\alpha\\in\\{0,1\\}^K$ a population proportion parameter $\\nu_{\\boldsymbol \\alpha}=\\mathbb P(\\boldsymbol a_i = \\boldsymbol \\alpha)$, where $\\boldsymbol a_i$ denotes the latent profile of a random subject $i$ in the population. 
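\n\nFor concreteness, the following minimal generative sketch (our own illustration; the dimensions and numerical values are arbitrary assumptions and are not used elsewhere in the paper) draws one response vector from a small BLESS model by first sampling a latent pattern from $\\boldsymbol\\nu$ and then sampling each $y_j$ from the conditional distribution determined by its single latent parent.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nK, p, d = 2, 4, 3\n# Graphical matrix G (p x K): each row has exactly one entry equal to 1.\nG = np.array([[1, 0],\n              [1, 0],\n              [0, 1],\n              [0, 1]])\nparent = G.argmax(axis=1)        # latent parent index of each y_j\n\n# Illustrative conditional probabilities satisfying the order constraint;\n# taken to be the same for every j only to keep the sketch short.\ntheta1 = np.array([0.6, 0.3, 0.1])   # P(y_j = c | parent latent = 1)\ntheta0 = np.array([0.2, 0.2, 0.6])   # P(y_j = c | parent latent = 0)\n\n# Saturated latent distribution: one proportion per pattern in {0,1}^K.\npatterns = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])\nnu = np.array([0.3, 0.2, 0.1, 0.4])\n\ndef sample_one():\n    alpha = patterns[rng.choice(len(patterns), p=nu)]\n    probs = np.where(alpha[parent][:, None] == 1, theta1, theta0)\n    return np.array([rng.choice(d, p=probs[j]) for j in range(p)])\n\nprint(sample_one())    # one simulated y in {0,...,d-1}^p\n\\end{verbatim}\nOnly the presence or absence of the single parent latent trait enters the conditional distribution of each $y_j$, which is exactly the star-forest restriction.\n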
The only constraint on $\\boldsymbol\\nu = (\\nu_{\\boldsymbol \\alpha})$ is $\\nu_{\\boldsymbol \\alpha}>0$ and $\\sum_{\\boldsymbol \\alpha\\in\\{0,1\\}^K} \\nu_{\\boldsymbol \\alpha}=1$.\nTherefore, we obtain the following probability mass function of the response vector $\\boldsymbol y$ under the commonly adopted local independence assumption (i.e., observed variables are conditionally independent given the latent),\n\\begin{align}\\label{eq-model}\n \\mathbb P(\\boldsymbol y = \\boldsymbol c\\mid \\mathbf G, \\boldsymbol \\theta, \\boldsymbol\\nu)\n =\n \\sum_{\\boldsymbol \\alpha\\in\\{0,1\\}^K} \\nu_{\\boldsymbol \\alpha} \\prod_{j=1}^p \n \n \\prod_{k=1}^{K}\n \\left[\n \\left(\\theta^{(j)}_{c_j\\mid 1}\\right)^{\\alpha_{k}} \n \\cdot\n \\left(\\theta^{(j)}_{c_j\\mid 0}\\right)^{1-\\alpha_{k}}\\right]^{g_{j,k}},\n \n\\end{align}\nwhere $\\boldsymbol c=(c_1,\\ldots,c_p)^\\top\\in\\times_{j=1}^p [d]$ is an arbitrary response pattern.\nThe name \\textit{Binary Latent cliquE Star foreSt} (BLESS) model is suggested as Equation \\eqref{eq-model} does not assume any conditional or marginal independence relations among latent variables a priori, and hence the graph among the latent can be viewed as a ``clique'' a priori in the graph terminology.\n\n\n\n\n\n\nIn real-world applications, the BLESS model can be useful in educational assessments, epidemiological diagnostic tests, and social science surveys, where the presence\/absence of multiple latent characteristics are of interest and there are several observed proxies measuring each of them.\nFor instance, in disease etiology in epidemiology \\citep{wu2017nested}, we can use each $\\alpha_k$ to denote the presence\/absence of a pathogen, and for each pathogen a few noisy diagnostic measures $y_j$'s are observed as the children variables of $\\alpha_k$.\nSee Section \\ref{sec-prac} for two real-world examples.\nIn addition, our BLESS model is interestingly connected to a family of models used in causal discovery and machine learning, the \\emph{pure-measurement} models in \\cite{silva2006latent}. Those are linear models of continuous variables, where the latent variables are connected in an acyclic causal graph; the commonality with the BLESS model is that each observed variable has at most one latent parent. The BLESS model can be thought of as a discrete analogue of such a pure-measurement model in \\cite{silva2006latent}, and indeed more general in terms of the latent dependence structure. This is because we do not constrain the $\\alpha_k$'s to follow a acyclic graph distribution but rather allow them to be arbitrarily dependent. Our identifiability conclusions always hold under this general setup.\n\n\n\n\n\n\n\n\n\n\\subsection{Strict, Generic, and Local Identifiability}\n\nWe first define strict identifiability in the context of the BLESS model. 
\nAll the model parameters are included in the identifiability consideration, including the continuous parameters: the conditional probabilities $\\boldsymbol \\theta = \\left\\{\\theta^{(j)}_{c_j\\mid 0}, \\theta^{(j)}_{c_j\\mid 1}\\right\\}$ and the proportions $\\boldsymbol\\nu$; and the discrete measurement graph structure $\\mathbf G$.\n\n\n\\begin{definition}[Strict Identifiability]\n\\label{def-str}\n\tThe BLESS model is said to be strictly identifiable under certain conditions, if for any valid parameters $(\\mathbf G, \\boldsymbol \\theta, \\boldsymbol\\nu)$ satisfying these conditions and any alternative parameters $(\\overline{\\mathbf G}, \\overline\\boldsymbol \\theta, \\overline\\boldsymbol\\nu)$, the following equality holds if and only if $(\\overline{\\mathbf G}, \\overline\\boldsymbol \\theta, \\overline\\boldsymbol\\nu)$ and $(\\mathbf G, \\boldsymbol \\theta, \\boldsymbol\\nu)$ are identical up to a common permutation of the $K$ latent variables:\n\t\\begin{align}\\label{eq-id}\n\t\t\\mathbb P(\\boldsymbol y = \\boldsymbol c\\mid \\mathbf G, \\boldsymbol \\theta, \\boldsymbol\\nu)\n\t\t=\n\t\t\\mathbb P(\\boldsymbol y = \\boldsymbol c\\mid \\overline{\\mathbf G}, \\overline\\boldsymbol \\theta, \\overline\\boldsymbol\\nu),\n\t\t\\quad\n\t\t\\forall \\boldsymbol c\\in \\times_{j=1}^p [d].\n\t\\end{align}\n\\end{definition}\n\nThe statement of ``identifiable up to latent variable permutation'' in Definition \\ref{def-str} is an inevitable property of any latent variable model.\nWe next define generic identifiability in the context of the BLESS model. Generic identifiability is a concept proposed and popularized by \\cite{allman2009}.\nGiven a graphical matrix $\\mathbf G$ and some valid continuous parameters $(\\boldsymbol \\theta,\\boldsymbol\\nu)$ under the BLESS model,\ndefine the following subset of the parameter space as\n\\begin{align}\\label{eq-ns}\n\\mathcal N^{\\mathbf G} = \n&~\\{(\\boldsymbol \\theta, \\boldsymbol\\nu)\\text{ associated with the graphical matrix } \\mathbf G:\n~~\n\\exists~ (\\overline\\boldsymbol \\theta, \\overline\\boldsymbol\\nu)~\\text{associated}\n\\\\ \\notag\n&~~\\text{with some graphical matrix}~\\overline{\\mathbf G}~\n\\text{such that}~ \\mathbb P(\\boldsymbol y\\mid\\mathbf G, \\boldsymbol \\theta, \\boldsymbol\\nu) = \\mathbb P(\\boldsymbol y\\mid\\overline{\\mathbf G}, \\overline\\boldsymbol \\theta, \\overline\\boldsymbol\\nu)\\}.\n\\end{align}\n\n\\begin{definition}[Generic Identifiability]\\label{def-genid}\n\tUnder a BLESS model, parameters $(\\mathbf G, \\boldsymbol \\theta, \\boldsymbol\\nu)$ are said to be generically identifiable under certain conditions, if for valid parameters $(\\mathbf G, \\boldsymbol \\theta, \\boldsymbol\\nu)$ satisfying these conditions, the set $\\mathcal N^{\\mathbf G}$ defined in \\eqref{eq-ns} has measure zero with respect to the Lebesgue measure on the parameter space of $(\\boldsymbol \\theta,\\boldsymbol\\nu)$ under these conditions.\n\\end{definition}\n\n\nIt is believed that generic identifiability often suffices for data analysis purposes \\citep{allman2009}.\nFinally, we define local identifiability of continuous parameters in the model.\n\n\\begin{definition}[Local Identifiability]\nUnder a BLESS model, a continuous parameter $\\mu$ (e.g., some entry of $\\boldsymbol \\theta$ or $\\boldsymbol\\nu$) is said to be locally identifiable, if there exists an open neighborhood $\\mathcal S$ of $\\mu$ in the parameter space such that there does not exist any alternative parameter $\\overline\\mu\\in\\mathcal S\\setminus \\{\\mu\\}$ leading to the same distribution of the response vector $\\boldsymbol 
y$.\n\\end{definition}\n\n\nThe lack of local identifiability usually has severe consequences in practice, because in an arbitrarily small neighborhood of the true parameter, there exist infinitely many alternative parameters that give rise to the same observed distributions. This would render any estimation and inference conclusions invalid. \n\n\n\n\n\\section{Main Theoretical Results}\\label{sec-main}\n\n\\subsection{Theoretical Results of Generic Identifiability and Their Illustrations}\n\\label{sec-mainsub}\n\nIn this subsection we will present sharp identifiability conditions and the blessing-of-dependence geometry for the BLESS model. The later Section \\ref{sec-overview} will provide an overview of the general algebraic proof technique used to derive the identifiability results.\nThroughout this work, assume $\\nu_{\\boldsymbol \\alpha}>0$ for any latent pattern $\\boldsymbol \\alpha\\in\\{0,1\\}^K$; i.e., all the possible binary latent patterns exist in the population with nonzero proportions.\nThis is the \\emph{only} assumption imposed on the distribution of the latent variables, simply requiring the proportion parameters not to be on the boundary of the probability simplex. \nThere is {no assumption} on whether or how the latent variables should depend on each other in the BLESS model. \n\n\n\nIt may be expected that each latent variable needs to have at least one observed child (i.e., $\\sum_{j=1}^p g_{j,k}\\geq 1$) to ensure identifiability of the BLESS model.\nWhat may not be expected at first is that such a condition is insufficient even for generic identifiability or local identifiability to hold.\nOur first conclusion below shows the condition that each latent variable has at least two observed children is necessary for generic identifiability or local identifiability.\n\n\n\\begin{proposition}[Necessary Condition for Generic Identifiability: $\\geq 2$ children]\n\\label{prop-nece}\nThe following two conclusions hold.\n\\begin{itemize}\n \\item[(a)] If some binary latent variable has only one observed variable as child, then the model parameters are \\textbf{not} generically identifiable and \\textbf{not} locally identifiable.\n \n \\vspace{2mm}\n \\item[(b)] Specifically, suppose $\\alpha_k$ has only one observed $y_j$ as child, then any of the $\\theta^{(j)}_{c\\mid 0}$ and $\\theta^{(j)}_{c\\mid 1}$ for $c\\in[d]$, and $\\nu_{\\boldsymbol \\alpha}$ for $\\boldsymbol \\alpha\\in\\{0,1\\}^K$ can not be generically or locally identifiable. In an arbitrarily small neighborhood of any of these parameters, there exist alternative parameters that lead to the same distribution of the observables as those given by the truth. \n\\end{itemize}\n\\end{proposition}\n\n\n\n\\begin{remark}\\label{rmk-tree}\nProposition \\ref{prop-nece} also has an interesting implication on a seemingly unrelated problem: learning tree models from noisy data. \\cite{nikolakakis2021ising} considered learning hidden tree-structured Ising models, which essentially can be reformulated as the BLESS model where each node in the latent tree has exactly one child (its noisy observed proxy) and the responses are binary, i.e., $\\mathbf G=\\mathbf I_K$ and $d=2$.\n\\cite{nikolakakis2021ising} derived the sample complexity under the assumption that the noise level at each node is homogeneous and known. 
\nProposition \\ref{prop-nece} implies that when the noise level is unknown and potentially heterogeneous, these node-wise noise parameters (analogous to our $\\boldsymbol \\theta$) are not even generically or locally identifiable, no matter what structure the tree graph among the latent nodes has.\n\\end{remark}\n\n\n\nThe conclusion of ``{not even generically identifiable or locally identifiable}'' in Proposition \\ref{prop-nece} has quite severe consequences for parameter interpretation and estimation. \nThere will be a one-dimensional continuum of each of $\\theta^{(j)}_{c\\mid 0}$ and $\\theta^{(j)}_{c\\mid 1}$ for $c\\in[d]$, and $\\nu_{\\boldsymbol \\alpha}$ for $\\boldsymbol \\alpha\\in\\{0,1\\}^K$, that lead to the same probability mass function of the response vector $\\boldsymbol y$.\nAs revealed in part (b) of Proposition \\ref{prop-nece}, the parameter space will have ``flat regions'' where there is no hope of identifiability, and hence any statistical analysis in this scenario will be meaningless.\n\n\nIn Figure \\ref{fig-prop1}, we provide a numerical example to illustrate and corroborate Proposition \\ref{prop-nece}. Consider $p=5$ binary response variables, $K=3$ binary latent variables, and a $5\\times 3$ graphical matrix $\\mathbf G= (1 0 0;~ 0 1 0;~ 0 0 1;~ 0 1 0;~ 0 0 1)$. This $\\mathbf G$ indicates that the first latent variable $\\alpha_1$ only has one observed child $y_1$, violating the necessary condition for generic identifiability in Proposition \\ref{prop-nece}.\nIn the left panel of Figure \\ref{fig-prop1}, the horizontal axis records nine continuous parameters in the model, including one conditional probability $\\theta^{(1)}_{1\\mid 1}$ and the $2^K=8$ proportion parameters for the binary latent patterns; the black solid line represents one set of true parameters, while the 150 colored lines represent those alternative parameters in a neighborhood of the truth constructed based on the proof of Proposition \\ref{prop-nece}.\nTo see the non-identifiability, we calculate the probability mass function of the $5$-dimensional binary response vector $\\boldsymbol y$, which has $2^p = 32$ entries, and plot it under the true and alternative parameters in the right panel of Figure \\ref{fig-prop1}. In particular, the horizontal axis in the plot presents the indices of the response patterns $\\boldsymbol c\\in\\{0,1\\}^5$, and the vertical axis presents the values of $\\mathbb P(\\boldsymbol y = \\boldsymbol c\\mid \\mathbf G,\\boldsymbol \\theta,\\boldsymbol\\nu)$, where the ``$+$'' symbols correspond to response probabilities given by the true parameters and the ``${\\bigcirc}$'' symbols represent those given by the 150 sets of alternative parameters. The marginal response probabilities of the observables given by all the alternative parameters perfectly equal those under the truth.\nThis illustrates the severe consequence of the lack of local identifiability.\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=\\textwidth]{one_attr_nogid2.png}\n \\caption{Illustrating Proposition 1: the severe consequence of the lack of local identifiability. The $\\mathbf G_{5\\times 3} = (1 0 0;~ 0 1 0;~ 0 0 1;~ 0 1 0;~ 0 0 1)$. On the left panel, the black line represents the true set of parameters and each colored line corresponds to an alternative set of parameters. 
On the right panel, the marginal probability mass functions of the observed $\\boldsymbol y \\in \\{0,1\\}^5$ are plotted for all the parameter sets, ``$+$'' for the true set and circles ``${\\bigcirc}$'' for alternative sets.}\n \\label{fig-prop1}\n\\end{figure}\n\n\n\n\n\nSince each latent variable has to have $\\geq 2$ observed children for generic identifiability to possibly hold, we next focus on this scenario.\nOur next result shows that such a condition is sufficient for identifying the discrete structure $\\mathbf G$ of the BLESS model. This result is technically quite nontrivial, and in fact cannot be derived using existing techniques such as Kruskal's Theorem.\n\n\\begin{theorem}[Identifiability of the Latent-to-observed Star Forest $\\mathbf G$]\\label{thm-graph}\nIn the BLESS model,\nif each latent variable has at least two observed variables as children (i.e., $\\sum_{j=1}^p g_{j,k} \\geq 2$), then the latent-to-observed star forest structure $\\mathbf G$ is identifiable.\n\\end{theorem}\n\n\nWe have the following main theorem on the identifiability of all the parameters in the BLESS model, which reveals the ``blessing of dependence'' phenomenon.\n\n\\begin{theorem}[Blessing of Latent Dependence and Generic Identifiability]\\label{thm-main}\nIn the BLESS model,\nsuppose each latent variable has exactly two observed variables as children. \nThe following conclusions hold.\n\n\\begin{itemize}\n \\item[(a)] The model with parameters $(\\mathbf G, \\boldsymbol \\theta, \\boldsymbol\\nu)$ is generically identifiable.\n \n \\item[(b)] Specifically, any valid set of model parameters ($\\boldsymbol \\theta$, $\\boldsymbol\\nu$) is identifiable \\textbf{if and only if} the $K$ latent variables are \\textbf{not} independent according to $\\boldsymbol\\nu$.\n\\end{itemize}\n\\end{theorem}\n\n\n\n\n\\normalfont{We provide a numerical example to illustrate the blessing-of-dependence phenomenon and corroborate Theorem \\ref{thm-main}. Consider the BLESS model with each observed variable having $d=3$ categories and $\\mathbf G=(\\mathbf I_2;\\; \\mathbf I_2)^\\top$.\nWe first randomly generate $M=100$ sets of true parameters of the BLESS model, from which we further generate the observed datasets.\nGiven a fixed sample size $N=10^4$, for each of the $M=100$ parameter sets we further generate $L=200$ independent datasets each with $N$ data points. We then use an EM algorithm (presented as Algorithm 1 in the Supplementary Material) to compute the maximum likelihood estimators (MLEs) of the model parameters for each dataset; here we focus on estimating the continuous parameters $(\\boldsymbol \\theta,\\boldsymbol\\nu)$ with $\\mathbf G$ fixed, because $\\mathbf G$ is guaranteed to be identifiable by Theorem \\ref{thm-graph}.\nTen random initializations are taken for the EM algorithm and we keep the one with the largest log likelihood value as the MLE. After collecting the MLEs, we calculate the Mean Squared Errors (MSEs) of the continuous parameters based on the 200 datasets for each of the 100 true parameter sets.}\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{indep_K2_ov3.png}\n \\hfill\n \\includegraphics[width=0.45\\textwidth]{indep_K2_ov2.png}\n \\caption{Corroborating Theorem \\ref{thm-main}. 
Latent probability simplex $\\boldsymbol \\Delta^{2^2-1}$ for the proportion parameters of $\\boldsymbol \\alpha=(\\alpha_1, \\alpha_2) \\in\\{0,1\\}^2$, where the saddle surface corresponds to the independence model of two latent variables $\\alpha_1\\perp\\!\\!\\!\\perp\\alpha_2$.\n Black balls correspond to those parameter sets which have the largest 20\\% MSEs across the 100 sets, while blue balls correspond to the remaining 80\\% parameter sets.\n MSEs are calculated based on sample size $N=10^4$.}\n \\label{fig-mse}\n\\end{figure}\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=\\textwidth]{indep_noid_band.png}\n \\caption{Illustrating Theorem \\ref{thm-main}. The $\\mathbf G_{6\\times 3} = (\\mathbf I_3;~\\mathbf I_3)$. The true parameters $\\boldsymbol\\nu_{\\text{\\normalfont{true}}}$ falls on the independence surface of $\\alpha_1 \\perp\\!\\!\\!\\perp \\boldsymbol \\alpha_{2:3}$.\n On the right panel, the marginal probability mass functions of the observed $\\boldsymbol y \\in \\{0,1\\}^5$ are plotted for all the parameter sets, ``$+$'' for the true set and circles ``${\\bigcirc}$'' for 150 alternative sets.\n }\n \\label{fig-indep}\n\\end{figure}\n\n\\normalfont{Figure \\ref{fig-mse} plots the 100 sets of values of the proportion parameters $\\boldsymbol\\nu=(\\nu_{\\boldsymbol \\alpha},\\; \\boldsymbol \\alpha\\in\\{0,1\\}^2)$ inside the latent probability simplex $\\Delta^{2^K-1} = \\Delta^3$; such a simplex takes the shape of a polyhedron in three dimensions where $x$-, $y$-, $z$-axes correspond to $\\nu_{00}, \\nu_{01}, \\nu_{11}$, respectively. \nFor reference, we also plot the subset of the $\\Delta^3$ that corresponds to the case of independent $\\alpha_1$ and $\\alpha_2$ inside the simplex.\nSuch a subset is a smooth surface determined by $\\nu_{00}\\nu_{11} - \\nu_{01}\\nu_{10} = 0$, which can be equivalently written as $\\nu_{00}\\nu_{11} - \\nu_{01}(1-\\nu_{00}-\\nu_{11} - \\nu_{01}) = 0$ in terms of the three coordinates $(\\nu_{00}, \\nu_{01}, \\nu_{11})$. This subset takes the shape of a saddle surface embedded in the interior of the polyhedron, as shown in Figure \\ref{fig-mse}.\nEach dataset is plotted as a solid ball inside the latent simplex. In particular, we plot those parameter sets with the largest 20\\% MSEs as black balls and plot the remaining parameters as blue balls. The two views shown in Figure \\ref{fig-mse} clearly show that the black balls are closer to the saddle surface of $\\alpha_1\\perp\\!\\!\\!\\perp\\alpha_2$.\nThe simulation results demonstrate that the independence submodel of the latent variables defines a singular subset well within the \\emph{interior} of the parameter space (rather than on the boundary of it). 
Parameter estimation under the model becomes harder when true parameters are closer to this singular subset of $\\alpha_1\\perp\\!\\!\\!\\perp\\alpha_2$.}\n\n\nWe next provide another example to illustrate the other part of Theorem \\ref{thm-main} -- that when the latent variables are indeed independent, each with two observed children, then the BLESS model parameters are unidentifiable.\nIn Figure \\ref{fig-indep}, we further use the proof of Theorem \\ref{thm-main} to construct such indistinguishable sets of parameters.\nSpecifically, the true proportion parameters $\\boldsymbol\\nu^{\\text{\\normalfont{true}}}$ are constructed such that they imply $\\alpha_1\\perp\\!\\!\\!\\perp (\\alpha_2, \\alpha_3)$; in particular, this means the two subvectors $(\\nu_{000}^{\\text{\\normalfont{true}}}, \\nu_{001}^{\\text{\\normalfont{true}}}, \\nu_{010}^{\\text{\\normalfont{true}}}, \\nu_{011}^{\\text{\\normalfont{true}}})$ and $(\\nu_{100}^{\\text{\\normalfont{true}}}, \\nu_{101}^{\\text{\\normalfont{true}}}, \\nu_{110}^{\\text{\\normalfont{true}}}, \\nu_{111}^{\\text{\\normalfont{true}}})$ are linearly dependent; the true parameters are plotted with black ``$+$''s on the left panel of Figure \\ref{fig-indep}. We then follow the proof of Theorem \\ref{thm-main} to construct 150 alternative sets of parameters and plot each set as a colored line with circles. Then, similarly to Figure \\ref{fig-prop1}, we calculate the marginal response probabilities of the observed vector $\\boldsymbol y$ under the true and all the alternative parameter sets, respectively, and plot them in the right panel of Figure \\ref{fig-indep}. These $d^p$-dimensional marginal probability vectors are exactly equal under all the parameter sets, confirming the nonidentifiability.\nCombining Figure \\ref{fig-mse} and Figure \\ref{fig-indep}, we have demonstrated that in the boundary case where each latent variable has two observed children, the parameters are generically identifiable and become nonidentifiable when the latent variables are independent.\nThese observations corroborate Theorem \\ref{thm-main}.\n\n\n\nWe next present a more nuanced statement about identifiability implied by the proof of the main result Theorem \\ref{thm-main}.\nIn the BLESS model, denote by $\\text{\\normalfont{Child}}(\\alpha_k) \\;\\big |\\; \\alpha_k$ the conditional distribution of all the child variables of $\\alpha_k$ given $\\alpha_k$, where $\\text{\\normalfont{Child}}(\\alpha_k) = \\{y_j:\\; g_{j,k}=1\\}$ denotes the set of children of $\\alpha_k$.\nSpecifically, the parameters associated with $\\text{\\normalfont{Child}}(\\alpha_k) \\;\\big |\\; \\alpha_k$ are the following conditional probabilities:\n\\begin{align}\\label{eq-cond}\n \\left\\{\\boldsymbol \\theta^{(j)}:\\; y_j\\in\\text{Child}(\\alpha_k)\\right\\}\n = \\left\\{\\theta^{(j)}_{1:d\\mid 0},\\; \\theta^{(j)}_{1:d\\mid 1}:\\; g_{j,k}=1\\right\\}.\n\\end{align}\nOur proof of Theorem \\ref{thm-main} gives the following fine-grained identifiability arguments regarding each latent variable.\n\n\n\n\\begin{corollary}[Blessing of Latent Dependence for Each Latent Variable]\\label{cor-fine}\nFor each latent variable $\\alpha_k$ that has exactly two observed variables as children, the following two statements are equivalent, in that (S1) is true if and only if (S2) is true.\n\\begin{itemize}\n \\item[(S1)] \n $\\alpha_k \\not\\!\\perp\\!\\!\\!\\perp (\\alpha_1,\\ldots,\\alpha_{k-1}, \\alpha_{k+1}, \\ldots, \\alpha_K)$ holds;\n \n \\item[(S2)] the parameters associated with the conditional distributions 
$\\text{\\normalfont{Child}}(\\alpha_k) \\;\\big |\\; \\alpha_k$ defined in \\eqref{eq-cond} are identifiable.\n\\end{itemize} \n\n\\end{corollary}\n\n\n\n\\begin{remark}\nWe would like to emphasize that it is impossible to obtain the conclusions of Theorem \\ref{thm-graph}, Theorem \\ref{thm-main}, and Corollary \\ref{cor-fine} by applying Kruskal's theorem.\nIn fact, the observed probability tensor $\\boldsymbol \\Pi = (\\pi_{c_1,\\ldots,c_p})$ in the minimal generically identifiable case cannot be concatenated in any way as in \\cite{allman2009} to satisfy Kruskal's rank conditions for unique three-way tensor decompositions.\nThe proofs of Theorem \\ref{thm-graph} and Theorem \\ref{thm-main} are in fact quite nontrivial.\nIn Section \\ref{sec-overview} we will give an overview of the general algebraic technique used to prove these results.\n\\end{remark}\n\n\n\\begin{figure}[h!]\n\\centering\n\\begin{minipage}[c]{0.47\\textwidth}\\centering\n\\resizebox{0.8\\textwidth}{!}{\n\\begin{tikzpicture}\n\\def5 {5}\n\\def2.8cm {1.4cm}\n\n\\foreach \\s in {1,...,5}\n{\n\\node (\\s)[draw, circle, minimum size=20pt, inner sep=0pt] at ({90+360\/5 * (\\s - 1)}:2.8cm) {$\\alpha_{\\s}$};\n}\n\\def 11 {10}\n\\def 2.8cm {2.8cm}\n\n\\foreach \\ss in {1,...,11}\n{\n\\node (y\\ss)[draw, circle, minimum size=20pt, inner sep=0pt, fill=black!10] at ({72+360\/11 * (\\ss - 1)}:2.8cm) {$y_{\\ss}$};\n}\n\n\\draw[dotted, thick] (1) -- (2);\n\\draw[dotted, thick] (1) -- (3);\n\\draw[dotted, thick] (1) -- (4);\n\\draw[dotted, thick] (1) -- (5);\n\\draw[dotted, thick] (2) -- (3);\n\\draw[dotted, thick] (2) -- (4);\n\\draw[dotted, thick] (2) -- (5);\n\\draw[dotted, thick] (3) -- (4);\n\\draw[dotted, thick] (3) -- (5);\n\\draw[dotted, thick] (4) -- (5);\n\n\\draw[->, thick] (1) -- (y1);\n\\draw[->, thick] (1) -- (y2);\n\\draw[->, thick] (2) -- (y3);\n\\draw[->, thick] (2) -- (y4);\n\\draw[->, thick] (3) -- (y5);\n\\draw[->, thick] (3) -- (y6);\n\\draw[->, thick] (4) -- (y7);\n\\draw[->, thick] (4) -- (y8);\n\\draw[->, thick] (5) -- (y9);\n\\draw[->, thick] (5) -- (y10);\n\\end{tikzpicture}}\n\n(a) CPTs for $\\text{\\normalfont{Child}}(\\alpha_1) \\mid \\alpha_1$ identifiable,\\\\\nthanks to blessing of dependence\n\\end{minipage}\n\\hfill\n\\begin{minipage}[c]{0.47\\textwidth}\\centering\n\\resizebox{0.8\\textwidth}{!}{\n\\begin{tikzpicture}\n\\def 5 {5}\n\\def 2.8cm {1.4cm}\n\n\\foreach \\s in {1,...,5}\n{\n\\node (\\s)[draw, circle, minimum size=20pt, inner sep=0pt] at ({90+360\/5 * (\\s - 1)}:2.8cm) {$\\alpha_{\\s}$};\n}\n\\def 11 {10}\n\\def 2.8cm {2.8cm}\n\n\\foreach \\ss in {1,...,11}\n{\n\\node (y\\ss)[draw, circle, minimum size=20pt, inner sep=0pt, fill=black!10] at ({72+360\/11 * (\\ss - 1)}:2.8cm) {$y_{\\ss}$};\n}\n\\draw[dotted, thick] (2) -- (3);\n\\draw[dotted, thick] (2) -- (4);\n\\draw[dotted, thick] (2) -- (5);\n\\draw[dotted, thick] (3) -- (4);\n\\draw[dotted, thick] (3) -- (5);\n\\draw[dotted, thick] (4) -- (5);\n\n\\draw[->, thick, dashed] (1) -- (y1);\n\\draw[->, thick, dashed] (1) -- (y2);\n\\draw[->, thick] (2) -- (y3);\n\\draw[->, thick] (2) -- (y4);\n\\draw[->, thick] (3) -- (y5);\n\\draw[->, thick] (3) -- (y6);\n\\draw[->, thick] (4) -- (y7);\n\\draw[->, thick] (4) -- (y8);\n\\draw[->, thick] (5) -- (y9);\n\\draw[->, thick] (5) -- (y10);\n\\end{tikzpicture}}\n\n(b) CPTs for $\\text{\\normalfont{Child}}(\\alpha_1) \\mid \\alpha_1$ nonidentifiable, due to lack of dependence of $\\alpha_1$ and $\\boldsymbol 
\\alpha_{2:5}$\n\\end{minipage}\n\n\\bigskip\n\\begin{minipage}[c]{0.47\\textwidth}\\centering\n\\resizebox{0.8\\textwidth}{!}{\n\\begin{tikzpicture}\n\\def 5 {5}\n\\def 2.8cm {1.4cm}\n\n\\foreach \\s in {1,...,5}\n{\n\\node (\\s)[draw, circle, minimum size=20pt, inner sep=0pt] at ({90+360\/5 * (\\s - 1)}:2.8cm) {$\\alpha_{\\s}$};\n}\n\\def 11 {11}\n\\def 2.8cm {2.8cm}\n\n\\foreach \\ss in {4,...,11}\n{\n\\node (y\\ss)[observed] at ({360\/10 * (\\ss)}:2.8cm) {$y_{\\ss}$};\n}\n\n\n\\node (y1) [observed, above left = 1 cm of 1] {$y_1$};\n\\node (y2) [observed, above = 1 cm of 1] {$y_2$};\n\\node (y3) [observed, above right = 1 cm of 1] {$y_3$};\n\n\n\n\\draw[dotted, thick] (2) -- (3);\n\\draw[dotted, thick] (2) -- (4);\n\\draw[dotted, thick] (2) -- (5);\n\\draw[dotted, thick] (3) -- (4);\n\\draw[dotted, thick] (3) -- (5);\n\\draw[dotted, thick] (4) -- (5);\n\n\\draw[->, thick] (1) -- (y1);\n\\draw[->, thick] (1) -- (y2);\n\\draw[->, thick] (1) -- (y3);\n\n\\draw[->, thick] (2) -- (y4);\n\\draw[->, thick] (2) -- (y5);\n\\draw[->, thick] (3) -- (y6);\n\\draw[->, thick] (3) -- (y7);\n\\draw[->, thick] (4) -- (y8);\n\\draw[->, thick] (4) -- (y9);\n\\draw[->, thick] (5) -- (y10);\n\\draw[->, thick] (5) -- (y11);\n\\end{tikzpicture}}\n\n(c) CPTs for $\\text{\\normalfont{Child}}(\\alpha_1) \\mid \\alpha_1$ identifiable\n\\end{minipage}\n\n\n\\caption\nCPTs refer to Conditional Probability Tables.\nAll nodes are discrete random variables, with $\\alpha_k\\in\\{0,1\\}$ latent and $y_j\\in\\{1,\\ldots,d\\}$ observed. The parameters corresponding to the dashed directed edges in (b) are unidentifiable, because $\\alpha_1$ is indepedent of $\\boldsymbol \\alpha_{2:5} = (\\alpha_2, \\alpha_3, \\alpha_4, \\alpha_5)$.\n}\n\\label{fig-graph}\n\\end{figure}\n\n\n\n\n\nIt is useful to adopt a graphical perspective to our identifiability results of the BLESS model.\nFigure \\ref{fig-graph}(a)--(b) provide graphical illustrations of generic identifiability conclusions and the blessing of dependence phenomenon.\nWith $K=5$ latent variables each having two observed variables as children (i.e., $\\mathbf G = (\\mathbf I_K;\\; \\mathbf I_K)^\\top$), the parameters corresponding to Figure \\ref{fig-graph}(a) are identifiable due to the dependence indicated by the dotted edges between $\\alpha_1,\\ldots,\\alpha_5$; \nwhile the parameters corresponding to Figure \\ref{fig-graph}(b) are not identifiable due to the lack of dependence between $\\alpha_1$ and $\\boldsymbol \\alpha_{-1}:=(\\alpha_2,\\ldots,\\alpha_5)$.\nSuch identifiability arguments guaranteed by Corollary \\ref{cor-fine} are of a very fine-grained nature, revealing that the dependence between a specific latent variable and the remaining ones exactly determines the identifiability of the conditional probability tables given this particular latent variable.\n\nThe identifiability results in Theorem \\ref{thm-graph} and Theorem \\ref{thm-main} yield the following observations. \nFirst, a notable fact is that our proof reveals that the blessing of latent dependence always holds regardless of the sign of the dependence. 
Either positive dependence or negative dependence helps deliver model identifiability.\nSecond, the easiest scenario for the star forest structure $\\mathbf G$ to be identifiable seems to be the hardest one for the continuous parameters to be identifiable.\nTo see this, consider the extreme case where all the $K$ latent variables are perfectly dependent\\footnote{Note that in this extreme case, many latent patterns $\\boldsymbol \\alpha$ will have population proportion zero, and hence our only assumption on $\\boldsymbol\\nu$ is violated. Therefore the fact that $\\mathbf G$ is unidentifiable in this extreme case does not contradict our result on the identifiability of $\\mathbf G$ in Theorem \\ref{thm-graph}.}; then the star forest structure cannot be recovered, because it is impossible to tell apart which observed variables are children of which latent ones. \nGenerally, the more independent the latent variables are, the easier it should be to identify the measurement graph.\nOn the other hand, however, according to our conclusions in Theorem \\ref{thm-main} and Corollary \\ref{cor-fine}, having the latent variables independent is the hardest, and in fact impossible, scenario for the continuous parameters $\\boldsymbol \\theta$ and $\\boldsymbol\\nu$ to be identifiable when each $\\alpha_k$ has two children.\nThis perhaps counterintuitive phenomenon shows the complexity and surprising geometry of discrete graphical models with latent variables.\n\n\n\n\n\nInterestingly, the case of each latent variable having two children forms the exact boundary for the blessing of dependence to play a role.\nIn fact, as long as each latent variable has at least three observed variables as children, Kruskal's theorem \\citep{kruskal1977three} on the uniqueness of three-way tensor decompositions can ``kick in'' to guarantee identifiability. In particular, we can use an argument similar to that in \\cite{allman2009} to establish this conclusion, by concatenating certain observed variables into groups and transforming the underlying $p$-way probability tensor into a three-way tensor. The following proposition formalizes this statement.\n\n\\begin{proposition}[Kruskal's Theorem Kicks in for the $\\geq 3$ Children Case]\\label{prop-3chi}\nConsider model \\eqref{eq-model} with \\eqref{eq-flip} satisfied.\nIf each latent variable has three or more observed variables as children (i.e., $\\sum_{j=1}^p g_{j,k} \\geq 3$), then the model is always strictly identifiable, regardless of the dependence between latent variables.\n\\end{proposition}\n\n\n\n\\begin{example}\\label{exp-g73}\nConsider the BLESS model with $K=3$, $p=7$ and the following $7\\times 3$ graphical matrix\n\\begin{align*}\n \\mathbf G=\\begin{pmatrix}\n & \\mathbf I_3 & \\\\\\\\\n & \\mathbf I_3 & \\\\\\\\\n 1 & 0 & 0\n \\end{pmatrix}.\n\\end{align*}\nThen by Theorem \\ref{thm-graph}, the $\\mathbf G$ matrix itself is identifiable from the joint distribution of the observed variables. And by Theorem \\ref{thm-main}, the continuous parameters are generically identifiable. \nFurther, since $\\alpha_1$ has three children with $\\text{\\normalfont{Child}}(\\alpha_1) = \\{y_1, y_4, y_7\\}$, by Proposition \\ref{prop-3chi}, the conditional probability tables $\\boldsymbol \\theta^{(1)}, \\boldsymbol \\theta^{(4)}, \\boldsymbol \\theta^{(7)}$ are strictly identifiable regardless of the dependence between the variable $\\alpha_1$ and the other latent variables $(\\alpha_2, \\alpha_3)$. 
\nBy Corollary \\ref{cor-fine}, since $\\alpha_2$ has two children $\\text{\\normalfont{Child}}(\\alpha_2) = \\{y_2, y_5\\}$, the $\\boldsymbol \\theta^{(2)}, \\boldsymbol \\theta^{(5)}$ are identifiable if and only if $\\alpha_2 \\not\\! \\perp\\!\\!\\!\\perp (\\alpha_1, \\alpha_3)$. Similarly, parameters $\\boldsymbol \\theta^{(3)}, \\boldsymbol \\theta^{(6)}$ are identifiable if and only if $\\alpha_3 \\not\\! \\perp\\!\\!\\!\\perp (\\alpha_1, \\alpha_2)$.\n\\end{example}\n\nSummarizing all the above conclusions in this section, we have the following conclusions. \n\n\n\\begin{corollary}\\label{cor-ns} Consider the BLESS model. The following statements hold.\n\n\\begin{itemize}\n\\item[(a)]\nThe condition that each binary latent variable has $\\geq 2$ observed variables as children is \\textbf{necessary and sufficient} for the generic identifiability of the model parameters.\n\\item[(b)]\nThe condition that each binary latent variable has $\\geq 3$ observed variables as children is \\textbf{necessary and sufficient} for the strict identifiability of the model parameters.\n\\end{itemize}\n\\end{corollary}\n\n\n\nCorollary \\ref{cor-ns} describes the minimal conditions for strict identifiability and those for generic identifiability of the BLESS model, respectively.\nThe conclusions in Corollary \\ref{cor-ns} are immediate consequences of Theorem \\ref{thm-main} and Proposition \\ref{prop-3chi}.\nIt is worth noting that both the minimal conditions for strict identifiability and those for generic identifiability only concern the discrete structure in the model -- the measurement graph $\\mathbf G$, but not on the specific values of the continuous parameters $\\boldsymbol \\theta$ or $\\boldsymbol\\nu$.\nTherefore, these identifiability conditions as graphical criteria are easily checkable in practice.\n\n\n\nThe blessing of dependence phenomenon when each latent variable has two children has nontrivial connections to and implications on the uniqueness of matrix and tensor decompositions. \n\\emph{On one hand}, if each latent variable has two children with $p=2K$ and $y_k$, $y_{K+k}$ are children of $\\alpha_k$ for each $k\\in[K]$, then we can group the first $y_1,\\ldots, y_K$ and define a surrogate categorical variable $Z_1 = (y_1,\\ldots, y_K) \\in [d]^K$ with $d^K$ latent states, and similarly group the $y_{K+1},\\ldots,y_{2K}$ to define $Z_2 \\in [d]^K$. The joint contingency table of $(Z_1, Z_2)$ can then be expressed as a two-way table of size $d^K \\times d^K$, where each entry in the table corresponds to the probability of a response pattern of the original vector $\\boldsymbol y$ and all these probabilities sum up to one.\nThis setting can be considered as a reduced rank model for two-way contingency table (matrix) \\citep{de1991reduced}, where the rank of the matrix is $|\\{0,1\\}^K|=2^K$, equal to the number of states the latent vector $\\boldsymbol \\alpha$ can take. \nIt is well-known that such a matrix factorization generally can not be unique. \n\\emph{On the other hand}, if each latent variable has three children with $p=3K$ and $y_k$, $y_{K+k}$, $y_{2K+k}$ being children of $\\alpha_k$, then we can define $Z_1=(y_1,\\ldots,y_K)$, $Z_2=(y_{K+1},\\ldots,y_{2K})$, and $Z_3=(y_{2K+1},\\ldots,y_{3K})$. The joint contingency table of $Z_1, Z_2, Z_3$ is a three-way tensor. Due to the conditional independence of $Z_1, Z_2, Z_3$ given the latent $\\boldsymbol \\alpha$, such a tensor has a CP decomposition \\citep{koldabader2009} of rank $2^K$. 
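\n\nAs a quick numerical illustration of this grouping construction (a sketch of our own; the sizes and the randomly drawn parameter values are illustrative assumptions), the code below builds the joint pmf of a BLESS model in which every latent variable has three observed children, reshapes it into the three-way contingency tensor of $(Z_1,Z_2,Z_3)$, and checks a simple consequence of the rank-$2^K$ CP structure.\n\\begin{verbatim}\nimport numpy as np\nfrom functools import reduce\n\n# Three children per latent variable: p = 3K, with y_k, y_{K+k}, y_{2K+k}\n# being the children of alpha_k.  Sizes and parameters are illustrative.\nrng = np.random.default_rng(3)\nK, d = 2, 3\np = 3 * K\nparent = [0, 1, 0, 1, 0, 1]                  # latent parent of y_1,...,y_6\n\npatterns = [(a, b) for a in (0, 1) for b in (0, 1)]       # {0,1}^K\nnu = rng.dirichlet(np.ones(2 ** K))          # arbitrary latent distribution\ntheta1 = rng.dirichlet(np.ones(d), size=p)   # P(y_j = . | parent = 1)\ntheta0 = rng.dirichlet(np.ones(d), size=p)   # P(y_j = . | parent = 0)\n\n# Joint pmf over (y_1, ..., y_6), accumulated pattern by pattern.\nPi = np.zeros((d,) * p)\nfor nu_a, alpha in zip(nu, patterns):\n    cols = [theta1[j] if alpha[parent[j]] == 1 else theta0[j]\n            for j in range(p)]\n    Pi += nu_a * reduce(np.multiply.outer, cols)\n\n# Group Z1 = (y_1, y_2), Z2 = (y_3, y_4), Z3 = (y_5, y_6): a three-way\n# d^K x d^K x d^K tensor admitting a CP decomposition of rank <= 2^K.\nT = Pi.reshape(d ** K, d ** K, d ** K)\n# One easily checked consequence: the mode-1 unfolding (9 x 81 here)\n# has matrix rank at most 2^K = 4, far below its number of rows.\nprint(np.linalg.matrix_rank(T.reshape(d ** K, -1)), 2 ** K)\n\\end{verbatim}\n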
By Kruskal's Theorem, this three-way decomposition is identifiable under mild conditions.\nOur results reveal that between the well-known unidentifiable matrix factorization and the well-known identifiable tensor decomposition, there is a special middle ground where the dependence between multiple binary latent variables helps restore identifiability.\n\n\n\n\n\n\\subsection{Overview of the Proof Technique and Its Usefulness}\\label{sec-overview}\nIn this subsection we provide an overview of the general proof technique used to derive the identifiability results.\nFor ease of understanding, we describe the technique in the context of multidimensional binary latent variables; we will later explain that it is generally applicable to discrete models with latent and graphical components.\nWith $K$ binary latent variables, define the binary vector representations of the integers $0,1,\\ldots,2^K-1$ by $\\boldsymbol \\alpha_1,\\boldsymbol \\alpha_2,\\ldots,\\boldsymbol \\alpha_{2^K}$; that is, for the $K$-dimensional vector $\\boldsymbol v=(2^{K-1}, 2^{K-2}, \\cdots, 2^0)^\\top$ we have\n$$\\boldsymbol \\alpha_{\\ell}^\\top \\boldsymbol v = \\ell-1,\\quad \\ell=1,2,\\ldots,2^K.$$\nEach $\\boldsymbol \\alpha_\\ell$ represents a binary latent pattern describing the presence or absence of the $K$ latent variables, and $\\{\\boldsymbol \\alpha_1,\\ldots,\\boldsymbol \\alpha_{2^K}\\} = \\{0,1\\}^K$.\nWith $p$ discrete observed variables $y_1,\\ldots,y_p$, generally denote the conditional distribution of each $y_j$ given latent pattern $\\boldsymbol \\alpha_\\ell$ by \n$$\n\\theta_{c\\mid \\boldsymbol \\alpha_\\ell}^{(j)} = \\mathbb P(y_j=c\\mid \\boldsymbol a = \\boldsymbol \\alpha_\\ell), \\quad j\\in[p],~ c\\in[d],~\\ell\\in[2^K].\n$$\nNote that under the BLESS model, the $\\theta_{c\\mid \\boldsymbol \\alpha_\\ell}^{(j)}$ is a reparametrization of the probabilities $\\theta_{c\\mid 1}^{(j)}$ and $\\theta_{c\\mid 0}^{(j)}$.\nAccording to the star-forest measurement graph structure, whether $\\theta_{c\\mid \\boldsymbol \\alpha_\\ell}^{(j)}$ equals $\\theta_{c\\mid 1}^{(j)}$ or $\\theta_{c\\mid 0}^{(j)}$ depends only on whether or not the pattern $\\boldsymbol \\alpha_\\ell$ possesses the latent parent of $y_j$.\nMathematically, since the vector $\\boldsymbol g_j$ summarizes the parent variable information of $y_j$, we have that\n\\begin{align}\\label{eq-thetaeq}\n \\theta_{c\\mid \\boldsymbol \\alpha_\\ell}^{(j)} =\n \\begin{cases}\n \\theta_{c\\mid 1}^{(j)}, & \\text{if } \\alpha_{\\ell,k}=1 \\text{ for the $k$ where } g_{j,k}=1;\\\\\\\\[2mm]\n \\theta_{c\\mid 0}^{(j)}, & \\text{if } \\alpha_{\\ell,k}=0 \\text{ for the $k$ where } g_{j,k}=1.\n \\end{cases}\n\\end{align}\nIn the above expression, $\\alpha_{\\ell,k}$ denotes the $k$th entry of the binary pattern $\\boldsymbol \\alpha_\\ell$.\nFor each observed variable index $j\\in[p]$, define a $d\\times 2^K$ matrix $\\boldsymbol \\Phi^{(j)}$ as\n\\begin{align*}\n \\boldsymbol \\Phi^{(j)} \n &= \n \\begin{pmatrix}\n \\mathbb P(y_j=1\\mid \\boldsymbol a = \\boldsymbol \\alpha_1) &~ \\cdots &~ \\mathbb P(y_j=1\\mid \\boldsymbol a = \\boldsymbol \\alpha_{2^K}) \\\\\\\\[2mm]\n \\vdots &~ \\vdots &~ \\vdots \\\\\\\\[2mm]\n \\mathbb P(y_j=d\\mid \\boldsymbol a = \\boldsymbol \\alpha_1) &~ \\cdots &~ \\mathbb P(y_j=d\\mid \\boldsymbol a = \\boldsymbol \\alpha_{2^K})\n \\end{pmatrix}\\\\\\\\[2mm]\n &=\n \\begin{pmatrix}\n \\theta^{(j)}_{1\\mid \\boldsymbol \\alpha_1} &~ \\cdots &~~ \\theta^{(j)}_{1\\mid \\boldsymbol \\alpha_{2^K}} 
\\\\[2mm]\n \\vdots &~ \\vdots &~~ \\vdots \\\\[2mm]\n \\theta^{(j)}_{d\\mid \\boldsymbol \\alpha_1} &~ \\cdots &~~ \\theta^{(j)}_{d\\mid \\boldsymbol \\alpha_{2^K}}\n \\end{pmatrix},\n\\end{align*}\nthen $\\boldsymbol \\Phi^{(j)}$ is the conditional probability table of variable $y_j$ given $2^K$ latent patterns. Each column of $\\boldsymbol \\Phi^{(j)}$ is indexed by a pattern $\\boldsymbol \\alpha_\\ell$ and gives the conditional distribution of variable $y_j$ given the latent pattern $\\boldsymbol \\alpha_\\ell$.\nNote that many entries in $\\boldsymbol \\Phi^{(j)}$ are equal due to \\eqref{eq-thetaeq}; we deliberately choose this overparameterized matrix notation to facilitate further tensor algebra. The equality of the many parameters in each $\\boldsymbol \\Phi^{(j)}$ will later be carefully exploited when examining identifiability conditions.\n\n\n\nDenote by $\\bigotimes$ the Kronecker product of matrices and denote by $\\bigodot$ the Khatri-Rao product \\citep[][]{koldabader2009}. The Khatri-Rao product is a column-wise Kronecker product of matrices, and\nfor two matrices with the same number of columns $\\mathbf A=(a_{i,j})=(\\boldsymbol a_{\\boldsymbol{:},1}\\mid\\cdots\\mid\\boldsymbol a_{\\boldsymbol{:},k})\\in\\mathbb R^{n\\times k}$,\n$\\mathbf B=(b_{i,j})=(\\boldsymbol b_{\\boldsymbol{:},1}\\mid\\cdots\\mid\\boldsymbol b_{\\boldsymbol{:},k})\\in\\mathbb R^{\\ell\\times k}$, their Khatri-Rao product\n$\\mathbf A\\bigodot \\mathbf B \\in\\mathbb R^{n \\ell\\times k}$ still has the same number of columns and can be written as\n\\begin{align*}\n\t\\mathbf A\\bigodot \\mathbf B\n\t=\n\t\\begin{pmatrix}\n\t\t\\boldsymbol a_{\\boldsymbol{:},1}\\bigotimes\\boldsymbol b_{\\boldsymbol{:},1}\n\t\t~\\mid~ \\cdots ~\\mid~\n\t\t\\boldsymbol a_{\\boldsymbol{:},k}\\bigotimes\\boldsymbol b_{\\boldsymbol{:},k}\n\t\\end{pmatrix}.\n\\end{align*}\nUnder the considered model, all the $d^p$ marginal response probabilities form a $p$-way tensor $$\\boldsymbol \\Pi=(\\pi_{c_1,\\cdots,c_p}), \\quad c_j\\in[d],$$ \nwhere each entry $\\pi_{c_1,\\cdots,c_p} = \\mathbb P (y_1=c_1,\\ldots,y_p=c_p\\mid \\text{star-forest structure and parameters})$ denotes the marginal probability of observing the response pattern $\\boldsymbol y=\\boldsymbol c$ under the latent variable model.\nWith the above notation, the probability mass function (PMF) of vector $\\boldsymbol y$ under the BLESS model in \\eqref{eq-model} can be equivalently written as\n\\begin{align}\\label{eq-kreq}\n \\text{\\normalfont{vec}}(\\boldsymbol \\Pi)\n = \\Big(\\bigodot_{j=1}^p \\boldsymbol \\Phi^{(j)}\\Big) \\cdot \\boldsymbol\\nu,\n\\end{align}\nwhere $\\text{\\normalfont{vec}}(\\boldsymbol \\Pi)$ denotes the vectorization of the tensor $\\boldsymbol \\Pi$ into a vector of length $d^p$. 
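\n\nTo make the Khatri-Rao representation \\eqref{eq-kreq} concrete, the short Python script below builds the conditional probability tables $\\boldsymbol \\Phi^{(j)}$ of a toy BLESS model and checks numerically that the Khatri-Rao formula reproduces the brute-force joint probability mass function. The script is only an illustrative sketch and is not part of the original analysis: the toy graph, all numerical values, and the helper function \\texttt{khatri\\_rao} are made up for this example.\n\\begin{verbatim}\nimport itertools\nimport numpy as np\n\ndef khatri_rao(A, B):\n    # column-wise Kronecker product of two matrices with equal column number\n    n, k = A.shape\n    m, _ = B.shape\n    return np.einsum('ik,jk->ijk', A, B).reshape(n * m, k)\n\nK, p, d = 2, 4, 2\npatterns = list(itertools.product([0, 1], repeat=K))  # the 2^K latent patterns\nG = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])         # toy star-forest graph\ntheta1 = np.array([0.8, 0.7, 0.75, 0.65])              # P(y_j = 1 | parent = 1)\ntheta0 = np.array([0.3, 0.2, 0.35, 0.25])              # P(y_j = 1 | parent = 0)\nnu = np.array([0.1, 0.2, 0.3, 0.4])                    # joint law of the patterns\n\nPhi = []                                               # d x 2^K tables Phi^(j)\nfor j in range(p):\n    parent = int(np.argmax(G[j]))\n    row1 = np.array([theta1[j] if a[parent] else theta0[j] for a in patterns])\n    Phi.append(np.vstack([row1, 1.0 - row1]))\n\nM = Phi[0]                                             # iterated Khatri-Rao product\nfor j in range(1, p):\n    M = khatri_rao(M, Phi[j])\nvec_pi = M @ nu                                        # right-hand side of eq-kreq\n\njoint = np.zeros((d,) * p)                             # brute-force joint PMF of y\nfor cells in itertools.product(range(d), repeat=p):\n    joint[cells] = sum(nu[l] * np.prod([Phi[j][cells[j], l] for j in range(p)])\n                       for l in range(len(patterns)))\nassert np.allclose(vec_pi, joint.reshape(-1))          # vec(Pi) matches\n\\end{verbatim}\nIn this sketch the Khatri-Rao factors are taken in the order $j=1,\\ldots,p$, so that $c_1$ varies slowest and $c_p$ varies fastest in the vectorization; any consistent ordering of the factors and of $\\text{\\normalfont{vec}}(\\boldsymbol \\Pi)$ works equally well.\n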
The Khatri-Rao product of the $\\boldsymbol \\Phi^{(j)}$ in \\eqref{eq-kreq} results from the basic local independence assumption in \\eqref{eq-model}.\nWe next state a useful technical lemma, which characterizes a fundamental property of transformations of Khatri-Rao products of matrices.\n\n\n\n\\begin{lemma}\\label{lem-poly}\nConsider an arbitrary set of conditional probability tables $\\{\\boldsymbol \\Phi^{(j)}: j\\in[p]\\}$, where $\\boldsymbol \\Phi^{(j)}$ has size $d_j\\times 2^K$ with each column summing to one.\nGiven any set of vectors $\\{{\\boldsymbol \\Delta}_j:\\,{j\\in[p]}\\}$ with $\\boldsymbol \\Delta_j = (\\Delta_{j,1},\\ldots,\\Delta_{j,d_j-1}, 0)^\\top \\in \\mathbb R^{d_j\\times 1}$, \nthere exists a $\\prod_{j=1}^p d_j \\times \\prod_{j=1}^p d_j$ \\textbf{invertible} matrix $\\mathbf B:=\\mathbf B(\\{\\boldsymbol \\Delta_j:\\,{j\\in[p]}\\})$ determined entirely by $\\{\\boldsymbol \\Delta_j:\\,{j\\in[p]}\\}$ such that \n\\begin{align}\\label{eq-algebra}\n\t\\bigodot_{j\\in[p]} \\Big(\\boldsymbol \\Phi^{(j)}-\\boldsymbol \\Delta_j\\boldsymbol\\cdot\\mathbf 1^\\top_{2^K} \\Big)\n\t&= \\mathbf B\\left(\\{\\boldsymbol \\Delta_j:\\,{j\\in[p]}\\}\\right) \\boldsymbol\\cdot \\Big(\\bigodot_{j\\in[p]} \\boldsymbol \\Phi^{(j)}\\Big),\n\\end{align}\nwhere $\\boldsymbol \\Delta_j\\boldsymbol\\cdot\\mathbf 1^\\top_{2^K}$ is a $d_j\\times 2^K$ matrix, of the same dimension as $\\boldsymbol \\Phi^{(j)}$.\n\nIn addition, replacing the index $j\\in[p]$ in \\eqref{eq-algebra} by $j\\in S$ on both sides, where $S$ is an arbitrary subset of $[p]$, leaves the equality valid.\n\\end{lemma}\n\n\n\nLemma \\ref{lem-poly} covers as a special case a result in \\cite{xu2017rlcm} for restricted latent class models with binary responses.\nInstead of exclusively considering moments of binary responses as in \\cite{xu2017rlcm}, our Lemma \\ref{lem-poly} here characterizes a general algebraic property of Khatri-Rao products of conditional probability tables of multivariate categorical data. This property together with the model formulation in \\eqref{eq-kreq} will enable us to apply various transformations to the model parameters to investigate their identifiability. \nWe provide a proof of Lemma \\ref{lem-poly} below, because it is concise and delivers an insight into the usefulness of our technique.\n\n\\begin{proof}[Proof of Lemma \\ref{lem-poly}]\nConsider an arbitrary subset $S\\subseteq[p]$.\nFirst note that the sum of all the entries in each column of $\\boldsymbol \\Phi^{(j)}$ is one because each column vector is a conditional probability distribution of $y_j$ given a particular latent pattern. 
Therefore with $\\boldsymbol \\Delta_j = (\\Delta_{j,1},\\ldots,\\Delta_{j,d-1}, 0)^\\top$, we have\n\\begin{align*}\n \\boldsymbol \\Phi^{(j)}-\\boldsymbol \\Delta_j\\boldsymbol\\cdot\\mathbf 1^\\top_{2^K}\n &= \n \\begin{pmatrix}\n \\theta^{(j)}_{1\\mid \\boldsymbol \\alpha_1}-\\Delta_{j,1} &~ \\cdots &~~ \\theta^{(j)}_{1\\mid \\boldsymbol \\alpha_{2^K}}-\\Delta_{j,1} \\\\[2mm]\n \\vdots &~ \\vdots &~~ \\vdots \\\\[2mm]\n \\theta^{(j)}_{d-1\\mid \\boldsymbol \\alpha_1}-\\Delta_{j,d-1} &~ \\cdots &~~ \\theta^{(j)}_{d-1\\mid \\boldsymbol \\alpha_{2^K}}-\\Delta_{j,d-1} \\\\[4mm]\n \\theta^{(j)}_{d\\mid \\boldsymbol \\alpha_1} &~ \\cdots &~~ \\theta^{(j)}_{d\\mid \\boldsymbol \\alpha_{2^K}}\n \\end{pmatrix}\n \\\\\n &=\n \\begin{pmatrix}\n 1 &~ 0 &~ \\cdots &~ 0 & -\\Delta_{j,1}\\\\\n 0 &~ 1 &~ \\cdots &~ 0 & -\\Delta_{j,2}\\\\\n \\vdots &~ \\vdots &~ \\ddots &~ 0 & \\vdots\\\\\n 0 &~ 0 &~ \\cdots &~ 1 &\\quad -\\Delta_{j,d-1}\\\\\n -1 &~ -1 &~ \\cdots & -1 & 1\n \\end{pmatrix}\n \\boldsymbol\\cdot\n \\begin{pmatrix}\n \\theta^{(j)}_{1\\mid \\boldsymbol \\alpha_1} &~ \\cdots &~~ \\theta^{(j)}_{1\\mid \\boldsymbol \\alpha_{2^K}} \\\\[2mm]\n \\vdots &~ \\vdots &~~ \\vdots \\\\[2mm]\n \\theta^{(j)}_{d-1\\mid \\boldsymbol \\alpha_1} &~ \\cdots &~~ \\theta^{(j)}_{d-1\\mid \\boldsymbol \\alpha_{2^K}} \\\\[4mm]\n 1 &~ \\cdots &~~ 1\n \\end{pmatrix}\n \\\\\n &=\n \\underbrace{\\begin{pmatrix}\n 1 &~ 0 &~ \\cdots &~ 0 & -\\Delta_{j,1}\\\\\n 0 &~ 1 &~ \\cdots &~ 0 & -\\Delta_{j,2}\\\\\n \\vdots &~ \\vdots &~ \\ddots &~ 0 & \\vdots\\\\\n 0 &~ 0 &~ \\cdots &~ 1 &\\quad -\\Delta_{j,d-1}\\\\\n -1 &~ -1 &~ \\cdots & -1 & 1\n \\end{pmatrix}}_{d\\times d\\text{ matrix, denoted by }\\widetilde{\\boldsymbol \\Delta}_j}\n \\boldsymbol\\cdot\n \\underbrace{\\begin{pmatrix}\n 1 &~ 0 &~ \\cdots &~ 0 &~ 0\\\\\n 0 &~ 1 &~ \\cdots &~ 0 &~ 0\\\\\n \\vdots &~ \\vdots &~ \\ddots &~ \\vdots &~ \\vdots \\\\\n 0 &~ 0 &~ \\cdots &~ 1 &~ 0\\\\\n 1 &~ 1 &~ \\cdots &~ 1 &~ 1\n \\end{pmatrix}}_{d\\times d\\text{ matrix, denoted by }\\mathbf C}\n \\boldsymbol\\cdot~ \\boldsymbol \\Phi^{(j)}\\\\\n &=:\n \\widetilde{\\boldsymbol \\Delta}_j \\mathbf C \\boldsymbol \\Phi^{(j)}.\n\\end{align*}\nIt is easy to see that both matrix $\\widetilde{\\boldsymbol \\Delta}_j$ and matrix $\\mathbf C$ have full rank $d$, so their product $\\widetilde{\\boldsymbol \\Delta}_j \\mathbf C$ also has full rank $d$. Then\n\\begin{align*}\n \\bigodot_{j\\in S} \\Big(\\boldsymbol \\Phi^{(j)}-\\boldsymbol \\Delta_j\\boldsymbol\\cdot\\mathbf 1^\\top_{2^K} \\Big) \n &= \n \\bigodot_{j\\in S} \\Big(\\widetilde{\\boldsymbol \\Delta}_j \\mathbf C \\boldsymbol \\Phi^{(j)} \\Big)\\\\\n &=\n \\bigotimes_{j\\in S} (\\widetilde{\\boldsymbol \\Delta}_j \\mathbf C ) \\boldsymbol\\cdot \\bigodot_{j\\in S} \\boldsymbol \\Phi^{(j)},\n\\end{align*}\nwhere the last equality above follows from basic properties of the Kronecker and Khatri-Rao products and can be verified by checking corresponding entries in the products. Now define\n$$\n\\mathbf B\\left(\\{\\boldsymbol \\Delta_j:\\,{j\\in S}\\}\\right) : =\n\\bigotimes_{j\\in S} (\\widetilde{\\boldsymbol \\Delta}_j \\mathbf C ),\n$$\nthen $\\mathbf B\\left(\\{\\boldsymbol \\Delta_j:\\,{j\\in S}\\}\\right)$ is a $d^{|S|} \\times d^{|S|}$ matrix and it is invertible because it is the Kronecker product of $|S|$ invertible matrices $\\widetilde{\\boldsymbol \\Delta}_j \\mathbf C$. This proves Lemma \\ref{lem-poly}. 
\n\\end{proof}\n\nRecall that many entries in $\\boldsymbol \\Phi^{(j)}$ are constrained equal under the graphical matrix $\\mathbf G$; in fact, the $\\boldsymbol \\Phi^{(j)}$ is entirely determined by $\\mathbf G$ and $\\boldsymbol \\theta$ and also the structure of $\\mathbf G$ and $\\boldsymbol \\theta$ can be read off given the $\\boldsymbol \\Phi^{(j)}$.\nNow suppose an alternative graphical matrix $\\bar\\mathbf G \\in\\{0,1\\}^{p\\times K}$ and some associated alternative parameters $(\\bar\\boldsymbol \\theta, \\bar\\boldsymbol\\nu)$ lead to the same distribution as $(\\mathbf G, \\boldsymbol \\theta, \\boldsymbol\\nu)$. \nThen by \\eqref{eq-kreq}, the following system of $d^{|S|}$ equations about the alternative parameters $\\overline{\\boldsymbol \\Phi}$ and $\\overline{\\boldsymbol\\nu}$ must hold for an arbitrary subset $S\\subseteq[p]$,\n\\begin{align*}\n \\Big(\\bigodot_{j\\in S} \\boldsymbol \\Phi^{(j)}\\Big) \\cdot \\boldsymbol\\nu =\n \\Big(\\bigodot_{j\\in S} \\overline{\\boldsymbol \\Phi}^{(j)}\\Big) \\cdot \\overline\\boldsymbol\\nu.\n\\end{align*}\nOur goal is to study under what conditions on the true parameters, the alternative $(\\bar\\mathbf G, \\bar\\boldsymbol \\theta, \\bar\\boldsymbol\\nu)$ must be identical to the true $(\\mathbf G, \\boldsymbol \\theta, \\boldsymbol\\nu)$.\nBy Lemma \\ref{lem-poly}, for arbitrary $\\{\\boldsymbol \\Delta_j\\}$, we have\n\\begin{align}\\notag\n \\Big(\\bigodot_{j\\in S} \\boldsymbol \\Phi^{(j)} -\\boldsymbol \\Delta_j\\boldsymbol\\cdot\\mathbf 1^\\top_{2^K} \\Big) \\cdot \\boldsymbol\\nu \n &= \\mathbf B\\left(\\{\\boldsymbol \\Delta_j:\\,{j\\in S}\\}\\right) \\boldsymbol\\cdot \\Big(\\bigodot_{j\\in S} \\boldsymbol \\Phi^{(j)}\\Big) \\cdot \\boldsymbol\\nu \n \\\\ \\notag\n &= \\mathbf B\\left(\\{\\boldsymbol \\Delta_j:\\,{j\\in S}\\}\\right) \\boldsymbol\\cdot \\Big(\\bigodot_{j\\in S} \\overline{\\boldsymbol \\Phi}^{(j)}\\Big) \\cdot \\overline\\boldsymbol\\nu \\\\ \\label{eq-trans}\n &= \\Big(\\bigodot_{j\\in S} \\overline{\\boldsymbol \\Phi}^{(j)} -\\boldsymbol \\Delta_j\\boldsymbol\\cdot\\mathbf 1^\\top_{2^K} \\Big) \\cdot \\overline{\\boldsymbol\\nu}.\n\\end{align}\nWe next give a high-level idea of our proof procedure.\nEq.~\\eqref{eq-trans} will be frequently invoked for various subsets $S\\subseteq[p]$ when deriving the identifiability results.\nFor example, suppose we want to investigate whether a specific parameter $\\theta^{(j)}_{c\\mid\\boldsymbol \\alpha_\\ell}$ is identifiable under certain conditions.\nExploiting the fact that $\\overline\\mathbf G$ induces many equality constraints on entries of $\\overline{\\boldsymbol \\Phi}^{(j)}$, \nwe will construct a set of vectors $\\{\\boldsymbol \\Delta_j; j\\in S\\}$, which usually has the particular $\\bar\\theta^{(j)}_{c\\mid\\boldsymbol \\alpha_\\ell}$ as an entry. 
These vectors $\\{\\boldsymbol \\Delta_j; j\\in S\\}$ are purposefully constructed such that the right-hand side of Eq.~\\eqref{eq-trans} equals zero for some polynomial equation out of the $\\prod_{j\\in S} d_j$ ones.\nThis implies that a polynomial involving the parameters $(\\mathbf G, \\boldsymbol \\theta, \\boldsymbol\\nu)$ and the constructed vectors $\\{\\boldsymbol \\Delta_j; j\\in S\\}$ is equal to zero.\nWe will then carefully inspect under what conditions this equation gives $\\theta^{(j)}_{c\\mid\\boldsymbol \\alpha_\\ell}$'s identifiability; \nnamely, inspect whether $\\theta^{(j)}_{c\\mid\\boldsymbol \\alpha_\\ell} = \\overline{\\theta}^{(j)}_{c\\mid\\boldsymbol \\alpha_\\ell}$ must hold under the considered conditions. \n\n\n\n\nWe emphasize here that our algebraic technique described above can be generally useful beyond the BLESS model. \nEssentially, our proof technique exploits the following two key model properties.\n\\emph{First}, observed variables are conditionally independent given the (potentially multiple) latent variables. This property makes it possible to write the joint distribution of the observed variables as the product of (a) the Khatri-Rao product of the individual conditional probability tables and (b) the vector of the joint probability mass function of the latent variables.\n\\emph{Second}, there exist rich graphical structures involving the latent variables and observed variables. The graph will induce many equality constraints on the conditional probability table $\\boldsymbol \\Phi^{(j)}$ of each observed variable given the configurations of the latent variables.\nThe first property above about conditional independence is an extremely prevalent assumption adopted in many other latent variable models, and it is often called ``local independence'' in the literature. The second property above about graph-induced constraints is also frequently encountered across various directed and undirected graphical models \\citep{lauritzen1996graphical}.\nBecause of these two facts, we expect our techniques to be generally useful for finding identifiability conditions for other complicated discrete models with multidimensional latent and graphical structures, e.g., discrete Bayesian networks with latent variables with applications to causal inference \\citep{allman2015dag, mealli2016causal}, \nmixed membership models \\citep{erosheva2007aoas}, and overlapping community models for networks \\citep{todeschini2020exchangeable}.\n\n\n\n\\subsection{Discussing Connections to and Differences from Related Works}\n\n\n\n\n\n\n\nIt is worth connecting the BLESS model to discrete Latent Tree Models \\citep[LTMs;][]{choi2011learning, mourad2013survey}, which are popular tools in machine learning and have applications in phylogenetics in evolutionary biology. \nSome deep results about the geometry and statistical properties of LTMs are uncovered in \\cite{zwiernik2012tree}, \\cite{zwiernik2016semialg}, and \\cite{shiers2016gltm}.\nConceptually, the BLESS model is more general than LTMs because in the former, the latent variables can have an entirely flexible and arbitrarily complex dependence structure according to the definition in Eq.~\\eqref{eq-model} (also implied by the word ``clique'' in the name of the BLESS model). Namely, the BLESS model only requires the latent-to-observed graph to be a tree, while the latent graph can be a general clique; in contrast, LTMs require the entire graph among all the latent and observed variables to be a tree. 
\nAs a result, the identifiability and geometry of the BLESS model are more complicated than those of the LTMs.\nGeometry and identifiability of Bayesian networks with hidden variables have also been investigated in\n\\cite{settimi2000geometry}, \\cite{allman2015dag} and \\cite{anandkumar2013bn}. However, these works often either consider a small number of variables or employ certain specific (rather than entirely flexible) assumptions on the dependence of the latent variables.\n\n\n\nNotably, in real-world applications in education and psychology, the aforementioned formulation of arbitrarily dependent latent variables has been widely employed in an emerging family of diagnostic models \\citep[e.g.,][]{chen2015qmat, xu2018q, gu2021jmle, von2019handbook}. This is because the binary latent variables in those applications have semantic meanings such as specific skills or mental disorders, and it is usually unsuitable to restrict the dependence graph between these latent constructs to be a tree. Rather, the latent variables may exhibit quite rich dependencies because of the complicated cognitive processes underlying learning or behaviors.\nBeing able to derive sharp identifiability results without assuming any specific dependence structure among latent variables shows the power of the general algebraic technique we employ in this work.\n\n\n\n\nA generic identifiability statement related to our work appeared in \\cite{gu2021idq} in the form of a small toy example for the aforementioned cognitive diagnostic models. More specifically, these are models where test items are designed to measure the presence\/absence of multiple latent skills and binary item responses of correct\/wrong answers are observed for each subject.\nIn the special case with two binary latent skills each measured by two binary observed variables, \\cite{gu2021idq} proved that the parameters are identifiable if and only if the two latent variables are not independent.\nIn this work, we investigate the fully general case of the BLESS model where there are (a) an arbitrary number of binary latent variables, (b) arbitrary dependence between these variables, and (c) an arbitrary number of categories for the observed variables. Under this general setup, we characterize a complete picture of the generic identifiability phenomenon with respect to the latent dependence in Section \\ref{sec-mainsub}. \n\n\n\n\n\n\n\n\n\\section{Statistical Hypothesis Test of Identifiability in the Boundary Case}\n\\label{sec-test}\nConsider the minimal conditions for generic identifiability of the BLESS model, where certain (all or a subset of) latent variables have only two children. \nIn this boundary scenario, a natural question of interest is how to decide whether the parameters are identifiable or not. To this end, it would be desirable to develop a formal statistical hypothesis test of identifiability. Our identifiability theory of the blessing of dependence indeed provides a basis for such a simple testing approach.\nUnder the star-forest measurement graph in the BLESS model, we have the following proposition.\n\n\n\n\\begin{proposition}\\label{prop-depdep}\nUnder the BLESS model defined in \\eqref{eq-model}, consider two different latent variables $\\alpha_{k_1}$ and $\\alpha_{k_2}$. 
\nThe two groups of observed variables\n$\\{y_j=c_j:\\; g_{j,k_1}=1\\}$ and $\\{y_m=c_m:\\; g_{m,k_2}=1\\}$ \nare independent if and only if $\\alpha_{k_1}$ and $\\alpha_{k_2}$ are independent.\n\\end{proposition}\n\n\nProposition \\ref{prop-depdep} states that under the BLESS model, the dependence\/independence of latent variables is exactly reflected in the dependence\/independence of their observed proxies (i.e., observed children variables). This fact is apparent from the graphical representation of the BLESS model in Figure \\ref{fig-graph}; it can also be formally proved using the model definition in \\eqref{eq-model}.\nA nice implication of Theorem \\ref{thm-main} and Proposition \\ref{prop-depdep} is that,\nwe can test the marginal dependence between certain observed variables to determine model identifiability, before even trying to fit a potentially unidentifiable model to data.\n\n\n\nFormally, in the boundary case (i.e., under minimal conditions for generic identifiability) where some latent variable $\\alpha_k$ only has two observed children, if one wishes to test the following hypothesis\n$$\nH_{0k}:~ \\text{Parameters associated with }~ \\text{\\normalfont{Child}}(\\alpha_k)\\mid\\alpha_k ~\\text{ are not identifiable},\n$$\nthen it is equivalent to testing the hypothesis $H_{0k}':~ \\alpha_k \\perp\\!\\!\\!\\perp \\boldsymbol \\alpha_{-k}$. \nFurther, to test $H_{0k}'$ it suffices to test the marginal independence between the following observed variables,\n$$\nH_{0k}':~ \\text{\\normalfont{Child}}(\\alpha_k) \\perp\\!\\!\\!\\perp \\text{\\normalfont{Child}}(\\boldsymbol \\alpha_{-k}).\n$$\nSince $\\text{\\normalfont{Child}}(\\alpha_k)$ and $\\text{\\normalfont{Child}}(\\boldsymbol \\alpha_{-k})$ are fully observed given the measurement graph, the above hypothesis $H_{0k}'$ can be easily tested.\nNote that $\\text{\\normalfont{Child}}(\\alpha_k)$ can be regarded as a categorical variable with $d^{|\\text{\\normalfont{Child}}(\\alpha_k)|}$ categories and that $\\text{\\normalfont{Child}}(\\boldsymbol \\alpha_{-k})$ can be regarded as another categorical variable with $d^{|\\text{\\normalfont{Child}}(\\boldsymbol \\alpha_{-k})|}$ categories. So the simple $\\chi^2$ test of independence between two categorical variables can be employed for testing $H_{0k}'$.\nIf the null hypothesis of independence is not rejected, then caution is needed in applying the BLESS model because some parameters may not be identifiable.\nIf, however, the hypothesis of independence is rejected, then this is statistical evidence supporting the identifiability of the BLESS model. 
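\n\nIn practice, the $\\chi^2$ testing step described above takes only a few lines of code. The following sketch is purely illustrative and is not part of the original analysis: the data are simulated placeholders, the item grouping is hypothetical, and we simply apply the standard \\texttt{scipy.stats.chi2\\_contingency} routine to the cross-tabulation of the two concatenated categorical variables.\n\\begin{verbatim}\n# Minimal sketch of the chi-square identifiability check (illustrative only).\nimport numpy as np\nfrom scipy.stats import chi2_contingency\n\nrng = np.random.default_rng(0)\nN, d = 500, 2\n# hypothetical grouping: items 0,1 are the two children of alpha_k,\n# items 2,...,5 are children of the remaining latent variables\ngroup_k, group_rest = [0, 1], [2, 3, 4, 5]\nY = rng.integers(0, d, size=(N, 6))   # placeholder data; use real responses here\n\ndef pattern_code(block):\n    # encode each row of d-ary responses as a single categorical label\n    return block @ (d ** np.arange(block.shape[1]))\n\nz1 = pattern_code(Y[:, group_k])      # categorical variable with d^2 levels\nz2 = pattern_code(Y[:, group_rest])   # categorical variable with d^4 levels\ntable = np.zeros((d ** len(group_k), d ** len(group_rest)), dtype=int)\nnp.add.at(table, (z1, z2), 1)         # observed contingency table\n\nchi2, pval, dof, _ = chi2_contingency(table)\nprint(chi2, dof, pval)  # a small p-value is evidence supporting identifiability\n\\end{verbatim}\nThe only design choice worth noting is the encoding of each group of responses into a single categorical label, which realizes the ``concatenated categorical variable'' viewpoint described above.\n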
In this case one can go on to fit the model to data, interpret the estimated parameters, and conduct further statistical analysis.\nIn fact, if $\\text{\\normalfont{Child}}(\\boldsymbol \\alpha_{-k})$ consists of many observed variables, one can start with a small subset $S\\subseteq \\text{\\normalfont{Child}}(\\boldsymbol \\alpha_{-k})$ and testing whether $ \\text{\\normalfont{Child}}(\\alpha_{k}) \\perp\\!\\!\\!\\perp S$; the rejection of this hypothesis would already provide evidence for identifiability of parameters (see Section \\ref{sec-timss} for such an example).\nWe also point out that if multiple latent variables $\\alpha_{k_1},\\ldots,\\alpha_{k_m}$ each has only two observed children, then one can test the $m$ hypotheses simultaneously $\\{H_{0k}':~ \\text{\\normalfont{Child}}(\\alpha_k) \\perp\\!\\!\\!\\perp \\text{\\normalfont{Child}}(\\boldsymbol \\alpha_{-k});~ k=1,\\ldots,m\\}$, and then use the Bonferroni correction to reach the final conclusion about the overall model identifiability. \n\n\n\n\\begin{example}\nContinue to consider Example \\ref{exp-g73} where $\\mathbf G_{7\\times 3}=(\\mathbf I_3; \\; \\mathbf I_3;\\; 1~ 0 ~0)^\\top$. Recall that $\\{\\boldsymbol \\theta^{(2)}, \\boldsymbol \\theta^{(5)}\\}$ are identifiable if and only if $\\alpha_2 \\not\\! \\perp\\!\\!\\!\\perp (\\alpha_1, \\alpha_3)$, and $\\{\\boldsymbol \\theta^{(3)}, \\boldsymbol \\theta^{(6)}\\}$ are identifiable if and only if $\\alpha_3 \\not\\! \\perp\\!\\!\\!\\perp (\\alpha_1, \\alpha_2)$. In order to test the hypothesis\n\\begin{align*}\n H_{01}:~ \\text{Parameters associated with }~ \\text{\\normalfont{Child}}(\\alpha_2)\\mid\\alpha_2 ~~(\\normalfont{\\text{i.e.}}~\\boldsymbol \\theta^{(2)}, \\boldsymbol \\theta^{(5)}) \\text{ are not identifiable},\n\\end{align*}\nit suffices to test $H_{01}':~ \\alpha_2 \\perp\\!\\!\\!\\perp (\\alpha_1, \\alpha_3)$. Because of the form of the $\\mathbf G$ matrix, the test $H_{01}'$ of latent independence can be further reduced to the test of the following hypothesis of the observed variables,\n$$H_{01}'':~ (y_2, y_5) \\perp\\!\\!\\!\\perp (y_1, y_3, y_4, y_6).$$\nSimilarly, in order to test\n\\begin{align*}\n H_{02}:~ \\text{Parameters associated with }~ \\text{\\normalfont{Child}}(\\alpha_3)\\mid\\alpha_3 ~~(\\normalfont{\\text{i.e.}}~\\boldsymbol \\theta^{(3)}, \\boldsymbol \\theta^{(6)}) \\text{ are not identifiable},\n\\end{align*}\nit suffices to test the following hypothesis about the observed variables\n$$H_{02}'':~ (y_3, y_6) \\perp\\!\\!\\!\\perp (y_1, y_2, y_4, y_5).$$\nThe tests of $H_{01}''$ and $H_{02}''$ can be carried out simply by testing the dependence between two concatenated categorical variables, one with $d^2$ categories and the other with $d^4$ categories. \n\\end{example}\n\n\nNote that our hypothesis test of identifiability is performed without fitting the BLESS model to data, and can serve as a first-step sanity check in real data analysis.\nIn a similar spirit but for a different purpose when studying the Gaussian Latent Tree Models, \\cite{shiers2016gltm} proposed to test certain covariance structures of variables to determine the goodness of fit before fitting the model to data. \nTo the author's best knowledge, there has not been previous formal statistical approaches to directly \\emph{testing the identifiability} of multidimensional latent variable models. 
\nOur test of identifiability of the BLESS model is enabled by the discovery of the nontrivial blessing of dependence phenomenon and may inspire future relevant hypothesis testing approaches in other latent variable models.\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Examples of Real-world Applications}\\label{sec-prac}\n\nThis section presents two real-world examples where our new theory can bring new insights potentially -- one in educational assessment and the other in social science surveys.\n\n\\subsection{Educational assessment example}\n\\label{sec-timss}\n\nThe Trends in International Mathematics and Science Study (TIMSS) is a series of international assessments of the mathematics and science knowledge of middle school students around the world.\nTIMSS assesses fourth and eighth grade students and it has been held every four years since 1995 in over 50 countries.\nThe so-called cognitive diagnosis models have been used to analyze a subset of the Austrian TIMSS 2011 data in \\cite{george2015cdm}; this dataset is available in the R package \\texttt{CDM}.\nThe dataset involves fourth grade students' correct\/wrong (binary) responses to a set of TIMSS questions in mathematics.\nAccording to psychometricians, these questions were designed to measure the presence\/absence (binary) statuses of $K=3$ content-based latent skills of students: ($\\alpha_1$) Data, ($\\alpha_2$) Geometry, and ($\\alpha_3$) Numbers. \nEach test question targets exactly one content-based skill, which means the latent-to-observed measurement graph satisfies the assumption of the BLESS model.\nThis original Austrian TIMSS dataset in the \\texttt{CDM} package contains 1010 students' responses to a total number of 47 questions but has many missing data, \nbecause the 47 items were\ndivided up into three booklets and only two\nof the three booklets are presented to each student; such missingness is common to large-scale educational assessments \\citep{george2015cdm}.\nTo avoid dealing with the missing data issue in our example of identifiability considerations, here we focus on the first booklet containing the first $p = 21$ questions, and consider the $N=341$ students who answered all these 21 questions. \nTable \\ref{tab-timss} summarizes the dependence of these 21 questions on the three underlying latent skills, i.e., the $\\mathbf G$ matrix structure in our notation, which is also provided in the R package \\texttt{CDM}.\n\n\n\n \n\n\n \n\n\n\n\\begin{table}[h!]\n \\centering\n \\caption{Educational assessment example of the Austrian TIMSS 2011 data. Latent-to-observed measurement graph structure between the first $p=21$ questions and $K=3$ content-based latent skills, constructed using the information available in the R package \\texttt{CDM}.}\n \\label{tab-timss}\n \n \\begin{tabular}{lll}\n \\toprule\n & Content-based latent skill & Indices of questions that measure the skill\\\\\n \\midrule\n $\\alpha_1$ & Data & 20, 21 \\\\\n $\\alpha_2$ & Geometry & 7, 8, 16, 17, 18, 19 \\\\\n $\\alpha_3$ & Numbers & 1, 2, 3, 4, 5, 6, 9, 10, 11, 12, 13, 14, 15 \\\\\n \\bottomrule\n \\end{tabular}\n \n\\end{table}\n\n\n\nTable \\ref{tab-timss} shows that the first skill ``Data'' is measured by only two questions (questions 20 and 21), hence satisfying the minimal conditions for generic identifiability.\nSo according to our new results, whether the model parameters are identifiable would depend on whether there exists underlying dependence between latent variables.\nWe carry out a hypothesis test of identifiability of the BLESS model. 
\nIn particular, we want to test \n\\begin{align*}\nH_{0,\\text{Data}}:~ &\\text{The latent skill ``Data'' is independent of the two remaining skills ``Geometry'' and}\\\\ \n&\\text{``Numbers''};\n\\end{align*}\nand based on the $\\mathbf G$ matrix structure in Table \\ref{tab-timss}, we can test whether the questions targeting the ``Data'' skill are independent of those targeting the other two skills.\nIn particular, here we consider all two-question combinations consisting of one question targeting ``Geometry'' and one question targeting ``Numbers'', and then test whether this combination of questions is independent of those two ``Data'' questions; namely, we test\n\\begin{align*}\nH_{0,\\text{Data}}^{j_1, j_2}:~ (y_{20}, y_{21}) \\text{~are independent of~} (y_{j_1}, y_{j_2}),\\quad\nj_1 \\text{~targets Geometry,~} j_2 \\text{~targets Numbers}.\n\\end{align*}\nUsing the standard $\\chi^2$ test of independence between two categorical variables, each with $2^{2}=4$ categories, each test statistic under the null hypothesis $H_{0,\\text{Data}}^{j_1, j_2}$ asymptotically follows the $\\chi^2$ distribution with $df = (2^2 - 1) \\cdot (2^2 - 1) = 9$ degrees of freedom. \nOut of the $6\\times 13 = 78$ such test statistics, we found that 73 of them are greater than the 95\\% quantile of the reference distribution, $\\chi^2(df, 0.95) = 16.92$, in which case we reject the null hypothesis of independence between $(y_{20}, y_{21})$ and $(y_{j_1}, y_{j_2})$. We point out that the rejection of any of these tests $H_{0,\\text{Data}}^{j_1, j_2}$ already indicates that one should reject the original null $H_{0,\\text{Data}}$. \nThanks to the blessing of dependence theory we have established, the test results provide statistical evidence to reject the original null hypothesis of non-identifiability, and hence support the identifiability of the model parameters.\nThis provides, for the first time in such applications, a statistical conclusion of identifiability in educational cognitive diagnosis modeling.\n\n\n\n\\subsection{Prevention science survey example}\n\\label{sec-prev}\nAn influential paper in prevention science \\cite{lanza2013} used the latent class model (LCM; with a unidimensional latent variable) to analyse the treatment effects on different latent subgroups, and illustrated the method using a dataset extracted\nfrom the National Longitudinal Survey of Adolescent Health (NLSAH). \nObserved data for each subject are $p=6$ dichotomized characteristics: household poverty; single-parent status;\npeer cigarette use; peer alcohol use; neighborhood unemployment; and neighborhood poverty. \nThese observables actually measure three risks, with the first two measuring ($\\alpha_1$) \\emph{household risk}, the middle two measuring \n($\\alpha_2$) \\emph{peer risk}, and the last two measuring \n($\\alpha_3$) \\emph{neighborhood risk}.\nAccording to the estimated conditional probability tables of the observed variables given the five latent classes, \\cite{lanza2013} interpreted the latent classes as (a) Overall low risk, (b) Peer risk, (c) Household \\& neighborhood (economic) risk, (d) Household \\& peer risk, and (e) Overall high (multicontext) risk. 
\nInterestingly, we note that the analysis in \\cite{lanza2013} lends itself to a reformulation using the BLESS model, and we argue that such a reformulation provides an interpretable graphical modeling alternative to plain latent class analysis.\nSpecifically, if viewing the three underlying risks as three latent variables, then the latent-to-observed measurement graph indeed takes a star-forest shape; see Table \\ref{tab-prev1} for details of the $\\mathbf G$ matrix.\nMore importantly, the aforementioned five latent classes can be nicely formulated as five different binary configurations of the three latent risks, as $(0,0,0)$, $(1,0,0)$, $(1,0,1)$, $(1,1,0)$, and $(1,1,1)$, respectively.\nHere $\\alpha_k=1$ indicates the higher risk group while $\\alpha_k=0$ indicates the lower risk group.\nSee Table \\ref{tab-prev2} for the multidimensional binary configurations of latent classes.\n\n\\begin{table}[h!]\n \\caption{Prevention science survey example reformulated using the BLESS model. Latent-to-observed measurement graph structure $\\mathbf G_{6\\times 3}$.}\n \\label{tab-prev1}\n \n \\centering\n \\resizebox{\\textwidth}{!}{\n \\begin{tabular}{llccc}\n \\toprule\n & \\multirow{3}{*}{Item Content} & & Fine-grained Latent Risks & \\\\\n \\cmidrule(lr){3-5}\n & & $\\alpha_1$ & $\\alpha_2$ & $\\alpha_3$ \\\\\n & & Household risk & Peer risk & Neighborhood risk \\\\\n \\midrule\n 1 & Household poverty & 1 & 0 & 0 \\\\\n 2 & Single-parent status & 1 & 0 & 0\\\\\n 3 & Peer cigarette use & 0 & 1 & 0 \\\\\n 4 & Peer alcohol use & 0 & 1 & 0 \\\\\n 5 & Neighborhood unemployment & 0 & 0 & 1 \\\\\n 6 & Neighborhood poverty & 0 & 0 & 1 \\\\\n \\bottomrule\n \\end{tabular}\n }\n\\end{table}\n\n\n\\begin{table}[h!]\n \\caption{Prevention science survey example reformulated using the BLESS model. 
Five latent classes obtained and explained in \\cite{lanza2013}, and reformulated in the interpretable multidimensional-binary latent variable format.}\n \\label{tab-prev2}\n \n \\centering\n \n \n \n \n \n \n \n \n \n\n\\resizebox{\\textwidth}{!}{\n \\begin{tabular}{llccc}\n \\toprule\n & \\multirow{3}{*}{Latent Class Explanation} & & Fine-grained Latent Risks & \\\\\n \\cmidrule(lr){3-5}\n & & $\\alpha_1$ & $\\alpha_2$ & $\\alpha_3$ \\\\\n & & Household risk & Peer risk & Neighborhood risk \\\\\n \\midrule\n 1 & Overall low risk & 0 & 0 & 0 \\\\\n 2 & Peer risk & 1 & 0 & 0\\\\\n 3 & Household \\& neighborhood risk & 1 & 0 & 1 \\\\\n 4 & Household \\& peer risk & 1 & 1 & 0 \\\\\n 5 & Overall high risk & 1 & 1 & 1 \\\\\n \\bottomrule\n \\end{tabular}\n}\n \n\\end{table}\n\nBecause $\\mathbf G$ shows that each latent risk has exactly two observed children characteristics, this example analysed in \\cite{lanza2013} can be exactly regarded as satisfying the minimal conditions for generic identifiability of the BLESS model.\nAs \\cite{lanza2013} did not include the original dataset that they analyzed which is extracted and sampled from the NLSAH survey, we do not perform the test here but point out the testing procedure is just the same as what we conducted in Section \\ref{sec-timss} for the TIMSS data.\nSpecifically, one could simply test the hypothesis of identifiability by testing the marginal independence of the three groups of binary characteristics falling under the household risk, peer risk, and neighborhood risk, respectively.\nOne plausible conjecture is these three risks are likely interdependent due to the interactions of an adolescent's household, peers, and neighborhood.\nIn such a case, the BLESS model would be identifiable when applied to the survey dataset, and one could use the BLESS model as a more fine-grained and interpretable graphical modeling alternative to plain latent class analysis.\n\n\n\n\n\n\n\n\n\\section{Concluding Remarks}\\label{sec-disc}\nThis work reveals an interesting and highly nontrivial phenomenon, blessing of latent dependence on identifiability, for the BLESS model, a class of discrete statistical models with multiple binary latent variables.\nWe have proved that under the minimal conditions for generic identifiability that each latent variable has two observed children, the model parameters are identifiable if and only if there exists dependence between the latent variables.\nUsing two real-world examples in education and prevention science, we have shown how our sharp identifiability results can be applied in practice and guide the use of interpretable, graphical, and more fine-grained latent variable modeling approaches.\n\nThe blessing of dependence phenomenon between latent variables is perhaps a bit surprising, partly because the independence assumption of latent variables is predominant in many latent variable modeling approaches. For example, in the traditional and popular factor analysis model, the latent factors are often assumed independent with a diagonal covariance matrix \\citep{anderson1956fa}.\nIn practice, however, especially in confirmatory latent variable analysis widely seen in education, psychology, and epidemiology, each latent construct of interest carries a substantive meaning (see the examples in Section \\ref{sec-prac}). 
\nSo it is highly likely that such latent constructs postulated by domain experts are dependent on each other.\nFrom this perspective, our theoretical result in this work provides reassurance that the dependence of latent variables can be a blessing, rather than a curse.\nIn the future, it would be interesting to explore whether similar blessing-of-dependence phenomenon is present in other types of graphical latent variable models.\n\n\nFinally, in a study of the geometry of the simplest discrete latent variable model, the latent class model, and in the special case involving only a total number of $p=2$ observed variables, \\cite{fienberg2009} made the following remark, ``\\textit{The study of higher dimensional tables is still an open area of research. The mathematical machinery required to handle larger dimensions is considerably more complicated}''. \nIndeed, due to the complex nonlinearity of discrete models with latent structures, previous studies about identifiability either draw on Kruskal's Theorem or focus on small number of variables \\citep[e.g.][]{allman2015dag}.\nIn contrast, this work provides a new algebraic technique useful to study the identifiability and geometry of general $p$-dimensional tables.\nThis technique has proved to be more powerful than Kruskal's theorem when applied to the BLESS model considered in this work, and we are able to use it to derive sharp identifiability results which Kruskal's Theorem cannot obtain, in addition to revealing the new geometry.\nUsing the new technique to study other properties (beyond identifiability) of discrete graphical latent variable models and exploring its connection to other algebraic statistical techniques would be an interesting future direction.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section*{Supplementary material}\n\\label{SM}\nThe Supplementary Material contains the proofs of all the theoretical results and the details of the EM algorithms.\n\n\\spacingset{1}\n\\bibliographystyle{apalike}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nFor every natural number $n\\geqslant 1$ the \\textbf{\\emph{Harmonic Number}}, $H_{n}$ is the $n$th partial sum of the harmonic series:\\begin{equation}\n\\fbox{$\\displaystyle H_{n}:=1+\\frac{1}{2}+\\frac{1}{3}+\\cdots+\\frac{1}{n}.$}\n\\end{equation}\n\nAlthough the asymptotics of $H_{n}$ were determined by \\textsc{Euler}, (see \\cite{K}), in his famous formula: \\begin{equation}\n\\fbox{$\\displaystyle H_{n}\\sim \\ln n +\\gamma +\\frac{1}{2n}-\\frac{1}{12n^{2}}+\\frac{1}{120n^{4}}-\\left[\\cdots\\right],$}\n\\end{equation}where $\\gamma=0.57721\\cdots$ is \\textsc{Euler}'s constant and each summand in the asymptotic expansion is of the form $\\dfrac{B_{k}}{n^{k}}$, where $B_{k}$ denots the $k$th \\textsc{Bernoulli} number, mathematicians have continued to offer alternate approximative formulas to \\textsc{Euler}'s. We cite the following formulas, which appear in order of increasing accuracy:\\begin{align}\nH_{n} &\\approx \\ln n +\\gamma +\\frac{1}{2n+\\frac{1}{3}} \\\\\n &\\approx\\ln\\sqrt{n(n+1)}+\\gamma +\\frac{1}{6n(n+1)+\\frac{6}{5}}\\\\\n &\\approx\\ln\\left(n+\\frac{1}{2}\\right)+\\gamma +\\frac{1}{24\\left(n+\\frac{1}{2}\\right)^{2}+\\frac{21}{5}}. 
\n\\end{align}The formula (3) is the \\textsc{T\\'oth-Mare} approximation (see \\cite{TM}), and it \\textbf{\\emph{overestimates}} the true value of $H_{n}$ by terms of order $\\dfrac{1}{72n^{3}}$; the second, (4), is the \\textsc{Lodge-Ramanujan} approximation, and it \\textbf{\\emph{underestimates}} the true value of $H_{n}$ by terms of order $ \\dfrac{19}{3150\\left[n(n+1)\\right]^{3}}$ (see \\cite{Vill}); and the last, (5), is the \\textsc{DeTemple-Wang} approximation, and it \\textbf{\\emph{underestimates}} the true value of $H_{n}$ by terms of order $\\dfrac{2071}{806400\\left(n+\\frac{1}{2}\\right)^{6}}$ (see \\cite{D}).\n\n\nIn 2003, \\textsc{Chao-Ping Chen} and \\textsc{Feng Qi} (see \\cite{CQ}) published a proof of the following sharp form of the \\textsc{T\\'oth-Mare} approximation:\n\n\\begin{thm} For any natural number $n\\geqslant 1$, the following inequality is valid:\n\\begin{equation}\n\\fbox{$\\displaystyle \\frac{1}{2n+\\frac{1}{1-\\gamma}-2}\\leqslant H_{n}-\\ln n -\\gamma <\\frac{1}{2n+\\frac{1}{3}}.$}\n\\end{equation}The constants $\\frac{1}{1-\\gamma}-2=.3652721\\cdots$ and $\\frac{1}{3}$ are the best possible, and equality holds only for $n=1.$\n\n\\end{thm}\nThe first \\emph{statement} of this theorem had been announced ten years earlier by the editors of the ``Problems\" section of the \\emph{American Mathematical Monthly}, Vol 99, No. 7, (Jul-Aug, 1992), p 685, as part of a commentary on the solution of Problem 3432, but they did not publish the proof. So, the first \\emph{published} proof is apparently that of \\textsc{Chen} and \\textsc{Qi}.\n\nIn this paper we will prove sharp forms of the \\textsc{Lodge-Ramanujan} approximation and the \\textsc{DeTemple-Wang} approximation.\n\n\\begin{thm} For any natural number $n\\geqslant 1$, the following inequality is valid:\n\\begin{equation}\n\\fbox{$\\displaystyle \\frac{1}{6n(n+1)+\\frac{6}{5}}< H_{n}-\\ln\\sqrt{n(n+1)}-\\gamma \\leqslant\\frac{1}{6n(n+1)+\\frac{12\\gamma -11-12\\ln 2}{1-\\gamma-\\ln\\sqrt{2}}}.$}\n\\end{equation}The constants $\\frac{12\\gamma -11-12\\ln 2}{1-\\gamma-\\ln\\sqrt{2}}=1.12150934\\cdots$ and $\\frac{6}{5}$ are the best possible, and equality holds only for $n=1.$\n\\end{thm} \n\n\\noindent and\n\n\\begin{thm} For any natural number $n\\geqslant 1$, the following inequality is valid:\n\\begin{equation}\n\\fbox{$\\displaystyle\\frac{1}{24\\left(n+\\frac{1}{2}\\right)^{2}+\\frac{21}{5}} < H_{n}-\\ln\\left(n+\\frac{1}{2}\\right) -\\gamma \\leqslant\\frac{1}{24\\left(n+\\frac{1}{2}\\right)^{2}+\\frac{54\\ln\\frac{3}{2}+54\\gamma-53}{1-\\ln\\frac{3}{2}-\\gamma}}.$}\n\\end{equation}The constants $\\frac{54\\ln\\frac{3}{2}+54\\gamma-53}{1-\\ln\\frac{3}{2}-\\gamma}=3.73929752\\cdots$ and $\\frac{21}{5}$ are the best possible, and equality holds only for $n=1.$\n\\end{thm}\n\nAll three theorems are corollaries of the following stronger theorem:\n\\begin{thm}For any natural number $n\\geqslant 1$, define $f_{n}$, $\\lambda_{n}$, and $d_{n}$ by:\\begin{align}\nH_{n} &:= \\ln n +\\gamma +\\frac{1}{2n+f_{n}} \\\\\n &:=\\ln\\sqrt{n(n+1)}+\\gamma +\\frac{1}{6n(n+1)+\\lambda_{n}}\\\\\n &:=\\ln\\left(n+\\frac{1}{2}\\right)+\\gamma +\\frac{1}{24\\left(n+\\frac{1}{2}\\right)^{2}+d_{n}}, \n\\end{align}respectively. 
Then for any natural number $n\\geqslant 1$ the sequence $\\{f_{n}\\}$ is \\textbf{monotonically decreasing} while the sequences $\\{\\lambda_{n}\\}$ and $\\{d_{n}\\}$ are \\textbf{monotonically increasing}.\n\\end{thm}\\textsc{Chen} and \\textsc{Qi}, (see \\cite{CQ}), proved that the sequence $\\{f_{n}\\}$ \\textbf{\\emph{decreases}} monotonically. In this paper we will prove the monotonicity of the sequences $\\{\\lambda_{n}\\}$ and $\\{d_{n}\\}$.\n\\section{Lemmas}\n\nOur proof is based on inequalities satisfied by the \\textbf{digamma} function, $\\Psi(x)$:\n\\begin{equation}\n\\fbox{$\\displaystyle \\Psi(x):=\\frac{d}{dx}\\ln\\Gamma(x)\\equiv\\frac{\\Gamma'(x)}{\\Gamma(x)}\\equiv -\\gamma-\\frac{1}{x}+x\\sum_{n=1}^{\\infty}\\frac{1}{n(x+n)} ,$}\n\\end{equation}which is the generalization of $H_{n}$ to the real variable $x$ since $\\Psi(x)$ and $H_{n}$ satisfiy the equation:\\begin{equation}\\Psi(n+1)=H_{n}-\\gamma.\\end{equation} \n\n\\begin{lemma}For every $x>0$ there exist numbers $\\theta_{x}$ and $\\Theta_{x}$, with $0<\\theta_{x}<1$ and $0<\\Theta_{x}<1$, for which the following equations are true:\\begin{align}\n \\Psi(x+1) &=\\ln x+\\frac{1}{2x}-\\frac{1}{12x^{2}}+\\frac{1}{120x^{4}}-\\frac{1}{252x^{6}}+\\frac{1}{240x^{8}}\\theta_{x}, \\\\\n \\Psi'(x+1) &=\\frac{1}{x}-\\frac{1}{2x^{2}}+\\frac{1}{6x^{3}}-\\frac{1}{30x^{5}}+\\frac{1}{42x^{7}}-\\frac{1}{30x^{9}}\\Theta_{x}. \\\\\n \\end{align}\n\\end{lemma}\n\\begin{proof}\n\nBoth formulas are well-known. See, for example, \\cite{Ed}, pp 124-125.\n\n\\end{proof}\n\n\\begin{lemma}The following inequalities are true for $x>0$:\\begin{multline}\n\\frac{1}{3x(x+1)}-\\frac{1}{15x^{2}(x+1)^{2}}< 2\\Psi(x+1)-\\ln\\{x(x+1)\\}\\\\\n<\\frac{1}{3x(x+1)}-\\frac{1}{15x^{2}(x+1)^{2}}+\\frac{8}{315x^{3}(x+1)^{3}} , \\end{multline}\\begin{multline}\n \\frac{1}{x^{2}}-\\frac{1}{x(x+1)}-\\frac{1}{3x^{3}}+\\frac{1}{15x^{5}}-\\frac{1}{18x^{7}}<\\frac{1}{x}+\\frac{1}{x+1}-2\\Psi'(x+1)\\\\ <\\frac{1}{x^{2}}-\\frac{1}{x(x+1)}-\\frac{1}{3x^{3}}+\\frac{1}{15x^{5}}. \n \\end{multline}\n\n\n\n\n\n\n\n\n\\end{lemma}\n\\begin{proof}\n\nThe inequalities (17) were proved in our paper, (see\\cite{Vill}), for integers $n$ instead of the real variable $x$. But the proofs are valid for real $x$.\n\nFor (18) we start with (15) of \\textbf{Lemma 1.} We conclude that $$\\frac{1}{2x^{2}}-\\frac{1}{6x^{3}}+\\frac{1}{30x^{5}}-\\frac{1}{36x^{7}}<\\frac{1}{x}-\\Psi'(x+1)<\\frac{1}{2x^{2}}-\\frac{1}{6x^{3}}+\\frac{1}{30x^{5}}.$$Now we multiply to all three components of the inequality by 2 and add $\\dfrac{1}{x+1}-\\dfrac{1}{x}$ to them.\n\n\\end{proof}\n\n\\begin{lemma}The following inequalities are true for $x>0$:\\begin{multline}\n\\frac{1}{\\left(x+\\frac{1}{2}\\right)}-\\frac{1}{x}+\\frac{1}{2x^{2}}-\\frac{1}{6x^{3}}+\\frac{1}{30x^{5}}-\\frac{1}{42x^{7}}< \\frac{1}{x+\\frac{1}{2}}-\\Psi'(x+1)\\\\\n<\\frac{1}{\\left(x+\\frac{1}{2}\\right)}-\\frac{1}{x}+\\frac{1}{2x^{2}}-\\frac{1}{6x^{3}}+\\frac{1}{30x^{5}}, \\end{multline}\n\\begin{multline}\n\\frac{1}{24x^{2}}-\\frac{1}{24x^{3}}+\\frac{23}{960x^{4}}-\\frac{1}{160x^{5}}-\\frac{11}{8064x^{6}}-\\frac{1}{896x^{7}}< \\Psi(x+1)-\\ln\\left(x+\\frac{1}{2}\\right)\\\\\n<\\frac{1}{24x^{2}}-\\frac{1}{24x^{3}}+\\frac{23}{960x^{4}}-\\frac{1}{160x^{5}}-\\frac{11}{8064x^{6}}-\\frac{1}{896x^{7}}+\\frac{143}{30720x^{8}} . 
\\end{multline}\n\\end{lemma}\n\\begin{proof} Similar to the proof of \\textbf{Lemma 2.}\n\\end{proof}\n\\section{Proof for the Lodge-Ramanujan approximation}\n\\begin{proof}\nWe solve (10) for $\\lambda_{n}$ and use (13) to obtain $$\\lambda_{n}=\\frac{1}{\\Psi(n+1)-\\ln\\sqrt{n(n+1)}}-6n(n+1).$$Define \\begin{equation}\n\\fbox{$\\displaystyle \\Lambda_{x}:=\\frac{1}{2\\Psi(x+1)-\\ln x(x+1)}-3x(x+1). $}\n\\end{equation}for all $x>0$. Observe that $2\\Lambda_{n}=\\lambda_{n}.$\n\\\\\n\\\\\n\\emph{We will show that} $\\Lambda_{x}'>0$ for $x>5.$ Computing the derivative we obtain$$\\Lambda_{x}'=\\frac{\\frac{1}{x}+\\frac{1}{x+1}-\\Psi'(x+1)}{\\{ 2\\Psi(x+1)-\\ln\\{x(x+1)\\}^{2}}-(6x+3)$$ and therefore \\begin{align*}\n\\{ 2\\Psi(x+1)-\\ln\\{x(x+1)\\}^{2}\\Lambda_{x}'&=\\frac{1}{x}+\\frac{1}{x+1}-\\Psi'(x+1)-(6x+3)\\{ 2\\Psi(x+1)-\\ln\\{x(x+1)\\}^{2}.\\end{align*} By \\textbf{Lemma 2}, this is greater than\\begin{align*}\n&\\frac{1}{x^{2}}-\\frac{1}{x(x+1)}-\\frac{1}{3x^{3}}+\\frac{1}{15x^{5}}-\\frac{1}{18x^{7}}\\\\\n&-(6x+3)\\left\\{\\frac{1}{3x(x+1)}-\\frac{1}{15x^{2}(x+1)^{2}}+\\frac{8}{315x^{3}(x+1)^{3}}\\right\\}^{2}\\\\\n&=\\frac{1071x^{6}+840x^{5}-17829x^{4}-49266x^{3}-502999x^{2}-22178x-3675}{66150x^{7}(x+1)^{6}}\\\\\n&=\\frac{(x-5)\\left(x^{5}+\\frac{295}{51}x^{4}+\\frac{628}{51}x^{3}+\\frac{784}{51}x^{2}+\\frac{32021}{1071}x\n+\\frac{137927}{1071}\\right)+\\frac{685960}{1071}}{\\frac{1051}{17}x^{7}(x+1)^{6}}\n\\end{align*}which is obviously\\emph{ positive} for $x>5.$ \n\nFor $x=1, \\ 2, \\ 3, \\ 4, \\ 5,$ we compute directly:\\begin{align*}\n\\label{}\n \\Lambda_{1} &=.56075467\\cdots \\\\\n \\Lambda_{2} &=.58418229\\cdots \\\\\n \\Lambda_{3} &=.59158588\\cdots \\\\\n \\Lambda_{4} &=.59481086\\cdots \\\\\n \\Lambda_{5} &=.59649019\\cdots \\\\\n\\end{align*}Therefore, the sequence $\\{\\Lambda_{n}\\}$, $n \\geqslant 1$, is a strictly increasing sequence, and therefore so is the sequence $\\{\\lambda_{n}\\}$.\n\nMoreover, in \\cite{Vill}, we proved that $$\\lambda_{n}=\\frac{6}{5}-\\Delta_{n},$$where $0<\\Delta_{n}<\\dfrac{38}{175n(n+1)}$. Therefore $$\\lim_{n\\rightarrow\\infty}\\lambda_{n}=\\frac{6}{5}.$$ This completes the proof.\n\\end{proof}\n\n\\section{Proof for the DeTemple-Wang Approximation}\n\n\\begin{proof}\nFollowing the idea in the proof of the \\textsc{Lodge-Ramanujan} approximation we solve (11) for $d_{n}$ and define the corresponding real-variable version. 
Let\n\\begin{equation}\n\\fbox{$\\displaystyle d_{x}:=\\frac{1}{\\Psi(x+1)-\\ln\\left(x+\\frac{1}{2}\\right)}-24\\left(x+\\frac{1}{2}\\right)^{2}$}\n\\end{equation}We compute the derivative, ask\\emph{ when it is \\textbf{positive}}, clear the denominator and observe that we have to solve the inequality:$$\\left\\{\\frac{1}{x+\\frac{1}{2}}-\\Psi'(x+1)\\right\\}-48\\left(x+\\frac{1}{2}\\right)\\left\\{\\Psi(x+1)-\\ln\\left(x+\\frac{1}{2}\\right)\\right\\}^{2}>0.$$By \\textbf{Lemma 3}, the left hand side of this inequality is\\begin{align*}\n &>\\frac{1}{\\left(x+\\frac{1}{2}\\right)}-\\frac{1}{x}+\\frac{1}{2x^{2}}-\\frac{1}{6x^{3}}+\\frac{1}{30x^{5}}-\\frac{1}{42x^{7}}-48\\left(x+\\frac{1}{2}\\right)\\\\\n &\\left(\\frac{1}{24x^{2}}-\\frac{1}{24x^{3}}+\\frac{23}{960x^{4}}-\\frac{1}{160x^{5}}-\\frac{11}{8064x^{6}}-\\frac{1}{896x^{7}}+\\frac{143}{30720x^{8}}\\right)^{2} \\\\\n \\end{align*}for all $x>0.$ This last quantity is equal to\n\\begin{align*}&(-9018009-31747716 x-14007876 x^2+59313792 x^3+\n11454272 x^4-129239296 x^5+119566592 x^6\\\\\n&+65630208 x^7-701008896 x^8-534417408 x^9+\n178139136 x^{10})\/(17340825600 x^{16} (1+2 x))\\end{align*}\n\n\n\n\\noindent The denominator, $$17340825600 x^{16} (1+2 x),$$ is evidently \\emph{positive} for $x>0$ and the \\emph{numerator} can be written in the form $$p(x)(x-4)+r$$ where \\begin{align*}\np(x)&=548963242092+137248747452 x+34315688832 x^2\n+8564093760 x^3+2138159872 x^4\\\\&+566849792 x^5\n+111820800 x^6+11547648 x^7+178139136 x^8+178139136 x^9\n\\end{align*}with remainder $r$ equal to\n$$r=2195843950359.$$\n\nTherefore, the numerator is clearly \\emph{positive} for $x>4,$ and therefore, the derivative, $d_{x}\\ '$, too, is \\emph{postive} for $x>4.$ Finally\\begin{align*}\n d_{1} &=3.73929752\\cdots \\\\\n d_{2} &=4.08925414\\cdots \\\\ \n d_{3} &=4.13081174\\cdots \\\\\n d_{4} &=4.15288035\\cdots\n \\end{align*}Therefore $\\{d_{n}\\}$ is an \\textbf{\\emph{increasing}} sequence for $n\\geqslant 1.$ \n\nNow, if we expand the formula for $d_{n}$ into an asymptotic series in powers of $\\dfrac{1}{\\left(n+\\frac{1}{2}\\right)}$, we obtain$$d_{n}\\sim \\frac{21}{5}-\\frac{1400}{2071\\left(n+\\frac{1}{2}\\right)}+\\cdots$$and we conclude that $$\\lim_{n\\rightarrow\\infty}d_{n}=\\frac{21}{5}.$$This completes the proof.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe antiferromagnetic Heisenberg model for $S=1\/2$ spins interacting on the triangular lattice represents the simplest example in which \nquantum fluctuations give rise to strong modifications of the classical picture, where the minimum energy configuration shows $120^\\circ$ \norder. Indeed, this was the first microscopic model that has been proposed for the realization of the so-called resonating valence-bond \nstate~\\cite{anderson1973,fazekas1974}. Within this approach, the ground state is described by a superposition of an exponentially large \nnumber of singlet coverings of the lattice, generalizing the concept of resonance introduced and developed by Rumer~\\cite{rumer1932} and\nPauling~\\cite{pauling1933} to describe the chemical bond. Even though recent numerical investigations~\\cite{capriotti1999,chernyshev2007} \nhave shown that the ground state possesses a finite magnetization in the thermodynamic limit, the results confirmed large deviations \nfrom classical and semiclassical limits. 
In addition, small perturbations on top of the nearest-neighbor Heisenberg model have been shown to \ndrive the system into magnetically disordered phases~\\cite{zhu2018,iaconis2018}. By keeping the spin SU(2) symmetry, a natural way to \ninduce further magnetic frustration is to include a next-nearest-neighbor super-exchange coupling, leading to the following \nHamiltonian:\n\\begin{equation}\\label{eq:hamj1j2}\n{\\cal H} = J_1 \\sum_{\\langle i,j \\rangle} {\\bf S}_i \\cdot {\\bf S}_j +\nJ_2 \\sum_{\\langle\\langle i,j \\rangle\\rangle} {\\bf S}_i \\cdot {\\bf S}_j,\n\\end{equation}\nwhere $\\langle \\dots \\rangle$ and $\\langle \\langle \\dots \\rangle\\rangle$ indicate nearest-neighbor and next-nearest-neighbor sites\nin the triangular lattice; ${\\bf S}_i=(S_i^x,S_i^y,S_i^z)$ is the spin-$1\/2$ operator at the site $i$ and, finally, $J_1$ and $J_2$ \nare the antiferromagnetic coupling constants. This model has been intensively investigated in the past, from the semi-classical \napproaches of the early days~\\cite{jolicoeur1990,chubukov1992} to the recent numerical approaches~\\cite{zhu2015,hu2015,iqbal2016}.\nThe latter ones indicated a rather fragile $120^\\circ$ magnetic order, which is melted for $J_2\/J_1 \\approx 0.07(1)$ (a value that\nis in very good agreement among these calculations). For larger values of the frustrating ratio $J_2\/J_1$ the nature of the \nnon-magnetic phase is not settled, with evidence for either a gapped~\\cite{zhu2015,hu2015} or a gapless~\\cite{iqbal2016} \nspin liquid. \n\nImportant information about the physical properties is given by the features of the low-energy spectrum. In particular, the dynamical \nstructure factor $S({\\bf q},\\omega)$ gives a direct probe to assess the nature of the relevant excitations. These can be divided into two \nbroad classes: standard gapless magnons (or gapped triplons), which exist in magnetically ordered phases (or valence-bond solids), and \nmore exotic (gapped or gapless) spinons, which exist in deconfined spin liquids. In addition to spinons, another kind of excitation is \npresent, due to the emergence of gauge fluctuations in the low-energy effective theory of spin liquids~\\cite{savary2016}.\n\nFor the Heisenberg model with only nearest-neighbor couplings on the triangular lattice, semi-classical approaches, based upon the\nlarge-$S$ expansion, suggested that the excitation spectrum obtained within the leading order (i.e., within the linear spin-wave \napproximation) is subject to significant corrections when interactions between spin waves are taken into account~\\cite{starykh2006}. \nThis fact is mainly due to the non-collinearity of the magnetization, which allows for three-magnon interactions. Then, despite the \npresence of long-range order, the Goldstone modes are not stable but may decay in a large part of the Brillouin zone (see \nFig.~\\ref{fig:latt}); in particular, the existence of more than one Goldstone mode, with different velocities, immediately implies that \nmagnons can be kinematically unstable, decaying into two magnons with lower energy~\\cite{chernyshev2006,chernyshev2009}. 
A detailed \nanalysis, which includes interactions among spin waves, corroborated this outcome, also showing roton-like minima at $M=(0,2\\pi\/\\sqrt{3})$ \nand symmetry-related points (i.e., midpoints of the edges of the Brillouin zone)~\\cite{chernyshev2006,chernyshev2009,zhitomirsky2013}.\nThe latter aspect shares similarities with the Heisenberg model on the square lattice, where minima of the magnon dispersion are present \naround $(\\pi,0)$ and $(0,\\pi)$~\\cite{singh1995,zheng2005}. As far as the triangular lattice is concerned, aspects of the strong \nrenormalization of the magnon dispersion at high energies have been confirmed by series expansions~\\cite{zheng2006}. Moreover, within \nthese numerical calculations, a huge downward renormalization of the one-magnon excitations is recovered, leading to a relatively \ndispersionless mode.\n\nWhile there are a number of materials whose low-energy behavior can be well described by the $S=1\/2$ Heisenberg model on the square \nlattice (among them, we just mention La$_2$CuO$_4$ for its relevance to cuprate superconductors~\\cite{coldea2001a}), until very \nrecently there were no compounds that could be well approximated by the same model on the equilateral triangular lattice. For example,\nin Cs$_2$CuCl$_4$ the super-exchange couplings are not isotropic in the nearest-neighbor bonds, one out of the three being much \nstronger than the other ones (thus defining weakly-coupled zig-zag chains)~\\cite{coldea2001b}. Here, inelastic neutron scattering \nmeasurements have shown the existence of a very broad continuum, which has been associated to spin fractionalization and spin-liquid\nbehavior~\\cite{coldea2001b}.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{fig1a.pdf}\n\\includegraphics[width=0.4\\columnwidth]{fig1b.pdf}\\hspace{0.05\\columnwidth}\n\\includegraphics[width=0.4\\columnwidth]{fig1c.pdf}\n\\caption{\\label{fig:latt}\nUpper-left panel: the classical spin configuration (in the $XY$ plane) that is determined by the fictitious magnetic field $h$ in \nthe Hamiltonian~(\\ref{eq:auxham}) with ${\\bf Q}=(2\\pi\/3,2\\pi\/\\sqrt{3})$. Upper-right panel: pattern for the sign structure of the \nnearest-neighbor hopping $s_{i,j}$ of Eq.~(\\ref{eq:auxham}), $s_{i,j}=+1$ ($-1$) for solid (dashed) lines; notice the the amplitude \nfor the kinetic terms is chosen to be $t>0$. Lower-left panel: the path in the Brillouin zone that is used to plot the results of \nthe dynamical structure factor of the $30\\times30$ triangular lattice (blue arrows), see Figs.~\\ref{fig:not}, \\ref{fig:AF120}, \n\\ref{fig:dispersions}, \\ref{fig:007}, and~\\ref{fig:125}. Lower-right panel: the path in the Brillouin zone that is used to plot the \ndynamical structure factor of the $84 \\times 6$ cylinder (blue arrows), see Fig.~\\ref{fig:cylin}. In both lower panels\nthe orange shaded area corresponds to the region of the Brillouin zone in which magnon decay is predicted by the spin-wave \napproximation~\\cite{chernyshev2006,chernyshev2009} and the dashed line delimits the magnetic Brillouin zone.}\n\\end{figure}\n\nRecently, measurements on Ba$_3$CoSb$_2$O$_9$ have been reported, providing evidence that it can be described by a $S=1\/2$ Heisenberg \nmodel on the undistorted triangular lattice with predominant nearest-neighbor super-exchange couplings (a small easy-plane anisotropy \nis present, in addition to a small interlayer coupling)~\\cite{shirata2012}. 
The initial interest was aimed at the study of the \nmagnetization curve and the stabilization of magnetization plateaux~\\cite{shirata2012,suzuki2013}, and the proximity to a spin liquid\nphase~\\cite{zhou2012}. Later, inelastic neutron scattering measurements have been performed, in order to clarify the nature of the\nmagnetic excitations on top of the ground state~\\cite{ma2016,ito2017}. Even though Ba$_3$CoSb$_2$O$_9$ possesses long-range magnetic \norder (with $120^{\\circ}$ ordering), several aspects of the magnon dispersion and the multi-magnon continuum reveal an unconventional\nbehavior, which can only be partly explained within semi-classical approaches. First of all, at low-energies, the magnon dispersion \nis strongly renormalized with respect to the linear spin-wave approximation; an anomalous line broadening has also been detected, \nleading to the conclusion that magnon decay may be plausible; finally, the continuum presents unexpected dispersive features at high \nenergies. It should be noticed that, since neutron scattering data are sensitive to the full dynamical spin structure factor, three\ncopies of the magnon dispersion (translated by the ordering vectors) are visible in the spectrum. Experimental investigations have \nbeen also performed to infer the nature of the magnon excitations on top of the gapped phase that is stabilized at the one-third \nmagnetization plateau~\\cite{kamiya2018}. In this case, the situation seems to be more conventional, with the experimental results \nin relatively good agreement with theoretical predictions.\n\nMotivated by these experimental findings, there have been a few attempts to investigate the Heisenberg model (also including small\nperturbations) with both analytical and numerical tools~\\cite{ghioldi2015,ghioldi2018,verresen2018,chen2018}. In particular, by using \ndensity-matrix renormalization group (DMRG) calculations, Verresen and collaborators~\\cite{verresen2018} claimed that the magnon \ndecay does not take place, because of the strong coupling interactions between quasi-particles (i.e., magnons) in the Heisenberg \nmodel~\\cite{noteverre}. As a result of the avoided decay, the midpoint of the edge of the {\\it magnetic} Brillouin zone (dubbed \n$Y_1$) displays a minimum of the magnon dispersion, possibly explaining the high-energy features seen around the $M$ point in \nRef.~\\cite{ito2017}.\n\nWithin this context, also the discovery of YbMgGaO$_4$~\\cite{li2015} and, more recently, NaYbO$_2$~\\cite{ding2019} will give a further \nimpetus to study (generalized) spin models on the triangular lattice. In both cases, no signatures of magnetic order appear down to \nvery low temperatures, suggesting the existence of a quantum spin liquid. While both materials host effective $J=1\/2$ spin degrees of \nfreedom, the actual low-energy Hamiltonian may be more complicated than the $SU(2)$-invariant one of Eq.~(\\ref{eq:hamj1j2}); still, the \nphysical properties can share many similarities with the ground state of the $J_1-J_2$ model, as suggested in Ref.~\\cite{zhu2018}.\n\nIn this work, we employ a dynamical variational Monte Carlo approach~\\cite{li2010} to compute the out-of-plane dynamical spin structure \nfactor for the Heisenberg model on the triangular lattice, also in presence of a next-nearest-neighbor coupling $J_2$. First of all, we \nfocus our attention on the model with $J_2=0$ for which we confirm huge corrections from the linear spin-wave calculations. 
Our results\nsupport the idea that the magnon excitations are stable in the whole Brillouin zone; indeed, even though a {\\it discrete} set of\nexcitations is obtained within our numerical method, the lowest-energy state for each momentum ${\\bf q}$ appears to be rather well \nseparated from the rest of the spectrum at higher energies, suggesting the existence of a faint continuum just above the magnon branch.\nThe second part of this work deals with the $J_1-J_2$ model, to highlight the modifications in the dynamical structure factor that take \nplace when entering the spin-liquid phase (which, according to our variational approach, is gapless~\\cite{iqbal2016}). Here, the spectrum \nshows gapless excitations at $M$ points; in addition, a strong signal at low energies is present in correspondence of the corners of the \nBrillouin zone, i.e., $K=(2\\pi\/3,2\\pi\/\\sqrt{3})$ and $K^\\prime=(4\\pi\/3,0)$. While the former aspect can be easily understood by inspecting \nthe non-interacting spinon band structure, the latter one is a genuine feature that emerges from the Gutzwiller projector, which includes \ninteractions between spinons and gauge fields. Indeed, while the non-interacting wave function corresponds to a mean-field approximation, \nin which gauge fields are completely frozen, the Gutzwiller projection has the effect of inserting back the temporal fluctuations of \nthose fields~\\cite{wenbook}. In this respect, it is worth mentioning that a recent field-theoretical analysis indicated the existence of \nlow-energy (triplet) monopole excitations at the zone corners, which are expected to contribute to the dynamical structure \nfactor~\\cite{song2018}.\n\n\\section{Dynamical variational Monte Carlo}\\label{sec:method}\n\nThe dynamical structure factor, which is directly measured within inelastic neutron scattering experiments, can be used to unveil the \nnature of the elementary excitations of the models\/materials under investigation. In its spectral form, this quantity reads as\n\\begin{equation}\\label{eq:dsf}\nS^{a}({\\bf q},\\omega) = \\sum_{\\alpha} |\\langle \\Upsilon_{\\alpha}^q | S^{a}_q | \\Upsilon_0 \\rangle|^2 \\delta(\\omega-E_{\\alpha}^q+E_0),\n\\end{equation}\nwhere $|\\Upsilon_0\\rangle$ and $\\{|\\Upsilon_{\\alpha}^q\\rangle\\}_\\alpha$ are the ground state and the set of all excited states with \nmomentum $q$, whose corresponding energies are $E_0$ and $\\{E_{\\alpha}^q\\}_{\\alpha}$, respectively. In this work, we evaluate the dynamical \nstructure factor of the spin model~(\\ref{eq:hamj1j2}) by directly constructing accurate variational {\\it Ansatze} for its ground state and \na few low-energy excited states. Our variational approach is based on the so-called {\\it parton} construction, in which the spin degrees of \nfreedom of the model are rewritten in terms of auxiliary fermionic operators~\\cite{savary2016,wen2002}. The fermionic language constitute a \nversatile framework to define variational wave functions for both magnetically ordered and disordered phases of matter. 
The present Section \nis dedicated to the introduction of the fermionic wave functions for spin models and to the description of the variational Monte Carlo \nmethod employed for the calculation of the dynamical structure factor.\n\n\\subsection{Gutzwiller-projected fermionic wave functions for the ground state}\\label{sec:wavefunctions}\n\nHere, for the sake of generality, we consider a generic $SU(2)$ model for frustrated spin systems, which consists of a set of spin-$1\/2$ \ndegrees of freedom sitting on the sites of a lattice and interacting through the Heisenberg exchange couplings $J_{i,j}$:\n\\begin{equation}\\label{eq:generic_heis}\n{\\cal H} = \\sum_{i,j} J_{i,j} {\\bf S}_i \\cdot {\\bf S}_j.\n\\end{equation}\nThe interplay of the different interactions can lead to the stabilization of different phases of matter. In absence of frustration, i.e.,\nwhen no competing couplings are present, the ground state may develop some kind of magnetic order, which minimizes the classical energy of \nthe model. On the contrary, when different interactions compete with each other, magnetically disordered phases can arise, such as spin \nliquids. \n\nThe first attempt to describe spin-liquid states dates back to the resonating valence-bond approach, where a variational wave function is\ndefined in terms of a linear superposition of singlet coverings of the lattice~\\cite{anderson1973}. More recently, Wen~\\cite{wen2002} \ndeveloped a general approach to classify and construct spin-liquid states, which satisfy all the symmetries of a given lattice model. \nThis method is built upon the introduction of auxiliary Abrikosov fermions, which form a projective representation of $S=1\/2$ spin operators: \n\\begin{equation}\\label{eq:Sabrikosov}\n{\\bf S}_i = \\frac{1}{2} \\sum_{\\alpha,\\beta} c_{i,\\alpha}^\\dagger \n\\boldsymbol{\\sigma}_{\\alpha,\\beta} c_{i,\\beta}^{\\phantom{\\dagger}}.\n\\end{equation}\nHere $c_{i,\\alpha}^{\\phantom{\\dagger}}$ ($c_{i,\\alpha}^\\dagger$) destroys (creates) a fermion with spin $\\alpha=\\uparrow,\\downarrow$ on site $i$, and the \nvector $\\boldsymbol{\\sigma}=(\\sigma_x,\\sigma_y,\\sigma_z)$ is the set of Pauli matrices. The anticommutation relations among fermions ensure \nthat the Abrikosov representation yields the correct commutation relations among different spin components. 
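\n\nThis statement can be checked directly in the four-dimensional Fock space of a single site. The following minimal numerical sketch (not part of the original derivation; it simply restates Eq.~(\\ref{eq:Sabrikosov}) in matrix form, with all conventions chosen here for illustration) verifies that the spin algebra is reproduced and that the total spin equals $1\/2$ only on singly occupied states:\n\\begin{verbatim}\nimport numpy as np\n\n# local Fock basis {|0>, |up>, |dn>, |up,dn>}; signs follow from the ordering\n# c^dag_up c^dag_dn |0> for the doubly occupied state\nc_up = np.zeros((4, 4), dtype=complex)\nc_dn = np.zeros((4, 4), dtype=complex)\nc_up[0, 1] = c_up[2, 3] = 1.0\nc_dn[0, 2] = 1.0\nc_dn[1, 3] = -1.0\nc = [c_up, c_dn]\nsigma = {'x': np.array([[0, 1], [1, 0]], dtype=complex),\n         'y': np.array([[0, -1j], [1j, 0]], dtype=complex),\n         'z': np.array([[1, 0], [0, -1]], dtype=complex)}\nS = {a: 0.5 * sum(sigma[a][i, j] * (c[i].conj().T @ c[j])\n                  for i in range(2) for j in range(2)) for a in 'xyz'}\n# [S^x, S^y] = i S^z holds on the full Fock space\nprint(np.allclose(S['x'] @ S['y'] - S['y'] @ S['x'], 1j * S['z']))  # True\n# ... but S^2 = 3/4 only on the singly occupied states |up>, |dn>\nprint(np.round(np.diag(sum(S[a] @ S[a] for a in 'xyz')).real, 2))   # [0. 0.75 0.75 0.]\n\\end{verbatim}\nThe last line anticipates the role of the single-occupancy constraint discussed next.\n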
Still, in order to faithfully \nreproduce the Hilbert space of the original spin model, only configurations with one fermion per site must be considered, which implies that \nthe Abrikosov fermions must satisfy the constraint:\n\\begin{equation}\\label{eq:Gutz_constraint1}\nc^\\dagger_{i,\\uparrow}c^{\\phantom{\\dagger}}_{i,\\uparrow}+c^\\dagger_{i,\\downarrow}c^{\\phantom{\\dagger}}_{i,\\downarrow}=1,\n\\end{equation}\nor equivalently:\n\\begin{equation}\\label{eq:Gutz_constraint2}\nc^\\dagger_{i,\\uparrow} c^\\dagger_{i,\\downarrow}=0,\n\\end{equation}\n\nBesides constant terms, the Hamiltonian of Eq.~(\\ref{eq:generic_heis}) can be rewritten in terms of Abrikosov fermions as follows:\n\\begin{equation}\\label{eq:quartic_ham}\n {\\cal H} = -\\frac{1}{2}\\sum_{i,j} \\sum_{\\alpha,\\beta} J_{i,j} \\left(\n c_{i,\\alpha}^\\dagger c_{j,\\alpha}^{\\phantom{\\dagger}} c_{j,\\beta}^\\dagger c_{i,\\beta}^{\\phantom{\\dagger}} \n + \\frac{1}{2} c_{i,\\alpha}^\\dagger c_{i,\\alpha}^{\\phantom{\\dagger}} c_{j,\\beta}^\\dagger c_{j,\\beta}^{\\phantom{\\dagger}} \\right).\n\\end{equation}\nAt this stage, the Hamiltonian~(\\ref{eq:quartic_ham}) with the constraints of Eqs.~(\\ref{eq:Gutz_constraint1}) and~(\\ref{eq:Gutz_constraint2}) \ngive an {\\it exact} representation of the original model. In order to tackle the above interacting fermionic system, one possibility is to \nperform a mean-field decoupling~\\cite{wen2002}. For the purpose of studying spin-liquid phases, we keep only the mean-field terms that do not \nbreak the $SU(2)$ symmetry of the original spins. The result is a quadratic Hamiltonian:\n\\begin{eqnarray}\\label{eq:generic_mf}\n {\\cal H}_{0} = \\sum_{i,j} \\sum_{\\sigma} t_{i,j} c_{i,\\sigma}^\\dagger c_{j,\\sigma}^{\\phantom{\\dagger}} +\n \\sum_{i,j} \\Delta_{i,j} c_{i,\\uparrow}^\\dagger c_{j,\\downarrow}^\\dagger + h.c. \\nonumber \\\\\n +\\sum_{i} \\sum_{\\sigma} \\mu_i c_{i,\\sigma}^\\dagger c_{i,\\sigma}^{\\phantom{\\dagger}} +\n \\sum_{i} \\zeta_{i} c_{i,\\uparrow}^\\dagger c_{i,\\downarrow}^\\dagger + h.c.,\n\\end{eqnarray}\nwhich contains a hopping term $t_{i,j}$ and a singlet pairing term $\\Delta_{i,j}$, which are related to the expectation values \n$\\langle c_{j,\\sigma}^\\dagger c_{i,\\sigma}^{\\phantom{\\dagger}}\\rangle$ and $\\langle c_{i,\\sigma} c_{j,-\\sigma}\\rangle$, respectively. In addition, the \none-fermion-per-site constraint of the parton construction is enforced in a {\\it global} fashion by including a chemical potential $\\mu_i$ \nand an onsite-pairing $\\zeta_{i}$ as Lagrange multipliers in ${\\cal H}_{0}$~\\cite{wen2002}. Within the mere mean-field approach, the \nparameters of ${\\cal H}_{0}$ are computed self-consistently and define a low-energy effective theory for the spin model under investigation. \nHowever, the ground state of ${\\cal H}_{0}$, named $|\\Phi_0 \\rangle$, satisfies the constraints of Eqs.~(\\ref{eq:Gutz_constraint1}) \nand~(\\ref{eq:Gutz_constraint2}) only on average and, therefore, does not represent a valid wave function for spins. Within this approach, \na full treatment of the original spin model requires the inclusion of all fluctuations of the parameters around the mean-field solution. \nSince this task is in general unfeasible, an alternative approach can be pursued, in which the Hamiltonian ${\\cal H}_{0}$ is exploited \nas a starting point for the definition of a variational wave function for the initial spin model. 
Indeed, the one-fermion-per-site \nconstraint can be enforced exactly by applying the Gutzwiller projector,\n\\begin{equation}\n \\mathcal{P}_G= \\prod_i (n_{i,\\uparrow}-n_{i,\\downarrow})^2,\n\\end{equation}\nto the ground state wave function of ${\\cal H}_{0}$. We emphasize that in general the Gutzwiller projection cannot be treated analytically, \ndue to its intrinsic many-body character, however it can be considered within Monte Carlo sampling. At variance with the mean-field treatment, \nin the variational approach the parameters of ${\\cal H}_{0}$ are not computed self-consistently, but are optimized in order to minimize the \nenergy of the Gutzwiller-projected {\\it Ansatz} $\\mathcal{P}_G|\\Phi_0 \\rangle$. \n\nThe artificial enlargement of the Hilbert space introduced by the parton construction gives rise to a {\\it gauge redundancy} in the \nrepresentation of the spin degrees of freedom. Specifically, the mapping~(\\ref{eq:Sabrikosov}) is invariant under {\\it local} $SU(2)$ \ntransformations of the Abrikosov fermions operators~\\cite{wen2002}. As a consequence, all physical properties of the spins are independent on \nthe gauge choice for fermions. For example, whenever we perform $SU(2)$ transformations to the unprojected Hamiltonian ${\\cal H}_{0}$, the\nvariational wave function with the Gutzwiller projector remains invariant. Exploiting this gauge redundancy, it is possible to classify all \nthe quadratic Hamiltonians ${\\cal H}_{0}$ whose Gutzwiller-projected ground states fulfill the symmetries of the lattice model. \nThis procedure, known as projective symmetry group analysis~\\cite{wen2002}, provides a recipe to construct all the distinct spin liquid \n{\\it Ansatze} for a given spin model. From a variational point of view, the spin-liquid wave function with the lowest variational energy is \nthe one which better describes the true ground state of the model.\n\nIn general, the variational {\\it Ansatze} defined by Gutzwiller-projecting the ground state of Eq.~(\\ref{eq:generic_mf}) do not display any \nmagnetic order~\\cite{li2013}. For the purpose of defining suitable wave functions for magnetically ordered phases, an additional term can be \nadded to ${\\cal H}_{0}$:\n\\begin{equation}\\label{eq:magnfield}\n {\\cal H}_{0} \\mapsto {\\cal H}_{0} + h \\sum_{i} \n \\left ( e^{i \\mathbf{Q} \\cdot \\mathbf{R}_i} c_{i,\\uparrow}^\\dagger c_{i,\\downarrow}^{\\phantom{\\dagger}}\n + e^{-i \\mathbf{Q} \\cdot \\mathbf{R}_i} c_{i,\\downarrow}^\\dagger c_{i,\\uparrow}^{\\phantom{\\dagger}} \\right ).\n\\end{equation}\nHere, $h$ is a {\\it fictitious} magnetic field which lies in the $XY$ plane and displays a periodic pattern defined by the pitch vector \n$\\mathbf{Q}$. Since the ground-state wave function of the Hamiltonian~(\\ref{eq:magnfield}) tends to overestimate the magnetic \norder~\\cite{becca2011}, further transverse quantum fluctuations are added through the application of a spin-spin Jastrow factor, \n\\begin{equation}\n\\mathcal{J}_s=\\exp \\left ( \\frac{1}{2} \\sum_{i,j} v_{i,j} S^z_i S^z_j \\right ),\n\\end{equation}\nto the Gutzwiller-projected state. 
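\n\nIn practice, the Jastrow factor is diagonal in the $S^z$ basis used for the Monte Carlo sampling, so it simply reweights the amplitude of each spin configuration in the Gutzwiller-projected state. A minimal sketch of this reweighting (with an arbitrary placeholder pseudopotential, not the optimized one) is:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nN = 36\nsz = rng.choice([0.5, -0.5], size=N)   # one fermion per site, S^z_i = +-1/2\n# placeholder pseudopotential v(i,j); in the actual calculation one parameter\n# per distance on the cluster is optimized together with the other parameters\nv = np.array([[0.0 if i == j else -0.1 / (1.0 + abs(i - j)) for j in range(N)]\n              for i in range(N)])\nlog_weight = 0.5 * sz @ v @ sz   # exponent of the Jastrow factor for this configuration\nweight = np.exp(log_weight)\n\\end{verbatim}\n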
Specifically, the complete form of the variational wave functions employed in this work is\n\\begin{equation}\\label{eq:wf}\n|\\Psi_0\\rangle= \\mathcal{P}_{S_z} \\mathcal{J}_s \\mathcal{P}_G |\\Phi_0 \\rangle,\n\\end{equation}\nwhere in addition to the Gutzwiller projection and the Jastrow factor, we apply a projector enforcing zero value for the $z$-component of the \ntotal spin ($\\mathcal{P}_{S_z}$).\n\nBy using this approach, the variational phase diagram for the $J_1-J_2$ model on the triangular lattice has been obtained in Ref.~\\cite{iqbal2016}:\nthe system undergoes a phase transition between a magnetically ordered phase to a gapless spin liquid at ${J_2\/J_1 \\approx 0.08}$. For this \nmodel, the optimal variational wave functions are obtained by considering only a hopping term (no pairing) \nand the fictitious magnetic field in the quadratic Hamiltonian:\n\\begin{eqnarray}\\label{eq:auxham}\n\\mathcal{H}_0 &=& t \\sum_{\\langle i,j \\rangle} s_{i,j} c_{i,\\sigma}^\\dagger c_{j,\\sigma}^{\\phantom{\\dagger}} \\nonumber \\\\\n&+& h \\sum_{i} \\left ( e^{i \\mathbf{Q} \\cdot \\mathbf{R}_i} c_{i,\\uparrow}^\\dagger c_{i,\\downarrow}^{\\phantom{\\dagger}}\n+ e^{-i \\mathbf{Q} \\cdot \\mathbf{R}_i} c_{i,\\downarrow}^\\dagger c_{i,\\uparrow}^{\\phantom{\\dagger}} \\right ).\n\\end{eqnarray}\nHere $t$ is a first-neighbor hopping with a non-trivial sign structure ($s_{i,j} = \\pm 1$) which generates a pattern of alternating $0$ \nand $\\pi$ fluxes through the triangular plaquettes of the lattice, see Fig.~\\ref{fig:latt}; $h$ is a fictitious magnetic field which displays \nthe classical $120^\\circ$ order with ${\\bf Q}=(2\\pi\/3,2\\pi\/\\sqrt{3})$, see Fig.~\\ref{fig:latt} (considering ${\\bf Q}=(4\\pi\/3,0)$ would not \nchange the physical content of the ground state wave function). All the parameters included in $\\mathcal{H}_0$ and the pseudopotential \n$v_{i,j}$ (one parameter for each distance $|{\\bf R}_i-{\\bf R}_j|$ in the translational invariant lattice) entering the Jastrow factor can \nbe optimized to minimize the variational energy. While in the magnetic phase of the system the optimal value for the ratio $h\/t$ is finite, \nfor $J_2\/J_1 \\gtrsim 0.08$ the system enters the spin liquid phase and the magnetic field parameter vanishes in the thermodynamic \nlimit~\\cite{iqbal2016}. The values of the fictitious magnetic field as a function of $J_2\/J_1$ can be found in Ref.~\\cite{iqbal2016}.\n\nIn this work we compute the dynamical structure factor for the $J_1-J_2$ model on the $30 \\times30$ triangular lattice. For $J_2=0$, we first\nconsider the crudest approximation for the ground state, which consists in setting the hopping term $t$ to zero. The resulting wave function \nis equivalent to the state of Ref.~\\cite{huse1988} with only a two-body Jastrow factor. Much more accurate results are then obtained by restoring\nthe hopping term in the Hamiltonian and optimizing all the variational parameters, for the cases $J_2=0$ and $J_2\/J_1=0.07$. On the other hand, \nwhen the system is in the spin liquid regime ($J_2\/J_1=0.09$ and $J_2\/J_1=0.125$), the fictitious magnetic field is vanishing and the Jastrow \nfactor is not considered, because of its negligible effects on the variational results. 
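\n\nFor illustration purposes only, the sketch below shows how a quadratic Hamiltonian with the structure of Eq.~(\\ref{eq:auxham}) can be assembled and diagonalized on a small cluster to obtain the unprojected state $|\\Phi_0\\rangle$. The hopping signs are set to $s_{i,j}=+1$ and open boundaries are used, so the actual $0$-$\\pi$ flux pattern of Fig.~\\ref{fig:latt} and the optimized parameters are not reproduced here; only the bookkeeping is:\n\\begin{verbatim}\nimport numpy as np\n\nL, t, h = 4, 1.0, 0.3                      # illustrative, not optimized, values\na1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3.0) / 2.0])\nsites = [i * a1 + j * a2 for j in range(L) for i in range(L)]\nN = len(sites)\nQ = np.array([2.0 * np.pi / 3.0, 2.0 * np.pi / np.sqrt(3.0)])\n\n# basis ordering: N spin-up orbitals first, then N spin-down orbitals\nH0 = np.zeros((2 * N, 2 * N), dtype=complex)\nfor i in range(N):\n    for j in range(N):\n        if i != j and abs(np.linalg.norm(sites[i] - sites[j]) - 1.0) < 1e-8:\n            H0[i, j] = H0[N + i, N + j] = t        # s_ij = +1 placeholder\n    phase = np.exp(1j * Q @ sites[i])\n    H0[i, N + i] = h * phase                       # fictitious in-plane field\n    H0[N + i, i] = h * np.conj(phase)\n\nenergies, orbitals = np.linalg.eigh(H0)\nphi0 = orbitals[:, :N]     # Slater determinant: N fermions in the lowest orbitals\n\\end{verbatim}\n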
According to the projective symmetry group classification, \nthe wave function obtained by considering only the hopping term in $\\mathcal{H}_0$ is a fully symmetric $U(1)$ spin liquid~\\cite{lu2016}.\n\n\\subsection{Dynamical structure factor}\\label{sec:dynamical}\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{fig2.pdf}\n\\caption{\\label{fig:not}\nDynamical structure factor of the nearest-neighbor Heisenberg model on the triangular lattice obtained by using the variational wave function \nof Eq.~(\\ref{eq:wf}) and~(\\ref{eq:auxham}) with $t=0$ on the $30 \\times 30$ cluster. The path along the Brillouin zone is shown in \nFig.~\\ref{fig:latt}. A Gaussian broadening of the spectrum has been applied ($\\sigma=0.02J_1$). The spin-wave energies of the magnon branch \n($\\epsilon_q$), on the same cluster size, are represented by the white dots connected with a solid line. The dashed line corresponds to the \nbottom of the continuum within linear spin waves, i.e. $E_q=\\min_{k} \\{ \\epsilon_{q-k} + \\epsilon_{k} \\}$. Notice that $E_{q}<\\epsilon_{q}$ \nin most of the Brillouin zone, as obtained in Ref.~\\cite{chernyshev2006,chernyshev2009}.}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{fig3.pdf}\n\\caption{\\label{fig:AF120}\nThe same as Fig.~\\ref{fig:not} but for the optimal variational wave function with both hopping $t$ and fictitious magnetic field $h$. \nThe path along the Brillouin zone is shown in Fig.~\\ref{fig:latt}. The dotted line denotes the bottom of the continuum \n$E_{q}=\\min_{k} \\{E_{0}^{q-k}+E_{0}^{k}\\}$, where $E_{0}^{q}$ is the lowest energy for a given momentum ${\\bf q}$ obtained within our \nvariational approach. Since the spectrum is gapless at the $\\Gamma$ point, we exclude the cases ${\\bf k}=(0,0)$ and ${\\bf k}={\\bf q}$ \nin the search of the minimum, because the resulting $E_{q}$ would simply coincide with the energy of the magnon branch $E_{0}^{q}$ \nall over the Brillouin zone. The purpose of this kinematic analysis is to show that no magnon decay can yield an energy $E_{q}$\nwhich is lower than the one of the magnon branch $E_{0}^{q}$ (in constrast with spin wave results).}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{fig4.pdf}\n\\caption{\\label{fig:dispersions}\nEnergies of the magnon branch for the nearest-neighbor Heisenberg model on the triangular lattice obtained with different methods. The path \nalong the Brillouin zone is shown in Fig.~\\ref{fig:latt}. The black line corresponds to linear spin wave, the blue squares to series \nexpansion~\\cite{zheng2006}, and the orange circles to our variational results (on the $30 \\times 30$ cluster).}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{fig5.pdf}\n\\caption{\\label{fig:dispersion}\nDispersion relation of the magnon branch (i.e., the lowest-energy excitation) as obtained within our variational approach (on the $30\\times 30$ \ncluster). The linear spin-wave results are also reported for comparison. Dashed lines represent the edges of the magnetic Brillouin zone. The \npresence of the roton minima at the $M$ and $Y_1$ points in the variational spectrum is evident.}\n\\end{figure}\n\nAs already mentioned, the dynamical structure factor of the $J_1-J_2$ model is computed by constructing variational {\\it Ansatze} to approximate \nthe low-energy excited states of the system. 
Here we limit ourselves to the calculation of the out-of-plane component $S^z({\\bf q},\\omega)$, \nand we employ the technique outlined in Ref.~\\cite{li2010,ferrari2018a,ferrari2018b}, which is briefly summarized in the following.\n\nFirst, we find the optimal variational \\textit{Ansatz} for the ground state of the model, which has the form of Eq.~(\\ref{eq:wf}), by \nminimizing the variational energy. The resulting wave function is employed as a reference state to construct a set of projected particle-hole \nexcitations with a given momentum $q$:\n\\begin{equation}\\label{eq:qRstate}\n|q,R\\rangle = \\mathcal{P}_{S_z} \\mathcal{J}_s \\mathcal{P}_G \n\\frac{1}{\\sqrt{N}} \\sum_{i}\\sum_{\\sigma} e^{i {\\bf q} \\cdot {\\bf R}_i} \\sigma c^\\dagger_{i+R,\\sigma}c^{\\phantom{\\dagger}}_{i,\\sigma} |\\Phi_0\\rangle.\n\\end{equation}\nThese states are labelled by $R$, which runs over all lattice vectors. We approximate the low-energy excited states of the model by using \nlinear combinations of the elements of the basis set $\\{|q,R\\rangle\\}_R$:\n\\begin{equation}\\label{eq:psinq}\n |\\Psi_n^q\\rangle=\\sum_R A^{n,q}_R |q,R\\rangle.\n\\end{equation}\nFor a certain momentum {\\bf q}, we consider the Schr{\\\"o}dinger equation for the $J_1-J_2$ Hamiltonian restricting the form of its eigenvectors \nto the one of Eq.~(\\ref{eq:psinq}), i.e. ${ {\\cal H}|\\Psi_n^q\\rangle = E_n^q |\\Psi_n^q\\rangle }$. Expanding everything in terms of \n$\\{|q,R\\rangle\\}_R$, we arrive to the following generalized eigenvalue problem\n\\begin{equation}\\label{eq:general_eig_prob}\n\\sum_{R^\\prime} \\langle q,R|{\\cal H}|q,R^\\prime \\rangle A^{n,q}_{R^\\prime} = E_n^q \\sum_{R^\\prime} \\langle q,R|q,R^\\prime \\rangle \nA^{n,q}_{R^\\prime},\n\\end{equation}\nwhich is solved to find the expansion coefficients $A^{n,q}_R$ and the energies $E_n^q$ of the excitations. All the matrix elements,\n$\\langle q,R|{\\cal H}|q,R^\\prime \\rangle$ and $\\langle q,R|q,R^\\prime \\rangle$, are evaluated within the Monte Carlo procedure, by \nsampling according to the variational ground-state wave function. Finally the dynamical structure factor is computed by:\n\\begin{equation}\\label{eq:Szz_practical}\nS^{z}({\\bf q},\\omega) = \\sum_n |\\langle \\Psi_{n}^q | S^{z}_q | \\Psi_0 \\rangle|^2 \\delta(\\omega-E_{n}^q+E_0^{\\rm var}),\n\\end{equation}\nwhere $E_0^{\\rm var}$ is the variational energy of $|\\Psi_0 \\rangle$.\n\n\\section{Results}\\label{sec:results}\n\nIn this section, we present the numerical calculations for the dynamical structure factor $S({\\bf q},\\omega)$ obtained by the \nvariational approach described in the previous section. First, we discuss the case of the Heisenberg model with only nearest-neighbor\nsuper-exchange $J_1$, also comparing our results with recent DMRG calculations~\\cite{verresen2018}. Then, we include the \nnext-nearest-neighbor coupling $J_2$ to increase frustration and melt the magnetic order. In this way, a gapless spin-liquid regime \nis reached for $J_2\/J_1 \\approx 0.08$~\\cite{iqbal2016}.\n\n\\subsection{The nearest-neighbor model with $J_2=0$}\n\nLet us start our analysis by considering the case in which the ground-state wave function only contains the fictitious magnetic field,\ni.e., $t=0$. In this case, the Abrikosov fermions are completely localized (e.g., the eigenvalues of the auxiliary Hamiltonian define \nflat bands) and the wave function corresponds to the Jastrow state of Ref.~\\cite{huse1988} with only a two-body Jastrow factor. 
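\n\nTo summarize the procedure defined by Eqs.~(\\ref{eq:general_eig_prob}) and~(\\ref{eq:Szz_practical}) in compact form, a minimal sketch is given below; the inputs stand for the (noisy) Monte Carlo estimates of the matrix elements, and the small shift that regularizes the overlap matrix is an assumption of this sketch, not a prescription of the method:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import eigh\n\ndef szz(omega_grid, Hq, Oq, s, e0, sigma=0.02):\n    # Hq[R,Rp] = <q,R|H|q,Rp>, Oq[R,Rp] = <q,R|q,Rp>, s[R] = <q,R|S^z_q|Psi_0>\n    # are the Monte Carlo estimates; e0 is the variational ground-state energy.\n    E, A = eigh(Hq, Oq + 1e-8 * np.eye(len(Oq)))   # generalized problem, A^dag Oq A = 1\n    w = np.abs(A.conj().T @ s) ** 2                # weights |<Psi_n|S^z_q|Psi_0>|^2\n    gauss = np.exp(-(omega_grid[:, None] - (E - e0)[None, :]) ** 2 / (2.0 * sigma ** 2))\n    return (gauss * w).sum(axis=1) / (np.sqrt(2.0 * np.pi) * sigma)\n\\end{verbatim}\nThe Gaussian broadening plays the role of the delta functions in Eq.~(\\ref{eq:Szz_practical}), as done for the figures shown below.\n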
The \nresults for the dynamical structure factor on the $30 \\times 30$ cluster are shown in Fig.~\\ref{fig:not}. Here, the spectrum consists \nof a {\\it single} mode, which is identified as the magnon excitation (no continuum is visible). Notice that only one magnon branch is \nvisible, related to the magnon dispersion $\\epsilon_{q}$, since we consider the out-of-plane dynamical structure factor (the {\\it folded}\nbranches $\\epsilon_{q \\pm K}$ do not contribute to the signal). Remarkably, the dispersion of the magnon branch is possible thanks to \nthe Jastrow factor, since the wave function without it would give rise to a trivially flat (gapped) excitation spectrum, reflecting \nthe non-interacting band structure of fermions. By contrast, the long-range Jastrow term is able to produce a reasonable magnon mode, \nwhich agrees fairly well with the spin-wave calculations. In particular, the spectrum is gapless at $\\Gamma=(0,0)$ (with a vanishingly \nsmall weight). Instead, in contrast to spin waves, which correctly predict gapless magnons at $K$ and $K^\\prime$ due to the coplanar\n$120^\\circ$ order, this simple wave function leads to a gapped spectrum at the corners of the Brillouin zone. In connection with this, \nthe out-of-plane static structure factor $S^z({\\bf q})=\\int d\\omega S^z({\\bf q},\\omega)$ does not diverge at $K$ or $K^\\prime$ when \n$L \\to \\infty$, showing only a maximum.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{fig6.pdf}\n\\caption{\\label{fig:cylin}\nThe dynamical structure factor for the nearest-neighbor Heisenberg model on a cylindrical geometry ($84 \\times 6$), to make a close\ncomparison with DMRG calculations by Verresen and collaborators~\\cite{verresen2018}. We apply a Gaussian broadening to the spectrum \nwhich is equivalent to that of the aforementioned DMRG result ($\\sigma=0.077J_1$). The path in the Brillouin zone is shown in the \ninset and in Fig.~\\ref{fig:latt} (the point $A$ lies at $1\/4$ of the $\\Gamma-K^{\\prime\\prime}$ line, where \n$K^{\\prime\\prime}=(-2\\pi\/3,2\\pi\/\\sqrt{3})$; the point $B$ lies at $1\/4$ of the $K-K^\\prime$ line). The dashed line denotes the bottom \nof the continuum, which is evaluated by taking $E_{q}=\\min\\{E_{0}^{q-K}+E_{0}^{K},E_{0}^{q+K}+E_{0}^{-K}\\}$, where $E_{0}^{q}$ is \nthe lowest energy for a given momentum $q$ obtained within our variational approach and $K=(2\\pi\/3,2\\pi\/\\sqrt{3})$.}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{fig7a.pdf}\n\\includegraphics[width=\\columnwidth]{fig7b.pdf}\n\\caption{\\label{fig:007}\nThe dynamical structure factor for the $J_1-J_2$ Heisenberg model on the $30 \\times 30$ cluster with $J_2\/J_1=0.07$ (above) and\n$J_2\/J_1=0.09$ (below). The path along the Brillouin zone is shown in Fig.~\\ref{fig:latt} and a Gaussian broadening of the spectrum \nhas been applied ($\\sigma=0.02J_1$).}\n\\end{figure}\n\n\\begin{figure*}\n\\includegraphics[width=\\columnwidth]{fig8a.pdf}\\hfill\n\\includegraphics[width=\\columnwidth]{fig8b.pdf}\n\\caption{\\label{fig:125}\nThe dynamical structure factor for the $J_1-J_2$ Heisenberg model on the $30 \\times 30$ cluster with $J_2\/J_1=0.125$. The variational results \n(left panel) are compared to the ones obtained from the unprojected Abrikosov fermion Hamiltonian $\\mathcal{H}_0$ of Eq.~(\\ref{eq:auxham}) with \n$t=1$ and $h=0$ (right panel). The path along the Brillouin zone is shown in Fig.~\\ref{fig:latt}. 
We applied a Gaussian broadening of $\\sigma=0.02J_1$ \nto the variational results. Notice that, for the unprojected data, the energy scale is given by the hopping amplitude $t$ of the unprojected \nHamiltonian~(\\ref{eq:auxham}), instead of $J_1$. In addition, the broadening has been rescaled in order to account for the larger bandwidth of the \nspectrum.}\n\\end{figure*}\n\n\\begin{figure*}\n\\includegraphics[width=\\columnwidth]{fig9a.pdf}\\hfill\n\\includegraphics[width=\\columnwidth]{fig9b.pdf}\n\\caption{\\label{fig:square}\nThe dynamical structure factor for the $J_1-J_2$ Heisenberg model on the square lattice ($22 \\times 22$) with $J_2\/J_1=0.55$. The \nvariational results (left panel) are compared to the ones obtained from the unprojected Abrikosov fermion Hamiltonian $\\mathcal{H}_0$ \n(right panel), which contains a flux-phase hopping (of strength $t$) and a $d_{xy}$ pairing (see Ref.~\\cite{ferrari2018b} for details). \nWe applied a Gaussian broadening of $\\sigma=0.02J_1$ to the variational results. Notice that, for the unprojected data, the energy scale \nis given by the hopping amplitude $t$ of the unprojected Hamiltonian of Ref.~\\cite{ferrari2018b}, instead of $J_1$. In addition, the broadening \nhas been rescaled in order to account for the larger bandwidth of the spectrum.}\n\\end{figure*}\n\n\nA much more realistic spectrum is obtained when considering a finite fermion hopping $t$ (with the $\\pi$-flux pattern shown in \nFig.~\\ref{fig:latt}), as well as the optimized value of the fictitious magnetic field $h$ (and the Jastrow factor). The results for\nthe $30 \\times 30$ lattice are reported in Fig.~\\ref{fig:AF120}. In this case, there are several excitations with a finite weight for\neach momentum, thus reproducing the existence of a broad continuum, which extends up to relatively large energies. We would like to\nmention that, with respect to the square lattice~\\cite{ferrari2018b,dallapiazza2015,yu2018}, here many more excitations for each momentum\npossess a visible spectral weight. Within this calculation, we identify the lowest-energy excitation $E_{0}^{q}$ as the magnon peak. This\nassumption is corroborated by the results shown in Fig.~\\ref{fig:dispersions}, where the variational energies $E_{0}^{q}$ closely follow \nthe magnon branch obtained by series expansions. Instead, identifying the lowest-energy peak as the bottom of the continuum is not very \nplausible, since a much broader signal should be present in this case. In this regard, the basis set that is used here for the excited \nstates is made of particle-hole spinon excitations on top of the ground state of the auxiliary Hamiltonian of Eq.~(\\ref{eq:auxham}), before \nGutzwiller projection. For this reason, we argue that, in general, our approach is particularly suited to capture (i) two-spinon excitations \nor (ii) bound states of spinons, e.g., magnons. Multi-magnon excitations are expected to show up with a reduced intensity. In order to discuss \nthe issue of magnon decay, we apply a kinematic argument (as done both in the linear spin-wave approach~\\cite{chernyshev2006,chernyshev2009} \nand within DMRG~\\cite{verresen2018}) and we consider all the possible two-magnons decays, which fulfill the conservation of momenta, i.e., \n$E_{q}= \\min_k \\{ E_{0}^{q-k}+E_{0}^{k}\\}$. For this purpose, we computed the spectrum $E_{0}^{k}$ for all the $k$-vectors in the Brillouin \nzone on the $30 \\times 30$ lattice. 
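\n\nAs an illustration of this kinematic construction, a minimal sketch that evaluates $E_{q}= \\min_k \\{ E_{0}^{q-k}+E_{0}^{k}\\}$ on the discrete grid of allowed momenta reads as follows (the input array is a placeholder for the computed spectrum):\n\\begin{verbatim}\nimport numpy as np\n\ndef two_magnon_bottom(E0):\n    # E0 is an (L1, L2) array of lowest excitation energies on the grid of allowed\n    # momenta (reduced coordinates); periodicity takes care of the q-k folding.\n    L1, L2 = E0.shape\n    Eq = np.empty_like(E0)\n    for n1 in range(L1):\n        for n2 in range(L2):\n            shifted = np.roll(np.roll(E0[::-1, ::-1], n1 + 1, axis=0), n2 + 1, axis=1)\n            Eq[n1, n2] = np.min(shifted + E0)      # min over k of E0(q-k) + E0(k)\n    return Eq\n\\end{verbatim}\n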
The outcome is that the bottom of the two-magnon continuum, defined by the kinematic analysis, lies above \nthe magnon branch. These results clearly indicate an avoided decay in a large part of the Brillouin zone, as suggested by DMRG calculations,\nwhich considered certain (high-energy) parts of the magnon dispersion~\\cite{verresen2018}. Still, we cannot exclude the existence of small \nregions where the magnon decay may persist, especially close to the gapless points. In this respect, within the linear spin-wave approach, \nthe different velocities of the excitation spectrum at $\\Gamma$ and $K$ immediately lead to an unstable magnon branch close to the $\\Gamma$ \npoint~\\cite{chernyshev2006,chernyshev2009}. Should this aspect be a genuine feature of the model, the magnon would be unstable in a small \npart around the center of the Brillouin zone. Unfortunately, given the finiteness of the cluster used in our numerical calculations, we cannot \nreliably estimate the slope of the magnon spectrum at $\\Gamma$ and $K$ and, therefore, make definitive statements on this issue.\n\nHere, we would like to point out the strong renormalization of the magnon branch with respect to spin-wave calculations, see \nFig.~\\ref{fig:dispersions}. Most importantly, we emphasize that, within this most accurate calculation, the magnon branch shows a roton-like \nminimum not only at $M$, but also at $Y_1$, i.e., the midpoint of the edge of the magnetic Brillouin zone (see also Fig.~\\ref{fig:dispersion}), \nas already detected by neutron scattering measurements in Ba$_3$CoSb$_2$O$_9$~\\cite{ito2017}. This feature was not captured by the previous\nseries expansion calculations~\\cite{zheng2006} but, instead, has also been observed in recent DMRG calculations on an infinitely long cylinder \n(with a small circumference $L=6$)~\\cite{verresen2018} and has been interpreted as the hallmark of the absence of magnon decay. In order to \nmake a closer comparison with DMRG data, we perform the variational calculations on a long cylinder ($84 \\times 6$) along the same path in the \nBrillouin zone as the one that has been considered in Ref.~\\cite{verresen2018}. The results are shown in Fig.~\\ref{fig:cylin}. Here, the large \nnumber of lattice points along the cylinder allows us to have a detailed resolution of the magnon branch, which closely follows the one obtained \nby DMRG. In particular, we can estimate the bottom of the continuum by evaluating $E_{q}=\\min \\{ E_{0}^{q-K}+E_{0}^{K}, E_{0}^{q+K}+E_{0}^{-K} \\}$, \nwhere we consider the possible decays involving a magnon at $K$ and $-K$. In doing so, we find that the lowest-energy excitation $E_{0}^{q}$ is \nalways below $E_{q}$, indicating that a well-defined magnon branch exists and magnon decay is avoided. We finally remark that a roton minimum is detected \nalong the same path as the one studied by Verresen and collaborators~\\cite{verresen2018}, strongly suggesting that this is a genuine feature of \nthe Heisenberg model.\n\n\\subsection{The $J_1-J_2$ model}\n \nWe now move to the case in which a next-nearest-neighbor coupling $J_2$ is also present. Within our variational approach, a gapless\nspin-liquid phase is stabilized for $0.08 \\lesssim J_2\/J_1 \\lesssim 0.16$; here, the fictitious magnetic field vanishes in the\nthermodynamic limit and the best wave function only contains fermionic hopping (with $\\pi$-flux threading half of the triangular\nplaquettes)~\\cite{iqbal2016}. 
On a finite size, a small value of $h$ can be stabilized, as well as a tiny Jastrow pseudopotential.\nStill, we verified that these ingredients do not cause appreciable differences in the dynamical structure factor. In Fig.~\\ref{fig:007},\nwe show the results for the $30 \\times 30$ cluster and for two values of $J_2\/J_1$, which are very close to the transition point, one \nstill inside the magnetic phase ($J_2\/J_1=0.07$) and the other in the spin-liquid region ($J_2\/J_1=0.09$). Upon approaching the quantum \nphase transition, the major modification of the spectrum comes from the softening of the magnon excitation at the $M$ points. This \nfeature closely resembles the case of the frustrated $J_1-J_2$ model on the square lattice, previously studied with the same numerical \ntechnique~\\cite{ferrari2018b}, where a softening is clearly detected for ${\\bf q}=(\\pi,0)$ [and $(0,\\pi)$]. In this latter case, this \nfact has been connected to the progressive deconfinement of spinons that have gapless (Dirac) points at ${\\bf q}=(\\pm \\pi\/2,\\pm \\pi\/2)$. \nWe would like to mention that the possibility of having (gapped) almost-deconfined spinons in the unfrustrated Heisenberg model has been \nsuggested by a recent quantum Monte Carlo calculation~\\cite{shao2017}; moreover, clear signatures for deconfined spinons at the transition \nbetween an antiferromagnetically ordered phase and a valence-bond crystal have been reported in the so-called $J-Q$ model~\\cite{ma2018}. \nOn the triangular lattice, the softening of the spectrum at the $M$ points is a direct consequence of the Dirac points at \n${\\bf q}=(0,\\pm \\pi\/\\sqrt{3})$ in the spinon band structure. Therefore, we expect both $M$ and $K$ points to be gapless at the transition \n(as well as $Y_1$, which can be obtained by combining $M$ and $K$ vectors). Indeed, this is necessary for a continuous phase transition, \nas the one that appears in the $J_1-J_2$ Heisenberg model, according to ground-state calculations~\\cite{iqbal2016}.\n\nIn Fig.~\\ref{fig:125}, we report the dynamical structure factor for $J_2\/J_1=0.125$. The \nspin-liquid state is characterized by a broad continuum that extends up to relatively large energies. In particular, around the $M$ \npoints, the magnon roton-like minima of the ordered phase fractionalize into an incoherent set of excitations at low energies. This \nfeature is compatible with the existence of Dirac points in the unprojected spectrum of the auxiliary Hamiltonian $\\mathcal{H}_0$, \nsee Fig.~\\ref{fig:125}. By contrast, a strong signal in the lowest-energy part of the spectrum is detected around the $K$ points, where \nthe unprojected spinon spectrum is instead gapped. In this respect, the Gutzwiller projection is fundamental to include interactions\namong spinons in a non-perturbative way and to give a drastic modification of the low-energy features. This is a distinctive aspect of \nthe triangular lattice, since, on the square lattice, all the low-energy (gapless) points observed in the presence of the Gutzwiller \nprojector [i.e., ${\\bf q}=(0,0)$, $(\\pi,\\pi)$, $(\\pi,0)$ and~$(0,\\pi)$] already exist in the non-interacting picture~\\cite{hu2013}, \nsee Fig.~\\ref{fig:square}. We would like to emphasize that, in contrast to the magnetically ordered phase, where no visible spectral \nweight is present right above the magnon branch (see Fig.~\\ref{fig:AF120}), in the spin-liquid phase the continuum is not separated \nfrom the lowest-energy excitation. 
This outcome corroborates the presence of deconfined spinons in the magnetically disordered phase. \nThe intense signal at $K$ points immediately implies strong (but short-range) antiferromagnetic correlations in the variational wave \nfunction, which are absent in the unprojected $\\pi$-flux state (by contrast, on the square lattice, the $\\pi$-flux state already has \nsignificant antiferromagnetic correlations built into it).\n\nThe presence of low-energy spectral weight at the corners of the Brillouin zone could be ascribed to the existence of critical monopole \nexcitations, as suggested by the analysis of Ref.~\\cite{song2018}. In fact, the Gutzwiller projector, which imposes single occupancy\non each lattice site, introduces temporal fluctuations of the gauge fields that are completely frozen within the non-interacting\npicture (i.e., within the unprojected wave function). Even though we cannot exclude a more conventional picture where a bound state of \nspinons is responsible for the intense signal around $K$, it is plausible that this feature originates from the existence of gauge\nfields, which emerge in the field-theoretical description of spin liquids~\\cite{savary2016}. While gauge fields are known to predominantly\ncontribute to spectral functions of specific Kitaev spin liquids with $\\mathcal{Z}_2$ magnetic fluxes~\\cite{knolle2014}, our calculations \nsuggest that monopole excitations may give some relevant signature in the spin-liquid phase of the $J_1-J_2$ Heisenberg model on the \ntriangular lattice. Remarkably, on the $30 \\times 30$ cluster, the lowest-energy excitation at $K$ is slightly higher inside the \nspin-liquid phase (i.e., for $J_2\/J_1=0.125$) than close to the critical point (i.e., for $J_2\/J_1 \\approx 0.08$), see Figs.~\\ref{fig:007}\nand~\\ref{fig:125}. This fact suggests the possibility that this kind of excitation is slightly gapped in the spin-liquid region, \nwhile being gapless at the critical point. We finally highlight the existence of an unexpected high-energy dispersing mode, which bends \nfrom the $\\Gamma$ point down into the continuum, being seemingly connected to the low-energy excitation at $K$. A comparison with other \nnumerical techniques will be needed to clarify whether this feature is a genuine aspect of the model or an artifact of the present \nvariational approach.\n\n\\section{Conclusions}\n\nIn this work, we performed variational Monte Carlo calculations to estimate the dynamical structure factor of the $J_1-J_2$\nHeisenberg model on the triangular lattice. The results for $J_2=0$ are consistent with the existence of a well-defined magnon branch\nin the whole Brillouin zone, in agreement with recent DMRG calculations~\\cite{verresen2018}. This outcome contrasts with the \nsemiclassical predictions~\\cite{chernyshev2006,chernyshev2009}, which suggested the presence of magnon decay in a large portion \nof the Brillouin zone. When a finite $J_2$ super-exchange is included and the spin-liquid phase is approached, a clear softening of\nthe spectrum is detected around the $M$ points, in close similarity to what happens on the square lattice~\\cite{ferrari2018b}. \nRemarkably, the low-energy physics of the spin liquid phase cannot be fully described by the unprojected spinon picture, since, \nbesides gapless excitations at $M$ and $M^\\prime$, there are anomalously low-energy states appearing around the $K$ points. 
\nOur numerical calculations provide an indisputable evidence of the fact that the non-interacting (i.e., unprojected) spinon spectrum\nis not sufficient to fully explain the low-energy spectrum detected by the dynamical structure factor. In light of the recent \nfield-theoretical analysis~\\cite{song2018}, the natural interpretation of the spectral features around the corners of the Brillouin\nzone comes from the existence of low-energy monopole excitations. This outcome is particularly important, since it would give a \ndirect signature of the fact that these theoretical approaches correctly capture the nature of the spin-liquid phase. \nWe hope that the present results will motivate future investigations in this direction.\n\n\\acknowledgements\nWe are particularly indebted to T. Li, for pointing out interesting aspects of the problem, and A. Chernyshev, for highlighting some \naspects of his results. We also acknowledge C. Batista, Y.-C. He, A. Parola, F. Pollmann, R. Verresen, and A. Vishwanath for useful \ndiscussions.\n\n\\bibliographystyle{apsrev4-1}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nUnconventional superconductors usually refer to the superconductors\nthose can not be understood within the conventional\nBardeen-Cooper-Schrieffer (BCS) theory. Notable examples are\nhigh-$T_c$ cuprates \\cite{Lee06}, heavy fermion superconductors\n\\cite{Stewart84, Lohneysen07, Stockert12}, organic superconductors\n\\cite{Powell11} and iron based superconductors \\cite{Paglione10,\nHirschfeld11, Stewart11, Chubukov12}. The unconventional\nsuperconductivity is usually driven by strong electron-electron\ninteraction, and the electron pairing mechanism usually has a\nmagnetic origin. In the past decades, identifying the precise gap\nsymmetry of unconventional superconductors has attracted great\ntheoretical and experimental efforts since such efforts may lead to\nimportant progress in seeking the microscopic pairing mechanism.\n\n\nMany unconventional superconductors are believed to have a $d$-wave\nenergy gap, which is different from that of isotropic $s$-wave\nsuperconductors. However, it is not an easy task to determine the\nprecise $d$-wave gap symmetry. A powerful and frequently used\napproach is to probe the angular dependence of various observable\nquantities, such as upper critical field \\cite{Won94, Takanaka95,\nKoike96, Naito01, Metlushko97, Won04, Weickert06, Vieyra11},\nspecific heat \\cite{Vorontsov07a, Vorontsov10, An10, Kittaka12}, and\nthermal conductivity \\cite{Vorontsov10,Vorontsov07b,Kasahara08,\nKasahara09}. In this paper, we are mainly interested in the\nbehaviors of in-plane upper critical field $H_{c2}$ in heavy fermion\nsuperconductors. This issue has recently been addressed with the aim\nto identify the precise gap symmetry of some heavy fermion\ncompounds, especially CeCoIn$_5$ \\cite{Weickert06} and\nCeCu$_{2}$Si$_2$ \\cite{Vieyra11}. Despite the intensive theoretical\nand experimental efforts, it remains unclear whether the gap symmetry\nof these compounds is $d_{x^2-y^2}$-wave\nor $d_{xy}$-wave. These two gaps are different from\neach other primarily in the positions of gap nodes. In principle,\ntheir positions can be clarified by measuring the angular dependence\nof $H_{c2}$. Unfortunately, experimental studies have not yet reached\na consensus on this issue. 
In the case of CeCoIn$_{5}$, most current\nexperiments suggest that the gap symmetry should be $d_{x^2-y^2}$-wave~\\cite{An10,Allan13,Zhou13};\nhowever, there is still an experimental discrepancy in the\nconcrete angular dependence of $H_{c2}$ in CeCoIn$_{5}$: some experiments find that the maxima of\n$H_{c2}$ are along the [100] direction \\cite{Settai01, Bianchi03,\nWeickert06}, whereas another experiment observes the maxima along the\n[110] direction \\cite{Murphy02}. This discrepancy is still an open puzzle which needs to\nbe resolved~\\cite{Das13}. In the case of\nCeCu$_{2}$Si$_{2}$, many earlier experiments suggest a $d_{x^2 -\ny^2}$-wave gap~\\cite{Stockert08, Eremin08}. Nevertheless, a recent\nmeasurement \\cite{Vieyra11} observes the maxima of $H_{c2}$ along\nthe $[100]$ direction, which is argued to imply a $d_{xy}$-wave gap according to the\ncorresponding theoretical analysis~\\cite{Vieyra11}. Apparently, more research efforts are\ncalled for to solve these puzzles, which have motivated us to revisit\nthis issue more systematically.\n\n\nNow suppose an external magnetic field is introduced to a\nsuperconductor. In principle, this field can couple to the charge\nand spin degrees of freedom of the electrons via the orbital and\nZeeman mechanisms, respectively. The former mechanism is described by\nthe minimal coupling between the momentum of electrons and the\nvector potential, and can lead to the well-known Abrikosov mixed\nstate in type-II superconductors. The latter mechanism, usually\ncalled Pauli paramagnetic or Pauli limiting effect, is known to be\nimportant in some heavy fermion compounds \\cite{Vieyra11, Bianchi02,\nBianchi08, Kenzelmann08}. Which one of these two effects plays a\ndominant role is determined by a number of physical factors. When\nboth of them are important, novel and interesting properties may\nemerge.\n\n\nSince the mid-1990s, the in-plane $H_{c2}$ has been applied to\nidentify the gap symmetry in layered unconventional superconductors\n\\cite{Won94, Takanaka95, Koike96, Naito01, Metlushko97, Won04,\nWeickert06, Vieyra11}. Early theoretical calculations showed\nthat the in-plane $H_{c2}$ exhibits a fourfold oscillation in\n$d$-wave superconductors \\cite{Won94, Takanaka95}. The presence of\nsuch a fourfold oscillation has already been verified in many\nunconventional superconductors, including high-$T_c$ cuprate superconductors\n\\cite{Koike96, Naito01}, LuNi$_{2}$B$_{2}$C \\cite{Metlushko97},\nand the heavy fermion compounds CeCoIn$_5$ \\cite{Weickert06} and\nCeCu$_{2}$Si$_2$ \\cite{Vieyra11}.\n\n\n\n\\begin{figure}[htbp]\n\\center \\subfigure{\n\\includegraphics[width=3in]{dx2y2.eps}}\n\\\\\n\\vspace{-0.5cm} \\subfigure{\n\\includegraphics[width=3in]{dxy.eps}}\n\\caption{Shapes of $d_{x^2-y^2}$-wave and $d_{xy}$-wave gaps.}\n\\vspace{-0.5cm} \\label{Fig:Shaped}\n\\end{figure}\n\n\n\nIn the early calculations of Won \\emph{et al.} \\cite{Won94} and\nTakanaka \\emph{et al.} \\cite{Takanaka95}, who solely considered the\norbital effect, $H_{c2}$ was found to exhibit its maxima along the\nantinodal directions where the $d$-wave superconducting gap is\nmaximal. The subsequent analysis of Weickert \\emph{et al.}\n\\cite{Weickert06} includes both the orbital and Pauli paramagnetic\neffects, but still finds the maxima of $H_{c2}$ along the antinodal\ndirections. 
A similar conclusion is drawn in a recent work\n\\cite{Vorontsov10}, where the authors also show that increasing the\nPauli effect reduces the difference in $H_{c2}$ between nodal and\nantinodal directions. There seems to be an a priori hypothesis in the\nliterature that a larger gap necessarily leads to a larger magnitude\nof $H_{c2}$, which means that $H_{c2}$ and the $d$-wave gap should always\nhave their maxima and minima at exactly the same azimuthal angles.\nIf such a hypothesis is correct, it would be straightforward to\nidentify the precise gap symmetry. For instance, if the\nexperimentally observed $H_{c2}$ displays its maxima along the [100]\ndirection, the gap possesses a $d_{x^2-y^2}$ symmetry. On the other\nhand, if the maxima are observed along the [110] direction, the gap\nsymmetry should be $d_{xy}$-wave. To make a comparison, we show the\nangular dependence of $d_{x^2-y^2}$- and $d_{xy}$-wave gaps in\nFig.~\\ref{Fig:Shaped}.\n\n\nIt is necessary to emphasize that the above hypothesized connection\nbetween in-plane $H_{c2}$ and the $d$-wave gap, though intuitively\nreasonable, is actually not always correct. When only the\norbital effect is present, the maxima of $H_{c2}$ and the $d$-wave gap are along the\nsame directions in all cases. In the presence of the Pauli paramagnetic\neffect, however, there is indeed no guarantee that such a connection\nis valid. In order to clarify the detailed connection between the\nprecise gap symmetry and the angular dependence of $H_{c2}$, we will\nconsider the influence of the interplay of orbital and Pauli effects\non $H_{c2}$ more systematically. This problem is important because\nin-plane $H_{c2}$ has recently played a significant role in the\ndetermination of the gap symmetries of CeCoIn$_{5}$ and CeCu$_{2}$Si$_{2}$.\n\n\nIn this paper, motivated by the recent progress and the existing\ncontroversy, we analyze the angular dependence of in-plane $H_{c2}$\nand its connection with the $d$-wave gap symmetry by considering the\ninterplay of orbital and Pauli effects in the context of heavy\nfermion compounds. After carrying out systematic calculations, we\nwill show that the maxima of the angle-dependent $H_{c2}(\\theta)$ are\nnot always along the antinodal directions when both the orbital and\nPauli effects are important. The concrete fourfold oscillation\npattern of $H_{c2}(\\theta)$ is determined by a number of physical\nparameters, including temperature $T$, critical temperature $T_{c}$,\ngyromagnetic ratio $g$, fermion velocity $v_0$, and two parameters\nthat characterize the shape of the underlying Fermi surface. Each of\nthese parameters can strongly affect the angular dependence of\n$H_{c2}$. Among the above six relevant parameters, the temperature $T$ is\nparticularly interesting, because in any given compound the temperature is the\nonly free parameter and all the other parameters are fixed at\ncertain values. If we vary the temperature $T$ but fix all the\nremaining parameters, $H_{c2}(\\theta)$ is found to exhibit its maxima\nalong the nodal directions at lower temperatures and along the\nantinodal directions at higher temperatures. This means that the\nangle-dependent $H_{c2}(\\theta)$ is shifted by $\\pi\/4$ as\ntemperature increases across a certain critical value.\n\n\nOur results can be used to clarify the aforementioned experimental\npuzzle about the angular dependence of in-plane $H_{c2}$. 
Since\n$H_{c2}(\\theta)$ shifts by $\\pi\/4$ as some of the relevant\nparameters are changed, the seemingly contradictory experimental\nresults reported in Refs.\\cite{Settai01, Bianchi03, Weickert06} may\nbe well consistent. On the other hand, since the concrete behavior\nof $H_{c2}(\\theta)$ is very sensitive to the specific values of\nseveral parameters, one should be extremely careful when judging the\ngap symmetry by measuring $H_{c2}$.\n\n\nIn Sec.\\ref{Sec:Derive}, we derive the equation for $H_{c2}$ after\nincluding both the orbital and Pauli paramagnetic effects. In\nSec.\\ref{Sec:NumResults}, we present numerical results for $H_{c2}$\nin three cases, i.e., pure orbital effect, pure Pauli paramagnetic\neffect, and interplay of both orbital and Pauli effects. We show\nthat $H_{c2}$ displays complicated angle dependence due to interplay\nof orbital and Pauli effects. In Sec.\\ref{Sec:Discussion}, we\ndiscuss the physical implications of our results and make a\ncomparison with some relevant experiments.\n\n\n\n\\section{Equation for in-plane upper critical field $H_{c2}$ \\label{Sec:Derive}}\n\n\n\n\\begin{figure}[htbp]\n\\center\n\\includegraphics[width=3in]{FS.eps}\n\\caption{Schematic diagram for a rippled Fermi\nsurface.}\\label{Fig:FS}\n\\end{figure}\n\n\n\nHeavy fermion compounds are known to have a layered structure, which\nis analogous to cuprates. However, the inter-layer coupling is not\nas weak as that in cuprates. It is convenient to consider a rippled\ncylinder Fermi surface, schematically shown in Fig.~\\ref{Fig:FS}. The fermion\nvelocity has three components $k_{x,y,z}$, where $k_{x,y}$ denote\nthe two components in the superconducting plane. Here, we use\n$t_{c}$ to represent the inter-layer hoping parameter and $c$ the\nunit size along $z$-direction, and then write the dispersion as\n\\cite{Thalmeier05, Vorontsov07a, Vorontsov07b,Vorontsov10}\n\\begin{eqnarray}\n\\varepsilon(\\mathbf{\\mathbf{k}}) = \\frac{k_{x}^{2}+k_{y}^{2}}{2m} -\n2t_{c}\\cos(k_{z}c).\n\\end{eqnarray}\nIntroducing a constant magnetic field $H$ to the system leads to\nfruitful behaviors. For type-II superconductors, the field $H$\nweaker than lower critical field $H_{c1}$ cannot penetrate the\nsample due to the Meissner effect. As $H$ exceeds $H_{c1}$ and\nfurther increases, the superconducting pairing is gradually\ndestructed by the orbital effect. The superconductivity is entirely\nsuppressed once $H$ reaches the upper critical field $H_{c2}$, which\ncan be obtained by solving the corresponding linearized gap\nequation. In some superconductors, the Pauli paramagnetic effect can\nalso close the gap by breaking spin singlet pairs, and may even be\nmore important than the orbital effect \\cite{Weickert06, Vieyra11}.\nIn order to make a general analysis, we consider both of these two\neffects in the following.\n\n\nTo proceed, it is useful to rewrite the in-plane magnetic field\n$\\mathbf{H}$ in terms of a vector potential $\\mathbf{A}$. Let us\nchoose the $a$-axis as $x$-coordinate and $b$-axis as\n$y$-coordinate, and then write down a vector potential\n\\begin{eqnarray}\n\\mathbf{A} = \\left(0,0,H(-x\\sin\\theta+y\\cos\\theta)\\right),\n\\end{eqnarray}\nwhere $\\theta$ denotes the angle between a-axis and field\n$\\mathbf{H}$. For conventional $s$-wave superconductors, the gap is\nisotropic and the upper critical field $H_{c2}$ is certainly\n$\\theta$-independent. In the case of $d$-wave superconductors,\nhowever, the gap is strongly anisotropic, thus $H_{c2}$ becomes\n$\\theta$-dependent. 
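\n\nAs a quick symbolic check (not part of the original derivation), the gauge choice introduced above indeed yields an in-plane field of magnitude $H$ at an angle $\\theta$ from the $a$-axis, as written next:\n\\begin{verbatim}\nimport sympy as sp\n\nx, y, z, H, theta = sp.symbols('x y z H theta', real=True)\nA = sp.Matrix([0, 0, H * (-x * sp.sin(theta) + y * sp.cos(theta))])\ncurl = sp.Matrix([sp.diff(A[2], y) - sp.diff(A[1], z),\n                  sp.diff(A[0], z) - sp.diff(A[2], x),\n                  sp.diff(A[1], x) - sp.diff(A[0], y)])\nprint(curl.T)   # Matrix([[H*cos(theta), H*sin(theta), 0]])\n\\end{verbatim}\n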
Now the field $\\mathbf{H}$ takes the form\n\\begin{eqnarray}\n\\mathbf{H} &=& \\mathbf{\\nabla}\\times\\mathbf{A} =\n\\left(H\\cos\\theta,H\\sin\\theta,0\\right).\n\\end{eqnarray}\nOne can write the generalized derivative operator as\n\\begin{eqnarray}\n\\mathbf{\\Pi}(\\mathbf{R}) &=& -i\\mathbf{\\nabla}_{\\mathbf{R}} +\n2e\\mathbf{A}(\\mathbf{R}) \\nonumber \\\\\n&=& -i\\partial_{x}\\mathbf{e}_{x} - i\\partial_{y}\\mathbf{e}_{y}\n\\nonumber \\\\\n&& + \\left(-i\\partial_{z} + 2eH\\left(-x\\sin\\theta +\ny\\cos\\theta\\right)\\right)\\mathbf{e}_{z}. \\nonumber\n\\end{eqnarray}\n\n\nFollowing the general methods presented in Refs.~\\cite{Helfand66,\nWerthamer66, Scharnberg80, Lukyanchuk87, Shimahara96,\nShimahara97,Suginishi06,Shimahara09}, we obtain the following\nlinearized gap equation:\n\\begin{eqnarray}\n-\\ln(\\frac{T}{T_{c}})\\Delta(\\mathbf{R}) &=&\n\\int_{0}^{+\\infty}d\\eta\\frac{\\pi T}{\\sinh(\\pi T\\eta)}\n\\int_{-\\pi}^{\\pi}\\frac{d\\chi}{2\\pi}\\int_{0}^{2\\pi}\\frac{d\\varphi}{2\\pi}\n\\nonumber \\\\\n&& \\times \\gamma_{\\alpha}^2(\\hat{\\mathbf{k}}) \\left\\{1-\n\\cos\\left[\\eta\\left(h'+\\frac{1}{2} \\mathbf{v}_{F}(\\hat{\\mathbf{k}})\n\\right.\\right.\\right. \\nonumber \\\\\n&&\\left.\\left.\\left.\\cdot \\mathbf{\\Pi}(\\mathbf{R})\\right)\\right]\n\\right\\}\\Delta(\\mathbf{R}), \\label{eqn:GapL}\n\\end{eqnarray}\nwhere $\\chi = k_{z}c$. The function $\\Delta(\\mathbf{R})$ is\n\\begin{eqnarray}\n\\Delta(\\mathbf{R}) = \\left(\\frac{2eH}{\\pi}\\right)^{\\frac{1}{4}}\ne^{-eH\\left(x\\sin\\theta-y\\cos\\theta\\right)^{2}}.\n\\end{eqnarray}\nHere we do not include Landau level mixing \\cite{Weickert06,\nVieyra11, Lukyanchuk87} for simplicity, which will not affect our\nconclusion. For the chosen Fermi surface, the Fermi velocity vector\nis \\cite{Thalmeier05}\n\\begin{eqnarray}\n\\mathbf{v}_{F}(\\hat{\\mathbf{k}})=v_{a}\\cos\\varphi \\mathbf{e}_{x}\n+v_{a}\\sin\\varphi \\mathbf{e}_{y}+v_{c}\\sin\\chi \\mathbf{e}_{z}.\n\\label{eqn:FermionV}\n\\end{eqnarray}\nThe Fermi velocity component along the $c$-axis is $v_{c} =\n2t_{c}c$. The two-component in-plane velocity vector has a constant\nmagnitude $v_{a}$, defined as $v_{a} = v_{0} \\sqrt{1 +\n\\lambda\\cos(\\chi)}$, where $v_{0} = \\frac{k_{F0}}{m}$ with the Fermi\nmomentum $k_{F0}$ being related to the Fermi energy $\\epsilon_{F}$\nby $k_{F0} = \\sqrt{2m\\epsilon_{F}}$. The shape of rippled cylinder\nFermi surface is characterized by a velocity ratio $v_{c}\/v_{0} =\n\\lambda\\gamma$, where $\\lambda = 2t_{c}\/\\epsilon_{F}$ and $\\gamma =\nck_{F0}\/2$. As will shown below, both $\\lambda$ and $\\gamma$ can\nstrongly affect the behavior of $H_{c2}$. Moreover, we define $h' =\n-\\frac{g\\mu_{B}H}{2}$, where $\\mu_B$ is Bohr magneton and $g$ is the\ngyromagnetic ratio. The orbital effect of magnetic field is\nreflected in the factor $\\mathbf{v}_{F}(\\mathbf{k}) \\cdot\n\\Pi(\\mathbf{R})$, whereas the Pauli paramagnetic effect is reflected\nin the factor $h'$. The concrete behavior of $H_{c2}$ is determined\nby the interplay of these two effects.\n\n\nIn Eq.(\\ref{eqn:GapL}), the influence of gap symmetry is reflected\nin the function $\\gamma_{\\alpha}(\\mathbf{k})$. 
For isotropic\n$s$-wave pairing, $\\gamma_{s}(\\hat{\\mathbf{k}}) = 1$; for $d_{x^2 -\ny^2}$-wave pairing, $\\gamma_{d}(\\hat{\\mathbf{k}}) =\n\\sqrt{2}\\cos(2\\varphi)$; for $d_{xy}$-wave pairing,\n$\\gamma_{d}(\\hat{\\mathbf{k}}) = \\sqrt{2}\\sin(2\\varphi)$.\n\n\n\n\\begin{figure}[htbp]\n\\includegraphics[width=3.1in]{Hc2tNP.eps}\n\\caption{Fourfold oscillation of $\\theta$-dependent $H_{c2}$ at two\nrepresentative temperatures $t = 0.1$ and $t = 0.9$.}\n\\label{Fig:Hc2tNP}\n\\end{figure}\n\n\n\nAlthough the linearized gap equation Eq.(4) is formally general and\nvalid in many superconductors, its solution is determined by a\nnumber of physical effects and associated parameters. For instance,\nthe behavior of $H_{c2}$ may be strongly influenced by the concrete\nshapes of the Fermi surface. The Fermi surface has different spatial\ndependence in various superconductors, which naturally leads to\ndifferent forms of fermion dispersion and Fermi velocity\n$\\mathbf{v}_{F}$. Such a difference certainly affects the equation\nof $H_{c2}$. For spherical Fermi surface, Fermi velocity\n$\\mathbf{v}_{F}$ depends on the azimuthal angle $\\varphi$ within the\nbasal plane and the angle between $z$-axis and $\\mathbf{v}_{F}$.\nTherefore, the equation of $H_{c2}$ contains the integrations over\nthese two variables \\cite{Shimahara96, Suginishi06, Shimahara09}.\nFor cylindrical Fermi surface, the direction of vector\n$\\mathbf{v}_{F}$ solely depends on the azimuthal angle $\\varphi$, so\nthere is only the integration over angle $\\varphi$ in the equation\nof $H_{c2}$ \\cite{Shimahara97}. For rippled Fermi surface, the\ndirection of $\\mathbf{v}_{F}$ depends on the azimuthal angle\n$\\varphi$ and the coordinate $\\chi$ along $z$-axis, then the\nintegrations over $\\varphi$ and $\\chi$ enter into the equation of\n$H_{c2}$, as shown in Eq.~(\\ref{eqn:GapL}). In addition, there are\ntwo independent parameters $\\lambda$ and $\\gamma$ which can\ncharacterize the rippled Fermi surface in Eq.~(\\ref{eqn:GapL}).\nNotice that once $\\lambda = 0$, the rippled cylindrical Fermi\nsurface reduces to the cylindrical Fermi surface. The influence of\nFermi surface on $H_{c2}$ is rarely studied in the literature. 
In\nthis paper, we adopt rippled Fermi surface and show that\n$H_{c2}$ can exhibit different behaviors under different parameters.\n\n\n\n\n\\begin{figure}[htbp]\n\\center\n\\includegraphics[width=3.1in]{Hc2tCPNP.eps}\n\\caption{$t$-dependence of $H_{c2}$ with $T_{c}=1K$, $v_{0} =\n3000m\/s$, $\\lambda = 0.5$, and $\\gamma=1$.} \\label{Fig:Hc2tCPNP}\n\\end{figure}\n\n\n\nTo facilitate analytical computation, we can choose the direction of\nfield $\\mathbf{H}$ as a new $z'$-axis and define\n\\begin{eqnarray}\n\\left\\{\n\\begin{array}{l}\n\\mathbf{e}_{x}'=\\mathbf{e}_{x}\\sin\\theta-\\mathbf{e}_{y}\\cos\\theta\n\\\\\n\\mathbf{e}_{y}'=-\\mathbf{e}_{z}\n\\\\\n\\mathbf{e}_{z}'=\\mathbf{e}_{x}\\cos\\theta+\\mathbf{e}_{y}\\sin\\theta\n\\end{array}\\right..\n\\end{eqnarray}\nIn the coordinate frame spanned by $(\\mathbf{e}_{x}',\n\\mathbf{e}_{y}', \\mathbf{e}_{z}')$, we have a new velocity vector\n\\begin{eqnarray}\n\\mathbf{v}_{F}(\\hat{\\mathbf{k}}) &=& v_{a}\\sin(\\theta-\\varphi)\n\\mathbf{e}_{x}' - v_{c}\\sin(\\chi)\\mathbf{e}_{y}' \\nonumber \\\\\n&& + v_{a}\\cos(\\theta-\\varphi)\\mathbf{e}_{z}',\\label{eqn:FermionV2}\n\\end{eqnarray}\nand a new generalized derivative operator\n\\begin{eqnarray}\n\\mathbf{\\Pi}(\\mathbf{R}) &=& \\sqrt{eH}\n\\left[\\left(a_{+}+a_{-}\\right)\\mathbf{e}_{x}' -\ni\\left(a_{+}-a_{-}\\right)\\mathbf{e}_{y}' \\right.\\nonumber\n\\\\\n&&\\left.+\\sqrt{2}a_{0}\\mathbf{e}_{z}'\\right], \\label{eqn:PiDef}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\na_{\\pm} &=& \\frac{1}{2\\sqrt{eH}}\\left[-i\\sin\\theta\\partial_{x} +\ni\\cos\\theta\\partial_{y} \\mp \\partial_{z}\\right. \\nonumber \\\\\n&&\\left.\\pm 2ieH(x\\sin\\theta-y\\cos\\theta)\\right], \\\\\na_{0} &=& \\frac{1}{\\sqrt{2eH}}\\left[-i\\partial_{x}\\cos\\theta -\ni\\partial_{y}\\sin\\theta \\right],\n\\end{eqnarray}\nwhich satisfy\n\\begin{eqnarray}\n[a_{-},a_{+}] = 1, [a_{\\pm},a_{0}] = 0.\n\\end{eqnarray}\n\nIn the following analysis, we take $d_{x^2-y^2}$-wave pairing as an\nexample and assume that $\\gamma_{d}(\\hat{\\mathbf{k}}) =\n\\sqrt{2}\\cos(2\\varphi)$. The results in the case of $d_{xy}$-wave\npairing can be obtained analogously. It is easy to examine that the\nqualitative conclusion will be not changed. After averaging over the\nground state $\\Delta_{0}(\\mathbf{R})$ on both sides of\nEq.(\\ref{eqn:GapL}) and inserting the $d_{x^2-y^2}$-wave gap\n$\\gamma_{d}(\\hat{\\mathbf{k}}) = \\sqrt{2}\\cos(2\\varphi)$, we obtain\nthe following integral equation for $H_{c2}$,\n\\begin{eqnarray}\n-\\ln t &=& \\int_{0}^{+\\infty}\\frac{du}{\\sinh\\left(u\\right)}\n\\left\\{1-\\cos\\left(hu\\right)\\int_{-\\pi}^{\\pi}\\frac{d\\chi}{2\\pi}\n\\int_{0}^{2\\pi}\\frac{d\\varphi}{2\\pi}\\right. \\nonumber \\\\\n&&\\times \\left[1+\\cos(4\\theta)\\cos(4\\varphi)\\right] \\nonumber \\\\\n&&\\times \\exp\\left[-\\rho u^2\n\\left(\\lambda^2\\gamma^2\\sin^{2}(\\chi)\\right.\\right. \\nonumber \\\\\n&&\\left.\\left.\\left.+\\left(1+\\lambda\\cos(\\chi)\\right)\\sin^{2}(\\varphi)\n\\right)\\right] \\right\\},\\label{eqn:Hc2Expression}\n\\end{eqnarray}\nwhere $t=\\frac{T}{T_{c}}$, $h=\\frac{g\\mu_{B}H_{c2}}{2\\pi k_{B}T}$\nand $\\rho=\\frac{v_{0}^{2}eH_{c2}}{8\\pi^2 k_{B}^{2}T^2}$. One can\nanalyze the detailed behavior of $H_{c2}$, especially its dependence\non various physical parameters, systematically by solving this\nintegral equation. 
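Although Eq.~(\ref{eqn:Hc2Expression}) cannot be solved in closed form, its numerical solution is straightforward: for fixed $t$, $\theta$, and material parameters, both $h$ and $\rho$ are linear in $H_{c2}$, so $H_{c2}(\theta)$ is the root of a one-dimensional equation. The sketch below is our illustration, not the code used to produce the figures; the SI constants, the explicit factor of $\hbar$ restored in $\rho$ (natural units are used in the text), the quadrature grids, the cutoff of the $u$-integral, and the root bracket are all assumptions that may need adjustment. It shows one possible implementation with standard quadrature and bracketing root finding:
\begin{verbatim}
# Minimal numerical sketch for H_c2(theta, t) from the integral equation above.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

kB, muB, e, hbar = 1.381e-23, 9.274e-24, 1.602e-19, 1.055e-34  # SI units

def residual(H, theta, t, Tc=1.0, v0=3000.0, lam=0.5, gam=1.0, g=1.0):
    """Right-hand side of the integral equation minus (-ln t)."""
    T   = t * Tc
    h   = g * muB * H / (2.0 * np.pi * kB * T)
    rho = v0**2 * e * H * hbar / (8.0 * np.pi**2 * kB**2 * T**2)
    chi = np.linspace(-np.pi, np.pi, 64, endpoint=False)    # chi = k_z c
    phi = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    CHI, PHI = np.meshgrid(chi, phi, indexing="ij")
    weight = 1.0 + np.cos(4.0 * theta) * np.cos(4.0 * PHI)

    def integrand(u):
        damp = np.exp(-rho * u**2 * (lam**2 * gam**2 * np.sin(CHI)**2
                      + (1.0 + lam * np.cos(CHI)) * np.sin(PHI)**2))
        ang = np.mean(weight * damp)   # grid average = double angular integral
        return (1.0 - np.cos(h * u) * ang) / np.sinh(u)

    val, _ = quad(integrand, 1e-6, 40.0, limit=200)
    return val + np.log(t)

def Hc2(theta, t, **pars):
    # Root bracket in tesla; widen it if the parameters are changed.
    return brentq(lambda H: residual(H, theta, t, **pars), 1e-3, 60.0)
\end{verbatim}
Setting $g=0$ or $v_{0}=0$ in such a sketch switches off the Pauli or the orbital contribution, respectively, which corresponds to the limiting cases analyzed in Sec.~\ref{Sec:NumResults}.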
This will be done in the next section.\n\n\n\n\\begin{figure}[htbp]\n\\includegraphics[width=3.6in]{PauliOnly.eps}\n\\caption{$t$-dependence of $H_{c2}$ with $T_{c}=1K$, $\\lambda=0.5$,\n$\\gamma = 1$, and $g = 1$.} \\label{Fig:PauliOnly}\n\\end{figure}\n\n\n\n\\section{Numerical results of $H_{c2}$ and physical implications\n\\label{Sec:NumResults}}\n\n\nIn this section, we first present the numerical solutions of\nEq.~(\\ref{eqn:Hc2Expression}), then discuss the physical\nimplications of the results, and finally compare our results with\nsome recent experiments. From Eq.~(\\ref{eqn:Hc2Expression}), we know\nthe behavior of $H_{c2}(\\theta)$ is determined by six physical\nparameters:\n\\begin{eqnarray}\n&& T_{c}: \\mbox{Zero-field critical temperature},\n\\\\\n&& t = T\/T_{c},\n\\\\\n&& v_{0} = \\sqrt{2\\epsilon_{F}\/m},\n\\\\\n&& g: \\mbox{gyromagnetic ratio},\n\\\\\n&& \\lambda = 2t_{c}\/\\epsilon_{F},\n\\\\\n&& \\gamma = k_{F0}c\/2.\n\\end{eqnarray}\nAmong this set of parameters, $\\lambda$ and $\\gamma$ are related to\nthe shape of rippled cylinder Fermi surface. We notice that the\ninfluence of these two parameters are rarely investigated in\nprevious works on $H_{c2}$. The critical temperature $T_c$ and the\ngyromagnetic ratio $g$ will be taken as varying parameters.\n\n\nThe detailed behavior of $H_{c2}$ can be clearly seen from its\nangular dependence. In addition, it is also interesting to analyze\nthe difference of $H_{c2}$ between its values obtained at $\\theta =\n45^{\\degree}$ and $\\theta = 0^{\\degree}$:\n\\begin{eqnarray}\n\\Delta H_{c2} = H_{c2}(\\theta = 45^{\\degree}) - H_{c2}(\\theta =\n0^{\\degree}),\n\\end{eqnarray}\nsince the maxima and minima of $H_{c2}$ always appear at these two\nangles. $H_{c2}$ exhibits its maxima at $\\theta = 45^{\\degree}$ if\n$\\Delta H_{c2} > 0$ and at $\\theta = 0^{\\degree}$ if $\\Delta H_{c2}\n< 0$.\n\n\nIn order to demonstrate the influence of the orbital effect and that\nof the Pauli paramagnetic effect on the angular dependence of\nin-plane $H_{c2}$, we find it helpful to consider three cases\nseparately: pure orbital effect; pure Pauli effect; interplay of\norbital and Pauli effects.\n\n\n\n\\begin{figure}[htbp]\n\\includegraphics[width=3in]{Hc2t.eps}\n\\caption{Angular dependence of $H_{c2}$ at $t = 0.1$ and $t = 0.9$\nwith $T_{c} = 1K$, $v_{0} = 3000m\/s$, $\\lambda=0.5$, $\\gamma = 1$,\nand $g=1$. The fourfold oscillation patterns are apparently\ndifferent at low and high temperatures.} \\label{Fig:Hc2t}\n\\end{figure}\n\n\n\n\\subsection{Pure orbital effect\\label{subsec:PureOrbital}}\n\n\nFirst, we consider only the orbital effect by setting the\ngyromagnetic factor $g = 0$. In this case, the factor $\\cos(hu)$\nappearing in Eq.(\\ref{eqn:Hc2Expression}) is equal to unity, $\\cos(hu) = 1$. We assume\nthat $T_c = 1K$, $v_{0} = 3000m\/s$, $\\lambda=0.5$, and $\\gamma=1$,\nwhich are suitable parameters in heavy fermion compounds.\n\n\nAfter carrying out numerical calculations, we plot the angular\ndependence of $H_{c2}(\\theta)$ in Fig.~\\ref{Fig:Hc2tNP} at two\nrepresentative temperatures $t = 0.1$ and $t = 0.9$. It is easy to\nsee from Fig.~\\ref{Fig:Hc2tNP} that $H_{c2}(\\theta)$ exhibits a\nfourfold oscillation pattern. 
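Curves of this type can be generated directly from the numerical sketch given at the end of Sec.~\ref{Sec:Derive}; for instance, in the pure orbital limit one may scan $\theta$ and evaluate $\Delta H_{c2}$ as follows (using the hypothetical \texttt{Hc2} helper defined there, with the parameter values quoted above; illustrative only):
\begin{verbatim}
# Angular scan and Delta H_c2 in the pure orbital limit (g = 0) at t = 0.1.
import numpy as np
thetas = np.linspace(0.0, np.pi / 2, 19)
curve  = [Hc2(th, 0.1, g=0.0) for th in thetas]
dH     = Hc2(np.pi / 4, 0.1, g=0.0) - Hc2(0.0, 0.1, g=0.0)   # Delta H_c2
\end{verbatim}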
Moreover, the maxima of $H_{c2}$ are\nalways along the antinodal directions for any values of the relevant\nparameters, which means the angular dependence of orbital\neffect-induced $H_{c2}$ is exactly the same as that of $d$-wave gap.\nThis is consistent with the original theoretical predictions of Won\n\\emph{et. al.} \\cite{Won94} and Takanaka \\emph{et. al.}\n\\cite{Takanaka95}. An important feature that needs to be emphasized\nis that the positions of peaks are temperature independent, as\nclearly manifested in both Fig.~\\ref{Fig:Hc2tNP} and\nFig.~\\ref{Fig:Hc2tCPNP}.\n\n\nThe above properties can also be elaborated by the detailed\n$t$-dependence of $H_{c2}$ and $\\Delta H_{c2}$ are presented in\nFig.~\\ref{Fig:Hc2tCPNP}. $H_{c2}$ is an monotonously decreasing\nfunction of parameter $t$, valid for all values of $\\theta$. This is\neasy to understand since the magnitude of the superconducting gap\nalways decreases monotonously with growing temperature. Moreover,\nthe difference $\\Delta H_{c2}$ is negative for all values of $t$.\n\n\n\n\\begin{figure}[htbp]\n\\center\n\\includegraphics[width=3in]{Hc2tCP.eps}\n\\caption{$t$-dependence of $H_{c2}$ with $T_{c} = 1K$, $v_{0} =\n3000m\/s$, $\\lambda = 0.5$, $\\gamma=1$, and $g = 1$.}\n\\label{Fig:Hx2tCP}\n\\end{figure}\n\n\n\n\\subsection{Pure Pauli paramagnetic effect}\n\n\nWe next consider the effects of pure Pauli paramagnetic effect by\nsetting $v_{0}=0$, which leads to\n\\begin{eqnarray}\n-\\ln(t) = \\int_{0}^{+\\infty}du\\frac{1 - \\cos(hu)}{\\sinh(u)},\n\\end{eqnarray}\nwhich is completely independent of $\\theta$. The $t$-dependence of\n$H_{c2}$ is shown in Fig.~\\ref{Fig:PauliOnly}. Different from pure\norbital effect, $H_{c2}$ is not a monotonous function: it rises\ninitially with growing $t$, but decreases as $t$ is larger than\ncertain critical value $t_c$, which is roughly $0.5t$ under the\nchosen set of parameters.\n\n\n\n\\subsection{Interplay of orbital and Pauli effects}\n\n\nWe now turn to the general and interesting case in which both the\norbital and Pauli paramagnetic effects are important. This case is\nbroadly believed to be realized in several heavy fermion compounds,\nsuch as CeCoIn$_5$ and CeCu$_{2}$Si$_2$. As aforementioned, the\nconcrete behaviors of $\\theta$-dependent $H_{c2}$ are influenced by\na number of parameters. In order to illustrate the numerical results\nand their physical implications, we vary one particular parameter\nwhile fixing all the rest parameters. In most of the following\ncalculations, the gyromagnetic factor is taken to be $g = 1$. The\ninfluence of various values of $g$ will be analyzed separately.\n\n\n\n\\begin{figure}[htbp]\n\\center\n\\includegraphics[width=3in]{Hc2Tc.eps}\n\\caption{$T_c$-dependence of $H_{c2}$ with $t = 0.5$, $v_{0} =\n3000m\/s$, $\\lambda=0.5$, $\\gamma=1$, and $g=1$.} \\label{Fig:Hc2Tc}\n\\end{figure}\n\n\n\nAs shown in Fig. \\ref{Fig:Hc2t}, under the currently chosen\nparameters, the maxima of $H_{c2}$ locates along the antinodal\ndirections at a relatively higher temperature $t = 0.9$. This\nbehavior is very similar to that in the case of pure orbital effect.\nHowever, at a relatively lower temperature $t = 0.1$, the maxima of\n$H_{c2}$ are along the nodal directions where the $d_{x^2 - y^2}$-wave\ngap vanishes. 
Two conclusions can be immediately drawn: $H_{c2}$\ndoes not always exhibit its maxima at the angles where the\nsuperconducting gap reaches its maximal value; the fourfold\noscillation curves of $H_{c2}$ is shifted by $\\pi\/4$ as temperature\ngrows in the range of $0 < T < T_c$.\n\n\nFrom Fig. \\ref{Fig:Hx2tCP}(a), we see that $H_{c2}$ first arises\nwith growing $t$ and then decreases rapidly once $t$ exceeds a\ncritical value. Apparently, such a non-monotonous $t$-dependence of\n$H_{c2}$ is a consequence of the interplay of both orbital and Pauli\nparamagnetic effects. On the other hand, the difference $\\Delta\nH_{c2}$ shown in Fig. \\ref{Fig:Hx2tCP}(b) is positive for small\nvalues of $t$ but becomes negative for larger values of $t$.\n\n\nAddition to temperature $t$, the concrete angular dependence of\n$H_{c2}$ is also strongly influenced by a number of other physical\nparameters, including critical temperature $T_c$, fermion velocity\n$v_0$, gyromagnetic factor $g$, and two Fermi surface factors\n$\\lambda$ and $\\gamma$. Indeed, different values of these parameters\ncan lead to very different behaviors of $H_{c2}$. In the following,\nwe show how $H_{c2}$ and $\\Delta H_{c2}$ are changed as these\nparameters are varying. To simplify the analysis, we vary one\nparticular parameter and fix all the other parameters in each\nfigure.\n\n\n\n\\begin{figure}[htbp]\n\\center\n\\includegraphics[width=3in]{Hc2v0.eps}\n\\caption{$v_{0}$-dependence of $H_{c2}$ with $t=0.1$, $T_{c}=1K$,\n$\\lambda=0.5$, $\\gamma=1$, and $g=1$.} \\label{Fig:Hc2V0}\n\\end{figure}\n\n\n\n$T_c$: First, we consider the influence of critical temperature\n$T_c$ on $H_{c2}$ and $\\Delta H_{c2}$, and show the results in Fig.\n\\ref{Fig:Hc2Tc}. It is well-known that $T_c$ of heavy fermion\ncompounds is actually quite low, especially when compared with\ncuprates and iron-based superconductors. To cover all possible heavy\nfermion compounds, we assume $T_{c}$ varies in the range of\n$[0,3K]$. All the other parameters are fixed. $H_{c2}$ rises\nmonotonously with growing $T_c$, which is obviously owing to the\nmonotonous increase of the superconducting gap. Moreover, if $T_{c}$\nis smaller than some critical value, $\\Delta H_{c2}$ is negative,\nwhich means the maxima of $H_{c2}$ are along the antinodal directions.\nFor larger $T_{c}$, $\\Delta H_{c2}$ becomes positive and the maxima\nof $H_{c2}$ are shifted to the nodal directions. Apparently, $T_c$\nhas important impacts on the concrete angular dependence of\n$H_{c2}$. In passing, we point out that the maxima of $H_{c2}$ will\nbe shifted back to the antinodal directions for even higher $T_c$\n(not shown in the figure).\n\n\n$v_0$: We then consider the influence of fermion velocity $v_0$ on\n$H_{c2}$ and $\\Delta H_{c2}$, and show the results in Fig.\n\\ref{Fig:Hc2V0}. In the limit $v_{0} = 0$, the orbital effect is\nactually ignored and the Pauli effect entirely determines $H_{c2}$.\nIn such a limit, $H_{c2}$ is angle independent, so $\\Delta H_{c2} =\n0$. For finite $v_0$, $H_{c2}$ becomes angle dependent and exhibits\nfourfold oscillation, as a consequence of the interplay between\norbital and Pauli effects. As $v_0$ is growing, $H_{c2}$ first\nincreases and then decreases, which indicates that the enhancement\nof orbital effect does not necessarily suppress $H_{c2}$ once the\nPauli paramagnetic effect is present. However, as already discussed\nearlier, $H_{c2}$ deceases monotonously with growing $v_{0}$ when\nthe Pauli effect is completely neglected. 
Furthermore, $\\Delta\nH_{c2}$ is negative for both small and large values of $v_{0}$, but\nis positive for intermediate values of $v_{0}$. Therefore, the\nconcrete angular dependence of $H_{c2}$ is very sensitive to the\nvalues of fermion velocity.\n\n\n\n\\begin{figure}[htbp]\n\\center\n\\includegraphics[width=3.04in]{Hc2g.eps}\n\\caption{$g$-dependence of $H_{c2}$ with $t = 0.1$, $v_{0} =\n3000m\/s$, $T_{c} = 1K$, $\\lambda = 0.5$, and $\\gamma = 1$.}\n\\label{Fig:Hc2g}\n\\end{figure}\n\n\n\n$g$: We next consider the influence of the gyromagnetic factor $g$,\nwhich characterizes the effective strength of Pauli paramagnetic\neffect. The dependence of $H_{c2}$ and $\\Delta H_{c2}$ on $g$ is\ngiven in Fig. \\ref{Fig:Hc2g}. First of all, taking $g = 0$ simply\nleads to the known results obtained in the case of pure orbital\neffect presented in Sec.\\ref{subsec:PureOrbital}. Second, $H_{c2}$\ndecreases monotonously with growing $g$. An immediate indication of\nthis behavior is that increasing the Pauli paramagnetic effect\nalways tends to suppress $H_{c2}$ in the presence of orbital effect.\nFinally, it is easy to observe that $\\Delta H_{c2}$ is negative if\n$g$ takes very small values and positive when $g$ becomes larger\nthan certain critical value. Therefore, the gyromagnetic factor $g$\nalso plays a crucial role in the determination of the concrete angle\ndependence $H_{c2}$.\n\n\n$\\lambda$: $\\lambda$ represents the ratio of inter layer coupling\n$2t_{c}$ and the Fermi energy $E_{F}$. If $t_{c}=0$, the\ncorresponding $\\lambda=0$, then the rippled cylindrical Fermi\nsurface reduce to the cylindrical Fermi surface. The dependence of\n$H_{c2}$ and $\\Delta H_{c2}$ on $\\lambda$ is as depicted in Fig.\n\\ref{Fig:Hc2lambda}. $H_{c2}$ deceases monotonously with the growing\n$\\lambda$. For given values of other parameters shown in\nFig.~\\ref{Fig:Hc2lambda}, the maxima of $H_{c2}$ are along the nodal\ndirections for small $\\lambda$, but along the antinodal directions for\nlarge values of $\\lambda$.\n\n\n$\\gamma$: $\\gamma$ represents the ratio of two momentum $k_{F0}$ and\n$1\/2c$, $c$ is the unit cell size along third direction. The\ndependence of $H_{c2}$ and $\\Delta H_{c2}$ on $\\gamma$ is shown in\nFig.~\\ref{Fig:Hc2gamma}. $H_{c2}$ deceases monotonously with the\ngrowing $\\gamma$. For given values of other parameters shown in\nFig.~\\ref{Fig:Hc2gamma}, the maxima of $H_{c2}$ are along the nodal\ndirections for small $\\gamma$, but along the antinodal directions for\nlarge values of $\\gamma$.\n\n\n\n\\begin{figure}[htbp]\n\\center\n\\includegraphics[width=3in]{Hc2lambda.eps}\n\\caption{Variation with $\\lambda$ fixing $t=0.5$, $v_{0}=5000m\/s$,\n$T_{c}=1K$, $\\gamma=1$ and $g=1$.} \\label{Fig:Hc2lambda}\n\\end{figure}\n\n\n\nFrom all these results, we see that both the magnitudes and the\ndetailed angular dependence of in-plane $H_{c2}$ are significantly\ninfluenced by a number of physical parameters. A particularly\ninteresting feature is the fourfold oscillation pattern of angle\ndependent $H_{c2}$ can be shifted by $\\pi\/4$ if one varies any one\nof these parameters. 
$H_{c2}$ may exhibit its maxima along either\nnodal or antinodal directions, depending on the specific values of\nrelevant parameters, which is apparently in sharp contrast in the\nnaive notion that $H_{c2}$ always displays the same angle dependence\nof the $d$-wave superconducting gap.\n\n\n\n\\subsection{Comparison with recent experiments}\n\n\nAs aforementioned, in the last several years the in-plane $H_{c2}$\nhas been widely investigated with the aim to identify the precise\nsuperconducting gap symmetry in two heavy fermion compounds\nCeCoIn$_5$ and CeCu$_{2}$Si$_2$. In this subsection, we make a\ncomparison between our theoretical analysis and some recent\nexperiments of $H_{c2}$. Our results are valuable to theoretical and\nexperimental research of $H_{c2}$ in two main aspects.\n\n\n\n\\begin{figure}[htbp]\n\\includegraphics[width=3in]{Hc2gamma.eps}\n\\caption{$\\gamma$-dependence of $H_{c2}$ with $t = 0.5$, $v_{0} =\n3000m\/s$, $T_{c} = 1K$, $\\lambda = 0.5$, and $g = 1$.}\n\\label{Fig:Hc2gamma}\n\\end{figure}\n\n\n\nFirst, one should be very careful when fitting theoretical\ncalculations with experimental data. In the current literature, it\nis often taken for granted that the in-plane $H_{c2}$ always\nexhibits exactly the same angular dependence as that of the\nsuperconducting gap. In other words, the maxima of in-plane\n$H_{c2}$ are believed to be always along the antinodal directions\nwhere the $d$-wave gap is maximal. According to this seemingly\ncorrect relationship, the superconducting gap symmetry is simply\nidentified as $d_{x^2 - y^2}$-wave ($d_{xy}$-wave) if\n$H_{c2}(\\theta)$ is found to exhibit its maxima at $\\theta =\n0^{\\degree}$ ($45^{\\degree}$). However, as showed in our extensive\ncalculations, such a relationship is not always correct. In a\nPauli-limited \\emph{d}-wave superconductor, the maxima of $H_{c2}$\nmay be along either the nodal or the antinodal direction, depending\non the specific values of a number of physical parameters, as a\nconsequence of the delicate interplay between orbital and Pauli\neffects. Inaccurate and even incorrect conclusions might be drawn if\nsome of these parameters are not properly chosen. In order to\nidentify the precise gap symmetry of CeCoIn$_5$ or\nCeCu$_{2}$Si$_{2}$, one should first choose suitable values for all\nthe relevant parameters before probing the angular dependence of\n$H_{c2}$ and deducing the gap symmetry from experimental data.\n\n\n\n\nAmong the above six relevant parameters, the temperature $t$ is\nparticularly interesting, because in any given compound $t$ is the\nonly free parameter and all the other parameters are fixed at\ncertain values. Our extensive calculations show that there is always\na $\\pi\/4$ difference between $H_{c2}$ and $d$-wave gap at small $t$\nand that $H_{c2}$ and $d$-wave gap always exhibit exactly the same\nangular dependence once $t$ exceeds certain critical value, provided\nthat the gyromagnetic factor $g$ is sufficiently large. It appears\nthat the impact of Pauli effect on $H_{c2}$ is much more important\nat low temperatures than at high temperatures. If one attempts to\ndeduce the precise gap symmetry by fitting experiments of $H_{c2}$,\nit would be better to measure $H_{c2}$ at a series of very different\ntemperatures. 
Otherwise, incorrect results might be obtained.\n\n\n\nSecond, our results may help to resolve some current controversies\nwith regard to the precise gap symmetry of heavy fermion compounds.\nThe gap symmetry of CeCoIn$_{5}$ has been investigated extensively\nby means of various experimental techniques. Settai \\emph{et. al.}\n\\cite{Settai01} reported that the maxima of in-plane $H_{c2}$ are along\n[100] direction through de Haas-van Alphen oscillation signal at\n$40$mK. The cantilever magnetometer measurements at $20$mK of Murphy\n\\emph{et. al.} \\cite{Murphy02} observed that the maxima of $H_{c2}$\nare along [110] direction. Bianchi et. al. \\cite{Bianchi03} measured the\nspecific heat and found the maxima of $H_{c2}$ along [100] direction\nat temperatures higher than $1$K. After measuring the magnetic field\ndependence of electric resistivity at $100$mK, Weickert \\emph{et.\nal.}\\cite{Weickert06} revealed that the maxima of\n$H_{c2}$ are along [100] direction. Obviously, there seems to be a\ndiscrepancy among the experimental results about the detailed\nangular dependence of $H_{c2}$, which is considered as an open\npuzzle \\cite{Das13} and complicates the search for the precise gap\nsymmetry.\n\n\nAccording to our results, however, probably such a discrepancy does\nnot exist at all, since the maxima of $H_{c2}$ may be along either\n[100] or [110] direction when some of the relevant physical\nparameters are moderately changed. In particular, the maxima of\n$H_{c2}$ can shift by $\\pi\/4$ as the temperature increases beyond\ncertain critical value. Notice that the measurements of\nRef.~\\cite{Murphy02} were performed at a temperature as low as\n$20$mK. There is a good possibility that the position of the maxima\nof $H_{c2}$ are shifted from [110] direction at low temperatures to\n[100] direction at higher temperatures. Although this possibility\nneeds to be further examined, it should be safe to say that the\nseemingly contradictory experimental results about in-plane $H_{c2}$\nof CeCoIn$_{5}$ may be well consistent with each other. More careful\nand more systematical research are required to completely solve this\nproblem.\n\n\n\\section{Discussion and Conclusion\\label{Sec:Discussion}}\n\n\nIn this paper, we have performed a detailed and systematical\nanalysis of the unusual behaviors of in-plane upper critical field\n$H_{c2}$ in the contexts of Pauli-limited heavy fermion compounds.\nWe show that the concrete angular dependence of $H_{c2}$ is\ndetermined by a delicate interplay of the orbital and Pauli\nparamagnetic effects. The most interesting result is that $H_{c2}$\ndoes not necessarily exhibit the same fourfold oscillation pattern\nas the $d$-wave superconducting gap, which is often taken for\ngranted in the literature. For certain values of a series of\nphysical parameters, $H_{c2}$ may display its maxima along the nodal\ndirections where the superconducting gap vanishes. We also have\ncompared our theoretical analysis with some current measurements of\nin-plane $H_{c2}$ in two heavy fermion compounds CeCoIn$_5$ and\nCeCu$_{2}$Si$_2$.\n\n\nThe theoretical results presented in this paper impose an important\nrestraint on the determination of the precise gap symmetry of Pauli\nlimited $d$-wave heavy fermion superconductors by means of measuring\nthe in-plane $H_{c2}$. One has to be extremely careful when trying\nto deduce the gap symmetry from experiments of $H_{c2}$.\n\n\nThe authors are supported by the National Natural Science Foundation\nof China under grants No. 
11074234 and No. 11274286.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIt has been shown that neural networks are vulnerable to adversarial examples~\\citep{szegedy2016rethinking,goodfellow2015explaining,carlini2017towards,athalye2018obfuscated}. \nGiven a victim neural network model and a correctly classified example, an adversarial attack aims to compute a small perturbation such that with this perturbation added, the original example will be misclassified. \nMany adversarial attacks have been proposed in the literature. \nMost of them consider the white-box setting, where the attacker has full knowledge about the victim model, and thus gradient based optimization can be used for attack. Popular Examples include \nC\\&W~\\citep{carlini2017towards} and PGD~\\citep{madry2017towards} attacks. \nOn the other hand, \nsome more recent attacks have considered the probability black-box setting where the attacker does not know the victim model's structure and weights, but can iteratively query the model and get the corresponding probability output.\nIn this setting, although gradient (of output probability to the input layer) is not computable, it can still be estimated using finite differences, and \nalgorithms many attacks are based on this~\\citep{chen2017zoo, ilyas2018black,tu2018autozoom,jun2018adversarial}.\n\nIn this paper, we consider the most challenging and practical attack setting -- hard-label black-box setting -- where the model is hidden to the attacker and the attacker can only make queries and get the corresponding hard-label decisions (e.g., predicted labels) of the model. A commonly used algorithm proposed in this setting, also called Boundary attack~\\citep{brendel2017decision}, is based on random walks on the decision surface, but it does not have any convergence guarantee. More recently, \\cite{cheng2018queryefficient} showed that finding the minimum adversarial perturbation in the hard-label setting can be reformulated as another optimization problem (we call this Cheng's formulation in this paper). This new formulation enjoys the benefit of having a smooth boundary in most tasks and the function value is computable using hard-label queries. Therefore, the authors of~\\citep{cheng2018queryefficient}\nare able to use standard zeroth order optimization to solve the new formulation. Although their algorithm converges quickly, it still requires large number of queries (e.g., 20,000) for attacking a single image since every function evaluation of Cheng's formulation has to be computed using binary search requiring tens of queries. \n\nIn this paper, we follow the same optimization formulation of~\\citep{cheng2018queryefficient} which has the advantage of smoothness, but instead of using finite differences to estimate the magnitude of directional derivative,\nwe propose to evaluate its sign using only {\\bf a single query}. With this single-query sign oracle, we design novel algorithms for solving the Cheng's formulation, and we theoretically prove and empirically demonstrate the significant reduction in the number of queries required for hard-label black box attack. \n\nOur contribution are summarized below: \n\\begin{compactitem}\n \\item {\\bf Novelty in terms of adversarial attack.} We elucidate an efficient approach to compute the sign of directional derivative of Cheng's formulation using a single query, and based on this technique we develop a novel optimization algorithm called Sign-OPT for hard-label black-box attack. 
\n \\item {\\bf Novelty in terms of optimization.}\n Our method \ncan be viewed as a new zeroth order optimization algorithm that features fast\nconvergence of signSGD. Instead of directly taking the sign of gradient estimation, our algorithm utilizes the scale of\nrandom direction. This make existing analysis inappropriate to our case, and we provide a new recipe to prove the convergence of this new optimizer. \n \n \n \\item We conduct comprehensive experiments on several datasets and models. We show that the proposed algorithm consistently reduces the query count by 5--10 times across different models and datasets, suggesting a practical and query-efficient robustness evaluation tool. Furthermore, on most datasets our algorithm can find an adversarial example with smaller distortion compared with previous approaches. \n\\end{compactitem}\n\n\\section{Related Work}\n\\label{sec:related}\n\\vspace{-5pt}\n\\paragraph{White-box attack}\nSince it was firstly found that neural networks are easy to be fooled by adversarial examples \\citep{goodfellow2015explaining}, a lot of work has been proposed in the white-box attack setting, where the classifier $f$ is completely exposed to the attacker. For neural networks, under this assumption, back-propagation can be conducted on the target model because both network structure and weights are known by the attacker. \nAlgorithms including \\citep{goodfellow2015explaining, kurakin2016adversarial, carlini2017towards, chen2018ead, madry2017towards} are then proposed based on gradient computation. \nRecently, the BPDA attack introduced by \\cite{athalye2018obfuscated} bypasses some models with obfuscated gradients and is shown to successfully circumvent many defenses. In addition to typical attacks based on small $\\ell_p$ norm perturbation, non-$\\ell_p$ norm perturbations such as scaling or shifting have also been considered~\\citep{zhang2019limitations}.\n\n\\paragraph{Black-box attack}\nRecently, black-box setting is drawing rapidly increasing attention. In black-box setting, the attacker can query the model but\nhas no (direct) access to any internal information inside the model. Although there are some works based on transfer attack \\citep{papernot2017practical}, we consider the query-based attack in the paper. Depending on the model's feedback for a given query, an attack can be classified as a soft-label or hard-label attack. In the soft-label setting, the model outputs a probability score for each decision.\n\\cite{chen2017zoo} uses a finite difference in a coordinate-wise manner to approximately estimate the output probability changes and does a coordinate descent to conduct the attack. \\cite{ilyas2018black} uses Neural evolution strategy (NES) to approximately estimate the gradient directly. Later, some variants \\citep{ilyas2018prior,tu2018autozoom} were proposed to utilize the side information to further speed up the attack procedure. \\cite{alzantot2019genattack} uses a evolutionary algorithm as a black-box optimizer for the soft-label setting. \nRecently, \\cite{al2019there} proposes SignHunter algorithm based on signSGD \\citep{pmlr-v80-bernstein18a} to achieve faster convergence in the soft-label setting.\nThe recent work \\citep{al2019there} proposes SignHunter algorithm to achieve a more query-efficent sign estimate when crafting black-box adversarial examples through soft-label information.\n\nIn the hard-label case, only the final decision, i.e. the top-1 predicted class, is observed. 
As a result, the attacker can only make queries to acquire the corresponding hard-label decision instead of the probability outputs. \\cite{brendel2017decision} first studied this problem and proposed an algorithm based on random walks near the decision boundary. By selecting a random direction and projecting it onto a boundary sphere in each iteration, it aims to generate a high-quality adversarial example. Query-Limited attack~\\citep{ilyas2018black} tries to estimate the output probability scores with model query and turn the hard-label into a soft-label problem. \\cite{cheng2018queryefficient} instead re-formalizes the hard-label attack into an optimization problem that finds a direction which could produce the shortest distance to decision boundary.\n\nThe recent arXiv paper \\citep{chen2019boundary} applied the zeroth-order sign oracle to improve Boundary attack, and also demonstrated significant improvement. The major differences to our algorithm are that we propose a new zeroth-order gradient descent algorithm, provide its algorithmic convergence guarantees, and aim to improve the query complexity of the attack formulation proposed in~\\citep{cheng2018queryefficient}. For completeness, we also compare with this method in Section \\ref{ssec:hsja-sign-comp}. \nMoreover, \\citep{chen2019boundary} \n uses one-point gradient estimate, which is unbiased but may encounter larger variance compared with the gradient estimate in our paper. Thus, we can observe in Section \\ref{ssec:hsja-sign-comp} that although they are slightly faster in the initial stage, Sign-OPT will catch up and eventually lead to a slightly better solution. \n\n\n\n\\section{Proposed Method}\n\\label{sec:proposed}\n\nWe follow the same formulation in\n\\citep{cheng2018queryefficient} and consider the hard-label attack as the problem of finding the direction with shortest distance to the decision boundary. \nSpecifically, for a given example $\\bx_0$, true label $y_0$ and the hard-label black-box function $f: \\RR^d \\rightarrow \\{1, \\dots, K\\}$, the objective function $g: \\RR^d \\rightarrow \\RR$ (for the untargeted attack) can be written as:\n\\begin{equation}\n \\min_{\\btheta} g(\\btheta) \\ \\text{ where } \\ g(\\btheta) = \\arg\\min_{\\lambda>0} \\bigg( f(x_0+\\lambda\\frac{\\btheta}{\\|\\btheta\\|})\\neq y_0 \\bigg).\n \\label{eq:g_theta}\n\\end{equation}\nIt has been shown that this objective function is usually smooth and the objective function $g$ can be evaluated by a binary search procedure locally. At each binary search step, we query the function $f(\\bx_0 + \\lambda \\frac{\\btheta}{\\|\\btheta\\|})$ and determine whether the distance to decision boundary in the direction $\\btheta$ is greater or smaller than $\\lambda$\nbased on the hard-label prediction\\footnote{Note that binary search only works in a small local region; in more general case $g(\\btheta)$ has to be computed by a fine-grained search plus binary search, as discussed in \\cite{cheng2018queryefficient}.}. \n\nAs the objective function is computable, the directional derivative of $g$ can be estimated by finite differences: \n\\begin{align}\n \\hat{\\nabla} g(\\btheta;\\ub):= \\frac{g(\\btheta+\\epsilon \\ub)-g(\\btheta)}{\\epsilon}\\ub \n \\label{eq:grad_test}\n\\end{align}\nwhere $\\bu$ is a random Gaussian vector and $\\epsilon > 0$ is a very small smoothing parameter. 
This is a standard zeroth order oracle for estimating directional derivative and based on this we can apply many different zeroth order optimization algorithms to minimize $g$. \n\\begin{wrapfigure}{r}{0.4\\textwidth}\n \\centering\n \\includegraphics[width=0.3\\textwidth]{figures\/illu.pdf}\n \\caption{Illustration}\n \\label{fig:illustration}\n\\end{wrapfigure}\nFor example, \\cite{cheng2018queryefficient} used the Random Derivative Free algorithm~\\cite{nesterov2017random} to solve problem~\\eqref{eq:g_theta}. \nHowever, each computation of \\eqref{eq:grad_test} \nrequires many hard-label queries due to binary search, so \\cite{cheng2018queryefficient} still requires a huge number of queries despite having fast convergence.\n\nIn this work, we introduce an algorithm that hugely improves the query complexity over~\\cite{cheng2018queryefficient}. Our algorithm is based on the following key ideas: (i) one does not need very accurate values of directional derivative in order to make the algorithm converge, and (ii) there exists an {\\bf imperfect but informative estimation} of directional derivative of $g$ that can be computed by a single query. \n\n\\subsection{A single query oracle}\n\n\n\n\nAs mentioned before, the previous approach requires computing $g(\\btheta+\\epsilon \\bu) - g(\\btheta)$ which consumes a lot of queries. \nHowever, based on the definition of $g(\\cdot)$, \nwe can compute the sign of this value $\\text{sign}(g(\\btheta + \\epsilon \\bu)-g(\\btheta))$ \nusing a single query. Considering the untargeted attack case, the sign can be computed by\n\\begin{equation}\n\\text{sign}(g(\\btheta+\\epsilon \\ub)-g(\\btheta))=\n\\begin{cases}\n+1, & f(x_0 + g(\\btheta)\\frac{(\\btheta+\\epsilon \\ub)}{\\|\\btheta+\\epsilon \\ub\\|})=y_0 ,\\\\\n-1, & \\text{Otherwise.}\n\\end{cases} \n\\label{eq:est_sign}\n \\end{equation}\nThis is illustrated in Figure \\ref{fig:illustration}. \nEssentially, for a new direction $\\btheta+\\epsilon\\ub$, we test whether a point at the original distance $g(\\btheta)$ from $x_0$ in this direction lies inside or outside the decision boundary,\ni.e. if the produced perturbation will result in a wrong prediction by classifier. If the produced perturbation is outside the boundary i.e. $f(x_0 + g(\\btheta)\\frac{(\\btheta+\\epsilon \\ub)}{\\|\\btheta+\\epsilon \\ub\\|})\\neq y_0$, the new direction has a smaller distance to decision boundary, and thus giving a smaller value of $g$. It indicates that $\\ub$ is a descent direction to minimize $g$. \n\\subsection{Sign-OPT attack}\nBy sampling random Gaussian vector $Q$ times, we can estimate the imperfect gradient by \n\\begin{equation}\n \\hat{\\nabla} g(\\btheta)\\approx \\hat{\\bg}:= \\sum\\nolimits_{q=1}^Q\\text{sign}(g(\\btheta+\\epsilon \\ub_q) - g(\\btheta))\\ub_q,\n \\label{eq:our_estimator}\n\\end{equation}\nwhich only requires $Q$ queries. \nWe then use this imperfect gradient estimate to update our search direction $\\btheta$ as $\\btheta \\leftarrow \\btheta-\\eta \\hat{\\gb}$ with a step size $\\eta$ and use the same search procedure to compute $g(\\btheta)$ up to a certain accuracy. 
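To make the estimator concrete, the following sketch (our illustration rather than the authors' released implementation; it assumes a hypothetical \texttt{model.predict} interface that returns only the top-1 label) implements the single-query sign oracle of Eq.~(\ref{eq:est_sign}) and the estimate of Eq.~(\ref{eq:our_estimator}), here averaged over the $Q$ sampled directions, for the untargeted case:
\begin{verbatim}
import numpy as np

def sign_oracle(model, x0, y0, theta, g_theta, u, eps=1e-3):
    """Sign of g(theta + eps*u) - g(theta) from a single hard-label query."""
    new_dir = theta + eps * u
    new_dir = new_dir / np.linalg.norm(new_dir)
    if model.predict(x0 + g_theta * new_dir) != y0:
        return -1.0   # still adversarial at the same radius: g decreased
    return +1.0       # prediction falls back to y0: g increased

def signopt_gradient(model, x0, y0, theta, g_theta, Q=200, eps=1e-3):
    """Imperfect gradient estimate: average of Q single-query signs times u_q."""
    ghat = np.zeros_like(theta)
    for _ in range(Q):
        u = np.random.randn(*theta.shape)
        ghat += sign_oracle(model, x0, y0, theta, g_theta, u, eps) * u
    return ghat / Q
\end{verbatim}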
The detailed procedure is shown in Algorithm \\ref{alg:signopt}.\n\n\\begin{algorithm}[t]\n\n\\SetAlgoLined\n\\textbf{Input}: Hard-label model $f$, original image $x_0$, initial $\\btheta_0$ \\;\n\\For{$t=1, 2, \\dots, T$}{\nRandomly sample $u_1, \\dots, u_Q$ from a Gaussian or Uniform distribution\\;\nCompute $\\hat{\\bg} \\leftarrow \\frac{1}{Q}\\sum_{q=1}^Q \\text{sign}(g(\\btheta_t+\\epsilon \\ub_q) - g(\\btheta_t))\\cdot \\ub_q$ \\;\nUpdate $\\btheta_{t+1} \\leftarrow \\btheta_t - \\eta \\hat{\\bg}$ \\;\nEvaluate $g(\\btheta_t)$ using the same search algorithm in \\cite{cheng2018queryefficient} \\;\n}\n\\caption{Sign-OPT attack}\n\\label{alg:signopt}\n\\end{algorithm}\nWe note that \\cite{liu2018signsgd}\ndesigned a Zeroth Order SignSGD \nalgorithm for soft-label black box attack (not hard-label setting).\nThey use $\\hat{\\nabla} g(\\btheta)\\approx \\hat{\\bg}:= \\sum_{q=1}^Q\\text{sign}(g(\\btheta+\\epsilon \\ub_q) - g(\\btheta)\\ub_q)$ and \nshows that it could achieve a comparable or even better convergence rate than zeroth order stochastic gradient descent by using only sign information of gradient estimation. \nAlthough it is possible to combine ZO-SignSGD with our proposed single query oracle for solving hard-label attack, \ntheir estimator will take sign of the whole vector and thus ignore the direction of $\\ub_q$, which leads to slower convergence in practice (please refer to Section 4.4 and Figure 5(b) for more details).\n\nTo the best of our knowledge, no previous analysis can be used to prove convergence of Algorithm~\\ref{alg:signopt}. \nIn the following, we show that Algorithm~\\ref{alg:signopt} can in fact converge and furthermore, with similar convergence rate compared with~\\citep{liu2018signsgd} despite using a different gradient estimator.\n\\begin{assumption}\nFunction $g(\\theta)$ is L-smooth with a finite value of L.\n\\end{assumption}\n\\begin{assumption}\nAt any iteration step t, the gradient of the function g is upper bounded by $\\|\\nabla g(\\btheta_t)\\|_2 \\leq \\sigma$.\n\\end{assumption}\n\\begin{theorem}\nSuppose that the conditions in the assumptions hold, and the distribution of gradient noise is\nunimodal and symmetric. Then, Sign-OPT attack with learning rate $\\eta_t = O(\\frac{1}{Q\\sqrt{dT}})$ and $\\epsilon = O(\\frac{1}{dT})$ will give following bound on $\\EE[\\|\\nabla g(\\btheta)\\|_2]$:\n\\begin{align*}\n \\EE[\\|\\nabla g(\\btheta)\\|_2] = O(\\frac{\\sqrt{d}}{\\sqrt{T}} + \\frac{d}{\\sqrt{Q}}). \n\\end{align*}\n\\end{theorem}\nThe proof can be found in \\autoref{ssec:proof}. The main difference with the original analysis provided by~\\cite{liu2018signsgd} is that they only only deal with sign of each element, while our analysis also takes the magnitudes of each element of $\\bu_q$ into account. \n\n\n\n\\subsection{Other gradient estimations}\n\nNote that the value $\\text{sign}(g(\\btheta+\\epsilon \\bu)-g(\\btheta))$ computed by our single query oracle is actually the sign of the directional derivative: \n\\begin{equation*}\n \\text{sign}(\\langle \\nabla g(\\btheta), \\bu\\rangle ) = \\text{sign}(\\lim_{\\epsilon\\rightarrow \\infty} \\frac{g(\\btheta+\\epsilon \\bu)-g(\\btheta)}{\\epsilon}) = \\text{sign}(g(\\btheta+\\epsilon \\bu) - g(\\btheta)) \\text{ for a small $\\epsilon$.}\n\\end{equation*}\nTherefore, we can use this information to estimate the original gradient. The Sign-OPT approach in the previous section uses $\\sum_{q} \\text{sign}(\\langle \\nabla g(\\btheta), \\bu_q \\rangle) \\bu_q$ as an estimation of gradient. 
Let $y_q := \\text{sign}(\\langle \\nabla g(\\btheta), \\bu_q \\rangle)$, \na more accurate gradient estimation can be cast as the following constraint optimization problem: \n\\begin{align*}\n\\text{Find a vector $\\bz$ such that } \\text{sign}(\\langle \\bz, \\bu_q\\rangle) = y_q \\ \\ \\forall q=1, \\dots, Q. \n\\end{align*}\nTherefore, this is equivalent to a hard constraint SVM problem where each $\\bu_q$ is a training sample and $y_q$ is the corresponding label. The gradient can then be recovered by solving\nthe following quadratic programming problem: \n\\begin{equation}\n \\min_{\\bz} \\ \\bz^T \\bz \\ \\ \\text{ s.t. } \\ \\ \\bz^T \\bu_q \\geq y_q,\\ \\ \\forall q=1,\\dots, Q.\n \\label{eq:svm}\n\\end{equation}\nBy solving this problem, we can get a good estimation of the gradient. As explained earlier, each $y_q$ can be determined with a single query. Therefore, we propose a variant of Sign-OPT, which is called SVM-OPT attack. The detailed procedure is shown in Algorithm \\ref{alg:svmopt}. We will present an empirical comparison of our two algorithms in \\autoref{ssec:svm-sign-comp}.\n\\begin{algorithm}[h]\n\\SetAlgoLined\n\\textbf{Input}: Hard-label model $f$, original image $\\bx_0$, initial $\\btheta_0$ \\; \n\\For{$t=1, 2, \\dots,T$}{\nSample $\\bu_1, \\dots, \\bu_Q$ from Gaussian or orthogonal basis \\;\nSolve $\\bz$ defined by \\eqref{eq:svm} \\;\nUpdate $\\btheta_{t+1} \\leftarrow \\btheta_t - \\eta \\bz$ \\;\nEvaluate $g(\\btheta_t)$ using search algorithm in \\citep{cheng2018queryefficient} \\;\n}\n\\caption{SVM-OPT attack}\n\\label{alg:svmopt}\n\\end{algorithm}\n\n\n\\vspace{-8pt}\n\n\\section{Experimental Results}\n\\label{sec:exp}\n\\vspace{-6pt}\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/example_mnist.pdf}\n \n \\end{subfigure}\n \n \n \n \n \n \\begin{subfigure}[b]{0.9\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/example_cifar_sign.pdf}\n \n \\end{subfigure}\n \n \n \n \n \n \\begin{subfigure}[b]{0.9\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/example_imgnt_car.pdf}\n \n \\end{subfigure}\n \\captionsetup{justification=centering}\n \\caption{Example of Sign-OPT targeted attack. $L_2$ distortions and queries used are shown above and below the images. First two rows: Example comparison of Sign-OPT attack and OPT attack. Third and fourth rows: Examples of Sign-OPT attack on CIFAR-10 and ImageNet}\n \\label{fig:untarget-example}\n \n\\end{figure}\n\nWe evaluate the SIGN-OPT algorithm for attacking black-box models in a hard-label setting on three different standard datasets - MNIST~\\citep{lecun1998gradient}, CIFAR-10~\\citep{cifar10} and ImageNet-1000~\\citep{deng2009imagenet} and compare it with existing methods. For fair and easy comparison, we use the CNN networks provided by \\citep{carlini2017towards}, which have also been used by other previous hard-label attacks as well. Specifically, for both MNIST and CIFAR-10, the model consists of nine layers in total - four convolutional layers, two max-pooling layers and two fully-connected layers. Further details about implementation, training and parameters are available on \\citep{carlini2017towards}. As reported in \\citep{carlini2017towards} and \\citep{cheng2018queryefficient}, we were able to achieve an accuracy of 99.5\\% on MNIST and 82.5\\% on CIFAR-10. 
We use the pretrained Resnet-50 \\citep{he2016deep} network provided by torchvision \\citep{torchvision} for ImageNet-1000, which achieves a Top-1 accuracy of 76.15\\%.\n\nIn our experiments, we found that Sign-OPT and SVM-OPT perform quite similarly in terms of query efficiency. Hence we compare only Sign-OPT attack with previous approaches and provide a comparison between Sign-OPT and SVM-OPT in \\autoref{ssec:svm-sign-comp}. We compare the following attacks:\n\\begin{itemize}\n \\item \\textbf{Sign-OPT attack} (black box): The approach presented in this paper.\n \\item \\textbf{Opt-based attack} (black box): The method proposed in \\cite{cheng2018queryefficient} where they use Randomized Gradient-Free method to optimize the same objective function. \n We use the implementation provided at {\\small \\url{https:\/\/github.com\/LeMinhThong\/blackbox-attack}}.\n \\item \\textbf{Boundary attack} (black box):\n The method proposed in \\cite{brendel2017decision}. This is compared only in $L_2$ setting as it is designed for the same. We use the implementation provided in Foolbox ({\\small\\url{https:\/\/github.com\/bethgelab\/foolbox}}).\n \n \\item \\textbf{Guessing Smart Attack} (black box): The method proposed in \\citep{brunner2018guessing}. This attack enhances boundary attack by biasing sampling towards three priors. Note that one of the priors assumes access to a similar model as the target model and for a fair comparison we do not incorporate this bias in our experiments. We use the implementation provided at {\\small \\url{https:\/\/github.com\/ttbrunner\/biased_boundary_attack}}. \n \\item \\textbf {C\\&W attack} (white box):\n One of the most popular methods in the white-box setting proposed in\n \\cite{carlini2017towards}. We use C\\&W $L_2$ norm attack as a baseline for the white-box attack performance. \n \n \n\\end{itemize}\n\nFor each attack, we randomly sample 100 examples from validation set and generate adversarial perturbations for them. For untargeted attack, we only consider examples that are correctly predicted by model and for targeted attack, we consider examples that are already not predicted as target label by the model. \nTo compare different methods, \nwe mainly use \\textit{median distortion}\nas the metric. \nMedian distortion for $x$ queries is the median adversarial perturbation of all examples achieved by a method using less than $x$ queries. \nSince all the hard-label attack algorithms will start from an adversarial exmample and keep reduce the distortion, if we stop at any time they will always give an adversarial example and medium distortion will be the most suitable metric to compare their performance. \nBesides, we also show \n\\textit{success rate (SR)} for $x$ queries for a given threshold ($\\epsilon$), which is the percentage of number of examples that have achieved an adversarial perturbation below $\\epsilon$ with less than $x$ queries. \nWe evaluate success rate on different thresholds which depend on the dataset being used. \nFor comparison of different algorithms in each setting, we chose the same set of examples across all attacks.\n\n\n\\textbf{Implementation details}: To optimize \\autoref{alg:signopt}, we estimate the step size $\\eta$ using the same line search procedure implemented in~\\cite{cheng2018queryefficient}. At the cost of a relatively small number of queries, this provides significant speedup in the optimization. 
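Both of these steps rely only on hard-label queries; in particular, the distance $g(\theta)$ along a fixed direction can be located by bisection. A minimal sketch (ours, reusing the assumed \texttt{model.predict} interface from above; it presupposes that the upper end of the bracket is already adversarial) is:
\begin{verbatim}
import numpy as np

def eval_g(model, x0, y0, theta, hi, tol=1e-3):
    """Distance to the decision boundary along theta/||theta|| (untargeted).
    Assumes x0 + hi * theta/||theta|| is already misclassified."""
    d, lo = theta / np.linalg.norm(theta), 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if model.predict(x0 + mid * d) != y0:
            hi = mid      # still adversarial: the boundary is closer
        else:
            lo = mid
    return hi
\end{verbatim}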
Similar to \\cite{cheng2018queryefficient}, $g(\\theta)$ in last step of \\autoref{alg:signopt} is approximated via binary search. The initial $\\theta_0$ in \\autoref{alg:signopt} is calculated by evaluating $g(\\theta)$ on 100 random directions and taking the best one.\nWe provide our implementation publicly\\footnote{https:\/\/github.com\/cmhcbb\/attackbox}.\n\\vspace{-5pt}\n\\subsection{Comparison between Sign-OPT and SVM-OPT}\n\\label{ssec:svm-sign-comp}\n\\vspace{-5pt}\nIn our experiments, we found that the performance in terms of queries of both these attacks is remarkably similar in all settings (both $L_2$\/$L_\\infty$ \\& Targeted\/Untargeted) and datasets. We present a comparison for MNIST and CIFAR-10 ($L_2$ norm-based) for both targeted and untargeted attacks in\n\\autoref{fig:svm_sign_comparison}. We see that the median distortion achieved for a given number of queries is quite on part for both Sign-OPT and SVM-OPT.\n\n\\textbf{Number of queries per gradient estimate}: In \\autoref{fig:svm_sign_comparison}, we show the comparison of Sign-OPT attack with different values of $Q$. Our experiments suggest that $Q$ does not have an impact on the convergence point reached by the algorithm. Although, small values of $Q$ provide a noisy gradient estimate and hence delayed convergence to an adversarial perturbation. Large values of $Q$, on the other hand, require large amount of time per gradient estimate. After fine tuning on a small set of examples, we found that $Q=200$ provides a good balance between the two. Hence, we set the value of $Q=200$ for all our experiments in this section. \n\n\\subsection{Untargeted attack}\n\\vspace{-5pt}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/svm-param.pdf}\n \\captionsetup{justification=centering}\n \\caption{Median $L_2$ distortion vs Queries. First two: Comparison of Sign-OPT and SVM-OPT attack for MNIST and CIFAR-10. Third: Performance of Sign-OPT for different values of $Q$.}\n \\label{fig:svm_sign_comparison}\n\\end{figure}\n\nIn this attack, the objective is to generate an adversary from an original image for which the prediction by model is different from that of original image. \\autoref{fig:untargeted_distortion} provides an elaborate comparison of different attacks for $L_2$ case\nfor the three datasets. Sign-OPT attack consistently outperforms the current approaches in terms of queries. Not only is Sign-OPT more efficient in terms of queries, in most cases\nit converges to a lower distortion than what is possible by other hard-label attacks. Furthermore, we observe Sign-OPT converges to a solution comparable with C\\&W white-box attack (better on CIFAR-10, worse on MNIST, comparable on ImageNet). This is significant for a hard-label attack algorithm since we are given very limited information. \n\nWe highlight some of the comparisons of Boundary attack, OPT-based attack and Sign-OPT attack ($L_2$ norm-based) in \\autoref{u-l2-table}. Particularly for ImageNet dataset on ResNet-50 model, Sign-OPT attack reaches a median distortion below 3.0 in less than $30k$ queries while other attacks need more than $200k$ queries for the same. \n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{figures_iclr\/u_l2.pdf}\n \n \n \n \n \n \n \n \\caption{Untargeted attack: Median distortion vs Queries for different datasets. 
}\n \\label{fig:untargeted_distortion}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{figures_iclr\/fig5.pdf}\n \n \n \n \n \n \n \n \\caption{(a) Targeted Attack: Median distortion vs Queries of different attacks on MNIST and CIFAR-10. (b) Comparing Sign-OPT and ZO-SignSGD with and without single query oracle (SQO).}\n \\label{fig:targeted_distortion}\n\\end{figure}\n\n \n\n \n\n\n\\subsection{Targeted attack}\n\\vspace{-5pt}\nIn targeted attack, the goal is to generate an adversarial perturbation for an image so that the prediction of resulting image is the same as a specified target. For each example, we randomly specify the target label, keeping it consistent across different attacks. We calculate the initial $\\theta_0$ in \\autoref{alg:signopt} using 100 samples in target label class from training dataset and this $\\theta_0$ is the same across different attacks. \\autoref{fig:untarget-example} shows some examples of adversarial examples generated by Sign-OPT attack and the Opt-based attack. The first two rows show comparison of Sign-OPT and Opt attack respectively on an example from MNIST dataset. The figures show adversarial examples generated at almost same number of queries for both attacks. Sign-OPT method generates an $L_2$ adversarial perturbation of 0.94 in $\\sim6k$ queries for this particular example while Opt-based attack requires $\\sim35k$ for the same. \\autoref{fig:targeted_distortion} displays a comparison among different attacks in targeted setting. In our experiments, average distortion achieved by white box attack C\\&W for MNIST dataset is 1.51, for which Sign-OPT requires $\\sim 12k$ queries while others need $> 120k$ queries. We present a comparison of success rate of different attacks for CIFAR-10 dataset in \\autoref{fig:success_rate} for both targeted and untargeted cases.\n\n\n\n\n\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.24\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures_iclr\/success_u_l2_e1.pdf}\n \n \\end{subfigure}\n \n \\begin{subfigure}[b]{0.24\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures_iclr\/success_u_l2_e2.pdf}\n \n \\end{subfigure}\n \\begin{subfigure}[b]{0.24\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures_iclr\/success_t_l2_e1.pdf}\n \n \\end{subfigure}\n \\begin{subfigure}[b]{0.24\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures_iclr\/success_t_l2_e2.pdf}\n \n \\end{subfigure}\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \\captionsetup{justification=centering}\n \\caption{Success Rate vs Queries for CIFAR-10 ($L_2$ norm-based attack). First two and last two depict untargeted and targeted attacks respectively. Success rate threshold is at the top of each plot.}\n \\label{fig:success_rate}\n \n\\end{figure}\n\n\\subsection{The power of single query oracle}\n\\vspace{-5pt}\nIn this subsection, we conduct several experiments to prove the effectiveness of our proposed single query oracle in hard-label adversarial attack setting. ZO-SignSGD algorithm \\citep{liu2018signsgd} is proposed for soft-label black box attack and we extend it into hard-label setting. \nA straightforward way is simply applying ZO-SignSGD to solve the hard-label objective proposed in~\\cite{cheng2018queryefficient}, estimate the gradient using binary search as \\citep{cheng2018queryefficient} and take its sign. 
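In code, the difference between the two update directions amounts to a single extra line (shown here with the hypothetical \texttt{signopt\_gradient} sketch from Sec.~\ref{sec:proposed}): the element-wise sign discards the magnitude information carried by the random directions $u_q$, whereas Sign-OPT keeps it.
\begin{verbatim}
ghat         = signopt_gradient(model, x0, y0, theta, g_theta)  # Sign-OPT direction
ghat_signsgd = np.sign(ghat)   # ZO-SignSGD-style direction: element-wise sign only
\end{verbatim}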
In Figure 5(b), we clearly observe that simply combining ZO-SignSGD and \\cite{cheng2018queryefficient} is not efficient. \nWith the proposed single query sign oracle, we can also reduce the query count of this method, as demonstrated in Figure 5(b). This \nverifies the effectiveness of single query oracle, which can universally improve many different optimization methods in the hard-label attack setting. To be noted, there is still improvement on Sign-OPT over ZO-SignSGD with single query oracle because instead of directly taking the sign of gradient estimation, our algorithm utilizes the scale of random direction $u$ as well. In other words, signSGD's gradient norm is always 1 while our gradient norm takes into account the magnitude of $u$. Therefore, our signOPT optimization algorithm is fundamentally different \\citep{liu2018signsgd} or any other proposed signSGD varieties. Our method can be viewed as a new zeroth order optimization algorithm that features fast convergence in signSGD.\n\n\n\n\n\n\n\n\\section{Conclusion}\n\\vspace{-6pt}\nWe developed a new and ultra query-efficient algorithm for adversarial attack in the hard-label black-box setting. Using the same smooth reformulation in \\cite{cheng2018queryefficient}, we design a novel zeroth order oracle that can compute the sign of directional derivative of the attack objective using single query. Equipped with this single-query oracle, we design a new optimization algorithm that can dramatically reduce number of queries compared with \\cite{cheng2018queryefficient}. We prove the convergence of the proposed algorithm and show our new algorithm is overwhelmingly better than current hard-label black-box attacks. \n\n\\begin{table}[tbp]\n \n \\centering\n \\captionsetup{justification=centering}\n \\caption{$L_2$ Untargeted attack - Comparison of average $L_2$ distortion achieved using a given number of queries for different attacks. 
SR stands for success rate.}\n \\centering\n \n \\resizebox{\\textwidth}{!}{\n \\begin{tabular}{lccc|ccc|ccc}\n \\toprule\n \\cmidrule(r){1-2}\n & \\multicolumn{3}{c}{MNIST} &\\multicolumn{3}{c}{CIFAR10} & \\multicolumn{3}{c}{ImageNet (ResNet-50)}\\\\\n &\\#Queries &Avg $L_2$& SR($\\epsilon=1.5$)& \\#Queries & Avg $L_2$ & SR($\\epsilon=0.5$)&\\#Queries & Avg $L_2$ & SR($\\epsilon=3.0$)\\\\\n \\midrule \n \\multirow{3}{*}{Boundary attack} \n &4,000 &4.24 &1.0\\% &4,000 &3.12 &2.3\\% &4,000 &209.63 &0\\% \\\\\n &8,000 &4.24 &1.0\\% &8,000 &2.84 &7.6\\% &30,000 &17.40 &16.6\\%\\\\\n &14,000 &2.13 &16.3\\% &12,000 &0.78 &29.2\\% &160,000 &4.62 &41.6\\%\\\\\n \\hline\n \\multirow{2}{*}{OPT attack} \n &4,000 &3.65 &3.0\\% &4,000 &0.77 &37.0\\% &4,000 &83.85 &2.0\\%\\\\\n &8,000 &2.41 &18.0\\% &8,000 &0.43 &53.0\\% &30,000 &16.77 &14.0\\%\\\\\n &14,000 &1.76 &36.0\\% &12,000 &0.33 &61.0\\% &160,000 &4.27 &34.0\\%\\\\\n \\hline\n \\multirow{2}{*}{Guessing Smart} \n &4,000 &1.74 &41.0\\% &4,000 &0.29 &75.0\\% &4,000 &16.69 &12.0\\%\\\\\n &8,000 &1.69 &42.0\\% &8,000 &0.25 &80.0\\% &30,000 &13.27 &12.0\\%\\\\\n &14,000 &1.68 &43.0\\% &12,000 &0.24 &80.0\\% &160,000 &12.88 &12.0\\%\\\\\n \\hline\n \\multirow{2}{*}{\\textbf{Sign-OPT attack}}\n &4,000 &1.54 &46.0\\% &4,000 &0.26 &73.0\\% &4,000 &23.19 &8.0\\%\\\\\n &8,000 &1.18 &84.0\\% &8,000 &0.16 &90.0\\% &30,000 &2.99 &50.0\\%\\\\\n &14,000 &1.09 &94.0\\% &12,000 &0.13 &95.0\\% &160,000 &1.21 &90.0\\%\\\\\n \\hline\n C\\&W (white-box)\n &- &0.88 &99.0\\% &- &0.25 &85.0\\% &- &1.51& 80.0\\% \\\\\n \\bottomrule\n \\end{tabular}\n}\n\n \\label{u-l2-table}\n \n\\end{table}\n\n\n\n\n\n\\section*{Acknowledgement}\nThis work is based upon work supported by the Department of Energy National Energy Technology Laboratory under Award Number DE-OE0000911 and by NSF under IIS1719097.\n\n\\newpage\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzezzu b/data_all_eng_slimpj/shuffled/split2/finalzzezzu new file mode 100644 index 0000000000000000000000000000000000000000..68ac9de7194d9dbd9327d8d0c6035c1f5dddb1d8 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzezzu @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThe abundance of Li has attracted much attention, especially since the Li gap\nhas been discovered in the Hyades for stars with \\teff $\\sim 6600$~K. In the\ncontext of radiative diffusion, it is interesting to examine the atmospheric\nabundance of lithium in stars where such a mechanism is known to be at work from\nthe abundances of other elements, such as calcium, i.e. in the Am stars.\nSuch studies have been carried out especially by Burkhart \\& Coupry \\citep{bc91}\nand Burkhart et al. \\citep{bcfg05}.\nTheir conclusion was that in general, the Li abundance\nof Am stars is close to the cosmic value ($\\log N(Li)\\sim 3.0$ in the scale\nwhere $\\log N(H)= 12.0$), although a small proportion of them are deficient.\nThe latter seem in general to be either evolved stars or, as recently suggested\nby Burkhart et al. \\citep{bcfg05}, to lie on the red side of the Am domain,\namong the $\\rho$ Puppis--like stars.\n\nIn this poster, we present Li abundances obtained for 31 Am stars and 36 normal\nA and F stars in the field, all having Hipparcos parallaxes. This sample had\nbeen defined before the Hipparcos era, on purely photometric criteria, but with\nthe purpose of testing how far the Li abundance depends on the evolutionary\nstate, i.e. on the surface gravity $\\log g$. 
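For orientation, the route from a measured parallax to a fundamental $\\log g$, which is used below, can be sketched as follows. This is our own illustration with adopted solar values; in the actual analysis the stellar mass comes from evolutionary tracks, as described in the next section, and the example numbers do not refer to any star of the sample.
\\begin{verbatim}
import numpy as np

# Adopted solar values (assumptions of this sketch).
LOG_G_SUN, TEFF_SUN, MBOL_SUN = 4.44, 5777.0, 4.74

def log_g_from_parallax(parallax_mas, v_mag, bc, mass_msun, teff):
    d_pc = 1000.0 / parallax_mas                 # distance from the parallax
    M_V = v_mag - 5.0 * np.log10(d_pc) + 5.0     # absolute visual magnitude
    M_bol = M_V + bc                             # apply a bolometric correction
    log_L = -0.4 * (M_bol - MBOL_SUN)            # log L/L_sun
    # g ~ M / R^2 and L ~ R^2 Teff^4, hence:
    return (LOG_G_SUN + np.log10(mass_msun)
            + 4.0 * np.log10(teff / TEFF_SUN) - log_L)

# Purely illustrative input values:
print(log_g_from_parallax(8.0, 7.0, 0.0, 1.8, 7500.0))
\\end{verbatim}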
The Hipparcos data which became\navailable later showed that the photometric luminosity calibrations of Am stars\nwere not very satisfactory (North et al. 1997), but allowed to determine\n$\\log g$ in a more fundamental way. Furthermore, the sample has the advantage\nof presenting no bias against large rotational velocities.\n\n\n\\section{Observations and analysis}\nAll stars were observed at OHP with the Aur\\'elie spectrograph attached to the\n1.5m telescope, in April 1993 and in October 1993 and 1994. The grating No 7 was\nused, giving a resolving power $R=40000$ in the spectral range $6640-6760$~\\AA .\nThe typical exposure times were between 40 and 60 minutes, the resulting\nsignal-to-noise ratio being between 250 and 400. The spectra were\nreduced during the observing runs with the IHAP package, and were later\nnormalized to the continuum in an interactive way.\n\nThe analysis was made by comparison of the observed spectra with synthetic ones\nconvoluted with an assumed gaussian instrumental profile and with an appropriate\nrotational profile. The Synspec code (Hubeny et al. 1995) and Kurucz model\natmospheres were used to produce the synthetic spectra. The line parameters were\ntaken from Kurucz's $gfiron$ list, except of course the parameters for the Li\ndoublet. The effective temperatures were computed from Geneva photometry, while\nthe surface gravities were computed from the Hipparcos parallaxes, by combining\nthem with theoretical evolutionary tracks from Schaller et al. \\citep{ssmm92},\nas explained by North \\citep{n98}, assuming standard evolution.\nThe microturbulent velocity was either computed from the formula proposed by\nEdvardsson et al. \\citep[eqn 9]{e93}, for \\teff $< 7000$~K, or estimated from the\nFig.~1 of Coupry \\& Burkhart \\citep{cb92}, for \\teff $\\geq 7000$~K.\nThe abundance of\nFe, Ca and a few other elements (in cases of sharp lined stars) were first\nestimated by visual fits. Then, the Li abundance and the projected rotational\nvelocities were obtained by minimizing the $\\chi^2$ between observed and\nsynthetic spectra having various values of these parameters.\n\\begin{figure}\n \\includegraphics[width=10cm]{north_hr.eps}\n \\caption{HR diagram of the Am (black dots) and normal (white dots) stars of\nour sample. Black triangles (with error bars typical of the whole\nsample) are stars from Burkhart et al. (2005, Table~3) not in\ncommon with our sample. The error bars were drawn assuming a $\\pm 200$~K error\non \\teff and include, on the vertical axis, the parallax error of Hipparcos.\n}\n\\label{hr}\n\\end{figure}\n\\section{Results}\nFig.~\\ref{hr} shows the distribution of Am stars (full dots) and of normal A-F stars\n(open dots) in the HR diagram. Evolutionary tracks and isochrones from\nSchaller et al. \\citep{ssmm92} are shown for 4 masses ($1.5$ to $2.5~M_\\odot$)\nand for 3\nages ($\\log t = 8.7$ to $9.3$) respectively. The stars are well distributed\non the whole main sequence band. The lack of Am stars below \\teff $\\sim 7000$~K\nis the well-known limit due to the onset of convection.\n\nFig.~\\ref{LiTelg} (left) shows the lithium abundance as a function of \\teff\nfor Am stars (full dots) and for normal A--F stars (open dots).\nThe most striking feature of this diagram is\nthe bimodal distribution of the Li abundance for \\teff $\\lesssim 7500$~K, which\nis reminiscent of a similar distribution of F-type dwarfs in the range $5900\n<$ \\teff $< 6600$~K reported by Lambert et al. 
\\citep[Fig.~4]{lhe91}.\nThus, our data\ncomplement that of Lambert et al. as well as the larger sample of Chen et al.\n\\citep{cnbz01} by extending the results to higher \\teff. We have\nverified that duplicity cannot account for the low apparent Li abundances (even\nthough this might hold for some isolated cases). Restricting the diagram to\nthose stars with\n\\vsini $< 80$~\\kms, the upper branch almost disappears (there are only two\nnormal stars left around \\teff $\\sim 6500$~K), while the lower one remains\nintact. This is related to the fact that the upper branch is populated only with\nnormal stars, which rotate more rapidly than the Am stars, while the lower\nbranch is a mix of normal and Am stars. Thus, below $7500$~K, all Am stars of\nour sample are Li deficient. The black triangles refer to the 4 stars of\nBurkhart et al. \\citep[their Table~3]{bcfg05} which are not common to our\nsample. Their positions are in perfect agreement with the general picture.\n\nFig.~\\ref{LiTelg} (right) displays the Li abundance as a function of surface gravity. There is\nno strong trend, but one can notice that those stars (either Am or normal) which\nare strongly deficient in Li are {\\bf all} at least slightly evolved. There is one\nunevolved star (HD 18769) for which only an upper limit to its Li abundance\ncould be obtained, but this is due to its high \\teff ($8420$~K) and moderately\nbroad lines, and the upper limit is close to the ``cosmic'' Li abundance, so\nthis is not a significant exception. Thus, we confirm the suggestion made by\nBurkhart \\& Coupry that Li-deficient Am stars are evolved objects, although it\nseems that all evolved Am stars are not necessarily deficient.\n\\begin{figure}\n \\includegraphics[width=10.3cm]{north_LiTelg.eps}\n \\caption{{\\bf Left:} Li abundance (on the scale $\\log N(H)=12$) of Am\nstars (black\ndots) and normal A--F-type stars (white dots) versus effective temperature.\nUpper limits to the Li abundance are indicated by vertical arrows.\nBlack triangles are from Burkhart et al. \\citep{bcfg05}.\n{\\bf Right:} Li abundance versus surface gravity derived from Hipparcos\nparallaxes. The leftmost arrow refers to the Am star HD 18769, which has\n\\teff $=8420$~K and \\vsini $= 46$~\\kms, so that only an upper limit\nto its Li abundance can be obtained. The typical error on $\\log g$ is $0.1$~dex,\nwhile that on $\\log N(Li)$ vary from better than $0.1$~dex to more than\n$0.3$~dex, depending on \\teff and \\vsini.}\n\\label{LiTelg}\n\\end{figure}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \n\n\n\nThere is a voluminous literature on second order analysis of distribution functions $F_N(z) = P(Z_N\\leq z)$ of statistics $Z_N = \\zeta_N(X_1,X_2,\\dots, X_N)$ that are functions of i.i.d. random variables $X_1,X_2,...$. The results obtained are generally refinements of the central limit theorem. Suppose that $Z_N$ is asymptotically standard normal, that is $\\sup_z |F_N(z)-\\Phi (z)|\\rightarrow0 $ as $N\\rightarrow \\infty$, where $\\Phi $ denotes the standard normal distribution function. Then second order results are concerned with the speed of this convergence, or with attempts to increase this speed by replacing the limit $\\Phi $ by a series expansion $\\Psi_N$ that provides a better approximation. 
Results of the first kind are called theorems of Berry-Esseen type and assert that for all $N$,\n$$\\sup_z|F_N(z)-\\Phi(z)|\\leq CN^{-\\frac12},$$\nwhere $C$ is a constant that depends on the particular statistic $Z_N$ and the\ndistribution of the variables $X_i$, but not on $N$. Such results are often\nvalid under mild restrictions such as the existence of a third moment of\n$X_i$. The original Berry-Esseen theorem dealt with the case where $Z_N$ is a\nsum of i.i.d. random variables, \n\\citet*{Esseen:1942},\n\\citet*{Berry:1941}. \nFor a more general version see \n\\citet*{vanZwet:1984}. \n\n\nResults of the second kind concern so-called Edgeworth expansions. These are series expansions such as \n\\begin{equation} \n \\begin{split}\n &\\Psi_{ N,1}(z) = \\Phi(z)+\\varphi(z)N^{-\\frac12} Q_1(z), \\quad \\text{or} \\\\ \n &\\Psi_{N,2}(z) =\\Phi(z)+\\varphi(z)\\left[ N^{-\\frac12} Q_1(z)+ N^{-1}Q_2(z)\\right],\n \\end{split}\n\\end{equation} \n\n\\noindent\nwhere $ \\varphi $ is the standard normal density and $Q_1$ and $Q_2$ are polynomials depending on low moments of $X_i$ . One then shows that \n\\begin{equation} \n \\begin{split}\n &\\sup_z|F_N(z)-\\Psi_{N,1}(z)|\\leq CN^{-1}, \\quad \\text{or} \\\\ \n &\\sup_z|F_N(z)-\\Psi_{N,2}(z)|\\leq CN^{-\\frac32} .\n \\end{split}\n\\end{equation} \n\n\\noindent\nFor this type of result the restrictions are more severe. Apart from moment\nassumptions one typically assumes that $Z_N$ is not a lattice random\nvariable. For the case where $Z_N$ is a sum of i.i.d. random variables a good\nreference is \n\\citep[chap. XVI]{Feller:1965\/2}. \nThere are numerous papers devoted to special types of statistics. For a\nsomewhat more general result we refer to \n\\citet*{Bickel-Goetze-vanZwet:1986}\n and \n\\citet*{bentkus-goetze-vanzwet:1997} .\n\nFor the case where $Z_N$ assumes its values on a lattice, say the integers, an alternative approach it to generalize the local central limit theorem and provide an expansion for the point probabilities $P(Z_N=z)$ for values of $z$ belonging to the lattice. A typical case is the binomial distribution for which local expansions are well known. It is obvious that for the binomial distribution one can not obtain Edgeworth expansions as given in $(1.1)$ for which $(1.2)$ holds. The reason is that out of the $N$ possible values for a binomial $(N,p)$ random variable, only $cN^{\\frac12}$ values around the mean $Np$ really count and each of these has probability of order $N^{-\\frac12}$. Hence the distribution function has jumps of order $N^{-\\frac12}$ and can therefore not be approximated by a continuous function such as given in $(1.1)$ with an error of smaller order than $N^{-\\frac12}$. \n\nIn a sense the binomial example is an extreme case where the ease of\nthe approach through local expansions for $P(Z_N=z)$ precludes the one through\nexpansions of Edgeworth type for $P(Z_N\\leq z)$. In \n\\citet*{Albers-Bickel-vanZwet:1976}\n the authors found somewhat to their surprise that for the Wilcoxon statistic which is a pure lattice statistic, an Edgeworth expansion with remainder $O(N^{-\\frac32})$ for the distribution function is perfectly possible. 
In this case the statistic ranges over $N^2$ possible integer values, of which the central $N^{\\frac32}$ values have probabilities of order $N^{-\\frac32}$ so that one can approximate a distribution function with such jumps by a continuous one with error $O(N^{-\\frac32})$.\n \nOn the basis of these examples one might guess that the existence of an Edgeworth expansion with error $O(N^{-p})$ for the distribution function $F_N(z) = P(Z_N\\leq z)$ would merely depend on the existence of some moments of $Z_N$ combined with the requirement that $F_N$ does not exhibit jumps of large order than $N^{-p}$. But one can envisage a more subtle problem if $F_N$ would assign a probability of larger order than $N^{-p}$ to an interval of length $N^{-p}$. Since Edgeworth expansions have bounded derivative, this would also preclude the existence of such an expansion with error $O(N^{-p})$.\n \nLittle seems to be known about the case where $Z_N$ has a discrete but non-lattice distribution. Examples abound if one considers a lattice random variable with expectation $0$ and standardized by dividing by its sample standard deviation. As a simple example, one could for instance consider Student's $t$-statistic $\\tau_N = N^{-1\/2}\\sum_i X_i \/ \\sqrt{\\sum_i\\left( X_i -m \\right)^2\/(N-1) }$ with $m= \\sum_i X_i\/N $ and $X_1,X_2,\\dots$ i..i.d. random variables with a lattice distribution. Since we are not interested in any particular statistic, but merely in exploring what goes on in a case like this, we shall simplify even further by deleting the sample mean m and considering the statistic\\smallskip\n\n\\noindent\n\\begin{equation} \nW_N = \\sum_{i=1}^N \\frac{X_i}{\\sqrt{\\sum_{i=1}^N X_i^2}} ,\n\\end{equation}\n\n\\noindent\nwith \\smallskip\n\n\\noindent\n\\begin{equation}\n X_1,X_2,\\dots i.i.d.\\mbox{ with } P(X_i=-1)= P(X_i=0)= P(X_i=1)= \\frac13 .\n\\end{equation}\n\n\nWe should perhaps point out that for $w>0$\n\\begin{align*}\nP(0<\\tau_N\\leq w)&=P\\left(00$, $\\Lambda_N(w)$ is bounded and $N^{-1\/2}\\Lambda_N(w) = O(N^{-1\/2})$ uniformly in $w$. At first sight there is a striking similarity between the expansion $\\Psi_N$ in Theorem 1.1 and the two term Edgeworth expansion $\\Psi_{N,1}$ in (1.1). However, the term $\\phi(z)N^{-1\/2}Q_1(z)$ of order $O(N^{-1\/2})$ in the Edgeworth expansion is a skewness correction that vanishes for a symmetric distribution $F_N$. As we are dealing with a symmetric case, such a term is not present and for the continuous case the Edgeworth expansion with remainder $O(N^{-1})$ is simply $\\Phi(z)$. The origin of the term $N^{-1\/2}\\Lambda_N(w)$ is quite different. It arises from the fact that we are approximating a discrete distribution function by a continuous one, and as such it is akin to the classical continuity correction.\n\nTo make sure that the term $N^{-1\/2}\\Lambda_N(w)$ is not of smaller order than $N^{-1\/2}$, we shall bound $|\\Lambda_N(w)|$ from below by the absolute value \nof the following series. Assume that $N$ is divisible by $3$ and let\n \\begin{equation}\n \\begin{split}\n\\lambda_N(w):=& \\sqrt{\\frac 3 2}\\4 \\varphi(w)\\sum_{k=1}^{M} \\frac 1 {\\pi \\4 k} \nf_{N,k}\\exp\\bigl(- \\frac{\\pi^2}6\\4 k^2\\4 w^2\\4)\\4 \n\\sin\\bigl(2 \\4 \\pi\\4 \\4 k \\4 w\\4 \\sqrt{\\frac {2 N} 3}\\bigr)\\\\\n& \\, \\quad + O\\bigl(N^{-1\/2}(\\log N)^5\\bigr), \\quad M := \\lfloor \\log N \\rfloor ,\n \\end{split}\n \\end{equation}\nwhere $f_{N,k}=1 + O((k\/M)^2)$ is defined in $(3.9)$. 
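As a quick numerical illustration (our own sketch, not part of the proof), the finite Fourier sum in the display above is easy to evaluate directly. Here we use the choices quoted in the caption of Figure 1 below ($N=100$, $M=10$, $f_{N,k}=\\exp[-(k/M)^{2/3}]$) and simply drop the error term.
\\begin{verbatim}
import numpy as np

def lambda_N(w, N=100, M=10):
    # Finite Fourier sum of the display above, error term dropped.
    w = np.atleast_1d(np.asarray(w, dtype=float))
    phi = np.exp(-w**2 / 2) / np.sqrt(2 * np.pi)        # standard normal density
    k = np.arange(1, M + 1)
    f = np.exp(-(k / M) ** (2.0 / 3.0))                 # f_{N,k} as in Figure 1
    terms = (f / (np.pi * k)) \
        * np.exp(-(np.pi**2 / 6) * np.outer(w**2, k**2)) \
        * np.sin(2 * np.pi * np.outer(w, k) * np.sqrt(2 * N / 3))
    return np.sqrt(1.5) * phi * terms.sum(axis=1)

w_grid = np.linspace(0.05, 2.34, 2000)                  # the w-range of Figure 1
vals = lambda_N(w_grid)
print(vals.min(), vals.max())                           # size of the oscillation
\\end{verbatim}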
Thus $\\lambda_N(w)$ \nis a rapidly converging Fourier series, (illustrated in\nFigure 1. below) the modulus of which is larger than a positive constant $c(w)>0$,\nprovided that $4\\4 w\\4 \\sqrt{\\frac{2 N} 3} $ is an odd integer.\n\n\n\n\\begin{figure}[H] \n\\psfig{file=oscillatory.eps,width=18cm, height=4cm}\n\\caption{$\\lambda_{100}(w)$:\\,\\, $0.05 \\le w \\le 2.34,\\,\\, M=10$, $f_{100,k}:=\\exp[-(k\/M)^{2\/3}]$\\label{fig1}} \n\\end{figure} \n Hence, we shall prove\n\\noindent\n{\\bf Theorem 1.2.} {\\it For any $N$ divisible by $3$, we have} \n\\begin{equation}\n \\sup_{w>0} |F_N(w)-\\Phi(w)| \\ge \\,\\,\n\\sup_{w\\ge 1}N^{-\\frac 12}|\\lambda_N(w)| + O\\bigl(N^{-1} (\\log N)^5\\bigr)\n>\\,\\, \\frac c {\\sqrt N},\n\\end{equation}\n{\\it for some absolute constant $c>0$.} \n\n\\bigskip\\noindent\nThe proof of Theorem 1.1 is given in Section 2. In Section 3 we investigate \nthe oscillatory part of $\\Psi_N$ in (1.6), relating it to the\nFourier series $\\lambda_N(w)$ above and thus proving Theorem 1.2.\n\n\\noindent\n{\\bf Acknowledgment.}\n The authors would like to thank G.Chistyakov for a careful reading of the\n manuscript\n and Lutz Mattner for his comments on the current ArXiv version.\n \n\n\\section{Proof of Theorem 1.1}\n\nThe event $W_N = 0$ occurs iff $D_N = 0$. Let $Z_1,Z_2,\\dots, Z_N$ be i.i.d. random variables assuming the values $0$, $-1$ and $+1$, each with probability $\\frac 13$. Then $D_N$ is distributed as $ \\sum Z_i$, which has mean $0$ and variance $\\frac{2N}{3}$. By the local central limit theorem $P(\\sum Z_i =0) \\sim (2\\pi)^{-\\frac12} \\left(\\frac{2N}{3} \\right)^{-\\frac12} = \\sqrt{\\frac{3}{4\\pi N}}$ which proves the first statement of Theorem 1.1. Because the distribution of $W_N$ is symmetric about the origin, this implies that in the remainder of the proof we only need to consider positive values of $W_N$. Hence we suppose that $w>0$ throughout and this implies that we need only be concerned with positive values of $D_N$ also.\n\nHoeffding's inequality ensures that for all $N\\geq 2$,\n \\[P\\left(|D_N|\\geq \\sqrt{6N \\log N} \\right)\\leq \\frac{2}{N^{3}} \\]\nand \n\\[P\\left(|T_N -2N\/3|\\geq \\sqrt{2 N \\log N}\\right) \\leq \\frac{2}{N^{3}}.\\] \n\\noindent\nSince the joint distribution of $T_N$ and $D_N$ assigns positive probability to at most $N^2$ points and events with probability $O(N^{-1})$ are irrelevant for the remainder of the proof, we may at any point restrict attention to values $D_N=d$ and $T_N=t$ with $|d| \\leq t$ and satisfying \\smallskip\n\n\\noindent\n \\begin{equation}\n |d|< \\sqrt{6N\\log N} \\quad \\mbox{ and }\\quad \\left|t-\\frac{2N}{3}\\right|< \\sqrt{2N\\log N}. \n \\end{equation}\n\nFor positive integer $m\\leq n$ we have\n\\begin{eqnarray*}\n P(D_N =2m, T_N =2n)&=& P(S_N =m+n, T_N =2n) \\\\\n &=& \\frac{N!}{3^{N}(n+m)! (n-m)! 
(N-2n)!} .\n\\end{eqnarray*}\n\\noindent\nIf $d=2m$ and $t=2n$ satisfy $(2.1)$, then $(n+m)$, $(n-m)$ and $(N-2n)$ are of exact order $N$ and we may apply Stirling's formula to see that\n$$ P(D_N=2m,T_N =2n)$$ \n$$ = \\frac{N^{N+\\frac12} \\left(1+O\\left(\\frac 1N\\right)\\right) }{ 2\\pi 3^{N} (n+m)^{(n+m+\\frac12)}(n-m)^{(n-m+\\frac12)}(N-2n)^{(N-2n+\\frac12)}} $$\n$$ = \\frac{3^{\\frac 32} \\left(1+O\\left(\\frac 1N\\right)\\right) } { 2\\pi N\\left(\\frac{ 3(n+m)}{N}\\right)^{(n+m+\\frac12)}\\left( \\frac{3(n-m)}{N}\\right)^{(n-m+\\frac12)} \\left(\\frac{3(N-2n)}{N}\\right)^{(N-2n+\\frac12)}} $$\n$$ = \\frac{3^{\\frac32} \\left(1+O\\left( \\frac 1N \\right)\\right)} { 2\\pi N } \\, \\exp \\Bigg\\{ -\\left(n+m+\\frac12\\right)\\log\\left( 1+\\frac 3N \\left(n+m-\\frac N3\\right) \\right) $$\n$$ -\\left(n-m+\\frac12\\right) \\log\\left( 1+\\frac 3N\\left(n-m-\\frac N3\\right) \\right)$$\n$$ -\\left(N-2n+\\frac12 \\right)\\log\\left(1+\\frac 3N\\left(\\frac{2N}{3}-2n\\right)\\right)\\Bigg\\}.$$\n\n\\noindent\nNext we expand the logarithms in the exponent. For the first order terms we obtain\n$$-\\frac 3N \\bigg[\\left(n+m+\\frac 12\\right)\\left(n+m-\\frac N3\\right)+\\left(n-m+\\frac12\\right)\\left(n-m-\\frac N3\\right)+ $$\n$$\\left(N-2n+\\frac 12\\right)\\left(\\frac{2N}{3}-2n\\right)\\bigg] $$ \n$$= -\\frac 3N \\left[\\left(n+m-\\frac N3\\right )^2 + \\left(n-m-\\frac N3 \\right)^2+\\left( \\frac{2N}{3}-2n\\right)^2\\right] $$\n$$= -\\frac{3}{N}\\left( 6\\,\\tilde{n}^2+2m^2\\right),$$\nwhere $ \\tilde{n}:= \\left(n-\\frac N3\\right)$.\\\\\n\n\\noindent\nThe second order terms yield\n$$ \\frac 12 \\left(\\frac 3N\\right)^2\\bigg[ \\left(n+m+\\frac 12\\right)\\left(n+m-\\frac N3\\right)^2+\\left(n-m+\\frac 12\\right)\\left(n-m-\\frac N3\\right)^2$$\n$$+\\left(N-2n+\\frac12\\right)\\left( \\frac{2N}{3}-2n\\right)^2 \\bigg] $$\n$$= \\frac12 \\left(\\frac 3N \\right)^2 \\left[ -6\\tilde{n}^3+(2N+3)\\tilde{n}^2+6\\tilde{n}m^2 +\\left( \\frac{2N}{3}+1 \\right)m^2 \\right] $$\n$$= \\frac 3N \\left( 3\\tilde{n}^2+m^2 \\right) + \\frac{27}{N^{2}}\\left(-\\tilde{n}^3+\\tilde{n}m^2 \\right) +O\\left( \\frac{ \\tilde{n}^2+m^2}{ N^{2}}\\right).$$\n\\noindent\nThe third order terms contribute\n$$-\\frac 13\\left(\\frac 3N \\right)^3 \\bigg[\\left(n+m+\\frac12\\right)\\left(n+m-\\frac N3\\right)^3+\\left(n-m+\\frac12\\right)\\left(n-m-\\frac N3\\right)^3$$\n$$+\\left(N-2n+\\frac12\\right)\\left(\\frac{2N}{3}-2n\\right)^3\\bigg]$$\n$$= \\frac{18(\\tilde{n}^3-\\tilde{n}m^2)}{N^{2}}+O\\left(\\frac{\\tilde{n}^4+m^4}{N^{3}}\\right) $$ \n\\noindent\nAs $d=2m$ and $t=2n$ satisfy $(2.1)$, the contribution of the remaining terms is dominated by that of the fourth order terms and equals\n$$O\\left(\\frac{\\tilde{n}^4+m^4}{N^{3}}\\right). $$\n\nCollecting the results of these computations we arrive at \\smallskip\n\n\\noindent\n\\begin{equation} \n \\begin{split}\n & \\qquad \\qquad \\qquad P(D_N=2m,T_N =2n) = \\frac{3^{\\frac 32}}{ 2\\pi N}\\,\\\\ \n &\\times \\exp\\bigg\\{ -\\frac{3(3\\tilde{n}^2+m^2)}{N}-\\frac{9(\\tilde{n}^3 \n -\\tilde{n}m^2)}{N^2} +O\\left(\\frac1N+ \\frac{\\tilde{n}^4+m^4}{N^3}\\right) \\bigg\\} , \n\\end{split} \n\\end{equation}\n\n\\noindent\nprovided $m\\leq n$ are integers between $1$ and $\\frac12 N$ satisfying $m<\\sqrt{2N\\log N}$ and \n$|\\tilde n|=\\left|n-\\frac N3\\right|<\\sqrt{N\\log N}$. 
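The quality of $(2.2)$ is also easy to check numerically. The following small script (ours) compares the exact trinomial probability with the right-hand side of $(2.2)$, the $O(\\cdot)$ term being dropped, for one admissible choice of $N$, $n$ and $m$.
\\begin{verbatim}
from math import lgamma, log, exp, pi

def exact_prob(N, n, m):
    # P(D_N = 2m, T_N = 2n) = N! / (3^N (n+m)! (n-m)! (N-2n)!)
    return exp(lgamma(N + 1) - lgamma(n + m + 1) - lgamma(n - m + 1)
               - lgamma(N - 2 * n + 1) - N * log(3))

def approx_prob(N, n, m):
    # Right-hand side of (2.2) with the O(.) term dropped.
    nt = n - N / 3
    return (3 ** 1.5 / (2 * pi * N)) * exp(-3 * (3 * nt**2 + m**2) / N
                                           - 9 * (nt**3 - nt * m**2) / N**2)

N, n, m = 300, 110, 5          # within the range (2.1)
p_ex, p_ap = exact_prob(N, n, m), approx_prob(N, n, m)
print(p_ex, p_ap, abs(p_ap / p_ex - 1))
\\end{verbatim}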
However, we shall also use $(2.2)$ if these inequalities do not hold, since in that case both left- and right-hand members of $(2.2)$ are negligible for our purposes.\n\n \nBy Taylor expansion of the integrand about $x=m$, we find that for integer $0\\frac12 $ we write $r=m+\\theta$ where $m=\\lfloor r\\rfloor$ and $\\theta= \\mbox{frac} (r)=r-\\lfloor r \\rfloor \\in[0,1)$ denote the integer and fractional parts of r respectively. Then for $r<\\sqrt{2N\\log N}$ and $|\\tilde n| = \\left|n-\\frac N3\\right|<\\sqrt{N\\log N}$,\n\\begin{eqnarray*}\n P(2\\leq D_N\\leq 2r,T_N =2n) = P(2\\leq D_N \\leq 2m,T_N =2n)= \\qquad \\\\[3mm] \n \\frac{3^{\\frac32}}{ 2\\pi N}e^{-\\left\\{\\frac{9\\tilde{n}^2}{N}+ \\frac{9\\tilde{n}^3}{N^2}+ O\\left(\\frac1N+\\frac{\\tilde{n}^4}{N^3}\\right)\\right\\}} \\bigg[ \\int\\limits_{[\\frac12,r)}e^{\\frac{-3x^2}{N}+\\frac{9\\tilde{n}x^2}{N^2}}dx \n+ \\int\\limits_{[r,m+\\frac12)} e^{\\frac{-3x^2}{N}+\\frac{9\\tilde{n}x^2}{N^2}} dx \\bigg] .\n\\end{eqnarray*}\n\\noindent\nEvaluating the second integral by expanding the integrand about the point $x=m+\\frac12 $, we arrive at\n\\begin{eqnarray*}\n&& P(2\\leq D_N\\leq 2r,T_N =2n) = \n\\frac{3^{\\frac32}}{ 2\\pi N}e^{-\\left\\{ \\frac{9}{N}\\tilde{n}^2 +\\frac{9}{N^2}\\tilde{n}^3+\n O\\left(\\frac1N+\\frac{1}{N^3}\\tilde{n}^4 \\right)\\right\\}} \\\\ && \\quad \\quad\\times \n \\left[ \\int_{[\\frac12,r)} e^{-\\frac{3}{N}x^2 +\\frac{9}{N^2}\\tilde{n}x^2}dx -\n e^{-\\frac{3}{N}r^2 +\\frac{9}{N^2}\\tilde{n}r^2 } \\left(\\mbox{frac} (r)-\\frac12+O\\left(\\frac rN\\right)\\right) \\right]. \n\\end{eqnarray*}\n\\noindent\nAgain we may use this for all $r>0$ and integer $n\\leq\\frac12 N$.\n\nChoose $w>0$ and $r=w\\sqrt{\\frac n2}$. We have\n\\begin{eqnarray*}\n&& P(00$,\n\\begin{equation}\n \\begin{split}\n P\\left( 00$. Since it is identical to $(1.6)$, this proves the third statement of the theorem.\n\n\nIt remains to prove that any closed interval of length $O(N^{-1})$ that does not contain the origin has probability $O(N^{-1})$ under $\\Psi_N$ and hence $F_N$. Clearly, this will imply the second statement of the theorem. Obviously, the only term in $(2.8)$ that we need to consider is \n\\begin{equation*}\nR(w) = \\frac{\\Lambda_N(w)}{\\sqrt{N}}\n\\end{equation*}\n\\begin{equation}\n= -\\sqrt{\\frac{3}{2N}} \\varphi (w) \\sum_{0\\leq n\\leq N} \\frac{3}{\\sqrt{\\pi N} }e^{-\\frac 9N\\left(n - \\frac N3\\right)^2} \n \\left(\\mbox{frac} \\left(w\\sqrt{2n} \\right)-\\frac12 \\right) ,\n\\end{equation}\nas the remainder of the expansion obviously has bounded derivative. \n\n We begin by noting that if for a given $w>0$, $w\\sqrt{ 2n}$ is an integer for some $1\\leq n\\leq N$, then $\\mbox{frac} \\left(w\\sqrt{ 2n}\\right)$ and hence $R$ has a jump discontinuity at this value of $w$. In the range where\n$|n- \\frac N3|= x\\sqrt{N}$ for $|x|\\leq y$, there can be a most $wy$ such integer values of $n$. To see this, simply note that if $w\\sqrt{ 2n}=k$ and $w\\sqrt{2n'}=k+1$ , then $|n'-n| \\geq \\frac{ 2\\sqrt{N}}{w}$ , so there can be only \n$\\frac{2y\\sqrt{N}}{ \\frac{2\\sqrt{N}}{w} }=wy$ values of $n$ in the required interval. Such a value of n contributes an amount $O\\left(N^{-1}\\varphi(w)e^{-9x^2}\\right)$ to the jump discontinuity at $w$, and hence $R(w)-R(w-0)= O(N^{-1})$ at such a point $w$. 
Incidentally, this proves the second part of Theorem 1.1.\n\n Choose $\\epsilon>0$ and consider two such jump points $w\\not=w'$ in $[\\epsilon , \\infty)$ with $w\\sqrt{ 2n}=k$ and $w'\\sqrt{2n'}=k'$ for integers $k, k', n$ and $n'$ with $(n-\\frac N3)=x\\sqrt{N},\\ \\left(n'-\\frac N3\\right)=x'\\sqrt{N} $ and $|x|\\vee | x'|\\leq y$. Suppose that $(w'-w)=O(N^{-1})$ and hence $\\frac{w'-w}{w}=O(N^{-1})$ since $w\\geq \\epsilon$. For given $w$, $n$ and $k$, we ask how many integer values of $n'$ satisfy these conditions.\n\nFirst we note that, for some positive $c$ there are only at most $cw(y+1)$ possible choices for $k'$ since $\\sqrt{2n} = \\sqrt{2\\frac N3 +2x\\sqrt{N}} = \\sqrt{\\frac{2N}{3}} +\\sqrt{\\frac32}x+O\\left( \\frac{y^2}{\\sqrt{N}} \\right), \\sqrt{2n'} =\\sqrt{\\frac23 N} +\\sqrt{\\frac 32}x'+O\\left(\\frac{y^2}{\\sqrt{N}}\\right)$ and hence $|k'-k|\\leq 2wy + O\\left( w\\frac{y^2}{\\sqrt N}+|w'-w|\\sqrt{N}\\right)\\leq \\left(\\frac c2 \\right)w(y+1)$. For each choice of $k'$, the corresponding $n'$ satisfies $n' =\\frac12 \\left(\\frac{k'}{w'}\\right)^2$ for some admissible $w'$, and since $w,w'\\geq \\epsilon$ and $(w'-w)=O(N^{-1})$, this leaves a range of order $O\\left( \\left(\\frac{k'}{w'}\\right)^2 N^{-1} \\right)=O(1)$ for $n'$. Hence, for some $C>0$, there are at most $Cw(y+1)$ possible values of $n'$ for which there exists an integer $k'$ with $(w'-w) = O(N^{-1})$. By the same argument as above, the total contribution of discontinuities to$|R(w')-R(w) |$ is $O(N^{-1})$ as long as $|w-w'|= O(N^{-1})$. As any closed interval of length $O(N^{-1})$ that does not contain the origin is bounded away from $0$, this holds for the sum of the discontinuities in such an interval.\n\n At all other points $w>0$, $R$ is differentiable and the derivative of $\\mbox{frac} \\left(w\\sqrt{ 2n}\\right)$ equals $\\sqrt{2 n}$. Hence the derivative of $R$ is $O(1)$ and its differentiable part contributes at most $O(N^{-1})$ to the probability of any interval of length $O(N^{-1})$. This completes the proof of the Theorem 1.1.\n\n\n\n\\section{Evaluation of the oscillatory term}\n\nLet $W$ denotes a r.v. with non negative c.f. $\\psi(t)\\ge 0$ of\nsupport contained in $[-1,1]$ and exponential decay of density \nof type $\\exp\\{-|x|^{2\/3}\\},\\,x\\to\\infty$, \n\\citep[see e.g.] [p. 85] {Bhattacharya-Rao:1986\/2}.\nIntroduce r.v. $w_N := w +N^{-1\/2}(\\log N)^{-1}\\4 W, \\, w >0$ and \nlet $c>0$ denote an positive absolute constant. \nThen we may bound the normal approximation\nerror in $(1.6)$ using similar arguments as in the proof of \nthe well-known smoothing inequality, (see Lemma 12.1 of Bhattacharya and Rao),\n obtaining, for $w\\ge 1$,\n\\begin{equation}\n\\begin{split}\n\\,N^{-1\/2}\\bigl|\\mathbf E \\Lambda_N(w_N)\\bigr| \\le & \n\\, \\bigl|\\mathbf E \\bigl(F_N(w_N) -\\Phi(w_N)\\bigr)\\bigr| +\ncN^{-1}\\\\ \n& \\,\\le \\sup_{x\\in[w-1\/2,w+1\/2]}\\bigl|F_N(x) -\\Phi(x)\\bigr|+cN^{-1}, \n\\end{split}\n\\end{equation} \nwhere\n$$\\Lambda_N(w):= \\, -\\varphi(w)\\4 \n\\sum_{1\\le n\\le N}\\frac {3^{3\/2}}{\\sqrt{2 \\pi N}}\n\\exp\\{-\\frac 9 N(n - \\frac N 3)^2\\}(\\mbox{frac}(w \\sqrt {2n})- 1\/2 ).$$\nWe start with the following Fourier series expansion\n\\begin{eqnarray} \n\\tau(x):= frac(x)-1\/2 = -\\sum_{k=1}^{\\infty} 2\\4 \\frac{ sin(2 \\pi\\4 k\\4 x)}\n{2\\4 k\\4 \\pi}, \n\\end{eqnarray*}\nwhich holds for all nonintegral $x$. \n\nNote that by the properties of $W$ (i.e. 
the vanishing of Fourier coefficients)\n$$ \n\\mathbf E \\tau(w_N\\sqrt{2n})= -\\sum_{k=1}^{M_n} \\mathbf E \\frac{ sin(\n 2\\pi\\4 k\\4 \\sqrt{2n}(w+N^{-1\/2}(\\log N)^{-1}\\4 W))} { k\\4 \\pi},\n$$\nwhere $M_n:=[\\sqrt N\\log N\/(2\\pi\\sqrt {2n})]+1$, i.e. $M_n = O(\\log N)$ for \n$|n-N\/3|<\\sqrt{N\\log N}$.\n\nRewriting $\\Lambda_N(w)$ in $(3.1)$ in the form\n\\begin{equation}\n\\Lambda_N(w) := \\, -\\frac {3^{3\/2}} {(2 \\4 \\pi\\4 N)^{1\/2}} \\varphi(w) \n\\sum_{n=1}^{N} exp\\{-9\\4 (\\tilde{n}^2\/N\\}\\4 \n\\tau\\bigl(w\\4(2 \\4 n)^{1\/2}\\bigr),\n\\end{equation}\nwhere $\\tilde{n}:=n-N\/3$,\nwe get \n\\begin{equation}\n\\begin{split}\n\\mathbf E \\Lambda_N(w_N) =& \\, \\sqrt{\\frac 3 2}\\pi^{-1} \n\\sum_{k=1}^{M} \\frac 1 k \n\\lambda_{N,k}+O(N^{-3}), \\,\\, \n\\text{where}\\,\\, M:=[\\log N]\\,\\, \\text{and}\\\\\n\\lambda_{N,k} :=& \\, \\frac 3{\\sqrt{\\pi \\4 N}} \\mathbf E \\varphi(w_N)\\sum_{n=1}^{N} \nexp\\{-9\\4 \n\\tilde{n}^2\/N\\}\\4 sin(2\\pi\\4 k\\4 w_N\\4 \\sqrt{2n}).\n\\end{split}\n\\end{equation}\nIn the arguments of the $\\sin$ function we\n use a Taylor expansion, for $|n-N\/3|<\\sqrt{N\\log N}$, \n$$\n\\sqrt{n} = \\sqrt{N\/3} + \\sqrt 3 \\tilde{n}\/(2\\sqrt N)+\nO\\bigl(\\tilde{n}^2\/N^{3\/2} \\bigr).\n$$\nThus, for $|\\tilde{n}|<\\sqrt{N\\log N}$, \n\\begin{equation}\n sin(2\\pi\\4 k\\4 w_N\\4 \\sqrt{2n})= sin\\bigl(d_0 + \\4 \\pi\\4 d_1 \\tilde{n} \\bigr) \n+ O\\bigl(k\\4 w_N N^{-3\/2}\\4 \\tilde{n}^2\\bigr),\n\\end{equation}\nwhere $ d_0 :=2\\pi\\4k \\4 w_N\\4 (\\frac 2 3)^{1\/2} \\sqrt{N}$, \n$d_1 := k\\4 \\4 w_N\\4\n(\\frac 3 2)^{1\/2}\\4\/ \\sqrt N$. %\nHence we may write\n\\begin{equation}\n\\begin{split}\n \\lambda_{N,k} =& \\frac 3 {\\sqrt{\\pi \\4 N}} \\mathbf E \\varphi(w_N) \n\\sum_{n\\in \\hbox{\\rm Z\\negthinspace \\negthinspace Z }} exp\\{-9\\4 \n\\tilde{n}^2\/N\\}\\4\n\\sin\\bigl(d_0 + \\4 2\\pi \\4d_1 \\4\\tilde{n}\\bigr)\\\\\n&\\quad + O(k N^{-1\/2} \\4 \\log N ). \n\\end{split}\n\\end{equation}\nWe shall now evaluate the theta sum on the left hand side using\nPoisson's formula, \n\\citep[see e.g.][p. 189]{Mumford:1983}.\n\\begin{equation}\n\\sum_{m \\in \\hbox{\\rm Z\\negthinspace \\negthinspace Z }} \\exp\\{-z\\4 m^2 +i2\\pi\\4 m \\4 b\\} = \\pi^{1\/2}z^{-1\/2}\n\\sum_{l \\in \\hbox{\\rm Z\\negthinspace \\negthinspace Z }}\n\\exp\\{- \\pi^2 \\4 z^{-1}(l- b)^2\\},\n\\end{equation}\nwhere $b \\in \\mathbb R$, $\\Re z >0$ and $z^{1\/2}$ denotes the branch with \npositive real part.\nWriting $sin(x)=(\\exp[i\\4 x] - \\exp[-i\\4 x])\/2$ in (3.6) \nand assuming for simplicity $N\/3 \\in \\hbox{\\rm Z\\negthinspace \\negthinspace Z }$ we may replace summation over n by\nsummation over $ m:=\\tilde{n}= n - N\/3 \\in \\hbox{\\rm Z\\negthinspace \\negthinspace Z }$ in (3.6). 
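Identity $(3.7)$ can be confirmed numerically before it is applied; the following check (ours, with an arbitrary real $z>0$ and $b$) truncates both sides of the theta identity.
\\begin{verbatim}
import numpy as np

def lhs(z, b, K=60):
    m = np.arange(-K, K + 1)
    return np.sum(np.exp(-z * m**2 + 2j * np.pi * m * b))

def rhs(z, b, K=60):
    l = np.arange(-K, K + 1)
    return np.sqrt(np.pi / z) * np.sum(np.exp(-(np.pi**2 / z) * (l - b)**2))

z, b = 0.5, 0.3
print(lhs(z, b))   # imaginary part vanishes up to rounding
print(rhs(z, b))
\\end{verbatim}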
\nApplying now (3.7) we have to bound the imaginary part of \n expectations of theta functions of type \n\\begin{eqnarray}\nI_k:=\\frac 3 {\\sqrt{\\pi\\4 N}} \\mathbf E \\varphi(w_N)\\exp\\{\\4 i\\4 d_0 \\}\n\\sum_{m\\in \\hbox{\\rm Z\\negthinspace \\negthinspace Z }} \\exp\\{-\n9\\4 m^2 \\4 N^{-1} + i\\4 2\\pi\\4 d_1\\4m\\}.\n\\end{eqnarray*}\nWe obtain for $ k \\le M =[\\log N]$ that $|d_1| \\le 2\n \\4N^{-1\/2}(\\log N) |w_N| \\le 4 \\4N^{-1\/2}(\\log N)^2 $ with probability\n $1- O(N^{-3\/2})$ by the assumption $w \\le \\log N$.\n Hence the dominant term\nin (3.9) below is the term with $l=0$ and we obtain with \n$c_{N,k}:= \\exp\\{ \\4 2\\pi\\4 i\\4 \\4k \\4 w_N\\4 (\\frac 2 3)^{1\/2} \\sqrt{N}\\}$ \n\\begin{equation}\n\\begin{split}\nI_k =& \\, \\mathbf E c_{N,k}\\varphi(w_N)\\4 \\sum_{l \\in \\hbox{\\rm Z\\negthinspace \\negthinspace Z }} \n\\exp\\{ - N \\4( l- d_1)^2\\4 \\pi^2\/9\\}\n\\\\\n = & \\, \\mathbf E c_{N,k}\\4 \\varphi(w_N)\\exp\\{- N\\4 d_1^2\\4 \\pi^2\/9\\} + O\\bigl(N^{-3\/2}\\bigr)\n\\\\ \n =& \\,\\4 f_{N,k}\\4 \\varphi(w) \\exp\\{- \\pi^2 k^2 \\4 w^2\/6 \n+ \\4 i\\4 \\42\\pi k \\4 w\\4 (\\frac 2 3)^{1\/2} \\sqrt{N}\\}\n+ O\\bigl(N^{-1\/2}(\\log N)^4\\bigr),\n\\end{split}\n\\end{equation}\nwhere $f_{N,k}:= \\psi\\bigl(2\\pi\\4 (\\frac 2 3)^{1\/2} \\4 \\frac k {\\log\n N}\\bigr)= 1+ O\\bigl((k\/\\log N)^2\\bigr)$. \nUsing the equation (3.9) in (3.4) we get\n\\begin{equation}\n\\begin{split}\n\\mathbf E \\Lambda_N(w_N) =& \\, \\sqrt\\frac 3 2 \n\\varphi(w)\\Im \\sum_{k=1}^M \\frac {f_{N,k}} {k\\pi}\n\\exp\\bigl\\{- \\frac{\\pi^2}6\\4 k^2\\4 w^2\\4+ \n2\\pi\\4 i\\4 \\4k \\4 w\\4 \\sqrt{\\frac{2N}3}\\bigr\\}\\\\\n& \\, \\quad + O\\bigl(N^{-1\/2}(\\log N)^5\\bigr).\n\\end{split}\n\\end{equation}\n\nHence, there exists a constant $c_0(w) >0$ such that\n\\begin{equation}\n|\\mathbf E \\Lambda_N(w_N)| > c_0(w) >0,\n \\end{equation} \nprovided that $4\\4 w\\4 \\sqrt{\\frac{2 N} 3} $ is an odd integer, which\nproves the assertion $(1.10)$. \n\n\\renewcommand\\bibsection{\\section*{REFERENCES}}\n\\bibliographystyle{ims}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nA hypergraph $H = (V,E)$ is said to be bipartite or 2-colorable\nif the vertex set $V$ can be partitioned into two disjoint sets $V_1$ and $V_2$\nsuch that every edge $e\\in E$ has non-empty intersections with both the partitions.\nIn the case of graphs, one can easily find the two partitions from any given instance of\n$H$ by breadth first search.\nHowever, the problem turns out to be notoriously hard if edges of size more than 2 are present.\nIn fact, in the case of bipartite 3-uniform and 4-uniform hypergraphs,\nit is well known that the problem is NP-hard~\\cite{Dinur_2005_jour_Combinatorica,Khot_2014_conf_SODA}.\n\nIn general, finding a proper 2-coloring is relatively easy if the hypergraph is sparse. \nIn an answer to a question asked by Erd\\\"os~\\cite{Erdos_1963_jour_NordikMat} on 2-colorability of uniform hypergraphs, it is now known that for large $m$,\nany $m$-uniform hypergraph on $n$ vertices with at most \n$2^m0.7\\displaystyle\\sqrt{\\frac{m}{\\ln m}}$ edges is 2-colorable~\\cite{Radhakrishnan_1998_conf_FOCS}. As pointed in~\\cite{Radhakrishnan_1998_conf_FOCS}, the result can also be extended to \nnon-uniform hypergraphs with minimum edge size $m$. \nHowever, it is much worse if the restriction on the minimum edge size and the\nnumber of hyperedges is not imposed. 
Even when a hypergraph is 2-colorable, the best\nknown algorithms~\\cite{Alon_1996_jour_NordicJComput,Chen_1996_conf_IPCO}\nrequire $O\\left((n\\ln n)^{1-1\/M}\\right)$ colors to properly color the hypergraph\nin polynomial time,\nwhere $M$ is the maximum edge size, also called dimension, of the hypegraph.\nIn recent years, 2-colorability of random hypergraphs has also received considerable attention.\nThrough a series of works~\\cite{Achlioptas_2008_conf_FOCS,CojaOghlan_2012_conf_SODA,Panangiotou_2012_conf_STOC},\nit is now established that random uniform hypergraphs are 2-colorable only when\nthe number of edges are at most $Cn$, for some constant $C>0$.\nThus, it is evident that coloring relatively dense hypergraphs is difficult unless the \nhypergraph admits a ``nice\" structure.\n\nIn spite of the hardness of the problem,\nthere are a number of applications that require hypergraph coloring algorithms.\nFor instance, such algorithms have been used for approximate DNF counting~\\cite{Lu_2004_jour_SIAMJDiscMath}, as well as in various resource allocation and scheduling\nproblems~\\cite{Capitanio_1995_jour_IJPP,Ahuja_2002_conf_APPROX}.\n The connection between ``Not-All-Equal\" (NAE) SAT\nand hypergraph 2-coloring also demonstrate its significance in context of satisfiability problems. \nAmong the various approaches studied in the literature, perhaps\nthe only known non-probabilistic instances of efficient 2-coloring are in the cases \nwhere the hypergraph is $\\alpha$-dense, 3-uniform and bipartite~\\cite{Chen_1996_conf_IPCO}, \nor where the hypergraph is $m$-uniform and its every edge has equal number of vertices of either colors~\\cite{McDiarmid_1993_jour_CombProbComput}.\n\nIn this paper, we consider the problem of coloring random non-uniform hypergraphs of dimension $M$,\nthat has an underlying planted bipartite structure. We present a polynomial time algorithm\nthat can properly 2-color instances of the random hypergraph with high probability whenever\nthe expected number of edges in at least $dn\\ln n$ for some constant $d>0$.\nTo the best of our knowledge, such a model has been only considered \nby Chen and Frieze~\\cite{Chen_1996_conf_IPCO}, who extended a graph coloring \napproach of Alon and Kahale~\\cite{Alon_1997_jour_SIAMJComput} to \npresent an algorithm for \n 2-coloring of 3-uniform bipartite hypergraphs with $dn$ number of edges.\nTo this end, our work generalizes the results of \\cite{Chen_1996_conf_IPCO} to\nnon-uniform hypergraphs, and it is the first algorithm that is guaranteed to properly color\nnon-uniform bipartite hypergraphs using only two colors. We also discuss the possible extension \nof our approach to the case of non-uniform $k$-colorable hypergraphs.\n\n\\subsection*{The Main Result}\nBefore stating the main result of this paper, we present the planted model under \nconsideration, which \nis based on the model that is studied in~\\cite{Ghoshdastidar_2015_arxiv}.\nThe random hypergraph $H_{n,(p_m)_{m=2,\\ldots,M}}$ is generated\non the set of vertices $V = \\{1,2,\\ldots,2n\\}$, which is arbitrarily split into \ntwo sets, each of size $n$, and the sets are colored with two different colors.\nGiven a integer $M$, and $p_2,\\ldots,p_M\\in[0,1]$, the edges of the hypergraph\nare randomly added in the following way. 
All the edges \nof size at most $M$ are added independently, and for any $e\\subset V$, \n\\begin{align*}\n \\P(e\\in E) = \\left\\{ \\begin{array}{ll}\n p_m & \\text{if } e \\text{ is not monochromatic and } {|e|=m}, \\\\\n 0\t & \\text{otherwise}.\\\\\n \\end{array}\\right.\n\\end{align*}\nWe prove the following result.\n\\begin{theorem}\n\\label{thm_spec_color}\n Assume $M=O(1)$. There is a constant $d>0$ such that if \n \\begin{equation}\n \\sum\\limits_{m=2}^M p_m \\binom{2n}{m} \\geq {dn\\ln n}, \n \\end{equation}\n then with probability \n $(1-o(1))$, Algorithm~\\ref{alg} (presented in next section) finds a proper 2-coloring of the random non-uniform bipartite hypergraph $H_{n,(p_m)_{m=2,\\ldots,M}}$. \n\\end{theorem}\nIt is easy to see that the expected number of edges in the hypergraph is \n$\\Theta\\left(\\sum_{m=2}^M p_m\\binom{2n}{m}\\right)$, and so the condition may be stated\nin terms of expected number of edges.\n\n\\subsection*{Organization of this paper}\nThe rest of the paper is organized in the following manner.\nIn Section~\\ref{sec_algorithm}, we present our coloring algorithm, followed by a proof of \nTheorem~\\ref{thm_spec_color} in Section~\\ref{sec_proof}. In the concluding remarks in \nSection~\\ref{sec_conclusion}, we provide discussions about the key assumptions made in this work,\nand also the possible extensions of our results to $k$-coloring and strong coloring of non-uniform\nhypergraphs. The appendix contains proofs of the lemmas mentioned in Section~\\ref{sec_proof}.\n\n\\section{Spectral algorithm for hypergraph coloring}\n\\label{sec_algorithm}\n\nThe coloring algorithm, presented below, is similar \nin spirit to the spectral methods of~\\cite{Alon_1997_jour_SIAMJComput,Chen_1996_conf_IPCO},\nbut certain key differences exist, which are essential to deal with\nnon-uniform hypergraphs. \n\nGiven a hypergraph $H = (V,E)$, \nan initial guess of the color classes is formed by exploiting the spectral properties of a certain matrix\n$A\\in\\mathbb{R}^{|V|\\times|V|}$ defined as \n\\begin{align}\n A_{ij} = \\left\\{ \\begin{array}{rl}\n \\displaystyle\\sum_{e\\in E: e\\ni i,j} \\frac{1}{|e|}\t& \\text{if } i\\neq j, \\text{ and} \\\\\n \\displaystyle\\sum_{e\\in E: e\\ni i} \\frac{1}{|e|}\t& \\text{if } i= j. \n \\end{array}\\right.\n \\label{eq_defnA}\n\\end{align}\nThe above matrix has been used in the literature to construct the Laplacian of a \nhypergraph~\\cite{Bolla_1993_jour_DiscreteMath,Ghoshdastidar_2015_arxiv},\nand is also known to be related to the affinity matrix of the star expansion of \nhypergraph~\\cite{Agarwal_2006_conf_ICML}. \nThe use of matrix $A$ is in contrast to the adjacency based graph construction of~\\cite{Chen_1996_conf_IPCO} that is likely to\nresult in a complete graph if the hypergraph is dense.\n\nThe later stage of the algorithm considers an iterative procedure that\nis similar \nto~\\cite{Alon_1997_jour_SIAMJComput,Chen_1996_conf_IPCO}, but uses a \nweighted summation of neighbors. 
Such weighting is crucial while\ndealing with the edges of\ndifferent sizes.\n\n\\begin{varalgorithm}{COLOR}\n\\caption {-- Colors a non-uniform hypergraph $H$:}\n\\label{alg}\n\\begin{algorithmic}[1]\n \\STATE Define the matrix $A$ as in~\\eqref{eq_defnA}.\n \\STATE Compute\n $x^A = \\underset{\\Vert x \\Vert_2 = 1}{\\textup{arg min~}} x^TAx$.\n \\STATE Let $T = \\lceil \\log_2 n\\rceil$, $V_1^{(0)} = \\{ i\\in V: x_i^A \\geq 0\\}$ and \n $V_2^{(0)} = \\{ i\\in V: x_i^A < 0\\}$.\n \\FOR {$t = 1,2,\\ldots, T$}\n \\STATE Let \n $V_1^{(t)} = \\left\\{ i\\in V: \\sum\\limits_{j\\in V_1^{(t-1)}\\backslash\\{i\\}} A_{ij} <\n \\sum\\limits_{j\\in V_2^{(t-1)}\\backslash\\{i\\}} A_{ij} \\right\\}$, \n \\newline and $V_2^{(t)} = V\\backslash V_1^{(t)}$.\n \\ENDFOR\n \\IF {{$\\exists e\\in E$} such that $e\\subset V_1^{(T)}$ or $e\\subset V_2^{(T)}$}\n \\STATE Algorithm FAILS.\n \\ELSE\n \\STATE 2-Color $V$ according to the partitions \n $V_1^{(T)},V_2^{(T)}$.\n \\ENDIF\n\\end{algorithmic}\n\\end{varalgorithm}\n\n\\section{Proof of Main Result}\n\\label{sec_proof}\n\nWe now prove Theorem~\\ref{thm_spec_color}.\nWithout loss of generality, assume that the true color classes in $V$ are $\\{1,2,\\ldots,n\\}$ and $\\{n+1,\\ldots,2n\\}$.\nAlso, let $W^{(t)}$, $t=0,1,\\ldots,T$, denote the incorrectly colored\nvertices after iteration $t$,\nwith $W^{(0)}$ being the incorrectly colored nodes after initial spectral step.\nWe prove Theorem~\\ref{thm_spec_color} by showing with probability $(1-o(1))$, \nthe size of $W^{(T)} <1$, which implies that all nodes are correctly colored, and hence, the hypergraph must be \nproperly colored.\n\nThe first lemma bounds the size of $W^{(0)}$, \\textit{i.e., } the error incurred at the initial spectral step.\n\\begin{lemma}\n\\label{lem_spectral}\nWith probability $(1-o(1))$,\n$|W^{(0)}| \\leq \\displaystyle\\frac{n}{M^22^{2M+4}}$.\n\\end{lemma}\nNext, we analyze the iterative stage of the algorithm to make the following claim,\nwhich characterizes the vertices that are correctly colored after iteration $t$.\n\\begin{lemma}\n\\label{lem_iteration_charac}\n Let $\\eta = \\displaystyle\\frac{1}{2^{M+2}}\\sum\\limits_{m=2}^M\\frac{p_m(n-1)}{m}\\binom{n-2}{m-2}$. \n For any $t\\in\\{1,\\ldots,T\\}$,\n if $\\sum\\limits_{j\\in W^{(t-1)}\\backslash\\{i\\}} A_{ij} < \\eta$ for any $i\\in V$, then \n $P(i\\in W^{(t)})\\leq n^{-\\Omega(d)}$.\n\\end{lemma}\nNote that there are only $T=\\lceil \\log_2 n\\rceil$ iterations, and $|V| =2n$. 
\nCombining the result of Lemma~\\ref{lem_iteration_charac} with union bound, we can conclude\nthat with probability $(1-o(1))$, for all iterations $t=1,2,\\ldots,T$, \nthere does not exist any $i\\in V$ such that\n$\\sum\\limits_{j\\in W^{(t-1)}\\backslash\\{i\\}} A_{ij} < \\eta$.\nWe also make the following observation, where $\\eta$ is defined in Lemma~\\ref{lem_iteration_charac}.\n\\begin{lemma}\n\\label{lem_iteration_size}\nWith probability $(1-o(1))$, there does not exist $C_1,C_2\\subset V$ such that $|C_1|\\leq\\frac{n}{M^22^{2M+4}}$,\n$|C_2| = \\frac12 |C_1|$ and for all $i\\in C_2$, $\\sum\\limits_{j\\in C_1\\backslash\\{i\\}} A_{ij} \\geq \\eta$.\n\\end{lemma}\nWe now use the above lemmas to proceed with the proof of Theorem~\\ref{thm_spec_color}.\nLemma~\\ref{lem_spectral} shows that $|W^{(0)}|\\leq \\frac{n}{M^22^{2M+4}}$ with probability $(1-o(1))$.\nConditioned on this event, and due to the conclusion of Lemma~\\ref{lem_iteration_charac},\none can argue that Lemma~\\ref{lem_iteration_size} is violated unless\n$|W^{(t)}| < \\frac12 |W^{(t-1)}|$ for all iteration $t$ with probability $(1-o(1))$. \nThus, in each iteration,\nthe number of incorrectly colored vertices are reduced by at least half. Hence, after \n$T=\\lceil \\log_2 n\\rceil$ iterations, $|W^{(T)}| <1$, which implies that all vertices are correctly colored.\n\n\\section{Discussions and Concluding remarks}\n\\label{sec_conclusion}\nIn this paper, we showed that a random non-uniform bipartite hypergraph of dimension $M$ \nwith balanced partitions can be properly 2-colored with \nprobability $(1-o(1))$ by a polynomial time algorithm.\nThe proposed method uses a spectral approach to form initial guess of the color classes,\nwhich is further refined iteratively.\nTo the best of our knowledge, this is the first work on 2-coloring bipartite non-uniform hypergraphs.\nPrevious works~\\cite{Chen_1996_conf_IPCO,Krivelvich_2003_jour_JAlgo} \nhave only restricted to the case of uniform hypergraphs.\n\n\\subsection*{A note on the assumptions in Theorem~\\ref{thm_spec_color}}\n\nThe key assumptions made in this paper are the following:\n\\begin{enumerate}\n\\item $M = O(1)$, and \n\\item $p_2,\\ldots,p_M$ are such that\nthe expected number of edges is larger than $dn\\ln n$, where $d>0$ is a large constant.\n\\end{enumerate}\nThe assumption $M = O(1)$ is crucial, particularly in Lemma~\\ref{lem_spectral},\nand helps to ensure that $d$ can be chosen to be a constant. This can be avoided \nif $d$ is allowed to increase with $n$ appropriately. We note that a previous \nwork on spectral hypergraph partitioning~\\cite{Ghoshdastidar_2015_arxiv} allows\n$M$ to grow with $n$, but imposes an additional restriction so that the number of \nedges of larger size decay rapidly.\n\nThe second assumption is stronger than the one in \\cite{Chen_1996_conf_IPCO},\nwhere it was shown that a random bipartite 3-uniform hypergraph can be properly\n2-colored with high probability if the expected number of edges is $dn$.\nThis is due to the use of matrix Bernstein inequality~\\cite{Tropp_2012_jour_FOCM}\nin Lemma~\\ref{lem_spectral} that does not provide useful bounds in the most sparse \ncase. On the other hand, Chen and Frieze~\\cite{Chen_1996_conf_IPCO}\nuse the techniques of Kahn and Szemeredi~\\cite{Friedman_1989_conf_STOC}\nthat allows them to work in the most sparse regime. 
\nHowever, it is not clear how the \nsame techniques can be extended even to uniform hypergraphs of higher order.\nThus, it remains an open problem whether a similar result can be proved when the number of edges in the hypergraph grows linearly with $n$.\n\n\\subsection*{$k$-coloring of hypergraphs}\nThough Algorithm~\\ref{alg} has been presented only for the hypergraph 2-coloring problem,\none may easily extend the approach to achieve a $k$-coloring,\nwhere the objective is to color the vertices of the hypergraph with $k$ colors such that no edge \nis monochromatic.\nA possible extension of Algorithm~\\ref{alg} is as follows:\n\\begin{enumerate}\n \\item\n In Step~2, compute the eigenvectors corresponding to the $(k-1)$ smallest eigenvalues of $A$. \n \\item\n Use $k$-means algorithm~\\cite{Ostrovsky_2013_jour_JACM} to cluster rows of the eigenvector matrix into $k$ groups,\n and define the initial guess for the color classes $V_1^{(0)},\\ldots,V_k^{(0)}$ in Step~3 according\n to the above clustering.\n \\item\n The iterative computation in Step~6 is modified by defining\n \\begin{displaymath}\n \\qquad\n V_l^{(t)} = \\left\\{ i\\in V: \\sum\\limits_{j\\in V_l^{(t-1)}\\backslash\\{i\\}} A_{ij} <\n \\sum\\limits_{j\\in V_{l'}^{(t-1)}\\backslash\\{i\\}} A_{ij} \\text{ for all } l'\\neq l\\right\\}\n \\end{displaymath}\n for $l=1,2,\\ldots,(k-1)$, and $V_k^{(t)} = V\\backslash \\left(\\bigcup_{l>> \\widehat G(c) \\\\\n@AAA @AAA \\\\\n\\widehat G({\\mathcal O},a) @>>> \\widehat G({\\mathcal O},c).\n\\end{CD}\n\\end{equation*}\nOne can also view $\\widehat G({\\mathcal O},c)$ as the projective limit of\n$\\mathbf G({\\mathcal O})\/\\Gamma({\\mathfrak a})$ over nonzero ideals ${\\mathfrak a}\\subseteq {\\mathcal O}$ and similarly for\n$\\widehat G({\\mathcal O},a)$. Thus they are profinite (and hence compact) groups,\nwhile $\\widehat G(c)$ and $\\widehat G(a)$ are locally compact. It is then\neasy to see that the two horizontal maps are surjective and have the same\nkernel which is called the \\emph{congruence subgroup kernel} $C(S, \\mathbf G)$. \n\nFrom a more general perspective, the congruence subgroup problem is the\ndetermination of $C(S,\\mathbf G)$. The case when $C(S,\\mathbf G)=1$ is equivalent to\nevery $S$-arithmetic subgroup being an $S$-congruence subgroup.\n\n\\subsection{Reductions}\n\nThe congruence subgroup problem admits a number of reductions. The functor\n$\\mathbf G\\to C(S,\\mathbf G)$ satisfies a weak form of exactness outlined in\n\\cite{Ra1}*{Introduction}. Since $C(S,\\mathbf G)=1$ when $\\mathbf G$ is finite or the\nadditive group $\\mathbf G_a$, this implies that $C(S,\\mathbf G) = C(S, \\mathbf G^0\/\\mathbf\nN_{\\mathbf G})$, where $\\mathbf N_{\\mathbf G}$ is the unipotent radical of $\\mathbf G$. We thus\nmay assume that $\\mathbf G$ is connected and reductive. A theorem of Chevalley\n\\cite{Chevalley} based on class field theory implies that $C(S,\\mathbf\nT)=1$ for $\\mathbf T$ a $k$-torus. Together with the weak exactness\nproperty, this implies that $C(S,\\mathbf G) = C(S,\\mathpsscr D\\mathbf G)$ where\n$\\mathpsscr D\\mathbf G$ is the derived group (see also\n\\cite{PlatonovSaromet}). It thus suffices to assume that $\\mathbf G$ is\nconnected and semisimple.\n\nIf $\\mathbf G$ is not simply connected then $C(S,\\mathbf G)$ can be infinite.\nSpecifically let $\\widetilde{\\mathbf G}$ be the simply connected covering group of\n$\\mathbf G$ and let $\\mathbf B = \\Ker (\\widetilde{\\mathbf G}\\to \\mathbf G)$. 
If all $k$-simple\ncomponents $\\mathbf H$ of $\\mathbf G$ satisfy $k_v\\text{-rank}\\: \\mathbf H > 0$ for some\n$v\\in S$, then $\\Coker(C(S,\\widetilde{\\mathbf G}) \\to C(S,\\mathbf G))$ will contain an\nisomorphic copy of the infinite group $\\mathbf B({\\mathbb A}_{k,S})\/\\mathbf B(k)$,\nwhere ${\\mathbb A}_{k,S}$ denotes the $S$-adeles of $k$\n\\citelist{\\cite{SerreBourbaki} \\cite{Ra1}}. Thus we will make the\nassumption that $\\mathbf G$ is simply connected.\n\nAny simply connected group is a direct product of almost $k$-simple groups,\nso we may assume $\\mathbf G$ is almost $k$-simple. We may then write $\\mathbf G =\\Res_{k'\/k}\n\\mathbf G'$, where $\\mathbf G'$ is an absolutely almost simple group over a finite\nextension $k'$ over $k$. Since $C(S,\\mathbf G) = C(S',\\mathbf G')$ where $S'$ consists\nof all places of $k'$ lying over places of $S$, we may assume that $\\mathbf G$ is\nconnected, simply connected and absolutely almost simple.\n\n\\subsection{Some known results}\n\\label{ssectKnownResults}\n\nThe congruence subgroup kernel has been considered extensively by many\nauthors; see the survey \\cite{PrasadRapinchukSurvey}. In particular, Bass,\nMilnor, and Serre \\cite{BMS} proved that $C(S, \\mathbf G)$ is finite for the\ngroups $\\SL_n$, $n\\ge 3$, and $\\SP_{2n}$, $n\\ge 2$; in fact they prove that\n$C(S,\\mathbf G)$ is trivial unless $k$ is totally imaginary and $S=S_\\infty$ in\nwhich case $C(S,\\mathbf G)\\cong \\mu(k)$, the roots of unity in $k$. Serre\n\\cite{se3} later treated the case $\\SL_2$ and obtained the same\ndetermination of $C(S,\\mathbf G)$ if $|S|\\ge 2$; if $|S|=1$ he proves that\n$C(S,\\mathbf G)$ is infinite.\n\nLet $S\\text{-rank}\\: \\mathbf G = \\sum_{v\\in S} k_v\\text{-rank}\\: \\mathbf G$.\nFor a global field $k$ (that is, a number field or a function field of an\nalgebraic curve over a finite field) Serre \\cite{se3} has conjectured%\n\\footnote{The hypothesis that $k_v\\text{-rank}\\: \\mathbf G>0$ for all $v\\in S\\setminus\n S_\\infty$ was not included in \\cite{se3} but is necessary\n\\cite{Ra1}*{p.~109 and (6.2)}.}\nthat if $\\mathbf G$ is simply connected and absolutely almost simple, then\n\\begin{equation}\n\\label{eqnSerreConjecture}\n\\text{$C(S,\\mathbf G)$ is finite if $S\\text{-rank}\\: \\mathbf G \\geq 2$ and $k_v\\text{-rank}\\: \\mathbf G>0$ for\n all $v\\in S\\setminus S_\\infty$.}\n\\end{equation}\nWhen $k$ is a number field, the main theorems in Raghunathan's papers\n\\citelist{\\cite{Ra1} \\cite{Ra2}} established the conjecture when $k\\text{-rank}\\: \\mathbf G\n> 0$ (see also \\cite{PrasadOnRaghunathan}). For a general global field,\nPrasad and Raghunathan \\cite{pr}*{Theorem~ 2.6} established the conjecture\nwhen $k\\text{-rank}\\: \\mathbf G > 0$ provided $C(S,\\mathbf G)$ is central in $\\widehat G(a)$; in\nfact they showed \\cite{pr}*{Theorems~ 2.9, 3.4} then that $C(S,\\mathbf G)$ is a\nquotient of $\\mu(k)$ provided in addition that the Kneser-Tits conjecture%\n\\footnote{Let $\\mathbf G(k)^+$ denote the subgroup of $\\mathbf G(k)$ generated by\n $k$-rational points of the unipotent radicals of the parabolic\n $k$-subgroups of $\\mathbf G$. The Kneser-Tits conjecture states that if $\\mathbf G$ is\n simply connected, almost $k$-simple, with $k\\text{-rank}\\: \\mathbf G>0$, then\n $\\mathbf G(k)^+=\\mathbf G(k)$.}\nholds for global fields. 
The centrality of $C(S,\\mathbf G)$ was proved when\n$k\\text{-rank}\\:\\mathbf G>0$ by Raghunathan \\citelist{\\cite{Ra1} \\cite{Ra2}} (again\nassuming that the Kneser-Tits conjecture holds) and the Kneser-Tits\nconjecture for global fields has since been demonstrated \\cite{Gille}.\nThus \\eqref{eqnSerreConjecture} holds for global fields when $k\\text{-rank}\\: \\mathbf G >\n0$; for the progress on groups with $k\\text{-rank}\\: \\mathbf G=0$ see the survey by\nRapinchuk \\cite{R2}.\n\nSerre \\cite{se3} also conjectures that $C(S,\\mathbf G)$ is infinite if $S\\text{-rank}\\: \\mathbf G\n= 1$ and verifies this for $\\mathbf G = \\SL_2$. In fact for $\\SL_2$ over ${\\mathbb Q}$,\n$C(S,\\mathbf G)$ is a free profinite group on a countable number of\ngenerators \\cite{Melnikov2}, and over a quadratic imaginary\nfield it has a finite index subgroup of this type \\cite{Lubotzky0}.\n\n\\subsection{Connection with elementary matrices}\n\\label{ssectElementaryMatrices}\n\nOur goal is a topological interpretation of the congruence subgroup kernel.\nFor this we will use the relationship of $C(S,\\mathbf G)$ with\n``elementary'' matrices. More precisely, for any $S$-arithmetic subgroup\n$\\Gamma$ let\n\\begin{equation*}\nE\\Gamma \\subset \\Gamma\n\\end{equation*}\nbe the subgroup generated by the elements of $\\Gamma$ belonging to the\nunipotent radical of any parabolic $k$-subgroup of $\\mathbf G$. As $\\Gamma$ runs\nthrough the family of\n$S$-congruence subgroups $\\Gamma({\\mathfrak a})$, we obtain a family\n$\\{E\\Gamma({\\mathfrak a})\\}_{{\\mathfrak a}\\subseteq {\\mathcal O}}$ of normal subgroups of $\\mathbf G({\\mathcal O})$ which define a\ntopology ${\\mathcal T}_e$ on $\\mathbf G({\\mathcal O})$. We denote by $\\widehat G({\\mathcal O},e)$ the\ncompletion of $\\mathbf G({\\mathcal O})$ in the topology ${\\mathcal T}_e$. For any ideal ${\\mathfrak a}\n\\subseteq {\\mathcal O}$ consider the exact sequence\n\\begin{equation*}\n1 \\ \\rightarrow\\ \\Gamma({\\mathfrak a})\/E\\Gamma({\\mathfrak a})\\ \\rightarrow\n\\ \\mathbf G({\\mathcal O})\/E\\Gamma({\\mathfrak a})\\ \\rightarrow\\ \\mathbf G({\\mathcal O})\/\\Gamma({\\mathfrak a})\n\\ \\rightarrow\\ 1\\ .\n\\end{equation*}\nTaking projective limits over the ideals ${\\mathfrak a}$ we obtain\n\\begin{equation*}\n1 \\ \\rightarrow\\ CG(e,c)\\ \\rightarrow\n\\widehat{G}({\\mathcal O},e)\\ \\rightarrow\\ \\widehat{G}({\\mathcal O},c) \\ \\rightarrow\\ 1\\ ,\n\\end{equation*}\nwhere $CG(e,c)$ is defined to be the kernel of the map on the right and\nRaghunathan's Main Lemma is used to prove that this map is surjective\n\\cite{Ra1}*{(1.21)}.\n\nAssume now that $k\\text{-rank}\\: \\mathbf G > 0$ and $S\\text{-rank}\\: \\mathbf G \\ge 2$. Then $E\\Gamma({\\mathfrak a})$ is\n$S$-arithmetic \\citelist{\\cite{Margulis} \\cite{Ra2}*{Theorem~ A, Corollary~\n 1}} (see also \\cite{Venkataramana}) and any $S$-arithmetic subgroup\n$\\Gamma$ contains $E\\Gamma({\\mathfrak a})$ for some ${\\mathfrak a}\\neq 0$ \\cite{Ra1}*{(2.1)}. 
So\nunder this condition, the topologies ${\\mathcal T}_e$ and ${\\mathcal T}_a$\nare the same,\n\\begin{equation*}\n\\widehat{G}({\\mathcal O},e)\\ \\cong\\ \\widehat{G}({\\mathcal O},a)\\ ,\n\\end{equation*}\nand thus\n\\begin{equation}\n\\label{eqnCongruenceKernel}\nC(S,\\mathbf G)\\ \\cong\\ CG(e,c)\\ \\cong\\ \\varprojlim_{\\mathfrak a}\n\\Gamma({\\mathfrak a})\/E\\Gamma({\\mathfrak a})\\ .\n\\end{equation}\nThis characterization of $C(S,\\mathbf G)$ will enable us to give a topological\nrealization.\n\n\\subsection{A topological realization of $C(S,\\mathbf G)$}\n\nIn this paper, our aim is to show that the algebraically and arithmetically\ndefined group $C(S,\\mathbf G)$ also has a topological interpretation as the\nfundamental group of certain compactifications of a locally symmetric\nspace. More precisely, we consider a connected, absolutely almost simple,\nsimply connected algebraic group $\\mathbf G$ defined over $k$. Let $\\mathbf H$\ndenote the restriction of scalars $\\operatorname{Res}_{k\/{\\mathbb Q}} \\mathbf G$ of $\\mathbf G$;\nthis is a group defined over ${\\mathbb Q}$ with ${\\mathbb Q}\\text{-rank}\\: \\mathbf H = k\\text{-rank}\\: \\mathbf G$.\nLet $X_\\infty =\\mathbf H({\\mathbb R})\/K$ be the symmetric space associated to\n$\\mathbf H$, where $K$ is a maximal compact subgroup of $\\mathbf H({\\mathbb R})$,\nand for $v\\in S\\setminus S_\\infty$, let $X_v$ be the Bruhat-Tits building\nof $\\mathbf G(k_v)$.\n\nConsider $X = X_\\infty \\times \\prod_{v\\in S\\setminus S_\\infty} X_v$. By\ngeneralizing the work of Borel and Serre \\citelist{\\cite{Borel-Serre}\n\\cite{BS2}} and of Zucker \\cite{Zu1}, we define in\n\\S\\S\\ref{subsectRBSarith}, \\ref{subsectRBSSarith} the reductive Borel-Serre\nbordification $\\overline{X}^{RBS}$ of $X$. For an $S$-arithmetic\nsubgroup $\\Gamma$ of $\\mathbf G(k)$, the action of $\\Gamma$ on $X$ by left translation\nextends to $\\overline{X}^{RBS}$ and the quotient\n$\\Gamma\\backslash\\overline{X}^{RBS}$ is a compact Hausdorff topological\nspace, called the \\emph{reductive Borel-Serre compactification} of\n$\\Gamma\\backslash X$. Our main result (Theorem ~\\ref{thmMainArithmetic}) is the\ncomputation of the fundamental group of\n$\\Gamma\\backslash\\overline{X}^{RBS}$. Under the mild condition that $\\Gamma$\nis a neat $S$-arithmetic group, we show (Corollary ~\\ref{corNeat}) that\n\\begin{equation}\n\\pi_1(\\Gamma\\backslash\\overline{X}^{RBS}) \\cong \\Gamma \/ E\\Gamma\n\\end{equation}\nIf $k\\text{-rank}\\: \\mathbf G >0$ and $S\\text{-rank}\\: \\mathbf G \\ge 2$ this is finite and we\nconclude from \\eqref{eqnCongruenceKernel} that \n\\begin{equation}\nC(S,\\mathbf G) \\cong \\varprojlim_{\\mathfrak a} \\pi_1(\\Gamma({\\mathfrak a})\\backslash\\overline{X}^{RBS}).\n\\end{equation}\nIn fact we show (Corollary ~\\ref{corIdentifyCSG}) that $C(S,\\mathbf G)$ is\nprecisely $\\pi_1(\\Gamma^*({\\mathfrak a})\\backslash\\overline{X}^{RBS})$ for ${\\mathfrak a}$ small,\nwhere $\\Gamma^*({\\mathfrak a})$, defined by Raghunathan \\cite{Ra1}, is the smallest\n$S$-congruence subgroup containing $E\\Gamma({\\mathfrak a})$.\n\nFrom the point of view of identifying the congruence subgroup kernel $C(S,\n\\mathbf G)$, we see that $\\Gamma\\backslash \\overline{X}^{RBS}$ is the most natural\ncompactification of $\\Gamma\\backslash X$. 
On the other hand, the Satake\ncompactifications of the locally symmetric space $\\Gamma\\backslash X_\\infty$\nare important as well, as mentioned at the beginning of this introduction.\nIn \\S\\ref{subsectSatakeSArith} we define compactifications $\\Gamma\\backslash\n{}_{{\\mathbb Q}}\\overline{X}^{\\tau}$ of $\\Gamma \\backslash X$ which generalize the\nSatake compactifications of $\\Gamma\\backslash X_\\infty$ and in\n\\S\\ref{sectFundGrpArithmetic} we calculate that their fundamental groups\nare a certain quotient of $\\pi_1(\\Gamma\\backslash\\overline{X}^{RBS})$.\n\n\\subsection{Connection to bounded generation}\nAlthough not directly addressed by this paper, we close this introduction\nby mentioning the relation of the congruence subgroup problem to the notion\nof bounded generation. A fundamental result of Borel and Harish-Chandra\n\\cite{BorelHarishChandra} is that arithmetic subgroups of algebraic groups\nare finitely generated. The proof of Borel and Harish-Chandra is in fact\nconstructive, and Grunewald and Segal \\cite{GrunewaldSegal1} have shown how\nto use it to find generators. If one assumes that the algebraic group is\nreductive then this result extends to $S$-arithmetic subgroups and in fact\n$S$-arithmetic subgroups of reductive algebraic groups are even finitely\npresented \\citelist{\\cite{BS2}*{Th\\'eor\\`eme~6.2} \\cite{GrunewaldSegal2}}.\nNote that $S$-arithmetic subgroups of a general algebraic group need not be\neven finitely generated. For example, ${\\mathbb Z}[1\/p]$ is a\n$\\{p,\\infty\\}$-arithmetic subgroup of $\\mathbf G_a$ over ${\\mathbb Q}$ and is not finitely\ngenerated.\n\nA finitely generated group $\\Gamma$ has \\emph{bounded generation} if there\nexist elements $\\gamma_1,\\gamma_2,\\dots,\\gamma_m\\in \\Gamma$ (not necessarily\ndistinct) such that any $\\gamma\\in \\Gamma$ can be written in the form\n\\begin{equation*}\n\\gamma= \\gamma_1^{k_1} \\dots \\gamma_m^{k_m}\n\\end{equation*}\nwith $k_1 , \\dots , k_m \\in {\\mathbb Z}$. The least possible value of $m$ is called\nthe \\emph{degree of bounded generation}.\n\nA free group on more than one generator does not have bounded generation.\nSince $\\SL_2({\\mathbb Z})$ contains a free group of finite index on two generators\n(for example, the commutator subgroup), it follows that it does not have\nbounded generation \\cite{Murty}*{\\S5}. Rapinchuk \\cite{R1} conjectures that\nif $\\mathbf G$ is simple and the $S$-rank of $\\mathbf G$ is $\\geq 2$, then $\\mathbf G({\\mathcal O})$\nhas bounded generation.\n\nThe relation between bounded generation and the congruence subgroup problem\nhas been clarified by recent work of Platonov and Rapinchuk\n\\cite{PlatonovRapinchuk2} and independently by Lubotzky \\cite{Lubotzky}.\nLet $T$ be the (finite) set of primes $v$ where $\\mathbf G(k_v)$ is anisotropic\nand assume that $S\\cap T=\\emptyset$. Suppose every non-central normal\nsubgroup of $\\mathbf G({\\mathcal O})$ is the inverse image of an open normal subgroup\nunder the map\n\\begin{equation*}\n\\mathbf G (k) \\to \\prod_{v\\in T} \\mathbf G(k_v) \\ .\n\\end{equation*}\nThen if $\\mathbf G({\\mathcal O})$ has bounded generation they prove that $C(S,\\mathbf G)$ is\nfinite.\n\nThus another way to establish that $C(S,\\mathbf G)$ is finite is to show that\n$\\mathbf G({\\mathcal O})$ has bounded generation. 
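(A trivial positive illustration of the definition, recorded only for orientation: the free abelian group ${\\mathbb Z}^n$ has bounded generation of degree $n$, with $\\gamma_1,\\dots,\\gamma_n$ the standard basis vectors.)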
For example, Tavgen\\cprime\\\n\\cite{tavgen} has established that $\\mathbf G({\\mathcal O})$ has bounded generation for\n$k$-simple groups $\\mathbf G$ which are quasi-split over $k$ with $k$-rank $\\ge 2$\n(except possibly for type ${}^6D_4$). In another direction, if $|S|$ is\nassumed sufficiently large (depending only on $[ k:{\\mathbb Q} ]$), Murty and\nLoukanidis have proved bounded generation for $\\SL_n({\\mathcal O})$, $n\\ge 2$, and\n$\\SP_{2n}({\\mathcal O})$, $n\\ge 1$; this work is announced in \\cite{Murty} and\npartially included in the thesis of Loukanidis \\cite{Loukanidis}. The\nproof, which uses analytic number theory, actually gives an explicit bound\non the degree of bounded generation depending only on $[k:{\\mathbb Q}]$; bounds on\nthe degree which depend also on the discriminant of $k$ have been obtained\npreviously by other authors.\n\n\\subsection{Other directions}\n\n\\subsubsection{Infinite $C(S,\\mathbf G)$}\n\nThis paper has focused on the case $S\\text{-rank}\\: \\mathbf G \\ge 2$ where Serre's\nconjecture says that $C(S,\\mathbf G)$ is finite. It would be interesting to\ninvestigate topological interpretations in the case $S\\text{-rank}\\: \\mathbf G = 1$ and\n$C(S,\\mathbf G)$ is infinite.\n\n\\subsubsection{Function fields}\n\nUsually the congruence subgroup problem is considered for algebraic groups\ndefined over global fields, not just algebraic number fields as considered\nhere. As noted in \\S\\ref{ssectKnownResults}, for $k$ a global field, the\ncongruence subgroup kernel $C(S, \\mathbf G)$ is finite for $\\mathbf G$ simply connected,\nabsolutely almost simple with $k\\text{-rank}\\: \\mathbf G >0$ and $S\\text{-rank}\\: \\mathbf G\\ge 2$. A\nnatural question is to give a topological interpretation in this case as\nwell. Here there are no infinite places so it seems plausible to consider\nthe fundamental group of suitable compactifications of an $S$-arithmetic\nquotient of the product of Bruhat-Tits buildings $\\prod_{v\\in S} X_{v}$.\nSeveral compactifications of Bruhat-Tits buildings have been considered: the\nBorel-Serre compactification in which the spherical Tits building is placed\nat infinity \\cite{BS2}; a polyhedral compactification due to Landvogt\n\\cite{Landvogt}; and compactifications\nassociated to linear representations \\citelist{\\cite{Werner}\n\\cite{RemyThuillierWernerI} \\cite{RemyThuillierWernerII}}. These last \ncompactifications are analogous to the Satake\ncompactifications of symmetric spaces and recover Landvogt's\ncompactification as a special case for the generic representation; thus\nLandvogt's compactification is analogous to the maximal Satake\ncompactification. It would be interesting to see if there is an analogy of\nSatake's theory of rational boundary components which would lead to\ncorresponding compactifications of the $S$-arithmetic quotients.\n\n\\section{The reductive Borel-Serre and Satake compactifications: the\n arithmetic case}\n\\label{sectCompactificationsArithmetic}\n\nIn order to establish notation and set the framework for later proofs, we\nrecall in \\S\\S\\ref{ssectBSarith}--\\ref{subsectSatakeArith} several natural\ncompactifications of the locally symmetric space $\\Gamma\\backslash X_\\infty$\nassociated to an arithmetic group $\\Gamma$; in each case a bordification of\n$X_\\infty$ is described on which $\\mathbf G(k)$ acts. We also examine the\nstabilizer subgroups of points in these bordifications. 
The case of\ngeneral $S$-arithmetic groups will be treated in\n\\S\\ref{sectCompactificationsSArithmetic}. Throughout the paper, $\\mathbf G$ will\ndenote a connected, absolutely almost simple, simply connected algebraic\ngroup defined over a number field $k$.\n\n\\subsection{Proper and discontinuous actions}\n\\label{ssectProperDiscontinuousActions}\nRecall \\cite{BourbakiTopologiePartOne}*{III, \\S4.4, Prop.~7} that a\ndiscrete group $\\Gamma$ acts \\emph{properly} on a Hausdorff space $Y$ if and\nonly if for all $y$, $y'\\in Y$, there exist neighborhoods $V$ of $y$ and\n$V'$ of $y'$ such that $\\gamma V\\cap V'\\neq \\emptyset$ for only finitely\nmany $\\gamma \\in \\Gamma$. We will also need the following weaker condition on\nthe group action:\n\n\\begin{defi}[\\cite{Gro}*{Definition~1}]\n\\label{defnDiscontinuous}\nThe action of a discrete group $\\Gamma$ on a topological space $Y$ is\n\\emph{discontinuous} if\n\\begin{enumerate}\n\\item\\label{itemDiscontinuousTwoPoints} for all $y$, $y'\\in Y$ with\n $y'\\notin \\Gamma y$ there exist neighborhoods $V$ of $y$ and $V'$ of $y'$\n such that $\\gamma V\\cap V' =\\emptyset$ for all $\\gamma\\in \\Gamma$, and\n\\item\\label{itemDiscontinuousOnePoint} for all $y\\in Y$ there exists a\n neighborhood $V$ of $y$ such that $\\gamma V\\cap V = \\emptyset$ for\n $\\gamma \\notin \\Gamma_y$ and $\\gamma V = V$ for $\\gamma \\in \\Gamma_y$.\n\\end{enumerate}\n\\end{defi}\n\nIt is easy to check that a group action is proper if and only if it is\ndiscontinuous and the stabilizer subgroup $\\Gamma_y$ is finite for all $y\\in\nY$.\n\n\\subsection{The locally symmetric space associated to an arithmetic subgroup}\nLet $S_{\\infty}$ be the set of all\ninfinite places of $k$. For each $v\\in S_\\infty$, let $k_{v}$ be the\ncorresponding completion of $k$ with respect to a norm associated with $v$;\nthus either $k_{v}\\cong {\\mathbb R}$ or $k_{v}\\cong {\\mathbb C}$. For each $v\\in\nS_{\\infty}$, $\\mathbf G(k_{v})$ is a (real) Lie group.\n\nDefine $G_{\\infty}=\\prod_{v\\in S_{\\infty}}\\mathbf G(k_{v})$, a semisimple Lie\ngroup with finitely many connected components. Fix a maximal compact\nsubgroup $K$ of $G_{\\infty}$. When endowed with a $G_{\\infty}$-invariant metric,\n$X_\\infty = G_{\\infty}\/K$ is a Riemannian symmetric space of noncompact\ntype and is thus contractible. Embed $\\mathbf G(k)$ into $G_\\infty$ diagonally.\nThen any arithmetic subgroup $\\Gamma\\subset \\mathbf G(k)$ is a discrete subgroup of\n$G_\\infty$ and acts properly on $X_\\infty$. It is known that the quotient\n$\\Gamma\\backslash X_\\infty$ is compact if and only if the $k$-rank of $\\mathbf G$ is\nequal to 0. 
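To fix ideas with a standard example, recalled only for orientation: for $\\mathbf G=\\SL_2$ over $k={\\mathbb Q}$ one has $k\\text{-rank}\\: \\mathbf G=1$, the symmetric space $X_\\infty$ is the upper half-plane, and $\\SL_2({\\mathbb Z})\\backslash X_\\infty$ is the familiar noncompact modular curve, whereas for the reduced norm one group of a quaternion division algebra over ${\\mathbb Q}$ the $k$-rank is $0$ and the quotient is compact. 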
In the following, we assume that the $k$-rank of $\\mathbf G$ is\npositive so that $\\Gamma\\backslash X_\\infty$ is noncompact.\n\nSince the theory of compactifications of locally symmetric spaces is\nusually expressed in terms of algebraic groups defined over ${\\mathbb Q}$, let\n$\\mathbf H=\\operatorname{Res}_{k\/{\\mathbb Q}}\\mathbf G$ be the algebraic group defined over\n${\\mathbb Q}$ obtained by restriction of scalars; it satisfies\n\\begin{equation}\n\\label{eqnPointsOfRestrictionScalars}\n\\mathbf H({\\mathbb Q})=\\mathbf G(k) \\quad\\text{and}\\quad \\mathbf H(\\mathbb R)=G_{\\infty}\\ .\n\\end{equation}\nThe space $X_{\\infty}$ can be identified with the symmetric space of\nmaximal compact subgroups of $\\mathbf H(\\mathbb R)$, $X_{\\infty}=\\mathbf\nH(\\mathbb R)\/K$, and the arithmetic subgroup $\\Gamma\\subset \\mathbf G(k)$ corresponds\nto an arithmetic subgroup $\\Gamma\\subset \\mathbf H({\\mathbb Q})$. Restriction of\nscalars yields a one-to-one correspondence between parabolic $k$-subgroups\nof $\\mathbf G$ and parabolic ${\\mathbb Q}$-subgroups of $\\mathbf H$ so that the analogue of\n\\eqref{eqnPointsOfRestrictionScalars} is satisfied.\n\n\\subsection{The Borel-Serre compactification}\n\\label{ssectBSarith}\n(For details see the original paper \\cite{Borel-Serre}, as well as\n\\cite{Borel-Ji}.) For each parabolic ${\\mathbb Q}$-subgroup $\\P$ of $\\mathbf H$,\nconsider the Levi quotient $\\mathbf L_{\\P} = \\P\/\\mathbf N_{\\P}$ where $\\mathbf N_{\\P}$ is\nthe unipotent radical of $\\P$. This is a reductive group defined over\n${\\mathbb Q}$. There is an almost direct product $\\mathbf L_{\\P} = \\mathbf S_{\\P}\n\\cdot \\mathbf M_{\\P}$, where $\\mathbf S_{\\P}$ is the maximal ${\\mathbb Q}$-split\ntorus in the center of $\\mathbf L_{\\P}$ and $\\mathbf M_{\\P}$ is the\nintersection of the kernels of the squares of all characters of $\\mathbf\nL_{\\P}$ defined over ${\\mathbb Q}$. The real locus $L_P= \\mathbf L_\\P({\\mathbb R})$ has a\ndirect product decomposition $A_P \\cdot M_P$, where $A_P = \\mathbf\nS_\\P({\\mathbb R})^0$ and $M_P = \\mathbf M_\\P({\\mathbb R})$. The dimension of $A_P$ is called\nthe \\emph{parabolic ${\\mathbb Q}$-rank} of $\\P$.\n\nThe real locus $P=\\P({\\mathbb R})$ has a Langlands decomposition\n\\begin{equation}\\label{rationalLanglands}\nP=N_{P} \\ltimes (\\widetilde A_P \\cdot \\widetilde M_P),\n\\end{equation}\nwhere $N_{P}= \\mathbf N_{\\P}({\\mathbb R})$ and $\\widetilde A_P \\cdot \\widetilde M_P$ is\nthe lift of $A_P \\cdot M_P$ to the unique Levi subgroup of $P$ which is\nstable under the Cartan involution $\\theta$ associated with $K$.\n\nSince $P$ acts transitively on $X_\\infty$, the Langlands decomposition induces a\nhorospherical decomposition\n\\begin{equation}\\label{horo}\nX_\\infty \\cong A_P\\times N_{P}\\times X_P,\\quad u\\tilde a\\tilde mK \\mapsto\n(\\tilde a,u,\\tilde m(K\\cap \\widetilde M_P)),\n\\end{equation}\nwhere \n\\begin{equation*}\nX_P= \\widetilde M_P \/ (K \\cap \\widetilde M_P) \\cong L_P\/(A_P\\cdot K_P)\n\\end{equation*}\nis a symmetric space (which might contain a Euclidean factor) and is\ncalled the \\emph{boundary symmetric space associated with $\\P$}. 
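As a quick illustration, included only for orientation: for $\\mathbf H=\\SL_2$ over ${\\mathbb Q}$ and $\\P$ a Borel subgroup, $X_\\infty$ is the upper half-plane, $N_{P}$ acts by horizontal translations, $A_P$ acts by dilations, and the boundary symmetric space $X_P$ reduces to a single point. 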
The\nsecond expression for $X_P$ is preferred since $\\mathbf L_\\P$ is defined\nover ${\\mathbb Q}$; here $K_P\\subseteq \\mathbf M_\\P({\\mathbb R})$ corresponds to $K \\cap\n\\widetilde M_P$\n\nFor each parabolic ${\\mathbb Q}$-subgroup $\\P$ of $\\mathbf H$, define the\nBorel-Serre boundary component\n\\begin{equation*}\ne(P)=N_{P}\\times X_P\n\\end{equation*}\nwhich we view as the quotient of $X_\\infty$ obtained by collapsing the\nfirst factor in \\eqref{horo}. The action of $P$ on $X_\\infty$ descends to\nan action on $e(P)=N_{P}\\times X_P$ given by\n\\begin{equation}\n\\label{eqnBoundaryAction}\np\\cdot (u, y) = (pu\\tilde m_p^{-1}\\tilde a_p^{-1} , \\tilde a_p \\tilde m_p\ny), \\qquad \\text{for $p=u_p \\tilde a_p \\tilde m_p\\in P$.}\n\\end{equation}\nDefine the Borel-Serre partial compactification $\\overline{X}_\\infty^{BS}$ (as a\n\\emph{set}) by\n\\begin{equation}\n\\label{BSPartialCompactification}\n\\overline{X}_\\infty^{BS}=X_\\infty\\cup \\coprod_{\\P\\subset \\mathbf H} e(P).\n\\end{equation}\n\nLet $\\Delta_P$ be the simple ``roots'' of the adjoint action of $A_P$ on\nthe Lie algebra of $N_P$ and identify $A_P$ with $({\\mathbb R}^{>0})^{\\Delta_P}$ by\n$a \\mapsto (a^{-\\alpha})_{\\alpha\\in\\Delta_P}$. Enlarge $A_P$ to the\ntopological semigroup $\\overline A_P \\cong ({\\mathbb R}^{\\ge0})^{\\Delta_P}$ by\nallowing $a^\\alpha$ to attain infinity and define\n\\begin{equation*}\n\\overline A_P(s) = \\{\\, a\\in \\overline A_P\\mid a^{-\\alpha} < s^{-1} \\text{\n for all $\\alpha\\in \\Delta_P$}\\,\\}\\cong [0,s^{-1})^{\\Delta_P}\\ ,\\qquad\n \\text{for $s>0$}\\ .\n\\end{equation*}\nSimilarly enlarge the Lie algebra ${\\mathfrak a}_P \\subset \\overline{\\mathfrak a}_P$. The\ninverse isomorphisms $\\exp\\colon {\\mathfrak a}_P \\to A_P$ and $\\log\\colon A_P \\to\n{\\mathfrak a}_P$ extend to isomorphisms\n\\begin{equation*}\n\\overline A_P \\overset{\\log}{\\longrightarrow} \\overline {\\mathfrak a}_P\n\\qquad\\text{and} \\qquad \\overline {\\mathfrak a}_P \\overset{\\exp}{\\longrightarrow}\n\\overline A_P.\n\\end{equation*}\n\nTo every parabolic ${\\mathbb Q}$-subgroup $\\mathbf Q\\supseteq \\P$ there corresponds\na subset $\\Delta_P^Q \\subseteq \\Delta_P$ and we let $o_Q\\in\n\\overline A_P$ be the point with coordinates $o_Q^{-\\alpha} =1$\nfor $\\alpha\\in \\Delta_P^Q$ and $o_Q^{-\\alpha} =0$ for\n$\\alpha\\notin \\Delta_P^Q$. Then $\\overline A_P = \\coprod_{\\mathbf Q\n\\supseteq \\P} A_P \\cdot o_Q$ is the decomposition into\n$A_P$-orbits.\n\nDefine the \\emph{corner associated to $\\mathbf P$} to be\n\\begin{equation}\n\\label{Pcorner}\nX_\\infty(P) = \\overline A_P \\times e(P) = \\overline A_P \\times N_P \\times X_P.\n\\end{equation}\nWe identify $e(Q)$ with the subset $ (A_P\\cdot o_Q) \\times N_P\\times X_P$.\nIn particular, $e(P)$ is identified with the subset $\\{o_P\\}\\times\nN_P\\times X_P$ and $X_\\infty$ is identified with the open subset $A_P \\times\nN_P\\times X_P \\subset X_\\infty(P)$ (compare \\eqref{horo}). Thus we have a\nbijection\n\\begin{equation}\n\\label{strataPcorner}\nX_\\infty(P) \\cong X_\\infty \\cup \\coprod_{\\P \\subseteq \\mathbf Q \\subset\n \\mathbf H} e(Q).\n\\end{equation}\n\nNow give $\\overline X_\\infty^{BS}$ the finest topology so that for all\nparabolic ${\\mathbb Q}$-subgroups $\\P$ of $\\mathbf H$ the inclusion of\n\\eqref{strataPcorner} into \\eqref{BSPartialCompactification} is a\ncontinuous inclusion of an open subset. 
Under this topology, a sequence\n$x_n\\in X$ converges in $\\overline X_\\infty^{BS}$ if and only if there\nexists a parabolic ${\\mathbb Q}$-subgroup $\\P$ such that if we write $x_n=(a_n, u_n,\ny_n)$ according to the decomposition of \\eqref{horo}, then $(u_n,y_n)$\nconverges to a point in $e(P)$ and $a_n^\\alpha\\to \\infty$ for all\n$\\alpha\\in \\Delta_P$. The space $\\overline X_\\infty^{BS}$ is a manifold\nwith corners. It has the same homotopy type as $X_\\infty$ and is thus\ncontractible \\cite{Borel-Serre}.\n\nThe action of $\\mathbf H({\\mathbb Q})$ on $X_\\infty$ extends to a continuous action\non $\\overline{X}_\\infty^{BS}$ which permutes the boundary components:\n$g\\cdot e(P) = e(gPg^{-1})$ for $g\\in \\mathbf H({\\mathbb Q})$. The normalizer of\n$e(P)$ is $\\P({\\mathbb Q})$ which acts according to \\eqref{eqnBoundaryAction}.\n\nIt is shown in \\cite{Borel-Serre} that the action of $\\Gamma$ on\n$\\overline{X}_\\infty^{BS}$ is proper and the quotient $\\Gamma\\backslash\n\\overline{X}_\\infty^{BS}$, the \\emph{Borel-Serre compactification}, is a compact\nHausdorff space. It is a manifold with corners if $\\Gamma$ is torsion-free.\n\n\\subsection{The reductive Borel-Serre compactification}\n\\label{subsectRBSarith}\nThis compactification was first constructed by Zucker \\cite{Zu1}*{\\S4} (see also\n\\cite{GHM}). For each parabolic ${\\mathbb Q}$-subgroup $\\P$ of $\\mathbf H$, define\nits reductive Borel-Serre boundary component $\\hat{e}(P)$ by\n\\begin{equation*}\n\\hat{e}(P)=X_P\n\\end{equation*}\nand set\n\\begin{equation*}\n\\overline{X}_\\infty^{RBS}=X_\\infty\\cup \\coprod_{\\P} \\hat{e}(P).\n\\end{equation*}\nThe projections $p_P\\colon e(P) = N_P\\times X_P \\to \\hat e(P) = X_P$ induce\na surjection $p\\colon \\overline{X}_\\infty^{BS} \\to\n\\overline{X}_\\infty^{RBS}$ and we give $\\overline{X}_\\infty^{RBS}$ the\nquotient topology. Its topology can also be described in terms of\nconvergence of interior points to the boundary points via the horospherical\ndecomposition in equation \\eqref{horo}. Note that $\\overline{X}^{RBS}$ is\nnot locally compact, although it is compactly generated (being a Hausdorff\nquotient of the locally compact space $\\overline{X}^{BS}$). The action of\n$\\mathbf H({\\mathbb Q})$ on $\\overline{X}_\\infty^{BS}$ descends to a continuous\naction on $\\overline{X}_\\infty^{RBS}$.\n\n\\begin{lem}\n\\label{lemStabilizersRBS}\nLet $\\P$ be a parabolic ${\\mathbb Q}$-subgroup of $\\mathbf H$.\nThe stabilizer $\\mathbf H({\\mathbb Q})_z= \\mathbf G(k)_z$ of $z\\in X_P$ under the action of\n$\\mathbf H({\\mathbb Q})$ on $\\overline{X}^{RBS}_\\infty$ satisfies a short exact sequence\n\\begin{equation*}\n1 \\to \\mathbf N_{\\P}({\\mathbb Q}) \\to \\mathbf H({\\mathbb Q})_z \\to \\mathbf L_{\\P}({\\mathbb Q})_z \\to 1\n\\end{equation*}\nwhere $\\mathbf L_{\\P}({\\mathbb Q})_z$ is the stabilizer of $z$ under the action of\n$\\mathbf L_{\\P}({\\mathbb Q})$ on $X_P$.\n\\end{lem}\n\\begin{proof}\nThe normalizer of $X_P$ under the action of $\\mathbf H({\\mathbb Q})$ is $\\P({\\mathbb Q})$\nwhich acts via its quotient $\\mathbf L_{\\P}({\\mathbb Q})$. \n\\end{proof}\n\nBy the lemma, the action of $\\Gamma$ on $\\overline{X}_\\infty^{RBS}$ is not\nproper since the stabilizer of a boundary point in $X_P$ contains the\ninfinite group $\\Gamma_{N_P} = \\Gamma\\cap N_P$. 
Nonetheless\n\\begin{lem}\n\\label{lemRBSDiscontinuous}\nThe action of an arithmetic subgroup $\\Gamma$ on $\\overline{X}_\\infty^{RBS}$\nis discontinuous and the arithmetic quotient\n$\\Gamma\\backslash\\overline{X}_\\infty^{RBS}$ is a compact Hausdorff space.\n\\end{lem}\n\n\\begin{proof}\nWe begin by verifying Definition\n~\\ref{defnDiscontinuous}\\ref{itemDiscontinuousOnePoint}. Let $x\\in X_P\n\\subseteq \\overline{X}_\\infty^{RBS}$. Set $\\Gamma_P = \\Gamma\\cap P$ and\n$\\Gamma_{L_P} = \\Gamma_P\/\\Gamma_{N_P}$. Since $\\Gamma_{L_P}$ acts properly on $X_P$\nthere exists a neighborhood $O_x$ of $x$ in $X_P$ such that $\\bar \\gamma\nO_x \\cap O_x \\neq \\emptyset$ if and only if $\\bar \\gamma \\in \\Gamma_{L_P,x}$,\nin which case $\\bar \\gamma O_x = O_x$. We can assume $O_x$ is relatively\ncompact. Set $V=p(\\overline{A}_P(s)\\times N_P \\times O_x)$, where we choose\n$s$ sufficiently large so that the only identifications induced by $\\Gamma$\non $V$ already arise from $\\Gamma_P$ \\cite{Zu3}*{(1.5)}. Thus $\\gamma V\\cap\nV\\neq \\emptyset$ if and only if $\\gamma \\in \\Gamma_P$ and $\\gamma \\Gamma_{N_P}\n\\in \\Gamma_{L_P,x}$; by Lemma ~\\ref{lemStabilizersRBS} this occurs if and only\nif $\\gamma \\in \\Gamma_x$ as desired.\n\nTo verify Definition\n~\\ref{defnDiscontinuous}\\ref{itemDiscontinuousTwoPoints} we will show the\nequivalent condition that $\\Gamma\\backslash\\overline{X}_\\infty^{RBS}$ is\nHausdorff (compare \\cite{Zu1}*{(4.2)}). Compactness will follow since it\nis the image of a compact space under the induced projection $p'\\colon\n\\Gamma\\backslash \\overline{X}_\\infty^{BS} \\to \\Gamma\\backslash\n\\overline{X}_\\infty^{RBS}$. Observe that $p'$ is a quotient map and that\nits fibers, each being homeomorphic to $\\Gamma_{N_P}\\backslash N_P$ for some\n$\\P$, are compact. For $y\\in \\Gamma\\backslash\\overline{X}_\\infty^{RBS}$ and\n$W$ a neighborhood of $p'^{-1}(y)$, we claim there exists $U\\ni y$ open\nsuch that $p'^{-1}(U)\\subseteq W$. This suffices to establish that the quotient is\nHausdorff, for if $y_1\\neq y_2 \\in \\Gamma\\backslash\\overline{X}_\\infty^{RBS}$ and $W_1$\nand $W_2$ are disjoint neighborhoods of the compact fibers $p'^{-1}(y_1)$\nand $p'^{-1}(y_2)$, there must exist $U_1$ and $U_2$, neighborhoods of\n$y_1$ and $y_2$, such that $p'^{-1}(U_i) \\subseteq W_i$ and hence $U_1\\cap\nU_2 =\\emptyset$.\n\nTo prove the claim, choose $x\\in X_P$ such that $y=\\Gamma x$. Let $q\\colon\n\\overline{X}_\\infty^{BS} \\to \\Gamma\\backslash \\overline{X}_\\infty^{BS} $ be\nthe quotient map. The compact fiber $p'^{-1}(y)$ may be covered by\nfinitely many open subsets $q(\\overline A_P(s_\\mu)\\times C_{P,\\mu} \\times\nO_{P,\\mu}) \\subseteq W$ where $C_{P,\\mu} \\subseteq N_P$ and $x\\in\nO_{P,\\mu}\\subseteq X_P$. Define a neighborhood $V$ of the fiber by\n\\begin{equation*}\np'^{-1}(y) \\subset V = q(\\overline A_P(s)\\times C_{P}\n\\times O_{P}) \\subseteq W\n\\end{equation*}\nwhere $s = \\max \\,s_\\mu$, $O_P = \\bigcap O_{P,\\mu}$, and $C_P = \\bigcup C_{P,\\mu}$.\nSince $\\Gamma_{N_P}C_P = N_P$, we see $V=p'^{-1}(U)$ for some $U\\ni y$ as\ndesired.\n\\end{proof}\n\n\\subsection{Satake compactifications}\n\\label{subsectSatakeArith}\nFor arithmetic quotients of $X_\\infty$, the Satake compactifications\n$\\Gamma\\backslash {}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}$ form an important family of\ncompactifications. When $X_\\infty$ is Hermitian, one example is the Baily-Borel\nSatake compactification. 
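The classical case to keep in mind, recalled only for orientation, is $\\mathbf H=\\SL_2$ over ${\\mathbb Q}$: the Baily-Borel construction then adjoins the rational cusps ${\\mathbb P}^1({\\mathbb Q})$ to the upper half-plane, and the resulting quotient is the compactified modular curve. 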
The construction has three steps.\n\\begin{enumerate}\n\\item Begin%\n\\footnote{Here we follow \\cite{Cass} in beginning with a spherical\n representation. Satake's original construction \\cite{sat1} started with a\n non-spherical representation but then constructed a spherical\n representation by letting $G_\\infty$ act on the space of self-adjoint\n endomorphisms of $V$ with respect to an admissible inner product. See\n \\cite{sap2} for the relation of the two constructions.}\nwith a representation $(\\tau,V)$ of $\\mathbf H$ which has a nonzero\n$K$-fixed vector $v\\in V$ (a \\emph{spherical representation}) and which is\nirreducible and nontrivial on each noncompact ${\\mathbb R}$-simple factor of\n$\\mathbf H$. Define the Satake compactification $\\overline{X}_\\infty^{\\tau}$\nof $X$ to be the closure of the image of the embedding $X_\\infty \\hookrightarrow\n\\mathbb P(V)$, $gK \\mapsto [ \\tau(g) v]$. The action of $G_\\infty$ extends\nto a continuous action on $\\overline{X}_\\infty^{\\tau}$ and the set of points\nfixed by $N_P$, where $\\P$ is any parabolic ${\\mathbb R}$-subgroup, is called a\n\\emph{real boundary component}. The compactification\n$\\overline{X}_\\infty^{\\tau}$ is the disjoint union of its real boundary\ncomponents.\n\n\\item Define a partial compactification\n ${}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}\\subseteq \\overline{X}_\\infty^{\\tau}$\n by taking the union of $X_\\infty$ and those real boundary components that\n meet the closure of a Siegel set. Under the condition that\n $\\overline{X}_\\infty^{\\tau}$ is \\emph{geometrically rational}\n \\cite{Cass}, this is equivalent to considering those real boundary\n components whose normalizers are parabolic ${\\mathbb Q}$-subgroups; call these the\n \\emph{rational boundary components}. Instead of the subspace topology\n induced from $\\overline{X}_\\infty^{\\tau}$, give\n ${}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}$ the Satake topology \\cite{sat2}.\n\n\\item Still under the condition that $\\overline{X}_\\infty^{\\tau}$ is\n geometrically rational, one may show that the arithmetic subgroup $\\Gamma$\n acts continuously on ${}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}$ with a\n compact Hausdorff quotient, $\\Gamma\\backslash\n {}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}$. This is the \\emph{Satake\n compactification} of $\\Gamma\\backslash X_\\infty$.\n\\end{enumerate}\n\nThe geometric rationality condition above always holds if the\nrepresentation $(\\tau,V)$ is rational over ${\\mathbb Q}$ \\cite{sap2}. It also holds\nfor the Baily-Borel Satake compactification \\cite{BB}, as well as most\nequal-rank Satake compactifications including all those where ${\\mathbb Q}\\text{-rank}\\:\n\\mathbf H >2$.\n\nWe will now describe an alternate construction of\n${}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}$ due to Zucker \\cite{Zu2}. Instead of\nthe Satake topology, Zucker gives ${}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}$ the\nquotient topology under a certain surjection $\\overline{X}_\\infty^{RBS}\n\\to {}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}$ described below. It is this\ntopology we will use in this paper. Zucker proves that the resulting two\ntopologies on $\\Gamma\\backslash {}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}$ coincide.\n\nLet $(\\tau,V)$ be a spherical representation as above. We assume that\n$\\overline{X}_\\infty^{\\tau}$ is geometrically rational. 
For any parabolic\n${\\mathbb Q}$-subgroup $\\P$ of $\\mathbf H$, let $X_{P,\\tau}\\subseteq \\overline\nX_\\infty^{\\tau}$ be the real boundary component fixed pointwise by $N_P$;\ngeometric rationality implies that $X_{P,\\tau}$ is actually a rational\nboundary component. The transitive action of $P$ on $X_{P,\\tau}$ descends\nto an action of $L_P = P\/N_P$. The geometric rationality condition ensures\nthat there exists a normal ${\\mathbb Q}$-subgroup $\\mathbf L_{\\P, \\tau} \\subseteq\n\\mathbf L_{\\P}$ with the property that $L_{P,\\tau}= \\mathbf L_{\\P,\n \\tau}({\\mathbb R})$ is contained in the centralizer\n$\\operatorname{Cent}(X_{P,\\tau})$ of $X_{P,\\tau}$ and\n$\\operatorname{Cent}(X_{P,\\tau})\/L_{P,\\tau}$ is compact. Then $X_{P,\\tau}$\nis the symmetric space associated to the ${\\mathbb Q}$-group $\\mathbf H_{\\P,\n\\tau} = \\mathbf L_{\\P} \/ \\mathbf L_{\\P,\\tau}$. There is an\nalmost direct product decomposition\n\\begin{equation}\n\\label{eqnSatakeLeviDecomposition}\n\\mathbf L_{\\P} = \\widetilde {\\mathbf H}_{\\P, \\tau} \\cdot \\mathbf L_{\\P,\n \\tau}\\ ,\n\\end{equation}\nwhere $\\widetilde {\\mathbf H}_{\\P, \\tau}$ is a lift of $\\mathbf H_{\\P,\n \\tau}$; the root systems of these factors may be described using the\nhighest weight of $\\tau$. We obtain a decomposition of symmetric spaces\n\\begin{equation}\n\\label{eqnBoundaryDecomposition}\nX_P = X_{P,\\tau} \\times W_{P,\\tau}\\ .\n\\end{equation}\n\nDifferent parabolic ${\\mathbb Q}$-subgroups can yield the same rational boundary\ncomponent $X_{P,\\tau}$; if $\\P^\\dag$ is the maximal such parabolic\n${\\mathbb Q}$-subgroup, then $P^\\dag=\\P^\\dag({\\mathbb R})$ is the normalizer of $X_{P,\\tau}$.\nThe parabolic ${\\mathbb Q}$-subgroups that arise as the normalizers of rational\nboundary components are called \\emph{$\\tau$-saturated}. For example, all\nparabolic ${\\mathbb Q}$-subgroups are saturated for the maximal Satake\ncompactification, while only the maximal parabolic ${\\mathbb Q}$-subgroups are\nsaturated for the Baily-Borel Satake compactification when $\\mathbf H$ is\n${\\mathbb Q}$-simple. In general, the class of $\\tau$-saturated parabolic\n${\\mathbb Q}$-subgroups can be described in terms of the highest weight of $\\tau$.\n\nDefine \n\\begin{equation*}\n{}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}=X_\\infty\\cup \\coprod_{\\text{$\\mathbf Q$\n $\\tau$-saturated}} X_{Q,\\tau}\\ .\n\\end{equation*}\nA surjection $p\\colon \\overline{X}_\\infty^{RBS} \\to\n{}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}$ is obtained by mapping $X_P$ to\n$X_{P,\\tau} = X_{P^\\dag,\\tau}$ via the projection on the first factor in\n\\eqref{eqnBoundaryDecomposition}. Give ${}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}$\nthe resulting quotient topology; the action of $\\mathbf H({\\mathbb Q})$ on\n$\\overline{X}_\\infty^{RBS}$ descends to a continuous action on\n${}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}$. \n\nLet $\\P_\\tau$ be the inverse image of $\\mathbf L_{\\P,\\tau}$ under\nthe projection $\\P \\to \\P\/\\mathbf N_{\\P}$.\n\\begin{lem}\n\\label{lemStabilizersSatake}\nLet $\\P$ be a $\\tau$-saturated parabolic ${\\mathbb Q}$-subgroup of $\\mathbf H$. 
The\nstabilizer $\\mathbf H({\\mathbb Q})_z = \\mathbf G(k)_z$ of $z\\in X_{P,\\tau}$ under the\naction of $\\mathbf H({\\mathbb Q})$ on \n${}_{{\\mathbb Q}}\\overline{X}^{\\tau}_\\infty$ satisfies a short exact sequence\n\\begin{equation*}\n1 \\to \\P_{\\tau}({\\mathbb Q}) \\to \\mathbf H({\\mathbb Q})_z \\to \\mathbf H_{\\P,\\tau}({\\mathbb Q})_z \\to 1,\n\\end{equation*}\nwhere $\\mathbf H_{\\P,\\tau}({\\mathbb Q})_z$ is the stabilizer of $z$ under the action\nof $\\mathbf H_{\\P,\\tau}({\\mathbb Q})$ on $X_{P,\\tau}$.\n\\end{lem}\n\\begin{proof}\nAs in the proof of Lemma ~\\ref{lemStabilizersRBS}, the normalizer of\n$X_{P,\\tau}$ is $\\P({\\mathbb Q})$ which acts via its quotient $\\P({\\mathbb Q})\/\\P_\\tau({\\mathbb Q}) =\n\\mathbf H_{\\P,\\tau}({\\mathbb Q})$.\n\\end{proof}\n\nSimilarly to $\\overline{X}^{RBS}$, the space\n${}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}$ is not locally compact and $\\Gamma$ does\nnot act properly. Nonetheless one has the\n\\begin{lem}\n\\label{lemSatakeDiscontinuous}\nThe action of an arithmetic subgroup $\\Gamma$ on ${}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}$\nis discontinuous and the arithmetic quotient\n$\\Gamma\\backslash {}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}$ is a compact Hausdorff space.\n\\end{lem}\n\nThe proof is similar to Lemma ~\\ref{lemRBSDiscontinuous} since the fibers\nof $p'$ are again compact, being reductive Borel-Serre compactifications of\nthe $W_{P^\\dag,\\tau}$. The \\emph{Satake compactification} of\n$\\Gamma\\backslash X_\\infty$ associated to $\\tau$ is $\\Gamma\\backslash\n{}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}$.\n\nIn the case when the representation $\\tau$ is generic one obtains the\nmaximal Satake compactification $\\overline{X}_\\infty^{\\max}$. This is\nalways geometrically rational and the associated\n${}_{\\mathbb Q}\\overline{X}_\\infty^{\\max}$ is very similar to\n$\\overline{X}_\\infty^{RBS}$. Indeed in this case $X_P = X_{P,\\tau} \\times\n({}_{\\mathbb R} A_{P}\/A_{P})$, where ${}_{\\mathbb R} A_{P}$ is defined like $A_P$ but using a\nmaximal ${\\mathbb R}$-split torus instead of a maximal ${\\mathbb Q}$-split torus, and the\nquotient map simply collapses the Euclidean factor ${}_{\\mathbb R} A_{P}\/A_{P}$ to a\npoint. In particular, if ${\\mathbb Q}\\text{-rank }\\mathbf H = {\\mathbb R}\\text{-rank }\n\\mathbf H$, then $\\Gamma\\backslash {}_{\\mathbb Q}\\overline{X}_\\infty^{\\max} \\cong\n\\Gamma\\backslash\\overline{X}_\\infty^{RBS}$.\n\n\\section{The Bruhat-Tits buildings}\n\\label{sectBruhatTitsBuildings}\n\nFor a finite place $v$, let $k_{v}$ be the completion of $k$ with respect\nto a norm associated with $v$. Bruhat and Tits \\citelist{\\cite{BruhatTits1}\n \\cite{BruhatTits2}} constructed a building $X_{v}$ which reflects the\nstructure of $\\mathbf G(k_{v})$. The building $X_{v}$ is made up of subcomplexes\ncalled \\emph{apartments} corresponding to the maximal $k_{v}$-split tori in\n$\\mathbf G$ and which are glued together by the action of $\\mathbf G(k_{v})$. 
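The simplest example, recalled only for orientation: for $\\mathbf G=\\SL_2$ and $k_{v}={\\mathbb Q}_p$, the building $X_{v}$ is the $(p+1)$-regular tree and its apartments are the doubly infinite geodesic lines; this is the case treated in detail in \\cite{SerreTrees}. 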
We give an\noutline of the construction here together with the properties of $X_{v}$\nwhich are needed in the sections below; in addition to the original papers,\nwe benefited greatly from\n\\citelist{\\cite{ji}*{\\S3.2}\\cite{Landvogt}\\cite{Tits}}.\n\nIn this section we fix a finite place $v$ and a corresponding discrete\nvaluation $\\omega$.\n\n\\subsection{The apartment}\n\nLet $\\Split$ be a maximal $k_{v}$-split torus in $\\mathbf G$ and let\n$X^{*}(\\Split)= \\Hom_{k_{v}}(\\Split, \\mathbf G_{m})$ and $X_{*}(\\Split)\n=\\Hom_{k_{v}}(\\mathbf G_{m}, \\Split)$ denote the $k_v$-rational characters and\ncocharacters of $\\Split$ respectively. Denote by $\\Phi \\subset\nX^{*}(\\Split)$ the set of $k_{v}$-roots of $\\mathbf G$ with respect to\n$\\Split$. Let $\\N$ and $\\Cent$ denote the normalizer and the centralizer,\nrespectively, of $\\Split$; set $N=\\N(k_{v})$, $Z=\\Cent(k_{v})$. The Weyl\ngroup $W =\nN\/Z$ of $\\Phi$ acts on the real vector space\n\\begin{equation*}\nV = X_{*}(\\Split) \\otimes_{{\\mathbb Z}}{\\mathbb R} = \\Hom_{{\\mathbb Z}}(X^{*}(\\Split) , {\\mathbb R})\n\\end{equation*}\nby linear transformations; for $\\alpha\\in\\Phi$, let $r_\\alpha$ denote the\ncorresponding reflection of $V$.\n\nLet $A$ be the affine space underlying $V$ and let $\\Aff(A)$ denote the\ngroup of invertible affine transformations. We identify $V$ with the\ntranslation subgroup of $\\Aff(A)$. There is an action of $Z$ on $A$ via\ntranslations, $\\nu\\colon Z\\rightarrow V \\subset \\Aff(A)$, determined by\n\\begin{equation*}\n\\chi(\\nu(t)) = -\\omega(\\chi(t))\\ , \\quad t\\in Z,\\ \\chi\\in X^{*}(\\Cent)\\ ;\n\\end{equation*}\nnote that $V = \\Hom_{{\\mathbb Z}}(X^{*}(\\Cent), {\\mathbb R})$ since\n$X^{*}(\\Cent) \\subseteq X^{*}(\\Split)$ is a finite index subgroup. \n\nWe now extend $\\nu$ to an action of $N$ by affine transformations. Let $H\n= \\ker\\nu$, which is the maximal compact subgroup of $Z$. Then $Z\/H$ is a\nfree abelian group with rank $= \\dim_{\\mathbb R} V = k_{v}\\text{-rank}\\: \\mathbf G$. The group $W'\n= N\/H$ is an extension of $W$ by $Z\/H$ and there exists an affine action of\n$W'$ on $A$ which makes the following diagram commute\n\\cite{Landvogt}*{1.6}:\n\\begin{equation*}\n\\begin{CD}\n1 @>>> Z\/H @>>> W' @>>> W @>>> 1 \\\\\n@. @VVV @VVV @VVV \\\\\n1 @>>> V @>>> \\Aff(A) @>>> \\mathrm{GL}(V) @>>> 1\\rlap{\\ .}\n\\end{CD}\n\\end{equation*}\nThe action of $W'$ lifts to the desired extension $\\nu\\colon N \\to \\Aff(A)$.\n\nFor each $\\alpha \\in \\Phi$, let $U_{\\alpha}$ be the $k_v$-rational points\nof the connected unipotent subgroup of $\\mathbf G$ which has Lie algebra spanned\nby the root spaces $\\mathfrak g_\\alpha$ and (if $2\\alpha$ is a root)\n$\\mathfrak g_{2\\alpha}$. For $u\\in U_{\\alpha}\\setminus \\{1\\}$, let $m(u)$\nbe the unique element of $N\\cap U_{-\\alpha}uU_{-\\alpha}$\n\\cite{Landvogt}*{0.19}; in $\\SL_2$, for example,\n$m\\left(\\left(\\begin{smallmatrix} 1 & x \\\\ 0\\vphantom{x^{-1}} &\n 1 \\end{smallmatrix}\\right)\\right) = \\left(\\begin{smallmatrix} 0 & x\n \\\\ -x^{-1} & 0 \\end{smallmatrix}\\right)$. The element $m(u) \\in N$ acts\non $A$ by an affine reflection $\\nu(m(u))$ whose associated linear\ntransformation is $r_\\alpha$. The hyperplanes fixed by these affine\nreflections for all $\\alpha$ and $u$ are the \\emph{walls} of $A$. The\nconnected components of the complement of the union of the walls are called\nthe \\emph{chambers} of $A$; since we assume $\\mathbf G$ is almost simple, these\nare (open) simplices. 
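(For instance, in the rank one case $\\mathbf G=\\SL_2$ the apartment $A$ is a real line, the walls form a discrete set of points, and the chambers are the open intervals between consecutive walls.) 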
A \\emph{face} of $A$ is an open face of a\nchamber. The affine space $A$ is thus a simplicial complex (with the open\nsimplices being faces) and the action of $N$ is simplicial.\n\nFor convenience we identify $A$ with $V$ by choosing a ``zero'' point $o\\in\nA$. For $\\alpha \\in \\Phi$, define $\\phi_\\alpha\\colon U_{\\alpha} \\to {\\mathbb R}\n\\cup \\{\\infty\\}$ by setting $\\phi_\\alpha(1)=\\infty$ and requiring for\n$u\\neq 1$ that the function $x\\mapsto \\alpha(x) + \\phi_\\alpha(u)$ vanishes\non the wall fixed by $\\nu(m(u))$. For $\\ell \\in {\\mathbb R}$, let\n\\begin{equation*}\nU_{\\alpha,\\ell} = \\{\\, u\\in U_{\\alpha} \\mid \\phi_\\alpha(u) \\ge \\ell\\,\\}\\ .\n\\end{equation*}\nThese are compact open subgroups and define a decreasing exhaustive and\nseparated filtration of $U_{\\alpha}$ which has ``jumps'' only for $\\ell$ in\nthe discrete set $\\phi_\\alpha( U_{\\alpha}\\setminus \\{1\\})$. The affine\nfunction $\\alpha + \\ell$ is called an \\emph{affine root} if for some $u\\in\nU_{\\alpha}\\setminus \\{1\\}$, $\\ell = \\phi_\\alpha(u)$ and (if $2\\alpha$ is a\nroot) $\\phi_\\alpha(u)= \\sup \\phi_\\alpha(u U_{2\\alpha})$; let\n$r_{\\alpha,\\ell} = \\nu(m(u))$ be the corresponding affine reflection. Note\nthat the zero set of an affine root is a wall of\n$A$ and every wall of $A$ arises in this fashion.\n\nDenote the set of affine roots by $\\Phi_{\\mathrm{af}}$; it is an \\emph{affine root\n system} in the sense of \\cite{Macdonald}. The Weyl group $W_{\\mathrm{af}}$ of the\naffine root system $\\Phi_{\\mathrm{af}}$ is the group generated by $r_{\\alpha,\\ell}$\nfor $\\alpha + \\ell \\in \\Phi_{\\mathrm{af}}$; it is an affine Weyl group in the sense\nof \\cite{Bourbaki}*{Ch.~VI, \\S2} associated to a reduced root system (not\nnecessarily $\\Phi$). Since we assume $\\mathbf G$ is simply connected, $W_{\\mathrm{af}} =\n\\nu(N) \\cong W'$.\n\nThe \\emph{apartment} associated to $\\Split$ consists of the affine\nsimplicial space $A$ together with the action of $N$, the affine root\nsystem $\\Phi_{\\mathrm{af}}$, and the filtration of the root groups,\n$(U_{\\alpha,\\ell})_{\\substack{\\alpha\\in\\Phi \\\\ \\ell\\in {\\mathbb R}}}$.\n\n\\subsection{The building}\n\\label{ssectBuilding}\n\nFor $x\\in A$, let $U_x$ be the group generated by $U_{\\alpha,\\ell}$ for all\n$\\alpha + \\ell \\in\\Phi_{\\mathrm{af}}$ such that $(\\alpha + \\ell)(x) \\ge 0$. The\n\\emph{building} of $\\mathbf G$ over $k_v$ is defined \\cite{BruhatTits1}*{(7.4.2)}\nto be\n\\begin{equation*}\nX_v = (G\\times A ) \/ \\!\\sim \\ ,\n\\end{equation*}\nwhere $(gnp,x) \\sim (g, \\nu(n)x)$ for all $n\\in N$ and $p \\in H U_x$. We\nidentify $A$ with the subset of $X_v$ induced by $\\{1\\}\\times A $.\n\nThe building $X_v$ has an action of $\\mathbf G(k_v)$ induced by left\nmultiplication on the first factor of $G\\times A$. Under this action, $N$\nacts on $A\\subset X_v$ via $\\nu$ and $U_{\\alpha,\\ell}$ fixes the points in\nthe half-space of $A$ defined by $\\alpha +\\ell\\ge 0$. The simplicial\nstructure on $A$ induces one on $X_v$ and the action of $\\mathbf G(k_v)$ is\nsimplicial. The subcomplex $gA\\subset X_v$ may be identified with the\napartment corresponding to the maximal split torus $g\\Split g^{-1}$.\n\nChoose an inner product on $V$ which is invariant under the Weyl group $W$;\nthe resulting metric on $A$ may be transferred to any apartment by using the\naction of $\\mathbf G(k_v)$. 
These metrics fit together to give a well-defined\nmetric on $X_{v}$ which is invariant under $\\mathbf G(k_{v})$\n\\cite{BruhatTits1}*{(7.4.20)} and complete \\cite{BruhatTits1}*{(2.5.12)}.\nGiven two points $x$, $y\\in X_v$, there exists an apartment $gA$ of $X_v$\ncontaining them \\cite{BruhatTits1}*{(7.4.18)}. Since $gA$ is an affine\nspace we can connect $x$ and $y$ with a line segment, $t \\mapsto tx +\n(1-t)y$, $ t \\in [0,1]$; this segment is independent of the choice of\napartment containing the two points and in fact is the unique geodesic\njoining $x$ and $y$.\n\n\\begin{prop}[\\cite{BruhatTits1}*{(7.4.20)}]\nThe mapping $t \\mapsto tx + (1-t)y$ of $[0,1] \\times X_{v} \\times X_{v}\n\\rightarrow X_{v}$ is continuous and thus $X_{v}$ is contractible.\n\\end{prop}\n\nIn fact it follows from \\cite{BruhatTits1}*{(3.2.1)} that $X_v$ is a\n$\\CAT(0)$-space. (Recall that a $\\CAT(0)$-space is a metric space where\nthe distance between any two points is realized by a geodesic and every\ngeodesic triangle is thinner than the corresponding triangle of the same\nside lengths in the Euclidean plane; see \\cite{BH} for a comprehensive\ndiscussion of $\\CAT(0)$-spaces.) Besides affine buildings such as $X_v$,\nanother important class of $\\CAT(0)$-spaces are the simply connected,\nnon-positively curved Riemannian manifolds such as $X_\\infty$.\n\n\\subsection{Stabilizers}\n\\label{ssectStabilizersBuilding}\n\nFor $\\Omega \\subset X_{v}$, let $\\mathbf G(k_{v})_{\\Omega}$ be the subgroup that\nfixes $\\Omega$ pointwise (the \\emph{fixateur} of $\\Omega$). Suppose now\nthat $\\Omega \\subseteq A$ and set \\begin{equation*}\nU_{\\Omega} = \\langle \\, U_{\\alpha,\\ell} \\mid (\\alpha+\\ell)(\\Omega) \\geq 0,\\,\n\\alpha+\\ell\\in \\Phi_{\\mathrm{af}}\\, \\rangle\\ .\n\\end{equation*}\nSince $\\mathbf G$ is simply connected and the valuation $\\omega$ is discrete,\n$\\mathbf G(k_{v})_{\\Omega} = HU_{\\Omega}$ (see \\cite{BruhatTits1}*{(7.1.10),\n (7.4.4)}). In particular, the stabilizer of $x\\in A$ is the compact\nopen subgroup $\\mathbf G(k_{v})_x = HU_x$.\n\nIf $F$ is a face of $A$ and $x\\in F$, then the set of affine roots which\nare nonnegative at $x$ is independent of the choice of $x\\in F$. Thus\n$\\mathbf G(k_{v})_{F} = \\mathbf G(k_{v})_x$. Note that an element of $\\mathbf G(k_v)$ which\nstabilizes $F$ also fixes the barycenter $x_F$ of $F$; thus $\\mathbf G(k_v)_F$ is\nthe stabilizer subgroup of $F$. The stabilizer subgroups for the building\nof $\\SL_2$ (a tree) are calculated in \\cite{SerreTrees}*{II, 1.3}.\n\nLet $\\P$ be a parabolic $k_v$-subgroup which without loss of generality we\nmay assume contains the centralizer of $\\Split$; let $\\mathbf N_{\\P}$ be its\nunipotent radical. Let $\\Phi_P = \\{\\, \\alpha \\in \\Phi \\mid U_\\alpha\n\\subseteq \\mathbf N_{\\P}(k_v) \\, \\}$ and set $E_P = \\{\\, v\\in V \\mid \\alpha(v) \\ge\n0, \\, \\alpha \\in \\Phi_P\\, \\}$; note that $\\Phi_P$ is contained in a positive\nsystem of roots and hence $E_P$ is a cone with nonempty interior.\n\n\\begin{lem}\n\\label{lemUnipotentsHaveFixedPoints}\nFor $u \\in \\mathbf N_{\\P}(k_v)$ there exists $x\\in A$ such that $x + E_P$ is\nfixed pointwise by $u$. In particular, $u$ belongs to a compact open subgroup.\n\\end{lem}\n\n\\begin{proof}\nSince $\\mathbf N_{\\P}(k_v)$ is generated by $(U_\\alpha)_{\\alpha\\in\\Phi_P}$, there\nexists $\\ell\\in {\\mathbb R}$ such that $u$ belongs to the group generated by\n$(U_{\\alpha,\\ell})_{\\alpha\\in\\Phi_P}$. 
Since $U_{\\alpha,\\ell}$ fixes the\npoints in the half-space of $A$ defined by $\\alpha +\\ell\\ge 0$,\nchoosing $x\\in A$ such that $\\alpha(x) \\ge -\\ell$ for all $\\alpha\\in\n\\Phi_P$ suffices.\n\\end{proof}\n\n\\section{The reductive Borel-Serre and Satake compactifications: the\n $S$-arithmetic case}\n\\label{sectCompactificationsSArithmetic}\n\nWe now consider a general $S$-arithmetic subgroup $\\Gamma$ and define a\ncontractible space $X=X_S$ on which $\\Gamma$ acts properly. If the $k$-rank\nof $\\mathbf G$ is positive, as we shall assume, $\\Gamma\\backslash X$ is noncompact\nand it is important to compactify it. Borel and Serre \\cite{BS2} construct\n$\\ga\\backslash\\osp^{BS}$, the analogue of $\\Gamma\\backslash\\overline{X}_\\infty^{BS}$ from\n\\S\\ref{ssectBSarith}, and use it to study the cohomological finiteness of\n$S$-arithmetic subgroups. In this section we recall their construction and\ndefine several new compactifications of $\\Gamma\\backslash X$ analogous to\nthose in \\S\\ref{sectCompactificationsArithmetic}.\n\n\\subsection{\\boldmath The space $\\Gamma\\backslash X$ associated to an\n$S$-arithmetic group}\n\nLet $S$ be a finite set of places of $k$ containing the infinite places\n$S_\\infty$ and let $S_f = S \\setminus S_\\infty$. Define\n\\begin{equation*}\nG =G_{\\infty}\\times \\prod_{v\\in S_{f}} \\mathbf G(k_{v}),\n\\end{equation*}\nwhich is a locally\ncompact group, and\n\\begin{equation*}\nX =X_{\\infty}\\times \\prod_{v\\in S_{f}} X_{v}\\ ,\n\\end{equation*}\nwhere $X_v$ is the Bruhat-Tits building associated to $\\mathbf G(k_v)$ as\ndescribed in \\S\\ref{sectBruhatTitsBuildings}. If we need to make clear the\ndependence on $S$, we write $X_S$. $X$ is a locally compact\nmetric space under the distance function induced from the factors. Since\neach factor is a $\\CAT(0)$-space and contractible (see\n\\S\\ref{ssectBuilding}), the same is true for $X$.\n\nThe group $G$ acts isometrically on $X$. We view $\\mathbf G(k)\\subset G$ under\nthe diagonal embedding. Any $S$-arithmetic subgroup $\\Gamma \\subset \\mathbf G(k)$ is\na discrete subgroup of $G$ and acts properly on $X$ \\cite{BS2}*{(6.8)}.\nIt is known that the quotient $\\Gamma\\backslash X$ is compact if and\nonly if the $k$-rank of $\\mathbf G$ is equal to 0. In the following, we assume\nthat the $k$-rank of $\\mathbf G$ over $k$ is positive. Then for every $v\\in\nS_{f}$, the $k_{v}$-rank of $\\mathbf G$ is also positive.\n\n\\subsection{The Borel-Serre compactification}\n\\label{ssectBorelSerreSarithmetic}\nDefine\n\\begin{equation*}\n\\overline{X}^{BS} = \\overline{X}_\\infty^{BS}\\times \\prod_{v\\in S_{f}}X_{v} \\ ,\n\\end{equation*}\nwhere $\\overline{X}_\\infty^{BS}$ is as in \\S\\ref{ssectBSarith}. This space\nis contractible and the action of $\\mathbf G(k)$ on $X$ extends to a continuous\naction on $\\overline X^{BS}$. The action of any $S$-arithmetic subgroup\n$\\Gamma$ on $\\overline{X}^{BS}$ is proper \\cite{BS2}*{(6.10)}. 
When\n$S_f=\\emptyset$ this is proved in \\cite{Borel-Serre} as mentioned in\n\\S\\ref{ssectBSarith}; in general, the argument is by induction on $|S_f|$.\nThe key points are \\cite{BS2}*{(6.8)}:\n\\begin{enumerate}\n\\item The covering of $X_v$ by open stars $V(F)$ about the barycenters\n of faces $F$ satisfies\n\\begin{equation*}\n\\gamma V(F)\\cap V(F) \\neq \\emptyset \\quad \\Longleftrightarrow \\quad\n\\gamma\\in\\Gamma_{F} = \\Gamma\\cap \\mathbf G(k_v)_F \\text{ , and}\n\\end{equation*}\n\\item For any simplex $F \\subset X_{v}$, $\\Gamma_F$ is an\n $(S\\setminus\\{v\\})$-arithmetic subgroup and hence by induction acts\n properly on $\\overline{X}_{S\\setminus\\{v\\}}^{BS}$.\n\\end{enumerate}\nFurthermore $\\Gamma\\backslash \\overline{X}^{BS}$ is compact Hausdorff\n\\cite{BS2}*{(6.10)} which follows inductively from\n\\begin{enumerate}[resume]\n\\item There are only finitely many $\\Gamma$-orbits of simplices in $X_{v}$ for\n $v\\in S_f$ and the quotient of $\\overline{X}^{BS}_\\infty$ by an\n arithmetic subgroup is compact.\n\\end{enumerate}\n\n\\subsection{The reductive Borel-Serre compactification}\n\\label{subsectRBSSarith}\nDefine\n\\begin{equation*}\n\\overline{X}^{RBS}= \\overline{X}_\\infty^{RBS}\\times \\prod_{v\\in S_f}X_{v} \\ .\n\\end{equation*}\nThere is a $\\mathbf G(k)$-equivariant surjection $\\overline{X}^{BS} \\to\n\\overline{X}^{RBS}$ induced from the surjection in \\S\\ref{subsectRBSarith}.\n\\begin{prop}\n\\label{propDiscontinuousRBS}\nAny $S$-arithmetic subgroup $\\Gamma$ of $\\mathbf G(k)$ acts discontinuously on\n$\\overline{X}^{RBS}$ with a compact Hausdorff quotient $\\Gamma\\backslash\\overline{X}^{RBS}$.\n\\end{prop}\nThe proposition is proved similarly to the case of $\\ga\\backslash\\osp^{BS}$ outlined in\n\\S\\ref{ssectBorelSerreSarithmetic}; one replaces ``proper'' by\n``discontinuous'' and begins the induction with Lemma\n~\\ref{lemRBSDiscontinuous}. The space $\\Gamma\\backslash\\overline{X}^{RBS}$ is the\n\\emph{reductive Borel-Serre compactification} of $\\Gamma\\backslash X$.\n\n\\subsection{Satake compactifications}\n\\label{subsectSatakeSArith}\nLet $(\\tau,V)$ be a spherical representation of\n$\\operatorname{Res}_{k\/{\\mathbb Q}}\\mathbf G$ as in \\S\\ref{subsectSatakeArith} and define\n\\begin{equation*}\n{}_{{\\mathbb Q}}\\overline{X}^{\\tau}=\n{}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}\n\\times\\prod_{v\\in S_{f}} X_{v}\\ .\n\\end{equation*}\nThere is a $\\mathbf G(k)$-equivariant surjection $\\overline{X}^{RBS} \\to\n{}_{{\\mathbb Q}}\\overline{X}^{\\tau}$ induced by $\\overline{X}_\\infty^{RBS} \\to\n{}_{{\\mathbb Q}}\\overline{X}_\\infty^{\\tau}$ from \\S\\ref{subsectSatakeArith}.\n\n\\begin{prop}\n\\label{propDiscontinuousSatake}\nAssume that the Satake compactification $\\overline{X}_\\infty^{\\tau}$ is\ngeometrically rational. 
Then any $S$-arithmetic subgroup $\\Gamma$ acts\ndiscontinuously on ${}_{{\\mathbb Q}}\\overline{X}^{\\tau}$ with a compact Hausdorff\nquotient $\\Gamma\\backslash {}_{{\\mathbb Q}}\\overline{X}^{\\tau}$.\n\\end{prop}\n\nThe compact quotient $\\Gamma\\backslash {}_{{\\mathbb Q}}\\overline{X}^{\\tau}$ is\ncalled the \\emph{Satake compactification} associated with $(\\tau,V)$.\n\n\\section{The fundamental group of the compactifications and applications to\nthe congruence subgroup kernel}\n\\label{sectFundGrpArithmetic}\n\nIn this section we state our main result, Theorem ~\\ref{thmMainArithmetic},\nwhich calculates the fundamental group of the reductive Borel-Serre and the\nSatake compactifications of $\\Gamma\\backslash X$. We then apply the main result\nto identify the congruence subgroup kernel with certain fundamental groups.\nThe proof of Theorem ~\\ref{thmMainArithmetic} is postponed to\n\\S\\ref{sectProofArithmetic}.\n\nThroughout we fix a spherical representation $(\\tau,V)$ such that\n$\\overline{X}_\\infty^{\\tau}$ is geometrically rational.\n\n\\begin{defi} Let $\\Gamma$ be a group acting continuously on a topological\n space $Y$. For each point $y\\in Y$, let $\\Gamma_{y} =\\{\\,g\\in\\Gamma\\mid\n gy=y\\,\\}$ be the \\emph{stabilizer subgroup} of $y$ in $\\Gamma$. The\n \\emph{fixed subgroup} $\\Gamma_{f}$ is the subgroup generated by the\n stabilizer subgroups $\\Gamma_{y}$ for all $y\\in Y$. (The fixed subgroup is\n obviously normal.)\n\\end{defi}\n\nIn our situation of an $S$-arithmetic subgroup $\\Gamma$ acting on $\\overline{X}^{RBS}$\nand ${}_{{\\mathbb Q}}\\overline{X}^{\\tau}$, we denote $\\Gamma_f$ by $\\Gamma_{f,RBS}$ and\n$\\Gamma_{f,\\tau}$ respectively. The main result of this paper is the\nfollowing theorem.\n\n\\begin{thm}\n\\label{thmMainArithmetic}\nFor any $S$-arithmetic subgroup $\\Gamma$, there exists a commutative diagram\n\\begin{equation*}\n\\begin{CD}\n\\pi_{1}(\\ga\\backslash\\oX^{RBS}) @<\\cong<< \\Gamma\/\\Gamma_{f,RBS} \\\\\n@VVV @VVV \\\\\n\\pi_{1}(\\Gamma\\backslash {}_{{\\mathbb Q}}\\overline{X}^{\\tau}) @<\\cong<<\n\\Gamma\/\\Gamma_{f,\\tau}\n\\end{CD}\n\\end{equation*}\nwhere the horizontal maps are isomorphisms and the vertical maps are\nsurjections induced by the $\\Gamma$-equivariant projection $\\overline{X}^{RBS} \\to\n{}_{{\\mathbb Q}}\\overline{X}^{\\tau}$ and the inclusion $\\Gamma_{f,RBS} \\subseteq\n\\Gamma_{f,\\tau}$.\n\\end{thm}\n\nThe proof of the theorem will be given in \\S\\ref{sectProofArithmetic}. In\nthe remainder of this section we present some applications to the\ncongruence subgroup kernel. To do this we first need to calculate\n$\\Gamma_{f,RBS}$ and $\\Gamma_{f,\\tau}$ which will require the information on\nstabilizers from \\S\\S\\ref{subsectRBSarith}, \\ref{subsectSatakeArith}, and\n\\ref{ssectStabilizersBuilding}.\n\nLet $\\P$ be a parabolic $k$-subgroup $\\P$ of $\\mathbf G$. The $S$-arithmetic\nsubgroup $\\Gamma$ induces $S$-arithmetic subgroups $\\Gamma_{P}=\\Gamma\\cap\n\\P(k)\\subseteq \\P(k)$, $\\Gamma_{N_{P}} = \\Gamma\\cap \\mathbf N_{\\P}(k) \\subseteq\n\\mathbf N_{\\P}(k)$, and $\\Gamma_{L_{P}} = \\Gamma_{P}\/\\Gamma_{N_{P}} \\subseteq \\mathbf\nL_{\\P}(k)$, as well as $\\Gamma_{P_\\tau} = \\Gamma\\cap \\P_\\tau(k) \\subseteq\n\\P_\\tau(k)$ and $\\Gamma_{H_{P,\\tau}} = \\Gamma_P \/ \\Gamma_{P_\\tau} \\subseteq \\mathbf\nH_{\\P, \\tau}(k)$.\n\nLet $E\\Gamma\\subseteq \\Gamma$ be the subgroup generated by $\\Gamma_{N_{P}}$\nfor every parabolic $k$-subgroup $\\P$ of $\\mathbf G$. 
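(For orientation: when $\\mathbf G=\\SL_n$, every elementary unipotent matrix in $\\Gamma$ lies in the unipotent radical of some parabolic $k$-subgroup, so $E\\Gamma$ contains all the elementary matrices of $\\Gamma$; this is the link with \\S\\ref{ssectElementaryMatrices}.) 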
Since $\\gamma\n\\mathbf N_{\\P}\\gamma^{-1}=\\mathbf N_{\\gamma \\P\\gamma^{-1}}$ for $\\gamma \\in \\Gamma$,\n$E\\Gamma$ is clearly normal. Let $E_{\\tau}\\Gamma\\subseteq \\Gamma$ be the\nsubgroup generated by $\\Gamma_{P_\\tau} \\cap \\bigcap_{v\\in S_f} K_v$ for\nevery $\\tau$-saturated parabolic $k$-subgroup $\\P$ of $\\mathbf G$ and compact open\nsubgroups $K_v\\subset \\mathbf G(k_v)$. As above, $E_{\\tau}\\Gamma$ is normal.\nSince $\\Gamma_{N_P}$ is generated by $\\Gamma_{N_P} \\cap \\bigcap_{v\\in S_f} K_v$\nfor various $K_v$ by Lemma~\\ref{lemUnipotentsHaveFixedPoints}, it is easy\nto see that $E\\Gamma \\subseteq E_\\tau\\Gamma$.\n\nA subgroup $\\Gamma\\subset \\mathbf G(k)$ is \\emph{neat} if the subgroup of ${\\mathbb C}$\ngenerated by the eigenvalues of $\\rho(\\gamma)$ is torsion-free for any\n$\\gamma\\in\\Gamma$. Here $\\rho$ is a faithful representation $\\mathbf G\\to \\GL_N$\ndefined over $k$ and the condition is independent of the choice of $\\rho$.\nClearly any neat subgroup is torsion-free. Any $S$-arithmetic subgroup has a\nnormal neat subgroup of finite index \\cite{Borel}*{\\S17.6}; the image of a\nneat subgroup by a morphism of algebraic groups is neat\n\\cite{Borel}*{\\S17.3}.\n\n\\begin{prop}\n\\label{propGammaFixedIsEGamma}\nLet $\\Gamma$ be an $S$-arithmetic subgroup. Then $E\\Gamma \\subseteq\n\\Gamma_{f,RBS}$ and $E_{\\tau}\\Gamma \\subseteq \\Gamma_{f,\\tau}$. If $\\Gamma$\nis neat then equality holds for both.\n\\end{prop}\n\n\\begin{proof}\nWe proceed by induction on $\\vert S_{f}\\vert$. Suppose first that\n$S_{f}=\\emptyset$. By Lemma ~\\ref{lemStabilizersRBS}, $\\Gamma_{N_P}$\nstabilizes any point of $X_{P} \\subseteq \\overline{X}^{RBS}_\\infty$ for any\nparabolic $k$-subgroup $\\P$, and hence $E\\Gamma \\subseteq \\Gamma_{f,RBS}$.\nLikewise by Lemma ~\\ref{lemStabilizersSatake}, $\\Gamma_{P_\\tau}$ stabilizes\nany point of $X_{P,\\tau} \\subset {}_{{\\mathbb Q}}\\overline{X}^{\\tau}_\\infty$ and so\n$E_{\\tau}\\Gamma \\subseteq \\Gamma_{f,\\tau}$.\n\nIf $\\Gamma$ is neat, then $\\Gamma_{L_{P}}$ and $\\Gamma_{H_{P,\\tau}}$ are\nlikewise neat and hence torsion-free. The actions of $\\Gamma_{L_P}$ and\n$\\Gamma_{H_{P,\\tau}}$ are proper and hence $\\Gamma_{L_{P},z}$ and\n$\\Gamma_{H_{P,\\tau},z}$ are finite. Thus these stabilizer subgroups must\nbe trivial. It follows then from Lemmas ~\\ref{lemStabilizersRBS} and\n\\ref{lemStabilizersSatake} that $E\\Gamma = \\Gamma_{f,RBS}$ and\n$E_{\\tau}\\Gamma = \\Gamma_{f,\\tau}$.\n\nNow suppose that $v \\in S_{f}$ and let $S' = S \\setminus\\{v\\}$. Write\n$\\overline{X}^{RBS} = \\overline{X}_{S'}^{RBS} \\times X_v$. Suppose that\n$\\gamma\\in \\Gamma_{N_P}$ for some parabolic $k$-subgroup $\\P$. By Lemma\n~\\ref{lemUnipotentsHaveFixedPoints}, $\\gamma \\in \\mathbf G(k_v)_y$ for some $y\\in\nX_{v}$. Thus $\\gamma \\in \\Gamma' \\cap \\mathbf N_{\\P}(k)$, where $\\Gamma' = \\Gamma\\cap\n\\mathbf G(k_v)_y$. Since $\\mathbf G(k_v)_y$ is a compact open subgroup, $\\Gamma'$ is an\n$S'$-arithmetic subgroup. By induction $\\gamma = \\gamma_1 \\dots \\gamma_m$\nwhere $\\gamma_i\\in\\Gamma'_{x_i}$ with $x_i\\in \\overline{X}_{S'}^{RBS}$. Since\neach $\\gamma_i\\in \\Gamma_{(x_i,y)} \\subset \\Gamma_{f,RBS}$, we see $E\\Gamma\n\\subseteq \\Gamma_{f,RBS}$. 
The proof that $E_{\\tau}\\Gamma \\subseteq\n\\Gamma_{f,\\tau}$ is similar since if $\\gamma \\in \\Gamma_{P_\\tau} \\cap\n\\bigcap_{v\\in S_f} K_v$ then $\\gamma \\in \\mathbf G(k_v)_y$ for some $y\\in X_v$\n\\cite{BruhatTits1}*{(3.2.4)}.\n\nAssume that $\\Gamma$ is neat. Let $(x,y) \\in \\overline{X}_{S'}^{RBS}\\times\nX_{v}$, and let $F$ be a face of $X_v$ containing $y$. As above,\n$ \\Gamma_F = \\Gamma\\cap \\mathbf G(k_v)_F$ is $S'$-arithmetic and, in this case, neat.\nSo by induction, $\\Gamma_{F,x} \\subseteq E(\\Gamma_F)\\subseteq E\\Gamma$. But\nsince $\\mathbf G(k_{v})_{y}=\\mathbf G(k_{v})_{F}$, $\\Gamma_{(x,y)} =\n\\Gamma_{F,x}$. Therefore $\\Gamma_{f,RBS} \\subseteq E\\Gamma $. A similar\nargument shows that $\\Gamma_{f,\\tau} \\subseteq E_{\\tau}\\Gamma $.\n\\end{proof}\n\nWe now can deduce several corollaries of Theorem ~\\ref{thmMainArithmetic}\nand Proposition ~\\ref{propGammaFixedIsEGamma}.\n\n\\begin{cor}\n\\label{corNeat}\n$\\pi_{1}(\\ga\\backslash\\oX^{RBS})$ is a quotient of $\\Gamma\/ E\\Gamma$ and\n$\\pi_{1}(\\Gamma\\backslash {}_{{\\mathbb Q}}\\overline{X}^{\\tau})$ is a quotient of\n$\\Gamma\/ E_\\tau\\Gamma$. If $\\Gamma$ is neat, then $\\pi_{1}(\\ga\\backslash\\oX^{RBS}) \\cong\n\\Gamma\/ E\\Gamma$ and $\\pi_{1}(\\Gamma\\backslash\n{}_{{\\mathbb Q}}\\overline{X}^{\\tau}) \\cong \\Gamma\/ E_\\tau\\Gamma$.\n\\end{cor}\n\n\\begin{cor}\nIf $k\\text{-rank}\\: \\mathbf G >0 $ and $S\\text{-rank}\\: \\mathbf G \\ge 2$, $\\pi_{1}(\\ga\\backslash\\oX^{RBS})$ and\n$\\pi_{1}(\\Gamma\\backslash {}_{{\\mathbb Q}}\\overline{X}^{\\tau})$ are finite.\n\\end{cor}\n\\begin{proof}\nUnder the rank assumptions, $E\\Gamma$ is $S$-arithmetic\n\\citelist{\\cite{Margulis} \\cite{Ra2}*{Theorem~ A, Corollary~ 1}}.\n\\end{proof}\n\n\\begin{cor}\n\\label{corRankTwoAndUp}\nIf $k\\text{-rank}\\: \\mathbf G >0 $ and $S\\text{-rank}\\: \\mathbf G \\ge 2$, then $C(S,\\mathbf G) =\n\\varprojlim\\limits_{{\\mathfrak a}} \\pi_{1}(\\Gamma({\\mathfrak a})\\backslash\\overline{X}^{RBS})$, where ${\\mathfrak a}$\nranges over nonzero ideals of ${\\mathcal O}$. These fundamental groups and the\nlimit are finite.\n\\end{cor}\n\\begin{proof}\nUnder the rank hypothesis, Raghunathan proves that the congruence kernel is\nthe projective limit of $\\Gamma({\\mathfrak a})\/ E\\Gamma({\\mathfrak a})$ (see\n\\eqref{eqnCongruenceKernel} in \\S\\ref{ssectElementaryMatrices}).\nFurthermore these groups are finite (see the discussion in\n\\S\\S\\ref{ssectKnownResults}, \\ref{ssectElementaryMatrices}). Now apply\nCorollary ~\\ref{corNeat} and the fact that $\\Gamma({\\mathfrak a})$ is neat for ${\\mathfrak a}$\nsufficiently small.\n\\end{proof}\n\nSet $\\Gamma^*({\\mathfrak a}) = \\bigcap_{{\\mathfrak b}\\neq 0} E\\Gamma({\\mathfrak a})\\cdot \\Gamma({\\mathfrak b})$ where ${\\mathfrak b}$ runs\nover nonzero ideals of ${\\mathcal O}$. Clearly\n\\begin{equation}\n\\label{eqnGammaStar}\nE\\Gamma({\\mathfrak a}) \\subseteq \\Gamma^*({\\mathfrak a}) \\subseteq \\Gamma({\\mathfrak a}).\n\\end{equation}\nBy Raghunathan's Main Lemma \\cite{Ra1}*{(1.17)}, for every nonzero ideal\n${\\mathfrak a}$ there exists a nonzero ideal ${\\mathfrak a}'$ such that $\\Gamma^*({\\mathfrak a})\\supseteq\n\\Gamma({\\mathfrak a}')$. 
Thus $\\Gamma^*({\\mathfrak a})$ is the smallest $S$-congruence subgroup\ncontaining $E\\Gamma({\\mathfrak a})$.\n\n\\begin{cor}\n\\label{corIdentifyCSG}\nIf $k\\text{-rank}\\: \\mathbf G >0 $ and $S\\text{-rank}\\: \\mathbf G \\ge 2$, then $C(S,\\mathbf G) =\n\\pi_{1}(\\Gamma^*({\\mathfrak a})\\backslash\\overline{X}^{RBS})$ for any sufficiently small nonzero\nideal ${\\mathfrak a}$ of ${\\mathcal O}$.\n\\end{cor}\n\\begin{proof}\nSince $\\Gamma^*({\\mathfrak a})$ is an $S$-congruence subgroup,\nequations \\eqref{eqnCongruenceKernel} and \\eqref{eqnGammaStar} imply that\n\\begin{equation*}\nC(S,\\mathbf G) = \\varprojlim\\limits_{{\\mathfrak a}} \\Gamma({\\mathfrak a})\/ E\\Gamma({\\mathfrak a}) \\cong\n\\varprojlim\\limits_{{\\mathfrak a}} \\Gamma^*({\\mathfrak a}) \/ E\\Gamma({\\mathfrak a}).\n\\end{equation*}\nSince $C(S,\\mathbf G)$ is finite, the second limit will stabilize if we show\n\\begin{equation*}\n \\Gamma^*({\\mathfrak b})\/ E\\Gamma({\\mathfrak b}) \\longrightarrow\n \\Gamma^*({\\mathfrak a}) \/ E\\Gamma({\\mathfrak a})\n\\end{equation*}\nis surjective for ${\\mathfrak b}\\subset {\\mathfrak a}$. But this follows from Raghunathan's\nMain Lemma \\cite{Ra1}*{(1.17)} applied to ${\\mathfrak b}$ and the definition of\n$\\Gamma^*({\\mathfrak a})$. Finally we note that\n$\\pi_{1}(\\Gamma^*({\\mathfrak a})\\backslash\\overline{X}^{RBS}) \\cong \\Gamma^*({\\mathfrak a}) \/ E\\Gamma({\\mathfrak a})$ by\nCorollary ~\\ref{corNeat} and the fact that $E\\Gamma({\\mathfrak a}) = E\\Gamma^*({\\mathfrak a})$ (apply\n$E$ to \\eqref{eqnGammaStar}).\n\\end{proof}\n\n\\begin{rem}\nFrom the point of view of identifying the congruence subgroup kernel $C(S,\n\\mathbf G)$, Corollary ~\\ref{corIdentifyCSG} shows that the reductive Borel-Serre\ncompactification $\\ga\\backslash\\osp^{RBS}$ is the most natural compactification. On\nthe other hand, the Satake compactifications are important as well. In\nparticular, when $X=X_{\\infty}$ is Hermitian, the Baily-Borel\ncompactification is a normal projective variety and has played an important\nrole in algebraic geometry and number theory. In the cases considered in\n\\citelist{\\cite{hk} \\cite{kn} \\cite{hs} \\cite{ge} \\cite{Gro} \\cite{gro2}},\nthe fundamental group of the Baily-Borel compactification is shown to\nvanish. The maximal Satake compactification is also special among the\nfamily of all Satake compactifications and important for various purposes.\nIn the general situation in this paper, the precise relations between\n$C(S,\\mathbf G)$ and $\\pi_{1}(\\Gamma(\\mathfrak a) \\backslash\n{}_{{\\mathbb Q}}\\overline{X}^{\\tau})$ are not clear, even when $\\mathfrak a$ is a\nsufficiently small ideal, aside from the fact that $\\pi_{1}(\\Gamma^*(\\mathfrak\na) \\backslash {}_{{\\mathbb Q}}\\overline{X}^{\\tau})$ is a quotient of $C(S,\\mathbf G)$ when\n$k\\text{-rank}\\: \\mathbf G >0 $ and $S\\text{-rank}\\: \\mathbf G \\ge 2$.\n\\end{rem}\n\n\\section{Proof of the main theorem}\n\\label{sectProofArithmetic}\nIn this section we give the proof of Theorem ~\\ref{thmMainArithmetic}. The\nmain tool is Proposition ~\\ref{propGrosche}. Part ~\\ref{itemGrosche} in\nthe proposition is used for the proof of the case where $\\Gamma$ is neat; it\nrequires the notion of an \\emph{admissible} map (Definition\n~\\ref{defiAdmissible}). Part ~\\ref{itemArmstrong} is needed in addition to\ncomplete the general case. 
In order to apply Proposition\n~\\ref{propGrosche} we must first verify that the spaces\n$\\overline{X}^{RBS}$ and ${}_{{\\mathbb Q}}\\overline{X}^{\\tau}$ are simply connected\n(Proposition ~\\ref{propSimply}) and that the $\\Gamma$-actions are admissible\nin the neat case (Proposition ~\\ref{propAdmissibleNeatCase}). Both of\nthese arguments depend on deforming paths to the boundary where the\ngeometry is simpler; this technique is formalized in Lemma\n~\\ref{lemAdmissibilityViaRetract}.\n\nHomotopy of paths $\\omega$ and $\\eta$ will always mean homotopy relative to\nthe endpoints and will be denoted $\\omega \\cong \\eta$. An action of a\ntopological group $\\Gamma$ on a topological space $Y$ will always be a\ncontinuous action.\n\n\\begin{defi}\n\\label{defiAdmissible}\nA continuous surjection $p\\colon Y \\to X$ of topological spaces is\n\\emph{admissible} if for any path $\\omega$ in $X$ with initial point $x_0$\nand final point $x_1$\nand for any $y_0\\in p^{-1}(x_0)$, there exists a path $\\tilde{\\omega}$ in\n$Y$ starting at $y_0$ and ending at $y_1\\in p^{-1}(x_1)$ such that $p\\circ \\tilde \\omega$ is homotopic to\n$\\omega$ relative to the endpoints. An action of a group $\\Gamma$\non a topological space $Y$ is \\emph{admissible} if the quotient map $Y\\to\n\\Gamma\\backslash Y$ is admissible.\n\\end{defi}\n\n\\begin{prop}\n\\label{propGrosche}\nLet $Y$ be a simply connected topological space and $\\Gamma$ a discrete group\nacting on $Y$. Assume that either\n\\begin{enumerate}\n\\item\\label{itemGrosche} the $\\Gamma$-action is discontinuous and admissible,\n or that\n\\item\\label{itemArmstrong} the $\\Gamma$-action is proper and $Y$ is a locally\n compact metric space.\n\\end{enumerate}\nThen the natural morphism $\\Gamma \\to \\pi_{1}(\\Gamma\\backslash Y)$ induces an\nisomorphism $\\Gamma\/\\Gamma_{f} \\cong \\pi_{1}(\\Gamma\\backslash Y)$.\n\\end{prop}\n\\begin{proof}\nSee \\cite{Gro}*{Satz~5} and \\cite{Armstrong} for hypotheses\n\\ref{itemGrosche} and \\ref{itemArmstrong} respectively.\n\\end{proof}\n\n\\begin{prop}\n\\label{propAdmissibilityImpliesSC}\nLet $p\\colon Y \\to X$ be an admissible continuous map of a simply\nconnected topological space $Y$ and assume that $p^{-1}(x_0)$ is\npath-connected for some $x_0\\in X$. Then $X$ is simply connected.\n\\end{prop}\n\\begin{proof}\nLet $\\omega\\colon [0,1] \\to X$ be a loop based at $x_0$ and let\n$\\tilde\\omega$ be a path in $Y$ such that $p\\circ \\tilde\\omega \\cong\n\\omega$ (relative to the basepoint). Let $\\eta$ be a path in\n$p^{-1}(x_0)$ from $\\tilde\\omega(1)$ to $\\tilde\\omega(0)$. Then the\nproduct $\\tilde\\omega\\cdot \\eta$ is a loop in the simply connected space\n$Y$ and hence is null-homotopic. It follows that $\\omega\\cong p\\circ\n\\tilde \\omega\\cong p\\circ(\\tilde\\omega\\cdot \\eta)$ is null-homotopic.\n\\end{proof}\n\n\\begin{lem}\n\\label{lemAdmissibilityIsLocal}\nA continuous surjection $p\\colon Y \\to X$ of topological spaces is\nadmissible if and only if $X$ can be covered by open subsets $U$\nsuch that $p|_{p^{-1}(U)}\\colon p^{-1}(U) \\to U$ is\nadmissible.\n\\end{lem}\n\\begin{proof}\nBy the Lebesgue covering lemma, any path $\\omega\\colon [0,1] \\to X$ is\nhomotopic to the product of finitely many paths, each of which maps into\none of the subsets $U$. 
The lemma easily follows.\n\\end{proof}\n\n\\begin{lem}\n\\label{lemAdmissibilityViaRetract}\nLet $p\\colon Y \\to X$ be a continuous surjection of topological spaces.\nAssume there exist deformation retractions $r_t$ of $X$ onto a subspace\n$X_0$ and $\\tilde r_t$ of $Y$ onto $Y_0 = p^{-1}(X_0)$ such that $p\\circ\n\\tilde r_t = r_t \\circ p$. Also assume for all $x\\in X$ that\n$\\pi_0(p^{-1}(x)) \\xrightarrow{\\tilde r_{0*}} \\pi_0(p^{-1}(r_0(x)))$ is\nsurjective. Then $p$ is admissible if and only if $p|_{Y_0}\\colon Y_0\\to\nX_0$ is admissible.\n\\end{lem}\n\n\\setlength{\\pinch}{.002128769252056923\\textwidth}\n\\setlength{\\mim}{2.85427559055181102\\pinch}\n\\begin{figure}[h]\n\\begin{equation*}\n\\begin{xy}\n<0\\mim,-15\\mim>;<3\\mim,-15\\mim>:\n<24\\mim,-3\\mim>=\"c\"+<0\\mim,6\\mim>=\"cdmid\"+<0\\mim,6\\mim>=\"d\",\n(15,0)=\"adown\";(0,10)=\"bdown\" **[bordergrey]\\crv{(10,5)&(5,8)}?(.3)=\"xdown\",\n?(.25)=\"main3\",?(.6)=\"main6\",\n\"adown\"+\"c\";{\"bdown\"+\"c\"} **[verylightgrey]\\crv{ (10,5)+\"c\" & (5,8)+\"c\"},\n(15,10)=\"aup\";(0,20)=\"bup\" **[bordergrey]\\crv{(10,15)&(5,18)}\n?(.5)=\"xup\",\n\"aup\"+\"d\";{\"bup\"+\"d\"} **[bordergrey]\\crv{\"d\"+(10,15) & \"d\"+(5,18)},\n\"adown\",\\blownupslice{bordergrey}{bordergrey},\n\"bdown\",\\blownupslice{verylightgrey}{bordergrey},\n\"main6\",\\blownupslice{verylightgrey}{verylightgrey},\n\"bot\";p+<0\\mim,42\\mim>**\\dir{}?(.65)*\\dir{*}=\"y1\"*+!L{_{y_1}}=\"f3\",\n?(.25)*\\cir<1\\pinch>{}*\\frm{*}=\"f7\",?(.85)*\\cir<1\\pinch>{}*\\frm{*}=\"f1\",\n?(.75)*+{}=\"f2\",?(.35)*+{}=\"f6\",?(.45)*\\cir<1\\pinch>{}*\\frm{*}=\"f5\",\n?(.55)*+{}=\"f4\",?(.05)*+{}=\"f9\",\n\"top\"+<0\\mim,6\\mim>*++!DC\\txt<20\\mim>\\tiny{$ p^{-1}(x_1)$ (marked by\n $\\scriptscriptstyle \\bullet$)\\\\$\\downarrow$},\n\"main6\";\"upper\" **[lightgrey]\\dir{-}?(.5)=\"r0y1\"*\\dir{*}*+!RD{_{\\tilde\n r_0(y_1)}},\n?(.25)=\"eta1\"*\\dir{*}*+!UR{_{\\eta(1)}},\n\"lower\"-<0\\mim,6\\mim>*++!UC\\txt<20\\mim>\\tiny{$\\uparrow$\\\\$p^{-1}(r_0(x_1))$},\n;\"lower\"**[lightgrey]\\dir{--},\n\"upper\";\"upper\"+<0\\mim,6\\mim>**[lightgrey]\\dir{--},\n\"main3\",\\blownupslice{verylightgrey}{verylightgrey},\n\"main3\";\"bot\" **\\crv{~*\\dir{} \"ccp\"},?(.8)=\"x0bot\";p+<0\\mim,37.5\\mim>=\"x0top\"\n**\\dir{}?(.4)=\"y0\"*\\dir{*}*+!L{_{y_0}},\n\"main3\";\"upper\" **\\dir{}?(.35)=\"r0y0\"*\\dir{*}*+!UR{_{\\tilde r_0(y_0)}},\n;\"y0\" **\\dir{},?(.5)+\/u2.25\\pinch\/=\"mcp\",\n\"r0y0\";\"y0\" **\\crv{ \"mcp\"},?(.5)*+!U{_{\\tilde\\sigma_0}},*\\dir{>},\n\"r0y0\"+<0\\pinch,.5\\pinch>;\"y0\"+<0\\pinch,.5\\pinch> **\\crv{ \"mcp\"+<0\\pinch,.5\\pinch>},?(.5)*\\dir{>},\n\"r0y0\";\"eta1\" **\\dir{},?(.45)+\/r3.75\\pinch\/=\"cp1\",?(.55)+\/l2\\pinch\/=\"cp2\",\n\"eta1\" **\\crv{ \"cp1\" & \"cp2\" },?(.5)*+!UR{_\\eta},*\\dir{>},\n\"r0y0\"+<.35\\pinch,.35\\pinch>;\"eta1\"+<.35\\pinch,.35\\pinch> **\\crv{ \"cp1\"+<.35\\pinch,.35\\pinch> & \"cp2\"+<.35\\pinch,.35\\pinch> },?(.5)*\\dir{>},\n\"eta1\";\"r0y1\" **\\dir{-},?(.5)*+!R{_{\\psi}},*\\dir{>},\n\"eta1\"+<.5\\pinch,0\\pinch>;\"r0y1\"+<.5\\pinch,0\\pinch> **\\dir{-},?(.5)*\\dir{>},\n\"r0y1\";\"y1\" **\\dir{},?(.5)+\/d4.5\\pinch\/=\"mcp\",\n\"r0y1\";\"y1\" **\\crv{ \"mcp\"},?(.5)*+!U{_{\\tilde\\sigma_1}},*\\dir{>},\n\"r0y1\"+<0\\pinch,.5\\pinch>;\"y1\"+<0\\pinch,.5\\pinch> **\\crv{ \"mcp\"+<0\\pinch,.5\\pinch>},?(.5)*\\dir{>},\n\"adown\"+<-2\\mim,-5\\mim>*{_{Y_0\\quad\\qquad\\subseteq \\quad\\qquad Y}},\n{<79\\mim,15\\mim> \\ar _{p} @{>} 
<89\\mim,15\\mim>},\n<90\\mim,0\\mim>;<93\\mim,0\\mim>:\n<24\\mim,-3\\mim>=\"c\"+<0\\mim,6\\mim>=\"cdmid\"+<0\\mim,6\\mim>=\"d\",\n(15,0)=\"a\";(0,10)=\"b\" **[bordergrey]\\crv{(10,5)&(5,8)},\n?(.25)=\"main3\"*\\dir{*}*+!UR{_{r_0(x_0)}},?(.6)=\"main6\"*\\dir{*}*+!UR{_{r_0(x_1)}},,\n\"main3\",\\slice{verylightgrey}{verylightgrey},\n\"main3\";\"bot\" **\\crv{~*\\dir{} \"ccp\"},?(.8)=\"x0\"*\\dir{*}*+!U{_{x_0}},\n\"x0\" **\\crv{ \"ccp\"},?(.5)*+!U{_{\\sigma_0}},*\\dir{>},\n\"main3\"+<0\\pinch,.5\\pinch>;\"x0\"+<0\\pinch,.5\\pinch> **\\crv{ \"ccp\"+<0\\pinch,.5\\pinch>},?(.5)*\\dir{>},\n\"main6\",\\slice{verylightgrey}{verylightgrey},\n\"main6\"+\"cdmid\"-<1.5\\mim,0\\mim>=\"x1\"*\\dir{*}*+!DR{_{x_1}}, \n\"a\"+\"cdmid\";{\"b\"+\"cdmid\"} **\\crv{~*\\dir{} \"cdmid\"+(10,5) & \"cdmid\"+(5,8)},\n\"a\"+\"c\";{\"b\"+\"c\"} **[verylightgrey]\\crv{\"c\"+(10,5) & \"c\"+(5,8)},\n\"main6\";\"x1\" **\\dir{},?(.5)+\/d3\\pinch\/=\"mcp\",\n\"main6\";\"x1\" **\\crv{ \"mcp\"},?(.6)*+!U{_{\\sigma_1}},*\\dir{>},\n\"main6\"+<0\\pinch,.5\\pinch>;\"x1\"+<0\\pinch,.5\\pinch> **\\crv{ \"mcp\"+<0\\pinch,.5\\pinch>},?(.6)*\\dir{>},\n\"x0\";\"x1\" **\\dir{},?(.45)+\/r37.5\\pinch\/=\"cp1\",?(.55)+\/l27.5\\pinch\/=\"cp2\",\n\"x1\" **\\crv{ \"cp1\" & \"cp2\" },?(.5)*+!LD{_\\omega},*\\dir{>},\n\"x0\"+<.35\\pinch,.35\\pinch>;\"x1\"+<.35\\pinch,.35\\pinch> **\\crv{ \"cp1\"+<.35\\pinch,.35\\pinch> & \"cp2\"+<.35\\pinch,.35\\pinch> },?(.5)*\\dir{>},\n\"main3\";\"main6\" **\\crv{(10.2,4.4)}?(.6)*\\dir{>}, \n*+!UR{_{r_0\\circ\\omega}},\n\"main3\"+<.35\\pinch,.35\\pinch>;\"main6\"+<.35\\pinch,.35\\pinch> **\\crv{(10.2,4.4)+<.35\\pinch,.35\\pinch>}?(.6)*\\dir{>}, \n\"a\",\\slice{bordergrey}{bordergrey},\n\"b\",\\slice{verylightgrey}{bordergrey},\n\"a\"+\"d\";{\"b\"+\"d\"} **[bordergrey]\\crv{\"d\"+(10,5) & \"d\"+(5,8)},\n\"a\"+<10\\mim,-6\\mim>*{_{X_0\\quad\\subseteq \\quad X}}\n\\end{xy}\n\\end{equation*}\n\\caption{$p\\colon Y \\to X$ as in Lemma~\\ref{lemAdmissibilityViaRetract}}\n\\label{figAdmissibility}\n\\end{figure}\n\n\\begin{proof}\n(See Figure ~\\ref{figAdmissibility}.) Assume $p|_{Y_0}$ is admissible. If\n $\\omega$ is a path in $X$ from $x_0$ to $x_1$, then $\\omega \\cong\n \\sigma_0^{-1} \\cdot (r_0 \\circ \\omega) \\cdot \\sigma_1$ where $\\sigma_i(t)\n = r_t(x_i)$ for $i=0$, $1$. Pick $y_0\\in p^{-1}(x_0)$ and let $\\eta(t)$\n be a path in $Y_0$ starting at $\\tilde r_0(y_0)$ such that $p\\circ \\eta\n \\cong r_0\\circ \\omega$. By assumption there exists $y_1\\in p^{-1}(x_1)$\n such that $\\tilde r_0(y_1)$ is in the same path-component of\n $p^{-1}(r_0(x_1))$ as $\\eta(1)$; let $\\psi$ be any path in\n $p^{-1}(r_0(x_1))$ from $\\eta(1)$ to $\\tilde r_0(y_1)$. Set\n $\\tilde\\omega = \\tilde\\sigma_0^{-1} \\cdot \\eta \\cdot \\psi \\cdot\n \\tilde\\sigma_1$, where $\\tilde \\sigma_i(t) = \\tilde r_t(y_i)$. Then\n $p\\circ \\tilde\\omega \\cong \\sigma_0^{-1} \\cdot (r_0 \\circ \\omega) \\cdot\n \\sigma_1$ and thus $p$ is admissible.\n\\end{proof}\n\nRecall the $\\mathbf G(k)$-equivariant quotient maps $\\overline{X}^{BS}\n\\xrightarrow{p_1} \\overline{X}^{RBS} \\xrightarrow{p_2}\n {}_{{\\mathbb Q}}\\overline{X}^{\\tau}$ from \\S\\S\\ref{subsectRBSSarith},\n \\ref{subsectSatakeSArith}.\n\n\\begin{prop}\n\\label{propSimply}\nThe spaces $\\overline{X}^{RBS}$ and ${}_{{\\mathbb Q}}\\overline{X}^{\\tau}$ are\nsimply connected.\n\\end{prop}\n\\begin{proof}\nFor any finite place $v$, the building $X_{v}$ is contractible. 
So we need\nonly prove that $\\overline{X}^{RBS}_\\infty$ and\n${}_{{\\mathbb Q}}\\overline{X}^{\\tau}_\\infty$ are simply connected (the case that\n$S_{f} = \\emptyset$). By Proposition ~\\ref{propAdmissibilityImpliesSC},\nLemma ~\\ref{lemAdmissibilityIsLocal}, and the fact that\n$\\overline{X}^{BS}_\\infty$ is contractible, it suffices to find a cover of\n$\\overline{X}^{RBS}_\\infty$ by open subsets $U$ over which $p_1$\n(resp. $p_2\\circ p_1$) is admissible.\n\nConsider first $\\overline{X}^{RBS}_\\infty$. The inverse image\n$p_1^{-1}(X_Q)$ of a stratum $X_Q \\subseteq \\overline{X}^{RBS}_\\infty$ is\n$e(Q) = N_Q\\times X_Q \\subseteq \\overline{X}^{BS}_\\infty$. Set $\\tilde U =\n\\overline A_Q(1) \\times N_Q \\times X_Q \\subseteq \\overline{X}^{BS}_\\infty$\n(compare \\eqref{Pcorner}) and $U=p_1(\\tilde U)$, a neighborhood of $X_Q$; note $p_1^{-1}(U)= \\tilde\nU$. Define a deformation retraction of \n$\\tilde U$ onto\n$e(Q)$ by\n\\begin{equation*}\n\\tilde r_t(a,u,z) =\n\\begin{cases}\n(\\exp(\\frac{1}{t}\\log a), u, z) & \\text{for $t\\in (0,1]$,} \\\\\n(o_Q, u, z) & \\text{for $t=0$.}\n\\end{cases}\n\\end{equation*}\nThis descends to a deformation retraction $r_t$ of $U$ onto $X_Q$. Since\n$p_1|_{e(Q)}\\colon N_Q\\times X_Q \\to X_Q$ is admissible and $N_Q$ is\npath-connected, Lemma ~\\ref{lemAdmissibilityViaRetract} shows that\n$p_1|_{\\tilde U}$ is admissible.\n\nNow consider ${}_{{\\mathbb Q}}\\overline{X}^{\\tau}_\\infty$ and a stratum $X_{Q,\\tau}$,\nwhere $\\mathbf Q$ is $\\tau$-saturated. The inverse image $(p_2\\circ\np_1)^{-1}(X_{Q,\\tau})$ is $\\coprod_{\\P^\\dag = \\mathbf Q} e(P) \\subseteq\n\\overline{X}^{BS}_\\infty$; it is an open subset of the closed stratum\n$\\overline{e(Q)} = \\coprod_{\\P \\subseteq \\mathbf Q} e(P)$. For each $\\P$\nsuch that $\\P^\\dag = \\mathbf Q$, we can write $e(P) = N_P\\times X_P = N_P\n\\times X_{Q,\\tau} \\times W_{P,\\tau}$ by \\eqref{eqnBoundaryDecomposition}.\nThus $(p_2\\circ p_1)^{-1}(X_{Q,\\tau}) = Z_Q\\times X_{Q,\\tau}$, where $Z_Q =\n\\coprod_{\\P^\\dag = \\mathbf Q} ( N_P \\times W_{P,\\tau})$. Note that\n$N_Q\\times W_{Q,\\tau}$ is dense in $Z_Q$, so $Z_Q$ is path-connected.\n\nFor $X_{Q,\\tau}\\subset {}_{{\\mathbb Q}}\\overline{X}^{\\tau}_\\infty$, the construction\nof $\\tilde U$ is more subtle than in the case of\n$\\overline{X}^{RBS}_\\infty$. The theory of tilings \\cite{sap1}*{Theorem\n ~8.1} describes a neighborhood in $\\overline{X}^{BS}_\\infty$ of the\nclosed stratum $\\overline{e(Q)}$ which is piecewise-analytically\ndiffeomorphic to $\\overline A_Q(1)\\times \\overline{e(Q)}$. (Note however\nthat the induced decomposition on the part of this neighborhood in\n$X_\\infty(Q)$ does \\emph{not} in general agree with that of\n\\eqref{Pcorner}.) We thus obtain a neighborhood $\\tilde U$ of $(p_2\\circ\np_1)^{-1}(X_{Q,\\tau}) = Z_Q \\times X_{Q,\\tau}$ in $\\overline{X}^{BS}_\\infty$ and a\npiecewise-analytic diffeomorphism $\\tilde U \\cong \\overline A_Q(1)\\times\nZ_Q \\times X_{Q,\\tau}$; let $U = p_2\\circ p_1(\\tilde U)$ and note \n$(p_2\\circ p_1)^{-1}(U) = \\tilde U$. Since $Z_Q$ is\npath-connected, we proceed as in the $\\overline{X}^{RBS}_\\infty$ case.\n\\end{proof}\n\n\\begin{rem}\nIt is proved in \\cite{ji2} that every Satake compactification\n$\\overline{X}^{\\tau}_\\infty$ of a symmetric space $X_\\infty$ is a topological ball\nand hence contractible. 
Though the partial Satake compactification\n${}_{{\\mathbb Q}}\\overline{X}^{\\tau}_\\infty$ is contained in\n$\\overline{X}^{\\tau}_\\infty$ as a subset, their topologies are different and\nthis inclusion is not a topological embedding. Hence, it does not follow\nthat ${}_{{\\mathbb Q}}\\overline{X}^{\\tau}_\\infty$ is contractible or that a path in\n${}_{{\\mathbb Q}}\\overline{X}^{\\tau}_\\infty$ can be retracted into the interior. In\nfact, it is not known if ${}_{{\\mathbb Q}}\\overline{X}^{\\tau}_\\infty$ is weakly\ncontractible.\n\\end{rem}\n\n\\begin{prop}\\label{propAdmissibleNeatCase}\nFor any neat $S$-arithmetic subgroup $\\Gamma$, the action of $\\Gamma$ on\n$\\overline{X}^{RBS}$ and on ${}_{{\\mathbb Q}}\\overline{X}^{\\tau}$ is admissible.\n\\end{prop}\n\n\\begin{proof}\nLet $Y = \\overline{X}^{RBS}$ or ${}_{{\\mathbb Q}}\\overline{X}^{\\tau}$ and let\n$p\\colon Y \\to \\Gamma\\backslash Y$ be the quotient map, which in this case is\nopen. It suffices to find for any point $x\\in Y$ an open neighborhood $U$\nsuch that $p|_U$ is admissible. For then $p|_{\\Gamma U}$\nis admissible and hence, by Lemma ~\\ref{lemAdmissibilityIsLocal}, $p$ is\nadmissible.\n\nWe proceed by induction on $\\vert S_{f}\\vert$ and we suppose first that\n$S_{f}=\\emptyset$.\n\nSuppose $x$ belongs to the stratum $X_Q$ of $\\overline{X}^{RBS}_\\infty$. Since\n$\\Gamma$ is neat, $\\Gamma_{L_Q}$ is torsion-free. Thus we can choose a\nrelatively compact neighborhood $O_Q$ of $x$ in $X_Q$ so that\n$p|_{O_Q}\\colon O_Q \\to p(O_Q)$ is a homeomorphism. Let $U =\np_1(\\overline A_Q(s) \\times N_Q \\times O_Q) \\subseteq \\overline{X}^{RBS}_\\infty$\nwhere $s>0$; this is a smaller version of the set $U$ constructed in the\nproof of Proposition ~\\ref{propSimply}. By reduction theory, we can choose\n$s$ sufficiently large so that the identifications induced by $\\Gamma$ on $U$\nagree with those induced by $\\Gamma_Q$ \\cite{Zu3}*{(1.5)}. Since $\\Gamma_Q\n\\subseteq N_Q \\widetilde M_Q $, it acts only on the last two factors of\n$\\overline A_Q \\times N_Q \\times X_Q$. Thus the deformation retraction\n$r_t$ of $U$ onto $O_Q$ (from the proof of Proposition ~\\ref{propSimply})\ndescends to a deformation retraction of $p(U)$ onto $p(O_Q)=O_Q$.\nNow apply Lemma ~\\ref{lemAdmissibilityViaRetract} to see that $p|_U$ is\nadmissible.\n\nFor $x$ in the stratum $X_{Q,\\tau}$ of ${}_{{\\mathbb Q}}\\overline{X}^{\\tau}_\\infty$, we\nagain emulate the construction of $U$ from the proof of Proposition\n~\\ref{propSimply}. Specifically let $U= (p_2\\circ p_1)(\\overline\nA_Q(s)\\times Z_Q \\times O_{Q,\\tau})$ where $O_{Q,\\tau}$ is a relatively\ncompact neighborhood of $x$ in $X_{Q,\\tau}$ such that\n$p|_{O_{Q,\\tau}}\\colon O_{Q,\\tau} \\to p(O_{Q,\\tau})$ is a homeomorphism;\nsuch a $O_{Q,\\tau}$ exists since $\\Gamma_{H_{Q,\\tau}}$ is neat and hence\ntorsion-free. By \\cite{sap1}*{Theorem ~8.1}, the identifications induced\nby $\\Gamma$ on $U$ agree with those induced by $\\Gamma_Q$ and these are\nindependent of the $\\overline A_Q(s)$ coordinate. Thus the deformation\nretraction $r_t$ descends to $p(U)$ and we proceed as above.\n\nNow suppose that $v \\in S_{f}$ and let $S' = S \\setminus\\{v\\}$. We\nconsider $Y = \\overline{X}^{RBS}$ which we write as\n$\\overline{X}^{RBS}_{S'}\\times X_{v}$; the case $Y =\n{}_{{\\mathbb Q}}\\overline{X}^{\\tau}$ is identical. 
Following \\cite{BS2}*{(6.8)}, for\neach face $F$ of $X_{v}$ let $x_{F}$ be the barycenter of $F$ and let\n$V(F)$ be the open star of $x_{F}$ in the barycentric subdivision of\n$X_{v}$. The sets $V(F)$ form an open cover of $X_{v}$. For any $\\gamma\n\\in \\Gamma$, $\\gamma V(F) = V(\\gamma F)$. If $F_{1} \\neq F_{2}$ are two\nfaces with $\\dim F_{1} = \\dim F_{2}$, then $V(F_{1}) \\cap V(F_{2}) =\n\\emptyset$. It follows that\n\\begin{equation*}\n\\gamma V(F)\\cap V(F) \\neq \\emptyset \\quad \\Longleftrightarrow \\quad\n\\gamma\\in\\Gamma_{F}\\ ,\n\\end{equation*}\nwhere $\\Gamma_F = \\Gamma \\cap \\mathbf G(k_v)_F$. It follows from\n\\S\\ref{ssectStabilizersBuilding} that $\\Gamma_F$ fixes $F$ pointwise (since\n$\\mathbf G(k_v)_F$ does) and is a neat $S'$-arithmetic subgroup (since $\\mathbf G(k_v)_F$ is\na compact open subgroup of $\\mathbf G(k_v)$).\n\nLet $U = \\overline{X}^{RBS}_{S'}\\times V(F)$ for some open face $F$ of\n$X_{v}$. Define a deformation retraction $r_t$ of $U$ onto\n$\\overline{X}^{RBS}_{S'}\\times F$ by $r_t(w,z) = (w, tz + (1-t)r_F(z))$,\nwhere $r_F(z)$ is the unique point in $F$ which is closest to $z\\in V(F)$.\nThe map $r_t$ is $\\Gamma_F$-equivariant since $\\Gamma_{F}$ fixes $F$\npointwise and acts by isometries. So $r_{t}$ descends to a deformation\nretraction of $p(U)$ onto $(\\Gamma_F\\backslash\n\\overline{X}^{RBS}_{S'})\\times F$. The remaining hypothesis of Lemma\n~\\ref{lemAdmissibilityViaRetract} is satisfied since $r_0(\\gamma w, \\gamma\nz) = r_0(\\gamma w,z)$ for $\\gamma \\in \\Gamma_F$. Since\n$\\overline{X}^{RBS}_{S'}\\times F \\to (\\Gamma_F\\backslash\n\\overline{X}^{RBS}_{S'})\\times F$ is admissible by induction, the lemma\nimplies that $p|_U$ is admissible.\n\\end{proof}\n\nTheorem ~\\ref{thmMainArithmetic} holds if $\\Gamma$ is neat by combining \nPropositions ~\\ref{propGrosche}\\ref{itemGrosche},\n~\\ref{propDiscontinuousRBS}, ~\\ref{propDiscontinuousSatake},\n\\ref{propSimply}, and \\ref{propAdmissibleNeatCase}.\n\n\\begin{cor}\n\\label{corAdmissibleSubgroupNeatCase}\nFor any neat $S$-arithmetic subgroup $\\Gamma$, the actions of $E\\Gamma$ on\n$\\overline{X}^{RBS}$ and $E_\\tau\\Gamma$ on ${}_{{\\mathbb Q}}\\overline{X}^{\\tau}$ are admissible.\n\\end{cor}\n\\begin{proof}\nBy Proposition ~\\ref{propGammaFixedIsEGamma} the action of $\\Gamma\/E\\Gamma$ on\n$E\\Gamma \\backslash \\overline{X}^{RBS}$ is free and by Proposition\n~\\ref{propDiscontinuousRBS} it is discontinuous. It follows that $E\\Gamma\n\\backslash \\overline{X}^{RBS} \\to (\\Gamma\/E\\Gamma)\\backslash (E\\Gamma\\backslash\n\\overline{X}^{RBS}) = \\Gamma \\backslash \\overline{X}^{RBS}$ is a covering\nspace (in fact a regular covering space) and thus $E\\Gamma $ acts admissibly\nif and only if $\\Gamma$ acts admissibly.\nNow apply the proposition. The case of ${}_{{\\mathbb Q}}\\overline{X}^{\\tau}$ is\ntreated similarly.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem ~\\textup{\\ref{thmMainArithmetic}}]\nLet $\\Gamma'\\subseteq \\Gamma$ be a normal neat subgroup of finite index. 
The\nidea in the general case is to factor $\\overline{X}^{RBS}\\to \\Gamma\\backslash\n\\overline{X}^{RBS}$ as\n\\begin{equation*}\n\\overline{X}^{RBS}\\to E\\Gamma'\\backslash \\overline{X}^{RBS} \\to\n(\\Gamma\/E\\Gamma')\\backslash (E\\Gamma'\\backslash \\overline{X}^{RBS}) = \\Gamma\\backslash\n\\overline{X}^{RBS}\n\\end{equation*}\nand apply Proposition ~\n\\ref{propGrosche}\\ref{itemGrosche} to the first map and Proposition ~\n\\ref{propGrosche}\\ref{itemArmstrong} to the second map.\n\nBy Proposition ~\\ref{propGammaFixedIsEGamma}, $\\Gamma'_{f,RBS} = E\\Gamma'$ and\nhence $(E\\Gamma')_{f,RBS} = E\\Gamma'$. Thus $E\\Gamma' \\backslash\n\\overline{X}^{RBS}$ is simply connected by Propositions\n~\\ref{propDiscontinuousRBS}, \\ref{propSimply},\n\\ref{propGrosche}\\ref{itemGrosche}, and Corollary\n\\ref{corAdmissibleSubgroupNeatCase}. We now claim that $E\\Gamma' \\backslash\n\\overline{X}^{RBS}$ is locally compact. To see this, note that $E\\Gamma'\n\\backslash \\overline{X}^{BS}$ is locally compact since it is triangulable\n\\cite{BS2}*{(6.10)}. Furthermore the fibers of $p_1'\\colon E\\Gamma'\n\\backslash \\overline{X}^{BS} \\to E\\Gamma' \\backslash \\overline{X}^{RBS}$ have\nthe form $\\Gamma'_{N_P}\\backslash N_P$ which are compact. The claim follows.\nWe can now apply Proposition ~\\ref{propGrosche}\\ref{itemArmstrong} to\n$\\Gamma\\backslash \\overline{X}^{RBS} = (\\Gamma\/E\\Gamma' )\\backslash (E\\Gamma'\n\\backslash\\overline{X}^{RBS})$ and find that $\\pi_1(\\Gamma\\backslash\n\\overline{X}^{RBS}) \\cong (\\Gamma\/E\\Gamma' ) \/ (\\Gamma\/E\\Gamma' )_{f,RBS} \\cong \\Gamma \/\n\\Gamma_{f,RBS}$ as desired. Furthermore the proof shows that the isomorphism\nis induced by the natural morphism $\\Gamma \\to \\pi_1(\\Gamma\\backslash\n\\overline{X}^{RBS})$.\n\nA similar proof using $E_\\tau\\Gamma'$ instead of $E\\Gamma'$ treats the case of\n$\\Gamma\\backslash {}_{{\\mathbb Q}}\\overline{X}^{\\tau}$; one only needs to observe\nthat the fibers of $p_2'\\colon E_\\tau\\Gamma' \\backslash \\overline{X}^{RBS} \\to\nE_\\tau\\Gamma' \\backslash {}_{{\\mathbb Q}}\\overline{X}^{\\tau}$ have the form\n$\\Gamma'_{L_{P,\\tau}} \\backslash \\overline{W}_{P,\\tau}^{RBS}$ which are\ncompact.\n\\end{proof}\n\n\\begin{bibdiv}\n\\begin{biblist}\n\\bib{Armstrong}{article}{\n author={Armstrong, M. A.},\n title={The fundamental group of the orbit space of a discontinuous group},\n journal={Proc. Cambridge Philos. Soc.},\n volume={64},\n date={1968},\n pages={299--301},\n}\n\\bib{BB}{article}{\n author={Baily, W. L., Jr.},\n author={Borel, A.},\n title={Compactification of arithmetic quotients of bounded symmetric\n domains},\n journal={Ann. of Math. (2)},\n volume={84},\n date={1966},\n pages={442--528},\n issn={0003-486X},\n}\n\\bib{BMS}{article}{\n author={Bass, H.},\n author={Milnor, J.},\n author={Serre, J.-P.},\n title={Solution of the congruence subgroup problem for ${\\rm\n SL}\\sb{n}\\,(n\\geq 3)$ and ${\\rm Sp}\\sb{2n}\\,(n\\geq 2)$},\n journal={Inst. Hautes \\'Etudes Sci. Publ. Math.},\n volume={33},\n date={1967},\n pages={59--137},\n issn={0073-8301},\n}\n\\bib{Borel}{book}{\n author={Borel, Armand},\n title={Introduction aux groupes arithm\\'etiques},\n series={Actualit\\'es Scientifiques et Industrielles, No. 1341},\n publisher={Hermann},\n place={Paris},\n date={1969},\n pages={125},\n}\n\\bib{BorelHarishChandra}{article}{\n author={Borel, Armand},\n author={Harish-Chandra},\n title={Arithmetic subgroups of algebraic groups},\n journal={Ann. of Math. 
(2)},\n volume={75},\n date={1962},\n pages={485--535},\n issn={0003-486X},\n}\n\\bib{Borel-Ji}{book}{\n author={Borel, Armand},\n author={Ji, Lizhen},\n title={Compactifications of symmetric and locally symmetric spaces},\n series={Mathematics: Theory \\& Applications},\n publisher={Birkh\\\"auser},\n place={Boston},\n date={2006},\n pages={xvi+479},\n isbn={978-0-8176-3247-2},\n isbn={0-8176-3247-6},\n}\n\\bib{Borel-Serre}{article}{\n author={Borel, A.},\n author={Serre, J.-P.},\n title={Corners and arithmetic groups},\n note={Avec un appendice: Arrondissement des vari\\'et\\'es \\`a coins, par\n A. Douady et L. H\\'erault},\n journal={Comment. Math. Helv.},\n volume={48},\n date={1973},\n pages={436--491},\n issn={0010-2571},\n}\n\n\\bib{BS2}{article}{\n author={Borel, A.},\n author={Serre, J.-P.},\n title={Cohomologie d'immeubles et de groupes $S$-arithm\\'etiques},\n journal={Topology},\n volume={15},\n date={1976},\n number={3},\n pages={211--232},\n issn={0040-9383},\n}\n\\bib{Bourbaki}{book}{\n author={Bourbaki, N.},\n title={\\'El\\'ements de math\\'ematique. Fasc. XXXIV. Groupes et alg\\`ebres\n de Lie. Chapitre IV: Groupes de Coxeter et syst\\`emes de Tits. Chapitre\n V: Groupes engendr\\'es par des r\\'eflexions. Chapitre VI: syst\\`emes de\n racines},\n series={Actualit\\'es Scientifiques et Industrielles, No. 1337},\n publisher={Hermann},\n place={Paris},\n date={1968},\n pages={288 pp. (loose errata)},\n}\n\\bib{BourbakiTopologiePartOne}{book}{\n author={Bourbaki, N.},\n title={\\'El\\'ements de math\\'ematique. Topologie g\\'en\\'erale. Chapitres\n 1 \\`a 4},\n publisher={Hermann},\n place={Paris},\n date={1971},\n pages={xv+357 pp. (not consecutively paged)},\n}\n\\bib{BH}{book}{\n author={Bridson, Martin R.},\n author={Haefliger, Andr{\\'e}},\n title={Metric spaces of non-positive curvature},\n series={Grundlehren der Mathematischen Wissenschaften},\n volume={319},\n publisher={Springer-Verlag},\n place={Berlin},\n date={1999},\n pages={xxii+643},\n isbn={3-540-64324-9},\n}\n\\bib{BruhatTits1}{article}{\n author={Bruhat, F.},\n author={Tits, J.},\n title={Groupes r\\'eductifs sur un corps local: I. Donn\\'ees radicielles valu\\'ees},\n journal={Inst. Hautes \\'Etudes Sci. Publ. Math.},\n volume={41},\n date={1972},\n pages={5--251},\n issn={0073-8301},\n}\n\\bib{BruhatTits2}{article}{\n author={Bruhat, F.},\n author={Tits, J.},\n title={Groupes r\\'eductifs sur un corps local: II. Sch\\'emas en groupes.\n Existence d'une donn\\'ee radicielle valu\\'ee},\n journal={Inst. Hautes \\'Etudes Sci. Publ. Math.},\n volume={60},\n date={1984},\n pages={197--376},\n issn={0073-8301},\n}\n\\bib{Cass}{article}{\n author={Casselman, W. A.},\n title={Geometric rationality of Satake compactifications},\n conference={\n title={Algebraic groups and Lie groups},\n },\n book={\n series={Austral. Math. Soc. Lect. Ser.},\n volume={9},\n publisher={Cambridge Univ. Press},\n place={Cambridge},\n },\n date={1997},\n pages={81--103},\n}\n\\bib{Chevalley}{article}{\n author={Chevalley, Claude},\n title={Deux th\\'eor\\`emes d'arithm\\'etique},\n journal={J. Math. Soc. 
Japan},\n volume={3},\n date={1951},\n pages={36--44},\n}\n\\bib{ge}{book}{\n author={van der Geer, Gerard},\n title={Hilbert modular surfaces},\n series={Ergebnisse der Mathematik und ihrer Grenzgebiete (3)},\n volume={16},\n publisher={Springer-Verlag},\n place={Berlin},\n date={1988},\n pages={x+291},\n isbn={3-540-17601-2},\n}\n\\bib{Gille}{article}{\n author={Gille, Philippe},\n title={Le probl\\`eme de Kneser-Tits},\n part={Expos\\'e 983},\n book={\n series={Ast\\'erisque},\n volume={326},\n date={2009},\n title={S\\'eminaire Bourbaki},\n subtitle={Volume 2007\/2008, Expos\\'e 982--996},\n },\n pages={39--82},\n}\n\\bib{GHM}{article}{\n author={Goresky, M.},\n author={Harder, G.},\n author={MacPherson, R.},\n title={Weighted cohomology},\n journal={Invent. Math.},\n volume={116},\n date={1994},\n pages={139--213},\n issn={0020-9910},\n}\n\\bib{Gro}{article}{\n author={Grosche, J{\\\"u}rgen},\n title={\\\"Uber die Fundamentalgruppen von Quotientenr\\\"aumen Siegelscher\n Modulgruppen},\n journal={J. Reine Angew. Math.},\n volume={281},\n date={1976},\n pages={53--79},\n issn={0075-4102},\n}\n\\bib{gro2}{article}{\n author={Grosche, J{\\\"u}rgen},\n title={\\\"Uber die Fundamentalgruppen von Quotientenr\\\"aumen Siegelscher\n und Hilbert-Siegelscher Modulgruppen},\n journal={Nachr. Akad. Wiss. G\\\"ottingen Math.-Phys. Kl. II},\n date={1976},\n number={9},\n pages={119--142},\n issn={0065-5295},\n}\n\\bib{GrunewaldSegal1}{article}{\n author={Grunewald, Fritz},\n author={Segal, Daniel},\n title={Some general algorithms. I. Arithmetic groups},\n journal={Ann. of Math. (2)},\n volume={112},\n date={1980},\n number={3},\n pages={531--583},\n issn={0003-486X},\n}\n\\bib{GrunewaldSegal2}{article}{\n author={Grunewald, Fritz},\n author={Segal, Daniel},\n title={Decision problems concerning $S$-arithmetic groups},\n journal={J. Symbolic Logic},\n volume={50},\n date={1985},\n number={3},\n pages={743--772},\n issn={0022-4812},\n}\n\\bib{hk}{article}{\n author={Heidrich, Holger},\n author={Kn{\\\"o}ller, Friedrich W.},\n title={\\\"Uber die Fundamentalgruppen Siegelscher Modulvariet\\\"aten vom\n Grade $2$},\n journal={Manuscripta Math.},\n volume={57},\n date={1987},\n number={3},\n pages={249--262},\n issn={0025-2611},\n}\n\\bib{hs}{article}{\n author={Hulek, K.},\n author={Sankaran, G. K.},\n title={The fundamental group of some Siegel modular threefolds},\n conference={\n title={Abelian varieties},\n address={Egloffstein},\n date={1993},\n },\n book={\n publisher={de Gruyer},\n place={Berlin},\n },\n date={1995},\n pages={141--150},\n}\n\\bib{HumphreysArithmetic}{book}{\n author={Humphreys, James E.},\n title={Arithmetic groups},\n series={Lecture Notes in Mathematics},\n volume={789},\n publisher={Springer},\n place={Berlin},\n date={1980},\n pages={vii+158},\n isbn={3-540-09972-7},\n}\n\\bib{ji}{article}{\n author={Ji, Lizhen},\n title={Buildings and their applications in geometry and topology},\n journal={Asian J. Math.},\n volume={10},\n date={2006},\n number={1},\n pages={11--80},\n issn={1093-6106},\n}\n\\bib{ji2}{article}{\n author={Ji, Lizhen},\n title={Satake and Martin compactifications of symmetric spaces are\n topological balls},\n journal={Math. Res. Lett.},\n volume={4},\n date={1997},\n number={1},\n pages={79--89},\n issn={1073-2780},\n}\n\\bib{kn}{article}{\n author={Kn{\\\"o}ller, F. W.},\n title={Die Fundamentalgruppen der Siegelschen Modulvariet\\\"aten},\n journal={Abh. Math. Sem. Univ. 
Hamburg},\n volume={57},\n date={1987},\n pages={203--213},\n issn={0025-5858},\n}\n\\bib{Landvogt}{book}{\n author={Landvogt, Erasmus},\n title={A compactification of the Bruhat-Tits building},\n series={Lecture Notes in Mathematics},\n volume={1619},\n publisher={Springer-Verlag},\n place={Berlin},\n date={1996},\n pages={viii+152},\n isbn={3-540-60427-8},\n}\n\\bib{Loukanidis}{thesis}{\n author={Loukanidis, Dimitrios},\n title={Bounded Generation of Certain Chevalley Groups},\n date={1995},\n type={Ph.D. Thesis},\n organization={University of Toronto},\n}\n\\bib{Lubotzky0}{article}{\n author={Lubotzky, Alexander},\n title={Free quotients and the congruence kernel of ${\\rm SL}_{2}$},\n journal={J. Algebra},\n volume={77},\n date={1982},\n number={2},\n pages={411--418},\n issn={0021-8693},\n doi={10.1016\/0021-8693(82)90263-0},\n}\n\\bib{Lubotzky}{article}{\n author={Lubotzky, Alexander},\n title={Subgroup growth and congruence subgroups},\n journal={Invent. Math.},\n volume={119},\n date={1995},\n number={2},\n pages={267--295},\n issn={0020-9910},\n}\n\\bib{Macdonald}{article}{\n author={Macdonald, I. G.},\n title={Affine root systems and Dedekind's $\\eta $-function},\n journal={Invent. Math.},\n volume={15},\n date={1972},\n pages={91--143},\n issn={0020-9910},\n}\n\\bib{Margulis}{article}{\n author={Margulis, G. A.},\n title={Finiteness of quotient groups of discrete subgroups},\n journal={Funct. Anal. Appl.},\n pages = {178-187},\n volume = {13},\n number = {3},\n date={1979},\n issn = {0016-2663},\n}\n\\bib{Melnikov2}{article}{\n author={Mel{\\cprime}nikov, O. V.},\n title={Normal divisors of free profinite groups},\n journal={Math. USSR-Izv.},\n volume={12},\n date={1978},\n number={1},\n pages={1--20},\n\n}\n\\bib{Murty}{article}{\n author={Murty, V. Kumar},\n title={Bounded and finite generation of arithmetic groups},\n conference={\n title={Number theory},\n address={Halifax, NS},\n date={1994},\n },\n book={\n series={CMS Conf. Proc.},\n volume={15},\n publisher={Amer. Math. Soc.},\n place={Providence, RI},\n },\n date={1995},\n pages={249--261},\n}\n\\bib{MurtyRamakrishnan}{article}{\n author={Murty, V. Kumar},\n author={Ramakrishnan, Dinakar},\n title={The Manin-Drinfel\\,\\cprime \\!d theorem and Ramanujan sums},\n journal={Proc. Indian Acad. Sci. Math. Sci.},\n volume={97},\n date={1987},\n pages={251--262},\n issn={0253-4142},\n}\n\\bib{PlatonovRapinchuk2}{article}{\n author={Platonov, V. P.},\n author={Rapinchuk, A. S.},\n title={Abstract properties of $S$-arithmetic groups and the congruence\n problem},\n journal={Izv. Ross. Akad. Nauk Ser. Mat.},\n volume={56},\n date={1992},\n number={3},\n pages={483--508},\n issn={0373-2436},\n translation={\n journal={Russian Acad. Sci. Izv. Math.},\n volume={40},\n date={1993},\n number={3},\n pages={455--476},\n issn={1064-5632},\n },\n}\n\\bib{PlatonovSaromet}{article}{\n author={Platonov, V. P.},\n author={{\\v{S}}aromet, A. A.},\n title={On the congruence problem for linear groups over arithmetic\n rings},\n journal={Dokl. Akad. Nauk BSSR},\n volume={16},\n date={1972},\n pages={393--396, 477},\n issn={0002-354X},\n translation={\n note={Selected papers in $K$-theory, Amer. Math. Soc.\n Transl., Series 2, vol. 154, Amer. Math. Soc., Providence, RI,\n 1992, pp. 1--5},\n },\n}\n\\bib{Prasad}{article}{\n author={Prasad, Gopal},\n title={Semi-simple groups and arithmetic subgroups},\n book={\n title={Proceedings of the International Congress of Mathematicians\n (Kyoto, 1990)},\n publisher={Math. Soc. 
Japan},\n place={Tokyo},\n },\n date={1991},\n pages={821--832},\n}\n\\bib{PrasadOnRaghunathan}{article}{\n author={Prasad, Gopal},\n title={On some work of Raghunathan},\n conference={\n title={Algebraic groups and arithmetic},\n },\n book={\n publisher={Tata Inst. Fund. Res.},\n place={Mumbai},\n },\n date={2004},\n pages={25--40},\n}\n\\bib{pr}{article}{\n author={Prasad, Gopal},\n author={Raghunathan, M. S.},\n title={On the congruence subgroup problem: determination of the\n ``metaplectic kernel''},\n journal={Invent. Math.},\n volume={71},\n date={1983},\n number={1},\n pages={21--42},\n issn={0020-9910},\n}\n\\bib{PrasadRapinchukSurvey}{article}{\n author={Prasad, Gopal},\n author={Rapinchuk, Andrei S.},\n title={Developements on the congruence subgroup problem after the work\n of Bass, Milnor and Serre},\n eprint={arXiv:0809.1622 [math.NT]},\n book={\n author={Milnor, John},\n title={Collected Papers of John Milnor. V. Algebra},\n editor={Bass, Hyman},\n editor={Lam, T. Y.},\n publisher={American Mathematical Society},\n place={Providence, RI},\n date={2011},\n },\n}\n\\bib{Ra1}{article}{\n author={Raghunathan, M. S.},\n title={On the congruence subgroup problem},\n journal={Inst. Hautes \\'Etudes Sci. Publ. Math.},\n volume={46},\n date={1976},\n pages={107--161},\n issn={0073-8301},\n}\n\\bib{Ra2}{article}{\n author={Raghunathan, M. S.},\n title={On the congruence subgroup problem. II},\n journal={Invent. Math.},\n volume={85},\n date={1986},\n number={1},\n pages={73--117},\n issn={0020-9910},\n}\n\\bib{R1}{article}{\n author={Rapinchuk, A. S.},\n title={Congruence subgroup problem for algebraic groups: old and new},\n conference={\n title={Journ\\'ees Arithm\\'etiques},\n address={Geneva},\n date={1991},\n },\n book={\n series={Ast\\'erisque},\n volume={209},\n date={1992},\n },\n pages={73--84},\n issn={0303-1179},\n}\n\\bib{R2}{article}{\n author={Rapinchuk, A. S.},\n title={The congruence subgroup problem},\n conference={\n title={Algebra, $K$-theory, groups, and education},\n address={New York},\n date={1997},\n },\n book={\n series={Contemp. Math.},\n volume={243},\n publisher={Amer. Math. Soc.},\n place={Providence, RI},\n },\n date={1999},\n pages={175--188},\n}\n\\bib{RemyThuillierWernerI}{article}{\n author={R{\\'e}my, Bertrand},\n author={Thuillier, Amaury},\n author={Werner, Annette},\n title={Bruhat-Tits theory from Berkovich's point of view. I. Realizations\n and compactifications of buildings},\n journal={Ann. Sci. \\'Ec. Norm. Sup\\'er. (4)},\n volume={43},\n date={2010},\n number={3},\n pages={461--554},\n issn={0012-9593},\n}\n\\bib{RemyThuillierWernerII}{article}{\n author={R{\\'e}my, Bertrand},\n author={Thuillier, Amaury},\n author={Werner, Annette},\n title={Bruhat-Tits theory from Berkovich's point of view. II. Satake\n compactifications of buildings},\n date={2009},\n eprint={\\tt arXiv:0907.3264 [math.GR]},\n}\n\\bib{san}{article}{\n author={Sankaran, G. K.},\n title={Fundamental group of locally symmetric varieties},\n journal={Manuscripta Math.},\n volume={90},\n date={1996},\n number={1},\n pages={39--48},\n issn={0025-2611},\n}\n\\bib{sap1}{article}{\n author={Saper, Leslie},\n title={Tilings and finite energy retractions of locally symmetric spaces},\n journal={Comment. Math. Helv.},\n volume={72},\n date={1997},\n number={2},\n pages={167--202},\n issn={0010-2571},\n}\n\\bib{sap2}{article}{\n author={Saper, Leslie},\n title={Geometric rationality of equal-rank Satake compactifications},\n journal={Math. Res. 
Lett.},\n volume={11},\n date={2004},\n number={5},\n pages={653--671},\n issn={1073-2780},\n}\n\\bib{sat1}{article}{\n author={Satake, Ichir{\\^o}},\n title={On representations and compactifications of symmetric Riemannian\n spaces},\n journal={Ann. of Math. (2)},\n volume={71},\n date={1960},\n pages={77--110},\n issn={0003-486X},\n}\n\\bib{sat2}{article}{\n author={Satake, Ichir{\\c{o}}},\n title={On compactifications of the quotient spaces for arithmetically\n defined discontinuous groups},\n journal={Ann. of Math. (2)},\n volume={72},\n date={1960},\n pages={555--580},\n issn={0003-486X},\n}\n\\bib{SerreBourbaki}{article}{\n author={Serre, Jean-Pierre},\n title={Groupes de congruence (d'apr\\`es H. Bass, H. Matsumoto, J.\n Mennicke, J. Milnor, C. Moore)},\n part={Expos\\'e 330},\n book={\n title={S\\'eminaire Bourbaki},\n subtitle={Volume 1966\/1967, Expos\\'e 313--330},\n publisher={W. A. Benjamin},\n address={New York},\n date={1968},\n },\n reprint={\n title={S\\'eminaire Bourbaki, Vol.\\ 10},\n publisher={Soc. Math. France},\n place={Paris},\n date={1995},\n note={pp. 275--291},\n },\n \n}\n\\bib{se3}{article}{\n author={Serre, Jean-Pierre},\n title={Le probl\\`eme des groupes de congruence pour $\\mathbf{SL}_2$},\n journal={Ann. of Math. (2)},\n volume={92},\n date={1970},\n pages={489--527},\n issn={0003-486X},\n}\n\\bib{SerreTrees}{book}{\n author={Serre, Jean-Pierre},\n title={Trees},\n series={Springer Monographs in Mathematics},\n note={Translated from the French original by John Stillwell;\n Corrected 2nd printing of the 1980 English translation},\n publisher={Springer-Verlag},\n place={Berlin},\n date={2003},\n pages={x+142},\n isbn={3-540-44237-5},\n}\n\\bib{tavgen}{article}{\n author={Tavgen{\\cprime}, O. I.},\n title={Bounded generability of Chevalley groups over rings of $S$-integer\n algebraic numbers},\n journal={Izv. Akad. Nauk SSSR Ser. Mat.},\n volume={54},\n date={1990},\n number={1},\n pages={97--122, 221--222},\n issn={0373-2436},\n translation={\n journal={Math. USSR, Izv.},\n volume={36},\n date={1991},\n number={1},\n pages={101--128},\n issn={0025-5726},\n },\n}\n\\bib{Tits}{article}{\n author={Tits, J.},\n title={Reductive groups over local fields},\n conference={\n title={Automorphic forms, representations and $L$-functions (Proc.\n Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977), Part\n 1},\n },\n book={\n series={Proc. Sympos. Pure Math., XXXIII},\n publisher={Amer. Math. Soc.},\n place={Providence, R.I.},\n },\n date={1979},\n pages={29--69},\n}\n\\bib{Venkataramana}{article}{\n author={Venkataramana, T. N.},\n title={On systems of generators of arithmetic subgroups of higher rank\n groups},\n journal={Pacific J. Math.},\n volume={166},\n date={1994},\n number={1},\n pages={193--212},\n issn={0030-8730},\n}\n\\bib{Werner}{article}{\n author={Werner, Annette},\n title={Compactifications of Bruhat-Tits buildings associated to linear\n representations},\n journal={Proc. Lond. Math. Soc. (3)},\n volume={95},\n date={2007},\n number={2},\n pages={497--518},\n issn={0024-6115},\n doi={10.1112\/plms\/pdm019},\n}\n\\bib{wo}{article}{\n author={Wohlfahrt, Klaus},\n title={An extension of F. Klein's level concept},\n journal={Illinois J. Math.},\n volume={8},\n date={1964},\n pages={529--535},\n issn={0019-2082},\n}\n\\bib{Zu1}{article}{\n author={Zucker, Steven},\n title={$L\\sb{2}$ cohomology of warped products and arithmetic groups},\n journal={Invent. 
Math.},\n volume={70},\n date={1982},\n number={2},\n pages={169--218},\n issn={0020-9910},\n}\n\\bib{Zu2}{article}{\n author={Zucker, Steven},\n title={Satake compactifications},\n journal={Comment. Math. Helv.},\n volume={58},\n date={1983},\n number={2},\n pages={312--343},\n issn={0010-2571},\n}\n\\bib{Zu3}{article}{\n author={Zucker, Steven},\n title={$L\\sb 2$-cohomology and intersection homology of locally symmetric\n varieties, II},\n journal={Compositio Math.},\n volume={59},\n date={1986},\n number={3},\n pages={339--398},\n issn={0010-437X},\n}\n\\end{biblist}\n\\end{bibdiv}\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Derivation of cooperative manager gradients} \\label{sec:derivation}\n\nIn this section, we derive an analytic expression of the gradient of the manager policy in a two-level goal-conditioned hierarchy with respect to both the losses associated with the high level and low level policies. In mathematical terms, we are trying to derive an expression for the weighted summation of the derivation of both losses, expressed as follows:\n\\begin{equation}\n \\nabla_{\\theta_m} J_m' = \\nabla_{\\theta_m} \\left( J_m + \\lambda J_w \\right) = \\nabla_{\\theta_m} J_m + \\lambda \\nabla_{\\theta_m} J_w\n \n\\end{equation}\nwhere $\\lambda$ is a weighting term and $J_m$ and $J_w$ are the expected returns assigned to the manager and worker policies, respectively. More specifically, these two terms are:\n\\begin{equation}\n \\resizebox{!}{12pt}{$\n J_m = \\mathbb{E}_{s\\sim p_\\pi} \\left[ \\sum_{t=0}^{T\/k} \\gamma^t r_m(s_{kt}) \\right] = \\int_{\\mathcal{S}} \\rho_0(s_t) V_m(s_t) ds_t\n $}\n\\end{equation}\\\\[-25pt]\n\\begin{equation}\n \\resizebox{!}{12pt}{$\n J_w = \\mathbb{E}_{s\\sim p_\\pi} \\left[ \\sum_{t=0}^k \\gamma^t r_w(s_t, g_t,\\pi_w(s_t,g_t)) \\right] = \\int_{\\mathcal{S}} \\rho_0(s_t) V_w(s_t, g_t) ds_t \n $}\n\\end{equation}\nHere, under the actor-critic formulation we replace the expected return under a given starting state with the value functions $V_m$ and $V_w$ This is integrated over the distribution of initial states $\\rho_0(\\cdot)$.\n\nFollowing the results by \\cite{silver2014deterministic}, we can express the first term in Eq.~\\eqref{eq:connected-gradient} as:\n\\begin{equation}\n \\nabla_{\\theta_m} J_m = \\mathbb{E}_{s\\sim p_\\pi} \\left[ \\nabla_a Q_m (s,a)|_{a=\\pi_m(s)}\\nabla_{\\theta_m} \\pi_m(s) \\right]\n\\end{equation}\n\nWe now expand the second term of the gradient into a function of the manager and worker actor ($\\pi_m$, $\\pi_w$) and critic ($Q_m$, $Q_w$) policies and their trainable parameters.\nIn order to propagate the loss associated with the worker through the policy parameters of the manager, we assume that the goals assigned to the worker $g_t$ are not fixed variables, but rather temporally abstracted outputs from the manager policy $\\pi_m$, and may be updated in between decisions by the manager via a transition function $h$. 
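To make this interaction concrete, we include a minimal schematic rollout below; it assumes a generic environment interface (function names and the call signature are illustrative only) and leaves the transition function $h$ abstract, since $h$ is defined formally in the equation that follows.
\\begin{verbatim}
def rollout(env, manager_pi, worker_pi, h, k, T):
    # Schematic two-level rollout: the manager proposes a goal every k
    # steps; in between, the goal is carried forward by the transition h.
    s = env.reset()                      # s_0
    s_prev, g_prev = None, None
    for t in range(T):
        if t % k == 0:
            g = manager_pi(s)            # g_t = pi_m(s_t) when t mod k == 0
        else:
            g = h(s_prev, g_prev, s)     # g_t = h(s_{t-1}, g_{t-1}, s_t)
        a = worker_pi(s, g)              # a_t = pi_w(s_t, g_t)
        s_prev, g_prev = s, g
        s = env.step(a)                  # environment returns s_{t+1}
\\end{verbatim}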
Mathematically, the goal transition is defined as: \n\\begin{equation}\n g_t(\\theta_m) = \n \\begin{cases}\n \\pi_m(s_t) & \\text{if } t \\text{ mod } k = 0 \\\\\n h(s_{t-1}, g_{t-1}(\\theta_m), s_t) & \\text{otherwise}\n \\end{cases}\n\\end{equation}\nFor the purposes of simplicity, we express the manager output term as $g_t$ from now on.\n\nWe begin by computing the partial derivative of the worker value function with respect to the parameters of the manager:\n\\begin{equation}\n \\resizebox{.9\\hsize}{!}{$\n \\begin{aligned}\n \\nabla_{\\theta_m} V_w(s_t, g_t) &=\\nabla_{\\theta_m} Q_w (s_t, g_t, \\pi_w(s_t, g_t)) \\\\\n &= \\nabla_{\\theta_m} \\bigg( r_w(s_t, g_t, \\pi_w(s_t,g_t)) +\\int_{\\mathcal{G}} \\int_{\\mathcal{S}} \\gamma p_w(s', g'| s_t,g_t, \\pi_w(s_t,g_t)) V_w(s',g')ds'dg' \\bigg) \\\\\n &= \\nabla_{\\theta_m} r_w(s_t,g_t,\\pi_w(s_t,g_t)) + \\gamma \\nabla_{\\theta_m} \\int_{\\mathcal{G}}\\int_{\\mathcal{S}} p_w(s',g'| s_t, g_t, \\pi_w(s_t,g_t)) V_w(s',g')ds'dg'\n \\end{aligned}\n $}\n \\label{eq:gradient_p1}\n\\end{equation}\nwhere $\\mathcal{G}$ and $\\mathcal{S}$ are the goal and environment state spaces, respectively, and $p_w(\\cdot, \\cdot | \\cdot, \\cdot, \\cdot)$ is the probability distribution of the next state from the perspective of the worker given the current state and action.\n\nExpanding the latter term, we get:\n\\begin{equation}\n \\begin{aligned}\n &p_w(s',g'|s_t,g_t,\\pi_w(s_t,g_t)) = p_{w,1} (g'| s', s_t,g_t,\\pi_w(s_t,g_t)) p_{w,2} (s'| s_t,g_t,\\pi_w(s_t,g_t))\n \\end{aligned}\n \\label{eq:pw_decompose}\n\\end{equation}\nThe first element, $p_{w1}$, is the probability distribution of the next goal, and is deterministic with respect to the conditional variables. Specifically:\n\\begin{equation}\n p_{w,1} (g'| s_t,g_t,\\pi_w(s_t,g_t)) = \n \\begin{cases}\n 1 & \\text{if } g' = g_{t+1} \\\\\n 0 & \\text{otherwise}\n \\end{cases}\n \\label{eq:pw1}\n\\end{equation}\n\nThe second element, $p_{w,2}$, is the state transition probability from the MDP formulation of the task, i.e.\n\\begin{equation}\n p_{w,2}(s'| s_t,g_t,\\pi_w(s_t,g_t)) = p (s'| s_t,\\pi_w(s_t,g_t))\n \\label{eq:pw2}\n\\end{equation}\n\nCombining Eq.~\\eqref{eq:pw_decompose}-\\eqref{eq:pw2} into Eq.~\\eqref{eq:gradient_p1}, we get:\n\\begin{equation} \\label{eq:simplified-next-step-value}\n \\resizebox{.9\\hsize}{!}{$\n \\begin{aligned}\n \\nabla_{\\theta_m} V_w(s_t,g_t) &=\\nabla_{\\theta_m} r_w(s_t,g_t,\\pi_w(s_t,g_t)) \\\\\n &\\quad + \\gamma \\nabla_{\\theta_m} \\int_{\\mathcal{G}}\\int_{\\mathcal{S}}\\bigg( p_{w,1} (g'| s', s_t,g_t,\\pi_w(s_t,g_t)) p_{w,2} (s'| s_t,g_t,\\pi_w(s_t,g_t)) V_w(s',g') ds'dg'\\bigg) \\\\\n %\n &= \\nabla_{\\theta_m} r_w(s_t,g_t,\\pi_w(s_t,g_t)) \\\\\n &\\quad + \\gamma \\nabla_{\\theta_m} \\int_{\\mathcal{G}\\cap \\{g_{t+1}\\}}\\int_{\\mathcal{S}} 1 \\cdot p (s'| s_t,\\pi_w(s_t,g_t)) V_w(s',g') ds'dg' \\\\\n &\\quad + \\gamma \\nabla_{\\theta_m} \\int_{(\\mathcal{G}\\cap \\{g_{t+1}\\})^c}\\int_{\\mathcal{S}} 0 \\cdot p (s'| s_t,\\pi_w(s_t,g_t)) V_w(s',g') ds'dg' \\\\\n %\n &= \\nabla_{\\theta_m} r_w(s_t,g_t,\\pi_w(s_t,g_t)) + \\gamma \\nabla_{\\theta_m} \\int_{\\mathcal{S}} p(s'| s_t,\\pi_w(s_t,g_t)) V_w(s',g_{t+1})ds'\n \\end{aligned}\n $}\n\\end{equation}\n\nContinuing the derivation of $\\nabla_{\\theta_m}V_w$ from Eq.~\\eqref{eq:simplified-next-step-value}, we get,\n\\begin{equation} \\label{eq:continue-derivatione}\n \\resizebox{.9\\hsize}{!}{$\n \\begin{aligned}\n \\nabla_{\\theta_m} V_w(s_t,g_t) &= \\nabla_{\\theta_m} r_w(s_t,g_t,\\pi_w(s_t,g_t)) 
+\\gamma \\nabla_{\\theta_m} \\int_{\\mathcal{S}} p(s'| s_t,\\pi_w(s_t,g_t)) V_w(g_{t+1}, s')ds' \\\\\n %\n &= \\nabla_{\\theta_m} r_w(s_t,g_t,\\pi_w(s_t,g_t)) +\\gamma \\int_{\\mathcal{S}} \\nabla_{\\theta_m} p(s'| s_t,\\pi_w(s_t,g_t)) V_w(g_{t+1}, s')ds' \\\\\n %\n &= \\nabla_{\\theta_m} g_t \\nabla_g r_w(s_t,g,\\pi_w(s_t,g_t))|_{g=g_t} \\\\\n &\\quad + \\nabla_{\\theta_m}g_t \\nabla_g \\pi_w (s_t,g)|_{g=g_t} \\nabla_a r_w(s_t,g_t,a)|_{a=\\pi_w(s_t,g_t)} \\\\\n &\\quad +\\gamma\\int_\\mathcal{S} \\bigg(V_w(s',g_{t+1})\\nabla_{\\theta_m} g_t \\nabla_g \\pi_w(s_t,g)|_{g=g_t} \\nabla_a p(s'\\vert s_t,a)|_{a=\\pi_w(s_t,g_t)}ds'\\bigg)\\\\\n &\\quad +\\gamma\\int_\\mathcal{S}p(s'\\vert s_t,\\pi_w(s_t,g_t))\\nabla_{\\theta_m} V_w(s',g_{t+1}) ds'\\\\\n %\n &= \\nabla_{\\theta_m} g_t \\nabla_g \\bigg(r_w(s_t,g,\\pi_w(s_t,g_t)) \\\\\n &\\quad \\quad \\quad \\quad \\quad \\quad + \\pi_w (s_t,g) \\nabla_a r_w(s_t,g_t,a)|_{a=\\pi_w(s_t,g_t)} \\vphantom{\\int} \\\\\n &\\quad \\quad \\quad \\quad \\quad \\quad + \\gamma\\int_\\mathcal{S} V_w(s',g_{t+1}) \\pi_w(s_t,g) \\nabla_a p(s'\\vert s_t,a)|_{a=\\pi_w(s_t,g_t)}ds' \\bigg) \\bigg\\rvert_{g=g_t}\\\\\n &\\quad +\\gamma\\int_\\mathcal{S}p(s'\\vert s_t,\\pi_w(s_t,g_t))\\nabla_{\\theta_m} V_w(s',g_{t+1}) ds'\\\\\n %\n &= \\nabla_{\\theta_m} g_t \\nabla_g \\bigg(r_w(s_t,g,\\pi_w(s_t,g_t)) \\\\\n &\\quad \\quad \\quad \\quad \\quad \\quad + \\pi_w (s_t,g) \\nabla_a \\bigg( r_w(s_t,g_t,a) + \\gamma\\int_\\mathcal{S} V_w(s',g_{t+1}) p(s'\\vert s_t,a)ds' \\bigg)\\bigg\\rvert_{a=\\pi_w(s_t,g_t)} \\bigg) \\bigg\\rvert_{g=g_t}\\\\\n &\\quad + \\gamma\\int_\\mathcal{S}p(s'\\vert s_t,\\pi_w(g_t, s_t))\\nabla_{\\theta_m} V_w(s',g_{t+1}) ds'\\\\\n &= \\nabla_{\\theta_m} g_t \\nabla_g \\bigg(r_w(s_t,g,\\pi_w(s_t,g_t)) + \\pi_w (s_t,g) \\nabla_a Q_w(s_t,g_t,a)|_{a=\\pi_w(s_t,g_t)}\\vphantom{\\int} \\bigg) \\bigg\\rvert_{g=g_t}\n \\\\\n &\\quad + \\gamma\\int_\\mathcal{S}p(s'\\vert s_t,\\pi_w(s_t,g_t))\\nabla_{\\theta_m} V_w(s',g_{t+1}) ds'\n \\end{aligned}\n $}\n\\end{equation} \n\nIterating this formula, we have,\n\\begin{equation}\n \\resizebox{.9\\hsize}{!}{$\n \\begin{aligned}\n \\nabla_{\\theta_m} V_w(s_t,g_t) &= \\nabla_{\\theta_m} g_t \\nabla_g \\bigg(r_w(s_t,g,\\pi_w(s_t,g_t)) + \\pi_w (s_t,g) \\nabla_a Q_w(s_t,g_t,a)|_{a=\\pi_w(s_t,g_t)}\\vphantom{\\int} \\bigg) \\bigg\\rvert_{g=g_t}\\\\\n &\\quad +\\gamma\\int_\\mathcal{S}p(s_{t+1}\\vert s_t,\\pi_w(s_t,g_t))\\nabla_{\\theta_m} V_w(s_{t+1},g_{t+1}) ds_{t+1} \\\\\n %\n &= \\nabla_{\\theta_m} g_t \\nabla_g \\bigg(r_w(s_t,g,\\pi_w(s_t,g_t)) + \\pi_w (s_t,g) \\nabla_a Q_w(s_t,g_t,a)|_{a=\\pi_w(s_t,g_t)}\\vphantom{\\int} \\bigg) \\bigg\\rvert_{g=g_t}\n \\quad \\\\\n &\\quad +\\gamma\\int_\\mathcal{S}p(s_{t+1}\\vert s_t,\\pi_w(s_t,g_t)) \\nabla_{\\theta_m} g_{t+1} \\nabla_g \\bigg(r_w(s_{t+1},g,\\pi_w(s_{t+1},g_{t+1})) \\vphantom{\\int} \\\\\n &\\quad \\quad \\quad \\quad \\quad + \\pi_w (s_{t+1},g) \\nabla_a Q_w(s_{t+1},g_{t+1},a)|_{a=\\pi_w(s_{t+1},g_{t+1})}\\vphantom{\\int} \\bigg) \\bigg\\rvert_{g=g_{t+1}}ds_{t+1} \\\\\n & \\quad +\\gamma^2 \\int_\\mathcal{S}\\int_\\mathcal{S} \\bigg( p(s_{t+1}\\vert s_t,\\pi_w(s_t,g_t)) p(s_{t+2}\\vert s_{t+1},\\pi_w(g_{t+1}, s_{t+1}))\\\\\n &\\quad \\quad \\quad \\quad \\quad \\quad \\quad \\nabla_{\\theta_m} V_w(s_{t+2},g_{t+2}) ds_{t+2} ds_{t+1} \\bigg)\\\\\n \n & \\hspace{45mm} \\vdots\\\\\n &= \\sum_{n=0}^{\\infty} \\gamma^n \\underbrace{\\int_\\mathcal{S} \\cdots \\int_\\mathcal{S}}_{n \\text{ times}} \\left(\\prod_{k=0}^{n-1} 
\nTaking the gradient of the expected worker value function, we get,\n\\begin{small}\n\\begin{equation}\n \\resizebox{.9\\hsize}{!}{$\n \\begin{aligned}\n \\nabla_{\\theta_m} J_w &= \\nabla_{\\theta_m} \\int_{\\mathcal{S}} \\rho_0(s_0) V_w(s_0, g_0) ds_0 \\\\\n %\n &= \\int_{\\mathcal{S}} \\rho_0(s_0) \\nabla_{\\theta_m} V_w(s_0, g_0) ds_0 \\\\\n %\n &= \\int_{\\mathcal{S}} \\rho_0(s_0) \\sum_{n=0}^{\\infty} \\gamma^n \\underbrace{\\int_\\mathcal{S} \\cdots \\int_\\mathcal{S}}_{n \\text{ times}} \\Bigg[\\left(\\prod_{k=0}^{n-1} p(s_{k+1}|s_k,\\pi_w(s_k,g_k)) \\right) \\nabla_{\\theta_m} g_n \\\\\n &\\quad \\quad \\quad \\quad \\times \\nabla_g \\bigg(r_w(s_n,g,\\pi_w(s_n,g_n))\\vphantom{\\int} + \\pi_w (s_n,g) \\nabla_a Q_w(s_n,g_n,a)|_{a=\\pi_w(s_n,g_n)}\\vphantom{\\int} \\bigg)\\Bigg] \\bigg\\rvert_{g=g_n} ds_n\\cdots ds_0 \\\\\n %\n &= \\sum_{n=0}^{\\infty} \\underbrace{\\int_\\mathcal{S} \\cdots \\int_\\mathcal{S}}_{n+1 \\text{ times}} \\gamma^n p_{\\theta_m, \\theta_w, n}(\\tau) \\nabla_{\\theta_m} g_n\n \\nabla_g \\bigg(r_w(s_n,g,\\pi_w(s_n,g_n))\\vphantom{\\int}\\\\\n &\\quad \\quad \\quad \\quad + \\pi_w (s_n,g) \\nabla_a Q_w(s_n,g_n,a)|_{a=\\pi_w(s_n,g_n)}\\vphantom{\\int} \\bigg) \\bigg\\rvert_{g=g_n} ds_n\\cdots ds_0 \\\\\n %\n &= \\mathbb{E}_{\\tau \\sim p_{\\theta_m, \\theta_w}(\\tau)} \\bigg[ \\nabla_{\\theta_m} g_t \\nabla_g \\bigg(r_w(s_t,g,\\pi_w(s_t,g_t)) + \\pi_w (s_t,g) \\nabla_a Q_w(s_t,g_t,a)|_{a=\\pi_w(s_t,g_t)}\\vphantom{\\int} \\bigg) \\bigg\\rvert_{g=g_t} \\bigg]\n \\end{aligned}\n $}\n\\end{equation}\n\\end{small}\nwhere $\\tau=(s_0, a_0, s_1, a_1, \\dots, s_n)$ is a trajectory and $p_{\\theta_m, \\theta_w, n}(\\tau)$ is the (improper) discounted probability of witnessing a trajectory under the policy parameters $\\theta_m$ and $\\theta_w$.\n\nThe final form of the connected gradient formulation is then:\n\\begin{equation}\n \\resizebox{.9\\hsize}{!}{$\n \\begin{aligned}\n \\nabla_{\\theta_m} J_m' &= \\mathbb{E}_{s\\sim p_\\pi} \\left[ \\nabla_a Q_m (s,a)|_{a=\\pi_m(s)}\\nabla_{\\theta_m} \\pi_m(s) \\right] \\\\\n & \\quad + \\mathbb{E}_{\\tau \\sim p_{\\theta_m, \\theta_w}(\\tau)} \\bigg[ \\nabla_{\\theta_m} g_t \\nabla_g \\bigg(r_w(s_t,g,\\pi_w(s_t,g_t)) + \\pi_w (s_t,g) \\nabla_a Q_w(s_t,g_t,a)|_{a=\\pi_w(s_t,g_t)}\\vphantom{\\int} \\bigg) \\bigg\\rvert_{g=g_t} \\bigg]\n \\end{aligned}\n $}\n\\end{equation}\n
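In an off-policy actor-critic implementation, both expectations above can be estimated from minibatches, with the inner derivatives supplied by automatic differentiation. The following PyTorch-style sketch is only illustrative: it assumes deterministic manager, worker and critic networks, an intrinsic worker reward of the (assumed) directional goal-distance form $-\\| s_t + g_t - s_{t+1} \\|$, and it shows the manager-decision steps only; none of the names refer to an actual implementation.
\\begin{verbatim}
import torch

def manager_loss(s, s_next, manager, worker, Q_m, Q_w):
    # Schematic minibatch loss; descending it follows the connected
    # gradient above with respect to the manager parameters.
    # First term: standard deterministic policy gradient through Q_m.
    g = manager(s)                      # g_t = pi_m(s_t), carries theta_m
    ddpg_loss = -Q_m(s, g).mean()

    # Second term: differentiate the bracketed expression w.r.t. the goal.
    a_fixed = worker(s, g.detach()).detach().requires_grad_(True)
    dQw_da = torch.autograd.grad(Q_w(s, g.detach(), a_fixed).sum(),
                                 a_fixed)[0].detach()   # grad_a Q_w at a = pi_w
    # Assumed intrinsic reward, differentiable in the goal.
    r_w = -torch.norm(s + g - s_next, dim=-1)
    bracket = r_w + (worker(s, g) * dQw_da).sum(dim=-1)
    correction_loss = -bracket.mean()

    # Only the manager's parameters should be stepped with this loss.
    return ddpg_loss + correction_loss
\\end{verbatim}
Minimizing this loss with an optimizer that updates only $\\theta_m$ ascends both terms of $\\nabla_{\\theta_m} J_m'$: the goal $g$ carries the $\\theta_m$ dependence, while the action at which $\\nabla_a Q_w$ is evaluated is held fixed, matching the evaluation points in the expression above.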
\n\\section{Cooperative HRL as goal-constrained optimization}\n\\label{sec:constrained-hrl}\n\nIn this section we derive a constrained optimization problem that motivates cooperation between a meta policy $\\pi$ and a worker policy $\\omega$. We first derive an update rule for the finite-horizon reinforcement learning setting, and then approximate it for stationary policies by dropping the time dependencies of the meta policy, the worker policy, and the cooperative multiplier $\\lambda$. Our goal is to find a hierarchy of policies $\\pi$ and $\\omega$ with maximal expected return, subject to a constraint that bounds the expected distance to the goals proposed by $\\pi$. Put formally, \n\\begin{gather}\n \\max_{\\pi_{0:T}, \\omega_{0:T}} \\sum_{t = 0}^{T} \\mathbb{E} \\left[ r (s_{t}, a_{t}) \\right] \\;\\text{s.t.}\\; \\sum_{i = t}^{T} \\mathbb{E} \\left[ \\left\\| s_{i + 1} - g_{i} \\right\\|_{p} \\right] \\leq \\delta \\; \\forall t\n\\end{gather}\n\nwhere $\\delta$ is the largest total expected distance from the goals proposed by $\\pi$ that we are willing to tolerate. Without the constraint, the optimal worker policy $\\omega$ need not be goal-reaching, so we expect the constraint to be tight in practice---this appears to hold in our experiments. The hierarchy of policies at time step $t$ can only affect the future, so we can use approximate dynamic programming: we solve for the optimal hierarchy at the last time step and proceed backwards in time. We write the optimization problem as an iterated maximization,\n\\begin{gather}\n \\max_{\\pi_{0}, \\omega_{0}} \\mathbb{E} \\left[ r (s_{0}, a_{0}) + \\max_{\\pi_{1}, \\omega_{1}} \\mathbb{E} \\left[ \\cdots + \\max_{\\pi_{T}, \\omega_{T}} \\mathbb{E} \\left[ r (s_{T}, a_{T}) \\right] \\right] \\right]\n\\end{gather}\n\nwhere each maximization is subject to the corresponding constraint on the expected distance from the proposed goals. Starting from the last time step, we convert the primal problem into a dual problem. Under the original distance constraint for $\\pi_{T}$ at the last time step,\n\\begin{gather}\n \\max_{\\pi_{T}, \\omega_{T}} \\mathbb{E} \\left[ r (s_{T}, a_{T}) \\right] = \\min_{\\lambda_{T} \\geq 0} \\max_{\\pi_{T}, \\omega_{T}} \\mathbb{E} \\left[ r (s_{T}, a_{T}) \\right] + \\lambda_{T} \\delta - \\lambda_{T} \\sum_{i = T}^{T} \\mathbb{E} \\left[ \\left\\| s_{i + 1} - g_{i} \\right\\|_{p} \\right]\n\\end{gather}\n\nwhere $\\lambda_{T}$ is a Lagrange multiplier for time step $T$, which controls the strength of the cooperation bonus between the meta policy $\\pi_{T}$ and the worker policy $\\omega_{T}$ at the last time step. The equality holds by strong duality, since the objective and the constraint are linear functions of $\\pi_{T}$ and $\\omega_{T}$. Solving the dual problem corresponds to CHER, which trains a meta policy $\\pi_{T}$ with a cooperative goal-reaching bonus weighted by $\\lambda_{T}$. The optimal multiplier can be found by minimizing a simplified objective evaluated at the optimal meta and worker policies.\n\\begin{gather}\n \\min_{\\lambda_{T}\\geq 0} \\lambda_{T} \\delta - \\lambda_{T} \\sum_{i = T}^{T} \\mathbb{E}_{g_{i} \\sim \\pi^{*}_{T} (g_{i} | s_{i}; \\lambda_{T}), a_{i} \\sim \\omega^{*}_{T} (a_{i} | s_{i}, g_{i}; \\lambda_{T}) } \\left[ \\left\\| s_{i + 1} - g_{i} \\right\\|_{p} \\right]\n\\end{gather}\n\nRecognizing that, for deterministic policies in the finite-horizon setting, the expected sum of rewards equals the meta policy's Q function and the expected sum of distances to goals equals the worker policy's Q function, we can separate the dual problem into a bi-level optimization problem, first over the policies and then over the multiplier:\n\\begin{gather}\n \\max_{\\pi_{T}, \\omega_{T}} Q_{m}(s_{T}, g_{T}, a_{T}) - \\lambda_{T} Q_{w}(s_{T}, g_{T}, a_{T})\\\\\n \\min_{\\lambda_{T}\\geq 0} \\lambda_{T} \\delta - \\lambda_{T} Q_{w}(s_{T}, g_{T}, a_{T})\n\\end{gather}\n\nBy solving the iterated maximization backwards in time, solutions for $t