diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzpzem" "b/data_all_eng_slimpj/shuffled/split2/finalzzpzem" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzpzem" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec:intro}\n\n\n\nLet $f(x)$ be a polynomial in $n$ variables over ${\mathbb Z}$ and of degree $d>1$. Let $s$ be the dimension of the critical locus of $f_d:{\mathbb C}^n\to {\mathbb C}$, where $f_d$ is the degree $d$ homogeneous part of $f$. For any positive integer $N$ and any complex primitive $N$-th root of unity $\xi$, consider the exponential sum\n\\begin{equation}\\label{eq:fN}\n \\sum_{x\in ({\mathbb Z}\/N{\mathbb Z})^n} \\xi^{f(x)}.\n\n\\end{equation}\nThe work by Grothendieck, Deligne, Katz, Laumon, and others after the Weil Conjectures implies rich results about these sums when $N$ runs over the set of prime numbers. We don't aim to deepen these results.\nInstead we put forward new bounds for these sums for all $N$, gaining, roughly, a factor $N^{-(n-s)\/d}$ over the trivial bound (see Conjecture \\ref{con1}). We dare to put this forward as a conjecture because we prove a generic case (based on the Newton polyhedron of $f$), as well as the case with no more than $4$ variables, and\nthe case restricted to those $N$ which are\n$d+2$-th power free.\nThe simplicity of the bounds makes them attractive and useful. As we will explain, the bounds go further than two of Igusa's\nconjectures, namely on exponential sums and on monodromy, and they represent an update of these conjectures in line with recent insights and results. 
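As an illustrative numerical sanity check (the choices $f(x,y)=x^3+y^3$ and $N=7$ are ours, not from the text), the sum in (\ref{eq:fN}) can be evaluated directly; for a prime modulus at which $f_d$ is smooth, its modulus is governed by Deligne's bound $(d-1)^n N^{n/2}$, recalled later in (\ref{eq:deligne}):

```python
import cmath
from itertools import product

def exp_sum(f, n_vars, N):
    """Sum of xi^f(x) over x in (Z/NZ)^n, with xi = exp(2*pi*i/N) a primitive N-th root of unity."""
    xi = cmath.exp(2j * cmath.pi / N)
    return sum(xi ** (f(*x) % N) for x in product(range(N), repeat=n_vars))

# illustrative choice: f(x, y) = x^3 + y^3, so n = 2, d = 3, s = 0, and prime N = 7;
# Deligne's bound then caps the modulus at (d-1)^n * N^(n/2) = 4 * 7 = 28
S = exp_sum(lambda x, y: x**3 + y**3, 2, 7)
```

Here the trivial bound on the modulus is $N^n=49$, while the computed value is roughly $22.5$, comfortably below the bound $28$.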
\n\n\n\n\nIn more detail, let us normalize by the number of terms, take the complex modulus and put\n\\begin{equation}\\label{eq:SfN}\nE_f(N,\xi):= \\abs{ \\frac{1}{N^n } \\sum_{x\in ({\mathbb Z}\/N{\mathbb Z})^n} \\xi^{f(x)} }.\n\\end{equation}\n\nWrite $f_d$ for the homogeneous degree $d$ part of $f$,\nand write $s=s(f)=s(f_d)$ for the dimension of the critical\nlocus of $f_d$, namely, of the solution set in ${\mathbb C}^n$ of the equations\n\\begin{equation}\\label{eq:grad-fd}\n0 =\n\\frac{\\partial f_d}{ \\partial x_1}(x) =\\ldots = \\frac{\\partial f_d}{\\partial x_n}(x).\n\\end{equation}\nNote that $0\leq s\le n-1$. Our projected bounds are very simple:\n\n\\begin{conj}\\label{con:SfN}\\label{con1}\nGiven $f$, $n$, $s$, and $d$ as above and any $\varepsilon>0$, one has\n\\begin{equation}\\label{bound:SfN}\nE_f(N,\xi) \\ll N^{-\\frac{n-s}{d}+\varepsilon}.\n\\end{equation}\n\\end{conj}\n\n\n\n\nThe notation in (\\ref{bound:SfN})\nmeans that, given $f$ and $\varepsilon>0$, there is a constant $c=c(f,\varepsilon)$\nsuch that, for all integers $N\geq 1$ and all primitive $N$-th roots of unity $\xi$, the value $E_f(N,\xi)$ is no larger than\n$cN^{- \\frac{n-s}{d} + \varepsilon}$. Moreover, knowing only $n$, $s$ and $d$, the exponent $-(n-s)\/d$ is optimal in (\\ref{bound:SfN}), as witnessed by the example with $f=\\sum_i x_i^d$ and $N=p^d$ for primes $p$, see also the paragraph around (\\ref{eq:igusa}). We are not aware of such projected upper bounds in the literature, except in the one-variable case, as in \\cite{Chalk} following \\cite{Hua1959}, and generalisations in \\cite{CochZheng1, CochZheng2}. 
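The optimality example just mentioned can be made fully explicit in one variable: for $f=x^d$ and $N=p^d$, writing $x=y+p^{d-1}z$ shows that only the $x$ divisible by $p$ contribute, giving $E_f(N,\xi)=p^{-1}=N^{-n/d}$ exactly. A small numerical check of this (the values $d=3$ and $p=5$ are an illustrative choice of ours):

```python
import cmath
from itertools import product

def E(f, n_vars, N):
    """Normalized modulus |N^{-n} * sum over (Z/NZ)^n of xi^{f(x)}|, with xi = exp(2*pi*i/N)."""
    xi = cmath.exp(2j * cmath.pi / N)
    total = sum(xi ** (f(*x) % N) for x in product(range(N), repeat=n_vars))
    return abs(total) / N**n_vars

# f = x^3 with n = 1, s = 0, d = 3, and N = 5^3 = 125:
# the normalized sum equals 5^{-1} = N^{-1/3}, matching the exponent -(n-s)/d = -1/3
d, p = 3, 5
val = E(lambda x: x**d, 1, p**d)
```

So no exponent better than $-(n-s)/d$ can hold for all $N$ in this family.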
\n\n\\begin{remark}\nThe critical case of Conjecture \\ref{con1} is with $N$ having a single prime divisor.\nIndeed, by the Chinese Remainder Theorem, if one writes $N= \\prod_{i}p_i^{e_i}$ for distinct prime numbers $p_i$ and integers $e_i>0$, then one has\n\\begin{equation}\\label{eq:p-i}\nE_{f}(N,\\xi) = \\prod_{i} E_f(p_i^{e_i},\\xi_i)\n\\end{equation}\nfor some choice of primitive $p_i^{e_i}$-th roots of unity $\\xi_i$. In detail, writing $1\/N=\\sum_i a_i\/p_i^{e_i}$ with $(a_i,p_i)=1$, then one takes $\\xi_i = \\xi^{b_i}$ with $b_i=a_i N\/p_i^{e_i}$.\n\\end{remark}\n\n\n\\subsection{}\\label{sec:introbeyond}\nConjecture \\ref{con1} goes beyond Igusa's original questions on both monodromy and exponential sums, and simplifies them to only involving $n,s$, and $d$. This simplification comes from recent insights on the minimal exponent by Musta{\\c{t}}{\\v{a}} and Popa from \\cite{MustPopa}. The presented conjecture is a way to go forward after the recent solution of Igusa's conjecture on exponential sums for non-rational singularities in \\cite{CMN}, which implies Conjecture \\ref{con1} when $(n-s)\/d\\le 1$ and thus in particular when $n\\le 3$ (indeed, the case with $d=2$ is easy and well-known).\n\nIn most of the evidence we can furthermore take $\\varepsilon=0$ and\none may wonder to which extent this sharpening of Conjecture \\ref{con:SfN} holds. The sharpening with $\\varepsilon=0$ goes further beyond Igusa's conjectures in ways explained in Section \\ref{subsection:order:poles}. One may also wonder whether the implied constant $c$ can be taken depending only on $f_d$ and $n$.\n\n\n\\subsection{}\\label{sec:intro:monod}\nIn Section \\ref{sec:monodromy} we relate the bounds from (\\ref{eq:p-i}) to Igusa's monodromy conjecture.\nConjecture \\ref{con1} implies the strong monodromy conjecture for poles of local zeta functions with real part in the range strictly between $-\\frac{n-s}{d}$ and zero. 
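The factorization (\ref{eq:p-i}) is easy to verify numerically. A minimal sketch, with illustrative choices of our own ($f(x)=x^2$ and $N=12=4\cdot 3$, so $1/12=-1/4+1/3$, giving $b_1=-3$ and $b_2=4$):

```python
import cmath

def E(f, N, b=1):
    """Normalized modulus of the one-variable sum with xi = exp(2*pi*i*b/N); for gcd(b, N) = 1 this xi is a primitive N-th root of unity."""
    xi = cmath.exp(2j * cmath.pi * b / N)
    return abs(sum(xi ** (f(x) % N) for x in range(N))) / N

f = lambda x: x * x
# N = 12 = 2^2 * 3 and 1/12 = a_1/4 + a_2/3 with a_1 = -1, a_2 = 1,
# so xi_1 = xi^{b_1} with b_1 = a_1 * 12/4 = -3, and xi_2 = xi^{b_2} with b_2 = a_2 * 12/3 = 4
lhs = E(f, 12)                      # E_f(12, xi) with xi = exp(2*pi*i/12)
rhs = E(f, 4, b=-1) * E(f, 3, b=1)  # xi^{-3} = exp(-2*pi*i/4), xi^{4} = exp(2*pi*i/3)
```

Both sides equal $\sqrt{6}/6\approx 0.408$ here, illustrating why the critical case of Conjecture \ref{con1} is that of a single prime divisor.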
More precisely, we show that there are no poles (of a local zeta function of $f$) with real part in this range except $-1$ (see Theorem \\ref{prop:mon}), and, correspondingly, there are no zeros of the Bernstein-Sato polynomial of $f$ in this range other than $-1$ (see Proposition \\ref{lem:alpha}). In the other direction, we note that the strong monodromy conjecture in this range implies merely a much weaker variant of Conjecture \\ref{con1}, namely the bounds from (\\ref{bound:SfNp}) instead of (\\ref{bound:SfN}), where the constant $c_p$ is allowed to depend on $p$.\n\n\\subsection{}\nThe other main results in this paper (apart from Section \\ref{sec:monodromy}) consist of evidence\nfor Conjecture \\ref{con1}.\nIn Section \\ref{sec:intro:evidence} we first translate some existing results into evidence for Conjecture \\ref{con1}, namely, the case of $f$ being smooth homogeneous, the case of degree $d=2$, the case with $(n-s)\/d\le 1$, the case of at most $3$ variables, and the case of cube free $N$. We then extend this further to evidence for all $N$ which are $d+2$-th power free (see Section \\ref{sec:HeathB}, where moreover $\varepsilon=0$). This uses some recent bounds from \\cite{CGH5} coming from the theory of motivic integration. \n\n\n\nIn Section \\ref{sec:CAN}, we show Conjecture \\ref{con:SfN} with $\varepsilon=0$ when $f$ is non-degenerate with respect to its Newton polyhedron at zero, using\nrecent work of Castryck and the second author in \\cite{CAN}. This shows that Conjecture \\ref{con1} holds for generic $f$,\nin a sense explained in Remark \\ref{rem:gen}.\n\nIn the final Section \\ref{sec:n=4}, we show Conjecture \\ref{con1} for all polynomials in no more than $4$ variables. 
The key case here, in view of the results from \\cite{CMN}, is when $n=4$, $s=0$, and $d=3$.\n\n\n\n\\subsection{}\\label{sec:intro:original}\nIgusa's original question on exponential sums used a non-canonical exponent coming from a choice of log-resolution for $f$, see page 2 of the introduction of \\cite{Igusa3}. Later, Igusa's exponent was replaced by canonical candidates for the (expectedly optimal) exponent: the motivic oscillation index of $f$, and the (expectedly equal) minimal exponent of $f$, or rather, of $f-v$ with $v$ a critical value of $f:{\mathbb C}^n\to{\mathbb C}$. Also Igusa's original conditions, like homogeneity for $f$, were dropped with some care. See Section \\ref{subs:minimal:exp}\nfor more details and references.\nOur suggested bounds encompass and simplify several issues related to the minimal exponent (and the motivic oscillation index),\nby replacing them by the much simpler value $\\frac{n-s}{d}$.\nAlthough the bounds from (\\ref{bound:SfN}) seem simple and very natural, they appear surprisingly hard to show in general. 
Even the much weaker bounds from (\\ref{bound:SfNp}) seem surprisingly hard to show.\n\n\n\\subsection{}\\label{sec:intro:vast}\nIn his vast program, Igusa wanted to show a certain ad\`elic Poisson summation formula related to $f$, inspired by the work of Siegel and Weil around the Hasse-Minkowski principle.\nConjecture \\ref{con:SfN} would imply that Igusa's ad\`elic Poisson summation formula for $f$ holds under the simple condition\n\\begin{equation}\\label{bound:2d}\n n -s > 2d\n \\end{equation}\nwhich simplifies the list of conditions put forward by Igusa in \\cite{Igusa3}.\n\n\n\nAlso for obtaining (or just for streamlining) local-global principles, Conjecture \\ref{con:SfN} may play a role.\n\nIt is precisely the sums $E_f(N,\xi)$ that appear in estimating the contribution of the major arcs in the circle method\nto get a smooth local-global principle for $f$ when\n\\begin{equation}\\label{bound:2^d}\nn-s > (d-1) 2^d,\n\\end{equation}\nin work by Birch \\cite{Birch} (see also the recent sharpening from \\cite{BrowPren}), where Birch shows that for any homogeneous form $f=f_d$ with (\\ref{bound:2^d}) there are smooth local zeros of $f$ for each completion of ${\mathbb Q}$ if and only if $f$ has a smooth rational zero. One may hope one day to replace Condition (\\ref{bound:2^d}) on $f$ by (\\ref{bound:2d}), in line with a conjectured local-global principle from \\cite{Browning-HB}.\nConjecture \\ref{con:SfN} would place the remaining obstacle\nentirely with the estimates for the minor arcs (which is where the limits of current knowledge already lie).\nOther possible applications may be for small solutions of congruences as studied in e.g.~\\cite{Baker}.\nWe leave all these for the future. 
\n\n\n\n\n\n\\subsection{Generalization to a ring of integers}\nBefore giving precise statements and proofs, we formulate\na natural generalization to rings of integers (a generality we will not use later in this paper).\nFor a ring of integers $\mathcal{O}$ of a number field and a polynomial $g$ over $\mathcal{O}$, one can formulate an analogous conjecture with summation sets $(\mathcal{O}\/I)^n$ with nonzero ideals $I$ of $\mathcal{O}$ and primitive additive characters $\psi:\mathcal{O}\/I\to{\mathbb C}^{\times}$. More precisely, let $g$ be a polynomial in $n$ variables of degree $d>1$ and with coefficients in $\mathcal{O}$. For any nonzero ideal $I$ of $\mathcal{O}$ and any primitive additive character $\psi:\mathcal{O}\/I\to{\mathbb C}^{\times}$, let $N_I:=[\mathcal{O}:I]$ be the absolute norm of $I$ and consider\n\\begin{equation}\\label{eq:SfI}\nE_g(I,\psi):= \\abs{ \\frac{1}{N_I^n } \\sum_{x\in (\mathcal{O}\/I)^n} \\psi(g(x)) }.\n\\end{equation}\nLet $s(g)=s(g_d)$ be the dimension of the critical\nlocus of the degree $d$ part $g_d$ of $g$. As a generalization of the above questions, one may wonder whether\nfor each $\varepsilon>0$ (or even with $\varepsilon=0$) one has\n\\begin{equation}\\label{bound:SgN}\nE_g(I,\psi) \\ll N_I^{-\\frac{n-s(g)}{d}+\varepsilon}.\n\\end{equation}\nAs above with the Chinese remainder theorem, one can rephrase this using the finite completions of the field of fractions of $\mathcal{O}$, and one can study similar sums for the local fields ${\mathbb F}_q \mathopen{(\!(} t \mathclose{)\!)}$ of large characteristic.\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\\section{Link with the monodromy conjecture}\\label{sec:monodromy}\n\n\nFix a prime number $p$. For each integer $m>0$ let $a_{p,m}$ be the number of solutions in $({\mathbb Z}\/p^m{\mathbb Z})^n$ of the equation $f(x)\equiv 0\bmod p^m$, and consider the Poincar\'e series\n$$\nP_{f,p}(T):= \\sum_{m>0} a_{p,m}p^{-mn}T^m,\n$$\nin ${\mathbb Q}[[T]]$. 
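To get a concrete feel for the coefficients $a_{p,m}$ (our choices $f=x^2$, $n=1$ and $p=3$ are illustrative and not from the text), one can count solutions directly; here $p^m\mid x^2$ forces $p^{\lceil m/2\rceil}\mid x$, so $a_{p,m}=p^{\lfloor m/2\rfloor}$, and this geometric-type growth is what makes the resulting series rational in $T$:

```python
def a(f, p, m):
    """Number of solutions of f(x) = 0 mod p^m, with x running over Z/p^m Z (one variable)."""
    q = p**m
    return sum(1 for x in range(q) if f(x) % q == 0)

# for f = x^2 and p = 3 the counts follow the pattern a_{p,m} = p^{floor(m/2)}
p = 3
coeffs = [a(lambda x: x * x, p, m) for m in range(1, 7)]
```

The counts come out as $1, 3, 3, 9, 9, 27$, so the generating series can be summed in closed form as a rational function, in line with the rationality statement below.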
Igusa \\cite{Igusa3} showed that $P_{f,p}(T)$ is a rational function in $T$, using an embedded resolution of $f$ from \\cite{Hir:Res}. Let $T_0$ be a complex pole of $P_{f,p}(T)$ and let $t_0$ be the real part of a complex number $s_0$ with $p^{-s_0}=T_0$. Denef \\cite{DenefBour} formulated a strong variant of Igusa's monodromy conjecture by asking whether $t_0$ is automatically a zero of the Bernstein-Sato polynomial of $f$. The following result addresses this question in a range of values for $t_0$, namely strictly between $-(n-s)\/d$ and zero, assuming Conjecture \\ref{con1} for $f$.\n\n\\begin{thm}[Strong Monodromy Conjecture, in a range]\\label{prop:mon}\nLet $f$, $n$, $s,$ and $d$ be as in the introduction and suppose that Conjecture \\ref{con:SfN} holds for $f$. Let $t_0$ arise as above from a pole $T_0$ of $P_{f,p}(T)$ for a prime number $p$. Suppose that moreover $t_0>-(n-s)\/d$. Then $t_0=-1$, and hence, $t_0$ is a zero of the Bernstein-Sato polynomial of $f$.\n\\end{thm}\nTheorem \\ref{prop:mon} is a form of the strong monodromy conjecture in the range strictly between $-(n-s)\/d$ and zero. We don't pursue the highest generality here, such as variants for motivic and $p$-adic local zeta functions. Theorem \\ref{prop:mon} follows rather easily from Igusa's results, and in fact, similar results for the poles of twisted $p$-adic local zeta functions follow by the same reasoning, see \\cite{DenefBour}.\nProposition \\ref{lem:alpha} below gives a related statement for the zeros of the Bernstein-Sato polynomial of $f$.\n\n\\begin{proof}[Proof of Theorem \\ref{prop:mon}]\nLet $p$ be a prime number. 
Let $t_0$ be the real part of a complex number $s_0$ such that $T_0:=p^{-s_0}$ is a pole of $P_{f,p}(T)$.\nSuppose that \nfor all $\\varepsilon>0$ there exists $c_p=c_p(f,\\varepsilon)$ such that\n\\begin{equation}\\label{bound:SfNp}\nE_f(p^m,\\xi) \\leq c_p\\cdot (p^m)^{-\\frac{n-s}{d}+\\varepsilon} \\mbox{ for all $m>0$ and primitive $\\xi$}.\n\\end{equation}\nBy Igusa's results from \\cite{Igusa3} in the form of \\cite{DenefBour}, Corollary 1.4.5 and the comment at the end of \\cite{DenefVeys}, it follows that $t_0$ either equals $-1$ or is at most $-(n-s)\/d$. \nClearly the bound from (\\ref{bound:SfNp}) holds if Conjecture \\ref{con1} holds for $f$. Hence, if $t_0>-(n-s)\/d$, then $t_0=-1$. By our assumption that $d>1$, the value $-1$ is automatically a zero of the Bernstein-Sato polynomial of $f$. This completes the proof of the theorem.\n\\end{proof}\nShowing the bounds (\\ref{bound:SfNp}) from the above proof for general $f$\ndoes not seem easy, although they are much weaker (and much less useful) than the bounds from (\\ref{bound:SfN}), because of the dependence of $c_p$ on $p$.\n\nIn view of the strong monodromy conjecture, Theorem \\ref{prop:mon} should be compared with the following absence of zeros of the Bernstein-Sato polynomial in a similar range, apart from $-1$. (Recall that the zeros of the Bernstein-Sato polynomial are negative rational numbers.)\n\\begin{prop}\\label{lem:alpha}\nLet $f$, $n$, $s,$ and $d$ be as in the introduction and let $r$ be any zero of the Bernstein-Sato polynomial of $f$. 
Then either $r=-1$, or, $r \leq - (n-s) \/d$.\n\\end{prop}\n\\begin{proof}\nWhen $f=f_d$, the proposition follows from item (3) of Theorem~E of \\cite{MustPopa} and the definition of the minimal exponent as minus the largest zero of the quotient of the Bernstein-Sato polynomial $b_f(s)$ of $f$ divided by $(s+1)$.\nThe general case of the proposition follows from this homogeneous case by using the semi-continuity result for the minimal exponent of item (2) of Theorem~E of \\cite{MustPopa}.\n\\end{proof}\n\n\n\n\\subsection{}\\label{subsection:order:poles}\nThe variant of Conjecture \\ref{con:SfN} with $\varepsilon=0$ (or even just the bounds (\\ref{bound:SfNp}) with $\varepsilon=0$) implies for any pole $T_0$ of $P_{f,p}(T)$ with corresponding value $t_0$ the following bound on the order of the pole: if $t_0$ equals $-(n-s)\/d$ and $-(n-s)\/d\neq -1$, then the pole $T_0$ has multiplicity at most one. If $t_0=-1= -(n-s)\/d$, then the pole $T_0$ has multiplicity at most two, see \\cite{DenefBour}, Corollary 1.4.5 and the comment at the end of \\cite{DenefVeys}.\n\nConjecture \\ref{con:SfN} clearly implies the bounds (\\ref{bound:SfNp}) with moreover constants $c_p$ taken independently of $p$.\nThe variant of Conjecture \\ref{con:SfN} with $\varepsilon=0$ is equivalent to the bounds (\\ref{bound:SfNp}) with $\varepsilon=0$ and such that furthermore the product of the constants $c_p$ over any set $P$ of primes is bounded independently of $P$.\n\n\n\\subsection{}\\label{subs:minimal:exp}\nThe minimal exponent of $f$ is defined as minus the largest zero of the quotient $b_f(s)\/(s+1)$ with $b_f(s)$ the Bernstein-Sato polynomial of $f$ if such a zero exists, and it is defined as $+\infty$ otherwise. 
Write $\\hat \\alpha_f$ for the minimum over the minimal exponents of $f-v$ for $v$ running over the (complex) critical values of $f$.\nIn a more canonical variant of Igusa's original question, one may wonder more technically than Conjecture \\ref{con1} whether\nfor all $\\varepsilon>0$ one has\n\\begin{equation}\\label{bound:SfNfull}\nE_f(N,\\xi) \\ll N^{-\\hat\\alpha_f+\\varepsilon} \\mbox{ for all $\\xi$ and all squareful integers $N$},\n\\end{equation}\nsimilarly as the question introduced in \\cite{CVeys} for the motivic oscillation index (and where the necessity of working with squareful integers $N$ is explained). Recall that an integer $N$ is called squareful if for any prime $p$ dividing $N$ also $p^2$ divides $N$. In \\cite{CAN} and \\cite{CMN}, evidence is given for this sharper but more technical question. As mentioned above, $\\hat\\alpha_f$ is hard to compute in general, and $(n-s)\/d$ is much more transparent. However, $\\hat\\alpha_f$ is supposedly equal to the motivic oscillation index of $f$, which in turn is optimal as exponent of $N^{-1}$ in the upper bounds for $E_f(N,\\xi)$ (see the last section of \\cite{CMN}). Note that by Proposition \\ref{lem:alpha}, one has\n\\begin{equation}\\label{bound:min:exp}\n\\hat\\alpha_f \\ge (n-s)\/d,\n\\end{equation}\nwhich shows that (\\ref{bound:SfNfull}) is indeed a sharper (or equal) bound than (\\ref{bound:SfN}).\n\n\n\\section{Some first evidence}\\label{sec:intro:evidence}\n\nIn this section we translate some well-known results into evidence for Conjecture \\ref{con1}, and we show the case of $d+2$-th power free $N$. A key (but hard) case of Conjecture \\ref{con1} is when $f_d$ is projectively smooth, namely with $s=0$, since the case of general $s$ can be derived from a sufficiently uniform form of the case with $s=0$, see e.g.~how (\\ref{eq:deligne}) is used below for square free $N$. However, the case $s=0$ seems very hard at the moment in general. 
This should not be confused with Igusa's more basic case recalled in Section \\ref{sec:ev1}.\n\n\n\\subsection{}\\label{sec:ev1}\nWhen $f$ itself is smooth homogeneous, namely, $f=f_d$ and $s=0$, then Conjecture \\ref{con:SfN} with $\varepsilon=0$ is known by Igusa's bounds from \\cite{Igusa3}, by a straightforward computation and reduction to Deligne's bounds.\nIn detail, if $f=f_d$ and $s=0$, Igusa \\cite{Igusa3} showed (using \\cite{DeligneWI}) that for each prime $p$ there is a constant $c_p$ such that\n\\begin{equation}\\label{eq:igusa}\nE_{f}(p^{m},\xi) \\leq c_p p^{-mn\/d} \\mbox{ for all integers $m>0$ and all choices of $\xi$},\n\\end{equation}\nand, that one can take $c_p=1$ when $p$ is larger than some value $M$ depending on $f$. More precisely, one can take $c_p=1$ when $p$ does not divide $d$ and when the reduction of $f_d$ modulo $p$ is smooth.\nFurthermore, Igusa \\cite{Igusa3} shows that the exponent $-n\/d$ of $p^m$ is optimal in the upper bound of (\\ref{eq:igusa}) when $m=d$ (see also the last section of \\cite{CMN} for more general lower bounds).\n\n\\subsection{}\\label{sec:ev2}\nLet us recall in general that for $f$, $n$, $s$, and $d$ as above, if\n\\begin{equation}\\label{eq:leq1}\n(n-s)\/d \\leq 1,\n\\end{equation}\nthen Conjecture \\ref{con:SfN} follows from the recent solution of Igusa's conjecture for non-rational singularities in \\cite{CMN}. Indeed, in \\cite{CMN} the stronger (and optimal) upper bounds from (\\ref{bound:SfNfull}) are shown in the case of non-rational singularities, as well as the case with $1$ instead of $\hat\alpha_f$ in the case of rational singularities. (Recall that this is indeed stronger by (\\ref{bound:min:exp}).) We mention in passing that $\hat\alpha_f \\le 1$ if and only if $f-v=0$ has non-rational singularities for some critical value $v\in {\mathbb C}$ of $f$, by \\cite{Saito-rational}.\n\nIt follows from this that Conjecture \\ref{con:SfN} holds for all $f$ in three (or fewer) variables. 
Indeed, the degree two case is easy by diagonalizing $f_2$ over ${\mathbb Q}$, and, (\\ref{eq:leq1}) holds when $n\le 3\le d$. In Section \\ref{sec:n=4} we will treat the case of four variables.\n\nAlthough it is classical, let us explain the case of $d=2$ in more detail, by showing that Conjecture \\ref{con1} holds with $\varepsilon=0$ for $f$ with $d=2$.\nFirst suppose that the degree two part of $f$ is a diagonal form, namely, $f_2(x)=\\sum_{i=1}^n a_i x_i^2$ for some $a_i\in {\mathbb Z}$.\nIn this case it is sufficient to show the case with $n=1$ and $d=2$ (indeed, $f=f_1(x_1)+\ldots+f_n(x_n)$ for some polynomials $f_j$ in one variable $x_j$ and of degree $\le 2$). But this case follows readily from Hua's bounds, see \\cite{Hua1959} or \\cite{Chalk}.\n\nFor general $f$ with $d=2$, by diagonalizing $f_2$ over ${\mathbb Q}$ and taking a suitable multiple, we find a matrix $T\in{\mathbb Z}^{n\times n}$ with nonzero determinant so that $f_2(Tx)$ is a diagonal form in the variables $x$, namely, $f_2(Tx)=\\sum_{i=1}^n a_i x_i^2$ for some $a_i\in {\mathbb Z}$. The map sending $x$ to $Tx$ transforms ${\mathbb Z}_p^n$ into a set of the form $\\prod_{j=1}^n p^{e_{p,j}}{\mathbb Z}_p$ for some integers $e_{p,j}\ge 0$ (called a box). By composing with a map of the form $(x_j)_j\mapsto (b_jx_j)_j$ for some integers $b_j$ it is clear that we may assume that $T$ is already such that $e_{p,j}=e_p$ for all $p$ and all $j$ and some integers $e_p\ge 0$. Hence, the case $d=2$ follows from Lemma \\ref{lem:T}.\n\n\\begin{lem}\\label{lem:T}\nLet $f$, $n$, $s,$ and $d$ be as in the introduction and let $T\in{\mathbb Z}^{n\times n}$ be a matrix with nonzero determinant and such that, for each prime $p$, the transformation $x\mapsto Tx$ maps ${\mathbb Z}_p^n$ onto $p^{e_p}{\mathbb Z}_p^n$ for some $e_p\ge 0$. 
Then, Conjecture \\ref{con1} for the polynomials $g_i(x) := f(i+Tx)$ for all $i\in {\mathbb Z}^n$ implies Conjecture \\ref{con1} for $f$, and, similarly for Conjecture \\ref{con1} with $\varepsilon=0$.\n\\end{lem}\n\\begin{proof}\nFor each $i\in{\mathbb Z}^n$, write $g_i(x)$ for the polynomial $f(i+ Tx)$.\nFor any prime $p$, let $\mu_{p,n}$ be the Haar measure on ${\mathbb Q}_p^n$, normalized so that ${\mathbb Z}_p^n$ has measure $1$. For any integer $m>0$ and any primitive $p^m$-th root of unity $\xi$, we have, by the change of variables formula for $p$-adic integrals, and with $e=e_p$ and with integrals taken against the measure $\mu=\mu_{p,n}$,\n\\begin{eqnarray*}\nE_f(p^m,\xi) & = & \\abs{ \\int_{x\in {\mathbb Z}_p^n} \\xi^{f(x)\bmod p^m} \\mu } \\\\\n& \le & \\sum_{i\in\{0,1,\ldots,p^{e}-1\}^n} \\abs{\\int_{x\in i +(p^{e}{\mathbb Z}_p)^n} \\xi^{f(x)\bmod p^m} \\mu }. \\\\\n\\end{eqnarray*}\nFor each $i$ we further have\n\\begin{eqnarray*}\n\\abs{\\int_{x\in i +(p^{e}{\mathbb Z}_p)^n} \\xi^{f(x)\bmod p^m} \\mu } & =& p^{-ne} \\abs{\\int_{x\in {\mathbb Z}_p^n} \\xi^{f(i+p^{e}x)\bmod p^m} \\mu }\\\\\n&=& p^{-ne} \\abs{\\int_{x\in {\mathbb Z}_p^n} \\xi^{g_i (x)\bmod p^m} \\mu } \\\\\n& = & p^{-ne}E_{g_i}(p^m,\xi).\n\\end{eqnarray*}\nSince $e_p=0$ for all but finitely many primes $p$, we are done.\n \\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{}\\label{sec:ev3}\nWhen one restricts to integers $N$ which are square free (namely, not divisible by a nontrivial square), then Conjecture \\ref{con:SfN} with $\varepsilon=0$ follows from Deligne's bound from \\cite{DeligneWI}, as we now explain.\nBy \\cite{DeligneWI}, for a prime number $p$ such that the reduction of $f_d$ modulo $p$ is smooth, one has\n\\begin{equation}\\label{eq:deligne}\nE_{f}(p,\xi) \\leq (d-1)^n p^{-n\/2}\\ \\mbox{ for each primitive $\xi$,}\n\\end{equation}\nwhere smooth means that the reduction modulo $p$ of the equations (\\ref{eq:grad-fd}) has $0$ as its only 
solution over an algebraic closure of ${\mathbb F}_p$.\nIf $s=0$ then the reduction of $f_d$ modulo $p$ is smooth whenever $p$ is large and thus Conjecture \\ref{con1} for square free $N$ and with $\varepsilon=0$ follows for $f$ with $s=0$.\nThe case of $s>0$ follows by induction on $s$ and by restricting $f$ to hyperplanes, as follows. The bound from (\\ref{eq:deligne}) is our base case with $s=0$. Now suppose that $s>0$. After a linear coordinate change of ${\mathbb A}_{\mathbb Z}^n$, we may suppose that the polynomial $g(\hat x) := f(0,\hat x)$ in the variables $\hat x = (x_2,\ldots,x_n)$ is still of degree $d$ and that its degree $d$ homogeneous part $g_d$ has critical locus of dimension $s-1$. \nHence, for large primes $p$, the reduction of $g_d$ modulo $p$ also has critical locus of dimension $s-1$, in ${\mathbb A}_{{\mathbb F}_p}^{n-1}$.\nThus, for large $p$, one has by induction on $s$ that\n\\begin{equation}\\label{eq:deligne-ga}\n|\\frac{1}{p^{n-1} } \\sum_{\hat x=(x_2,\ldots,x_n)\in {\mathbb F}_p^{n-1}} \\xi^{f(a,\hat x)} | \\leq (d-1)^{n-s} p^{-(n-s)\/2}\n\\end{equation}\nfor each $a\in {\mathbb F}_p$ and each primitive $p$-th root of unity $\xi$.\nIndeed, the polynomial $f(a,\hat x)$ has $g_d$ mod $p$ as its degree $d$ homogeneous part for each $a\in {\mathbb F}_p$. 
Now, summing over $a\\in {\\mathbb F}_p$ and dividing by $p$ gives\n\\begin{equation}\\label{eq:deligne:s}\n|\\frac{1}{p^{n} } \\sum_{x\\in {\\mathbb F}_p^{n}} \\xi^{f(x)} | \\leq (d-1)^{n-s} p^{-(n-s)\/2}\n\\end{equation}\nfor large $p$.\nConjecture \\ref{con1} with $\\varepsilon=0$ for square free integers $N$ thus follows.\n(Alternatively, one can use the much more general Theorem 5 of \\cite{Katz} when $d>2$ and a diagonalization argument as in Section \\ref{sec:ev2} when $d=2$).\n\n\\subsection{}\\label{sec:HeathB}\nWhen one restricts to integers $N$ which are cube free (namely, not divisible by a nontrivial cube), then Conjecture \\ref{con:SfN} with $\\varepsilon=0$ follows exactly in the same way as for square free $N$, but now using both the bounds from \\cite{Heath-B-cube-free} and from \\cite{DeligneWI}.\nIndeed, this similarly gives\n\\begin{equation}\\label{eq:cube-free:s}\n E_f(p^2,\\xi) \\leq (d-1)^{n-s} p^{-(n-s)}\n\\end{equation}\nfor large $p$ and all $\\xi$. Together with the square free case, this implies the cube free case of Conjecture \\ref{con1}, with $\\varepsilon=0$, and even with $(n-s)\/2$ instead of $(n-s)\/d$.\nIn fact, with some more work we can go up to $d+2$-th powers instead of just cubes, as follows.\n\\begin{thm}\\label{lem:d+2}\nConjecture \\ref{con1} with $\\varepsilon=0$\nholds when restricted to integers $N$ which do not contain a non-trivial $d+2$-th power.\nIn detail, let $f$, $n$, $s,$ and $d$ be as in the introduction.\nThen there is a constant $c=c(f_d)$ (depending only on $f_d$)\nsuch that for all integers $N>0$ which are not divisible by a non-trivial $d+2$-th power and all primitive $N$-th roots $\\xi$ of $1$, one has\n\\begin{equation}\\label{bound:SfNd+2}\nE_f(N,\\xi) \\le c N^{-\\frac{n-s}{d}}.\n\\end{equation}\n\\end{thm}\n\nWe will prove Theorem \\ref{lem:d+2} by making a link between $E_f(p^m,\\xi)$ and finite field exponential sums, as follows.\nFor any prime $p$, any $m>0$, any point $P$ in ${\\mathbb F}_p^n$ 
and any $\\xi$, write\n\\begin{equation}\\label{EPf}\nE^{P}_f(p^m,\\xi) :=\\abs{ \\frac{1}{p^{mn}} \\sum_{x\\in P + (p{\\mathbb Z}\/p^m{\\mathbb Z})^n} \\xi^{f(x)} }.\n\\end{equation}\nCompared to $E_f(p^m,\\xi)$, the summation set for $E^{P}_f(p^m,\\xi)$ has $p$-adically zoomed in around the point $P$.\n\n\nLet us consider the following positive characteristic analogues.\n\\begin{equation}\\label{eq:Eft}\nE_f(t^m,\\psi) := \\abs{ \\frac{1}{p^{mn} } \\sum_{x\\in ({\\mathbb F}_p[t]\/(t^m) )^n} \\psi({f(x))} },\n\\end{equation}\nand\n\\begin{equation}\\label{eq:EftP}\nE^P_f(t^m,\\psi):= \\abs{ \\frac{1}{p^{mn} } \\sum_{x\\in P + (t{\\mathbb F}_p[t]\/(t^m) )^n} \\psi'({f(x))} },\n\\end{equation}\nfor any primitive additive character\n$$\n\\psi:{\\mathbb F}_p[t]\/(t^m)\\to {\\mathbb C}^\\times,\n$$\nwhere primitive means that $\\psi$ does not factor through the projection ${\\mathbb F}_p[t]\/(t^m)\\to {\\mathbb F}_p[t]\/(t^{m-1})$.\n\nThe sums of (\\ref{eq:Eft}) and (\\ref{eq:EftP}) can be rewritten as finite field exponential sums, to which classical bounds like (\\ref{eq:deligne:s}) apply. This is done\nby identifying the summation set\nwith ${\\mathbb F}_p^{mn}$, resp.\nwith ${\\mathbb F}_p^{(m-1)n}$, namely by sending a polynomial in $t$ to its coefficients, while forgetting the constant terms in the second case.\n\nWe first prove the following variant of Theorem \\ref{lem:d+2}.\n\\begin{prop}\\label{lem:d+2:t}\nLet $f$, $n$, $s,$ and $d$ be as in the introduction. Then there is a constant $M$ (depending only on $f_d$)\nsuch that for all primes $p$ with $p>M$, all integers $m>0$ with $m \\le d+1$, and all primitive additive characters $\\psi:{\\mathbb F}_p[t]\/(t^m)\\to {\\mathbb C}^\\times$\none has\n\\begin{equation}\\label{bound:SfNd+2:t}\nE_f(t^m,\\psi) \\le p^{-m\\cdot \\frac{n-s}{d}}.\n\\end{equation}\n\\end{prop}\n\\begin{proof}\nBy a reasoning as for the square free and the cube free case, it is sufficient to treat the case that $s=0$. 
So, we may assume that $s=0$, and, by the square free case treated above, that $m>1$. We also may assume that $d\ge 3$.\nFor each $p$, let $C_p$ be the set of critical points of the reduction of $f$ modulo $p$.\nSince $s=0$, one has $\# C_p \le c_1$ for some constant $c_1$ depending only on $n$ and $d$, see for example the final inequality of \\cite{Heath-B-cube-free}.\nClearly we have\n\\begin{equation}\\label{eq:count0}\nE_f(t^m,\psi) \\le \\sum_{P\in C_p} E^{P}_f(t^m,\psi)\n\\end{equation}\nfor all primes $p>d$, all $m>1$ and all primitive $\\psi:{\mathbb F}_p[t]\/(t^m)\to {\mathbb C}^\times$, since the sub-sums around the points $P$ outside $C_p$ vanish when $m>1$.\nFor $m< d$, note that\n\\begin{equation}\\label{eq:countP}\n\\frac{1}{p^{mn}} \\cdot \\# (t{\mathbb F}_p[t]\/(t^m))^n = p^{ n(m-1) -mn} = p^{-n} < p^{-mn\/d}.\n\\end{equation}\nFor $m< d$, we thus find by (\\ref{eq:count0}) that\n\\begin{equation}\\label{eq:countPP}\nE_f(t^m,\psi)\le c_1 p^{-n},\n\\end{equation}\nand (\\ref{bound:SfNd+2:t}) follows when $m<d$, for large $p$, since the constant $c_1$ is eaten by the extra saved power of $p$.\nIf $m=d$ and $p$ is such that the reduction of $f$ modulo $p$ is smooth homogeneous, then there is nothing left to prove since then $E_f(t^m,\psi)=E^{P_0}_f(t^m,\psi) =p^{-n}=p^{-mn\/d}$, with $P_0=\{0\}$.\nIf $m=d$, the reduction of $f$ modulo $p$ is not smooth homogeneous, and $p$ is larger than $d$, then there is a constant $c_2$ (depending only on $n$ and $d$) such that\n$$\nE^{P}_f(t^m,\psi) \\le c_2 p^{-n - 1\/2}\n$$\nfor all $P$ in $C_p$ and all primitive $\\psi$. Indeed, this follows from the worst case of (\\ref{eq:deligne-ga}) applied to $E^P_f(t^m,\psi)$, after rewriting it as a finite field exponential sum. The case $m=d$ for (\\ref{bound:SfNd+2:t}) follows, where the constant $c_2$ is eaten by the extra saved power of $p$. 
For $m=d+1$, when we rewrite\n$E^P_f(t^m,\psi)$ for $P\in C_p$ as a finite field exponential sum over $(m-1)n$ variables running over ${\mathbb F}_p$, we can again apply (\\ref{eq:deligne-ga}), now in $n(m-1)$ variables and with highest degree part $f_d$ which has singular locus of dimension $n(m-2)$ when seen inside ${\mathbb A}^{n(m-1)}$. Since $d\ge 3$, we can again use the extra saved power of $p$ to obtain (\\ref{bound:SfNd+2:t}) and the proposition is proved.\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of Theorem \\ref{lem:d+2}]\nWe show,\nfor all large primes $p$, all integers $m>0$ with $m \\le d+1$, and all primitive $p^m$-th roots of $1$, that\n\\begin{equation}\\label{bound:SfNd+2:pp}\nE_f(p^m,\xi) \\le p^{-m\\cdot \\frac{n-s}{d}},\n\\end{equation}\nwhere moreover the lower bound on $p$ depends only on $f_d$.\nFor $m\\not=d$ this follows at once from the transfer principle for bounds from Theorem 3.1 of \\cite{CGH4} and the corresponding power savings when $m\\not=d$ in the proof of Proposition \\ref{lem:d+2:t}.\nIndeed, the transfer principle holds uniformly in $f$ as long as $f_d$ and $n$ are fixed, since the bounds from Proposition \\ref{lem:d+2:t} have this kind of uniformity.\nLet us now treat the remaining case of $m=d$. It is again enough to treat the case $s=0$. 
For a prime $p>d$ such that the reduction of $f$ modulo $p$ is not homogeneous, we are done similarly by the transfer principle for bounds from \\cite{CGH4} and the corresponding power savings in the proof of Proposition \\ref{lem:d+2:t}.\nIf $m=d$ and $p>d$ is such that the reduction of $f$ modulo $p$ is smooth homogeneous, then we have that $P_0=\\{0\\}$ is the unique critical point of the reduction of $f$ modulo $p$, and thus\n$$\nE_f(p^m,\\xi)=E^{P_0}_f(p^m,\\xi) \\le p^{-n}=p^{-mn\/d}.\n$$\nThe proof of Theorem \\ref{lem:d+2} is thus finished.\n\\end{proof}\n\n\n\n\n\\section{The non-degenerate case}\\label{sec:CAN}\n\nIn this section we show that Conjecture \\ref{con:SfN} with $\\varepsilon=0$ holds for non-degenerate polynomials, where the non-degeneracy condition is with respect to the Newton polyhedron of $f-f(0)$ at zero, as in \\cite{CAN}. The non-degeneracy condition generalizes the situation where $f$ is a sum of monomials in separate variables, like $x_1x_2+x_3x_4$.\nIn detail, writing $f(x)-f(0) = \\sum_{i\\in{\\mathbb N}^n} \\beta_i x^i$ in multi-index notation, let\n$$\n\\operatorname{Supp} (f) :=\\{i\\in{\\mathbb N}^n\\mid \\beta_i\\not=0\\}\n$$\nbe the support of $f-f(0)$. Further, let\n$$\n\\Delta_0(f) := \\operatorname{Conv} (\\operatorname{Supp} (f) + ({\\mathbb R}_{\\geq 0})^n)\n$$\nbe the convex hull of the sum-set of $\\operatorname{Supp} (f)$ with $({\\mathbb R}_{\\geq 0})^n$, where ${\\mathbb R}_{\\geq 0}$ is $\\{x\\in{\\mathbb R}\\mid x\\ge 0\\}$. The set $\\Delta_0(f)$ is called the Newton polyhedron of $f-f(0)$ at zero. For each face $\\tau$ of the polyhedron $\\Delta_0(f)$, consider the polynomial\n$$\nf_\\tau := \\sum_{i\\in\\tau} \\beta_i x^i.\n$$\nCall $f$ non-degenerate w.r.t.~$\\Delta_0(f)$ when for each face $\\tau$ of $\\Delta_0(f)$ and each critical point $P$ of $f_\\tau:{\\mathbb C}^n\\to{\\mathbb C}$, at least one coordinate of $P$ equals zero.
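The non-degeneracy condition can be tested directly on small examples. The following sketch (exact rational arithmetic, standard library only) verifies for $f=x^3+y^3+xy$, the $a=b=c=1$ instance of the family in Remark \ref{rem:gen} below, that the face $\tau=\Delta_0(f)$ itself already violates the condition: $f_\tau=f-f(0)$ has a critical point with no zero coordinate.

```python
from fractions import Fraction

# For f(x, y) = x^3 + y^3 + x*y, solve grad f = 0: eliminating y = -3x^2
# from the first equation leaves x*(27*x^3 + 1) = 0 in the second, so
# besides the origin there is the critical point (-1/3, -1/3).
def grad(x, y):
    # gradient of f(x, y) = x^3 + y^3 + x*y
    return (3 * x * x + y, 3 * y * y + x)

P = (Fraction(-1, 3), Fraction(-1, 3))
assert grad(*P) == (0, 0)          # a critical point of f_tau = f - f(0) ...
assert P[0] != 0 and P[1] != 0     # ... with no zero coordinate: degenerate
```

Since the condition must hold for every face of $\Delta_0(f)$, this single critical point already shows $f$ is not non-degenerate in the above sense.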
Recall that a complex point $P\\in{\\mathbb C}^n$ is called a critical point of $f_\\tau$ if $\\partial f_\\tau \/\\partial x_i (P)=0$ for all $i=1,\\ldots,n$.\nLet $\\sigma_f$ be the unique real value such that $(1\/\\sigma_f,\\ldots,1\/\\sigma_f)$ is contained in a proper face of $\\Delta_0(f)$.\n\nFinally, let $\\kappa$ denote the maximal codimension in ${\\mathbb R}^n$ of $\\tau$ when $\\tau$ varies over the faces of $\\Delta_0(f)$ containing $(1\/\\sigma_f,\\ldots,1\/\\sigma_f)$.\n\nThe following result slightly extends the main result of \\cite{CAN}, as it covers small primes as well.\n\n\\begin{prop}\\label{prop:CANsigma}\nSuppose that $f$ is non-degenerate with respect to~$\\Delta_0(f)$.\nThen, there is a constant $c$ such that for all primes $p$, all integers $m\\ge 2$ and all primitive $p^m$-th roots of unity $\\xi$ one has\n\\begin{equation}\\label{eq:CAN}\nE_f(p^m,\\xi) \\le c p^{-m\\sigma_f}m^{\\kappa-1}.\n\\end{equation}\n\\end{prop}\n\nThe following is the main result of this section.\n\\begin{thm}\n\\label{prop:CAN}\nLet $f$, $n$, $s$, and $d$ be as in the introduction. Suppose that $f$ is non-degenerate w.r.t.~$\\Delta_0(f)$.\nThen Conjecture \\ref{con1} with $\\varepsilon=0$ holds for $f$. Namely,\nthere is $c$ such that for all integers $N>0$ and all primitive $N$-th roots of unity $\\xi$, one has\n$$E_f(N,\\xi) \\le c N^{-\\frac{n-s}{d}}.$$\n\nFurthermore, for all large primes $p$ (with `large' depending on $f$), all integers $m>0$ and all primitive $p^m$-th roots of unity $\\xi$ one has\n$$E_f(p^m,\\xi) \\le p^{-m\\frac{n-s}{d}}.$$\n\\end{thm}\n\n\n\n\n\n\\begin{proof}[Proof of Proposition \\ref{prop:CANsigma}]\nIn \\cite{CAN} it is shown that one can take a constant $c$ so that (\\ref{eq:CAN}) holds for all large primes $p$ and all $m\\ge 2$.
So, it only remains to prove that for each prime $p$ there is\na constant $c_p$ (depending on $p$) such that for each integer $m\\ge 2$ one has\n\\begin{equation}\\label{eq:CANp}\nE_f(p^m,\\xi) \\le c_p p^{-m\\sigma_f}m^{\\kappa-1}.\n\\end{equation}\nFirst, if $f$ is non-degenerate w.r.t.~$\\Delta_0(f)$, we show that $f(0)$ is the only possible critical value of $f$, by induction on $n$. If $n=1$, by the non-degeneracy of $f$, we get that $f$ has no critical point in ${\\mathbb C}^\\times$ and we are done. Now suppose that $n>1$. Let $f$ be a polynomial in $n$ variables which is non-degenerate w.r.t.~$\\Delta_0(f)$. Suppose that $u=(u_1,...,u_n)$ is a critical point of $f$. By the non-degeneracy of $f$ there exists $1\\leq j\\leq n$ such that $u_j=0$. Without loss of generality we can suppose that $j=n$. We can write $f=f(0)+ \\sum_{i=0}^r g_i(x_1,...,x_{n-1})x_n^i$ with furthermore $g_0(0)=0$. It is sufficient to show that $f(u_1,...,u_n)=f(0)$. Since $u_n=0$, it suffices to show that $g_0(u_1,...,u_{n-1})=0$. By the fact that $f$ is non-degenerate w.r.t.~$\\Delta_0(f)$ we get that $g_0$ is non-degenerate w.r.t.~$\\Delta_0(g_0)$. It is clear that $(u_1,...,u_{n-1})$ is a critical point of $g_0$. So, we can use the inductive hypothesis to deduce that $g_0(u_1,...,u_{n-1})=g_0(0)=0$.\nNow, since $f$ has no other possible critical value than $f(0)$ and since there exists a toric log-resolution of $f-f(0)$ whose numerical properties are controlled by $\\Delta_0(f)$ (see for example \\cite{Varchen}), inequality (\\ref{eq:CANp}) follows from Igusa's work in \\cite{Igusa3} and the proposition is proved.\n\\end{proof}\n\n\nThe proof of Theorem \\ref{prop:CAN} will rely on Proposition \\ref{prop:CANsigma} and the following lemma.\n\n\\begin{lem}\\label{lem:CAN} Let $f$, $n$, $s$, and $d$ be as in the introduction. Suppose that $d\\geq 3$.
Then one has $\\sigma_f \\ge (n-s)\/d$, and, equality holds if and only if there is a smooth form $g$ of degree $d$ in $n-s$ variables \nsuch that\n$$\nf(x) - f(0)\n=g(x_{i_1},\\ldots,x_{i_{n-s}})\n$$\nfor some $i_j$ with $1\\leq i_1<\\ldots<i_{n-s}\\leq n$.\n\\end{lem}\n\nThe case $s=0$ of Lemma \\ref{lem:CAN} relies on the following auxiliary lemma.\n\n\\begin{lem}\\label{s=0}\nSuppose that $s=0$ and $d\\geq 3$. Then $(d\/n,\\ldots,d\/n)$ belongs to $\\Delta_0(f)$.\n\\end{lem}\n\n\\begin{proof}\nSuppose not. Then there are integers $k_1,\\ldots,k_n$, not all zero, such that the hyperplane $H:=\\{x\\in{\\mathbb R}^n\\mid \\sum_{i=1}^n k_ix_i=0\\}$ contains $(d\/n,\\ldots,d\/n)$ and such that $\\operatorname{Supp}(f)\\subset H_+:=\\{x\\in{\\mathbb R}^n\\mid \\sum_{i=1}^n k_ix_i\\geq 0\\}$. Let $I$ be the subset of $\\{1,\\ldots,n\\}$ consisting of $i$ with $k_i>0$ and let $J$ be the subset of $\\{1,\\ldots,n\\}$ consisting of $j$ with $k_j<0$. Clearly $I$ and $J$ are disjoint. Since $(d\/n,\\ldots,d\/n)$ belongs to $H$, it follows that $I$ and $J$ are both nonempty and that\n\\begin{equation}\\label{=}\n\\sum_{i\\in I} k_i=\\sum_{j\\in J} |k_j|.\n\\end{equation}\nFurthermore, the inclusion $\\operatorname{Supp}(f)\\subset H_+$ implies that\n\\begin{equation}\\label{*}\n\\sum_{i\\in I} k_ia_i\\geq \\sum_{j\\in J} |k_j|a_j \\mbox{ for all $a\\in \\operatorname{Supp}(f)$. }\n\\end{equation}\n\n\nConsider the set\n$\n\\operatorname{Supp}_0(f)\n$\n consisting of those $a\\in \\operatorname{Supp}(f)$ with moreover $a_i=1$ for some $i\\in I$ and $a_{i'}=0$ for all $i'\\not\\in J\\cup \\{i\\}$.\nFor $a\\in \\operatorname{Supp}_0(f)$ write $t(a)$ for the unique $i\\in I$ with $a_i=1$ and write\n$$\nI_0:=\\{i\\in I\\mid \\exists a\\in \\operatorname{Supp}_0(f) \\mbox{ with } t(a)=i\n\\}.\n$$\nClearly we can write\n$$\nf(x)=\\sum_{i\\in I_0}x_ig_i(x_j)_{j\\in J}+\\sum_{a\\in \\operatorname{Supp}(f)\\setminus \\operatorname{Supp}_0(f)\n}\\beta_{a}x^a\n$$\nfor some polynomials $g_i$ in the variables $(x_j)_{j\\in J}$.\nAlso, the algebraic set\n$$\n\\bigcap_{i\\notin J}\\{x_i=0 \\} \\bigcap_{i\\in I_0}\\{g_i=0\\}\n$$\nin ${\\mathbb A}^n_{\\mathbb C}$ has dimension at least $|J|-|I_0|$ and is contained in $\\operatorname{Crit}_f$, the critical locus of $f$.
By our smoothness condition $s=0$, this implies\n\\begin{equation}\\label{I0J}\n|I_0|\\geq |J|.\n\\end{equation}\nHence, we can write $I_0=\\{i_1,\\ldots,i_\\ell\\}$ and $J=\\{j_1,\\ldots,j_m\\}$ with $m\\leq \\ell$ and with\n\\begin{equation}\\label{kikj}\nk_{i_1}\\geq k_{i_2}\\geq\\ldots\\geq k_{i_{\\ell}} \\mbox{ and } |k_{j_1}|\\geq |k_{j_2}|\\geq\\ldots\\geq |k_{j_{m}}|.\n\\end{equation}\nTo prove the lemma it is now sufficient to show that\n\\begin{equation}\\label{kirjr}\nk_{i_{r}}>|k_{j_r}| \\mbox{ for all $r$ with } 1\\leq r\\leq m.\n\\end{equation}\nIndeed, (\\ref{kirjr}) gives a contradiction with (\\ref{=}).\nTo prove (\\ref{kirjr}), we suppose that there is $r_0$ with $1\\leq r_0\\leq m$ and with\n\\begin{equation}\\label{kr0r1}\nk_{i_{r_0}}\\le |k_{j_{r_0}}|\n\\end{equation}\nand we need to find a contradiction.\nIf there exists $a\\in \\operatorname{Supp}_0(f)$ such that $a_{j_{r_1}}\\ge 1$ for some $r_1\\le r_0$, then let $a$ be such an element and let $t$ be $t(a)$; otherwise, let $a$ be arbitrary and put $t=0$.\nWe will now show that $t<r_0$; when $t=0$ this is clear since $r_0\\geq 1$, so assume from now on that $t\\geq 1$.\n\nSince $d\\ge 3$ and $a\\in \\operatorname{Supp}_0(f)$,\nwe find by (\\ref{*}) and (\\ref{kikj}) that\n\\begin{equation}\\label{kit}\nk_{i_{t}} = \\sum_{i\\in I} k_ia_i \\geq \\sum_{j\\in J} a_{j}|k_{j}| > |k_{j_{r_1}}|\\ge |k_{j_{r_0}}|.\n\\end{equation}\nTogether with (\\ref{kikj}) and (\\ref{kr0r1}),\nthis implies that $t< r_0$ as desired.\n\nWe can thus write\n\\begin{equation}\\label{fr}\nf=\\sum_{1\\leq \\ell \\le r_0-1}x_{i_\\ell}h_\\ell(x_{j})_{j\\in J}+\\sum_{a\\in A\n}\\beta_ax^a\n\\end{equation}\nwith\n$$\nA=\\{a\\in \\operatorname{Supp}(f)\\mid \\sum_{i\\notin\\{j_1,\\ldots,j_{r_0}\\}}a_i\\geq 2\\}\n$$\nand with some polynomials $h_\\ell$ in the variables $(x_j)_{j\\in J}$.\nIt follows that the algebraic set\n$$\n\\bigcap_{i\\notin \\{j_1,\\ldots,j_{r_0}\\}}\\{x_i=0\\}\\bigcap_{1\\leq \\ell\\leq r_0-1}\\{h_{\\ell}=0\\}\n$$\nhas dimension at least $1$ and is contained in $\\operatorname{Crit}_f$,
again a contradiction with our smoothness assumption $s=0$. So, relation (\\ref{kirjr}) follows and the lemma is proved.\n\\end{proof}\n\nThe case of Lemma \\ref{lem:CAN} with $s=0$ is derived from Lemma \\ref{s=0}, as follows.\n\\begin{proof}[Proof of Lemma \\ref{lem:CAN} with $s=0$]\nLet $f$ be of degree $d\\geq 3$ and with $s=0$. We need to show that $\\sigma_f \\ge n\/d$, and, that $\\sigma_f = n\/d$ if and only if $f=f_d$.\nSince $f_d$ is smooth, Lemma \\ref{s=0} implies that $(d\/n,\\ldots,d\/n)$ belongs to $\\Delta_0(f)$,\nand hence, $\\sigma_f\\geq n\/d$. Suppose now that\n$f\\neq f_d$. Then there exists $a\\in \\operatorname{Supp}(f)$ with $\\sum_{i=1}^n a_i<d$. Since moreover $(d\/n,\\ldots,d\/n)$ belongs to $\\Delta_0(f)$, one checks that there is $\\epsilon>0$ such that\n$$\n\\{x\\in{\\mathbb R}^n\\mid ||x-(d\/n,\\ldots,d\/n)||\\leq \\epsilon\n\\} \\subset \\Delta_0(f).\n$$\nTherefore it is clear that $\\sigma_f>n\/d$.\nThis finishes the proof of Lemma \\ref{lem:CAN} with $s=0$.\n\\end{proof}\n\\begin{proof}[Proof of Lemma \\ref{lem:CAN} with $s>0$] To prove the lemma with $s>0$ we may suppose that\n\\begin{equation}\\label{eq:sf}\n\\sigma_f\\leq (n-s)\/d.\n\\end{equation}\n By the definition of $\\sigma_f$ we have\n\\begin{equation}\\label{eq:sf2}\n\\min_{a\\in \\operatorname{Conv}(\\operatorname{Supp}(f))}\\max(a)=1\/\\sigma_f,\n\\end{equation}\nwhere $\\max(a)=\\max_{1\\leq i\\leq n}\\{a_i\\}$ and where $\\operatorname{Conv}(\\operatorname{Supp}(f))$ is the convex hull of $\\operatorname{Supp}(f)$.\nWe set\n$$\nk:=\\min_{\\max(a)=1\/\\sigma_f} \\# \\{i\\mid a_i=1\/\\sigma_f\\},\n$$\nwhere the minimum is taken over $a\\in \\operatorname{Conv}(\\operatorname{Supp}(f))$.\nLet $a\\in \\operatorname{Conv}(\\operatorname{Supp}(f))$ realize this minimum, namely, with $\\# \\{i\\mid a_i=1\/\\sigma_f\\}=k$ and with $\\max(a)=1\/\\sigma_f$.\nWe may suppose that\n\\begin{equation*}\\label{eq:maxa}\na_1=\\ldots =a_k=1\/\\sigma_f \\mbox{ and $a_i<1\/\\sigma_f$ if } i>k.\n\\end{equation*}\nLet $b\\in \\operatorname{Conv}(\\operatorname{Supp}(f))$ be such that
$\\max(b)=1\/\\sigma_f$. Then, for each $\\lambda\\in[0,1]$, the point $c_\\lambda:=\\lambda a+(1-\\lambda)b$ lies in $\\operatorname{Conv}(\\operatorname{Supp}(f))$. When $\\lambda$ is sufficiently close to $1$, then we have $c_{\\lambda,i}<1\/\\sigma_f$ for all $i>k$, and, the definition of $k$ implies that $b_i=1\/\\sigma_f$ for all $1\\leq i\\leq k$. By the same reasoning, for each $b\\in \\operatorname{Conv}(\\operatorname{Supp}(f))$ one has $b_i\\geq 1\/\\sigma_f$ for some $i$ with $1\\leq i\\leq k$. The definition of $k$ and (\\ref{eq:sf2}) also tell us that $k\/\\sigma_f\\leq d$, and thus we find\n\\begin{equation}\\label{eq:sf3}\nk\\leq n-s\n\\end{equation}\nfrom (\\ref{eq:sf}). For any tuple of complex numbers $C=(c_{i,j})_{1\\leq i\\leq s,\\,1\\leq j\\leq n-s}$\nwe consider the polynomial\n$$\ng_C=f(x_1,\\ldots,x_{n-s}, x_{n-s+1}+\\sum_{1\\leq j\\leq n-s}c_{1,j}x_j,\\ldots,x_n+\\sum_{1\\leq j\\leq n-s}c_{s,j}x_j).\n$$\nFor a generic choice of $C$ one has $\\operatorname{Supp}(f)\\subset \\operatorname{Supp}(g_C)$. Furthermore, we show that for a generic choice of $C$ the polynomial\n$$\nh_C=f_d(x_1,\\ldots,x_{n-s}, \\sum_{1\\leq j\\leq n-s}c_{1,j}x_j,\\ldots,\\sum_{1\\leq j\\leq n-s}c_{s,j}x_j)\n$$\nis smooth homogeneous in $n-s$ variables, where $f_d$ is the degree $d$ homogeneous part of $f$.\nFor a generic choice of $e_{n}=(e_{n,i})_{i < n}$ one has\n$$\n\\dim(\\operatorname{Sing}(f_{d,e_{n}}))=n-s,\n$$\nwhere\n$$\nf_{d,e_{n}}(x_1,\\ldots,x_{n-1}) :=f_{d}(x_1,\\ldots,x_{n-1}, \\sum_{i=1}^{n-1}e_{n,i}x_i),\n$$\nconsidered as a polynomial in $n-1$ variables $x_i$ with $i<n$. Continuing in this way with the remaining variables, one finds that for a generic choice of $C$ the polynomial $h_C$ is indeed smooth homogeneous in the $n-s$ variables $x_1,\\ldots,x_{n-s}$; the lemma then follows from the case $s=0$ applied in $n-s$ variables.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{prop:CAN}]\nBy Proposition \\ref{prop:CANsigma} there is a constant $c_2$ such that for all integers $m>1$, all primes $p$ and all primitive $p^m$-th roots of unity $\\xi$ we have\n\\begin{equation}\\label{m>1}\nE_f(p^m,\\xi) \\le c_2p^{-m\\sigma_f}m^{\\kappa-1}.\n\\end{equation}\nBy Lemma \\ref{lem:CAN} we have $\\sigma_f\\geq \\frac{n-s}{d}$.
If $\\sigma_f>\\frac{n-s}{d}$, then we have $\\frac{n-s}{d}<\\frac{n-s}{2}$, from using $d\\ge 3$ and $s<n$, and the theorem follows by combining (\\ref{m>1}) with the square free case from Section \\ref{sec:ev3}.\nIf $\\sigma_f=\\frac{n-s}{d}$, we use Lemma \\ref{lem:CAN} again to see that $f-f(0)=g_d$ for a smooth form $g_d$ of degree $d$ in $n-s$ variables. Conjecture \\ref{con:SfN} for this case follows by Igusa's case from Section \\ref{sec:ev1}.\n\\end{proof}\n\n\n\n\n\n\n\\begin{remark}\\label{rem:gen}\nThe genericity of our notion of non-degeneracy depends in fact on $\\operatorname{Supp}(f)$, and does not always hold.\nWhen $\\operatorname{Supp}(f)$ is contained in a hyperplane which does not go through the origin $0$ and has a normal vector with non-negative coordinates (see \\cite[Sec 2.2]{CAN}), then the condition of non-degeneracy on the coefficients $\\beta_i$ is generic within this support, that is, for any $\\gamma$ outside a Zariski closed subset of ${\\mathbb C}^{\\operatorname{Supp} (f)}$, the polynomial $\\sum_{i\\in \\operatorname{Supp} (f)} \\gamma_i x^i$ is non-degenerate w.r.t.~its Newton polyhedron at zero. This hyperplane condition generalizes the case of weighted homogeneous polynomials. However, in the general case, this genericity may be lost, since we imposed conditions on critical points of $f_\\tau$ instead of on singular points as is more usual. For instance, polynomials of the form $f(x)=ax^3+by^3+cxy$ for nonzero $a$, $b$, and $c$ are never non-degenerate in our sense.\n\\end{remark}\n\n\\section{The four variable case}\\label{sec:n=4}\nIn this section we prove Conjecture \\ref{con1} when $n\\le 4$ (Theorem \\ref{allp}), and some stronger results when furthermore $d=3$ (Proposition \\ref{prop:3} and its corollary). \n\\begin{thm}\\label{allp}\nLet $f$, $n$, $s,$ and $d$ be as in the introduction and suppose that \n$n\\le 4$.
Then Conjecture \\ref{con1} holds for $f$.\n\\end{thm}\n\nWe will derive Theorem \\ref{allp} from the following key proposition for $f$ of degree $3$ in $n\\le 4$ variables.\n\n\n\\begin{prop}\\label{prop:3}\nIf $n\\leq 4$ and $d\\le 3$, then the bounds from (\\ref{bound:SfNfull}) hold for $f$, that is, for all $\\varepsilon>0$ one has\n\\begin{equation}\\label{bound:3}\nE_f(N,\\xi) \\ll N^{-\\hat\\alpha_f+\\varepsilon} \\mbox{ for all $\\xi$ and all squareful integers $N$},\n\\end{equation}\nwith $\\hat{\\alpha}_f$ as in Section \\ref{subs:minimal:exp}. Moreover,\n$\\hat{\\alpha}_f$ is the optimal exponent in (\\ref{bound:3}); indeed, $\\hat{\\alpha}_f$ equals the motivic oscillation index of $f$ as given in \\cite{CMN}.\n\\end{prop}\nThe optimality of the exponent $\\hat{\\alpha}_f$ in (\\ref{bound:3}) means that there is a constant $c_0>0$ such that for infinitely many $N>0$ one has\n\\begin{equation}\\label{lowerb}\nc_0N^{-\\hat{\\alpha}_f} \\le E_f(N,\\xi) \\mbox{ for some primitive $N$-th root $\\xi$ of unity.}\n\\end{equation}\nThe motivic oscillation index of $f$ as given in \\cite{CMN} (which corresponds to \\cite{Cigumodp} but without the negative sign) is the optimal exponent of $p^{-m}$ in (\\ref{bound:3}), see Section 3.4 of \\cite{CMN}; therefore, the equality with $\\hat{\\alpha}_f$ is a useful property.\n\n\n\n\n\\begin{proof}[Proof of Theorem \\ref{allp}]\nSuppose that $n\\le 4$.\nIf $d=2$ or $(n-s)\/d\\leq 1$, then Conjecture \\ref{con:SfN} follows by the argument in Section \\ref{sec:ev2}.\nHence, we may concentrate on the case that $d=3$ and $s=0$.\nBut this case follows from Proposition \\ref{prop:3} and the square free case from Section \\ref{sec:ev3}, together with (\\ref{bound:min:exp}).\n\\end{proof}\n\nThe following auxiliary lemma is well known, see for example the final inequality of \\cite{Heath-B-cube-free}, where furthermore an explicit upper bound on the number of critical points is
obtained.\n\\begin{lem}\\label{criticalpoint}\nSuppose that $g=g_0+...+g_d$ is a polynomial in ${\\mathbb C}[x_1,...,x_n]$ of degree $d$ and with $\\dim(\\operatorname{Crit}_{g_d})=0$, where $g_i$ is the degree $i$ homogeneous part of $g$, and where $\\operatorname{Crit}_{g_d}$ is the critical locus of $g_d:{\\mathbb C}^n\\to{\\mathbb C}$.\nThen $\\operatorname{Crit}_g$ is a finite set, with $\\operatorname{Crit}_g$ the critical locus of $g$.\n\\end{lem}\n\\begin{proof}\nThis is shown by homogenizing $g$ as in the reasoning towards the final inequality of \\cite{Heath-B-cube-free}, where it is even shown that\n$\\# \\operatorname{Crit}_g \\le (d-1)^n$, by an application of B\\'ezout's theorem.\n\\end{proof}\n\n\n\\begin{proof}[Proof of Proposition \\ref{prop:3}]\nIf $(n-s)\/d\\leq 1$, then the bounds (\\ref{bound:3}) follow from \\cite{CMN}, as explained in Section \\ref{sec:ev2}, and, the optimality and equality with the motivic oscillation index is treated in the last section of \\cite{CMN}.\nWe may thus suppose that $s=0$.\nBy Lemma \\ref{criticalpoint} it is sufficient to show that there exist $C>0$ and $M$ such that for all primes $p>M$, for all integers $m\\ge 2$, all points $P$ in ${\\mathbb F}_p^n$, and all primitive $p^m$-th roots of unity $\\xi$ we have\n\\begin{equation}\\label{m>1, p>M}\nE^{P}_f(p^m,\\xi)\\leq C p^{-m(\\hat{\\alpha}_f -\\varepsilon) },\n\\end{equation}\nwith $E^{P}_f$ from (\\ref{EPf}).\n\n\nIf there exists a point $a\\in \\operatorname{Crit}_f({\\mathbb Z}_p)$ such that the multiplicity of $f$ at $a$ is $3$, then $E_f(p^m,\\xi)=E_{f_3}(p^m,\\xi)$ for all $m\\geq 1$ and all primitive $p^m$-th roots of unity $\\xi$. So, we are done by the case of $f=f_d$ with $s=0$ as treated in Section \\ref{sec:ev1}, and, $\\hat{\\alpha}_f=4\/3$ which equals the motivic oscillation index in this case, by the optimality mentioned in Section \\ref{sec:ev1} and by the last section of \\cite{CMN}.
If $\\operatorname{Crit}_f=\\emptyset$ then the bounds (\\ref{bound:3}) are clear, since in this case one has $E_f(p^m,\\xi)=0$ as soon as $p$ is large and $m\\ge 2$, and similarly for any $p$ once $m$ is large enough.\n\nNow suppose that $f$ has multiplicity $2$ at $0$. We focus on the bounds (\\ref{m>1, p>M}) for $E^{P}_f$ with $P=\\{0\\}$. By Lemma \\ref{lem:T} we may suppose that $f_2$ is diagonal. By Weierstrass preparation in the ring $R[[x,y,z,w]]$ with $R$ a ring of the form ${\\mathbb Z}[1\/M]$ for some integer $M>0$, we may suppose that $f(x,y,z,w)$ equals $u(x,y,z,w)\\,(x^2 + x\\,h(y,z,w) + g(y,z,w))$ for some $g,h$ in $R[[y,z,w]]$ and some unit $u$ in $R[[x,y,z,w]]$. By the general theory of local zeta functions of \\cite{DenefBour}, it is thus sufficient to prove the proposition for $f$ being $x^2 + xh_D(y,z,w) + g_D(y,z,w)$ where $h_D$ and $g_D$ are polynomials which coincide with $h$ and $g$ up to degree $D$, for some large $D>0$. By substituting $x-h_D\/2$ for $x$, we may assume that $h_D=0$. Now, if $g_D$ has multiplicity $3$ or more at zero, then $\\tilde \\alpha_{g_D}\\le 1$, and, we are done by \\cite{CMN} for $g_D$. If $g_D$ has multiplicity $2$ at zero, then we may repeat the above reduction and assume that $g_D(y,z,w)=y^2+G(z,w)$ for some polynomial $G$ in two variables. Since $\\tilde \\alpha_{G}\\le 1$, we are again done by \\cite{CMN}.\nWhen $a\\in \\operatorname{Crit}_f({\\mathbb C})$ is algebraic, then one works similarly but with Weierstrass preparation over the ring $R={\\mathbb Z}[a,1\/M]$.\n\\end{proof}\n\n \n \\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nModeling the observed properties of the Galactic population of radio\npulsars, with the purpose of inferring their intrinsic properties, has\nbeen the subject of extensive investigation for several decades\n(e.g.
Gunn \\& Ostriker 1970; Phinney \\& Blandford 1981; Lyne et\nal. 1985; Stollman 1987; Emmering \\& Chevalier 1989; Narayan \\&\nOstriker 1990; Lorimer et al. 1993; Hartman et al. 1997; Cordes \\&\nChernoff 1998; Arzoumanian, Cordes \\& Chernoff 2002; Vranesevic et al.\n2004; Faucher-Giguere \\& Kaspi 2006; Ferrario \\& Wickramasinghe 2006).\nSince pulsars that can be detected close to their\nbirth constitute a negligible fraction of the total sample, these\nstudies generally use the {\\em present day} observed properties of\npulsars (namely their period $P$ and period derivative $\\dot{P}$),\ntogether with some assumptions about their time evolution, to\nreconstruct the birth distribution of periods and magnetic fields for\nthe pulsar population. These analyses also need to make\nassumptions about pulsar properties and their evolution (such as, for example,\nthe exact shape of the radio beam and its dependence on the period),\nas well as overcome a number of selection effects. Results from\nvarious investigations have often been conflicting, with some studies\nfavoring initial periods in the millisecond range (e.g. Arzoumanian\net al. 2002), and others instead finding more likely periods in the\nrange of several tens to several hundreds of milliseconds\n(e.g. Faucher-Giguere \\& Kaspi 2006). The efforts put over the\nyears into this area of research stem from the fact that the birth\nproperties of neutron stars (NSs) are intimately related to the\nphysical processes occurring during the supernova (SN) explosion and in the\nproto-neutron star. As such, they bear crucial information on the\nphysics of core-collapse SNe, in which most are thought to be formed.\n\nBesides the inferences on the birth parameters from the radio\npopulation discussed above, we show here that constraints can be\nderived also from the X-rays. Young, fast rotating neutron stars are\nindeed expected to be very bright in the X-rays.
In fact,\nobservationally there appears to be a correlation between the\nrotational energy loss of the star, $\\dot{E}_{\\rm rot}$, and its X-ray\nluminosity, $L_x$. This correlation was noticed by Verbunt et\nal. (1996), Becker \\& Trumper (1997), Seward \\& Wang (1988), Saito\n(1998) for a small sample of objects, and later studied by Possenti et\nal. (2002; P02 in the following) for the largest sample of pulsars\nknown to date.\n\nCombining the birth parameters derived from the radio (which determine\nthe birth distribution of $\\dot{E}_{\\rm rot}$ for the pulsars), with\nthe empirical $L_x - \\dot{E}_{\\rm rot}$ correlation, the distribution\nof X-ray luminosity can be predicted for a sample of pulsars with a\ncertain age distribution. The above calculation was performed by\nPerna \\& Stella (2004). They found that the birth parameters derived\nby Arzoumanian et al. (2002), together with the $L_x - \\dot{E}_{\\rm\nrot}$ correlation derived by P02, yield a sizable fraction of sources\nwith luminosities $\\ga 10^{39}$ erg\/s, which could hence constitute\npotential contributors to the observed population of ultra luminous\nX-ray sources (ULXs) observed in nearby galaxies (e.g. Fabbiano \\&\nWhite 2003; Ptak \\& Colbert 2004). Obviously, these predictions were\nheavily dependent on the assumed initial birth parameters (the periods\nespecially) of the pulsar population.\n\nIn this paper, we propose a new, independent method to constrain the\npulsar spin periods at birth from X-ray observations, and hence also\nassess the contribution of young, fast rotating NSs to the population\nof bright X-ray sources. Since neutron stars are born in supernova\nexplosions, and very young pulsars are still embedded in their\nsupernovae, the X-ray luminosity of the SNe provides an upper limit to\nthe luminosity of the embedded pulsars. 
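The spin-down bookkeeping underlying this kind of prediction can be sketched as follows. This is a minimal sketch with textbook assumptions (a canonical moment of inertia of 1e45 g cm^2 and constant-field magnetic dipole braking), and it deliberately does not reproduce the P02 luminosity law itself:

```python
import math

I_NS = 1e45  # canonical NS moment of inertia, g cm^2 (assumption)

def edot(P, Pdot):
    """Spin-down luminosity E_dot_rot = 4 pi^2 I Pdot / P^3, in erg/s."""
    return 4 * math.pi**2 * I_NS * Pdot / P**3

def P_at_age(P0, PPdot, t):
    """Period after time t (s) under constant-field dipole braking,
    for which the product P*Pdot = PPdot is constant in time, so that
    P(t)^2 = P0^2 + 2*PPdot*t."""
    return math.sqrt(P0**2 + 2.0 * PPdot * t)

# Crab-like values: P = 33 ms, Pdot = 4.2e-13  ->  E_dot of a few 1e38 erg/s.
assert 1e38 < edot(0.033, 4.2e-13) < 1e39

# A pulsar born spinning at 1 ms with a Crab-like field is, 30 yr after the
# supernova, still a very energetic (>~ 1e41 erg/s) rotator -- the regime
# that the SN X-ray limits discussed below can constrain.
ppdot = 0.033 * 4.2e-13              # s, Crab-like (constant) P*Pdot
t30 = 30 * 3.156e7                   # 30 yr in seconds
P30 = P_at_age(1e-3, ppdot, t30)
assert 1e41 < edot(P30, ppdot / P30) < 1e43
```

Feeding such $\dot{E}_{\rm rot}$ values through an assumed $L_x - \dot{E}_{\rm rot}$ relation is then what turns a birth-period distribution into a predicted X-ray luminosity distribution.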
We have analyzed an extensive\nsample of historical SNe whose positions have been observed by {\\em\nChandra}, {\\em XMM} or {\\em Swift}, and studied their X-ray\ncounterparts. We measured their X-ray luminosities, or derived a limit\non them in the cases of no detection. A comparison between these\nlimits and the theoretical predictions for the distribution of pulsar\nX-ray luminosities shows that, if the assumed initial spins are in the\nmillisecond range, the predicted distribution of pulsar X-ray\nluminosities via the $L_x - \\dot{E}_{\\rm rot}$ correlation is highly\ninconsistent with the SN data. Our analysis hence suggests that a\nsubstantial fraction of pulsars cannot be born with millisecond\nperiods.\n\nThe paper is organized as follows: in \\S2, we describe the method by\nwhich the SN X-ray flux measurements and limits are extracted, while\nin \\S3 we describe the theoretical model for the distribution of the\nX-ray luminosity of young pulsars. A comparison between the\ntheoretical predictions and the data is performed in \\S4, while the\nresults are summarized and discussed in \\S5.\n\n\\section{X-ray analysis of historical Supernovae observed by {\\em Chandra},\n{\\em XMM} and {\\em Swift}}\n\n\nWe compared and combined the CfA List of\nSupernovae\\footnote{http:\/\/cfa-www.harvard.edu\/iau\/lists\/Supernovae.html,\ncompiled by The Central Bureau for Astronomical Telegrams at the\nHarvard-Smithsonian Center for Astrophysics.}, the Padova-Asiago\nCatalogue\\footnote{http:\/\/web.pd.astro.it\/supern\/snean.txt}, the\nSternberg Catalogue\\footnote{VizieR On-line Data Catalog: II\/256}\n(Tsvetkov et al. 2004), and Michael Richmond's Supernova\nPage\\footnote{http:\/\/stupendous.rit.edu\/richmond\/sne\/sn.list}, to\ncreate a list of unambiguously identified core-collapse SNe (updated to\n2007 April).
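The positional cross-correlation described next can be sketched as follows. This is a standard-library sketch in which the catalogue format, the toy coordinates and the matching radius are illustrative assumptions, not the values used for the actual sample:

```python
import math

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Angular separation in degrees between two sky positions given in
    decimal degrees (spherical law of cosines)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_s = (math.sin(d1) * math.sin(d2)
             + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_s))))

def crossmatch(sne, fields, radius_deg):
    """Return the (sorted) names of SNe whose position falls within
    radius_deg of some pointing; both inputs are lists of
    (name, ra_deg, dec_deg) tuples."""
    return sorted({sn for (sn, ra, dec) in sne
                   for (_, fra, fdec) in fields
                   if ang_sep_deg(ra, dec, fra, fdec) <= radius_deg})

# Toy data (coordinates are illustrative only):
sne = [("1994I", 202.47, 47.19), ("1987A", 83.87, -69.27)]
fields = [("obs1", 202.5, 47.2)]      # one pointing near the first SN
print(crossmatch(sne, fields, 0.2))   # -> ['1994I']
```

For the real sample one would feed in the combined SN list and the mission observation logs, with a per-instrument matching radius reflecting each field of view.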
We cross-correlated the SN positions with the catalogues\nof {\\it Chandra}\/ACIS, {\\it XMM-Newton}\/EPIC and {\\it Swift}\/XRT\nobservations\\footnote{Search form at http:\/\/heasarc.nasa.gov}, to\ndetermine which SN fields have been observed by recent X-ray missions\n({\\it ASCA} was excluded because of its low spatial resolution, and\n{\\it ROSAT} because of its lack of 2--10 keV sensitivity). For the\n{\\it Chandra} ACIS-S data, we limited our search to the S3 chip. We\nobtained a list of $\\sim 200$ core-collapse SNe whose positions\nhappened to be in a field observed at least once after the event.\nFrom the list, we then selected for this paper all the core collapse\nSNe with unambiguous subtype classification (Type Ib\/c, IIn, IIL,\nIIP, and IIb). That is about half of the total sample. We leave\nthe analysis of the other $\\sim 100$ SNe (classified generically as\nType II) to a follow-up paper.\n\n\n\\setcounter{table}{0}\n\\begin{table*}\n\\begin{tabular}{lllllll}\nSN & Host galaxy & Type & Age (yr) & $L_{2-10\\;keV} (\\rm erg\/s)$ & Instrument & Observation date\\\\\n\\hline\n1923A & N5236 &IIP&77.3& $<6.0\\times 10^{35}$ & ACIS & 2000-04-29, 2001-09-04\\\\\n1926A & N4303 &IIP&75.3& $<1.4\\times 10^{37}$ & ACIS & 2001-08-07 \\\\\n1937A & N4157 &IIP&67.3& $<1.3\\times 10^{37}$ & EPIC & 2004-05-16\\\\\n1937F & N3184 &IIP&62.1& $<2.7\\times 10^{36}$& ACIS & 2000-01-08, 2000-02-03\\\\\n1940A & N5907 &IIL&63.0& $<1.0 \\times 10^{37}$ & EPIC & 2003-02-20, 2003-02-28\\\\\n1940B & N4725 &IIP&62.6& $<8.6\\times 10^{36}$ & ACIS & 2002-12-02\\\\\n1941A & N4559 &IIL&60.2& $<5.5\\times 10^{36}$& ACIS & 2001-01-14, 2001-06-04, 2002-03-14\\\\\n1948B & N6946 &IIP&55.1& $<4.7\\times 10^{35}$& ACIS & 2001-09-07, 2002-11-25, 2004-10-22, 2004-11-06, 2004-12-03\\\\\n1954A & N4214 &Ib & 48.9& $<1.6 \\times 10^{35}$& ACIS & 2003-03-09\\\\\n1959D & N7331 &IIL&41.6& $<2.2\\times 10^{37}$& ACIS & 2001-01-27\\\\\n1961V & N1058 &IIn&38.3& $<6.1\\times 10^{37}$& ACIS & 
2000-03-20\\\\\n1962L & N1073 &Ic &41.2& $< 4.7 \\times 10^{37}$& ACIS & 2004-02-09\\\\\n1962M & N1313 &IIP&40.3& $<3.7\\times 10^{35}$& ACIS & 2002-10-13, 2002-11-09, 2003-10-02, 2004-02-22\\\\\n1965H & N4666 &IIP&37.7& $<1.5\\times 10^{38}$ & ACIS & 2003-02-14\\\\\n1965L & N3631 &IIP&37.8& $<5.7\\times 10^{36}$& ACIS & 2003-07-05\\\\\n1968L & N5236 &IIP&32.0& $< 1.5\\times 10^{36}$ & ACIS & 2000-04-29, 2001-09-04\\\\ \n1969B & N3556 &IIP&32.6& $<3.8\\times 10^{36}$& ACIS & 2001-09-08\\\\\n1969L & N1058 &IIP&30.3& $<4.8\\times 10^{37}$& ACIS & 2000-03-20\\\\\n1970G & N5457 &IIL&33.9& $<4.9\\times 10^{36}$& ACIS & 2004-07-05, 2004-07-11\\\\\n1972Q & N4254 &IIP&30.5& $<3.0\\times 10^{38}$& EPIC & 2003-06-29\\\\\n1972R & N2841 &Ib &31.9& $< 7.3\\times 10^{35}$& EPIC & 2004-11-09\\\\\n1973R & N3627 &IIP&25.9& $<7.7\\times 10^{37}$& ACIS & 1999-11-03\\\\\n1976B & N4402 &Ib& 26.2& $<8.9\\times 10^{37}$& EPIC & 2002-07-01\\\\\n1979C & N4321 &IIL&26.8& $2.7^{+0.4}_{-0.4}\\times 10^{38}$& ACIS & 2006-02-18\\\\\n1980K & N6946 &IIL&24.0& $<6.5\\times 10^{36}$& ACIS & 2004-10-22, 2004-11-06, 2004-12-03\\\\\n1982F & N4490 &IIP&22.6& $<1.1\\times 10^{36}$& ACIS & 2004-07-29, 2004-11-20\\\\\n1983E & N3044 &IIL&19.0& $<4.6\\times 10^{37}$& EPIC & 2001-11-24, 2002-05-10\\\\\n1983I & N4051 &Ic &17.8 & $< 1.7\\times 10^{36}$& ACIS & 2001-02-06\\\\\n1983N & N5236 &Ib &16.8& $< 5.5\\times 10^{36}$& ACIS & 2000-04-29\\\\\n1983V & N1365 &Ic &19.1& $< 7.0\\times 10^{37}$& ACIS & 2002-12-24\\\\\n1985L & N5033 &IIL&14.9& $<8.1\\times 10^{37}$& ACIS & 2000-04-28\\\\\n1986E & N4302 &IIL&19.6& $1.4^{+0.5}_{-0.5}\\times 10^{38}$& ACIS & 2005-12-05\\\\\n1986I & N4254 &IIP&17.1& $<3.0\\times 10^{38}$& EPIC & 2003-06-29\\\\\n1986J & N891 &IIn&21.2 & $8.5^{+0.5}_{-0.5}\\times 10^{38}$& ACIS & 2003-12-10\\\\\n1986L & N1559 &IIL&18.9& $<1.4\\times 10^{38}$& EPIC & 2005-08-10, 2005-10-12\\\\\n1987B & N5850 &IIn&14.1& $< 1.5\\times 10^{38}$& EPIC & 2001-01-25, 2001-08-26\\\\\n1988A & N4579 &IIP&12.3& 
$<2.4\\times 10^{37}$& ACIS & 2000-05-02\\\\\n1988Z & MCG+03-28-22 &IIn&15.5& $2.9^{+0.5}_{-0.5}\\times 10^{39}$& ACIS & 2004-06-29\\\\\n1990U & N7479 & Ic & 10.9 & $1.1^{+0.6}_{-0.5}\\times 10^{39}$& EPIC & 2001-06-19\\\\\n1991N & N3310 &Ib\/Ic &11.8& $<4.2\\times 10^{37}$& ACIS & 2003-01-25\\\\\n1993J & N3031 & IIb & 8.1 & $< 1.0 \\times 10^{38}$& ACIS & 2001-04-22\\\\\n1994I & N5194& Ic & 8.2 & $8.0^{+0.3}_{-0.7}\\times 10^{36}$& ACIS & 2000-06-20, 2001-06-23, 2003-08-07\\\\\n1994ak & N2782 &IIn&7.4& $< 3.7\\times 10^{37}$& ACIS & 2002-05-17\\\\\n1995N & MCG-02-38-17 &IIn&8.9& $4.3^{+1.0}_{-1.0}\\times 10^{39}$& ACIS & 2004-03-27\\\\\n1996ae & N5775 &IIn&5.9& $< 6.1\\times 10^{37}$& ACIS & 2002-04-05\\\\\n1996bu & N3631 &IIn&6.6& $<2.1\\times 10^{37}$& ACIS & 2003-07-05\\\\\n1996cr & ESO97-G13 &IIn&4.2& $1.9^{+0.4}_{-0.4}\\times 10^{39}$& ACIS & 2000-03-14\\\\\n1997X & N4691 &Ic&6.1& $< 2.2\\times 10^{37}$& ACIS & 2003-03-08\\\\\n1997bs & N3627 &IIn&2.5& $<2.9\\times 10^{38}$& ACIS & 1999-11-03\\\\\n1998S & N3877 &IIn&3.6& $3.8^{+0.5}_{-0.5}\\times 10^{39}$& ACIS & 2001-10-17\\\\\n1998T & N3690 &Ib&5.2& $< 2.0\\times 10^{38}$& ACIS & 2003-04-30\\\\\n1998bw & ESO184-G82 & Ic & 3.5 & $4.0^{+1.0}_{-0.9}\\times 10^{38}$& ACIS & 2001-10-27\\\\\n1999dn & N7714 &Ib&4.4& $<5.9\\times 10^{37}$& ACIS & 2004-01-25\\\\\n1999ec & N2207 &Ib& 5.9& $3.1^{+0.4}_{-0.4}\\times 10^{39}$& EPIC & 2005-08-31\\\\\n1999el & N6951 &IIn&5.6& $< 5.6\\times 10^{38}$& EPIC & 2005-04-30, 2005-06-05\\\\\n1999em & N1637 &IIP&1.0& $<1.4 \\times 10^{37}$& ACIS & 2000-10-30\\\\\n1999gi & N3184 &IIP&0.10& $2.6^{+0.6}_{-0.6}\\times 10^{37}$& ACIS & 2000-01-08, 2000-02-03\\\\\n2000P & N4965 &IIn&7.2& $< 1.2\\times 10^{39}$& XRT & 2007-05-16\\\\\n2000bg & N6240 &IIn&1.3& $<1.4\\times 10^{39}$& ACIS & 2001-07-29\\\\\n2001ci & N3079 &Ic &2.5& $< 5.0\\times 10^{37}$& EPIC & 2003-10-14\\\\\n2001du & N1365 &IIP&1.3& $<3.8\\times 10^{37}$& ACIS & 2002-12-24\\\\\n2001em & UGC11794 &Ib\/Ic & 4.7 & 
$5.8^{+1.2}_{-1.2}\\times 10^{40}$& EPIC & 2006-06-14\\\\\n2001gd & N5033 & IIb&1.1 & $1.0^{+0.3}_{-0.3}\\times 10^{39}$& EPIC & 2002-12-18\\\\\n2001ig & N7424 &IIb&0.50 &$3.5^{+2.0}_{-2.0}\\times 10^{37}$& ACIS & 2002-06-11\\\\\n\n\\end{tabular}\n\\end{table*}\n\n\n\\begin{table*}\n\\begin{tabular}{lllllll}\nSN & Host galaxy & Type & Age (yr) & $L_{2-10\\;keV} (\\rm erg\/s)$& Instrument & Observation date\\\\\n\\hline\n2002ap & N628 &Ic&0.92& $< 3.1\\times 10^{36}$& EPIC & 2003-01-07\\\\\n2002fj & N2642 &IIn&4.7& $<1.3\\times 10^{39}$& XRT & 2007-05-11\\\\\n2002hf & MCG-05-3-20 &Ic&3.1& $<7.6\\times 10^{38}$& EPIC & 2005-12-19\\\\\n2003L & N3506 &Ic&0.08 &$7.7^{+1.5}_{-1.5}\\times 10^{39}$& ACIS & 2003-02-10\\\\\n2003ao & N2993 &IIP&0.016& $5.6^{+1.2}_{-1.2}\\times 10^{38}$& ACIS & 2003-02-16\\\\\n2003bg & MCG-05-10-15 &Ic\/IIb&0.33 & $5.3^{+1.3}_{-0.8}\\times 10^{38}$& ACIS & 2003-06-22\\\\\n2003dh & Anon. &Ic &0.71& $< 5.0 \\times 10^{40}$& EPIC & 2003-12-12\\\\\n2003jd & MCG-01-59-21 &Ic &0.041& $< 3.0 \\times 10^{38}$& ACIS & 2003-11-10\\\\\n2003lw & Anon. &Ic&0.34& $7.0^{+3}_{-3}\\times 10^{40}$& ACIS & 2004-04-18\\\\\n2004C & N3683 &Ic &3.1& $1.0^{+0.3}_{-0.2}\\times 10^{38}$& ACIS & 2007-01-31\\\\\n2004dj & N2403 &IIP&0.34& $1.1^{+0.3}_{-0.3}\\times 10^{37}$& ACIS & 2004-12-22\\\\\n2004dk & N6118 &Ib& 0.030& $1.5^{+0.4}_{-0.4}\\times 10^{39}$& EPIC & 2004-08-12\\\\\n2004et & N6946 & IIP & 0.18 & $1.0^{+0.2}_{-0.2}\\times 10^{38}$& ACIS & 2004-10-22, 2004-11-06, 2004-12-03\\\\ \n2005N & N5420 &Ib\/Ic &0.50 & $< 1.0\\times 10^{40}$& XRT & 2005-07-17\\\\\n2005U & Anon. 
&IIb &0.041& $< 1.1 \\times 10^{39}$& ACIS & 2005-02-14\\\\\n2005at & N6744 &Ic&1.7& $< 3.0\\times 10^{38}$& XRT & 2006-10-31\\\\\n2005bf & MCG+00-27-5 &Ib&0.58& $< 6.0\\times 10^{39}$& XRT & 2005-11-07\\\\\n2005bx & MCG+12-13-19 &IIn&0.25& $<1.0\\times 10^{39}$& ACIS & 2005-07-30\\\\\n2005da & UGC11301 &Ic&0.098& $<5.0\\times 10^{39}$& XRT & 2005-08-23\\\\\n2005db & N214 &IIn&0.036& $<2.0\\times 10^{39}$&EPIC & 2005-08-01\\\\\n2005ek & UGC2526 &Ic &0.041& $< 4.0\\times 10^{39}$& XRT & 2005-10-07\\\\\n2005gl & N266 &IIn&1.6& $<3.4\\times 10^{39}$& XRT & 2007-06-01\\\\\n2005kd & Anon. &IIn&1.2& $2.6^{+0.4}_{-0.4}\\times 10^{41}$& ACIS & 2007-01-24\\\\\n2006T & N3054 &IIb &0.0082& $< 6.0\\times 10^{39}$& XRT & 2006-02-02\\\\\n2006aj & Anon. &Ic & 0.43& $< 7.0 \\times 10^{39}$& XRT & 2006-07-25\\\\\n2006bp & N3953 &IIP&0.058& $1.0^{+0.2}_{-0.2}\\times 10^{38}$& EPIC & 2006-04-30\\\\ \n2006bv & UGC7848 &IIn&0.0082& $<1.2\\times 10^{39}$& XRT & 2006-05-01\\\\\n2006dn & UGC12188 &Ib&0.033& $< 2.5\\times 10^{40}$& XRT & 2006-07-17\\\\\n2006gy & N1260 &IIn&0.16& $<2.0\\times 10^{38}$& ACIS & 2006-11-15\\\\\n2006jc & UGC4904 &Ib&0.068& $2.1^{+0.6}_{-0.6}\\times 10^{38}$& ACIS & 2006-11-04\\\\\n2006lc & N7364 &Ib\/Ic &0.016& $< 2.0\\times 10^{40}$& XRT & 2006-10-27\\\\\n2006lt & Anon. &Ib&0.068& $< 4.0\\times 10^{39}$& XRT & 2006-11-05\\\\\n2007C & N4981 &Ib &0.022& $<3.0\\times 10^{40}$& XRT & 2007-01-15\\\\\n2007D & UGC2653 &Ic &0.025& $< 3.0\\times 10^{40}$& XRT & 2007-01-18\\\\\n2007I & Anon. &Ic &0.016& $< 9.0\\times 10^{39}$& XRT & 2007-01-20\\\\\n2007bb & UGC3627 &IIn&0.022& $<4.4\\times 10^{39}$& XRT & 2007-04-10\\\\\n\n\\hline\n\n\\end{tabular}\n\\caption{X-ray measurements and upper limits for our sample of\n historical supernovae. When more than one observation was used \nfor a given source, the age is an average of three epochs weighted \nby their exposure lengths. 
The instrument ACIS is on-board {\\em Chandra}, EPIC on {\\em XMM-Newton}\nand XRT on {\\em Swift}.}\n\\end{table*}\n\n\nWe retrieved the relevant X-ray datasets from the public archives of\nthose three missions. The optical position of each SN in our sample is\nwell known, to better than $1\\arcsec$: this makes it easier to\ndetermine whether a SN is detected in the X-ray band (in particular\nfor {\\it Chandra}), even with a very low number of counts, at a level\nthat would not be considered significant for source detection in a\nblind search. For the {\\it Chandra} observations, we applied standard\ndata analysis routines within the Chandra Interactive Analysis of\nObservations ({\\small CIAO}) software\npackage\\footnote{http:\/\/cxc.harvard.edu\/ciao} version 3.4. Starting\nfrom the level-2 event files, we defined a source region (radius\n$2\\farcs5$, comprising $\\approx 95\\%$ of the source counts at 2 keV,\non axis, and proportionally larger extraction radii for off-axis\nsources) and suitable background regions not contaminated by other\nsources and at similar distances from the host galaxy's nucleus. For\neach SN, we extracted source and background counts in the $0.3$--$8$\nkeV band with {\\it dmextract}. In most cases, we are dealing with a\nvery small number of counts (e.g., 2 or 3, inside the source\nextraction region) and there is no excess of counts at the position of\nthe SN with respect to the local background. In these cases, we\ncalculated the 90\\% upper limit to the number of net counts with the\nBayesian method of Kraft et al. (1991). We then converted this net\ncount-rate upper limit to a flux upper limit with\nWebPimms\\footnote{http:\/\/heasarc.nasa.gov\/Tools\/w3pimms.html and\nhttp:\/\/cxc.harvard.edu\/toolkit\/pimms.jsp },\nassuming a power-law spectral model with photon index $\\Gamma = 2$ and\nline-of-sight Galactic column density. 
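For a handful of counts, the upper-limit step just described reduces to a one-dimensional posterior integral. A minimal sketch of the Kraft et al. (1991) prescription (the function name and bisection tolerance are our own choices; the count-rate to flux conversion still proceeds through WebPimms as described above):

```python
import math

def kbn_upper_limit(n_obs, b, cl=0.90, tol=1e-8):
    """Bayesian upper limit (default 90%) on the net source counts S,
    following Kraft, Burrows & Nousek (1991): the posterior for S, given
    n_obs total counts and an expected background b, is
    p(S) ~ exp(-(S+b)) (S+b)^n_obs / n_obs!, restricted to S >= 0."""
    def pois_tail(x, n):
        # sum_{k=0}^{n} e^{-x} x^k / k!  (= P[Poisson(x) <= n])
        term, total = math.exp(-x), math.exp(-x)
        for k in range(1, n + 1):
            term *= x / k
            total += term
        return total
    norm = pois_tail(b, n_obs)              # posterior normalization
    def cdf(s):
        return 1.0 - pois_tail(s + b, n_obs) / norm
    lo, hi = 0.0, 1.0
    while cdf(hi) < cl:                     # bracket the upper limit
        hi *= 2.0
    while hi - lo > tol:                    # bisect to the requested tolerance
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if cdf(mid) < cl else (lo, mid)
    return 0.5 * (lo + hi)
```

For zero observed counts and no background this reproduces the familiar $-\ln(0.1)\approx 2.3$ counts at 90\% confidence.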
The choice of a power-law\nspectral model is motivated by our search for X-ray emission from an\nunderlying pulsar rather than from the SN shock wave. In a few cases,\nthere is a small excess of counts at the SN position: we then also\nbuilt response and auxiliary response functions (applying {\it\npsextract} in {\small CIAO}), and used them to estimate a flux,\nassuming the same spectral model. When possible, for sources with\n$\approx 20$--$100$ net counts, we determined the count rates\nseparately in the soft ($0.3$--$1$ keV), medium ($1$--$2$ keV) and\nhard ($2$--$8$ keV) bands, and used the hard-band rates (essentially\nuncontaminated by soft thermal-plasma emission, and unaffected by the\nuncertainty in the column density and by the degradation of the ACIS-S\nsensitivity) alone to obtain a more stringent value or upper limit for\nthe non-thermal power-law emission. Very few sources have enough\ncounts for a two-component spectral fit (mekal thermal plasma plus\npower-law): in those cases, we used the $2$--$10$ keV flux from the\npower-law component alone in the best-fitting spectral model. For those\nspectral fits, we used the {\small XSPEC} version 12 software package\n(Arnaud 1996).\n\nWhen we had to rely on {\it XMM-Newton}\/EPIC data, we followed\nessentially the same scheme: we estimated source and background count\nrates (this time, using a source extraction circle with a $20\arcsec$\nradius) in the full EPIC pn and MOS bands ($0.3$--$12$ keV) and, when\npossible, directly in the $2$--$10$ keV band. The count rate to flux\nconversion was obtained with WebPimms (with a $\Gamma=2$ power-law\nmodel absorbed by the line-of-sight column density) or through full\nspectral analysis for sources with enough counts. 
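Fluxes obtained in either way translate into the luminosities listed in Table~1 through the standard isotropic relation $L = 4\pi d^2 F$; as a minimal sketch (the example flux and distance in the test are illustrative, not drawn from our sample):

```python
import math

MPC_CM = 3.0857e24  # one megaparsec in centimetres

def luminosity(flux_cgs, distance_mpc):
    """Isotropic luminosity L = 4 pi d^2 F in erg/s, from an observed
    flux in erg/cm^2/s and a luminosity distance in Mpc."""
    d = distance_mpc * MPC_CM
    return 4.0 * math.pi * d * d * flux_cgs
```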
We used standard\ndata analysis tasks within the Science Analysis System ({\small SAS})\nversion 7.0.0 (for example, {\it xmmselect} for source extraction).\nAll three EPIC detectors were properly combined, both when we\nestimated count rates and when we did spectral fitting, to increase\nthe signal-to-noise ratio. In fact, in almost all cases in which a\nsource position had been observed by both {\it Chandra} and {\it\nXMM-Newton}, {\it Chandra} provided a stronger constraint on the flux,\nbecause of its much narrower point-spread function and lower\nbackground noise. The {\it Swift} data were analyzed using the Swift\nSoftware version 2.3 tools and the latest calibration products. Source\ncounts were extracted from a circular region with an aperture of\n$20\arcsec$ radius centered at the optical positions of the SNe. In\nsome cases, Swift observations referred to a gamma-ray burst (GRB)\nassociated with a core-collapse SN: we obviously did not consider the\nGRB flux for our population analysis. Instead, for those cases, we\nconsidered the most recent {\it Swift} observation after the GRB had\nfaded, and used that to determine an upper limit to a possible pulsar\nemission. We only considered {\it Swift} observations deep enough to\ndetect or constrain the residual luminosity to $\la 10^{40}$ erg\ns$^{-1}$.\n\nIn some cases, two or more {\it Chandra} or {\it XMM-Newton}\nobservations of the same SN target were found in the archive. If they\nwere separated by a short interval in time (much shorter than the time\nelapsed from the SN explosion), we merged them together, to increase\nthe detection sensitivity. The reason we can do this is that we do not\nexpect the underlying pulsar luminosity to change significantly\nbetween those observations. However, when the time elapsed between\nobservations was comparable to the age of the SN, we attributed\ngreater weight to the later observations for our flux\nestimates. 
The reason is that the thermal X-ray emission from the\nshocked gas tends to decline more rapidly (over a few months or years)\nthan the non-thermal pulsar emission (timescale $\ga$ tens of\nyears). More details about the data analysis and the luminosity and\ncolor\/spectral properties of individual SNe in our sample will be\npresented elsewhere (Pooley et al. 2008, in preparation). Here, we are\nmainly interested in a population study to constrain the possible\npresence and luminosity of high-energy pulsars detectable in the\n$2$--$10$ keV band.\n\nWhile this is, to the best of our knowledge, the first X-ray\nsearch for pulsar wind nebulae (PWNe) in extragalactic SNe, and the first work that uses\nthese data to set statistical constraints on the properties of the\nembedded pulsars, it should be noted that the possibility of\nobserving pulsars in young SNe (a few years old) was originally\ndiscussed, from a theoretical point of view, by Chevalier \& Fransson\n(1992). Furthermore, searches for PWNe in extragalactic SNe have been\nperformed in the radio (Reynolds \& Fix 1987; Bartel \& Bietenholz\n2005). Observationally, however, clear evidence for pulsar activity\nin SNe has been lacking. The radio emission detected in some SNe,\nalthough initially ascribed to pulsar activity (Bandiera, Pacini \&\nSalvati 1984), was later shown to be well described as the result of\ncircumstellar interaction (Lundqvist \& Fransson 1988). There is,\nhowever, a notable exception, namely SN 1986J, for which the observed\ntemporal decline of the H$\alpha$ luminosity (Rupen et al. 1987) has\nbeen considered suggestive of a pulsar energy input (Chevalier 1987).\nAs noted by Chevalier (1989), a possible reason for the apparent\nlow-energy input in some cases could be the fact that the embedded\nneutron stars were born with a relatively long period. 
The present\nwork allows us to make a quantitative assessment of the typical\nminimum periods allowed for the bulk of the NS population.\nThe list of SNe, their measured fluxes and their ages (at the time\nof observation) are reported in Table~1. \n\n\section{Theoretical expectations for the X-ray luminosity of young pulsars}\n\nMost isolated neutron stars are X-ray emitters throughout their\nlives: at early times, their X-ray luminosity is powered by rotation\n(e.g. Michel 1991; Becker \& Trumper 1997); after an age of $\sim\n10^3$--$10^4$ yr, when the star has slowed down sufficiently, the main X-ray\nsource becomes the internal heat of the star\footnote{In the case\nof magnetars, this internal heat is provided by magnetic field decay,\nwhich dominates over all other energy losses.}, and finally, when this\nis exhausted, the only possible source of X-ray luminosity would be\naccretion from the interstellar medium, albeit at a very low\nluminosity level, especially for the fastest stars (e.g. Blaes \&\nMadau 1993; Popov et al. 2000; Perna et al. 2003). Another possible\nsource of X-ray luminosity that has often been discussed in the\ncontext of NSs is accretion from a fallback disk (Colgate 1971; Michel\n\& Dressler 1981; Chevalier 1989; Yusifov et al. 1995; Chatterjee et\nal. 2000; Alpar 2001; Perna et al. 2000). Under these circumstances,\naccretion would turn off magnetospheric emission, and X-ray radiation\nwould be produced as the result of accretion onto the surface of the\nstar. For a disk to be able to interfere with the magnetosphere and\naccrete, the magnetospheric radius $R_m\sim 6.6\times 10^7\nB_{12}^{4\/7} \dot{m}^{-2\/7}$~cm (with $\dot{m}$ being the\naccretion rate in Eddington units, and $B_{12}\equiv B\/(10^{12} {\rm\nG})$) must be smaller than the corotation radius $R_{\rm cor}\sim\n1.5\times 10^8 P^{2\/3} (M\/M_\odot)^{1\/3}$ cm. 
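The two radii can be compared numerically with the coefficients quoted above (a sketch; the function names are ours):

```python
def r_magnetospheric(b12, mdot_edd):
    """Magnetospheric radius in cm, with the coefficient quoted in the
    text: R_m ~ 6.6e7 B_12^(4/7) mdot^(-2/7), mdot in Eddington units."""
    return 6.6e7 * b12**(4.0 / 7.0) * mdot_edd**(-2.0 / 7.0)

def r_corotation(p_sec, m_msun=1.4):
    """Corotation radius in cm: R_cor ~ 1.5e8 P^(2/3) (M/Msun)^(1/3);
    a 1.4 Msun star is assumed by default."""
    return 1.5e8 * p_sec**(2.0 / 3.0) * m_msun**(1.0 / 3.0)
```

For $B_{12}=5$ at the Eddington rate the two radii cross near $P\sim 1$ s, in line with the estimate that follows.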
If, on the other hand,\nthe magnetospheric radius resides outside of the corotation radius,\nthe propeller effect (Illarionov \\& Sunyaev 1975) takes over and\ninhibits the penetration of material inside the magnetosphere, and\naccretion is (at least largely) suppressed. For a typical pulsar\nmagnetic field $B_{12}\\sim 5$, the magnetospheric radius becomes\ncomparable to the corotation radius for a period $P\\sim 1$ s and an\nEddington accretion rate. If the infalling material does not possess\nsufficient angular momentum, however, a disk will not form, but infall\nof the bound material from the envelope is still likely to proceed,\nalbeit in a more spherical fashion. The accretion rate during the\nearly phase depends on the details of the (yet unclear) explosion\nmechanism. Estimates by Chevalier (1989) yield values in the range\n$3\\times 10^{-4}$--$2\\times 10^2 M_\\odot$ yr$^{-1}$. In order for the\npulsar mechanism to be able to operate, the pressure of the pulsar\nmagnetic field must overcome that of the spherical infall. For the\naccretion rates expected at early times, however, the pressure of the\naccreting material dominates over the pulsar pressure even at the\nneutron star surface. Chevalier (1989) estimates that, for accretion\nrates $\\dot{M}\\ga 3\\times 10^{-4}M_\\odot$ yr$^{-1}$, the photon\nluminosity is trapped by the inflow and the effects of a central\nneutron star are hidden. Once the accretion rate drops below that\nvalue, photons begin to diffuse out from the shocked envelope; from\nthat point on, the accretion rate drops rapidly, and the pulsar\nmechanism can turn on. Chevalier (1989) estimates that this occurs at\nan age of about 7 months. 
Therefore, even if fallback plays a major\nrole in the initial phase of the SN and NS lives, its effects are not\nexpected to be relevant at the timescales of interest for the\nconclusions of this work.\n\nFor the purpose of our analysis, we are especially interested in the\nX-ray luminosity at times long enough so that accretion is\nunimportant, but short enough that rotation is still the main source\nof energy. During a Crab-like phase, relativistic particles\naccelerated in the pulsar magnetosphere are fed to a synchrotron-emitting\nnebula, the emission of which is characterized by a power-law\nspectrum. Another important contribution is the pulsed X-ray\nluminosity (about 10\% of the total in the case of the Crab)\noriginating directly from the pulsar magnetosphere. It should be noted\nthat one important assumption of our analysis is that all (or at least\nthe great majority) of neutron stars go through an early-time\nphase during which their magnetosphere is active and converts a\nfraction of the rotational energy into X-rays. However, there is\nobservational evidence that there are objects, known as Central\nCompact Objects\footnote{Examples are the central sources in Cas A and in\nPuppis A (e.g. Petre et al. 1996; Pavlov et al. 2000).} (CCOs), for\nwhich no pulsar wind nebulae are detected. Since no pulsations are\ndetected for these stars, it is possible that they are simply objects\nborn slowly rotating, which hence have a low value of $\dot{E}_{\rm\nrot}$. In this case, they would not affect any of our considerations,\nsince the $L_x-\dot{E}_{\rm rot}$ correlation appears to hold all the\nway down to the lowest measured values of $L_x$ and $\dot{E}_{\rm\nrot}$. 
However, if the CCOs are NSs with a high $\dot{E}_{\rm rot}$,\nbut for which there exists some new physical mechanism that suppresses\nthe magnetospheric activity (and hence the X-ray luminosity) to values\nmuch below what is allowed by the scatter in the pulsar $L_x-\dot{E}_{\rm\nrot}$ relation, then these stars would affect the limits that we\nderive. Since at this stage their nature is uncertain, we treat the\nwhole sample of NSs on the same footing, although keeping this in mind\nas a caveat should future work demonstrate a different intrinsic\nnature of the CCOs with respect to the conversion of rotational energy\ninto X-ray luminosity.\n\nAs discussed in \S1, for all the neutron stars for which both the\nrotational energy loss, $\dot{E}_{\rm rot}$, and the X-ray luminosity,\n$L_x$, have been measured, there appears to be a correlation between\nthese two quantities. This correlation appears to hold over a wide\nrange of rotational energy losses, including different emission\nmechanisms of the pulsar. Since in the high-$L_x$ regime (young\npulsars) of interest here the X-ray luminosity is dominated by\nrotational energy losses, the most appropriate energy band for our\nstudy is above $\sim 2$ keV, where the contribution of surface\nemission due to the internal energy of the star is small. The\ncorrelation between $L_x$ and $\dot{E}_{\rm rot}$ in the $2$--$10$ keV band\nwas first examined by Saito et al. (1997) for a small sample of\npulsars, and a more comprehensive investigation with the largest\nsample to date was later performed by P02. They found, for a\nsample of 39 pulsars, that the best fit is described by the relation\n\beq \n\log L_{x,[2-10]}= 1.34\,\log \dot{E}_{\rm rot} -15.34\;,\n\label{eq:Lx}\n\eeq with $1\sigma$ uncertainty intervals on the parameters $a=1.34$\nand $b=15.34$ given by $\sigma_a=0.03$ and $\sigma_b=1.11$,\nrespectively. A similar analysis on a subsample of 30 pulsars with\nages $\tau < 10^6$ yr by Guseinov et al. 
(2004) yielded a best fit with\nparameters $a=1.56$, $b=23.4$, and corresponding uncertainties\n$\sigma_a=0.12$ and $\sigma_b=4.44$. The slope of this latter fit is a\nbit steeper than that of P02; as a result, the model by Guseinov et\nal. predicts a larger fraction of high-luminosity pulsars from the\npopulation of fast-rotating young stars with respect to the best fit\nof P02. In order to be on the conservative side for the predicted\nnumber of high-$L_x$ pulsars, we will use as our working model the one\nby P02. It is interesting to note, however, that both groups\nfind that the efficiency $\eta_x\equiv L_x\/\dot{E}_{\rm rot}$ is an\nincreasing function of the rotational energy loss $\dot{E}_{\rm rot}$\nof the star. Furthermore, the analysis by Guseinov et al. shows that,\nfor a given $\dot{E}_{\rm rot}$, pulsars with a larger $B$ field have a\nsystematically larger efficiency $\eta_x$ of conversion of rotational\nenergy into X-rays. An increase of $\eta_x$ with $\dot{E}_{\rm\nrot}$ was also found in the investigation by Cheng, Taam \& Wang\n(2004). They considered a sample of 23 pulsars and studied the trend\nwith $\dot{E}_{\rm rot}$ of the pulsed and non-pulsed components\nseparately. Their best fit yielded $L_x^{\rm pul}\propto\dot{E}_{\rm\nrot}^{1.2\pm 0.08}$ for the pulsed component, and $L_x^{\rm\nnpul}\propto\dot{E}_{\rm rot}^{1.4\pm 0.1}$ for the non-pulsed\none. They noted that the former is consistent with the theoretical\nX-ray magnetospheric emission model by Cheng \& Zhang (1999), while\nthe latter is consistent with a PWN model in which $ L_x^{\rm\nnpul}\propto\dot{E}_{\rm rot}^{p\/2}$, where $p\sim 2$--$3$ is the\npower-law index of the electron energy distribution. Their best fit for\nthe total X-ray luminosity (pulsed plus unpulsed components) yielded\n$L_x\propto\dot{E}_{\rm rot}^{1.35\pm 0.2}$, fully consistent with\nthe best-fit slope of P02. Along similar lines, recently Li et\nal. 
(2007) presented another statistical study in which, using {\em\nChandra} and {\em XMM} data of Galactic sources, they were able to\nresolve the component of the X-ray luminosity due to the pulsar from\nthat due to the PWN. Their results were very\nsimilar to those of Cheng et al. (2004), with a best fit for the\npulsar component $L_x^{\rm psr}\propto\dot{E}_{\rm rot}^{1\pm 0.1}$,\nand a best fit for the PWN (representing the unpulsed contribution)\n$L_x^{\rm PWN}\propto\dot{E}_{\rm rot}^{1.4\pm 0.2}$. They found that\nthe main contribution to the total luminosity generally comes from the\nunpulsed PWN, hence yielding the steepening of the $L_X-\dot{E}_{\rm\nrot}$ correlation with $\dot{E}_{\rm rot}$, consistent with the P02\nrelation, where the contributions from the pulsar and the PWN are not\ndistinguished. For our purposes, we consider the sum of both\ncontributions, since we cannot resolve the two components in the\nobserved sample of historical SNe.\footnote{For typical distances $\ga$\na few Mpc, the ACIS spatial resolution is $\ga 20$ pc, while \nPWN sizes are on the order of a fraction of a pc to a few pc.}\n\nIt should be noted that, despite the general agreement among the\nvarious studies on the trend of $L_x$ with $\dot{E}_{\rm rot}$, and\nthe support from theory that the correlation is expected to steepen\nwith $\dot{E}_{\rm rot}$, there must be a point of saturation in order\nto always satisfy the condition $L_x \le \dot{E}_{\rm rot}$. While in\nour simulations we impose the extra condition that $\eta_x\le 1$, it\nis clear that, until the correlation can be calibrated through direct\nmeasurements of objects with high values of $\dot{E}_{\rm rot}$, there\nremains an uncertainty on how precisely the saturation occurs, and\nthis uncertainty is unavoidably reflected in the precise details of\nour predictions. 
However, unless there is, for some reason, a\npoint of turnover above the observed range where the efficiency of\nconversion of rotational energy into X-rays drops back to $\eta_x\ll\n1$, our general conclusions can be considered robust.\nIn our analysis, in order to quantify the uncertainty associated\nwith the above, we will also explore the consequences of a break in\n$\eta_x$ just above the observed range (so as to be maximally conservative).\n\nAnother point to note is that one implicit assumption that we make in\napplying the $L_x-\dot{E}_{\rm rot}$ relation to very young objects is\nthat the synchrotron cooling time $t_{\rm synch}$ in X-rays is much\nshorter than the age of the source, so that the X-ray luminosity is\nessentially an instantaneous tracer of $\dot{E}_{\rm rot}$. In order\nto check the validity of this assumption, we have made some rough\nestimates based on measurements in known sources. For example, let us\nconsider the case of the PWN in SN 1986J. Although the field in the\nPWN has not been directly measured, we can use radio equipartition and\nscale it from that of the Crab Nebula. The Crab's radio synchrotron\nemission has a minimum energy of $\sim 6\times 10^{48}$ ergs (see\ne.g. Chevalier 2005), and a volume of $\sim 5\times 10^{56}$\ncm$^3$. The average magnetic field is then $\sim 550$ $\mu$G. We can\nthen scale to the PWN in SN 1986J, using the fact that the radio\nluminosity is related to that of the Crab by $L_{\rm r, 1986J} \sim\n200 L_{\rm r,Crab}$, while its size is about 0.01 times that of the\nCrab. According to equipartition, $B_{\rm min} \sim ({\rm\nsize})^{-6\/7} L_r^{2\/7}$ (e.g. Willott et al. 1999), so this very\ncrude approach suggests that the magnetic field in the PWN of SN 1986J\nis $B_{\rm min} \sim 235 B_{\rm Crab} \sim 120$ mG. 
This yields a very\nshort cooling time in X-rays, $t_{\rm synch}\sim 5$ hr (assuming a\nLorentz factor for the electrons of $\sim 10^6$), so that, if we scale\nfrom the Crab nebula, the use of $L_x-\dot{E}_{\rm rot}$ at early\ntimes appears reasonable. If, on the other hand, initial periods are\ngenerally longer than for the Crab, then the equipartition energy\ncould be much smaller and the corresponding lifetimes much\nlonger. Let us then consider a $10^{12}$ G pulsar with an initial\nperiod of 60 ms (the pulsar produced in SN 386 is such a source). We\nthen have $\dot{E}_{\rm rot} \sim 3\times 10^{36}$ erg\/s, so that\n(ignoring expansion losses) the energy deposited in the PWN over 20\nyears would be $\sim 2\times 10^{45}$ ergs. For a volume similar to\nthat for SN 1986J above, the equipartition magnetic field would be\n$\sim 10$ mG, corresponding to a lifetime at 2 keV of about 10\ndays. This is still a short enough lifetime for our purposes.\nAlternatively, for $P_0\sim 5$ ms and $B \sim 10^{12}$ G, we have\n$\dot{E}_{\rm rot} \sim 6\times 10^{40}$ erg\/s, and over 20 years,\nthis yields $E_{\rm tot} \sim 4\times 10^{49}$ ergs. This implies $B\n> 1$~G, so that $t_{\rm synch} \sim 15$ minutes. Therefore, we\nconclude that, overall, the magnetic fields in young PWNe are likely\nstrong enough to justify the use of the $L_x-\dot{E}_{\rm rot}$\nrelation even for the youngest objects in our sample.\n\nOne further point to notice with respect to the $L_x-\dot{E}_{\rm\nrot}$ correlation is the fact that it is based on a diversity of\nobjects. The low-end range of the relation, in particular, is\npopulated with millisecond pulsars (MSPs), which are spun-up neutron\nstars. It is possible that this class of objects might bias the\ncorrelation of the youngest, isolated pulsars in the sample.\nGenerally speaking, once they are spun up, the MSPs form PWNe again\n(e.g. Cheng et al. 
2006), and the conversion of $\dot{E}_{\rm rot}$\ninto $L_x$, which is practically an instantaneous relationship (as\ncompared to the ages under consideration), should not be dependent on\nthe history of the system. The magnetic field of the objects (lower\nfor the MSPs than for the young, isolated pulsars), however, might\ninfluence the conversion efficiency (Guseinov et al. 2004), hence\nbiasing the overall slope of the correlation. Overall, in our analysis\na steeper slope would lead to tighter limits on the NS spin birth\ndistribution, and vice versa for a shallower slope. What would be\naffected the most by a slope change is the high-$\dot{E}_{\rm rot}$\ntail of the population. Hence, in \S4, besides deriving results using\nthe $L_x-\dot{E}_{\rm rot}$ relation for all the pulsars, we will also examine\nthe effects of a change of slope for the fastest pulsars.\n\nThe rotational energy loss of the star, under the assumption that it is dominated\nby magnetic dipole losses, is given by\n\begin{equation}\n\dot{E}_{\rm rot}= \frac{B^2\sin^2\theta\,\Omega^4\,R^6}{6c^3}\;,\n\label{eq:Edot}\n\end{equation}\nwhere $R$ is the NS radius, which we take to be 10 km,\n$B$ the NS magnetic field, $\Omega=2\pi\/P$ the star's angular velocity, and\n$\theta$ the angle between the magnetic and spin axes. We take $\sin\theta=1$\nfor consistency with what is generally assumed in pulsar radio studies.\nWith $\sin\theta=1$ and a\nconstant $B$ field, the spin evolution of the pulsars is simply given\nby\n\beq P(t) = \left[P_0^2 + \left(\frac{16\pi^2 R^6\nB^2}{3Ic^3}\right)t\right]^{1\/2}\;,\n\label{eq:spin} \n\eeq \nwhere $I\approx 10^{45}\,{\rm g}\,{\rm cm}^2$ is the moment of\ninertia of the star, and $P_0$ is its initial spin period. 
The\nX-ray luminosity of the pulsar at time $t$ (which traces $\dot{E}_{\rm\nrot}$) correspondingly declines as $L_x=L_{x,0}(1+t\/t_0)^{-2}$, where\n$t_0\equiv 3Ic^3P_0^2\/[(2\pi)^2B^2R^6] \sim 6500\, {\rm\nyr}\;I_{45}B^{-2}_{12}R_{10}^{-6}P^2_{0,10}$, having defined\n$I_{45}\equiv I\/(10^{45}$~g~cm$^2$), $R_{10}\equiv R\/(10\; {\rm km})$,\nand $P_{0,10}\equiv P_0\/(10\;{\rm ms})$. For $t\la t_0$ the flux does not\nvary significantly. Since the ages $t_{\rm SN}$ of the SNe in our\nsample are all $\la 77$ yr, we deduce that, for typical pulsar fields, $t_{\rm SN}\ll\nt_0$. The luminosities of the\npulsars associated with the SNe in our sample are therefore expected to be\nstill in the plateau region, and thus they directly probe the birth\nparameters, before evolution affects the periods appreciably.\n\nIn order to compute the X-ray luminosity distribution of a population\nof young pulsars, the magnetic fields and the initial periods of the\npulsars need to be known. As discussed in \S1, a number of\ninvestigations have been made over the last few decades in order to\ninfer the birth parameters of NSs, and in particular the distribution\nof initial periods and magnetic fields. Here, we begin our study by\ncomparing the SN data with the results of a pulsar population\ncalculation that assumes one such distribution, and\nspecifically one that makes predictions for birth periods in the\nmillisecond range. 
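The plateau argument above can be checked directly with the fiducial values quoted in the text (a numerical sketch of the $t_0$ scaling and of the $(1+t/t_0)^{-2}$ decline; variable names are ours):

```python
import math

# Fiducial values quoted in the text
I  = 1.0e45       # moment of inertia, g cm^2
R  = 1.0e6        # NS radius (10 km), cm
B  = 1.0e12       # magnetic field, G
P0 = 0.010        # initial spin period (10 ms), s
c  = 2.998e10     # speed of light, cm/s
YR = 3.156e7      # one year, s

# Spin-down timescale t0 = 3 I c^3 P0^2 / [(2 pi)^2 B^2 R^6],
# expected to come out near 6500 yr for these values
t0 = 3.0 * I * c**3 * P0**2 / ((2.0 * math.pi)**2 * B**2 * R**6)

def lx_decline(t, lx0=1.0):
    """L_x(t) = L_x0 (1 + t/t0)^(-2): essentially flat for t << t0."""
    return lx0 * (1.0 + t / t0)**(-2)
```

At the oldest age in the sample (77 yr) the luminosity has declined by less than a few per cent, confirming that the sample probes the birth parameters.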
After establishing that the SN data are highly\ninconsistent with such short initial spins, we then generalize our\nanalysis by inverting the problem, and performing a parametric study\naimed at finding the minimum values of the birth periods that result\nin predicted X-ray luminosities consistent with the SN X-ray data.\n\n\section{Observational constraints on the pulsar X-ray luminosities from \ncomparison with historical SNe}\n\nAs a starting point to constrain pulsar birth parameters, we consider\nthe results of one of the most recent and comprehensive radio studies,\nbased on large-scale radio pulsar surveys, namely that carried out\nby Arzoumanian et al. (2002; ACC in the following). They find that, if\nspin down is dominated by dipole radiation losses (i.e. braking index\nequal to 3), and the magnetic field does not appreciably decay, the\nmagnetic field strength (taken as Gaussian in log) has a mean $\langle\n\log B_0[G] \rangle =12.35$ and a standard deviation of $0.4$, while\nthe initial birth period distribution (also taken as a log-Gaussian)\nis found to have a mean $\langle \log P_0[s] \rangle =-2.3$ with a\nstandard deviation $\sigma_{P_0}> 0.2$ (within the searched range of\n$0.1$--$0.7$). In the first part of our paper, as a specific example of\na distribution that predicts a large fraction of pulsars to be born\nwith millisecond periods, we use their inferred parameters described\nabove. Since in their model the standard deviation for the initial\nperiod distribution is constrained only by the lower limit\n$\sigma_{P_0}>0.2$, here we adopt $\sigma_{P_0}=0.3$. As the width of\nthe initial period distribution increases, the predicted X-ray luminosity\ndistribution becomes more and more heavily weighted towards higher\nluminosities (see Figure 1 in Perna \& Stella 2004). Therefore, the\nlimits that we derive in this section would be even stronger if\n$\sigma_{P_0}$ were larger than what we assume. 
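The ACC birth distributions quoted above can be sampled directly; the following sketch (our own illustration, combining them with the dipole $\dot{E}_{\rm rot}$ formula and the P02 best fit given in the previous section, with the log-Gaussian scatter about P02 omitted and the $\eta_x \le 1$ cap mentioned earlier imposed) draws a single model pulsar:

```python
import math, random

def draw_pulsar(rng=random):
    """One draw of a model NS: sample B and P0 from the ACC log-Gaussian
    birth distributions, compute the rotational energy loss (dipole
    formula, sin(theta)=1), and return the mean 2-10 keV luminosity
    from the P02 relation, capped so that eta_x <= 1."""
    logB  = rng.gauss(12.35, 0.4)   # <log B[G]>  = 12.35, sigma = 0.4
    logP0 = rng.gauss(-2.3, 0.3)    # <log P0[s]> = -2.3,  sigma = 0.3
    B, P0 = 10.0**logB, 10.0**logP0
    R, c = 1.0e6, 2.998e10          # 10 km radius; c in cm/s
    omega = 2.0 * math.pi / P0
    edot = B**2 * omega**4 * R**6 / (6.0 * c**3)
    log_lx = 1.34 * math.log10(edot) - 15.34   # P02 best fit
    return min(10.0**log_lx, edot)             # impose eta_x <= 1
```

In the full calculation each drawn object would additionally be evolved to the age of its SN and scattered about the P02 mean, as described below; at the sample ages the evolution correction is negligible.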
We then assume that\nthe $L_X-\dot{E}_{\rm rot}$ correlation is described by the P02 best\nfit with the corresponding scatter.\n\nIn order to test the resulting theoretical predictions for the pulsar\ndistribution of X-ray luminosities against the limits of the SNe, we\nperform $10^6$ Monte Carlo realizations of the compact object remnant\npopulation. Each realization is made up of $N_{\rm obj}=N_{\rm\nSN}=100$ objects, with ages equal to the ages of the SNe in our sample. The\nfraction of massive stars that leave behind a black hole (BH) has been\ntheoretically estimated in the simulations by Heger et al. (2003).\nFor a solar metallicity and a Salpeter IMF, they find that this\nfraction is about 13\% of the total. We remark, however, that\nwhile, following their predictions, in our Monte Carlo\nsimulations we assign a 0.13 probability for a remnant to contain a\nBH, the precise BH fraction is, in reality, subject to a certain\ndegree of uncertainty. Even taking the results of Heger et\nal. (2003) rigorously, one needs to note that their NS vs BH fraction (cf. their\nfig.~5) was computed assuming a fraction of about 17\% of Type Ib\/c\nSNe, and 87\% of Type II. Our sample, on the other hand, contains\nabout 40\% of Type Ib\/c and 60\% of Type II SNe. How the remnant\nfraction would change in this case is difficult to predict. Heger et\nal. point out that normal Type Ib\/c SNe are not produced by single\nstars until the metallicity is well above solar. In this case, the\nremnants would all be NSs. At lower metallicities, on the other hand,\nmost Type Ib\/c SNe are produced in binary systems, where the binary\ncompanion helps in removing the hydrogen envelope of the collapsing\nstar. Given these uncertainties, while adopting for our simulations\nthe BH\/NS fraction estimated by Heger et al. 
for solar metallicity, we\nalso discuss how results would vary for different values of the BH and\nNS components.\n\nIf an object is a BH, a low level of X-ray luminosity ($<10^{35}$\nerg\/s, i.e. smaller than the lowest measurement\/limit in our SN data\nset) is assigned to it. This is the most conservative assumption that\nwe can make in order to derive constraints on the luminosity\ndistribution of the NS component. If an object is a NS, then its\nbirth period and magnetic field are drawn from the ACC distribution as\ndescribed above, and it is evolved to its current age (equal to the\nage of the corresponding SN at the time of the observation) with\nEq.(\ref{eq:spin}). The corresponding X-ray luminosity is then drawn\nfrom a log-Gaussian distribution with mean given by the P02 relation,\nand dispersion $\sigma_{L_x}=\sqrt{\sigma_a^2[\log\dot{E}_{\rm rot}]^2\n+ \sigma_b^2}$.\n\nFigure 1 (top panel) shows the predicted distribution of the most\nfrequent value\footnote{For each (binned) value of the pulsar\nluminosity, we determined the corresponding probability distribution\nresulting from the Monte Carlo simulations. The maximum of that\ndistribution is what we indicated as the ``most frequent'' value for\neach bin.} of the pulsar luminosity over all the Monte Carlo\nrealizations of the entire sample of Table~1. The shaded region\nindicates the $1\sigma$ dispersion in the model. This has been\ndetermined by computing the most compact region containing 68\% of the\nrandom realizations of the sample. Also shown is the distribution of\nthe X-ray luminosity (both detections and upper limits) of the SNe\n(cfr. Table~1). Since the measured X-ray luminosity of each object is\nthe sum of that of the SN itself and that of the putative pulsar\nembedded in it, for the purpose of this work X-ray detections are also\ntreated as upper limits on the pulsar luminosities.
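The single-object recipe just described (draw $P_0$ and $B$ from the ACC log-Gaussians, spin the star down by dipole losses to the SN age, then draw $L_X$ from a log-Gaussian centred on the $L_X$--$\dot E_{\rm rot}$ relation) can be sketched as follows. The moment of inertia and dipole constant are standard fiducial values, and the numerical coefficients of the luminosity relation (`a`, `log_const`, `sigma_a`, `sigma_b`) are illustrative placeholders standing in for the P02 best fit, not the published numbers.

```python
import math, random

I_NS = 1e45        # NS moment of inertia [g cm^2] (fiducial value)
K_DIPOLE = 3.2e19  # dipole spin-down: B = K * sqrt(P * Pdot) [G]

def draw_neutron_star(rng):
    """Birth period [s] and magnetic field [G] from the ACC log-Gaussians."""
    log_P0 = rng.gauss(-2.3, 0.3)   # <log P0[s]> = -2.3, sigma = 0.3
    log_B = rng.gauss(12.35, 0.4)   # <log B[G]> = 12.35, sigma = 0.4
    return 10.0 ** log_P0, 10.0 ** log_B

def evolve_period(P0, B, age_yr):
    """Dipole spin-down (braking index 3): P(t)^2 = P0^2 + 2 (B/K)^2 t."""
    t = age_yr * 3.15e7  # age in seconds
    return math.sqrt(P0 ** 2 + 2.0 * (B / K_DIPOLE) ** 2 * t)

def edot(P, B):
    """Rotational energy loss rate: Edot = 4 pi^2 I Pdot / P^3."""
    Pdot = (B / K_DIPOLE) ** 2 / P
    return 4.0 * math.pi ** 2 * I_NS * Pdot / P ** 3

def draw_luminosity(P, B, rng, a=1.34, log_const=-15.34, sigma_a=0.03, sigma_b=0.4):
    """L_X drawn from a log-Gaussian around a P02-like power law,
    with dispersion sigma = sqrt(sigma_a^2 [log Edot]^2 + sigma_b^2)
    as in the text (coefficient values are illustrative)."""
    log_edot = math.log10(edot(P, B))
    mean = a * log_edot + log_const
    sigma = math.sqrt(sigma_a ** 2 * log_edot ** 2 + sigma_b ** 2)
    return 10.0 ** rng.gauss(mean, sigma)

# One Monte Carlo draw for a neutron star observed 30 yr after the SN.
rng = random.Random(42)
P0, B = draw_neutron_star(rng)
P = evolve_period(P0, B, age_yr=30.0)
L = draw_luminosity(P, B, rng)
```

In the full simulation this draw is repeated for each of the 100 remnants of a realization (with a 0.13 chance of assigning a low-luminosity BH instead), and then over the $10^6$ realizations.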
This is indicated\nby the arrows in Figure~1.\n\n\n\\begin{figure}\n\\psfig{file=fig1a.ps,width=0.38\\textwidth}\n\\vspace{-0.05in}\n\\psfig{file=fig1b.ps,width=0.38\\textwidth}\n\\vspace{-0.05in}\n\\psfig{file=fig1c.ps,width=0.38\\textwidth}\n\\vspace{-0.07in}\n\\caption{The dashed line shows the distribution of 2-10 keV luminosities\n(either measurements or upper limits) for the entire sample of 100 SNe\nanalyzed {\\em (upper panel)}, for the subsample of SNe with ages $>10$\nyr {\\em (middle panel)}, and with ages $>30$ yr {\\em (lower panel)}.\nThe measured SN luminosities are also treated as upper limits on the\nluminosities of the embedded pulsars. The solid line shows the\nprediction for the X-ray luminosity distribution of pulsars born\ninside those SNe, according to the ACC birth parameters and the\n$L_X-\\dot{E}_{\\rm rot}$ P02 relation. The shaded regions indicate the\n$1$-$\\sigma$ confidence level of the model, derived from $10^6$ random\nrealizations of the sample. Independently of the SN sample considered,\nthe pulsar luminosity distribution is highly inconsistent with the\ncorresponding SN X-ray limits.}\n\\end{figure}\n\n\nOur X-ray analysis, in all those cases where a measurement was\npossible, never revealed column densities high enough to affect the\nobserved 2-10 keV flux significantly. However, if a large fraction of\nthe X-ray luminosity (when not due to the pulsar) does not come from\nthe innermost region of the remnant, then the inferred $N_{\\rm H}$\nwould be underestimated with respect to the total column density to\nthe pulsar. The total optical depth to the center of the SN as\na function of the SN age depends on a number of parameters, the most\nimportant of which are the ejected mass and its radial\ndistribution. The density profile of the gas in the newly born SN is\ndetermined by the initial stellar structure, as modified by the\nexplosion. 
Numerical simulations of supernova explosions produce\ndensity distributions that, during the free expansion phase, can be\napproximated by the functional form $\rho_{\rm SN}=f(v)t^{-3}$ (see\ne.g. Chevalier \& Fransson 1994 and references therein). The function\n$f(v)$ can in turn be represented by a power-law in velocity,\n$f(v)\propto v^{-n}$. To date, the best studied case is that of SN\n1987A. Modeling by Arnett (1988) and by Shigeyama \& Nomoto (1990)\nyields an almost flat inner power-law region, surrounded by a very\nsteep outer power-law profile, $n\sim 9-10$. For normal abundances and\nat energies below\footnote{Above 10 keV, the opacity is dominated by\nelectron scattering, which is energy independent.} 10 keV, Chevalier\n\& Fransson (1994) estimate the optical depth at energy $E_{10}\equiv\nE\/(10\;{\rm keV})$ to the center of a supernova with a flat inner\ndensity profile to be $\tau=\tau_s E_{10}^{-8\/3}\; E_{\rm SN,\n51}^{-3\/2}\; M_{\rm ej, 10}^{5\/2} t_{\rm yr}^{-2}$, where $E_{\rm SN,\n51}$ is the supernova energy in units of $10^{51}$ erg, and $M_{\rm\nej,10}$ is the mass of the ejecta in units of 10 $M_\odot$. The\nconstant $\tau_s$ is found to be 5.2 for a density profile with $n=7$\nin the outer parts, and 4.7 for $n=12$. From these simple estimates,\nit can be seen that the SN would have to wait a decade or so before\nstarting to become optically thin at the energies of interest. These\nestimates however do not account for the fact that, if the SN harbors\nan energetic pulsar in its center, the pulsar itself will ionize a\nsubstantial fraction of the surrounding neutral material. Calculations\nof the ionization front of a pulsar in the interior of a young SN were\nperformed by Chevalier \& Fransson (1992).
In the case of a flat\ndensity profile in the inner region, and an outer density profile with\npower-law $n=9$, they estimate that the ionization front reaches the\nedge of the constant density region after a time $t_{\rm yr}=10\;t_0\nf_i^{-1\/3} \dot{E}_{\rm rot, 41}^{-1\/3}\;M_{\rm ej, 10}^{7\/6}\;E_{\rm\nSN, 51}^{-1\/2}$, where $\dot{E}_{\rm rot, 41}\equiv\dot{E}_{\rm\nrot}\/10^{41}\;{\rm erg}\;{\rm s^{-1}}$, and $f_i$ is the fraction of\nthe total rotational power that is converted into ionizing\nradiation with a mean free path that is small compared to the\nsupernova size. The constant $t_0$ depends on the composition of the\ncore. For a hydrogen-dominated core, $t_0=1.64$, for a\nhelium-dominated core, $t_0=0.69$, and for an oxygen-dominated core\n$t_0=0.28$. Once the ionization front has reached the edge of the\nconstant density region, the steep outer power-law part of the\ndensity profile is rapidly ionized. Therefore, depending on the\ncomposition and total mass of the ejecta, an energetic pulsar can\nionize the entire mass of the ejecta on a timescale between a few\nyears and a few tens of years. This would clearly reduce the optical\ndepth to the center of the remnant estimated above.\n\nGiven these considerations, in order to make predictions that are\nnot as likely to be affected by opacity effects, we also performed a\nMonte Carlo simulation of the compact remnant population for all the\nSNe with ages $t>10$ yr, and another for all the SNe with ages $t>30$~yr.\nSince the opacity scales as $t^{-2}$, these subsets of objects\nare expected to be substantially less affected by high optical depths\nto their inner regions. The subsample of SNe with ages $t>10$~yr\ncontains 40 objects, while the subsample with ages $t>30$~yr contains\n21 SNe.
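The two timescales discussed above can be evaluated directly from the quoted scaling relations (Chevalier \& Fransson 1994, 1992); the sketch below codes them exactly as they appear in the text, keeping in mind that they are order-of-magnitude estimates.

```python
import math

def optical_depth(t_yr, E10=1.0, E_SN_51=1.0, M_ej_10=1.0, n_outer=7):
    """tau = tau_s E10^(-8/3) E_SN_51^(-3/2) M_ej_10^(5/2) t_yr^(-2),
    with tau_s = 5.2 for an outer density slope n=7 and 4.7 for n=12."""
    tau_s = 5.2 if n_outer == 7 else 4.7
    return tau_s * E10 ** (-8.0 / 3.0) * E_SN_51 ** (-1.5) \
           * M_ej_10 ** 2.5 * t_yr ** (-2.0)

def time_optically_thin(E10=1.0, **kw):
    """Age [yr] at which tau drops to unity (tau scales as t^-2)."""
    return math.sqrt(optical_depth(1.0, E10=E10, **kw))

def ionization_time(Edot41=1.0, f_i=1.0, M_ej_10=1.0, E_SN_51=1.0, core="H"):
    """t_yr = 10 t0 f_i^(-1/3) Edot41^(-1/3) M_ej_10^(7/6) E_SN_51^(-1/2),
    with t0 = 1.64, 0.69, 0.28 for H-, He-, O-dominated cores."""
    t0 = {"H": 1.64, "He": 0.69, "O": 0.28}[core]
    return 10.0 * t0 * f_i ** (-1.0 / 3.0) * Edot41 ** (-1.0 / 3.0) \
           * M_ej_10 ** (7.0 / 6.0) * E_SN_51 ** (-0.5)

# At 2 keV (E10 = 0.2) the fiducial SN stays optically thick for ~20 yr,
# consistent with the "decade or so" quoted in the text.
t_thin_2keV = time_optically_thin(E10=0.2)
```

This also makes explicit why the $t>10$ yr and $t>30$ yr subsamples are the safer ones: the fiducial $\tau$ is already well below unity at 10 keV after a decade, while an energetic pulsar can shorten the opaque phase further by ionizing the ejecta.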
The corresponding luminosity distributions (both measurements\nand limits) are shown in Figure~1 (middle and bottom panel\nrespectively), together with the predictions of the adopted model (ACC\ninitial period distribution and P02 $L_x-\\dot{E}_{\\rm rot}$\ncorrelation) for the luminosities of the pulsars associated with those\nSN samples. Given the uncertainties in the early-time optical depth,\nwe consider the constraints derived from these subsamples (and\nespecially the one with $t>30$ yr) more reliable. Furthermore, even\nindependently of optical depth effects that can bias the youngest\nmembers of the total sample, the subsamples of older SNe have on average\nlower luminosities, hence making the constraints on the model\npredictions more stringent. In the following, when generalizing our\nstudy to derive limits on the allowed initial period distribution, we\nwill use for our analysis only the subsets of older SNe.\n\nIn all three panels of Figure~1, the low luminosity tail of the\nsimulation, accounting for $\\sim 15\\%$ of the population, is dominated\nby the fraction of SNe whose compact remnants are black holes, and for\nwhich we have assumed a luminosity lower than the lowest SN\nmeasurement\/limit ($\\sim 10^{35}$ erg\/s). While it is possible that\nnewly born BHs could be accreting from a fallback disk and hence have\nluminosities as high as a few $\\times 10^{38}$ erg\/s, our assumption\nof low luminosity for them is the most conservative one for the\nanalysis that we are performing, in that it allows us to derive the\nmost stringent limits on the luminosity of the remaining remnant\npopulation of neutron stars. For these, the high-luminosity tail is\ndominated by the fastest pulsars, those born with periods of a few\nms. The magnetic fields, on the other hand, are in the bulk range of\n$10^{12}-10^{13}$ G. 
The low-$B$ field tail produces lower\nluminosities at birth, while the high-$B$ field tail will cause the\npulsars to slow down on a timescale smaller than the typical ages of\nthe SNe in the sample. Therefore, it is essentially the initial\nperiods which play a crucial role in determining the extent of the\nhigh-luminosity tail of the distribution. With the birth parameter\ndistribution used here, we find that, out of the $10^6$ Monte Carlo\nrealizations of the sample (for each of the three cases of Fig.1),\nnone of them predicts pulsar luminosities compatible with the SN\nX-ray limits.\footnote{We need to point out that, in the\nstudy presented here, we refrain from performing detailed probability\nanalysis. This is because, given the observational uncertainties of\nsome of the input elements needed for our study (as discussed both\nabove and in the following), precise probability numbers would not be\nespecially meaningful at this stage.}\n\nThese results point in the direction of the initial periods of the pulsar\npopulation being slower than the ms periods derived from some\npopulation synthesis studies in the radio. A number of other\ninvestigations in the last few years, based on different methods of\nanalysis of the radio sample with respect to ACC, have indeed come\nto conclusions similar to ours. The population synthesis studies of\nFaucher-Giguere \& Kaspi (2005) yielded a good fit to the data with\nthe birth period described by a Gaussian with a mean period of 0.3 s\nand a spread of 0.15 s. Similarly, the analysis by Ferrario \&\nWickramasinghe (2006) yielded a mean period of 0.23 s for a magnetic\nfield of $10^{12}$ G.
We performed Monte Carlo simulations of the\nX-ray pulsar population using the birth parameters derived in those\nstudies above, and found them to be consistent with the SNe X-ray\nlimits shown in Figure~1.\n\nIn order to generalize our analysis beyond the testing of known\ndistributions, we performed a number of Monte Carlo simulations with\ndifferent initial spin period distributions and a mean magnetic field\ngiven by the optimal model of Faucher-Giguere \& Kaspi (2006). This\nis a log-Gaussian with mean $\left<\log(B\/{\rm G})\right>=12.65$ and\ndispersion $\sigma_{\log B}=0.55$.\footnote{The inferred values of the\nmagnetic field in different studies are all generally in this range,\neven for very different inferred spin birth parameters. Furthermore,\nFerrario \& Wickramasinghe (2006) note that the pulsar birth period\nthat they infer is almost independent of the field value in the range\n$\log B({\rm G})=10-13$, where the vast majority of isolated radio\npulsars lie.}\n\n\n\begin{figure}\n\psfig{file=fig2a.ps,width=0.48\textwidth}\n\psfig{file=fig2b.ps,width=0.48\textwidth}\n\caption{Fraction $f_{sim}^c$ of Monte Carlo realizations of the SN\nsample for which the 2-10 keV luminosities of the pulsars are below\nthe limits of the corresponding SNe. This is shown for different\ndistributions of the initial spin periods, described by Gaussians of\nmean $P_0$ and dispersion $\sigma_{P_0}$. In the upper panel, the\nsample includes only the SNe of ages $>10$ yr (cfr. Fig.1, middle\npanel), while in the lower panel only the SNe with ages $>30$ yr\n(cfr. Fig.1, lower panel) are included for the Monte Carlo simulations.\nIndependently of the sample considered, initial periods $P_0\la 40$ ms\nare inconsistent with the SN data.}\n\end{figure}\n\n\nFigure 2 shows the fraction $f_{sim}^c$ of the Monte Carlo simulations\nof the SN sample for which the luminosity of each pulsar is found\nbelow that of the corresponding SN.
The sample of SNe selected is\neither the one with ages $>10$ yr (top panel), or the one with ages $>\n30$ yr (bottom panel), which allow tighter constraints while\nminimizing optical depth effects. Monte Carlo realizations of the\nsamples have been run for 50 Gaussian distributions of the period with\nmean in the range $20-100\;$ms, and, for each of them, 4 values of\nthe dispersion\footnote{The dependence on $\sigma_{P_0}$ should be\ntaken as representative of the general trend, since $\sigma_{P_0}$\nand $P_0$ are likely to be correlated. But since no such\ncorrelations have been studied and reported, we took as an illustrative\nexample the simplest case of a constant $\sigma_{P_0}$ for a range of\n$P_0$.} $\sigma_{P_0}$ between 1 and $50\;$ms. For each value of the\nperiod, we performed 100,000 random realizations\footnote{The number\nof random realizations is smaller here with respect to Fig.1 for\ncomputational reasons since, while each panel of Fig.1 displays a\nMonte Carlo simulation for one set of parameters only, each panel of\nFig. 2 is the result of 200 different Monte Carlo realizations. For a\nfew cases, however, we verified that the results were statistically\nconsistent with those obtained with a larger number of random\nrealizations.}. Details of the results vary a bit between the\ntwo age-limited subsamples. This is not surprising since the extent\nto which we can draw limits on the pulsar periods depends on the\nmeasurements\/limits of the X-ray luminosities of the SNe in the\nsample. In a large fraction of the cases, we have only upper limits,\nand therefore our analysis is dependent on the sensitivity at which\neach object has been observed.
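In code, the $f_{sim}^c$ statistic reduces to a simple counting loop over realizations. The sketch below uses a toy luminosity sampler and made-up limits purely to illustrate the bookkeeping; in the actual analysis the sampler implements the birth-parameter draws and spin-down evolution described earlier, and the limits come from Table~1.

```python
import random

def f_sim_c(sample_luminosity, limits, n_realizations, rng):
    """Fraction of realizations in which every simulated pulsar falls below
    the X-ray limit of its SN. `sample_luminosity(age_yr, rng)` returns one
    pulsar 2-10 keV luminosity [erg/s]; `limits` is a list of (age, limit)."""
    ok = 0
    for _ in range(n_realizations):
        if all(sample_luminosity(age, rng) <= lim for age, lim in limits):
            ok += 1
    return ok / n_realizations

# Toy example: log-uniform pulsar luminosities against three fake limits
# (hypothetical placeholders, not values from Table 1).
rng = random.Random(0)
toy_limits = [(35.0, 1e38), (50.0, 1e37), (80.0, 1e39)]  # (age [yr], limit [erg/s])
toy_sampler = lambda age, r: 10.0 ** r.uniform(33.0, 39.0)
frac = f_sim_c(toy_sampler, toy_limits, 20000, rng)
```

Scanning this statistic over the grid of $(P_0,\sigma_{P_0})$ values produces the curves of Figure~2.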
Independently of the sample, however,\nwe find that, for initial periods $P_0\\la 35-40$ ms, the distribution\nof pulsar luminosities is highly inconsistent with the SN data for any\nvalue of the assumed width of the period distribution.\n\nWe need to emphasize that the specific value of $f_{sim}^c$ as a\nfunction of $P_0$ should be taken as representative. Various authors\nhave come up with slightly different fits for the $L_x-\\dot{E}_{\\rm\nrot}$ correlation. If, for example, instead of the fit by Possenti et\nal. (2002) we had used the fit derived by Guseinov et al. (2004), then\nthe limits on the period would have been more stringent. On the\nother hand, if, for some reason, the efficiency $\\eta_x$ of conversion\nof $\\dot{E}_{\\rm rot}$ into $L_x$ becomes low at high $\\dot{E}_{\\rm\nrot}$, then our results would be less constraining. In order to assess\nthe robustness of our results with respect to changes in $\\eta_x$, we\nran simulations of the pulsar population assuming that, for\n$\\dot{E}_{\\rm rot}>\\dot{E}_{\\rm rot} ^{\\rm max, obs}\\sim 10^{39}$ erg\ns$^{-1}$ (where $\\dot{E}_{\\rm rot} ^{\\rm max, obs}$ is the maximum\nobserved $\\dot{E}_{\\rm rot}$), the efficiency becomes\n$\\eta_x^{\\prime}=\\epsilon \\eta_x$, and we tried with a range of values of\n$\\epsilon <1$. This test also addresses the issue of a bias in our\nresults deriving from a possible shallower slope for the youngest\npulsars of the population, as discussed in \\S3. We ran Monte Carlo\nsimulations for the ACC birth parameters, and decreased $\\epsilon$ by\nincrements of 0.02. We found that, only for the very low value\n$\\epsilon\\sim 10^{-4}$, a sizable fraction of $\\sim 5\\%$ of the\nsimulations predicts pulsar X-ray luminosities that are fully consistent\nwith the SN data. 
Therefore, we conclude that our results on the millisecond\nbirth periods of pulsars are reasonably robust with respect to uncertainty\nin the $L_x-\dot{E}_{\rm rot}$ relation for the youngest members of the\npopulation.\n\nAnother systematic that might in principle affect our results would arise\nif a fraction of neutron stars were born with a non-active\nmagnetosphere, so that their X-ray luminosity at high $\dot{E}_{\rm\nrot}$ is much smaller than for the active pulsars; in that case, the limits on\nthe initial periods of the ``active'' pulsars would be less stringent.\nAn example of non-active neutron stars could be that of the CCOs\ndiscussed in \S3. However, until the fraction of these stars becomes\nwell constrained by the observations and an independent\n$L_x-\dot{E}_{\rm rot}$ relation is established for them, it is not possible to\ninclude them quantitatively in our population studies. Similarly, the\nprecise fraction of BHs versus NSs in the remnant population plays a\nrole in our results. A larger fraction of BHs would alleviate our\nconstraints on the initial spin periods, while a smaller fraction\nwould, obviously, make them tighter. If a fraction of those BHs had a\nluminosity larger than the maximum assumed upper limit in our\nsimulations (due to e.g. accretion from a fallback disk as discussed\nabove), then our results would again be more constraining. While our\nwork is the first of its kind in performing the type of analysis that\nwe present, future studies will be able to improve upon our results,\nonce the possible systematics discussed above are better constrained,\nand deeper limits are available for the full SN sample.\n\n\section{Summary}\n\nIn this paper we have proposed a new method for probing the birth\nparameters and energetics of young neutron stars. The idea is simply\nbased on the fact that the X-ray measurements of young supernovae\nprovide upper limits to the luminosity of the young pulsars embedded\nin them.
The pulsar X-ray luminosity on the other hand, being directly\nrelated to its energy loss, provides a direct probe of the pulsar spin\nand magnetic field. Whereas pulsar birth parameters are generally\ninferred through the properties of the radio population, the X-ray\nproperties of the youngest members of the population provide tight and\nindependent constraints on those birth parameters, and, as we\ndiscussed, especially on the spins.\n\nThe statistical comparison between theoretical predictions and the\ndistribution of X-ray luminosity limits that we have performed in this\nwork has demonstrated that the two are highly inconsistent if the bulk\nof pulsars is born with periods in the millisecond range. Whereas we\ncannot exclude that the efficiency $\eta_x$ of conversion of\nrotational energy into X-ray luminosity could have a turnover and drop\nat high values of $\dot{E}_{\rm rot}$ to become $\eta_x\ll 1$, the\n2-10 keV pulsar data in the currently observed range of $\dot{E}_{\rm\nrot}$ do not point in this direction (but rather point to an increase\nof $\eta_x$ with $\dot{E}_{\rm rot}$), and there is no theoretical\nreason for hypothesizing such a turnover. However, even if such\na turnover were to exist just above the observed range of\n$\dot{E}_{\rm rot}$, we found that only by taking an efficiency\n$\eta_x^{\prime}\sim 10^{-4}\eta_x$ above $\dot{E}_{\rm rot}^{\rm\nmax,obs}$ would our results lose their constraining value for the ms\nspin birth distributions. Therefore, we can robustly interpret our\nresults as an indication that there must be a sizable fraction of\npulsars born with spin periods slower than what has been derived by a\nnumber of radio population studies as well as by hydrodynamic\nsimulations of SN core-collapse (e.g. Ott et al 2006). Our findings\nare in line with a few direct measurements of initial periods of\npulsars in SNRs (see e.g. Table 2 in Migliazzo et al.
2002), as well\nas some other population synthesis studies (Faucher-Giguere \& Kaspi\n2006; Ferrario \& Wickramasinghe 2006; Lorimer et al. 2006). Our\nresults for the bulk of the pulsar population, however, do not exclude\nthat the subpopulation of magnetars could be born with very fast\nspins, as needed in order to create the dynamo action responsible for\nthe $B$-field amplification required in these objects (Thompson \&\nDuncan 1993). Because of their very short spin-down times, the energy\noutput of magnetars can be dominated by the spin down luminosity only\nup to timescales of a fraction of a year, during which the SN is still\ntoo optically thick to let the pulsar luminosity go\nthrough. Therefore, our analysis cannot place meaningful constraints\non this class of objects.\n\nFinally, our results also bear implications for the contribution of\nyoung pulsars to the population of the Ultra Luminous X-ray sources\n(ULXs) observed in nearby galaxies. The model in \S3 predicts that a\nsizable fraction of that population could indeed be made up of young,\nfast rotating pulsars (Perna \& Stella 2004). However, the analysis\nperformed here shows that the contribution from this component,\nalthough expected from the tail of the population, cannot be as large\nas current models predict.\n \nThe extent to which we could perform our current analysis has been\nlimited by the size of the SN sample, and, especially, by the\navailable X-ray measurements. The fact that, in a large fraction of\nthe sample, we have limits rather than detections, means that a large\nimprovement can be made with deeper limits from longer observations.\nThe deeper the limits, the tighter the constraints that can be derived\non the spin period distribution of the pulsars.
The analysis proposed\nand performed here is completely uncorrelated with what is done in radio\nstudies, and therefore it provides an independent and complementary\nprobe of the pulsar spin distribution at birth (or shortly\nthereafter); our results provide stronger constraints on theoretical\nmodels of stellar core collapse and early neutron star evolution,\nmaking it even more necessary to explain why neutron stars spin down\nso rapidly immediately after birth (see also Thompson, Chang \&\nQuataert 2004; Metzger, Thompson \& Quataert 2007).\n\n\section*{Acknowledgements}\n\nWe thank Roger Chevalier, John Raymond and Stefan Immler for very\nuseful discussions on several aspects of this project. We are\nespecially grateful to Bryan Gaensler and Shami Chatterjee for their\ncareful reading of our manuscript and detailed comments. We also\nthank the referee for his\/her insightful suggestions. RP and RS thank\nthe University of Sydney for the partial support and the kind\nhospitality during the initial phase of this project.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Local optima of the Hamiltonian}\n\nLet $W=(W_{i,j})_{n\times n}$ be a symmetric matrix with zero diagonal such that the $(W_{i,j})_{1\le i<j\le n}$ are independent standard normal random variables. For a spin configuration $\sigma\in\{-1,+1\}^n$, define the Hamiltonian\n\[H(\sigma):=\sum_{1\le i<j\le n}W_{i,j}\sigma_i\sigma_j~.\]\nWe say that $\sigma$ is {\em locally optimal} if flipping any single spin does not decrease the value of the Hamiltonian, that is, if $H(\sigma)\le H(\sigma^{(i)})$ for all $i\in[n]$, where $\sigma^{(i)}$ is obtained from $\sigma$ by replacing $\sigma_i$ with $-\sigma_i$. For $x\ge\sqrt{2\/\pi}$, let\n\[\mu^*(x):=\sup_{\lambda\ge 0}\left(\lambda x-\log \mathbb{E}\, e^{\lambda|N_1|}\right)\]\ndenote the Cram\'{e}r rate function associated with averages of absolute values of i.i.d. standard normal random variables $N_1,N_2,\dots$, and define\n\[R(v):=\frac{v^2}{4}-\mu^*(v)~,\]\nand let $v^*$ be the maximizer of $R(v)$ over $v>\sqrt{\frac{2}{\pi}}$. We let $\alpha^*=R(v^*)>0$ denote the maximum value of $R$. \n\n\begin{theorem}\n\label{thm:main}We have that, as $n\to +\infty$, for any choice of $\sigma=\sigma(n)\in \{-1,+1\}^n$,\n\[\n \lim_{n\to +\infty}\frac{1}{n} \log \mathbb{P}\left\{ \sigma \ \text{is\n locally optimal} \right\} = \alpha^* - \log 2 .\n\]\nMoreover, there exist constants $\epsilon_0>0$, $L>0$ and $n_0$ such that, for $0<\epsilon <\epsilon_0$ and $n\geq n_0$,\n\[\mathbb{P}\left\{\left.
-\frac{v^*}{2} - \epsilon \le n^{-3\/2} H(\sigma) \le -\frac{v^*}{2} + \epsilon \,\right| \,\sigma \text{ is locally optimal} \n\right\} \geq 1- \exp\left(L\,\sqrt{n} -\epsilon^2\,n\right).\]\n\end{theorem}\n\nThe values of the constants are numerically evaluated to be $\alpha^*\approx 0.199$ and $v^*\/2\approx\n0.506$. Since the global minimum of $H(\sigma)$ is about $- 0.763\,\nn^{3\/2}$, the typical value of a local optimum, $-0.506\,n^{3\/2}$,\ncomes fairly close. \n\nAlso note that Proposition \ref{prop:alphabounds} below implies that $\alpha^*$ is\nbetween $1\/(2\pi) \approx 0.1591\ldots$ and $2\/(3\pi) \approx 0.2122\ldots$.\n\n\subsection{Local minima, greedy algorithms and MaxCut}\label{sub:maxcut} \n\nOur problem is related to finding a local optimum of weighted MaxCut on the complete graph, which was recently studied in Angel, Bubeck, Peres, and Wei \cite{AnBuPeWe16}. Given $S\subset [n]$, we denote the value of the {\em cut} $(S,[n]\backslash S)$ as\n\[{\rm Cut}(S,[n]\backslash S):= \sum_{i\in S}\sum_{j\in [n]\backslash S}\,W_{i,j}.\]\nNote that there is a correspondence between cuts $(S,[n]\backslash S)$ and spin configurations $\sigma_S$ with:\n\[\sigma_{S,i}:= 2\mathbbm{1}_{i\in S} - 1 \,\,(i\in[n]).\]\n\[{\rm Cut}(S,[n]\backslash S) = \frac{- H(\sigma_S) + \sum_{1\leq i<j\leq n} W_{i,j}}{2}.\]\nIn particular, $\sigma_S$ is locally optimal if and only if the value of the cut $(S,[n]\backslash S)$ cannot be increased by moving a single vertex from one side of the partition to the other.\n\nFor $\sigma\in\{-1,+1\}^n$ and $i\in[n]$, define\n\[Z_i=Z_i(\sigma):= -\sigma_i\sum_{j=1}^n W_{i,j}\sigma_j~.\]\nSince flipping the $i$-th spin changes the value of the Hamiltonian by $2Z_i$, we have\n\begin{equation}\label{eq:locmincondition}\mbox{$\sigma$ is locally optimal}\quad\Leftrightarrow\quad \forall i\in [n],\ Z_i\geq 0~,\end{equation}\nand, since $\sum_{i=1}^n Z_i = -2H(\sigma)$,\n\begin{equation}\label{eq:locminenergy}H(\sigma) = -\frac{1}{2}\sum_{i=1}^n Z_i~.\end{equation}\nThe vector $Z=(Z_1,\ldots,Z_n)$ is a centered Gaussian vector with covariance matrix $C=(n-2)I+J$, where $J$ is the all-ones matrix. Therefore,\n\begin{eqnarray*}\n\mathbb{P}\left\{ \sigma \ \text{is locally optimal} \right\} &=& \mathbb{P}\left\{\cap_{i=1}^n\{Z_i\geq 0\}\right\}\n= \frac{1}{(2\pi)^{n\/2} \det(C)^{1\/2}} \int\limits_{[0,\infty)^n} \exp\left(\frac{-x^T C^{-1} x}{2}\right) dx \\\n& = & 2^{-n} \sqrt{\frac{n-2}{2n-2}}\, \mathbb{E} \exp\left(\frac{\|N\|_1^2}{4(n-1)}\right)~,\n\end{eqnarray*}\nwhere $N=(N_1,\ldots,N_n)$ is a vector of independent standard normal random variables, $\det(C)=(2n-2)(n-2)^{n-1}$, and the last equality follows from the identity $x^T C^{-1} x = \frac{\|x\|_2^2}{n-2} - \frac{\|x\|_1^2}{(n-2)(2n-2)}$ valid on $[0,\infty)^n$, the symmetry of the integrand under sign changes of the coordinates, and the change of variables $x=\sqrt{n-2}\,y$. The following lemma bounds the moment generating function of $\|N\|_1^2$.\n\n\begin{lemma}\label{lem:mgf}For all $0\le \lambda<1\/n$, \n\[\n\lambda \mathbb{E} \|N\|_1^2 \le \log \mathbb{E} \exp\left( \lambda \|N\|_1^2\right) \n\le \lambda \mathbb{E} \|N\|_1^2 \left(1+ \frac{n\lambda}{(1-n\lambda)} \right)~.\n\]\n\end{lemma}\n\n\begin{proof}\nThe inequality on the left-hand side is obvious from Jensen's inequality.\nTo prove the right-hand side, we use the Gaussian logarithmic Sobolev inequality.
\nIn particular, writing $f(x)=\\|x\\|_1^2$ and $F(\\lambda)= \\mathbb{E} \\exp\\left( \\lambda f(N)\\right)$,\nthe inequality on page 126 of Boucheron, Lugosi, and Massart \\cite{BoLuMa13} asserts\nthat\n\\[\n \\lambda F'(\\lambda) - F(\\lambda)\\log F(\\lambda) \\le \\frac{\\lambda^2}{2}\n \\mathbb{E}\\left[ e^{\\lambda f(N)} \\|\\nabla f(N) \\|^2\\right]~.\n\\]\nSince $\\|\\nabla f(N)\\|^2=4n\\|N\\|_1^2$, we obtain the differential inequality\n\\[\n \\lambda F'(\\lambda) - F(\\lambda)\\log F(\\lambda) \\le 2n\\lambda^2 F'(\\lambda)~.\n\\]\nThis inequality has the same form as the one at the top of page 191 of \\cite{BoLuMa13} with $a=2n$ and $b=0$\nand Theorem 6.19 implies the result above.\n\\end{proof}\n\nSince\n\\[\n\\mathbb{E} \\|N\\|_1^2 = n+ n(n-1) \\frac{2}{\\pi}~,\n\\]\nwe get\n\\[\n\\mathbb{P}\\{ \\sigma \\ \\text{is locally optimal} \\} \\ge 2^{-n} \\sqrt{\\frac{n-2}{2n-2}} \n\\exp\\left( n\/(4(n-1)) + \\frac{n}{2\\pi} \\right)\n\\]\nand \n\\[\n\\mathbb{P}\\{ \\sigma \\ \\text{is locally optimal} \\} \\le 2^{-n} \\sqrt{\\frac{n-2}{2n-2}} \n\\exp\\left( \\left(n\/(4(n-1)) + \\frac{n}{2\\pi} \\right) \\frac{4n-1}{3n-1} \\right) \n\\]\nSummarizing, we obtain the following bounds\n\\begin{proposition}\n\\label{prop:alphabounds}\nFor all spin configurations $\\sigma\\in \\{-1,1\\}^n$,\n\\[\n\\frac{1}{2\\pi} -\\log 2 -O(1\/n) \\le \\frac{1}{n} \\log \\mathbb{P}\\{ \\sigma \\ \\text{is locally optimal} \\} \\le \n\\frac{2}{3\\pi} -\\log 2 +O(1\/n)\n\\]\n\\end{proposition}\n\n\nIn the next section we take a closer look at the integral expression\nof the probability of local optimality. In fact, we prove that\n$(1\/n)\\log \\mathbb{P}\\{ \\sigma \\ \\text{is locally optimal} \\}$ converges to\n$\\alpha^*-\\log 2$ defined in the introduction. 
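The constants above can be checked numerically. The sketch below is a rough numerical experiment (not part of the proof): it computes $\mu^*$ by grid search using the standard identity $\mathbb{E}\,e^{\lambda|N_1|}=2e^{\lambda^2/2}\Phi(\lambda)$, maximizes $R(v)=v^2/4-\mu^*(v)$, and estimates the probability of local optimality for $n=10$ by direct simulation. For the all-ones configuration, local optimality amounts to every row sum of $W$ having the appropriate sign; by the symmetry $W\leftrightarrow -W$ the probability is the same for either sign convention, and it does not depend on $\sigma$.

```python
import math, random

def Phi(x):
    # standard normal distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mu_star(x, lam_max=6.0, steps=2000):
    # Cramer rate function of the mean of |N_i|, by grid search over lambda,
    # using log E exp(lambda |N|) = lambda^2/2 + log(2 Phi(lambda))
    return max(lam * x - lam * lam / 2.0 - math.log(2.0 * Phi(lam))
               for lam in (i * lam_max / steps for i in range(steps + 1)))

def R(v):
    return v * v / 4.0 - mu_star(v)

v_grid = [math.sqrt(2.0 / math.pi) + 0.001 * i for i in range(1, 1500)]
v_star = max(v_grid, key=R)
alpha_star = R(v_star)  # numerically close to 0.199, with v_star/2 close to 0.506

def local_opt_probability(n, trials, rng):
    # direct Monte Carlo estimate of P{sigma is locally optimal} for small n
    hits = 0
    for _ in range(trials):
        W = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                W[i][j] = W[j][i] = rng.gauss(0.0, 1.0)
        if all(sum(row) <= 0.0 for row in W):
            hits += 1
    return hits / trials

# For n = 10 the estimate should land between the finite-n bounds of
# Proposition prop:alphabounds, roughly 4e-3 and 8e-3.
p10 = local_opt_probability(10, 50000, random.Random(1))
```

The grid resolutions used here are enough to reproduce the values $\alpha^*\approx 0.199$ and $v^*/2\approx 0.506$ quoted after Theorem \ref{thm:main}.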
\n\n\n\\section{The value of local optima}\n\nIn this section we study, for any fixed $\\sigma\\in \\{-1,+1\\}^n$ and $\\Delta\n>0$, the joint probability\n\\[\n \\mathbb{P}\\left\\{ \\sigma \\ \\text{is locally optimal},\\;n^{-3\/2} H(\\sigma) \\le - \\Delta\n \\right\\}~.\n\\]\nWe let $Z=(Z_i)_{i=1}^n$ with $Z_i = Z_i(\\sigma)$ as in the previous section. Recall from equations (\\ref{eq:locmincondition}) and (\\ref{eq:locminenergy}) that\n\\[\\mbox{$\\sigma$ is locally optimal}\\Leftrightarrow \\forall i\\in [n],\\, Z_i\\geq 0\\]\nand\n\\[-\\frac{H(\\sigma)}{n^{3\/2}} = \\frac{1}{2n^{3\/2}}\\sum_{i=1}^nZ_i.\\]\nTherefore, we may follow the calculations in the previous section and obtain:\n\\begin{eqnarray*}\n\\lefteqn{\n \\mathbb{P}\\left\\{ \\sigma \\ \\text{is locally optimal}, n^{-3\/2} H(\\sigma) \\le - \\Delta \\right\\}\n} \\\\\n& = &\n\\mathbb{P}\\left\\{ (\\cap_{i=1}^n\\{Z_i\\geq 0\\})\\bigcap \\left\\{\\sum_{i=1}^n Z_i \\ge 2\\Delta n^{3\/2}\\right\\}\\right\\} \\\\\n& = &\n\\frac{1}{(2\\pi)^{n\/2} \\det(C)^{1\/2}} \\int\\limits_{[0,\\infty)^n\\cap\n \\{x:\\sum_i x_i \\ge 2\\Delta n^{3\/2}\\}} \\exp\\left(\\frac{-x^T C^{-1} x}{2}\\right) dx \\\\\n& = &\n\\frac{1}{(2\\pi)^{n\/2} (2n-2)^{1\/2} (n-2)^{(n-1)\/2}} \\int\\limits_{[0,\\infty)^n\\cap\n \\{x:\\sum_i x_i \\ge 2\\Delta n^{3\/2}\\}} \\exp\\left(\\frac{- \\|x\\|_2^2}{2(n-2)} + \\frac{ \\|x\\|_1^2}{2(n-2)(2n-2)}\\right) dx\n\\\\\n& = &\n2^{-n} \\frac{1}{(2\\pi)^{n\/2} (2n-2)^{1\/2}\n (n-2)^{(n-1)\/2}}\\int\\limits_{\\{x:\\|x\\|_1\\ge 2\\Delta n^{3\/2}\\}} \\exp\\left(\\frac{- \\|x\\|_2^2}{2(n-2)} + \\frac{ \\|x\\|_1^2}{2(n-2)(2n-2)}\\right) dx~.\n\\end{eqnarray*}\nThus, by a change of variables, we get\n\\begin{eqnarray*}\n\\lefteqn{\n\\mathbb{P}\\left\\{ \\sigma \\ \\text{is locally optimal}, n^{-3\/2} H(\\sigma) \\le - \\Delta\\right\\} } \\\\\n& = & 2^{-n} \\sqrt{\\frac{n-2}{2n-2}} \n\\mathbb{E} \\left[ \\mathbbm{1}{\\|N\\|_1\\ge 2\\Delta n^{3\/2}\/\\sqrt{n-2}} \\exp\\left(\n 
\frac{\|N\|_1^2}{4(n-1)} \right) \right]~,\nwhere $N$ is a vector of independent standard normal random variables. \n\nWe deduce the following proposition.\n\n\begin{proposition}\label{prop:conditional}We have that, for all $\sigma\in \{-1,+1\}^n$,\n\begin{equation}\n\label{eq:key}\n\mathbb{P}\left\{\left. -\frac{H(\sigma)}{n^{3\/2}}\geq \Delta \right| \sigma \text{ is locally optimal} \n\right\} = \n\frac{\n\mathbb{E} \left[ \mathbbm{1}{\|N\|_1\ge 2\Delta n^{3\/2}\/\sqrt{n-2}} \exp\left(\n \frac{\|N\|_1^2}{4(n-1)} \right) \right]}\n{\mathbb{E} \exp\left(\n \frac{\|N\|_1^2}{4(n-1)} \right)}~.\n\end{equation}\end{proposition}\n\n\section{Approximating the integral}\n\nIn order to establish convergence of the exponent\n$(1\/n) \log \mathbb{P}\{ \sigma \ \text{is locally optimal} \}$ and also the\n``typical''~value of the energy, we\nneed to understand the behavior of the numerator and the denominator \nof the key equation \eqref{eq:key}. \n\nThe main idea is to obtain a Laplace-type approximation to the integral. Make the approximation\n\[\mathbb{E} \left[ \exp\left(\n \frac{\|N\|_1^2}{4(n-1)} \right) \right]\approx \mathbb{E}\left[ \exp\left(\n \frac{\|N\|_1^2}{4n} \right) \right].\]\nObserve that \n\[\frac{\|N\|_1}{n} = \frac{1}{n}\sum_{i=1}^n|N_i|\]\nis an average of i.i.d. random variables with expectation $\sqrt{2\/\pi}$ and light tails. Therefore, it satisfies a Large Deviations Principle with a rate function $\mu^*(x)$:\n\[\mathbb{P}\{\|N\|_1\geq nx\}\approx e^{-\mu^*(x)\,n}.\]\nReaders familiar with Varadhan's Lemma (see e.g.
\cite[page 32]{DenHollander_LD}) should expect that, as $n\to +\infty$,\n\[\frac{1}{n}\log\,\mathbb{E}\left[ \exp\left(\n \frac{\|N\|_1^2}{4n} \right) \right] = \frac{1}{n}\log\,\mathbb{E}\left[ \exp\left(\n n\,\frac{(\|N\|_1\/n)^2}{4} \right) \right] \to \sup_{v}\,\frac{v^2}{4} - \mu^*(v).\]\nIn fact, the intuition behind the Lemma is that most of the ``mass\"~of the expectation concentrates around $\|N\|_1\sim v^*\,n$, where $v^*$ achieves the above supremum. This means that the conditional measure described in Proposition \ref{prop:conditional} should concentrate around $v^*\/2$.\n\nOur calculations confirm this reasoning. The usual statement of Varadhan's Lemma does not apply directly because $\|N\|_1^2\/4$ is an unbounded function of $\|N\|_1$. Another minor technicality is that $\|N\|_1^2$ is divided by $4(n-1)$ instead of $4n$. In what follows we have opted for a self-contained approach to our estimates, which gives quantitative bounds. This section collects the corresponding technical estimates. We finish the proof of Theorem \ref{thm:main} in the next section.\n\nThe next Lemma is a quantitative version of the large deviations principle (or Cram\'{e}r's Theorem) for $\|N\|_1$.\n\n\begin{lemma}\label{lem:LDP}For $x\geq \sqrt{2\/\pi}$, define $\mu^*(x)$ as in the introduction. Let $N=(N_1,\dots,N_n)$ be a vector of i.i.d. standard normal coordinates. \nThen:\n\[\mathbb{P}\,\{\|N\|_1\geq nx\} = e^{-(\mu^*(x)+r_n(x))\,n}\]\nwith\n\[0\leq r_n(x)\leq \kappa\,\left(\frac{x - \sqrt{2\/\pi}}{\sqrt{n}} + \frac{1}{n}\right)\]\nfor some $\kappa>0$ independent of $x$ and $n$.
Moreover, $\\mu^*$ is smooth and $\\mu^*(\\sqrt{2\/\\pi})=0$.\\end{lemma} \n\\begin{proof}This follows directly from Lemmas \\ref{lem:absnormal}, \\ref{lem:lambdastar} and \\ref{lem:chernoff} in subsection \\ref{sec:technical}.\\end{proof}\n\nWe will use this Lemma to estimate expectations of the form: \n\\[\\mathbb{E}\\left[ \\exp\\left(\n \\frac{c\\|N\\|_1^2}{2n} \\right)\\mathbbm{1}{\\|N\\|_1\\leq an}\\right]\\mbox{ and }\\mathbb{E}\\left[ \\exp\\left(\n \\frac{c\\|N\\|_1^2}{2n} \\right)\\mathbbm{1}{\\|N\\|_1\\geq bn}\\right].\\]\n\nThe function $R_c$ defined below naturally shows up in our estimates. \n\\begin{equation}\\label{eq:defRc}R_c(x):=\\frac{cx^2}{2} - \\mu^*(x).\\,\\,\\,(x\\geq \\sqrt{2\/\\pi}).\\end{equation}\n\n\\begin{lemma}\\label{lem:followsfromLaplace}For $a\\geq \\sqrt{2\/\\pi}$, $c\\geq 0$\n\\[\\mathbb{E}\\left[ \\exp\\left(\n \\frac{c\\|N\\|_1^2}{2n} \\right)\\mathbbm{1}{\\|N\\|_1\\leq an}\\right] = (I) + (II),\\]\n where \n \\[1\\leq (I)\\leq \\exp\\left(n\\,R_c(\\sqrt{2\/\\pi})\\right)\\]\n and\n \\[ (II) = cn\\int_{\\sqrt{2\/\\pi}}^{a} x\\,\\exp(n\\,(R_c(x)-r_{n}(x)))\\,dx,\\]\nwhere $r_n(x)$ is as in Lemma \\ref{lem:LDP}. For $b\\geq \\sqrt{2\/\\pi}$,\n\\begin{eqnarray*}\\mathbb{E}\\left[ \\exp\\left(\n \\frac{c\\|N\\|_1^2}{2n} \\right)\\mathbbm{1}{\\|N\\|_1\\geq bn}\\right] &=& \\exp\\left\\{n\\,(R_c(b) -r_n(b))\\right\\} \\\\ & & + cn\\int_{b}^{+\\infty} x\\,\\exp\\left\\{n\\,(R_c(x)-r_n(x))\\right\\}\\,dx.\\end{eqnarray*}\\end{lemma}\n\\begin{proof}Let $\\phi_{c,n}(x) = e^{cn\\,x^2\/2}$. 
Note that:\n\\[ \\mathbbm{1}{\\|N\\|_1\\leq a\\,n} \\exp\\left(\\frac{cn\\,\\|N\\|_1^2}{2n}\\right) = \\phi_{c,n}\\left(\\frac{\\|N\\|_1}{n}\\right)\\,\\mathbbm{1}{\\frac{\\|N\\|_1}{n}\\leq a}.\\]\n We may compute the expectation of this expression as follows.\n \\begin{eqnarray*}\\mathbb{E}\\left[\\mathbbm{1}{\\|N\\|_1\\leq a\\,n} \\exp\\left(\n \\frac{cn\\,\\|N\\|_1^2}{2n}\\right)\\right] &=& 1 + \\int_{0}^a\\,\\phi'_{c,n}(x)\\,\\mathbb{P}\\left\\{\\frac{\\|N\\|_1}{n}\\geq x\\right\\}\\,dx\\\\\n\t&=& 1 + cn \\int_0^a\\,x\\,\\exp\\left(\\frac{cn\\,x^2}{2}\\right)\\,\\mathbb{P}\\left\\{\\frac{\\|N\\|_1}{n}\\geq x\\right\\}\\,dx.\\end{eqnarray*}\nWe split the above integral in two parts. \n\\begin{eqnarray*}(I) &=& 1 + cn \\int_0^{\\sqrt{2\/\\pi}}\\,x\\,\\exp\\left(\\frac{cn\\,x^2}{2}\\right)\\,\\mathbb{P}\\left\\{\\frac{\\|N\\|_1}{n}\\geq x\\right\\}\\,dx\\\\\n(II) &=& cn\\,\\int_{\\sqrt{2\/\\pi}}^{a} \\,x\\,\\exp\\left(\\frac{cn\\,x^2}{2}\\right)\\,\\mathbb{P}\\left\\{\\frac{\\|N\\|_1}{n}\\geq x\\right\\}\\,dx.\\end{eqnarray*}\nFor part (I), we bound the probability in the integral by $1$, and obtain:\n\\[1\\leq (I)\\leq 1 + cn \\int_0^{\\sqrt{2\/\\pi}}\\,x\\,\\exp\\left(\\frac{cn\\,x^2}{2}\\right)\\,dx \\leq \\exp\\left(\\frac{cn\\,x^2}{2}\\right)|_{x=\\sqrt{\\frac{2}{\\pi}}} = \\exp\\left(n\\,R_c(\\sqrt{2\/\\pi})\\right)\\]\nbecause $\\mu^*(\\sqrt{2\/\\pi})=0$.\nTerm (II) may be evaluated using the estimate from Lemma \\ref{lem:LDP}.\n\\[ (II) = cn\\int_{\\sqrt{2\/\\pi}}^{a} x\\,\\exp\\left(\\frac{cn\\,x^2}{2} - n\\,\\mu^*(x) - n\\,r_{n}(x)\\right)\\,dx,\\]\nwhich has the desired form because \n\\[\\frac{cn\\,x^2}{2} - n\\,\\mu^*(x) = n\\,R_c(x).\\]\n\nSimilarly, \\[ \\mathbbm{1}{\\|N\\|_1\\geq b\\,n} \\exp\\left(\\frac{cn\\,\\|N\\|_1^2}{2n}\\right) = \\phi_{c,n}\\left(\\frac{\\|N\\|_1}{n}\\right)\\,\\mathbbm{1}{\\frac{\\|N\\|_1}{n}\\geq b},\\]\nand we finish the proof via the identity \\begin{eqnarray*}\\mathbb{E}\\left[\\mathbbm{1}{\\|N\\|_1\\geq b\\,n} 
\\exp\\left(\n \\frac{cn\\,\\|N\\|_1^2}{2n}\\right)\\right] &=& \\phi_{c,n}(b)\\,\\mathbb{P}\\left\\{\\frac{\\|N\\|_1}{n}\\geq b\\right\\} \\\\ & & + \\int_{b}^{+\\infty}\\,\\phi'_{c,n}(x)\\,\\mathbb{P}\\left\\{\\frac{\\|N\\|_1}{n}\\geq x\\right\\}\\,dx\\end{eqnarray*}\n and using the bounds in Lemma \\ref{lem:LDP} (which are valid for all $x\\geq b\\geq \\sqrt{2\/\\pi}$).\\end{proof}\n\n\\section{Proof of main Theorem}\n\nThe previous section shows that, in order to estimate the expectations in Lemma \\ref{lem:followsfromLaplace}, we need to understand the function $R_c$. The case of interest for us is when $c=n\/(2(n-1))$, which is when we recover the expectations in (\\ref{eq:key}). Since $c$ varies with $n$, we will consider instead:\n\\begin{equation}\\label{eq:defR14}R(x) = R_{\\frac{1}{2}}(x):= \\frac{x^2}{4} - \\mu^*(x)\\,\\,(x\\geq \\sqrt{2\/\\pi}).\\end{equation}\nand note that \n\\begin{equation}\\label{eq:wrongRc}R(x)\\leq R_c(x)\\leq R(x) + (2c-1)\\,\\frac{x^2}{4}.\\end{equation}\n\nThe next Lemma contains some information on $R(x)$. \n \n\\begin{lemma}\\label{lem:estimatesforproof}Let $x\\geq \\sqrt{2\/\\pi}$. Define $R$ as in equation (\\ref{eq:defR14}) and $\\mu^*$ as in Lemma \\ref{lem:LDP}. Then there exists a unique $x=v^*>\\sqrt{2\/\\pi}$ that maximizes $R(x)$ over $x\\geq \\sqrt{2\/\\pi}$. Letting $\\alpha^*:=R(v^*)$ denote the value of the maximum, for any $x\\geq \\sqrt{2\/\\pi}$, there exists $\\theta(x)\\in [1\/4,10]$ with:\n\\[R(x) - \\alpha^*= -\\theta(x)\\,(x-v^*)^2.\\]\\end{lemma}\n\\begin{proof}See subsection \\ref{sub:estimatesforproof}.\\end{proof}\n\nWe can now obtain good upper and lower estimates on the integral expressions in Lemma \\ref{lem:followsfromLaplace} and finish the proof of the main Theorem.\n\n\\begin{proof}[of Theorem \\ref{thm:main}] In this proof we assume $n\\geq 100$ for simplicity. We will use the notation $L>0$ to denote the value of a constant independent of $n$ that may change from line to line. 
Finally, we set \n\\[c=c_n := \\frac{n}{2\\,(n-1)} = \\frac{1}{2} + \\frac{1}{2\\,(n-1)}.\\] Lemma \\ref{lem:estimatesforproof} and (\\ref{eq:wrongRc}) give:\n\\begin{equation}\\label{eq:concavity}\\forall x\\geq \\sqrt{\\frac{2}{\\pi}}\\,:\\,R_c(x)-\\alpha^* \\in\\left[- 10\\,(x-v^*)^2 ,- \\frac{1}{6}\\,(x-v^*)^2 + \\frac{x^2}{(n-1)}\\right].\\end{equation}\nWe will now apply this to estimate expectations to the left of $v^*$. That is, we consider:\n\\[\\mathbb{E}\\left[ \\exp\\left(\n \\frac{c\\|N\\|_1^2}{2n} \\right)\\mathbbm{1}{\\|N\\|_1\\leq an}\\right], \\,\\sqrt{\\frac{2}{\\pi}}\\leq a\\leq v^*.\\]\nIn this range $|a-v^*|$ is uniformly bounded, so $x^2\\leq L$ and\n\\[\\forall \\sqrt{\\frac{2}{\\pi}}\\leq x\\leq v^*\\,:\\,0\\leq r_n(x)\\leq \\frac{L}{\\sqrt{n}}.\\]\nCombining Lemma \\ref{lem:followsfromLaplace} with $c\\leq 1$ and (\\ref{eq:concavity}), we obtain:\n\\begin{eqnarray*}\\frac{\\mathbb{E}\\left[ \\exp\\left(\n \\frac{c\\|N\\|_1^2}{2n} \\right)\\mathbbm{1}{\\|N\\|_1\\leq an}\\right]}{\\exp(n\\,\\alpha^*)}&\\leq& \\exp(n\\,(R_c(\\sqrt{2\/\\pi})-\\alpha^*)) \\\\ & & + cn \\int_{\\sqrt{2\/\\pi}}^a x\\,\\exp\\left(n\\,(R_c(x) - \\alpha^*)\\right)\\,dx\\\\ \n \\\\ &\\leq & \\exp\\left(L - \\frac{(v^*-\\sqrt{2\/\\pi})^2}{4}\\,n\\right) \\\\ & & + n \\int_{\\sqrt{2\/\\pi}}^a x\\,\\exp\\left(L - n\\,\\frac{(v^*-x)^2}{6}\\right)\\,dx\\\\ \n &\\leq & L\\,(1+cn)\\,\\exp\\left(L- \\frac{(a-v^*)^2\\,n}{4}\\right)\\\\ &\\leq & \\exp\\left(L\\log n\\, - \\frac{(a-v^*)^2\\,n}{4}\\right).\\end{eqnarray*}\n At the same time, \n\\begin{eqnarray*} \\frac{ \\mathbb{E}\\left[ \\exp\\left(\n \\frac{c\\|N\\|_1^2}{2n} \\right)\\mathbbm{1}{\\|N\\|_1\\leq v^*n}\\right]}{{\\exp(n\\,\\alpha^*)}} &\\geq & \\exp\\left(-L\\sqrt{n}\\right)\\,\\int_{v^*-\\frac{1}{n}}^{v^*} x\\,\\exp\\left(n\\,(R_c(x) - \\alpha^*)\\right)\\,dx\n \\\\ &\\geq & \\frac{1}{n}\\,\\left(v^*-\\frac{1}{n}\\right)\\,\\frac{\\exp\\left(-L\\sqrt{n}\\, - 10\\,(1\/n)^2\\right)}{n}\\\\ &\\geq & 
\\exp(-L\\sqrt{n}).\\end{eqnarray*}\n \nFor bounding the expectation for $b\\geq v^*$, we cannot simply use $x^2\\leq L$ and $r_n(x)\\leq L\/\\sqrt{n}$. However, note that \n\\[- \\frac{1}{6}\\,(x-v^*)^2 + \\frac{x^2}{4\\,(n-1)}\\leq \\left\\{\\begin{array}{ll}- \\frac{1}{5}\\,(x-v^*)^2 + \\frac{L}{\\sqrt{n}} &\\mbox{for $x\\leq (n-1)^{1\/4}$};\\\\\n - \\frac{1}{6}\\,(x-v^*)^2 + \\frac{2(x-v^*)^2 + 2(v^*)^2}{(n-1)}\\leq -\\frac{1}{5}\\,(x-v^*)^2 + \\frac{L}{n} &\\mbox{ for larger $x$}.\\end{array}\\right.\\]\n\nAlso, recalling the expression for $r_n$ in Lemma \\ref{lem:LDP}, \n\\[0\\leq r_n(x)\\leq \\kappa\\left(\\frac{x-\\sqrt{2\/\\pi}}{\\sqrt{n}} + \\frac{1}{n} \\right)\\leq \\frac{L}{\\sqrt{n}} + \\frac{L\\,(x-v^*)}{\\sqrt{n}}.\\] \n\nThis allows us to obtain, for $b\\leq v^*+\\epsilon_0$, \n \n\\begin{eqnarray*}\\frac{\\mathbb{E}\\left[ \\exp\\left(\n \\frac{c\\|N\\|_1^2}{2n} \\right)\\mathbbm{1}{\\|N\\|_1\\geq bn}\\right]}{\\exp(n\\,\\alpha^*)}&\\leq& \\exp\\left(L\\sqrt{n} - \\frac{(b-v^*)^2\\,n}{4}\\right);\\\\ \\frac{ \\mathbb{E}\\left[ \\exp\\left(\n \\frac{c\\|N\\|_1^2}{2n} \\right)\\mathbbm{1}{\\|N\\|_1\\geq v^*n}\\right]}{{\\exp(n\\,\\alpha^*)}}&\\geq & \\exp(-L\\sqrt{n}).\\end{eqnarray*}\n\nThis leads to our main results. Indeed, if we apply the above bounds with $a=b=v^*$, we obtain that, as $n\\to +\\infty$,\n\\begin{eqnarray*}\\mathbb{E}\\left[ \\exp\\left(\n \\frac{c\\|N\\|_1^2}{2n}\\right)\\right] &=& \\mathbb{E}\\left[ \\exp\\left(\n \\frac{c\\|N\\|_1^2}{2n} \\right)\\mathbbm{1}{\\|N\\|_1\\leq v^*n}\\right] \\\\ & & +\\mathbb{E}\\left[ \\exp\\left(\n \\frac{c\\|N\\|_1^2}{2n} \\right)\\mathbbm{1}{\\|N\\|_1\\geq v^*n}\\right] \\\\ & = & \\exp\\left(n\\,\\alpha^* \\pm L\\sqrt{n}\\right).\\end{eqnarray*}\n This implies the first statement in the Theorem via Proposition \\ref{prop:alphabounds}. 
\n \n \n \nSecondly, we apply Proposition \\ref{prop:conditional} and obtain:\n\\begin{eqnarray*}\\mathbb{P}\\left\\{ - H(\\sigma)\\leq - \\frac{v^*}{2} - \\epsilon \\mid \\sigma\\mbox{ local optimum}\\right\\} &\\leq & \\frac{\n\\mathbb{E} \\left[ \\mathbbm{1}{\\|N\\|_1\\ge b\\,n} \\exp\\left(\n \\frac{\\|N\\|_1^2}{4(n-1)} \\right) \\right]}\n{\\mathbb{E} \\exp\\left(\n \\frac{\\|N\\|_1^2}{4(n-1)} \\right)}~\n \\\\ & & \\left(\\mbox{ with }b = \\left(v^* + 2\\epsilon\\right)\\,\\sqrt{\\frac{n}{n-1}}\\right)\\\\ &\\leq & \\exp\\left(L\\sqrt{n} - \\epsilon^2\\,n\\right),\\end{eqnarray*}\nand (for $\\epsilon$ small enough, so that $a$ below is $\\geq \\sqrt{2\/\\pi}$):\n\\begin{eqnarray*}\\mathbb{P}\\left\\{ - H(\\sigma)\\geq - \\frac{v^*}{2} + \\epsilon \\mid \\sigma\\mbox{ local optimum}\\right\\} & = & \\frac{\n\\mathbb{E} \\left[ \\mathbbm{1}{\\|N\\|_1\\le a\\,n} \\exp\\left(\n \\frac{\\|N\\|_1^2}{4(n-1)} \\right) \\right]}\n{\\mathbb{E} \\exp\\left(\n \\frac{\\|N\\|_1^2}{4(n-1)} \\right)}~\n \\\\ & & \\left(\\mbox{ with }a = \\left(v^* - 2\\epsilon\\right)\\,\\sqrt{\\frac{n}{n-1}}\\right)\\\\ &\\leq& \\exp\\left(L\\sqrt{n} - \\epsilon^2\\,n\\right).\\end{eqnarray*}\n \\end{proof}\n \n\\section{Auxiliary results}\n\n\\subsection{Lemmas on large deviations of $\\|N\\|_1$}\\label{sec:technical}\n\nThe goal of this subsection is to prove a series of Lemmas that together imply Lemma \\ref{lem:LDP}. We first find an expression for the Laplace transform of $|N(0,1)|$.\n\n\\begin{lemma}\n\\label {lem:absnormal}\nLet $N(0,1)$ be a standard normal random variable. 
For all $\\lambda >0$,\n\\[\n \\mathbb{E} e^{\\lambda|N(0,1)|}= e^{\\lambda^2\/2 + \\phi(\\lambda)}~,\n\\]\nwhere $\\phi(\\lambda)= \\log(2\\Phi(\\lambda))$, with\n$\\Phi(\\lambda)=\\mathbb{P}\\{N(0,1)\\le \\lambda\\}$.\n\\end{lemma}\n\\begin{proof}\n\\begin{eqnarray*}\n\\mathbb{E} e^{\\lambda|N(0,1)|} & = & \\frac{2}{\\sqrt{2\\pi}} \\int_0^\\infty\ne^{\\lambda x - x^2\/2}dx \\\\\n& = & 2e^{\\lambda^2\/2} \\frac{1}{\\sqrt{2\\pi}}\\int_0^\\infty\ne^{-(x-\\lambda)^2\/2}dx \\\\\n& = & 2e^{\\lambda^2\/2} \\mathbb{P}\\{N(0,1) > - \\lambda\\}~.\n\\end{eqnarray*}\n\\end{proof}\n\nWe will need to compute the large deviations rate function for $\\sum_{i=1}^n|N_i|$, with $N_i$ i.i.d. standard normal. As usual, this is given by the Fenchel--Legendre transform of $\\log \\mathbb{E} e^{\\lambda|N(0,1)|}$:\n\\[\\mu^*(x):= \\sup_{\\lambda\\geq 0}\\,\\lambda x - \\log \\mathbb{E} e^{\\lambda|N(0,1)|}.\\]\n\nThe next lemma collects technical facts on $\\mu^*$ and the value $\\lambda=\\lambda_{*}$ that achieves the supremum.\n\n\\begin{lemma}\\label{lem:lambdastar}For each $x\\geq \\sqrt{2\/\\pi}$, there exists a unique $\\lambda=\\lambda_{*}(x)\\geq 0$ such that\n\\[\\lambda + \\phi'(\\lambda) = x.\\]\nDefining:\n\\[x\\geq \\sqrt{\\frac{2}{\\pi}}\\mapsto \\mu^*(x) := \\lambda_{*}(x)\\,x -\\frac{\\lambda_{*}(x)^2}{2} - \\phi(\\lambda_{*}(x)),\\]\nwe have that, for each $x$ in the above range, $\\mu^*(x)$ is the global maximum of \n\\[\\lambda\\geq 0\\mapsto \\lambda\\,x - \\frac{\\lambda^2}{2} - \\phi(\\lambda),\\]\nwhich is uniquely achieved at $\\lambda=\\lambda_{*}(x)$. 
We also have the following inequalities.\n \\begin{enumerate}\n \\item {\\em Strict concavity.} For each $\\lambda\\geq 0$, $x\\geq \\sqrt{2\/\\pi}$,\n\\begin{equation}\\label{eq:growtharoundlambdastar}\\mu^*(x) - (\\lambda\\,x - \\ln \\mathbb{E}\\,e^{\\lambda |N(0,1)|}) \\in \\left[\\frac{(\\lambda-\\lambda_{*}(x))^2}{40},\\frac{(\\lambda-\\lambda_{*}(x))^2}{2}\\right].\\end{equation}\n\\item {\\em Derivative bounds for $\\lambda_{*}$:} \\begin{equation}\\label{eq:lambdastarstarderivative}1\\leq \\lambda_{*}'(x) \\leq 20\\end{equation}\n\\end{enumerate}\\end{lemma}\n\\begin{proof}By the previous Lemma,\n\\[\\lambda + \\phi'(\\lambda) = \\frac{d}{d\\lambda}\\,\\log\\,\\mathbb{E} e^{\\lambda|N(0,1)|},\\]\nwhich is a smooth function because $|N(0,1)|$ has a Gaussian-type tail. Using this ``lightness of the tail\",~ one can differentiate under the expectation and obtain:\n\\[ \\phi'(0) = \\frac{d}{d\\lambda}\\mathbb{E} \\,e^{\\lambda|N(0,1)|}\\mid_{\\lambda=0} = \\mathbb{E}\\,|N(0,1)| = \\sqrt{\\frac{2}{\\pi}}.\\]\nLemma \\ref{lem:phidoubleprime}~below implies that \n\\[-0.95\\leq \\phi''(\\lambda) \\leq 0.\\]\nTherefore\n\\begin{equation}\\label{eq:convexMGF}\\forall \\lambda\\geq 0,\\, \\frac{d}{d\\lambda}\\,(\\lambda + \\phi'(\\lambda))\\in [0.05,1].\\end{equation}\nIn particular, $\\lambda + \\phi'(\\lambda)$ is an increasing function that is equal to $\\sqrt{2\/\\pi}$ at $\\lambda=0$ and diverges when $\\lambda\\nearrow +\\infty$. It follows that for all $x\\geq \\sqrt{2\/\\pi}$ there exists a unique $\\lambda=\\lambda_{*}(x)$ with $\\lambda_{*}(x)+\\phi'(\\lambda_{*}(x))=x$, and $\\lambda_{*}(\\sqrt{2\/\\pi})=0$. 
The Implicit Function Theorem guarantees that $\\lambda_{*}$ is smooth over $[\\sqrt{2\/\\pi},+\\infty)$ and \n\\begin{equation}\\label{eq:lambdastarderivativehere}\\lambda_{*}'(x) = \\frac{1}{\\frac{d}{d\\lambda}\\,(\\lambda + \\phi'(\\lambda))\\mid_{\\lambda=\\lambda_{*}(x)}}\\in [1,20].\\end{equation}\nEquation (\\ref{eq:convexMGF}) above shows that \\[\\lambda\\,x - \\frac{\\lambda^2}{2} - \\phi(\\lambda) = \\lambda\\,x - \\ln \\mathbb{E}\\,e^{\\lambda |N|} \\] is a strictly concave function of $\\lambda$ with second derivative \n\\[-1\\leq -\\frac{d^2}{(d\\lambda)^2}\\,\\left(\\frac{\\lambda^2}{2} + \\phi(\\lambda) \\right)\\leq- \\frac{1}{20}.\\]\nThus $\\lambda_{*}(x)$, which is a critical point for this function, is the unique global maximizer of $\\lambda\\,x - \\ln \\mathbb{E}\\,e^{\\lambda |N|}$. The value of the function at that point is precisely $\\mu^*(x)$. \n\nLet us now prove the estimates in the Lemma. The strict concavity property in (\\ref{eq:growtharoundlambdastar}) follows from expanding \n\\[\\lambda\\,x - \\ln \\mathbb{E}\\,e^{\\lambda |N(0,1)|}\\]\naround the critical point $\\lambda=\\lambda_{*}(x)$ and applying a second-order Taylor expansion:\n\\begin{eqnarray*}\\lambda\\,x - \\ln \\mathbb{E}\\,e^{\\lambda |N(0,1)|} - \\mu^*(x) &=& \\frac{d}{d{\\lambda}}\\,(\\lambda\\,x - \\ln \\mathbb{E}\\,e^{\\lambda |N(0,1)|})_{\\lambda=\\lambda_{*}(x)}\\,(\\lambda - \\lambda_{*}(x)) \\\\ \n& & + \\frac{1}{2}\\,\\frac{d^2}{(d\\lambda)^2}\\,\\left(\\lambda\\,x - \\ln \\mathbb{E}\\,e^{\\lambda |N(0,1)|} \\right)_{\\lambda=\\tilde{\\lambda}}\\,(\\lambda - \\lambda_{*}(x))^2 \\\\ & & \\mbox{ with }\\tilde{\\lambda}= (1-\\alpha)\\lambda_{*}(x) + \\alpha \\lambda,\\, 0\\leq \\alpha\\leq 1.\\end{eqnarray*}\nThe claim follows by noting that the first derivative is $0$ and the second one is between $-1$ and $-1\/20$. 
Finally, the derivative bound in item $2$ is proven in (\\ref{eq:lambdastarderivativehere}).\n\\end{proof}\n\n\n\\begin{lemma}\n\\label{lem:chernoff}\nLet $N=(N_1,\\ldots,N_n)$ be a vector of $n$ independent standard\nnormal random variables. Let $x\\ge \\sqrt{2\/\\pi}$. \nDefine $\\mu^*(x)$ as in Lemma \\ref{lem:lambdastar}. Then for all $n\\ge 1$,\n\\[\n \\frac{1}{n} \\log \\mathbb{P}\\left\\{ \\|N\\|_1 \\ge nx \\right\\} = \n -\\mu^*(x) - r_n(x)~,\n\\]\nwhere\n\\[0\\leq r_n(x)\\leq \\kappa\\,\\left(\\frac{x - \\sqrt{2\/\\pi}}{\\sqrt{n}} + \\frac{1}{n}\\right) \\]\nfor some universal $\\kappa>0$ that is independent of $x\\geq \\sqrt{2\/\\pi}$ and $n\\geq 1$.\\end{lemma}\n\n\\begin{proof}\nFor any $\\lambda>0$, the usual Cram\\'{e}r-Chernoff trick may be combined with Lemma \\ref{lem:lambdastar}~to obtain:\n\\[\\frac{1}{n}\\log\\mathbb{P}\\left\\{ \\|N\\|_1 \\ge nx \\right\\} \\leq \\inf_{\\lambda\\geq 0}(\\log\\,\\mathbb{E}\\,e^{\\lambda|N|} - \\lambda\\,x) = - \\mu^*(x).\\]\nIt remains to give a lower bound for this probability. In order to get a non-asymptotic bound, \nwe use the following lemma that appears in the fourth edition of Alon and\nSpencer's book \\cite[Theorem A.2.1]{AlSp16}.\n\n\\begin{lemma}\n\\label{lem:alsp}\nLet $u,\\lambda, \\epsilon >0$ such that $\\lambda >\\epsilon$. Let $X$ be\na random variable such that the moment generating function $\\mathbb{E}\ne^{cX}$ exists for $c\\le \\lambda +\\epsilon$. For any $a\\in \\mathbb{R}$, define\n$g_a(c)= e^{-ac} \\mathbb{E} e^{cX}$. Then \n\\[\n \\mathbb{P}\\{ X \\ge a-u\\} \\ge e^{-\\lambda u}\\left( g_a(\\lambda) -\n e^{-\\epsilon u} \\left( g_a(\\lambda+\\epsilon) +\n g_a(\\lambda-\\epsilon) \\right) \\right)~. \n\\]\n\\end{lemma}\n\nLemma \\ref{lem:alsp} will be applied to the random variable $X=\\|N\\|_1$ with $\\lambda=\\lambda_{*}(a\/n)$ and $a,u,\\varepsilon$ to be chosen. 
\n\nIn the notation of that Lemma, for each $\\lambda\\geq 0$,\n\\[\n g_a(\\lambda) = \\exp\\left(-n \\mu_a(\\lambda)\\right)\\mbox{ where }\\mu_a(\\lambda)=\\left(\\lambda\\,(a\/n) - \\ln\\,\\mathbb{E}\\,e^{\\lambda\\,|N|} \\right).\n\\]\nUsing Lemma \\ref{lem:lambdastar} to bound this expression, we obtain from (\\ref{eq:growtharoundlambdastar}) that\n\\[\n \\frac{g_a(\\lambda_{*}(a\/n)+ \\epsilon)}{g_a(\\lambda_{*}(a\/n))}\n \\le e^{n\\epsilon^2\/2}\n\\quad \\text{and} \\quad\n \\frac{g_a(\\lambda_{*}(a\/n)- \\epsilon)}{g_a(\\lambda_{*}(a\/n))}\n \\le e^{n\\epsilon^2\/2}~.\n\\]\nMoreover, $g_a(\\lambda_{*}(a\/n)) = e^{-n\\mu^*(a\/n)}$. So\n\n\\[\n \\mathbb{P}\\{ \\|N\\|_1 \\ge a-u\\} \\ge e^{-\\lambda_{*}(a\/n)\\,u}e^{-n\\mu^*(a\/n)}\\,\\left( 1 - 2\n e^{-\\epsilon u + \\frac{\\epsilon^2 n}{2}} \\right)~. \n\\]\nWe now choose $\\epsilon=\\sqrt{2\/n}$ and $u=\\epsilon n\/2+ 1\/\\epsilon=\\sqrt{2n}$ to obtain:\n\\[ \\mathbb{P}\\{ \\|N\\|_1 \\ge a-u\\} \\ge e^{-\\lambda_{*}(a\/n)\\,\\sqrt{2n}}e^{-n\\mu^*(a\/n)}\\,\\left(1 - \\frac{2}{e}\\right)~. \\]\nLetting $a=nx+\\sqrt{2n} = n\\,(x+\\epsilon)$, we have that\n\\[\\mathbb{P}\\{\\|N\\|_1 \\ge nx\\} \\geq e^{-\\lambda_{*}(x+\\epsilon)\\,\\sqrt{2n}}e^{-n\\,\\mu^*(x+\\epsilon)}\\,\\left(1 - \\frac{2}{e}\\right).\\]\nRecall from Lemma \\ref{lem:lambdastar} that $\\lambda_{*}(y)\\leq 20\\,(y-\\sqrt{2\/\\pi})$ and $1\\,(y-\\sqrt{2\/\\pi})\\leq (\\mu^*)'(y)\\leq 20\\,(y-\\sqrt{2\/\\pi})$ for all $y\\geq \\sqrt{2\/\\pi}$. Thus:\n\\[\\lambda_{*}(x+\\epsilon)\\,\\sqrt{2n}\\leq 20\\,(x + \\epsilon -\\sqrt{2\/\\pi})\\,\\sqrt{2n}\\]\nand\n\\[\\mu^*(x+\\epsilon)\\leq \\mu^*(x) + 20\\,\\epsilon\\,(x+\\epsilon-\\sqrt{2\/\\pi}).\\]\nRecalling $\\epsilon = \\sqrt{2\/n}$, we may plug these estimates back in the lower bound for our probability and obtain the lemma. 
\\end{proof}\n\n\\subsection{Estimates on the optimization problem}\\label{sub:estimatesforproof}\n\nIn this section we prove Lemma \\ref{lem:estimatesforproof}.\n\n\\begin{proof}[of Lemma \\ref{lem:estimatesforproof}] We will use Lemma \\ref{lem:lambdastar} several times in the proof. In particular, the properties of $\\mu^*$ and $\\lambda_{*}=(\\mu^*)'$ in that proof will be used several times. \n\nWe first argue that $x\\mapsto R(x)$ is a strictly concave function of $x\\geq \\sqrt{2\/\\pi}$. To see this, we use Lemma \\ref{lem:lambdastar} to obtain:\n\\[R''(x) = \\frac{1}{2} - \\lambda_{*}'(x).\\]\nBy the same Lemma, we know $1\\leq \\lambda_{*}'(x)\\leq 20$. So\n\\[ \\forall x\\geq \\sqrt{\\frac{2}{\\pi}}\\,:\\, -20\\leq R''(x) \\leq -\\frac{1}{2}.\\]\n\nWe now argue that $R(x)$ is maximized at some $x=v^*>\\sqrt{2\/\\pi}$. To see this, notice that $\\lambda_{*}(\\sqrt{2\/\\pi})=0$, so the derivative of $R$ at $x=\\sqrt{2\/\\pi}$ satisfies:\n\\[R'(\\sqrt{2\/\\pi}) = \\frac{1}{2}\\sqrt{2\/\\pi} - \\lambda_{*}(\\sqrt{2\/\\pi}) >0.\\]\nWe conclude that $R$ is increasing in an interval to the right of $\\sqrt{2\/\\pi}$. At the same time, the second derivative of $R$ in $x$ is at most $-1\/2$, so there exists an $s_+$ such that $R'(x)\\leq 0$ for $x\\geq s_+$. So the maximum of $R$ in $x$ must be achieved at a point $v^*\\in (\\sqrt{2\/\\pi},s_+]$. In particular, $v^*$ is a critical point of $R$: $R'(v^*)=0$.\n\nNow consider \\[\\alpha^* := R(v^*) = \\max_{x\\geq \\sqrt{2\/\\pi}}R(x).\\]\nBy Taylor expansion, if $x\\geq \\sqrt{2\/\\pi}$,\n\\[R(x) = \\alpha^* + R'(v^*)\\,(x-v^*) + \\frac{R''(v^* + \\alpha\\,(x-v^*))}{2}\\,(x-v^*)^2\\]\nwhere $0\\leq \\alpha\\leq 1$. 
The Lemma follows because $R'(v^*)=0$ and $R''\\in [-20,-1\/2]$.\\end{proof}\n\n\\subsubsection*{Significance Table}\n \n \n\\begin{table}[!htb]\n\t\\caption{\\footnotesize \n Hypothesis tests for the clustered blocks in \\textit{Youth} subjects along two different criteria. In the first assessment (left), edges in the weighted network for each layer are treated as an i.i.d. sample and compared to other edges using t-tests. In the second assessment, proportions of positive clinical diagnoses are tested across different imputed blocks. \n Let $\\textbf{X}^x$ represent the network of symptom response similarities for anxiety, $\\textbf{X}^y$ for behavior, and $\\textbf{X}^z$ for mood disorders.\n } \\label{table:Youth_ClusSig}\n\t\\begin{minipage}{.5\\linewidth}\n\t\t \t\t\\footnotesize\n\t\t\\begin{tabular}{clll}\n\t\t\t\\hline\n\t\t\t\\hline\n\t\t\t\\multicolumn{4}{c} {\\textsc{Edge Comparison for Youth (3 Gps)}}\\\\\n\t\t\t\\hline\n\t\t\t$B_q$ Comp. 
& $\\textbf{X}^x$ & $\\textbf{X}^y$ & $\\textbf{X}^z$ \\\\ \n\t\t\t\\hline\n\t\t\t$NB$- $S_1$& 0.00 (**)& 0.00 (**)& 0.00 (**)\\\\ \n\t\t\t$S_1$-$S_2$& 0.00 (**)& 0.00 (**)& 0.00 (**)\\\\ \n\t\t\t$NB$-$S_2$& 0.00 (**)& 0.00 (**)& 0.00 (**)\\\\ \n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{minipage}%\n \\begin{minipage}{.5\\linewidth}\n \t\t\\centering\n \t\t\\footnotesize\n \t\t\\begin{tabular}{lllll}\n \t\t\t\\hline\n \t\t\t\\hline\n \t\t\t\\multicolumn{5}{c} {\\textsc{Diagnosis Comparison for Youth (3 Gps)}}\\\\\n \t\t\t\\hline\n \t\t\t \\%Anx & \\%Beh & \\%Mood & \\%TD & \\%Psy \\\\ \n \t\t\t\\hline\n \t\t\t 0.00 (**)& 0.00 (**)& 0.00 (**)& 0.00 (**)& 0.00 (**)\\\\ \n \t\t\t 0.00 (**)& 0.00 (**)& 0.00 (**)& 0.09 & 0.00 (**) \\\\ \n \t\t\t 0.00 (**)& 0.00 (**)& 0.00 (**)& 0.00 (**)& 0.00 (**)\\\\ \n \t\t\t\\hline\n \t\t\\end{tabular}\n\t\\end{minipage} \n\t\\begin{minipage}{.5\\linewidth}\n\t\t\\centering\n\t\t\\footnotesize\n\\begin{tabular}{clll}\n\t\\hline\n\t\\multicolumn{4}{c} {\\textsc{Edge Comparison for EA (4 Gps)}}\\\\\n\t\\hline\n\t\t\t$B_q$ Comp. 
& $\\textbf{X}^x$ & $\\textbf{X}^y$ & $\\textbf{X}^z$ \\\\ \n\t\\hline\n\t$NB$-$S_1$ & 0.00 (**)& 0.00 (**)& 0.00 (**)\\\\ \n\t$NB$-$S_2$ & 0.00 (**)& 0.00 (**)& 0.00 (**)\\\\ \n\t$NB$-$S_3$ & 0.00 (**)& 0.00 (**)& 0.00 (**)\\\\ \n\t$S_1$-$S_2$ & 0.00 (**)& 0.00 (**)& 0.00 (**)\\\\ \n\t$S_1$-$S_3$ & 0.00 (**)& 0.00 (**)& 0.00 (**)\\\\ \n\t$S_2$-$S_3$ & 0.00 (**)& 0.00 (**)& 0.25 \\\\ \n\t\\hline\n\\end{tabular}\n\t\\end{minipage}%\n\t\\begin{minipage}{.5\\linewidth}\n\t\t\\centering\n\t\t\\footnotesize\n\t\\begin{tabular}{lllll}\n\t\\hline\n\t\\multicolumn{5}{c} {\\textsc{Diagnosis Comparison for EA (4 Gps)}}\\\\\n\t\\hline\n \\%Anx & \\%Beh & \\%Mood & \\%TD & \\%Psy \\\\ \n\t\\hline\n 0.85 & 6e-4 (**) & 0.01 & 0.57 & 0.00 (**)\\\\ \n 0.03 & 0.12 & 5e-4 (**) & 0.07 & 0.03 \\\\ \n 0.00 (**)& 0.00 (**)& 0.00 (**)& 0.00 (**)& 0.00 (**)\\\\ \n 9e-4 (**) & 0.59 & 0.02 & 0.05 & 0.11 \\\\ \n 0.00 (**)& 0.00 (**)& 0.00 (**)& 0.00 (**)& 0.60 \\\\ \n 0.59 & 2e-4 (**) & 0.84 & 0.07 & 0.22 \\\\ \n\t\\hline\n\\end{tabular}\n\t\\end{minipage} \n\\end{table}\n\n\n \n\n\n\\subsection{Network Generation Details} \\label{app:network_gen_detail}\n\n\\subsection{Simulations of Networks with Differing Parameters (Experiment 1)} \\label{appsec:simu_experiment1}\n\nWe first describe the simulation scheme of the first experiment. The means for each unique block for every network are randomly generated from Gaussian distributions centered around 0 and 2, respectively, for the first and second layers. \nAfter the parameters are generated, the observations are simulated from multinormal distributions governed by these parameters.\nEach network has $AN$ governing both a single block $NB$ and interstitial noise $IN$ that is centered around (-1,0). \nWe repeat this procedure for trivariate networks of $n=200$ nodes, wherein the Gaussian priors for each (signal) block have means of -2, 0, and 2 respectively for the first, second, and third layers. 
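To make the bivariate generation scheme just described concrete, here is a minimal Python sketch; the helper names (`simulate_block_params`, `sample_edge_pair`) and the unit prior and edge standard deviations are our own illustrative assumptions, not values taken from the text.

```python
import random

def simulate_block_params(Q, prior_means=(0.0, 2.0), prior_sd=1.0, seed=0):
    """Draw one mean per block per layer from Gaussian priors
    (one prior centre per layer, as in the bivariate experiment).
    prior_sd is an assumed value for illustration."""
    rng = random.Random(seed)
    return [[rng.gauss(m, prior_sd) for _ in range(Q)] for m in prior_means]

def sample_edge_pair(mu_x, mu_y, sd_x, sd_y, rho, rng):
    """Sample one correlated bivariate-normal edge weight (x, y)
    using the standard Cholesky construction for a 2x2 covariance."""
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    x = mu_x + sd_x * z1
    y = mu_y + sd_y * (rho * z1 + (1.0 - rho ** 2) ** 0.5 * z2)
    return x, y
```

Edges within a given block would then be drawn by repeated calls to `sample_edge_pair` with that block's parameters.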
\nIn order to ensure the separability of blocks during simulations, we only select the networks whose blocks' minimum Bhattacharyya distances are above a certain threshold. We calculate the minimum Bhattacharyya distances between blocks across 500 simulated networks, and then select the networks with the largest 10\\% of the minimum Bhattacharyya distances, retaining only the networks whose blocks are `far enough away' from each other; we run 50 instances of the \\texttt{SBANM} algorithm for both the bivariate ($n=500$) and the trivariate case ($n=200$).\n \n \n\\noindent \\textbf{Results: }Fifty runs of the algorithm were performed for both the bivariate and trivariate networks with differing parameters. 500 networks were generated as described in the previous section, then networks with the highest 10\\% of the minimum Bhattacharyya distances between clusters' parameters are retained. \n\nThough this experiment is primarily focused on \\textit{membership recovery}, parameter estimation remains a byproduct.\nAcross many simulations with a variety of parameters, there does not seem to be much systemic bias in the estimates, as empirical means of differences between estimated and true parameters are centered around 0.\nMedian percentage differences, across all estimated parameters, between the estimates and true values are between 20 and 25\\% for bivariate, and 10-20\\% for trivariate networks. Histograms for the mean and variance parameters (each distinct parameter is treated like an observation) show essentially matching distributions between estimates and ground truth parameters for means (Figure \\ref{fig:simu_histos}).\n\nThere is a slight discrepancy between the distributions for variance parameters ($\\sigma^2_{q,k}$ for $k = 1,2,3$) among trivariate networks. 
This slight bias may be related again to the curse of dimensionality and, while it does not seem to pose too severe a problem in the clustering results, may be investigated in future endeavors.\n\nPercentage differences between the estimated and ground-truth parameters also show moderately accurate recovery in both bivariate and trivariate networks. The lowest 25\\% quartiles for all parameters are between 0 and 3 percent and show that these estimates are very close to the ground truths. Taken together with the relatively higher mean and median differences, the low 1st quartiles show that accuracy for parameter runs seems to occur along a binary: either estimates are very close to their targets, \nor they are fairly far off. Some of the high percentage differences may arise from small ground-truth values, which appear in the denominator when percentage differences are calculated. Others may arise from the mismatches in clustering memberships. However, this limitation mostly arises in the trivariate case, as there is a near-perfect recovery rate for the bivariate simulations. 
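For reference, the Bhattacharyya distance used above to screen simulated networks has a closed form for univariate Gaussian block distributions; the following sketch (function names are ours) computes it, along with the minimum pairwise separation used as the filtering criterion.

```python
import math
from itertools import combinations

def bhattacharyya_gauss(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussians:
    (mu1-mu2)^2 / (8*avg_var) + 0.5*ln(avg_var / sqrt(var1*var2)),
    with avg_var = (var1 + var2) / 2."""
    avg_var = 0.5 * (var1 + var2)
    return ((mu1 - mu2) ** 2 / (8.0 * avg_var)
            + 0.5 * math.log(avg_var / math.sqrt(var1 * var2)))

def min_pairwise_separation(means, variances):
    """Minimum Bhattacharyya distance over all pairs of blocks;
    a network would be retained only if this exceeds a threshold."""
    return min(bhattacharyya_gauss(means[i], variances[i], means[j], variances[j])
               for i, j in combinations(range(len(means)), 2))
```

Identical distributions give distance zero, so well-separated blocks correspond to a large minimum pairwise value.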
\n\n\\begin{figure} [htbp]\n\t\\centering \n\t\\begin{tabular}{ccc}\n\t\t\\hline\n\t\t\\multicolumn{3}{c}{\\textsc{ Histograms of True and Estimated Parameters}} \\\\ \n\t\t\\hline \n\t\t& \\textbf{Bivariate} & \\textbf{Trivariate} \\\\\n\t\t\\hline\n\t\t\\\\\n\t\t{\\rotatebox[origin=l]{90}{ \t\\quad \\quad \t\\quad \\quad \t \\footnotesize Estimated\\slash True \t$\\mu_{k,q}$}} \n\t\t& \n\t\t\\includegraphics[width=0.4\\linewidth\t]{simu_histosBiv_mu}\n\t\t& \t\t\\includegraphics[width=0.4\\linewidth]{simu_histosTriv_mu} \\\\\n\t\t{\\rotatebox[origin=l]{90}{ \t\t\\quad \\quad \t\\quad \\quad \\footnotesize Estimated \\slash True \t$\\sigma^2_{k,q}$}} & \t\\includegraphics[width=0.4\\linewidth]{simu_histosBiv_sigma}\n\t\t& \t\t\\includegraphics[width=0.4\\linewidth]{simu_histosTriv_sigma} \n\t\t\\\\\n\t\n\t\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{\\footnotesize Histograms of ground truth (red) and estimate (blue) parameter values for the 2-layer and 3-layer networks compared to the estimated parameters from the algorithm. Parameters across layers are all plotted together.\n\t\tDashed lines demarcate the empirical means of these estimated and ground truth parameters. 
\n\t\tFor ground truths (red), these empirical means are \n\t\t.75 for $\\mu_{k,q}$ (bivariate, top left), \n\t\t1.98 for $\\mu_{k,q}$ (trivariate, top right),\n\t\t4.01 for $\\sigma^2_{k,q}$ (bivariate, bottom left), \n\t\t3.10 for $\\sigma^2_{k,q}$ (trivariate, bottom right).\n\t\tFor estimates of parameters, they are\n\t\t.58 for $\\mu_{k,q}$ (bivariate, top left), \n\t\t1.84 for $\\mu_{k,q}$ (trivariate, top right),\n\t\t5.51 for $\\sigma^2_{k,q}$ (bivariate, bottom left), \n\t\t5.58 for $\\sigma^2_{k,q}$ (trivariate, bottom right).\n\t}\n\t\\label{fig:simu_histos}\n\\end{figure} \n\n\\subsection{Simulations of Networks with the Same Parameters (Experiment 2)} \\label{appsec:simu_experiment2}\n\nThe first experiment was conducted primarily to demonstrate membership recovery under a variety of different parameters and block sizes. \nThe purpose of the second experiment, which runs the algorithm under a set of fixed parameters, is to show that the method recovers parameters effectively.\nThe fixed parameters were generated through simulation with fixed Gaussian distributions with prior means 10, 15, and 20 and prior variance parameters of 5. The first entries of each layer correspond to the noise block with fixed means at 5, 10 and 15. The means are:\n$\\boldsymbol{\\mu}_{X,q} = (5, 11.98, 11.55, 10.39)$,\n$\\boldsymbol{\\mu}_{Y,q} = (10 ,16.86, 16.49, 14.81)$,\n$\\boldsymbol{\\mu}_{Z,q} = (15 ,16.69 ,21.25 ,21.08)$.\nThe variances are \n$\\boldsymbol{\\Sigma}_{X,q}$ = ( 7.88, 13.11, 0.31, 1.16), \n$\\boldsymbol{\\Sigma}_{Y,q}$= ( 7.32, 7.67, 4.89, 1.03), \n$\\boldsymbol{\\Sigma}_{Z,q}$ =(6.69, 4.15, 0.06, 4.36).\nThe correlations are $\\rho_q$ = (0.00, 0.40, 0.15, 0.34), \nand the true group sizes are 76 nodes for the first block ($NB$), 97 for the second, 93 for the third, and 34 for the fourth.\n\n\n\\noindent \n\\textbf{Results}: We generated 100 networks following these exact specifications and ran \\texttt{SBANM} on all of them. 
In Figure \\ref{fig:simu_boxplots} in the main text, each boxplot comprises a set of 100 estimates for each parameter value. The first row shows those for the first layer (written as $\\mathbf{X}$), the second $\\mathbf{Y}$, the third $\\mathbf{Z}$, and the fourth for correlations between the three layers. The red band shows the true parameter values as listed above.\n\n\n\n\\subsection{ICL Assessment (Experiment 3)} \\label{app:simu_ICL}\n\n \n\\textit{Model selection} in the SBM clustering context usually refers to selection of the number of a priori blocks before VEM estimation as it is the only `free' parameter in the specification step of the algorithm. \nExisting approaches \\cite{Daudin2008,mariadassou2010,matias2016} consider the \\textit{integrated complete likelihood} (ICL) for assessing block model clustering performance. Matias et al. write the ICL for multilayer graphs in the following way (adapted to match the notation of this study):\n\\begin{equation} \\label{eq:ICL}\n\tICL(Q) = \\log f( \\mathbf{X} , \\mathbf{Z} ) - \\frac 1 2 Q(Q-1) \\log ( n (K-1)) - pen(n,K, \\boldsymbol{\\Theta} )\n\\end{equation}\nTo translate the terminology, $\\boldsymbol{\\Theta} $ corresponds to the total set of transition parameters in the SBM, where $\\boldsymbol{\\Theta} : =\\boldsymbol{\\Theta}_\\text{Signal} \\bigcup \\boldsymbol{\\Theta}_\\text{Noise} $ \\cite{matias2016}. The penalty parameter $pen( \\cdot )$ is chosen depending on the distributions of the networks; the `Gaussian homoscedastic' case in Matias et al. is derived to be \n\\begin{align*}\n\tpen(n,K, \\boldsymbol{\\Theta}) = Q\\cdot \\log \\bigg( \\frac{n(n-1) K }{2} \\bigg) \n\t+ \\frac{Q(Q-1) }{2} K \\cdot \\log \\bigg( \\frac{n(n-1) }{2} \\bigg) .\n\\end{align*} \nThough the authors made the assumption that the variances are constant for all blocks, we assume that the models are similar enough to \\texttt{SBANM} such that the evaluation criterion is applicable to our case. 
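As a sketch, the penalty and the resulting ICL score can be computed directly from the displayed formulas; the function names are ours, and `log_complete_lik` stands in for the fitted complete-data log-likelihood $\log f(\mathbf{X}, \mathbf{Z})$, which would come from the estimation procedure.

```python
import math

def icl_penalty(n, K, Q):
    """Penalty pen(n, K, .) from the `Gaussian homoscedastic' case
    of Matias et al., as displayed above; m = n(n-1)/2 node pairs."""
    m = n * (n - 1) / 2.0
    return Q * math.log(m * K) + (Q * (Q - 1) / 2.0) * K * math.log(m)

def icl(log_complete_lik, n, K, Q):
    """ICL(Q) = log f(X, Z) - (1/2) Q(Q-1) log(n(K-1)) - pen(n, K, .)."""
    return (log_complete_lik
            - 0.5 * Q * (Q - 1) * math.log(n * (K - 1))
            - icl_penalty(n, K, Q))
```

At equal fitted likelihood, a larger hypothesized $Q$ is penalized more heavily, which is what makes the criterion usable for selecting the number of blocks.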
\nFor this portion of the simulation experiment we fix $n$ at 200 and the ground-truth $Q$ at 5. However, we apply the method for a range of hypothesized block numbers $\\widehat{Q}$ (the \\textit{estimate} for the number of blocks) from 2 to 7. Simulation results show that the ICL peaks at $\\widehat{Q}=5$, the correct ground-truth value (Figure \\ref{fig:elbossim200}).\n\n\\noindent\n\\textbf{Results:} We used a single instance of a trivariate network with 200 nodes from the simulations generated in the first experiment (Section \\ref{appsec:simu_experiment1}). ICLs were calculated for six runs of the algorithm, each presupposing a different selection of $Q$ from 2 to 7. The ground-truth value of $Q$ is 5, and Figure \\ref{fig:elbossim200} shows that the ground-truth $Q$ attains the highest ICL.\n\\begin{figure} [htbp]\n\t\\centering\n\t\\includegraphics[width=0.52\\linewidth, trim= {0cm .8cm 0cm 2.3cm} , clip]\n\t{fig2_ICL_sim200}\n\t\\caption{\\footnotesize{} \\textit{ICLs for the simulation study for a three-layer network of 200 nodes with a ground-truth $Q$ of 5, which attains the maximum ICL found by the estimation method.}}\n\t\\label{fig:elbossim200}\n\\end{figure}\n\n\\subsection{Large Network Simulations}\nFor large-network simulations, single instances of networks with $n=1000$ and 2000 are generated for $Q=4$ and 5. Results yielded exact recovery of memberships and parameter errors within 5\\%.\n\\section{Introduction}\n\n\n\nRelational data have become more common with the advent of sophisticated data gathering mechanisms and more nuanced conceptions of dependency.\nStatistical network analysis has become a major field of research and is a useful, efficient mode of pattern discovery. Networks representing social interactions, genes, and ecological webs often model members or agents as nodes (vertices) and their interactions as edges.\nOftentimes, relational information manifests in different modes among the same members. 
For example, nodes representing users in a social network such as Twitter can have edges that represent `likes', `follows', and `mentions'.\nIn biological networks, modes of interaction such as gene co-expressions or similarities between biotic assemblages may arise within the same study sample.\nSuch multimodal relationships are especially pertinent in psychiatric data, where distinct diagnoses are not clearly demarcated but rely on constellations of interacting psychopathologies. In this study, we analyze these multimodal psychopathological symptom data using multilayer network analysis.\n\n\nWhile single-graph approaches were developed decades ago \\cite{girvan_community_2002,configbender}, the literature concerning weighted, multimodal networks is a more recently emerging field of interest \\cite{Menichetti_2014WeightedMultiplex,Holme_2015}.\nThe field of \\textit{community detection} has, in conjunction, also grown considerably in recent times \\cite{newman2018networks,Fortunato_2016,handock_modelclust_2007,townshend_murphy_2013}.\nCommunity detection is an approach used to divide a set of nodes in a network into clusters whose members are strongly connected. Many techniques have been proposed for \\emph{unweighted} (binary) graphs, including modularity optimization \\cite{girvan_community_2002, clauset_community_2004}, stochastic block models \\cite{holland_stochastic_1983, nowicki_snijders, peixoto_nonpar, yan_sbm}, and extraction \\cite{zhao_consistency_2012,Lancichinetti_plos_2011}. \n\n\nThe stochastic block model (SBM) is a foundational theoretical model for random graphs \\cite{karrer2011stochastic,peixoto_nonpar,Hoff2002,nowicki_snijders} and has also found practical use in community detection \\cite{mariadassou2010,newman2003structure}. The model lays out a concise formulation for dependency structures within and across communities in networks, but does not typically model \\textit{global} characteristics. 
\nThough some methods discern but do not statistically model \\textit{background} (unclustered) nodes \\cite{palowitch_continuous,wilson_essc,dewaskar2020finding}, few existing models explicitly account for \\textit{community-wise} noise even though it is useful in many applications. We develop a model for multilayer weighted graphs that explicitly accounts for (1) global noise present between differing communities, and (2) dependency structure across layers within communities. We refer to this model and its associated estimation algorithm as the (multivariate Gaussian) \\textit{Stochastic Block (with) Ambient Noise Model} (\\texttt{SBANM}) for the rest of this manuscript.\n\nOur main contribution is a novel method that jointly finds clusters in a multilayer weighted network and also classifies \\textit{what types} of clusters these are, namely (local) signal or (global) noise. We propose a method that discovers, categorizes, and estimates the associated parameters of these communities. We focus on developing a model as well as its method of inference, which is useful as many existing multilayer SBM analyses assume known parameters \\cite{wang2019joint,mayya2019mutual}. In the primary case study (Section \\ref{sec:results_PNC}), we use \\texttt{SBANM} to find clusters of diagnostic subgroups of patients judged by similarity measures of their psychopathology symptoms.\n\n\n\\subsection{Motivation } \\label{sec:motivation} \nWe posit an example to motivate how the proposed model represents patterns in sociality. Suppose there is a social network where nodes represent members and weights represent social interactions. Members naturally interact in cliques where rates of communication are roughly similar (i.e. assortative). Across differing communities, however, rates are assumed to be at a global baseline level. 
Moreover, interactions among members who are \\textit{asocial} and do not belong to any community with a unique signal are similarly modeled as ``noise\". Who, in a social clique or \\textit{scene}, are still friends with each other after 10 years? Alternatively, how might the notion of ``friendship\" be broken down -- in what ways may work relationships (e.g. co-authorships) correlate with social relationships? \nA schematic figure for this model compared to the SBM is presented in Figure \\ref{fig:schematic}.\n\n\\begin{figure} [htbp]\n\t\\centering\n\t\\begin{tabular}{cc}\n\t\t\\hline\n\t\t\\textbf{\tSBM }&\\textbf{\\texttt{SBANM} }\\\\\n\t\t\\hline\n\t\t\\includegraphics[width=0.35\\linewidth, trim= {3cm 19.5cm 12.5cm 2cm }, clip]{fig1_1_orig} \t& \n\t\t\\includegraphics[width=0.5\\linewidth, trim= {2.5cm 16cm 8.5cm 2cm }, clip]{ILLUSTRATIVE_w_NoiseMultiD} \n\t\\end{tabular}\n\t\\caption{ \\footnotesize \n\t\t\\textit{Illustrative example of the types of relationships between blocks for the canonical SBM (left) and \\texttt{SBANM} (right). Dashed lines represent the inter-block connectivity among nodes. Large circles represent distinct communities. Solid thick lines represent the inter-community rates of interaction (transition probabilities if binary). In the canonical case (left), the inter-block transitions are all distinct, as denoted by their colors. In the multilayer \\texttt{SBANM} case (right), the inter-block parameters are all the same (represented by gray); $AN$ governs the connectivities between blocks and the \\textit{intra-}block connectivity within the block \\textit{NB}. The two layers $\\mathcal{G}_1$ and $\\mathcal{G}_2$ share blocks $B_1,B_2,B_3$ and $NB$, with correlations $\\rho_1,\\rho_2,\\rho_3$ across $\\mathcal{G}_1$ and $\\mathcal{G}_2$.}}\n\t\\label{fig:schematic}\n\\end{figure}\n\nThe logic of this model is natural for clinical, psychiatric, and experimental settings. Psychiatric illnesses have multiple causes and symptoms. 
There are no laboratory tests for these conditions. Current diagnostic processes only consider the presence of discrete symptoms and can identify patients who need treatment, but they do not help identify who is at risk for the illness in question. One such illness is schizophrenia, a chronic psychotic disorder that affects millions worldwide and imposes a substantial societal burden. Identifying individuals who are at risk for developing this psychotic condition is a clinically significant issue. \n\nIn most existing research on networks where nodes represent individuals, edges are known quantities between them. This assumption cannot be carried over to psychiatric network models that aim to identify communities of individuals with the same diagnosis, as psychiatric disorders manifest with significant heterogeneity. Connections between individuals can be estimated from biological and\/or psychosocial data, which can then be used for early identification \\cite{kahn_schizophrenia_2015,clark_dx95,kendell_diagnoses2003}. With an increase in availability of multimodal data across populations of clinical subjects, multilayer community detection is a natural tool for the classification of psychiatric illnesses with multifaceted characteristics. \n\nWhile distinguishing the psychosis spectrum will be the primary focus of the proposed methodology, it is useful to find latent structure in other types of networks. We also demonstrate the method on (1) US congressional voting data and (2) human mobility (bikeshare) data in Appendices \\ref{app:voting} and \\ref{app:results_bikeshare}.\n\n\n\n\\subsection{Background and Contributions} \\label{sec:contributions_context} \n\nThe canonical example of a globally noisy network is the Erd\\H{o}s--R\\'enyi model, where every edge is governed by a single probability. 
The affiliation model is a weighted extension \\cite{Allman_2011} used to describe a ``noisy homogeneous network\"; a single \\textit{global} parameter $\\theta_\\text{in}$ dictates the connectivity between all nodes in \\textit{any} community, while another $\\theta_\\text{out}$ controls the connectivity for all nodes in differing communities. A similar model was posited by Arroyo et al. \\cite{arroyo2020inference} where $\\theta_\\text{in} > \\theta_\\text{out}$ as a baseline for network classification. The weighted SBM and the affiliation model are both \\textit{mixture models for random graphs} described by Allman et al. \\cite{Allman_2011,ambroise2010new}. This class of network models accounts for assortativity (the tendency for nodes who connect to each other at similar intensities to cluster) and sparsity (when there are far fewer edges than nodes).\n\n\nInitially proposed to describe binary networks \\cite{Hoff2002,nowicki_snijders}, SBMs have been extended to weighted \\cite{mariadassou2010} and multilayer settings \\cite{stanley2015,paul_multilayer_2015,arroyo2020inference}, and in particular time series \\cite{matias2016}, where clusters across all time points have the same within-block parameters but varying between-block interactions. These multilayer SBMs typically do not account for correlations between layers. Such correlations have only begun to be explored in the context of multilayer SBMs; some recent studies of binary networks have accounted for correlations across layers \\cite{mayya2019mutual} and noise \\cite{mathews2019gaussian}, but typically assume that parameters are already known.\n\n\nThough much work has been done on estimating SBMs, there has not been much study focused on estimating the \\textit{noise} inherent within SBMs, much less for multilayer weighted graphs. 
Extraction-based methods identify background nodes to signify lack of community membership \\cite{palowitch_continuous,wilson_essc}, but these methods do not attribute any parametric descriptions to these nodes. Some recent work discusses noise in network models, oftentimes as global (i.e. entire-network) uncertainty that is uniformly added to all edges \\cite{blevins2021variability,Newman_2018,mathews2019gaussian,Young_2020}. However, few have studied \\textit{structural noise} that exists between differing communities or that serves as some notion of a \\textit{residual} term (as in regression analysis). \n\n\nWe attempt to address these two gaps in this study. We describe the model briefly as follows. In a multilayer graph with $Q$ ground-truth communities (indexed by $q $), as well as a single block that is considered \\textit{noise} (labeled $NB$ for \\textit{noise block}), we postulate a model that is \\textit{locally unique} with parameter $ \\boldsymbol{\\theta}_q $ for all edges within a block indexed at $q$. A global noise parameter ${\\boldsymbol{\\theta}} _\\text{Noise} $ describes all interactions between differing blocks as well as within $NB$. The model is written schematically as follows, and in more detail in Section \\ref{sec:model_inference}:\n\\begin{align} \\label{eq:combined_model}\n\t\\boldsymbol{\\theta}_{ql} & = \\begin{cases}\n\t\t{\\boldsymbol{\\theta}}_{q} & \\text{if } q = l \\text{ and } q \\text{ is not } NB\n\t\t\\\\\n\t\t{\\boldsymbol{\\theta}} _\\text{Noise} & \\text{if } q \\ne l \\text{ or } q \\text{ is } NB \n\t\\end{cases}.\n\\end{align} \n\nThe model combines qualities of the affiliation model \\cite{Allman_2011} with the weighted SBM and extends to multiple layers. Because both the affiliation model and the multilayer SBM have been proven identifiable in prior work \\cite{Allman_2011,matias2016}, we posit that \\texttt{SBANM} is also identifiable. 
A brief argument is given in Appendix \\ref{app:identifiability}, but deeper investigation remains future work. One major advantage of a global noise term is its parsimony compared to SBMs. Existing clustering models on multilayer networks, even when accounting for communities that persist across layers \\cite{persistent_liu}, still tend toward overparameterization. \n\n\nA reference or \\textit{null} group is often used in scientific and clinical settings, an example being the cerebellum as a reference region-of-interest (ROI) in the analysis of brain networks. The commonality in \\textit{out-of-clique} and \\textit{baseline} modes of communication in the example in Section \\ref{sec:motivation} provides an interpretable justification for the empirical realism of this model.\n\nIn the rest of the paper, we describe the terminology alongside the \\textit{Philadelphia Neurodevelopmental Cohort} data for the main case study in Section \\ref{sec:data}. We then describe the model and its method of (variational) inference in Section \\ref{sec:model_inference}, and its specific mechanics in Section \\ref{sec:est_algorithm}. Model performance is assessed and compared with other methods in Section 5. In Section \\ref{sec:results_PNC}, we demonstrate the focal case study of psychopathology symptom data.\n\n\n\n\n\\section{Data, Notation, and Terminology} \\label{sec:data}\n\\input{pt2_data_input}\n\n\\section{Model and Inference} \\label{sec:model_inference}\n\n\\texttt{SBANM} supposes that networks across $K$ layers have the same block structure, while transition parameters between blocks are fixed at the same global, \\textit{ambient}, level. This model allows detection of common latent characteristics across layers, as well as differential sub-characteristics within blocks (represented by multivariate normal distributions). This model also presumes block structures whose edges are correlated across layers. 
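As a concrete illustration of the cross-layer covariance used in the definitions below, here is a minimal sketch of assembling a block covariance from per-layer standard deviations and a single cross-layer correlation; the function name is our own illustrative choice, not part of the model specification.

```python
import numpy as np

def block_covariance(sigmas, rho):
    """K x K covariance for one block: sigma_k^2 on the diagonal and a single
    cross-layer correlation rho off the diagonal, Sigma[k, l] = rho*sigma_k*sigma_l."""
    s = np.asarray(sigmas, dtype=float)
    Sigma = rho * np.outer(s, s)        # rho * sigma_k * sigma_l everywhere
    np.fill_diagonal(Sigma, s ** 2)     # overwrite the diagonal with variances
    return Sigma
```

For example, `np.random.default_rng(0).multivariate_normal(mu, block_covariance([2.0, 1.0, 3.0], 0.4))` draws one $K$-dimensional edge-weight vector; rewriting the matrix as $\\rho \\sigma\\sigma^T + (1-\\rho)\\,\\mathrm{diag}(\\sigma^2)$ makes its diagonal-plus-rank-one structure explicit.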
\n\n\n\n\n\n\\begin{definition} \\label{def:corr_blocks} \n\t{(Correlated Signal Blocks)} \n\t\\normalfont\t\t\n\tFor a $K-$layer (Gaussian) weighted multigraph $\\mathbf{X}=\\{\\mathbf{X}^1,...,\\mathbf{X}^K\\}$ where each layer $k$ represents a graph with $n$ registered nodes, let $B_q \\subset [n]$ represent a community housing a partition of nodes $\\{i\\}_{i \\in B_q}$. The $K$-dimensional vector of edge weights between any two nodes in block $B_q$ follows a multivariate normal distribution with $K$-dimensional mean vector $ \\boldsymbol{\\mu}_q $ and $K \\times K $ covariance matrix $\\boldsymbol \\Sigma_q$:\n\t\\begin{align*} \n\t\t\\boldsymbol{\\Sigma}_q = \n\t\t\\begin{pmatrix}\n\t\t\t\\sigma_{q,1}^2 & \\rho_q \\sigma_{q,1} \\sigma_{q,2} & \\cdots & \\rho_q \\sigma_{q,1} \\sigma_{q,K} \\\\\n\t\t\t\\rho_q \\sigma_{q,2} \\sigma_{q,1} & \\sigma_{q,2}^2 & \\cdots & \\rho_q \\sigma_{q,2} \\sigma_{q,K} \\\\\n\t\t\t\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\t\t\t\\rho_q \\sigma_{q,K} \\sigma_{q,1} & \\rho_q \\sigma_{q,K} \\sigma_{q,2} & \\cdots & \\sigma^2_{q, K} \t\t\n\t\t\\end{pmatrix}.\n\t\\end{align*} \t\n\tIf nodes $i,j$ are in the same block, the distribution of their edges follows a multivariate normal distribution\n\t\\begin{align*}\n\t\t\\mathbf{X}_{ij} | \\{ i \\in B_q, j \\in B_q \\} \\sim N_K (\\boldsymbol{\\mu}_q , \\boldsymbol{\\Sigma}_q).\n\t\\end{align*}\n\\end{definition} \n\nNote that there is a single correlation parameter $\\rho_q$ across all layers for a given block $B_q$, implying that $\\boldsymbol{\\Sigma}_q$ is a diagonal plus rank-one matrix. This is a deliberate choice to induce parsimony and interpretability among block relationships across all layers. We assume that the \\textit{noise block} has the same characteristics as the \\textit{interstitial noise}; both are drawn from the same distribution $AN$ (\\textit{ambient noise}). 
\t$AN$ is a global noise distribution that governs both $IN$ and $NB$:\n\\begin{align*}\n\t\\mathbf{X}_{IN}\n\t& \\stackrel{d}{=} \\mathbf{X}_{NB}\n\t\\sim N_K (\\boldsymbol{\\mu }_{AN}, \\boldsymbol{\\Sigma}_{AN}).\n\\end{align*}\t\nBecause $NB$ and $IN$ both represent ``baseline\" levels of connectivity for the network, we assume that they both share the characteristics of $AN$. Members of each block $B_q$ interact with other members in the same block at rates that follow a multivariate normal with mean $\\boldsymbol{\\mu}_{q}$ and covariance $\\boldsymbol{\\Sigma}_{q}$, but interact with members in differing groups $l; l \\ne q$ at baseline rates with mean $\\boldsymbol{\\mu}_{AN}$ and covariance $\\boldsymbol{\\Sigma}_{AN}$, i.e. background interactions. \n\n\\begin{definition} \\label{def:ambient_noise}\n\t{(Ambient Noise)} \\normalfont Edges in $IN$ between differing blocks and in $NB$ are characterized by \n\t$ (\\boldsymbol{\\mu}_{AN} , \\boldsymbol{\\Sigma}_{AN} ) $: \n\t$ \\boldsymbol{\\Sigma}_{AN} $ is a $K \\times K$ diagonal matrix \n\twith diagonal $ (\\sigma^{2}_{AN, 1}, ..., \\sigma^2_{AN, K}) $\n\tand off-diagonal entries of 0:\n\t\\begin{align*}\n\t\t\\mathbf{X}_{ij} | \\{ i \\in B_q, j \\in B_l \\} \\sim N_K (\\boldsymbol{\\mu}_{AN} , \\boldsymbol{\\Sigma}_{AN}).\n\t\\end{align*}\n\n\\end{definition}\nFor a community $B_q \\subset [n]$ representing the nodes that are contained in block $q$ in a weighted multilayer network $\\mathbf{X}$, we let $\\mathbf{X}_q$ represent the set of all edges contained in block $B_q$ across all $K$ layers as defined in Equation \\eqref{eq:Xb_q}. Conversely, the set of edges across differing $B_q,B_l$ (i.e. 
interstitial noise) is defined as in Equation \\eqref{eq:Xb_IN}.\n\n\n\\begin{definition} \n\t(Stochastic Block (with) Ambient Noise Model (\\texttt{SBANM})) \\normalfont\n\t\t A $K-$layer (Gaussian) weighted multigraph $\\mathbf{X}=\\{\\mathbf{X}^1,...,\\mathbf{X}^K\\}$ with $n$ nodes and $Q$ communities (blocks) indexed by $q$, with a single block considered \\textit{noise} and labeled $NB$ (indexed by $q_{NB}$), and with disjoint blocks $\\{B_1, B_2, ...,NB,...,B_Q\\}_{q: q \\le Q}$ such that $\\bigcup_{q \\le Q} B_q \\bigcup NB = [n] $,\n\t\tis a \\texttt{SBANM} if the following conditions are satisfied.\t\t \n\t\\begin{enumerate}\n\t\t\\item Edges in the same block $B_q$ adhere to Definition \\ref{def:corr_blocks} (Correlated Signal Blocks), where each edge $\\mathbf{X}_{ij}$ follows the conditional distribution $N_K(\\boldsymbol{\\mu}_q , \\boldsymbol{\\Sigma}_q)$ given block memberships,\n\t\t\\item Ambient noise $AN$ with $\t N_K (\\boldsymbol{\\mu }_{AN}, \\boldsymbol{\\Sigma}_{AN})$ governs both $IN$ and $NB$: \n\t\t\\begin{enumerate}\n\t\t\t\\item Edges between $i \\in B_q $ and $j \\in B_l$ $ (l \\ne q)$ follow a $N_K(\\boldsymbol{\\mu}_{AN} , \\boldsymbol{\\Sigma}_{AN} )$ distribution. \n\t\t\t\\item \\textbf{One} block $NB$ contains members whose edges are generated from a $K$-dimensional multivariate normal distribution $ N_K (\\boldsymbol{ \\mu }_{AN}, \\boldsymbol{ \\Sigma }_{AN} ). $ \n\t\t\\end{enumerate} \t\n\t\\end{enumerate}\n\\end{definition}\n\n\\subsection {Connection to Existing Models}\n\nThe weighted SBM and the affiliation model are both cases of the \\textit{mixture models for random graphs} described by Allman et al. \\cite{Allman_2011,ambroise2010new}. 
This general class of network models accounts for assortativity (the tendency for nodes who connect to each other at similar intensities to cluster together) and sparsity (when there are far fewer edges than nodes).\nIn addition to the class of VEM-based inference methods \\cite{mariadassou2010,matias2016,paul_randomfx_2018} that are extensively referenced in Section \\ref{sec:contributions_context}, we also note multigraph SBM inference methods based on spectral decomposition \\cite{wang2019joint,arroyo2020inference,mayya2019mutual}. These methods are typically applied to binary networks and use different sets of methodology or assumptions, such as known parameters \\cite{mayya2019mutual}, but are similar enough in motivation to warrant comparison. \nSome of these existing methods model edge connectivity of a (potentially multilayer) network as a function of membership vectors $\\mathbf{Z}_i$ (for node $i$), connectivity matrix $\\mathcal{R}_k$ at layer $k$, and the graph Laplacian \\cite{mayya2019mutual,mathews2019gaussian,reeves2019geometry,arroyo2020inference,wang2019joint}. Typically, the connectivity rate corresponds to Bernoulli probabilities (for binary networks), but some of these approaches allow for (or posit for future work) extensions to the weighted cases \\cite{wang2019joint,mercado2019spectral,arroyo2020inference}.\nSome work has focused on studying the correlations or linear combinations of the eigenvectors of $\\mathcal{R}_k$, but in most of these cases \\textit{conditional independence given labels} between layers is assumed for correlated networks \\cite{mayya2019mutual,arroyo2020inference}.\nA recent trend in these multiplex methods has focused on devising an optimal aggregation scheme to combine multiple layers and then using single-graph methods on the resultant static network \\cite{levin2019central}. We consider several special cases for \\texttt{SBANM} where it reduces to existing models. 
\n\\begin{enumerate} [noitemsep]\n\t\\item If all $\\rho_q$ were zero (i.e. diagonal $\\boldsymbol{ \\Sigma}_q$; no correlations amongst communities) and all the within-community signals were the same, then \\texttt{SBANM} is a multivariate extension of the models posited by Allman et al. \\cite{Allman_2011} or Arroyo et al. \\cite{Arroyo2019connectomics}.\n\t\\item If $K=1$, \\texttt{SBANM} is a special case of the weighted Gaussian SBM as proposed by Mariadassou et al., where all inter-block connectivities are fixed at a single rate \\cite{mariadassou2010}. \n\t\\item \n\tWang et al. \\cite{wang2019joint} constrain the connectivity matrix to a diagonal, which would be analogous to \\texttt{SBANM} if the ambient noise parameter is fixed at zero: $\\bm{\\theta}_{AN}:=0$.\n\t\\item\n\tArroyo et al. \\cite{arroyo2020inference} describe the multilayer SBM \\cite{holland_stochastic_1983} for binary graphs, which ``could be easily extended to the weighted cases\". The model assumes \\textit{independent} block parameters $\\mathcal{R}_k$ across every layer.\n\tIf there were parameters $ \\bm{\\theta}_{AN}$ such that $\\mathcal{R}_{ql,k}:= \\bm{\\theta}_{AN}$ (for every $q \\ne l$), then a special case of \\texttt{SBANM} (where each $\\rho_q:=0$) would be recovered.\n\\end{enumerate}\n\\texttt{SBANM} finds a loose connection to mixed-membership blockmodels (MMBM) in that both models attribute uncertainty to membership designations \\cite{airoldi2007mixed}.\nHowever, MMBM doubly complicates the model parameter landscape with overlapping block combinations, while \\texttt{SBANM} more parsimoniously addresses ambiguous memberships by subsuming their characteristics into an umbrella ambient noise term, which captures the ambiguities in block memberships through the interstitial noise term $IN$. 
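To make the generative process of the definitions above concrete, here is a minimal sampling sketch; the function name, the dictionary-based parameter containers, and the placeholder values are our own illustrative assumptions rather than part of the formal model.

```python
import numpy as np

def sample_sbanm(sizes, mus, Sigmas, mu_an, Sigma_an, noise_block, seed=0):
    """Generative sketch of SBANM: one block structure shared by all K layers.
    Within-block edges of signal blocks ~ N_K(mu_q, Sigma_q); edges between
    blocks, and within the noise block NB, ~ N_K(mu_AN, Sigma_AN)."""
    rng = np.random.default_rng(seed)
    z = np.repeat(np.arange(len(sizes)), sizes)   # block label per node
    n, K = len(z), len(mu_an)
    X = np.zeros((K, n, n))
    for i in range(n):
        for j in range(i + 1, n):
            q, l = z[i], z[j]
            if q == l and q != noise_block:        # correlated signal block
                e = rng.multivariate_normal(mus[q], Sigmas[q])
            else:                                   # ambient noise (IN or NB)
                e = rng.multivariate_normal(mu_an, Sigma_an)
            X[:, i, j] = X[:, j, i] = e             # undirected: symmetric slices
    return X, z
```

Setting all off-diagonal entries of each `Sigmas[q]` to zero recovers the uncorrelated special cases enumerated above.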
\n\n\n\\subsection{Variational Inference}\nThe proposed model is estimated by variational inference (VI), which has historically been used for estimating SBM memberships as well as their parameters \\cite{mariadassou2010,paul_multilayer_2015}. \nVI approximates a conditional density of latent variables given observed information by recasting the problem as optimization \\cite{blei_VI,Jaakkola00tutorialon}. \nWhen optimizing the full likelihood is intractable, simpler surrogates of complicated variables are chosen so as to create a simpler objective function. The Kullback--Leibler (KL) divergence between this simpler function and the full likelihood is then minimized. For community detection problems, mean-field (MF) approximations of membership allocations often serve as simpler surrogates of latent approximands to simplify the likelihood function into a {lower bound} (typically known as the \\textit{evidence lower bound}: ELBO) \\cite{mariadassou2010,townshend_murphy_2013,airoldi2007mixed}. \n\n\nVariational EM (VEM) is the state-of-the-art for SBM estimation and demonstrably more efficient than other approaches (such as MCMC) \\cite{mariadassou2010,nowicki_snijders}. Daudin et al. introduced VEM for binary-graph SBMs \\cite{Daudin2008}. \nMariadassou et al. used a similar method for detecting communities in a single weighted graph \\cite{mariadassou2010}, while Matias et al. also did so for multilayer networks \\cite{matias2016}. The estimation algorithm for the proposed model is also rooted in VEM, but we augment the procedure by typologizing blocks as \\textit{signal} or \\textit{noise}.\n\n\\newcommand{\\ERhv }{ \\mathbb{E}_{ R (\\mathbf{Z}, \\mathbf{C} )} } \nThough it enables efficient inference, ``typical\" MF VI is limited by its assumption of strong factorization and does not capture posterior dependencies between latent variables arising amongst multilayered networks. 
Hierarchical Variational Inference (HVI) provides a natural framework for the two-layered latent structure of multilayer networks. \n A natural hierarchy is induced in \\texttt{SBANM} by the assumption that all but one block are under the umbrella of \\textit{signal}, while a single block is classified as noise. HVI augments variational approximations with priors on their parameters: this assumption allows joint clustering of blocks and their signal-noise differentiation as the \\textit{superstructure}.\n\nWe use a similar approach to that originally used in Daudin et al. \\cite{Daudin2008}. \nThe latent variable of interest is the membership allocation matrix $\\mathbf{Z}$, an $n \\times Q$ matrix where each row $\\{\\mathbf{Z}_i\\}_{i \\le n}$ contains $Q-1$ zeros and a single 1 that represents membership at that given entry. We introduce an indicator $\\mathbf{C}$ to determine whether a block $q$ is signal or the noise block $NB$. $\\mathbf{C}$ is a vector of length $Q$ whose values $C_q$ are 0 or 1. \nThe main difference between our approach and previous ones is that joint approximate conditional distributions of $\\mathbf{Z}$ and $\\mathbf{C}$ are modeled instead of just $\\mathbf{Z}$:\n\\begin{align} \\label{eq:R_ZC}\n\tR_\\mathbf{X}(\\mathbf{Z},\\mathbf{C} ) \\approx \\prod_{i, q} \\bigg( m (\\mathbf{Z}_i ,\\boldsymbol{\\tau}_i) \\times \\text{Bern}(C_q, P_q) \\bigg) . \n\\end{align} \nIn Eq. \\eqref{eq:R_ZC}, $ R_\\mathbf{X}(\\mathbf{Z},\\mathbf{C} ) $ represents the joint variational distribution of the memberships $\\mathbf{Z}, \\mathbf{C}$.\nThe exact joint distribution is unknown, but the hierarchical mean field (MF) approximation $ R(\\mathbf{Z},\\mathbf{C} ) $ can be used to obtain a factorized estimate of its marginals \\cite{ranganath2016}. We write the approximate composition of marginals using ``$\\times$\"; $m(\\cdot)$ represents the multinomial distribution. 
The variational approximation of the membership matrix $\\mathbf{Z}$ is an $n \\times Q$-dimensional matrix $\\boldsymbol{\\tau}$; each row represents the vector of probabilities that approximates $\\mathbf{Z}_i$ \\cite{mariadassou2010}. \n\nThe variational approximation of the indicator $C_q$ at block $q$ is the probability $P_q$, which typologizes (and ``sits at a higher tier\" than) $\\boldsymbol{\\tau}$. Under the variational distribution $R$, each member $i$ of a block $B_q$ adheres to a multinomial distribution with parameter $\\tau_{iq} = \\mathbb{E}[\\mathbf{Z}_{iq}]$. $P_q$ is the probability that $C_q = 1$, akin to $\\tau_{iq}$. For each $q$, $C_q$ has prior success probability $\\Psi$. A derivation for $\\Psi$ is given in Appendix \\ref{app:derivPsi}.\n\n\\begin{definition} \\label{def:Psi}\n\t$\\Psi$ is the prior probability that a block $\\{B_q\\}_{q: q\\le Q}$ is a signal block (i.e. not the noise block $NB$): \n\t\\begin{align} \\label{eq:Psi}\n\t\t\\Psi & := (Q-1)\/Q \n\t\\end{align} \n\\end{definition} \n\nThe hierarchical MF distribution $R_\\text{hv}(\\mathbf{Z} )$ as introduced by Ranganath et al. \\cite{ranganath2016} ``marginalizes out\" the MF parameters in $ R_{\\mathbf{X}}(\\mathbf{Z} , \\mathbf{C} )$ and is written as\n\\begin{align*}\n\tR_\\text{hv}(\\mathbf{Z} ) = \\int R_{\\mathbf{X}}(\\mathbf{Z} , \\mathbf{C} ) \\text{d} \\mathbf{C}. \n\\end{align*}\nFollowing the methods of estimation proposed in prior work on SBM estimation \\cite{Daudin2008,mariadassou2010,paul_randomfx_2018}, $R_{\\mathbf{X}}(\\mathbf{Z}, \\boldsymbol{\\tau} )$ represents the multinomial variational distribution wherein each $\\tau_{iq}$ approximates the membership allocations. The integrated $R_\\text{hv}(\\mathbf{Z} ) $ corresponds to the distribution in prior work that is not subject to the signal or noise categorizations. 
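A minimal numeric sketch of evaluating the factorized variational distribution of Equation \\eqref{eq:R_ZC}; the function name and the toy values in the usage below are our own illustrative assumptions.

```python
import numpy as np

def log_R(Z, C, tau, P):
    """Mean-field log-probability of the memberships: multinomial rows tau_i
    for the one-hot rows Z_i, times independent Bernoulli(P_q) terms for C_q."""
    lz = np.sum(Z * np.log(tau))                          # picks log tau_{i, z_i}
    lc = np.sum(C * np.log(P) + (1 - C) * np.log(1 - P))  # Bernoulli terms
    return lz + lc
```

With uniform $\\boldsymbol{\\tau}$ over $Q=2$ blocks and $P_q = 0.5$, every configuration $(\\mathbf{Z},\\mathbf{C})$ receives the same log-probability, as expected of a maximally uncertain mean field.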
\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.56\\linewidth, trim= {8.5cm 18cm 4cm 4cm }]{flowchart_SignalNoise}\n\t\\caption{\\footnotesize Schematic diagram of the hierarchy of organization for block structures, with signal\/noise differentiation of blocks as the top layer and the actual blocks as the bottom layer. }\n \\label{fig:flowchart-signalnoise}\n\\end{figure}\n\nPrior VEM-based estimation methods focus on optimizing the Evidence Lower Bound (ELBO) \\cite{paul_multilayer_2015,paul_randomfx_2018,mariadassou2010,Daudin2008}. $ \\mathcal{L} $ is the lower bound whose maximization minimizes the KL divergence between $R( \\mathbf{Z} , \\mathbf{C} )$ and the posterior density $f( \\mathbf{Z} , \\mathbf{C} | \\mathbf{X} ) $. It is the sum of the expected complete-data log likelihood and the entropy $\\mathcal{H}$ of the variational variable $\\mathbf{Z}$:\n\\begin{align*} \n\t\\mathcal{L} = \\mathbb{E}_{ R_{\\text{hv}} (\\mathbf{Z}) } [ \\log f (\\Zb, \\Xb )] + \\mathcal{H}_{\\text{hv}}( R(\\mathbf{Z})).\n\\end{align*} \nA better bound than the ELBO is derived by introducing the marginal recursive variational approximation $S(\\mathbf{C}|\\mathbf{Z})$, and then exploiting the following inequality with the joint MF distribution $R(\\mathbf{Z}, \\mathbf{C} )$ and the (hierarchical) entropy $\\mathcal{H} (\\mathbf{Z})$:\n\\begin{equation} \\label{eq:HELBO_Ineq} \n\t\\mathcal{H}_\\text{hv} ( R(\\mathbf{Z} )) \n\t\\geq - \\mathbb{E}_{R ( \\mathbf{Z}, \\mathbf{C} ) }\\left[ \\log R (\\mathbf{Z}, \\mathbf{C}) \\right] + \\mathbb{E}_{R (\\mathbf{Z}, \\mathbf{C} ) } \\left[ \\log S( \\mathbf{C} | \\mathbf{Z}) \\right]. \n\\end{equation} \nA proof of inequality \\eqref{eq:HELBO_Ineq} is given in Appendix \\ref{app:ELBO_Proof} \\cite{ranganath2016}. 
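A toy numeric check of inequality \\eqref{eq:HELBO_Ineq}: for discrete variables, the left side is the entropy of the marginal of $\\mathbf{Z}$, and the bound holds for any valid conditional $S$. The distributions below are arbitrary assumptions chosen only to exercise the inequality.

```python
import numpy as np

# Toy joint R(z, c) over two binary variables, and an arbitrary conditional S(c|z).
R = np.array([[0.3, 0.2],
              [0.1, 0.4]])      # R[z, c]; entries sum to 1
S = np.array([[0.5, 0.5],
              [0.2, 0.8]])      # S[z, c] = S(c | z); rows sum to 1

Rz = R.sum(axis=1)                                       # marginal R(z)
H_hv = -np.sum(Rz * np.log(Rz))                          # left-hand side
bound = -np.sum(R * np.log(R)) + np.sum(R * np.log(S))   # right-hand side

# The gap is the average KL divergence between R(c|z) and S(c|z),
# so equality holds iff S matches the true conditional.
assert H_hv >= bound
```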
\n\nThe jointly factorized MF components of $R (\\mathbf{Z}, \\mathbf{C} ) = R ( \\mathbf{C} ) R (\\mathbf{Z}| \\mathbf{C} ) $ are written as follows:\n$ R (\\mathbf{C} ) = \\prod_q P_q ^{C_q} (1-P_q) ^{1-C_q} $ as each $C_q$ is Bernoulli distributed, and $R (\\mathbf{Z}|\\mathbf{C} ) $ is written similarly to prior variational membership variables \\cite{Daudin2008,mariadassou2010}, exponentiated by $C_q$:\n$$R (\\mathbf{Z}|\\mathbf{C} ) = \\prod_q \\prod_{i} \\bigg( \\tau_{iq}^{ Z_{iq} } \\bigg) ^{C_q} \\bigg( \\prod_{i} \\tau_{iq}^{ Z_{iq} } \\bigg)^{1-C_q}, $$ \ncombining to form \t$R (\\mathbf{Z}, \\mathbf{C} ) = R (\\mathbf{Z} | \\mathbf{C} ) R (\\mathbf{C} ) .$\nMoreover, the recursive variational approximation $S (\\mathbf{C}| \\mathbf{Z} )$, as introduced by Ranganath et al. \\cite{ranganath2016}, estimates the higher-order memberships $\\mathbf{C}$ using the basal memberships $\\mathbf{Z}$:\n$$ S (\\mathbf{C}| \\mathbf{Z} ) = \\prod_q \\prod_i \\bigg( \\Psi ^ {C_q} (1-\\Psi) ^{ 1- C_q} \\bigg) ^ {Z_{iq}}. $$ \nThe global signal rate $\\Psi$ (Definition \\ref{def:Psi}) represents the \\textit{prior} probabilities of each group membership $C_{q}$, or the parameters of the \\textit{initial stationary distribution} of $P_q$ \\cite{matias2016}.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Parameter Estimation} \\label{sec:param_estimation} \n\nIn this section we describe the estimation of parameters. 
First, to ease notation, we introduce some more terms:\n\\begin{align} \n\tf( X^k_{ij}, \\boldsymbol{\\mu}_q, \\boldsymbol{\\Sigma}_q ) &= -\\frac 1 2 ( X^k_{ij} - \\boldsymbol{\\mu}_{q} )^T \\boldsymbol{\\Sigma}_q^{-1} (X^k_{ij} - \\boldsymbol{\\mu}_{q} ) - \\frac K 2 \\log (2\\pi) - \\frac 1 2 \\log |\\boldsymbol{ \\Sigma}_q |\n\t\\stepcounter{equation}\\tag{\\theequation} \\label{eq:abbrev_Sigfreq}\n\t\\\\\n\tf( X^k_{ij}, \\boldsymbol{\\mu}_{AN}, \\boldsymbol{\\Sigma}_{AN} ) &= -\\frac 1 2 ( X^k_{ij} - \\boldsymbol{\\mu}_{AN} )^T \\boldsymbol{\\Sigma}_{AN}^{-1} (X^k_{ij} - \\boldsymbol{\\mu}_{AN} ) - \\frac K 2 \\log (2\\pi) - \\frac 1 2 \\log |\\boldsymbol{ \\Sigma}_{AN} |\n\t\\stepcounter{equation}\\tag{\\theequation} \\label{eq:abbrev_Noifreq}.\n\\end{align} \nEquation \\eqref{eq:abbrev_Sigfreq} denotes the log-density for edges in a signal block $\\big( \\boldsymbol{\\mu}_q , \\boldsymbol{\\Sigma}_q \\big)$ at layer $k$; equation \\eqref{eq:abbrev_Noifreq} denotes the log-density for edges with noise $\\big( \\boldsymbol{\\mu}_{AN} , \\boldsymbol{\\Sigma}_{AN} \\big)$.\nIn a graph $\\mathbf{X}$ with $K$ graph-layers $\\{\\mathbf{X}^1,..., \\mathbf{X}^K\\} $, the conditional log-density of the edges given the memberships is \n\\begin{align*}\n\t\\log f ( \\mathbf{X}| \\mathbf{Z} ) \n\t= &\n\t\\sum_{q : B_q \\ne NB;q \\le Q } \\sum_{k \\le K } \n\t\\sum_{i , j \\le n } \\ \\tau_{iq} \\tau_{jq} f( X^k_{ij}, \\boldsymbol{\\mu}_q, \\boldsymbol{\\Sigma}_q) \\\\\n\t+ \\textbf{1} (B_q& = NB ) \\sum_{k \\le K } \\sum_{i , j \\le n} \\tau_{iq} \\tau_{jq} f( X^k_{ij}, \\boldsymbol{\\mu}_{AN}, \\boldsymbol{\\Sigma}_{AN} ) \n\t+ \\sum_{q,l \\le Q : q \\ne l } \\sum_{k \\le K } \\sum_{i,j \\le n } \\tau_{iq} \\tau_{jl} f( X^k_{ij}, \\boldsymbol{\\mu}_{AN}, \\boldsymbol{\\Sigma}_{AN} ) .\n\t\\stepcounter{equation}\\tag{\\theequation} \\label{eq:fXZ}\n\\end{align*}\nThe log-likelihood portion of the ELBO, $\\log f ( \\mathbf{X}| \\mathbf{Z} ) $, written above in Equation \\eqref{eq:fXZ}, comprises three parts: unique signals for
every $q$ (top), the noise block $NB$ (bottom left), and the interstitial noise $IN$ (bottom right).\n$AN$ is the global \\textit{ambient noise} whose parameters govern the \\textit{interstitial noise} as well as the \\textit{noise block}, as in Definition \\ref{def:ambient_noise}.\nThe probability of block $B_q$ ``being signal\" is denoted by $P_q$. Given variational variables $\\boldsymbol{\\tau}, \\mathbf{P}$, the expected likelihood is \n\\begin{align*}\n\t\\mathbb{E}_{R_{\\bm{X}}} [\\log f ( \\mathbf{X}| \\mathbf{Z} ) ] \n\t=& \n\t\\sum_{q : q \\le Q} \\mathbb{P}(B_q \\ne NB) \\sum_{k \\le K } \\sum_{i , j \\le n } \\tau_{iq} \\tau_{jq} f( X^k_{ij}, \\boldsymbol{\\mu}_q, \\boldsymbol{\\Sigma}_q) \n\t\\\\\n\t+ \\mathbb{P}(B_q = NB) & \\sum_{k \\le K } \\sum_{i , j: i \\ne j } \\tau_{iq} \\tau_{jq} f( X^k_{ij}, \\boldsymbol{\\mu}_{AN}, \\boldsymbol{\\Sigma}_{AN} ) \n\t+ \\sum_{q, l \\le Q: q \\ne l } \\sum_{k \\le K } \\sum_{i , j \\le n } \\tau_{iq} \\tau_{jl} f( X^k_{ij}, \\boldsymbol{\\mu}_{AN}, \\boldsymbol{\\Sigma}_{AN} ) .\n\\end{align*} \nThe $ \\ERhv [\\log f (\\mathbf{Z} ) ] $ term reduces to the same form as in earlier work on SBMs \\cite{mariadassou2010,Daudin2008}:\n\\begin{equation} \\label{eq:fZb_same}\n\t\\ERhv [\\log f (\\mathbf{Z} ) ] \n\t= \\sum_{i,q} \\tau_{iq} \\log \\alpha_q, \n\\end{equation} \nwhere, as in prior work \\cite{Daudin2008,mariadassou2010}, the variables $\\alpha_q$ represent the membership probabilities of $Z_{iq}$ and sum to 1:\n\\begin{align}\\label{eq:alpha_q}\n\t\\alpha_q = \\mathbb{P}(i \\in B_q) = \\mathbb{P}( Z_{iq}=1).\n\\end{align}\n\n\nNote that for the rest of the manuscript we use $ \\sum_{i,q} ( \\cdot )$ to signify the double summation across all $i \\le n$ and $q \\le Q$. The expected log frequency of the membership vectors $\\mathbf{Z}$ thus reduces to that in canonical SBMs. Details of this identity are found in Appendix \\ref{app:preserve_ElogfZ}.
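For concreteness, the per-edge log-density $f(\cdot)$ appearing in the sums above can be sketched in a few lines of numpy (a generic multivariate-normal log-density under the Gaussian edge-weight assumption, not the paper's own code):

```python
import numpy as np

def log_density(x, mu, Sigma):
    """Multivariate normal log-density f(x, mu, Sigma) of a K-layer
    edge-weight vector x, as used for both signal and noise blocks."""
    K = len(mu)
    diff = x - mu
    # Quadratic form (x - mu)^T Sigma^{-1} (x - mu) without forming the inverse.
    quad = diff @ np.linalg.solve(Sigma, diff)
    # Log-determinant via slogdet for numerical stability.
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * quad - 0.5 * K * np.log(2 * np.pi) - 0.5 * logdet

# Sanity check: at the mean with identity covariance (K = 2),
# the log-density equals -log(2*pi).
val = log_density(np.zeros(2), np.zeros(2), np.eye(2))
assert np.isclose(val, -np.log(2 * np.pi))
```

In the expected likelihood, this quantity is simply weighted by the membership products $\tau_{iq}\tau_{jq}$ (or $\tau_{iq}\tau_{jl}$ for interstitial edges) and summed over edges and layers.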
The joint density is written as:\n\\begin{equation} \\label{eq:ElogfXZ} \n\t\\mathbb{E}_{ R (\\mathbf{Z}, \\mathbf{C} )}[\\log f (\\mathbf{X} ,\\mathbf{Z})] = \\ERhv [\\log f (\\mathbf{X} | \\mathbf{Z} ) ] + \\sum_{i,q} \\tau_{iq} \\log \\alpha_q . \n\\end{equation} \nThe expression is written in full in Appendix \\ref{app:ElogfX|Z}.\nModel parameters can be partitioned into $ \\Theta_{\\text{Signal}}$ and $\\Theta_{\\text{Noise}}$ in addition to the global parameters $\\boldsymbol{\\alpha}, \\Psi$. We write the entire set of model parameters as\n\\begin{align} \\label{eq:total_Theta} \n\t\\Theta = \\{ \\boldsymbol{\\alpha}, \\Psi , \\Theta_{\\text{Noise}} , \\Theta_{\\text{Signal}} \\}. \n\\end{align} \n$\\Theta_{\\text{Signal}} = \\{ \\boldsymbol{\\mu}_{q} , \\boldsymbol{\\Sigma}_q\\}_{q: 1 \\le q \\le Q; B_q \\ne NB}$ represents the model parameters that are unique to each block $B_q$ (not including $NB$); in addition, one fixed label $q_{NB}$ indexes the noise block $NB$.\n$ \\Theta_{\\text{Noise}}= \\{ \\boldsymbol{ \\mu}_{AN} , \\boldsymbol{ \\Sigma}_{AN} \\} $ represents the noise parameters that govern both the interstitial noise $IN$ and the noise block $NB$. For $NB$, each correlation between layers is set to zero. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Decomposition of the Hierarchical ELBO} \\label{sec:ELBO_Optim}\nThe estimation procedure optimizes the hierarchical ELBO.\nThe hierarchical ELBO $ \\mathcal{L}^\\prime $ (details in Appendix \\ref{app:ELBO}) can be decomposed as\n\\begin{align} \\label{eq:ELBO}\n\t\\mathcal{L}^\\prime \n\t=& \\ERhv [ \\log f (\\mathbf{X},\\mathbf{Z}) ] \n\t- \\ERhv [ \\log R (\\mathbf{C} ,\\mathbf{Z} ) ] + \\ERhv [ \\log S (\\mathbf{C}| \\mathbf{Z} ) ] .\n\\end{align} \nThe first term, $ \\ERhv [ \\log f (\\mathbf{X},\\mathbf{Z}) ] $, which represents the observed joint density of $\\mathbf{X}$ and $\\mathbf{Z}$, is written in Eq. \\eqref{eq:ElogfXZ}.
The second term, $ \\ERhv [ \\log R (\\mathbf{C} ,\\mathbf{Z} ) ] $, represents the joint distribution of the two-tiered variational variables and is written as: \n\\begin{align*}\n\t\\mathbb{E}_{ R (\\mathbf{Z}, \\mathbf{C} )}[\\log R (\\mathbf{Z}, \\mathbf{C} ) ] \n\t& = \\sum_{i,q} \\tau_{iq} \\log \\tau_{iq} + \\sum_q \\bigg( P_q \\log P_q + (1-P_q) \\log (1-P_q) \\bigg) .\n\\end{align*} \nThe third term, $\\ERhv [ \\log S (\\mathbf{C}| \\mathbf{Z} ) ] $, described by Ranganath et al. as the `recursive variational approximation' \\cite{ranganath2016} for $R(\\cdot )$, is written as \n\\begin{align*}\n\t\\mathbb{E}_{ R (\\mathbf{Z}, \\mathbf{C} )} [ \\log S (\\mathbf{C}| \\mathbf{Z} ) ] = \n\t\\sum_{i,q} \\bigg( P_q \\log \\Psi +(1- P_q) \\log (1-\\Psi) \\bigg) \\tau_{iq}.\n\\end{align*}\nCombining these elements, the hierarchical ELBO can be rewritten as:\n\\begin{align*} \n\t\\mathcal{L}^\\prime \n\t=& \\ERhv [\\log f (\\mathbf{X} | \\mathbf{Z}) ] + \\sum_{i,q} \\bigg( \\tau_{iq} \\log \\alpha_q \n\t- \\tau_{iq} \\log \\tau_{iq} + \\bigg( P_q \\log \\Psi +(1- P_q) \\log (1-\\Psi) \\bigg) \\tau_{iq} \\bigg) \n\t\\\\\n\t&- \\sum_q \\bigg( P_q \\log P_q + (1-P_q) \\log (1-P_q) \\bigg) .\n\\end{align*} \nThe hierarchical ELBO written in full can be found in Appendix \\ref{app:ELBO}. Derivations for all of these terms can be found in Appendix \\ref{appendix:proofs_and_deriv}.\n\n\n\n\n\\section{Estimation Algorithm} \\label{sec:est_algorithm} \nWe summarize the targets of inference here to set up the language for the rest of the section. We distinguish \\textit{variational} and \\textit{model parameters}: variational parameters $\\boldsymbol{ \\tau}_q$ and $P_q $ (for $q: q \\le Q$) approximate the membership allocations, while model parameters describe the parametric qualities of the blocks. Within the set of model parameters, we further distinguish \\textit{local} and \\textit{global} parameters.
\\textit{Local} parameters are $\\boldsymbol{\\Sigma}_q$ and the membership probabilities $\\alpha_q$ for each $q$. \\textit{Global} parameters are $ \\Psi , \\Theta_{\\text{Noise}} $. \nWe use VEM to estimate variational parameters in the E-step and model parameters in the M-step, alternating these steps until the changes in the membership variables $\\boldsymbol{\\tau }$ become minuscule according to a convergence criterion. We present the closed-form solutions to all the estimates below; detailed derivations for every term are found in Appendix \\ref{app:VEM_Calc}.\n\n\\subsection{E-Step}\nThe E-Step of the algorithm estimates the variational variables, which represent the block memberships $Z_{iq}$ of the nodes $i$ as well as $C_q$, which are analogous to ``memberships of memberships\". First we describe the estimation procedure for the variational approximations $\\tau_{iq}$; next we describe the estimation of the signal-noise differentiation probabilities $P_q$.
This two-step procedure differs from prior work because of an additional hierarchical estimation step for the higher-level variational variables $P_q$.\n\n\\subsubsection{Estimation of Membership Vectors $\\boldsymbol{\\tau} $} \\label{sec:E_stepTauEst}\nAn iterative fixed-point approach is used to estimate $\\tau_{iq}$, wherein the derivative for each $\\tau_{iq}$ is taken based on the model parameters and the remaining memberships $\\tau_{jl}$, \n\\begin{align*} \n\t\\log(\\tau_{iq}) \\propto \\log(\\alpha_q )\n\t&+ \\sum_{k \\le K } \\sum_{j \\le n } \\bigg( \\tau_{jq} \\bigg( P_q f(X^k_{ij}, \\boldsymbol{\\mu}_q, \\boldsymbol{\\Sigma}_q ) + (1-P_q) f(X^k_{ij}, \\boldsymbol{\\mu}_{AN}, \\boldsymbol{\\Sigma}_{AN} ) \\bigg) \\\\\n\t&+ \\sum_{l \\le Q ; l \\ne q } \\tau_{jl} f(X^k_{ij}, \\boldsymbol{\\mu}_{AN}, \\boldsymbol{\\Sigma}_{AN} ) \\bigg) -1 + P_q \\log \\Psi +(1- P_q) \\log (1-\\Psi) .\n\\end{align*}\nAfter exponentiating and normalizing, the fixed-point equation can be solved by iterating the system until relative stability, the same approach as in most of the existing literature \\cite{Daudin2008,mariadassou2010}. The probabilities $P_q$ are calculated as follows:\n\\begin{align} \\label{eq:Pq_calculation}\n\t\\widehat{P_q} = 1 - \\bigg( 1+ \\bigg[ \\exp \\bigg( \\sum_{k \\le K } \\sum_{i,j \\le n} \\tau_{iq} \\tau_{jq} \\bigg( f(X^k_{ij}, \\boldsymbol{\\mu}_q, \\boldsymbol{\\Sigma}_q ) - f(X^k_{ij}, \\boldsymbol{\\mu}_{AN}, \\boldsymbol{\\Sigma}_{AN} ) \\bigg) + \\log \\bigg(\n\t\\frac{1-\\Psi }{ \\Psi }\n\t\\bigg) \\bigg) \\bigg] ^{-1} \\bigg) ^{-1}. \n\\end{align} \nCalculations for each of these terms are provided in Appendices \\ref{app:tau_est} and \\ref{app:noise_probPq}.\n\n\n\n\n\n\n\\subsubsection{Stochastic Variational Inference}\nTo speed up computation, we apply stochastic variational inference (SVI) to calculate the membership parameters $\\tau_{iq}$ and $P_q$, subsampling nodes at each E-step of the variational EM.
Calculating $\\tau_{iq,t}$ and $P_{q,t}$ comprises two stochastic sub-steps of the E-step at iteration step $t$; we label their SVI estimates as $\\widehat \\tau_{iq,t}$ and $\\widehat P_{q,t}$. \nAt each $t$, we sample a set of nodes $M = \\{i_1,...,i_m\\} $ of size $m$ and their associated edges from the graph layers $\\mathbf{X}^1,...,\\mathbf{X}^K$.\nLet $\\tau^*_{iq,t }$ represent the partial estimate computed on the randomly subsampled graph at iteration step $t$.\nMore details on the setup and assumptions of SVI can be found in Appendix \\ref{app:StochVI}.\n\\begin{enumerate}\n\t\\item (Calculating $\\tau^*_{iq,t}$)\nThe partial updating step for $\\tau^*_{iq,t }$ at time $t$, wherein only the subsampled memberships $i,j \\in M$ enter: \t\n\t\\begin{align*} \n\t\t\\tau^*_{iq, t} \\propto \\exp \\bigg( &\\log(\\alpha_q ) + \\sum_{k \\le K} \\sum_{ j, l \\in M } \\tau_{jl , t-1 } \\bigg( P_q f(X^k_{ij}, \\boldsymbol{\\mu}_q, \\boldsymbol{\\Sigma}_q ) + (1-P_q) f(X^k_{ij}, \\boldsymbol{\\mu}_{AN}, \\boldsymbol{\\Sigma}_{AN} ) \\\\\n\t\t&+ \\sum_{l : l \\ne q } f(X^k_{ij}, \\boldsymbol{\\mu}_{AN}, \\boldsymbol{\\Sigma}_{AN} ) \\bigg) \n\t\t-1 + P_q \\log \\Psi +(1- P_q) \\log (1-\\Psi) \\bigg). \n\t\\end{align*} \n\tThe update step averages the newly calculated $ \\tau^*_{iq, t} $ with the previous value\n\t\\begin{align*}\n\t\t\\widehat{ \\tau}_{iq, t} = \n\t\t\\delta_t \\tau^*_{iq, t} + (1-\\delta_t) \\widehat \\tau_{iq, t-1} .\n\t\\end{align*} \n\t\\item (Calculating $P_{q,t}$)\n\tThe signal probability \n\t$P_q$ is calculated as in \\eqref{eq:Pq_calculation} but with the same subsampled replacements as in the preceding calculation of $\\boldsymbol{\\tau}$. At each time point the new signal probability $P^*_{q,t}$ is calculated and averaged with the previous value at time $t-1$.
The update step is \n\t\\begin{align*}\n\t\t\\widehat P_{q,t} = \n\t\t\\delta_t P^*_{q,t} + (1-\\delta_t)\\widehat P_{q,t-1} .\n\t\\end{align*} \n\\end{enumerate}\n\n\n\n\n\n\n\n\\subsection{ M-Step} \\label{sec:Mstep}\nSimilar to the corresponding estimates in Daudin et al. \\cite{Daudin2008}, the $ {\\alpha}_q $ are estimated using Lagrange multipliers: $ \\hat{\\alpha}_q = { \\sum_{i \\le n} \\tau_{iq} } \/ { n}. $\nThe closed-form estimate from the M-step for the \\textit{local} mean vector $ {\\boldsymbol{ \\mu } }_{q} $ of each block $q$ is\n\\begin{align*}\n\t\\widehat{\\boldsymbol{ \\mu } }_{q} & = \\frac{ \\sum_{i,j} \\tau_{iq} \\tau_{jq} \\mathbf{X}_{ij} }{ \\sum_{i,j} \\tau_{iq} \\tau_{jq} } P_q + \\boldsymbol{ \\mu}_{AN} (1- P_q ).\n\\end{align*}\n\nFor this and all subsequent expressions in this subsection, the derivations are located in Appendix \\ref{app:M_StepSignalTerms}.\nSimilarly to the mean calculations, the variance calculations (along diagonals) are \n\\begin{align*}\n\t\\widehat { \\boldsymbol{\\Sigma}_q} & = \\frac{ \\sum_{i,j} \\tau_{iq} \\tau_{jq} ( \\mathbf{X}_{ij} - \\boldsymbol \\mu_q )^2 } { \\sum_{i,j} \\tau_{iq} \\tau_{jq} } P_q \n\t+ \\boldsymbol \\Sigma_{{AN}} (1-P_q ). \n\\end{align*} \nThe cross-term for two layers $h, k$ is written as:\n\\begin{align*} \n\t\\widehat{\\boldsymbol {\\Sigma}}_{hk,q}\n\t&=\\frac{ \\sum_{i,j} \\tau_{iq} \\tau_{jq} (X^k_{ij} - \\boldsymbol \\mu_{q,k}) (X^h_{ij} - \\boldsymbol \\mu_{q,h} ) } { \\sum_{i,j} \\tau_{iq} \\tau_{jq} } P_q.\n\\end{align*}\nThe element-wise correlations at iteration $t$ across layers $h,k$ ($h \\ne k$) are then calculated, and the maximum (if $K>2$) of these values is taken as the putative correlation (across all layers) for block $q$: \n\\begin{align*}\n\t\\hat{\\rho}_q = \\max_{h,k} \\frac{\\widehat{ {\\Sigma_{hk}^q}} }{ \n\t\t\\sqrt{\\widehat{ {\\Sigma^{ h}_q}} \\widehat{ {\\Sigma^{ k}_q}}}}. \n\\end{align*}\nIf $K=2$ then no maximum needs to be taken.
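This correlation-pooling step can be sketched numerically as follows; the $3\times 3$ covariance matrix is an arbitrary illustration, not the package implementation:

```python
import numpy as np

def pooled_correlation(Sigma_q):
    """Pool the cross-layer correlations of a K x K block covariance
    estimate into one putative within-block correlation by taking the
    maximum over layer pairs (h, k), h != k."""
    K = Sigma_q.shape[0]
    sd = np.sqrt(np.diag(Sigma_q))
    corr = Sigma_q / np.outer(sd, sd)          # correlation matrix
    off_diag = corr[~np.eye(K, dtype=bool)]    # entries with h != k
    return off_diag.max()

# Toy block covariance over K = 3 layers.
Sigma = np.array([[1.0, 0.3, 0.5],
                  [0.3, 2.0, 0.1],
                  [0.5, 0.1, 1.5]])
rho_hat = pooled_correlation(Sigma)
# Largest pairwise correlation is Sigma[0,2] / sqrt(1.0 * 1.5).
assert np.isclose(rho_hat, 0.5 / np.sqrt(1.5))
```

For $K=2$ the off-diagonal entries coincide, so the maximum is trivial, matching the remark above.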
This is an operational step of the optimization and does not necessarily yield closed-form estimates. However, we note that this value is identical to the \\textit{mutual coherence} of the estimated correlation matrix and serves as a summary statistic of the correlation estimates that is consistent with the approximation of the optimization problem we solve with VEM \\cite{Tropp_JustRelax2006}. \nTheoretical properties of these relationships should be explored in future work. \n\n\n\n\n\n\n\\subsubsection{Estimation of Global Parameters } \n\\newcommand{\\tilde{\\tau}}{\\tilde{\\tau}}\n\\newcommand{\\dot{\\omega}}{\\dot{\\omega}}\n\nTo calculate the global noise parameters, we use the global signal rate $\\Psi$ defined previously. The global noise mean is a $\\Psi$-weighted combination of the interstitial and within-block noise averages:\n\\begin{align} \\label{eq:mu_AN}\n\t{ \\widehat{ \\boldsymbol{\\mu}} _{AN} }\n\t& = \\Psi \\frac { \\sum_{j , i } \\sum_{l, q: q \\ne l } \\tau_{iq} \\tau_{jl} \n\t\t\\mathbf{X}_{ij} \n\t} { \\sum_{j , i } \\sum_{l, q: q \\ne l } \\tau_{iq} \\tau_{jl} } \n\t+(1-\\Psi)\n\t\\frac{\t \\sum_{j , i } \\sum_{ q } \\tau_{iq} \\tau_{jq} (1-P_q)\n\t\t\\mathbf{X}_{ij} }{ \\sum_{j , i } \\sum_{ q } \\tau_{iq} \\tau_{jq} (1-P_q) }.\n\\end{align} \nThe covariance term for global noise, as stated earlier, is zero.
The variance of the global noise is similarly calculated as \n\\begin{align*}\n\t{ \\widehat{\\boldsymbol{\\Sigma}}_{AN}}\n\t& = \\Psi \\frac { \\sum_{j , i } \\sum_{l, q: q \\ne l } \\tau_{iq} \\tau_{jl} \n\t\t(\\mathbf{X}_{ij} - \\boldsymbol{\\mu}_{AN} )^2 \n\t} { \\sum_{j , i } \\sum_{l, q: q \\ne l } \\tau_{iq} \\tau_{jl} } \n\t+ (1 - \\Psi) \\frac { \\sum_{j , i } \\sum_{ q } \\tau_{iq} \\tau_{jq} (1-P_q) \n\t\t(\\mathbf{X}_{ij} - \\boldsymbol{\\mu}_{AN} )^2 } { \\sum_{j , i } \\sum_{ q } \\tau_{iq} \\tau_{jq} (1-P_q) } .\n\\end{align*} \nDerivations for these expressions are in Appendix \\ref{app:deriv_muAN}.\n\n\n\n\n\n\n\n\n\n\\section{Empirical Performance of Synthetic Experiments} \\label{sec:app_results} \n\\input{pt3_results_input}\n\n\n\\section{Discussion} \\label{sec:disc} \n\nWe have introduced a novel method that is motivated by real-world clinical problems and that offers a data-driven approach for grouping subject psychopathologies. This method may enable deeper understanding of, or even new discoveries about, psychosis and schizophrenia based on the principles of statistical network theory. \nWe demonstrated the relative efficacy and accuracy of this model compared to existing methods.\n\nNetwork data in recent years come in increasingly complex forms, reflecting the multitude of ways that data relate to one another. These forms have grown alongside the rising availability of more varied types of data, with ever more complex configurations of community structure.\nOur primary contribution in this research is to introduce the notion of structured noise to weighted SBMs. Other work has explored cases where between-block transitions are all uniquely parameterized \\cite{matias2016}, but such models do not account for correlations between layers, nor do they separate signal from noise. The proposed model is more parsimonious and yields more interpretable results in clinical and experimental settings.
More details on this parsimony can be found in Appendix \\ref{app:parsimony}.\nIn practice, $NB$ does not represent a control group but rather a dynamic cluster that reflects the noisiest interactions.\n\nWe have demonstrated that the method is able to uncover latent, non-trivial patterns in psychiatry (as well as in voting and human mobility; see Appendices \\ref{app:voting} and \\ref{app:results_bikeshare}). The application to psychopathology data reflects an ongoing discourse around \\textit{nosology}, where psychiatric disorders are treated as discrete entities as opposed to multifaceted pathological configurations \\cite{nosology}. Etiologically, the proposed methodology reinforces the multidimensional nature of psychiatric disorders.\n\nDespite its advantages, there remain limitations to \\texttt{SBANM}. The issue of computation time persistently plagues SBM estimation using VEM. The algorithm slows when $K$ or $Q$ is large; however, in practice it outperforms existing methods. Moreover, the use of stochastic VI has sped up computation such that previously infeasible sample sizes are now attainable.\nFuture work may further explore subsampling methods that induce faster computation times.\n\n\\textit{Ambient noise} in networks dovetails with the notion of overlapping communities and, in particular, with SBMs. A class of community detection methods adheres to a \\textit{bottom-up} heuristic where sets gradually increase in size until memberships become stable, which naturally allows for separation between \\textit{signal} and \\textit{noise}. Many of these approaches implicitly assume inherent structure but do not assign an explicitly parametric model to signal or noise \\cite{wilson_essc,bodwin2015testingbased,palowitch_continuous}. Members not assigned to communities, called \\textit{background} nodes, are identified but not statistically modeled.\nUncertainty and ambiguity in block-memberships may be represented by either \\textit{noise} or \\textit{overlapping blocks}.
MMBMs have been useful in modeling real-world data, but in multilayer graphs, overlaps in high dimensions lead to greater problems of parameter identifiability (when not avoided altogether \\cite{Liu2018MultiOverlap}), and ambient noise serves to assuage this ``curse of dimensionality\". We refer the reader to the work of Latouche et al. and Airoldi et al. \\cite{Latouche_2011,airoldi2007mixed} for background on MMBMs, and leave the connection between \\textit{representing noisy signals via overlapping memberships} and global \\textit{ambient noise} to future work. Theoretical properties of the model relating to dimensional sensitivities may also be explored.\n\n\nThe development of \\texttt{SBANM} opens up a bevy of methodological avenues. One immediate next step is to expand the study of PNC data to neuroimaging and genomics data. Such work is currently in progress for the PNC study, with the aim of jointly modeling neural and genetic influences in addition to symptoms. Another direction is in assessing the significance or predictive power of the imputed clusters.\nMore generally, these in-group and out-of-group interactions are related to mixed effects models for multimodal weighted networks, which may serve as another perspective in the longitudinal analysis of networks \\cite{Snijders05modelsfor}.\n \n\\section*{ Reproducibility}\nCode and sample data for \\texttt{SBANM} are available at \\url{https:\/\/github.com\/markhe1111\/SBANM}.\n\n\\section*{Acknowledgements and Funding Information}\nThis project was funded by the Rockefeller University Heilbrunn Family Center for Research Nursing (RX, 2019) through the generosity of the Heilbrunn Family.
The funding organizations had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.\nMH was supported by the NDSEG fellowship.\n\nPhiladelphia Neurodevelopment Cohort (PNC) clinical phenotype data used for the analyses described in this manuscript were obtained from dbGaP at \\url{http:\/\/www.ncbi.nlm.nih.gov\/sites\/entrez?db=gap} through dbGaP accession phs000607.v3.p2.\nSupport for the collection of the data for the Philadelphia Neurodevelopment Cohort (PNC) was provided by grant RC2MH089983 awarded to Raquel Gur and RC2MH089924 awarded to Hakon Hakonarson. Subjects were recruited and genotyped through the Center for Applied Genomics (CAG) at the Children's Hospital of Philadelphia (CHOP). Phenotypic data collection occurred at the CAG\/CHOP and at the Brain Behavior Laboratory, University of Pennsylvania.\n\nThe authors thank Andrew Nobel and Shankar Bhamidi for helpful comments and theoretical advice. In particular, we thank them for defining and recognizing the problem of differential, correlated communities amongst multilayer networks. We also thank Professor Galen Reeves for helpful advice in contextualizing this work within the literature. \n\n\n\n\n\\subsection{PNC Data}\n\n\n\n\n\nFor a $K$-layer \\textbf{weighted} multigraph with $n$ registered nodes indexed by the set $[n] = \\{1,2,...,n \\}$, let $\\mathbf{X}$ represent the collection of multilayer weighted graphs with $K$ layers: $\\mathbf{X} = \\{ \\mathbf{X}^1, \\mathbf{X}^2,...,\\mathbf{X}^K\\}.$\nSimilarly, suppose $\\mathbf{X}$ contains $Q$ ground truth communities (blocks) indexed by $q$, such that a single block is considered \\textit{noise} and labeled $NB$ (indexed by $q_{NB}$).
\nWe let $\\textbf{X}_{ij} = (X^1_{ij} , X^2_{ij},..., X^K_{ij})$ represent the vector of edge-weights between node pairs $(i,j)$ across all layers $k=1,2,...,K$. We define a community $B_q \\subset [n]$ as the set of nodes contained in the block indexed by $q$ in $\\mathbf{X}$, and we let $\\mathbf{X}_q$ represent the set of all edges contained in block $q$ across all $K$ layers:\n\\begin{align} \\label{eq:Xb_q} \n\t\\mathbf{X}_q = \t\\{ \\mathbf{X}_{ij} \\}_{i,j \\in B_q}.\n\\end{align} \nMoreover, we call the set of edges across different blocks $q,l$ (where $q \\ne l$) \\textit{interstitial noise} ($IN$), and label it as: \t\t\n\\begin{align} \\label{eq:Xb_IN}\n\t\\mathbf{X}_{IN} = \t\\{ \\mathbf{X} _{ij} \\}_{i \\in B_q , j \\in B_l ,\\, q \\ne l }. \n\\end{align} \nWe fix \\textbf{one} block indexed as $NB$ as the \\textit{noise block} (previously described in Section \\ref{sec:contributions_context }), where all weights in the block follow a $ N_K \\big(\\boldsymbol{\\mu}_{NB}, \\boldsymbol{\\Sigma}_{NB} \\big) $ distribution. This block represents a null region that is devoid of unique signal but is distributionally governed by the same characteristics as the interstitial relationships between different blocks.\nWe let $\\mathbf{X}_{NB}$ represent the set of edges among members of the ``noise block\": $\\mathbf{X}_{NB} = \t\\{ \\mathbf{X}_{ij} \\}_{i,j \\in NB}. $\nIn the following subsection we relate this notation to the data introduced in the prior section.\nIn Section \\ref{sec:model_inference} we describe the assumption that classifies this notion of noise. \n\n\n\n\n\n\n\n\n\n\\subsection{Mapping Notation to Data}\nMultilayer networks can represent multimodal, longitudinal, or \\textit{difference} graphs\n\\cite{Menichetti_2014WeightedMultiplex,Holme_2015}.
The data in the \\textit{Philadelphia Neurodevelopmental Cohort} (PNC) (described below) are constructed as a multimodal network, while the applications outlined in Appendices \\ref{app:voting} and \\ref{app:results_bikeshare} are examples of longitudinal graphs.\nIn each application, we write the weighted graph-system $\\mathbf{X}$ with $K$ layers and define the index set $[n]$ as the set (of cardinality $n$) of all nodes. \nEvery layer has $n$ nodes; each weight $\\mathbf{X}_{ij}$ between nodes $i ,j$ is written as a $K$-dimensional vector, and each layer-specific weight (at $k$) is written as $X_{ij}^k$.\n\nWith respect to the PNC data, $\\mathbf{X}$ represents the whole set of anxiety, behavior, and mood psychopathology symptom networks across a given set of subjects. \nThere are three layers $\\mathbf{X}^x, \\mathbf{X}^y, \\mathbf{X}^z $ indexed by $k =\\{ x,y,z \\}$; each represents the psychometric evaluation network for one disorder category. \nThe sample size $n$ in this context refers to the 5136 subjects between the ages of 11 and 17 (\\textit{youth}) and the 1863 between the ages of 18 and 21 (\\textit{early adult}).\nEach node represents a subject, and each weighted edge the transformed similarity ratio between two subjects for anxiety, behavior, and mood symptoms.\n\n\nA community sample was obtained from the PNC study in the greater Philadelphia area. Subjects aged 8-21 years underwent a detailed neuropsychiatric evaluation \\cite{Calkins_PNC_2014,Calkins_PNC_2015}. This sample is used as the primary case study.\n$ X^k_{ij} $ is assumed to be generated from clusters of nodes whose (Fisher) transformed edges follow blockwise multivariate normal distributions.
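The Fisher transformation just mentioned can be sketched as follows; the input similarity score is purely illustrative, and the exact similarity ratio used for the PNC data is defined in the pre-processing appendix:

```python
import numpy as np

def fisher_edge_weight(r):
    """Map a similarity score r in (-1, 1) to an (approximately Gaussian)
    edge weight via the Fisher z-transform, z = arctanh(r)."""
    return np.arctanh(r)

# Illustrative similarity between two subjects' symptom profiles.
z = fisher_edge_weight(0.5)
# arctanh(0.5) = 0.5 * log((1 + 0.5) / (1 - 0.5)) = 0.5 * log(3).
assert np.isclose(z, 0.5 * np.log(3.0))
```

The transform stretches the bounded similarity scale toward the real line, which is what makes the blockwise multivariate-normal assumption on edge weights plausible.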
We use three general categories of disorders to represent each layer:\n \\begin{enumerate}[noitemsep]\n \t\\item Anxiety ($\\mathbf{X}^x$): 44 questions (generalized, social, separation anxiety, etc.)\n \t\\item Behavior ($\\mathbf{X}^y$): 22 questions (ADHD, OCD, CDD) \n \t\\item Mood ($\\mathbf{X}^z$): 10 questions (depression and mania).\n \\end{enumerate} \nScores in each category are then Fisher-transformed to produce the weighted edges of the graph $\\mathbf{X}^k$ in layer $k$. In the following sections these categories will simply be referred to as ``anxiety\", ``behavior\", and ``mood\". More details on pre-processing can be found in Appendix \\ref{app:data_pnc_proc}.\n \n \n \n \t%\n \n\\subsection{Experimental Design of Simulation Model} \\label{sec:properties_of_sim}\n\n \n\n\n \n \n\\subsection{Experimental Procedure} \\label{sec:expeirmental_procedure}\n\nThe goal of these experiments is to demonstrate that the proposed method can faithfully recover generated memberships and parameters in a time-efficient manner. \nAs described above, we use two simulation schemes to evaluate membership and parameter recovery (and then perform another experiment to assess the optimal number of blocks in Section \\ref{sec:choice_of_blocks}).\nIn all of the experiments outlined above, blockwise parameters for every network are first randomly generated for every layer, then observations (edges) are simulated from multinormal distributions governed by these parameters. Each network has a distribution $N_K(\\boldsymbol{\\mu}_{AN}, \\boldsymbol{\\Sigma}_{AN})$ governing both the noise block $NB$ and the interstitial noise $IN$. \\texttt{SBANM} is then applied to these networks, and membership (as well as parameter) recovery is assessed. Simulations are all drawn from differing parameters to demonstrate that the method is effective across a variety of parameter settings.\n\nAfter the ground-truth parameters are generated, we proceed to the second data-generating step.
For each mean-covariance pair corresponding to a block, we generate multivariate Gaussian samples of size $n_q (n_q -1)\/2$, then we convert these multivariate data to weighted edges. Finally, a sample of the $AN$ distribution with size\n$$n_{IN} := n(n-1)\/2 - \\sum_{q:q \\le Q} n_q (n_q -1)\/2$$ \nis generated for all \n$ n_{IN}$ interstitial edges between differing blocks.\n\n\nIn all of the experiments, the algorithm is initialized by applying spectral clustering to the sum graph $\\tilde{\\mathbf{X}} $ across all $K$ layers, such that each entry in the single flattened graph $\\tilde{\\mathbf{X}}$ is $\\tilde{X}_{ij} = \\sum_{k \\le K} X^k_{ij}.$\nAnother option is to draw every $\\tau_{iq}$ from a uniform distribution and then normalize. Matias et al. propose averaging the graphs and then running k-means over the averages \\cite{matias2016}. \nWe initialize by first summing the layers to $\\tilde{X}_{ij} $ and then using spectral clustering \\cite{rohe2011}, which approximates the community structure in a single network quickly and reliably.\n\n\n\n\n \n\n\n\\subsubsection{Recovery Under Differing Parameters (First Experiment)} \\label{sec:first_experiment}\n\nIn the first experiment, we fix Gaussian priors and generate different multinormal distributions from these hyperparameters, such that every network has different parameters.\nWe generate bivariate networks of size 500 and trivariate networks of size 200, with the number of blocks between 3 and 5. Block-memberships are generated from a multinomial distribution.\n\nSynthetic data are generated from a two-step procedure. In the first step, Gaussian parameters are randomly generated using fixed priors. In the second step, multivariate Gaussian observations are generated from the parameters obtained in the first step.\nThe number of blocks $Q$ is first randomly generated.
Means and variances of each block, as well as the global mean and variance for the ambient noise, are then independently generated from normal distributions (i.e., Gaussian priors), and a positive correlation coefficient is sampled from a uniform distribution between 0 and 1. The first block of each network is designated as $NB$, and its mean and variance follow those of $AN$. Group sizes $n_q$ for each block are generated from multinomial distributions whose probabilities were drawn from Dirichlet priors. \nIn order to induce separability of blocks during simulations, we only select the networks whose blocks' minimum pairwise Bhattacharyya distances are above a certain threshold. More details are in Appendix \\ref{appsec:simu_experiment2}.\n\nWe say \\textit{exact recovery} occurs when the \\texttt{SBANM} algorithm correctly imputes and places all the block memberships of the network generated by the multinormal simulation scheme in Section \\ref{sec:expeirmental_procedure} \\cite{abbe2017community}.\nExact recovery rates of the algorithm (for memberships) were fairly high. Results show that the bivariate simulations yield nearly a 100\\% (49\/50) recovery rate, and the trivariate simulations 75\\% (37\/50). In the trivariate case, the imperfect recoveries do recover \\textit{most} of the parameters and memberships, as shown by existing metrics for community detection in Table \\ref{table:method_compare}.\nWe note the sensitivity of the recovery rates to the increase in dimensions (or layers), which hints at some parallels with the {curse of dimensionality} for community detection in multilayer networks \\cite{Ertz2003FindingCO}, or is perhaps due to the small sample size of the networks ($n=200$).
\nIncreasing dimensions tends to make mixing between blocks that are close together more probable.\n\nParameter estimates are also reasonably well retrieved by the \\texttt{SBANM} algorithm, both in absolute and relative terms.\nMean errors are centered around zero, suggesting no systematic bias; absolute percentage differences between ground truths and their estimates hover around 10-25\\%; some of the discrepancies may arise from small ground-truth values or imperfectly matching memberships.\nMore details can be found in Figure \\ref{fig:simu_histos} in Appendix \\ref{appsec:simu_experiment1}.\n\n\n\\subsubsection{Parameter Recovery Under Same Parameters (Second Experiment)}\nIn the second experiment, 100 three-layer networks were generated from a fixed set of parameters and memberships ($n=300$). This experiment with fixed parameters complements the first experiment in order to better assess the accuracy of parameter estimates under more controlled conditions. Results show consistently accurate estimates for most of the mean, variance, and correlation parameters (Figure \\ref{fig:simu_boxplots}).\nMoreover, all memberships were 100\\% recovered. True parameters are shown in Figure \\ref{fig:simu_boxplots} and described in more detail in Appendix \\ref{appsec:simu_experiment2}. The variances for most of the estimates were within 3-5\\% of the true values, but the estimated variance for the ambient noise $\\boldsymbol{\\Sigma}_{AN}$ appears to be biased. Such biases are typical of variational approaches and may reflect a weakness of VEM for estimating covariance matrices \\cite{mariadassou2010}.
Further investigation of this discrepancy may be pursued in future work.\n\n \n \n \\begin{figure}[htbp] \n \t\\centering\n \t\\begin{tabular}{ccc}\n \t\t\\hline\n \t\t\\multicolumn{3}{c}{\\textsc{ Boxplots of Estimated Parameters}} \\\\ \n \t\t\\hline\n \t\t&\t$\\boldsymbol{\\mu}_{k,q}$&\t$\\boldsymbol{\\Sigma}_{k,qq}$ \\\\\n \t\t\\rotatebox[origin=l]{90}\n \t\t{ \t\\quad \\quad \t\\quad \\quad \t\\quad \\footnotesize $\\mathbf{X}$} &\t\\includegraphics[width=0.4\\linewidth]{Simu_BoxplotsMean1}\t& \t\\includegraphics[width=0.4\\linewidth]{Simu_BoxplotsVar1} \t \t\\\\ \n \t\t\\rotatebox[origin=l]{90}{ \t\\quad \\quad \t\\quad \\quad \\quad\t \\footnotesize $\\mathbf{Y}$}\n \t\t& \\includegraphics[width=0.4\\linewidth]{Simu_BoxplotsMean2}\t& \\includegraphics[width=0.4\\linewidth]{Simu_BoxplotsVar2} \\\\\n \t\t\\rotatebox[origin=l]{90}{ \t\\quad \\quad \t\\quad \\quad \t\\quad \\footnotesize $\\mathbf{Z}$} \n \t\t&\\includegraphics[width=0.4\\linewidth]{Simu_BoxplotsMean3} \n \t\t&\\includegraphics[width=0.4\\linewidth]{Simu_BoxplotsVar3} \\\\\n \t\\end{tabular}\n \t\\begin{tabular}{c}\n \t\t${\\rho}_{q}$ \\\\\n \t\t\\includegraphics[width=0.55\\linewidth]{Simu_BoxplotsRho} \n \t\\end{tabular}\n \t\\caption{\\footnotesize \\textit{Boxplots for repeated estimates of simulations (second experiment). We ran the algorithm on 100 randomly generated networks with the same ground truth parameters and fixed sample sizes. Each boxplot summarizes the 100 individual estimates corresponding to the 100 runs. The red bands represent the ground truth parameters for means, variances, and correlations.}}\n \t\\label{fig:simu_boxplots}\n \\end{figure}\n \n\n\n\\subsection{Comparison with Other Methods} \\label{sec:method_comparison}\nWe compared the proposed \\texttt{SBANM} method with spectral clustering as well as the \\texttt{dynsbm} method proposed by Matias et al.
\\cite{matias2016} using the results of the first experiment (Section \\ref{sec:first_experiment}).\n We applied spectral clustering `naively', as in the initialization scheme where all layers are summed and collapsed into a single network, because this is an intuitive, simple, and fast method for multilayer community detection. \nWhen comparing with \\texttt{dynsbm}, we consider two interpretations of its clustering results. Because \\texttt{dynsbm} imputes different block memberships for every layer, we convert these into cross-layer persistent community labels by \n(1) taking the most frequent occurrence of the clustered membership across all layers\nand (2) treating each block combination across layers as a unique configuration defining a new block.\nThis need to interpret the results of \\texttt{dynsbm} already reveals an implicit advantage of the \\texttt{SBANM} method: its inherent parsimony of clusters and the interpretability of its blocks across layers for suitable data types and scientific questions.\n\nWe evaluated \\textit{ARI} (Adjusted Rand Index) and \\textit{NMI} (Normalized Mutual Information) scores \\cite{wilson_essc,palowitch_continuous,matias2016} for the three methods on the 50 simulations for both bivariate and trivariate networks and found that \\texttt{SBANM} outperforms competing methods in every setting. In the bivariate case, because nearly all simulations yielded \\textit{perfect recovery}, the NMI and ARI are both very close to 1. \nIn the trivariate case, the high NMIs and ARIs suggest effective \\textit{partial} recovery of the memberships even when some of the network block structures are not perfectly recovered. We note that none of the competing methods perfectly recover the block structures for the multigraph systems.
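The two recovery scores above can be computed with scikit-learn's standard implementations; the labels below are a toy example, not data from the simulations.

```python
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

truth = [0, 0, 1, 1, 2, 2]
estimate = [1, 1, 0, 0, 2, 2]   # same partition, labels permuted

ari = adjusted_rand_score(truth, estimate)
nmi = normalized_mutual_info_score(truth, estimate)
print(ari, nmi)   # both 1.0: the scores are invariant to label permutation
```

This label-permutation invariance is what makes ARI and NMI suitable for comparing clusterings whose block labels carry no intrinsic meaning.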
We also note that spectral clustering in the bivariate case outperforms \\texttt{dynsbm}, but not in the trivariate case, suggesting a potential sensitivity of spectral clustering to the curse of dimensionality.\n \n\n\\begin{table}[htbp] \\footnotesize\n\t\\centering\n\t\n\t\\begin{tabular}{cc}\n\t\t\\hline\n\t\t\\hline\n\t\t\\multicolumn{2}{c}{\\textsc{ Method Comparison}} \n\t\t\\\\ \n\t\t\t\t\\hline\n\t\t\t\\begin{tabular}{lcccc}\n\\multicolumn{1}{c}{} &\t\\multicolumn{4}{c}{\\textbf{Bivariate (50 Runs)}} \\\\ \n\t\\hline\n\t\\textit{Method}& \\multicolumn{2}{c}{\\textit{NMI}} & \\multicolumn{2}{c}{\\textit{ARI}} \\\\ \n\t\\hline\n\t& Mean & SD & Mean & SD \\\\ \n\t\\hline\n\t\\texttt{SBANM} & 1.00 & 0.02 & 1.00 & 0.01 \\\\ \n\tSpectral & 0.80 & 0.27 & 0.84 & 0.24 \\\\ \n\t\\texttt{dynsbm} (unique config.) & 0.62 & 0.25 & 0.67 & 0.25 \\\\ \n\t\\texttt{dynsbm} (most freq.)& 0.68 & 0.25 & 0.71 & 0.25 \\\\ \n\\end{tabular}\n\t& \n\t\t\\begin{tabular}{lcccc}\n\t\t\\multicolumn{5}{c}{\\textbf{Trivariate (50 Runs)}} \\\\ \n\t\t\\hline\n & \\multicolumn{2}{c}{\\textit{NMI}} & \\multicolumn{2}{c}{\\textit{ARI}} \\\\ \n\t\t\\hline\n\t\t& Mean & SD & Mean & SD \\\\ \n\t\t\\hline\n\t\t& 0.87 & 0.26 & 0.87 & 0.27 \\\\ \n\t\t& 0.65 & 0.31 & 0.69 & 0.29 \\\\ \n\t\t& 0.75 & 0.21 & 0.80 & 0.21 \\\\ \n\t\t& 0.70 & 0.16 & 0.77 & 0.18 \\\\ \n\t\\end{tabular}\n\t\\\\\n\t\t\\hline\n\t\\end{tabular}\n \n\t\\caption{\\footnotesize \\textit{Comparison of different methods for membership recovery using the ARI and NMI measures. \\texttt{dynsbm} (unique config.) refers to the interpretation in which every unique configuration of blocks across layers is treated as a unique block. \\texttt{dynsbm} (most freq.)
treats the block with the most frequent occurrence of memberships across all layers as the cross-layer block.}} \\label{table:method_compare}\n\\end{table}\n\n\nComputing times were higher for \\texttt{dynsbm} than for \\texttt{SBANM} (for spectral clustering, computing time is nearly instant) in both bivariate and trivariate cases. The mean time for trivariate cases was 144 (SD 548) seconds for \\texttt{SBANM}, compared to 160 (125) on average for \\texttt{dynsbm}. Though \\texttt{SBANM} computing times have fairly high variance, they are comparable to those of \\texttt{dynsbm} in the trivariate cases. The time differential is much larger for the larger bivariate networks: the mean time was 330 (328) seconds for \\texttt{SBANM} and on average 859 (88) seconds for a few samples of \\texttt{dynsbm}. The difference in computation time suggests that \\texttt{SBANM} may handle larger graphs better than existing methods. Fitting larger networks ($n>5000$) is feasible for \\texttt{SBANM}, but not for \\texttt{dynsbm}. \n\n \n \\subsection{Choice of Number of Blocks (Third Experiment) } \\label{sec:choice_of_blocks}\n \\textit{Model selection} in the SBM clustering context usually refers to selection of the number of a priori blocks before VEM estimation, as it is the only `free' parameter in the specification step of the algorithm. Existing approaches \\cite{Daudin2008,mariadassou2010,matias2016} consider the \\textit{integrated complete likelihood} (ICL) for assessing block model clustering performance. \n For this experiment we fix $n$ and the ground-truth $Q$, and apply the method for a range of $\\widehat{Q}$ (the \\textit{estimated} number of blocks). Simulation results show that the ICL peaks at the correct ground-truth value, verifying that this metric is suitable for evaluation of the method (Figure \\ref{fig:elbossim200} in Appendix \\ref{app:simu_ICL}). More details on ICL can also be found in Appendix \\ref{app:simu_ICL}.
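The ICL-based selection loop can be sketched generically; `fit_model` below is a hypothetical stand-in for a fitting routine that returns an ICL value, not the actual SBANM interface.

```python
# Fit the model for each candidate Q and keep the ICL-maximizing one.
def select_Q(fit_model, data, Q_range):
    icls = {Q: fit_model(data, Q) for Q in Q_range}
    return max(icls, key=icls.get), icls

# Toy ICL curve peaking at the ground-truth Q = 3.
best_Q, icls = select_Q(lambda data, Q: -(Q - 3) ** 2, None, range(2, 7))
print(best_Q)
```

In the experiment, the observation that the ICL caps at the ground-truth $Q$ is what justifies using this selection rule on the real data.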
\n \n \n \n \\section{Case Study: PNC Psychopathology Networks} \\label{sec:results_PNC}\n After validating the method on simulations and real-world datasets, we apply \\texttt{SBANM} to the PNC data, which constitutes the primary case study of this paper. We use networks constructed from \\textit{anxiety, behavior}, and \\textit{mood} psychopathologies as described in Section \\ref{sec:data}, and then validate the discovered communities against clinical diagnoses for each disorder as well as typical development (TD) and psychosis.\nWe let $\\mathbf{X}^x$ represent the layer of symptom response networks for anxiety, $\\mathbf{X}^y$ for behavior, and $\\mathbf{X}^z$ for mood disorders. Correspondingly, we let $ \\big( \\boldsymbol \\mu_x, \\boldsymbol \\mu_y, \\boldsymbol \\mu_z \\big)_{q: q \\le Q}$ represent the means of the edge connections for each block representing anxiety, behavior, and mood, with corresponding standard deviations $\\big( \\boldsymbol \\sigma_x , \\boldsymbol \\sigma_y, \\boldsymbol \\sigma_z \\big)_{q: q \\le Q}$. \n \n\nLittle prior work has approached the study of psychiatric networks by constructing networks with individuals as nodes and their similarity as edges. The goal of introducing \\textit{ambient noise} to psychopathology symptom networks is to identify groups of people who have similar clinical characteristics and to facilitate early identification of individuals who could be at high risk. \n Existing classification studies on psychosis typically require input from (``training on\") already-diagnosed subjects, or psychosis-specific symptoms. Such studies usually rely on methods such as logistic regression \\cite{dcannon_psyRiskCalculator2016}. However, we aim to classify anxiety, mood, and behavior symptoms to identify who is at risk for psychosis \\textit{without} the use of psychosis labels in a sample of youth aged 8-21 years, a developmental period prior to the onset of psychotic disorders.
Unsupervised analysis is clinically useful in early identification.\n\n \nWe ran the method on youth and early adult data under several different specifications for the range of $Q$. We applied the method to multilayer networks constructed from 5136 youth and 1863 early adult subjects. In each of these runs the \\texttt{SBANM} algorithm separated the population into distinct groups with varying block sizes. Table \\ref{fig:mutable} shows that highly correlated blocks and $NB$ are discovered with mostly ample separation in terms of Bhattacharyya distances as well as post-hoc significance testing (Table \\ref{table:Youth_ClusSig} in Appendix \\ref{app:data_pnc_proc}).\n \\begin{table}[htbp]\n \t\\centering\n \t\\includegraphics[width=0.95\\linewidth, trim= {4cm 28.5cm 2cm 4cm }, clip]{table4a_mutable_Youth}\n \t\\includegraphics[width=0.95\\linewidth, trim= {4cm 28.09cm 2cm 5.3cm }, clip]{table4b_mutable_EA}\n \t\\caption{ \\footnotesize\\textit{ Estimated parameters between blocks in youth and early adult subjects, as well as Bhattacharyya distances between the blocks. Mean rates for anxiety response networks are represented by $\\mu_x$, behavior $\\mu_y$, and mood $\\mu_z$. Associated standard deviations are also shown.}}\n \t\\label{fig:mutable}\n \\end{table}\n\nWe used the ICL procedure outlined in Section \\ref{sec:choice_of_blocks} to select the optimal $Q$. For youth, the ICL is highest when $Q=3$; for the early adults, when $Q=4$. In both youth and early adults, the ICLs suggest that the more parsimonious selections are preferable. In the remainder of this section we mostly focus on the results for these selections of $Q$, unless there are results specific to the suboptimal-$Q$ model. However, we also note results across model specifications: for example, in youth the same 2552-member cluster is persistent in both settings for $Q$ (3 and 4) (Table \\ref{fig:mutable}).
These results show the persistence of the constellation of symptom agreements across mood, behavior, and anxiety layers. \n \n\n In general, these results demonstrate the ability of \\texttt{SBANM} to integrate \\textit{anxiety, mood,} and \\textit{behavior} symptoms to differentiate groups that signal differential behaviors. Table \\ref{fig:psychtable} shows the average proportions of subjects who met the criteria of positive symptoms for clinical diagnoses of the anxiety, mood, and behavior disorders, as well as psychosis and typical development (TD). The leftmost columns after block labels and sizes are positive indicators for anxiety, behavior, and mood disorders. They are distinct from symptom data in that each indicator is a binary `yes' or `no' for each subject, identified clinically. \n In nearly all the clustering results, the rates of psychosis spectrum are clearly differentiated among differing clusters. Among youth subjects, $S_1$ corresponds to a group that has a relatively low incidence of psychosis (13\\%). However, $NB$ and $S_2$ in youth (Table \\ref{fig:psychtable}, left) exhibit similar rates of psychosis spectrum and TD, but with differing anxiety, mood, and behavior symptoms. A table with more selections of $Q$ is found in Table \\ref{fig:app_fullpsychtable} in Appendix \\ref{app:pnc_posthoc}.\n \n \nIn youth subjects, the $S_1$ group (in yellow) appears to have the highest rates of typically developing (TD) youth (Table \\ref{fig:psychtable}) and can be interpreted as a relatively \\textit{normal} group. Because it models all between-block interactions, $NB$ may be interpreted as a group that straddles those who exhibit psychosis spectrum symptoms and those who do not.
\nBecause this sample is part symptomatic and part ``control\" (with absence of symptoms), $NB$ may be interpreted in a number of different ways in its contrast with the correlated signal blocks.\nUncorrelated symptoms across all layers potentially signal groups in $NB$ that tend towards psychosis through more individuated channels.\nIn early adult subjects, $NB$ maps to the group with the highest rates of psychosis, as well as the lowest rates of TD subjects. \n \n\n\n \\begin{table} [htbp]\n \t\\centering\n \t\\begin{tabular}{cc}\n \t\t\\hline\n\t\t\\multicolumn{2}{c}{\\textsc{ Psychopathological Symptom Groupings}} \\\\\n\t\t\\hline\n \t \\includegraphics[width=0.46\\linewidth, trim= {6.45cm 28cm 6.45cm 4.8cm} , clip]{table3aa_Youth3} \t& \n \t \\includegraphics[width=0.46\\linewidth, trim= {6.45cm 28cm 6.45cm 4.8cm} , clip]{table3ab_EA4}\n \t\\end{tabular} \n\\caption {\\footnotesize \\textit{Mean summary statistics for psychiatric diagnoses for youth (left) and early adult (right) subjects. The columns following block labels and sizes detail symptoms of anxiety, behavior, and mood disorders. The `Psy' column gives the proportion of respondents with an overall diagnosis of psychosis.}}\n \t\\label{fig:psychtable}\n \\end{table}\n\n \n \n \t \t\n The results of clustering demonstrate the ability of \\texttt{SBANM} to integrate \\textit{anxiety, mood,} and \\textit{behavior} symptoms to differentiate groups that signal differential, multimodal behaviors. \n In the results, psychosis rates are clearly differentiated, and those in $NB$ are consistently higher.\n The differential clustering results for youth hint at latent neurodevelopmental pathways for the onset of psychosis. \n Onset of psychosis is characterized by the presence of active psychotic symptoms and occurs during early adulthood. It is also better understood as a continuum, with patients reporting proportionally more depression, anxiety, and behavior disorder symptoms prior to the onset of psychosis \\cite{Cupo2021}.
As symptoms segregate with growth and development, psychopathology symptom relationships become statistically independent. Clustered subjects with higher correlations $\\rho_q$ correspond to the pre-psychotic states of more interconnected pathways, while subjects with independent psychopathologies exhibit more sublimation of psychosis. That these categories emerged without any supervision demonstrates the efficacy of the method in discerning risk of developing psychosis. Results also did not show any strong differentiation in demographic characteristics (Table \\ref{fig:ClusDemog_table} in Appendix \\ref{app:pnc_posthoc}). \n \n \n \n\n\\section{Introduction}\n\nAt present, Bayesian methods are routinely used in many fields: astrophysics and cosmology \n\\cite{Lewis2002,Trotta2008,Feroz2009,Xu2014,Yu2018,Gunther2019,Abbott2019,Abbott2020},\nparticle physics \\cite{PDG2018}, plasma physics \\cite{Langenberg2019,Milhone2019}, machine learning \\cite{Barber} and many others \\cite{vonToussaint2011,vonderLinden}.\nIn the past few years, they have also been applied to nuclear \\cite{King2019,Ozturk2019} and atomic physics \\cite{Stockton2007,Calonico2009,Mooser2013,Covita2018,Heim2019}.\nOn the one hand, one of the reasons for this success is the possibility of assigning a probability value to models (hypotheses) from the analysis of the same set of data in a well-defined framework.\nIn contrast, classical statistical tests and criteria (e.g. the chi-square and likelihood-ratio tests, the Akaike information criterion \\cite{Akaike1974}, etc.) are completely powerless if no definite preference emerges.\nOn the other hand, the implementation of Bayesian methods has become widely possible only now thanks to the recent availability of relatively cheap computing power.
\nA large computing capability is in fact required for the fine exploration of the probability distribution of the model parameters.\nUnlike standard methods, which mostly reduce to minimization\/maximization problems (of the likelihood function or of chi-squares), Bayesian approaches have to deal with non-trivial integrations in a multi-dimensional space.\nOne of the key points of Bayesian model selection is in fact the calculation of the \\textit{Bayesian evidence}, also called \\textit{marginal likelihood}, defined by\n\\begin{equation}\nE(\\mathcal{M}) \\equiv P(Data | \\mathcal{M}, I) \n= \\int P(Data | \\boldsymbol{a},\\mathcal{M}, I) P(\\boldsymbol{a}| \\mathcal{M}, I) d^{J}\\boldsymbol{a} \n = \\int L^\\mathcal{M}(\\boldsymbol{a}) P(\\boldsymbol{a}| \\mathcal{M}, I) d^{J}\\boldsymbol{a}. \\label{eq:evidence}\n\\end{equation}\nIt is the integral of the likelihood function $L^\\mathcal{M}(\\boldsymbol{a}) = P(Data | \\boldsymbol{a},\\mathcal{M}, I)$ over the $J$-dimensional parameter space (with $J$ the number of parameters), weighted by the prior probability $P(\\boldsymbol{a}| \\mathcal{M}, I)$\nof the parameters $\\boldsymbol{a}$ of a given model $\\mathcal{M}$, where $I$ represents the available background information.\nFrom the evidence, the probability of the model $P(\\mathcal{M} | Data,I)$ is simply evaluated by the formula\n\\begin{equation}\nP(\\mathcal{M} |Data ,I) \\propto E(\\mathcal{M}) P(\\mathcal{M} | I), \\label{eq:bayes-data}\n\\end{equation}\nwhere $P(\\mathcal{M} | I)$ is the prior probability of the model itself.\nThe challenging part resides in the multi-dimensional integration of Eq.~\\eqref{eq:evidence}. \nFor this purpose, different approaches have been developed in the past, among them Markov chain Monte Carlo (MCMC) based techniques (see e.g.
\\cite{vonderLinden,Lawrence}) for the integration of $ L^\\mathcal{M}(\\boldsymbol{a}) P(\\boldsymbol{a}| \\mathcal{M}, I)$.\nAs an alternative, the \\textit{nested sampling} method was proposed by Skilling in 2004 \\cite{Skilling2004,Skilling2006,Sivia}.\nWith this method, the multi-dimensional integral in Eq.~\\eqref{eq:evidence} is reduced to a one-dimensional integral, which is then calculated.\nBecause of its high efficiency and relatively moderate computing-power requirements compared to other approaches, the nested sampling method is currently implemented in several data analysis\ncodes such as \\textit{Multinest} \\cite{Feroz2008,Feroz2009}, \\textit{Diamonds} \\cite{Corsaro2014},\n\\textit{PolyChord} \\cite{Handley2015}, \\textit{DNest4} \\cite{Brewer2018} and \\textit{Dynesty} \\cite{Speagle2019} for the computation of the Bayesian evidence and posterior probability distributions.\nBecause of its efficient sampling, nested sampling is also routinely used to study thermodynamic partition functions \\cite{Murray2005,Nielsen2013,Baldock2017,Bolhuis2018} and to explore potential energy landscapes of atomistic systems \\cite{Partay2010,Burkoff2012,Partay2014}.\n\nWhen several maxima of the likelihood function are present, the nested sampling algorithm can, however, encounter convergence problems.\nThe parameter space exploration can become inefficient or exclude entire regions, which introduces systematic errors in the estimation of the evidence.\nIn order to avoid such problems, several solutions have been proposed in the literature.\nHere we present an original approach based on cluster recognition with the \\textit{mean shift method}, one of the classic clustering algorithms, widely used and included in the major machine learning libraries.\nThis method is implemented in the program \\textit{NestedFit}, a code developed by one of the authors and described in detail in Refs.~\\cite{Trassinelli2017b,Trassinelli2019}.\n\nAn introduction to nested sampling and the
NestedFit code is presented in Sec.~\\ref{sec:nested_sampling}.\nThe description of the mean shift algorithm, its implementation in NestedFit and the results of some tests are presented in Sec.~\\ref{sec:mean-shift}.\nThe article ends with a concluding section (Sec.~\\ref{sec:conclusions}).\n\n\n\\section{Nested sampling and NestedFit} \\label{sec:nested_sampling}\n\n\\subsection{The nested sampling algorithm}\nNested sampling is based on the reduction of the multi-dimensional integral in Eq.~\\eqref{eq:evidence} for the evidence computation to a one-dimensional integral\n\\begin{equation}\nE(\\mathcal{M}) = \\int _0^1 \\mathcal{L}(X) d X. \\label{eq:evX}\n\\end{equation} \n$X$ represents the normalized value of the volume, weighted by the prior probability $P(\\boldsymbol{a}| I)$, of the portion of the $J$-dimensional parameter space where $L(\\boldsymbol{a})$ is higher than a certain value $\\mathcal{L}$:\n\\begin{equation}\nX(\\mathcal{L}) = \\int_{L(\\boldsymbol{a})>\\mathcal{L}} P(\\boldsymbol{a}| I) d^{J}\\boldsymbol{a}. \\label{eq:XvsL}\n\\end{equation}\nEquation~\\eqref{eq:evX} is numerically calculated using the rectangle integration method, subdividing the $[0,1]$ interval into $M+1$ segments with an ensemble $\\{X_m\\}$ of $M$ ordered points \n$0< X_M<...
\\mathcal{L}_{m=1}$, a new set of $\\xi_{m=2,k}$ points is constructed and the next iteration step of the procedure starts.\nAt each step, the discarded values $\\boldsymbol{\\tilde a}_{m} = \\boldsymbol{a}_{m,k'}$ are stored together with their corresponding likelihood values $ \\mathcal{L}_m=L(\\boldsymbol{\\tilde a}_{m})$.\nThe $X_m$ are obtained from their average expectation value from Eq.~\\eqref{eq:shrinking}.\nStep by step, the nested volumes built with the condition $L(\\boldsymbol{a}) > \\mathcal{L}_m$ converge around the parameter space regions corresponding to high values of the likelihood function.\nWhen the algorithm converges, the evidence is evaluated from the different values $ \\mathcal{L}_m, \\Delta X_m$ using Eq.~\\eqref{eq:evidence_app}.\nFrom the set of collected values of the discarded live points $\\boldsymbol{\\tilde a}_{m}$ and the associated weights $w_m = \\mathcal{L}_m \\Delta X_m$, the posterior probability $P(\\boldsymbol{a} | Data, \\mathcal{M}, I)$ can be determined.\nMore details on the nested sampling algorithm and its implementation can be found in Refs.~\\cite{Skilling2004,Skilling2006,Sivia,Mukherjee2006,Feroz2008,Feroz2009,Veitch2010}.\n\n\n\n\n\\subsection{Bottleneck of nested sampling and proposed solutions}\nThe difficulty of this elegant method is to efficiently find, at each step, a new live point within the hypervolume contour defined by $L(\\boldsymbol{a}) > \\mathcal{L}_m$.\nCodes that use the nested sampling method generally encounter difficulties in finding new live points $\\boldsymbol{a}_{new}$ when several maxima of the likelihood function are present.\nIn this case, the exploration of the parameter space generally becomes inefficient or can consider only one local maximum, thereby introducing systematic errors in the estimation of the evidence.\nIn order to avoid these problems, different strategies have been proposed in the literature.
\nThese strategies can be divided into two categories: those using a cluster recognition algorithm, and those without cluster recognition but with other improvements to the search algorithm for new live points.\n\n\nA first attempt to improve the search for new live points for multimodal problems via MCMC\nwas proposed by Veitch and collaborators in 2010 \\cite{Veitch2010}. \nHere 10\\% of the steps of the random walk are determined by a combination of three past points, and not only the previous point of the Markov chain.\nIn this way, a more efficient sampling is obtained without the need for cluster recognition.\n\nAnother improved random walk method for the nested sampling algorithm \nis \\textit{diffusive nested sampling}, developed by Brewer et al. in 2011 \\cite{Brewer2011} and implemented in the \\textit{DNest4} \\cite{Brewer2018} program.\nHere, the passage between maxima is facilitated by blurring the condition $L(\\boldsymbol{a}_i) > \\mathcal{L}_m$ for the parameter values explored by the MCMC, allowing the chain to momentarily pass through regions with lower values of the likelihood function.\n\nAs an alternative to random walks, single- or multi-particle trajectories have been implemented to improve the search for new points in complex landscapes of the function to be maximized or minimized.\nThis is the principle of Galilean and Hamiltonian Monte Carlo exploration \\cite{Baldock2017,Skilling2019}.\nIn the first case, linear trajectories with reflections off hard boundaries, given by the minimal likelihood threshold value, are considered.\nIn the second case, more complex trajectories are computed from the motion determined by the Hamiltonian function, as in molecular dynamics, with the Hamiltonian assimilated here to the likelihood function.
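To illustrate the Galilean idea (this is a schematic sketch, not the implementation of any of the cited codes), a trajectory moves in straight lines inside the allowed region and reflects specularly off the hard likelihood boundary, with the surface normal taken along a numerically estimated gradient of the log-likelihood:

```python
import numpy as np

def galilean_explore(x, v, logL, L_min, dt=0.1, n_steps=20, eps=1e-5):
    """Straight-line motion inside the region logL(x) > L_min; on hitting the
    boundary, reflect the velocity off the iso-likelihood surface:
    v' = v - 2 (v . n) n, with n along the gradient of logL."""
    for _ in range(n_steps):
        x_try = x + v * dt
        if logL(x_try) > L_min:
            x = x_try                          # still inside: keep going
        else:                                  # reflect off the likelihood wall
            g = np.array([(logL(x + eps * e) - logL(x - eps * e)) / (2 * eps)
                          for e in np.eye(len(x))])
            n = g / np.linalg.norm(g)
            v = v - 2.0 * np.dot(v, n) * n
    return x

# Toy run: "likelihood ball" logL(x) = -|x|^2 with hard threshold L_min = -1.
logL = lambda x: -np.dot(x, x)
x_end = galilean_explore(np.zeros(2), np.array([0.3, 0.4]), logL, -1.0)
print(logL(x_end) > -1.0)
```

Since the position is only updated when the proposal stays inside the contour, the trajectory never leaves the allowed region.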
\n\nIn the presence of several maxima, these methods significantly improve the search for new points but do not allow passage from one maximal region to another, which limits their efficiency.\nA completely different approach was proposed by Martiniani and collaborators in 2014 \\cite{Martiniani2014}.\nTo take into account the presence of several maxima without resorting to cluster recognition, they suggest using global optimization techniques to exploit the knowledge of the identified local maxima and their statistical weights, and then performing parallel nested sampling in correspondence with each significant region. \n\n\nA first solution using a cluster recognition algorithm was \nimplemented in the \\textit{Multinest} code as early as 2008 \\cite{Feroz2008,Feroz2009}.\nHere, new live points are randomly selected in an ellipsoid that is defined by the covariance matrix of the present live points.\nCluster analysis is used for partitioning the parameter space into a series of ellipsoids.\nThis is obtained by implementing the $k$-means clustering algorithm, which is triggered when the estimated volume occupied by the live points is much smaller than the ellipsoid volume estimated from their covariance matrix. \nA partition into two clusters is initially performed ($k=2$) and recursively repeated (always with $k=2$) to obtain an efficient partition of the space with many ellipsoids.\n\n\nIn the more recent \\textit{PolyChord} program \\cite{Handley2015}, where the search for new live points is based on slice sampling\\footnote{An MCMC method that uses the live-point covariance matrix to provide a probability distribution for the choice of the random walk direction}, cluster recognition is obtained with the $k$-nearest-neighbor algorithm.\nOnce the different clusters are identified, a parallel exploration and analysis via slice sampling MCMC is independently performed for each of them.
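The recursive two-cluster splitting described above for Multinest can be sketched as follows; the stopping rules and sizes are illustrative stand-ins, and the real code additionally compares estimated and ellipsoid volumes.

```python
import numpy as np
from sklearn.cluster import KMeans

def recursive_bisect(points, min_size=20, depth=0, max_depth=4):
    """Recursively split live points with 2-means; stop on small or strongly
    unbalanced splits. Illustrative stand-in for Multinest-style ellipsoid
    decomposition (the volume-based trigger is omitted)."""
    if depth >= max_depth or len(points) < 2 * min_size:
        return [points]
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
    a, b = points[labels == 0], points[labels == 1]
    if min(len(a), len(b)) < min_size:       # reject an unbalanced split
        return [points]
    return (recursive_bisect(a, min_size, depth + 1, max_depth)
            + recursive_bisect(b, min_size, depth + 1, max_depth))

# Two well-separated blobs of 30 live points each are recovered as two clusters.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.5, (30, 2)), rng.normal(10.0, 0.5, (30, 2))])
parts = recursive_bisect(pts)
print([len(p) for p in parts])
```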
\n\nIn the recent and very complete nested sampling code \\textit{Dynesty} \\cite{Speagle2019}, different sampling methods are proposed: from random uniform selections in ellipsoids, like Multinest, to a series of MCMC methods (random walks, slice sampling,\\ldots).\nDifficult cases with several likelihood maxima are treated by decomposing the parameter space into several ellipsoids via a cluster analysis (using the $k$-means algorithm like Multinest), or into spheres or cubes (with the same radius\/side, one per live point) with no need for any cluster recognition technique.\n\nIn the following sections, we present a new alternative method based on an MCMC, where the mean shift algorithm is used for the identification of clusters.\nIt is implemented in the existing nested sampling code \\textit{NestedFit}, which is briefly introduced in the next paragraph.\n\n\n\\subsection{The NestedFit program}\n\nNestedFit is a general-purpose code for the evaluation of the Bayesian evidence and parameter probability distributions based on the nested sampling algorithm.\nIt is written in Fortran90 \nwith some subroutines in Fortran77, and parallelized via OpenMPI. It is mainly developed and implemented for analyses in the fields of atomic, nuclear and solid-state physics \\cite{Trassinelli2016b,Trassinelli2017b,Trassinelli2016c,Papagiannouli2018,DeAndaVilla2019,Ozturk2019,Trassinelli2019}.\nIt is accompanied by a Python function library for visualization of the results and for automation of series of analyses.\nIn this publication we present version 3.2, \nwhich has the cluster analysis of the live points as a substantial upgrade with respect to older versions (see Ref.~\\cite{Trassinelli2017b} for v.~0.7 and Ref.~\\cite{Trassinelli2019} for v.~2.2).\nIn addition, some new improvements in the search for live points are also implemented in this version.
\nThe source code is freely available in the repository \\url{https:\/\/github.com\/martinit18\/nested_fit}.\n\n\nThe code requires two main input files: the main input file (\\texttt{nf\\_input.dat}), where the analysis parameters are selected, and the data file, in the format (channel, counts) or (channel, y value, y uncertainty).\nDepending on the data format, a Poisson or Gaussian statistics likelihood function is used.\nThe function name in the input file indicates the model to be used for the calculation of the likelihood function. \nSeveral functions are already defined in the function library for different models of spectral lines.\nAdditional functions can easily be defined by the user in a dedicated routine.\nNon-analytical or simulated profile models can be considered as well.\n In this case, one or more additional files have to be provided by the user for interpolation by B-splines using FITPACK routines \\cite{Dierckx}.\n\nSeveral data sets can be analyzed at the same time. \nThis is particularly important for the correct simultaneous study of physically correlated spectra, e.g., background and signal-plus-background spectra.\nThis is implemented by using a global user-defined function composed of different models, one per spectrum, with common parameters shared between the models.\n\nThe main analysis results are summarized in one output file (\\texttt{nf\\_output\\_res.dat}).\nHere the details of the computation (number of trials, number of iterations) can be found, as well as the final evidence value and its uncertainty $E \\pm \\delta E$, the parameter values corresponding to the maximum of the likelihood function, and also the mean, the median, the standard deviation and the one-, two- and three-sigma confidence intervals (68\\%, 95\\% and 99\\%) of the posterior probability distribution of each parameter.\n$\\delta E$, or more precisely $\\delta (\\ln E)$, is evaluated by running the nested sampling several times for different sets of starting
live points.
$\delta (\ln E)$ is obtained from the standard deviation of the different values of $\ln E$, the natural estimator for studying the uncertainty of $E$ \cite{Skilling2009,Chopin2010}.
The information gain $\mathcal{H}$ and the Bayesian complexity are also provided in the output.
Data for plots and for further analyses are provided in separate files.
The step-by-step information of the nested sampling exploration can be found in the largest output file, which contains the live points $\boldsymbol{\tilde a}_m$ used during the parameter space exploration, their associated likelihood values $\mathcal{L}_m$ and weights $w_m=\mathcal{L}_m \Delta X$.
From this file, the different parameter probability distributions and joint probabilities can be built by marginalization over the unretained parameters.

Details of the NestedFit search algorithm are presented in the next section.
Additional information can be found in Refs.~\cite{Trassinelli2017b,Trassinelli2019}.


\subsection{NestedFit search algorithm}

\begin{figure}
\centering
\includegraphics[height=5cm]{lawn_mower_robot_algo_new}\\

\vspace{3mm}

\includegraphics[height=5cm]{live_point_mean_algo}
\includegraphics[height=4.8cm]{synthetic_live_point_algo}
\caption{Graphical presentation of the different search algorithms discussed in the text.
a: Exploration of the parameter volume via the \textit{lawn mower robot} for finding a new live point.
b: Search of a new live point between the parameter set $\boldsymbol{a}_n$ outside the limit $L(\boldsymbol{a})>\mathcal{L}_m$ and the barycenter of the current live points.
c: Construction of a new live point from different coordinates of the current live points.}
\label{fig:algo}
\end{figure}


The search for new live points in NestedFit is based on a random walk called the \textit{lawn mower robot} \cite{Theisen2013,Trassinelli2017b,Trassinelli2019}, which is represented in Fig.~\ref{fig:algo}a.
It is composed of a sequence of $N$ steps (or jumps, with $N$ selected by the user) starting from a randomly chosen live point.
Each step has an amplitude and direction given by the $J$-dimensional vector $f \boldsymbol{r} \boldsymbol{\sigma}$, where each component $f r_j \sigma_j$ is determined by the factor $f$, selected by the user, a random number $r_j$ and the standard deviation $\sigma_j$ of the current live points with respect to the $j$th parameter.
For an efficient covering of the entire parameter space, $f$ and $N$ should be chosen according to the criterion
\begin{equation}
f \times N \gtrsim 1 \label{eq:criterion}
\end{equation}
to explore regions within a distance of the order of one standard deviation around the starting point.
Each new step, which corresponds to a new parameter set $\boldsymbol{a}_{n}$, is accepted if $L(\boldsymbol{a}_{n})>\mathcal{L}_m$.
If $L(\boldsymbol{a}_{n})<\mathcal{L}_m$, a new set of $r_j$ is chosen.
The total number of tries $n_t$ is recorded.
The choice of the values of $f$ and $N$ is critical and can vary from case to case.
$N$ has to be large enough to lose the memory of the starting live point position, but increasing it produces a linear increase in computation time.
A similar situation arises for $f$.
If it is too small, a strong correlation between live points is artificially created.
If it is too large, many failures can occur.
From our experience, a reasonable range of values is $N=20$--$40$ and $f=0.1$--$0.2$.
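For illustration, the random walk described above can be sketched in a few lines of Python. This is a simplified sketch with hypothetical helper names, not the actual NestedFit Fortran routine:

```python
import numpy as np

def lawn_mower_walk(start, sigma, log_likelihood, log_l_min,
                    f=0.1, n_steps=20, n_t_max=1000, rng=None):
    """Sketch of the 'lawn mower robot' random walk.

    start      : randomly chosen live point (J-dimensional array)
    sigma      : standard deviations sigma_j of the current live points
    log_l_min  : current likelihood threshold L_m
    f, n_steps : user parameters, chosen so that f * n_steps >~ 1
    n_t_max    : cap on the total number of tries before giving up
    """
    rng = rng or np.random.default_rng(0)
    point = np.asarray(start, dtype=float)
    n_tries = 0
    for _ in range(n_steps):
        while n_tries < n_t_max:
            r = rng.uniform(-1.0, 1.0, size=point.shape)  # random r_j
            trial = point + f * r * sigma                 # step f * r_j * sigma_j
            n_tries += 1
            if log_likelihood(trial) > log_l_min:         # keep only if L > L_m
                point = trial
                break
        else:
            return None, n_tries  # too many failures: caller falls back
    return point, n_tries
```

In the real code the failure counter triggers the dedicated fallback strategies instead of simply returning.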
In any case, we suggest a visual check of the explored live points for detecting possible correlations.

If the number of failures becomes too high ($n_t \gg N$), two different strategies are implemented for finding a new live point.
In the first one, schematically represented in Fig.~\ref{fig:algo}b, a new parameter set is determined by randomly choosing a point between the last failing chain point $\boldsymbol{a}_{n}$, with $L(\boldsymbol{a}_{n})<\mathcal{L}_m$, and the barycenter of the current live points\footnote{As for the \textit{lawn mower robot} method, this algorithm is also due to L.~Simons \cite{Theisen2013}, but it was not implemented in past versions of NestedFit.}.
The second method, represented in Fig.~\ref{fig:algo}c, consists in building a new synthetic live point $\boldsymbol{a}_{new}$ from the $j$th components of distinct live points: $(a_{new})_j = (a_{m,k})_j$, where $k$ is randomly chosen between 1 and $K$ (the total number of live points) for each $j$.
If $L(\boldsymbol{a}_{new})>\mathcal{L}_m$, the new point is accepted; otherwise another random live point is chosen as the start of the random walk.

One of the two strategies is chosen randomly when $n_t = N_t$ ($N_t$ being chosen by the user in the configuration file), and $n_t$ is then reset to zero.
As suggested by the schemes in Fig.~\ref{fig:algo}, the first one favors a re-centering of the live points.
On the contrary, the second one can more easily explore peripheral regions.
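The two fallback strategies can be sketched as follows, again as illustrative Python with hypothetical function names rather than the actual Fortran implementation:

```python
import numpy as np

def recenter_point(failed_point, live_points, rng=None):
    """Sketch of the first strategy: pick a random point on the segment
    between the last failing chain point and the barycenter of the
    current live points."""
    rng = rng or np.random.default_rng()
    barycenter = live_points.mean(axis=0)
    t = rng.uniform()  # random position along the segment
    return failed_point + t * (barycenter - failed_point)

def synthetic_point(live_points, rng=None):
    """Sketch of the second strategy: build (a_new)_j = (a_{m,k})_j with a
    live-point index k drawn independently for each coordinate j."""
    rng = rng or np.random.default_rng()
    K, J = live_points.shape
    k = rng.integers(0, K, size=J)  # one random live point per coordinate
    return live_points[k, np.arange(J)]
```

The first function pulls the chain back toward the bulk of the live points, while the second recombines coordinates of existing points and can therefore land in peripheral regions.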
This second strategy was in fact the only one present in the previous versions of NestedFit (where $N_t$ was also a fixed parameter of the code), and it was developed to improve the search algorithm for multimodal cases by facilitating jumps between maximal regions of the likelihood function \cite{Trassinelli2017b,Trassinelli2019}.

If the entire procedure above is repeated too many times in a row ($N\!N_t$ times, selected by the user), the cluster analysis, described in the following sections, is triggered to improve the search for new live points.

\section{Mean shift clustering algorithm and its implementation} \label{sec:mean-shift}

\subsection{Preliminary tests and considerations on other cluster recognition algorithms}
Before implementing one particular cluster recognition method in NestedFit, different algorithms from classical machine learning libraries (e.g., \url{https://scikit-learn.org}) were considered, and some of them were tested with simple Python scripts.
For this purpose, we used different ensembles of live points issued from NestedFit runs on real data for which convergence problems were encountered.
We excluded a priori the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) method.
This method is well adapted for detecting clusters with singular shapes (e.g. arcs of circles) without necessarily improving the implemented random walk algorithm, which is based on the standard deviation of the recognized cluster.
We then tested the Gaussian mixture method with the determination of the number of clusters based on the expectation-maximization algorithm.
The results were not convincing and required external criteria for determining the number of clusters.
For similar reasons, we excluded the $k$-means method, which requires a preliminary choice of the number of clusters, and the $x$-means method, which uses external criteria to determine the best choice of $k$.
We did not consider the recursive use of $k$-means with $k=2$, as in the \textit{Multinest} code, in order to keep a simple cluster recognition implementation.
From these preliminary tests and considerations, the \textit{mean shift} clustering algorithm \cite{Fukunaga1975,Cheng1995} emerged for its simplicity of implementation, its robustness and, more importantly, because it does not require choosing the number of clusters before the analysis.

\subsection{The mean shift algorithm for cluster recognition}

Mean shift is a recursive algorithm based on the iterative calculation of the mean of the points within a given region.
Considering an ensemble $\{\boldsymbol{x}_i\}$, for each point the mean value $m_{s,i}$ of the neighbor points $N\!H(\boldsymbol{x}_i)$ is calculated recursively, weighted by a kernel function $K(\boldsymbol{x}_i,\boldsymbol{x}_j)$, via
\begin{equation}
m_{s,i} = \cfrac {\sum_{\boldsymbol{x}_{s,j} \in N\!H(\boldsymbol{x}_{s,i}) } K(\boldsymbol{x}_{s,i},\boldsymbol{x}_{s,j})\, \boldsymbol{x}_{s,j}} {\sum_{\boldsymbol{x}_{s,j} \in N\!H(\boldsymbol{x}_{s,i}) } K(\boldsymbol{x}_{s,i},\boldsymbol{x}_{s,j})},
\end{equation}
with $s=1$ and $\boldsymbol{x}_{s=1,i}=\boldsymbol{x}_i$ for the first step.
Then the procedure is repeated considering, instead of the initial points $\boldsymbol{x}_i$, the mean values of the previous step, $\boldsymbol{x}_{s,i} = m_{s-1,i}$, until convergence or until the maximum number of allowed steps is reached.
Points belonging to the same cluster are identified by the vicinity of their final $m_{s,i}$ values.

In the present implementation, via a Fortran module in NestedFit, the identification of the neighbor points $N\!H$ is determined by the Euclidean distance between points via the condition $d(\boldsymbol{x}_i,\boldsymbol{x}_j) < D$, where $D$ is the maximal neighbor distance; the kernel can be chosen either flat or Gaussian, with bandwidth $\ell$ in the latter case.

\begin{figure}
\centering
\caption{Final evidence values, estimated uncertainties $\delta(\log E)$ and CPU time as a function of the number of live points $K$ for the two benchmark cases. Values of $K < 500$ and $K > 5000$ are excluded for the fit of the $\log E$ uncertainty and of the CPU time, respectively.}
\label{fig:studies}
\end{figure}


A more quantitative measurement of the effect of the cluster analysis is obtained by
varying the number of used live points $K$.
As can be observed in Fig.~\ref{fig:studies} (left, top), the final evidence does not change significantly with $K$.
On the contrary, the evaluated uncertainty (in blue) changes by several orders of magnitude and is systematically larger than its theoretical estimation (in black) $\delta(\log E) \approx \sqrt{\mathcal{H} / K}$ \cite{Skilling2006,Skilling2009}, where $\mathcal{H}$ is the information gain.
When the evaluated evidence uncertainty is plotted on a logarithmic scale (Fig.~\ref{fig:studies} left, bottom), it can be observed that, for high values of $K$ ($\ge 500$), $\delta(\log E)$ is proportional to $\sqrt{\mathcal{H} / K}$ as expected ($\delta(\log E) \propto K^c$ with $c = - 0.52 \pm 0.02$), but is systematically higher than the estimated accuracy by a factor of about 1.6 (not shown in the bottom figure).
When $K$ is too low ($K < 500$ in the present case), even with the cluster analysis the nested sampling algorithm cannot efficiently explore the 24 likelihood maxima, producing a systematic increase of $\delta(\log E)$.

As expected, the computation time (equivalent for one single CPU) per set of live points grows almost linearly with $K$.
A simple fit gives a power-law dependency $\propto K^c$ with $c = 1.13 \pm 0.01$.
A significant deviation is observed for $K=10000$.
In this case the cluster analysis, whose number of operations is proportional to $K^2$, contributes significantly to the total computation time.

\begin{figure}
\centering
\includegraphics[height=6cm]{evidence_and_cpu_time_4gaussian}
\caption{Values of $\log E$ and CPU time for different choices of the parameter values of the cluster recognition algorithm.
Uncertainties of the evidence for the labels `f 0.7' and `g 0.8 0.3' are not evaluated because of the large computation time corresponding to these cases.}
\label{fig:studies2}
\end{figure}

The cluster analyses of the above results have been
obtained with the same set of parameters: a Gaussian kernel with $D=0.6$, $\ell = 0.2$, $N_t = 200$ and $N\!N_t = 2$.
The dependence of the algorithm efficiency on these parameters is investigated, and the corresponding results are summarized in Fig.~\ref{fig:studies2}, where the final evidence values and the required computation time are presented for different parameter sets.
Several cases are considered with flat and Gaussian kernels, indicated in the label by `f' and `g', respectively, and different values of $D$ and $\ell$, indicated in the label as well (only $D$ for the flat kernel).
As can be noticed, for too small values of $D$ and $\ell$ the final accuracy is poor.
This is related to the identification of too many and too small clusters, which finally induce an inaccurate, but fast, exploration of the parameter space.
On the contrary, for too large values, only one or very few clusters are identified.
In these cases, the cluster algorithm is called very often without really improving the situation, while increasing significantly the total computation time.
The Gaussian kernel turns out to be more robust and flexible than the flat kernel, probably due to the counter-reaction between its two parameters.
The optimal parameter choice depends on the specific problem and on the values of $N_t$ and $N\!N_t$.
It is generally observed that low values of $N_t$, which allow for changing the starting live point often enough, improve the efficiency of the algorithm.
$N\!N_t$ has to be adapted to trigger the cluster analysis often enough, but not too often.

\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{cluster_go}
\caption{Results of the cluster recognition corresponding to the analysis of the modulation of the exponential decay. The relative amplitude, pulsation and phase are represented. Different colors represent different identified clusters.
In black, the projections onto some planes are represented.}
\label{fig:cluster_go}
\end{figure}

The analysis of the other considered case is characterized by a completely different cluster evolution.
In Fig.~\ref{fig:cluster_go} we represent the values of the amplitude, pulsation and phase of the modulation after each cluster analysis.
After the first run, several clusters are identified even if no clear structures are visible.
In the following analyses, a very complex landscape is drawn, with many clusters and with very narrow ranges of values in $\omega$.
Even if characterized by very different sizes for the different parameters (even after their normalization), the different clusters are well identified by the mean shift algorithm.

\begin{figure}
\centering
\includegraphics[height=8cm]{plot_par_omega}
\caption{Evolution of the component of the discarded live points $\boldsymbol{\tilde a}_{m}$ relative to the pulsation $\omega$ of the modulation of the single-ion exponential decay (see text) as a function of the nested sampling step, with cluster analysis and for two different selections of starting live points.}
\label{fig:omega}
\end{figure}

The complex dependency on the modulation pulsation $\omega$ is also presented in Fig.~\ref{fig:omega}, where its evolution as a function of the nested sampling step is represented for two different choices of starting live points.
It can be observed that the rich landscape of the likelihood value as a function of $\omega$ is well reproduced in each try, demonstrating once again the efficiency of the cluster analysis implementation.

As in the previous example, similar values of the Bayesian evidence are found: $\ln E = -1921.54 \pm 0.12$ without cluster analysis and $-1922.04 \pm 0.21$ with it.
In contrast to the previous case, the uncertainty for the analysis without cluster analysis is very small.
This is mainly caused by the choice of the value $f=0.014$ (with $N=40$, $K=5000$), a very small value compared to the one set for the analysis with cluster analysis ($f=0.1$, $N = 20$, $K=5000$).
This small value of $f$ in fact also contradicts the recommendation of Eq.~\eqref{eq:criterion}, with the risk of introducing some systematic errors in the computation.
It was however required for ensuring the convergence of the computation, which was otherwise impossible without cluster analysis.
As in the previous example, the computation time without cluster analysis is in the best case about eight times longer than with the cluster analysis.

When the number of live points $K$ is varied, keeping the other parameters fixed (Gaussian kernel with $D=0.6$, $\ell = 0.2$, $N_t = 100$ and $N\!N_t = 3$), we can observe in Fig.~\ref{fig:studies} (right) a tendency of the results similar to the previous case.
The estimated evidence accuracy is found to be proportional, as expected, to $1 / \sqrt{K}$ ($\delta (\log E) \propto K^c$ with $c = - 0.48 \pm 0.08$).
Here too, $\delta(\ln E)$ is higher than the estimated accuracy, by a factor of 4.4--5.5.
Because of the presence of fewer local likelihood maxima than in the four Gaussian peaks problem, the evaluated accuracy follows the proportionality to $1 / \sqrt{K}$ down to $K=100$.
An almost linear dependency of the computation time on $K$ is visible in this case too (CPU time $\propto K^c$ with $c = 1.13 \pm 0.01$), with a significant deviation for $K=10000$ due to the high cost of the cluster analysis for high $K$.

These two examples show the general behavior of the cluster algorithm and its dependency on the parameter choice.
However, each case can be different, and the user should vary the different parameters to reach the required accuracy.
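The power-law exponents quoted above ($\delta(\log E) \propto K^c$, CPU time $\propto K^c$) can be estimated by a straight-line fit in log-log space. A minimal sketch with synthetic data (not the actual benchmark values):

```python
import numpy as np

# Synthetic data following the expected delta(log E) ~ 1/sqrt(K) scaling.
K = np.array([100.0, 200.0, 500.0, 1000.0, 2000.0, 5000.0])
delta_logE = 2.0 / np.sqrt(K)

# A power law y = a * K^c is a straight line in log-log space:
# log y = c * log K + log a, so the exponent c is the fitted slope.
c, log_a = np.polyfit(np.log(K), np.log(delta_logE), 1)
print(round(c, 2))  # -0.5 for exact 1/sqrt(K) data
```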
A general and simple suggestion is to use a large number of live points to efficiently explore the whole parameter space.
This is crucial when multiple local maxima of the likelihood function are present, to avoid missing one of them.
This is an important requirement even when a cluster analysis is available.


\section{Conclusions} \label{sec:conclusions}
We present a new application of cluster recognition to a nested sampling algorithm for the evaluation of the Bayesian evidence and of the posterior parameter probability distributions.
For this purpose, we selected the mean shift method, a robust and simple classical cluster recognition method widely used in the machine learning community.
This clustering algorithm proved itself well adapted to critical data analyses where several likelihood maxima are present.
It has been implemented in the program NestedFit and tested with two different benchmark cases, proving its efficiency in exploring the parameter space without excluding any region.
As a consequence, the computation time is reduced by a factor of at least eight.
At the same time, a smaller value of the estimated evidence uncertainty is obtained.
As a result of the study of the dependency on the different algorithm parameters, a sufficiently high number of live points should always be used, even when the cluster analysis is implemented, to efficiently explore all local likelihood maxima.
Moreover, for a good efficiency of the mean shift cluster recognition, its typical parametric distances ($D$ and $\ell$, the maximal neighbor distance and the bandwidth of the Gaussian kernel) should be neither too small nor too large.
In the first case a very low accuracy, but a fast computation, is obtained; in the other case the computation time increases too much.

In this article we explore only the implementation of the mean shift algorithm for cluster recognition.
In the future, we plan to explore other methods, like the $k$-nearest neighbours and the $x$-means method, successfully used in other nested sampling codes, and to compare the performance of NestedFit with these codes in benchmark cases.


\acknowledgments{M.T. thanks the Alexander von Humboldt Foundation, which provided the funding to attend the MaxEnt2019 conference.
M.T. would like to express once again his deep gratitude to Leopold M. Simons, who introduced the author to Bayesian data analysis and without whom this work could not have been started.
We thank D. Schury and E.B. for the careful reading of the manuscript.
In addition, we would like to thank Lorenzo Paulatto for his help with the introduction of git and GitHub for the development and sharing of future NestedFit versions.
The development of this program would not be possible without the close interactions and discussions with many collaborators, whom the author would like to thank as well: R. Grisenti, A. Lévy, D. Gotta, Y. Litvinov, J. Machado, N. Paul and all members of the Pionic Hydrogen, FOCAL and GSI Oscillation collaborations and the ASUR group at INSP.}