\section{Introduction}
Two important concepts in modern condensed matter physics are
topological and dynamical phase transitions. Both differ from the
traditional phenomenology of phase transitions, which can be traced
back to Landau.\cite{Landau1980} Key to the Landau classification of
phase transitions is the concept of order parameters indicative of
symmetries broken across phase boundaries. In contrast, topological
phase transitions separate regions with the same symmetries but with
different topological properties of the ground state. Dynamical phase
transitions---rather than focusing on the non-analyticities which
occur in the derivatives of the free energy---concern
non-analyticities which occur in dynamical quantities after a
perturbation of the system.

The definition of a dynamical phase transition which we follow here is
based on the Loschmidt echo\cite{Heyl2013}
\begin{equation}
\label{LE1}
L(t)= \langle \Psi_0 | \e^{-iH_1t} | \Psi_0\rangle.
\end{equation}
Here $|\Psi_0\rangle$ is the initial state of the system before the
quench and $H_1$ the time-independent Hamiltonian which induces the
unitary time evolution of the system. The Loschmidt echo can be viewed
as a `partition function' with fixed boundaries. In analogy with the
canonical partition function,\cite{Fisher1965,Bena2005} there are
`Fisher zeros' for complex times $t$, and the Loschmidt echo vanishes
if the Fisher zeros are real or approach the real axis in the
thermodynamic limit.
For the Ising model in a transverse field the authors of
Ref.~\onlinecite{Heyl2013} have shown that the Loschmidt echo becomes
zero at real critical times $t_c$ only for quenches across the
critical point, i.e.\ in cases where the initial state is the ground
state of the Hamiltonian on one side of the transition while the
time-evolving Hamiltonian belongs to the other phase. In this case
there is therefore a direct connection between the equilibrium and the
dynamical phase transition. In recent years, dynamical phase
transitions have also been studied in a number of other
models.\cite{Mitra2012a,Karrasch2013,Heyl2014,Andraschko2014,Sharma2016,Karrasch2017,Gomez-Leon2017,Sedlmayr2017a}
Contrary to the transverse Ising model it has been found that, in
general, there is no connection between equilibrium and dynamical
phase transitions: crossing an equilibrium phase transition does not
necessarily lead to zeros at real times in the Loschmidt echo, while
such zeros can also occur for quenches within the same
phase.\cite{Andraschko2014,Vajna2014} A special case is that of
quenches in Gaussian models with topological order. Here it has been
shown that, under certain conditions, a quench across a topological
phase transition is guaranteed to lead to dynamical phase transitions,
while the opposite is not true.\cite{Vajna2015} The phase of the
Loschmidt echo can then be used to define a dynamical topological
order parameter which changes at critical times $t_c$.\cite{Budich2016}

A natural question, which we address in this manuscript, is then: is
there a dynamical analogue of the bulk-boundary correspondence of
equilibrium topological phase transitions? Our paper is organized as
follows. In Sec.~\ref{Models} we introduce the class of models we will
discuss. In Sec.~\ref{LE} we review known results for the Loschmidt
echo and the return rate in the periodic case.
We then present results of numerical calculations of the return rate
for open systems in Sec.~\ref{Numerics}. To understand the origins of
the observed sudden changes of the boundary contribution to the return
rate at dynamical phase transitions we investigate the dynamical
entanglement structure in Sec.~\ref{Entangle}. Our results are
summarized in Sec.~\ref{Concl}.

\section{Models}
\label{Models}
We focus here on one-dimensional (1D) models with symmetry-protected
topological (SPT) phases. Following the ten-fold way symmetry
classification,\cite{Schnyder2008} there are three symmetry classes
with ground states labeled by a $\mathbb{Z}$ topological invariant
(AIII, BDI, and CII) and two labeled by a $\mathbb{Z}_2$ topological
invariant (D and DIII). The unitary particle-hole operator $\C$ for
BDI, D, and DIII obeys $\{\C,H\}=0$, with $H$ the Hamiltonian of the
system, and $\C^2=1$. The other two symmetries we require are the
time-reversal symmetries $\mathcal{T}_\pm$ satisfying
$\mathcal{T}_\pm^2=\pm1$. They must also anticommute with $\C$. BDI
additionally has $[\mathcal{T}_+,H]=0$ and a $\mathbb{Z}$ invariant in
1D. DIII additionally has $[\mathcal{T}_-,H]=0$ and a $\mathbb{Z}_2$
invariant in 1D. As $\mathcal{T}_-$ is the time-reversal symmetry of
the electrons, DIII has Kramers pairs. D, with a $\mathbb{Z}_2$
invariant in 1D, has no additional symmetry beyond particle-hole. If
both $\mathcal{T}_\pm$ symmetries are present, the system is best
thought of as being in BDI with an additional time-reversal symmetry
protecting the Kramers pairs, and it has a $\mathbb{Z}$ invariant in
1D.\cite{Schnyder2008,Sedlmayr2015a}

We will use examples principally from the BDI class, which possesses
both a unitary `time-reversal symmetry' and a particle-hole
symmetry. It therefore also possesses chiral (sublattice) symmetry.
The topological superconductors in which Majorana bound
states are sought all belong to either BDI, D, or
DIII.\cite{Kitaev2001,Lutchyn2010,Oreg2010,Alicea2011,Mourik2012,Wong2012}

For concreteness we consider 1D Hamiltonians which, after a Fourier
transform on a periodic lattice, are of the form
\begin{equation}
\label{1dham}
H=\sum_{ k}\Psi^\dagger_{ k}\mathcal{H}( k)\Psi_{ k} \textrm{ with } \mathcal{H}(k)=\mathbf{d}_k \cdot {\bm\tau}\,,
\end{equation}
where ${\bm\tau}$ is the vector of Pauli matrices acting in some
subspace and $\Psi_{ k}$ are the appropriate operators for that
subspace. This will be particle-hole space for examples such as the
Kitaev chain, which is a topological superconductor, or a unit-cell
subspace for the Su-Schrieffer-Heeger (SSH) chain. These two examples
will be our focus, and we introduce them in more detail below. In
general $\mathbf{d}_k=(d^x_k,d^y_k,d^z_k)$, and diagonalizing
$\mathbf{d}_k \cdot {\bm\tau}$ one finds $\mathbf{\tilde d}_k \cdot
\tilde{\bm\tau}$ with $\mathbf{\tilde d}_k=(0,0,\epsilon_k)$. Such
Hamiltonians have pairs of eigenenergies $\pm \epsilon_k$, a
consequence of the particle-hole symmetry of the Hamiltonians we
consider.

\subsection{SSH model}
\label{SSH}
The first example we consider is the SSH model with open or
periodic boundary conditions,
\begin{equation}
\label{ssh_model}
H=-J\sum_{j}\left[(1+\delta\e^{i\pi j})c^\dagger_jc_{j+1}+\textrm{H.c.}\right]\,.
\end{equation}
Here $J$ is the nearest-neighbor hopping amplitude, $\delta$ is the
dimerization, and $c^\dagger_j$ is the creation operator on site
$j$. For periodic boundary conditions the sum is taken up to site $N$
with the identification $c_{N+1}=c_1$, while the sum is taken up to
$N-1$ for open boundary conditions.
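As a minimal numerical sketch of Eq.~\eqref{ssh_model} (function and parameter names are ours, not from the text), the single-particle hopping matrix of the chain can be constructed and diagonalized directly; for $\delta>0$ the open chain hosts a pair of near-zero-energy edge modes, the hallmark of the symmetry-protected topological phase:

```python
import numpy as np

def ssh_matrix(N, J=1.0, delta=0.3, periodic=False):
    """Single-particle matrix of the SSH chain, Eq. (ssh_model):
    hopping -J*(1 + delta*(-1)^j) on the bond between sites j and j+1
    (sites labeled 1..N as in the text)."""
    H = np.zeros((N, N))
    for j in range(1, N):                     # open-chain bonds
        H[j - 1, j] = H[j, j - 1] = -J * (1.0 + delta * (-1)**j)
    if periodic:                              # close the ring: c_{N+1} = c_1
        H[N - 1, 0] = H[0, N - 1] = -J * (1.0 + delta * (-1)**N)
    return H

# delta > 0: two eigenvalues exponentially close to zero (edge modes);
# delta < 0: the open chain is gapped, min |E| = 2 J |delta| up to finite size
energies = np.linalg.eigvalsh(ssh_matrix(100, J=1.0, delta=0.5))
```

The spectrum is symmetric about zero, as required by the particle-hole (chiral) symmetry discussed above.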
The main reason to consider this specific model first is
that an exact solution for open boundary conditions
exists\cite{Shin1997,Sirker2014a} which depends on a set of parameters
determined by non-linear equations. As a result, numerically accurate
data for very large open systems can be easily obtained. The system is
topologically non-trivial for $\delta>0$. The particle-hole symmetry
is then $\psi_j\to\textrm{i}\psi_j^\dagger$ and $\T_+$ is
$\psi_j\to(-1)^j\psi_j^\dagger$. Note that the phase of $\C$ must be
fixed such that $\{\C,\T_+\}=0$.

For periodic boundary conditions the Hamiltonian can be easily diagonalized. First, after a Fourier transform and a convenient rotation,
\begin{equation}
\Psi^\dagger_k=\sqrt{\frac{2}{N}}\sum_{j=1}^{N/2}\e^{\textrm{i} 2kj}\begin{pmatrix}
1&0\\0&\e^{\textrm{i} 2k}
\end{pmatrix}\underbrace{\begin{pmatrix}
c_{2j-1}\\
c_{2j}
\end{pmatrix}}_{=\Psi_j}\,,
\end{equation}
the Hamiltonian takes the form \eqref{1dham} with
\begin{equation}
\label{d_SSH}
\mathbf{d}_k=\begin{pmatrix}
-2J\cos[k],2J\delta\sin[k],0
\end{pmatrix}\,,
\end{equation}
which can be readily diagonalized. The momenta are $k=2\pi n/N$ with
$n=1,2,\ldots,N/2$. The particle-hole symmetry is now
$\C=\e^{\textrm{i}\pi/2}{\bm \tau}^x\hat K$ and $\T_+=\hat K$, where $\hat K$ is the complex conjugation operator.

Although this model has a $\mathbb{Z}$ winding number, its values are
nevertheless confined to either 0 or 1.
Extensions of this model in the same symmetry class but with a higher
winding number are, however, possible.\cite{Rice1982}

\subsection{Long-range Kitaev chain}
\label{Kitaev}
As our second example we consider the Kitaev chain of $M$
sites with long-range hopping terms,\cite{Kitaev2001}
\begin{eqnarray}\label{khlr}
H&=&\sum_{j}\sum_{m=1}^{3}\Psi^\dagger_{j}\left(
\Delta_{m}\textrm{i}{\bm\tau}^y-J_{m}{\bm\tau}^z\right)\Psi_{j+m}+\textrm{H.c.}\nonumber\\&&-\mu\sum_{j}\Psi^\dagger_{j}{\bm\tau}^z\Psi_{j}\,,
\end{eqnarray}
with open or periodic boundary conditions. For the periodic case we
identify $\Psi_{M+j}=\Psi_{j}$. The operators in particle-hole space are
given by $\Psi^\dagger_{j}=(c^\dagger_{j},c_{j})$, and $c_{
j}^{(\dagger)}$ annihilates (creates) a spinless fermion at
site $j$. In this case we again have $\C=\e^{\textrm{i}\pi/2}{\bm
\tau}^x\hat K$ and $\T_+=\hat K$, and a Fourier transform brings the
Hamiltonian into the form of Eq.~\eqref{1dham} with
$\Psi^\dagger_k=(c^\dagger_k,c_{-k})$ and
\begin{equation}
\label{d_Kit}
\mathbf{d}_k=\sum_{m=1}^3 \begin{pmatrix}
-2J_m\cos[mk]-\mu/3,2\Delta_m\sin[mk],0
\end{pmatrix}\,.
\end{equation}
The long-range hopping is truncated here at a distance of three
sites, and we define $\vec J=(J_1,J_2,J_3)$ and
$\vec\Delta=(\Delta_1,\Delta_2,\Delta_3)$. Note that, contrary to the
SSH model, phases with higher winding numbers in $\mathbb{Z}$ exist,
allowing for a more general investigation of quenches between
topological phases with different invariants, and therefore also
different numbers of boundary states. The momenta are $k=2\pi n/M$
with $n=1,2,\ldots,M$, and the total system size is $N=2M$.

\section{The Loschmidt echo and return rate}
\label{LE}
In this section we define the quantities studied throughout the following,
and review results for periodic boundary conditions.
\nThe initial state $|\\Psi_0\\rangle$ in Eq.~\\eqref{LE1} is the many-body\nground state of the initial Hamiltonian $H_0$ before the quench. The\nunitary time evolution is then determined by the Hamiltonian\n$H_1$. The Loschmidt echo in a translationally invariant system of the\nform of Eq.~\\eqref{1dham} can be easily calculated\\cite{Quan2006} as\nthe momentum $k$ remains a good quantum number during the quench. One\nfinds\n\\begin{equation}\\label{pbcloschmidt}\nL(t)=\\prod_k\\left[\\cos(\\epsilon^1_kt)+\\textrm{i}\\hat{\\mathbf{d}}^0_k\\cdot\\hat{\\mathbf{d}}^1_k\\sin(\\epsilon^1_kt)\\right]\\,,\n\\end{equation}\nwith\n$\\hat{\\mathbf{d}}^{0,1}_k=\\mathbf{d}^{0,1}_k\/\\sqrt{\\mathbf{d}^{0,1}_k\\cdot\\mathbf{d}^{0,1}_k}$\nand $\\mathbf{d}^{0,1}_k$ being the parameter vector in the\nHamiltonian \\eqref{1dham} before and after the quench respectively. The\nproduct in $k$ is over all filled states of the lower band.\n\nMore generally, for any free fermion system the Loschmidt echo can\nalways be obtained from the single-particle correlation matrix defined\nby\n\\begin{equation}\n\\label{Cij}\n\\C_{ij}=\\langle\\Psi_0|\\Psi^\\dagger_i\\Psi_j|\\Psi_0\\rangle \\, .\n\\end{equation}\nHere $i$ and $j$ run over all lattice sites. The Loschmidt echo in\nterms of the correlation matrix is given\nby\\cite{Levitov1996,Klich2003,Rossini2007}\n\\begin{equation}\n\\label{rle}\nL(t)=\\det\\mathbf{M}\\equiv\\det\\left[1-\\mathbf{\\C}+\\mathbf{\\C}\\e^{\\textrm{i} {\\bm H}_1t}\\right]\\,.\n\\end{equation}\nHere ${\\bm H}_1$ is the Hamiltonian matrix written in the same basis\nas $\\C$. We will use \\eqref{rle} to calculate $L(t)$ in the open\nboundary case and call $\\mathbf{M}$ the {\\it Loschmidt matrix} in the\nfollowing. 
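Eq.~\eqref{rle} is straightforward to implement for a generic free-fermion chain. A minimal sketch follows (function names are ours; we assume a real, symmetric single-particle matrix $H_0$, so that the ordering convention in Eq.~\eqref{Cij} is immaterial):

```python
import numpy as np

def correlation_matrix(H0, n_filled):
    """Ground-state correlation matrix, Eq. (Cij): projector onto the
    n_filled lowest single-particle levels of H0."""
    _, U = np.linalg.eigh(H0)
    occ = U[:, :n_filled]
    return occ @ occ.conj().T

def loschmidt_echo(t, C, H1):
    """Loschmidt matrix M and echo, Eq. (rle):
    L(t) = det[1 - C + C exp(i H1 t)]."""
    e, U = np.linalg.eigh(H1)
    expH1 = (U * np.exp(1j * e * t)) @ U.conj().T   # e^{i H1 t} via eigenbasis
    M = np.eye(len(C)) - C + C @ expH1
    return np.linalg.det(M)
```

At $t=0$ the Loschmidt matrix reduces to the identity and $L=1$; for $H_1=H_0$ the echo is a pure phase, $|L(t)|=1$, since $|\Psi_0\rangle$ is then an eigenstate of the time-evolving Hamiltonian.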
Eq.~\\eqref{pbcloschmidt} is easily recovered for periodic\nboundary conditions by transforming to momentum space and by using the\neigenbasis of the initial (momentum resolved) Hamiltonian.\n\nIn a many-body system we expect, in general, an orthogonality\ncatastrophe: the overlap between the initial state and the states in\nthe time evolution will become exponentially small in system size\n$N$. It is therefore useful to define the return rate as\n\\begin{equation}\n\\label{return}\nl(t)=-\\frac{1}{N}\\ln|L(t)|\\,.\n\\end{equation}\nThe non-analytic points of the return rate are determined by the zeros\nof the Loschmidt echo.\\cite{Heyl2013} Fisher zeros in the complex\nplane occur in the translationally invariant case at\ntimes\\cite{Vajna2015}\n\\begin{equation}\n\\label{tn1}\nt_n(k)=\\frac{\\pi}{\\epsilon^1_k}\\left(n+\\frac{1}{2}\\right)+\\frac{i}{\\epsilon^1_k}\\tanh^{-1}\\left(\\hat{\\mathbf{d}}^0_k\\cdot\\hat{\\mathbf{d}}^1_k\\right)\\,\n\\end{equation}\nwit $n$ being an integer. These zeros lie on the real axis and\ntherefore give rise to non-analytic behavior for the return rate at critical\ntimes\n\\begin{equation}\n\\label{tn2}\nt_n=\\frac{\\pi}{2\\epsilon^1_{k^*}}\\left(2n-1\\right)\\,, \\textrm{ where }n\\in\\mathbb{Z}\\,,\n\\end{equation}\nif a critical momentum $k^*$ exists with\n\\begin{equation}\n\\label{tn3}\n\\hat{\\mathbf{d}}^0_{k^*}\\cdot\\hat{\\mathbf{d}}^1_{k^*}=0 \\, .\n\\end{equation}\nThis is the condition for the vanishing of the imaginary part in\nEq.~\\eqref{tn1}. We introduce the critical time scale\n$t_c=\\pi\/(2\\epsilon^1_{k^*})$. Where multiple critical times exist, we\ntake the smallest critical time to be the timescale $t_c$.\n\n\\section{Boundary contributions to the return rate}\n\\label{Numerics}\nIn this study we are interested in the boundary contributions to the\nreturn rate \\eqref{return} for systems with open boundaries. 
In the\nlarge $N$ limit we can expand the return rate as\n\\begin{equation}\n\\label{bbreturn}\nl(t)\\sim l_0(t)+\\frac{l_{B}(t)}{N}\\,.\n\\end{equation}\nHere $l_0$ is the bulk contribution which is equivalent to the return\nrate in the thermodynamic limit for periodic boundary\nconditions. $l_B$ is the boundary contribution which contains information about the topologically protected edge states, as we will demonstrate in the following.\n\n\\subsection{Finite-size scaling}\n\\label{scaling}\nThe most straightforward approach to find $l_B(t)$ is a numerical\ncalculation of the correlation matrix \\eqref{Cij} followed by a finite\nsize scaling analysis of the return rate, Eq.~\\eqref{bbreturn}. We\nwill discuss the results of such an approach here for both our\nexamples, the SSH and the long-range Kitaev chain.\n\n\\subsubsection{SSH chain}\nWe start with the SSH chain where the semi-analytical solution for\nopen boundary conditions allows one to obtain highly accurate results\nfor very large systems. We consider quenches between the topologically\nordered phase with edge states ($\\delta>0$) and the topologically\ntrivial phase without edge states ($\\delta<0$) in the half-filled\ncase. If we perform a symmetric quench $\\delta\\to -\\delta$ then the\ndirection of the quench does not matter for the bulk contribution\n$l_0(t)$ shown in Fig.~\\ref{Fig1} as is obvious from\nEq.~\\eqref{pbcloschmidt}.\n\\begin{figure}[!ht]\n\\includegraphics*[width=0.99\\linewidth]{Figure01.pdf}\n\\caption{Bulk contributions for symmetric quenches $\\delta\\to -\\delta$ in the SSH model for $\\delta=0.3$ and $\\delta=0.95$.\n}\n\\label{Fig1}\n\\end{figure}\nIn both examples cusps in the bulk return rate are present at the\ncritical times determined by Eqs.~(\\ref{tn2}, \\ref{tn3}).\n\nThe direction of the quench does, however, strongly affect the\nboundary contribution $l_B(t)$. 
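The $1/N$ extraction of Eq.~\eqref{bbreturn} amounts to a linear fit of $l(t;N)$ in $1/N$ at fixed $t$. A minimal sketch with synthetic data (the function name is ours):

```python
import numpy as np

def split_return_rate(Ns, l_values):
    """Fit Eq. (bbreturn), l(t; N) ~ l0(t) + lB(t)/N, at fixed t:
    a degree-1 polynomial fit in 1/N gives the slope lB and the
    intercept l0."""
    lB, l0 = np.polyfit(1.0 / np.asarray(Ns, float),
                        np.asarray(l_values, float), 1)
    return l0, lB

# synthetic check: l = 0.40 + 2.5/N is recovered exactly
Ns = np.array([200, 400, 800, 1600, 2200])
l0, lB = split_return_rate(Ns, 0.40 + 2.5 / Ns)
```

In practice $l(t;N)$ is computed from Eq.~\eqref{rle} for a sequence of system sizes and the fit is repeated for every time step.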
For a quench from the trivial into the
topological phase we find a boundary contribution $l_B(t)$ which shows
large jumps at the critical times, see Fig.~\ref{Fig2}.
\begin{figure}[!ht]
\includegraphics*[width=0.99\linewidth]{Figure02.pdf}
\caption{Boundary contribution to the return rate $l_B(t)$ for symmetric quenches from
the trivial into the topological phase. $l_B(t)$ is extracted from a
$1/N$ scaling analysis of chains of up to $N=2200$ sites.}
\label{Fig2}
\end{figure}
For a quench from the topological into the trivial phase we also
observe jumps in $l_B(t)$ at $t_c$, as shown in Fig.~\ref{Fig3};
however, the jumps are more than two orders of magnitude smaller than
for the quench in the opposite direction.
\begin{figure}[!ht]
\includegraphics*[width=0.99\linewidth]{Figure03.pdf}
\caption{Boundary contribution to the return rate $l_B(t)$ for symmetric quenches from
the topological into the trivial phase. $l_B(t)$ is extracted from a
$1/N$ scaling analysis of chains of up to $N=2200$ sites. Note the
different scale of $l_B(t)$ compared to the quenches in
Fig.~\ref{Fig2}. The top panel shows a close-up of $l_B(t)$ for the quench from $\delta=0.95$ to $\delta=-0.95$.}
\label{Fig3}
\end{figure}
The data in the two figures clearly point to a bulk-boundary
correspondence: At the same critical times where the bulk contribution
shows cusps and the bulk dynamical winding number
changes,\cite{Budich2016} the boundary contribution also shows
discontinuities. The dependence on the direction of the quench
furthermore suggests that the boundary contribution $l_B(t)$ is
strongly affected by the presence or absence of symmetry-protected
edge states in the final Hamiltonian.
That the boundary contribution\nis directly related to the edge states can be seen from the\ntime-dependent occupation of these states, see Fig.~\\ref{Fig4}.\n\\begin{figure}[!ht]\n\\includegraphics*[width=0.99\\linewidth]{Figure04.pdf}\n\\caption{Occupation of the edge modes for a symmetric quench at $|\\delta|=0.95$ from the topological to the trivial phase for a system of size $N=80$.}\n\\label{Fig4}\n\\end{figure}\nFor a system with $N\/2$ spinless fermions in the topological phase,\none of the edge modes is filled at time $t=0$ while the other one is\nempty. At the critical times $t_c$, where the cusps in $l_0(t)$ occur,\nboth edge modes are approximately half-filled. In the remainder of the\npaper we will investigate the relation between the edge modes and the\nsingularities in the Loschmidt echo in more detail.\n\n\\subsubsection{Long-range Kitaev chain}\nBefore doing so, we will first present numerical results for $l_B(t)$\nfor the other model system we study here, the long-range Kitaev\nchain. Contrary to the SSH model, this chain has different topological\nphases characterized by an integer winding number\n$\\nu_{\\textrm{w}}$. Dynamical phase transitions are expected for any\nquench between phases with different winding numbers. In\nFig.~\\ref{Fig5} the bulk return rate is shown for two examples. \n\\begin{figure}[!ht]\n\\includegraphics*[width=0.99\\linewidth]{Figure05.pdf}\n\\caption{Bulk return rate for quenches in the \nlong-range Kitaev model from (a) $\\nu_{\\textrm{w}}=1$ to $\\nu_{\\textrm{w}}=3$ and (b) $\\nu_{\\textrm{w}}=1$ to $\\nu_{\\textrm{w}}=-1$. The gray and dashed red lines show the positions of the critical times.}\n\\label{Fig5}\n\\end{figure}\nIn the case shown in Fig.~\\ref{Fig5}(a) the quench is from\n$\\nu_{\\textrm{w}}=1$ with $\\vec J=(1,-2,2)$, $\\mu=2$, and\n$\\vec\\Delta=(1.3,-0.6,0.6)$ to $\\nu_{\\textrm{w}}=3$ with $\\vec\nJ=(1,-2,2)$, $\\mu=0.1$, and $\\vec\\Delta=(0.45,-0.9,1.35)$. 
Two critical momenta exist, leading to two distinct critical times at
which DPT's occur. In Fig.~\ref{Fig5}(b) a quench from $\nu_{\textrm{w}}=1$
with $\vec J=(1,-2,-2)$, $\mu=3$, and $\vec\Delta=(1.3,-0.6,0.6)$ to
$\nu_{\textrm{w}}=-1$ with $\vec J=(1,-2,-2)$, $\mu=3$, and
$\vec\Delta=-(1.3,-0.6,0.6)$ is considered. Cusps in the return rate
are again clearly visible, demonstrating that DPT's also occur for
quenches where only the sign of the winding number changes.

By a finite-size scaling analysis of chains up to $N=1600$, we have
also extracted the boundary contribution. As an example we show
$l_B(t)$ for the same quench as in Fig.~\ref{Fig5}(a) in
Fig.~\ref{Fig6}(a), and for the reverse quench in
Fig.~\ref{Fig6}(b). Note that, contrary to the SSH model, no analytic
solution for the eigensystem is known, so we cannot increase the
system size until a clear scaling emerges even for times very close to
$t_c$. Finite-size corrections are still present near
DPT's. Furthermore, the edge states have energies which are
exponentially close to zero, so that exact diagonalization with
multi-precision arithmetic is required.
\begin{figure}[!ht]
\includegraphics*[width=0.99\linewidth]{Figure06.pdf}
\caption{Boundary contribution $l_B(t)$ in the long-range Kitaev model for (a)
the same quench as in Fig.~\ref{Fig5}(a) from $\nu_{\textrm{w}}=1$ to
$\nu_{\textrm{w}}=3$, and (b) the reverse quench from
$\nu_{\textrm{w}}=3$ to $\nu_{\textrm{w}}=1$. $l_B(t)$ is extracted from a
$1/N$ scaling analysis of chains of up to $N=1600$. The gray and dashed red
lines show the positions of the critical times.}
\label{Fig6}
\end{figure}
While the obtained data are therefore not as accurate as for the SSH
chain, we nevertheless observe qualitatively similar behavior. At the
critical times $t_c$ the boundary contribution shows jumps.
However, here the jumps are of similar magnitude for both\nquench directions.\n\n\\subsection{Loschmidt eigenvalues}\nIn order to understand the origin of the discontinuous boundary\ncontribution we next investigate the spectrum of the dynamical\nLoschmidt matrix $\\mathbf{M}$ defined in Eq.~\\eqref{rle}.\n\nWe concentrate first on symmetric quenches in the SSH model for large\ndimerizations $\\delta$ where the structure of the spectrum is\nparticularly simple. In Fig.~\\ref{Fig7} the spectrum of the matrix\n$\\mathbf{M}$ for a quench from the trivial into the topological phase\nis shown.\n\\begin{figure}[!ht]\n\\includegraphics*[width=0.99\\linewidth]{Figure07.pdf}\n\\caption{Absolute value [black circles] and argument [green diamonds] of the eigenvalues of the matrix $\\mathbf{M}$ for a symmetric quench at $|\\delta|=0.95$ from the trivial into the topological phase for a system of size $N=80$.}\n\\label{Fig7}\n\\end{figure}\nAt the first critical time $t_c$ there are two eigenvalues\n$\\lambda_{1,2}$ whose absolute values become exponentially small in\nsystem size. Between $t_c$ and $3t_c$ these eigenvalues stay close to\nzero and this structure repeats itself in time. At $t_c$, the argument\nof the eigenvalues also changes abruptly from zero to $\\pm\\pi$. 
These two eigenvalues are related to the edges of the system, as a
comparison with the periodic case, where they are absent, shows.

The return rate in terms of the eigenvalues $\lambda_i$ of the matrix
$\mathbf{M}$ is given by
\begin{equation}
\label{EVs}
l(t)=-\frac{1}{N}\sum_{j=1}^N \ln |\lambda_j| \,.
\end{equation}
We can isolate the contribution to $l(t)$ from $\lambda_{1,2}$, the two eigenvalues which periodically approach
zero:
\begin{equation}
\label{2EVs}
\Lambda\equiv-\frac{1}{N}\ln |\lambda_1\lambda_2|\,.
\end{equation}
At a given system size $N$, $\Lambda$ reproduces the same behavior as
the finite-size boundary contribution, which we define as
$l(t)-l_0(t)$, see Fig.~\ref{Fig8}. It is the principal source of the
large boundary contribution for $t\in[t_c,3t_c]$.
\begin{figure}[!ht]
\includegraphics*[width=0.99\linewidth]{Figure08.pdf}
\caption{The boundary contribution to the return rate $l(t)-l_0(t)$ [red circles] compared to $\Lambda$ [green triangles], the contribution to the return rate from
the two eigenvalues which are close to zero for $t\in [t_c,3t_c]$, see Fig.~\ref{Fig7}. Shown for a system of
size $N=500$.}
\label{Fig8}
\end{figure}

At a DPT the system thus not only changes back and forth between the
trivial phase with dynamical winding number $\nu=0$ and the
topological one with $\nu=1$;\cite{Budich2016} there is also, at the
same time, a transition in the edge degrees of freedom. The presence
or absence of a pair of zero eigenvalues of the dynamical matrix
$\mathbf{M}$ can serve as an order parameter equivalent to the bulk
winding number, establishing a concrete bulk-boundary correspondence
in this case.

For the quench from the topological to the trivial phase we obtain a
slightly different picture.
While at the critical times there is still a pair of eigenvalues which
approach zero, these eigenvalues no longer remain close to zero for
$t\in [t_c,3t_c]$, see Fig.~\ref{Fig9}.
\begin{figure}[!ht]
\includegraphics*[width=0.99\linewidth]{Figure09.pdf}
\caption{Absolute value [black circles] and argument [green diamonds] of the eigenvalues of the matrix $\mathbf{M}$ for a symmetric quench at $|\delta|=0.95$ from the topological into the trivial phase for a system of size $N=80$.}
\label{Fig9}
\end{figure}
The direction of the quench can thus clearly be distinguished from the
spectrum of $\mathbf{M}$. From the almost symmetric spectrum around
$t_c$ for the topological-to-trivial quench it is, in particular,
clear that the boundary contribution is of similar magnitude on both
sides of the transition, consistent with the scaling results shown in
Fig.~\ref{Fig3}.

So far we have investigated quenches for large dimerizations
$|\delta|\lesssim 1$, where the system is almost perfectly dimerized
and the Loschmidt spectrum is particularly simple. A natural question
to ask is whether the spectrum is still useful to detect whether edge
states are present in the final Hamiltonian for smaller
dimerizations. To this end, we present in Fig.~\ref{Fig10} data for
symmetric quenches at $|\delta|=0.3$.
\begin{figure}[!ht]
\includegraphics*[width=0.99\linewidth]{Figure10.pdf}
\caption{Absolute value [black circles] of the smallest 20
eigenvalues of the matrix $\mathbf{M}$ for a symmetric quench at
$|\delta|=0.3$ from (a) the trivial to the topological phase, and (b)
vice versa, for a system of size $N=160$.}
\label{Fig10}
\end{figure}
While the spectrum becomes more complex, the direction of the quench
is still obvious.
In particular, a pair of eigenvalues close to zero
for $t\in [t_c,3t_c]$ persists for the trivial-to-topological quench.

In the long-range Kitaev model, where we are able to consider quenches
between more general winding numbers, a similar structure is seen in
the spectrum of $\mathbf{M}$, see Fig.~\ref{Fig11}. Although
finite-size effects are still present, it is already obvious that the
results are again very different for the two quench directions. Note
that for the quench from $\nu_{\textrm{w}}=1$ to $\nu_{\textrm{w}}=3$
the time-evolving Hamiltonian, $H_1$, has two additional pairs of edge
states compared to the initial Hamiltonian, $H_0$.
\begin{figure}[!ht]
\includegraphics*[width=0.99\linewidth]{Figure11.pdf}
\caption{Absolute value of the smallest four eigenvalues of the matrix $\mathbf{M}$ for the quenches in Fig.~\ref{Fig6}, for (a) $\nu_{\textrm{w}}=1$ to $\nu_{\textrm{w}}=3$, and (b) vice versa, for a system of size $N=600$. Eigenvalues are colored as an aid to the eye.}
\label{Fig11}
\end{figure}
For the quench where the number of edge states increases
[Fig.~\ref{Fig11}(a)] we find, in particular, an extended time
interval between critical times where $2$ or $4$ eigenvalues are
close to zero. In fact, two eigenvalues are close to zero for
$t\in[t_c,3t_c]$ and two different eigenvalues are close to zero for
$t\in[t'_c,3t'_c]$, where $t'_c$ is the larger of the two critical
times for this quench. For the opposite direction, on the other hand,
we find a roughly symmetric structure around the first two critical
times, very similar to the SSH case. Note that finite-size effects
strongly influence the results for $t/t_c\gtrsim 4$.
The data for both the SSH and the Kitaev chain are thus compatible
with the idea that the boundary contribution $l_B(t)$ and the
Loschmidt spectrum, $\text{spec}(\mathbf{M})$, are sensitive to how
many more or fewer edge states are present for $H_1$ as compared to
$H_0$.

\section{Long-range entanglement}
\label{Entangle}
We have established a bulk-boundary correspondence numerically and
have shown that the spectrum of $\mathbf{M}$ contains information
about the edge states. However, the spectrum of this matrix is not an
easily measurable quantity and also does not provide a physical
picture of what happens to the edge states during the time
evolution. In this section we therefore want to connect the changes in
the Loschmidt echo with the dynamical entanglement properties of the
system. To this end, we consider the entanglement between the two
halves of an open chain with an even number of sites. The entanglement
entropy is defined as the von Neumann entropy of the reduced density
matrix,
\begin{equation}
 S_{\textrm{ent}}(t) =-\Tr \{\rho_A(t) \ln \rho_A(t)\}\,,
\end{equation}
with $\rho_A(t)=\Tr_B |\Psi(t)\rangle\langle\Psi(t)|$ and
$|\Psi(t)\rangle = \text{e}^{-iH_1t}|\Psi_0\rangle$ being the
time-evolved state. The system has been divided into two blocks $A$
and $B$ of equal size.
Here we will focus on the SSH model.

For a Gaussian model the entanglement between a subsystem and the rest
can be calculated from the correlation matrix $\mathbf{\C}(t)$ defined
in Eq.~\eqref{Cij}, with the now time-dependent two-point correlations
restricted to lattice sites within the subsystem.\cite{Peschel2003}
The entanglement entropy is then given by
\begin{equation}
\label{Sent}
S_{\textrm{ent}} = -\sum_j \left[ \eta_j\ln\eta_j +(1-\eta_j)\ln(1-\eta_j)\right] \,
\end{equation}
with $\eta_j$ being the eigenvalues of $\mathbf{\C}(t)$.

In Fig.~\ref{Fig12} the entanglement entropy for a symmetric quench
from the trivial into the topological phase at large $|\delta|$ is
shown.
\begin{figure}[!ht]
\includegraphics*[width=0.99\linewidth]{Figure12.pdf}
\caption{$S_{\textrm{ent}}(t)$ between two halves of the system for a symmetric quench at $|\delta|=0.95$ from the trivial to the topological phase with $N=32$.}
\label{Fig12}
\end{figure}
$S_{\textrm{ent}}(t)$ shows oscillations with local maxima located
exactly at the DPT's. For short times, in particular,
$S_{\textrm{ent}}(t_c)\approx 2\ln 2$, while at longer times the
entanglement entropy on average starts to increase linearly, as
expected for a global quench. These observations can be explained as
follows: In the strongly dimerized case, each dimer bond between the
two subsystems is in a fully entangled state, $(|10\rangle \pm
|01\rangle)/\sqrt{2}$, and contributes $\ln 2$ to the entanglement
entropy. The data in Fig.~\ref{Fig12} therefore indicate that there
are two dimer bonds between the subsystems at the critical times
$(2n+1)t_c$. At times $2nt_c$, on the other hand, the two subsystems
are almost completely disentangled.
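Eq.~\eqref{Sent} follows directly from the restricted correlation matrix; a minimal sketch (the numerical clipping guard is ours), which also reproduces the $\ln 2$ contributed by each cut dimer bond:

```python
import numpy as np

def entanglement_entropy(C_A):
    """Eq. (Sent): von Neumann entropy from the eigenvalues eta_j of
    the correlation matrix restricted to subsystem A."""
    eta = np.clip(np.linalg.eigvalsh(C_A), 1e-15, 1.0 - 1e-15)
    return float(-np.sum(eta * np.log(eta) + (1.0 - eta) * np.log(1.0 - eta)))

# one fully entangled dimer (|10> + |01>)/sqrt(2) cut by the bipartition:
# the restricted 1x1 correlation matrix is [[1/2]], contributing ln 2
S_dimer = entanglement_entropy(np.array([[0.5]]))
```

An occupation eigenvalue $\eta_j\in\{0,1\}$ (a mode fully inside or outside the subsystem) contributes nothing, so only bonds cut by the bipartition enter $S_{\textrm{ent}}$.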
This picture is confirmed by directly considering the two-point
correlations in the system as a function of time, see
Fig.~\ref{Fig13}.
\begin{figure}[!ht]
\includegraphics*[width=0.99\linewidth]{Figure13.pdf}
\caption{Time evolution of two-point correlations for the same quench as in Fig.~\ref{Fig12} for a system with $N=16$ sites and times $t=0,t_c,2t_c,3t_c$. The opacity indicates the strength of the correlation and the dashed gray line is the cut between the two subsystems.}
\label{Fig13}
\end{figure}
The system starts in a topologically trivial, strongly dimerized state
$|\Psi_0\rangle=|\Psi(t=0)\rangle$ and thus
$S_{\textrm{ent}}(t=0)\approx 0$. At the critical time $t_c$, two
dimer bonds have formed which cross the cut between the subsystems,
leading to $S_{\textrm{ent}}(t_c)\approx 2\ln 2$. For short times,
the system oscillates between these two configurations, with other
correlations slowly starting to build up and finally leading to a
linearly increasing entanglement entropy. Note that for a finite
system this linear increase is cut off at
$S^{\textrm{max}}_{\textrm{ent}}=\frac{N}{2}\ln 2$.

For the quench in the opposite direction, shown in Fig.~\ref{Fig14},
$S_{\textrm{ent}}$ also shows oscillations with a frequency set by the
critical time $t_c$.
\begin{figure}[!ht]
\includegraphics*[width=0.99\linewidth]{Figure14.pdf}
\caption{$S_{\textrm{ent}}(t)$ between two halves of the system for a symmetric quench at $|\delta|=0.95$ from the topological to the trivial phase with $N=32$.}
\label{Fig14}
\end{figure}
In this case $S_{\textrm{ent}}(t=0)\approx 2\ln 2$, and for long times
the entanglement entropy again shows the expected linear increase.
To\nunderstand this behavior it is again instructive to consider the time\nevolution of the two-point correlations, see Fig.~\\ref{Fig15}.\n\\begin{figure}[!ht]\n\\includegraphics*[width=0.99\\linewidth]{Figure15.pdf}\n\\caption{Time evolution of two-point correlations for the same quench as in Fig.~\\ref{Fig14} for a system with $N=16$ sites and times $t=0,t_c,2t_c,3t_c$. The opacity indicates the strength of the correlation and the dashed gray line is the cut between the two subsystems. Long-range entanglement between the edges of the system is present at all times.}\n\\label{Fig15}\n\\end{figure}\nAt time $t=0$ a nearest-neighbor dimer and a dimer between the two\nedge sites is cut. Such long-range entanglement between the boundaries\nof an open SSH chain has been discussed previously in\nRef.~\\onlinecite{CamposVenuti2007a}. Interestingly, the long-range\nentanglement persists at all times during the unitary time evolution\ndespite the strong quench perturbing the system. While at times\n$2nt_c$ the two edge sites are strongly entangled, the long-range\nentanglement moves to the sites one removed from the edge for times\n$(2n+1)t_c$, see Fig.~\\ref{Fig15}. The different entanglement\nstructure in the two cases explains the oscillations seen in the\nentanglement entropy shown in Fig.~\\ref{Fig14} while the slow build-up\nof additional correlations explains the linear increase of\n$S_{\\textrm{ent}}(t)$ at longer times.\n\n\\section{Conclusions}\n\\label{Concl}\nIn this paper we have studied dynamical phase transitions in open\nchains with symmetry protected topological phases. Specifically, we\nhave concentrated on two examples: the SSH chain and a long-range\nKitaev model. In both cases we have shown that for a quench between\ndifferent topological phases there is not only a cusp in the bulk\nreturn rate but also a jump in the boundary ($1\/N$) contribution. 
In\ncontrast to the bulk part, the boundary return rate $l_B(t)$ is\nsensitive to the direction of the quench. For the SSH model, in\nparticular, we found that the jump in $l_B(t)$ at a DPT is orders of\nmagnitude larger for a quench from the trivial to the topological\nphase than in the other direction. A clear qualitative difference\nbetween the quench directions can also be seen in the Loschmidt\neigenvalue spectrum: While for the quench into the topological phase\ntwo eigenvalues which behave very differently on both sides of the DPT\nare largely responsible for the boundary contribution, the spectrum\nnear a DPT is almost symmetric for a quench in the opposite direction.\n\nThe critical times $t_c$ at which DPTs occur are also clearly visible\nas oscillations in the entanglement entropy between two halves of the\nSSH chain. We found that these oscillations can be explained by the\ntime-dependent structure of two-point correlations. At short times and\nlarge dimerizations the system oscillates between two configurations\nof correlations. Starting from the topological phase we discussed, in\nparticular, the long-range entanglement which is transferred from the\nboundary sites onto the sites one removed from the boundary going from\ntimes $2nt_c$ to $(2n+1)t_c$. Quite surprisingly, the long-range\nentanglement in this case remains stable despite the fact that the\nquench introduces a large perturbation to the system.\n\n\acknowledgments\nJS acknowledges support by the Natural Sciences and Engineering\nResearch Council (NSERC, Canada) and by the Deutsche\nForschungsgemeinschaft (DFG) via Research Unit FOR 2316. 
Support for this research at Michigan State University (NS) was provided by the Institute for Mathematical and Theoretical Physics with funding from the office of the Vice President for Research and Graduate Studies.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn the last few years, it has become increasingly clear that SN~1987A\nin the Large Magellanic Cloud (LMC) was a remarkable, but highly\nunusual event. One of the major surprises of SN~1987A was the fact\nthat the star that exploded was a blue supergiant rather than a red\nsupergiant as had been predicted. While there have been many attempts\nin the early years to explain the blue-supergiant progenitor within\nthe framework of single stellar evolution theory (e.g., Woosley,\nPinto \\& Ensman 1988; for a review see Podsiadlowski 1992), these\nmodels are no longer viable with the best, up-to-date input physics\n(Woosley 1997). In particular, the recent large increase in stellar\nopacities (Rogers \\& Iglesias 1992) had as an immediate consequence\nthat there are no longer any plausible parameters for which a massive\nstar first becomes a red supergiant and then experiences a late blue\nloop after helium core burning (Woosley 1997). Of course, even if such\nmodels could be constructed, they still would not be promising models\nfor the progenitor of SN~1987A, since they still could not explain any\nof the other major anomalies of this event: the complex triple-ring\nnebula surrounding the progenitor and the various chemical anomalies\n(see Podsiadlowski 1992 and section~3). It is now almost certain\nthat these anomalies are somehow connected to binary evolution. In\nsection~2, I therefore first review some of the main, relevant\nprinciples of binary stellar evolution theory and, in section~3,\nsummarize the observational constraints any model for the progenitor\nhas to fulfill. 
In section~4, I present in detail an updated version\nof a merger scenario that at present provides the only framework in\nwhich not only some, but all of these anomalies can be\nunderstood. Throughout this review, I emphasize the still substantial\ntheoretical uncertainties in this model and indicate how observations\nmay help to conclusively verify it in all its details.\n\n\\section{Binary interactions}\n\nAs is not very widely known in the supernova community, most stars in the\nsky are actually members of binary (or multiple) systems. To zeroth order,\n{\\em all} stars are members of binaries (see, e.g., the references in\nPodsiadlowski, Joss \\& Hsu 1991 [PJH] and Ghez 1996). Of course to have its\npresupernova structure altered, a star has to be in a close,\ninteracting binary where at least one star fills its Roche lobe during\nits evolution. The fraction of massive stars in interacting binaries\ncan be estimated to be in the range of 30$\\,$--$\\,50\\,$\\%. For\nexample, Garmany, Conti \\& Massey (1980) found that $36\\,$\\% of massive stars\nare spectroscopic binaries with massive companions with periods less\nthan $\\sim 1\\,$yr. This estimate would imply a {\\em true}\ninteracting-binary frequency of around $50\\,$\\%. It is worth noting\nthat Roche-lobe overflow occurs more frequently in evolved phases\nsimply because the radius of a star expands only by a factor of $\\sim\n2$ during the main sequence, while it expands by a factor of $\\sim 100$\nsubsequently. This means that, for any plausible orbital-period\ndistribution, a star is much more likely to encounter mass transfer after\nits main-sequence phase. Since stars spend the largest fraction of their\nlives on the main sequence, most stars observed in the sky have not\n(yet) experienced a binary interaction. On the other hand, supernovae\nprobe the very final stage in the evolution of a star. Therefore a large\nfraction of all supernova progenitors are affected by a previous binary\ninteraction. 
This is, at least in part, responsible for the \nlarge variety of observed supernova types.\n\n\nIn general, two qualitatively very different modes of mass transfer\ncan be distinguished: more-or-less conservative mass transfer and\ndynamically unstable mass transfer.\n\n{\em (Quasi-)conservative mass transfer} usually occurs when the\nmass donor has a radiative envelope and the mass ratio of the\nmass-accreting to the mass-losing component is not too small (e.g.,\nde Loore \& de Gr\`eve 1992). Then, a large fraction\n($\ga 0.5$) of the mass lost from the primary is accreted by\nthe secondary. Thus in this case, both components of the binary are\naffected, one by losing mass, the other by accreting it. During mainly\nconservative mass transfer, the orbital period tends to increase.\n\n\n\n{\em Dynamical mass transfer} usually takes place when the mass donor\nis a giant star with a deep convective envelope. In this case, mass\ntransfer is dynamically unstable and leads to the formation of a\ncommon envelope surrounding the core of the giant and the secondary\n(Paczy\'nski 1976). Due to friction between this immersed binary and\nthe common envelope, the orbit of the binary starts to shrink.\nDepending on how much energy is released in the orbital decay of the\nbinary and is deposited in the envelope, two different outcomes are\npossible. 
If the deposited energy exceeds the binding energy of the\nenvelope, the common envelope can be ejected, leaving a very close\nbinary consisting of a helium star (or Wolf-Rayet star) and a normal\ncompanion star (which is hardly affected by the common-envelope\nphase).\nIf the energy is not sufficient to unbind\nthe envelope, the less dense component of the immersed binary will\nultimately be tidally destroyed and the two components merge completely\nto form a single, but rapidly rotating giant (see section~4.2).\t\n\n\nIn a typical binary scenario, a binary system may \nexperience several different phases of\nmass transfer, which is the reason for the large variety of possible\nbinary scenarios. In the following, I will concentrate on the main\nconsequences of the various mass-transfer types for the structure\nof the immediate supernova progenitors (for a more detailed discussion\nsee PJH and Hsu et al.\\ 1996).\n\n\\subsection{Mass loss}\n\nSome 30$\\,$--$\\,50\\,$\\% of all massive stars experience Roche-lobe\noverflow and mass loss at some point during their evolution. \nIn most cases, they lose all of their\nhydrogen-rich envelopes and become helium stars or Wolf-Rayet stars,\nwhich are excellent candidates for type Ib\/Ic supernovae. In some\ncases, however, it is possible for the primary to retain part of\nits hydrogen-rich envelope, in particular when mass transfer occurs\nvery late during the evolution of the primary and when the initial mass\nratio is very close to one. However, even in this case, at most a few\nsolar masses can be retained in the envelope. The immediate supernova\nprogenitor will be a {\\em stripped supergiant} with a small envelope\nmass (Joss et al.\\ 1988). \nIt is interesting to note that the outer appearance of this\nstar will be very similar to that of a star that has lost no mass at\nall, since the radius and the luminosity of the star are almost\nindependent of the envelope mass (PJH). 
The\nresulting supernova will, however, be quite different. The light curve\nwill show no extended plateau phase but resemble a type II-L\nor, in the most extreme case, a type IIb supernova (Nomoto et al.\\ 1993;\nPodsiadlowski et al.\\ 1993; Woosley et al.\\ 1994; Hsu et al.\\ 1996). \n\n\\subsection{Mass accretion}\n\n\nJust as the structure and appearance of the mass-losing star can be\nstrongly affected, the structure and further evolution \nof the accreting companion can also be dramatically altered by mass transfer.\nThis depends, however, strongly on the evolutionary stage of the\naccreting secondary at the beginning of the mass-transfer phase.\n\\par\\medskip\n\\noindent {\\em Secondary on the main sequence}\\par\\noindent\nIf the secondary is still on the main sequence at the beginning of\nthe mass-transfer phase, the secondary is usually {\\em\nrejuvenated} and will behave subsequently (after the mass-transfer \nphase) like a single,\nbut now more massive star (Hellings 1983; PJH).\nHowever, as was shown by Braun \\& Langer (1995), this need\nnot be the case if accretion occurs very late on the main sequence\nand if semi-convection is very slow (combined with the Ledoux\ncriterion for convective instability), since the convective core will\nthen not grow significantly as a result of accretion. The subsequent\nevolution will resemble the evolution of a star that accreted after the\nmain-sequence phase (see below); in particular, the star may never \nbecome a red supergiant and spend the whole post-main-sequence phase\nin the blue-supergiant region of the Hertzsprung-Russell (H-R)\ndiagram.\n\\par\\medskip\\noindent\n{\\em Secondary has completed hydrogen core burning}\\par\\noindent\nIf the secondary has already left the main sequence before the\nbeginning of the mass-transfer phase, the subsequent evolution will\ngenerally differ quite substantially \nfrom the evolution of a normal single star. 
\nSince the mass of the helium core will not grow as a result\nof mass accretion, the main effect of post-main-sequence accretion is\nto increase the envelope mass relative to the core mass. As was first\nshown by Podsiadlowski \& Joss (1989) (see also De Loore \&\nVanbeveren 1992; PJH), this has the consequence that\nthe star will now not become a red supergiant or, if it was a red\nsupergiant at the time of the accretion phase, leave the\nred-supergiant region and spend its remaining lifetime as a blue\nsupergiant (see also Barkat \& Wheeler 1989 for a related scenario).\nThe final location of the star in the H-R diagram\nat the time of the supernova depends on how much mass has been\naccreted in the accretion phase. The more mass the star has accreted,\nthe bluer it will be. The final supernova will be of the SN~1987A variety \n(type II[blue]). Since a star has to accrete only a few percent of its\nown mass from a binary companion to be spun up to critical rotation,\nthe progenitor is expected to pass through a phase where it is \nrapidly rotating. Rapid rotation may also induce large-scale mixing in the\naccreting star (even across chemical gradients). \nHowever, whether this happens depends critically on the internal\nangular-momentum transport and the structure of the accreting star.\n\n\subsection{Binary mergers}\n\n\nThe most dramatic type of binary interaction is dynamical mass\ntransfer leading to a common-envelope and spiral-in phase. If the\norbital energy released during the spiral-in is sufficient to eject\nthe common envelope, the end product is a short-period binary\nconsisting of a Wolf-Rayet primary and a normal stellar companion.\nThe primary will eventually explode as a type Ib\/Ic supernova. 
If the\nsystem remains bound, the secondary is also likely to experience\nmass loss in the future leading to a second mass-transfer phase, just\nas in the mass-accretion scenarios discussed in the previous\nsubsection.\n\nIf the common envelope is not ejected, the two components will merge\ncompletely to produce a more massive and rapidly rotating\nsingle star. This end product resembles in many ways the outcome\nof the post-main-sequence accretion models in section~2.2. In\nparticular, it may end its evolution as a blue supergiant rather than\nas a red supergiant either because of the added mass in the envelope\n(Podsiadlowski, Joss \\& Rappaport 1990) or because of the dredge-up\nof helium (Hillebrandt \\& Meyer 1989) or both.\nThe resulting supernova may again be of the SN~1987A variety.\nPodsiadlowski et al.\\ (1991) have estimated that the combined frequency\nfor a blue-supergiant progenitor in either an accretion or a merger\nscenario is about 5$\\,$\\% of all core-collapse supernovae with an uncertainty\nof about a factor of two.\n\\par\\medskip\n\n\n\n\\noindent{\\em Complications.} \nIn this section, I only discussed the main types of interactions.\nHowever, many binaries experience more than one phase of mass transfer.\nFor example, low-mass helium stars, produced in a first mass-transfer phase,\nwill expand again after core helium burning to become helium giants \nand may fill their Roche lobes for a second time (so-called case BB \nmass transfer; De Gr\\`eve \\& De Loore 1977; Delgado \\& Thomas 1981).\nThis still does not exhaust the whole variety of multiple \nmass-transfer scenarios. 
Nomoto {et al.}\\ (1994) \nsummarized various scenarios to produce bare CO cores as progenitors\nof type Ic supernovae, which involve up to {\\it three} mass-transfer phases.\n\\par\n\n\n\\section{Observational Constraints on the Progenitor of SN~1987A}\n\n\\subsection{The blue-supergiant progenitor}\n\nOne of the major early surprises of SN~1987A was the fact that its\nknown progenitor, \\mbox{SK~$-69^{\\circ}$202}, was a blue supergiant\nrather than a red supergiant, the normally expected supernova\nprogenitor. Moreover, from the dynamical age of the surrounding\nlow-velocity nebula ($\\sim 30000\\,$yr) one can infer that it was a red\nsupergiant just a few $10^4\\,$yr ago. Any model of the progenitor has\nto explain this recent transition {\\em and} be consistent with the\ngeneral behaviour of massive stars in the LMC in the H-R diagram.\nMany of the early models already failed this obvious test (see\nPodsiadlowski 1992).\n\n\\subsection{The triple-ring nebula} \n\nOne of the most spectacular features of SN~1987A is \nthe complex, but very axisymmetric nebula surrounding\nthe supernova, which was formed out of material that was ejected from\nthe progenitor in the not-too-distant past. The main geometry of the\nnebula consists of three rings, an inner ring centered on the\nsupernova (Wampler et al.\\ 1990; Jacobson et al.\\ 1991) and two rings\ndisplaced to the South and the North, but in approximate alignment\nwith the symmetry axis of the inner ring (Wampler et al.\\ 1990;\nBurrows et al.\\ 1995).\nThis geometry implies an axisymmetric, but highly non-spherical\nstructure of the envelope of the progenitor and\/or its winds.\nThe origin of this non-sphericity provides a severe constraint\nfor models of the progenitor. A plausible mechanism to produce\nthe required asymmetry is the flattening of the progenitor's envelope\ncaused by rapid rotation (Chevalier \\& Soker 1989). 
However,\nstraightforward angular-momentum considerations show that a single\nstar which was rapidly rotating on the main sequence would be a slow\nrotator in any subsequent supergiant phase and could not possibly \nbe significantly flattened at the time of the supernova explosion.\nRecent HST observations (Pun 1997) which show that the supernova ejecta\nthemselves are elongated along the symmetry axis of the nebula\nprovide further evidence for a rapidly rotating, flattened (oblate) \nprogenitor, since in this case the supernova shock will propagate faster\nin the polar directions, thereby generating a prolate structure\nof the ejecta.\n\n\\subsection{The chemical anomalies}\n\nThe third major surprise of the supernova involves a number of\nchemical anomalies in the progenitor's hydrogen-rich envelope\nand in the presupernova ejecta.\\par\\medskip\n\n{\\noindent\\em The inner ring: the helium anomaly}\\par\\nobreak\\medskip\\noindent\nAs is now firmly established, the composition of the inner ring shows\nthat significant amounts of CNO-processed material have been dredged up to\nthe surface (with N\/C$\\,\\sim\\,$5, N\/O$\\,\\sim\\,$1 [all ratios are by number]) \nand that, most importantly, the helium abundance in the inner ring is \nabout twice solar (He\/H$\\,\\sim\\,$0.25; H\\\"oflich 1988; Allen, Meikle \\& \nSpyromilio 1989; Fransson et al.\\ 1989; Wang 1991; Lundqvist \\& Fransson 1996;\nSonneborn et al.\\ 1997).\n\n\\medskip\n{\\noindent\\em The outer rings: two dredge-up phases?}\n\\par\\medskip\\noindent\nRecently, Panagia et al.\\ (1996) found that the chemical composition\nof the outer rings is significantly different from the composition of\nthe inner ring, indicating a smaller amount of dredge-up of\nCNO-processed material (N\/C$\\,\\sim\\,$2; N\/O$\\,\\sim\\,$0.6). 
This, if confirmed,\nmight suggest that there were two dredge-up phases, the first\nassociated with the normal convective dredge-up when a star becomes a\nred supergiant and develops a deep convective envelope and the second,\na few $10^4\,$yr ago, connected with the event that produced all the\nother anomalies as well. Determination of the helium abundance of the\nouter rings could shed more light on the details of this process.\n\par\medskip\n{\noindent\em The barium anomaly}\n\par\medskip\nobreak\noindent\nThe third chemical anomaly, which most people had hoped would go away, is the\nenhancement of barium (by a factor of 5$\,$--$\,$10) and other\ns-process elements in the progenitor's envelope\n(Williams 1987; H\"oflich 1988; \nMazzali, Lucy \& Butler 1992; Mazzali \& Chugai 1995). This suggests\nthe simultaneous occurrence of hydrogen and some helium-burning\nreactions (in particular, C$^{13} + \alpha$) in the outer\nparts of the core, similar to the process that produces these elements\nin S stars on the asymptotic-giant branch (Sanders 1967).\n\n\nAll of these anomalies together suggest that there was a single,\ndramatic event a few $10^4\,$yr ago that is responsible for them.\nSeveral of these show clear fingerprints of binary interactions.\nJust as in any good mystery story, there are plenty of traces and clues,\nand it is just up to us to decipher their meanings and to reconstruct what\n{\em really} happened to the progenitor of SN~1987A.\n\n\section{The Progenitor of SN~1987A: a Binary Merger}\n\begin{figure}[p]\n\plotone{podsiadlowski1.eps}\n\end{figure}\n\nOver the last ten years, numerous binary models have been suggested\n(for a review, see Podsiadlowski 1992; see also Rathnasree 1993;\nBraun \& Langer 1995). Both accretion and merger\nmodels (see section~2) can explain a rapidly rotating blue-supergiant\nprogenitor. 
However, since in the former class of models it is the\nsecondary that accretes from a more evolved star and since all of this\nshould have occurred in the very recent past, this would require the\nmasses of the stars to have been extremely close initially. This\nimplies enormous fine-tuning of the binary parameters and therefore is\nno longer a favoured model. It now appears most likely that the\nprogenitor was a binary that merged with its companion in the recent\npast (as originally proposed independently by Hillebrandt \\& Meyer\n[1989] and Podsiadlowski {et al.}\\ [1990]; also see Chevalier \\& Soker\n[1988] for an earlier suggestion).\n\nIn this section, I will present an updated version of the merger\nscenario outlined in Podsiadlowski (1992) and schematically\nillustrated in Figure~1 (details will be published in Podsiadlowski\n1997). The calculations use the chemical abundances\nfor massive LMC stars determined by Russell \\& Bessell (1989) and\nRussell \\& Dopita (1990) (with $Z=0.01$ but an uncertain carbon\nabundance), updated opacities by Rogers \\& Iglesias (1992) and\nAlexander (1994), as provided by Eggleton (1997).\n\nInitially, the progenitor was a member of a very typical binary,\nconsisting of a primary of $\\sim 15\\,M_{\\odot}$ and a significantly\nless massive secondary (5$\\,$--$\\,10\\,M_{\\odot}$) in a fairly wide\norbit (with an orbital period of $\\sim 10\\,$yr), so that the primary\nstarted to fill its Roche lobe only on the asymptotic-giant branch\n(i.e., on its second ascent of the red-supergiant branch after helium\ncore burning). The mass of the companion is not well determined by the\nmodel at the present time. Indeed, a relatively low-mass companion\ncould be sufficient. The system then experienced dynamical mass\ntransfer, leading to the complete merger of the two stars. 
The end\nproduct of this evolution is a single, but very rapidly rotating red\nsupergiant, which has been thoroughly stirred up during the merging\nprocess (explaining the main chemical anomalies, provided that this\nenvironment allows for s-processing). The star will now want to shrink\nto become a blue supergiant, producing a rotationally forced disk-like\noutflow in the process. In the subsequent blue-supergiant phase, the\nenergetic blue-supergiant wind sweeps up all the structures\ngenerated previously and produces the triple-ring nebula (for a model\nof the outer rings in a merger scenario, see Podsiadlowski, Fabian \\&\nStevens 1991; Lloyd, O'Brien \\& Kahn 1995; Podsiadlowski \\& Cumming\n1995).\\par\n\n\\begin{figure}[t]\n\\special{psfile='podsiadlowski2.eps' vscale=35 hscale=35 angle=-90 hoffset=30}\n\\vspace{2.6truein}\n\\noindent{{\\bf Figure 2.} \nRepresentative merger calculations for a star with an initial mass of\n18\\hbox{$\\,M_\\odot $}\\ merging with a 10\\hbox{$\\,M_\\odot $}\\ star after helium core burning for different\namounts of helium dredge-up (solid and dashed curves) and final\nmasses as indicated.}\n\\vspace{-0.1in}\n\\end{figure}\n\nFigure~2 shows several representative merger calculations based on\nan analytic theory for the merger process \n(Podsiadlowski 1997; and Podsiadlowski\n\\& Spruit 1997), for a primary with an initial mass of 18\\hbox{$\\,M_\\odot $}\\ merging\nwith a 10\\hbox{$\\,M_\\odot $}\\ companion. In the calculations represented by solid\ncurves, 1.2\\hbox{$\\,M_\\odot $}\\ of the helium core is dredged-up during the merger and\nthe final masses of the merged stars are 20 and 25\\hbox{$\\,M_\\odot $}, respectively (as\nindicated). Both stars have surface abundances of helium and the CNO elements \nthat are in excellent agreement with the observational constraints\n(section~3.3). 
In the model shown as a dashed curve, the merger is even\nmore dramatic and the final object would chemically be classified as a\nbarium star, though this particular model is somewhat too hot and too\nluminous for the progenitor of SN~1987A.\n\n\\subsection{Formation of the inner ring}\n\nThe production of the inner ring requires a disk-like outflow in the\nred-supergiant phase. It has been suggested that gravitational\nfocusing by a non-interacting companion (similar to what may happen in\nsome planetary nebulae, e.g., Morris 1981) could provide such a\nfocusing mechanism. However, the constraints on any distant companion\nafter the supernova are quite stringent, and its mass could not be\nlarger than $\\sim 2\\hbox{$\\,M_\\odot $}$ (e.g., Plait et al.\\ 1995). It is easy to\nshow, using a Bondi-Hoyle-type wind theory, that the fraction of the\nred-supergiant wind that is gravitationally affected by the companion\nis of order ($v_{\\rm orb}\/v_{\\rm wind})\\,[M_2\/(M_1+M_2)]^2$, where\n$v_{\\rm orb}$ and $v_{\\rm wind}$ are the orbital velocity of the\nsecondary and the wind velocity, respectively, while $M_1$ and $M_2$\nare the masses of the primary and the secondary, respectively. For\nplausible parameters for the SN~1987A progenitor, this fraction cannot\nexceed $\\sim 5\\,$\\%, which is completely insufficient to explain the\nlarge observed asymmetry.\n\nOn the other hand in a merger scenario, the problem is not a lack of\nrotation but an excess thereof (Podsiadlowski 1992; Chen \\& Colgate 1995),\nsince all the orbital angular momentum in the pre-merger binary will be\ndeposited in the envelope of the progenitor, spinning it up in the process.\nIndeed, for the parameters of the model shown in Figure~2, the total orbital\nangular momentum ($\\sim 9\\times 10^{54}\\,$erg s) substantially exceeds \nthe maximum angular momentum of a dynamically stable blue supergiant \n($\\sim 4\\times 10^{54}\\,$erg s). 
This immediately implies that the\nmerged system will pass through a phase of critical surface rotation in the\nfinal red-blue transition, leading to rotationally enforced, equatorial\nmass loss. One can obtain a rough estimate for the minimum amount of mass that\nneeds to be shed in this process by dividing the excess angular momentum\nby the specific orbital angular momentum of the initial binary. This\nyields\n$$\Delta M \sim {\Delta L\over\sqrt{G M D}}\sim 4\hbox{$\,M_\odot $}\,\left({D\over 10 \n{\rm AU}}\right)^{-1\/2},$$\nwhere $D$ is the characteristic co-rotation radius \nassociated with the mass loss (assumed here to be of order the initial\nbinary separation). \n\nThis estimate implies that one expects at least\nseveral solar masses to be lost after the merger. How this mass loss took\nplace in detail is not so clear at the moment. It may involve\na rotationally focused wind, as in the standard model for Be-star disks\nby Bjorkman \& Cassinelli \n(1993), or dynamical instabilities in a \ncritically rotating object (e.g., Durisen et al.\ 1986;\nLivio \& Soker 1988; Taam \& Bodenheimer\n1991; Chen \& Colgate 1995). In both cases, one would\nexpect the formation of a disk-like, equatorial structure with a small\nradial velocity (as required to explain the low expansion velocity of \n$10\,$km s$^{-1}$ of the inner ring).\n\nIn this context it is worth noting that there is an observed system\nthat seems to have merged in the very recent past and that shows some\nsimilarities to the SN~1987A progenitor: V Hydrae is a carbon-star\ngiant rotating near break-up (Barnbaum, Morris \& Kahane 1995). The\nsystem also shows evidence for a disk-like equatorial and a biconical\npolar outflow. Barnbaum et al.\ (1995) even suggested that the\ncarbon-star characteristics of V Hyd may be a direct consequence of\nthe dredge-up of carbon {\em caused} by the merging process. 
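Both order-of-magnitude estimates in this section are easy to reproduce numerically. The sketch below is our own check, not part of the original paper; the stellar masses and the 15 km s$^{-1}$ wind speed are assumed illustrative values:

```python
import math

G, MSUN, AU = 6.674e-8, 1.989e33, 1.496e13   # CGS units

# Fraction of a red-supergiant wind focused by a distant companion:
# (v_orb / v_wind) * [M2 / (M1 + M2)]^2, with assumed M1 = 18 Msun,
# M2 = 2 Msun (the post-supernova upper limit), D = 10 AU, v_wind = 15 km/s.
M1, M2, D = 18 * MSUN, 2 * MSUN, 10 * AU
v_orb = math.sqrt(G * (M1 + M2) / D)          # relative orbital velocity
fraction = (v_orb / 15e5) * (M2 / (M1 + M2)) ** 2
print(f"focused wind fraction ~ {fraction:.1%}")   # a few per cent at most

# Minimum mass shed by the critically rotating merger product: the excess
# angular momentum, (9 - 4) x 10^54 erg s from the text, divided by the
# specific orbital angular momentum sqrt(G M D), with M = 20 Msun (Fig. 2).
dL, M = 9e54 - 4e54, 20 * MSUN
dM = dL / math.sqrt(G * M * D)
print(f"Delta M ~ {dM / MSUN:.1f} Msun")           # ~4 Msun, as quoted
```

With these numbers the focused fraction comes out below the $\sim 5\,$\% bound and $\Delta M$ close to the quoted $4\hbox{$\,M_\odot $}$.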
While\nthis system is not entirely comparable to the progenitor of SN~1987A,\none may hope to learn more about this relatively unexplored, but not\nat all uncommon ($\\sim\\,$5\\%--$\\,10\\,$\\%) phase of binary evolution from V Hyd\nand, of course, SN~1987A.\n\n\\subsection{The merger phase}\n\nOne of the least studied aspects of the merger scenario involves the details\nof the final merging of the two binary components, the secondary and\nthe helium core of the red supergiant, inside the common envelope.\nWe (Podsiadlowski \\& Spruit 1997) have started to look at this process in\nsome detail (also see Meyer \\& Meyer-Hofmeister 1979; Taam \\& Bodenheimer\n1989, 1991; Terman, Taam \\& Hernquist 1994).\n\nThe spiral-in phase ends when the embedded secondary starts to fill\nits own Roche lobe (at a separation of $\\sim 10\\hbox{$\\,R_\\odot $}$). It then begins\nto transfer mass to the helium core. This mass-transfer process\nresembles in many respects the ``normal'' mass transfer in interacting\nbinaries, except that it occurs inside a low-density, opaque\nenvelope. The mass-transfer rate is determined by the frictional drag\nthe orbiting binary experiences with the common envelope. The\ntimescale for the final destruction of the secondary is uncertain.\nRough guesses range from a few days (assuming that the envelope does\nnot expand as a result of the spiral-in) to hundreds of years\n(assuming that the frictional luminosity is self-regulated at the\nEddington limit; e.g., Meyer \\& Meyer-Hofmeister [1979]). In the\ncalculations presented earlier, I adopted, somewhat arbitrarily, a\ntimescale of $1\\,$yr. Because of the relatively large size of the\nhelium core, the stream emanating from the Roche-lobe filling\nsecondary does not self-intersect and form an accretion disk but\nrather impacts with the core and penetrates it. In other words, it\ndrills a hole into the core and starts to erode it, causing\nthe dredge-up of helium-rich material. 
The\npenetration depth can be estimated from the condition that the ram\npressure in the stream must be of the order of the ambient pressure in\nthe helium core. One major uncertainty in this estimate is how much\nenergy is dissipated inside the stream by internal shocks (which must\noccur because of the pressure focusing of the stream and\nangular-momentum constraints). The characteristic temperature of the\nshock-heated material in the stream-impact region is $T\\sim 2\\times\n10^8\\,$K. Since this material is hydrogen-rich, but the temperature\nmore typical of helium burning, one can expect some unusual\nnucleosynthesis in this region, possibly responsible for the more\nexotic chemical anomalies of SN~1987A like the barium anomaly.\n\nPreliminary calculations using extended nuclear-reaction networks\nshow that significant s-processing (with neutron exposures\nof $\\sim 10^{27}\\,$cm$^{-2}$ with C$^{13}+\\alpha$ as neutron source) \nis possible in this environment. This requires\nthat the dredge-up region of the core is continually mixed with\nsome of the hydrogen-rich envelope that serves as a proton reservoir\nfor an extended period of time (of order the merger timescale). \nUnfortunately, some of the key reaction rates are very temperature sensitive,\nwhile the burning conditions are not sufficiently well determined by\nthe present analytic model to allow very firm conclusions.\nIn addition, these calculations suggest that the abundances of the\nrare nitrogen and oxygen isotopes N$^{15}$ and O$^{17}$ may\nbe overabundant by a large factor. This could provide a potentially\nsensitive tracer for the burning conditions in this environment.\n\n\n\n\n\\section{Concluding Remarks}\n\nAs I have shown in this review, binary interactions are almost certainly\nresponsible for some of the large variety of observed supernova types\nin general and therefore cannot be ignored. 
As far as SN~1987A is concerned,\na merger scenario provides at present the only model for the progenitor \nthat can explain all the major features of this remarkable event. \nThere are still many theoretical uncertainties,\nin particular involving the details of the merging process and the\nassociated nucleosynthesis. Observations of abundances in different\nparts of the nebula may be particularly useful in helping to reconstruct the\ndetailed merger history (for example, the helium abundance in the\nouter rings). In addition, there may be other chemical anomalies \n(overabundances of N$^{15}$ and O$^{17}$) that may be used as direct\ntracers of the unusual burning conditions that occur during the final\nmerging process. \n\nWhile a lot has been learned about the progenitor of SN~1987A in the last\nten years, there still remains much more to be done, both theoretically and\nobservationally. Ultimately, when the supernova blast wave reaches the\ninner ring, at least the structure of the whole nebula will finally be\nrevealed and confirm or refute some of the aspects of the merger scenario.\n\n\\subsection{Proof of Lemma~\\ref{corollary:hellinger_projection_lb_product}}\n\\label{proof:lemma:hellinger_projection_lb_bayes_net}\n\nWe start by defining the \\emph{projection} of a distribution onto a DAG structure, a notion which will be used in the next subsection. \n\\begin{definition}\n    The \\emph{projection} of a distribution $P$ on $\\{ 0, 1 \\}^n$ onto a DAG $G :=\n  \\{ (i, \\Pi_i) \\mid i \\in \\{ 1, \\ldots, n \\} \\}$ (where, without loss of generality, $\\{ X_i \\}_{i \\in [n]}$ is assumed in topological order) is the distribution $Q$, Markov w.r.t. $G$, defined as\n  follows. 
Let $Q_{X_i, X_{\\Pi_i}} := P_{X_i, X_{\\Pi_i}}$ and $Q_{X_{\\Pi_i}} :=\n  P_{X_{\\Pi_i}}$ for $i \\in [n]$, and, for $x\\in\\{0,1\\}^n$, define \n  \\[ Q (x) = Q_{X_1} (x_1) \\prod_{i = 2}^n Q_{X_i \\mid X_{\\Pi_i}} (x_i \\mid x_{\\Pi_i})\n  . \\]\n\\end{definition}\n\nSince squared Hellinger distance is an $f$-divergence, by the data processing inequality, we have that\n\\begin{equation}\\label{eq:hellinger:projection}\nd_{\\rm{}H} (P, Q) \\geqslant d_{\\rm{}H} (\\pi P, \\pi Q)\n\\end{equation}\nfor any $P$ and $Q$ on $\\{ 0, 1 \\}^n$ and any projection $\\pi$ on a subset of random variables. We will establish the more general statement below, which readily implies Lemma~\\ref{corollary:hellinger_projection_lb_product} by taking $k=0$:\n\\begin{lemma}\n  \\label{lemma:hellinger_projection_lb_bayes_net}Let $G_k$ be a degree-$k$ DAG, $k \\leqslant n$. Let $P'$ be\n  the projection of $P$ onto $G_k$, supported on $\\{0, 1\\}^n$, and let\n  $\\mathcal{Q}$ be the set of all distributions Markov w.r.t. $G_k$ on $\\{0,\n  1\\}^n$. Then, we have\n  \\[\n  \\min_{Q \\in \\mathcal{Q}} d_{\\rm{}H} (P, Q) \\geqslant \\frac{1}{1 + \\sqrt{n}} d_{\\rm{}H}\n  (P, P') .\n  \\]\n\\end{lemma}\n \\begin{proof}\n  Let $Q \\in \\mathcal{Q}$ be arbitrary. By the subadditivity of Hellinger\n  distance for Bayes nets from Corollary~\\ref{corollary:squared_hellinger_subadditivity} along with~\\eqref{eq:hellinger:projection},\n  \\begin{eqnarray*}\n    d_{\\rm{}H} (P, P') & \\leqslant & d_{\\rm{}H} (P, Q) + d_{\\rm{}H} (Q, P')\\\\\n    & \\leqslant & d_{\\rm{}H} (P, Q) + \\Big(\\sum_{i = 1}^n d_{\\rm{}H}^2 (P'_{X_i,\n    X_{\\Pi_i}}, Q_{X_i, X_{\\Pi_i}}) \\Big)^{1\/2}\\\\\n    & = & d_{\\rm{}H} (P, Q) + \\Big(\\sum_{i = 1}^n d_{\\rm{}H}^2 (P_{X_i,\n    X_{\\Pi_i}}, Q_{X_i, X_{\\Pi_i}})\\Big)^{1\/2}\\\\\n    & \\leqslant & d_{\\rm{}H} (P, Q) + \\sqrt{n\\,d_{\\rm{}H}^2 (P, Q)}\\\\\n    & = & \\left( 1 + \\sqrt{n} \\right) d_{\\rm{}H} (P, Q),\n  \\end{eqnarray*}\n  where the equality uses that $P'_{X_i, X_{\\Pi_i}} = P_{X_i, X_{\\Pi_i}}$ by definition of the projection, and the last inequality follows from~\\eqref{eq:hellinger:projection}. Since $Q \\in \\mathcal{Q}$ was arbitrary, the claim follows.\n  \\end{proof}\n\\subsection{Proof of Lemma~\\ref{lemma:technical_distance_lb_2xbinomials_conditional_tree}}\n\\begin{lemma}[Lemma~\\ref{lemma:technical_distance_lb_2xbinomials_conditional_tree}, restated] \n  There exist $C_1,C_2>0$ such that the following holds. Let $\\varepsilon\\in(0,1]$ and $n \\geq C_1$, and let $a,b$ be two integers such that $a+b=n$ and $b \\geqslant\n  \\frac{1}{4} n$. Then, for $\\delta \\coloneqq \\frac{\\varepsilon}{\\sqrt{n}}$, we have\n  \\[ \n  \\frac{(1 - \\delta)^n}{2^n} \\sum_{k_1 = 0}^{a}\\sum_{k_2 = 0}^{b} \\binom{a}{k_1} \\binom{b}{k_2} \\mleft|\n  \\mleft( \\mleft( \\frac{1 + \\delta}{1 - \\delta} \\mright)^{k_1 + k_2} - \\mleft(\n  \\frac{1 + \\delta}{1 - \\delta} \\mright)^{k_1 + b - k_2} \\mright) \\mright|\n  \\geqslant C_2\\varepsilon . 
\\]\n\\end{lemma}\n\\begin{proof}\n  \\label{proof:lemma:technical_distance_lb_2xbinomials_conditional_tree}\n  By concentration of Binomials,\n  \\begin{eqnarray*}\n    &  & \\left( \\frac{1}{2} \\right)^n (1 - \\delta)^n \\sum_{k_1 =\n    0}^{a} \\sum_{k_2 = 0}^{b} \\binom{a}{k_1}\n    \\binom{b}{k_2} \\left| \\left( \\left( \\frac{1 + \\delta}{1 - \\delta}\n    \\right)^{k_1 + k_2} - \\left( \\frac{1 + \\delta}{1 - \\delta} \\right)^{k_1\n    + b - k_2} \\right) \\right|\\\\\n    & \\geqslant & \\left( \\frac{1}{2} \\right)^n (1 - \\delta)^n \\sum_{k_1 =\n    \\frac{a}{2} + \\sqrt{a}}^{\\frac{a}{2} + 2 \\sqrt{a}}\n    \\sum_{k_2 = \\frac{b}{2} + \\sqrt{b}}^{\\frac{b}{2} + 2\n    \\sqrt{b}} \\binom{a}{k_1} \\binom{b}{k_2} \\left| \\left(\n    \\left( \\frac{1 + \\delta}{1 - \\delta} \\right)^{k_1 + k_2} - \\left(\n    \\frac{1 + \\delta}{1 - \\delta} \\right)^{k_1 + b - k_2} \\right)\n    \\right|\\\\\n    & \\geqslant & \\left( \\frac{1}{2} \\right)^n \\frac{C}{\\sqrt{a\n    b}} \\cdot 2^{a + b} \\sum_{k_1 = \\frac{a}{2} +\n    \\sqrt{a}}^{\\frac{a}{2} + 2 \\sqrt{a}} \\sum_{k_2 =\n    \\frac{b}{2} + \\sqrt{b}}^{\\frac{b}{2} + 2 \\sqrt{b}} (1 -\n    \\delta)^n \\left| \\left( \\left( \\frac{1 + \\delta}{1 - \\delta}\n    \\right)^{k_1 + k_2} - \\left( \\frac{1 + \\delta}{1 - \\delta} \\right)^{k_1\n    + b - k_2} \\right) \\right|\\\\\n    & \\geqslant & \\frac{C}{\\sqrt{a b}} \\sum_{k_1 =\n    \\frac{a}{2} + \\sqrt{a}}^{\\frac{a}{2} + 2 \\sqrt{a}}\n    \\sum_{k_2 = \\frac{b}{2} + \\sqrt{b}}^{\\frac{b}{2} + 2\n    \\sqrt{b}} (1 - \\delta)^n \\left| \\left( \\left( \\frac{1 + \\delta}{1 -\n    \\delta} \\right)^{k_1 + k_2} - \\left( \\frac{1 + \\delta}{1 - \\delta}\n    \\right)^{k_1 + k_2 + b - 2 k_2} \\right) \\right|\n  \\end{eqnarray*}\n  where $C > 0$ is an absolute constant. 
For $k_1$ and $k_2$ in the ranges above, $l \\coloneqq k_1 + k_2 - \\frac{n}{2} \\in\n  \\left[ \\sqrt{a} + \\sqrt{b}, 2 \\left( \\sqrt{a} + \\sqrt{b}\n  \\right) \\right]$; in particular, $l \\geqslant \\sqrt{a} + \\sqrt{b} \\geqslant \\sqrt{a + b} = \\sqrt{n}$.\n  \\begin{eqnarray}\n    &  & (1 - \\delta)^n \\left| \\left( \\left( \\frac{1 + \\delta}{1 - \\delta}\n    \\right)^{k_1 + k_2} - \\left( \\frac{1 + \\delta}{1 - \\delta} \\right)^{k_1 +\n    k_2 + b - 2 k_2} \\right) \\right| \\nonumber\\\\\n    & = & \\left| \\left( (1 - \\delta)^n \\left( \\frac{1 + \\delta}{1 - \\delta}\n    \\right)^{n \/ 2 + l} - (1 - \\delta)^n \\left( \\frac{1 + \\delta}{1 - \\delta}\n    \\right)^{n \/ 2 + l + b - 2 k_2} \\right) \\right| \\nonumber\\\\\n    & = & (1 - \\delta^2)^{n \/ 2} \\left( \\frac{1 + \\delta}{1 - \\delta} \\right)^l\n    \\left| 1 - \\left( \\frac{1 + \\delta}{1 - \\delta} \\right)^{b - 2 k_2}\n    \\right| \\nonumber\\\\\n    & \\geqslant & e^{- \\delta^2 n} e^{2 \\delta \\cdot l} \\left| 1 - \\left(\n    \\frac{1 + \\delta}{1 - \\delta} \\right)^{b - 2 k_2} \\right| \\geqslant e^{2\n    \\varepsilon - \\varepsilon^2} \\left| 1 - \\left( \\frac{1 + \\delta}{1 -\n    \\delta} \\right)^{b - 2 k_2} \\right| \\label{eq:bin_asym_1}\\\\\n    & \\geqslant & e^{\\varepsilon} \\left( 1 - \\left( \\frac{1 + \\delta}{1 -\n    \\delta} \\right)^{b - 2 k_2} \\right) \\geqslant e^{\\varepsilon} (1 - e^{2\n    \\delta (b - 2 k_2)}) . \\label{eq:bin_asym_2}\n  \\end{eqnarray}\n  where (\\ref{eq:bin_asym_1}) and (\\ref{eq:bin_asym_2}) follow from $1 - x\n  \\geqslant e^{- 2 x}$ for $0 < x < 0.79$ and $e^{4 x} \\geqslant \\left( \\frac{1 +\n  x}{1 - x} \\right) \\geqslant e^{2 x}$ for $0 < x < 0.95$; for (\\ref{eq:bin_asym_2}), since $b - 2 k_2 < 0$, the lower bound $\\frac{1 + \\delta}{1 - \\delta} \\geqslant e^{2 \\delta}$ yields $\\left( \\frac{1 + \\delta}{1 - \\delta} \\right)^{b - 2 k_2} \\leqslant e^{2 \\delta (b - 2 k_2)}$. All these inequalities\n  hold for $n$ larger than some constant and every $\\varepsilon \\in (0, 1]$. Since\n  $b \\geqslant \\frac{1}{4} n$ (so that $\\sqrt{b} \\geqslant \\frac{\\sqrt{n}}{2}$), and by the summation above $b - 2 k_2 \\in\n  \\left[ - 4 \\sqrt{b}, - 2 \\sqrt{b} \\right]$,\n  \\[ e^{\\varepsilon} (1 - e^{2 \\delta (b - 2 k_2)}) \\geqslant\n     e^{\\varepsilon} \\left( 1 - e^{- 4 \\frac{\\varepsilon}{\\sqrt{n}} \n     \\sqrt{b}} \\right) \\geqslant e^{\\varepsilon} (1 - e^{- 2 \\varepsilon})\n     \\geqslant \\varepsilon, \\]\n  and therefore, summing over the $\\sqrt{a} \\cdot \\sqrt{b}$ terms of the double sum, we have our lower bound\n  \\[\n  \\frac{C}{\\sqrt{a b}} \\sum_{k_1 = \\frac{a}{2} +\n    \\sqrt{a}}^{\\frac{a}{2} + 2 \\sqrt{a}} \\sum_{k_2 =\n    \\frac{b}{2} + \\sqrt{b}}^{\\frac{b}{2} + 2 \\sqrt{b}} (1 -\n    \\delta)^n \\left| \\left( \\left( \\frac{1 + \\delta}{1 - \\delta}\n    \\right)^{k_1 + k_2} - \\left( \\frac{1 + \\delta}{1 - \\delta} \\right)^{k_1\n    + k_2 + b - 2 k_2} \\right) \\right| \\geqslant C \\varepsilon\n  \\]\n  concluding the proof.\n\\end{proof}\n\n\\subsection{Proof of~\\eqref{fact:intermedia_upper_bound_on_multigraph_cycle_term:restated}}\n\\label{app:fact:intermedia_upper_bound_on_multigraph_cycle_term:proof}\n\n\\begin{fact}\n  \\label{fact:intermedia_upper_bound_on_multigraph_cycle_term}\n  For any set of cycles such that $\\sum_i\n  | \\sigma_i | \\leqslant n$, we have\n  \\[\n  \\prod_{\\substack{\\sigma_i : \\tmop{even}\\\\ | \\sigma_i | \\geqslant 4}} (1 + (- 4\n  \\delta)^{| \\sigma_i |}) \\prod_{\\substack{\\sigma_i : \\tmop{odd}\\\\ | \\sigma_i | \\geqslant 4}} (1 - (- 4 \\delta)^{| \\sigma_i |})\n  \\leqslant e^{O \\left( \\varepsilon^5 \/ n^{\\frac{3}{2}} \\right)}\n  \\prod_{\\substack{\\sigma_i : \\tmop{even}\\\\ | \\sigma_i | = 4}} (1 + (4 \\delta)^4)\n  \\prod_{\\substack{\\sigma_i : \\tmop{odd}\\\\ | \\sigma_i | = 4}} (1 - (4 \\delta)^4)\n  \\]\n  \\begin{proof}\n    We have\n    \\begin{eqnarray*}\n      \\prod_{\\sigma_i : \\tmop{even}} &  & (1 + (- 4 \\delta)^{| \\sigma_i |})\n      \\prod_{\\sigma_i : \\tmop{odd}} (1 - (- 4 \\delta)^{| \\sigma_i |})\\\\\n      & = & \\prod_{\\substack{\\sigma_i : \\tmop{even}\\\\ | 
\\sigma_i | \\geq 5}} (1 + (-\n      4 \\delta)^{| \\sigma_i |}) \\prod_{\\substack{\\sigma_i : \\tmop{odd}\\\\ | \\sigma_i | \\geq 5}} (1 - (- 4 \\delta)^{| \\sigma_i |})\n      \\prod_{\\substack{\\sigma_i : \\tmop{even}\\\\ | \\sigma_i | = 4}} (1 + (- 4\n      \\delta)^{| \\sigma_i |}) \\prod_{\\substack{\\sigma_i : \\tmop{odd}\\\\ | \\sigma_i | = 4}}\n      (1 - (- 4 \\delta)^{| \\sigma_i |})\\\\\n      & \\leqslant & \\prod_{\\sigma_i : | \\sigma_i | \\geqslant 5} (1 + (4 \\delta)^{|\n      \\sigma_i |}) \\prod_{\\sigma_i : \\tmop{even}, | \\sigma_i | = 4} (1 + (- 4\n      \\delta)^{| \\sigma_i |}) \\prod_{\\sigma_i : \\tmop{odd}, | \\sigma_i | = 4}\n      (1 - (- 4 \\delta)^{| \\sigma_i |})\\\\\n      & \\leqslant & (1 + (4 \\delta)^5)^{\\frac{n}{4}} \\prod_{\\sigma_i :\n      \\tmop{even}, | \\sigma_i | = 4} (1 + (- 4 \\delta)^{| \\sigma_i |})\n      \\prod_{\\sigma_i : \\tmop{odd}, | \\sigma_i | = 4} (1 - (- 4 \\delta)^{|\n      \\sigma_i |})\\\\\n      & \\leqslant & e^{(4 \\delta)^5 n \/ 4} \\prod_{\\sigma_i : \\tmop{even}, |\n      \\sigma_i | = 4} (1 + (- 4 \\delta)^{| \\sigma_i |}) \\prod_{\\sigma_i :\n      \\tmop{odd}, | \\sigma_i | = 4} (1 - (- 4 \\delta)^{| \\sigma_i |})\\\\\n      & \\leq & e^{256\\varepsilon^5 \/ n^{\\frac{3}{2}}}\n      \\prod_{\\sigma_i : \\tmop{even}, | \\sigma_i | = 4} (1 + (- 4 \\delta)^4)\n      \\prod_{\\sigma_i : \\tmop{odd}, | \\sigma_i | = 4} (1 - (- 4 \\delta)^4)\n    \\end{eqnarray*}\n    Since $(- 4 \\delta)^4 = (4 \\delta)^4$, this is exactly the claimed bound.\n  \\end{proof}\n\\end{fact}\n\n\n\n\n\\section{Structured Testing Lower Bound}\\label{sec:gen}\n\nLetting $D = 2^n$, we will rely on the construction from the ``standard'' lower bound of~{\\cite{paninski2008coincidence}} by picking a uniformly\nrandom subset $S$ of $\\{0, 1\\}^n$ of size $\\frac{D}{2}$. Denote by\n$\\mathcal{S}$ the set of all such subsets $S$, and define\n$\\mathcal{P}_{\\tmop{no}}$ to be $\\mathcal{P}_{\\tmop{no}} := \\left\\{ P =\n\\frac{1 + C \\varepsilon}{2} U_S + \\frac{1 - C \\varepsilon}{2} U_{S^c} \\mid S \\in \\mathcal{S}\n\\right\\}$, where $C>0$ is a suitable normalizing constant. 
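To make the construction concrete, the following is a small self-contained sketch (our own illustration; the helper names are ours, and the product $C\varepsilon$ is passed as a single parameter) that builds one such ``no''-instance and computes its total variation distance to uniform, which is exactly $C\varepsilon/2$ regardless of the choice of $S$:

```python
import random

def paninski_no_instance(n, c_eps, rng):
    """Build one 'no'-instance over {0,1}^n (points encoded as 0..D-1):
    pick a uniformly random subset S of half the domain, then put mass
    (1 + c_eps)/D on each point of S and (1 - c_eps)/D on the rest."""
    D = 2 ** n
    S = set(rng.sample(range(D), D // 2))
    return [(1 + c_eps) / D if x in S else (1 - c_eps) / D for x in range(D)]

def tv_to_uniform(pmf):
    """Total variation distance between pmf and the uniform distribution."""
    D = len(pmf)
    return 0.5 * sum(abs(p - 1.0 / D) for p in pmf)
```

Every point contributes $|P(x) - 1/D| = C\varepsilon/D$, so the distance is $\frac{1}{2} \cdot D \cdot \frac{C\varepsilon}{D} = \frac{C\varepsilon}{2}$.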
As before, $U_S$ denotes the uniform distribution on the set $S$;\neach $P \\in \\mathcal{P}_{\\tmop{no}}$ is thus a mixture of two uniform\ndistributions on disjoint parts, with different weights.\n\nIt is known that $\\Omega (2^{n \/ 2} \/ \\varepsilon^2)$\nsamples are required to distinguish between such a randomly chosen $P$ and\nthe uniform distribution $U$; moreover, recall that the uniform distribution $U$\nbelongs to $\\mathcal{C}$ by assumption. What remains to show is the \\emph{distance} part, that is, that ``most'' choices of $P \\in\n\\mathcal{P}_{\\tmop{no}}$ are $\\varepsilon$-far from $\\mathcal{C}$. To argue that\nlast part, we will use our assumption that $\\mathcal{C}$ can be learned with\n$m$ samples to conclude by a counting argument: i.e., we will show that there can be at most $2^{mn}$\nor so ``relevant'' elements of $\\mathcal{C}$, while there are at least $2^{2^{\\Omega (n)}}$ distributions in $\\mathcal{P}_{\\tmop{no}}$ that are $\\varepsilon$-far from each\nother. Suitably combining the two will establish the theorem below:\n\\begin{theorem}\n  \\label{theorem:testing_hard_for_easy_to_learn_dist}Let $\\mathcal{C}$ be a\n  class of probability distributions over $\\{0, 1\\}^n$ such that the following\n  holds: (1) the uniform distribution belongs to $\\mathcal{C}$; and (2) there\n  exists a learning algorithm for $\\mathcal{C}$ with sample complexity $m = m\n  (n, \\varepsilon)$. 
Then, as long as $mn \\ll 2^{O (n)}$, testing whether an\n  arbitrary distribution over $\\{0, 1\\}^n$ belongs to $\\mathcal{C}$ or is $\\varepsilon$-far\n  from every distribution in $\\mathcal{C}$ in total variation distance requires\n  $\\Omega (2^{n \/ 2} \/ \\varepsilon^2)$ samples.\n\\end{theorem}\n\n\\begin{proof}\n  As discussed above, indistinguishability follows from the literature~{\\citep{paninski2008coincidence}}, and thus all we need to show now is that $\\mathcal{P}_{\\tmop{no}}$ is far from every distribution in $\\mathcal{C}$.\n  By assumption (2), there exists an algorithm $H\\colon \\{0, 1\\}^{mn} \\to\n  \\mathcal{P}$ (without loss of generality, we assume $H$ is deterministic) which outputs an estimated\n  distribution given $m = m (n, \\varepsilon)$ samples from some $P \\in \\mathcal{C}$. Thus, for every $P\n  \\in \\mathcal{C}$, given $m$ i.i.d.\\ samples $X \\in \\{0, 1\\}^{mn}$, $\\Pr_{X \\sim P^{\\otimes m}} (d_{\\tmop{TV}} (H (X), P) < \\varepsilon)\n  \\geqslant 2 \/ 3$. \n  \n  In particular, this implies the weaker statement that, for every $P \\in \\mathcal{C}$, there\n  exists \\emph{some} $x$ in $\\{0, 1\\}^{mn}$ s.t. $P \\in B(H (x), \\varepsilon)$ (where $B(x,r)$ denotes the TV distance ball of radius $r$ centered at $x$).\n  By enumerating all possible values in $\\{0, 1\\}^{mn}$, we can then obtain an\n  $\\varepsilon$-cover $\\{H (x_1), \\ldots, H (x_{2^{mn}})\\}$ of $\\mathcal{C}$, that is, such that\n  $\\mathcal{C} \\subseteq \\bigcup_{i = 1}^{2^{mn}} B (H (x_i), \\varepsilon)$. 
The $\\varepsilon$-covering number of $\\mathcal{C}$ is thus upper bounded by $2^{O (mn)}$.\n  \n  Next, we construct a large $\\varepsilon$-packing $P_{\\varepsilon} \\subseteq \\mathcal{P}_{\\tmop{no}}$, i.e., a set such that $d_{\\tmop{TV}} (P_i, P_j) > \\varepsilon$ for all distinct $P_i, P_j \\in P_{\\varepsilon}$.\n  For $P,Q \\in \\mathcal{P}_{\\tmop{no}}$ corresponding to two sets $S,S'$, each of size $\\frac{D}{2}=2^{n-1}$, we have\n  \\begin{eqnarray*}\n    d_{\\tmop{TV}} (P,Q) & = & \\frac{1}{2} | S \\triangle S' | \\cdot \\left|\n    \\frac{1 + C \\varepsilon}{2} - \\frac{1 - C \\varepsilon}{2} \\right| \\cdot \n    \\frac{2}{D} = C\\varepsilon\\cdot \\frac{| S \\triangle S' |}{D}\n  \\end{eqnarray*}\n  For this to be greater than $\\varepsilon$, the pairwise\n  symmetric difference of (the sets corresponding to the) distributions in $P_{\\varepsilon}$ should be at least $\\frac{D}{C} = \\Omega(2^n)$. We know, by, e.g., \\citet[Proposition~3.3]{Blais_2019}, that there exist such families of balanced subsets of $\\{0,1\\}^n$ of cardinality at least $\\Omega(2^{2^{\\rho n}})$, where \n  $\\rho>0$ is a constant that only depends on $C$.\n  \n  Thus, $\\mathcal{P}_{\\tmop{no}}$ contains an $\\varepsilon$-packing of size $\\Omega(2^{2^{\\rho n}})$; combining this lower bound with the upper bound on the covering number of $\\mathcal{C}$ concludes the proof.\n\\end{proof}\n\nAs a corollary, instantiating the above to the class $\\mathcal{C}$ of degree-$d$ Bayes nets over $n$ nodes readily yields the following:\n\\begin{corollary}\n  \\label{corollary:lb_general}\n  For large enough $n$, testing whether an arbitrary distribution over\n  $\\{0, 1\\}^n$ is a degree-$d$ Bayes net or is $\\varepsilon$-far from every such\n  Bayes net requires $\\Omega (2^{n \/ 2} \/ \\varepsilon^2)$ samples, for any $d = o\n  (n)$ and $\\varepsilon \\geq 2^{- O (n)}$.\n\\end{corollary} \n \\begin{proof}\n  We can obtain a learning upper bound of $m = O (2^d n \\log (2^{d + 
1} n)\n \\log (n^{d n}) \/ \\varepsilon^2)$ for degree-$d$ Bayes nets by combining\n the known-structure case (proven in~{\\citet{BhattacharyyaGMV20}}) with the\n reduction from known-structure to unknown-structure (via hypothesis\n selection\/tournament~{\\citep{CanonneDKS20}}). We have $m n \\leqslant O\n (2^d n^6 \/ \\varepsilon^2)$. To have $2^{mn} \\ll 2^{2^{\\rho n}}$, where\n $\\rho$ is some constant, we need $m n < 2^{O (n)}$, which requires $d = o\n (n)$ and $\\varepsilon \\geq 2^{- O (n)}$ for large enough $n$.\n \\end{proof}\n\n\\subsection{Related Work}\nDistribution testing has been an active and rapidly progressing research program for the last 20+ years; see \\cite{rubinfeld2012taming} and \\cite{canonne2020survey} for surveys. One of the earliest works in this history was that of \\cite{batu2001testing} who studied testing independence of two random variables. There followed a series of papers \\citep{alon2007testing, levi2013testing, AcharyaDK15}, strengthening and generalizing bounds for this problem, culminating in the work of \\cite{diakonikolas2016new} who gave tight bounds for testing independence of distributions over $[n_1]\\times \\cdots \\times [n_d]$. \\citet{HaoL20} recently considered the (harder) problem of estimating the distance to the closest product distribution (i.e., \\emph{tolerant} testing), showing this task could, too, be performed with a sublinear sample complexity.\n\nThough most of the focus has been on testing properties of arbitrary input distributions, it has long been recognized that distributional restrictions are needed to obtain sample complexity improvements. For example, \\cite{rubinfeld2009testing, adamaszek2010testing} studied testing uniformity of monotone distributions on the hypercube. Similarly, \\cite{daskalakis2012learning} considered the problem of testing monotonicity of $k$-modal distributions. 
More recently, \\cite{CanonneDKS20, daskalakis2016square, BhattacharyyaGMV20, bhattacharyya2021testing} have studied identity testing and closeness testing for distributions that are structured as degree-$d$ Bayes nets. Our work here continues this research direction in the context of independence testing.\n\n\\subsection{Our techniques}\n\\noindent\\textit{Upper bound.} The starting point of our upper bound is the (standard) observation that a distribution $P$ over $\\{0,1\\}^n$ is far from being a product if, and only if, it is far from the product of its marginals. By itself, this would not lead to any savings over the trivial exponential sample complexity. However, we can combine this with a localization result due to~\\cite{daskalakis2016square}, which then guarantees that if the degree-$d$ Bayes net $P$ is at \\emph{Hellinger} distance $\\varepsilon$ from the product of its marginals $P'$, then there exists \\emph{some} vertex $i\\in[n]$ such that $P_{i,\\Pi_i}$ (the marginalization of $P$ onto the set of nodes consisting of $i$ and its (at most $d$) parents) is at Hellinger distance at least $\\frac{\\varepsilon}{\\sqrt{n}}$ from $P'_{i,\\Pi_i}$. \nThese two facts, combined, seem to provide exactly what is needed: indeed, given access to samples from $P$ and any fixed set of $d+1$ vertices $S$, one can easily simulate samples from both $P_S$ and $P'_S$ (for the second, using $d+1$ samples from $P$ to generate one from $P'_S$, as $P'_S$ is the product of marginals of $P_S$). A natural idea is then to iterate over all $\\binom{n}{d+1}$ possible subsets $S$ of $d+1$ variables and check whether $P_S=P'_S$ for each of them using a closeness testing algorithm for arbitrary distributions over $\\{0,1\\}^{d+1}$: the overhead due to a union bound and the sampling process for $P_S,P'_S$ adds a factor $O(d\\cdot \\log\\binom{n}{d+1}) = O(d^2\\log n)$ to the closeness testing procedure. 
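The simulation step just described is simple enough to state in code. The sketch below (our own illustration, with hypothetical names) turns i.i.d.\ samples from $P_S$ into samples from the product of its marginals $P'_S$, by spending one fresh input sample per output coordinate:

```python
import random

def product_of_marginals_samples(samples, num_out):
    """Given i.i.d. samples from a distribution P_S over {0,1}^k, emit
    samples distributed as the product of its marginals P'_S: coordinate j
    of each output is copied from a fresh input sample, so the output
    coordinates are mutually independent.  Consumes num_out * k inputs."""
    k = len(samples[0])
    assert len(samples) >= num_out * k, "need k fresh input samples per output"
    it = iter(samples)
    return [tuple(next(it)[j] for j in range(k)) for _ in range(num_out)]
```

With $k = d+1$, this is exactly the factor-$(d+1)$ sample overhead mentioned above.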
However, since testing closeness over $\\{0,1\\}^{d+1}$ to total variation distance $\\varepsilon'$ has sample complexity $O(2^{2d\/3}\/\\varepsilon'^2)$ and, by the quadratic relation between Hellinger and total variation distances, we need to take $\\varepsilon' = \\frac{\\varepsilon^2}{n}$, we would then expect the overall test to result in a $\\tilde{O}(2^{2d\/3}n^2\/\\varepsilon^4)$ sample complexity~--~much more than what we set out for.\n\nA first natural idea to improve upon this is to avoid the back and forth between Hellinger and total variation distance, and save on this quadratic blowup. Namely, by using an optimal closeness testing algorithm directly for Hellinger distance~\\citep{diakonikolas2016new}, we could in the last step keep $\\varepsilon' =\\frac{\\varepsilon}{\\sqrt{n}}$ (saving on this quadratic blowup), and pay only overall $\\tilde{O}(2^{3d\/4}\/\\varepsilon'^2)=\\tilde{O}(2^{3d\/4}n\/\\varepsilon^2)$. This is better, but still falls short of our original goal.\n\nThe second idea is to forego closeness testing in the last step entirely, and instead use directly an \\emph{independence} testing algorithm for arbitrary distributions over $\\{0,1\\}^{d+1}$, to test if $P_S$ is indeed a product distribution for every choice of $S$ considered. Unfortunately, while promising, this idea suffers from a similar drawback as our very first attempt: namely, the known independence testing algorithms are all designed for testing in total variation distance (not Hellinger)! 
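The quadratic relationship driving this loss is $d_{\rm H}(P,Q)^2 \le d_{\rm TV}(P,Q) \le \sqrt{2}\, d_{\rm H}(P,Q)$; it is why a total variation guarantee of order $\varepsilon^2/n$ is needed to certify Hellinger distance $\varepsilon/\sqrt{n}$. A quick numerical check on random distributions (our own sketch, not part of the argument) illustrates the two inequalities:

```python
import math
import random

def hellinger(p, q):
    # d_H(P,Q) = sqrt( (1/2) * sum_i (sqrt(p_i) - sqrt(q_i))^2 )
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

def total_variation(p, q):
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def random_dist(k, rng):
    """A random probability vector of length k (normalized uniform weights)."""
    w = [rng.random() for _ in range(k)]
    s = sum(w)
    return [x / s for x in w]
```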
Thus, even using an optimal TV testing algorithm for independence~\\citep{AcharyaDK15,diakonikolas2016new} would still lead to this quadratic loss in the distance parameter $\\varepsilon'$, and a resulting $\\tilde{O}(2^{d\/2}n^2\/\\varepsilon^4)$ sample complexity.\n\nTo get the best of both approaches and achieve the claimed $\\tilde{O}(2^{d\/2}n\/\\varepsilon^2)$, we combine the two insights and perform, for each set $S$ of $d+1$ variables, independence testing on $P_S$ in Hellinger distance. In order to do so, however, we first need to design a testing algorithm for this task, as none was previously available in the literature. Fortunately for us, we are able to design such a testing algorithm (Lemma~\\ref{lemma:hellinger_independence_tester}) achieving the desired~--~and optimal~--~sample complexity. Combining this Hellinger independence testing algorithm over $\\{0,1\\}^{d+1}$ with the above outline finally leads to the $\\tilde{O}(2^{d\/2}n\/\\varepsilon^2)$ upper bound of Theorem~\\ref{theo:main:informal}.\\medskip\n\n\\noindent\\textit{Lower bound.} To obtain our $\\Omega(2^{d\/2}n\/\\varepsilon^2)$ lower bound on testing independence of a degree-$d$ Bayes net, we start with the construction introduced by \\cite{CanonneDKS20} to prove an $\\Omega(n\/\\varepsilon^2)$ sample complexity lower bound on testing \\emph{uniformity} of degree-1 Bayes nets. At a high level, this construction relies on picking uniformly at random a perfect matching $M$ of the $n$ vertices, which defines the structure of the Bayes net; and, for each of the $n\/2$ resulting edges, picking either a positive or negative correlation (with value $\\pm \\varepsilon\/\\sqrt{n}$) between the two vertices, again uniformly at random. 
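A minimal sketch of this matching construction (our own illustration; the helper names are hypothetical) builds the full probability mass function of such a $P_\lambda$ for a small $n$, which makes it easy to verify that every single bit is marginally uniform while matched pairs carry a $\pm\varepsilon/\sqrt{n}$ correlation:

```python
import itertools
import math
import random

def matching_instance(n, eps, rng):
    """Hard instance P_lambda over {0,1}^n: a uniformly random perfect
    matching of the n vertices; for each of the n/2 edges a random sign s,
    and the pair (x_i, x_j) gets probability (1 + s*delta*(-1)^(x_i+x_j))/4
    with delta = eps/sqrt(n), i.e. matched bits agree w.p. (1 + s*delta)/2."""
    assert n % 2 == 0
    delta = eps / math.sqrt(n)
    verts = list(range(n))
    rng.shuffle(verts)
    edges = [(verts[2 * k], verts[2 * k + 1], rng.choice([-1, 1]))
             for k in range(n // 2)]
    pmf = {}
    for x in itertools.product([0, 1], repeat=n):
        p = 1.0
        for i, j, s in edges:
            p *= (1 + s * delta * (-1) ** (x[i] + x[j])) / 4
        pmf[x] = p
    return pmf
```

Since each pair's marginal on either bit is exactly uniform, only the pairwise correlations distinguish $P_\lambda$ from $U$.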
One can check relatively easily that every Bayes net $P_{\\lambda}$ obtained this way, where $\\lambda$ encodes the matching $M$ and the $2^{n\/2}$ signs, is (1)~a degree-$1$ Bayes net, (2)~at total variation $\\varepsilon$ from the uniform distribution $U$. The bulk of their analysis then lies in showing that (3)~$\\Omega(n\/\\varepsilon^2)$ samples are necessary to distinguish between $U$ and such a randomly chosen $P_\\lambda$. Generalizing this lower bound construction and analysis to independence testing (not just uniformity), and to degree-$d$ (and not just degree-$1$) Bayes nets turns out to be highly non-trivial, and is our main technical contribution.\n\nIndeed, in view of the simpler and different $\\Omega(2^{d\/2}\\sqrt{n}\/\\varepsilon^2)$ sample complexity lower bound for uniformity testing degree-$d$ Bayes nets obtained in~\\cite{CanonneDKS20} \\emph{when the structure of the Bayes net is known},\\footnote{We note that generalizing this (weaker, in view of the dependence on $n$) $\\Omega(2^{d\/2} \\sqrt{n}\/\\varepsilon^2)$ sample complexity lower bound to our setting is relatively simple, and we do so in Appendix~\\ref{section:lb_2} (specifically, Theorem~\\ref{theorem:lb_2}).} one is tempted to adapt the same idea to the matching construction: that is, reserve $d-1$ out of the $n$ vertices to ``encode'' a pointer towards one of $2^{d-1}$ independently chosen $P_\\lambda$'s as above (i.e., the hard instances are now uniform mixtures over $2^{d-1}$ independently generated degree-$1$ hard instances).\nThe degree-$1$ hard instances lead in previous work to a tighter dependence on $n$ (linear instead of $\\sqrt{n}$) because their Bayesian structure is \\emph{unknown}.\nThus, by looking at the mixture of these degree-$1$ Bayes nets, one could hope to extend the analysis of that second lower bound from~\\cite{CanonneDKS20} and get the desired $\\Omega(2^{d\/2}n\/\\varepsilon^2)$ lower bound.\nUnfortunately, there is a major issue with this idea: namely, if 
the $2^{d-1}$ matchings are chosen independently, then the resulting overall distribution is unlikely to be a degree-$d$ Bayes net~--~instead, each vertex will have expected degree $\\Omega(2^d)$! (This was not an issue in the corresponding lower bound of~\\cite{CanonneDKS20}, since for them each of the $2^d$ components of the mixture was a degree-$0$ Bayes net, i.e., a product distribution; so degrees could not ``add up'' across the components).\n\nTo circumvent this, we instead choose the matching $M$ to be common to all $2^{d-1}$ components of the mixture and only pick the sign of their $n\/2$ correlations independently; thus ensuring that every node in the resulting distribution $P$ has degree $d$. This comes at a price, however: the analysis of the $\\Omega(2^{d\/2}\\sqrt{n}\/\\varepsilon^2)$ lower bound from~\\cite{CanonneDKS20} crucially relied on independence across those components, and thus can no longer be extended to our case (where the $2^{d-1}$ distributions $P_\\lambda$ share the same matching $M$, and thus we only have independence across components conditioned on $M$). Handling this requires entirely new ideas, and constitutes the core of our lower bound. In particular, from a technical point of view this requires us to handle the moment-generating-function of squares of Binomials (Lemma~\\ref{lemma:MGF_Binomial_square_bound}), as well as that of (squares of) truncated Binomials (Lemma~\\ref{lemma:MGF_Binomial_square_bound_with_min}). 
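For intuition, a Binomial-square MGF bound of the form used here (cf.\ Lemma~\ref{lemma:MGF_Binomial_square_bound}: $\mathbb{E}[e^{tX^2}] \le e^{16 t m^2 p^2 + 2 t m p}$ for $X \sim \mathrm{Bin}(m, p)$ and $0 < t \le \frac{1}{16 m}$) can be sanity-checked by exact computation for small parameters. The snippet below is our own numerical check, not a proof:

```python
import math

def mgf_bin_square(m, p, t):
    """E[exp(t * X^2)] for X ~ Bin(m, p), computed exactly by summing
    the binomial pmf over all m + 1 outcomes."""
    return sum(math.comb(m, k) * p ** k * (1 - p) ** (m - k) * math.exp(t * k * k)
               for k in range(m + 1))

def mgf_bound(m, p, t):
    """Claimed upper bound exp(16*t*m^2*p^2 + 2*t*m*p), for 0 < t <= 1/(16m)."""
    return math.exp(16 * t * m * m * p * p + 2 * t * m * p)
```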
To do so, we develop in Section~\\ref{ssec:tools:mgf} a range of results on Binomials and Multinomial distributions which we believe are of independent interest.\n\nFinally, after establishing that $\\Omega(2^{d\/2}{n}\/\\varepsilon^2)$ samples are necessary to distinguish the resulting ``mixture of trees'' $P$ from the uniform distribution $U$ (Lemma~\\ref{theorem:lower_bound_uniformity_testing_on_Bayes_Net}), it remains to show that this implies our stronger statement on testing \\emph{independence} (not just uniformity). To do so, we need to show that $P$ is not only far from $U$, but from \\emph{every} product distribution: doing so is itself far from immediate, and is established in Lemma~\\ref{lemma;distance:mixture:of:trees} by relating the distance from the mixture $P$ to every product distribution to the distance between distinct components of the mixture (Lemma~\\ref{lemma:technical:TV_lower_bound_of_its_marginals}), and lower bounding those directly by analyzing the concentration properties of each component $P_\\lambda$ of our construction (Lemma~\\ref{lemma:technical_distance_lb_2xbinomials_conditional_tree}).\n\n\\subsection{Sample Complexity to distinguish from Uniform (Lemma~\\ref{theorem:lower_bound_uniformity_testing_on_Bayes_Net})}\n \\label{ssec:distinguish:uniform}\n\\iffalse %\nWe will also require the two following technical lemmas.\n\\begin{lemma}\n\\label{lemma:proof:mgf:twonorm}\nLet $\\mathbf{p}$ be an arbitrary distribution over $[k]$, $m\\geq 0$, and let $X_1,\\dots,X_k$ denote the vector of counts obtained by drawing $m$ i.i.d.\\ samples from $\\mathbf{p}$ (i.e., $X\\coloneqq (X_1,\\dots,X_k)$ follows a multinomial distribution with parameters $m$ and $(\\mathbf{p}_1,\\dots,\\mathbf{p}_k)$). 
Then,\n\\[\n\\mathbb{E}[e^{t\\norm{X}_2^2}] \\leq e^{16 m^2 t \\norm{\\mathbf{p}}_2^2 + 2mt}\n\\]\nfor every $0 < t < \\frac{1}{16m}$.\n\\end{lemma}\n\\begin{proof}\nBy negative association~\\cite[Lemma 26 and Theorem~31]{Dubhashi96ballsand}, we have\n \\begin{align*}\n \\mathbb{E}[e^{t\\norm{X}_2^2}] \n &= \\mathbb{E}\\mleft[ \\prod_{j = 1}^k e^{t X_j^2} \\mright]\n \\leq \\prod_{j = 1}^k \\mathbb{E}_{X_j \\sim \\tmop{Bin} \\mleft( m, \\mathbf{p}_j \\mright)} [e^{t X^2}]\n \\end{align*}\n From Lemma~\\ref{lemma:MGF_Binomial_square_bound:restated}, we then get that, for every $0< t \\leq 1\/(16m)$, \n \\begin{align*}\n \\mathbb{E}[e^{t\\norm{X}_2^2}] \n &\\leq \\prod_{j = 1}^k e^{16 t m^2 \\mathbf{p}_j^2 + 2 t m \\mathbf{p}_j}\n = e^{16 t m^2 \\norm{\\mathbf{p}}_2^2 + 2tm}\n \\end{align*}\nconcluding the proof.\n\\end{proof}\nThis result will itself be useful in deriving the next lemma, whose proof is deferred to Appendix~\\ref{proof:lemma:average_of_product_with_binomial_raise_by_m}.\n\\begin{lemma}\n \\label{lemma:average_of_product_with_binomial_raise_by_m:restated}\n There exists an absolute constant $\\delta_0\\approx 0.96$ such that the following holds. Let $K\\geq 1$ and $R_1,\\dots, R_K$ be integers, and $\\delta_1,\\dots, \\delta_K\\in(0,\\delta_0]$. \n Suppose that $\\kappa_{j, 1},\n \\ldots, \\kappa_{j, D} \\sim \\tmop{Bin} \\mleft( R_j, \\frac{1}{2} \\mright)$,\n are i.i.d., and mutually independent across $1\\leq j\\leq K$, and $z_j \\coloneqq \\frac{1 +\n \\delta_j}{1 - \\delta_j}$. Then\n \\[\n \\mathbb{E} \\mleft[ \\mleft( \\frac{1}{D} \\sum_{i = 1}^D \\prod_{j=1}^K\n z_j^{\\kappa_{j, i}} \\mright)^m \\mright]\n \\leq \\mleft( \\prod_{j=1}^K z_j^{\\frac{m}{2}R_j} \\mright) \\mleft(\n \\mathbb{E}_{X \\sim \\tmop{Bin} \\mleft( m, \\frac{1}{D} \\mright)} [e^{t\n X^2}] \\mright)^D,\n \\]\n where $t \\coloneqq \\sum_j 2 \\delta_j^2 R_j \\geq 0$. 
Moreover, if $tm \\leq \\frac{1}{16}$, then the factor $\\mleft(\n  \\mathbb{E}_{X \\sim \\tmop{Bin} \\mleft( m, \\frac{1}{D} \\mright)} [e^{t\n  X^2}] \\mright)^D$ in the RHS can be replaced by $e^{16 t \\frac{m^2}{D} + 2mt}$.\n\\end{lemma}\n\\fi %\n\nWe here proceed with the proof of Lemma~\\ref{theorem:lower_bound_uniformity_testing_on_Bayes_Net}, starting with some convenient notation; some of the technical lemmas and facts used here are stated and proven in Section~\\ref{ssec:tools:mgf}. \nLet $\\theta = (\\lambda, \\mu_1, \\ldots, \\mu_{2^{d-1}})$, where each $\\mu_k\\in\\{\\pm 1\\}^{N\/2}$, and let $P_{\\theta}$ be\nthe distribution for the mixture of trees construction from Definition~\\ref{def:mixture:trees}. We\ndefine the \\emph{matching count} between $(\\lambda, \\mu)$ and $x$ as the quantity\n\\[ \nc (\\lambda, \\mu, x) \\coloneqq \\mleft| \\mleft\\{ (i, j) \\in \\{d, \\dots, n\\}^2\n  : \\exists 1\\leq k \\leq \\tfrac{N}{2}, \\lambda_k = (i, j) \\text{ and } (- 1)^{x_i + x_j}\n  = (- 1)^{\\mu_{\\iota(x),k}} \\mright\\} \\mright| . \n\\]\nWe will also introduce an analogous quantity with an ``offset'', for $x_{\\tmop{ch}} = (x_{d},\n\\ldots, x_n)$, referring exclusively to the child nodes of $x$ (i.e., the last $N$ nodes, which are the ``children'' of the first $d-1$ ``pointer nodes'' in our construction),\n\\begin{align}\n&c_{\\tmop{ch}} (\\lambda, \\mu, x_{\\tmop{ch}}) \\coloneqq\\label{def:ch}\\\\ \n &\\mleft| \\mleft\\{ (i, j) \\in \\{d, \\dots, n\\}^2 : \\exists 1\\leq k \\leq \\tfrac{N}{2},\n  \\lambda_k = (i - d + 1, j - d + 1) \\text{ and } (- 1)^{x_i + x_j} = (- 1)^{\\mu_{\\iota(x),k}}\n  \\mright\\} \\mright| \\,. \\notag\n\\end{align}\nTo denote the parameters of the ``mixture of trees,'' we write $\\theta_i \\coloneqq (\\lambda, \\mu_i)$ (for $i\\in[D]$), recalling that the matching parameter $\\lambda$ is common to all $D$ tree components. 
Since each $\\mu_i$\ncorresponds to one of the values of $(x_1, \\ldots, x_{d-1}) \\in \\{0, 1\\}^{d-1}$, we as before use \n$\\iota\\colon \\{0,1\\}^{d-1} \\to [D]$ to denote the indexing function (so that, for instance, $\\iota(x_1 =\n\\cdots = x_{d-1} = 0) = 0$). We finally introduce three more quantities, related to the matching and orientations parameters across the $D$ components of the mixture:\n\\begin{align*}\nA_{\\theta_i, \\theta_i'} &\\coloneqq \\{(s, t) \\in \\{1,\\dots,N \/ 2\\}^2 :\n \\lambda [s] = \\lambda' [t], \\mu_i [s] = \\mu_i' [t] \\} \\tag{common\n pairs, same orientation} \\\\\nB_{\\theta_i, \\theta_i'} &\\coloneqq \\{(s, t) \\in \\{1,\\dots,N \/ 2\\}^2 :\n \\lambda [s] = \\lambda' [t], \\mu_i [s] \\neq \\mu_i' [t] \\}\n \\tag{common pairs, different orientation} \\\\\nC_{\\theta_i, \\theta_i'} & \\coloneqq \n (\\lambda \\cup \\lambda') \\setminus (A \\cup B) \\tag{pairs unique to\n $\\theta$ or $\\theta'$} \n\\end{align*}\nFor ease of notation, we define $A_i \\coloneqq A_{\\theta_i, \\theta_i'}, B_i \\coloneqq\nB_{\\theta_i, \\theta_i'} \\text{ and } C_i \\coloneqq C_{\\theta_i, \\theta_i'}$; and note that $C_i=C_1$, as it only depends on $\\lambda$ (not on the orientation $\\mu_i$).\\label{def:AB:lb}\n\n\n To prove the indistinguishability, we will bound the squared total variation distance (or equivalently,\nsquared $\\ell_1$) distance between the distributions of $m$ samples from (the uniform mixture of) $P_{\\theta}$, or $U$ by a small constant; that is, between $Q \\coloneqq \\mathbb{E}_{\\theta}\n[P_{\\theta}^{\\otimes m}]$ and $U^{\\otimes m}$. 
From Ingster's method (see, e.g., \\cite[Lemma~III.8.]{acharya2018inference}), by using chi-square divergence as an intermediate step we get\n\\begin{align}\n \\norm{Q - U^{\\otimes m} }_1^2 & \\leq d_{\\chi^2} (Q, U^{\\otimes m}) \n %\n = \\mathbb{E}_{\\theta, \\theta'} [(1 + \\tau (\\theta, \\theta'))^m] - 1 \\label{eq:Ingster:restated}\n \\text{,}\n\\end{align}\nwhere $\\tau (\\theta, \\theta') \\coloneqq \\mathbb{E}_{x \\sim U} \\mleft[ \\mleft(\n\\frac{P_{\\theta} (x) - U (x)}{U (x)} \\mright) \\mleft( \\frac{P_{\\theta'} (x) - U\n(x)}{U (x)} \\mright) \\mright]$. \nIn order to get a handle on this quantity $\\tau (\\theta, \\theta')$, we start by writing the expression for the density $P_{\\theta}$ (for a given parameter $\\theta$ of the mixture of trees). For any $x\\in\\{0,1\\}^n$, recalling Item~\\ref{def:mixture:trees:item4} of Definition~\\ref{def:mixture:trees},\n\\begin{eqnarray*}\n P_{\\theta} (x) & = & P_{\\theta} (x_{d}, \\ldots, x_n \\mid x_1, \\ldots, x_{d-1}) U_{d-1}\n (x_1, \\ldots, x_{d-1})\\\\\n & = & \\frac{1}{2^{d-1}} P_{\\lambda, \\mu_{\\iota(x)}} (x_{d}, \\ldots, x_n) \\\\\n & = & \\frac{1}{2^{d-1}} \\cdot \\frac{1}{2^N} (1 + 4 \\delta)^{c (\\lambda, \\mu_{\\iota(x)}, x)} (1 -\n 4 \\delta)^{\\frac{N}{2} - c (\\lambda, \\mu_{\\iota(x)}, x)}\\\\\n & = & \\frac{1}{2^n} (1 + 4 \\delta)^{c (\\lambda, \\mu_{\\iota(x)}, x)} (1 - 4\n \\delta)^{\\frac{N}{2} - c (\\lambda, \\mu_{\\iota(x)}, x)} .\n\\end{eqnarray*}\nSubstituting this in the definition of $\\tau$, we get\n\\begin{align}\n \\tau (\\theta,\\theta')\n & = \\mathbb{E}_{x \\sim U} \\mleft[ \\mleft( \\frac{P_{\\theta} (x)}{U (x)} - 1\n \\mright) \\mleft( \\frac{P_{\\theta'} (x)}{U (x)} - 1 \\mright) \\mright]\\notag\\\\\n & = \\mathbb{E}_{x \\sim U} \\mleft[ \\mleft( (1 - 4 \\delta)^{\\frac{N}{2}}\n \\mleft( \\frac{1 + 4 \\delta}{1 - 4 \\delta} \\mright)^{c (\\lambda, \\mu_{\\iota(x)},\n x)} - 1 \\mright) \\mleft( (1 - 4 \\delta)^{\\frac{N}{2}} \\mleft( \\frac{1 + 4\n \\delta}{1 - 4 
\\delta} \\mright)^{c (\\lambda', \\mu_{\\iota(x)}', x)} - 1 \\mright)\n \\mright] \\notag\\\\\n %\n %\n &= 1 +(1-4\\delta)^N\\mathbb{E}_{x \\sim U} \\mleft[ z_0^{c (\\lambda, \\mu_{\\iota(x)}, x) + c (\\lambda', \\mu_{\\iota(x)}', x)} \\mright] \\notag\\\\\n &\\quad- (1 - 4 \\delta)^{\\frac{N}{2}}\\mathbb{E}_{x \\sim U} \\mleft[ z_0^{c (\\lambda, \\mu_{\\iota(x)}, x)}\n \\mright] -(1 - 4 \\delta)^{\\frac{N}{2}} \\mathbb{E}_{x \\sim U} \\mleft[ z_0^{c (\\lambda', \\mu_{\\iota(x)}', x)}\n \\mright] \\,, \\label{eq:calculation:tau}\n\\end{align}\nwhere $z_0 \\coloneqq \\frac{1 + 4\n\\delta}{1 - 4 \\delta}$. \nAs $x \\sim U_N$, for fixed $\\lambda,\\mu$ we have that $c_{\\tmop{ch}} (\\lambda, \\mu, x)$ follows a $\\tmop{Bin}\n\\mleft( \\frac{N}{2}, \\frac{1}{2} \\mright)$ distribution; recalling the expression of the Binomial distribution's probability\ngenerating function, we then have\n\\begin{eqnarray*}\n (1 - 4 \\delta)^{\\frac{N}{2}} \\mathbb{E}_{x \\sim U} [z_0^{c (\\lambda, \\mu_{\\iota\n (x)}, x)}] & = & (1 - 4 \\delta)^{\\frac{N}{2}} \\mathbb{E}_{\\orgvec{x}_{1} \\sim U_{d-1}}\n \\mleft[ \\mathbb{E}_{\\orgvec{x}_2 \\sim U_N} \\mleft[ z_0^{c_{\\tmop{ch}} (\\lambda, \\mu_{\\iota\n (\\orgvec{x}_1)}, \\orgvec{x}_2)} \\mright] \\mright]\\\\\n & = & (1 - 4 \\delta)^{\\frac{N}{2}} \\frac{1}{D} \\Bigg\\{ \\sum_{i = 1}^D\n \\mathbb{E}_{m \\sim \\tmop{Bin} \\mleft( \\frac{N}{2}, \\frac{1}{2} \\mright)}\n [z_0^m] \\Bigg\\} \\\\\n & = & (1 - 4 \\delta)^{\\frac{N}{2}} \\mleft(\n \\frac{1 + z_0}{2} \\mright)^{\\frac{N}{2}} = 1 \\text{.}\n\\end{eqnarray*}\nUsing this to simplify the last two terms of~\\eqref{eq:calculation:tau}, we obtain\n\\begin{align}\n 1 + \\tau (\\theta, \\theta') \n & = (1 - 4 \\delta)^N \\mathbb{E}_{x \\sim U} [z_0^{c (\\lambda, \\mu_{\\iota(x)}, x) + c (\\lambda', \\mu_{\\iota(x)}', x)}] \\notag\\\\\n & = \\frac{1}{D} \\Bigg\\{ \\sum_{i = 1}^D (1 - 4 \\delta)^N \\mathbb{E}_{x \\sim U_N} [ z_0^{c_{\\tmop{ch}} (\\lambda, \\mu_i, x) + c_{\\tmop{ch}}\n 
(\\lambda', \\mu_i', x)}] \\Bigg\\} \\notag\\\\\n & = \\frac{1}{D} \\Bigg\\{ \\sum_{i = 1}^D (1 - 4 \\delta)^N \\cdot z_0^{|B_i |}\n \\mathbb{E}_{\\alpha \\sim \\tmop{Bin} \\mleft( |A_i |, \\frac{1}{2} \\mright)}\n [z_0^{2 \\alpha}] \\prod_{\\sigma_i : | \\sigma_i | \\geq 4}\n \\mathbb{E}_{\\alpha \\sim \\mathcal{B} (\\sigma_i)}\n [z_0^{\\alpha}] \\Bigg\\} \\notag\\\\\n & = \\frac{1}{D} \\Bigg\\{ \\sum_{i = 1}^D (1 - 4 \\delta)^N \\cdot z_0^{|B_i |}\n \\mleft( \\frac{1 + z_0^2}{2} \\mright)^{|A_i |} \\prod_{\\sigma_i : | \\sigma_i |\n \\geq 4} \\mathbb{E}_{\\alpha \\sim \\mathcal{B} (\\sigma_i)}\n [z_0^{\\alpha}] \\Bigg\\} , \\label{eq:1+tau:restated}\n\\end{align}\nwhere the product is taken over all cycles in the multigraph $G_{\\theta,\\theta'}$ induced by the two matchings; and, given a cycle $\\sigma$, $\\mathcal{B} (\\sigma)$ is the probability distribution defined as follows. Say that a cycle $\\sigma$ is \\emph{even} (resp., \\emph{odd}) if the number of edges with weight $1$ along $\\sigma$ is even (resp., odd); that is, a cycle is even or odd depending on whether the number of negatively correlated pairs along the cycle is an even or odd number. \nIf $\\sigma$ is an \\emph{even cycle}, then $\\mathcal{B} (\\sigma)$ is a Binomial with parameters $|\\sigma|$ and $1\/2$, conditioned on taking even values. Similarly, if $\\sigma$ is an \\emph{odd cycle}, $\\mathcal{B} (\\sigma)$ is a Binomial with parameters $|\\sigma|$ and $1\/2$, conditioned on taking odd values. 
\nIt follows that $\mathbb{E}_{\alpha \sim\n \mathcal{B} (\sigma)} [z_0^{\alpha}]$ is given by the following expression:\n \[ \mathbb{E}_{\alpha \sim \mathcal{B} (\sigma)}\n [z_0^{\alpha}] = \mleft\{\begin{array}{ll}\n \mathbb{E}_{\alpha \sim \tmop{Bin} \mleft( | \sigma |, \frac{1}{2}\n \mright)} \mleft[ z_0^{\alpha} \mid \alpha \text{ even} \mright] = \frac{(1 +\n z_0)^{| \sigma |} + (1 - z_0)^{| \sigma |}}{2^{| \sigma |}} & \text{,\n if } \sigma \text{ is even}\\\n \mathbb{E}_{\alpha \sim \tmop{Bin} \mleft( | \sigma |, \frac{1}{2}\n \mright)} \mleft[ z_0^{\alpha} \mid \alpha \text{ odd} \mright] = \frac{(1 +\n z_0)^{| \sigma |} - (1 - z_0)^{| \sigma |}}{2^{| \sigma |}} & \text{,\n if } \sigma \text{ is odd}\n \end{array}\mright. \text{.} \]\n Denote\n \begin{align*}\n \mathcal{S}_{e, i} &\coloneqq \mathcal{S}_e (\theta_i, \theta_i') = \mleft\{ \sigma\n \in \tmop{cycle} (\theta_i, \theta_i') : | \sigma | \geq 4, \sigma\n \text{ is even} \mright\}, \\\n \mathcal{S}_{o, i} &\coloneqq\mathcal{S}_o (\theta_i, \theta_i') = \mleft\{ \sigma\n \in \tmop{cycle} (\theta_i, \theta_i') : | \sigma | \geq 4, \sigma\n \text{ is odd} \mright\} .\n \end{align*} \n %\n We will often drop $\lambda, \mu$ or $i$ when clear from context. We expand the product of these factors as follows: \n\begin{eqnarray*}\n \prod_{\sigma : | \sigma | \geq 4} \mathbb{E}_{\alpha \sim\n \mathcal{B} (\sigma)} [z_0^{\alpha}]\n & = & \prod_{\sigma : | \sigma | \geq 4, \tmop{even}} \frac{(1 +\n z_0)^{| \sigma |} + (1 - z_0)^{| \sigma |}}{2^{| \sigma |}} \prod_{\sigma :\n | \sigma | \geq 4, \tmop{odd}} \frac{(1 + z_0)^{| \sigma |} - (1 -\n z_0)^{| \sigma |}}{2^{| \sigma |}}\\\n & = & \prod_{\substack{\sigma : | \sigma | \geq 4\\ \tmop{even}}} \frac{(1 +\n z_0)^{| \sigma |}}{2^{| \sigma |}} \!\! 
\\mleft( 1 + \\mleft( \\frac{1 - z_0}{1 +\n z_0} \\mright)^{| \\sigma |} \\mright)\n \\prod_{\\substack{\\sigma : | \\sigma | \\geq 4 \\\\\\tmop{odd}}} \\!\\!\\frac{(1 + z_0)^{|\n \\sigma |}}{2^{| \\sigma |}} \\mleft( 1 - \\mleft( \\frac{1 - z_0}{1 + z_0}\n \\mright)^{| \\sigma |} \\mright)\\\\\n & = & \\prod_{\\sigma : | \\sigma | \\geq 4} \\frac{(1 + z_0)^{| \\sigma\n |}}{2^{| \\sigma |}} \\prod_{\\sigma \\in \\mathcal{S}_e} (1 + (- 4 \\delta)^{|\n \\sigma |}) \\prod_{\\sigma \\in \\mathcal{S}_o} (1 - (- 4 \\delta)^{| \\sigma\n |})\\\\\n & = & \\frac{(1 + z_0)^{\\sum_{\\sigma: |\\sigma|\\geq 4} |\\sigma|}}{2^{\\sum_{\\sigma: |\\sigma|\\geq 4} |\\sigma|}} \\prod_{\\sigma \\in \\mathcal{S}_e} (1 + (- 4 \\delta)^{|\n \\sigma |}) \\prod_{\\sigma \\in \\mathcal{S}_o} (1 - (- 4 \\delta)^{| \\sigma\n |})\\\\\n & = & \\frac{(1 + z_0)^{|C|}}{2^{|C|}} \\prod_{\\sigma \\in \\mathcal{S}_e} (1\n + (- 4 \\delta)^{| \\sigma |}) \\prod_{\\sigma \\in \\mathcal{S}_o} (1 - (- 4\n \\delta)^{| \\sigma |})\n\\end{eqnarray*}\nwhere for the last equality we used that $\\sum_{\\sigma: |\\sigma|\\geq 4} |\\sigma| = |C|$. We now improve upon the analogous analysis from~\\cite[Claim~12]{CanonneDKS20} to\nobtain a better upper bound for the remaining terms; indeed, the bound they derived is $e^{O(\\varepsilon^4\/n)}$, which was enough for their purposes but not ours (since it does not feature any dependence on $d$). Let\n$z_1 \\coloneqq \\frac{1 + (4 \\delta)^2}{1 - (4 \\delta)^2}$. 
In view of using the above expression to bound~\eqref{eq:1+tau:restated}, we first simplify part of its summands, using the fact that $2|A_i|+2|B_i|+|C_i|=N$ for all $i$ and following the same computations as in~\citet{CanonneDKS20}:\n\begin{align*}\n (1 - 4 \delta)^N & z_0^{|B_i |} \mleft( \frac{1 + z_0^2}{2} \mright)^{|A_i |} \frac{(1 + z_0)^{|C_i |}}{2^{|C_i |}}\\\n & = ((1 - 4 \delta)^2 z_0)^{|B_i |} \mleft( (1 -\n 4 \delta)^2 \frac{1 + z_0^2}{2} \mright)^{|A_i |}\n \underbrace{\mleft( (1 - 4 \delta) \frac{1 + z_0}{2} \mright)^{|C_i |}}_{=\n 1}\\\n & = (1 - (4 \delta)^2)^{|B_i |} (1 + (4 \delta)^2)^{|A_i |}\\\n & = (1 - (4 \delta)^2)^{|A_i | + |B_i |} z_1^{|A_i |}\n = (1 - (4 \delta)^2)^{|A_1 | + |B_1 |} z_1^{|A_i |} \,,\n\end{align*}\nwhere the last equality uses the fact that the sum $|A_i | + |B_i |$ only depends on the matchings $\lambda,\lambda'$ (not the orientations $\mu_i, \mu_i'$), and thus is independent of $i$. 
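The chain of equalities above is elementary algebra in $\delta$; a quick standalone numerical check, with arbitrary (hypothetical) values for $|A_i|$, $|B_i|$, $|C_i|$:

```python
d = 0.05
z0 = (1 + 4 * d) / (1 - 4 * d)
z1 = (1 + (4 * d) ** 2) / (1 - (4 * d) ** 2)
A, B, C = 3, 2, 4          # hypothetical sizes |A_i|, |B_i|, |C_i|
N = 2 * A + 2 * B + C      # the constraint 2|A_i| + 2|B_i| + |C_i| = N
lhs = (1 - 4 * d) ** N * z0 ** B * ((1 + z0 ** 2) / 2) ** A * ((1 + z0) / 2) ** C
rhs = (1 - (4 * d) ** 2) ** (A + B) * z1 ** A
assert abs(lhs / rhs - 1) < 1e-12
```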
\nPlugging this simplification into~\\eqref{eq:1+tau:restated}, and letting $R \\coloneqq |A_1|+|B_1| \\leq N\/2$ for convenience, we get\n\\begin{align*}\n 1 + \\tau (\\theta, \\theta')\n & = \\frac{1}{D} \\Bigg\\{ \\sum_{i = 1}^D (1 - 4 \\delta)^N \\cdot z_0^{|B_i |}\n \\mleft( \\frac{1 + z_0^2}{2} \\mright)^{|A_i |} \\prod_{\\sigma_i}\n \\mathbb{E}_{\\alpha \\sim \\mathcal{B}(\\sigma_i)} [z_0^{\\alpha}] \\Bigg\\}\\\\\n & = (1 - (4 \\delta)^2)^{R} \\cdot \\frac{1}{D} \\Bigg\\{ \\sum_{i = 1}^D \n z_1^{|A_i |} \\prod_{\\sigma \\in \\mathcal{S}_{e, i}} (1 + (- 4\n \\delta)^{| \\sigma |}) \\prod_{\\sigma \\in \\mathcal{S}_{o, i}} (1 - (- 4\n \\delta)^{| \\sigma |}) \\Bigg\\} \\text{.}\n\\end{align*}\nNext, we compute the expectation after raising the above to the power $m$.\n\\begin{align}\n \\mathbb{E}_{\\theta, \\theta'} &[(1 + \\tau (\\theta, \\theta'))^m] \\notag\\\\\n & = \\mathbb{E}_{\\theta, \\theta'} \\mleft[ \\mleft( (1 - (4 \\delta)^2)^{R} \n \\frac{1}{D} \\Bigg\\{ \\sum_{i = 1}^D z_1^{|A_i |} \\prod_{\\sigma \\in\n \\mathcal{S}_{e, i}} (1 + (- 4 \\delta)^{| \\sigma |}) \\prod_{\\sigma \\in\n \\mathcal{S}_{o, i}} (1 - (- 4 \\delta)^{| \\sigma |}) \\Bigg\\} \\mright)^m\n \\mright] \\notag\\\\\n & = \\mathbb{E}_{\\lambda, \\lambda'} \\mleft[ (1 - (4 \\delta)^2)^{mR}\n \\mathbb{E}_{\\orgvec{\\mu}, \\orgvec{\\mu}'} \\mleft[ \\mleft( \\frac{1}{D} \\Bigg\\{ \\sum_{i = 1}^D \n z_1^{|A_i |} \\prod_{\\sigma \\in \\mathcal{S}_{e, i}} (1 + (- 4 \\delta)^{|\n \\sigma |}) \\prod_{\\sigma \\in \\mathcal{S}_{o, i}} (1 - (- 4 \\delta)^{| \\sigma\n |}) \\Bigg\\} \\mright)^m \\mright] \\mright]. 
\\label{eq:boundexpect:partial}\n\\end{align}\nThe quantity inside the inner expectation is quite unwieldy; to proceed, we will rely on the following identity, which lets us bound the two product terms:\n\\begin{equation}\n\\label{fact:intermedia_upper_bound_on_multigraph_cycle_term:restated}\n \\prod_{\\sigma \\in \\mathcal{S}_e} (1 + (- 4 \\delta)^{| \\sigma |}) \n \\prod_{\\sigma \\in \\mathcal{S}_o} (1 - (- 4 \\delta)^{| \\sigma |})\n \\leq e^{c'\\frac{\\varepsilon^5}{N^{3 \/ 2}}}\\!\\!\\!\\!\\!\\! \\prod_{\\sigma \\in\n \\mathcal{S}_e : | \\sigma | = 4} \\!\\!\\!\\!\\!\\!(1 + (- 4 \\delta)^{| \\sigma |}) \n \\!\\!\\!\\!\\!\\!\\prod_{\\sigma \\in \\mathcal{S}_o : | \\sigma | = 4} \\!\\!\\!\\!\\!\\!(1 - (- 4 \\delta)^{|\n \\sigma |})\\,,\n\\end{equation}\nfor some absolute constant $c'>0$. We defer the proof of this inequality to Appendix~\\ref{app:fact:intermedia_upper_bound_on_multigraph_cycle_term:proof}, and proceed assuming it. Note that as long as $D = \\new{O}(n\/\\varepsilon^6)$, we will have $e^{c'\\cdot m\\varepsilon^5 \/ N^{3 \/ 2}} \\leq e^{128 m \\varepsilon^2 \/ (\\sqrt{D} n)}$,\\footnote{\\new{As per the condition set in Lemma \\ref{lemma:multinomial_truncated_MGF}, we will from now on assume that $n \/ \\varepsilon^2 \\geqslant n \\geqslant 40 D$, which gives us $N^{3 \/ 2} \\geqslant \\left( n\/2 \\right)^{3 \/ 2} \\geqslant \\frac{n}{2}\n\\cdot (20 D)^{1 \/ 2} \\geqslant 2 n \\sqrt{D}$; and some more calculations give us $c'\\frac{m \\varepsilon^2}{N^{3 \/ 2}} \\leqslant \\frac{128\nm \\varepsilon^2}{\\sqrt{D} n}$.}} and this restriction on $D$ is satisfied for the regime of parameters considered in our lower bound, $d = O(\\log n)$.\n\nFix a pair $\\lambda, \\lambda'$; we have that the $|A_i |$'s are i.i.d.\\ $\\tmop{Bin}\n(R, 1 \/ 2)$ random variables. 
We now introduce \[\nR' \coloneqq |\{\sigma_1 : | \sigma_1 | = 4\}| =\n|\{\sigma_i : | \sigma_i | = 4\}|,\n\] the random variable counting how many cycles have length exactly 4 (the cycle structure only depends on $\lambda, \lambda'$, and so this quantity does not depend on $i$). In particular, we have $R'\n\leq \frac{N}{4}$, since $\sum_{\sigma: |\sigma|\geq 4} |\sigma| = |C | \leq\nN$; more specifically, we have $R' \leq \frac{N - 2 R}{4} \leq \frac{N}{4}$. \nFurther, define $\kappa_i$ as the number of cycles of length 4 which have an even total number of negative correlations; that is, the number of cycles $\sigma$ such that $\mu_i, \mu_i'$ impose either 0, 2, or 4 negatively correlated pairs along that cycle. \n\n\nSince $\mu_i, \mu_i'$ are uniformly distributed, each length-4 cycle is even or odd with probability $1 \/ 2$, independently across cycles, and thus $\kappa_i \sim \tmop{Bin} \mleft(\nR', \frac{1}{2} \mright)$. Moreover, while $\kappa_i$ and $A_i$ both depend on $\mu_i, \mu_i'$, they by definition depend on disjoint subsets of those two random variables: thus, because each correlation parameter is chosen independently, we have that \n$\kappa_i$ and $A_i$ are independent conditioned on $(R, R')$. 
Now, recalling our setting of $z_2 = \\frac{1 + (4 \\delta)^4}{1\n- (4 \\delta)^4}$ and fixing a realization of $R,R'$, we have\n\n\n\\iffalse\n\\begin{align}\n \\mathbb{E}_{\\orgvec{\\mu}, \\orgvec{\\mu}'} &\\mleft[ \\mleft( \\frac{1}{D} \\sum_{i = 1}^D\n z_1^{|A_i |} \\prod_{\\sigma \\in \\mathcal{S}_e (i) : | \\sigma | =\n 4} (1 + (4 \\delta)^4) \\prod_{\\sigma_i \\in \\mathcal{S}_o (i) : | \\sigma | =\n 4} (1 - (4 \\delta)^4) \\mright)^m \\mright]\\notag\\\\\n & = \\mathbb{E}_{\\orgvec{\\mu}, \\orgvec{\\mu}'} \\mleft[ \\mleft( \\frac{1}{D} \\sum_{i = 1}^D\n z_1^{|A_i |} (1 + (4 \\delta)^4)^{\\kappa_i} (1 - (4 \\delta)^4)^{R' -\n \\kappa_i} \\mright)^m \\mright]\\notag\\\\\n & = (1 - (4 \\delta)^4)^{m R'} \\mathbb{E}_{\\orgvec{\\mu}, \\orgvec{\\mu}'} \\mleft[ \\mleft(\n \\frac{1}{D} \\sum_{i = 1}^D z_1^{|A_i |} z_2^{\\kappa_i} \\mright)^m\n \\mright]\\notag\\\\\n & \\leq (1 - (4 \\delta)^4)^{m R'} z_1^{\\frac{mR}{2}}\n z_2^{\\frac{mR'}{2}} \\mleft( \\mathbb{E}_{X \\sim \\tmop{Bin}\n \\mleft( m, \\frac{1}{D} \\mright)} [e^{t X^2}] \\mright)^D, \\label{eq:boundinnerexpect:partial}\n\\end{align}\nwhere the last step follows from the first part of Lemma~\\ref{lemma:average_of_product_with_binomial_raise_by_m:restated}, setting $t \\coloneqq 2 (4\n\\delta)^4 R + 2 (4 \\delta)^8 R'$. \n\nTo continue, we would like to invoke the second part of Lemma~\\ref{lemma:average_of_product_with_binomial_raise_by_m:restated} to bound the RHS further. However, this requires $mt \\leq 1\/16$, which does not always hold: indeed, for $m = O(\\sqrt{D}n\/\\varepsilon^2)$,\\footnote{This is our regime of interest, as we want to prove a lower bound of $m = \\Omega(\\sqrt{D}n\/\\varepsilon^2)$, and thus assume that $m \\leq C'\\cdot \\sqrt{D}n\/\\varepsilon^2$ for some absolute constant $C'>0$ in order to derive a contradiction. 
Similarly, as for $m = o(n\/\\varepsilon^2)$ or $m = o(2^{d\/2} \\sqrt{n}\/\\varepsilon^2)$ the uniformity testing lower bounds of~\\citet[Theorems 13 and 14]{CanonneDKS20} (which can be adapted, after establishing the corresponding analogue of Lemma~\\ref{lemma;distance:mixture:of:trees:restated}, to generalize to independence testing) apply, we can assume that $m \\geq c' n\/\\varepsilon^2$ and $m \\geq c' \\sqrt{D n}\/\\varepsilon^2$ (for some small absolute constant $c'>0$), which lets us later use that $m^2\/D > m$ and $m^2\/D > Dn\/m^2$.} $mt \\asymp m \\delta^4 R \\asymp m \\frac{\\varepsilon^4}{n^2} R$ could be as large as $\\sqrt{D}\\varepsilon^2$, since all we can say with probability one is that $R\\leq N\/2$. Yet, as we will see, we have that $mt = O(1)$ with overwhelming probability, which will suffice.\n\nFirst, note that since $R' \\leq N\/4\\leq n\/4$, \n\\begin{equation}\n \\label{eq:inequality:t}\nmt \\leq 256\\delta^4 mR + 2^{15} m\\frac{\\varepsilon^8}{n^3} \\leq 256\\delta^4 mR + \\frac{\\varepsilon^4}{32n},\n\\end{equation}\nthe last inequality since $m = O(\\sqrt{D}n\/\\varepsilon^2)$, and $d \\ll \\log n$. In particular, given the bound on $m$ and the choice of $\\delta$, there exists an absolute constant $c>0$ such that $mt \\leq \\frac{1}{16}$ whenever $R \\leq r^\\ast \\coloneqq c\\cdot n\/(\\sqrt{D}\\varepsilon^2)$. To show that this happens with high enough probability, we will use the fact that, for every $k\\geq 0$,\n\\begin{equation}\n \\label{eq:tail:bound:R}\n \\Pr [R > k] \\leq \\frac{1}{k!},\n\\end{equation}\nwhich was established in~\\citet[p.46]{CanonneDKS20}. 
First, combining~\\eqref{eq:boundexpect:partial}, Claim~\\ref{fact:intermedia_upper_bound_on_multigraph_cycle_term:restated} and~\\eqref{eq:boundinnerexpect:partial}, we have\n\\begin{align}\n \\mathbb{E}_{\\theta, \\theta'} [(1 + \\tau (\\theta, \\theta'))^m]\n &\\leq e^{\\bigO{\\frac{m \\varepsilon^2}{\\sqrt{D} n}}}\\mathbb{E}_{\\lambda, \\lambda'} \\mleft[ (1 - (4 \\delta)^2)^{mR} (1 - (4\\delta)^4)^{m R'} z_1^{\\frac{mR}{2}} z_2^{\\frac{mR'}{2}} (\\mathbb{E}_X[e^{tX^2}])^D \\mright] \\notag\\\\\n &= e^{\\bigO{\\frac{m \\varepsilon^2}{\\sqrt{D} n}}}\\mathbb{E}_{\\lambda, \\lambda'} \\mleft[ (1 - (4 \\delta)^4)^{\\frac{mR}{2}} (1 - (4\\delta)^8)^{\\frac{m R'}{2}}(\\mathbb{E}_X[e^{tX^2}])^D \\mright] \\notag\\\\\n &\\leq e^{\\bigO{\\frac{m \\varepsilon^2}{\\sqrt{D} n}}}\\mathbb{E}_{\\lambda, \\lambda'} \\mleft[ (\\mathbb{E}_X[e^{tX^2}])^D \\mright], \\label{eq:almost:there}\n\\end{align}\nwhere we recall that $X$ is Binomial with parameters $m$ and $1\/D$ and that, in the inner expectation, $t$ is a random variable (function of $R$, and thus of $\\lambda,\\lambda'$). 
To handle this, we use the above tail bound on $R$ to write, recalling that $t \\leq 256\\delta^4 R + \\frac{\\varepsilon^4}{32mn} \\leq 128\\delta^4 n + \\frac{\\varepsilon^4}{32mn}\n= \\frac{128 \\varepsilon^4}{n} + \\frac{\\varepsilon^4}{32mn} \\leq \\frac{129Dn}{m^2}$, \n\\begin{align*}\n\\mathbb{E}_{R} \\mleft[ (\\mathbb{E}_X[e^{tX^2}])^D \\mright]\n&\\leq \\mathbb{E}_{R} \\mleft[ (\\mathbb{E}_X[e^{tX^2}])^D \\mathbbm{1}[R\\leq r^\\ast] \\mright]\n+ \\mathbb{E}_{R} \\mleft[ (\\mathbb{E}_X[e^{t X^2}])^D \\mathbbm{1}[R > r^\\ast] \\mright]\\\\\n&\\leq \\mathbb{E}_{R} \\mleft[ (\\mathbb{E}_X[e^{tX^2}])^D \\mathbbm{1}[R\\leq r^\\ast] \\mright] + \\mathbb{E}_{R} \\mleft[ (\\mathbb{E}_X[e^{\\frac{129 Dn}{m^2} X^2}])^D \\mathbbm{1}[R > r^\\ast] \\mright]\\\\\n&\\leq \\mathbb{E}_{R} \\mleft[ e^{16 t \\frac{m^2}{D} + 2mt} \\mathbbm{1}[R\\leq r^\\ast] \\mright] + e^{ 129 D^2 n } \\Pr[R > r^\\ast] \\tag{Lemma~\\ref{lemma:average_of_product_with_binomial_raise_by_m:restated}}\\\\\n&\\leq \\mathbb{E}_{R} \\mleft[ e^{18 t \\frac{m^2}{D}} \\mright] + e^{ 129 D^2 n } \\cdot \\frac{1}{r^*!} \\tag{as $\\frac{m^2}{D} > m$, $\\frac{m^2}{D} > \\frac{Dn}{m^2}$} \\\\\n&\\leq \\mathbb{E}_{R} \\mleft[ e^{C'' \\mleft(\\delta^4\\frac{m^2}{D} R + \\frac{\\varepsilon^8m^2}{n^3 D} \\mright) } \\mright] + e^{ 129 D^2 n - C'''\\frac{n}{\\sqrt{D}\\varepsilon^2} \\log \\frac{n}{\\sqrt{D}\\varepsilon^2} },\n\\end{align*} \nwhen for the first term we used the setting of $t$ and the bound from (the first part of)~\\eqref{eq:inequality:t}; and, for the second, we used the value of $r^\\ast$, along with Stirling's inequality ($C'',C'''>0$ are again absolute constants). This means that the second term can be made arbitrary small as long as \n\\[\nD^{5\/2} \\ll \\log \\frac{n}{\\sqrt{D}\\varepsilon^2},\n\\]\nfor which $D \\ll \\log^{2\/5} n$ (i.e., $d \\ll \\log\\log n$) suffices. 
For the first term, we have, since\n$\\frac{\\varepsilon^8m^2}{n^3 D} = O(\\varepsilon^4\/n)$ and \n$\\alpha \\coloneqq C''\\delta^4\\frac{m^2}{D} = O(1)$,\n\\[\n\\mathbb{E}_{R} \\mleft[ e^{C'' \\mleft(\\delta^4\\frac{m^2}{D} R + \\frac{\\varepsilon^8m^2}{n^3 D} \\mright) } \\mright]\n= e^{O(\\varepsilon^4\/n)}\\mathbb{E}_{R} \\mleft[ e^{\\alpha R} \\mright]\n\\leq e^{O(\\varepsilon^4\/n)} (1+O(\\alpha)),\n\\]\nusing again the tail bound on $R$ from~\\eqref{eq:tail:bound:R} along with a summation by parts. \n\\iffalse %\n\\begin{align*}\n\\mathbb{E}_{R} \\mleft[ e^{\\alpha R} \\mright]\n&= \\sum_{k=0}^\\infty e^{\\alpha k} \\Pr[ R = k]\n= \\sum_{k=0}^\\infty e^{\\alpha k} (\\Pr[ R \\geq k]-\\Pr[ R \\geq k+1])\\\\\n&= \\sum_{k=0}^\\infty e^{\\alpha k} \\Pr[ R \\geq k]-\n \\sum_{k=1}^\\infty e^{\\alpha (k-1)} \\Pr[ R \\geq k] \\\\\n &= 1+\\sum_{k=1}^\\infty (e^{\\alpha k} - e^{\\alpha (k-1)}) \\Pr[ R \\geq k] \\\\\n &= 1+(1-e^{-\\alpha})\\sum_{k=1}^\\infty e^{\\alpha k} \\Pr[ R \\geq k] \\\\\n &\\leq 1+(1-e^{-\\alpha})\\sum_{k=1}^\\infty \\frac{e^{\\alpha k}}{k!} \\\\\n &= 1+(1-e^{-\\alpha}) (e^{e^\\alpha}-1)\n \\xrightarrow[\\alpha\\to 0]{} 1+ \\alpha(e-1)\n\\end{align*}\n\\fi\nIn particular, as $\\alpha$ can be made arbitrarily close to $0$ by choosing a smaller value for the constant $C'$ (in the bound for $m$), this term can made made arbitrarily close to $1$.\n\nPlugging this back in~\\eqref{eq:almost:there}, we finally obtain\n\\begin{align}\n \\mathbb{E}_{\\theta, \\theta'} [(1 + \\tau (\\theta, \\theta'))^m]\n &\\leq e^{\\bigO{\\frac{m \\varepsilon^2}{\\sqrt{D}n}}} (1+O(\\alpha) + o(1))\n = 1+o(1),\n\\end{align}\nshowing, in view of~\\eqref{eq:Ingster:restated}, that our family of ``mixture-of-trees'' distributions cannot be distinguished from uniform given $m \\leq C' \\sqrt{D} n\/\\varepsilon^2$ samples. 
This concludes the proof of Lemma~\ref{theorem:lower_bound_uniformity_testing_on_Bayes_Net:restated}.\n\fi\n \n \n \begin{align}\n \mathbb{E}_{\orgvec{\mu}, \orgvec{\mu}'} & \mleft[ \mleft( \frac{1}{D} \n \sum_{i = 1}^D z_1^{|A_i |} \prod_{\sigma \in \mathcal{S}_{e, i} : | \sigma\n | = 4} (1 + (4 \delta)^4) \prod_{\sigma \in \mathcal{S}_{o, i} : | \sigma |\n = 4} (1 - (4 \delta)^4) \mright)^m \mright] \nonumber\\\n & =\mathbb{E}_{\orgvec{\mu}, \orgvec{\mu}'} \mleft[ \mleft( \frac{1}{D} \n \sum_{i = 1}^D z_1^{|A_i |} (1 + (4 \delta)^4)^{\kappa_i} (1 - (4\n \delta)^4)^{R' - \kappa_i} \mright)^m \mright] \nonumber\\\n & = (1 - (4 \delta)^4)^{mR'} \mathbb{E}_{\orgvec{\mu}, \orgvec{\mu}'}\n \mleft[ \mleft( \frac{1}{D} \sum_{i = 1}^D z_1^{|A_i |} z_2^{\kappa_i}\n \mright)^m \mright] \nonumber\\\n & \leq (1 - (4 \delta)^4)^{mR'} z_1^{\frac{mR}{2}} z_2^{\frac{mR'}{2}}\n \mathbb{E}_{\orgvec{\alpha}} \mleft[ \prod_{i = 1}^D (\cosh (2 \alpha_i\n \delta^2))^R (\cosh (2 \alpha_i \delta^4))^{R'} \mright], \n \label{eq:boundinnerexpect:partial_improve_attempt}\n\end{align}\nwhere \eqref{eq:boundinnerexpect:partial_improve_attempt} follows from the following lemma, whose proof we defer to the end of the section:\n\begin{restatable*}[]{lemma}{scaryExpectation}\n \label{lemma:average_of_product_with_binomial_raise_by_m:restated_improve_attempt}\n There\n exists an absolute constant $\delta_0 \approx 0.96$ such that the following\n holds. Let $K \geq 1$ and $R_1, \ldots, R_K$ be integers, and $\delta_1,\n \ldots, \delta_K \in (0, \delta_0]$. Suppose that $\kappa_{j, 1}, \ldots,\n \kappa_{j, D} \sim \tmop{Bin} \mleft( R_j, \frac{1}{2} \mright)$ are i.i.d.,\n and mutually independent across $1 \leq j \leq K$, and $z_j \coloneqq \frac{1 +\n \delta_j}{1 - \delta_j}$. 
Then\n \\[ \\mathbb{E} \\mleft[ \\mleft( \\frac{1}{D} \\sum_{i = 1}^D \\prod_{j = 1}^K\n z_j^{\\kappa_{j, i}} \\mright)^m \\mright] \\leq \\mleft( \\prod_{j = 1}^K\n z_j^{\\frac{m}{2} R_j} \\mright) \\mathbb{E}_{\\orgvec{\\alpha}} \\mleft[\n \\prod_{i = 1}^D \\prod^K_{j = 1} \\cosh (2 \\alpha_i \\delta_j) \\mright], \\]\n where $(\\alpha_1, \\ldots, \\alpha_D)$ follows a multinomial distribution with\n parameters $m$ and $(1\/D,\\dots,1\/D)$.\n\\end{restatable*}\nWe now focus on the expectation on the right (last factor of the RHS of~\\eqref{eq:boundinnerexpect:partial_improve_attempt}): using that $\\cosh u \\leq \\min(e^{u^2\/2}, e^u)$ for $u\\geq 0$, we have, setting $\\Delta\\coloneqq 1\/\\delta^2 = n\/\\varepsilon^2$,\n\\begin{align}\n \\mathbb{E}_{\\orgvec{\\alpha}} &\\mleft[ \\prod_{i = 1}^D (\\cosh (2 \\alpha_i\n \\delta^2))^R (\\cosh (2 \\alpha_i \\delta^4))^{R'} \\mright] \\nonumber\\\\\n & \\leq \\mathbb{E}_{\\orgvec{\\alpha}} \\mleft[ \\prod_{i = 1}^D \\min(e^{2 \\alpha_i \\delta^2 R}, e^{2\\alpha_i^2 \\delta^4 R}) e^{2 \\alpha_i^2 \\delta^8 R'} \\mright] \\nonumber\\\\\n & \\leq \\mathbb{E}_{\\orgvec{\\alpha}} \\mleft[ \\prod_{i = 1}^D e^{2\\alpha_i \\delta^2 R \\mathbbm{1} [\\alpha_i > \\Delta]} e^{2 \\alpha_i^2 \\delta^4 R \\mathbbm{1} [\\alpha_i \\leq \\Delta]} e^{2 \\alpha_i^2 \\delta^8 R'}\n \\mright] \\nonumber\\\\\n %\n & \\leq \\mathbb{E} \\mleft[ \\prod_{i = 1}^D e^{8 \\alpha_i\n \\delta^2 R \\mathbbm{1} [\\alpha_i > \\Delta]} \\mright]^{1\/4} \\mathbb{E} \\mleft[ \\prod_{i = 1}^D e^{8 \\alpha_i^2 \\delta^4 R \\mathbbm{1} [\\alpha_i \\leq \\Delta]} \\mright]^{1 \/ 4} \\mathbb{E} \\mleft[ \\prod_{i = 1}^D e^{4 \\alpha_i^2 \\delta^8 R'} \\mright]^{1 \/ 2} \\label{eq:three_scary_expactations_multinomial}\n\\end{align}\nwhere the last step comes from the generalized H\\\"older inequality (or, equivalently, two applications of the Cauchy--Schwarz\ninequality), and the threshold $\\Delta$ was chosen as the value for which the term realizing the 
minimum changes. We first bound the product of the last two expectations:\n\\begin{align}\n \\mathbb{E} \\mleft[ \\prod_{i = 1}^D e^{8 \\alpha_i^2 \\delta^4 R\n \\mathbbm{1} [\\alpha_i\\leq \\Delta]} \\mright]^{1 \/ 4} &\\mathbb{E} \\mleft[ \\prod_{i = 1}^D e^{4 \\alpha_i^2 \\delta^8 R'}\n \\mright]^{1 \/ 2} \\notag\\\\\n & \\leq \\mathbb{E} \\mleft[ \\prod_{i = 1}^D e^{8 \\delta^4 R \\min(\\alpha_i^2, \\Delta^2)} \\mright]^{1 \/ 4} \\mathbb{E} \\mleft[\n \\prod_{i = 1}^D e^{4 \\alpha_i^2 \\delta^8 R'} \\mright]^{1 \/\n 2} \\nonumber\\\\\n & \\leq (\\mathbb{E} [e^{8 \\min (\\alpha_j^2, \\Delta^2) \\delta^4 R}])^{D \/\n 4} (\\mathbb{E} [e^{4 \\alpha_1^2 \\delta^8 R'}])^{D \/ 2} \n \\label{equation:step_negative_association}\\\\\n & \\leq \\exp \\mleft( 32 \\delta^4 \\frac{m^2}{D} R \\mright) \\exp \\mleft( 32\\delta^8 \\frac{m^2}{D} R' \\mright) . \n \\label{equation:apply_MGF_min_square_bin}\n\\end{align}\nwhere we applied negative\nassociation on both expectations for~\\eqref{equation:step_negative_association};\nand then got~\\eqref{equation:apply_MGF_min_square_bin} by Lemmas~\\ref{lemma:MGF_Binomial_square_bound_with_min} and~\\ref{lemma:MGF_Binomial_square_bound} (for the latter, noting that $t m = 2 \\delta^8 m R' \\leq 1 \/ 16$; and, for the former, assuming with little loss of generality that $\\varepsilon \\leq 1\/(4\\sqrt{2})$). 
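The elementary bound $\cosh u \leq \min(e^{u^2/2}, e^u)$ for $u \geq 0$, invoked in the chain of inequalities above, can be checked numerically on a grid (the first inequality follows termwise from $(2k)! \geq 2^k k!$ in the Taylor expansion of $\cosh$, the second from $e^{-u} \leq e^{u}$):

```python
import math

for k in range(501):
    u = 0.01 * k  # grid over [0, 5]
    assert math.cosh(u) <= math.exp(u * u / 2) + 1e-12  # cosh u <= e^{u^2/2}
    assert math.cosh(u) <= math.exp(u) + 1e-12          # cosh u <= e^u
```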
\nApplying Lemma~\ref{lemma:bound_on_truncated_multinomial_MGF} to the first\n(remaining) factor of the LHS above, as $8\delta^2 R\leq 4$ and $D \geq \Omega(1)$, we get\n\begin{align*}\n &\mathbb{E} \mleft[ \prod_{i = 1}^D e^{8 \alpha_i \delta^2 R\n \mathbbm{1} [\alpha_i > \Delta]} \mright]^{1\/4} \mathbb{E} \mleft[ \prod_{i = 1}^D e^{8 \alpha_i^2 \delta^4 R \mathbbm{1} [\alpha_i \leq \Delta]} \mright]^{1\/ 4}\mathbb{E} \mleft[ \prod_{i = 1}^D e^{4 \alpha_i^2\n \delta^8 R'} \mright]^{1 \/ 2}\\\n &\qquad \leq (1 + o (1)) \cdot \exp \mleft( 32 \delta^4 \frac{m^2}{D} R\n \mright) \exp \mleft( 32 \frac{m^2}{D} \delta^8 R' \mright)\\\n &\qquad = (1 + o(1))\exp (32C'^2 R),\n\end{align*}\nrecalling that $R' \leq N\/4\leq n\/4$, and our assumption that $m \leq C'\sqrt{D}n\/\varepsilon^2$. \new{Combining~\eqref{eq:boundexpect:partial}, \eqref{fact:intermedia_upper_bound_on_multigraph_cycle_term:restated} and~\eqref{eq:boundinnerexpect:partial_improve_attempt}, we have shown that}\n \begin{align*}\n \mathbb{E}_{\theta, \theta'} [(1 + \tau (\theta, \theta'))^m]\n &\leq \new{e^{128 \frac{m \varepsilon^2}{\sqrt{D} n}}}\mathbb{E}_{\lambda, \lambda'} \mleft[ (1 - (4 \delta)^2)^{mR}(1 - (4 \delta)^4)^{mR'} z_1^{\frac{mR}{2}} z_2^{\frac{mR'}{2}}e^{32C'^2 \cdot R} \mright]\\\n &= \new{e^{128 \frac{m \varepsilon^2}{\sqrt{D} n}}}\mathbb{E}_{\lambda, \lambda'} \mleft[ (1 - (4 \delta)^4)^{\frac{mR}{2}}(1 - (4 \delta)^8)^{\frac{mR'}{2}}e^{32C'^2 \cdot R} \mright]\\\n &\leq \new{e^{128 \cdot C'}}\mathbb{E}_{\lambda, \lambda'} \mleft[e^{32C'^2 \cdot R} \mright]\,,\n\end{align*}\nwhere the equality follows from the definition of $z_1,z_2$, and the last inequality uses $m \leq C'\sqrt{D}n\/\varepsilon^2$ along with $(1 - (4 \delta)^4)^{\frac{mR}{2}}(1 - (4 \delta)^8)^{\frac{mR'}{2}} \leq 1$. To conclude, we will use the fact that, for every $k\geq 0$,\n\begin{equation}\n \label{eq:tail:bound:R}\n \Pr [R > k] \leq \frac{1}{k!},\n\end{equation}\nwhich was established in~\citet[p.46]{CanonneDKS20}. By summation by parts, one can show that this implies\n\[\n \mathbb{E}_{R} \mleft[ e^{\alpha R} \mright] \leq 1+(1-e^{-\alpha}) (e^{e^\alpha}-1)\n \xrightarrow[\alpha\to 0]{} 1+ \alpha(e-1)\n\]\nfor any $\alpha>0$, and so, in our case,\n\iffalse %\n\begin{align*}\n\mathbb{E}_{R} \mleft[ e^{\alpha R} \mright]\n&= \sum_{k=0}^\infty e^{\alpha k} \Pr[ R = k]\n= \sum_{k=0}^\infty e^{\alpha k} (\Pr[ R \geq k]-\Pr[ R \geq k+1])\\\n&= \sum_{k=0}^\infty e^{\alpha k} \Pr[ R \geq k]-\n \sum_{k=1}^\infty e^{\alpha (k-1)} \Pr[ R \geq k] \\\n &= 1+\sum_{k=1}^\infty (e^{\alpha k} - e^{\alpha (k-1)}) \Pr[ R \geq k] \\\n &= 1+(1-e^{-\alpha})\sum_{k=1}^\infty e^{\alpha k} \Pr[ R \geq k] \\\n &\leq 1+(1-e^{-\alpha})\sum_{k=1}^\infty \frac{e^{\alpha k}}{k!} \\\n &= 1+(1-e^{-\alpha}) (e^{e^\alpha}-1)\n \xrightarrow[\alpha\to 0]{} 1+ \alpha(e-1)\n\end{align*}\n\fi %\n \begin{align}\n \mathbb{E}_{\theta, \theta'} [(1 + \tau (\theta, \theta'))^m]\n &\leq e^{128 \cdot C'}\mleft( 1+(1-e^{-32C'^2}) (e^{e^{32C'^2}}-1) \mright).\n\end{align}\nIn particular, the RHS can be made arbitrarily close to $1$ by choosing a small enough value for the constant $C'$ (in the bound for $m$). By~\eqref{eq:Ingster:restated}, this implies the desired bound on $\norm{Q - U^{\otimes m} }_1^2$, and thus establishes Lemma~\ref{theorem:lower_bound_uniformity_testing_on_Bayes_Net}. 
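The summation-by-parts bound on $\mathbb{E}_R[e^{\alpha R}]$ and its small-$\alpha$ behaviour can be confirmed numerically against an extremal distribution allowed by the tail bound (an illustrative check, not part of the argument):

```python
import math

def bound(a):
    # 1 + (1 - e^{-a})(e^{e^a} - 1), the closed form from the text
    return 1 + (1 - math.exp(-a)) * (math.exp(math.exp(a)) - 1)

a = 0.25
# Distribution with P[R >= k] = 1/k! for k >= 1 (so P[R > k] = 1/(k+1)! <= 1/k!),
# truncated at k = 60; its MGF should sit just below the closed-form bound.
pmf = [1 / math.factorial(k) - 1 / math.factorial(k + 1) for k in range(60)]
mgf = sum(math.exp(a * k) * p for k, p in enumerate(pmf))
assert mgf <= bound(a) + 1e-10

# As alpha -> 0, the bound behaves like 1 + alpha*(e - 1).
a_small = 1e-5
assert abs((bound(a_small) - 1) / (a_small * (math.e - 1)) - 1) < 1e-3
```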
\qed\n\n\n\\paragraph{The remaining technical lemma.} It only remains to establish Lemma~\\ref{lemma:average_of_product_with_binomial_raise_by_m:restated_improve_attempt}, which we do now.\n\\scaryExpectation\n\\begin{proof}[Proof\nof~Lemma~\\ref{lemma:average_of_product_with_binomial_raise_by_m:restated_improve_attempt}]\nWe will require the following simple fact, which follows from the multinomial theorem and the definition of the multinomial distribution:\n\\begin{fact}\\label{fact:power_of_average_equates_multinomial_expect}Let $D$ be a\npositive integer and $m$ be a non-negative integer. For any $x_1, \\ldots, x_D\n\\in \\mathbb{R}$, we have\n\\[ \\mleft( \\frac{1}{D} \\sum_{i = 1}^D x_i \\mright)^m =\\mathbb{E}_{\\alpha_1,\n \\ldots, \\alpha_D} \\mleft[ \\prod_{i = 1}^D x_i^{\\alpha_i} \\mright], \\]\nwhere $(\\alpha_1, \\ldots, \\alpha_D)$ follows a multinomial distribution with\nparameters $m$ and $(1\/D, \\ldots, 1\/D)$.\n\\end{fact}\n\\iffalse %\n\\begin{proof}\n By the multinomial theorem and then the definition of the multinomial\n distribution,\n \\[ \\mleft( \\frac{1}{D} \\sum_{i = 1}^D x_i \\mright)^m = \\sum_{\\alpha_1 +\n \\cdots + \\alpha_D = m} \\binom{m}{\\alpha_1, \\ldots, \\alpha_D} \\mleft(\n \\prod_{i = 1}^D \\frac{1}{D^{\\alpha_i}} \\mright) \\cdot \\mleft( \\prod_{i =\n 1}^D x_i^{\\alpha_i} \\mright) =\\mathbb{E}_{\\alpha_1, \\ldots, \\alpha_D}\n \\mleft[ \\prod_{i = 1}^D x_i^{\\alpha_i} \\mright] . \\]\n\\end{proof}\n\\fi\n\nWe now apply Fact~\\ref{fact:power_of_average_equates_multinomial_expect} inside\nthe expectation of the LHS of the statement. 
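(As a quick standalone sanity check, Fact~\ref{fact:power_of_average_equates_multinomial_expect} can be verified exhaustively for small parameters; the sketch below uses arbitrary illustrative values $D = 3$, $m = 4$.)

```python
import math

def compositions(m, D):
    """All tuples of D non-negative integers summing to m."""
    if D == 1:
        yield (m,)
        return
    for first in range(m + 1):
        for rest in compositions(m - first, D - 1):
            yield (first,) + rest

D, m = 3, 4
x = [0.5, 1.3, 2.0]  # arbitrary illustrative test values

lhs = (sum(x) / D) ** m

# E[prod_i x_i^{alpha_i}] with alpha ~ Multinomial(m; 1/D, ..., 1/D)
rhs = 0.0
for alpha in compositions(m, D):
    coef = math.factorial(m)
    for a in alpha:
        coef //= math.factorial(a)   # multinomial coefficient
    prob = coef / D**m
    term = 1.0
    for xi, a in zip(x, alpha):
        term *= xi**a
    rhs += prob * term

assert abs(lhs - rhs) < 1e-12
```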
Note that the sets of random variables $\\orgvec{\\alpha} =\n\\{\\alpha_1, \\ldots, \\alpha_D \\}$, $\\orgvec{\\kappa}_1 = \\{\\kappa_{1, 1},\n\\ldots, \\kappa_{1, D} \\}, \\ldots, \\orgvec{\\kappa}_K = \\{\\kappa_{K, 1}, \\ldots,\n\\kappa_{K, D} \\}$ are mutually independent, since $\\orgvec{\\alpha}$ is a\nset of auxiliary random variables derived from an averaging operation and by\nthe assumption on $\\orgvec{\\kappa}_j$. Since, moreover, $\\kappa_{j, 1},\n\\ldots, \\kappa_{j, D}$ are i.i.d., we have\n\\begin{align*}\n \\mathbb{E} \\mleft[ \\mleft( \\frac{1}{D} \\sum_{i = 1}^D \\prod^K_{j = 1}\n z_j^{\\kappa_{j, i}} \\mright)^m \\mright] & = \\mathbb{E} \\mleft[\n \\mathbb{E}_{\\alpha_1, \\ldots, \\alpha_D} \\mleft[ \\prod_{i = 1}^D \\mleft(\n \\prod^K_{j = 1} z_j^{\\kappa_{j, i}} \\mright)^{\\alpha_i} \\mright] \\mright]\\\\\n & = \\mathbb{E}_{\\orgvec{\\alpha}, \\orgvec{\\kappa}_j} \\mleft[ \\prod_{i =\n 1}^D \\prod^K_{j = 1} z_j^{\\alpha_i \\kappa_{j, i}} \\mright]\\\\\n & = \\mathbb{E}_{\\orgvec{\\alpha}} \\mleft[ \\mathbb{E}_{\\orgvec{\\kappa}_j}\n \\mleft[ \\prod_{i = 1}^D \\prod^K_{j = 1} z_j^{\\alpha_i \\kappa_{j, i}} \\mright]\n \\mright]\\\\\n & = \\mathbb{E}_{\\orgvec{\\alpha}} \\mleft[ \\mleft[ \\prod_{i = 1}^D \\prod^K_{j\n = 1} \\mathbb{E}_{\\orgvec{\\kappa}_j} [z_j^{\\alpha_i \\kappa_{j, i}}] \\mright]\n \\mright]\\\\\n & = \\mathbb{E}_{\\orgvec{\\alpha}} \\mleft[ \\mleft[ \\prod_{i = 1}^D \\prod^K_{j\n = 1} \\mleft( \\frac{1 + z_j^{\\alpha_i}}{2} \\mright)^{R_j} \\mright] \\mright] \\tag{Probability-Generating Function of a Binomial}\\\\\n & = \\mathbb{E}_{\\orgvec{\\alpha}} \\mleft[ \\prod_{i = 1}^D \\prod^K_{j = 1}\n z_j^{\\frac{\\alpha_i R_j}{2}} \\mleft( \\frac{z_j^{- \\alpha_i \/ 2} +\n z_j^{\\alpha_i \/ 2}}{2} \\mright)^{R_j} \\mright]\\\\\n & = \\mleft( \\prod^K_{j = 1} z_j^{\\frac{mR_j}{2}} \\mright)\n \\mathbb{E}_{\\orgvec{\\alpha}} \\mleft[ \\prod_{i = 1}^D \\prod^K_{j = 1} \\mleft(\n \\frac{z_j^{- \\alpha_i \/ 2} + z_j^{\\alpha_i \/ 2}}{2} 
\\mright)^{R_j} \\mright] .\n\\end{align*}\nNext, we upper bound the\nremaining expression inside the expectation, using the fact that, since\n$\\delta_j$ is bounded above by $\\delta_0$ by assumption, we have $z_j =\n\\frac{1 + \\delta_j}{1 - \\delta_j} \\leq e^{4 \\delta_j}$. Thus,\n\\begin{align*}\n \\mathbb{E}_{\\orgvec{\\alpha}} \\mleft[ \\prod_{i = 1}^D \\prod^K_{j = 1} \\mleft(\n \\frac{z_j^{- \\alpha_i \/ 2} + z_j^{\\alpha_i \/ 2}}{2} \\mright)^{R_j} \\mright] &\n \\leq \\mathbb{E}_{\\orgvec{\\alpha}} \\mleft[ \\prod_{i = 1}^D \\prod^K_{j =\n 1} \\mleft( \\frac{e^{- 2 \\alpha_i \\delta_j} + e^{2 \\alpha_i \\delta_j}}{2}\n \\mright)^{R_j} \\mright]\\\\\n & = \\mathbb{E}_{\\orgvec{\\alpha}} \\mleft[ \\prod_{i = 1}^D \\prod^K_{j = 1}\n (\\cosh (2 \\alpha_i \\delta_j))^{R_j} \\mright]\n\\end{align*}\nas claimed.\n\\end{proof}\n\n\n %\n %\n\\subsection{Product Distributions Are Far from Mixture of Trees (Lemma~\\ref{lemma;distance:mixture:of:trees})}\n\\label{ssec:mixturetree:farness}\nIn this subsection, we outline the proof of Lemma \\ref{lemma;distance:mixture:of:trees}.\nOur argument starts with Lemma~\\ref{lemma:technical:TV_lower_bound_of_its_marginals}, which allows us to relate the total variation distance between the mixture and the product of its marginals to a simpler quantity, the difference between two components of this mixture. \n\\begin{lemma}\n\\label{lemma:technical:TV_lower_bound_of_its_marginals}\nLet $p$ be a distribution on $\\{0, 1\\}^N \\times \\{0, 1\\}^M$ (with $N,M\\geq 2$), and denote its\nmarginals on $\\{0, 1\\}^N$, $\\{0, 1\\}^M$ by $p_1, p_2$ respectively. Then, if $p_1$ is uniform, %\n\\[ \n d_{\\tmop{TV}} (p, p_1 \\otimes p_2)\n \\geq d_{\\tmop{TV}} (p (\\cdot \\mid x_1 = 0), p (\\cdot \\mid x_1\n = 1))\\,. 
\\]\n\\end{lemma} \nThis in turn will be much more manageable, as the parameters of these two mixture components are independent, and thus analyzing this distance can be done by analyzing Binomial-like expressions. This second step is reminiscent of \\cite[Lemma 8]{CanonneDKS20}, which can be seen as a simpler version involving only one Binomial instead of two:\n\\begin{lemma}\n \\label{lemma:technical_distance_lb_2xbinomials_conditional_tree} \n There exist $C_1,C_2>0$ such that the following holds. Let $\\varepsilon\\in(0,1]$ and $n \\geq C_1$, and let $a,b$ be two integers such that $a+b=n$ and $b \\geq\n \\frac{1}{4} n$. Then, for $\\delta \\coloneqq \\frac{\\varepsilon}{\\sqrt{n}}$, we have\n \\[ \n \\frac{(1 - \\delta)^n}{2^n} \\sum_{k_1 = 0}^{a}\\sum_{k_2 = 0}^{b} \\binom{a}{k_1} \\binom{b}{k_2} \\mleft|\n \\mleft( \\mleft( \\frac{1 + \\delta}{1 - \\delta} \\mright)^{k_1 + k_2} - \\mleft(\n \\frac{1 + \\delta}{1 - \\delta} \\mright)^{k_1 + b - k_2} \\mright) \\mright|\n \\geq C_2\\varepsilon . \\]\n\\end{lemma}\nThis parameter $b$ corresponds to the\ndifference between the orientation parameters \n$\\mu, \\mu'$ being large, which happens with high constant probability as long\nas $n$ is large enough. The proof of Lemma\n\\ref{lemma:technical_distance_lb_2xbinomials_conditional_tree} is deferred to Appendix\n\\ref{proof:lemma:technical_distance_lb_2xbinomials_conditional_tree}, and we hereafter proceed with the rest of the argument.\nFor fixed $\\theta$ and $x_2, \\ldots, x_d$, set $z \\coloneqq \\frac{1 + 4 \\delta}{1 -\n4 \\delta}$. We will denote by $\\mu, \\mu'$ the two (randomly chosen) orientation parameters corresponding to the mixture components indexed by $(0,x_2,\\dots, x_d)$ and $(1,x_2,\\dots, x_d)$. 
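Before proceeding, here is a numerical sanity check of Lemma~\ref{lemma:technical_distance_lb_2xbinomials_conditional_tree} for one illustrative parameter setting; the threshold $0.1$ below is a conservative stand-in for the unspecified constant $C_2$:

```python
import math

# Evaluate the double-Binomial sum from the lemma for one illustrative
# setting (n, eps chosen arbitrarily; a + b = n with b >= n/4).
n, eps = 100, 0.5
a, b = 50, 50
delta = eps / math.sqrt(n)
z = (1 + delta) / (1 - delta)

total = 0.0
for k1 in range(a + 1):
    for k2 in range(b + 1):
        total += (math.comb(a, k1) * math.comb(b, k2)
                  * abs(z ** (k1 + k2) - z ** (k1 + b - k2)))
total *= (1 - delta) ** n / 2 ** n

# 0.1 is a conservative placeholder for C_2, not its actual value.
assert total >= 0.1 * eps
```

Marginalizing over $k_1$ shows the sum equals $\mathbb{E}\,|1 - z^{b - 2K_2}|$ for $K_2 \sim \mathrm{Bin}(b, (1+\delta)/2)$, which is why it scales linearly in $\varepsilon$.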
\nBy\nLemma \\ref{lemma:technical:TV_lower_bound_of_its_marginals} and Lemma\n\\ref{lemma:technical_approximated_lb_product_distributions}, for any product\ndistribution $q$,\n\\begin{align}\n 2 d_{\\tmop{TV}} (p, q) & \\geq \\frac{1}{3\\cdot 2^N} (1 - 4 \\delta)^{\\frac{N}{2}} \\sum_{x_{d + 1}, \\ldots,\n x_n} |z^{c (\\lambda, \\mu, x)} - z^{c (\\lambda, \\mu', x)} | \\label{eq:expression:tv}\n\\end{align}\nLet $S_1$ denote the set of pairs in the child nodes with common parameters\nbetween $\\mu$ and $\\mu'$, and $S_2$ the set of pairs with different\nparameters (that is, the definition of $S_1, S_2$ is essentially that of $A$ and $B$ from the previous section (p.\\pageref{def:AB:lb}), but for equal matching parameters $\\lambda = \\lambda'$). In particular, we have that $|S_2 | = \\tmop{Hamming} (\\mu, \\mu') \\sim\n\\tmop{Binomial} \\left( \\frac{N}{2}, \\frac{1}{2} \\right)$ and $|S_1 \\cup S_2 |\n= \\frac{N}{2}$. Let $\\tilde{c}(S, \\mu, x)$ be the analogue of $c_{\\tmop{ch}}\n(\\lambda, \\mu, x)$ from~\\eqref{def:ch}, but only on a subset of pairs $S$ instead of $\\{d,\\dots,n\\}^2$; i.e.,\n\\[ \\tilde{c}(S, \\mu, x) \\coloneqq \\left| \\left\\{ (i, j) \\in S : \\exists k \\in\n \\mathbb{N}, \\lambda_k = (i - d + 1, j - d + 1) \\text{ and } (- 1)^{x_i + x_j} = (-\n 1)^{\\mu_k} \\right\\} \\right| . 
\\]\nGiven any $x, \\mu \\text{ and } \\mu'$, the following holds from the definitions of $\\tilde{c}$ and $c_{\\tmop{ch}}$:\n\\begin{itemize}\n \\item Since $S_1 \\cup S_2$ contains all the pairs, $c_{\\tmop{ch}} (\\lambda, \\mu, x)\n = \\tilde{c}(S_1, \\mu, x) + \\tilde{c}(S_2, \\mu, x)$ (similarly for $\\mu'$).\n \n \\item Since $S_1$ (resp., $S_2$) contains exactly the pairs whose orientation is the same (resp., differs) between $\\mu$ and $\\mu'$, we have\n $\\tilde{c}(S_1, \\mu', x) = \\tilde{c}(S_1, \\mu, x)$\n and\n $\\tilde{c}(S_2, \\mu', x) + \\tilde{c}(S_2, \\mu, x) = |S_2 |$\n \n \\item For a fixed matching and a partition $S_1,S_2$ of its $N\/2$ pairs, given an orientation vector $\\mu\\in\\{0,1\\}^{N\/2}$, and fixed values $0\\leq k_1\\leq |S_1|$, $0\\leq k_2\\leq |S_2|$, there are $2^{N\/2}\\binom{|S_1|}{k_1}\\binom{|S_2|}{k_2}$ different vectors $x\\in\\{0,1\\}^N$ such that $\\tilde{c}(S_1, \\mu, x)=k_1$ and $\\tilde{c}(S_2, \\mu, x)=k_2$.\n\\end{itemize}\nUsing these properties, we have, assuming $|S_2 | \\geqslant \\frac{1}{4} \\cdot \\frac{N}{2}$ and $N$ bigger than some constant,\n\\begin{align*}\n \\sum_{x \\in \\{0, 1\\}^N} |z^{c_{\\tmop{ch}} (\\lambda,\n \\mu, x)} - z^{c_{\\tmop{ch}} (\\lambda, \\mu', x)} |\n &= \\sum_{x \\in \\{0, 1\\}^N} |z^{\\tilde{c}(S_1, \\mu, x) + \\tilde{c}(S_2, \\mu, x)} - z^{\\tilde{c}(S_1, \\mu', x) + \\tilde{c}(S_2, \\mu', x)} |\\\\\n &= \\sum_{x \\in \\{0, 1\\}^N} |z^{\\tilde{c}(S_1, \\mu, x)+\\tilde{c}(S_2, \\mu, x)} - z^{\\tilde{c}(S_1, \\mu, x)+|S_2|-\\tilde{c}(S_2, \\mu, x)} |\\\\\n &= 2^{\\frac{N}{2}}\\sum_{k_1=0}^{|S_1|} \\binom{|S_1|}{k_1}\\sum_{k_2=0}^{|S_2|} \\binom{|S_2|}{k_2} |z^{k_1+k_2} - z^{k_1+|S_2|-k_2} |\\\\&\\geq C\\cdot \\frac{2^N}{(1-4\\delta)^{N\/2}}\\cdot \\varepsilon,\n\\end{align*}\nwhere $C>0$ is an absolute constant, and for the last inequality we invoked Lemma~\\ref{lemma:technical_distance_lb_2xbinomials_conditional_tree}.\n Recalling now that $|S_2 | \\sim \\tmop{Bin}\\mleft( 
\\frac{N}{2},\n \\frac{1}{2} \\mright)$, for $N$ large enough we also have\n\\[ \\Pr \\mleft[ |S_2| \\geq \\frac{N}{8} \\mright] \\geq 1 - e^{\n - \\frac{N}{16}} > 9\/10. \\]\nThus, combining the two along with~\\eqref{eq:expression:tv}, we conclude that\n\\[\n\\Pr [d_{\\tmop{TV}} (P_{\\theta} (\\cdot \\mid x_1 = 0, x_2, \\ldots, x_d),\n P_{\\theta} (\\cdot \\mid x_1 = 1, x_2, \\ldots, x_d)) \\geq \\Omega\n (\\varepsilon)] \\geq 9\/10,\n\\]\nestablishing Lemma~\\ref{lemma;distance:mixture:of:trees}. \\hfill$\\blacksquare$\n\n\\subsection{Bounds on moment-generating functions}\nWe start with some relatively simple statements:\n\\begin{fact}\n\\label{fact:bound_on_MGF_Binomial_not_general}If $X \\sim \\tmop{Bin}\n(m, p)$, then, for any $0 \\leq t \\leq 1$,\n$\\mathbb{E} [e^{t X}] \\leq \\exp (2 t mp).$\n\\end{fact}\n\\begin{proof}\n This follows from computing explicitly \n $\n \\mathbb{E} [e^{t X}] = (1 + p (e^{t} - 1))^m \\leq (1 + 2\n t p)^m \\leq e^{2 t mp},\n $\n where the first inequality uses that $t\\leq 1$. \n\\end{proof}\n\\noindent We will also require the following decoupling inequality:\n\\begin{lemma}\n \\label{lemma:decoupling}Let $F\\colon \\mathbb{R} \\to \\mathbb{R}$ be a convex,\n {\\tmem{non-decreasing}} function, and $X = (X_1, \\ldots, X_n)$ be a vector\n of independent {\\tmem{non-negative}} random variables. Then\n \\[ \\mathbb{E} \\mleft[ F \\mleft( \\sum_{i \\neq j} X_i X_j \\mright) \\mright] \\leq \\mathbb{E} \\mleft[\n F \\mleft( 4 \\sum_{i, j} X_i Y_j \\mright) \\mright] \\]\n where $Y$ is an independent copy of $X$.\n\\end{lemma}\n\\begin{proof}\n \\label{proof:lemma:decoupling}\n Introduce a vector of independent\n (and independent of $X$) $\\tmop{Bern} (1 \/ 2)$ random variables $\\delta =\n (\\delta_1, \\ldots, \\delta_n)$; so that $\\mathbb{E} [\\delta_i (1 - \\delta_j)]\n = \\frac{1}{4} \\mathbbm{1}_{i \\neq j}$. 
For any realization of $X$, we can write\n \\[ \\sum_{i \\neq j} X_i X_j = 4 \\mathbb{E}_{\\delta} \\mleft[ \\sum_{i \\neq j} \\delta_i\n (1 - \\delta_j) X_i X_j \\mright] = 4 \\mathbb{E}_{\\delta} \\mleft[ \\sum_{i, j}\n \\delta_i (1 - \\delta_j) X_i X_j \\mright], \\]\n and so, by Jensen's inequality and Fubini, as well as independence of $X$ and $\\delta$, \n \\begin{align*}\n \\mathbb{E}_X \\mleft[ F \\mleft( \\sum_{i \\neq j} X_i X_j \\mright) \\mright] &\n =\\mathbb{E}_X \\mleft[ F \\mleft( 4 \\mathbb{E}_{\\delta} \\mleft[ \\sum_{i, j} \\delta_i (1\n - \\delta_j) X_i X_j \\mright] \\mright) \\mright]\\\\\n & \\leq \\mathbb{E}_{\\delta} \\mleft[ \\mathbb{E}_X \\mleft[ F \\mleft( 4\n \\sum_{i, j} \\delta_i (1 - \\delta_j) X_i X_j \\mright) \\mright] \\mright]\n \\end{align*}\n \n This implies that there exists some realization $\\delta^{\\ast} \\in \\{0,\n 1\\}^n$ such that\n \\[ \\mathbb{E}_X \\mleft[ F \\mleft( \\sum_{i \\neq j} X_i X_j \\mright) \\mright]\n \\leq \\mathbb{E}_X \\mleft[ F \\mleft( 4 \\sum_{i, j} \\delta^{\\ast}_i (1 -\n \\delta^{\\ast}_j) X_i X_j \\mright) \\mright] \\hspace{0.17em} . \\]\n Let $I := \\{i \\in [n] : \\delta^{\\ast}_i = 1\\}$. 
Then $\\sum_{i, j}\n \\delta^{\\ast}_i (1 - \\delta^{\\ast}_j) X_i X_j = \\sum_{(i, j) \\in I \\times\n I^c} X_i X_j$, and we get\n \n \\begin{align}\n \\mathbb{E}_X \\mleft[ F \\mleft( \\sum_{i \\neq j} X_i X_j \\mright) \\mright] &\n \\leq \\mathbb{E}_X \\mleft[ F \\mleft( 4 \\sum_{(i, j) \\in I \\times I^c}\n X_i X_j \\mright) \\mright] \\label{eq:decoupling:intermediate}\\\\\n & =\\mathbb{E}_X \\mleft[ F \\mleft( 4 \\sum_{(i, j) \\in I \\times I^c} X_i Y_j \\mright) \\mright] \\notag\\\\\n & \\leq \\mathbb{E}_X \\mleft[ F \\mleft( 4 \\sum_{(i, j) \\in I \\times I^c} X_i Y_j + 4 \\sum_{(i, j) \\not\\in I \\times I^c} X_i Y_j \\mright) \\mright] \\notag\\\\\n & =\\mathbb{E}_X \\mleft[ F \\mleft( 4 \\sum_{i, j} X_i Y_j \\mright) \\mright] \\notag\n\\;,\n \\end{align}\n where the equality uses the fact that $(X_i)_{i \\in I}$ and $(X_j)_{j \\in\n I^c}$ are independent (as $I, I^c$ are disjoint), and so replacing $\\sum_{j\n \\in I^c} X_j$ by the identically distributed $\\sum_{j \\in I^c} Y_j$ does not\n change the expectation; and the second inequality uses monotonicity of $F$\n and non-negativity of $X, Y$, as $4 \\sum_{(i, j) \\not\\in I \\times I^c} X_i Y_j\n \\geq 0$. (Note that up to (and including)~\\eqref{eq:decoupling:intermediate}, the assumption that the $X_i$'s are independent is not necessary; we will use this fact later on.)\n\\end{proof}\nNote that compared to the usual version of the inequality, we do not require\nthat the $X_i$'s have mean zero; but instead require that they be\nnon-negative, and that $F$ be monotone. We will, in the next lemma, apply\nLemma~\\ref{lemma:decoupling} to the function $F (x) = e^{2 tx}$, for some\nfixed {\\tmem{positive}} parameter $t > 0$ (so that $F$ is indeed\nnon-decreasing), and to $X_1, \\ldots, X_n$ independent Bernoulli r.v.'s. 
Specifically, we obtain the following bound on the MGF of the square of a Binomial:\n\n\\MGFSquaredBinomial\n\\begin{proof}\n Write $X=\\sum_{i = 1}^m X_i$, where the $X_i$ are i.i.d.\\ $\\tmop{Bern} (p)$ (in particular, $X_i = X_i^2$). \n Then, by the Cauchy--Schwarz inequality and the decoupling inequality from Lemma~\\ref{lemma:decoupling}, we have, for $t>0$,\n \\begin{eqnarray}\n \\mathbb{E} [e^{tX^2}] \n & = & \\mathbb{E} \\mleft[ e^{t \\sum_i X_i} e^{t \\sum_{i \\neq j} X_i\n X_j} \\mright] \\nonumber\\\\\n & \\leq & \\sqrt{\\mathbb{E} \\mleft[ e^{2 t \\sum_i X_i} \\mright]} \n \\sqrt{\\mathbb{E} \\mleft[ e^{2 t \\sum_{i \\neq j} X_i X_j} \\mright]} \\nonumber\\\\\n \\mleft( \\text{decoupling} \\mright) & \\leq & \\sqrt{\\mathbb{E} \\mleft[ e^{2\n t \\sum_i X_i} \\mright]} \\sqrt{\\mathbb{E} \\mleft[ e^{8 t \\sum_{i, j} X_i Y_j}\n \\mright]} \\text{.} \\label{eq:decoupling_step}\n \\end{eqnarray}\n where $Y_j \\sim \\tmop{Bern} (p)$ are i.i.d., and independent of the $X_i$'s. Let $Y = \\sum_{i = 1}^m Y_i\\sim \\tmop{Bin}(m, p)$. \n From Fact~\\ref{fact:bound_on_MGF_Binomial_not_general}, as long as $2 t\n \\leq 1$, $8 t m \\leq 1$, and $16 tmp \\leq 1$\n (all conditions satisfied in view of our assumption),\n \\[ \\mathbb{E}_{X, Y} [e^{8 tXY}] =\\mathbb{E}_X [\\mathbb{E}_Y [e^{8 tXY}]]\n \\leq \\mathbb{E}_X [e^{16 tXmp}] \\leq e^{32 tm^2 p^2},\n \\]\n and $\\mathbb{E} \\mleft[ e^{2t X} \\mright] \\leq e^{4tmp}$. 
Going back to~\\eqref{eq:decoupling_step}, this implies\n \\[\n \\mathbb{E} [e^{tX^2}] \n \\leq \\sqrt{\\exp \\left(4tmp\\right)}\\sqrt{\\exp \\left(32 tm^2 p^2 \\right)}\n = \\exp \\left(2 tmp + 16 tm^2 p^2 \\right)\\,,\n \\]\n concluding the proof.\n\\end{proof}\nWe will prove an MGF bound on the truncated Multinomial in Lemma \\ref{lemma:bound_on_truncated_multinomial_MGF} (noting that using MGF bound of Multinomial distribution is not nearly enough), as required by our analysis on the independence testing lower bound; prior to that, we will need two important lemmas: Lemma \\ref{lemma:factorial_fraction_inequality} and Lemma \\ref{lemma:multinomial_truncated_MGF}. These two lemmas both try to bound the expression with a uniform and more manageable term.\n\n\\iffalse\nThe intuition for this particular analysis starts with the the following\narguments (using the notation in the proof):\n\\begin{itemize}\n \\item We can obtain a ``trivial'' bound via Gibbs' inequality (which is\n based on Jensen's inequality) in\n (\\ref{equation:everything-grouped-together-log-sum}), $L_1 \\log\n \\mleft( \\frac{L_1 \\cdot D}{m k} + L_2 \\log \\mleft( \\frac{L_2 \\cdot\n D}{m (D - k)} \\mright) \\mright) \\geq m \\log (1) = 0$; but this is\n not enough for us, as we need to sum over a huge collection of different\n values of $\\alpha_i$, which could be up to $\\binom{D}{k} (m - n)^k n^{D -\n k}$.\n \n \\item We notice that in order for the ``trivial'' bound to be tight, we need\n the following to hold:\n \\[ \\frac{L_1}{m k \/ D} = \\frac{L_2}{m (D - k) \/ D} \\leftrightarrow\n \\frac{L_1}{k} = \\frac{m - L_1}{D - k} \\leftrightarrow L_1 D = m k. \\]\n However, we know that $L_1 \\gtrsim O (k n) = m k \/ \\sqrt{D}$, and there $L_1\n D \\gg m k$, leaving a potential gap for us. 
As such we suspect that it is\n closely related to an application of the more general Jensen's gap.\n \n \\item One key step of our analysis in\n (\\ref{equation:swapping-D-with-lower-order-D}) replaces $D$ with $D^{3 \/\n 4}$, as it is more likely to make Jensen's gap smaller ($D^{1 \/ 2}$ is where\n the Jensen's inequality above becomes tight, but it makes the analysis\n messy; and thus we choose a $3 \/ 4$ in place of $1 \/ 2$).\n\\end{itemize}\n\\fi\n\\begin{lemma}\n \\label{lemma:factorial_fraction_inequality}\n Fix $m,\\Delta,D$ such that $\\frac{m}{\\Delta} \\new{\\leq}c\\sqrt{D}$ for some $c>0$ (and $D> \\max(16c^4, e^{100})$). Fix any integer $k > 0$ and a tuple of non-negative integers $(a_1,\\dots,a_D)$ summing to $m$ such that $L \\coloneqq \\sum_{i = 1}^k a_i > k\\Delta$ (in particular, $k \\leq c\\sqrt{D}$). \n Suppose $(\\alpha_1, \\ldots, \\alpha_D)$ follows\n a multinomial distribution with\n parameters $m$ and $(1\/D, \\ldots, 1\/D)$. Then,\n \\[\n e^{4 L} \\Pr \\mleft[ \\orgvec{\\alpha} = (a_1, \\ldots, a_D)\n \\mright] \\leq m\\cdot \\exp (- \\frac{1}{5} L \\log D) .\n \\]\n\\end{lemma}\n\n\\begin{proof}\n Via a multinomial distribution grouping argument, the probability can be\n bounded by considering a grouping of two random variables, $L_1 = \\sum_{i = 1}^k\n \\alpha_i$ and $L_2 = \\sum_{i = k + 1}^D \\alpha_i$, where $(L_1,L_2)$ follows\n a multinomial distribution with parameters $m$ and $\\mleft( \\frac{k}{D},\n \\frac{D - k}{D} \\mright)$, namely, recalling $L = \\sum_{i = 1}^k a_i$ and setting $T \\coloneqq \\sum_{i = k + 1}^D a_i$, \n \\[ \\Pr \\mleft[ \\orgvec{\\alpha} = (a_1, \\ldots, a_D) \\mright] \\leq \n \\Pr\\mleft[ L_1 = L, L_2 = T \\mright] = \\frac{m!}{L! T!} \\mleft( \\frac{k}{D}\n \\mright)^{L} \\mleft( \\frac{D - k}{D} \\mright)^{T} \\]\n Moreover, note that $m=L+T$. Via Stirling's approximation, we have\n \\begin{equation}\n \\label{equation:stirling_gibbs_application} \n \\frac{m!}{L ! 
T!}\n \\leq \\exp(m \\log m + \\log m - L \\log L - T \\log T)\n \\end{equation}\n from which we can write, taking the logarithm for convenience,\n \\begin{align}\n \\log\\mleft( \\frac{m!}{L!T!} \\mleft( \\frac{k}{D} \\mright)^{L}\n \\mleft( \\frac{D - k}{D} \\mright)^{T} \\mright) \n & \\leq \\log m - \\mleft( L \\log \\frac{L D}{m k} + T \\log \\frac{T D}{m (D - k)}\n \\mright)\n \\label{equation:everything-grouped-together-log-sum}\\\\\n & = \\log m - \\mleft( L \\log \\frac{LD}{m\n k } + T \\log \\mleft( \\frac{TD}{m ({D^{3 \/ 4}} -\n k)} \\frac{{D^{3 \/ 4}} - k}{D - k} \\mright) \\mright) \\label{equation:swapping-D-with-lower-order-D}\\\\\n & \\leq \\log m - m \\log (D^{1 \/ 4}) + (m - L) \\log \\mleft(\n \\frac{D - k}{D^{3 \/ 4} - k} \\mright)\n \\label{equation:Gibbs-ineqaulity-again}\\\\\n %\n & = \\log m + m \\log \\mleft( 1 + k\\frac{D^{1 \/ 4} - 1}{D\n - k D^{1 \/ 4}} \\mright) - L \\log \\mleft( 1 + \\frac{D^{1 \/ 4} - 1}{1 - k \/\n D^{3 \/ 4}} \\mright) \\nonumber\\\\\n & \\leq \\log m + m k \\frac{D^{1 \/ 4} - 1}{D - kD^{1 \/ 4}} - L\n \\log (D^{1 \/ 4})\n \\label{equation:setting-k-to-extreme}\\\\\n & \\leq \\log m +\n \\frac{m k}{D^{1 \/ 2}} \\frac{1 - 1 \/ D^{1 \/ 4}}{D^{1 \/ 4} - c} - \\frac{1}{4} L \\log D \\nonumber\\\\\n & \\leq \\log m + \n \\frac{c L}{D^{1 \/ 4} - c} - \\frac{1}{4} L \\log D \n \\label{equation:upperbound-for-mk-over-D-square}\\\\\n & \\leq \\log m + L - \\frac{1}{4} L \\log D \\tag{as $c\/D^{1\/4}\\leq 1\/2$} \n \\end{align}\n where we used Gibbs' inequality for \\eqref{equation:Gibbs-ineqaulity-again};\n we then have (\\ref{equation:setting-k-to-extreme}) by $\\log (1 + x)\n \\leq x$ for the first term. \\eqref{equation:upperbound-for-mk-over-D-square} then follows from $k \\leq c\\sqrt{D}$ and $km \\new{\\leq} k\\Delta \\cdot c\\sqrt{D} \\leq L\\cdot c\\sqrt{D} $. 
Finally,\n \\[\n e^{4 L} \\frac{m!}{L!T!} \\mleft( \\frac{k}{D}\n \\mright)^{L} \\mleft( \\frac{D - k}{D} \\mright)^{T} \n \\leq \\exp (5L- \\frac{1}{4}L \\log D)\n \\leq m \\exp \\mleft( - \\frac{1}{5} L \\log D \\mright) \\,,\n \\]\n the last inequality as long as $\\log D > 100$.\n\\end{proof}\n\n\n\\begin{lemma}\n \\label{lemma:multinomial_truncated_MGF}\n Suppose $\\orgvec{\\alpha} =\n (\\alpha_1, \\ldots, \\alpha_D)^T$ follows a multinomial distribution with\n parameters $m$ and $(1\/D, \\ldots, 1\/D)$, and that $\\frac{m}{\\Delta} \\new{\\leq} c\\sqrt{D}$ for some $c>0,\\Delta\\geq 1$ with $\\Delta \\geq 40D \\geq \\Omega(c^4)$ and $D > \\Omega(1)$. For any integer $c \\sqrt{D} \\geq k \\geq 1$ and any $t\\leq 4$,\n \\[ \n \\mathbb{E} \\mleft[ \\prod_{i : \\alpha_i > \\Delta} e^{t \\alpha_i}\n \\cdot \\mathbbm{1} [\\nu(\\orgvec{\\alpha})= k] \\mright] \\leq \\exp \\mleft( -\n \\frac{1}{80} \\Delta \\log (D) \\mright) . \n \\]\n where $\\nu(\\orgvec{\\alpha}) \\coloneqq |\\{ i: \\alpha_i > \\Delta \\}|$ denotes the number of coordinates of $\\orgvec{\\alpha}$ greater than $\\Delta$.\n\\end{lemma}\n\\begin{proof}\n Without loss of generality, (as later, we will sum over all combinations)\n assume that $\\alpha_1, \\ldots, \\alpha_k$ are the coordinates larger than $\\Delta$, for some integer $k$; and denote their sum by $L$. 
Note that we then have $k\\Delta < L \\leq m \\new{\\leq} c\\Delta\\sqrt{D}$, and thus $0 \\leq k \\leq c\\sqrt{D}$.\n \\begin{equation}\n \\mathbb{E} \\mleft[ \\prod_{i : \\alpha_i > \\Delta} e^{4\\alpha_i}\n \\cdot \\mathbbm{1} [\\nu(\\orgvec{\\alpha})= k] \\mright]\n = \\binom{D}{k} \\!\\!\\sum_{\\alpha_1, \\ldots, \\alpha_k > \\Delta} \\sum_{\\alpha_{k\n + 1}, \\ldots, \\alpha_D \\leq \\Delta} \\!\\!\\!e^{4 \\sum_{i = 1}^k \\alpha_i} \\Pr\n [\\orgvec{\\alpha} = (\\alpha_1, \\ldots, \\alpha_D)] \n \\label{equation:MGF_bound_of_truncated_multinomial}\n \\end{equation}\n A uniform bound on any $\\alpha_1, \\ldots, \\alpha_D$ as specified can be\n obtained from Lemma \\ref{lemma:factorial_fraction_inequality}; and, combining it\n with (\\ref{equation:MGF_bound_of_truncated_multinomial}), we have an\n expression that does not depend on the value of $\\orgvec{\\alpha}$; from which\\footnote{We have the number of terms in the summation upper bounded by the following\n analysis: $(m - \\Delta)^k$ is an upper bound of combinations of $\\alpha_1,\n \\ldots, \\alpha_k$ with values larger than $\\Delta$; and similarly, $(\\Delta + 1)^{D -\n k}$ will be the upper bound for the combinations of $\\alpha_{k + 1}, \\ldots,\n \\alpha_D$ with values up to $\\Delta$.}\n \\begin{eqnarray}\n \\mathbb{E} \\mleft[ e^{4\\sum_{i = 1}^k \\alpha_i} \n \\mathbbm{1} [\\nu(\\orgvec{\\alpha})= k] \\mright] \n & \\leq & \\binom{D}{k} \\sum_{\\alpha_1, \\ldots, \\alpha_k > \\Delta, \\alpha_{k + 1},\n \\ldots, \\alpha_D \\leq \\Delta} m e^{- \\frac{1}{5} \\Delta \\log D} \\nonumber\\\\\n & \\leq & \\binom{D}{k} (m - \\Delta)^k \\Delta^{D - k} \\exp \\mleft( \\log m - \\frac{1}{5}\n \\Delta \\log D \\mright) \\nonumber\\\\\n & \\leq & \\exp \\mleft( k \\log D + k \\log (m - \\Delta) + (D - k) \\log \\Delta + \\log m\n - \\frac{\\Delta}{5} \\log (D) \\mright) \\nonumber\\\\\n & = & \\exp \\mleft( k \\log (D \\cdot \\frac{m - \\Delta}{\\Delta}) + D \\log \\Delta + \\log m -\n \\frac{\\Delta}{5} 
\\log (D) \\mright) \\nonumber\\\\\n & \\leq & \\exp \\mleft( - \\frac{1}{5} \\Delta \\log D+ (D+1) \\log \\Delta + \\log (c\\sqrt{D})+\n \\frac{3}{2} k \\log (c D) \\mright) \\nonumber\\\\\n & \\leq & \\exp \\mleft( - \\frac{1}{10} \\Delta \\log D + 2c \\sqrt{D}\n \\log (c D) \\mright) \\label{weight_log_swap}\\\\\n & \\leq & \\exp \\mleft( - \\frac{1}{80} \\Delta \\log D \\mright) . \\nonumber\n \\end{eqnarray}\n where (\\ref{weight_log_swap}) follows from $20\\frac{D}{\\log D} \\leq \\frac{\\Delta}{\\log \\Delta}$, which holds for $\\Delta \\geq 40D$ and $D$ large enough (larger than some absolute constant); and the last inequality holds, given the above constraints, for $D \\geq 16 c^4$.\n\\end{proof}\n\n\\MGFCappedMultinomial\n\\begin{proof}\n Let $\\nu(\\orgvec{\\alpha}) \\coloneqq |\\{ i: \\alpha_i > \\Delta \\}|$ denote the number of coordinates of $\\orgvec{\\alpha}$ greater than $\\Delta$. Note that $\\nu(\\orgvec{\\alpha}) < L\\coloneqq \\frac{m}{\\Delta}$, and that $L \\leq c \\sqrt{D}$ by assumption. 
We break down the expectation by enumerating over the possible values for $\\nu(\\orgvec{\\alpha})$,\n from $0\\leq k\\leq L$:\n \\begin{eqnarray}\n \\mathbb{E} \\left[ \\prod_{i = 1}^D e^{t \\alpha_i \\mathbbm{1} [\\alpha_i > \\Delta]} \\right] \n & = & \\mathbb{E} \\mleft[ \\sum_{k = 1}^L \\prod_{i : \\alpha_i > \\Delta} e^{t\n \\alpha_i} \\cdot \\mathbbm{1} [\\nu(\\orgvec{\\alpha}) = k] +\n \\mathbbm{1} [\\nu(\\orgvec{\\alpha}) = 0] \\mright] \\nonumber\\\\\n & = & \\sum_{k = 1}^L \\mathbb{E} \\mleft[ \\prod_{i : \\alpha_i > \\Delta} e^{t\n \\alpha_i} \\cdot \\mathbbm{1} [\\nu(\\orgvec{\\alpha}) = k] \\mright]\n + 1 \\cdot \\Pr [\\nu(\\orgvec{\\alpha}) = 0] \\nonumber\\\\\n & \\leq & L \\exp \\mleft( - \\frac{1}{80} \\Delta \\log D \\mright) + \\Pr [\\nu(\\orgvec{\\alpha}) = 0] \n \\label{equation:application_multinomial_truncated_MGF}\\\\\n & \\leq & c D^{1\/2} \\exp \\mleft( - \\frac{1}{80} \\Delta \\log D \\mright) + 1\\,, \\nonumber\n \\end{eqnarray}\n where (\\ref{equation:application_multinomial_truncated_MGF}) follows from\n Lemma \\ref{lemma:multinomial_truncated_MGF}.\n\\end{proof}\n\n\n\nWe now state and prove our last lemma, Lemma~\\ref{lemma:MGF_Binomial_square_bound_with_min}, on the MGF of the square of a truncated Binomial:\n\n\\MGFSquaredCappedBinomial\n\\begin{proof}\n We will analyze the sampling process in Definition \\ref{definition:coupling-decoupling-sampling-procedure-attempt-2}:\n %\n \\begin{definition}\n \\label{definition:coupling-decoupling-sampling-procedure-attempt-2}\n Fix integers $m \\geq \\Delta \\geq 1$, and let $X'_1,\\dots, X'_m$ be i.i.d.\\ $\\tmop{Bern} (p)$ random variables. 
Define the distribution of $X_1,\\dots,X_m$ through the following sampling process:\n \\begin{enumerate}\n \\item Initialize $X_i = 0$ for all $i \\in [m]$; sample $\\{ X_i' \\}_{1\\leq i\\leq m}$ as $m$\n i.i.d.\\ $\\tmop{Bern} (p)$;\n \\item If $\\sum_{i \\in [m]} X_i' < \\Delta$, let $X_i = X_i'$ for all $i \\in\n [m]$;\n \\item If $\\sum_{i \\in [m]} X_i' \\geq \\Delta$, let $\\mathcal{S}' = \\{ i \\in\n [m] : X_i' = 1 \\}$ and let $\\mathcal{S}$ be a uniformly random subset of\n $\\mathcal{S}'$ of size $\\Delta$; set $X_i = X_i'$ for $i \\in \\mathcal{S}$.\n \\end{enumerate}\n \\end{definition}\n %\n\n \n %\n %\n %\n \\noindent Consider a sequence of random variables $X_1,\\dots,X_m$ as defined in\n Definition~\\ref{definition:coupling-decoupling-sampling-procedure-attempt-2}; each\n $X_i$ (for $1\\leq i\\leq m$) is supported on $\\{ 0, 1 \\}$ (so that, in particular, $X_i^2=X_i$); and $X = \\sum_{i \\in [m]}\n X_i$. By the Cauchy--Schwarz inequality,\n \\begin{eqnarray}\n \\mathbb{E} [e^{t X^2}] & = & \\mathbb{E} \\left[ e^{t \\sum_{i = 1}^m X_i + t\n \\sum_{i \\neq j} X_i X_j} \\right] \\nonumber\\\\\n & \\leqslant & \\sqrt{\\mathbb{E} \\left[ e^{2 t \\sum_{i = 1}^m X_i} \\right]}\n \\sqrt{\\mathbb{E} \\left[ e^{2 t \\sum_{i \\neq j} X_i X_j} \\right]}\n \\nonumber\\\\\n & \\leqslant & \\sqrt{\\mathbb{E} \\left[ e^{2 t \\sum_{i = 1}^m X_i} \\right]}\n \\sqrt{\\mathbb{E} \\left[ e^{8 t \\sum_{(i, j) \\in I \\times I^c} X_i X_j}\n \\right]} \n \\label{equation:coupling-decoupling-application-square-truncated-binomials}\\\\\n & \\leqslant & \\sqrt{\\mathbb{E} \\left[ e^{2 t X} \\right]}\n \\sqrt{\\mathbb{E} [e^{8 t Y_1 Y_2}]} \n \\label{equation:dominance_application}\n \\end{eqnarray}\n where $Y_1 \\sim \\min (\\tmop{Bin} (| I |, p), \\Delta)$, $Y_2 \\sim \\min (\\tmop{Bin}\n (| I^c |, p), \\Delta)$ and $Y_1$ is independent of $Y_2$ (and $(I, I^c)$ is some fixed, but unknown partition of $[m]$).\n 
\\eqref{equation:coupling-decoupling-application-square-truncated-binomials} follows from the intermediate step~\\eqref{eq:decoupling:intermediate} in the proof of Lemma~\\ref{lemma:decoupling} (observing that $x\\mapsto e^{t x}$ is convex, and\n non-decreasing as $t>0$; and using the remark from that proof about the independence of $X_i$'s not being required up to that step) and (\\ref{equation:dominance_application}) follows from Lemma\n \\ref{lemma:coupling-for-decoupling-attempt-2}. We will implicitly use Facts~\\ref{fact:useful_dominance_order},\n \\ref{fact:min_preserves_dominance_order}, and~\\ref{fact:dominance_between_binomials} for the remaining calculations, eventually replacing most expressions with $X' \\sim \\tmop{Bin} (m, p)$.\n \n Recalling that $X \\leq X'$ by definition, the first term \n of~\\eqref{equation:dominance_application} can be bounded as $\\mathbb{E} [e^{2 t X}] \\leqslant e^{4 t m p}$. Moreover, from our assumption, $t Y_1 \\leq t \\Delta \\leq 1 \/ 8$ and $t m p\n \\leq 1 \/ 16$. Combined with the fact that $Y_1, Y_2$ are each dominated by $X\n \\sim \\min (\\tmop{Bin} (m, p), \\Delta)$ and thus by $X' \\sim \\tmop{Bin} (m, p)$,\n we have\n \\[\n \\mathbb{E} [e^{8 t Y_1 Y_2}] = \\mathbb{E}_{Y_1} [\\mathbb{E}_{Y_2}\n [e^{8 t Y_1 Y_2}]]\n \\leqslant \\mathbb{E}_{Y_1} [e^{16 t Y_1 m p}]\n \\leqslant e^{32 t m^2 p^2} .\n \\]\n Going back to~\\eqref{equation:dominance_application},\n this implies\n \\[\n \\mathbb{E} [\\exp \\left(t X^2 \\right)] \\leq \\sqrt{\\exp \\left( 4 t m p \\right)} \\sqrt{\\exp \\left( 32 t m^2 p^2 \\right)} = \\exp \\left(2 t m p + 16 t m^2 p^2 \\right),\n \\]\n concluding the proof.\n\\end{proof}\n\n\\subsection{Stochastic dominance results between truncated Binomials} %\n\\begin{fact}\\label{fact:useful_dominance_order}\nLet $X \\sim \\tmop{Bin} (m, p)$, and $0 < n \\leq m$. 
Defining $Y \\coloneqq \\min (X, n)$ and $Z \\coloneqq X \\mid X\\leq\nn$, we have, for every $k\\geq 0$,\n\\[ \\Pr [X \\geq k] \\geq \\Pr [Y \\geq k] \\geq\n \\Pr [Z \\geq k], \\]\ni.e., $X \\succeq Y \\succeq Z$, where $\\succeq$ denotes first-order stochastic dominance.\n\\end{fact}\n\n\\begin{proof}\n We can write the PMF of $Z$ and $Y$, for all $0\\leq k\\leq n$,\n \\[ \\Pr [Y = k] = \\left\\{\\begin{array}{ll}\n \\Pr [X = k], & k < n\\\\\n \\Pr [X \\geq n], & k = n\n \\end{array}\\right., \\qquad \\Pr [Z = k] = \\frac{\\Pr [X = k]}{\\Pr [X \\leq n]} . \\]\n It follows that $\\Pr [Y \\geq k] = \\Pr [X \\geq k]\\mathbbm{1}\\{k\\leq n\\}$, which gives the first part of the statement.\n \n The second part follows from a direct comparison between the two CDF of $Z, Y$: indeed, for $0\\leq k \\leq n$,\n \\begin{eqnarray*}\n \\Pr [Y \\geq k] \\geq \\Pr [Z \\geq k]\n & \\Leftrightarrow & \\Pr [X \\geq k] \\geq \\frac{\\Pr [n \\geq\n X \\geq k]}{\\Pr [X \\leq n]}\\\\\n & \\Leftrightarrow & \\Pr [X \\geq k] (1 - \\Pr [X > n])\n \\geq \\Pr [X \\geq k] - \\Pr [X > n]\\\\\n & \\Leftrightarrow & \\Pr [X \\geq k] \\Pr [X > n] \\leq \\Pr [X > n]\\\\\n & \\Leftrightarrow & \\Pr [X \\geq k] \\leq 1\\,,\n \\end{eqnarray*}\n and this last inequality clearly holds.\n\\end{proof}\n\nWe also record the facts below, which follow respectively from the more general result that first-order stochastic dominance is preserved by non-decreasing mappings, and from a coupling argument.\n\\begin{fact}\n\\label{fact:min_preserves_dominance_order}\nConsider two real-valued random variables $X,Y$, \nand $n \\geq 0$. If $X \\succeq Y$, then $\\min(X,n) \\succeq \\min(Y,n)$: for all $k$,\n\\[ \\Pr [\\min (X, n) \\geq k] \\geq \\Pr [\\min (Y, n) \\geq k]\\,; \\]\ni.e., the $\\min$ operator preserves first-order stochastic dominance relation.\n\\end{fact}\n\\begin{fact}\n\\label{fact:dominance_between_binomials}\nLet $X \\sim \\tmop{Bin} (n, p)$ and $Y \\sim \\tmop{Bin} (m, p)$, where $m \\geq n$. 
Then $X \\preceq Y$.\n\\end{fact}\n\\begin{lemma}\n \\label{lemma:coupling-for-decoupling-attempt-2}\n Let\n $X_1,\\dots,X_m$ be sampled from the sampling process in Definition~\\ref{definition:coupling-decoupling-sampling-procedure-attempt-2}, and $I,\n I^c$ be any partition of $[m]$. \n Define $Z_I \\coloneqq \\sum_{i \\in I} X_i$, $Z_{I^c} \\coloneqq \\sum_{i \\in I^c} X_i$, and $Y_I\n \\sim \\min (\\tmop{Bin} (| I |, p), n)$, $Y_{I^c} \\sim \\min (\\tmop{Bin} (| I^c\n |, p), n)$. Then \n \\[\n Z_I \\cdot Z_{I^c} \\preceq Y_I \\cdot Y_{I^c}\\,.\n \\]\n\\end{lemma}\n\n\\begin{proof}\n We prove the lemma by defining a coupling $Z_I, Z_{I^c}, Y_I, Y_{I^c}$ such that $Z_I \\cdot Z_{I^c} \\leq Y_I \\cdot Y_{I^c}$ with probability one. The sampling process below will\n generate samples $(X_i)_{1\\leq i\\leq m}, Z_I, Z_{I^c}, Y_I, Y_{I^c}$ for all possible\n realizations of $I$ and $I^c$. In other words, from a given sequence $\\{\n X_i' \\}_{i \\in [m]}$, we will generate $\\{ X_i \\}_{i \\in [m]}, Y_{I_1},\n Y_{I^c_1}, Y_{I_2}, Y_{I_2^c}, \\ldots, Y_{I_{2^m}}, Y_{I_{2^m}^c}, Z_{I_1},\n Z_{I^c_1}, Z_{I_2}, Z_{I_2^c}, \\ldots, Z_{I_{2^m}}, Z_{I_{2^m}^c}$, where the $(I_i, I^c_i)$ enumerate all partitions of $[m]$ in two sets.\n \\begin{enumerate}\n \\item Initialize $X_i = 0$ for all $i \\in [m]$; sample $( X_i' )_{1\\leq i\\leq m}$ as\n $m$ i.i.d.\\ $\\tmop{Bern} (p)$;\n \n \\item If $\\sum_{i \\in [m]} X_i' < n$, let $X_i = X_i'$ for all $i \\in\n [m]$;\n \n \\item If $\\sum_{i \\in [m]} X_i' \\geq n$, let $\\mathcal{S}' = \\{ i\n \\in [m] : X_i' = 1 \\}$ and let $\\mathcal{S}$ be a uniformly random subset\n of $\\mathcal{S}'$ with size $n$; set $X_i =\n X_i'$ for $i \\in \\mathcal{S}$.\n \n \\item \\label{enumerate:the-coupling-step}For each $I \\in \\{ I_1, \\ldots,\n I_{2^m} \\}$, denote $\\mathcal{S}_I' =\\mathcal{S}' \\cap I$. Select a\n uniformly random subset of $\\mathcal{S}_I'$ with at most $n$ indices which is a\n superset of $\\mathcal{S} \\cap I$. 
In more detail, if $| \\mathcal{S} \\cap\n I | < n$, select $\\min (| \\mathcal{S}_I' |, n) - | \\mathcal{S} \\cap I |$\n elements uniformly at random from $\\mathcal{S}_I' \\setminus (\\mathcal{S} \\cap\n I)$ to add to $\\mathcal{S} \\cap I$, which becomes $\\mathcal{S}_I$;\n else, let $\\mathcal{S}_I =\\mathcal{S} \\cap I$. Repeat a similar process for $I^c$ to obtain\n $\\mathcal{S}_{I^c}$.\n \n \\item For each $I \\in \\{ I_1, \\ldots, I_{2^m} \\}$, set $Y_I = \\sum_{i \\in\n \\mathcal{S}_I} X_i'$ and $Y_{I^c} = \\sum_{i \\in \\mathcal{S}_{I^c}} X_i'$.\n \\end{enumerate}\n \n From the above definition, we can\n readily see that for any $I$, $Y_I \\geq Z_I$ and $Y_{I^c} \\geq\n Z_{I^c}$. What is left is to argue that $Y_I \\sim \\min (\\tmop{Bin} (| I\n |, p), n)$ and $Y_{I^c} \\sim \\min (\\tmop{Bin} (| I^c |, p), n)$. We start by\n noting that for any $k < n$, $\\{ Y_I = k \\} = \\{ | \\mathcal{S}_I | = k \\} =\n \\{ | \\mathcal{S}_I' | = k \\}$. The last equality comes from the fact that $|\n \\mathcal{S}_I | < n$ can only mean that $| \\mathcal{S}_I' | < n$, and the\n selection process in step \\ref{enumerate:the-coupling-step} will thus add all elements from\n $\\mathcal{S}_I'$ to $\\mathcal{S}_I$. From here, we have $\\Pr [Y_I = k] = \\Pr\n [| \\mathcal{S}_I' | = k] = \\Pr [\\tmop{Bin} (| I |, p) = k]$, for $k < n$; and we\n have $\\Pr [Y_I = n] = 1 - \\Pr [Y_I < n] = \\Pr [\\tmop{Bin} (| I |, p) \\geq\n n]$. As a result, $Y_I \\sim \\min (\\tmop{Bin} (| I |, p), n)$. 
Similarly, we\n can argue that $Y_{I^c} \\sim \\min (\\tmop{Bin} (| I^c |, p), n)$.\n\\end{proof}\n\n\n\n\\section{Introduction}\n\\input{sec-intro}\n\n\\section{Preliminaries}\n\\input{sec-preliminaries}\n\n\\section{Upper Bound for Independence Testing}\n\\input{sec-upperbound}\n\n\\section{The $\\Omega(2^{d\/2}{n} \/ \\varepsilon^2)$ Lower Bound}\n \\label{sec:lb:outline}\n\\input{sec-lowerbound}\n\n\\iffalse\n\\section{Extension to Testing Substructure}\n \\input{..\/testing_substructure}\n\\fi\n\n\\section{Useful results on the MGFs of Binomials and Multinomials}\n \\label{ssec:tools:mgf}\n\\input{sec-mgfs}\n\n\\section*{Acknowledgments}\nYang would like to thank Vipul Arora and Philips George John for the helpful discussions on the lower bound analysis; and Vipul, specifically, for providing valuable feedback on the manuscript.\n\n\\bibliographystyle{plainnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn this paper we derive stability estimates in several norms for the $L^{2}$ projection and a family\nof quasiinterpolants in the space of smooth, periodic, polynomial splines on a uniform mesh in a \nfinite interval. Such estimates are of course useful in deriving approximation properties for the\n$L^{2}$ projection and the quasiinterpolant. They have also been used in deriving optimal-order-of\n-accuracy error estimates for Galerkin finite-element approximations to the solution of e.g. first-\norder hyperbolic problems and nonlinear dispersive wave pde's, cf. e.g. \\cite{bdk},\n\\cite{dk}, \\cite{bdkmc}, \\cite{adm}, \\cite{ad1}, \\cite{adk}. Stability estimates for the $L^{2}$ \nprojection onto finite element spaces have been derived in several works. 
For example, see \n\\cite{ddw} for estimates in $L^{\\infty}$ in the case of a quasiuniform mesh in one dimension, \n\\cite{ct} for estimates in $L^{p}$ and the Sobolev spaces $W_{p}^{1}$, $1\\leq p\\leq \\infty$, in one and two \ndimensions for more general than quasiuniform types of meshes, and also the references of these \npapers. Here, we take advantage of the uniform mesh, and the periodicity of the underlying space of \nsmooth splines, and use matrix methods to derive suitable decay estimates of the elements of the\ninverse of a Gram matrix associated with the standard basis of the space of splines. To this end, we\nuse properties of cyclic (circulant) matrices listed in \\cite{se}, and relevant results of Demko\n\\emph{et al.}, \\cite{dms}, and Bini and Capovani, \\cite{bc}. These decay properties enable us to\nprove stability estimates for the $L^{2}$ projection in the $L^{2}$-based Sobolev spaces \n$H_{per}^{l}$ of periodic functions, for $l=0,\\dots,r-1$, where $r$ is the order of the spline space,\nand for continuous, periodic functions in $W_{\\infty}^{l}$, for $l=0,\\dots,r-1$. \\par\nQuasiinterpolants of continuous, periodic functions in the space of smooth, periodic splines have\nbeen studied, among other, in \\cite{s}, \\cite{tw}, \\cite{t}, \\cite{ds}, and in references of \nthese works. All these quasiinterpolants achieve optimal-order accuracy in $L^{2}$ provided the\nfunctions they approximate are smooth enough. Of particular interest is the Thom{\\'e}e-Wendroff\nquasiinterpolant, \\cite{tw}, for which $L^{2}$ inner products of truncation errors with elements of\na special basis of the spline space are superaccurate due to cancellations. This enables one to prove\noptimal-order $L^{2}$-error estimates for Galerkin approximations of the solutions of the periodic\ninitial-value-problems for the types of pde's previously mentioned.\\par\nIn section 2 of the paper at hand we introduce the spline spaces and their standard basis. 
In \nsection 3 we list a series of properties of cyclic matrices that will be needed in the sequel, \nmostly following \\cite{se}. In section 4, based on results from \\cite{dms} and \\cite{bc}, we \nestablish the required decay estimates for the elements of the inverse of the Gram matrix. These are\nneeded in section 5 in order to establish stability results for the $L^{2}$ projection onto the spline\nspaces in the Sobolev spaces $H_{per}^{l}$ and $W_{\\infty}^{l}\\cap H_{per}^{l}$ for \n$l=0,1,\\dots,r-1$. Finally, in section 6, we consider the quasiinterpolants and prove stability \nestimates for them in $H_{per}^{l}$ and $W_{\\infty}^{l}\\cap H_{per}^{l}$, for $l=1,\\dots,r-1$, and in\n$C_{per}$. These estimates do not depend of course on the decay results of section 4.\\par\nWe use the following notation: For integer $k\\geq 0$, $C_{per}^{k}$ ($C_{per}\\equiv C_{per}^{0}$)\nwill denote the space of continuous, 1-periodic functions that are $k$ times continuously\ndifferentiable. (By $C_{per}^{k}([0,1])$ we mean the restriction on $[0,1]$ of such functions).\nAnalogously, $L_{per}^{2}$ will denote the space of 1-periodic functions that are \nsquare-integrable over one period. The $L^{2}$ inner product, resp. norm, on $[0,1]$ will be denoted by\n$(\\cdot,\\cdot)$, resp. $\\|\\cdot\\|$. For integer $l\\geq 0$, the norm on $[0,1]$ of the \n$L^{2}$-based Sobolev spaces of 1-periodic functions will be denoted by $\\|\\cdot\\|_{l}$, while\nthe analogous norm of $W_{\\infty}^{l}$ by $\\|\\cdot\\|_{l,\\infty}$. \nThe Euclidean inner product, resp. norm, on $\\mathbb{R}^{N}$ will be denoted by \n$\\eup{\\cdot,\\cdot}$, resp. $\\abs{\\cdot}$, while\n$\\mathbb{P}_{r}$ will stand for the polynomials of degree at most $r$.\nFinally, for $a>0$ we will denote by $\\floor{a}$ the largest integer that is less than or equal to \n$a$.\n\\section{Spline spaces, basis} Let $r$ and $N$ be integers such that $r\\geq 2$, $N\\geq 4r$. 
Let\n$h=1\/N$ and $x_{i}=ih$, $i=0,1,\\dots,N$, be a uniform partition of $[0,1]$. We consider the \n$N$-dimensional space of the 1-periodic, smooth, piecewise polynomial splines\n\\[\n\\mathcal{S}_{h}^{r}=\\{v\\in C_{per}^{r-2}[0,1] : v_{|(x_{i-1},x_{i})}\\in\n\\mathbb{P}_{r-1}, \\,\\, i=1,2,\\dots,N\\},\n\\]\nand the space\n\\[\n\\mathcal{S}_{h}^{1} = \\{v\\in L_{per}^{2}(0,1) : v_{|(x_{i-1},x_{i})}=\\text{constant},\n\\,\\, i=1,2,\\dots,N\\}.\n\\]\nFollowing e.g. \\cite{tw}, we define a standard basis of $\\mathcal{S}_{h}^{r}$ as follows. Let \n$v_{1}$ be the characteristic function of $[-\\tfrac{1}{2},\\tfrac{1}{2}]$ and $v_{r}$ the convolution\nof $v_{1}$ with itself $r-1$ times. Thus, $v_{r}$, with support $[-\\tfrac{1}{2}r,\\tfrac{1}{2}r]$, is\nthe $B$-spline of order $r$. If\n\\[\n\\phi_{l}(x)=v_{r}(h^{-1}x - l - \\tfrac{r-2}{2}),\\,\\, l\\in \\mathbb{Z},\\quad\n\\text{and}\\quad\n\\Phi_{j}(x) =\\sum_{l\\in \\mathbb{Z}}\\phi_{j+lN}(x),\\,\\, j=1,2,\\dots, N,\n\\]\nthen, the restrictions of $\\{\\Phi_{j}\\}_{j=1}^{N}$ on $[0,1]$ are a basis of $\\mathcal{S}_{h}^{r}$. \\par\n{\\bf{Remarks:}} 1. Here we define $\\Phi_{j}$ in terms of \n$\\phi_{l}(x)=v_{r}(h^{-1}x - l - \\tfrac{r-2}{2})$ and not in the standard way via \n$\\phi_{l}(x)=v_{r}(h^{-1}x - l)$, since we wish that the support of $\\Phi_{j}$ in $[0,1]$ to be\nan interval or union of intervals with endpoints that are integer multiples of $h$. Thus, in \n$[0,1]$, supp$\\Phi_{j}=[x_{j-1},x_{j+r-1}]$, for $j=1,2,\\dots,N-(r-1)$, and \nsupp$\\Phi_{j}=[x_{j-1},1]\\cup [0,x_{j-(N-(r-1))}]$ for $j=N-(r-2),N-(r-3),\\dots,N$. \\\\\n2. If $v\\in\\mathcal{S}_{h}^{r}$, then $v$ may be written in the form \n\\[\nv(x) = \\sum_{j=1}^{N}V_{j}\\Phi_{j}(x), \\quad \\text{or}\\quad \nv(x)=\\sum_{l\\in\\mathbb{Z}}V_{l}\\phi_{l}(x),\n\\]\nwhere the coefficients $V_{l}$ are periodic with period $N$, i.e. $V_{l}=V_{l+N}$, $l\\in\\mathbb{Z}$.\nIn addition it holds that $\\Phi_{l}(x)=\\Phi_{l+N}(x)$ for any integer $l$.\\\\\n3. 
A basis of $\\mathcal{S}_{h}^{1}$ is, obviously, the set $\\{\\Phi_{j}\\}_{j=1}^{N}$, where\n$\\Phi_{j}(x)=v_{1}(h^{-1}x-j+\\tfrac{1}{2})$.\\\\\n4. If $\\{\\Phi_{j}\\}_{j=1}^{N}$, $\\{\\Psi_{j}\\}_{j=1}^{N}$ are the bases of $\\mathcal{S}_{h}^{r}$,\n$\\mathcal{S}_{h}^{r-1}$, respectively, then $\\Phi_{j}'(x)=h^{-1}(\\Psi_{j}(x) - \\Psi_{j+1}(x))$.\nIndeed, it follows by the definition of $v_{r}$ that\n\\[\nv_{r}(x)=\\int_{x-\\frac{1}{2}}^{x+\\frac{1}{2}}v_{r-1}(y)dy,\n\\]\nfrom which $v_{r}'(x) = v_{r-1}(x+\\frac{1}{2}) - v_{r-1}(x - \\frac{1}{2})$, and since\n$\\Phi_{j}(x)=\\sum_{l\\in\\mathbb{Z}}v_{r}(h^{-1}x - lN - j - \\frac{r-2}{2})$, it follows that\n\\begin{align*}\n\\Phi_{j}'(x) & = h^{-1}\\sum_{l\\in\\mathbb{Z}}\n\\Bigl[v_{r-1}\\Bigl(h^{-1}x - lN-j-\\frac{r-2}{2} +\n\\frac{1}{2}\\Bigr) - v_{r-1}\\Bigl(h^{-1}x -\nlN-j-\\frac{r-2}{2} -\\frac{1}{2}\\Bigr)\\Bigr]\\\\\n& = h^{-1}\\sum_{l\\in\\mathbb{Z}}\\Bigl[v_{r-1}\\Bigl(h^{-1}x - lN-j - \\frac{r-3}{2}\\Bigr)\n- v_{r-1}\\Bigl(h^{-1}x - lN - j-1 - \\frac{r-3}{2}\\Bigr)\\Bigr]\\\\\n& = h^{-1}\\bigl(\\Psi_{j}(x) - \\Psi_{j+1}(x)\\bigr).\n\\end{align*}\n5. If $G$ is the Gram matrix with elements $h^{-1}(\\Phi_{k},\\Phi_{j})$, then, cf. \n\\cite[Lemma 2.1]{tw}, $G$ is a symmetric, positive definite, cyclic $N\\times N$ matrix with\neigenvalues $g(2\\pi l\/N)$, $l=1,2,\\dots,N$, where\n\\[\ng(x) = \\sum_{l\\in\\mathbb{Z}}\\hat{v}_{r}(x + 2\\pi l)^{2}, \\qquad\n\\hat{v}_{r}(x)=\\Biggl(\\frac{2\\sin\\frac{1}{2}x}{x}\\Biggr)^{r}.\n\\]\nMoreover,\n\\[\n\\underline{g}\\abs{V}^{2} \\leq \\eup{GV,V} \\leq \\overline{g}\\abs{V}^{2},\n\\]\nprovided $\\underline{g}\\leq g(x)\\leq \\overline{g}$. It follows that: \\\\\n(i)\\,\\, The elements of $G$ are numbers independent of $h$.\\\\\n(ii)\\, The function $g$ is periodic with period $2\\pi$, $g(0)=g(2\\pi)=1$, and\n$g(\\theta)\\in [\\underline{g},\\overline{g}]$ for all $\\theta\\in\\mathbb{R}$, \n\\indent\\,\\,\\,\\, where $\\overline{g}=1$\nand $\\underline{g} >0$. 
In addition, the maximum eigenvalue of $G$ is $\\lambda_{\\max}(G)=1$.\\\\\n(iii)\\, The eigenvalues of $G$ are included in the interval $[\\underline{g},1]$ and \n$\\underline{g}$ does not depend on $N$ but only \\indent\\,\\,\\,\\,\\, on $r$.\\\\\n(iv)\\,\\, For each element of the inverse of $G$ there holds \n$\\abs{(G^{-1})_{ij}}\\leq 1\/\\lambda_{\\min}(G)\\leq 1\/\\underline{g}$, where $\\lambda_{\\min}(G)$ \n\\indent \\,\\,\\,\\,\\,\nis the smallest eigenvalue of $G$. In particular, $\\underline{g}\\leq (G^{-1})_{11}\\leq 1$.\\\\\n(v)\\,\\, If $v_{h}=\\sum_{j=1}^{N}V_{j}\\Phi_{j}$ is an element of $\\mathcal{S}_{h}^{r}$, then\n\\begin{equation}\n\\underline{g}h\\abs{V}^{2} \\leq \\|v_{h}\\|^{2} \\leq \\overline{g}h\\abs{V}^{2}.\n\\label{eq21}\n\\end{equation} \nIt is also well known that the following inverse inequality holds in $\\mathcal{S}_{h}^{r}$: There\nexists a constant $C$ independent of $h$, such that\n\\begin{equation}\n\\|v_{hx}\\|\\leq Ch^{-1}\\|v_{h}\\|,\n\\label{eq22}\n\\end{equation}\nfor all $v_{h}\\in \\mathcal{S}_{h}^{r}$.\n\\section{Cyclic matrices} \nIn what follows we state some facts about cyclic (or circulant) matrices that will be useful in the\nsequel. An $N\\times N$ matrix $C$ is called cyclic when\n\\begin{equation*}\nc_{jk}=\n\\begin{cases}\nc_{k-j+1}\\,, & \\text{if} \\,\\,\\, k\\geq j,\\\\\nc_{k-j+N+1}\\,,& \\text{if} \\,\\,\\, k < j.\n\\end{cases}\n\\end{equation*}\n\\section{Decay estimates for the inverse of the Gram matrix}\nWe will use the following decay estimate of Demko \\emph{et al.}, \\cite{dms}, for the elements of the inverse of a banded, symmetric, positive definite matrix.\n\\begin{proposition} Let $B$ be a symmetric, positive definite $N\\times N$ matrix such that $B_{ij}=0$ if $\\abs{i-j} > k$ for some positive integer $k$. 
If $\\lambda_{\\min}(B)$, \n$\\lambda_{\\max}(B)$ are the minimum and the maximum eigenvalue, respectively, of $B$ and \n$\\lambda_{\\mu}=(\\lambda_{\\min}(B))^{1\/2}$, $\\lambda_{m}=(\\lambda_{\\max}(B))^{1\/2}$, then\n\\begin{equation}\n\\abs{(B^{-1})_{ij}} \\leq C_{B} q_{B}^{-\\abs{i-j}},\n\\label{eq43}\n\\end{equation}\nwhere\n\\begin{equation}\nC_{B} = \\frac{1}{\\lambda_{\\mu}^{2}}\\max \\Bigl(1, \\frac{(\\lambda_{\\mu} \n+ \\lambda_{m})^{2}}{2\\lambda_{m}^{2}}\\Bigr),\\quad\nq_{B} = \\Bigl(\\frac{\\lambda_{m}+\\lambda_{\\mu}}{\\lambda_{m} - \\lambda_{\\mu}}\\Bigr)^{1\/k}.\n\\label{eq44}\n\\end{equation}\n\\end{proposition} \nBased on this result we will prove the following lemma for the inverse of $G$.\n\\begin{lemma} Let $\\{\\Phi_{j}\\}_{j=1}^{N}$ be the basis of $\\mathcal{S}_{h}^{r}$ defined in section 2,\n$G$ be the $N\\times N$ matrix defined by $G_{ij} = h^{-1}(\\Phi_{j},\\Phi_{i})$, and let $\\Gamma$ be\nthe inverse of $G$. If $\\gamma$ is the first row of $\\Gamma$, there exist positive constants\n$C_{1}$, $C_{2}$, $q$, independent of $N$, such that \n\\begin{equation}\n\\abs{\\gamma_{i}} \\leq C_{1}q^{-(i-1)} + C_{2}q^{-(N-i)},\n\\label{eq45}\n\\end{equation}\nfor $i=r,r+1,\\dots, \\floor{N\/2}+1$. Moreover, there exists a constant $C_{3}$, independent of $N$,\nsuch that \n\\begin{equation}\n\\sum_{i=r}^{\\floor{\\frac{N}{2}}+1}(1 + i)\\abs{\\gamma_{i}} \\leq C_{3}.\n\\label{eq46}\n\\end{equation}\n\\end{lemma}\n\\begin{proof} Let $\\wt{G}$ be the $N\\times N$ symmetric, banded matrix with elements\n$\\wt{G}_{ij} = G_{ij}$, if $\\abs{i-j}\\leq r-1$, and $\\wt{G}_{ij}=0$, if $\\abs{i-j}>r-1$. Let\n$\\mathcal{G}$ be the $(N+2r-2)\\times (N+2r-2)$ cyclic matrix with elements \n$\\mathcal{G}_{ij}=\\mathfrak{h}^{-1}(\\Upphi_{j},\\Upphi_{i})$, $1\\leq i,j\\leq N+2r-2$,\nwhere $\\mathfrak{h}=1\/(N+2r-2)$, and $\\{ \\Upphi_{j} \\}_{j=1}^{N+2r-2}$ is the basis of \n$\\mathcal{S}_{\\mathfrak{h}}^{r}$. 
Hence $\\mathcal{G}$ is a\n`cyclic extension' of $\\wt{G}$, in the sense that $\\wt{G}$ is obtained from $\\mathcal{G}$ if we omit\nthe first $r-1$ and the last $r-1$ columns of $\\mathcal{G}$, and also the first $r-1$ and the last\n$r-1$ rows of $\\mathcal{G}$. Following Bini and Capovani, cf. \\cite[Proposition 4.2]{bc},\nwe obtain that $\\lambda_{\\min}(\\mathcal{G})\\leq \\lambda_{\\min}(\\wt{G})$. Hence \n$\\lambda_{\\min}(\\wt{G}) \\geq \\underline{g} >0$, i.e. $\\wt{G}$ is positive definite, and by\nProposition 4.1, we have \n\\begin{equation}\n\\abs{(\\wt{G}^{-1})_{ij}} \\leq C_{\\wt{G}}q_{\\wt{G}}^{-\\abs{i-j}},\n\\label{eq47}\n\\end{equation}\nwhere $C_{\\wt{G}}$, $q_{\\wt{G}}$ are defined as in Proposition 4.1 for $B=\\wt{G}$ and $k=r-1$. Since\n$\\lambda_{min}(\\wt{G})\\geq\\underline{g}>0$, it follows that $C_{\\wt{G}}$ is bounded above by a\nconstant independent of $N$ (see Remark 5 (iii) in section 2.) In addition, using Proposition 4.2 of\n\\cite{bc}, we also conclude that $\\lambda_{min}(\\wt{G}) < \\lambda_{max}(\\wt{G})$ since the\neigenvalues of $\\mathcal{G}$ have the same property. We conclude that $q_{\\wt{G}}$ is also\nindependent of $N$. (Of course $C_{\\wt{G}}$, $q_{\\wt{G}}$ depend on $r$.) Now, the cyclic matrix\n$G$ is written in the form $G=\\wt{G}+\\wt{W}$, where\n\\begin{equation*}\n\\wt{W} = \n\\begin{pNiceMatrix}\n0 & & W \\\\\n\\hline\n & 0 & \\\\\n\\hline\nW^{T} & & 0 \n\\end{pNiceMatrix}\n, \\qquad\nW =\n\\begin{pmatrix}\ng_{r} & g_{r-1} & \\cdots & g_{2} \\\\\n0 & g_{r} & \\cdots & g_{3} \\\\\n\\vdots & \\ddots & \\ddots & \\vdots \\\\\n 0 & \\cdots & 0 & g_{r}\n\\end{pmatrix},\n\\end{equation*}\nand therefore $\\wt{W}\\gamma=(W\\ul{\\gamma},0,\\dots,0,W^{T}\\ol{\\gamma})^{T}$, where\n$\\ul{\\gamma} = (\\gamma_{r},\\gamma_{r-1},\\dots,\\gamma_{2})^{T}$,\n$\\ol{\\gamma}=(\\gamma_{1},\\gamma_{2},\\dots,\\gamma_{r-1})^{T}$. 
Since\n$G\\gamma=e_{1}$ we have $\\gamma = (\\wt{G})^{-1}e_{1} - (\\wt{G}^{-1})\\wt{W}\\gamma$, from which, for\n$r\\leq i\\leq \\floor{N\/2}+1$, we see that\n\\begin{equation}\n\\begin{aligned}\n\\gamma_{i} & =(\\wt{G}^{-1})_{i1} - \\sum_{j=1}^{r-1}(W\\ul{\\gamma})_{j}(\\wt{G}^{-1})_{ij}\n-\\sum_{j=1}^{r-1}(W^{T}\\ol{\\gamma})_{j}(\\wt{G}^{-1})_{i,N-(r-1-j)} \n= (1 - (W\\ul{\\gamma})_{1})(\\wt{G}^{-1})_{i1} \\\\\n& \\hspace{53pt} \n- (W^{T}\\ol{\\gamma})_{1}(\\wt{G}^{-1})_{i,N-(r-2)} \n-\\sum_{j=2}^{r-1}\\Bigl((W\\ul{\\gamma})_{j}(\\wt{G}^{-1})_{ij}\n+ (W^{T}\\ol{\\gamma})_{j}(\\wt{G}^{-1})_{i,N-(r-1-j)}\\Bigr).\n\\end{aligned}\n\\label{eq48}\n\\end{equation}\nBut $g_{1}\\gamma_{1} + 2(g_{2}\\gamma_{2} + \\dots g_{r}\\gamma_{r})=1$, which yields\n\\[\n(W\\ul{\\gamma})_{1} = \\sum_{j=2}^{r}g_{j}\\gamma_{j}=\\frac{1}{2}(1-g_{1}\\gamma_{1}).\n\\]\nIn addition, $(W^{T}\\ol{\\gamma})_{1}=g_{r}\\gamma_{1}$, and therefore \\eqref{eq48} may be written as\n\\begin{equation}\n\\gamma_{i}=\\frac{1+g_{1}\\gamma_{1}}{2}(\\wt{G}^{-1})_{i1}\n- g_{r}\\gamma_{1}(\\wt{G}^{-1})_{i,N-(r-2)}\n-\\sum_{j=2}^{r-1}\\Bigl((W\\ul{\\gamma})_{j}(\\wt{G}^{-1})_{ij}\n+(W^{T}\\ol{\\gamma})_{j}(\\wt{G}^{-1})_{i,N-(r-1-j)}\\Bigr).\n\\label{eq49}\n\\end{equation}\nNow, since $g_{1}+2(g_{2}+g_{3}+\\dots+g_{r})=1$, $g_{j}>0$, for\n$j=1,2,\\dots,r$, $0<\\gamma_{1}\\leq 1$ and $\\abs{\\gamma_{j}}\\leq 1\/\\ul{g}$,\nit follows that \n$\\abs{(W\\ul{\\gamma})_{j}} \\leq 1\/\\ul{g}$ and $\\abs{(W^{T}\\ol{\\gamma})_{j}} \\leq 1\/\\ul{g}$,\nfor $j=2,3,\\dots,r$. 
Hence, \\eqref{eq48} and \\eqref{eq43} give, for\n$r\\leq i\\leq \\floor{N\/2}+1$,\n\\begin{align*}\n\\abs{\\gamma_{i}} & \\leq Cq^{-(i-1)}+ Cq^{-(N-r+2-i)}\n+\\frac{C}{\\ul{g}}\\Bigl(\\sum_{j=2}^{r-1}q^{-(i-j)}\n+ \\sum_{j=2}^{r-1}q^{-(N-r+1+j-i)}\\Bigr)\\\\\n& = Cq^{-(i-1)} + Cq^{r-2}\\cdot q^{-(N-i)} + \\frac{C}{\\ul{g}}\\Bigl(\nq^{-(i-1)}\\cdot\\frac{q^{r-1}-q}{q-1} + q^{-(N-i)}\\cdot\\frac{q^{r-2}-1}{q-1}\\Bigr),\n\\end{align*}\nwhere $C$ is a multiple of ${C}_{\\wt{G}}$ by a constant and where, for simplicity, we have put \n$q = q_{\\wt{G}}$. This finally yields \\eqref{eq45} with\n\\[\nC_{1} = \\Bigl(1 + \\frac{q^{r-1}-q}{\\ul{g}(q-1)}\\Bigr)C,\\quad\nC_{2} = \\Bigl(q^{r-2} + \\frac{q^{r-2}-1}{\\ul{g}(q-1)}\\Bigr)C.\n\\]\nIn order to prove \\eqref{eq46} we consider the polynomial \n$p(x)=x^{r} + x^{r+1} + \\dots +x^{M-1}=(x^{M}-x^{r})\/(x-1)$, for $00$, we obtain\n\\eqref{eq46}.\n\\end{proof}\n{\\bf{Remarks:}} 1. It Follows from \\cite[Theorem 1]{se}, that the elements of $\\Gamma$ for $r=2$\nmay be computed exactly.\\\\\n2. From \\eqref{eq46} and the fact that $\\abs{\\gamma_{i}}\\leq 1\/\\ul{g}$, $1\\leq i1, \\, 3\\leq m\\leq 50$ and $n \\geq 3$ are \n\\begin{align*} (m,x,y,n)= & (5,57121,3107,4), \\; (7,2,2,3), \\; (15,2,2,4), \\; (17,8,6,4), \\\\ & (26,2,3,3), \\; (31,2,2,5) \\; \\text{ and } \\; (50,15,30,3). \\end{align*}\n\\end{theorem}\n\\endgroup\n\nWhen $m = 3$ or $m = 4$, these are known results (see \\citep{m=3} and \\citep{m=4}), and so we will consider the cases $5 \\leq m \\leq 50$. We note that there are no apparent obstructions to extending Theorem \\ref{Mainthm} to larger values of $m$, say $m \\leq 100$ for example, although we do not pursue this here. \n\nThe techniques we use to prove Theorem \\ref{Mainthm} when $n $ is odd will be of a very different flavour to those used in \\citep{part1}, where the main ideas are centred around the study of integral points on elliptic curves. 
Starting from equation (\\ref{maineq}) we will form various systems of binomial Thue equations. We will then use a combination of local arguments and the modular method via Frey curves, along with bounds arising from linear forms in logarithms to prove Theorem \\ref{Mainthm}.\n\nWe now outline the rest of the paper. In Section 2, we use the results of \\citep{part1} to find all solutions to equation (\\ref{maineq}) in the case $n = 4$. We also treat the case $m = 5$. For the remainder of the paper, we will consider the case $n$ an odd prime and $ 6 \\leq m \\leq 50$. In Section 3, we reduce the problem to the study of finitely many systems of binomial Thue equations, and obtain a bound on $n$ using results on linear forms in logarithms. In Section 4, we solve the majority of these Thue equations using local arguments, and finally we use the modular method in various guises in Section 5 to deal with the remaining cases. \n\n\\bigskip\n\nThe \\texttt{Magma} \\citep{magma} code used to support the computations in this paper can be found at: \n\n\\vspace{3pt}\n\n \\url{https:\/\/warwick.ac.uk\/fac\/sci\/maths\/people\/staff\/michaud\/c\/}\n\n\\bigskip\n\nThe third-named author would like to thank Samir Siksek and Damiano Testa for many useful discussions.\n\n\\section{The case \\texorpdfstring{$n = 4$}{} and the case \\texorpdfstring{$m = 5$}{}}\n\nIn this section we will treat certain cases that are not amenable to the methods of the later parts of the paper.\n\n\\begin{lemma}\\label{n=4Lem} Let $(m,x,y)$ be a solution to equation (\\ref{maineq}) with $n = 4$, $x > 0$, $y >1 $, and $6 \\leq m \\leq 50$. Then \\[(m,x,y) = (15,2,2) \\; \\text{ or } \\; (17,8,6). \\]\n\\end{lemma}\n\n\\begin{proof} Let $(m,x,y)$ be such a solution to equation (\\ref{maineq}). Then $(m,x,y^2)$ is a solution to equation (\\ref{maineq}) with $n = 2$, and so we can apply the results of \\citep[pp.~218--219]{part1} to obtain all solutions for $6 \\leq m \\leq 50$. 
The two solutions we find are those stated in the lemma.\n\\end{proof}\n \n\\begin{lemma}\\label{m=5Lem} Let $m = 5$. Let $(x,y,n)$ be a solution to equation (\\ref{maineq}) with $x > 0$, $y > 1$, and $n \\geq 3$. Then \\[(x,y,n) = (57121, 3107, 4). \\]\n\\end{lemma} \n \n\\begin{proof} Suppose $(x,y,n)$ is such a solution to equation (\\ref{maineq}). We have \\begin{equation}\\label{m=5} x^2(x+1) = 2y^n. \\end{equation} Suppose first that $n = 4$. Then $2\\operatorname{ord}_2(x)+ \\operatorname{ord}_2(x+1) = 1 + 4 \\operatorname{ord}_2(y)$, so $x$ is odd and $x + 1$ is even. We have \\[ x^2 \\left (\\frac{x+1}{2} \\right) = y^4, \\] and $\\gcd(x^2, (x+1)\/2) = 1$. It follows that $x^2 = y_1^4$ and $x+ 1 = 2y_2^4$ for some coprime integers $y_1$ and $y_2$ with $y = y_1y_2$. Then $x = y_1^2$, since $x$ is positive, so \\begin{equation} y_1^2 + 1 = 2y_2^4. \\end{equation} This equation is known as Ljunggren's equation, since Ljunggren proved that the only positive integer solutions to this equation are given by $(y_1,y_2) = (1,1)$ and $(y_1,y_2) = (239,13)$ (see \\citep{ljunggren} for the original proof, which is somewhat involved, or \\citep{simpler} for an example of a simpler proof). Since $y > 1$, we obtain the solution $(x,y) = (y_1^2, y_1y_2) = (57121,3107)$ to equation (\\ref{maineq}).\n\nSuppose instead that $n$ is odd. In this case $x$ may be odd or even. If $x$ is odd, then arguing similarly to above, there exist coprime integers $y_1$ and $y_2$ satisfying \\begin{equation*} y_1^n + 1 = 2y_2^n. \\end{equation*} This equation has no solutions with $y_1y_2 > 1$ and $n \\geq 3$ by \\citep[Main~Theorem]{ll2}. If $x$ is even, then we find that there exist coprime integers $y_1$ and $y_2$ satisfying \\[ x^2 = 2y_1^n \\; \\text{ and } \\; x+1 = y_2^n. \\] From $x^2 = 2y_1^n$, we see that $\\operatorname{ord}_2(y_1) = k \\geq 1$, with $k$ odd, and we may write $y_1 = 2^k z_1^2$ for some positive integer $z_1$ coprime to $y_2$. 
Then \\[ x = 2^{\\frac{kn+1}{2}} z_1^n, \\] and so \\[2^{\\frac{kn+1}{2}} z_1^n - y_2^n = 1. \\] By \\citep[Theorem~1.2]{consecutive}, this equation has no solutions in positive coprime integers. This completes the case $m = 5$.\n\\end{proof}\n\n\\section{Systems of binomial Thue equations}\n\nThanks to Lemmas \\ref{n=4Lem} and \\ref{m=5Lem}, we may suppose that $n \\geq 3$ is odd and that $6 \\leq m \\leq 50$. Moreover, we will assume that $n = p$ is an odd prime. We write equation (\\ref{maineq}) as \\begin{equation}\\label{eq2} x(x+1)(a_mx-b_m) = c_m y^p, \\end{equation} \nwhere \\begin{align*} (a_m,b_m,c_m) = \\begin{cases} \\left(\\frac{m-2}{3}, \\frac{m-5}{3}, 2 \\right) & \\text{ if } \\; m \\equiv 2 \\pmod{3}, \\\\ (m-2, m-5, 6) & \\text{ if } \\; m \\not\\equiv 2 \\pmod{3}. \\end{cases} \\end{align*}\nWe introduce the notation \\begin{equation*} d_1 = \\gcd(x, a_mx-b_m) \\quad \\text{and} \\quad d_2 = \\gcd(x+1,a_mx-b_m). \\end{equation*} By writing $a_mx-b_m = a_m(x+1) - (a_m+b_m)$, it is straightforward to see that \\[ \\gcd(x,x+1) = 1, \\; \\; d_1 \\mid b_m, \\; \\; \\text{ and} \\; \\; d_2 \\mid a_m+b_m. \\] Even though $d_1$ and $d_2$ are unknown, we know a finite list of possibilities for each one; namely the divisors of $b_m$ and $a_m+b_m$ respectively. We also note that $\\gcd(a_m,b_m) = 1$, since $m \\ne 3$.\n\nWe may now divide both sides of equation (\\ref{eq2}) by $c_m$ and the appropriate $p$th powers of $d_1$ and $d_2$ to obtain \\begin{equation} \\label{denom} \\left( \\frac{x}{A} \\right) \\left( \\frac{x+1}{B} \\right) \\left( \\frac{a_mx-b_m}{C} \\right) = Y^p. \\end{equation} Here, $Y \\mid y$ is a positive integer, $A, B,$ and $C$ are positive integers satisfying \\[ A \\mid c_m (b_m)^p, \\quad B \\mid c_m(a_m+b_m)^p, \\quad C \\mid c_m (b_m)^p (a_m+b_m)^p, \\] and the three factors on the left-hand side of equation (\\ref{denom}) are integral and pairwise coprime. Moreover, we may assume that $A$, $B$, and $C$ are $p$th power free. 
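As a quick consistency check on this rewriting (the helper code below is ours and not part of the argument): the passage from (\ref{maineq}) to (\ref{eq2}) is purely algebraic, so every solution listed in Theorem \ref{Mainthm} (including those with $n = 4$) must satisfy $x(x+1)(a_mx-b_m) = c_m y^n$ with the stated $(a_m, b_m, c_m)$. In Python:

```python
def coeffs(m):
    """(a_m, b_m, c_m) as defined below equation (eq2); assumes m > 3."""
    if m % 3 == 2:
        return ((m - 2) // 3, (m - 5) // 3, 2)
    return (m - 2, m - 5, 6)

# All solutions listed in Theorem (Mainthm), as (m, x, y, n).
solutions = [(5, 57121, 3107, 4), (7, 2, 2, 3), (15, 2, 2, 4), (17, 8, 6, 4),
             (26, 2, 3, 3), (31, 2, 2, 5), (50, 15, 30, 3)]
for m, x, y, n in solutions:
    a, b, c = coeffs(m)
    assert x * (x + 1) * (a * x - b) == c * y**n, (m, x, y, n)
```

For instance, for $(m,x,y,n) = (26,2,3,3)$ one has $(a_m,b_m,c_m) = (8,7,2)$ and $2 \cdot 3 \cdot 9 = 54 = 2 \cdot 3^3$.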
Later in this section we will provide, for each $m$, the precise list of possibilities for the triple $(A,B,C)$. We emphasise that the triple $(A,B,C)$ depends on $m$, and will usually also depend on $p$. It follows that there exist positive pairwise coprime integers $y_1, y_2,$ and $y_3$ satisfying\n\\begin{equation*} x = Ay_1^p, \\qquad x+1 = By_2^p, \\qquad a_mx-b_m = Cy_3^p. \\end{equation*} This leads us to consider the following system of binomial Thue equations: \\begin{align*} \nB \\, y_2^p - A \\, y_1^p & = 1 \\\\\na_m A \\, y_1^p - C \\, y_3^p & = b_m \\\\\na_m B \\, y_2^p - C \\, y_3^p & = a_m+b_m. \n\\end{align*} We observe that the third equation is dependent on the first two, so we in fact have the following system of two Thue equations:\n\\begin{align} \\label{system}\n\\begin{split}\nB \\, y_2^p - A \\, y_1^p & = 1 \\\\\na_m A \\, y_1^p - C \\, y_3^p & = b_m. \n\\end{split}\n\\end{align}\nFor each fixed value of $m$, we aim to solve this system of equations for each possible triple $(A,B,C)$. We start by providing a bound for $p$ using results on linear forms in logarithms.\n\n\\begin{proposition}\\label{LogBound} Let $(x,y,m,p)$ be a solution to equation (\\ref{eq2}) with $y > 1$. Then \\[ p < 10676 \\cdot \\log \\left( {c_m}^2 \\cdot b_m \\cdot (a_m+b_m) \\right). \\] \n\\end{proposition}\n\n\\begin{proof} We aim to obtain a binomial Thue equation with coefficients independent of $p$. We divide both sides of equation (\\ref{eq2}) by $c_m \\cdot {d_1}^p \\cdot {d_2}^p$ to obtain \\[ \\left(\\frac{x}{e_1 \\cdot {d_1}^{r_1}} \\right) \\left(\\frac{x+1}{e_2 \\cdot {d_2}^{r_2}} \\right) \\left(\\frac{a_mx-b_m}{D} \\right) = \\left(\\frac{y}{d_1d_2}\\right)^p, \\] where $r_1, r_2 \\in \\{1,p-1\\}$, $e_1, e_2 \\mid c_m$, $D$ is some integer, and the three factors on the left-hand side are integral and pairwise coprime. Now, if $r_i = p-1$ then we rewrite $1\/{d_i}^{r_i}$ as $d_i \/ {d_i}^p$. 
It follows that \\[ x = \\frac{u_1}{v_1} z_1^p \\; \\text{ and } \\; x+1 = \\frac{u_2}{v_2} z_2^p, \\] for some positive integers $u_i, v_i,$ and $z_i$. Moreover, we observe that \\[ u_1v_1 \\mid c_m b_m \\; \\text{ and } \\; u_2v_2 \\mid c_m (a_m+b_m). \\] \nWe then obtain the binomial Thue equation \\begin{equation} \\label{logthue} u_2v_1z_2^p - u_1v_2 z_1^p = v_1v_2, \\end{equation} with $\\max \\{u_2v_1, u_1v_2, v_1v_2 \\} \\leq {c_m}^2 \\cdot b_m \\cdot (a_m+b_m)$. The proposition now follows by applying \\citep[Theorem~2]{mignotte}, a result due to Mignotte obtained using linear forms in logarithms, where we have used $\\lambda \\geq \\log(2)$ in this result, so that $7400 \/ \\lambda < 10676$. \n\\end{proof}\n\nTo give some indication of the magnitude of this bound, when $m = 6$ the bound we obtain is $55440$, and when $m = 50$ the bound is $80372$. We note that for each triple $(A,B,C)$ we could provide a distinct bound on $p$ using the system (\\ref{system}), and this bound will usually be smaller than the one obtained in Proposition \\ref{LogBound}, but for simplicity we use a single bound for each value of $m$.\n\nWe will now list the possibilities for the triple $(A,B,C)$ for each value of $m$. In general, we write \\[b_m = 2^t \\cdot {p_1}^{r_1} \\cdot {q_1}^{s_1} \\quad \\text{ and } \\quad a_m+b_m = {p_2}^{r_2} \\cdot {q_2}^{s_2}, \\] where $p_i$ and $q_i$ are primes that do not divide $c_m$, and $t, r_i, s_i$ are non-negative integers. 
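As an illustrative cross-check of this setup (the helper code below is ours), one can compute $t = \operatorname{ord}_2(b_m)$ for all $6 \leq m \leq 50$ and confirm that the values with $t = 0$ are exactly the even $m$ (these form Case 1 below), and that $b_m$ and $a_m + b_m$ are indeed coprime to $c_m$ for these $m$:

```python
import math

def coeffs(m):
    """(a_m, b_m, c_m) as defined below equation (eq2); assumes m > 5."""
    if m % 3 == 2:
        return ((m - 2) // 3, (m - 5) // 3, 2)
    return (m - 2, m - 5, 6)

def ord2(x):
    """2-adic valuation of a positive integer."""
    t = 0
    while x % 2 == 0:
        x //= 2
        t += 1
    return t

# The m in [6, 50] with t = ord_2(b_m) = 0 are exactly the even ones.
case1 = [m for m in range(6, 51) if ord2(coeffs(m)[1]) == 0]
assert case1 == list(range(6, 51, 2))

# For these m, b_m and a_m + b_m are coprime to c_m, as the primes
# p_i, q_i are required not to divide c_m.
for m in case1:
    a, b, c = coeffs(m)
    assert math.gcd(b, c) == 1 and math.gcd(a + b, c) == 1
```
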
We split into six cases dependent on $t = \\operatorname{ord}_2(b_m)$.\n\n\n\n\\subsection*{Case 1: \\texorpdfstring{$\\operatorname{ord}_2(b_m) = 0$}{}}\n\nHere,\n\\begin{align*} m \\in \\{ & 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, \\\\ & 48, 50 \\}.\\end{align*} We have\n\\begin{itemize}\n\\item $b_m = {p_1}^{r_1} \\cdot {q_1}^{s_1}$ for some $r_1 \\in \\{ 0,1,2 \\}$, $s_1 \\in \\{ 0,1 \\}$, and $p_1, q_1 \\nmid c_m$ are prime;\n\\item $a_m+b_m = {p_2}^{r_2} \\cdot {q_2}^{s_2}$ for some $r_2 \\in \\{ 1,2,3 \\}$, $s_2 \\in \\{0,1\\}$, and $p_2, q_2 \\nmid c_m$ are prime.\n\\end{itemize}\nThen \\begin{align*} A & = 2^{\\alpha_1} \\cdot 3^{\\beta_1} \\cdot {p_1}^{\\gamma_1} \\cdot {q_1}^{\\delta_1}, \\\\\nB & = 2^{\\alpha_2} \\cdot 3^{\\beta_2} \\cdot {p_2}^{\\gamma_2} \\cdot {q_2}^{\\delta_2},\\\\ \nC & = 3^{\\beta_3} \\cdot {p_1}^{p-\\gamma_1} \\cdot {q_1}^{p-\\delta_1} \\cdot {p_2}^{p-\\gamma_2} \\cdot {q_2}^{p-\\delta_2}, \\end{align*}\nwhere \\begin{align*} (\\alpha_1,\\alpha_2) & \\in \\{(0,1),(1,0) \\}, & \\\\\n(\\beta_1, \\beta_2, \\beta_3) & \\in \\begin{cases} \\{ (1,0,0),(0,1,0),(0,0,1) \\} & \\text{ if } c_m = 6, \\\\\n \\{ (0,0,0) \\} & \\text{ if } c_m = 2, \\end{cases} \\\\\n\\gamma_i & \\in \\{0,r_i,p-r_i \\}, & \\\\\n\\delta_i & \\in \\{0,s_i,p-s_i\\}. & \\\\\n\\end{align*}\nWe note that if, say, $\\gamma_1 = 0$, then we use the convention of removing the perfect $p$th power $p_1^p$ from $C$. We do this (in each case) to avoid introducing too many variables.\n\n\\begin{proof}[Proof of Case 1] As discussed earlier in this section, the basic idea is to divide both sides of (\\ref{eq2}) by $p$th powers of $d_1$ and $d_2$ in order to obtain three pairwise coprime factors. We will work one prime at a time.\n\nWe start by dividing both sides of equation (\\ref{eq2}) by $2$. If $x$ is even then $A$ will have a factor of $2$ and both $B$ and $C$ will be odd. Otherwise, both $x+1$ and $a_mx-b_m$ are even. 
In this case we choose to divide $x+1$ by $2$ so that $B$ is even and $A$ and $C$ are odd.\n\nNext, if $c_m = 6$, we divide both sides by $3$, and precisely one of $x$, $x+1$, and $a_mx-b_m$ will be divisible by $3$, so precisely one of $A$, $B$, and $C$ will have a factor of $3$.\n\nWe now consider the prime $p_2$ and split into three cases depending on the value of $r_2$. The other primes ($p_1$, $q_1$, and $q_2$) are dealt with in the same manner.\n\n\\begin{enumerate}[(i)]\n\\item Case $r_2 = 1$. If $p_2 \\nmid d_2$, then our three factors will be pairwise coprime at $p_2$ and there is nothing more to do, so we assume that $p_2 \\mid d_2$. Since $p_2 \\parallel a_m+b_m$, we have that $p_2 \\parallel d_2$.\n\nWe then divide both sides of equation (\\ref{eq2}) by ${p_2}^p$. If $p_2 \\parallel x$, then ${p_2}^{p-1} \\mid a_mx-b_m$, so $B$ will have a factor of $p_2$ and $C$ will have a factor of ${p_2}^{p-1}$. Otherwise, $p_2 \\parallel a_mx-b_m$, so $C$ will have a factor of $p_2$ and $B$ will have a factor of ${p_2}^{p-1}$.\n\\item Case $r_2 = 2$. If $p_2 \\nmid d_2$ then we already have coprimality at $p_2$. If ${p_2}^2 \\parallel d_2$, then after dividing by ${p_2}^p$, one of $B$ and $C$ will have a factor of ${p_2}^2$, and the other will have a factor of ${p_2}^{p-2}$. \n\nNext, if $p_2 \\parallel d_2$, then one of $x+1$ and $a_mx-b_m$ will be divisible by ${p_2}^{p-1}$, and in particular by ${p_2}^2$. If ${p_2}^2 \\mid x+1$, then since ${p_2}^2 \\mid a_m+b_m$, we obtain ${p_2}^2 \\mid a_mx-b_m$, contradicting $p_2 \\parallel d_2$. We obtain a similar contradiction in the case ${p_2}^2 \\mid a_mx-b_m$ and therefore conclude that $\\operatorname{ord}_{p_2}(d_2) \\ne 1$.\n\n\\item Case $r_2 = 3$. If $p_2 \\nmid d_2$ then we argue as in the first two cases. Arguing as in Case (ii), we see that $\\operatorname{ord}_{p_2}(d_2) \\ne 1$ or $2$. Suppose ${p_2}^3 \\parallel d_2$. We then divide both sides of equation (\\ref{eq2}) by ${p_2}^p$. 
One of $B$ and $C$ will have a factor of ${p_2}^3$ and the other a factor of ${p_2}^{p-3}$.\n\\end{enumerate}\n\nWe repeat this process with the primes $p_1$, $q_1$, and $q_2$, until the factors are pairwise coprime. This gives the possibilities listed for the triple $(A,B,C)$. \\end{proof}\n\n\\subsection*{Case 2: \\texorpdfstring{$\\operatorname{ord}_2(b_m) = 1$}{}}\n\nHere, \\[ m \\in \\{ 7, 11, 15, 19, 23, 27, 31, 35, 39, 47 \\}. \\] We have\n\\begin{itemize}\n\\item $b_m = 2 \\cdot {p_1}^{r_1}$ for some $r_1 \\in \\{ 0,1 \\}$, and $p_1 \\nmid c_m$ is prime;\n\\item $a_m+b_m = p_2 \\cdot {q_2}^{s_2}$, for some $s_2 \\in \\{ 0,1 \\}$, and $p_2, q_2 \\nmid c_m$ are prime.\n\\end{itemize}\nThen \\begin{align*} A & = 2^{\\alpha_1} \\cdot 3^{\\beta_1} \\cdot {p_1}^{\\gamma_1}, \\\\\nB & = 2^{\\alpha_2} \\cdot 3^{\\beta_2} \\cdot {p_2}^{\\gamma_2} \\cdot {q_2}^{\\delta_2},\\\\ \nC & = 2^{\\alpha_3} \\cdot 3^{\\beta_3} \\cdot {p_1}^{p-\\gamma_1} \\cdot {p_2}^{p-\\gamma_2} \\cdot {q_2}^{p-\\delta_2},\\end{align*}\nwhere \\begin{align*} (\\alpha_1,\\alpha_2, \\alpha_3) & \\in \\{(1,0,0),(0,1,0),(0,0,1) \\}, & \\\\\n(\\beta_1, \\beta_2, \\beta_3) & \\in \\begin{cases} \\{ (1,0,0),(0,1,0),(0,0,1) \\} & \\text{ if } c_m = 6, \\\\\n \\{ (0,0,0) \\} & \\text{ if } c_m = 2, \\end{cases} \\\\\n\\gamma_1 & \\in \\{0,r_1,p-r_1\\}, & \\\\\n\\gamma_2 & \\in \\{0,1,p-1\\}, & \\\\\n\\delta_2 & \\in \\{0,s_2,p-s_2 \\}. &\n\\end{align*}\n\n\\begin{proof}[Proof of Case 2] For primes away from $2$, we argue in exactly the same way as in Case 1. We only need to consider what happens at the prime $2$. If $x$ is odd, then $a_mx-b_m$ is also odd, and we simply divide equation (\\ref{eq2}) by $2$ so that $B$ has a factor of $2$. Since $a_m+b_m$ is odd, there is nothing more to do.\n\nNow we suppose $x$ is even. Since $\\operatorname{ord}_2(b_m) = 1$, we must have $2 \\parallel d_1$. We divide both sides of equation (\\ref{eq2}) by $2^{p+1}$. 
Either $2 \\parallel x$ and $2^p \\mid a_mx-b_m$, or $2^p \\mid x$ and $2 \\parallel a_mx-b_m$. Since we may absorb any $p$th powers, we have $(\\alpha_1,\\alpha_3) = (0,1)$ or $(1,0)$.\n\\end{proof}\n\n\\subsection*{Case 3: \\texorpdfstring{$\\operatorname{ord}_2(b_m) = 2$}{}}\n\nHere, \\[m \\in \\{ 9,17,25,33,41,49 \\}. \\] We have\n\\begin{itemize}\n\\item $b_m = 4 \\cdot {p_1}^{r_1}$ for some $r_1 \\in \\{ 0,1 \\}$, and $p_1 \\nmid c_m$ is prime;\n\\item $a_m+b_m = {p_2}^{r_2} \\cdot {q_2}^{s_2}$, for some $r_2 \\in \\{1,2\\}$, $s_2 \\in \\{ 0,1 \\}$, and $p_2, q_2 \\nmid c_m$ are prime.\n\\end{itemize}\nThen \\begin{align*} A & = 2^{\\alpha_1} \\cdot 3^{\\beta_1} \\cdot {p_1}^{\\gamma_1}, \\\\\nB & = 2^{\\alpha_2} \\cdot 3^{\\beta_2} \\cdot {p_2}^{\\gamma_2} \\cdot {q_2}^{\\delta_2},\\\\ \nC & = 2^{\\alpha_3} \\cdot 3^{\\beta_3} \\cdot {p_1}^{p-\\gamma_1} \\cdot {p_2}^{p-\\gamma_2} \\cdot {q_2}^{p-\\delta_2},\\end{align*}\nwhere \\begin{align*} (\\alpha_1,\\alpha_2, \\alpha_3) & \\in \\{(0,1,0),(2,0,p-1),(p-1,0,2) \\}, & \\\\\n(\\beta_1, \\beta_2, \\beta_3) & \\in \\begin{cases} \\{ (1,0,0),(0,1,0),(0,0,1) \\} & \\text{ if } c_m = 6, \\\\\n \\{ (0,0,0) \\} & \\text{ if } c_m = 2, \\end{cases} \\\\\n\\gamma_1 & \\in \\{0,r_1,p-r_1\\}, &\\\\\n\\gamma_2 & \\in \\{0,r_2,p-r_2\\}, &\\\\\n\\delta_2 & \\in \\{0,s_2,p-s_2 \\}. &\n\\end{align*}\n\n\\begin{proof}[Proof of Case 3]\nWe will only consider what happens at the prime $2$ in the case $x$ even. The other primes and the case when $x$ is odd can be dealt with as in the proofs of cases 1 and 2.\n\nWe first claim that $2^2 \\parallel d_1$. If not, then we must have $2 \\parallel d_1$, and then either $2^2 \\mid x$ or $2^2 \\mid (a_mx-b_m)$. Since $2^2 \\mid b_m$, we will have that $2^2 \\mid x$ and $2^2 \\mid a_mx-b_m$, a contradiction, proving the claim.\n\nWe now divide both sides of equation (\\ref{eq2}) by $2^{p+1}$. 
One of $x$ and $a_mx-b_m$ will be exactly divisible by $2^2$ and the other will be divisible by $2^{p-1}$, and so $(\\alpha_1, \\alpha_3) = (2,p-1)$ or $(p-1,2)$. \n\\end{proof}\n\n\\subsection*{Case 4: \\texorpdfstring{$\\operatorname{ord}_2(b_m) = 3$}{}}\n\nHere, \\[m \\in \\{ 13,29,45 \\}. \\] We have\n\\begin{itemize}\n\\item $b_m = 8 \\cdot {p_1}^{r_1}$ for some $r_1 \\in \\{ 0,1 \\}$, and $p_1 \\nmid c_m$ is prime;\n\\item $a_m+b_m = p_2$, and $p_2 \\nmid c_m$ is prime.\n\\end{itemize}\nThen \\begin{align*} A & = 2^{\\alpha_1} \\cdot 3^{\\beta_1} \\cdot {p_1}^{\\gamma_1}, \\\\\nB & = 2^{\\alpha_2} \\cdot 3^{\\beta_2} \\cdot {p_2}^{\\gamma_2},\\\\ \nC & = 2^{\\alpha_3} \\cdot 3^{\\beta_3} \\cdot {p_1}^{p-\\gamma_1} \\cdot {p_2}^{p-\\gamma_2}, \\end{align*}\nwhere \\begin{align*} (\\alpha_1,\\alpha_2, \\alpha_3) & \\in \\{(0,1,0),(3,0,p-2),(p-2,0,3) \\}, & \\\\\n(\\beta_1, \\beta_2, \\beta_3) & \\in \\begin{cases} \\{ (1,0,0),(0,1,0),(0,0,1) \\} & \\text{ if } c_m = 6, \\\\\n \\{ (0,0,0) \\} & \\text{ if } c_m = 2, \\end{cases} \\\\\n\\gamma_1 & \\in \\{0,r_1,p-r_1\\}, & \\\\\n\\gamma_2 & \\in \\{0,1,p-1\\}. &\n\\end{align*}\nWhen $p = 3$, we must also consider the cases $(\\alpha_1, \\alpha_2, \\alpha_3) = (2,0,2)$, with $\\beta_i$ and $\\gamma_i$ varying as above.\n\n\\begin{proof}[Proof of Case 4] As in Case 3, we will only consider $x$ even and the prime $2$. If $2 \\parallel d_1$ then we obtain a contradiction as in Case 3. Suppose $2^2 \\parallel d_1$. Then if $2^3 \\mid x$ or $a_mx-b_m$ we obtain a contradiction as before, so we must have $2^2 \\parallel x, a_mx-b_m$. The valuation at $2$ of the left-hand side of equation (\\ref{eq2}) is thus $4$, and it is $p\\operatorname{ord}_2(y)+1$ for the right-hand side of equation (\\ref{eq2}). 
This forces $p = 3$ and $(\\alpha_1,\\alpha_3) =(2,2)$.\n\nFinally, if $2^3 \\parallel \\gcd(x,a_mx-b_m)$ then we divide by $2^{p+1}$ and one of $A$ and $C$ will have a factor of $2^3$, and the other a factor of $2^{p-2}$.\n\\end{proof}\n\n\\subsection*{Case 5: \\texorpdfstring{$\\operatorname{ord}_2(b_m) = 4$}{}}\n\nHere, \\[m = 21.\\] We have\n\\begin{itemize}\n\\item $b_m = 16$ and $c_m = 6$;\n\\item $a_m+b_m = p_2 \\cdot q_2$, and $p_2, q_2 \\nmid c_m$ are prime.\n\\end{itemize}\nThen \\begin{align*} A & = 2^{\\alpha_1} \\cdot 3^{\\beta_1}, \\\\\nB & = 2^{\\alpha_2} \\cdot 3^{\\beta_2} \\cdot {p_2}^{\\gamma_2} \\cdot {q_2}^{\\delta_2},\\\\ \nC & = 2^{\\alpha_3} \\cdot 3^{\\beta_3} \\cdot {p_2}^{p-\\gamma_2} \\cdot {q_2}^{p-\\delta_2}, \\end{align*}\nwhere \\begin{align*} (\\alpha_1,\\alpha_2, \\alpha_3) & \\in \\{(0,1,0),(4,0,p-3),(p-3,0,4) \\}, \\\\\n(\\beta_1, \\beta_2, \\beta_3) & \\in \\{ (1,0,0),(0,1,0),(0,0,1) \\}, \\\\\n\\gamma_2, \\delta_2 & \\in \\{0,1,p-1\\}.\n\\end{align*}\nWhen $p = 3$ or $p = 5$ we must also consider the cases \\[(\\alpha_1, \\alpha_2, \\alpha_3) = \\left((p+1)\/2, 0, (p+1)\/2 \\right), \\] with $\\beta_i, \\gamma_2,$ and $\\delta_2$ varying as above.\n\n\\begin{proof}[Proof of Case 5]\nWe are in a very similar set-up to Case 4. If $2^2 \\parallel d_1$ then we must have $p = 3$ and $(\\alpha_1,\\alpha_3) = (2,2)$. If $2^3 \\parallel d_1$, then by comparing valuations at $2$ on each side of equation (\\ref{eq2}), we have $6 = p \\operatorname{ord}_2(y) + 1$, which forces $p = 5$ and $(\\alpha_1,\\alpha_3) = (3,3)$.\nNext, if $2^4 \\parallel d_1$, then we divide through by $2^{p+1}$ and argue as in previous cases.\n\\end{proof}\n\n\\subsection*{Case 6: \\texorpdfstring{$\\operatorname{ord}_2(b_m) = 5$}{}}\n\nHere, \\[ m = 37. 
\\] We have\n\\begin{itemize}\n\\item $b_m = 32$ and $c_m = 6$;\n\\item $a_m+b_m = p_2$, and $p_2 \\nmid c_m$ is prime.\n\\end{itemize}\nThen \\begin{align*} A & = 2^{\\alpha_1} \\cdot 3^{\\beta_1}, \\\\\nB & = 2^{\\alpha_2} \\cdot 3^{\\beta_2} \\cdot {p_2}^{\\gamma_2},\\\\ \nC & = 2^{\\alpha_3} \\cdot 3^{\\beta_3} \\cdot {p_2}^{p-\\gamma_2},\n\\end{align*}\nwhere \\begin{align*} (\\alpha_1,\\alpha_2, \\alpha_3) & \\in \\begin{cases} \\{(0,1,0),(5,0,p-4),(p-4,0,5) \\} & \\text{ if } p > 3, \\\\ \\{(0,1,0),(2,0,2) \\} & \\text{ if } p = 3,\n\\end{cases}\n \\\\\n(\\beta_1, \\beta_2, \\beta_3) & \\in \\{ (1,0,0),(0,1,0),(0,0,1) \\}, & \\\\\n\\gamma_2 & \\in \\{0,1,p-1\\}. &\n\\end{align*}\nWhen $p = 3$, $p = 5$, or $p = 7$ we must also consider the cases \\[(\\alpha_1, \\alpha_2, \\alpha_3) = \\left((p+1)\/2, 0, (p+1)\/2 \\right), \\] with $\\beta_i$ and $\\gamma_2$ varying as above.\n\n\\begin{proof}[Proof of Case 6] This is almost identical to the argument given in Case 5, except when $2^5 \\parallel d_1$ and $p = 3$. In this case, dividing by $2^{p+1} = 2^4$ is not enough to make the factors coprime. Instead, we divide through by $2^{2p+1} = 2^7$. One of $A$ and $C$ will have a factor of $2^5$, and the other a factor of $2^2$. After removing the perfect cube $2^3$ from $2^5$, we have $(\\alpha_1, \\alpha_3) = (2,2)$.\n\\end{proof}\n\n\\section{The Local Method}\n\nWe consider the system (\\ref{system}) of Thue equations for a fixed value of $m$ and triple $(A,B,C)$. We will start by considering this system mod $\\ell$, for many auxiliary primes $\\ell$ to try to obtain a contradiction, since if the system of equations has no local solution then it will certainly not have a global solution. When the system of equations does not have a (global) solution, we found this method to be extremely effective (as we see below). The strategy we present here is used with a single binomial Thue equation in \\citep[p.~492]{local}. \n\nFix a prime $p > 2$. 
We search for a prime $\\ell$ such that $\\ell = 2kp+1$ for some $k \\geq 1$ (i.e. $\\ell \\equiv 1 \\pmod{p}$), such that $\\ell \\nmid ABC$, and for which the system of equations has no solution mod $\\ell$. If we can find such an $\\ell$, then we have obtained a contradiction. The reason for choosing $\\ell$ of this form is that we have, for each $i \\in \\{1,2,3\\}$, either $\\ell \\mid y_i$, or \\[ ({{y_i}^p})^{2k} = {y_i}^{\\ell-1} \\equiv 1 \\pmod{\\ell}. \\] In particular, ${y_i}^p \\in \\mu_{2k}(\\mathbb{F}_\\ell) \\cup \\{0\\},$ where $ \\mu_{2k}(\\mathbb{F}_\\ell) = \\{ \\alpha \\in \\mathbb{F}_\\ell : \\alpha^{2k} = 1 \\}$. We therefore only have $2k+1$ possibilities for ${y_i}^p \\pmod{\\ell}$, and moreover the set $\\mu_{2k}(\\mathbb{F}_\\ell)$ can be computed extremely quickly using a primitive root modulo $\\ell$. Indeed, if $g$ is a primitive root modulo $\\ell$, then \\[ \\mu_{2k}(\\mathbb{F}_\\ell) = \\{({g^p})^r : 0 \\leq r \\leq 2k-1 \\}.\\]\n\nFor each triple $(A,B,C)$, we searched for a prime $\\ell$ by testing with $1 \\leq k \\leq 150$. For $p>5$, with $p$ less than the prime bound for $m$ obtained in Proposition \\ref{LogBound}, apart from the cases where we have a global solution, and a single case when $p = 7$, we succeeded in obtaining a contradiction.\n\nWhen $p=3$ or $p = 5$, the method sometimes fails even when there is no global solution. In these cases, as $p$ is small we can simply solve the two Thue equations using \\texttt{Magma} and verify whether we have a solution $(y_1,y_2,y_3)$ with $y_1,y_2 > 0$ (since $x > 0$). As mentioned above, the local method also fails for $p = 7$ in a single case. This is for the case $m = 21$ and $(A,B) = (2^4 \\cdot 3, 1)$. Here we also simply solve the corresponding Thue equations directly to conclude there are no non-zero solutions.\n\nFor certain triples, the local method will fail \\emph{for all values of $p$} as we have a global solution for all $p$. 
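The residue test just described can be sketched in Python. This is an illustration, not the authors' code: we enumerate the $p$th powers modulo $\\ell$ by brute force rather than through a primitive root, and the constants appearing in any trial run are artificial.

```python
def is_prime(n):
    """Naive trial-division primality test; fine for small auxiliary primes."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def has_local_solution(A, B, C, a, b, p, ell):
    """Does  B*y2^p - A*y1^p = 1  and  a*A*y1^p - C*y3^p = b
    admit a solution modulo ell = 2kp + 1?  (For speed, the text
    enumerates mu_{2k} via a primitive root; here we simply list
    all p-th power residues.)  Requires that ell not divide C."""
    powers = {pow(y, p, ell) for y in range(ell)}  # mu_{2k}(F_ell) plus 0
    c_inv = pow(C, -1, ell)
    for u1 in powers:                # candidate value of y1^p mod ell
        for u2 in powers:            # candidate value of y2^p mod ell
            if (B * u2 - A * u1) % ell != 1:
                continue
            # second equation: y3^p = C^{-1} * (a*A*u1 - b) mod ell
            if (a * A * u1 - b) * c_inv % ell in powers:
                return True
    return False

def find_witness(A, B, C, a, b, p, kmax=150):
    """Search for ell = 2kp + 1 prime, not dividing ABC, with no
    local solution; returns such an ell, or None if the search fails."""
    for k in range(1, kmax + 1):
        ell = 2 * k * p + 1
        if is_prime(ell) and (A * B * C) % ell != 0:
            if not has_local_solution(A, B, C, a, b, p, ell):
                return ell
    return None
```

A single witness prime $\\ell$ returned by `find_witness` rules out the pair $(p,(A,B,C))$ under consideration.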
There are three cases when this happens.\n\n\\begin{enumerate}[(I)]\n\\item $A = 1, B = 2$, and $a_m-C = b_m$. Here we have a global solution $(y_1,y_2,y_3) = (1,1,1)$ for all $p$, which comes from the solution $x = y = 1$ to our original equation. However, in this case, our first Thue equation is \\[ 2{y_2}^p - {y_1}^p = 1.\\] Applying a well-known result of Bennett \\citep[Theorem~1.1]{approx}, we see that $y_1 = y_2 = 1$ for all $p$, so $x = 1$.\n\n\\item $A = 1$ and $C = a_m + b_m$. This admits the solution $(y_1,y_2,y_3) = (-1,0,-1)$.\n\n\\item $B = 1$ and $C = b_m$. This admits the solution $(y_1,y_2,y_3) = (0,1,-1)$. \n\\end{enumerate}\n\nIn cases (II) and (III) we must use a different strategy. We use Theorem \\ref{ThmD} (stated below) together with the modular method.\n\n\\section{The Modular Method}\n\nIt remains to deal with cases (II) and (III), outlined in Section 4, for each $6 \\leq m \\leq 50$. In each case, we have $A = 1$ or $B = 1$, and this leads to an equation of the form \\begin{equation}\\label{eqD} z_1^p - Dz_2^p = 1 \\end{equation} for integers $z_1$ and $z_2$. The following result of Bartolom\\'e and Mih{\\u{a}}ilescu will be extremely helpful. \n\n\\begin{theorem}[{\\citep[Theorem~1.3]{bartmihai}}]\\label{ThmD} Let $D>1$ and let $p$ be an odd prime satisfying \\[ \\gcd(\\mathrm{Rad}(\\varphi(D)),p) = 1. \\] Suppose $z_1$ and $z_2$ are integers satisfying equation (\\ref{eqD}) with $\\abs{z_2} > 1$. Then either $(z_1,z_2,D,p) = (18,7,17,3)$ or $p > 163 \\cdot 10^{12}$.\n\\end{theorem}\n\nHere, $\\varphi$ denotes Euler's totient function, and $\\mathrm{Rad}(\\varphi(D))$ denotes the product of all primes dividing $\\varphi(D)$. 
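The coprimality hypothesis of Theorem \\ref{ThmD} is easy to check for any given $D$ and $p$; a small Python sketch (trial-division implementations, adequate for the moduli arising here; the function names are ours):

```python
from math import gcd

def radical(n):
    """Rad(n): product of the distinct primes dividing n."""
    r, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            r *= d
            while n % d == 0:
                n //= d
        d += 1
    return r * n if n > 1 else r

def totient(n):
    """Euler's totient function phi(n), via trial division."""
    result, d = n, 2
    while d * d <= n:
        if n % d == 0:
            result -= result // d
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        result -= result // n
    return result

def hypothesis_holds(D, p):
    """The hypothesis of the theorem above: gcd(Rad(phi(D)), p) = 1."""
    return gcd(radical(totient(D)), p) == 1
```

For example, the hypothesis holds for $D = 17$ and $p = 3$ (here $\\varphi(17) = 16$, so $\\mathrm{Rad}(\\varphi(D)) = 2$), consistent with the exceptional solution $(18,7,17,3)$.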
We note that the theorem stated in \\citep{bartmihai} goes on to give further constraints in the case $p > 163 \\cdot 10^{12}$.\n\nSince the prime bound obtained in Proposition \\ref{LogBound} is smaller than $10^6$ in each case, Theorem \\ref{ThmD} reduces our problem to only needing to consider finitely many small primes in each case. When $p = 3$, $5$, or $7$, we can solve the relevant Thue equations directly with \\texttt{Magma}. For $p \\geq 11$ and for each value of $m$ we are then left with at most one triple $(A,B,C)$ and at most one value of $p$ that we are unable to eliminate. Table \\ref{TabRat} records these remaining values of $p$ and corresponding triples. We note that one of $A$ and $B$ is equal to $1$, and the other is exactly divisible by $2$ in each case.\n\n\\begingroup \n\\renewcommand*{\\arraystretch}{1.5}\n\\begin{table}[ht!]\n\\begin{center} \\small\n\\begin{tabular}{ |c|c|c|c|c| } \n \\hline\n $m$ & $p$ & $A$ & $B$ & $C$ \\\\\n \\hline\n $15$ & $11$ & $1$ & $2 \\cdot 3 \\cdot 23^{10}$ & $23$ \\\\ \\hline\n $27$ & $23$ & $1$ & $2 \\cdot 3 \\cdot 47^{22}$ & $47$ \\\\ \\hline\n $28$ & $11$ & $2 \\cdot 3 \\cdot 23^{10}$ & $1$ & $23$ \\\\ \\hline\n $30$ & $13$ & $1$ & $2 \\cdot 3 \\cdot 53^{12}$ & $53$ \\\\ \\hline\n $33$ & $29$ & $1$ & $2 \\cdot 3 \\cdot 59^{28}$ & $59$ \\\\ \\hline\n $37$ & $11$ & $1$ & $2 \\cdot 3 \\cdot 67^{10}$ & $67$ \\\\ \\hline\n $38$ & $11$ & $1$ & $2 \\cdot 3 \\cdot 23^{10}$ & $23$ \\\\ \\hline\n $43$ & $13$ & $1$ & $2 \\cdot 3 \\cdot 79^{12}$ & $79$ \\\\ \\hline\n $45$ & $41$ & $1$ & $2 \\cdot 3 \\cdot 83^{40}$ & $83$ \\\\ \\hline\n $48$ & $11$ & $1$ & $2 \\cdot 3 \\cdot 89^{10}$ & $89$ \\\\ \\hline\n \n\\end{tabular}\n\\caption{\\label{TabRat}\\normalsize Remaining cases after applying Theorem \\ref{ThmD} and solving Thue equations for $p \\leq 7$.}\n\\end{center}\n\\end{table} \n\\endgroup\n\nIn order to eliminate the cases remaining in Table \\ref{TabRat}, we will use some of the local arguments of Section 4 in combination 
with the modular method. We start by seeing how one may associate, following standard recipes (see \\citep{ppp} for example), a Frey curve to equation (\\ref{eqD}). We rewrite equation (\\ref{eqD}) as \\begin{equation}\\label{ppp} -1 - Dz_2^p +z_1^p = 0, \\end{equation} and we assume that $p \\geq 11$ and $\\operatorname{ord}_2(D) = 1$, since this will be our set-up. The Frey curve we associate to this equation is \\[E: \\; Y^2 = X(X+1)(X-Dz_2^p). \\] The conductor, $N$, of $E$ is then given by \\[ N = \\begin{cases} 2 \\cdot \\mathrm{Rad}_2(Dz_1z_2) & \\text{ if } \\; \\; 2 \\mid z_2, \\\\ 2^5 \\cdot \\mathrm{Rad}_2(Dz_1z_2) & \\text{ if } \\; \\;2 \\nmid z_2. \\end{cases} \\] Here, $\\mathrm{Rad}_2(Dz_1z_2)$ denotes the product of all \\emph{odd} primes dividing $Dz_1z_2$. We write $\\overline{\\rho}_{E,p}$ for the mod $p$ Galois representation of $E$. Applying standard level-lowering results, we obtain that \\[ \\overline{\\rho}_{E,p} \\sim \\overline{\\rho}_{f,\\mathfrak{p}}, \\] for $f$ a newform at level $N_p$, where \\[N_p = \\begin{cases} 2 \\cdot \\mathrm{Rad}_2(D) & \\text{ if } \\; \\; 2 \\mid z_2, \\\\ 2^5 \\cdot \\mathrm{Rad}_2(D) & \\text{ if } \\; \\;2 \\nmid z_2, \\end{cases} \\] and $\\mathfrak{p}$ a prime above $p$ in the coefficient field of $f$.\n\nWe are now in a position to complete the proof of Theorem \\ref{Mainthm}.\n\n\\begin{proof}[Proof of Theorem \\ref{Mainthm}] It remains to deal with the cases appearing in Table \\ref{TabRat}. Suppose we are in one of these cases, and let $(y_1,y_2,y_3)$ be a non-zero solution to the system (\\ref{system}) of Thue equations. By rewriting ${y_i}^p$ as $-(-y_i)^p$ if necessary, we obtain an equation of the same form as (\\ref{ppp}). 
As described above, we attach a Frey curve $E$ to this equation, and level lower so that $\\overline{\\rho}_{E,p} \\sim \\overline{\\rho}_{f,\\mathfrak{p}}$, for $f$ a newform at level $2 \\cdot \\mathrm{Rad}_2(D)$ or $2^5 \\cdot \\mathrm{Rad}_2(D)$.\n\nNow, if $\\ell$ is a prime with $\\ell \\mid y_1y_2$, then it must be a prime of multiplicative reduction for $E$, and by comparing traces of Frobenius, we have \\[\\ell+1 \\equiv \\pm c_\\ell(f) \\pmod{\\mathfrak{p}},\\] where $c_\\ell(f)$ denotes the $\\ell$th Fourier coefficient of the newform $f$. It follows that \\begin{equation}\\label{traces} p \\mid \\mathrm{Norm}((\\ell+1)^2 - c_\\ell(f)^2). \\end{equation}\n\nWe now search for a prime $\\ell \\nmid D$ with $\\ell \\equiv 1 \\pmod{p}$, for which the system of Thue equations (\\ref{system}) has a unique solution mod $\\ell$, \\emph{and} for which (\\ref{traces}) does not hold. If the system has a unique solution mod $\\ell$, then this solution must be the reduction mod $\\ell$ of the known global solution, for which $y_1y_2 = 0$, so either $y_1 \\equiv 0 \\pmod{\\ell}$ or $y_2 \\equiv 0 \\pmod{\\ell}$. So $\\ell \\mid y_1y_2$, and we have therefore obtained a contradiction if (\\ref{traces}) does not hold. For each newform $f$ in each case we were able to find such a prime $\\ell$, apart from the cases listed in Table \\ref{TabSturm}.\n\nFor the remaining newforms in Table \\ref{TabSturm}, we find that for any prime $q \\nmid 2D$ that we test, \\begin{equation}\\label{strm} p \\mid \\mathrm{Norm}(q+1 - c_q(f)). \\end{equation} This suggests that the representation $\\overline{\\rho}_{f,\\mathfrak{p}}$ is reducible, which would be a contradiction. We proceed by applying \\citep[Proposition~2.2]{tripexp} to the newform $f$. 
We obtain that $p \\mid \\# E(\\mathbb{F}_q)$ for any prime $q \\nmid D$, and so $E$ must have a rational subgroup of order $p$, a contradiction since $p \\geq 11$.\n\\begingroup \n\\renewcommand*{\\arraystretch}{1.5}\n\\begin{table}[ht!]\n\\begin{center} \\small\n\\begin{tabular}{ |c|c|c| } \n \\hline\n $m$ & $p$ & $f$ \\\\\n \\hline\n $15$ & $11$ & 138.2.a.d \\\\ \\hline\n $27$ & $23$ & 282.2.a.e \\\\ \\hline\n $28$ & $11$ & 138.2.a.d \\\\ \\hline\n $30$ & $13$ & 318.2.a.g \\\\ \\hline\n $33$ & $29$ & 354.2.a.h \\\\ \\hline\n $37$ & $11$ & 402.2.a.g \\\\ \\hline\n $38$ & $11$ & --- \\\\ \\hline\n $43$ & $13$ & 474.2.a.e \\\\ \\hline\n $45$ & $41$ & 498.2.a.g \\\\ \\hline\n $48$ & $11$ & 534.2.a.f \\\\ \\hline \n\\end{tabular}\n\\caption{\\label{TabSturm}\\normalsize Remaining newforms. We use the notation of the LMFDB \\citep{lmfdb}.}\n\\end{center}\n\\end{table} \n\\endgroup\n\\end{proof}\n\n\\bibliographystyle{plainnat}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzalcr b/data_all_eng_slimpj/shuffled/split2/finalzzalcr new file mode 100644 index 0000000000000000000000000000000000000000..cdabb8280699bc5884e3c9fca7e31dc4dba95dc0 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzalcr @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nWe perform a resurgence analysis of the $SU(2)$ Chern-Simons partition function on a Brieskorn homology sphere, following \\cite{GMP}. Consider the Chern-Simons action with a gauge group $G$ on a 3-manifold $M_{3}$:\n$$CS(A) = \\frac{1}{8\\pi^{2}} \\int_{M_{3}} A \\wedge dA + \\frac{2}{3}A \\wedge A \\wedge A,$$\nwhere $A$ is a Lie algebra (ad$G$) valued 1-form on $M_{3}$. Classical solutions of this action are the flat connections, satisfying $F_{A} = dA + A \\wedge A = 0$. 
The Chern-Simons partition function at level $k$ can be expanded with a perturbation parameter $1\/k$, around the flat connections:\n\n\\begin{equation}\nZ_{CS}(M_{3}) = \\sum_{\\alpha \\in \\mathcal{M}_{\\text{flat}}(M_{3}, G)} e^{2 \\pi i k CS(\\alpha)}Z^{\\text{pert}}_{\\alpha}.\n\\label{eqn:pert}\n\\end{equation}\nAbove, $\\mathcal{M}_{\\text{flat}}(M_{3}, G)$ is the moduli space of flat $G$-connections on $M_{3}$, and we have assumed a discrete moduli space. When $k$ is an integer, $CS(A)$ is only defined modulo 1. \n\nThe exact partition function $Z_{CS}(M_{3}) = \\int \\mathcal{D}A e^{2 \\pi i k CS(A)}$ can be recovered from its perturbative expansion by a resurgence analysis of Jean \\'{E}calle \\cite{Ecalle}. We first analytically continue $k$ to complex values and apply the method of steepest descent. Then, we perform a Borel transformation and resummation of the perturbative partition function to recover the exact partition function. Surprisingly, the exact partition function is now written as a linear sum of the ``homological blocks'' \\cite{GMP}:\n\\begin{equation}\nZ_{CS}(M_{3}) = \\sum_{a \\, \\text{abelian}} e^{2 \\pi i k CS(a)}Z_{a}.\n\\label{eqn:abelianDecomposition}\n\\end{equation}\nAbove, $Z_{a}$ gets contributions from \\textit{both} the abelian flat connection $a$ and the irreducible flat connections. In \\cite{GPV}, it was proposed that the partition function in this form allows a ``categorification,'' in the sense that it is an ``S-transform'' of a vector whose entries are integer-coefficient Laurent series in $q = e^{2\\pi i \/k}$.\n\nIn this paper, we provide a supporting example of \\cite{GMP}. First, we perform a resurgence analysis of the $SU(2)$ Chern-Simons partition function on a Brieskorn homology sphere, $M_{3} = \\Sigma(2,5,7)$. We start with the exact partition function $Z_{CS}(\\Sigma(2,5,7))$, which is written as a linear sum of ``mock modular forms'' \\cite{HikamiBrieskorn}. 
Then, we consider its perturbative expansion and perform a Borel resummation. The Borel resummation in effect recovers the full partition function $Z_{CS}(\\Sigma(2,5,7))$, and we observe a Stokes phenomenon which encodes the non-perturbative contributions to the partition function. \n\n\\section{Setups for the Borel resummation in Chern-Simons theory}\nIn this section, we provide necessary notations and setups for the Borel resummation in Chern-Simons theory. A complete and concise review can be found in section 2 of \\cite{GMP}. \n\nLet us start with the exact Chern-Simons partition function $Z_{CS}(M_{3}) = \\int \\mathcal{D}A e^{2 \\pi i k CS(A)}$, integrated over $G=SU(2)$ connections. Next, analytically continue $k$ to complex values and apply the method of steepest descent on the Feynman path integral \\cite{Marino, Garoufalidis, GLM, ArgyresUnsal, Kashani-Poor, CostinGaroufalidis, WittenAnalytic, KontsevichPerimeter,KontsevichSCGP,KontsevichTFC}. Then, the integration domain is altered to a middle-dimensional cycle $\\Gamma$ in the moduli space of $G_{\\mathbb{C}} = SL(2,\\mathbb{C})$ connections, which is the union of the steepest descent flows from the saddle points. To elaborate, the moduli space is the universal cover of the space of $SL(2,\\mathbb{C})$ connections modulo ``based'' gauge transformations, in which the gauge transformations are held to be $1$ at the designated points. In sum, the partition function becomes:\n\\begin{equation}\nZ_{CS}(M_{3}) = \\int_{\\Gamma} \\mathcal{D}A e^{2 \\pi i k CS(A)}, \\quad k \\in \\mathbb{C}.\n\\label{eqn:ZoverGamma}\n\\end{equation}\n\n\n\\subsection{Borel resummation basics}\nA partition function of the form of Equation \\ref{eqn:ZoverGamma} is interesting, for its perturbative expansion can be regarded as a \\textit{trans-series} expansion, which can be Borel resummed. Let us provide here the basics of Borel resummation, following \\cite{Marino}. 
The simplest example of a trans-series is a formal power series solution of Euler's equation:\n$$\\frac{d \\varphi}{dz} + A \\varphi(z) = \\frac{A}{z}, \\quad \\varphi_{0}(z) = \\sum_{n \\geq 0} \\frac{A^{-n}n!}{z^{n+1}}.$$\nOne may view the above trans-series as a perturbative (in $1\/z$) solution to the differential equation, but the solution has zero radius of convergence. By the Borel resummation, however, one can recover a convergent solution. When a trans-series is of the form $\\varphi(z) = \\sum_{n \\geq 0} a_{n}\/z^{n}$ with $a_{n} \\sim n!$, its Borel transformation is defined as:\n$$\\hat{\\varphi}(\\zeta) = \\sum_{n \\geq 1} a_{n} \\frac{\\zeta^{n-1}}{(n-1)!}.$$\nThe Borel transformation $\\hat{\\varphi}(\\zeta)$ is analytic near the origin of the $\\zeta$-plane. If we can analytically continue $\\hat{\\varphi}(\\zeta)$ to a neighborhood of the positive real axis, we can perform the Laplace transform:\n$$S_{0}\\varphi(z) = a_{0} + \\int_{0}^{\\infty} e^{- z \\zeta}\\hat{\\varphi}(\\zeta) d \\zeta,$$\nwhere the subscript ``0'' indicates that the integration contour is along the positive real axis, $\\{ \\arg(\\zeta) = 0 \\}$. It can be easily checked that the asymptotics of the above integral coincides with that of $\\varphi(z)$. When $S_{0}\\varphi(z)$ converges in some region in the $z$-plane, $\\varphi(z)$ is said to be Borel summable, and $S_{0}\\varphi(z)$ is called the Borel sum of $\\varphi(z)$. \n\n\\subsection{Chern-Simons partition function as a trans-series}\n\nSaddle points of the Chern-Simons action form the moduli space of flat connections $\\tilde{M}$,\nwhose connected components $\\tilde{M}_{\\tilde{\\alpha}}$ are indexed by their ``instanton numbers,''\n$$\\tilde{\\alpha} = (\\alpha, CS(\\tilde{\\alpha})) \\in \\mathcal{M}_{\\text{flat}}(M_{3},SL(2, \\mathbb{C})) \\times \\mathbb{Z}.$$ \nHere, $CS(\\tilde{\\alpha})$ denotes the value of the Chern-Simons action at $\\alpha$, without modding out by 1. 
Following \\cite{GMP}, we will call a flat connection \\textit{abelian} (\\textit{irreducible}, resp.) if its stabilizer under the conjugation action of $SU(2)$ on $\\mathrm{Hom}(\\pi_{1}(M_{3}),SU(2))$ is $SU(2)$ or $U(1)$ ($\\{ \\pm 1\\}$, resp.). \n\nNow, let $\\Gamma_{\\tilde{\\alpha}}$ be the union of steepest descent flows in $\\tilde{M}$, starting from $\\tilde{\\alpha}$. The integration cycle $\\Gamma$ is then given by a linear sum of these ``Lefschetz thimbles.''\n\n\\begin{equation}\n\\Gamma = \\sum_{\\tilde{\\alpha}} n_{\\tilde{\\alpha},\\theta}\\Gamma_{\\tilde{\\alpha},\\theta},\n\\label{eqn:GammaDecomposition}\n\\end{equation}\nwhere $\\theta = \\arg(k)$, and $n_{\\tilde{\\alpha},\\theta} \\in \\mathbb{Z}$ are the \\textit{trans-series} parameters, given by the pairing between the submanifolds of steepest descent and ascent. The value of $\\theta$ is adjusted so that there is no steepest descent flow between the saddle points. Let $I_{\\tilde{\\alpha},\\theta}$ be the contribution from a Lefschetz thimble $\\Gamma_{\\tilde{\\alpha},\\theta}$ to $Z_{CS}(M_{3})$ in Equation \\ref{eqn:ZoverGamma}:\n\n$$I_{\\tilde{\\alpha},\\theta} = \\int_{\\Gamma_{\\tilde{\\alpha},\\theta}} \\mathcal{D}A e^{2 \\pi i k CS(A)},$$\nwhich can be expanded in $1\/k$ near $\\tilde{\\alpha}$ as:\n$$I_{\\tilde{\\alpha},\\theta} \\sim e^{2 \\pi i k CS(\\tilde{\\alpha})}Z^{\\text{pert}}_{\\alpha}, \\quad \\text{where} \\quad Z^{\\text{pert}}_{\\alpha} = \\sum_{n=0}^{\\infty} a_{n}^{\\alpha}k^{-n+(d_{\\alpha}-3)\/2}, \\quad d_{\\alpha} = \\dim_{\\mathbb{C}}\\tilde{\\mathcal{M}}_{\\tilde{\\alpha}}.$$\nIn sum, we can write the Chern-Simons partition function in the form:\n\n\\begin{equation}\nZ_{CS}(M_{3};k) = \\sum_{\\tilde{\\alpha}} n_{\\tilde{\\alpha},\\theta}I_{\\tilde{\\alpha},\\theta} \\sim \\sum_{\\tilde{\\alpha}} n_{\\tilde{\\alpha},\\theta}e^{2 \\pi i k CS(\\tilde{\\alpha})}Z^{\\text{pert}}_{\\alpha}(k),\n\\label{eqn:trans-series}\n\\end{equation}\nwhich is a trans-series expansion of the Chern-Simons partition function. 
From the asymptotics given by this trans-series, we can apply Borel resummation and recover the full Chern-Simons partition function. Note that Equation \\ref{eqn:trans-series} depends on the choice of $\\theta = \\arg(k)$. In fact, as we vary $\\theta$, the value of $I_{\\tilde{\\alpha},\\theta}$ jumps discontinuously as follows:\n\\begin{equation}\nI_{\\tilde{\\alpha},\\theta_{\\tilde{\\alpha}\\tilde{\\beta}}+\\epsilon} = I_{\\tilde{\\alpha},\\theta_{\\tilde{\\alpha}\\tilde{\\beta}}-\\epsilon} + m_{\\tilde{\\alpha}}^{\\tilde{\\beta}}I_{\\tilde{\\beta},\\theta_{\\tilde{\\alpha}\\tilde{\\beta}}-\\epsilon}.\n\\label{eqn:Stokes}\n\\end{equation}\nThis is called the Stokes phenomenon, and it happens near the Stokes rays $\\theta = \\theta_{\\tilde{\\alpha}\\tilde{\\beta}} \\equiv \\frac{1}{i}\\arg(S_{\\tilde{\\alpha}}-S_{\\tilde{\\beta}})$. The trans-series parameters $n_{\\tilde{\\alpha},\\theta}$ jump accordingly to keep $Z_{CS}(M_{3};k)$ continuous in $\\theta$. The coefficients $m_{\\tilde{\\alpha}}^{\\tilde{\\beta}}$ are called Stokes monodromy coefficients.\n\n\\section{Exact partition function $Z_{CS}(\\Sigma(2,5,7))$}\n\\label{sec:exact}\nBefore going into the resurgence analysis of $Z_{CS}(\\Sigma(2,5,7))$, let us write down the exact partition function. We first compute the Witten-Reshetikhin-Turaev (WRT) invariant $\\tau_{k}(\\Sigma(p_{1},p_{2},p_{3}))$ and then write the exact $SU(2)$ Chern-Simons partition function in terms of WRT invariants as follows:\n\\begin{equation}\nZ_{CS}(\\Sigma(p_{1},p_{2},p_{3})) = \\frac{\\tau_{k}(\\Sigma(p_{1},p_{2},p_{3}))}{\\tau_{k}(S^{2} \\times S^{1})}.\n\\label{eqn:WRTandCSpartition}\n\\end{equation}\nHere, $k$ is the level of Chern-Simons theory.\\footnote{To be more precise, $k$ must be replaced by $k+2$. However, our interest in this paper is to recover the full partition function from a perturbative expansion in $1\/k$. 
Therefore, we will assume $k$ to be large, and replace $k+2$ with $k$ here.}\n\nWRT invariants for Seifert homology spheres can be computed from their surgery presentations \\cite{LawrenceRozansky}. In this paper, we focus on a specific type of Seifert homology spheres, the so-called Brieskorn homology spheres. A Brieskorn manifold $\\Sigma(p_{1},p_{2},p_{3})$ is defined as the intersection of the unit sphere $|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{2}=1$ in $\\mathbb{C}^{3}$ and the hypersurface $z_{1}^{p_{1}}+z_{2}^{p_{2}}+z_{3}^{p_{3}}=0$. When $p_{1},p_{2},p_{3}$ are pairwise coprime integers, $\\Sigma(p_{1},p_{2},p_{3})$ is a homology sphere with three singular fibers. From the surgery presentation of $\\Sigma(p_{1},p_{2},p_{3})$, its WRT invariant can be computed, and it can be written as a linear sum of mock modular forms \\cite{HikamiBrieskorn, LawrenceZagier}. In particular, when $1\/p_{1}+1\/p_{2}+1\/p_{3} < 1$, we can write:\n\\begin{equation}\ne^{\\frac{2 \\pi i}{k}(\\frac{\\phi(p_{1},p_{2},p_{3})}{4}-\\frac{1}{2})}(e^{\\frac{2 \\pi i}{k}} -1) \\tau_{k}(\\Sigma(p_{1},p_{2},p_{3})) = \\frac{1}{2}\\tilde{\\Psi}_{p_{1}p_{2}p_{3}}^{(1,1,1)}(1\/k).\n\\label{eqn:WRTmodular}\n\\end{equation}\nLet us decode Equation \\ref{eqn:WRTmodular}. First of all, $\\tau_{k}(\\Sigma(p_{1},p_{2},p_{3}))$ is the desired WRT invariant, normalized such that $\\tau_{k}(S^{3}) = 1$ and $\\tau_{k}(S^{2} \\times S^{1}) = \\sqrt{\\frac{k}{2}}\\frac{1}{\\sin(\\pi \/ k)}.$ Next, the number $\\phi(p_{1},p_{2},p_{3})$ is defined as:\n\\begin{gather}\n\\phi(p_{1},p_{2},p_{3}) = 3 - \\frac{1}{p_{1}p_{2}p_{3}} + 12 (s(p_{1}p_{2},p_{3})+s(p_{2}p_{3},p_{1})+s(p_{3}p_{1},p_{2})), \\nonumber \\\\\n\\text{where} \\quad s(a,b) = \\frac{1}{4b}\\sum_{n=1}^{b-1}\\cot(\\frac{n \\pi}{b})\\cot(\\frac{n a \\pi}{b}). 
\\nonumber\n\\end{gather}\nFinally, $\\tilde{\\Psi}_{p_{1}p_{2}p_{3}}^{(1,1,1)}$ is a linear sum of mock modular forms $\\tilde{\\Psi}_{p_{1}p_{2}p_{3}}^{a}$, namely:\n\\begin{gather}\n\\tilde{\\Psi}_{P}^{a}(1\/k) = \\sum_{n \\geq 0} \\psi_{2P}^{a}(n)q^{n^{2}\/4P}, \\quad \\text{where} \\quad \\psi_{2P}^{a}(n) = \\begin{cases}\n\\pm 1 & \\text{$n \\equiv \\pm a$ mod $2P$} \\\\ 0 & \\text{otherwise}\n\\end{cases} \\label{eqn:modular} \\\\\n\\tilde{\\Psi}_{p_{1}p_{2}p_{3}}^{(1,1,1)}(1\/k) = -\\frac{1}{2}\\sum_{\\epsilon_{1},\\epsilon_{2},\\epsilon_{3} = \\pm 1} \\epsilon_{1}\\epsilon_{2}\\epsilon_{3}\\tilde{\\Psi}_{p_{1}p_{2}p_{3}}^{p_{1}p_{2}p_{3}(1+\\sum_{j}\\epsilon_{j}\/p_{j})}(1\/k), \\label{eqn:linSumModular}\n\\end{gather}\nwhere $q$ in Equation \\ref{eqn:modular} is given by $e^{2 \\pi i \/k }$.\n\nNow, let us restrict ourselves to $(p_{1},p_{2},p_{3}) = (2,5,7)$. First of all, $p_{1}=2,p_{2}=5,p_{3}=7$ are relatively prime, so $\\Sigma(2,5,7)$ is a homology sphere. Next, $1\/p_{1}+1\/p_{2}+1\/p_{3} < 1$, so we can write the WRT invariant as a linear sum of mock modular forms:\n\n\\begin{align}\ne^{\\frac{2 \\pi i}{k}(\\frac{\\phi(2,5,7)}{4}-\\frac{1}{2})}(e^{\\frac{2 \\pi i}{k}} -1) \\tau_{k}(\\Sigma(2,5,7)) &= \\frac{1}{2}\\tilde{\\Psi}_{70}^{(1,1,1)}(1\/k) \\nonumber \\\\\n&= \\frac{1}{2}(\\tilde{\\Psi}_{70}^{11} - \\tilde{\\Psi}_{70}^{31} - \\tilde{\\Psi}_{70}^{39} + \\tilde{\\Psi}_{70}^{59})(1\/k),\n\\label{eqn:WRTinvariantModularDecomposition}\n\\end{align}\nwhere $\\phi(2,5,7) = -\\frac{19}{70}$. 
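Both the reduction to the four mock modular forms in Equation \ref{eqn:WRTinvariantModularDecomposition} and the value $\phi(2,5,7) = -19/70$ can be cross-checked in a few lines (this numerical sketch is ours); the only inputs are $\psi_{140}^{140-a} = -\psi_{140}^{a}$ and the cotangent sums $s(a,b)$:

```python
import math
from itertools import product

# (i) reduce Psi_{70}^{(1,1,1)} of eqn (linSumModular) to the four mock
# modular forms: each label a = 70(1 + e1/2 + e2/5 + e3/7) folds to one
# of 11, 31, 39, 59 via Psi^{140-a} = -Psi^{a}.
coeff = {}
for e1, e2, e3 in product((1, -1), repeat=3):
    a = 70 + 35 * e1 + 14 * e2 + 10 * e3
    s_ = -e1 * e2 * e3            # twice the prefactor -(1/2) e1 e2 e3
    if a > 70:
        a, s_ = 140 - a, -s_      # fold the label, flipping the sign
    coeff[a] = coeff.get(a, 0) + s_
coeff = {a: c // 2 for a, c in coeff.items()}
print(coeff)  # {11: 1, 31: -1, 39: -1, 59: 1}

# (ii) phi(2,5,7) = -19/70 from the Dedekind-type cotangent sums
def s(a, b):
    return sum(1 / (math.tan(n * math.pi / b) * math.tan(n * a * math.pi / b))
               for n in range(1, b)) / (4 * b)

phi = 3 - 1 / 70 + 12 * (s(2 * 5, 7) + s(5 * 7, 2) + s(7 * 2, 5))
print(abs(phi - (-19 / 70)))  # ~ 0 up to floating-point error
```

The signs $\{+1,-1,-1,+1\}$ reproduce the combination $\tilde{\Psi}_{70}^{11} - \tilde{\Psi}_{70}^{31} - \tilde{\Psi}_{70}^{39} + \tilde{\Psi}_{70}^{59}$.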
From Equation \\ref{eqn:WRTandCSpartition} and \\ref{eqn:WRTinvariantModularDecomposition}, we can explicitly write the exact Chern-Simons partition function $Z_{CS}(\\Sigma(2,5,7))$ as follows:\n\n\\begin{equation}\nZ_{CS}(\\Sigma(2,5,7)) = \\frac{1}{i q^{\\phi(2,5,7)\/4}\\sqrt{8k}}(\\tilde{\\Psi}_{70}^{11} - \\tilde{\\Psi}_{70}^{31} - \\tilde{\\Psi}_{70}^{39} + \\tilde{\\Psi}_{70}^{59})(1\/k).\n\\label{eqn:CSpartition}\n\\end{equation} \n\n\\section{Asymptotics of $Z_{CS}(\\Sigma(2,5,7))$}\n\\label{sec:asymptotics}\nBefore proceeding to the Borel transform and resummation of the exact partition function, let us briefly consider its asymptotics in the large $k$ limit. This can be most easily done by considering the ``mock modular'' property of mock modular forms:\n\\begin{gather}\n\\tilde{\\Psi}_{p}^{a}(q) = -\\sqrt{\\frac{k}{i}} \\sum_{b=1}^{p-1}\\sqrt{\\frac{2}{p}}\\sin{\\frac{\\pi a b}{p}}\\tilde{\\Psi}_{p}^{b}(e^{-2 \\pi i k}) + \\sum_{n \\geq 0} \\frac{L(-2n,\\psi_{2p}^{a})}{n!}\\bigg(\\frac{\\pi i}{2 p k}\\bigg)^{n}, \\label{eqn:mockmodular} \\\\\n\\text{where} \\quad L(-n,\\psi_{2p}^{a}) = -\\frac{(2p)^{n}}{n+1}\\sum_{m=1}^{2p}\\psi_{2p}^{a}(m)B_{n+1}\\bigg(\\frac{m}{2p}\\bigg),\n\\end{gather}\nand $B_{n+1}$ stands for the $(n+1)$-th Bernoulli polynomial. For integer values of $k$, \n$$\\tilde{\\Psi}^{b}_{p}(e^{-2 \\pi i k}) = (1-\\tfrac{b}{p})e^{-\\frac{2 \\pi i k b^{2}}{4p}},$$ and in the large $k$ limit, we may consider the second summation in Equation \\ref{eqn:mockmodular} as ``perturbative'' contributions, while the first summation stands for ``non-perturbative'' contributions. 
Therefore, the asymptotics of $Z_{CS}(\\Sigma(2,5,7))$ can be written as ($p=70$, below):\n\\begin{multline}\ni q^{-19\/280}\\sqrt{8k}Z_{CS}(\\Sigma(2,5,7)) = \\\\\n -\\sqrt{\\frac{k}{i}} \\sum_{b=1}^{70-1} \\sqrt{\\frac{2}{70}}\\bigg( \\sin\\frac{11b \\pi}{70} - \\sin\\frac{31b \\pi}{70} - \\sin\\frac{39b \\pi}{70} + \\sin\\frac{59b \\pi}{70}\\bigg)(1-\\tfrac{b}{p})e^{-\\frac{2 \\pi i k b^{2}}{4p}} \\\\\n+ i q^{-19\/280}\\sqrt{8k}Z_{\\text{pert}}(1\/k),\n\\label{eqn:asymptotics}\n\\end{multline}\nwhere the perturbative contributions $i q^{-19\/280}\\sqrt{8k}Z_{\\text{pert}}(1\/k)$ can be explicitly written as:\n\\begin{gather}\nZ_{\\text{pert}}(1\/k) = Z^{11}_{\\text{pert}}(1\/k)-Z^{31}_{\\text{pert}}(1\/k)-Z^{39}_{\\text{pert}}(1\/k)+Z^{59}_{\\text{pert}}(1\/k), \\nonumber \\\\\n\\text{where} \\quad i \\sqrt{8} q^{-19\/280} Z^{a}_{\\text{pert}}(1\/k) = \\sum_{n \\geq 0} \\frac{b_{n}^{a}}{k^{n+1\/2}} \\quad \\text{for} \\quad a = 11, 31, 39, 59 \\nonumber \\\\\n\\text{and} \\quad b_{n}^{a} = \\frac{L(-2n,\\psi_{2p}^{a})}{n!}\\bigg(\\frac{\\pi i}{2p}\\bigg)^{n}.\n\\end{gather}\n\nOne can easily see that the sum $\\big( \\sin\\frac{11b \\pi}{70} - \\sin\\frac{31b \\pi}{70} - \\sin\\frac{39b \\pi}{70} + \\sin\\frac{59b \\pi}{70}\\big)$ in Equation \\ref{eqn:asymptotics} is nonzero if and only if $b$ is not divisible by $2$, $5$, or $7$. We will later see that these $b$'s correspond to the positions of the poles in the Borel plane. \n\n\n\\section{Resurgence analysis of $Z_{CS}(\\Sigma(2,5,7))$}\nIn this section, we perform a resurgence analysis of the partition function and decompose $Z_{CS}(\\Sigma(2,5,7))$ into the homological blocks:\n$$Z_{CS}(\\Sigma(2,5,7)) = \\sum_{\\alpha} n_{\\alpha}e^{2 \\pi i k CS(\\alpha)} Z_{\\alpha},$$\nwhere $\\alpha$ runs over the abelian\/reducible flat connections. 
Since $Z_{\\alpha}$ gets contributions from both the abelian\/reducible flat connection $\\alpha$ and the irreducible flat connections, it is necessary to study how the contributions from the irreducible flat connections regroup themselves into the homological blocks. We accomplish the goal in two steps. First, we study the Borel transform and resummation of the partition function and identify the contributions from the irreducible flat connections. Then, the contributions from the irreducible flat connections are shown to enter the homological blocks via Stokes monodromy coefficients.\n\n\\subsection{Borel transform and resummation of $Z_{CS}(\\Sigma(2,5,7))$}\nRecall that the perturbative contributions $Z^{a}_{\\text{pert}}(1\/k)$ have the following asymptotics:\n\\begin{equation}\ni\\sqrt{8}q^{\\phi(2,5,7)\/4}Z^{a}_{\\text{pert}}(1\/k) = \\sum_{n \\geq 0} \\frac{b^{a}_{n}}{k^{(n+1\/2)}}.\n\\end{equation}\nNow, consider its Borel transform:\n\\begin{align}\nBZ^{a}_{\\text{pert}}(\\zeta) &= \\sum_{n \\geq 0} \\frac{b^{a}_{n}}{\\Gamma(n+1\/2)} \\zeta^{n-1\/2} \\\\\n&= \\frac{1}{\\sqrt{\\zeta}} \\sum_{n \\geq 0} b^{a}_{n}\\frac{4^{n}}{\\sqrt{\\pi}} \\frac{n!}{(2n)!}\\zeta^{n} \\quad \\bigg(\\because \\Gamma(n+1\/2) = \\frac{\\sqrt{\\pi}}{4^{n}}\\frac{(2n)!}{n!} \\bigg) \\\\\n&= \\frac{1}{\\sqrt{\\pi \\zeta}} \\sum_{n \\geq 0} c^{a}_{n} \\frac{n!}{(2n)!} z^{2n}, \\quad \\text{where} \\quad z = \\sqrt{\\frac{2 \\pi i}{p}\\zeta}.\n\\end{align}\nIn the last equality, we have simply changed the variable from $\\zeta$ to $z$ and absorbed all other factors into the coefficients $c_{n}^{a}$.\n\nAlthough the coefficients $c^{a}_{n}$ only appear in the perturbative piece of the partition function, we can recover the exact partition function from them. 
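The half-integer Gamma identity used in the middle line, and the term-by-term Laplace transform $\int_{0}^{\infty} e^{-k\zeta}\zeta^{n-1/2}d\zeta = \Gamma(n+1/2)/k^{n+1/2}$ that underlies the Borel transform, are easy to sanity-check numerically (our sketch):

```python
import math

# Gamma(n + 1/2) = sqrt(pi) (2n)! / (4^n n!), as used in the Borel transform
for j in range(12):
    lhs = math.gamma(j + 0.5)
    rhs = math.sqrt(math.pi) * math.factorial(2 * j) / (4 ** j * math.factorial(j))
    assert abs(lhs - rhs) < 1e-9 * rhs

# Laplace transform check for one (n, k): int_0^inf e^{-k t} t^{n-1/2} dt
n, k = 2, 3.0
h, total = 1e-4, 0.0
for i in range(1, 400_000):           # simple Riemann sum up to t = 40
    t = i * h
    total += math.exp(-k * t) * t ** (n - 0.5) * h
assert abs(total - math.gamma(n + 0.5) / k ** (n + 0.5)) < 1e-4
```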
Let us first consider generating functions which package the coefficients $c^{a}_{n}$:\n$$\\frac{\\sinh((p-a)z)}{\\sinh(pz)} = \\sum_{n \\geq 0} c^{a}_{n} \\frac{n!}{(2n)!} z^{2n} = \\sum_{n \\geq 0} \\psi_{2p}^{a}(n) e^{-nz}.$$ \nNow we can write the mock modular forms in an integral form, using these generating functions:\n\\begin{gather}\n\\frac{\\sinh(p-a)\\eta}{\\sinh p \\eta} = \\sum_{n \\geq 0} \\psi_{2p}^{a}(n)e^{-n \\eta} \\\\\n\\Rightarrow \\quad \\int_{i \\mathbb{R} + \\epsilon} d \\eta \\frac{\\sinh(p-a)\\eta}{\\sinh p \\eta} e^{-\\frac{k p \\eta^{2}}{2 \\pi i}} = \\int_{i \\mathbb{R}+\\epsilon} d \\eta \\sum_{n \\geq 0} \\psi_{2p}^{a}(n)e^{-n \\eta} e^{-\\frac{k p \\eta^{2}}{2 \\pi i}} \\label{eqn:Borel1} \\\\\n\\Rightarrow \\quad \\int_{i \\mathbb{R} + \\epsilon} d \\eta \\frac{\\sinh(p-a)\\eta}{\\sinh p \\eta} e^{-\\frac{k p \\eta^{2}}{2 \\pi i}} = \\sqrt{\\frac{2 \\pi^{2} i}{p}} \\frac{1}{\\sqrt{k}} \\tilde{\\Psi}_{p}^{a}(q). \\label{eqn:Borel2}\n\\end{gather}\nIn the second line, the integral is taken along a line $Re[\\eta] = \\epsilon > 0$, where the integral converges, and the third line is simply a Gaussian integral. The change of variables\n$$ \\zeta = \\frac{p \\eta^{2}}{2 \\pi i}$$\nalters the integration contour from a single line to the union of two rays from the origin, $i e^{i \\delta} \\mathbb{R}_{+}$ and $i e^{-i \\delta} \\mathbb{R}_{+}$. In sum, \n\\begin{equation}\n\\frac{1}{\\sqrt{k}} \\tilde{\\Psi}_{p}^{a}(q) = \\frac{1}{2}\\bigg(\\int_{i e^{i \\delta} \\mathbb{R}_{+}} + \\int_{i e^{-i \\delta} \\mathbb{R}_{+}} \\bigg) \\frac{d \\zeta}{\\sqrt{\\pi \\zeta}} \\frac{\\sinh \\bigg( (p-a)\\sqrt{\\frac{2 \\pi i \\zeta}{p}}\\bigg)}{\\sinh \\bigg(p \\sqrt{\\frac{2 \\pi i \\zeta}{p}}\\bigg)} e^{-k\\zeta}. \\label{eqn:BorelZeta}\n\\end{equation}\n\nThus we have recovered the entire mock modular form from its perturbative expansion. 
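The generating-function identity $\sinh((p-a)z)/\sinh(pz) = \sum_{n\geq 0}\psi_{2p}^{a}(n)e^{-nz}$ is a geometric-series statement and can be checked directly (our numerical sketch, taking $p=70$, $a=11$):

```python
import math

p, a = 70, 11

def psi(n):
    # psi_{2p}^{a}(n): +1 if n = a (mod 2p), -1 if n = -a (mod 2p), else 0
    r = n % (2 * p)
    return 1 if r == a else (-1 if r == 2 * p - a else 0)

z = 0.25
lhs = math.sinh((p - a) * z) / math.sinh(p * z)
rhs = sum(psi(n) * math.exp(-n * z) for n in range(2000))
assert abs(lhs - rhs) < 1e-12  # the two sides agree for Re z > 0
```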
Since the partition function is a linear sum of mock modular forms, this implies that the Borel resummation of $BZ_{\\text{pert}}$ will return the exact partition function. Furthermore, the poles of the generating functions $\\sinh((p-a)z)\/\\sinh(pz)$ encode the information of the non-perturbative contributions, as we exhibit below.\n\nFirst of all, since $Z_{CS}(\\Sigma(2,5,7)) \\sim (\\tilde{\\Psi}_{70}^{11} - \\tilde{\\Psi}_{70}^{31} - \\tilde{\\Psi}_{70}^{39} + \\tilde{\\Psi}_{70}^{59})(q)$, the Borel transform of $Z_{\\text{pert}}$ is given by:\n\\begin{equation}\n\\frac{\\sinh(59\\eta)-\\sinh(39\\eta)-\\sinh(31\\eta)+\\sinh(11\\eta)}{\\sinh(70\\eta)} = \\frac{4\\sinh(35\\eta)\\sinh(14\\eta)\\sinh(10\\eta)}{\\sinh(70\\eta)}.\n\\label{eqn:psiBorel}\n\\end{equation}\nNote that the RHS of Equation \\ref{eqn:psiBorel} has only simple poles at $\\eta = n \\pi i\/ 70$ for $n$ non-divisible by 2, 5, or 7. In particular, the poles are aligned on the imaginary axis, so we choose the same integration contours as in Equations \\ref{eqn:Borel1}-\\ref{eqn:BorelZeta}. The Borel resummation of Equation \\ref{eqn:psiBorel} is then the average of Borel sums along the two rays depicted in Figure \\ref{fig:integrationContour}(a):\n\\begin{equation}\nZ_{CS}(\\Sigma(2,5,7)) = \\frac{1}{2} \\bigg[ S_{\\frac{\\pi}{2} - \\delta}Z_{\\text{pert}}(1\/k) + S_{\\frac{\\pi}{2} + \\delta}Z_{\\text{pert}}(1\/k) \\bigg].\n\\label{eqn:Borelsums}\n\\end{equation}\n\n\\begin{figure} [htb]\n\\centering\n\\includegraphics{integrationContour}\n\\caption{(a) An integration contour in the $\\zeta$-plane, made of two rays from the origin. Dots represent the poles. (b) An equivalent integration contour. The contribution from the integration along the real axis must be doubled.}\n\\label{fig:integrationContour}\n\\end{figure}\n\nTo evaluate the RHS of Equation \\ref{eqn:Borelsums}, we integrate along an equivalent contour in Figure \\ref{fig:integrationContour}(b). 
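Before doing so, both the factorized form in Equation \ref{eqn:psiBorel} and the location of its poles can be confirmed numerically (our sketch):

```python
import cmath
import math

def lhs(eta):
    return (cmath.sinh(59 * eta) - cmath.sinh(39 * eta)
            - cmath.sinh(31 * eta) + cmath.sinh(11 * eta))

def rhs(eta):
    return 4 * cmath.sinh(35 * eta) * cmath.sinh(14 * eta) * cmath.sinh(10 * eta)

# the two sides agree as entire functions of eta
for eta in (0.1, 0.2 + 0.3j, -0.05 + 0.01j):
    assert abs(lhs(eta) - rhs(eta)) < 1e-8 * (1 + abs(rhs(eta)))

# the numerator vanishes at eta = n pi i / 70 exactly when gcd(n, 70) > 1,
# so the ratio has (simple) poles only for n coprime to 70
for n in range(1, 140):
    val = abs(rhs(1j * math.pi * n / 70))
    assert (val > 1e-6) == (math.gcd(n, 70) == 1)
```

This matches the earlier observation that only $b$'s not divisible by $2$, $5$, or $7$ contribute to the asymptotics.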
Note that as we change to the contour in Figure \\ref{fig:integrationContour}(b), the integration ray $i e^{-i \\delta}\\mathbb{R}_{+}$ has crossed the poles on the imaginary axis, towards the positive real axis. As a result, the poles contribute to the Borel sums with residues, which is precisely a Stokes phenomenon. Since each pole is located at $\\eta = n \\pi i \/ 70$, its residue includes a factor of $e^{-k \\zeta} = e^{-k \\frac{70 \\eta^{2}}{2 \\pi i}} = e^{2 \\pi i k (- \\frac{n^{2}}{280})}$. Shortly, we will exhibit that these factors precisely correspond to the Chern-Simons instanton actions, so let us regroup the poles ($n$ modulo 140) by their instanton actions:\n\n\\begin{itemize}\n\\item $n = 9, 19, 51, 61, 79, 89, 121, 131$, for which $CS = -\\frac{9^{2}}{280}$ and residues $\\{1,1,1,1,-1,-1,-1,-1\\}$ with overall factor $\\frac{i}{35}(\\cos \\frac{3 \\pi}{35} - \\sin \\frac{\\pi}{70})$.\n\\item $n = 3, 17, 53, 67, 73, 87, 123, 137$, for which $CS = -\\frac{3^{2}}{280}$ and residues $\\{ -1, -1, -1,-1,1,1,1,1\\}$ with overall factor $\\frac{i}{35}(\\cos \\frac{\\pi}{35} + \\cos \\frac{6 \\pi}{35})$.\n\\item $n = 23, 33, 37, 47, 93, 103, 107, 117$, for which $CS = -\\frac{23^{2}}{280}$ and residues $\\{1, 1, 1, 1,-1,-1,-1,-1\\}$ with overall factor $\\frac{i}{35}(\\cos \\frac{4 \\pi}{35} + \\sin \\frac{13 \\pi}{70})$.\n\\item $n = 13, 27, 43, 57, 83, 97, 113, 127$, for which $CS = -\\frac{13^{2}}{280}$ and residues $\\{ -1, -1, -1, -1,1,1,1,1\\}$ with overall factor $\\frac{i}{35}(\\sin \\frac{3 \\pi}{70} + \\sin \\frac{17 \\pi}{70})$.\n\\item $n = 11, 31, 39, 59, 81, 101, 109, 129$, for which $CS = -\\frac{11^{2}}{280}$ and residues $\\{1, -1, -1, 1, -1,1,1,-1\\}$ with overall factor $\\frac{i}{35}(\\cos \\frac{8 \\pi}{35} + \\sin \\frac{9 \\pi}{70})$.\n\\item $n = 1, 29, 41, 69, 71, 99, 111, 139$, for which $CS = -\\frac{1^{2}}{280}$ and residues $\\{1,-1,-1,1,-1,1,1,-1\\}$ with overall factor $\\frac{i}{35}(\\cos \\frac{2 \\pi}{35} - \\sin \\frac{11 \\pi}{70})$.\n\\end{itemize}\n\nThe top four groups of poles correspond to the four irreducible $SU(2)$ flat connections, while the remaining two correspond to the complex flat connections. To see this, first consider the moduli space of flat connections $\\mathcal{M}_{\\text{flat}}(\\Sigma(2,5,7), SL(2,\\mathbb{C}))$. Since $\\Sigma(2,5,7)$ is a homology 3-sphere, it has only one abelian flat connection $\\alpha_{0}$, which is trivial. Next, there are in total $\\frac{(2-1)(5-1)(7-1)}{4} = 6$ irreducible $SL(2,\\mathbb{C})$ flat connections, four of which are conjugate into $SU(2)$ and the remaining two are ``complex'' (conjugate into $SL(2,\\mathbb{R})$) \\cite{KitanoYamaguchi,BodenCurtis,FintushelStern}. To compute their Chern-Simons instanton actions, we characterize all six flat connections by their ``rotation angles,'' which we will briefly explain here. Consider the following presentation of the fundamental group of $\\Sigma(2,5,7)$.\n\\begin{equation}\n\\pi_{1}(\\Sigma(2,5,7)) = \\langle x_{1},x_{2},x_{3},h \\, | \\, h \\, \\text{central}, x_{1}^{2} = h^{-1}, x_{2}^{5} = h^{-9}, x_{3}^{7} = h^{-5}, x_{1}x_{2}x_{3} = h^{-3} \\rangle.\n\\label{eqn:fundPresentation}\n\\end{equation}\nWhen a representation $\\alpha: \\pi_{1}(\\Sigma(2,5,7)) \\rightarrow SL(2,\\mathbb{C})$ is conjugate to a representation in $SU(2)$, $\\alpha(h)$ is equal to $\\pm 1$, and the conjugacy classes of $\\alpha(x_{j})$ can be represented in the form $\\bigl(\\begin{smallmatrix}\n\\lambda_{j} & 0 \\\\ 0 & \\lambda_{j}^{-1}\n\\end{smallmatrix} \\bigr)$ for some $| \\lambda_{j} | = 1$. There are four triples $(\\lambda_{1}, \\lambda_{2}, \\lambda_{3})$ satisfying the relations in Equation \\ref{eqn:fundPresentation}:\n\\begin{equation}\n(l_{1},l_{2},l_{3}) = (1,1,3), \\, (1,3,1), \\, (1,3,3), \\, (1,3,5) \\quad \\text{where} \\quad \\lambda_{j} = e^{\\pi i l_{j} \/ p_{j}}. 
\\label{eqn:flatCon}\n\\end{equation} \nEach triple corresponds to one of the four irreducible $SU(2)$ flat connections, which we will call $\\alpha_{1}, \\alpha_{2}, \\alpha_{3}$ and $\\alpha_{4}$. From the rotation angles of an irreducible flat connection $A$, we can read off its Chern-Simons instanton action (defined modulo 1):\n\\begin{gather}\nCS(A) = -\\frac{p_{1}p_{2}p_{3}}{4}(1+\\sum_{j}l_{j}\/p_{j})^{2} \\nonumber \\\\\n\\Rightarrow \\quad CS(\\alpha_{1}) = -\\frac{9^{2}}{280}, \\quad CS(\\alpha_{2}) = -\\frac{3^{2}}{280}, \\quad CS(\\alpha_{3}) = -\\frac{23^{2}}{280}, \\quad CS(\\alpha_{4}) = -\\frac{13^{2}}{280}, \\label{eqn:CSinstanton}\n\\end{gather}\nwhich is in agreement with the instanton actions of the poles in the Borel plane. Likewise, one can compute the Chern-Simons instanton actions of the two complex flat connections $\\alpha_{5}$ and $\\alpha_{6}$, \n$$CS(\\alpha_{5}) = -\\frac{11^{2}}{280}, \\quad CS(\\alpha_{6}) = -\\frac{1^{2}}{280}.$$\n\nNow, let us sum the residues to reproduce the non-perturbative contributions in Equation \\ref{eqn:asymptotics}. 
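Since Chern-Simons actions are only defined modulo 1, the agreement between the rotation-angle formula and the pole labels can be verified with exact rational arithmetic (our sketch):

```python
from fractions import Fraction

# rotation angles (l1, l2, l3) and the pole label n whose action they match
triples = {(1, 1, 3): 9, (1, 3, 1): 3, (1, 3, 3): 23, (1, 3, 5): 13}

for (l1, l2, l3), n in triples.items():
    total = 1 + Fraction(l1, 2) + Fraction(l2, 5) + Fraction(l3, 7)
    cs = -Fraction(2 * 5 * 7, 4) * total ** 2
    # CS(A) agrees with -n^2/280 modulo 1
    assert (cs - Fraction(-n * n, 280)) % 1 == 0
```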
When $k$ is an integer, the residues from the poles corresponding to the $SU(2)$ connection $\\alpha_{1}$ are summed into:\n\\begin{multline}\n\\frac{i}{35}(\\cos \\frac{3 \\pi}{35} - \\sin \\frac{\\pi}{70}) e^{-2 \\pi i k \\frac{9^{2}}{280}} \\bigg[ \\sum_{n \\equiv \\pm 9 \\, (mod \\, 140)} \\pm 1 \\, + \\sum_{n \\equiv \\pm 19 \\, (mod \\, 140)} \\pm 1 \\\\\n+ \\sum_{n \\equiv \\pm 51\\, (mod \\, 140)} \\pm 1 \\, + \\sum_{n \\equiv \\pm 61\\, (mod \\, 140)} \\pm 1 \\bigg].\n\\label{eqn:alpha1Contribution}\n\\end{multline}\nVia zeta-function regularization $\\sum_{n \\equiv \\pm a \\, (mod \\, 2p)} \\pm 1 = 1 - \\frac{a}{p}$, we can rewrite Equation \\ref{eqn:alpha1Contribution} as follows:\n\\begin{gather*}\n\\frac{i}{35}(\\cos \\frac{3 \\pi}{35} - \\sin \\frac{\\pi}{70}) e^{-2 \\pi i k \\frac{9^{2}}{280}}\\bigg( (1-\\frac{9}{70}) + (1-\\frac{19}{70}) + (1-\\frac{51}{70}) + (1-\\frac{61}{70}) \\bigg) \\\\\n= \\frac{2i}{35}(\\cos \\frac{3 \\pi}{35} - \\sin \\frac{\\pi}{70}) e^{-2 \\pi i k \\frac{9^{2}}{280}} = n_{\\alpha_{1}}Z_{\\text{pert}}^{\\alpha_{1}}e^{2 \\pi i k CS(\\alpha_{1})},\n\\end{gather*}\nwhere $n_{\\alpha_{1}}$ is the trans-series parameter. Similarly for connections $\\alpha_{2}, \\alpha_{3}$ and $\\alpha_{4}$, \n\\begin{itemize}\n\\item $n_{\\alpha_{2}}Z_{\\text{pert}}^{\\alpha_{2}}e^{2 \\pi i k CS(\\alpha_{2})} = -\\frac{2i}{35}(\\cos \\frac{\\pi}{35} + \\cos \\frac{6 \\pi}{35})e^{-2 \\pi i k \\frac{3^{2}}{280}}$.\n\\item $n_{\\alpha_{3}}Z_{\\text{pert}}^{\\alpha_{3}}e^{2 \\pi i k CS(\\alpha_{3})} = \\frac{2i}{35}(\\cos \\frac{4 \\pi}{35} + \\sin \\frac{13 \\pi}{70})e^{-2 \\pi i k \\frac{23^{2}}{280}}$.\n\\item $n_{\\alpha_{4}}Z_{\\text{pert}}^{\\alpha_{4}}e^{2 \\pi i k CS(\\alpha_{4})} = -\\frac{2i}{35}(\\sin \\frac{3 \\pi}{70} + \\sin \\frac{17 \\pi}{70})e^{-2 \\pi i k \\frac{13^{2}}{280}}$.\n\\end{itemize}\nThe contributions from the two complex connections, on the other hand, vanish. 
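The regularized value $\sum_{n \equiv \pm a \,(\text{mod}\, 2p)} \pm 1 = 1 - a/p$ can be motivated by Abel summation: damping each term by $x^{n}$ gives a geometric series whose $x \to 1^{-}$ limit is $1-a/p$. A numerical sketch (ours), which also checks that the $\alpha_{1}$ bracket sums to $2$ and that the complex-connection bracket with residue signs $\{+,-,-,+\}$ vanishes:

```python
p = 70

def abel_sum(a, x):
    # sum_{n>=0, n = a mod 2p} x^n - sum_{n>=0, n = -a mod 2p} x^n, in closed form
    return (x ** a - x ** (2 * p - a)) / (1 - x ** (2 * p))

x = 1 - 1e-6
for a in (9, 19, 51, 61):
    assert abs(abel_sum(a, x) - (1 - a / p)) < 1e-3

# alpha_1 bracket: (1-9/70)+(1-19/70)+(1-51/70)+(1-61/70) = 2
assert abs(sum(1 - a / p for a in (9, 19, 51, 61)) - 2.0) < 1e-12

# alpha_5-type bracket with residue signs {+,-,-,+}: vanishes
assert abs((1 - 11 / p) - (1 - 31 / p) - (1 - 39 / p) + (1 - 59 / p)) < 1e-12
```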
Notice that the poles grouped by their instanton actions correspond to the $b$'s with non-vanishing contributions in Equation \\ref{eqn:asymptotics}. Furthermore, the sum of residues is proportional to the sum $\\big( \\sin\\frac{11b \\pi}{70} - \\sin\\frac{31b \\pi}{70} - \\sin\\frac{39b \\pi}{70} + \\sin\\frac{59b \\pi}{70}\\big)$ at each $b$, so the Borel sum correctly captures the non-perturbative contributions to the exact partition function. \n\n\\section{Homological block decomposition of $Z_{CS}(\\Sigma(2,5,7))$ and the modular transform}\nWe conclude this paper by writing the partition function in a categorification-friendly form, as was advertised in Equation \\ref{eqn:abelianDecomposition}. To summarize, we started with the exact partition function $Z_{CS}(\\Sigma(2,5,7))$, considered its perturbative expansion and performed a Borel resummation. Although our example is a homology 3-sphere and has only one abelian flat connection, more generally the Borel sum results in a decomposition into homological blocks \\cite{GMP}:\n$$Z_{CS}(\\Sigma(2,5,7)) = \\sum_{a} e^{2 \\pi i k CS_{a}}Z_{a},$$\nwhere the summation runs over abelian flat connections. Each ``homological block'' $Z_{a}$ gets contributions from both the abelian flat connection $a$ and the irreducible $SU(2)$ flat connections. 
How the irreducible flat connections regroup themselves into each homological block is encoded in the Stokes monodromy coefficients as follows:\n\\begin{gather}\nZ_{CS}(\\Sigma(2,5,7)) = \\frac{1}{2}\\bigg[S_{\\frac{\\pi}{2}-\\epsilon}Z_{\\text{pert}}(1\/k) + S_{\\frac{\\pi}{2}+\\epsilon}Z_{\\text{pert}}(1\/k) \\bigg] = Z^{\\alpha_{0}}_{\\text{pert}} + \\frac{1}{2} \\sum_{\\tilde{\\beta}}m_{\\tilde{\\beta}}^{(\\alpha_{0},0)}e^{2 \\pi i k S_{\\tilde{\\beta}}}Z^{\\beta}_{\\text{pert}} \\nonumber \\\\\n=\\sum_{\\tilde{\\beta}} n_{\\tilde{\\beta},0}e^{2 \\pi i k S_{\\tilde{\\beta}}}Z^{\\beta}_{\\text{pert}}, \\quad \\text{where} \\quad n_{\\tilde{\\beta},0} = \\begin{cases} 1 & \\tilde{\\beta}=(\\alpha_{0},0) \\\\ \\frac{1}{2}m_{\\tilde{\\beta}}^{(\\alpha_{0},0)} & \\text{otherwise.} \\end{cases} \\label{eqn:trans-seriesCoeff}\n\\end{gather}\n\\begin{equation}\nm_{\\tilde{\\beta}}^{(\\alpha_{0},0)} = \\begin{cases}\n1 & \\tilde{\\beta} = (\\alpha_{1}, -n^{2}\/280), \\quad \\text{for} \\quad n = 9, 19, 51, 61 \\quad (\\text{mod} \\, 140) \\\\\n-1 & \\tilde{\\beta} = (\\alpha_{1}, -n^{2}\/280), \\quad \\text{for} \\quad n = 79, 89, 121, 131 \\quad (\\text{mod} \\, 140) \\\\\n1 & \\tilde{\\beta} = (\\alpha_{2}, -n^{2}\/280), \\quad \\text{for} \\quad n = 73, 87, 123, 137 \\quad (\\text{mod} \\, 140) \\\\\n-1 & \\tilde{\\beta} = (\\alpha_{2}, -n^{2}\/280), \\quad \\text{for} \\quad n = 3, 17, 53, 67 \\quad (\\text{mod} \\, 140) \\\\\n1 & \\tilde{\\beta} = (\\alpha_{3}, -n^{2}\/280), \\quad \\text{for} \\quad n = 23, 33, 37, 47 \\quad (\\text{mod} \\, 140) \\\\\n-1 & \\tilde{\\beta} = (\\alpha_{3}, -n^{2}\/280), \\quad \\text{for} \\quad n = 93, 103, 107, 117 \\quad (\\text{mod} \\, 140) \\\\\n1 & \\tilde{\\beta} = (\\alpha_{4}, -n^{2}\/280), \\quad \\text{for} \\quad n = 83, 97, 113, 127 \\quad (\\text{mod} \\, 140) \\\\\n-1 & \\tilde{\\beta} = (\\alpha_{4}, -n^{2}\/280), \\quad \\text{for} \\quad n = 13, 27, 43, 57 \\quad (\\text{mod} \\, 140) \\\\\n1 & \\tilde{\\beta} = (\\alpha_{5}, -n^{2}\/280), \\quad \\text{for} \\quad n = 11, 59, 101, 109 \\quad (\\text{mod} \\, 140) \\\\\n-1 & \\tilde{\\beta} = (\\alpha_{5}, -n^{2}\/280), \\quad \\text{for} \\quad n = 31, 39, 81, 129 \\quad (\\text{mod} \\, 140) \\\\\n1 & \\tilde{\\beta} = (\\alpha_{6}, -n^{2}\/280), \\quad \\text{for} \\quad n = 1, 69, 99, 111 \\quad (\\text{mod} \\, 140) \\\\\n-1 & \\tilde{\\beta} = (\\alpha_{6}, -n^{2}\/280), \\quad \\text{for} \\quad n = 29, 41, 71, 139 \\quad (\\text{mod} \\, 140) \\end{cases}\n\\end{equation}\nEquation \\ref{eqn:trans-seriesCoeff} holds for any Seifert manifold with three singular fibers (which includes our example $\\Sigma(2,5,7)$) \\cite{CostinGaroufalidis}. In \\cite{GPV}, it was conjectured that there is a ``modular transform'' of the homological blocks $Z_{a}$, which turns them into a ``categorification-friendly'' form. Namely,\n$$Z_{a} = \\frac{1}{i \\sqrt{2k}} \\sum_{b} S_{ab} \\hat{Z}_{b},$$\nfor some $k$-independent $S_{ab}$. Above, $b$ runs over the abelian flat connections, and each $\\hat{Z}_{b}$ is an element of $q^{\\Delta_{b}}\\mathbb{Z}[[q]]$ for some $\\Delta_{b} \\in \\mathbb{Q}$. Suppose the exact partition function is a linear sum of mock modular forms, and there are multiple abelian flat connections. Then, a homological block decomposition regroups the mock modular forms (see \\cite{GPV} for examples). In our example, however, there is only one abelian flat connection $\\alpha_{0}$, because $\\Sigma(2,5,7)$ is a homology sphere. Therefore, it suffices to find $\\hat{Z}_{\\alpha_{0}}$ which is an element of $q^{\\Delta_{\\alpha_{0}}} \\mathbb{Z}[[q]]$. 
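The integer $q$-series inside $\hat{Z}_{\alpha_{0}}$ can be generated directly from the $\psi$-coefficients of the four mock modular forms: the exponents $n^{2}/280$ become integers after factoring out the leading power $q^{121/280}$ (our sketch; the overall normalization is fixed separately):

```python
def psi(a, n, m=140):
    # psi_{2p}^{a}(n) with 2p = m = 140
    r = n % m
    return 1 if r == a % m else (-1 if r == (-a) % m else 0)

series = {}
for n in range(200):
    c = psi(11, n) - psi(31, n) - psi(39, n) + psi(59, n)
    if c:
        assert (n * n - 121) % 280 == 0   # exponents shift by integers only
        series[(n * n - 121) // 280] = c
print(sorted(series.items())[:5])  # [(0, 1), (3, -1), (5, -1), (12, 1), (23, -1)]
```

The leading coefficients reproduce the series $1 - q^{3} - q^{5} + q^{12} + \cdots$.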
From the exact partition function\n$$i q^{\\phi(2,5,7)\/4}\\sqrt{2k}Z_{CS}(\\Sigma(2,5,7)) = \\frac{1}{2}(\\tilde{\\Psi}_{70}^{11} - \\tilde{\\Psi}_{70}^{31} - \\tilde{\\Psi}_{70}^{39} + \\tilde{\\Psi}_{70}^{59})(q),$$\nand the definition of the mock modular forms\n$$\\tilde{\\Psi}_{p}^{a}(q) = \\sum_{n \\geq 0} \\psi_{2p}^{a}(n)q^{n^{2}\/4p},$$\nwe can easily see that the partition function is an element of $q^{121\/280}\\mathbb{Z}[[q]]$. Thus,\n$$\\hat{Z}_{\\alpha_{0}} = q^{1\/2}(1 - q^{3} - q^{5} + q^{12} + \\cdots) \\quad \\text{and} \\quad S_{\\alpha_{0}\\alpha_{0}} = \\frac{1}{2}.$$\n\n\n\\acknowledgments{The author is deeply indebted to Sergei Gukov for his suggestions and invaluable discussions.\n\nThe work is funded in part by the DOE Grant DE-SC0011632 and the Walter Burke Institute for Theoretical Physics, and also by the Samsung Scholarship.}\n\n\n\n\\newpage\n\n\\bibliographystyle{JHEP_TD}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWhereas many upper bounds for the probability that a sum of independent\nreal-valued (integrable) random variables exceeds its expectation by a\nspecified threshold value $t\\in\\mathbb{R}$ are documented in the literature\n(see \\textit{e.g.} \\cite{BLM13} and the references therein), very few results\nare available when the random variables involved in the summation are sampled\nfrom a finite population according to a given survey scheme and next\nappropriately normalized (using the related survey weights as originally\nproposed in \\cite{HT51} for approximating a total). The sole situation where\nresults in the independent setting straightforwardly carry over to survey\nsamples (without replacement) corresponds to the case where the variables are\nsampled independently with possibly unequal weights, \\textit{i.e.} Poisson\nsampling. 
For more complex sampling plans, the dependence structure between\nthe sampled variables makes the study of the fluctuations of the resulting\nweighted sum approximating the total (referred to as the\n\\textit{Horvitz-Thompson total estimate}) very challenging. The case of basic\n\\textit{sampling without replacement} (\\textsc{SWOR} in abbreviated form) was\nfirst considered in \\cite{Hoeffding63}, and refined in \\cite{Serfling74}\nand \\cite{BardenetMaillard}. In contrast, the asymptotic behavior of the\nHorvitz-Thompson estimator as $N$ tends to infinity is well-documented in the\nliterature. Following in the footsteps of the seminal contribution\n\\cite{Hajek64}, a variety of limit results (\\textit{e.g.} consistency,\nasymptotic normality) have been established for Poisson sampling and next\nextended to rejective sampling viewed as conditional Poisson sampling given\nthe sample size and to sampling schemes that are close to the latter in a\n\\textit{coupling} sense in \\cite{Rob82} and \\cite{Ber98}. Although the results\nestablished in this paper are nonasymptotic, these arguments\n(conditioning upon the sampling size and coupling) are involved in their proofs.\n\nIt is indeed the major purpose of this article to extend tail bounds proved\nfor \\textsc{SWOR} to the case of rejective sampling, a fixed size sampling\nscheme generalizing it. The approach we develop is thus based on viewing\nrejective sampling as conditional Poisson sampling given the sample size and\nthen writing the deviation probability as a ratio of two quantities: the joint\nprobability that a Poisson sampling-based total estimate exceeds the threshold\n$t$ and the cardinality of the Poisson sample equals the\n(deterministic) size $n$ of the rejective plan considered in the numerator, and\nthe probability that the Poisson sample size is equal to $n$ in the\ndenominator. 
Whereas a sharp lower bound for the denominator can be\nstraightforwardly derived from a local Berry-Esseen bound proved in\n\\cite{Deheuvels} for sums of independent, possibly non identically\ndistributed, Bernoulli variables, an accurate upper bound for the numerator can be\nestablished by means of an appropriate exponential change of measure\n(\\textit{i.e.} Esscher transformation), following in the footsteps of the\nmethod proposed in \\cite{Talagrand95}, a refinement of the classical argument\nof Bahadur-Rao's theorem in order to improve exponential bounds in the\nindependent setting. The tail bounds (of Bennett\/Bernstein type) established\nby means of this method are shown to be sharp in the sense that they\nexplicitly involve the 'small' asymptotic variance of the Horvitz-Thompson total\nestimate based on rejective sampling, in contrast to those proved by using the\n\\textit{negative association} property of the sampling scheme.\n\nThe article is organized as follows. A few key concepts pertaining to survey\ntheory are recalled in section \\ref{sec:background}, as well as specific\nproperties of Poisson and rejective sampling schemes. For comparison purposes,\npreliminary tail bounds in the (conditional) Poisson case are stated in\nsection \\ref{sec:Poisson}. The main results of the paper, namely sharper exponential\nbounds for conditional Poisson sampling, are proved in section\n\\ref{sec:main}, while section \\ref{sec:extension} explains how they can be\nextended to other sampling schemes, sufficiently close to rejective sampling in the sense of the total variation norm. A few remarks are finally collected in section \\ref{sec:concl} and some\ntechnical details are deferred to the Appendix section.
Here and throughout, the indicator function of any\nevent $\\mathcal{E}$ is denoted by $\\mathbb{I}\\{\\mathcal{E}\\}$, the power set\nof any set $E$ by $\\mathcal{P}(E)$, the variance of any square integrable r.v.\n$Y$ by $Var(Y)$, the cardinality of any finite set $E$ by $\\#E$ and the Dirac\nmass at any point $a$ by $\\delta_{a}$. For any real number $x$, we set\n$x^{+}=\\max\\{x,\\; 0 \\}$, $x^{-}=\\max\\{ -x,\\; 0 \\}$, $\\lceil x\\rceil=\\inf\\{\nk\\in\\mathbb{Z}:\\; x\\leq k \\}$ and $\\lfloor x \\rfloor=\\sup\\{ k\\in\\mathbb{Z}:\\;\nk\\leq x \\}$.\n\n\\subsection{Sampling schemes and Horvitz-Thompson estimation}\n\nConsider a finite population of $N\\geq1$ distinct units, $\\mathcal{I}_{N}=\\{1,\\ \\ldots,\\; N \\}$ say. A survey sample of (possibly random) size\n$n\\leq N$ is any subset $s=\\{i_{1},\\; \\ldots,\\; i_{n(s)} \\}\\in\\mathcal{P}(\\mathcal{I}_{N})$ of size $n(s)=n$. A sampling design without replacement is\ndefined as a probability distribution $R_{N}$ on the set of all possible\nsamples $s\\in\\mathcal{P}(\\mathcal{I}_{N})$. For all $i\\in\\mathcal{I}_{N}$, the\nprobability that the unit $i$ belongs to a random sample $S$ defined on a\nprobability space $(\\Omega,\\; \\mathcal{F},\\; \\mathcal{P})$ and drawn from\ndistribution $R_{N}$ is denoted by $\\pi_{i}=\\mathbb{P}\\{i\\in S \\}=R_{N}(\\{i\\})$. The $\\pi_{i}$'s are referred to as \\textit{first order inclusion\nprobabilities}. The \\textit{second order inclusion probability} related to any\npair $(i,j)\\in\\mathcal{I}_{N}^{2}$ is denoted by $\\pi_{i,j}=\\mathbb{P}\\{(i,j)\\in S^{2} \\}=R_{N}(\\{i,\\; j \\})$ (observe that $\\pi_{i,i}=\\pi_{i}$).\nHere and throughout, we denote by $\\mathbb{E}[.]$ the $\\mathbb{P}$-expectation\nand by $Var(Z)$ the variance of any $\\mathbb{P}$-square integrable\nr.v. 
$Z:\\Omega\\rightarrow\\mathbb{R}$.\n\nThe random vector $\\boldsymbol{\\epsilon} _{N}=(\\epsilon_{1},\\; \\ldots,\\;\n\\epsilon_{N})$ defined on $(\\Omega,\\; \\mathcal{F},\\; \\mathbb{P})$, where\n$\\epsilon_{i}=\\mathbb{I}\\{ i\\in S \\}$, fully characterizes the random sample\n$S\\in\\mathcal{P}(\\mathcal{I}_{N})$. In particular, the sample size $n(S)$ is\ngiven by $n(S)=\\sum_{i=1}^{N}\\epsilon_{i}$, its expectation and variance by\n$\\mathbb{E}[n(S)]=\\sum_{i=1}^{N}\\pi_{i}$ and $Var(n(S))=\\sum_{1\\leq i,\\; j\\leq\nN}\\{\\pi_{i,j}-\\pi_{i}\\pi_{j}\\}$ respectively. The $1$-dimensional marginal\ndistributions of the random vector $\\boldsymbol{\\epsilon} _{N}$ are the\nBernoulli distributions $Ber(\\pi_{i})=\\pi_{i}\\delta_{1}+(1-\\pi_{i})\\delta_{0}$, $1\\leq i \\leq N$, and its covariance matrix is $\\Gamma_{N}=(\\pi_{i,j}-\\pi_{i}\\pi_{j})_{1\\leq i,\\;j \\leq N}$. \\medskip\n\nWe place ourselves here in the \\textit{fixed-population} or\n\\textit{design-based} sampling framework, meaning that we suppose that a fixed\n(unknown) real value $x_{i}$ is assigned to each unit $i\\in\\mathcal{I}_{N}$.\nAs originally proposed in the seminal contribution \\cite{HT51}, the\nHorvitz-Thompson estimate of the population total $S_{N}=\\sum_{i=1}^{N} x_{i}$\nis given by\n\\begin{equation}\n\\label{eq:HT}\\widehat{S}_{\\boldsymbol{\\pi} _{N}}^{\\boldsymbol{\\epsilon} _{N}}=\\sum_{i=1}^{N} \\frac{\\epsilon_{i}}{\\pi_{i}}x_{i}=\\sum_{i\\in S}\\frac{1}{\\pi_{i}}x_{i},\n\\end{equation}\nwith $0\/0=0$ by convention. Throughout the article, we assume that the\n$\\pi_{i}$'s are all strictly positive. 
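In code, the Horvitz-Thompson construction boils down to a weighted sum over the sampled units. The sketch below is purely illustrative (the population values and inclusion probabilities are made up, and the indicators are drawn independently for simplicity, which corresponds to the Poisson scheme formally introduced in the next subsection); it evaluates \\eqref{eq:HT} for one realization of $\\boldsymbol{\\epsilon}_N$.

```python
import random

def horvitz_thompson(x, pi, eps):
    """Horvitz-Thompson estimate of the total sum(x): weight each
    sampled unit (eps[i] == 1) by the inverse of its inclusion probability."""
    return sum(xi / pii for xi, pii, e in zip(x, pi, eps) if e)

# Toy population (hypothetical values) and first order inclusion probabilities.
x = [2.0, 5.0, 1.0, 7.0, 4.0]          # true total S_N = 19
pi = [0.5, 0.8, 0.3, 0.9, 0.6]

random.seed(0)
eps = [1 if random.random() < p else 0 for p in pi]   # one random sample S
print(horvitz_thompson(x, pi, eps))    # one realization of the estimate
```

Averaging the estimate over many independent draws of $\\boldsymbol{\\epsilon}_N$ recovers $S_N=19$ up to Monte Carlo error, in line with the unbiasedness of the HT estimate.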
Hence, the expectation of\n\\eqref{eq:HT} is $\\mathbb{E}[\\widehat{S}_{\\boldsymbol{\\pi} _{N}}^{\\boldsymbol{\\epsilon} _{N}}]=S_{N}$ and, in the case where the size of the\nrandom sample is deterministic, its variance is given by\n\\begin{equation}\nVar(\\widehat{S}_{\\boldsymbol{\\pi} }^{\\boldsymbol{\\epsilon} _{N}})=\\sum_{i<\nj}\\left( \\frac{x_{i}}{\\pi_{i}}-\\frac{x_{j}}{\\pi_{j}} \\right) ^{2}\\times\\left( \\pi_{i}\\pi_{j}-\\pi_{i,j} \\right) .\n\\end{equation}\n\nThe goal of this paper is to establish accurate bounds for tail probabilities\n\\begin{equation}\n\\label{eq:tailprob}\\mathbb{P}\\{\\widehat{S}_{\\boldsymbol{\\pi} _{N}}^{\\boldsymbol{\\epsilon} _{N}}-S_{N} >t \\},\n\\end{equation}\nwhere $t\\in\\mathbb{R}$, when the sampling scheme $\\boldsymbol{\\epsilon} _{N}$\nis \\textit{rejective}, a very popular sampling plan that generalizes \\textit{simple random\nsampling without replacement} and can be expressed as a conditional Poisson\nscheme, as recalled in the following subsection for clarity. One may refer to\n\\cite{Dev87} for instance for an excellent account of survey theory, including\nmany more examples of sampling designs.\n\n\\subsection{Poisson and conditional Poisson sampling}\n\n\\label{subsec:Poisson} Undoubtedly, one of the simplest sampling plans is the\n\\textit{Poisson survey scheme} (without replacement), a generalization of\n\\textit{Bernoulli sampling} originally proposed in \\cite{Goodman} for the case\nof unequal weights: the $\\epsilon_{i}$'s are independent and the sampling\ndistribution $P_{N}$ is thus entirely determined by the first order inclusion\nprobabilities $\\mathbf{p}_{N}=(p_{1},\\;\\ldots,\\;p_{N})\\in]0,1[^{N}$:\n\\begin{equation}\n\\forall s\\in\\mathcal{P}(\\mathcal{I}_{N}),\\;\\;P_{N}(s)=\\prod_{i\\in s}p_{i}\\prod_{i\\notin s}(1-p_{i}). 
\\label{eq:Poisson}\n\\end{equation}\nObserve in addition that the behavior of the quantity \\eqref{eq:HT} can be\ninvestigated by means of results established for sums of independent random\nvariables. However, the major drawback of this sampling plan lies in the\nrandom nature of the corresponding sample size, impacting significantly the\nvariability of \\eqref{eq:HT}. The variance of the Poisson sample size is\ngiven by $d_{N}=\\sum_{i=1}^{N}p_{i}(1-p_{i})$, while the variance of\n\\eqref{eq:HT} is in this case:\n\\[\nVar\\left( \\widehat{S}_{\\boldsymbol{\\pi} _{N}}^{\\boldsymbol{\\epsilon} _{N}}\\right) =\\sum_{i=1}^{N}\\frac{1-p_{i}}{p_{i}}x_{i}^{2}.\n\\]\nFor this reason, \\textit{rejective sampling}, a sampling design $R_{N}$ of\nfixed size $n\\leq N$, is often preferred in practice. It generalizes the\n\\textit{simple random sampling without replacement} (where all samples with\ncardinality $n$ are equally likely to be chosen, with probability $1\/\\binom{N}{n}$,\nall the corresponding first and second order inclusion probabilities being thus equal to\n$n\/N$ and $n(n-1)\/(N(N-1))$ respectively). Denoting by $\\boldsymbol{\\pi}\n_{N}^{R}=(\\pi_{1}^{R},\\;\\ldots,\\;\\pi_{N}^{R})$ its first order inclusion\nprobabilities and by $\\mathcal{S}_{n}=\\{s\\in\\mathcal{P}(\\mathcal{I}_{N}):\\;\\#s=n\\}$ the subset of all possible samples of size $n$, it is defined\nby:\n\\begin{equation}\n\\forall s\\in\\mathcal{S}_{n},\\;\\;R_{N}(s)=C\\prod_{i\\in s}p_{i}^{R}\\prod_{i\\notin s}(1-p_{i}^{R}), \\label{eq:Rejective}\n\\end{equation}\nwhere $C=1\/\\sum_{s\\in\\mathcal{S}_{n}}\\prod_{i\\in s}p_{i}^{R}\\prod_{i\\notin\ns}(1-p_{i}^{R})$ and the vector of parameters $\\mathbf{p}_{N}^{R}=(p_{1}^{R},\\;\\ldots\n,\\;p_{N}^{R})\\in]0,1[^{N}$ yields first order inclusion probabilities equal to\nthe $\\pi_{i}^{R}$'s and is such that $\\sum_{i=1}^{N}p_{i}^{R}=n$. 
Under this\nlatter additional condition, such a vector $\\mathbf{p}_{N}^{R}$ exists and is\nunique (see \\cite{Dupacova}) and the related representation\n\\eqref{eq:Rejective} is then said to be \\textit{canonical}. Notice\nincidentally that any vector $\\mathbf{p}_{N}^{\\prime}\\in]0,1[^{N}$ such that\n$p_{i}^{R}\/(1-p_{i}^{R})=cp_{i}^{\\prime}\/(1-p_{i}^{\\prime})$ for all\n$i\\in\\{1,\\;\\ldots,\\;N\\}$ for some constant $c>0$ can be used to write a\nrepresentation of $R_{N}$ of the same type as \\eqref{eq:Rejective}. Comparing\n\\eqref{eq:Rejective} and \\eqref{eq:Poisson} reveals that rejective\nsampling $R_{N}$ of fixed size $n$ can be viewed as Poisson sampling given that the\nsample size is equal to $n$. It is for this reason that rejective sampling is\nusually referred to as \\textit{conditional Poisson sampling}. For simplicity's\nsake, the superscript $R$ is omitted in the sequel. One must pay attention not\nto get the $\\pi_{i}$'s and the $p_{i}$'s mixed up (except in the SWOR case, where these quantities are all equal to $n\/N$): the latter are the first\norder inclusion probabilities of $P_{N}$, whereas the former are those of its\nconditional version $R_{N}$. However they can be related by means of the\nresults stated in \\cite{Hajek64} (see Theorem 5.1 therein, as well as Lemma \\ref{lem:bias} in section \\ref{sec:main} and \\cite{BLRG12}): $\\forall\ni\\in\\{1,\\;\\ldots,\\;N\\}$,\n\\begin{align}\n\\pi_{i}(1-p_{i}) & =p_{i}(1-\\pi_{i})\\times\\left( 1-\\left( \\tilde{\\pi}-\\pi_{i}\\right) \/d_{N}^{\\ast}+o(1\/d_{N}^{\\ast})\\right) ,\\label{eq:rel1}\\\\\np_{i}(1-\\pi_{i}) & =\\pi_{i}(1-p_{i})\\times\\left( 1-\\left( \\tilde{p}-p_{i}\\right) \/d_{N}+o(1\/d_{N})\\right) , \\label{eq:rel2}\n\\end{align}\nwhere $d_{N}^{\\ast}=\\sum_{i=1}^{N}\\pi_{i}(1-\\pi_{i})$, $d_{N}=\\sum_{i=1}^{N}p_{i}(1-p_{i})$, $\\tilde{\\pi}=(1\/d_{N}^{\\ast})\\sum_{i=1}^{N}\\pi_{i}^{2}(1-\\pi_{i})$ and $\\tilde{p}=(1\/d_{N})\\sum_{i=1}^{N}(p_{i})^{2}(1-p_{i})$. 
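Since a rejective scheme is a Poisson scheme conditioned on the event $\\{\\sum_{i=1}^N\\epsilon_i=n\\}$, it can be simulated by naive rejection, as in the sketch below (toy parameters playing the role of the canonical $p_i$'s; efficient dedicated algorithms are discussed in \\cite{CDL94}).

```python
import random

def rejective_sample(p, n, rng=random):
    """Conditional Poisson (rejective) sampling of fixed size n: redraw
    independent Bernoulli(p[i]) indicators until the sample size equals n."""
    while True:
        eps = [1 if rng.random() < p_i else 0 for p_i in p]
        if sum(eps) == n:
            return eps

random.seed(0)
p = [0.2, 0.5, 0.5, 0.8]               # hypothetical parameters with sum(p) = n = 2
samples = [rejective_sample(p, 2) for _ in range(1000)]
assert all(sum(s) == 2 for s in samples)   # the sample size is now deterministic
```

Note that the first order inclusion probabilities $\\pi_i$ of the conditioned scheme differ in general from the $p_i$'s, in accordance with the relationships \\eqref{eq:rel1}-\\eqref{eq:rel2}.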
\\medskip\n\nSince the major advantage of conditional Poisson sampling lies in its reduced\nvariance property (compared to Poisson sampling in particular, see the discussion in section \\ref{sec:main}), focus is next\non exponential inequalities involving a variance term, of Bennett\/Bernstein\ntype namely.\n\n\\section{Preliminary Results}\n\n\\label{sec:Poisson}\n\nAs a first go, we establish tail bounds for the Horvitz-Thompson estimator in\nthe case where the variables are sampled according to a Poisson scheme. We\nnext show how to exploit the \\textit{negative association} property satisfied\nby rejective sampling in order to extend the latter to conditional Poisson\nsampling. Of course, this approach does not account for the reduced variance\nproperty of Horvitz-Thompson estimates based on rejective sampling; it is the\npurpose of the next section to improve these first exponential bounds.\n\n\\subsection{Tail bounds for Poisson sampling}\n\nAs previously observed, bounding the tail probability \\eqref{eq:tailprob} is\neasy in the Poisson situation insofar as the variables summed up in\n\\eqref{eq:HT} are independent though possibly non identically distributed\n(since the inclusion probabilities are not assumed to be all equal). The\nfollowing theorem thus directly follows from well-known results related to\ntail bounds for sums of independent random variables.\n\n\\begin{theorem}\n\\label{thm:poisson}\\textsc{(Poisson sampling)} Assume that the survey scheme\n$\\boldsymbol{\\epsilon} _{N}$ defines a Poisson sampling plan with first order\ninclusion probabilities $p_{i}>0$, $1\\leq i \\leq N$. 
Then, we have: $\\forall t>0$, $\\forall N\\geq1$,\n\\begin{align}\n\\mathbb{P}\\left\\{ \\widehat{S}_{\\boldsymbol{p} _{N}}^{\\boldsymbol{\\epsilon}\n_{N}}-S_{N} >t \\right\\} & \\leq\\exp\\left( -\\frac{\\sum_{i=1}^{N}\\frac{1-p_{i}}{p_{i}}x_{i}^{2}}{\\left( \\max_{1\\leq i \\leq N}\\frac{\\vert x_{i}\\vert}{p_{i}} \\right) ^{2}} H\\left( \\frac{\\max_{1\\leq i \\leq N}\\frac{\\vert\nx_{i}\\vert}{p_{i}}\\,t}{\\sum_{i=1}^{N}\\frac{1-p_{i}}{p_{i}}x_{i}^{2}} \\right)\n\\right) \\label{eq:Bennett}\\\\\n& \\leq\\exp\\left( \\frac{-t^{2}}{\\frac{2}{3}\\max_{1\\leq i\\leq N}\\frac{\\vert\nx_{i}\\vert}{p_{i}}\\,t+ 2\\sum_{i=1}^{N}\\frac{1-p_{i}}{p_{i}}x_{i}^{2}} \\right) ,\n\\label{eq:Bern}\n\\end{align}\nwhere $H(x)=(1+x)\\log(1+x)-x$ for $x\\geq0$.\n\\end{theorem}\n\nBounds \\eqref{eq:Bennett} and \\eqref{eq:Bern} straightforwardly result from\nBennett's inequality \\cite{Bennett} and Bernstein's exponential inequality\n\\cite{Bernstein} respectively, when applied to the independent random\nvariables $(\\epsilon_{i}\/p_{i})x_{i}$, $1\\leq i \\leq N$. By applying these\nresults to the variables $-(\\epsilon_{i}\/p_{i})x_{i}$'s, the same bounds\nnaturally hold for the deviation probability $\\mathbb{P}\\{\\widehat{S}_{\\boldsymbol{p} _{N}}^{\\boldsymbol{\\epsilon} _{N}}-S_{N} <-t \\}$ (and,\nincidentally, for $\\mathbb{P}\\{\\vert\\widehat{S}_{\\boldsymbol{p} _{N}}^{\\boldsymbol{\\epsilon} _{N}}-S_{N} \\vert>t \\}$ up to a factor $2$).\nDetails, as well as extensions to other deviation inequalities (see\n\\textit{e.g.} \\cite{FukNagaev}), are left to the reader.\n\n\n\\subsection{Exponential inequalities for sums of negatively associated random variables}\n\nFor clarity, we first recall the definition of \\textit{negatively associated\nrandom variables}, see \\cite{JDP83}.\n\n\\begin{definition}\n\\label{def:negassoc} Let $Z_{1},\\; \\ldots,\\; Z_{n}$ be random variables\ndefined on the same probability space, valued in a measurable space\n$(E,\\mathcal{E})$. 
They are said to be negatively associated iff for any pair\nof disjoint subsets $A_{1}$ and $A_{2}$ of the index set $\\{1,\\; \\ldots,\\; n\n\\}$\n\\begin{equation}\n\\label{eq:neg}Cov \\left( f((Z_{i})_{i\\in A_{1}}),\\; g((Z_{j})_{j\\in A_{2}})\n\\right) \\leq0,\n\\end{equation}\nfor any real valued measurable functions $f:E^{\\#A_{1}}\\rightarrow\\mathbb{R}$\nand $g:E^{\\#A_{2}}\\rightarrow\\mathbb{R}$ that are both increasing in each variable.\n\\end{definition}\n\nThe following result provides tail bounds for sums of negatively associated\nrandom variables, extending the usual Bennett\/Bernstein inequalities for the\ni.i.d. setting, see \\cite{Bennett} and \\cite{Bernstein}.\n\n\\begin{theorem}\n\\label{thm:BernNeg} Let $Z_{1},\\;\\ldots,\\;Z_{N}$ be square integrable\nnegatively associated real valued random variables such that $|Z_{i}|\\leq c$\na.s. and $\\mathbb{E}[Z_{i}]=0$ for $1\\leq i\\leq N$. Let $a_{1},\\; \\ldots,\\; a_N$ be\nnon negative constants and set $\\sigma^{2}=\\frac{1}{N}\\sum_{i=1}^{N}a_{i}^{2}Var(Z_{i})$. Then, for all $t>0$ and all $N\\geq1$, we have:\n\\begin{align}\n\\mathbb{P}\\left\\{ \\sum_{i=1}^{N}a_{i}Z_{i}\\geq t\\right\\} & \\leq\\exp\\left(\n-\\frac{N\\sigma^{2}}{c^{2}}H\\left( \\frac{ct}{N\\sigma^{2}}\\right) \\right) \\\\\n& \\leq\\exp\\left( -\\frac{t^{2}}{2N\\sigma^{2}+\\frac{2ct}{3}}\\right) .\n\\end{align}\n\\end{theorem}\n\nBefore detailing the proof, observe that the same bounds hold true for the\ntail probability $\\mathbb{P}\\left\\{ \\sum_{i=1}^{N}a_{i}Z_{i}\\leq-t\\right\\} $\n(and for $\\mathbb{P}\\left\\{ |\\sum_{i=1}^{N}a_{i}Z_{i}|\\geq t\\right\\} $ as\nwell, up to a multiplicative factor $2$). Refer also to Theorem 4 in\n\\cite{Janson} for a similar result in a more restrictive setting\n(\\textit{i.e.} for tail bounds related to sums of \\textit{negatively related}\nr.v.'s) and to \\cite{Shao00} as well. 
\\begin{proof}\nThe proof starts off with the usual Chernoff method: for all $\\lambda>0$,\n\\begin{equation}\n\\label{eq:Chernoff}\n\\mathbb{P}\\left\\{\\sum_{i=1}^N a_i Z_i \\geq t\\right\\}\\leq \\exp\\left( -\\lambda t +\\log \\mathbb{E}\\left[e^{\\lambda\\sum_{i=1}^N a_i Z_i} \\right] \\right).\n\\end{equation}\n\nNext, observe that, for all $\\lambda>0$, we have\n\\begin{eqnarray}\n\\mathbb{E}\\left[\\exp\\left(\\lambda\\sum_{i=1}^N a_i Z_i\\right)\\right]&=&\\mathbb{E}\n\\left[\\exp(\\lambda a_N Z_N )\\exp\\left(\\lambda\\sum_{i=1}^{N-1}\na_i Z_i\\right)\\right]\\nonumber\\\\\n&\\leq &\\mathbb{E}\n\\left[ \\exp(\\lambda a_N Z_N) \\right]\\mathbb{E}\\left[\\exp\\left(\\lambda\\sum_{i=1}^{N-1}\na_i Z_i\\right) \\right]\\nonumber\\\\\n&\\leq & \\prod_{i=1}^N\\mathbb{E}\n\\left[ \\exp(\\lambda a_iZ_i) \\right],\\label{eq:neg2}\n\\end{eqnarray}\nusing the property \\eqref{eq:neg}\ncombined with a descending recurrence on $i$. The proof is finished by plugging \\eqref{eq:neg2}\ninto \\eqref{eq:Chernoff}\nand finally optimizing the resulting bound w.r.t. $\\lambda>0$, just like in the proof of the classic Bennett\/Bernstein inequalities, see \\cite{Bennett}\nand \\cite{Bernstein}. 
$\\square$\n\\end{proof} \\medskip\n\nThe first assertion of the theorem stated below reveals that any rejective\nscheme $\\boldsymbol{\\epsilon} ^{*}_{N}$ forms a collection of negatively\nassociated r.v.'s, the second one appearing then as a direct consequence of Theorem \\ref{thm:BernNeg}.\nWe underline that many sampling schemes (\\textit{e.g.} Rao-Sampford sampling, Pareto sampling, Srinivasan sampling) of fixed size are actually described by random vectors $\\boldsymbol{\\epsilon}_N$ with negatively associated components, see \\cite{BJ12} or \\cite{KCR11}, so that exponential bounds similar to those stated below can be proved for such sampling plans.\n\n\\begin{theorem}\n\\label{thm:neg} Let $N\\geq1$ and $\\boldsymbol{\\epsilon} ^{*}_{N}=(\\epsilon^{*}_{1},\\; \\ldots,\\; \\epsilon^{*}_{N})$ be the vector of indicator variables\nrelated to a rejective plan on $\\mathcal{I}_{N}$ with first order inclusion probabilities $(\\pi_1,\\; \\ldots,\\; \\pi_N)\\in ]0,1]^N$. Then, the following\nassertions hold true.\n\n\\begin{itemize}\n\\item[(i)] The binary random variables $\\epsilon^{*}_{1},\\; \\ldots,\\;\n\\epsilon^{*}_{N}$ are negatively associated.\n\n\\item[(ii)] For any $t\\geq0$ and $N\\geq1$, we have:\n\\begin{align*}\n\\mathbb{P}\\left\\{ \\widehat{S}_{\\boldsymbol{\\pi} }^{\\boldsymbol{\\epsilon}\n^{*}_{N}}-S_{N} \\geq t \\right\\} & \\leq2 \\exp\\left( -\\frac{\\sum_{i=1}^{N}\\frac{1-\\pi_{i}}{\\pi_{i}}x_{i}^{2}}{\\left( \\max_{1\\leq i \\leq\nN}\\frac{\\vert x_{i}\\vert}{\\pi_{i}} \\right) ^{2}} H\\left( \\frac{\\max_{1\\leq i \\leq\nN}\\frac{\\vert x_{i}\\vert}{\\pi_{i}}\\,t\/2}{\\sum_{i=1}^{N}\\frac{1-\\pi_{i}}{\\pi_{i}}x_{i}^{2}} \\right) \\right) \\\\\n& \\leq2 \\exp\\left( \\frac{-t^{2}\/4}{\\frac{2}{3}\\max_{1\\leq i\\leq N}\\frac{\\vert\nx_{i}\\vert}{\\pi_{i}}\\,t+ 2\\sum_{i=1}^{N}\\frac{1-\\pi_{i}}{\\pi_{i}}x_{i}^{2}}\n\\right) .\n\\end{align*}\n\\end{itemize}\n\\end{theorem}\n\n\\begin{proof}\nConsidering the usual representation of the distribution of 
$(\\epsilon^*_1,\\; \\ldots,\\; \\epsilon^*_N)$ as the conditional distribution of a sample of independent Bernoulli variables $(\\epsilon_1,\\; \\ldots,\\; \\epsilon_N)$ conditioned upon the event $\\sum_{i=1}^N\\epsilon_i=n$ (see subsection \\ref{subsec:Poisson}), Assertion $(i)$ is a straightforward consequence of Theorem 2.8 in \\cite{JDP83}\n(see also \\cite{Barbour}).\n\\medskip\nAssertion $(i)$ shows in particular that Theorem \\ref{thm:BernNeg}\ncan be applied to the random variables $\\{ (\\epsilon_i^*\/\\pi_i-1)x_i^+:\\; 1\\leq i \\leq N \\}$ and to the random variables $\\{ (\\epsilon_i^*\/\\pi_i-1)x_i^-:\\; 1\\leq i \\leq N \\}$ as well. Using the union bound, we obtain that\n\\begin{multline*}\n\\mathbb{P}\\left\\{ \\widehat{S}_{\\boldsymbol{\\pi}}^{\\boldsymbol{\\epsilon}^*_N}-S_N \\geq t \\right\\}\\leq \\mathbb{P}\\left\\{ \\sum_{i=1}^N\\left( \\frac{\\epsilon^*_i}{\\pi_i}-1 \\right)x^+_i\\geq t\/2 \\right\\} \\\\+ \\mathbb{P}\\left\\{ \\sum_{i=1}^N\\left( \\frac{\\epsilon^*_i}{\\pi_i}-1 \\right)x^-_i\\leq -t\/2 \\right\\},\n\\end{multline*}\nand a direct application of Theorem \\ref{thm:BernNeg} to each of the terms involved in this bound straightforwardly proves Assertion $(ii)$. $\\square$\n\\end{proof}\n \\medskip\n\nThe negative association property makes it possible to handle the dependence of the\nterms involved in the summation. However, it may lead to rather loose\nprobability bounds. 
Indeed, except for the factor $2$, the bounds of Assertion\n$(ii)$ exactly correspond to those stated in Theorem \\ref{thm:poisson}, as if\nthe $\\epsilon_{i}^{*}$'s were independent, whereas the asymptotic variance $\\sigma^2_N$ of\n$\\widehat{S}_{\\boldsymbol{\\pi} }^{\\boldsymbol{\\epsilon} _{N}^{*}}$ can be much smaller than $\\sum_{i=1}^{N}(1-\\pi_{i})x_{i}^{2}\/\\pi_{i}$.\nIt is the goal of the subsequent analysis to improve these preliminary results and establish exponential bounds involving the asymptotic variance $\\sigma^2_N$.\n\\begin{remark}{\\sc (SWOR)} We point out that in the specific case of sampling without replacement, \\textit{i.e.} when $\\pi_i=n\/N$ for all $i\\in \\{1,\\; \\ldots,\\; N \\}$, the inequality stated in Assertion $(ii)$ is quite comparable (except for the factor $2$) to that which can be derived from the Chernoff bound given in \\cite{Hoeffding63}, see Proposition 2 in \\cite{BardenetMaillard}.\n\n\\end{remark}\n\n\n\\section{Main Results - Exponential Inequalities for Rejective Sampling}\n\n\\label{sec:main} The main results of the paper are stated and discussed in the\npresent section. More accurate bounds for the deviation probabilities related to the total estimate\n\\eqref{eq:HT} based on a rejective sampling scheme $\\boldsymbol{\\epsilon}\n_{N}^{\\ast}$ of (fixed) sample size $n\\leq N$ with first order inclusion\nprobabilities $\\boldsymbol{\\pi} _{N}=(\\pi_{1},\\;\\ldots,\\;\\pi_{N})$ and\ncanonical representation $\\mathbf{p}_{N}=(p_{1},\\;\\ldots,\\;p_{N})$ are now\ninvestigated. Consider a Poisson scheme $\\boldsymbol{\\epsilon} _{N}$ with\n$\\mathbf{p}_{N}$ as vector of first order inclusion probabilities. 
As\npreviously recalled, the distribution of $\\boldsymbol{\\epsilon} _{N}^{\\ast}$\nis equal to the conditional distribution of $\\boldsymbol{\\epsilon} _{N}$ given\n$\\sum_{i=1}^{N}\\epsilon_{i}=n$:\n\\begin{equation}\n(\\epsilon_{1}^{\\ast},\\epsilon_{2}^{\\ast},\\ldots,\\epsilon_{N}^{\\ast})\\overset{d}{=}(\\epsilon_{1},\\ldots,\\epsilon_{N})\\ \\Big|\\ \\sum_{i=1}^{N}\\epsilon_{i}=n.\\label{eq:distribution}\n\\end{equation}\nHence, we have: $\\forall t>0$, $\\forall N\\geq1$,\n\\begin{equation}\n\\mathbb{P}\\left\\{ \\widehat{S}_{\\boldsymbol{\\pi} _{N}}^{\\boldsymbol{\\epsilon} _{N}^{\\ast}}-S_{N}>t\\right\\} =\\mathbb{P}\\left\\{ \\sum_{i=1}^{N}\\frac{\\epsilon_{i}}{\\pi_{i}}x_{i}-S_{N}>t\\mid\n\\sum_{i=1}^{N}\\epsilon_{i}=n\\right\\} .\\label{eq:cond_tail}\n\\end{equation}\nAs a first go, we shall prove tail bounds for the quantity\n\\begin{equation}\n\\widehat{S}_{\\boldsymbol{p} _{N}}^{\\boldsymbol{\\epsilon} _{N}^{\\ast}}\\overset{def}{=}\\sum_{i=1}^{N}\\frac{\\epsilon_{i}^{\\ast}}{p_{i}}x_{i}.\\label{eq:HT2}\n\\end{equation}\nObserve that this corresponds to the HT estimate of the total $\\sum_{i=1}^{N}\\frac{\\pi_{i}}{p_{i}}x_{i}$. Refinements of the relationships \\eqref{eq:rel1} and\n\\eqref{eq:rel2} between the $p_{i}$'s and the $\\pi_{i}$'s shall next allow us\nto obtain an upper bound for \\eqref{eq:cond_tail}. Notice incidentally that,\nthough slightly biased (see Assertion $(i)$ of Theorem \\ref{thm:final}), the statistic\n\\eqref{eq:HT2} is commonly used as an estimator of $S_{N}$, insofar as the\nparameters $p_{i}$'s are readily available from the canonical representation\nof $\\boldsymbol{\\epsilon} _{N}^{\\ast}$, whereas the computation of the\n$\\pi_{i}$'s is much more complicated. One may refer to \\cite{CDL94} for\npractical algorithms dedicated to this task. 
Hence, Theorem \\ref{thm:rejective} is of practical interest to build non asymptotic confidence intervals for the total $S_N$.\n\\medskip\n\n\\noindent {\\bf Asymptotic variance.} Recall that $d_{N}=\\sum_{i=1}^{N}p_{i}(1-p_{i})$ is the variance\n$Var(\\sum_{i=1}^{N}\\epsilon_{i})$ of the size of the Poisson plan\n$\\boldsymbol{\\epsilon} _{N}$ and set\n\\[\n\\theta_{N}=\\frac{\\sum_{i=1}^{N}x_{i}(1-p_{i})}{d_{N}}.\n\\]\nAs explained in \\cite{BCC2013}, the quantity $\\theta_{N}$ is the coefficient of the linear regression relating $\\sum_{i=1}^{N}\\frac{\\epsilon_{i}}{p_{i}}x_{i}-S_{N}$ to the centered sample\nsize $\\sum_{i=1}^{N}(\\epsilon_{i}-p_{i})$. We may thus write\n\\[\n\\sum_{i=1}^{N}\\frac{\\epsilon_{i}}{p_{i}}x_{i}-S_{N}=\\theta_{N}\\times \\sum_{i=1}^{N}(\\epsilon_{i}-p_{i})+r_{N},\n\\]\nwhere the residual $r_{N}$ is orthogonal to $\\sum_{i=1}^{N}(\\epsilon_{i}-p_{i})$. Hence, we have the following decomposition\n\\begin{equation}\\label{eq:Poisson_var}\nVar\\left( \\sum_{i=1}^{N}\\frac{\\epsilon_{i}}{p_{i}}x_{i}\\right) =\\sigma^2_{N}+\\theta_{N}^{2}d_{N},\n\\end{equation}\nwhere \n\\begin{equation}\\label{eq:Asympt_Var}\n\\sigma^2_{N}=Var\\left( \\sum_{i=1}^{N}(\\epsilon_{i}-p_{i})\\left(\n\\frac{x_{i}}{p_{i}}-\\theta_{N}\\right) \\right) \n\\end{equation}\n is the asymptotic variance\nof the statistic $\\widehat{S}_{\\boldsymbol{p} _{N}}^{\\boldsymbol{\\epsilon} _{N}^{\\ast}}$, see \\cite{Hajek64}. In other words,\nthe variance reduction resulting from the use of a rejective sampling plan instead of a Poisson plan is\nequal to $\\theta_{N}^{2}d_{N}$, and can be very large in practice. 
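Under Poisson sampling every term of the decomposition \\eqref{eq:Poisson_var} admits a closed form, so the identity can be checked numerically; the sketch below uses made-up values of $x_i$ and $p_i$ (in this toy example the correction $\\theta_N^2 d_N$ accounts for most of the Poisson variance).

```python
# Check of the decomposition Var(sum eps_i x_i / p_i) = sigma_N^2 + theta_N^2 d_N,
# using the closed forms valid under Poisson sampling (independent eps_i's).
x = [2.0, 5.0, 1.0, 7.0, 4.0]          # hypothetical population values
p = [0.5, 0.8, 0.3, 0.9, 0.6]          # hypothetical Poisson parameters

d_N = sum(pi * (1 - pi) for pi in p)
theta_N = sum(xi * (1 - pi) for xi, pi in zip(x, p)) / d_N

var_poisson = sum(xi ** 2 * (1 - pi) / pi for xi, pi in zip(x, p))
sigma2_N = sum(pi * (1 - pi) * (xi / pi - theta_N) ** 2 for xi, pi in zip(x, p))

assert abs(var_poisson - (sigma2_N + theta_N ** 2 * d_N)) < 1e-9
print(sigma2_N, theta_N ** 2 * d_N)    # sigma2_N is much smaller here
```

The cross term in the expansion vanishes because $\\sum_{i=1}^N p_i(1-p_i)(x_i\/p_i-\\theta_N)=0$ by the very definition of $\\theta_N$, which is what the assertion verifies numerically.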
A sharp Bernstein type probability inequality for $\\widehat{S}_{\\boldsymbol{p} _{N}}^{\\boldsymbol{\\epsilon} _{N}^{\\ast}}$ should thus involve $\\sigma^2_N$ rather than the Poisson variance $Var( \\sum_{i=1}^{N}(\\epsilon_{i}\/p_{i})x_{i})$.\nUsing the fact that $\\sum_{i=1}^{N}(\\epsilon_{i}-p_{i})=0$ on the event\n$\\{\\sum_{i=1}^{N}\\epsilon_{i}=n\\}$, we may now write:\n\\begin{align}\n\\mathbb{P}\\left\\{ \\widehat{S}_{\\boldsymbol{p} _{N}}^{\\boldsymbol{\\epsilon}\n_{N}^{\\ast}}-S_{N}>t\\right\\} & =\\mathbb{P}\\left\\{ \\sum_{i=1}^{N}\\frac{\\epsilon_{i}}{p_{i}}x_{i}-S_{N}>t\\mid\\sum_{i=1}^{N}\\epsilon_{i}=n\\right\\} \\nonumber\\\\\n& =\\frac{\\mathbb{P}\\left\\{ \\sum_{i=1}^{N}(\\epsilon_{i}-p_{i})\\frac{x_{i}}{p_{i}}>t,\\;\\sum_{i=1}^{N}\\epsilon_{i}=n\\right\\} }{\\mathbb{P}\\left\\{\n\\sum_{i=1}^{N}\\epsilon_{i}=n\\right\\} }\\nonumber\\\\\n& =\\frac{\\mathbb{P}\\left\\{ \\sum_{i=1}^{N}(\\epsilon_{i}-p_{i})\\left(\n\\frac{x_{i}}{p_{i}}-\\theta_{N}\\right) >t,\\;\\sum_{i=1}^{N}\\epsilon_{i}=n\\right\\} }{\\mathbb{P}\\left\\{ \\sum_{i=1}^{N}\\epsilon_{i}=n\\right\\} }.\\label{eq:ratio}\n\\end{align}\nBased on the\nobservation that the random variables $\\sum_{i=1}^{N}(\\epsilon_{i}-p_{i})(x_{i}\/p_{i}-\\theta_{N})$ and $\\sum_{i=1}^{N}(\\epsilon_{i}-p_{i})$ are\nuncorrelated, Eq. \\eqref{eq:ratio} thus makes it possible to establish directly the CLT $\\sigma_N^{-1}(\\widehat{S}_{\\boldsymbol{p} _{N}}^{\\boldsymbol{\\epsilon} _{N}^{\\ast}}-S_{N})\\Rightarrow \\mathcal{N}(0,1)$,\nprovided that $d_{N}\\rightarrow+\\infty$ as $N\\rightarrow+\\infty$, simplifying\nasymptotically the ratio, see \\cite{Hajek64}. 
Hence, the asymptotic variance of $\\widehat{S}_{\\boldsymbol{p} _{N}}^{\\boldsymbol{\\epsilon} _{N}^{\\ast}}-S_{N}$ is the variance $\\sigma^2_N$ of the quantity $\\sum_{i=1}^{N}(\\epsilon_{i}-p_{i})(x_{i}\/p_{i}-\\theta_{N})$, which is less than the Poisson variance \\eqref{eq:Poisson_var} of the HT estimate, since it eliminates the variability due to the sample size. We also point out that Lemma \\ref{lem:bias} proved in the Appendix section straightforwardly shows that the ``variance term'' $\\sum_{i=1}^Nx_i^2(1-\\pi_i)\/\\pi_i$ involved in the bound stated in Theorem \\ref{thm:neg} is always larger than $(1+6\/d_N)^{-1}\\sum_{i=1}^Nx_i^2(1-p_i)\/p_i$.\n\\medskip\n\nThe desired result here is non\nasymptotic and accurate exponential bounds are required for both the numerator\nand the denominator of \\eqref{eq:ratio}. It is proved in \\cite{Hajek64} (see\nLemma 3.1 therein) that, as $N\\rightarrow+\\infty$:\n\\begin{equation}\n\\mathbb{P}\\left\\{ \\sum_{i=1}^{N}\\epsilon_{i}=n\\right\\} =(2\\,\\pi\\,d_{N})^{-1\/2}\\,(1+o(1)).\\label{local}\n\\end{equation}\nAs shall be seen in the proof of the theorem stated below, the approximation\n\\eqref{local} can be refined by using a local Berry-Esseen bound or the\nresults in \\cite{Deheuvels}, and we thus essentially need to establish an\nexponential bound for the numerator with a constant of order $d_{N}^{-1\/2}$,\nsharp enough so as to simplify the resulting ratio bound and cancel off the\ndenominator. We shall prove that this can be achieved by using a similar\nargument as that considered in \\cite{BERCLEM2010} for establishing an accurate\nexponential bound for i.i.d. $1$-lattice random vectors, based on a device\nintroduced in \\cite{Talagrand95} for refining Hoeffding's inequality.\n\n\\begin{theorem}\n\\label{thm:rejective}Let $N\\geq1$. 
Suppose that $\\boldsymbol{\\epsilon}_{N}^{\\ast}$ is a rejective scheme of size $n\\leq N$ with canonical parameter\n$\\boldsymbol{p} _{N}=(p_{1},\\;\\ldots,\\;p_{N})\\in]0,\\;1[^{N}$. Then, there\nexist universal constants $C$ and $D$ such that we have for all $t>0$ and\nfor all $N\\geq1$,\n\\begin{align*}\n\\mathbb{P}\\left\\{ \\widehat{S}_{\\boldsymbol{p} _{N}}^{\\boldsymbol{\\epsilon}\n_{N}^{\\ast}}-S_{N}>t\\right\\} & \\leq C\\exp\\left( -\\frac{\\sigma^2_{N}}{\\left( \\max_{1\\leq j\\leq N}\\frac{|x_{j}|}{p_{j}}\\right) ^{2}}H\\left(\n\\frac{t\\max_{1\\leq j\\leq N}\\frac{|x_{j}|}{p_{j}}}{\\sigma^2_{N}}\\right) \\right) \\\\\n& \\leq C\\exp\\left( -\\frac{t^{2}}{2\\left( \\sigma^2_{N}+\\frac{1}{3}t\\max_{1\\leq j\\leq N}\\frac{|x_{j}|}{p_{j}}\\right)\n}\\right) ,\n\\end{align*}\nas soon as $\\min\\{d_{N},\\;d_{N}^{\\ast}\\}\\geq1$ and $d_N\\geq D$.\n\\end{theorem}\n\nA (largely overestimated) value of the constant $C$ can be deduced from a careful\nexamination of the proof given below. Before we detail it, we point out that\nthe exponential bound in Theorem \\ref{thm:rejective} involves the asymptotic variance of \\eqref{eq:HT2}, in contrast to the bounds\nobtained by exploiting the \\textit{negative association} property of the\n$\\epsilon_{i}^{\\ast}$'s.\n\n\\begin{remark}{\\sc (SWOR (bis))} We underline that, in the particular case of sampling without replacement (\\textit{i.e.} when $p_i=\\pi_i=n\/N$ for $1\\leq i\\leq N$), the Bernstein type exponential inequality stated above provides a control of the tail similar to that obtained in \\cite{BardenetMaillard}, see Theorem 2 therein, with $k=n$. 
In this specific situation, we have $d_N=n(1-n\/N)$ and $\\theta_N=S_N\/n$, so that formula \\eqref{eq:Asympt_Var} then becomes $$\n\\sigma_N^2=\\left(1-\\frac{n}{N}\\right)\\frac{N^2}{n}\\left\\{ \\frac{1}{N}\\sum_{i=1}^Nx_i^2 -\\left(\\frac{1}{N}\\sum_{i=1}^N x_i\\right)^2 \\right\\}.\n$$\nThe control induced by Theorem \\ref{thm:rejective} is actually slightly better than that given by Theorem 2 in \\cite{BardenetMaillard}, insofar as the factor $(1-n\/N)$ is involved in the variance term, rather than $(1-(n-1)\/N)$, which is crucial when considering situations where $n$ gets close to $N$ (see the discussion preceding Proposition 2 in \\cite{BardenetMaillard}).\n\\end{remark}\n\n\\begin{proof}\nWe first introduce additional notations.\nSet $Z_{i}=(\\epsilon _{i}-p_{i})(x_{i}\/p_{i}-\\theta _{N})$ and $m_{i}=\\epsilon _{i}-p_{i}$ for $1\\leq i\\leq N$ and, for convenience, consider the standardized variables given by\n\\begin{equation*}\n\\mathcal{Z}_{N}=n^{1\/2}\\frac{1}{N}\\sum_{1\\leq i\\leq N}Z_{i} \\text{ and }\n\\mathcal{M}_{N}=d_N^{-1\/2}\\sum_{1\\leq i\\leq N}m_{i}.\n\\end{equation*}\nAs previously announced, the proof is based on Eq. \\eqref{eq:ratio}. The lemma below first provides a sharp lower bound for the denominator, $\\mathbb{P}\\left\\{ \\mathcal{M}_{N}=0\\right\\}$ with the notations above. As shown in the proof given in the Appendix section, it can be obtained by applying the local Berry-Esseen bound established in \\cite{Deheuvels}\nfor sums of independent, possibly non identically distributed, Bernoulli random variables.\n\\begin{lemma}\n\\label{lem:denominator}\nSuppose that Theorem \\ref{thm:rejective}'s assumptions are fulfilled. Then, there exist universal constants $C_1$ and $D$ such that: $\\forall N\\geq 1$,\n\\begin{equation}\n\\mathbb{P}\\{ \\mathcal{M}_N=0 \\}\\geq C_1\\frac{1}{\\sqrt{d_N}},\n\\end{equation}\nprovided that $d_N\\geq D$.\n\\end{lemma}\n\nThe second lemma gives an accurate upper bound for the numerator. 
Its proof can be found in the Appendix section.\n\\begin{lemma}\n\\label{lem:numerator}\nSuppose that Theorem \\ref{thm:rejective}'s assumptions are fulfilled. Then, we have for all $x\\geq 0$ and for all $N\\geq 1$ such that $\\min\\{d_N,\\; d_N^*\\}\\geq 1$:\n\\begin{multline*}\n\\mathbb{P}\\left\\{\\mathcal{Z}_{N}\\geq x,\\;\\mathcal{M}_{N}=0\\right\\}\\leq C_2 \\frac{1}{\\sqrt{d_N}}\\times \\\\\\exp\\left( -\\frac{Var\\left( \\sum_{i=1}^NZ_i \\right)}{\\left(\\max_{1\\leq j\\leq N}\\frac{\\vert x_j\\vert}{p_j} \\right)^2}H\\left( \\frac{N}{\\sqrt{n}}\\frac{x\\max_{1\\leq j\\leq N}\\frac{\\vert x_j\\vert}{p_j}}{Var\\left( \\sum_{i=1}^NZ_i \\right)}\n\\right) \\right)\\\\\n\\leq C_2 \\frac{1}{\\sqrt{d_N}}\\exp\\left( -\\frac{N^2x^2\/n}{2\\left(Var\\left( \\sum_{i=1}^NZ_i \\right)+\\frac{1}{3}\\frac{N}{\\sqrt{n}}x\\max_{1\\leq j\\leq N}\\frac{\\vert x_j\\vert}{p_j} \\right)}\n\\right),\n\\end{multline*}\nwhere $C_2<+\\infty$ is a universal constant.\n\\end{lemma}\n\nThe bound stated in Theorem \\ref{thm:rejective}\nnow directly results from Eq. \\eqref{eq:ratio}\ncombined with Lemmas \\ref{lem:denominator} and \\ref{lem:numerator}, with $x=t\\frac{\\sqrt{n}}{N}$. $\\square$\n\\end{proof} \n\n\\bigskip\n\nEven if the computation of the biased statistic \\eqref{eq:HT2} is much more tractable from a practical perspective, we now come back to the study of the HT total estimate \\eqref{eq:HT}. The first part of the result stated below provides an estimation of the bias induced by replacing \\eqref{eq:HT} with \\eqref{eq:HT2}, whereas its second part finally gives a tail bound for \\eqref{eq:HT}.\n\n\\begin{theorem}\\label{thm:final} Suppose that the assumptions of Theorem \\ref{thm:rejective} are fulfilled and set $M_N=(6\/d_N)\\sum_{i=1}^N\\vert x_i\\vert \/\\pi_i$. 
The following assertions hold true.\n\\begin{itemize}\n\\item[(i)] For all $N\\geq 1$, we almost-surely have:\n\\begin{equation*}\n\\left\\vert \\widehat{S}^{\\boldsymbol{\\epsilon}_N^*}_{\\boldsymbol{\\pi}_N } - \\widehat{S}^{\\boldsymbol{\\epsilon}_N^*}_{\\mathbf{p}_N } \\right\\vert \\leq M_N .\n\\end{equation*}\n\\item[(ii)] There exist universal\nconstants $C$ and $D$ such that, for all $t>M_{N}$ and for all $N\\geq1$, we have:\n\\begin{multline*}\n\\mathbb{P}\\left\\{ \\widehat{S}^{\\boldsymbol{\\epsilon}^*_N}_{\\boldsymbol{\\pi}_N}-S_{N}>t \\right\\} \n\\leq \\\\ C\\exp\\left( -\\frac{\\sigma_{N}^2}{\\left( \\max_{1\\leq j\\leq\nN}\\frac{|x_{j}|}{p_{j}}\\right) ^{2}}H\\left( \\frac{N}{\\sqrt{n}}\\frac{(t-M_{N})\\max_{1\\leq j\\leq N}\\frac{|x_{j}|}{p_{j}}}{\\sigma^2_{N}}\\right) \\right) \\\\\n \\leq C\\exp\\left( -\\frac{N^{2}(t-M_{N})^{2}\/n}{2\\left( \\sigma^2_{N}+\\frac{1}{3}\\frac{N}{\\sqrt{n}}(t-M_{N})\\max_{1\\leq j\\leq N}\\frac{|x_{j}|}{p_{j}}\\right) }\\right) ,\n\\end{multline*}\nas soon as $\\min\\{d_{N},\\;d_{N}^{\\ast}\\}\\geq1$ and $d_N\\geq D$.\n\\end{itemize}\n\\end{theorem}\n\nThe proof is given in the Appendix section. 
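As a numerical illustration of the SWOR remark following Theorem \\ref{thm:rejective}, the closed-form expression for $\\sigma_N^2$ can be compared with the empirical variance of the HT estimate under simple random sampling without replacement. The sketch below uses an arbitrary toy population; the exact SWOR variance differs from the asymptotic $\\sigma_N^2$ only by a factor $N/(N-1)$ (population variance with denominator $N$ versus $N-1$), which the simulation should reproduce.

```python
import random

random.seed(42)
N, n = 100, 30
x = [float(i % 7) for i in range(N)]   # arbitrary toy population
S_N = sum(x)

# Closed-form asymptotic variance for SWOR (pi_i = n/N):
mean = S_N / N
var_pop = sum(xi ** 2 for xi in x) / N - mean ** 2
sigma2 = (1 - n / N) * N ** 2 / n * var_pop

# Empirical variance of the HT estimate over repeated SWOR draws.
reps = 20000
dev2 = 0.0
for _ in range(reps):
    s = random.sample(range(N), n)
    ht = sum(x[i] for i in s) * N / n  # HT estimate, all weights equal to n/N
    dev2 += (ht - S_N) ** 2
print(dev2 / reps / sigma2)            # ratio close to N/(N-1) ~ 1.01
```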
We point out that the bounds above take a simpler form for nearly uniform weights, \textit{i.e.} when $c_1n\/N\leq\pi_i\leq c_2n\/N$ for all $i\in\{1,\; \ldots,\; N \}$ with $0<c_1\leq c_2<+\infty$.\n\nThe lemma below shows that such tail bounds extend to any sampling plan that is close, in the total variation sense, to the plan for which they were established.\n\\begin{lemma}\\label{lem:ext} Let $R_N$ and $\widetilde{R}_{N}$ be two sampling plans on $\mathcal{P}(\mathcal{I}_N)$, with associated sampling schemes $\boldsymbol{\epsilon}_{N}$ and $\widetilde{\boldsymbol{\epsilon}}_{N}$ respectively. Then, for all $N\geq 1$ and all $t\in\mathbb{R}$:\n\\begin{eqnarray*}\n\left| \mathbb{P}\left\{ \widehat{S}_{\mathbf{p} _{N}\n}^{\boldsymbol{\epsilon} _{N}}-S_{N}>t\right\} -\mathbb{P}\left\{ \widehat{S}_{\mathbf{p} _{N}}\n^{\widetilde{\boldsymbol{\epsilon}}_{N}}-S_{N}>t\right\} \right| \n& \leq &\Vert\widetilde{R}_{N}-R_{N}\Vert_{1} \\ &\leq&\sqrt{2 D_{KL}(R_{N}\vert\vert\widetilde\n{R}_{N})}.\n\\end{eqnarray*}\n\\end{lemma}\n\\begin{proof} The first bound immediately results from the following elementary observation:\n\\begin{multline*}\n \mathbb{P}\left\{ \widehat{S}_{\mathbf{p} _{N}\n}^{\boldsymbol{\epsilon} _{N}}-S_{N}>t\right\} -\mathbb{P}\left\{ \widehat{S}_{\mathbf{p} _{N}}\n^{\widetilde{\boldsymbol{\epsilon}}_{N}}-S_{N}>t\right\} =\\\\\n\sum_{s\in \mathcal{P}(\mathcal{I}_N)}\mathbb{I}\{\sum_{i\in s}x_i\/p_i-S_N >t\}\times \left(R_N(s) -\widetilde{R}_N(s) \right),\n\\end{multline*}\nwhile the second bound is the classical Pinsker inequality.\n$\square$\n\\end{proof}\n\medskip\n\nIn practice, $R_{N}$ is typically the rejective sampling plan\ninvestigated in the previous subsection (or possibly the Poisson sampling scheme) and $\widetilde{R}_N$ a sampling plan whose Kullback-Leibler divergence from $R_N$ asymptotically vanishes; the rate at which $D_{KL}(R_{N}\vert\vert\widetilde{R}_{N})$ decays to zero has been investigated in \cite{Ber98} when $\widetilde{R}_N$ corresponds to Rao-Sampford, successive sampling or Pareto sampling\nunder appropriate regularity conditions (see also \cite{BTL06}). 
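As a purely illustrative aside, the sampling designs studied here are easy to simulate: a rejective (conditional Poisson) sample is a Poisson sample conditioned on its realized size, and the Horvitz--Thompson statistic inverse-weights the selected units. The Python sketch below is our own minimal implementation (the naive acceptance--rejection loop and all function names are assumptions of this sketch, not the authors' code), intended only to make the objects $\widehat{S}$ and the canonical representation concrete.

```python
import random

def poisson_sample(p):
    # Poisson sampling: include each unit i independently with probability p[i].
    return [i for i, p_i in enumerate(p) if random.random() < p_i]

def rejective_sample(p, n):
    # Rejective (conditional Poisson) sampling: draw Poisson samples and
    # keep the first one whose realized size equals the target size n.
    while True:
        s = poisson_sample(p)
        if len(s) == n:
            return s

def horvitz_thompson(x, weights, s):
    # Horvitz-Thompson estimate of the total sum(x): each sampled unit is
    # inverse-weighted by its inclusion probability.
    return sum(x[i] / weights[i] for i in s)
```

Feeding the Poisson weights $p_i$ to `horvitz_thompson` gives the tractable biased statistic, while feeding the first order inclusion probabilities $\pi_i$ gives the HT estimator itself; averaging either over many draws gives a quick empirical check of the deviation bounds.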
Lemma \\ref{lem:ext} combined with Theorem \\ref{thm:rejective} or Theorem \\ref{thm:final} permits then to obtain upper bounds for the tail probabilities $\\mathbb{P}\\{ \\widehat{S}_{\\mathbf{p} _{N}\n^{\\widetilde{\\boldsymbol{\\epsilon}}_{N}}-S_{N}>t\\} $.\n\n\\section{Conclusion}\\label{sec:concl}\nIn this article, we proved Bernstein-type tail bounds to quantify the deviation between a total and its Horvitz-Thompson estimator when based on conditional Poisson sampling, extending (and even slightly improving) results proved in the case of basic sampling without replacement. The original proof technique used to establish these inequalities relies on expressing the deviation probablities related to a conditional Poisson scheme as conditional probabilities related to a Poisson plan. This permits to recover tight exponential bounds, involving the asymptotic variance of the Horvitz-Thompson estimator. Beyond the fact that rejective sampling is of prime importance in the practice of survey sampling, extension of these tail bounds to sampling schemes that can be accurately approximated by rejective sampling in the total variation sense is also discussed. \n\n\n\\section*{Appendix - Technical Proofs}\n\n\\subsection*{Proof of Lemma \\ref{lem:denominator}}\n\nFor clarity, we first recall the following result.\n\n\\begin{theorem}\n\\label{thm:denominator}(\\cite{Deheuvels}, Theorem 1.3) Let $(Y_{j,n})_{1\\leq\nj\\leq n}$ be a triangular array of independent Bernoulli random variables with\nmeans $q_{1,n},\\; \\ldots,\\; q_{n,n}$ in $(0,1)$ respectively. Denote by\n$\\sigma^{2}_{n}=\\sum_{i=1}^{n}q_{i,n}(1-q_{i,n})$ the variance of the sum\n$\\Sigma_{n}=\\sum_{i=1}^{n}Y_{i,n}$ and by $\\nu_{n}=\\sum_{i=1}^{n}q_{i,n}$ its\nmean. 
Considering the cumulative distribution function (cdf) $F_{n}\n(x)=\mathbb{P}\{ \sigma_{n}^{-1}(\Sigma_{n}-\nu_{n} )\leq x \}$, we have:\n$\forall n\geq1$,\n\[\n\sup_{k\in\mathbb{Z}}\left\vert F_{n}(x_{n,k})-\Phi(x_{n,k})-\frac{1-x_{n,k}\n^{2}}{6\sigma_{n}}\phi(x_{n,k})\left\{ 1-\frac{2\sum_{i=1}^{n}q^{2}\n_{i,n}(1-q_{i,n})}{\sigma_{n}^{2}} \right\} \right\vert \leq\frac{C}\n{\sigma^{2}_{n}},\n\]\nwhere $x_{n,k}=\sigma_{n}^{-1}(k-\nu_{n}+1\/2)$ for any $k\in\mathbb{Z}$,\n$\Phi(x)=(2\pi)^{-1\/2}\int_{-\infty}^{x}\exp(-z^{2}\/2)dz$ is the cdf of the\nstandard normal distribution $\mathcal{N}(0,1)$, $\phi(x)=\Phi^{\prime}(x)$\nand $C<+\infty$ is a universal constant.\n\end{theorem}\n\nObserve first that we can write:\n\begin{multline*}\n\mathbb{P}\left\{ \mathcal{M}_{N}=0\right\} =\mathbb{P}\left\{\sum_{i=1}^{N}(\epsilon\n_{i}-p_{i})\in]-1\/2,1\/2]\right\}\\\n=\mathbb{P}\left\{ d_{N}^{-1\/2}\sum_{i=1}^{N}\nm_{i}\leq \frac1 2 d_{N}^{-1\/2}\right\} -\mathbb{P}\left\{ d_{N}^{-1\/2}\n\sum_{i=1}^{N}m_{i}\leq-\frac1 2 d_{N}^{-1\/2}\right\} .\n\end{multline*}\nApplying Theorem \ref{thm:denominator} to bound the first term of this\ndecomposition (with $k=\nu_{n}$ and $x_{n,k}=1\/(2\sqrt{d_{N}})$) directly yields\nthat\n\begin{multline*}\n \mathbb{P}\left\{ \frac{\sum_{i=1}^{N}m_{i}}{\sqrt{d_N}}\leq\n\frac{1}{2\sqrt{d_N}}\right\} \geq \Phi\left(\frac{1}{2\sqrt{d_{N}}}\right)\\+\frac{1-\frac{1}{4d_{N}}}\n{6\sqrt{d_{N}}}\phi\left(\frac{1}{2\sqrt{d_{N}}}\right)\left\{ 1-\frac{2\sum_{i=1}^{N}p_{i}\n^{2}(1-p_{i})}{d_{N}}\right\} - \frac{C}{d_{N}}.\n\end{multline*}\nFor the second term, its application with $k=\nu_{n}-1$ entails that:\n\begin{multline*}\n-\mathbb{P}\left\{ \frac{1}{\sqrt{d_N}}\sum_{i=1}^{N}m_{i}\leq\n-\frac{1}{2\sqrt{d_N}} \right\} \geq 
-\\Phi\\left(-\\frac{1}{2\\sqrt{d_{N}}}\\right)\\\\-\\frac{1-\\frac{1}{4d_{N}}}{6\\sqrt{d_{N}}}\\phi\\left(-\\frac{1}{2\\sqrt{d_{N}}}\\right) \n\\left\\{ 1-\\frac{2\\sum_{i=1}^{n}p_{i\n^{2}(1-p_{i})}{d_{N}}\\right\\} -\\frac{C}{d_{N}}.\n\\end{multline*}\nIf $d_N\\geq 1$, it follows that\n\\begin{eqnarray*}\n\\mathbb{P}\\left\\{ \\mathcal{M}_{N}=0\\right\\} &\\geq &\\Phi\\left(\\frac{1}{2\\sqrt{d_{N}}\n\\right)-\\Phi\\left(-\\frac{1}{2\\sqrt{d_{N}}}\\right)- \\frac{2C}{d_{N}}\\\\\n&= & 2\\int_{0}^{\\frac{1}{2\\sqrt{d_N}}}\\phi(t)dt- \\frac{2C}{d_{N}}\n\\geq \\left( \\phi(1\/2)-\\frac{2C}{\\sqrt{d_N}}\\right)\\frac{1}{\\sqrt{d_N}}.\n\\end{eqnarray*}\n We\nthus obtain the desired result for $d_{N}\\geq D$, where $D>0$ is any constant strictly larger than $4C^2\\phi^2(1\/2)$.\n\n\\subsection{Proof of Lemma \\ref{lem:numerator}}\n\nObserve that\n\\begin{equation}\nVar \\left( \\sum_{i=1}^{N} Z_{i}\\right) =\\sum_{i=1}^{N}Var\\left(\nZ_{i}\\right) =Var \\left( \\sum_{i=1}^{N} \\epsilon_{i}^{*} \\frac{x_{i}}{p_{i\n}\\right) =Var \\left( \\widehat{S}^{\\boldsymbol{\\epsilon} ^{*}_{N\n}_{\\boldsymbol{p} _{N}} \\right) .\n\\end{equation}\nLet $\\psi_{N}(u)=\\log\\mathbb{E}^{*}[\\exp(\\langle u,(\\mathcal{Z}_{N\n,\\;\\mathcal{M}_{N})\\rangle)]$, $u=(u_{1},u_{2})\\in\\mathbb{R}^{+\n\\times\\mathbb{R}$, be the log-Laplace of the $1$-lattice random vector\n$(\\mathcal{Z}_{N},\\;\\mathcal{M}_{N})$, where $\\langle.,\\; .\\rangle$ is the\nusual scalar product on $\\mathbb{R}^{2}$. Denote by $\\psi_{N}^{(1)}(u)$ and\n$\\psi_{N}^{(2)}(u)$ its gradient and its Hessian matrix respectively. 
Consider\nnow the conditional probability measure $\mathbb{P}^{*}_{u,N}$ given\n$\mathcal{D}_{N}$ defined by the Esscher transform\n\begin{equation}\nd\mathbb{P}_{u,N}=\exp\left( \left\langle u,(\mathcal{Z}_{N},\mathcal{M}\n_{N})\right\rangle -\psi_{N}(u)\right) d\mathbb{P}.\n\end{equation}\nThe $\mathbb{P}_{u,N}$-expectation is denoted by $\mathbb{E}_{u,N}[.]$, the\ncovariance matrix of a $\mathbb{P}_{u,N}$-square integrable random vector $Y$\nunder $\mathbb{P}_{u,N}$ by $Var_{u,N}(Y)$. With $x=t\sqrt{n}\/N$, by\nexponential change of probability measure, we can rewrite the numerator of\n\eqref{eq:ratio} as\n\begin{multline}\n\mathbb{P}\left\{ \mathcal{Z}_{N}\geq x,\mathcal{M}_{N}=0\right\}\n=\mathbb{E}_{u,N}\left[ e^{\psi_{N}(u)-\left\langle u,(\mathcal{Z}\n_{N},\mathcal{M}_{N})\right\rangle }\mathbb{I}\{\mathcal{Z}_{N}\geq\nx,\mathcal{M}_{N}=0\}\right] \nonumber\\\n=H(u)\mathbb{E}_{u,N}\left[ e^{-\left\langle u,(\mathcal{Z}_{N}\n-x,\mathcal{M}_{N})\right\rangle }\mathbb{I}\{\mathcal{Z}_{N}\geq\nx,\mathcal{M}_{N}=0\}\right] ,\n\end{multline}\nwhere we set $H(u)=\exp(-\left\langle u,(x,0)\right\rangle +\psi_{N}(u))$.\nNow, as $\psi_{N}$ is convex, the point defined by\n\[\nu^{\ast}=(u_{1}^{*},0)=\arg\sup_{u\in\mathbb{R}_{+}\times\{0\}}\{\langle\nu,(x,0)\rangle-\psi_{N}(u)\}\n\]\nis such that $\psi_{N}^{(1)}(u^{\ast})=(x,0)$. Since $\mathbb{E}\n[\exp(\left\langle u,(\mathcal{Z}_{N},\mathcal{M}_{N})\right\rangle)]=\exp(\psi_{N}(u))$, by\ndifferentiating one gets\n\[\n\mathbb{E}[e^{\left\langle u^{\ast},(\mathcal{Z}_{N},\mathcal{M}_{N})\right\rangle}(\mathcal{Z}\n_{N},\;\mathcal{M}_{N})]=\psi_{N}^{(1)}(u^{\ast})e^{\psi_{N}(u^{\ast}\n)}=(x,0)e^{\psi_{N}(u^{\ast})},\n\]\nso that $\mathbb{E}_{u^{\ast},N}[(\mathcal{Z}_{N},\mathcal{M}_{N})]=(x,0)$ and\n$Var_{u^{\ast},N}[(\mathcal{Z}_{N},\mathcal{M}_{N})]=\psi_{N}^{(2)}(u^{\ast}\n)$. 
Choosing $u=u^{*}$, integration by parts combined with straightforward\nchanges of variables yields\n\[\n\mathbb{E}_{u^{*},N}[e^{-\left\langle u^{*},(\mathcal{Z}_{N}-x,\mathcal{M}\n_{N})\right\rangle }\mathbb{I}\{\mathcal{Z}_{N}\geq x,\mathcal{M}\n_{N}=0\}] \leq\mathbb{P}_{u^{*},N}\left\{ \mathcal{M}\n_{N}=0\right\} .\n\]\nHence, we have the bound:\n\begin{equation}\n\label{eq:prod}\mathbb{P}\left\{ \mathcal{Z}_{N}\geq x,\mathcal{M}\n_{N}=0\right\} \leq H(u^{*})\times\mathbb{P}_{u^{*},N}\left\{ \mathcal{M}\n_{N}=0\right\} .\n\end{equation}\nWe shall bound each factor involved in \eqref{eq:prod} separately. We start\nwith bounding $H(u^{*})$, which essentially boils down to bounding\n$\mathbb{E}[e^{\langle u^{*},(\mathcal{Z}_{N},\mathcal{M}_{N})\rangle}]$.\n\n\begin{lemma}\n\label{lem:factor1} Under Theorem \ref{thm:rejective}'s assumptions, we have:\n\begin{align}\nH(u^{*}) & \leq\exp\left( -\frac{Var\left( \sum_{i=1}^{N}Z_{i} \right)\n}{\left( \max_{1\leq j\leq N}\vert x_{j}\vert\/p_{j} \right) ^{2}}h\left(\n\frac{N}{\sqrt{n}}\frac{x\max_{1\leq j\leq N}\vert x_{j}\vert\/p_{j}\n}{Var\left( \sum_{i=1}^{N}Z_{i} \right) } \right) \right) \\\n& \leq\exp\left( -\frac{N^{2}x^{2}\/n}{2\left( Var\left( \sum_{i=1}\n^{N}Z_{i} \right) +\frac{1}{3}\frac{N}{\sqrt{n}}x \max_{1\leq j\leq N}\vert\nx_{j}\vert\/p_{j} \right) } \right) ,\n\end{align}\nwhere $h(x)=(1+x)\log(1+x)-x$ for $x\geq0$.\n\end{lemma}\n\n\begin{proof}\nUsing the standard argument leading to the Bennett-Bernstein bound, observe that: $\forall i\in \{1,\; \ldots,\; N \}$, $\forall u_1> 0$,\n\begin{equation*}\n\mathbb{E}[e^{u_1Z_i}\n]\leq \exp\left( Var(Z_i)\frac{\exp\left(u_1\max_{1\leq j\leq N}\n\frac{\vert x_j\vert}{p_j} \right) - 1- u_1\max_{1\leq j\leq N}\n\frac{\vert x_j\vert}{p_j}}{\left(\max_{1\leq j\leq N}\frac{\vert x_j\vert}\n{p_j}\right)^2} 
\right),\n\end{equation*}\nsince we $\mathbb{P}\n$-almost surely have $\vert Z_{i}\vert \leq \max_{1\leq j\leq N}\n\vert x_j\vert\/p_j $ for all $i\in \{1,\; \ldots,\; N \}$. Using the independence of the $Z_i$'s, we obtain that: $\forall u_1> 0$,\n\begin{multline*}\n\mathbb{E}[e^{u_1 \mathcal{Z}_N}]\leq \\\n\exp\left( Var\left( \sum_{i=1}\n^NZ_i\right)\frac{\exp\left(\frac{\sqrt{n}}{N}u_1\max_{1\leq j\leq N}\n\frac{\vert x_j\vert}{p_j} \right) - 1- \frac{\sqrt{n}}{N}\nu_1\max_{1\leq j\leq N}\frac{\vert x_j\vert}{p_j}}{\left(\max_{1\leq j\leq N}\n\frac{\vert x_j\vert}{p_j}\right)^2} \right).\n\end{multline*}\n\nThe resulting upper bound for $H((u_1,0))$ is minimized at\n$$\nu_1=\frac{N}\n{\sqrt{n}}\frac{\log \left( 1+\frac{N}{\sqrt{n}}\frac{x\max_{1\leq j\leq N}\n\vert x_j\vert\/p_j}{Var(\sum_{i=1}^NZ_i)} \right)}{\max_{1\leq j\leq N}\n\vert x_j\vert\/p_j},\n$$\nand this yields\n\begin{equation}\nH(u^*)\leq \exp\left( -\frac{Var\left( \sum_{i=1}^NZ_i \right)}\n{\left(\max_{1\leq j\leq N}\vert x_j\vert\/p_j \right)^2}h\left( \frac{N}\n{\sqrt{n}}\frac{x\max_{1\leq j\leq N}\vert x_j\vert\/p_j}{Var\left( \sum_{i=1}\n^NZ_i \right)} \right) \right).\n\end{equation}\n\nUsing the classical\ninequality\n\begin{equation*}\nh(x)\geq \frac{x^{2}\n}{2(1+x\/3)},\text{ for }x\geq 0,\n\end{equation*}\n\nwe also get that\n\begin{equation}\nH(u^*)\leq \exp\left( -\frac{N^2x^2\/n}{2\left(Var\left( \sum_{i=1}\n^NZ_i \right)+\frac{1}{3}\frac{N}{\sqrt{n}}x \max_{1\leq j\leq N}\n\vert x_j\vert\/p_j \right)} \right).\n\end{equation}\n$\square$\n\end{proof}\n \n\n\nWe now prove the lemma stated below, which provides an upper bound for\n$\mathbb{P}_{u^{*},N}\{ \mathcal{M}_{N}=0\}$.\n\n\begin{lemma}\n\label{lem:factor2} Under Theorem \ref{thm:rejective}'s assumptions, there\nexists a universal constant $C^{\prime}$ such that: $\forall 
N\\geq1$,\n\\begin{equation}\n\\mathbb{P}_{u^{*},N}\\left\\{ \\mathcal{M}_{N}=0\\right\\} \\leq C^{\\prime\n}\\frac{1}{\\sqrt{d_{N}}}.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nUnder the probability measure $\\mathbb{P}_{u^*,N\n$, the $\\varepsilon _{i\n$'s are still\nindependent Bernoulli variables, with means now given by\n\\begin{equation*}\n\\pi^* _{i}\\overset{def}{=}\\sum_{s\\in\n\\mathcal{P}(\\mathcal{I}_{N\n)}e^{\\left\\langle u^*,(\\mathcal{Z}_{N}(s)\n\\mathcal{M}_{N\n(s))\\right\\rangle -\\psi _{N}(u^*)}\\mathbb{I}\\left\\{ i\\in\ns\\right\\} R_{N\n(s)>0,\n\\end{equation*}\nfor $i\\in \\{1,\\; \\ldots,\\; N \\}$.\nSince $\\mathbb{E\n_{u^{\\ast },N}[\\mathcal{M}_{N}]=0$, we have $\\sum_{i=1}^{N}\\pi^*_{i\n=n$ and thus\n\\begin{equation*}\nd_{N,u^{\\ast\n}}\\overset{def}{=\nVar_{u^{\\ast },N}\\left(\\sum_{i=1}^{N}\\varepsilon _{i}\\right)=\\sum_{i=1\n^{N}\\pi^* _{i}(1-\\pi ^*_{i})\\leq n.\n\\end{equation*\n\nWe can thus apply the local Berry-Esseen bound established in \\cite{Deheuvels}\nfor sums of independent (and possibly non identically) Bernoulli random variables, recalled in Theorem \\ref{thm:numerator\n.\n\\begin{theorem}\\label{thm:numerator}(\\cite{Deheuvels\n, Theorem 1.2) Let $(Y_{j,n})_{1\\leq j\\leq n\n$ be a triangular array of independent Bernoulli random variables with means $q_{1,n\n,\\; \\ldots,\\; q_{n,n\n$ in $(0,1)$ respectively. Denote by $\\sigma^2_n=\\sum_{i=1}^nq_{i,n\n(1-q_{i,n})$ the variance of the sum $\\Sigma_n=\\sum_{i=1}^nY_{i,n\n$ and by $\\nu_n=\\sum_{i=1}^nq_{i,n\n$ its mean. 
Considering the cumulative distribution function (cdf) $F_n(x)=\mathbb{P}\n\{ \sigma_n^{-1}\n(\Sigma_n-\nu_n )\leq x \}$, we have: $\forall n\geq 1$,\n\begin{equation}\n\sup_x\left(1+\vert x\vert^3 \right)\left\vert F_n(x)-\Phi(x)\right\vert\leq \frac{C}\n{\sigma_n},\n\end{equation}\nwhere $\Phi(x)=(2\pi)^{-1\/2}\int_{-\infty}\n^x\exp(-z^2\/2)dz$ is the cdf of the standard normal distribution $\mathcal{N}\n(0,1)$ and $C<+\infty$ is a universal constant.\n\end{theorem}\n\nApplying twice a pointwise version of the bound recalled above (for $x=0$ and $x=-1\/\sqrt{d_{N,u^*\n}}$), we obtain that\n\begin{multline*}\n\mathbb{P}_{u^*,N}\left\{ \mathcal{M}\n_{N}=0\right\}= \mathbb{P}_{u^*,N}\left\{d_{N,u^{\ast\n}}^{-1\/2}\n\sum_{i=1}^Nm_i\leq 0\right\}\\- \mathbb{P}_{u^*,N}\left\{d_{N,u^{\ast}\n}^{-1\/2} \sum_{i=1}^Nm_i\leq -d_{N,u^{\ast\n}}^{-1\/2}\n\right\}\\\n\leq \frac{2C}{\sqrt{d_{N,u^*}}}+\Phi(0)-\Phi(-d_{N,u^*}\n^{-1\/2})\leq \left(\frac{1}{\sqrt{2\pi}}+2C\right)\frac{1}{\sqrt{d_{N,u^*}}\n},\n\end{multline*}\n\nby the mean value theorem.\nFinally, observe that:\n\begin{multline*}\nd_{N,u^*}\n=\mathbb{E}_{u^*, N}\left[\left(\sum_{i=1}^Nm_i\right)^2 \right]=\mathbb{E}\n\left[ \left(\sum_{i=1}^Nm_i\right)^2\/H(u^*) \right]\\\n\geq \mathbb{E}\n\left[ \left(\sum_{i=1}^Nm_i\right)^2 \right]=d_N,\n\end{multline*}\n\nsince we proved that $H(u^*)\leq 1$. 
Combined with the previous bound, this yields the desired result.\n$\square$\n\end{proof}\n\n \n\nLemmas \ref{lem:factor1} and \ref{lem:factor2} combined with Eq.\n\eqref{eq:prod} lead to the bound stated in Lemma \ref{lem:numerator}.\n\n\n\n\subsection*{Proof of Theorem \ref{thm:final}}\nWe start with proving the preliminary result below.\n\n\begin{lemma}\n\label{lem:bias}Let $\pi_{1},\; \ldots,\; \pi_N$ be the first order\ninclusion probabilities of a rejective sampling of size $n$ with canonical representation characterized by the Poisson weights $p_1,\; \ldots,\; p_N$. Provided that $d_{N}=\sum_{i=1}^{N}p_{i}(1-p_{i}\n)\geq1$, we have: $\forall i\in\{1,\; \ldots,\; N \}$,\n\[\n\left\vert \frac{1}{\pi_{i}}-\frac{1}{p_{i}}\right\vert\leq\frac{6}{d_{N}}\times \frac{1-\pi_{i}}\n{\pi_{i}}.\n\]\n\end{lemma}\n\begin{proof}\nThe proof follows the representation (5.14) on page 1509 of \cite{Hajek64}.\nFor all $i\in \{1,\; \ldots,\; N\}$, we have:\n\begin{eqnarray*}\n\frac{\pi_{i}}{p_{i}}\frac{1-p_{i}}{1-\pi_{i}} &=&\frac{\sum_{s\in \mathcal{P}(\mathcal{I}_N):\; i\in \mathcal{I}_N\setminus\{s \}}P(s)\sum_{h\in s}\frac{1-p_{h}}\n{\sum_{j\in s}(1-p_{j})+(p_{h}-p_{i})}}{ \sum_{s\in \mathcal{P}(\mathcal{I}_N):\; i\in\n\mathcal{I}_N\setminus\{s \}}P(s)}\\\n&=&\frac{\sum_{s:\ i\in \mathcal{I}_N\setminus\{s \}}\nP_N(s)\sum_{h\in s}\frac{1-p_{h}}{\sum_{j\in s}(1-p_{j})\left( 1+\frac\n{(p_{h}-p_{i})}{\sum_{j\in s}(1-p_{j})}\right) }}{\sum_{s:\ i\in \mathcal{I}_N\setminus\{s \}}P_N(s)}.\n\end{eqnarray*}\nNow recall that for any $x\in]-1,1[ $, we have:\n\[\n1-x\leq\frac{1}{1+x}\leq1-x+x^{2}.\n\]\nIt follows that\n\begin{align*}\n\frac{\pi_{i}}{p_{i}}\frac{1-p_{i}}{1-\pi_{i}} & \leq1-\left( \sum_{s:\ i\in\n\mathcal{I}_N\setminus\{s \}}P(s)\right) ^{-1}\sum_{s:\ i\in \mathcal{I}_N\setminus\{s \}}P(s)\sum_{h\in 
s}\\frac{(1-p_{h\n)(p_{h}-p_{i})}{\\left( \\sum_{j\\in s}(1-p_{j})\\right) ^{2}}\\\\\n& +\\left( \\sum_{s:\\ i\\in \\mathcal{I}_N\\setminus\\{s \\}}P(s)\\right) ^{-1}\\sum_{s:\\ i\\in \\mathcal{I}_N\\setminus\\{s \\}\nP(s)\\sum_{h\\in s}\\frac{(1-p_{h})(p_{h}-p_{i})^{2}}{\\left( \\sum_{j\\in\ns}(1-p_{j})\\right) ^{3}\n\\end{align*}\nFollowing now line by line the proof on p. 1510 in \\cite{Hajek64} and noticing that $\\sum_{j\\in s\n(1-p_{j})\\geq1\/2d_{N}$ (see Lemma 2.2 in \\cite{Hajek64}), we have\n\\begin{align*}\n\\left\\vert \\sum_{h\\in s}\\frac{(1-p_{h})(p_{h}-p_{i})}{\\left( \\sum_{j\\in\ns}(1-p_{j})\\right) ^{2}}\\right\\vert & \\leq\\frac{1}{\\left( \\sum_{j\\in\ns}(1-p_{j})\\right) } \\leq\\frac{2}{d_{N}}\n\\end{align*}\nand similarl\n\\begin{align*}\n\\sum_{h\\in s}\\frac{(1-p_{h})(p_{h}-p_{i})^{2}}{\\left( \\sum_{j\\in s\n(1-p_{j})\\right) ^{3}} & \\leq\\frac{1}{\\left( \\sum_{j\\in s}(1-p_{j})\\right)\n^{2}} \\leq\\frac{4}{d_{N}^{2}}.\n\\end{align*}\nThis yieds: $\\forall i\\in\\{1,\\; \\ldots,\\; N \\}$,\n\\[\n1-\\frac{2}{d_{N}}\\leq\\frac{\\pi_{i}}{p_{i}}\\frac{1-p_{i}}{1-\\pi_{i}}\\leq\n1+\\frac{2}{d_{N}}+\\frac{4}{d_{N}^{2}\n\\]\nand\n\\[\np_{i}(1-\\pi_{i})(1-\\frac{2}{d_{N}})\\leq\\pi_{i}(1-p_{i})\\leq p_{i}(1-\\pi\n_{i})\\left(1+\\frac{2}{d_{N}}+\\frac{4}{d_{N}^{2}}\\right),\n\\]\nleading then to\n\\[\n-\\frac{2}{d_{N}}(1-\\pi_{i})p_{i}\\leq\\pi_{i}-p_{i}\\leq p_{i}\\left(1-\\pi_{i\n\\right)\\left(\\frac{2}{d_{N}}+\\frac{4}{d_{N}^{2}}\\right)\n\\]\nand finally to\n\\[\n-\\frac{(1-\\pi_{i})}{\\pi_{i}}\\frac{2}{d_{N}}\\leq\\frac{1}{p_{i}}-\\frac{1\n{\\pi_{i}}\\leq\\frac{(1-\\pi_{i})}{\\pi_{i}}\\left(\\frac{2}{d_{N}}+\\frac{4}{d_{N}^{2}}\\right).\n\\]\nSince $1\/d_{N}^{2}\\leq 1\/d_{N}$ as soon as $d_{N}\\geq1$, the lemma is proved. 
$\square$\n\end{proof}\n\medskip\n\nBy virtue of Lemma \ref{lem:bias}, we obtain that:\n\begin{equation*}\n\left\vert \widehat{S}_{\boldsymbol{\pi}_N}^{\boldsymbol{\epsilon}_N^*}-\widehat{S}_{\boldsymbol{p} _{N}\n}^{\boldsymbol{\epsilon} _{N}^{\ast}}\right\vert\n \leq\frac{6}{d_{N}}\sum_{i=1}^{N}\frac{1}{\pi_{i}}|x_{i}|=M_{N}.\n\end{equation*}\nIt follows that\n\begin{align*}\n\mathbb{P}\left\{ \widehat{S}_{\boldsymbol{\pi}_N}^{\boldsymbol{\epsilon}_N^*}-S_N>t\right\} &\n\leq\mathbb{P}\left\{ |\widehat{S}_{\boldsymbol{\pi}_N}^{\boldsymbol{\epsilon}_N^*}-\widehat{S}_{\boldsymbol{p} _{N}\n}^{\boldsymbol{\epsilon} _{N}^{\ast}}|+\widehat\n{S}_{\boldsymbol{p} _{N}}^{\boldsymbol{\epsilon} _{N}^{\ast}}-S_{N}>t\right\}\n\\\n& \leq\mathbb{P}\left\{ M_{N}+\widehat{S}_{\boldsymbol{p} _{N}}\n^{\boldsymbol{\epsilon} _{N}^{\ast}}-S_{N}>t\right\}\n\end{align*}\nand a direct application of Theorem \ref{thm:rejective} finally gives the desired result. \n\n\n\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
Fundamental early results in this direction by B\\\"uchi (\\cite{Buc60}, \\cite{Buc62}) and Rabin (\\cite{Rab69}) proved the decidability of the monadic second order theories of the successor on the natural numbers and of the binary tree.\nThere have been numerous applications and extensions of these results in logic, algebra \\cite{EPCHLT92}, verification and model checking \\cite{VW84} \\cite{Var96}, and databases \\cite{Var05}. Moreover, automatic structures provide a theoretical framework for constraint databases over discrete domains such as strings and trees \\cite{BL02}. Using simple closure properties and the decidability of the emptiness problem for automata, one can prove that the first order (and monadic second order) theories of some well-known structures are decidable. Examples of such structures are Presburger arithmetic and some of its extensions, the term algebra, the\nreal numbers under addition, finitely generated abelian groups, and the atomless Boolean algebra. Direct proofs of these results, without the use of automata, require non-trivial technical work.\n\n\nA structure $\\mathcal A=(A; R_0, \\ldots, R_m)$ is {\\bf automatic} if the domain $A$ and all the relations $R_0, \\ldots, R_m$ of the structure are recognised by finite automata (precise definitions are in the next section). \nIndependently, Hodgson \\cite{H82} and later Khoussainov and Nerode \\cite{KhN95} proved that for any given automatic structure there is an algorithm that solves the model checking problem for the first order logic. In particular, the first order theory of the structure is decidable. Blumensath and Gr\\\"adel proved a logical characterization theorem stating that automatic structures are exactly those definable in the fragment of arithmetic $(\\omega; +, |_2, \\leq, 0)$, where $+$ and $\\leq$ have their usual meanings and $|_2$ is a weak divisibility predicate for which $x|_2 y$ if and only if $x$ is a power of $2$ and divides $y$ (see \\cite{BG00}). 
In addition, for some classes of automatic structures there are characterization theorems that have direct algorithmic implications. For example, in \cite{Del04}, Delhomm\'e proved that automatic well-ordered sets all have order type strictly less than $\omega^\omega$. Using this characterization, \cite{KhRS03} gives an algorithm which decides the isomorphism problem for automatic well-ordered sets. The algorithm is based on extracting the Cantor normal form for the ordinal isomorphic to the given automatic well-ordered set. Another characterization theorem of this ilk gives that automatic Boolean algebras are exactly those that are finite products of the Boolean algebra of finite and co-finite subsets of $\omega$ \cite{KhNRS04}. Again, this result can be used to show that the isomorphism problem for automatic Boolean algebras is decidable. \n\nAnother body of work is devoted to the study of resource-bounded complexity of the model checking problem for automatic structures. On the one hand, Gr\"adel and Blumensath (\cite{BG00}) constructed examples of automatic structures whose first order theories are non-elementary. On the other hand, Lohrey in \cite{Loh03} proved that the first order theory of any automatic graph of bounded degree is elementary. It is noteworthy that when both a first order formula $\phi$ and an automatic structure $\mathcal A$ are fixed, determining if a tuple $\bar{a}$ from $\mathcal A$ satisfies \n$\phi(\bar{x})$ can be done in linear time. There are also feasible time bounds on deciding the first order theories of automatic structures over the unary alphabet (\cite{Blu99}, \cite{KhLM}). \n\nMost current results demonstrate that automatic structures are not complex in various concrete senses.\nHowever, in this paper we use well-established concepts from both logic and model theory to prove results in the opposite direction. 
We now briefly describe the measures of complexity we use (ordinal heights of well-founded relations, Scott ranks of \nstructures, and Cantor-Bendixson ranks of trees) and connect them with the results of this paper.\n\n\n\nA relation $R$ is called {\\bf well-founded} if there is no infinite sequence $x_1,x_2,x_3, \\ldots$ such that $(x_{i+1}, x_{i})\\in R$ for $i \\in \\omega$. In computer science, well-founded relations are of interest due to a natural connection between well-founded sets and terminating programs. \nWe say that a program is {\\bf terminating} if every computation from an initial state is finite.\nThis is equivalent to well-foundedness of the collection of states reachable from the initial state, under the reachability relation \\cite{BG06}. The {\\bf ordinal height} is a measure of the depth of well-founded relations. Since all automatic structures are also computable structures, the obvious bound for ordinal heights of automatic well-founded relations is $\\omega_1^{CK}$ (the first non-computable ordinal). Sections \\ref{s:RanksofOrders} and \\ref{s:ranksWF} study the sharpness of this bound.\nTheorem \\ref{thm:OrderRank} characterizes automatic well-founded partial orders in terms of their (relatively low) ordinal heights, whereas Theorem \\ref{thm:HeightRank} shows that $\\omega_1^{CK}$ is the sharp bound \nin the general case.\n\n\\begin{theorem}\\label{thm:OrderRank} For each ordinal $\\alpha$, $\\alpha$ is the ordinal height of an automatic well-founded partial order if and only if $\\alpha< \\omega^\\omega$. \n\\end{theorem}\n\n\\begin{theorem}\\label{thm:HeightRank}\nFor each (computable) ordinal $\\alpha < \\omega_{1}^{CK}$, there is an automatic well-founded relation $\\mathcal A$ with ordinal height greater than $\\alpha$.\n\\end{theorem}\n\n\nSection \\ref{s:SR} is devoted to building automatic structures with high Scott ranks. 
The concept of Scott rank comes from a well-known theorem of Scott stating that for every countable structure $\\mathcal A$ there exists a sentence $\\phi$ in $L_{\\omega_1,\\omega}$-logic which characterizes $\\mathcal A$ up to isomorphism \\cite{Sco65}. The minimal quantifier rank of such a formula is called the Scott rank of $\\mathcal A$. A known upper bound on the Scott rank of computable structures implies that the Scott rank of automatic structures is at most $\\omega_1^{CK}+1$.\nBut, until now, all the known examples of automatic structures had low Scott ranks. Results in \\cite{Loh03}, \\cite{Del04}, \\cite{KhRS05} \nsuggest that the Scott ranks of automatic structures could be bounded by small ordinals. This intuition is falsified in Section \\ref{s:SR} with the theorem:\n\n\n\\begin{theorem}\\label{thm:ScottRank}\nFor each computable ordinal $\\alpha$\nthere is an automatic structure of Scott rank at least $\\alpha$.\n\\end{theorem}\n\nIn particular, this theorem gives a new proof that the isomorphism problem for automatic structures is $\\Sigma_1^1$-complete (another proof may be found in \\cite{KhNRS04}).\n\nIn the last section, we investigate the Cantor-Bendixson ranks of automatic trees. A\n{\\bf partial order tree} is a partially ordered set $(T, \\leq)$ such that there is a $\\leq$-minimal element of $T$, and each subset $\\{x \\in T : x \\leq y\\}$ is finite and is linearly ordered\nunder $\\leq$. A {\\bf successor tree} is a pair $(T, S)$ such that the reflexive and transitive closure $\\leq_S$ of $S$ produces a partial order tree \n$(T, \\leq_{S})$. The {\\bf derivative} of a tree $\\mathcal T$ is obtained by removing all the nonbranching paths of the tree. One applies the derivative operation to $\\mathcal T$ successively until a fixed point is reached. The minimal ordinal that is needed to reach the fixed point is called the {\\bf Cantor-Bendixson (CB) rank} of the tree. The CB rank plays an important role in logic, algebra, and topology. 
Informally, the CB rank tells us how far the structure is from algorithmically (or algebraically) simple structures. Again, the obvious bound on $CB$ ranks of automatic successor trees is $\\omega_1^{CK}$. \nIn \\cite{KhRS03}, it is proved that the CB rank of any automatic partial order tree is finite and can be computed from the automaton for the $\\leq$ relation on the tree. It has been an open question whether\nthe CB ranks of automatic successor trees can also be bounded by small ordinals. We answer this question in the following theorem.\n\n\n\n\\begin{theorem}\\label{thm:RecTrees}\nFor $\\alpha < \\omega_1^{CK}$ there is an automatic successor tree of CB rank $\\alpha$.\n\\end{theorem}\n\nThe main tool we use to prove results about high ranks is the configuration spaces of Turing machines, considered as automatic graphs.\nIt is important to note that graphs which arise as configuration spaces have very low model-theoretic complexity: their Scott ranks are at most $3$, and if they are well-founded then their ordinal heights are at most $\\omega$ (see Propositions \\ref{ppn:ConfigWF} and \\ref{ppn:ConfigScott}). Hence, the configuration spaces serve merely as building blocks in the construction of automatic structures with high complexity, rather than contributing materially to the high complexity themselves.\n\n\n\\section*{Acknowledgement}\nWe thank Moshe Vardi who posed the question about ranks of automatic well-founded relations. We also thank Anil Nerode and Frank Stephan with whom \nwe discussed Scott and Cantor-Bendixson ranks of automatic structures.\n\n\n\n\\section{Preliminaries}\\label{s:Prelim}\n\nA (relational) {\\bf vocabulary} is a finite sequence $(P_1^{m_1}, \\ldots, P_t^{m_t}, c_1, \\ldots, c_s)$, where each $P_j^{m_j}$ is a predicate symbol of arity $m_j>0$, and each $c_k$ is a constant symbol. 
A {\\bf structure} with this vocabulary is a tuple $\\mathcal A=(A;P_1^{\\mathcal A}, \\ldots, P_t^{\\mathcal A}, c_1^{\\mathcal A}, \\ldots, c_s^{\\mathcal A})$, where $P_j^{\\mathcal A}$ and $c_k^{\\mathcal A}$ are interpretations of the symbols of the vocabulary. When convenient, we may omit the superscripts $\\mathcal A$. We only consider infinite structures, that is, those whose universe is an infinite set.\n\n\nTo establish notation, we briefly recall some definitions associated with finite automata. A {\\bf finite automaton} $\\mathcal M$ over an alphabet $\\Sigma$ is a tuple\n$(S,\\iota,\\Delta,F)$, where $S$ is a finite set of {\\bf states}, $\\iota \\in S$\nis the {\\bf initial state}, $\\Delta \\subset S \\times \\Sigma \\times S$ is the\n{\\bf transition table}, and $F \\subset S$ is the set of {\\bf final states}.\nA {\\bf computation} of $\\mathcal A$ on a word $\\sigma_1 \\sigma_2 \\dots \\sigma_n$\n($\\sigma_i \\in \\Sigma$) is a sequence of states, say $q_0,q_1,\\dots,q_n$, such\nthat $q_0 = \\iota$ and $(q_i,\\sigma_{i+1},q_{i+1}) \\in \\Delta$ for all $i \\in\n\\{0,\\ldots,n-1\\}$. If $q_n \\in F$, then the computation is {\\bf successful}\nand we say that the automaton $\\mathcal M$ {\\bf accepts} the word $\\sigma_1 \\sigma_2 \\dots \\sigma_n$. The {\\bf language}\naccepted by the automaton $\\mathcal M$ is the set of all words accepted by $\\mathcal M$. In\ngeneral, $D \\subset \\Sigma^{\\star}$ is {\\bf finite automaton recognisable},\nor {\\bf regular}, if $D$ is the language accepted by some finite automaton~$\\mathcal M$.\n\n\n\nTo define automaton recognisable relations, we use $n$-variable (or $n$-tape) automata.\nAn {\\bf $n$--tape automaton} can be thought of as a one-way \nTuring machine with $n$ input tapes \\cite{Eil69}. Each tape is regarded as \nsemi-infinite, having written on it a word over the alphabet $\\Sigma$ followed \nby an infinite succession of blanks (denoted by $\\diamond$ symbols). 
The automaton \nstarts in the initial state, reads simultaneously the first symbol of each tape, \nchanges state, reads simultaneously the second symbol of each tape, \nchanges state, etc., until it reads a blank on each tape. The automaton then \nstops and accepts the $n$--tuple of words if it is in a final state. The set of all \n$n$--tuples accepted by the automaton is the relation recognised by the automaton. \nFormally, an $n$--tape automaton on $\\Sigma$ is a finite automaton over the alphabet $(\\Sigma_{\\diamond})^n$, where $\\Sigma_{\\diamond}=\\Sigma \\cup \\{\\diamond\\}$ and\n$\\diamond \\not \\in \\Sigma$.\nThe {\\bf convolution of a tuple} $(w_1,\\cdots,w_n) \\in\n\\Sigma^{\\star n}$ is the string $ c(w_1,\\cdots,w_n)$ of length $\\max_i|w_i|$\nover the alphabet $(\\Sigma_{\\diamond})^n$ which is defined as follows.\nIts $k$'th symbol is $(\\sigma_1,\\ldots,\\sigma_n)$ where $\\sigma_i$ is the\n$k$'th symbol of $w_i$ if $k \\leq |w_i|$ and $\\diamond$ otherwise.\nThe {\\bf convolution of a relation} $R \\subset \\Sigma^{\\star n}$ is the language\n$c(R) \\subset ((\\Sigma_{\\diamond})^{n})^{\\star}$ formed as the set of convolutions\nof all the tuples in $R$. \n An $n$--ary relation $R \\subset \\Sigma^{\\star n}$ is {\\bf finite automaton recognisable},\n or {\\bf regular}, if its convolution $c(R)$ is recognisable by an $n$--tape automaton.\n\n\\begin{definition} \\label{dfn:automatic} A structure $\\mathcal A=(A; R_0, R_1, \\ldots, R_m)$ is {\\bf automatic} over $\\Sigma$ if its domain $A$ and all relations \n$R_0$, $R_1$, $\\ldots$, $R_m$ are regular over $\\Sigma$. If $\\mathcal B$ is isomorphic to an automatic structure $\\mathcal A$\nthen we call $\\mathcal A$ an {\\bf automatic presentation} of $\\mathcal B$ and say that\n$\\mathcal B$ is {\\bf automatically presentable}.\n\\end{definition}\n\nThe configuration graph of any Turing machine\nis an example of an automatic structure. 
The graph is defined by letting the \nconfigurations of the Turing machine be the vertices, \nand putting an edge from configuration $c_1$ to configuration $c_2$ if the machine can make an instantaneous move from $c_1$ to $c_2$. Examples of automatically\npresentable structures are $(\\mathbb N, +)$, $(\\mathbb N, \\leq)$, $(\\mathbb N, S)$,\n$(\\mathbb{Z},+)$, the order on the rationals $(\\mathbb{Q}, \\leq)$, and the Boolean algebra\nof finite and co-finite subsets of $\\mathbb N$. In the following, we abuse terminology and identify the notions of \\textquotedblleft automatic\\textquotedblright~ and \\textquotedblleft automatically presentable\\textquotedblright~.\nMany examples of automatic structures can be formed using the $\\omega$-fold disjoint union of a structure $\\mathcal A$ (the disjoint union of $\\omega$ many \ncopies of $\\mathcal A$). \n\n\\begin{lemma}\\cite{Rub04}\\label{lm:omega-fold} If $\\mathcal A$ is automatic then its $\\omega$-fold disjoint union is isomorphic to an automatic structure.\n\\end{lemma}\n\\begin{proof}\nSuppose that $\\mathcal A = (A; R_{1}, R_{2}, \\ldots)$ is automatic. Define $\\mathcal A' = (A \\times 1^{\\star}; R'_{1}, R'_{2}, \\ldots)$ by\n\\[\n\\langle (x,i), (y, j) \\rangle \\in R'_{m} \\qquad \\iff \\qquad i = j~ \\& ~\\langle x, y \\rangle \\in R_{m}, \\ \\ m=1,2,\\ldots.\n\\]\nIt is clear that $\\mathcal A'$ is automatic and is isomorphic to the $\\omega$-fold disjoint union of $\\mathcal A$. \n\\end{proof}\n\nThe class of automatic structures is a proper subclass of the computable structures. \nWe therefore mention some crucial definitions and facts about computable structures. \nGood references for the theory of computable structures include \\cite{Hariz98}, \\cite{KhSh99}. \n\n\\begin{definition}\nA {\\bf computable structure} is a structure $\\mathcal A = (A; R_{1}, \\ldots, R_m)$ whose domain and relations are all computable. 
\n\\end{definition}\n\n The domains of computable structures can always be identified with the set $\\omega$ of natural numbers. Under this assumption, we introduce new constant symbols \n $c_n$ for each $n\\in \\omega$ and interpret $c_n$ as $n$. We expand the vocabulary of each structure to include these new constants $c_{n}$. In this context, $\\mathcal A$ is computable if and only if \n the {\\bf atomic diagram} of $\\mathcal A$ (the set of G\\\"odel numbers of all quantifier-free sentences in the extended vocabulary that are true in $\\mathcal A$) is a computable set.\n If $\\mathcal A$ is computable and $\\mathcal B$ is isomorphic to $\\mathcal A$ then we say that $\\mathcal A$ is a {\\bf computable presentation}\nof $\\mathcal B$. Note that if $\\mathcal B$ has a computable presentation then $\\mathcal B$ has $\\omega$ many computable presentations. In this paper, we will be coding computable structures into automatic ones.\n\n\nThe ranks that we use to measure the complexity of automatic structures take values in the ordinals. In particular, we will see that only a subset of the countable ordinals will play an important role. An ordinal is called {\\bf computable} if it is the order-type of a computable well-ordering of the natural numbers. The least ordinal which is not computable is denoted $\\omega_1^{CK}$ (after Church and Kleene). \n\n\\section{Ranks of automatic well-founded partial orders} \\label{s:RanksofOrders}\n\nIn this section we consider structures $\\mathcal A = (A; R)$ with a single binary relation. \nAn element $x$ is said to be {\\bf $R$-minimal for a set $X$} if for each \n$y \\in X$, $(y,x) \\notin R$. The relation $R$ is said to be {\\bf well-founded} \nif every non-empty subset of $A$ has an $R$-minimal element. \nThis is equivalent to saying that $(A; R)$ has no infinite chains \n$x_1, x_2, x_3, \\ldots$ where $(x_{i+1}, x_i) \\in R$ for all $i$. 
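The definition of well-foundedness above can be made concrete on a finite relation, where an infinite descending chain must revisit an element and is therefore equivalent to a directed cycle. The following Python sketch is our illustration only (the relation and names are invented, not part of the paper's formal development): it repeatedly strips $R$-minimal elements and reports whether the relation is well-founded.

```python
# Illustrative sketch: a finite binary relation R on a set A, given as a set
# of pairs (y, x) meaning "y is R-below x".  On a finite set, R is
# well-founded iff every non-empty subset has an R-minimal element,
# which is equivalent to R having no directed cycles.

def r_minimal(X, R):
    """Elements x of X with no y in X such that (y, x) is in R."""
    return {x for x in X if not any((y, x) in R for y in X)}

def is_well_founded(A, R):
    """Repeatedly strip R-minimal elements; R is well-founded on the
    finite set A iff this process exhausts A."""
    X = set(A)
    while X:
        m = r_minimal(X, R)
        if not m:           # a non-empty subset with no R-minimal element
            return False
        X -= m
    return True

A = {0, 1, 2, 3}
R = {(0, 1), (1, 2), (0, 2), (2, 3)}   # acyclic, hence well-founded
cyclic = R | {(3, 0)}                  # closes the cycle 0, 2, 3, 0

print(is_well_founded(A, R))        # True
print(is_well_founded(A, cyclic))   # False
```

The loop mirrors the definition directly: if at any stage a non-empty remainder has no $R$-minimal element, that remainder witnesses the failure of well-foundedness.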
\n\n\n A {\\bf ranking function} for $\\mathcal A$ is an ordinal-valued function $f$ such that $f(y)< f(x)$ whenever $(y,x)\\in R$. If $f$ is a ranking function on $\\mathcal A$, let $ord(f)= \\sup\\{ f(x) : x \\in A \\}$. The structure $\\mathcal A$ is well-founded if and only if $\\mathcal A$ admits a ranking function. The {\\bf ordinal height} of $\\mathcal A$, denoted $r(\\mathcal A)$, is the least ordinal $\\alpha$ which is $ord(g)$ for some ranking function $g$ on $\\mathcal A$. An equivalent definition for the rank of $\\mathcal A$ is the following. We define the function $r_{\\mathcal A}$ by induction: for the $R$-minimal elements $x$,\nset $r_{\\mathcal A}(x)=0$; for $z$ not $R$-minimal, put $r_{\\mathcal A}(z)=\\sup\\{ r_{\\mathcal A}(y)+1 : (y,z) \\in R\\}$. Then $r_{\\mathcal A}$ is a ranking function admitted by $\\mathcal A$ and $r(\\mathcal A) = \\sup\\{ r_{\\mathcal A}(x) : x \\in A\\}$.\nFor $B \\subseteq A$, we write $r(B)$ for the ordinal height of the structure obtained by restricting the relation $R$ to the subset $B$. \n\n\\begin{lemma}\\label{lm:compRank}\nIf $\\alpha<\\omega_1^{CK}$, there is a computable well-founded relation of \nordinal height $\\alpha$.\n\\end{lemma}\n\\begin{proof}\nBy definition, every computable ordinal $\\alpha$ is the order type of a computable well-ordering of the natural numbers. This well-ordering is itself a computable well-founded relation, and its ordinal height is $\\alpha$.\n\\end{proof}\n \nThe next lemma follows easily from the well-foundedness of ordinals and of $R$. The proof is left to the reader.\n\n\\begin{lemma}\\label{lm:witnessRank}\nFor a structure $\\mathcal A = (A; R)$ where $R$ is well-founded, if $r(\\mathcal A) = \\alpha$ and $\\beta < \\alpha$ then there is an $x \\in A$ such that $r_{\\mathcal A}(x) = \\beta$.\n\\end{lemma}\n\nFor the remainder of this section, we assume further that $R$ is a partial order. For\nconvenience, we write $\\leq$ instead of $R$. Thus, we consider automatic well-founded partial orders $\\mathcal A=(A,\\leq)$. 
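On a finite well-founded relation the recursive definition of $r_{\mathcal A}$ can be computed directly, since all ranks are natural numbers and the supremum becomes a maximum. The following Python sketch is our own illustration (the example relation and element names are invented, not taken from the paper):

```python
# Illustrative sketch: compute the canonical rank r(x) and the ordinal
# height of a finite well-founded relation.  Pairs (y, x) in R mean
# "y is R-below x"; for finite structures sup{r(y)+1} is just max.

A = {"a", "b", "c", "d", "e"}
R = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"), ("a", "e")}

def rank(x, R):
    """r(x) = 0 for R-minimal x, else sup{ r(y)+1 : (y, x) in R }."""
    preds = [y for (y, z) in R if z == x]
    return 0 if not preds else max(rank(y, R) + 1 for y in preds)

def ordinal_height(A, R):
    """Height of the whole structure: the supremum of element ranks."""
    return max(rank(x, R) for x in A)

print({x: rank(x, R) for x in sorted(A)})
# {'a': 0, 'b': 1, 'c': 2, 'd': 3, 'e': 1}
print(ordinal_height(A, R))   # 3
```

The recursion terminates precisely because the relation is well-founded; in the infinite case the same definition requires transfinite recursion and genuinely ordinal values, which is what the rest of this section studies.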
We will use the notion of {\\bf natural sum of ordinals}. The natural sum of ordinals $\\alpha, \\beta$ (denoted $\\alpha +' \\beta$) is defined recursively: $\\alpha +' 0 = \\alpha$, $0 +' \\beta = \\beta$, and $\\alpha +' \\beta$ is the least ordinal strictly greater than $\\gamma +' \\beta$ for all $\\gamma < \\alpha$ and strictly greater than $\\alpha +' \\gamma$ for all $\\gamma < \\beta$.\n\n\\begin{lemma}\nLet $A_1$ and $A_2$ be disjoint subsets of $A$ such that $A=A_1\\cup A_2$. \nConsider the partially ordered sets $\\mathcal A_1=(A_1,\\leq_1)$ and $\\mathcal A_2=(A_2,\\leq_2)$ obtained by restricting $\\leq$ to $A_1$ and $A_2$ respectively. Then, $r(\\mathcal A)\\leq \\alpha_1 +' \\alpha_2$, where $\\alpha_i=r(\\mathcal A_i)$. \n\\end{lemma}\n\\begin{proof}\nWe will show that there is a ranking function on $A$ whose range is contained in the ordinal\n$\\alpha_1 +' \\alpha_2$. \nFor each $x\\in A$\nconsider the partially ordered sets $\\mathcal A_{1,x}$ and $\\mathcal A_{2,x}$ obtained by restricting\n$\\leq$ to $\\{z\\in A_1 \\mid z < x\\}$ and $\\{z\\in A_2 \\mid z < x\\}$, respectively. \nDefine $f(x)=r(\\mathcal A_{1,x}) +' r(\\mathcal A_{2,x})$.\nWe claim that $f$ is a ranking function. Indeed, assume that $x<y$ and, without loss of generality, that $x\\in A_1$. By transitivity, $\\{z\\in A_1 \\mid z<x\\}\\cup\\{x\\} \\subseteq \\{z\\in A_1 \\mid z<y\\}$ and $\\{z\\in A_2 \\mid z<x\\} \\subseteq \\{z\\in A_2 \\mid z<y\\}$, so $r(\\mathcal A_{1,x}) < r(\\mathcal A_{1,y})$ and $r(\\mathcal A_{2,x}) \\leq r(\\mathcal A_{2,y})$. Since the natural sum is strictly monotone in each argument, $f(x)<f(y)$.\n\\end{proof}\n\nAn easy induction extends the lemma to any finite partition of $A$:\n\n\\begin{corollary}\\label{cr:PartitionRank}\nLet $A_1, \\ldots, A_n$ be disjoint subsets of $A$ such that $A=A_1\\cup \\cdots \\cup A_n$, and let $\\mathcal A_i$ be obtained by restricting $\\leq$ to $A_i$. Then $r(\\mathcal A)\\leq \\alpha_1 +' \\cdots +' \\alpha_n$, where $\\alpha_i=r(\\mathcal A_i)$.\n\\end{corollary}\n\nWe can now prove the upper bound for ordinal heights of automatic well-founded partial orders.\n\n\\begin{theorem}\nThe ordinal height of any automatic well-founded partial order $\\mathcal A=(A,\\leq)$ is strictly less than $\\omega^{\\omega}$.\n\\end{theorem}\n\\begin{proof}\nLet $(S_{A},\\iota_{A},\\Delta_{A},F_{A})$ and $(S_{\\leq},\\iota_{\\leq},\\Delta_{\\leq},F_{\\leq})$ be deterministic automata recognising $A$ and (the convolution of) $\\leq$, respectively. Suppose for a contradiction that $r(\\mathcal A)\\geq \\omega^{\\omega}$. By Lemma \\ref{lm:witnessRank}, for each $n>0$ there is\n$u_n\\in A$ such that $r_{\\mathcal A}(u_n)=\\omega^n$. For each $u \\in A$ we define the set\n\\[\nu \\downarrow = \\{ x \\in A : x < u \\}.\n\\]\nNote that if $r_{\\mathcal A}(u)$ is a limit ordinal then $r_{\\mathcal A}(u) = r(u\\downarrow)$. We define a finite partition of $u \\downarrow$ in order to apply Corollary \\ref{cr:PartitionRank}. To do so, \nfor $u, v \\in \\Sigma^{\\star}$, define\n$X_{v}^{u} = \\{ vw \\in A : w \\in \\Sigma^{\\star} \\ \\& \\ vw < u \\}$. 
\nEach set of the form $u \\downarrow$ can then be partitioned based on the prefixes of words\nas follows:\n\\[\nu \\downarrow = \\{ x \\in A : |x| < |u | \\ \\& \\ x < u \\} \\cup \\bigcup_{v \\in \\Sigma^{\\star} : |v| = |u|} X_{v}^{u}.\n\\]\n(All the unions above are finite and disjoint.) Hence, applying Corollary \\ref{cr:PartitionRank}, for each $u_n$ there exists a $v_n$ such that $|u_n|=|v_n|$ and $r(X_{v_n}^{u_n})=r(u_n \\downarrow)=\\omega^n$.\n\n\nOn the other hand, we use the automata to define the following equivalence relation on pairs of words of equal lengths:\n\\begin{align*}\n(u,v) \\sim (u', v') \\ \\iff \\ &\\Delta_{A}(\\iota_{A}, v) = \\Delta_{A}(\\iota_{A}, v') \\ \\& \\\\ &\\Delta_{\\leq}(\\iota_{\\leq}, \\binom{v}{u}) = \\Delta_{\\leq}(\\iota_{\\leq}, \\binom{v'}{u'})\n\\end{align*}\nThere are at most $|S_{A}|\\times |S_{\\leq}|$ equivalence classes. Thus, the infinite sequence $(u_1, v_1)$, $(u_2, v_2)$, $\\ldots$ contains $m$, $n$ such that $m \\neq n$ and $(u_{m}, v_{m}) \\sim (u_{n}, v_{n})$. \n\n\\begin{lemma}\\label{lm:IsoXvu}\nFor any $u,v,u',v' \\in \\Sigma^{\\star}$, if $(u,v) \\sim (u', v')$ then $r(X_{v}^{u}) = r(X_{v'}^{u'})$.\n\\end{lemma}\n\nTo prove the lemma, consider $g: X_{v}^{u} \\to X_{v'}^{u'}$ defined as $g(vw) = v'w$. From the equivalence relation, we see that $g$ is well-defined, bijective, and order preserving. Hence $X_v^u \\cong X_{v'}^{u'}$ (as partial orders). Therefore, $r(X_{v}^{u}) = r(X_{v'}^{u'})$.\n\n\n\nBy Lemma \\ref{lm:IsoXvu}, $\\omega^{m} = r(X_{v_{m}}^{u_{m}}) = r(X_{v_{n}}^{u_{n}}) = \\omega^{n}$, a contradiction with the assumption that $m \\neq n$. 
Therefore, there is no automatic well-founded partial order of ordinal height greater than or equal to $\\omega^{\\omega}$.\n\\end{proof}\n\n\n\\section{Ranks of automatic well-founded relations}\\label{s:ranksWF}\n\n\\subsection{Configuration spaces of Turing machines}\\label{s:Config}\nIn the forthcoming constructions, we embed computable structures into \nautomatic ones via configuration spaces of Turing machines.\nThis subsection provides terminology and background for these constructions. \nLet $\\mathcal M$ be an $n$-tape deterministic Turing machine. \nThe {\\bf configuration space} of $\\mathcal M$, denoted by $Conf(\\mathcal M)$, is a directed graph whose nodes are configurations of $\\mathcal M$. The nodes are $n$-tuples, each of whose coordinates \nrepresents the contents of a tape. Each tape is encoded as $(w ~q ~ w')$, \nwhere $w, w' \\in \\Sigma^{\\star}$ are the symbols on the tape before and \nafter the location of the read\/write head, and $q$ is one of the states \nof $\\mathcal M$. The edges of the graph are all the pairs of the form $(c_1,c_2)$ such that \nthere is an instruction of $\\mathcal M$ that transforms \n$c_{1}$ to $c_{2}$. The configuration space is an automatic graph. The out-degree of every vertex in $Conf(\\mathcal M)$ is $1$; the in-degree need not be $1$. \n\n\\begin{definition}\nA deterministic Turing machine $\\mathcal M$ is {\\bf reversible} if $Conf(\\mathcal M)$ consists only of finite chains and chains of type $\\omega$.\n\\end{definition}\n\n\\begin{lemma} \\cite{Ben73} \\label{lm:reverse}\nFor any deterministic $1$-tape Turing machine there is a reversible $3$-tape Turing machine which accepts the same language.\n\\end{lemma}\n\n\\begin{proof}(Sketch)\nGiven a deterministic Turing machine, define a $3$-tape Turing machine\nwith a modified set of instructions. \nThe modified instructions have the property that neither the domains nor the ranges overlap. 
The first tape performs the computation exactly as the original machine would have done. As the new machine executes each instruction, it stores the index of the instruction on the second tape, forming a history. Once the machine enters a state which would have been halting for the original machine, the output of the computation is copied onto the third tape. Then, the machine runs the computation backwards and erases the history tape. The halting configuration contains the input on the first tape, blanks on the second tape, and the output on the third tape.\n\\end{proof}\n\nWe establish the following notation for a $3$-tape reversible Turing machine $\\mathcal M$ given by the construction in this lemma. A {\\bf valid initial configuration} of $\\mathcal M$ is of the form $(\\lambda~ \\iota ~ x , \\lambda, \\lambda )$, where $x$ is in the domain, $\\lambda$ is the empty string, and $\\iota$ is the initial state of $\\mathcal M$. From the proof of Lemma \\ref{lm:reverse}, observe that a {\\bf final (halting)} configuration is of the form $(x, \\lambda, \\lambda ~q_{f} ~y)$, \nwith $q_{f}$ a halting state of $\\mathcal M$. Also, because of the reversibility assumption, all the chains in $Conf(\\mathcal M)$ \nare either finite or $\\omega$-chains (the order type of the natural numbers). In particular, this means that $Conf(\\mathcal M)$ is well-founded. We call an element of in-degree $0$ a {\\bf base} (of a chain). The set of valid initial or final configurations is regular. We classify the components (chains) of $Conf(\\mathcal M)$ as follows:\n\\begin{itemize}\n\\item {\\bf Terminating computation chains}: finite chains whose base is a valid initial configuration; that is, one of the form $(\\lambda ~\\iota~ x, \\lambda, \\lambda )$, for $x \\in \\Sigma^{\\star}$.\n\\item {\\bf Non-terminating computation chains}: infinite chains whose base is a valid initial configuration. 
\n\\item {\\bf Unproductive chains}: chains whose base is not a valid initial configuration.\n\\end{itemize} \n\nConfiguration spaces of reversible Turing machines are locally finite graphs (graphs of finite degree) and well-founded. Hence, the following proposition guarantees that their ordinal heights are small.\n\n\\begin{proposition} \\label{ppn:ConfigWF} \nIf $G = (A,E)$ is a locally finite graph then either $E$ has an infinite chain, or $E$ is well-founded and the ordinal height of $(A, E)$ is at most $\\omega$.\n\\end{proposition}\n\n\\begin{proof}\nSuppose $G$ is a locally finite graph and $E$ is well-founded. For a contradiction, suppose $r(G) > \\omega$. By Lemma \\ref{lm:witnessRank}, there is $v \\in A$ with $r_{G}(v) = \\omega$. By definition, $r_{G}(v) = \\sup\\{ r_{G}(u)+1 : u E v \\}$. Since each $r_{G}(u)+1$ is finite, this supremum can equal $\\omega$ only if there are infinitely many elements $E$-below $v$, a contradiction with the local finiteness of $G$.\n\\end{proof}\n\n\\subsection{Automatic well-founded relations of high rank}\n\nWe are now ready to prove that $\\omega_1^{CK}$ is the sharp bound for ordinal heights of automatic well-founded relations.\n\n\\vspace{5pt}\n\n\\noindent{\\bf Theorem \\ref{thm:HeightRank}.~}{\\em\nFor each computable ordinal $\\alpha < \\omega_{1}^{CK}$, there is an automatic well-founded relation $\\mathcal A$ with ordinal height greater than $\\alpha$.}\n\n\n\\begin{proof} \nThe proof of the theorem uses properties of Turing machines and their configuration spaces. We take a computable well-founded relation whose ordinal height is $\\alpha$, and ``embed\" it into an automatic well-founded relation with similar ordinal height. \n\n\nBy Lemma \\ref{lm:compRank}, let $\\mathcal C=(C, L_\\alpha)$ be a computable well-founded relation of ordinal height $\\alpha$. \nWe assume without loss of generality that $C = \\Sigma^{\\star}$ for some finite alphabet $\\Sigma$. Let $\\mathcal M$ be a Turing machine computing the relation $L_{\\alpha}$. 
On each pair $(x,y)$ from the domain, $\\mathcal M$ halts and outputs \\textquotedblleft yes\\textquotedblright~ or \\textquotedblleft no\\textquotedblright~. By Lemma \\ref{lm:reverse}, we can assume that $\\mathcal M$ is reversible. Recall that $Conf(\\mathcal M) = (D, E)$ is an automatic graph. \nWe define the domain of our automatic structure to be $A = \\Sigma^{\\star} \\cup D$. The binary relation of the automatic structure is:\n\\begin{align*}\nR = E ~\\cup~ &\\{ (x, (\\lambda ~ \\iota ~ (x, y), \\lambda, \\lambda) ) : x,y \\in \\Sigma^{\\star}\\} ~\\cup \\\\\n&\\{ (( (x,y), \\lambda, \\lambda~q_{f}~\\text{\\textquotedblleft yes\\textquotedblright~}), y) : x,y \\in \\Sigma^{\\star}\\}.\n\\end{align*}\nIntuitively, the structure $(A; R)$ is a stretched-out version of $(C, L_\\alpha)$ with infinitely many finite pieces extending from elements of $C$, and with disjoint pieces which are either finite chains or chains of type $\\omega$. The structure $(A; R)$ is automatic because its domain is a regular set of words\nand the relation $R$ is recognisable by a $2$-tape automaton. We should verify, however, that $R$ is well-founded. Let $Y \\subseteq A$ be non-empty. If $Y \\cap C \\neq \\emptyset$ then since $(C, L_{\\alpha})$ is well-founded, there is $x \\in Y \\cap C$ which is $L_{\\alpha}$-minimal. The only possible elements $u$ in $Y$ for which $(u,x) \\in R$ are those which lie on computation chains connecting some $z \\in C$ with $x$. Since each such computation chain is finite, there is an $R$-minimal $u$ below $x$ on each chain. Any such $u$ is $R$-minimal for $Y$. On the other hand, if $Y \\cap C = \\emptyset$, then $Y$ consists of disjoint finite chains and chains of type $\\omega$. Each such chain has a minimal element, and any of these elements is $R$-minimal for $Y$. \nTherefore, $(A; R)$ is an automatic well-founded structure. \n\n\nWe now consider the ordinal height of $(A; R)$. 
\nFor each element $x \\in C$, an easy induction on $r_{\\mathcal C}(x)$ shows that \n$r_{\\mathcal C} (x) \\leq r_{\\mathcal A}(x) \\leq \\omega+r_{\\mathcal C} (x)$. \nWe denote by $\\ell(a,b)$ the (finite) length of the computation chain of $\\mathcal M$ with input $(a,b)$. For any element $a_{x,y}$ in the computation chain which represents the computation of $\\mathcal M$ determining whether $(x,y) \\in L_{\\alpha}$, we have \\ \n$r_{\\mathcal A}(x) \\leq r_{\\mathcal A}(a_{x,y}) \\leq r_{\\mathcal A}(x) + \\ell(x,y)$. \\ \nFor any element $u$ in an unproductive chain of the configuration space, $0\\leq r_{\\mathcal A}(u)<\\omega$. Therefore, since $C \\subset A$, \\ \n$r(\\mathcal C) \\leq r(\\mathcal A) \\leq \\omega + r(\\mathcal C)$.\n\\end{proof}\n\n\n\n\\section{Automatic Structures and Scott Rank}\\label{s:SR}\n \n The Scott rank of a structure was introduced in the proof of Scott's Isomorphism Theorem \\cite{Sco65}. Since then, variants of the Scott rank have been used in the computable model theory literature. Here we follow the definition of Scott rank from \\cite{CGKn05}.\n\n\\begin{definition}\nFor a structure $\\mathcal A$ and tuples $\\bar{a}, \\bar{b} \\in A^{n}$ (of equal length), define\n\\begin{itemize}\n\\item $\\bar{a} \\equiv^{0} \\bar{b}$ if $\\bar{a}, \\bar{b}$ satisfy the same quantifier-free formulas in the language of $\\mathcal A$; \n\\item For $\\alpha > 0$, $\\bar{a} \\equiv^{\\alpha} \\bar{b}$ if for all $\\beta < \\alpha$, for each $\\bar{c}$ (of arbitrary length) there is $\\bar{d}$ such that \n$\\bar{a}, \\bar{c} \\equiv^{\\beta} \\bar{b}, \\bar{d}$; and for each $\\bar{d}$ (of arbitrary length) there is $\\bar{c}$ such that \n$\\bar{a}, \\bar{c} \\equiv^{\\beta} \\bar{b}, \\bar{d}$.\n\\end{itemize}\nThen, the {\\bf Scott rank} of the tuple $\\bar{a}$, denoted by $\\mathcal{SR}(\\bar{a})$, is the least \n$\\beta$ such that for all $\\bar{b} \\in A^{n}$, $\\bar{a}\\equiv^{\\beta} \\bar{b}$ implies that $(\\mathcal A, \\bar{a}) \\cong (\\mathcal A, \\bar{b})$. 
Finally, the Scott rank of $\\mathcal A$, denoted by $\\mathcal{SR}(\\mathcal A)$, is the\nleast $\\alpha$ greater than the Scott ranks of all tuples of $\\mathcal A$.\n\\end{definition}\n\n\\begin{example} $\\mathcal{SR}(\\mathbb Q, \\leq) = 1$, $\\mathcal{SR}(\\omega, \\leq) = 2$, and $\\mathcal{SR}( n \\cdot \\omega, \\leq) = n+1$.\n\\end{example}\n\nConfiguration spaces of reversible Turing machines are locally finite graphs. By the proposition below, they all have low Scott rank.\n\\begin{proposition}\\label{ppn:ConfigScott}\nIf $G = (V,E)$ is a locally finite graph, $SR(G) \\leq 3$.\n\\end{proposition}\n\\begin{proof}\nThe neighbourhood of radius $n$ of a subset $U$, denoted $B_{n}(U)$, is defined as follows: $B_0(U) = U$ and $B_n(U)$ is the set of $v \\in V$ which can be reached from $U$ by $n$ or fewer edges. The proof of the proposition relies on two lemmas.\n\n\\begin{lemma}\\label{lm:ConfigScott_1}\nLet $\\bar{a}, \\bar{b}$ be tuples from $V$ such that $\\bar{a} \\equiv^2 \\bar{b}$. Then for all $n$, there is a bijection of the $n$-neighbourhoods around $\\bar{a}, \\bar{b}$ which sends $\\bar{a}$ to $\\bar{b}$ and which respects $E$.\n\\end{lemma}\n\\begin{proof}\nFor a given $n$, let $\\bar{c} = B_{n}(\\bar{a})\\setminus \\bar{a}$. Note that $\\bar{c}$ is a finite tuple because of the local finiteness condition. Since $\\bar{a} \\equiv^2 \\bar{b}$, there is $\\bar{d}$ such that $\\bar{a} \\bar{c} \\equiv^1 \\bar{b} \\bar{d}$. It suffices to show that $B_{n}(\\bar{b}) = \\bar{b} \\bar{d}$; two set inclusions are needed. First, we show that $d_{i} \\in B_{n}(\\bar{b})$. By definition, we have that $c_{i} \\in B_{n}(\\bar{a})$, and let $a_{j}, u_{1}, \\ldots, u_{n-1}$ witness this. Then since $\\bar{a} \\bar{c} \\equiv^1 \\bar{b} \\bar{d}$, there are $v_{1}, \\ldots, v_{n-1}$ such that $\\bar{a}\\bar{c} \\bar{u}\\equiv^0 \\bar{b}\\bar{d}\\bar{v}$. 
In particular, we have that if $c_{i} E u_{1} E \\cdots E u_{n-1} E a_{j}$, then also $d_{i} E v_{1} E \\cdots E v_{n-1} E b_{j}$ (and likewise if the $E$ relation is in the other direction). Hence, $d_{i} \\in B_{n}(\\bar{b})$. Conversely, suppose $v \\in B_{n}(\\bar{b}) \\setminus \\bar{d}$. Let $v_{1}, \\ldots, v_{n}$ witness this; since $\\bar{a} \\bar{c} \\equiv^1 \\bar{b} \\bar{d}$, pulling these witnesses back to the $\\bar{a}$ side yields an element of $B_{n}(\\bar{a})$ which is not in $\\bar{c}$, a contradiction.\n\\end{proof}\n\n\n\\begin{lemma}\\label{lm:ConfigScott_2}\nLet $G=(V,E)$ be a locally finite graph. Suppose $\\bar{a}, \\bar{b} \\in V$ are such that for all $n$, $(B_{n}(\\bar{a}), E, \\bar{a}) \\cong (B_{n}(\\bar{b}), E, \\bar{b})$. Then there is an isomorphism between the component of $G$ containing $\\bar{a}$ and that containing $\\bar{b}$ which sends $\\bar{a}$ to $\\bar{b}$.\n\\end{lemma}\n\\begin{proof}\nWe consider a tree of partial isomorphisms of $G$. The nodes of the tree are bijections from $B_{n}(\\bar{a})$ to $B_{n}(\\bar{b})$ which respect the relation $E$ and map $\\bar{a}$ to $\\bar{b}$. Node $g$ is a child of node $f$ in the tree if $\\text{dom}(f) = B_{n}(\\bar{a})$, $\\text{dom}(g) = B_{n+1}(\\bar{a})$ and $g \\supset f$. Note that the root of this tree is the map which sends $\\bar{a}$ to $\\bar{b}$. Moreover, the tree is finitely branching (since $G$ is locally finite) and is infinite by Lemma \\ref{lm:ConfigScott_1}. Therefore, K\\\"onig's Lemma gives an infinite path through this tree. The union of all partial isomorphisms along this path is the required isomorphism.\n\\end{proof}\n\nTo prove the proposition, we note that for any $\\bar{a}, \\bar{b}$ in $V$ such that $\\bar{a} \\equiv^2 \\bar{b}$, Lemmas \\ref{lm:ConfigScott_1} and \\ref{lm:ConfigScott_2} yield an isomorphism from the component of $\\bar{a}$ to the component of $\\bar{b}$ that maps $\\bar{a}$ to $\\bar{b}$. Hence, if $\\bar{a} \\equiv^2 \\bar{b}$, there is an automorphism of $G$ that maps $\\bar{a}$ to $\\bar{b}$. 
Therefore, for each $\\bar{a} \\in V$, $SR(\\bar{a}) \\leq 2$, so $SR(G) \\leq 3$.\n\\end{proof}\n\nLet $\\mathcal C=(C; R_{1}, \\ldots, R_{n})$ be a computable structure. Since $C$ is a computable set, we may assume that $C=\\Sigma^{\\star}$ for some finite alphabet $\\Sigma$. We construct an \nautomatic structure $\\mathcal A$ whose Scott rank is (close to) the Scott rank of $\\mathcal C$. The construction of $\\mathcal A$ involves connecting the configuration spaces of Turing machines computing the relations $R_{1}, \\ldots, R_{n}$. Note that, by Proposition \\ref{ppn:ConfigScott}, the configuration spaces themselves have low Scott rank; producing high Scott rank in the resulting automatic structure is therefore the main part of the construction. The construction in some sense expands $\\mathcal C$ into an automatic structure. We comment that expansions do not necessarily preserve the Scott rank. For example, any computable structure $\\mathcal C$ has an expansion with Scott rank $2$, obtained by adding a successor relation on the domain to the signature.\n\n\n We detail the construction for $R_{i}$. Let $\\mathcal M_{i}$ be a Turing machine for $R_{i}$. \nBy a simple modification of the machine we assume that $\\mathcal M_{i}$ halts if and only if its output is \\textquotedblleft yes\\textquotedblright~. By Lemma \\ref{lm:reverse}, we can also assume that $\\mathcal M_{i}$ is reversible. We now modify the configuration space $Conf(\\mathcal M_{i})$ so as to respect the isomorphism type of $\\mathcal C$. \nThis will ensure that the construction (almost) preserves the Scott rank of $\\mathcal C$. We use the terminology from Subsection \\ref{s:Config}.\n\n\n\n\n{\\bf Smoothing out unproductive parts}. 
The length and number of unproductive chains are determined by the machine $\\mathcal M_{i}$ and hence may differ even for Turing machines computing the same set. In this stage, we standardize the format of this unproductive part of the configuration space: we add enough redundant information to the unproductive section of the structure so that if two given computable structures are isomorphic, the unproductive parts of their automatic presentations are also isomorphic.\nConcretely, we add $\\omega$-many chains of length $n$ (for each $n$) and $\\omega$-many copies of $\\omega$. This ensures that the (smoothed) unproductive sections of the configuration spaces of any two Turing machines are isomorphic. Adding this redundancy preserves automaticity, since the operation is a disjoint union of automatic structures.\n\n\n\n\n{\\bf Smoothing out lengths of computation chains}. We turn our attention to the chains which have valid initial configurations at their base. The length of each finite chain records the length of the computation required to return a \\textquotedblleft yes\\textquotedblright~ answer. We will smooth out these chains by adding \\textquotedblleft fans\\textquotedblright~ to each base. For this, we connect to each base of a computation chain a structure which consists of $\\omega$ many chains of each finite length. To do so we follow Rubin \\cite{Rub04}: consider the structure whose domain is $0^{\\star} 0 1^{\\star}$ and whose relation is given by $x E y$ if and only if $|x| = |y|$ and $y$ is the least lexicographic successor of $x$. This structure has a finite chain of every finite length. As in Lemma \\ref{lm:omega-fold}, we take the $\\omega$-fold disjoint union of the structure and identify the bases of all the finite chains. We get a \\textquotedblleft fan\\textquotedblright~ with infinitely many chains of each finite size whose base can be identified with a valid initial computation state. 
Also, the fan has an infinite component if and only if $R_{i}$ does not hold of the input tuple corresponding to the base. The result is an automatic graph, $Smooth(R_{i}) = ( D_{i}, E_{i})$, which extends $Conf(\\mathcal M_{i})$.\n\n\n\n\n{\\bf Connecting domain symbols to the computations of the relation}. \nWe apply the construction above to each $R_{i}$ in the signature of $\\mathcal C$. \nTaking the union of the resulting automatic graphs and adding vertices for the domain, we have the structure $(\\Sigma^{\\star} \\cup \\bigcup_i D_{i}, E_{1}, \\ldots, E_{n})$ (where we assume that the $D_i$ are disjoint).\nWe assume without loss of generality that each $\\mathcal M_{i}$ has a different initial state, and denote it by $\\iota_{i}$. We add $n$ predicates $F_i$ to the signature of the automatic structure connecting the elements of the domain of $\\mathcal C$ with the computations of the relations $R_{i}$:\n$$F_i = \\{ (x_{0}, \\ldots, x_{m_{i}-1}, (\\lambda~\\iota_{i}~(x_{0},\\ldots, x_{m_{i}-1}), \\lambda, \\lambda)) \\mid x_{0}, \\ldots, x_{m_{i}-1} \\in \\Sigma^{\\star} \\}.$$\nNote that for $\\bar{x} \\in \\Sigma^{\\star}$, $R_{i}(\\bar{x})$ if and only if \n$F_{i} (\\bar{x}, (\\lambda ~\\iota_{i}~\\bar{x}, \\lambda, \\lambda))$ holds and all $E_{i}$ chains emanating from $(\\lambda~\\iota_{i}~\\bar{x}, \\lambda, \\lambda)$ are finite. \nWe have built the automatic structure $$\\mathcal A= (\\Sigma^{\\star} \\cup \\bigcup_i D_{i}, E_{1}, \\ldots, E_{n}, F_{1}, \\ldots, F_{n}).$$ Two technical lemmas are used to show that the Scott rank of $\\mathcal A$ is close to the Scott rank of $\\mathcal C$:\n\n\\begin{lemma}\\label{lm:EquivTransfer}\nFor $\\bar{x}, \\bar{y}$ in the domain of $\\mathcal C$ and for any ordinal $\\alpha$, if $\\bar{x} \\equiv_{\\mathcal C}^{\\alpha} \\bar{y}$ then $\\bar{x} \\equiv_{\\mathcal A}^{\\alpha} \\bar{y}$.\n\\end{lemma}\n\n\\begin{proof}\nLet $X = \\text{dom}(\\mathcal A) \\setminus \\Sigma^{\\star}$. 
We prove the stronger result that for any ordinal $\\alpha$, and for all $\\bar{x}, \\bar{y} \\in \\Sigma^{\\star}$ and $\\bar{x}', \\bar{y}' \\in X$, if the following assumptions hold\n\\begin{enumerate}\n\\item \\label{asmp:C}$\\bar{x} \\equiv_{\\mathcal C}^{\\alpha} \\bar{y}$;\n\\item \\label{asmp:I}$\\langle \\bar{x}', E_{i} : i =1 \\ldots n\\rangle_{\\mathcal A} \\cong_{f} \\langle \\bar{y}', E_{i}, : i =1 \\ldots n\\rangle_{\\mathcal A}$ (hence the substructures in $A$ are isomorphic) with $f(\\bar{x}') = \\bar{y}'$; and\n\\item \\label{asmp:E}for each $x'_{k} \\in \\bar{x}'$, each $i=1, \\ldots, n$ and each subsequence of indices of length $m_{i}$, \n\\[\nx'_{k} = (\\lambda ~\\iota_{i}~\\bar{x}_{j}, \\lambda, \\lambda) ~~ \\iff ~~ y'_{k} = (\\lambda ~\\iota_{i}~\\bar{y}_{j}, \\lambda, \\lambda)\n\\]\n\\end{enumerate} \nthen $\\bar{x} \\bar{x}' \\equiv_{\\mathcal A}^{\\alpha} \\bar{y} \\bar{y}'$. The lemma follows if we take $\\bar{x}' = \\bar{y}' = \\lambda$ (the empty string).\n\nWe show the stronger result by induction on $\\alpha$. If $\\alpha = 0$, we need to show that for each $i,k, k', k_{0}, \\ldots, k_{m_{i}-1}$, \n\\[\nE_{i}(x'_{k}, x'_{k'}) \\iff E_{i}(y'_{k}, y'_{k'}), \n\\]\nand that \n\\[\nF_{i}(x_{k_{0}}, \\ldots, x_{k_{m_{i}-1}}, x'_{k'}) \\iff F_{i}(y_{k_{0}}, \\ldots, y_{k_{m_{i}-1}}, y'_{k'}). \n\\]\nThe first statement follows by assumption \\ref{asmp:I}, since the isomorphism must preserve the $E_{i}$ relations and maps $\\bar{x}'$ to $\\bar{y}'$. The second statement follows by assumption \\ref{asmp:E}. \n\nAssume now that $\\alpha >0$ and that the result holds for all $\\beta < \\alpha$. Let $\\bar{x}, \\bar{y} \\in \\Sigma^{\\star}$ and $\\bar{x}', \\bar{y}' \\in A$ be such that the assumptions of the lemma hold. We will show that $\\bar{x} \\bar{x}' \\equiv_{\\mathcal A}^{\\alpha} \\bar{y} \\bar{y}'$. Let $\\beta < \\alpha$ and suppose $\\bar{u} \\in \\Sigma^{\\star}, \\bar{u}' \\in A$. 
By assumption \\ref{asmp:C}, there is $\\bar{v} \\in \\Sigma^{\\star}$ such that $\\bar{x}\\bar{u} \\equiv_{\\mathcal C}^{\\beta} \\bar{y} \\bar{v}$. By the construction (in particular, the smoothing steps), we can find a corresponding $\\bar{v}' \\in X$ such that assumptions \\ref{asmp:I}, \\ref{asmp:E} hold. Applying the inductive hypothesis, we get that $\\bar{x} \\bar{u} \\bar{x}'\\bar{u}' \\equiv_{\\mathcal A}^{\\beta} \\bar{y} \\bar{v}\\bar{y}' \\bar{v}'$. Analogously, given $\\bar{v}, \\bar{v}'$ we can find the necessary $\\bar{u}, \\bar{u}'$. Therefore, $\\bar{x}\\bar{x}' \\equiv_{\\mathcal A}^{\\alpha}\\bar{y} \\bar{y}'$.\n\\end{proof}\n\n\\begin{lemma}\\label{lm:CRankHigher}\nIf $\\bar{x} \\bar{x}' \\bar{u}$ is a tuple from the domain of $\\mathcal A$ with $\\bar{x} \\in \\Sigma^{\\star}$ and $\\bar{x}', \\bar{u} \\in X = A \\setminus \\Sigma^{\\star}$, then there is $\\bar{y} \\in \\Sigma^{\\star}$ with $\\mathcal{SR}_{\\mathcal A} (\\bar{x} \\bar{x}' \\bar{u} ) \\leq 2 + \\mathcal{SR}_{\\mathcal C} (\\bar{y})$.\n\\end{lemma}\n\\begin{proof}\nWe use the notation $X_{P}$ to mean the subset of $X = A \\setminus \\Sigma^{\\star}$ which corresponds to elements on fans associated with productive chains of the configuration space. We write $X_{U}$ to mean the subset of $X$ which corresponds to the unproductive chains of the configuration space. Therefore, $A = \\Sigma^{\\star} \\cup X_{P} \\cup X_{U}$, a disjoint union. Since the Scott rank of a tuple does not depend on the order in which its components are listed, it suffices to show that for each $\\bar{x} \\in \\Sigma^{\\star}$, $\\bar{x}' \\in X_{P}$, $\\bar{u} \\in X_{U}$ there is $\\bar{y} \\in \\Sigma^{\\star}$ such that $\\mathcal{SR}_{\\mathcal A} (\\bar{x} \\bar{x}' \\bar{u} ) \\leq 2 + \\mathcal{SR}_{\\mathcal C} (\\bar{y})$.\n\nGiven $\\bar{x}, \\bar{x}', \\bar{u}$, let $\\bar{y} \\in \\Sigma^{\\star}$ be a minimal element satisfying that $\\bar{x} \\subset \\bar{y}$ and that $\\bar{x}' \\subset \\langle \\bar{y}, E_{i}, F_{i} : i =1 \\ldots n\\rangle_{\\mathcal A} $. Then we will show that $\\bar{y}$ is the desired witness.
First, we observe that since the unproductive part of the structure is disconnected from the productive elements we can consider the two independently. Moreover, because the structure of the unproductive part is predetermined and simple, for $\\bar{u}, \\bar{v} \\in X_{U}$, if $\\bar{u} \\equiv_{\\mathcal A}^{1} \\bar{v}$ then $(\\mathcal A, \\bar{u}) \\cong (\\mathcal A, \\bar{v})$. It remains to consider the productive part of the structure. \n\nConsider any $\\bar{z} \\in \\Sigma^{\\star}$, $\\bar{z}' \\in X_{P}$ satisfying $\\bar{z}' \\subset \\langle \\bar{z}, E_{i}, F_{i} : i =1 \\ldots n\\rangle_{\\mathcal A} $. We claim that $\\mathcal{SR}_{\\mathcal A}(\\bar{z} \\bar{z}') \\leq 2 + \\mathcal{SR}_{\\mathcal C}(\\bar{z})$. It suffices to show that for all $\\alpha$, for all $\\bar{w} \\in \\Sigma^{\\star}, \\bar{w}' \\in X_{P}$, \n\\[\n\\bar{z}\\bar{z}'~\\equiv_{\\mathcal A}^{2+\\alpha}~\\bar{w}\\bar{w}' \\qquad \\implies \\qquad \\bar{z}~\\equiv_{\\mathcal C}^{\\alpha}~\\bar{w}.\n\\]\nThis is sufficient for the following reason. If $\\bar{z} \\bar{z}' \\equiv_{\\mathcal A}^{2 + \\mathcal{SR}_{\\mathcal C}(\\bar{z})} \\bar{w} \\bar{w}'$ then $\\bar{z} \\equiv_{\\mathcal C}^{\\mathcal{SR}_{\\mathcal C}(\\bar{z})} \\bar{w}$ and hence $(\\mathcal C, \\bar{z}) \\cong (\\mathcal C, \\bar{w})$. From this automorphism, we can define an automorphism of $\\mathcal A$ mapping $\\bar{z} \\bar{z}'$ to $\\bar{w} \\bar{w}'$ because $\\bar{z} \\bar{z}' \\equiv_{\\mathcal A}^{2} \\bar{w} \\bar{w}'$ and hence for each $i$, the relative positions of $\\bar{z}'$ and $\\bar{w}'$ in the fans above $\\bar{z}$ and $\\bar{w}$ are isomorphic.
Therefore, $2 + \\mathcal{SR}_{\\mathcal C}(\\bar{z}) \\geq \\mathcal{SR}_{\\mathcal A}(\\bar{z}\\bar{z}')$.\n\n\nSo, we now show that for all $\\alpha$, for all $\\bar{w} \\in \\Sigma^{\\star}, \\bar{w}' \\in X_{P}$, $\\bar{z}\\bar{z}' \\equiv_{\\mathcal A}^{2+\\alpha} \\bar{w} \\bar{w}'$ implies that $\\bar{z} \\equiv_{\\mathcal C}^{\\alpha} \\bar{w}$. We proceed by induction on $\\alpha$. For $\\alpha = 0$, suppose that $\\bar{z}\\bar{z}' \\equiv_{\\mathcal A}^{2} \\bar{w} \\bar{w}'$. This implies that for each $i$ and for each subsequence of length $m_{i}$ of the indices, the $E_{i}$-fan above $\\bar{z}_{j}$ has an infinite chain if and only if the $E_{i}$-fan above $\\bar{w}_{j}$ does. Therefore, $R_{i}(\\bar{z}_{j})$ if and only if $R_{i}(\\bar{w}_{j})$. Hence, $\\bar{z} \\equiv_{\\mathcal C}^{0} \\bar{w}$, as required. For the inductive step, we assume the result holds for all $\\beta < \\alpha$. Suppose that $\\bar{z}\\bar{z}' \\equiv_{\\mathcal A}^{2+\\alpha} \\bar{w} \\bar{w}'$. Let $\\beta < \\alpha$ and $\\bar{c} \\in \\Sigma^{\\star}$. Then $2 + \\beta < 2 +\\alpha$ so by definition there is $\\bar{d} \\in \\Sigma^{\\star}, \\bar{d}' \\in X_{P}$ such that $\\bar{z}\\bar{z}' \\bar{c}\\equiv_{\\mathcal A}^{2+\\beta} \\bar{w} \\bar{w}' \\bar{d} \\bar{d}'$. However, since $2 + \\beta > 1$, $\\bar{d}'$ must be empty (elements in $\\Sigma^{\\star}$ cannot be $1$-equivalent to elements in $X_{P}$). Then by the induction hypothesis, $\\bar{z} \\bar{c} \\equiv_{\\mathcal C}^{\\beta} \\bar{w} \\bar{d}$. The argument works symmetrically if we are given $\\bar{d}$ and want to find $\\bar{c}$. Thus, $\\bar{z} \\equiv_{\\mathcal C}^{\\alpha} \\bar{w}$, as required.\n\\end{proof}\n\n\nPutting Lemmas \\ref{lm:EquivTransfer} and \\ref{lm:CRankHigher} together, we can prove the main result about our construction.\n\n\\begin{theorem}\\label{thm:SR} Let $\\mathcal C$ be a computable structure and construct the automatic structure $\\mathcal A$ from it as above. 
\nThen $\\mathcal{SR}(\\mathcal C) \\leq \\mathcal{SR}(\\mathcal A) \\leq 2 + \\mathcal{SR}(\\mathcal C)$.\n\\end{theorem}\n\n\\begin{proof}\nLet $\\bar{x}$ be a tuple in the domain of $\\mathcal C$. Then, by the definition of Scott rank, $\\mathcal{SR}_{\\mathcal A}(\\bar{x})$ is the least ordinal $\\alpha$ such that for all $\\bar{y} \\in \\text{dom}(\\mathcal A)$, $\\bar{x} \\equiv_{\\mathcal A}^{\\alpha} \\bar{y}$ implies that $(\\mathcal A, \\bar{x}) \\cong (\\mathcal A, \\bar{y})$; and similarly for $\\mathcal{SR}_{\\mathcal C}(\\bar{x})$. We first show that $\\mathcal{SR}_{\\mathcal A}(\\bar{x}) \\geq \\mathcal{SR}_{\\mathcal C}(\\bar{x})$. Suppose $\\mathcal{SR}_{\\mathcal C}(\\bar{x}) = \\beta$. We assume for a contradiction that $\\mathcal{SR}_{\\mathcal A}(\\bar{x}) = \\gamma < \\beta$. Consider an arbitrary $\\bar{z} \\in \\Sigma^{\\star}$ (the domain of $\\mathcal C$) such that $\\bar{x} \\equiv_{\\mathcal C}^{\\gamma} \\bar{z}$. By Lemma \\ref{lm:EquivTransfer}, $\\bar{x} \\equiv_{\\mathcal A}^{\\gamma} \\bar{z}$. But the definition of $\\gamma$ as the Scott rank of $\\bar{x}$ in $\\mathcal A$ implies that $(\\mathcal A, \\bar{x}) \\cong (\\mathcal A, \\bar{z})$. Now, $\\mathcal C$ is $L_{\\omega_{1}, \\omega}$ definable in $\\mathcal A$ and therefore inherits the isomorphism. Hence, $(\\mathcal C, \\bar{x}) \\cong (\\mathcal C, \\bar{z})$. But this implies that $\\mathcal{SR}_{\\mathcal C}(\\bar{x}) \\leq \\gamma < \\beta = \\mathcal{SR}_{\\mathcal C}(\\bar{x})$, a contradiction.\n\nSo far, we have that for each $\\bar{x} \\in \\Sigma^{\\star}$, $\\mathcal{SR}_{\\mathcal A}(\\bar{x}) \\geq \\mathcal{SR}_{\\mathcal C}(\\bar{x})$.
Hence, since $\\text{dom}(\\mathcal C) \\subset \\text{dom}(\\mathcal A)$, \n\\begin{align*}\n\\mathcal{SR} (\\mathcal A) &= \\sup\\{\\mathcal{SR}_{\\mathcal A}(\\bar{x}) +1: \\bar{x} \\in \\text{dom}(\\mathcal A)\\} \\\\\n&\\geq \\sup\\{\\mathcal{SR}_{\\mathcal A}(\\bar{x}) +1: \\bar{x} \\in \\text{dom}(\\mathcal C)\\} \\\\\n&\\geq \\sup\\{\\mathcal{SR}_{\\mathcal C}(\\bar{x}) +1: \\bar{x} \\in \\text{dom}(\\mathcal C)\\} = \\mathcal{SR}(\\mathcal C).\n\\end{align*}\n\nIn the other direction, we wish to show that $ \\mathcal{SR}(\\mathcal A) \\leq 2+ \\mathcal{SR}(\\mathcal C)$. Suppose this is not the case. Then there is a tuple $\\bar{x} \\bar{x}' \\bar{u}$ from the domain of $\\mathcal A$ such that $\\mathcal{SR}_{\\mathcal A}(\\bar{x} \\bar{x}' \\bar{u} ) \\geq 2 + \\mathcal{SR}(\\mathcal C)$. By Lemma \\ref{lm:CRankHigher}, there is $\\bar{y} \\in \\Sigma^{\\star}$ such that $2 + \\mathcal{SR}_{\\mathcal C} (\\bar{y}) \\geq 2 + \\mathcal{SR}(\\mathcal C)$, and hence $\\mathcal{SR}_{\\mathcal C}(\\bar{y}) \\geq \\mathcal{SR}(\\mathcal C)$, contradicting the fact that $\\mathcal{SR}(\\mathcal C)$ strictly exceeds the Scott rank of every tuple in $\\mathcal C$.\n\\end{proof}\n\nRecent work in the theory of computable structures has focussed on finding computable structures of high Scott rank. Nadel \\cite{Nad85} proved that any computable structure has Scott rank at most $\\omega_{1}^{CK} + 1$. Early on, Harrison \\cite{Harr68} showed that there is a computable ordering of type $\\omega_{1}^{CK}( 1 + \\eta)$ (where $\\eta$ is the order type of the rational numbers). This ordering has Scott rank $\\omega_{1}^{CK}+1$, as witnessed by any element outside the initial segment of type $\\omega_{1}^{CK}$. However, it was not until much more recently that a computable structure of Scott rank $\\omega_{1}^{CK}$ was produced (see Knight and Millar \\cite{KnM}). A recent result of Cholak, Downey, and Harrington gives the first natural example of a structure with Scott rank $\\omega_1^{CK}$: the computably enumerable sets under inclusion \\cite{CDH}.\n\n\\begin{corollary}\nThere is an automatic structure with Scott rank $\\omega_1^{CK}$.
There is an automatic structure with Scott rank $\\omega_1^{CK}+1$.\n\\end{corollary}\n\nWe also apply the construction to \\cite{GonKn02}, where it is proved that there are computable structures with Scott ranks above each computable ordinal. In this case, we get the following theorem.\\\\\n\n\\noindent{\\bf Theorem \\ref{thm:ScottRank}.~}{\\em For each computable ordinal $\\alpha$, there is an automatic structure of Scott rank at least $\\alpha$.}\n\n\n\n\\section{Cantor-Bendixson Rank of Automatic Successor Trees}\\label{s:CBdefs}\n\nIn this section we show that there are automatic successor trees of high Cantor-Bendixson (CB) \nrank. Recall the definitions of partial order trees and successor trees from Section \\ref{s:Intro}.\nNote that if $(T,\\leq)$ is an automatic partial order tree then the successor tree $(T,S)$, where the relation $S$ is defined by $S(x,y) \\iff (x < y) \\ \\& \\ \\neg \\exists z (x < z < y)$, is automatic. \n\n\\begin{definition}\nThe {\\bf derivative} of a (partial order or successor) tree $T$, $d(T)$, is the subtree of $T$ whose domain is \n\\[\n\\{ x \\in T: \\text{ $x$ lies on at least two infinite paths in $T$}\\}.\n\\]\nBy induction, $d^{0}(T) = T$, $d^{\\alpha+1} (T) = d(d^{\\alpha}(T))$, and for $\\gamma$ a limit ordinal, $d^{\\gamma}(T) = \\cap_{\\beta < \\gamma} d^{\\beta}(T)$. The {\\bf CB rank} of the tree, $CB(T)$, is the least $\\alpha$ such that $d^{\\alpha}(T) = d^{\\alpha+1}(T)$.\n\\end{definition}\n\nThe CB ranks of automatic partial order trees are finite \\cite{KhRS03}. This is not true of automatic successor trees.\nThe main theorem of this section provides a general technique for building trees of given CB ranks. 
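\nBefore turning to the examples and the main theorem, it may help to see the derivative act on two standard trees; the following example is ours, added only for illustration, and is not part of the constructions below.\n\n\\begin{example}\nLet $T = \\{0,1\\}^{\\star}$ be the complete binary tree under the prefix order. Every node of $T$ lies on more than one infinite path, so $d(T) = T$ and $CB(T) = 0$. On the other hand, for a single infinite path, say $T = a^{\\star}$, no node lies on two infinite paths, so $d(T) = \\emptyset = d^{2}(T)$, and hence $CB(T) = 1$.\n\\end{example}\n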
Before we get to it, we give some examples of automatic successor trees whose CB ranks are low.\n\n\\begin{example} \nFor each $n \\in \\omega$, there is an automatic partial order tree (and hence an automatic successor tree) whose CB rank is $n$.\n\\end{example}\n\n\\begin{proof}\nThe tree $T_n$ is defined over the $n$-letter alphabet $\\{a_1,\\ldots, a_n\\}$ as follows. The domain of the tree is $a_{1}^{\\star} \\cdots a_{n}^{\\star}$. The order $\\leq_n$ is the prefix partial order. Therefore, the successor relation is given as follows:\n\\begin{equation*}\nS(a_{1}^{\\ell_{1}} \\cdots a_{i}^{\\ell_{i}}) = \\begin{cases}\n\t\\{ a_{1}^{\\ell_{1}} \\cdots a_{i}^{\\ell_{i}+1}, a_{1}^{\\ell_{1}} \\cdots a_{i}^{\\ell_{i}}a_{i+1} \\} \\qquad \\text{if $1 \\leq i