diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzalcr" "b/data_all_eng_slimpj/shuffled/split2/finalzzalcr" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzalcr" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nWe perform a resurgence analysis of the $SU(2)$ Chern-Simons partition function on a Bireksorn homology sphere, following \\cite{GMP}. Consider the Chern-Simons action with a gauge group $G$ on a 3-manifold $M_{3}$:\n$$CS(A) = \\frac{1}{8\\pi^{2}} \\int_{M_{3}} A \\wedge dA + \\frac{2}{3}A \\wedge A \\wedge A,$$\nwhere $A$ is a Lie algebra (ad$G$) valued 1-form on $M_{3}$. Classical solutions of this action are the flat connections, satisfying $F_{A} = dA + A \\wedge A = 0$. The Chern-Simons partition function at level $k$ can be expanded with a perturbation parameter $1\/k$, around the flat connections:\n\n\\begin{equation}\nZ_{CS}(M_{3}) = \\sum_{\\alpha \\in \\mathcal{M}_{\\text{flat}}(M_{3}, G)} e^{2 \\pi i k CS(\\alpha)}Z^{\\text{pert}}_{\\alpha}.\n\\label{eqn:pert}\n\\end{equation}\nAbove, $\\mathcal{M}_{\\text{flat}}(M_{3}, G)$ is the moduli space of flat $G$-connections on $M_{3}$, and we have assumed a discrete moduli space. When $k$ is an integer, $CS(A)$ is only defined modulo 1. \n\nThe exact partition function $Z_{CS}(M_{3}) = \\int \\mathcal{D}A e^{2 \\pi i k CS(A)}$ can be recovered from its perturbative expansion by a resurgence analysis of Jean \\'{E}calle \\cite{Ecalle}. We first analytically continue $k$ to compex values and apply the method of steepest descent. Then, perform a Borel transformation and resummation of the perturbative partition function, to recover the exact partition function. Surprisingly, the exact partition function is now written as a linear sum of the ``homological blocks'' \\cite{GMP}:\n\\begin{equation}\nZ_{CS}(M_{3}) = \\sum_{a \\, \\text{abelian}} e^{2 \\pi i k CS(\\alpha)}Z_{a}.\n\\label{eqn:abelianDecomposition}\n\\end{equation}\nAbove, $Z_{a}$ gets contributions from \\textit{both} the abelian flat connection $a$ and the irreducible flat connections. In \\cite{GPV}, it was proposed that the partition function in this form allows a ``categorification,'' in a sense that it is a ``S-transform'' of a vector whose entries are integer-coefficient Laurent series in $q = e^{2\\pi i \/k}$.\n\nIn this paper, we provide a supporting example of \\cite{GMP}. First, we perform a resurgence analysis of $SU(2)$ Chern-Simons partition function on a Brieskorn homology sphere, $M_{3} = \\Sigma(2,5,7)$. We start with the exact partition function $Z_{CS}(\\Sigma(2,5,7))$, which is written as a linear sum of ``mock modular forms'' \\cite{HikamiBrieskorn}. Then, we consider its perturbative expansion and perform a Borel resummation. The Borel resummation in effect recovers the full partition function $Z_{CS}(\\Sigma(2,5,7))$, and we observe a Stokes phenomenon which encodes the non-perturbative contributons to the partition function. \n\n\\section{Setups for the Borel resummation in Chern-Simons theory}\nIn this section, we provide necessary notations and setups for the Borel resummation in Chern-Simons theory. A complete and concise review can be found in section 2 of \\cite{GMP}. \n\nLet us start with the exact Chern-Simons partition function $Z_{CS}(M_{3}) = \\int \\mathcal{D}A e^{2 \\pi i k CS(A)}$, integrated over $G=SU(2)$ connections. 
Next, analytically continue $k$ to complex values and apply the method of steepest descent on the Feynman path integral \\cite{Marino, Garoufalidis, GLM, ArgyresUnsal, Kashani-Poor, CostinGaroufalidis, WittenAnalytic, KontsevichPerimeter,KontsevichSCGP,KontsevichTFC}. Then, the integration domain is altered to a middle-dimensional cycle $\\Gamma$ in the moduli space of $G_{\\mathbb{C}} = SL(2,\\mathbb{C})$ connections, which is the union of the steepest descent flows from the saddle points. To elaborate, the moduli space is the universal cover of the space of $SL(2,\\mathbb{C})$ connections modulo ``based'' gauge transformations, in which the gauge transformations are required to be $1$ at the designated points. In sum, the partition function becomes:\n\\begin{equation}\nZ_{CS}(M_{3}) = \\int_{\\Gamma} \\mathcal{D}A e^{2 \\pi i k CS(A)}, \\quad k \\in \\mathbb{C}.\n\\label{eqn:ZoverGamma}\n\\end{equation}\n\n\n\\subsection{Borel resummation basics}\nA partition function of the form of Equation \\ref{eqn:ZoverGamma} is interesting, for its perturbative expansion can be regarded as a \\textit{trans-series} expansion, which can be Borel resummed. Let us provide here the basics of Borel resummation, following \\cite{Marino}. The simplest example of a trans-series is a formal power series solution of Euler's equation:\n$$\\frac{d \\varphi}{dz} + A \\varphi(z) = \\frac{A}{z}, \\quad \\varphi_{0}(z) = \\sum_{n \\geq 0} \\frac{A^{-n}n!}{z^{n+1}}.$$\nOne may view the above trans-series as a perturbative (in $1\/z$) solution to the differential equation, but the solution has zero radius of convergence. By the Borel resummation, however, one can recover a convergent solution. When a trans-series is of the form $\\varphi(z) = \\sum_{n \\geq 0} a_{n}\/z^{n}$ with $a_{n} \\sim n!$, its Borel transformation is defined as:\n$$\\hat{\\varphi}(\\zeta) = \\sum_{n \\geq 1} a_{n} \\frac{\\zeta^{n-1}}{(n-1)!}.$$\nThe Borel transformation $\\hat{\\varphi}(\\zeta)$ is analytic near the origin of the $\\zeta$-plane. If we can analytically continue $\\hat{\\varphi}(\\zeta)$ to a neighborhood of the positive real axis, we can perform the Laplace transform:\n$$S_{0}\\varphi(z) = a_{0} + \\int_{0}^{\\infty} e^{- z \\zeta}\\hat{\\varphi}(\\zeta) d \\zeta,$$\nwhere the subscript ``0'' indicates that the integration contour is along the positive real axis, $\\{ \\arg(\\zeta) = 0 \\}$. It can be easily checked that the asymptotics of the above integral coincides with that of $\\varphi(z)$. When $S_{0}\\varphi(z)$ converges in some region in the $z$-plane, $\\varphi(z)$ is said to be Borel summable, and $S_{0}\\varphi(z)$ is called the Borel sum of $\\varphi(z)$. \n\n\\subsection{Chern-Simons partition function as a trans-series}\n\nSaddle points of the Chern-Simons action form the moduli space of flat connections $\\tilde{M}$,\nwhose connected components $\\tilde{M}_{\\tilde{\\alpha}}$ are indexed by their ``instanton numbers,''\n$$\\tilde{\\alpha} = (\\alpha, CS(\\tilde{\\alpha})) \\in \\mathcal{M}_{\\text{flat}}(M_{3},SL(2, \\mathbb{C})) \\times \\mathbb{Z}.$$ \nHere, $CS(\\tilde{\\alpha})$ denotes the value of the Chern-Simons action at $\\alpha$, without modding out by 1. Following \\cite{GMP}, we will call a flat connection \\textit{abelian} (\\textit{irreducible}, resp.) if its stabilizer under the conjugation action on $Hom(\\pi_{1}(M_{3}),SU(2))$ is $SU(2)$ or $U(1)$ ($\\{ \\pm 1\\}$, resp.). \n\nNow, let $\\Gamma_{\\tilde{\\alpha}}$ be the union of steepest descent flows in $\\tilde{M}$, starting from $\\tilde{\\alpha}$. 
The integration cycle $\\Gamma$ is then given by a linear sum of these ``Lefschetz thimbles.''\n\n\\begin{equation}\n\\Gamma = \\sum_{\\tilde{\\alpha}} n_{\\tilde{\\alpha},\\theta}\\Gamma_{\\tilde{\\alpha},\\theta},\n\\label{eqn:GammaDecomposition}\n\\end{equation}\nwhere $\\theta = \\arg(k)$, and $n_{\\tilde{\\alpha},\\theta} \\in \\mathbb{Z}$ are the \\textit{trans-series} parameters, given by the pairing between the submanifolds of steepest descent and ascent. The value of $\\theta$ is adjusted so that there is no steepest descent flow between the saddle points. Let $I_{\\tilde{\\alpha},\\theta}$ be the contribution from a Lefschetz thimble $\\Gamma_{\\tilde{\\alpha},\\theta}$ to $Z_{CS}(M_{3})$ in Equation \\ref{eqn:ZoverGamma}:\n\n$$I_{\\tilde{\\alpha},\\theta} = \\int_{\\Gamma_{\\tilde{\\alpha},\\theta}} \\mathcal{D}A e^{2 \\pi i k CS(A)},$$\nwhich can be expanded in $1\/k$ near $\\tilde{\\alpha}$ as:\n$$I_{\\tilde{\\alpha},\\theta} \\sim e^{2 \\pi i k CS(\\tilde{\\alpha})}Z^{\\text{pert}}_{\\alpha}, \\quad \\text{where} \\quad Z^{\\text{pert}}_{\\alpha} = \\sum_{n=0}^{\\infty} a_{n}^{\\alpha}k^{-n+(d_{\\alpha}-3)\/2}, \\quad d_{\\alpha} = dim_{\\mathbb{C}}\\tilde{\\mathcal{M}}_{\\tilde{\\alpha}}.$$\nIn sum, we can write the Chern-Simons partition function in the form:\n\n\\begin{equation}\nZ_{CS}(M_{3};k) = \\sum_{\\tilde{\\alpha}} n_{\\tilde{\\alpha},\\theta}I_{\\tilde{\\alpha},\\theta} \\sim \\sum_{\\tilde{\\alpha}} n_{\\tilde{\\alpha},\\theta}e^{2 \\pi i k CS(\\tilde{\\alpha})}Z^{\\text{pert}}_{\\alpha}(k),\n\\label{eqn:trans-series}\n\\end{equation}\nwhich is a trans-series expansion of the Chern-Simons partition function. From the asymptotics given by this trans-series, we can apply Borel resummation and recover the full Chern-Simons partition function. Note that Equation \\ref{eqn:trans-series} depends on the choice of $\\theta = \\arg(k)$. In fact, as we vary $\\theta$, the value of $I_{\\tilde{\\alpha},\\theta}$ jumps as follows:\n\\begin{equation}\nI_{\\tilde{\\alpha},\\theta_{\\tilde{\\alpha}\\tilde{\\beta}}+\\epsilon} = I_{\\tilde{\\alpha},\\theta_{\\tilde{\\alpha}\\tilde{\\beta}}-\\epsilon} + m_{\\tilde{\\alpha}}^{\\tilde{\\beta}}I_{\\tilde{\\beta},\\theta_{\\tilde{\\alpha}\\tilde{\\beta}}-\\epsilon}.\n\\label{eqn:Stokes}\n\\end{equation}\nThis is called the Stokes phenomenon, and it happens near the Stokes rays $\\theta = \\theta_{\\tilde{\\alpha}\\tilde{\\beta}} \\equiv \\frac{1}{i}\\arg(S_{\\tilde{\\alpha}}-S_{\\tilde{\\beta}})$. The trans-series parameters $n_{\\tilde{\\alpha},\\theta}$ jump accordingly to keep $Z_{CS}(M_{3};k)$ continuous in $\\theta$. The coefficients $m_{\\tilde{\\alpha}}^{\\tilde{\\beta}}$ are called Stokes monodromy coefficients.\n\n\\section{Exact partition function $Z_{CS}(\\Sigma(2,5,7))$}\n\\label{sec:exact}\nBefore going into the resurgence analysis of $Z_{CS}(\\Sigma(2,5,7))$, let us provide here the exact partition function $Z_{CS}(\\Sigma(2,5,7))$. We first compute the Witten-Reshetikhin-Turaev (WRT) invariant $\\tau_{k}(\\Sigma(p_{1},p_{2},p_{3}))$ and then write the exact $SU(2)$ Chern-Simons partition function in terms of WRT invariants as follows:\n\\begin{equation}\nZ_{CS}(\\Sigma(p_{1},p_{2},p_{3})) = \\frac{\\tau_{k}(\\Sigma(p_{1},p_{2},p_{3}))}{\\tau_{k}(S^{2} \\times S^{1})}.\n\\label{eqn:WRTandCSpartition}\n\\end{equation}\nHere, $k$ is the level of Chern-Simons theory.\\footnote{To be more precise, $k$ must be replaced by $k+2$. 
However, our interest in this paper is to recover the full partition function from a perturbative expansion in $1\/k$. Therefore, we will assume $k$ to be large, and replace $k+2$ with $k$ here.}\n\nWRT invariants for Seifert homology spheres can be computed from their surgery presentations \\cite{LawrenceRozansky}. In this paper, we focus on a specific type of Seifert homology spheres, the so-called Brieskorn homology spheres. A Brieskorn manifold $\\Sigma(p_{1},p_{2},p_{3})$ is defined as the intersection of the complex unit sphere $|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{2}=1$ and the hypersurface $z_{1}^{p_{1}}+z_{2}^{p_{2}}+z_{3}^{p_{3}}=0$. When $p_{1},p_{2},p_{3}$ are pairwise coprime integers, $\\Sigma(p_{1},p_{2},p_{3})$ is a homology sphere with three singular fibers. From the surgery presentation of $\\Sigma(p_{1},p_{2},p_{3})$, we can write its WRT invariant, which can be written as a linear sum of mock modular forms \\cite{HikamiBrieskorn, LawrenceZagier}. In particular, when $1\/p_{1}+1\/p_{2}+1\/p_{3} < 1$, we can write:\n\\begin{equation}\ne^{\\frac{2 \\pi i}{k}(\\frac{\\phi(p_{1},p_{2},p_{3})}{4}-\\frac{1}{2})}(e^{\\frac{2 \\pi i}{k}} -1) \\tau_{k}(\\Sigma(p_{1},p_{2},p_{3})) = \\frac{1}{2}\\tilde{\\Psi}_{p_{1}p_{2}p_{3}}^{(1,1,1)}(1\/k).\n\\label{eqn:WRTmodular}\n\\end{equation}\nLet us decode Equation \\ref{eqn:WRTmodular}. First of all, $\\tau_{k}(\\Sigma(p_{1},p_{2},p_{3}))$ is the desired WRT invariant, normalized such that $\\tau_{k}(S^{3}) = 1$ and $\\tau_{k}(S^{2} \\times S^{1}) = \\sqrt{\\frac{k}{2}}\\frac{1}{\\sin(\\pi \/ k)}.$ Next, the number $\\phi(p_{1},p_{2},p_{3})$ is defined as:\n\\begin{gather}\n\\phi(p_{1},p_{2},p_{3}) = 3 - \\frac{1}{p_{1}p_{2}p_{3}} + 12 (s(p_{1}p_{2},p_{3})+s(p_{2}p_{3},p_{1})+s(p_{3}p_{1},p_{2})), \\nonumber \\\\\n\\text{where} \\quad s(a,b) = \\frac{1}{4b}\\sum_{n=1}^{b-1}\\cot(\\frac{n \\pi}{b})\\cot(\\frac{n a \\pi}{b}). \\nonumber\n\\end{gather}\nFinally, $\\tilde{\\Psi}_{p_{1}p_{2}p_{3}}^{(1,1,1)}$ is a linear sum of mock modular forms $\\tilde{\\Psi}_{p_{1}p_{2}p_{3}}^{a}$, namely:\n\\begin{gather}\n\\tilde{\\Psi}_{P}^{a}(1\/k) = \\sum_{n \\geq 0} \\psi_{2P}^{a}(n)q^{n^{2}\/4P}, \\quad \\text{where} \\quad \\psi_{2P}^{a}(n) = \\begin{cases}\n\\pm 1 & \\text{$n \\equiv \\pm a$ mod $2P$} \\\\ 0 & \\text{otherwise}\n\\end{cases} \\label{eqn:modular} \\\\\n\\tilde{\\Psi}_{p_{1}p_{2}p_{3}}^{(1,1,1)}(1\/k) = -\\frac{1}{2}\\sum_{\\epsilon_{1},\\epsilon_{2},\\epsilon_{3} = \\pm 1} \\epsilon_{1}\\epsilon_{2}\\epsilon_{3}\\tilde{\\Psi}_{p_{1}p_{2}p_{3}}^{p_{1}p_{2}p_{3}(1+\\sum_{j}\\epsilon_{j}\/p_{j})}(1\/k), \\label{eqn:linSumModular}\n\\end{gather}\nwhere $q$ in Equation \\ref{eqn:modular} is given by $e^{2 \\pi i \/k }$.\n\nNow, let us restrict ourselves to $(p_{1},p_{2},p_{3}) = (2,5,7)$. First of all, $p_{1}=2,p_{2}=5,p_{3}=7$ are relatively prime, so $\\Sigma(2,5,7)$ is a homology sphere. Next, $1\/p_{1}+1\/p_{2}+1\/p_{3} < 1$, so we can write the WRT invariant as a linear sum of mock modular forms:\n\n\\begin{align}\ne^{\\frac{2 \\pi i}{k}(\\frac{\\phi(2,5,7)}{4}-\\frac{1}{2})}(e^{\\frac{2 \\pi i}{k}} -1) \\tau_{k}(\\Sigma(2,5,7)) &= \\frac{1}{2}\\tilde{\\Psi}_{70}^{(1,1,1)}(1\/k) \\nonumber \\\\\n&= \\frac{1}{2}(\\tilde{\\Psi}_{70}^{11} - \\tilde{\\Psi}_{70}^{31} - \\tilde{\\Psi}_{70}^{39} + \\tilde{\\Psi}_{70}^{59})(1\/k),\n\\label{eqn:WRTinvariantModularDecomposition}\n\\end{align}\nwhere $\\phi(2,5,7) = -\\frac{19}{70}$. 
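For concreteness, the first few terms of this combination can be read off directly from Equation \\ref{eqn:modular}: the smallest integers $n$ with $\\psi_{140}^{a}(n) \\neq 0$ for $a = 11, 31, 39, 59$ are $n = 11, 31, 39, 59, 81, \\ldots$, so that\n$$(\\tilde{\\Psi}_{70}^{11} - \\tilde{\\Psi}_{70}^{31} - \\tilde{\\Psi}_{70}^{39} + \\tilde{\\Psi}_{70}^{59})(1\/k) = q^{121\/280}\\left(1 - q^{3} - q^{5} + q^{12} - q^{23} + \\cdots\\right),$$\nwith $q = e^{2 \\pi i \/k}$ as above; the same series reappears, up to an overall power of $q$, in the homological block $\\hat{Z}_{\\alpha_{0}}$ obtained in the last section. 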
From Equations \\ref{eqn:WRTandCSpartition} and \\ref{eqn:WRTinvariantModularDecomposition}, we can explicitly write the exact Chern-Simons partition function $Z_{CS}(\\Sigma(2,5,7))$ as follows:\n\n\\begin{equation}\nZ_{CS}(\\Sigma(2,5,7)) = \\frac{1}{i q^{\\phi(2,5,7)\/4}\\sqrt{8k}}(\\tilde{\\Psi}_{70}^{11} - \\tilde{\\Psi}_{70}^{31} - \\tilde{\\Psi}_{70}^{39} + \\tilde{\\Psi}_{70}^{59})(1\/k).\n\\label{eqn:CSpartition}\n\\end{equation} \n\n\\section{Asymptotics of $Z_{CS}(\\Sigma(2,5,7))$}\n\\label{sec:asymptotics}\nBefore proceeding to the Borel transform and resummation of the exact partition function, let us briefly consider its asymptotics in the large $k$ limit. This can be most easily done by considering the ``mock modular'' property of mock modular forms:\n\\begin{gather}\n\\tilde{\\Psi}_{p}^{a}(q) = -\\sqrt{\\frac{k}{i}} \\sum_{b=1}^{p-1}\\sqrt{\\frac{2}{p}}\\sin{\\frac{\\pi a b}{p}}\\tilde{\\Psi}_{p}^{b}(e^{-2 \\pi i k}) + \\sum_{n \\geq 0} \\frac{L(-2n,\\psi_{2p}^{a})}{n!}\\bigg(\\frac{\\pi i}{2 p k}\\bigg)^{n}, \\label{eqn:mockmodular} \\\\\n\\text{where} \\quad L(-n,\\psi_{2p}^{a}) = -\\frac{(2p)^{n}}{n+1}\\sum_{m=1}^{2p}\\psi_{2p}^{a}(m)B_{n+1}\\bigg(\\frac{m}{2p}\\bigg),\n\\end{gather}\nand $B_{n+1}$ stands for the $(n+1)$-th Bernoulli polynomial. For integer values of $k$, \n$$\\tilde{\\Psi}^{b}_{p}(e^{-2 \\pi i k}) = (1-\\tfrac{b}{p})e^{-\\frac{2 \\pi i k b^{2}}{4p}},$$ and in the large $k$ limit, we may consider the second summation in Equation \\ref{eqn:mockmodular} as ``perturbative'' contributions, while the first summation stands for ``non-perturbative'' contributions. Therefore, the asymptotics of $Z_{CS}(\\Sigma(2,5,7))$ can be written as ($p=70$, below):\n\\begin{multline}\ni q^{-19\/280}\\sqrt{8k}Z_{CS}(\\Sigma(2,5,7)) = \\\\\n -\\sqrt{\\frac{k}{i}} \\sum_{b=1}^{70-1} \\sqrt{\\frac{2}{70}}\\bigg( \\sin\\frac{11b \\pi}{70} - \\sin\\frac{31b \\pi}{70} - \\sin\\frac{39b \\pi}{70} + \\sin\\frac{59b \\pi}{70}\\bigg)(1-\\tfrac{b}{p})e^{-\\frac{2 \\pi i k b^{2}}{4p}} \\\\\n+ i q^{-19\/280}\\sqrt{8k}Z_{\\text{pert}}(1\/k),\n\\label{eqn:asymptotics}\n\\end{multline}\nwhere the perturbative contributions $i q^{-19\/280}\\sqrt{8k}Z_{\\text{pert}}(1\/k)$ can be explicitly written as:\n\\begin{gather}\nZ_{\\text{pert}}(1\/k) = Z^{11}_{\\text{pert}}(1\/k)-Z^{31}_{\\text{pert}}(1\/k)-Z^{39}_{\\text{pert}}(1\/k)+Z^{59}_{\\text{pert}}(1\/k), \\nonumber \\\\\n\\text{where} \\quad i \\sqrt{8} q^{-19\/280} Z^{a}_{\\text{pert}}(1\/k) = \\sum_{n \\geq 0} \\frac{b_{n}^{a}}{k^{n+1\/2}} \\quad \\text{for} \\quad a = 11, 31, 39, 59 \\nonumber \\\\\n\\text{and} \\quad b_{n}^{a} = \\frac{L(-2n,\\psi_{2p}^{a})}{n!}\\bigg(\\frac{\\pi i}{2p}\\bigg)^{n}.\n\\end{gather}\n\nOne can easily see that the sum $\\big( \\sin\\frac{11b \\pi}{70} - \\sin\\frac{31b \\pi}{70} - \\sin\\frac{39b \\pi}{70} + \\sin\\frac{59b \\pi}{70}\\big)$ in Equation \\ref{eqn:asymptotics} is nonzero if and only if $b$ is not divisible by $2,5$ or $7$. We will later see that these $b$'s correspond to the positions of the poles in the Borel plane. \n\n\n\\section{Resurgence analysis of $Z_{CS}(\\Sigma(2,5,7))$}\nIn this section, we perform a resurgence analysis of the partition function and decompose $Z_{CS}(\\Sigma(2,5,7))$ into the homological blocks:\n$$Z_{CS}(\\Sigma(2,5,7)) = \\sum_{\\alpha} n_{\\alpha}e^{2 \\pi i k CS(\\alpha)} Z_{\\alpha},$$\nwhere $\\alpha$ runs over the abelian\/reducible flat connections. 
Since $Z_{\\alpha}$ gets contributions from both the abelian\/reducible flat connection $\\alpha$ and the irreducible flat connections, it is necessary to study how the contributions from the irreducible flat connections regroup themselves into the homological blocks. We accomplish the goal as follows. First, we study the Borel transform and resummation of the partition function and identify the contributions from the irreducible flat connections. Then, the contributions from the irreducible flat connections are shown to enter the homological blocks via Stokes monodromy coefficients.\n\n\\subsection{Borel transform and resummation of $Z_{CS}(\\Sigma(2,5,7))$}\nRecall that the perturbative contributions $Z^{a}_{\\text{pert}}(1\/k)$ have the following asymptotics:\n\\begin{equation}\ni\\sqrt{8}q^{\\phi(2,5,7)\/4}Z^{a}_{\\text{pert}}(1\/k) = \\sum_{n \\geq 0} \\frac{b^{a}_{n}}{k^{(n+1\/2)}}.\n\\end{equation}\nNow, consider its Borel transform:\n\\begin{align}\nBZ^{a}_{\\text{pert}}(\\zeta) &= \\sum_{n \\geq 0} \\frac{b^{a}_{n}}{\\Gamma(n+1\/2)} \\zeta^{n-1\/2} \\\\\n&= \\frac{1}{\\sqrt{\\zeta}} \\sum_{n \\geq 0} b^{a}_{n}\\frac{4^{n}}{\\sqrt{\\pi}} \\frac{n!}{(2n)!}\\zeta^{n} \\quad \\bigg(\\because \\Gamma(n+1\/2) = \\frac{\\sqrt{\\pi}}{4^{n}}\\frac{(2n)!}{n!} \\bigg) \\\\\n&= \\frac{1}{\\sqrt{\\pi \\zeta}} \\sum_{n \\geq 0} c^{a}_{n} \\frac{n!}{(2n)!} z^{2n}, \\quad \\text{where} \\quad z = \\sqrt{\\frac{2 \\pi i}{p}\\zeta}.\n\\end{align}\nIn the last equality, we have simply changed the variable from $\\zeta$ to $z$ and absorbed all other factors into the coefficients $c_{n}^{a}$.\n\nAlthough the coefficients $c^{a}_{n}$ only appear in the perturbative piece of the partition function, we can recover the exact partition function from them. Let us first consider generating functions which package the coefficients $c^{a}_{n}$:\n$$\\frac{\\sinh((p-a)z)}{\\sinh(pz)} = \\sum_{n \\geq 0} c^{a}_{n} \\frac{n!}{(2n)!} z^{2n} = \\sum_{n \\geq 0} \\psi_{2p}^{a}(n) e^{-nz}.$$ \nNow we can write the mock modular forms in an integral form, using these generating functions:\n\\begin{gather}\n\\frac{\\sinh(p-a)\\eta}{\\sinh p \\eta} = \\sum_{n \\geq 0} \\psi_{2p}^{a}(n)e^{-n \\eta} \\\\\n\\Rightarrow \\quad \\int_{i \\mathbb{R} + \\epsilon} d \\eta \\frac{\\sinh(p-a)\\eta}{\\sinh p \\eta} e^{-\\frac{k p \\eta^{2}}{2 \\pi i}} = \\int_{i \\mathbb{R}+\\epsilon} d \\eta \\sum_{n \\geq 0} \\psi_{2p}^{a}(n)e^{-n \\eta} e^{-\\frac{k p \\eta^{2}}{2 \\pi i}} \\label{eqn:Borel1} \\\\\n\\Rightarrow \\quad \\int_{i \\mathbb{R} + \\epsilon} d \\eta \\frac{\\sinh(p-a)\\eta}{\\sinh p \\eta} e^{-\\frac{k p \\eta^{2}}{2 \\pi i}} = \\sqrt{\\frac{2 \\pi^{2} i}{p}} \\frac{1}{\\sqrt{k}} \\tilde{\\Psi}_{p}^{a}(q). \\label{eqn:Borel2}\n\\end{gather}\nIn the second line, the integral is taken along a line $Re[\\eta] = \\epsilon > 0$, where the integral converges, and the third line is simply a Gaussian integral. The change of variables\n$$ \\zeta = \\frac{p \\eta^{2}}{2 \\pi i}$$\nalters the integration contour from a single line to the union of two rays from the origin, $i e^{i \\delta} \\mathbb{R}_{+}$ and $i e^{-i \\delta} \\mathbb{R}_{+}$. In sum, \n\\begin{equation}\n\\frac{1}{\\sqrt{k}} \\tilde{\\Psi}_{p}^{a}(q) = \\frac{1}{2}\\bigg(\\int_{i e^{i \\delta} \\mathbb{R}_{+}} + \\int_{i e^{-i \\delta} \\mathbb{R}_{+}} \\bigg) \\frac{d \\zeta}{\\sqrt{\\pi \\zeta}} \\frac{\\sinh \\bigg( (p-a)\\sqrt{\\frac{2 \\pi i \\zeta}{p}}\\bigg)}{\\sinh \\bigg(p \\sqrt{\\frac{2 \\pi i \\zeta}{p}}\\bigg)} e^{-k\\zeta}. 
\\label{eqn:BorelZeta}\n\\end{equation}\n\nThus we have recovered the entire mock modular form from its perturbative expansion. Since the partition function is a linear sum of mock modular forms, this implies that the Borel resummation of $BZ_{\\text{pert}}$ will return the exact partition function. Furthermore, the poles of the generating functions $\\sinh((p-a)z)\/\\sinh(pz)$ encode the information about the non-perturbative contributions, as we exhibit below.\n\nFirst of all, since $Z_{CS}(\\Sigma(2,5,7)) \\sim (\\tilde{\\Psi}_{70}^{11} - \\tilde{\\Psi}_{70}^{31} - \\tilde{\\Psi}_{70}^{39} + \\tilde{\\Psi}_{70}^{59})(q)$, the Borel transform of $Z_{\\text{pert}}$ is given by:\n\\begin{equation}\n\\frac{\\sinh(59\\eta)-\\sinh(39\\eta)-\\sinh(31\\eta)+\\sinh(11\\eta)}{\\sinh(70\\eta)} = \\frac{4\\sinh(35\\eta)\\sinh(14\\eta)\\sinh(10\\eta)}{\\sinh(70\\eta)}.\n\\label{eqn:psiBorel}\n\\end{equation}\nNote that the RHS of Equation \\ref{eqn:psiBorel} has only simple poles at $\\eta = n \\pi i\/ 70$ for $n$ non-divisible by 2, 5, or 7. In particular, the poles are aligned on the imaginary axis, so we choose the same integration contours as in Equations \\ref{eqn:Borel1} - \\ref{eqn:BorelZeta}. The Borel resummation of Equation \\ref{eqn:psiBorel} is then the average of Borel sums along the two rays depicted in Figure \\ref{fig:integrationContour}(a):\n\\begin{equation}\nZ_{CS}(\\Sigma(2,5,7)) = \\frac{1}{2} \\bigg[ S_{\\frac{\\pi}{2} - \\delta}Z_{\\text{pert}}(1\/k) + S_{\\frac{\\pi}{2} + \\delta}Z_{\\text{pert}}(1\/k) \\bigg].\n\\label{eqn:Borelsums}\n\\end{equation}\n\n\\begin{figure} [htb]\n\\centering\n\\includegraphics{integrationContour}\n\\caption{(a) An integration contour in the $\\zeta$-plane, made of two rays from the origin. Dots represent the poles. (b) An equivalent integration contour. The contribution from the integration along the real axis must be doubled.}\n\\label{fig:integrationContour}\n\\end{figure}\n\nTo evaluate the RHS of Equation \\ref{eqn:Borelsums}, we integrate along an equivalent contour in Figure \\ref{fig:integrationContour}(b). Note that as we change to the contour in Figure \\ref{fig:integrationContour}(b), the ray $i e^{-i \\delta}\\mathbb{R}_{+}$ has crossed the poles on the imaginary axis, towards the positive real axis. As a result, the poles contribute to the Borel sums with residues, which is precisely a Stokes phenomenon. Since each pole is located at $\\eta = n \\pi i \/ 70$, its residue includes a factor of $e^{-k \\zeta} = e^{-k \\frac{70 \\eta^{2}}{2 \\pi i}} = e^{2 \\pi i k (- \\frac{n^{2}}{280})}$. 
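Incidentally, the factorized form on the RHS of Equation \\ref{eqn:psiBorel} follows from elementary identities: $\\sinh(59\\eta)+\\sinh(11\\eta)=2\\sinh(35\\eta)\\cosh(24\\eta)$ and $\\sinh(39\\eta)+\\sinh(31\\eta)=2\\sinh(35\\eta)\\cosh(4\\eta)$, while $\\cosh(24\\eta)-\\cosh(4\\eta)=2\\sinh(14\\eta)\\sinh(10\\eta)$, so the numerator indeed equals $4\\sinh(35\\eta)\\sinh(14\\eta)\\sinh(10\\eta)$; this makes the location of the simple poles at $\\eta = n \\pi i\/70$ with $n$ coprime to $70$ manifest. 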
Shortly, we will exhibit that these factors precisely correspond to the Chern-Simons instanton actions, so let us regroup the poles ($n$ modulo 140) by their instanton actions:\n\n\\begin{itemize}\n\\item $n = 9, 19, 51, 61, 79, 89, 121, 131$, for which $CS = -\\frac{9^{2}}{280}$ and residues $\\{1,1,1,1,-1,-1,-1,-1\\}$ with overall factor $\\frac{i}{35}(\\cos \\frac{3 \\pi}{35} - \\sin \\frac{\\pi}{70})$.\n\\item $n = 3, 17, 53, 67, 73, 87, 123, 137$, for which $CS = -\\frac{3^{2}}{280}$ and residues $\\{ -1, -1, -1,-1,1,1,1,1\\}$ with overall factor $\\frac{i}{35}(\\cos \\frac{\\pi}{35} + \\cos \\frac{6 \\pi}{35})$.\n\\item $n = 23, 33, 37, 47, 93, 103, 107, 117$, for which $CS = -\\frac{23^{2}}{280}$ and residues $\\{1, 1, 1, 1,-1,-1,-1,-1\\}$ with overall factor $\\frac{i}{35}(\\cos \\frac{4 \\pi}{35} + \\sin \\frac{13 \\pi}{70})$.\n\\item $n = 13, 27, 43, 57, 83, 97, 113, 127$, for which $CS = -\\frac{13^{2}}{280}$ and residues $\\{ -1, -1, -1, -1,1,1,1,1\\}$ with overall factor $\\frac{i}{35}(\\sin \\frac{3 \\pi}{70} + \\sin \\frac{17 \\pi}{70})$.\n\\item $n = 11, 31, 39, 59, 81, 101, 109, 129$, for which $CS = -\\frac{11^{2}}{280}$ and residues $\\{1, -1, -1, 1, -1,1,1,-1\\}$ with overall factor $\\frac{i}{35}(\\cos \\frac{8 \\pi}{35} + \\sin \\frac{9 \\pi}{70})$.\n\\item $n = 1, 29, 41, 69, 71, 99, 111, 139$, for which $CS = -\\frac{1^{2}}{280}$ and residues $\\{1,-1,-1,1,-1,1,1,-1\\}$ with overall factor $\\frac{i}{35}(\\cos \\frac{2 \\pi}{35} - \\sin \\frac{11 \\pi}{70})$.\n\\end{itemize}\n\nThe top four groups of poles correspond to the four irreducible $SU(2)$ flat connections, while the remaining two correspond to the complex flat connections. To see this, first consider the moduli space of flat connections $\\mathcal{M}_{\\text{flat}}(\\Sigma(2,5,7), SL(2,\\mathbb{C}))$. Since $\\Sigma(2,5,7)$ is a homology 3-sphere, it has only one abelian flat connection $\\alpha_{0}$, which is trivial. Next, there are total $\\frac{(2-1)(5-1)(7-1)}{4} = 6$ irreducible $SL(2,\\mathbb{C})$ flat connections, four of which are conjugate to $SU(2)$ and the remaining two are ``complex'' (conjugate to $SL(2,\\mathbb{R})$) \\cite{KitanoYamaguchi,BodenCurtis,FintushelStern}. To compute their Chern-Simons instanton actions, we characterize all six flat connections by their ``rotation angles,'' which we will briefly explain here. Consider the following presentation of the fundamental group of $\\Sigma(2,5,7)$.\n\\begin{equation}\n\\pi_{1}(\\Sigma(2,5,7)) = \\langle x_{1},x_{2},x_{3},h \\, | \\, h \\, \\text{central}, x_{1}^{2} = h^{-1}, x_{2}^{5} = h^{-9}, x_{3}^{7} = h^{-5}, x_{1}x_{2}x_{3} = h^{-3} \\rangle.\n\\label{eqn:fundPresentation}\n\\end{equation}\nWhen a representation $\\alpha: \\pi_{1}(\\Sigma(2,5,7)) \\rightarrow SL(2,\\mathbb{C})$ is conjugate in $SU(2)$, $\\alpha(h)$ is equal to $\\pm 1$, and the conjugacy classes of $\\alpha(x_{j})$ can be represented in the form $\\bigl(\\begin{smallmatrix}\n\\lambda_{j} & 0 \\\\ 0 & \\lambda_{j}^{-1}\n\\end{smallmatrix} \\bigr)$ for some $| \\lambda_{j} | = 1$. There are four triples $(\\lambda_{1}, \\lambda_{2}, \\lambda_{3})$ satisfying the relations in Equation \\ref{eqn:fundPresentation}:\n\\begin{equation}\n(l_{1},l_{2},l_{3}) = (1,1,3), \\, (1,3,1), \\, (1,3,3), \\, (1,3,5) \\quad \\text{where} \\quad \\lambda_{j} = e^{\\pi i l_{j} \/ p_{j}}. \\label{eqn:flatCon}\n\\end{equation} \nEach triple corresponds to one of the four irreducible $SU(2)$ flat connections, which we will call $\\alpha_{1}, \\alpha_{2}, \\alpha_{3}$ and $\\alpha_{4}$. 
From the rotation angles of an irreducible flat connection $A$, we can read off its Chern-Simons instanton action:\n\\begin{gather}\nCS(A) = -\\frac{p_{1}p_{2}p_{3}}{4}(1+\\sum_{j}l_{j}\/p_{j})^{2} \\quad (\\text{mod} \\; 1) \\nonumber \\\\\n\\Rightarrow \\quad CS(\\alpha_{1}) = -\\frac{9^{2}}{280}, \\quad CS(\\alpha_{2}) = -\\frac{3^{2}}{280}, \\quad CS(\\alpha_{3}) = -\\frac{23^{2}}{280}, \\quad CS(\\alpha_{4}) = -\\frac{13^{2}}{280}, \\label{eqn:CSinstanton}\n\\end{gather}\nwhich is in agreement with the instanton actions of the poles in the Borel plane. Likewise, one can compute the Chern-Simons instanton actions of the two complex flat connections $\\alpha_{5}$ and $\\alpha_{6}$, \n$$CS(\\alpha_{5}) = -\\frac{11^{2}}{280}, \\quad CS(\\alpha_{6}) = -\\frac{1^{2}}{280}.$$\n\nNow, let us sum the residues to reproduce the non-perturbative contributions in Equation \\ref{eqn:asymptotics}. When $k$ is an integer, the residues from the poles corresponding to the $SU(2)$ connection $\\alpha_{1}$ are summed into:\n\\begin{multline}\n\\frac{i}{35}(\\cos \\frac{3 \\pi}{35} - \\sin \\frac{\\pi}{70}) e^{-2 \\pi i k \\frac{9^{2}}{280}} \\bigg[ \\sum_{n \\equiv \\pm 9 \\, (mod \\, 140)} \\pm 1 \\, + \\sum_{n \\equiv \\pm 19 \\, (mod \\, 140)} \\pm 1 \\\\\n+ \\sum_{n \\equiv \\pm 51\\, (mod \\, 140)} \\pm 1 \\, + \\sum_{n \\equiv \\pm 61\\, (mod \\, 140)} \\pm 1 \\bigg].\n\\label{eqn:alpha1Contribution}\n\\end{multline}\nVia zeta-function regularization $\\sum_{n \\equiv \\pm a \\, (mod \\, 2p)} \\pm 1 = 1 - \\frac{a}{p}$, we can rewrite Equation \\ref{eqn:alpha1Contribution} as follows:\n\\begin{gather*}\n\\frac{i}{35}(\\cos \\frac{3 \\pi}{35} - \\sin \\frac{\\pi}{70}) e^{-2 \\pi i k \\frac{9^{2}}{280}}\\bigg( (1-\\frac{9}{70}) + (1-\\frac{19}{70}) + (1-\\frac{51}{70}) + (1-\\frac{61}{70}) \\bigg) \\\\\n= \\frac{2i}{35}(\\cos \\frac{3 \\pi}{35} - \\sin \\frac{\\pi}{70}) e^{-2 \\pi i k \\frac{9^{2}}{280}} = n_{\\alpha_{1}}Z_{\\text{pert}}^{\\alpha_{1}}e^{2 \\pi i k CS(\\alpha_{1})},\n\\end{gather*}\nwhere $n_{\\alpha_{1}}$ is the trans-series parameter. Similarly for connections $\\alpha_{2}, \\alpha_{3}$ and $\\alpha_{4}$, \n\\begin{itemize}\n\\item $n_{\\alpha_{2}}Z_{\\text{pert}}^{\\alpha_{2}}e^{2 \\pi i k CS(\\alpha_{2})} = -\\frac{2i}{35}(\\cos \\frac{\\pi}{35} + \\cos \\frac{6 \\pi}{35})e^{-2 \\pi i k \\frac{3^{2}}{280}}$.\n\\item $n_{\\alpha_{3}}Z_{\\text{pert}}^{\\alpha_{3}}e^{2 \\pi i k CS(\\alpha_{3})} = \\frac{2i}{35}(\\cos \\frac{4 \\pi}{35} + \\sin \\frac{13 \\pi}{70})e^{-2 \\pi i k \\frac{23^{2}}{280}}$.\n\\item $n_{\\alpha_{4}}Z_{\\text{pert}}^{\\alpha_{4}}e^{2 \\pi i k CS(\\alpha_{4})} = -\\frac{2i}{35}(\\sin \\frac{3 \\pi}{70} + \\sin \\frac{17 \\pi}{70})e^{-2 \\pi i k \\frac{13^{2}}{280}}$.\n\\end{itemize}\nThe contributions from the two complex connections, on the other hand, vanish. Notice that the poles grouped by their instanton actions correspond to the $b$'s with non-vanishing contributions in Equation \\ref{eqn:asymptotics}. Furthermore, the sum of residues is proportional to the sum $\\big( \\sin\\frac{11b \\pi}{70} - \\sin\\frac{31b \\pi}{70} - \\sin\\frac{39b \\pi}{70} + \\sin\\frac{59b \\pi}{70}\\big)$ at each $b$, so the Borel sum correctly captures the non-perturbative contributions to the exact partition function. \n\n\\section{Homological block decomposition of $Z_{CS}(\\Sigma(2,5,7))$ and the modular transform}\nWe conclude this paper by writing the partition function in a categorification-friendly form, as was advertised in Equation \\ref{eqn:abelianDecomposition}. 
To summarize, we started with the exact partition function $Z_{CS}(\\Sigma(2,5,7))$, considered its perturbative expansion and performed a Borel resummation. Although our example is a homology 3-sphere and has only one abelian flat connection, more generally the Borel sum results in a decomposition into homological blocks \\cite{GMP}:\n$$Z_{CS}(\\Sigma(2,5,7)) = \\sum_{a} e^{2 \\pi i CS_{a}}Z_{a},$$\nwhere the summation runs over abelian flat connections. Each ``homological block'' $Z_{a}$ gets contributions from both the abelian flat connection $a$ and the irreducible $SU(2)$ flat connections. How the irreducible flat connections regroup themselves into each homological block is encoded in the Stokes monodromy coefficients as follows:\n\\begin{gather}\nZ_{CS}(\\Sigma(2,5,7)) = \\frac{1}{2}\\bigg[S_{\\frac{\\pi}{2}-\\epsilon}Z_{\\text{pert}}(k) + S_{\\frac{\\pi}{2}+\\epsilon}Z_{\\text{pert}}(k) \\bigg] = Z^{\\alpha_{0}}_{\\text{pert}} + \\frac{1}{2} \\sum_{\\tilde{\\beta}}m_{\\tilde{\\beta}}^{(\\alpha_{0},0)}e^{2 \\pi i k S_{\\tilde{\\beta}}}Z^{\\beta}_{\\text{pert}} \\nonumber \\\\\n=\\sum_{\\tilde{\\beta}} n_{\\tilde{\\beta},0}e^{2 \\pi i k S_{\\tilde{\\beta}}}Z^{\\beta}_{\\text{pert}}, \\quad \\text{where} \\quad n_{\\tilde{\\beta}} = \\begin{cases} 1 & \\tilde{\\beta}=(\\alpha_{0},0) \\\\ \\frac{1}{2}m_{\\tilde{\\beta}}^{(\\alpha_{0},0)} & \\text{otherwise.} \\end{cases} \\label{eqn:trans-seriesCoeff}\n\\end{gather}\n\\begin{equation}\nm_{\\tilde{\\beta}}^{(\\alpha_{0},0)} = \\begin{cases}\n1 & \\tilde{\\beta} = (\\alpha_{1}, -n^{2}\/280), \\quad \\text{for} \\quad n = 9, 19, 51, 61 \\quad (\\text{mod} \\, 140) \\\\\n-1 & \\tilde{\\beta} = (\\alpha_{1}, -n^{2}\/280), \\quad \\text{for} \\quad n = 79, 89, 121, 131 \\quad (\\text{mod} \\, 140) \\\\\n1 & \\tilde{\\beta} = (\\alpha_{2}, -n^{2}\/280), \\quad \\text{for} \\quad n = 73, 87, 123, 137 \\quad (\\text{mod} \\, 140) \\\\\n-1 & \\tilde{\\beta} = (\\alpha_{2}, -n^{2}\/280), \\quad \\text{for} \\quad n = 3, 17, 53, 67 \\quad (\\text{mod} \\, 140) \\\\\n1 & \\tilde{\\beta} = (\\alpha_{3}, -n^{2}\/280), \\quad \\text{for} \\quad n = 23, 33, 37, 47 \\quad (\\text{mod} \\, 140) \\\\\n-1 & \\tilde{\\beta} = (\\alpha_{3}, -n^{2}\/280), \\quad \\text{for} \\quad n = 93, 103, 107, 117 \\quad (\\text{mod} \\, 140) \\\\\n1 & \\tilde{\\beta} = (\\alpha_{4}, -n^{2}\/280), \\quad \\text{for} \\quad n = 83, 97, 113, 127 \\quad (\\text{mod} \\, 140) \\\\\n-1 & \\tilde{\\beta} = (\\alpha_{4}, -n^{2}\/280), \\quad \\text{for} \\quad n = 13, 27, 43, 57 \\quad (\\text{mod} \\, 140) \\\\\n1 & \\tilde{\\beta} = (\\alpha_{5}, -n^{2}\/280), \\quad \\text{for} \\quad n = 11, 59, 101, 109 \\quad (\\text{mod} \\, 140) \\\\\n-1 & \\tilde{\\beta} = (\\alpha_{5}, -n^{2}\/280), \\quad \\text{for} \\quad n = 31, 39, 81, 129 \\quad (\\text{mod} \\, 140) \\\\\n1 & \\tilde{\\beta} = (\\alpha_{6}, -n^{2}\/280), \\quad \\text{for} \\quad n = 1, 69, 99, 111 \\quad (\\text{mod} \\, 140) \\\\\n-1 & \\tilde{\\beta} = (\\alpha_{6}, -n^{2}\/280), \\quad \\text{for} \\quad n = 29, 41, 71, 139 \\quad (\\text{mod} \\, 140) \\end{cases}\n\\end{equation}\nThe formula \\ref{eqn:trans-seriesCoeff} holds for any Seifert manifold with three singular fibers (which includes our example $\\Sigma(2,5,7)$) \\cite{CostinGaroufalidis}. In \\cite{GPV}, it was conjectured that there is a ``modular transform'' of the homological blocks $Z_{a}$, which turns it into a ``categorification-friendly'' form. 
Namely,\n$$Z_{a} = \\frac{1}{i \\sqrt{2k}} \\sum_{b} S_{ab} \\hat{Z}_{b},$$\nfor some $k$-independent $S_{ab}$. Above, $b$ runs over the abelian flat connections, and each $\\hat{Z}_{b}$ is an element of $q^{\\Delta_{b}}\\mathbb{Z}[[q]]$ for some $\\Delta_{b} \\in \\mathbb{Q}$. Suppose the exact partition function is a linear sum of mock modular forms, and there are multiple abelian flat connections. Then, a homological block decomposition regroups the mock modular forms (see \\cite{GPV} for examples). In our example, however, there is only one abelian flat connection $\\alpha_{0}$, because $\\Sigma(2,5,7)$ is a homology sphere. Therefore, it suffices to find $\\hat{Z}_{\\alpha_{0}}$ which is an element of $q^{\\Delta_{\\alpha_{0}}} \\mathbb{Z}[[q]]$. From the exact partition function\n$$i q^{\\phi(2,5,7)\/4}\\sqrt{2k}Z_{CS}(\\Sigma(2,5,7)) = \\frac{1}{2}(\\tilde{\\Psi}_{70}^{11} - \\tilde{\\Psi}_{70}^{31} - \\tilde{\\Psi}_{70}^{39} + \\tilde{\\Psi}_{70}^{59})(q),$$\nand the definition of the mock modular forms\n$$\\tilde{\\Psi}_{p}^{a}(q) = \\sum_{n \\geq 0} \\psi_{2p}^{a}(n)q^{n^{2}\/4p},$$\nwe can easily see that the partition function is an element of $q^{121\/280}\\mathbb{Z}[[q]]$. Thus,\n$$\\hat{Z}_{\\alpha_{0}} = q^{1\/2}(1 - q^{3} - q^{5} + q^{12} + \\cdots) \\quad \\text{and} \\quad S_{\\alpha_{0}\\alpha_{0}} = \\frac{1}{2}.$$\n\n\n\\acknowledgments{The author is deeply indebted to Sergei Gukov for his suggestions and invaluable discussions.\n\nThe work is funded in part by the DOE Grant DE-SC0011632 and the Walter Burke Institute for Theoretical Physics, and also by the Samsung Scholarship.}\n\n\n\n\\newpage\n\n\\bibliographystyle{JHEP_TD}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWhereas many upper bounds for the probability that a sum of independent\nreal-valued (integrable) random variables exceeds its expectation by a\nspecified threshold value $t\\in\\mathbb{R}$ are documented in the literature\n(see \\textit{e.g.} \\cite{BLM13} and the references therein), very few results\nare available when the random variables involved in the summation are sampled\nfrom a finite population according to a given survey scheme and next\nappropriately normalized (using the related survey weights as originally\nproposed in \\cite{HT51} for approximating a total). The sole situation where\nresults in the independent setting straightforwardly carry over to survey\nsamples (without replacement) corresponds to the case where the variables are\nsampled independently with possibly unequal weights, \\textit{i.e.} Poisson\nsampling. For more complex sampling plans, the dependence structure between\nthe sampled variables makes the study of the fluctuations of the resulting\nweighted sum approximating the total (referred to as the\n\\textit{Horvitz-Thompson total estimate}) very challenging. The case of basic\n\\textit{sampling without replacement} (\\textsc{SWOR} in abbreviated form) was\nfirst considered in \\cite{Hoeffding63}, and refined in \\cite{Serfling74}\nand \\cite{BardenetMaillard}. In contrast, the asymptotic behavior of the\nHorvitz-Thompson estimator as $N$ tends to infinity is well-documented in the\nliterature. 
Following in the footsteps of the seminal contribution\n\\cite{Hajek64}, a variety of limit results (\\textit{e.g.} consistency,\nasymptotic normality) have been established for Poisson sampling and next\nextended to rejective sampling viewed as conditional Poisson sampling given\nthe sample size and to sampling schemes that are close to the latter in a\n\\textit{coupling} sense in \\cite{Rob82} and \\cite{Ber98}. Although the results established in this paper are nonasymptotic, these arguments\n(conditioning upon the sampling size and coupling) are involved in their proofs.\n\nIt is indeed the major purpose of this article to extend tail bounds proved\nfor \\textsc{SWOR} to the case of rejective sampling, a fixed size sampling\nscheme generalizing it. The approach we develop is thus based on viewing\nrejective sampling as conditional Poisson sampling given the sample size and\nwriting then the deviation probability as a ratio of two quantities: the joint\nprobability that a Poisson sampling-based total estimate exceeds the threshold\n$t$ and that the cardinality of the Poisson sample equals the\n(deterministic) size $n$ of the rejective plan considered in the numerator, and\nthe probability that the Poisson sample size is equal to $n$ in the\ndenominator. Whereas a sharp lower bound for the denominator can be\nstraightforwardly derived from a local Berry-Esseen bound proved in\n\\cite{Deheuvels} for sums of independent, possibly non identically\ndistributed, Bernoulli variables, an accurate upper bound for the numerator can be\nestablished by means of an appropriate exponential change of measure\n(\\textit{i.e.} Esscher transformation), following in the footsteps of the\nmethod proposed in \\cite{Talagrand95}, a refinement of the classical argument\nof Bahadur-Rao's theorem in order to improve exponential bounds in the\nindependent setting. The tail bounds (of Bennett\/Bernstein type) established\nby means of this method are shown to be sharp in the sense that they\nexplicitly involve the 'small' asymptotic variance of the Horvitz-Thompson total\nestimate based on rejective sampling, in contrast to those proved by using the\n\\textit{negative association} property of the sampling scheme.\n\nThe article is organized as follows. A few key concepts pertaining to survey\ntheory are recalled in section \\ref{sec:background}, as well as specific\nproperties of Poisson and rejective sampling schemes. For comparison purposes,\npreliminary tail bounds in the (conditional) Poisson case are stated in\nsection \\ref{sec:Poisson}. The main results of the paper, sharper exponential\nbounds for conditional Poisson sampling namely, are proved in section\n\\ref{sec:main}, while section \\ref{sec:extension} explains how they can be\nextended to other sampling schemes, sufficiently close to rejective sampling in the sense of the total variation norm. A few remarks are finally collected in section \\ref{sec:concl} and some\ntechnical details are deferred to the Appendix section.\n\n\\section{Background and Preliminaries}\n\n\\label{sec:background} As a first go, we start with briefly recalling basic\nnotions in survey theory, together with key properties of (conditional)\nPoisson sampling schemes. 
Here and throughout, the indicator function of any\nevent $\\mathcal{E}$ is denoted by $\\mathbb{I}\\{\\mathcal{E}\\}$, the power set\nof any set $E$ by $\\mathcal{P}(E)$, the variance of any square integrable r.v.\n$Y$ by $Var(Y)$, the cardinality of any finite set $E$ by $\\#E$ and the Dirac\nmass at any point $a$ by $\\delta_{a}$. For any real number $x$, we set\n$x^{+}=\\max\\{x,\\; 0 \\}$, $x^{-}=\\max\\{ -x,\\; 0 \\}$, $\\lceil x\\rceil=\\inf\\{\nk\\in\\mathbb{Z}:\\; x\\leq k \\}$ and $\\lfloor x \\rfloor=\\sup\\{ k\\in\\mathbb{Z}:\\;\nk\\leq x \\}$.\n\n\\subsection{Sampling schemes and Horvitz-Thompson estimation}\n\nConsider a finite population of $N\\geq1$ distinct units, $\\mathcal{I}\n_{N}=\\{1,\\ \\ldots,\\; N \\}$ say. A survey sample of (possibly random) size\n$n\\leq N$ is any subset $s=\\{i_{1},\\; \\ldots,\\; i_{n(s)} \\}\\in\\mathcal{P}\n(\\mathcal{I}_{N})$ of size $n(s)=n$. A sampling design without replacement is\ndefined as a probability distribution $R_{N}$ on the set of all possible\nsamples $s\\in\\mathcal{P}(\\mathcal{I}_{N})$. For all $i\\in\\mathcal{I}_{N}$, the\nprobability that the unit $i$ belongs to a random sample $S$ defined on a\nprobability space $(\\Omega,\\; \\mathcal{F},\\; \\mathcal{P})$ and drawn from\ndistribution $R_{N}$ is denoted by $\\pi_{i}=\\mathbb{P}\\{i\\in S \\}=R_{N}\n(\\{i\\})$. The $\\pi_{i}$'s are referred to as \\textit{first order inclusion\nprobabilities}. The \\textit{second order inclusion probability} related to any\npair $(i,j)\\in\\mathcal{I}_{N}^{2}$ is denoted by $\\pi_{i,j}=\\mathbb{P}\n\\{(i,j)\\in S^{2} \\}=R_{N}(\\{i,\\; j \\})$ (observe that $\\pi_{i,i}=\\pi_{i}$).\nHere and throughout, we denote by $\\mathbb{E}[.]$ the $\\mathbb{P}$-expectation\nand by $Var(Z)$ the variance of any $\\mathbb{P}$-square integrable\nr.v. $Z:\\Omega\\rightarrow\\mathbb{R}$.\n\nThe random vector $\\boldsymbol{\\epsilon} _{N}=(\\epsilon_{1},\\; \\ldots,\\;\n\\epsilon_{N})$ defined on $(\\Omega,\\; \\mathcal{F},\\; \\mathcal{P})$, where\n$\\epsilon_{i}=\\mathbb{I}\\{ i\\in S \\}$ fully characterizes the random sample\n$S\\in\\mathcal{P}(\\mathcal{I}_{N})$. In particular, the sample size $n(S)$ is\ngiven by $n=\\sum_{i=1}^{N}\\epsilon_{i}$, its expectation and variance by\n$\\mathbb{E}[n(S)]=\\sum_{i=1}^{N}\\pi_{i}$ and $Var(n(S))=\\sum_{1\\leq i,\\; j\\leq\nN}\\{\\pi_{i,j}-\\pi_{i}\\pi_{j}\\}$ respectively. The $1$-dimensional marginal\ndistributions of the random vector $\\boldsymbol{\\epsilon} _{N}$ are the\nBernoulli distributions $Ber(\\pi_{i})=\\pi_{i}\\delta_{1}+(1-\\pi_{i})\\delta_{0}\n$, $1\\leq i \\leq N$ and its covariance matrix is $\\Gamma_{N}=(\\pi_{i,j}\n-\\pi_{i}\\pi_{j})_{1\\leq i,\\;j \\leq N}$. \\medskip\n\nWe place ourselves here in the \\textit{fixed-population} or\n\\textit{design-based} sampling framework, meaning that we suppose that a fixed\n(unknown) real value $x_{i}$ is assigned to each unit $i\\in\\mathcal{I}_{N}$.\nAs originally proposed in the seminal contribution \\cite{HT51}, the\nHorvitz-Thompson estimate of the population total $S_{N}=\\sum_{i=1}^{N} x_{i}$\nis given by\n\\begin{equation}\n\\label{eq:HT}\\widehat{S}_{\\boldsymbol{\\pi} _{N\n}}^{\\boldsymbol{\\epsilon} _{N\n}}=\\sum_{i=1}^{N} \\frac{\\epsilon_{i}}{\\pi_{i}}x_{i}=\\sum_{i\\in S}\\frac{1}\n{\\pi_{i}}x_{i},\n\\end{equation}\nwith $0\/0=0$ by convention. Throughout the article, we assume that the\n$\\pi_{i}$'s are all strictly positive. 
Hence, the expectation of\n\\eqref{eq:HT} is $\\mathbb{E}[\\widehat{S}_{\\boldsymbol{\\pi} _{N\n}}^{\\boldsymbol{\\epsilon} _{N}}]=S_{N}$ and, in the case where the size of the\nrandom sample is deterministic, its variance is given by\n\\begin{equation}\nVar(\\widehat{S}_{\\boldsymbol{\\pi} }^{\\boldsymbol{\\epsilon} _{N}})=\\sum_{i<\nj}\\left( \\frac{x_{i}}{\\pi_{i}}-\\frac{x_{j}}{\\pi_{j}} \\right) ^{2}\n\\times\\left( \\pi_{i}\\pi_{j}-\\pi_{i,j} \\right) .\n\\end{equation}\n\nThe goal of this paper is to establish accurate bounds for tail probabilities\n\\begin{equation}\n\\label{eq:tailprob}\\mathbb{P}\\{\\widehat{S}_{\\boldsymbol{\\pi} _{N\n}}^{\\boldsymbol{\\epsilon} _{N}}-S_{N} >t \\},\n\\end{equation}\nwhere $t\\in\\mathbb{R}$, when the sampling scheme $\\boldsymbol{\\epsilon} _{N}$\nis \\textit{rejective}, a very popular sampling plan that generalizes \\textit{random\nsampling without replacement} and can be expressed as a conditional Poisson\nscheme, as recalled in the following subsection for clarity. One may refer to\n\\cite{Dev87} for instance for an excellent account of survey theory, including\nmany more examples of sampling designs.\n\n\\subsection{Poisson and conditional Poisson sampling}\n\n\\label{subsec:Poisson} Undoubtedly, one of the simplest sampling plans is the\n\\textit{Poisson survey scheme} (without replacement), a generalization of\n\\textit{Bernoulli sampling} originally proposed in \\cite{Goodman} for the case\nof unequal weights: the $\\epsilon_{i}$'s are independent and the sampling\ndistribution $P_{N}$ is thus entirely determined by the first order inclusion\nprobabilities $\\mathbf{p}_{N}=(p_{1},\\;\\ldots,\\;p_{N})\\in]0,1[^{N}$:\n\\begin{equation}\n\\forall s\\in\\mathcal{P}(\\mathcal{I}_{N}),\\;\\;P_{N}(s)=\\prod_{i\\in s}p_{i}\n\\prod_{i\\notin s}(1-p_{i}). \\label{eq:Poisson}\n\\end{equation}\nObserve in addition that the behavior of the quantity \\eqref{eq:HT} can be\ninvestigated by means of results established for sums of independent random\nvariables. However, the major drawback of this sampling plan lies in the\nrandom nature of the corresponding sample size, impacting significantly the\nvariability of \\eqref{eq:HT}. The variance of the Poisson sample size is\ngiven by $d_{N}=\\sum_{i=1}^{N}p_{i}(1-p_{i})$, while the variance of\n\\eqref{eq:HT} is in this case:\n\\[\nVar\\left( \\widehat{S}_{\\boldsymbol{\\pi} _{N}}^{\\boldsymbol{\\epsilon} _{N\n}}\\right) =\\sum_{i=1}^{N}\\frac{1-p_{i}}{p_{i}}x_{i}^{2}.\n\\]\nFor this reason, \\textit{rejective sampling}, a sampling design $R_{N}$ of\nfixed size $n\\leq N$, is often preferred in practice. It generalizes the\n\\textit{simple random sampling without replacement} (where all samples with\ncardinality $n$ are equally likely to be chosen, with probability $n!(N-n)!\/N!$,\nall the corresponding first and second order probabilities being thus equal to\n$n\/N$ and $n(n-1)\/(N(N-1))$ respectively). 
Denoting by $\\boldsymbol{\\pi}\n_{N}^{R}=(\\pi_{1}^{R},\\;\\ldots,\\;\\pi_{N}^{R})$ its first order inclusion\nprobabilities and by $\\mathcal{S}_{n}=\\{s\\in\\mathcal{P}(\\mathcal{I}\n_{N}):\\;\\#s=n\\}$ the subset of all possible samples of size $n$, it is defined\nby:\n\\begin{equation}\n\\forall s\\in\\mathcal{S}_{n},\\;\\;R_{N}(s)=C\\prod_{i\\in s}p_{i}^{R}\n\\prod_{i\\notin s}(1-p_{i}^{R}), \\label{eq:Rejective}\n\\end{equation}\nwhere $C=1\/\\sum_{s\\in\\mathcal{S}_{n}}\\prod_{i\\in s}p_{i}^{R}\\prod_{i\\notin\ns}(1-p_{i}^{R})$ and the vector of parameters $\\mathbf{p}_{N}^{R}=(p_{1}^{R},\\;\\ldots\n,\\;p_{N}^{R})\\in]0,1[^{N}$ yields first order inclusion probabilities equal to\nthe $\\pi_{i}^{R}$'s and is such that $\\sum_{i=1}^{N}p_{i}^{R}=n$. Under this\nlatter additional condition, such a vector $\\mathbf{p}_{N}^{R}$ exists and is\nunique (see \\cite{Dupacova}) and the related representation\n\\eqref{eq:Rejective} is then said to be \\textit{canonical}. Notice\nincidentally that any vector $\\mathbf{p}_{N}^{\\prime}\\in]0,1[^{N}$ such that\n$p_{i}^{R}\/(1-p_{i}^{R})=cp_{i}^{\\prime}\/(1-p_{i}^{\\prime})$ for all\n$i\\in\\{1,\\;\\ldots,\\;N\\}$ for some constant $c>0$ can be used to write a\nrepresentation of $R_{N}$ of the same type as \\eqref{eq:Rejective}. Comparing\n\\eqref{eq:Rejective} and \\eqref{eq:Poisson} reveals that rejective $R_{N}$\nsampling of fixed size $n$ can be viewed as Poisson sampling given that the\nsample size is equal to $n$. It is for this reason that rejective sampling is\nusually referred to as \\textit{conditional Poisson sampling}. For simplicity's\nsake, the superscript $R$ is omitted in the sequel. One must pay attention not\nto get the $\\pi_{i}$'s and the $p_{i}$'s mixed up (except in the SWOR case, where these quantities are all equal to $n\/N$): the latter are the first\norder inclusion probabilities of $P_{N}$, whereas the former are those of its\nconditional version $R_{N}$. However they can be related by means of the\nresults stated in \\cite{Hajek64} (see Theorem 5.1 therein, as well as Lemma \\ref{lem:bias} in section \\ref{sec:main} and \\cite{BLRG12}): $\\forall\ni\\in\\{1,\\;\\ldots,\\;N\\}$,\n\\begin{align}\n\\pi_{i}(1-p_{i}) & =p_{i}(1-\\pi_{i})\\times\\left( 1-\\left( \\tilde{\\pi}\n-\\pi_{i}\\right) \/d_{N}^{\\ast}+o(1\/d_{N}^{\\ast})\\right) ,\\label{eq:rel1}\\\\\np_{i}(1-\\pi_{i}) & =\\pi_{i}(1-p_{i})\\times\\left( 1-\\left( \\tilde{p}\n-p_{i}\\right) \/d_{N}+o(1\/d_{N})\\right) , \\label{eq:rel2}\n\\end{align}\nwhere $d_{N}^{\\ast}=\\sum_{i=1}^{N}\\pi_{i}(1-\\pi_{i})$, $d_{N}=\\sum_{i=1}\n^{N}p_{i}(1-p_{i})$, $\\tilde{\\pi}=(1\/d_{N}^{\\ast})\\sum_{i=1}^{N}\\pi_{i}\n^{2}(1-\\pi_{i})$ and $\\tilde{p}=(1\/d_{N})\\sum_{i=1}^{N}(p_{i})^{2}(1-p_{i})$. \\medskip\n\nSince the major advantage of conditional Poisson sampling lies in its reduced\nvariance property (compared to Poisson sampling in particular, see the discussion in section \\ref{sec:main}), focus is next\non exponential inequalities involving a variance term, of Bennett\/Bernstein\ntype namely.\n\n\\section{Preliminary Results}\n\n\\label{sec:Poisson}\n\nAs a first go, we establish tail bounds for the Horvitz-Thompson estimator in\nthe case where the variables are sampled according to a Poisson scheme. We\nnext show how to exploit the \\textit{negative association} property satisfied\nby rejective sampling in order to extend the latter to conditional Poisson\nsampling. 
Of course, this approach does not account for the reduced variance\nproperty of Horvitz-Thompson estimates based on rejective sampling; it is the\npurpose of the next section to improve these first exponential bounds.\n\n\\subsection{Tail bounds for Poisson sampling}\n\nAs previously observed, bounding the tail probability \\eqref{eq:tailprob} is\neasy in the Poisson situation insofar as the variables summed up in\n\\eqref{eq:HT} are independent though possibly non identically distributed\n(since the inclusion probabilities are not assumed to be all equal). The\nfollowing theorem thus directly follows from well-known results related to\ntail bounds for sums of independent random variables.\n\n\\begin{theorem}\n\\label{thm:poisson}\\textsc{(Poisson sampling)} Assume that the survey scheme\n$\\boldsymbol{\\epsilon} _{N}$ defines a Poisson sampling plan with first order\ninclusion probabilities $p_{i}>0$, with $1\\leq i \\leq N$. Then, we\nalmost-surely have: $\\forall t>0$, $\\forall N\\geq1$,\n\\begin{align}\n\\mathbb{P}\\left\\{ \\widehat{S}_{\\boldsymbol{p} _{N}}^{\\boldsymbol{\\epsilon}\n_{N}}-S_{N} >t \\right\\} & \\leq\\exp\\left( -\\frac{\\sum_{i=1}^{N}\n\\frac{1-p_{i}}{p_{i}}x_{i}^{2}}{\\left( \\max_{1\\leq i \\leq N}\\frac{x_{i}\n}{p_{i}} \\right) ^{2}} H\\left( \\frac{\\max_{1\\leq i \\leq N}\\frac{\\vert\nx_{i}\\vert}{p_{i}}t}{\\sum_{i=1}^{N}\\frac{1-p_{i}}{p_{i}}x_{i}^{2}} \\right)\n\\right) \\label{eq:Bennett}\\\\\n& \\leq\\exp\\left( \\frac{-t^{2}}{\\frac{2}{3}\\max_{1\\leq i\\leq N}\\frac{\\vert\nx_{i}\\vert}{p_{i}}t+ 2\\sum_{i=1}^{N}\\frac{1-p_{i}}{p_{i}}x_{i}^{2}} \\right) ,\n\\label{eq:Bern}\n\\end{align}\nwhere $H(x)=(1+x)\\log(1+x)-x$ for $x\\geq0$.\n\\end{theorem}\n\nBounds \\eqref{eq:Bennett} and \\eqref{eq:Bern} straightforwardly result from\nBennett inequality \\cite{Bennett} and Bernstein exponential inequality\n\\cite{Bernstein} respectively, when applied to the independent random\nvariables $(\\epsilon_{i}\/p_{i})x_{i}$, $1\\leq i \\leq N$. By applying these\nresults to the variables $-(\\epsilon_{i}\/p_{i})x_{i}$'s, the same bounds\nnaturally hold for the deviation probability $\\mathbb{P}\\{\\widehat\n{S}_{\\boldsymbol{p} _{N}}^{\\boldsymbol{\\epsilon} _{N}}-S_{N} <-t \\}$ (and,\nincidentally, for $\\mathbb{P}\\{\\vert\\widehat{S}_{\\boldsymbol{p} _{N\n}}^{\\boldsymbol{\\epsilon} _{N}}-S_{N} \\vert>t \\}$ up to a factor $2$).\nDetails, as well as extensions to other deviation inequalities (see\n\\textit{e.g.} \\cite{FukNagaev}), are left to the reader.\n\n\n\\subsection{Exponential inequalities for sums of negatively associated random variables}\n\nFor clarity, we first recall the definition of \\textit{negatively associated\nrandom variables}, see \\cite{JDP83}.\n\n\\begin{definition}\n\\label{def:negassoc} Let $Z_{1},\\; \\ldots,\\; Z_{n}$ be random variables\ndefined on the same probability space, valued in a measurable space\n$(E,\\mathcal{E})$. 
They are said to be negatively associated iff for any pair\nof disjoint subsets $A_{1}$ and $A_{2}$ of the index set $\\{1,\\; \\ldots,\\; n\n\\}$\n\\begin{equation}\n\\label{eq:neg}Cov \\left( f((Z_{i})_{i\\in A_{1}}),\\; g((Z_{j})_{j\\in A_{2}})\n\\right) \\leq0,\n\\end{equation}\nfor any real valued measurable functions $f:E^{\\#A_{1}}\\rightarrow\\mathbb{R}$\nand $g:E^{\\#A_{2}}\\rightarrow\\mathbb{R}$ that are both increasing in each variable.\n\\end{definition}\n\nThe following result provides tail bounds for sums of negatively associated\nrandom variables, which extends the usual Bennett\/Bernstein inequalities in the\ni.i.d. setting, see \\cite{Bennett} and \\cite{Bernstein}.\n\n\\begin{theorem}\n\\label{thm:BernNeg} Let $Z_{1},\\;\\ldots,\\;Z_{N}$ be square integrable\nnegatively associated real valued random variables such that $|Z_{i}|\\leq c$\na.s. and $\\mathbb{E}[Z_{i}]=0$ for $1\\leq i\\leq N$. Let $a_{1},\\; \\ldots,\\; a_N$ be\nnon negative constants and set $\\sigma^{2}=\\frac{1}{N}\\sum_{i=1}^{N}a_{i}\n^{2}Var(Z_{i})$. Then, for all $t>0$, we have: $\\forall N\\geq1$,\n\\begin{align}\n\\mathbb{P}\\left\\{ \\sum_{i=1}^{N}a_{i}Z_{i}\\geq t\\right\\} & \\leq\\exp\\left(\n-\\frac{N\\sigma^{2}}{c^{2}}H\\left( \\frac{ct}{N\\sigma^{2}}\\right) \\right) \\\\\n& \\leq\\exp\\left( -\\frac{t^{2}}{2N\\sigma^{2}+\\frac{2ct}{3}}\\right) .\n\\end{align}\n\\end{theorem}\n\nBefore detailing the proof, observe that the same bounds hold true for the\ntail probability $\\mathbb{P}\\left\\{ \\sum_{i=1}^{N}a_{i}Z_{i}\\leq-t\\right\\} $\n(and for $\\mathbb{P}\\left\\{ |\\sum_{i=1}^{N}a_{i}Z_{i}|\\geq t\\right\\} $ as\nwell, up to a multiplicative factor $2$). Refer also to Theorem 4 in\n\\cite{Janson} for a similar result in a more restrictive setting\n(\\textit{i.e.} for tail bounds related to sums of \\textit{negatively related}\nr.v.'s) and to \\cite{Shao00} as well. \\begin{proof}\n\nThe proof starts off with the usual Chernoff method: for all $\\lambda>0$,\n\\begin{equation}\n\\label{eq:Chernoff}\n\\mathbb{P}\\left\\{\\sum_{i=1}\n^N a_i Z_i \\geq t\\right\\}\\leq \\exp\\left( -t\\lambda +\\log \\mathbb{E}\n\\left[e^{\\lambda\\sum_{i=1}^N a_i Z_i} \\right] \\right).\n\\end{equation}\n\nNext, observe that, for all $\\lambda>0$, we have\n\\begin{eqnarray}\n\\mathbb{E}\\left[\\exp\\left(\\lambda\\sum_{i=1}^N a_i Z_i\\right)\\right]&=&\\mathbb{E}\n\\left[\\exp(\\lambda a_N Z_N )\\exp\\left(\\lambda\\sum_{i=1}^{N-1}\na_i Z_i\\right)\\right]\\nonumber\\\\\n&\\leq &\\mathbb{E}\n\\left[ \\exp(\\lambda a_N Z_N) \\right]\\mathbb{E}\\left[\\exp\\left(\\lambda\\sum_{i=1}^{N-1}\na_i Z_i\\right) \\right]\\nonumber\\\\\n&\\leq & \\prod_{i=1}^N\\mathbb{E}\n\\left[ \\exp(\\lambda a_iZ_i) \\right],\\label{eq:neg2}\n\\end{eqnarray}\n\nusing the property \\eqref{eq:neg}\ncombined with a descending recurrence on $i$. The proof is finished by plugging \\eqref{eq:neg2}\ninto \\eqref{eq:Chernoff}\nand optimizing finally the resulting bound w.r.t. $\\lambda>0$, just like in the proof of the classic Bennett\/Bernstein inequalities, see \\cite{Bennett}\nand \\cite{Bernstein}. 
$\\square$\n\\end{proof} \\medskip\n\nThe first assertion of the theorem stated below reveals that any rejective\nscheme $\\boldsymbol{\\epsilon} ^{*}_{N}$ forms a collection of negatively\nassociated r.v.'s, the second one appearing then as a direct consequence of Theorem \\ref{thm:BernNeg}.\nWe underline that many sampling schemes (\\textit{e.g.} Rao-Sampford sampling, Pareto sampling, Srinivasan sampling) of fixed size are actually described by random vectors $\\boldsymbol{\\epsilon}_N$ with negatively associated components, see \\cite{BJ12} or \\cite{KCR11}, so that exponential bounds similar to those stated below can be proved for such sampling plans.\n\n\\begin{theorem}\n\\label{thm:neg} Let $N\\geq1$ and $\\boldsymbol{\\epsilon} ^{*}_{N}=(\\epsilon\n^{*}_{1},\\; \\ldots,\\; \\epsilon^{*}_{N})$ be the vector of indicator variables\nrelated to a rejective plan on $\\mathcal{I}_{N}$ with first order inclusion probabilities $(\\pi_1,\\; \\ldots,\\; \\pi_N)\\in ]0,1]^N$. Then, the following\nassertions hold true.\n\n\\begin{itemize}\n\\item[(i)] The binary random variables $\\epsilon^{*}_{1},\\; \\ldots,\\;\n\\epsilon^{*}_{N}$ are negatively associated.\n\n\\item[(ii)] For any $t\\geq0$ and $N\\geq1$, we have:\n\\begin{align*}\n\\mathbb{P}\\left\\{ \\widehat{S}_{\\boldsymbol{\\pi} }^{\\boldsymbol{\\epsilon}\n^{*}_{N}}-S_{N} \\geq t \\right\\} & \\leq2 \\exp\\left( -\\frac{\\sum_{i=1}\n^{N}\\frac{1-\\pi_{i}}{\\pi_{i}}x_{i}^{2}}{\\left( \\max_{1\\leq i \\leq\nN}\\frac{x_{i}}{\\pi_{i}} \\right) ^{2}} H\\left( \\frac{\\max_{1\\leq i \\leq\nN}\\frac{\\vert x_{i}\\vert}{\\pi_{i}}t\/2}{\\sum_{i=1}^{N}\\frac{1-\\pi_{i}}{\\pi_{i}\n}x_{i}^{2}} \\right) \\right) \\\\\n& \\leq2 \\exp\\left( \\frac{-t^{2}\/4}{\\frac{2}{3}\\max_{1\\leq i\\leq N}\\frac{\\vert\nx_{i}\\vert}{\\pi_{i}}t+ 2\\sum_{i=1}^{N}\\frac{1-\\pi_{i}}{\\pi_{i}}x_{i}^{2}}\n\\right) .\n\\end{align*}\n\\end{itemize}\n\\end{theorem}\n\n\\begin{proof}\n\nConsidering the usual representation of the distribution of $(\\epsilon^*_1,\\; \\ldots,\\; \\epsilon^*_N)$ as the conditional distribution of a sample of independent Bernoulli variables $(\\epsilon_1,\\; \\ldots,\\; \\epsilon_N)$ conditioned upon the event $\\sum_{i=1}\n^N\\epsilon_i=n$ (see subsection \\ref{subsec:Poisson}\n), Assertion $(i)$ is a straightforward consequence of Theorem 2.8 in \\cite{JDP83}\n(see also \\cite{Barbour}\n).\n\\medskip\nAssertion $(i)$ shows in particular that Theorem \\ref{thm:BernNeg}\ncan be applied to the random variables $\\{ (\\epsilon_i^*\/\\pi_i-1)x_i^+:\\; 1\\leq i \\leq N \\}$ and to the random variables $\\{ (\\epsilon_i^*\/\\pi_i-1)x_i^-:\\; 1\\leq i \\leq N \\}$ as well. Using the union bound, we obtain that\n\\begin{multline*}\n\\mathbb{P}\\left\\{ \\widehat{S}_{\\boldsymbol{\\pi}}^{\\boldsymbol{\\epsilon}\n^*_N}-S_N \\geq t \\right\\}\\leq \\mathbb{P}\\left\\{ \\sum_{i=1}\n^N\\left( \\frac{\\epsilon^*_i}{\\pi_i}\n-1 \\right)x^+_i\\geq t\/2 \\right\\} \\\\+ \\mathbb{P}\\left\\{ \\sum_{i=1}\n^N\\left( \\frac{\\epsilon^*_i}{\\pi_i}\n-1 \\right)x^-_i\\leq -t\/2 \\right\\},\n\\end{multline*}\nand a direct application of Theorem \\ref{thm:BernNeg} to each of the terms involved in this bound straightforwardly proves Assertion $(ii)$. $\\square$\n\\end{proof}\n \\medskip\n\nThe negative association property makes it possible to handle the dependence of the\nterms involved in the summation. However, it may lead to rather loose\nprobability bounds. 
Indeed, except the factor $2$, the bounds of Assertion\n$(ii)$ exactly correspond to those stated in Theorem \\ref{thm:poisson}, as if\nthe $\\epsilon_{i}^{*}$'s were independent, whereas the asymptotic variance $\\sigma^2_N$ of\n$\\widehat{S}_{\\boldsymbol{\\pi} }^{\\boldsymbol{\\epsilon} _{N}^{*}}$ can be much smaller than $\\sum_{i=1}^{N}(1-\\pi_{i})x_{i}^{2}\/\\pi_{i}$.\nIt is the goal of the subsequent analysis to improve these preliminary results and establish exponential bounds involving the asymptotic variance $\\sigma^2_N$.\n\\begin{remark}{\\sc (SWOR)} We point out that in the specific case of sampling without replacement, \\textit{i.e.} when $\\pi_i=n\/N$ for all $i\\in \\{1,\\; \\ldots,\\; N \\}$, the inequality stated in Assertion $(ii)$ is quite comparable (except the factor $2$) to that which can be derived from the Chernoff bound given in \\cite{Hoeffding63}, see Proposition 2 in \\cite{BardenetMaillard}.\n\n\\end{remark}\n\n\n\\section{Main Results - Exponential Inequalities for Rejective Sampling}\n\n\\label{sec:main} The main results of the paper are stated and discussed in the\npresent section. More accurate deviation probabilities related to the total estimate\n\\eqref{eq:HT} based on a rejective sampling scheme $\\boldsymbol{\\epsilon}\n_{N}^{\\ast}$ of (fixed) sample size $n\\leq N$ with first order inclusion\nprobabilities $\\boldsymbol{\\pi} _{N}=(\\pi_{1},\\;\\ldots,\\;\\pi_{N})$ and\ncanonical representation $\\mathbf{p}_{N}=(p_{1},\\;\\ldots,\\;p_{N})$ are now\ninvestigated. Consider $\\boldsymbol{\\epsilon} _{N}$ a Poisson scheme with\n$\\mathbf{p}_{N}$ as vector of first order inclusion probabilities. As\npreviously recalled, the distribution of $\\boldsymbol{\\epsilon} _{N}^{\\ast}$\nis equal to the conditional distribution of $\\boldsymbol{\\epsilon} _{N}$ given\n$\\sum_{i=1}^{N}\\varepsilon_{i}=n$:\n\\begin{equation}\n(\\varepsilon_{1}^{\\ast},\\varepsilon_{2}^{\\ast},....,\\varepsilon_{N}^{\\ast\n})\\overset{d}{=}(\\varepsilon_{1},....,\\varepsilon_{N})\\ |\\sum_{i=1\n^{N}\\varepsilon_{i}=n.\\label{eq:distribution\n\\end{equation}\nHence, we almost-surely have: $\\forall t>0$, $\\forall N\\geq1$,\n\\begin{equation}\n\\mathbb{P}\\left\\{ \\widehat{S}_{\\boldsymbol{\\pi} _{N}\n^{\\boldsymbol{\\epsilon} _{N}^{\\ast}}-S_{N}>t\\right\\} =\\mathbb{P}\\left\\{ \\sum_{i=1}^{N}\\frac{\\epsilon_{i}}{\\pi_{i}}x_{i}-S_{N}>t\\mid\n\\sum_{i=1}^{N}\\epsilon_{i}=n\\right\\} .\\label{eq:cond_tail\n\\end{equation}\nAs a first go, we shall prove tail bounds for the quantity\n\\begin{equation}\n\\widehat{S}_{\\boldsymbol{p} _{N}}^{\\boldsymbol{\\epsilon} _{N}^{\\ast}\n\\overset{def}{=}\\sum_{i=1}^{N}\\frac{\\epsilon_{i}^{\\ast}}{p_{i}}x_{i\n.\\label{eq:HT2\n\\end{equation}\nObserve that this corresponds to the HT estimate of the total $\\sum_{i=1\n^{N}\\frac{p_{i}}{\\pi_{i}}x_{i}$. Refinements of relationships \\eqref{eq:rel1} and\n\\eqref{eq:rel2} between the $p_{i}$'s and the $\\pi_{i}$'s shall next allow us\nto obtain an upper bound for \\eqref{eq:cond_tail}. Notice incidentally that,\nthough slightly biased (see Assertion $(i)$ of Theorem \\ref{thm:final}), the statistic\n\\eqref{eq:HT2} is commonly used as an estimator of $S_{N}$, insofar as the\nparameters $p_{i}$'s are readily available from the canonical representation\nof $\\boldsymbol{\\epsilon} _{N}^{\\ast}$, whereas the computation of the\n$\\pi_{i}$'s is much more complicated. One may refer to \\cite{CDL94} for\npractical algorithms dedicated to this task. 
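\medskip

\noindent To fix ideas, the sketch below (purely illustrative: the weights, the study variable and the naive rejection loop are arbitrary choices) draws a rejective sample of fixed size $n$ directly from its canonical representation, by simulating independent Bernoulli($p_i$) indicators until the realized sample size equals $n$, and then evaluates the statistic \eqref{eq:HT2}.
\begin{verbatim}
# Sketch: drawing a rejective (conditional Poisson) sample of size n from its
# canonical weights p_1, ..., p_N and computing the estimate (eq:HT2).
import numpy as np

rng = np.random.default_rng(1)
N, n = 200, 40
x = rng.lognormal(1.0, 0.5, size=N)    # arbitrary study variable
w = rng.uniform(1.0, 2.0, size=N)
p = n * w / w.sum()                    # canonical weights: sum(p) = n, p_i < 1 here

def rejective_sample(p, n, rng):
    # Poisson scheme conditioned on the event {sum of the indicators = n}.
    while True:
        eps = rng.random(p.size) < p
        if eps.sum() == n:
            return eps

S_N = x.sum()
reps = 2000
est = np.empty(reps)
for b in range(reps):
    eps = rejective_sample(p, n, rng)
    est[b] = np.sum(x[eps] / p[eps])   # the statistic (eq:HT2)

# (eq:HT2) concentrates around S_N, although it is slightly biased
# (cf. Assertion (i) of Theorem thm:final).
print(S_N, est.mean(), est.std())
\end{verbatim}
The rejection loop above only makes the conditioning \eqref{eq:distribution} explicit; it is not meant as an efficient sampling procedure.
\medskip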
Hence, Theorem \ref{thm:rejective} is of practical interest to build non asymptotic confidence intervals for the total $S_N$.
\medskip

\noindent {\bf Asymptotic variance.} Recall that $d_{N}=\sum_{i=1}^{N}p_{i}(1-p_{i})$ is the variance $Var(\sum_{i=1}^{N}\epsilon_{i})$ of the size of the Poisson plan $\boldsymbol{\epsilon} _{N}$ and set
\[
\theta_{N}=\frac{\sum_{i=1}^{N}x_{i}(1-p_{i})}{d_{N}}.
\]
As explained in \cite{BCC2013}, the quantity $\theta_{N}$ is the coefficient of the linear regression relating $\sum_{i=1}^{N}\frac{\epsilon_{i}}{p_{i}}x_{i}-S_{N}$ to the sample size $\sum_{i=1}^{N}\epsilon_{i}$. We may thus write
\[
\sum_{i=1}^{N}\frac{\epsilon_{i}}{p_{i}}x_{i}-S_{N}=\theta_{N}\times \sum_{i=1}^{N}\epsilon_{i}+r_{N},
\]
where the residual $r_{N}$ is orthogonal to $\sum_{i=1}^{N}\epsilon_{i}$. Hence, we have the following decomposition
\begin{equation}\label{eq:Poisson_var}
Var\left( \sum_{i=1}^{N}\frac{\epsilon_{i}}{p_{i}}x_{i}\right) =\sigma^2_{N}+\theta_{N}^{2}d_{N},
\end{equation}
where
\begin{equation}\label{eq:Asympt_Var}
\sigma^2_{N}=Var\left( \sum_{i=1}^{N}(\epsilon_{i}-p_{i})\left( \frac{x_{i}}{p_{i}}-\theta_{N}\right) \right)
\end{equation}
is the asymptotic variance of the statistic $\widehat{S}_{\boldsymbol{p} _{N}}^{\boldsymbol{\epsilon} _{N}^{\ast}}$, see \cite{Hajek64}. In other words, the variance reduction resulting from the use of a rejective sampling plan instead of a Poisson plan is equal to $\theta_{N}^{2}d_{N}$, and can be very large in practice. A sharp Bernstein type probability inequality for $\widehat{S}_{\boldsymbol{p} _{N}}^{\boldsymbol{\epsilon} _{N}^{\ast}}$ should thus involve $\sigma^2_N$ rather than the Poisson variance $Var( \sum_{i=1}^{N}(\epsilon_{i}/p_{i})x_{i})$.
Using the fact that $\sum_{i=1}^{N}(\epsilon_{i}-p_{i})=0$ on the event $\{\sum_{i=1}^{N}\epsilon_{i}=n\}$, we may now write:
\begin{align}
\mathbb{P}\left\{ \widehat{S}_{\boldsymbol{p} _{N}}^{\boldsymbol{\epsilon}_{N}^{\ast}}-S_{N}>t\right\} & =\mathbb{P}\left\{ \sum_{i=1}^{N}\frac{\epsilon_{i}}{p_{i}}x_{i}-S_{N}>t\mid\sum_{i=1}^{N}\epsilon_{i}=n\right\} \nonumber\\
& =\frac{\mathbb{P}\left\{ \sum_{i=1}^{N}(\epsilon_{i}-p_{i})\frac{x_{i}}{p_{i}}>t,\;\sum_{i=1}^{N}\epsilon_{i}=n\right\} }{\mathbb{P}\left\{ \sum_{i=1}^{N}\epsilon_{i}=n\right\} }\nonumber\\
& =\frac{\mathbb{P}\left\{ \sum_{i=1}^{N}(\epsilon_{i}-p_{i})\left( \frac{x_{i}}{p_{i}}-\theta_{N}\right) >t,\;\sum_{i=1}^{N}\epsilon_{i}=n\right\} }{\mathbb{P}\left\{ \sum_{i=1}^{N}\epsilon_{i}=n\right\} }.\label{eq:ratio}
\end{align}
Based on the observation that the random variables $\sum_{i=1}^{N}(\epsilon_{i}-p_{i})(x_{i}/p_{i}-\theta_{N})$ and $\sum_{i=1}^{N}(\epsilon_{i}-p_{i})$ are uncorrelated, Eq. \eqref{eq:ratio} makes it possible to establish directly the CLT $\sigma_N^{-1}(\widehat{S}_{\boldsymbol{p} _{N}}^{\boldsymbol{\epsilon} _{N}^{\ast}}-S_{N})\Rightarrow \mathcal{N}(0,1)$, provided that $d_{N}\rightarrow+\infty$ as $N\rightarrow+\infty$, by simplifying the ratio asymptotically, see \cite{Hajek64}. 
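\medskip

\noindent Since the $\epsilon_i$'s are independent under the Poisson plan, every term of the decomposition \eqref{eq:Poisson_var} admits a closed form, and the identity can be checked numerically to machine precision, as in the following sketch (illustrative only; the weights and the study variable are arbitrary).
\begin{verbatim}
# Sketch: the variance decomposition (eq:Poisson_var) under the Poisson plan,
# with all terms computed in closed form (the eps_i are independent).
import numpy as np

rng = np.random.default_rng(2)
N, n = 500, 80
x = rng.gamma(2.0, 1.5, size=N)        # arbitrary study variable
w = rng.uniform(1.0, 3.0, size=N)
p = n * w / w.sum()                    # canonical weights, sum(p) = n

d_N = np.sum(p * (1.0 - p))                            # Var of the Poisson sample size
theta_N = np.sum(x * (1.0 - p)) / d_N                  # regression coefficient
var_poisson = np.sum(p * (1.0 - p) * (x / p) ** 2)     # Var(sum eps_i x_i / p_i)
sigma2_N = np.sum(p * (1.0 - p) * (x / p - theta_N) ** 2)   # (eq:Asympt_Var)

print(var_poisson, sigma2_N + theta_N**2 * d_N)   # the two quantities coincide
print(theta_N**2 * d_N / var_poisson)             # share removed by fixing the size
\end{verbatim}
In this particular example, the correction $\theta_{N}^{2}d_{N}$ accounts for a large share of the Poisson variance.
\medskip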
Hence, the asymptotic variance of $\\widehat{S}_{\\boldsymbol{p} _{N}}^{\\boldsymbol{\\epsilon} _{N}^{\\ast}}-S_{N}$ is the variance $\\sigma^2_N$ of the quantity $\\sum_{i=1}^{N}(\\epsilon_{i\n-p_{i})(x_{i}\/p_{i}-\\theta_{N})$, which is less than that of the Poisson HT estimate $\\eqref{eq:Poisson_var}$, since it eliminates the variability due to the sample size. We also point out that Lemma \\ref{lem:bias} proved in the Appendix section straightforwardly shows that the \"variance term\" $\\sum_{i=1}^Nx_i^2(1-\\pi_i)\/\\pi_i$ involved in the bound stated in Theorem \\ref{thm:BernNeg} is always larger than $(1+6\/d_N)^{-1}\\sum_{i=1}^Nx_i^2(1-p_i)\/p_i$.\n\\medskip\n\nThe desired result here is non\nasymptotic and accurate exponential bounds are required for both the numerator\nand the denominator of \\eqref{eq:ratio}. It is proved in \\cite{Hajek64} (see\nLemma 3.1 therein) that, as $N\\rightarrow+\\infty$:\n\\begin{equation}\n\\mathbb{P}\\left\\{ \\sum_{i=1}^{N}\\epsilon_{i}=n\\right\\} =(2\\,\\pi\n\\,d_{N})^{-1\/2}\\,(1+o(1)).\\label{local\n\\end{equation}\nAs shall be seen in the proof of the theorem stated below, the approximation\n\\eqref{local} can be refined by using a local Berry-Essen bound or the\nresults in \\cite{Deheuvels} and we thus essentially need to establish an\nexponential bound for the numerator with a constant of order $d_{N}^{-1\/2}$,\nsharp enough so as to simplify the resulting ratio bound and cancel off the\ndenominator. We shall prove that this can be achieved by using a similar\nargument as that considered in \\cite{BERCLEM2010} for establishing an accurate\nexponential bound for i.i.d. $1$-lattice random vectors, based on a device\nintroduced in \\cite{Talagrand95} for refining Hoeffding's inequality.\n\n\\begin{theorem}\n\\label{thm:rejective}Let $N\\geq1$. Suppose that $\\boldsymbol{\\epsilon}\n_{N}^{\\ast}$ is a rejective scheme of size $n\\leq N$ with canonical parameter\n$\\boldsymbol{p} _{N}=(p_{1},\\;\\ldots,\\;p_{N})\\in]0,\\;1[^{N}$. Then, there\nexist universal constants $C$ and $D$ such that we have for all $t>0$ and\nfor all $N\\geq1$,\n\\begin{align*}\n\\mathbb{P}\\left\\{ \\widehat{S}_{\\boldsymbol{p} _{N}}^{\\boldsymbol{\\epsilon}\n_{N}^{\\ast}}-S_{N}>t\\right\\} & \\leq C\\exp\\left( -\\frac{\\sigma^2_{N\n}{\\left( \\max_{1\\leq j\\leq N}\\frac{|x_{j}|}{p_{j}}\\right) ^{2}}H\\left(\n\\frac{t\\max_{1\\leq j\\leq N}\\frac{|x_{j}|}{p_{j}}\n{\\sigma^2_{N}}\\right) \\right) \\\\\n& \\leq C\\exp\\left( -\\frac{t^{2}}{2\\left( \\sigma^2_{N}+\\frac{1\n{3}t\\max_{1\\leq j\\leq N}\\frac{|x_{j}|}{p_{j}}\\right)\n}\\right) ,\n\\end{align*}\nas soon as $\\min\\{d_{N},\\;d_{N}^{\\ast}\\}\\geq1$ and $d_N\\geq D$.\n\\end{theorem}\n\nAn overestimated value of the constant $C$ can be deduced by a careful\nexamination of the proof given below. Before we detail it, we point out that\nthe exponential bound in Theorem \\ref{thm:rejective} involves the asymptotic variance of \\eqref{eq:HT2}, in contrast to bounds\nobtained by exploiting the \\textit{negative association} property of the\n$\\epsilon_{i}^{\\ast}$'s.\n\n\\begin{remark}{\\sc (SWOR (bis))} We underline that, in the particular case of sampling without replacement (\\textit{i.e.} when $p_i=\\pi_i=n\/N$ for $1\\leq i\\leq N$), the Bernstein type exponential inequality stated above provides a control of the tail similar to that obtained in \\cite{BardenetMaillard}, see Theorem 2 therein, with $k=n$. 
In this specific situation, we have $d_N=n(1-n\/N)$ and $\\theta_N=S_N\/n$, so that the formula \\eqref{eq:Asympt_Var} then becomes $$\n\\sigma_N^2=\\left(1-\\frac{n}{N}\\right)\\frac{N^2}{n}\\left\\{ \\frac{1}{N}\\sum_{i=1}^Nx_i^2 -\\left(\\frac{1}{N}\\sum_{i=1}^N x_i\\right)^2 \\right\\}.\n$$\nThe control induced by Theorem \\ref{thm:rejective} is actually slightly better than that given by Theorem 2 in \\cite{BardenetMaillard}, insofar as the factor $(1-n\/N)$ is involved in the variance term, rather than $(1-(n-1)\/N)$, that is crucial when considering situations where $n$ gets close to $N$ (see the discussion preceded by Proposition 2 in \\cite{BardenetMaillard}).\n\\end{remark}\n\n\\begin{proof}\nWe first introduce additional notations.\nSet $Z_{i\n=(\\epsilon _{i}-p_{i})(x_{i}\/p_{i}-\\theta _{N})$ and $m_{i}=\\epsilon _{i\n-p_{i\n$ for $1\\leq i\\leq N$ and, for convenience, consider the standardized variables given by\n\\begin{equation*}\n\\mathcal{Z}_{N}=n^{1\/2}\\frac{1}{N}\\sum_{1\\leq i\\leq N}Z_{i} \\text{ and }\n\\mathcal{M}_{N}=d_N^{-1\/2}\\sum_{1\\leq i\\leq N}m_{i}.\n\\end{equation*\nAs previously announced, the proof is based on Eq. \\eqref{eq:ratio}. The lemma below first provides a sharp lower bound for the denominator, $ \\mathbb{P\n^*\\left\\{ \\mathcal{M}_{N\n=0\\right\\}$ with the notations above. As shown in the proof given in the Appendix section, it can be obtained by applying the local Berry-Esseen bound established in \\cite{Deheuvels}\nfor sums of independent (and possibly non identically) Bernoulli random variables.\n\\begin{lemma}\n\\label{lem:denominator}\nSuppose that Theorem \\ref{thm:rejective\n's assumptions are fulfilled. Then, there exist universal constants $C_1$ and $D$ such that: $\\forall N\\geq 1$,\n\\begin{equation}\n\\mathbb{P}\\{ \\mathcal{M}_N=0 \\}\\geq C_1\\frac{1}{\\sqrt{d_N}},\n\\end{equation}\nprovided that $d_N\\geq D$.\n\\end{lemma}\n\nThe second lemma gives an accurate upper bound for the numerator. Its proof can be found in the Appendix section.\n\\begin{lemma\n\\label{lem:numerator}\nSuppose that Theorem \\ref{thm:rejective\n's assumptions are fulfilled. Then, we have for all $x\\geq 0$, and for all $N\\geq 1$ such that $\\min\\{d_N,\\; d_N^*\\}\\geq 1$:\n\\begin{multline*}\n\\mathbb{P}\\left\\{\\mathcal{Z}_{N}\\geq x,\\mathcal{M}_{N\n=0\\right\\}\\leq C \\frac{1}{\\sqrt{d_N}\n\\times \\\\\\exp\\left( -\\frac{Var\\left( \\sum_{i=1}^NZ_i \\right)\n{\\left(\\max_{1\\leq j\\leq N}\\frac{\\vert x_j\\vert}{p_j} \\right)^2\nh\\left( \\frac{N}{\\sqrt{n}}\\frac{x\\max_{1\\leq j\\leq N}\\frac{\\vert x_j\\vert\n{p_j}}{Var\\left( \\sum_{i=1}^NZ_i \\right)}\n\\right) \\right)\\\\\n\\leq C_2 \\frac{1}{\\sqrt{d_N}}\\exp\\left( -\\frac{N^2x^2\/n\n{2\\left(Var\\left( \\sum_{i=1}^NZ_i \\right)+\\frac{1}{3}\\frac{N}{\\sqrt{n\n}x\\max_{1\\leq j\\leq N}\\frac{\\vert x_j\\vert}{p_j} \\right)}\n\\right),\n\\end{multline*\nwhere $C_2<+\\infty$ is a universal constant.\n\\end{lemma}\n\nThe bound stated in Theorem \\ref{thm:rejective}\nnow directly results from Eq. \\eqref{eq:ratio}\ncombined with Lemmas \\ref{lem:denominator} and \\ref{lem:numerator}, with $x=t\\frac{\\sqrt{n}}{N}$. $\\square$\n\\end{proof} \n\n\\bigskip\n\nEven if the computation of the biased statistic \\eqref{eq:HT2} is much more tractable from a practical perspective, we now come back to the study of the HT total estimate \\eqref{eq:HT}. 
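\medskip

\noindent On a toy population, the quantities involved can be computed by brute force, which makes the difference between the two statistics fully explicit. In the sketch below (illustrative only: the numerical values are arbitrary and the exhaustive enumeration is of course only feasible for very small $N$), the first order inclusion probabilities $\pi_i$ of a rejective plan are obtained by enumerating all samples of size $n$, and the exact bias of \eqref{eq:HT2}, namely $\sum_{i=1}^{N}(\pi_i/p_i-1)x_i$ since $\mathbb{E}[\epsilon_i^{*}]=\pi_i$, is evaluated.
\begin{verbatim}
# Sketch: exact inclusion probabilities of a rejective plan on a toy population
# (brute-force enumeration) and exact bias of the tractable statistic (eq:HT2).
import itertools
import numpy as np

rng = np.random.default_rng(3)
N, n = 12, 4
x = rng.uniform(1.0, 3.0, size=N)      # arbitrary study variable
w = rng.uniform(1.0, 2.0, size=N)
p = n * w / w.sum()                    # canonical Poisson weights

# Rejective design: Poisson plan conditioned on samples of size exactly n.
samples = list(itertools.combinations(range(N), n))
wts = []
for s in samples:
    mask = np.zeros(N, dtype=bool)
    mask[list(s)] = True
    wts.append(np.prod(np.where(mask, p, 1.0 - p)))
R = np.array(wts) / np.sum(wts)        # R_N(s) over samples s of size n

pi = np.zeros(N)                       # exact first order inclusion probabilities
for s, prob in zip(samples, R):
    pi[list(s)] += prob

print(np.max(np.abs(pi - p)))          # pi_i and p_i are close but not equal
bias = np.sum((pi / p - 1.0) * x)      # E[(eq:HT2)] - S_N under the rejective plan
print(bias, x.sum())
\end{verbatim}
\medskip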
The first part of the result stated below provides an estimation of the bias that replacement of \\eqref{eq:HT} by \\eqref{eq:HT2} induces, whereas its second part finally gives a tail bound for $\\eqref{eq:HT}$.\n\n\\begin{theorem}\\label{thm:final} Suppose that the assumptions of Theorem \\ref{thm:rejective} are fulfilled and set $M_N=(6\/d_N)\\sum_{i=1}^N\\vert x_i\\vert \/\\pi_i$. The following assertions hold true.\n\\begin{itemize}\n\\item[(i)] For all $N\\geq 1$, we almost-surely have:\n\\begin{equation*}\n\\left\\vert \\widehat{S}^{\\boldsymbol{\\epsilon}_N^*}_{\\boldsymbol{\\pi}_N } - \\widehat{S}^{\\boldsymbol{\\epsilon}_N^*}_{\\mathbf{p}_N } \\right\\vert \\leq M_N .\n\\end{equation*}\n\\item[(ii)] There exist universal\nconstants $C$ and $D$ such that, for all $t>M_{N}$ and for all $N\\geq1$, we have:\n\\begin{multline*}\n\\mathbb{P}\\left\\{ \\widehat{S}^{\\boldsymbol{\\epsilon}^*_N}_{\\boldsymbol{\\pi}_N}-S_{N}>t \\right\\} \n\\leq \\\\ C\\exp\\left( -\\frac{\\sigma_{N}^2}{\\left( \\max_{1\\leq j\\leq\nN}\\frac{|x_{j}|}{p_{j}}\\right) ^{2}}H\\left( \\frac{N}{\\sqrt{n}\n\\frac{(t-M_{N})\\max_{1\\leq j\\leq N}\\frac{|x_{j}|}{p_{j}}}{\\sigma^2_{N\n}\\right) \\right) \\\\\n \\leq C\\exp\\left( -\\frac{N^{2}(t-M_{N})^{2}\/n}{2\\left( \\sigma^2_{N}+\\frac{1}{3}\\frac{N}{\\sqrt{n}}(t-M_{N})\\max_{1\\leq j\\leq N}\\frac{|x_{j\n|}{p_{j}}\\right) }\\right) ,\n\\end{multline*}\nas soon as $\\min\\{d_{N},\\;d_{N}^{\\ast}\\}\\geq1$ and $d_N\\geq D$.\n\\end{itemize}\n\\end{theorem}\n\nThe proof is given in the Appendix section. We point out that, for nearly uniform weights, \\textit{i.e.} when $c_1n\/N\\leq\\pi_i\\leq c_2n\/N$ for all $i\\in\\{1,\\; \\ldots,\\; N \\}$ with $0t\\right\\} -\\mathbb{P}\\left\\{ \\widehat{S}_{\\mathbf{p} _{N}\n^{\\widetilde{\\boldsymbol{\\epsilon}}_{N}}-S_{N}>t\\right\\} \\right| \n& \\leq &\\Vert\\widetilde{R}_{N}-R_{N}\\Vert_{1} \\\\ &\\leq&\\sqrt{2 D_{KL}(R_{N}\\vert\\vert\\widetilde\n{R}_{N})}.\n\\end{eqnarray*}\n\\end{lemma}\n\\begin{proof} The first bound immediately results from the following elementary observation:\n\\begin{multline*}\n \\mathbb{P}\\left\\{ \\widehat{S}_{\\mathbf{p} _{N\n}^{\\boldsymbol{\\epsilon} _{N}}-S_{N}>t\\right\\} -\\mathbb{P}\\left\\{ \\widehat{S}_{\\mathbf{p} _{N}\n^{\\widetilde{\\boldsymbol{\\epsilon}}_{N}}-S_{N}>t\\right\\} =\\\\\n\\sum_{s\\in \\mathcal{P}(\\mathcal{I}_N)}\\mathbb{I}\\{\\sum_{i\\in s}x_i\/p_i-S_N >t\\}\\times \\left(R_N(s) -\\widetilde{R}_N(s) \\right),\n\\end{multline*}\nwhile the second bound is the classical Pinsker inequality.\n$\\square$\n\\end{proof}\n\\medskip\n\nIn practice, $R_{N}$ is typically the rejective sampling plan\ninvestigated in the previous subsection (or eventually the Poisson sampling scheme) and $\\widetilde{R}_N$ a sampling plan from which the Kullback-Leibler divergence to $R_N$ asymptotically vanishes, \\textit{e.g.} the rate at which $D_{KL}(R_{N}\\vert\\vert\\widetilde{R}_{N})$ decays to zero has been investigated in \\cite{Ber98} when $\\widetilde{R}_N$ corresponds to Rao-Sampford, successive sampling or Pareto sampling\nunder appropriate regular conditions (see also \\cite{BTL06}). 
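\medskip

\noindent The transfer argument can also be illustrated numerically on a toy population. In the sketch below (illustrative only), $R_N$ is a rejective plan and $\widetilde{R}_N$ is simply another rejective plan built from slightly perturbed weights; the latter merely stands in for the sampling plans mentioned above, whose exact computation would require dedicated implementations. The script checks numerically the two inequalities displayed above for a tail event of the form $\{\widehat{S}_{\mathbf{p}_N}-S_N>t\}$.
\begin{verbatim}
# Sketch: total variation / Pinsker transfer between two fixed-size designs
# on a toy population (exhaustive enumeration, small N only).
import itertools
import numpy as np

rng = np.random.default_rng(4)
N, n = 12, 4
x = rng.uniform(1.0, 3.0, size=N)
w = rng.uniform(1.0, 2.0, size=N)
p = n * w / w.sum()                            # weights of the reference plan R
w_t = w * rng.uniform(0.9, 1.1, size=N)
p_t = n * w_t / w_t.sum()                      # perturbed weights for R_tilde

samples = list(itertools.combinations(range(N), n))

def rejective(pvec):
    wts = []
    for s in samples:
        mask = np.zeros(N, dtype=bool)
        mask[list(s)] = True
        wts.append(np.prod(np.where(mask, pvec, 1.0 - pvec)))
    wts = np.array(wts)
    return wts / wts.sum()

R = rejective(p)
R_t = rejective(p_t)

l1 = np.sum(np.abs(R_t - R))                   # ||R_tilde - R||_1
kl = np.sum(R * np.log(R / R_t))               # D_KL(R || R_tilde)

t = 0.1 * x.sum()
stat = np.array([np.sum(x[list(s)] / p[list(s)]) for s in samples]) - x.sum()
tail = stat > t
diff = abs(np.sum(R[tail]) - np.sum(R_t[tail]))

print(diff, l1, np.sqrt(2.0 * kl))             # diff <= l1 <= sqrt(2 KL)
\end{verbatim}
\medskip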
Lemma \\ref{lem:ext} combined with Theorem \\ref{thm:rejective} or Theorem \\ref{thm:final} permits then to obtain upper bounds for the tail probabilities $\\mathbb{P}\\{ \\widehat{S}_{\\mathbf{p} _{N}\n^{\\widetilde{\\boldsymbol{\\epsilon}}_{N}}-S_{N}>t\\} $.\n\n\\section{Conclusion}\\label{sec:concl}\nIn this article, we proved Bernstein-type tail bounds to quantify the deviation between a total and its Horvitz-Thompson estimator when based on conditional Poisson sampling, extending (and even slightly improving) results proved in the case of basic sampling without replacement. The original proof technique used to establish these inequalities relies on expressing the deviation probablities related to a conditional Poisson scheme as conditional probabilities related to a Poisson plan. This permits to recover tight exponential bounds, involving the asymptotic variance of the Horvitz-Thompson estimator. Beyond the fact that rejective sampling is of prime importance in the practice of survey sampling, extension of these tail bounds to sampling schemes that can be accurately approximated by rejective sampling in the total variation sense is also discussed. \n\n\n\\section*{Appendix - Technical Proofs}\n\n\\subsection*{Proof of Lemma \\ref{lem:denominator}}\n\nFor clarity, we first recall the following result.\n\n\\begin{theorem}\n\\label{thm:denominator}(\\cite{Deheuvels}, Theorem 1.3) Let $(Y_{j,n})_{1\\leq\nj\\leq n}$ be a triangular array of independent Bernoulli random variables with\nmeans $q_{1,n},\\; \\ldots,\\; q_{n,n}$ in $(0,1)$ respectively. Denote by\n$\\sigma^{2}_{n}=\\sum_{i=1}^{n}q_{i,n}(1-q_{i,n})$ the variance of the sum\n$\\Sigma_{n}=\\sum_{i=1}^{n}Y_{i,n}$ and by $\\nu_{n}=\\sum_{i=1}^{n}q_{i,n}$ its\nmean. Considering the cumulative distribution function (cdf) $F_{n\n(x)=\\mathbb{P}\\{ \\sigma_{n}^{-1}(\\Sigma_{n}-\\nu_{n} )\\leq x \\}$, we have:\n$\\forall n\\geq1$,\n\\[\n\\sup_{k\\in\\mathbb{Z}}\\left\\vert F_{n}(x_{n,k})-\\Phi(x_{n,k})-\\frac{1-x_{n,k\n^{2}}{6\\sigma_{n}}\\phi(x_{n,k})\\left\\{ 1-\\frac{2\\sum_{i=1}^{n}q^{2\n_{i,n}(1-q_{i,n})}{\\sigma_{n}^{2}} \\right\\} \\right\\vert \\leq\\frac{C\n{\\sigma^{2}_{n}},\n\\]\nwhere $x_{n,k}=\\sigma_{n}^{-1}(k-\\nu_{n}+1\/2)$ for any $k\\in\\mathbb{Z}$,\n$\\Phi(x)=(2\\pi)^{-1\/2}\\int_{-\\infty}^{x}\\exp(-z^{2}\/2)dz$ is the cdf of the\nstandard normal distribution $\\mathcal{N}(0,1)$, $\\phi(x)=\\Phi^{\\prime}(x)$\nand $C<+\\infty$ is a universal constant.\n\\end{theorem}\n\nObserve first that we can write:\n\\begin{multline*}\n\\mathbb{P}\\left\\{ \\mathcal{M}_{N}=0\\right\\} =\\mathbb{P}\\left\\{\\sum_{i=1}^{N}(\\epsilon\n_{i}-p_{i})\\in]-1\/2,1\/2]\\right\\}\\\\\n=\\mathbb{P}\\left\\{ d_{N}^{-1\/2}\\sum_{i=1}^{N\nm_{i}\\leq \\frac1 2 d_{N}^{-1\/2}\\right\\} -\\mathbb{P}\\left\\{ d_{N}^{-1\/2\n\\sum_{i=1}^{N}m_{i}\\leq-\\frac1 2 d_{N}^{-1\/2}\\right\\} .\n\\end{multline*}\nApplying Theorem \\ref{thm:denominator} to bound the first term of this\ndecomposition (with $k=\\nu_{n}$ and $x_{n,k}=1\/(2\\sqrt{d_{N}})$) directly yields\nthat\n\\begin{multline*}\n \\mathbb{P}\\left\\{ \\frac{\\sum_{i=1}^{N}m_{i}}{\\sqrt{d_N}}\\leq\n\\frac{1}{2\\sqrt{d_N}}\\right\\} \\geq \\Phi\\left(\\frac{1}{2\\sqrt{d_{N}}}\\right)\\\\+\\frac{1-\\frac{1}{4d_{N}}\n{6\\sqrt{d_{N}}}\\phi\\left(\\frac{1}{2\\sqrt{d_{N}}}\\right)\\left\\{ 1-\\frac{2\\sum_{i=1}^{n}p_{i\n^{2}(1-p_{i})}{d_{N}}\\right\\} - \\frac{C}{d_{N}}.\n\\end{multline*}\nFor the second term, its application with $k=\\nu_{n}-1$ entails that:\n\\begin{multline*}\n-\\mathbb{P}\\left\\{ 
\\frac{1}{2\\sqrt{d_N}}\\sum_{i=1}^{N}m_{i}\\geq\n-\\frac{1}{2\\sqrt{d_N}} \\right\\} \\leq -\\Phi\\left(-\\frac{1}{2\\sqrt{d_{N}}}\\right)\\\\-\\frac{1-\\frac{1}{4d_{N}}}{6\\sqrt{d_{N}}}\\phi\\left(-\\frac{1}{2\\sqrt{d_{N}}}\\right) \n\\left\\{ 1-\\frac{2\\sum_{i=1}^{n}p_{i\n^{2}(1-p_{i})}{d_{N}}\\right\\} -\\frac{C}{d_{N}}.\n\\end{multline*}\nIf $d_N\\geq 1$, it follows that\n\\begin{eqnarray*}\n\\mathbb{P}\\left\\{ \\mathcal{M}_{N}=0\\right\\} &\\geq &\\Phi\\left(\\frac{1}{2\\sqrt{d_{N}}\n\\right)-\\Phi\\left(-\\frac{1}{2\\sqrt{d_{N}}}\\right)- \\frac{2C}{d_{N}}\\\\\n&= & 2\\int_{0}^{\\frac{1}{2\\sqrt{d_N}}}\\phi(t)dt- \\frac{2C}{d_{N}}\n\\geq \\left( \\phi(1\/2)-\\frac{2C}{\\sqrt{d_N}}\\right)\\frac{1}{\\sqrt{d_N}}.\n\\end{eqnarray*}\n We\nthus obtain the desired result for $d_{N}\\geq D$, where $D>0$ is any constant strictly larger than $4C^2\\phi^2(1\/2)$.\n\n\\subsection{Proof of Lemma \\ref{lem:numerator}}\n\nObserve that\n\\begin{equation}\nVar \\left( \\sum_{i=1}^{N} Z_{i}\\right) =\\sum_{i=1}^{N}Var\\left(\nZ_{i}\\right) =Var \\left( \\sum_{i=1}^{N} \\epsilon_{i}^{*} \\frac{x_{i}}{p_{i\n}\\right) =Var \\left( \\widehat{S}^{\\boldsymbol{\\epsilon} ^{*}_{N\n}_{\\boldsymbol{p} _{N}} \\right) .\n\\end{equation}\nLet $\\psi_{N}(u)=\\log\\mathbb{E}^{*}[\\exp(\\langle u,(\\mathcal{Z}_{N\n,\\;\\mathcal{M}_{N})\\rangle)]$, $u=(u_{1},u_{2})\\in\\mathbb{R}^{+\n\\times\\mathbb{R}$, be the log-Laplace of the $1$-lattice random vector\n$(\\mathcal{Z}_{N},\\;\\mathcal{M}_{N})$, where $\\langle.,\\; .\\rangle$ is the\nusual scalar product on $\\mathbb{R}^{2}$. Denote by $\\psi_{N}^{(1)}(u)$ and\n$\\psi_{N}^{(2)}(u)$ its gradient and its Hessian matrix respectively. Consider\nnow the conditional probability measure $\\mathbb{P}^{*}_{u,N}$ given\n$\\mathcal{D}_{N}$ defined by the Esscher transform\n\\begin{equation}\nd\\mathbb{P}_{u,N}=\\exp\\left( \\left\\langle u,(\\mathcal{Z}_{N},\\mathcal{M\n_{N})\\right\\rangle -\\psi_{N}(u)\\right) d\\mathbb{P}.\n\\end{equation}\nThe $\\mathbb{P}_{u,N}$-expectation is denoted by $\\mathbb{E}_{u,N}[.]$, the\ncovariance matrix of a $\\mathbb{P}_{u,N}$-square integrable random vector $Y$\nunder $\\mathbb{P}_{u,N}$ by $Var_{u^{*},N}(Y)$. With $x=t\\sqrt{n}\/N$, by\nexponential change of probability measure, we can rewrite the numerator of\n\\eqref{eq:ratio} as\n\\begin{multline}\n\\mathbb{P}\\left\\{ \\mathcal{Z}_{N}\\geq x,\\mathcal{M}_{N}=0\\right\\}\n=\\mathbb{E}_{u,N}\\left[ e^{\\psi_{N}(u)-\\left\\langle u,(\\mathcal{Z\n_{N},\\mathcal{M}_{N})\\right\\rangle }\\mathbb{I}\\{\\mathcal{Z}_{N}\\geq\nx,\\mathcal{M}_{N}=0\\}\\right] \\nonumber\\\\\n=H(u)\\mathbb{E}_{u,N}\\left[ e^{-\\left\\langle u,(\\mathcal{Z}_{N\n-x,\\mathcal{M}_{N})\\right\\rangle }\\mathbb{I}\\{\\mathcal{Z}_{N}\\geq\nx,\\mathcal{M}_{N}=0\\}\\right] ,\n\\end{multline}\nwhere we set $H(u)=\\exp(-\\left\\langle u,(x,0)\\right\\rangle +\\psi_{N}(u))$.\nNow, as $\\psi_{N}$ is convex, the point defined by\n\\[\nu^{\\ast}=(u_{1}^{*},0)=\\arg\\sup_{u\\in\\mathbb{R}_{+}\\times\\{0\\}}\\{\\langle\nu,(x,0)\\rangle-\\psi_{N}(u)\\}\n\\]\nis such that $\\psi_{N}^{(1)}(u^{\\ast})=(x,0)$. Since $\\mathbb{E\n[\\exp()]=\\exp(\\psi_{N}(u))$, by\ndifferentiating one gets\n\\[\n\\mathbb{E}[e^{}(\\mathcal{S\n_{N},\\;\\mathcal{M}_{N})]=\\psi_{N}^{(1)}(u^{\\ast})e^{\\psi_{N}(u^{\\ast\n)}=(x,0)e^{\\psi_{N}(u^{\\ast})},\n\\]\nso that $\\mathbb{E}_{u^{\\ast},N}[(\\mathcal{Z}_{N},\\mathcal{M}_{N})]=(x,0)$ and\n$Var_{u^{\\ast},N}[(\\mathcal{Z}_{N},\\mathcal{M}_{N})]=\\psi_{N}^{(2)}(u^{\\ast\n)$. 
Choosing $u=u^{*}$, integration by parts combined with straightforward\nchanges of variables yields\n\\[\n\\mathbb{E}_{u^{*},N}[e^{-\\left\\langle u^{*},(\\mathcal{Z}_{N}-x,\\mathcal{M\n_{N})\\right\\rangle }\\mathbb{I}\\{\\mathcal{Z}_{N}\\geq x,\\mathcal{M\n_{N}=0\\}]\\newline \\leq\\mathbb{P}^{*}_{u^{*},N}\\left\\{ \\mathcal{M\n_{N}=0\\right\\} .\n\\]\nHence, we have the bound:\n\\begin{equation}\n\\label{eq:prod}\\mathbb{P}\\left\\{ \\mathcal{Z}_{N}\\geq x,\\mathcal{M\n_{N}=0\\right\\} \\leq H(u^{*})\\times\\mathbb{P}_{u^{*},N}\\left\\{ \\mathcal{M\n_{N}=0\\right\\} .\n\\end{equation}\nWe shall bound each factor involved in \\eqref{eq:prod} separately. We start\nwith bounding $H(u^{*})$, which essentially boils down to bounding\n$\\mathbb{E}[e^{\\langle u^{*},(\\mathcal{Z}_{N},\\mathcal{M}_{N})\\rangle}]$.\n\n\\begin{lemma}\n\\label{lem:factor1} Under Theorem \\ref{thm:rejective}'s assumptions, we have:\n\\begin{align}\nH(u^{*}) & \\leq\\exp\\left( -\\frac{Var\\left( \\sum_{i=1}^{N}Z_{i} \\right)\n}{\\left( \\max_{1\\leq j\\leq N}\\vert x_{j}\\vert\/p_{j} \\right) ^{2}}h\\left(\n\\frac{N}{\\sqrt{n}}\\frac{x\\max_{1\\leq j\\leq N}\\vert x_{j}\\vert\/p_{j\n}{Var\\left( \\sum_{i=1}^{N}Z_{i} \\right) } \\right) \\right) \\\\\n& \\leq\\exp\\left( -\\frac{N^{2}x^{2}\/n}{2\\left( Var\\left( \\sum_{i=1\n^{N}Z_{i} \\right) +\\frac{1}{3}\\frac{N}{\\sqrt{n}}x \\max_{1\\leq j\\leq N}\\vert\nx_{j}\\vert\/p_{j} \\right) } \\right) ,\n\\end{align}\nwhere $h(x)=(1+x)\\log(1+x)-x$ for $x\\geq0$.\n\\end{lemma}\n\n\\begin{proof}\nUsing the standard argument leading to the Bennett-Bernstein bound, observe that: $\\forall i\\in \\{1,\\; \\ldots,\\; N \\}$, $\\forall u_1> 0$,\n\\begin{equation*}\n\\mathbb{E}[e^{u_1Z_i\n]\\leq \\exp\\left( Var(Z_i)\\frac{\\exp\\left(u_1\\max_{1\\leq j\\leq N\n\\frac{\\vert x_j\\vert}{p_j} \\right) - 1- u_1\\max_{1\\leq j\\leq N\n\\frac{\\vert x_j\\vert}{p_j}}{\\left(\\max_{1\\leq j\\leq N}\\frac{\\vert x_j\\vert\n{p_j}\\right)^2} \\right).\n\\end{equation*}\nsince we $\\mathbb{P\n$-almost surely have $\\vert Z_{i}\\vert \\leq \\max_{1\\leq j\\leq N\n\\vert x_j\\vert\/p_j $ for all $i\\in \\{1,\\; \\ldots,\\; N \\}$. 
Using the independence of the $Z_i$'s, we obtain that: $\\forall u_1> 0$,\n\\begin{multline*}\n\\mathbb{E}[e^{u_1 \\mathcal{Z}_N}]\\leq \\\\\n\\exp\\left( Var\\left( \\sum_{i=1\n^NZ_i\\right)\\frac{\\exp\\left(\\frac{\\sqrt{n}}{N}u_1\\max_{1\\leq j\\leq N\n\\frac{\\vert x_j\\vert}{p_j} \\right) - 1- \\frac{\\sqrt{n}}{N\nu_1\\max_{1\\leq j\\leq N}\\frac{\\vert x_j\\vert}{p_j}}{\\left(\\max_{1\\leq j\\leq N\n\\frac{\\vert x_j\\vert}{p_j}\\right)^2} \\right).\n\\end{multline*\n\nThe resulting upper bound for $H((u_1,0))$ being minimum for\n$$\nu_1=\\frac{N\n{\\sqrt{n}}\\frac{\\log \\left( 1+\\frac{N}{\\sqrt{n}}\\frac{x\\max_{1\\leq j\\leq N\n\\vert x_j\\vert\/p_j}{Var(\\sum_{i=1}^NZ_i)} \\right)}{\\max_{1\\leq j\\leq N\n\\vert x_j\\vert\/p_j},\n$$\nthis yields\n\\begin{equation}\nH(u^*)\\leq \\exp\\left( -\\frac{Var\\left( \\sum_{i=1}^NZ_i \\right)\n{\\left(\\max_{1\\leq j\\leq N}\\vert x_j\\vert\/p_j \\right)^2}h\\left( \\frac{N\n{\\sqrt{n}}\\frac{x\\max_{1\\leq j\\leq N}\\vert x_j\\vert\/p_j}{Var\\left( \\sum_{i=1\n^NZ_i \\right)} \\right) \\right).\n\\end{equation\n\nUsing the classical\ninequality\n\\begin{equation*}\nh(x)\\geq \\frac{x^{2\n}{2(1+x\/3)},\\text{ for }x\\geq 0,\n\\end{equation*\n\nwe also get that\n\\begin{equation}\\label{eq:prod}\nH(u^*)\\leq \\exp\\left( -\\frac{N^2x^2\/n}{2\\left(Var\\left( \\sum_{i=1\n^NZ_i \\right)+\\frac{1}{3}\\frac{N}{\\sqrt{n}}x \\max_{1\\leq j\\leq N\n\\vert x_j\\vert\/p_j \\right)} \\right).\n\\end{equation}\n$\\square$\n\\end{proof}\n \n\n\nWe now prove the lemma stated below, which provides an upper bound for\n$\\mathbb{P}^{*}_{u^{*},N}\\{ \\mathcal{M}_{N}=0\\}$.\n\n\\begin{lemma}\n\\label{lem:factor2} Under Theorem \\ref{thm:rejective}'s assumptions, there\nexists a universal constant $C^{\\prime}$ such that: $\\forall N\\geq1$,\n\\begin{equation}\n\\mathbb{P}_{u^{*},N}\\left\\{ \\mathcal{M}_{N}=0\\right\\} \\leq C^{\\prime\n}\\frac{1}{\\sqrt{d_{N}}}.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nUnder the probability measure $\\mathbb{P}_{u^*,N\n$, the $\\varepsilon _{i\n$'s are still\nindependent Bernoulli variables, with means now given by\n\\begin{equation*}\n\\pi^* _{i}\\overset{def}{=}\\sum_{s\\in\n\\mathcal{P}(\\mathcal{I}_{N\n)}e^{\\left\\langle u^*,(\\mathcal{Z}_{N}(s)\n\\mathcal{M}_{N\n(s))\\right\\rangle -\\psi _{N}(u^*)}\\mathbb{I}\\left\\{ i\\in\ns\\right\\} R_{N\n(s)>0,\n\\end{equation*}\nfor $i\\in \\{1,\\; \\ldots,\\; N \\}$.\nSince $\\mathbb{E\n_{u^{\\ast },N}[\\mathcal{M}_{N}]=0$, we have $\\sum_{i=1}^{N}\\pi^*_{i\n=n$ and thus\n\\begin{equation*}\nd_{N,u^{\\ast\n}}\\overset{def}{=\nVar_{u^{\\ast },N}\\left(\\sum_{i=1}^{N}\\varepsilon _{i}\\right)=\\sum_{i=1\n^{N}\\pi^* _{i}(1-\\pi ^*_{i})\\leq n.\n\\end{equation*\n\nWe can thus apply the local Berry-Esseen bound established in \\cite{Deheuvels}\nfor sums of independent (and possibly non identically) Bernoulli random variables, recalled in Theorem \\ref{thm:numerator\n.\n\\begin{theorem}\\label{thm:numerator}(\\cite{Deheuvels\n, Theorem 1.2) Let $(Y_{j,n})_{1\\leq j\\leq n\n$ be a triangular array of independent Bernoulli random variables with means $q_{1,n\n,\\; \\ldots,\\; q_{n,n\n$ in $(0,1)$ respectively. Denote by $\\sigma^2_n=\\sum_{i=1}^nq_{i,n\n(1-q_{i,n})$ the variance of the sum $\\Sigma_n=\\sum_{i=1}^nY_{i,n\n$ and by $\\nu_n=\\sum_{i=1}^nq_{i,n\n$ its mean. 
Considering the cumulative distribution function (cdf) $F_n(x)=\\mathbb{P\n\\{ \\sigma_n^{-1\n(\\Sigma_n-\\nu_n )\\leq x \\}$, we have: $\\forall n\\geq 1$,\n\\begin{equation}\n\\sup_x\\left(1+\\vert x\\vert^3 \\right)\\left\\vert F_n(x)-\\Phi(x)\\right\\vert\\leq \\frac{C\n{\\sigma_n},\n\\end{equation}\nwhere $\\Phi(x)=(2\\pi)^{-1\/2}\\int_{-\\infty\n^x\\exp(-z^2\/2)dz$ is the cdf of the standard normal distribution $\\mathcal{N\n(0,1)$ and $C<+\\infty$ is a universal constant.\n\\end{theorem\n\nApplying twice a pointwise version of the bound recalled above (for $x=0$ and $x=1\/\\sqrt{d_{N,u^*\n}$), we obtain that\n\\begin{multline*}\n\\mathbb{P}_{u^*,N}\\left\\{ \\mathcal{M\n_{N}=0\\right\\}= \\mathbb{P}_{u^*,N}\\left\\{d_{N,u^{\\ast\n}}^{-1\/2}\n\\sum_{i=1}^Nm_i\\leq 0\\right\\}\\\\- \\mathbb{P}_{u^*,N}\\left\\{d_{N,u^{\\ast\n\n}^{-1\/2} \\sum_{i=1}^Nm_i\\leq -d_{N,u^{\\ast\n}}^{-1\/2\n\\right\\}\\\\\n\\leq \\frac{2C}{\\sqrt{d_{N,u^*}}}+\\Phi(0)-\\Phi(-d_{N,u^*\n^{-1\/2})\\leq \\left(\\frac{1}{\\sqrt{2\\pi}}+2C\\right)\\frac{1}{\\sqrt{d_{N,u^*}\n},\n\\end{multline*\n\nby means of the finite increment theorem.\nFinally, observe that:\n\\begin{multline*}\nd_{N,u^*\n=\\mathbb{E}_{u^*, N}\\left[\\left(\\sum_{i=1}^Nm_i\\right)^2 \\right]=\\mathbb{E\n\\left[ \\left(\\sum_{i=1}^Nm_i\\right)^2\/H(u^*) \\right]\\\\\n\\geq \\mathbb{E\n\\left[ \\left(\\sum_{i=1}^Nm_i\\right)^2 \\right]=d_N,\n\\end{multline*\n\nsince we proved that $H(u^*)\\leq 1$. Combined with the previous bound, this yields the desired result.\n$\\square$\n\\end{proof\n\n \n\nLemmas \\ref{lem:factor1} and \\ref{lem:factor2} combined with Eq.\n\\eqref{eq:prod} leads to the bound stated in Lemma \\ref{lem:numerator}.\n\n\n\n\\subsection*{Proof of Theorem \\ref{thm:final}}\nWe start with proving the preliminary result below.\n\n\\begin{lemma}\n\\label{lem:bias}Let $\\pi_{1},\\; \\ldots, \\pi_N$ be the first order\ninclusion probabilities of a rejective sampling of size $n$ with canonical representation characterized by the Poisson weights $p_1,\\; \\ldots,\\; p_N$.Provided that $d_{N}=\\sum_{i=1}^{N}p_{i}(1-p_{i\n)\\geq1$, we have: $\\forall i\\in\\{1,\\; \\ldots,\\; N \\}$,\n\\[\n\\left\\vert \\frac{1}{\\pi_{i}}-\\frac{1}{p_{i}}\\right\\vert\\leq\\frac{6}{d_{N}}\\times \\frac{1-\\pi_{i}\n{\\pi_{i}}.\n\\]\n\\end{lemma}\n\\begin{proof}\nThe proof follows the representation (5.14) on page 1509 of \\cite{Hajek64}.\nWe have\n For all $i\\in \\{1,\\; \\ldots,\\; N\\}$, we have:\n\\begin{eqnarray*}\n\\frac{\\pi_{i}}{p_{i}}\\frac{1-p_{i}}{1-\\pi_{i}} &=&\\frac{\\sum_{s\\in \\mathcal{P}(\\mathcal{I}_N):\\; i\\in \\mathcal{I}_N\\setminus\\{s \\}}P(s)\\sum_{h\\in s}\\frac{1-p_{h}\n{\\sum_{j\\in s}(1-p_{j})+(p_{h}-p_{i})}}{ \\sum_{s\\in \\mathcal{P}(\\mathcal{I}_N):\\; i\\in\n\\mathcal{I}_N\\setminus\\{s \\}}P(s)}\\\\\n&=&\\frac{\\sum_{s:\\ i\\in \\mathcal{I}_N\\setminus\\{s \\}\nP_N(s)\\sum_{h\\in s}\\frac{1-p_{h}}{\\sum_{j\\in s}(1-p_{j})\\left( 1+\\frac\n{(p_{h}-p_{i})}{\\sum_{j\\in s}(1-p_{j})}\\right) }}{\\sum_{s:\\ i\\in \\mathcal{I}_N\\setminus\\{s \\}}P_N(s)}.\n\\end{eqnarray*}\nNow recall that for any $x\\in]-1,1[ $, we have:\n\\[\n1-x\\leq\\frac{1}{1+x}\\leq1-x+x^{2}.\n\\]\nIt follows that\n\\begin{align*}\n\\frac{\\pi_{i}}{p_{i}}\\frac{1-p_{i}}{1-\\pi_{i}} & \\leq1-\\left( \\sum_{s:\\ i\\in\n\\mathcal{I}_N\\setminus\\{s \\}}P(s)\\right) ^{-1}\\sum_{s:\\ i\\in \\mathcal{I}_N\\setminus\\{s \\}}P(s)\\sum_{h\\in s}\\frac{(1-p_{h\n)(p_{h}-p_{i})}{\\left( \\sum_{j\\in s}(1-p_{j})\\right) ^{2}}\\\\\n& +\\left( \\sum_{s:\\ i\\in 
\\mathcal{I}_N\\setminus\\{s \\}}P(s)\\right) ^{-1}\\sum_{s:\\ i\\in \\mathcal{I}_N\\setminus\\{s \\}\nP(s)\\sum_{h\\in s}\\frac{(1-p_{h})(p_{h}-p_{i})^{2}}{\\left( \\sum_{j\\in\ns}(1-p_{j})\\right) ^{3}\n\\end{align*}\nFollowing now line by line the proof on p. 1510 in \\cite{Hajek64} and noticing that $\\sum_{j\\in s\n(1-p_{j})\\geq1\/2d_{N}$ (see Lemma 2.2 in \\cite{Hajek64}), we have\n\\begin{align*}\n\\left\\vert \\sum_{h\\in s}\\frac{(1-p_{h})(p_{h}-p_{i})}{\\left( \\sum_{j\\in\ns}(1-p_{j})\\right) ^{2}}\\right\\vert & \\leq\\frac{1}{\\left( \\sum_{j\\in\ns}(1-p_{j})\\right) } \\leq\\frac{2}{d_{N}}\n\\end{align*}\nand similarl\n\\begin{align*}\n\\sum_{h\\in s}\\frac{(1-p_{h})(p_{h}-p_{i})^{2}}{\\left( \\sum_{j\\in s\n(1-p_{j})\\right) ^{3}} & \\leq\\frac{1}{\\left( \\sum_{j\\in s}(1-p_{j})\\right)\n^{2}} \\leq\\frac{4}{d_{N}^{2}}.\n\\end{align*}\nThis yieds: $\\forall i\\in\\{1,\\; \\ldots,\\; N \\}$,\n\\[\n1-\\frac{2}{d_{N}}\\leq\\frac{\\pi_{i}}{p_{i}}\\frac{1-p_{i}}{1-\\pi_{i}}\\leq\n1+\\frac{2}{d_{N}}+\\frac{4}{d_{N}^{2}\n\\]\nand\n\\[\np_{i}(1-\\pi_{i})(1-\\frac{2}{d_{N}})\\leq\\pi_{i}(1-p_{i})\\leq p_{i}(1-\\pi\n_{i})\\left(1+\\frac{2}{d_{N}}+\\frac{4}{d_{N}^{2}}\\right),\n\\]\nleading then to\n\\[\n-\\frac{2}{d_{N}}(1-\\pi_{i})p_{i}\\leq\\pi_{i}-p_{i}\\leq p_{i}\\left(1-\\pi_{i\n\\right)\\left(\\frac{2}{d_{N}}+\\frac{4}{d_{N}^{2}}\\right)\n\\]\nand finally to\n\\[\n-\\frac{(1-\\pi_{i})}{\\pi_{i}}\\frac{2}{d_{N}}\\leq\\frac{1}{p_{i}}-\\frac{1\n{\\pi_{i}}\\leq\\frac{(1-\\pi_{i})}{\\pi_{i}}\\left(\\frac{2}{d_{N}}+\\frac{4}{d_{N}^{2}}\\right).\n\\]\nSince $1\/d_{N}^{2}\\leq 1\/d_{N}$ as soon as $d_{N}\\geq1$, the lemma is proved. $\\square$\n\\end{proof}\n\\medskip\n\nBy virtue of lemma \\ref{lem:bias}, we obtain that:\n\\begin{equation*}\n\\left\\vert \\widehat{S}_{\\boldsymbol{\\pi}_N}^{\\boldsymbol{\\epsilon}_N^*}-\\widehat{S}_{\\boldsymbol{p} _{N\n}^{\\boldsymbol{\\epsilon} _{N}^{\\ast}}\\right\\vert\n \\leq\\frac{6}{d_{N}}\\sum_{i=1}^{N}\\frac{1}{\\pi_{i}}|x_{i}|=M_{N\n\\end{equation*}\nIt follows that\n\\begin{align*}\n\\mathbb{P}\\left\\{ \\widehat{S}_{\\boldsymbol{\\pi}_N}^{\\boldsymbol{\\epsilon}_N^*}-S_N>x\\right\\} &\n\\leq\\mathbb{P}\\left\\{ |\\widehat{S}_{\\boldsymbol{\\pi}_N}^{\\boldsymbol{\\epsilon}_N^*}-\\widehat{S}_{\\boldsymbol{p} _{N\n}^{\\boldsymbol{\\epsilon} _{N}^{\\ast}}|+\\widehat\n{S}_{\\boldsymbol{p} _{N}}^{\\boldsymbol{\\epsilon} _{N}^{\\ast}}-S_{N}>x\\right\\}\n\\\\\n& \\leq\\mathbb{P}\\left\\{ M_{N}+\\widehat{S}_{\\boldsymbol{p} _{N}\n^{\\boldsymbol{\\epsilon} _{N}^{\\ast}}-S_{N}>x\\right\\}\n\\end{align*}\nand a direct application of Theorem \\ref{thm:rejective} finally gives the desired result. \n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{s:Intro}\n\nIn recent years, there has been increasing interest in the study of structures \nthat can be presented by automata. The underlying idea \nis to apply techniques of automata theory to decision \nproblems that arise in logic and applications such as databases and verification. A typical decision problem \nis the model checking problem: for a \nstructure $\\mathcal A$ (e.g.\\ a graph), design an algorithm that, given a formula $\\phi(\\bar{x})$ in a formal system and a tuple $\\bar{a}$ from the structure, decides if \n$\\phi(\\bar{a})$ is true in $\\mathcal A$. 
In particular, when the formal system is the first order predicate logic or the monadic second order logic, we would like to know if the \ntheory of the structure is decidable. Fundamental early results in this direction by B\\\"uchi (\\cite{Buc60}, \\cite{Buc62}) and Rabin (\\cite{Rab69}) proved the decidability of the monadic second order theories of the successor on the natural numbers and of the binary tree.\nThere have been numerous applications and extensions of these results in logic, algebra \\cite{EPCHLT92}, verification and model checking \\cite{VW84} \\cite{Var96}, and databases \\cite{Var05}. Moreover, automatic structures provide a theoretical framework for constraint databases over discrete domains such as strings and trees \\cite{BL02}. Using simple closure properties and the decidability of the emptiness problem for automata, one can prove that the first order (and monadic second order) theories of some well-known structures are decidable. Examples of such structures are Presburger arithmetic and some of its extensions, the term algebra, the\nreal numbers under addition, finitely generated abelian groups, and the atomless Boolean algebra. Direct proofs of these results, without the use of automata, require non-trivial technical work.\n\n\nA structure $\\mathcal A=(A; R_0, \\ldots, R_m)$ is {\\bf automatic} if the domain $A$ and all the relations $R_0, \\ldots, R_m$ of the structure are recognised by finite automata (precise definitions are in the next section). \nIndependently, Hodgson \\cite{H82} and later Khoussainov and Nerode \\cite{KhN95} proved that for any given automatic structure there is an algorithm that solves the model checking problem for the first order logic. In particular, the first order theory of the structure is decidable. Blumensath and Gr\\\"adel proved a logical characterization theorem stating that automatic structures are exactly those definable in the fragment of arithmetic $(\\omega; +, |_2, \\leq, 0)$, where $+$ and $\\leq$ have their usual meanings and $|_2$ is a weak divisibility predicate for which $x|_2 y$ if and only if $x$ is a power of $2$ and divides $y$ (see \\cite{BG00}). In addition, for some classes of automatic structures there are characterization theorems that have direct algorithmic implications. For example, in \\cite{Del04}, Delhomm\\'e proved that automatic well-ordered sets are all strictly less than $\\omega^\\omega$. Using this characterization, \\cite{KhRS03} gives an algorithm which decides the isomorphism problem for automatic well-ordered sets. The algorithm is based on extracting the Cantor normal form for the ordinal isomorphic to the given automatic well-ordered set. Another characterization theorem of this ilk gives that automatic Boolean algebras are exactly those that are finite products of the Boolean algebra of finite and co-finite subsets of $\\omega$ \\cite{KhNRS04}. Again, this result can be used to show that the isomorphism problem for automatic Boolean algebras is decidable. \n\nAnother body of work is devoted to the study of resource-bounded complexity of the model checking problem for automatic structures. On the one hand, Gr\\\"adel and Blumensath (\\cite{BG00}) constructed examples of automatic structures whose first order theories are non-elementary. On the other hand, Lohrey in \\cite{Loh03} proved that the first order theory of any automatic graph of bounded degree is elementary. 
It is noteworthy that when both a first order formula $\\phi$ and an automatic structure $\\mathcal A$ are fixed, determining if a tuple $\\bar{a}$ from $\\mathcal A$ satisfies \n$\\phi(\\bar{x})$ can be done in linear time. There are also feasible time bounds on deciding the first order theories of automatic structures over the unary alphabet (\\cite{Blu99}, \\cite{KhLM}). \n\nMost current results demonstrate that automatic structures are not complex in various concrete senses.\nHowever, in this paper we use well-established concepts from both logic and model theory to prove results in the opposite direction. We now briefly describe the measures of complexity we use (ordinal heights of well-founded relations, Scott ranks of \nstructures, and Cantor-Bendixson ranks of trees) and connect them with the results of this paper.\n\n\n\nA relation $R$ is called {\\bf well-founded} if there is no infinite sequence $x_1,x_2,x_3, \\ldots$ such that $(x_{i+1}, x_{i})\\in R$ for $i \\in \\omega$. In computer science, well-founded relations are of interest due to a natural connection between well-founded sets and terminating programs. \nWe say that a program is {\\bf terminating} if every computation from an initial state is finite.\nThis is equivalent to well-foundedness of the collection of states reachable from the initial state, under the reachability relation \\cite{BG06}. The {\\bf ordinal height} is a measure of the depth of well-founded relations. Since all automatic structures are also computable structures, the obvious bound for ordinal heights of automatic well-founded relations is $\\omega_1^{CK}$ (the first non-computable ordinal). Sections \\ref{s:RanksofOrders} and \\ref{s:ranksWF} study the sharpness of this bound.\nTheorem \\ref{thm:OrderRank} characterizes automatic well-founded partial orders in terms of their (relatively low) ordinal heights, whereas Theorem \\ref{thm:HeightRank} shows that $\\omega_1^{CK}$ is the sharp bound \nin the general case.\n\n\\begin{theorem}\\label{thm:OrderRank} For each ordinal $\\alpha$, $\\alpha$ is the ordinal height of an automatic well-founded partial order if and only if $\\alpha< \\omega^\\omega$. \n\\end{theorem}\n\n\\begin{theorem}\\label{thm:HeightRank}\nFor each (computable) ordinal $\\alpha < \\omega_{1}^{CK}$, there is an automatic well-founded relation $\\mathcal A$ with ordinal height greater than $\\alpha$.\n\\end{theorem}\n\n\nSection \\ref{s:SR} is devoted to building automatic structures with high Scott ranks. The concept of Scott rank comes from a well-known theorem of Scott stating that for every countable structure $\\mathcal A$ there exists a sentence $\\phi$ in $L_{\\omega_1,\\omega}$-logic which characterizes $\\mathcal A$ up to isomorphism \\cite{Sco65}. The minimal quantifier rank of such a formula is called the Scott rank of $\\mathcal A$. A known upper bound on the Scott rank of computable structures implies that the Scott rank of automatic structures is at most $\\omega_1^{CK}+1$.\nBut, until now, all the known examples of automatic structures had low Scott ranks. Results in \\cite{Loh03}, \\cite{Del04}, \\cite{KhRS05} \nsuggest that the Scott ranks of automatic structures could be bounded by small ordinals. 
This intuition is falsified in Section \\ref{s:SR} with the theorem:\n\n\n\\begin{theorem}\\label{thm:ScottRank}\nFor each computable ordinal $\\alpha$\nthere is an automatic structure of Scott rank at least $\\alpha$.\n\\end{theorem}\n\nIn particular, this theorem gives a new proof that the isomorphism problem for automatic structures is $\\Sigma_1^1$-complete (another proof may be found in \\cite{KhNRS04}).\n\nIn the last section, we investigate the Cantor-Bendixson ranks of automatic trees. A\n{\\bf partial order tree} is a partially ordered set $(T, \\leq)$ such that there is a $\\leq$-minimal element of $T$, and each subset $\\{x \\in T : x \\leq y\\}$ is finite and is linearly ordered\nunder $\\leq$. A {\\bf successor tree} is a pair $(T, S)$ such that the reflexive and transitive closure $\\leq_S$ of $S$ produces a partial order tree \n$(T, \\leq_{S})$. The {\\bf derivative} of a tree $\\mathcal T$ is obtained by removing all the nonbranching paths of the tree. One applies the derivative operation to $\\mathcal T$ successively until a fixed point is reached. The minimal ordinal that is needed to reach the fixed point is called the {\\bf Cantor-Bendixson (CB) rank} of the tree. The CB rank plays an important role in logic, algebra, and topology. Informally, the CB rank tells us how far the structure is from algorithmically (or algebraically) simple structures. Again, the obvious bound on $CB$ ranks of automatic successor trees is $\\omega_1^{CK}$. \nIn \\cite{KhRS03}, it is proved that the CB rank of any automatic partial order tree is finite and can be computed from the automaton for the $\\leq$ relation on the tree. It has been an open question whether\nthe CB ranks of automatic successor trees can also be bounded by small ordinals. We answer this question in the following theorem.\n\n\n\n\\begin{theorem}\\label{thm:RecTrees}\nFor $\\alpha < \\omega_1^{CK}$ there is an automatic successor tree of CB rank $\\alpha$.\n\\end{theorem}\n\nThe main tool we use to prove results about high ranks is the configuration spaces of Turing machines, considered as automatic graphs.\nIt is important to note that graphs which arise as configuration spaces have very low model-theoretic complexity: their Scott ranks are at most $3$, and if they are well-founded then their ordinal heights are at most $\\omega$ (see Propositions \\ref{ppn:ConfigWF} and \\ref{ppn:ConfigScott}). Hence, the configuration spaces serve merely as building blocks in the construction of automatic structures with high complexity, rather than contributing materially to the high complexity themselves.\n\n\n\\section*{Acknowledgement}\nWe thank Moshe Vardi who posed the question about ranks of automatic well-founded relations. We also thank Anil Nerode and Frank Stephan with whom \nwe discussed Scott and Cantor-Bendixson ranks of automatic structures.\n\n\n\n\\section{Preliminaries}\\label{s:Prelim}\n\nA (relational) {\\bf vocabulary} is a finite sequence $(P_1^{m_1}, \\ldots, P_t^{m_t}, c_1, \\ldots, c_s)$, where each $P_j^{m_j}$ is a predicate symbol of arity $m_j>0$, and each $c_k$ is a constant symbol. A {\\bf structure} with this vocabulary is a tuple $\\mathcal A=(A;P_1^{\\mathcal A}, \\ldots, P_t^{\\mathcal A}, c_1^{\\mathcal A}, \\ldots, c_s^{\\mathcal A})$, where $P_j^{\\mathcal A}$ and $c_k^{\\mathcal A}$ are interpretations of the symbols of the vocabulary. When convenient, we may omit the superscripts $\\mathcal A$. 
We only consider infinite structures, that is, those whose universe is an infinite set.\n\n\nTo establish notation, we briefly recall some definitions associated with finite automata. A {\\bf finite automaton} $\\mathcal M$ over an alphabet $\\Sigma$ is a tuple\n$(S,\\iota,\\Delta,F)$, where $S$ is a finite set of {\\bf states}, $\\iota \\in S$\nis the {\\bf initial state}, $\\Delta \\subset S \\times \\Sigma \\times S$ is the\n{\\bf transition table}, and $F \\subset S$ is the set of {\\bf final states}.\nA {\\bf computation} of $\\mathcal A$ on a word $\\sigma_1 \\sigma_2 \\dots \\sigma_n$\n($\\sigma_i \\in \\Sigma$) is a sequence of states, say $q_0,q_1,\\dots,q_n$, such\nthat $q_0 = \\iota$ and $(q_i,\\sigma_{i+1},q_{i+1}) \\in \\Delta$ for all $i \\in\n\\{0,\\ldots,n-1\\}$. If $q_n \\in F$, then the computation is {\\bf successful}\nand we say that the automaton $\\mathcal M$ {\\bf accepts} the word $\\sigma_1 \\sigma_2 \\dots \\sigma_n$. The {\\bf language}\naccepted by the automaton $\\mathcal M$ is the set of all words accepted by $\\mathcal M$. In\ngeneral, $D \\subset \\Sigma^{\\star}$ is {\\bf finite automaton recognisable},\nor {\\bf regular}, if $D$ is the language accepted by some finite automaton~$\\mathcal M$.\n\n\n\nTo define automaton recognisable relations, we use $n$-variable (or $n$-tape) automata.\nAn {\\bf $n$--tape automaton} can be thought of as a one-way \nTuring machine with $n$ input tapes \\cite{Eil69}. Each tape is regarded as \nsemi-infinite, having written on it a word over the alphabet $\\Sigma$ followed \nby an infinite succession of blanks (denoted by $\\diamond$ symbols). The automaton \nstarts in the initial state, reads simultaneously the first symbol of each tape, \nchanges state, reads simultaneously the second symbol of each tape, \nchanges state, etc., until it reads a blank on each tape. The automaton then \nstops and accepts the $n$--tuple of words if it is in a final state. The set of all \n$n$--tuples accepted by the automaton is the relation recognised by the automaton. \nFormally, an $n$--tape automaton on $\\Sigma$ is a finite automaton over the alphabet $(\\Sigma_{\\diamond})^n$, where $\\Sigma_{\\diamond}=\\Sigma \\cup \\{\\diamond\\}$ and\n$\\diamond \\not \\in \\Sigma$.\nThe {\\bf convolution of a tuple} $(w_1,\\cdots,w_n) \\in\n\\Sigma^{\\star n}$ is the string $ c(w_1,\\cdots,w_n)$ of length $\\max_i|w_i|$\nover the alphabet $(\\Sigma_{\\diamond})^n$ which is defined as follows.\nIts $k$'th symbol is $(\\sigma_1,\\ldots,\\sigma_n)$ where $\\sigma_i$ is the\n$k$'th symbol of $w_i$ if $k \\leq |w_i|$ and $\\diamond$ otherwise.\nThe {\\bf convolution of a relation} $R \\subset \\Sigma^{\\star n}$ is the language\n$c(R) \\subset (\\Sigma_{\\diamond})^{n\\star}$ formed as the set of convolutions\nof all the tuples in $R$. \n An $n$--ary relation $R \\subset \\Sigma^{{\\star}n}$ is {\\bf finite automaton recognisable},\n or {\\bf regular}, if its convolution $c(R)$ is recognisable by an $n$--tape automaton.\n\n\\begin{definition} \\label{dfn:automatic} A structure $\\mathcal A=(A; R_0, R_1, \\ldots, R_m)$ is {\\bf automatic} over $\\Sigma$ if its domain $A$ and all relations \n$R_0$, $R_1$, $\\ldots$, $R_m$ are regular over $\\Sigma$. 
If $\\mathcal B$ is isomorphic to an automatic structure $\\mathcal A$\nthen we call $\\mathcal A$ an {\\bf automatic presentation} of $\\mathcal B$ and say that\n$\\mathcal B$ is {\\bf automatically presentable}.\n\\end{definition}\n\nThe configuration graph of any Turing machine\nis an example of an automatic structure. The graph is defined by letting the \nconfigurations of the Turing machine be the vertices, \nand putting an edge from configuration $c_1$ to configuration $c_2$ if the machine can make an instantaneous move from $c_1$ to $c_2$. Examples of automatically\npresentable structures are $(\\mathbb N, +)$, $(\\mathbb N, \\leq)$, $(\\mathbb N, S)$,\n$(\\mathbb{Z},+)$, the order on the rationals $(Q, \\leq)$, and the Boolean algebra\nof finite and co-finite subsets of $\\mathbb N$. In the following, we abuse terminology and identify the notions of \\textquotedblleft automatic\\textquotedblright~ and \\textquotedblleft automatically presentable\\textquotedblright~.\nMany examples of automatic structures can be formed using the $\\omega$-fold disjoint union of a structure $\\mathcal A$ (the disjoint union of $\\omega$ many \ncopies of $\\mathcal A$). \n\n\\begin{lemma}\\cite{Rub04}\\label{lm:omega-fold} If $\\mathcal A$ is automatic then its $\\omega$-fold disjoint union is isomorphic to an automatic structure.\n\\end{lemma}\n\\begin{proof}\nSuppose that $\\mathcal A = (A; R_{1}, R_2, \\ldots)$ is automatic. Define $\\mathcal A' = (A \\times 1^{\\star}; R'_{1}, R_2', \\ldots)$ by\n\\[\n\\langle (x,i), (y, j) \\rangle \\in R'_{m} \\qquad \\iff \\qquad i = j~ \\& ~\\langle x, y \\rangle \\in R_{m}, \\ \\ m=1,2,\\ldots.\n\\]\nIt is clear that $\\mathcal A'$ is automatic and is isomorphic to the $\\omega$-fold disjoint union of $\\mathcal A$. \n\\end{proof}\n\nThe class of automatic structures is a proper subclass of the computable structures. \nWe therefore mention some crucial definitions and facts about computable structures.\nGood references for the theory of computable structures include \\cite{Hariz98}, \\cite{KhSh99}. \n\n\\begin{definition}\nA {\\bf computable structure} is $\\mathcal A = (A; R_{1}, \\ldots, R_m)$ whose domain and relations are all computable. \n\\end{definition}\n\n The domains of computable structures can always be identified with the set $\\omega$ of natural numbers. Under this assumption, we introduce new constant symbols \n $c_n$ for each $n\\in \\omega$ and interpret $c_n$ as $n$. We expand the vocabulary of each structure to include these new constants $c_{n}$. In this context, $\\mathcal A$ is computable if and only if \n the {\\bf atomic diagram} of $\\mathcal A$ (the set of G\\\"odel numbers of all quantifier-free sentences in the extended vocabulary that are true in $\\mathcal A$) is a computable set.\n If $\\mathcal A$ is computable and $\\mathcal B$ is isomorphic to $\\mathcal A$ then we say that $\\mathcal A$ is a {\\bf computable presentation}\nof $\\mathcal B$. Note that if $\\mathcal B$ has a computable presentation then $\\mathcal B$ has $\\omega$ many computable presentations. In this paper, we will be coding computable structures into automatic ones.\n\n\nThe ranks that we use to measure the complexity of automatic structures take values in the ordinals. In particular, we will see that only a subset of the countable ordinals will play an important role. An ordinal is called {\\bf computable} if it is the order-type of a computable well-ordering of the natural numbers. 
The least ordinal which is not computable is denoted $\\omega_1^{CK}$ (after Church and Kleene). \n\n\\section{Ranks of automatic well-founded partial orders} \\label{s:RanksofOrders}\n\nIn this section we consider structures $\\mathcal A = (A; R)$ with a single binary relation. \nAn element $x$ is said to be {\\bf $R$-minimal for a set $X$} if for each \n$y \\in X$, $(y,x) \\notin R$. The relation $R$ is said to be {\\bf well-founded} \nif every non-empty subset of $A$ has an $R$-minimal element. \nThis is equivalent to saying that $(A; R)$ has no infinite chains \n$x_1, x_2, x_3, \\ldots$ where $(x_{i+1}, x_i) \\in R$ for all $i$. \n\n\n A {\\bf ranking function} for $\\mathcal A$ is an ordinal-valued function $f$ such that $f(y)< f(x)$ whenever $(y,x)\\in R$. If $f$ is a ranking function on $\\mathcal A$, let $ord(f)= \\sup\\{ f(x) : x \\in A \\}$. The structure $\\mathcal A$ is well-founded if and only if $\\mathcal A$ admits a ranking function. The {\\bf ordinal height} of $\\mathcal A$, denoted $r(\\mathcal A)$, is the least ordinal $\\alpha$ which is $ord(g)$ for some ranking function $g$ on $\\mathcal A$. An equivalent definition for the rank of $\\mathcal A$ is the following. We define the function $r_{\\mathcal A}$ by induction: for the $R$-minimal elements $x$,\nset $r_{\\mathcal A}(x)=0$; for $z$ not $R$-minimal, put $r_{\\mathcal A}(z)=\\sup\\{ r(y)+1 : (y,z) \\in R\\}$. Then $r_{\\mathcal A}$ is a ranking function admitted by $\\mathcal A$ and $r(\\mathcal A) = \\sup\\{ r_{\\mathcal A}(x) : x \\in A\\}$.\nFor $B \\subseteq A$, we write $r(B)$ for the ordinal height of the structure obtained by restricting the relation $R$ to the subset $B$. \n\n\\begin{lemma}\\label{lm:compRank}\nIf $\\alpha<\\omega_1^{CK}$, there is a computable well-founded relation of \nordinal height $\\alpha$.\n\\end{lemma}\n\\begin{proof}\nThis lemma is trivial: the ordinal height of an ordinal $\\alpha$ is $\\alpha$ itself. Since all computable ordinals are computable and well-founded relations, we are done.\n\\end{proof}\n \nThe next lemma follows easily from the well-foundedness of ordinals and of $R$. The proof is left to the reader.\n\n\\begin{lemma}\\label{lm:witnessRank}\nFor a structure $\\mathcal A = (A; R)$ where $R$ is well-founded, if $r(\\mathcal A) = \\alpha$ and $\\beta < \\alpha$ then there is an $x \\in A$ such that $r_{\\mathcal A}(x) = \\beta$.\n\\end{lemma}\n\nFor the remainder of this section, we assume further that $R$ is a partial order. For\nconvenience, we write $\\leq$ instead of $R$. Thus, we consider automatic well-founded partial orders $\\mathcal A=(A,\\leq)$. We will use the notion of {\\bf natural sum of ordinals}. The natural sum of ordinals $\\alpha, \\beta$ (denoted $\\alpha +' \\beta$) is defined recursively: $\\alpha +' 0 = \\alpha$, $0 +' \\beta = \\beta$, and $\\alpha +' \\beta$ is the least ordinal strictly greater than $\\gamma +' \\beta$ for all $\\gamma < \\alpha$ and strictly greater than $\\alpha +' \\gamma$ for all $\\gamma < \\beta$.\n\n\\begin{lemma}\nLet $A_1$ and $A_2$ be disjoint subsets of $A$ such that $A=A_1\\cup A_2$. \nConsider the partially ordered sets $\\mathcal A_1=(A_1,\\leq_1)$ and $\\mathcal A_2=(A_2,\\leq_2)$ obtained by restricting $\\leq$ to $A_1$ and $A_2$ respectively. Then, $r(\\mathcal A)\\leq \\alpha_1 +' \\alpha_2$, where $\\alpha_i=r(\\mathcal A_i)$. \n\\end{lemma}\n\\begin{proof}\nWe will show that there is a ranking function on $A$ whose range is contained in the ordinal\n$\\alpha_1 +' \\alpha_2$. 
\nFor each $x\\in A$\nconsider the partially ordered sets $\\mathcal A_{1,x}$ and $\\mathcal A_{2,x}$ obtained by restricting\n$\\leq$ to $\\{z\\in A_1 \\mid z < x\\}$ and $\\{z\\in A_2 \\mid z < x\\}$, respectively. \nDefine $f(x)=r(\\mathcal A_{1,x}) +' r(\\mathcal A_{2,x})$.\nWe claim that $f$ is a ranking function. Indeed, assume that $x0$ there is\n$u_n\\in A$ such that $r_{\\mathcal A}(u_n)=\\omega^n$. For each $u \\in A$ we define the set\n\\[\nu \\downarrow = \\{ x \\in A : x < u \\}.\n\\]\nNote that if $r_{\\mathcal A}(u)$ is a limit ordinal then $r_{\\mathcal A}(u) = r(u\\downarrow)$. We define a finite partition of $u \\downarrow$ in order to apply Corollary \\ref{cr:PartitionRank}. To do so, \nfor $u, v \\in \\Sigma^{\\star}$, define\n$X_{v}^{u} = \\{ vw \\in A : w \\in \\Sigma^{\\star} \\ \\& \\ vw < u \\}$. \nEach set of the form $u \\downarrow$ can then be partitioned based on the prefixes of words\nas follows:\n\\[\nu \\downarrow = \\{ x \\in A : |x| < |u | \\ \\& \\ x < u \\} \\cup \\bigcup_{v \\in \\Sigma^{\\star} : |v| = |u|} X_{v}^{u}.\n\\]\n(All the unions above are finite and disjoint.) Hence, applying Corollary \\ref{cr:PartitionRank}, for each $u_n$ there exists a $v_n$ such that $|u_n|=|v_n|$ and $r(X_{v_n}^{u_n})=r(u_n \\downarrow)=\\omega^n$.\n\n\nOn the other hand, we use the automata to define the following equivalence relation on pairs of words of equal lengths:\n\\begin{align*}\n(u,v) \\sim (u', v') \\ \\iff \\ &\\Delta_{A}(\\iota_{A}, v) = \\Delta_{A}(\\iota_{A}, v') \\ \\& \\\\ &\\Delta_{\\leq}(\\iota_{\\leq}, \\binom{v}{u}) = \\Delta_{\\leq}(\\iota_{\\leq}, \\binom{v'}{u'})\n\\end{align*}\nThere are at most $|S_{A}|\\times |S_{\\leq}|$ equivalence classes. Thus, the infinite sequence $(u_1, v_1)$, $(u_2, v_2)$, $\\ldots$ contains $m$, $n$ such that $m \\neq n$ and $(u_{m}, v_{m}) \\sim (u_{n}, v_{n})$. \n\n\\begin{lemma}\\label{lm:IsoXvu}\nFor any $u,v,u',v' \\in \\Sigma^{\\star}$, if $(u,v) \\sim (u', v')$ then $r(X_{v}^{u}) = r(X_{v'}^{u'})$.\n\\end{lemma}\n\nTo prove the lemma, consider $g: X_{v}^{u} \\to X_{v'}^{u'}$ defined as $g(vw) = v'w$. From the equivalence relation, we see that $g$ is well-defined, bijective, and order preserving. Hence $X_v^u \\cong X_{v'}^{u'}$ (as partial orders). Therefore, $r(X_{v}^{u}) = r(X_{v'}^{u'})$.\n\n\n\nBy Lemma \\ref{lm:IsoXvu}, $\\omega^{m} = r(X_{v_{m}}^{u_{m}}) = r(X_{v_{n}}^{u_{n}}) = \\omega^{n}$, a contradiction with the assumption that $m \\neq n$. Therefore, there is no automatic well-founded partial order of ordinal height greater than or equal to $\\omega^{\\omega}$.\n\\end{proof}\n\n\n\\section{Ranks of automatic well-founded relations}\\label{s:ranksWF}\n\n\\subsection{Configuration spaces of Turing machines}\\label{s:Config}\nIn the forthcoming constructions, we embed computable structures into \nautomatic ones via configuration spaces of Turing machines.\nThis subsection provides terminology and background for these constructions. \nLet $\\mathcal M$ be an $n$-tape deterministic Turing machine. \nThe {\\bf configuration space} of $\\mathcal M$, denoted by $Conf(\\mathcal M)$, is a directed graph whose nodes are configurations of $\\mathcal M$. The nodes are $n$-tuples, each of whose coordinates \nrepresents the contents of a tape. Each tape is encoded as $(w ~q ~ w')$, \nwhere $w, w' \\in \\Sigma^{\\star}$ are the symbols on the tape before and \nafter the location of the read\/write head, and $q$ is one of the states \nof $\\mathcal M$. 
The edges of the graph are all the pairs of the form $(c_1,c_2)$ such that \nthere is an instruction of $\\mathcal M$ that transforms \n$c_{1}$ to $c_{2}$. The configuration space is an automatic graph. The out-degree of every vertex in $Conf(\\mathcal M)$ is at most $1$; the in-degree need not be $1$. \n\n\\begin{definition}\nA deterministic Turing machine $\\mathcal M$ is {\\bf reversible} if $Conf(\\mathcal M)$ consists only of finite chains and chains of type $\\omega$.\n\\end{definition}\n\n\\begin{lemma} \\cite{Ben73} \\label{lm:reverse}\nFor any deterministic $1$-tape Turing machine there is a reversible $3$-tape Turing machine which accepts the same language.\n\\end{lemma}\n\n\\begin{proof}(Sketch)\nGiven a deterministic Turing machine, define a $3$-tape Turing machine\nwith a modified set of instructions. \nThe modified instructions have the property that neither the domains nor the ranges overlap. The first tape performs the computation exactly as the original machine would have done. As the new machine executes each instruction, it stores the index of the instruction on the second tape, forming a history. Once the machine enters a state which would have been halting for the original machine, the output of the computation is copied onto the third tape. Then, the machine runs the computation backwards and erases the history tape. The halting configuration contains the input on the first tape, blanks on the second tape, and the output on the third tape.\n\\end{proof}\n\nWe establish the following notation for a $3$-tape reversible Turing machine $\\mathcal M$ given by the construction in this lemma. A {\\bf valid initial configuration} of $\\mathcal M$ is of the form $(\\lambda~ \\iota ~ x , \\lambda, \\lambda )$, where $x$ is in the domain, $\\lambda$ is the empty string, and $\\iota$ is the initial state of $\\mathcal M$. From the proof of Lemma \\ref{lm:reverse}, observe that a {\\bf final (halting)} configuration is of the form $(x, \\lambda, \\lambda ~q_{f} ~y)$, \nwith $q_{f}$ a halting state of $\\mathcal M$. Also, because of the reversibility assumption, all the chains in $Conf(\\mathcal M)$ \nare either finite or $\\omega$-chains (the order type of the natural numbers). In particular, this means that $Conf(\\mathcal M)$ is well-founded. We call an element of in-degree $0$ a {\\bf base} (of a chain). The set of valid initial or final configurations is regular. We classify the components (chains) of $Conf(\\mathcal M)$ as follows:\n\\begin{itemize}\n\\item {\\bf Terminating computation chains}: finite chains whose base is a valid initial configuration; that is, one of the form $(\\lambda ~\\iota~ x, \\lambda, \\lambda )$, for $x \\in \\Sigma^{\\star}$.\n\\item {\\bf Non-terminating computation chains}: infinite chains whose base is a valid initial configuration. \n\\item {\\bf Unproductive chains}: chains whose base is not a valid initial configuration.\n\\end{itemize} \n\nConfiguration spaces of reversible Turing machines are locally finite graphs (graphs of finite degree) and well-founded. Hence, the following proposition guarantees that their ordinal heights are small.\n\n\\begin{proposition} \\label{ppn:ConfigWF} \nIf $G = (A,E)$ is a locally finite graph then either $E$ has an infinite chain, or $E$ is well-founded and the ordinal height of $E$ is at most $\\omega$.\n\\end{proposition}\n\n\\begin{proof}\nSuppose $G$ is a locally finite graph and $E$ is well-founded. For a contradiction, suppose $r(G) > \\omega$. Then, by Lemma \\ref{lm:witnessRank}, there is $v \\in A$ with $r(v) = \\omega$.
By definition, $r(v) = \\sup\\{ r(u)+1 : u E v \\}$. Since $\\omega$ is a limit ordinal, the values $r(u)$ for $u E v$ are finite but unbounded, so there are infinitely many elements $E$-below $v$, a contradiction with the local finiteness of $G$.\n\\end{proof}\n\n\\subsection{Automatic well-founded relations of high rank}\n\nWe are now ready to prove that $\\omega_1^{CK}$ is the sharp bound for ordinal heights of automatic well-founded relations.\n\n\\vspace{5pt}\n\n\\noindent{\\bf Theorem \\ref{thm:HeightRank}.~}{\\em\nFor each computable ordinal $\\alpha < \\omega_{1}^{CK}$, there is an automatic well-founded relation $\\mathcal A$ with ordinal height greater than $\\alpha$.}\n\n\n\\begin{proof} \nThe proof of the theorem uses properties of Turing machines and their configuration spaces. We take a computable well-founded relation whose ordinal height is $\\alpha$, and ``embed'' it into an automatic well-founded relation with similar ordinal height. \n\n\nBy Lemma \\ref{lm:compRank}, let $\\mathcal C=(C; L_\\alpha)$ be a computable well-founded relation of ordinal height $\\alpha$. \nWe assume without loss of generality that $C = \\Sigma^{\\star}$ for some finite alphabet $\\Sigma$. Let $\\mathcal M$ be the Turing machine computing the relation $L_{\\alpha}$. On each pair $(x,y)$ from the domain, $\\mathcal M$ halts and outputs \\textquotedblleft yes\\textquotedblright~ or \\textquotedblleft no\\textquotedblright~. By Lemma \\ref{lm:reverse}, we can assume that $\\mathcal M$ is reversible. Recall that $Conf(\\mathcal M) = (D, E)$ is an automatic graph. \nWe define the domain of our automatic structure to be $A = \\Sigma^{\\star} \\cup D$. The binary relation of the automatic structure is:\n\\begin{align*}\nR = E ~\\cup~ &\\{ (x, (\\lambda ~ \\iota ~ (x, y), \\lambda, \\lambda) ) : x,y \\in \\Sigma^{\\star}\\} ~\\cup \\\\\n&\\{ (( (x,y), \\lambda, \\lambda~q_{f}~\\text{\\textquotedblleft yes\\textquotedblright~}), y) : x,y \\in \\Sigma^{\\star}\\}.\n\\end{align*}\nIntuitively, the structure $(A; R)$ is a stretched-out version of $(C; L_\\alpha)$ with infinitely many finite pieces extending from elements of $C$, and with disjoint pieces which are either finite chains or chains of type $\\omega$. The structure $(A; R)$ is automatic because its domain is a regular set of words\nand the relation $R$ is recognisable by a $2$-tape automaton. We should verify, however, that $R$ is well-founded. Let $Y \\subseteq A$ be non-empty. If $Y \\cap C \\neq \\emptyset$ then since $(C; L_{\\alpha})$ is well-founded, there is $x \\in Y \\cap C$ which is $L_{\\alpha}$-minimal. The only possible elements $u$ in $Y$ for which $(u,x) \\in R$ are those which lie on computation chains connecting some $z \\in C$ with $x$. Since each such computation chain is finite, on each such chain there is an $R$-minimal element $u$ of $Y$ below $x$, and any such $u$ is $R$-minimal for $Y$. (If there are no such elements at all, then $x$ itself is $R$-minimal for $Y$.) On the other hand, if $Y \\cap C = \\emptyset$, then $Y$ consists of elements of disjoint finite chains and chains of type $\\omega$. On each such chain $Y$ has a minimal element, and any of these elements is $R$-minimal for $Y$. \nTherefore, $(A; R)$ is an automatic well-founded structure. \n\n\nWe now consider the ordinal height of $(A; R)$. \nFor each element $x \\in C$, an easy induction on $r_{\\mathcal C}(x)$ shows that \n$r_{\\mathcal C} (x) \\leq r_{\\mathcal A}(x) \\leq \\omega+r_{\\mathcal C} (x)$. \nWe denote by $\\ell(a,b)$ the (finite) length of the computation chain of $\\mathcal M$ with input $(a,b)$.
For any element $a_{x,y}$ in the computation chain which represents the computation of $\\mathcal M$ determining whether $(x,y) \\in L_{\\alpha}$, we have \\ \n$r_{\\mathcal A}(x) \\leq r_{\\mathcal A}(a_{x,y}) \\leq r_{\\mathcal A}(x) + \\ell(x,y)$. \\ \nFor any element $u$ in an unproductive chain of the configuration space, $0\\leq r_{\\mathcal A}(u)<\\omega$. Therefore, since $C \\subset A$, \\ \n$r(\\mathcal C) \\leq r(\\mathcal A) \\leq \\omega + r(\\mathcal C)$.\n\\end{proof}\n\n\n\n\\section{Automatic Structures and Scott Rank}\\label{s:SR}\n \n The Scott rank of a structure was introduced in the proof of Scott's Isomorphism Theorem \\cite{Sco65}. Since then, variants of the Scott rank have been used in the computable model theory literature. Here we follow the definition of Scott rank from \\cite{CGKn05}.\n\n\\begin{definition}\nFor a structure $\\mathcal A$ and tuples $\\bar{a}, \\bar{b} \\in A^{n}$ (of equal length), define\n\\begin{itemize}\n\\item $\\bar{a} \\equiv^{0} \\bar{b}$ if $\\bar{a}, \\bar{b}$ satisfy the same quantifier-free formulas in the language of $\\mathcal A$; \n\\item For $\\alpha > 0$, $\\bar{a} \\equiv^{\\alpha} \\bar{b}$ if for all $\\beta < \\alpha$, for each $\\bar{c}$ (of arbitrary length) there is $\\bar{d}$ such that \n$\\bar{a}, \\bar{c} \\equiv^{\\beta} \\bar{b}, \\bar{d}$; and for each $\\bar{d}$ (of arbitrary length) there is $\\bar{c}$ such that \n$\\bar{a}, \\bar{c} \\equiv^{\\beta} \\bar{b}, \\bar{d}$.\n\\end{itemize}\nThen, the {\\bf Scott rank} of the tuple $\\bar{a}$, denoted by $\\mathcal{SR}(\\bar{a})$, is the least \n$\\beta$ such that for all $\\bar{b} \\in A^{n}$, $\\bar{a}\\equiv^{\\beta} \\bar{b}$ implies that $(\\mathcal A, \\bar{a}) \\cong (\\mathcal A, \\bar{b})$. Finally, the Scott rank of $\\mathcal A$, denoted by $\\mathcal{SR}(\\mathcal A)$, is the\nleast $\\alpha$ greater than the Scott ranks of all tuples of $\\mathcal A$.\n\\end{definition}\n\n\\begin{example} $\\mathcal{SR}(\\mathbb Q, \\leq) = 1$, $\\mathcal{SR}(\\omega, \\leq) = 2$, and $\\mathcal{SR}( n \\cdot \\omega, \\leq) = n+1$.\n\\end{example}\n\nConfiguration spaces of reversible Turing machines are locally finite graphs. By the proposition below, they all have low Scott rank.\n\\begin{proposition}\\label{ppn:ConfigScott}\nIf $G = (V,E)$ is a locally finite graph, then $\\mathcal{SR}(G) \\leq 3$.\n\\end{proposition}\n\\begin{proof}\nThe neighbourhood of diameter $n$ of a subset $U$, denoted $B_{n}(U)$, is defined as follows: $B_0(U) = U$ and $B_n(U)$ is the set of $v \\in V$ which can be reached from $U$ by $n$ or fewer edges. The proof of the proposition relies on two lemmas.\n\n\\begin{lemma}\\label{lm:ConfigScott_1}\nLet $\\bar{a}, \\bar{b}$ be tuples from $V$ such that $\\bar{a} \\equiv^2 \\bar{b}$. Then for all $n$, there is a bijection of the $n$-neighbourhoods around $\\bar{a}, \\bar{b}$ which sends $\\bar{a}$ to $\\bar{b}$ and which respects $E$.\n\\end{lemma}\n\\begin{proof}\nFor a given $n$, let $\\bar{c} = B_{n}(\\bar{a})\\setminus \\bar{a}$. Note that $\\bar{c}$ is a finite tuple because of the local finiteness condition. Since $\\bar{a} \\equiv^2 \\bar{b}$, there is $\\bar{d}$ such that $\\bar{a} \\bar{c} \\equiv^1 \\bar{b} \\bar{d}$. It suffices to show that $B_{n}(\\bar{b}) = \\bar{b} \\bar{d}$ (as sets); the map sending $\\bar{a}\\bar{c}$ to $\\bar{b}\\bar{d}$ coordinatewise is then the required bijection. Two set inclusions are needed. First, we show that each $d_{i} \\in B_{n}(\\bar{b})$. By definition, $c_{i} \\in B_{n}(\\bar{a})$; let $a_{j}, u_{1}, \\ldots, u_{n-1}$ witness this.
Then since $\\bar{a} \\bar{c} \\equiv^1 \\bar{b} \\bar{d}$, there are $v_{1}, \\ldots, v_{n-1}$ such that $\\bar{a}\\bar{c} \\bar{u}\\equiv^0 \\bar{b}\\bar{d}\\bar{v}$. In particular, we have that if $c_{i} E u_{1} E \\cdots E u_{n-1} E a_{j}$, then also $d_{i} E v_{1} E \\cdots E v_{n-1} E b_{j}$ (and likewise if the $E$ relation is in the other direction). Hence, $d_{i} \\in B_{n}(\\bar{b})$. Conversely, suppose $v \\in B_{n}(\\bar{b}) \\setminus \\bar{d}$. Let $b_{j}, v_{1}, \\ldots, v_{n-1}$ witness this; arguing as above, we then find a new element of $B_{n}(\\bar{a})$ which is not in $\\bar{c}$, a contradiction.\n\\end{proof}\n\n\n\\begin{lemma}\\label{lm:ConfigScott_2}\nLet $G=(V,E)$ be a locally finite graph. Suppose $\\bar{a}, \\bar{b}$ are tuples from $V$ such that for all $n$, $(B_{n}(\\bar{a}), E, \\bar{a}) \\cong (B_{n}(\\bar{b}), E, \\bar{b})$. Then there is an isomorphism between the component of $G$ containing $\\bar{a}$ and that containing $\\bar{b}$ which sends $\\bar{a}$ to $\\bar{b}$.\n\\end{lemma}\n\\begin{proof}\nWe consider a tree of partial isomorphisms of $G$. The nodes of the tree are bijections from $B_{n}(\\bar{a})$ to $B_{n}(\\bar{b})$ which respect the relation $E$ and map $\\bar{a}$ to $\\bar{b}$. Node $f$ is a child of node $g$ in the tree if $\\text{dom}(g) = B_{n}(\\bar{a})$, $\\text{dom}(f) = B_{n+1}(\\bar{a})$ and $f \\supset g$. Note that the root of this tree is the map which sends $\\bar{a}$ to $\\bar{b}$. Moreover, the tree is finitely branching (by local finiteness), and it is infinite because, by assumption, $(B_{n}(\\bar{a}), E, \\bar{a}) \\cong (B_{n}(\\bar{b}), E, \\bar{b})$ for every $n$. Therefore, K\\\"onig's Lemma gives an infinite path through this tree. The union of all partial isomorphisms along this path is the required isomorphism.\n\\end{proof}\n\nTo prove the proposition, we note that for any tuples $\\bar{a}, \\bar{b}$ from $V$ such that $\\bar{a} \\equiv^2 \\bar{b}$, Lemmas \\ref{lm:ConfigScott_1} and \\ref{lm:ConfigScott_2} yield an isomorphism from the component of $\\bar{a}$ to the component of $\\bar{b}$ that maps $\\bar{a}$ to $\\bar{b}$. Hence, if $\\bar{a} \\equiv^2 \\bar{b}$, there is an automorphism of $G$ that maps $\\bar{a}$ to $\\bar{b}$. Therefore, for each tuple $\\bar{a}$ from $V$, $\\mathcal{SR}(\\bar{a}) \\leq 2$, so $\\mathcal{SR}(G) \\leq 3$.\n\\end{proof}\n\nLet $\\mathcal C=(C; R_{1}, \\ldots, R_{n})$ be a computable structure. Recall that since $C$ is a computable set, we may assume it is $\\Sigma^\\star$ for some finite alphabet $\\Sigma$. We construct an \nautomatic structure $\\mathcal A$ whose Scott rank is (close to) the Scott rank of $\\mathcal C$. The construction of $\\mathcal A$ involves connecting the configuration spaces of Turing machines computing the relations $R_{1}, \\ldots, R_{n}$. Note that, by Proposition \\ref{ppn:ConfigScott}, the configuration spaces themselves have low Scott rank; the high Scott rank of the resulting automatic structure must therefore come from the way the construction connects them to $\\mathcal C$. The construction in some sense expands $\\mathcal C$ into an automatic structure. We comment that expansions do not necessarily preserve the Scott rank. For example, any computable structure $\\mathcal C$ has an expansion with Scott rank $2$; the expansion is obtained by adding a successor relation on the domain to the signature.\n\n\n We detail the construction for $R_{i}$. Let $\\mathcal M_{i}$ be a Turing machine for $R_{i}$.
\nBy a simple modification of the machine we assume that $\\mathcal M_{i}$ halts if and only if its output is \\textquotedblleft yes\\textquotedblright~. By Lemma \\ref{lm:reverse}, we can also assume that $\\mathcal M_{i}$ is reversible. We now modify the configuration space $Conf(\\mathcal M_{i})$ so as to respect the isomorphism type of $\\mathcal C$. \nThis will ensure that the construction (almost) preserves the Scott rank of $\\mathcal C$. We use the terminology from Subsection \\ref{s:Config}.\n\n\n\n\n{\\bf Smoothing out unproductive parts}. The length and number of unproductive chains are determined by the machine $\\mathcal M_{i}$ and hence may differ even for Turing machines computing the same relation. In this stage, we standardize the format of the unproductive part of the configuration space. We wish to add enough redundant information to the unproductive section of the structure so that, if two given computable structures are isomorphic, the unproductive parts of the corresponding automatic representations are also isomorphic.\nWe add $\\omega$-many chains of length $n$ (for each $n$) and $\\omega$-many copies of $\\omega$. This ensures that the (smoothed) unproductive sections of the configuration spaces of any two Turing machines are isomorphic. Adding this redundancy preserves automaticity, since the operation is a disjoint union of automatic structures.\n\n\n\n\n{\\bf Smoothing out lengths of computation chains}. We turn our attention to the chains which have valid initial configurations at their base. The length of each finite chain records the length of the computation required to return a \\textquotedblleft yes\\textquotedblright~ answer. We will smooth out these chains by adding \\textquotedblleft fans\\textquotedblright~ to each base. For this, we connect to each base of a computation chain a structure which consists of $\\omega$ many chains of each finite length. To do so we follow Rubin \\cite{Rub04}: consider the structure whose domain is $0^{\\star} 0 1^{\\star}$ and whose relation is given by $x E y$ if and only if $|x| = |y|$ and $y$ is the least element of $0^{\\star} 0 1^{\\star}$ which is lexicographically greater than $x$. This structure has a finite chain of every finite length. As in Lemma \\ref{lm:omega-fold}, we take the $\\omega$-fold disjoint union of the structure and identify the bases of all the finite chains. We get a \\textquotedblleft fan\\textquotedblright~ with infinitely many chains of each finite size whose base can be identified with a valid initial configuration. Also, an infinite chain emanates from the base (namely, the original computation chain) if and only if $R_{i}$ does not hold of the input tuple corresponding to the base. The result is an automatic graph, $Smooth(R_{i}) = ( D_{i}, E_{i})$, which extends $Conf(\\mathcal M_{i})$.\n\n\n\n\n{\\bf Connecting domain symbols to the computations of the relation}. \nWe apply the construction above to each $R_{i}$ in the signature of $\\mathcal C$. \nTaking the union of the resulting automatic graphs and adding vertices for the domain, we have the structure $(\\Sigma^{\\star} \\cup \\cup_i D_{i}, E_{1}, \\ldots, E_{n})$ (where we assume that the $D_i$ are disjoint).\nWe assume without loss of generality that each $\\mathcal M_{i}$ has a different initial state, and denote it by $\\iota_{i}$.
We add $n$ predicates $F_i$ to the signature of the automatic structure connecting the elements of the domain of $\\mathcal C$ with the computations of the relations $R_{i}$:\n$$F_i = \\{ (x_{0}, \\ldots, x_{m_{i}-1}, (\\lambda~\\iota_{i}~(x_{0},\\ldots, x_{m_{i}-1}), \\lambda, \\lambda)) \\mid x_{0}, \\ldots, x_{m_{i}-1} \\in \\Sigma^{\\star} \\},$$\nwhere $m_{i}$ is the arity of $R_{i}$. Note that for $\\bar{x} \\in \\Sigma^{\\star}$, $R_{i}(\\bar{x})$ holds if and only if \n$F_{i} (\\bar{x}, (\\lambda ~\\iota_{i}~\\bar{x}, \\lambda, \\lambda))$ holds and all $E_{i}$ chains emanating from $(\\lambda~\\iota_{i}~\\bar{x}, \\lambda, \\lambda)$ are finite. \nWe have built the automatic structure $$\\mathcal A= (\\Sigma^{\\star} \\cup \\cup_i D_{i}, E_{1}, \\ldots, E_{n}, F_{1}, \\ldots, F_{n}).$$ Two technical lemmas are used to show that the Scott rank of $\\mathcal A$ is close to the Scott rank of $\\mathcal C$:\n\n\\begin{lemma}\\label{lm:EquivTransfer}\nFor $\\bar{x}, \\bar{y}$ in the domain of $\\mathcal C$ and for any ordinal $\\alpha$, if $\\bar{x} \\equiv_{\\mathcal C}^{\\alpha} \\bar{y}$ then $\\bar{x} \\equiv_{\\mathcal A}^{\\alpha} \\bar{y}$.\n\\end{lemma}\n\n\\begin{proof}\nLet $X = \\text{dom}(\\mathcal A) \\setminus \\Sigma^{\\star}$. We prove the stronger result that for any ordinal $\\alpha$, and for all $\\bar{x}, \\bar{y} \\in \\Sigma^{\\star}$ and $\\bar{x}', \\bar{y}' \\in X$, if the following assumptions hold\n\\begin{enumerate}\n\\item \\label{asmp:C}$\\bar{x} \\equiv_{\\mathcal C}^{\\alpha} \\bar{y}$;\n\\item \\label{asmp:I}$\\langle \\bar{x}', E_{i} : i =1 \\ldots n\\rangle_{\\mathcal A} \\cong_{f} \\langle \\bar{y}', E_{i} : i =1 \\ldots n\\rangle_{\\mathcal A}$ (so the corresponding substructures of $\\mathcal A$ are isomorphic) with $f(\\bar{x}') = \\bar{y}'$; and\n\\item \\label{asmp:E}for each $x'_{k} \\in \\bar{x}'$, each $i=1, \\ldots, n$ and each subsequence $j$ of indices of length $m_{i}$, \n\\[\nx'_{k} = (\\lambda ~\\iota_{i}~\\bar{x}_{j}, \\lambda, \\lambda) ~~ \\iff ~~ y'_{k} = (\\lambda ~\\iota_{i}~\\bar{y}_{j}, \\lambda, \\lambda),\n\\]\nwhere $\\bar{x}_{j}$ and $\\bar{y}_{j}$ denote the subtuples of $\\bar{x}$ and $\\bar{y}$ given by $j$,\n\\end{enumerate} \nthen $\\bar{x} \\bar{x}' \\equiv_{\\mathcal A}^{\\alpha} \\bar{y} \\bar{y}'$. The lemma follows if we take $\\bar{x}'$ and $\\bar{y}'$ to be the empty tuple.\n\nWe show the stronger result by induction on $\\alpha$. If $\\alpha = 0$, we need to show that for each $i,k, k', k_{0}, \\ldots, k_{m_{i}-1}$, \n\\[\nE_{i}(x'_{k}, x'_{k'}) \\iff E_{i}(y'_{k}, y'_{k'}), \n\\]\nand that \n\\[\nF_{i}(x_{k_{0}}, \\ldots, x_{k_{m_{i}-1}}, x'_{k'}) \\iff F_{i}(y_{k_{0}}, \\ldots, y_{k_{m_{i}-1}}, y'_{k'}). \n\\]\nThe first statement follows by assumption \\ref{asmp:I}, since the isomorphism must preserve the $E_{i}$ relations and maps $\\bar{x}'$ to $\\bar{y}'$. The second statement follows by assumption \\ref{asmp:E}. \n\nAssume now that $\\alpha >0$ and that the result holds for all $\\beta < \\alpha$. Let $\\bar{x}, \\bar{y} \\in \\Sigma^{\\star}$ and $\\bar{x}', \\bar{y}' \\in X$ be such that the assumptions of the lemma hold. We will show that $\\bar{x} \\bar{x}' \\equiv_{\\mathcal A}^{\\alpha} \\bar{y} \\bar{y}'$. Let $\\beta < \\alpha$ and suppose $\\bar{u} \\in \\Sigma^{\\star}, \\bar{u}' \\in X$. By assumption \\ref{asmp:C}, there is $\\bar{v} \\in \\Sigma^{\\star}$ such that $\\bar{x}\\bar{u} \\equiv_{\\mathcal C}^{\\beta} \\bar{y} \\bar{v}$. By the construction (in particular, the smoothing steps), we can find a corresponding $\\bar{v}' \\in X$ such that assumptions \\ref{asmp:I}, \\ref{asmp:E} hold.
Applying the inductive hypothesis, we get that $\\bar{x} \\bar{u} \\bar{x}'\\bar{u}' \\equiv_{\\mathcal A}^{\\beta} \\bar{y} \\bar{v}\\bar{y}' \\bar{v}'$. Analogously, given $\\bar{v}, \\bar{v}'$ we can find the necessary $\\bar{u}, \\bar{u}'$. Therefore, $\\bar{x}\\bar{x}' \\equiv_{\\mathcal A}^{\\alpha}\\bar{y} \\bar{y}'$.\n\\end{proof}\n\n\\begin{lemma}\\label{lm:CRankHigher}\nLet $\\bar{x} \\bar{x}' \\bar{u}$ be a tuple from $\\Sigma^{\\star} \\cup \\cup_i D_i$, where $\\bar{x} \\in \\Sigma^{\\star}$ and $\\bar{x}'$, $\\bar{u}$ consist of configuration-space elements lying on productive and unproductive chains, respectively. Then there is $\\bar{y} \\in \\Sigma^{\\star}$ with $\\mathcal{SR}_{\\mathcal A} (\\bar{x} \\bar{x}' \\bar{u} ) \\leq 2 + \\mathcal{SR}_{\\mathcal C} (\\bar{y})$.\n\\end{lemma}\n\\begin{proof}\nWe use the notation $X_{P}$ to mean the subset of $X = A \\setminus \\Sigma^{\\star}$ which corresponds to elements on fans associated with productive chains of the configuration space. We write $X_{U}$ to mean the subset of $X$ which corresponds to the unproductive chains of the configuration space. Therefore, $A = \\Sigma^{\\star} \\cup X_{P} \\cup X_{U}$, a disjoint union. Thus, we will show that for each $\\bar{x} \\in \\Sigma^{\\star}$, $\\bar{x}' \\in X_{P}$, $\\bar{u} \\in X_{U}$ there is $\\bar{y} \\in \\Sigma^{\\star}$ such that $\\mathcal{SR}_{\\mathcal A} (\\bar{x} \\bar{x}' \\bar{u} ) \\leq 2 + \\mathcal{SR}_{\\mathcal C} (\\bar{y})$.\n\nGiven $\\bar{x}, \\bar{x}', \\bar{u}$, let $\\bar{y} \\in \\Sigma^{\\star}$ be a tuple, minimal under inclusion, such that $\\bar{x} \\subset \\bar{y}$ and $\\bar{x}' \\subset \\langle \\bar{y}, E_{i}, F_{i} : i =1 \\ldots n\\rangle_{\\mathcal A} $. We show that $\\bar{y}$ is the desired witness. First, we observe that since the unproductive part of the structure is disconnected from the productive elements we can consider the two independently. Moreover, because the structure of the unproductive part is predetermined and simple, for $\\bar{u}, \\bar{v} \\in X_{U}$, if $\\bar{u} \\equiv_{\\mathcal A}^{1} \\bar{v}$ then $(\\mathcal A, \\bar{u}) \\cong (\\mathcal A, \\bar{v})$. It remains to consider the productive part of the structure. \n\n\nConsider any $\\bar{z} \\in \\Sigma^{\\star}$, $\\bar{z}' \\in X_{P}$ satisfying $\\bar{z}' \\subset \\langle \\bar{z}, E_{i}, F_{i} : i =1 \\ldots n\\rangle_{\\mathcal A} $. We claim that $\\mathcal{SR}_{\\mathcal A}(\\bar{z} \\bar{z}') \\leq 2 + \\mathcal{SR}_{\\mathcal C}(\\bar{z})$. It suffices to show that for all $\\alpha$, for all $\\bar{w} \\in \\Sigma^{\\star}, \\bar{w}' \\in X_{P}$, \n\\[\n\\bar{z}\\bar{z}'~\\equiv_{\\mathcal A}^{2+\\alpha}~\\bar{w}\\bar{w}' \\qquad \\implies \\qquad \\bar{z}~\\equiv_{\\mathcal C}^{\\alpha}~\\bar{w}.\n\\]\nThis is sufficient for the following reason. If $\\bar{z} \\bar{z}' \\equiv_{\\mathcal A}^{2 + \\mathcal{SR}_{\\mathcal C}(\\bar{z})} \\bar{w} \\bar{w}'$ then $\\bar{z} \\equiv_{\\mathcal C}^{\\mathcal{SR}_{\\mathcal C}(\\bar{z})} \\bar{w}$ and hence $(\\mathcal C, \\bar{z}) \\cong (\\mathcal C, \\bar{w})$. From this automorphism of $\\mathcal C$, we can define an automorphism of $\\mathcal A$ mapping $\\bar{z} \\bar{z}'$ to $\\bar{w} \\bar{w}'$, because $\\bar{z} \\bar{z}' \\equiv_{\\mathcal A}^{2} \\bar{w} \\bar{w}'$ and hence for each $i$, the relative positions of $\\bar{z}'$ and $\\bar{w}'$ in the fans above $\\bar{z}$ and $\\bar{w}$ are isomorphic.
Therefore, $2 + \\mathcal{SR}_{\\mathcal C}(\\bar{z}) \\geq \\mathcal{SR}_{\\mathcal A}(\\bar{z}\\bar{z}')$.\n\n\nSo, we now show that for all $\\alpha$, for all $\\bar{w} \\in \\Sigma^{\\star}, \\bar{w}' \\in X_{P}$, $\\bar{z}\\bar{z}' \\equiv_{\\mathcal A}^{2+\\alpha} \\bar{w} \\bar{w}'$ implies that $\\bar{z} \\equiv_{\\mathcal C}^{\\alpha} \\bar{w}$. We proceed by induction on $\\alpha$. For $\\alpha = 0$, suppose that $\\bar{z}\\bar{z}' \\equiv_{\\mathcal A}^{2} \\bar{w} \\bar{w}'$. This implies that for each $i$ and for each subsequence of length $m_{i}$ of the indices, the $E_{i}$-fan above $\\bar{z}_{j}$ has an infinite chain if and only if the $E_{i}$-fan above $\\bar{w}_{j}$ does. Therefore, $R_{i}(\\bar{z}_{j})$ if and only if $R_{i}(\\bar{w}_{j})$. Hence, $\\bar{z} \\equiv_{\\mathcal C}^{0} \\bar{w}$, as required. For the inductive step, we assume the result holds for all $\\beta < \\alpha$. Suppose that $\\bar{z}\\bar{z}' \\equiv_{\\mathcal A}^{2+\\alpha} \\bar{w} \\bar{w}'$. Let $\\beta < \\alpha$ and $\\bar{c} \\in \\Sigma^{\\star}$. Then $2 + \\beta < 2 +\\alpha$ so by definition there is $\\bar{d} \\in \\Sigma^{\\star}, \\bar{d}' \\in X_{P}$ such that $\\bar{z}\\bar{z}' \\bar{c}\\equiv_{\\mathcal A}^{2+\\beta} \\bar{w} \\bar{w}' \\bar{d} \\bar{d}'$. However, since $2 + \\beta > 1$, $\\bar{d}'$ must be empty (elements in $\\Sigma^{\\star}$ cannot be $1$-equivalent to elements in $X_{P}$). Then by the induction hypothesis, $\\bar{z} \\bar{c} \\equiv_{\\mathcal C}^{\\beta} \\bar{w} \\bar{d}$. The argument works symmetrically if we are given $\\bar{d}$ and want to find $\\bar{c}$. Thus, $\\bar{z} \\equiv_{\\mathcal C}^{\\alpha} \\bar{w}$, as required.\n\\end{proof}\n\n\nPutting Lemmas \\ref{lm:EquivTransfer} and \\ref{lm:CRankHigher} together, we can prove the main result about our construction.\n\n\\begin{theorem}\\label{thm:SR} Let $\\mathcal C$ be a computable structure and construct the automatic structure $\\mathcal A$ from it as above. \nThen $\\mathcal{SR}(\\mathcal C) \\leq \\mathcal{SR}(\\mathcal A) \\leq 2 + \\mathcal{SR}(\\mathcal C)$.\n\\end{theorem}\n\n\\begin{proof}\nLet $\\bar{x}$ be a tuple in the domain of $\\mathcal C$. Then, by the definition of Scott rank, $\\mathcal{SR}_{\\mathcal A}(\\bar{x})$ is the least ordinal $\\alpha$ such that for all $\\bar{y} \\in \\text{dom}(\\mathcal A)$, $\\bar{x} \\equiv_{\\mathcal A}^{\\alpha} \\bar{y}$ implies that $(\\mathcal A, \\bar{x}) \\cong (\\mathcal A, \\bar{y})$; and similarly for $\\mathcal{SR}_{\\mathcal C}(\\bar{x})$. We first show that $\\mathcal{SR}_{\\mathcal A}(\\bar{x}) \\geq \\mathcal{SR}_{\\mathcal C}(\\bar{x})$. Suppose $\\mathcal{SR}_{\\mathcal C}(\\bar{x}) = \\beta$. We assume for a contradiction that $\\mathcal{SR}_{\\mathcal A}(\\bar{x})= \\gamma < \\beta$. Consider an arbitrary $\\bar{z} \\in \\Sigma^{\\star}$ (the domain of $\\mathcal C$) such that $\\bar{x} \\equiv_{\\mathcal C}^{\\gamma} \\bar{z}$. By Lemma \\ref{lm:EquivTransfer}, $\\bar{x} \\equiv_{\\mathcal A}^{\\gamma} \\bar{z}$. But, the definition of $\\gamma$ as the Scott rank of $\\bar{x}$ in $\\mathcal A$ implies that $(\\mathcal A, \\bar{x}) \\cong (\\mathcal A, \\bar{z})$. Now, $\\mathcal C$ is $L_{\\omega_{1}, \\omega}$ definable in $\\mathcal A$ and therefore inherits the isomorphism. Hence, $(\\mathcal C, \\bar{x}) \\cong (\\mathcal C, \\bar{z})$.
But, this implies that $\\mathcal{SR}_{\\mathcal C}(\\bar{x}) \\leq \\gamma < \\beta = \\mathcal{SR}_{\\mathcal C}(\\bar{x})$, a contradiction.\n\nSo far, we have that for each $\\bar{x} \\in \\Sigma^{\\star}$, $\\mathcal{SR}_{\\mathcal A}(\\bar{x}) \\geq \\mathcal{SR}_{\\mathcal C}(\\bar{x})$. Hence, since $\\text{dom}(\\mathcal C) \\subset \\text{dom}(\\mathcal A)$, \n\\begin{align*}\n\\mathcal{SR} (\\mathcal A) &= \\sup\\{\\mathcal{SR}_{\\mathcal A}(\\bar{x}) +1: \\bar{x} \\in \\text{dom}(\\mathcal A)\\} \\\\\n&\\geq \\sup\\{\\mathcal{SR}_{\\mathcal A}(\\bar{x}) +1: \\bar{x} \\in \\text{dom}(\\mathcal C)\\} \\\\\n&\\geq \\sup\\{\\mathcal{SR}_{\\mathcal C}(\\bar{x}) +1: \\bar{x} \\in \\text{dom}(\\mathcal C)\\} = \\mathcal{SR}(\\mathcal C).\n\\end{align*}\n\nIn the other direction, we wish to show that $ \\mathcal{SR}(\\mathcal A) \\leq 2+ \\mathcal{SR}(\\mathcal C)$. Suppose this is not the case. Then there is $\\bar{x} \\bar{x}' \\bar{u} \\in \\mathcal A$ such that $\\mathcal{SR}_{\\mathcal A}(\\bar{x} \\bar{x}' \\bar{u} ) \\geq 2 + \\mathcal{SR}(\\mathcal C)$. By Lemma \\ref{lm:CRankHigher}, there is $\\bar{y} \\in \\Sigma^{\\star}$ such that $2 + \\mathcal{SR}_{\\mathcal C} (\\bar{y}) \\geq 2 + \\mathcal{SR}(\\mathcal C)$, a contradiction since $\\mathcal{SR}_{\\mathcal C}(\\bar{y}) < \\mathcal{SR}(\\mathcal C)$.\n\\end{proof}\n\nRecent work in the theory of computable structures has focussed on finding computable structures of high Scott rank. Nadel \\cite{Nad85} proved that any computable structure has Scott rank at most $\\omega_{1}^{CK} + 1$. Early on, Harrison \\cite{Harr68} showed that there is a computable ordering of type $\\omega_{1}^{CK}( 1 + \\eta)$ (where $\\eta$ is the order type of the rational numbers). This ordering has Scott rank $\\omega_{1}^{CK}+1$, as witnessed by any element outside the initial segment of type $\\omega_{1}^{CK}$. However, it was not until much more recently that a computable structure of Scott rank $\\omega_{1}^{CK}$ was produced (see Knight and Millar \\cite{KnM}). A recent result of Cholak, Downey, and Harrington gives the first natural example of a structure with Scott rank $\\omega_1^{CK}$: the computably enumerable sets under inclusion \\cite{CDH}.\n\n\\begin{corollary}\nThere is an automatic structure with Scott rank $\\omega_1^{CK}$. There is an automatic structure with Scott rank $\\omega_1^{CK}+1$.\n\\end{corollary}\n\nWe also apply the construction to \\cite{GonKn02}, where it is proved that there are computable structures with Scott ranks above each computable ordinal. In this case, we get the following theorem.\\\\\n\n\\noindent{\\bf Theorem \\ref{thm:ScottRank}.~}{\\em For each computable ordinal $\\alpha$, there is an automatic structure of Scott rank at least $\\alpha$.}\n\n\n\n\\section{Cantor-Bendixson Rank of Automatic Successor Trees}\\label{s:CBdefs}\n\nIn this section we show that there are automatic successor trees of high Cantor-Bendixson (CB) \nrank. Recall the definitions of partial order trees and successor trees from Section \\ref{s:Intro}.\nNote that if $(T,\\leq)$ is an automatic partial order tree then the successor tree $(T,S)$, where the relation $S$ is defined by $S(x,y) \\iff (x < y) \\ \\& \\ \\neg \\exists z (x < z < y)$, is automatic, since the first-order definable relations of an automatic structure are again regular.
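As a simple illustration, consider the infinite binary tree $(\\{0,1\\}^{\\star}, \\preceq)$, where $\\preceq$ is the prefix order: it is an automatic partial order tree, and the associated successor relation is simply\n\\[\nS(x,y) \\iff y = x0 \\ \\text{ or } \\ y = x1,\n\\]\nwhich is clearly recognisable by a finite automaton.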
\n\n\\begin{definition}\nThe {\\bf derivative} of a (partial order or successor) tree $T$, $d(T)$, is the subtree of $T$ whose domain is \n\\[\n\\{ x \\in T: \\text{ $x$ lies on at least two infinite paths in $T$}\\}.\n\\]\nBy induction, $d^{0}(T) = T$, $d^{\\alpha+1} (T) = d(d^{\\alpha}(T))$, and for $\\gamma$ a limit ordinal, $d^{\\gamma}(T) = \\cap_{\\beta < \\gamma} d^{\\beta}(T)$. The {\\bf CB rank} of the tree, $CB(T)$, is the least $\\alpha$ such that $d^{\\alpha}(T) = d^{\\alpha+1}(T)$.\n\\end{definition}\n\nThe CB ranks of automatic partial order trees are finite \\cite{KhRS03}. This is not true of automatic successor trees.\nThe main theorem of this section provides a general technique for building trees of given CB ranks. Before we get to it, we give some examples of automatic successor trees whose CB ranks are low.\n\n\\begin{example} \nFor each $n \\in \\omega$, there is an automatic partial order tree (hence an automatic successor tree) whose CB rank is $n$.\n\\end{example}\n\n\\begin{proof}\nThe tree $T_n$ is defined over the $n$ letter alphabet $\\{a_1,\\ldots, a_n\\}$ as follows. The domain of the tree is $a_{1}^{\\star} \\cdots a_{n}^{\\star}$. The order $\\leq_n$ is the prefix partial order. Therefore, the successor relation is given as follows:\n\\begin{equation*}\nS(a_{1}^{\\ell_{1}} \\cdots a_{i}^{\\ell_{i}}) = \\begin{cases}\n\t\\{ a_{1}^{\\ell_{1}} \\cdots a_{i}^{\\ell_{i}+1}, a_{1}^{\\ell_{1}} \\cdots a_{i}^{\\ell_{i}}a_{i+1} \\} \\qquad \\text{if $1 \\leq i