diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzfprb" "b/data_all_eng_slimpj/shuffled/split2/finalzzfprb" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzfprb" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\\subsection{Motivation and content}\n\nA quantum field theory can be defined via a Lagrangian density $\\mathcal L$, simply called Lagrangian hereafter. In perturbative computations in this theory, the monomials of degree $n>2$ in $\\mathcal L$ correspond to $n$-valent interaction vertices in Feynman diagrams. The monomials of degree two define the propagator of the field. We only consider scalar quantum fields in this paper. A free scalar quantum field theory has no self-interaction and is defined via the Lagrangian \n\\begin{align}\\label{Lfree} \n\\mathcal L_\\phi &=\\frac 12 \\partial_\\mu \\phi(x) \\partial^\\mu \\phi(x) -\\frac 12 m^2 \\phi^2(x),\n\\end{align}\nwhere $m\\geq 0 $ is the mass of a $\\phi$-particle. We allow $m=0$ for the massles theory. $x$ is a point in spacetime which we take to be 4-dimensional for concreteness even if the results do not depend on this choice.\n\nWe express the field $\\phi(x)$ as a diffeomorphism\n\\begin{align}\\label{def_diffeomorphism} \n\\phi(x) &= \\sum_{j=0}^\\infty a_j \\rho^{j+1}(x), \\qquad a_0=1\n\\end{align}\nof another field $\\rho(x)$. $\\{a_j\\}_{j\\geq 1}$ are constants with respect to spacetime. The constraint $a_0=1$ means the diffeomorphism is tangent to identity, i.e. the fields $\\rho$ and $\\phi$ coincide at leading order. This replacement, when applied to the Lagrangian density $\\mathcal L_\\phi$ of $\\phi$, gives rise to a Lagrangian $\\mathcal L_\\rho$ of $\\rho$ which generally involves monomials of any order. Thus, $\\rho$ is an interacting quantum field theory even if the original field $\\phi$ was free.\n\n If \\cref{def_diffeomorphism} is applied to the classical field theory before canonical quantization, it amounts to a canonical transformation which leaves the poisson brackets intact \\cite{nakanishi_covariant_1990} and only changes the Lagrangian, thus relating theories with different Lagrangians.\n \nIn the framework of the path integral, observables appear invariant under diffeomorphisms as the diffeomorphism can be undone by a redefinition of the integration measure. However, different lines of argument lead to different results \\cite{apfeldorf} which might be due to operator ordering ambiguities \\cite{pointPathIntegral}. One possibility to resolve these puzzles is to understand field diffeomorphisms order by order in perturbation theory and then, in a later step, extend these results to a statement about the generating functionals of Feynman graphs. This might be a mathematically cleaner way than direct, formal manipulations of field variables within (divergent) generating functionals. This argument is inspired by the recent proof \\cite{jackson_robust_2017} that the Legendre transform - which relates the generating functionals of connected Feynman graphs to that of 1PI-graphs - can be understood order by order without problems regarding the divergence of said generating functionals in quantum field theory. \nIn this paper we focus on the first of the two steps, i.e. the change of the Lagrangian density $\\mathcal L_\\phi \\mapsto \\mathcal L_\\rho=\\mathcal L_\\phi(\\phi(\\rho))$ which is induced by a transformation $\\phi \\mapsto \\rho$ and examination of the Feynman rules and correlation functions of the theory defined by $\\mathcal L_\\rho$. 
\n\nThis paper is a continuation of earlier work by Kreimer, Yeats and Velenich \\cite{KY17,velenich}.\nIn the remainder of this section and \\cref{sec_free}, we will set up notation and review the results of \\cite{KY17}, which mainly consider a diffeomorphism of an underlying free theory. Many concepts developed there can be readily applied to diffeomorphisms of an underlying interacting theory, which will be done in \\cref{subs_int}. Next, we proceed in \\cref{sec_cancellation} to a possible application. Namely, we show that a diffeomorphism can - for offshell correlation functions - alter the type of interaction present in the theory. In \\cref{sec_arbitrary} we note that the results of \\cite{KY17} for the free theory are not limited to the specific momentum dependence of the propagator used there. Finally, in \\cref{subs_nonlocal} we present a possibility of including derivatives into the transformation $\\phi \\mapsto \\rho$ such that the welcome combinatorial structure of the local diffeomorphism is conserved. \n \n\\subsection{Prerequisites}\n\nA Lagrangian is a power series in the field variable $\\phi$, hence replacing $\\phi$ by $\\phi(\\rho)$ according to \\cref{def_diffeomorphism} amounts to an insertion of power series into each other. For formal power series, the coefficients of the composition are given by Fa\\`{a} di Bruno's formula \\cite{flajolet_analytic_2009}: If\n\\begin{align*} \nf(t) &= \\sum_{n=1}^\\infty f_n t^n \\quad \\text{ and } \\quad g(t) = \\sum_{n=0}^\\infty g_n t^n \n\\end{align*} \nthen\n\\begin{align}\\label{faadibruno}\n[t^n] \\left( f\\left( g(t) \\right) \\right) &=\\frac{1}{n!} \\sum_{k=1}^n k!f_k \\cdot B_{n,k} \\left( 1! g_1, 2! g_2, \\ldots, (n+1-k)! g_{n+1-k} \\right) .\n\\end{align}\nHere, $[t^n]$ denotes extraction of the $n^{\\text{th}}$ coefficient (i.e. $[t^n]f(t) = f_n$) and $B_{n,k}$ are the partial Bell polynomials, defined via\n\\begin{align}\\label{bell_generating}\n\\sum_{n=0}^\\infty \\sum_{k=0}^n B_{n,k} \\left( x_1, x_2, \\ldots \\right) u^k \\frac{t^n}{n!} &= \\exp \\left( u \\sum_{j=1}^\\infty x_j \\frac{t^j}{j!}\\right) . \n\\end{align}\nBell polynomials count the possible partitions of $\\left \\lbrace 1, \\ldots, n \\right \\rbrace $ into $k$ nonempty disjoint subsets:\n\\begin{align*}\nB_{n,k} \\left( x_1, x_2, x_3, \\ldots \\right) &=\\sum_P x_{\\abs {P_1}} \\cdots x_{\\abs{P_k}}\n\\end{align*}\nwhere\n\\begin{align}\\label{bell_partitions}\nP &= \\big \\lbrace \\emptyset \\neq P_i \\subseteq \\left \\lbrace 1, \\ldots, n \\right \\rbrace ~\\forall i, \\quad P_i \\cap P_j = \\emptyset ~ \\forall i\\neq j, \\\\\n&\\qquad P_1 \\cup \\ldots \\cup P_k = \\left \\lbrace 1, \\ldots, n \\right \\rbrace \\big \\rbrace. \\nonumber\n\\end{align}\n\n\n\nInversion of a power series is given by Lagrange inversion \\cite{lagrangeInversion}, which again applies for formal power series regardless of convergence as functions \\cite{henrici_algebraic_1964}:\n\\begin{align}\\label{lagrange_inversion} \n \\left( f^{-1} \\right) _n &= \\frac 1 {n!} \\sum_{k=1}^{n-1} \\frac{1}{f_1^{n+k} }B_{n-1+k, k} \\left( 0, -2! f_2, -3! f_3, \\ldots \\right), \\quad \\left( f^{-1} \\right) _1=\\frac{1}{f_1} . 
\n\\end{align}\nWe note in passing that concatenation and inversion of formal power series can also be interpreted as coproduct and antipode of the Fa\\`{a} di Bruno Hopf algebra \\cite{figueroa_faa_2005}.\n\n\n For fixed $a\\in \\mathbb N_0$ and $b\\in \\mathbb N$ the \\emph{Fuss-Catalan numbers} are defined as\n\t\\begin{align}\\label{fc}\n\tF_m (a,b) &= \\frac{b}{ma+b}\\binom{ma+b}{m} = b\\frac{(ma+b-1)! }{(ma+b-m)! m!}.\n\t\\end{align}\n \n\nWe will also use the Euler characteristic which relates the number of vertices $\\abs{V_\\Gamma}$, (internal) edges $\\abs{E_\\Gamma}$ and loops $\\abs \\Gamma$ of a connected graph $\\Gamma$: \n\\begin{align}\\label{euler_graph}\n\\abs{V_\\Gamma} - \\abs{E_\\Gamma} + \\abs{\\Gamma} &=1.\n\\end{align}\n\n\n\n\n\\section{Local diffeomorphisms of a free field}\\label{sec_free}\nA significant part of this section is a review of \\cite{KY17}, introducing concepts and results needed for the current paper.\n\n\\subsection{Feynman rules }\n\n\\begin{definition}\\label{def_offshellvariable} \n\tFor a 4-momentum $p$ in a theory with mass $m\\geq 0$, the corresponding \\emph{offshell variable} is defined as\n\t\\begin{align*}\n\tx_p &:= p^2-m^2.\n\t\\end{align*}\n\\end{definition}\nThis generalizes to sums of numbered momenta in a slight abuse of notation, e.g. $x_{2+5} :=x_{p_2 + p_5}\\equiv (p_2+p_5)^2-m^2$. We will also use the notation $x_e:=p_e^2-m^2$ where $e$ is some edge in a graph with momentum $p_e$ flowing through it.\n\n\nApplying the diffeomorphism \\cref{def_diffeomorphism} to the free Lagrangian \\cref{Lfree} and collecting the derivatives of the kinetic term by partial integration yields\n\\begin{align}\\label{L_free_rho} \n\\mathcal L_\\rho := \\mathcal L_\\phi(\\phi(\\rho))&= - \\sum_{n=1}^{\\infty} \\frac{f_{n+1}}{n!} \\rho^{n} \\partial_\\mu \\partial^\\mu \\rho -m^2\\sum_{n=2}^{\\infty} \\frac{ c_{n-2}}{n!} \\rho^{n} .\n\\end{align}\nThe quantities $f_n, g_n$ appear as coupling constants induced by the diffeomorphism. \n\\begin{align*}\nf_n&= B_{n-2,1} ( 2! a_1, 3! a_2, \\ldots) + B_{n-2,2} (2! a_1, 3! a_2, \\ldots ), \\\\\n c_{n-2}&=B_{n,2} \\left( 1, 2! a_1, \\ldots \\right) ,\\\\\ng_n &:= n f_n - c_{n-2}= \\frac {n(n-2)!}{2 } \\sum_{k=0}^{n-2} a_{n-k-2} a_k (n-k-2) k .\n\\end{align*}\nFrom \\cref{L_free_rho}, one reads off the $n$-valent vertex Feynman rules\n\\begin{align}\\label{diff_vn}\niv_n &= i f_n \\cdot \\left( x_1 + \\ldots + x_n \\right) + i g_n m^2, \\quad n \\geq 2. 
\n\\end{align}\nWe will subsequently call these vertices \\emph{diffeomorphism vertices}.\nThe first ones together with their explicit Feynman rules are depicted in \\cref{free_vertex_picture}.\n\n \\begin{figure}[htbp]\n\t\\centering\n\t\\begin{tikzpicture}\n\t\n\t\n\t\\node at (-1,0) {$iv_3=$};\n\t\n\t\\node [diffVertex] (c) at (0,0) {} ;\n\t\n\t\\draw [thick] (c) -- + (90:.5);\n\t\\draw [thick] (c) -- + (210:.5);\n\t\\draw [thick] (c) -- + (330:.5);\n\t\n\t\\node at (.5,0) [anchor=west,text width = 9 cm] {$=2ia_1 \\left(x_1+x_2+x_3\\right)$};\n\t\n\t\n\t\\node at (-1,-1.5) {$iv_4=$};\n\t\n\t\\node [diffVertex] (c) at (0,-1.5) {} ;\n\t\n\t\\draw [thick] (c) -- + (45:.5);\n\t\\draw [thick] (c) -- + (135:.5);\n\t\\draw [thick] (c) -- + (225:.5);\n\t\\draw [thick] (c) -- + (315:.5);\n\t\n\t\\node at (.5,-1.5) [anchor=west,text width = 9cm]{$= 4im^2 a_1^2 + i\\left( 6 a_2+4 a_1^2 \\right) \\left(x_1+x_2+x_3+x_4\\right) $};\n\t\n\t\n\t\\node at (-1,-3) {$iv_5=$};\n\t\n\t\\node [diffVertex] (c) at (0,-3) {} ;\n\t\n\t\\draw [thick] (c) -- + (90:.5);\n\t\\draw [thick] (c) -- + (162:.5);\n\t\\draw [thick] (c) -- + (234:.5);\n\t\\draw [thick] (c) -- + (306:.5);\n\t\\draw [thick] (c) -- + (18:.5);\n\t\n\t\\node at (.5,-3) [anchor=west,text width = 11cm]{$= 60 im^2 a_1 a_2 + i\\left( 24 a_3+36 a_1 a_2 \\right) \\left(x_1+x_2+x_3+x_4+x_5\\right) $};\n\t\n\t\n\t\n\t\n\t\\end{tikzpicture}\n\t\\caption{Graphical representation of the diffeomorphism vertices $iv_n$ from \\cref{diff_vn}. The numbered momenta correspond to edges adjacent to the vertex.}\n\t\\label{free_vertex_picture}\n\\end{figure}\n \n\n\n\n\n\n\n\n\\FloatBarrier\n\\subsection{Tree sums with one external edge offshell}\n\nSince the diffeomorphism \\cref{def_diffeomorphism} is tangent to identity, the 2-point vertex is unaltered. The field $\\rho$ has the same propagator (=non-amputated time ordered 2-point function) as $\\phi$, with \\cref{def_offshellvariable}\n\\begin{align}\\label{free_propagator} \n \\Gamma_{2, \\text{free}}^{-1} = \\langle \\rho(p) \\rho(-p)\\rangle &= \\frac{i}{p^2-m^2}=\\frac{i}{x_p}.\n\\end{align}\n\n\\begin{definition}\\label{def_bn}\n\tThe tree sums $b_n$ for $n\\geq 2$ are defined as the sum of all trees with $n$ external edges onshell (i.e. $x_j=0$ for these edges $j$) and additionally one external edge offshell, where the propagator of this offshell edge is included in $b_n$. 
Further, $b_1:=1$.\n\\end{definition}\n\n\n\nThe construction of $b_n$ is illustrated in \\cref{free_b1_b2_b3_picture}.\n\n\\begin{example}\\label{ex_bn}\n\tAn explicit calculation using \\cref{diff_vn} yields\n\t\\begin{align*} \n\tb_2 &= \\frac{i}{x_{1+2}}\\cdot 2ia_1 \\left( x_1 + x_2 + x_{1+2} \\right) \\Big|_{x_1=0=x_2} = -2 a_1\\\\\n\tb_3 &= -6 a_2 + 12 a_1^2.\n\t\\end{align*}\n\\end{example}\n\n\\begin{figure}[h]\n\t\\centering\n\t\\begin{tikzpicture} \n\t\n\t\n\t\\node at (-2,1.5) [anchor=west,text width = 1cm]{$b_1=$};\n\t\n\t\\node [diffTree] (c) at (-.8,1.5) {};\n\t\\draw [thick] (c) -- + (90:.5);\n\t\\draw [-|,thick] (c) -- +(270:.4);\n\t\n\t\\node at (-.2,1.5) {$=$};\n\t\n\t\\node at (.5, 1.5){$1$};\n\t \n\t\n\t\n\t\n\t\n\t\\node at (-2,0) [anchor=west,text width = 1cm] {$b_2=$};\n\t\n\t\\node [diffTree] (c) at (-.8,0) {} ;\n\t\n\t\\draw [thick] (c) -- + (90:.5);\n\t\\draw [ thick, -|,bend angle=20,bend left] (c.245) to +(240:.4);\n\t\\draw [thick, -|, bend angle=20, bend right] (c.295) to +(300:.4); \n\t\n\t\\node at (-.2 ,0) {$=$};\n\t\n\t\\node [diffVertex] (c) at (.6,0) {} ;\n\t\n\t\\draw [thick] (c) -- + (90:.5);\n\t\\draw [-|,thick] (c) -- +(240:.5);\n\t\\draw [-|,thick] (c) -- +(300:.5); \n\t\n\t\\node (of) at (2, .8) [anchor=west,text width = 7cm] {offshell edge, propagator included};\n\t\\node (on) at (2, 0) [anchor=west,text width = 7cm] {onshell edges, no propagator};\n\t\n\t\\draw [->, bend angle=10, bend right] (of.180) to (.8,.4);\n\t\\draw [->, bend angle=10, bend right] (on.180) to (.9,-.2);\n\t\n\t\\node at (-2,-1.5) [anchor=west,text width = 1cm] {$b_3=$};\n\t\n\t\n\t\\node [diffTree] (c) at (-.8,-1.5) {} ;\n\t\n\t\\draw [thick] (c) -- + (90:.5);\n\t\\draw [-|, thick, bend angle=20, bend left] (c.240) to +(230:.5);\n\t\\draw [-|, thick] (c.270) -- +(270:.5); \n\t\\draw [-|, thick, bend angle=20, bend right] (c.300) to +(310:.5); \n\t\n\t\\node at (-.2,-1.5) {$=$};\n\t\n\t\\node [diffVertex] (c1) at (.6,-1.5) {};\n\t\\draw [thick] (c1) -- + (90:.5);\n\t\\draw [-|,thick] (c1) -- +(220:.7);\n\t\\draw [-|,thick] (c1) -- +(270:.6); \n\t\\draw [-|,thick] (c1) -- +(320:.7);\n\t\n\t\\node at (1.8,-1.6) {$+$};\n\t\\node [diffVertex] (c1) at (3,-1.3) {};\n\t\\node [diffVertex] (c2) at (3.3,-1.7) {};\n\t\\draw [thick] (c1) -- (c2);\n\t\\draw [thick] (c1) -- + (90:.5);\n\t\\draw [-|,thick] (c1) -- +(230:.8);\n\t\\draw [-|,thick] (c2) -- +(240:.5); \n\t\\draw [-|,thick] (c2) -- +(310:.5);\n\t\n\t\\node at (4,-1.6) {$+$};\n\t\\node [diffVertex] (c1) at (5,-1.3) {};\n\t\\node [diffVertex] (c2) at (4.7,-1.7) {};\n\t\\draw [thick] (c1) -- (c2);\n\t\\draw [thick] (c1) -- + (90:.5);\n\t\\draw [-|,thick] (c1) -- +(310:.8);\n\t\\draw [-|,thick] (c2) -- +(240:.5); \n\t\\draw [-|,thick] (c2) -- +(310:.5);\n\t\n\t\\node at (6,-1.6) {$+$};\n\t\\node [diffVertex] (c1) at (6.9,-1.3) {};\n\t\\node [diffVertex] (c2) at (7.2,-1.7) {};\n\t\\draw [thick] (c1) -- (c2);\n\t\\draw [thick] (c1) -- + (80:.5);\n\t\\draw [-|,thick] (c1) -- +(270:.9);\n\t\\draw [-|,thick] (c2) -- +(205:1); \n\t\\draw [-|,thick] (c2) -- +(310:.5);\n\t\n\t\n\t\n\t\\end{tikzpicture}\n\t\\caption{\n\t\tThe first tree sums $b_n$. The onshell edges are indicated with a perpendicular line at the end. \n\t}\n\t\\label{free_b1_b2_b3_picture}\n\\end{figure}\n\n\nThe tree sums $_n$ computed in \\cref{ex_bn} are independent of masses and momenta. 
This continues even for higher $n$, the following remarkable result was shown in \t\\cite[Thm.~3.5]{KY17}:\n\\begin{theorem}\\label{thm_bn}\n\t\\begin{align*}\n\tb_{n+1} &= \\sum_{k=1}^{n}\\frac{(n+k)!}{n!}B_{n,k} \\left( -1! a_1, -2! a_2, \\ldots, -n! a_n\\right) .\n\t\\end{align*}\n\\end{theorem}\n\n\n\n\\begin{definition}\\label{def_Ajn}\n\t$A^j_n$ is the sum of all trees with a total of $n$ external edges, at most $j$ of which are offshell. \n\\end{definition}\nThis definition implies that amplitudes with more than $j$ external edges offshell include $A^j_n$ as a summand or, in the other direction, amplitudes with less than $j$ external edges offshell can be extracted from $A^j_n$ by setting some of the $j$ edges onshell and then symmetrizing, see \\cref{ex_A4}.\nUnlike $b_n$, the tree sum $A^j_n$ does not include the propagators of external offshell edges. \n\nBy \\cref{thm_bn}, the $b_n$ are independent of masses and momenta. Since they include the propagator of the single offshell edge $\\frac{i}{x_{1+2+\\ldots + n}}$, the symmetric amputated tree sum with $n+1$ external edges, at most one of which is offshell, is given by\n\\begin{align}\\label{free_A1n} \nA^1_{n+1} :=-i\\left( x_1 + x_2 + \\ldots + x_n + x_{1+2+\\ldots + n}\\right) b_n. \n\\end{align}\nIn terms of graphs, this implies that the sum of all trees with a given number of external edges, one of which is offshell, effectively is a vertex with amplitude $A^1_{n+1}$.\n\n\\begin{lemma}\\label{lem_Sn_free}\n\tIf $A^0_n$ is the connected tree level Feynman amplitude with $n>2$ external edges, all of which are onshell from \\cref{def_Ajn}, then\n\t\\begin{align*} \n\tA^0_n &=0.\n\t\\end{align*}\n\\end{lemma}\n\\begin{proof}\n\tSet all $x_j=0$ in \\cref{free_A1n} to obtain the amplitude $A^0_{n+1}$ where none of the edges is offshell.\n\\end{proof}\nA direct consequence of \\cref{lem_Sn_free} is that if $j>0$ in \\cref{def_Ajn} then at least one of the edges actually \\emph{is} offshell as the summand $A^0_n$ contained in $A^j_n$ is zero. \n\n\nApart from their interpretation as Feynman amplitudes of tree sums, the $b_n$ also have an equally remarkable second interpretation: \n\\begin{lemma}\\label{lem_inverse}\n\tThe tree sums $b_n$ are coefficients of the inverse diffeomorphism $\\rho(\\phi)$ ,\n\t\\begin{align*} \n\t\\rho(x) &= \\sum_{n=1}^\\infty \\frac{b_n}{n!} \\phi^n(x).\n\t\\end{align*}\n\\end{lemma}\n\\begin{proof}\n\tCompute the coefficients of the inverse diffeomorphism using Lagrange inversion \\cref{lagrange_inversion}. Extracting coefficients from \\cref{bell_generating}, one confirms\n\t\\begin{align*} \n\tB_{n+k,k} \\left( 0, -2! a_1, -3! a_2, \\ldots \\right) &= \\frac{(n+k)!}{n!}B_{n,k} \\left( -1! a_1, -2! a_2, -3! a_3, \\ldots \\right).\n\t\\end{align*}\n\\end{proof}\n\n\n\n\\subsection{Uncancelled edges as cuts}\\label{subs_cuts}\n\nThe Feynman rules \\cref{diff_vn} of the diffeomorphism vertices $iv_n$ are -- up to summands $m^2$ -- proportional to offshell variables $x_e$ of adjacent edges $e$. In a Feynman diagram, such edges come with a propagator \\cref{free_propagator} with Feynman amplitude $\\frac{i}{x_e}$. That is, the propagator together with the corresponding summand of the vertex evaluates to a constant. This is the mechanism underlying \\cref{thm_bn}, the vertex effectively \\emph{cancels} the edge $e$. 
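For instance, in the computation of $b_2$ in \\cref{ex_bn} the summand $2ia_1 x_{1+2}$ of the vertex $iv_3$ combines with the propagator $\\frac{i}{x_{1+2}}$ of the offshell edge into the constant $-2a_1$; schematically, $\\frac{i}{x_e} \\cdot i f_n x_e = -f_n$, independently of the momentum flowing through $e$. 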
Clearly, the possibility to cancel the edge depends on the presence of the summand $\\propto x_e$ in the vertex Feynman rule, on the other hand, setting $x_e=0$ corresponds to the momentum flow through the edge $e$ being onshell. The interplay between the notions of \\emph{onshellness} and \\emph{cancellation} is crucial for the subsequent discussion of tree level amplitudes.\n\nConsider any edge $e$ which connects two different tree sums $A^{j_1}_{n_1}, A^{j_2}_{n_2}$. Let $e$ have the offshell variable $x_e= p_e^2-m^2$. We observe the following:\n\\begin{enumerate}\n\t\\item If the edge is cancelled by one of the adjacent tree sums, this tree sum includes a summand proportional to $x_e$. Thus, the edge $e$ is an offshell external edge from the point of view of this tree sum. \n\t\\item If the edge $e$ is not cancelled, then none of the two adjacent tree sums must contain a factor $x_e$. This can be obtained by formally setting $x_e=0$ in these tree sums, i.e. $e$ becomes an onshell external edge of the tree sums. \n\\end{enumerate}\nSetting $x_e=0$ for the factors does not mean that $x_e$ actually is zero for the whole, connected diagram, but just inside the two individual constituents. This is a purely formal procedure to eliminate the factors $x_e$ in the amplitudes $A^j_n$, consisting of two steps: First, setting $x_e=0$ removes the unwanted contributions, second, the remaining amplitude is interpreted as if the edge momentum $p_e$ were completely arbitrary. The overall diagram where $e$ is uncancelled has an amplitude\n\\begin{align*} \nA^{j_1}_{n_1} \\Big|_{x_e=0} \\cdot \\frac{i}{x_e}\\cdot A^{j_2}_{n_2}\\Big|_{x_e=0}.\n\\end{align*}\nWe noted below \\cref{def_Ajn} that setting one of the $j_1$ external offshell edges onshell turns $A^{j_1}_{n_1}$ into $A^{j_1-1}_{n_1}$. This is not what we are doing here: In the present case, we set one \\emph{specific} edge $e$ onshell whereas in the above case it was an arbitrary edge and the result is to be symmetrized. Especially, $A^{j_1}_{n_1} \\big|_{x_e=0}$ is no longer symmetric in its $n$ external edges. \n\n\n\nOne might think that due to the momentum dependence of $A^j_n$, setting $x_e=0$ for an external edge $e$ does more than just eliminating the summands proportional to a power of $x_e$. This is not the case as can be understood from the general structure of the Feynman rules: Thanks to the absence of 2-valent vertices, no internal propagator in a tree has the Feynman rule $\\frac{i}{x_e}$ and external propagators are not included into $A^j_n$. Hence there is no factor $\\frac{1}{x_e}$ in $A^j_n$. There are of course factors from internal propagators of the form $\\frac{1}{x_{e+f+\\ldots}}$, where $p_f$ is some other momentum. But those are unaffected, for example\n\\begin{align*} \nx_{e+f} \\Big|_{x_e=0} = \\Big( \\left( p_e + p_f \\right) ^2-m^2 \\Big)_{p_e^2=m^2} = m^2 + 2 p_e p_f + p_f^2 -m^2 \\neq p_f^2-m^2=x_f.\n\\end{align*}\nThat is, the offshell variables $x_{e+f+\\ldots}$ in the amplitude are not determined even if one (or more) of the involved momenta $p_e, p_f, \\ldots$ are onshell. Fixing the magnitude of the vectors imposes some constraint, e.g. the value of $p_e p_f$ is bounded by $m^2$ if both vectors have magnitude $m$, but the only thing important to us is that $x_{e+f}$ remains as an undetermined variable. If we in the second step remove the constraint $x_e=0$ then $x_{e+f}$ can again take completely arbitrary values but the factors $x_e$ in the numerator are gone. 
This way, it is possible to eliminate all numerator factors proportional to $x_e$ without otherwise altering the amplitude. See \\cref{ex_A4,ex_bsn,ex_As4} for explicit results, setting external edges onshell there does not symbolically alter the remainder of the amplitudes apart from eliminating terms proportional to the corresponding $x_e$.\n\n\nIf $\\Gamma$ is a tree graph, we can consider any uncancelled internal edge $e$ as a cut: The resulting amplitude is the product of amplitudes of the individual components (this is always true for a tree graph), but these components have the same Feynman rule as if $e$ were an onshell external edge. This phenomenon motivated the use of analytic properties of the $S$-matrix in \\cite{velenich}.\n\n\n\\subsection{Offshell tree sums }\\label{subs_offshell}\nIt is possible to obtain tree sums with more than one external edge offshell (\\cref{def_Ajn}) from the $b_n$ as indicated in \\cite[Sec.~4.2.1]{KY17}. To this end, the external onshell edges of multiple $b_k$ are glued. This leaves an uncancelled internal propagator, which, in the sum over all trees, becomes a symmetric sum of all partitions of the external edges. Especially, for $j>1$ the quantity $A^j_n$ depends on external masses and momenta.\n\n\n\\begin{example}\\label{ex_A4}\n\tThe amplitude with four external edges, all of which are possibly offshell, is\n\t\\begin{align*} \n\tA_4^4 \t &= -ib_3 \\left( x_1+x_2+x_3+x_4 \\right) -i b_2^2\\left( \\frac{(x_1+x_2)(x_3+x_4)}{x_{1+2}} +\\text{ 2 more} \\right) .\n\t\\end{align*}\n\tHere, $b_j$ are the free diffeomorphism tree sums (\\cref{def_bn}) as always. \\Cref{fig_A44} depicts the construction. The numerator factors $(x_1+x_2)(x_3+x_4)$ indicate that at least two of the four external edges need to be cancelled in order for an uncancelled internal edge to be possible. Setting all $x_j=0$ except one and then symmetrizing $j$ yields \\cref{free_A1n} as indicated below \\cref{def_Ajn},\n\t\\begin{align*} \n\tA^1_4 &= -i b_3\\left(x_1+x_2+x_3+x_4\\right) .\n\t\\end{align*}\n\tNote especially that $A^4_4 \\equiv A^3_4 \\equiv A^2_4$, i.e. there is no term of higher than second order in the external variables $x_j$. This is because of the Euler Characteristic \\cref{euler_graph}, a tree with four external edges has at most two vertices and each vertex is linear in the offshell variables. Hence, the overall amplitude can at most be quadratic. \n\\end{example}\n\n\n\\begin{figure}[h]\n\t\\begin{tikzpicture}\n\t\n\t\\node [] at (0,0) {$A^4_4\\sim $};\n\t\n\t\\node at (1,-.2) {$\\sum\\limits_{\\text{4 Perm.}}$};\n\t\\node[diffTree] (a) at (2.3,0){};\n\t\\draw [thick] (a) to +(90:.5);\n\t\\draw [-|, thick, bend angle=30, bend left] (a.240) to +(210:.5);\n\t\\draw [-|, thick] (a.270) -- +(270:.5); \n\t\\draw [-|, thick, bend angle=30, bend right] (a.300) to +(330:.5); \n\t\n\t\n\t\n\t\\node at (4,-.2) {$+\\sum\\limits_{\\text{6 Perm.}}$};\n\t\\node[diffTree] (a) at (5.3,.2){};\n\t\\node[diffTree] (b) at (5.9,-.4){};\n\t\\draw[thick, -|-, bend angle = 40, bend right] (a.290) to (b.160);\n\t\\draw[thick, -|, bend angle = 20, bend left] (a.260) to +(220:.5);\n\t\\draw[thick, -|, bend angle = 20, bend right] (b.190) to +(230:.5);\n\t\\draw[thick] (b.0) to +(0:.4);\n\t\\draw[thick] (a.90) to +(90:.4);\n\t\n\t\n\t\\end{tikzpicture}\n\t\\caption{Construction of $A^4_4$ by connecting tree sums $b_j$. The external propagators included in $b_j$ have not been taken into account graphically, hence $\\sim$ and not $=$. 
``Perm.'' indicates permutations of external edges. The first sum runs over the four possibilities for one of the edges being the offshell edge of $b_3$, the second sum over all six possibilities to choose two out of four edges as the offshell ones. }\n\t\\label{fig_A44}\n\\end{figure}\n\n\n\\subsection{Tadpole graphs}\n\nIf $\\Gamma$ is a Feynman graph, we denote the set of its internal edges by $E_\\Gamma$ and the set of vertices by $V_\\Gamma$. A vertex where one of the external edges of a diagram is attached is called external vertex. \n\n\n\\begin{definition}\\label{def_tadpole}\n\tA tadpole graph is a Feynman graph where there is at least one closed path of edges which is connected to the rest of the diagram via at most one vertex. \n\\end{definition}\nIn terms of Feynman integrals, this amounts to an integral over the corresponding loop momenta which is independent of any external momenta of the amplitude. Especially, we call a diagram a tadpole if there is at least one such loop, not only if it consists of a momentum-independent loop exclusively. A non-tadpole graph has at least two external vertices.\nWe assume that tadpole diagrams give no contribution to the $S$-matrix. This is the case in kinematic renormalization schemes \\cite{brown_angles_2011}.\n\nBy cancellation of internal edges, the diffeomorphism Feynman rules \\cref{diff_vn} can turn a non-tadpole diagram into a tadpole by two mechanisms:\n\\begin{enumerate}\n\t\\item They cancel all but one edges in any loop in the diagram. By this, the loop only contains a single vertex and is a tadpole.\n\t\\item They cancel a path of edges which connects all external vertices. By cancellation, the path collapses to a single effective vertex where all external momenta are attached, that is, all integrals of the amplitude will be independent of external momenta. \n\\end{enumerate}\nBoth effects are illustrated in \\cref{fig_tadpoles} for a 2-loop example graph.\n\n\n\\begin{figure}[h]\n\t\n\t\\centering\n\t\\begin{tikzpicture} \n\t\n\t\n\t\n\t\\node [diffVertex] (a) at (1.5,3) [] {};\n\t\\node [diffVertex] (b) at (3,3.8) [] {};\n\t\\node [diffVertex] (c) at (3,2.2) [] {};\n\t\\node [diffVertex] (d) at (4.5,3) [] {};\n\t\n\t\\draw [thick,-|-] (a) ..controls +(.4,.8) and +(-.4,0) .. (b);\n\t\\draw [thick,red] (a) ..controls +(.4,-.8) and +(-.4,0) .. (c);\n\t\\draw [thick,-|-] (c) -- (b);\n\t\\draw [thick,-|-] (b) ..controls +(.4,0) and +(-.4,.8) .. (d);\n\t\\draw [thick,red] (c) ..controls +(.4,0) and +(-.4,-.8) .. (d);\n\t\n\t\\draw [thick,-] (a) -- +(180:.5);\n\t\\draw [thick,-] (d) -- + (0:.5);\n\t\\draw [thick,-] (c) -- + (270:.5);\n\t\n\t\n\t\\node at (6,3){$\\longrightarrow$};\n\t\n\t\n\t\\node [diffVertex] (b) at (8.5,3.8) [] {};\n\t\\node [diffVertex] (c) at (8.5,2.2) [] {};\n\t\n\t\n\t\\draw [thick,-|-] (b) ..controls +(.7,-.2) and +(.7,.5) .. (c);\n\t\\draw [thick,-|-] (c) -- (b);\n\t\\draw [thick,-|-] (b) ..controls +(-.7,-.2) and +(-.7,.5) .. (c);\n\t\n\t\n\t\\draw [thick,-] (c) -- +(180:.5);\n\t\\draw [thick,-] (c) -- + (0:.5);\n\t\\draw [thick,-] (c) -- + (270:.5);\n\t\n\t\n\t\n\t\n\t\\node [diffVertex] (a) at (1.5,6) [] {};\n\t\\node [diffVertex] (b) at (3,6.8) [] {};\n\t\\node [diffVertex] (c) at (3,5.2) [] {};\n\t\\node [diffVertex] (d) at (4.5,6) [] {};\n\t\n\t\\draw [thick,red] (a) ..controls +(.4,.8) and +(-.4,0) .. (b);\n\t\\draw [thick,-|-] (a) ..controls +(.4,-.8) and +(-.4,0) .. (c);\n\t\\draw [thick,red] (c) -- node[left]{ }(b);\n\t\\draw [thick,-|-] (b) ..controls +(.4,0) and +(-.4,.8) .. 
(d);\n\t\\draw [thick,-|-] (c) ..controls +(.4,0) and +(-.4,-.8) .. (d);\n\t\n\t\\draw [thick,-] (a) -- node[below]{ } +(180:.8);\n\t\\draw [thick,-] (d) -- node[below]{ } + (0:.8);\n\t\\draw [thick,-] (c) -- + (270:.5);\n\t\n\t\\node at (6,6){$\\longrightarrow$};\n\t\n\t\\node [diffVertex] (a) at (8,6) [] {};\n\t\n\t\\node [diffVertex] (d) at (10,6) [] {};\n\t\n\t\n\t\\draw [thick,-|-] (a) ..controls +(.8,1.2) and +(-1,1.2) .. (a);\n\t\\draw [thick,-|-] (a) ..controls +(.4,.5) and +(-.4,.5) .. (d);\n\t\\draw [thick,-|-] (a) ..controls +(.4,-.5) and +(-.4,-.5) .. (d);\n\t\n\t\\draw [thick,-] (a) -- +(180:.8);\n\t\\draw [thick,-] (d) -- + (0:.8);\n\t\\draw [thick,-] (a) -- + (270:.5);\n\t\n\t\n\t\\end{tikzpicture}\n\t\\caption{Two different ways to obtain tadpoles from non-tadpole graphs by cancellation of internal edges. Cancelled edges are red, uncancelled ones have a perpendicular line. First row: cancellation of all but one edge in a loop, second row: Cancellation of a path connecting all external vertices.}\n\t\n\t\\label{fig_tadpoles}\n\\end{figure}\n\n\nAs a result of the above discussion, we have the following lemma:\n\\begin{lemma}\\label{lem_tadpole}\n\tAssume $\\Gamma$ is no tadpole graph, then\n\t\\begin{enumerate}\n\t\t\\item There are at least two uncancelled edges in any closed path of edges in $\\Gamma$.\n\t\t\\item There is at least one external vertex $v$ such that there is not a path of cancelled edges connecting $v$ to all other external vertices.\n\t\\end{enumerate}\n\\end{lemma}\n\n\n\n\\FloatBarrier\n\n\\subsection{Loop amplitudes}\\label{subs_free_loops}\nWe will see in the following that the tree sums $A^j_n$ from \\cref{def_Ajn} act as building blocks of Feynman diagrams by the mechanism discussed in \\cref{subs_cuts}. Since they take the role of $n$-valent vertices in these diagrams, we call them ``meta-vertices''. Note that this does not mean that $A^j_n$ collapses to an actual vertex inside the Feynman diagram, it might still contain uncancelled internal edges if $j>1$.\n\n\\begin{lemma}\\label{lem_loop}\n\tThe Feynman integrand of the sum $G$ of all connected graphs with $l>0$ loops is obtained by building all $l$-loop graphs from $n_k$-valent meta-vertices with Feynman amplitude $A^{j_k}_{n_k}$ from \\cref{def_Ajn}. Here, $j_k$ is the number of external edges of $G$ connected to the meta-vertex $A^{j_k}_{n_k}$. Each pair of meta-vertices is connected by at least two internal edges.\n\\end{lemma}\n\\begin{proof}\n\tThe proof consists of three steps: First, we show that any non-tadpole graph can be decomposed into trees. Second, we specify how the individual components turn into tree sums $A^j_k$ acting as meta-vertices when summing over all graphs. Third, we show that no pair of meta-vertices must be connected by only a single internal edge. \n\t\n(1): Let $\\Gamma$ be any non-tadpole Feynman graph according to \\cref{def_tadpole}. Then by \\cref{lem_tadpole} there exists at least one set of uncancelled edges $U\\subseteq E_\\Gamma$. Let $U$ be such a set with minimum number of edges. Identify the uncancelled edges as cuts following \\cref{subs_cuts}. These cuts divide $\\Gamma$ into connected components $\\Gamma_1, \\ldots, \\Gamma_n$. By \\cref{lem_tadpole}, $n\\geq 2$ and no $\\Gamma_k$ contains loops, i.e. $\\Gamma_k$ are trees. Let $n_k$ be the number of external edges of $\\Gamma_k$. Let further $j_k$ be the number of external offshell edges of $\\Gamma_k$, i.e. which were external edges of the uncut graph $\\Gamma$. 
\n\n(2): Now sum all possible ways $U$ of cutting $\\Gamma$ with the minimal possible number of cuts and for each cut, replace all $\\Gamma_k$ by the corresponding tree sum $A^{j_k}_{n_k}$ from \\cref{def_Ajn}. By \\cite[Prop.~4.1]{KY17}, this sum equals the sum of all connected graphs with $\\abs{\\Gamma}$ loops, weighted with their correct symmetry factors up to an overall factor. In other words: The total non-tadpole integrand of an amplitude with $\\abs{\\Gamma}$ loops is the sum of all possible ways of connecting the $(n_k-j_k)$ onshell edges of tree sums $A^{j_k}_{n_k}$. Thereby, these tree sums act as meta-vertices. No two edges of the same meta-vertex $A^{j_k}_{n_k}$ may be connected to each other, and the $j_k$ offshell edges become external edges of $\\Gamma$. In case $j_k=0$ for some $k$, that meta-vertex vanishes by \\cref{lem_Sn_free}. Note that regardless of \\cref{def_Ajn}, if $j_k>0$ then at least one of those edges actually \\emph{is} offshell because if all are onshell the amplitude is zero by \\cref{lem_Sn_free}. Hence each non-vanishing contribution to $\\Gamma$ has at least one external edge at each meta-vertex.\n\n(3): Assume there is a pair of meta-vertices $A^{j_k}_{n_k}$, $k\\in \\{1,2\\}$ connected by only a single internal edge $e$. Since $\\Gamma$ is no tree, there is at least one other path between these meta-vertices, involving yet another vertex. This forms a loop with at least 3 uncancelled edges. So $e$ can be cancelled without making $\\Gamma$ a tadpole, i.e. $U$ does contain more than the minimum necessary number of cuts. Indeed, $A^{j_1}_{n_1} \\frac{i}{x_e}A^{j_2}_{n_2}$ together with the case where $e$ is cancelled form the tree sum $A^{j_1+j_2}_{n_1+n_2-2}$ which is taken into account when a cut $U$ without cutting $e$ is used. \n\\end{proof}\n\nUsually, if in a Feynman graph two external momenta enter at the same vertex $v$, they can be added to yield a single effective momentum. This is not the case here: A meta-vertex $A^{j>1}_n$ has internal uncancelled edges, so if two external momenta are entering $A^j_n$, they cannot be combined into only a single external edge. Only if $j=1$, the meta-vertex $A^1_n$ has no internal structure by \\cref{thm_bn}.\n\n\\begin{example}\n\tConsider $A^2_4=A^4_4$ from \\cref{ex_A4}. Assume for simplicity that the two external offshell edges are numbered 1 and 2 and the remaining two edges are onshell, then this meta-vertex has an amplitude \n\t\t\\begin{align*} \n\tA^2_4 \\Big|_{x_3=0=x_4} &= -ib_3 \\left( x_1+x_2 \\right) -i b_2^2\\left( \\frac{x_1 x_2 }{x_{1+3}} +\\frac{x_1x_2}{x_{1+4}} \\right) .\n\t\\end{align*}\n\tThe second summand contains an uncancelled propagator and is proportional to $x_1x_2 = (p_1^2-m^2)(p_2^2-m^2)$ whereas the first summand is proportional to $p_1^2 + p_2^2-2m^2$. It is not possible to reproduce this Feynman amplitude if one ``combines'' both momenta into some effective momentum $p:= p_1+p_2$. So if the external momenta $p_1$ and $p_2$ are incident to the meta-vertex $A^2_4$ then they have to stay distinct. \n\t\n\\end{example}\n\n\\Cref{lem_loop} implies that an amplitude with $k$ external offshell edges contains at most $k$ meta-vertices $A^j_n$. On the other hand, by \\cref{lem_tadpole} it contains at least two such vertices in order to not be a tadpole. Hence, the diffeomorphism does not contribute to the onshell amplitudes, i.e. 
the $S$-matrix \\cite[Thm.~4.7]{KY17} and graphs with two external edges have the topology of $l$-loop multiedges \\cite[Lem.~4.8]{KY17} with vertices $A^1_n\\sim b_{n-1}$. This implies an alternative proof for \\cref{lem_inverse}: \n\\begin{lemma}\\label{lem_inverse2}\n\tThe coefficients $b_n$ of the inverse \n\t\\begin{align*}\n\t\\rho(x) &= \\sum_{n=1}^\\infty \\frac{b_n}{n!}\\phi^n(x)\n\t\\end{align*}\n\tof the diffeomorphism \\cref{def_diffeomorphism} are the Feynman amplitudes of amputated meta-vertices $A^1_{n+1}$ (i.e. of the tree sums $b_n$ as defined in \\cref{def_bn}).\n\\end{lemma}\n\\begin{proof}\n\tConsider the non-amputated time-ordered 2-point correlation function of $\\rho$. By the above discussion, it is supported on $l$-loop multiedge graphs $M^{(l)}$ where the two vertices are meta-vertices $A^1_{l+2}$ (we denote by $M^{(l)}$ the Feynman amplitude of the amputated $l$-loop multiedge). The single offshell edge of said vertices carries momentum $p$ and the graph $M^{(l)}$ has a symmetry factor $\\frac{1}{(l+1)!}$,\n\t\\begin{align*}\n\t\\langle \\rho(-p) \\rho (p) \\rangle = \\langle \\phi(-p) \\phi(p) \\rangle + \\frac{i}{x_p }\\sum_{l=1}^\\infty \\left( A^1_{l+2} \\right) ^2 \\cdot \\frac{1}{(l+1)!}M^{(l)} (p) \\frac{i}{x_p}.\n\t\\end{align*} \n\tOn the other hand, using the inverse diffeomorphism in position space (where we take $b_n$ to be the unknown coefficients of the inverse) the same function is\n\t\\begin{align*}\n\t\\langle T\\rho(x) \\rho(y)\\rangle &= \\sum_{j=1}^\\infty \\sum_{k=1}^\\infty \\frac{b_j b_k}{j! k!} \\langle T\\phi^j(x) \\phi^k(y) \\rangle.\n\t\\end{align*}\n\tThe right-hand side consists of $(j+k)$-point correlation functions of the field $\\phi$ which are computed via Wick's theorem, i.e. all factors $\\phi$ of the field have to be connected in pairs, which eliminates all summands where $j+k$ is odd. If $j+k$ is even, any term where two fields at the same spacetime point are connected corresponds - after Fourier transformation - to a tadpole graph which is assumed to vanish. Hence all non-vanishing pairs have to be of the form $\\phi(x) \\phi(y)$ which implies $j=k$, \n\t\\begin{align*}\n\t\\langle T\\rho(x) \\rho(y)\\rangle &= \\langle T\\phi(x) \\phi(y)\\rangle + \\sum_{k=2}^\\infty \\frac{\\left( b_k\\right)^2}{(k!)^2 } \\langle T\\phi^k(x) \\phi^k(y) \\rangle .\n\t\\end{align*}\n\tThere are precisely $k!$ equivalent ways of forming pairs which cancels one factor in the denominator. The resulting sum equals after Fourier transformation the sum over all $(k-1)$-loop multiedges, weighted with their correct symmetry factor $\\frac{1}{k!}$,\n\t\\begin{align*}\n\t\\langle \\rho(p) \\rho(-p)\\rangle &= \\langle \\phi(-p) \\phi(p) \\rangle+ \\sum_{k=2}^\\infty \\frac{\\left( b_k\\right)^2}{k!} M^{(k-1)}(p).\n\t\\end{align*}\n\tComparing coefficients yields the claimed equality\n\t\\begin{align*}\n\tA^1_{l+2}\\big|_{x_1=x_p\\neq 0} &= -ix_p b_{l+1}.\n\t\\end{align*}\n\tNote that this is not \\cref{free_A1n}: In the latter, $A^1_n$ was defined using tree sums $b_n$ whereas here $b_{l+1}$ is a diffeomorphism coefficient which we showed to coincide with a tree sum.\n\\end{proof}\n\n\\begin{example}\n\tAs an example for the proof, consider $c_1(p)$, the first non-tadpole contribution to the 2-point function of $\\rho$. It is a 1-loop multiedge built from two 3-valent vertices $iv_3$ from \\cref{diff_vn}. These vertices have only one external edge which has the offshell variable $x_p$. Their amplitude therefore is $A^1_3$ with a fixed (non-symmetric) offshell edge. 
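Explicitly, by \\cref{diff_vn} such a vertex with its two loop legs formally set onshell as in \\cref{subs_cuts} reduces to $2ia_1 x_p$, which is $-ix_p b_2$ with the tree sum $b_2=-2a_1$ from \\cref{ex_bn}. 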
We consider non-amputated graphs so there are also the external propagators $\\frac{i}{x_p}$ and the overall momentum-space amplitude is\n\t\\begin{align*} \n\tc_1(p):=\\frac 12 \\left( \\frac{i}{x_p} \\right) ^2\\left( A^1_3 \\big|_{x_p\\neq 0} \\right) ^2 \\int \\d^4 k \\; \\frac{i}{x_k} \\frac{i}{x_{p-k}} = \\frac 12\\left( \\frac{i A^1_3|_{x_p\\neq 0} }{x_p} \\right) \\int \\d^4 k \\frac{1}{x_k x_{p-k}}.\n\t\\end{align*}\n\tDefine the prefactor to be $A$. \n\tIn position space, the same amplitude by Fourier transformation is\n\t\\begin{align*} \n\t\\tilde c_1(x) &= \\int \\d^4 p \\; c_1(p) e^{ipx} = \\frac 12 A \\iint \\d^4 p \\d^4 k \\; \\frac{1}{x_k}\\frac{1}{x_{p-k}}e^{ipx}= \\frac 12 A\\int \\d^4 q e^{iqx} \\frac{1}{x_q}\\int \\d^4 k \\; \\frac{1}{x_k}e^{ikx}.\n\t\\end{align*}\n\tThis is the product of two propagators between the same two points in spacetime. That is, $\\tilde c_1(x)$ corresponds to a Wick contraction of $\\phi^2(y) \\phi^2(y+x)$ into two pairs, $\\left( \\phi(y) \\phi(y+x) \\right) \\cdot \\left( \\phi(y) \\phi(y+x) \\right) $. \n\t\n\tIf $b_2$ is the second coefficient of the inverse diffeomorphism, i.e. $\\rho = \\phi + \\frac 12 b_2 \\phi^2 + \\ldots$, then we expect this amplitude $\\tilde c_1(x)$ to be the product of two position-space propagators with the prefactor $2 \\left( \\frac{1}{2}b_2 \\right) ^2$. The additional 2 arises from two possibilities to Wick-contract the fields. This prefactor has to be $A$ so we can read off that $A^1_3\\big|_{x_p\\neq 0} = -ix_p b_2$, that is, the diffeomorphism coefficient $b_2$ coincides with the 3-valent tree sum as claimed.\n\t\n\\end{example}\n\n\n\n\n\nUsing Lagrange inversion \\cref{lagrange_inversion} to determine the coefficients of the inverse diffeomorphism, \\cref{lem_inverse2} yields the explicit formula \\cref{thm_bn} for the tree sums \\cref{def_bn}. Note that \\cref{lem_inverse2} - unlike \\cref{thm_bn} - explicitly uses the fact that tadpole graphs vanish. \n\n\n\n\n\n\n\n \n \n\n\n\\section{Local diffeomorphisms of an interacting field}\\label{subs_int}\n\n\\subsection{Feynman rules}\nIn this section, the diffeomorphism \\cref{def_diffeomorphism} is applied to a $\\phi^s$-type interacting field, i.e. \n\\begin{align}\\label{L_phis}\n\\mathcal L_\\phi &= \\frac 12 \\partial_\\mu \\phi ( x)\\partial^\\mu \\phi ( x) -\\frac 12 m^2 \\phi^2( x) -\\frac{\\lambda_s }{s!}\\phi^s( x).\n\\end{align}\nThe coupling constant $\\lambda_s$ has an index $s$ to better keep track of the type of interaction.\n\nSince the first part of \\cref{L_phis} coincides with \\cref{Lfree}, the field $\\rho$ obtains the same vertices $i v_n$ from \\cref{diff_vn} as in the free case. Additionally, the interaction monomial in \\cref{L_phis} gives rise to a second type of vertex with Feynman rule\n\\begin{align*}\n- iw^{(s)}_n &= -i\\frac{\\lambda_s }{s!}n! \\underbrace{\\sum_{j=0}^{ n-s} \\sum_{k=0}^{n-s-j} \\cdots \\sum_{l=0}^{n-s-j-k-\\ldots}}_{s-1 \\text{ sums}} \\underbrace{a_j a_k \\cdots a_l a_{n-s-j-k-\\ldots -l}}_{s \\text{ factors }a} .\n\\end{align*}\nThe vertex Feynman rule is by definition $n!$ times the coefficient of $\\rho^n$ of the power series $ \\frac{\\lambda_s}{s!} \\phi^s(\\rho)$. Hence using \\cref{faadibruno}, Bell polynomials allow for a condensed notation of this multisum:\n\\begin{align}\\label{diff_arb_wns}\n-i w^{(s)}_n &=- i\\lambda_s B_{n,s}(1! a_0, 2! a_1, 3! a_2, 4! a_3, \\ldots ), \\qquad w_n^{(s)}=0 \\ \\forall n<s.\n\\end{align}\n\nDenote by $S^{(s)}_n$ the sum of all connected amputated tree-level diagrams with $n$ onshell external edges which contain exactly one interaction vertex $-iw^{(s)}_k$, i.e. the contribution to the onshell tree amplitude at first order in $\\lambda_s$. We will show that the tree sums $S^{(s)}_n$ with $n>s$ are zero. 
This is based on two observations: \n\\begin{enumerate}\n\t\\item Any interaction vertex $-i w^{(s)}_n$ is of order one in $\\lambda_s$. Hence, it has to be cancelled against trees which again contain a single vertex of $-i w^{(s)}_j$ type and the remaining vertices are of pure diffeomorphism type $i v_j$. \n\t\\item An interaction vertex $-iw^{(s)}_n$ does not cancel adjacent propagators, therefore such vertex can only be cancelled against a tree sum which also does not cancel propagators. Especially, one can require all $n$ external edges to be on-shell. \n\\end{enumerate}\nEuler characteristic \\cref{euler_graph} ensures compatibility of these requirements: A tree $T$ with $\\abs{V_T}$ vertices has $\\abs{V_T}-1$ internal edges, and if one of the vertices is of $-i w^{(s)}_j$ type, the remaining $\\abs{V_T}-1$ vertices of $i v_j$ type precisely suffice to cancel all internal edges but no external one. Summing over all possible trees and permutations of external edges, $S_n$ consists of summands where one vertex $-iw^{(s)}_j$ is connected to $j$ tree sums $b_{k_1}, \\ldots, b_{k_j}$ such that $k_1 + \\ldots + k_j=n$. This is shown in \\cref{Sn_image} for $s=3$ (for other $s$, the sum would end at an $s$-valent interaction vertex on the right side). \n\n \n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\begin{tikzpicture} \n\t\n\t\n\t\\node at (0,0) {$S^{(3)}_n=$};\n\t\n\t\\node [intVertex] (c1) at (1.5,.2) {};\n\t\\node [diffTree] (v1) at (.6, -.5){};\n\t\\node [diffTree] (v2) at (1, -.5){};\n\t\\node [diffTree] (v4) at (1.4, -.5){};\n\t\\node at (1.9, -.5){$\\cdots $};\n\t\\node [diffTree] (v3) at (2.4, -.5){};\n\t\n\t\\draw [thick ] (c1) .. controls +(-.2,-.1) and +(0,.5).. (v1);\n\t\\draw [thick ] (c1) .. controls +(-.2,-.2) and +(0,.3)..(v2);\n\t\\draw [thick ] (c1) .. controls +(.0,-.1) and +(0,.5).. (v4);\n\t\\draw [thick ] (c1) .. controls +(.2,-.1) and +(0,.5).. (v3);\n\t\n\t\\draw [-|,thick] (v1) -- +(270:.4);\n\t\\draw [-|,thick] (v2) -- +(270:.4); \n\t\\draw [-|,thick] (v3) -- +(270:.4);\n\t\\draw [-|,thick] (v4) -- +(270:.4);\n\t\n\t\\node at (1.5, -1.5) {$\\underbrace{\\qquad \\qquad \\quad }_{n}$};\n\t\n\t\\node at (3,0) [anchor=west,text width = 1cm] {$+\\quad \\ldots $};\n\t\n\t\n\t\\node at (4.5,-.15) [anchor=west,text width = 1cm] {$+ \\quad \\sum \\limits_{P(3)} $};\n\t\n\t\\node [intVertex] (c1) at (7.3,.5) {};\n\t\\node [diffTree] (v1) at (6.6, -.2){};\n\t\\node [diffTree] (v2) at (7.4, -.2){};\n\t\\node [diffTree] (v3) at (8.2, -.2){};\n\t\n\t\n\t\n\t\\draw [thick ] (c1) .. controls +(-.3,-.1) and +(0,.5).. (v1);\n\t\\draw [thick ] (c1) .. controls +(.05, -.1) and +(0,.3)..(v2);\n\t\\draw [thick ] (c1) .. 
controls +(.4,-.1) and +(0,.5)..(v3);\n\t\n\t\\draw [-|, thick, bend angle=20, bend left] (v1.230) to +(220:.5);\n\t\\draw [-|, thick, bend angle=10, bend left] (v1.250) to +(240:.5); \n\t\\draw [-|, thick, bend angle=10, bend right] (v1.280) to +(280:.5); \n\t\\draw [-|, thick, bend angle=20, bend right] (v1.300) to +(310:.5); \n\t\n\t\\node at (6.5, -1.3) {$\\underbrace{\\qquad }_{\\abs{P_1}}$};\n\t\n\t\\draw [-|, thick, bend angle=10, bend left] (v2.260) to +(250:.5);\n\t\\draw [-|, thick, bend angle=10, bend right] (v2.280) to +(290:.5); \n\t\n\t\\node at (7.5, -1.3) {$\\underbrace{\\quad }_{\\abs{P_2}}$};\n\t\n\t\n\t\n\t\\draw [-|, thick, bend angle=10, bend left] (v3.260) to +(250:.5);\n\t\\draw [-|, thick, bend angle=10, bend right] (v3.280) to +(290:.5); \n\t\\draw [-|, thick, bend angle=20, bend right] (v3.310) to +(320:.5); \n\t\n\t\\node at (8.4, -1.3) {$\\underbrace{\\qquad }_{\\abs{P_3}}$};\n\t\n\t\n\t\n\t\\end{tikzpicture}\n\t\\caption{Structure of the contributions to $S^{(3)}_n$: Since the external edges are onshell, all terms consist of tree sums $b_k$ and one interaction vertex $-iw^{(3)}_k$. The sum $P(k)$ runs over all possible ways of distributing the $n$ external edges to the given number $k$ of tree sums $b_{\\abs{k_i}}$. }\n\t\\label{Sn_image}\n\\end{figure}\n\n\nThe first term only contains tree sums $b_1\\equiv 1$, it is\n\\begin{align*}\nS^{(s)}_{n,n}&=-i w^{(s)}_n \\underbrace{b_1 b_1 \\cdots b_1}_{n \\text{ factors}}.\n\\end{align*}\nThe second contribution to $S^{(s)}_n$ contains one $b_2$ and the rest $b_1$, it is $S^{(s)}_{n,(n-1), \\text{single}} = -iw^{(s)}_{n-1} b_2 \\cdot b_1^{n-2}$.\nThe third term is made from a vertex $-iw^{(s)}_{n-2}$ and either one $b_3$ or two $b_2$ and the rest $b_1$.\nThe last contribution to $S^{(s)}_n$, shown in the right in \\cref{Sn_image}, has one vertex $-iw^{(s)}_s$ and $s$ tree sums $b_{k_1}, b_{k_2}, b_{k_s}$ such that $k_j \\geq 1, k_1 + k_2 +\\ldots + k_s =n$. \n\nFor any fixed valence $k$ of the interaction vertex $-i w^{(s)}_k$, there is a sum over the set $P(k)$ of all possible ways of assigning the $n$ external edges to the individual tree sums. This is the same partition as in the definition of Bell polynomials \\cref{bell_partitions}, hence for any fixed $k$\n\\begin{align*}\nS^{(s)}_{n,k} &=-i w^{(s)}_k\\sum_{P(k)} S^{(s)}_{n,k,\\text{single}} = -i w^{(s)}_k\\sum_{P(k)} \\prod_{i=1}^k b_{k_i} = B_{n,k} \\left( b_1, b_2, \\ldots \\right) .\n\\end{align*}\nFinally, $S^{(s)}_n$ contains all of these terms for $k\\in \\{s, \\ldots, n\\}$. Inserting the Feynman rule \\cref{diff_arb_wns} for $-iw^{(s)}_n$ yields\n\\begin{align}\\label{diff_phis_Sn}\nS^{(s)}_n &= -i \\lambda_s \\sum_{k=s}^n B_{k,s}(1, 2! a_1, 3! a_2, 4! a_3, \\ldots ) B_{n,k} \\left( b_1, b_2, \\ldots \\right).\n\\end{align}\n \n \n\n\\begin{theorem}\\label{thm_phis_Sn}\n\t $S^{(s)}_s= -i\\lambda_s$ and $S^{(s)}_n= 0$ for any $n\\neq s$. \n\\end{theorem}\n \n\n\\begin{proof}\\footnote{The interpretation of $S_n^{(s)}$ as a series coefficient was originally suggested by Ali Assem Mahmoud. This simplified the proof considerably.} For $ks$, then one of two cases can occur: \n\\begin{enumerate}\n\t\\item All (internal or external) edges connected to $-iw^{(s)}_j$ are uncancelled. Then, in the sum over all possible trees, there will be precisely all trees at the position of $-iw^{(s)}_j$ to make up $S_j$,compare \\cref{Sn_image}, and this sum vanishes due to \\cref{thm_phis_Sn}. 
Hence all vertices $-iw^{(s)}_{j>s}$ where no adjacent edge is cancelled can be left out from the beginning. \n\t\\item At least one adjacent edge to $-iw^{(s)}_j$ is cancelled. This cancellation always originates from an adjacent diffeomorphism vertex, not from $-iw^{(s)}_j$ itself. But then, $-iw^{(s)}_j$ and the adjacent vertex form a contribution to $S^{(s)}_m$ for some $m>j>s$. Since $S^{(s)}_m=0$, even these contributions can be left out.\n\\end{enumerate}\nAll that remains are vertices $-iw^{(s)}_s=-i\\lambda_s$ - which are the original vertices of $\\phi^s$-theory - where none of the $s$ adjacent edges are cancelled.\n\nBy \\cref{def_bsn}, there is only one offshell external edge in $b'_n$, the top one. But by \\cref{lem_Sn_free}, any tree of pure diffeomorphism type vanishes if all its external edges are onshell. Hence any contribution of pure diffeomorphism type must be connected to the single offshell edge of $b'_n$. Consequently, all vertices of pure diffeomorphism type $iv_j$ are connected inside $b'_n$. On the other hand, the vertices $-i\\lambda_s$ do not need to be connected to each other, they can be attached anywhere to the pure diffeomorphism part. This determines the structure of $b'_n$ up to a summation over all permutations of external edges: \n\\begin{lemma}\\label{lem_bsn}\n\tThe $b'_n$ are of the following form: \n\t\\begin{align*} \n\tb'_k &= b_k \\qquad \\forall\\; ks$. Then again $b'_{s-1}$ collapses to $(-i\\lambda_s)$. All contributions to $b'_{n-1}$ are by \\cref{lem_bsn} either proportional to $b'_{s-1}$ or to some $b_k$ where $k\\geq s$. All the latter vanish if the remaining edge is set onshell. What remains are the trees which are built from $b'_{n-1}=(-i\\lambda_s)$ and vertices $-i\\lambda_s$. This is the tree sum of the underlying $\\phi^s$ theory.\n\\end{proof}\n\n\n\n\n\n\n\n\n\\subsection{Offshell tree Amplitudes}\\label{sec_offshelltree}\nFrom the tree sums $b'_n$ with one external edge offshell, one can construct tree sums with arbitrary many external edges offshell following the mechanism discussed in \\cref{subs_offshell}. The effect is similar to the inclusion of interaction vertices $-iw_n$ which do not cancel adjacent edges in \\cref{subs_sum_int}: Allowing for more external edges to be cancelled amounts to leaving more internal edges uncancelled and gives rise to tree sums which can be cut along an internal uncancelled edge according to \\cref{subs_cuts}. Therefore, the structure of these tree sums is similar to \\cref{lem_bsn}. \n\n\\begin{definition}\\label{def_Asjn}\n\t${A'}^j_n$ is defined as the sum of all trees with $n$ external edges, $j$ of which are offshell, in the diffeomorphism of the interacting theory \\cref{L_phis}. This is the generalization of \\cref{def_Ajn}.\n\\end{definition}\nClearly, ${A'}^j_n$ contains the trees of pure diffeomorphism as a subset, so ${A'}^j_n= A^j_n + \\mathcal O (\\lambda_s)$. The interaction vertices $-i \\lambda_s$ can be included in several equivalent ways. Conceptually, it is most transparent if only the tree sums $b_{s-1}$ are replaced by $b'_{s-1}$ according to \\cref{lem_bsn}. This will precisely include all trees with interaction vertices but not more. Recall that by \\cref{thm_phis_Sn}, it is sufficient to include the $s$-valent interaction vertices. Equivalently, one could also replace $b_{n-1}$ by $b'_{n-1}$ and leave all other $b_{k1$.\n\\end{lemma}\n\\begin{proof}\n\tFollows from \\cref{lem_bsn}: For $10$ are offshell. 
Assume all offshell external edges of ${A'}^j_n$ have the offshell variable $x_p$. Then the quantum field $\\rho$ defined by \n\t\\begin{align*}\n\t\\rho( x) &=\\phi( x) - \\frac{\\lambda_s }{(s-1)! x_p}\\phi^{s-1}( x) \n\t\\end{align*}\n\thas the same tree level amplitudes as a free quantum field with mass $m$, i.e. \n\t\\begin{align*} \n\t{A'}^2_2 &= -ix_p\\\\\n\t{A'}^j_n &= 0 \\ \\forall n>2, j>0.\n\t\\end{align*}\t\n\\end{theorem}\n\\begin{proof}\n\tBy \\cref{sec_offshelltree} it is sufficient to have $b'_j=0 $ $\\forall j$ to reach ${A'}^k_n=0$ $\\forall n>2$. By \\cref{lem_bsn_0} we know the values of $b_j$ to reach $b'_n=0$, by \\cref{lem_inverse} these $b_j$ are the series coefficients of $\\rho(x)$.\t\n\\end{proof}\n\n\\begin{lemma}\\label{lem_adiabatic_aj}\n\tThe diffeomorphism $\\phi(\\rho)$ defined in \\cref{thm_adiabatic_phis} has the coefficients\n\t\\begin{alignat*}{3}\n\ta_{s-2} &= \\frac{\\lambda_s }{(s-1)!x_p} && \\\\\n\ta_{j \\cdot (s-2)} &= \\left(a_{s-2}\\right)^j\\cdot F_j(s-1, 1), \\qquad && j \\in \\mathbb N \\\\\n\ta_k &= 0 , \\qquad &&k \\notin \\mathbb N \\cdot (s-2).\n\t\\end{alignat*}\n\tHere, $ F_j(s-1, 1)$ are the Fuss-Catalan numbers \\cref{fc}.\n\\end{lemma}\n\\begin{proof}By \\cref{lem_bsn}, $b'_n=b_n$ for $ns-2$.\n\\end{proof}\n\n\n\\subsection{Loop amplitudes}\\label{subs_cance_loops}\n\n\nTo cancel the integrands of loop amplitudes, the idea is that the diffeomorphism of an interacting field should mimic the behaviour of the diffeomorphism of a free field in \\cref{subs_free_loops}. There, \\cref{lem_loop} was obtained because the tree sums $A^0_n=S_n$ of the free diffeomorphism vanish by \\cref{lem_Sn_free} if the external momenta are onshell. We implicitly used the vanishing of tadpole graphs in \\cref{lem_tadpole,lem_loop}. \n\nTo cancel interaction vertices, by \\cref{thm_adiabatic_phis} the situation is similar to the free case in terms of combinatorics, only that the tree sums ${A'}^j_n$ vanish if all their $j$ external offshell momenta have the fixed offshell variable $x_p\\neq 0$ instead of being onshell. On the other hand, the tree sums ${A'}^0_n$ do not vanish, hence a graph may have arbitrarily many such meta-vertices as internal vertices. Assume that there are $j>0$ external edges with offshell variable $x_p\\neq 0$ and all remaining external edges are onshell. \nLet $e$ be one of the external edges with offshell variable $x_p$. Then $e$ is connected to a meta vertex ${A'}^j_n$ with $j>0$ and $n>2$ which vanishes due to \\cref{thm_adiabatic_phis}. By \\cref{subs_cuts}, the Feynman integrand of the sum of all amplitudes with given loop number is proporitonal to ${A'}^j_n$, hence it is zero. Consequently, as soon as there is at least one external edge with offshell variable $x_p$ and all remaining edges are either also $x_p$ or onshell, the integrand vanishes. This implies \\cref{thm_adiabatic_phis} holds not only for tree level amplitudes, but also for the integrands of loop amplitudes if tadpoles vanish. \n\n\n\n\n\\section{Scalar field with arbitrary propagator}\\label{sec_arbitrary}\n\n\\subsection{Generalized free Lagrangian density}\n\nBy \\cref{thm_bn} the tree level-amplitudes of the diffeomorphism are independent of masses and momenta, which suggests that the actual form of the propagator is unimportant for the result. In this section we show that this is the case. We restrict to a free scalar field, that is, the Lagrangian is quadratic in the field variable $\\phi$ and its derivatives. 
Lorentz invariance dictates that any derivative $\\partial_\\mu \\phi$ be contracted with $\\partial^\\mu \\phi$. By partial integration, all derivatives can be concentrated on one of the two field variables up to a vanishing total derivative. We assume a Lorentz invariant free scalar Lagrangian density of the form \n\\begin{align}\\label{diff_skalar_beliebig_L}\n\\mathcal L_\\phi &= \\frac 12 \\phi(x) \\hat X \\phi(x)\n\\end{align}\nwhere $\\hat X$ is a linear Lorentz invariant differential operator\n\\begin{align*} \n\\hat X &= \\beta_0 + \\beta_1 \\left(-\\partial_\\mu \\partial^\\mu\\right) + \\beta_2 \\left( -\\partial_\\mu \\partial^\\mu \\right) ^2 + \\ldots = \\sum_{k=0}^\\infty \\beta_k \\left( -\\partial_\\mu \\partial^\\mu \\right) ^k.\n\\end{align*}\n$\\left \\lbrace \\beta_j \\right \\rbrace _j$ are spacetime-independent constants. The ordinary free field \\cref{Lfree} amounts to $\\beta_0=-m^2, \\beta_1 = 1, \\beta_{j>1}=0$. Recently, Lagrangians involving higher derivatives have for example been studied in the context of fixed points in more than four spacetime dimensions \\cite{gracey_higher_2017} or conformal field theory \\cite{brust_free_2017}.\nWe do not discuss here for which values of $\\beta_j$ the Lagrangian \\cref{diff_skalar_beliebig_L} describes a physically meaningful, e.g. unitary or renormalizable, theory, but are just interested in its formal behaviour under field diffeomorphism. \n\nThe classical field defined by \\cref{diff_skalar_beliebig_L} has the equation of motion\n\\begin{align}\\label{diff_skalar_beliebig_eom}\n\\hat X \\phi(x) &= 0\n\\end{align}\nwhich is linear as desired for a free field. In momentum space, the operator $\\hat X$ acts as multiplication,\n\\begin{align*}\n\\hat X \\phi( x) &= \\int \\frac{\\d^4 k }{(2 \\pi)^4 }\\underbrace{\\left( \\beta_0 + \\beta_1 k_\\mu k^\\mu +\\beta_2 k_\\mu k^\\mu k_\\nu k^\\nu + \\ldots \\right)}_{=:X_k} \\phi( k) e^{-ik x}.\n\\end{align*}\n\nThe propagator of the field $\\phi$ is defined as the inverse of the field differential operator \\cref{diff_skalar_beliebig_eom} (i.e. as a Green's function) in momentum space,\n\\begin{align}\\label{diff_skalar_beliebig_propagator}\n \\Gamma_{2, \\text{free}}^{-1} (k) = \\langle \\phi(k) \\phi(-k)\\rangle &= \\frac{i}{X_k}.\n\\end{align}\nAs long as $\\beta_1 \\neq 0$, this propagator can also be obtained by the standard perturbative calculation of propagators: Use the offshell variable \\cref{def_offshellvariable} to write $X_k = x_k + X'_k$ where $X'_k$ contains all higher derivative terms and is to be treated as a perturbation. If $\\frac{i}{x_k}$ is identified as the propagator of $\\phi$, the remainder $X'_k$ gives rise to a 2-valent vertex with Feynman rule $iX'_k$. Summing arbitrary many of these vertices, connected with a propagator $\\frac{i}{x_k}$, one obtains a geometric series with the sum $\\frac{i}{x_k + X'_k } = \\frac{i}{X_k}$. \n\nNote that in this setting, the definition of ``offshellness'' is according to this propagator, i.e. the momentum $k$ is offshell iff $X_k\\neq 0$, regardless if $x_k=0$ or not. We do not care here if the pole $X_k=0$ of the propagator \\cref{diff_skalar_beliebig_propagator} corresponds to a one-particle state (i.e. 
is a simple pole with unit residue) as in the standard interpretation of perturbative quantum field theory.\n\n\n\\subsection{Diffeomorphism Feynman rules}\n\nApplying the diffeomorphism \\cref{def_diffeomorphism} to \\cref{diff_skalar_beliebig_L} gives rise to the Lagrangian\n\\begin{align}\\label{diff_skalar_beliebig_L_rho}\n\\mathcal L_\\rho &= \\frac 12 \\sum_{j=0}^\\infty \\sum_{k=0}^\\infty a_j a_k \\rho^{j+1} \\hat X \\rho^{k+1}.\n\\end{align}\nSince $a_0=1$, the propagator of $\\rho$ coincides with that of $\\phi$, \n\\begin{align*}\n\\left \\langle \\rho( p) \\rho(-p)\\right \\rangle &= \\frac{i}{X_p}.\n\\end{align*}\n\nTo derive the vertex Feynman rules of $\\rho$, consider the Fourier transform of the action of $\\hat X$ on a product of fields. \n\\begin{align*}\n\\hat X \\phi^n ( x) &= \\idotsint \\frac{\\d^4 k_1}{(2\\pi)^4} \\cdots \\frac{\\d^4 k_n}{(2\\pi)^4} \\phi( k_1) \\cdots \\phi( k_n) X_{1+\\ldots + n} \\cdot e^{-i k_1 x} \\cdots e^{-i k_n x} .\n\\end{align*}\nWe have defined $X_{1+\\ldots +n}$ analogous to \\cref{def_offshellvariable}, e.g.\n\\begin{align*} \nX_{1+2} &= \\beta_0 + \\beta_1 \\left( k_1 + k_2 \\right) _\\mu \\left( k_1 + k_2 \\right) ^\\mu + \\beta_2\\left( \\left( k_1 + k_2 \\right) _\\mu \\left( k_1 + k_2 \\right) ^\\mu \\right)^2+ \\ldots. \n\\end{align*}\nThe $n$-valent diffeomorphism vertex arises from terms $\\phi^j \\hat X \\phi^k$ in \\cref{diff_skalar_beliebig_L_rho} where $j+k=n$. Summing over all permutations of $n$ edges, this amounts to $j!$ times a sum over all ways to choose $k$ out of $n$, each of them multiplied by $k!$ because $X_{1+\\ldots + k}$ is symmetric under permutation of its indices. Hence the vertex Feynman rule is \n\\begin{align}\\label{diff_skalar_beliebig_vn}\ni v_n &= i \\frac 12 \\sum_{k=1}^{n-1} a_{n-k-1} a_{k-1} (n-k)! k! \\sum_{P\\in Q^{(n,k)}} X_{P }.\n\\end{align}\nThe latter sum consists of all $\\abs{ Q^{(n,k)} } = \\binom n k$ possibilities to choose $k$ out of $n$ external edges without distinguishing the order,\n\\begin{align}\\label{Qnk} \nQ^{(n,k)} :=\\left \\lbrace \\left \\lbrace j_1, \\ldots, j_k \\right \\rbrace \\subseteq \\left \\lbrace 1, \\ldots, n \\right \\rbrace \\right \\rbrace .\n\\end{align}\nOf course, these are the same partitions which make up elementary symmetric polynomials, i.e. the elementary symmetric polynomial of degree $k$ in variables $\\left \\lbrace z_1, \\ldots, zn \\right \\rbrace $ is $e_k(\\left \\lbrace z_1, \\ldots, z_n \\right \\rbrace )= \\sum_{P\\in Q^{(n,k)}} \\prod_{z\\in P} z $ .\n\\begin{example}\n\tFor $n=5, k=3$ there are ten summands\n\t\\begin{align*} \n\t\\sum_{P\\in Q^{(5,3)}} X_{P} &= X_{1+2+3} + X_{1+2+4} + X_{1+2+5}+ X_{1+3+4} + X_{1+3+5} \\\\\n\t&\\qquad + X_{1+4+5} + X_{2+3+4} + X_{2+3+5} + X_{2+4+5} + X_{3+4+5}.\n\t\\end{align*}\n\tThe indices correspond to the summands of the elementary symmetric polynomial\n\t\\begin{align*} \n\te_3(\\left \\lbrace z_1, \\ldots, z_5 \\right \\rbrace ) &= z_1 z_2 z_3 + z_1 z_2 z_4 + \\text{ (6 more) } + z_2 z_4 z_5+ z_3 z_4 z_5.\n\t\\end{align*}\n\\end{example}\n\nMomentum is conserved at each vertex, $ p_1+ \\ldots + p_n = 0$, hence only the elements of $Q^{(n, k)}$ for $k \\leq \\frac n 2$ are linearly independent. For $k>\\frac n 2$, each $X_P$ coincides with a term which was already present, this mechanism effectively cancels the prefactor $\\frac 1 2$ in the Feynman rule \\cref{diff_skalar_beliebig_vn}. If $n$ is even, the terms in $Q^{(n, \\frac n 2)}$ coincide pairwise, again cancelling the $\\frac 1 2$. 
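The combinatorial content of \cref{diff_skalar_beliebig_vn} is easily cross-checked by computer algebra. The following sketch (Python with \texttt{sympy} and \texttt{itertools}; the choice of language and the helper \texttt{vertex} are ours and not part of any established package) enumerates the subsets $Q^{(n,k)}$, implements momentum conservation by identifying $X_P$ with the $X$ of the complementary subset, and reproduces the low-order vertices quoted in the example below.

\begin{verbatim}
# Cross-check of the vertex rule
#   v_n = 1/2 * sum_{k=1}^{n-1} a_{n-k-1} a_{k-1} (n-k)! k! sum_{P in Q^(n,k)} X_P
# with momentum conservation p_1 + ... + p_n = 0, so that X_P = X_{complement of P}.
from itertools import combinations
from math import factorial
import sympy as sp

def vertex(n):
    a = sp.symbols('a0:%d' % n)           # diffeomorphism coefficients a_0 ... a_{n-1}
    total = sp.Integer(0)
    for k in range(1, n):
        for P in combinations(range(1, n + 1), k):
            comp = tuple(sorted(set(range(1, n + 1)) - set(P)))
            # canonical label: the shorter of P and its complement, since X_P = X_comp
            key = min(P, comp, key=lambda t: (len(t), t))
            X = sp.Symbol('X' + ''.join(map(str, key)))
            total += sp.Rational(1, 2) * a[n - k - 1] * a[k - 1] \
                     * factorial(n - k) * factorial(k) * X
    return sp.expand(total.subs(a[0], 1))

print(vertex(3))   # equals 2*a1*(X1 + X2 + X3)
print(vertex(4))   # equals 6*a2*(X1+X2+X3+X4) + 4*a1**2*(X12+X13+X14), with X14 = X23
\end{verbatim}

The canonicalisation in \texttt{key} is the only place where momentum conservation enters; removing it reinstates the redundant terms together with the prefactor $\frac 12$ discussed above.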
\n\n\n\\begin{example}Using $X_{1+2} \\equiv X_3$ etc., the 3-valent vertex \\cref{diff_skalar_beliebig_vn} is\n\t\\begin{align*}\n\tiv_3 &= i \\frac 12 a_1 2! 1!\\left( X_1+X_2 + X_3 \\right) + i \\frac 12 1! 2! \\left( X_{1+2} + X_{1+3}+ X_{2+3} \\right) \\\\\n\t&= 2ia_1 \\left( X_1 + X_2 + X_3 \\right) . \n\t\\end{align*}\n\tThis result coincides with \\cref{free_vertex_picture}, but all higher vertices do not, e.g.\n\t\\begin{align*}\n\tiv_4 \t&= 6 ia_2 \\left( X_1 + X_2 + X_3 + X_4 \\right) + 4i a_1^2 \\left( X_{1+2} + X_{1+3} + X_{2+3} \\right) . \n\t\\end{align*}\n\tThis is because for higher than second powers, the sum of momenta does not decompose, i.e. $X_{1+2}$ cannot generally be expressed as $X_1, X_2$ and $m^2$.\n\\end{example}\n\n\n\\subsection{Cancellation of internal edges}\n\nThe vertex Feynman rule \\cref{diff_skalar_beliebig_vn} involves summands $X_j$ which directly cancel an adjacent propagator $\\frac{i}{X_j}$, but also summands $X_{j+k+\\ldots}$. These summands correspond to a partition of the external edges into two subsets and it turns out they cancel the corresponding contribution of two vertices connected with an internal edge.\n\n\\begin{lemma}\\label{diff_skalar_beliebig_lem_intern}\n\tLet $e$ be an internal edge in a tree sum $A^n_n$ from \\cref{def_Ajn}. Then there is no summand proportional to $X_e$ in $A^n_n$. \n\\end{lemma}\n\\begin{proof}\n\tConsider one arbitrary tree $A\\in A^n_n$. \n\tSince $e$ is internal, it is connected to two vertices. Let their valence be $j$ resp. $k$, then $e$ together with the two vertices form a subtree $T$ which has $j+k-2\\leq n$ external edges, some (or all) of which may be external edges of $A^n_n$. Its Feynman rule is\n\t\\begin{align*}\n \tT&=\ti v_j \\cdot \\frac{i}{X_e}\\cdot i v_k,\n\t\\end{align*}\n\tWe identify the external edges of $T$ with numbers $1, \\ldots, j+k-2$ and consider terms proportional to $X_e$ in $T$. This requires both vertices $v_j, v_k$ to cancel the edge $e$ connecting them. By \\cref{diff_skalar_beliebig_vn} this summand has the Feynman amplitude \n\t\\begin{align*}\n\ti a_{k-2} (k-1)! X_e \\frac{i}{X_e} i a_{j-2} (j-1)! X_e &= -i a_{j-2} a_{k-2} (j-1)! (k-1)! X_e.\n\t\\end{align*}\n\t\n\tThe momentum running through $e$ corresponds to one way to partition the external momenta $\\left \\lbrace 1, \\ldots, j+k-2 \\right \\rbrace $ of $T$ into two sets, namely into $(k-1)$ and $(j-1)$ elements.\n\t\n\tThe tree sum $A^n_n$ also contains a tree $A'$ where $T$ is replaced by a vertex $iv_{j+k-2}$ at the same positon, see \\cref{diff_skalar_beliebig_T}. By \\cref{diff_skalar_beliebig_vn} this vertex is made of summands proportional to $X_P$ where $P$ is any way to partition the $j+k-2$ external edges into two sets. Especially, precisely two such partitions resembles the momentum running through $e$ such that $X_P=X_e$. They contribute to $iv_{j+k-2}$ with an amplitude \n\t\\begin{align*}\n\ti \\frac 12 a_{j-2}a_{k-2} (j-1)! (k-1)! X_e+ i \\frac 12 a_{k-2} a_{j-2} (k-1)! (j-1)!X_e &= i a_{j-2} a_{k-2} (j-1)! (k-1)! X_e.\n\t\\end{align*}\n\tThis contribution cancels the contribution from $T$. 
\n\t\n\t We did not impose any conditions on $T$, therefore the cancellation works in any case: Whenever there is an internal edge, the sum $A^n_n$ contains the same term with a vertex at the same position which cancels the contribution proportional to $X_e$.\n\\end{proof}\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\begin{tikzpicture} \n\t\n\t\n\t\\node at (-1,0) {$A=$};\n\t\n \n\t\\node [diffVertex,label=below:$iv_4$] (v1) at (1,0){};\n\t\\node [diffVertex,label=below:$iv_5$] (v2) at (3,0){};\n\t\\node [diffVertexGray] (v3) at (4, 1){};\n\t\\node [diffVertexGray] (v4) at (4, -.5){};\n\t\\node [diffVertexGray] (v5) at (0, .5){};\n\t \n\t\\draw [thick] (v1) -- node[above]{$ e$} (v2);\n\t\\draw [thick] (v1) -- (v5);\n\t\\draw [thick] (v3) -- (v2);\n\t\\draw [thick] (v4) -- (v2);\n\t\n\t \n\t\n\t\\draw [-,thick] (v1) -- +(200:1.4);\n\t\\draw [-,thick] (v1) -- +(220:1.4);\n\t\\draw [-,thick] (v2) -- +(0:1.4); \n\t\\draw [-,thick] (v2) -- +(20:1.4); \n\t\\draw [gray,thick] (v3) -- +(70:.4);\n\t\\draw [gray,thick] (v3) -- +(40:.4);\n\t\\draw [gray,thick] (v3) -- +(0:.4);\n\t\\draw [gray,thick] (v4) -- +(290:.5);\n\t\\draw [gray,thick] (v4) -- +(340:.5);\n\t\\draw [gray ,thick] (v5) -- +(120:.5);\n\t\\draw [gray ,thick] (v5) -- +(150:.5);\n\t\\draw [gray,thick] (v5) -- +(180:.5);\n\t\\draw [gray,thick] (v5) -- +(210:.5);\n\t\n\t\n\t\\node at (2, -1.5) {$\\underbrace{\\hspace{3cm} }_{T}$};\n\t\n\t\n\t\n\t\t\\node at (6,0) {$A'=$};\n\t\n\t\n\t\\node [diffVertex,label=below:$iv_7$] (v1) at (9,0){};\n\t\\node [diffVertexGray] (v3) at (11, 1){};\n\t\\node [diffVertexGray] (v4) at (11, -.5){};\n\t\\node [diffVertexGray] (v5) at (7, .5){};\n\t \n\t\\draw [thick] (v1) -- (v5);\n\t\\draw [thick] (v3) -- (v1);\n\t\\draw [thick] (v4) -- (v1);\n\t\n\t\n\t\n\t\\draw [-,thick] (v1) -- +(190:2.4);\n\t\\draw [-,thick] (v1) -- +(200:2.4);\n\t\\draw [-,thick] (v1) -- +(0:2.4); \n\t\\draw [-,thick] (v1) -- +(10:2.4); \n\t\\draw [gray,thick] (v3) -- +(70:.4);\n\t\\draw [gray,thick] (v3) -- +(40:.4);\n\t\\draw [gray,thick] (v3) -- +(0:.4);\n\t\\draw [gray,thick] (v4) -- +(290:.5);\n\t\\draw [gray,thick] (v4) -- +(340:.5);\n\t\\draw [gray ,thick] (v5) -- +(120:.5);\n\t\\draw [gray ,thick] (v5) -- +(150:.5);\n\t\\draw [gray,thick] (v5) -- +(180:.5);\n\t\\draw [gray,thick] (v5) -- +(210:.5);\n\t\n\t\n\t\n \n\t\\end{tikzpicture}\n\t\\caption{ Illustration of contributions to a tree sum for $j=4, k=5$. $A$ contains a subtree $T$ consisting of one internal edge $e$ connected to two vertices whereas in $A'$ that subtree is replaced by a single vertex.}\n\t\\label{diff_skalar_beliebig_T}\n\\end{figure}\n\n\nThe vertex Feynman rules \\cref{diff_skalar_beliebig_vn} allow for a transparent discussion of the cancellation of internal edges in tree sums. Therefore we believe that they are a more intuitive form than \\cref{diff_vn} even in the case of an ordinary quadratic propagator. Especially, it is nice to see how the Lagrangian being quadratic in $\\phi$ (i.e. free) translates to the Feynman rules containing partitions of edges into two subsets. 
The special discussion of possible constant terms in the Feynman rules is also no longer necessary: The vertex \\cref{diff_skalar_beliebig_vn} contains only terms proportional to some $X_P$, the constants $m^2$ in \\cref{diff_vn} were mere artifacts of decomposing $x_{i+j+\\ldots}$.\n\n\n\\begin{theorem}\\label{thm_A1n}\n\tFor any $n>2$ there is a $C_n (a_1, a_2, \\ldots)$ which does not depend on kinematics such that\n\t\\begin{align*}\n\tA_n^1 = -iC_n \\cdot \\left( X_1 + \\ldots + X_n \\right) .\n\t\\end{align*}\n\\end{theorem}\n\\begin{proof}\n\tBy the Euler characteristic \\cref{euler_graph}, a tree $A\\in A_n^1$ with $\\abs{V_A}$ vertices contains $\\abs{V_A}-1$ internal edges. By \\cref{diff_skalar_beliebig_vn} each vertex is proportional to some $X$. Each internal edge has an propagator proportional to $\\frac{1}{X}$ (these are not the same $X_k$, we just count powers of $X$). Consequently, $A_n^1$ is in total proportional to $X^1$. \n\t\n\tBy \\cref{diff_skalar_beliebig_lem_intern} there is no summand $\\propto X_e$ with any internal edge $e$ of $A^1_n$. Also, each vertex \\cref{diff_skalar_beliebig_vn} is proportional to $X^1$ (and not e.g. quadratic in $X$) and each external edge is connected to only one vertex in any $A\\in A^1_n$. Therefore there can be no summand proportional to $X_j^{k>1}$ for an external edge $j$. Further, there are no factors $X^{-1}_j$ for external edges $j$ since their propagators are not included in $A^1_n$. Consequently, an individual tree $A\\in A^1_n$ must be of the form $C_n \\frac{X_j X_k \\cdots }{X_e X_f \\cdots}$ where $j\\neq k\\neq \\ldots$ are external momenta and $e\\neq f\\neq \\ldots$ internal edges (i.e. partitions of the external momenta) and there is one more factor in the numerator than in the denominator. But in $A^1_n$, only one of the $n$ external edges is offshell, consequently there can be at most one non-zero factor in the numerator. The only non-zero term contributing to $A$ is of the form \t\n\t $X_j \\cdot c_n$ were $c_n$ does not depend on any $X$. Since $X$ represent the only possible dependence on kinematics in the Feynman rules \\cref{diff_skalar_beliebig_vn}, the constant $c_n$ must be independent of kinematics. \n\t \n\t The tree sum $A^1_n$ is symmetric in external momenta, hence it must be proportional to $\\sum_{j=1}^n X_j $ where again the proportionality constant does not depend on any $X_P$ but only on the diffeomorphism-parameters $\\left \\lbrace a_1, \\ldots, \\right \\rbrace $.\n\t \n\t \n\\end{proof}\nConsistency of \\cref{thm_A1n,def_bn} in the case $X_k \\rightarrow x_k$ requires that $C_n= b_{n-1}$ are the known coefficients from \\cref{thm_bn} resp \\cref{lem_inverse2}.\n\n\n\\begin{lemma}\n\tThe diffeomorphism does not alter the $S$-matrix and $S_n=A_n^0=0 \\ \\forall n>2$ .\n\\end{lemma}\n\\begin{proof}\n This is analogous to \\cref{lem_Sn_free}. The vanishing follows immediately from \\cref{thm_A1n} since there is no constant term which survives if all $X_j=0$. For loop amplitudes, the discussion from \\cref{subs_free_loops} is equally valid here where just all $x_j$ are to be replaced by $X_j$. \n\\end{proof}\n\n\n\n\n\n\\subsection{Recursive definition of tree sums}\n\n The proof of \\cref{thm_bn} in \\cite{KY17} relied on a recursive definition of $b_n$. Such a definition is possible in the arbitrary-propagator-case as well: $b_n$ has one upper vertex with valence $k+1$ connected to $k$ smaller tree sums. 
One has to sum all partitions of the $n$ lower edges into $k$ subsets and also all numbers $k$ of subtrees. Additionally, the offshell propagator $\\frac{i}{X_{1+\\ldots + n}}$ has to be included and all the lower propagators $\\left \\lbrace X_l \\right \\rbrace _{l\\in \\left \\lbrace 1, \\ldots, n \\right \\rbrace }$ are set to zero. This yields\n\n\\begin{align*} \nb_n &= - \\frac{1}{X_{1+\\ldots + n}} \\cdot \\sum_{k>1} \\sum_{\\left \\lbrace P_j \\right \\rbrace \\in P^{(n,k)}} b_{\\abs{P_1}} \\cdots b_{\\abs{P_k}} \\cdot v_{k+1} \\Big|_{\\left \\lbrace X_l \\right \\rbrace =0}\\\\\n&= - \\frac{1}{2X_{1+\\ldots + n}} \\sum_{k=2}^n\\sum_{\\left \\lbrace P_j \\right \\rbrace \\in P^{(n,k)}} b_{\\abs{P_1}} \\cdots b_{\\abs{P_k}}\\cdot \\sum_{j=1}^{k} a_{k-j} a_{j-1} (k+1-j)! j! \\sum_{Q\\in Q^{(k+1,j)}} X_{Q} \\Big|_{\\left \\lbrace X_l \\right \\rbrace =0}.\n\\end{align*}\nHere, $P^{(n,k)}$ is the set of all possible partitions of $\\left \\lbrace 1, \\ldots, n \\right \\rbrace $ into $k$ nonempty disjoint subsets\n\\begin{align*} \nP^{(n,k)} &= \\left \\lbrace \\left \\lbrace P_j \\right \\rbrace _j: P_1 \\sqcup \\ldots \\sqcup P_k = \\left \\lbrace 1, \\ldots, n \\right \\rbrace , \\abs{P_j} \\geq 1\\right \\rbrace \n\\end{align*}\nand $Q^{(n,k)}$ from \\cref{Qnk} is the set of all $k$-element subsets of $\\left \\lbrace 1, \\ldots, n \\right \\rbrace$. Note that in our case, $Q^{(k+1, j)}$ are not necessarily subsets of external edges but of the top edges of the lower $b_j$. These in turn are given by the partition $P_j$ and one of the external edges of $v_{k+1}$ is the top edge which carries momentum $(1+\\ldots + n )$, hence\n\\begin{align*} \nQ^{(k+1,j)}: =\\left \\lbrace \\left \\lbrace P_{i_1}, \\ldots, P_{i_{j}} \\right \\rbrace \\subseteq \\left \\lbrace P_1, \\ldots, P_k, (1+\\ldots + n )\\right \\rbrace \\right \\rbrace .\n\\end{align*}\n\n\\begin{example} \n\tThe first nontrivial tree sum $b_2$ only involves the summand $k=2$. Consequently, the only possible partition of the two lower edges is $1+1$ and \n\t\\begin{align*} \n\tb_2 &= -\\frac 12 \\frac{1}{X_{1+2}} \\sum_{j=1}^{2} a_{2-j} a_{j-1} (2+1-j)! j! \\sum_{Q\\in Q^{(3,j)}} X_{Q} \\Big|_{X_1=0=X_2} \\\\\n\t&= -\\frac{1}{2X_{1+2}} \\left( a_1 a_0 2! 1! \\left( X_1+X_2 + X_{1+2} \\right) + a_0 a_1 1! 2! \\left( X_{1+2} + X_{1} + X_2\\right) \\right) \\Big|_{X_1=0=X_2}\\\\\n\t&= -2 a_1.\n\t\\end{align*}\n\tThis coincides with \\cref{ex_bn}. For the next tree $b_3$, two values of $k$ contribute:\n\t\\begin{align*} \n\tb_3 &= -\\frac{1}{2 X_{1+2+3}} \\left( 3 b_2 \\sum_{j=1}^2 a_{2-j} a_{j-1} (3-j)! j!\\sum_{Q\\in Q^{(3,j)}} X_{Q} + \\sum_{j=1}^3 a_{3-j} a_{j-1} (4-j)! j! \\sum_{Q\\in Q^{(4,j)}} X_{Q} \\right) .\n\t\\end{align*}\n\tComputing the sums and inserting $\\left \\lbrace X_l \\right \\rbrace =0$ one again obtains the same result as in \\cref{ex_bn},\n\t\\begin{align*} \n\tb_3 &= -\\frac{1}{2 X_{1+2+3}} \\left( -8 a_1^2 \\left( X_{1+2} + X_{2+3} + X_{1+3} + 3X_{1+2+3} \\right) \\right. \\\\\n\t&\\qquad \\left. 
+ 12 a_2 \\left( X_{1+2+3} \\right) + 8a_1^2 \\left( X_{1+2} + X_{2+3} + X_{1+3} \\right) \\right) \\\\\n\t&=12 a_1^2-6a_2.\n\t\\end{align*}\n\\end{example}\n\n\n\n\n\n\n\n\n\\subsection{Interacting theory}\n\nIf instead of \\cref{diff_skalar_beliebig_L} the underlying Lagrangian density is\n\\begin{align} \\label{diff_skalar_beliebig_L_interacting}\n\\mathcal L_\\phi &= \\frac 12 \\phi(x) \\hat X \\phi (x) -\\frac{\\lambda_s }{s!}\\phi^s( x),\n\\end{align}\nadditionally the vertices $-iw^{(s)}_n$ given by \\cref{diff_arb_wns} are present. But since by \\cref{thm_A1n} each tree sum cancels precisely one adjacent propagator $\\frac{i}{X_j}$, the argument from \\cref{subs_cancellation_higher} works even for non-standard propagators. Especially, \\cref{thm_phis_Sn} still holds. \n\n\n\n\n\n\n\n\n\n\n\n\\section{Non-local field transformations}\\label{subs_nonlocal}\n\nWe noted below \\cref{arb_generalvertex} that even the most general vertex which can arise by a local diffeomorphism \\cref{def_diffeomorphism} of a scalar interacting quantum field theory \\cref{L_arb} has a very restricted momentum dependence which is essentially determined by the propagator resp. offshell variable \\cref{def_offshellvariable} of adjacent edges. On the other hand, in \\cref{sec_arbitrary} we saw that the combinatorics of the diffeomorphism are independent of the momentum dependence of the propagator. In this section we include derivatives into the transformation $\\phi \\mapsto \\rho$ in order to change the momentum dependence of the propagator of a free field. \n\n\\subsection{Arbitrary propagator from field transformation}\n\nWe first consider a transformation involving derivatives, but not a local diffeomorphism, i.e.\n\\begin{align}\\label{nonlocal_transform_only} \n\\phi (x) &= \\sum_{k=0}^\\infty \\alpha_k \\left( -\\partial_\\mu \\partial^\\mu \\right) ^k \\rho (x).\n\\end{align}\nApplied to the standard free Lagrangian \\cref{Lfree}, this yields (after $2k+1$ partial integrations)\n\\begin{align}\\label{L_nonlocal_only} \n\\mathcal L_\\rho &= \\frac 12 \\rho (x) \\sum_{n=0}^\\infty \\sum_{m=0}^n \\alpha_{n-k} \\alpha_k \\left( -\\partial_\\mu \\partial^\\mu -m^2 \\right) \\left( -\\partial_\\mu \\partial^\\mu \\right) ^{n} \\rho (x).\n\\end{align}\nThis equals the Lagrangian \\cref{diff_skalar_beliebig_L} with the operator\n\\begin{align*} \n\\hat X &= \\sum_{n=0}^\\infty \\sum_{k=0}^n \\alpha_{n-k} \\alpha_k \\left( -\\partial_\\mu \\partial^\\mu -m^2 \\right) \\left( -\\partial_\\mu \\partial^\\mu \\right) ^{n}= \\sum_{k=0}^\\infty \\beta_k \\left( -\\partial_\\mu \\partial^\\mu \\right) ^k\n\\end{align*}\nwhere\n\\begin{align}\\label{nonlocal_betan} \n\\beta_n &= \\sum_{k=0}^{n-1} \\alpha_{n-1-k} \\alpha_k -m^2 \\sum_{k=0}^n \\alpha_{n-k} \\alpha_k .\n\\end{align}\nBy this, the transformed field $\\rho$ can be identified with the underlying field $\\phi$ in the free field case in \\cref{sec_arbitrary}.\n \n\\subsection{Local diffeomorphism and non-local transformation}\\label{subs_local_nonlocal}\n\n\\Cref{L_nonlocal_only} suggests that with \\cref{nonlocal_betan} the non-locally transformed field $\\rho$ from \\cref{nonlocal_transform_only} takes the role of the underlying field $\\phi$ in the local diffeomorphism. 
That is, the local diffeomorphism \\cref{def_diffeomorphism} replaces $\\rho$ in \\cref{nonlocal_transform_only}.\n\n\\begin{definition}\\label{def_nonlocal_transformation}\n\tIn a combined transformation, the field $\\phi(x)$ is replaced by \n\t\\begin{align*} \n\t\\phi(x) &= \\sum_{k=0}^\\infty \\alpha_k \\left( -\\partial_\\mu \\partial^\\mu \\right) ^k \\left( \\sum_{j=0}^\\infty a_j \\rho^{j+1} (x) \\right) \n\t\\end{align*}\n\twhere both $\\left \\lbrace \\alpha_k \\right \\rbrace _{k \\geq 0}$ and $\\left \\lbrace a_j \\right \\rbrace _{j\\geq 0}$ are constants and $a_0=1$.\n\\end{definition}\nIndeed, an explicit calculation shows that it is not possible to have $\\alpha_k$ depend on $j$, i.e. to transform different monomials with different derivative operators. If that were the case, the cancellation property of the local diffeomorphism would break down as it would fail to produce vertices proportional to an inverse propagator.\n\nThe transformation \\cref{def_nonlocal_transformation} applied to a free Lagrangian \\cref{Lfree} gives rise to a field $\\rho$ which has propagator \\cref{diff_skalar_beliebig_propagator}\n\\begin{align}\\label{nonlocal_propagator} \n\\langle \\rho(k) \\rho(-k) \\rangle &= \\frac{i}{X_k} = \\frac{i}{\\beta_0 + \\beta_1 k^2 + \\beta_2 (k^2)^2 + \\ldots }\n\\end{align}\nwith $\\beta_n$ from \\cref{nonlocal_betan} and $(n\\geq 3)$-valent interaction vertices \\cref{diff_skalar_beliebig_vn},\n\\begin{align*}\ni v_n &= i \\frac 12 \\sum_{k=1}^{n-1} a_{n-k-1} a_{k-1} (n-k)! k! \\sum_{P\\in Q^{(n,k)}} X_{P }.\n\\end{align*}\nBy \\cref{thm_A1n}, all $(n>2)$-valent tree sums vanish in the onshell limit $X_j=0\\ \\forall j\\in \\left \\lbrace 1, \\ldots, n \\right \\rbrace $, thus $\\rho$ is a free theory. Note that on the other hand its 2-point function stays \\cref{nonlocal_propagator}, so the derivative part of the non-local transform \\cref{def_nonlocal_transformation} has a potential influence on observables. \n\nIn this paper, we do not consider the effect of \\cref{def_nonlocal_transformation} on an underlying interacting Lagrangian. \n\n\n\n\\section{Conclusion}\n\nWe have continued the perturbative examination of diffeomorphisms of quantum field started in \\cite{velenich,KY17} and reached the following results:\n\\begin{enumerate}\n\t\\item Even if the underlying Lagrangian is not a free theory, but contains self-interaction, still a local diffeomorphism does not contribute to the $S$-matrix (\\cref{thm_Alm}) provided tadpole graphs vanish. Thus, scalar quantum field theories related by diffeomorphism can be regarded physically equivalent.\n\t\t\\item The invariance of the $S$-matrix under local field diffeomorphisms does not require a special form of the propagator (\\cref{thm_A1n}). 
\n\t\\item The coefficients of the diffeomorphism of an interacting theory can be chosen such that for an offshell amplitude, the interaction is cancelled at tree level and the integrands of loop amplitudes are zero at a specific external momentum up to tadpole graphs (\\cref{thm_adiabatic_phis}) .\n\t\\item In the case of an underlying free Lagrangian, it is possible to extend the local diffeomorphism to a transformation involving derivatives in a way that preserves the combinatoric structure of the local diffeomorphism (\\cref{subs_local_nonlocal}).\n\\end{enumerate}\nApart from the natural task to extend the study to charged or non-scalar fields, open questions to be covered in future work are:\n\\begin{enumerate}\n\t\\item There should be a simple combinatoric explaination why the tree sums $b_n$ from \\cref{def_bn} are the coefficients of the inverse diffeomorphism (\\cref{lem_inverse,lem_inverse2}), perhaps by a Legendre transform \\cite{jackson_robust_2017,weinberg2}.\n\t\\item By power counting, the local diffeomorphism $\\rho$ is non-renormalizable, whereas by the above theorems we (in principle) know its $S$-matrix values, which are finite if the underlying theory is renormalizable. At which point do assumptions about renormalizability enter and is it possible to carry out a renormalization procedure for the field $\\rho$ directly, i.e. without undoing the diffeomorphism?\n\\end{enumerate}\n\n \n.\n\n\n\n\n\n\n\n\n \n\\printbibliography\n\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nApart from the low-energy regime of the strong interaction,\nflavour physics is the least tested part of the \nSM. This is reflected in the rather large error bars of \nseveral flavour parameters such as the mixing parameters\nat the twenty percent level \\cite{Parodi}.\nHowever, the experimental situation concerning $B$ physics \nwill drastically change in the near future. \nThere are several $B$ physics experiments successfully\nrunning at the moment. In the upcoming years new facilities will start\nto explore $B$ physics with increasing \nsensitivity and within different \nexperimental settings \\cite{Artuso}.\n\nThe $b$ quark system \nis an ideal laboratory for studying flavour physics. \nHadrons containing a $b$ quark \nare the heaviest hadrons experimentally accessible. \nSince the mass of the $b$ quark is much larger than the QCD scale, \nthe long-range strong interactions are expected to be comparably\nsmall and are well under control thanks to the heavy quark \nexpansion~\\cite{Isgur,Neuberta}. \n\nOf particular interest are the so-called rare $B$ decays, which are \nflavour changing neutral current processes (FCNC) which\nvanish at the tree level of the SM. Thus, they \nare rather sensitive probes for physics beyond the SM \\cite{Susya,a}.\n\nOne of the main difficulties in analysing rare $B$ decays is the \ncalculation of short-distance QCD effects. These radiative corrections \nlead to a tremendous rate enhancement.\nThe QCD radiative corrections bring in large logarithms of the form \n$\\alpha_s^n(m_b) \\, \\log^m(m_b\/M)$,\nwhere $M$ is the top quark or the W mass and $m \\le n$ (with $n=0,1,2,...$).\nThey have to get resummed at least to \nleading-log (LL) precision ($m=n$). \n\n\nWithin the SM the accuracy\nin the domina- ting perturbative contribution to \n$B \\rightarrow X_s \\gamma$ \nwas recently improved to NLL\nprecision.\nThis was a joint effort of many different groups \n \\cite{AG91}. 
The theoretical error of the previous \nleading-log (LL) result was substantially reduced \nto $\\pm 10\\%$ and the central value of the partonic\ndecay rate increased by about $20\\%$.\n\n\n\nSupersymmetric extensions of the SM have become the most popular \nframework of new theoretical structures at higher scales,\nmuch below the Planck scale. \nThe precise mechanism of the necessary supersymmetry breaking\nis unknown \\cite{question}. A reasonable approach to this problem \nis the inclusion of the most general \nsoft breaking term consistent with the SM gauge symmetries in the\nso-called unconstrained minimal supersymmetric standard model (MSSM). \nThis leads to a proli- feration of free parameters in the theory. \n\nA global fit to electroweak precision measurements within supersymmetric\nmodels shows that \nif the superpartner spectrum becomes light the fit to the data results\nin typically larger values of $\\chi^2$ compared with the SM. \nSupersymmetric \\mbox{models}, however, \ncan always avoid serious constraints from data because the supersymmetric \ncontributions decouple \\cite{Damien}.\n\nIn the MSSM there are two kinds of new contributions to FCNC processes.\nThe first class results from flavour mixing in the sfermion mass matrices\n\\cite{DNW}. Moreover, one has CKM-induced contributions from charged Higgs\nboson and chargino exchanges (see \\cite{miksum}). \nThis leads to the well-known supersymmetric flavour problem: the severe\nexperimental constraints on flavour violations have no direct explanation \nin the structure of the unconstrained MSSM. \nClearly, the origin of flavour violation is a model-dependent issue\nand is based \non the relation of the dynamics of flavour and \nthe mechanism of supersymmetry breaking. \nKeeping in mind our current phenomenological knowledge about \nsupersymmetry, it \nis suggestive to \nperform a model-independent analysis of flavour changing phenomena. \nSuch an analysis provides important hints on the more \nfundamental theory of soft supersymmetry breaking.\n\n\n\\section{Gluino Contribution to $B \\rightarrow X_s \\gamma$}\n \nAmong inclusive rare $B$ decays, the $B \\rightarrow X_s \\gamma$ mode\nis the most prominent because it is the only decay mode in this class \nthat is already measured \\cite{CLEOneu,ALEPH}.\nMany papers are devoted to studying the $B \\rightarrow X_s \\gamma$ decay \nand similar decays within the MSSM.\nHowever, in most of these analyses, the contributions of\nsupersymmetry were not investigated with the systematics of the SM\n calculations. \nIn \\cite{guidice} it was shown, \nthat in a specific supersymmetric scenario NLL contributions are \nimportant and lead to a significant reduction of the stop-chargino \nmass region where the supersymmetric contribution has\na large destructive interference with the charged-Higgs boson \ncontribution. \nIt is expected that the complete NLL calculation drastically decreases \nthe scale dependence and, thus, the \\mbox{theoretical} error. \nThe NLL analysis \nis also a necessary check of the validity of the perturbative ansatz\n(see \\cite{BG}).\nThe NLL calculations in \\cite{guidice} and \nalso in \\cite{mikolaj99} are worked out in the\nheavy gluino case. 
In the analysis \\cite{complete} reported here, \nthe gluino-mediated decay $B \\rightarrow X_s \\gamma$\nis discussed where the gluino is not assumed to be decoupled.\n\nPrevious work on the gluino contribution \n\\cite{DNW,HKT,GGMS}\ndid not include LL or NLL QCD corrections,\nand gluino exchanges were\ntreated in the so-called mass insertion approximation (MIA) \nonly, \nwhere the off-diagonal squark mass matrix elements are taken to be small \nand their higher\npowers neglected.\nIn our analysis we explore the limits of the MIA.\nFurthermore, we analyse the sensitivity of the bounds \non the sfermion mass matrices to radiative QCD corrections. \n\n\nWithin the SM, there is one \ncoupling constant, $G_F$, relevant to the $b \\rightarrow s \\gamma$\ndecay. There is also one flavour violation parameter only, \nnamely the product \nof two CKM matrices. All the loops \\mbox{giving} the logarithms \nare due to gluons, which imply a factor of $\\alpha_s$.\nThe corrections can then be classified according to: \n\\begin{itemize}\n\\item (LL), $G_F (\\alpha_s Log)^N$, \n\\item (NLL), $G_F \\alpha_s (\\alpha_s Log)^N$.\n\\end{itemize}\nThus, the above ordering also reflects the actual size of the\ncontributions to $b \\rightarrow s \\gamma$. \n\nThe corresponding analysis of QCD corrections in the MSSM is much more \ncomplicated. \nThe MSSM has several couplings\nrelevant to this decay and there are several \nflavour changing parameters. Thus, a formal LL term might have a \nsmall coupling\nwhile a NLL contribution is multiplied with a large one.\nMoreover, the couplings generally depend on the parameters,\nand the results should be applicable for large domains on\nthe parameters.\n\nAnother complication in supersymmetric theo- ries is the occurrence of flavour \nviolations such as gluino exchanges\n(through the gluino-quark-squark coupling)\nwhere additional factors $\\alpha_s$ \nare induced.\nThey lead to magnetic penguin operators where the Wilson \ncoefficients naturally contain factors of $\\alpha_s$. \nMoreover, these contributions induce magnetic operators where \nthe (small) factor $m_b$ is replaced by the gluino mass. Clearly this \ncontribution is expected to be\ndominating. \nThe gluino-induced contributions to the decay amplitude for $b \\to s \\gamma$\nare of the following form:\n\\begin{itemize}\n\\item (LL), $\\alpha_s \\, (\\alpha_s Log)^N$\n\\item (NLL), $\\alpha_s \\, \\alpha_s (\\alpha_s Log)^N$\n\\end{itemize}\n\nIn the matching calculation, \nall factors $\\alpha_s$ regardless of their source should get \nexpressed in terms of the $\\alpha_s$ running with five flavours.\nHowever, non-decoupling effects through\nviolations of the supersymmetric equivalence between gauge bosons and \ncorresponding gaugino couplings have to be taken \ninto account at the NLL level.\n\nFurthermore, one finds that \ngluino-squark boxes induce new scalar and tensorial \nfour-Fermi operators, which are shown to \nmix into the magnetic operators without gluons. \nOn the other hand, the vectorial four-Fermi operators \nmix only with an\nadditional gluon into magnetic ones. 
\nThus, they will contribute at the next-to-leading order only.\nHowever, from the numerical point\nof view the contributions of the vectorial operators (although NLL) are\nnot necessarily suppressed w.r.t the new four-Fermi contributions;\nthis is due to the expectation \nthat the flavour-violation parameters\npresent in the Wilson coefficients of the new operators are expected\nto be much smaller (or much more stringently constrained)\nthan the corresponding ones in the coefficients of the vectorial \noperators. This is one of the most important reasons why a complete NLL\norder calculation should be performed. \n\nThe mixed graphs, containing a $W$, a gluino and a squark, \nare proportional to $G_F \\alpha_s$. They give rise only to corrections\nof the SM operators at the NLL level.\nThere are also penguin contributions with two gluino lines\nin the NLL matching.\n\n\nThe current discussion is restricted to the $W$ or \ngluino-mediated flavour changes and does not consider contributions \nwith other Susy particles such as chargino, charged Higgs or neutralino. \nClearly, analogous phenomena occur in those contributions.\n\nTo understand the sources of flavour violation that may be present in\nsupersymmetric models in addition to those enclosed in the CKM matrix,\none has to consider the contributions to the mass\nmatrix of a squark of flavour $f$: \n\\begin{equation}\n{\\cal M}_{f}^2 = \n\\left( \\begin{array}{cc}\n m^2_{f,LL} & m^2_{f,LR} \\\\\n m^2_{f,RL} & m^2_{f,RR} \n \\end{array} \\right) +\n\\label{squarku}\n\\nonumber\n\\end{equation}\n\\begin{equation}\n + \\left( \\begin{array}{cc}\n F_{f,LL} +D_{f,LL} & F_{f,LR} \\\\\n F_{f,RL} & F_{f,RR} +D_{f,RR} \n \\end{array} \\right) \n\\nonumber\n\\label{squarku2}\n\\end{equation}\n\nIn the super CKM basis where the quark mass matrix is diagonal \nand the squarks are rotated in parallel to their superpartners,\nthe $F$ terms from the superpotential and the $D$ terms \nin the $6 \\times 6$\nmass matrices ${\\cal M}^2_f$ turn out to be diagonal \n$3 \\times 3$ submatrices. This is in general not true\nfor the additional terms (\\ref{squarku}) from the soft \nsupersymmetry breaking potential. \nBecause all neutral gaugino couplings are flavour diagonal\nin the super CKM basis, the gluino contributions to the\ndecay width of $b \\to s \\gamma$ are induced by the off-diagonal\nelements of the soft terms \n$m^2_{f,LL}$, $m^2_{f,RR}$, $m^2_{f,RL}$.\n\n\n\n\n\n\n\\section{Numerical Results}\n\\label{constraints}\n\n\n\nWe show a few features of our numerical results\nbased on a complete LL calculation. More details \nof the analysis can be found in \\cite{complete}.\nThe size of the gluino contribution crucially depends on the soft\nterms in the squark mass matrix ${\\cal M}_{\\tilde{D}}^2$ and to a lesser\nextent on those in ${\\cal M}_{\\tilde{U}}^2$. \nIn the following, we take all the diagonal entries in the soft matrices\n$m^2_{\\tilde{Q},LL}$,\n$m^2_{\\tilde{D},RR}$,\n$m^2_{\\tilde{U},RR}$,\nto be equal;\ntheir common mass is denoted by $m_{\\tilde{q}}$ and set to the value $500$ GeV.\nFirst, the matrix element \n$m^2_{\\tilde{D},LR;23}$ \nis varied. 
All other entries in the soft mass terms are put to zero.\nFollowing the notation of \\cite{GGMS},\nwe define \n\\begin{equation} \n\\delta_{LR;23} = m^2_{\\tilde{D},LR;23}\/m^2_{\\tilde{q}} \\quad \n\\mbox{and} \\quad x=m^2_{\\tilde{g}}\/m^2_{\\tilde{q}} \\, ,\n\\label{deltadef}\n\\end{equation}\nwhere $m_{\\tilde{g}}$\nis the gluino mass.\n\\begin{figure}[p]\n\\begin{center}\n\\leavevmode\n\\epsfxsize= 6.0 truecm\n\\epsfbox[18 167 580 580]{qcdlr.ps}\n\\end{center}\n\\caption[f1]{Gluino-induced branching ratio $BR(B \\to X_s \\gamma)$ \nas a function of $x= m^2_{\\tilde{g}}\/m^2_{\\tilde{q}}$,\nwhen only $\\delta_{LR,23}$ is non-vanishing (see text).}\n\\label{sizeqcd23ll}\n\\end{figure}\n\\begin{figure}[p]\n\\begin{center}\n\\leavevmode\n\\epsfxsize= 6.0 truecm\n\\epsfbox[18 167 580 580]{glsmd23lr_mubxn.ps}\n\\end{center}\n\\begin{center}\n\\leavevmode\n\\epsfxsize= 6.0 truecm\n\\epsfbox[18 167 580 580]{glsmd23ll_mubxn.ps}\n\\end{center}\n\\caption[f1]{$BR(B \\to X_s \\gamma)$ \nwith $x$ = 0.3 (short-dashed\n line), 0.5 (long-dashed line), 1 (solid line), 2 (dot-dashed line), see text.}\n\\label{glsm23ll}\n\\end{figure}\n\\begin{figure}[p]\n\\begin{center}\n\\leavevmode\n\\epsfxsize= 6.0 truecm\n\\epsfbox[18 167 580 580]{exaktll_sm_gl_0.3.ps}\n\\end{center}\n\\begin{center}\n\\leavevmode\n\\epsfxsize= 6.0 truecm\n\\epsfbox[18 167 580 580]{exaktll_sm_gl_4.0.ps}\n\\end{center}\n\\caption[f1]{Mass insertion approximation (dashed line) vs. exact result \n(solid line) as a function of $\\delta_{LL;23}$ (see text).} \n\\label{massins_ll03}\n\\end{figure}\n\\begin{figure}[p]\n\\begin{center}\n\\leavevmode\n\\epsfxsize= 6.0 truecm\n\\epsfbox[18 167 580 580]{ll23lr33lr23.ps}\n\\end{center}\n\\caption[f1]{$BR(B \\to X_s \\gamma)$ as a function of $\\delta_{LR;23}$\nincluding interference effects with a chain $\\delta_{LL;23} \\delta_{LR;33}$ in solid line \n(see text).}\n\\label{ll23lr33lr23}\n\\end{figure} \nIn Fig.~\\ref{sizeqcd23ll}, \nthe QCD-corrected\nbranching ratio is shown as a function of\n$x$ (solid lines), obtained when only $\\delta_{LR,23}$ is vanishing \n($\\delta_{LR,23}=0.01$).\nShown is also the range of variation\nof the branching ratio, delimited by dotted lines, obtained when the\nlow-energy scale $\\mu_b$ spans the interval $2.4$--$9.6\\,$GeV. The\nmatching scale $\\mu_W$ is here fixed to $m_W$. As can be seen, the\ntheoretical estimate of $BR(B \\to X_s \\gamma)$ is still largely\nuncertain ($\\sim \\pm 25\\%$). An extraction of bounds on the $\\delta$\nquantities more precise than just an order of magnitude, therefore,\nwould require the inclusion of next-to-leading logarithmic QCD\ncorrections. It should be noticed, however, that the inclusion of the\nLL QCD corrections has already removed the large ambiguity on the\nvalue to be assigned to the factor $\\alpha_s(\\mu)$ in the\ngluino-induced operators. \nBefore adding QCD corrections, it is not clear whether the explicit $\\alpha_s$ \nfactor should be taken at some high scale $\\mu_W$ or a some low scale \n$\\mu_b$, the difference is a LL effect.\nThe corresponding values\nfor $BR(B \\to X_s \\gamma)$ for the two extreme choices of $\\mu$ are\nindicated in Fig.~\\ref{sizeqcd23ll} by\nthe dot-dashed lines ($\\mu=m_W$) and the dashed lines\n($\\mu=4.8\\,$GeV). The branching ratio is then virtually unknown. 
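Before comparing with data it is worth recalling that the $\delta$ quantities are already restricted by internal consistency: the squark mass-squared matrix must remain positive definite. The following schematic illustration (Python with \texttt{numpy}; the reduction of the soft mass matrix in (\ref{squarku}) to a single $2\times2$ block with equal diagonal entries is an oversimplification made only for orientation) shows the mechanism. The realistic $6\times6$ matrix yields stronger, parameter-dependent statements, as the example for $\delta_{LL;23}$ at $x=4$ discussed below illustrates.

\begin{verbatim}
import numpy as np

msq = 500.0**2                     # common diagonal soft mass squared, m_squark = 500 GeV
for delta in (0.01, 0.5, 1.2):     # illustrative values of a single off-diagonal delta
    # schematic 2x2 block of the squark mass-squared matrix in the super CKM basis
    M2 = msq * np.array([[1.0, delta],
                         [delta, 1.0]])
    eigs = np.linalg.eigvalsh(M2)
    print(delta, eigs, 'positive definite' if eigs.min() > 0 else 'unphysical')
\end{verbatim}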
\n\nIn spite of the large uncertainties which the branching ratio \n$BR(B \\to X_s \\gamma)$ still has at the LL in QCD, it is possible\nto extract indications on the size that the $\\delta$-quantities \nmay maximum acquire without inducing conflicts with the \nexperimental measurements. As already noted in Ref.~\\cite{GGMS}, \nthe element $\\delta_{LR,23}$ is certainly the flavour-violating\nparameter most efficiently constrained. In Fig.~\\ref{glsm23ll} , \nthe dependence of $BR(B \\to X_s \\gamma)$ is shown as a function \nof this parameter when this is the only flavour-violating source. \nThe branching ratio is obtained by adding the SM and the \ngluino contribution obtained for different choices of $x$\nfor the fixed values \n$\\mu_b = 4.8\\,$GeV and $\\mu_W = m_W$. \nThe gluino contribution interferes constructively with the SM \nfor negative values of $\\delta_{LR,23}$, which are then more \nsharply constrained than the positive values. Overall, this \nparameter cannot exceed the per cent level. \nMuch weaker is the dependence on\n$\\delta_{LL,23}$ if this is the only off-diagonal\nelement in the down squark mass matrix. This dependence is illustrated\nin Fig.~\\ref{glsm23ll} for different choices of $x$. \nThe induced\ngluino contribution interferes constructively with the SM\ncontribution for positive $\\delta_{LL,23}$. \nNotice that given the\nlarge values of $\\delta_{LL,23}$ allowed by the experimental\nmeasurement, the MIA cannot be used in this\ncase to obtain a reliable estimate of $BR(B \\to X_s \\gamma)$, whereas\nit is an excellent \\mbox{approximation} of the complete calculation in the\ncase of $\\delta_{LR,23}$.\nIn the upper frame of Fig.~\\ref{massins_ll03},\nwe vary $\\delta_{LL;23}$ for $x=0.3$.\nThe MIA and the exact result start to deviate considerably for\n$x>0.4$, i.e. well within the experimental error band;\nthe exact result leads to more stringent bounds. An even more drastic\nexample is shown in the lower frame of Fig.~\\ref{massins_ll03}, where\nwe increase $m_{\\tilde{g}}$ in such a way \nthat $x=4$, leaving all the other parameters\nunchanged: for $\\delta_{LL;23}>0.1$ \nthe mass matrix ${\\cal M}_{\\tilde{D}}^2$ \nhas at least one negative eigenvalue.\nThis feature is of course completely missed in\nthe MIA.\nOne also has to consider\ninterference effects. In Fig.~\\ref{ll23lr33lr23}\nwe show that the additional contribution through\na chain $\\delta_{LL;23} \\delta_{LR;33}$ weakens\nthe bound on the parameter $\\delta_{LR;23}$ significantly.\nIn the solid curve we put \n$\\delta_{LL;23}=\\delta_{LR;33}=\n\\sqrt{\\delta_{LR;23}}$, while in the dashed curve \n$\\delta_{LL;23}=\\delta_{LR;33}=0$.\nWe have chosen again $x=0.3$.\n\n\n\nFinally, we stress that a consistent precision analysis\nof the bounds on the sfermion mass matrix \nshould include a NLL calculation and also \ninterference effects with the chargino contribution. \\\\\n\n\n\n\n\n\n\n\n\n\n\n\n\\thanks \n\nThe work reported here has been done in\ncollaboration with F. Borzumati, C. Greub \nand \\mbox{D. 
Wyler}, which is gratefully acknowledged.\n\n\n\\newpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe spectra of SNRs in radio continuum are usually represented by a power-law, reflecting the pure \nsynchrotron radiation from the SNR shell:\\begin{equation}\nS_{\\nu}\\propto\\nu^{-\\alpha},\n\\end{equation} where $S_{\\nu}$ is the spatially-integrated flux density and $\\alpha$ is the radio spectral index.\n\n\nTest-particle diffusive shock acceleration (DSA) theory predicts that for strong shocks, radio spectral index $\\alpha$ \nis approximately 0.5 (Bell 1978ab). Indeed, mean value of the observed radio spectral indices, for Galactic SNRs, is \naround 0.5 and correspondent distribution is roughly Gaussian \\citep{b7}. On the other hand, the dispersion in that \ndistribution is non-negligible (see Fig.\\@ 1 in Reynolds et al.\\@ 2012). \n\n\nIn the case of shocks with sufficiently low Mach number (less than around ten), steeper spectra \n($\\alpha>0.5$) are generally expected. In test-particle DSA, for the synchrotron spectral index $\\alpha$, the following relations hold:\n\\begin{equation}\n\\alpha=\\frac{s-1}{2},\\ \\ s=\\frac{\\chi+2}{\\chi-1},\\ \\ \\chi=\\frac{\\gamma+1}{\\gamma-1+\\frac{2}{M^{2}}}\\approx\\frac{\\gamma+1}{\\gamma-1}, \n\\end{equation}\nwhere $\\gamma$ is the post-shock thermal gas adiabatic index, $\\chi$ is the compression ratio for parallel shock, $M$ \nis the (upstream adiabatic) Mach number that represents the ratio of shock velocity to the local sound speed and $s$ is the energy spectral \nindex. On the other hand, only few old SNRs would be expected to have such weak shocks \\citep{b16}. In fact, SNRs with steep spectrum \n($\\alpha>0.5$) are usually young objects \\citep{b7,b33,b34}. Steeper radio spectra may be explained by a cumulative effect of \nnon-linear and oblique-shock steepening in the case of young SNRs \\citep{b28}. An important prediction of the so called \nnon-linear DSA (NDSA), where the reaction effects of cosmic-ray particle pressure is taken into account, is that the spectrum \nof young SNRs should flatten at higher energies so that a \"concave up\" spectrum is formed \\citep{b15,b8}. \n\n\nThere is also a considerable number of Galactic SNRs with $\\alpha<0.5$ \\citep{b16}. 
Contamination with flat spectrum \nthermal emission can not be completely ruled out in all of the cases so it may be partially responsible for these lower \n(flat) spectral index values.\n\n\\begin{table*}\n \\centering\n \\caption{Properties of several Galactic SNRs with $\\alpha<0.5$ that expand in high density environment: type, spectral \nindex $\\alpha$, low-frequency turn-over in overall spectrum (t.o.), distance $d$, mean diameter $D$, approximate age $t$, \nassociation\/interaction with molecular cloud (MC), and detection in $\\gamma$ rays ($\\gamma$).}\n \\begin{tabular}{@{}cccccccccc@{}}\n \\hline\nSNR&Type&$\\alpha$&$\\rm{t.o.}$&$d\\ [\\rm{kpc}]$&$D\\ [\\rm{pc}]$&$t\\ [\\rm{kyr}]$&MC&$\\gamma$&Ref.\\\\\n\\hline\nG6.4-0.1 (W28)&M-M&0.35$^{*}$&?&1.9&26.5&33-150&Y&Y&1--5\\\\\nG18.8+0.3 (Kes 67)&S&0.40&N&14&57&16-100&Y&?&6--8\\\\\nG31.9+0.0 (3C391)&M-M&0.49$^{\\dagger}$&Y&8&$\\sim$14&4&Y&Y&9--12\\\\\nG34.7-0.4 (W44)&M-M&0.37&N&2.9&$\\sim$26&20&Y&Y&13--16\\\\\nG39.2-0.3 (3C396)&C&0.34&Y&6.2&$\\sim$13&3&Y&?&13, 17--19\\\\\nG43.3-0.2 (W49B)&M-M&0.46&Y&8-11&9-13&2.3&Y&Y&13, 20--22\\\\\nG89.0+4.7 (HB21)&M-M&0.36&N&0.8&$\\sim$25&19&Y&Y&23--25\\\\\nG94.0+1.0 (3C434.1)&S&0.45&N&4.5&$\\sim$39&25&Y?&?&13, 26--27\\\\\nG189.1+3.0 (IC443)&M-M&0.38&Y&1.5&20&4&Y&Y&23, 28--31\\\\\n\\hline\n\\end{tabular}\n\n\\bigskip\n\n\\begin{flushleft}\n\n{\\small\n\n(1) Abdo at al.~(2010a), (2) Dubner et al.~(2000), (3) Gabici et al.~(2010), (4) Giuliani et al.~(2010), (5) Sawada \\& Koyama (2012), \n(6) Tian et al.~(2007), (7) Paron et al.~(2012), (8) Dubner et al.~(1999), (9) Brogan et al.\\@ (2005), (10) Chen et al.~(2004), \n(11) Castro \\& Slane (2010), (12) Green (2009), (13) Sun et al.~(2011), (14) Abdo et al.~(2010b), (15) Castelletti et al.~(2007), \n(16) Uchida et al.~(2012), (17) Su et al.~(2011), (18) Patnaik et al.~(1990), (19) Anderson \\& Rudnick (1993), \n(20) Abdo at al.~(2010c), (21) Moffett \\& Reynolds (1994), (22) Zhou et al.~(2011), (23) Gao et al.~(2011), \n(24) Reichardt et al.~(2012), (25) Pannuti et al.~(2010), (26) Foster (2005), (27) Jeong et al.~(2012), (28) Troja et al.~(2006), \n(29) Troja et al.~(2008), (30) Castelletti et al.~(2011), (31) Abdo et al.~(2010d)\n\n\\medskip\n\n$*$ \\citet{b7} noted that spectral index of this SNR is varying.\\\\\n$\\dagger$ Or 0.54 at frequencies higher and 0.02 at frequencies lower than 1 GHz (Sun et al.~2011).\\\\}\n\n\\end{flushleft}\n\n\\end{table*}\n\nThe most of the (shell, composite and mixed-morphology) SNRs with $\\alpha<0.5$ are in fact (evolutionary) old objects \nexpanding in a high density environment. Because the shock velocities are much lower than in young SNRs, the oblique-shock \neffects considered by Bell et al.\\@ (2011) are unlikely to contribute to the flattening of the radio spectral indices of \nthese SNRs. Also, in the case of older SNRs, it is not yet clear whether the mechanism for producing relativistic electrons \nis local acceleration at a shock front or compression of the existing population of Galactic background relativistic \nelectrons (Leahy 2006 and references therein).\n\n\nIt must be noted that the majority of members of the so called mixed-morphology class of SNRs have flat spectral indices \n(see Table 4 in Vink 2012). The mixed-morphology (thermal composite) SNRs are characterized with bright (thermal) interiors \nin X-rays, and bright (non-thermal) shell-like radio morphology \\citep{b18, b26}. 
They are mainly evolutionary old SNRs \nexpanding in a high density environment (many of them interacting with molecular clouds) and many of them are detected in \n$\\gamma$-rays \\citep{b26}. In Greens's catalog of Galactic SNRs \\citep{b7} these remnants are classified either as shell \nor composite type (they are not separated from the rest of SNRs).\n\n\nIt should be pointed out that, from the list of the SNRs with flat spectrum, plerions (pulsar wind or synchrotron nebulae) \nare excluded since they are not of interest for this paper. In fact, as the radio and X-ray emission from these objects \nare powered by the pulsar wind, not by the supernova explosion, they should not be referred as SNRs (at least in a classical \ncontext). For pulsar wind nebulae, radio spectral indices are usually less than around $0.3$ \\citep{b31}.\n\n\nFinally, for a completeness of this summary, it should be noted that among the Galactic and some Large Magellanic Cloud \n(LMC) SNRs, existence of significant spectral break and steepening at higher radio frequencies were also observed \n(Crawford et al.\\@ 2008; Xiao et al.\\@ 2008; Bozzetto et al.\\@ 2010; de Horta et al.\\@ 2012). The best example is \nthe radio continuum spectrum of Galactic SNR G180.0-1.7 (S147) for which the spectral break is around 1.5 GHz, \nwith low-frequency spectral index around 0.3 and high-frequency spectral index around 1.20 (Xiao et al.\\@ 2008). \nThe compression of the local magnetic field and shift of the turn-over in the Galactic radio spectrum to higher \nfrequencies is one of the possible explanation of this kind of spectrum. On the other hand, synchrotron losses \nduring the early phase of the SNR would cause a bend at a rather high frequency, but subsequent expansion of the \nSNR would shift the frequency toward about 1.5 GHz (Xiao et al.\\@ 2008). Some of the models that try to explain \n\"concave down\" spectra of some Galactic and extragalactic objects (including synchrotron losses as well as finite \nemission region) are presented in the following papers: Bregman (1985), Biermann \\& Strittmatter (1987), \nHeavens \\& Meiseheimer (1987).\n\n\nIt is very difficult to derive accurate integrated flux densities. The small number of data points \n(with associated errors) and the dispersion of flux densities at the same frequencies pose a practical problem \nin determining the (integrated) radio spectral index and, generally, the shape of radio continuum. In that sense, it \nshould be noted that present knowledge of radio spectral indices as well as of the overall shape of the Galactic SNR \nradio continuum spectra is far from being precise (at least for the majority of Galactic SNRs). New observations \nusually lead to non-negligible changes in the measured spectral indices. For example, although for some mixed-morphology \nSNRs Greens's catalog \\citep{b7} gives rather steep spectral index, new observations given in \\citet{b33} and \\citet{b34} \ngive slightly flatter, but still steep ($\\alpha>0.5$) spectra (e.g.\\@ for G53.6-2.2, $\\alpha=0.75\\rightarrow0.50$; \nG93.7-0.2, $\\alpha=0.65\\rightarrow0.52$; G116.9+0.2, $\\alpha=0.61\\rightarrow0.57$; G160.9+2.6, $\\alpha=0.64\\rightarrow0.59$). \nIn this paper are considered only those SNRs for which the flux densities are on the scale of \\citet{b1} and have known \nerrors $<20\\%$. 
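In practice, the integrated spectral index is obtained from a weighted least-squares fit of $\log S_{\nu}$ against $\log \nu$. A minimal sketch of such a fit is given below (Python with \texttt{numpy}; the frequencies, flux densities and errors are invented placeholders, not measurements of any particular remnant). With only a handful of points and 10--20\% flux errors the fitted $\alpha$ carries a correspondingly large uncertainty, which motivates the selection criteria adopted here.

\begin{verbatim}
import numpy as np

# invented example data (placeholders only)
nu = np.array([0.408, 1.42, 2.7, 4.85, 10.7])   # GHz
S  = np.array([45.0, 28.0, 22.0, 17.0, 12.5])   # Jy
dS = 0.15 * S                                   # assumed 15 per cent errors

x, y  = np.log10(nu), np.log10(S)
sig_y = dS / (S * np.log(10.0))                 # error of log10(S)
slope, intercept = np.polyfit(x, y, 1, w=1.0 / sig_y)
alpha = -slope                                  # S_nu ~ nu^(-alpha)
print('alpha = %.2f' % alpha)                   # roughly 0.4 for these numbers
\end{verbatim}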
SNRs with only four or less data points are not discussed in this paper.\n\n\nTable 1 shows the list of several Galactic SNRs, with well defined radio spectra in the sense of previously mentioned \ncriteria, and their basic properties: type, integrated radio spectral index $\\alpha$, presence of low-frequency \nturn-over in overall spectrum (t.o.), distance $d$, mean diameter $D$, approximate age $t$, association\/interaction \nwith molecular cloud (MC), and detection in $\\gamma$-rays ($\\gamma$). \n\n\nThe question of nature of the flat spectral indices in some Galactic SNRs is, in fact, still open. In Sections 2 -- 4 a \nbrief revision of proposed explanations (with the strongest arguments in their favor) is presented. \nSection 4 also deals with high energy properties of the SNRs with flat radio spectra focusing on their detection in \n$\\gamma$-rays with the Large Area Telescope onboard Fermi $\\gamma$-ray Space Telescope ({\\it Fermi} LAT). In Section 5, model \nwhich includes significant intrinsic thermal bremsstrahlung radiation is discussed. Finally, Section 6 concludes the paper.\n\n\\section[Models involving Fermi II mechanism]{Models involving second-order Fermi mechanism}\n\nAlthough Fermi I (DSA) is generally believed to be a primary mechanism for particle acceleration in SNRs, \nFermi II (stochastic acceleration, see Liu et al.\\@ 2008 for details) process in the shock turbulent vicinity could also \nbe significant in some cases.\n\n\n\\citet{b20} used results of Dr\\\"{o}ge et al.\\@ (1987) who solved the steady-state transport equation for the \nvolume-integrated phase space distribution function of relativistic electrons in the vicinity of a plane-parallel \nshock wave allowing simultaneously for energy gain by multiple shock crossing and by second-order Fermi momentum \ndiffusion upstream and downstream of the shock. They concluded that the observed dispersion in spectral index values \nbelow $\\alpha=0.5$ is attributed to a distribution of low upstream plasma $\\beta$ values for different SNRs. \n\n\nPlasma $\\beta$ is defined as follows: \\begin{equation}\n\\beta=\\frac{P}{P_{\\rm{mag}}}=\\frac{8\\pi P}{B^{2}}=\\frac{2}{\\gamma}\\left(\\frac{V_{\\mathrm{sound}}}{V_{\\rm A}}\\right)^{2}=\\frac{8\\pi nk(T_{\\rm e}+T_{\\rm i})}{B^{2}},\n\\end{equation}\nwhere $P$ is the gas pressure, $P_{\\rm{mag}}$ the magnetic pressure, $B$ the magnetic induction, $n$ the number density, \n$k$ the Boltzmann constant, $T_{\\rm e}$ and $T_{\\rm i}$ the electron and ion temperatures (if different, or just temperature $T$ \nof the medium), $V_{\\mathrm{sound}}$ the adiabatic sound speed, $V_{\\rm A}$ the Alfv$\\mathrm{\\acute{e}}$n speed and $\\gamma$ \nthe adiabatic index. For example, temperatures of around $10^{4}$ K ($\\sim$ 1 eV) and number densities of around $0.1\\ \\rm{cm^{-3}}$ \nlead to the plasma $\\beta$ of 0.16 for the average value of Galactic magnetic field (magnetic induction) of around \n$5\\ \\rm{\\mu\\rm{G}}$. Higher densities (as well as higher temperatures) would rise the plasma $\\beta$. Also, higher magnetic \nfields will result in lower values of plasma $\\beta$ for the same set of temperatures and densities.\n\n\n\\citet{b20} also pointed out that values of $\\beta\\simeq0.05$ are sufficient to yield $\\alpha\\simeq0.2$ for shock \ncompression ratios higher than (or equal) 2.5 (assuming, also, constant spatial diffusion coefficient; see equations \n6 and 7 in Schlickeiser \\& F\\\"{u}rst 1989). 
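Evaluating the definition of $\beta$ above for representative interstellar parameters takes only a few lines. The sketch below (Python, cgs units; treating the medium as fully ionized hydrogen with a single temperature per species is an assumption made purely for illustration) gives values of order 0.1--0.3 for $n\sim0.1\ \rm{cm^{-3}}$, $T\sim10^{4}$ K and $B\sim5\ \mu$G, i.e. the low-$\beta$ regime relevant for the above discussion.

\begin{verbatim}
import math

k_B = 1.380649e-16                         # Boltzmann constant in erg/K

def plasma_beta(n_cm3, T_e, T_i, B_gauss):
    # beta = 8*pi*n*k*(T_e + T_i) / B^2 in cgs units
    return 8.0 * math.pi * n_cm3 * k_B * (T_e + T_i) / B_gauss**2

print(plasma_beta(0.1, 1.0e4, 1.0e4, 5.0e-6))   # about 0.28
print(plasma_beta(0.1, 1.0e4, 0.0,   5.0e-6))   # about 0.14 if only one species is warm
\end{verbatim}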
\n\n\nOn the other hand, in their work, they did not account for the possibility of compression ratios higher than 4 that could be \nexpected in some (evolutionary) older SNRs (spectral index becomes 0.5 for compression ratio of 4 independently of \nplasma $\\beta$). Dr\\\"{o}ge et al.\\@ (1987) emphasized that the flat spectra could be obtained for compression \nratios that tend to the value of 4 if the spatial diffusion coefficient increases with momentum and $\\beta=1$. However, \nin the case of $\\beta=1$ plasma and constant spatial diffusion coefficient, flat spectral indices are obtained for \ncompression ratios higher than 4 (which is not discussed in Dr\\\"{o}ge et al.\\@ 1987). Adiabatic index is assumed \nto be $5\/3$ in all the relevant equations.\n\n\n\\citet{b13} discussed the particle acceleration process at a (parallel) shock wave, in the presence of the second-order Fermi \nacceleration in the turbulent medium near the shock, as an alternative explanation for the observed flat radio spectra \nof some Galactic SNRs in molecular clouds. He found that even very weak shocks may produce very flat cosmic-ray particle \nspectra in the presence of momentum diffusion (using power-law form for the spatial diffusion coefficient -- $\\kappa\\propto p^{0.67}$, \nassuming shock compression ratio of 4 and applying an effective jump condition for the Alfv$\\mathrm{\\acute{e}}$n velocity \n-- $V_{{\\rm A},1}=V_{{\\rm A},2}$, where indices 1 and 2 refer to the upstream and downstream of the shock, respectively). \nOstrowski (1999) pointed out that as a consequence, the dependence of the particle spectral index of the shock \ncompression ratio can be weaker than that predicted for the case of pure first-order acceleration. His analysis \nwas based on the model of Ostrowski \\& Schlickeiser (1993) that improved the work of Dr\\\"{o}ge et al.\\@ (1987) and \n\\citet{b20} deriving a simplified kinetic equation in the momentum space valid for the case of small momentum diffusion \nand also allowing the description of stationary spectra at the shock.\n\n\n\\citet{b13} emphasized that shock waves with Alfv$\\mathrm{\\acute{e}}$n speed non-negligible in comparison to the shock \nvelocity are responsible for generating the flat particle distribution. He noted that for the ratio of \nAlfv$\\mathrm{\\acute{e}}$n speed and shock velocity (inverse of the so called Alfv$\\mathrm{\\acute{e}}$n Mach number, \nsee equation 4) around 0.1 the second-order Fermi process can substantially modify the energy spectrum of the \nparticles accelerated at the shock and that the model holds for values of that ratio less than 0.2. \n\n\nPreviously mentioned Alfv$\\mathrm{\\acute{e}}$n Mach number of a shock is determined by \\citep{b2}:\n\\begin{equation}\nM_{\\rm A}=\\frac{V_{\\rm s}}{V_{\\rm A}}\\approx460\\ V_{{\\rm s}8}\\ n_{\\rm a}^{1\/2}\\ \/\\ B_{-6}, \n\\end{equation}\nwhere $V_{\\rm s}$ is the shock velocity, $V_{\\rm A}$ is the Alfv$\\mathrm{\\acute{e}}$n \nvelocity and $V_{{\\rm s}8}$ is the shock velocity in $10^{8}\\ \\rm{cm}\\ \\rm{s}^{-1}$, $n_{\\rm a}$ is the ionized \nambient gas number density in $\\rm{cm}^{-3}$ and $B_{-6}$ is the local magnetic induction, just before the shock, \nmeasured in $\\mu\\rm{G}$. Alfv$\\mathrm{\\acute{e}}$n Mach number is generally an important parameter as the ratio \nof acceleration rates of the Fermi I (DSA) and Fermi II (second-order) processes scales as $M_{\\rm A}^{-2}$ \\citep{b16}. 
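The numerical coefficient quoted in equation 4 is easy to verify, and the same few lines give the ratio $V_{\rm A}/V_{\rm s}$ which controls the relative importance of the second-order process (a sketch in Python, cgs units; the assumption of a fully ionized hydrogen medium, $V_{\rm A}=B/\sqrt{4\pi n_{\rm a} m_{\rm p}}$, is ours):

\begin{verbatim}
import math

m_p = 1.6726e-24                                # proton mass in g

def v_alfven(n_cm3, B_gauss):                   # Alfven speed in cm/s
    return B_gauss / math.sqrt(4.0 * math.pi * n_cm3 * m_p)

def mach_alfven(V_s_kms, n_cm3, B_microG):
    return V_s_kms * 1.0e5 / v_alfven(n_cm3, B_microG * 1.0e-6)

# V_s = 1000 km/s, n_a = 1 cm^-3, B = 1 microgauss  ->  M_A of about 460
print(mach_alfven(1000.0, 1.0, 1.0))
# a slow radiative shock in denser, more magnetized gas: V_A/V_s of order 0.1
print(1.0 / mach_alfven(100.0, 10.0, 20.0))
\end{verbatim}

For parameters of the latter kind the ratio $V_{\rm A}/V_{\rm s}$ indeed reaches the $\sim0.1$ level at which the model of \citet{b13} operates.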
\nAs the bulk post-shock flow is expected to be super-Alfv$\\mathrm{\\acute{e}}$nic it is logical to conclude that the DSA \nis more rapid process. \n\n\nThe corresponding analysis was successfully applied to the SNRs W44 and IC443. Although \\citet{b13} could not reproduce the \nsteep spectral index (0.55) in the case of SNR 3C391, using the spectral index value of 0.49 \\citep{b7}, this model could be \njustified (for shock velocity of around 300 km\/s and inter-clump density of around $10\\ \\rm{cm^{-3}}$, as in Ostrowski 1999, \nthis model could account for observable spectral index of 0.49 if magnetic induction is $\\sim 17\\ \\rm{\\mu\\rm{G}}$; see \nequations 2.6-2.9 from Ostrowski 1999). It is worth mentioning that compression ratios higher than 4 (not discussed in \nOstrowski 1999) would give, in this model, lower magnetic induction estimates. \n\n\nIt is worth noting that SNRs expanding in molecular clouds mostly evolve in the clumpy medium. The dominant part of \nthe cloud mass is contained in the compact dense clumps filling only around 10\\% or less of the cloud volume and the rest \nof the cloud is filled with inter-clump gas of average number density of $10\\ \\rm{cm^{-3}}$ (Chevalier 1999; Ostrowski 1999). \nHowever, one must bear in mind that SNR precursors could contribute to modification of physical conditions in the immediate \npre-shock ISM (Arthur 2012). \n\n\nThe fact that many SNRs with flat spectral indices are also connected with molecular cloud environment could be a sign that \nmodels like \\citet{b13} are more appropriate in these cases. As the radiative shock has been revealed by the {\\it Spitzer} \nIRS slit observation of SNR 3C396, that expands through the clumpy environment, model of Ostrowski (1999) could possibly \naccount for the observed flat radio spectral index (Hewitt et al.\\@ 2009; Su et al.\\@ 2011). For inter-clump density of \naround $1\\ \\rm{cm^{-3}}$ and for shock velocities of 120 km\/s and 350 km\/s (Hewitt et al.\\@ 2009), magnetic induction of \n$10-29\\ \\rm{\\mu\\rm{G}}$ is needed, respectively, to account for observable radio spectral index of 0.34 (see equations \n2.6-2.9 from Ostrowski 1999). For a blast wave velocity of 870 km\/s (Su et al.\\@ 2011), needed value of magnetic induction \nis higher -- around $72.5\\ \\mu\\rm{G}$. In the case of SNR W28, for shock velocity of around 80 km\/s and inter-clump density \nof around $5\\ \\rm{cm^{-3}}$ (Rho \\& Borkowski 2002), model of Ostrowski (1999) could account for observable spectral index \nof 0.35 if magnetic induction is $\\sim 14\\ \\rm{\\mu\\rm{G}}$. For higher inter-clump density of $10\\ \\rm{cm^{-3}}$, needed \nmagnetic induction would be around $20\\ \\rm{\\mu\\rm{G}}$. Finally, HB21 is known to be interacting with clumpy molecular \nclouds (Pannuti et al.~2010 and references therein). If we assume, approximately, 130 km\/s for the velocity of radiative \nshell and the mean ambient \\mbox{H\\,{\\sc i}} density of $3.7\\ \\rm{cm^{-3}}$ (Koo et al.~2001 and references therein), model \nof Ostrowski (1999) could account for observable radio spectral index of 0.36 if magnetic induction is \n$\\sim 19\\ \\rm{\\mu\\rm{G}}$. 
In all of these cases a compression ratio of 4 was assumed.\n\n\nFinally, it must be emphasized that there are several SNRs with flat spectral indices that are generally associated \nwith relatively low densities, such as G82.2+5.3 (W63), with $\\alpha=0.44$ (Higgs et al.~1991; Mavromatakis 2004; \nGao et al.\\@ 2011), and G166.0+4.3 (VRO 42.05.01), with $\\alpha=0.33$ (Guo \\& Burrows 1997; Leahy \\& Tian 2005; \nBocchino et al.\\@ 2009; Gao et al.\\@ 2011). Both SNRs are classified as mixed-morphology ones. W63 is probably \nexpanding in a wind-blown bubble and VRO 42.05.01 has probably broken out of the cloud within which it formed, \nexpanded across an interstellar tunnel or cavity, and is now interacting with the material that forms the \nopposite tunnel wall (Pineault et al.\\@ 1987; Burrows \\& Guo 1994; Guo \\& Burrows 1997; Mavromatakis 2004). \nIf the model of \\citet{b20} is acceptable for these SNRs, then the observed spectral indices lead to \nplasma $\\beta$ values of around 0.17 and 0.05 for W63 and G166.0+4.3, respectively, in the case of a compression ratio of 3.9 \nand a constant spatial diffusion coefficient. Slightly higher values of plasma $\\beta$ are obtained for a compression ratio \nof 2.5. If the power-law index of the momentum dependence of the spatial diffusion coefficient equals 0.67, then the observed spectral \nindices could not be obtained by the model of Dr\\\"{o}ge et al.\\@ (1987). Finally, if plasma $\\beta=1$, then compression ratios \nof around 6.95 and 12.3 correspond to the observed spectral indices of W63 and G166.0+4.3, respectively (for a constant spatial \ndiffusion coefficient). These estimates would, on the other hand, imply (unrealistically) low isothermal Mach numbers for \nfully radiative SNR shocks.\n\n\\section[Models involving high compression ratios]{Models involving high compression ratios}\n\nThe flatter spectra may also be due to compression ratios greater than four at fully radiative (isothermal) shocks \n(in the post Sedov-Taylor phases of SNR evolution). In the case of a parallel isothermal shock, the compression ratio \n$\\chi$ is equal to the square of the isothermal Mach number ($M_{\\rm T}$), so that test-particle DSA gives:\n\\begin{equation}\n\\alpha=\\frac{3}{2(\\chi-1)},\\ \\ \\ \\ \\chi=M_{\\rm T}^{2}.\n\\end{equation}\nFigure 1 shows that this explanation, for the observed flat spectral index range $\\alpha\\in(0.25-0.5)$, \nholds only for fully radiative shocks with very low isothermal Mach numbers (around 2--3) \nin the test-particle DSA.\n\nOf course, one must bear in mind that the onset of the so-called momentum-conserving snowplow phase occurs \nonly near the end of an SNR's lifetime, or may not occur at all, and even the pressure-driven snowplow model is just \nan asymptotic case \\citep{b29}. Moreover, local regions of the SNR blast wave may leave the Sedov-Taylor phase earlier \nat those places where the ambient density is much higher than average, though the bulk evolution still corresponds to \nthe energy conserving phase \\citep{b15}. 
This means that different parts of an SNR may be in different phases so that \ndifferent shock velocities and compression ratios within the SNR are possible, which could have repercussions \non the integrated radio spectrum.\n\n\\begin{figure}[h!]\n\\includegraphics[width=84mm]{onic_fig1.eps}\n \\caption{The change of integrated radio spectral index with isothermal Mach numbers of fully radiative (isothermal) \nSNR shock wave.}\n\\end{figure}\n\n\nIt must be noted that the values of adiabatic index slightly less than $5\/3$ (due to high temperatures behind the shock front \nand\/or because the molecules in the cloud, with which SNR interacts, will cool more efficiently than the ambient medium or \nthe ejecta as discussed in Davis 2007) would reproduce some of the observable flat spectra in the case of parallel adiabatic \nshock wave and test-particle DSA (see equation 2). On the other hand, such values are generally not expected in the \npost-shock medium. Finally, Anand (2012) derived and discussed the jump relations across a shock in non-ideal gas flow. \nIt is interesting to note that even for very small non-idealness of gas, using equation (5) from Anand (2012), for the \ncompression ratio, flat spectral indices are obtained for the large range of shock velocities in the framework of DSA. \nOf course, applicability of the non-ideal gas equation of state in the case of the particular ISM plasmas is questionable.\n\n\\section{High energy properties of the SNRs with flat integrated radio spectral indices}\n\nSince there is a significant connection between the majority of Galactic SNRs with flat integrated radio spectrum and their \ndetection in $\\gamma$-rays, the high energy properties of these SNRs are now discussed (see Table 1). Apart from historical \n(e.g.\\@ Tycho and Cassiopeia A -- Abdo et al.~2010e; Giordano et al.~2012) and young TeV-bright SNRs \n(e.g.\\@ RX J1713.7-3946 and Vela Jr. -- Abdo et al.~2011; Tanaka et al.~2011), among the SNRs recently \ndetected with {\\it Fermi} LAT, there is a huge class of SNRs that interact with molecular clouds as well as of the \nevolved SNRs without molecular cloud interaction (e.g.\\@ Cygnus Loop, S147 -- Katagiri et al.~2011; Katsuta et al.~2012). \n\n\n$\\gamma$-ray emission from SNRs can be produced by few different, non-exclusive, mechanisms. Electrons and positrons in the \nSNR shell can interact by inverse Compton scattering with ambient photon fields (such as cosmic microwave background and \ninfrared radiation) to produce $\\gamma$-rays (Helder et al.\\@ 2012). Also, electron-atom or electron-molecule interactions \nin a dense medium result in bremsstrahlung radiation (Reichardt et al.~2012). Mentioned mechanisms involve electrons \naccelerated up to the sufficient energies to produce the observed $\\gamma$-rays (leptonic scenario). $\\gamma$-rays \ncan also be produced by neutral pion decay, where neutral pions result from the collision of accelerated protons \n(or heavier nuclei) with nucleons of the ambient gas (hadronic scenario).\n\n\nIn the case of SNRs that interact with molecular clouds, hadronic scenario is probable and one of the spectral signatures \nis steepening in the GeV band. Generally, absence of non-thermal X-ray emission favors hadronic origin of the observed \n$\\gamma$-ray emission. 
The rapid steepening of the spectrum above a few GeV is the kind of signature expected from the \nre-acceleration of pre-existing cosmic-rays, where high energy cosmic-rays escape from the SNR confinement region \n(Reichardt et al.~2012 and references therein). Uchiyama et al.\\@ (2012) emphasized that it is not yet clear \nwhether the $\\gamma$-ray emission is produced by escaping or by trapped cosmic-rays, since shocked molecular clouds \ninside an SNR could also be the sites of efficient $\\gamma$-ray production. Although the majority of SNRs that interact \nwith molecular clouds and are detected in $\\gamma$-rays are those with flat radio spectra (e.g.\\@ W44, IC443, W28, W49B, \n3C391, W51C, CTB37A, HB21 -- Abdo et al.~2009; Abdo et al.\\@ 2010abcd; Castro \\& Slane 2010; Reichardt et al.~2012), \nit must be mentioned that several SNRs that interact with molecular clouds and have spectral indices of 0.5 are also \ndetected in $\\gamma$-rays with {\\it Fermi} LAT (e.g.\\@ G8.7-0.1, G109.1-1.0, G349.7+0.2 -- Castro \\& Slane 2010; \nAjello et al.~2012; Castro et al.\\@ 2012). \n\n\nReichardt et al.~(2012) emphasized that SNR HB21 belongs to the group of low-luminosity, GeV-emitting SNRs \n(such as the Cygnus Loop or S147), which are clearly less luminous than the first GeV-emitting SNRs that were \ndiscovered (such as W51C, IC443, W49B). The spectral break is found at a lower energy in HB21 than in the \ncase of the luminous SNRs. Reichardt et al.~(2012) suggested that the $\\gamma$-ray emission from HB21 can \nbe understood as a combination of emission from shocked\/illuminated molecular clouds, one of them coincident \nwith the SNR shell itself.\n\n\nSNR G260.4-3.4 (Puppis A), with a radio spectral index of 0.5 (Green 2009), was also observed in $\\gamma$-rays \n(Hewitt et al.~2012) and it represents an interesting transitional case between young SNRs still evolving into \na circumstellar medium (e.g.\\@ Cas A), and older SNRs which are interacting with large, dense molecular clouds \n(e.g.\\@ IC 443).\n\n\nIn the case of Kes 67, not yet detected by {\\it Fermi} LAT, the bright pulsar wind nebula HESS J1825-137 is extended near \nthe position of the SNR (Grondin et al.~2011), thus making the analysis highly complicated. Similarly, near the position of \nSNR 3C396, a radio-faint $\\gamma$-ray pulsar (PSR J1907+0602) powering a bright TeV pulsar wind nebula (MGRO J1908+06) is \nlocated, substantially complicating the analysis (Abdo et al.~2010f). \n\n\nA possible interaction with a molecular cloud in the case of SNR 3C434.1 is indicated by Jeong et al.~(2012), which makes \nthis remnant an interesting target for an analysis of possible $\\gamma$-radiation in its direction. On the other hand, \nthere are no point sources included in the LAT 2-year point source catalog (Nolan et al.~2012) closer than 2 degrees \nto the position of SNR 3C434.1. Also, there are no extended sources closer than 10 degrees to the position of this remnant. \nA detailed study of the $\\gamma$-emission in the direction of SNR 3C434.1 is planned for the near future. \n\n\\subsection[Model of Uchiyama et al.]{Model of Uchiyama et al.}\n\n\\citet{b24} analyzed $\\gamma$-ray emission from SNRs interacting with molecular clouds. They showed that the simple \nre-acceleration of pre-existing cosmic-rays by the process of diffusive shock acceleration at a cloud shock is generally \nsufficient to power the observed $\\gamma$-ray emission through the decays of neutral pions. 
Neutral pions are produced in \nhadronic interactions between high-energy protons (nuclei) and gas in the compressed-cloud layer. \\citet{b24} proposed \nthat the radio emission may be additionally enhanced by the presence of secondary electrons\/positrons, i.e.\\@ the products \nleft over from the decay of charged pions, created due to cosmic-ray nuclei colliding with the background plasma. They \nconcluded that presence of secondary electrons\/positrons may also explain the flat spectral radio indices of some \nmixed-morphology SNRs. \n\n\n\\citet{b24} applied their analysis to SNRs W51C, W44 and IC443. Of course, this kind of explanation holds only for those SNRs \nthat interact with molecular clouds assuming further that hadron scenario for $\\gamma$-radiation is significant. Another \nrestriction of this model is the fact that \\citet{b24} used relations that are applicable in the case of the Sedov-Taylor \nphase which are not suitable for mixed-morphology SNRs. On the other hand, the simple re-acceleration of pre-existing cosmic-rays \nand subsequent compression alone would not fully explain the $\\gamma$-rays associated with cloud-interacting SNRs. It must be \nnoted that GeV and TeV $\\gamma$-rays outside the southern boundary of SNR W28 may be explained by molecular clouds \nilluminated by runaway cosmic-rays (Uchiyama et al.~2010 and references therein). \\citet{b24} also assumed pre-existing \ncosmic-rays in the cloud to have the same spectra as the Galactic cosmic-rays in the vicinity of the Solar system although \nthe ambient cosmic-rays in the pre-shock cloud may deviate from the Galactic pool due to the runaway cosmic-rays that have \nescaped from SNR shocks at earlier epochs.\n\n\nAnother model by \\citet{b3} was also proposed assuming the possibility of electron acceleration by \nmagnetohydrodynamical turbulence behind high density shocks (Vink 2012). In their model, the calculated radio \nspectrum fits that observed from the SNR IC443 shell because of the large shock compression ratio and to a lesser \nextent because of the effect of the second-order Fermi acceleration. It is worth mentioning that this model involves \na lepton scenario of $\\gamma$-ray emission from SNRs interacting with molecular clouds. \n\n\\section[Model involving thermal radiation]{Model involving significant intrinsic thermal bremsstrahlung radiation}\n\nRecently, it was shown that the observations over a very broad range of radio frequencies reveal a curvature in the spectra \nof some evolutionary older Galactic SNRs expanding in high density environment (Tian \\& Leahy 2005; \nUro{\\v s}evi{\\' c} \\& Pannuti 2005; Leahy \\& Tian 2006; Uro{\\v s}evi{\\' c}, Pannuti \\& Leahy 2007; \nOni\\'c \\& Uro{\\v s}evi{\\'c} 2008). The NDSA effects could not be responsible for a characteristic shape of \nthe integrated radio spectra in these cases. The presence of intrinsic thermal bremsstrahlung radiation was proposed \nas an explanation of the \"concave up\" radio spectrum (Uro{\\v s}evi{\\' c} 2000; Uro{\\v s}evi{\\' c} et al.\\@ 2003ab; \nOni\\'c et al.\\@ 2012 and references therein). Presence of the cooled thermal X-ray electrons in post Sedov-Taylor SNRs, \nhigher ambient densities, interaction with or expansion in molecular cloud are main arguments used by the model that \nincludes significant intrinsic thermal bremsstrahlung emission (thermal model). 
Detections of the low-frequency turn-overs, \nin the integrated radio spectrum, related with thermal absorption linked to SNRs (Brogan et al.\\@ 2005; \nCastelletti et al.\\@ 2011) are important in justification of this model as one can then predict \nthe significance of thermal emission at higher frequencies. Generally, linear polarization measurements \n(that give the upper limits for the thermal component of the radio emission), possible detections in \n$\\mathrm{H}\\alpha$ as well as the observations of radio recombination lines associated with SNRs should accompany \nthis kind of analysis. \n\n\nIt must be emphasized that most of the potential targets for testing the thermal model hypothesis fall \nin the mixed-morphology class of SNRs. Because of the fact that these SNRs appear to be in the radiative \nphases of their evolution, with shock velocities less than 200 km\/s (Vink 2012), the ensemble of cooled \nthermal X-ray electrons could exist, supporting the thermal model \\citep{b11}. In addition, recent discovery of \nstrong radiative recombination continua (RRC), seen in the X-rays of several SNRs of this type, represents \ndefinite evidence that their plasma is recombining (Yamaguchi et al.\\@ 2012). RRCs in their X-ray spectra are \nindicative of higher densities in interiors of these remnants (Vink 2012). Thus, mixed-morphology SNRs are \ncharacterized by a more\/less uniform high interior density with medium hot temperatures, rather than very high \ntemperatures and very low interior densities, as in the case of typical shell remnants. Also, among the SNRs with detected \nRRC, the most of them (if not all) have flat radio spectral index (Sawada \\& Koyama 2012; Uchida et al.~2012; \nVink 2012). Finally, as it was emphasized earlier, the morphology of these SNRs is difficult to explain with standard \nSNR evolutionary phases \\citep{b26} and several models were proposed in the current literature (White \\& Long 1991; \nCox et al.\\@ 1999). An X-ray emitting overionized plasma is the result of an early heating followed by a rapid cooling, \nbut the mechanism responsible for such processes in mixed-morphology SNRs is still not completely understood (Miceli 2011). \n\n\n\\citet{b11} have recently analyzed the integrated radio spectra of 3 SNRs (IC443, 3C391, 3C396) for which the thermal model \ncould be a natural explanation. All of the 3 SNRs are characterized by flat spectral indices (see Table 1) based on a simple \npower-law (synchrotron) fit and all of them interact with molecular clouds. For IC443 and 3C391 the low-frequency turn-over \nis due to thermal absorption linked to the SNRs and for SNR 3C396 the nature of the observed low-frequency turn-over is not \nyet clear. It must be emphasized that most of the SNRs that are the best targets for testing thermal model are also \ncharacterized by the flat spectral indices. The observed flat spectral indices could be just the apparent manifestation of \nthe presence of thermal component. It must be noted that there are no predictions of curved -- \"concave up\" integrated radio \nspectra in the cases of the explanations presented in Sections 2--4 for the flat radio spectra seen in some SNRs. \n\n\nThe shock wave dissociates molecules, or raises the gas temperature so that previously inaccessible degrees of freedom \nbecome accessible. 
The compression ratio for the parallel shock wave, including the possibility of a jump in the adiabatic index, \nis as follows: \\begin{eqnarray}\n&\\chi=&\\frac{\\gamma_{2}\\left(M^{2}+\\frac{1}{\\gamma_{1}}\\right)}{(\\gamma_{2}-1)\\left(M^{2}+\\frac{2}{\\gamma_{1}-1}\\right)}+\\nonumber\\\\\n&&\\quad\\quad\\quad\\quad\\frac{\\sqrt{M^{4}+M^{2}\\frac{2(\\gamma_{1}-\\gamma_{2}^{2})}{\\gamma_{1}(\\gamma_{1}-1)}+\\left(\\frac{\\gamma_{2}}{\\gamma_{1}}\\right)^{2}}}{(\\gamma_{2}-1)\\left(M^{2}+\\frac{2}{\\gamma_{1}-1}\\right)},\n\\end{eqnarray}\nwhere $\\gamma_{1}$ and $\\gamma_{2}$ are the adiabatic indices in the upstream and downstream regions, respectively. If \n$\\gamma_{1}=\\frac{7}{5}$ (diatomic molecules like $\\rm{H_{2}}$) and $\\gamma_{2}=\\frac{5}{3}$ (fully ionized gas), \nit is easily seen that an (upstream) adiabatic Mach number of $M=20$ leads to $\\alpha\\approx0.51$, so that for weaker shocks steeper \nspectra are expected. As the ambient gas is usually pre-ionized by the radiative precursor, a similar conclusion (for slightly \nsmaller adiabatic Mach numbers) can be drawn from the simple relation for a constant adiabatic index of $5\/3$ (described in \nSection 1 using equation 2, see Figure 2). This means that evolutionary old SNRs, with velocities \nsmaller than around $200\\ \\rm{km\\ s^{-1}}$, are expected to have steeper synchrotron spectral indices in the framework of \ntest-particle DSA. This can partially account (setting aside the high parameter uncertainties) for the steeper synchrotron spectral \nindices determined from the thermal plus non-thermal model fits \\citep{b11}.\n\\begin{figure}[h!]\n\\includegraphics[width=84mm]{onic_fig2.eps}\n \\caption{Change of the radio spectral index with Mach number for slow shocks using equations (2) -- full line and (6) -- \ndotted line.}\n\\end{figure}\n\n\nRadio spectral index variations, both with frequency and with location, were detected within several Galactic SNRs \n(F\\\"{u}rst \\& Reich 1988; Anderson \\& Rudnick 1993; Zhang et al.\\@ 1997; Leahy \\& Roger 1998; \nLeahy et al.\\@ 1998; Tian \\& Leahy 2005; Leahy 2006; Ladouceur \\& Pineault 2008). In order to have spectral changes with \nposition, the physical conditions that alter the radio spectrum must change with position. Of course, spatial radio spectral \nindex variations are naturally expected in the case of composite SNRs due to the presence of a pulsar wind nebula. On the \nother hand, variations in $\\alpha$ are detected in both shell and mixed-morphology SNRs. If the spectral index is changing \nwith frequency, generally, there is a possibility that two different radiation mechanisms are producing the radio continuum \nemission. An integrated radio spectrum with a \"concave up\" curvature in the case of older SNRs, where an efficient NDSA process is \nnot expected, can be produced by adding two different emission spectra along the same line of sight or within the same \nemitting volume (Leahy \\& Roger 1998). These kinds of spectra could be due to different electron populations or because \nof significant intrinsic thermal bremsstrahlung radiation \\citep{b11}. 
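\n\nFor convenience, the numerical statements connected with equations (5), (6) and Figures 1 -- 2 can be reproduced with the short Python sketch below, which collects the relevant jump conditions together with the test-particle DSA relation $\\alpha=3\/(2(\\chi-1))$. The function standing in for equation (2) assumes the standard adiabatic jump condition for a constant adiabatic index, which is how that relation is read here.\n\\begin{verbatim}\nimport math\n\ndef alpha_from_chi(chi):\n    # test-particle DSA radio spectral index for compression ratio chi\n    return 3.0 \/ (2.0 * (chi - 1.0))\n\ndef chi_adiabatic(M, gamma=5.0\/3.0):\n    # standard adiabatic jump condition, constant adiabatic index (eq. 2)\n    return (gamma + 1.0) * M**2 \/ ((gamma - 1.0) * M**2 + 2.0)\n\ndef chi_two_gamma(M, g1=7.0\/5.0, g2=5.0\/3.0):\n    # jump in the adiabatic index across the shock (eq. 6)\n    num = g2 * (M**2 + 1.0\/g1) + math.sqrt(\n        M**4 + M**2 * 2.0 * (g1 - g2**2) \/ (g1 * (g1 - 1.0)) + (g2\/g1)**2)\n    return num \/ ((g2 - 1.0) * (M**2 + 2.0\/(g1 - 1.0)))\n\ndef chi_isothermal(M_T):\n    # fully radiative (isothermal) shock (eq. 5)\n    return M_T**2\n\nprint(alpha_from_chi(chi_two_gamma(20.0)))    # ~0.51, as quoted above\nprint(alpha_from_chi(chi_adiabatic(20.0)))    # ~0.50 (cf. Figure 2)\nprint(alpha_from_chi(chi_isothermal(2.0)),    # ~0.50 and ~0.19,\n      alpha_from_chi(chi_isothermal(3.0)))    # bracketing Figure 1\n\\end{verbatim}\n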
Linear polarization measurements are of great \nimportance, as spectral flattening associated with high density regions of an SNR that do not show high linear polarization \ncould be naturally explained by the thermal model.\n\n\nIt is worth mentioning that there are some other processes that could shape the integrated radio continuum at higher \nfrequencies (between 10-100 GHz) in several Galactic SNRs, such as spinning dust emission (Draine \\& Lazarian 1998; \nScaife et al.\\@ 2007). The spinning dust model would be responsible for the creation of a \"hump\" in the high-frequency \nradio spectrum (see Figure 5 from Scaife et al.\\@ 2007). Of course, as was emphasized in Section 1, present knowledge \nof the integrated radio spectra is generally unsatisfactory for a firm quantitative analysis of the proposed models, as well as of \nthe contribution of dust emission. More data at radio frequencies higher than 1 GHz, e.g.\\@ those from ATCA (The Australia \nTelescope Compact Array), are expected to shed light on the existence of the so-called \"radio thermally active\" SNRs as \nwell as on the issue of the flat spectral indices. It must be mentioned that there is a general problem in observing at \nhigher radio continuum frequencies ($>$30 GHz) due to atmospheric absorption issues. Ground-based radio astronomy \nis limited to high altitude sites, as in the case of ALMA (Atacama Large Millimeter Array).\n\n\n\\subsection{The results regarding the thermal model for Galactic SNRs}\n\nApart from the supernova remnants G31.9+0.0 (3C391), G39.2-0.3 (3C396) and G189.1+3.0 (IC443), analyzed in \\citet{b11}, which \nrepresent the best targets for justification of the thermal model, the integrated radio spectra of the \nSNRs G6.4-0.1 (W28), G18.8+0.3 (Kes 67), G34.7-0.4 (W44), G43.3-0.2 (W49B), G89.0+4.7 (HB21) and G94.0+1.0 (3C434.1) were \nalso discussed in this paper. All of these SNRs are associated with a dense environment (see Table 1). Integrated flux densities were \ncollected from the literature, keeping in mind the criteria defined in Section 1. If a spectral \nturn-over is present at low frequencies, only data which are not influenced by the possible thermal absorption (or perhaps by \nsynchrotron self-absorption) were used in the thermal model fits \\citep{b11}. A simple sum of non-thermal and thermal components, \nrepresented by the corresponding power-laws, is assumed (see equation 18 and the corresponding discussion in Oni\\'c et al.\\@ 2012). \nFor frequencies in $\\mathrm{GHz}$, the relation for the integrated flux density can be written as follows: \\begin{equation}\nS_{\\nu}=S_{1\\ \\!\\!\\mathrm{GHz}}^{\\mathrm{NT}}\\ \\left(\\nu^{-\\alpha}+\\frac{S_{1\\ \\!\\!\\mathrm{GHz}}^{\\mathrm{T}}}{S_{1\\ \\!\\!\\mathrm{GHz}}^{\\mathrm{NT}}}\\ \\nu^{-0.1}\\right)\\ [\\mathrm{Jy}],\n\\end{equation} where $S_{1\\ \\!\\!\\mathrm{GHz}}^{\\mathrm{T}}$ and $S_{1\\ \\!\\!\\mathrm{GHz}}^{\\mathrm{NT}}$ are the flux densities \ncorresponding to the thermal and non-thermal components, respectively. Weighted least squares fits \n(weighted by the instrumental errors) were applied. For all six of these SNRs, the thermal model fit \nis either not significant (the contribution of the thermal component at 1 GHz is near zero) or the fit quality \nis not satisfactory (the associated parameter errors exceed the values of the corresponding parameters). 
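\n\nA minimal sketch of the fitting procedure just described is given below; it uses \\texttt{scipy.optimize.curve\\_fit} for the weighted least squares fit of the flux density relation above. The frequencies, flux densities and errors in the example are invented round numbers meant only to show the mechanics of the fit, not data for any of the SNRs discussed here.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\ndef flux_model(nu, S_NT, alpha, S_T):\n    # sum of a non-thermal (synchrotron) power law and a thermal\n    # bremsstrahlung power law; nu in GHz, flux densities in Jy\n    return S_NT * nu**(-alpha) + S_T * nu**(-0.1)\n\n# purely illustrative data points (GHz, Jy, Jy)\nnu  = np.array([0.4, 1.0, 2.7, 5.0, 10.7])\nS   = np.array([253.0, 160.0, 97.0, 72.0, 49.0])\nerr = np.array([25.0, 15.0, 10.0, 8.0, 6.0])\n\npopt, pcov = curve_fit(flux_model, nu, S, p0=[150.0, 0.5, 5.0],\n                       sigma=err, absolute_sigma=True, bounds=(0.0, np.inf))\nperr = np.sqrt(np.diag(pcov))\nprint(popt)   # best-fit S_NT(1 GHz), alpha, S_T(1 GHz)\nprint(perr)   # associated parameter errors\n\\end{verbatim}\nIn practice it is exactly the comparison of \\texttt{perr} with \\texttt{popt} that decides whether the thermal component is considered significant.\n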
Pure non-thermal fit parameters for these SNRs correspond to the ones given in the present literature (see Table 1).\n\n\nIn the case of SNR W28, a low-frequency spectral turn-over in the integrated spectrum is possible, as the flux density at 30.9 MHz \nhas a rather low value in the sense of its scatter from the power-law fit (see Figure 5 in Dubner et al.\\@ 2000). The associated \nerror for that flux density is 20\\% (Kassim 1989). On the other hand, there is a great spread in flux density values, \nespecially at 2.7 GHz, which prevents firm conclusions. The great spread in the integrated flux densities at the same frequencies \nis also clearly seen in the case of SNRs W49B and W44 (Morsi \\& Reich 1987; Kassim 1989; Taylor et al.~1992; Kovalenko et al.~1994; \nLacey et al.~2001; Castelletti et al.\\@ 2007; Sun et al.~2011). For SNR W44 there is no apparent low-frequency \nspectral turn-over in the integrated radio spectrum, only a localized one, towards the southeast border of the SNR, \nlikely due to free-free absorption from ionized gas in the post-shock region at the SNR\/molecular cloud interface \n(Castelletti et al.\\@ 2007). The detected turn-over at lower frequencies for W49B is likely due to extrinsic free-free absorption \nby an intervening cloud of thermal electrons (Lacey et al.~2001). There is no apparent low-frequency spectral turn-over in the \nintegrated spectra of SNRs Kes 67, HB21 and 3C434.1 (Reich et al.~1983; Goss et al.~1984; Landecker et al.~1985; Kassim 1989; \nMilne et al.~1989; Tatematsu et al.~1990; Kovalenko et al.~1994; Dubner et al.~1996; Zhang et al.~2001; Reich et al.~2003; \nFoster 2005; Kothes et al.~2006; Gao et al.~2011; Sun et al.~2011). \n \n\nIn the case of other SNRs that are listed in \\citet{b7} as those with flat spectral indices (not plerions), expanding in a \nhigh density environment, there are not enough data (under the criteria presented in Section 1) for a detailed discussion. \nGenerally, there is a significant number of SNRs listed in \\citet{b7} for which the spectral indices are determined \nby only two or three data points. Some of the SNRs that could be interesting targets for analysis in the future, when more \nreliable data at different frequencies are obtained, are G49.2-0.7 (W51C) and G290.1-0.8 (MSH 11-61A).\n\n\nA few words of caution regarding the justification of the thermal model are in order. An SNR's expansion in a complex environment and its possible \nassociation with \\mbox{H\\,{\\sc ii}} regions (and thus possible thermal contamination) make the study more complicated. The \nbest example is the case of the steep radio spectrum SNR G132.7+1.3 (HB3), for which the intrinsic thermal radiation was \ndismissed due to the actual overlap between the SNR and the corresponding \\mbox{H\\,{\\sc ii}} regions (Shi et al.~2008). \nOf course, the question of the presence of significant thermal radio emission from SNR HB3 still remains open, as \nthe possible \\mbox{H\\,{\\sc ii}} region radiation contamination cannot rule out the possibility of thermal bremsstrahlung \nemission from the SNR itself; it can only mask it.\n\n\nThe final conclusion is that our present knowledge of integrated flux densities prevents a strong \njustification of the thermal model and limits the discussion to just the few Galactic SNRs analyzed in \\citet{b11}. \n\n\\section{Conclusions}\n\nA considerable fraction of Galactic SNRs are characterized by flat spectral indices ($\\alpha<0.5$). 
In this paper, \nknown models are summarized and discussed:\n\n\\begin{enumerate}\n \\item There are several explanations of the observable flat radio spectra of some SNRs. Most of the models involve a \nsignificant contribution of second-order Fermi mechanism, but some of them discuss higher shock compressions, contribution \nof secondary electrons\/positrons left over from the decay of charged pions, as well as the possibility of thermal \ncontamination. \n \\item In the case of expansion in high density environment, thermal bremsstrahlung could theoretically shape the spectrum \nof SNRs. Lack of more high quality data constraints the discussion only to several SNRs. On the other hand, this model can \nnaturally account to observable curved -- \"concave up\" radio spectra of some Galactic SNRs. New observations are expected \nto create a clear picture about the high frequency range (1-100 GHz) of the radio continuum, as well as about the question \nof the significant contribution of intrinsic thermal bremsstrahlung radiation. With instruments like the \nJVLA\\footnote{The Karl G.\\@ Jansky Very Large Array of the National Radio Astronomy Observatory is a facility of the National \nScience Foundation operated under cooperative agreement by Associated Universities, Inc.} and ATCA, it should be far easier \nto measure the spectral indices in individual observations. \n \\item The fundamental question about the origin of flat spectral indices still remains open. The more realistic picture would involve \nthe action of more than one of the mentioned processes, in which some of them could be more or less prominent depending on \nthe particular setup.\n \\item Since there is a significant connection between the majority of Galactic SNRs with flat integrated radio spectra \nand their detection in $\\gamma$-rays, as well as detection of RRC in their X-ray spectra, the analysis of high energy \nproperties of these SNRs will be potentially very important.\n\\end{enumerate}\n\n\\acknowledgments\n\nThis work is part of Project No.\\@ 176005 \"Emission nebulae: structure and evolution\" supported by the Ministry of Education, \nScience, and Technological Development of the Republic of Serbia.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\subsection{Analysis of the algorithm}\n\n\n\n\n\n\\begin{lemma} \\label{lem:atLeastLOCAL}\n All nodes in $\\textsf{Good}$ decide on a value of at least $\\left\\lfloor\\tfrac{\\gamma}{2}\\log_\\Delta n\\right\\rfloor$.\n\\end{lemma}\n\\begin{proof}\nWe will proceed by induction over the number of rounds. Consider the graph $H$ given by Lemma~\\ref{lem:goodNodes} and recall that $u \\in V(H)$ by definition. Since $H$ has a vertex expansion $\\ge \\alpha'$, it follows that $u$'s neighborhood (in $H$) has size at least $1 + \\alpha'$, which guarantees that $u$ passes the expansion-check in Line \\ref{line:expansionCheck} in Round $1$. Moreover, $u$ has a distance of at least $\\lfloor\\tfrac{\\gamma}{2}\\log_\\Delta n\\rfloor$ from any Byzantine node, and hence it does not receive any inconsistent information during the first $\\lfloor\\tfrac{\\gamma}{2}\\log_\\Delta n\\rfloor$ rounds. This ensures $u$ will not decide in Line~\\ref{line:inconsistent} Round $1$, which completes the inductive base.\n\n\nNow, consider the inductive step $1 < i < \\lfloor \\tfrac{\\gamma}{2}\\log_\\Delta n\\rfloor$, and suppose that $u$ has not decided at the end of round $i-1$. 
Similarly as in the case $i = 1$, it holds that $u$ does not decide due to receiving inconsistent information. Moreover, note that\n\\begin{equation*}\n |\\mathcal{B}(u,i)| \\le \\Delta^{i+1} \\le n^{\\gamma\/2} < \\tfrac{|H|}{2}\\text{,}\n\\end{equation*}\nsince $|H|\\ge n - o(n)$ by Lemma~\\ref{lem:goodNodes}. Hence every subset of $\\hat{B}(u,i)$ is guaranteed to have a vertex expansion of at least $\\alpha'$, which ensures that $u$ continues to round $i+1$ without deciding.\n\\end{proof}\n\n\n\nThe next lemma tells us that, if a good node $u$ that has not yet decided, then its local $i$-neighborhood approximation $\\hat{B}(u,i)$ does not contain inconsistent information concerning the nodes in $\\textsf{Good}$. We will make use of this property in Lemma~\\ref{lem:atMostLOCAL} below.\n\n\\begin{lemma} \\label{lem:inconsistent}\n Suppose that $u \\in \\textsf{Good}$ has not decided by the end of round $i$, and consider graph $H$ given by Lemma \\ref{lem:goodNodes}. Then, for each $v \\in \\mathcal{B}_H(u,i)$ and any node $w$, it holds that $e=\\{v,w\\} \\in \\hat{B}(u,i)$ if and only if $e \\in E(G)$.\n\\end{lemma}\n\\begin{proof}\n By the definition of $H$, it follows that, for each $v \\in \\mathcal{B}_H(u,i)$, there exists a shortest path $p = (v = p_1, p_2, \\ldots, p_j = u)$ consisting of $j \\le i$ good nodes connecting $u$ and $v$. Moreover, it is easy to see that node $p_k$ ($1\\le k < j$) on path $p$ must have broadcast its topology information in round $i-j+k$, since otherwise its neighbor $p_{k+1}$ would have terminated at the end of round $i-j+k$, because of having noticed that $p_k$ was mute. This in turn would have cause $p_{k+1}$ to terminate at the end of round $i-j+k+1$ and hence would have propagated to $u$ by round $i$, contradicting the premise of the lemma.\n\n\n Since each of the good nodes in $p$ forwards the received topology information towards $u$, it follows that node $u$ receives a message from some good neighbor, which contains the exact set $E_v$ of edges incident to $v$ in $G$. Suppose that, in some round during round $i$, node $u$ also receives a message containing a set of edges $E_v' \\ne E_v$ of $v$, possibly injected by Byzantine nodes. However, Line~\\ref{line:inconsistent} ensures that $u$ decides instantly in round $i$, since it has added inconsistent information to $\\hat{B}(u,i)$. This results in a contradiction.\n\\end{proof}\n\n\n\n\\begin{lemma} \\label{lem:atMostLOCAL}\n Every node in $\\textsf{Good}$ decides on a value of at most $\\text{diam}(G) + 1$.\n\\end{lemma}\n\\begin{proof}\n Assume toward a contradiction that there is a node $u \\in \\textsf{Good}$ that decides on a value strictly greater than $\\text{diam}(G) + 1$. By the description, of the algorithm, this means that $u$ did not decide when executing round $i$, where $i = \\text{diam}(G) + 1$. Consider the content of $\\hat{B}(u,i)$ after receiving all messages for round $i$. 
Note that it is possible that $\\hat{B}(u,i)$ also contains information that was injected by Byzantine nodes.\n \n \n Let $F$ denote the \\emph{Byzantine part} of $\\hat{B}(u,i)$, i.e.,\n \\begin{center}\n $F \\stackrel{\\mathsmaller{\\mathsf{def}}}{=} \\hat{B}(u,i) \\setminus \\textsf{Good}$\n \\end{center}\n and call\n \\begin{center}\n $R \\stackrel{\\mathsmaller{\\mathsf{def}}}{=} \\hat{B}(u,i) \\cap \\textsf{Good}$\n \\end{center}\n its \\emph{honest part}.\n\n\n We can assume that Byzantine nodes do not send any inconsistent information regarding the graph induced by $R$, as otherwise $u$ will decide in round $i$ and we are done. Similarly, we can rule out that any node in $R$ has already decided: For if some $w$ decided and remained mute, this would cause its good neighbors to decide in the next round (cf.\\ Line \\ref{line:inconsistent}), which in turn would propagate (through good nodes) to $u$, causing it to decide. Consequently, Lemma~\\ref{lem:inconsistent} tells us that all edges emanating from nodes in $R \\setminus \\textsf{Byz}$ in graph $\\hat{B}(u,i)$ also exist in $G$. In particular, there are no edges between $R \\setminus \\textsf{Byz}$ and $F$. Since every node in $G$ has distance at most $\\text{diam}(G) = i - 1$ to $u$, it follows that $R \\subseteq \\hat{B}(u,i-1)$ and thus $u$ will check $R$'s vertex expansion with respect to graph $\\hat{B}(u,i)$ at the end of round $i+1$.\n\n\n To complete the proof, we analyze the expansion-check in Line \\ref{line:expansionCheck} for the set $R$. Observe that $R$ contains all nodes within distance $\\text{diam}(G)$ from $u$ in graph $H$ (see Lemma~\\ref{lem:goodNodes}). Given that $\\text{diam}(H) \\le \\text{diam}(G)$ and the fact that nodes in $\\textsf{Good}$ are connected in $H$, we know that\n \\begin{center}\n $|R| \\ge |\\textsf{Good}| = n - o(n)$.\n \\end{center}\n\n\n Recall that there are at most $n^{1-\\gamma}$ Byzantine nodes in $R \\cap \\hat{B}(u,i)$. Since we assumed that Byzantine nodes did not send inconsistent information, each Byzantine node has at most $\\Delta$ neighbors in $F$ (cf.\\ Line \\ref{line:degreeCheck}). It follows that there is a set $S'$ of at most $\\Delta n^{1-\\gamma} = o(n)$ fake vertices in the set $(\\hat{B}(u,i) \\setminus R) \\subseteq F$ that have an edge to $R$. To satisfy the expansion-check (see Line~\\ref{line:expansionCheck}), the number of neighbors of vertices in $R$ would need to be $|R|(1+\\alpha') = \\Omega(n)$, far exceeding the $o(n)$ fake vertices in $S'$. Hence the expansion-check fails for set $R$, causing $u$ to decide on $i = \\text{diam}(G) + 1$.\n\\end{proof}\n\n\n\nCombining Lemmas \\ref{lem:atLeastLOCAL} and \\ref{lem:atMostLOCAL} shows the claimed bound on the approximation achieved by the $n - o(n)$ nodes in set $\\textsf{Good}$. From Lemma \\ref{lem:atMostLOCAL}, it follows that the round complexity until all but $o(n)$ nodes have decided is $O(\\text{diam}(G)) = O(\\log{n})$. This completes the proof of Theorem \\ref{thm:local}.\n\n\\subsubsection{Analysis of the early phases of the algorithm: when $i < \\rho$}\n\n\n\n\n\nWe first show that, during the early phases of the algorithm, nodes in $\\textsf{GoodTL}$ do not add corrupted information to their $\\textsf{shortestPath}$ variable (see Lemma \\ref{lem:shortestPath}).\n\\begin{lemma} \\label{lem:shortestPath}\n Consider any phase $i < \\rho$, some iteration $j$, and some $u \\in \\textsf{GoodTL}$. 
At the end of iteration $j$, it holds that either $\\textsf{shortestPath} = \\texttt{none}$ or $\\textsf{shortestPath}$ corresponds to a shortest path in $G$ starting at some node $v$ that generated a beacon message and ending at $u$.\n\\end{lemma}\n\\begin{proof}\n Since $u \\in \\textsf{GoodTL}$ and $i < \\rho$, all Byzantine nodes are at a distance of at least $i+2$ from $u$, and hence no information injected by a Byzantine node can reach $u$ until it stops waiting for beacon messages in iteration $j$. It follows that any information that was added to $\\textsf{shortestPath}$ corresponds to a path in $G$.\n\\end{proof}\n\n\n\nIn Lemma~\\ref{lem:borderBlacklisted} we show an upper bound on the number of nodes that are located at the closest possible distance to $u \\in \\textsf{GoodTL}$ such that $u$ will blacklist them if they generate beacon messages. We note that some or even all of these nodes may be good nodes, but that does not cause any conflict with our argument here. We will use this lemma together with the tree-like property to argue that the remaining non-blacklisted nodes (and their expanded neighbors) provide a sufficiently large set (see Lemma~\\ref{lem:available}) for making it likely that some node generates a beacon message (see Lemma~\\ref{lem:expectedNotStoppedi} and Lemma~\\ref{lem:whpNotStoppedNodes}).\n\n\n\\begin{lemma} \\label{lem:borderBlacklisted}\n Consider any phase $i < \\rho$ and some good node $u \\in \\textsf{GoodTL}$ that has not yet decided at the start of phase $i$. For each iteration $j$, node $u$ blacklists at most one node in its $\\lceil (1-\\epsilon)i \\rceil$-boundary $\\mathcal{D}(u, \\lceil(1-\\epsilon)i\\rceil)$ (and none of the nodes that are at a lesser distance).\n\\end{lemma}\n\\begin{proof}\n Assume towards a contradiction that, in some iteration $j$, node $u$ adds at least two nodes $w_1, w_2 \\in \\mathcal{D}(u,\\lceil(1-\\epsilon)i\\rceil)$ to its blacklist. By the code of the algorithm, it follows that the ids of $w_1$ and $w_2$ must both be in $\\textsf{shortestPath}$ at the end of iteration $j$. Without loss of generality, suppose that $\\textsf{shortestPath} = (v,\\dots,w_1,\\dots,w_2,\\dots,u)$, i.e., $v$ is the origin of the beacon message that caused the update to $\\textsf{shortestPath}$ in iteration $j$. Since $w_1, w_2 \\in \\mathcal{D}(u,\\lceil(1-\\epsilon)i\\rceil)$, it follows that there exists a path $P_1=(w_1,\\dots,u)$ of length $\\lceil (1-\\epsilon)i \\rceil$ between $w_1$ and $u$ that does not contain $w_2$.\n\n\n However, this means that $u$ must have received a beacon message containing a path field that contains the concatenation of the paths $Q' = (v,\\dots,w_1)$ and $P_1$, where $|Q'| < |\\textsf{shortestPath}|$. This contradicts the assumption that both $w_1$ and $w_2$ are in $\\textsf{shortestPath}$, and completes the proof.\n\\end{proof}\n\n\n\n\\begin{lemma} \\label{lem:available}\n Let $BL_u^*$ denote the set of nodes added to $u$'s blacklist during phase $i < \\rho$ and let $A_u^*$ be the set of nodes in $\\mathcal{B}(u,i+2) \\setminus BL_u^*$ having a shortest path to $u$ that does not traverse nodes in $BL_u^*$. Then, it holds that $|A_u^*| \\ge {d^i}$.\n\\end{lemma}\n\\begin{proof}\n Lemma \\ref{lem:borderBlacklisted} tells us that during phase $i$, the number of nodes in $\\mathcal{D}(u,\\lceil(1-\\epsilon)i\\rceil)$ that are added to $BL_u$ is at most\n \\begin{align}\n e^{(1-\\gamma)i} + 1\n &\\le 2e^{(1-\\gamma)i} \\notag\\\\\n &\\le e^{(1-\\gamma)i + \\log 2}. 
\\label{eq:boundaryblacklisted}\n \\end{align}\n\n\n On the other hand,\n \\begin{align}\n |\\mathcal{D}(u,\\lceil (1-\\epsilon)i \\rceil)|\n &\\ge d^{(1-\\epsilon)i } \\notag\\\\\n &= e^{(1-\\epsilon)i\\log d} \\notag \\\\\n &= e^{(1-\\delta)\\gamma i}. \\tag{by \\eqref{eq:eps}}\n \\end{align}\n\n\n This implies that\n \\begin{align}\n \\frac{|\\mathcal{D}(u,\\lceil (1 - \\epsilon)i \\rceil)|}{2}\n &\\ge e^{(1-\\delta)\\gamma i - \\log 2}, \\label{eq:boundary}\n \\end{align}\n\n\n To show that at most half of the nodes in the $\\lceil (1-\\epsilon) i \\rceil$-boundary of $u$ are blacklisted, it suffices if the right-hand side of \\eqref{eq:boundaryblacklisted} is upper bounded by \\eqref{eq:boundary}. This is true if\n \\begin{center}\n $(1-\\gamma)i + \\log 2 \\le (1-\\delta)\\gamma i - \\log 2$,\n \\end{center}\n which holds if\n \\begin{equation} \\label{eq:gammaDominates}\n \\gamma \\ge \\frac{1}{2-\\delta} + \\frac{2\\log 2}{(2-\\delta)i}\\text{.}\n \\end{equation}\n\n\n By the code of the algorithm, we know that $i \\ge \\tfrac{2\\log 2}{(2-\\delta)\\eta}$ (see Line~\\ref{line:startPhase}) and hence \\eqref{eq:gammaDominates} is guaranteed by the assumed bound on $\\gamma$ stated in \\eqref{eq:gamma}.\n\n\n So far, we have shown that there is a set $S'$ of at least half of the nodes in the $\\lceil (1 - \\epsilon)i \\rceil$-boundary of $u$ that are \\emph{not} in the phase $i$ blacklist $BL_u^*$ of $u$, i.e.,\n \\begin{equation} \\label{eq:notblacklisted}\n |S'| \\ge \\frac{\\mathcal{D}(u,\\lceil(1-\\epsilon)i\\rceil)}{2} \\ge d^{(1-\\epsilon)i -\\log 2 }.\n \\end{equation}\n\n\n Since $i < \\rho$ and $u \\in \\textsf{GoodTL}$, it follows that each node in $S'$ is the root of a $d$-ary subtree of depth at least $\\lfloor \\epsilon i \\rfloor + 2$. By the tree-like property of $u$, we know that the sets of nodes in these trees are pairwise disjoint. Let $T$ be the set of nodes that are in these trees. By the above,\n \\begin{align*}\n |A_u^*| \\ge |T| &\\ge \\frac{\\mathcal{D}(u,\\lceil (1-\\epsilon)i\\rceil)}{2} \\cdot d^{\\lfloor \\epsilon i\\rfloor + 2} \\\\\n &\\ge d^{\\lceil (1-\\epsilon)i \\rceil - \\log_d(2) + \\lfloor \\epsilon i\\rfloor + 2} \\tag{by \\eqref{eq:boundary}} \\\\\n \\intertext{and, assuming $\\log_d(2) \\le 1$, we get}\n |A_u^*| &\\ge d^{\\lceil (1-\\epsilon)i \\rceil + \\lfloor \\epsilon i\\rfloor + 1} \\\\\n &\\ge d^i.\n \\end{align*}\n\n\n In the remainder of the proof, we argue that none of the nodes in $T$ is blacklisted by $u$.\n\n\n Consider any node $w \\in T$. The only way that $w$ can be added to $BL_u^*$ is that $w \\in \\textsf{shortestPath}$ during some iteration $j$. However, by the tree-like property, we know that any shortest path from $w$ to $u$ must pass through some node $w' \\in S'$ and hence, by Lemma \\ref{lem:borderBlacklisted}, $\\textsf{shortestPath}$ must contain $w'$. This contradicts the assumption that the nodes in $S'$ are never blacklisted, and thus it follows that none of the nodes in $T$ end up in $u$'s phase $i$ blacklist $BL_u^*$.\n\\end{proof}\n\\subsection{Analysis of Algorithm~2} \\label{sec:congestAnalysis}\n\n\n\n\n\nFor the analysis, we assume at most $n^{1 - \\gamma}$ Byzantine nodes, where $\\gamma$ needs to satisfy\n\\begin{equation} \\label{eq:gamma}\n \\gamma \\geq \\frac{1}{2 - \\delta} + \\eta,\n\\end{equation}\nfor any fixed constants $0 < \\delta \\leq \\frac{1}{2}$ and $\\eta > 0$. Note that the smaller $\\delta$ is, the smaller $\\gamma$ is. 
Therefore maximum Byzantine tolerance is achieved when $\\delta$ is very close to (but slightly greater than) zero and $\\gamma$ is very close to (but slightly greater than) $\\frac{1}{2}$. In that case, the maximum Byzantine tolerance, i.e., the maximum number of Byzantine nodes that our algorithm can tolerate, boils down to $n^{\\frac{1}{2} - \\xi}$, as stated in Theorem~\\ref{thm:congest}.\n\n\nThe parameter $\\epsilon$ that we use to determine the distance outside of which the blacklisting becomes effective in our algorithm, is fixed as\n\\begin{equation} \\label{eq:eps}\n \\epsilon = 1 - \\frac{(1-\\delta)}{\\log d}\\gamma\\text{.}\n\\end{equation}\n\n\n\nLet $\\textsf{GoodTL} = \\textsf{Good} \\cap \\textsf{TreeLike}$ be the set of nodes that have a sufficiently large distance to all Byzantine nodes due to being in set $\\textsf{Good}$, and that also have the property of $d$-ary trees up to some radius of length $\\frac{\\log_d{n}}{10}$ (see Section \\ref{sec:treelike}).\n\n\n\nWe will first study the progress of the algorithm at nodes in $\\textsf{GoodTL}$, for the phases up to radius $\\rho$, where\n\\begin{equation} \\label{eq:rho}\n \\rho = \\left\\lfloor \\min\\left({(1-\\delta)\\gamma}\\log_d{n}, \\frac{1}{10}\\log_d{n}\\right)\\right\\rfloor - 2,\n\\end{equation}\nsince, in phase $i$, we require the tree-like property to hold up until radius $(i + 2)$. We also recall that $c_1$ is any large constant, as defined in Line~\\ref{pseudocode-line-activation-probability-and-also-the-definition-of-c1} of the pseudocode of Algorithm \\ref{alg:congest}.\n\\subsubsection{Analysis of the later phases of the algorithm: when $i = \\lceil \\log{n} \\rceil$}\n\n\n\nOnce a node $u \\in \\textsf{GoodTL}$ proceeds beyond phase $\\rho$, it has obtained a sufficiently good estimate of $\\log{n}$, and hence our goal is to show that it is likely to decide. For this part of the analysis, we need to deal with the possibility that conflicting information originating at Byzantine nodes reaches $u$ during an iteration. However, in the following analysis, we show that $u$ is unlikely to increase its phase counter above $\\lceil \\log n\\rceil$.\n\n\n\\begin{lemma} \\label{lem:upperEnd}\n Consider phase $i = \\lceil \\log{n} \\rceil$. The following hold with probability at least $1 - O(\\frac{1}{n})$:\n \\begin{enumerate}\n \\item No good node becomes active.\n \\item Every node in $\\textsf{GoodTL}$ decides at the end of phase $i = \\lceil \\log n \\rceil$.\n \\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nLet $Active(u,j)$ be the event that a good node $u$ becomes active in iteration $j$ of phase $i$. For any $u \\in \\textsf{GoodTL}$ and any iteration $j$, it holds that\n\\begin{equation*}\n \\Pr[ Active(u,j) ] = \\frac{c_1\\cdot i}{d^i} \\le \\frac{c_1\\log n}{n^{\\log d}}\\text{.}\n\\end{equation*}\nBy taking a union bound over all good nodes and over all $e^{(1-\\gamma)i}+1$ iterations, we get\n\\begin{align}\n \\Pr[ \\exists u\\colon Active(u,j) ]\n &\\le \\frac{c_1\\log n}{n^{\\log(d) - 1}}; \\notag\\\\\n \\Pr[\\exists j\\ \\exists u\\colon Active(u,j)]\n &\\le \\left( e^{\\log(n)(1-\\gamma)}+1 \\right) \\frac{c_1\\log n}{n^{\\log(d) - 1}} \\notag\\\\\n &\\le 2n^{1-\\gamma} \\frac{c_1\\log n}{n^{\\log(d) - 1}} \\notag\\\\\n &\\le \\frac{2c_1\\log n}{n^{\\log d - 2}}. 
\\notag\n\\end{align}\nThis completes the proof of Part (1), assuming $\\log{d} \\geq 4$.\\\\\n \n \n \nTo show Part (2), we \\emph{condition} on Part (1) being true, and assume towards a contradiction that there is a node $u \\in \\textsf{GoodTL}$ that has not yet decided and does not decide in phase $i$. We will argue that $u$ blacklists at least one Byzantine node in each iteration $j$ of phase $i$.\n\n\nBy the code of the algorithm, $u$ has set $\\textsf{shortestPath} \\gets (v_1, v_2, \\ldots, v_k)$, for some $k \\leq i + 1$, which is the path information of the first beacon message that it received in iteration $j$.\n\n\nNote that we cannot be sure that $v_1$ is the id of a Byzantine node, as it could have happened that some other Byzantine node $v_\\ell$ ($\\ell \\in [2,k]$) tampered with the prefix $(v_1,\\dots,v_{\\ell-1})$ before that message reached $u$. However, by Lemma \\ref{lem:goodNodes}, we know that any Byzantine node is at least $\\lfloor {(1-\\delta)\\gamma}\\log_d n\\rfloor$ hops away from $u$. In particular, this guarantees that the path suffix $P'$, which consists of the last $\\lfloor {(1-\\delta)\\gamma}\\log_d n\\rfloor$ nodes in the path $P = (v_1, \\ldots, v_k)$, contains only ids of good nodes. Hence at least one Byzantine node's id must be in the path prefix $Q$ (where we obtain $Q$ by removing $P'$ from $P$), as we have assumed that only Byzantine nodes generate beacon messages at this point.\n\n\nWe will now argue that all nodes in $Q$ are blacklisted by $u$. By the description of the algorithm, $u$ blacklists only nodes that have a distance of at least $\\lfloor (1-\\epsilon) i \\rfloor$ from $u$. We observe that\n\\begin{align*}\n \\lfloor (1-\\epsilon)i \\rfloor &\\leq (1-\\epsilon )i = {(1-\\delta)\\gamma}\\log_d n. \\tag{by \\eqref{eq:eps}}\n\\end{align*}\n\n\nIt follows that the entire prefix $Q$ will be blacklisted. Thus, we have shown that $u$ does not accept a beacon message that visits any of the nodes in $Q$ in a future iteration of this phase.\n\n\nBy the above reasoning, we know that $u$ blacklists at least one Byzantine node in each iteration. Recall that $u$ executes $e^{(1-\\gamma)i}+1 \\ge n^{1-\\gamma}+1$ iterations in phase $i$. Given that there are only $n^{1-\\gamma}$ Byzantine nodes in the network, it follows that there exists an iteration in which $u$ does not set its variable $\\textsf{shortestPath}$ to a value different from \\texttt{none}, since all Byzantine nodes are already blacklisted at that point. We conclude that $u$ decides in Line \\ref{line:decide}, yielding a contradiction.\n\\end{proof}\n\n\n\n\\begin{lemma} \\label{lem:time}\n At least $(1 - \\beta)n$ nodes decide within $O(n^{1-\\gamma}\\log^2 n)$ rounds of the algorithm.\n\\end{lemma}\n\\begin{proof}\n Lemma~\\ref{lem:upperEnd}(b) tells us that every node in $\\textsf{GoodTL}$ decides by phase $i=\\lceil \\log n \\rceil$ with high probability. By the description of the algorithm, each phase $i$ consists of $2i+3$ rounds and hence the total number of rounds executed until that point is $O(\\log^2{n})$.\n\\end{proof}\n\n\n\nWe now combine the previous lemmas to complete the proof of Theorem~\\ref{thm:congest}.\n\\begin{proof}[Proof of Theorem \\ref{thm:congest}]\n We focus on nodes in $\\textsf{GoodTL}$. From Lemma~\\ref{lem:whpNotStoppedNodes} we know that $\\Omega(n)$ nodes in $\\textsf{GoodTL}$ will proceed to at least phase $\\rho = \\Omega(\\log{n})$ before deciding and thus we can set the parameter $\\beta$ of the theorem statement accordingly. 
On the other hand, Lemma \\ref{lem:upperEnd} guarantees that all of these nodes decide by the end of phase $\\lceil \\log n \\rceil$ with high probability.\n\n The claim on the running time follows immediately from Lemma \\ref{lem:time}.\n\\end{proof}\n\n\n\n\\begin{remark} \\label{remark-approximation-factor-for-congest-algorithm-is-not-universal}\n The approximation factor mentioned in Theorem \\ref{thm:congest} is not universal. It may be different for different nodes, but in all cases it is bounded by the quantity $\\lceil\\frac{\\log{n}}{\\rho}\\rceil$, where $\\rho$ is as defined in Equation \\eqref{eq:rho}. Also, while the estimates may vary by a constant factor, it holds with high probability that all the nodes in $\\textsf{GoodTL}$ have estimates that are upper-bounded by $\\lceil\\log{n}\\rceil$, i.e., the estimates of $\\log{n}$ are upper-bounded by an additive constant term (which is $1$ basically).\n\\end{remark}\n\n\n\n\\begin{remark}\n We point out that a node in $\\textsf{GoodTL}$ may reenter the for-loop and participate in sending out beacons even after it has already decided (see Line~\\ref{line:reenter}). However, in Corollary~\\ref{lem:benign} below, we show that in the benign case where there are no Byzantine nodes in the network, the algorithm computes $\\log(n)$ exactly and all nodes terminate.\n\\end{remark}\n\n\n\n\\begin{corollary} \\label{lem:benign}\n Suppose that all nodes are good. Then the algorithm terminates in $O(\\log(n))$ rounds, and whp, $\\Omega(n)$ nodes decide on $\\lceil\\log(n)\\rceil$ and stop sending messages.\n\\end{corollary}\n\\begin{proof}\n Lemmas \\ref{lem:whpNotStoppedNodes} and \\ref{lem:upperEnd} tell us that $\\Omega(n)$ nodes will proceed to phase $i = \\lceil \\log(n) \\rceil$. Moreover, none of the good nodes will generate a beacon message with high probability at that point. Thus no node will send a continue message and all nodes will exit the for-loop.\n\\end{proof}\n\\section{Expander Subgraph Lemma}\n\n\n\n\n\nFor completeness, we restate the following lemma; the original proof is in \\cite{Augustine_2015_FOCS}. \n\n\\begin{lemma}[cf.\\ Lemma 3 in \\cite{Augustine_2015_FOCS}] \\label{lem:culling}\n Let $G$ be a $n$-node graph with expansion $\\phi$ and constant node degree $d$ and suppose that all nodes in a set $F$ are removed from $G$, where $|F| = o(n)$. Then, for any positive constant $c < 1$, there exists a subgraph $H$ of $G$ such that\n \\begin{enumerate}\n \\item $H$ has expansion $c \\phi$, and\n \\item $|H| \\geq n - |F|\\left(1 + \\frac{1}{\\phi(1 - c)}\\right)$. \n \\end{enumerate}\n\\end{lemma}\n\n\n\\begin{proof}\nWe adapt the proof of Theorem 2.1 in \\cite{Bagchi_2006}. Let $G_F$ be the graph yielded by removing the set $F$ from $G$. Perform the following iterative procedure (cf.\\ Algorithm {\\sf Prune} in \\cite{Bagchi_2006}):\n\\begin{enumerate}\n \\item Initialize $G_0 = G_F$. \n \\item In iteration $i\\ge 0$, let $S_i$ be any set of up to $|V(G_i)|\/2$ nodes with expansion smaller than $c\\phi$.\n \\item If $S_i$ exists, prune $S_i$ from $G_i$, i.e., $G_{i+1} = G_i \\setminus S_i$.\n \\item Let $H$ be graph that we get after the final iteration; note that $H$ has expansion $\\ge c\\phi$.\n\\end{enumerate}\n\n\nWe now lower bound the size of $H$. Suppose that the pruning procedure stops after $m$ iterations. 
For the sake of a contradiction, suppose that $$\\frac{|F|}{\\phi(1-c)} < \\left|\\bigcup_{i=0}^m S_i\\right|.$$ Define the set $S = \\bigcup_{i=0}^{\\ell}S_i$ where $\\ell$ is the smallest index such that exactly one of the following two cases holds:\n\n\nFirst, assume that $|S| \\le n\/2$. Let $N_{G_i}(S_j)$ denote the neighbors in $G_i \\setminus S_j$ of nodes in set $S_j$. We make use of the following result whose proof follows analogously to Lemma 2.6 of \\cite{Bagchi_2006}:\n\n\\begin{lemma}[cf.\\ Lemma 2.6 in \\cite{Bagchi_2006}] \\label{lem:nbound}\n Suppose that we execute procedure $\\sf Prune(c)$. For all $j$ with $0 \\leq j < m$, it holds that $| N_{G_F}\\left(\\bigcup_{i = 0}^j S_i \\right) | \\le c \\phi | \\bigcup_{i=0}^j S_i |.$ \n\\end{lemma}\n\n\nSince $G$ has expansion of $\\phi$, it holds that $| N_G(S) | \\ge \\phi |S|$. On the other hand, Lemma \\ref{lem:nbound} implies that $| N_{G_F}(S)| \\le c\\phi |S|$. Thus we get $|F| \\ge \\phi |S| - c\\phi |S| = \\phi(1 - c)|S|$ and hence $|S| \\leq \\frac{|F|}{\\phi(1 - c)}$, yielding a contradiction.\n\n\nNow, consider the case where $|S| > n\/2$. Then, it follows that\n\\begin{equation} \\label{eq:ssize}\n \\left|\\bigcup_{i = 0}^{\\ell - 1}S_i\\right| \\leq \\frac{|F|}{\\phi(1 - c)},\n\\end{equation}\nbut $|S_\\ell| > n\/2 - \\frac{|F|}{\\phi(1-c)}$. (Note that if $S_\\ell \\leq \\frac{n}{2} - \\frac{|F|}{\\phi(1 - c)}$, then $|S| \\leq \\frac{n}{2}$ and the first case applies.) Recalling that $F = o(n)$, it follows that $S_\\ell \\in \\Theta(n)$. Since $S_\\ell$ was removed when executing \\textsf{Prune($c$)}, we know that $|N_{G_{\\ell}}(S_\\ell)| \\le c\\phi |S_\\ell|$ and, by the expansion of $G$, we have $|N_G(S_\\ell)| \\ge \\phi |S_\\ell|$ as $S_\\ell \\le n\/2$. Thus $$|N_G(S_\\ell)| - |N_{G_{\\ell}}(S_\\ell)| \\ge \\phi(1 - c)|S_\\ell| = \\Theta(n)$$ We observe that the size of the neighborhood of $S_\\ell$ must have been reduced by $\\Theta(n)$ either due to the removal of nodes in $F$ or because of the pruning of the sets $S_0,\\dots,S_{\\ell-1}$. This, however, yields a contradiction, since\n\\begin{equation*}\n |F| + \\bigcup_{i=0}^{\\ell-1}S_i = O(|F|) = o(n)\\text{.}\n\\end{equation*}\nThus we have shown that\n\\begin{equation*}\n \\left|\\bigcup_{i = 0}^m S_i\\right| \\leq \\frac{|F|}{\\phi(1 - c)}\\text{,}\n\\end{equation*}\nand hence\n\\begin{equation*}\n |H| \\geq |G| - |F|(1 + \\frac{1}{\\phi(1 - c)})\\text{,}\n\\end{equation*}\nwhich completes the proof.\n\\end{proof}\n\\section{Computing Model and Problem Definition} \\label{sec:model}\n\n\n\n\n\n\\paragraph{The distributed computing model.} We consider a synchronous network represented by a graph $G$ whose nodes execute a distributed algorithm and whose edges represent connectivity in the network. The computation proceeds in synchronous rounds, i.e., we assume that nodes run at the same processing speed (and have access to a synchronized clock) and any message that is sent by some node $u$ to its neighbors in some round $r\\ge 1$ will be received by the end of round $r$. 
We consider the \\textsf{Local} model, where there is no restriction on the size of the messages that can be transmitted per edge per round, \\cite{Pandurangan_2019_Book, Peleg_2000_Book}; but we point out that our second algorithm ensures that most good nodes send only small-sized messages.\n\n\nAs is usual \\cite{Pandurangan_2019_Book, Peleg_2000_Book}, we assume local computation (within a node) is free and instantaneous.\n\n\n\n\\paragraph{Byzantine nodes.} Among the $n$ nodes ($n$ or its estimate is not known to the nodes initially), up to $B(n)$ can be \\emph{Byzantine}. The Byzantine nodes have unbounded computational power and can deviate arbitrarily from the protocol. This setting is commonly referred to as the \\emph{full information model}.\n\n\nWe say that a node $u$ is \\emph{good} or \\emph{honest} if $u$ is not a Byzantine node. Byzantine nodes are \\emph{adaptive} --- they have complete knowledge of the entire states of all nodes at the beginning of every round (including random choices made by all the nodes), and thus can take the current state of the computation into account when determining their next action. The Byzantine nodes also know the future random choices of the honest nodes, i.e., the Byzantine nodes are \\emph{omniscient}. We assume that the Byzantine nodes are {\\em arbitrarily} distributed in the network and that when a Byzantine node sends a message over an edge, it cannot fake its id. We note that both of these assumptions are quite typical in the literature \\cite{Dwork_1988, Upfal_1994, King_2006_FOCS, Augustine_2012, Augustine_2013_PODC, Augustine_2015_DISC}.\n\n\n\n\n\n\\paragraph{Distinct IDs.} We assume that all nodes (including the Byzantine nodes) have {\\em distinct IDs}, chosen from an arbitrarily large set whose size is unknown a priori. In other words, the node IDs can be viewed as comparable black boxes that do not leak any information about the network size. We point out that this precludes most nodes from estimating $\\log{n}$ by looking at the length of their IDs.\n\n\n\n\n\n\\paragraph{Network topology for the first (deterministic) algorithm.} Let $G = (V, E)$ be the graph representing the network. We assume $G$ to be a bounded-degree expander network. For the sake of a self-contained exposition, we recall the definition of \\emph{vertex expansion} below.\n\n\\begin{definition}[Vertex expansion of a graph]\n The \\emph{vertex expansion} of a graph $G = (V, E)$ on $n$ nodes is defined as\n \\begin{equation*}\n h(G) = \\min_{0 < |S| \\leq \\frac{n}{2}} \\frac{|Out(S)|}{|S|}\\text{,}\n \\end{equation*}\n where $S$ is any subset of $V$ of size at most $\\frac{n}{2}$ and $Out(S)$ is the set of neighbors of $S$ in $V \\setminus S$.\n\\end{definition}\n\nWe assume that the network graph $G$ has a constant \\emph{vertex expansion} $\\alpha > 0$, where $\\alpha$ is a fixed positive constant.\n\n\n\n\n\n\\paragraph{Network topology for the second (randomized) algorithm.} Here we assume $G$ to be a random $d$-regular graph model ($d$ is a constant) that is constructed by the union of $\\frac{d}{2}$ (assume $d \\geq 8$ is an even constant) random Hamiltonian cycles of $n$ nodes. We call this random graph model the \\emph{$H(n, d)$ random graph model}, also called the \\emph{permutation model} (please refer to \\cite{Chatterjee_2021_arXiv} for a detailed exposition of the $H(n, d)$ random graph model and its various properties). 
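To make these notions concrete, the following minimal sketch shows one way to sample a graph from the $H(n, d)$ (permutation) model --- as the union of $d/2$ random Hamiltonian cycles --- and to evaluate the vertex expansion defined above by brute force. The helper functions and toy parameters below are ours and purely illustrative: multi-edges produced by the permutation model are collapsed for simplicity, the requirement that $d \geq 8$ be an even constant is ignored in the toy example, and the brute-force expansion computation is exponential in $n$, so it serves only as a sanity check on very small graphs rather than as part of any protocol.

\\begin{verbatim}
import itertools
import random

def sample_h_n_d(n, d, seed=None):
    # Union of d/2 independent random Hamiltonian cycles on n nodes
    # (the permutation model); multi-edges are collapsed for simplicity.
    assert d % 2 == 0 and d >= 2
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for _ in range(d // 2):
        cycle = list(range(n))
        rng.shuffle(cycle)
        for i in range(n):
            u, v = cycle[i], cycle[(i + 1) % n]
            if u != v:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def vertex_expansion(adj):
    # h(G) = min over nonempty S with |S| <= n/2 of |Out(S)| / |S|,
    # where Out(S) is the set of neighbours of S outside S.
    # Exponential time: intended only for very small graphs.
    nodes = list(adj)
    n = len(nodes)
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for subset in itertools.combinations(nodes, k):
            s = set(subset)
            out = set().union(*(adj[v] for v in s)) - s
            best = min(best, len(out) / len(s))
    return best

if __name__ == "__main__":
    g = sample_h_n_d(n=12, d=4, seed=1)   # d = 4 only for illustration
    print("vertex expansion of a small sample:", vertex_expansion(g))
\\end{verbatim}

In the regime considered in this paper one would of course take a sufficiently large even constant $d$ and rely on the high-probability expansion guarantees recalled next, rather than computing $h(G)$ explicitly.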
It is known that such a random graph is an expander (in fact a Ramanujan expander \\cite{Friedman_1991, Law_2003}) with high probability. The $H(n, d)$ model is a well-studied and popular random graph model (see e.g., \\cite{Wormald_1999}), and has been used as a model for peer-to-peer networks and self-healing networks \\cite{Law_2003, Pandurangan_2014}.\n\n\nWe note that the usual $d$-regular random graph model is the model where a graph is selected with uniform probability among all (simple) $d$-regular graphs \\cite{Wormald_1999}. Thus if one can show a result that holds with high probability in a $d$-regular random graph, then it holds for \\emph{almost all} $d$-regular graphs (as in Dwork et al\\cite{Dwork_1988}). Since it is hard to work directly with the above model, one usually works with the so-called \\emph{configuration (or pairing) model} \\cite{Bollobas_1981} that can be used to generate a $d$-regular random graph. The advantage of the configuration model is that if one can show a high probability bound on the configuration model, then this implies a similar bound for $d$-regular ($d$ is a constant) random graphs \\cite{Dwork_1988}. The configuration model is closely related to the $H(n, d)$ (i.e., permutation) model, which is sometimes easier to work with compared to the configuration model. It was shown by Greenhill et al.\\ \\cite{Greenhill_2002} that an event that holds with probability at least $1 - o(1)$ in the configuration model also holds with probability $1 - o(1)$ in the $H(n, d)$ model and vice versa. Thus, the results that we show for the $H(n, d)$ model also hold for the configuration model with probability at least $1 - o(1)$. Therefore they also hold for $d$-regular random graphs with the same probability. Hence they hold for \\emph{almost all} $d$-regular graphs.\n\n\n\n\n\n\\paragraph{Problem definition.} Since we assume a \\emph{sparse} (constant bounded degree) network and a large number of Byzantine nodes, it is difficult to ensure that every honest node eventually knows an exact estimate of $n$. This motivates us to consider the following ``approximate, almost everywhere'' variant of counting:\n\n\n\\begin{definition}[Byzantine counting] \\label{definition-problem-definition}\n Suppose that there are $B(n)$ Byzantine nodes in the network and let $\\epsilon$ be an arbitrarily small (but fixed) positive constant. 
We say that an algorithm solves Byzantine Counting in $T$ rounds if the following properties hold in all runs:\n \\begin{enumerate}\n \\item Every honest node $u$ (irrevocably) decides on an estimate of $\\log{n}$, denoted by $\\mathcal{L}_u$, within $T$ rounds.\n\t\t\\item There is a set $S$ of at least $(1 - \\epsilon)n - B(n)$ honest nodes such that each $u \\in S$ has a constant factor estimate of $\\log{n}$; i.e., there are fixed constants $c_1, c_2 > 0$, such that\n \\begin{equation*}\n c_1\\log{n} \\leq \\mathcal{L}_u \\leq c_2\\log{n}\\text{.}\n \\end{equation*}\n \\end{enumerate}\n\\end{definition}\n\n\n\n\n\n\\paragraph{Some terminology.} We recall the following terminology that is used throughout this paper.\n\\begin{enumerate}\n \\item We use the terms \\emph{sparse network} and \\emph{bounded-degree network} synonymously --- each describing a network where the maximum degree of a node is bounded by a constant, and hence the number of edges is linear in the number of vertices.\n \n \\item A \\emph{small-sized message} is defined to be one that contains $O(\\log{n})$ bits in addition to at most a constant number of node IDs.\n \n \\item We use the term \\emph{most nodes} or \\emph{most good nodes} to indicate $\\geq (1 - \\beta)n$ nodes, where $n$ is the total number of nodes in the network (and is an unknown quantity in the context of this paper) and $\\beta$ is any arbitrarily small (but fixed) positive constant.\n \n \\item By \\emph{efficient algorithms} we mean algorithms that use small-sized messages and run in $\\operatorname{polylog}{(n)}$ time.\n\\end{enumerate}\n\\section{Conclusion and Open Problems}\n\n\n\n\n\nIn this paper we take a step towards designing localized, secure, robust, and scalable distributed algorithms for large-scale networks. We presented two distributed protocols for the fundamental Byzantine counting problem. Our work leaves many questions open. While our deterministic algorithm runs in optimal $O(\\log{n})$ rounds, the randomized algorithm takes a number of rounds that is essentially proportional to the number of Byzantine nodes in the network. Thus a main open problem would be to show a polylogarithmic round algorithm for Byzantine counting using small messages or to prove that this is not possible. Another open problem is to show an algorithm that can tolerate a significantly larger number of Byzantine nodes, e.g., $\\Theta(n)$ Byzantine nodes.\n\\section{Byzantine Counting with Small Messages} \\label{sec:congest}\n\n\n\n\n\nWe now describe an algorithm that guarantees that most good nodes will achieve a constant factor approximation of $\\log(n)$ while sending only messages of small size (proportional to the number of bits of any node's ID). We give the detailed correctness proof in Section \\ref{sec:congestAnalysis}. As mentioned in Section \\ref{sec:model}, our algorithm works in the $H(n,d)$ $d$-regular random graph model with high probability, i.e., with probability at least $1 - n^{-c}$, for some constant $c \\geq 1$. As discussed in Section \\ref{sec:model}, this implies that the algorithm works in almost all $d$-regular graphs with probability at least $1 - o(1)$.\n\n\nIn Algorithm \\ref{alg:congest}, each node keeps track of its current estimate in a variable $i$ that is initialized to a fixed constant. A node increases $i$ whenever it enters a new \\emph{phase}, where the goal of a phase is to determine whether $i$ is already a sufficiently good approximation of $\\log(n)$. 
On the other hand, once a node concludes that its current value of $i$ is sufficiently large, it \\emph{decides on $i$} and stops participating in future phases. Each phase $i$ consists of roughly $e^{(1-\\gamma)i}+1$ \\emph{iterations}, and each iteration of phase $i$ takes $2i+5$ rounds: During the first $i+2$ rounds, nodes disseminate so-called ``beacon messages'' (described next) whereas, during the following $(i+3)$ rounds, all yet-undecided nodes ensure that everyone in their $(i+3)$-neighborhood knows that they have not yet decided by sending a ``continue'' message.\n\n\n\n\n\n\\paragraph{Beacon Messages and Path Fields.} At the start of an iteration, a good node $u$ chooses to become \\emph{active} with probability $\\Theta\\!\\left(\\frac{i}{d^i}\\right)$, where $d$ is $u$'s degree. The intuition behind this probability is that it ensures that, on average, there are approximately $O(i)$ nodes that are active in a ball of radius $i$ --- note that the tree-like property of expander graphs (see Section \\ref{sec:treelike}) ensures that the number of nodes in a ball is $\\Theta(d^i)$.\n\n\nIf $u$ becomes active, it broadcasts a \\emph{beacon message} to its neighbors, which is then forwarded for $i+2$ rounds. Intuitively speaking, these beacon messages signal to other nodes that they should not yet decide on their current estimate.\n\n\nIn more detail, a beacon message $\\langle \\texttt{beacon}, u, P \\rangle$ has an \\emph{origin id} $u$, and a \\emph{path field} $P$, which is the path of nodes that the message has visited so far. That is, whenever the message is sent from a node $w$ to a node $v$, we append $v$ to the path field before forwarding the message. Of course, it is entirely possible that these fields contain bogus information if the message passed through a Byzantine node.\n\n\n\n\n\n\\paragraph{Blacklisting.} Whenever a node $v$ receives a beacon message, it inspects the attached path field $P = (u_1, u_2, \\ldots, u_k)$ by performing a series of checks.\n\n\nFirst, $v$ checks whether the neighbor from which it received the message does indeed have id $u_k$. If $v$ finds that the sender has an id different from $u_k$, it simply discards that message. Node $v$ also maintains a \\emph{blacklist} set $BL$, which is reset at the start of each phase and is gradually filled throughout a phase's iterations.\n\n\nIn more detail, let us suppose that the above-mentioned beacon message was the \\emph{first} one received by $v$ in iteration $1$ of phase $i$, from some neighbor $w$. Then, $v$ adds all nodes except the ones in the {$\\lfloor (1-\\epsilon)i\\rfloor$-suffix of $P$} to its blacklist $BL$, where this {suffix} consists of the last $\\lfloor (1-\\epsilon)i\\rfloor$ nodes on the path to the destination $v$. The intuition behind this rule is that $v$ blindly trusts all nodes that are close to it, but will not accept, later in the same phase, another beacon message that has traversed the same (far away) nodes.\n\n\nIn addition, $v$ sets its variable $\\textsf{shortestPath} \\gets P$, which indicates the (supposedly) shortest path over which $v$ received a beacon message in this iteration. (If $v$ receives two or more beacon messages simultaneously, it discards all but one.) Note that $v$ resets $\\textsf{shortestPath}$ at the end of each iteration. 
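Before turning to how beacons are forwarded and how undecided nodes announce themselves, the following sketch summarizes, in simplified form, the per-message bookkeeping described above together with the forwarding behaviour discussed next, from the point of view of a single good node within one iteration of phase $i$. This is not Algorithm \\ref{alg:congest} itself: the class and method names are ours, the hidden constants (the activation probability and the value of $\epsilon$) are arbitrary illustrative choices, and round timing and message transport are abstracted away.

\\begin{verbatim}
import math
import random

EPS = 0.1  # illustrative choice for the constant epsilon in the suffix rule

class GoodNode:
    def __init__(self, node_id, degree):
        self.node_id = node_id
        self.degree = degree
        self.blacklist = set()      # reset at the start of every phase
        self.shortest_path = None   # reset at the end of every iteration

    def becomes_active(self, i):
        # Activation probability Theta(i / d^i); the hidden constant is 1 here.
        return random.random() < min(1.0, i / (self.degree ** i))

    def far_part(self, path, i):
        # All nodes on the path except its last floor((1 - EPS) * i) entries.
        keep = math.floor((1 - EPS) * i)
        return set(path[:-keep]) if keep > 0 else set(path)

    def on_beacon(self, sender_id, path, i):
        # Returns the path field to forward (with our id appended), or None.
        if not path or path[-1] != sender_id:
            return None                 # sender id does not match the path field
        if self.far_part(path, i) & self.blacklist:
            # Beacon travelled through already blacklisted far-away nodes:
            # do not use it for shortest_path, but still forward it.
            return path + [self.node_id]
        if self.shortest_path is None:  # first accepted beacon of this iteration
            self.shortest_path = list(path)
            self.blacklist |= self.far_part(path, i)
        return path + [self.node_id]
\\end{verbatim}

In the full protocol this handling is only applied during the first $i+2$ rounds of an iteration, simultaneous receptions are reduced to a single message, and a node whose $\\textsf{shortestPath}$ variable remains unset at the end of an iteration decides on its current value of $i$, as described below.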
Then, assuming that we are still in the first $i+1$ rounds of the iteration upon the reception of this beacon message, $v$ broadcasts the message $\\langle \\texttt{beacon}, P' \\rangle$ with the modified path field $P'$ to its neighbors where $P'$ is obtained by appending $v$ to $P$.\n\n\nAs mentioned, blacklisting ensures that $v$ does not accept a beacon message if the message took a path leading through the same nodes from which it has already seen a beacon message in this phase. Blacklisting is implemented as follows: If $v$ receives a beacon message $m'$ in some iteration $\\ell>1$ of phase $i$ and the node IDs contained in the path field that are at least $\\lfloor(1-\\epsilon)i\\rfloor$ away from $v$ intersect with the nodes already added to $BL$ during the previous iterations, then $v$ will not use $m'$ to update its $\\textsf{shortestPath}$ variable. However, it is important to keep in mind that, even in this case, $v$ still broadcasts the message with the updated path field to its neighbors, assuming we are still in the first $(i+1)$-rounds of the iteration.\n\n\nConsequently, if $(i+2)$ rounds have passed and node $v$ did not set $\\textsf{shortestPath}$ in this iteration either because it did not receive any beacon messages or all received beacons carried already blacklisted node IDs, then $v$ decides on its current value of $i$.\n\n\nIntroducing blacklisting avoids the scenario where Byzantine nodes keep generating new beacon messages that trigger good nodes to continue progressing to the next phase, possibly significantly overshooting the actual value of $\\log(n)$ before deciding. The blacklisting mechanism kicks in once $i = \\Omega(\\log{n})$ since the algorithm ensures that (see Lemma \\ref{lem:upperEnd}):\n\\begin{enumerate}\n \\item there is no iteration in which a good node still generates a new beacon message (whp);\n \\item the number of iterations performed in phase $i$ exceeds the number of Byzantine nodes.\n\\end{enumerate}\n\n\nFor instance, suppose that a Byzantine node $b$ generates a beacon message with a fake path field in iteration $1$. Even though $b$ can trick all good nodes into accepting this beacon message in this iteration, it will fail to convince a set $U$ of good nodes that have a distance of at least $\\lfloor (1-\\epsilon)i\\rfloor$ from $b$ into accepting such a message in any future iteration of this phase.\n\n\nTo see why this is the case, observe that when a node $u \\in U$ receives a message where $b$ was involved in faking the path field, $b$ will be added to $u$'s blacklist because its ID will not be in the $\\lfloor (1 - \\epsilon)i\\rfloor$ suffix of the path field by the time the message reaches $u$ (cf.\\ Line \\ref{line:blacklist}), assuming that the message did not pass through other Byzantine nodes that are closer to $u$. (Recall that $i$ is large enough such that good nodes have ceased from generating beacon messages and hence every beacon message that is still in transit must have been injected by Byzantine nodes.) The upshot is that a good node $u$ that has all the Byzantine nodes at a distance of at least $\\lfloor (1-\\epsilon)i\\rfloor$ will blacklist at least $1$ Byzantine node $b$ in each iteration if $b$ generates a beacon message. Hence, $u$ will encounter an iteration in which its $\\textsf{shortestPath}$ variable is not set, thus causing it to decide on $i$.\n\n\n\n\n\n\\paragraph{Technical challenges.} There are several technical difficulties that we need to handle in our correctness proof. 
For instance, we need to choose the probability of generating beacon messages in a way such that $i$ does not become too large before most nodes have reached a decision, as we might end up with a value of $i$ where almost all good nodes are \\emph{within} distance $\\lfloor (1 - \\epsilon)i\\rfloor$ of some Byzantine node, thus disarming the blacklisting mechanism.\n\n\nOn the other hand, the blacklisting process itself reduces the number of nodes that a good node considers for beacon messages, which may cause too many nodes to decide early due to not seeing a beacon message in each iteration. We use two techniques to avoid this second problem:\n\\begin{enumerate}\n \\item We use the \\emph{tree-like} property of the regular expander graphs (see Section \\ref{sec:treelike}). This shows that the remaining nodes provide sufficient expansion even if a large number of paths have been invalidated due to at least one of their nodes being blacklisted.\n\n\n \\item We instruct undecided nodes to send out \\texttt{continue} messages that are forwarded for $(i + 3)$ rounds in phase $i$. Upon reception of such a message, a node that has possibly already decided and stopped increasing its phase counter, will become active again and generate beacon messages with the appropriate probability.\n\\end{enumerate}\n\n\n\nWe state below the main result of this section. The rest of this section is devoted to proving it.\n\\begin{theorem} \\label{thm:congest}\n Let $\\xi$ and $\\beta$ be any arbitrarily small (but fixed) positive constants. Let $B(n) = n^{\\frac{1}{2} - \\xi}$ denote the number of Byzantine nodes in the network. Consider the $H(n,d)$ random regular graph $G$ of $n$ nodes with constant vertex expansion, where $d$ is a sufficiently large constant. Then there exists an algorithm such that, with high probability, at least $(1 - \\beta)n$ nodes send messages of at most $O(\\log{n})$ bits and decide on a constant factor approximation of $\\log{n}$ in time $O(B(n) \\cdot \\log^2{n})$ in the presence of up to $B(n)$ arbitrarily (adversarially) placed Byzantine nodes.\n\\end{theorem}\n\\section{A Time-Optimal Deterministic Algorithm} \\label{sec:local}\n\n\n\n\n\nIn this section, we present and analyze a simple algorithm that solves the Byzantine counting problem in the \\textsf{Local} model --- see Algorithm \\ref{alg:local} for the pseudocode.\n\n\nOur goal here is to show that a set called $\\textsf{Good}$ consisting of $\\geq n - o(n)$ good nodes achieve a constant factor approximation of $\\log{n}$ when executing our algorithm. Lemma \\ref{lem:goodNodes} formalizes the criteria for a good node to be in $\\textsf{Good}$: in particular, a good node needs to have a distance of $\\Omega(\\log{n})$ from all Byzantine nodes and the graph induced by $\\textsf{Good}$ must have nearly the same vertex expansion as the original network.\n\n\n\n\\subsection{Description of the algorithm}\n\nThroughout the algorithm, each node $u$ locally builds an approximation of its $i$-hop neighborhood for rounds $i = 1, 2, 3, \\ldots$, which we denote by $\\hat{B}(u,i)$. To this end, we instruct nodes to simply forward the content of their current $\\hat{B}(u,i)$ at the start of round $i$. Considering that we assume (at most) $n^{1-\\gamma}$ Byzantine nodes, node $u$ needs to be careful when integrating any newly received knowledge.\n\n\nThere are two possibilities for triggering a decision of node $u$. 
Firstly, $u$ immediately decides if it notices some structural inconsistencies in the received topology information, such as a degree larger than $\\Delta$, or the addition of spurious edges to vertices that it had already learned about previously (cf.\\ Line~\\ref{line:inconsistent}).\n\n\nFurthermore, after obtaining $\\hat{B}(u, i+1)$ by adding the received topology information in round $r$ into $\\hat{B}(u,i)$, node $u$ also decides if any of the subsets of $\\hat{B}(u,i)$ do not have sufficient vertex expansion with respect to $\\hat{B}(u,i+1)$. Intuitively speaking, this second condition ensures that Byzantine nodes cannot trick $u$ into continuing forever. The algorithm's correctness crucially rests on the original network having constant expansion --- a point that is further emphasized by our impossibility result in Theorem~\\ref{thm:impossibility}.\n\n\n\\begin{remark}\n We observe that, for $o(n)$ nodes in $G \\setminus \\textsf{Good}$, the adversary essentially controls the termination time. This is \\emph{not} simply a drawback of our algorithm, but, instead, \\emph{unavoidable} when assuming a worst-case placement of Byzantine nodes in the network: For instance, consider a $d$-regular expander and a set of $\\lfloor n^{1-\\gamma}\/d\\rfloor$ good nodes $U$ that are surrounded by roughly $n^{1-\\gamma}$ Byzantine nodes, i.e., none of the edges emanating from $U$ to $G\\setminus U$ are connected to good nodes. Then, the Byzantine nodes could simply send information corresponding to a large fake network of some arbitrary size $n'$ with sufficiently high expansion to the nodes in $U$. It is easy to see that no algorithm can distinguish this case from $U$ being indeed part of a network of size $n'$.\n\\end{remark}\n\n\n\nWe state below the main result of this section. The rest of this section is devoted to proving it.\n\\begin{theorem} \\label{thm:local}\n Let $\\gamma \\in (0,1)$ and $\\Delta > 0$ be arbitrary fixed constants. Consider an $n$-node network with a maximum node degree bounded by $\\Delta$ and a constant vertex expansion $\\alpha$. There exists a deterministic \\textsf{LOCAL} algorithm such that $n - o(n)$ good nodes decide on a $\\left(\\frac{\\gamma}{2\\log\\Delta}\\right)$-approximation of $\\log{n}$ in $O(\\log{n})$ rounds in the presence of up to $n^{1-\\gamma}$ arbitrarily (adversarially) placed Byzantine nodes.\n\\end{theorem}\n\\subsection{A High-level Description of Our Protocols}\n\n\n\n\n\nWe now give a high-level intuition behind our protocols. Our first protocol works for \\emph{any} expander network as long as the nodes have knowledge of some lower bound on the expansion. The main idea is to show that honest nodes that have a sufficiently large distance from any of the Byzantine nodes will be able to detect any deviations in the network structure caused by Byzantine nodes. The honest nodes can accomplish that by checking the expansion of their $i$-hop neighborhood, for some $i = \\Omega(\\log{n})$. This algorithm is \\emph{time-optimal} and runs in time proportional to the network diameter. However, it is designed for the \\textsf{Local} model, as the expansion check requires nodes to send messages of polynomial size.\n\n\nThe second algorithm achieves Byzantine counting by ensuring that most good nodes will send only small-sized messages. The main idea here is the following. The algorithm proceeds in phases. In phase $i$, $i$ is the current estimate of $\\log(n)$. 
In each $i$-hop neighborhood of some node consisting only of good nodes, there are likely to be $\\Theta(i)$ nodes that are generating \\emph{beacon messages}, which are propagated for at least $i$ rounds through the network.\n\n\nUpon receiving a beacon message, a node assumes that the value of $i$ is not yet too large and hence proceeds without \\emph{deciding}. On the other hand, the probability of any good node generating a beacon message becomes $\\frac{1}{\\operatorname{poly}{(n)}}$ once $i = \\Omega(\\log{n})$, and hence good nodes that do not observe a beacon message within $O(i)$ rounds of phase $i$, decide on $i$ as their estimate.\n\n\nTo avoid the scenario where Byzantine nodes simply keep generating new beacon messages (to falsely induce a larger network size), the algorithm implements a \\emph{blacklisting mechanism} that uses properties of random regular graphs to prevent nodes from generating multiple beacon messages within the same phase. This ensures that the Byzantine nodes will be blacklisted if they attempt to generate fake beacon messages.\n\\section{Impossibility result} \\label{sec:impossibility}\n\n\n\n\n\nWe have seen that both of our algorithms crucially rely on the expansion properties of the underlying network. In Theorem \\ref{thm:impossibility}, we show that having sufficient expansion is necessary for obtaining \\emph{any} approximation of $\\log(n)$. In the proof, we make use of the fact that a single Byzantine node can trick the honest nodes into believing that there may be some large number of nodes hidden ``behind'' the Byzantine node, and the honest nodes have no way of verifying whether this bottleneck actually exists.\n\n\n\n\\begin{theorem} \\label{thm:impossibility}\n There is no randomized algorithm that ensures that more than $\\lceil\\frac{n}{2}\\rceil$ nodes output an approximation of $\\log{n}$ in the presence of one Byzantine node with probability at least $(1 - \\epsilon)$, if there are no restrictions on the network topology of the given $n$-node network, for any constant $0 < \\epsilon < 1$.\n\\end{theorem}\n\\begin{proof}\n For the sake of a contradiction, suppose that there is an algorithm $A$ that solves the counting problem and let $C_n$ be an arbitrary graph of size $n$. Fix any approximation factor $c = c(n) > 0$, which includes $c$ being a constant as a special case. Suppose that an execution $\\gamma$ of algorithm $A$ results in a set $S$ of more than $\\lceil \\frac{n}{2}\\rceil$ nodes, where every $u_i \\in S$ outputs an estimate $\\hat{\\ell_i}$ such that $\\frac{\\ell}{c} \\le \\hat{\\ell_i} \\le c \\ell$, for some $\\ell$. Then we say that \\emph{$A$ decides on a common estimate of $\\ell$ in execution $\\gamma$}.\n\n\n For a given $n$, let $t \\geq n$ be the smallest integer such that the probability of $A$ deciding on a common estimate of $\\log{(nt)}$ when executing on network $C_n$ is at most $\\frac{1-\\epsilon}{t}$, where $\\epsilon>0$ is the assumed constant error probability of the algorithm $A$. Since we assume that decisions are irrevocable (see Def.~\\ref{definition-problem-definition}), we know that $A$ can decide at most one common estimate in a single execution, the event of producing common estimate $\\ell_1$ and the event of producing common estimate $\\ell_2$, where $c\\ \\ell_1 < \\frac{\\ell_2}{c}$, are mutually exclusive.\n If $t$ does not exist, then the algorithm has probability $> \\frac{1 - \\epsilon}{k}$ to output $\\log{(nk)}$ as the common estimate, for all $k \\geq n$. 
However, by summing up the probabilities of these (mutually exclusive) events we get $\\sum_{k = n}^{\\infty}\\frac{1-\\epsilon}{k} > 1$, i.e., the probabilities of outputting the common estimates do not form a valid probability distribution. It follows that $t$ exists.\n\n\n Now consider a graph $H$ of $t$ copies of $C_n$ where the Byzantine node $b$ is part of each copy, i.e., node $b$ has degree $t \\cdot deg(b)$ where $deg(b)$ is the degree of $b$ in $C_n$. For each copy $C_n$, node $b$ outputs the same set of messages and local state transitions, as are required by an execution of algorithm $A$ in the network $C_n$ for some given random coin flips when $b$ is an honest node. For the algorithm to output a common estimate of $\\log(nt)$ when executing on $H$, at least $>\\frac{nt}{2}$ nodes need to output a common estimate of $\\log{(nt)}$, which involves at least $\\frac{n}{2}$ nodes in at least $\\frac{t}{2}$ copies of $C_n$. Since the nodes in any given copy of $C_n$ cannot distinguish the execution in $C_n$ from the execution on $H$ and nodes in each individual $C_n$ have probability at most $\\frac{1 - \\epsilon}{t}$ of outputting the required approximation of $\\log(nt)$, we can take a union bound over the $\\frac{t}{2}$ copies of $C_n$. Thus, for the probability of the algorithm to produce a common estimate in $H$, we obtain $\\frac{1 - \\epsilon}{t}\\frac{t}{2} \\leq \\frac{1 - \\epsilon}{2}$, a contradiction to the assumed probability of success being $\\geq (1 - \\epsilon)$.\n\\end{proof}\n\\section{Introduction} \\label{sec:intro}\n\n\n\n\n\nThe recent surge in the popularity of decentralized peer-to-peer protocols has renewed the interest in achieving Byzantine fault-tolerance in sparse networks of untrusted participants. In this work, we study the fundamental problem of {\\em Byzantine counting} where the goal is to estimate the number of nodes in a network in the presence of a large number of Byzantine nodes. We say that a node $u$ is \\emph{good} or \\emph{honest} if $u$ is not a Byzantine node. We assume that the Byzantine nodes are {\\em arbitrarily} distributed in the network and that when a Byzantine node sends a message over an edge, it cannot fake its ID. We note that both of these assumptions are quite typical in the literature \\cite{Dwork_1988, Upfal_1994, King_2006_FOCS, Augustine_2012, Augustine_2013_PODC, Augustine_2015_DISC}. Please refer to Section \\ref{sec:model} for more details on the distributed computing model.\n\n\nWe focus on the Byzantine counting problem in the context of \\emph{sparse} networks because of the following reasons.\n\\begin{enumerate}\n \\item Peer-to-peer networks and most other large-scale, real-world networks happen to be sparse.\n \\item In a $d$-regular network, if the degree $d$ is a non-constant function of $n$, e.g., if $d = \\Theta(\\log{n})$, then it might become trivial for a node to estimate $n$ from its knowledge of its own degree $d$.\n\\end{enumerate}\n\n\nEssentially all known algorithms studied in the literature for solving problems like \\emph{Byzantine consensus} and \\emph{Byzantine leader election} in sparse networks require an underlying {\\em expander graph}: the expansion property is \\emph{needed} in tolerating a large number of Byzantine nodes. 
The vertex expansion of a graph $G = (V, E)$ on $n$ nodes is defined as\n\\begin{equation*}\n h(G) = \\min_{0 < |S| \\leq \\frac{n}{2}} \\frac{|Out(S)|}{|S|}\\text{,}\n\\end{equation*}\nwhere $S$ is any subset of $V$ of size at most $\\frac{n}{2}$ and $Out(S)$ is the set of neighbors of $S$ in $V \\setminus S$. In particular, the seminal paper of Dwork et al.\\ \\cite{Dwork_1988}, which introduced and studied the problem of almost-everywhere Byzantine agreement in bounded degree graphs showed that such an agreement is achievable in \\emph{almost all} $d$-regular graphs (i.e., all but a vanishingly small fraction of such graphs). We exploit the following fact in our current work: almost all $d$-regular graphs possess good expansion properties.\n\n\nHowever, these algorithms assume knowledge of at least an estimate of the {\\em size of the network} (in many cases, an estimate of the logarithm of the network size suffices) and related parameters such as the network diameter or the mixing time. In fact, the result of Dwork et al.\\ assumes that all nodes know the global network topology. This suggests that it is non-trivial to design algorithms that work \\emph{without} knowledge of these global network parameters in bounded-degree (or $d$-regular) expander networks. In such networks, nodes have a limited local view that is highly symmetric, and this enables Byzantine nodes to fake the presence (or absence) of parts of the network.\n\n\nThe goal of our algorithms is to guarantee that most of the honest (i.e., non-Byzantine) nodes obtain a good estimate of the network size. We note that obtaining ``almost-everywhere'' knowledge is the best one can hope for in such networks \\cite{Dwork_1988}. Byzantine counting is related to, yet different from, other fundamental problems in distributed computing, namely, {\\em Byzantine agreement} and {\\em Byzantine leader election}. Similar to the latter two problems, it involves solving a global problem under the presence of Byzantine nodes. However, it is a different problem, since protocols for Byzantine agreement or leader election do not necessarily yield a protocol for Byzantine counting. In fact, many existing algorithms for these two problems (discussed below and in Section \\ref{sec:results}) assume knowledge of $n$, the number of nodes in the network. In sparse networks, they require at least a reasonably good estimate of $n$, typically a constant factor estimate of $\\log{n}$ is needed and usually sufficient (as explained in Section \\ref{sec:results}). Indeed, one of the main motivations for this paper is to design distributed protocols in sparse networks that can work with little or no global knowledge, including the network size. An efficient protocol for the Byzantine counting problem can serve as a preprocessing step for protocols for Byzantine agreement, leader election, and other problems that either require or assume knowledge of an estimate of $\\log n$ \\cite{Augustine_2016} (cf.\\ Section \\ref{sec:results}).\n\n\nByzantine agreement and leader election have been studied extensively for several decades. Dwork et al.\\ \\cite{Dwork_1988}, Upfal \\cite{Upfal_1994}, and King et al.\\ \\cite{King_2006_FOCS} studied the Byzantine agreement problem in \\emph{sparse (bounded-degree) expander networks} under the condition of \\emph{almost-everywhere} agreement, where {\\em almost} all (honest) processors need to reach agreement as opposed to \\emph{all} nodes agreeing as required in the standard Byzantine agreement problem. 
Dwork et al.\\ \\cite{Dwork_1988} showed how one can achieve almost-everywhere agreement with up to $\\Theta(\\frac{n}{\\log{n}})$ Byzantine nodes in a bounded-degree \\emph{expander} network ($n$ is the network size). Subsequently, Upfal~\\cite{Upfal_1994} gave an improved protocol that can tolerate up to a linear number of faults in a bounded degree \\emph{expander} of \\emph{sufficiently large spectral gap}. These algorithms required a polynomial number of rounds in the CONGEST model (where honest nodes send only small-sized messages), required $O(\\log n)$ rounds in the LOCAL model (where there is no restriction on the message sizes), and used a polynomial (in $n$) number of messages. (For comparison, our \\textsf{Local} algorithm similarly takes $O(\\log{n})$ rounds and our \\textsf{Congest} algorithm takes a polynomial number of rounds.) Moreover, for Upfal's algorithm the local computation required by each processor is exponential. The work of King et al.\\ \\cite{King_2006_FOCS} was the first to study scalable (polylogarithmic communication and number of rounds, and polylogarithmic computation per processor) algorithms for Byzantine leader election and agreement. All of the above algorithms require knowledge of the network topology (including the knowledge of $n$) --- nodes need to have this information hardcoded from the very start.\n\n\nThe works of \\cite{Augustine_2012}, \\cite{Augustine_2013_PODC}, and \\cite{Augustine_2015_DISC} studied stable agreement, Byzantine agreement, and Byzantine leader election (respectively) in dynamic networks (see also \\cite{Augustine_2016}), where in addition to Byzantine nodes there is also adversarial churn. All these works assume that there is an underlying bounded-degree regular expander graph (in fact, Dwork et al., among others, assume $d$-regular random graphs which are expanders with high probability) and {\\em all nodes are assumed to have knowledge of $n$}. It was not clear how to estimate $n$ without additional information under the presence of Byzantine nodes in such (essentially, regular and constant degree expander) networks. In fact, the works of \\cite{Augustine_2016, Augustine_2015_DISC} raised the question of designing protocols in expander networks that work when the network size is not known and may even change over time, with the goal of obtaining a protocol that works when nodes have strictly local knowledge. This requires devising a distributed protocol that can measure global network parameters such as size, diameter, average degree, etc.\\ in the presence of Byzantine nodes in sparse networks, especially in sparse \\emph{expander} networks.\n\n\nMotivated by the above considerations, the work of Chatterjee et al.\\ \\cite{Chatterjee_2019} studied the Byzantine counting problem in a {\\em ``small-world''} expander network under the assumption that the Byzantine nodes are {\\em randomly} distributed (cf. Section \\ref{sec:technical} for more details). They present a distributed algorithm running in polylogarithmic (in $n$) rounds in the CONGEST model that can output a constant factor estimate of $\\log{n}$, where $n$ is the (unknown) network size, under the presence of $O(n^{1 - \\gamma})$ Byzantine nodes, where $\\gamma > 0$ can be any arbitrarily small (but fixed) constant. 
While this presents the first known Byzantine counting algorithm under this setting, it has two major drawbacks.\n\n\nFirst, it does not work when Byzantine nodes are \\emph{arbitrarily distributed} --- it crucially needs that they be \\emph{randomly distributed}.\n\n\nSecond, it does not work for (just) expander networks; it needs additional structure, namely a {\\em small-world} network, i.e., a network that has a large clustering coefficient.\\footnote{i.e., a Watts-Strogatz type network similar to \\cite{Watts_1998, Barthelemy_1999}.} The work of Chatterjee et al. crucially relies on the small-world property in its estimation of the network size. Hence the algorithm and techniques used in that paper \\cite{Chatterjee_2019} are \\emph{not} directly applicable to the present paper. Indeed, this paper uses a different approach compared to that of \\cite{Chatterjee_2019} (cf. Section \\ref{sec:technical}). While prior works on Byzantine agreement and leader election required only (sparse) expander networks \\cite{Dwork_1988, Upfal_1994, King_2006_FOCS} under an arbitrary distribution of Byzantine nodes, Chatterjee et al.\\ remark that:\n\n\n\\begin{quote}\n ``... for the Byzantine counting problem, which seems harder, however, expansion by itself does not seem to be sufficient.''\n\\end{quote}\n\n\nIn this paper, we show that Byzantine counting can indeed be solved in expander networks and almost all $d$-regular graphs under arbitrarily (adversarially) placed Byzantine nodes. This is the setting that is typically assumed in prior works on Byzantine agreement and leader election problems (e.g., \\cite{Dwork_1988, Upfal_1994, King_2006_FOCS, Augustine_2012, Augustine_2013_PODC, Augustine_2015_DISC}).\n\n\nThroughout this paper, we use the following terminology.\n\\begin{enumerate}\n \\item We use the terms \\emph{sparse network} and \\emph{bounded-degree network} synonymously --- each describing a network where the maximum degree of a node is bounded by a constant, and hence the number of edges is linear in the number of vertices.\n \n \\item A \\emph{small-sized message} is defined to be one that contains $O(\\log{n})$ bits in addition to at most a constant number of node IDs.\n \n \\item We use the term \\emph{most nodes} or \\emph{most good nodes} to indicate $\\geq (1 - \\beta)n$ nodes, where $n$ is the total number of nodes in the network (and is an unknown quantity in the context of this paper) and $\\beta$ is any arbitrarily small (but fixed) positive constant.\n \n \\item By \\emph{efficient algorithms} we mean algorithms that use small-sized messages and run in $\\operatorname{polylog}{(n)}$ time.\n\\end{enumerate}\n\n\\subsection{Other Related Work} \\label{sec:related}\n\n\n\n\n\nThere have been several works on estimating the size of the network, see e.g., the works of \\cite{Ganesh_2007, Horowitz_2003, Luna_2014, Terelius_2012, Shafaat_2008}, but all these works do not work under the presence of Byzantine adversaries. There have been some work on using network coding for designing byzantine protocols (see e.g., \\cite{Jaggi_2008}); but these protocols have polynomial message sizes and are highly inefficient for problems such as counting, where the output size is small. There are also some works on topology discovery problems under Byzantine setting (e.g., \\cite{Nesterenko_2006}), but these do not solve the counting problem.\n\n\nSeveral recent works deal with Byzantine agreement, Byzantine leader election, and fault-tolerant protocols in dynamic networks. 
We refer to \\cite{Guerraoui_2013, Augustine_2012, Augustine_2013_PODC, Augustine_2013_SPAA, Augustine_2015_DISC} and the references therein for details on these works. These works crucially assume the knowledge of the network size (or at least an estimate of it) and don't work if the network size is not known.\n\n\nThere has been significant work in designing peer-to-peer networks that are provably robust to a large number of Byzantine faults \\cite{Fiat_2002, Hildrum_2003, Naor_2003, Scheideler_2005}. These focus only on (robustly) enabling storing and retrieving data items. The works of \\cite{King_2006_FOCS, King_2014, King_2006_SODA} address the Byzantine agreement problem, and the work of \\cite{Guerraoui_2013} presents a solution for maintaining a clustering of the network. In particular, \\cite{King_2014} use a spectral technique to ``blacklist'' malicious nodes leading to faster and more efficient Byzantine agreement. All these works assume a sufficiently good estimate of the network size; in particular, none of them solves the Byzantine counting problem in sparse networks.\n\n\nThe work of \\cite{Bortnikov_2009} shows how to implement uniform sampling in a peer-to-peer system under the presence of Byzantine nodes where each node maintains a local ``view'' of the active nodes. We point out that the choice of the view size and the sample list size of $\\Theta(n^{\\frac{1}{3}})$ necessary for withstanding adversarial attacks requires the nodes to have a priori knowledge of a polynomial estimate of the network size. \\cite{Horowitz_2003} considers a dynamically changing network \\emph{without} Byzantine nodes where nodes can join and leave over time and provides a local distributed protocol that achieves a polynomial estimate of the network size.\n\n\nIn \\cite{Bovenkamp_2012}, the authors present a gossip-based algorithm for computing aggregate values in large dynamic networks (but without the presence of Byzantine failures), which can be used to obtain an estimate of the network size. The work of \\cite{Chlebus_2009} focuses on the consensus problem under crash failures and assumes knowledge of $\\log{n}$, where $n$ is the network size. Lenzen et al.\\ \\cite{Lenzen_2017} study the synchronous counting problem under Byzantine nodes which is a different problem: the goal here is to synchronize pulses among correct nodes. They study the problem in a complete network, and hence the network size is trivially known.\n\n\n\n\\paragraph{Byzantine fault detection in the context of asynchronous distributed systems.} There have also been several works on Byzantine fault detection --- see, e.g., \\cite{Alvisi_2001}, \\cite{Kihlstrom_2003}, \\cite{Haeberlen_2006}, and \\cite{Greve_2012}. Alvisi et al.\\ \\cite{Alvisi_2001} consider the problem of fault detection in Byzantine \\emph{quorum systems} and design statistical methods to compute the current number of failures at any point of time. Their model assumes the knowledge of $n$, the total number of servers, whereas their goal is also clearly different. There have also been works on Byzantine fault detectors \\cite{Kihlstrom_2003, Haeberlen_2006}, but these assume the complete graph where the knowledge of $n$ becomes trivial.\n\n\nKihlstrom et al.\\ \\cite{Kihlstrom_2003} propose and analyze new classes of Byzantine fault detectors to solve the consensus problem in an asynchronous distributed system of $n$ processes, in which the number of (Byzantine-) faulty processors is strictly less than $\\frac{n}{3}$. 
Haeberlen et al.\\ \\cite{Haeberlen_2006} proposes a new idea for Byzantine fault detection by achieving \\emph{eventual strong completeness} where every faulty node is eventually blacklisted by every correct node. The underlying communication graph is a \\emph{complete graph} in both the network models of \\cite{Kihlstrom_2003} and \\cite{Haeberlen_2006}, thus the knowledge of $n$ becomes immediate and trivial.\n\n\nGreve et al.\\ \\cite{Greve_2012} design and analyze a powerful Byzantine failure detector that works in dynamic distributed systems, where both the number of processors and the topology of the communication graph can change from round to round. Their work does not assume any knowledge of $n$; however, their work does \\emph{not} solve the Byzantine counting problem either --- \\emph{no} estimate about the \\emph{global} network size can be made during the execution of their algorithm.\n\n\\subsection{Our Contributions} \\label{sec:results}\n\n\n\n\n\nWe present two distributed algorithms for the {\\em Byzantine counting problem}, which is concerned with estimating the size (more specifically, the logarithm of the size, as considered here) of a sparse network in the presence of a large number of Byzantine nodes.\n\n\nLet the network be denoted by $G = (V, E)$; let $n = |V|$ denote the (unknown) network size. Our first algorithm is {\\em deterministic} and finishes in $O(\\log{n})$ rounds in the \\textsf{LOCAL} model and is time-optimal. This algorithm can tolerate up to $O(n^{1 - \\gamma})$ adversarially placed Byzantine nodes for any arbitrarily small (but fixed) positive constant $\\gamma$. It outputs a constant factor estimate of $\\log{n}$ that is known to all but $o(1)$ fraction of the good nodes. This algorithm works for \\emph{any} bounded degree expander network. \n\n\nOur second algorithm is {\\em randomized}. This algorithm works in \\emph{almost all} $d$-regular graphs (i.e., all but a vanishingly small fraction of such graphs). We note that this is the same model used in the seminal work of Dwork et al.\\cite{Dwork_1988}. Our algorithm works in the CONGEST model, where honest nodes use only \\emph{small-sized} messages (unlike the first algorithm). See Section~\\ref{sec:model} for more details about the network model. It tolerates up to $B(n) = n^{\\frac{1}{2} - \\xi}$ adversarially placed Byzantine nodes, where $\\xi$ is any arbitrarily small (but) fixed positive constant. This algorithm takes $O(B(n)\\log^2{n})$ rounds (hence $o(\\sqrt{n})$ rounds for $B(n) = n^{\\frac{1}{2} - \\xi}$) and outputs a constant factor estimate of $\\log{n}$ with probability at least $1-o(1)$. The said estimate is known to at least $ (1 - \\beta)n$ nodes for any arbitrarily small positive constant $\\beta$.\n\n\nWe note that similar to our result, many prior Byzantine protocols \\cite{Dwork_1988, Berman_1993_MST, Berman_1993_DC, Upfal_1994} in bounded-degree networks take a polynomial number of rounds in the \\textsf{Congest} model (where honest nodes are limited to small-sized messages). However, all these protocols assume knowledge of $n$, and in certain cases, even the entire network topology. A notable exception is the protocol of King et al.\\ \\cite{King_2006_FOCS} that takes a polylogarithmic number of rounds in the \\textsf{Congest} model, but this protocol also requires knowledge of the entire network topology (including the value of $n$). On the other hand, these protocols tolerate a substantially larger number of Byzantine nodes, even up to $\\Theta(n)$ Byzantine nodes. 
It is unknown whether one can achieve the same level of fault-tolerance for Byzantine counting.\n\n\nTo complement our algorithms, we also present an impossibility result that shows that it is impossible to estimate the network size (or the logarithm of it) with any reasonable approximation and with any non-trivial probability of success if the network does not have sufficient vertex expansion. This shows that the assumption of the expansion property of the network is \\emph{necessary} for solving Byzantine counting.\n\n\nBoth our algorithms are the first such algorithms that solve Byzantine counting in sparse, bounded degree networks under very general assumptions: they are fully local and need no global knowledge. Our algorithms can serve as a building block for implementing other non-trivial distributed computational tasks in Byzantine networks such as agreement and leader election where the network size (or its estimate) is not known a priori.\n\n\n\n\\paragraph{Applying our counting protocols.} To illustrate, we consider the {\\em Byzantine agreement} protocol of \\cite{Augustine_2013_PODC} that applies to sparse bounded-degree expander networks. It applies even when the network is dynamic with adversarial churn, but the network size is assumed to be stable. This protocol uses two main ideas to solve binary agreement, where the requirement is that most good nodes should decide on a common value ($0$ or $1$) which should be an input value of a good node: (1) {\\em random walks} to sample nodes uniformly at random from the network and (2) {\\em a majority protocol} to converge to the correct value. Both ideas require knowledge of $\\log{n}$, in particular, a constant factor upper bound of $\\log{n}$. For random walks, $O(\\log{n})$ is the {\\em mixing time}, which is needed for walks to converge to the stationary distribution in a bounded degree expander; nodes need to know an upper bound on the mixing time to ensure that only sufficiently ``mixed'' random walks are used for sampling. The majority protocol uses the following simple idea: In one iteration, each node samples two random nodes and updates its value to the majority value among the three values: its own value and the two other values. It is shown that $O(\\log{n})$ iterations are needed to converge to the almost-everywhere agreement with high probability, provided the number of Byzantine nodes is bounded by $O(\\sqrt{n})$.\n\n\nIt is important to note that the above protocol assumes knowledge of $c\\log{n}$, for some constant $c > 1$. However, using the Byzantine counting protocol of this paper as a preprocessing step, the above assumption can be removed. The counting protocol ensures that most honest nodes have a constant factor estimate of $\\log{n}$ (this constant is fixed in the analysis). Although the counting protocol does not guarantee that all (or most) honest nodes have the \\emph{same} estimate of $\\log n$, it is easy to ensure that most honest nodes have an estimate that is some constant factor larger than $\\log{n}$. This estimate suffices to run the Byzantine agreement protocol of \\cite{Augustine_2013_PODC}.\n\\section{Preliminaries}\n\n\n\n\n\nWe use the notation $\\mathcal{B}_G(u,i)$ to refer to the inclusive $i$-hop neighborhood of node $u$ in graph $G$ and we omit $G$ when it is clear from the context. 
For a set of nodes $S$, we define $\\mathcal{B}_G(S,i) = \\bigcup_{u \\in S}\\mathcal{B}_G(u,i)$.\n\n\nBoth of our algorithms make use of a structural result that shows that Byzantine nodes have a somewhat limited impact on \\emph{most} good nodes in expander graphs.\n\n\\begin{lemma} \\label{lem:goodNodes}\n Consider an $n$-node graph $G = (V, E)$ with maximum degree $\\Delta = O(1)$ and vertex expansion $\\alpha>0$. Let $\\textsf{Byz}$, an arbitrary subset of $V$, denote the set of Byzantine nodes with the restriction that $|\\textsf{Byz}| \\leq n^{1-\\gamma}$, where $\\gamma$ is any arbitrarily small (but fixed) positive constant. Then, for any $o(n)$-sized $F \\subset V$, there exists a set $\\textsf{Good} \\subseteq V\\setminus F$ of good nodes such that\n \\begin{align}\n |\\textsf{Good}| \\ge n - 2|F| - o(n). \\label{eq:goodsize}\n \\end{align}\n Moreover, for each $u \\in \\textsf{Good}$, the following hold:\n \\begin{enumerate}\n \\item $\\mathcal{B}(u,\\lfloor\\tfrac{\\gamma}{2}\\log_\\Delta n\\rfloor)$ does not contain any Byzantine nodes.\n\n \\item Let $H$ be the subgraph induced by nodes in $\\textsf{Good}$. Then, for any constant $c>0$ such that $|\\mathcal{B}_H(u,c\\log n)| \\le \\tfrac{|\\textsf{Good}|}{2}$, it holds that every vertex subset $S \\subseteq \\mathcal{B}_H(u,c\\log n)$ has a vertex expansion of $\\ge \\alpha'$ in graph $H$, for any fixed constant $\\alpha' < \\alpha$.\n \\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n Consider the set $\\textsf{Byz}$ of Byzantine nodes. We first instantiate Lemma \\ref{lem:culling} by removing the set $\\textsf{Byz} \\cup F$ from $G$, and thus obtain a connected subgraph $H$ of size $\\ge n - o(n)$. By Lemma \\ref{lem:culling}, $H$ contains at least\n \\begin{center}\n $n - (|F| + |\\textsf{Byz}|)\\left( 1 + \\frac{1}{\\phi(1-c')}\\right) = n - |F| \\left( 1 + \\frac{1}{\\phi(1-c')} \\right) - o(n)$\n \\end{center}\n good nodes. Also, every one of its subsets of size $\\geq \\frac{|H|}{2}$ has a vertex expansion of at least $\\alpha'$, for any constants $\\alpha' < \\alpha$ and $c' < 1$. Choosing $c' = 1 - \\tfrac{1}{\\phi}$ implies that $H$ contains $n - 2|F| - o(n)$ nodes, as required. Assuming a maximum degree of $\\Delta$, we get $|\\mathcal{B}(\\textsf{Byz},j)| \\le |\\textsf{Byz}|\\Delta^j$, for any $j\\ge 0$.\n\n\n We observe that\n \\begin{center}\n $\\big|\\mathcal{B}(\\textsf{Byz},\\lfloor\\tfrac{\\gamma}{2}\\log_\\Delta n\\rfloor)\\big| \\le |\\textsf{Byz}| \\cdot \\Delta^{(\\gamma\/2)\\log_\\Delta n} \\le n^{1 - \\gamma\/2}$.\n \\end{center}\n\n\n It follows that the set $\\textsf{Good} = V(H) \\setminus \\mathcal{B}(\\textsf{Byz},\\tfrac{\\gamma}{2}\\log_\\Delta n)$ satisfies (1) and (2).\n\\end{proof}\n\\subsection{The ``locally tree-like'' property of an $H(n,d)$ random graph} \\label{sec:treelike}\n\n\n\n\n\nWe refer to \\cite{Chatterjee_2021_arXiv} for a detailed exposition of the $H(n, d)$ random graph model and its various properties. For the sake of completeness, we merely state the main definitions and lemmas needed here. The ``locally tree-like'' property of an $H(n,d)$ random graph says that for most nodes $w$, the subgraph induced by $B(w,r)$ up to a certain radius $r$ looks like a tree. More specifically, let $G$ be an $H(n,d)$ random graph and $w$ be any node in $G$. Consider the subgraph induced by $B(w,r)$ for $r = \\frac{\\log{n}}{10\\log{d}}$. Let $u$ be any node in $Bd(w,j)$, $1 \\leq j < r$. 
$u$ is said to be \\emph{typical} if $u$ has only one neighbor in $Bd(w,j-1)$ and $(d-1)$-neighbors in $Bd(w,j+1)$; otherwise it is called \\emph{atypical}.\n\n\n\\begin{definition}[Locally Tree-Like Property] \\label{defn-locally-tree-like-node-preliminaries}\n We call a node $w$ \\emph{locally tree-like} if no node in $B(w,r)$ is atypical. In other words, $w$ is locally tree-like if the subgraph induced by $B(w,r)$ is a $(d-1)$-ary tree.\n\\end{definition}\n\n\nUsing the properties of the $H(n, d)$ random graph model and standard concentration bounds, it can be shown that most nodes in $G$ are locally tree-like:\n\\begin{lemma} \\label{lemma-most-nodes-are-locally-tree-like-preliminaries}\n In an $H(n,d)$ random graph, with high probability, at least $n - O(n^{0.8})$ nodes are locally tree-like.\n\\end{lemma}\n\n\nThe proof of this lemma as well as a more detailed discussion of the $H(n, d)$ random graph model can be found in the appendix of \\cite{Chatterjee_2021_arXiv}.\n\n\n\\subsection{Technical Challenges and Drawbacks of Previous Approaches} \\label{sec:technical}\n\n\n\n\n\nThe main challenge is designing and analyzing distributed algorithms in the presence of Byzantine nodes in networks where the honest nodes have only local knowledge, i.e., knowledge of their immediate neighborhood. For example, in a constant degree regular network, a node's local view does not yield any information on the network size. It is possible to solve the counting problem exactly in networks \\emph{without} Byzantine nodes by simply building a spanning tree and converge-casting the nodes' counts to the root, which in turn can compute the total number of nodes in the network. A more robust and alternate way that works also in the case of \\emph{anonymous} networks is the technique of {\\em support estimation} \\cite{Augustine_2012, Augustine_2016} which uses \\emph{exponential} distribution. Alternatively, one can use a geometric distribution (see e.g., \\cite{Kutten_2015, Pandurangan_2019_Book, Newport_2018}) to accurately estimate the network size.\n\n\nConsider the following simple protocol for estimating the network size that uses the geometric distribution. Each node $u$ flips an unbiased coin until the outcome is heads; let $X_u$ denote the random variable that denotes the number of times that $u$ needs to flip its coin. Then, nodes exchange their respective values of $X_u$ whereas each node only forwards the highest value of $X_u$ (once) that it has seen so far. We observe that $X_u$ is geometrically distributed and denote its global maximum by $\\bar{X}$; it can be shown that $\\bar{X} = \\Theta(\\log{n})$ with high probability and hence can be used to estimate $\\log{n}$.\n\n\nThe geometric distribution protocol fails when even just one Byzantine node is present. Byzantine nodes can fake the maximum value or can stop the correct maximum value from spreading and hence can violate any desired approximation guarantee. The work of \\cite{Chatterjee_2019} successfully adapts the geometric distribution to work for their purpose. However, their work \\cite{Chatterjee_2019} assumes additional structural properties of the network --- they assume ``small-world'' networks, i.e., networks with constant expansion \\emph{and} large clustering coefficient. The latter property implies that for every node, many of its neighbors are well-connected among themselves. The protocol of \\cite{Chatterjee_2019} exploits this fact to detect fake values sent by Byzantine nodes. 
This protocol does not work for graphs that {\\em only} have the expander property (which as we show in the impossibility result is needed to estimate the network size within a non-trivial factor). Hence a new approach is needed as shown in this paper.\n\n\nThe work of \\cite{Chatterjee_2019} also assumes that the Byzantine nodes are \\emph{randomly} distributed in the network. This assumption coupled with the fact that their number is only $O(n^{1-\\gamma})$ (where $\\gamma$ is any arbitrarily small, but fixed, positive constant), results in (with high probability) every honest node having a significant number of honest neighbors (the number of neighbors depends on $\\gamma$). The algorithm of \\cite{Chatterjee_2019} \\emph{fails} to work for expander networks with arbitrary or adversarial Byzantine node distribution, which is typically assumed in previous works on Byzantine protocols \\cite{Dwork_1988, Upfal_1994, King_2006_FOCS, Augustine_2012, Augustine_2013_PODC, Augustine_2015_DISC}.\n\n\nPrior localized techniques that have been used successfully for solving other problems such as Byzantine agreement and leader election such as random walks and majority agreement (e.g., \\cite{Augustine_2013_PODC, Augustine_2015_DISC}) do not imply efficient algorithms for Byzantine counting. For instance, random walk-based techniques crucially exploit a uniform sampling of tokens (generated by nodes) after $\\Theta(\\text{mixing time})$ number of steps. However, the main difficulty in this approach is that the mixing time is unknown (since the network size is unknown) --- and hence it is unclear a priori how many random walk steps the tokens should take. Similar approaches based on the return time of random walks fail due to long random walks having a high chance of encountering a Byzantine node.\n\n\nOne can also use ``birthday paradox'' ideas to try to estimate $n$, e.g., as in the work of \\cite{Ganesh_2007} in a non-Byzantine setting. However it fails too in the Byzantine case.\n\n\nWe note that one can possibly solve Byzantine counting if one can solve Byzantine leader election, as observed in \\cite{Chatterjee_2019}, however, all known algorithms for Byzantine leader election (or agreement) {\\em assume a priori knowledge (or at least a good estimate) of the network size}. Hence we require a new protocol that solves Byzantine counting from ``scratch''. In our network model, where most nodes, with high probability, see (essentially) the same local topological structure (and constant degree) even for a reasonably large neighborhood radius (see Lemma \\ref{lemma-most-nodes-are-locally-tree-like-preliminaries}), it is difficult for nodes to break symmetry or gain a priori knowledge of $n$.\n\n\nWe point out that with constant probability, in our network model, due to the property of the $d$-regular random graph, an expected constant number of nodes might have multi-edges --- this can potentially be used to break ties; however, this approach \\emph{fails} with constant probability.\n\n\nAnother approach is to try to estimate the diameter of the network, which, being $\\Theta(\\log{n})$ for sparse expanders, can be used to deduce an approximation of the network size. 
Assuming that there exists a leader in the network, one way to do this is for the leader to initiate the flooding of a message and it can be shown that a large fraction of nodes (say a $(1 - \\epsilon)$-fraction, for some small $\\epsilon > 0$) can estimate the diameter by recording the time when they see the first token, since we assume a synchronous network. However, this method fails since it is not clear, how to break symmetry initially by choosing a leader --- this by itself appears to be a hard problem in the Byzantine setting without knowledge of $n$ (or an estimate of $\\log{n}$).","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nDetecting changes in a given sequence of data is a problem of critical importance in a variety of disparate fields and has been studied extensively in the statistics and econometrics literature for the past 40+ years. Recent applications include finding changes in terrorism-related online content \\citep{theo}, intrusion detection for cloud computing security \\citep{aldribi:traore:etal:2020}, and monitoring emergency floods through the use of social media data \\citep{shoyama:cui:etal:2021}, among many others. The general problem of change point detection may be considered from a variety of viewpoints; for instance, it may be considered in either ``online'' (sequential) and ``offline'' (retrospective) settings, under various types of distributional assumptions, or under specific assumptions on the type of change points themselves. See, for example, \\citet{horvath:rice:2014} for a survey on some traditional approaches and some of their extensions. \n\n\nThe importance of traditional univariate and multivariate contexts notwithstanding, it is increasingly common in contemporary applications to encounter \\textit{high-dimensional} data whose dimension $d$ may be comparable or even substantially larger than the number of observations $N$. Popular examples include applications in genomics \\citep{amaratunga:cabrera:2018}, or in the analysis of social media data \\citep{gole:tidke:2015}, where $d$ can be up to several orders of magnitude larger than $N$. However many classical inferential methods provide statistical guarantees only in ``fixed-$d$\" large-sample asymptotic settings that implicitly require the sample size $N$ to overwhelm the dimension $d$, rendering several traditional approaches to change-point detection unsuitable for modern applications in which $N$ and $d$ are both large. Accordingly, there has been a surge of research activity in recent years concerning methodology and theory for change-point detection in the asymptotic setting most relevant for applications to high dimensional data, i.e., where both $N,d\\to\\infty$ in some fashion; see \\citet{liu:zhang:etal:2022} for a survey regarding new developments. Commonly, asymptotic results in this context require technical restrictions on the size of $d$ relative to $N$, ranging from more stringent conditions such as $d$ having logarithmic-type or polynomial growth in $N$ \\citep{jirak:2012}, to milder conditions that permit $d$ to have possibly exponential growth in $N$ \\citep{liu:zhou:etal:2020}. 
For maximal flexibility in practice, it is desirable to have methods that require as little restriction as possible on the rate at which $d$ grows relative to $N$.\n\nIn this work, we are concerned with change-point detection problem in the ``offline'' setting in which a given sequence of historical data is analyzed for the presence of changes.\nSpecifically, we are concerned with the following: let ${\\bf X}_1, {\\bf X}_2, \\ldots, {\\bf X}_N$ be random vectors in $R^d$ with distribution functions $F_1({\\bf x}), F_2({\\bf x}), \\ldots, F_N({\\bf x})$. We aim to test the null hypothesis\n$$\nH_0:\\;\\;F_1({\\bf x}) = F_2({\\bf x}) = \\ldots =F_N({\\bf x})\\quad \\mbox{for all}\\;{\\bf x}\\in R^d\n$$\nagainst the alternative\n\\begin{align*}\nH_A:\\;\\;\\;&\\mbox{there are}\\;10$ for all $0<\\delta<1\/2$\\\\\n(ii) $w(t)$ is non decreasing in a neighborhood of 0\\\\\n(iii) $w(t)$ is non increasing in a neighborhood of 1.\n\\end{assumption}\n\nThe use of weight functions allows for increased statistical power against certain alternatives, such as when a change point is near the boundary of the data; see, e.g., \\citet{showe}.\nWe use the integral functional\n$$\nI(w,c)=\\int_0^1 \\frac{1}{t(1-t)}\\exp\\left( -\\frac{cw^2(t)}{t(1-t)} \\right)dt\n$$\nto determine necessary and sufficient conditions to obtain a finite limit for the weighted supremum functionals of $V_{N,d}(t)$ and $Z_{N,d}(t)$. The functional $I(w,c)$ appeared in \\citet{ito} to characterize the upper and lower classes of a Wiener process. \\citet{csh-1} and \\citet{showe} provide several results on the theory and applications of $I(w,c)$ in nonparametric statistics.\n\nWe also require that\n\\begin{assumption}\\label{as2} \\;${\\bf X}_1, {\\bf X}_2, \\ldots, {\\bf X}_N$ are independent random vectors.\n\\end{assumption}\nAssumptions \\ref{as1}-\\ref{as2} are used throughout the paper. We now separately consider two distinct scenarios concerning possible types of dependence in the coordinates of the $\\mathbf X_i$. In the first case, the coordinates are assumed to be weakly dependent, which is expressed through conditions on the moments of the coordinate differences $|X_{1,j}-X_{2,j}|$. In the second case, the coordinates are strongly dependent in the sense that they are discretely sampled values of a continuous random function. \n\nWe first turn to the case of weakly dependent coordinates. We assume\n\n\n\\begin{assumption}\\label{as3} With some $\\alpha>\\max\\{p, 10-4\/p, 12-8\/p\\}$, and some $C>0$ independent of the dimension $d$, %\n$$\n\\max_{1\\leq j \\leq d}E|X_{1,j}|^\\alpha\\leq C. %\n$$\n\\end{assumption}\nIn the important cases of $p=1$ and $p=2$, we assume that $\\alpha>6$ and 8, respectively. For the next statements, we define the functions\n\\begin{equation}\\label{def:gj}\ng_j(x_j)=E|X_{1,j}-x_j|^p,\\;\\;1\\leq j \\leq d,\n\\end{equation}\nwhere $x_1,\\ldots,x_d\\in R$. 
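Before turning to the dependence conditions, we illustrate the criterion $I(w,c)<\\infty$ on the power weights $w(t)=(t(1-t))^{\\kappa}$ used later in our simulations. Here $w^2(t)/(t(1-t))=(t(1-t))^{2\\kappa-1}$, so for $\\kappa<1/2$ the exponential factor drives the integrand of $I(w,c)$ to zero at the endpoints and the integral is finite for every $c>0$, while for $\\kappa=1/2$ the integrand equals $e^{-c}/(t(1-t))$ and $I(w,c)=\\infty$. The following numerical sketch (purely illustrative and not part of any proof) makes this visible by integrating over $[\\varepsilon,1-\\varepsilon]$ for shrinking $\\varepsilon$.

\\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def I_truncated(kappa, c, eps):
    """Integral of exp(-c w(t)^2/(t(1-t)))/(t(1-t)) over [eps, 1-eps]
    for the power weight w(t) = (t(1-t))**kappa."""
    def integrand(t):
        u = t * (1.0 - t)
        return np.exp(-c * u ** (2.0 * kappa - 1.0)) / u
    value, _ = quad(integrand, eps, 1.0 - eps, limit=200)
    return value

for eps in [1e-2, 1e-4, 1e-6, 1e-8]:
    print(eps,
          round(I_truncated(0.25, 1.0, eps), 4),  # stabilizes: I(w,c) is finite
          round(I_truncated(0.50, 1.0, eps), 4))  # keeps growing: I(w,c) is infinite
\\end{verbatim}

The truncated integrals stabilize for $\\kappa=0.25$, whereas for $\\kappa=0.5$ they grow like $2e^{-1}\\log(1/\\varepsilon)$.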
\nThe next conditions specifies the meaning of ``weakly dependent\" coordinates:\n\\begin{assumption}\\label{as4}\\;With some $\\alpha>max\\{p,10-4\/p, 12-8\/p\\}$, and some constant $C>0$ independent of $d$ and $N$,\n\\begin{equation}\\label{as4-1}\nE\\bigg|\\sum_{j=1}^d\\big[\\left| X_{1,j}-X_{2,j} \\right|^p-E\\left| X_{1,j}-X_{2,j} \\right|^p\\big]\\bigg|^\\alpha\\leq Cd^{\\alpha\/2}\n\\end{equation}\nand\n\\begin{equation}\\label{as4-2}\nE\\bigg|\\sum_{j=1}^d\\big[ g_j(X_{1,j}) -E g_j(X_{1,j}) \\big]\\bigg|^\\alpha\\leq Cd^{\\alpha\/2}.\n\\end{equation}\nMoreover, the limit\n\\begin{equation}\\label{as4-3}\n\\tau^2=\\lim_{d\\to \\infty}\\frac{1}{d}E\\bigg( \\sum_{j=1}^d [ g_j(X_{1,j})-Eg_j(X_{1,j})] \\bigg)^2\n\\end{equation}\nexists.\n\\end{assumption}\nNote the distribution of ${\\bf X}_1$ depends on $d$ and is also allowed to depend on $N$. In Section \\ref{app-ex} we provide examples that satisfy Assumption \\ref{as4}.\\\\\n\nNext we define the limiting variance of $U_{N,d}(t)$ and $Z_{N,d}(t)$ for the case of weakly dependent coordinates. Let\n$$\n\\lim_{d\\to\\infty}\\frac{1}{d}\\sum_{j=1}^dEg_j(X_{1,j})=\\mathcal a(p),\n$$\nand\n\\begin{equation}\\label{sidef}\n\\sigma^2=\\left(\\frac{2}{p}\\mathcal a^{-1+1\/p}(p)\\right)^2\\tau^2.\n\\end{equation}\nNote $\\mathcal a(p)$ and $ \\sigma^2$ are well-defined under Assumption \\ref{as4}.\nSince we will normalize with $\\sigma^2$, it is natural to require\n\\begin{assumption}\\label{as5}\\;$\\sigma^2>0$.\n\\end{assumption}\n\n\n\nTheorems \\ref{th1} and \\ref{th2}, stated next, are our main results in the context of weakly dependent coordinates. Their statements provide asymptotic tests based on weighted functionals of $V_{N,d}$ and $Z_{N,d}$, whose limit behavior may differ depending on the chosen class of weights. In our proofs we show that $V_{N,d}(t)\/\\sigma$ and $Z_{N,d}(t)\/\\sigma$ converge weakly in the space ${\\mathcal D}[0,1]$ of c\\`adl\\`ag functions on [0,1] to the standard Brownian bridge $\\{B(t), 0\\leq t \\leq 1\\}$ and a Gaussian process $\\{\\Gamma(t), 0\\leq t \\leq 1\\}$, respectively, where\n\\begin{equation}\\label{Gade}\n\\Gamma(t)=(1-t) W(t)+ t W(1-t)-2t(1-t)W(1),\\quad 0\\leq t \\leq 1%\n\\end{equation}\nand $\\{W(t), 0\\leq t \\leq 1\\}$ denotes a standard Wiener process. %\n\n\\begin{theorem}\\label{th1} Assume that $H_0$ holds and Assumptions \\ref{as1}--\\ref{as5} are satisfied. If $I(w,c)<\\infty$ is finite for some $c>0$, then\n\\begin{align}\\label{th1-1}\n\\frac{1}{\\sigma}\\sup_{00$, we prove Theorem \\ref{th1} under optimal conditions (see \\citet{csh-1}).\n\nIt is popular to self-normalize maximally selected statistics, i.e., to use a weight function that is proportional to the standard deviation of the limit, at least in a neighborhood of 0 and 1. In our case the weight function of self normalization is $(t(1-t))^{1\/2}$, and Theorem \\ref{th1} cannot be applied since $I((t(1-t))^{1\/2}, c)=\\infty$ for all $c>0$. Theorem \\ref{th2}, stated next, is a nonstandard Darling--Erd\\H{o}s--type result and is the counterpart to Theorem \\ref{th1} for the case of self-normalization. \nBefore giving its statement, we define some auxiliary quantities based on the projections of U-statistics (e.g., \\citet{lee:1990}). 
\n For ${\\bf x}=(x_1, x_2, \\ldots, x_d)$, let\n$$\nH({\\bf x})=E\\left(\\frac{1}{d}\\sum_{\\ell=1}^d|X_{1,\\ell}-x_\\ell|^p\\right)^{1\/p},\\quad \\theta=E [H({\\bf X}_1)].%\n$$\nand %\n\\begin{equation}\\label{zedef}\n\\zeta_i=H({\\bf X}_i)-\\theta, \\;\\;\\;1\\leq i \\leq N.\n\\end{equation}\ni.e., $\\zeta_i$ are centered projections of the normalized $L_p$ norms of the differences $\\mathbf X_i-\\mathbf X_j$ onto the linear space of all measurable functions of $\\mathbf X_i$.\n\n\\begin{theorem}\\label{th2} Let\n$$\n\\mathcal s^2(d)=E\\zeta_1^2,\n$$\nand define\n$$\n a(x)=(2\\log x)^{1\/2}\\quad\\mbox{and}\\quad b(x)=2\\log x +\\frac{1}{2}\\log \\log x-\\frac{1}{2} \\log \\pi.\n$$If $H_0$ holds and Assumptions \\ref{as2}--\\ref{as5} are satisfied, then\n\\begin{align}\\label{th2-1}\n\\lim_{N\\to\\infty}P\\Biggl\\{a(\\log N)&\\max_{22$\n$$\nE\\left|\\frac{1}{d}\\sum_{j=1}^d |X_{1,j}|^p-\\int_0^1|Y_1(t)|^pdt\\right|^\\nu\\to 0.\n$$\n\\end{assumption}\nThe projections $H(\\mathbf x)$ in \\eqref{zedef} can also be approximated with functionals of $Y_i(t)$. Let\n$$\n\\mathcal H(g)=E\\left(\\int_0^1 \\left| Y_1(t)-g(t)\\right|^pdt\\right)^{1\/p}.\n$$\nNext we define the asymptotic variance in the context of strongly dependent coordinates. Let\n$$\n\\gamma^2=\\mbox{\\rm var}\\left( \\mathcal H(Y_1)\\right).\n$$\nNote $\\gamma^2<\\infty$ under Assumption \\ref{as6}(iii). Since we normalize with $\\gamma^2$, we naturally require\n\n\\begin{assumption}\\label{as7}$ \\gamma^2>0.$\n\\end{assumption}\n\n\n\nTheorems \\ref{th3} and \\ref{th4}, stated next, are our main results in the context of the strongly dependent coordinate setting of Assumption \\ref{as6}.\n\n\\begin{theorem}\\label{th3} We assume that $H_0$ and Assumptions \\ref{as6} and \\ref{as7} are satisfied. If $I(w,c)<\\infty$ is finite for some $c>0$, then\n\\begin{align}\\label{th3-1}\n\\frac{1}{2\\gamma}d^{-1\/2}\\sup_{00$, e.g., $|X_{1,i}-X_{k_1+1,i}|>\\delta$ for $i=1,\\ldots,s$, then $|\\mu_1-\\mu_{1,2}|\\geq s^{1\/p}\\delta$, and \\eqref{consis1} implies consistency holds if\n$$\n\\delta (s\/d)^{1\/p} (Nd)^{1\/2}\\to \\infty\n$$\nwhich suggests that larger values of $p$ may provide better power for sparse changes, i.e., when the number of changing coordinates $s$ is small relative to $d$. \n\nNote however, when $\\mu_1=\\mu_2$, the change point location estimator $\\widehat \\eta_{N,d}$ may not be consistent for $\\eta$ even when the above condition is met. \n\n\n\n\n\\medskip\n\\section{Examples}\\label{app-ex}\nWe consider three examples when the conditions of our results are satisfied. We start with the case when the coordinates of ${\\bf X}_1$ are independent.\n\n\\begin{example}\\label{exe1} {\\rm We assume that $X_{1,1}, X_{1,2}, \\ldots, X_{1,d}$ are independent, and that \\eqref{as4-3} holds. 
Using Rosenthal's inequality (cf.\\ Petrov, 1995,\\ p.\\ 59) we get for all $\\beta\\geq 2$\n\\begin{align*}\nE\\left|\\sum_{j=1}^d [|X_{1,j}-X_{2,j}|^p -E|X_{1,j}-X_{2,j}|^p]\\right|^\\beta\n&\\leq C\\left(\\sum_{j=1}^d E\\left||X_{1,j}-X_{2,j}|^p -E|X_{1,j}-X_{2,j}|^p\\right|^\\beta \\right.\\\\\n&\\hspace{.7cm}\\left.+\n\\left(\\sum_{j=1}^dE\\left(|X_{1,j}-X_{2,j}|^p -E|X_{1,j}-X_{2,j}|^p\\right)^2\\right)^{\\beta\/2}\\right).\n\\end{align*}\nSimilarly,\n\\begin{align*}\nE\\bigg|\\sum_{j=1}^d\\big[ g_j(X_{1,j}) -E g_j(X_{1,j}) \\big]\\bigg|^\\beta \\leq C\\Bigg( \\sum_{j=1}^d E\\big| g_j(X_{1,j}) -E g_j(X_{1,j})\\big|^\\beta + \\bigg(\\sum_{j=1}^d\\var\\big(g_j(X_{1,j}) \\big) \\bigg)^{\\beta\/2}\\Bigg).\n\\end{align*}\nNote for all $\\beta\\geq 1$, $\\E |g_j(X_{1,j})|^\\beta \\leq C \\E |X_{1,j}|^{p\\beta}$. Hence conditions \\eqref{as4-1} and \\eqref{as4-2} in Assumption \\ref{as4} are satisfied if\n$$\n\\limsup_{d\\to \\infty}\\frac{1}{d}\\sum_{j=1}^d \\E |X_{1,j}|^{2p}<\\infty\n$$\nand\n$$\n\\limsup_{d\\to \\infty}\\frac{1}{d^{\\alpha\/2}}\\sum_{j=1}^d\\left[E|X_{1,j}|^{p\\alpha} +(E|X_{1,j}|^p)^\\alpha\\right]<\\infty.\n$$\n}\n\\end{example}\nNext we extend Example \\ref{exe1} to dependent coordinates.\n\\begin{example}\\label{exe2}{\\rm\nLet\n$$\n{\\bf X}={\\bf A}{\\bf N},\n$$\nwhere the coordinates of ${\\bf N}=({\\mathcal N}_1, {\\mathcal N}_2, \\ldots, {\\mathcal N}_d)^\\T$ are independent and identically distributed with $E{\\mathcal N}_j=0$, $E{\\mathcal N}_j^2=1$ and\n\n\\begin{equation}\\label{exe2-1}\nE|{\\mathcal N}_j|^{2p\\alpha}\\leq c_1.%\n\\end{equation}\n We also assume the matrix \n${\\bf A}=\\{a_{k,\\ell}, 1\\leq k,\\ell \\leq d\\}$ satisfies\n\\begin{equation}\\label{exe2-2}\n|a_{k,\\ell}|\\leq c_2\\exp(-c_3|k-\\ell|),\n\\end{equation}\nwith some $0c_6\\log d}a_{j,\\ell}\\mathcal M_{j,\\ell},\n$$\nwhere $(\\mathcal M_{j,1},\\mathcal M_{j,2}, \\ldots, \\mathcal M_{j,d}), 1\\leq j \\leq d,$ are independent copies of ${\\bf M}$.\nBy definition, $\\bar{Z}_1, \\bar{Z}_2, \\ldots, \\bar{Z}_d$ are $c_6\\log d$--dependent random variables. Using the inequality $||x|^p-|y|^p|\\leq C_p(|x|^{p-1}+|y|^{p-1})|x-y|$, The decay of $a_{j,k}$ implies we can choose $c_6=c_6(\\alpha)$ large enough so that \n\\begin{equation}\\label{e:xbarj_minus_xj}\nE\\left|\\sum_{j=1}^d\\big|Z_j|^p-|\\bar{Z}_j\\big|^p\\right|^\\alpha\\leq c_7E\\Bigg(\\sum_{j=1}^d(|Z_j|^{p-1}+|\\bar{Z}_j|^{p-1})|Z_j-\\bar{Z}_j| \\Bigg)^\\alpha $$\n$$\n\\leq d^{\\alpha-1} c_7\\sum_{j=1}^d\\Big(E\\big(|Z_j|^{p-1}+|\\bar{Z}_j|^{p-1})\\big)^{2\\alpha}\\E |Z_j-\\bar{Z}_j|^{2\\alpha} \\Big)^{1\/2}\\leq c_8 d^{\\alpha\/2}.\n\\end{equation}\nThus, it suffices to establish \\eqref{exe2-22} for the variables $\\bar Z_j$ in place of $Z_j$. For simplicity, for each $d$ we set $\\bar Z_\\ell=0$ whenever $\\ell>d$. Now define $n_k= (k-1)\\lfloor (\\log d)^2\\rfloor$, $\\zeta_j=|\\bar Z_j|^p-E|\\bar Z_j|^p$, and observe Rosenthal's inequality implies $\\sup_{d\\geq 1}\\max_{1\\leq j \\leq d}\\E|\\zeta_j|^\\alpha<\\infty$. \nSo, let\n$$\nQ_{k,1}=\\sum_{\\ell=n_{k-1}+1}^{n_k}\\zeta_\\ell,\\;\\;\\;k=1, 3, \\ldots, k^*_1, \\qquad\nQ_{k,2}=\\sum_{\\ell=n_{k-1}+1}^{n_k}\\zeta_\\ell,\\;\\;\\;k=2,4,\\ldots, k^*_2,\n$$\nwhere $k_1^*,$ and $k_2^*$ are the smallest odd and even integers, respectively, such that $n_{k_i^*}\\geq d$. (Note the sums $Q_{k^*_1,1}$ and $Q_{k^*_2,2}$ respectively may contain fewer than $n_{k^*_1}-n_{k^*_1-1}$ and $n_{k^*_2}-n_{k^*_2-1}$ terms.) 
By construction, the variables $ Q_{k,1}, k=1,3,\\ldots, k^*_1$ are independent and similarly $ Q_{k,2}$, $k=2,4,\\ldots,k^*_2$ are independent. Thus, using Rosenthal's inequality again, we obtain\n\\begin{equation}\\label{e:ros_ex_2}\nE\\bigg|\\sum_{k=1,3,\\ldots,k^*_1} Q_{k,1} \\bigg|^\\alpha \\leq c_9\\left( \\sum_{k=1,3,\\ldots,k^*_1}\\E |Q_{k,1}|^\\alpha+ \\Big( \\sum_{k=1,3,\\ldots,k^*_1} \\E (Q_{k,1})^2 \\Big)^{\\alpha\/2} \\right)\n\\end{equation}\n and similarly for $Q_{k,2}$. We proceed to bound $E |Q_{k,1}|^\\alpha$ and $E |Q_{k,1}|^2$. For $E |Q_{k,1}|^\\alpha$, we have\n\\begin{equation}\\label{e:Q_k_alphabound}\n E |Q_{k,1}|^\\alpha \\leq (n_k-n_{k-1})^{\\alpha-1}\\sum_{\\ell=n_{k-1}+1}^{n_k} \\E |\\zeta_\\ell|^\\alpha \\leq c_{10}(n_k-n_{k-1})^\\alpha \\leq c_{10} (\\log d)^{2\\alpha}.\n\\end{equation}\nWe now turn to $E |Q_{k,1}|^2$. For each $j$, reexpress\n$$\n\\bar Z_j = \\mathbf A \\mathbf N^{(j)}\n$$\nwhere for each $j$, $\\mathbf N^{(j)}=(\\mathcal N^{(j)}_1,\\ldots,\\mathcal N^{(j)}_d)^\\T$ is a vector of iid variables; observe that $\\mathbf N^{(1)},\\ldots,\\mathbf N^{(d)}$ are generally dependent vectors. Now, for each $1\\leq \\ell,j\\leq d$, $j\\neq \\ell$, define\n$$\n\\bar Z_{j,\\ell} = \\sum_{i\\in(j-\\lfloor |j-\\ell|\/2\\rfloor,j+\\lfloor |j-\\ell|\/2\\rfloor]}a_{\\ell,i}\\mathcal N^{(j)}_i +\\sum_{i\\notin(j-\\lfloor |j-\\ell|\/2\\rfloor,j+\\lfloor |j-\\ell|\/2\\rfloor]} \\mathcal M^{(j)}_i\n$$\nwhere $\\{\\mathbf M^{(j)}=(\\mathcal M_1^{(j)},\\ldots,\\mathcal M_d^{(j)})^\\T,~1\\leq j\\leq d\\}$ are jointly independent vectors satisfying $\\mathbf M^{(j)}\\stackrel d = \\mathbf N^{(j)}$ for each $j=1,\\ldots,d$. By construction, for each pair $(j,\\ell)$, $j\\neq\\ell$, the variables $ \\bar Z_{\\ell,j}$ and $ \\bar Z_{\\ell,j}$ are independent, since the sets $\\big(\\ell-\\lfloor |j-\\ell|\/2\\rfloor,\\ell+\\lfloor |j-\\ell|\/2\\rfloor\\big]$ and $\\big(j-\\lfloor |j-\\ell|\/2\\rfloor,j+\\lfloor |j-\\ell|\/2\\rfloor\\big]$ contain no common integers. Further, if we define\n$$\\zeta_{j,\\ell}= |\\bar X_{j,\\ell}|^p -\\E |\\bar X_{j,\\ell}|^p%\n$$\nthe exponential decay of $a_{i,k}$ implies\n$$\n\\E|\\zeta_j-\\zeta_{j,\\ell}|^2 \\leq c_{11}\\exp(-c_{12}|j-\\ell|), \\quad j,\\ell\\geq 1,\n$$\nfor some constants $c_{11},c_{12}>0$ independent of $d$. \nTherefore, noting that $E |Q_{k,1}|^2 = \\sum_{\\ell=n_{k-1}+1}^{n_k}\\E \\zeta_\\ell^2 + 2\\sum_{n_{k-1}+1\\leq j<\\ell\\leq n_k}\\E\\zeta_\\ell\\zeta_j $, using $\\E \\zeta_{j,\\ell}\\zeta_{\\ell,j}=0$, we obtain\n\\begin{align*}\n\\Big|\\sum_{n_{k-1}+1\\leq j<\\ell\\leq n_k}\\E \\zeta_\\ell\\zeta_j \\Big| &= \\Big|\\sum_{n_{k-1}+1\\leq j<\\ell\\leq n_k}\\E \\big[(\\zeta_\\ell-\\zeta_{\\ell,j })\\zeta_j \\big] + \\E \\big[(\\zeta_j-\\zeta_{j,\\ell})\\zeta_{\\ell,j })\\big]\\Big|\\\\\n&\\leq \\sum_{n_{k-1}+1\\leq j<\\ell\\leq n_k}\\Big(\\big(\\E (\\zeta_\\ell-\\zeta_{\\ell,j })^2\\E\\zeta_j^2) \\big)^{1\/2} + \\big(\\E (\\zeta_j-\\zeta_{j,\\ell})^2\\E\\zeta_{\\ell,j }^2\\big)^{1\/2} \\Big)\\\\\n& \\leq c_{13} (n_{k}-n_{k+1})\n\\end{align*}\nThus, \n\\begin{equation}\\label{e:Q_k_varibound}\n\\E |Q_{k,1}|^2 \\leq c_{14} (n_{k}-n_{k+1})\n\\end{equation}\nCombining \\eqref{e:Q_k_alphabound} and \\eqref{e:Q_k_varibound} with \\eqref{e:ros_ex_2}, we obtain $E\\big|\\sum_{k=1,3,\\ldots,k^*_1} Q_{k,1} \\big|^\\alpha \\leq c_{15} d^{\\alpha\/2}$. The same arguments apply to $Q_{k,2}$, which establishes \\eqref{exe2-22}. 
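Although not needed for the argument, the scaling established in \\eqref{exe2-22} is easy to probe numerically in the case $\\alpha=2$, where it bounds ${\\rm var}\\big(\\sum_{j=1}^d[|Z_j|^p-E|Z_j|^p]\\big)$ by a constant multiple of $d$. The following Monte Carlo sketch (ours, with the arbitrary choices $a_{k,\\ell}=\\exp(-|k-\\ell|)$, standard normal innovations and $p=2$) shows the ratio of this variance to $d$ settling near a constant.

\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
p = 2.0

def centered_sums(d, reps=2000):
    """Monte Carlo draws of S_d = sum_j (|Z_j|^p - E|Z_j|^p) for Z = A N,
    with a_{k,l} = exp(-|k-l|) and i.i.d. standard normal innovations."""
    idx = np.arange(d)
    A = np.exp(-np.abs(idx[:, None] - idx[None, :]))
    Z = rng.standard_normal((reps, d)) @ A.T      # rows are independent copies of Z
    V = np.abs(Z) ** p
    return (V - V.mean(axis=0)).sum(axis=1)       # centering by the empirical mean

for d in [100, 200, 400, 800]:
    S = centered_sums(d)
    print(d, round(S.var() / d, 3))               # ratio roughly constant in d
\\end{verbatim}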
\n\nFor \\eqref{exe2-23}, we note\n$$\n|g_j(x)-g_j(y)|\\leq c_{16}(|x|^{p-1}+|y|^{p-1})|x-y|.\n$$\nThus, \\eqref{exe2-23} %\ncan be established using the same arguments leading to \\eqref{exe2-22} with minor adjustments}.\\\\\n\\end{example}\n\nOur last example covers the multinomial distribution which is useful in the analysis of categorical data.\n\n\n\\begin{example}\\label{exe3} {\\rm We assume that ${\\bf X}_1$ has a multinomial distribution, $(\\gamma_1\/d, \\gamma_2\/d, \\ldots, \\gamma_d\/d, Cd)$, where $C$ is a positive integer, $0{1}\/{d}$ and the remaining $d-\\lfloor s d\\rfloor$ components have probabilities $p_i(s,t)<{1}\/{d}$; if $s>t$, the opposite holds. This parameterization is adapted from that used in \\citet{wang:zou:etal:2018}.\\\\\n\n\\indent\n(ii) We used normal observations with mean $\\boldsymbol \\mu \\in R^d$ and covariance matrix $\\mathbf \\Sigma = \\{\\exp(-\\mathcal c|i-j|), 1\\leq i,j\\leq d\\}$ where $\\mathcal c>0$. %\n\nTo examine size, we provide the empirical sizes for multinomial model (i) when $s=1\/2$ and $t=1\/2, 1\/8$; in the normal model (ii) we take $\\boldsymbol \\mu =\\bf 0$ and $\\mathcal c=1$. For each model we consider $N=10, 25, 50, 100, 150, 200, 300,\\ldots,1000$ and the high-dimensional settings of $d=N$ and $d=1.25N$. The values of $\\kappa$ in \\eqref{e:weightfunction} are chosen as $\\kappa=0, .2, .4$ and $.45$. The results are displayed in Figure \\ref{fig:alpha05} which show acceptable empirical sizes with nominal significance levels $.05$ and $.01$. Each reported empirical test size is based on 5000 replications. We used the norms $p=1$ and $p=2$, but since performance in either case is essentially the same, we only report the results for $p=2$. The test tends to be less conservative for larger values of $\\kappa$.\n\\begin{center}\n\\begin{figure}[ht!]\n\\centering\n\\begin{minipage}{0.95\\linewidth}\n\\hspace{0.5cm} \\includegraphics[width=\\linewidth]{Null_05_temp.pdf}\n\\end{minipage}\n\\begin{minipage}{0.95\\linewidth}\n\\hspace{0.5cm} \\includegraphics[width=\\linewidth]{Null_01_temp.pdf}\n\\end{minipage}\n\\caption{\\label{fig:alpha05} Empirical rejection rates under the null hypothesis at the nominal levels $0.05$ (top figure) and $0.01$ (bottom figure) for settings (i) and (ii). In the multinomial setting (i), we consider $s={1}\/{2}$, $t={1}\/{8}, {1}\/{2}$, and $m=10d$; in the Gaussian setting (ii), we take $\\mathcal c=1$. } %\n\\end{figure}\n\\end{center}\n\nTo study power, we examine various changes for the settings (i) and (ii) described above. First, we consider the following types of changes in setting (i):\\\\\n\n\\indent\n(ia) We examine the effect of a change at time $\\lfloor N\/2\\rfloor$ with the weight function parameter $\\kappa=0.25$. The first $\\lfloor N\/2\\rfloor$ observations are multinomial with parameters $m=10d$ and ${\\bf { p}}(1\/2,1\/2)$; the remaining $N-\\lfloor N\/2\\rfloor$ observations are multinomial with parameters $m=10d$ and $\\mathbf p(s,t)$, for $s,t = .05, .1, \\ldots, .95$. (Recall that if $s=t$, then there is no change.) The results of this study are displayed in the first row of Figure \\ref{fig:power_mod1} with $d=50$, which displays good power at modest sample sizes. \\\\\n\\indent\n(ib) We examine the effect of the change point location and the weight parameter $\\kappa$. In this setting, the parameters change from ${\\bf { p}}(.5, .5)$ to ${\\bf { p}}(.3,.4)$ at time $\\lfloor N\\theta\\rfloor$; $m=10d$ throughout. 
The outcomes are reported in the second row of Figure \\ref{fig:power_mod1} with $d=50$ for the parameter choices $\\theta=.05, .1, \\ldots, .95$, and $\\kappa=0, .1,.2, .25,.3, .4$ and $.45$. The power is increasing with the sample size; if $\\theta$ is close to 0 or 1, the power is highest when $\\kappa$ is large. The test has its highest power when the change is toward middle of the data ($\\theta\\approx \\frac{1}{2}$).\\\\%Here, $\\mathbf X_1,\\ldots, X_{\\lfloor N\\theta\\rfloor} \\sim \\text{Multinomial}(m,\\mathbf p(0.5,0.5))$, $\\mathbf X_{\\lfloor N\\theta\\rfloor + 1},\\ldots, X_{N} \\sim \\text{Multinomial}(m,\\mathbf p (0.3,0.4))$ for each $(\\theta,\\kappa)\\in \\{1\/20,2\/20,\\dots,19\/20\\}\\times \\{0,.1,.2,.25,.3,.4,.45\\}$.\n\nThe upper panels of Figure \\ref{fig:power_mod1} show that even at the modest sample size of $N=25$ the test already has high power, which improves for values of $(s,t)$ further from the diagonal $s=t$. Note that when $s=t$ there is no change. In the lower panels of Figure \\ref{fig:power_mod1}, \n for $N=200,300$, if $\\theta$ is close to 0 or 1, a slight gain in power is achieved by increasing the weight function parameter $\\kappa$. In all cases, the test has highest power when $\\theta\\approx \\frac{1}{2}$. \\\\\n\n\n\n\\begin{center}\n\\begin{figure}[ht!]\n\\centering\n\\begin{minipage}{0.95\\linewidth}\n\\includegraphics[width=\\linewidth]{multinomialSW_temp.pdf}\n\\end{minipage}\\vspace{0.5cm}\n\\begin{minipage}{0.95\\linewidth}\n\\includegraphics[width=\\linewidth]{multinomialGCP_temp.pdf}\n\\end{minipage}\n\\caption{Empirical power in the multinomial setting (i). In the upper panels we have (ia) when $d=50$. From left to right, we have $N= 25, 100, 200$. In the lower panels we have (ib) with $d=50$. From left to right, we have $N=100,200, 300$. }\n\\label{fig:power_mod1}\\end{figure}\n\\end{center}\nSecond, we consider the following types of changes in setting (ii):\\\\\n\n\\indent\n(iia) We examine the effect of a change in the mean $\\boldsymbol \\mu$ at time $\\lfloor N\/2\\rfloor$ with the weight function parameter $\\kappa=0.25$. The first $\\lfloor N\/2\\rfloor$ observations have mean $\\boldsymbol \\mu={\\bf {0}}$; the remaining $N-\\lfloor N\/2 \\rfloor$ observations have mean $\\boldsymbol \\mu_A=\\boldsymbol \\mu(\\mathcal{a}, j)=(\\mu_{A,1}, \\mu_{A,2}, \\ldots, \\mu_{A,d})^\\T$, where\n$$\n\\mu_{A,i}(\\mathcal{a},j)=\\mathcal{a},\\;1\\leq i\\leq j\\;\\;\\;\\mbox{and}\\;\\;\\;\\mu_{A,i}(\\mathcal{a},j)=0,\\;j+1\\leq i\\leq d,\n$$\nwhere $\\mathcal{a}>0$. The covariance matrix $\\{\\exp(-|k-\\ell|), 1\\leq k,\\ell\\leq d\\}$ remains the same throughout. The outcome is reported in the first row of Figure \\ref{fig:power_mod2} when $d=50$, $\\mathcal{a}=.8, 1, 1.2$ and $j=2,3, \\ldots, 50$. The power increases substantially as $j$ increases or as $\\mathcal a$ increases. \\\\\n\n\n\\indent\n(iib) We examine the effect of a covariance change at time $\\lfloor N\/2\\rfloor$ using the weight function of \\eqref{e:weightfunction} with parameter $\\kappa=0.25$. The mean stays ${\\bf 0}$ throughout. 
The observations have covariance matrix ${\\bf I}={\\bf I}_{d\\times d}$ before the change and covariance $\\boldsymbol \\Sigma_{\\mathcal c, j}$ after the change, where\n$$\n\\boldsymbol \\Sigma_{\\mathcal c, j}(k,\\ell)=\\begin{cases}\n\\exp(-\\mathcal c |k-\\ell|)& ~1\\leq k, \\ell\\leq j, \\\\\n1 & k=\\ell > j,\\\\\n 0 & \\text{otherwise.}\n\\end{cases}\n$$\n The results of this study are collected in the second row of Figure \\ref{fig:power_mod2} with $d=50$ for the parameter choices\n$\\mathcal c=.1, .04, .008$ and $j=2, 3, \\ldots, 50$. The power is increasing as $\\mathcal c$ decreases since smaller values of $\\mathcal c$ correspond to increasingly correlated coordinates. The power also increases substantially with $j$.\\\\\n\n\n \n \\indent\n(iic) We examine the effect of the change point location $\\lfloor N\\theta \\rfloor$ and the weight parameter $\\kappa$ for a change in the covariance structure. The mean is $\\mathbf 0$ throughout. Before the change at time $\\lfloor N\\theta\\rfloor$ the covariance matrix is ${\\bf I}$, and after the change it is $\\boldsymbol \\Sigma_{\\mathcal c,j}$ as defined above. We report the results when $\\mathcal c=.008, j=30$ in the first row of Figure \\ref{fig:power_mod3} with $d=50$ for the values $\\theta=.05,.1,\\ldots,.95$ and $\\kappa=0, .1, .2, .25, .3, .4, .45$. When $\\theta$ is large, a majority of of the series is uncorrelated, and the power is highest for larger values of $\\kappa$. However, when $\\theta$ is small, only a small portion of the series is uncorrelated before the change point, and the power is generally low regardless of $\\kappa$.\\\\\n\n\\indent\n (iid) %\nFinally, we examine the effect of the change point location $\\lfloor N\\theta \\rfloor$ and weight parameter $\\kappa$ for a different type of covariance change. Specifically, before the change, the covariance is $\\boldsymbol \\Sigma_0=\\{\\exp(-|k-\\ell|), 1\\leq k,\\ell \\leq d\\}$; after the change, the covariance is $\\boldsymbol \\Sigma_{\\sigma^2, j}=\\mathbf V^\\T \\boldsymbol \\Sigma_0 \\mathbf V$, where the matrix $\\mathbf V=(v_{k\\ell})$ has entries $v_{kk}=\\sigma^2$, $1\\leq k \\leq j$, $v_{kk}=1$, $j+1\\leq k \\leq d$, and $v_{k\\ell}=0$ for $k\\neq \\ell$. The results with $\\sigma^2=1.3$, $j=30$ are summarized in the second row of Figure \\ref{fig:power_mod3} with $d=50$ for the values $\\theta=.05,.1,\\ldots,.95$ and $\\kappa=0, .1, .2, .25, .3, .4, .45$. The test has reasonably good power at the modest sample size of $N=80$. The power is highest for $\\theta \\approx \\frac{1}{2}$. When the change point is toward the boundary of the data, higher power is achieved by increasing $\\kappa$. \\ \\\\\n\n\\vspace{-0.5cm}\n\\begin{center}\n\\begin{figure}[ht!]\n\\centering\n\\begin{minipage}{0.95\\linewidth}\n\\includegraphics[width=\\linewidth]{meanChangeOnly_temp.pdf}\n\\end{minipage}\\vspace{0.5cm}\n\\begin{minipage}{0.95\\linewidth}\n\\includegraphics[width=\\linewidth]{decayCoefficientChange_temp.pdf}\n\\end{minipage}\n\\caption{Empirical power in the multivariate normal setting (ii) for a midpoint change. In the upper panels we have (iia) with $d=50$. From left to right, we have $\\mathcal a=0.8, 1.0, 1.2$. 
In the lower panels we have (iib) with $d=50$.}\n\\label{fig:power_mod2}\\end{figure}\n\\end{center}\n\n\n\\begin{figure}[ht!]\n\\centering\n\\begin{minipage}{0.95\\linewidth}\n\\includegraphics[width=\\linewidth]{tailSensitiveCOV_temp.pdf}\n\\end{minipage}\n\\begin{minipage}{0.95\\linewidth}\n\\includegraphics[width=\\linewidth]{tailSensitiveINFLATE_temp.pdf}\n\\end{minipage}\n\\caption{{Empirical power in multivariate normal setting (ii) for varying change--point location.}\nIn the upper panels we have setting (iic) with $d=50$; from left to right, $N=200,300, 400$. In the lower panels, setting (iid) with $d=50$; from left to right, $N=80,160, 320$. }\n\\label{fig:power_mod3}\n\\end{figure}\n\\vfill\n\n\\medskip\n\\section{Applications: mentions of U.S.\\ governors and frequency of social justice related keywords on Twitter }\\label{sec-appl}\n\\counterwithin{figure}{section}\n\\counterwithin{table}{section}\nWe illustrate our method through two applications, both of which involve Twitter data. The data was collected using the full--archive tweet counts endpoint in the Twitter Developer API\\footnote{\\url{https:\/\/developer.twitter.com\/en\/docs\/twitter-api\/tweets\/counts\/introduction}}. The API allows retrieval of the count by day of tweets matching any query from the complete history of public tweets. Section \\ref{s:governor} concerns data related to counting the number of tweets about U.S. Governors; Section \\ref{s:social_justice} concerns the frequency of occurrence of tweets related to social injustice keywords. Note that when using the full-archive tweet counts endpoint in the Twitter API to collect both datasets, the filters \\texttt{-is:retweet -is:reply -is:quote} were applied to remove retweets, replies and quote tweets respectively.\\\\\n\nIn all instances, we used the asymptotic test based on the statistic $V_{N,d}$ with $p=2$ described as in \\eqref{th1-101}. After a change is detected, we estimate the location of change with $\\hat{\\eta}_{N,d}$, the estimator of \\eqref{e:changepoint_location_estimator} with weight function $w(t)=(t(1-t))^\\kappa$ with $\\kappa=0,.2,.45$ and continue to estimate changes via binary segmentation until no further changes are detected.\n\n\\subsection{Daily mentions of U.S.\\ Governors on Twitter}\\label{s:governor}\nWe apply our proposed method to a dataset concerning daily counts of the number of tweets matching queries that reference any of the 50 U.S. governors from 1\/1\/21 to 12\/31\/21, resulting in a series of length and dimension $(N,d) = (365,50)$. Only governors holding office on the date 12\/31\/21 were included in this dataset; the full list of queried names is given in Table \\ref{t:govnames}.\n\\begin{center}\n\\begin{table}[h]\n\\begin{tabular}{| m{15cm} |}\\hline\\\n\\Centering U.S. Governors holding office on 12\/31\/21\n\\\\\n\\hline\\hline\n{\\small Greb Abbott, Charlie Baker, Andy Beshear, Kate Brown, Doug Burgum, John Carney, Roy Cooper, Spencer Cox, Ron DeSantis, Mike Dewine, Doug Ducey, Mike Dunleavy, John Bel Edwards, Tony Evers, Greg Gianforte, Mark Gordon, Michelle Grisham, Kathy Hochul, Larry Hogan, Eric Holcomb, Asa Hutchinson, David Ige, Jay Inslee, Kay Ivey, Jim Justice, Laura Kelly, Brian Kemp, Ned Lamont, Bill Lee, Brad Little, Dan McKee, Henry McMaster, Janet Mills, Phil Murphy, Gavin Newsom, Kristi Noem, Ralph Northam, Mike Parson, Jared Polis, J.B. 
Pritzker, Tate Reeves, Kim Reynolds, Pete Ricketts, Phil Scott, Steve Sisolak, Kevin Stitt, Chris Sununu, Tim Walz, Gretchen Whitmer, Tom Wolf} \\\\\\hline\n\\end{tabular}\n\\caption{\\label{t:govnames} Governor names used for query in Twitter data collection}\n\\end{table}\n\\end{center}\n\n\n\n An inspection of the heatmap of correlation coefficients for each of the $d=50$ components of this series suggests that weak dependence is plausible, revealing that a substantial proportion of governor tweet mentions have relatively small correlation coefficients. %\n\\begin{center}\n\\begin{figure*}[h!]\n\\begin{minipage}{0.65\\linewidth}\n\\includegraphics[width=\\linewidth]{pearsonAbsoluteCorrelationMatrix_temp.eps}\n\\end{minipage}\n\\caption{Governor dataset: heatmap of estimated correlation coefficients for each of the ${50 \\choose 2}$ pairs of components.} %\n\\end{figure*}\n\\end{center}\nTo accommodate variability in the total number of mentions among all governors, which range from roughly 1,000 to 20,000 mentions in a given day, a subset of $m=500$ observations on each day were randomly sampled without replacement, resulting in conditionally multivariate hypergeometric observations with parameters $m=500,\\mathbf r_i$ where the vector ${\\mathbf r}_i =(r_{1,i},\\ldots,r_{50,i})$ contains the observed number of daily mentions for each governor. Test results did not substantially change for other choices of $m\\leq 1000$. Each time a change point was detected, binary segmentation was conducted and tests were repeated in each subsegment until failure to reject the null hypothesis. Figure \\ref{f:gov_changepoints} contains detected change points for $p=2$ and $\\kappa= 0, .25, .45$ at level $0.1$.\n\n\nSeveral common dates are detected irrespective of the weight parameter $\\kappa$, though surprisingly the most change points were detected when $\\kappa=0.25$. Several detected changes apparently coincide with important dates or events in the U.S.\\ news cycle. 
For instance, the date 9\/11 is detected as a change point in all three cases, as is the date 7\/30\/2021, which coincides with a news cycle immediately following Florida Governor Ron Desantis' executive order on 7\/30\/2021 concerning masking.\\footnote{\\url{https:\/\/www.flgov.com\/2021-executive-orders\/}} %\n\n\n\n\n\n\\begin{figure}[!ht]\n\\subfloat[$\\kappa= 0$]{\\qquad\\scalebox{0.55}{\\begin{forest} for tree={grow=south,circle,draw,minimum size=1ex,inner sep=1mm,s sep=3mm}[2021-07-30 [2021-05-09 [,no edge, draw=none] [,no edge, draw=none]] [2021-09-11 [,no edge, draw=none] [,no edge, draw=none]]]\\end{forest}}\n\\qquad}\n\\subfloat[$\\kappa= 0.25$]{\\scalebox{0.55}{\\qquad\\begin{forest} for tree={grow=south,circle,draw,minimum size=1ex,inner sep=1mm,s sep=3mm}[2021-07-30 [2021-05-09 [2021-01-22[2021-01-11[2021-01-02[,no edge, draw=none] [,no edge, draw=none]] [,no edge, draw=none]] [,no edge, draw=none]] [,no edge, draw=none]] [2021-09-11 [2021-09-02[,no edge, draw=none] [,no edge, draw=none]] [,no edge, draw=none]]]\\end{forest}\\qquad}\n}\n\\subfloat[$\\kappa = 0.45$]{\\scalebox{0.55}{\\qquad\\qquad\\qquad\\begin{forest} for tree={grow=south,circle,draw,minimum size=1ex,inner sep=1mm,s sep=3mm}[2021-07-30 [,no edge, draw=none] [2021-08-22 [,no edge, draw=none] [2021-09-11[,no edge, draw=none] [,no edge, draw=none]]]]\\end{forest}\\qquad\\qquad\\qquad}\n}\n\\caption{\\label{f:gov_changepoints} Change-point detection with level $0.1$ tests using $p=2$ for Governor dataset (described in Section \\ref{s:governor}) from 1\/1\/21 to 12\/31\/21. Bisection conducted until failure to reject null hypothesis or time interval of two days or less. In the figure above, the top of each tree is the first estimated change point; subsequent estimated change points after bisection appear along each respective branch in the order of detection, displayed chronologically left--to--right.}\n\\end{figure}\n\n\n\\medskip\n\\subsection{Social justice\/injustice keyword frequency on Twitter}\\label{s:social_justice}\nWe also apply our method to weekly counts of tweets matching the queries of $d=38$ terms associated with social justice\/injustice over the period $1\/1\/16$ to $1\/1\/22$, resulting in a total of $N = 313$ weeks. Note the data is collected from Twitter by day and is then aggregated by week. The terms included in this dataset are shown in Table \\ref{t:keywords}. 
The full list of terms in Table \\ref{t:keywords} consist of three distinct subsets included for the sake of variety:\n\\begin{itemize}\n\\item A subset of the terms concerning racial equity included in Racial Equity Tools Glossary\\footnote{\\url{https:\/\/www.racialequitytools.org\/glossary}} on the site \\texttt{https:\/\/www.racialequitytools.org};\n\\item Colloquial phrases reported by several major media outlets\\footnote{See \\url{https:\/\/abc7.com\/racism-black-lives-matter-racist-words\/6302853\/}, \\url{https:\/\/www.today.com\/tmrw\/everyday-words-phrases-racist-offensive-backgrounds-t187422}, \\url{https:\/\/abcnews.go.com\/Politics\/commonly-terms-racist-origins\/story?id=71840410}, \\url{https:\/\/www.businessinsider.com\/offensive-phrases-that-people-still-use-2013-11}} to have potentially racist origins\n\\item Common phrases regarding police brutality.\n\\end{itemize}\n\n\\begin{center}\n\\begin{table}[h]\n\\begin{tabular}{| m{15cm} |}\\hline\\\n\\Centering Social justice related keywords\n\\\\\n\\hline\\hline\n{\\small ACAB, accountability, bigotry, black lives matter, blacklist, cakewalk, crack the whip,\npolice brutality, critical race theory, defund the police, disenfranchisement, diversity, implicit bias, inclusion, institutional racism, liberation, lynching, marginalization, master bedroom, microagressions, nitty gritty, oppression, peanut gallery, police reform, prejudice, privilege, racial healing, racist policies, reparations, sold down the river, structural racism, systemic racism, tokenism, uppity, white privilege, white supremacy, whitelist, white} \\\\\\hline\n\\end{tabular}\n\\caption{\\label{t:keywords}Social justice keywords used in Twitter query}\n\\end{table}\n\\end{center}\n\nTo accommodate varying weekly total number of observations which range from roughly 100,000 to 1,500,000 each week, a subset of $m=10d=380$ observations were randomly sampled from each week without replacement, resulting in conditionally multivariate hypergeometric observations with parameters $m=380,$ ${\\mathbf r_i}$, where the vector ${\\mathbf r}_i=(r_{i,1},\\ldots,r_{i,38})$ contains the observed counts for each of the 38 key phrases on week $i$. The estimated change points are displayed in Figure \\ref{f:social_changepoints}.\n\nEstimated change points are fairly consistent for $\\kappa=0$ and $\\kappa=0.25$, though substantially fewer changes are detected when $\\kappa=0.45$. Several of the detected change points coincide with significant dates concerning events in the U.S. news cycle concerning social justice: for instance, the detected week of 5\/21\/2020-5\/28\/2020 coincides with the murder of George Floyd (5\/25\/2020) and appears as an estimated change point in all three choices of $\\kappa$. The week 6\/6\/2019$-$6\/13\/2019, detected as a change point when $\\kappa=0$ and $\\kappa=.25$, coincides with riots in Memphis, Tennessee that occurred after U.S. Marshals killed a black man named Brandon Webber during an attempted arrest\\footnote{\"Memphis Police Appeal for Calm after Marshals Kill Black Man.\" Edited by Adrian Sainz, AP NEWS, 14 June 2019, https:\/\/apnews.com\/article\/tn-state-wire-police-us-news-ap-top-news-mississippi-c290fd44dace4fe7b836110de77c1894.}. Not all estimated change points can be readily associated with headline news events. 
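Both applications rely on the same detect-then-bisect recursion: the test is run on the full stretch of data, and whenever it rejects, the segment is split at the estimated location $\\hat{\\eta}_{N,d}$ and each half is examined again, stopping when the test fails to reject or a segment becomes too short. A minimal sketch of this recursion is given below; \\texttt{test\\_segment} is a placeholder standing in for the level-$0.1$ test based on $V_{N,d}$ combined with the location estimator, and is not spelled out here.

\\begin{verbatim}
def binary_segmentation(data, test_segment, lo=0, hi=None, min_len=3):
    """Recursively collect estimated change points in data[lo:hi].

    test_segment(data, lo, hi) is assumed to return None when the test
    fails to reject, and otherwise an index k with lo < k < hi, the
    estimated change-point location within the segment."""
    if hi is None:
        hi = len(data)
    if hi - lo < min_len:            # segment too short (e.g., two days or weeks)
        return []
    k = test_segment(data, lo, hi)
    if k is None:                    # failure to reject: stop recursing here
        return []
    return sorted(binary_segmentation(data, test_segment, lo, k, min_len)
                  + [k]
                  + binary_segmentation(data, test_segment, k, hi, min_len))
\\end{verbatim}

The trees in Figures \\ref{f:gov_changepoints} and \\ref{f:social_changepoints} record the order in which this recursion detects changes.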
%\n\n\\begin{figure*}[h!]\n\\subfloat[$\\kappa = 0$]{\\scalebox{0.5}{\\begin{forest} for tree={grow=south,circle,draw,minimum size=1ex,inner sep=1mm,s sep=3mm}[2019-06-06, [2018-01-18, [,no edge, draw=none] [,no edge, draw=none]] [2020-05-21, [,no edge, draw=none] [2020-12-31[,no edge, draw=none] [2021-04-08[,no edge, draw=none] [,no edge, draw=none]]]]]\\end{forest}} \\quad}\n\\qquad\n\\subfloat[$\\kappa = .25$]{\\quad\\scalebox{0.5}{\\qquad\\begin{forest} for tree={grow=south,circle,draw,minimum size=1ex,inner sep=1mm,s sep=3mm}[2019-06-06, [2018-01-18, [,no edge, draw=none] [,no edge, draw=none]] [2020-05-21, [,no edge, draw=none] [2020-12-31[2020-05-28[,no edge, draw=none] [,no edge, draw=none]] [2021-04-08[,no edge, draw=none] [,no edge, draw=none]]]]]\\end{forest}}\\quad}\n\\quad\n\\subfloat[$\\kappa = .45$]{\\qquad\\qquad\\scalebox{0.5}{\\begin{forest} for tree={grow=south,circle,draw,minimum size=1ex,inner sep=1mm,s sep=3mm}[2020-01-16, [,no edge, draw=none] [2020-05-21[,no edge, draw=none] [,no edge, draw=none]]]\\end{forest}\\qquad}\\quad\\qquad}\n\\\\\n\n\\caption{\\label{f:social_changepoints}Detected change points for level $0.1$ tests, $p=2$ in the social injustice keyword frequency data. Bisection conducted until failure to reject null hypothesis or time interval shorter than two weeks. Each circle above represents a change point detected in the week starting with the reported date; e.g., the encircled value 2019-06-06 represents the 7-day interval containing 2019-06-06 through 2019-06-12. %\n}\n\\end{figure*}\n\n\n\n\\medskip\n\\section{Preliminary lemmas}\\label{sec-pr1}\n\nIn all asymptotic statements, $O_P(1)$ and $o_P(1)$ denote terms which are bounded in probability and tend to zero in probability, respectively, in the limit $\\min\\{N,d\\}\\to\\infty$.\n\nRecall the independent and identically distributed random variables $\\zeta_i, 1\\leq i \\leq N$ defined in \\eqref{zedef}. Let\n\\begin{align}\\label{psize}\n\\psi_{i,j}=\\left(\\frac{1}{d}\\sum_{\\ell=1}^d|X_{i,\\ell}-X_{j,\\ell}|^p\\right)^{1\/p}-\\theta.%\n\\end{align}\nFirst we provide a bound for the distance between U-statistics and the corresponding projections.\n\\begin{lemma}\\label{lem-1} If the conditions of Theorem \\ref{th1} or \\ref{th2} are satisfied, then\nfor any $\\bar{\\alpha}>1$\n\\begin{align}\\label{lem-11}\n\\max_{1\\leq k\\leq N}\\frac{1}{k^{\\bar{\\alpha}}}\\left| \\sum_{1\\leq ix \\left(E\\psi^2_{1,2}+E\\zeta_1^2 \\right)^{1\/2} \\right\\}\\\\\n&\\leq P\\left\\{ \\max_{1\\leq j \\leq \\log N}\\max_{e^{j-1}\\leq k \\leq e^j}\\frac{1}{k^{\\bar\\alpha}}\\left| \\sum_{i=1}^k\\xi_i \\right|>x { \\left(E\\psi^2_{1,2}+E\\zeta_1^2 \\right)^{1\/2} } \\right\\}\\\\\n&\\leq \\sum_{j=1}^{\\log N}P\\left\\{ \\max_{e^{j-1}\\leq k \\leq e^j}\\frac{1}{k^{\\bar\\alpha}}\\left| \\sum_{i=1}^k\\xi_i \\right|>x\n \\left(E\\psi^2_{1,2}+E\\zeta_1^2 \\right)^{1\/2} \\right\\}\\\\\n&\\leq \\sum_{j=1}^{\\log N}P\\left\\{ \\max_{e^{j-1}\\leq k \\leq e^j}\\left| \\sum_{i=1}^k\\xi_i \\right|>x e^{(j-1) \\bar\\alpha} \\left(E\\psi^2_{1,2}+E\\zeta_1^2 \\right)^{1\/2} \\right\\}\\\\\n&\\leq \\frac{c_1}{x^2}\\sum_{j=1}^{\\log N}e^{-2(j-1) \\bar\\alpha}e^{2j}j^2\\\\\n&\\leq \\frac{c_2}{x^2},\n\\end{align*}\ncompleting the proof of \\eqref{lem-11}. By symmetry, \\eqref{lem-11} implies \\eqref{lem-12}.\n\\end{proof}\n\nNext we consider approximations for the sums of the projections. 
Let\n$$\n\\mathcal s^2(d)=E\\zeta_1^2\\quad\\quad\\mbox{and}\\quad \\quad \\mathcal m(d,\\nu)=E|\\zeta_1|^{\\nu}.\n$$\n\\begin{lemma}\\label{lem-2} If the conditions of Theorem \\ref{th1} or \\ref{th2} are satisfied, then for each $N$ and $d$ we can define independent Wiener processes $\\{W_{N,d,1}(x), 0\\leq x \\leq N\/2\\}$ and $\\{W_{N,d,2}(x), 0\\leq x \\leq N\/2\\}$ such that,\n\\begin{equation}\\label{lem-21}\n\\sup_{1\\leq x \\leq N\/2}\\frac{1}{x^{\\beta}}\\left|\\frac{1}{\\mathcal s(d)}\\sum_{i=1}^{\\lfloor x \\rfloor}\\zeta_i-W_{N,d,1}(x) \\right|=O_P(1),%\n\\end{equation}\nand\n\\begin{align}\\label{lem-22}\n&%\n\\sup_{N\/2\\leq x \\leq N-1}\\frac{1}{(N-x)^{\\beta}}\\left|\\frac{1}{\\mathcal s(d)}\\sum_{i=\\lfloor x \\rfloor+1}^{N}\\zeta_i-W_{N,d,2}(N-x) \\right|=O_P(1).%\n\\end{align}\nfor any {$\\beta>\\max\\{1\/4, 1\/\\nu\\}$}. %\n\\end{lemma}\n\\begin{proof} Using the Skorokhod embedding scheme (cf.\\ Breiman, 1968) we can define Wiener processes $W_{N,d,3}(x)$ such that\n$$\n\\sum_{i=1}^k\\zeta_i=W_{N,d,3}\\left(\\mathcal t_i\\right),\n$$\n\n$$\n\\mathcal t_i=\\mathcal r_1+\\mathcal r_2+\\ldots +\\mathcal r_i,\n$$\nwhere for each $N$ and $d$ the random variables $\\mathcal r_1, \\mathcal r_2. \\ldots, \\mathcal r_{N\/2}$ are independent and identically distributed with\n$$\nE\\mathcal r_1=\\mathcal s^2(d)\\quad\\quad\\mbox{and}\\quad\\quad E\\mathcal r_1^{\\nu\/2}\\leq c_1\\mathcal m(d,\\nu).\n$$\nWe can assume without loss of generality that $2<\\nu<4$. Using the Marcinkiewicz--Zygmund and von Bahr--Esseen inequalities (cf.\\ Petrov, 1995, p.\\ 82) we get\n\\begin{align*}\nE\\max_{1\\leq j\\leq k}\\left|\\sum_{i=1}^j(\\mathcal r_i-\\mathcal s^2(d))\\right|^{\\nu\/2}&\\leq c_2 k\\left(E\\mathcal r_1^{\\nu\/2}+(\\mathcal s^2(d))^{\\nu\/2}\\right)\\\\\n&\\leq c_3k\\left( \\mathcal m(d,\\nu) +(\\mathcal s^2(d))^{\\nu\/2} \\right).\n\\end{align*}\nWe get for all $\\nu\/2<\\lambda$ that \n\\begin{align}\nP&\\left\\{\\max_{1\\leq k \\leq N\/2}\\frac{1}{k^{\\lambda}}\\left| \\sum_{i=1}^{k}(\\mathcal r_i-\\mathcal s^2(d)) \\right|>x \\left(\\mathcal m(d,\\nu)+(\\mathcal s^2(d))^{\\nu\/2}\\right)^{2\/\\nu} \\right\\}\\notag\\\\\n&\\leq P\\left\\{\\max_{1\\leq j\\leq \\log (N\/2)}\\max_{e^{j-1}\\leq k \\leq e^j}\\frac{1}{k^{\\lambda}}\\left| \\sum_{i=1}^{k}(\\mathcal r_i-\\mathcal s^2(d)) \\right|>x \\left(\\mathcal m(d,\\nu)+(\\mathcal s^2(d))^{\\nu\/2}\\right)^{2\/\\nu}\\right\\}\\notag\\\\\n&\\leq P\\left\\{\\max_{1\\leq j\\leq \\log (N\/2)}\\max_{e^{j-1}\\leq k \\leq e^j}\\left| \\sum_{i=1}^{k}(\\mathcal r_i-\\mathcal s^2(d)) \\right|>x e^{(j-1){\\lambda}} \\left(\\mathcal m(d,\\nu)+(\\mathcal s^2(d))^{\\nu\/2}\\right)^{2\/\\nu}\\right\\}\\notag\\\\\n&\\leq \\sum_{j=1}^{\\log (N\/2)}P\\left\\{\\max_{e^{j-1}\\leq k \\leq e^j}\\left| \\sum_{i=1}^{k}(\\mathcal r_i-\\mathcal s^2(d)) \\right|^{\\nu\/2}>x^{\\nu\/2} e^{(j-1){\\lambda}\\nu\/2} \\left(\\mathcal m(d,\\nu)+(\\mathcal s^2(d))^{\\nu\/2}\\right)\\right\\}\\notag\\\\\n&\\leq \\frac{c_4}{x^{\\nu\/2}}\\sum_{j=1}^{\\log (N\/2)}e^je^{-(j-1){\\lambda}\\nu\/2} \\notag\\\\\n&\\leq \\frac{c_5}{x^{\\nu\/2}}.\\label{e:t_k_close_to_ ks^2(d)}\n\\end{align}\nFor $s>0$, and any $c_6>0$, let\n$$\nh({s},d)=c_6{s}^{\\lambda}\\left(\\mathcal m(d,\\nu)+(\\mathcal s^2(d))^{\\nu\/2}\\right)^{2\/\\nu}\n$$\n The bound \\eqref{e:t_k_close_to_ ks^2(d)} implies\n$$\\lim_{c_6\\to\\infty}\\liminf_{N\\to\\infty}\\P\\Big\\{\\mathcal t_{\\lfloor x\\rfloor} \\in \\big\\{y>0: |y- x\\mathcal s^2(d)| \\leq h(x,d)\\big\\},~ 1\\leq x \\leq N\/2\\Big\\} = 1.$$ In 
turn, this implies%\n\\begin{align*}\n\\lim_{c_6\\to \\infty}\\liminf_{N\\to\\infty}P\\biggl\\{\\max_{1\\leq x \\leq N\/2}&\\frac{1}{\\mathcal s(d)x^{\\beta}}\\left| W_{N,d,3}\\left( \\mathcal t_{\\lfloor x\\rfloor} \\right)-W_{N,d,3}\\left(\\mathcal s^2(d)x \\right)\\right|\\\\\n&\\leq \\sup_{1\\leq x\\leq N\/2}\\sup_{\\{ y>0: |y-\\mathcal s^2(d)x|\\leq h(x,d)\\}}\\frac{1}{\\mathcal s(d)x^{\\beta}}\\left| W_{N,d,3}\\left( y \\right)-W_{N,d,3}\\left(\\mathcal s^2(d)x \\right)\\right|\n\\Biggl\\}=1.\n\\end{align*}\nThe distribution of $W_{N,3,d}(x)$ is the same as of the Wiener process $W(x)$ and therefore, by the scale transformation of the Wiener process\n\\begin{align}\n\\sup_{1\\leq x\\leq N\/2}&\\sup_{\\{y>0:|y-\\mathcal s^2(d)x|\\leq h(x,d)\\}}\\frac{1}{\\mathcal s(d)x^{\\beta}}\\left| W_{N,d,3}\\left( y \\right)-W_{N,d,3}\\left(\\mathcal s^2(d)x \\right)\\right|\\notag\\\\\n&\\stackrel{{\\mathcal D}}{=}\n\\sup_{1 \\leq x\\leq N\/2}\\sup_{\\{z>0:|z-x|\\leq h(x,d)\/\\mathcal s^2(d)\\}}\\frac{1}{x^{\\beta}}\\left| W(z)-W(x) \\right|\\notag\\\\\n&\\leq \n\\sup_{1 \\leq x\\leq N\/2}\\sup_{\\{z>0:|z-x|\\leq c_7 x^\\lambda \\}}\\frac{1}{x^{\\beta}}\\left| W(z)-W(x) \\right|,\\label{e:ss_bound}\n\\end{align}\nwhere we used that $\\sup_{d\\geq 1}\\frac{\\left(\\mathcal m(d,\\nu)+(\\mathcal s^2(d))^{\\nu\/2}\\right)^{2\/\\nu}}{\\mathcal s^2(d)}<\\infty$ by Lemma \\ref{appe-mom}. Recall (cf. \\citet[p. 24]{csorgo:revesz:1981}) for every $\\epsilon>0$ there exists a $c_0(\\epsilon)$ such that\n$$\n\\P\\Big\\{\\sup_{0 \\leq x\\leq T}\\sup_{\\{z>0:|z-x|\\leq h \\}}\\left| W(z)-W(x) \\right|>h^{1\/2} y\\Big\\} \\leq \\frac{c_0(\\epsilon) T}{h} \\exp\\Big(-\\frac{y^2}{2+\\epsilon}\\Big),\n$$\nfor any $T>0$ and $00$ and $ \\lambda<2\\beta$, we have\n\\begin{align*}\n\\P\\Big\\{\\sup_{1 \\leq x\\leq N}&\\sup_{\\{z>0:|z-x|\\leq c x^\\lambda \\}}\\frac{1}{x^{\\beta}}\\left| W(z)-W(x) \\right|> M \\Big\\}\\\\\n& \\leq \\sum_{j=1}^{\\log N }\\P\\Big\\{\\sup_{e^{j-1}\\leq x\\leq e^j}\\sup_{\\{z>0:|z-x|\\leq c x^\\lambda \\}}\\frac{1}{x^{\\beta}}\\left| W(z)-W(x) \\right|> M\\Big\\}\\\\\n& \\leq \\sum_{j=1}^{\\log N }\\P\\Big\\{\\sup_{e^{j-1}\\leq x\\leq e^j}\\sup_{\\{z>0:|z-x|\\leq c x^\\lambda \\}}\\left| W(z)-W(x) \\right|> Me^{\\beta (j-1)}\\Big\\}\\\\\n& \\leq \\sum_{j=1}^{\\log N }\\P\\Big\\{\\sup_{0\\leq x\\leq e^j}\\sup_{\\{z>0:|z-x|\\leq c e^{j\\lambda} \\}}\\left| W(z)-W(x) \\right|> c^{-1}e^{-\\beta} (c e^{j\\lambda\/2}) \\cdot M e^{j(\\beta-\\lambda\/2)} \\Big\\}\\\\\n& \\leq c_8 \\sum_{j=1}^{\\log N } e^{j(1-\\lambda)}\\exp\\Big(-c_9 M^2 e^{j(2\\beta-\\lambda)} \\Big)\\\\\n& \\leq c_8 \\sum_{j=1}^{\\infty } e^{j(1-\\lambda)}\\exp\\Big(-c_9 M^2 e^{j(2\\beta-\\lambda)} \\Big)\\\\\n&= c_8 \\mathcal G(M),\n\\end{align*}\nNote $\\mathcal G(M)<\\infty$ since $\\lambda<2\\beta$, and clearly $\\mathcal G(M)\\to0$ as $M\\to\\infty$. Returning to \\eqref{e:ss_bound}, this implies\n$$\n\\sup_{1 \\leq x\\leq N\/2}\\sup_{\\{z>0:|z-x|\\leq c_7 x^\\lambda \\}}\\frac{1}{x^{\\beta}}\\left| W(z)-W(x) \\right| = O_P(1),\\quad N\\to\\infty,\n$$\nwhich establishes the result.\n\n\\end{proof}\n\n\n\\medskip\n\\section{Proofs of Theorems \\ref{th1}--\\ref{th4}}\\label{sec-pr2}\n\n\n\\medskip\n\\noindent\n{\\bf Proof of Theorem \\ref{th1}.} Let $2\/N\\leq t \\leq 1-2\/N$. 
With $k=\\lfloor Nt\\rfloor$, and $k^* = N-k$, note\n\\begin{align}\n\\frac{1}{\\sigma}V_{N,d}(t) &= N^{1\/2}\\frac{k}{N}\\frac{k^*}{N} \\frac{2 d^{1\/2}}{\\sigma}\\bigg(\\frac{1}{k(k-1)} \\sum_{1\\leq i0$, observe\n$$ %\nN^{1\/2-\\bar{\\alpha}}\\frac{|R_{N,d}(t)|}{(t(1-t))^{\\bar{\\alpha}}}= \\frac{2 d^{1\/2}}{\\sigma}\\Big(\\frac{(1-t)^{1-\\bar\\alpha}}{k^{\\bar\\alpha}(k-1)} \\sum_{1\\leq ix \\right\\}=0\n$$\nfor all $x>0$. The same arguments give\n$$\n\\sup_{1-z\\leq t<1}\\frac{|B(t)|}{w(t)}\\stackrel{P}{\\to}c_2, \\quad z\\to 0,\n$$\nfor some $0\\leq c_2<\\infty$ and\n$$\n\\lim_{z\\to 0}\\limsup_{N\\to \\infty}P\\left\\{ \\left|\\sup_{1-z\\leq t \\leq 1-2\/N}\\frac{1}{w(t)}\\left|\\frac{1}{\\sigma}V_{N,d}(t)\\right|-c_2\\right|>x \\right\\}=0\n$$\nfor all $x>0$, completing the proof of \\ref{th1-1}. The proof of \\eqref{th1-2} is similar to that of \\eqref{th1-1} and therefore it is omitted.\n\\qed\n\n\n\\medskip\n\\noindent\n{\\bf Proof of Theorem \\ref{th2}.} {Similarly as in the arguments leading to \\eqref{pr1-2}, using Lemmas \\ref{lem-1} and \\ref{lem-2}, we obtain}\n\\begin{align}\\label{da-pr1}\n\\max_{2