diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzznqar" "b/data_all_eng_slimpj/shuffled/split2/finalzznqar" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzznqar" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec_intro}\nSuppose $H$ is a locally-compact Hausdorff group acting freely and continuously on a locally-compact Hausdorff space $X$, so that $(H,X)$ is a free transformation group. In \\cite[pp.~95--96]{Green1977} Green gives an example of a free non-proper action of $H=\\mathbb R$ on a subset $X$ of $\\mathbb R^3$; the non-properness comes down to the existence of $z\\in X$, $\\braces{x_n}\\subset X$, and two sequences $\\braces{s_n}$ and $\\braces{t_n}$ in $H$ such that\n\\begin{enumerate}\\renewcommand{\\theenumi}{\\roman{enumi}}\\renewcommand{\\labelenumi}{(\\theenumi)}\n\\item $s_n^{-1}\\cdot x_n\\rightarrow z$ and $t_n^{-1}\\cdot x_n\\rightarrow z$; and \n\\item $t_ns_n^{-1}\\rightarrow\\infty$ as $n\\rightarrow\\infty$, in the sense that $\\braces{t_ns_n^{-1}}$ has no convergent subsequence.\n\\end{enumerate}\nIn \\cite[Definition~2.2]{Archbold-Deicke2005}), and subsequently in \\cite[p.~2]{Archbold-anHuef2006}, the sequence $\\{x_n\\}$ is said to converge $2$-times in the orbit space to $z\\in X$. Each orbit $H\\cdot x$ gives an induced representation $\\operatorname{Ind}\\epsilon_{x}$ of the associated transformation-group $C^*$-algebra $C_0(X)\\rtimes H$ which is irreducible, and the $k$-times convergence of $\\{x_n\\}$ in the orbit space to $z\\in X$ translates into statements about various multiplicity numbers associated to $\\operatorname{Ind}\\epsilon_z$ in the spectrum of $C_0(X)\\rtimes H$, as in \\cite[Theorem~2.5]{Archbold-Deicke2005}, \\cite[Theorem~1.1]{Archbold-anHuef2006} and \\cite[Theorem~2.1]{Archbold-anHuef2008}.\n\nUpper and lower multiplicity numbers associated to irreducible representations $\\pi$ of a $C^*$-algebra $A$ were introduced by Archbold \\cite{Archbold1994} and extended to multiplicity numbers relative to a net of irreducible representations by Archbold and Spielberg \\cite{Archbold-Spielberg1996}. \nThe upper multiplicity $M_U(\\pi)$ of $\\pi$, for example, counts `the number of nets of orthogonal equivalent pure states which can converge to a common pure state associated to $\\pi$' \\cite[p. 26]{aklss2001}. The definition of $k$-times convergence and \\cite[Theorem~2.5]{Archbold-Deicke2005} were very much motivated by a notion of $k$-times convergence in the dual space of a nilpotent Lie group \\cite{Ludwig1990} and its connection with relative multiplicity numbers (see, for example, \\cite[Theorem~2.4]{aklss2001} and \\cite[Theorem~5.8]{Archbold-Ludwig-Schlichting2007}).\n\nTheorem~1.1 of \\cite{Archbold-anHuef2006} shows that the topological property of a sequence $\\{x_n\\}$ converging $k$-times in the orbit space to $z\\in X$ is equivalent to (1) a measure theoretic accumulation along the orbits $G\\cdot x_n$ and (2) that the lower multiplicity of $\\operatorname{Ind}\\epsilon_z$ relative to the sequence $\\{\\operatorname{Ind}\\epsilon_{x_n}\\}$ is at least $k$. In this paper we prove that the results of \\cite{Archbold-anHuef2006} generalise to principal groupoids. 
\n\n\nIn our main arguments we have tried to preserve as much as possible the structure of those in \\cite{Archbold-anHuef2006}, although the arguments presented here are often more complicated in order to cope with the partially defined product in a groupoid and with the family of measures constituting a Haar system, as opposed to the single fixed Haar measure available in the transformation-group case. \n\nOur theorems have led us to a new class of examples exhibiting $k$-times convergence in groupoids that are not based on transformation groups, thus justifying our level of generality. Given a row-finite directed graph $E$, Kumjian, Pask, Raeburn and Renault in \\cite{kprr1997} used the set of all infinite paths in $E$ to construct an r-discrete groupoid $G_E$, called a {\\em path groupoid}. We prove that $G_E$ is principal if and only if $E$ contains no cycles (Proposition~\\ref{prop_principal_iff_no_cycles}). We then exhibit principal $G_E$ with Hausdorff and non-Hausdorff orbit spaces, respectively, both with a $k$-times converging sequence in the orbit space. In particular, our examples can be used to find a groupoid $G_E$ whose $C^*$-algebra has non-Hausdorff spectrum and distinct upper and lower multiplicity counts among its irreducible representations.\n\n\n\\section{Preliminaries}\\label{sec_prelim}\n\nWe denote the unit space of a groupoid $G$ by $G^{(0)}$. For $x\\in G^{(0)}$ we call the set $r\\big(s^{-1}(\\{x\\})\\big)=s\\big(r^{-1}(\\{x\\})\\big)$ the {\\em orbit} of $x$ and denote it by $[x]$. For a subset $U$ of $G^{(0)}$ we define $G_U:=s^{-1}(U)$, $G^U:=r^{-1}(U)$, and $G|_U:=s^{-1}(U)\\cap r^{-1}(U)$. We denote the set of all positive integers by $\\mathbb P$ and the set of all non-negative integers by $\\mathbb N$.\n\n\\begin{definition}\\label{def_Haar_system}A {\\em right Haar system} on $G$ is a set $\\{\\lambda_x:x\\in G^{(0)}\\}$ of non-negative Radon measures on $G$ such that\n\\begin{enumerate}\\renewcommand{\\theenumi}{\\roman{enumi}}\n \\item $\\mathrm{supp}\\,\\lambda_x=G_x$ $\\big(=s^{-1}(\\{x\\})\\big)$\\quad for all $x\\in G^{(0)}$;\n\\item\\label{Haar_system_property_2} for $f\\in C_c(G)$, the function $x\\mapsto \\int f\\,d\\lambda_x$ on $G^{(0)}$ is in $C_c(G^{(0)})$; and\n\\item\\label{Haar_system_property_3} for $f\\in C_c(G)$ and $\\gamma\\in G$, \n\\[\\int f(\\alpha\\gamma)\\,d\\lambda_{r(\\gamma)}(\\alpha)=\\int f(\\alpha)\\,d\\lambda_{s(\\gamma)}(\\alpha).\\]\n\\end{enumerate}\nWe will refer to \\eqref{Haar_system_property_2} as the {\\em continuity of the Haar system} and to \\eqref{Haar_system_property_3} as {\\em Haar-system invariance}. The collection $\\{\\lambda^x:x\\in G^{(0)}\\}$ of measures where $\\lambda^x(E):=\\lambda_x(E^{-1})$ is a {\\em left Haar system}, which is a system of measures such that $\\mathrm{supp}\\,\\lambda^x=G^x$ and, for $f\\in C_c(G)$, $x\\mapsto \\int f\\,d\\lambda^x$ is continuous and $\\int f(\\gamma\\alpha)\\,d\\lambda^{s(\\gamma)}(\\alpha)=\\int f(\\alpha)\\,d\\lambda^{r(\\gamma)}(\\alpha)$. 
Given that we can easily convert a right Haar system $\\{\\lambda_x\\}$ into a left Haar system $\\{\\lambda^x\\}$ and vice versa, we will simply refer to a {\\em Haar system} $\\lambda$ and use subscripts to refer to elements of the right Haar system $\\{\\lambda_x\\}$ and superscripts to refer to elements of the left Haar system $\\{\\lambda^x\\}$.\n\\end{definition}\n\nThe following lemma follows from the invariance of the Haar system and the Dominated Convergence Theorem; we omit the proof.\n\n\\begin{lemma}[Haar-system invariance]\nSuppose $G$ is a locally-compact Hausdorff groupoid with Haar system $\\lambda$. If $K\\subset G$ is compact and $\\gamma\\in G$ with $s(\\gamma)=x$ and $r(\\gamma)=y$, then $\\lambda_x(K\\gamma)=\\lambda_y(K)$ and $\\lambda^x(\\gamma^{-1} K)=\\lambda^y(K)$.\n\\end{lemma}\n\nDefinition~\\ref{def_induced_representation} below is Definition 2.45 in the unpublished book \\cite{Muhly-book}. Alternative descriptions of the induced representation may be found in \\cite[p.~234]{Muhly-Williams1990} and \\cite[pp.~81--82]{Renault1980}.\n\\begin{definition}\\label{def_induced_representation}\nSuppose $G$ is a second-countable locally-compact Hausdorff groupoid with Haar system $\\lambda$ and let $\\mu$ be a Radon measure on $G^{(0)}$.\n\\begin{enumerate}\\renewcommand{\\labelenumi}{(\\roman{enumi})}\n\\item We write $\\nu=\\mu\\circ\\lambda=\\int \\lambda^x \\, d\\mu$ for the measure on $G$ defined for every Borel-measurable function $f:G\\rightarrow\\mathbb C$ by $\\int_G f(\\gamma)\\, d\\nu(\\gamma)=\\int_{G^{(0)}}\\int_G f(\\gamma)\\, d\\lambda^x(\\gamma)\\, d\\mu(x)$. We call $\\nu$ the measure induced by $\\mu$, and we write $\\nu^{-1}$ for the image of $\\nu$ under the homeomorphism $\\gamma\\mapsto \\gamma^{-1}$.\n\\item For $f\\in C_c(G)$, $\\mathrm{Ind}\\,\\mu(f)$ is the operator on $L^2(G,\\nu^{-1})$ defined by the formula\n\\begin{align*}\n\\big(\\mathrm{Ind}\\,\\mu(f)\\xi\\big)(\\gamma)&=\\int_G f(\\alpha)\\xi(\\alpha^{-1}\\gamma)\\, d\\lambda^{r(\\gamma)}(\\alpha)\\\\\n&=\\int_G f(\\gamma \\alpha)\\xi(\\alpha^{-1})\\, d\\lambda^{s(\\gamma)}(\\alpha).\n\\end{align*}\n\\end{enumerate}\n\\end{definition}\nIn this paper we are interested in representations that are induced by point-mass measures $\\delta_x$ on $G^{(0)}$. We denote $\\mathrm{Ind}\\,\\delta_x$ by $\\mathrm{L}^x$ for all $x\\in G^{(0)}$ as in \\cite{Muhly-Williams1990} and \\cite{Clark-anHuef2008}. \n\n\\begin{remark}\\label{measure_induced_epsilon_x} It follows from the definition of the induced measure that for $x\\in G^{(0)}$, the measure $\\nu$ induced by $\\delta_x$ is equal to $\\lambda^x$. In particular we have $\\nu^{-1}=\\lambda_x$, so $\\mathrm{L}^x$ acts on $L^2(G,\\lambda_x)$. The operator $\\mathrm{L}^x$ is then given by\n\\[\n\\big(\\mathrm{L}^x(f)\\xi\\big)(\\gamma)=\\int_G f(\\gamma \\alpha^{-1})\\xi(\\alpha)\\, d\\lambda_x(\\alpha)\n\\]\nfor all $\\xi\\in L^2(G,\\lambda_x)$ and all $\\gamma\\in G$. There is a close relationship between the convolution on $C_c(G)$ and these induced representations: recall that for $f,g\\in C_c(G)$, the convolution $f\\ast g\\in C_c(G)$ is given by\n\\[\n f\\ast g(\\gamma)=\\int_G f(\\gamma\\alpha^{-1})g(\\alpha)\\,d\\lambda_{s(\\gamma)}(\\alpha)\\quad\\text{for all }\\gamma\\in G,\n\\]\nso that\n\\[\n\\big(\\mathrm{L}^x(f)g\\big)(\\gamma)=f\\ast g(\\gamma)\\quad\\text{for any }x\\in G^{(0)}\\text{ and }\\gamma\\in G_x.\n\\]\nWe denote the norm in $L^2(G,\\lambda_x)$ by $\\|\\cdot\\|_x$. 
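For instance (a sketch only, using the standard fact that for an r-discrete groupoid, such as the path groupoid $G_E$ of \\cite{kprr1997} mentioned in the introduction, the counting measures on the fibres $G_x$ form a Haar system): if each $\\lambda_x$ is counting measure on $G_x$, then $L^2(G,\\lambda_x)=\\ell^2(G_x)$ and the integrals above reduce to sums, so that\n\\[\n\\big(\\mathrm{L}^x(f)\\xi\\big)(\\gamma)=\\sum_{\\alpha\\in G_x} f(\\gamma\\alpha^{-1})\\xi(\\alpha)\\quad\\text{for }f\\in C_c(G),\\ \\xi\\in\\ell^2(G_x)\\text{ and }\\gamma\\in G_x,\n\\]\nand $\\mathrm{L}^x(f)$ acts through the kernel $(\\gamma,\\alpha)\\mapsto f(\\gamma\\alpha^{-1})$ on $G_x\\times G_x$. 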
Finally note that when $G$ is a second-countable locally-compact principal groupoid that admits a Haar system, each $\\mathrm{L}^x$ is irreducible by \\cite[Lemma~2.4]{Muhly-Williams1990}.\n\\end{remark}\n\\begin{remark} If\n $G=(H, X)$ is a second-countable free transformation group, then the representations $\\mathrm{L}^x$ defined above are unitarily equivalent to the representations $\\operatorname{Ind}\\epsilon_x$ used in \\cite{Archbold-anHuef2006}. Specifically, let $\\nu$ be a choice of right Haar measure on $H$ and $\\Delta$ the associated modular function. The map $\\iota:C_c(H\\times X)\\to C_c(H\\times X)$ defined by\n\\[\n\\iota(f)(t,x)=f(t,x)\\Delta(t)^{1\/2}\n\\]\nextends to an isomorphism $\\iota$ of the groupoid $C^*$-algebra $C^*(H\\times X)$ onto the transformation-group $C^*$-algebra $C_0(X)\\rtimes H$ \\cite[p.~58]{Renault1980}. Fix $x\\in X$. Then there is a unitary $U_x:L^2(H,\\nu)\\to L^2(H\\times X,\\lambda_x)$, characterised by $U_x(\\xi)(h, y)=\\xi(h)\\delta_x(h^{-1}\\cdot y)$ for $\\xi\\in C_c(H)$, and \n$U_x\\operatorname{Ind}\\epsilon_x(\\iota(f))U_x^*=\\mathrm{L}^x(f)$ for $f\\in C^*(H\\times X)$.\n\\end{remark}\n\nLet $A$ be a $C^*$-algebra. We write $\\theta$ for the canonical surjection from the space $P(A)$ of pure states of $A$ to the spectrum $\\hat A$ of $A$. We frequently identify an irreducible representation $\\pi$ with its equivalence class in $\\hat A$ and we write $\\mathcal H_\\pi$ for the Hilbert space on which $\\pi(A)$ acts.\n\nLet $\\pi\\in\\hat A$ and let $\\braces{\\pi_\\alpha}$ be a net in $\\hat{A}$. We now recall the definitions of {\\em upper} and {\\em lower multiplicity} $\\mathrm{M}_\\mathrm{U}(\\pi)$ and $\\mathrm{M}_\\mathrm{L}(\\pi)$ from \\cite{Archbold1994}, and {\\em relative upper} and {\\em relative lower multiplicity} $\\mathrm{M}_\\mathrm{U}(\\pi,\\braces{\\pi_\\alpha})$ and $\\mathrm{M}_\\mathrm{L}(\\pi,\\braces{\\pi_\\alpha})$ from \\cite{Archbold-Spielberg1996}. \n\nLet $\\mathcal N$ be the weak$^*$-neighbourhood base at zero in the dual $A^*$ of $A$ consisting of all open sets of the form\n\\[\nN=\\{\\psi\\in A^*:|\\psi(a_i)|<\\epsilon, 1\\leqslant i\\leqslant n\\},\n\\]\nwhere $\\epsilon>0$ and $a_1,a_2,\\ldots,a_n\\in A$.\nSuppose $\\phi$ is a pure state of $A$ associated with $\\pi$ and let $N\\in\\mathcal N$. Define\n\\[\nV(\\phi,N)=\\theta\\big((\\phi+N)\\cap P(A)\\big),\n\\]\nan open neighbourhood of $\\pi$ in $\\hat{A}$. For $\\sigma\\in \\hat{A}$ let\n\\[\n\\mathrm{Vec}(\\sigma,\\phi,N)=\\braces{\\eta\\in \\mathcal H_\\sigma:\\|\\eta\\|=1,(\\sigma(\\cdot)\\eta\\, |\\,\\eta)\\in \\phi+N}.\n\\]\nNote that $\\mathrm{Vec}(\\sigma,\\phi,N)$ is non-empty if and only if $\\sigma\\in V(\\phi,N)$. For any $\\sigma\\in V(\\phi,N)$ define $d(\\sigma,\\phi,N)$ to be the supremum in $\\mathbb P\\cup\\braces{\\infty}$ of the cardinalities of finite orthonormal subsets of $\\mathrm{Vec}(\\sigma,\\phi,N)$. Write $d(\\sigma,\\phi,N)=0$ when $\\mathrm{Vec}(\\sigma,\\phi,N)$ is empty.\n\nDefine\n\\[\n\\mathrm{M}_\\mathrm{U}(\\phi,N)=\\underset{\\sigma\\in V(\\phi,N)}{\\sup} d(\\sigma,\\phi,N)\\in\\mathbb P\\cup\\braces{\\infty}.\n\\]\nNote that if $N'\\in \\mathcal N$ and $N'\\subset N$, then $\\mathrm{M}_\\mathrm{U}(\\phi,N')\\leqslant\\mathrm{M}_\\mathrm{U}(\\phi,N)$. 
Now define\n\\[\n\\mathrm{M}_\\mathrm{U}(\\phi)=\\underset{N\\in\\mathcal N}{\\inf}\\mathrm{M}_\\mathrm{U}(\\phi,N)\\in\\mathbb P\\cup\\braces{\\infty}.\n\\]\nBy \\cite[Lemma~2.1]{Archbold1994}, the value of $\\mathrm{M}_\\mathrm{U}(\\phi)$ is independent of the pure state $\\phi$ associated to $\\pi$. Thus $\\mathrm{M}_\\mathrm{U}(\\pi):=\\mathrm{M}_\\mathrm{U}(\\phi)$ is well-defined. For lower multiplicity, assume that $\\braces{\\pi}$ is not open in $\\hat{A}$, and using \\cite[Lemma~2.1]{Archbold1994} again, define\n\\[\n\\mathrm{M}_\\mathrm{L}(\\pi):=\\underset{N\\in\\mathcal N}\\inf \\Big(\\underset{\\sigma\\rightarrow\\pi, \\sigma\\ne\\pi}{\\lim\\,\\inf} d(\\sigma,\\phi,N)\\Big)\\in\\mathbb P\\cup\\braces{\\infty}.\n\\]\n\nNow suppose that $\\{\\pi_\\alpha\\}_{\\alpha\\in\\Lambda}$ is a net in $\\hat{A}$. For $N\\in\\mathcal N$ let\n\\[\n\\mathrm{M}_\\mathrm{U}\\big(\\phi,N,\\{\\pi_\\alpha\\}\\big)=\\underset{\\alpha\\in\\Lambda}{\\lim\\,\\sup}\\, d(\\pi_\\alpha,\\phi,N)\\in\\mathbb N\\cup\\braces{\\infty}.\n\\]\nNote that if $N'\\in\\mathcal N$ and $N'\\subset N$ then $\\mathrm{M}_\\mathrm{U}\\big(\\phi,N',\\{\\pi_\\alpha\\}\\big)\\leqslant \\mathrm{M}_\\mathrm{U}\\big(\\phi,N,\\{\\pi_\\alpha\\}\\big)$. Then\n\\[\n\\mathrm{M}_\\mathrm{U}\\big(\\pi,\\{\\pi_\\alpha\\}\\big):=\\underset{N\\in\\mathcal N}\\inf \\mathrm{M}_\\mathrm{U}\\big(\\phi,N,\\{\\pi_\\alpha\\}\\big)\\in\\mathbb N\\cup\\braces{\\infty},\n\\]\nis well-defined because the right-hand side is independent of the choice of $\\phi$ by an argument similar to the proof of \\cite[Lemma~2.1]{Archbold1994}. Similarly define\n\\[\n\\mathrm{M}_\\mathrm{L}\\big(\\phi,N,\\{\\pi_\\alpha\\}\\big):=\\underset{\\alpha\\in\\Lambda}{\\lim\\,\\inf}\\, d(\\pi_\\alpha,\\phi,N)\\in\\mathbb N\\cup\\braces{\\infty},\n\\]\nand let\n\\[\n\\mathrm{M}_\\mathrm{L}\\big(\\pi,\\{\\pi_\\alpha\\}\\big)=\\underset{N\\in\\mathcal N}\\inf \\mathrm{M}_\\mathrm{L}\\big(\\phi,N,\\{\\pi_\\alpha\\}\\big)\\in\\mathbb N\\cup\\braces{\\infty}.\n\\]\nIt follows that for any irreducible representation $\\pi$ and any net $\\{\\pi_\\alpha\\}_{\\alpha\\in\\Lambda}$ of irreducible representations,\n\\[\n\\mathrm{M}_\\mathrm{L}\\big(\\pi,\\{\\pi_\\alpha\\}\\big)\\leqslant\\mathrm{M}_\\mathrm{U}\\big(\\pi,\\{\\pi_\\alpha\\}\\big)\\leqslant\\mathrm{M}_\\mathrm{U}(\\pi)\n\\]\nand, if $\\{\\pi_\\alpha\\}$ converges to $\\pi$ with $\\pi_\\alpha\\ne\\pi$ eventually,\n\\[\n\\mathrm{M}_\\mathrm{L}(\\pi)\\leqslant\\mathrm{M}_\\mathrm{L}\\big(\\pi,\\{\\pi_\\alpha\\}\\big).\n\\]\nFinally, if $\\{\\pi_\\beta\\}$ is a subnet of $\\{\\pi_\\alpha\\}$, then\n\\[\n\\mathrm{M}_\\mathrm{L}\\big(\\pi,\\{\\pi_\\alpha\\}\\big)\\leqslant \\mathrm{M}_\\mathrm{L}\\big(\\pi,\\{\\pi_\\beta\\}\\big)\\leqslant \\mathrm{M}_\\mathrm{U}\\big(\\pi,\\{\\pi_\\beta\\}\\big)\\leqslant \\mathrm{M}_\\mathrm{U}\\big(\\pi,\\{\\pi_\\alpha\\}\\big).\n\\]\n\n\n\\section{Lower multiplicity and $k$-times convergence I}\\label{sec_lower_multiplicity_1}\nA key goal for this paper is to describe the relationship between multiplicities of induced representations and strength of convergence in the orbit space. We start this section by recalling the definition of $k$-times convergence in a groupoid from \\cite{Clark-anHuef2008}. 
We then show that if a sequence converges $k$-times in the orbit space of a principal groupoid, then the lower multiplicity of the associated sequence of representations is at least $k$; the converse will be shown in Section \\ref{sec_lower_multiplicity_2}.\n\nRecall that a sequence $\\{\\gamma_n\\}\\subset G$ tends to infinity if it admits no convergent subsequence.\n\\begin{definition}\\label{def_k-times_convergence}\nLet $k\\in\\mathbb P$. A sequence $\\{x_n\\}$ in $G^{(0)}$ is $k$-times convergent in $G^{(0)}\/G$ to $z\\in G^{(0)}$ if there exist $k$ sequences $\\{\\gamma_n^{(1)}\\},\\{\\gamma_n^{(2)}\\},\\ldots,\\{\\gamma_n^{(k)}\\}\\subset G$ such that\n\\begin{enumerate}\\renewcommand{\\theenumi}{\\roman{enumi}}\\renewcommand{\\labelenumi}{(\\theenumi)}\n\\item\\label{def_k-times_convergence_1} $s(\\gamma_n^{(i)})=x_n$ for all $n$ and $1\\leqslant i\\leqslant k$;\n\\item\\label{def_k-times_convergence_2} $r(\\gamma_n^{(i)})\\rightarrow z$ as $n\\rightarrow\\infty$ for $1\\leqslant i\\leqslant k$; and\n\\item\\label{def_k-times_convergence_3} if $1\\leqslant i0$. Define $\\eta\\in L^2(G,\\lambda_z)$ by\n\\[\n\\eta(\\alpha)=\\|g\\|_z^{-1}g(\\alpha)\\quad\\text{for all }\\alpha\\in G. \n\\]\nThen\n\\[\n\\|\\eta\\|_z^2=\\|g\\|_z^{-2}\\int g(\\alpha)^2\\, d\\lambda_z(\\alpha)=\\|g\\|_z^{-2}\\|g\\|_z^2=1,\n\\]\nso $\\eta$ is a unit vector in $L^2(G,\\lambda_z)$\nand the GNS construction of $\\phi:=(\\mathrm{L}^z(\\cdot)\\eta\\,|\\,\\eta)$ is unitarily equivalent to $\\mathrm{L}^z$. By the definition of lower multiplicity we now have\n\\[\n\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\{\\mathrm{L}^{x_n}\\})=\\inf_{N\\in\\mathcal N} \\mathrm{M}_\\mathrm{L}(\\phi,N,\\{\\mathrm{L}^{x_n}\\})=r,\n\\]\nso there exists $N\\in\\mathcal N$ such that\n\\[\n\\mathrm{M}_\\mathrm{L}(\\phi,N,\\{\\mathrm{L}^{x_n}\\})=\\underset{\\scriptstyle n}{\\lim\\,\\inf}\\, d(\\mathrm{L}^{x_n},\\phi,N)=r,\n\\]\nand consequently there exists a subsequence $\\{y_m\\}$ of $\\{x_n\\}$ such that\n\\begin{equation}\\label{observation_to_contradict}\nd(\\mathrm{L}^{y_m},\\phi,N)=r\\quad\\text{for all }m.\n\\end{equation}\n Since any subsequence of a sequence that is $k$-times convergent is also $k$-times convergent, we know that $\\{y_m\\}$ converges $k$-times to $z$ in $G^{(0)}\/G$.\n\nWe will now use the $k$-times convergence of $\\{y_m\\}$ to construct $k$ sequences of unit vectors with sufficient properties to establish our contradiction. By the $k$-times convergence of $\\{y_m\\}$ there exist $k$ sequences\n\\[\n\\{\\gamma_m^{(1)}\\},\\{\\gamma_m^{(2)}\\},\\ldots,\\{\\gamma_m^{(k)}\\}\\subset G\n\\]\nsatisfying\n{\\allowdisplaybreaks\\begin{enumerate}\\renewcommand{\\labelenumi}{(\\roman{enumi})}\n\\item $s(\\gamma_m^{(i)})=y_m$ for all $m$ and $1\\leqslant i\\leqslant k$;\n\\item $r(\\gamma_m^{(i)})\\rightarrow z$ as $m\\rightarrow\\infty$ for $1\\leqslant i\\leqslant k$; and\n\\item if $1\\leqslant i(k-1)\\lambda_z(G^{W_m}).\n\\]\nThen $\\{x_n\\}$ converges $k$-times in $G^{(0)}\/G$ to $z$.\n\\begin{proof}\nLet $\\{K_m\\}$ be an increasing sequence of compact subsets of $G$ such that $G=\\bigcup_{m\\geqslant 1}\\mathrm{Int}\\,K_m$. 
By the regularity of $\\lambda_z$, for each $m\\geqslant 1$ there exist $\\delta_m>0$ and an open neighbourhood $U_m$ of $G_z^{W_m}$ such that\n\\begin{equation}\\label{AaH4.1}\n\\underset{\\scriptstyle n}{\\lim\\,\\inf}\\,\\lambda_{x_n}(G^{W_m})>(k-1)\\lambda_z(U_m)+\\delta_m.\n\\end{equation}\n\nWe will construct, by induction, a strictly increasing sequence of positive integers $\\{n_m\\}$ such that, for all $n\\geqslant n_m$,\n\\begin{align}\n&\\lambda_{x_n}(K_m\\alpha\\cap G^{W_m})<\\lambda_z(U_m)+\\delta_m\/k\\quad\\text{for all }\\alpha\\in G_{x_n}^{W_m},\\quad\\text{and}\\label{AaH4.2}\\\\\n&\\lambda_{x_n}(G^{W_m})>(k-1)\\lambda_z(U_m)+\\delta_m.\\label{AaH4.3}\n\\end{align}\n\nBy applying Lemma \\ref{the_unbroken_lemma} with $\\delta=\\lambda_z(U_1)-\\lambda_z(G^{W_1})+\\delta_1\/k$ there exists $n_1$ such that $n\\geqslant n_1$ implies\n\\[\\lambda_{x_n}(K_1\\alpha\\cap G^{W_1})<\\lambda_z(U_1)+\\delta_1\/k \\quad\\text{for all }\\alpha\\in G_{x_n}^{W_1},\\] establishing \\eqref{AaH4.2} for $m=1$. If necessary we can increase $n_1$ to ensure \\eqref{AaH4.3} holds for $m=1$ by considering \\eqref{AaH4.1}. Assuming that we have constructed $n_1n_{m-1}$ such that \\eqref{AaH4.2} holds, and again, if necessary, increase $n_m$ to obtain \\eqref{AaH4.3}.\n\nIf $n_1>1$ then, for each $1\\leqslant n\\big((k-1)\\lambda_z(U_m)+\\delta_m\\big)-\\big(\\lambda_z(U_m)+\\delta_m\/k\\big)\\\\\n&=(k-2)\\lambda_z(U_m)+\\frac{(k-1)}k\\delta_m.\n\\end{align*}\nSo for each $n\\geqslant n_1$ and its associated $m$ we can choose $\\gamma_n^{(2)}\\in G_{x_n}^{W_m}\\backslash K_m\\gamma_n^{(1)}$. We now have\n\\begin{align*}\n\\lambda_{x_n}&\\big(G^{W_m}\\backslash (K_m\\gamma_n^{(1)}\\cup K_m\\gamma_n^{(2)})\\big)\\\\\n&=\\lambda_{x_n}(G^{W_m}\\backslash K_m\\gamma_n^{(1)})-\\lambda_{x_n}\\big((G^{W_m}\\backslash K_m\\gamma_n^{(1)})\\cap K_m\\gamma_n^{(2)}\\big)\\\\\n&\\geqslant\\lambda_{x_n}(G^{W_m}\\backslash K_m\\gamma_n^{(1)})-\\lambda_{x_n}(G^{W_m}\\cap K_m\\gamma_n^{(2)})\\\\\n&>\\Big((k-2)\\lambda_z(U_m)+\\frac{(k-1)}k\\delta_m\\Big)-\\Big(\\lambda_z(U_m)+\\delta_m\/k\\Big)\\\\\n&=(k-3)\\lambda_z(U_m)+\\frac{(k-2)}k\\delta_m,\n\\end{align*}\nenabling us to choose $\\gamma_n^{(3)}\\in G_{x_n}^{W_m}\\backslash (K_m\\gamma_n^{(1)}\\cup K_m\\gamma_n^{(2)})$. By continuing this process, for each $j=3,\\ldots,k$ and each $n\\geqslant n_1$ we have\n\\[\n\\lambda_{x_n}\\Bigg(G^{W_m}\\backslash \\bigg(\\bigcup_{i=1}^{j-1}K_m\\gamma_n^{(i)}\\bigg)\\Bigg)>(k-j)\\lambda_z(U_m)+\\frac{(k-j-1)\\delta_m}k,\n\\]\nenabling us to choose\n\\begin{equation}\\label{eqn_choosing_gammas}\n\\gamma_n^{(j)}\\in G^{W_m}_{x_n}\\backslash \\bigg(\\bigcup_{i=1}^{j-1}K_m\\gamma_n^{(i)}\\bigg).\n\\end{equation}\nNote that for $n_m\\leqslant n0$. By the outer regularity of $\\lambda_z$, there exists an open neighbourhood $U$ of $K$ such that\n\\[\n\\lambda_z(K)\\leqslant \\lambda_z(U)<\\lambda_z(K)+\\epsilon\/2.\n\\]\nBy Urysohn's Lemma there exists $f\\in C_c(G)$ with $0\\leqslant f\\leqslant 1$ such that $f$ is identically one on $K$ and zero off $U$. 
In particular we have\n\\begin{equation}\\label{measure_of_f_near_measure_of_K}\n\\lambda_z(K)\\leqslant\\int f\\,d\\lambda_z<\\lambda_z(K)+\\epsilon\/2.\n\\end{equation}\n\nThe continuity of the Haar system implies $\\int f\\,d\\lambda_{x_n}\\rightarrow \\int f\\,d\\lambda_z$, so there exists $n_0$ such that $n\\geqslant n_0$ implies\n\\[\n\\int f\\,d\\lambda_z-\\epsilon\/2 < \\int f\\,d\\lambda_{x_n}<\\int f\\,d\\lambda_z +\\epsilon\/2.\n\\]\nBy our choice of $f$ we have $\\lambda_{x_n}(K)\\leqslant \\int f\\,d\\lambda_{x_n}$, so\n\\[\n\\lambda_{x_n}(K)\\leqslant\\int f\\,d\\lambda_{x_n}<\\int f\\,d\\lambda_z+\\epsilon\/2.\n\\]\nCombining this with \\eqref{measure_of_f_near_measure_of_K} enables us to observe that for $n\\geqslant n_0$, $\\lambda_{x_n}(K)<\\lambda_z(K)+\\epsilon$, completing the proof.\n\\end{proof}\n\\end{lemma}\n\\begin{lemma}\\label{corollary_astrid_lemma}\nSuppose $G$ is a second-countable groupoid with Haar system $\\lambda$ and let $K$ be a compact subset of $G$. For every $\\epsilon>0$ and $z\\in G^{(0)}$ there exists a neighbourhood $U$ of $z\\in G^{(0)}$ such that $x\\in U$ implies $\\lambda_x(K)<\\lambda_z(K)+\\epsilon$.\n\\begin{proof}\nFix $\\epsilon>0$ and $z\\in G^{(0)}$. Let $\\{U_n\\}$ be a decreasing neighbourhood basis for $z$ in $G^{(0)}$. If our claim is false, then each $U_n$ contains an element $x_n$ such that $\\lambda_{x_n}(K)\\geqslant\\lambda_z(K)+\\epsilon$. But since each $x_n\\in U_n$, $x_n\\rightarrow z$, and so by Lemma \\ref{astrids_lim_sup} there exists $n_0$ such that $n\\geqslant n_0$ implies $\\lambda_{x_n}(K)<\\lambda_z(K)+\\epsilon$, a contradiction.\n\\end{proof}\n\\end{lemma}\n\n\n\\begin{prop}\\label{part_of_AaH4.2_generalisation}\nSuppose $G$ is a second-countable locally-compact Hausdorff groupoid with Haar system $\\lambda$. Suppose that $z\\in G^{(0)}$ with $[z]$ locally closed in $G^{(0)}$ and suppose $\\{x_n\\}$ is a sequence in $G^{(0)}$. Assume that for every open neighbourhood $V$ of $z$ in $G^{(0)}$ such that $G_z^V$ is relatively compact, $\\lambda_{x_n}(G^V)\\rightarrow \\infty$ as $n\\rightarrow\\infty$. Then, for every $k\\geqslant 1$, the sequence $\\{x_n\\}$ converges $k$-times in $G^{(0)}\/G$ to $z$.\n\\begin{proof}\nLet $\\{ K_m\\}$ be an increasing sequence of compact subsets of $G$ such that $G=\\cup_{m\\geqslant 1}\\,\\mathrm{Int}\\, K_m$. By Lemma \\ref{corollary_astrid_lemma}, for each $K_m$ there exists an open neighbourhood $V_m$ of $z$ such that\n\\[\nx\\in V_m\\quad\\text{implies}\\quad \\lambda_x(K_m)<\\lambda_z(K_m)+1.\n\\]\nSince $[z]$ is locally closed, by Lemma~4.1(1) in \\cite{Clark-anHuef2010-preprint} we can crop $V_1$ if necessary to ensure that $G_z^{V_1}$ is relatively compact. By further cropping each $V_m$ we may assume that $\\{V_m\\}$ is a decreasing neighbourhood basis of $z$. By our hypothesis, for each $m$ there exists $n_m$ such that \n\\begin{equation}\\label{lower_bound}\nn\\geqslant n_m\\quad\\text{implies}\\quad\\lambda_{x_n}(G^{V_m})>k\\big(\\lambda_z(K_m)+1\\big).\n\\end{equation}\nNote that for any $\\gamma\\in G_{x_n}^{V_m}$ with $n\\geqslant n_m$, we have $r(\\gamma)\\in V_m$, and so $\\lambda_{r(\\gamma)}(K_m)<\\lambda_z(K_m)+1$. 
By Haar-system invariance we know that $\\lambda_{r(\\gamma)}(K_m)=\\lambda_{x_n}(K_m\\gamma)$, which shows us that\n\\begin{equation}\\label{upper_bound}\n\\lambda_{x_n}(K_m\\gamma)<\\lambda_z(K_m)+1.\n\\end{equation}\nIf necessary we can increase the elements of $\\{n_m\\}$ so that it is a strictly increasing sequence.\n\nWe now proceed as in the proof of Proposition \\ref{AaH_prop4_1_1}. For all $nk\\big(\\lambda_z(K_m)+1\\big) - \\big(\\lambda_z(K_m)+1\\big)\\\\\n&=(k-1)\\big(\\lambda_z(K_m)+1\\big).\n\\end{align*}\nWe can thus choose $\\gamma_n^{(2)}\\in G_{x_n}^{V_m}\\backslash K_m\\gamma_n^{(1)}$ for each $n\\geqslant n_1$. This now gives us\n\\begin{align*}\n\\lambda_{x_n}&(G^{V_m}\\backslash (K_m\\gamma_n^{(1)}\\cup K_m\\gamma_n^{(2)}))\\\\\n&=\\lambda_{x_n}(G^{V_m}\\backslash K_m\\gamma_n^{(1)})-\\lambda_{x_n}\\big( (G^{V_m}\\backslash K_m\\gamma_n^{(1)})\\cap K_m\\gamma_n^{(2)}\\big)\\\\\n&\\geqslant \\lambda_{x_n}(G^{V_m}\\backslash K_m\\gamma_n^{(1)})-\\lambda_{x_n}(K_m\\gamma_n^{(2)})\\\\\n&>(k-1)\\big(\\lambda_z(K_m)+1\\big) - \\big(\\lambda_z(K_m)+1\\big)\\\\\n&=(k-2)\\big(\\lambda_z(K_m)+1\\big).\n\\end{align*}\nContinuing in this manner we can choose \n\\[\n\\gamma_n^{(j)}\\in G^{V_m}_{x_n}\\backslash \\bigg(\\bigcup_{i=1}^{j-1}K_m\\gamma_n^{(i)}\\bigg)\n\\]\nfor every $n\\geqslant n_1$ and $j=3,\\ldots,k$. The tail of the proof of Proposition \\ref{AaH_prop4_1_1} establishes our desired result.\n\\end{proof}\n\\end{prop}\n\n\n\\section{Measure ratios and bounds on lower multiplicity}\\label{sec_fun}\nIn this section we show that upper bounds on measure ratios along orbits give upper bounds on multiplicities. A subset $S$ of a topological space $X$ is {\\em locally closed} if there exist an open set $U$ of $X$ and a closed set $V$ of $X$ such that $S=U\\cap V$; this is equivalent to $S$ being open in the closure of $S$ with the subspace topology by, for example, \\cite[Lemma~1.25]{Williams2007}.\n\n\\begin{lemma}\\label{based_on_Ramsay}\nSuppose $G$ is a second-countable locally-compact Hausdorff groupoid. Suppose $z\\in G^{(0)}$ and $[z]$ is locally closed. Then the restriction of $r$ to $G_z\/(G|_{\\{z\\}})$ is a homeomorphism onto $[z]$. If in addition $G$ is principal, then the restriction of $r$ to $G_z$ is a homeomorphism onto $[z]$.\n\\begin{proof}\nWe consider the transitive groupoid $G|_{[z]}$. Since $[z]$ is locally closed, $G|_{[z]}$ is a second-countable locally-compact Hausdorff groupoid. Thus $G|_{[z]}$ is Polish by, for example, \\cite[Lemma~6.5]{Williams2007}. Now \\cite[Theorem~2.1]{Ramsay1990} applies to give the result.\n\\end{proof}\n\\end{lemma}\n\nTheorem~\\ref{M2_thm} is based on \\cite[Theorem~3.1]{Archbold-anHuef2006}; it is only an intermediary result which will be used to prove a sharper bound in Theorem \\ref{M_thm}.\n\\begin{theorem}\\label{M2_thm}\nSuppose $G$ is a second-countable locally-compact Hausdorff principal groupoid with Haar system $\\lambda$. Let $M\\in\\mathbb R$ with $M\\geqslant 1$, suppose $z\\in G^{(0)}$ such that $[z]$ is locally closed and let $\\{x_n\\}$ be a sequence in $G^{(0)}$. Suppose there exists an open neighbourhood $V$ of $z$ in $G^{(0)}$ such that $G_z^V$ is relatively compact and \n\\[\n\\lambda_{x_n}(G^V)\\leqslant M\\lambda_z(G^V)\n\\]\nfrequently. Then $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\{\\mathrm{L}^{x_n}\\})\\leqslant\\lfloor M^2\\rfloor$.\n\\begin{proof}\nFix $\\epsilon>0$ such that $M^2(1+\\epsilon)^2<\\lfloor M^2\\rfloor +1$. 
We will build a function $D\\in C_c(G)$ such that $\\mathrm{L}^z(D^\\ast\\ast D)$ is a rank-one projection and\n\\[\n\\mathrm{Tr}\\big(\\mathrm{L}^{x_n}(D^\\ast\\ast D)\\big)0$ such that\n\\[\n\\delta<\\frac{\\epsilon\\lambda_z(G^V)}{1+\\epsilon}<\\lambda_z(G^V).\n\\]\nSince $\\lambda_z$ is inner regular on open sets and $G_z^V$ is $G_z$-open, there exists a $G_z$-compact subset $W$ of $G_z^V$ such that\n\\[\n0<\\lambda_z(G_z^V)-\\delta<\\lambda_z(W).\n\\]\nSince $W$ is $G_z$-compact there exists a $G_z$-compact neighbourhood $W_1$ of $W$ that is contained in $G_z^V$ and there exists a continuous function $g:G_z\\rightarrow [0,1]$ that is identically one on $W$ and zero off the interior of $W_1$. We have\n\\[\n\\lambda_z(G^V)-\\delta=\\lambda_z(G_z^V)-\\delta<\\lambda_z(W)\\leqslant\\int_{G_z} g(t)^2\\, d\\lambda_z(t)=\\|g\\|_z^2,\n\\]\nand hence\n\\begin{equation}\\label{AaH3.1}\n\\frac{\\lambda_z(G^V)}{\\|g\\|_z^2}<1+\\frac{\\delta}{\\|g\\|_z^2}<1+\\frac{\\delta}{\\lambda_z(G^V)-\\delta}<1+\\epsilon.\n\\end{equation}\n\nBy Lemma \\ref{based_on_Ramsay} the restriction $\\tilde{r}$ of $r$ to $G_z$ is a homeomorphism onto $[z]$. So there exists a continuous function $g_1:\\tilde{r}(W_1)\\rightarrow [0,1]$ such that $g_1\\big(\\tilde{r}(\\gamma)\\big)=g(\\gamma)$ for all $\\gamma\\in W_1$. Thus $\\tilde{r}(W_1)$ is $[z]$-compact, which implies that $\\tilde{r}(W_1)$ is $G^{(0)}$-compact. Since we know that $G^{(0)}$ is second countable and Hausdorff, Tietze's Extension Theorem can be applied to extend $g_1$ to a continuous map\n\\[\ng_2:G^{(0)}\\rightarrow [0,1].\n\\]\nBecause $\\tilde{r}(W_1)$ is a compact subset of the open set $V$, there exist a compact neighbourhood $P$ of $\\tilde{r}(W_1)$ contained in $V$ and a continuous function $h:G^{(0)}\\rightarrow [0,1]$ that is identically one on $\\tilde{r}(W_1)$ and zero off the interior of $P$. Note that $h$ has compact support that is contained in $P$.\n\nWe set $f(x)=h(x)g_2(x)$. Then $f\\in C_c(G^{(0)})$ with $0\\leqslant f\\leqslant 1$ and \n\\begin{equation}\\label{supp_f_contained_in_V}\n\\mathrm{supp} f\\subset \\mathrm{supp} h\\subset P\\subset V.\n\\end{equation}\nNote that\n{\\allowdisplaybreaks\n\\begin{align}\n\\|f\\circ r\\|_z^2&=\\int_{G_z} f\\big(\\tilde{r}(\\gamma)\\big)^2\\,d\\lambda_z(\\gamma)\\notag\\\\\n&=\\int_{G_z} h\\big(\\tilde{r}(\\gamma)\\big)^2g_2\\big(\\tilde{r}(\\gamma)\\big)^2\\, d\\lambda_z(\\gamma)\\notag\\\\\n&\\geqslant\\int_{W_1} h\\big(\\tilde{r}(\\gamma)\\big)^2 g(\\gamma)^2\\, d\\lambda_z(\\gamma)\\notag\\\\\n&=\\int_{W_1}g(\\gamma)^2\\,d\\lambda_z(\\gamma)\\notag\\\\\n&=\\|g\\|_z^2\\label{AaH3.2}\n\\end{align}}since $\\mathrm{supp}\\, g\\subset W_1$ and $h$ is identically one on $\\tilde{r}(W_1)$. We now define $F\\in C_c(G^{(0)})$ by\n\\begin{equation}\\label{definition_F}\nF(x)=\\frac{f(x)}{\\|f\\circ r\\|_z}.\n\\end{equation}\nThen $\\|F\\circ r\\|_z=1$ and\n\\begin{equation}\\label{unmotivated_ref}\nF\\circ r(\\gamma)\\ne 0\\implies h\\big(r(\\gamma)\\big)\\ne 0 \\implies r(\\gamma)\\in V \\implies \\gamma\\in G^V.\n\\end{equation}\n\nLet $N=\\mathrm{supp}\\, F$ so that $N=\\mathrm{supp}\\, f\\subset V$ by \\eqref{supp_f_contained_in_V} and \\eqref{definition_F}. Since $G_z^V$ is relatively compact by our hypothesis, the set $\\overline{G_z^N}$ is compact. Let $b\\in C_c(G)$ be a function that is identically one on $(\\overline{G_z^N})(\\overline{G_z^N})^{-1}$ and has range contained in $[0,1]$. We can assume that $b$ is self-adjoint by considering $\\frac12(b+b^*)$ if necessary. 
Define $D\\in C_c(G)$ by\n\\[\nD(\\gamma):=F\\big(r(\\gamma)\\big)F\\big(s(\\gamma)\\big)b(\\gamma).\n\\]\nFor $\\xi\\in L^2(G,\\lambda_u)$ and $\\gamma\\in G$ we have\n\\begin{align*}\n\\big(\\mathrm{L}^{u}(D)\\xi\\big)(\\gamma)&=\\int_GD(\\gamma\\alpha^{-1})\\xi(\\alpha)\\,d\\lambda_u(\\alpha)\\\\\n&=\\int_G F\\big(r(\\gamma)\\big)F\\big(s(\\alpha^{-1})\\big) b(\\gamma\\alpha^{-1})\\xi(\\alpha)\\, d\\lambda_u(\\alpha)\\\\\n&=F\\big(r(\\gamma)\\big) \\int_G F\\big(r(\\alpha)\\big) b(\\gamma\\alpha^{-1})\\xi(\\alpha)\\, d\\lambda_u(\\alpha).\n\\end{align*}\n\nIn the case where $u=z$, if $\\alpha,\\gamma\\in \\mathrm{supp}\\, F\\circ r\\cap s^{-1}(z)$, then $r(\\alpha),r(\\gamma)\\in \\mathrm{supp}\\, F=N$ and $\\gamma,\\alpha\\in G_z^N$. This implies $b(\\gamma\\alpha^{-1})=1$, so\n\\begin{align*}\n\\big(\\mathrm{L}^z(D)\\xi\\big)(\\gamma)&=\\int_G F\\big(r(\\alpha)\\big) \\xi(\\alpha)\\, d\\lambda_z(\\alpha)\\\\\n&=(\\xi\\,|\\,F\\circ r)_z F\\circ r(\\gamma),\n\\end{align*}\nand $L^z(D)$ is a rank-one projection.\n\nBy the hypothesis on $V$ there exists a subsequence $\\{x_{n_i}\\}$ of $\\{x_n\\}$ such that\n\\[\n\\lambda_{x_{n_i}}(G^V)\\leqslant M\\lambda_z(G^V)\n\\]\nfor all $i\\geqslant 1$. If we define $E:=\\{\\gamma\\in G:F\\big(r(\\gamma)\\big)\\ne 0\\}$ then $E$ is open with\n\\begin{align}\n\\lambda_{x_{n_i}}(E)&\\leqslant\\lambda_{x_{n_i}}(G^V)\\text{\\quad(using \\eqref{unmotivated_ref})}\\notag\\\\\n&\\leqslant M\\lambda_z(G^V)\\label{measure_of_E_finite}\n\\end{align}\nand\n\\begin{equation}\\label{AaH3.3}\n\\int_G\\big(F\\circ r(\\gamma)\\big)^2\\, d\\lambda_{x_{n_i}}(\\gamma)\\leqslant \\frac{\\lambda_{x_{n_i}}(E)}{\\|f\\circ r\\|_z^2}\n\\leqslant \\frac{M\\lambda_z(G^V)}{\\|g\\|_z^2}.\n\\end{equation}\nby \\eqref{AaH3.2}. Consider the continuous function\n\\[\nT(\\alpha,\\beta):=F\\big(r(\\alpha)\\big) F\\big(r(\\beta)\\big) b(\\alpha\\beta^{-1}).\n\\]\nNote that\n\\begin{align*}\n\\int_G &T(\\alpha,\\beta)^2\\, d(\\lambda_{x_{n_i}}\\times\\lambda_{x_{n_i}})(\\alpha,\\beta)\\\\\n&=\\int_G F\\big(r(\\alpha)\\big)^2F\\big(r(\\beta)\\big)^2b(\\alpha\\beta^{-1})^2\\, d(\\lambda_{x_{n_i}}\\times\\lambda_{x_{n_i}})(\\alpha,\\beta)\\\\\n&\\leqslant\\|F\\|_\\infty^4\\int_G \\chi_{E\\times E}(\\alpha,\\beta)\\,d(\\lambda_{x_{n_i}}\\times\\lambda_{x_{n_i}})(\\alpha,\\beta)\\\\\n&=\\|F\\|_\\infty^4\\lambda_{x_{n_i}}(E)^2,\n\\end{align*}\nwhich is finite by \\eqref{measure_of_E_finite}. Thus\n\\[T\\in L^2(G\\times G,\\lambda_{x_{n_i}}\\times \\lambda_{x_{n_i}}),\\]\nand since $T$ is conjugate symmetric, \\cite[Proposition~3.4.16]{Pedersen1989} implies that $\\mathrm{L}^{x_{n_i}}(D)$ is the self-adjoint Hilbert-Schmidt operator on $L^2(G,\\lambda_{x_{n_i}})$ with kernel $T$. 
It follows that $\\mathrm{L}^{x_{n_i}}(D^*\\ast D)$ is a trace-class operator, and since we equip the Hilbert-Schmidt operators with the trace norm, we have\n\\[\n\\mathrm{Tr}\\,\\mathrm{L}^{x_{n_i}}(D^*\\ast D)=\\|T\\|_{L^2(\\lambda_{x_{n_i}}\\times\\lambda_{x_{n_i}})}^2.\n\\]\nApplying Fubini's Theorem to $T$ now gives\n{\\allowdisplaybreaks\\begin{align}\n\\mathrm{Tr}\\,\\mathrm{L}^{x_{n_i}}&(D^*\\ast D)\\notag\\\\\n&=\\int_G\\int_G F\\big(r(\\alpha)\\big)^2 F\\big(r(\\beta)\\big)^2b(\\alpha\\beta^{-1})^2\\, d\\lambda_{x_{n_i}}(\\alpha)\\, d\\lambda_{x_{n_i}}(\\beta)\\notag\\\\\n&\\leqslant\\bigg(\\int_G F\\big(r(\\alpha)\\big)^2\\, d\\lambda_{x_{n_i}}(\\alpha)\\bigg)^2\\notag\\\\\n&\\leqslant\\frac{M^2\\lambda_z(G^V)^2}{\\|g\\|_z^4}\\text{\\quad (using \\eqref{AaH3.3})}\\notag\\\\\n&0$ there exists $n_0$ such that, for every $n\\geqslant n_0$ and every $\\gamma\\in G_{x_n}^W$,\n\\[\n\\lambda_{x_n}(K\\gamma\\cap G^W)<\\lambda_z(G^W)+\\delta.\n\\]\n\\begin{proof}\nSuppose not. Then, by passing to a subsequence if necessary, for each $n$ there exists $\\gamma_n\\in G_{x_n}^W$ such that\n\\begin{equation}\\label{assumption_in_unbroken_lemma}\n\\lambda_{x_n}(K\\gamma_n\\cap G^W)\\geqslant \\lambda_z(G^W)+\\delta.\n\\end{equation}\nSince each $r(\\gamma_n)$ is in the compact set $W$, we can pass to a subsequence so that $r(\\gamma_n)\\rightarrow y$ for some $y\\in G^{(0)}$. This implies $[r(\\gamma_n)]\\rightarrow [y]$, but $[r(\\gamma_n)]=[s(\\gamma_n)]=[x_n]$ and $[x_n]\\rightarrow [z]$ uniquely, so $[y]=[z]$. Choose $\\psi\\in G$ with $s(\\psi)=z$ and $r(\\psi)=y$. By Haar-system invariance\n\\[\n\\lambda_{x_n}(K\\gamma_n\\cap G^W)=\\lambda_{r(\\gamma_n)}(K\\cap G^W),\n\\]\nso by applying Lemma \\ref{astrids_lim_sup} with the compact space $K\\cap G^W$ and $\\{r(\\gamma_n)\\}$ converging to $y$,\n\\begin{align*}\n\\underset{\\scriptstyle n}{\\lim\\,\\sup}\\,\\lambda_{x_n}(K\\gamma_n\\cap G^W)\n&=\\underset{\\scriptstyle n}{\\lim\\,\\sup}\\,\\lambda_{r(\\gamma_n)}(K\\cap G^W)\\\\\n&\\leqslant\\lambda_y(K\\cap G^W)\\quad\\text{(by Lemma \\ref{astrids_lim_sup})}\\\\\n&=\\lambda_z(K\\psi\\cap G^W)\\quad\\text{(Haar-system invariance)}\\\\\n&\\leqslant\\lambda_z(G^W).\n\\end{align*}\nThis contradicts our assertion \\eqref{assumption_in_unbroken_lemma}.\n\\end{proof}\n\\end{lemma}\n\nThe following is a generalisation of \\cite[Lemma~3.3]{Archbold-anHuef2006}.\n\\begin{lemma}\\label{AaH_Lemma3.3}\nSuppose $G$ is a groupoid with Haar system $\\lambda$. Fix $\\epsilon>0$, $z\\in G^{(0)}$ and let $V$ be an open neighbourhood of $z\\in G^{(0)}$ such that $\\lambda_z(G^V)<\\infty$. Then there exists an open relatively-compact neighbourhood $V_1$ of $z$ such that $\\overline{V_1}\\subset V$ and\n\\[\n\\lambda_z(G^V)-\\epsilon<\\lambda_z(G^{V_1})\\leqslant\\lambda_z(G^{\\overline{V_1}})\\leqslant\\lambda_z(G^V)<\\lambda_z(G^{V_1})+\\epsilon.\n\\]\n\\begin{proof}\nWe use $G_z$ equipped with the subspace topology to find a compact subset $\\lambda_z$-estimate of $V$. This estimate is then used to obtain the required open set $V_1$. Since $G_z^V$ is $G_z$-open, by the regularity of $\\lambda_z$ there exists a compact subset $W$ of $G_z^V$ such that $\\lambda_z(W)>\\lambda_z(G_z^V)-\\epsilon$. Then $r(W)$ is compact and contained in $V$, so there exists an open relatively-compact neighbourhood $V_1$ of $r(W)$ such that $\\overline{V_1}\\subset V$. 
Then\n\\begin{align*}\n\\lambda_z(G^V)-\\epsilon<\\lambda_z(W)&\\leqslant \\lambda_z(G^{V_1}) \\leqslant \\lambda_z(G^{\\overline{V_1}})\\leqslant \\lambda_z(G^V)\\\\&<\\lambda_z(W)+\\epsilon\\leqslant\\lambda_z(G^{V_1})+\\epsilon,\n\\end{align*}\nas required.\n\\end{proof}\n\\end{lemma}\n\nThe following lemma is equivalent to the claim in \\cite[Proposition~3.6]{Clark2007} that $[x]\\mapsto [L^x]$ from $G^{(0)}\/G$ to the spectrum of $C^*(G)$ is open.\n\\begin{lemma}\\label{lemma_ind_reps_converge_imply_orbits_converge}\nSuppose $G$ is a second-countable locally-compact Hausdorff groupoid with Haar system $\\lambda$. If $\\{x_n\\}$ is a sequence in $G^{(0)}$ with $\\mathrm{L}^{x_n}\\rightarrow \\mathrm{L}^{z}$, then $[x_n]\\rightarrow [z]$.\n\\begin{proof}\nWe prove the contrapositive. Suppose $[x_n]\\nrightarrow [z]$. Then there exists an open neighbourhood $U_0$ of $[z]$ in $G^{(0)}\/G$ such that $[x_n]$ is frequently not in $U_0$. Let $q:G^{(0)}\\rightarrow G^{(0)}\/G$ be the quotient map $x\\mapsto [x]$. Then $U_1:=q^{-1}(U_0)$ is an open invariant neighbourhood of $z$ and $x_n\\notin U_1$ frequently. Note that $C^*(G|_{U_1})$ is isomorphic to a closed two-sided ideal $I$ of $C^*(G)$ (see \\cite[Lemma~2.10]{Muhly-Renault-Williams1996}). \n\nWe now claim that $I\\subset\\ker\\,\\mathrm{L}^{x_n}$ whenever $x_n\\notin U_1$. Suppose $x_n\\notin U_1$ and recall from Remark \\ref{measure_induced_epsilon_x} that $\\mathrm{L}^{x_n}$ acts on $L^2(G,\\lambda_{x_n})$. Fix $f\\in C_c(G)$ such that $f(\\gamma)=0$ whenever $\\gamma\\notin G|_{U_1}$ and fix $\\xi\\in L^2(G,\\lambda_{x_n})$. Then by Remark \\ref{measure_induced_epsilon_x} we have\n\\[\n\\|\\mathrm{L}^{x_n}(f)\\xi\\|_{x_n}^2=\\int_G\\bigg(\\int_G f(\\gamma\\alpha^{-1})\\xi(\\alpha)\\, d\\lambda_{x_n}(\\alpha)\\bigg)^2\\, d\\lambda_{x_n}(\\gamma).\n\\]\nWhen evaluating the inner integrand, we have $s(\\alpha)=s(\\gamma)=x_n$, so $\\gamma\\alpha^{-1}\\in G|_{[x_n]}$. Since $U_1$ is invariant with $x_n\\notin U_1$, it follows that $\\gamma\\alpha^{-1}\\notin G|_{U_1}$, and so $f(\\gamma\\alpha^{-1})=0$. Thus\n\\[\n\\|\\mathrm{L}^{x_n}(f)\\xi\\|_{x_n}=0,\n\\]\nand since $\\xi$ was fixed arbitrarily, $\\mathrm{L}^{x_n}(f)=0$. This implies that $I\\subset \\ker\\,\\mathrm{L}^{x_n}$.\n\nWe now conclude by observing that since $I\\subset\\ker\\,\\mathrm{L}^{x_n}$ frequently, $\\mathrm{L}^{x_n}\\notin \\hat{I}$ frequently. But $\\hat{I}$ is an open neighbourhood of $\\mathrm{L}^z$, so $\\mathrm{L}^{x_n}\\nrightarrow \\mathrm{L}^z$.\n\\end{proof}\n\\end{lemma}\n\nWe may now proceed to strengthening the $\\lfloor M^2\\rfloor$ bound in Theorem \\ref{M2_thm}. This theorem is a generalisation of \\cite[Theorem~3.5]{Archbold-anHuef2006}.\n\\begin{theorem}\\label{M_thm}\nSuppose $G$ is a second-countable locally-compact Hausdorff principal groupoid with Haar system $\\lambda$. Let $M\\in\\mathbb R$ with $M\\geqslant 1$, suppose $z\\in G^{(0)}$ such that $[z]$ is locally closed and let $\\{x_n\\}$ be a sequence in $G^{(0)}$. Suppose there exists an open neighbourhood $V$ of $z$ in $G^{(0)}$ such that $G_z^V$ is relatively compact and \n\\[\n\\lambda_{x_n}(G^V)\\leqslant M\\lambda_z(G^V)\n\\]\nfrequently. Then $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\{\\mathrm{L}^{x_n}\\})\\leqslant\\lfloor M\\rfloor$.\n\\begin{proof}\nIf $\\mathrm{L}^{x_n}$ does not converge to $\\mathrm{L}^z$, then $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\{\\mathrm{L}^{x_n}\\})=0<\\lfloor M \\rfloor$. So we assume from now on that $\\mathrm{L}^{x_n}\\rightarrow \\mathrm{L}^z$. 
Lemma \\ref{lemma_ind_reps_converge_imply_orbits_converge} now shows that $[x_n]\\rightarrow [z]$.\n\nNext we claim that we may assume, without loss of generality, that $[z]$ is the unique limit of $\\{[x_n]\\}$ in $G^{(0)}\/G$. To see this, note that $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\{\\mathrm{L}^{x_n}\\})\\leqslant\\lfloor M^2\\rfloor<\\infty$ by Theorem \\ref{M2_thm}. Hence, by \\cite[Proposition~3.4]{Archbold-anHuef2006}, $\\{\\mathrm{L}^z\\}$ is open in the set of limits of $\\{\\mathrm{L}^{x_n}\\}$. So there exists an open neighbourhood $U_2$ of $\\mathrm{L}^z$ in $C^*(G)^\\wedge$ such that $\\mathrm{L}^z$ is the unique limit of $\\{\\mathrm{L}^{x_n}\\}$ in $U_2$.\n\nBy \\cite[Proposition~2.5]{Muhly-Williams1990} there is a continuous function $\\mathrm{L}:G^{(0)}\/G\\rightarrow C^*(G)^\\wedge$ such that $[x]\\mapsto\\mathrm{L}^x$ for all $x\\in G^{(0)}$. Define $p:G^{(0)}\\rightarrow G^{(0)}\/G$ by $p(x)=[x]$ for all $x\\in G^{(0)}$. Then $p$ is continuous, and\n\\[\nY:=(\\mathrm{L}\\circ p)^{-1}(U_2).\n\\]\nis an open $G$-saturated neighbourhood of $z$ in $G^{(0)}$. Note that $x_n\\in Y$ eventually.\n\nNow suppose that, for some $y\\in Y$, $[x_n]\\rightarrow [y]$ in $Y\/G$ and hence in $G^{(0)}\/G$. Then $\\mathrm{L}^{x_n}\\rightarrow \\mathrm{L}^y$ by \\cite[Proposition~2.5]{Muhly-Williams1990}, and $\\mathrm{L}^y\\in U_2$ since $y\\in(\\mathrm{L}\\circ p)^{-1}(U_2)$. But $\\{\\mathrm{L}^{x_n}\\}$ has the unique limit $\\mathrm{L}^z$ in $U_2$, so $\\mathrm{L}^z=\\mathrm{L}^y$ and hence $\\overline{[z]}=\\overline{[y]}$. Since $[z]$ is locally closed, Lemma \\ref{lemma_orbits_equal} shows that $[z]=[y]$ in $G^{(0)}$ and hence in $Y$.\n\nWe know $Y$ is an open saturated subset of $G^{(0)}$, so $C^*(G|_Y)$ is isomorphic to a closed two-sided ideal $J$ of $C^*(G)$. We can apply \\cite[Proposition~5.3]{Archbold-Somerset-Spielberg1997} with the $C^*$-subalgebra $J$ to see that $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\{\\mathrm{L}^{x_n}\\})$ is the same whether we compute it in the ideal $J$ or in $C^*(G)$. Since $Y$ is $G$-invariant, $G_z^V=G_z^{V\\cap Y}$ and eventually $G_{x_n}^V=G_{x_n}^{V\\cap Y}$. We may thus consider $G|_Y$ instead of $G$ and therefore assume that $[z]$ is the unique limit of $[x_n]$ in $G^{(0)}\/G$ as claimed.\n\nAs in \\cite{Archbold-anHuef2006}, the idea for the rest of the proof is the same as in Theorem \\ref{M2_thm}, although more precise estimates are used. Fix $\\epsilon>0$ such that $M(1+\\epsilon)^2<\\lfloor M\\rfloor +1$ and choose $\\kappa>0$ such that\n\\begin{equation}\\label{choice_of_kappa}\n\\kappa<\\frac{\\epsilon\\lambda_z(G^V)}{1+\\epsilon}<\\lambda_z(G^V).\n\\end{equation}\nBy Lemma \\ref{AaH_Lemma3.3} there exists an open relatively compact neighbourhood $V_1$ of $z$ such that $\\overline{V_1}\\subset V$ and\n\\[\n0<\\lambda_z(G^V)-\\kappa<\\lambda_z(G^{V_1})\\leqslant\\lambda_z(G^{\\overline{V_1}})\\leqslant\\lambda_z(G^V)<\\lambda_z(G^{V_1})+\\kappa.\n\\]\nChoose a subsequence $\\{x_{n_i}\\}$ of $\\{x_n\\}$ such that\n\\[\n\\lambda_{x_{n_i}}(G^V)\\leqslant M\\lambda_z(G^V)\n\\]\nfor all $i\\geqslant 1$. 
Then\n\\begin{align}\n\\lambda_{x_{n_i}}(G^{V_1})&\\leqslant\\lambda_{x_{n_i}}(G^V)\\notag\\\\\n&\\leqslant M\\lambda_z(G^V)\\notag\\\\\n&0$ such that $\\delta<\\lambda_z(G^{V_1})$ and\n\\begin{equation}\\label{AaH3.9}\n\\frac{\\lambda_z(G^{V_1})\\big(\\lambda_z(G^{\\overline{V_1}})+\\delta\\big)}{\\big(\\lambda_z(G^{V_1})-\\delta\\big)^2}\n<\n\\frac{\\lambda_z(G^{V_1})\\big(\\lambda_z(G^{V_1})+\\kappa+\\delta\\big)}{\\big(\\lambda_z(G^{V_1})-\\delta\\big)^2}<1+\\epsilon.\n\\end{equation}\n\nWe will now construct a function $F\\in C_c(G^{(0)})$ with support inside $V_1$. Since $\\lambda_z$ is inner regular on open sets and $G_z^{V_1}$ is $G_z$-open, there exists a $G_z$-compact subset $W$ of $G_z^{V_1}$ such that\n\\[\n0<\\lambda_z(G_z^{V_1})-\\delta<\\lambda_z(W).\n\\]\nSince $W$ is $G_z$-compact there exists a $G_z$-compact neighbourhood $W_1$ of $W$ that is contained in $G_z^{V_1}$ and there exists a continuous function $g:G_z\\rightarrow [0,1]$ that is identically one on $W$ and zero off the interior of $W_1$. We have\n\\begin{equation}\\label{AaH3.10}\n\\lambda_z(G^{V_1})-\\delta<\\lambda_z(W)\n\\leqslant\\int_{G_z} g(t)^2\\, d\\lambda_z(t)\n=\\|g\\|_z^2,\n\\end{equation}\nBy Lemma \\ref{based_on_Ramsay} the restriction $\\tilde{r}$ of $r$ to $G_z$ is a homeomorphism onto $[z]$. So there exists a continuous function $g_1:\\tilde{r}(W_1)\\rightarrow [0,1]$ such that $g_1\\big(\\tilde{r}(\\gamma)\\big)=g(\\gamma)$ for all $\\gamma\\in W_1$. Thus $\\tilde{r}(W_1)$ is $[z]$-compact, which implies that $\\tilde{r}(W_1)$ is $G^{(0)}$-compact. Since we know that $G^{(0)}$ is second countable and Hausdorff, Tietze's Extension Theorem can be applied to show that $g_1$ can be extended to a continuous map\n\\[\ng_2:G^{(0)}\\rightarrow [0,1].\n\\]\nBecause $\\tilde{r}(W_1)$ is a compact subset of the open set $V_1$, there exist a compact neighbourhood $P$ of $\\tilde{r}(W_1)$ contained in $V_1$ and a continuous function $h:G^{(0)}\\rightarrow [0,1]$ that is identically one on $\\tilde{r}(W_1)$ and zero off the interior of $P$. Note that $h$ has compact support that is contained in $P$.\n\nWe set $f(x)=h(x)g_2(x)$. Then $f\\in C_c(G^{(0)})$ with $0\\leqslant f\\leqslant 1$ and \n\\begin{equation}\\label{supp_f_contained_in_V_2}\n\\mathrm{supp} f\\subset \\mathrm{supp} h\\subset P\\subset V_1.\n\\end{equation}\nNote that\n\\begin{align}\n\\|f\\circ r\\|_z^2&=\\int_{G_z} f\\big(\\tilde{r}(\\gamma)\\big)^2\\,d\\lambda_z(\\gamma)\\notag\\\\\n&=\\int_{G_z} h\\big(\\tilde{r}(\\gamma)\\big)^2g_2\\big(\\tilde{r}(\\gamma)\\big)^2\\, d\\lambda_z(\\gamma)\\notag\\\\\n&\\geqslant\\int_{W_1} h\\big(\\tilde{r}(\\gamma)\\big)^2 g(\\gamma)^2\\, d\\lambda_z(\\gamma)\\notag\\\\\n&=\\int_{W_1}g(\\gamma)^2\\,d\\lambda_z(\\gamma)\\notag\\\\\n&=\\|g\\|_z^2\\label{AaH3.11}\n\\end{align}\nsince $\\mathrm{supp}\\, g\\subset W_1$ and $h$ is identically one on $\\tilde{r}(W_1)$. We now define $F\\in C_c(G^{(0)})$ by\n\\begin{equation}\\label{definition_F_2}\nF(x)=\\frac{f(x)}{\\|f\\circ r\\|_z}.\n\\end{equation}\nThen $\\|F\\circ r\\|_z=1$ and\n\\begin{equation}\\label{unmotivated_ref_2}\nF\\circ r(\\gamma)\\ne 0\\implies h\\big(r(\\gamma)\\big)\\ne 0 \\implies r(\\gamma)\\in V_1 \\implies \\gamma\\in G^{V_1}.\n\\end{equation}\n\nLet $N=\\mathrm{supp}\\, F$. Suppose $K$ is an open relatively compact symmetric neighbourhood of $(\\overline{G_z^N})(\\overline{G_z^N})^{-1}$ in $G$ and choose $b\\in C_c(G)$ such that $b$ is identically one on $(\\overline{G_z^N})(\\overline{G_z^N})^{-1}$ and identically zero off $K$. 
As in Theorem \\ref{M2_thm} we may assume that $b$ is self-adjoint by considering $\\frac12(b+b^*)$. Define $D\\in C_c(G)$ by $D(\\gamma):=F\\big(r(\\gamma)\\big)F\\big(s(\\gamma)\\big)b(\\gamma)$. By the same argument as in Theorem \\ref{M2_thm}, $\\mathrm{L}^z(D)$, and hence $\\mathrm{L}^z(D^*\\ast D)$, is the rank one projection determined by the unit vector $F\\circ r\\in L^2(G,\\lambda_z)$. From \\eqref{Aah3.4} we have\n\\begin{align*}\n\\mathrm{Tr}\\big(\\mathrm{L}^{x_{n_i}}&(D^*\\ast D)\\big)\\\\\n&=\\int_G F\\big(r(\\beta)\\big)^2\\bigg(\\int_G F\\big(r(\\alpha)\\big)^2 b(\\alpha\\beta^{-1})^2\\, d\\lambda_{x_{n_i}}(\\alpha)\\bigg)\\, d\\lambda_{x_{n_i}}(\\beta).\n\\end{align*}\nSince $b$ is identically zero off $K$, the inner integrand is zero unless $\\alpha\\beta^{-1}\\in K$. Combining this with \\eqref{supp_f_contained_in_V_2} and the fact that $\\mathrm{supp}\\,\\lambda_{x_{n_i}}\\subset G_{x_{n_i}}$ enables us to see that this inner integrand is zero unless $\\alpha\\in G_{x_{n_i}}^{V_1}\\cap K\\beta$. Thus\n\\begin{align*}\n\\mathrm{Tr}\\big(\\mathrm{L}^{x_{n_i}}&(D^*\\ast D)\\big)\\\\\n&\\leqslant\\int_{\\beta\\in G_{x_{n_i}}^{V_1}} F\\big(r(\\beta)\\big)^2\\bigg(\\int_{\\alpha\\in G_{x_{n_i}}^{V_1}\\cap K\\beta} F\\big(r(\\alpha)\\big)^2\\, d\\lambda_{x_{n_i}}(\\alpha)\\bigg)\\, d\\lambda_{x_{n_i}}(\\beta).\\\\\n&\\leqslant\\frac{1}{\\|f\\circ r\\|_z^4}\\int_{\\beta\\in G_{x_{n_i}}^{V_1}} 1\\bigg(\\int_{\\alpha\\in G_{x_{n_i}}^{V_1}\\cap K\\beta} 1\\, d\\lambda_{x_{n_i}}(\\alpha)\\bigg)\\, d\\lambda_{x_{n_i}}(\\beta).\n\\end{align*}\n\nSince $\\overline{V_1}$ and $\\overline{K}$ are compact, by Lemma \\ref{the_unbroken_lemma} there exists $i_0$ such that for every $i\\geqslant i_0$ and any $\\beta\\in G_{x_{n_i}}^{\\overline{V_1}}$,\n\\[\n\\lambda_{x_{n_i}}(K\\beta\\cap G^{\\overline{V_1}})<\\lambda_z(G^{\\overline{V_1}})+\\delta.\n\\]\nSo, provided $i\\geqslant i_0$,\n{\\allowdisplaybreaks\\begin{align*}\n\\mathrm{Tr}\\big(\\mathrm{L}^{x_{n_i}}(D^*\\ast D)\\big)&\\leqslant \\frac{1}{\\|f\\circ r\\|_z^4}\\int_{\\beta\\in G_{x_{n_i}}^{V_1}}\\lambda_{x_{n_i}}(K\\beta\\cap G_{x_{n_i}}^{V_1})\\, d\\lambda_{x_{n_i}}(\\beta)\\\\\n&\\leqslant\\frac{1}{\\|f\\circ r\\|_z^4}\\int_{\\beta\\in G_{x_{n_i}}^{V_1}}\\big(\\lambda_z(G_z^{\\overline{V_1}})+\\delta\\big)\\, d\\lambda_{x_{n_i}}(\\beta)\\\\\n&<\\frac{\\big(\\lambda_z(G^{\\overline{V_1}})+\\delta\\big)\\lambda_{x_{n_i}}(G^{V_1})}{\\|f\\circ r\\|_z^4}\\\\\n&<\\frac{M(1+\\epsilon)\\big(\\lambda_z(G^{\\overline{V_1}})+\\delta\\big)\\lambda_z(G^{V_1})}{\\|g\\|_z^4}\\quad\\text{(by \\eqref{AaH3.8} and \\eqref{AaH3.11})}\\\\\n&<\\frac{M(1+\\epsilon)\\big(\\lambda_z(G^{\\overline{V_1}})+\\delta\\big)\\lambda_z(G^{V_1})}{(\\lambda_z(G^{V_1})-\\delta)^2}\\quad\\text{(by \\eqref{AaH3.10})}\\\\\n&k-1$ such that for every open neighbourhood $U$ of $z$ with $G_z^U$ relatively compact we have\n\\[\n\\underset{\\scriptstyle n}{\\lim\\,\\inf}\\,\\lambda_{x_n}(G^U)\\geqslant R\\lambda_z(G^U).\n\\]\nGiven an open neighbourhood $V$ of $z$ such that $G_z^V$ is relatively compact, there exists a compact neighborhood $N$ of $z$ with $N\\subset V$ such that\n\\[\n\\underset{\\scriptstyle n}{\\lim\\,\\inf}\\,\\lambda_{x_n}(G^N)>(k-1)\\lambda_z(G^N).\n\\]\n\\begin{proof}\nApply Lemma \\ref{AaH_Lemma3.3} to $V$ with $0<\\epsilon<\\frac{R-k+1}R \\lambda_z(G^V)$ to get an open relatively-compact neighbourhood $V_1$ of $z$ with $\\overline{V_1}\\subset V$ 
and\n\\[\n\\lambda_z(G^V)-\\epsilon<\\lambda_z(G^{V_1})\\leqslant\\lambda_z(G^{\\overline{V_1}})\\leqslant\\lambda_z(G^V)<\\lambda_z(G^{V_1})+\\epsilon.\n\\]\nSince $G_z^{V_1}$ is relatively compact we have\n\\begin{align*}\n\\underset{\\scriptstyle n}{\\lim\\,\\inf}\\,\\lambda_{x_n}(G^{\\overline{V_1}})&\\geqslant\\underset{\\scriptstyle n}{\\lim\\,\\inf}\\,\\lambda_{x_n}(G^{V_1})\\\\\n&\\geqslant R\\lambda_z(G^{V_1})&\\text{(by hypothesis)}\\\\\n&>R\\big(\\lambda_z(G^V)-\\epsilon\\big)\\\\\n&>(k-1)\\lambda_z(G^V)&\\text{(by our choice of }\\epsilon\\text{)}\\\\\n&\\geqslant (k-1)\\lambda_z(G^{\\overline{V_1}}).\n\\end{align*}\nSo we may take $N=\\overline{V_1}$.\n\\end{proof}\n\\end{lemma}\n\\begin{remark}\\label{AaH_lemma5.1_variant}\nThe preceding Lemma also holds when $\\lim\\,\\inf$ is replaced by $\\lim\\,\\sup$. No modification of the proof is needed beyond replacing the two occurrences of $\\lim\\,\\inf$ with $\\lim\\,\\sup$.\n\\end{remark}\n\nWe may now proceed to our main theorem.\n\\begin{theorem}\\label{circle_thm}\nSuppose $G$ is a second-countable locally-compact Hausdorff principal groupoid that admits a Haar system $\\lambda$. Let $k$ be a positive integer, let $z\\in G^{(0)}$ and let $\\{x_n\\}$ be a sequence in $G^{(0)}$. Assume that $[z]$ is locally closed in $G^{(0)}$. Then the following are equivalent:\n\\begin{enumerate}\\renewcommand{\\labelenumi}{(\\arabic{enumi})}\n\\item\\label{circle_thm_1} the sequence $\\{x_n\\}$ converges $k$-times in $G^{(0)}\/G$ to $z$;\n\\item\\label{circle_thm_2} $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\{\\mathrm{L}^{x_n}\\})\\geqslant k$;\n\\item\\label{circle_thm_3} for every open neighbourhood $V$ of $z$ in $G^{(0)}$ such that $G_z^V$ is relatively compact we have\n\\[\n\\underset{\\scriptstyle n}{\\lim\\,\\inf}\\,\\lambda_{x_n}(G^V)\\geqslant k\\lambda_z(G^V);\n\\]\n\\item\\label{circle_thm_4} there exists a real number $R>k-1$ such that for every open neighbourhood $V$ of $z$ in $G^{(0)}$ with $G_z^V$ relatively compact we have\n\\[\n\\underset{\\scriptstyle n}{\\lim\\,\\inf}\\,\\lambda_{x_n}(G^V)\\geqslant R\\lambda_z(G^V);\\quad\\text{and}\n\\]\n\\item\\label{circle_thm_5} there exists a basic decreasing sequence of compact neighbourhoods $\\{W_m\\}$ of $z$ in $G^{(0)}$ such that, for each $m\\geqslant 1$,\n\\[\n\\underset{\\scriptstyle n}{\\lim\\,\\inf}\\,\\lambda_{x_n}(G^{W_m})>(k-1)\\lambda_z(G^{W_m}).\n\\]\n\\end{enumerate}\n\\begin{proof}\n\n\nWe know that \\eqref{circle_thm_1} implies \\eqref{circle_thm_2} by Proposition \\ref{AaH_thm_1.1_1_implies_2}.\n\nSuppose \\eqref{circle_thm_2}. If $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\{\\mathrm{L}^{x_n}\\})\\geqslant k$, then $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\{\\mathrm{L}^{x_n}\\})>\\lfloor k-\\epsilon\\rfloor$ for all $\\epsilon>0$. By Theorem \\ref{M_thm}, for every $G^{(0)}$-open neighborhood $V$ of $z$ such that $G_z^V$ is relatively compact,\n\\[\n\\lambda_{x_n}(G^V)>(k-\\epsilon)\\lambda_z(G^V)\n\\]\neventually, and hence \\eqref{circle_thm_3} holds.\n\nIt is immediately true that \\eqref{circle_thm_3} implies \\eqref{circle_thm_4}.\n\nSuppose \\eqref{circle_thm_4}. We will construct the sequence $\\{W_m\\}$ of compact neighbourhoods inductively. Let $\\{V_j\\}$ be a basic decreasing sequence of open neighborhoods of $z$ such that $G_z^{V_1}$ is relatively compact (such neighborhoods exist by \\cite[Lemma~4.1(1)]{Clark-anHuef2010-preprint}). 
By Lemma \\ref{AaH_lemma5.1} there exists a compact neighbourhood $W_1$ of $z$ such that $W_1\\subset V_1$ and\n\\[\n\\underset{\\scriptstyle n}{\\lim\\,\\inf}\\,\\lambda_{x_n}(G^{W_1})>(k-1)\\lambda_z(G^{W_1}).\n\\]\nNow assume there are compact neighbourhoods $W_1,W_2,\\ldots,W_m$ of $z$ with $W_1\\supset W_2\\supset\\cdots\\supset W_m$ such that\n\\begin{equation}\\label{circle_thm_4_imply_5_eqn}\nW_i\\subset V_i\\quad\\text{and}\\quad\\underset{\\scriptstyle n}{\\lim\\,\\inf}\\,\\lambda_{x_n}(G^{W_i})>(k-1)\\lambda_z(G^{W_i})\n\\end{equation}\nfor all $1\\leqslant i\\leqslant m$. Apply Lemma \\ref{AaH_lemma5.1} to $(\\mathrm{Int}\\, W_m)\\cap V_{m+1}$ to obtain a compact neighbourhood $W_{m+1}$ of $z$ such that $W_{m+1}\\subset(\\mathrm{Int}\\, W_m)\\cap V_{m+1}$ and \\eqref{circle_thm_4_imply_5_eqn} holds for $i=m+1$, establishing \\eqref{circle_thm_5}.\n\nSuppose \\eqref{circle_thm_5}. We begin by showing that $[x_n]\\rightarrow [z]$ in $G^{(0)}\/G$. Let $q:G^{(0)}\\rightarrow G^{(0)}\/G$ be the quotient map. Let $U$ be a neighbourhood of $[z]$ in $G^{(0)}\/G$ and $V=q^{-1}(U)$. There exists $m$ such that $W_m\\subset V$. Since $\\lim\\,\\inf_n\\,\\lambda_{x_n}(G^{W_m})>0$ there exists $n_0$ such that $G_{x_n}^{W_m}\\ne\\emptyset$ for all $n\\geqslant n_0$. Thus, for $n\\geqslant n_0$,\n\\[\n[x_n]=q(x_n)\\in q(W_m)\\subset q(V)=U.\n\\]\nThus $[x_n]$ is eventually in every neighbourhood of $[z]$ in $G^{(0)}\/G$.\n\nNow suppose that $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\{\\mathrm{L}^{x_n}\\})<\\infty$. Then, as in the proof of Theorem \\ref{M_thm}, we may localise to an open invariant neighbourhood $Y$ of $z$ such that $[z]$ is the unique limit in $Y\/G$ of $[x_n]$. Eventually $W_m\\subset Y$, and so the sequence $\\{x_n\\}$ converges $k$-times in $Y\/(G|_Y)=Y\/G$ to $z$ by Proposition \\ref{AaH_prop4_1_1} applied to the groupoid $G|_Y$. This implies that the sequence $\\{x_n\\}$ converges $k$-times in $G^{(0)}\/G$.\n\nFinally, if $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\{\\mathrm{L}^{x_n}\\})=\\infty$, then $\\{x_n\\}$ converges $k$-times in $G^{(0)}\/G$ to $z$ by Proposition \\ref{AaH_prop_4.2}, establishing \\eqref{circle_thm_1} and completing the proof.\n\\end{proof}\n\\end{theorem}\n\n\\begin{cor}\nSuppose that $G$ is a second-countable locally-compact Hausdorff principal groupoid such that all the orbits are locally closed. Let $k\\in\\mathbb P$ and let $z\\in G^{(0)}$ such that $[z]$ is not open in $G^{(0)}$. Then the following are equivalent:\n\\begin{enumerate}\\renewcommand{\\labelenumi}{(\\arabic{enumi})}\n\\item\\label{AaH_cor5.5_1} whenever $\\braces{x_n}$ is a sequence in $G^{(0)}$ which converges to $z$ with $[x_n]\\ne [z]$ eventually, then $\\braces{x_n}$ is $k$-times convergent in $G^{(0)}\/G$ to $z$;\n\\item\\label{AaH_cor5.5_2} $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z)\\geqslant k$.\n\\end{enumerate}\n\\begin{proof}\nAssume \\eqref{AaH_cor5.5_1}. We must first establish that $\\braces{\\mathrm{L}^z}$ is not open in $C^*(G)^\\wedge$. If this is not the case, then $\\braces{\\mathrm{L}^z}$ is open and we can apply \\cite[Proposition~3.6]{Clark2007} to see that $\\braces{[z]}$ is open in $G^{(0)}\/G$, and so $[z]$ is open in $G^{(0)}$, contradicting our assumption. 
Since $\\braces{\\mathrm{L}^z}$ is not open in $C^*(G)^\\wedge$, we can apply \\cite[Lemma~A.2]{Archbold-anHuef2006} to see that there exists a sequence $\\braces{\\pi_i}$ of irreducible representations of $C^*(G)$ such that each $\\pi_i$ is not unitarily equivalent to $\\mathrm{L}^z$, $\\pi_i\\rightarrow \\mathrm{L}^z$ in $C^*(G)^\\wedge$, and\n\\begin{equation}\\label{AaH_5.3}\n\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z)=\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\braces{\\pi_i})=\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z,\\braces{\\pi_i}).\n\\end{equation}\nSince the orbits are locally closed, the map $G^{(0)}\/G\\rightarrow C^*(G)^\\wedge$ such that $[x]\\mapsto \\mathrm{L}^x$ is a homeomorphism by \\cite[Proposition~5.1]{Clark2007}\\footnote{Proposition~5.1 in \\cite{Clark2007} states that if a principal groupoid has locally closed orbits, then the map from $G^{(0)}\/G$ to $C^*(G)^\\wedge$ where $[x]\\mapsto \\mathrm{L}^x$ is a `homeomorphism from $G^{(0)}\/G$ into $C^*(G)^\\wedge$'. The proof explicitely shows that this map is a surjection.}. It follows that the mapping $G^{(0)}\\rightarrow C^*(G)^\\wedge$ such that $x\\mapsto \\mathrm{L}^x$ is an open surjection, so by \\cite[Proposition~1.15]{Williams2007} there is a sequence $\\braces{x_n}$ in $G^{(0)}$ such that $x_n \\rightarrow z$ and $\\braces{\\mathrm{L}^{x_n}}$ is unitarily equivalent to a subsequence of $\\braces{\\pi_i}$.\nBy \\eqref{AaH_5.3}, \n\\[\n\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z)=\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z,\\braces{\\pi_i})\\geqslant \\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x_n}})\\geqslant \\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x_n}}).\n\\]\nWe know by \\eqref{AaH_cor5.5_1} that $\\braces{x_n}$ converges $k$-times to $z$ in $G^{(0)}\/G$, so it follows from Theorem \\ref{circle_thm} that $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z)\\geqslant\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x_n}})\\geqslant k$.\n\nAssume \\eqref{AaH_cor5.5_2}. If $\\braces{x_n}$ is a sequence in $G^{(0)}$ which converges to $z$ such that $[x_n]\\ne [z]$ eventually, then\n\\[\n\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x_n}})\\geqslant\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z)\\geqslant k.\n\\]\nBy Theorem \\ref{circle_thm}, $\\braces{x_n}$ is $k$-times convergent to $z$ in $G^{(0)}\/G$.\n\\end{proof}\n\\end{cor}\n\nThe next corollary improves Proposition \\ref{AaH_prop_4.2} and is an immediate consequence of Proposition \\ref{AaH_prop_4.2} and Theorem \\ref{circle_thm}.\n\\begin{cor}\\label{AaH_cor5.6}\nSuppose that $G$ is a second-countable locally-compact Hausdorff principal groupoid with Haar system $\\lambda$. Let $z\\in G^{(0)}$ and let $\\braces{x_n}$ be a sequence in $G^{(0)}$. Assume that $[z]$ is locally closed. 
Then the following are equivalent:\n\\begin{enumerate}\\renewcommand{\\labelenumi}{(\\arabic{enumi})}\n\\item\\label{AaH_cor5.6_1} $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\{\\mathrm{L}^{x_n}\\})=\\infty$.\n\\item\\label{AaH_cor5.6_2} For every open neighbourhood $V$ of $z$ such that $G_z^V$ is relatively compact, $\\lambda_{x_n}(G^V)\\rightarrow\\infty$ as $n\\rightarrow\\infty$.\n\\item\\label{AaH_cor5.6_3} For each $k\\geqslant 1$, the sequence $\\{x_n\\}$ converges $k$-times in $G^{(0)}\/G$ to $z$.\n\\end{enumerate}\n\\end{cor}\n\n\\section{Upper multiplicity and $k$-times convergence}\\label{sec_upper_multiplicity}\nThe results in this section are corollaries of Theorems~\\ref{M_thm} and~\\ref{circle_thm}: they relate $k$-times convergence, measure ratios and upper multiplicity numbers, generalising all the upper-multiplicity results of \\cite{Archbold-anHuef2006}. We begin with the upper-multiplicity analogue of Theorem~\\ref{M_thm}.\n\\begin{theorem}\\label{thm_AaH3.6}\nSuppose that $G$ is a second-countable locally-compact Hausdorff principal groupoid with Haar system $\\lambda$. Let $M\\in \\mathbb R$ with $M\\geqslant 1$, let $z\\in G^{(0)}$ and let $\\braces{x_n}$ be a sequence in $G^{(0)}$. Assume that $[z]$ is locally closed. Suppose that there exists an open neighbourhood $V$ of $z$ in $G^{(0)}$ such that $G_z^V$ is relatively compact and\n\\[\n\\lambda_{x_n}(G^V)\\leqslant M\\lambda_z(G^V)<\\infty\n\\]\neventually. Then $\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x_n}})\\leqslant \\lfloor M\\rfloor$.\n\\begin{proof}\nSince $G$ is second countable, $C^*(G)$ is separable. By \\cite[Lemma~A.1]{Archbold-anHuef2006} there exists a sequence $\\braces{\\mathrm{L}^{x_{n_i}}}$ such that\n\\[\n\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x_n}})=\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x_{n_i}}})=\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x_{n_i}}}).\n\\]\nBy Theorem \\ref{M_thm}, $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x_{n_i}}})\\leqslant \\lfloor M\\rfloor$, so $\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x_n}})\\leqslant\\lfloor M\\rfloor$.\n\\end{proof}\n\\end{theorem}\n\n\\begin{cor}\\label{AaH_cor3.7}\nSuppose that $G$ is a second-countable locally-compact Hausdorff principal groupoid with Haar system $\\lambda$ such that all the orbits are locally closed. Let $M\\in \\mathbb R$ with $M\\geqslant 1$ and let $z\\in G^{(0)}$. If for every sequence $\\braces{x_n}$ in $G^{(0)}$ which converges to $z$ there exists an open neighbourhood $V$ of $z$ in $G^{(0)}$ such that $G_z^V$ is relatively compact and \n\\[\n\\lambda_{x_n}(G^V)\\leqslant M\\lambda_z(G^V)<\\infty\n\\]\nfrequently, then $\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z)\\leqslant\\lfloor M\\rfloor$.\n\\begin{proof}\nSince $G$ is second countable, $C^*(G)$ is separable, and so we can apply \\cite[Lemma~1.2]{Archbold-Kaniuth1999} to see that there exists a sequence $\\braces{\\pi_n}$ in $C^*(G)^\\wedge$ that converges to $\\mathrm{L}^z$ such that\n\\[\n\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\braces{\\pi_n})=\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z,\\braces{\\pi_n})=\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z).\n\\]\nSince the orbits are locally closed, the map $G^{(0)}\/G\\rightarrow C^*(G)^\\wedge$ such that $[x]\\mapsto \\mathrm{L}^x$ is a homeomorphism by \\cite[Proposition~5.1]{Clark2007}. 
In particular, the mapping $G^{(0)}\rightarrow C^*(G)^\wedge$ such that $x\mapsto \mathrm{L}^x$ is an open surjection, so by \cite[Proposition~1.15]{Williams2007} there exists a sequence $\braces{x_i}$ in $G^{(0)}$ converging to $z$ such that $\braces{[\mathrm{L}^{x_i}]}$ is a subsequence of $\braces{[\pi_n]}$. By Theorem \ref{M_thm}, $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_i}})\leqslant\lfloor M\rfloor$. Since \n\begin{align*}\n\mathrm{M}_\mathrm{U}(\mathrm{L}^z)&=\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\pi_n})\leqslant\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_i}})\leqslant\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_i}})\\\n&\leqslant\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\pi_n})=\mathrm{M}_\mathrm{U}(\mathrm{L}^z),\n\end{align*}\nwe obtain $\mathrm{M}_\mathrm{U}(\mathrm{L}^z)\leqslant\lfloor M\rfloor$, as required.\n\end{proof}\n\end{cor}\n\nIn Proposition~\ref{AaH_prop4_1_1} we generalised the first part of \cite[Proposition~4.1]{Archbold-anHuef2006}. We will now generalise the second part. The argument we use is similar to that used in Proposition \ref{AaH_prop4_1_1}.\n\begin{prop}\label{AaH_prop4_1_2}\nLet $G$ be a second-countable locally-compact Hausdorff principal groupoid with Haar system $\lambda$. Let $k\in\mathbb P$ and $z\in G^{(0)}$ with $[z]$ locally closed in $G^{(0)}$. Assume that $\{x_n\}$ is a sequence in $G^{(0)}$ such that $[x_n]\rightarrow [z]$ uniquely in $G^{(0)}\/G$. Suppose $\{W_m\}$ is a basic decreasing sequence of compact neighbourhoods of $z$ such that, for each $m\geqslant 1$,\n\[\n\underset{\scriptstyle n}{\lim\,\sup}\,\lambda_{x_n}(G^{W_m})>(k-1)\lambda_z(G^{W_m}).\n\]\nThen there exists a subsequence of $\{x_n\}$ which converges $k$-times in $G^{(0)}\/G$ to $z$. \n\n\begin{proof}\nLet $\{K_m\}$ be an increasing sequence of compact subsets of $G$ such that $G=\bigcup_{m\geqslant 1}\mathrm{Int}\,K_m$. 
By the regularity of $\\lambda_z$, for each $m\\geqslant 1$ there exist $\\delta_m>0$ and an open neighbourhood $U_m$ of $G_z^{W_m}$ such that\n\\begin{equation}\\label{AaH4.4}\n\\underset{\\scriptstyle n}{\\lim\\,\\sup}\\,\\lambda_{x_n}(G^{W_m})>(k-1)\\lambda_z(U_m) + \\delta_m.\n\\end{equation}\nWe will construct, by induction, a strictly increasing sequence of positive integers $\\{i_m\\}$ such that, for all $m$,\n\\begin{align}\n&\\lambda_{x_{i_m}}(K_m\\alpha\\cap G^{W_m})<\\lambda_z(U_m)+\\delta_m\/k\\quad\\text{for all }\\alpha\\in G_{x_{i_m}}^{W_m},\\quad\\text{and}\\label{AaH4.5}\\\\\n&\\lambda_{x_{i_m}}(G^{W_m})>(k-1)\\lambda_z(U_m)+\\delta_m.\\label{AaH4.6}\n\\end{align}\n\nBy Lemma \\ref{the_unbroken_lemma} with $\\delta=\\lambda_z(U_1)-\\lambda_z(G^{W_1})+\\delta_1\/k$, there exists $n_1$ such that $n\\geqslant n_1$ implies\n\n\\[\n\\lambda_{x_n}(K_1\\alpha\\cap G^{W_1})<\\lambda_z(U_1)+\\delta_1\/k\\quad\\text{for all }\\alpha\\in G_{x_n}^{W_m}.\n\\]\nBy considering \\eqref{AaH4.4} with $m=1$ we can choose $i_1\\geqslant n_1$ such that\n\\[\n\\lambda_{x_{i_1}}(G^{W_1})>(k-1)\\lambda_z(U_1)+\\delta_1.\n\\]\nAssuming that $i_1i_{m-1}$ such that\n\\[\nn\\geqslant n_m\\quad\\text{implies}\\quad\\lambda_{x_n}(K_m\\alpha\\cap G^{W_m})<\\lambda_z(U_m)+\\delta_m\/k\\quad\\text{for all }\\alpha\\in G_{x_n}^{W_m},\n\\]\nand then by \\eqref{AaH4.4} we can choose $i_m\\geqslant n_m$ such that\n\\[\n\\lambda_{x_{i_m}}(G^{W_m})>(k-1)\\lambda_z(U_m)+\\delta_m.\n\\]\n\nFor each $m\\in\\mathbb P$ choose $\\gamma_{i_m}^{(1)}\\in G_{x_{i_m}}^{W_m}$ (which is non-empty by \\eqref{AaH4.6}). By \\eqref{AaH4.5} and \\eqref{AaH4.6} we have\n\\begin{align*}\n\\lambda_{x_{i_m}}(G^{W_m}\\backslash K_m\\gamma_{i_m}^{(1)})&=\\lambda_{x_{i_m}}(G^{W_m})-\\lambda_{x_{i_m}}(G^{W_m}\\cap K_m\\gamma_{i_m}^{(1)})\\\\\n&>(k-1)\\lambda_z(U_m)+\\delta_m-\\big(\\lambda_z(U_m)+\\delta_m\/k\\big)\\\\\n&=(k-2)\\lambda_z(U_m)+\\frac{k-1}k\\delta_m.\n\\end{align*}\nSo we can choose $\\gamma_{i_m}^{(2)}\\in G_{x_{i_m}}^{W_m}\\backslash K_m\\gamma_{i_m}^{(1)}$. This implies, as in the proof of Proposition \\ref{AaH_prop4_1_1}, that\n\\[\n\\lambda_{x_{i_m}}\\big(G^{W_m}\\backslash (K_m\\gamma_{i_m}^{(1)}\\cup K_m\\gamma_{i_m}^{(2)})\\big)>(k-3)\\lambda_z(U_m)+\\frac{(k-2)}k\\delta_m,\n\\]\nenabling us to choose $\\gamma_{i_m}^{(3)}\\in G_{x_{i_m}}^{W_m}\\backslash (K_m\\gamma_{i_m}^{(1)}\\cap K_m\\gamma_{i_m}^{(2)})$. 
Continuing in this way for $j=3,\ldots,k$, for each $i_m$ we choose\n\begin{equation}\label{eqn_choosing_gammas-2}\n\gamma_{i_m}^{(j)}\in G_{x_{i_m}}^{W_m}\backslash\bigg(\bigcup_{l=1}^{j-1}K_m\gamma_{i_m}^{(l)}\bigg).\n\end{equation}\nNote that $\gamma_{i_m}^{(j)}\notin K_m\gamma_{i_m}^{(l)}$ for $1\leqslant l<j\leqslant k$; equivalently, $\gamma_{i_m}^{(j)}(\gamma_{i_m}^{(l)})^{-1}\notin K_m$ for such $l$ and $j$. Since every compact subset of $G$ is contained in $K_m$ for all sufficiently large $m$, it follows that $\gamma_{i_m}^{(j)}(\gamma_{i_m}^{(l)})^{-1}\rightarrow\infty$ as $m\rightarrow\infty$, and hence also $\gamma_{i_m}^{(l)}(\gamma_{i_m}^{(j)})^{-1}=\big(\gamma_{i_m}^{(j)}(\gamma_{i_m}^{(l)})^{-1}\big)^{-1}\rightarrow\infty$. Moreover $s(\gamma_{i_m}^{(j)})=x_{i_m}$ and $r(\gamma_{i_m}^{(j)})\in W_m$ for every $j$, so $r(\gamma_{i_m}^{(j)})\rightarrow z$. Thus the sequences $\{\gamma_{i_m}^{(1)}\},\ldots,\{\gamma_{i_m}^{(k)}\}$ show, as in the proof of Proposition \ref{AaH_prop4_1_1}, that the subsequence $\{x_{i_m}\}$ converges $k$-times in $G^{(0)}\/G$ to $z$.\n\end{proof}\n\end{prop}\n\nWe now prove an analogue of Theorem \ref{circle_thm} for upper multiplicity.\n\begin{theorem}\label{2nd_circle_thm}\nSuppose $G$ is a second-countable locally-compact Hausdorff principal groupoid that admits a Haar system $\lambda$. Let $k$ be a positive integer, let $z\in G^{(0)}$ and let $\{x_n\}$ be a sequence in $G^{(0)}$ such that $[x_n]\rightarrow[z]$ in $G^{(0)}\/G$. Assume that $[z]$ is locally closed in $G^{(0)}$. Then the following are equivalent:\n\begin{enumerate}\renewcommand{\labelenumi}{(\arabic{enumi})}\n\item\label{2nd_circle_thm_1} there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ which converges $k$-times in $G^{(0)}\/G$ to $z$;\n\item\label{2nd_circle_thm_2} $\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\{\mathrm{L}^{x_n}\})\geqslant k$;\n\item\label{2nd_circle_thm_3} for every open neighbourhood $V$ of $z$ in $G^{(0)}$ such that $G_z^V$ is relatively compact we have\n\[\n\underset{\scriptstyle n}{\lim\,\sup}\,\lambda_{x_n}(G^V)\geqslant k\lambda_z(G^V);\n\]\n\item\label{2nd_circle_thm_4} there exists a real number $R>k-1$ such that for every open neighbourhood $V$ of $z$ in $G^{(0)}$ with $G_z^V$ relatively compact we have\n\[\n\underset{\scriptstyle n}{\lim\,\sup}\,\lambda_{x_n}(G^V)\geqslant R\lambda_z(G^V);\quad\text{and}\n\]\n\item\label{2nd_circle_thm_5} there exists a basic decreasing sequence of compact neighbourhoods $\{W_m\}$ of $z$ in $G^{(0)}$ such that, for each $m\geqslant 1$,\n\[\n\underset{\scriptstyle n}{\lim\,\sup}\,\lambda_{x_n}(G^{W_m})>(k-1)\lambda_z(G^{W_m}).\n\]\n\end{enumerate}\n\begin{proof}\nIf \eqref{2nd_circle_thm_1} holds then $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_i}}})\geqslant k$ by Theorem \ref{circle_thm}, and so\n\[\n\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})\geqslant\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_i}}})\geqslant\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_i}}})\geqslant k.\n\]\n\nIf \eqref{2nd_circle_thm_2} holds then by \cite[Lemma~A.1]{Archbold-anHuef2006} there is a subsequence $\braces{x_{n_r}}$ such that $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_r}}})=\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})$ so that $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_r}}})\geqslant k$. Let $V$ be any open neighbourhood of $z$ in $G^{\bkt{0}}$ such that $G_z^V$ is relatively compact. Then\n\[\n\underset{\scriptstyle n}{\lim\,\sup}\,\lambda_{x_n}\bkt{G^V}\geqslant\underset{\scriptstyle r}{\lim\,\sup}\,\lambda_{x_{n_r}}\bkt{G^V}\geqslant \underset{\scriptstyle r}{\lim\,\inf}\,\lambda_{x_{n_r}}\bkt{G^V}\geqslant k\lambda_z\bkt{G^V},\n\]\nusing Theorem \ref{circle_thm} for the last step.\n\nThat \eqref{2nd_circle_thm_3} implies \eqref{2nd_circle_thm_4} is immediate.\n\nThat \eqref{2nd_circle_thm_4} implies \eqref{2nd_circle_thm_5} follows by making references to Remark \ref{AaH_lemma5.1_variant} rather than Lemma \ref{AaH_lemma5.1} in the \eqref{circle_thm_4} implies \eqref{circle_thm_5} component of the proof of Theorem \ref{circle_thm}.\n\nAssume \eqref{2nd_circle_thm_5}. First suppose that $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})<\infty$. Since $[x_n]\rightarrow[z]$, we can use an argument found at the beginning of the proof of Theorem \ref{M_thm} to obtain an open $G$-invariant neighborhood $Y$ of $z$ in $G^{(0)}$ so that if we define $H:=G|_Y$, there exists a subsequence $\braces{x_{n_i}}$ of $\braces{x_n}$ such that $[x_{n_i}]\rightarrow [z]$ uniquely in $H^{(0)}\/H$. Proposition \ref{AaH_prop4_1_2} now shows us that there exists a subsequence $\braces{x_{n_{i_j}}}$ of $\braces{x_{n_i}}$ that converges $k$-times in $H^{(0)}\/H$ to $z$. 
It follows that $\\braces{x_{n_{i_j}}}$ converges $k$-times in $G^{(0)}\/G$ to $z$.\n\nWhen $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x_n}})=\\infty$, $\\braces{x_n}$ converges $k$-times in $G^{(0)}\/G$ to $z$ by Corollary \\ref{AaH_cor5.6}, establishing \\eqref{2nd_circle_thm_1}.\n\\end{proof}\n\\end{theorem}\n\n\\begin{cor}\\label{AaH_cor5.4}\nSuppose that $G$ is a second-countable locally-compact Hausdorff principal groupoid such that all the orbits are locally closed. Let $k\\in\\mathbb P$ and let $z\\in G^{(0)}$. Then the following are equivalent:\n\\begin{enumerate}\\renewcommand{\\labelenumi}{(\\arabic{enumi})}\n\\item\\label{AaH_cor5.4_1} there exists a sequence $\\braces{x_n}$ in $G^{(0)}$ which is $k$-times convergent in $G^{(0)}\/G$ to $z$;\n\\item\\label{AaH_cor5.4_2} $\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z)\\geqslant k$.\n\\end{enumerate}\n\\begin{proof}\nAssume \\eqref{AaH_cor5.4_1}. By the definitions of upper and lower multiplicity, \n\\[\n\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z)\\geqslant\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x_n}})\\geqslant\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x_n}}).\n\\]\nBy Theorem \\ref{circle_thm} we know that $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x_n}})\\geqslant k$, establishing \\eqref{AaH_cor5.4_2}.\n\nAssume \\eqref{AaH_cor5.4_2}. By \\cite[Lemma~1.2]{Archbold-Kaniuth1999} there exists a sequence $\\braces{\\pi_n}$ converging to $\\mathrm{L}^z$ such that $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\braces{\\pi_n})=\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z,\\braces{\\pi_n})=\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z)$. Since the orbits are locally closed, by \\cite[Proposition~5.1]{Clark2007} the mapping $G^{(0)}\\rightarrow C^*(G)^\\wedge:x\\mapsto\\mathrm{L}^x$ is a surjection. So there is a sequence $\\braces{\\mathrm{L}^{x_n}}$ in $C^*(G)^\\wedge$ such that $\\mathrm{L}^{x_n}$ is unitarily equivalent to $\\pi_n$ for each $n$. Then\n\\[\n\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x_n}})\\geqslant\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\braces{\\pi_n})=\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z)\\geqslant k,\n\\]\nand it follows from Theorem \\ref{circle_thm} that $\\braces{x_n}$ is $k$-times convergent in $G^{(0)}\/G$ to $z$.\n\\end{proof}\n\\end{cor}\n\n\n\n\n\\begin{cor}\\label{AaH_cor5.7}\nSuppose that $G$ is a secound-countable locally-compact Hausdorff principal groupoid with Haar system $\\lambda$. Let $z\\in G^{(0)}$ and let $\\braces{x_n}\\subset G^{(0)}$ be a sequence converging to $z$. Assume that $[z]$ is locally closed. Then the following are equivalent:\n\\begin{enumerate}\\renewcommand{\\labelenumi}{(\\arabic{enumi})}\n\\item\\label{AaH_cor5.7_1} there exists an open neighbourhood $V$ of $z$ such that $G_z^V$ is relatively compact and\n\\[\n\\underset{\\scriptstyle n}{\\lim\\,\\sup}\\,\\lambda_{x_n}(G^V)<\\infty;\n\\]\n\\item\\label{AaH_cor5.7_2} $\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x_n}})<\\infty$.\n\\end{enumerate}\n\\begin{proof}\nSuppose that \\eqref{AaH_cor5.7_1} holds. 
Since $C^*(G)$ is separable, it follows from \cite[Lemma~A.1]{Archbold-anHuef2006} that there exists a subsequence $\braces{x_{n_j}}$ of $\braces{x_n}$ such that\n\[\n\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_j}}})=\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_j}}})=\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}}).\n\]\nBy \eqref{AaH_cor5.7_1} and Corollary \ref{AaH_cor5.6} applied to the subsequence $\braces{x_{n_j}}$, we have $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_j}}})<\infty$. Hence $\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})<\infty$, as required.\n\nSuppose that \eqref{AaH_cor5.7_1} fails. Let $\braces{V_i}$ be a basic decreasing sequence of open neighbourhoods of $z$ such that $G_z^{V_1}$ is relatively compact (such neighborhoods exist by \cite[Lemma~4.1(1)]{Clark-anHuef2010-preprint}). Then \n\[\n\underset{\scriptstyle n}{\lim\,\sup}\,\lambda_{x_n}(G^{V_i})=\infty\quad\text{for each }i\n\]\nand we may choose a subsequence $\braces{x_{n_i}}$ of $\braces{x_n}$ such that $\lambda_{x_{n_i}}(G^{V_i})\rightarrow\infty$ as $i\rightarrow\infty$.\n\nLet $V$ be any open neighbourhood of $z$ such that $G_z^V$ is relatively compact. There exists $i_0$ such that $V_i\subset V$ for all $i\geqslant i_0$. Then, for $i\geqslant i_0$,\n\[\n\lambda_{x_{n_i}}(G^{V_i})\leqslant\lambda_{x_{n_i}}(G^V).\n\]\nThus $\lambda_{x_{n_i}}(G^V)\rightarrow\infty$ as $i\rightarrow\infty$. By Corollary \ref{AaH_cor5.6}, $\mathrm{M}_\mathrm{L}(\mathrm{L}^z,\braces{\mathrm{L}^{x_{n_i}}})=\infty$. Hence $\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_n}})=\infty$, that is, \eqref{AaH_cor5.7_2} fails.\n\end{proof}\n\end{cor}\n\n\begin{cor}\label{AaH_cor5.8}\nSuppose $G$ is a second-countable locally-compact Hausdorff principal groupoid with Haar system $\lambda$ such that all the orbits are locally closed. Let $z\in G^{(0)}$. Then the following are equivalent:\n\begin{enumerate}\renewcommand{\labelenumi}{(\arabic{enumi})}\n\item\label{AaH_cor5.8_1} $\mathrm{M}_\mathrm{U}(\mathrm{L}^z)<\infty$;\n\item\label{AaH_cor5.8_2} there exists an open neighbourhood $V$ of $z$ such that $G_z^V$ is relatively compact and\n\[\n\sup_{x\in V}\lambda_x(G^V)<\infty.\n\]\n\end{enumerate}\n\begin{proof}\nIf \eqref{AaH_cor5.8_2} holds then \eqref{AaH_cor5.8_1} holds by Corollary \ref{AaH_cor3.7}.\n\nLet $\braces{V_i}$ be a basic decreasing sequence of open neighbourhoods of $z$ such that $G_z^{V_1}$ is relatively compact. If \eqref{AaH_cor5.8_2} fails then $\sup_{x\in V_i}\braces{\lambda_x(G^{V_i})}=\infty$ for each $i$ and we may choose a sequence $\braces{x_i}$ such that $x_i\in V_i$ for all $i$ and $\lambda_{x_i}(G^{V_i})\rightarrow\infty$. Since $\braces{V_i}$ is a basic decreasing sequence, $x_i\rightarrow z$.\n\nLet $V$ be an open neighbourhood of $z$ such that $G_z^V$ is relatively compact. There exists $i_0$ such that $V_i\subset V$ for all $i\geqslant i_0$. Then, for $i\geqslant i_0$,\n\[\n\lambda_{x_i}(G^{V_i})\leqslant \lambda_{x_i}(G^V).\n\]\nThus $\lambda_{x_i}(G^V)\rightarrow\infty$. By Corollary \ref{AaH_cor5.7}, $\mathrm{M}_\mathrm{U}(\mathrm{L}^z,\braces{\mathrm{L}^{x_i}})=\infty$. 
Hence $\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z)=\\infty$, and so \\eqref{AaH_cor5.8_1} fails.\n\\end{proof}\n\\end{cor}\n\n\\section{Graph algebra examples}\\label{section_graph_algebra_examples}\nWe begin this section by introducing the notion of a directed graph as well as some related concepts as in the expository book \\cite{Raeburn2005}, although some notation is also taken from \\cite{kprr1997}. A {\\em directed graph} $E=(E^0,E^1,r,s)$ consists of two countable sets $E^0$, $E^1$ and functions $r,s:E^1\\rightarrow E^0$. The elements of $E^0$ and $E^1$ are called {\\em vertices} and {\\em edges} respectively. For each edge $e$, call $s(e)$ the {\\em source} of $e$ and $r(e)$ the {\\em range} of $e$. A directed graph $E$ is {\\em row finite} if $r^{-1}(v)$ is finite for every $v\\in E^0$.\n\nA {\\em finite path} in a directed graph $E$ is a finite sequence $\\alpha=\\alpha_1\\alpha_2\\cdots\\alpha_k$ of edges $\\alpha_i$ with $s(\\alpha_j)=r(\\alpha_{j+1})$ for $1\\leqslant j\\leqslant k-1$; write $s(\\alpha)=s(\\alpha_k)$ and $r(\\alpha)=r(\\alpha_1)$, and call $|\\alpha|:=k$ the {\\em length} of $\\alpha$. An {\\em infinite path} $x=x_1x_2\\cdots$ is defined similarly, although $s(x)$ remains undefined. Let $E^*$ and $E^\\infty$ denote the set of all finite paths and infinite paths in $E$ respectively. If $\\alpha=\\alpha_1\\cdots\\alpha_k$ and $\\beta=\\beta_1\\cdots\\beta_j$ are finite paths then, provided $s(\\alpha)=r(\\beta)$, let $\\alpha\\beta$ be the path $\\alpha_1\\cdots\\alpha_k\\beta_1\\cdots\\beta_j$. When $x\\in E^\\infty$ with $s(\\alpha)=r(x)$ define $\\alpha x$ similarly. A {\\em cycle} is a finite path $\\alpha$ of non-zero length such that $s(\\alpha)=r(\\alpha)$.\n\nWhen $v$ is a vertex, $f$ is an edge, and there is exactly one infinite path with range $v$ that includes the edge $f$, then we denote this infinite path by $[v,f]^\\infty$. When there is exactly one finite path $\\alpha$ with $r(\\alpha)=v$ and $\\alpha_{|\\alpha|}=f$, we denote $\\alpha$ by $[v,f]^*$. In \\cite{kprr1997} two paths $x,y\\in E^\\infty$ are defined to be {\\em shift equivalent} with lag $k\\in\\mathbb Z$ (written $x\\sim_k y$) if there exists $N\\in\\mathbb N$ such that $x_i=y_{i+k}$ for all $i\\geqslant N$.\n\nSuppose $E$ is a row-finite directed graph. We refer to the groupoid constructed from $E$ by Kumjian, Pask, Raeburn and Renault in \\cite{kprr1997} as the {\\em path groupoid}. Before describing this construction we caution that we are using the now standard notation for directed graphs which has the range and source swapped from the notation used in \\cite{kprr1997}. This new convention is due to the development of the higher-rank graphs, where edges become morphisms in a category and the new convention ensures that ``composition of morphisms is compatible with multiplication of operators in $B(\\mathcal H)$'' \\cite[p.~2]{Raeburn2005}. 
The path groupoid $G=G_E$ constructed from $E$ is defined as follows:\n\\[\nG:=\\braces{(x,k,y)\\in E^\\infty\\times \\mathbb Z\\times E^\\infty :x\\sim_k y}.\n\\]\nFor elements of \n\\[\nG^{(2)}:=\\braces{\\big((x,k,y),(y,l,z)\\big):(x,k,y),(y,l,z)\\in G},\n\\]\nKumjian, Pask, Raeburn, and Renault defined\n\\[\n(x,k,y)\\cdot (y,l,z):=(x,k+l,z),\n\\]\nand for arbitrary $(x,k,y)\\in G$, defined\n\\[\n(x,k,y)^{-1}:=(y,-k,x).\n\\]\nFor each $\\alpha,\\beta\\in E^*$ with $s(\\alpha)=s(\\beta)$, let $Z(\\alpha,\\beta)$ be the set\n\\[\n\\braces{(x,k,y):x\\in Z(\\alpha), y\\in Z(\\beta), k=|\\beta|-|\\alpha|, x_i=y_{i+k}\\text{ for } i>|\\alpha|}.\n\\]\nBy \\cite[Proposition~2.6]{kprr1997}, the collection of sets\n\\[\n\\braces{Z(\\alpha,\\beta):\\alpha,\\beta\\in E^*, s(\\alpha)=s(\\beta)}\n\\]\nis a basis of compact open sets for a second-countable locally-compact Hausdorff topology on $G$ that makes $G$ r-discrete. Kumjian, Pask, Raeburn and Renault equipped $G$ with the Haar system consisting of counting measures, which they observe is possible by first showing that a Haar system exists for the groupoid with \\cite[Proposition~I.2.8]{Renault1980}, and then using \\cite[Lemma~I.2.7]{Renault1980} to show that they can choose the system of counting measures.\n\nBy \\cite[Corollary~2.2]{kprr1997}, the cylinder sets\n\\[\nZ(\\alpha):=\\braces{x\\in E^\\infty :x_1=\\alpha_1,\\ldots,x_{|\\alpha|}=\\alpha_{|\\alpha|}}\n\\]\nparameterised by $\\alpha\\in E^*$ form a basis of compact open sets for a locally-compact $\\sigma$-compact totally-disconnected Hausdorff topology on $E^\\infty $. After identifying each $(x,0,x)\\in G^{(0)}$ with $x\\in E^\\infty $, \\cite[Proposition~2.6]{kprr1997} tells us that the topology on $G^{(0)}$ is identical to the topology on $E^\\infty $.\n\nFor a row-finite directed graph $E$, Kumjian, Pask, Raeburn and Renault use the path groupoid $G$ to construct the usual groupoid $C^*$-algebra $C^*(G)$, and show how a collection of partial isometries subject to some relations derived from $E$ generate $C^*(G)$. More recently, a $C^*$-algebra $C^*(E)$ is constructed from a collection of partial isometries subject to slightly weakened relations derived from $E$. The slightly weakened relations permit non-zero partial isometries to be related to sources in the graph, and as a result $C^*(E)$ is isomorphic to $C^*(G)$ only when $E$ contains no sources. It turns out that $C^*(E)$ and $C^*(G)$ can be substantially different: an example in \\cite{Hazlewood_graph_algebra_classification} describes a graph with sources where $C^*(G)$ has continuous trace while $C^*(E)$ does not. \nIn this paper we are only interested in groupoid $C^*$-algebras, so we will make no further mention of the graph algebra $C^*(E)$.\n\nSince we wish to apply Theorem \\ref{circle_thm} to path groupoids, we must be able to show that the path groupoids we consider are principal.\n\\begin{prop}\\label{prop_principal_iff_no_cycles}\nSuppose $E$ is a row-finite directed graph. The path groupoid $G$ constructed from $E$ is principal if and only if $E$ contains no cycles.\n\\begin{proof}\nWe first show that if $E$ contains no cycles then $G$ is principal. Suppose $G$ is not principal. Then there exist $x,y\\in E^\\infty $ and distinct $\\gamma,\\delta\\in G$ such that $r(\\gamma)=r(\\delta)=x$ and $s(\\gamma)=s(\\delta)=y$. It follows that there exist $a,b\\in\\mathbb Z$ such that $\\gamma=(x,a,y)$ and $\\delta=(x,b,y)$. Notice that since $\\gamma\\ne\\delta$, $a\\ne b$. 
We may assume without loss of generality that $a>b$.\n\nNow $\\gamma=(x,a,y)$ implies $x\\sim_a y$ and $\\delta=(x,b,y)$ implies $x\\sim_b y$, so there exists $N$ such that\n\\[\nn\\geqslant N\\implies x_n=y_{n+a}=y_{n+b},\n\\]\nand so $x_n=y_{n+a}=y_{n+a-b+b}=x_{n+a-b}$. Thus $E$ contains a cycle of length at most $a-b$.\n\nWe now show that if $G$ is principal then $E$ contains no cycles. Suppose $E$ contains the cycle $\\alpha=\\alpha_1\\alpha_2\\cdots \\alpha_k$. Then $x:=\\alpha\\alpha\\cdots$ is in $E^\\infty $ with $x\\sim_k x$, so both $(x,0,x)$ and $(x,k,x)$ are in $G$. It follows that $G$ is not principal.\n\\end{proof}\n\\end{prop}\n\n\n\n\n\n\n\\begin{example}[$2$-times convergence in a path groupoid]\\label{2-times_convergence_example}\nLet $E$ be the graph\n\\[\n\\begin{alternativegraphic}[2]{merged_fun_with_groupoids_graphics}\n\\begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \n\\def\\cellwidth{5.5};\n\\clip (-5em,-5.6em) rectangle (3*\\cellwidth em + 4.5em,0.3em);\n\n\\foreach \\x in {1,2,3,4} \\foreach \\y in {0} \\node (x\\x y\\y) at (\\cellwidth*\\x em-\\cellwidth em,-3*\\y em) {$\\scriptstyle v_{\\x}$};\n\\foreach \\x in {1,2,3,4} \\foreach \\y in {1} \\node (x\\x y\\y) at (\\cellwidth*\\x em-\\cellwidth em,-3*\\y em) {};\n\\foreach \\x in {1,2,3,4} \\foreach \\y in {2} \\node (x\\x y\\y) at (\\cellwidth*\\x em-\\cellwidth em,-1.5em-1.5*\\y em) {};\n\\foreach \\x in {1,2,3,4} \\foreach \\y in {1,2} \\fill[black] (x\\x y\\y) circle (0.15em);\n\n\n\n\\foreach \\x in {1,2,3,4} \\draw [<-, bend left] (x\\x y0) to node[anchor=west] {$\\scriptstyle f_{\\x}^{(2)}$} (x\\x y1);\n\\foreach \\x in {1,2,3,4} \\draw [<-, bend right] (x\\x y0) to node[anchor=east] {$\\scriptstyle f_{\\x}^{(1)}$} (x\\x y1);\n\n\\foreach \\x \/ \\z in {1\/2,2\/3,3\/4} \\draw[black,<-] (x\\x y0) to node[anchor=south] {} (x\\z y0);\n\n\\foreach \\x in {1,2,3,4} \\draw [<-] (x\\x y1) -- (x\\x y2);\n\\foreach \\x in {1,2,3,4} {\n\t\\node (endtail\\x) at (\\cellwidth*\\x em-\\cellwidth em, -6em) {};\n\t\\draw [dotted,thick] (x\\x y2) -- (endtail\\x);\n}\n\n\\node(endtailone) at (3*\\cellwidth em + 2.5em,0em) {};\n\\draw[dotted,thick] (x4y0) -- (endtailone);\n\\end{tikzpicture}\n\\end{alternativegraphic}\n\\]\nand let $G$ be the path groupoid. For each $n\\geqslant 1$ define $x^{(n)}:=[v_1, f_n^{(1)}]^\\infty$ and let $z$ be the infinite path with range $v_1$ that passes through each $v_n$. Then $\\{x^{(n)}\\}$ converges $2$-times in $G^{(0)}\/G$ to $z$.\n\n\\begin{proof}\nWe will describe two sequences in $G$ as in Definition \\ref{def_k-times_convergence}. For each $n\\geqslant 1$ define $\\gamma_n^{(1)}:=(x^{(n)},0,x^{(n)})$ and $\\gamma_n^{(2)}:=\\big([v_1, f_n^{(2)}]^\\infty,0,x^{(n)}\\big)$. It follows immediately that $s(\\gamma_n^{(1)})=x^{(n)}=s(\\gamma_n^{(2)})$ for all $n$ and that both $r(\\gamma_n^{(1)})$ and $r(\\gamma_n^{(2)})$ converge to $z$ as $n\\rightarrow\\infty$. It remains to show that $\\gamma_n^{(2)}(\\gamma_n^{(1)})^{-1}\\rightarrow\\infty$ as $n\\rightarrow\\infty$.\n\nLet $K$ be a compact subset of $G$. Our goal is to show that $\\gamma_n^{(2)}(\\gamma_n^{(1)})^{-1}=\\gamma_n^{(2)}$ is eventually not in $K$. 
Since sets of the form $Z(\\alpha,\\beta)$ for some $\\alpha,\\beta\\in E^*$ form a basis for the topology on the path groupoid, for each $\\gamma\\in K$ there exist $\\alpha^{(\\gamma)},\\beta^{(\\gamma)}\\in E^*$ with $s(\\alpha^{(\\gamma)})=s(\\beta^{(\\gamma)})$ so that $Z(\\alpha^{(\\gamma)},\\beta^{(\\gamma)})$ is an open neighbourhood of $\\gamma$ in $G$. Thus $\\cup_{\\gamma\\in K}Z(\\alpha^{(\\gamma)},\\beta^{(\\gamma)})$ is an open cover of the compact set $K$, and so admits a finite subcover $\\cup_{i=1}^I Z(\\alpha^{(i)},\\beta^{(i)})$. \n\nWe now claim that for any fixed $n\\in\\mathbb P$, if there exists $i$ with $1\\leqslant i\\leqslant I$ such that $\\gamma_n^{(2)}\\in Z(\\alpha^{(i)},\\beta^{(i)})$, then $\\big|[v_1,f_n^{(2)}]^*\\big|\\leqslant \\abs{\\alpha^{(i)}}$. Temporarily fix $n\\in\\mathbb P$ and suppose there exists $i$ with $1\\leqslant i\\leqslant I$ such that $\\gamma_n^{(2)}\\in Z(\\alpha^{(i)},\\beta^{(i)})$. Suppose the converse: that $ \\abs{\\alpha^{(i)}}<\\big|[v_1,f_n^{(2)}]^*\\big|$. Since $\\gamma_n^{(2)}\\in Z(\\alpha^{(i)},\\beta^{(i)})$, it follows that $r(\\gamma_n^{(2)})=[v_1, f_n^{(2)}]^\\infty\\in Z(\\alpha^{(i)})$, and so $\\alpha_p^{(i)}=[v_1, f_n^{(2)}]^\\infty_p$ for every $1\\leqslant p\\leqslant|\\alpha^{(i)}|$. By examining the graph we can see that $s\\big([v_1, f_n^{(2)}]^\\infty_p\\big)=v_{p+1}$ for all $1\\leqslant p<\\big|[v_1,f_n^{(2)}]^*\\big|$. Since we also know that $|\\alpha^{(i)}|<\\big|[v_1,f_n^{(2)}]^*\\big|$, we can deduce that $s(\\alpha^{(i)})=v_j$ for some $j$.\nFurthermore since $s(\\alpha^{(i)})=s(\\beta^{(i)})$, $s(\\beta^{(i)})=v_j$. There is only one path with source $v_j$ and range $v_1$, so $\\alpha^{(i)}=\\beta^{(i)}$.\nNote that when $k=|\\beta^{(i)}|-|\\alpha^{(i)}|$, the set $Z(\\alpha^{(i)},\\beta^{(i)})$ is by definition equal to\n\\[\n\\{(x,k,y):x\\in Z(\\alpha^{(i)}),y\\in Z(\\beta^{(i)}), x_p=y_{p+k} \\text{ for }p>|\\alpha^{(i)}|\\},\n\\]\nso since $\\gamma_n^{(2)}\\in Z(\\alpha^{(i)},\\beta^{(i)})$ and $\\alpha^{(i)}=\\beta^{(i)}$, we can see that $s(\\gamma_n^{(2)})_p=r(\\gamma_n^{(2)})_p$ for all $p>|\\alpha^{(i)}|$. We know $s(\\gamma_n^{(2)})=[v_1, f_n^{(1)}]^\\infty$ and $r(\\gamma_n^{(2)})=[v_1, f_n^{(2)}]^\\infty$, so $[v_1, f_n^{(2)}]^\\infty_p=[v_1, f_n^{(1)}]^\\infty_p$ for all $p>|\\alpha^{(i)}|$. In particular, since we assumed that $\\big|[v_1,f_n^{(2)}]^*\\big|>|\\alpha^{(i)}|$, we have \n\\[\n[v_1, f_n^{(2)}]^\\infty_{\\big|[v_1,f_n^{(2)}]^*\\big|}=[v_1, f_n^{(1)}]^\\infty_{\\big|[v_1,f_n^{(2)}]^*\\big|},\\]\nso that $f_n^{(2)}=f_n^{(1)}$. But $f_n^{(1)}$ and $f_n^{(2)}$ are distinct, so we have found a contradiction, and we must have $\\big|[v_1,f_n^{(2)}]^*\\big|\\leqslant |\\alpha^{(i)}|$.\n\nOur next goal is to show that each $Z(\\alpha^{(i)},\\beta^{(i)})$ contains at most one $\\gamma_n^{(2)}$. Fix $n,m\\in\\mathbb P$ and suppose that both $\\gamma_n^{(2)}$ and $\\gamma_m^{(2)}$ are in $Z(\\alpha^{(i)},\\beta^{(i)})$ for some $i$. We will show that $n=m$. Since $\\gamma_n^{(2)}\\in Z(\\alpha^{(i)}\\,\\beta^{(i)})$, $r(\\gamma_n^{(2)})=[v_1, f_n^{(2)}]^\\infty\\in Z(\\alpha^{(i)})$. Thus there exists $x\\in E^\\infty $ such that $[v_1,f_n^{(2)}]^*x\\in Z(\\alpha^{(i)})$ and, since $\\big|[v_1,f_n^{(2)}]^*\\big|\\leqslant \\abs{\\alpha^{(i)}}$, we can crop $x$ to form a finite $\\epsilon\\in E^*$ such that $[v_1,f_n^{(2)}]^*\\epsilon=\\alpha^{(i)}$. Similarly there exists $\\delta\\in E^*$ such that $[v_1,f_m^{(2)}]^*\\delta=\\alpha^{(i)}$. 
Then\n\\[\n[v_1,f_n^{(2)}]^*\\epsilon=\\alpha^{(i)}=[v_1,f_m^{(2)}]^*\\delta,\n\\]\nwhich we can see by looking at the graph is only possible if $n=m$. We have thus shown that if $\\gamma_n^{(2)}$ and $\\gamma_m^{(2)}$ are in $Z(\\alpha^{(i)},\\beta^{(i)})$, then $\\gamma_n^{(2)}=\\gamma_m^{(2)}$.\n\nLet $S=\\{n\\in\\mathbb P:\\gamma_n^{(2)}\\in K\\}$. Since $K\\subset \\cup_{i=1}^IZ(\\alpha^{(i)},\\beta^{(i)})$ and since $\\gamma_n^{(2)},\\gamma_m^{(2)}\\in Z(\\alpha^{(i)},\\beta^{(i)})$ implies $n=m$, $S$ can contain at most $I$ elements. Then $S$ has a maximal element $n_0$ and $\\gamma_n^{(2)}\\notin K$ provided $n>n_0$. Thus $\\gamma_n\\rightarrow\\infty$ as $n\\rightarrow\\infty$, and we have shown that $x^{(n)}$ converges $2$-times to $z$ in $G^{(0)}\/G$.\n\\end{proof}\n\\end{example}\n\n\\begin{example}[$k$-times convergence in a path groupoid]\\label{k-times_convergence_example}\nFor any fixed positive integer $k$, let $E$ be the graph\n\\[\n\\begin{alternativegraphic}[3]{merged_fun_with_groupoids_graphics}\n\\begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \n\\def\\cellwidth{5.5};\n\\clip (-5em,-5.6em) rectangle (3*\\cellwidth em + 4.5em,0.3em);\n\n\\foreach \\x in {1,2,3,4} \\foreach \\y in {0} \\node (x\\x y\\y) at (\\cellwidth*\\x em-\\cellwidth em,-3*\\y em) {$\\scriptstyle v_{\\x}$};\n\\foreach \\x in {1,2,3,4} \\foreach \\y in {1} \\node (x\\x y\\y) at (\\cellwidth*\\x em-\\cellwidth em,-3*\\y em) {};\n\\foreach \\x in {1,2,3,4} \\foreach \\y in {2} \\node (x\\x y\\y) at (\\cellwidth*\\x em-\\cellwidth em,-1.5em-1.5*\\y em) {};\n\\foreach \\x in {1,2,3,4} \\foreach \\y in {1,2} \\fill[black] (x\\x y\\y) circle (0.15em);\n\n\n\\foreach \\x in {1,2,3,4} \\draw [<-, bend right=40] (x\\x y0) to node[anchor=west] {} (x\\x y1);\n\\foreach \\x in {1,2,3,4} \\draw [<-, bend right=25] (x\\x y0) to node[anchor=west] {} (x\\x y1);\n\n\\foreach \\x in {1,2,3,4} \\draw [<-, bend left] (x\\x y0) to node[anchor=west,rotate=-25] {$\\scriptstyle f_{\\x}^{(1)},\\ldots,f_{\\x}^{(k)}$} (x\\x y1);\n\\foreach \\x in {1,2,3,4} \\draw [dotted,thick] ($(x\\x y0)!{1\/(2*cos(10))}!-10:(x\\x y1)$) -- ($(x\\x y0)!{1\/(2*cos(14))}!14:(x\\x y1)$);\n\n\n\n\\foreach \\x \/ \\z in {1\/2,2\/3,3\/4} \\draw[black,<-] (x\\x y0) to node[anchor=south] {} (x\\z y0);\n\n\\foreach \\x in {1,2,3,4} \\draw [<-] (x\\x y1) -- (x\\x y2);\n\\foreach \\x in {1,2,3,4} {\n\t\\node (endtail\\x) at (\\cellwidth*\\x em-\\cellwidth em, -6em) {};\n\t\\draw [dotted,thick] (x\\x y2) -- (endtail\\x);\n}\n\n\\node(endtailone) at (3*\\cellwidth em + 2.5em,0em) {};\n\\draw[dotted,thick] (x4y0) -- (endtailone);\n\\end{tikzpicture}\n\\end{alternativegraphic}\n\\]\nand let $G$ be the path groupoid. For each $n\\geqslant 1$ define $x^{(n)}:=[v_1, f_n^{(1)}]^\\infty$ and let $z$ be the infinite path that passes through each $v_n$. 
Then the sequence $\\{x^{(n)}\\}$ converges $k$-times in $G^{(0)}\/G$ to $z$.\n\\begin{proof}\nAfter defining $\\gamma_n^{(i)}:=\\big([v_1, f_n^{(i)}]^\\infty,0,x^{(n)}\\big)$ for each $1\\leqslant i\\leqslant k$, an argument similar to that in Example \\ref{2-times_convergence_example} establishes the $k$-times convergence.\n\\end{proof}\n\\end{example}\n\n\\begin{example}\\label{ML2_MU3_example}[Lower multiplicity 2 and upper multiplicity 3]\nConsider the graph $E$ described by\n\\[\n\\begin{alternativegraphic}[3]{merged_fun_with_groupoids_graphics}\n\\begin{tikzpicture}[>=stealth,baseline=(current bounding box.center)] \n\\def\\cellwidth{5.5};\n\\clip (-2.5em,-7.1em) rectangle (4*\\cellwidth em + 2.5em,0.3em);\n\n\\foreach \\x in {1,2,3,4,5} \\foreach \\y in {0} \\node (x\\x y\\y) at (\\cellwidth*\\x em-\\cellwidth em,-3*\\y em) {$\\scriptstyle v_{\\x}$};\n\\foreach \\x in {1,2,3,4,5} \\foreach \\y in {1} \\node (x\\x y\\y) at (\\cellwidth*\\x em-\\cellwidth em,-3*\\y em) {$\\scriptstyle w_{\\x}$};\n\\foreach \\x in {1,2,3,4,5} \\foreach \\y in {2,3} \\node (x\\x y\\y) at (\\cellwidth*\\x em-\\cellwidth em,-1.5em-1.5*\\y em) {};\n\\foreach \\x in {1,2,3,4,5} \\foreach \\y in {2,3} \\fill[black] (x\\x y\\y) circle (0.15em);\n\n\\foreach \\x in {1,3,5} \\draw [<-, bend left] (x\\x y0) to node[anchor=west] {} (x\\x y1);\n\\foreach \\x in {1,3,5} \\draw [<-, bend right] (x\\x y0) to node[anchor=east] {$\\scriptstyle f_{\\x}^{(1)}$} (x\\x y1);\n\\foreach \\x in {2,4} \\draw [<-, bend left=40] (x\\x y0) to node[anchor=west] {} (x\\x y1);\n\\foreach \\x in {2,4} \\draw [<-, bend right=40] (x\\x y0) to node[anchor=east] {$\\scriptstyle f_{\\x}^{(1)}$} (x\\x y1);\n\\foreach \\x in {2,4} \\draw [<-] (x\\x y0) to node {} (x\\x y1);\n\n\\foreach \\x \/ \\z in {1\/2,2\/3,3\/4,4\/5} \\draw[black,<-] (x\\x y0) to node[anchor=south] {} (x\\z y0);\n\n\\foreach \\x in {1,2,3,4,5} \\draw [<-] (x\\x y1) -- (x\\x y2);\n\\foreach \\x in {1,2,3,4,5} \\draw [<-] (x\\x y2) -- (x\\x y3);\n\\foreach \\x in {1,2,3,4,5} {\n\t\\node (endtail\\x) at (\\cellwidth*\\x em-\\cellwidth em, -7.5em) {};\n\t\\draw [dotted,thick] (x\\x y3) -- (endtail\\x);\n}\n\n\\node(endtailone) at (4*\\cellwidth em + 2.5em,0em) {};\n\\draw[dotted,thick] (x5y0) -- (endtailone);\n\\end{tikzpicture}\n\\end{alternativegraphic}\n\\]\nwhere for each odd $n\\geqslant 1$ there are exactly two paths $f_n^{(1)},f_n^{(2)}$ with source $w_n$ and range $v_n$, and for each even $n\\geqslant 2$ there are exactly three paths $f_n^{(1)},f_n^{(2)},f_n^{(3)}$ with source $w_n$ and range $v_n$. Let $G$ be the path groupoid, define $x^{(n)}:=[v_1, f_n^{(1)}]^\\infty$ for every $n\\geqslant 1$, and let $z$ be the infinite path that meets every vertex $v_n$ (so $z$ has range $v_1$). Then\n\\[\n\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x^{(n)}}})=2\\quad\\text{and}\\quad\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x^{(n)}}})=3.\n\\]\n\\begin{proof}\nWe know that $\\braces{x^{(n)}}$ converges $2$-times to $z$ in $G^{(0)}\/G$ by the argument in Example \\ref{2-times_convergence_example}, so we can apply Theorem \\ref{circle_thm} to see that $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x^{(n)}}})\\geqslant 2$. We can see that the subsequence $\\braces{x_{2n}}$ of $\\braces{x^{(n)}}$ converges $3$-times to $z$ in $G^{(0)}\/G$ by Example \\ref{k-times_convergence_example}. 
Theorem \\ref{2nd_circle_thm} now tells us that $\\mathrm{M}_\\mathrm{U}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x^{(n)}}})\\geqslant 3$.\n\nNow suppose $\\mathrm{M}_\\mathrm{L}(\\mathrm{L}^z,\\braces{\\mathrm{L}^{x^{(n)}}})\\geqslant 3$. Then by Theorem \\ref{circle_thm}, $\\braces{x^{(n)}}$ converges $3$-times to $z$ in $G^{(0)}\/G$, so there must exist three sequences $\\braces{\\gamma_n^{(1)}}$,$\\braces{\\gamma_n^{(2)}}$, and $\\braces{\\gamma_n^{(3)}}$ as in the definition of $k$-times convergence (Definition \\ref{def_k-times_convergence}). For each odd $n$, there are only two elements in $G$ with source $x^{(n)}$, so there must exist $1\\leqslant i=stealth,baseline=(current bounding box.center)] \n\\def\\cellwidth{5.5};\n\\clip (-4em,-5.6em) rectangle (3*\\cellwidth em + 3em,4.3em);\n\n\\foreach \\x in {1,2,3,4} \\foreach \\y in {0,1} \\node (x\\x y\\y) at (\\cellwidth*\\x em-\\cellwidth em,-3*\\y em) {};\n\\foreach \\x in {1,2,3,4} \\foreach \\y in {2} \\node (x\\x y\\y) at (\\cellwidth*\\x em-\\cellwidth em,-1.5em-1.5*\\y em) {};\n\\foreach \\x in {1,2,3,4} \\foreach \\y in {0,1,2} \\fill[black] (x\\x y\\y) circle (0.15em);\n\n\\foreach \\x in {1,2,3,4} {\n\t\\node (x\\x z1) at (\\cellwidth*\\x em-1.25*\\cellwidth em,4em) {$\\scriptstyle v_\\x$};\n\t\\node (x\\x z2) at (\\cellwidth*\\x em-1.5*\\cellwidth em,2em) {$\\scriptstyle w_\\x$};\n}\n\\foreach \\x in {1,2,3,4} {\n\t\\draw[<-] (x\\x z1) -- (x\\x y0);\n\t\\draw[<-] (x\\x z2) -- (x\\x y0);\n}\n\n\\foreach \\x in {1,2,3,4} \\draw [<-, bend left] (x\\x y0) to node[anchor=west] {$\\scriptstyle f_{\\x}^{(2)}$} (x\\x y1);\n\\foreach \\x in {1,2,3,4} \\draw [<-, bend right] (x\\x y0) to node[anchor=east] {$\\scriptstyle f_{\\x}^{(1)}$} (x\\x y1);\n\n\\foreach \\x \/ \\z in {1\/2,2\/3,3\/4} \\draw[black,<-] (x\\x z1) to node[anchor=south] {} (x\\z z1);\n\\foreach \\x \/ \\z in {1\/2,2\/3,3\/4} \\draw[black,<-] (x\\x z2) to node[anchor=south] {} (x\\z z2);\n\n\\foreach \\x in {1,2,3,4} \\draw [<-] (x\\x y1) -- (x\\x y2);\n\\foreach \\x in {1,2,3,4} {\n\t\\node (endtail\\x) at (\\cellwidth*\\x em-\\cellwidth em, -6em) {};\n\t\\draw [dotted,thick] (x\\x y2) -- (endtail\\x);\n}\n\n\\node (endtailone) at ($(x4z1) + 0.5*(\\cellwidth em, 0 em)$) {};\n\\node (endtailtwo) at ($(x4z2) + 0.5*(\\cellwidth em, 0 em)$) {};\n\\draw[dotted,thick] (x4z1) -- (endtailone);\n\\draw[dotted,thick] (x4z2) -- (endtailtwo);\n\\end{tikzpicture}\n\\]\nand let $G$ be the path groupoid. For every $n\\geqslant 1$ let $x^{(n)}$ be the infinite path $[v_1, f_n^{(1)}]^\\infty$. Let $x$ be the infinite path with range $v_1$ that passes through each $v_n$ and let $y$ be the infinite path with range $w_1$ that passes through each $w_n$. Then the orbit space $G^{(0)}\/G$ is not Hausdorff and $\\braces{x^{(n)}}$ converges $2$-times in $G^{(0)}\/G$ to both $x$ and $y$.\n\n\\begin{proof}\nTo see that $\\braces{x^{(n)}}$ converges $2$-times to $x$ in $G^{(0)}\/G$, consider the sequences $\\braces{([v_1, f_n^{(2)}]^\\infty,0,x^{(n)})}$ and $\\braces{(x^{(n)},0,x^{(n)})}$ and follow the argument as in Example \\ref{2-times_convergence_example}. To see that $\\braces{x^{(n)}}$ converges $2$-times to $y$ in $G^{(0)}\/G$, consider the sequences $\\braces{([w_1,f_n^{(1)}]^\\infty,0,x^{(n)})}$ and $\\braces{([w_1,f_n^{(2)}]^\\infty,0,x^{(n)})}$. While it is tempting to think that this example exhibits $4$-times convergence (or even $3$-times convergence), this is not the case (see Example~\\ref{ML2_MU3_example} for an argument demonstrating this). 
We know that $\braces{x^{(n)}}$ converges $2$-times to $x$ in $G^{(0)}\/G$, so $[x^{(n)}]\rightarrow [x]$ in $G^{(0)}\/G$, and similarly $[x^{(n)}]\rightarrow [y]$ in $G^{(0)}\/G$. It follows that $G^{(0)}\/G$ is not Hausdorff since $[x]\ne [y]$.\n\end{proof}\n\end{example}\n\nIn all of the examples above, the orbits in $G^{(0)}$ are closed and hence $G^{(0)}\/G$ and the spectrum $C^*(G)^\wedge$ are homeomorphic by \cite[Proposition~5.1]{Clark2007}. By combining the features of the graphs in Examples~\ref{ML2_MU3_example} and~\ref{example_with_non-Hausdorff_orbit_space} we obtain a principal groupoid whose $C^*$-algebra has non-Hausdorff spectrum and distinct upper and lower multiplicities among its irreducible representations.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn the study of parametrized nonlinear approximations, the framework of continuous nonlinear widths of DeVore \emph{et al.} \cite{devore1989optimal} is a general approach based on the assumption that approximation parameters depend continuously on the approximated object. Under this assumption, the framework provides lower error bounds for approximations in several important functional spaces, in particular Sobolev spaces. \n\nIn the present paper we will only consider the simplest Sobolev space $W^{1,\infty}([0,1]),$ i.e. the space of Lipschitz functions on the segment $[0,1]$. The norm in this space can be defined by $\|f\|_{W^{1,\infty}([0,1])}=\max(\|f\|_\infty, \|f'\|_\infty),$ where $f'\in L^\infty([0,1])$ is the weak derivative of $f$, and $\|\cdot\|_\infty$ is the norm in $L^\infty([0,1])$. We will denote by $B$ the unit ball in $W^{1,\infty}([0,1])$. \nThen the following result contained in Theorem 4.2 of \cite{devore1989optimal} establishes an asymptotic lower bound on the accuracy of approximating functions from $B$ under assumption of continuous parameter selection:\n\begin{theor}\label{th:1}\nLet $\nu$ be a positive integer and $\eta:\mathbb R^\nu\to L^\infty([0,1])$ be any map between the space $\mathbb R^\nu$ and the space $L^\infty([0,1])$. Suppose that there is a continuous map $\psi:B\to \mathbb R^\nu$ such that $\|f-\eta(\psi(f))\|_\infty\le \epsilon$ for all $f\in B$. Then $\epsilon\ge \frac{c}{\nu}$ with some absolute constant $c$.\n\end{theor}\n\nThe bound $\frac{c}{\nu}$ stated in the theorem is attained, for example, by the usual piecewise-linear interpolation with uniformly distributed breakpoints. Namely, assuming $\nu\ge 2$, define $\psi(f)=(f(0), f(\frac{1}{\nu-1}),\ldots, f(1))$ and define $\eta(y_1,\ldots,y_\nu)$ to be the continuous piecewise-linear function with the nodes $(\frac{n-1}{\nu-1}, y_n), n=1,\ldots,\nu.$ Then it is easy to see that $\|f-\eta(\psi(f))\|_\infty\le \frac{1}{\nu-1}$ for all $f\in B.$ \n\nThe key assumption of Theorem \ref{th:1} that $\psi$ is continuous need not hold in applications, in particular for approximations with neural networks. The most common practical task in this case is to optimize the weights of the network so as to obtain the best approximation of a specific function, without any regard for other possible functions. Theoretically, Kainen \emph{et al.} \cite{kainen1999approximation} prove that already for networks with a single hidden layer the optimal weight selection is discontinuous in general. 
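\n\nAs a simple illustration of the piecewise-linear baseline described above (a minimal sketch for illustration only; the identifiers \texttt{psi}, \texttt{eta} and the sample function are arbitrary choices), the following NumPy snippet builds the interpolant on $\nu$ uniformly spaced nodes and checks numerically that the error does not exceed $\frac{1}{\nu-1}$ for one particular $f\in B$:\n\begin{verbatim}\nimport numpy as np\n\nnu = 20\nnodes = np.linspace(0.0, 1.0, nu)\n\ndef psi(f):\n    # continuous parameter selection: sample f at the nu nodes\n    return f(nodes)\n\ndef eta(params, x):\n    # piecewise-linear reconstruction with breakpoints at the nodes\n    return np.interp(x, nodes, params)\n\nf = lambda t: np.abs(t - 0.37)       # one example element of the ball B\ngrid = np.linspace(0.0, 1.0, 10001)\nerr = np.max(np.abs(f(grid) - eta(psi(f), grid)))\nassert err * (nu - 1) <= 1.0 + 1e-9  # the error is at most 1 over (nu - 1)\n\end{verbatim}\n\n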
The goal of the present paper is to quantify the accuracy gain that discontinuous weight selection brings to a deep feed-forward neural network model in comparison to the baseline bound of Theorem \\ref{th:1}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[clip,trim=15mm 5mm 5mm 5mm]{standardNet-eps-converted-to.pdf}\n\\caption{The standard network architecture of depth $N=9$ and width $M=5$ (i.e., containing 9 fully-connected hidden layers, 5 neurons each).}\\label{fig:stnet}\n\\end{center}\n\\end{figure}\n\nSpecifically, we consider a common deep architecture shown in Fig. \\ref{fig:stnet} that we will refer to as \\emph{standard}. The network consists of one input unit, one output unit, and $N$ fully-connected hidden layers, each including a constant number $M$ of units. We refer to $N$ as the \\emph{depth} of the network and to $M$ as its \\emph{width}. The layers are fully connected in the sense that each unit is connected with all units of the previous and the next layer. Only neighboring layers are connected. A hidden unit computes the map $$(x_1,\\ldots,x_K)\\mapsto\\sigma\\Big(\\sum_{k=1}^K w_kx_k+h\\Big),$$\nwhere $x_1,\\ldots,x_K$ are the inputs, $\\sigma$ is a nonlinear activation function, and $(w_k)_{k=1}^K$ and $h$ are the weights associated with this computation unit. For the first hidden layer $K=1$, otherwise $K=M$. The network output unit is assumed to compute a linear map:\n$$(x_1,\\ldots,x_M)\\mapsto\\sum_{k=1}^M w_kx_k+h.$$\nThe total number of weights in the network then equals $$\\nu_{M,N}=M^2(N-1)+M(N+2)+1.$$ \n\nWe will take the activation function $\\sigma$ to be the ReLU function (Rectified Linear Unit), which is a popular choice in applications:\n\\begin{equation}\\label{eq:relu}\\sigma(x)=\\max(0,x).\n\\end{equation}\nWith this activation function, the functions implemented by the whole network are continuous and piecewise linear, in particular they belong to $W^{1,\\infty}([0,1])$. Let $\\eta_{M,N}:\\mathbb R^{\\nu_{M,N}}\\to W^{1,\\infty}([0,1])$ map the weights of the width-$M$ depth-$N$ network to the respective implemented function. \nConsider uniform approximation rates for the ball $B$, with or without the weight selection continuity assumption:\n\\begin{align*}\nd_{\\rm cont}(M,N)=&\\inf_{\\substack{\\psi:B\\to\\mathbb R^{\\nu_{M,N}}\\\\\\psi\\text{ continuous}}}\\sup_{f\\in B}\\|f-\\eta_{M,N}(\\psi(f))\\|_\\infty\\\\\nd_{\\rm all}(M,N)=&\\inf_{\\psi:B\\to\\mathbb R^{\\nu_{M,N}}}\\sup_{f\\in B}\\|f-\\eta_{M,N}(\\psi(f))\\|_\\infty\n\\end{align*}\nTheorem \\ref{th:1} implies that $d_{\\rm cont}(M,N)\\ge \\frac{c}{\\nu_{M,N}}$.\n\nWe will be interested in the scenario where the width $M$ is fixed and the depth $N$ is varied. The main result of our paper is an upper bound for $d_{\\rm all}(M,N)$ that we will prove in the case $M=5$ (but the result obviously also holds for larger $M$). \n\\begin{theor}\\label{th:2}\n$$d_{\\rm all}(5,N)\\le \\frac{c}{N\\ln N},$$\nwith some absolute constant $c$.\n\\end{theor}\nComparing this result with Theorem \\ref{th:1}, we see that without the assumption of continuous weight selection the approximation error is lower than with this assumption at least by a factor logarithmic in the size of the network:\n\\begin{corol}\n$$\\frac{d_{\\rm all}(5,N)}{d_{\\rm cont}(5,N)}\\le \\frac{c}{\\ln N},$$\nwith some absolute constant $c$.\n\\end{corol}\n\nThe remainder of the paper is the proof of Theorem \\ref{th:2}. 
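\n\nBefore turning to the proof, we record a minimal NumPy sketch of the standard width-$M$, depth-$N$ ReLU architecture of Fig. \ref{fig:stnet} (for illustration only; the identifiers \texttt{relu}, \texttt{standard\_net} and \texttt{num\_weights} are arbitrary names introduced here), together with a check of the weight count $\nu_{M,N}=M^2(N-1)+M(N+2)+1$:\n\begin{verbatim}\nimport numpy as np\n\ndef relu(x):\n    return np.maximum(0.0, x)\n\ndef num_weights(M, N):\n    # nu_{M,N} = M^2 (N-1) + M (N+2) + 1\n    return M * M * (N - 1) + M * (N + 2) + 1\n\ndef standard_net(x, layers):\n    # layers = [(W_1, b_1), ..., (W_N, b_N), (w_out, b_out)];\n    # W_1 has shape (M, 1), the other hidden W have shape (M, M),\n    # w_out has shape (1, M); hidden units apply ReLU, the output is linear.\n    h = np.array([[float(x)]])\n    for W, b in layers[:-1]:\n        h = relu(W @ h + b)\n    w_out, b_out = layers[-1]\n    return (w_out @ h + b_out).item()\n\n# weight count for the width-5, depth-9 network (M = 5, N = 9)\nM, N = 5, 9\nshapes = [(M, 1)] + [(M, M)] * (N - 1) + [(1, M)]\ncount = sum(rows * cols + rows for rows, cols in shapes)  # weights plus biases\nassert count == num_weights(M, N) == 256\n\end{verbatim}\n\n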
The proof relies on a construction of adaptive network architectures with ``cached'' elements from \\cite{yarotsky2016error_v3}. We use a somewhat different, streamlined version of this construction: whereas in \\cite{yarotsky2016error_v3} it was first performed for a discontinuous activation function and then extended to ReLU by some approximation, in the present paper we do it directly, without invoking auxiliary activation functions. \n\n\n\\section{Proof of Theorem \\ref{th:2}}\n\nIt is convenient to divide the proof into four steps. In the first step we construct for any function $f\\in B$ an approximation $\\widetilde f$ using cached functions. In the second step we expand the constructed approximation in terms of the ReLU function \\eqref{eq:relu} and linear operations. In the third step we implement $\\widetilde f$ by a highly parallel shallow network with an $f$-dependent (``adaptive'') architecture. In the fourth step we show that this adaptive architecture can be embedded in the standard deep architecture of width 5. \n\n\n\n\\paragraph{Step 1: Cache-based approximation.}\n\nWe first explain the idea of the construction. We start with interpolating $f$ by a piece-wise linear function $\\widetilde f_1$ with uniformly distributed nodes. After that, we create a ``cache'' of auxiliary functions that will be used for detalization of the approximation in the intervals between the nodes. The key point is that one cached function can be used in many intervals, which will eventually lead to a $\\sim\\log N$ saving in the error--complexity relation. The assignment of cached functions to the intervals will be $f$-dependent and encoded in the network architecture in Step 3.\n\nThe idea of reused subnetworks dates back to the proof of the $O(2^n\/n)$ upper bound for the complexity of Boolean circuits implementing $n$-ary functions (\\cite{shannon1949synthesis}). \n\nWe start now the detailed exposition. Given $f\\in B$, we will construct an approximation $\\widetilde f$ to $f$ in the form\n$$\\widetilde f= \\widetilde f_1+\\widetilde f_2.$$\nHere, $\\widetilde f_1$ is the piecewise linear interpolation of $f$ with the breakpoints $\\{\\frac{t}{T}\\}_{t=0}^T$:\n$$\\widetilde f_1\\Big(\\frac{t}{T}\\Big)=f\\Big(\\frac{t}{T}\\Big), \\quad t=0,1,\\ldots,T.$$\nThe value of $T$ will be chosen later.\n\nSince $f$ is Lipschitz with constant 1, $\\widetilde f_1$ is also Lipschitz with constant 1. We denote by $I_t$ the intervals between the breakpoints:\n$$I_t=\\Big[\\frac{t}{T}, \\frac{t+1}{T}\\Big),\\quad t=0,\\ldots,T-1.$$\nWe will now construct $\\widetilde f_2$ as an approximation to the difference \\begin{equation}\\label{eq:f2}f_2=f-\\widetilde f_1.\\end{equation} \nNote that $f_2$ vanishes at the endpoints of the intervals $I_t$:\n\\begin{equation}\\label{eq:f2pr1} f_2\\Big(\\frac{t}{T}\\Big)=0,\\;\\; t=0,\\ldots,T,\n\\end{equation}\nand $f_2$ is Lipschitz with constant 2: \n\\begin{equation}\\label{eq:f2pr2} |f_2(x_1)-f_2(x_2)|\\le 2|x_1-x_2|,\n\\end{equation}\nsince $f$ and $\\widetilde f_1$ are Lipschitz with constant 1.\n\nTo define $\\widetilde f_2$, we first construct a set $\\Gamma$ of cached functions. Let $m$ be a positive integer to be chosen later. 
Let $\\Gamma$ be the set of piecewise linear functions $\\gamma: [0,1]\\to\\mathbb R$ with the breakpoints $\\{\\frac{r}{m}\\}_{r=0}^m$ and the properties \n$$\\gamma(0)=\\gamma(1)=0$$\nand\n$$\\gamma\\Big(\\frac{r}{m}\\Big)-\\gamma\\Big(\\frac{r-1}{m}\\Big)\\in \\Big\\{-\\frac{2}{m}, 0, \\frac{2}{m}\\Big\\},\\quad r =1, \\ldots, m.$$\nNote that the size $|\\Gamma|$ of $\\Gamma$ is not larger than $3^m$.\n\nIf $g:[0,1]\\to \\mathbb R$ is any Lipschitz function with constant 2 and $g(0)=g(1)=0$, then $g$ can be approximated by some $\\gamma\\in \\Gamma$ with error not larger than $\\frac{2}{m}$: namely, take $\\gamma(\\frac{r}{m})=\\frac{2}{m}\\lfloor g(\\frac{r}{m})\/\\frac{2}{m}\\rfloor$ (here $\\lfloor\\cdot\\rfloor$ is the floor function.)\n\nMoreover, if $f_2$ is defined by \\eqref{eq:f2}, then, using \\eqref{eq:f2pr1}, \\eqref{eq:f2pr2}, on each interval $I_t$ the function $f_2$ can be approximated with error not larger than $\\frac{2}{Tm}$ by a properly rescaled function $\\gamma\\in\\Gamma$. Namely, for each $t=0,\\ldots,T-1$ we can define the function $g$ by $g(y)=Tf_2(\\frac{t+y}{T})$. Then it is Lipschitz with constant 2 and $g(0)=g(1)=0$, so we can find $\\gamma_t\\in\\Gamma$ such that\n\\begin{equation*}\n\\sup_{y\\in [0,1)}\\Big|Tf_2\\Big(\\frac{t+y}{T}\\Big)-\\gamma_t(y)\\Big| \\le \\frac{2}{m}.\n\\end{equation*}\nThis can be equivalently written as \n\\begin{equation*}\n\\sup_{x\\in I_t}\\Big|f_2(x)-\\frac{1}{T}\\gamma_t(Tx-t)\\Big| \\le \\frac{2}{Tm}.\n\\end{equation*}\nNote that the obtained assignment $t\\mapsto \\gamma_t$ is not injective, in general ($T$ will be larger than $|\\Gamma|$).\n\nWe can then define $\\widetilde f_2$ on the whole interval $[0,1)$ by\n\\begin{equation}\\label{eq:f2i}\n\\widetilde f_2(x)=\\frac{1}{T}\\gamma_t(Tx-t),\\quad x\\in I_t, \\quad t=0,\\ldots,T-1.\n\\end{equation}\nThis $\\widetilde f_2$ approximates $f_2$ with error $\\frac{2}{Tm}$ on $[0,1)$: \n\\begin{equation*\n\\sup_{x\\in[0,1)}|f_2(x)-\\widetilde f_2(x)|\\le \\frac{2}{Tm},\n\\end{equation*}\nand hence, by \\eqref{eq:f2}, for the full approximation $\\widetilde f=\\widetilde f_1+\\widetilde f_2$ we will also have\n\\begin{equation}\\label{eq:f2tm}\n\\sup_{x\\in[0,1)}|f(x)-\\widetilde f(x)|\\le \\frac{2}{Tm}.\n\\end{equation}\n\n\n\n\\paragraph{Step 2: ReLU-expansion of $\\widetilde f$.} We express now the constructed approximation $\\widetilde f=\\widetilde f_1+\\widetilde f_2$ in terms of linear and ReLU operations. \n\nLet us first describe the expansion of $\\widetilde f_1$. Since $\\widetilde f_1$ is a continuous piecewise-linear interpolation of $f$ with the breakpoints $\\{\\frac{t}{T}\\}_{t=0}^T$, we can represent it on the segment $[0,1]$ in terms of the ReLU activation function $\\sigma$ as \n\\begin{equation}\\label{eq:f1exp}\n\\widetilde f_1(x)=\\sum_{t=0}^{T-1}w_t\\sigma\\Big(x-\\frac{t}{T}\\Big)+h,\n\\end{equation}\nwith some weights $(w_t)_{t=0}^{T-1}$ and $h$.\n\nNow we turn to $\\widetilde f_2$, as given by \\eqref{eq:f2i}. 
Consider the ``tooth'' function\n\\begin{equation}\n\\phi(x)=\n\\begin{cases}\n0, & x\\notin[-1,1],\\\\\n1-|x|,& x\\in[-1,1].\n\\end{cases}\n\\end{equation}\nNote in particular that for any $t=0,1,\\ldots,T-1$\n\\begin{equation}\\label{eq:phivanish}\n\\phi(Tx-t)=0\\quad \\text{if } x\\notin I_{t-1}\\cup I_{t}.\n\\end{equation}\nThe function $\\phi$ can be written as \n\\begin{equation}\\label{eq:phiexp}\\phi(x)= \\sum_{q=-1}^1\\alpha_q\\sigma(x-q),\n\\end{equation}\nwhere $$\\alpha_{-1}=\\alpha_1=1, \\alpha_0=-2.$$\nLet us expand each $\\gamma\\in\\Gamma$ over the basis of shifted ReLU functions:\n\\begin{equation}\\label{eq:gammaexp}\n\\gamma(x) = \\sum_{r=0}^{m-1}c_{\\gamma,r}\\sigma\\Big(x-\\frac{r}{m}\\Big),\\quad x\\in[0,1],\n\\end{equation}\nwith some coefficients $c_{\\gamma,r}$. There is no constant term because $\\gamma(0)=0$. Since $\\gamma(1)=0$, we also have\n\\begin{equation}\\label{eq:cgr1}\n\\sum_{r=0}^{m-1}c_{\\gamma,r}\\sigma\\Big(1-\\frac{r}{m}\\Big) = 0.\n\\end{equation}\nConsider the functions $\\theta_\\gamma:\\mathbb R^2\\to\\mathbb R$ defined by \n\\begin{equation}\\label{eq:ggammaexp}\n\\theta_\\gamma(a,b)=\\sum_{r=0}^{m-1}c_{\\gamma,r}\\sigma\\Big(\\frac{m-r}{m}a-\\frac{r}{m}b\\Big).\n\\end{equation}\n\n\\begin{lemma} \n\\begin{align}\\label{eq:ggamma11}\n\\theta_\\gamma(a,0)&=0\\quad\\text{ for all } a\\ge 0,\\\\\n\\label{eq:ggamma12}\n\\theta_\\gamma(0,b)&=0\\quad\\text{ for all } b\\ge 0,\\\\\n\\label{eq:ggamma2}\n\\theta_{\\gamma}(\\phi(Tx-t-1),\\phi(Tx-t))&=\\begin{cases} \\gamma(Tx-t),& x\\in I_t,\\\\\n0, &x\\notin I_{t},\\end{cases}\\end{align}\nfor all $x\\in[0,1]$ and $t=0,1,\\ldots,T-1$.\n\\end{lemma}\n\\begin{proof}\nProperty \\eqref{eq:ggamma11} follows from \\eqref{eq:cgr1} using positive homogeneity of $\\sigma$. Property \\eqref{eq:ggamma12} follows since $\\sigma(x)=0$ for $x\\le 0$. \n\nTo establish \\eqref{eq:ggamma2}, consider the two cases:\n\\begin{enumerate}\n\\item If $x\\notin I_t,$ then at least one of the arguments of $\\theta_\\gamma$ vanishes by \\eqref{eq:phivanish}, and hence the l.h.s. vanishes by \\eqref{eq:ggamma11}, \\eqref{eq:ggamma12}. \n\\item If $x\\in I_t,$ then $\\phi(Tx-t-1)=Tx-t$ and $\\phi(Tx-t)=1+t-Tx$, so \n$$\n\\frac{m-r}{m}\\phi(Tx-t-1)-\\frac{r}{m}\\phi(Tx-t)=Tx-t-\\frac{r}{m}\n$$\nand hence, by \\eqref{eq:ggammaexp} and \\eqref{eq:gammaexp},\n$$\n\\theta_{\\gamma}(\\phi(Tx-t-1),\\phi(Tx-t))=\\sum_{r=0}^{m-1}c_{\\gamma,r}\\sigma\\Big(Tx-t-\\frac{r}{m}\\Big)=\\gamma(Tx-t).\n$$\n\\end{enumerate}\n\\end{proof}\nUsing definition \\eqref{eq:f2i} of $\\widetilde f_2$ and representation \\eqref{eq:ggamma2}, we can then write, for any $x\\in[0,1),$ \n\\begin{align}\n\\widetilde f_2(x)\n&=\\frac{1}{T}\\sum_{t=0}^{T-1}\\theta_{\\gamma_t}(\\phi(Tx-t-1),\\phi(Tx-t))\\nonumber\\\\\n&=\\frac{1}{T}\\sum_{\\gamma\\in\\Gamma}\\sum_{t:\\gamma_t=\\gamma}\\theta_{\\gamma}(\\phi(Tx-t-1),\\phi(Tx-t)).\\nonumber\n\\end{align}\nIn order to obtain computational gain from caching, we would like to move summation over $t$ into the arguments of the function $\\theta_\\gamma$. However, we need to avoid double counting associated with overlapping supports of the functions $x\\mapsto \\phi(Tx-t),t=0,\\ldots,T-1$. Therefore we first divide all $t$'s into three series $\\{t: t\\equiv i \\Mod{3}\\}$ indexed by $i\\in\\{0,1,2\\}$, and we will then move summation over $t$ into the argument of $\\theta_\\gamma$ separately for each series. 
Precisely, we write\n\\begin{equation}\\label{eq:f2final0}\n\\widetilde f_2(x)=\\frac{1}{T}\\sum_{\\gamma\\in\\Gamma}\\sum_{i=0}^2\\widetilde f_{2,\\gamma,i},\n\\end{equation}\nwhere\n\\begin{equation}\\label{eq:f2final1}\n\\widetilde f_{2,\\gamma,i}(x)=\\sum_{\\substack{t:\\gamma_t=\\gamma\\\\t\\equiv i \\Mod{3}}}\\theta_{\\gamma}(\\phi(Tx-t-1),\\phi(Tx-t)).\n\\end{equation}\nWe claim now that $\\widetilde f_{2,\\gamma,i}$ can be alternatively written as\n\\begin{equation}\\label{eq:f2final}\n\\widetilde f_{2,\\gamma,i}(x)=\\theta_\\gamma\\Big(\\sum_{\\substack{t:\\gamma_t=\\gamma\\\\t\\equiv i \\Mod{3}}}\\phi(Tx-t-1), \\sum_{\\substack{t:\\gamma_t=\\gamma\\\\t\\equiv i \\Mod{3}}}\\phi(Tx-t)\\Big).\n\\end{equation}\nTo check that, suppose that $x\\in I_{t_0}$ for some $t_0$ and consider several cases: \n\\begin{enumerate}\n\\item Let $i\\not\\equiv t_0 \\Mod{3}$, then both expressions \\eqref{eq:f2final1} and \\eqref{eq:f2final} vanish. Indeed, at least one of the sums forming the arguments of $\\theta_\\gamma$ in the r.h.s. of \\eqref{eq:f2final} vanishes by \\eqref{eq:phivanish}, so $\\theta_\\gamma$ vanishes by \\eqref{eq:ggamma11}, \\eqref{eq:ggamma12}. Similar reasoning shows that the r.h.s. of \\eqref{eq:f2final1} vanishes too.\n\\item Let $i\\equiv t_0 \\Mod{3}$ and $\\gamma\\ne\\gamma_{t_0}$. Then all terms in the sums over $t$ in \\eqref{eq:f2final1} and \\eqref{eq:f2final} vanish, so both \\eqref{eq:f2final1} and \\eqref{eq:f2final} vanish.\n\\item Let $i\\equiv t_0 \\Mod{3}$ and $\\gamma=\\gamma_{t_0}$. Then all terms in the sums over $t$ in \\eqref{eq:f2final1} and \\eqref{eq:f2final} vanish except for the term $t=t_0$, so both \\eqref{eq:f2final1} and \\eqref{eq:f2final} are equal to $\\theta_\\gamma(\\phi(Tx-t_0-1), \\phi(Tx-t_0))$. \n\\end{enumerate}\nThe desired ReLU expansion of $\\widetilde f_2$ is then given by \\eqref{eq:f2final0} and \\eqref{eq:f2final}, where $\\phi$ and $\\theta_\\gamma$ are further expanded by \\eqref{eq:phiexp}, \\eqref{eq:ggammaexp}:\n\\begin{alignat}{2}\\label{eq:f2final2}\n\\widetilde f_2(x)&=\\frac{1}{T}\\sum_{\\gamma\\in\\Gamma}\\sum_{i=0}^2\\widetilde f_{2,\\gamma,i} &&\\nonumber\\\\\n&=\\frac{1}{T}\\sum_{\\gamma\\in\\Gamma}\\sum_{i=0}^2\\sum_{r=0}^{m-1}c_{\\gamma,r}\\sigma\\Big(&\\frac{m-r}{m}&\\sum_{\\substack{t:\\gamma_t=\\gamma\\\\t\\equiv i \\Mod{3}}}\\sum_{q=-1}^1\\alpha_q\\sigma(Tx-t-q-1)\\nonumber\\\\\n&&-\\frac{r}{m}&\\sum_{\\substack{t:\\gamma_t=\\gamma\\\\t\\equiv i \\Mod{3}}}\\sum_{q=-1}^1\\alpha_q\\sigma(Tx-t-q)\\Big).\n\\end{alignat}\n\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[clip,trim=75mm 23mm 86mm 25mm]{cache2-eps-converted-to.pdf}\n\\caption{Implementation of the function $\\widetilde f=\\widetilde f_1+\\widetilde f_2$ by a neural network with $f$-dependent architecture. The computation units $Q^{(2)}_{t,j,q}, Q^{(3)}_{\\gamma,i,j}$ and $Q^{(4)}_{\\gamma,i,r}$ are depicted in their layers in the lexicographic order (later indices change faster). Dimensions of the respective index arrays are indicated on the right (the example shown is for $T=4,$ $m=3$ and $|\\Gamma|=2$). 
Computation of $\\widetilde f_2$ includes $3|\\Gamma|$ parallel computations of functions $\\widetilde f_{2,\\gamma,i}$; the corresponding subnetworks are formed by units and connections connected to $Q^{(3)}_{\\gamma,i,0}$ and $Q^{(3)}_{\\gamma,i,1}$ with specific $\\gamma,i$.}\\label{fig:cache2}\n\\end{center}\n\\end{figure}\n\n\\paragraph{Step 3: Network implementation with $f$-dependent architecture.}\nWe will now express the approximation $\\widetilde f$ by a neural network (see Fig. \\ref{fig:cache2}). The network consists of two parallel subnetworks implementing $\\widetilde f_1$ and $\\widetilde f_2$ that include three and five layers, respectively (one and three layers if counting only hidden layers). The units of the network either have the ReLU activation function or simply perform linear combinations of inputs without any activation function. We will denote individual units in the subnetworks $\\widetilde f_1$ and $\\widetilde f_2$ by symbols $P$ and $Q$, respectively, with superscripts numbering the layer and subscripts identifying the unit within the layer. The subnetworks have a common input unit:\n$$R=P^{(1)}=Q^{(1)}$$\nand their output units are merged so as to sum $\\widetilde f_1$ and $\\widetilde f_2$:\n$$Z=P^{(3)}+Q^{(5)}.$$\n\nLet us first describe the network for $\\widetilde f_1$. By \\eqref{eq:f1exp}, we can represent $\\widetilde f_1$ by a 3-layer ReLU network as follows:\n\\begin{enumerate}\n\\item The first layer contains the single input unit $P^{(1)}$.\n\\item The second layer contains $T$ units $P^{(2)}_{t}= \\sigma(P^{(1)}-\\frac{t}{T})$, where $t\\in\\{0,\\ldots,T-1\\}$.\n\\item The third layer contains a single output unit $P^{(3)}=\\sum_{t=0}^{T-1}w_tP^{(2)}_{t}+h$.\n\\end{enumerate}\nNow we describe the network for $\\widetilde f_2$ based on the representation \\eqref{eq:f2final2}.\n\\begin{enumerate}\n\\item The first layer contains the single input unit $Q^{(1)}$.\n\\item The second layer contains $6T$ units $Q^{(2)}_{t,j,q}$, where $t\\in\\{0,\\ldots,T-1\\}$, $q\\in\\{-1,0,1\\}$, and $j\\in\\{0,1\\}$ corresponds to the first or second argument of the function $\\theta_\\gamma$:\n$$Q^{(2)}_{t,j,q}=\\sigma(TQ^{(1)}-t-q-j).$$ \n\\item The third layer contains $6|\\Gamma|$ units $Q^{(3)}_{\\gamma,i,j}$, where $\\gamma\\in\\Gamma$, $i\\in\\{0,1,2\\}$, and $j\\in\\{0,1\\}$: \n\\begin{equation}\\label{eq:q3}Q^{(3)}_{\\gamma,i,j}\n=\\sum_{\\substack{t:\\gamma_t=\\gamma\\\\t\\equiv i \\Mod{3}}}\\sum_{q=-1}^1\\alpha_{q}Q^{(2)}_{t,j,q}.\\end{equation}\n\\item The fourth layer contains $3m|\\Gamma|$ units $Q^{(4)}_{\\gamma,i,r}$, where $\\gamma\\in\\Gamma, i\\in\\{0,1,2\\}$ and $r\\in\\{0,\\ldots,m-1\\}$:\n$$Q^{(4)}_{\\gamma,i,r}=\\sigma\\Big(\\frac{m-r}{m}Q^{(3)}_{\\gamma,i,1}-\\frac{r}{m}Q^{(3)}_{\\gamma,i,0}\\Big).$$\n\\item The final layer consists of the single output unit \\begin{equation}\\label{eq:q5}Q^{(5)}=\\frac{1}{T}\\sum_{\\gamma\\in\\Gamma}\\sum_{i=0}^2\\sum_{r=0}^{m-1}c_{\\gamma,r}Q^{(4)}_{\\gamma,i,r}.\\end{equation}\n\\end{enumerate}\n\n\n\n\\paragraph{Step 4: Embedding into the standard deep architecture.}\nWe now show that the $f$-dependent ReLU network constructed in the previous step can be realized within the standard width-5 architecture.\n\nNote first that we may ignore unneeded connections in the standard network (simply by assigning weight 0 to them). Also, we may assume some units to act purely linearly on their inputs, i.e., ignore the nonlinear ReLU activation. 
Indeed, for any bounded set $D\\subset \\mathbb R$, if $h$ is sufficiently large, then $\\sigma(x+h)-h=x$ for all $x\\in D$, hence we can ensure that the ReLU activation function always works in the identity regime in a given unit by adjusting the intercept term in this unit and in the units accepting its output. In particular, we can also implement in this way identity units, i.e. those having a single input and passing the signal further without any changes. \n\n\n\\begin{figure}\n\\begin{center}\n\\begin{subfigure}[b]{1\\textwidth}\n\\begin{center}\n\\includegraphics[clip,trim=22mm 3mm 20mm 0mm]{embedOverview.pdf}\n\\caption{Embedding overview: the subnetworks $\\widetilde f_1$ and $\\widetilde f_2$ are computed in parallel; the subnetwork $\\widetilde f_2$ is further divided into parallel subnetworks $\\widetilde f_{2,\\gamma,i}$, where $\\gamma\\in \\Gamma, i\\in\\{0,1,2\\}$.}\\label{fig:embedOverview}\n\\end{center}\n\\end{subfigure}\n\\begin{subfigure}[b]{1\\textwidth}\n\\begin{center}\n\\includegraphics[clip,trim=15mm 2mm 15mm 0mm]{embedf1-eps-converted-to.pdf}\n\\caption{Embedding of the $\\widetilde f_1$ subnetwork.}\\label{fig:embedf1}\n\\end{center}\n\\end{subfigure}\n\\begin{subfigure}[b]{1\\textwidth}\n\\begin{center}\n\\includegraphics[clip,trim=25mm 7mm 22mm 3mm]{embedf2-eps-converted-to.pdf}\n\\caption{Embedding of a subnetwork $\\widetilde f_{2,\\gamma,i}$.}\\label{fig:ebedf2}\n\\end{center}\n\\end{subfigure}\n\\caption{Embedding of the adaptive network into the standard architecture.}\n\\end{center}\n\\end{figure}\n\nThe embedding strategy is to arrange parallel subnetworks along the ``depth'' of the standard architecture as shown in Fig. \\ref{fig:embedOverview}. Note that $\\widetilde f_1$ and $\\widetilde f_2$ are computed in parallel; moreover, computation of $\\widetilde f_2$ is parallelized over $3|\\Gamma|$ independent subnetworks computing $\\widetilde f_{2,\\gamma,i}$ and indexed by $\\gamma\\in \\Gamma$ and $i\\in\\{0,1,2\\}$ (see \\eqref{eq:f2final2} and Fig. \\ref{fig:cache2}). Each of these independent subnetworks gets embedded into its own batch of width-5 layers. The top row of units in the standard architecture is only used to pass the network input to each of the subnetworks, so all the top row units function in the identity mode. The bottom row is only used to accumulate results of the subnetworks into a single linear combination, so all the bottom units function in the linear mode. \n\nImplementation of the $\\widetilde f_1$ subnetwork is shown in Fig. \\ref{fig:embedf1}. It requires $T+2$ width-5 layers of the standard architecture. The original output unit $P^{(3)}$ of this subnetwork gets implemented by $T$ linear units that have not more than two inputs each and gradually accumulate the required linear combination. \n\nImplementation of a $\\widetilde f_{2,\\gamma,i}$ subnetwork is shown in Fig. \\ref{fig:ebedf2}. Its computation can be divided into two stages. \n\nIn terms of the original adaptive network, in the first stage we perform parallel computations associated with the $Q^{(2)}$ units and combine their results in the two linear units $Q^{(3)}_{\\gamma,i,0}$ and $Q^{(3)}_{\\gamma,i,1}$. 
By \\eqref{eq:q3}, each of $Q^{(3)}_{\\gamma,i,0}$ and $Q^{(3)}_{\\gamma,i,1}$ accepts $3N_{\\gamma,i}$ inputs from the $Q^{(2)}$ units, where $$N_{\\gamma,i}=|\\{t\\in\\{0,\\ldots,T-1\\}|\\gamma_t=\\gamma,t\\equiv i\\Mod{3}\\}|.$$\nIn the standard architecture, the two original linear units $Q^{(3)}_{\\gamma,i,0}$ and $Q^{(3)}_{\\gamma,i,1}$ get implemented by $6N_{\\gamma,i}$ linear units that occupy two reserved lines of the network. This stage spans $6N_{\\gamma,i}+2$ width-5 layers of the standard architecture. \n\nIn the second stage we use the outputs of $Q^{(3)}_{\\gamma,i,0}$ and $Q^{(3)}_{\\gamma,i,1}$ to compute $m$ values $Q^{(4)}_{\\gamma,i,r}, r=0,\\ldots,m-1$, and accumulate the respective part $\\sum_{r=0}^{m-1}c_{\\gamma,r}Q^{(4)}_{\\gamma,i,r}$ of the final output \\eqref{eq:q5}. This stage spans $m+2$ layers of the standard architecture.\n\nThe full implementation of one $\\widetilde f_{2,\\gamma,i}$ subnetwork thus spans $6N_{\\gamma,i}+m+4$ width-5 layers. Since $\\sum_{\\gamma,i}N_{\\gamma,i}=T$ and there are $3|\\Gamma|$ such subnetworks, implementation of the whole $\\widetilde f_2$ subnetwork spans $6T+3(m+4)|\\Gamma|$ width-5 layers. Implementation of the whole $\\widetilde f$ network then spans $7T+3(m+4)|\\Gamma|+2$ layers. \n\nIt remains to optimize $T$ and $m$ so as to achieve the minimum approximation error, subject to the total number of layers in the standard network being bounded by $N$. Recall that $|\\Gamma|\\le 3^m$ and that the approximation error of $\\widetilde f$ is not greater than $\\frac{2}{Tm}$ by \\eqref{eq:f2tm}. Choosing $T=\\lfloor\\frac{N}{8}\\rfloor$, $m=\\lfloor\\frac{1}{2}\\log_3 N\\rfloor$ and assuming $N$ sufficiently large, we satisfy the network size constraint and achieve the desired error bound $\\|f-\\widetilde f\\|_\\infty\\le\\frac{c}{N\\ln N}$, uniformly in $f\\in B$. Since, by construction, $\\widetilde f=\\eta_{5,N}(\\psi(f))$ with some weight selection function $\\psi$, this completes the proof.\n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{\\sffamily \\Large GRAPHICAL TABLE OF CONTENTS} \n\n\\section{Introduction}\n\nMarkov chain Monte Carlo (MCMC) algorithms have been used for nearly 60 years,\nbecoming a reference method for analysing Bayesian complex models in the early\n1990's \\citep{gelfand:smith:1990}. The strength of this method is that it\nguarantees convergence to the quantity (or quantities) of interest with minimal\nrequirements on the targeted distribution (also called {\\em target}) behind such quantities. In that sense, MCMC\nalgorithms are robust or universal, as opposed to the most standard Monte Carlo methods\n\\citep[see, e.g.,][]{rubinstein81, robert:casella:2004} that require direct\nsimulations from the target distribution. This robustness may however induce a\nslow convergence behaviour in that the exploration of the relevant\nspace---meaning the part of the space supporting the distribution that has a\nsignificant probability mass under that distribution---may take a long while, as the\nsimulation usually proceeds by local jumps in the vicinity of the current\nposition. In other words, MCMC--especially in its off-the-shelf versions like\nGibbs sampling and Metropolis--Hastings algorithms---is very often myopic in\nthat it provides a\ngood illumination of a local area, while remaining unaware of the global\nsupport of the distribution. 
As with most other\nsimulation methods, there always exist ways of creating highly convergent MCMC\nalgorithms by taking further advantage of the structure of the target\ndistribution. Here, we mostly limit ourselves to the realistic situation where\nthe target density is only known as the output of a computer code or to a setting\nsimilarly limited in its information content.\n\nThe approaches to the acceleration of MCMC algorithms can be divided in several categories, from those\nwhich improve our knowledge about the target distribution, to those that modify\nthe proposal in the algorithm, including those that exploit better the outcome of the\noriginal MCMC algorithm. The following sections provide more details about\nthese directions and the solutions proposed in the literature.\n\n\\section{What is MCMC and why does it need accelerating?}\\label{sec:why}\n\nMCMC methods have a history \\citep[see, e.g.][]{cappe:robert:2000b} that\nstarts at approximately the same time as the Monte Carlo methods, in\nconjunction with the conception of the first computers. They have been devised\nto handle the simulation of complex target distributions, when complexity stems\nfrom the shape of the target density, the size of the associated data, the\ndimension of the object to be simulated, or from time requirements. For instance,\nthe target density $\\pi(\\theta)$ may happen to be expressed in terms of\nmultiple integrals that cannot be solved analytically,\n$$\n\\pi(\\theta) = \\int \\omega(\\theta,\\xi)\\text{d}\\xi\\,,\n$$ \nwhich requires the simulation of the entire vector $(\\theta,\\xi)$. In cases\nwhen $\\xi$ is of the same dimension as the data, as for instance in latent\nvariable models, this significant increase in the dimension of the object to be\nsimulated creates computational difficulties for standard Monte Carlo methods,\nfrom managing the new target $\\omega(\\theta,\\xi)$, to devising a new and\nefficient simulation algorithm. A Markov chain Monte Carlo (MCMC) algorithm\nallows for an alternative resolution of this computational challenge by\nsimulating a Markov chain that explores the space of interest (and possibly\nsupplementary spaces of auxiliary variables) without requiring a deep\npreliminary knowledge on the density $\\pi$, besides the ability to compute\n$\\pi(\\theta_0)$ for a given parameter value $\\theta_0$ (if up to a normalising\nconstant) and possibly the gradient $\\nabla \\log \\pi(\\theta_0)$. The validation\nof the method \\citep[e.g.,][]{robert:casella:2004} is that the Markov chain is\n{\\em ergodic} \\citep[e.g.,][]{meyn:tweedie:1993}, namely that it converges in\ndistribution to the distribution with density $\\pi$, no matter where the Markov\nchain is started at time $t=0$. \n\nThe Metropolis--Hastings algorithm is a generic illustration of this principle. 
The\nbasic algorithm is constructed by choosing a {\\em proposal}, that is, a conditional\ndensity $K(\\theta'|\\theta)$ (also known as a {\\em Markov kernel}), the Markov\nchain $\\{\\theta_t \\}_{t=1}^{\\infty}$ being then derived by successive simulations of the\ntransition\n$$\\theta_{t+1} = \\begin{cases}\n\\theta^\\prime \\sim K(\\theta^\\prime |\\theta_t) &\\text{with probability }\n\\left\\{\\dfrac{\\pi(\\theta^\\prime)}{\\pi(\\theta_t)}\\times\\dfrac{K(\\theta_t|\\theta^\\prime)}\n{K(\\theta^\\prime|\\theta_t)}\\right\\}\\wedge 1,\\\\\n\\theta_t &\\text{otherwise}.\\\\\n\\end{cases}$$\nThis acceptance-rejection feature of the algorithm makes it appropriate for\ntargeting $\\pi$ as its stationary distribution if the resulting Markov chain\n$\\{\\theta_t\\}_{t=1}^{\\infty}$ is irreducible, i.e., has a positive probability of visiting any\nregion of the support of $\\pi$ in a finite number of iterations. (Stationarity\ncan easily be shown, e.g., by using the so-called {\\em detailed balance\nproperty} that makes the chain time-reversible, \\citealp[see, e.g.,][]{robert:casella:2004}.) \n\nConsidering the initial goal of simulating samples from the target distribution $\\pi$,\nthe performance of MCMC methods like the Metropolis--Hastings algorithm above\noften varies quite a lot, depending primarily on the correspondence between the proposal $K$ and the\ntarget $\\pi$. For instance, if $K(\\theta|\\theta_t)=\\pi(\\theta)$, the\nMetropolis--Hastings algorithm reduces to i.i.d.~sampling from the target,\nwhich is of course a formal option when i.i.d.~sampling from $\\pi$ proves impossible to implement.\nAlthough there exist rare instances when the Markov chain $\\{\\theta_t\\}_{t=1}^\\infty$ leads\nto negative correlations between the successive terms of the chain, making it\n{\\em more efficient} than regular i.i.d.~sampling \\citep{liu:won:kon95}, the\nmost common occurrence is one of positive correlation between the simulated\nvalues (sometimes uniformly, see \\citealp{liu:won:kon94}). This feature implies a\nreduced efficiency of the algorithm and hence requires a larger number of\nsimulations to achieve the same precision as an approximation based\non i.i.d.~simulations (without accounting for differences in computing time). More\ngenerally, an MCMC algorithm may require a large number of iterations to escape\nthe attraction of its starting point $\\theta_0$ and to reach stationarity, to the extent that\nsome versions of such algorithms fail to converge in the time available (i.e.,\nin practice if not in theory). \n\nIt thus makes sense to seek ways of accelerating (a) the convergence of a given\nMCMC algorithm to its stationary distribution, (b) the convergence of a given\nMCMC estimate to its expectation, and\/or (c) the exploration by a given MCMC\nalgorithm of the support of the target distribution. Those goals are related\nbut still distinct. For instance, a chain initialised by simulating from the target\ndistribution may still fail to explore the whole support in an acceptable\nnumber of iterations. While there is no optimal and universal solution to\nthis issue, we will discuss below approaches that are as generic as possible,\nas opposed to artificial ones taking advantage of the mathematical structure of\na specific target distribution. 
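To fix ideas, the transition above can be prototyped in a few lines of code. The following sketch (ours, in Python, not tied to any specific software) implements a random-walk Metropolis--Hastings kernel in which the only requirement on the target is the pointwise evaluation of an unnormalised log-density \\texttt{log\\_pi}, in line with the black-box setting discussed next.\n\\begin{verbatim}\nimport numpy as np\n\ndef rw_metropolis(log_pi, theta0, n_iter=10_000, scale=1.0, seed=0):\n    # log_pi: user-supplied unnormalised log-target (black-box evaluation)\n    rng = np.random.default_rng(seed)\n    theta = np.atleast_1d(np.asarray(theta0, dtype=float))\n    chain = np.empty((n_iter, theta.size))\n    log_p = log_pi(theta)\n    for t in range(n_iter):\n        prop = theta + scale * rng.standard_normal(theta.size)\n        log_p_prop = log_pi(prop)\n        # symmetric proposal: the ratio of kernels K cancels out\n        if np.log(rng.uniform()) < log_p_prop - log_p:\n            theta, log_p = prop, log_p_prop\n        chain[t] = theta\n    return chain\n\n# toy usage with a standard Gaussian target\nchain = rw_metropolis(lambda th: -0.5 * np.sum(th ** 2), theta0=[3.0], scale=0.8)\n\\end{verbatim}\n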
Ideally, we aim at covering realistic\nsituations when the target density is only known [up to a constant or an\nadditional completion step] as the output of an existing computer code.\nPragmatically, we also cover here solutions that require more efforts and\ncalibration steps when they apply to a wide enough class of problems.\n\n\\section{Accelerating MCMC by exploiting the geometry of the target}\\label{sec:geo}\n\nWhile there is no end in trying to construct more efficient and faster MCMC\nalgorithms, and while this (endless) goal needs to account for the cost of devising such\nalternatives under limited resources budgets, there exist several generic\nsolutions such that a given target can first be explored in terms of the\ngeometry (or topology) of the density before constructing the algorithm. Although\nthis type of methods somehow takes us away from our original purpose which was\nto improve upon an existing algorithm, they still make sense within this survey in\nthat they allow for almost automated implementations. \n\n\\subsection{Hamiltonian Monte Carlo}\n\nFrom the point of view of this review, Hamiltonian (or hybrid) Monte Carlo\n(HMC) is an auxiliary variable technique that takes advantage of a continuous\ntime Markov process to sample from the target $\\pi$. This approach comes from\nphysics \\citep{duane:etal:1987} and was popularised in statistics by\n\\cite{neal:1996,neal:2011} and \\cite{mackay:2002}. Given a target\n$\\pi(\\theta)$, where $\\theta\\in\\mathbb{R}^d$, an artificial auxiliary variable\n$\\vartheta\\in\\mathbb{R}^d$ is introduced along with a density $\\varpi(\\vartheta|\\theta)$\nso that the joint distribution of $(\\theta,\\vartheta)$ enjoys $\\pi(\\theta)$ as\nits marginal. While there is complete freedom in this representation, the HMC\nliterature often calls $\\vartheta$ the {\\em momentum} of a particle located at $\\theta$ by analogy\nwith physics. Based on the representation of the joint distribution\n$$\n\\omega(\\theta,\\vartheta)\n=\\pi(\\theta)\\varpi(\\vartheta|\\theta) \\propto \\exp\\{ -H(\\theta,\\vartheta) \\}\\,,\n$$\nwhere $H(\\cdot)$ is called the {\\em Hamiltonian}, Hamiltonian Monte Carlo (HMC)\nis associated with the continuous time process\n$(\\theta_t,\\vartheta_t)$ generated by the so-called {\\em Hamiltonian equations}\n$$\n\\dfrac{\\text{d}\\theta_t}{\\text{d}t}=\\dfrac{\\partial H}{\\partial \\vartheta}(\\theta_t,\\vartheta_t)\\qquad\n\\dfrac{\\text{d}\\vartheta_t}{\\text{d}t}=-\\dfrac{\\partial H}{\\partial \\theta}(\\theta_t,\\vartheta_t)\\,,\n$$\nwhich keep the Hamiltonian target stable over time, as\n$$\n\\dfrac{\\text{d}H(\\theta_t,\\vartheta_t)}{\\text{d}t}=\\dfrac{\\partial H}{\\partial \\vartheta}(\\theta_t,\\vartheta_t)\\,\\dfrac{\\text{d}\\vartheta_t}{\\text{d}t}+\\dfrac{\\partial H}{\\partial \\theta}(\\theta_t,\\vartheta_t)\\,\\dfrac{\\text{d}\\theta_t}{\\text{d}t}=0\\,.\n$$\nObviously, the above continuous time Markov process is deterministic and only\nexplores a given level set, $$\\{(\\theta, \\vartheta) : H(\\theta, \\vartheta) =\nH(\\theta_0, \\vartheta_0)\\}\\,,$$ instead of the whole augmented state space\n$\\mathbb{R}^{2d}$, which induces an issue with irreducibility. 
An acceptable solution \nto this problem is to refresh the momentum, $\\vartheta_{t} \\sim \\varpi(\\vartheta|\\theta_{t-})$, at\nrandom times $\\{\\tau_n\\}_{n=1}^{\\infty}$, where $\\theta_{t-}$ denotes the \nlocation of $\\theta$ immediately prior to time $t$, and the random \ndurations $\\{\\tau_n - \\tau_{n-1}\\}_{n=2}^{\\infty}$ follow an exponential\ndistribution. By construction, the continuous-time Hamiltonian Markov chain can be\nregarded as a specific piecewise deterministic Markov process using\nHamiltonian dynamics \\citep{davis1984piecewise, davis1993markov,\nbou2017randomized} and our target, $\\pi$, is the marginal of its associated\ninvariant distribution.\n\nBefore moving to the practical\nimplementation of the concept, let us point out that the free cog in the\nmachinery is the conditional density $\\varpi(\\vartheta|\\theta)$, which is\nusually chosen as a Gaussian density with either a constant covariance matrix $M$\ncorresponding to the target covariance or as a local curvature depending on\n$\\theta$ in Riemannian Hamiltonian Monte Carlo \\citep{girolami:2011}.\n\\cite{betancourt:2017} argues in favour of these two cases against non-Gaussian\nalternatives and \\cite{livingstone2017kinetic} analyse how different choices\nof kinetic energy in Hamiltonian Monte Carlo affect algorithm performance. For\na fixed covariance matrix, the Hamiltonian equations become\n$$\n\\dfrac{\\text{d}\\theta_t}{\\text{d}t}=\nM^{-1}\\vartheta_t\\qquad \\dfrac{\\text{d}\\vartheta_t}{\\text{d}t}=\\nabla \\mathcal{L}(\\theta_t)\\,,\n$$\nwhere $\\nabla \\mathcal{L}$ is the score function. The velocity (or momentum) of the process is thus\ndriven by this score function, the gradient of the log-target.\n\nThe above description remains quite conceptual in that there is no generic\nmethodology for producing this continuous time process, since the Hamiltonian\nequations cannot be solved exactly in most cases. Furthermore, standard\nnumerical solvers like Euler's method create an unstable approximation that\ninduces a bias as the process drifts away from its true trajectory. There\nexists however a discretisation simulation technique that produces a Markov\nchain and is well-suited to the Hamiltonian equations in that it preserves the\nstationary distribution \\citep{betancourt:2017}. It is called the {\\em\nsymplectic integrator}, and one version in the independent case with constant\ncovariance consists in the following (so-called {\\em leapfrog}) steps\n\\begin{align*}\n\\vartheta_{t+\\epsilon\/2} &= \\vartheta_t+\\epsilon \\nabla \\mathcal{L}(\\theta_t)\/2,\\\\\n\\theta_{t+\\epsilon} &= \\theta_t+\\epsilon M^{-1} \\vartheta_{t+\\epsilon\/2},\\\\\n\\vartheta_{t+\\epsilon} &= \\vartheta_{t+\\epsilon\/2}+\\epsilon \\nabla \\mathcal{L}(\\theta_{t+\\epsilon})\/2,\n\\end{align*}\nwhere $\\epsilon$ is the time-discretisation step. Using a proposal on\n$\\vartheta_0$ drawn from the Gaussian auxiliary target and deciding on the\nacceptance of the value of $(\\theta_{T\\epsilon},\\vartheta_{T\\epsilon})$ by a\nMetropolis--Hastings step can limit the danger of missing the\ntarget. Note that the first two leapfrog steps induce a Langevin move on\n$\\theta_t$:\n$$\n\\theta_{t+\\epsilon} = \\theta_t+\\epsilon^2 M^{-1} \\nabla \\mathcal{L}(\\theta_t)\/2\n+\\epsilon M^{-1} \\vartheta_{t}\\,,\n$$\nthus connecting with the MALA algorithm discussed below (see \\citealp{durmus:moulines:2017} \nfor a theoretical discussion of the optimal choice of $\\epsilon$). 
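For illustration, one HMC transition based on this leapfrog scheme can be sketched as follows (a minimal Python sketch of ours, assuming a unit mass matrix $M=I$, a user-supplied log-target \\texttt{log\\_pi} together with its gradient \\texttt{grad\\_log\\_pi}, and $T$ leapfrog steps of size $\\epsilon$); it only spells out the steps above and is not meant as a tuned implementation.\n\\begin{verbatim}\nimport numpy as np\n\ndef hmc_step(theta, log_pi, grad_log_pi, eps=0.1, n_leap=20, rng=None):\n    rng = np.random.default_rng() if rng is None else rng\n    mom = rng.standard_normal(theta.size)   # momentum refreshment, M = I\n    th, p = theta.copy(), mom.copy()\n    for _ in range(n_leap):                 # leapfrog integration\n        p = p + 0.5 * eps * grad_log_pi(th)\n        th = th + eps * p\n        p = p + 0.5 * eps * grad_log_pi(th)\n    # accept-reject correction for the discretisation error\n    h_start = -log_pi(theta) + 0.5 * mom @ mom\n    h_end = -log_pi(th) + 0.5 * p @ p\n    if np.log(rng.uniform()) < h_start - h_end:\n        return th\n    return theta\n\\end{verbatim}\n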
Note that the leapfrog integrator\nis quite an appealing middle ground between accuracy (as it is second-order\naccurate) and computational efficiency.\n\nIn practice, it is important to note that discretising the Hamiltonian dynamics\nintroduces two free parameters, the step size $\\epsilon$ and the trajectory\nlength $T\\epsilon$, both to be calibrated. As an empirically successful and popular\nvariant of HMC, the ``no-U-turn sampler'' (NUTS) of \\cite{hoffman2014no} adapts\nthe value of $\\epsilon$ based on primal-dual averaging. It also eliminates the\nneed to choose the trajectory length $T$ via a recursive algorithm that builds\na set of candidate proposals for a number of forward and backward leapfrog\nsteps and stops automatically when the simulated path turns back on itself. \n\nA further acceleration step in this area is proposed by \\cite{rasmussen:2003}\n\\citep[see also][]{fielding:nott:liong:2011}, namely the replacement of the\nexact target density $\\pi(\\cdot)$ by an approximation $\\hat\\pi(\\cdot)$ that is\nmuch faster to compute in the many iterations of the HMC algorithm. A generic\nway of constructing this approximation is to rely on Gaussian processes, when\ninterpreted as prior distributions on the target density $\\pi(\\cdot)$, which is\nonly observed at some values of $\\theta$, $\\pi(\\theta_1),\\ldots,\\pi(\\theta_n)$\n\\citep{rasmussen:wiliams:2006}. This solution speeds up the algorithm,\npossibly by orders of magnitude, but it introduces a further approximation into\nthe Monte Carlo approach, even when the true target is used at the end of the\nleapfrog discretisation, as in \\cite{fielding:nott:liong:2011}.\n\nStan (named after Stanislaw Ulam, see \\citealp{carpenter:etal:2017}) is a computer language for \nBayesian inference that, among other approximate techniques, implements the NUTS\nalgorithm to remove hand-tuning. More precisely, Stan is a probabilistic\nprogramming language in that the input is at the level of a statistical model,\nalong with data, rather than the specifics of an MCMC algorithm. The algorithmic\npart is somewhat automated, meaning that when models can be conveniently\ndefined through this language, it offers an alternative to the sampler that\nproduced the original chain. As an illustration of the acceleration brought by\nHMC, Figure~\\ref{fig:nuts}, reproduced from \\cite{hoffman2014no}, shows the\nperformance of NUTS, compared with both random-walk MH and Gibbs samplers.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=1\\textwidth]{nuts.jpg}\n\\caption{\nComparison of the samples produced by random-walk Metropolis-Hastings, Gibbs sampling, and the NUTS \nalgorithm for a highly correlated 250-dimensional\nmultivariate Gaussian target. Similar computation budgets are used for all methods to produce the\n1,000 samples on display ({\\em Source:} \\cite{hoffman2014no}, with permission).\n}\n\\label{fig:nuts}\n\\end{figure}\n\n\\section{Accelerating MCMC by breaking the problem into pieces}\\label{sec:scal}\n\nThe explosion in the collection and analysis of ``big'' datasets in recent years has\nbrought new challenges to the MCMC algorithms that are used for Bayesian inference. When examining\nwhether or not a new proposed sample is accepted at the accept-reject step, an MCMC algorithm such as the\nMetropolis-Hastings version needs to sweep over the whole data set, at each\nand every iteration, for the evaluation of the likelihood function. 
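Spelling this out (our own illustration), write $\\pi_0$ for the prior density and $p(x_i|\\theta)$ for the likelihood contribution of observation $x_i$, notations used again below; for $n$ conditionally independent observations the Metropolis--Hastings acceptance ratio then involves\n$$\n\\dfrac{\\pi(\\theta^\\prime)}{\\pi(\\theta_t)}=\\dfrac{\\pi_0(\\theta^\\prime)}{\\pi_0(\\theta_t)}\\,\\prod_{i=1}^{n}\\dfrac{p(x_i|\\theta^\\prime)}{p(x_i|\\theta_t)}\\,,\n$$\nthat is, of the order of $n$ likelihood evaluations at every single iteration.\n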
MCMC algorithms are then difficult to scale up, which strongly hinders their\napplication in big data settings. In some cases, the datasets may be too large\nto fit on a single machine. It may also be that confidentiality measures impose\ndifferent databases to stand on separate networks, with the possible added\nburden of encrypted data \\citep{aslet:esperanza:holmes:2015}. Communication\nbetween the separate machines may prove impossible on an MCMC scale that\ninvolves thousands or hundreds of thousands iterations.\n\n\\subsection{Scalable MCMC methods}\n\nIn the recent years, efforts have been made to design {\\em scalable} algorithms, namely, solutions that manage to handle large scale targets by breaking the problem into manageable or scalable pieces.\nRoughly speaking, these methods can be classified into two categories\n\\citep{bardenet2015markov}: divide-and-conquer approaches and sub-sampling approaches.\n\nDivide-and-conquer approaches partition the whole data set, denoted\n$\\mathcal{X}$, into batches, $\\{\\mathcal{X}_1, \\cdots, \\mathcal{X}_k\\}$, and run\nseparate MCMC algorithms on each data batch, independently, as if they were independent\nBayesian inference problems.\\footnote{In order to keep the notations\nconsistent, we still denote the target density by $\\pi$, with the prior density\ndenoted as $\\pi_0$ and the sampling distribution of one observation $x$ as\n$p(x|\\theta)$. The dependence on the sample $\\mathcal{X}$ is not reported\nunless necessary.} These methods then combine the simulated parameter outcomes together\nto approximate the original posterior distribution. Depending on the\ntreatments of the batches selected in the MCMC stages, these approaches can be\nfurther subdivided into two finer groups: sub-posterior methods and boosted\nsub-posterior methods. Sub-posterior methods are motivated by the\nindependent product equation:\n\\begin{equation}\n\\pi(\\theta) \\propto\n\\prod_{i=1}^k\\left(\\pi_0(\\theta)^{1\/k}\\prod_{\\ell\\in\\mathcal{X}_i}p(x_{\\ell}|\\theta)\\right)\n=\\prod_{i=1}^k\\pi_i(\\theta)\\,,\n\\end{equation}\nand they target the densities $\\pi_i(\\theta)$ (up to a constant) in their\nrespective MCMC steps. They thus bypass communication costs \\citep{scott2016bayes}, by running MCMC\nsamplers independently on each batch, and they most often increase MCMC mixing\nrates (in effective samples sizes produced by second), given that the\nsub-posterior distributions $\\pi_i(\\theta)$ are based on smaller datasets. For\ninstance, \\cite{scott2016bayes} combine the samples from the sub-posteriors,\n$\\pi_i(\\theta)$, by a Gaussian reweighting.\n\\cite{neiswanger2013asymptotically} estimate the sub-posteriors $\\pi_i(\\theta)$\nby non-parametric and semi-parametric methods, and they run additional MCMC\nsamplers on the product of these estimators towards approximating the true\nposterior $\\pi(\\theta)$. \\cite{wang2013parallelizing} refine this product\nestimator with an additional Weierstrass sampler, while\n\\cite{wang2015parallelizing} estimate the posterior by partitioning the space\nof samples with step functions. \\cite{vehtari:etal:2014} devised an expectation propagation scheme\nto improve the postprocessing of the parallel samplers. \n\nAs an alternative to sampling from the sub-posteriors, boosted sub-posterior\nmethods target instead the components\n\\begin{equation}\n\\tilde{\\pi}_i(\\theta) \\propto \\pi_0(\\theta)\\left(\\prod_{\\ell\\in\\mathcal{X}_i}p(x_{\\ell}|\\theta)\\right)^k\n\\end{equation}\nin separate MCMC runs. 
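Before turning to these boosted versions, we illustrate the combination stage of the plain sub-posterior approach: the Gaussian-motivated consensus rule of \\cite{scott2016bayes} averages the $k$ sub-posterior draws with inverse-covariance weights. The following sketch (ours, in Python, assuming the $k$ chains are stored as arrays of identical length) is one possible rendering of this recombination.\n\\begin{verbatim}\nimport numpy as np\n\ndef consensus_combine(subchains):\n    # subchains: list of k arrays of shape (n_draws, d), one per data batch,\n    # assumed to come from independent sub-posterior samplers\n    weights = [np.linalg.inv(np.atleast_2d(np.cov(c, rowvar=False)))\n               for c in subchains]\n    total = np.linalg.inv(sum(weights))\n    return np.array([total @ sum(w @ draw for w, draw in zip(weights, draws))\n                     for draws in zip(*subchains)])\n\\end{verbatim}\n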
Since these boosted components formally amount to repeating each batch $k$\ntimes so as to produce pseudo data sets with the same size as the true\none, the resulting boosted sub-posteriors, $\\tilde{\\pi}_1(\\theta), \\cdots,\n\\tilde{\\pi}_k(\\theta)$, have the same variance scale, for each component of\nthe parameter $\\theta$, as the true posterior, and can thus be treated as a\ngroup of estimators of the true posterior. In the subsequent combining stage,\nthese sub-posteriors are merged together to construct a better approximation of the\ntarget distribution. For instance, \\cite{minsker2014scalable} approximate the\nposterior with the geometric median of the boosted sub-posteriors, embedding\nthem into associated reproducing kernel Hilbert spaces (RKHS), while\n\\cite{srivastava2015wasp} achieve this goal using the barycentres of\n$\\tilde{\\pi}_1,\\cdots, \\tilde{\\pi}_k$, these barycentres being computed with\nrespect to a Wasserstein distance.\n\nIn a perspective different from the above parallel scheme of divide-and-conquer approaches,\nsub-sampling approaches aim at reducing the number of individual datapoint\nlikelihood evaluations operated at each iteration towards accelerating MCMC algorithms.\nFrom a general perspective, these approaches can be further classified into two finer classes:\nexact subsampling methods and approximate subsampling methods, depending on their\nresulting outputs. Exact subsampling approaches typically require subsets of\ndata of random size at each iteration. One solution to this effect is taking advantage of\npseudo-marginal MCMC via constructing unbiased estimators of \nthe target density evaluated on subsets of the data\n\\citep{andrieu2009pseudo}. \\cite{quiroz2016exact} follow this direction by\ncombining the powerful debiasing technique of \\cite{rhee2015unbiased} and the correlated\npseudo-marginal MCMC approach of \\cite{deligiannidis2015correlated}. Another direction is to\nuse piecewise deterministic Markov processes (PDMP)\n\\citep{davis1984piecewise,davis1993markov}, which enjoy the target\ndistribution as the marginal of their invariant distribution. This PDMP version requires\nunbiased estimators of the gradients of the log-likelihood function,\ninstead of the likelihood itself. By using a tight enough bound on the event\nrate function of the associated Poisson processes, PDMP can produce\nsuper-efficient scalable MCMC algorithms. The bouncy particle sampler\n\\citep{bouchard2017bouncy} and the zig-zag sampler \\citep{bierkens2016zig} are\ntwo competing PDMP algorithms, while \\cite{bierkens2017piecewise} unify and\nextend these two methods. Besides, one should note that PDMP produces a\nnon-reversible Markov chain, which means that the algorithm should be more\nefficient in terms of mixing rate and asymptotic variance, when compared with\nreversible MCMC algorithms, such as MH, HMC and MALA, as observed in some\ntheoretical and experimental works \\citep{hwang1993accelerating,\nsun2010improving,chen2013accelerating,bierkens2016non}. 
The seminal work\nof \\cite{welling2011bayesian}, SGLD, is to exploit the Langevin diffusion \n\\begin{equation}\n\\text{d}\\mathbf{\\theta}_t = \\frac{1}{2}\\pmb{\\Lambda}\\nabla\\log\\pi(\\mathbf{\\theta}_t)\\text{d}t + \\pmb{\\Lambda}^{1\/2}\\text{d}\\mathbf{B}_t,\\quad \\mathbf{\\theta}_0 \\in\\mathbb{R}^d, t\\in[0, \\infty)\n\\end{equation} \nwhere $\\pmb{\\Lambda}$ is a user-specified matrix, $\\pi$ is the target\ndistribution and $\\mathbf{B}_t$ is a d-dimensional Brownian process. By virtue of the\nEuler-Maruyama discretisation and using unbiased estimators of the gradient of\nthe log-target density, SGLD and its variants\n\\citep{ding2014bayesian,chen2014stochastic} often produce fast and accurate results\nin practice when compared with MCMC algorithms using MH steps.} \n\nFigure~\\ref{fig:cmc} shows the time requirements of a consensus Monte Carlo\nalgorithm \\citep{scott2016bayes} compared with a Metropolis--Hastings algorithm using the whole\ndataset, while Figure~\\ref{fig:subsampling} displays the saving in\nlikelihood evaluations in confidence sampler of \\cite{bardenet2015markov}.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=1\\textwidth]{cmc.jpg}\n\\caption{Elapsed time when drawing 10,000 MCMC samples with different amounts of data under\n the single machine and consensus Monte Carlo algorithms for a hierarchical Poisson regression.\n The horizontal axis represents the amounts of data. The single machine algorithm stops after\n 30 because of the explosion in computation budget.\n ({\\em Source:} \\cite{scott2016bayes}, with permission.)}\n\\label{fig:cmc}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=1\\textwidth]{subsampling.jpg}\n\\caption{Percentage of numbers of data points used in each iteration of the confidence sampler with a\n single 2nd order Taylor approximation at $\\theta_{\\text{MAP}}$. The plots describe 10,000 iterations\n of the confidence sampler for the posterior distribution of the mean and variance of a uni-dimensional\n Normal distribution with a flat prior: {\\em (left)} 10,000 observations are generated from \n $\\mathcal{N}(0,1)$, {\\em (right)} 10,000\n observations are generated from $\\mathcal{LN}(0,1)$ ({\\em Source:}\n \\cite{bardenet2015markov}, with permission).}\n\\label{fig:subsampling}\n\\end{figure}\n\n\\subsection{Parallelisation and distributed schemes}\n\nModern computational architectures are built with several computing units that\nallow for parallel processing, either fully independent or with certain\ncommunication. Although the Markovian nature of MCMC is inherently sequential\nand somewhat alien to the notion of parallelising, several partial solutions\nhave been proposed in the literature for exploiting these parallel\narchitectures. The simplest approach consists in running several MCMC chains\nin parallel, blind to all others, until the allotted computing time is\nexhausted. Finally, the resulting estimators of all chains are averaged.\nHowever, this naive implementation may suffer from the fact that some of those\nchains have not reached their stationary regime by the end of the computation\ntime, which then induces a bias in the resulting estimate. Ensuring that\nstationarity has been achieved is a difficult (if at all possible) task,\nalthough several approaches can be found in the literature\n\\citep{mykland1995regeneration,guihenneuc1998discretization,jacob:oleary:atchade:2017}. 
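Setting the bias issue aside, the embarrassingly parallel scheme itself is immediate to code; a minimal sketch (ours, in Python, where each call to \\texttt{run\\_chain} would in practice be dispatched to a separate core or machine) reads as follows.\n\\begin{verbatim}\nimport numpy as np\n\ndef parallel_average(run_chain, h, n_chains=8, seed=0):\n    # run_chain(rng) is assumed to return one array of draws from a given\n    # MCMC sampler; the estimators of the independent runs are then averaged\n    rngs = [np.random.default_rng(seed + i) for i in range(n_chains)]\n    return np.mean([np.mean(h(run_chain(rng))) for rng in rngs])\n\\end{verbatim}\n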
At the opposite\nextreme, complex targets may be represented as products that involve many terms\nthat must be evaluated, each of which can be attributed to a different thread\nbefore being multiplied all together. This strategy requires communication\namong processors at each MCMC step. A middle-ground version\n\\citep{jacob2011using} consists in running several Markov chains in parallel\nwith periodic choices of the reference chain, all simulations being recycled\nthrough a Rao-Blackwell scheme. (See also \\citealp{calderhead2014general} for a similar scheme.) \nThe family of interacting \\emph{orthogonal}\nMCMC methods (O-MCMC) is proposed in \\cite{martino2016orthogonal} with the aim\nof fostering better exploration of the state space, especially in\nhigh-dimensional and multimodal targets. Multiple MCMC chains are run in\nparallel exploring the space with random-walk proposals. The parallel chains\nperiodically share information, also through joint MCMC steps, thus allowing an\nefficient combination of global (coordinated) exploration and local\napproximation. O-MCMC methods also allow for a parallel implementation of the\nMultiple Try Metropolis (MTM). In \\cite{calderhead2014general}, a\ngeneralisation of the Metropolis-Hastings algorithm allows for a\nstraightforward parallelisation. Each proposed point can be evaluated in a\ndifferent processor at every MCMC iteration. Finally, note that the \nsection on scalable MCMC also contains parallelisable approaches, such as the\nprefetching method of \\cite{angelino:etal:2014} (see also\n\\citealp{banterle2015accelerating} for a related approach, primarily based on\nan approximation of the target). A most recent endeavour called asynchronous\nMCMC \\citep{terenin:simpson:draper:2015} aims at higher gains in\nparallelisation by reducing the amount of exchange between the parallel\nthreads, but the notion has yet to gain traction at this stage.\n\n\\section{Accelerating MCMC by improving the proposal}\\label{sec:newprop}\n\nIn the same spirit as the previous section, this section is stretching the\npurpose of this paper by considering possible modifications of the MCMC\nalgorithm itself, rather than merely exploiting the output of a given MCMC\nalgorithm. For instance, devising an HMC algorithm is an answer in this\ndirection even though the ``improvement'' is not guaranteed. Nonetheless, our\nargument here is that, once provided with this output, it is possible to derive\nnew proposals in a semi-autonomous manner.\n\n\\subsection{Simulated tempering}\n\nThe target distribution, $\\pi(\\theta)$ on $d$-dimensional state space $\\Theta$,\ncan exhibit multi-modality with the probability mass being located in different\nregions in the state space. The majority of MCMC algorithms use a localised\nproposal mechanism which is tuned towards local approximate optimality; see,\ne.g.,\\ \\cite{roberts1997weak} and \\cite{roberts2001optimal}. By construction,\nthese localised proposals result in the Markov chain becoming ``trapped'' in a\nsubset of the state space, meaning that in finite run-time the chain can\nentirely fail to explore other modes in the state space, leading to biased\nsamples. Strategies to accelerate MCMC often use local gradient information and\nthis draws the chain back towards the centre of the mode, which is the opposite\nof what is required in a multi-modal setting.\n\nThere is an array of methodology available to overcome issues of multi-modality\nin MCMC, the majority of which use state space augmentation. 
Auxiliary\ndistributions that allow a Markov chain to explore the entirety of the state\nspace are targeted and their mixing information is then passed on to aid mixing\nin the true target. While the sub-posteriors of the previous section can be\nseen as special cases of the following, the most successful and convenient implementation of\nthese methods is to use {\\em power-tempered target distributions}. The target\ndistribution at inverse temperature level, $\\beta$, for $\\beta \\in (0,1]$ is defined as\n\\[\\pi_\\beta(\\theta)=\\mathfrak{K}(\\beta)\\left[\\pi(\\theta)\\right]^\\beta~~~\\mbox{where}~~~\\mathfrak{K}(\\beta)=\\left[\\int\n\\left[\\pi(\\theta)\\right]^\\beta d\\theta\\right]^{-1}.\\]\nTherefore, $\\pi_1(\\theta)=\\pi(\\theta)$.\nTemperatures $\\beta<1$ flatten out the target distribution allowing the chain to\nexplore the entire state space provided the $\\beta$ value is sufficiently\nsmall. The simulated tempering (ST) and parallel tempering (PT) algorithms\n\\citep{geyer1991markov, marinari1992simulated} typically use the power-tempered\ntargets to overcome the issue of multi-modality. The ST approach runs a single\nMarkov chain on the augmented state space $\\{ B, \\Theta\\}$, where $B=\n\\{\\beta_0,\\beta_1,\\ldots,\\beta_n \\}$ is a discrete collection of $n$ inverse\ntemperature levels with $1=\\beta_0>\\beta_1>\\ldots>\\beta_n>0$. The algorithm\nuses a Metropolis-within-Gibbs strategy by cycling between updates in the\n$\\Theta$ and $B$ components of the space. For instance, a proposed temperature swap\nmove $\\beta_i \\rightarrow \\beta_j$ is accepted with probability \n$$ \n\\min \\left\\{ 1, \\frac{\\pi_{\\beta_j}(\\theta)}{\\pi_{\\beta_i}(\\theta)} \\right\\} \n$$\nin order to preserve detailed balance. Note that this acceptance ratio depends on\nthe normalisation constants $\\mathfrak{K}(\\beta)$ which are typically unknown, although\nthey can sometimes be estimated, as in, e.g.,\\ \\cite{Wang2001a} and \\cite{Atchade2004}. In\ncase estimation of the marginal normalisation constants is\nimpractical then the PT algorithm is employed. This approach simultaneously\nruns a Markov chain at each of the $n+1$ temperature levels targeting the\njoint distribution given by $\\prod_{i=0}^{n} [\\pi(\\theta_i)]^{\\beta_i}$. Swap\nmoves between chains at adjacent temperature levels are accepted according to a\nratio that no longer depends on the marginal normalisation constants. Indeed, this \npower tempering approach has been successfully employed in a number of settings and is \nwidely used e.g.\\ \\cite{Neal1996}, \\cite{earl2005parallel}, \\cite{Xie2010}, \\cite{Mohamed2012} \nand \\cite{Carter2013}.\n\nIn both approaches, there is a ``Goldilocks'' principle to setting up the\ninverse temperature schedule. Spacings between temperature levels that are \n``too large\" result in swap moves that are rarely accepted, hence delaying the transfer of hot\nstate mixing information to the cold states. On the other\nhand, spacings that are too small require a large number of intermediate\ntemperature levels, again resulting in slow mixing through the\ntemperature space. This problem becomes even more difficult as the\ndimensionality of $\\Theta$ increases.\n\nMuch of the historical literature suggested that a geometric spacing was\noptimal i.e., there exists $c\\in (0,1)$ such that $\\beta_{i+1}=c\\beta_i$ for\n$i=0,1,\\ldots,n$. 
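As an illustration of the swap mechanism (ours, not taken from the above references), a parallel tempering swap between the chains at adjacent inverse temperatures $\\beta_i$ and $\\beta_{i+1}$ can be coded as follows, with \\texttt{log\\_pi} the unnormalised log-target and \\texttt{states} the list holding the current value of each chain; as stated above, no normalising constant $\\mathfrak{K}(\\beta)$ is involved.\n\\begin{verbatim}\nimport numpy as np\n\ndef pt_swap(states, betas, log_pi, i, rng=None):\n    # propose exchanging the states of the chains at betas[i] and betas[i+1]\n    rng = np.random.default_rng() if rng is None else rng\n    log_alpha = (betas[i] - betas[i + 1]) * (log_pi(states[i + 1]) - log_pi(states[i]))\n    if np.log(rng.uniform()) < log_alpha:\n        states[i], states[i + 1] = states[i + 1], states[i]\n    return states\n\\end{verbatim}\n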
However, in the case of the simulated tempering version (ST), \n\\cite{atchade2011towards} considered the problem as\nan optimal scaling problem by maximising the (asymptotic in dimension) expected\nsquared jumping distance in the $B$ space for temperature swap moves. Under\nrestrictive assumptions, he showed that the spacings\nbetween consecutive inverse temperature levels should scale with dimension as\n$O\\left(d^{-1\/2}\\right)$ to prevent degeneracy of the swap move acceptance\nrate. For a practitioner the result gave guidance on optimal setup since it\nsuggested a corresponding optimal swap move acceptance rate of 0.234 between\nconsecutive inverse temperature levels, in accordance with\n\\cite{gelman:gilks:roberts:1996}. Finally, contrary to the historically\nrecommended geometric schedule, the authors suggested that temperature schedule setup\nshould be constructed consecutively so as to induce an approximate 0.234 swap\nacceptance rate between consecutive levels; which is achieved adaptively in\n\\cite{miasojedow2013adaptive}. The use of expected squared jumping distance as\nthe measure of mixing speed was justified in \\cite{roberts2014minimising}\nwhere, under the same conditions as in \\cite{atchade2011towards}, it was shown\nthat the temperature component of the ST chain has an associated diffusion\nprocess.\n\n\nThe target of an 0.234 acceptance rate gives good guidance to setting up the ST\/PT\nalgorithms in certain settings, but there is a major warning for practitioners\nfollowing this rule for optimal setup. The assumptions made in\n\\cite{atchade2011towards} and \\cite{roberts2014minimising} ignore the\nrestrictions of mixing within a temperature level, instead assuming that this\ncan be done infinitely fast relative to the mixing within the temperature\nspace. \\cite{woodard2009conditions}, \\cite{woodard2009sufficient} and \\cite{Bhatnagar2016} undertake\na comprehensive analysis of the spectral gap of the ST\/PT chains and their\nconclusion is rather damning of the ST\/PT approaches that use power-tempered\ntargets. Essentially, in situations where the modes have different structures,\nthe time required to reach a given level of convergence for the ST\/PT\nalgorithms can grow exponentially in dimension. A major reason for this is that\npower-based tempering does not preserve the relative weights\/mass between regions at the different \ntemperature levels, see Figure~\\ref{Fig:badweight}. This issue can scale exponentially in dimension. \nFrom a practical perspective, in these finite run high-dimensional non-identical modal\nstructure settings the swap acceptance rates can be very misleading, meaning\nthat they have limited use as a diagnostic for inter-modal mixing quality.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[height=6cm,width=9cm]{Weightissue.jpg}\n\\caption{Un-normalised tempered target densities of a bimodal Gaussian mixture using inverse temperature levels $\\beta=\\{ 1,0.1,0.05,0.005 \\}$ respectively. 
At the hot state (bottom right) it is evident that the mode centred on 40 begins to dominate the weight as $\\beta$ decreases towards $0$, even though at the cold state it only accounted for a fraction (0.2) of the total mass.}\n\\label{Fig:badweight}\n\\end{center}\n\\end{figure}\n\n\\subsection{Adaptive MCMC}\n\nImproving and calibrating an MCMC algorithm towards a better correspondence with\nthe intended target is a natural step in making the algorithm more efficient,\nprovided enough information is available about this target distribution. For\ninstance, when an MCMC sample associated with this target is available, even\nwhen it has not fully explored the range of the target, it contains some\namount of information, which can then be exploited to construct new MCMC\nalgorithms. Some of the solutions available in the literature \\citep[see,\ne.g.][]{liangetal06} proceed by repeating blocks of MCMC iterations and\nupdating the proposal $K$ after each block, aiming at a particular optimality\ngoal, such as a specific acceptance rate of $0.234$ for Metropolis--Hastings\nsteps \\citep{gelman:gilks:roberts:1996}. Most versions of this method update the scale structure of a random walk proposal, based on previous\nrealisations \\citep{robert:casella:2009} or on an entire sample\n\\citep{douc:guillin:marin:robert:2005}, which turns the method into iterated\nimportance sampling with Markovian dependence. (It can also be seen as a static\nversion of particle filtering,\n\\citealp{doucet:godsill:andrieu:2000,andrieu:doucet:2002,storvik:2002}.) \n\nOther adaptive resolutions bypass this preliminary and somewhat {\\em ad hoc}\nconstruction and aim instead at a permanent updating within the algorithm,\nmotivated by the idea that a continuous adaptation keeps improving the\ncorrespondence with the target. In order to preserve the validation of the method\n\\citep{gelman:gilks:roberts:1996,haario:sacksman:tamminen:1999,roberts:rosenthal:2007,saksman:vihola:2010},\nnamely that the chain produced by the algorithm converges to the intended\ntarget, specific convergence results need to be established, as the ergodic\ntheorem behind standard MCMC algorithms does not apply. Without due caution (see Figure \n\\ref{figadap}), an\nadaptive MCMC algorithm may fail to converge due to over-fitting. A drawback of\nadaptivity is that the update of the proposal distribution relies {\\em too\nmuch} on the earlier simulations and thus reinforces the exclusion of parts of\nthe space that have not yet been explored.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=.6\\textwidth,keepaspectratio=true]{adap.jpg}\n \\caption{Markov chains produced by an adaptive algorithm where the proposal distribution is a Gaussian distribution with mean and variance computed from the past simulations of the chain. The three rows correspond to different initial distributions. The fit of the histogram of the resulting MCMC sample is poor, even for the most spread-out initial distribution (bottom) \\citep[with permission]{robert:casella:2004}.}\n \\label{figadap}\n\\end{figure}\n\nFor the validation of adaptive MCMC methods, stricter constraints must thus be imposed on\nthe algorithm. One well-described solution \\citep{roberts:rosenthal:2006} is\ncalled {\\em diminishing adaptation}. Informally, it consists in imposing that a distance between\ntwo consecutive proposal kernels decreases uniformly to zero. 
In practice, this means\nstabilising the changes in the proposal by ridge-like factors as in the early proposal by\n\\cite{haario:sacksman:tamminen:1999}. A drawback of this resolution is that the\ndecrease itself must be calibrated and may well fail to bring a significant\nimprovement over the original proposal.\n\n\\subsection{Multiple try MCMC}\n\nA completely different approach to improve the original proposal used in an\nMCMC algorithm is to consider a collection of proposals, built on different\nrationales and experiments. The {\\em multiple try MCMC algorithm}\n\\citep{liu:liang:wong:2000,bedard:douc:moulines:2012,martino:2018} follows this perspective.\nAs the name suggests, the starting point of a multiple try MCMC algorithm is to\nsimultaneously propose $N$ potential moves $\\theta^1_t,\\ldots,\\theta^N_t$ of the Markov chain, instead of a\nsingle value. The proposed values $\\theta^i_t$ may be independently generated according to\n$N$ different proposal densities $K_i(\\cdot|\\theta_t)$ that are conditional on the current value of\nthe Markov chain, $\\theta_t$. One of the $\\theta^i_t$'s is selected based on the importance\nsampling weights $w_t^i\\propto \\pi(\\theta^i_t)\/K_i(\\theta^i_t|\\theta_t)$. The\nselected value is then accepted by a further Metropolis--Hastings step which involves\na ratio of normalisation constants for the importance stage, one corresponding to the selection made\npreviously and another one created for this purpose. Indeed, besides the added\ncost of computing the sum of the importance weights and generating the\ndifferent variates, this method faces the non-negligible drawback of requiring\n$N-1$ supplementary simulations that are only used for achieving detailed balance\nand computing a backward summation of importance weights (a minimal numerical sketch is provided at the end of this subsection). This constraint may vanish when\nconsidering a collection of independent Metropolis--Hastings proposals, $q(\\theta)$,\nwhich make life simpler, but this setting is rarely realistic since it requires\nsome amount of prior knowledge or experimentation to build a relevant proposal distribution.\n\nAn alternative found in the literature is {\\em ensemble Monte Carlo} \\citep{iba:2000,\ncappe:douc:guillin:marin:robert:2007,neal:2011e,martino:2018}, illustrated in Figure \\ref{figensemble},\nwhich produces a whole sample at each iteration, with target the product\nof the initial targets, in closer proximity with particle methods\n\\citep{cappe:guillin:marin:robert:2003,mengersen:robert:2003}.\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=.6\\textwidth,keepaspectratio=true]{Marino18.jpg}\n \\caption{A comparison of an ensemble MCMC approach with a regular adaptive MCMC algorithm (lower line) and a static importance sampling approach, in terms of mean square error (MSE), for a fixed total number of likelihood evaluations, where $N$ denotes the size of the ensemble ({\\em Source:} \\citealp{martino:2018}, with permission).}\n \\label{figensemble}\n\\end{figure}\n\nYet another implementation of this principle is called\n{\\em delayed rejection} \\citep{tierney:mira:1998,mira:2001,miraetal01}, where proposals are instead\nconsidered sequentially, a new value being proposed only once the previous proposed value has been rejected;\nthis amounts to speeding up MCMC by considering several possibilities, if sequentially.\nA computational difficulty with this approach is that the associated acceptance\nprobabilities get increasingly complex as the number of delays grows, which may annihilate its\nappeal relative to simultaneous multiple tries. 
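Returning to the simultaneous version, the following sketch (ours, in Python) makes one common variant of the multiple try scheme concrete: it uses a single symmetric Gaussian random-walk proposal together with the weight choice $w(\\theta)\\propto\\pi(\\theta)$ permitted by this symmetry, and it exhibits the $N-1$ auxiliary simulations that are only used to balance the selection step.\n\\begin{verbatim}\nimport numpy as np\n\ndef mtm_step(theta, log_pi, n_try=5, scale=1.0, rng=None):\n    rng = np.random.default_rng() if rng is None else rng\n    props = theta + scale * rng.standard_normal((n_try, theta.size))\n    logw = np.array([log_pi(p) for p in props])\n    probs = np.exp(logw - logw.max())\n    pick = props[rng.choice(n_try, p=probs / probs.sum())]\n    # N-1 auxiliary draws around the selected value, plus the current state,\n    # only used to achieve detailed balance\n    refs = pick + scale * rng.standard_normal((n_try - 1, theta.size))\n    logw_ref = np.append([log_pi(r) for r in refs], log_pi(theta))\n    log_alpha = np.logaddexp.reduce(logw) - np.logaddexp.reduce(logw_ref)\n    if np.log(rng.uniform()) < log_alpha:\n        return pick\n    return theta\n\\end{verbatim}\n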
A further difficulty is to devise the sequence of proposals in a diverse enough manner.\n\n\\subsection{Multiple proposals and parameterisations}\n\nA rather basic approach to comparing proposals of MCMC algorithms is to run\nseveral in parallel and to check whether these parallel chains can be exchanged\nby coupling. Chains with divergent behaviour will not couple as often as chains\nexploring the same area. While creating multiple MCMC algorithms may seem a\nmajor challenge, automated and semi-automated schemes can be replicated as much\nas desired by changing the parameterisation of the target. Each change\nintroduces a different Jacobian in the expression of the density, which means\ndifferent efficiencies in the exploration of the target.\n\n\\section{Accelerating MCMC by reducing the variance}\\label{sec:reduce}\n\nSince the main goal of MCMC is to produce approximations for quantities of interest of the form\n$$\n\\mathfrak{I}_h = \\int_\\Theta h(\\theta) \\pi(\\theta) \\text{d}\\theta,\n$$\nan alternative (and cumulative) way of accelerating these algorithms is to\nimprove the quality of the approximation derived from an MCMC output. That is,\ngiven an MCMC sequence $\\theta_1,\\ldots,\\theta_T$, converging to\n$\\pi(\\cdot)$, one can go beyond resorting to the basic Monte Carlo\napproximation\n\\begin{equation}\\label{eq:bazmo}\n\\hat{\\mathfrak{I}}_h^T = \\nicefrac{1}{T}\\sum_{t=1}^T h(\\theta_t)\n\\end{equation}\ntowards reducing the variance (if not the speed of convergence) of\n$\\hat{\\mathfrak{I}}_h^T$ to ${\\mathfrak{I}}_h$.\n\nA common remark when considering Monte Carlo approximations of $\\mathfrak{I}_h$\nis that the representation of the integral as an expectation is not unique\n\\citep[e.g.][]{robert:casella:2004}. This leads to the technique of importance\nsampling where alternative distributions are used in replacement of\n$\\pi(\\theta)$, possibly in an adaptive manner\n\\citep{douc:guillin:marin:robert:2007a}, or sequentially as in particle filters\n\\citep{delmoral:doucet:jasra:2006,andrieu:doucet:holenstein:2010}. Within the\nframework of this essay, the outcome of a given MCMC sampler can also be\nexploited in several ways that lead to an improvement of the approximation of\n$\\mathfrak{I}_h$.\n\n\\subsection{Rao--Blackwellisation and other averaging techniques}\n\nThe name `Rao--Blackwellisation' was coined by \n\\cite{gelfand:smith:1990} in their foundational Gibbs sampling paper and it\nhas since then become a standard way of reducing the variance of integral\napproximations. While it essentially proceeds from the basic probability\nidentity\n$$\\mathbb{E}^\\pi[h(\\theta)]=\\mathbb{E}^{\\pi_1}[\\mathbb{E}^{\\pi_2}\\{h(\\theta)|\\xi\\}],$$\nwhen $\\pi$ can be expressed as the following marginal density\n$$\\pi(\\theta)=\\int_{\\Xi} \\pi_1(\\xi)\\pi_2(\\theta|\\xi)\\text{d}\\xi\\,,$$\nand while sufficiency does not have a clear equivalence for Monte Carlo\napproximation, the name stems from the Rao--Blackwell theorem\n\\citep{lehmann:casella:1998} that improves upon a given estimator by\nconditioning upon a sufficient statistics. In a Monte Carlo setting, this means\nthat \\eqref{eq:bazmo} can be improved by a partly integrated version\n\\begin{equation}\\label{eq:razmo}\n\\tilde{\\mathfrak{I}}_h^T = \\nicefrac{1}{T}\\sum_{t=1}^T \\mathbb{E}^{\\pi_2}[h(\\theta)|\\xi^t]\n\\end{equation}\nassuming that a second and connected sequence of simulations $(\\xi_t)$ is available and that the conditional expectation is easily constructed. 
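\n\nTo give a toy illustration of \\eqref{eq:razmo} (a sketch added for illustration, not an example taken from the above references), consider a Gibbs sampler for a standard bivariate normal pair $(\\theta,\\xi)$ with correlation $\\rho$, for which $\\mathbb{E}[\\theta|\\xi]=\\rho\\xi$ is available in closed form:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nrho, T = 0.9, 10000\ntheta = xi = 0.0\nraw, rao_black = np.empty(T), np.empty(T)\nfor t in range(T):\n    # Gibbs updates for a standard bivariate normal with correlation rho\n    theta = rho * xi + np.sqrt(1 - rho ** 2) * rng.standard_normal()\n    xi = rho * theta + np.sqrt(1 - rho ** 2) * rng.standard_normal()\n    raw[t] = theta            # term of the basic Monte Carlo average\n    rao_black[t] = rho * xi   # E[theta | xi], term of the Rao-Blackwellised average\nprint(raw.mean(), rao_black.mean())   # both estimate E[theta] = 0\n\\end{verbatim}\nAt stationarity the Rao--Blackwellised terms have marginal variance $\\rho^2$ instead of $1$; the actual gain of \\eqref{eq:razmo} over \\eqref{eq:bazmo} also depends on the autocorrelation of the chain, which is why the improvement, while typical, is not automatic.\n\n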
Gibbs sampling \\citep{gelfand:smith:1990}, for instance, is often open to this Rao--Blackwell decomposition as it relies on successive simulations from several conditional distributions, possibly including auxiliary variates and nuisance parameters. In particular, a generic form of Gibbs sampling called the slice sampler \\citep{robert:casella:2004} produces one or several uniform variates at each iteration. \n\nHowever, a more universal type of Rao--Blackwellisation is available \\citep{casella:robert96} for all MCMC methods involving rejection, first and foremost Metropolis--Hastings algorithms. Indeed, first, the distribution of the rejected variables can be derived or approximated, which leads to an importance correction of the original estimator. Furthermore, the accept-reject step depends on a uniform variate, but this uniform variate can be integrated out. Namely, given a sample produced by a Metropolis--Hastings algorithm $\\theta^{(1)},\\ldots,\\theta^{(T)}$, one can exploit both underlying samples, the proposed values $\\vartheta_1,\\ldots,\\vartheta_T$, and the uniforms $u_1,\\ldots,u_T$, so that the ergodic mean can be rewritten as\n$$\n\\hat{\\mathfrak{I}}_h^T = \\nicefrac{1}{T} \\sum_{t=1}^T h(\\theta^{(t)}) \n= \\nicefrac{1}{T} \\sum_{t=1}^T h(\\vartheta_t) \\sum_{i=t}^T \\mathbb{I}_{\\theta^{(i)}=\\vartheta_t}\\,.\n$$\nThe conditional expectation\n\\begin{align*}\n\\tilde{\\mathfrak{I}}_h^T &= \\nicefrac{1}{T} \\sum_{t=1}^T h(\\vartheta_t)\n\\mathbb{E}\\left[\\sum_{i=t}^T \\mathbb{I}_{\\theta^{(i)}=\\vartheta_t}\\bigg|\\vartheta_1,\\ldots,\\vartheta_T\\right]\\\\\n&= \\nicefrac{1}{T} \\sum_{t=1}^T h(\\vartheta_t) \\left\\{\n\\sum_{i=t}^T \\mathbb{P}(\\theta^{(i)}=\\vartheta_t|\\vartheta_1,\\ldots,\\vartheta_T) \\right\\}\n\\end{align*}\nthen enjoys a smaller variance. See also \\cite{tjemeland:2004} and \\cite{douc:robert:2010} for connected improvements based on multiple tries. An even more rudimentary (and cheaper) version can be considered by integrating out the decision step at each Metropolis--Hastings iteration: if $\\theta_t$ is the current value of the Markov chain and $\\vartheta_t$ the proposed value, to be accepted (as $\\theta_{t+1}$) with probability $\\alpha_t$, the version\n$$\n\\nicefrac{1}{T} \\sum_{t=1}^T \\left\\{ \\alpha_t h(\\vartheta_t) +(1-\\alpha_t) h(\\theta_t) \\right\\}\n$$\nshould most often\\footnote{The improvement is not universal, due to the correlation between the terms of the sum induced by the Markovian nature of the sequence $\\{\\theta_t\\}_{t=1}^T$.} bring an improvement over the basic estimate \\citep{liu:won:kon95,robert:casella:2004}.\n\n\n\\section{Conclusion}\n\nAccelerating MCMC algorithms may sound like a new Achilles-versus-the-tortoise paradox, in that there are always further methods available to speed up a given algorithm. The stopping rule for this infinite regress is, however, that at some point the added pain of achieving the acceleration overcomes the added gain. While this survey has covered only some of the possible directions, and mostly superficially, we warmly encourage readers to remain aware of the potential offered by a wide array of almost cost-free accelerating solutions, and to keep devising more fine-tuned improvements in every new MCMC implementation. For instance, for at least one of us, Rao--Blackwellisation is always considered at this stage. 
Keeping at least one such bag of tricks at one's disposal is thus strongly advised.\n\n\\section*{Acknowledgements}\n\nChristian P.~Robert is grateful to Gareth Roberts, Mike Betancourt, and Julien Stoehr for helpful discussions. He is currently supported by an Institut Universitaire de France 2016--2021 senior grant. Changye Wu is currently a Ph.D. candidate at Universit\\'e Paris-Dauphine and supported by a grant of the Chinese Government (CSC). V\\'ictor Elvira acknowledges support from the \\emph{Agence Nationale de la Recherche} of France under PISCES project (ANR-17-CE40-0031-01), the Fulbright program, and the Marie Curie Fellowship (FP7\/2007-2013) under REA grant agreement n. PCOFUND-GA-2013-609102, through the PRESTIGE program. The authors are quite grateful to a reviewer for his or her detailed coverage of an earlier version of the paper, which contributed to significant improvements in the presentation and coverage of the topic. All remaining errors and ommissions are solely the responsability of the authors.\n\n\n\\input arXclmc.bbl\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nObservational data of type Ia supernovae (SNeIa) collected by Riess\net al. \\cite{riess} in the High-redshift Supernova Search Team and\nby Perlmutter et al. \\cite{perl} in the Supernova Cosmology Project\nTeam independently reported that the present observable universe is\nundergoing an accelerated expansion phase. The exotic source for\nthis cosmic acceleration is generally dubbed ``dark energy'' (DE)\nwhich is distinguished from ordinary matter (such as baryons and\nradiation), in the sense that it has negative pressure. This\nnegative pressure leads to the accelerated expansion of the universe\nby counteracting the gravitational force. The astrophysical\nobservations show that about 70\\% of the present energy of the\nuniverse is contained in DE. Although the nature and cosmological\norigin of DE is still enigmatic at the present, a great variety of\nmodels has been proposed to describe the DE (see e.g., the\nreviews \\cite{reviews,luca}). Two promising candidates are the\nholographic DE (HDE) \\cite{Li} and the agegraphic DE (ADE)\n\\cite{Cai} models which are originated from some considerations of\nthe features of the quantum theory of gravity.\n\nIt is curious to note that the HDE model has its origin\n(i.e. definition and derivation) depends on the Bekenstein-Hawking\n(BH) entropy-area relationship $S_{\\rm BH}=A\/4$ of black hole\nthermodynamics, where $A$ is the area of the horizon \\cite{Coh}.\nHowever, this definition of HDE can be modified (or\ncorrected) due to the various correction procedures applied to\ngravity theories \\cite{Banerjee}. For instance, the corrections to\nthe entropy which appear in dealing with the entanglement\nof quantum fields in and out the horizon \\cite{Sau}\ngenerate a power-corrected area term in the entropy\nexpression. The power-law corrected entropy has the form\n\\cite{pavon1}\n\\begin{equation}\nS=\\frac{A}{4}\\left[1-K_{\\alpha}\nA^{1-\\frac{\\alpha}{2}}\\right],\\label{ec}\n\\end{equation}\nwhere $\\alpha$ is a dimensionless constant whose value is currently\nunder debate and determining its unique and precise value requires\nseparate investigation, and\n\\begin{equation}\nK_\\alpha=\\frac{\\alpha(4\\pi)^{\\frac{\\alpha}{2}-1}}{(4-\\alpha)r_c^{2-\\alpha}},\n\\end{equation}\nwhere $r_c$ is the crossover scale. 
The second term in Eq.\n(\\ref{ec}) can be regarded as a power-law correction to the\nentropy-area law, resulting from entanglement i.e. the wave function\nof the field is taken to be a superposition\/entanglement of ground\nand exited states \\cite{Sau}. The entanglement entropy of the ground\nstate satisfies the BH entropy-area relationship. Only the excited\nstate contributes to the correction, and more excitations produce\nmore deviation from the BH entropy-area law \\cite{sau1} (also see\n\\cite{sau2} for a review on the origin of black hole entropy through\nentanglement). This lends further credence to entanglement as a\npossible source of black hole entropy. The correction term is also\nmore significant for higher excitations \\cite{Sau}. It is important\nto note that the correction term falls off rapidly with\nincreasing $A$. So for large black holes the correction\nterm falls off rapidly and the BH entropy-area law is recovered,\nwhereas for the small black holes the correction is significant.\n\nThe ADE model is originated from the uncertainty relation of quantum\nmechanics together with the gravitational effect in general\nrelativity (GR). The ADE model assumes that the observed DE comes\nfrom the spacetime and matter field fluctuations in the universe\n\\cite{Cai}. Following the line of quantum fluctuations of spacetime,\nKarolyhazy \\cite{kar} proposed that the distance in Minkowski\nspacetime cannot be known to a better accuracy than $\\delta\nt=\\varepsilon t_{P}^{2\/3}t^{1\/3}$, where $\\varepsilon$ is a\ndimensionless constant of order unity and $t_P$ is the reduced\nPlanck time. Based on Karolyhazy relation, Maziashvili proposed that\nthe energy density of metric fluctuations of Minkowski spacetime is\ngiven by \\cite{maz}\n\\begin{equation}\\label{2}\n\\rho_{\\Lambda}\\sim\\frac{1}{t_{P}^{2}t^{2}}\\sim\\frac{M_{P}^{2}}{t^{2}},\n \\end{equation}\n where $M_P$ is the reduced Planck mass $M_P^{-2}=8\\pi G$. Since in the original ADE model the\nage of the universe is chosen as the length measure, instead of the\nhorizon distance, the causality problem in the HDE is avoided\n\\cite{Cai}. The original ADE model had some difficulties. In\nparticular, it cannot justify the matter-dominated era \\cite{Cai}.\nThis motivated Wei and Cai \\cite{Wei2} to propose the new ADE (NADE)\nmodel, while the time scale is chosen to be the conformal time\ninstead of the age of the universe. The NADE density is given by\n\\cite{Wei2}\n\\begin{equation}\n\\rho_{\\Lambda}=\\frac{3{n}^2M_P^2}{\\eta^2},\\label{NADE}\n\\end{equation}\nwhere 3$n^2$ is the numerical factor and $\\eta$ is the conformal\ntime and defined as\n\\begin{equation}\n\\eta=\\int\\frac{{\\rm d}t}{a}=\\int_0^a\\frac{{\\rm\nd}a}{Ha^2}.\\label{eta}\n\\end{equation}\nThe ADE models have been examined and constrained by various\nastronomical observations \\cite{age2,age3,age,shey1,Jamilnade,\nkarami2}. Inspired by the power-law corrected entropy relation\n(\\ref{ec}), and following the derivation of HDE \\cite{Gub} and\nentropy-corrected HDE (ECHDE) \\cite{Wei}, we can easily obtain the\nso-called ``power-law entropy-corrected'' NADE (PLECNADE) whose the\nscale is chosen to be the conformal time $\\eta$. Therefore, we write\ndown the energy density of PLECNADE as \\cite{shey2,esm}\n\\begin{equation}\n\\rho_{\\Lambda} = \\frac{3n^2{M_P^2}}{\\eta^2}-\\frac{\\beta\nM_P^2}{\\eta^{\\alpha}},\\label{density-nade}\n\\end{equation}\nwhere $\\beta$ is a dimensional constant whose the precise\nvalue needs to be determined. 
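\n\nFor orientation, it is useful to note the relative size of the two terms in Eq. (\\ref{density-nade}): a simple rearrangement gives\n\\begin{equation}\n\\frac{\\beta M_P^{2}\\eta^{-\\alpha}}{3n^{2}M_P^{2}\\eta^{-2}}=\\frac{\\beta}{3n^{2}}\\,\\eta^{2-\\alpha},\n\\end{equation}\nso that the correction term grows with the conformal time for $\\alpha<2$, decays for $\\alpha>2$, and reduces to a constant rescaling of $n^{2}$ for $\\alpha=2$, while the original NADE density is recovered in the limit $\\beta\\rightarrow 0$.\n\n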
In this paper, our aim is to investigate the PLECNADE model in Ho\\v{r}ava-Lifshitz cosmology.\n\nThe plan of the paper is as follows: In section 2, we give a brief review of Ho\\v{r}ava-Lifshitz cosmology in the detailed balance case. In section 3, we construct a model of interaction between DE and DM. In section 4, we discuss some cosmological implications of this model. We obtain the evolution of the dimensionless energy density, the deceleration parameter and the equation of state parameter of the PLECNADE model. In section 5 we give the conclusion.\n\n\\section{Basics of Ho\\v{r}ava-Lifshitz Cosmology: Detailed Balance Case}\n\nRecently, a power-counting renormalizable, UV complete theory of gravity was proposed by Ho\\v{r}ava \\cite{horava}. Quantum gravity models based on an anisotropic scaling of the space and time dimensions have recently attracted significant attention \\cite{att}. In particular, Ho\\v{r}ava-Lifshitz point gravity might have desirable features, but in its original incarnation one is forced to accept a non-zero cosmological constant of the wrong sign to be compatible with observations \\cite{wrong}. At first look, it seems that this non-relativistic model for quantum gravity has a well defined IR limit and that it reduces to GR. But as first indicated by Mukohyama \\cite{muko}, Ho\\v{r}ava-Lifshitz theory mimics GR plus dark matter (DM). This theory has a scale invariant power spectrum which describes inflation \\cite{inflation}. Moreover, some new integrable and nonintegrable cosmological models of Ho\\v{r}ava-Lifshitz gravity have been discussed in \\cite{ratbay}. Phenomenologically, in Ho\\v{r}ava-Lifshitz gravity the radiation energy density decreases proportionally to $a^{-6}$ \\cite{mukho}. Hence the resultant baryon asymmetry as well as the stochastic gravity waves can be enhanced. Some cosmological solutions in Ho\\v{r}ava-Lifshitz gravity have been obtained previously \\cite{cai}. Saridakis formulated Horava-Lifshitz cosmology with an additional scalar field and showed that Horava-Lifshitz dark energy naturally presents very interesting behaviors, possessing a varying equation-of-state parameter, exhibiting phantom behavior and allowing for a realization of the phantom divide crossing. In addition, Horava-Lifshitz dark energy guarantees a bounce at small scale factors and it may trigger the turnaround at large scale factors, leading naturally to cyclic cosmology \\cite{saridakis}. The large scale evolution and curvature perturbations in HL gravity are explored in \\cite{lss}, while the origin of primordial large-scale magnetic fields in Ho\\v{r}ava's non-relativistic gravity has been discussed in \\cite{maeda}. The generalized second law of thermodynamics in Horava-Lifshitz cosmology is studied in \\cite{jcap}. 
For reviews on the scenario\nwhere the cosmological evolution is governed by Ho\\v{r}ava-Lifshitz\ngravity see \\cite{muko,HL}.\n\nUnder the detailed balance and the projectability conditions, the\nmodified Friedmann equations in the framework of Ho\\v{r}ava-Lifshitz\n(HL) gravity are given by \\cite{horava}\n\\begin{eqnarray}\\label{Fr1fluid}\nH^2 = \\frac{\\kappa^2}{6(3\\lambda-1)} \\rho_{\\rm m}\n+\\frac{\\kappa^2}{6(3\\lambda-1)}\\left[ \\frac{3\\kappa^2\\mu^2\nk^2}{8(3\\lambda-1)a^4} +\\frac{3\\kappa^2\\mu^2\\Lambda\n^2}{8(3\\lambda-1)}\n \\right]-\\frac{\\kappa^4\\mu^2\\Lambda k}{8(3\\lambda-1)^2a^2},\n\\end{eqnarray}\n\\begin{eqnarray}\\label{Fr2fluid}\n\\dot{H}+\\frac{3}{2}H^2 = -\\frac{\\kappa^2}{4(3\\lambda-1)} p_{\\rm m}\n-\\frac{\\kappa^2}{4(3\\lambda-1)}\\left[\\frac{\\kappa^2\\mu^2\nk^2}{8(3\\lambda-1)a^4} -\\frac{3\\kappa^2\\mu^2\\Lambda\n^2}{8(3\\lambda-1)}\n \\right]-\\frac{\\kappa^4\\mu^2\\Lambda k}{16(3\\lambda-1)^2a^2},\n\\end{eqnarray}\nwhere $H=\\frac{\\dot a}{a}$ is the Hubble parameter, $\\lambda$ is a\ndimensionless constant and $\\Lambda $ is a positive constant which\nas usual is related to the cosmological constant in the IR limit.\nThe parameters $\\kappa$ and $\\mu$ are constants. Also $k$ denotes\nthe curvature of space $k=0,1,-1$ for a flat, closed an open\nuniverse, respectively. Furthermore, $\\rho_{\\rm m}$ and $p_{\\rm m}$\nare the energy density and pressure of the matter.\n\nNoticing the form of the above Friedmann equations, we can define\nthe energy density $\\rho_\\Lambda$ and pressure $p_{\\Lambda}$ for DE\nas\n\\begin{equation}\\label{rhoDE}\n\\rho_\\Lambda\\equiv \\frac{3\\kappa^2\\mu^2 k^2}{8(3\\lambda-1)a^4}\n+\\frac{3\\kappa^2\\mu^2\\Lambda ^2}{8(3\\lambda-1)},\n\\end{equation}\n\\begin{equation}\n\\label{pDE} p_{\\Lambda}\\equiv \\frac{\\kappa^2\\mu^2\nk^2}{8(3\\lambda-1)a^4} -\\frac{3\\kappa^2\\mu^2\\Lambda\n^2}{8(3\\lambda-1)}.\n\\end{equation}\nThe first term on the right hand side proportional to $a^{-4}$ is\neffectively the ``dark radiation term'', present in HL cosmology\n\\cite{Calcagni:2009ar}, while the second term is referred as an\nexplicit cosmological constant.\n\nFinally in order for these expressions to match with the standard\nFriedmann equations we set \\cite{Calcagni:2009ar,Mubasher}\n\\begin{eqnarray}\n&&G_{\\rm c}=\\frac{\\kappa^2}{16\\pi(3\\lambda-1)},\\label{simpleconstants0a}\\\\\n&&\\frac{\\kappa^4\\mu^2\\Lambda}{8(3\\lambda-1)^2}=1,\n\\label{simpleconstants0}\n\\end{eqnarray}\nwhere $G_{\\rm c}$ is the ``cosmological'' Newton's constant. Note\nthat in gravitational theories with the violation of Lorentz\ninvariance (like HL gravity) the ``gravitational'' Newton's constant\n$G_{\\rm g}$, which is present in the gravitational action,\ndiffers from the ``cosmological'' Newton's constant $G_{\\rm\nc}$, which is present in the Friedmann equations, unless\nLorentz invariance is restored \\cite{Carroll:2004ai}. 
For the sake\nof completeness we write\n\\begin{eqnarray}\nG_{\\rm g}=\\frac{\\kappa^2}{32\\pi}\\label{Ggrav}.\n\\end{eqnarray}\nNote that in the IR limit ($\\lambda=1$), where Lorentz invariance is\nrestored, $G_{\\rm c}$ and $G_{\\rm g}$ are the same.\n\nFurther we can rewrite the modified Friedmann Eqs.\n(\\ref{Fr1fluid}) and (\\ref{Fr2fluid}) in the usual forms as\n\\begin{equation}\n\\label{eqfr} H^2+\\frac{k}{a^2} = \\frac{8\\pi G_{\\rm c}}{3}(\\rho_{\\rm\nm}+\\rho_\\Lambda),\n\\end{equation}\n\\begin{equation}\n\\label{Fr2b} \\dot{H}+\\frac{3}{2}H^2+\\frac{k}{2a^2} = - 4\\pi G_{\\rm\nc}(p_{\\rm m}+p_\\Lambda).\n\\end{equation}\n\n\\section{Model with Interaction}\n\nHere we would like to investigate the PLECNADE in HL theory. To do\nthis we consider a spatially non-flat Friedmann-Robertson-Walker\n(FRW) universe containing the PLECNADE and DM. Let us define the\ndimensionless energy densities as\n\\begin{equation}\n\\Omega_{\\rm m}=\\frac{\\rho_{\\rm m}}{\\rho_{\\rm cr}}=\\frac{8\\pi {\\rm\nG_{c}}}{3H^2}\\rho_{\\rm m},~~~~~~\\Omega_{\\rm\n\\Lambda}=\\frac{\\rho_{\\Lambda}}{\\rho_{\\rm cr}}=\\frac{8\\pi {\\rm\nG_{c}}}{3H^2}\\rho_{\\Lambda},~~~~~~\\Omega_{k}=-\\frac{k}{a^2H^2},\n\\label{eqomega}\n\\end{equation}\nthus the Friedmann Eq. (\\ref{Fr1fluid}) can be rewritten as\n\\begin{equation}\n1-\\Omega_{k}=\\Omega_{\\Lambda}+\\Omega_{\\rm m}.\\label{eq10}\n\\end{equation}\nTaking time derivative of Eq. (\\ref{density-nade}) and using\nrelation $\\dot{\\eta}=1\/a$, we get\n\\begin{equation}\n\\dot{\\rho}_{\\Lambda}=\\left(\\frac{1}{a\\eta}\\right)\\left[-2\\rho_{\\Lambda}+\\frac{\\beta\nM_P^2}{\\eta^{\\alpha}}(\\alpha-2)\\right].\\label{rhodot}\n\\end{equation}\nAlso, if we take the time derivative of the second relation in Eq.\n(\\ref{eqomega}) after using (\\ref{rhodot}), as well as relations\n$\\dot{\\eta}=1\/a$ and $\\dot{\\Omega}_{\\Lambda}= H\n{\\Omega}^{\\prime}_{\\Lambda}$, we obtain the equation of motion for\n${\\Omega}_{\\Lambda}$ as\n\\begin{eqnarray}\n\\label{omegaD-eq-motion1} {\\Omega^{\\prime}_{\\Lambda}} =\n\\left[-2\\Omega_{\\Lambda}\\frac{\\dot{H}}{H^2}-\\frac{2\\Omega_{\\Lambda}}{aH\\eta}+\\frac{G_{\\rm\nc}}{G_{\\rm g}}\\frac{\\beta(\\alpha-2)}{3aH^3\\eta^{\\alpha+1}}\\right].\n\\end{eqnarray}\nHere, prime denotes the derivative with respect to $x=\\ln a$. Taking\nderivative of $\\Omega_{k} = -k\/(a^2H^2)$ with respect to $x = \\ln\na$, one gets\n\\begin{equation}\n{\\Omega^{\\prime}_{k}}=-2\\Omega_{k}\\left(1+\\frac{\\dot{H}}{H^2}\\right).\\label{omegak-eq-motion1}\n\\end{equation}\nTo be more general we consider an interaction between DM and\nPLECNADE. The recent observational evidence provided by the galaxy\nclusters supports the interaction between DE and DM\n\\cite{Bertolami8}. In this case, the energy densities of DE and DM\nno longer satisfy independent conservation laws. They obey instead\n\\begin{equation}\n\\dot{\\rho}_{\\Lambda}+3H(1+\\omega_{\\Lambda})\\rho_{\\Lambda}=-Q,\\label{eqintDE}\n\\end{equation}\n\\begin{equation}\n\\dot{\\rho}_{\\rm m}+3H\\rho_{\\rm m}=Q,\\label{eqintCDM}\n\\end{equation}\nwhere $Q=3b^2H\\rho_{\\Lambda}$ stands for the interaction term with\ncoupling constant $b^2$. Note that the form of Q is chosen purely\nphenomenologically i.e. to obtain certain desirable cosmological\nfindings including phantom crossing and accelerated expansion. In\nliterature, one can find numerous forms of $Q(H\\rho)$ while we chose\na simpler form sufficient for our purpose. A more general form of\n$Q$ was proposed in \\cite{rashid}. 
Also the three\ninteracting fluids has been studied before to investigate the\ntriple coincidence problem \\cite{triple}. Differentiating\nthe Friedmann Eq. (\\ref{Fr1fluid}) with respect to time and\nusing Eqs. (\\ref{eqomega}), (\\ref{eq10}), (\\ref{rhodot}),\n(\\ref{eqintDE}) and (\\ref{eqintCDM}) we find\n\\begin{eqnarray}\n\\frac{\\dot{H}}{H^2} =\\frac{1}{2}\\Big[\\Omega_k-3(1 -\n\\Omega_{\\Lambda})+3b^2\\Omega_{\\Lambda}\\Big]\n-\\frac{\\Omega_{\\Lambda}}{aH\\eta}+\\frac{G_{\\rm c}}{G_{\\rm\ng}}\\frac{\\beta(\\alpha-2)}{6aH^3\\eta^{\\alpha+1}}.\\label{H-dot-to-H2-interact}\n\\end{eqnarray}\nInserting this result into Eq. (\\ref{omegaD-eq-motion1})\none gets\n\\begin{eqnarray}\n{\\Omega^{\\prime}_{\\Lambda}}= \\Omega_{\\Lambda} \\left[3(1 -\n\\Omega_{\\Lambda})-3b^2{\\Omega}_{\\Lambda} -\\Omega_k\\right]+(1 -\n\\Omega_{\\Lambda})\\left[\\frac{-2\\Omega_{\\Lambda}}{aH\\eta}+\\frac{G_{\\rm\nc}}{G_{\\rm\ng}}\\frac{\\beta(\\alpha-2)}{3aH^3\\eta^{\\alpha+1}}\\right].\\label{omegaD-eq-motion3}\n\\end{eqnarray}\nCombining Eq. (\\ref{H-dot-to-H2-interact}) with\n(\\ref{omegak-eq-motion1}) we have\n\\begin{equation}\n{\\Omega^{\\prime}_{k}}=\\Omega_{k}\\left[(1\n-\\Omega_k)-3\\Omega_{\\Lambda}-3b^2\\Omega_{\\Lambda}\n+\\frac{2\\Omega_{\\Lambda}}{aH\\eta}-\\frac{G_{\\rm c}}{G_{\\rm\ng}}\\frac{\\beta(\\alpha-2)}{3aH^3\\eta^{\\alpha+1}}\\right].\\label{omega-prime-k-interact}\n\\end{equation}\nAdding Eqs. (\\ref{omegaD-eq-motion3}) and\n(\\ref{omega-prime-k-interact}) yields\n\\begin{eqnarray}\n{\\Omega^{\\prime}_{\\Lambda}}+{\\Omega^{\\prime}_{k}}=(1-\\Omega_{k}-\\Omega_{\\Lambda})\\left[\\Omega_{k}+3\\Omega_{\\Lambda}\n-\\frac{3b^2\\Omega_{\\Lambda}(\\Omega_{k}+\\Omega_{\\Lambda})}{(1-\\Omega_{k}-\\Omega_{\\Lambda})}+\\frac{G_{\\rm\nc}}{G_{\\rm\ng}}\\frac{\\beta(\\alpha-2)}{3aH^3\\eta^{\\alpha+1}}-\\frac{2\\Omega_{\\Lambda}}{aH\\eta}\\right].\\label{omegalambdakpri}\n\\label{add-omega-prime-interact}\n\\end{eqnarray}\nFor completeness we give the deceleration parameter which is defined\nas\n\\begin{equation}\nq=-\\left(1+\\frac{\\dot{H}}{H^2}\\right).\\label{q1}\n\\end{equation}\nAfter combining Eq. (\\ref{H-dot-to-H2-interact}) with (\\ref{q1}) we\nget\n\\begin{eqnarray}\nq = \\frac{1}{2}[1-\\Omega_{k}-3(1+b^2)\\Omega_{\\Lambda}]+\n\\frac{\\Omega_{\\Lambda}}{aH\\eta}-\\frac{G_{\\rm c}}{G_{\\rm\ng}}\\frac{\\beta(\\alpha-2)}{3aH^3\\eta^{\\alpha+1}}.\n\\end{eqnarray}\nUsing definitions (\\ref{eqomega}) as well as Eq. 
(\\ref{eq10}) we\nhave\n\\begin{equation}\n\\rho_{\\Lambda}=\\frac{\\rho_{\\rm m}}{\\Omega_{\\rm\nm}}\\Omega_{\\Lambda}=\\frac{\\rho_{\\rm\nm}}{(1-\\Omega_{k}-\\Omega_{\\Lambda})}\\Omega_{\\Lambda},\\label{rholambdaint}\n\\end{equation}\nwhich from it we can obtain\n\\begin{equation}\n\\frac{\\rm d{\\ln{\\rho_{\\Lambda}}}}{{\\rm d}\n\\ln{a}}=\\frac{\\rho^{\\prime}_{\\rm m}}{\\rho_{\\rm\nm}}-\\frac{\\Omega^{\\prime}_{\\rm m}}{\\Omega_{\\rm\nm}}+\\frac{\\Omega^{\\prime}_{\\Lambda}}{\\Omega_{\\Lambda}}.\\label{drholambda}\n\\end{equation}\n\n\\section{Cosmological Implications}\n\nIn this section, we study some cosmological consequences of a\nphenomenologically time-dependent parameterization for the PLECNADE\nequation of state as\n\\begin{equation}\n\\omega_{\\Lambda}(z)=\\omega_0+\\omega_1 z.\\label{wpar}\n\\end{equation}\nIt was shown in \\cite{barboza} that this parameterization\nallows to divide the parametric plane $(\\omega_0,\\omega_1)$ in\ndefined regions associated to distinct classes of DE models that can\nbe confirmed or excluded from a confrontation with current\nobservational data.\n\nAfter using Eq. (\\ref{eqintDE}), the evolution of the DE density is\nobtained as \\cite{reviews,hunt}\n\\begin{equation}\n\\frac{\\rho_{\\Lambda}}{\\rho_{\\Lambda_{0}}}=a^{-3(1+\\omega_{0}-\\omega_{1}+b^2)}e^{3\\omega_{1}z}.\\label{eqCDEint}\n\\end{equation}\nThe Taylor expansion of the DE density around $a_0 = 1$ at the\npresent time yields\n\\begin{equation}\n\\ln{\\rho_{\\Lambda}}=\\ln{\\rho_{\\Lambda_{0}}}+\\frac{\\rm\nd{\\ln{\\rho_{\\Lambda}}}}{{\\rm d}\n\\ln{a}}\\Big{|}_0\\ln{a}+\\frac{1}{2}\\frac{\\rm d^2\n\\ln{\\rho_{\\Lambda}}}{{\\rm\nd}({\\ln{a}})^2}\\Big{|}_0(\\ln{a})^2+\\cdots.\\label{taylor expand}\n\\end{equation}\nUsing the fact that for small redshifts, $\\ln a = -\\ln(1 + z) \\simeq\n-z + \\frac{z^2}{2}$, Eqs. (\\ref{eqCDEint}) and (\\ref{taylor\nexpand}), respectively, reduce to\n\\begin{equation}\n\\frac{\\ln{(\\rho_{\\Lambda}\/\\rho_{\\Lambda_{0}})}}{\\ln{a}}=-3(1+\\omega_{0}+b^2)-\\frac{3}{2}\\omega_{1}z\\label{eqCDE1int},\n\\end{equation}\n\\begin{equation}\n\\frac{\\ln{(\\rho_{\\Lambda}\/\\rho_{\\Lambda_{0}})}}{\\ln{a}}=\\frac{\\rm\nd{\\ln{\\rho_{\\Lambda}}}}{{\\rm d}\n\\ln{a}}\\Big{|}_0-\\frac{1}{2}\\frac{\\rm d^2 \\ln{\\rho_{\\Lambda}}}{{\\rm\nd}({\\ln{a}})^2}\\Big{|}_0z\\label{eqCDE2}.\n\\end{equation}\nComparing Eq. (\\ref{eqCDE1int}) with (\\ref{eqCDE2}), we find that\nthese two equations are consistent provided we have\n\\begin{equation}\n\\omega_{0}=-\\frac{1}{3}\\frac{\\rm d{\\ln{\\rho_{\\Lambda}}}}{{\\rm d}\n\\ln{a}}\\Big{|}_0-1-b^2,\\label{w0int}\n\\end{equation}\n\\begin{equation}\n\\omega_{1}=\\frac{1}{3}\\frac{\\rm d^2 \\ln{\\rho_{\\Lambda}}}{{\\rm d}\n({\\ln{a}})^2}\\Big{|}_0.\\label{w1int}\n\\end{equation}\nInserting Eq. 
(\\ref{drholambda}) in (\\ref{w0int}) and (\\ref{w1int}),\nafter using (\\ref{eqintCDM}), yields\n\\begin{equation}\n\\omega_{0}=-\\frac{1}{3}\\left[\\frac{\\Omega^{\\prime}_{\\Lambda}}{\\Omega_{\\Lambda}}\n+\\frac{\\Omega^{\\prime}_{\\Lambda}+\\Omega^{\\prime}_{k}}{(1-\\Omega_{k}-\\Omega_{\\Lambda})}\\right]_0\n-b^2\\left(\\frac{1-\\Omega_{k}}{1-\\Omega_{k}-\\Omega_{\\Lambda}}\\right)_0,\n\\label{eqomega0-interact}\n\\end{equation}\n\\begin{eqnarray}\n\\omega_{1}=\\frac{1}{3}\\left[\\frac{3b^2\\Omega^{\\prime}_{\\Lambda}}{(1-\\Omega_{k}-\\Omega_{\\Lambda})}\n+\\frac{3b^2\\Omega_{\\Lambda}(\\Omega^{\\prime}_{\\Lambda}+\\Omega^{\\prime}_{k})}{(1-\\Omega_{k}-\\Omega_{\\Lambda})^2}\n+\\frac{{\\Omega}^{\\prime\\prime}_{\\Lambda}}{\\Omega_{\\Lambda}}\n-\\frac{{\\Omega}^{\\prime2}_{\\Lambda}}{\\Omega_{\\Lambda}^2}\n\\right.~~~~~~~~\\nonumber\\\\\\left.+\\frac{\\Omega^{\\prime\\prime}_{\\Lambda}+{\\Omega}^{\\prime\\prime}_{k}}{(1-\\Omega_{k}-\\Omega_{\\Lambda})}\n+\\frac{(\\Omega^{\\prime}_{\\Lambda}+\\Omega^{\\prime}_{k})^2}{(1-\\Omega_{k}-\\Omega_{\\Lambda})^2}\\right]_0.\\label{w1-2int}\n\\end{eqnarray}\nSubstituting Eqs. (\\ref{omegaD-eq-motion3}) and\n(\\ref{add-omega-prime-interact}) into (\\ref{eqomega0-interact}) we\nreach\n\\begin{eqnarray}\n\\label{state-parameter-2-interact}\n\\omega_{0}=\\frac{-1}{3\\Omega_{\\Lambda_0}}\\left(\\frac{G_{\\rm\nc}}{G_{\\rm\ng}}\\frac{\\beta(\\alpha-2)}{3H_0^3\\eta_0^{\\alpha+1}}-\\frac{2\\Omega_{\\Lambda_0}}{H_0\\eta_0}\\right)-b^2-1.\n\\end{eqnarray}\nTaking derivative of Eqs. (\\ref{omegaD-eq-motion3}) and\n(\\ref{omega-prime-k-interact}) with respect to $x = \\ln a$ and using\n(\\ref{state-parameter-2-interact}), one gets\n\\begin{eqnarray}\n\\Omega^{\\prime\\prime}_{\\Lambda}&=&-(\\Omega_{\\Lambda}\\Omega^{\\prime}_{k}+\\Omega^{\\prime}_{\\Lambda}\\Omega_{k})\n-3\\Omega_{\\Lambda}(\\Omega_{\\Lambda}-\\Omega^{\\prime}_{\\Lambda}-1)(\\omega_{0}+b^2+1)\n+(1-\\Omega_{\\Lambda})A\n\\nonumber\\\\&&-6\\Omega^{\\prime}_{\\Lambda}\\Omega_{\\Lambda}(b^2+1)+3\\Omega^{\\prime}_{\\Lambda},\\label{omegalambdapri-interact}\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\Omega^{\\prime\\prime}_{k}=\\Omega^{\\prime}_{k}(1-2\\Omega_{k})\n-3\\Omega_{\\Lambda}(\\Omega_{k}-\\Omega^{\\prime}_{k})(\\omega_{0}+b^2+1)\n-A\\Omega_{k} -3(\\Omega^{\\prime}_{k}\\Omega_{\\Lambda}+\\Omega_{\nk}\\Omega^{\\prime}_{\\Lambda})(b^2+1),\\label{omegakpri-interact}\n\\end{eqnarray}\nwhere $A$ is given by\n\\begin{eqnarray}\nA=\\frac{2\\Omega_{\\Lambda}}{aH\\eta}\\left(\\frac{\\dot{H}}{H^2}+\\frac{\\dot{\\eta}}{H\\eta}\\right)\n-\\frac{2\\Omega^{\\prime}_{\\Lambda}}{aH\\eta}-\\frac{G_{\\rm c}}{G_{\\rm\ng}}\\frac{\\beta(\\alpha-2)}{3aH^3\\eta^{\\alpha+1}}\\left(\\frac{3\\dot{H}}{H^2}+\\frac{(\\alpha+1)\\dot{\\eta}}{H\\eta}\\right).\\label{A}\n\\end{eqnarray}\nThe above expression for $A$ can also be rewritten as\n\\begin{eqnarray}\nA&=&\\frac{2}{aH\\eta}\\Big[\\Omega_{\nk}\\Omega_{\\Lambda}-3\\Omega_{\\Lambda}(1-\\Omega_{\\Lambda}-b^2\\Omega_{\\Lambda})\\Big]\n+3\\Omega_{\\Lambda}(\\omega_{0}+b^2+1)\\left[\\frac{1}{aH\\eta}-2\\Omega_{\\Lambda}-q-1\\right]\n\\nonumber\n\\\\&&-\\frac{G_{\\rm c}}{G_{\\rm\ng}}\\frac{\\beta(\\alpha-2)}{3aH^3\\eta^{\\alpha+1}}\\left[\\frac{\\alpha}{aH\\eta}-2(1+q)\\right],\\label{A2}\n\\end{eqnarray}\nwhere we have used Eqs. (\\ref{H-dot-to-H2-interact}),\n(\\ref{omegaD-eq-motion3}) as well as the relation $\\dot{\\eta}=1\/a$.\nAdding Eqs. 
(\\ref{omegalambdapri-interact}) and\n(\\ref{omegakpri-interact}) gives\n\\begin{eqnarray}\n\\Omega^{\\prime\\prime}_{\\Lambda}+\\Omega^{\\prime\\prime}_{k}&=&(1-\\Omega_{\\Lambda}-\\Omega_{k})\\Big[\\Omega^{\\prime}_{\nk}+3\\Omega_{\\Lambda}(1+\\omega_{0}+b^2)+A\\Big]+(3\\Omega_{\\Lambda}\\omega_{0}-\\Omega_{k})(\\Omega^{\\prime}_{\nk}+\\Omega^{\\prime}_{\\Lambda})\n\\nonumber\\\\&&+3\\Omega^{\\prime}_{\\Lambda}\\Big[1-(\\Omega_{k}+\\Omega_{\\Lambda})(1+b^2)\\Big].\\label{omegalambdakppri-interact}\n\\end{eqnarray}\nFinally, by combining Eqs. (\\ref{omegaD-eq-motion3}),\n(\\ref{add-omega-prime-interact}), (\\ref{omegalambdapri-interact}),\n(\\ref{A2}) and (\\ref{omegalambdakppri-interact}) with\n(\\ref{w1-2int}) we find\n\\begin{equation}\n\\omega_{1}=(1+\\omega_{0}+b^2)\\Big[3\\omega_{0}(\\Omega_{\\Lambda_0}-1)-\\Omega_{\nk_0}-3b^2+1\\Big] +\\frac{A_0}{3\\Omega_{\\Lambda_0}},\n\\end{equation}\nwhich more explicitly can be written as\n\\begin{eqnarray}\n\\omega_{1}&=&(1+\\omega_{0}+b^2)\\Big[\\Omega_{\\Lambda_0}(3\\omega_{0}-2)-3(\\omega_0+b^2)-\\Omega_{\nk_0}-q_0\\Big]\\nonumber\n\\\\&&+\\frac{1}{H_0\\eta_0}\\left[\\omega_0+(1+b^2)(1+2\\Omega_{\\Lambda_0})+2\\left(\\frac{\\Omega_{k_0}}{3}-1\\right)\\right]\\nonumber\n\\\\&&-\\frac{G_{\\rm c}}{G_{\\rm\ng}}\\frac{\\beta(\\alpha-2)}{9H_0^3\\eta_0^{\\alpha+1}\\Omega_{\\Lambda_0}}\\left[\\frac{\\alpha}{H_0\\eta_0}-2(1+q_0)\\right].\\label{EoS1}\n\\end{eqnarray}\nTherefore, with $\\omega_0$ and $\\omega_1$ at hand we can easily\nwrite down the explicit expression for $\\omega_{\\Lambda}(z)$ in Eq.\n(\\ref{wpar}) in terms of model parameters such as $\\Omega_\\Lambda$,\n$\\Omega_{k}$, the running parameter $\\lambda$ of HL gravity, the\nparameter $n$ of PLECNADE, the interaction coupling $b^2$, and the\ncorrection coefficients $\\alpha$ and $\\beta$. From Fig. 1 we notice\nthat in the absence of interaction $b^2=0$, $\\omega_0=-0.99$ showing\nquintessence state. By introducing interaction term, the state\nparameter evolves to a phantom state and gradually goes to more\nsuper-phantom state. Figure 1 also shows that the phantom\ncrossing for $\\omega_0$ happens for $b^2=0.01$ which is compatible\nwith the observation \\cite{Komatsu}. This is also in agreement with\nthe result obtained by \\cite{Karami3}. Note that phantom crossing has sound empirical support: analysis of Gold SNe and other observational datasets suggests that $\\omega(z)\\leq-1$, for $0\\leq z\\leq0.5$ \\cite{obser}. From Fig. 2, we notice that\nin the absence of interaction, the first order correction $\\omega_1$\nto state parameter behaves like quintessence but when interaction is\nintroduced, the parameter $\\omega_1$ evolves towards $-1$,\ncosmological constant. Here it is probable that $\\omega_1$ can cross\nthe cosmological constant boundary if $b^2>0.20$. For reader's clarity, we did not use any initial conditions for plotting both figures since Eq. (\\ref{state-parameter-2-interact}) and Eq. (\\ref{EoS1}) are not differential equations. However the behavior of curves in figures is strongly sensitive to the values of free parameters. \n\n\\section{Conclusions}\n\nIt has been shown that the origin of black hole entropy may lie in\nthe entanglement of quantum fields between inside and outside of the\nhorizon \\cite{Sau}. Since the modes of gravitational fluctuations in\na black hole background behave as scalar fields, one is able to\ncompute the entanglement entropy of such a field, by tracing over\nits degrees of freedom inside a sphere. 
In this way the authors of\n\\cite{Sau} showed that the black hole entropy is proportional to the\narea of the sphere when the field is in its ground state, but a\ncorrection term proportional to a fractional power of area results\nwhen the field is in a superposition of ground and excited states.\nFor large horizon areas, these corrections are relatively small and\nthe BH entropy-area law is recovered.\n\nHere, we investigated the PLECNADE scenario in the framework of HL\ngravity. We considered an arbitrary spatial local curvature for the\nbackground geometry and allowed for an interaction between the\nPLECNADE and DM. We obtained the deceleration parameter as well as\nthe differential equation which determines the evolution of the\nPLECNADE density parameter. Using a low redshift expansion of the\nEoS parameter of PLECNADE as\n $\\omega_{\\Lambda}(z)=\\omega_0 +\\omega_1\nz$, we calculated $\\omega_0$ and $\\omega_1$ as functions of the\nPLECNADE and curvature density parameters, $\\Omega_\\Lambda$ and\n$\\Omega_{k}$ respectively, of the running parameter $\\lambda$ of HL\ngravity, of the parameter $n$ of PLECNADE, of the interaction\ncoupling $b^2$, and of the coefficients of correction terms $\\alpha$\nand $\\beta$. It is quite interesting to note that phantom crossing\nfor $\\omega_0$ happens for $b^2=0.01$ i.e. a small but non-zero\ninteraction parameter. In literature, there are two well-studied ways for phantom crossing: via modified gravities including scalar-tensor or Gauss-Bonnet braneworld models \\cite{gauss} or by introducing an interaction between fluid dark energy and dark matter. In the later models, its a generic feature of interacting dark energy models to have phantom crossing, for instance, using different forms of dark energy including Chaplygin gas \\cite{sad}, quintessence \\cite{sad1}, new-agegraphic and holographic dark energy \\cite{Jamilnade}. In \\cite{nadebd}, the present authors studied the PLECNADE interacting with matter in Brans-Dicke gravity and obtained the state parameter behaving to cross the phantom divide for small values of coupling parameter $b$.\n\n\\subsection*{Acknowledgements}\n\nThe works of K. Karami and A. Sheykhi have been supported\nfinancially by Research Institute for Astronomy and Astrophysics of\nMaragha (RIAAM) under research project No. 1\/2340. M. Jamil would like to thank the warm hospitality of Eurasian National University, Astana, Kazakhstan where part of this work completed. 
Authors would thank the anonymous referees for their constructive criticism on this paper.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{References}}\n\n\\usepackage[sort&compress,comma,authoryear]{natbib}\n\\usepackage[colorlinks=true,urlcolor=blue,citecolor=red,linkcolor=red,bookmarks=true]{hyperref}\n\\setcitestyle{citesep={;}}\n\n\n\n\n\n\\usepackage{graphicx}\n\\usepackage{epstopdf}\n\n\\epstopdfsetup{\n suffix=,\n}\n\n\\usepackage{booktabs}\n\\usepackage{multirow}\n\\usepackage[table,xcdraw]{xcolor}\n\\usepackage{color}\n\\usepackage{amsmath,amssymb,amsfonts}\n\\usepackage{algorithmic}\n\\usepackage{graphicx}\n\\usepackage{textcomp}\n\\usepackage{float}\n\\usepackage{xcolor}\n\\usepackage{pgf} \n\\usepackage{pgfplots}\n\\usepackage{times}\n\\usepackage{adjustbox}\n\n\\usepackage[normalem]{ulem}\n\\usepackage{pdfpages}\n\\usepackage[skins]{tcolorbox}\n\\usepackage{subfigure}\n\\usepackage[boxed]{algorithm2e}\n\n\n\n\\usepackage{tikz}\n\\usetikzlibrary{fit,calc}\n\\usepgfplotslibrary{external}\n\\newcommand{\\boxplot}[7]{%\n\n\t\\filldraw[fill=#7,line width=0.2mm] let \\n{boxxl}={#1-0.35}, \\n{boxxr}={#1+0.35} in (axis cs:\\n{boxxl},#3) rectangle (axis cs:\\n{boxxr},#4); \n\t\\draw[line width=0.2mm, color=red] let \\n{boxxl}={#1-0.35}, \\n{boxxr}={#1+0.35} in (axis cs:\\n{boxxl},#2) -- (axis cs:\\n{boxxr},#2); \n\t\\draw[line width=0.2mm] (axis cs:#1,#4) -- (axis cs:#1,#6); \n\t\\draw[line width=0.2mm] let \\n{whiskerl}={#1-0.025}, \\n{whiskerr}={#1+0.025} in (axis cs:\\n{whiskerl},#6) -- (axis cs:\\n{whiskerr},#6); \n\t\\draw[line width=0.2mm] (axis cs:#1,#3) -- (axis cs:#1,#5); \n\n\t\\draw[line width=0.2mm] let \\n{whiskerl}={#1-0.025}, \\n{whiskerr}={#1+0.025} in (axis cs:\\n{whiskerl},#5) -- (axis cs:\\n{whiskerr},#5); \n\t}\n\n\n\n\\SetKwProg{Fn}{Function}{}{end}\\SetKwFunction{FRecurs}{FnRecursive}%\n\\newcommand{$i=0$ \\KwTo $n$}{$i=0$ \\KwTo $n$}\n\n\\def\\BibTeX{{\\rm B\\kern-.05em{\\sc i\\kern-.025em b}\\kern-.08em\n T\\kern-.1667em\\lower.7ex\\hbox{E}\\kern-.125emX}}\n \n\\usepackage{array}\n\\newcolumntype{C}[1]{>{\\centering\\arraybackslash}p{#1}}\n\n\\usetikzlibrary{arrows,automata}\n\n\\journal{Expert Systems with Applications}\n\n\\begin{document}\n\n\\newenvironment{RQ}{\\vspace{2mm}\\begin{tcolorbox}[enhanced,width=3.0in,size=fbox,fontupper=\\normalsize,colback=blue!5,drop shadow southwest,sharp corners]}{\\end{tcolorbox}}\n\n\n\n\\title{Understanding Static Code Warnings: an Incremental AI Approach}\n\n\\author{Xueqi Yang}\n\\ead{xyang37@ncsu.edu}\n\\author{Zhe Yu}\n\\ead{zyu9@ncsu.edu}\n\\author{Junjie Wang}\n\\ead{wangjunjie@itechs.iscas.ac.cn}\n\\author{Tim Menzies}\n\\ead{tim.menzies@gmail.com}\n\n\\address{Department of Computer Science, North Carolina State University, Raleigh, NC, USA}\n\\address{Institute of Software Chinese Academy of Sciences, Beijing, China}\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{abstract}\n\nKnowledge-based systems reason over some knowledge base. \nHence, an important issue for such systems is how to acquire the knowledge needed\nfor their inference.\nThis paper assesses active learning methods for acquiring knowledge for ``static code warnings''.\n\nStatic code analysis is a widely-used method\nfor detecting bugs and security vulnerabilities in software systems. As software becomes more complex, analysis tools also report lists of increasingly complex warnings that developers need to address on a daily basis. \nSuch static code analysis tools\nare usually over-cautious; i.e. 
they often offer many warnings about spurious issues.\nPrevious research work shows that about 35\\% to 91 \\% warnings reported as bugs by SA tools are actually unactionable~ (i.e., warnings that would not be acted on by developers because they are falsely suggested as bugs).\n\nExperienced developers know which errors are important and which can be safely ignored. How can we capture that experience?\nThis paper reports on an incremental AI tool that watches humans reading false alarm reports. Using an incremental support vector machine mechanism, this AI tool can quickly learn to distinguish spurious false alarms from more serious matters that deserve further attention.\n\n In this work, nine open-source projects are employed to evaluate our proposed model on the features extracted by previous researchers and identify the actionable warnings in a priority order given by our algorithm. We observe that our model can identify over 90\\% of actionable warnings when our methods tell humans to ignore 70 to 80\\% of the warnings.\n \n\n\\end{abstract}\n\n\n\n\n\\begin{keyword}\nActionable warning identification\\sep Active learning\\sep Static analysis\\sep Selection process\n\\end{keyword}\n\\maketitle\n\n\\section{Introduction}\n\\label{sec:intro}\n\n\n\n\n\n\n\n\n\n\n\n\nKnowledge acquisition problem is a longstanding and challenging bottleneck in artificial intelligence, especially like Semantic Web project~\\citep{feigenbaum1980knowledge}. Traditional knowledge engineering methodologies\nhandcraft the knowledge prior to testing that data\non some domain~\\citep{hoekstra2010knowledge}.\nSuch handcrafted knowledge is expensive to collect. Also, building competent\nsystems can require extensive manually crafting-- which leads to a long gap between crafting and testing knowledge.\n\n\\begin{figure*}[!t]\n\\begin{center}\n\\includegraphics[width=4.5in]{guaranteedDereference.png}\\end{center}\n\\caption{Example of a static code analysis warning, generated via the FindBugs\ntool.}\\label{fig:fb}\n\\end{figure*}\n\nIn this paper, we address these problems with self-adaptive incremental active learning utilizing a human-in-the-loop process.\nThis approach can be leveraged to filter spurious vs serious static warnings generated by static analysis (SA) tools. \n\nStatic code analysis is a common operation of detecting bugs and security vulnerabilities in software systems. \nThe wide range of commercial applications of static analysis demonstrates the industrial perception that these tools have a very high economic value.\nOne of the popular SA tools, FindBugs\\footnote{http:\/\/findbugs.sourceforge.net\/} (shown in Figure~\\ref{fig:fb}) has been downloaded over a million times so far. \nHowever, large amounts of warnings are falsely suggested by SA tools as bugs to developers, overwhelming the few actionable ones (true bugs). 
Due to high rates of unactionable warnings, the utility of such static code analysis tools is questionable.\nPrevious research work shows that about 35\\% to 91 \\% warnings reported as bugs by SA tools are actually unactionable~ \\citep{kim2007warnings,heckman2008establishing,heckman2011systematic}.\n\nExperienced developers have the knowledge of filtering out ignorable and unactioable warnings.\nOur active learning methods incrementally acquire and validate this knowledge.\nBy continuously and incrementally constructing and updating the model, our approach can help SE developers to identify more actionable static warnings with very low inspection costs and provide an efficient way to deal with software mining on the early life cycle. \n\nThis paper evaluates the proposed approach with the following four research questions:\n\n\n\\begin{RQ}\n{\\bf RQ1.} What is the baseline rate for bad static warnings?\n\\end{RQ}\n\nWhile this is more a systems question rather than a research question, it is a necessary precondition to our work since it documents the problem we are trying to address. For this question, we report results from FindBugs. These results will serve as the baseline for the rest of our work.\n\n\\begin{RQ}\n\n{\\bf RQ2.} What is the previous state-of-the-art method to tackle the prevalence of actionable warnings in SA tools?\n\\end{RQ}\n\n\nWang et al.~\\citep{wang2018there} conduct a systematic evaluation of all the publicly available features (116 features in total) that discuss static code warnings.\nThat work offered a \"golden set of features\"; i.e., 23 features that Wang et al.~\\citep{wang2018there} argued were most useful for extracting serious bug reports generated from FindBugs.\nOur experiments combining three supervised learning models from the literature with these 23 features.\n\n\n\n\\begin{RQ}\n{\\bf RQ3.} Does incremental active learning reduce the cost to identify actionable Static Warnings?\n\\end{RQ}\n\n\nWe will show that incremental active learning reduces the cost of identifying actionable warnings dramatically (and obtains performance almost as good as supervised learning).\n\n\\begin{RQ}\n{\\bf RQ4.} How many samples should be retrieved to identify all the actionable Static Warnings?\n\\end{RQ}\n\nIn this case study, incremental active learning can identify over 90\\% of actionable warnings by learning from about 20\\% to 30\\% of data. \nHence, we recommend this system to developers who wish to reduce the time they waste chasing spurious errors.\n\n\n\n\n\\subsection{Organization of this Paper}\nThe remainder of this paper is organized as follows. Research background and related work is introduced in Section~\\ref{sec:related}. In Section~\\ref{sec:method}, we describe the detail of our methodology. Our experiment details are introduced in Section~\\ref{sec:experiment}. In Section~\\ref{sec:evaluation}, we answer proposed research questions. 
Threats to validity and future work are discussed in Section~\\ref{sec:threats} and we finally draw a conclusion in Section~\\ref{sec:conclusion}.\n\nTo facilitate other researchers in this area, all our scripts are data are freely available on-line\\footnote{Download our scripts and data from \\url{https:\/\/github.com\/XueqiYang\/incrementally-active-learning_SWID}.}.\n\n\\subsection{Contributions of this Paper}\n\nIn the literature, \nactive learning methods have been extensively discussed, like finding relevant papers in literature review~\\citep{yu2018finding,yu2019fast2}, security vulnerability prediction~\\citep{8883076}, crowdsourced testing~\\citep{wang2016local}, place-aware application development~\\citep{murukannaiah2015platys}, classification of software behavior~\\citep{bowring2004active}, and multi-objective optimization~\\citep{krall2015gale}. The unique contribution of this work lies in the novel application of these methods to resolving problems with static code warnings. To the best of our knowledge, no prior work has tried to tame spurious static code warnings by treating these as an incremental knowledge acquisition problem.\n\n\\section{Related Work}\n\\label{sec:related}\n\\subsection{Reasoning About Source Code}\n\nThe software development community has produced numerous static code analysis tools such as FindBugs, PMD\\footnote{https:\/\/pmd.github.io\/}, or Checkstyle\\footnote{http:\/\/checkstyle.sourceforge.net\/} that are able to generate various warnings to help developers identifying potential code problems. Such static code analysis tools such as FindBugs leverage \\textit{static analysis} (SA) techniques to inspect source code for the occurrence of bug patterns (i.e., the code idiom that is often an error) without actually executing nor considering an exact input. These bugs detected by FindBugs are grouped into a pattern list, (i.e, performance, style, correctness and so forth) and each bug is reported by FindBugs with priority from 1 to 20 to measure the severity, which is finally grouped into four scales either scariest, scary, troubling, and of concern~\\citep{ayewah2008using}.\n\n\n\n\n\nSome SA tools learn to identify new bugs using historical data from past problems.\nThis is not ideal since it means that whenever there are chances to tasks, languages, platforms, and perhaps even developers then the old warnings might go out of date and new ones have to be learned.\nStatic warning identification is increasingly relying on complex software systems~\\citep{wijayasekara2012mining}. Identifying static warnings in every stage of the software life cycle is essential, especially for projects in early development stage~\\citep{murtaza2016mining}.\n\nArnold et al.~\\citep{arnold2009security} suggests that every project, early in its own lifecycle, should build its own static warning system.\nSuch advice is hard to follow since it means a tedious, time-consuming and expensive retraining process at the start of each new project. To say that in another way, Arnold et al.'s advice suffers from the knowledge acquisition bottleneck problem.\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Static Warning Identification}\n\nStatic warning identification aims at identifying common coding problems early in the development process via SA tools and distinguishing actionable warnings from unactionable ones~\\citep{heckman2011systematic,hovemeyer2004finding,yan2017revisiting}. 
\n\n\nPrevious studies have shown that false positives in static alerts have been one of the most important barriers for developers to use static analysis tools~\\citep{thung2015extent,avgustinov2015tracking,johnson2013don}. To address this issue, many techniques have been introduced to identify actionable warnings or alerts.\nVarious models have been mentioned in their study, including graph theory~\\citep{boogerd2008assessing,bhattacharya2012graph}, machine learning~\\citep{wang2016automatically,shivaji2009reducing} etc. However, most of the studies are plagued by a common issue, choosing the appropriate warning characteristics from abundant feature artifacts proposed by SA studies so far. \n\n\nRanking schemes are one way to improve\nstatic analysis tool~\\citep{kremenek2004correlation}. Allier et al.~\\citep{allier2012framework} proposed a framework to compare 6 warning ranking algorithms and identified the best one to rank warnings. Similarly, Shen et al.~\\citep{shen2011efindbugs} employed a ranking technique to rank the true error reports on top so as to reduce false positive warnings. Some other works also prioritize warnings by selecting different categories of impact factors~\\citep{liang2010automatic} or by analyzing software history~\\citep{kim2007prioritizing}.\n\n\nRecent work has shown that this problem can be solved by combining machine learning techniques to identify whether a detected warning is actionable or not, e.g., finding alerts with similar code patterns and building prediction models to classify new alerts~\\citep{hanam2014finding}. Heckman and Williams did a systematic literature review revealing that most of these works focus on exploring a reasonable characteristic set, like Alert characteristics (AC) and Code characteristics (CC), to distinguish actionable and unactionable warnings more accurately~\\citep{heckman2011systematic,hanam2014finding,heckman2009model}. One of the most integrated study explores 15 machine learning algorithms and 51 warning characteristics derived from static analysis tools and achieves good performance with high recall (83-99 \\%)~\\citep{heckman2009model}. However, in practice, information on bug warning patterns is limited to be obtained, especially for some trivial checkers in SA tools. Also, these tools suffer from conflation issues where similar warnings are given different names in different studies.\n\n\n\nWang et al.~\\citep{wang2018there} recently conducted a systematic literature review to collect all publicly available features (116 in total) for SA analysis and implemented a tool based on Java for feature extraction. All the values of these collected features are extracted from warning reports generated by FindBugs based on 60 revisions of 12 projects. Six machine learning classifiers were employed to automatically identify actionable static warning. 23 common features were identified as the best and most useful feature combination for Static Warning Identification, since the best performance is always obtained when using these 23 golden features, better than using total feature set or other subset strategies. To the best of our knowledge, this is the most exhaustive research about SA characteristics yet published.\n\n\n\n\n\n\n\n\\subsection{Active Learning}\n\n\nLabeled data is required by supervised machine learning techniques. Without such data, these algorithms cannot learn predictors.\nObtaining good labeled data can sometimes be time consuming and expensive. 
In the case of this paper, we are concerned with learning how to label static code warnings (spurious or serious). For another example, training a good document classifier might require hundreds of thousands of samples. Usually, these examples do not come with labels, and therefore expert knowledge (e.g., recognizing a handwritten digit) is required to determine the ``right'' label. \n\nActive learning~\\citep{settles2009active} is a machine learning algorithm that enables the learners to actively choose which examples to label from amongst the currently unlabeled instances. This approach trains on a little bit of labeled data, and then asks again for some more labels for the unlabelled examples that are most ``interesting'' (e.g. whose labels are most uncertain). This process greatly reduces the amount of labeled data required to train a model while still achieving good predictive performance.\n\n\nActive learning has been applied successfully in several SE research areas, such as finding relevant papers in literature review~\\citep{yu2018finding,yu2019fast2}, security vulnerability prediction~\\citep{8883076}, crowd sourced testing~\\citep{wang2016local}, place-aware application development~\\citep{murukannaiah2015platys}, classification of software behavior~\\citep{bowring2004active}, and multi-objective optimization~\\citep{krall2015gale}.\nOverall, there are three different categories of active learning:\n\n\\begin{itemize}\n \\item \\textit{Membership query synthesis.} In this scenario, a learner is able to generate synthetic data for labeling, which might not be applicable to all cases. \n \\item \\textit{Stream-based selective sampling.} Each sample is considered separately in the case of label querying or rejection. There are no assumptions on data distribution, and therefore it is adaptive to change.\n \\item \\textit{Pool-based sampling.} Samples are chosen from a pool of unlabeled data for the purpose of labeling. The learner is usually initially trained on a fully labeled fraction of data to generate a preliminary model, which is subsequently used to identify which sample would be most beneficial to be used next in the training set during the next generation of active learning loop. Pool-based sampling scenario is the most widely adopted scheme in literature, which is also applied in our work.\n\\end{itemize}\n\nPrevious work has shown the successful adoption of active learning in several research areas. \nWang et.al~\\citep{wang2016local} applied active learning to identify the test reports that reveal ``true fault'' from a large amount of test reports in crowdsourced testing of GUI applications. \nWithin that framework, they proposed a classification technique that labels a fraction of most informative samples with user knowledge, and trained classifiers based on the local neighborhood.\n\nYu et.al~\\citep{yu2018finding,yu2019fast2,yu2019searching} proposed a framework called FASTREAD to assist researchers to find the relevant papers to read. FASTREAD works by 1) leveraging external domain knowledge (e.g., keyword search) to guide the initial selection of papers; 2) using an estimator of the number of remaining paper to decide when to stop; 3) applying error correction algorithm to correct human mislabeling. 
This framework has also been shown effective in solving other software engineering problems~\citep{yu2018total}, such as inspecting software security vulnerabilities~\citep{8883076}, finding self-admitted technical debt~\citep{fahid2019better}, and test case prioritization~\citep{yu2019terminator}. In this work, we adopt a similar framework in static warning analysis.\n\n\nTo the best of our knowledge, this work is the first study to utilize incremental active learning to reduce unnecessary inspection of static warnings based on the most effective feature attributes.\nWhile Wang et al. is the closest work to this paper, our work differs from theirs in several ways.\n\begin{itemize}\n\item\nIn that study, their raw data was screen-snaps of erroneous conditions within a GUI. \nAlso, they spent much effort tuning a feedback mechanism specialized for their images.\n\item\nIn our work, our raw data is all textual (the text of a static code warning).\nWe found that a different method, based on active learning, worked best for such textual data.\n\end{itemize}\n\n\n\n\section{Methodology}\n\label{sec:method}\n\n\subsection{Overview}\n\nThis work applies an incremental active learning framework to identify static warnings. \nThis is derived from active learning, which has been shown to perform well in solving the total recall problem in several areas, e.g., electronic discovery, evidence-based medicine, primary study selection, test case prioritization, and so forth.\nAs illustrated in Figure~\ref{fig:learning}, we aim to achieve higher recall with lower effort in inspecting warnings generated by SA tools.\n\n\n\begin{figure}[!b]\n\center{\includegraphics[width=0.45\textwidth]{learning_curve.png}}\n\caption{Learning Curve of Different Learners. }\n\label{fig:learning}\n\end{figure}\n\n\subsection{Evaluation Metrics}\n\label{subsec:metrics}\n\n\nTable \ref{table:despription} lists all the variables involved in our study.\nWe evaluated the active learning results in terms of \textbf{\textit{total recall}} and \textbf{\textit{cost}}, which are defined as follows:\n\n\n\n\n\begin{table}[!htbp]\n\small\n\caption{Description of Variables in Incremental Active Learning.}\n\begin{adjustbox}{max width=0.48\textwidth}\n\begin{tabular}{l|l}\n\hline\n\multicolumn{1}{c|}{\textbf{Variable}} & \multicolumn{1}{c}{\textbf{Description}} \\ \hline\n$E$ & \begin{tabular}[c]{@{}l@{}}Set of warnings reported by static \\ analysis tools\end{tabular} \\ \hline\n$T$ & \begin{tabular}[c]{@{}l@{}}Set of actionable warnings or target\\ samples\end{tabular} \\ \hline\n$L$ & \begin{tabular}[c]{@{}l@{}}Set of warnings that have currently been\\ retrieved or labeled\end{tabular} \\ \hline\n$L_T$ & \begin{tabular}[c]{@{}l@{}}Set of warnings currently labeled\\ that reveal actionable warnings\end{tabular} \\ \hline\n\textbf{Total Recall} & $L_T \/ T$ \\ \hline\n\textbf{cost} & $L \/ E$ \\ \hline\n\end{tabular}\n\label{table:despription}\n\end{adjustbox}\n\end{table}\n\n\textit{Total recall} is the ratio between the labeled samples that reveal actionable warnings ($L_T$) and the total set of real actionable warning samples ($T$).\nThe optimal value of \textit{total recall} is 1, which indicates that all of the target samples (or actionable warnings in our case) have been retrieved and labeled as actionable.\n\n\textit{Cost} is the proportion of warnings that have currently been retrieved or labeled ($L$) out of all the warnings reported by the static analysis tool ($E$). The value of cost varies between the ratio of actionable warnings in the dataset and 1. The lower bound means the active learning algorithm prioritizes all targeted samples without uselessly labeling any unactionable warnings. This is a theoretical optimal value (which, in practice,\nmay be unreachable).\nThe upper bound means the active learning algorithm successfully retrieves all the real actionable warnings, but at the cost of labeling them all\n(which is meaningless because randomly labeling samples would achieve the same goal).
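\n\nTo make these definitions concrete, the following minimal Python sketch (the function and variable names are ours, for illustration only) computes both metrics from the sets listed in Table \ref{table:despription}:\n\begin{verbatim}\ndef total_recall_and_cost(labels, inspected):\n    # labels: ground-truth label of every warning in E\n    # inspected: indices of warnings retrieved and labeled so far (L)\n    T = sum(1 for y in labels if y == 'actionable')\n    L_T = sum(1 for i in inspected if labels[i] == 'actionable')\n    total_recall = L_T \/ T if T else 0.0   # L_T \/ T\n    cost = len(inspected) \/ len(labels)    # |L| \/ |E|\n    return total_recall, cost\n\end{verbatim}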
\n\n\\textit{Cost} considers the set of warning that has currently been retrieved or labeled out of the set of warning reported by the static warning analysis tools. The value of cost varies between the ratio of actionable warnings in the dataset and 1. The lower bound means active learning algorithm prioritizes all targeted samples without uselessly labeling any unactionable warnings. This is a theoretical optimal value (which, in practice,\nmay be unreachable).\nThe upper bound means active learning algorithm successfully retrieves all the real warning samples, but at the cost of labeling them all\n(which is meaningless because randomly labeling samples will achieve the same goal).\n\nFigure~\\ref{fig:learning} is an Alberg diagram showing the learning curve of different learners. In this figure, the x-axis and y-axis respectively represent the percentage of warnings retrieved or labeled by learners (i.e. cost) and the percentage of actionable warnings retrieved out of total actionable ones (i.e. total recall). An optimal learner will achieve higher total recall than others when a specific cost threshold is given, e.g., at the cost of 20 \\% effort as illustrated in Figure~\\ref{fig:learning}. The best performance in Figure~\\ref{fig:learning} is obtained by optimal learner, followed by proposed learner, random learner and worst learner. This learning curve is a performance measurement at different cost thresholds settings. \n\n\\textit{AUC} (Area under the ROC Curve) measures the area under the Receiver Operator Characteristic (ROC) curve~\\citep{witten2016data, heckman2011systematic} and reflects the percentage of actionable warnings against the percentage of unactionable ones so as to overall report the discrimination of a classifier~\\citep{wang2018there}. This is a widely adopted measurement in Software Engineering, especially for imbalanced data~\\citep{liang2010automatic}. \n\n\n\\subsection{Active Learning Model Operators}\nSeveral operators are apply to address the challenge of the total recall problem, as listed in Table~\\ref{tbl:FASTREADOperator}. Specific details about each operator are illustrated as follows:\n\n\n\\begin{table}[!t]\n\\small\n\\centering\n\\caption{Operators of Active Learning.}\n\\begin{adjustbox}{max width=0.48\\textwidth}\n\\begin{tabular}{l|l}\n\\hline\n\\multicolumn{1}{c|}{\\textbf{Operator}} & \\multicolumn{1}{c}{\\textbf{Description}} \\\\ \\hline\nMachine Learning Classifier & \\begin{tabular}[c]{@{}l@{}}Widely-used classification \\\\ technique.\\end{tabular} \\\\ \\hline\n\\begin{tabular}[c]{@{}l@{}}Presumptive non-relevant \\\\ examples\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Alleviate the sampling bias of \\\\ non-relevant examples.\\end{tabular} \\\\ \\hline\nAggressive Undersampling & Data-balancing technique. \\\\ \\hline\nQuery strategy & \\begin{tabular}[c]{@{}l@{}}Uncertainly sampling and \\\\ certainty sampling in active \\\\ learning.\\end{tabular} \\\\ \\hline\n\\end{tabular}\n\\label{tbl:FASTREADOperator}\n\\end{adjustbox}\n\\end{table}\n\n\\textbf{Classifier}\nWe employ three machine learning classifiers as an embedded active learning model, linear SVM with weighting scheme, Random Forest and Decision Tree with default parameters as these classifiers are widely explored in software engineering area and also reported in Wang et al.'s paper. 
\n\n\textbf{Presumptive non-relevant examples}, proposed by Cormack et al.~\citep{cormack2015autonomy}, is a technique to alleviate the sampling bias of negative samples in an unbalanced dataset. To be specific, before each training process, the model samples randomly from the unlabeled pool and assumes that the sampled instance is labeled as negative in training, due to the prevalence of negative samples.\n\n\textbf{Aggressive undersampling}~\citep{wallace2009meta} is a sampling method that copes with an unbalanced dataset by throwing away majority (negative) training points close to the decision plane of SVM and aggressively accessing minority (positive) points until the ratio of the two categories is balanced. It is an effective approach to mitigate class-imbalance bias in datasets.\nThis technique is suggested by Wallace et al.~\citep{wallace2010semi} after the initial stage of incremental active learning, when the established model becomes stable.\n\nThe \textbf{querying strategy}\nis the approach utilized to determine which data instance in the unlabelled pool to query for labelling next. We adopt two of the most commonly used strategies, \textit{uncertainty sampling}~\citep{settles2009active} and \textit{certainty sampling}~\citep{miwa2014reducing}.\n\nUncertainty sampling~\citep{settles2009active}\nis the simplest and most commonly used query strategy in active learning, where unlabeled samples closest to the decision plane of SVM, or predicted to be the least likely positive by a classifier, are sampled for query. Wallace et al.~\citep{wallace2010semi} recommended the uncertainty sampling method for biomedical literature review, where it efficiently reduces the cost of manually screening the literature.\n\nCertainty sampling~\citep{miwa2014reducing} is a greedy approach that maximizes the utility of the incremental learning model by prioritizing the samples that are most likely to be actionable warnings. Contrary to uncertainty sampling, certainty sampling gives priority to the instances that are far away from the decision plane of SVM or have the highest probability score predicted by the classifier. It speeds up the retrieval process and plays the major role in stopping earlier.\n\n\begin{figure}[!b]\n\centerline{\includegraphics[width=0.5\textwidth]{procedure.png}}\n\caption{Procedure of Incremental Active Learning.} \n\label{fig:procedure}\n\end{figure}\n\n\subsection{Active Learning Procedures}\n\nFigure~\ref{fig:procedure} presents the procedure of incremental active learning; a detailed description of each step is as follows:\n\n\n\n\begin{enumerate}\n \item Initial Sampling.\n \n We propose two initial sampling strategies to cope with the scenarios in which historical information is or is not available.\n \n For software projects in an early life cycle without sufficient historical revisions in the version control system, random sampling without replacement is used in the initial stage when the labeled warning pool is empty.\n \n For software projects with previous version information, we utilize version N-1 to obtain a pre-trained model and initialize sampling on version N. 
This practice can reduce the cost of manually excluding unactionable warnings, given the prevalence of false positives in SA datasets.\n \n \item Human or oracle labeling.\n \n After a warning message is selected by initial sampling or by the query strategy, manual inspection is required to identify whether the retrieved warning is actionable or not. In our simulation, the ground truth serves as a human oracle and returns a label once a warning that is presumed unlabeled is queried by the active learning model.\n \n In static analysis, inspecting and tagging the warning being queried is considered the main overhead of this process. As shown in Table \ref{table:despription}, this overhead is denoted as \textbf{Cost} and is what software developers strive to reduce.\n \n \item Model Training and updating.\n \n After a newly queried warning is labeled by the human oracle, this data sample is added to the training data. The model is retrained and updated recursively.\n \n \item Query Strategy.\n \n Uncertainty sampling is leveraged while the actionable samples retrieved and labeled by our model are under a specific threshold. This query strategy mainly applies when targeted data samples are rare in the training set and a stable model needs to be built faster~\citep{8883076}.\n \n Finally, after the labeled actionable warnings exceed the given threshold, certainty sampling is employed to aggressively search for true positives and greedily reduce the cost of warning inspection. \n\end{enumerate}
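\n\nTo make this loop concrete, the following minimal Python sketch (our own simplification: the helper names, the threshold value, and the assumption that class 1 denotes an actionable warning are illustrative, and presumptive non-relevant examples and aggressive undersampling are omitted for brevity) mirrors steps 1--4 above:\n\begin{verbatim}\nimport numpy as np\n\ndef active_learning_round(clf, X, y, labeled, threshold=10):\n    # One query-label-retrain round; y simulates the human oracle\n    # (1 = actionable, 0 = unactionable); labeled is a list of indices.\n    pool = np.array([i for i in range(len(y)) if i not in labeled])\n    if len(labeled) < 2 or len(set(y[labeled].tolist())) < 2:\n        query = int(np.random.choice(pool))      # initial random sampling\n    else:\n        clf.fit(X[labeled], y[labeled])          # retrain on labeled warnings\n        prob = clf.predict_proba(X[pool])[:, 1]  # P(actionable) over the pool\n        if int(np.sum(y[labeled])) < threshold:  # few actionable found yet:\n            query = int(pool[np.argmin(np.abs(prob - 0.5))])  # uncertainty\n        else:                                    # enough actionable found:\n            query = int(pool[np.argmax(prob)])   # certainty sampling\n    labeled.append(query)  # the oracle labels the queried warning via y[query]\n    return labeled\n\end{verbatim}\nHere clf is any of the three classifiers above that exposes predicted probabilities, and X and y are assumed to be NumPy arrays of warning features and labels.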
\n\n\n\begin{table}[!htbp]\n\footnotesize\n\centering\n\caption{Summary of Projects Surveyed.}\n\tabcolsep=0.11cm\n\begin{tabular}{l|l|l|l}\n\hline\n\multicolumn{1}{c|}{\textbf{Project}} & \multicolumn{1}{c|}{\textbf{Period}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Revision-\\ Interval\end{tabular}}} & \multicolumn{1}{c}{\textbf{Domain}} \\ \hline\n\begin{tabular}[c]{@{}l@{}}Lucence\\-solr\end{tabular} & \begin{tabular}[c]{@{}l@{}}01\/2013-\\ 01\/2014\end{tabular} & 3 months & Search engine \\ \hline\nTomcat & \begin{tabular}[c]{@{}l@{}}01\/2013-\\ 01\/2014\end{tabular} & 3 months & Server \\ \hline\nDerby & \begin{tabular}[c]{@{}l@{}}01\/2013-\\ 01\/2014\end{tabular} & 3 months & Database \\ \hline\nPhoenix & \begin{tabular}[c]{@{}l@{}}01\/2013-\\ 01\/2014\end{tabular} & 3 months & Driver \\ \hline\nCassandra & \begin{tabular}[c]{@{}l@{}}01\/2013-\\ 01\/2014\end{tabular} & 3 months & Big data management \\ \hline\nJmeter & \begin{tabular}[c]{@{}l@{}}01\/2012-\\ 01\/2014\end{tabular} & 6 months & Performance management \\ \hline\nAnt & \begin{tabular}[c]{@{}l@{}}01\/2012-\\ 01\/2014\end{tabular} & 6 months & Build management \\ \hline\n\begin{tabular}[c]{@{}l@{}}Commons\\ .lang\end{tabular} & \begin{tabular}[c]{@{}l@{}}01\/2012-\\ 01\/2014\end{tabular} & 6 months & Java utility \\ \hline\nMaven & \begin{tabular}[c]{@{}l@{}}01\/2012-\\ 01\/2014\end{tabular} & 6 months & Project management \\ \hline\n\end{tabular}\n\label{table:project}\n\end{table}\n\n\section{Experiment}\n\label{sec:experiment}\n\n\subsection{Static Warning Dataset}\n\nThe nine datasets explored in this work are collected from previous research.\nWang et al.~\citep{wang2018there} performed a systematic literature review to gather all publicly available features (116 in total) for SA analysis. \nFor this research, all the values of this collected feature set were extracted from warnings reported by the static analysis (SA) tool FindBugs on 60 successive revisions taken from the revision history of 12 projects. By collecting performance statistics from three supervised learning classifiers on the 12 datasets, a golden feature set (23 features) was found via a greedy backward elimination algorithm. We utilize this best feature combination as the warning characteristics in our research.\n\nOn closer inspection of these datasets, we found three projects with obvious data inconsistency issues (such as data features mismatching data labels).\nHence, our study only explored the remaining nine projects.\n\nTable \ref{table:project} summarizes the projects surveyed in our paper. For each project, 5 versions are collected from the starting revision time at the given revision interval. We train the model on version 4 and test on version 5.\n \nPrevious research~\citep{wang2018there} collected 116 static warning features with a systematic literature review. These features fall into eight categories, and 95 features are left after eliminating unavailable ones, as shown in Table \ref{tabel:variables}. We employ the 23 golden features proposed by Wang et al. as the independent variables; they are highlighted in bold in Table \ref{tabel:variables}.\nIn our study, the dependent variable is actionable or unactionable.\nThese labels were generated via a method proposed by previous research~\citep{heckman2008establishing,hanam2014finding,liang2010automatic}. \nThat is, a specific warning is labeled as actionable if it is closed in a later revision after the revision interval in which the project was collected. A warning that still exists after the later revision interval is labeled as unactionable. The minority of warnings that are deleted after the later interval are removed and ignored in our study. \n\nTable \ref{table:numofSamples} shows the number of warnings and the distribution of each warning type (as reported by FindBugs) in the nine software projects. Note that our data is highly imbalanced, with the ratio of targeted samples ranging from 3 to 34 percent. \n\n\n\n\begin{table}[]\n\small\n\caption{Categories of Selected Features.~(8 categories are shown in the left column, and 95 features explored in Wang et al. 
are shown in the right column with 23 golden features in bold.)}\n\\tabcolsep=0.11cm\n\\begin{adjustbox}{max width=0.48\\textwidth}\n\\begin{tabular}{ll}\n\\hline\n\\textbf{Category} & \\textbf{Features} \\\\ \\hline\nWarning combination & \\begin{tabular}[c]{@{}l@{}}size content for warning type;\\\\ size context in method, file, package;\\\\ \\textbf{warning context in method, file,} package;\\\\ \\textbf{warning context for warning type};\\\\ fix, non-fix change removal rate;\\\\ \\textbf{defect likelihood for warning pattern};\\\\ variance of likelihood;\\\\ defect likelihood for warning type;\\\\ \\textbf{discretization of defect likelihood}; \\\\ \\textbf{average lifetime for warning type};\\end{tabular} \\\\ \\hline\nCode characteristics & \\begin{tabular}[c]{@{}l@{}}method, file, package size;\\\\ comment length;\\\\ \\textbf{comment-code ratio};\\\\ \\textbf{method, file depth};\\\\ method callers, callees;\\\\ \\textbf{methods in file}, package;\\\\ classes in file, \\textbf{package};\\\\ indentation;\\\\ complexity;\\end{tabular} \\\\ \\hline\nWarning characteristics & \\begin{tabular}[c]{@{}l@{}}\\textbf{warning pattern, type, priority,} rank;\\\\ warnings in method, file, \\textbf{package};\\end{tabular} \\\\ \\hline\nFile history & \\begin{tabular}[c]{@{}l@{}}latest file, package modification;\\\\ file, package staleness;\\\\ \\textbf{file age}; \\textbf{file creation};\\\\ deletion revision; \\textbf{developers};\\end{tabular} \\\\ \\hline\nCode analysis & \\begin{tabular}[c]{@{}l@{}}call name, class, \\textbf{parameter signature},\\\\ return type; \\\\ new type, new concrete type;\\\\operator;\\\\ field access class, field; \\\\catch;\\\\ field name, type, visibility, is static\/final;\\\\ \\textbf{method visibility}, return type,\\\\ is static\/ final\/ abstract\/ protected;\\\\ class visibility, \\\\ is abstract \/ interfact \/ array class;\\end{tabular} \\\\ \\hline\nCode history & \\begin{tabular}[c]{@{}l@{}}added, changed, deleted, growth, total, percentage \\\\ of LOC in file in the past 3 months;\\\\ \\textbf{added}, changed, deleted, growth, total, percentage \\\\ of LOC in file in the last 25 revisions;\\\\ \\textbf{added}, changed, deleted, growth, total, percentage \\\\ of LOC in package in the past 3 months;\\\\ added, changed, deleted, growth, total, percentage \\\\ of LOC in package in the last 25 revisions;\\end{tabular} \\\\ \\hline\nWarning history & \\begin{tabular}[c]{@{}l@{}}warning modifications;\\\\ warning open revision;\\\\ \\textbf{warning lifetime by revision}, by time;\\end{tabular} \\\\ \\hline\nFile characteristics & \\begin{tabular}[c]{@{}l@{}}file type;\\\\ file name; \\\\package name;\\end{tabular} \\\\ \\hline\n\\end{tabular}\n\\label{tabel:variables}\n\\end{adjustbox}\n\\end{table}\n\n\n\n\n\n\n\n\\begin{table}[]\n\\caption{Number of Samples on Version 5.}\n\\small\n\\begin{adjustbox}{max width=0.48\\textwidth}\n\n\\begin{tabular}{ll\n>{\\columncolor[HTML]{C0C0C0}}l l}\n\\hline\nProject & Open\/Unactionable & Close\/Actionable & Delete \\\\ \\hline\nant & 1061 & 54 & 0 \\\\\ncommons & 744 & 42 & 0 \\\\\ntomcat & 1115 & 326 & 0 \\\\\njmeter & 468 & 145 & 7 \\\\\ncass & 2245 & 356 & 64 \\\\\nphoenix & 2046 & 343 & 13 \\\\\nmvn & 790 & 28 & 44 \\\\\nlucence & 2257 & 1168 & 440 \\\\\nderby & 2386 & 121 & 0 \\\\ \\hline\n\\end{tabular}\n\\end{adjustbox}\n\\label{table:numofSamples}\n\\end{table}\n\n\n\n\n\\subsection{Machine Learning Algorithms}\n\nWe choose three machine learning algorithms, i.e., Support Vector Machine (SVM), Random Forest 
(RF), Decision Tree (DT). These classifiers are selected for their common use in the software engineering literature. All three algorithms are studied in Wang et al.'s paper~\citep{wang2018there}, where the best performance is obtained by Random Forest, followed by Decision Tree. SVM obtains the worst performance of the six algorithms reported by Wang et al.~\citep{wang2018there}, but due to its wide use in combination with active learning and its promising performance in many research areas like image retrieval~\citep{pasolli2013svm} and text classification~\citep{tong2001support}, especially for imbalanced problems~\citep{ertekin2007active}, we also include this algorithm in our work. We now give a brief description of these algorithms and their application in this work.\n\nAll our learners come from the Python toolkit\nScikit-Learn~\citep{pedregosa2011scikit}. For the most part, we use the default parameters from that toolkit. The exception is support vector machines, for which we followed the advice of a previous publication~\citep{krishna2016bigse}\nthat suggested using a linear, and not a radial, kernel.\n\n\textbf{Support Vector Machine.} Support Vector Machine (SVM)~\citep{cortes1995support} is a supervised learning model for binary classification and regression analysis. The optimization objective of SVM is to maximize the margin, which is defined as the distance between the separating hyperplane (i.e., the decision boundary) and the training samples (i.e., support vectors) that are closest to the hyperplane. The support vector machine is a powerful linear model; it can also tackle nonlinear problems through the kernel trick, which introduces multiple hyperparameters that can be tuned to make good predictions.\n\n\n\textbf{Random Forest.} Random forests~\citep{liaw2002classification} can be viewed as an ensemble of decision trees. The idea behind ensemble learning is to combine weak learners to build a more robust model or a strong learner, which has a better generalization error and is less susceptible to over-fitting. Such forests can be utilized for both classification and regression problems, and can also be employed to measure the relative importance of each feature on the prediction (by counting how often attributes are used in each tree of the forest).\n\n\textbf{Decision Tree.} Decision tree learners are known for their ability to decompose complex decision processes into small and simple subsets~\citep{safavian1991survey}. In this process an associated multistage decision tree is hierarchically developed. Several tree-based approaches, such as ID3, C4.5 and CART, are widely used in software engineering. 
Decision tree is computationally cheap to use, and is easy for developers or managers to interpret.\n\n\n\\begin{algorithm}[!htbp]\n\\scriptsize\n\\SetKwInOut{Input}{Input}\n\\SetKwInOut{Output}{Output}\n\\SetKwInOut{Parameter}{Parameter}\n\\SetKwRepeat{Do}{do}{while}\n\\Input{$V_{n-1}$, previous version for training\\\\\n$V_n$, current version for prediction\\\\ \n\\textit{ C}, common set of features shared by five releases}\n\\Output{\\textit{ Total Recall}, total recall for version n\\\\\n\\textit{ cost}, samples retrieved by percent}\n\\BlankLine\n\n\n\\BlankLine\n\\tcp{Keep reviewing until stopping rule satisfied}\n\\While{$|L_R| < 0.95|R|$}{\n \\tcp{Start training or not}\n \\eIf{$|L_R| \\geq 1$}{\n $CL\\leftarrow \\mathit{Train}(L)$\\;\n \\tcp{Query next}\n $x\\leftarrow \\mathit{Query(CL},\\neg L,L_R)$\\;\n }{\n \\tcp{Random Sampling}\n $x\\leftarrow \\mathit{Random}(\\neg L)$\\;\n }\n \\tcp{Simulate review}\n $L_R,L\\leftarrow \\mathit{Include}(x,R,L_R,L)$\\;\n $\\neg L\\leftarrow E \\setminus L$\\;\n}\n\\Return{$L_R$}\\;\n\\BlankLine\n\\Fn{Train($V_{n-1}$)}{\n \\BlankLine\n \\tcp{Classifier: Linear-SVM,decision tree, random forest}\n \\BlankLine\n clf$\\leftarrow \\mathit{Classifier}$\\;\n \n \n \n \n $ \\mathit{training}_x, \\mathit{training}_y \\leftarrow V_{n-1} $\\,\n \n \\BlankLine\n clf$\\leftarrow \\mathit{clf.fit(training}_x, \\mathit{training}_y)$\\,\n \n \\BlankLine\n \\Return{clf}\\;\n}\n\\BlankLine\n\\Fn{PredictProb($V_n$,$ \\mathit{clf}$)}{\n \\BlankLine\n \\tcp{predict Probability}\n \\BlankLine\n $ \\mathit{pos}_{\\mathit{at}} \\leftarrow \\mathit{list(clf.classes).index(\"yes\")}$\\,\n \n $\\mathit{testset}_x,\\mathit{testset}_y \\leftarrow V_n $\\,\n \n $ \\mathit{prob} \\leftarrow \\mathit{clf.PredictProb}(\\mathit{testset}_x)[:, \\mathit{pos}_{\\mathit{at}}]$\\,\n \n \\BlankLine\n \\Return{prob, $\\mathit{testset}_y$}\\;\n}\n\\BlankLine\n\\Fn{Retrieve(prob, $\\mathit{testset}_y $)}{\n \\BlankLine\n \\tcp{retrieve by descending-sorted probability }\n \\BlankLine\n $ \\mathit{sum} = 0$\\,\n \n $ \\mathit{order} \\leftarrow \\mathit{np}.\\mathit{argsort(prob)}[::-1][:]$\\,\n \n $ \\mathit{pos}_{\\mathit{all}} \\leftarrow \\mathit{number-of-positive- samples}$ \\,\n \n $ \\mathit{num}_{\\mathit{all}} \\leftarrow\n \\mathit{length-of-testset}_y $\\,\n \n \\BlankLine\n \\While{$ i \\in \\mathit{order} $}{\n \\tcp{Sort label by descending order}\n $ \\mathit{label}_{\\mathit{real}} \\leftarrow \\mathit{testset}_\\mathit{y[i]}$\\,\n \n $ \\mathit{sorted}_{\\mathit{label}}.\\mathit{append}(\\mathit{label}_{\\mathit{real}}) $\\,\n \n \\BlankLine\n \n \\tcp{Retrieve}\n \\While{$\\mathit{label} \\in \\mathit{sorted}_{\\mathit{label}} $}{\n \\eIf{$\\mathit{label} == \"\\mathit{yes}\"$}{sum $ +=1$\\,\n \n $\\mathit{pos}_{\\mathit{get}} \\leftarrow \\mathit{sum}$\\,}{continue}\n }\n $ \\mathit{total}_{\\mathit{recall}}.\\mathit{append(pos}_{\\mathit{get}} \/ \\mathit{pos}_{\\mathit{all}})$\\,\n \n cost.append(len($\\mathit{sorted}_{\\mathit{label}}$) \/ $\\mathit{num}_{\\mathit{all}}$)\n \n }\n \n \\Return{$\\mathit{total}_{\\mathit{recall}}$, cost}\\;\n}\n\\caption{Pseudo Code for Supervised Learning.}\\label{alg:alg2}\n\\end{algorithm}\n\n\n\\section{Experiments}\n\\label{sec:evaluation}\n\nIn this section, we answer the four research questions formulated in Section \\ref{sec:intro}.\n\n\\begin{RQ}\n{\\bf RQ1.} What is the baseline rate for bad static warnings?\n\\end{RQ}\n\n\\subsection{\nResearch method}\n\n\nStatic warning tools like FindBugs, Jlint and PMD are widely 
used in static warning analysis. Previous research has shown that FindBugs is more reliable than other SA tools owing to its effective discrimination between true and false positives~\citep{wang2018there, rahman2014comparing}.\nFindBugs is also known as a cost-efficient SA tool that detects warnings by combining line-level, method-level and class-level granularity, and thus reports far fewer warnings while covering obviously more lines~\citep{rahman2014comparing, panichella2015would}.\nDue to all the merits mentioned above, FindBugs has gained widespread popularity among individual users and technology-intensive companies, like Google\footnote{In 2009, Google held a global fixit for UMD's FindBugs tool aimed at gathering feedback for the 4,000 highest-confidence warnings reported by FindBugs. FindBugs has been downloaded more than a million times so far.}.\n\nAs a baseline result, we used the default priority ranking reported by FindBugs. FindBugs generates warnings and classifies them into seven categories of patterns~\citep{shen2011efindbugs}, in which warnings with the same priority are listed in random order and have the same severity to be fixed; a higher priority denotes that the warning report is more likely to be actionable, as suggested by FindBugs. This random ranking strategy provides a reasonable probabilistic bound on the time for software developers to find bugs and simulates the scenario in which no information is available to prioritize warning reports~\citep{heckman2011systematic,kremenek2004correlation}.\n\n\n\subsection{Research results}\n\nAs shown in Figure \ref{fig:results}, the dark blue dashed line denotes the learning curve of random selection generated from FindBugs reports. The curve grows diagonally, indicating that an end-user without any historical warning information or auxiliary tool has to inspect 2507 warnings to identify only 121 actionable ones in the Derby dataset.\n\n\begin{RQ}\n{\bf RQ2.} \nWhat is the previous state-of-the-art method to tackle the prevalence of actionable warnings in SA tools?\n\end{RQ}\n\n\n\begin{table*}[]\n\small\n\caption{AUC \% on 9 projects for 10 runs. Our results are better than prior results (shown in blue) since they used default parameters in Weka while we adjusted (e.g.) 
the SVM kernel (as well as a more recent implementation of these tools).}\n\\tabcolsep=0.11cm\n\\begin{adjustbox}{max width=1\\textwidth}\n\\begin{tabular}{@{}lllllllllllll@{}}\n\\toprule\n\\multicolumn{1}{c}{} & \\multicolumn{2}{c}{Active+SVM} & \\multicolumn{2}{c}{Supervised\\_SVM} & \\multicolumn{2}{c}{Active+RF} & \\multicolumn{2}{c}{Supervised\\_RF} & \\multicolumn{2}{c}{Active+DT} & \\multicolumn{2}{c}{Supervised\\_DT} \\\\ \\cmidrule(l){2-13} \n\\multicolumn{1}{c}{\\multirow{-2}{*}{\\textbf{Project}}} & \\textbf{Median} & IQR & \\textbf{\\begin{tabular}[c]{@{}l@{}}Median\\\\ (IQR)\\end{tabular}} & \\begin{tabular}[c]{@{}l@{}}Median of\\\\ Prior work\\end{tabular} & \\textbf{Median} & IQR & \\textbf{\\begin{tabular}[c]{@{}l@{}}Median\\\\ (IQR)\\end{tabular}} & \\begin{tabular}[c]{@{}l@{}}Median of\\\\ Prior work\\end{tabular} & \\textbf{Median} & IQR & \\textbf{\\begin{tabular}[c]{@{}l@{}}Median\\\\ (IQR)\\end{tabular}} & \\begin{tabular}[c]{@{}l@{}}Median of\\\\ Prior work\\end{tabular} \\\\ \\cmidrule(r){1-1}\nDerby & \\cellcolor[HTML]{FFCCC9}98 & \\cellcolor[HTML]{FFCCC9}1 & \\cellcolor[HTML]{FFCCC9}97(2) & \\cellcolor[HTML]{CBCEFB}50 & \\cellcolor[HTML]{FFCCC9}{\\color[HTML]{333333} 96} & \\cellcolor[HTML]{FFCCC9}{\\color[HTML]{333333} 7} & \\cellcolor[HTML]{FFCCC9}{\\color[HTML]{333333} 97(4)} & \\cellcolor[HTML]{CBCEFB}{\\color[HTML]{333333} 43} & \\cellcolor[HTML]{FFCCC9}93 & \\cellcolor[HTML]{FFCCC9}2 & \\cellcolor[HTML]{FFCCC9}94(4) & \\cellcolor[HTML]{CBCEFB}44 \\\\\nMvn & \\cellcolor[HTML]{FFCCC9}94 & \\cellcolor[HTML]{FFCCC9}3 & \\cellcolor[HTML]{FFCCC9}96(7) & \\cellcolor[HTML]{CBCEFB}50 & \\cellcolor[HTML]{FFCCC9}93 & \\cellcolor[HTML]{FFCCC9}2 & \\cellcolor[HTML]{FFCCC9}97(3) & \\cellcolor[HTML]{CBCEFB}45 & 67 & 3 & 91(2) & \\cellcolor[HTML]{CBCEFB}45 \\\\\nLucence & \\cellcolor[HTML]{FFCCC9}95 & \\cellcolor[HTML]{FFCCC9}1 & \\cellcolor[HTML]{FFCCC9}97(3) & \\cellcolor[HTML]{CBCEFB}50 & 85 & 9 & 99(2) & \\cellcolor[HTML]{CBCEFB}98 & \\cellcolor[HTML]{FFCCC9}94 & \\cellcolor[HTML]{FFCCC9}2 & \\cellcolor[HTML]{FFCCC9}93(4) & \\cellcolor[HTML]{CBCEFB}98 \\\\\nPhoenix & \\cellcolor[HTML]{FFCCC9}97 & \\cellcolor[HTML]{FFCCC9}2 & \\cellcolor[HTML]{FFCCC9}97(3) & \\cellcolor[HTML]{CBCEFB}62 & 90 & 7 & 97(3) & \\cellcolor[HTML]{CBCEFB}71 & \\cellcolor[HTML]{FFCCC9}90 & \\cellcolor[HTML]{FFCCC9}2 & \\cellcolor[HTML]{FFCCC9}91(7) & \\cellcolor[HTML]{CBCEFB}70 \\\\\nCass & \\cellcolor[HTML]{FFCCC9}96 & \\cellcolor[HTML]{FFCCC9}5 & \\cellcolor[HTML]{FFCCC9}99(3) & \\cellcolor[HTML]{CBCEFB}67 & \\cellcolor[HTML]{FFCCC9}96 & \\cellcolor[HTML]{FFCCC9}4 & \\cellcolor[HTML]{FFCCC9}98(5) & \\cellcolor[HTML]{CBCEFB}70 & \\cellcolor[HTML]{FFCCC9}90 & \\cellcolor[HTML]{FFCCC9}1 & \\cellcolor[HTML]{FFCCC9}94(4) & \\cellcolor[HTML]{CBCEFB}69 \\\\\nJmeter & \\cellcolor[HTML]{FFCCC9}94 & \\cellcolor[HTML]{FFCCC9}1 & \\cellcolor[HTML]{FFCCC9}95(2) & \\cellcolor[HTML]{CBCEFB}50 & 90 & 4 & 97(2) & \\cellcolor[HTML]{CBCEFB}86 & \\cellcolor[HTML]{FFCCC9}86 & \\cellcolor[HTML]{FFCCC9}2 & \\cellcolor[HTML]{FFCCC9}91(12) & \\cellcolor[HTML]{CBCEFB}82 \\\\\nTomcat & \\cellcolor[HTML]{FFCCC9}98 & \\cellcolor[HTML]{FFCCC9}1 & \\cellcolor[HTML]{FFCCC9}97(3) & \\cellcolor[HTML]{CBCEFB}50 & \\cellcolor[HTML]{FFCCC9}92 & \\cellcolor[HTML]{FFCCC9}5 & \\cellcolor[HTML]{FFCCC9}96(2) & \\cellcolor[HTML]{CBCEFB}80 & \\cellcolor[HTML]{FFCCC9}94 & \\cellcolor[HTML]{FFCCC9}2 & \\cellcolor[HTML]{FFCCC9}92(6) & \\cellcolor[HTML]{CBCEFB}64 \\\\\nAnt & \\cellcolor[HTML]{FFCCC9}95 & \\cellcolor[HTML]{FFCCC9}2 & 
\\cellcolor[HTML]{FFCCC9}98(2) & \\cellcolor[HTML]{CBCEFB}50 & \\cellcolor[HTML]{FFCCC9}94 & \\cellcolor[HTML]{FFCCC9}1 & \\cellcolor[HTML]{FFCCC9}98(3) & \\cellcolor[HTML]{CBCEFB}44 & 84 & 3 & 94(7) & \\cellcolor[HTML]{CBCEFB}44 \\\\\nCommons & 91 & 3 & 98(3) & \\cellcolor[HTML]{CBCEFB}50 & \\cellcolor[HTML]{FFCCC9}93 & \\cellcolor[HTML]{FFCCC9}1 & \\cellcolor[HTML]{FFCCC9}92(2) & \\cellcolor[HTML]{CBCEFB}57 & \\cellcolor[HTML]{FFCCC9}80 & \\cellcolor[HTML]{FFCCC9}8 & \\cellcolor[HTML]{FFCCC9}85(14) & \\cellcolor[HTML]{CBCEFB}56 \\\\ \\bottomrule\n\\end{tabular}\n\\label{table:AUC}\n\\end{adjustbox}\n\\end{table*}\n\n\n\\subsection{Research method}\n\nWang et al.~\\citep{wang2018there} implements a Java tool to extract the value of 116 total features collected from exhaustive systematic literature review and employs the machine learning utility Weka\\footnote{https:\/\/www.cs.waikato.ac.nz\/~ml\/weka\/} to build classification models. An optimal SA feature set with 23 features is identified as the golden features by obtaining the best AUC values evaluated with 6 machine learning classifiers.\nWe reproduce the experiments with three most outperforming supervised learning models in the previous research study, e.g., weighted linear SVM, random forest and decision tree with default parameters in Python3.7.\nThe detailed process to replicate the baseline is demonstrated in Algorithm~\\ref{alg:alg2}.\n\n\n\nThe specific process is as follows:\nFor each project, a supervised model (either weighted SVM, Random Forest or Decision Tree) is built by training on Version 4. After the training process, we test on Version 5 for the same project and get a list of probability for each bug reported by FindBugs to be actionable. Sort this list of probability from most likely to be real actionable to least likely and retrieve these warnings in a descending order to report the \\textit{total recall}, \\textit{cost} and \\textit{AUC} as evaluation metrics.\n\n\n\n\\subsection{Research results}\n\nAs shown in Table~\\ref{table:AUC}, the median and IQR of AUC scores of ten runs on nine projects are reported in our paper. \\textit{Median} and \\textit{IQR} are commonly used robust measures of a set of observations. \\textit{IQR}~(the interquartile range) is a measure of statistical dispersion. It evaluates the variability of distribution by dividing a data set into quartiles and reflecting the difference between 75th and 25th percentiles.\n\n\nFor three supervised learning methods explored, Linear weighted Support Vector Machine and Random Forest both outperform Decision Tree. For incremental active learning algorithms, the best combination is \\textit{Active Learning + Support Vector Machine}, followed by \\textit{Active Learning + Random Forest} and \\textit{Active Learning + Decision Tree}. \n\nIt's observed that incremental active learning can obtain high AUC, no worse than supervised learning on most of datasets. The pink shadow highlights the median results of active learning methods which are better or no less 0.05 than the median AUC of the state-of-the-art methods.\n\nThe column \"Prior Work\" shows results reported in Wang et al.'s prior research~\\citep{wang2018there}.\nNote that our AUC scores for supervised models replicated with Python3.7 are higher than that prior work implemented by Weka. This difference is explained by two factors.\n\\begin{itemize}\n\\item\nWe found that better results could be obtained by adjusting some of the learner parameters; e.g. 
we use a linear (not radial) kernel for our SVM.\n\\item\nThe implementation tools employed by our study and previous work are different. Prior work used a Java implementation of these tools (in Weka) while our replication utilizes a more recent Python toolkit (Scikit-Learn) that is being used and updated by a larger and more developed community.\n\\end{itemize}\n\n\n\\begin{RQ}\n{\\bf RQ3.} Does incremental active learning reduce the cost to identify actionable Static Warnings?\n\\end{RQ}\n\nThe purpose of this research question is to compare incremental active learning with random selection and traditional supervised learning models. \n\n\n\\subsection{Research method\n}\n\nConsidering a real-world scenario when a software project in different stages of the life cycle,\nRQ3 is answered in two parts: We first contrast incremental active learning, denoted as solid lines in Figure~\\ref{fig:results} with random ranking (default ranking reported from FindBugs, denoted as dark blue dashed line in Figure~\\ref{fig:results}). Then, we compare active learning results with supervised learning (denoted as purple, lighted blue and red dashed lines in Figure~\\ref{fig:results}).\n\n\n\n\n\\subsection{Research results}\n\nResults of supervised learning methods are denoted as light blue, purple and red dashed lines. As revealed in Figure~\\ref{fig:results}, Random Forest outperforms the other classifiers, followed by Linear SVM and Decision Tree.\n\nFigure \\ref{fig:results} provides an overall view of the experiment results to address Research Question 3. These nine subplots are the results of a ten-time repeated experiment on fourth and fifth versions of nine projects and we only report the median values here. The latest version 5 is selected to construct incremental active learning, while for the supervised learning model, we choose the two latest versions, learning patterns from version fourth for model construction and testing on version fifth for evaluation to make the experimental results comparable.\n\nFigure \\ref{fig:boxplots} summarizes the ratio of real actionable warnings in version 5 of each project and the corresponding median of cost when applying incremental active learning to identify all these actionable warnings. \n\nAs illustrated in Figure \\ref{fig:results}, incremental active learning outperforms random selection, which simulates real-time cost bound when an end-user recurs to warning reports prioritized by FindBugs.\nWhile, the learning curve of incremental active learning without historical version is almost as good as supervised learning in most of nine projects based on version history. 
Also, the test results on nine datasets suggest that \\textit{Linear SVM $+$ incremental active learning} is the best combination of all active learning schemes, and \\textit{Random Forest} is the winner in supervised learning methods.\n\nOverall, the above observations suggest that applying an incremental active learning model in static warning identification can help to retrieve actionable warnings in higher priority and reduce the effort to eliminate false alarms for software projects without adequate version history.\n\n\n\\begin{figure*}[!htbp]\n \\centering\n \\subfigure[commons]{\\includegraphics[width=0.32\\textwidth, trim = {0.1cm 0.1cm 2.8cm 0.5cm}, clip]{commons8.pdf}} \n \\subfigure[tomcat]{\\includegraphics[width=0.32\\textwidth, trim = {0.1cm 0.1cm 2.8cm 0.5cm}, clip]{tomcat8.pdf}}\n \\subfigure[jmeter]{\\includegraphics[width=0.32\\textwidth, trim = {0.1cm 0.1cm 2.8cm 0.5cm}, clip]{jmeter8.pdf}}\n \\subfigure[cass]{\\includegraphics[width=0.32\\textwidth, trim = {0.1cm 0.1cm 2.8cm 0.5cm}, clip]{cass8.pdf}}\n \\subfigure[derby]{\\includegraphics[width=0.32\\textwidth, trim = {0.1cm 0.1cm 2.8cm 0.5cm}, clip]{derby8.pdf}}\n \\subfigure[phoenix]{\\includegraphics[width=0.32\\textwidth, trim = {0.1cm 0.1cm 2.8cm 0.5cm}, clip]{phoenix8.pdf}}\n \\subfigure[lucence]{\\includegraphics[width=0.32\\textwidth, trim = {0.1cm 0.1cm 2.8cm 0.5cm}, clip]{lucence8.pdf}}\n \\subfigure[mvn]{\\includegraphics[width=0.32\\textwidth, trim = {0.1cm 0.1cm 2.8cm 0.5cm}, clip]{mvn8.pdf}}\n \\subfigure[ant]{\\includegraphics[width=0.32\\textwidth, trim = {0.1cm 0.1cm 2.8cm 0.5cm}, clip]{ant8.pdf}}\n \\caption{Test Results of incremental active learning, supervised learning and randomly selection.}\n \\label{fig:results}\n\\end{figure*}\n\n\n\n\n\\begin{figure*}[!htbp]\n \\centering\n \\subfigure{\\includegraphics[width=0.32\\textwidth]{commons.pdf}} \n \\subfigure{\\includegraphics[width=0.32\\textwidth]{tomcat.pdf}}\n \\subfigure{\\includegraphics[width=0.32\\textwidth]{jmeter.pdf}}\n \\subfigure{\\includegraphics[width=0.32\\textwidth]{cass.pdf}}\n \\subfigure{\\includegraphics[width=0.32\\textwidth]{derby.pdf}}\n \\subfigure{\\includegraphics[width=0.32\\textwidth]{phoenix.pdf}}\n \\subfigure{\\includegraphics[width=0.32\\textwidth]{lucence.pdf}}\n \\subfigure{\\includegraphics[width=0.32\\textwidth]{mvn.pdf}}\n \\subfigure{\\includegraphics[width=0.32\\textwidth]{ant.pdf}}\n \\caption{Cost Results at different thresholds for Incremental Active Learning. }\n \\label{fig:boxplots}\n\\end{figure*}\n\n\n\n\n\n\n\n\n\\begin{RQ}\n{\\bf RQ4.} How many samples should be retrieved to identify all the actionable Static Warnings?\n\\end{RQ}\n\nHow many samples to be retrieved is a critical problem when implementing an active learning model in the scenario of static warning identification. Stopping too early or too late will incur the issue of missing important actionable warnings or wasting unnecessary running time and CPU resources. \n\nIn the following part, we introduce the research method and analysis of the experimental results to answer Research Question 4.\n\n\\subsection{Research method}\n\n\nFigure \\ref{fig:boxplots} employs the box-plot to describe the costs required or percentage of samples retrieved by three classifiers, Linear weighted SVM, Random Forest and Decision Tree combined with our incremental active learning algorithm. 
\nThe horizontal coordinate of the box charts represents the recall threshold, a mechanism that stops retrieving new potentially actionable warnings once the proportion of related samples found reaches the given threshold. The vertical axis shows the corresponding effort required to obtain the given recall, measured by the proportion of warnings retrieved.\n\n\subsection{Research results}\n\nBased on the results shown in Figure \ref{fig:boxplots}, it can be observed that the effort required grows gently and slowly as the threshold of relevant warnings visited increases from 70 \% to 90 \%.\nHowever, to reach the 100 \% threshold, the effort needed is nearly or more than twice the cost at the 90 \% threshold. An intuitive suggestion obtained from Figure \ref{fig:boxplots} is to learn from 20 \% or 30 \% of the warnings for each of these nine projects, in which case the active learning models can identify over 90 \% of the actionable warnings.\n\nHowever, there is an exception. Results for lucence reveal that our model has to inspect more than 40 \% of the data to identify 90 \% of the actionable warnings. Revisiting Table \ref{table:numofSamples}, it indicates that most of our projects have data imbalance issues (the ratio of the target class is less than 20 \% for derby, mvn, phoenix, cass, commons and ant, and slightly over 20 \% for jmeter and tomcat) while the ratio for lucence (about 35 \%) is relatively higher. Our study attempts to provide a solid guideline, but there is no general conclusion about the specific percentage of data that should be fed into the learner. It highly depends on the degree of data imbalance and on the trade-off between missing target samples and reducing costs, since the cost can only be reduced at the expense of a lower threshold, which means missing some real actionable warnings.\n\nIn summary,\nour model has proven to be an efficient methodology for the information-retrieval problem of SA identification on extremely unbalanced data sets. Moreover, it is also a good option for engineers and researchers who wish to apply an active learning model to general problems, because it has a lower building cost, a wider application range, and a higher efficiency compared with state-of-the-art supervised learning methods and random selection.\n\n\n\n\section{Discussion}\n\label{sec:threats}\n\subsection{Threats to validity}\n\nAs with any empirical study, biases can affect the final results. Therefore, conclusions drawn from this work must be considered with threats to validity in mind. In this section, we discuss the validity of our work.\n\n\textbf{Learner bias.}\nThis work applies three classifiers, weighted linear SVM, Random Forest and Decision Tree, which are the best setting according to previous research work~\citep{wang2018there}. However, this does not necessarily guarantee the best performance in other domains or on other static warning datasets. According to the No Free Lunch Theorems~\citep{wolpert1997no}, applying our framework to other areas would be needed before we can assert that our methods are also better in those domains.\n\n\n\textbf{Sampling bias.}\nOne of the most important threats to validity is sampling bias, since several sampling methods (random sampling, uncertainty sampling and certainty sampling) are used in combination. However, there are also many other sampling methods in the active learning area that we could utilize. 
Different sampling strategies and combinations may result in better performance; this is a potential research direction.\n\n\textbf{Ratio bias.}\nIn this paper, we propose an ideal proportion of warnings for our learner to retrieve on nine static warning datasets so as to effectively address the prevalence of false positives in warnings reported by SA tools. An obvious improvement is observed for this unbalanced problem, but it does not necessarily apply to balanced datasets. \n\n\textbf{Measurement bias.}\nTo evaluate the validity of the incremental active learning method proposed in this paper, we employ two measurement metrics: total recall and cost. Several prior research works have demonstrated the necessity and effectiveness of these measurements~\citep{8883076,yu2018finding,yu2019fast2}. Nevertheless, many studies are still based on classic and traditional metrics, e.g., the confusion matrix (also known as the error matrix)~\citep{landgrebe2008efficient}. There exist many popular terms and metrics derived from the confusion matrix, such as false positives, the F1 score, the G measure, and so on. We cannot explore and include all the options in one article. Also, even for the same research methodology, conclusions drawn from different evaluation metrics may differ. However, in this research scenario, it is more efficient to report recall and cost for an effort-aware model.\n\n\subsection{Future Work}\n\n\n\n\textbf{Estimation.} In real-world problems, labeled data may be scarce or expensive to obtain, while data without labels may be abundant. In this case, the query process of our incremental learning model cannot safely stop at a given target threshold without knowing the actual number of actionable warnings in the data set beforehand. Therefore, an estimator is required to guarantee that the algorithm stops at an appropriate stage: stopping too late will cause unnecessary cost from exploring unactionable warnings and increase false alarms, while stopping too early may miss potentially important true warnings.\n\n\textbf{Ensemble of classifiers.} Ensemble learning is a methodology for making decisions based on the inputs of multiple experts or classifiers~\citep{zhang2012ensemble}. It is a feasible and important scheme to reduce the variance of classifiers and improve the reliability and robustness of the decision system. The famous No Free Lunch Theorems proposed by Wolpert et al.~\citep{wolpert1997no} give us an intuitive motivation to resort to ensemble learners. This is a promising way to make the best of incremental active learning by making precise predictions and pinpointing real actionable warnings with a generalized decision system.
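\n\nAs a sketch of this direction (not something implemented or evaluated in this paper), the three learners studied here could, for example, be combined through a soft-voting ensemble in Scikit-learn:\n\begin{verbatim}\nfrom sklearn.ensemble import RandomForestClassifier, VotingClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.tree import DecisionTreeClassifier\n\n# Hypothetical soft-voting ensemble of the three classifiers used above;\n# soft voting averages the predicted probabilities of the members.\nensemble = VotingClassifier(\n    estimators=[('svm', SVC(kernel='linear', class_weight='balanced',\n                            probability=True)),\n                ('rf', RandomForestClassifier()),\n                ('dt', DecisionTreeClassifier())],\n    voting='soft')\n\end{verbatim}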
\n\n\section{Conclusion}\n\label{sec:conclusion}\n\nPrevious research shows that about 35\% to 91\% of warnings reported as bugs by\nstatic analysis tools are actually unactionable (i.e., warnings that would not be acted on by developers because they are falsely suggested as bugs). Therefore, to make such systems usable for programmers, some mechanism is required to reduce those false alarms.\n\nArnold et al.~\citep{arnold2009security} warn that knowledge about what is an ignorable static code warning may not transfer from project to project. Here, they advise that methods for managing static code warnings should be tuned to different software projects.\nWhile we agree with that advice, it does create a knowledge acquisition bottleneck problem, since acquiring that knowledge can be a time-consuming and tedious task.\n \nThis paper explored methods for acquiring knowledge of which static code warnings can be ignored.\nUsing a human-in-the-loop active learner,\nwe conducted an empirical study with 9 software projects and 3 machine learning classifiers to verify how the performance of current SA tools could be improved by an efficient incremental active learning method. We found that about 90 \% of actionable static warnings can be identified when inspecting only about 20 \% to 30 \% of the warning reports, without using historical version information.\nOur study attempts to bridge the research gap between supervised learning and effort-aware active learning models through an in-depth analysis of how to reduce the cost of static warning identification.\n\nOur method significantly decreases the cost of inspecting falsely reported warnings generated by static code analysis tools for software engineers (especially in the early stages of a software project's life cycle) and provides a meaningful guideline to improve the performance of current SA tools. Acceptance and adoption of future static analysis tools can be enhanced by combining them with SA feature extraction and self-adaptive incremental active learning.\n\n\section*{Acknowledgements}\nThis work was partially funded\nby NSF grant \#1908762.\n\n\n\n\bibliographystyle{model2-names}\n