diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzprlq" "b/data_all_eng_slimpj/shuffled/split2/finalzzprlq" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzprlq" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThe Euler characteristic, defined as the alternating sum of the Betti numbers, is a key invariant of topological spaces of finite type (such as cell complexes built out of a finite number of cells). One can define an invariant $\\widetilde \\chi(G)$ for a group $G$ by substituting group cohomology for singular cohomology, but unless $G$ has a finite-type $K(G,1)$-space this invariant lacks many desirable features of topological Euler characteristics. This is unfortunate because many of the most interesting groups have torsion, and groups with torsion never have finite-type $K(G,1)$-spaces. A solution was proposed by C.T.C.\\ Wall, who observed that if $G$ has a torsion-free subgroup $H$ of finite index that {\\it does} have a finite-type classifying space then the rational number $\\chi(H)\/[G:H]$ is an invariant of $G$ \\cite{Wall61}. In particular, this number does not depend on the choice of $H$. Wall called it the {\\it rational Euler characteristic} of $G$, denoted $\\chi(G)$. This rational Euler characteristic is better behaved than $\\widetilde \\chi(G)$; for example if $1\\to A\\to B\\to C\\to 1$ is a short exact sequence of groups then $\\chi(B)=\\chi(A)\\chi(C)$, assuming $\\chi(A), \\chi(B) $ and $\\chi(C)$ are all defined. \n\nIt turns out that rational Euler characteristics of arithmetic groups can often be expressed in terms of zeta functions; this ultimately depends on the Gauss-Bonnet theorem (see \\cite{Serre71, Serre79} for details and a guide to the literature). Remarkably, Harer and Zagier showed that the rational Euler characteristics of mapping class groups are also given by zeta functions, e.g.\\ the rational Euler characteristic of the mapping class group of a once-punctured surface of genus $g$ is equal to $\\zeta(1-2g)$ \\cite{HZ86}. This was later reproved by Penner \\cite{penner1986moduli} and by Kontsevich \\cite{Kon92}, each using asymptotic methods related to perturbation expansions from quantum field theory. \n\n We are concerned here with the rational Euler characteristic of the group $\\Out(F_n)$ of outer automorphisms of a finitely-generated free group. This group shares many features with both mapping class groups and arithmetic groups, though it belongs to neither class. In 1987 Smillie and Vogtmann found a generating function for $\\chi(\\Out(F_n))$ and computed its value for $n\\leq 11$ \\cite{SV87a}. From the results of these computations, they conjectured that $\\chi(\\Out(F_n))$ is always negative and the absolute value grows faster than exponentially. In 1989 Zagier simplified the generating function and computed $\\chi(\\Out(F_n))$ for $n\\leq 100$; this added strong evidence for these conjectures without providing a proof \\cite{Zagier}. The only general statements previously known about the value of $\\chi(\\Out(F_n))$ are that it is non-zero for even values of $n$, and certain information was established about the primes dividing the denominator \\cite{SV87a, SV87b}. 
In this article we show that $\chi(\Out(F_n))$ is negative for all $n$ and prove that its asymptotic growth rate is controlled by the classical gamma and log functions:

\begin{thmx}\label{thm:SVconj}%
 The rational Euler characteristic of $\Out(F_n)$ is strictly negative, $\chi\left( \Out(F_n) \right) < 0$, for all $n \geq 2$, and its magnitude grows more than exponentially,
 \begin{align*}
 \chi\left(\Out(F_n) \right) &\sim - \frac{1}{\sqrt{2 \pi}} \frac{\Gamma(n-\frac32)}{\log^2 n} \text{ as } n \to \infty.
 \end{align*}
\end{thmx}

The proof of Theorem~\ref{thm:SVconj} is based on the following theorem, in which we produce an asymptotic expansion, with respect to the asymptotic scale $\{(-1)^k\Gamma(n+\frac{1}{2}-k)\}_{k\geq0}$ in the limit $n\to \infty$, whose coefficients are closely related to $\chi(\Out(F_n))$.

\begin{thmx}\label{thm:asymptotic_expansion}
\begin{align*}
 \sqrt{2 \pi}e^{-n} n^n &\sim \sum_{ k\geq 0 } \Ch_k (-1)^k \Gamma( n + \frac12 - k ) \text{ as } n\rightarrow \infty,
\end{align*}
where $\Ch_k$ is the coefficient of $z^k$ in the formal power series $\exp\left( \sum_{n\geq 1} \chi\left( \Out (F_{n+1}) \right) z^n \right)$.
\end{thmx}

We then relate this expansion to a certain expansion of the Lambert $W$-function, a well-studied function with a long history \cite{corless1996lambertw}. Eventually, we are able to use results of Volkmer \cite{volkmer2008factorial} about the coefficients of this second expansion to prove Theorem~\ref{thm:SVconj}.
In Proposition~\ref{prop:efficient_chn} we also exploit the connection with the Lambert $W$-function to give an efficient recursive algorithm for computing $\chi(\Out(F_n))$.

In Section~\ref{sec:core} we show that there is a close relationship between $\chi(\Out(F_n))$ and the classical zeta function by considering the Connes-Kreimer Hopf algebra ${\mathbf{H}}$ of 1-particle-irreducible graphs, i.e.\ graphs with no separating edges. Briefly, the formula in \cite{SV87a} for $\chi(\Out(F_n))$ can be regarded as the integral of a certain character $\tau$ of ${\mathbf{H}}$ on the space spanned by admissible connected graphs with fundamental group $F_n$, with respect to the ``measure'' $\mu(\G)=1/|\Aut(\G)|$. Proposition~\ref{prop:bernoulli_graphs_sum} shows that integrating $\tau^{-1}$ (the inverse of $\tau$ in the group of characters) with respect to the same measure produces $\zeta(-n)/n=-\frac{B_{n+1}}{n(n+1)}$, where $B_n$ is the $n$-th Bernoulli number.

The asymptotic expansion in Theorem~\ref{thm:asymptotic_expansion} is strikingly reminiscent of the well-known Stirling asymptotic expansion of the gamma function in the asymptotic scale $\{\sqrt{2\pi} e^{-n} n^{n-\frac12-k} \}_{k\geq 0}$,
\begin{align*}
 \Gamma(n) &\sim 
 \sum_{k\geq 0} \widehat b_k \sqrt{2\pi} e^{-n} n^{n-\frac12-k} \text{ as } n \to \infty.
\end{align*}
The coefficients of this asymptotic expansion are related to the Bernoulli numbers as well: $\widehat b_k$ is the coefficient of $z^k$ in the formal power series $\exp\left( \sum_{n \geq 1} \frac{B_{n+1}}{n(n+1)} z^n \right)$.
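As a quick cross-check of this last claim, the following minimal SymPy sketch (an illustration only, independent of the arguments in this paper) computes $\widehat b_k$ for small $k$ from the Bernoulli numbers; the values agree with the classical Stirling series coefficients $1, \frac{1}{12}, \frac{1}{288}, -\frac{139}{51840}, \dots$
\begin{verbatim}
from sympy import bernoulli, exp, series, symbols, Rational

z = symbols('z')
N = 6  # truncation order

# S(z) = sum_{n >= 1} B_{n+1}/(n(n+1)) z^n, truncated at order N
S = sum(bernoulli(n + 1) * Rational(1, n * (n + 1)) * z**n
        for n in range(1, N))

# bhat_k = [z^k] exp(S(z))
bhat = series(exp(S), z, 0, N).removeO()
for k in range(N - 1):
    # prints 1, 1/12, 1/288, -139/51840, -571/2488320
    print(k, bhat.coeff(z, k))
\end{verbatim}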
We will explore this analogy in more detail in Section~\\ref{sec:asymptotic_expansions}.\nGiven this intriguing parallel between the numbers $\\chi(\\Out(F_n))$ and the Bernoulli numbers, which are very prominent objects in number theory, it would be interesting to look for a number theoretic meaning for the numbers $\\chi(\\Out(F_n))$ as well. The algorithm \ngiven in Proposition~\\ref{prop:efficient_chn} may be helpful for investigations in this direction. \n\nAs was pointed out in \\cite{SV87a}, non-vanishing of $\\chi(\\Out(F_n))$ implies that the kernel of the natural map from $\\Out(F_n)$ to $GL(n,\\mathbb{Z})$ does not have finitely-generated homology. Magnus proved in 1935 that this kernel is finitely generated and asked whether it is finitely presented, which would imply that the second homology is finitely generated \\cite{Magnus}. Bestvina, Bux and Margalit showed in 2007 that the homology in dimension $2n-4$ is not finitely generated \\cite{BBM}. However Magnus' question is still unanswered for $n>3$. \n\n\nTheorem~\\ref{thm:SVconj} implies that for large $n,$ torsion-free subgroups of finite index in $\\Out(F_n)$ have a huge amount of homology in odd dimensions. We would like to say the same is true for the whole group $\\Out(F_n)$. One way to prove this is to compare the asymptotic growth rate of $\\chi(\\Out(F_n))$ with that of the ``naive'' Euler characteristic $\\widetilde\\chi(\\Out(F_n))$. Brown \\cite{Brown82} showed that the difference between $\\widetilde \\chi$ and $\\chi$ can be expressed in terms of rational Euler characteristics of centralizers of finite-order elements of $\\Out(F_n)$. Harer and Zagier used this to compare the rational and naive Euler characteristics for surface mapping class groups, using the fact that centralizers of finite-order elements are basically mapping class groups of surfaces of lower complexity. Centralizers in $\\Out(F_n)$ are more difficult to understand, but preliminary results obtained by combining the methods of this paper with results of Krsti\\'{c} and Vogtmann \\cite{KrVo93} indicate that the ratio $\\widetilde\\chi(\\Out(F_n))\/ \\chi(\\Out(F_n))$ tends to a positive constant. Note that \nGalatius proved that the stable rational homology of $\\Out(F_n)$ vanishes \\cite{Galatius}, so this would show that there is a huge amount of {\\em unstable} homology in odd dimensions. This is completely mysterious, as only one odd-dimensional class has ever been found, by a very large computer calculation in rank $7$ \\cite{Bartholdi}, and this calculation gives no insight into where all of these odd-dimensional classes might come from. \n\n\n \n\n\nFinally we recall that, by the work of Kontsevich, the cohomology of $\\Out(F_n)$ with coefficients in a field of characteristic zero can be identified with the homology of the rank $n$ subcomplex of his Lie graph complex \\cite{kontsevich1993formal}. \nOur results therefore apply to the Euler characteristic of this graph complex as well. Kontsevich himself wrote down asymptotic formulas for the Euler characteristics of three of his graph complexes in \\cite{kontsevich1993formal}; see Chapter 5 of \\cite{gerlits2002} for a detailed derivation of these formulas. The connection with graph complexes is explained a little further in Section~\\ref{sec:kontsevich}. \nMore discussion of the relation of the current paper with ideas from topological quantum field theory---with further relations to Kontsevich's work---can be found in Section~\\ref{sec:tqft}. 
\n\n\n\n\n\n\n\\section*{Acknowledgements}\nWe thank Dirk Kreimer for support during this project. \nMB would like to thank Karen Yeats and Sam Yusim for discussions on the subject.\nMB was supported by the NWO Vidi grant 680-47-551.\nKV would like to thank Peter Smillie for discussions. KV was partially supported by the Royal Society Wolfson Research Merit Award WM130098 and by a Humboldt Research Award.\n\n\n\\newcommand\\vertex[1]{\\fill #1 circle (.15)}\n\\newcommand{{\\rm A}}{{\\rm A}}\n\\section{Graphs and rational Euler characteristics}\n\nIn this section we give variations on the results of \\cite{SV87a}. The idea is to use the action of $\\Out(F_n)$ and closely related groups on contractible spaces of finite graphs\nto deduce information about the homology of the groups, including the rational Euler characteristic. \n\n\\subsection{Combinatorial graphs}\n\nWe begin with a combinatorial definition of a graph and related terms.\n\n\\begin{definition} A {\\em graph} $\\G$ consists of a finite set $H(\\G)$ called {\\em half-edges} together with\n\\begin{itemize}\n\\item a partition of $H(\\G)$ into parts called {\\em vertices} and\n\\item an involution $\\iota_\\G\\colon H(\\G)\\to H(\\G)$.\n\\end{itemize}\nThe {\\em valence} $|v|$ of a vertex $v$ is the number of half-edges in $v$. A {\\em leaf} of $\\G$ is a fixed point of the involution $\\iota_\\G$ and an {\\em edge} is an orbit of size $2$. \nA {\\em graph isomorphism} $\\G\\to\\G'$ is a bijection $H(\\G)\\to H(\\G')$ that preserves the vertex partitions and commutes with the involutions. \n\\end{definition} \n\n\\begin{notation} Let $\\G$ be a graph.\n\\begin{itemize}\n\\item $\\Aut(\\G)$ is the group of isomorphisms $\\G\\overset{\\cong}{\\rightarrow} \\G$. \n\\item $L(\\G)$ is the set of leaves of $\\G$, and $s(\\G)=|L(\\G)|$.\n\\item $E(\\G)$ is the set of edges of $\\G$ and $e(\\G)=|E(\\G)|$.\n\\item $V(\\G)$ is the set of vertices of $\\G$ and $v(\\G)=|V(\\G)|$.\n\\end{itemize}\n\\end{notation} \n\nThe graph with one vertex, $n$ edges, $s$ leaves and $2n+s$ half-edges is called a {\\em thorned rose}, and will be denoted $R_{n,s}$. If $n=0$ we will also call $R_{0,s}$ a {\\em star graph} (see Figure~\\ref{fig:graphs}).\n\nWith the exception of Section~\\ref{sec:core}, we will only consider {\\em admissible} graphs, where a graph is admissible if all vertices have valence at least $3$. \n\n\\begin{figure}\n\\begin{center}\n \\begin{tikzpicture} [thick, scale=.25] \\fill (0,0) circle (.2); \\draw (0,0) to [out=135, in=180] (0,4.1); \\draw (0,0) to [out=45, in=0] (0,4.1); \\draw (0,0) to [out=45, in=110] (4,1); \\draw [thick] (0,0) to [out=-30, in=-70] (4,1); \\draw (0,0) to [out=135, in=70] (-4,1); \\draw (0,0) to [out=210, in=-110] (-4,1); \\foreach \\x in {-120, -100, -80, -60} { \\draw (0,0) to (\\x:3); } \\begin{scope} [xshift = 15cm]; \\fill (0,0) circle (.2); \\foreach \\x in {0,72,144,216,288} { \\draw (0,0) to (\\x:3); } \\end{scope} \\end{tikzpicture}\n\\end{center}\n\\caption{Thorned rose $R_{3,4}$ and star graph $R_{0,5}$}\\label{fig:graphs}\n\\end{figure}\n \n\\begin{definition} Let $\\G$ be a graph. A {\\em subgraph} of $\\G$ is a graph $\\gamma$ with $H(\\gamma)=H(\\G)$, $V(\\gamma)=V(\\G)$, and $E(\\gamma)\\subseteq E(\\G)$.\n\\end{definition}\nA graph $\\G$ always has itself as a subgraph. 
There is a unique ``trivial'' subgraph $\gamma_0$ with involution the identity, so $H(\gamma_0)=H(\G)$, $V(\gamma_0)=V(\G)$, $E(\gamma_0)=\emptyset$ and $L(\gamma_0)=H(\G).$

\subsection{Topological graphs} Every combinatorial graph $\G$ has a {\em topological realization} as a 1-dim\-en\-sional $CW$-complex. For each element of $V(\G)$ we have a $0$-cell called a {\em vertex} and for each element of $E(\G)$ we have a $1$-cell called an {\em edge}. For each element of $L(\G)$ we have both a $0$-cell, called a {\em leaf vertex}, and a $1$-cell connecting the leaf vertex to a (non-leaf) vertex.
By our definition each connected component of the topological realization must have at least one vertex.
Note that graphs may have multiple edges between a pair of vertices, and they may have loops at a vertex.
The thorned rose $R_{n,s}$ defined in the last section has $(s+1)$ $0$-cells and $(n+s)$ $1$-cells.

The valence of a point $x$ is the minimal number of components of a deleted neighborhood of $x$.
In an admissible graph the vertices are at least trivalent and the leaf vertices are univalent, so there are no bivalent $0$-cells.

A graph isomorphism is a cellular homeomorphism, up to isotopy. Since admissible graphs have no bivalent $0$-cells, any homeomorphism is a cellular homeomorphism.

Notice that, by our definition, the topological realization of a subgraph of $\G$ is not a subcomplex of the topological realization of $\G$. Rather, it can be described as a graph obtained from $\G$ by cutting some of its edges, thus forming pairs of leaves. To make the result a $CW$-complex we have to add $0$-cells (leaf vertices) to the ends of the cut edges. A subgraph can also be visualized as the closure of a sufficiently small neighborhood of a subcomplex.

In the remainder of this section we will work with the topological realization of a graph instead of using the combinatorial definition, so that we may freely use topological concepts such as connectivity, fundamental group and homotopy equivalence.

For $s\geq 0$ let
\begin{itemize}
\item $\GG$ denote the set of isomorphism classes of finite admissible graphs,
\item $\GG_s\subset \GG$ be the subset consisting of admissible graphs with exactly $s$ leaves,
\item $\GG^c \subset \GG$ and $\GG_s^c \subset \GG_s$ be the respective subsets of connected graphs.
\end{itemize}

\subsection{Groups}
\label{sec:groups}

For any $n$ and $s$ we define ${\rm A}_{n,s}$ to be the group of homotopy classes of homotopy equivalences of the thorned rose $R_{n,s}$ that fix the leaf vertices $\{b_1,\ldots,b_s\}$, i.e.\ ${\rm A}_{n,s}=\pi_0(HE(R_{n,s}, b_1,\ldots,b_s))$. The groups ${\rm A}_{n,s}$ generalize $\Out(F_n) \cong {\rm A}_{n,0}$ and $\Aut(F_n) \cong {\rm A}_{n,1}$. If $n=0$ then $R_{0,s}$ is a graph with no loops and at least $3$ leaves, since we insist that every graph have at least one vertex and that every vertex be at least trivalent. Thus ${\rm A}_{0,s}$ is defined only for $s\geq 3$, and in that case it is the trivial group.
If $n=1$ then $R_{1,s}$ is a loop with $s\\geq 1$ leaves, and there is a short exact sequence $$1\\to \\mathbb{Z}^{s-1}\\to {\\rm A}_{1,s}\\to \\mathbb{Z}\/2\\mathbb{Z}\\to 1.$$ \n If $n\\geq 2$ and $s\\geq 0$ there is a short exact sequence $$1\\to F_{n}^{s}\\to {\\rm A}_{n,s}\\to \\Out(F_{n})\\to 1.$$\n See \\cite{CHKV16} for background on the groups ${\\rm A}_{n,s}$.\nThese groups appear, for example, in the context of homology stability theorems \\cite{Hat95}, the bordification of Outer space and virtual duality \\cite{BF00, BSV18}, and assembly maps for homology \\cite{CHKV16}. \n \n\\subsection{Complexes of graphs and the rational Euler characteristic}\nIf a group $G$ is virtually torsion-free and acts properly and cocompactly on a contractible $CW$-complex $X$, then the rational Euler characteristic $\\chi(G)$ can be calculated using this action, by the formula $$\\chi(G)=\\sum_{\\sigma\\in \\mathcal C} \\frac{(-1)^{\\dim \\sigma}}{|\\text{Stab}(\\sigma)|},$$\nwhere $\\mathcal C$ is a set of representatives for the cells of $X$ mod $G$ (see, e.g.\\ \\cite{Bro82}, Proposition~(7.3)). \n \n \n For any $s\\geq 0$ the group ${\\rm A}_{n,s}$ is virtually torsion-free and acts properly and cocompactly on a contractible cube complex $K_{n,s}$. To describe $K_{n,s}$ it is convenient to label the leaves of a graph, so that two graphs $\\G$ and $\\G'$ are isomorphic if there is a graph isomorphism $\\G\\to\\G'$ that preserves leaf-labels; an isomorphism class is then called a {\\em leaf-labeled graph}. We use the notation $\\lG$, $\\lG_s$, $\\lG^c$ and $\\lG^c_s$ instead of $\\GG$, $\\GG_s$, $\\GG^c$ and $\\GG^c_s$ to denote the respective set of leaf-labeled graphs and $\\PAut(\\G)$ to denote the set of automorphisms of a graph that fix the leaves.\n \n \\begin{figure}\n \\begin{center}\n \\begin{tikzpicture} [scale=.55] \\draw (6.25,-1.75) to (6.25,3.5) to (11.75,3.5) to (11.75,-1.75) to (6.25,-1.75); \\begin{scope}[xshift=7.75cm, yshift=.25cm, scale=.4] \\coordinate (a) at (0,-.1); \\coordinate (b) at (3,4.5); \\coordinate (c) at (6,0); \\coordinate (one) at (-1.5,-.6); \\coordinate (two) at (7.5,-.6); \\coordinate (three) at (7.5,.6); \\draw (a) to (b) to (c) to (a); \\draw (a) to [out=90, in= 180 ] (b); \\draw[thick, red] (a) to (b) to (c); \\draw (a) to (one); \\node [left] (x) at (one) {$1$}; \\draw (c) to (two); \\node [below right] (x) at (two) {$3$}; \\draw (c) to (three); \\node [above right] (x) at (three) {$2$}; \\node (x) at (-.1,3.1) {$a$}; \\node (y) at (2.8,-.8) {$b$}; \\draw (0,2.4) to (.5,2.7) to (.75,2.3); \\draw (3,-.25) to (3.5,0) to (3,.25); \\vertex{(a)};\\vertex{(b)};\\vertex{(c)}; \\end{scope} \\begin{scope}[xshift=12.25cm, yshift=3.75cm,scale=.25] \\coordinate (a) at (0,0); \\coordinate (b) at (3,4.5); \\coordinate (c) at (6,0); \\draw (a) to (b) to (c) to (a); \\draw (a) to [out=90, in= 180 ] (b); \\draw (a) to (-1.5,-.6); \\draw (c) to (7.5,-.6);\\draw (c) to (7.5,.6); \\vertex{(a)};\\vertex{(b)};\\vertex{(c)}; \\end{scope} \\begin{scope}[xshift=12.5cm,yshift=.5cm, scale=.25] \\coordinate (a) at (0,0); \\coordinate (b) at (3,4.5); \\coordinate (c) at (6,0); \\draw (a) to (b) to (c) to (a); \\draw (a) to [out=90, in= 180 ] (b); \\draw[thick, red] (b) to (a); \\vertex{(a)};\\vertex{(b)};\\vertex{(c)}; \\draw (a) to (-1.5,-.6); \\draw (c) to (7.5,-.6); \\draw (c) to (7.5,.6); \\end{scope} \\begin{scope}[xshift=8cm, yshift=4cm,scale=.25] \\coordinate (a) at (0,0); \\coordinate (b) at (3,4.5); \\coordinate (c) at (6,0); \\draw (a) to (b) to (c) to (a); \\draw (a) to 
[out=90, in= 180 ] (b); \\draw[thick, red] (b) to (c); \\vertex{(a)};\\vertex{(b)};\\vertex{(c)}; \\draw (a) to (-1.5,-.6); \\draw (c) to (7.5,-.6); \\draw (c) to (7.5,.6); \\end{scope} \\begin{scope}[xshift=12cm, yshift=-2.75cm, scale=.25] \\coordinate (c) at (6,0); \\coordinate (n) at (1.5,2.25); \\coordinate(r) at (0,3); \\vertex{(c)};\\vertex{(n)}; \\draw (n) to[out=90, in=70] (r); \\draw (n) to [out=210, in=-110] (r); \\draw (c) to [out=190, in= -90] (n);\\draw (c) to [out=120, in= 20 ](n); \\draw (n) to (.5,.75);\\draw (c) to (7.5,-.6); \\draw (c) to (7.5,.6); \\end{scope} \\begin{scope}[xshift=8cm, yshift=-3cm, scale=.25] \\coordinate (c) at (6,0); \\coordinate (n) at (1.5,2.25); \\coordinate(r) at (0,3); \\draw (n) to[out=90, in=70] (r); \\draw (n) to [out=210, in=-110] (r); \\draw (c) to [out=190, in= -90] (n); \\draw [thick, red] (c) to [out=120, in= 20 ](n); \\vertex{(c)};\\vertex{(n)}; \\draw (n) to (.5,.75); \\draw (c) to (7.5,-.6); \\draw (c) to (7.5,.6); \\end{scope} \\begin{scope}[xshift=4.25cm, yshift=.5cm, scale=.25] \\coordinate (a) at (0,0);\\coordinate (m) at (4.5,2.25); \\vertex{(a)};\\vertex{(m)}; \\draw (a) to (m); \\draw[thick, red] (a) to (m); \\draw (a) to [out=80, in= 150] (m); \\draw (a) to [out=-15, in= -105 ](m); \\draw (a) to (-1.5,-.6); \\draw (m) to (6,3.5); \\draw (m) to (6.2,2); \\end{scope} \\begin{scope}[xshift=4.25cm, yshift=3.75cm, scale=.25] \\coordinate (a) at (0,0); \\coordinate (b) at (3,4.5); \\coordinate (c) at (6,0); \\coordinate (m) at (4.5,2.25); \\coordinate (n) at (1.5,2.25); \\vertex{(a)};\\vertex{(m)}; \\draw (a) to (m); \\draw (a) to [out=80, in= 150] (m); \\draw (a) to [out=-15, in= -105 ](m); \\draw (a) to (-1.5,-.6); \\draw (m) to (6,3.5); \\draw (m) to (6.2,2); \\end{scope} \\begin{scope}[xshift=5cm, yshift=-2.5cm, scale=.25] \\vertex{(0,0)}; \\draw (0,0) to [out=45, in=110] (4,1); \\draw (0,0) to [out=-30, in=-70] (4,1); \\draw (0,0) to [out=135, in=70] (-4,1); \\draw (0,0) to [out=210, in=-110] (-4,1); \\draw (0,0) to (0,-2); \\draw (0,0) to (-.5,2); \\draw (0,0) to (.5,2); \\end{scope} \\end{tikzpicture}\n \\caption{A $2$-dimensional cube $(\\G,\\varphi)$ in $K_{2,3}$. Here $\\varphi$ is the red subgraph, $\\pi_1(\\G)=F\\langle a, b\\rangle$ and the leaves are labeled $1,2,3$. The marking is indicated by arrows and labels on the edges in the complement of a maximal tree.}\\label{fig:cube}\n \\end{center}\n \\end{figure}\n\nThe cube complex $K_{n,s}$ has one cube for each equivalence class of triples $(\\G, \\varphi, g)$, where \n\\begin{itemize}\n \\item $\\G\\in \\lG^c_s$ is connected with $s$ labeled leaves and with $\\pi_1(\\G) \\cong F_n$,\n \\item $\\varphi$ is a \\textit{subforest} of $\\G,$ i.e.\\ a subgraph with no cycles, \n \\item $g\\colon R_{n,s}\\to \\G$ is a leaf-label-preserving homotopy equivalence, called a \\textit{marking} and\n \\item $(\\G,\\varphi,g)\\sim (\\G',\\varphi',g')$ if there is a leaf-label-preserving graph isomorphism $h\\colon\\G\\to\\G'$ sending $\\varphi$ to $\\varphi'$ such that $h\\circ g$ is homotopic to $g'$ through leaf-label-preserving homotopies.\n\\end{itemize}\nAn example of a cube in $K_{2,3}$ is depicted in Figure~\\ref{fig:cube}.\nContractibility of $K_{n,s}$ was proved for $s=0, n\\geq 2$ by Culler and Vogtmann \\cite{CV86} and in general by Hatcher \\cite{Hat95} (see also \\cite{HV96}). (For $n\\geq 2$, $K_{n,s}$ was originally described as a simplicial complex, but its simplices naturally group themselves into cubes, as was done, e.g.\\ in \\cite{HV98}.) 
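Before applying this formula to the groups ${\rm A}_{n,s}$, it may help to see it in a standard toy case (included here only for orientation). The infinite dihedral group $D_\infty \cong \mathbb{Z}/2\mathbb{Z} * \mathbb{Z}/2\mathbb{Z}$, generated by the reflections of $\mathbb{R}$ in $0$ and $\frac12$, acts properly and cocompactly on the real line with vertices at the points $\frac{k}{2}$, $k\in\mathbb{Z}$. There is one orbit of edges, with trivial stabilizer, and two orbits of vertices, each with stabilizer of order $2$, so
$$\chi(D_\infty)=\tfrac12+\tfrac12-1=0,$$
consistent with the fact that $D_\infty$ contains $\mathbb{Z}$, with $\chi(\mathbb{Z})=0$, as a subgroup of index $2$.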
\n\n Smillie and Vogtmann \\cite{SV87a} considered only the case $s=0,$ but their arguments apply verbatim for graphs with leaves. We define a function\n \\[\n\\tau(\\G)=\\sum_{\\varphi\\subset \\G} (-1)^{e(\\varphi)},\n\\] \nwhere the sum is over all subforests $\\varphi\\subset \\G,$ including the trivial subgraph, and $e(\\varphi)$ is the number of edges in $\\varphi$. \nFor instance, \n$\\tau(\n{\n\\begin{tikzpicture}[x=1ex,y=1ex,baseline={([yshift=-.6ex]current bounding box.center)}] \\coordinate (vm); \\coordinate [left=.7 of vm] (v0); \\coordinate [right=.7 of vm] (v1); \\draw (v0) circle(.7); \\draw (v1) circle(.7); \\filldraw (vm) circle (1pt); \\end{tikzpicture}%\n}\n) = 1$, \nas it has only the trivial subforest and\n$\\tau(\n{\n\\begin{tikzpicture}[x=1ex,y=1ex,baseline={([yshift=-.6ex]current bounding box.center)}] \\coordinate (vm); \\draw (vm) to (-.3,-1.25);\\draw (vm) to (.3,-1.25); \\coordinate [left=.7 of vm] (v0); \\coordinate [right=.7 of vm] (v1); \\draw (v0) circle(.7); \\draw (v1) circle(.7); \\filldraw (vm) circle (1pt); \\end{tikzpicture}%\n}\n) = 1$ for the same reason (recall that a leaf is not an edge).\nWe have $\\tau(\n{\n\\begin{tikzpicture}[x=1ex,y=1ex,baseline={([yshift=-.6ex]current bounding box.center)}] \\coordinate (v0); \\coordinate [right=1.5 of v0] (v1); \\coordinate [left=.7 of v0] (i0); \\coordinate [right=.7 of v1] (o0); \\draw (v0) -- (v1); \\filldraw (v0) circle (1pt); \\filldraw (v1) circle (1pt); \\draw (i0) circle(.7); \\draw (o0) circle(.7); \\end{tikzpicture}%\n}\n) = 0$, as it has two forests whose respective contributions to the sum cancel (in fact $\\tau$ always vanishes on graphs with a separating edge as ensured by Lemma~(2.3) of \\cite{SV87a}) and \n$\\tau(\n{\n\\begin{tikzpicture}[x=1ex,y=1ex,baseline={([yshift=-.6ex]current bounding box.center)}] \\coordinate (vm); \\coordinate [left=1 of vm] (v0); \\coordinate [right=1 of vm] (v1); \\draw (v0) -- (v1); \\draw (vm) circle(1); \\filldraw (v0) circle (1pt); \\filldraw (v1) circle (1pt); \\end{tikzpicture}%\n}\n)=-2\n$, as the graph $\n{\n\\begin{tikzpicture}[x=1ex,y=1ex,baseline={([yshift=-.6ex]current bounding box.center)}] \\coordinate (vm); \\coordinate [left=1 of vm] (v0); \\coordinate [right=1 of vm] (v1); \\draw (v0) -- (v1); \\draw (vm) circle(1); \\filldraw (v0) circle (1pt); \\filldraw (v1) circle (1pt); \\end{tikzpicture}%\n}$ has four forests with contributions $-1,-1,-1$ and $1$.\n\nThe following theorem relates the rational Euler characteristics of the groups ${\\rm A}_{n,s}$ to the function $\\tau$. \n \\begin{theorem}\\label{thm:SV}%\n \\[\n \\chi({\\rm A}_{n,s})=\\sum \\frac{\\tau(\\G)}{|\\PAut(\\G)|},\n\\]\nwhere the sum is over all connected leaf-labeled graphs $\\Gamma$ with $s$ leaves and fundamental group $F_n$. \\end{theorem} \n \\begin{proof} \n The group ${\\rm A}_{n,s}$ acts properly and cocompactly on $K_{n,s}$. It acts transitively on markings, assuming the graphs are leaf-labeled, so there is one orbit of cubes for each isomorphism class of pairs $(\\G,\\varphi)\\in\\lG^c_s$ with fundamental group $F_n$. The dimension of this cube is $e(\\varphi)$, and the stabilizer of $(\\G,\\varphi, g)$ is isomorphic to the group $\\PAut(\\G,\\varphi)$ of automorphisms of $\\G$ that fix the leaves and send $\\varphi$ to itself. 
Therefore we have\n $$\\chi({\\rm A}_{n,s})=\\sum_{\\sigma\\in \\mathcal C} \\frac{(-1)^{\\dim \\sigma}}{|\\text{Stab}(\\sigma)|}=\\sum_{(\\G,\\varphi)} \\frac{(-1)^{e(\\varphi)}}{|\\PAut(\\G,\\varphi)|},$$\n where the sum is over all isomorphism classes of pairs $(\\G,\\varphi)$ of leaf-labeled graphs $\\G\\in \\lG_s^c$ and forests $\\varphi\\subset \\G$. The full automorphism group $\\PAut(\\G)$ acts on the set of forests in $\\G$, and an orbit is an isomorphism class of pairs $(\\G,\\varphi)$, so the orbit-stabilizer theorem now gives\n $$\\sum_{(\\G,\\varphi)} \\frac{(-1)^{e(\\varphi)}}{|\\PAut(\\G,\\varphi)|} = \\sum_{\\G\\in\\lG^c_s} \\sum_{ \\varphi\\subset \\G} \\frac{(-1)^{e(\\varphi)}}{|\\PAut(\\G)|}=\\sum_{\\G\\in\\lG^c_s} \\frac{\\tau(\\G)}{|\\PAut(\\G)|}$$\n as desired.\n \n \n Note that for $n\\geq 2$ and $s=0$ we have $\\lG_0^c=\\GG_0^c,$ ${\\rm A}_{n,0} \\cong \\Out(F_n)$ and $\\PAut(\\G) \\cong \\Aut(\\G)$, so this is Proposition~(1.12) of \\cite{SV87a}. \n \\end{proof}\n \n\n\\begin{example} \nUsing this theorem we can immediately verify that\n\\begin{gather*}\n\\chi({\\rm A}_{2,0}) = \\chi(\\Out(F_2))= \\\\\n\\frac{\\tau(\n{\n\\begin{tikzpicture}[x=1ex,y=1ex,baseline={([yshift=-.6ex]current bounding box.center)}] \\coordinate (vm); \\coordinate [left=.7 of vm] (v0); \\coordinate [right=.7 of vm] (v1); \\draw (v0) circle(.7); \\draw (v1) circle(.7); \\filldraw (vm) circle (1pt); \\end{tikzpicture}%\n}\n)}{|\\PAut(\n{\n\\begin{tikzpicture}[x=1ex,y=1ex,baseline={([yshift=-.6ex]current bounding box.center)}] \\coordinate (vm); \\coordinate [left=.7 of vm] (v0); \\coordinate [right=.7 of vm] (v1); \\draw (v0) circle(.7); \\draw (v1) circle(.7); \\filldraw (vm) circle (1pt); \\end{tikzpicture}%\n}\n)|}\n+\n\\frac{\\tau(\n{\n\\begin{tikzpicture}[x=1ex,y=1ex,baseline={([yshift=-.6ex]current bounding box.center)}] \\coordinate (v0); \\coordinate [right=1.5 of v0] (v1); \\coordinate [left=.7 of v0] (i0); \\coordinate [right=.7 of v1] (o0); \\draw (v0) -- (v1); \\filldraw (v0) circle (1pt); \\filldraw (v1) circle (1pt); \\draw (i0) circle(.7); \\draw (o0) circle(.7); \\end{tikzpicture}%\n}\n)}{|\\PAut(\n{\n\\begin{tikzpicture}[x=1ex,y=1ex,baseline={([yshift=-.6ex]current bounding box.center)}] \\coordinate (v0); \\coordinate [right=1.5 of v0] (v1); \\coordinate [left=.7 of v0] (i0); \\coordinate [right=.7 of v1] (o0); \\draw (v0) -- (v1); \\filldraw (v0) circle (1pt); \\filldraw (v1) circle (1pt); \\draw (i0) circle(.7); \\draw (o0) circle(.7); \\end{tikzpicture}%\n}\n)|}\n+\n\\frac{\\tau(\n{\n\\begin{tikzpicture}[x=1ex,y=1ex,baseline={([yshift=-.6ex]current bounding box.center)}] \\coordinate (vm); \\coordinate [left=1 of vm] (v0); \\coordinate [right=1 of vm] (v1); \\draw (v0) -- (v1); \\draw (vm) circle(1); \\filldraw (v0) circle (1pt); \\filldraw (v1) circle (1pt); \\end{tikzpicture}%\n}\n)}{|\\PAut(\n{\n\\begin{tikzpicture}[x=1ex,y=1ex,baseline={([yshift=-.6ex]current bounding box.center)}] \\coordinate (vm); \\coordinate [left=1 of vm] (v0); \\coordinate [right=1 of vm] (v1); \\draw (v0) -- (v1); \\draw (vm) circle(1); \\filldraw (v0) circle (1pt); \\filldraw (v1) circle (1pt); \\end{tikzpicture}%\n}\n)|}\n=\n\\frac{1}{8} + \\frac{0}{8} + \\frac{-2}{12} = \n-\\frac{1}{24}.\n\\end{gather*}\n\n\\end{example}\n \n \n \\begin{corollary}\\label{cor:SV}\n $$ \\frac{\\chi({\\rm A}_{n,s})}{s!}= \\sum_{\\substack{\\G\\in\\GG^c_{s}\\\\ \\pi_1(\\G) \\cong F_n}} \\frac{\\tau(\\G)}{|\\Aut(\\G)|}.$$\n \\end{corollary}\n\n %\n %\n \\begin{proof} \n $\\Aut(\\G)$ acts on the set of 
leaf-labelings of $\\G \\in \\lG_s^c$. The orbits are the leaf-labeled graphs, and the stabilizer of a labeling is $\\PAut(\\G)$, giving $|\\Aut(\\G)|=|\\text{Orbit}(\\G)| |\\PAut(\\G)|. $ There are $s!$ labelings of $\\G.$ Each orbit has the same size, so the size of each orbit is $s!\/\\ell(\\G)$, where $\\ell(\\G)$ is the number of leaf-labeled graphs with underlying graph $\\G$. Therefore,\n \\begin{align*} \n |\\Aut(\\G)|&=\\frac{s!}{\\ell(\\G)} |\\PAut(\\G)|. \\qedhere\n \\end{align*}\n \\end{proof}\n\n\n\\section{Formal power series} \nFor the rest of this article it will be convenient to use $|\\G|=e(\\G)-v(\\G)$ instead of the rank of $\\pi_1(\\G)$ to filter the set of graphs. For connected graphs $\\G$ this is only a minor change of notation as $\\text{rank}(\\pi_1(\\G)) = e(\\G)-v(\\G) +1 =|\\G|+1$. %\n \nConsistent with this shift, we define $\\ch_n = \\chi(\\Out(F_{n+1}))$ and consider the formal power series\n\\begin{align*}\nT(z)=\\sum_{n=1}^\\infty \\chi(\\Out(F_{n+1})) z^n=\\sum_{n=1}^\\infty \\ch_n z^n.\n\\end{align*}\nBy Theorem~\\ref{thm:SV} with $s=0$ we have \n\\begin{align}\n\\label{eqn:def_Tz_cntd_graph_sum}\nT(z)=\\sum_{n=1}^\\infty \\left(\\sum_{\\substack{\\G\\in\\GG^c_0\\\\ \\pi_1(\\G) \\cong F_{n+1}}}\\frac{\\tau(\\G)}{|\\Aut(\\G)|}\\right) z^n = \\sum_{\\G\\in\\GG^c_0}\\frac{\\tau(\\G)}{|\\Aut(\\G)|}z^{|\\G|}.\n\\end{align}\nFor general $A_{n,s}$ we define a bivariate generating function for the Euler characteristic by\n\\begin{align}\n\\label{eqn:def_Tzx_Ans}\nT(z,x)= \\sum_{s\\geq 3} \\ch({\\rm A}_{0,s})z^{-1}\\frac{x^s}{s!} + \\sum_{s\\geq 1} \\ch({\\rm A}_{1,s}) \\frac{x^s}{s!} + \\sum_{n\\geq 1}\\sum_{s\\geq 0} \\ch({\\rm A}_{n+1,s})z^n\\frac{x^s}{s!}.\n\\end{align}\nRecall that the groups ${\\rm A}_{n,s}$ are only defined for $s \\geq 3$ if $n=0$, for $s \\geq 1$ if $n=1$ and for $s\\geq 0$ if $n \\geq 2$. \nObviously, $T(z,0) = T(z)$ and by Corollary~\\ref{cor:SV}\n\\begin{align}\n\\label{eqn:def_Tzx_cntd_graph_sum}\nT(z,x)=\\sum_{\\G\\in\\GG^c}\\frac{\\tau(\\G)}{|\\Aut(\\G)|}z^{|\\G|}x^{s(\\G)},\n\\end{align}\nwhere $s(\\G)$ is the number of leaves in $\\G$. \n\nThe relationships between the groups ${\\rm A}_{n,s}$, which were described in Section~\\ref{sec:groups}, imply the following functional relation between $T(z)$ and the bivariate generating function $T(z,x)$.\n \n\\begin{proposition} \n \\label{prop:Tzx_leaves_identity}\n $$T(z,x) = \\frac{e^x-\\frac{x^2}{2}-x-1}{z} + \\frac{x}{2} + T(ze^{-x}).$$\n\\end{proposition}\n\n\\begin{proof} \nThe groups ${\\rm A}_{0,s}$ are trivial, so we have $\\ch({\\rm A}_{0,s})=1$ for all $s \\geq 3$. \n For the groups ${\\rm A}_{1,s}$ we have the short exact sequence $$1\\to \\mathbb{Z}^{s-1}\\to {\\rm A}_{1,s}\\to \\mathbb{Z}\/2\\mathbb{Z}\\to 1.$$ \nThus $\\chi({\\rm A}_{1,s})=0$ if $s\\geq 2$ and $\\chi({\\rm A}_{1,1})=\\chi(\\mathbb{Z}\/2\\mathbb{Z})=\\frac{1}{2}$.\nFor the groups ${\\rm A}_{n+1,s}$ with $n\\geq 1$ the short exact sequence $$1\\to F_{n+1}^{s}\\to {\\rm A}_{n+1,s}\\to \\Out(F_{n+1})\\to 1,$$\ngives $\\chi({\\rm A}_{n+1,s})=\\chi(\\Out(F_{n+1}))(-n)^{s}=\\ch_n (-n)^{s}$. 
\n\nPutting these together into eq.~\\eqref{eqn:def_Tzx_Ans} gives \n\\begin{align*}\n T(z,x)&=\\sum_{s\\geq 3} \\frac{x^s}{s!}z^{-1} + \\frac{x}{2} +\\sum_{n\\geq 1}\\sum_{s\\geq 0} \\ch_n\\frac{ (-n)^s}{s!} z^nx^s \\\\\n & =\\sum_{s\\geq 3} \\frac{x^s}{s!}z^{-1} + \\frac{x}{2} +\\sum_{n\\geq 1}\\ch_n \\sum_{s\\geq 0} \\frac{(-n)^s}{s!} x^s z^n \\\\\n & =\\sum_{s\\geq 3} \\frac{x^s}{s!}z^{-1} + \\frac{x}{2} +\\sum_{n\\geq 1}\\ch_n e^{-nx} z^n\\\\\n &=\\sum_{s\\geq 3} \\frac{x^s}{s!}z^{-1} + \\frac{x}{2} + T(ze^{-x}). \\qedhere\n\\end{align*}\n\\end{proof}\n\n\n\n \n\n\\section{Algebraic graph combinatorics}\n\\label{sec:graphical_enumeration}\n\nAlthough Theorem~\\ref{thm:SV} gives an explicit expression for the coefficients of $T(z)$ and $T(z,x)$, we will use an implicit equation, which the generating function $T(z,x)$ must satisfy, to prove Theorem \\ref{thm:asymptotic_expansion}. This implicit equation together with the identity from Proposition~\\ref{prop:Tzx_leaves_identity} determines the coefficients $\\chi(\\Out(F_n))$ completely.\n\nTo formulate this implicit equation, it is convenient to use the \\textit{coefficient extraction operator} notation: For an arbitrary formal power series $f(x)$ the notation $[x^n] f(x)$ means `the $n$-th coefficient in $x$ of $f(x)$.'\n\n\\begin{proposition}\n\\label{prop:Tzx_graph_counting_identity}\n\\begin{align}\n\\label{eqn:Tzx_implicit_equation}\n 1 &= \\sum_{\\ell \\geq 0} (-z)^\\ell (2\\ell-1)!! [x^{2\\ell}] \\exp\\left( T(z,x) \\right),\n\\end{align}\nwhere $(2\\ell-1)!!=(2\\ell)! \/ ( \\ell! 2^\\ell )$ is the double factorial.\n\\end{proposition}\nIn the remainder of this section we will first give a combinatorial reformulation of this identity and then prove it. \n\n\n\\subsection{The exponential formula}\nThe exponential of the generating function $T(z,x)$ in \\eqref{eqn:Tzx_implicit_equation} has a straightforward combinatorial interpretation. While $T(z,x)$ can be expressed as a sum over connected graphs as we did in \\eqref{eqn:def_Tzx_cntd_graph_sum}, the generating function $\\exp(T(z,x))$ can be expressed as a sum over all graphs.\nThe reason for this is that the function $\\tau$ is multiplicative on disjoint unions of graphs, so we can apply the \\textit{exponential formula} given, for example, in \\cite[5.1]{stanley1997enumerative2}.\n\nBriefly, the argument behind the exponential formula is that if $\\phi$ is a function on graphs that is multiplicative on disjoint unions, i.e.\\ $\\phi(\\G_1 \\sqcup \\G_2) = \\phi(\\G_1) \\phi(\\G_2)$, then \n\\begin{align*}\n\\sum_{\\substack{\\G \\in \\GG \\\\ |C(\\G)| = n}} \\frac{\\phi(\\G)}{|\\Aut\\G|} = \\frac{1}{n!} \\sum_{\\gamma_1, \\ldots, \\gamma_n \\in \\GG^c} \\prod_{i=1}^n \\frac{\\phi(\\gamma_i)}{|\\Aut \\gamma_i|},\n\\end{align*}\nwhere we sum over all graphs with $n$ connected components on the left hand side and over all $n$ tuples of connected graphs on the right hand side. The factor $1\/n!$ accounts for the number of ways to order the connected components of the graphs. If we sum this equation over all $n \\geq 0$ and use $e^x = \\sum_{n \\geq 0} x^n\/n!$, we get\n\\begin{lemma}[Exponential formula]\n\\label{lmm:exponential_formula}\nLet $\\phi$ be a function from graphs to a power series ring that is multiplicative on disjoint unions (i.e.\\ $\\phi(\\G_1 \\sqcup \\G_2) = \\phi(\\G_1) \\phi(\\G_2)$). 
If the coefficient in $\\phi(\\G)$ of a given monomial is non-zero for only finitely many graphs $\\G$, then \n\\begin{align*}\n\\sum_{\\G \\in \\GG} \\frac{\\phi(\\G)}{|\\Aut\\G|}\n= \\exp\\left( \n\\sum_{\\G \\in \\GG^c} \\frac{\\phi(\\G)}{|\\Aut\\G|}\n\\right).\n\\end{align*}\n\\end{lemma}\nThe finiteness condition on the function $\\phi$ is necessary to ensure that $\\sum_{\\G \\in \\GG} \\frac{\\phi(\\G)}{|\\Aut\\G|}$ exists in the respective power series space.\n \n\\begin{corollary}\n\\label{crll:disconnected_exp}\n\\begin{align*}\n\\exp({T(z)}) &= \\sum_{\\G\\in\\GG_0}\\frac{\\tau(\\G)}{|\\Aut(\\G)|}z^{|\\G|}\\\\\n\\exp({T(z,x)}) &= \\sum_{\\G\\in\\GG}\\frac{\\tau(\\G)}{|\\Aut(\\G)|}z^{|\\G|}x^{s(\\G)}.\n\\end{align*}\n\\end{corollary}\n\n\n\\begin{proof}\nLet $\\phi_1$ be the function defined by $\\phi_1(\\G) = \\tau(\\G) z^{|\\G|}$ for $\\G \\in \\GG_0$ and $\\phi_1(\\G)=0$ for $\\G \\in \\GG_s$ with $s \\geq 1$. This function is multiplicative on disjoint unions of graphs, because $\\tau$ is (Lemma~(2.2) of \\cite{SV87a}) and both $|\\G|$ and $s(\\G)$ are additive graph invariants. The first statement follows by applying Lemma~\\ref{lmm:exponential_formula} to $\\phi_1$ and using eq.\\ \\eqref{eqn:def_Tz_cntd_graph_sum}. For the second statement apply Lemma~\\ref{lmm:exponential_formula} to the function $\\phi_2(\\G) = \\tau(\\G) z^{|\\G|} x^{s(\\G)}$ for all $\\G \\in \\GG$, note that there are only a finite number of admissible graphs with fixed $|\\G|$ and $s(\\G)$ and apply eq.\\ \\eqref{eqn:def_Tzx_cntd_graph_sum}. \n\\end{proof}\nNote that the power series $T(z)$ and $\\exp(T(z))$ carry the same information. \nRecall that $\\ch_n$ is the coefficient of $z^n$ in $T(z)$, and denote the coefficient of $z^n$ in $\\exp(T(z))$ by $\\Ch_n$. \nThe coefficients $\\ch_n$ and $\\Ch_n$ are related by\n\\begin{align}\n\\label{eqn:exp_pwrsrs_relation}\n \\ch_n&=\\Ch_n-\\frac{1}{n}\\sum_{k=1}^{n-1}k \\ch_k \\Ch_{n-k} \\text{ for } n \\geq 1.\n\\end{align}\nThis recursive relation can be derived by taking the formal derivative of $\\exp(T(z))$ which results in the (formal) differential equation $\\frac{d}{dz}\\exp({T(z)}) = e^{T(z)} \\frac{d}{dz}T(z)$. Note that it is also important that $\\ch_0 = 0$ for $\\exp(T(z))$ to make sense as a power series with $\\mathbb{Q}$ as coefficient ring. \n\nWe can immediately use the relationship between the coefficients $\\Ch_n$ and $\\ch_n$ to prove the following statement which will be helpful later while proving that the rational Euler characteristic of $\\Out(F_n)$ is always negative.\n\\begin{lemma}\n\\label{lmm:exp_negative}\nIf $\\Ch_n < 0$ for all $n \\geq 1$, then $\\ch_n < 0$ for all $n \\geq 1$.\n\\end{lemma}\n\\begin{proof}\nThis follows by induction on $n$ on eq.\\ \\eqref{eqn:exp_pwrsrs_relation}. \n\\end{proof}\nBecause $\\ch_n = \\chi(\\Out(F_{n+1}))$, proving $\\Ch_n < 0$ for all $n\\geq 1$ is therefore sufficient to show that $\\chi(\\Out(F_n)) < 0$ for all $n\\geq 2$.\n\n\\subsection{Convolution identities}\n\nBy Corollary~\\ref{crll:disconnected_exp}, the statement of Proposition~\\ref{prop:Tzx_graph_counting_identity} is equivalent to the identity\n\\begin{align}\n\\label{eqn:graph_sum_tau_identity}\n1=\\sum_{\\ell\\geq 0} (-z)^\\ell(2\\ell-1)!! 
\\sum_{\\G\\in\\GG_{2\\ell}} \\frac{\\tau(\\G)}{|\\Aut(\\G)|} z^{|\\G|}.\n\\end{align}\n\nIf $\\gamma$ is a subgraph of $\\G$, we denote by $\\G\/\\gamma$ the graph that one obtains from $\\G$ by collapsing each edge that is in $\\gamma$ to a point.\nWe will use the following \\textit{convolution identity} for $\\tau$ to prove eq.\\ \\eqref{eqn:graph_sum_tau_identity} and therefore also Proposition~\\ref{prop:Tzx_graph_counting_identity}. \n\\begin{proposition}\n\\label{prop:tau_identity}\nIf $\\G$ is a graph with at least one cycle, then \n\\begin{align*}\n \\sum_{\\gamma \\subset \\G} \\tau(\\gamma) (-1)^{e(\\G\/\\gamma)} = 0.\n\\end{align*}\nwhere the sum is over all subgraphs of $\\G$, including the trivial subgraph and $\\G$ itself. \n\\end{proposition}\n\n This statement can be seen as an identity in the incidence algebra of the subgraph poset of a graph. We will discuss a related viewpoint in Section~\\ref{sec:core}, where we will interpret it as an inverse relation in the group of characters of the Hopf algebra of core graphs.\n\n\\begin{proof}\nRecall that $\\tau(\\G) = \\sum_{\\varphi \\subset \\G} (-1)^{e(\\varphi)}$ where the sum is over all subforests of $\\G$. Therefore,\n\\begin{gather*}\n \\sum_{\\gamma \\subset \\G} \\tau(\\gamma) (-1)^{e(\\G\/\\gamma)}\n =\n (-1)^{e(\\G)} \\sum_{\\gamma \\subset \\G} \\tau(\\gamma) (-1)^{e(\\gamma)}\n =\n \\\\\n (-1)^{e(\\G)} \\sum_{\\gamma \\subset \\G} \\sum_{\\substack{ \\varphi \\subset \\gamma\\\\ \\text{ forest } \\varphi}} (-1)^{e(\\varphi)}(-1)^{e(\\gamma)}\n =\n (-1)^{e(\\G)} \\sum_{\\substack{ \\varphi \\subset \\G\\\\ \\text{ forest } \\varphi}} (-1)^{e(\\varphi)} \\sum_{\\substack{\\gamma \\subset \\G\\\\ \\gamma \\supset \\varphi}} (-1)^{e(\\gamma)}.\n\\end{gather*}\nThe set of subgraphs of $\\G$ containing $\\varphi$ is in bijection with the set of subsets of $E(\\G) \\setminus E(\\varphi)$. Because $\\G$ has at least one cycle, $E(\\G) \\setminus E(\\varphi)$ is not empty and the alternating sum \n\\begin{align*}\n\\sum_{\\substack{\\gamma \\subset \\G\\\\ \\gamma \\supset \\varphi}} (-1)^{e(\\gamma)} \n&= \n(-1)^{e(\\varphi)}\\sum_{E' \\subset E(\\G) \\setminus E(\\varphi)} (-1)^{|E'|} = 0. \\qedhere\n\\end{align*}\n\\end{proof}\n\n\\begin{corollary} \n\\label{crll:tau_identity_summed}\n\\begin{align*}\n1= \\sum_{\\G\\in \\GG_0} \\sum_{\\gamma\\subset\\G} \\frac{\\tau(\\gamma)(-1)^{e(\\G\/\\gamma)}}{|\\Aut(\\G)|}z^{|\\G|}.\n\\end{align*}\n\\end{corollary}\n\\begin{proof} Since all non-trivial graphs in $\\GG_0$ have cycles, Proposition~\\ref{prop:tau_identity} implies that the only non-zero contribution to the sum comes from the empty graph.\n\\end{proof}\n\nTo eventually obtain the statement of Proposition~\\ref{prop:Tzx_graph_counting_identity}, we transform this identity using the following proposition, which is an elementary application of labeled counting.\n\n\n\n\n\\begin{proposition}\n \\label{prop:convoluted_graph_sum}\nLet $\\phi$ be a function from graphs to a formal power series ring such that for each monomial $m$ and each integer $\\ell\\geq 0$, the coefficient of $m$ in $\\phi(\\G)$ is non-zero for only finitely many graphs $\\G\\in \\GG_{2\\ell}.$ \nThen\n $$\\sum_{\\G\\in \\GG_0} \\sum_{\\gamma\\subset\\G} \\frac{\\phi(\\gamma)w^{e(\\G\/\\gamma)}}{|\\Aut(\\G)|}=\\sum_{\\ell\\geq 0} w^\\ell (2\\ell-1)!! \\sum_{\\gamma \\in \\GG_{2\\ell}} \\frac{\\phi(\\gamma)}{|\\Aut \\gamma|},$$\n where $w$ is a formal variable. 
\n\\end{proposition}\n\n\\begin{proof} \n To prove the proposition we will use (totally) labeled graphs. \n Here a {\\em labeling} of $\\G$ with $e(\\G)$ edges, $v(\\G)$ vertices and $s(\\G)$ leaves consists of \n\\begin{itemize}\n\\item ordering the edges, i.e.\\ labeling them $1,\\ldots,e(\\G)$,\n\\item orienting each edge, \n\\item ordering the vertices, i.e.\\ labeling them $1,\\ldots,v(\\G)$,\n\\item ordering the leaves, i.e.\\ labeling them $1,\\ldots,s(\\G)$. \n\\end{itemize}\nThe set of labeled graphs with $s$ leaves will be denoted $\\LG_s$. \n\n\nThe advantage of using labeled graphs instead of unlabeled graphs is that a sum of terms $1\/|\\Aut(\\G)|$ over unlabeled graphs on $v$ vertices and $e$ edges becomes a sum of $1\/(v!e!2^e)$ over labeled graphs using the orbit-stabilizer theorem. The group $\\Aut(\\G)$ acts on the set of labelings of $\\G$, an orbit is a labeled graph and all stabilizers are trivial. This simplifies expressions that involve these automorphism groups. In particular, abbreviating $v=v(\\G), e=e(\\G)$ and $d=e(\\gamma)$ we have\n \\begin{align*}\n \\sum_{\\G\\in \\GG_0} \\sum_{\\gamma\\subset\\G} \\frac{\\phi(\\gamma)w^{e(\\G\/\\gamma)}}{|\\Aut(\\G)|}\n=\\sum_{\\G\\in \\LG_0} \\sum_{\\gamma\\subset\\G}\\frac{w^{e-d}}{e!v!2^{e}}\\phi(\\gamma).\n\\end{align*}\nA subgraph $\\gamma$ inherits a labeling from $\\G$: the vertices are the same, so they have the same labels. The ordering and orientation on the edges of $\\G$ induces an ordering and orientation on the edges of $\\gamma$, giving a labeling on these. The edges not in $\\gamma$ also have an induced ordering, and we use that to order the leaves of $\\gamma$ by the following rule: If there are $\\ell$ edges in $E(\\G)\\setminus E(\\gamma)$, label the leaf corresponding to the initial half of the $i$-th edge by $i$, and the leaf corresponding to the terminal half by $i+\\ell$. \n\nWe now change the order of summation. Remembering that $\\gamma$ has an even number of leaves, we get\n\\begin{align*}\n\\sum_{\\G\\in \\LG_0} \\sum_{\\gamma\\subset\\G}\\frac{w^{e-d}}{e!v!2^{e}}\\phi(\\gamma)&=\n \\sum_{\\ell\\geq 0}\\sum_{\\gamma\\in\\LG_{2\\ell}} \\sum_{\\G\\in \\LG_0, \\G\\supset \\gamma}\\frac{w^{e-d}}{e!v!2^{e}}\\phi(\\gamma)\\\\\n&=\\sum_{\\ell\\geq 0}\\sum_{\\gamma\\in\\LG_{2\\ell}} \\sum_{\\G\\in \\LG_0, \\G\\supset \\gamma}\\frac{w^{\\ell}}{(\\ell+d)!v!2^{\\ell+d}}\\phi(\\gamma).\n \\intertext{\nWe next note that a labeling on $\\gamma$ specifies an isomorphism type of $\\Gamma$ containing $\\gamma$, using the rule that the $i$-th leaf should be connected to the $(i+\\ell)$-th leaf. This also orders the edges in $E(\\Gamma)\\setminus E(\\gamma)$ and orients them from $i$ to $i+\\ell$. The edges of $\\gamma\\subset \\G$ are ordered and oriented as subsets of $E(\\gamma)$. There are $\\binom{d+\\ell}{\\ell}$ ways of shuffling the two orderings to get a total ordering on $E(\\G)$ that induces the given orderings on $E(\\gamma)$ and $E(\\G)\\setminus E(\\gamma)$. 
Thus the last expression becomes
}
 &=\sum_{\ell\geq 0}\sum_{\gamma\in\LG_{2\ell}} \binom{d+\ell}{\ell} \frac{w^{\ell}}{(d+\ell)!v!2^{d+\ell}}\phi(\gamma) \\
 &=\sum_{\ell\geq 0} \frac{w^\ell}{2^\ell}\sum_{\gamma\in\LG_{2\ell}} \frac{(d+\ell)!}{\ell!d!}\frac{1}{(d+\ell)!v!2^{d}}\phi(\gamma) \\
 &=\sum_{\ell\geq 0} \frac{1}{\ell!2^\ell} w^\ell \sum_{\gamma\in\LG_{2\ell}} \frac{\phi(\gamma)}{d!v!2^{d}}.
 \intertext{
We now translate back to unlabeled graphs to get
}
& =\sum_{\ell \geq 0} \frac{1}{\ell!2^\ell} w^\ell \sum_{\gamma\in\GG_{2\ell}} \frac{\phi(\gamma)(2\ell)!}{|\Aut(\gamma)|}\\
& =\sum_{\ell \geq 0} \frac{(2\ell)!}{\ell!2^\ell} w^\ell \sum_{\gamma\in\GG_{2\ell}} \frac{\phi(\gamma)}{|\Aut(\gamma)|}\\
& =\sum_{\ell \geq 0}(2\ell-1)!! w^\ell \sum_{\gamma\in\GG_{2\ell}} \frac{\phi(\gamma)}{|\Aut(\gamma)|}.
\qedhere
\end{align*}
\end{proof}

\begin{proof}[Proof of Proposition~\ref{prop:Tzx_graph_counting_identity}]
 Use Proposition~\ref{prop:convoluted_graph_sum} with $\phi(\Gamma) = \tau(\Gamma) z^{|\Gamma|}$ and $w = -z$, together with the observation that $(-1)^{e(\G/\gamma)}z^{|\G|}=z^{|\gamma|}(-z)^{e(\G)-e(\gamma)}$. We get
$$
 \sum_{\G \in \GG_0} \sum_{\gamma \subset \G} \frac{ \tau(\gamma)(-1)^{e({\G/\gamma})}}{|\Aut \G|}z^{|\G|}
 =\sum_{\ell\geq 0} (-z)^\ell(2\ell-1)!! \left(\sum_{\gamma\in\GG_{2\ell}} \frac{\tau(\gamma)}{|\Aut(\gamma)|} z^{|\gamma|}\right).
$$
Applying Corollary~\ref{crll:tau_identity_summed} to the left-hand side yields eq.\ \eqref{eqn:graph_sum_tau_identity}, and Corollary~\ref{crll:disconnected_exp} then converts this into the statement of the proposition.
\end{proof}
\section{The Hopf algebra of core graphs}\label{sec:core}

A graph with no separating edges is called a {\it core graph}, {\it bridgeless}, or {\it 1-particle irreducible} graph. Let ${\mathbf{H}}$ denote the $\mathbb{Q}$-vector space generated by all finite core graphs. In contrast to the rest of the article, we will include graphs with bivalent vertices as generators of ${\mathbf{H}}$.
The vector space ${\mathbf{H}}$ can be made into an algebra whose multiplication is induced by disjoint union of generators; here we identify all graphs with no edges with the neutral element $\mathbb{I}$ for this multiplication.
(Thus a topological graph representing the neutral element is a (possibly empty) disjoint union of isolated vertices and star graphs.)

The algebra ${\mathbf{H}}$ can also be equipped with a coproduct $\Delta \colon {\mathbf{H}} \to {\mathbf{H}} \otimes {\mathbf{H}}$, defined by
\begin{align}
 \label{eqn:coproduct}
\Delta(\G)=\sum_{\gamma\subset\G} \gamma\otimes \G/\gamma,
\end{align}
where the sum is over all core subgraphs of $\G$.

\begin{example}
The graph
$
\begin{tikzpicture}[x=1.3ex,y=1.3ex,baseline={([yshift=-.7ex]current bounding box.center)}] \begin{scope}[node distance=1] \coordinate (v0); \coordinate[right=.5 of v0] (v4); \coordinate[above right= of v4] (v2); \coordinate[below right= of v4] (v3); \coordinate[below right= of v2] (v5); \coordinate[right=.5 of v5] (v1); \coordinate[below left=.5 of v0] (i1); \coordinate[above left=.5 of v0] (i2); \coordinate[below right=.5 of v1] (o1); \coordinate[above right=.5 of v1] (o2); \draw (v0) -- (i1); \draw (v0) -- (i2); \draw (v1) -- (o1); \draw (v1) -- (o2); \draw (v0) to[bend left=20] (v2); \draw (v0) to[bend right=20] (v3); \draw (v1) to[bend left=20] (v3); \draw (v1) to[bend right=20] (v2); \draw (v2) to[bend right=60] (v3); \draw (v2) to[bend left=60] (v3); \filldraw (v0) circle(1pt); \filldraw (v1) circle(1pt); \filldraw (v2) circle(1pt); \filldraw (v3) circle(1pt); \end{scope} \end{tikzpicture}
$ has seven mutually non-isomorphic core subgraphs---including the trivial subgraph (identified with $\mathbb{I}$) and the graph itself.
The coproduct is given by
\begin{gather*}
\Delta\,\G \;=\; \G \otimes \mathbb{I} \;+\; 2\,\gamma_1 \otimes \G/\gamma_1 \;+\; 2\,\gamma_2 \otimes \G/\gamma_2 \;+\; \gamma_3 \otimes \G/\gamma_3 \;+\; \dots \;+\; \mathbb{I} \otimes \G,
\end{gather*}
where the $\gamma_i$ run through the proper non-trivial core subgraphs, grouped by isomorphism type.
% [In the original display each subgraph $\gamma_i$ and each quotient $\G/\gamma_i$ is drawn as an inline graph; the TikZ source for these drawings is corrupted and the file is truncated at this point.]
\end{example}
(v) circle (1pt); \\draw (m) circle (.4); \\end{scope} \\end{tikzpicture)]v); \\coordinate (i3) at ([shift=(285:.4}; \\coordinate [above=.4 of v] (m); \\coordinate (i1) at ([shift=(225:\\rud)]v); \\coordinate (i2) at ([shift=(255:\\rud)]v); \\coordinate (i3) at ([shift=(285:\\rud)]v); \\coordinate (i4) at ([shift=(315:\\rud)]v); \\draw (v) -- (i1); \\draw (v) -- (i2); \\draw (v) -- (i3); \\draw (v) -- (i4); \\filldraw (v) circle (1pt); \\draw (m) circle (.4); \\end{scope} \\end{tikzpicture)]v); \\coordinate (i4) at ([shift=(315:.4}; \\coordinate [above=.4 of v] (m); \\coordinate (i1) at ([shift=(225:\\rud)]v); \\coordinate (i2) at ([shift=(255:\\rud)]v); \\coordinate (i3) at ([shift=(285:\\rud)]v); \\coordinate (i4) at ([shift=(315:\\rud)]v); \\draw (v) -- (i1); \\draw (v) -- (i2); \\draw (v) -- (i3); \\draw (v) -- (i4); \\filldraw (v) circle (1pt); \\draw (m) circle (.4); \\end{scope} \\end{tikzpicture)]v); \\draw (v) -- (i1); \\draw (v) -- (i2); \\draw (v) -- (i3); \\draw (v) -- (i4); \\draw (v) to[out=90,in=120] (s1) to[out=-60,in=0-30] (v); \\draw (v) to[out=210,in=240] (s2) to[out=60,in=90] (v); \\filldraw (v) circle (1pt); \\end{scope} \\end{tikzpicture}\n\\\\\n+\n4~\n\\begin{tikzpicture}[x=2ex,y=2ex,baseline={([yshift=-.7ex]current bounding box.center)}] \\begin{scope}[node distance=1] \\coordinate (v0); \\coordinate[right=.5 of v0] (v4); \\coordinate[above right= of v4] (v2); \\coordinate[below right= of v4] (v3); \\coordinate[below right= of v2] (v5); \\coordinate[right=.5 of v5] (v1); \\coordinate[above right= of v2] (o1); \\coordinate[below right= of v2] (o2); \\coordinate[below left=.5 of v0] (i1); \\coordinate[above left=.5 of v0] (i2); \\coordinate[below right=.5 of v1] (o1); \\coordinate[above right=.5 of v1] (o2); \\coordinate[above left=.5 of v2] (e2); \\coordinate[below left=.5 of v3] (e3); \\coordinate[above right=.5 of v2] (h2); \\coordinate[below right=.5 of v3] (h3); \\draw (v1) -- (o1); \\draw (v1) -- (o2); \\draw (v2) -- (e2); \\draw (v3) -- (e3); \\draw (v2) -- (h2); \\draw (v3) -- (h3); \\draw (v1) to[bend left=20] (v3); \\draw (v1) to[bend right=20] (v2); \\draw (v2) -- (v3); \\filldraw (v1) circle(1pt); \\filldraw (v2) circle(1pt); \\filldraw (v3) circle(1pt); \\end{scope} \\end{tikzpicture}\n\\otimes \n\\begin{tikzpicture}[x=2ex,y=2ex,baseline={([yshift=-.7ex]current bounding box.center)}] \\begin{scope}[node distance=1] \\coordinate (v0); \\coordinate[right=.4 of v0] (m); \\coordinate[right=.4 of m] (v1); \\coordinate[right=.3 of v1] (v2); \\coordinate[below left=.5 of v0] (i1); \\coordinate[above left=.5 of v0] (i2); \\coordinate[below =.5 of v1] (o1); \\coordinate[above =.5 of v1] (o2); \\draw (v0) -- (i1); \\draw (v0) -- (i2); \\draw (v1) -- (o1); \\draw (v1) -- (o2); \\filldraw (v0) circle(1pt); \\filldraw (v1) circle(1pt); \\draw (v2) circle (.3); \\draw (m) circle (.4); \\end{scope} \\end{tikzpicture}\n+\n\\begin{tikzpicture}[x=2ex,y=2ex,baseline={([yshift=-.7ex]current bounding box.center)}] \\begin{scope}[node distance=1] \\coordinate (v0); \\coordinate[right=.5 of v0] (v4); \\coordinate[above right= of v4] (v2); \\coordinate[below right= of v4] (v3); \\coordinate[below right= of v2] (v5); \\coordinate[right=.5 of v5] (v1); \\coordinate[above right= of v2] (o1); \\coordinate[below right= of v2] (o2); \\coordinate[below left=.5 of v0] (i1); \\coordinate[above left=.5 of v0] (i2); \\coordinate[below right=.5 of v1] (o1); \\coordinate[above right=.5 of v1] (o2); \\coordinate[above left=.5 of v2] (e2); \\coordinate[below left=.5 of v3] (e3); 
\\coordinate[above right=.5 of v2] (h2); \\coordinate[below right=.5 of v3] (h3); \\draw (v2) -- (h2); \\draw (v3) -- (h3); \\draw (v2) -- (e2); \\draw (v3) -- (e3); \\draw (v2) to[bend right=60] (v3); \\draw (v2) to[bend left=60] (v3); \\filldraw (v2) circle(1pt); \\filldraw (v3) circle(1pt); \\end{scope} \\end{tikzpicture}\n\\otimes \n\\begin{tikzpicture}[x=2ex,y=2ex,baseline={([yshift=-.7ex]current bounding box.center)}] \\begin{scope}[node distance=1] \\coordinate (v0); \\coordinate[right=.4 of v0] (m1); \\coordinate[right=.4 of m1] (v1); \\coordinate[right=.4 of v1] (m2); \\coordinate[right=.4 of m2] (v2); \\coordinate[below left=.5 of v0] (i1); \\coordinate[above left=.5 of v0] (i2); \\coordinate[below right=.5 of v2] (o1); \\coordinate[above right=.5 of v2] (o2); \\draw (v0) -- (i1); \\draw (v0) -- (i2); \\draw (v2) -- (o1); \\draw (v2) -- (o2); \\filldraw (v0) circle(1pt); \\filldraw (v1) circle(1pt); \\filldraw (v2) circle(1pt); \\draw (m1) circle (.4); \\draw (m2) circle (.4); \\end{scope} \\end{tikzpicture}\n+\n\\mathbb{I} \n\\otimes \n\\begin{tikzpicture}[x=2ex,y=2ex,baseline={([yshift=-.7ex]current bounding box.center)}] \\begin{scope}[node distance=1] \\coordinate (v0); \\coordinate[right=.5 of v0] (v4); \\coordinate[above right= of v4] (v2); \\coordinate[below right= of v4] (v3); \\coordinate[below right= of v2] (v5); \\coordinate[right=.5 of v5] (v1); \\coordinate[above right= of v2] (o1); \\coordinate[below right= of v2] (o2); \\coordinate[below left=.5 of v0] (i1); \\coordinate[above left=.5 of v0] (i2); \\coordinate[below right=.5 of v1] (o1); \\coordinate[above right=.5 of v1] (o2); \\draw (v0) -- (i1); \\draw (v0) -- (i2); \\draw (v1) -- (o1); \\draw (v1) -- (o2); \\draw (v0) to[bend left=20] (v2); \\draw (v0) to[bend right=20] (v3); \\draw (v1) to[bend left=20] (v3); \\draw (v1) to[bend right=20] (v2); \\draw (v2) to[bend right=60] (v3); \\draw (v2) to[bend left=60] (v3); \\filldraw (v0) circle(1pt); \\filldraw (v1) circle(1pt); \\filldraw (v2) circle(1pt); \\filldraw (v3) circle(1pt); \\end{scope} \\end{tikzpicture}.\n\\end{gather*}%\n\nNote that the complete contraction $\\G\/\\G$ has no edges, so is identified with $\\mathbb{I}$. Also note that we omitted isolated vertices of the subgraphs on the left hand side of the tensor product since isolated vertices are also identified with $\\mathbb{I}$.\n\\end{example}\n\n\n\nThis coproduct is coassociative, making ${\\mathbf{H}}$ into a bialgebra. In fact Kreimer showed that ${\\mathbf{H}}$ has a Hopf algebra structure \\cite{kreimer2009core}. The unit $u \\colon \\mathbb{Q} \\to {\\mathbf{H}}$ sends $u \\colon q \\mapsto q \\mathbb{I}$. The co-unit $\\epsilon \\colon {\\mathbf{H}}\\to \\mathbb{Q}$ sends $\\mathbb{I}$ to $1\\in \\mathbb{Q}$ and all other graphs to $0$.\nThe antipode $S\\colon {\\mathbf{H}}\\to {\\mathbf{H}}$ can be defined inductively by $S(\\mathbb{I})=\\mathbb{I}$ and, \n\\begin{align*} \n S(\\G)=-\\sum_{\\gamma\\subsetneq \\G} S(\\gamma)\\G\/\\gamma \\text{ for } \\Gamma \\neq \\mathbb{I},\n\\end{align*}\nwhere the sum is over all core subgraphs of $\\G$ which are not equal to $\\G$. This recursion terminates since the graphs $\\gamma$ in the sum have fewer independent cycles (i.e. smaller first Betti number) than $\\G$. The result is a polynomial in core graphs. 
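It can be helpful to separate off the contribution of the trivial subgraph. Assuming, as in the example above, that the empty core subgraph (identified with $\mathbb{I}$) is included among the core subgraphs of $\G$, the recursion above can be rewritten in the more familiar Connes-Kreimer form
\begin{align*}
    S(\G) = -\G - \sum_{\mathbb{I} \neq \gamma \subsetneq \G} S(\gamma)\, \G/\gamma \text{ for } \G \neq \mathbb{I}.
\end{align*}
In particular, $S(\G) = -\G$ whenever the only core subgraphs of $\G$ are the trivial one and $\G$ itself.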
We refer the reader to \cite{sweedler1969hopf} for a general account of Hopf algebras or \cite[Ch.\ 3]{borinsky2018graphs} for more information about this specific Hopf algebra.

A {\em character} on ${\mathbf{H}}$ is a linear map $\phi$ into a commutative unital algebra which satisfies $\phi(\mathbb{I}) = 1$ and $\phi(\G_1 \G_2) = \phi(\G_1) \phi(\G_2).$ The convolution $\phi \star \psi$ of two characters is defined by $$(\phi\star \psi)(\G)=\sum_{\gamma \subset \G} \phi(\gamma) \psi(\G/\gamma),$$ where we again sum over all core subgraphs of $\G$. Because ${\mathbf{H}}$ is a Hopf algebra, the set of all characters from ${\mathbf{H}}$ to any fixed commutative algebra forms a group under the convolution product. This follows from the fact that the antipode is the inverse of the identity map $\id \colon {\mathbf{H}} \to {\mathbf{H}}$ under convolution, in the sense that $\id \star S = S \star \id= u \circ \epsilon$. The map $u \circ \epsilon \colon {\mathbf{H}} \to {\mathbf{H}}$ is the identity element of the $\star$-group of characters ${\mathbf{H}} \to {\mathbf{H}}$. It satisfies $u \circ \epsilon(\mathbb{I}) = \mathbb{I}$ and $u \circ \epsilon(\G) = 0$ if $\G \neq \mathbb{I}$. If $\phi$ is a character from ${\mathbf{H}}$ to a unital commutative algebra $\mathcal{A}$, then $\phi^{\star -1} := \phi \circ S$ is the inverse of $\phi$ under the star product, in the sense that $$ \phi^{\star -1} \star \phi = \phi \star \phi^{\star -1} = u_{\mathcal{A}} \circ \epsilon,$$ where $u_{\mathcal{A}}$ is the unit of $\mathcal{A}$.

Because $\tau$ is multiplicative on disjoint unions of graphs, it induces a character ${\mathbf{H}} \to \mathbb{Q}$. We can define the even simpler character $\sigma(\Gamma) = (-1)^{e(\Gamma)}$ and formulate Proposition~\ref{prop:tau_identity} in the Hopf algebra language:
\begin{proposition}
$\tau \star \sigma = \sigma \star \tau = u_\mathbb{Q} \circ \epsilon$. 
\end{proposition}

\begin{proof}
By Proposition~\ref{prop:tau_identity} and the definition of the $\star$ product, $\tau \star \sigma = u_\mathbb{Q} \circ \epsilon$. Because the characters form a group, we also have $\sigma \star \tau = u_\mathbb{Q} \circ \epsilon$.
\end{proof}

Although the Hopf algebra ${\mathbf{H}}$ and its coproduct are defined only on core graphs, we can also consider the maps $\tau$ and $\sigma$ on the space of all graphs.
The linear space ${\mathbf{G}}$ generated by all graphs can be made into a (left) ${\mathbf{H}}$-comodule by defining a \textit{coaction}, $\rho \colon {\mathbf{G}} \to {\mathbf{H}} \otimes {\mathbf{G}}$, using the formula \eqref{eqn:coproduct} with $\rho$ in place of $\Delta$. The left side of the tensor product in \eqref{eqn:coproduct} is always a core graph and can naturally be associated with an element of ${\mathbf{H}}$. In this way the star product, applied to characters of ${\mathbf{G}}$ by the same formula as to characters of ${\mathbf{H}}$, becomes an \textit{action} of the character group of ${\mathbf{H}}$. See \cite[Ch.\ 3]{borinsky2018graphs} for details. 

Applying $\sigma$ to the weighted sum of all connected graphs with no leaves gives an especially interesting result: 
\begin{proposition}
\label{prop:bernoulli_graphs_sum}
\begin{align*}
    \sum_{\substack{\Gamma\in \GG^c_0\\|\G|=n}}\frac{\sigma(\G)}{|\Aut(\G)|}=\frac{\zeta(-n)}{n}=-\frac{B_{n+1}}{n(n+1)} \text{ for all } n \geq 1.
\end{align*}
\end{proposition}
This statement is not new.
It follows as a special case from `Penner's model' \\cite{penner1986moduli} (see also \\cite[Appendix\\ D]{Kon92}). \nThe sum could be thought of as the integral of $\\sigma$ over the space of connected graphs with measure $\\mu(\\G)=1\/|\\Aut(\\G)|$, whereas integrating its convolutive inverse $\\tau(\\G)$ over the same space with the same measure gives $\\ch_n$ by Corollary~\\ref{cor:SV}. \n\nIn Section~\\ref{sec:laplace} we will give a proof of Proposition~\\ref{prop:bernoulli_graphs_sum} as a special case of Corollary~\\ref{crll:graph_laplace}. Here, we can immediately verify it for $n=1$:\n\\begin{align*}\n\\frac{\\sigma(\n{\n\\begin{tikzpicture}[x=1ex,y=1ex,baseline={([yshift=-.6ex]current bounding box.center)}] \\coordinate (vm); \\coordinate [left=.7 of vm] (v0); \\coordinate [right=.7 of vm] (v1); \\draw (v0) circle(.7); \\draw (v1) circle(.7); \\filldraw (vm) circle (1pt); \\end{tikzpicture}%\n}\n)}{|\\Aut(\n{\n\\begin{tikzpicture}[x=1ex,y=1ex,baseline={([yshift=-.6ex]current bounding box.center)}] \\coordinate (vm); \\coordinate [left=.7 of vm] (v0); \\coordinate [right=.7 of vm] (v1); \\draw (v0) circle(.7); \\draw (v1) circle(.7); \\filldraw (vm) circle (1pt); \\end{tikzpicture}%\n}\n)|}\n+\n\\frac{\\sigma(\n{\n\\begin{tikzpicture}[x=1ex,y=1ex,baseline={([yshift=-.6ex]current bounding box.center)}] \\coordinate (v0); \\coordinate [right=1.5 of v0] (v1); \\coordinate [left=.7 of v0] (i0); \\coordinate [right=.7 of v1] (o0); \\draw (v0) -- (v1); \\filldraw (v0) circle (1pt); \\filldraw (v1) circle (1pt); \\draw (i0) circle(.7); \\draw (o0) circle(.7); \\end{tikzpicture}%\n}\n)}{|\\Aut(\n{\n\\begin{tikzpicture}[x=1ex,y=1ex,baseline={([yshift=-.6ex]current bounding box.center)}] \\coordinate (v0); \\coordinate [right=1.5 of v0] (v1); \\coordinate [left=.7 of v0] (i0); \\coordinate [right=.7 of v1] (o0); \\draw (v0) -- (v1); \\filldraw (v0) circle (1pt); \\filldraw (v1) circle (1pt); \\draw (i0) circle(.7); \\draw (o0) circle(.7); \\end{tikzpicture}%\n}\n)|}\n+\n\\frac{\\sigma(\n{\n\\begin{tikzpicture}[x=1ex,y=1ex,baseline={([yshift=-.6ex]current bounding box.center)}] \\coordinate (vm); \\coordinate [left=1 of vm] (v0); \\coordinate [right=1 of vm] (v1); \\draw (v0) -- (v1); \\draw (vm) circle(1); \\filldraw (v0) circle (1pt); \\filldraw (v1) circle (1pt); \\end{tikzpicture}%\n}\n)}{|\\Aut(\n{\n\\begin{tikzpicture}[x=1ex,y=1ex,baseline={([yshift=-.6ex]current bounding box.center)}] \\coordinate (vm); \\coordinate [left=1 of vm] (v0); \\coordinate [right=1 of vm] (v1); \\draw (v0) -- (v1); \\draw (vm) circle(1); \\filldraw (v0) circle (1pt); \\filldraw (v1) circle (1pt); \\end{tikzpicture}%\n}\n)|}\n&=\n\\frac{1}{8} + \\frac{-1}{8} + \\frac{-1}{12} = \n-\\frac{1}{12} = -\\frac{B_2}{2}.\n\\end{align*}\n\n\nThe Bernoulli numbers are classical objects with a long history, and it is well-known that $B_{2n+1}$ vanishes for $n \\geq 1$ and that the sign of $B_{2n}$ is $(-1)^{n+1}$ for $n\\geq 1$. To analyse similar properties of the numbers $\\ch_n$, we will make heavy use of \\textit{asymptotic expansions}. We will go into the details after a short digression about the relation of our methods with perturbative methods used in quantum field theory. \n\n\n\\section{Renormalized topological quantum field theory}\\label{sec:tqft}\n\nOur approach to analyzing the numbers $\\ch_n$ is in line with an established technique for analyzing topological objects by using \\textit{perturbative quantum field theory} or equivalently \\textit{Feynman diagram techniques} \\cite{bessis1980quantum}. 
The term \\textit{topological quantum field theory} is used for a quantum field theory whose observables are topological invariants \\cite{witten1988topological}. See also \\cite{kontsevich1993formal,kontsevich1994feynman} for further aspects of this theory and \\cite{conant2003theorem} for a more detailed account focused on group cohomology. \n\nOne prominent application of topological quantum field theory is intersection theory in the moduli space of complex curves, as developed by Witten \\cite{witten1990two} and Kontsevich \\cite{Kon92}. Penner \\cite{penner1986moduli} had already applied perturbative quantum field theory techniques to reprove the result of Harer and Zagier %\non the rational Euler characteristic of the mapping class group. In the course of his study of intersection theory Kontsevich gave a simplified version of Penner's proof \\cite[Appendix\\ D]{Kon92}. This simplified proof involves a formula similar to the one in Proposition~\\ref{prop:bernoulli_graphs_sum}. \n\nWe can endow our approach to studying the numbers $\\ch_n$ with a quantum field theoretical interpretation, in a spirit similar to the work of Penner and Kontsevich. Here is a brief, heuristic indication of how this goes.\n\nWe start with the statement of Proposition~\\ref{prop:Tzx_graph_counting_identity} and immediately apply Proposition~\\ref{prop:Tzx_leaves_identity} to obtain the equation\n\\begin{align*}\n 1 &= \\sum_{\\ell \\geq 0} (-z)^\\ell (2\\ell-1)!! [x^{2\\ell}] \\exp\\left( \\frac{e^x-\\frac{x^2}{2}-x-1}{z} + \\frac{x}{2} + T(ze^{-x}) \\right).\n\\end{align*}\nNow flip the sign of $z$ to get\n\\begin{align}\n\\label{eqn:Tzx_graph_counting_identity_qft_expl}\n 1 &= \\sum_{\\ell \\geq 0} z^\\ell (2\\ell-1)!! [x^{2\\ell}] \\exp\\left( -\\frac{e^x-\\frac{x^2}{2}-x-1}{z} + \\frac{x}{2} + T(-ze^{-x}) \\right).\n\\end{align}\nFor the remainder of this section we regard $z$ not as a formal variable, but rather as a positive real number. We then recall the Gaussian integrals \n\\begin{align*}\n \\frac{1}{\\sqrt{2\\pi z}} \\int_\\mathbb{R} x^{2 \\ell} e^{-\\frac{x^2}{2 z}} dx &= z^\\ell (2\\ell-1)!! \\text{ for all } \\ell \\geq 0 \\\\\n \\frac{1}{\\sqrt{2\\pi z}} \\int_\\mathbb{R} x^{2 \\ell+1} e^{-\\frac{x^2}{2 z}} dx &=0 \\text{ for all } \\ell \\geq 0.\n\\end{align*}\nSubstituting these into eq.\\ \\eqref{eqn:Tzx_graph_counting_identity_qft_expl} gives\n\\begin{align*}\n 1 &= \\sum_{\\ell \\geq 0} \\frac{1}{\\sqrt{2\\pi z}} \\int_\\mathbb{R} x^{2 \\ell} e^{-\\frac{x^2}{2 z}} dx [x^{2\\ell}] \\exp\\left( -\\frac{e^x-\\frac{x^2}{2}-x-1}{z} + \\frac{x}{2} + T(-ze^{-x}) \\right).\n\\end{align*}\nThis integral is not convergent since we are no longer regarding $z$ as a formal variable, but we will disregard this issue for this heuristic argument. In the same laissez-faire spirit, we ignore convergence issues and interchange summation with integration to obtain\n\\begin{align}\n \\label{eqn:renormalization_condition}\n 1 &= \\frac{1}{\\sqrt{2\\pi z}} \\int_\\mathbb{R} \\exp\\left( -\\frac{e^x-x-1}{z} + \\frac{x}{2} + T(-ze^{-x}) \\right) dx.\n\\end{align}\nThis integral is again not well-defined, as the series $T(z)$ does not converge to a function of $z$ in any finite domain: it is only a formal power series with a vanishing radius of convergence. However, we can interpret the right hand side of this equation as a `path-integral' of a \\textit{zero-dimensional quantum field theory} with the \\textit{action} $-(e^x-x-1)$, where the parameter $z$ takes the role of \\textit{Planck's constant} $\\hbar$. 
The additional terms in the exponent $\\frac{x}{2} + T(-ze^{-x})$ can be interpreted as \\textit{counterterms} or \\textit{renormalization constants} which \\textit{renormalize} the quantum field theory in a generalized sense. In fact, equation~\\eqref{eqn:renormalization_condition} can be interpreted as a \\textit{renormalization condition} of a quantum field theory. \n\nIn Kontsevich's proof of the Harer-Zagier formula, a topological quantum field theory was constructed whose perturbative expansion encoded the geometric invariants of interest. As we have seen above, our method can also be interpreted as an application of quantum field theory to the analysis of the invariants $\\ch_n$. However, instead of using the coefficients of the perturbative expansion directly, we use the coefficients of the renormalization constants to express the quantities which are of interest. We might therefore say that we are using a \\textit{renormalized topological quantum field theory} to encode $\\ch_n$. \n\nThis is consistent with the interpretation of $\\tau$ as a character on the core Hopf algebra. Connes and Kreimer \\cite{connes2000renormalization} showed that the renormalization procedure in quantum field theory can be seen as the solution of a Riemann-Hilbert problem using a Birkhoff decomposition. The Birkhoff decomposition can be formulated elegantly as an inversion in the group of characters of a certain Hopf algebra. In our topological case, which is much simpler than the full physical picture, this interpretation boils down to the brief exposition in Section~\\ref{sec:core}. Consult \\cite{borinsky2017renormalized} for a general treatment of renormalized zero-dimensional quantum field theory in a Hopf algebraic framework. \n\nAfter these expository remarks we now return to our rigorous treatment of the Euler characteristic of $\\Out(F_n)$. \n\n\n\n\\newcommand{\\llrrparen}[1]{%\n \\left(\\mkern-3mu\\left(#1\\right)\\mkern-3mu\\right)}\n\\section{Asymptotic expansions}\n\\label{sec:asymptotic_expansions}\n\n\n\nAn often useful approach to studying a generating function such as $T(z)= \\sum_{n\\geq 1} \\ch_n z^n$ is to interpret it as an analytic function in $z$ and then use analytic techniques to study the nature of its coefficients \\cite{flajolet2009analytic}. However, in our case this standard approach is doomed to fail, at least if it is applied naively, as the coefficients of $T(z)$ turn out to grow factorially so the power series $T(z)$ has a vanishing radius of convergence.\n\nWe will circumvent this problem by using an \\textit{asymptotic expansion} of a certain function to describe the coefficients of $T(z)$. In contrast to Taylor expansions of analytic functions, asymptotic expansions are not necessarily convergent in any non-vanishing domain of $\\mathbb{C}$. \n\n\n\\subsection{Asymptotic notation}\n\nIn this section we fix the notation we use for asymptotic expansions and prove a basic property that we will use repeatedly. \nWe begin by recalling the {\\em big $\\mathcal{O}$} and {\\em small $o$} notation.\nLet $f, g$ and $h$ be functions defined on a domain $D$ and let $L$ be a limit point of $D$. 
\nThe notation $f(x)=g(x) + \\mathcal{O}(h(x))$ means $f-g\\in \\mathcal{O}(h)$, where $\\mathcal{O}(h)$ is the set of all functions $u$ defined on $D$ such that \n\\begin{align*}\n \\limsup_{x\\to L} \\left| \\frac{u(x)}{h(x)} \\right| < \\infty.\n\\end{align*}\nSimilarly, $f(x)=g(x)+o(h(x))$ means $f-g\\in o(h),$ where $o(h)$ consists of all functions $u$ that satisfy $\\lim_{x\\to L} \\frac{u(x)}{h(x)} = 0$. %\n\nAn {\\it asymptotic scale} on $D$ with respect to a limit $L$ is a sequence of functions $\\{\\varphi_k\\}_{k\\geq 0}$ with the property $\\varphi_{k+1} \\in o(\\varphi_k)$ for $k \\geq 0$. A common example, for functions with domain $\\mathbb{R}$ and limit $L=\\infty$, is $\\varphi_k(x)=x^{-k}$.\n\n\\begin{definition}\n \\label{def:asymptotic_expansion}\n An {\\it asymptotic expansion} of a function $f$ defined on $D$ with respect to the limit $L$ and the asymptotic scale $\\{\\varphi_k\\}_{k\\geq0}$ is a sequence of coefficients $c_k$ such that\n\\begin{align*}\n f(x) = \\sum_{k=0}^{R-1} c_k\\varphi_k(x) + \\mathcal{O}(\\varphi_R(x)) \\text{ for all } R \\geq 0,\n\\end{align*}\nwhere the $\\mathcal{O}$ refers to the limit $x \\rightarrow L$. %\nWe will write this infinite set of $\\mathcal{O}$ relations as, \n\\begin{align*}\n f(x)\\sim \\sum_{k\\geq 0} c_k\\varphi_k(x) \\text{ as } x \\rightarrow L.\n\\end{align*}\n\\end{definition}\nAsymptotic expansions are widely used in mathematical analysis, the physical sciences and engineering to obtain very accurate approximations to functions. \nA detailed introduction to asymptotic expansions can be found in de Bruijn's book \\cite{de1981asymptotic}. A key feature of asymptotic expansions is that, for a given function $f$, limit $L$ and asymptotic scale $\\{\\varphi_k\\}_{k \\geq 0}$, the coefficients $c_k$ are unique if they exist. We will make use of this property in the proof of Theorem~\\ref{thm:asymptotic_expansion}.\n\nThe coefficients of an asymptotic expansion depend on the choice of the asymptotic scale. However, under certain conditions we can translate between asymptotic expansions in different asymptotic scales:\n\\begin{lemma}\n \\label{lmm:scale_change}\n Suppose $\\Phi=\\{\\varphi_k\\}_{k\\geq0}$ and $\\Psi=\\{\\psi_m\\}_{m\\geq0}$ are two asymptotic scales on a domain $D$ with respect to the same limit $L$, and suppose $f$ has an asymptotic expansion in $\\Phi$\n\\begin{equation}\n \\label{eqn:scale_change_original}\n f(x) \\sim \\sum_{k \\geq 0} c_k \\varphi_k(x) \\text{ as } x\\rightarrow L.\n\\end{equation}\nIf each $\\psi_m$ also has an asymptotic expansion in $\\Phi$\n\\begin{align}\n \\label{eqn:scale_change_scale_relation}\n\\psi_m(x)\\sim \\sum_{k\\geq m} c_{m,k} \\varphi_k(x) \\text{ as } x\\rightarrow L\n\\end{align}\nwith $c_{m,m}\\neq 0$, then $f$ has an asymptotic expansion in $\\Psi$\n\\begin{align*}\n f(x) \\sim \\sum_{m \\geq 0} c'_m \\psi_m(x) \\text{ as } x\\rightarrow L,\n\\end{align*}\n where the coefficients $c_m'$ are implicitly determined by the infinite triangular equation system $c_k = \\sum_{m= 0}^{k} c_m' c_{m,k}$ for all $k\\geq 0$.\n\\end{lemma}\n \n\n\\begin{proof}\nBy the definition of an asymptotic expansion we have\n \\begin{align*}\n \\psi_m- \\sum^{R-1}_{k=m} c_{m,k} \\varphi_k\\in \\mathcal{O}(\\varphi_R) \\text{ for all } R \\geq m \\geq 0.\n \\end{align*}\nWe can multiply a function in $\\mathcal{O}(h)$ by a constant or add a finite number of functions in $\\mathcal{O}(h)$ without changing the $\\mathcal{O}$ class. 
Thus multiplying by $c_m'$ and then adding from $m=0$ to $R-1$ gives
    \begin{align*}
        \sum_{m=0}^{R-1} c'_m \psi_m - \sum_{m=0}^{R-1}\sum^{R-1}_{k=m} c'_mc_{m,k} \varphi_k \in \mathcal{O}(\varphi_R) \text{ for all } R \geq 0.
    \end{align*}
    Changing the order of summation and using the definition of the constants $c'_m$ gives 
    \begin{align*}
        \sum_{m=0}^{R-1} c'_m \psi_m- \sum_{k=0}^{R-1}\sum_{m=0}^{k} c'_mc_{m,k} \varphi_k = \sum_{m=0}^{R-1} c'_m \psi_m - \sum_{k=0}^{R-1}c_k \varphi_k \in \mathcal{O}(\varphi_R) \text{ for all } R \geq 0.
    \end{align*}
By eq.\ \eqref{eqn:scale_change_original} we have $f- \sum_{k=0}^{R-1} c_k\varphi_k \in \mathcal{O}(\varphi_R)$, so combining this with the above gives $$f- \sum_{m=0}^{R-1} c_m'\psi_m \in \mathcal{O}(\varphi_R) \text{ for all } R\geq 0.$$
It remains only to check that $\mathcal{O}(\varphi_R)=\mathcal{O}(\psi_R)$. This follows from eq.\ \eqref{eqn:scale_change_scale_relation}, which implies $\psi_R=c_{R,R}\varphi_R + \mathcal{O}(\varphi_{R+1}),$ together with the assumption that $c_{R,R}\neq 0$ and the fact that $\varphi_{R+1}\in o(\varphi_R)$. 
 \end{proof}

 In this paper the domain of our functions will mostly be the natural numbers, i.e.\ our functions are sequences $f\colon\mathbb{N} \rightarrow \mathbb{R}$, and the limit will almost always be $\infty$, but the asymptotic scale will vary. 

\subsection{Stirling's approximation}
Arguably, one of the most studied asymptotic expansions is \textit{Stirling's approximation}. This is an asymptotic expansion of the gamma function 
\begin{align}
    \label{eqn:stirling_approximation}
    \Gamma(n) \sim \sum_{k \geq 0} \widehat b_k \sqrt{2\pi} e^{-n}n^{n-\frac12-k} \text{ as } n \rightarrow \infty,
\end{align}
where $\widehat b_k$ is the coefficient of $z^k$ in $\exp\left( \sum_{j=1}^\infty \frac{B_{j+1}}{j(j+1)} z^j \right)$. See for instance \cite[Sec.\ 3.10]{de1981asymptotic} for a proof.
Stirling's approximation is used extensively as a tool for approximating the value of $\Gamma(n)$ for large $n$. %
We, however, will view eq.\ \eqref{eqn:stirling_approximation} as an asymptotic expansion of $\Gamma(n)$ in the asymptotic scale $\{\sqrt{2 \pi} e^{-n} n^{n-\frac12-k}\}_{k\geq0}$ and use it merely as a tool to encode and manipulate the coefficients $\widehat b_k$. 

Recall that the gamma function satisfies $\Gamma(z+1) = z\Gamma(z)$; this ensures that the sequence of functions $\{\Gamma( n -k + \frac12 )\}_{k\geq0}$ forms an asymptotic scale in the limit $n \rightarrow \infty$, since $\Gamma(n-k-\frac12) = \Gamma(n-k+\frac12)/(n-k-\frac12) \in o\left(\Gamma(n-k+\frac12)\right)$. The statement of Theorem~\ref{thm:asymptotic_expansion} gives an asymptotic expansion of $f(n)= \sqrt{2 \pi} e^{-n} n^{n}$ in this scale, whose coefficients coincide with those of the formal power series $\exp(T(z))$; we can think of this as a kind of ``inverted'' Stirling's approximation. 
 
Although there is a large and growing literature on Stirling's approximation (see \cite{borwein2018gamma} for a recent survey), such an asymptotic expansion of $\sqrt{2 \pi} e^{-n} n^{n}$ does not seem to have been studied previously. 
This type of `inverted Stirling's approximation' might also be relevant for other applications: many problems dictate or suggest an inherent asymptotic scale. For instance, it might be natural to work in the asymptotic scale $\{(2(n-k)-1)!!\}_{k\geq 0}$, where $(2(n-k)-1)!!
= 2^{n-k} \\Gamma(n-k+\\frac12)\/\\sqrt{\\pi}$, for counting problems whose solution involves double factorials. Moreover, power series with coefficients which have an asymptotic expansion in the scale $\\{\\Gamma(n-k+\\beta)\\}_{k\\geq 0}$ with $\\beta \\in \\mathbb{R}$ have a rich algebraic structure; for instance they are closed under multiplication and functional composition \\cite{borinsky2018generating}.\n\nTo establish the asymptotic expansion in Theorem~\\ref{thm:asymptotic_expansion} we will start with a trivial asymptotic expansion for the constant function $1$ in the scale $\\{n^{-k}\\}_{k\\geq0}$, then use Lemma~\\ref{lmm:scale_change} to change to the scale $\\{\\psi_m\\}_{m\\geq0}$, where \n\\begin{align} \\label{eqn:psi_defn}\n\\psi_m(n) = \\frac{\\Gamma( n -m +\\frac12 )}{\\sqrt{2 \\pi} e^{-n} n^{n}}.\n\\end{align}\nIn order to apply the lemma, we need to find asymptotic expansions for the functions $\n\\psi_m(n)$. We do this using the following variant of Stirling's approximation. \n\\begin{proposition}\n \\label{prop:graph_stirling}\n Let $\\Psi=\\{\\psi_m\\}_{m\\geq0}$ be the asymptotic scale with domain $\\mathbb{N}$ and limit $\\infty$ defined in eq.\\ \\eqref{eqn:psi_defn}.\n Then each $\\psi_m$ has an asymptotic expansion in the asymptotic scale $\\{n^{-k}\\}_{k\\geq0}$ given by\n\\begin{align*}\n %\n\\psi_m(n)\n\\sim \\sum_{k \\geq m} c_{m,k} n^{-k}\n\\text{ as } n \\rightarrow \\infty,\n\\end{align*}\nwhere $c_{m,k}$ is the coefficient of $z^k$ in the formal power series \n \\begin{align}\n \\label{eqn:graph_stirling_asymp_psi}\nz^m\n\\sum_{\\ell \\geq 0}\nz^{\\ell} (2\\ell-1)!!\n[x^{2\\ell}]\ne^{-\\frac{1}{z}\\left(e^x - \\frac{x^2}{2} - x - 1\\right) + x \\left( \\frac12 - m \\right)}.\n\\end{align}\n\\end{proposition}\n\nWe will prove this proposition using \\textit{Laplace's method}, which serves as a connection between graphical enumeration and asymptotic expansions. We will introduce this method in the next section and therefore postpone the proof of Proposition~\\ref{prop:graph_stirling} until then. \n\nAssuming Proposition~\\ref{prop:graph_stirling} we are now ready to prove Theorem~\\ref{thm:asymptotic_expansion}.\n\n\\begin{repthmx}{thm:asymptotic_expansion}\n The function $\\sqrt{2 \\pi}e^{-n} n^n$ has the following asymp\\-totic expansion in the asymptotic scale $\\{(-1)^k \\Gamma( n + \\frac12 - k ) \\}_{k\\geq0}$,\n\\begin{align*}\n \\sqrt{2 \\pi}e^{-n} n^n &\\sim \\sum_{ k\\geq 0 } \\Ch_k (-1)^k \\Gamma\\left( n + \\frac12 - k \\right) \\text{ as } n\\rightarrow \\infty,\n\\end{align*}\nwhere $\\Ch_k$ is the coefficient of $z^k$ in the formal power series $\\exp\\left( \\sum_{n\\geq 1} \\chi( \\Out (F_{n+1}) ) z^n \\right)$.\n\\end{repthmx}\n\n\\begin{proof}\n The constant function $f(n)\\equiv 1$ has a trivial asymp\\-totic expansion in the asymptotic scale $\\{n^{-k}\\}_{k\\geq0}$, namely\n \\begin{align*}\n 1 &\\sim \\sum_{k \\geq 0} c_k n^{-k} \\text{ as } n \\rightarrow \\infty,\n \\end{align*}\n with coefficients $c_0 = 1$ and $c_k=0$ for all $k\\geq 1$. 
Using Lemma~\\ref{lmm:scale_change} and Proposition~\\ref{prop:graph_stirling} we can change the asymptotic scale from $\\{n^{-k}\\}_{k\\geq0}$ to the scale $\\Psi$ as defined in Proposition~\\ref{prop:graph_stirling}, giving\n \\begin{align*}\n %\n 1 &\\sim \\sum_{m \\geq 0} c_m' \\psi_m(n) \\text{ as } n \\rightarrow \\infty,\n \\end{align*}\nwhere the coefficients $c_m'$ are uniquely determined by the triangular equation system \n\\begin{align}\n\\label{eqn:triangular_system} \nc_k = \\sum_{m = 0}^{k} c_m' c_{m,k} \\text{ for all } k \\geq 0\n\\end{align}\nand the coefficients $c_{m,k}$ are those defined in the statement of Proposition~\\ref{prop:graph_stirling}. Namely, \n$c_{m,k}$ is the coefficient of $z^k$ in the formal power series given in eq.\\ \\eqref{eqn:graph_stirling_asymp_psi}. It follows from this power series representation that $c_{m,m} \\neq 0$ for all $m \\geq 0$, which justifies our application of Lemma~\\ref{lmm:scale_change} and guarantees that the linear equation system~\\eqref{eqn:triangular_system} can be uniquely solved for the coefficients $c_m'$. \n\nBy definition of $\\psi_m$ in eq.\\ \\eqref{eqn:psi_defn}, this asymptotic expansion becomes\n \\begin{align*}\n 1 &\\sim \\sum_{m \\geq 0} c'_m\\frac{\\Gamma( n -m +\\frac12 )}{\\sqrt{2 \\pi} e^{-n} n^{n}} \\text{ as } n \\rightarrow \\infty.\n \\end{align*}\n Multiplying both sides by $\\sqrt{2 \\pi} e^{-n} n^{n}$ gives \n \\begin{align*}\n \\sqrt{2 \\pi} e^{-n} n^{n} &\\sim \\sum_{m \\geq 0} c'_m\\Gamma( n -m +\\frac12 ) \\text{ as } n \\rightarrow \\infty.\n \\end{align*}\n It remains to show that $c_m'=(-1)^m\\Ch_m$. \nFrom Proposition~\\ref{prop:Tzx_graph_counting_identity} \nand Proposition~\\ref{prop:Tzx_leaves_identity} we have\n \\begin{align*}\n1 &= \\sum_{\\ell \\geq 0} (-z)^\\ell (2\\ell-1)!! [x^{2\\ell}] \\exp\\left( T(z,x) \\right)\\\\\n&= \n\\sum_{\\ell \\geq 0} (-z)^\\ell (2\\ell-1)!! [x^{2\\ell}] \\exp\\left( \\frac{e^x-\\frac{x^2}{2}-x-1}{z} + \\frac{x}{2} + T(ze^{-x}) \\right)\\\\\n &=\\sum_{\\ell \\geq 0} z^{\\ell} (2\\ell-1)!! [x^{2\\ell}] e^{-\\frac{1}{z} \\left( e^x-\\frac{x^2}{2} - x - 1 \\right) + \\frac12 x} \\exp\\left( T(-ze^{-x}) \\right).\n \\end{align*}\n Expanding the second exponential in $z$ gives,\n \\begin{align*}\n 1 &= \\sum_{\\ell \\geq 0} z^{\\ell} (2\\ell-1)!! [x^{2\\ell}] e^{-\\frac{1}{z} \\left( e^x-\\frac{x^2}{2} - x - 1 \\right) + \\frac12 x } \\sum_{m \\geq 0} z^{m} e^{-mx}(-1)^m\\Ch_m \\\\\n &= \\sum_{m \\geq 0}(-1)^m\\Ch_m z^{m} \\sum_{\\ell \\geq 0} z^{\\ell} (2\\ell-1)!! [x^{2\\ell}] e^{-\\frac{1}{z} \\left( e^x-\\frac{x^2}{2} - x - 1 \\right) + \\left(\\frac12-m\\right) x},\n \\end{align*}\nwhere $\\Ch_k$ is the coefficient of $z^k$ in the formal power series $\\exp\\left( \\sum_{n\\geq 1} \\ch_n z^n \\right)$.\nBy eq.\\ \\eqref{eqn:graph_stirling_asymp_psi} this is\n$$1 = \\sum_{m \\geq 0} (-1)^m \\Ch_m \\sum_{k\\geq m} c_{m,k} z^k=\\sum_{k\\geq 0}\\sum_{m\\leq k} (-1)^m \\Ch_m c_{m,k} z^k.$$\n Because $[z^k] 1 = c_k$, we can also write this as $c_k = \\sum_{m \\leq k} (-1)^m \\Ch_m c_{m,k}$ for all $k\\geq 0$. Therefore, we constructed a solution of the triangular equation system in \\eqref{eqn:triangular_system}. 
%
Because the coefficients $c_m'$ are unique, it follows that $c_m' = (-1)^m \Ch_m$ as claimed.
\end{proof}

\begin{remark} The coefficients $c_{m,k}$ of the asymptotic expansion of the functions $\psi_{m}$ given in Proposition~\ref{prop:graph_stirling} (eq.\ \eqref{eqn:graph_stirling_asymp_psi}) can also be written in terms of Bernoulli numbers if we use the conventional expression of Stirling's approximation given in eq.\ \eqref{eqn:stirling_approximation}. Slightly abusing the $\sim$ notation we may write the asymptotic expansion for $\psi_m(n)$ as
\begin{align*}
\frac{\Gamma( n -m +\frac12 )}{\sqrt{2 \pi} e^{-n} n^{n}} &\sim 
\frac{\sqrt{2\pi} \left(n-m+\frac12\right)^{n-m} e^{-n+m-\frac12}\exp\left( \sum_{k\geq1} \frac{B_{k+1}}{k(k+1)} \left(n-m+\frac12\right)^{-k} \right)}{\sqrt{2 \pi} e^{-n} n^{n}}
\\
&=
n^{-m}\left(\frac{n-m+\frac12}{n}\right)^{n-m} e^{m - \frac12} \exp\left(\sum_{k\geq1} \frac{B_{k+1}}{k(k+1)} \left(n-m+\frac12\right)^{-k} \right).
\end{align*}
Writing $z=\frac{1}{n}$ this becomes
$$z^m\left(\left({1-z\left(m-\frac12\right)}\right)^{\frac{1}{z}-m}e^{m - \frac12} \exp\left(\sum_{k\geq1} \frac{B_{k+1}}{k(k+1)} \left(\frac{1}{z}-m+\frac12\right)^{-k} \right)\right).
$$
Since the coefficients of the asymptotic expansion for $\psi_m(n)$ are given by the above power series as well as by the power series in eq.\ \eqref{eqn:graph_stirling_asymp_psi}, the two series are equal, giving the following identity for Bernoulli numbers, valid for all $m\geq 0$:
\begin{align*}
&\sum_{\ell \geq 0}
z^{\ell} (2\ell-1)!!
[x^{2\ell}]
\exp\left(-\frac{1}{z}\left(e^x - \frac{x^2}{2} - x - 1\right) + x \left( \frac12 - m \right) \right) \\
&=\exp\left( 
    \left(m-\frac{1}{z}\right) \log \frac{1}{1-z\left(m-\frac12\right)} + m - \frac12 +\sum_{k\geq1} \frac{B_{k+1}}{k(k+1)} z^k \left(1-z\left(m-\frac12\right)\right)^{-k} \right) 
    \\
&=\exp\left( 
\sum_{k\geq1} \frac{z^k}{k(k+1)} \left( \left(m+\frac{k}{2} \right) \left(m-\frac12\right)^k + \frac{B_{k+1}}{\left(1-z\left(m-\frac12\right)\right)^{k}} \right) \right).
\end{align*}

This identity actually holds for all $m\in \mathbb{R}$. However, it is unclear how to prove such an identity without asymptotic techniques. The special case $m=\frac12$ lies at the heart of the proof of Proposition~\ref{prop:bernoulli_graphs_sum}. De Bruijn also discusses this case using Laplace's method and writes that the identity is `by no means easy to verify directly' \cite[Sec.\ 4.5]{de1981asymptotic}.
\end{remark}
 \subsection{Laplace's method: A bridge between graphical enumeration and asymptotics}
\label{sec:laplace}
 A common source of asymptotic expansions is \textit{Laplace's method}. Laplace's method is, as one might guess from the name, quite an old technique. It is usually used to extract asymptotic information from a complicated integral without evaluating it in full generality. We will use Laplace's method in the opposite way, as we are going to analyze the properties of a complicated number sequence by associating it with a relatively simple integral.
This way, the method will serve as a bridge between graphical enumeration as described in Section~\ref{sec:graphical_enumeration} and the analytic world of integrals and their asymptotic expansions.
 

\begin{lemma}[Laplace's method]
    \label{lmm:laplace_method}
Let $f$ and $g$ be real-valued functions on a domain $D \subset \mathbb{R}$ with $0$ in its interior. Suppose both $f$ and $g$ are analytic in a neighborhood of $0$, that $g(0)=g'(0)=0$, $g''(0)=-1,$ and that $g$ attains its unique global maximum at $0$. Finally, assume that the integral
    \begin{align*}
        \int_D | f(x) | e^{n g(x)} dx
    \end{align*}
    exists for sufficiently large $n$.
    Then the sequence $I(n)$ given by the integral formula
\begin{align}
    \label{eqn:integral_I}
    I(n) = \sqrt{\frac{n}{2 \pi}} \int_D f(x) e^{n g(x)} dx
\end{align}
admits an asymptotic expansion with asymptotic scale $\{n^{-k}\}_{k\geq0}$, 
\begin{align}
    \label{eqn:laplace_expansion}
    I(n) \sim \sum_{k \geq 0} c_k n^{-k} \text{ as } n \rightarrow \infty,
\end{align}
where $c_k$ is the coefficient of $z^k$ in the formal power series,
\begin{align}
\label{eqn:laplace_coeffs}
    \sum_{\ell \geq 0} z^\ell (2\ell-1)!! [x^{2\ell}] f(x) e^{\frac{1}{z} \left( g(x) + \frac{x^2}{2} \right) }.
\end{align}
\end{lemma}
A quite similar statement is given in \cite[Thm. B7]{flajolet2009analytic}. Unfortunately, only a partial proof is given there. For the convenience of the reader we provide a proof in the appendix. The argument revolves around approximating the integral in eq.\ \eqref{eqn:integral_I} with a Gaussian integral. It closely follows the arguments in \cite[Sec.\ 4.4]{de1981asymptotic} and \cite[Thm. B7]{flajolet2009analytic}.

We wrote the coefficients of the asymptotic expansion in eq.\ \eqref{eqn:laplace_coeffs} suggestively, to illustrate the close relationship between asymptotic expansions which come from Laplace's method and generating functions of graphs such as the one in Proposition~\ref{prop:convoluted_graph_sum}. We will use this relationship in the following corollary, which we will need to establish the relation between graphs and the zeta function stated in Proposition~\ref{prop:bernoulli_graphs_sum}. 
\begin{corollary}
    \label{crll:graph_laplace}
    Let $f$ be the constant function $f(x)\equiv 1$, and assume $g$ is analytic near $0$ with Taylor series $$g(x)=-\frac{x^2}{2} + \sum_{s\geq 3} x^s \frac{b_s}{s!}.$$
    Then for all $k\geq 0$ the coefficients $c_k$ of the asymptotic expansion in eq.\ \eqref{eqn:laplace_coeffs} can be written as a weighted sum over graphs,
\begin{align}
    c_k = \sum_{ \substack{ \G \in \GG_0\\ |\G| = k} }\frac{ \prod_{v \in V(\Gamma)} b_{|v|} }{|\Aut \G|},
\end{align}
where $|v|$ is the \textit{valence} of the vertex $v$. \end{corollary}
\begin{proof}
    Let $\phi: \GG \rightarrow \mathbb{R}\llrrparen{z}$ be the function from the set of graphs to the space of Laurent series in $z$ defined by setting $\phi(\G) = 0$ if $\G$ contains an edge and $\phi(\G) = \prod_{v \in V(\G)} \left(z^{-1}b_{|v|}\right)$ if $\G$ has no edges.
There are only finitely many graphs with $2\ell$ leaves which have no edges, and the function $\phi$ is multiplicative on the disjoint union of graphs, so we may apply Proposition~\ref{prop:convoluted_graph_sum} and Lemma~\ref{lmm:exponential_formula} to get
\begin{align*}
    \sum_{\G \in \GG_0}\frac{w^{e(\G)} \prod_{v \in V(\G)} (z^{-1}b_{|v|})}{|\Aut \G|} = \sum_{\ell \geq 0} w^\ell (2\ell-1)!! [x^{2\ell}] \exp\left( \sum_{\gamma \in \GG^c} x^{s(\gamma)} \frac{\phi(\gamma)}{|\Aut \gamma|} \right),
\end{align*}
where we used the fact that a graph has only one subgraph with no edges. The only graphs without edges which are also connected are the star graphs $R_{0,s}$. This together with the fact that $R_{0,s}$ has the symmetric group $\Sigma_s$ as automorphism group gives 
\begin{align*}
\sum_{\gamma \in \GG^c} x^{s(\gamma)} \frac{\phi(\gamma)}{|\Aut \gamma|} = \sum_{s\geq 3} x^s\frac{\phi(R_{0,s})}{|\Aut R_{0,s}|}
= \frac{1}{z} 
\sum_{s\geq 3} x^s \frac{b_s}{s!}.
\end{align*}
 Setting $w = z$ results in
\begin{align*}
    \sum_{\G \in \GG_0}\frac{\prod_{v \in V(\G)} b_{|v|}}{|\Aut \G|}z^{|\G|} = \sum_{\ell \geq 0} z^{\ell} (2\ell-1)!! [x^{2\ell}] \exp\left( \frac{1}{z}
    \sum_{s\geq 3} x^s \frac{b_s}{s!} \right). 
\end{align*}
The right hand side is now exactly the power series given in eq.\ \eqref{eqn:laplace_coeffs} that determines $c_k$.
\end{proof}

\begin{proof}[Proof of Proposition~\ref{prop:bernoulli_graphs_sum}]
We start with Euler's integral representation of the gamma function 
$$\Gamma(n)=\int_{0}^\infty u^ne^{-u}\frac{du}{u}.$$ Substituting $u = n e^x$ gives
\begin{align*}
    \Gamma( n ) &= \int_{-\infty}^\infty n^n e^{nx} e^{-ne^x}dx
    = e^{-n} n^{n} \int_{-\infty}^\infty e^{-n\left(e^x-x - 1 \right) } dx.
\end{align*}
We can now apply Lemma~\ref{lmm:laplace_method} with $g(x) =-(e^x-x-1)$, $f(x)=1$ and $D=\mathbb{R}$ to get
 an asymptotic expansion
\begin{align*}
    \Gamma( n ) &\sim \sqrt{2\pi} e^{-n} n^{n-\frac12} \sum_{k \geq 0} c_k n^{-k} \text{ as } n \rightarrow \infty.
\end{align*}
By Corollary~\ref{crll:graph_laplace} and because $-(e^x-x-1) = -\sum_{s\geq3} \frac{x^s}{s!}$ the coefficients satisfy
\begin{align*}
    c_k = \sum_{ \substack{ \G \in \GG_0\\ |\G| = k} }\frac{ (-1)^{v(\Gamma)} }{|\Aut \G|} \text{ for all } k \geq 0.
\end{align*}
Stirling's approximation in eq.\ \eqref{eqn:stirling_approximation} gives another expression for the coefficients $c_k$. Because the coefficients of an asymptotic expansion with respect to a fixed scale and limit are unique, the two expressions must coincide, and we get
\begin{align*}
\sum_{ \substack{ \G \in \GG_0} }\frac{ (-1)^{v(\Gamma)} }{|\Aut \G|} z^{|\Gamma|}
=
\exp\left( \sum_{k=1}^\infty \frac{B_{k+1}}{k(k+1)} z^k \right).
\end{align*}
Since taking the formal logarithm restricts the sum on the left to connected graphs (Lemma~\ref{lmm:exponential_formula}) we get
\begin{align*}
\sum_{ \substack{ \G \in \GG_0^c} }\frac{ (-1)^{v(\Gamma)} }{|\Aut \G|} z^{|\Gamma|}
=
 \sum_{k=1}^\infty \frac{B_{k+1}}{k(k+1)} z^k.
\end{align*}
Now notice that 
$\sigma(\Gamma) = (-1)^{e(\Gamma)} = (-1)^{|\Gamma|} (-1)^{v(\Gamma)}$ and $B_{k+1} =0$ for all even $k > 0$, giving 
\begin{align*} 
\sum_{ \substack{ \G \in \GG_0^c\\ |\G| = n} }\frac{ \sigma(\Gamma) }{|\Aut \G|}&= (-1)^n\frac{B_{n+1}}{n(n+1)}= - \frac{B_{n+1}}{n(n+1)} =\frac{\zeta(-n)}{n}.
\qedhere
\end{align*}
\end{proof}

We now turn to the proof of Proposition~\ref{prop:graph_stirling}, which follows along similar lines.
\begin{proof}[Proof of Proposition~\ref{prop:graph_stirling}]
 Assume $n,m \geq 0$ with $n \geq \max\{1,m\}$. Start with Euler's integral and substitute $u = n e^x$ to obtain
\begin{align*}
    \Gamma\left( n -m +\frac12 \right) &= \int_0^\infty u^{n-m+\frac12} e^{-u} \frac{du}{u} 
    = e^{-n} n^{n-m+\frac12} \int_{-\infty}^\infty e^{-n\left(e^x-x - 1 \right) + x \left( \frac12 - m \right) } dx.
\end{align*}
Therefore,
\begin{align*}
\psi_m (n)= 
\frac{\Gamma( n -m +\frac12 )}{\sqrt{2 \pi} e^{-n} n^{n} } = 
n^{-m}\sqrt{\frac{n}{2\pi}} 
\int_{-\infty}^\infty e^{-n\left(e^x-x-1 \right) + x \left( \frac12 - m \right) } dx.
\end{align*}
The condition $n \geq \max\{1,m\}$ guarantees that the integral exists. 
The functions $f(x) =e^{ x \left( \frac12 - m \right)}$ and $g(x) = -(e^x-x - 1)$, defined on $D= \mathbb{R}$, satisfy the conditions of Lemma~\ref{lmm:laplace_method}, so we can apply Laplace's method to obtain
    \begin{align*}
        n^m \psi_m(n) \sim \sum_{k \geq 0} c_{m,k}' n^{-k} \text{ as } n \rightarrow \infty,
\end{align*}
where $c'_{m,k}$ is the coefficient of $z^k$ in the power series
\begin{align*}
\sum_{\ell \geq 0} z^{\ell} (2\ell-1)!! [x^{2\ell}] e^{- \frac{1}{z} \left(e^x- \frac{x^2}{2} -x - 1 \right) + x \left( \frac12 - m \right)}.
\end{align*}
From Definition \ref{def:asymptotic_expansion} and the fact that $n^{-m}\mathcal{O}(n^{-R+m}) = \mathcal{O}(n^{-R})$, it follows that
\begin{align*}
    \psi_m(n) \sim \sum_{k \geq m} c_{m,k-m}' n^{-k} \text{ as } n \rightarrow \infty.
\end{align*}
Setting $c_{m,k} := c_{m,k-m}'$ gives eq.\ \eqref{eqn:graph_stirling_asymp_psi}.
\end{proof}

We have now completed all of the steps in the proof of Theorem~\ref{thm:asymptotic_expansion}.
Before we continue with the proof of Theorem~\ref{thm:SVconj}, we briefly discuss the relationship of our considerations to Kontsevich's Lie graph complex.
\subsection{Lie graph complex}
\label{sec:kontsevich} Kontsevich's {\em Lie graph complex} $\mathfrak{L}_*$ computes the Chevalley-Eilenberg homology of a certain infinite-dimensional Lie algebra $\ell_\infty$ associated to the Lie operad. %
In \cite{kontsevich1993formal} Kontsevich remarked that the orbifold Euler characteristic of the subcomplex $\mathfrak{L}^{(n)}_*$ spanned by connected graphs with fundamental group $F_n$ can be encoded in the coefficients of the asymptotic expansion of the integral
\begin{align}
    \label{eqn:integral_kontsevich}
    \sqrt{\frac{n}{2 \pi}} \int_{D} \exp\left(-n \sum_{s \geq 2} \frac{x^s}{s(s-1)}\right) dx \sim
\sum_{k \geq 0} c_k n^{-k} \text{ as } n \to \infty,
\end{align}
where $D$ is a small domain that contains a neighborhood of $0$ and $c_k$ is the $z^k$ coefficient of the power series $\exp( \sum_{n \geq 1} \chi(\mathfrak{L}_*^{(n+1)}) z^n )$. %
Observing that $- \sum_{s \geq 2} \frac{x^s}{s(s-1)} = -\sum_{s \geq 2} (s-2)!
\frac{x^s}{s!}$ and using Corollary~\ref{crll:graph_laplace} together with the exponential formula (Lemma~\ref{lmm:exponential_formula}), we may conclude that 
\begin{align*}
    \chi(\mathfrak{L}_*^{(n)}) = \sum_{\substack{\G \in \GG^c_0\\ \pi_1(\G) \cong F_n}} \frac{\xi(\G)}{|\Aut \G|},
\end{align*}
where $\xi$ is the function given by $$\xi(\G) = (-1)^{|V(\G)|} \prod_{v \in V(\G)} (|v|-2)!$$ This formula for $\chi(\mathfrak{L}_*^{(n)})$ also follows directly from counting graphs whose vertices are dressed with Lie operad elements. 
We have $\chi(\mathfrak{L}_*^{(n)}) = \ch_{n-1}$, because
\begin{align*}
    H_k(\mathfrak{L}_*^{(n)}) \cong H^{2n-2-k}(\Out(F_n)).
\end{align*}
This was first observed by Kontsevich \cite{kontsevich1993formal}; see \cite{conant2003theorem} for a detailed proof. The statements in Theorems~\ref{thm:SVconj} and \ref{thm:asymptotic_expansion} therefore apply verbatim to the orbifold Euler characteristic of $\mathfrak{L}_*^{(n)}$. It is, however, unclear what role the map $\xi$ and the Lie graph complex play in the interesting Hopf algebraic duality between $\tau$ and $\sigma$ explained in Section~\ref{sec:core}. 

The integral in eq.\ \eqref{eqn:integral_kontsevich} gives another representation of the coefficients $\ch_n,$ but the descriptive power of this representation is limited: it seems that the integral does not evaluate to a `known' function which could facilitate the extraction of information about the coefficients $\ch_n$. Recall that two functions with the same asymptotic expansion need not be equal, so it does not follow from the considerations above and Theorem~\ref{thm:asymptotic_expansion} that the left hand side of eq.\ \eqref{eqn:integral_kontsevich} is equal to $\sqrt{2 \pi}e^{-n} n^n$. %

\section{The Lambert \texorpdfstring{$W$}{W}-function}
In this section we prove that the coefficients of the asymptotic expansion in Theorem~\ref{thm:asymptotic_expansion} are all negative. The first statement of Theorem~\ref{thm:SVconj}, that $\chi(\Out(F_n)) < 0$ for all $n \geq 2$, then follows from Lemma~\ref{lmm:exp_negative}.

\subsection{Singularity analysis}

We will accomplish this by using a second method to obtain the asymptotic expansion of the sequence $\sqrt{2 \pi} e^{-n} n^n$ with respect to the asymptotic scale $\{(-1)^k\Gamma(n-k+\frac{1}{2})\}_{k\geq0}$. This second method is \textit{singularity analysis}. By Theorem~\ref{thm:asymptotic_expansion} and the uniqueness of asymptotic expansions, we thereby obtain another expression for the coefficients $\hat\chi_n$ of $\exp(\sum_{n\geq 1} \ch_n z^n)$. This expression will involve the Lambert $W$-function, which is defined as a solution of the functional equation $W(z) e^{W(z)}= z$ \cite{corless1996lambertw}. Eventually, we will use a theorem of Volkmer \cite{volkmer2008factorial} to show that the coefficients of the asymptotic expansion are negative.
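Before entering the analysis, we record a quick numerical sanity check of these definitions. The following sketch is an illustration only; it assumes that SciPy's \texttt{scipy.special.lambertw} follows the branch conventions of \cite{corless1996lambertw}, with $k=0$ the principal branch and $k=-1$ the other real branch shown in Figure~\ref{fig:lambertW}.
\begin{verbatim}
import numpy as np
from scipy.special import lambertw

# Principal branch W_0: real for z >= -1/e.
for z in [-0.3, -0.1, 0.5, 1.0]:
    w0 = lambertw(z, k=0)
    assert np.isclose(w0 * np.exp(w0), z)   # defining equation W e^W = z

# Branch W_{-1}: real and decreasing on (-1/e, 0), with values <= -1.
for z in [-0.35, -0.2, -0.05]:
    wm1 = lambertw(z, k=-1)
    assert np.isclose(wm1 * np.exp(wm1), z)
    assert abs(wm1.imag) < 1e-12 and wm1.real <= -1

# Both real branches meet at the branch point z = -1/e, where W = -1.
print(lambertw(-1/np.e + 1e-12, k=0), lambertw(-1/np.e + 1e-12, k=-1))
\end{verbatim}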
\n\n\\begin{figure}\n\\begin{tikzpicture} \\begin{axis}[ height=\\figureheight, tick align=outside, tick pos=left, width=\\figurewidth, x grid style={white!69.01960784313725!black}, xlabel={\\(\\displaystyle z\\)}, xmajorgrids, xmin=-1, xmax=1, xtick style={color=black}, y grid style={white!69.01960784313725!black}, ylabel={\\(\\displaystyle W(z)\\)}, ymajorgrids, ymin=-5, ymax=5, ytick style={color=black} ] \\addplot [semithick, black] table {%\n-0.367879441171442 -1\n-0.367322540037808 -0.945960576192766\n-0.365569562332727 -0.891921152385532\n-0.362489397863951 -0.837881728578298\n-0.357940064029995 -0.783842304771064\n-0.351767902646442 -0.72980288096383\n-0.343806721099256 -0.675763457156596\n-0.333876874118769 -0.621724033349363\n-0.321784282228105 -0.567684609542129\n-0.307319382664605 -0.513645185734895\n-0.290256008301518 -0.459605761927661\n-0.270350189808666 -0.405566338120427\n-0.247338875984067 -0.351526914313193\n-0.220938566862327 -0.297487490505959\n-0.19084385385887 -0.243448066698725\n-0.156725860840461 -0.189408642891491\n-0.118230579620611 -0.135369219084257\n-0.0749770929619213 -0.0813297952770232\n-0.0265556777246851 -0.0272903714697893\n0.0274742196695564 0.0267490523374447\n0.0875861437909174 0.0807884761446787\n0.154288935633532 0.134827899951913\n0.228129192355259 0.188867323759146\n0.309693891352915 0.24290674756638\n0.399613189339125 0.296946171373614\n0.498563407764693 0.350985595180848\n0.607270216650625 0.405025018988082\n0.72651202965913 0.459064442795316\n0.857123624045936 0.51310386660255\n1 0.567143290409784\n}; \\addplot [semithick, black, dashed] table {%\n-0.0336897349954273 -5\n-0.0376055021646619 -4.86206896551724\n-0.0419426179058552 -4.72413793103448\n-0.0467400631717225 -4.58620689655172\n-0.0520391328982212 -4.44827586206897\n-0.0578832678204224 -4.31034482758621\n-0.0643177859261937 -4.17241379310345\n-0.0713894875400297 -4.03448275862069\n-0.0791461025317493 -3.89655172413793\n-0.0876355415898728 -3.75862068965517\n-0.0969049056948192 -3.62068965517241\n-0.106999198646334 -3.48275862068966\n-0.117959676476869 -3.3448275862069\n-0.129821754505718 -3.20689655172414\n-0.142612377291115 -3.06896551724138\n-0.156346738389076 -2.93103448275862\n-0.171024215124495 -2.79310344827586\n-0.186623357930594 -2.6551724137931\n-0.203095743525044 -2.51724137931034\n-0.220358465453742 -2.37931034482759\n-0.238284993397005 -2.24137931034483\n-0.256694082986828 -2.10344827586207\n-0.275336359427955 -1.96551724137931\n-0.293878129430117 -1.82758620689655\n-0.31188189506695 -1.68965517241379\n-0.328782948102461 -1.55172413793103\n-0.343861311642715 -1.41379310344828\n-0.356208164845436 -1.27586206896552\n-0.364685732545771 -1.13793103448276\n-0.367879441171442 -1\n}; \\addplot [semithick, black, dotted] table {%\n-0.367879441171442 -5\n-0.367879441171442 5\n}; \\addplot [semithick, black, dotted] table {%\n0 -5\n0 5\n}; \\end{axis} \\end{tikzpicture}\n\\caption{Plot of the two real branches of the Lambert $W$-function. The solid line depicts the principal branch $W_0$, the dashed line the other real branch, $W_{-1}$. Both branches share a square root type singularity at $z = -1\/e$. The $W_{-1}$ additionally has a logarithmic singularity at $z=0$. 
The locations of the singularities are indicated with dotted lines.
}
\label{fig:lambertW}
\end{figure}

\begin{proposition}
    \label{prop:lambert_rep}
    The coefficient $\Ch_k$ of $z^k$ in $\exp\left(\sum_{n\geq 1} \ch_n z^n\right)$ satisfies
\begin{align}
    \Ch_k = -2 \frac{\Gamma(k +\frac12 )}{\sqrt{2\pi}} v_{2k-1} \text{ for all } k\geq 0,
\end{align}
where the $\{v_k\}_{k\geq -1}$ are the coefficients of the following expansion involving the derivative of the principal branch of the Lambert $W$-function in the vicinity of its branch point at $z=-\frac{1}{e}$, 
\begin{align}
    \label{eqn:ck_def_lambert}
    z W_0'(z) &= \sum_{k= -1}^{\infty} (-1)^{k+1} v_{k} (1+ez)^{\frac{k}{2}}.
\end{align}
\end{proposition}
In Figure~\ref{fig:lambertW}, the principal branch $W_0$ of the Lambert $W$-function is depicted with a solid line. Note that the index $k$ in the summation starts with $-1$. We chose this notation to be consistent with Volkmer \cite{volkmer2008factorial}, who proved several interesting properties of the numbers $v_k$, motivated by a problem posed by Ramanujan. Most importantly for our considerations, he shows in \cite[Thm.\ 3]{volkmer2008factorial} that $v_k > 0$ for all $k \geq 1$. He proves this by deriving the following integral representation for the coefficients $v_k$ \cite[Thm.\ 2]{volkmer2008factorial},
\begin{align*}
    v_k &= - \frac{1}{2\pi} \int_0^\infty (1+z)^{-\frac{k}{2}-1} \frac{\Im W_{-1}(e^{-1} z)}{|1+W_{-1}(e^{-1}z)|^2} dz \text{ for all }k \geq 1,
\end{align*}
where $\Im$ denotes the imaginary part of a complex number and $W_{-1}$ is the branch of the Lambert $W$-function which is real and decreasing on the interval $(-\frac{1}{e}, 0)$. This branch is drawn with a dashed line in Figure~\ref{fig:lambertW}. The integrand is strictly negative since $\Im W_{-1}(z) \in (-2\pi, -\pi)$ for $z \in (0,\infty)$ \cite{corless1996lambertw}.

\begin{corollary}
    \label{crll:negative_T}
    For all $n\geq 2$, $\chi\left( \Out(F_{n}) \right) < 0$.
\end{corollary}
\begin{proof}
    Apply Proposition~\ref{prop:lambert_rep}, the fact that $\Gamma(k+\frac12) > 0$ and \cite[Thm.\ 3]{volkmer2008factorial} to get 
$\Ch_n < 0$ for all $n \geq 1$. The result now follows from Lemma~\ref{lmm:exp_negative}.
\end{proof}

As already mentioned, we will use singularity analysis to prove Proposition~\ref{prop:lambert_rep}. The basic observation behind singularity analysis is the following: the radius of convergence of the Taylor expansion of a function $f$ is equal to the modulus of the singularity of $f$ in $\mathbb{C}$ which is closest to the origin. This singularity is called the \textit{dominant singularity} of the function. By the Cauchy-Hadamard formula, the radius of convergence $r$ also satisfies $r^{-1} = \limsup_{n\rightarrow \infty} |a_n|^{\frac{1}{n}}$, where $f(z) = \sum_{n=0}^\infty a_n z^n$. The radius of convergence therefore determines the exponential growth rate of the coefficients $a_n$. In many cases, the detailed nature of the function's dominant singularity determines the asymptotic behaviour of the coefficients completely. To illustrate these notions, we will start by proving one of the most basic statements from the framework of singularity analysis. For other required statements from this framework, we will refer to the literature. 
A very detailed and instructive introduction to singularity analysis can be found in Flajolet's and Sedgewick's book \cite[Part 2]{flajolet2009analytic}.
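Before turning to that, we insert a brief numerical aside: Volkmer's integral representation above is easy to evaluate. The following sketch is an illustration only, not part of any proof; it assumes that SciPy's \texttt{lambertw} with branch index $-1$ implements the branch $W_{-1}$ above. It estimates $v_1$ and, via Proposition~\ref{prop:lambert_rep} with $k=1$, the coefficient $\Ch_1 = \chi(\Out(F_2))$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, lambertw

def v(j):
    """Estimate v_j for j >= 1 from Volkmer's integral representation."""
    def integrand(z):
        w = lambertw(z / np.e, k=-1)  # W_{-1} is complex for z > 0
        return (1 + z) ** (-j / 2 - 1) * w.imag / abs(1 + w) ** 2
    val, _err = quad(integrand, 0, np.inf, limit=200)
    return -val / (2 * np.pi)

v1 = v(1)                                      # positive, per Volkmer's theorem
ch1 = -2 * gamma(1.5) / np.sqrt(2 * np.pi) * v1
print(v1, ch1)
\end{verbatim}
This prints $v_1 \approx 0.0589$, in agreement with the exact value $v_1 = \frac{\sqrt{2}}{24}$ that can be derived from the branch-point expansion of $W_0$ recalled in the proof below, and $\Ch_1 \approx -\frac{1}{24}$, consistent with $\chi(\Out(F_2)) = \chi(\mathrm{GL}_2(\mathbb{Z})) = -\frac{1}{24}$.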
\n\\begin{lemma}\n \\label{lmm:dominant_singularity}\n If $g$ is a generating function with power series expansion $g(z)= \\sum_{n=0}^\\infty b_n z^n$ which has radius of convergence $r$, then \n\\begin{align*}\n b_n \\in o\\left(C^{-n}\\right) \\text{ for all } 0< C < r.\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nBy elementary calculus, $r^{-1} = \\limsup_{n \\rightarrow \\infty} |b_n|^{\\frac{1}{n}}.$ Therefore, for every $\\delta > 0$ there exists an $n_0$ such that $|b_n|^{\\frac{1}{n}} < r^{-1}+\\delta$ for all $n \\geq n_0.$ It follows that $$|b_n| < (r^{-1} +\\delta)^n = \\left( \\frac{r}{1 + \\delta r} \\right)^{-n} \\text{ for all } n \\geq n_0.$$\nBecause we can choose any $\\delta > 0$, the statement follows. This argument also works if $r = \\infty$. \n\\end{proof}\n\nSuppose we can decompose a generating function $h(z)=\\sum_{n\\geq 0}d_nz^n$ as a sum $h(z)=f(z)+g(z)$ with $f(z)=\\sum_{n\\geq 0}a_nz^n$ and $g$ analytic in a disk around $0\\in \\mathbb{C}$ of radius larger than $1$.\nThen by Lemma~\\ref{lmm:dominant_singularity} there is a constant $C>1$ such that \n\\begin{align*}\n d_n = a_n + o(C^{-n}).\n\\end{align*}\nThis is especially useful if the coefficients $a_n$ have an asymptotic expansion,\n\\begin{align*}\n a_n \\sim \\sum_{k \\geq 0 } c_k \\varphi_k(n) \\text{ as } n \\to \\infty,\n\\end{align*}\nwith an asymptotic scale $\\{\\varphi_k\\}_{k \\geq 0}$ which satisfies $o(C^{-n}) \\subset \\mathcal{O}(\\varphi_k(n))$ for all $k \\geq 0$. In this common case, we may neglect terms contributed by $g$ to the generating function $h$\nand conclude that \n\\begin{align*}\n d_n \\sim \\sum_{k \\geq 0 } c_k \\varphi_k(n) \\text{ as } n \\to \\infty.\n\\end{align*}\n\n\\begin{figure}\n\\begin{center}\n\\begin{tikzpicture} [scale=1.5] \\fill [gray!20] ([shift=(20:1.3cm)] 0,0) arc (20:340:1.3); \\fill [white] (20:1.3) to (.5,0) to (340:1.3) to (20:1.3); \\draw [thick] ([shift=(0:.8)] 0,0) arc (0:20:.5); \\draw[->] (-2,0) to (2,0); \\draw[->] (0,-2) to (0,2); \\fill (0:.5) circle (.025); \\node [above] (one) at (0:.5) {$1$}; \\node [below] (Re) at (2,0) {$\\Re z$}; \\node [left] (Im) at (0,2) {$\\Im z$}; \\node (fi) at (1,.15) {$\\phi$}; \\node (Delta) at (-.4,.4) {$\\Delta$}; \\end{tikzpicture}\n\\caption{The region $\\Delta \\subset \\mathbb{C}$ in the statement of Lemma~\\ref{lmm:singularity_analysis}}\\label{fig:pacman}\n\\end{center}\n\\end{figure}\n\nTo prove Proposition~\\ref{prop:lambert_rep} we will need\n\\begin{lemma}[Basic singularity analysis {\\cite[Cor.\\ 3]{flajolet1990singularity}}]\n \\label{lmm:singularity_analysis}\n Let $f:\\mathbb{C} \\rightarrow \\mathbb{C}$ be analytic at $0$ with an isolated singularity at $1$, such that $f(z)$ can be analytically continued to an open domain of the form $\\Delta = \\{ z : |z| < \\rho, z \\neq 1, | \\arg(z-1)| > \\phi \\} \\subset \\mathbb{C}$ with some $\\rho > 1$ and $0 < \\phi < \\pi\/2$ (see Figure~\\ref{fig:pacman}); we write $\\rho$ for this radius to avoid a clash with the truncation order $R$ below.
\n %\n If $f(z)$ has the following asymptotic behaviour in $\\Delta$ for $R \\geq 0$,\n \\begin{align}\n \\label{eqn:singular_limit_exp}\n f(z) &= \\sum_{k=0}^{R-1} c_k (1-z)^{\\alpha_k} + \\mathcal{O}\\left((1-z)^{A}\\right) \\text{ as } z \\rightarrow 1^{-},\n \\end{align}\n where $c_k\\in \\mathbb{R}$ and $\\alpha_0 \\leq \\alpha_1 \\leq \\ldots \\leq \\alpha_{R-1} < A \\in \\mathbb{R}$, then the coefficients $a_n = [z^n] f(z)$ have the asymptotic behaviour, \\begin{align}\n \\label{eqn:singular_limit_asymp}\n a_n = \\sum_{k = 0}^{R-1} c_k \\binom{ n - \\alpha_{k} -1}{n} + \\mathcal{O}(n^{-A-1}) \\text{ as } n \\rightarrow \\infty.\n \\end{align}\n\\end{lemma}\n Note that eq.\\ \\eqref{eqn:singular_limit_asymp} is not an asymptotic expansion in the sense of Definition \\ref{def:asymptotic_expansion}, because we did not specify an asymptotic scale. \n\n\\begin{proof}[Proof of Proposition~\\ref{prop:lambert_rep}]\nThe principal branch of the Lambert $W$-function has the series representation \\cite{corless1996lambertw},\n\\begin{align*}\n W_0(z) &= \\sum_{n\\geq1} (-1)^{n+1} \\frac{n^{n-1}}{n!} z^n.\n\\end{align*}\nBy acting with $z \\frac{d}{d z}$, we obtain the expansion\n\\begin{align}\n \\label{eqn:Wprime_expansion}\n z W_0'(z) &= \\sum_{n\\geq1} (-1)^{n+1} \\frac{n^{n}}{n!} z^n.\n\\end{align}\nThe function $W_0$ is analytic in the cut plane $\\mathbb{C} \\setminus (-\\infty,-1\/e]$ and has an expansion in the vicinity of the branch point at $z=-1\/e$,\n\\begin{align*}\n W_0(z) &= -1 + \\sqrt{2(1+ez)} - \\frac{2}{3} (1+ez) + \\frac{11}{72} \\left(\\sqrt{2(1+ez)}\\right)^{3} + \\ldots\n\\end{align*}\nwhich is convergent if $z \\in [-1\/e,0)$ \\cite[Sec.\\ 4]{corless1996lambertw} (see also Figure~\\ref{fig:lambertW}).\nTherefore, the function $z W_0'(z)$ has an expansion of the form\n\\begin{align*}\n z W_0'(z) &= \\sum_{k = -1}^\\infty (-1)^{k+1} v_{k} (1+ez)^{\\frac{k}{2}}.\n\\end{align*}\nUsing the basic version of singularity analysis from Lemma~\\ref{lmm:singularity_analysis}, we can obtain the asymptotic behaviour of the sequence $e^{-n} \\frac{n^{n}}{n!}$ from this: \nfirst we rescale the $z$-variable of $z W_0'(z)$ to obtain the expansion,\n\\begin{align*}\n -\\frac{z}{e} W_0'\\left(-\\frac{z}{e} \\right) &= \\sum_{k = -1}^{R-1} (-1)^{k+1} v_k (1-z)^{\\frac{k}{2}} + \\mathcal{O}\\left((1-z)^{\\frac{R}{2}}\\right) \\text{ as } z \\rightarrow 1^{-} \\text{ for all } R \\geq 0.\n\\end{align*}\nAs $z W_0'(z)$ is analytic in the cut plane $\\mathbb{C} \\setminus (-\\infty,-1\/e]$, the function $-\\frac{z}{e} W_0'\\left(-\\frac{z}{e} \\right)$ is analytic in the cut plane $\\mathbb{C} \\setminus [1,\\infty)$. \nAs $\\Delta \\subset \\mathbb{C} \\setminus [1,\\infty)$, we can apply Lemma~\\ref{lmm:singularity_analysis} and eq.\\ \\eqref{eqn:Wprime_expansion} to get\n\\begin{align*}\n [z^n]\\frac{-z}{e} W_0'\\left(-\\frac{z}{e} \\right) &= \n -e^{-n} \\frac{n^{n}}{n!} =\n \\sum_{k = -1}^{R-1} (-1)^{k+1} v_{k} \\binom{n-\\frac{k}{2} -1}{n} + \\mathcal{O}\\left(n^{-\\frac{R}{2}-1}\\right) \\text{ for all } R \\geq 0,\n\\end{align*}\nwhere we used $\\alpha_k = \\frac{k}{2}$ and $A = \\frac{R}{2}$.\nThe even contributions in the sum over $k$ vanish since the first argument of the binomial coefficient is then an integer that is smaller than the second.
Therefore, \n\\begin{align*}\n -e^{-n} \\frac{n^{n}}{n!} &= \\sum_{k = 0}^{R-1} v_{2k-1} \\binom{n-k -\\frac12}{n} + \\mathcal{O}\\left(n^{-R- \\frac12}\\right) \\text{ for all } R \\geq 0.\n\\end{align*}\nThe binomial coefficient can be expressed in terms of $\\Gamma$ functions, $\\binom{n-k -\\frac12}{n} = \\frac{\\Gamma(n-k +\\frac12)}{n! \\Gamma(\\frac12 - k)}$. As a consequence of the reflection formula $\\Gamma(z)\\Gamma(1-z) =\\frac{\\pi}{\\sin(\\pi z)}$, we have $\\Gamma\\left(\\frac12 - k\\right) = \\frac{(-1)^k \\pi}{\\Gamma(k + \\frac12)}$. Hence,\n\\begin{align*}\n -e^{-n} \\frac{n^{n}}{n!} &= \\frac{1}{\\pi n!}\n \\sum_{k = 0}^{R-1}(-1)^k v_{2k-1} \\Gamma\\left( n - k + \\frac12 \\right) \\Gamma\\left(k+\\frac12\\right) + \\mathcal{O}\\left(n^{-R-\\frac12}\\right) \\text{ for all } R \\geq 0.\n\\end{align*}\nThe statement follows from the uniqueness of asymptotic expansions and the property of the $\\Gamma$ function that $\\mathcal{O}\\left((n!) n^{-R-\\frac12}\\right) = \\mathcal{O}\\left(\\Gamma\\left(n-R+\\frac12\\right)\\right)$.\n\\end{proof}\n\n\n Proposition~\\ref{prop:lambert_rep} together with known techniques for evaluating the various expansion coefficients of the Lambert $W$-function provides an efficient way to calculate the numbers $\\ch_n$: \n\n\\begin{proposition}\n\\label{prop:efficient_chn}\nThe numbers $\\ch_n$ and $\\Ch_n$ can be calculated using the recursion equations,\n \\begin{gather*}\n \\ch_n=\\Ch_n-\\frac{1}{n}\\sum_{k=1}^{n-1}k \\ch_k \\Ch_{n-k} \\\\\n \\Ch_n = - (2n-1)!!\\left( \\frac12 (2n-1) \\mu_{2n-1} -(2n+1) \\mu_{2n+1} \\right)\n\\\\\n %\n \\mu_n = \\frac{n-1}{n+1}\\left( \\frac{\\mu_{n-2}}{2} + \\frac{\\alpha_{n-2}}{4} \\right) -\\frac{\\alpha_{n}}{2} - \\frac{\\mu_{n-1}}{n+1} \\\\\n \\alpha_n = \\sum_{k=2}^{n-1} \\mu_k \\mu_{n+1-k},\n \\end{gather*}\n for all $n \\geq 1$ with $\\alpha_0=2, \\alpha_1= -1, \\mu_{-1} = 0, \\mu_0 = -1, \\mu_1 = 1$ and $\\Ch_0 = 1$.\n\\end{proposition}\n\\begin{proof}\nThe coefficients $\\mu_n$ are the expansion coefficients of the Lambert-$W$ function near its branch point:\n$W_0(z) = \\sum_{n \\geq 0} \\mu_n \\left( 2 ( 1+ez) \\right)^{\\frac{n}{2}}.$ The recursion for $\\mu_n$ is given in \\cite[eqs.\\ (4.23) and (4.24)]{corless1996lambertw}; it follows from the differential equation which $W$ satisfies. \nWe can adapt \\cite[eq.\\ (2.11)]{volkmer2008factorial} to the notation of \\cite{corless1996lambertw} (compare \\cite[eq.\\ (2.1)]{volkmer2008factorial} with the definition of $\\mu_n$) to get an expression for $v_n$ in terms of $\\mu_n$:\n\\begin{align*}\n v_n = (-1)^{n+1} 2^{\\frac{n}{2}} \\left( \\frac12 n \\mu_n -(n+2) \\mu_{n+2} \\right).\n\\end{align*}\nThe equation for $\\Ch_n$ follows using Proposition~\\ref{prop:lambert_rep} and $(2n-1)!! = \\frac{2^{n+\\frac12}\\Gamma(n+\\frac12)}{\\sqrt{2\\pi}}$. Finally, we use eq.~\\eqref{eqn:exp_pwrsrs_relation} to translate from $\\Ch_n$ to $\\ch_n$.\n\\end{proof}\n\n Written in power series notation with $T(z) = \\sum_{n\\geq 1} \\ch_n z^n$ and $\\exp(T(z)) = \\sum_{n\\geq 0} \\Ch_n z^n$, the first few coefficients are\n \\begin{gather*}\nT(z)= - \\frac{1}{24} z - \\frac{1}{48} z^{2} - \\frac{161}{5760} z^{3} - \\frac{367}{5760} z^{4} - \\frac{120257}{580608} z^{5} + \\ldots \\\\\n\\exp(T(z))= 1 - \\frac{1}{24} z - \\frac{23}{1152} z^{2} - \\frac{11237}{414720} z^{3} - \\frac{2482411}{39813120} z^{4} - \\frac{272785979}{1337720832} z^{5} + \\ldots\n \\end{gather*}\n With this approach we calculated the value of $\\ch_n$ up to $n=1000$ with basic computer algebra tools.
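In fact, the recursions of Proposition~\\ref{prop:efficient_chn} can be implemented directly. The following Python sketch (the function name and code organization are ours, and exact rational arithmetic is used because the coefficients are rational numbers with rapidly growing denominators) reproduces the coefficients listed above:\n\\begin{verbatim}\nfrom fractions import Fraction\n\ndef chi_out(N):\n    # ch_1, ..., ch_N of T(z), i.e. ch_n = chi(Out(F_{n+1})).\n    mu = {-1: Fraction(0), 0: Fraction(-1), 1: Fraction(1)}\n    alpha = {0: Fraction(2), 1: Fraction(-1)}\n    for n in range(2, 2 * N + 2):\n        # alpha_n only needs mu_2, ..., mu_{n-1}, so it can be\n        # computed before mu_n.\n        alpha[n] = sum((mu[k] * mu[n + 1 - k] for k in range(2, n)),\n                       Fraction(0))\n        mu[n] = (Fraction(n - 1, n + 1) * (mu[n - 2] / 2 + alpha[n - 2] / 4)\n                 - alpha[n] / 2 - mu[n - 1] / (n + 1))\n    Ch = {0: Fraction(1)}\n    dfact = 1  # (2n-1)!!, updated iteratively\n    for n in range(1, N + 1):\n        dfact *= 2 * n - 1\n        Ch[n] = -dfact * (Fraction(2 * n - 1, 2) * mu[2 * n - 1]\n                          - (2 * n + 1) * mu[2 * n + 1])\n    ch = {}\n    for n in range(1, N + 1):\n        ch[n] = Ch[n] - Fraction(1, n) * sum((k * ch[k] * Ch[n - k]\n                                              for k in range(1, n)),\n                                             Fraction(0))\n    return [ch[n] for n in range(1, N + 1)]\n\nprint(chi_out(3))  # [Fraction(-1, 24), Fraction(-1, 48),\n                   #  Fraction(-161, 5760)]\n\\end{verbatim}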
\n \nIn addition to being able to compute the value of $\\ch_n$ for very large $n$, we can also determine the explicit asymptotic behavior of the coefficients for large $n$. We do that in the next section.\n\n\\subsection{The asymptotic growth of \\texorpdfstring{$\\chi(\\Out(F_n))$}{chi(Out(Fn))}}\n\n\\begin{proposition}\n \\label{prop:chi_outfn_asymptotics}\n The rational Euler characteristic of $\\Out(F_n)$ has the leading asymptotic behaviour,\n \\begin{align}\n \\chi(\\Out(F_n))=\n - \\frac{1}{\\sqrt{2\\pi}} \\frac{\\Gamma(n -\\frac32 )}{\\log^2 n} + \\mathcal{O}\\left( \\frac{\\log \\log n }{\\log^3 n}\\Gamma\\left(n -\\frac32 \\right) \\right) \\text{ as } n\\to \\infty.\n \\end{align}\n\\end{proposition}\nWe will prove Proposition~\\ref{prop:chi_outfn_asymptotics} by applying a stronger version of singularity analysis to determine the asymptotic behaviour of the coefficients $v_k$. Proposition~\\ref{prop:lambert_rep} and a classic theorem by Wright \\cite{wright1970asymptotic} will eventually enable us to deduce the asymptotic behaviour of the sequence $\\chi(\\Out(F_n))$.\n\\begin{lemma}\n \\label{lmm:asymp_vk}\n The coefficients $v_{k}$ have the leading asymptotic behaviour,\n \\begin{align}\n v_{k} &= \\frac{1}{k(\\log k)^2} + \\mathcal{O}\\left( \\frac{\\log \\log k}{k(\\log k)^3}\\right) \\text{ as } k \\to \\infty.\n \\end{align}\n\\end{lemma}\n\\begin{proof}\n In addition to the expansion in eq.\\ \\eqref{eqn:ck_def_lambert}, the numbers $v_k$ are the coefficients of the following expansion of the other real branch of the Lambert $W$-function \\cite{volkmer2008factorial}, \n \\begin{align*}\n z W_{-1}'(z) &= -\\sum_{k = -1}^{\\infty}v_k (1+ez)^{\\frac{k}{2}} \\text{ for } z \\in \\left(-\\frac{1}{e},0\\right).\n \\end{align*}\nThe discrepancy between the two expansions reflects the two different choices of branch for the square root. We first consider the odd coefficients $v_{2k-1}$. Setting $w = 1+ez$ we define\n \\begin{align*}\n g(w)= \\frac12 \\sqrt{w} \\left( z W_0'\\left(z\\right) - z W_{-1}'\\left(z\\right) \\right) = \\sum_{k = 0}^{\\infty}v_{2k-1} w^{k}.\n \\end{align*}\n The function $g(w)$ can be analytically continued to $w=0$. Moreover, $g(w)$ has no other singularities in a $\\Delta$-domain as defined in Lemma~\\ref{lmm:singularity_analysis}: the dominant singularity of $g(w)$ comes from the logarithmic singularity of $W_{-1}$ at $z=0$ (see Figure~\\ref{fig:lambertW}), and so it is located at $w=1$ after the change of variable. The principal branch $W_0$ is analytic at $z=0$.
Neither $W_0$ nor $W_{-1}$ has any other singularities in the relevant domain.\n\n Because the differential equation $W'(z) = \\frac{W(z)}{z(1+W(z))}$ is satisfied by every branch of the Lambert $W$-function, we have\n \\begin{align*}\n g(w) &= \\frac12 \\sqrt{w} \\left( \\frac{W_{0}(z)}{1+W_{0}(z)} - \\frac{W_{-1}(z)}{1+W_{-1}(z)} \\right) = \n - \\frac12 \\sqrt{w} \\frac{W_{-1}\\left(\\frac{w-1}{e}\\right)}{1+W_{-1}\\left(\\frac{w-1}{e}\\right)} + \\text{`analytic'} \\text{ as } w\\rightarrow 1^- \\\\\n &=\n \\frac12 \\frac{\\sqrt{w}}{1+W_{-1}\\left(\\frac{w-1}{e}\\right)} + \\text{`analytic'} \\text{ as } w\\rightarrow 1^{-},\n \\end{align*}\n where we are able to neglect contributions which are analytic at $w=1$ since, by Lemma~\\ref{lmm:dominant_singularity}, they will eventually contribute only exponentially suppressed terms asymptotically.\n The function $W_{-1}$ has the singular behaviour \\cite[Sec.\\ 4]{corless1996lambertw},\n \\begin{align*}\n W_{-1}(z) = \\log(-z) + \\mathcal{O}\\left(\\log(-\\log(-z))\\right) \\text{ as } z \\rightarrow 0^{-}.\n \\end{align*}\n Thus, we get the singular expansion for $g(w)$,\n \\begin{align*}\n g(w) &= \n \\frac12 \\frac{\\sqrt{ 1- (1-w)}}{1+\\log\\left(\\frac{1-w}{e} \\right)+ \\mathcal{O}\\left(\\log(-\\log(\\frac{1-w}{e}))\\right)}+ \\text{`analytic'} \\text{ as } w\\rightarrow 1^{-}\n\\\\\n &= \n- \\frac12 \\left(\\log\\frac{1}{1-w}\\right)^{-1} + \\mathcal{O}\\left( \\frac{\\log(-\\log\\left(1-w \\right))}{\\left(\\log\\left(1-w \\right)\\right)^2}\\right) + \\text{`analytic'} \\text{ as } w\\rightarrow 1^{-}.\n \\end{align*}\n With this knowledge we may use a more general statement from singularity analysis to extract the asymptotics of the coefficients of $g(w)$, for instance \\cite[Cor.\\ 6]{flajolet1990singularity}. More details are given in \\cite[Sec.\\ VI.2]{flajolet2009analytic}, where one can find the `asymptotic transfer law' $[w^k] \\left(\\log\\frac{1}{1-w} \\right)^{-1}= -\\frac{1}{k(\\log k)^2} + \\mathcal{O}\\left(\\frac{1}{k(\\log k)^3}\\right)$ for $k\\rightarrow \\infty$ in Table VI.5. Also `transferring' the $\\mathcal{O}$ term in the singular expansion of $g$ into its corresponding asymptotic term for the coefficients \\cite[Cor.\\ 6]{flajolet1990singularity} gives\n \\begin{align*}\n [w^k]g(w)&= v_{2k-1} = \\frac12 \\frac{1}{k (\\log k)^2} + \\mathcal{O}\\left( \\frac{\\log\\log k}{k (\\log k)^3} \\right) \\text{ as } k \\rightarrow \\infty.\n \\end{align*}\n We note that the asymptotic behaviour of the even coefficients $v_{2k}$ follows analogously by starting with \n $g(w)= \\frac12 \\left(- z W_0'\\left(z\\right) - z W_{-1}'\\left(z\\right) \\right) = \\sum_{k = 0}^{\\infty}v_{2k} w^{k}$, although we will not need this for the present article.\n \\end{proof}\n\nThe only remaining task for proving Theorem~\\ref{thm:SVconj} is to transfer our knowledge of the asymptotic behaviour of $v_k$ to the coefficients $\\ch_n$. To deduce the asymptotic behaviour of these coefficients, we will use a classical theorem by Wright in the theory of graphical enumeration.\n\\begin{lemma}[Thm.\\ 2 of \\cite{wright1970asymptotic} with $R=1$] \n \\label{lmm:wright_connected_asymptotics}\nLet $f(x)= \\sum_{n \\geq 0} c_n x^n$ be a power series in $\\mathbb{R}[[x]]$, and let $\\exp(f(x)) = \\sum_{n \\geq 0} \\hat{c}_n x^n$.
Suppose \n $c_0 = 0$, $\\hat{c}_0 = 1$, and $\\hat{c}_{n-1} \\in o(\\hat{c}_{n})$ as $n\\to\\infty$ as well as \n \\begin{align}\n \\label{eqn:center_sum}\n \\sum_{k = 1}^{n-1} \\hat{c}_k \\hat{c}_{n-k} \\in \\mathcal{O}(\\hat{c}_{n-1}) \\text{ as } n\\to \\infty.\n \\end{align}\nThen \n $c_n = \\hat{c}_n + \\mathcal{O}(\\hat{c}_{n-1})$ as $n \\to \\infty$.\n\\end{lemma}\n\n\n\\begin{proof}[Proof of Proposition~\\ref{prop:chi_outfn_asymptotics}]\nLet $T(z) = \\sum_{n\\geq 1} \\ch_n z^n$, and $\\exp(T(z)) = \\sum_{n\\geq 0} \\Ch_n z^n$.\n\n We have to verify that $\\Ch_n$ satisfies the conditions of Lemma~\\ref{lmm:wright_connected_asymptotics}. The only condition that is not immediate is eq.\\ \\eqref{eqn:center_sum}. By Proposition~\\ref{prop:lambert_rep} and Lemma~\\ref{lmm:asymp_vk} we have\n \\begin{align*}\n \\Ch_n &= - \\frac{1}{\\sqrt{2\\pi}} \\frac{\\Gamma(n +\\frac12 )}{n \\log^2 n} + \\mathcal{O}\\left( \\frac{\\log \\log n }{n \\log^3 n}\\Gamma(n +\\frac12 ) \\right)\\\\\n &= - \\frac{1}{\\sqrt{2\\pi}} \\frac{\\Gamma(n -\\frac12 )}{\\log^2 (n+1)} + \\mathcal{O}\\left( \\frac{\\log \\log n }{\\log^3 n}\\Gamma(n -\\frac12 ) \\right).\n \\end{align*}\n From this it follows that we can find a constant $C \\in \\mathbb{R}$ such that $|\\Ch_n| \\leq C \\frac{\\Gamma(n -\\frac12 )}{\\log^2 (n+1)}$ for all $n \\geq 1$.\n Recall that $\\Gamma(x)$ is \\textit{log-convex} on the interval $x\\in(0,\\infty)$, i.e.\\ $\\log(\\Gamma(x))$ is a convex function on this interval \\cite{artin2015gamma}. The function $-\\log(\\log(1+x))$ is convex on this interval as well, since its second derivative $\\frac{1+\\log(1+x)}{(1+x)^2 \\log^2(1+x)}$ is positive. If $f(x)$ is convex on the interval $[a,b]$, then $f(b+a-x)$ is also convex on $[a,b]$. If another function $g(x)$ is convex on this interval, then $f(x) + g(x)$ is too. Therefore, \n \\begin{align*}\n \\log( \\Gamma(n-x-\\frac12) ) + \\log( \\Gamma(x-\\frac12) ) - 2 \\log \\log(1+n-x) - 2 \\log \\log(1+x)\n \\end{align*}\n is convex for $x\\in(\\frac12 , n-\\frac12)$. Because $e^x$ is increasing and convex, the exponential of this expression,\n $\\frac{\\Gamma(n-x-\\frac12)\\Gamma(x-\\frac12)}{ \\log^2(1+n-x) \\log^2(1+x) }$,\n is also convex on $x\\in(\\frac12, n-\\frac12)$. This also implies convexity on the smaller interval $[2,n-2]$.
Since a convex function on an interval attains its maximum at the endpoints of the interval (and here, by the symmetry $x \\mapsto n-x$, both endpoint values coincide), we now get\n \\begin{align*}\n \\frac{\\Gamma(n-x-\\frac12)\\Gamma(x-\\frac12)}{ \\log^2(1+n-x) \\log^2(1+x) }\n \\leq \\frac{\\Gamma(n-2-\\frac12)\\Gamma(2-\\frac12)}{ \\log^2(1+n-2) \\log^2(1+2) } \\text{ for all } x \\in [2,n-2],\n \\end{align*}\n and we can estimate\n \\begin{gather*}\n \\left|\\sum_{k=1}^{n-1} \\Ch_{n-k} \\Ch_k \\right| \\leq 2 | \\Ch_{n-1} \\Ch_1 | +\\sum_{k=2}^{n-2} | \\Ch_{n-k} \\Ch_k| \n \\leq 2 | \\Ch_{n-1} \\Ch_1 | + C^2 \\sum_{k=2}^{n-2} \\frac{\\Gamma(n-k -\\frac12 )\\Gamma(k -\\frac12 )}{\\log^2 (1+n-k)\\log^2 (1+k)} \\\\\n \\leq 2 | \\Ch_{n-1} \\Ch_1 | + C^2 (n-3) \\frac{\\Gamma(n-2 -\\frac12 )\\Gamma(2 -\\frac12 )}{\\log^2 (1+n-2)\\log^2 (1+2)}.\n \\end{gather*}\n It follows that $\\sum_{k=1}^{n-1} \\Ch_{n-k} \\Ch_k \\in \\mathcal{O}(\\Ch_{n-1})$, so Lemma~\\ref{lmm:wright_connected_asymptotics} can be applied to give \n\\begin{align*}\n \\ch_n &= \\Ch_n + \\mathcal{O}(\\Ch_{n-1}) = - \\frac{1}{\\sqrt{2\\pi}} \\frac{\\Gamma(n -\\frac12 )}{\\log^2 n} + \\mathcal{O}\\left( \\frac{\\log \\log n }{\\log^3 n}\\Gamma\\left(n -\\frac12 \\right) \\right),\n\\end{align*}\nbecause $\\Ch_{n-1} \\in \\mathcal{O}\\left( \\frac{\\log \\log n }{\\log^3 n}\\Gamma\\left(n -\\frac12 \\right) \\right)$. Shifting $n \\mapsto n-1$ and recalling that $\\ch_{n-1} = \\chi\\left(\\Out(F_{n})\\right)$ yields the claimed asymptotics.\n\\end{proof}\n\nThe asymptotic behavior of $\\chi(\\Out(F_n))$ now follows by combining our results.\n\\begin{repthmx}{thm:SVconj}\n The rational Euler characteristic of $\\Out(F_n)$ is strictly negative, $\\chi\\left( \\Out(F_n) \\right) < 0$, for all $n \\geq 2$ and its magnitude grows more than exponentially,\n \\begin{align*}\n \\chi\\left(\\Out(F_{n}) \\right) &\\sim - \\frac{1}{\\sqrt{2 \\pi}} \\frac{\\Gamma(n-\\frac32)}{\\log^2(n)} \\text{ as } n \\to \\infty.\n \\end{align*}\n\\end{repthmx}\n\n\\begin{proof}\n Apply Corollary~\\ref{crll:negative_T} and Proposition~\\ref{prop:chi_outfn_asymptotics}.\n\\end{proof}\n\n\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section*{Supplemental Material}\n\\begin{center}\n{\\Large \\bf Supplemental Material}\\\\ \n\\end{center}\n\\vspace*{12pt}\n\\begin{enumerate}[\\bf I.]\n\n\\item {\\bf Thermodynamic stability of stoichiometric vs non-stoichiometric Mg$_M$O$_x$ clusters}\n\n\\item {\\bf Details on the implemented GA scheme}\n\n\\item {\\bf Performance of reaxFF}\n\n\\item {\\bf O$_2$-adsorption energy on MgO$_x$, with functionals corrected by the experimental value of O$_2$ binding energy}\n\n\\item {\\bf O$_2$-adsorption energy on Mg$_2$O$_x$ and Mg$_3$O$_x$ clusters}\n\n\\item {\\bf Mg$_2$O$_x$ phase diagrams with various functionals}\n\n\\item {\\bf Effect of translational, rotational, and vibrational contributions to the free energy on the Mg$_2$O$_x$ phase diagram}\n\n\\item {\\bf Effect of anharmonic contributions to the configurational free energy}\n\n\\item {\\bf Examples of spin densities on non-stoichiometric Mg$_M$O$_x$}\n\n \n\\end{enumerate}\n\n\n\n\\begin{figure*}[b!]\n{\\bf \\Large I. Thermodynamic stability of stoichiometric vs non-stoichiometric Mg$_M$O$_x$ clusters \\\\}\n\\includegraphics[width=0.8\\columnwidth,clip]{fig-1-SI.png}\n\\caption[]{Free energy of formation of thermodynamically most stable non-stoichiometric (Mg$_M$O$_x$ with $M \\neq x$) relative to stoichiometric ($M = x$) clusters at several $(T,p_{\\textrm{O}_2})$ conditions. The geometries were optimized with PBE+vdW, and the electronic energy was calculated using PBE0+vdW.
The label of the horizontal axis shows, below the number $M$ of Mg atoms, the number $x$ of O atoms for the thermodynamically most stable non-stoichiometric cluster at the corresponding $M$, at $p_{\\textrm{O}_2} = 1$~atm and $T=300$ K. For the same thermodynamic condition, the third line reports the spin multiplicity $\\mathcal{M}$ of the lowest free-energy non-stoichiometric Mg$_M$O$_x$ (the stoichiometric clusters are all singlets, i.e., $\\mathcal{M}=1$).}\n\\label{SM:I}\n\\end{figure*} \n\n\\clearpage\n\\newpage\n\\begin{center}\n{\\bf \\Large II. Details on the implemented GA scheme \\\\} {\\bf The benchmark and full details on validation are found in \\cite{long}\\\\} \n\\end{center}\n\nSchematically, our cGA algorithm proceeds as follows (all terms in italic will be explained afterwards):\n\\begin{enumerate}[(1)]\n \\item Selection of a composition of the clusters and formation of an initial pool of random structures, locally optimized by a classical force field (FF).\n \\item Evaluation of the {\\em fitness} function for all structures (using FF binding energy).\n \\item GA global optimization using the classical FF. This consists of the iteration of steps (i)--(v): \n \\begin{enumerate}[(i)]\n \\item {\\em Selection} of two structures (in GA jargon, {\\em parents}).\n \\item Assemblage of a trial structure ({\\em child}) through {\\em crossover} and {\\em mutation}.\n \\item Local optimization (force minimization) of the child structure using the classical FF.\n \\item Evaluation of the {\\em fitness} function. Comparison of the optimized child with existing structures; reject if {\\em similar}, jump to (i). \n \\item Check whether convergence has been reached. If so, stop FF-GA and go to the next step, DFT-GA.\n \\end{enumerate}\n \\item Formation of a new pool of structures using the best-fit structures from FF-GA, locally optimized at the DFT level (PBE+vdW, {\\em low-level settings}).\n \\item Calculation of the fitness function for all structures (using energy at the PBE0+vdW level).\n \\item GA scheme using DFT. In practice, iteration of steps (a)--(h):\n \\begin{enumerate}[(a)]\n \\item {\\em Selection} of two structures.\n \\item Assemblage of a child structure through {\\em crossover} and {\\em mutation}.\n \\item Local optimization of the child structure with PBE+vdW, {\\em low-level settings}.\n \\item Comparison of the optimized child with existing structures. {\\em Early rejection} if {\\em similar}; jump to (a). \n \\item Further local optimization of the child with PBE+vdW, {\\em high-level settings}.\n \\item Harmonic analysis of the optimized child; if unstable, perturb along the unstable mode and go back to (c). \n \\item Evaluation of the {\\em fitness} function based on the PBE0+vdW total energy. \n \\item Check whether convergence has been reached.
If so, stop.\n \\end{enumerate}\n\\end{enumerate}\n\nIn the following, we analyze one by one the keywords introduced in the detailed scheme above.\n\n \n\\subsubsection*{\\bf Fitness function}\n\\label{ff}\nEach cluster $i$ in the population is assigned a normalized fitness value, $\\rho_i$, based on its total energy (binding energy for the FF):\n\\begin{equation}\n\\label{eqn1}\n\\rho_i=\\frac{\\epsilon_i}{\\sum_i\\epsilon_i}\n\\end{equation}\nwhere $\\epsilon_i$ is the relative energy of the $i^{th}$ cluster as defined below:\n\\begin{equation}\n\\label{eqn2}\n\\epsilon_i=\\frac{E_\\textrm{max}-E_i}{E_\\textrm{max}-E_\\textrm{min}}\n\\end{equation}\nwhere $E_i$ is the total energy of the $i^{th}$ cluster of the population and $E_\\textrm{min}, E_\\textrm{max}$ correspond to the dynamically updated lowest and highest total energies in the population, respectively. \n\nWith this definition, low (more negative) energy clusters have high fitness and high (less negative) energy clusters have low fitness. \n\n\\subsubsection*{\\bf Selection rule}\n\\label{sr}\n We use a ``roulette-wheel'' selection criterion \\cite{roulette} with selection probability proportional to the value of the normalized fitness function. The idea is that the lower the total (or binding) energy (i.e., the more negative its value) of a certain configuration, the larger its probability of being chosen from the population. A cluster is picked at random and is selected for mating if its normalized fitness value $\\rho_i$, defined in Section~\\ref{ff}, is greater than $\\textrm{Rand}[0,1]$, a randomly generated number between 0 and 1. \n \nThe above ``best-fit'' selection scheme can take a significantly long time to reach another basin in the PES. In such situations, adding a little diversity by selecting one ``bad'' (high-energy) structure in the population is found to help in moving out to the next basin. Therefore, we define a complementary fitness function $\\tilde{\\rho}_j = (1-\\rho_j)$ and we select one structure with high $\\rho_i$ and another with high $\\tilde{\\rho}_j$. This choice, when the mixing ratio among the different selection rules is optimized, greatly helps the convergence of the GA scheme; the optimization of this mixing ratio is discussed in Ref.~\\cite{long}. A minimal sketch of the fitness evaluation and of this selection rule is given below.
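The sketch is in Python; the helper names and the value of the mixing ratio between the two selection rules are our illustrative assumptions (the population is assumed to contain at least two distinct energies):\n\\begin{verbatim}\nimport random\n\ndef fitness(energies):\n    # eqs. (1)-(2): normalized fitness from total (or binding)\n    # energies; assumes E_max > E_min.\n    e_min, e_max = min(energies), max(energies)\n    eps = [(e_max - e) / (e_max - e_min) for e in energies]\n    total = sum(eps)\n    return [e / total for e in eps]\n\ndef roulette(weights):\n    # Accept a randomly picked cluster i with probability weights[i].\n    while True:\n        i = random.randrange(len(weights))\n        if weights[i] > random.random():\n            return i\n\ndef select_parents(energies, mixing_ratio=0.2):\n    rho = fitness(energies)\n    first = roulette(rho)\n    if random.random() < mixing_ratio:\n        # occasionally pick a 'bad' structure via the complementary\n        # fitness 1 - rho, to add diversity\n        second = roulette([1.0 - r for r in rho])\n    else:\n        second = roulette(rho)\n    return first, second\n\\end{verbatim}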
\\subsubsection*{\\bf Crossover}\n\\label{sec:cross}\nThe crossover operator takes care of combining the two parent clusters selected as explained above. It is implemented as a modified version of the cut-and-splice crossover operator of Deaven and Ho.\\cite{crossover} In our implementation of the cut-and-splice operation, first a random rotation is given (keeping the center of geometry of the cluster at the origin of the coordinate system) to both parent clusters. Both clusters are then cut horizontally parallel to the $xy$-plane ($z=0$). Atoms with positive $z$-value are selected from one cluster and atoms with negative $z$-value are selected from the other cluster. These complementary fragments are spliced together. Importantly, this cut-and-splice operation does not ensure the preservation of the chosen cluster size (i.e., the total number of atoms) and the specific composition. We have adapted here three different kinds of crossover to maintain size and composition. \n\n(i) Crossover-1: Strictly speaking, this is a combined crossover and mutation (see below) step. After cut-and-splice we always maintain the same ordering of atoms that is given in the parent clusters. As an example, let us consider a small cluster like Mg$_2$O$_2$. In the parent cluster the ordering of atomic coordinates is given as Mg(1), O(2), Mg(3), O(4). When the cut-and-splice operation is applied, we get, for instance, a child with atomic coordinates from cluster one as Mg(1), Mg(3) (i.e., above the $xy$-plane) and from cluster two as Mg(3'), O(4') (i.e., below the $xy$-plane). The entire set of atomic coordinates of the child is therefore [Mg(1), Mg(3)], [Mg(3'), O(4')], which has the wrong composition (three Mg atoms and one O atom). In this case, we replace the species of Mg(3) with O (without changing its coordinate) to impose the correct composition on the child, so that the ordering of atoms is again Mg(1), O(2), Mg(3), O(4) (i.e., the same composition as the parents). It is therefore possible that after the cut-and-splice operation a Mg atom of the parent cluster is replaced by an O atom in the child and vice versa.\n\nThe total spin of the clusters is also left free to evolve together with the spatial coordinates of the atoms. In this way we sample on equal footing the configurational space of atomic coordinates and the spin.\nThe crossover of the spin coordinates is performed in the following way: when we create a new child by grabbing the atomic coordinates from the parents as explained above, we also make note of the atom-projected spin moments (via Hirshfeld partitioning of the electron density) for each atom. Such spin moments are given as initial moments of the individual atoms of the child. During the optimization process, these atom-projected moments are left free to change. \n\n(ii) Crossover-2: In this procedure, after cut-and-splice we check whether the stoichiometry of the parents is maintained in the child. If it is maintained, we accept the child; otherwise we reject it and we iterate until the child has the required stoichiometry.\\cite{r15,r16} The advantage of this procedure is that it helps to maintain winning features of the parent clusters, but most of the time it takes many iterations to obtain a valid child, even for a moderately sized cluster. In case one or more pairs of atoms are too close, we adopt the same remedy as for crossover-1. The spin coordinates are treated in the same way as in the crossover-1 case.\n\n(iii) Crossover-3: After re-orientation of the selected parent clusters we take all the metal (Mg) atoms from one parent cluster and all the oxygen atoms from the other parent cluster. This crossover helps introduce diversity in the genetic pool, but the rate of rejection during the assemblage of the child can be rather high due to the high likelihood that two atoms are too close.\n\n\\subsubsection*{\\bf Mutation}\n\\label{mut}\nAfter crossover, which generates a child, mutation is introduced.\nDifferent mutation operators can be defined. We have adopted a) a translation between the two halves of the clusters (this is performed if atoms coming from the two different parents find themselves too close upon splicing of the two halves) and b) an exchange of the atom species without perturbing their coordinates. \n\n\\subsubsection*{\\bf Similarity of structures}\n\\label{ss}\nIn order to decide whether a newly found structure was already seen previously during the GA scan, after the local optimization we a) compare the energy of the new structure with that of all the others seen before and b) use a criterion based on the distances between all atom pairs. In practice, we construct a coarse-grained radial distribution function (rdf) of the clusters, consisting of 14 conveniently spaced bins. Each bin contains the (normalized) number of atom pairs whose distance lies between the distances that define the boundaries of the bin. For each cluster we then have a 14-dimensional rdf array, and the Euclidean distance (i.e., the square root of the sum of the squared differences between corresponding elements in the two arrays) between the arrays of two clusters is evaluated.\nIf this distance (note that it is a pure number) is greater than a convenient threshold (we used 0.01), then the structures are considered different, as sketched below.
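In this Python sketch of the similarity test, the bin edges and helper names are ours; the text above fixes only the number of bins (14) and the threshold (0.01):\n\\begin{verbatim}\nimport itertools\nimport math\n\ndef coarse_rdf(coords, edges):\n    # Normalized histogram of all pairwise distances; edges is a\n    # list of 15 monotonically increasing bin boundaries.\n    dists = [math.dist(a, b)\n             for a, b in itertools.combinations(coords, 2)]\n    hist = [0.0] * (len(edges) - 1)\n    for d in dists:\n        for b in range(len(hist)):\n            if edges[b] <= d < edges[b + 1]:\n                hist[b] += 1.0\n                break\n    npairs = float(len(dists))\n    return [h / npairs for h in hist]\n\ndef are_different(coords1, coords2, edges, threshold=0.01):\n    r1 = coarse_rdf(coords1, edges)\n    r2 = coarse_rdf(coords2, edges)\n    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))\n    return distance > threshold\n\\end{verbatim}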
\\subsubsection*{\\bf Local optimization and early rejection scheme}\n\\label{cascade}\n\nAlthough the geometry and the energy of the structures are not fully converged with PBE+vdW @ {\\em low-level settings}, we have realized that there is a one-to-one mapping between the structures found at this level and those fully converged. In other words, if two structures are {\\em similar} according to PBE+vdW @ {\\em low-level settings}, they are also similar at the PBE0+vdW @ {\\em high-level settings} level (see below). Furthermore, if a structure at the PBE0+vdW @ {\\em high-level settings} is within $\\sim 0.5$ eV from the running GM, with PBE+vdW @ {\\em low-level settings} the structure is not further than 0.2 eV from the same running GM (with energy evaluated with PBE+vdW @ {\\em low-level settings}). This implies that, with our conservative choice of not optimizing with the {\\em high-level settings} those structures that with {\\em low-level settings} test positive in the {\\em similarity} check or are more than 1.5 eV higher in energy than the current GM, we do not risk rejecting structures that would eventually result in the GM or close to it. \n\nIn the PBE+vdW, {\\em high-level settings} optimization, atomic forces were converged to less than $10^{-6}$ eV\/\\AA. The grid settings were set to ``tight'' and the basis set was tier-2. In cascade, for the structure optimized in this way (i.e., without further optimization), we evaluated the PBE0+vdW energy with the tier-2 basis set. This energy is later used for the calculation of the fitness of that particular cluster. \nThe difference in binding energy between an isomer optimized with PBE0+vdW forces (tight \/ tier 1 \/ forces converged to 10$^{-5}$ eV\/\\AA) and the same optimized with PBE+vdW (tight \/ tier 2 \/ forces converged to 10$^{-5}$ eV\/\\AA), when the energy of both geometries is evaluated via PBE0+vdW (tight \/ tier 2), is small, i.e. at most 0.04 eV among all cases we checked. The further PBE0+vdW optimization would thus not be worthwhile (we estimated a gain of up to a factor of 2 in overall computational time just by skipping the latter optimization).\n\n\\subsubsection*{\\bf Parallelization}\nThe operation of selecting two structures from the genetic pool for the mating and the subsequent local optimization of the child is an operation that can be performed at any moment, also when a local optimization of another child is already running.\nThe algorithm is thus suitable for a very efficient parallelization. \n\nOn top of the FHI-aims parallelization (i.e., local optimization is run in parallel on an optimized number of CPUs) we add a second level of parallelization, i.e., we run several local optimizations at the same time, independently. The only communication among such replicas is the selection of the parents, which is performed from a common genetic pool.
The genetic pool is also updated by each replica at the end of each local optimization.\nThe local optimizations run independently, i.e., each replica can start a new mating + local optimization cycle right after one is concluded; hence, there is no idling time between cycles.\nThus, we have $n$ local optimizations running in parallel, each requiring $p$ cores, which together fill the $n\\times p$ cores required for the algorithm. The scaling behavior is about O($p^{1.5}$) with the number of cores for the local optimization part \\cite{aimsp}. The number $p$ is indeed tuned in order to be sure that the speed-up is still O($p^{1.5}$) for the specific system. The scaling with respect to the $n$ replicas is linear, because the replicas are independent for most of the time; information is shared among the replicas only at the beginning and at the end of each local optimization. The first level of parallelization is performed within the FHI-aims code, by means of the MPI environment. The second level is script based: the total $n\\times p$ number of cores is divided into $n$ groups, $n$ subdirectories are created, and in each of them a local optimization job runs, each using $p$ cores.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\begin{figure}[t]\n\\includegraphics[width=0.45\\textwidth]{.\/images\/Intro_v3.pdf}\n\\vspace{-0.3cm}\n\\centering\n\\caption{(a) T-SNE \\cite{van2013barnes} visualization of\nthe BEV features in different UDA scenarios, in which features are obviously separated by domain. (b) The performance of our method compared with previous works \\cite{li2022bevdepth,wang2021exploring,li2022unsupervised}. All methods are built on BEVDepth with a ResNet-50 backbone and evaluated on the target domain.}\n\\label{fig:intro}\n\\vspace{-0.4cm}\n\\end{figure}\nCamera-based 3D object detection has attracted increasing attention, especially in the field of autonomous driving \\cite{arnold2019survey, chen2017multi, chen2016monocular}. Nowadays, it has seen substantial advances driven by Bird-Eye-View (BEV) perception methods \\cite{philion2020lift, huang2021bevdet, li2022bevdepth,li2022unifying, li2022bevstereo} and large-scale labeled autonomous driving datasets \\cite{geiger2012we, caesar2020nuscenes, sun2020scalability}. However, due to the vast variety of perception scenes \\cite{wang2021exploring, li2022unsupervised}, camera-based methods can suffer significant performance degradation caused by domain shift or data distribution variation.\n\n\nAlthough recent mono-view 3D detection methods \\cite{li2022towards, li2022unsupervised} address the UDA setting for variations in camera parameters or annotation methods, the domain adaptation problem in many real-world scenarios is still unexplored in both mono-view \\cite{cai2020monocular, wang2021fcos3d, brazil2019m3d, ding2020learning, li2022diversity, zhang2021objects, simonelli2019disentangling} and multi-view \\cite{philion2020lift, wang2022detr3d, liu2022petr, liu2022petrv2, chen2022polar, jiang2022polarformer, li2022bevformer, reading2021categorical, huang2021bevdet, li2022unifying, li2022bevdepth} settings. As shown in Fig.\\ref{fig:intro}, we discover a tremendous domain gap in the scene, weather, and day-night changing scenarios from source to target data in nuScenes \\cite{caesar2020nuscenes}, which leads to inferior performance of the baseline \\cite{li2022bevdepth} (only 0.174, 0.159, and 0.05 NDS).
Therefore, we attempt to transfer a multi-view 3D detector from a labeled source domain to an unlabeled target domain.\n\nIn UDA, the main challenge for multi-view LSS-based BEV perception is the entanglement of domain shift across multiple latent spaces: \n(1) \\textit{2D image latent space.} Since multi-view images contain abundant semantic information, a change of scenario results in a manifold domain shift.\n(2) \\textit{3D voxel latent space.}\nVoxel features, which are constructed from domain-specific image features and unreliable depth predictions, accumulate further domain shift. \n(3) \\textit{BEV latent space.}\nDue to the shift in the above aspects, the constructed BEV features accumulate the domain shift, leading to alignment noise. As shown in Fig.\\ref{fig:intro} (a), we visualize the distribution of BEV features, which are obviously separated by domain in the different cross-domain scenarios. \n\n\n\n\n\n\n\n\n\n\nTo this end, we propose a novel Multi-level Multi-space Alignment Teacher-Student ($M^{2}ATS$) framework to disentangle domain shift problems in multiple latent spaces, which consists of a Depth-Aware Teacher (DAT) model and a Multi-space Feature Aligned (MFA) student model. \nDAT introduces composite depth-aware information to construct better voxel and BEV features in the target domain, which contain less source-domain-specific information. \nTo construct the composite depth-aware information, DAT adaptively screens out reliably predicted depth by uncertainty estimation to compensate for the sparse lidar. It promotes representation consistency between DAT and the student model by transferring domain-invariant multi-space knowledge and pseudo labels, thus addressing domain shift at the pixel and instance levels, respectively. \nIn order to assist DAT in further bridging global-level domain gaps in multiple latent spaces, we propose MFA in the student model. It aligns three task-relevant features of the two domains, namely multi-view image, 3D voxel, and BEV features. In the overall $M^{2}ATS$ framework, DAT and MFA compensate each other to address the domain shift of multiple latent spaces at multiple levels.\n\nAs far as we know, we are the first to study the cross-domain problem in multi-view 3D object detection. We design three classical and one continually changing UDA scenarios, which are \\textbf{Scene} (from Boston to Singapore), \\textbf{Weather} (from sunny to rainy and foggy), \\textbf{Day-night}, and \\textbf{Changing Foggy degree} in \\cite{caesar2020nuscenes}. For the continually changing scenario, we construct cross-domain experiments with continuously increasing fog density, which gradually enlarges the domain gap. Our proposed method achieves competitive performance in all scenarios (shown in Fig. \\ref{fig:intro} (b)). Compared with the previous state-of-the-art (SOTA) UDA method (i.e., STM3D\\cite{li2022unsupervised}), it improves the NDS by 2.5, 3.2, and 5.2\\% respectively in the three classical scenarios.\n\nThe main contributions are summarized as follows:\n\n\\textbf{1)} We explore the unsupervised domain adaptation (UDA) problem for BEV perception in multi-view 3D object detection. We propose a novel Multi-level Multi-space Alignment Teacher-Student ($M^{2}ATS$) framework to address multi-latent-space domain shift.\n\n\\textbf{2)} In $M^{2}ATS$, we propose a Depth-Aware Teacher (DAT) to construct uncertainty-guided depth-aware information, and then transfer the domain-invariant pixel-level features and instance-level pseudo labels to the student model.
$M^{2}ATS$ contains a Multi-space Feature Aligned (MFA) student model which aligns multi-space features between the two domains and complements DAT to further alleviate the domain shift at the global level.\n\n\\textbf{3)} We conduct extensive experiments on the four challenging UDA scenarios, achieving SOTA performance compared with previous mono-view 3D and 2D detection UDA methods, and we provide a Foggy-nuScenes dataset.\n\\begin{figure*}[t]\n\\includegraphics[width=0.95\\textwidth]{.\/images\/framework_v8.pdf}\n\\centering\n\\vspace{-0.22cm}\n\\caption{The framework of Multi-level Multi-space Alignment Teacher-Student ($M^{2}ATS$), which is composed of the Depth-Aware Teacher (DAT) and the Multi-space Feature Aligned (MFA) student model. In \\textbf{the bottom part}, the DAT model takes target domain input and adopts depth-aware information to construct domain-invariant features, and transfers pixel-level (multi-space features) along with instance-level (pseudo label) knowledge to the student model. In \\textbf{the upper part}, the MFA student model aligns multi-space features (\\textcolor{red}{red circle}) at the global level between the two domains. The $M^{2}ATS$ framework aims to comprehensively address the multi-latent-space domain shift problem.}\n\\label{fig:method}\n\\vspace{-0.42cm}\n\\end{figure*}\n\\section{Related work}\n\\subsection{Camera-based 3D object detection}\nNowadays, 3D object detection plays an important role in autonomous driving and machine scene understanding. \nTwo paradigms are prominent in this respect: single-view \\cite{cai2020monocular, wang2021fcos3d, brazil2019m3d, ding2020learning, liu2021autoshape, manhardt2019roi, barabanau2019monocular, li2022diversity, zhang2021objects, simonelli2019disentangling} and multi-view \\cite{philion2020lift, wang2022detr3d, liu2022petr, liu2022petrv2, chen2022polar, jiang2022polarformer, li2022bevformer, reading2021categorical, huang2021bevdet, li2022unifying, li2022bevdepth, huang2022bevdet4d, li2022bevstereo}. In single-view detection, previous works can be categorized into several streams, i.e., leveraging CAD models \\cite{liu2021autoshape, manhardt2019roi, barabanau2019monocular}, setting prediction targets as key points \\cite{li2022diversity, zhang2021objects}, and disentangling the transformation for 2D and 3D detection \\cite{simonelli2019disentangling}. Specifically, FCOS3D \\cite{wang2021fcos3d} can predict 2D and 3D attributes synchronously. M3D-RPN \\cite{brazil2019m3d} treats single-view 3D object detection as a standalone 3D region proposal network. In order to establish a more reliable 3D structure, D4LCN \\cite{ding2020learning} alters 2D depth prediction with a pseudo-LiDAR representation. \\cite{cai2020monocular} calculates the depth of the objects by integrating the actual height of the objects. To better leverage the depth information in the detection process, MonoDTR \\cite{huang2022monodtr} proposes an end-to-end depth-aware transformer network. However, taking into account the precision and practicality of detection, more and more multi-view 3D object detectors have been proposed. \n\n\nThe multi-view paradigm can be categorized into two branches, namely transformer-based \\cite{carion2020end} and LSS-based \\cite{philion2020lift}. First of all, to extend DETR \\cite{carion2020end} to 3D detection, DETR3D \\cite{wang2022detr3d} first predicts 3D bounding boxes with a transformer network.
Inspired by DETR3D, some works adopt object queries \\cite{liu2022petr, liu2022petrv2, chen2022polar, jiang2022polarformer} or BEV grid queries \\cite{li2022bevformer} to extract features from images and utilize attention mechanisms, resulting in a better 2D-to-3D transformation. However, transformer-based methods do not project image features onto a BEV representation. Following LSS \\cite{philion2020lift}, some methods \\cite{reading2021categorical, huang2021bevdet, li2022unifying} predict a distribution over lidar depth and generate a point cloud with multi-view image features for 3D detection. Specifically, BEVDepth \\cite{li2022bevdepth} introduces depth supervision and speeds up the operation of voxel pooling. BEVDet4D \\cite{huang2022bevdet4d} and BEVStereo \\cite{li2022bevstereo} thoroughly explore temporal information in the task and concatenate volumes from multiple time steps. In this paper, we adopt BEVDepth \\cite{li2022bevdepth} as the baseline 3D object detector for its simple and powerful workflow, along with its great potential for cross-domain feature extraction.\n\n\\subsection{UDA in 3D object detection}\nDomain Adaptive Faster R-CNN \\cite{chen2018domain} first probes the cross-domain problem in object detection. Based on \\cite{ganin2015unsupervised}, most previous works \\cite{cai2019exploring, saito2019strong, wang2021exploring, xu2020exploring, xu2020cross, yu2022cross} follow the cross-domain alignment strategy\nand explore the influence of domain shift on multi-level features. As for 3D object detection, \\cite{luo2021unsupervised, li2022unsupervised, zhang2021srdan} investigate Unsupervised Domain Adaptation (UDA) strategies for point cloud 3D detectors. In particular, \\cite{luo2021unsupervised, zhang2021srdan} adopt alignment methods to align feature- and instance-level information between two domains. STM3D \\cite{li2022unsupervised} develops self-training strategies to realize UDA via consistent and high-quality pseudo\nlabels. Recently, some works \\cite{barrera2021cycle, acuna2021towards, ng2020bev, saleh2019domain} investigate cross-domain strategies in BEV perception, which aim to reduce the simulation-to-real domain shift. In terms of camera-based monocular 3D object detection, \\cite{li2022towards, li2022unsupervised} first attempt to disentangle the camera parameters and guarantee geometric consistency in the cross-domain phase. In contrast, we dedicate ourselves to closing the domain gap in multi-view 3D object detection tasks, which infer 3D scenes from the BEV perspective. We propose a novel Multi-level Multi-space Alignment Teacher-Student framework to deal with the accumulation of domain shift across multiple latent spaces.\n\n\n\\section{Methods}\n\\subsection{Overall framework}\n\\label{sec:overall}\nIn this paper, we study the Unsupervised Domain Adaptation (UDA) problem in LSS-based vision-centric Bird-Eye-View (BEV) perception \\cite{philion2020lift,li2022bevdepth}, where an accumulation of domain shift over multiple latent spaces exists. As shown in Fig. \\ref{fig:method}, the \\textbf{Multi-level Multi-space Alignment Teacher-Student} ($M^{2}ATS$) framework contains a Depth-Aware Teacher (DAT) and a Multi-space Feature Aligned (MFA) student model. Along with transferring multi-latent-space knowledge from DAT to the student model, the MFA student model simultaneously aligns the task-relevant features between the two domains in the same latent spaces. We discuss each component and its interactions in the following.\n\n\\textbf{Depth-Aware Teacher model}.
As shown in Fig. \\ref{fig:method}, DAT extracts features and generates pseudo labels from the target domain data, and transfers the pixel- and instance-level knowledge to the student model. Specifically, due to unreliable depth prediction and the sparsity of the target lidar, the model suffers from domain shift and incomplete information when constructing voxel and BEV features. \nTherefore, we introduce a depth-aware mechanism that completes the sparse lidar with the predicted depth and adopts uncertainty estimation to retain only reliable depth predictions. DAT then adopts the depth-aware information as an intermediary to construct domain-invariant voxel and BEV features, and generates reliable pseudo labels to fully exploit the target domain data. The goal is to align the representations of DAT and the student model, thus alleviating domain shift accumulation.\n\n\\textbf{Multi-space Feature Aligned} student model.\nAs shown in Fig. \\ref{fig:method}, the MFA student model receives the transferred knowledge and deals with domain shift at the global level. It extracts features from both source and target domains while introducing an alignment mechanism to pull together the representations of the two domains in multiple latent spaces. The alignments act on the 2D image, voxel, and BEV features, and aim to obtain domain-invariant features and alleviate domain shift at the global level. MFA and DAT compensate each other to address the domain shift in the same latent spaces.\n\n\\subsection{Preliminary}\n\\label{sec:setup}\n\n\n\nFor the UDA setting \\cite{zhao2020review}, we are provided with a labeled source domain $D_{s} = \\{\\{I^i_{s}\\}^M_{j=1},L^i_{s},G^i_{s}\\}^{N_{s}}_{i=1}$ and an unlabeled target domain $D_{t} = \\{\\{I^i_{t}\\}^M_{j=1},L^i_{t}\\}^{N_{t}}_{i=1}$ of $N$ samples and $M$ camera views, in which $I^i$, $L^i$, and $G^i$ denote images, lidar, and detection ground truth, respectively. We adopt an encoder \\cite{he2016deep} to extract image features, and use the lidar data $L^i$ to supervise its depth prediction. These jointly project to the 3D voxel features $VF^i$, which are then voxel-pooled to the BEV features $BF^i$. Note that we only utilize lidar supervision during training, following previous camera-based works \\cite{li2022bevdepth}.\n\n\\subsection{Depth-aware teacher}\n\\label{sec:DAT}\n\n\\begin{figure}[t]\n\\includegraphics[width=0.38\\textwidth]{.\/images\/unc_v3.pdf}\n\\centering\n\\vspace{-0.2cm}\n\\caption{The detailed process of constructing depth-aware information. The uncertainty map is estimated by MC Dropout \\cite{gal2016dropout}.}\n\\label{fig:unc}\n\\vspace{-0.38cm}\n\\end{figure}\n\nSince the detection output is generated from voxel and BEV features which are initially constructed from depth predictions, the performance depends heavily on the accuracy of the depth sub-network \\cite{li2022bevdepth,li2022bevstereo}. However, cross-domain transfer significantly aggravates the depth prediction error \\cite{liu2022unsupervised}. Therefore, in the Depth-Aware Teacher (DAT) model, our goal is to leverage depth-aware information to construct domain-invariant depth information along with the corresponding voxel and BEV features.\n\nTo this end, we construct composite depth information in the teacher model by combining sparse lidar data with reliable and domain-invariant depth predictions. Note that domain shift lies in the original depth prediction on the target domain; we thus adaptively screen out reliable target-domain predictions by depth uncertainty estimation.
In contrast to uncertainty guidance, a simpler selection mechanism, which selects based on confidence scores, moves decision boundaries to low-density regions in sampling-based approaches. However, it may lead to inaccuracy due to the poor calibration of neural networks. Moreover, the domain gap will result in even more severe miscalibration \\cite{guo2017calibration}. Therefore, we adopt Dropout methods \\cite{gal2016dropout,ghiasi2018dropblock} with uncertainty measurement instead of confidence predictions. Specifically, similar to \\cite{rizve2021defense}, instead of designing a particular uncertainty measure, we are the first to introduce uncertainty guidance to ignore source-domain-specific depth predictions. We thus offer a new solution to address the domain shift in dense prediction tasks, which leverages an uncertainty map to select reliable and domain-invariant pixel predictions in the target domain. When reliable depth information is utilized to build the subsequent features in the target domain, the source-trained model obtains a better feature representation and concentrates more on the depth estimation task, without the influence of domain shift. \nThe uncertainty map of the depth prediction is: \n\n\\begin{equation}\n \n D^i_{map} = \\left\\{\n \\begin{array}{l}\n 1 , \\quad \\mathcal{U}(D^i)\\le \\mathcal{E}_{thresh}\\\\ \n 0 , \\quad \\mathcal{U}(D^i) > \\mathcal{E}_{thresh}\n \\end{array}\n\\right.\n\\end{equation}\nwhere $\\mathcal{U}(D^i)$ is the uncertainty of the $i^{th}$ pixel depth prediction, and $\\mathcal{E}_{thresh}$ is the selected threshold, which is chosen according to the variance of the uncertainty values. As shown in Fig.\\ref{fig:unc}, we utilize the uncertainty map to obtain reliable depth predictions and combine them with the sparse target lidar data, constructing the depth-aware information.\n\nInspired by the prevalent soft teacher \\cite{sohn2020simple, xu2021end}, the rest of the teacher model is built with the mean teacher mechanism \\cite{tarvainen2017mean}. The weights of the teacher model $\\mathcal{T}_{DAT}$ at time step $t$ are the exponential moving average (EMA) of the successive student model $\\mathcal{S}_{TGT.}$, shown below:\n\\begin{equation}\n\\label{eq:2}\n \\mathcal{T}_{DAT.}^{t} = \\alpha \\mathcal{T}_{DAT.}^{t-1} + (1-\\alpha) \\mathcal{S}_{TGT.} ^{t},\n\\end{equation}\nwhere $\\alpha$ is a smoothing coefficient. With the help of the depth-aware information and the mean-teacher mechanism, the DAT model can continuously transfer multi-latent-space (\\ie, image, voxel, BEV, output) knowledge to the student, including pixel-level features and instance-level pseudo labels. The student model can thus concentrate more on task-driven knowledge learning without domain shift.\n\n\\subsection{Multi-space feature aligned student}\n\\label{sec:MFA}\nIn student model learning, aside from the pixel- and instance-level knowledge transferred from DAT, we further introduce the Multi-space Feature Alignment (MFA) method to deal with the domain shift at the global level. In order to further alleviate the influence of domain-specific features, we pull together the feature representations of the source and target domains in multiple latent spaces, including 2D image, voxel, and BEV. Specifically, we utilize the domain adversarial training method \\cite{ganin2016domain}, which inserts gradient reversal layers and reverses back-propagated gradients from domain discriminators to optimize the model and extract domain-invariant features. Different from previous alignment methods \\cite{wang2021exploring, yu2022cross}, LSS-based approaches maintain features in multiple latent spaces, so we align the global information of all these spaces synchronously. The MFA loss $\\mathcal{L}_{MFA}$ is shown in Eq.\\ref{eq:MFA}, where ${F}_{s,l}$ and ${F}_{t,l}$ are the source and target domain features at latent space $l$ $(L\\in\\{images, voxel, BEV\\})$ and $D$ denotes the domain discriminator.\n\\begin{equation}\n\\label{eq:MFA}\n\\mathcal{L}_{MFA}({F}_{s},{F}_{t}) = \\sum_{l\\in L}(\\log{D}({F}_{s,l}) + \\log(1-{D}({F}_{t,l})))\n\\end{equation}
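The following PyTorch sketch illustrates this multi-space adversarial alignment; the discriminator architecture, channel sizes, and the treatment of each latent space as a 2D feature map are our illustrative assumptions, not the released implementation:\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass GradReverse(torch.autograd.Function):\n    # Identity in the forward pass; the gradient is negated in the\n    # backward pass, so the feature extractor learns to fool the\n    # domain discriminator.\n    @staticmethod\n    def forward(ctx, x):\n        return x.view_as(x)\n    @staticmethod\n    def backward(ctx, grad_output):\n        return -grad_output\n\nclass DomainDiscriminator(nn.Module):\n    def __init__(self, channels):\n        super().__init__()\n        self.net = nn.Sequential(\n            nn.Conv2d(channels, 256, 1), nn.ReLU(inplace=True),\n            nn.Conv2d(256, 1, 1))\n    def forward(self, feat):\n        return self.net(GradReverse.apply(feat))\n\ndef mfa_loss(discriminators, feats_source, feats_target):\n    # Sum over the image, voxel, and BEV latent spaces; source\n    # features are labelled 1 and target features 0 (voxel features\n    # are assumed to be flattened to 2D maps beforehand).\n    loss = 0.0\n    for disc, fs, ft in zip(discriminators, feats_source, feats_target):\n        ps, pt = disc(fs), disc(ft)\n        loss = loss + F.binary_cross_entropy_with_logits(\n            ps, torch.ones_like(ps))\n        loss = loss + F.binary_cross_entropy_with_logits(\n            pt, torch.zeros_like(pt))\n    return loss\n\\end{verbatim}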
In the $M^{2}ATS$ framework, DAT and MFA compensate each other to deal with the multi-latent-space domain shift problem at multiple levels. \n\n\n\n\\subsection{Training objectives and inference}\n\\label{sec:loss}\nThe $M^{2}ATS$ framework contains a Depth-Aware Teacher (DAT) model and a Multi-space Feature Aligned (MFA) student model; the teacher model is updated by the EMA operation. When adopting the DAT model to transfer multi-space features to the student, the knowledge transfer loss $\\mathcal{L}_{MKT}$ is as follows:\n\\begin{equation}\n\\mathcal{L}_{MKT} = \\sum_{l\\in L}\\frac{1}{W_{l}'\\times H_{l}'}\\sum_{i\\in \\mathcal{P}}||F_{Te,l}^{{i}}-F_{St,l}^{{i}}||^{2}\n\\label{eq:KT}\n\\end{equation}\nwhere $F_{Te,l}^{i}$ and $F_{St,l}^{i}$ stand for the $i^{th}$ pixel value from the DAT model and the student model at latent space $l$, $L\\in\\{images, voxel, BEV\\}$. $W_{l}^{'}$ and $H_{l}^{'}$ stand for the width and height of the transferred features, $\\mathcal{P} =\\{1,2,..,W_{l}'\\times H_{l}'\\}$. We thus minimize the distance between the features of $\\mathcal{T}_{DAT}$ and $\\mathcal{S}_{TGT}$ with an MSE loss. With the help of this pixel-level domain-invariant knowledge transfer, we can reduce the domain shift in the three latent spaces. Meanwhile, the integrated domain adaptation loss $\\mathcal{L}_{DA}$ is shown in Eq.\\ref{eq:domain},\n\\begin{equation}\n \\mathcal{L}_{DA} = \\lambda_1*\\mathcal{L}_{UNC} + \\lambda_2*\\mathcal{L}_{MKT} + \\lambda_3*\\mathcal{L}_{MFA}\n\\label{eq:domain}\n\\end{equation}\nwhere $\\mathcal{L}_{UNC}$ is the detection loss \\cite{li2022bevdepth} supervised by the target domain pseudo labels, which are generated by the teacher model.\nIn order to maintain the balance of the loss penalties, $\\lambda_1$ is set to 1, and $\\lambda_2$ and $\\lambda_3$ are set to 0.1.\nDuring inference, as in other camera-based methods \\cite{li2022bevdepth,li2022bevformer,li2022bevstereo}, we only adopt multi-view camera data.\n\n\n\n\n\\section{Evaluation}\nWe conduct extensive experiments to demonstrate the effectiveness of the Multi-level Multi-space Alignment Teacher-Student ($M^{2}ATS$) framework. \nIn Sec~\\ref{sec:4.1}, the details of the setup of the UDA scenarios and of the implementation are given. \nIn Sec~\\ref{sec:4.2}, we evaluate the cross-domain performance of $M^{2}ATS$ in the three classical and one continually changing scenarios, including scene, weather, day-night, and continuously increasing foggy degree adaptation. \nComprehensive ablation studies are conducted in Sec~\\ref{ablation}, which investigate the impact of each component. \nFinally, we provide qualitative analysis to better present the effectiveness of our proposed framework in Sec~\\ref{sec:4.4}.\n\n\n\\begin{table*}[t]\n \\centering\n \\caption{Results of different methods for the scene adaptation scenario on the validation set \\cite{caesar2020nuscenes}, from Boston to Singapore.
DA means utilizing a domain adaptation method; R50 and R101 denote ResNet-50 and ResNet-101 backbones.} \n \\vspace{-0.2cm}\n \n \\setlength{\\tabcolsep}{1.3mm}{\n \\begin{tabular}{c|c|c|c|cccccc}\n \\hline\n & Method & Backbone & \\cellcolor{lightgray} NDS \u2191& \\cellcolor{lightgray} mAP \u2191 & mATE \u2193 & mASE \u2193 & mAOE \u2193 & mAVE \u2193 & mAAE \u2193 \\\\\n \\hline\\hline\n & BEVDet\\cite{huang2021bevdet} & R50 & 0.126 & 0.117 & 0.873 & 0.787 & 1.347& 1.302 & 0.666 \\\\\n Baseline & BEVDepth\\cite{li2022bevdepth} & R50 & 0.174 & 0.115 & 0.888 & 0.412 & 1.031 & 1.056 & 0.527 \\\\\n & BEVDepth\\cite{li2022bevdepth} & R101 & 0.187 & 0.115 & 0.874 & 0.391 & 0.944 & 1.021 & 0.501 \\\\\n \\hline\n \\multirow{2}{*}{DA} & SFA\\cite{wang2021exploring}(BEVDepth) & R50 & 0.181 & 0.124 & 0.856 & 0.411 & 1.023 & 1.075 & 0.540 \\\\\n \n & STM3D\\cite{li2022unsupervised}(BEVDepth) & R50 & 0.183 & 0.129 & 0.840 & 0.421 & 1.050 & 1.055 & 0.550 \\\\\n \n \\hline\n \\multirow{2}{*}{DA} & Ours(BEVDepth) & R50 & \\textbf{0.208} & \\textbf{0.148} & \\textbf{0.813} & \\textbf{0.402} & \\textbf{0.907} & \\textbf{1.134} & \\textbf{0.536} \\\\\n &Ours(BEVDepth) & R101 & \\textbf{0.211} & \\textbf{0.166} & \\textbf{0.758} & \\textbf{0.427} & \\textbf{1.127} & \\textbf{1.108} & \\textbf{0.535} \\\\\n\n \\hline\n \n \n \n \\end{tabular}%\n }\n \\label{tab:scene}%\n \\vspace{-0.1cm}\n\\end{table*}%\n\n\\begin{table*}[t]\n \\centering\n \\caption{Results of different methods for the weather adaptation scenarios on the validation set \\cite{caesar2020nuscenes}, from Sunny to Rainy and Foggy-3.} \n \\vspace{-0.2cm}\n \n \\setlength{\\tabcolsep}{1.3mm}{\n \\begin{tabular}{c|c|c|ccc|ccc}\n \\hline\n & & & & Target Rainy & & & Target Foggy-3 & \\\\\n & Method & Backbone & \\cellcolor{lightgray} NDS \u2191& \\cellcolor{lightgray} mAP \u2191 & mATE \u2193 & \\cellcolor{lightgray} NDS \u2191& \\cellcolor{lightgray} mAP \u2191 & mATE \u2193 \\\\\n \\hline\\hline\n & BEVDet\\cite{huang2021bevdet} & R50 & 0.232 & 0.207 & 0.818 & 0.135 & 0.072 & 0.867 \\\\\n Baseline & BEVDepth\\cite{li2022bevdepth} & R50 & 0.268 & 0.196 & 0.824 & 0.159 & 0.079 & 0.882 \\\\\n & BEVDepth\\cite{li2022bevdepth} & R101 & 0.272 & 0.212 & 0.842 & 0.202 & 0.122 & 0.804 \\\\\n \\hline\n \\multirow{2}{*}{DA} & SFA\\cite{wang2021exploring}(BEVDepth) & R50 & 0.281 & 0.200 & 0.840 & 0.228 & 0.133 & 0.840 \\\\\n \n & STM3D\\cite{li2022unsupervised}(BEVDepth) & R50 & 0.276 & 0.212 & 0.820 & 0.234 & 0.145 & 0.721 \\\\\n \n \\hline\n \\multirow{2}{*}{DA} & Ours(BEVDepth) & R50 & \\textbf{0.305} & \\textbf{0.243} & \\textbf{0.819} & \\textbf{0.266} & \\textbf{0.173} & \\textbf{0.805} \\\\\n & Ours(BEVDepth) & R101 & \\textbf{0.308} & \\textbf{0.247} & \\textbf{0.726} & \\textbf{0.271} & \\textbf{0.174} & \\textbf{0.793} \\\\\n \\hline\n \n \n \n \\end{tabular}%\n }\n \\label{tab:weather}%\n \\vspace{-0.20cm}\n\\end{table*}%\n\n\\begin{table*}[t]\n \\centering\n \\caption{Results of different methods for the day-night adaptation scenario on the validation set \\cite{caesar2020nuscenes}. 
} \n \\vspace{-0.25cm}\n \n \\setlength{\\tabcolsep}{1.1mm}{\n \\begin{tabular}{c|c|c|c|cccccc}\n \\hline\n & Method & Backbone & \\cellcolor{lightgray} NDS \u2191& \\cellcolor{lightgray} mAP \u2191 & mATE \u2193 & mASE \u2193 & mAOE \u2193 & mAVE \u2193 & mAAE \u2193 \\\\\n \\hline\\hline\n & BEVDet\\cite{huang2021bevdet} & R50 & 0.010 & 0.009 & 0.990 & 0.977 & 1.078 & 1.509&0.984 \\\\\n Baseline & BEVDepth\\cite{li2022bevdepth} & R50 & 0.050 & 0.012 & 0.042 & 0.646 & 1.129 & 1.705 & 0.915 \\\\\n & BEVDepth\\cite{li2022bevdepth} & R101 & 0.062 & 0.036 & 1.033 & 0.706 & 0.973 & 1.447 & 0.895 \\\\\n \\hline\n \\multirow{2}{*}{DA} & SFA\\cite{wang2021exploring}(BEVDepth) & R50 & 0.092 & 0.032 & 0.995 & 0.556 & 0.993 & 1.480 & 0.948 \\\\\n \n & STM3D\\cite{li2022unsupervised}(BEVDepth) & R50 & 0.070 & 0.035 & 0.979 & 0.549 & 1.063 & 1.587 & 0.937 \\\\\n \n \\hline\n \\multirow{2}{*}{DA} & Ours(BEVDepth) & R50 & \\textbf{0.132} & \\textbf{0.054} & \\textbf{0.711} & \\textbf{0.465} & \\textbf{1.072} & \\textbf{1.504} & \\textbf{0.772} \\\\\n & Ours(BEVDepth) & R101 & \\textbf{0.188} & \\textbf{0.127} & \\textbf{0.189} & \\textbf{0.484} & \\textbf{0.820} & \\textbf{1.784} & \\textbf{0.711} \\\\\n \\hline\n \n \n \n \\end{tabular}%\n }\n \\label{tab:day}%\n \\vspace{-0.1cm}\n\\end{table*}%\n\n\n\n\\subsection{Experimental setup}\n\\label{sec:4.1}\n\\subsubsection{Datasets and adaptation scenarios}\nWe evaluate our proposed framework on nuScenes \\cite{caesar2020nuscenes}, a large-scale autonomous-driving dataset. To pave the way for Unsupervised Domain Adaptation (UDA) in multi-view 3D object detection, we split nuScenes into different paired source-target domains. We introduce three classical and challenging cross-domain scenarios: \\textbf{Scene}, \\textbf{Weather}, and \\textbf{Day-Night} adaptation. In addition, due to the lack of foggy weather (a common target condition), we generate a foggy dataset (Foggy-nuScenes) with various fog densities, inspired by Foggy-Cityscapes \\cite{sakaridis2018semantic}. The Foggy-nuScenes dataset will be released for research. We also evaluate $M^{2}ATS$ in a continually changing scenario of increasing fog density.\n\n\\textbf{Scene Adaptation} We set Boston as the source scene data and realize UDA on the Singapore target domain. Since scene layouts change frequently in autonomous driving, domain gaps arise across scenes. \n\n\\textbf{Weather Adaptation} Sunny weather serves as the source domain, while rainy and foggy weather serve as the target domains. Varied weather conditions are common in the real world, and multi-view 3D object detection should remain reliable under them. \n\n\\textbf{Day-Night Adaptation} We use daytime data as the source domain and realize UDA on the night-time target domain. Since camera-based methods suffer a tremendous domain gap from day to night, it is essential to explore domain adaptation in the day-night scenario. \n\n\\textbf{Continually Changing Adaptation} We set sunny weather data as the source domain and data of continuously increasing fog density as the target domains. Specifically, we realize UDA from sunny to Foggy-1, Foggy-3, and Foggy-5 step by step as the fog density increases. Since continually changing domain gaps commonly appear in autonomous driving, addressing this form of domain shift is essential. 
Moreover, we provide detailed information on the dataset and its generation method in the appendix.\n\n\n\n\n\n\\subsubsection{Implementation details}\nThe $M^{2}ATS$ framework is built on BEVDepth \\cite{li2022bevdepth}. Following previous work \\cite{li2022bevdepth, reading2021categorical, huang2021bevdet, li2022unifying}, ResNet-50 and ResNet-101 \\cite{he2016deep} serve as backbones to extract image features. We adopt an image input size of $256\\times 704$ and the same data augmentation methods as \\cite{li2022bevdepth}. We apply the AdamW optimizer \\cite{loshchilov2017decoupled} with a learning rate of 2e-4 and no weight decay. For training, the source-domain pretraining and the UDA experiments run for 24 and 12 epochs, respectively, \\textbf{without CBGS} \\cite{zhu2019class}. During inference, our method uses no test-time augmentation or model ensembling. We report the evaluation metrics of previous 3D detection works \\cite{li2022bevdepth,huang2021bevdet}, including the nuScenes Detection Score (NDS) and mean Average Precision (mAP), as well as five True Positive (TP) metrics: mean Average Translation Error (mATE), mean Average Scale Error (mASE), mean Average Orientation Error (mAOE), mean Average Velocity Error (mAVE), and mean Average Attribute Error (mAAE). All experiments are conducted on NVIDIA Tesla V100 GPUs. \n\n\n\\subsection{Main results}\n\\label{sec:4.2}\n\\begin{figure*}[t]\n\\includegraphics[width=0.95\\textwidth]{.\/images\/Vis_v3.pdf}\n\\centering\n\\vspace{-0.2cm}\n\\caption{ Visualizations of the benefits of our proposed method. (a) Qualitative results: the upper and bottom parts are visualizations of BEVDepth \\cite{li2022bevdepth} and our proposed method, respectively. The results are visualized on the weather adaptation scenario. (b) Visualization of feature distributions using t-SNE \\cite{van2013barnes}. The \\textcolor{blue}{blue spots} denote source features, while \\textcolor{red}{red spots} represent target features.}\n\\label{fig:vis}\n\\vspace{-0.2cm}\n\\end{figure*}\n\nWe compare our proposed method with other BEV perception methods to verify the superior performance of $M^{2}ATS$. Meanwhile, to further demonstrate our special design for addressing domain shift in LSS-based multi-view 3D object detection, we reproduce other promising 2D and mono-view 3D detection Domain Adaptation (DA) methods on BEVDepth \\cite{li2022bevdepth}, \\ie, SFA \\cite{wang2021exploring} and STM3D \\cite{li2022unsupervised}.\n\n\n\n\\noindent\\textbf{Scene Adaptation} As shown in Tab.~\\ref{tab:scene}, $M^{2}ATS$ outperforms all baseline methods, exceeding BEVDepth \\cite{li2022bevdepth} with the R50 and R101 backbones by 3.4\\% and 2.4\\% NDS, respectively. This demonstrates that our proposed method can effectively address the multi-latent-space domain shift caused by scene and environmental change. Compared with other SOTA DA methods, $M^{2}ATS$ outperforms SFA and STM3D by 2.7\\% and 2.5\\% NDS, respectively, and even improves mAP by 1.9\\% over the second-best method. This comparison further demonstrates that our proposed method is specifically designed for LSS-based 3D object detection.\n\n\n\\noindent\\textbf{Weather Adaptation} As shown in Tab.~\\ref{tab:weather}, in the Sunny-to-Foggy-3 adaptation, $M^{2}ATS$ outperforms the other baseline methods by a significant margin. It even exceeds BEVDepth by around 10\\% in both NDS and mAP (R50 setting). 
Moreover, compared with SFA and STM3D, $M^{2}ATS$ improves NDS by 3.8\\% and 3.2\\%, respectively, since it utilizes multi-space domain-invariant features to achieve a better representation of extreme-weather data. To evaluate the generalization of $M^{2}ATS$, we also conduct experiments on Sunny-to-Rainy adaptation, where $M^{2}ATS$ again improves NDS by 2.4\\% over the second-best method.\n\n\n\n\n\\noindent\\textbf{Day-Night Adaptation} Day-night adaptation is the most challenging scenario for camera-based methods; $M^{2}ATS$ significantly improves the detection performance and mitigates the domain shift problem in the night domain. As shown in Tab.~\\ref{tab:day}, the tremendous domain gap makes the baseline methods perform extremely poorly, with only 0.062 NDS and 0.036 mAP under R101, whereas $M^{2}ATS$, especially with R101 as its backbone, achieves 0.188 NDS and 0.127 mAP. Even compared with the other domain adaptation methods, it achieves a superior improvement of more than 4.0\\% NDS and 6.2\\% mAP. Because previous DA methods such as STM3D and SFA ignore the inaccurate depth estimation on night data, they cannot effectively deal with the day-night domain shift in the LSS-based method.\n\n\\noindent\\textbf{Continually Changing Adaptation} As shown in Tab.~\\ref{tab:fog}, due to the page limitation, we only compare $M^{2}ATS$ with the baseline method. As the fog density increases, the baseline method shows an obvious performance degradation. In contrast, $M^{2}ATS$ alleviates the gradually increasing domain gap and outperforms the baseline method by 12.3\\% NDS in the final Foggy-5 domain.\n\n\n\\begin{table}[t]\n \\begin{center}\n \\caption{Results of different methods for the continually changing adaptation scenario on Foggy-nuScenes, from Sunny to Foggy-1, Foggy-3, and Foggy-5 step by step. The metric is NDS.}\n \\vspace{-0.2cm}\n \\setlength{\\tabcolsep}{0.8mm}{\n \t\\begin{tabular}{c|c|ccc}\n \t\\hline\n \n Train on&Method(R50) & Foggy-1 & Foggy-3 & Foggy-5 \\\\\n \\hline\n \\hline\n \\multirow{2}{*}{Sunny}& BEVDepth \\cite{li2022bevdepth}& 0.2214 & 0.1592 & 0.096\\\\\n &Ours(BEVDepth) & \\textbf{0.2835} & \\textbf{0.2728} & \\textbf{0.2190} \\\\\n \\hline\n\t\t\\end{tabular}}\n \\label{tab:fog}\n \\vspace{-0.6cm}\n \\end{center}\n\\end{table}\n\n\\subsection{Ablation study}\n\\label{ablation}\nTo better reflect the role of each component in $M^{2}ATS$, we conduct ablation experiments to analyze how each component deals with domain shift for LSS-based BEV perception. It should be noted that this ablation study is conducted only on the \\textbf{Sunny-to-Rainy} weather adaptation; ablation studies for other scenarios are presented in the appendix. \n\n\n\n\\noindent\\textbf{The effectiveness of DAT and MFA.}\nAs shown in Tab.~\\ref{tab:abl}, vanilla BEVDepth ($Ex_{0}$) achieves only 26.8\\% NDS and 19.6\\% mAP when transferred from the sunny to the rainy domain. DAT transfers multi-latent-space knowledge to the student model via domain-invariant multi-space features and more reliable pseudo labels, which are constructed from the depth-aware information. As shown in $Ex_{1}$, the student model learns pixel- and instance-level knowledge from DAT, and NDS and mAP are improved by 1.5\\% and 3.5\\%, respectively. $Ex_{2}$ evaluates the benefits of the mean-teacher mechanism, achieving a further 0.4\\% mAP improvement. 
By gradually aligning the multi-space features ($Ex_{3-5}$) in the student model, our method gains a 2\\% improvement in NDS, which demonstrates that it is essential to align the global-level representations of the two domains in multiple latent spaces. When we combine DAT and MFA in $M^{2}ATS$ ($Ex_{6-7}$), NDS is further enhanced to 30.5\\% while mAP reaches 24.3\\%. We conclude that NDS and mAP increase continuously with the addition of each component of DAT and MFA, demonstrating that each of these modules is necessary and effective. It also proves that DAT and MFA complement each other in addressing the domain shift in multiple latent spaces.\n\n\n\n\n \n \n\n\\begin{table}[t]\n \\begin{center}\n \\caption{Ablation studies on the Sunny-to-Rainy scenario, showing the effectiveness of DAT and MFA in the framework. DAT consists of three components: depth-aware information (DA), mean teacher (MT), and multi-latent-space knowledge transfer (KT). MFA comprises three latent-space alignments: BEV (BA), image (IA), and voxel (VA) feature alignment.}\n \\vspace{-0.2cm}\n \\setlength{\\tabcolsep}{1.1mm}{\n \t\\begin{tabular}{c|ccc|ccc|cc}\n \t\\hline\n \n Name & DA & MT & KT& BA& IA & VA & \\cellcolor{lightgray} NDS \u2191&\\cellcolor{lightgray} mAP \u2191 \\\\\n \\hline\n \\hline\n $Ex_{0}$ & - & - & - & - & - & - & 0.268 & 0.196 \\\\\n \n \\hline\n $Ex_{1}$ & \\Checkmark & - & \\Checkmark & - & - & - & 0.283 & 0.231 \\\\\n \n $Ex_{2}$ & \\Checkmark & \\Checkmark & \\Checkmark & - & - & - & 0.286 & 0.235 \\\\\\hline\n \n $Ex_{3}$ & - & - & - & \\Checkmark & - & - &0.276 & 0.200\\\\\n $Ex_{4}$ & - & - & - & \\Checkmark & \\Checkmark & - & 0.282 & 0.204\\\\\n $Ex_{5}$ & - & - & - & \\Checkmark & \\Checkmark & \\Checkmark & 0.288 & 0.207 \\\\\\hline\n $Ex_{6}$ & \\Checkmark & -& \\Checkmark & \\Checkmark&\\Checkmark & \\Checkmark & 0.301 & 0.238 \\\\\n \n \n $Ex_{7}$ & \\Checkmark & \\Checkmark & \\Checkmark & \\Checkmark & \\Checkmark & \\Checkmark & 0.305 & 0.243 \\\\\n \n\t\t\\hline \n\t\t\\end{tabular}}\n \\label{tab:abl}\n \\vspace{-0.3cm}\n \\end{center}\n\\end{table}\n\n\n\\begin{table}[t]\n \\begin{center}\n \\caption{Ablation study of the effectiveness of each component of the depth-aware information. Pred means depth prediction, and UG means adaptive uncertainty-guided depth selection.}\n \\vspace{-0.2cm}\n \\setlength{\\tabcolsep}{1.3mm}{\n \t\\begin{tabular}{c|ccc|cc}\n \t\\hline\n Depth-aware: & Lidar & Pred & UG & \\cellcolor{lightgray} NDS \u2191& \\cellcolor{lightgray} mAP \u2191 \\\\\n \\hline\n \\hline\n $Ex_{2-1}$& \\Checkmark & - & - & 0.275 & 0.223 \\\\\n $Ex_{2-2}$ & \\Checkmark &\\Checkmark & - & 0.278 & 0.228 \\\\\n $Ex_{2}$ & \\Checkmark & \\Checkmark & \\Checkmark & 0.286 & 0.235 \\\\\n \\hline\n\t\t\\end{tabular}}\n \\label{tab:abl_da}\n \\vspace{-0.3cm}\n \\end{center}\n\\end{table}\n\n\\begin{table}[t]\n \\begin{center}\n \\caption{Ablation study of the effectiveness of each component of Multi-latent-space Knowledge Transfer. PL means transferring instance-level pseudo labels. 
BEV, Voxel, and Image stand for transferring in the corresponding latent spaces.}\n \\vspace{-0.2cm}\n \\setlength{\\tabcolsep}{0.8mm}{\n \t\\begin{tabular}{c|cccc|cc}\n \t\\hline\n Latent Space: & PL & BEV & Voxel & Image & \\cellcolor{lightgray} NDS \u2191& \\cellcolor{lightgray} mAP \u2191 \\\\\n \\hline\n \\hline\n $Ex_{2-3}$& \\Checkmark& - & - & - & 0.280 & 0.213 \\\\\n $Ex_{2-4}$& \\Checkmark& \\Checkmark & - & - & 0.283 & 0.222 \\\\\n $Ex_{2-5}$& \\Checkmark & \\Checkmark &\\Checkmark& - & 0.285 & 0.230 \\\\\n $Ex_{2}$ & \\Checkmark & \\Checkmark & \\Checkmark&\\Checkmark & 0.286 & 0.235 \\\\\n\t\t\\hline \n\t\t\\end{tabular}}\n \\label{tab:abl_kt}\n \\vspace{-0.45cm}\n \\end{center}\n\\end{table}\n\n \n \n\n\n \n\\noindent\\textbf{Detailed ablation study of DAT}\nWe study the effectiveness of the depth-aware information composition and the multi-latent-space knowledge transfer in DAT.\nAs shown in Tab.~\\ref{tab:abl_da}, replacing the depth prediction with the LiDAR ground truth alone ($Ex_{2-1}$) improves NDS by 0.7\\% and mAP by 2.7\\% compared with $Ex_{0}$. The clearly increased mAP demonstrates that LiDAR data play an important role in domain-invariant voxel feature construction. However, due to the sparsity of LiDAR data, we utilize dense depth predictions to complement the sparse LiDAR data. In $Ex_{2-2}$, NDS and mAP reach 27.9\\% and 22.8\\%, only a limited improvement over $Ex_{2-1}$. Therefore, we introduce uncertainty guidance to adaptively select more reliable and task-relevant depth predictions. $Ex_{2}$ shows clear performance gains over $Ex_{2-1}$ and $Ex_{2-2}$, demonstrating that the uncertainty-guided depth selection reduces the domain shift caused by domain-specific depth predictions. \nAs shown in Tab.~\\ref{tab:abl_kt}, applying knowledge transfer in different latent spaces benefits $M^{2}ATS$. With pseudo labels and BEV, voxel, and image features transferred between DAT and the student model, NDS gradually improves from 26.8\\% to 28.6\\%, and mAP improves from 19.6\\% to 23.5\\%. The improved performance demonstrates that transferring knowledge in multiple spaces eases the distinct domain shifts in each latent space. It shows that the pseudo labels and the knowledge of all three spaces, constructed from the depth-aware information, are essential for the student model to address the multi-latent-space domain gaps.\n\n\n\n\n\n\\subsection{Qualitative analysis}\n\\label{sec:4.4}\nWe further present some visualization results of the predictions produced by $M^{2}ATS$ and the baseline BEVDepth~\\cite{li2022bevdepth}, as shown in Fig.~\\ref{fig:vis} (a). It is quite clear that BEVDepth fails to locate the objects well, while $M^{2}ATS$ yields more accurate localization results, as its predicted \\textcolor{green}{green boxes} overlap better with the ground-truth \\textcolor{red}{red boxes}. We also observe that $M^{2}ATS$ can detect objects that the baseline misses, demonstrating the superiority of $M^{2}ATS$ in object detection and its great potential for deployment in real-world autonomous driving applications. The visualization in Fig.~\\ref{fig:vis} (b) further verifies the explicit cross-domain ability of $M^{2}ATS$. 
While a clear separation can be seen between the clusters of \\textcolor{blue}{blue source} and \\textcolor{red}{red target} dots produced by BEVDepth, the features generated by $M^{2}ATS$ lie closer together, further demonstrating the ability of our proposed method to represent domain-invariant features.\n\n\\begin{figure*}[t]\n\\includegraphics[width=0.98\\textwidth]{.\/images\/fog_v3.png}\n\\centering\n\\vspace{-0.65cm}\n\\caption{Visualization of the Foggy-nuScenes dataset. The first row shows the original multi-view images in nuScenes \\cite{caesar2020nuscenes}, and the last three rows show images of increasing fog density.}\n\\label{fig:fog}\n\\vspace{-0.3cm}\n\\end{figure*}\n\n\n\\section{Conclusion and discussion of limitations}\nWe explore the UDA problem in multi-view 3D object detection and propose a novel Multi-level Multi-space Alignment Teacher-Student ($M^{2}ATS$) framework to fully ease multi-latent-space domain shift for LSS-based BEV perception. On the one hand, the Depth-Aware Teacher (DAT) leverages depth-aware information to generate reliable pseudo labels and multi-space domain-invariant features, which are transferred to the student model. On the other hand, the Multi-space Feature Aligned (MFA) student model utilizes source- and target-domain data to align the global-level feature representations. $M^{2}ATS$ achieves SOTA performance in three challenging UDA scenarios and a continually changing domain-gap scenario. Regarding limitations, the teacher-student framework brings additional computational cost and a small number of extra parameters (the discriminators) during training. However, the student model keeps the same forward time and computational cost as the baseline during inference. \n\n\\section{Appendix}\n\n\nIn the supplementary material, we first present more details of our generated Foggy-nuScenes dataset in Sec.~\\ref{sec:1}, which aims at adding a foggy weather scene to nuScenes \\cite{caesar2020nuscenes} and providing an open-source dataset (Foggy-nuScenes) for research in autonomous driving. \nSecondly, in Sec.~\\ref{sec:2}, we show the details of each scenario and the partition of the dataset.\nIn Sec.~\\ref{sec:3}, we then provide additional details of the cross-domain training strategy.\nIn Sec.~\\ref{sec:4}, extra ablation studies are conducted on the day-night cross-domain scenario, investigating the impact of each component of the Multi-level Multi-space Alignment Teacher-Student ($M^{2}ATS$) framework.\nFinally, we provide additional qualitative analysis on the day-night scenario to further evaluate the effectiveness of our proposed method in Sec~\\ref{sec:5}.\n\n\\begin{table*}[t]\n \\begin{center}\n \\caption{The partitioning and details of the four UDA scenarios. 
Frames denotes the number of multi-view image frames in the datasets.}\n \\vspace{-0.2cm}\n \\setlength{\\tabcolsep}{3mm}{\n \t\\begin{tabular}{c|c|c|c|c|c}\n \t\\hline\n UDA scenarios & Domain & Training sequences & Frames & Validation sequences & Frames \\\\\n \\hline\n \\hline\n \\multirow{2}{*}{Scene changing}&Source(Boston) & 857 & 15695 & 77 & 3090\\\\\n &Target(Singapore) & 693 & 12435 & 73 & 2929\\\\\n \\hline\n \\multirow{2}{*}{Weather changing}&Source(Sunny) & 1276 & 23070 & 126 & 5051\\\\\n &Target(Rainy) & 274 & 5060 & 24 & 968\\\\\n \\hline\n \\multirow{2}{*}{Weather changing}&Source(Nuscenes) & 1550 & 28130 & 150 & 6019\\\\\n &Target(Foggy-3) & 1550 & 28130 & 150 & 6019\\\\\n \\hline\n \\multirow{2}{*}{Day-night changing}&Source(Day) & 1367 & 24745 & 135 & 5417\\\\\n &Target(Night) & 183 & 3385 & 15 & 602\\\\\n \\hline\n \\multirow{4}{*}{Continual changing}&Source(Nuscenes) & 1550 & 28130 & 150 & 6019\\\\\n &Target-1(Foggy-1) & 1550 & 28130 & 150 & 6019\\\\\n &Target-2(Foggy-3) & 1550 & 28130 & 150 & 6019\\\\\n &Target-3(Foggy-5) & 1550 & 28130 & 150 & 6019\\\\\n \\hline\n\t\t\\end{tabular}}\n \\label{tab:data}\n \\end{center}\n \\vspace{-0.3cm}\n\\end{table*}\n\\subsection{Supplementary description of the Foggy-nuScenes dataset}\n\\label{sec:1}\nWe apply the fog simulation pipeline \\cite{sakaridis2018semantic} to the multi-view images provided in the nuScenes dataset \\cite{caesar2020nuscenes}. Specifically, we generate synthetic foggy images for all scenes of the training, validation, and test sets, preserving the original annotations of the 3D object detection task. We utilize five fog-density levels of the fog simulator to construct the Foggy-nuScenes dataset: Foggy-1, Foggy-2, Foggy-3, Foggy-4, and Foggy-5 (gradually increasing fog density). As shown in Fig.~\\ref{fig:fog}, we adopt Foggy-1, Foggy-3, and Foggy-5 as the experimental datasets for the weather adaptation and continually changing scenarios, as they exhibit an obvious domain gap with the original nuScenes dataset \\cite{caesar2020nuscenes}. \n\n\n\n\n\n\\subsection{Additional details of UDA scenarios}\n\\label{sec:2}\nIn this paper, we utilize the nuScenes \\cite{caesar2020nuscenes} and Foggy-nuScenes datasets to generate three classical and one continually changing UDA scenarios. As shown in Tab.~\\ref{tab:data}, for the scene-changing scenario, we set Boston as the source scene data and realize UDA on the Singapore target domain. The source domain contains 15.6k frames of labeled data and the target domain has 12.4k frames of unlabeled data. For the weather-changing scenario, sunny weather is considered the source domain, while rainy and foggy weather are considered target domains. In the second row, the source domain has 1276 training sequences and the rainy target domain has 274 training sequences. In the third row, we leverage the entire nuScenes dataset as the source domain and set Foggy-3 as the target domain. In the day-night-changing scenario, we use daytime data as the source domain and realize UDA on the night-time target domain. Since the source-domain data are far larger than the target-domain data and cameras cannot capture night-time scenes clearly, this is considered the most challenging adaptation scenario. Finally, we set all nuScenes sequences as the source domain and continuously increasing fog-density data as the target domains. The various target domains share the same sequences and scenes with the source domain, but the different fog densities of the frames introduce a significant domain gap. 
We introduce this continually changing scenario to demonstrate the continual domain adaptation ability of our proposed method.\n\\begin{table}[t]\n \\begin{center}\n \\caption{Ablation studies on the day-night scenario, showing the effectiveness of DAT and MFA in the framework. DAT consists of three components: depth-aware information (DA), mean teacher (MT), and multi-latent-space knowledge transfer (KT). MFA comprises three latent-space alignments: BEV (BA), image (IA), and voxel (VA) feature alignment.}\n \\vspace{-0.2cm}\n \\setlength{\\tabcolsep}{1.0mm}{\n \t\\begin{tabular}{c|ccc|ccc|cc}\n \t\\hline\n \n Name & DA & MT & KT& BA& IA & VA & \\cellcolor{lightgray} NDS \u2191&\\cellcolor{lightgray} mAP \u2191 \\\\\n \\hline\n \\hline\n $Ex_{0}$ & - & - & - & - & - & - & 0.050 & 0.012 \\\\\n \n \\hline\n $Ex_{1}$ & \\Checkmark & - & \\Checkmark & - & - & - & 0.103 & 0.034 \\\\\n \n $Ex_{2}$ & \\Checkmark & \\Checkmark & \\Checkmark & - & - & - & 0.104 & 0.041 \\\\\\hline\n \n $Ex_{3}$ & - & - & - & \\Checkmark & - & - &0.065 & 0.038\\\\\n $Ex_{4}$ & - & - & - & \\Checkmark & \\Checkmark & - & 0.071 & 0.042\\\\\n $Ex_{5}$ & - & - & - & \\Checkmark & \\Checkmark & \\Checkmark & 0.098 & 0.051 \\\\\\hline\n $Ex_{6}$ & \\Checkmark & -& \\Checkmark & \\Checkmark&\\Checkmark & \\Checkmark & 0.124 & 0.049 \\\\\n \n \n $Ex_{7}$ & \\Checkmark & \\Checkmark & \\Checkmark & \\Checkmark & \\Checkmark & \\Checkmark & 0.132 & 0.054 \\\\\n \n\t\t\\hline \n\t\t\\end{tabular}}\n \\label{tab:abl_ap}\n \\end{center}\n \\vspace{-0.3cm}\n\\end{table}\n\n\\subsection{Additional implementation details}\n\\label{sec:3}\nOur cross-domain training process is divided into two stages: pretraining on the source domain and transfer training from the source to the target domain. \nFirst, to fully leverage the feature extraction ability of the model \\cite{li2022bevdepth}, we initialize the backbone with ImageNet-pretrained \\cite{deng2009imagenet} parameters. We then train the model on the source-domain data in a supervised manner to obtain the source-domain pretrained model parameters.\nIn the cross-domain phase, we load the integrated model parameters pretrained on the source domain into the student model and conduct transfer training for 12 epochs. \nThe $M^{2}ATS$ framework takes source- and target-domain data as input and leverages only the source-domain annotations. During training, we alternate Depth-Aware Teacher knowledge transfer and Multi-space Feature Alignment training to update the student model. Finally, we run the student model directly on the target-domain data to obtain the results of our proposed method.\n\n\n\\subsection{Additional ablation study on the day-night scenario}\n\\label{sec:4}\nThe $M^{2}ATS$ framework contains a Depth-Aware Teacher (DAT) and a Multi-space Feature Aligned (MFA) student model.\nTo evaluate each component of the proposed $M^{2}ATS$ framework, we conduct ablation experiments to analyze how each component mitigates domain shift for LSS-based BEV perception. It should be noted that this ablation study is conducted on the most challenging \\textbf{Day-Night} scenario, in which other methods suffer tremendous performance degradation. \n\n\\begin{table}[t]\n \\begin{center}\n \\caption{Ablation study of the effectiveness of each component of the depth-aware information. 
Pred means depth prediction, and UG means adaptive uncertainty-guided depth selection.}\n \\vspace{-0.2cm}\n \\setlength{\\tabcolsep}{1.3mm}{\n \t\\begin{tabular}{c|ccc|cc}\n \t\\hline\n Depth-aware: & Lidar & Pred & UG & \\cellcolor{lightgray} NDS \u2191& \\cellcolor{lightgray} mAP \u2191 \\\\\n \\hline\n \\hline\n $Ex_{2-1}$& \\Checkmark & - & - & 0.068 & 0.028 \\\\\n $Ex_{2-2}$ & \\Checkmark &\\Checkmark & - & 0.084 & 0.032 \\\\\n $Ex_{2}$ & \\Checkmark & \\Checkmark & \\Checkmark & 0.104 & 0.041 \\\\\n \\hline\n\t\t\\end{tabular}}\n \\label{tab:abl_da_ap}\n \\end{center}\n \\vspace{-0.3cm}\n\\end{table}\n\n\\begin{table}[t]\n \\begin{center}\n \\caption{Ablation study of the effectiveness of each component of Multi-latent-space Knowledge Transfer. PL means transferring instance-level pseudo labels. BEV, Voxel, and Image stand for transferring in the corresponding latent spaces.}\n \\vspace{-0.2cm}\n \\setlength{\\tabcolsep}{0.8mm}{\n \t\\begin{tabular}{c|cccc|cc}\n \t\\hline\n Latent Space: & PL & BEV & Voxel & Image & \\cellcolor{lightgray} NDS \u2191& \\cellcolor{lightgray} mAP \u2191 \\\\\n \\hline\n \\hline\n $Ex_{2-3}$& \\Checkmark& - & - & - & 0.076 & 0.036 \\\\\n $Ex_{2-4}$& \\Checkmark& \\Checkmark & - & - & 0.096 & 0.039 \\\\\n $Ex_{2-5}$& \\Checkmark & \\Checkmark &\\Checkmark& - & 0.101 & 0.044 \\\\\n $Ex_{2}$ & \\Checkmark & \\Checkmark & \\Checkmark&\\Checkmark & 0.104 & 0.041 \\\\\n\t\t\\hline \n\t\t\\end{tabular}}\n \\label{tab:abl_kt_ap}\n \\end{center}\n \\vspace{-0.3cm}\n\\end{table}\n\\begin{figure*}[t]\n\\includegraphics[width=0.95\\textwidth]{.\/images\/vis_ap_v2.png}\n\\centering\n\\caption{Visualizations of the benefits of our proposed method. The upper and bottom parts are visualizations of BEVDepth \\cite{li2022bevdepth} and our proposed method, respectively. The \\textcolor{red}{red boxes} are the ground truth, and the \\textcolor{green}{green boxes} are the predictions.}\n\\label{fig:vis_ap}\n\\vspace{-0.3cm}\n\\end{figure*}\n\\noindent\\textbf{The effectiveness of DAT and MFA.}\nAs shown in Tab.~\\ref{tab:abl_ap}, due to the faint light of the target domain (night-time), vanilla BEVDepth ($Ex_{0}$) achieves only 5.0\\% NDS and 1.2\\% mAP when transferred from the source domain (daytime). DAT transfers multi-latent-space knowledge to the student model via pixel-level multi-space features and instance-level pseudo labels, which are constructed from the depth-aware information. As shown in $Ex_{1}$, the student model learns domain-invariant knowledge from DAT, and NDS and mAP are thus improved by 5.3\\% and 2.2\\%, respectively. $Ex_{2}$ evaluates the benefits of the mean-teacher mechanism, which brings a 0.7\\% mAP improvement. As we can see, the depth-aware information plays an important role in cross-domain transfer learning and greatly improves NDS compared with the baseline. By gradually aligning the multi-space features ($Ex_{3-5}$) in the MFA student model, our method gains 4.8\\% NDS and 3.9\\% mAP, which demonstrates that it is essential to align the global-level representations of the two domains in multiple latent spaces. The MFA student model greatly improves mAP over the baseline, as the global-level alignment focuses more on the classification and localization of objects. \nWhen we combine DAT and MFA in $M^{2}ATS$ ($Ex_{6-7}$), NDS is further enhanced to 13.2\\% while mAP reaches 5.4\\%. 
We conclude that NDS and mAP increase continuously with the addition of each component of DAT and MFA, consistent with the ablation study in the main submission. It also proves that DAT and MFA complement each other in addressing the domain shift in multiple latent spaces.\n\n\n\n\\noindent\\textbf{Detailed ablation study of DAT}\nWe study the effectiveness of the depth-aware information composition and the multi-latent-space knowledge transfer in DAT.\nAs shown in Tab.~\\ref{tab:abl_da_ap}, replacing the depth prediction with the LiDAR ground truth alone ($Ex_{2-1}$) improves NDS by only 1.8\\% and mAP by 1.6\\% compared with BEVDepth ($Ex_{0}$). The clearly increased performance demonstrates that LiDAR data play an important role in domain-invariant voxel feature construction. However, due to the sparsity of LiDAR data, we utilize dense depth predictions to complement the sparse LiDAR data. In $Ex_{2-2}$, NDS and mAP reach 8.4\\% and 3.2\\%, only a limited improvement over $Ex_{2-1}$. Therefore, we introduce uncertainty guidance to adaptively select more reliable and task-relevant depth predictions to combine with the sparse LiDAR data. Owing to the reliable composite depth-aware information, $Ex_{2}$ shows clear performance gains over $Ex_{2-2}$, further improving NDS by 2.0\\% and mAP by 0.9\\%.\nThe results demonstrate that the uncertainty-guided depth selection reduces the domain shift caused by domain-specific depth predictions. \nAs shown in Tab.~\\ref{tab:abl_kt_ap}, applying knowledge transfer in different latent spaces benefits $M^{2}ATS$. With pseudo labels and BEV, voxel, and image features transferred between DAT and the student model, NDS gradually improves from 5.0\\% to 10.4\\%, and mAP improves from 1.2\\% to 4.1\\%. The results show that the pseudo labels and the knowledge of all three spaces, constructed from the depth-aware information, are essential for the student model to address the multi-latent-space domain gaps. In conclusion, leveraging depth-aware information, which transfers domain-invariant knowledge from the teacher model to the student model, has a significant impact on addressing multi-space domain shift in BEV perception.\n\n\n\\subsection{Additional qualitative analysis}\n\\label{sec:5}\nIn addition to the visualizations in the main submission, we present further prediction results produced by $M^{2}ATS$ and the baseline BEVDepth~\\cite{li2022bevdepth} on the most challenging scenario (day-night), as shown in Fig.~\\ref{fig:vis_ap}. Due to the faint light of night-time data, objects can hardly be classified and located even with the naked eye, let alone by camera-based methods. It is quite clear that BEVDepth produces various inaccurate and missed detections, while $M^{2}ATS$ yields more accurate localization results, as its predicted \\textcolor{green}{green boxes} overlap better with the ground-truth \\textcolor{red}{red boxes}.\nWe also observe that $M^{2}ATS$ can detect objects that the baseline misses, demonstrating the superiority of $M^{2}ATS$ in object detection and its great potential for deployment in real-world autonomous driving applications. 
However, our proposed method still misses some detections, which motivates us to pay more attention to BEV perception at night-time.\n\n{\\small\n\\bibliographystyle{ieee_fullname}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{S:introduction}\n\n\n\nAn increasing number of studies\n involve the regression analysis of $p$ continuous covariates and continuous or discrete univariate responses on $n$ subjects, with $p$ being much larger than $n$.\nThe development of effective clustering and sparse regression models for reliable predictions is especially challenging in these ``small $n$, large $p$'' problems.\nThe goal of the analysis is often three-pronged: {\\it{(i) Cluster identification:}} We wish to identify clusters of covariates with similar patterns for the subjects. For example, in biomedical studies where the covariates are gene expression levels, subsets of genes associated with distinctive between-subject patterns may correspond to different underlying biological processes; {\\it{(ii) Detection of sparse regression predictors:}} From the set of $p$ covariates, we wish to select a sparse subset of reliable predictors for the subject-specific responses and infer the nature of their relationship with the responses. In most genomic applications, just a few of the biological processes are usually relevant to a response variable of interest, and we need reliable and parsimonious regression models; and {\\it{(iii) Response prediction:}} Using the inferred regression relationship, we wish to predict the responses of $\\tilde{n}$ additional subjects for whom only covariate information is available. The reliability of an inference procedure is measured by its prediction accuracy for out-of-sample individuals.\n\nIn high-throughput regression settings with continuous covariates and continuous or discrete outcomes, this paper proposes a nonparametric Bayesian framework called \\textbf{VariScan} for simultaneous clustering, variable selection, and prediction.\n\\subsection{Motivating applications}\n\nOur methods and computational endeavors are motivated by recent high-throughput investigations in biomedical research, especially in cancer. Advances in array-based technologies allow simultaneous measurements of biological units (e.g.\\ genes) on a relatively small number of subjects. Practitioners wish to select important genes involved in the disease processes and to develop efficient prediction models for patient-specific clinical outcomes such as continuous survival times or categorical disease subtypes. The analytical challenges posed by such data include not only high-dimensionality but also the existence of considerable gene-gene correlations induced by biological interactions. In this article, we analyze gene expression profiles assessed using microarrays in patients with diffuse large B-cell lymphoma (DLBCL) \\citep{Rosenwald_etal_2002} and breast cancer \\citep{vantVeer_2002}. Both datasets are publicly available and have the following general characteristics: for individuals $i=1,\\ldots,n$, the data consist of mRNA expression levels $x_{i1},\\ldots,x_{ip}$ for $p$ genes, where $p \\gg n$, along with censored survival times denoted by $w_i$. More details, analysis results, and gains using our methods over competing approaches are discussed in Section~\\ref{S:benchmark_data}. \n\n\n\nThe scope and success of the proposed methodology and its associated theoretical results extend far beyond the examples we discuss in this paper. 
For instance, the technique is not restricted to biomedical studies; we have successfully applied VariScan in a variety of other high-dimensional~applications and achieved high inferential gains relative to existing~methodologies.\n\n\n\n\n\n\\subsection{Challenges in high-dimensional predictor detection}\nDespite the large number of existing methods related to clustering, variable selection and prediction, researchers continue to develop new methods to meet the challenges posed by newer applications and larger datasets. Predictor detection becomes particularly problematic in big datasets due to the pervasive collinearity of the covariates.\n\nFor a simple demonstration of this fact, consider a process that independently samples $n$-variate covariate column vectors $\\boldsymbol{x}_1,\\ldots,\\boldsymbol{x}_p$, so that $p=200$ vectors with $n=10$ i.i.d.\\ elements are generated from a common normal distribution. Although the vectors are independently generated, extreme values of the pairwise correlations are observed in the sample, as shown in the histogram of Figure \\ref{F:toy_pair}. The proportion of extremely high or low correlations typically increases with $p$, and with greater correlation of the generated vectors under the true~process.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.25]{example_pairwise.pdf}\n\\caption{Pairwise sample correlations for $p=200$ vectors independently generated from a multivariate normal distribution with $n=10$ uncorrelated elements.}\n\\label{F:toy_pair}\n\\end{center}\n\\end{figure}\n\n\n\nMulticollinearity is common in high-dimensional problems because the $n$-dimensional space of the covariate columns becomes saturated with the large number of covariates. This is disadvantageous for regression because a cohort of highly correlated covariates is weakly identifiable as regression predictors.\nFor example, imagine that the $j^{th}$ and $k^{th}$ covariate columns have a sample correlation close to 1, but that neither covariate is really a predictor in a linear regression model. An alternative model in which \\textit{both} covariates are predictors with equal and opposite regression coefficients has a nearly identical joint likelihood for all regression outcomes. Consequently, an inference procedure is often unable to choose between these competing models as the likely explanation for the data.\n\n In the absence of strong application-specific priors to guide model selection, collinearity makes it impossible to pick the true set of predictors in high-dimensional problems.\n Furthermore, collinearity causes unstable inferences and erroneous test case predictions \\citep{Weisberg_1985}. The problem is exacerbated if some of the regression outcomes are unobserved, as with categorical responses and survival~applications.\n\n\n\n\n\n\\subsection{Bidirectional clustering with adaptively nonlinear functional regression and prediction}\n\n\n Since the data in small $n$, large $p$ regression problems are informative only about the combined effect of a cohort of highly correlated covariates, we address the issue of collinearity using clustering approaches. Specifically, VariScan utilizes the sparsity-inducing property of Poisson-Dirichlet processes (PDPs) to first group the $p$ columns of the covariate matrix into $q$ latent clusters, where $q \\ll p$, with each cluster consisting of columns with similar patterns across the subjects. 
The data are allowed to direct the choice between a class of PDPs and their special case, a Dirichlet process, for selecting a suitable allocation scheme for the covariates. These partitions could provide meaningful insight into unknown biological processes (e.g.\\ signaling pathways) represented by the latent clusters.\n\n To flexibly capture the within-cluster pattern of the covariates, the $n$ subjects are allowed to group differently in each cluster via a nested Dirichlet process. This feature is motivated by genomic studies \\citep[e.g.,][]{Jiang_Tang_Zhang_2004} which have demonstrated that subjects or biological samples often group differently under different biological processes. In essence, this modeling framework specifies a random, bidirectional (covariate, subject) nested clustering of the high-dimensional covariate~matrix.\n\nClustering downsizes the small $n$, large $p$ problem to a ``small $n$, small $q$'' problem, facilitating an effective stochastic search of the indices $\\mathcal{S}^* \\subset\\{1,\\ldots,q\\}$ of potential \\textit{cluster predictors}. If necessary, we could then infer the indices $\\mathcal{S}\\subset\\{1,\\ldots,p\\}$ of the covariate predictors. This feature differentiates the VariScan procedure from black-box nonlinear prediction methods. In addition, the technique is capable of detecting functional relationships through elements such as nonlinear functional kernels and basis functions such as splines or wavelets. An adaptive mixture of linear and nonlinear elements in the regression relationship aims to achieve a balance between model parsimony and~flexibility. These aspects of VariScan define a joint model for the responses and covariates, resulting in an effective model-based clustering and variable selection procedure, improved posterior\ninferences and accurate test case~predictions.\n\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.6]{toy_VariScan2.pdf}\n\\vspace{-2.5 cm}\n\\caption{Stylized example illustrating the basic methodology for reliable prediction for $n=10$ subjects and $p=25$ covariates allocated to $q=11$ number of PDP clusters. The column labels represent the covariate indices. The row labels are the subjects. See the text for further explanation.}\n\\label{F:toy_VariScan}\n\\end{center}\n\\end{figure}\n\nFigure \\ref{F:toy_VariScan} illustrates the key ideas of VariScan using a toy example with $n=10$ subjects and $p=25$ covariates. The plot in the upper left panel represents a heatmap of the covariates. When investigators are interested in discovering a sparse prediction model for additional subjects, the posterior analysis averages over all possible realizations of two basic steps, both of which are stochastic and may be stylistically described as follows:\n\n \\begin{enumerate}\n \\item\\textbf{Clustering} \\quad The column vectors are allocated in an unsupervised manner to $q=11$ number of PDP clusters. This is plotted in the upper right panel, where the columns are grouped via bidirectional clustering to reveal the similarities in the within-cluster~patterns.\n\n \\item\\textbf{Variable selection and regression} \\quad One covariate is stochastically selected from each cluster and is known as the \\textit{cluster representative}. 
The middle right panel displays the set of representatives, $\\{\\boldsymbol{x}_7,\\boldsymbol{x}_4,\\boldsymbol{x}_{11},\\boldsymbol{x}_5,\\boldsymbol{x}_{24},$\n$\\boldsymbol{x}_{17},\\boldsymbol{x}_9,\\boldsymbol{x}_{12},\\boldsymbol{x}_3,\n\\boldsymbol{x}_{15},\\boldsymbol{x}_{14}\\}$, for the 11 clusters. The regression predictors are stochastically selected from the random set of the cluster representatives. Some representatives are not associated with the response; the remaining covariates are outcome predictors and may have either a linear or nonlinear regression relationship. The linear predictors $\\{\\boldsymbol{x}_{24},\\boldsymbol{x}_{12},\\boldsymbol{x}_{3}\\}$ and non-linear predictors $\\{\\boldsymbol{x}_{11},\\boldsymbol{x}_{9}\\}$ are shown in the middle left panel. For a nonlinear function $h$, the regression equation for a subject is displayed in the lower panel for a zero-mean Gaussian error, $\\epsilon$. The subscripts of the $\\beta$ parameters are the cluster labels, e.g., covariate $\\boldsymbol{x}_{24}$ represents the fifth PDP cluster.\n\\end{enumerate}\nWhen out-of-the-bag prediction is not of primary interest, alternative variable selection strategies discussed in Section \\ref{S:predictors} may be applied.\n\n\n\n\n\\subsection{Existing Bayesian approaches and limitations}\\label{S:lit review}\n\n\n\nThere is a vast literature on Bayesian strategies for one or more of the three inferential goals mentioned at the beginning of Section \\ref{S:introduction}. A majority of Bayesian model-based clustering techniques rely on the celebrated Dirichlet process; see \\citet[chap.\\ 4]{muller2013bayesian} for a comprehensive review. A seminal paper by \\cite*{Lijoi_Mena_Prunster_2007b} advocated the use of Gibbs-type priors \\citep*{Gnedin_Pitman_2005, Lijoi_Mena_Prunster_2007a} for accommodating more flexible clustering mechanisms than those induced by the Dirichlet process. This work also demonstrated the practical utility of PDPs in genomic applications.\n\nAmong model-based clustering techniques based on Dirichlet processes, the approaches of \\cite{Medvedovic_Siva_2002}, \\cite{Dahl_2006}, and \\cite{Muller_Quintana_Rosner_2011} assume that it is possible to \\textit{globally} reshuffle the rows and columns of the covariate matrix to reveal the clustering pattern. More closely related to our clustering objectives is the nonparametric Bayesian local clustering (NoB-LoC) approach of \\cite{Lee_etal_2013}, which clusters the covariates \\textit{locally} using two sets of Dirichlet processes. Although some similarities exist between NoB-LoC and the clustering aspect of VariScan, there are major differences. Specifically, the VariScan framework can accommodate high-dimensional regression in addition to bidirectional clustering. Furthermore, VariScan typically produces more efficient inferences by its greater flexibility in modeling a larger class of clustering patterns via PDPs. The distinction becomes especially important for genomic datasets where PDP-based models are often preferred to Dirichlet-based models by log-Bayes factors on the order of thousands; see Section \\ref{S:benchmark_data} for an example. Moreover, the Markov chain Monte Carlo (MCMC) implementation of VariScan explores the posterior substantially faster due to its better ability to allocate outlying covariates to singleton clusters via augmented variable Gibbs sampling. 
From a theoretical perspective, contrary to widely held beliefs about the non-identifiability of mixture model clusters, we discover the remarkable theoretical property of VariScan that, as both $n$ and $p$ grow, a fixed set of covariates that (do not) co-cluster under the true VariScan model also (do not) asymptotically co-cluster under its posterior.\n\n\n\nFrom a regression-based Bayesian viewpoint, perhaps the most ubiquitous approaches are based on Bayesian variable selection techniques in linear and non-linear regression models. See \\cite{david2002bayesian} for a comprehensive review. For Gaussian responses, the common linear methods include stochastic search variable selection \\citep{George_McCulloch_1993}, selection-based priors \\citep{Kuo_Mallick_1997} and shrinkage-based methods \\citep{Park_Casella_2008, xu2015bayesian, griffin2010inference}. Some of these methods have been extended to non-linear regression contexts \\citep{smith1996nonparametric} and to generalized linear models \\citep{dey2000generalized,meyer2002predictive}. However, most of the afore-mentioned regression methods are based on strong parametric assumptions and do not explicitly account for the multicollinearity commonly observed in high-dimensional settings. Nonparametric approaches typically assume priors on the error residuals \\citep{hanson2002modeling,kundu2014bayes} or on the regression coefficients using random effect representations \\citep{bush1996semiparametric, maclehose2010bayesian}. Nonparametric mean function estimation is typically based on basis function expansions such as wavelets \\citep{morris2006wavelet} and splines \\citep{baladandayuthapani2005spatially}. We take a fundamentally different approach in this article by defining a nonparametric joint model, first on the covariates and then via an adaptive nonlinear prediction model on the responses.\n\n\n\nThe rest of the paper is organized as follows. We introduce the VariScan model in Section \\ref{S:VariScan_model}. \nSome theoretical results for the VariScan procedure are presented in Section \\ref{S:clusters}. Through simulations in Sections \\ref{S:simulation2} and \\ref{S:simulation}, we demonstrate the accuracy of the clustering mechanism and compare the prediction reliability of VariScan with that of several established variable selection procedures using artificial datasets. In Section \\ref{S:benchmark_data}, we analyze the motivating gene expression microarray\ndatasets for lymphoma and breast cancer to demonstrate the effectiveness of VariScan and compare its prediction accuracy with those of competing methods. Additional supplementary materials contain the theorem proofs, as well as additional simulation and data analysis~results.\n\n\\bigskip\n\n\n\\section{VariScan Model}\\label{S:VariScan_model}\n\nIn this section, we lay out the detailed construction of the VariScan model components, which involves two major steps. First, we utilize the sparsity-inducing property of Poisson-Dirichlet processes to perform a bidirectional nested clustering of the covariate matrix (Section \\ref{S:covariates}), and second, we describe the choice of the cluster-specific predictors and nonlinearly relate them to the Gaussian regression outcomes of the subjects (Section \\ref{S:predictors}). 
Subsequently, Section \\ref{S:justifications} provides details of the model justifications and generalizations to discrete and survival outcomes.\n\n\\subsection{Covariate clustering model}\\label{S:covariates}\n\n\n\nFirst, each of the $p$ covariate matrix columns, $\\boldsymbol{x}_{1},\\ldots,\\boldsymbol{x}_{p}$, is assigned to one of $q$ latent clusters, where $q \\ll p$, and where the assignments and $q$ are unknown.\nThat is, for $j=1,\\ldots,p$ and $k=1,\\ldots,q$, an \\textbf{allocation variable}\n$c_j$ equals $k$ if the $j^{th}$ covariate is assigned to the $k^{th}$ cluster.\n\nWe posit that the $q$ clusters are associated with \\textbf{latent vectors} $\\boldsymbol{v}_1,\\ldots,\\boldsymbol{v}_q$ of length $n$. The covariate columns assigned to a latent cluster are essentially contaminated versions of that cluster's latent vector, which induces high correlations among covariates belonging to the same cluster.\nIn practice, however, a few individuals within each cluster may have highly variable covariates. We model this aspect by associating a larger error variance with those individuals. This is achieved via a Bernoulli variable,~$z_{ik}$, for which the value $z_{ik}=0$ indicates a high variance:\n\\begin{align*}\nx_{ij} \\mid z_{i k}, c_j = k &\\stackrel{indep}\\sim\n \\begin{cases}\n N(v_{ik}, \\tau_1^2) \\quad &\\text{if $z_{ik}=0$}\\\\\n N(v_{ik}, \\tau^2) \\quad&\\text{if $z_{ik}=1$}\\\\\n \\end{cases}\n\\end{align*}\n where\n$\\tau_1^2$ and $\\tau^2$ are variance parameters with inverse Gamma priors and $\\tau_1^2$ is much greater than $\\tau^2$. It is assumed that the support of $\\tau$ is bounded below by a small, positive constant, $\\tau_*$. Although not necessary from a methodological perspective, this restriction guarantees the asymptotic result of Section \\ref{S:clusters}.\nThe indicator variables for the individual--cluster combinations are a priori modeled as {\\it i.i.d.}:\n\\begin{equation*}\nz_{ik} \\stackrel{iid}\\sim \\text{Ber}(\\xi), \\qquad\\text{$i=1,\\ldots,n$ and $k=1,\\ldots,q$,}\n\\end{equation*}\nwhere $\\xi \\sim \\text{beta}(\\iota_1,\\iota_0)$. The condition $\\iota_1 \\gg \\iota_0$ guarantees that the prior probability $P(z_{ik} = 1)$ is nearly equal to 1, so that only a small proportion of the individuals have highly variable covariates within each cluster.\n\n\\bigskip\n\n\\noindent \\textbf{Allocation variables.} As an appropriate model for the covariate-to-cluster allocations that accommodates a wide range of allocation patterns, we rely on the partitions induced by the \\textit{two-parameter Poisson-Dirichlet process}, $\\mathbb{PDP}\\bigl(\\alpha_1, d\\bigr)$, with discount parameter $0 \\le d < 1$ and precision or mass parameter $\\alpha_1>0$. In genomic applications, for example, these partitions may allow the discovery of unknown biological processes represented by the latent clusters.\n We defer additional details of the empirical and theoretical justifications for using PDPs until Section~\\ref{S:justifications}.\n\nThe PDP was introduced by \\cite{Perman_etal_1992} and later investigated by \\cite{Pitman_1995} and \\cite{Pitman_Yor_1997}. Refer to \\cite{Lijoi_Prunster_2010} for a detailed discussion of different classes of Bayesian nonparametric~models, including Gibbs-type priors \\citep*{Gnedin_Pitman_2005, Lijoi_Mena_Prunster_2007a} such as Dirichlet processes and PDPs. 
\\cite*{Lijoi_Mena_Prunster_2007b} were the first to implement Gibbs-type priors for more flexible clustering mechanisms than Dirichlet process partitions.\n\n\nThe PDP-based allocation variables are a priori exchangeable and evolve as follows. Since the cluster allocation labels are arbitrary, we may assume without loss of generality that $c_1=1$, i.e., the first covariate is assigned to the first cluster. Subsequently, for covariates $j=2,\\ldots,p$, suppose there exist\n $q^{(j-1)}$ distinct clusters among $c_1,\\ldots,c_{j-1}$, with the $k^{th}$ cluster containing $n_{k}^{(j-1)}$ covariates. The predictive probability that the $j^{th}$ covariate is assigned to the $k^{th}$ cluster is~then\n\\begin{align*}\nP\\left(c_j = k \\mid c_1, \\ldots, c_{j-1} \\right) \\propto\n \\begin{cases}\n n_{k}^{(j-1)} - d\n \\quad &\\text{if $k = 1,\\ldots,q^{(j-1)}$}\\\\\n \\alpha_1 + q^{(j-1)} \\cdot d \\quad &\\text{if $k = q^{(j-1)} + 1$,}\\\\\n \\end{cases}\n\\end{align*}\nwhere the event $c_j=q^{(j-1)} + 1$ in the second line corresponds to the $j^{th}$ covariate opening a new cluster.\n When $d = 0$, we obtain the well-known P\\'{o}lya urn scheme for Dirichlet processes \\citep{Ferguson_1973}.\n\n In general, exchangeability holds for all product partition models \\citep{Barry_Hartigan_1993, Quintana_Iglesias_2003} and species sampling models \\citep{Ishwaran_James_2003}, of which PDPs are a special case. The number of clusters, $q$, stochastically increases as $\\alpha_1$ and $d$ increase. For $d$ fixed, the $p$ covariates are each assigned to $p$ singleton clusters in the limit as $\\alpha_1 \\to \\infty$. \n\n A PDP achieves dimension reduction in the number of covariates because $q$, the random number of clusters, is asymptotically equivalent to\n\\begin{align}\n \\begin{cases}\n \\alpha_1 \\cdot \\log p \\qquad &\\text{if $d = 0$} \\\\ \n T_{d, \\alpha_1} \\cdot p^d\\qquad &\\text{if $0 < d < 1$}\\\\\n \\end{cases}\\label{q}\n\\end{align}\nfor a random variable $T_{d, \\alpha_1} >0$ as $p\\to \\infty$. This implies that the number of Dirichlet process clusters (i.e., when $d=0$) is asymptotically of a smaller order than the number of PDP clusters when $d>0$. This property was effectively utilized by \\cite*{Lijoi_Mena_Prunster_2007b} in species prediction problems and applied to gene discovery settings. The use of Dirichlet processes to achieve\n dimension reduction has precedent in the literature; see \\cite{Medvedovic_etal_2004}, \\cite{Kim_etal_2006}, \\cite{Dunson_etal_2008} and \\cite{Dunson_Park_2008}. \n\n\n\nThe PDP discount parameter $d$ is given the mixture prior $\\frac{1}{2}\\delta_0 + \\frac{1}{2} U(0,1)$, where $\\delta_0$ denotes the point mass at 0. Posterior inference for this parameter allows us to flexibly choose between Dirichlet processes and more general~PDPs for the best-fitting clustering mechanism.\n
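\nAs a concrete illustration of this allocation scheme, the following is a small NumPy sketch, written purely for exposition (the function name and interface are ours, not part of any VariScan software), that sequentially samples a covariate partition from the two-parameter PDP urn; setting d = 0 recovers the Dirichlet process P\\'{o}lya urn.\n\\begin{verbatim}\nimport numpy as np\n\ndef sample_pdp_partition(p, alpha=1.0, d=0.25, seed=None):\n    # Sequential urn scheme for PDP(alpha, d); cluster labels are 0-indexed\n    rng = np.random.default_rng(seed)\n    labels = [0]              # c_1: the first covariate opens the first cluster\n    sizes = [1]               # n_k: number of covariates in cluster k\n    for _ in range(1, p):\n        q = len(sizes)        # current number of clusters\n        # P(existing cluster k) prop. to n_k - d\n        # P(new cluster)       prop. to alpha + q * d\n        w = np.append(np.array(sizes, dtype=float) - d, alpha + q * d)\n        k = int(rng.choice(q + 1, p=w / w.sum()))\n        if k == q:\n            sizes.append(1)   # open a new cluster\n        else:\n            sizes[k] += 1\n        labels.append(k)\n    return np.array(labels), np.array(sizes)\n\\end{verbatim}\nRepeated draws with $d>0$ produce a number of occupied clusters that grows like $p^d$, whereas $d=0$ yields logarithmic growth, consistent with the asymptotics in (\\ref{q}).\n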
This allows the individuals and clusters to communicate through the $nq$ latent vector elements:\n\\begin{equation}\nv_{ik} \\stackrel{iid}\\sim G \\qquad \\text{for } i=1,\\ldots,n, \\text{ and } k=1,\\ldots,q. \\label{v}\n\\end{equation}\nThe unknown, univariate distribution, $G$, is itself given a nonparametric Dirichlet process prior, allowing the latent vectors to flexibly capture the within-covariate patterns of the subjects:\n\\begin{equation}\nG \\sim \\mathcal{DP}(\\alpha_2)\\label{G}\n\\end{equation}\nfor mass parameter $\\alpha_2>0$ and univariate base distribution, $N(\\mu_2, \\tau_2^2)$. Being a realization of a Dirichlet process, distribution $G$ is discrete and allows the subjects to group differently in different PDP clusters. \nThe number of distinct values among the $v_{ik}$'s is asymptotically equivalent to $\\alpha_2 \\cdot \\log (nq)$, facilitating further dimension reduction and scalability of inference as $n$ approaches hundreds or thousands of individuals, as commonly encountered in genomic datasets.\n\n\n\\smallskip\n\n\n\\subsection{Prediction and regression model}\\label{S:predictors}\n\nNow, suppose there are $n_{k}$ covariates allocated to the $k^{th}$ cluster. \nWe posit that each cluster elects from among its member covariates a \\textit{representative}, denoted by $\\boldsymbol{u}_k$. A subset of the $q$ cluster representatives, rather than the covariates, features in an additive regression model that can accommodate nonlinear functional relationships. The cluster representatives may be chosen in several different ways depending on the application. Some possible options include:\n\\begin{enumerate}\n\\item[\\textit{(i)}] Select with a priori equal probability one of the $n_k$ covariates belonging to the $k^{th}$ cluster as the representative. Let $s_k$ denote the index of the covariate chosen as the representative, so that $c_{s_k}=k$ and $\\boldsymbol{u}_k=\\boldsymbol{x}_{s_k}$.\n\\item[\\textit{(ii)}] Set latent vector $\\boldsymbol{v}_k$ of Section \\ref{S:covariates} as the cluster representative.\n\\end{enumerate}\nOption \\textit{(i)} is preferable when practitioners are mainly interested in identifying the effects of individual regressors, as in gene selection applications involving cancer survival times (as noted in the introduction). Option \\textit{(ii)} is preferable when the emphasis is less on covariate selection and more on identifying clusters of candidate variables (e.g., genomic pathways) that are jointly associated with the responses.\n\n\nThe regression predictors are selected from among the $q$ cluster representatives, with their parent clusters constituting the set of \\textit{cluster predictors}, $\\mathcal{S}^* \\subset\\{1,\\ldots,q\\}$.\nExtensions of the spike-and-slab approaches \\citep{George_McCulloch_1993, Kuo_Mallick_1997, Brown_etal_1998} are applied to relate the regression outcomes to the cluster representatives as:\n\\begin{align}\ny_i &\\stackrel{indep}\\sim N\\left( \\eta_i,\\, \\sigma_i^2\\right), \\quad\\text{where} \\notag\\\\\n\\eta_i &= \\beta_0 + \\sum_{k=1}^q \\gamma_{k}^{(1)} \\beta_k^{(1)} u_{ik} + \\sum_{k=1}^q \\gamma_{k}^{(2)} h(u_{ik},\\boldsymbol{\\beta}_k^{(2)}) \\label{eta_i}\n\\end{align}\nand $h$ is a nonlinear function. Possible options for the nonlinear function $h$ in equation (\\ref{eta_i}) include reproducing kernel Hilbert spaces \\citep{mallickJRSSB2005}, nonlinear basis smoothing splines \\citep{Eubank1999}, and wavelets. 
Alternatively, due to their interpretability as a linear model, order-$r$ splines with $m$ knots \\citep{deBoor_1978, Hastie_Tibshirani_1990,Denison_etal_1998} are especially attractive and computationally tractable.\n\nThe linear predictor $\\eta_i$ in expression (\\ref{eta_i}) implicitly relies on a vector of cluster-specific indicators, $\\boldsymbol{\\gamma}=(\\boldsymbol{\\gamma}_1,\\ldots,\\boldsymbol{\\gamma}_q)$, where the triplet of indicators, $\\boldsymbol{\\gamma}_k=(\\gamma_{k}^{(0)}, \\gamma_{k}^{(1)}, \\gamma_{k}^{(2)})$, sums to 1 for each cluster $k$. If $\\gamma_{k}^{(0)}=1$, neither the cluster representative nor any of the covariates belonging to cluster~$k$ is associated with the responses. If $\\gamma_{k}^{(1)}=1$, the cluster representative appears as a simple linear regressor in equation (\\ref{eta_i}); $\\gamma_{k}^{(2)}=1$ implies a nonlinear regressor. \nThe numbers of linear predictors, non-linear predictors, and non-predictors are, respectively, $q_1 =\\sum_{j=1}^q \\gamma_j^{(1)}$, $q_2 =\\sum_{j=1}^q \\gamma_j^{(2)}$, and $q_0 =q-q_1 - q_2$.\nFor a simple illustration of this concept, consider again the toy example of Figure \\ref{F:toy_VariScan}, where one covariate is nominated from each cluster as the~representative. Of the $q=11$ cluster representatives, $q_1=3$ are linear predictors, $q_2=2$ are non-linear predictors, and the remaining $q_0=6$ representatives are non-predictors.\n\n\n\nFor nonlinear functions $h$ having a linear representation (e.g., splines), let $\\boldsymbol{U}_{\\boldsymbol{\\gamma}}$ be a matrix of $n$ rows consisting of the intercept column and the independent regressors based on the cluster representatives. For example, if we use order-$r$ splines with $m$ knots in equation (\\ref{eta_i}), then the number of columns, $\\text{col}(\\boldsymbol{U}_{\\boldsymbol{\\gamma}})=q_1 + (m+r)\\cdot q_2 + 1$. With $[\\cdot]$ denoting densities of random variables, the\n prior is\n\\begin{equation}\n[\\boldsymbol{\\gamma}] \\propto \\omega_0^{q_0}\\omega_1^{q_1}\\omega_2^{q_2}\\cdot \\mathcal{I}\\biggl(\\text{col}(\\boldsymbol{U}_{\\boldsymbol{\\gamma}}) < n\\biggr), \\label{gamma}\n\\end{equation}\nwhere the probabilities $\\boldsymbol{\\omega}=(\\omega_{0}, \\omega_{1}, \\omega_{2})$ are given the Dirichlet distribution prior, $\\boldsymbol{\\omega} \\sim \\mathcal{D}_3(1,1,1)$.\nThe truncated prior for $\\boldsymbol{\\gamma}$ is designed to ensure model sparsity and prevent overfitting, as explained below. Conditional on the variances of the regression outcomes in equation (\\ref{eta_i}), we postulate a weighted $g$-prior for the regression coefficients:\n\\begin{equation}\n\\boldsymbol{\\beta}_{\\boldsymbol{\\gamma}} | \\boldsymbol{\\Sigma} \\sim N_{|\\mathcal{S}^*|+1}\\biggl(\\boldsymbol{0}, \\sigma_\\beta^2({\\boldsymbol{U}_{\\boldsymbol{\\gamma}}}'\\boldsymbol{\\Sigma}^{-1}\\boldsymbol{U}_{\\boldsymbol{\\gamma}})^{-1}\\biggr),\\label{Zellner}\n\\end{equation}\nwhere matrix $\\boldsymbol{\\Sigma}=\\text{diag}(\\sigma_1^2,\\ldots,\\sigma_n^2)$.\n\nA schematic representation of the entire hierarchical model involving both the clustering and prediction components is shown in Figure \\ref{F:dag}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=1.1]{dag_2016.pdf}\n\\caption{Directed acyclic graph of the VariScan model in which the cluster representatives are chosen from the set of co-clustered covariates. 
Circles represent stochastic model parameters, solid rectangles represent data and deterministic variables, and dashed rectangles represent model constants. Solid (dashed) arrows represent stochastic (deterministic) relationships. }\n\\label{F:dag}\n\\end{center}\n\\end{figure}\n\n\n\n\n\\bigskip\n\n\n\\subsection{Model justification and generalizations}\\label{S:justifications}\n\nIn this section, we discuss the justification, consequences, and generalizations of different aspects of the VariScan model. In particular, we investigate the appropriateness of PDPs in this application as a tool for covariate clustering. We also discuss the choice of basis functions for the nonlinear prediction model and consider generalizations to discrete and survival outcomes.\n\n\\bigskip\n\n\\noindent \\textbf{Empirical justification of PDPs.} We conducted an exploratory data analysis (EDA) of the gene expression levels in the DLBCL data set of \\citet*{Rosenwald_etal_2002}.\nRandomly selecting a set of $p=500$ probes for $n=100$ randomly chosen individuals, we iteratively applied the $k$-means procedure until the covariates were grouped into fairly concordant clusters with a small overall value of $\\tau^2$.\nThe allocation pattern depicted in Figure \\ref{F:eda_barchart} is atypical of Dirichlet processes, which, as is well known among practitioners, are usually associated with a relatively small number of clusters and exponentially decaying cluster sizes. Instead,\nthe large number of clusters ($\\hat{q}=161$) and the predominance of small clusters suggest a non-Dirichlet model for the covariate-cluster assignments. More specifically, a PDP is favored due to the slower, power-law decay in the cluster sizes typically associated with these models.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.3]{eda_barchart.pdf}\n\\caption{Barchart of cluster sizes obtained by exploratory data analysis.}\n\\label{F:eda_barchart}\n\\end{center}\n\\end{figure}\n\n\\bigskip\n\n\\noindent \\textbf{Theoretical justifications for a PDP model.}\n\\cite{Sethuraman_1994} derived the \\emph{stick-breaking representation} for a Dirichlet process, and then \\cite{Pitman_1995} extended it to PDPs. These stick-breaking representations have the following consequences for the induced partitions. Let $\\mathbb{N}$ be the set of natural numbers.\n Subject to a one-to-one transformation of the first $q$ natural numbers into $\\mathbb{N}$, the allocation variables $c_1,\\ldots,c_p$ may be regarded as i.i.d.\\ samples from a discrete distribution $F_{\\alpha_1,d}$ on $\\mathbb{N}$ with stick-breaking probabilities, $\\pi_1 = V_1$ and $\\pi_h = V_h \\prod_{t=1}^{h-1}(1-V_t)$ for $h = 2,3,\\ldots$, where $V_h \\stackrel{indep}\\sim \\text{beta}(1-d, \\alpha_1+hd)$.\nThis implies that for large values of $p$ and for clusters $k=1,\\ldots,q$, the frequencies $n_k^{(p)}\/p$ are approximately equal to $\\pi_{h_k}$ for some distinct integers $h_1,\\ldots,h_q$.\n\n\nAs previously mentioned, the VariScan model assumes that the base distribution $G^{(n)}$ of the PDP is the $n$-fold product measure of a univariate distribution, $G$, which follows a Dirichlet process with mass parameter $\\alpha_2$. This bidirectional clustering structure has some interesting consequences. Let $\\{\\phi_h\\}_{h=1}^{\\infty}$ be the stick-breaking probabilities associated with this nested Dirichlet process. 
For two or more of the $q$ PDP clusters, the latent vectors are identical with a probability bounded above by ${q \\choose 2} \\cdot \\bigl(\\sum_{h=1}^\\infty \\phi_h^2\\bigr)^n$. Applying the asymptotic relationship of $p$ and $q$ given in expression (\\ref{q}), we find that the upper bound tends to 0 as the dataset grows, provided $p$ grows at a slower-than-exponential rate in $n$. In fact, for $n$ as small as 50 and $p$ as small as 250, in simulations as well as in data analyses, we found all the latent vectors associated with the PDP clusters to be distinct. Consequently, from a practical standpoint, the VariScan allocations may be regarded as clusters with unique characteristics in even moderate-sized datasets.\n\nTheorem \\ref{Theorem:stick-breaking} below provides formal expressions for the first and second moments of the random log-probabilities of the discrete distribution $F_{\\alpha_1,d}$. In conjunction with equation (\\ref{q}), this result justifies the use of PDPs when the observed number of clusters is large or the cluster sizes decay slowly. Part~\\ref{T:DP_lim} provides an explanation for the fact that Dirichlet process allocations typically consist of a small number of clusters, only a few of which are large, with exponential decay in the cluster sizes. Part~\\ref{T:PDP_lim} suggests that in PDPs with $d>0$ (i.e., non-Dirichlet realizations), there is a slower, power-law decay of the cluster sizes as $d$ increases. Part~\\ref{T:PDP_DP} indicates that for every $\\alpha_1$ and $d>0$, a PDP realization $F_{\\alpha_1,d}$ has a thicker tail than a Dirichlet process realization, $F_{\\alpha_1,0}$. See Section \\ref{S_sup:stick-breaking proof} of the Appendix for a proof.\n\n\n\nIt should be noted that the differential allocation patterns of PDPs and Dirichlet processes are well known, and have been previously emphasized in several papers, including \\cite*{Lijoi_Mena_Prunster_2007a} and \\cite*{Lijoi_Mena_Prunster_2007b}. However, a formal proof of this differential behavior is difficult to find in the literature. Although the theorem is primarily of interest when the base measure is non-atomic, it is relevant in this application because of the empirically observed uniqueness of the latent vectors in high-dimensional~settings due to VariScan's nested structure.\n\n\\smallskip\n\n\\begin{theorem}\\label{Theorem:stick-breaking}\nConsider the process $\\mathbb{PDP}\\bigl(\\alpha_1, d \\bigr)$ with mass parameter $\\alpha_1>0$ and discount parameter $0 \\le d < 1$. Let $\\psi(x)=d\\log \\Gamma(x)\/dx$ denote the digamma function and $\\psi_1(x)=d^2\\log \\Gamma(x)\/dx^2$ denote the trigamma function.\n\\begin{enumerate}\n\n \n\\item\\label{T:PDP} For $0 < d < 1$, the distribution $F_{\\alpha_1,d}$ on $\\mathbb{N}$ is a realization of a PDP with stick-breaking probabilities $\\pi_h$, where $h \\in \\mathbb{N}$. \n However, $F_{\\alpha_1,d}$ is not a Dirichlet process realization because $d \\neq 0$. Then\n \\begin{enumerate}\n \\item\\label{T:PDP_E} $E(\\log \\pi_h)=\\psi(1-d)-\\psi(\\alpha_1)+\\frac{1}{d}\\bigl(\\psi(\\alpha_1\/d)-\\psi(\\alpha_1\/d+h)\\bigr)$. This implies that $\\lim_{h \\to \\infty}E(\\log \\pi_h)=-\\infty$.\n\n \\item\\label{T:PDP_V} $\\text{Var}(\\log \\pi_h)=\\psi_1(1-d)-\\psi_1(\\alpha_1)+\\frac{1}{d^2}\\bigl(\\psi_1(\\alpha_1\/d)-\\psi_1(\\alpha_1\/d+h)\\bigr)$. 
Unlike a Dirichlet process realization, $\\lim_{h \\to \\infty} \\text{Var}(\\log \\pi_h)$ is finite for every $d>0$.\n\n \\item\\label{T:PDP_lim} For any $\\alpha_1>0$ and as $h\\to\\infty$,\n $\\log \\pi_h\/\\log h^{-1\/d} \\stackrel{p}\\rightarrow 1\n $ for non-Dirichlet process realizations.\n\n \\end{enumerate}\n\n\\item\\label{T:DP} For $d=0$, the distribution $F_{\\alpha_1,0}$ on $\\mathbb{N}$ is a Dirichlet process realization with stick-breaking probabilities $\\pi_h^*$ based on $V_h^* \\stackrel{iid}\\sim \\text{beta}(1, \\alpha_1)$ for $h \\in \\mathbb{N}$. Then\n \\begin{enumerate}\n \\item\\label{T:DP_E} $E(\\log \\pi_h^*)=\\psi(1)-\\psi(\\alpha_1)-h\/\\alpha_1$. Thus, $\\lim_{h \\to \\infty}E(\\log \\pi_h^*) = -\\infty$.\n\n \\item\\label{T:DP_V} $\\text{Var}(\\log \\pi_h^*)=\\psi_1(1)-\\psi_1(\\alpha_1)+h\/\\alpha_1^2$. Thus, $\\lim_{h \\to \\infty} \\text{Var}(\\log \\pi_h^*)=\\infty$.\n\n \\item\\label{T:DP_lim} As $h\\to\\infty$,\n $\n \\sqrt{h}\\left(\\frac{1}{h}\\log (\\pi_h^*) + 1\/\\alpha_1\\right) \\stackrel{L}\\rightarrow N(0, 1\/\\alpha_1^2)$. This implies that as $h\\to\\infty$, the random stick-breaking Dirichlet process probabilities, $\\pi_h^*$, are stochastically equivalent to $e^{-h\/\\alpha_1}$.\n \\end{enumerate}\n\n\n \\item\\label{T:PDP_DP} As $h\\to\\infty$,\n $\\sqrt{h}\\left(\\frac{1}{h}\\log (\\pi_h^*\/\\pi_h) + 1\/\\alpha_1\\right) \\stackrel{L}\\rightarrow N(0, 1\/\\alpha_1^2)$. That is, as $h\\to\\infty$, the ratios of the Dirichlet process and non-Dirichlet process stick-breaking random probabilities, $\\pi_h^*\/\\pi_h$, are stochastically equivalent to $e^{-h\/\\alpha_1}$ for every $d>0$. \n\\end{enumerate}\n\\end{theorem}\n\n\n\\smallskip\n\n\n\\begin{remark}\nBy Lemma~1 of \\cite{Ishwaran_James_2003}, $\\lim_{h \\to \\infty}E(\\log \\pi_h^*) = -\\infty$ in Part~\\ref{T:DP_E} of Theorem \\ref{Theorem:stick-breaking} implies that $\\sum_{h=1}^{\\infty} \\pi_h^*=1$ almost surely for a Dirichlet process. A similar comment applies in Part~\\ref{T:PDP_E} for a PDP.\n\\end{remark}\n\n\\bigskip\n\n\\noindent \\textbf{Empirical justification of the nested Dirichlet process model for the latent vector elements.}\nFor the DLBCL dataset, Figure \\ref{F:DP barchart} presents a summary of the VariScan model estimates for the 14,000 latent vector elements with estimated Bernoulli indicators $\\hat{z}_{ik}=1$. More than 87\\% of the $n\\hat{q}=16,500$ latent vector elements were estimated to have $\\hat{z}_{ik}=1$, implying that a relatively small proportion of covariate values for the DLBCL dataset can be regarded as random noise having no clustering structure. Further details about the inference procedure are provided in Section \\ref{S:inference}. In Figure \\ref{F:DP barchart}, the small number of clusters corresponding to the large number of latent vector elements, and the sharp decline in the cluster sizes compared with Figure \\ref{F:eda_barchart}, are consistent with Dirichlet process allocation patterns. 
Similar results were obtained for the breast cancer data and for other genomic datasets that we have analyzed.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.31]{dahl_DP_barchart.pdf}\n\\caption{For the DLBCL dataset, least-squares Dirichlet process configuration of the more than 14,000 latent vector elements with Bernoulli indicators equal to 1.}\n\\label{F:DP barchart}\n\\end{center}\n\\end{figure}\n\n\n\\bigskip\n\n\n\\noindent \\textbf{Choice of basis functions: model parsimony versus flexibility.} \\label{S:parsimony_v_flexibility}\nThe reliability of inference and prediction rapidly deteriorates as the number of cluster predictors and additive nonlinear components in equation (\\ref{eta_i}) increases beyond a threshold value and approaches the number of subjects, $n$. The restriction in the prior~(\\ref{gamma}) prevents over-fitting. It ensures that the matrix $\\boldsymbol{U}_{\\boldsymbol{\\gamma}}$, consisting of the independent regression variables, has fewer columns than rows, and is a sufficient condition for the existence of $({\\boldsymbol{U}_{\\boldsymbol{\\gamma}}}'\\boldsymbol{\\Sigma}^{-1}\\boldsymbol{U}_{\\boldsymbol{\\gamma}})^{-1}$ and the least-squares estimate of $\\boldsymbol{\\beta}_{\\boldsymbol{\\gamma}}$ in equation (\\ref{eta_i}). In spline-based models, the relatively small number of subjects also puts a constraint on the order of the splines, often necessitating the use of linear splines with $m=1$ knot per cluster in equation (\\ref{eta_i}). In the applications presented in this paper, we fixed the knot for each covariate at the sample median.\n\n\nUnusually small values of $\\sigma_i^2$ in equation (\\ref{eta_i}) correspond to over-fitted models, whereas unusually large values correspond to under-fitted models. Any parameters that determine $\\sigma_1^2,\\ldots,\\sigma_n^2$ are key, and their priors must be carefully chosen.\n For instance, linear regression assumes that $\\sigma_i^2=\\sigma^2$. We have found that non-informative priors for $\\sigma^2$ do not work well because the optimal model sizes for variable selection are unknown.\n Additionally, we have found that it is helpful to restrict the range of $\\sigma^2$ based on reasonable goals for inference precision.\nIn the examples discussed in this paper, we assigned the following truncated prior:\n$\\sigma^{-2} \\sim \\chi^2_{\\nu} \\cdot \\mathcal{I}\\left(0.95^{-1}\/\\text{Var}(\\hat{\\boldsymbol{y}}) < \\sigma^{-2} < 0.5^{-1}\/ \\text{Var}(\\hat{\\boldsymbol{y}})\\right)\n$,\nwhere the degrees of freedom $\\nu$ were appropriately chosen and the vector $\\hat{\\boldsymbol{y}}$ relied on EDA estimates of latent regression outcomes from a previous study or the training set individuals. The support for $\\sigma^{-2}$\napproximately corresponds to the constraint $0.5 < R^2 < 0.95$, quantifying the effectiveness of~regression.\nAs Sections \\ref{S:simulation} and \\ref{S:benchmark_data} demonstrate, the aforementioned strategies often result in high reliability of the response predictions.\n\n\n\\bigskip\n\n\\noindent \\textbf{Generalizations for discrete or survival outcomes.} In a general investigation, the subject-specific responses may be discrete or continuous, and\/or may be censored. \nIn such cases, the responses, denoted by $w_1,\\ldots,w_n$, can be modeled as deterministic transformations of random variables~$R_i$ from an exponential family distribution. 
The Laplace approximation \\citep{Harville_1977} transforms each $R_i$ into a Gaussian \\textit{regression outcome},~$y_i$, that can be modeled using our VariScan model proposed above.\nThe details of the calculation are as follows.\nFor a set of functions~$f_i$, we assume that $w_i = f_i(R_i)$ and that $R_i$ has density function\n$[R_i \\mid \\varrho_{i}, \\varsigma ] = r(R_i,\\varsigma)\\cdot \\exp\\left(\n\\frac{R_i\\,\\varrho_{i}-b(\\varrho_{i})}{a(\\varsigma)}\\right)$,\nwhere $r(\\cdot)$ is a non-negative function, $\\varsigma$ is a dispersion parameter, $\\varrho_{i}$ is the canonical parameter, and $[\\cdot]$ represents densities with respect to a dominating measure.\nThe Laplace approximation relates the $R_i$'s to Gaussian regression outcomes:\n$y_{i} = \\eta_{i} + \\frac{\\partial \\eta_{i}}{\\partial \\mu_{i}}\\cdot(R_{i} - \\mu_{i})$ is approximately $N\\left(\\eta_{i},\\sigma_i^2\\right)$\nwith precision $\\sigma_i^{-2}=\\{b^{''}(\\mu_{i})\\}^{-1}\\left(\\partial\n\\mu_{i}\/\\partial \\eta_{i}\\right)^2$.\nFor an appropriate link function $g(\\cdot)$, the mean $\\eta_{i}$ equals $g(\\mu_{i})$.\nGaussian, Poisson, and binary responses are all accommodated in this setting.\nAccelerated failure time (AFT) censored outcomes \\citep{Buckley_James_1979, Cox_Oakes_1984} also fall into this modeling framework.\n\n\nThe idea of using a Laplace-type approximation for exponential families is not new. Some early examples in Bayesian settings include \\cite{Zeger_Karim_1991}, \\cite{Chib_Greenberg_1994}, and \\cite{Chib_Greenberg_Winkelmann_1998}.\nFor linear regression, the approximation is exact\nwith $y_{i} = R_{i}$.\nThe Laplace approximation\nis not restrictive even when it is approximate; for example, MCMC proposals for the model parameters can be filtered through a Metropolis-Hastings step to obtain samples from the target posterior. Alternatively, inference strategies relying on normal mixture representations through auxiliary variables could be used to relate the $R_i$'s to the $y_i$'s. For instance, \\cite{Albert_Chib_1993} used truncated normal sampling to obtain a probit model for binary responses, and \\cite{HolmesHeld2006}\n utilized a scale mixture representation of the normal distribution \\citep{Andrews_Mallows_1974,West_1987} to implement logistic regression using latent variables.\n\n\n\\bigskip\n\n \\section{Posterior inference}\\label{S:inference}\n\n\n Starting with an initial configuration obtained by a na\\\"{i}ve, preliminary analysis, the model parameters are iteratively updated by MCMC methods.\n Due to the intensive nature of the posterior inference, the analysis is performed in two stages, with cluster detection followed by predictor discovery:\n\n \\smallskip\n\n \\begin{enumerate}\n \\item[\\textbf{Stage 1}] Focusing only on the covariates and ignoring the responses:\n\n \\smallskip\n\n \\begin{enumerate}\n \\item[\\textit{Stage 1a}] The allocation variables, latent vector elements, and binary indicators are iteratively updated until the MCMC chain converges. Monte Carlo estimates are computed for the posterior co-clustering probability of each pair of covariates. 
Applying the technique of \\cite{Dahl_2006}, these pairwise probabilities are used to compute a point estimate, called the \\textit{least-squares allocation}, for the allocation~variables.\n Further details of the MCMC procedure are provided in Sections \\ref{S:MCMC_c} and \\ref{S:MCMC_v} of the Appendix.\n\n\n \\smallskip\n\n \\item[\\textit{Stage 1b}] Conditional on the least-squares allocation being the true clustering of the covariates, a second MCMC sample of the latent vector elements and binary indicators is generated. Again applying the technique of \\cite{Dahl_2006}, we compute a point estimate, called the \\textit{least-squares configuration}, for the latent vector elements and binary indicators.\n \\end{enumerate}\n\n \\bigskip\n\n \\item[\\textbf{Stage 2}] Conditional on the least-squares allocation and least-squares configuration, and focusing on the responses, the cluster predictors and latent regression outcomes, if any, are generated to obtain a third MCMC sample. The MCMC sample is post-processed to predict the responses for the held-out test set individuals. The interested reader is referred to Sections \\ref{S:MCMC_gamma}, \\ref{S:MCMC_y} and \\ref{S:test_case_prediction} of the Appendix for details.\n \\end{enumerate}\n\nAs a further benefit of a coherent model for the covariates, VariScan is able to perform model-based imputations of any missing subject-specific covariates as part of the MCMC procedure.\n\n\n\\bigskip\n\n\n\n\n\\section{Clustering consistency}\\label{S:clusters}\n\n\nAs mentioned in Section \\ref{S:covariates}, the latent vectors associated with two or more PDP clusters may be identical under the VariScan model, but this probability becomes vanishingly small as $n$ grows. Consequently, for practical purposes, the VariScan allocations may be interpreted as distinct, identifiable clusters in even moderately large datasets. In order to study the reliability of VariScan's clustering procedure in our targeted Big Data applications, we\n make large-sample theoretical comparisons between the VariScan model's cluster allocations and the true allocations of a hypothetical covariate generating process.\n\n When mixture models are used to allocate $p$ objects to an unknown number of clusters, the non-identifiability and redundancy of the detected clusters have been extensively documented in Bayesian and frequentist applications \\citep[e.g., see][]{Fruhwirth-Schnatter_2006}. Some partial solutions are available in the Bayesian literature. For example, in finite mixture models, rather than assuming exchangeability of the mixture component parameters, \\cite{Petralia_etal_2012} regard them as draws\nfrom a repulsive process, leading to fewer, better separated and more interpretable\nclusters. \\cite{Rousseau_Mengersen_2011} show that a carefully chosen prior leads to asymptotic emptying of the\nredundant components in over-fitted finite mixture models.\nThe underlying strategy of these procedures is to focus on detecting the correct number of clusters rather than the correct allocation of the $p$~objects.\n\nIn contrast to the non-identifiability of the detected clusters in fixed $n$ settings, Theorem \\ref{Thm:consistency} establishes the interesting fact that, when $p$ and $n$ are both large, a fixed set of covariates that (do not) co-cluster under the true process also (do not) asymptotically co-cluster under the posterior. 
The key intuition is that, as with most mixture model applications, when $n$-dimensional objects are clustered and $n$ is small, it is possible for the clusters to be erroneously placed too close together even if $p$ is large. On the other hand, if $n$ is allowed to grow with $p$, then objects in $\\mathcal{R}^n$ eventually become well separated. \nConsequently, for $n$ and $p$ large enough, the VariScan method is able to infer the true clustering for a fixed subset of the $p$ covariate columns. In the sequel, using synthetic datasets in Section~\\ref{S:simulation2}, we exhibit the high accuracy of VariScan's clustering-related inferences.\n\n \\bigskip\n\n \\paragraph{\\textbf{The true model.}} The VariScan model's exchangeability assumption for the $p$ covariates stems from our belief in the existence of a true, unknown de Finetti density in $\\mathcal{R}^n$ from which the column vectors arise as a random sample. In particular, for any given $n$ and $p$, we make the following assumptions about the true covariate-generating process:\n\n\\begin{enumerate}[(a)]\n\\item\\label{true:first} The column vectors $\\boldsymbol{x}_1,\\ldots,\\boldsymbol{x}_p$ are a random sample of size $p$ from an $n$-variate distribution $P_0^{(n)}$ convolved with $n$-variate, independent-component Gaussian errors.\n \\item The true distribution $P_0^{(n)}$ is discrete in the space $\\mathcal{R}^n$. Let the $n$-dimensional atoms of $P_0^{(n)}$ be denoted by $\\boldsymbol{v}^{(0)}_t=(v^{(0)}_{1t},\\ldots,v^{(0)}_{nt})'$ for positive integers $t$.\n \\item\\label{true.alloc} Due to the discreteness of distribution $P_0^{(n)}$, there exist true allocation variables, $c_1^{(0)},\\ldots,c_p^{(0)}$, mapping the $p$ covariates to distinct atoms of $P_0^{(n)}$. For subjects $i=1,\\ldots,n$, and columns $j=1,\\ldots,p$, the covariates are then distributed as\n \\begin{equation}\n x_{ij} \\mid c_j^{(0)} \\stackrel{indep}\\sim N(v^{(0)}_{i\\, c_j^{(0)}}, \\tau_0^2), \\label{true.lik.x}\n \\end{equation}\n\\item The $n$-variate atoms of distribution $P_0^{(n)}$ are i.i.d.\\ realizations of the $n$-fold product measure of a univariate distribution, $G_0$. Consequently, the atom elements are $v^{(0)}_{it}\\stackrel{i.i.d.}\\sim G_0$ for $i=1,\\ldots,n$ and positive integers $t$.\n\n \\item\\label{true:last} The true distribution $G_0$ is non-atomic and has compact support on the real line.\n\\end{enumerate}\n\n\n\\bigskip\n\nLet $\\mathcal{L}=\\{j_1,\\ldots, j_L\\} \\subset \\{1,\\ldots,p\\}$ be a fixed subset of $L$ covariate indexes. Given a vector of inferred allocations $\\boldsymbol{c}=(c_1,\\ldots,c_p)$, we quantify the inference accuracy by the \\textit{proportion of correctly clustered covariate pairs}:\n \\begin{equation}\n \\varkappa_{\\mathcal{L}}(\\boldsymbol{c}) = \\frac{1}{{L \\choose 2}} \\sum_{j_1 \\neq j_2 \\in \\mathcal{L}} \\mathcal{I}\\biggl(\\mathcal{I}(c_{j_1}=c_{j_2})=\\mathcal{I}(c_{j_1}^{(0)}=c_{j_2}^{(0)})\\biggr). \\label{varkappa}\n \\end{equation}\n A value near 1 indicates the high accuracy of inferred allocations $\\boldsymbol{c}$ for the set $\\mathcal{L}$. Notice that the measure $\\varkappa_{\\mathcal{L}}(\\boldsymbol{c})$ is invariant to permutations of the cluster labels. This is desirable because the labels are arbitrary.\n\n\n\n\n\\bigskip\n\n\\begin{theorem}\\label{Thm:consistency}\nDenote the covariate matrix by $\\boldsymbol{X}_{np}$. 
In addition to assumptions (\\ref{true:first})--(\\ref{true:last}) about the true covariate-generating process, suppose that the true standard deviation $\\tau_0$ in equation (\\ref{true.lik.x}) is bounded below by $\\tau_*$, the small, positive constant postulated in Section~\\ref{S:covariates} as a lower bound for the VariScan model parameters, $\\tau_1$ and $\\tau$.\n\nLet $\\mathcal{L}=\\{j_1,\\ldots, j_L\\} \\subset \\{1,\\ldots,p\\}$ be a fixed subset of $L$ covariate indexes. Then there exists an increasing sequence of numbers $\\{p_n\\}$ such that, as $n$ grows and provided $p>p_n$, the VariScan clustering inferences for the covariate subset $\\mathcal{L}$ are a posteriori consistent. That is,\n\\[\n\\lim_{\\substack{n \\to \\infty \\\\ p> p_n}} P\\bigl[\\varkappa_{\\mathcal{L}}(\\boldsymbol{c})=1 \\mid \\boldsymbol{X}_{np}\\bigr] = 1.\n\\]\n\n\n\\end{theorem}\n\n\\bigskip\n\nSee Section \\ref{S:proof of Thm:consistency} of the Appendix for a proof.\nThe result relies on non-trivial extensions, in several directions, of the important theoretical insights provided by \\cite{Ghosal_Ghosh_Ramamoorthi_1999}.\nSpecifically, it extends Theorem 3 of \\cite{Ghosal_Ghosh_Ramamoorthi_1999} to densities on $\\mathcal{R}^n$ arising as convolutions of vector locations with errors distributed as zero-mean finite normal mixtures. \n\\bigskip\n\n\\section{Simulation studies}\n\n\\smallskip\n\n\\subsection{Cluster-related inferences}\\label{S:simulation2}\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.4]{d_cred.pdf}\n\\caption{ 95\\% posterior credible intervals for the discount parameter $d$, for different values of $\\tau_0$. The true value, $d_0$, is shown by the red dashed line.}\n\\label{F:d}\n\\end{figure}\n\n We investigate VariScan's accuracy as a clustering procedure using artificial datasets for which the true clustering pattern is known. We simulated the covariates for $n=50$ subjects and $p=250$ genes from a discrete distribution convolved with Gaussian noise, and compared the co-clustering posterior probabilities of the $p$ covariates with the truth. The parameters of the true model were chosen to approximately match the corresponding estimates for the DLBCL dataset of \\cite{Rosenwald_etal_2002}. Specifically, for each of 25 synthetic datasets, and for the true model's parameter $\\tau_0$ in Theorem~\\ref{Thm:consistency} belonging to the range $[0.60, 0.96]$, we generated the following quantities to obtain the matrix $\\boldsymbol{X}$ in Step \\ref{X_step} below:\n\n\\begin{enumerate}\n\n \\item \\textbf{True allocation variables:} We generated $c_1^{(0)},\\ldots,c_p^{(0)}$ as the partitions induced by a PDP with true discount parameter $d_0=0.33$ and mass parameter $\\alpha_1=20$. The true number of clusters, $Q_0$, was thereby computed for this non-Dirichlet allocation.\n\n\\item \\textbf{Latent vector elements:} For $i=1,\\ldots,n$ and $k=1,\\ldots,Q_0$, elements $v_{ik}^{(0)} \\stackrel{iid}\\sim G_0$, where $G_0 \\sim \\mathcal{DP}(\\alpha_2)$,\nwith mass $\\alpha_2=10$ and uniform base distribution $U_0$ on the interval $[1.4,2.6]$.\n\n\\item\\label{X_step} \\textbf{Covariates:} $x_{ij} \\stackrel{indep}\\sim N(v_{ic_j}^{(0)}, \\tau_0^2)$ for $i=1,\\ldots,n$ and $j=1,\\ldots,p$.\n\n\\end{enumerate}\n\n No responses were generated in this study. 
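\n\nTo make the simulation design concrete, the following minimal Python sketch (our own illustrative code with hypothetical function names, not the software used for the analyses) generates one synthetic dataset: the true allocations are drawn from the PDP urn scheme of Section~\\ref{S:covariates}, the latent vector elements are drawn from an approximate (truncated stick-breaking) Dirichlet process realization, and Gaussian noise is added.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\n\ndef pdp_partition(p, alpha, d):\n    # Covariate-to-cluster allocations from the PDP urn scheme.\n    c = np.zeros(p, dtype=int)       # the first covariate opens cluster 0\n    sizes = [1]\n    for j in range(1, p):\n        q = len(sizes)\n        w = np.array([n_k - d for n_k in sizes] + [alpha + q * d])\n        k = int(rng.choice(q + 1, p=w \/ w.sum()))\n        if k == q:\n            sizes.append(1)          # covariate j opens a new cluster\n        else:\n            sizes[k] += 1\n        c[j] = k\n    return c\n\ndef dp_draws(size, alpha, base, H=500):\n    # Approximate draws from G0 ~ DP(alpha) via truncated stick-breaking.\n    V = rng.beta(1.0, alpha, H)\n    w = V * np.concatenate(([1.0], np.cumprod(1.0 - V)[:-1]))\n    return base(H)[rng.choice(H, size=size, p=w \/ w.sum())]\n\nn, p, tau0 = 50, 250, 0.60\nc0 = pdp_partition(p, alpha=20.0, d=0.33)      # true allocations\nQ0 = int(c0.max()) + 1                         # true number of clusters\nv0 = dp_draws((n, Q0), alpha=10.0,\n              base=lambda H: rng.uniform(1.4, 2.6, H))\nX = v0[:, c0] + rng.normal(0.0, tau0, (n, p))  # covariate matrix\n\\end{verbatim}\nThe seed and the truncation level $H$ are implementation conveniences; varying the noise level over $[0.60, 0.96]$ reproduces the design above.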
\n Applying the general technique of \\cite{Dahl_2006} developed for Dirichlet process models, we computed a point estimate for the allocations, called the \\textit{least-squares allocation}, denoted by $\\hat{c}_1,\\ldots,\\hat{c}_p$. For the full set of covariates, we estimated the accuracy of the least-squares allocation by the \\textit{estimated proportion of correctly clustered covariate pairs},\n \\[\n \\hat{\\varkappa} = \\frac{1}{{p \\choose 2}} \\sum_{j_1 \\neq j_2 \\in \\{1,\\ldots,p\\}} \\mathcal{I}\\biggl(\\mathcal{I}(\\hat{c}_{j_1}=\\hat{c}_{j_2})=\\mathcal{I}(c_{j_1}^{(0)}=c_{j_2}^{(0)})\\biggr).\n \\]\n A high value of $\\hat{\\varkappa}$ is indicative of VariScan's high clustering accuracy for all $p$ covariates.\n\n For each value of $\\tau_0$, the second column of Table \\ref{T:varkappa} displays the percentage $\\hat{\\varkappa}$ averaged over the 25 independent replications. We find that, for each $\\tau_0$, significantly fewer than 5 pairs were incorrectly clustered out of the ${250 \\choose 2}=$ 31,125 different covariate pairs, and so $\\hat{\\varkappa}$ was significantly greater than 0.999. The posterior inferences appear to be robust to large noise levels, i.e., large values of $\\tau_0$. For every dataset, $\\hat{q}$, the estimated number of clusters in the least-squares allocation, was exactly equal to $Q_0$, the true number of~clusters. Recall that the non-atomicity of true distribution $G_0$ is one of the sufficient conditions of Theorem~\\ref{Thm:consistency}. Although the condition is not satisfied in this setting, we nevertheless obtained highly accurate clustering-related inferences for the full set of $p=250$~covariates.\n\n\n\n Accurate inferences were also obtained for the PDP discount parameter, $d \\in [0,1)$. Figure \\ref{F:d} plots the 95\\% posterior credible intervals for $d$ against different values of $\\tau_0$. The posterior inferences were substantially more precise than the prior, and each interval contained the true value,~$d_0=0.33$. Furthermore, in spite of being assigned a prior probability of 0.5, there was no posterior mass allocated to Dirichlet process models.\nThe ability of VariScan to discriminate between PDP and Dirichlet process models was evaluated using the log-Bayes factor, $\n \\log\\left(P[d>0|\\boldsymbol{X}]\/P[d=0|\\boldsymbol{X}]\\right)$. With $\\Theta^*$ representing all the parameters except $d$, and applying Jensen's inequality, the log-Bayes factor exceeds $E\\left(\\log\\left(\\frac{P[d>0|\\boldsymbol{X},\\Theta^*]}{P[d=0|\\boldsymbol{X},\\Theta^*]} \\right) \\mid \\boldsymbol{X} \\right)$, which (unlike the log-Bayes factor) can be estimated using just the post--burn-in MCMC sample. 
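\n\nIn practice, this lower bound is estimated by averaging the conditional log-odds over the post--burn-in draws; a minimal sketch (with a hypothetical input array, not the paper's software) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef log_bf_lower_bound(log_cond_odds):\n    # log_cond_odds[s] = log( P[d>0 | X, Theta*_s] \/ P[d=0 | X, Theta*_s] )\n    # at the s-th post-burn-in MCMC draw.  Returns the Monte Carlo estimate\n    # of the Jensen lower bound and its standard error.\n    x = np.asarray(log_cond_odds, dtype=float)\n    return x.mean(), x.std(ddof=1) \/ np.sqrt(x.size)\n\\end{verbatim}\n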
For each $\\tau_0$, the third column of Table \\ref{T:varkappa} displays 95\\% posterior credible intervals for this\n lower bound.\n The Bayes factors are significantly greater than $e^{10}=22,026.5$ and are overwhelmingly in favor of PDP~allocations, i.e., the true model.\n\n\n {\\small\n\n\\begin{table}\n\\begin{center}\n\\renewcommand{\\arraystretch}{1}\n\\begin{tabular}{ c | c |c }\n\\hline\\hline\n \\textbf{True $\\tau_0$} &\\textbf{Percent $\\hat{\\varkappa}$} &\\textbf{95\\% C.I.\\ for lower }\\\\\n & &\\textbf{bound of log-BF} \\\\\n\\hline\n0.60 &99.984 (0.000) &(11.05, 11.10)\\\\\n0.66 &99.978 (0.000) &(11.17, 11.25)\\\\\n0.72 &99.976 (0.000) &(10.89, 10.98)\\\\\n0.78 &99.973 (0.001) &(10.23, 10.31)\\\\\n0.84 &99.971 (0.000) &(10.86, 10.93)\\\\\n0.90 &99.960 (0.000) &(11.88, 11.94)\\\\\n0.96 &99.941 (0.001) &(10.49, 10.56)\\\\\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\caption{For different values of simulation parameter $\\tau_0$, column 2 displays the proportion of correctly clustered covariate pairs, with the standard errors for the 25 independent replications shown in parentheses. Column 3 presents 95\\% posterior credible intervals for the lower bound of the log-Bayes factor of PDP models relative to Dirichlet process models. See the text for further explanation.}\\label{T:varkappa}\n\\end{table}\n}\n\n\n\n\n\n\n\n\\subsection{Prediction accuracy}\\label{S:simulation}\n\n\nWe evaluate the operating characteristics of our methods using a simulation study based\non the DLBCL dataset of \\cite{Rosenwald_etal_2002}. To generate the simulated data, we selected $p=500$ genes from the original gene expression dataset of 7,399 probes, as detailed below:\n\n\n\n\\begin{enumerate}\n\n\\item Select $10$ covariates with pairwise correlations less than 0.5 as the true predictor set, $\\mathcal{S} \\subset\\{1,\\ldots,500\\}$, so that $|\\mathcal{S}|=10$.\n\n\n\\item For each value of $\\beta^*\\in \\{0.2, 0.6, 1.0\\}$:\n\n\\begin{enumerate}\n\n \\item For subjects $i=1,\\ldots,100$, generate failure times $t_i$ from distribution $ \\mathcal{E}_i$, denoting the exponential distribution with mean~$\\exp(\\beta^* \\sum_{j\\in \\mathcal{S}} x_{ij})$. Note that the model used to generate the outcomes differs from VariScan assumption~(\\ref{eta_i}) for the log-failure times.\n\n\\item For 20\\% of individuals, generate their censoring times as follows: $u_i \\sim$ $\\mathcal{E}_i \\cdot \\mathcal{I}(u_i < t_i)$. Set the survival times of these individuals to $w_i=\\log u_i$ and their failure statuses to $\\delta_i=0$.\n\n\\item For the remaining individuals, set $w_i = \\log t_i$ and $\\delta_i=1$.\n\n\\end{enumerate}\n\n\\item Randomly assign 67 individuals to the training set and the remaining 33 individuals to the test set.\n\n\\item Assuming the AFT survival model, apply the VariScan procedure with linear splines and $m=1$ knot per spline. Choose a single covariate from each cluster as the representative as described in Section \\ref{S:predictors}. 
Make posterior inferences using the training data and predict the outcomes for the test~cases.\n\n\\end{enumerate}\n\nWe analyzed the\n same set of simulated data using six other techniques for gene selection with survival outcomes: lasso \\citep{Tibshirani_1997}, adaptive lasso \\citep{Zou_2006}, elastic net \\citep{Zou_Trevor_2005}, $L_2$-boosting \\citep{Hothorn_Buhlmann_2006}, random survival forests \\citep{Ishwaran_etal_2010}, and supervised principal components \\citep{Bair_Tibshirani_2004}, which have been implemented in the R packages glmnet, mboost, randomSurvivalForest, and superpc. The ``RSF-VH'' version of the random survival forests procedure was chosen because of its success in high-dimensional~problems. The selected techniques are excellent examples of the three categories of approaches for small $n$, large $p$ problems (variable\nselection, nonlinear prediction, and regression\nbased on lower-dimensional projections) discussed in Section~\\ref{S:introduction}.\nWe repeated this procedure over fifteen independent replications.\n\n\n\n\n\n\nWe compared the prediction errors of the methods using the \\textit{concordance error rate}, which is defined as $1-C$, where $C$ denotes the c index of \\cite{Harrell_etal_1982}. Let the set of ``usable'' pairs of subjects be $\\mathcal{U} = \\{(i,j): w_i < w_j, \\delta_i=1\\} \\cup \\{(i,j): w_i = w_j, \\delta_i\\neq \\delta_j\\}$. The concordance error rate of a procedure is \\citep{May_etal_2004}:\n $\n 1 - C = \\frac{1}{|\\mathcal{U}|}\\sum_{(i,j) \\in \\mathcal{U}} \\mathcal{I}(\\tilde{w}_i \\ge \\tilde{w}_j) - \\frac{1}{2|\\mathcal{U}|}\\sum_{(i,j) \\in \\mathcal{U}} \\mathcal{I}(\\tilde{w}_i = \\tilde{w}_j)\n$,\nwhere $\\tilde{w}_i$ is the predicted response of subject $i$. For example, for the VariScan procedure applied to analyze AFT survival outcomes, the predicted responses are $\\tilde{w}_i=\\exp(\\tilde{y}_i)$, where\n $\\tilde{y}_i$ are the predicted regression outcomes. \n\n \\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.31]{new2_sim_boxplot_1.pdf}\n\\includegraphics[scale=0.31]{new2_sim_boxplot_2.pdf}\n\\includegraphics[scale=0.31]{new2_sim_boxplot_3.pdf}\n\\caption{Side-by-side boxplots comparing the percentage concordance error rates of the different techniques in the simulation study.}\n\\label{F:C2_simulation}\n\\end{center}\n\\end{figure}\n\nThe concordance error rate measures a procedure's probability of incorrectly ranking the failure times of two randomly chosen individuals. The accuracy of a procedure is inversely related to its concordance error rate. The measure is especially useful for comparisons because it does not rely on the survivor function, which is estimable by VariScan, but not by some of the other procedures.\n Figure~\\ref{F:C2_simulation} depicts boxplots of the concordance error rates of the procedures sorted by increasing order of prediction accuracy. \n We find that as $\\beta^*$ increases, the concordance error rates progressively decrease for most procedures, including VariScan. For larger $\\beta^*$, the error rates for VariScan are significantly lower than the error rates for the other~methods.\n\nIn order to facilitate a more systematic evaluation, we have plotted in Figure~\\ref{F:sims} the error rates versus model sizes for the different methods, thereby providing a joint examination of model parsimony and prediction. 
To aid a visual interpretation, we did not include the supervised principal components method, since it performs the worst in terms of prediction and detects models that are two- to four-fold larger than $L_2$-boosting, which typically produces the largest models among the depicted methods. The three panels correspond to increasing effect size, $\\beta^*$. A few facts are evident from the plots. VariScan seems to balance sparsity and prediction the best for all values of $\\beta^*$, with its performance increasing appreciably with $\\beta^*$. Penalization approaches such as lasso, adaptive lasso, and elastic net produce sparser models but have lower prediction accuracies. $L_2$-boosting is comparable to VariScan in terms of prediction accuracy, but detects larger models for the lower effect sizes (left and middle panel); VariScan is the clear winner for the largest effect size (right panel). Additionally, especially for the largest $\\beta^*$, we observe substantial variability between the simulation runs for the penalization approaches, as reflected by the large standard errors. Further simulation study comparisons of VariScan and the competing approaches are presented in Section \\ref{SA:prediction_simulation} of the Appendix.\n\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.5]{plot_sims.pdf}\n\\caption{Plot of concordance error rates versus model sizes for the competing methods along with the standard errors (shown by whiskers).\nThe left, middle, and right panels respectively correspond to effect sizes $\\beta^*$ equal to 0.2, 0.6, and 1.0.}\n\\label{F:sims}\n\\end{center}\n\\end{figure}\n\n\\textbf{Nonlinearity measure.} Unlike some existing approaches, VariScan is able to measure the degree of nonlinearity in the relationships between the responses and covariates. For example, we could define the \\textit{nonlinearity measure} $\\mathcal{N}$ as the posterior expectation,\n\\begin{equation}\n\\mathcal{N} = E\\bigl( \\frac{\\omega_2}{\\omega_1+\\omega_2} \\mid \\boldsymbol{w},\\boldsymbol{X}\\bigr). \\label{N}\n\\end{equation}\nThis represents the posterior probability that a hypothetical, new cluster predictor appears in equation~(\\ref{eta_i}) as a non-linear predictor rather than as a simple linear regressor. \nA value of $\\mathcal{N}$ close to 1 corresponds to predominantly nonlinear associations between the responses and their predictors.\n\nAveraging over the 15 independent replications of the simulation, as $\\beta^*$ varied over the set $\\{0.2, 0.6, 1.0\\}$, the estimates of the nonlinearity measure $\\mathcal{N}$ defined in equation~(\\ref{N}) were 0.72, 0.41, and 0.25, respectively. The corresponding standard errors were 0.04, 0.07, and 0.06. This indicates that on the scale of the simulated log-failure times, simple linear regressors are increasingly preferred to linear splines as the signal-to-noise ratio, quantified by $\\beta^*$, increases. Such interpretable measures of nonlinearity are not provided by the competing methods.\n\n\\bigskip\n\n\\begin{center}\n\\section{Analysis of benchmark data sets} \\label{S:benchmark_data}\n\\end{center}\n\nReturning\n to the two publicly available datasets of Section \\ref{S:introduction}, we chose $p=500$ probes for further analysis. For the DLBCL dataset of \\citet*{Rosenwald_etal_2002}, we randomly selected 100 out of the 235 individuals who had non-zero survival times. Of the individuals selected, 50\\% had censored failure times. 
For the breast cancer dataset of \\citet*{vantVeer_2002}, we analyzed the 76 individuals with non-zero survival times, of which 44 individuals (57.9\\%) had censored failure times.\n\nWe performed 50 independent replications of the three steps that follow. \\textit{(i)} We randomly split the data into training and test sets in a 2:1 ratio. \\textit{(ii)} We analyzed the survival times and $p=500$ gene expression levels of the training cases using the techniques VariScan, lasso, adaptive lasso, elastic net, $L_2$-boosting, random survival forests, and supervised principal components. \\textit{(iii)}~The different techniques were used to predict the test~case outcomes. For the VariScan procedure, a single covariate from each cluster was chosen to be the cluster representative.\n\n\nThe numbers of clusters for the least-squares allocation of covariates, $\\hat{q}$, computed in Stage 1a of the analysis, were 165 and 117, respectively, for the DLBCL and breast cancer datasets. The estimates of the nonlinearity measure $\\mathcal{N}$ were 0.97 and 0.75, respectively, with small standard errors. This indicates that the responses in both datasets, but especially in the DLBCL dataset, have predominantly nonlinear relationships with the predictors. In spite of being assigned a prior probability of 0.5, the estimated posterior probability of the Dirichlet process model (corresponding to discount parameter $d=0$) was exactly 0 for both datasets, justifying the PDP-based allocation scheme.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.31]{d.pdf}\n\\includegraphics[scale=0.31]{q.pdf}\n\\includegraphics[scale=0.31]{dahl_covariate_barchart.pdf}\n\\caption{Posterior summaries for the DLBCL dataset. The top panels and the lower panel summarize the least-squares covariate-to-cluster PDP allocation of the 500 genes. \n}\n\\label{F:clustering}\n\\end{center}\n\\end{figure}\n\nFor the DLBCL data, the upper panel of Figure \\ref{F:clustering} displays the estimated posterior density of the PDP's discount parameter $d$. The estimated posterior probability of the event $[d=0]$ is exactly zero, implying that a non-Dirichlet process clustering mechanism is strongly favored by the data, as suggested earlier by the EDA. The middle panel of Figure \\ref{F:clustering} plots the estimated posterior density of the number of clusters. The a posteriori large number of clusters (for $p=500$ covariates) is suggestive of a PDP model with $d>0$ (i.e.\\ a non-Dirichlet process model).\nThe lower panel of Figure \\ref{F:clustering} summarizes the cluster sizes of the least-squares allocation \\citep{Dahl_2006}. The large number of clusters ($\\hat{q}=165$) and the multiplicity of small clusters are very unusual for a Dirichlet process, justifying the use of the more general PDP~model.\n\n\n\nThe effectiveness of VariScan as a model-based clustering procedure can be shown as follows.\nFor each of the $\\hat{q}=165$ clusters in the least-squares allocation of Stage 1a, we computed the correlations between its member covariates and the latent vector, using only the individuals with $\\hat{z}_{ik}=1$. The cluster-wise median correlations are plotted in Figure \\ref{F:corr}. The plots reveal fairly good within-cluster concordance regardless of the cluster size. 
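\n\nAs a hedged illustration of this diagnostic (our own sketch with hypothetical array names, not the software used for the paper), the cluster-wise median correlations can be computed as follows:\n\\begin{verbatim}\nimport numpy as np\n\ndef clusterwise_median_corr(X, c_hat, V_hat, Z_hat):\n    # For each cluster k: median over member covariates j (c_hat[j] == k)\n    # of corr(X[rows, j], V_hat[rows, k]), where rows are the subjects\n    # with Z_hat[i, k] == 1, i.e. those that follow the cluster structure.\n    med = {}\n    for k in np.unique(c_hat):\n        rows = Z_hat[:, k] == 1\n        members = np.flatnonzero(c_hat == k)\n        if rows.sum() < 3 or members.size == 0:\n            continue   # too little data for a stable correlation\n        v = V_hat[rows, k]\n        corrs = [np.corrcoef(X[rows, j], v)[0, 1] for j in members]\n        med[k] = float(np.median(corrs))\n    return med\n\\end{verbatim}\n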
Figure \\ref{F:heatmap_cluster} displays heatmaps for the DLBCL covariates that were allocated to column clusters having more than 10 members.\nThe panels display the covariates before and after bidirectional clustering of the subjects and probes, with the lower panel of Figure \\ref{F:heatmap_cluster} illustrating the within-cluster patterns revealed by VariScan. For each column cluster in the lower panel, the uppermost~rows represent the covariates of any subjects that do not follow the cluster structure and that are better modeled as random noise (i.e., covariates with $\\hat{z}_{ik}=0$).\n\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.3]{dahl_median_corr.pdf}\n\\includegraphics[scale=0.3]{corr_size.pdf}\n\\caption{For the DLBCL dataset, median pairwise correlations for the $\\hat{q}=165$ PDP clusters in the least-squares allocation of Stage 1a.}\n\\label{F:corr}\n\\end{center}\n\\end{figure}\n\n\n\n\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.5]{raw_cluster.pdf}\n\\includegraphics[scale=0.5]{VariScan_cluster.pdf}\n\\caption{Heatmaps of DLBCL covariates that were assigned to latent column clusters with more than 10 members. The panels display the covariates before and after bidirectional local clustering by VariScan. The vertical lines in the bottom panel mark the covariate-clusters. The color key for both panels is displayed at the top of the plot.}\n\\label{F:heatmap_cluster}\n\\end{center}\n\\end{figure}\n\n\nComparing the test case predictions with the actual survival times, we present boxplots of the concordance error rates for all the methods in Figure \\ref{F:C}. \nThe success of VariScan appears to be robust to the different censoring rates of the survival datasets. Although $L_2$-boosting had comparable error rates for the DLBCL dataset, VariScan had the lowest error rates for both datasets. Further data analysis results and comparisons are available in Section \\ref{S_sup:benchmark data} of the~Appendix.\n\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.25]{new_DLBCL_boxplot.pdf}\n\\includegraphics[scale=0.25]{new_vantVeer_boxplot.pdf}\n\\caption{Side-by-side boxplots of percentage concordance error rates for the benchmark datasets.}\n\\label{F:C}\n\\end{center}\n\\end{figure}\n\n\n\n\nFor subsequent biological interpretations, we selected genes having a high probability of being selected as predictors (with the upper percentile decided by the model size). We then analyzed these genes for their role in cancer progression by cross-referencing with the existing literature. For the breast cancer dataset, our survey indicated several prominent genes related to breast cancer development and progression, such as TGF-B2 \\citep{pmid17261761}, ABCC3, which is known to be up-regulated in primary breast cancers, and LAPTM4B, which is related to breast carcinoma relapse with metastasis \\citep{pmid20098429}. For the DLBCL dataset, we found several genes related to DLBCL progression, such as the presence of multiple chemokine ligands (CXCL9 and CCL18), interleukin receptors of IL2 and IL5 \\citep{pmid16418498}, and BNIP3, which is down-regulated in DLBCL and is a known marker associated with positive survival \\citep{pmid18288132}. 
A detailed functional\/mechanistic analysis of the main set of genes for both datasets is provided in Section \\ref{S_sup:benchmark data} of the Appendix.\\\\\n\n\\bigskip\n\n\\section{Conclusions}\\label{S:conclusion}\n\nUtilizing the sparsity-inducing property of PDPs, VariScan offers an efficient technique for clustering, variable selection, and prediction in high-dimensional regression problems. The covariates are grouped into a smaller number of clusters consisting of covariates with similar across-subject patterns. We theoretically demonstrate how a PDP allocation can be differentiated from a Dirichlet process allocation in terms of the relative sizes of the latent clusters. We provide a theoretical explanation for the impressive ability of VariScan to detect, a posteriori, the true covariate clusters for a general class of models.\n\nIn simulations and real data analysis, we show that VariScan makes highly accurate cluster-related inferences. The technique consistently outperforms established methodologies such as random survival forests, $L_2$-boosting, and supervised principal components, in terms of prediction accuracy. \nIn the analyses of benchmark microarray datasets, we identified several genes having known implications in cancer development and progression, which further supports our hypothesis.\n\nThe VariScan methodology focuses on continuous covariates as a proof of concept, achieving simultaneous clustering, variable selection, and prediction in high-throughput regression settings and possessing appealing theoretical and empirical properties. Generalization to count, categorical, and ordinal covariates is possible. It is important to investigate the dependence structures and theoretical properties associated with the more general framework. This will be the focus of our group's future~research.\n\nDue to the intensive nature of the MCMC inference, we performed these analyses in two stages, with cluster detection followed by predictor discovery. We are currently working on implementing VariScan's MCMC procedure in a parallel computing framework using graphics processing units \\citep{suchard2010understanding}. We plan to make the software available as an R package for general purpose use in the near future. The single-stage analysis will allow the regression and clustering results to be interrelated, as implied by the VariScan model. We anticipate being able to dramatically speed up the calculations by multiple orders of magnitude, which will allow for single-stage inferences of user-specified datasets on ordinary desktop and laptop~computers.\n\n\n\n\\bibliographystyle{plainnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nLet $A$ and $B$ be two closed \nconvex nonempty sets in a Hilbert space $X$. The (2-set) convex feasibility problem asks to find a point in the intersection of $A$ and $B$ (or, when $A \\cap B=\\emptyset$, a pair of points, one in $A$ and the other in $B$, that realizes the distance between $A$ and $B$). The relevance of this problem is due to the fact that many mathematical and applied problems can be formulated as a convex feasibility\nproblem. As typical examples, we mention the solution of convex inequalities, partial differential equations, the minimization of convex nonsmooth functions, medical imaging, computerized tomography, and image reconstruction. 
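\n\nIn symbols, and as a standard reformulation that we record here only for concreteness, the problem reads: find $x\\in A\\cap B$ or, when $A \\cap B=\\emptyset$, find a pair $(\\bar a,\\bar b)\\in A\\times B$ such that\n$$\\|\\bar a-\\bar b\\|=\\mathrm{dist}(A,B)=\\inf_{a\\in A,\\ b\\in B} \\|a-b\\|.$$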
\n\nThe method of alternating projections is the simplest iterative procedure for finding a solution of the convex feasibility problem and it goes back to von Neumann \\cite{vonNeumann}: let us denote by $P_A$ and $P_B$ the projections onto the sets $A$ and $B$, respectively, and, given a starting point $c_0\\in X$, consider the {\\em alternating projections sequences} $\\{c_n\\}$ and $\\{d_n\\}$ given by $$d_n=P_{B}(c_{n-1})\\ \\ \\text{and}\\ \\ c_n=P_{A}(d_n)\\ \\ \\ \\ \\ (n\\in\\N).$$\nIf the sequences $\\{c_n\\}$ and $\\{d_n\\}$ converge in norm, we say that the method of alternating projections converges.\nOriginally, von Neumann proved that the method of alternating projections converges when $A$ and $B$ are closed subspaces. Then, for two generic convex sets, the weak convergence of the alternating projections sequences was proved by Bregman in 1965 (\\cite{Bregman}). Nevertheless, the problem of whether the alternating projections algorithm converges in norm for each couple of convex sets remained open until the example given by Hundal in 2004 (\\cite{Hundal}). This example shows that the alternating projections do not converge in norm when $A$ is a suitable convex cone and $B$ is a hyperplane touching the vertex of $A$. Moreover, this example emphasizes the importance of finding sufficient conditions ensuring the norm convergence of the alternating projections algorithm. In the literature, conditions of this type were studied (see, e.g., \\cite{BauschkeBorwein93,BorweinSimsTam}), even before the example by Hundal.\nHere, we focus on those conditions based on the notions of regularity, introduced in \\cite{BauschkeBorwein93}. Indeed, in the present paper, we investigate the relationships between regularity of the couple $(A,B)$ (see Definition~\\ref{def: regularity} below) and ``stability'' properties of the alternating projections method in the following sense. Let us suppose that \n$\\{A_n\\}$ and $\\{B_n\\}$\nare two sequences of closed convex sets such that $A_n\\rightarrow\nA$ and $B_n\\rightarrow B$ for the Attouch-Wets variational convergence (see Definition~\\ref{def:AW}) and let us introduce the definition of {\\em perturbed alternating projections sequences}.\n\n\n\\begin{definition}\\label{def:perturbedseq} Given $a_0\\in X$, the {\\em perturbed alternating projections sequences} $\\{a_n\\}$ and $\\{b_n\\}$, w.r.t. $\\{A_n\\}$ and $\\{B_n\\}$ and with starting point $a_0$, are defined inductively by\n\t$$b_n=P_{B_n}(a_{n-1})\\ \\ \\ \\text{and}\\ \\ \\ a_n=P_{A_n}(b_n) \\ \\ \\ \\ \\ \\ \\ \\ \\ (n\\in\\N).$$ \n\\end{definition}\n\n\\noindent Our aim is to find some conditions on the limit sets $A$ and $B$ such that, for each choice of the sequences $\\{A_n\\}$ and $\\{B_n\\}$ and for each choice of the starting point $a_0$, \nthe corresponding perturbed alternating projections sequences $\\{a_n\\}$ and $\\{b_n\\}$ satisfy $\\mathrm{dist}(a_n,A\\cap B)\\to 0$ and $\\mathrm{dist}(b_n,A\\cap B)\\to 0$. If this is the case, we say that the couple $(A,B)$ is {\\em $d$-stable}.\nIn particular, we show that the regularity of the couple $(A,B)$ implies not only the norm convergence of the alternating projections sequences for the couple $(A,B)$ (as already known from \\cite{BauschkeBorwein93}), but also that the couple $(A,B)$ is $d$-stable. This result \n may also be of interest in applications, since real data are often affected by some uncertainties.
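\n\nFor the reader's convenience, we illustrate the perturbed scheme of Definition~\\ref{def:perturbedseq} with a minimal numerical sketch in Python (the sets, the perturbations and the closed-form projections below are toy choices of ours, unrelated to the results of this paper):\n\\begin{verbatim}\nimport numpy as np\n\ndef proj_ball(x, center, r):\n    # Projection onto the closed ball B(center, r).\n    d = x - center\n    nd = np.linalg.norm(d)\n    return x if nd <= r else center + (r \/ nd) * d\n\ndef proj_halfspace(x, a, b):\n    # Projection onto the halfspace {y : <a, y> <= b}.\n    s = a @ x - b\n    return x if s <= 0 else x - (s \/ (a @ a)) * a\n\n# A = {y : y_1 <= 0};  B = closed ball of radius 1.5 centered at (1, 1);\n# A_n, B_n are perturbations of A, B vanishing as n grows.\na_vec, b_lev = np.array([1.0, 0.0]), 0.0\ncenter, radius = np.array([1.0, 1.0]), 1.5\nx = np.array([5.0, -3.0])                    # starting point a_0\nfor n in range(1, 51):\n    eps = 1.0 \/ n\n    b_n = proj_ball(x, center + eps, radius)      # b_n = P_{B_n}(a_{n-1})\n    x = proj_halfspace(b_n, a_vec, b_lev + eps)   # a_n = P_{A_n}(b_n)\nprint(x)   # approaches a point of A and of B (here A and B intersect)\n\\end{verbatim}\nIn such computations the perturbed sets $A_n$ and $B_n$ typically encode noisy or truncated data.\n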
Hence stability of the convex feasibility problem with respect to data perturbations is a desirable property, also in view of computational developments. \n\nLet us conclude the introduction with a brief description of the structure of the paper. In Section \\ref{SEction notations}, we fix some notation and definitions, and we recall some well-known facts about the alternating projections method. Section~\\ref{Section regularity} is devoted to various notions of regularity and their relationships. It is worth pointing out that in this section we provide a new and alternative proof of the convergence of the alternating projections algorithm under regularity assumptions. This proof well illustrates the main geometrical idea behind the proof of our main result, Theorem~\\ref{teo: mainHilbert}, stated and proved in Section \\ref{Section Main Result}. This result shows that {\\em a regular couple $(A,B)$ is $d$-stable whenever $A\\cap B$ (or a suitable substitute if $A \\cap B=\\emptyset$) is bounded}. \nCorollaries~\\ref{Corollary:stronglyexp}, \\ref{corollary:corpilur}, and \\ref{cor:sottospazisommachiusa} simplify and generalize some of the results obtained in \\cite{DebeMigl}, since there we considered only the case where $A \\cap B\\neq\\emptyset$ whereas, in the present paper, we also encompass the situation where the intersection of $A$ and $B$ is empty. We conclude the paper with Section 5, where we discuss the necessity of the assumptions of our main result and we state a natural open problem: if $A \\cap B$ is bounded, is regularity equivalent to $d$-stability? \t\n\n\n\n\n\n\n\\section{Notation and preliminaries}\\label{SEction notations}\n\nThroughout this paper, $X$ denotes a nontrivial real normed space with\ntopological dual $X^*$. We\ndenote by $B_X$ and $S_X$ the closed unit ball and the unit sphere of $X$, respectively. \nFor $x,y\\in X$, $[x,y]$ denotes the closed segment in $X$ with\nendpoints $x$ and $y$, and $(x,y)=[x,y]\\setminus\\{x,y\\}$ is the\ncorresponding ``open'' segment. \nFor a subset $A$ of $X$, we denote by $\\inte(A)$, $\\conv(A)$ and\n$\\cconv(A)$ the interior, the convex hull and the closed convex\nhull of $A$, respectively.\nLet us recall that a body is a closed convex set in $X$ with nonempty interior.\n\n We denote by $$\\textstyle \\mathrm{diam}(A)=\\sup_{x,y\\in A}\\|x-y\\|$$\nthe (possibly infinite) diameter of $A$. For $x\\in X$, let\n$$\\dist(x,A) =\\inf_{a\\in A} \\|a-x\\|.$$ Moreover, given $A,B$\nnonempty subsets of $X$, we denote by $\\dist(A,B)$ the usual\n``distance'' between $A$ and $B$, that is,\n$$ \\dist(A,B)=\\inf_{a\\in A} \\dist(a,B).$$\n\nNow, we recall two notions of convergence for sequences of sets (for a wide overview of this topic see, e.g., \\cite{Beer}). By $\\C(X)$ we denote the family of all nonempty closed subsets of\n\t$X$.\nLet us introduce the (extended) Hausdorff metric $h$ on\n$\\C(X)$. For $A,B\\in\\C(X)$, we define the excess of $A$ over $B$\nas\n$$e(A,B) = \\sup_{a\\in A} \\mathrm{dist}(a,B).$$\n\\noindent Moreover, if $A\\neq\\emptyset$ and $B=\\emptyset$ we put\n$e(A,B)=\\infty$, and if $A=\\emptyset$ we put $e(A,B)=0$. 
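\nNumerically, the excess between two sampled sets is straightforward to approximate. The following Python fragment is an illustration only: finite point clouds stand in for $A$ and $B$, and the function computes the discrete analogue of $e(A,B)$.\n\\begin{verbatim}\nimport numpy as np\n\ndef excess(A_pts, B_pts):\n    # Discrete analogue of e(A,B) = sup_{a in A} dist(a,B), with the\n    # sets replaced by finite samples (arrays of points).\n    dists = np.linalg.norm(A_pts[:, None, :] - B_pts[None, :, :], axis=2)\n    return dists.min(axis=1).max()\n\nA_pts = np.array([[0.0, 0.0], [2.0, 0.0]])   # sample of A\nB_pts = np.array([[1.0, 0.0], [3.0, 0.0]])   # sample of B\nprint(excess(A_pts, B_pts), excess(B_pts, A_pts))   # 1.0 1.0\n\\end{verbatim}\nNote that the excess is not symmetric; the Hausdorff metric, defined next, symmetrizes it.\n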
Then, we define\n\n$$h(A,B)=\\max \\bigl\\{ e(A,B),e(B,A) \\bigr\\}.$$\n\n\\begin{definition} A sequence $\\{A_j\\}$ in $\\C(X)$ is said to\n\tHausdorff converge to $A\\in\\C(X)$ if $$\\textstyle \\lim_j h(A_j,A)\n\t= 0.$$\n\\end{definition}\n\n\n\nAs the second notion of convergence, we consider the so-called Attouch-Wets convergence (see,\ne.g., \\cite[Definition~8.2.13]{LUCC}), which can be seen as a\nlocalization of the Hausdorff convergence. If $N\\in\\N$ and\n$A,B\\in\\C(X)$, define\n\\begin{eqnarray*}\n\te_N(A,B) &=& e(A\\cap N B_X, B)\\in[0,\\infty),\\\\\n\th_N(A,B) &=& \\max\\{e_N(A,B), e_N(B,A)\\}.\n\\end{eqnarray*}\n\n\n\\begin{definition}\\label{def:AW} A sequence $\\{A_j\\}$ in $\\C(X)$ is said to\n\tAttouch-Wets converge to $A\\in\\C(X)$ if, for each $N\\in\\N$,\n\t$$\\textstyle \\lim_j h_N(A_j,A)= 0.$$\n\\end{definition}\n\nWe conclude this section by recalling some well-known results about the distance between two convex sets and the projection of a point onto a convex set.\n\nSuppose that $A$ and $B$ are closed convex nonempty subsets of $X$; we denote\n\\begin{eqnarray*}\n\tE&=&\\{a\\in A;\\, d(a,B)=d(A,B)\\},\\\\\n\tF&=&\\{b\\in B;\\, d(b,A)=d(A,B)\\}.\n\\end{eqnarray*}\n\n{If $C$ is a nonempty closed convex subset of $X$, the projection of a point $x$ onto $C$ is denoted by $P_Cx$.} \nWe say that $v=P_{\\overline{B-A}}(0)$ is the {\\em displacement vector} for the couple $(A,B)$.\nIt is clear that if $A\\cap B\\neq\\emptyset$ then $E=F=A\\cap B$ and the displacement vector for the couple $(A,B)$ is null.\nWe {recall} the following fact, where, given a map $T:X\\rightarrow X$, $\\mathrm{Fix}(T)$ denotes the set of fixed points of $T$.\n \n\n\\begin{fact}[{\\cite[Fact~1.1]{BauschkeBorwein93}}]\\label{fact: BB93} Suppose that $X$ is a Hilbert space and that $A,B$ are closed convex nonempty subsets of $X$. Then we have:\n\t\\begin{enumerate}\n\t\t\\item $\\|v\\|=d(A,B)$ and $E+v=F$;\n\t\t\\item $E=\\mathrm{Fix}(P_A P_B)=A\\cap(B-v)$ and $F=\\mathrm{Fix}(P_B P_A)=B\\cap(A+v)$;\n\t\t\\item $P_B e=P_F e=e+v$ ($e\\in E$) and $P_A f=P_E f=f-v$ ($f\\in F$).\n \t\\end{enumerate}\n\t\\end{fact} \n\n\n\\section{Notions of regularity for a couple of convex sets} \\label{Section regularity}\n{\nIn this section we introduce some notions of regularity for a couple of nonempty closed convex sets $A$ and $B$. This class of notions was originally introduced in \\cite{BauschkeBorwein93}, in order to obtain some conditions ensuring the norm convergence of the alternating projections algorithm (see also \\cite{BorweinZhu}).\nHere we list three different types of regularity: (i) and (ii) are exactly as they appeared in \\cite{BauschkeBorwein93}, whereas (iii) is new.\n}\n\n\\begin{definition}\\label{def: regularity} Let $X$ be a Hilbert space and $A,B$ closed convex nonempty subsets of $X$. Suppose that $E,F$ are nonempty. 
We say that the couple $(A,B)$ is:\\begin{enumerate}\n\t\t\\item {\\em regular} if for each $\\epsilon>0$ there exists $\\delta>0$ such that $\\mathrm{dist}(x,E)\\leq \\epsilon$, whenever $x\\in X$ satisfies\n\t\t$$\\max\\{\\mathrm{dist}(x,A),\\mathrm{dist}(x,B-v)\\}\\leq\\delta;$$\n\\item {\\em boundedly regular} if for each bounded set $S\\subset X$ and for each $\\epsilon>0$ there exists $\\delta>0$ such that $\\mathrm{dist}(x,E)\\leq \\epsilon$, whenever $x\\in S$ satisfies\n$$\\max\\{\\mathrm{dist}(x,A),\\mathrm{dist}(x,B-v)\\}\\leq\\delta;$$\n\\item {\\em linearly regular for points bounded away from $E$} if for each $\\epsilon>0$ there exists $K>0$ such that \n$$\\mathrm{dist}(x,E)\\leq K\\max\\{\\mathrm{dist}(x,A),\\mathrm{dist}(x,B-v)\\},$$\nwhenever $\\mathrm{dist}(x,E)\\geq \\epsilon$.\n\t\\end{enumerate} \n\\end{definition}\n\nThe following proposition shows that (i) and (iii) in the definition above are equivalent. The latter part of the proposition is a generalization of \\cite[Theorem~3.15]{BauschkeBorwein93}.\n\n\\begin{proposition}\\label{prop: regular-largedistances} Let $X$ be a Hilbert space and $A,B$ closed convex nonempty subsets of $X$. Suppose that $E, F$ are nonempty. Let us consider the following conditions. \\begin{enumerate}\n\t\t\\item The couple $(A,B)$ is regular.\n\t\t\\item The couple $(A,B)$ is boundedly regular.\n\t\t\\item The couple $(A,B)$\n\t\tis linearly regular for points bounded away from $E$.\n\t\\end{enumerate}\nThen $(iii)\\Leftrightarrow(i)\\Rightarrow(ii)$. Moreover, if $E$ is bounded, then $(ii)\\Rightarrow(i)$.\n\\end{proposition}\n\n\\begin{proof}\nThe implications $(iii)\\Rightarrow(i)\\Rightarrow(ii)$ are trivial. Let us prove that $(i)\\Rightarrow(iii)$. Suppose on the contrary that there exist $\\epsilon>0$ and a sequence $\\{x_n\\}\\subset X$ such that $\\mathrm{dist}(x_n,E)>\\epsilon$ ($n\\in\\N$) and \n$$\\textstyle \\frac{\\max\\{\\mathrm{dist}(x_n,A),\\mathrm{dist}(x_n,B-v)\\}}{\\mathrm{dist}(x_n,E)}\\to 0.$$\nFor each $n\\in\\N$, let $e_n\\in E$, $a_n\\in A$, and $b_n\\in B$ be such that $\\|e_n-x_n\\|=\\mathrm{dist}(x_n,E)$, $\\|a_n-x_n\\|=\\mathrm{dist}(x_n,A)$, and $\\|b_n-v-x_n\\|=\\mathrm{dist}(x_n,B-v)$. Put $\\lambda_n=\\frac{\\epsilon}{\\|e_n-x_n\\|}\\in(0,1)$ and define $z_n=\\lambda_n x_n+(1-\\lambda_n)e_n$, \n$a'_n=\\lambda_n a_n+(1-\\lambda_n)e_n\\in A$, and $b'_n=\\lambda_n b_n+(1-\\lambda_n)(e_n+v)\\in B$.\nBy our construction, it is clear that\n$$\\textstyle \\frac{\\mathrm{dist}(z_n,A)}{\\epsilon}\\leq\\frac{\\|z_n-a'_n\\|}{\\epsilon}=\\frac{\\|x_n-a_n\\|}{\\|e_n-x_n\\|}\\ \\ \\ \\text{and}\\ \\ \\ \\frac{\\mathrm{dist}(z_n,B-v)}{\\epsilon}\\leq\\frac{\\|b'_n-v-z_n\\|}{\\epsilon}=\\frac{\\|b_n-v-x_n\\|}{\\|e_n-x_n\\|}.$$\nHence, $\\mathrm{dist}(z_n,E)=\\epsilon$ and $\\max\\{\\mathrm{dist}(z_n,A),\\mathrm{dist}(z_n,B-v)\\}\\to 0$. This contradicts (i), and the implication $(i)\\Rightarrow(iii)$ is proved.\n\nNow, suppose that $E$ is bounded, and let us prove that $(ii)\\Rightarrow(i)$. Suppose on the contrary that there exist $\\epsilon>0$ and a sequence $\\{x_n\\}\\subset X$ such that $\\mathrm{dist}(x_n,E)>\\epsilon$ ($n\\in\\N$) and \n$$\\textstyle \\max\\{\\mathrm{dist}(x_n,A),\\mathrm{dist}(x_n,B-v)\\}\\to 0.$$\nFor each $n\\in\\N$, let $e_n\\in E$, $a_n\\in A$, and $b_n\\in B$ be such that $\\|e_n-x_n\\|=\\mathrm{dist}(x_n,E)$, $\\|a_n-x_n\\|=\\mathrm{dist}(x_n,A)$, and $\\|b_n-v-x_n\\|=\\mathrm{dist}(x_n,B-v)$. 
Put $\\lambda_n=\\frac{\\epsilon}{\\|e_n-x_n\\|}\\in(0,1)$ and define $z_n=\\lambda_n x_n+(1-\\lambda_n)e_n$, \n$a'_n=\\lambda_n a_n+(1-\\lambda_n)e_n\\in A$, and $b'_n=\\lambda_n b_n+(1-\\lambda_n)(e_n+v)\\in B$.\nBy our construction, it is clear that\n$$\\textstyle {\\mathrm{dist}(z_n,A)}\\leq{\\|z_n-a'_n\\|}\\leq{\\|x_n-a_n\\|}$$\nand\n$$ {\\mathrm{dist}(z_n,B-v)}\\leq{\\|b'_n-v-z_n\\|}\\leq{\\|b_n-v-x_n\\|}.$$\nHence, $\\mathrm{dist}(z_n,E)=\\epsilon$ and $\\max\\{\\mathrm{dist}(z_n,A),\\mathrm{dist}(z_n,B-v)\\}\\to 0$. Moreover, since $E$ is bounded, $\\{z_n\\}$ is a bounded sequence. This contradicts (ii) and the proof is concluded.\n\\end{proof}\n\nThe following theorem follows from \\cite[Theorem~3.7]{BauschkeBorwein93}. \n\n\\begin{theorem}\nLet $X$ be a Hilbert space and $A,B$ closed convex nonempty subsets of $X$. Suppose that the couple $(A,B)$ is regular. Then the alternating projections method converges.\n\\end{theorem}\n\nWe present an alternative proof of this theorem containing a simplified version of the argument that we will use in the proof of our main result, Theorem~\\ref{teo: mainHilbert}. For the sake of simplicity we present a proof in the case where $A\\cap B$ is nonempty. The proof in the general case is similar.\n\n\\begin{proof} \nBy \\cite[Theorem~3.3, (iv)]{BauschkeBorwein93}, it is sufficient to prove that $\\mathrm{dist}(c_n,A\\cap B)\\to 0$.\n\tLet us recall that, by the definition of the sequences $\\{c_n\\}$ and $\\{d_n\\}$, we have \n\t\\begin{enumerate}\n\t\t\\item[($\\alpha$)] $\\mathrm{dist}(c_n,A\\cap B)\\leq\\mathrm{dist}(d_n,A\\cap B)$ and $\\mathrm{dist}(d_{n+1},A\\cap B)\\leq\\mathrm{dist}(c_n,A\\cap B).$\n\t\\end{enumerate}\n\tLet $\\epsilon>0$. By the equivalence $(i)\\Leftrightarrow(iii)$ in Proposition~\\ref{prop: regular-largedistances}, there exists $K>0$ such that $$\\mathrm{dist}(x,A\\cap B)\\leq K\\max\\{\\mathrm{dist}(x,A),\\mathrm{dist}(x,B)\\},$$\n\twhenever $\\mathrm{dist}(x,A\\cap B)\\geq \\epsilon$. Observe that {$K\\geq 1$} and define $\\eta=\\sqrt{1-\\frac1{K^2}}$.\n\t{Then a computation, based on some trigonometric considerations in the plane defined by $c_n,d_n$ and $d_{n+1}$,} shows that the following condition holds for each $n\\in\\N$: \n\t\\begin{enumerate}\n\t\t\\item[($\\beta$)] if $\\mathrm{dist}(c_n,A\\cap B)\\geq\\epsilon$\t then $\\textstyle \\mathrm{dist}(c_n,A\\cap B)\\leq\\eta\\,\\mathrm{dist}(d_n,A\\cap B);$\n\t\tif $\\mathrm{dist}(d_{n+1},A\\cap B)\\geq\\epsilon$ then $\\textstyle \\mathrm{dist}(d_{n+1},A\\cap B)\\leq\\eta\\,\\mathrm{dist}(c_n,A\\cap B).$\n\t\\end{enumerate}\n\tIndeed, since $c_n=P_A d_n$, for every $p\\in A\\cap B$ we have $\\langle d_n-c_n,p-c_n\\rangle\\leq0$, whence $\\|c_n-p\\|^2\\leq\\|d_n-p\\|^2-\\|d_n-c_n\\|^2$; choosing $p=P_{A\\cap B}d_n$ and observing that, if $\\mathrm{dist}(c_n,A\\cap B)\\geq\\epsilon$, then, by ($\\alpha$), $\\mathrm{dist}(d_n,A\\cap B)\\geq\\epsilon$ and hence $\\mathrm{dist}(d_n,A)=\\|d_n-c_n\\|\\geq\\frac1K\\,\\mathrm{dist}(d_n,A\\cap B)$, we obtain the first inequality in ($\\beta$); the second one is obtained analogously.\n\t {By} taking into account ($\\alpha$), ($\\beta$), and the fact that $\\eta<1$, we have that eventually $\\mathrm{dist}(c_n,A\\cap B)\\leq\\epsilon$ and $\\mathrm{dist}(d_n,A\\cap B)\\leq\\epsilon$. The proof is concluded.\t \n\\end{proof}\n\n\\section{{Regularity and perturbed alternating projections}}\\label{Section Main Result}\n{\n\tThis section is devoted to proving our main result. Indeed, here we show that if a couple $(A,B)$ of convex closed sets is regular, then not only does the alternating projections method converge, but the couple $(A,B)$ also satisfies certain ``stability'' properties with respect to perturbed projections sequences. In the present section, unless otherwise stated, $X$ denotes a Hilbert space. If $u,v\\in X\\setminus\\{0\\}$, we denote as usual\n\t$$\\textstyle\\cos(u,v)=\\frac{\\langle u,v\\rangle}{\\|u\\|\\|v\\|},$$\n\twhere $\\langle \\cdot,\\cdot\\rangle$ denotes the inner product in $X$. 
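\n\nAs a simple numerical illustration of the contraction mechanism exploited in the previous section (and of the role played by the quantity $\\cos(u,v)$), consider two lines through the origin of $\\R^2$ meeting at an angle $\\theta$: each full cycle of alternating projections multiplies the distance to the intersection $\\{0\\}$ by $\\cos^2\\theta$. The following Python fragment (an illustrative sketch only, not needed in the sequel) verifies this behaviour.\n\\begin{verbatim}\nimport numpy as np\n\ntheta = 0.3                                    # angle between the lines\nu = np.array([1.0, 0.0])                       # unit direction of A\nw = np.array([np.cos(theta), np.sin(theta)])   # unit direction of B\nproj = lambda d, p: np.dot(p, d) * d           # projection onto span{d}\n\np = np.array([2.0, 5.0])\nfor _ in range(5):\n    q = proj(u, proj(w, p))                    # one cycle: P_A P_B\n    print(np.linalg.norm(q) \/ np.linalg.norm(p))   # tends to cos(theta)**2\n    p = q\n\\end{verbatim}\n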
\n\t\nLet us start by making precise the word ``stability'' by introducing\n} \nthe following two notions of stability for a couple $(A,B)$ of convex closed subsets of $X$.\n\n\n\n\\begin{definition}\\label{def: stability}\n\tLet $A$ and $B$ be closed convex subsets of $X$ such that $E, F$ are nonempty. We say that the couple $(A,B)$ is {\\em stable} [{\\em $d$-stable}, respectively] if for each choice of sequences $\\{A_n\\},\\{B_n\\}\\subset\\C(X)$ converging to $A$ and $B$, respectively, with respect to the Attouch-Wets convergence, and for each choice of the starting point $a_0$, the corresponding perturbed alternating projections sequences $\\{a_n\\}$ and $\\{b_n\\}$ converge in norm [satisfy \n\t$\\mathrm{dist}(a_n, E)\\to0$ and $\\mathrm{dist}(b_n, F)\\to0$, respectively].\n\\end{definition} \n\n\n\\begin{remark}\n\tWe remark that the couple $(A,B)$ is {\\em stable} if and only\t\n\tif for each choice of sequences $\\{A_n\\},\\{B_n\\}\\subset\\C(X)$ converging to $A$ and $B$, respectively, with respect to the Attouch-Wets convergence, and for each choice of the starting point $a_0$, there exists $e\\in E$ such that the perturbed alternating projections sequences $\\{a_n\\}$ and $\\{b_n\\}$ satisfy\n $a_n\\to e$ and $b_n\\to e+v$ in norm.\n\\end{remark}\n\n\\begin{proof}\n\t\tSince the ``if'' part is obvious, it suffices to prove that if $a_n\\to e$ then $e\\in E$ and $b_n\\to e+v$. It is not difficult to prove that, since $$a_{n+1}= P_{A_{n+1}}P_{B_{n+1}}a_n=P_{A}P_{B}a_n+(P_{A_{n+1}}P_{B_{n+1}}-P_{A}P_{B})a_n$$ and since $A_n\\to A,B_n\\to B$ for the Attouch-Wets convergence, we have $e=P_AP_B e$. By Fact~\\ref{fact: BB93}, (ii), we have that $e\\in E$. Similarly, it is easy to see that \n\t$$b_{n+1}=P_{B_{n+1}}a_n=P_{B}a_n+(P_{B_{n+1}}-P_{B})a_n\\to P_B e=e+v,$$\n\tand the proof is concluded.\n\\end{proof}\n\n\n\nIt is clear that if the couple $(A,B)$ is stable,\nthen it is $d$-stable. Moreover, if $E,F$ are singletons, then the converse implication also holds.\n{The following basic assumptions will be considered in the remainder of the paper.}\n\n\n\\begin{BA}\\label{ba}\n\tLet $A,B$ be closed convex non\\-empty subsets of $X$. Suppose that:\n\t\\begin{enumerate}\n\t\t\\item $E,F$ are nonempty and bounded;\n\t\t\\item $\\{A_n\\}$ and $\\{B_n\\}$ are sequences of closed convex sets such that $A_n\\rightarrow\n\t\tA$ and $B_n\\rightarrow B$ for the Attouch-Wets convergence.\n\t\\end{enumerate} \n\\end{BA}\n\n\nNow, let us prove a chain of lemmas and propositions that we shall use in the proof of our main result, Theorem~\\ref{teo: mainHilbert} below. \n\n\n\\begin{lemma}\\label{lemma:2dimensional}\n\tLet $G$ be a closed convex subset of $X$. Suppose that there exist $\\epsilon,K>0$ such that $\\epsilon B_X\\subset G\\subset K B_X$. Then, if $u,w\\in\\partial G$ and $\\cos(u,w)=\\theta>0$, we have \n\t$$\\textstyle \\|u-w\\|^2\\leq K^2(\\frac{K^2}{\\epsilon^2}+1)\\frac{1-\\theta^2}{\\theta^2}.$$\n\\end{lemma}\n\n\\begin{proof}\n\tWithout any loss of generality we can suppose that $X=\\R^2$ and $u=(\\|u\\|,0)$. Let us denote $w=(x,y)$, with $x,y\\in\\R$, and suppose that $u,w\\in\\partial G$. \n\t\n\tWe claim that \n\t$\\textstyle |y|\\geq|\\frac{\\epsilon}{\\|u\\|}(x-\\|u\\|)|.$\n\tIndeed, since $\\epsilon B_X\\subset G$, an easy convexity argument shows that\n\tif $u\\in \\partial G$ and $w$ were such that $|y|<|\\frac{\\epsilon}{\\|u\\|}(x-\\|u\\|)|$, then we would have $w\\notin \\partial G $. \n\t\n\t \n{Now,} suppose that $\\cos(u,w)=\\frac{x}{\\|w\\|}=\\theta>0$. 
We have $y^2=\\frac{1-\\theta^2}{\\theta^2}x^2$ and hence, by our claim,\n$$\\textstyle (x-\\|u\\|)^2\\leq\\frac{1-\\theta^2}{\\theta^2}\\frac{\\|u\\|^2}{\\epsilon^2} x^2.$$\nHence,\n$$\\textstyle \\|u-w\\|^2=(x-\\|u\\|)^2+y^2\\leq x^2(\\frac{\\|u\\|^2}{\\epsilon^2}+1)\\frac{1-\\theta^2}{\\theta^2}\\leq K^2(\\frac{K^2}{\\epsilon^2}+1)\\frac{1-\\theta^2}{\\theta^2}.$$\n\\end{proof}\n\n\n\n\\begin{proposition}\\label{prop:eventuallycos<1}\n Let Basic assumptions~\\ref{ba} be satisfied and, for each $n\\in\\N$, let $a_n\\in A_n$ and $b_n\\in B_n$. Suppose that the couple $(A,B)$ is regular. \nLet $\\epsilon>0$. Then there exist $\\eta\\in(0,1)$ and $n_1\\in\\N$ such that for each $n\\geq n_1$ we have:\n\\begin{enumerate}\n\t\\item if $\\mathrm{dist}(a_n,E)\\geq2\\epsilon$\tand $\\mathrm{dist}(b_n,F)\\geq2\\epsilon$ then $\\cos\\bigl(a_n-e,b_n-(e+v)\\bigr)\\leq\\eta,$ whenever $e\\in E+\\epsilon B_X$.\n\\item if $\\mathrm{dist}(a_n,E)\\geq2\\epsilon$\tand $\\mathrm{dist}(b_{n+1},F)\\geq2\\epsilon$ then $\\cos\\bigl(b_{n+1}-f,a_n+v-f\\bigr)\\leq\\eta,$ whenever $f\\in F+\\epsilon B_X$.\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof} Let us prove that there exists $\\eta\\in(0,1)$ such that eventually (i) holds; the proof that there exists $\\eta\\in(0,1)$ such that eventually (ii) holds is similar. \n\tSuppose that this is not the case. Then there exist sequences $\\{e_k\\}\\subset E+\\epsilon B_X$, $ \\{\\theta_k\\}\\subset (0,1)$ and an increasing sequence of integers $\\{n_k\\}$ such that $\\mathrm{dist}(a_{n_k},E)\\geq2\\epsilon$, $\\mathrm{dist}(b_{n_k},F)\\geq2\\epsilon$, and\n\t$$\\cos\\bigl(a_{n_k}-e_k,b_{n_k}-(e_k+v)\\bigr)=\\theta_k\\to 1.$$\n\tLet $G=E+2\\epsilon B_X$ and observe that $G$ is a bounded body in $X$. Since $e_k\\in\\inte G$ and $a_{n_k}\\not\\in\\inte G$, there exists a unique point $a'_k\\in[e_k,a_{n_k}]\\cap\\partial G$. Similarly, there exists a unique point $b'_k\\in[e_k,b_{n_k}-v]\\cap\\partial G$.\n\tMoreover, it is clear that $$\\cos\\bigl(a'_{k}-e_k,b'_{k}-e_k\\bigr)=\\theta_k.$$\n\tLemma~\\ref{lemma:2dimensional} implies that $\\|a'_{k}-b'_{k}\\|\\to0$. \n\t Since $G$ is bounded and $A_{n_k}\\to A,\\,B_{n_k}\\to B$ for the Attouch-Wets convergence, there exist sequences $\\{a''_k\\}\\subset A$ and $\\{b''_k\\}\\subset B-v$ such that $\\|a''_k-a'_k\\|\\to 0$ and $\\|b''_k-b'_k\\|\\to 0$. Hence, $\\|a''_{k}-b''_{k}\\|\\to0$ and eventually $\\mathrm{dist}(a''_k, E)\\geq \\epsilon$, a contradiction since the couple $(A,B)$ is regular.\n\\end{proof}\n\n\\begin{lemma}\\label{prop: eventuallycos0}\n\t Let Basic assumptions~\\ref{ba} be satisfied, suppose that the couple $(A,B)$ is regular, and let $\\delta,\\epsilon>0$. For each $n\\in\\N$, let $a_n,x_n\\in A_n$ and $b_n,y_n\\in B_n$ be such that $\\mathrm{dist}(x_n, E)\\to 0$ and $\\mathrm{dist}(y_n, F)\\to 0$. \n\tThen there exists $n_2\\in\\N$ such that for each $n\\geq n_2$ we have:\n\t\\begin{enumerate}\n\t\t\\item if $\\mathrm{dist}(a_n,E)\\geq2\\epsilon$, $\\mathrm{dist}(b_n,F)\\geq2\\epsilon$, and $a_n=P_{A_n}b_n$ then \n$$\\cos\\bigl(x_n-a_n,b_n-(a_n+v)\\bigr)\\leq\\delta;$$\t\n\t\t\\item if $\\mathrm{dist}(a_n,E)\\geq2\\epsilon$, $\\mathrm{dist}(b_{n+1},F)\\geq2\\epsilon$, and $b_{n+1}=P_{B_{n+1}}a_n$ then \n$$\\cos\\bigl(y_{n+1}-b_{n+1},a_n+v-b_{n+1}\\bigr)\\leq\\delta.$$\t\n\n\\end{enumerate}\n\t\\end{lemma}\n\n\\begin{proof} Let us prove that eventually (i) holds; the proof that eventually (ii) holds is similar. 
{Since $\\mathrm{dist}(a_n,E)\\geq 2 \\varepsilon$ and $\\mathrm{dist}(x_n,E)\\rightarrow 0$, we have that eventually $x_n-a_n\\neq0$}. By Proposition~\\ref{prop:eventuallycos<1}, there exist $\\eta\\in(0,1)$ and $n_1\\in\\N$ such that\n\t$$\\cos\\bigl(a_n-e,b_n-(e+v)\\bigr)\\leq\\eta,$$\n\twhenever $n\\geq n_1$ and $e\\in E+\\epsilon B_X$. Since $\\mathrm{dist}(a_n,E)\\geq2\\epsilon$\tand $\\mathrm{dist}(b_n,F)\\geq2\\epsilon$, it is not difficult to see that there exists a constant $\\eta'>0$ such that $\\|b_n-(a_n+v)\\|\\geq\\eta'$, whenever $n\\geq n_1$. In particular, eventually $\\cos\\bigl(x_n-a_n,b_n-(a_n+v)\\bigr)$ is well-defined. \n\t If $v=0$, the thesis is trivial since \n\t$$\\langle x_n-a_n,b_n-a_n\\rangle\\leq0,$$\n\twhenever $n\\in\\N$. \n\t\n\tSuppose that $v\\neq 0$. \n\t\tWe claim that eventually we have $$\\textstyle \\langle v,a_n-x_n\\rangle\\leq \\delta\\eta' \\|a_n-x_n\\|,$$\n\twhere $v$ is the displacement vector for the couple $(A,B)$.\n\tTo prove our claim, observe that, since $\\mathrm{dist}(x_n, E)\\to 0$, we can suppose without any loss of generality that $\\mathrm{dist}(x_n, E)\\leq\\epsilon$ ($n\\in\\N$). Moreover, we can consider a sequence $\\{x'_n\\}\\subset E$ such that $\\|x'_n-x_n\\|\\to0$. Let $G=E+2\\epsilon B_X$ and observe that $G$ is a bounded body in $X$. Since $x_n\\in\\inte G$ and $a_{n}\\not\\in\\inte G$, there exists a unique point $a'_n\\in[x_n,a_{n}]\\cap\\partial G$. \n\tSince $G$ is bounded and $A_{n}\\to A$ for the Attouch-Wets convergence, there exists a sequence $\\{a''_n\\}\\subset A$ such that $\\|a''_n-a'_n\\|\\to 0$. Since $\\{x'_n\\}\\subset E$, it is clear that \n\t$\\langle v, a''_n-x'_n\\rangle\\leq 0$ and hence eventually\n\t$$\\langle v, a''_n-x'_n\\rangle-\\delta\\eta'\\|a''_n-x'_n\\|\\leq -\\delta\\eta'\\epsilon.$$\t\n\tSince $\\|x'_n-x_n\\|\\to0$ and $\\|a''_n-a'_n\\|\\to0$, eventually we have\n\t$$\\langle v, a'_n-x_n\\rangle-\\delta\\eta'\\|a'_n-x_n\\|\\leq0.$$\n\tBy homogeneity and by our construction the claim is proved.\n\t\n\tNow, by our claim, since $a_n=P_{A_n}b_n$ and $x_n\\in A_n$ ($n\\in\\N$), we have \n\t $$\\textstyle \\langle x_n-a_n,b_n-(a_n+v)\\rangle=\\langle x_n-a_n,b_n-a_n\\rangle+\\langle a_n-x_n,v\\rangle\\leq \\delta\\eta' \\|a_n-x_n\\|.$$\n\tEventually, since $\\|b_n-(a_n+v)\\|\\geq\\eta'$, we have \n\t \t$$\\cos\\bigl(x_n-a_n,b_n-(a_n+v)\\bigr)\\leq \\frac{\\delta\\eta'}{\\|b_n-(a_n+v)\\|}\\leq \\delta.$$\n\t\n\\end{proof}\n{\n\\noindent Now, we need a simple geometrical result whose proof is a straightforward application of the law of cosines combined with the triangle inequality. The details of the proof are left to the reader.} \n\n\\begin{fact}\\label{fact: quasiortho}\nLet $\\eta,\\eta'\\in(0,1)$ be such that $\\eta<\\eta'$. If $\\delta\\in(0,1)$ satisfies $\\frac{\\delta+\\eta}{1-\\delta}\\leq \\eta'$ and if $x,y\\in X$ are linearly independent vectors such that $\\cos(x,y)\\leq \\eta$ and $\\cos(y-x,-x)\\leq \\delta$ then $\\|x\\|\\leq \\eta'\\|y\\|$. \n\\end{fact}\n\nLet us recall that, given a normed space $Z$, the {\\em modulus of convexity of $Z$} is the function $\\delta_Z:[0,2]\\to [0,1]$ defined by \n$$\n\\delta_Z(\\eta)=\\inf \\left\\lbrace 1-\\left\\| \\dfrac{x+y}{2} \\right\\|:x,y \\in B_Z, \\|x-y\\|\\geq \\eta\\right\\rbrace. \n$$\nMoreover, we say that $Z$ is {\\em uniformly rotund} if $\\delta_Z(\\eta)>0$, whenever $\\eta \\in (0,2] $. For instance, every Hilbert space is uniformly rotund: by the parallelogram identity, its modulus of convexity is $\\delta(\\eta)=1-\\sqrt{1-\\frac{\\eta^2}{4}}$.\n\n\\begin{lemma}\\label{lemma: unifrotund} \n\tLet $Z$ be a uniformly rotund {normed} space. 
For each $\\rho\\geq 0$ and $M>0$ there exists $\\epsilon'>0$ such that if $C$ is a convex set such that $\\rho-\\epsilon'\\leq\\|c\\|\\leq\\rho+\\epsilon'$, whenever $c\\in C$, then $\\mathrm{diam}(C)\\leq M$.\n\\end{lemma}\n\n\\begin{proof}\nIn the case $\\rho= 0$ the proof is trivial (any $\\epsilon'\\leq M\/2$ works, since then $\\mathrm{diam}(C)\\leq2\\epsilon'\\leq M$), so we can suppose $\\rho>0$. We claim that if we take $\\epsilon'>0$ such that $\\epsilon'\\left(2-\\delta_Z(M) \\right) < \\rho \\delta_Z(M)$ the thesis follows. To see this, suppose that $C$ is a convex set such that $\\rho-\\epsilon'\\leq\\|c\\|\\leq\\rho+\\epsilon'$, whenever $c\\in C$, and suppose on the contrary that there exist $c_1,c_2 \\in C$ such that $\\|c_1-c_2\\|> M$. By the definition of $\\delta_Z$ and since $\\frac{c_1+c_2}{2} \\in C$, we have \n\t$$\n\\rho - \\epsilon'\\leq\\left\\| \\dfrac{c_1+c_2}{2}\\right\\| \\leq \\left( \\rho + \\epsilon'\\right) \\left( 1- \\delta_Z(M)\\right). \n$$\n Therefore, we have $\\epsilon'\\left(2-\\delta_Z(M) \\right) \\geq \\rho \\delta_Z(M)$, a contradiction. \n\\end{proof}\n\n{Since it is well known that a Hilbert space is a uniformly rotund space, the previous lemma allows us to prove the following proposition. }\n\n\\begin{proposition}\\label{prop: normprojectionsmall} Let Basic assumptions~\\ref{ba} be satisfied. For each $M>0$ there exist $\\theta\\in(0,M)$ and $n_0\\in\\N$ such that if $n\\geq n_0$ we have:\n\t\\begin{enumerate}\n\t\t\\item if $b_n\\in B_n$, $a_n=P_{A_n}b_n$, and $\\mathrm{dist}(b_n, F)\\leq \\theta$ then $$\\mathrm{dist}(a_n, E)\\leq 2M;$$\n\t\t\\item if $a_n\\in A_n$, $b_{n+1}=P_{B_{n+1}}a_n$, and $\\mathrm{dist}(a_n, E)\\leq \\theta$ then $$\\mathrm{dist}(b_{n+1}, F)\\leq 2M.$$\n\\end{enumerate} \n\\end{proposition}\t\n\n\n\\begin{proof} Let $M>0$ and $\\rho=\\|v\\|$, {where $v$ is the displacement vector}. \n Let $\\epsilon'\\in(0,3M)$ be given by Lemma~\\ref{lemma: unifrotund}. Put $\\theta=\\epsilon'\\/3$. Since Basic assumptions~\\ref{ba} are satisfied, there exists $n_0\\in \\N$ such that if $n\\geq n_0$ we have:\n\t\\begin{enumerate}\n\t\t\\item[(a)] if $w\\in A_n$ then $\\mathrm{dist}(w,F)\\geq \\rho-3\\theta$;\n\t\t\\item[(b)] if $e\\in E$, there exists $x\\in A_n$ such that $\\|e-x\\|\\leq\\theta$.\n\t\\end{enumerate} \nNow, let $n\\geq n_0$, $b_n\\in B_n$, $a_n=P_{A_n}b_n$, and $\\mathrm{dist}(b_n, F)\\leq\\theta$. Let $f_n\\in F$ be such that $\\|f_n-b_n\\|\\leq\\theta$ and put $e_n=f_n-v\\in E$. By (b), there exists $x_n\\in A_n$ such that $\\|x_n-e_n\\|\\leq \\theta$. Hence, since $a_n=P_{A_n}b_n$ \nand $\\|e_n-f_n\\|=\\rho$, we have\n\\begin{eqnarray*}\n \\|a_n-f_n\\|&\\leq& \\|a_n-b_n\\|+\\|f_n-b_n\\|\\\\\n&\\leq& \\|x_n-b_n\\|+\\|f_n-b_n\\|\\\\\n&\\leq& \\|x_n-e_n\\|+ \\rho +2\\|f_n-b_n\\|\\leq\\rho+3\\theta.\n\\end{eqnarray*} \n\tLet us consider the convex set $C=[x_n-f_n,a_n-f_n]$. Observe that, since $$\\|x_n-f_n\\|\\leq\\|e_n-x_n\\|+\\|e_n-f_n\\|\\leq\\rho+\\theta,$$ we have that \n\t$\\|c\\|\\leq\\rho+3\\theta$, whenever $c\\in C$.\n\tMoreover, since $C+f_n=[x_n,a_n]\\subset A_n$ and $f_n\\in F$, by (a) we have $\\|c\\|\\geq\\rho-3\\theta$, whenever $c\\in C$. Hence, we\n\t can apply Lemma~\\ref{lemma: unifrotund} to the set $C$ and we have $\\|a_n-x_n\\|=\\mathrm{diam}(C)\\leq M$. Then\n\t$$\\mathrm{dist}(a_n, E)\\leq \\|a_n-e_n\\|\\leq \\|a_n-x_n\\|+\\|e_n-x_n\\|\\leq M+\\theta\\leq 2M.$$\n\t\tThe proof that eventually (ii) holds is similar. 
\n\\end{proof}\n\n{\nWe are now ready to state and prove the main result of this paper.}\n\n\n\\begin{theorem}\\label{teo: mainHilbert}\n\t Let $A,B$ be closed convex nonempty subsets of $X$ such that $E$ and $F$ are nonempty and bounded. If the couple $(A,B)$ is regular, then the couple $(A,B)$ is $d$-stable.\n\t\n\\end{theorem}\n\n\\begin{proof}\nLet $a_0\\in X$ and let $\\{a_n\\}$ and $\\{b_n\\}$ {be} the corresponding perturbed alternating projections sequences, { i.e.,\n$$a_n=P_{A_n}(b_n) \\quad \\text{and} \\quad b_n=P_{B_n}(a_{n-1}).$$ \nFirst of all, we remark that it is enough to prove that $\\mathrm{dist}(a_n, E)\\to 0$ since the proof that $\\mathrm{dist}(b_n, F)\\to 0$ follows by the symmetry of the problem. \nTherefore our aim is to prove that for each $M>0$, eventually we have $$\\mathrm{dist}(a_n, E)\\leq M.$$}\n \n\n \nBy applying Proposition~\\ref{prop: normprojectionsmall} twice, and then combining Proposition~\\ref{prop:eventuallycos<1}, Lemma~\\ref{prop: eventuallycos0}, and Fact~\\ref{fact: quasiortho}, one obtains $\\epsilon\\in(0,M)$ and $n_4\\in\\N$ such that $\\mathrm{dist}(a_n, E)\\leq M$ whenever $n>n_4$.\n\\end{proof}\n\n\n{If the intersection of $A$ and $B$ is nonempty, we obtain, as an immediate consequence of Theorem \\ref{teo: mainHilbert}, the following result.}\n\\begin{corollary} \n\t Let $A,B$ be closed convex nonempty subsets of $X$ such that $A\\cap B$ is bounded and nonempty. If the couple $(A,B)$ is regular, then the perturbed alternating projections sequences $\\{a_n\\}$ and $\\{b_n\\}$ satisfy $\\mathrm{dist}(a_n,A\\cap B)\\to 0$ and $\\mathrm{dist}(b_n,A\\cap B)\\to 0$.\n\t\\end{corollary}\n\n{We conclude this section by highlighting some relationships between the results of \\cite{DebeMigl} and Theorem \\ref{teo: mainHilbert}.}\n\t{First of all, we briefly recall some notions.}\t\n{\t\\begin{definition}[{see, e.g., \\cite[Definition~7.10]{FHHMZ}}]\\label{def:strexp} Let $A$ be a nonempty subset of a normed space $Z$. A point $a\\in A$\n\t\tis called a strongly exposed point of $A$ if there exists a\n\t\tsupport functional $f\\in Z^*\\setminus\\{0\\}$ for $A$ at $a$ $\\bigl($i.e.,\n\t\t$f (a) = \\sup f(A)$$\\bigr)$, such that $x_n\\to a$ for all sequences\n\t\t$\\{x_n\\}$ in $A$ such that $\\lim_n f(x_n) = \\sup f(A)$. In this\n\t\tcase, we say that $f$ strongly exposes $A$ at $a$.\n\t\\end{definition}}\n\n\n\n\t\\begin{definition}[{see, e.g., \\cite[Definition~1.3]{KVZ}}]\n\t\tLet $A$ be a body in a normed space $Z$. We say that $x\\in\\partial A$ is an\n\t\t{\\em LUR (locally uniformly rotund) point} of $A$ if for each\n\t\t$\\epsilon>0$ there exists $\\delta>0$ such that if $y\\in A$\n\t\tand $\\dist(\\partial A,(x+y)\/2)<\\delta$ then $\\|x-y\\|<\\epsilon$. \n\t\\end{definition}\n\tWe say that $A$ is an {\\em LUR body} if each point in\n\t$\\partial A$ is an LUR point of $A$. The following lemma shows that each LUR point is a strongly exposed point. \t\n\t\\begin{lemma}[{\\cite[Lemma~4.3]{DebeMiglMol}} ]\\label{slicelimitatoselur} Let $A$ be a body in a normed space $Z$\n\t\tand suppose that $a\\in\\partial A$ is an LUR point of $A$. Then, if\n\t\t$f\\in S_{Z^*}$ is a support functional for $A$ at $a$, $f$\n\t\tstrongly exposes $A$ at $a$.\n\t\\end{lemma} \n\nFirst, we show that a more general variant of the assumptions of one of the main results in \\cite{DebeMigl}, namely \\cite[Theorem~3.3]{DebeMigl}, implies that the couple $(A,B)$ is regular. It is interesting to remark that here we consider also the case in which $A$ and $B$ do not intersect.\n\t\t \t\n\t\\begin{proposition} \\label{prop:stronglyexpregular}\n\t\tLet $A,B$ be nonempty closed convex subsets of $X$. 
Let us suppose that there exist $e \\in A \\cap (B-v)$ and a continuous linear functional $x^*\\in S_{X^*}$ such that\n\t\t$$ \\inf x^*(B-v)=x^*(e)=\\sup x^*(A)$$\n\t\tand such that $x^*$ strongly exposes $A$ at $e$. Then the couple $(A,B)$ is regular. \n\t\\end{proposition}\n\\begin{proof} {There is no loss of generality in assuming $e=0$.}\n\tIt is a simple matter to see that $E=\\{0\\}$. Now, suppose on the contrary that $(A,B)$ is not regular. Therefore there exist sequences $\\{x_n\\}\\subset X$, $\\{a_n\\}\\subset A$, $\\{b_n\\}\\subset B$, and a real number $\\bar{\\varepsilon}>0$ such that \n\t\\begin{equation} \\label{dist e}\n\t\t\\mathrm{dist}(x_n,E)=\\|x_n\\|>\\bar{\\varepsilon}, \n\t\\end{equation}\n\t\tand such that\n\\begin{equation}\\label{eq: notregular}\n\\mathrm{dist}(x_n,A)=\\|x_n -a_n\\|\\rightarrow 0, \\quad \\mathrm{dist}(x_n,B-v)=\\|x_n -b_n+v\\|\\rightarrow 0.\n\\end{equation}\t\n\t\n\t\t By (\\ref{eq: notregular}) and since $ {\\inf x^*(B-v)}=0=\\sup x^*(A)$, we have $\\lim_n x^*(x_n)=0$ and hence $\\lim_n x^*(a_n)=0$. Since $x^*$ strongly exposes $A$ at $e$, the last equality implies that $\\|a_n\\|\\rightarrow 0$. We conclude that $\\|x_n\\|\\rightarrow 0$, contrary to (\\ref{dist e}).\n\\end{proof}\n\nBy combining the previous proposition and Theorem~\\ref{teo: mainHilbert}, we obtain the following corollary generalizing \\cite[Theorem~3.3]{DebeMigl}.\n\n\\begin{corollary}\\label{Corollary:stronglyexp}\n\tLet $A,B$ be\n\tnonempty closed convex subsets of $X$. \n\tLet us suppose that there exist $e \\in A \\cap (B-v)$ and a continuous linear functional $x^*\\in S_{X^*}$ such that\n\t$$ \\inf x^*(B-v)=x^*(e)=\\sup x^*(A)$$\n\tand such that $x^*$ strongly exposes $A$ at $e$.\n\tThen, \n\tthe couple $(A,B)$ is stable.\n\\end{corollary} \n\n\n\n Moreover, in \\cite{DebeMigl}, the authors proved the following sufficient condition for the stability of a couple $(A,B)$.\n\n\\begin{theorem}[{\\cite[Theorem~4.2]{DebeMigl}}]\\label{theorem:corpilur} Let $X$ be a Hilbert space and $A,B$\n\tnonempty closed convex subsets of $X$. \n\tIf $\\inte(A\\cap B)\\neq\\emptyset$, then the couple $(A,B)$ is stable.\n\\end{theorem}\n\nBy combining Corollary~\\ref{Corollary:stronglyexp} and Theorem~\\ref{theorem:corpilur}, we obtain the following sufficient condition for the stability of the couple $(A,B)$ generalizing \\cite[Corollary~4.3, (ii)]{DebeMigl}.\n\t\n\\begin{corollary}\\label{corollary:corpilur} Let $X$ be a Hilbert space and suppose that \t$A,B$ are bodies in $X$ and that $A$ is LUR. Then the couple $(A,B)$ is stable.\n\\end{corollary}\n\n\\begin{proof}\n\tIf $\\inte(A\\cap B)\\neq\\emptyset$, {the thesis follows by applying Theorem~\\ref{theorem:corpilur}.} If $\\inte(A\\cap B)=\\emptyset$,\n\tsince $A$ and $B$ are bodies, we have $\\mathrm{int}(A)\\cap B=\\emptyset$. Since $A$ is LUR, the intersection $A \\cap (B-v)$ reduces to a singleton $\\{e\\}$. By the Hahn-Banach theorem, there exists a continuous linear functional $x^*\\in S_{X^*}$ such that $$ \\inf x^*(B-v)=x^*(e)=\\sup x^*(A).$$ Since $A$ is an LUR body, by Lemma \\ref{slicelimitatoselur}, we have that $x^*$ strongly exposes $A$ at $e$. We are now in a position to apply Corollary \\ref{Corollary:stronglyexp} and conclude the proof.\n\t\\end{proof}\n\n\n\t\n\tFinally, we show that \\cite[Theorem~5.2]{DebeMigl} follows from Theorem~\\ref{teo: mainHilbert}.\n\\begin{corollary}\\label{cor:sottospazisommachiusa}\n\tLet $U,V$ be closed subspaces of $X$ such that $U\\cap V=\\{0\\}$ and $U+V$ is closed. 
Then the couple $(U,V)$ is stable.\n\\end{corollary} \n\\begin{proof}\n\tSince $U+V$ is closed, by \\cite[Corollary~4.5]{BauschkeBorwein93}, the couple $(U,V)$ is regular. By Theorem~\\ref{teo: mainHilbert}, the couple $(U,V)$ is $d$-stable. Since $U\\cap V$ is a singleton, the couple $(U,V)$ is stable. \n\\end{proof}\n \n\n\n\n\\section{Final remarks, examples, and an open problem}\n\n\n\nKnown examples show that the regularity hypothesis on the couple $(A,B)$ in Theorem~\\ref{teo: mainHilbert} cannot simply be dropped. To see this, it is indeed sufficient to consider any couple $(A,B)$ of sets such that $A\\cap B$ is a singleton and such that, for a suitable starting point, the method of alternating projections does not converge (see \\cite{Hundal} for such a couple of sets). \n\nA natural question is whether, in the same theorem, the hypothesis about regularity of the couple $(A,B)$ can be replaced by the weaker hypothesis that ``for any starting point the method of alternating projections converges''. The answer to the previous question is negative; indeed, in \\cite[Theorem~5.7]{DebeMigl}, the authors provided an example of a couple $(A,B)$ of closed subspaces of a Hilbert space such that $A\\cap B=\\{0\\}$ and such that the couple $(A,B)$ is not stable (and hence not $d$-stable, since $A\\cap B$ is a singleton). It is interesting to observe that, by the classical von Neumann result \\cite{vonNeumann}, the method of alternating projections converges for this couple of sets.\n\nThe next example shows that, if we consider closed convex sets $A,B\\subset X$ such that $A\\cap B$ is nonempty and bounded, the regularity of the couple $(A,B)$ does not imply in general that $(A,B)$ is stable. In particular, we cannot replace $d$-stability with stability in the statement of Theorem~\\ref{teo: mainHilbert}.\n\n\\begin{example}[{\\cite[Example~4.4]{DebeMigl}}] \\label{ex: notconverge}\n\t\tLet $X=\\R^2$ and let us consider the following compact convex subsets of $X$:\n\t\t\\begin{eqnarray*}\n\t\t\tA&=&\\textstyle \\conv\\{(1,1),(-1,1),(1,0),(-1,0)\\};\\\\ \n\t\t\t\t\tB&=&\\textstyle\\conv\\{(1,-1),(-1,-1),(1,0),(-1,0)\\}.\n\t\t\\end{eqnarray*} \n\tThen the couple $(A,B)$ is regular (see \\cite[Theorem~3.9]{BauschkeBorwein93}) but not stable (see \\cite[Example~4.4]{DebeMigl}). \n\\end{example} \n\n{Now, the following example shows that, even in finite dimension, the hypothesis concerning the boundedness of the sets $E,F$ cannot be dropped in the statement of Theorem~\\ref{teo: mainHilbert}. } \n\n\\begin{example}\\label{ex: EnotBounded}\nLet $A,B$ be the subsets of $\\R^3$ defined by\n$$A=\\{(x,y,z)\\in\\R^3;\\, z=0, y\\geq 0\\},\\ \\ \\ B=\\{(x,y,z)\\in\\R^3;\\, z=0\\}.$$\nThen the following conditions hold:\n\\begin{enumerate}\n\t\\item[(a)] $A\\cap B$ coincides with $A$ (and hence $(A,B)$ is regular); \n\\item[(b)] $A\\cap B$ is not bounded;\n\\item[(c)] the couple $(A,B)$ is not $d$-stable.\n\\end{enumerate}\n\\end{example}\n\n\nThe proof of (a) and (b) is trivial. To prove (c), we need the following lemma, whose elementary proof is left to the reader.\n\n\\begin{lemma}\\label{lemma: example fin-dim} Let $A,B$ be defined as in Example~\\ref{ex: EnotBounded}. 
For each $n\\in\\N$ and $x_0\\geq1$, let $P_{n,x_0}^1,P_{n,x_0}^2,P_{n,x_0}^3\\in\\R^3$ be defined by \n$$\\textstyle\nP_{n,x_0}^1=(x_0+n x_0,-1,0),\\ \\ \nP_{n,x_0}^2=(x_0+n x_0+\\frac{1}{n x_0},0,0),\\ \\ \nP_{n,x_0}^3=(0,\\frac1n,\\frac1n).\n$$\t\nLet $t_{n,x_0}$ be the line in $\\R^3$ containing the points $P_{n,x_0}^1$ and $P_{n,x_0}^3$, and let $r_{n,x_0}$ be the ray in $\\R^3$ with initial point $P_{n,x_0}^1$ and containing the point $P_{n,x_0}^2$. Let $A_{n,x_0},B_{n,x_0}$ be the closed convex subsets of $\\R^3$ defined by\n\t$$A_{n,x_0}=\\conv(t_{n,x_0}\\cup r_{n,x_0}),\\ \\ \\ B_{n,x_0}=\\{(x,y,z)\\in\\R^3;\\, z=0\\}.$$\nThen the following conditions hold.\n\\begin{enumerate}\n\t\\item \n\tFor each $N\\in\\N$,\n\t$\\textstyle \\lim_n h_N(A_{n,x_0},A)= 0$, uniformly with respect to $x_0\\geq1$;\n\t\\item For each $n\\in\\N$ and $x_0\\geq 1$, the alternating projections sequences, relative to the sets $A_{n,x_0},B_{n,x_0}$ and starting point $(x_0,0,0)$, converge to $P^1_{n,x_0}$. \n\\end{enumerate}\t\n\t\n\\end{lemma}\n\n\\begin{proof}[Sketch of the proof of Example~\\ref{ex: EnotBounded}, (c)] \nFix the starting point $a_0=(1,0,0)$ and let $A_{1,1},B_{1,1}$ be defined by Lemma~\\ref{lemma: example fin-dim}. Observe that, if we consider the points $a^1_k=(P_{A_{1,1}} P_{B_{1,1}})^k a_0$ ($k\\in\\N$), by Lemma~\\ref{lemma: example fin-dim}, (ii), there exists $N_1\\in\\N$ such that $\\mathrm{dist}(a^1_{N_1}, A\\cap B)\\geq\\frac12$.\nDefine $A_n=A_{1,1}$ and $B_n=B_{1,1}=B$, whenever $1\\leq n\\leq N_1$. Then define $A_{N_1+1}=A$ and $B_{N_1+1}=B$, and observe that $a^2_0:=P_A P_B a^1_{N_1}=(x_1,0,0)$ for some $x_1\\geq1$. Similarly, if we consider the points $a^2_{k}=(P_{A_{2,x_1}} P_{B_{2,x_1}})^k a^2_{0}$ ($k\\in\\N$), then there exists $N_2\\in\\N$ such that $\\mathrm{dist}(a^2_{N_2}, A\\cap B)\\geq\\frac12$. \nDefine $A_n=A_{2,x_1},B_n=B_{2,x_1}=B$, whenever $N_1+1< n\\leq N_2$. Then define $A_{N_2+1}=A,B_{N_2+1}=B$, and observe that $a^3_{0}:=P_A P_B a_{N_2}^{2}=(x_2,0,0)$ for some $x_2\\geq1$.\n\n Then, proceeding inductively, we can construct sequences $\\{A_n\\}$ and $\\{B_n\\}$ such that, by Lemma~\\ref{lemma: example fin-dim}, (i), $A_n\\to A$ and $B_n\\to B$ for the Attouch-Wets convergence.\n Moreover, by our construction, \n it is easy to see that the corresponding perturbed alternating projections sequences $\\{a_n\\}$ and $\\{b_n\\}$, with starting point $a_0$, are such that \n $$\\textstyle \\limsup_n \\mathrm{dist}(a_n,A\\cap B)\\geq \\frac12.$$\n This proves that the couple $(A,B)$ is not $d$-stable.\n\\end{proof}\n\n\n\n\nFinally, we conclude with an open problem asking whether the converse of Theorem~\\ref{teo: mainHilbert} holds true.\n\n\\begin{problem} \nLet $A,B$ be closed convex nonempty subsets of $X$ such that $E$ and $F$ are nonempty and bounded. Suppose that the couple $(A,B)$ is $d$-stable. Is the couple $(A,B)$ regular?\t\n\\end{problem}\n\n\n\n\\section*{Acknowledgements.}\nThe research of the authors is partially\nsupported by GNAMPA-INdAM, Progetto GNAMPA 2020. {The second author is also partially supported by the Ministerio de Ciencia, Innovaci\u00f3n y Universidades (MCIU), Agencia Estatal de Investigaci\u00f3n (AEI) (Spain) and Fondo Europeo de Desarrollo Regional (FEDER) under project PGC2018-096899-B-I00 (MCIU\/AEI\/FEDER, UE).}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}