\\section{Introduction}\n\nQCD and electroweak radiative corrections to high energy scattering amplitudes were computed recently using effective field theory methods~\\cite{Chiu:2009ft,Chiu:2009mg,Chiu:2009yz,Chiu:2009yx,Chiu:2008vv,Chiu:2007dg,Chiu:2007yn}, by extending SCET~\\cite{BFL,SCET1,SCET2,BPS} to broken gauge theories with massive gauge bosons. The radiative corrections (including the purely electroweak ones) are large because of Sudakov double-logarithms; for example, the electroweak corrections to transverse $W$ pair production are 37\\% at 2~TeV. The computation of radiative corrections is divided into a matching computation from the standard model onto SCET at a high scale $Q$ of order the center-of-mass energy $\\sqrt{s}$, and the computation of the scattering amplitude in the effective theory. Logarithms of the form $\\log^2 (Q^2\/M_Z^2)$, including the Sudakov double logarithms, are summed using renormalization group evolution in the effective theory. The high-scale matching coefficients for vector boson and Higgs production were included in the numerical results of Refs.~\\cite{Chiu:2009ft,Chiu:2009mg}. In this paper we give a detailed discussion of the required matching calculation, and explicit results for a gauge theory with an arbitrary number of gauge groups, as well as results for the $SU(3)\\times SU(2) \\times U(1)$ standard model theory.\n\nThe computation of radiative corrections to gauge boson and Higgs production is not new; results have been obtained previously in fixed-order calculations by many groups~\\cite{ccc,ciafaloni1,ciafaloni2,fadin,kps,fkps,jkps,jkps4,beccaria,dp1,dp2,hori,beenakker,dmp,pozzorini,js,melles1,melles2,melles3,Denner:2006jr,kuhnW,Denner:2008yn,Ellis:1985er,Kunszt:1993sd,sack,bardin,Dixon:1998py,Dixon:1999di}. 
However, the matching computation we need is not readily available in the literature. What is available is the total one-loop scattering amplitude, which is the sum of the matching coefficient and the SCET amplitude, and our results agree with existing computations for the sum. The EFT computation requires the matching and SCET contributions separately, so that large logarithms can be summed using renormalization group evolution in the effective theory. \n\n\nIn Sec.~\\ref{sec:smatrix}, we show how the matching computation is related to the $S$-matrix for parton scattering. Using this, we can use the matching coefficients to compute the one-loop corrections to the $q \\bar q \\to g g$ cross-section in QCD. This was computed a long time ago by Ellis and Sexton~\\cite{Ellis:1985er}, and we have checked that our amplitude reproduces their cross-section. Kunszt, Signer and Tr\\'ocs\\'anyi~\\cite{Kunszt:1993sd} give the helicity amplitudes for $q \\bar q \\to gg$ for an $SU(N)$ gauge theory, and we agree with their results.\n\nIn this paper, we give expressions for the one-loop matching contributions for gauge boson pair production and scalar production. These can then be used to compute the renormalization group improved scattering amplitudes for transverse and longitudinal gauge boson pair production, as well as Higgs production, using the results in Refs.~\\cite{Chiu:2009ft,Chiu:2009mg}. We give the results for the individual Feynman diagrams as the product of a Feynman integral and a group theory factor. The results can be used for an arbitrary gauge theory with any number of gauge groups. Gauge bosons from a maximum of three gauge groups can occur in a single diagram at one loop.\n\nSection~\\ref{sec:outline} gives an outline of the method we use to compute the matching condition. 
We discuss the relation between the on-shell diagrams in dimensional regularization and the matching calculation, the group theory notation, kinematics and the Dirac basis for the matrix elements. The diagrams for vector boson production are given in Sec.~\\ref{sec:vector}, and the standard model amplitude is given in Sec.~\\ref{sec:smvv}. Section~\\ref{sec:scalar} gives the graphs for scalar production, with the standard model results, including top-quark loops, in Sec.~\\ref{sec:smscalar}.\nA consistency check between the matching condition and the EFT anomalous dimension matrix is performed in Sec.~\\ref{sec:check}. Section~\\ref{sec:smatrix} gives the relation between the matching calculation and the on-shell $S$-matrix elements in the massless theory.\n\n\n\\section{Outline of Method and Notation}\\label{sec:outline}\n\nThe basic processes we consider are $f(p_1) + \\bar{f}(p_2) \\to V_i^a(p_4) + V_j^b(p_3)$ and $f(p_1) + \\bar{f}(p_2) \\to \\phi^\\dagger(p_4) + \\phi(p_3)$. Here $f$ and $\\bar f$ are incoming fermions and antifermions of momenta $p_1$ and $p_2$, $V^a_i(p)$ is a gauge boson of gauge group $G_i$ with gauge index $a$ and momentum $p$, and $\\phi$ is a (complex) scalar field. Note that $i$ and $j$ can refer to different gauge groups, so that our results are also applicable to processes such as $q \\bar q \\to W g$. The gauge bosons $V$ will be taken to have transverse polarization. Massive gauge bosons which are longitudinally polarized can be computed using the $\\phi^\\dagger \\phi$ amplitude and the Goldstone boson equivalence theorem.\n\nOur EFT results are valid in the regime where $\\sqrt{s}$ is much larger than the gauge boson masses $M_{W,Z}$, the Higgs mass $M_H$, and the fermion masses. The matching from the full gauge theory onto the EFT is done at a scale $\\mu$ of order $\\sqrt{s}$, and power corrections such as $M_Z^2\/s$ are neglected. 
Thus the matching coefficients can be computed by evaluating the graphs in the full theory setting all the particle masses to zero, and neglecting gauge symmetry breaking. For the standard model, this implies that the best way to compute the EFT operators is to match onto operators with $W^{1,2,3}$ and $B$ fields, rather than onto operators with $W^{\\pm}, Z$ and $A$ fields.\n\nWe first summarize the standard method used to evaluate matching conditions for an EFT. More details can be found, for example, in Refs.~\\cite{Manohar:1996cq,Manohar:1997qy}. The full theory graphs are evaluated using dimensional regularization in $4-2\\epsilon$ dimensions, which regulates the ultraviolet (UV) and infrared (IR) divergences, and have the schematic form\n\\begin{eqnarray}\nA_{\\text{full}} &=& \\left(\\sum_{k\\ge1} \\frac{C_k}{\\epsilon^k}\\right)_{\\text{UV}}+\\left(\\sum_{k\\ge1} \\frac{D_k}{\\epsilon^k}\\right)_{\\text{IR}}+A_{\\text{full,finite}}\\,.\n\\end{eqnarray}\nThe ultraviolet divergences are cancelled by the full theory renormalization counterterms, leaving the infrared divergences,\n\\begin{eqnarray}\nA_{\\text{full}} + \\text{c.t.} &=& \\left(\\sum_{k\\ge1} \\frac{D_k}{\\epsilon^k}\\right)_{\\text{IR}}+A_{\\text{full,finite}}\\,.\n\\label{afull}\n\\end{eqnarray}\nThe EFT graphs are also computed using dimensional regularization. Since all the scales that enter the EFT computation (such as masses) have been set to zero, the EFT integrals are all scaleless and vanish.\nThe EFT integrals have the schematic form\n\\begin{eqnarray}\nA_{\\text{EFT}} &=& \\left(\\sum_{k\\ge1} \\frac{\\widetilde C_k}{\\epsilon^k}\\right)_{\\text{UV}}+\\left(\\sum_{k\\ge1} -\\frac{\\widetilde C_k}{\\epsilon^k}\\right)_{\\text{IR}}=0\\,,\\end{eqnarray}\ni.e.\\ a cancellation of $1\/\\epsilon$ terms arising from ultraviolet and infrared divergences, \\emph{without any finite part}. 
The\n$(1\/\\epsilon)_{\\text{UV}}$ terms are cancelled by the renormalization counterterms in the EFT, leaving the $(1\/\\epsilon)_{\\text{IR}}$ terms,\n\\begin{eqnarray}\nA_{\\text{EFT}} + \\text{c.t.} &=& \\left(\\sum_{k\\ge1} -\\frac{\\widetilde C_k}{\\epsilon^k}\\right)_{\\text{IR}}\\,.\n\\label{aeft}\n\\end{eqnarray}\nThe counterterms (and hence the anomalous dimensions) in the full and effective theories are in general different.\nThe EFT is constructed to reproduce the infrared structure of the full theory. Thus the $(1\/\\epsilon)_{\\text{IR}}$ terms in the full and effective theories \\emph{must} agree, \n\\begin{eqnarray}\nD_k= -\\widetilde C_k\\,,\n\\label{eq5}\n\\end{eqnarray} \nwhich provides a non-trivial computational check on the EFT, and also shows that infrared divergences in the full theory are equal to ultraviolet divergences in the EFT.\n\nThe matching coefficient is given by the difference of the renormalized full and effective theory expressions, Eqs.~(\\ref{afull},\\ref{aeft}). Using Eq.~(\\ref{eq5}), we see that the matching coefficient is $A_{\\text{full,finite}}$. This gives the standard method of computing matching coefficients --- compute graphs in the full theory in dimensional regularization setting all EFT scales to zero, and keep only the finite parts. This is the procedure used here. In giving the values for the graphs, we will also give the divergent terms, which should be dropped for the matching corrections. The divergent terms are useful in that they allow one to check the matching of infrared divergences, and also to compare with the results of Refs.~\\cite{Ellis:1985er,Kunszt:1993sd}. 
Scaleless integrals in the full theory computation have been set to zero, so the $1\/\\epsilon$ divergences can be either UV or IR.\n\n\\subsection{Group Theory}\n\nWe consider an arbitrary gauge group $\\otimes_r G_r$ which is a product\nof groups with coupling constants $g_r=\\sqrt{4 \\pi \\alpha_r}$.\nThe generators of $G_r$ are $T^a_r$ and satisfy the commutation relations\n\\begin{equation}\n\\left[T^a_r,T^b_s\\right] = i f^{(r)}_{abc} \\delta_{rs}\\, T^c_r\\,,\n\\end{equation}\nwhere $f^{(r)}_{abc}$ are the structure constants of $G_r$. Some products of group generators can be simplified in terms of Casimir operators, e.g.\\\n\\begin{eqnarray}\nT_j^b T^a_i T^b_j &=& \\left(C_R(j) - \\frac12 \\delta_{ij}\nC_A(i)\\right)T^a_i \\,,\n\\end{eqnarray}\nwhere $C_R(j)$ is the Casimir of the representation $R$ of the matrices $T_j$, and $C_A(i)$ is the Casimir of the adjoint representation of $G_i$.\n\nIn general, anti-commutators of group generators such as $\\left\\{T^a_r,T^b_r\\right\\}$ cannot be simplified. If $G_r$ is an $SU(N_r)$ group, and $T_r$ is in the fundamental representation, one has \n\\begin{equation}\n\\left\\{T^a_r,T^b_r\\right\\} = \\frac{1}{N_r}\\delta_{ab}^{(r)}+d^{(r)}_{abc} T^c_r \\,,\n\\label{eq8}\n\\end{equation}\nwhere $d_{abc}=0$ for $SU(2)$. However, there is no simple expression such as Eq.~(\\ref{eq8}) in general, not even for arbitrary representations of $SU(N)$. For this reason, we will give a general expression for the group theory factor valid for arbitrary gauge theories, and then its value for an $SU(N) \\times SU(2) \\times U(1)$ gauge theory.\n\nDiagrams with a closed fermion or scalar loop contribute at one loop order. 
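The identities above can be verified numerically. The following minimal sketch checks both the Casimir simplification (in the same-group case $i=j$) and Eq.~(\\ref{eq8}) for the fundamental representation of $SU(2)$, where $T^a = \\sigma^a\/2$, $C_F = 3\/4$, $C_A = 2$ and $d_{abc} = 0$; the use of Python with numpy here is purely illustrative and not part of the calculation:

```python
import numpy as np

# Fundamental SU(2) generators T^a = sigma^a / 2 (sigma = Pauli matrices).
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
T = [s / 2 for s in sigma]

N = 2
C_F = (N**2 - 1) / (2 * N)  # fundamental Casimir, 3/4 for SU(2)
C_A = N                     # adjoint Casimir, 2 for SU(2)

for a in range(3):
    # sum_b T^b T^a T^b = (C_F - C_A/2) T^a  (same-group case of the identity)
    lhs = sum(T[b] @ T[a] @ T[b] for b in range(3))
    assert np.allclose(lhs, (C_F - C_A / 2) * T[a])
    for b in range(3):
        # {T^a, T^b} = (delta_ab / N) * identity  (d_abc = 0 for SU(2))
        anti = T[a] @ T[b] + T[b] @ T[a]
        assert np.allclose(anti, ((a == b) / N) * np.eye(2))
```

The same check with the Gell-Mann matrices would exercise the $d_{abc}$ term of Eq.~(\\ref{eq8}) for $SU(3)$.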
We use the symbols $\\text{Tr}_{WF}$ and $\\text{Tr}_{CS}$ to denote traces over the Weyl fermions and the complex scalars of the theory, respectively.\n\nFor the standard model results, $T^a$ are the color generators, $t^a$ are the $SU(2)$ generators, and $Y$ is the $U(1)$ generator.\n\n\\subsection{Kinematics}\n\nThe amplitude $\\mathcal{M}$ is defined as\n\\begin{equation}\n\\langle p_3\\,p_4\\,,\\mathrm{out}| p_1\\,p_2\\,,\\mathrm{in} \\rangle =\n (2\\pi)^4 \\delta^{(4)}(p_1+p_2-p_3-p_4)i \\mathcal{M} \\, . \\nonumber\n\\end{equation}\nWe will work in the center of mass frame (CMS) throughout this\narticle. For $f(p_1) + \\bar{f}(p_2) \\to V_i^a(p_4) +\nV_j^b(p_3)$, the Dirac structure can be written as a linear combination of five\nbasic terms\n\\begin{eqnarray}\n\\mathcal{M}_0 &=& \\bar{v}(p_2) \\slashed{\\epsilon}_4\n\\left(\\slashed{p}_4-\\slashed{p}_2\\right) \\slashed{\\epsilon}_3 P_L\nu(p_1) \\,,\\nonumber \\\\ \n\\mathcal{M}_1 &=& \\bar{v}(p_2) \\slashed{p}_4 (\\epsilon_4 \\cdot\n\\epsilon_3) P_L u(p_1) \\,,\\nonumber \\\\ \n\\mathcal{M}_4 &=& \\bar{v}(p_2) \\slashed{\\epsilon}_4 (\\epsilon_3 \\cdot\np_1) P_L u(p_1) \\,,\\nonumber \\\\ \n\\mathcal{M}_5 &=& - \\bar{v}(p_2) \\slashed{\\epsilon}_3 (\\epsilon_4\n\\cdot p_2)P_L u(p_1) \\,,\\nonumber \\\\ \n\\mathcal{M}_6 &=& \\bar{v}(p_2) \\slashed{p}_4 (\\epsilon_4 \\cdot p_2)(\n\\epsilon_3 \\cdot p_1) P_L u(p_1) \\,,\n\\end{eqnarray}\nin the notation of Sack~\\cite{sack}, where $\\epsilon^\\mu_i \\equiv \\epsilon^\\mu(p_i)$ and $P_{L} \\equiv \\left(1-\\gamma_5 \\right)\/2$. 
The other amplitudes used by Sack ($\\mathcal{M}_{2,3,7,8,9}$) vanish for \\emph{transversely} polarized \\emph{on-shell} gauge bosons, neglecting power corrections.\n\nFor scalar production, the Dirac structure which enters is\n\\begin{eqnarray}\n\\mathcal{M}_\\phi &=& \\bar{v}(p_2) \\slashed{p}_4 P_L\nu(p_1) \\, .\n\\end{eqnarray}\n\n\nThe full amplitude is the sum of all diagrams $R_i$ with group theory factor $\\mathcal{C}_i$,\n\\begin{equation}\n\\mathcal{M} = \\sum_{i} \\mathcal{C}(R_i) R_i \\, .\n\\end{equation}\nIn many cases, a diagram $R$ has a corresponding crossed graph which we denote by\n$\\bar{R}$ with group theory factor $\\mathcal{C}(\\bar{R})$.\n\n\nThe Mandelstam variables are defined as \n\\begin{eqnarray}\ns &=& (p_1+p_2)^2 \\,,\\nonumber \\\\ \nt &=& (p_1-p_4)^2 \\,, \\nonumber \\\\ \nu &=& (p_1-p_3)^2 \\,,\n\\end{eqnarray}\nto agree with the conventions of Refs.~\\cite{Chiu:2009ft,Chiu:2009mg}.\n\nUnder the exchange of the two final state gauge bosons, $\\epsilon_3\n\\leftrightarrow \\epsilon_4$, $p_3 \\leftrightarrow p_4$, the matrix\nelements and Mandelstam variables transform as\n\\begin{eqnarray}\n\\mathcal{M}_0 &\\rightarrow& \\mathcal{M}_0+2\\mathcal{M}_1\\,,\\nonumber \\\\ \n\\mathcal{M}_1 &\\rightarrow& -\\mathcal{M}_1\\,,\\nonumber \\\\ \n\\mathcal{M}_4 &\\leftrightarrow& \\mathcal{M}_5 \\,, \\nonumber \\\\ \n\\mathcal{M}_6 &\\rightarrow& -\\mathcal{M}_6 \\,, \\nonumber \\\\ \nt &\\leftrightarrow& u \\,, \\nonumber \\\\ \ns &\\leftrightarrow & s \\, .\n\\label{mcross}\n\\end{eqnarray}\nIf there is a crossed graph, then $\\bar{R}$ is obtained from $R$ using Eq.~(\\ref{mcross}).\n\n\nThroughout the article, space-time is $d = 4-2\\epsilon$\ndimensional, which regulates the ultraviolet as well as the infrared\nbehavior, and we work in 't~Hooft-Feynman gauge, $\\xi = 1$. Furthermore, we define the function $\\mathsf{L}_X \\equiv\n\\log\\left[(-X-i0^+)\/\\mu^2\\right]$. 
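The $-i0^+$ prescription in $\\mathsf{L}_X$ can be checked numerically by giving $X$ a small negative imaginary part; the sketch below (in Python, with arbitrary illustrative values for $s$, $t$, $u$ and $\\mu^2$) confirms that only the $s$-channel logarithm acquires an imaginary part:

```python
import cmath
import math

mu2 = 1.0                  # mu^2 (illustrative value)
s, t, u = 4.0, -1.5, -2.5  # scattering kinematics: s > 0 and t, u < 0
eps = 1e-30                # finite stand-in for the infinitesimal 0^+

def L(X):
    # L_X = log[(-X - i0^+)/mu^2], principal branch of the logarithm
    return cmath.log(complex(-X, -eps) / mu2)

# s-channel: -s - i0^+ lies just below the branch cut, giving -i*pi
assert cmath.isclose(L(s), complex(math.log(s / mu2), -math.pi))
# t- and u-channels: -t and -u are positive, so the logarithms are real
assert abs(L(t) - math.log(-t / mu2)) < 1e-12
assert abs(L(u) - math.log(-u / mu2)) < 1e-12
```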
For scattering kinematics, $s > 0$ and $t,u < 0$, the correct\nanalytical continuation is given by\n\\begin{eqnarray}\n\\mathsf{L}_{s} &=& \\log(s\/\\mu^2)-i\\pi\\,,\\nonumber \\\\ \\mathsf{L}_{t} &=& \\log(-t\/\\mu^2)\\,, \\nonumber \\\\ \\mathsf{L}_{u} &=&\n\\log(-u\/\\mu^2) \\, .\n\\end{eqnarray}\n\n\nWe have assumed that the incoming fermion is a left-chiral field, so that the incoming fermion $f$ has helicity $h=-1\/2$ and incoming antifermion $\\bar f$ has helicity $h=1\/2$. The results for a right-chiral field are given by $P_L \\to P_R$. \n\n\\subsection{EFT Lagrangian}\n\nWe give the Feynman diagram results for the on-shell scattering amplitude $\\mathcal{M}$. This also gives the matching condition onto the SCET operators in the EFT. The EFT Lagrangian is\n\\begin{eqnarray}\nL &=& \\frac12\\sum_{p_1,p_2,p_3,p_4} \\mathcal{M}^{ia,jb}(p_1,p_2,p_3,p_4) V^{i,a}_{p_4} V^{j,b}_{p_3}\n\\bar \\psi_{p_2} \\psi_{p_1}\\nonumber \\\\ \n\\end{eqnarray}\nfor vector boson production, and \n\\begin{eqnarray}\nL &=& \\sum_{p_1,p_2,p_3,p_4} \\mathcal{M}^{ia,jb}(p_1,p_2,p_3,p_4) \\phi^\\dagger_{p_4} \\phi_{p_3}\n\\bar \\psi_{p_2} \\psi_{p_1}\\nonumber \\\\ \n\\end{eqnarray}\nfor scalar production. The subscripts $p_i$ are the label momenta of the external SCET fields, and are summed over. \n\nThe vector boson term has a factor of $1\/2$ because there are two identical fields. To make clear the combinatorial factor of $1\/2$, consider the production of a $W$ boson with momentum $p_W$ and a gluon with momentum $p_g$. This is obtained from $\\mathcal{M}$ by picking out the term with $i,a$ in $SU(2)$ and $j,b$ in $SU(3)$, and setting $p_4=p_W$ and $p_3=p_g$ \\emph{or} the term with $i,a$ in $SU(3)$ and $j,b$ in $SU(2)$, and setting $p_4=p_g$ and $p_3=p_W$, \\emph{but not both.}\n\n\n\\subsection{Topologies}\n\nThe diagrams are classified in seven different topologies shown in\nFigure \\ref{fig:topos}. Note that we do not explicitly draw the crossed\ntopologies. 
Because this is a matching calculation, counterterm diagrams and wavefunction corrections are omitted. The on-shell wavefunction graphs are scaleless, and vanish in dimensional regularization.\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{ccc}\n\\includegraphics[height=1.5cm]{T1.eps} &\n\\includegraphics[height=1.5cm]{T2.eps} &\n\\includegraphics[height=1.5cm]{T3.eps} \\\\\nT1 & T2 & T3 \\\\[10pt]\n\\includegraphics[height=1.5cm]{T6.eps} &\n\\includegraphics[height=1.5cm]{T7.eps} &\n\\includegraphics[height=1.5cm]{T8.eps} \\\\\nT4 & T5 & T6 \\\\[10pt]\n\\includegraphics[height=1.5cm]{T9.eps} & & \\\\\nT7 & & \n\\end{tabular}\n\\end{center}\n\\caption{The seven different topologies for a general $2 \\to 2$\n scattering process. The $\\otimes$ denotes a one particle irreducible\n subdiagram. Wavefunction renormalization diagrams are omitted.}\\label{fig:topos}\n\\end{figure}\n\n\\section{Diagrams for vector boson production}\\label{sec:vector}\n\nWe provide the result of each tree-level and one-loop diagram $R_i$ and list the group theory\nstructure $\\mathcal{C}_i$ in a general form in terms of generators of the gauge\ngroups. The pertinent group theory factors for the Standard Model are given in the Section~\\ref{sec:smvv}.\n\n\n\n\\subsection{Tree level amplitude}\n\n\\begin{figure}[tb]\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[height=1.5cm]{tr2.eps}&\\includegraphics[height=1.5cm]{tr1.eps}\n\\\\\n$R_1$ & $R_2$ \n\\end{tabular}\n\\end{center}\n\\caption{The tree level diagrams. Quarks, gauge bosons, scalars and ghosts are denoted by solid, wavy, dashed and dotted lines, respectively. 
Crossed diagrams are not shown.}\\label{fig:tree}\n\\end{figure}\n\n\nThe tree level diagrams are shown in Figure~\\ref{fig:tree}.\nFor the tree level amplitude, the group theory factors and the\ndiagrams read\n\\begin{eqnarray}\n\\mathcal{C}(R_1) &=& g_i g_j T^b_j T^a_i \\,, \\nonumber \\\\ \n\\mathcal{C}(\\bar{R}_1) &=& g_i g_j T^a_i T^b_j \\,, \\nonumber \\\\ \n\\mathcal{C}(R_2) &=& g_i^2 \\left(-i \\delta_{ij} f^{(i)}_{abc}T_i^c\\right)\\,,\n\\end{eqnarray}\n\\begin{eqnarray}\nR_1 &=& -\\frac{1}{t}\\left(\\mathcal{M}_0 +2 \\mathcal{M}_1\\right) \\,, \\nonumber \\\\ \n\\bar{R}_1 &=& -\\frac{1}{u}\\left(\\mathcal{M}_0\\right)\\,, \\nonumber \\\\ \nR_2 &=& -\\frac{1}{s}\\left(2 \\mathcal{M}_1\\right) \\,,\n\\end{eqnarray}\nwhere $\\bar{R}_1$ is the crossed graph related to $R_1$. $R_2$ does not have a crossed graph.\n\n\\begin{widetext}\n\n\\subsection{Topology T1}\n\nThe four diagrams shown in Figure \\ref{fig:t1} share topology T1. \n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[height=1.5cm]{R1.eps}&\\includegraphics[height=1.5cm]{R3.eps}\n\\\\\n$T1a$ & $T1b$ \\\\[10pt]\n\\includegraphics[height=1.5cm]{R18.eps} &\n\\includegraphics[height=1.5cm]{R2.eps} \\\\\n$T1c$ & $T1d$ \n\\end{tabular}\n\\end{center}\n\\caption{Diagrams with topology T1. 
See caption of Figure~\\ref{fig:tree}.}\\label{fig:t1}\n\\end{figure}\n\n\n\\subsubsection{T1a}\n\n\n\\begin{equation}\n\\mathcal{C}(R_{T1a}) = \\frac{g_i^4}{16\\pi^2}\\delta_{ij} f_{ebc}^{(i)} f_{aed}^{(i)} T^c_i T^d_i\n\\end{equation}\n\n\\begin{eqnarray}\nR_{T1a} &=& \\frac{\\mathcal{M}_0}{t}\\Bigg\\{-\\frac{2}{\\epsilon^2 }+\\frac{2 (\\mathsf{L}_{s}-1)}{\\epsilon } + \\frac{1}{u}\\biggl[\n-3t\\mathsf{L}_{s}^2-(s+4t)\\mathsf{L}_{t}^2+2(s+4t)\\mathsf{L}_{s} \\mathsf{L}_{t} +2u \\mathsf{L}_{t} -\\pi^2\\left(\\frac76s+\\frac{25}{6}t\\right)-4u\\biggr]\\Biggr\\}\\nonumber \\\\ \n&&+ \\mathcal{M}_1\\Biggl\\{-\\frac{1}{\\epsilon^2 }\\left(\\frac9s+\\frac4t\\right)+\\frac{1}{\\epsilon } \n\\left(\\frac{4\\mathsf{L}_{s}}{t}+\\frac{8\\mathsf{L}_{t}}{s}+\\frac{\\mathsf{L}_{s}}{s}-\\frac{2}{s}-\\frac{4}{t}\\right)+ \\frac{1}{u^2ts}\\biggl[\\frac12t(9s^2+14st+7t^2)\\mathsf{L}_{s}^2+s(2s+t)(s+2t)\\mathsf{L}_{t}^2\\nonumber \\\\ \n&&-2(2s^3+9s^2t+10st^2+4t^3)\\mathsf{L}_{s}\\mathsf{L}_{t}-2t^2u\\mathsf{L}_{s}-2us(2s+3t)\\mathsf{L}_{t}+\\pi^2\\Bigl(\\frac73s^3+\\frac{125}{12}s^2t+\\frac{71}{6}st^2+\\frac{19}{4}t^3\\Bigr)\\nonumber \\\\ \n&&-8s^3-20s^2t-16st^2-4t^3\n\\biggr]\\Biggr\\}\\nonumber \\\\ \n&&+ \\frac{\\mathcal{M}_4+\\mathcal{M}_5}{t}\\Biggl\\{-\\frac{2}{\\epsilon^2}+\\frac{1}{\\epsilon\n}\\left(\\frac{2t}s+2\\mathsf{L}_{t}\\right)+ \\frac{1}{u^2}\\biggl[-t(3s+4t)\\mathsf{L}_{s}^2 -(s^2+5st+5t^2)\\mathsf{L}_{t}^2+2t(3s+4t)\\mathsf{L}_{s}\\mathsf{L}_{t} \\nonumber \\\\ \n&&+2ut(2s+t)\\frac{\\mathsf{L}_{s}}{s}- 2ut\\mathsf{L}_{t} +\\pi^2\\Bigl(\\frac{s^2}6-\\frac83st-\\frac{23}{6}t^2\\Bigr) +4\\frac{t^3}{s}+4st+8t^2\\biggr]\\Biggr\\}\\nonumber \\\\ \n&&+ \\frac{\\mathcal{M}_6}{tu^3}\\Biggl\\{-4t(s+2t)(\\mathsf{L}_{s}-\\mathsf{L}_{t})^2 +4u(3s+5t)(\\mathsf{L}_{s}-\\mathsf{L}_{t})-4\\pi^2t\\Bigl(s+2t\\Bigr)-4u^2 \\Biggr\\}\n\\end{eqnarray}\n\nThe crossed graph $\\bar{R}_{T1a}$ is given by applying Eq.~(\\ref{mcross}) to $R_{T1a}$, and has color 
factor\n\\begin{equation}\n\\mathcal{C}(\\bar R_{T1a}) = \\frac{g_i^4}{16\\pi^2} \\delta_{ij} f_{eac}^{(i)} f_{bed}^{(i)} T^c_i T^d_i\n\\end{equation}\ngiven from $\\mathcal{C}(R_{T1a})$ by $i,a \\leftrightarrow j,b$.\n\n\\subsubsection{T1b}\n\n\\begin{eqnarray}\n\\mathcal{C}(R_{T1b}) &=& \\frac{g_i^3 g_j}{16\\pi^2} (if_{dac}^{(i)}) T^c_i T^b_j T^d_i\n\\end{eqnarray}\nThe diagram is given by\n\\begin{eqnarray}\nR_{T1b}&=& \\frac{\\mathcal{M}_0}{tu}\\Biggl\\{\\frac{2s}{\\epsilon^2 }+\\frac{2u \\mathsf{L}_{u} +2t \\mathsf{L}_{t}+s}{\\epsilon }\n+\\frac{1}{s^2}\\biggl[-st(2s+3t)\\mathsf{L}_{u}^2\n+su \\left(s+3t\\right)\\mathsf{L}_{t}^2 +2s(s^2+3st+3t^2)\\mathsf{L}_{u}\\mathsf{L}_{t} +s^2t\\mathsf{L}_{u}+s^2u\\mathsf{L}_{t}\\nonumber \\\\ \n&&-\\pi^2\\left(\\frac76s^3+3s^2t+3st^2\\right)+2s^3\n\\biggr] \\Biggr\\}\\nonumber \\\\ \n&&+\\frac{\\mathcal{M}_1}{t}\\Biggl\\{-\\frac{4}{\\epsilon^2 }+\\frac{4 \\mathsf{L}_{u}-2}{\\epsilon } + \\frac{1}{s^2 u}\\biggl[3stu\\mathsf{L}_{u}^2 +su\\left(2s+3t\\right)\\mathsf{L}_{t}^2 -2su\\left(2s+3t\\right)\\mathsf{L}_{u}\\mathsf{L}_{t}+2s^2u\\mathsf{L}_{t} -\\pi^2\\left(\\frac73s^3+\\frac{16}{3}s^2t+3st^2\\right)\\nonumber \\\\ \n&&+4s^3+4s^2t\n\\biggr]\\Biggr\\}\\nonumber \\\\ \n&&+ \\frac{\\mathcal{M}_4}{t u}\\Biggl\\{\\frac{4s}{\\epsilon }+\n\\frac{1}{s^2}\\biggl[-3stu \\left(\\mathsf{L}_{u}-\\mathsf{L}_{t}\\right)^2 +4s^2t\\mathsf{L}_{u} +4s^2u\\mathsf{L}_{t} +\\pi^2\\left(3s^2t+3st^2\\right) +8s^3\\biggr]\\Biggr\\}\\nonumber \\\\ \n&+& \\frac{\\mathcal{M}_5}{t u}\\Biggl\\{\\frac{2s}{\\epsilon^2}+\\frac{2 u\n \\mathsf{L}_{t} + 2 t \\mathsf{L}_{u}}{\\epsilon }+ \\frac{1}{s^2}\\biggl[st(2s+3t)\\mathsf{L}_{u}^2-su(s+3t)\\mathsf{L}_{t}^2 +6stu\\mathsf{L}_{u}\\mathsf{L}_{t} +\\pi^2\\Bigl(-\\frac16s^3+3s^2t+3st^2\\Bigr)\\biggr]\\Biggr\\}\\nonumber \\\\ \n&&+ \\mathcal{M}_6\\frac{12}{tu}(\\mathsf{L}_{t}-\\mathsf{L}_{u}) \\, .\n\\end{eqnarray}\n\nThe crossed graph $\\bar R_{T1b}$ is given by applying Eq.~(\\ref{mcross}) to $R_{T1b}$, 
and has color factor\n\\begin{equation}\n\\mathcal{C}(\\bar R_{T1b}) = \\frac{g_i g_j^3}{16\\pi^2} (if_{dbc}^{(j)}) T^c_j T^a_i T^d_j\n\\end{equation}\ngiven from $\\mathcal{C}(R_{T1b})$ by $i,a \\leftrightarrow j,b$.\n\n\\subsubsection{T1c}\n\\begin{eqnarray}\n\\mathcal{C}(R_{T1c}) &=& \\sum_k \\frac{g_i g_j g_k^2}{16\\pi^2} T^c_k T^b_j T^a_i T^c_k\n\\end{eqnarray}\n\\begin{eqnarray}\nR_{T1c} &=& \\frac{\\mathcal{M}_0}{t}\\left[\\frac{2}{\\epsilon^2 }-\\frac{2 \\mathsf{L}_{s}}{\\epsilon } + 2 \\mathsf{L}_{s} \\mathsf{L}_{t}-\\frac{7\\pi^2}{6}-\\mathsf{L}_{t}^2\\right]\\nonumber \\\\ \n&&+\\frac{\\mathcal{M}_1}{t}\\Biggl\\{\\frac{4}{\\epsilon^2 }-\\frac{4 \\mathsf{L}_{s}}{\\epsilon } + \\frac{1}{u^2}\\biggl[st \\mathsf{L}_{s}^2-(2s^2+3st+2t^2)\\mathsf{L}_{t}^2+2(2s^2+3st+2t^2)\\mathsf{L}_{s}\\mathsf{L}_{t}+2tu(\\mathsf{L}_{s}-\\mathsf{L}_{t})-\\pi^2\\Bigl(\\frac73s^2+\\frac{11}{3}st+\\frac73t^2\\Bigr)\\biggr]\\Biggr\\}\\nonumber \\\\ \n&&+\\frac{\\mathcal{M}_4+\\mathcal{M}_5}{t}\\Biggl\\{\\frac{4}{\\epsilon }+ \\frac{1}{u^2}\\biggl[t(3s+2t) (\\mathsf{L}_{s}-\\mathsf{L}_{t})^2+2ut\\mathsf{L}_{s} +2u(2s+t)\\mathsf{L}_{t} +\\pi^2t\\Bigl(3s+2t\\Bigr) +8u^2\\biggr]\\Biggr\\}\\nonumber \\\\ \n&&+ \\frac{\\mathcal{M}_6}{tu^3}\\Biggl\\{4t(2s+t)(\\mathsf{L}_{s}-\\mathsf{L}_{t})^2-4u(3s+t)(\\mathsf{L}_{s}-\\mathsf{L}_{t})+4\\pi^2t\\Bigl(2s+t\\Bigr)-4u^2\\Biggr\\}\n\\end{eqnarray}\n\nThe crossed graph $\\bar R_{T1c}$ is given by applying Eq.~(\\ref{mcross}) to $R_{T1c}$, and has color factor\n\\begin{equation}\n\\mathcal{C}(\\bar R_{T1c}) = \\sum_k \\frac{g_i g_j g_k^2}{16\\pi^2} T^c_k T^a_i T^b_j T^c_k\n\\end{equation}\ngiven from $\\mathcal{C}(R_{T1c})$ by $i,a \\leftrightarrow j,b$.\n\n\\end{widetext}\n\n\\subsubsection{T1d}\n\\begin{eqnarray}\n\\mathcal{C}(R_{T1d}) = \\frac{g_i^4}{16\\pi^2}\\delta_{ij} \\left\\{T^d_i, T^c_i\\right\\}\nf_{adg}^{(i)}f_{bcg}^{(i)} \n\\end{eqnarray}\n\nThe result of the diagram is \n\\begin{eqnarray}\nR_{T1d}= 
\\frac{\\mathcal{M}_4+\\mathcal{M}_5}{s}\\left[\n \\frac{2}{\\epsilon}-2\\mathsf{L}_{s}+4 \\right] \\, .\n\\end{eqnarray}\n\nThere is no corresponding crossed graph.\n\n\n\\subsection{Topology T2}\n\nThe diagrams of topology T2 are shown in Figure \\ref{fig:t2}.\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[height=1.5cm]{R4.eps}&\\includegraphics[height=1.5cm]{R5.eps}\n\\\\\n$T2a$ & $T2b$ \\\\[10pt]\n\\includegraphics[height=1.5cm]{R6.eps} &\n\\includegraphics[height=1.5cm]{R23.eps} \\\\\n$T2c$ & $T2d$ \\\\[10pt]\n\\includegraphics[height=1.5cm]{R7.eps} &\n\\includegraphics[height=1.5cm]{R8.eps} \\\\\n$T2e$ & $T2f$ \\\\[10pt]\n\\includegraphics[height=1.5cm]{R9.eps} &\n\\includegraphics[height=1.5cm]{R10.eps} \\\\\n$T2g$ & $T2h$ \n\\end{tabular}\n\\end{center}\n\\caption{Diagrams of topology T2. See caption of Figure~\\ref{fig:tree}.}\\label{fig:t2}\n\\end{figure}\n\n\n\\subsubsection{T2a}\n\nThe sum of the ghost graph and its crossed graph is\n\\begin{eqnarray}\n\\mathcal{C}(R_{T2a}) &=& \\frac{g_i^4}{16\\pi^2} \\delta_{ij} i f_{dcf}^{(i)} f_{fbe}^{(i)} f_{ead}^{(i)} T^c_i\\nonumber \\\\ \n\\end{eqnarray}\n\\begin{eqnarray}\nR_{T2a} &=& \\frac{\\mathcal{M}_1}{s}\\Biggl[-\\frac{1}{6\\epsilon}+\\frac{\\mathsf{L}_{s}}{6}-\\frac{11}{18}\\Biggr]\n\\end{eqnarray}\n\n\n\\subsubsection{T2b}\n\nThe sum of the scalar graph and its crossed graph is\n\\begin{eqnarray}\n\\mathcal{C}(R_{T2b}) &=& \\sum_k \\frac{g_i^2 g_k^2}{16\\pi^2} \\delta_{ij} if_{abg}^{(i)}T^c_k \\, \\text{Tr}_{CS}\\,\\left(T^c_k T^g_i\\right) \\nonumber \\\\ \nR_{T2b} &=& \\frac{\\mathcal{M}_1}{s}\\Biggl[\\frac{2}{3\\epsilon}-\\frac{2\\mathsf{L}_{s}}{3}+\\frac{22}{9}\\Biggr]\\, .\n\\end{eqnarray}\n\nIf the gauge generators are orthogonal, then the $\\text{Tr}_{CS}$ factor is proportional to $\\delta_{ik}$. 
However, in general, the generators for $U(1)$ factors need not be orthogonal.\n\n\\subsubsection{T2c}\n\nThere is no crossed graph since the gauge bosons are real fields.\n\\begin{eqnarray}\n\\mathcal{C}(R_{T2c}) &=& i\\frac{g_i^4}{16\\pi^2}\\delta_{ij} f_{cdf}^{(i)} f_{dae}^{(i)} f_{ebf}^{(i)} T^c_i\\,,\\nonumber \\\\ \n\\nonumber \\\\ \nR_{T2c} &=& \\frac{\\mathcal{M}_1}{s}\\Biggl[\\frac{3}{\\epsilon^2}+\\frac{17-6\\mathsf{L}_{s}}{2\\epsilon}+ \n\\frac32\\mathsf{L}_{s}^2-\\frac{17}{2}\\mathsf{L}_{s}-\\frac{\\pi^2}{4}+\\frac{95}{6}\\Biggr]\\, .\\nonumber \\\\ \n\\end{eqnarray}\n\n\n\n\\subsubsection{T2d}\\label{sec:T2d}\n\nThe sum of graph T2d and its crossed graph is\n\\begin{eqnarray}\n\\mathcal{C}(R_{T2d}) &=& \\sum_k \\frac{g_i^2 g_k^2}{16\\pi^2}i \\delta_{ij} f^{(i)}_{abg} T^c_k \\text{Tr}_{WF}\\,\\left(T^c_k T_i^g \\right) \\nonumber \\\\ \nR_{T2d} &=& \\frac{\\mathcal{M}_1}{s}\\Biggl[\\frac{4}{3\\epsilon}-\\frac43\\mathsf{L}_{s}+\\frac{14}{9} \\Biggr]\n\\end{eqnarray}\n\nGraph T2d also has a piece proportional to the $\\epsilon$ symbol, with a group theory factor proportional to \n$\\text{Tr}_{WF}\\,(T^c_k \\left\\{ T^b_j, T^a_i\\right\\})$. This contribution is proportional to the gauge anomaly and must vanish when summed over all fermions in the loop for a consistent gauge theory, so it has not been given explicitly.\nOur result for T2d differs from that in Ref.~\\cite{bardin}. 
The formul\\ae\\ in Sec.~(14.13) give $26\/9$ instead of $14\/9$ for the finite part.\n\n\\subsubsection{T2e, T2f, T2g and T2h}\n\nAll these diagrams vanish.\n\n\n\\subsection{Topologies T3 and T4}\n\nThe diagrams with topologies T3 and T4 are shown in Figure \\ref{fig:t3}.\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[height=1.5cm]{R15.eps}&\\includegraphics[height=1.5cm]{R14.eps}\n\\\\\n$T3a$ & $T4a$ \\\\[10pt]\n\\includegraphics[height=1.5cm]{R20.eps} &\n\\includegraphics[height=1.5cm]{R19.eps} \\\\\n$T3b$ & $T4b$ \n\\end{tabular}\n\\end{center}\n\\caption{Diagrams of topologies T3 and T4. See caption of Figure~\\ref{fig:tree}.}\\label{fig:t3}\n\\end{figure}\n\n\\subsubsection{T3a}\n\\begin{eqnarray}\n\\mathcal{C}(R_{T3a}) &=& \\frac{g_i^3 g_j}{16\\pi^2} \\frac12 C_A(i) T^b_j T^a_i\n\\end{eqnarray}\n\\begin{eqnarray}\nR_{T3a} &=& \\frac{\\mathcal{M}_0+2\\mathcal{M}_1}{t}\\left[\\frac{2}{\\epsilon^2}-\\frac{2\\mathsf{L}_{t}}{\\epsilon}+\\mathsf{L}_{t}^2-\\frac{\\pi^2}{6}\\right]\\nonumber \\\\ \n&+& \\frac{\\mathcal{M}_5}{t}\\left[-\\frac{2}{\\epsilon^2}+\\frac{2\\mathsf{L}_{t}}{\\epsilon}-\\mathsf{L}_{t}^2+\\frac{\\pi^2}{6}+2\\right]\n\\end{eqnarray}\n\nThe crossed graph $\\bar R_{T3a}$ is given by applying Eq.~(\\ref{mcross}) to $R_{T3a}$, and has color factor\n\\begin{equation}\n\\mathcal{C}(\\bar R_{T3a}) = \\frac{g_i g_j^3}{16\\pi^2} \\frac12 C_A(j) T^a_i T^b_j\n\\end{equation}\ngiven from $\\mathcal{C}(R_{T3a})$ by $i,a \\leftrightarrow j,b$.\n\n\\subsubsection{T3b}\n\n\\begin{eqnarray}\n\\mathcal{C}(R_{T3b}) &=& \\sum_k \\frac{g_i g_j g_k^2}{16\\pi^2} T^b_j T^c_k T^a_i T^c_k\n\\end{eqnarray}\n\\begin{eqnarray}\nR_{T3b}&=& \\frac{\\mathcal{M}_0+2\\mathcal{M}_1}{t}\\left[\\frac{1}{\\epsilon}-\\mathsf{L}_{t}+4\\right]\\nonumber \\\\ \n&+& \\frac{\\mathcal{M}_5}{t}\\left[-\\frac{4}{\\epsilon}+4\\mathsf{L}_{t}-10\\right]\n\\end{eqnarray}\n\nThe crossed graph $\\bar R_{T3b}$ is given by applying Eq.~(\\ref{mcross}) to $R_{T3b}$, and has color 
factor\n\\begin{equation}\n\\mathcal{C}(\\bar R_{T3b}) = \\sum_k \\frac{g_i g_j g_k^2}{16\\pi^2} T^a_i T^c_k T^b_j T^c_k \n\\end{equation}\ngiven from $\\mathcal{C}(R_{T3b})$ by $i,a \\leftrightarrow j,b$.\n\n\n\n\\subsubsection{T4a}\n\\begin{eqnarray}\n\\mathcal{C}(R_{T4a}) &=& \\frac{g_i g_j^3}{16\\pi^2} \\frac12 C_A(j) T^b_j T^a_i\n\\end{eqnarray}\n\\begin{eqnarray}\nR_{T4a} &=& \\frac{\\mathcal{M}_0+2\\mathcal{M}_1}{t}\\left[\\frac{2}{\\epsilon^2}-\\frac{2\\mathsf{L}_{t}}{\\epsilon}+\\mathsf{L}_{t}^2-\\frac{\\pi^2}{6}\\right]\\nonumber \\\\ \n&+& \\frac{\\mathcal{M}_4}{t}\\left[-\\frac{2}{\\epsilon^2}+\\frac{2\\mathsf{L}_{t}}{\\epsilon}-\\mathsf{L}_{t}^2+\\frac{\\pi^2}{6}+2\\right]\n\\end{eqnarray}\n\nThe crossed graph $\\bar R_{T4a}$ is given by applying Eq.~(\\ref{mcross}) to $R_{T4a}$, and has color factor\n\\begin{equation}\n\\mathcal{C}(\\bar R_{T4a}) = \\frac{g_i^3 g_j}{16\\pi^2} \\frac12 C_A(i) T^a_i T^b_j\n\\end{equation}\ngiven from $\\mathcal{C}(R_{T4a})$ by $i,a \\leftrightarrow j,b$.\n\n\n\\subsubsection{T4b}\n\n\\begin{eqnarray}\n\\mathcal{C}(R_{T4b}) &=& \\sum_k \\frac{g_i g_j g_k^2}{16\\pi^2} T^c_k T^b_j T^c_k T^a_i\n\\end{eqnarray}\n\\begin{eqnarray}\nR_{T4b}&=& \\frac{\\mathcal{M}_0+2\\mathcal{M}_1}{t}\\left[\\frac{1}{\\epsilon}-\\mathsf{L}_{t}+4\\right]\\nonumber \\\\ \n&+&\\frac{\\mathcal{M}_4}{t}\\left[-\\frac{4}{\\epsilon}+4\\mathsf{L}_{t}-10\\right]\n\\end{eqnarray}\n\nThe crossed graph $\\bar R_{T4b}$ is given by applying Eq.~(\\ref{mcross}) to $R_{T4b}$, and has color factor\n\\begin{equation}\n\\mathcal{C}(\\bar R_{T4b}) = \\sum_k \\frac{g_i g_j g_k^2}{16\\pi^2} T^c_k T^a_i T^c_k T^b_j\n\\end{equation}\ngiven from $\\mathcal{C}(R_{T4b})$ by $i,a \\leftrightarrow j,b$.\n\n\n\n\n\n\\subsection{Topology T5}\n\nThe diagrams with topology T5 are shown in Figure~\\ref{fig:t7}. 
There are no crossed graphs for this topology.\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[height=1.5cm]{R16.eps}&\\includegraphics[height=1.5cm]{R22.eps}\n\\\\\n$T5a$ & $T5b$ \n\\end{tabular}\n\\end{center}\n\\caption{Diagrams of topology T5. See caption of Figure~\\ref{fig:tree}.}\\label{fig:t7}\n\\end{figure}\n\n\\subsubsection{T5a}\n\n\n\\begin{eqnarray}\n\\mathcal{C}(R_{T5a}) &=& i f_{abc}^{(i)} \\delta_{ij}\\frac{g_i^4}{16\\pi^2} \\frac{C_A(i)}{2} T_i^c \\nonumber \\\\ \nR_{T5a}&=& \\frac{\\mathcal{M}_1}{s} \\left[- \\frac{2}{\\epsilon}+ 2\\mathsf{L}_{s} -4 \\right] \n\\end{eqnarray}\n\n\\subsubsection{T5b}\n\n\\begin{eqnarray}\n\\mathcal{C}(R_{T5b}) &=& \\sum_{k} i f_{abc}^{(i)} \\delta_{ij}\\frac{g_i^2 g_k^2}{16\\pi^2} T_k^d T_i^c\nT_k^d \\nonumber \\\\ \nR_{T5b} &=& \\frac{\\mathcal{M}_1}{s}\\biggl[\n -\\frac{4}{\\epsilon^2}-\\frac{6}{\\epsilon}+\\frac{4}{\\epsilon}\\mathsf{L}_{s}-2\\mathsf{L}_{s}^2 + 6\\mathsf{L}_{s}-16+\\frac{\\pi^2}{3}\n \\biggr]\\nonumber \\\\ \n\\end{eqnarray}\n\n\\subsection{Topology T6}\n\nThe diagrams with topology T6 are shown in Figure~\\ref{fig:t8}. There\nare no crossed diagrams with this topology.\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[height=1.5cm]{R11.eps}&\\includegraphics[height=1.5cm]{R12.eps}\n\\\\\n$T6a$ & $T6b$\\\\[10pt]\n\\includegraphics[height=1.5cm]{R13.eps}&\\includegraphics[height=1.5cm]{R17.eps}\\\\\n$T6c$ & $T6d$ \\\\[10pt]\n\\includegraphics[height=1.5cm]{R24.eps}&\\includegraphics[height=1.5cm]{R25.eps}\n\\\\\n$T6e$ & $T6f$\\\\\n\\end{tabular}\n\\end{center}\n\\caption{Diagrams of topology T6. 
See caption of Figure~\\ref{fig:tree}.}\\label{fig:t8}\n\\end{figure}\n\n\\subsubsection{T6a}\n\n\\begin{eqnarray}\n\\mathcal{C}(R_{T6a}) &=& \\frac{g_i^4}{16\\pi^2} if_{abc}^{(i)}\\delta_{ij} C_A(i) T^c_i\\nonumber \\\\ \nR_{T6a} &=& \\frac{\\mathcal{M}_1}{s} \\biggl[ \\frac{19}{6\\epsilon} - \\frac{19}{6} \\mathsf{L}_{s} + \\frac{58}{9} \\biggr] \n\\end{eqnarray}\n\n\\subsubsection{T6b}\n\n\\begin{eqnarray}\n\\mathcal{C}(R_{T6b})&=& \\frac{g_i^4}{16\\pi^2} if_{abc}^{(i)}\\delta_{ij} C_A(i) T^c_i\\nonumber \\\\ \nR_{T6b} &=& \\frac{\\mathcal{M}_1}{s} \\biggl[ \\frac{1}{6\\epsilon} - \\frac{1}{6} \\mathsf{L}_{s}+ \\frac{4}{9} \\biggr] \n\\end{eqnarray}\n\n\n\\subsubsection{T6c}\n\n\\begin{eqnarray}\n\\mathcal{C}(R_{T6c})&=& \\sum_k \\frac{g_i^2 g_k^2}{16\\pi^2}i f_{abc}^{(i)}\\delta_{ij} \\text{Tr}_{CS}(T^c_i T^d_k)T^d_k \\nonumber \\\\ \nR_{T6c} &=& \\frac{\\mathcal{M}_1}{s} \\biggl[ -\\frac{2}{3\\epsilon} +\n \\frac{2}{3} \\mathsf{L}_{s} - \\frac{16}{9} \\biggr] \n\\end{eqnarray}\n\n\\subsubsection{T6d}\n\n\\begin{eqnarray}\n\\mathcal{C}(R_{T6d}) &=& \\sum_k \\frac{g_i^2 g_k^2}{16\\pi^2}i f_{abc}^{(i)}\\delta_{ij} \\text{Tr}_{WF}(T^c_i T^d_k)T^d_k \\nonumber \\\\ \nR_{T6d} &=& \\frac{\\mathcal{M}_1}{s} \\biggl[ -\\frac{4}{3\\epsilon} +\n \\frac{4}{3} \\mathsf{L}_{s}- \\frac{20}{9} \\biggr]\n\\end{eqnarray}\n\n\\subsubsection{T6e, T6f}\n\nThese diagrams vanish in dimensional regularization.\n\n\\subsection{Topology T7}\n\nThe diagram with topology T7 is shown in Figure~\\ref{fig:t9}.\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[height=1.5cm]{R21.eps}& \\\\\n$T7$ & \n\\end{tabular}\n\\end{center}\n\\caption{Diagram with topology T7. 
See caption of Figure~\\ref{fig:tree}.}\\label{fig:t9}\n\\end{figure}\n\n\n\\begin{eqnarray}\n\\mathcal{C}(R_{T7}) &=& \\sum_k \\frac{g_i g_jg_k^2}{16\\pi^2} T^b_j T_k^c T_k^c\nT^a_i \\nonumber \\\\ \nR_{T7} &=& \\frac{\\mathcal{M}_0 +2\n\\mathcal{M}_1}{t}\\left[\\frac{1}{\\epsilon} - \\mathsf{L}_{t} +1 \\right] \n\\end{eqnarray}\n\nThe crossed graph $\\bar R_{T7}$ is given by applying Eq.~(\\ref{mcross}) to $R_{T7}$, and has color factor\n\\begin{equation}\n\\mathcal{C}(\\bar R_{T7}) = \\sum_k \\frac{g_i g_jg_k^2}{16\\pi^2} T^a_i T_k^c T_k^c T^b_j \n\\end{equation}\ngiven from $\\mathcal{C}(R_{T7})$ by $i,a \\leftrightarrow j,b$.\n\n\n\n\n\\section{$\\bar{q} q \\to VV$ in the Standard Model}\\label{sec:smvv}\n\n\nThe group theory factors for $\\bar{q} q \\to V^a_i V^b_j$ have been written in a form where they are applicable to gauge boson production for an arbitrary product group, with fermions in an arbitrary irreducible representation. In this section, we tabulate the group theory factors and give their values for the standard model.\n\nThe group theory factors when the two vector bosons belong to the same group $G_i$ are given in Table~\\ref{tab:VV}. \nThe only assumption we have made on the structure of the gauge theory is that the gauge generators are orthogonal,\n\\begin{eqnarray}\n\\text{Tr}_{CS} \\left(T^a_i T^b_j\\right) &=& \\delta_{ab} \\delta_{ij} \\trcs{i}\\,,\\nonumber \\\\ \n\\text{Tr}_{WF} \\left(T^a_i T^b_j\\right) &=& \\delta_{ab} \\delta_{ij} \\trwf{i}\\,,\n\\label{eq57}\n\\end{eqnarray}\nwhich define $\\trcs{i}$ and $\\trwf{i}$. The orthogonality only needs to be checked if both $i$ and $j$ correspond to $U(1)$ factors, and is satisfied in theories which arise as the low-energy limit of unified theories based on semisimple Lie groups. 
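For the terms where both gauge bosons attach through the same group, the reduction of the color factors to the entries of Table~\\ref{tab:VV} repeatedly uses the identity $\\sum_c T^c T^a T^c = \\left(C_F - \\frac12 C_A\\right) T^a$, which collapses the sandwiched generators in graphs such as T3b, T4b and T7 into Casimirs. As a quick numerical sanity check (an illustrative sketch, not part of the matching calculation), the identity can be verified for the $SU(2)$ fundamental representation, where $C_F=3\/4$ and $C_A=2$:

```python
import numpy as np

# Illustrative check of  sum_c T^c T^a T^c = (C_F - C_A/2) T^a  for the SU(2)
# fundamental representation, where T^a = sigma^a/2, C_F = 3/4 and C_A = 2.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [s / 2 for s in sigma]
CF, CA = 3.0 / 4.0, 2.0

# Compare the sandwiched sum against the Casimir combination for each a.
sandwich_ok = all(
    np.allclose(sum(T[c] @ T[a] @ T[c] for c in range(3)),
                (CF - CA / 2) * T[a])
    for a in range(3)
)
```

An analogous check with the Gell-Mann matrices divided by two works the same way for $SU(3)$, with $C_F=4\/3$ and $C_A=3$.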
The group factors and coupling constants have to be evaluated using their values for $G_i$ in the representation of the fermion.\n\n\nThe group theory factors have been written in terms of\n\\begin{eqnarray}\n\\mathcal{G}_f &=& i f_{abc}T^c\\,, \\nonumber \\\\ \n\\mathcal{G}_+&=& \\frac12 \\left\\{T^a,T^b\\right\\}\\,, \\nonumber \\\\ \n\\mathcal{G}_{TT} &=& \\frac12 f_{acg}f_{bch}\\left\\{T^g, T^h\\right\\}\\, .\n\\end{eqnarray}\n$\\mathcal{G}_+$ and $\\mathcal{G}_{TT}$ can not, in general, be written as expressions linear in the group generators $T^a$.\nFor $SU(N)$ groups in the fundamental representation,\n\\begin{eqnarray}\n\\mathcal{G}_+&=& \\frac12 d_{abc} T^c + \\frac{1}{2N}\\delta_{ab} \\,, \\nonumber \\\\ \n\\mathcal{G}_{TT} &=& \\frac14 C_A d_{abc} T^c + \\frac1 2 \\delta_{ab}\\,,\n\\label{sunform}\n\\end{eqnarray}\nbut these expressions are not valid for general $SU(N)$ representations. Some examples of $\\mathcal{G}_{TT}$ for higher $SU(N)$ representations are given in Appendix~A of Ref.~\\cite{Dashen:1994qi}.\nFor $U(1)$ groups, \n\\begin{eqnarray}\n\\mathcal{G}_f &=& 0\\,,\\nonumber \\\\ \n\\mathcal{G}_+&=& Y_i^2 \\,, \\nonumber \\\\ \n\\mathcal{G}_{TT} &=& 0 \\,,\n\\label{u1form}\n\\end{eqnarray}\nwhere $Y_i$ is the $U(1)$ charge. \n\n$\\Lambda_Q$ is defined by\n\\begin{eqnarray}\n\\Lambda_Q &=& \\sum_i \\alpha_i C_{F,Q}(i)\n\\label{eq58}\n\\end{eqnarray}\nwhere $C_{F,Q}(i)$ is the quadratic Casimir of the incoming fermion under gauge group $G_i$, and the sum is over all gauge groups. $C_A$ is the Casimir in the adjoint representation.\n\n\nFor the standard model, the high-energy amplitude is most conveniently written in terms of the gauge bosons of the unbroken gauge theory --- $W^a$ of $SU(2)$, $B$ of $U(1)$ and gluons $G^a$ of $SU(3)$, and we can use Eqs.~(\\ref{sunform},\\ref{u1form}) with $Y_i \\to Y_Q=1\/6$ and $N=2,3$. The $d$-symbol vanishes for $SU(2)$. 
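The closed forms in Eq.~(\\ref{sunform}) are easy to verify numerically. The following sketch (illustrative only, not part of the matching calculation) checks them for the $SU(2)$ fundamental representation, where $d_{abc}=0$, $N=2$ and $C_A=2$, so that $\\mathcal{G}_+=\\frac14\\delta_{ab}$ and $\\mathcal{G}_{TT}=\\frac12\\delta_{ab}$ times the identity matrix:

```python
import numpy as np

# Numerical check of Eq. (sunform) for the SU(2) fundamental representation,
# where d_abc = 0, N = 2 and C_A = 2, so G_+ = delta_ab/4 and G_TT = delta_ab/2.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [s / 2 for s in sigma]                      # generators T^a = sigma^a/2
eps = np.zeros((3, 3, 3))                       # structure constants f_abc
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

def anti(X, Y):                                 # anticommutator {X, Y}
    return X @ Y + Y @ X

def G_plus(a, b):                               # (1/2){T^a, T^b}
    return anti(T[a], T[b]) / 2

def G_TT(a, b):                                 # (1/2) f_acg f_bch {T^g, T^h}
    out = np.zeros((2, 2), dtype=complex)
    for c in range(3):
        for g in range(3):
            for h in range(3):
                out += eps[a, c, g] * eps[b, c, h] * anti(T[g], T[h]) / 2
    return out

plus_ok = all(np.allclose(G_plus(a, b), (a == b) * np.eye(2) / 4)
              for a in range(3) for b in range(3))
tt_ok = all(np.allclose(G_TT(a, b), (a == b) * np.eye(2) / 2)
            for a in range(3) for b in range(3))
```

For $SU(3)$ the $d_{abc}$ terms no longer vanish, so the analogous check requires the Gell-Mann matrices and the nonzero $d$-symbol.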
The factors in Eqs.~(\\ref{eq57},\\ref{eq58}) are\n\\begin{eqnarray}\n\\trcs{i}&=& \\left\\{ \\begin{array}{cc} \n0 & \\text{for}\\quad SU(3) \\\\\n\\frac12 n_S& \\text{for}\\quad SU(2) \\\\\n\\frac12n_S & \\text{for}\\quad U(1) \\,,\n \\end{array} \\right. \\nonumber \\\\ \n \\trwf{i} &=& \\left\\{ \\begin{array}{cc} \n2n_g & \\text{for}\\quad SU(3) \\\\\n2n_g& \\text{for}\\quad SU(2) \\\\\n\\frac{10}{3}n_g& \\text{for}\\quad U(1) \\,,\n \\end{array} \\right. \n\\end{eqnarray}\nand\n\\begin{eqnarray}\\label{eq:lambda}\n\\Lambda_Q &=& \\frac43 \\alpha_3 + \\frac34 \\alpha_2 +Y_Q^2 \\alpha_1\n\\end{eqnarray}\nwhere $n_g=3$ is the number of fermion generations, and $n_S=1$ is the number of Higgs doublets.\n\n\nGroup theory factors for the crossed graphs are given by $a \\leftrightarrow b$, so that $\\mathcal{G}_f \\to -\\mathcal{G}_f$ changes sign, $\\mathcal{G}_+ \\to \\mathcal{G}_+$ and $\\mathcal{G}_{TT} \\to \\mathcal{G}_{TT}$.\n\n\n\\begin{table*}\n\\begin{eqnarray*}\n\\renewcommand{\\arraystretch}{1.8} \n\\begin{array}{cc|cc|cc|cc}\n\\hline\nR_1 & 4\\pi \\alpha \\left(\\mathcal{G}_+- \\frac12\\mathcal{G}_f\\right) & \nR_2 & -4\\pi \\alpha \\mathcal{G}_f & \n&&\n& \\\\\n\nT1a & -\\alpha^2\\left(\\mathcal{G}_{TT}-\\frac14 C_A \\mathcal{G}_f \\right)&\nT1b & -\\frac12 \\alpha^2 C_A \\mathcal{G}_+ + \\alpha^2 \\mathcal{G}_{TT}&\nT1c & \\alpha\\left(\\frac14 \\alpha C_A -\\frac12\\Lambda_Q\\right)\\mathcal{G}_f +\\alpha^2 \\mathcal{G}_{TT} &\nT1d & 2\\alpha^2 \\mathcal{G}_{TT} \\\\[-5pt]\n&&\n&&\n& +\\alpha \\left( \\Lambda_Q -\\alpha C_A\\right)\\mathcal{G}_+ &\n& \\\\\n\nT2a & \\frac12\\alpha^2 C_A\\mathcal{G}_f &\nT2b & \\alpha^2 \\trcs i \\mathcal{G}_f &\nT2c & - \\frac12 \\alpha^2 C_A\\mathcal{G}_f &\nT2d & \\alpha^2 \\trwf i \\mathcal{G}_f \\\\\n\nT3a & \\frac12 \\alpha^2 C_A \\left(\\mathcal{G}_+- \\frac12\\mathcal{G}_f\\right) &\nT3b & \\alpha \\left( \\mathcal{G}_+- \\frac12\\mathcal{G}_f\\right) \\left[\\Lambda_Q-\\frac12 \\alpha C_A \\right] &\nT4a & 
\\frac12 \\alpha^2 C_A\\left( \\mathcal{G}_+- \\frac12\\mathcal{G}_f\\right) &\nT4b & \\alpha\\left(\\mathcal{G}_+- \\frac12\\mathcal{G}_f\\right) \\left[\\Lambda_Q - \\frac12 \\alpha C_A \\right] \\\\\n\n\nT5a & \\frac12 \\alpha^2 C_A \\mathcal{G}_f &\nT5b & \\alpha \\mathcal{G}_f \\left[ \\Lambda_Q - \\frac12 \\alpha C_A \\right] & & & & \\\\\n\nT6a & \\alpha^2 C_A\\mathcal{G}_f & \nT6b & \\alpha^2 C_A\\mathcal{G}_f &\nT6c & \\alpha^2 \\trcs i \\mathcal{G}_f &\nT6d & \\alpha^2 \\trwf i \\mathcal{G}_f \\\\\n\nT7 & \\alpha \\left(\\mathcal{G}_+- \\frac12\\mathcal{G}_f\\right) \\Lambda_Q & & & & & & \\\\\n\\hline\n\\end{array}\n\\end{eqnarray*}\n\\caption{Group theory coefficients $\\mathcal{C}_i$ for the production of two\n identical gauge bosons. The\n coefficients $\\bar{\\mathcal{C}}_i$ of the crossed diagrams are given by\n $\\mathcal{C}_i$ with $a \\leftrightarrow\n b$, under which $\\mathcal{G}_f \\to -\\mathcal{G}_f$, $\\mathcal{G}_+ \\to \\mathcal{G}_+$ and $\\mathcal{G}_{TT} \\to \\mathcal{G}_{TT}$. The notation is explained in the main text.}\\label{tab:VV}\n\\end{table*}\n\n\nGroup theory factors for the production of gauge bosons in two different gauge groups are given in Table~\\ref{tab:GW}.\nThe gauge bosons with momentum $p_4$ and $p_3$ are $V^a_i$ and $V^b_j$, respectively. We define\n\\begin{eqnarray}\n\\rho = \\sqrt{\\alpha_{i} \\alpha_{j}}\\ T^a_i T^b_j\\, .\n\\end{eqnarray}\nThe factors for the crossed graphs are given by $i \\leftrightarrow j$. This gives the group theory factors for the reactions $\\bar{q} q \\to G^a(p_4) W^b(p_3)$, $\\bar{q}\nq \\to G^a(p_4) B(p_3)$ and $\\bar{q} q \\to W^a(p_4) B(p_3)$. 
The reaction $\\bar{q} q \\to W^a(p_3) G^b(p_4)$\nis related to $\\bar{q} q \\to G^a(p_4) W^b(p_3)$ by exchanging the final gauge bosons, i.e.\\ by the swap\n$i,a,p_4 \\leftrightarrow j,b,p_3$.\n\n\n\n\\begin{table*}\n\\begin{eqnarray*}\n\\renewcommand{\\arraystretch}{1.8} \n\\begin{array}{cc|cc|cc|cc}\n\\hline\nR_1 & 4\\pi \\rho & \nR_2 & 0 & \n&&\n& \\\\\n\nT1a & 0 &\nT1b & -\\frac12\\rho \\alpha_{i} C_A(i) &\nT1c & \\rho \\left[ \\Lambda_Q- \\frac12\\alpha_{i}\n C_A(i)-\\frac12 \\alpha_{j} C_A(j) \\right] &\nT1d & 0 \\\\\n\nT2a & 0 &\nT2b & 0 &\nT2c & 0 &\nT2d & 0 \\\\\n\nT3a &\\frac12 \\rho \\alpha_{i} C_A(i) &\nT3b & \\rho \\left[\n \\Lambda_Q- \\frac12 \\alpha_{i} C_A(i) \\right] &\nT4a &\\frac12 \\rho \\alpha_{j} C_A(j) &\nT4b & \\rho \\left[\n \\Lambda_Q- \\frac12 \\alpha_{j} C_A(j) \\right] \\\\\n\nT5a & 0 & T5b & 0 & & & & \\\\\n\nT6a & 0 &\nT6b & 0 &\nT6c & 0 &\nT6d & 0 \\\\\n\nT7 & \\rho \\Lambda_Q & & & & & &\\\\\n\n\\hline\n\\end{array}\n\\end{eqnarray*}\n\\caption{Group theory coefficients $\\mathcal{C}_i$ for the production of two\n different gauge bosons. Here $\\rho = \\sqrt{\\alpha_{i} \\alpha_{j}} T^a_i T^b_j$. The\n coefficients $\\bar{\\mathcal{C}}_i$ of the crossed diagrams are given by\n $\\mathcal{C}_i$ with $i \\leftrightarrow j$.}\\label{tab:GW}\n\\end{table*}\n\n\n\n\n\\section{Scalar production}\\label{sec:scalar}\n\n\nThe notation for scalar production is analogous to that for vector boson production. The full\namplitude is given by the sum of all diagrams $S_i$ with \ngroup theory factor $\\mathcal{C}(S_i)$,\n\\begin{equation}\\label{eq:amps}\n\\mathcal{M} = \\sum_i \\mathcal{C}(S_i)S_i\\, .\n\\end{equation}\nAs in the vector boson case, $\\bar S$ and $\\bar \\mathcal{C}$ denote the crossed\ndiagrams and group theory factors. 
\nThe Dirac matrix element is \n\\begin{eqnarray}\n\\mathcal{M}_\\phi = \\bar{v}(p_2)\\slashed{p}_4 P_L u(p_1)\\, .\n\\end{eqnarray}\n\nThe diagrams are classified in terms of the topologies\ngiven in Figure~\\ref{fig:topos}. Exchanging the two final state scalars gives\n\\begin{eqnarray}\n\\mathcal{M}_\\phi &\\leftrightarrow& -\\mathcal{M}_\\phi\\,,\\nonumber \\\\ \nt &\\leftrightarrow& u\\, .\n\\label{scross}\n\\end{eqnarray}\n\n\n\\subsection{Tree level amplitude}\n\nThe tree level diagram is shown in Figure~\\ref{fig:trees}.\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[height=1.5cm]{st.eps}& \\\\\n$S_{1}$ & \n\\end{tabular}\n\\end{center}\n\\caption{Tree level diagram for the production of two scalars. See caption of Figure~\\ref{fig:tree}.}\\label{fig:trees}\n\\end{figure}\n\n\nAt tree level, one finds \n\\begin{eqnarray}\n\\mathcal{C}(S_1) &=& \\sum_i g_i^2 T^a_i \\otimes T^a_i \\,, \\nonumber \\\\ \nS_1 &=& \\frac{2}{s} \\, \\mathcal{M}_\\phi \\, .\n\\end{eqnarray}\n\nFor the group theory factor $X \\otimes Y$, $X$ acts on the initial fermion space and $Y$ on the final scalar particle space.\n\n\\subsection{Topology T1}\n\nThe diagrams of topology T1 are shown in Figure~\\ref{fig:t1s}.\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[height=1.5cm]{S1.eps}&\\includegraphics[height=1.5cm]{S2.eps} \\\\\n$T1a$ & $T1b$\n\\end{tabular}\n\\end{center}\n\\caption{Diagram with topology T1. 
See caption of Figure~\\ref{fig:tree}.}\\label{fig:t1s}\n\\end{figure}\n\n\\subsubsection{T1a}\n\nThis diagram vanishes.\n\n\\subsubsection{T1b}\n\n\\begin{eqnarray}\n\\mathcal{C}(S_{T1b}) &=& \\sum_{ij} \\frac{g_i^2 g_j^2}{16\\pi^2} T^a_i T^b_j \\otimes\nT^b_j T^a_i \\nonumber \\\\ \nS_{T1b} &=& \\mathcal{M}_\\phi \\Biggl[ -\\frac{9}{s \\epsilon^2}+\\frac{1}{s\n \\epsilon}\\left(\\mathsf{L}_{s}+8\\mathsf{L}_{t}-2\\right)\n -\\mathsf{L}_{s}^2\\frac{1}{2u}\\left(\\frac{7t}{s}+3\\right)\\nonumber \\\\ \n&&+\\frac{2}{u}\\mathsf{L}_{t}^2+\\mathsf{L}_{s}\\mathsf{L}_{t}\\frac{4}{u}\\left(\\frac{t-u}{s}\\right) +\\mathsf{L}_{s}\\frac{2}{s}-\\frac{4}{s}\n\\nonumber \\\\ &&-\\frac{\\pi^2}{4u}\\left(11+\\frac{19t}{s}\\right)\\Biggr] \n\\end{eqnarray}\n\nThe box diagram $S_{T1b}$ is the only one where a crossed diagram\nexists for scalar production. Exchanging the final state scalars gives the crossed-box\ngraph. The amplitude is given by applying Eq.~(\\ref{scross}) to $S_{T1b}$.\nFor the group theory factor, one finds\n\\begin{eqnarray}\n\\mathcal{C}(\\bar S_{T1b}) &=& \\sum_{ij} \\frac{g_i^2 g_j^2}{16\\pi^2} T^a_i T^b_j\n\\otimes T^a_i T^b_j \\, .\n\\end{eqnarray}\n\n\n\\subsection{Topology T2}\n\nThe diagrams of topology T2 are shown in Figure~\\ref{fig:t2s}. There are no crossed diagrams with this topology.\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[height=1.5cm]{S3.eps}&\\includegraphics[height=1.5cm]{S14.eps} \\\\\n$T2a$ & $T2b$ \\\\[10pt]\n\\includegraphics[height=1.5cm]{S10.eps}&\\includegraphics[height=1.5cm]{S13.eps} \\\\\n$T2c$ & $T2d$ \\\\[10pt]\n\\includegraphics[height=1.5cm]{S12.eps} & \\\\\n$T2e$ & \n\\end{tabular}\n\\end{center}\n\\caption{Diagram with topology T2. 
See caption of Figure~\\ref{fig:tree}.}\\label{fig:t2s}\n\\end{figure}\n\n\\subsubsection{T2a}\n\n\\begin{eqnarray}\n\\mathcal{C}(S_{T2a}) &=& \\sum_{i,j} \\frac{g_i^2 g_j^2}{16\\pi^2} T^a_i \\otimes T_j^b\nT^a_i T_j^b\\\\\nS_{T2a} &=& \\frac{ \\mathcal{M}_\\phi}{s}\n\\biggl[-\\frac{4}{\\epsilon^2}-\\frac{8}{\\epsilon}+\\frac{4}{\\epsilon}\\mathsf{L}_{s}-2\\mathsf{L}_{s}^2 + 8\\mathsf{L}_{s}-16+\\frac{\\pi^2}{3}\n\\biggr] \\nonumber\n\\end{eqnarray}\n\n\\subsubsection{T2b}\n\n\\begin{eqnarray}\n\\mathcal{C}(S_{T2b}) &=& \\sum_{i} \\frac{g_i^4 }{16\\pi^2} \\frac{C_A(i)}{2} T^a_i\n\\otimes T^a_i \\\\\nS_{T2b} &=& \\frac{\\mathcal{M}_\\phi}{s} \\biggl[\\frac{1}{\\epsilon^2}-\\frac{2}{\\epsilon}\n -\\frac{1}{\\epsilon}\\mathsf{L}_{s}+\\frac12\\mathsf{L}_{s}^2 + 2\\mathsf{L}_{s}-4-\\frac{\\pi^2}{12} \\biggr] \\nonumber\n\\end{eqnarray}\n\n\\subsubsection{T2c, T2d, T2e}\n\nThese diagrams vanish. Graph T2d is the only diagram involving the $\\lambda \\phi^4$ coupling.\n\n\\subsection{Topologies T3, T4}\n\nThere are no diagrams with topology T3 and T4.\n\n\\subsection{Topology T5}\n\nThe diagrams of topology T5 are shown in Figure~\\ref{fig:t5s}. There are no crossed graphs with this topology.\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[height=1.5cm]{S4.eps}&\\includegraphics[height=1.5cm]{S5.eps} \\\\\n$T5a$ & $T5b$\n\\end{tabular}\n\\end{center}\n\\caption{Diagram with topology T5. 
See caption of\n Figure~\\ref{fig:tree}.}\\label{fig:t5s}\n\\end{figure}\n\n\n\\subsubsection{T5a}\n\n\\begin{eqnarray}\n\\mathcal{C}(S_{T5a}) &=& \\sum_{i,j} \\frac{g_i^2 g_j^2}{16\\pi^2} T^b_i T^a_j T^b_i \\otimes T^a_j \\,, \\\\\nS_{T5a} &=& \\frac{\\mathcal{M}_\\phi}{s}\\left[-\\frac{4}{\\epsilon ^2} +\\frac{-6+4\n \\mathsf{L}_{s}}{ \\epsilon }-2 \\mathsf{L}_{s}^2+6 \\mathsf{L}_{s}+\\frac{\\pi ^2}{3 }-16 \\right] \\nonumber\n\\end{eqnarray}\n\n\\subsubsection{T5b}\n\n\\begin{eqnarray}\n\\mathcal{C}(S_{T5b}) &=& \\sum_i \\frac{g_i^4}{16\\pi^2}\\frac 12 C_A(i) T^a_i \\otimes T^a_i \\nonumber \\\\ \nS_{T5b} &=& \\frac{\\mathcal{M}_\\phi}{s}\\left[-\\frac{2}{ \\epsilon }+2\\mathsf{L}_{s} -4 \\right] \n\\end{eqnarray}\n\n\n\\subsection{Topology T6}\n\nThe diagrams of topology T6 are shown in Figure~\\ref{fig:t6s}. There are no crossed graphs with this topology.\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[height=1.5cm]{S7.eps}&\\includegraphics[height=1.5cm]{S8.eps} \\\\\n$T6a$ & $T6b$\\\\[10pt]\n\\includegraphics[height=1.5cm]{S9.eps}&\\includegraphics[height=1.5cm]{S6.eps} \\\\\n$T6c$ & $T6d$\n\\end{tabular}\n\\end{center}\n\\caption{Diagrams of topology T6. 
See caption of\n Figure~\\ref{fig:tree}.}\\label{fig:t6s}\n\\end{figure}\n\n\n\n\\subsubsection{T6a}\n\n\\begin{eqnarray}\n\\mathcal{C}(S_{T6a}) &=& \\frac{g_i^4}{16\\pi^2} C_A(i) T_i^a \\otimes T_i^a \\nonumber \\\\ \nS_{T6a} &=& \\frac{\\mathcal{M}_\\phi}{s} \\biggl[ \\frac{19}{6\\epsilon} - \\frac{19}{6} \\mathsf{L}_{s} + \\frac{58}{9} \\biggr] \n\\end{eqnarray}\n\n\\subsubsection{T6b}\n\n\\begin{eqnarray}\n\\mathcal{C}(S_{T6b}) &=& \\frac{g_i^4}{16\\pi^2} C_A(i) T_i^a \\otimes T_i^a \\nonumber \\\\ \nS_{T6b} &=& \\frac{\\mathcal{M}_\\phi}{s} \\biggl[ \\frac{1}{6\\epsilon}- \\frac{1}{6} \\mathsf{L}_{s} + \\frac{4}{9} \\biggr] \n\\end{eqnarray}\n\n\n\\subsubsection{T6c}\n\n\\begin{eqnarray}\n\\mathcal{C}(S_{T6c}) &=& \\sum_k \\frac{g_i^2 g_k^2}{16\\pi^2} T_i^a \\otimes T^d_k\\text{Tr}_{CS}(T^a_i T^d_k) \\nonumber \\\\ \nS_{T6c} &=& \\frac{\\mathcal{M}_\\phi}{s} \\biggl[ -\\frac{2}{3\\epsilon} + \\frac{2}{3} \\mathsf{L}_{s}- \\frac{16}{9} \\biggr] \n\\end{eqnarray}\n\n\\subsubsection{T6d}\n\n\\begin{eqnarray}\n\\mathcal{C}(S_{T6d}) &=& \\sum_k \\frac{g_i^2 g_k^2}{16\\pi^2} T_i^a \\otimes T^d_k \\text{Tr}_{WF}(T^a_i T^d_k)\\nonumber \\\\ \nS_{T6d} &=& \\frac{\\mathcal{M}_\\phi}{s} \\biggl[ -\\frac{4}{3\\epsilon} - \\frac{20}{9} +\n \\frac{4}{3} \\mathsf{L}_{s} \\biggr]\n\\end{eqnarray}\n\n\n\n\n\\section{$q \\bar{q} \\to \\phi^\\dagger \\phi$ in the Standard Model}\\label{sec:smscalar}\n\nGroup theory factors for scalar production are given in Table~\\ref{tab:ss}. 
They depend on group invariants which are listed below, followed by their values in the standard model.\n\\begin{eqnarray}\n\\Phi_B &=& \\sum_{ij} \\alpha_i \\alpha_j T_i^a T_j^b \\otimes T_j^b T_i^a \\nonumber \\\\ \n&=& \\left[\\frac12\\alpha_2^2 +2 \\alpha_1 \\alpha_2 Y_Q Y_\\phi\\right] t^a \\otimes t^a\\nonumber \\\\ \n&&+ \\left[ \\frac{3}{16} \\alpha_2^2 +\\alpha_1^2 Y_Q^2 Y_\\phi^2\\right] \\openone \\otimes \\openone \\nonumber \\\\ \n\\Phi_C &=& \\sum_{ij} \\alpha_i \\alpha_j T_i^a T_j^b \\otimes T_i^a T_j^b \\nonumber \\\\ \n&=& \\left[-\\frac12\\alpha_2^2 +2 \\alpha_1 \\alpha_2 Y_Q Y_\\phi\\right] t^a \\otimes t^a\\nonumber \\\\ \n&& + \\left[ \\frac{3}{16} \\alpha_2^2 +\\alpha_1^2 Y_Q^2 Y_\\phi^2\\right] \\openone \\otimes \\openone \\nonumber \\\\ \n \\Phi_1 &=& \\sum_i \\alpha_i T_i^a \\otimes T_i^a \\nonumber \\\\ \n&=& \\alpha_2 t^a \\otimes t^a +\\alpha_1 Y_Q Y_\\phi \\openone \\otimes \\openone \\nonumber \\\\ \n\\Phi_2 &=& \\sum_i \\alpha_i^2 C_A(i) T_i^a \\otimes T_i^a = 2 \\alpha_2^2 t^a \\otimes t^a \\nonumber \\\\ \n\\Phi_{CS} &=& \\sum_i \\alpha_i^2 \\trcs{i} T_i^a \\otimes T_i^a \\nonumber \\\\ \n&=& \\frac12\\alpha_2^2 n_S t^a \\otimes t^a + 2 \\alpha_1^2 n_S Y_Q Y_\\phi^3 \\openone \\otimes \\openone \\nonumber \\\\ \n\\Phi_{WF} &=& \\sum_i \\alpha_i^2 \\trwf{i} T_i^a \\otimes T_i^a \\nonumber \\\\ \n&=& 2\\alpha_2^2 n_g t^a \\otimes t^a + \\frac{10}{3}\\alpha_1^2 n_g Y_Q Y_\\phi \\openone \\otimes \\openone \\nonumber \\\\ \n\\Lambda_{Q} &=& \\sum_i \\alpha_i C_{F,Q}(i) = \\frac43 \\alpha_3+ \\frac{3}{4} \\alpha_2 + \\alpha_1 Y_Q^2 \\nonumber \\\\ \n\\Lambda_{\\phi} &=& \\sum_i \\alpha_i C_{F,\\phi}(i) = \\frac{3}{4} \\alpha_2 + \\alpha_1 Y_\\phi^2\n\\end{eqnarray}\nHere $Y_\\phi=1\/2$ is the hypercharge of the Higgs scalar, $C_{F,\\phi}$ is the Casimir in the representation of the scalar field, and $\\trcs{i}$ and $\\trwf{i}$ are defined in Eq.~(\\ref{eq57}).\n\n\\subsection{Top quark loops}\n\nTop quark loops have to be included in the high 
scale matching for scalar production in the standard model, since\nthe top-quark Yukawa coupling is comparable to the gauge couplings.\n\nSince this contribution to the high-scale matching depends on the\ndetails of the theory, top quark loops are only computed for\nthe standard model. Here $y_t$ is the top quark\nYukawa coupling and $Y(t_L)$ and $Y(t_R)$ are used for the $U(1)$\ncharges of the left- and right-handed top quarks, respectively. Note\n$Y(t_R)-Y(t_L)=Y_\\phi=1\/2$.\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[height=1.5cm]{Z1.eps}& \\\\\n$S_{\\text{top}}$&\n\\end{tabular}\n\\end{center}\n\\caption{Top quark loop contribution to the amplitude. See also caption of \n Figure~\\ref{fig:tree}.}\\label{fig:top}\n\\end{figure}\n\n\nThe relevant diagram is shown in Figure~\\ref{fig:top}, and has to be added to the amplitude for\nscalar production, Eq.~(\\ref{eq:amps}). The graph with $t_L$ coupling to the gauge boson is\n\\begin{eqnarray}\n&&3y_t^2 \\frac{\\alpha_1}{4\\pi}Y_Q Y(t_L)\\frac{\\mathcal{M}_\\phi}{s} \\Biggl[-\\frac{2}{\\epsilon} +2 \\mathsf{L}_{s} -4\\Biggr]\\\n\\openone \\otimes \\openone \\nonumber \\\\ \n&&-3y_t^2 \\frac{\\alpha_2}{4\\pi} \\frac{\\mathcal{M}_\\phi}{s}\\Biggl[-\\frac{2}{\\epsilon}+2 \\mathsf{L}_{s} -4\\Biggr] \nt^a \\otimes t^a\n\\end{eqnarray}\nand the graph with $t_R$ coupling to the gauge boson is\n\\begin{eqnarray}\n- 3y_t^2\\frac{\\alpha_1}{4\\pi } Y_Q Y(t_R) \\frac{\\mathcal{M}_\\phi}{s}\\Biggl[-\\frac{2}{\\epsilon}+2 \\mathsf{L}_{s} -4\\Biggr]\n\\openone \\otimes \\openone \n\\end{eqnarray}\nAdding the two contributions gives\n\\begin{eqnarray}\nS_{\\text{top}} &=& -3y_t^2\\frac{ \\alpha_1 }{4\\pi} Y_Q Y_\\phi\n\\frac{\\mathcal{M}_\\phi}{s}\\Biggl[-\\frac{2}{\\epsilon} +2 \\mathsf{L}_{s} -4\\Biggr]\\openone \\otimes \\openone \\nonumber \\\\ \n&&-3y_t^2\\frac{ \\alpha_2 }{4\\pi} \\frac{\\mathcal{M}_\\phi}{s}\\Biggl[-\\frac{2}{\\epsilon} +2\n \\mathsf{L}_{s} -4\\Biggr] t^a \\otimes t^a \\, .\n 
\\label{tloop}\n\\end{eqnarray}\nwhere we have used $Y(t_R)-Y(t_L)=Y_\\phi$.\n\n\n\n\n\n\\begin{table}\n\\begin{eqnarray*}\n\\renewcommand{\\arraystretch}{1.8} \n\\begin{array}{cc|cc}\n\\hline\nS_1 & 4\\pi \\Phi_1 & \\\\\n\nT1b & \\Phi_B &\n\n\\bar T1b & \\Phi_C \\\\\n\nT2a & -\\frac12 \\Phi_2 + \\Lambda_{\\phi} \\Phi_1&\nT2b & \\frac12 \\Phi_2 \\\\ \n\nT5a & -\\frac12 \\Phi_2 +\\Lambda_{Q} \\Phi_1 &\nT5b & \\frac12 \\Phi_2 \\\\\n\nT6a & \\Phi_2 &\nT6b & \\Phi_2 \\\\\nT6c &\\Phi_{CS} &\nT6d & \\Phi_{WF} \\\\\n\n\\hline\n\\end{array}\n\\end{eqnarray*}\n\\caption{Group theory coefficients for the production of two\n charged scalars. The notation is explained in the main text.}\\label{tab:ss}\n\\end{table}\n\n\n\n\\section{Consistency Checks}\\label{sec:check}\n\nThere is a consistency check on our matching coefficients, which follows from the fact that $S$-matrix elements\nare independent of the scale $\\mu$ at which one matches from the full theory to SCET. Consider, for example,\nelectroweak gauge boson production by left-handed quark doublets. There are five SCET operators which contribute~\\cite{Chiu:2009ft,Chiu:2009mg}\n\\begin{eqnarray}\nO_1 &=& \\bar Q^{(u)}_2 Q^{(u)}_1 W^a_4W^a_3\\nonumber \\\\ \nO_2 &=& \\bar Q^{(u)}_2 t^c Q^{(u)}_1 i \\epsilon^{abc} W^a_4 W^b_3 \\nonumber \\\\ \nO_3 &=& \\bar Q^{(u)}_2 t^a Q^{(u)}_1 B_4 W^a_3\\nonumber \\\\ \nO_4 &=& \\bar Q^{(u)}_2 t^a Q^{(u)}_1 W^a_4 B_3 \\nonumber \\\\ \nO_5 &=& \\bar Q^{(u)}_2 Q^{(u)}_1 B_4 B_3 \n\\label{185.b}\n\\end{eqnarray}\nwith matching coefficients $C_i(\\mu)$ at the matching scale $\\mu$. 
Write the coefficients as\n\\begin{eqnarray}\nC_i &=& C_i^{(0)} + C_i^{(1)} + \\ldots\n\\end{eqnarray}\nwhere $ C_i^{(0)}$ are the tree-level coefficients, $ C_i^{(1)}$ are the one-loop contributions, etc.\nThen $\\mu$-independence implies the constraint\n\\begin{eqnarray}\n\\sum_k \\mu \\frac{{\\rm d}\\alpha_k }{{\\rm d}\\mu} \\frac{\\partial C_i^{(0)}}{\\partial \\alpha_k} + \\mu \\frac{\\partial C_i^{(1)}}{\\partial \\mu} &=& \\gamma^{(1)}_{ij} C_j^{(0)}\n\\label{conscond}\n\\end{eqnarray}\nwhere the sum on $k$ is over the three standard model gauge groups, and $\\gamma^{(1)}_{ij} $ is the one-loop anomalous dimension in SCET computed in Refs.~\\cite{Chiu:2009ft,Chiu:2009mg}.\n\nThe SCET anomalous dimension is\n\\begin{eqnarray}\n\\bm{\\gamma} &=& \\left(2 \\gamma_Q+ 2 \\gamma_V\\right) +\n\\bm{\\gamma}_S\n\\end{eqnarray}\nwhere $\\gamma_Q$ and $\\gamma_V=\\gamma_{W,B}$ are the collinear anomalous dimensions of $Q$, $W$, and $B$, and $\\gamma_S$ is the soft anomalous dimension~\\cite{Chiu:2009ft,Chiu:2009mg}. 
Using the values\nin Refs.~\\cite{Chiu:2009ft,Chiu:2009mg} gives the expressions below.\n\\begin{widetext}\n\n\nThe anomalous dimension is ($\\mathsf{L}=\\log s\/\\mu^2$)\n\\begin{eqnarray}\n\\bm{\\gamma} &=& 2 \\gamma_Q \\openone + \\left[ \\begin{array}{ccccc}\n2 \\gamma_W & 0 & 0 & 0 & 0 \\\\\n0 & 2 \\gamma_W & 0 &0 & 0\\\\\n0 & 0 & \\gamma_W+\\gamma_B & 0 & 0 \\\\\n0 & 0 & 0 & \\gamma_W+\\gamma_B & 0 \\\\\n0 & 0 & 0 & 0 & 2 \\gamma_B \\\\\n\\end{array} \\right]\\nonumber \\\\ \n&&+ \\frac{\\alpha_1}{\\pi} \\left(-i\\pi Y_Q^2\\right)+\\frac{\\alpha_s}{\\pi}\\left(-\\frac43i \\pi\\right) +\\frac{\\alpha_2}{\\pi}\\left[ \\begin{array}{ccccc}\n-\\frac{11}{4}i \\pi & U-T & 0 & 0 & 0 \\\\\n2(U-T) & -\\frac{11}{4}i \\pi +T+U& 0 &0 & 0\\\\\n0 & 0 & -\\frac{7}{4}i \\pi + T + U & 0 & 0 \\\\\n0 & 0 & 0 & -\\frac{7}{4}i \\pi + T + U & 0 \\\\\n0 & 0 & 0 & 0 & -\\frac{3}{4}i \\pi \\\\\n\\end{array} \\right]\n\\label{adW}\n\\end{eqnarray}\n where the first line is the collinear contribution, and the second line is the soft contribution. \n \n \\end{widetext}\n \nHere\n\\begin{eqnarray}\n\\gamma_Q &=& \\left( \\frac{\\alpha_s}{4\\pi} \\frac43 +\\frac{\\alpha_2}{4\\pi} \\frac34 +\\frac{\\alpha_1}{4\\pi} Y_Q^2 \\right) \\left( 2 \\log \\frac{s}{\\mu^2}-3\\right)\\nonumber \\\\ \n\\gamma_W &=& \\frac{\\alpha_2}{4\\pi} \\left( 4 \\log \\frac{s}{\\mu^2}-\\frac{19}{6}\\right)\\nonumber \\\\ \n\\gamma_B &=& \\frac{\\alpha_1}{4\\pi} \\left( \\frac{41}{6}\\right)\\nonumber \\\\ \n\\end{eqnarray}\n$T=\\log(-t\/s)-i \\pi$, $U=\\log(-u\/s)-i\\pi$, and $Y_Q=1\/6$.\n\nThe consistency condition Eq.~(\\ref{conscond}) is satisfied using our results for the matching coefficients and Eq.~(\\ref{adW}). 
This only checks the relation between the $\\log \\mu$ terms at one-loop and the tree-level coefficients.\n\nFor scalar production by doublet quarks, the EFT operators are\n\\begin{eqnarray}\nO_1 &=& \\bar Q^{(u)} t^a Q^{(u)} \\phi^\\dagger_4 t^a \\phi_3\\nonumber \\\\ \nO_2 &=& \\bar Q^{(u)} Q^{(u)} \\phi^\\dagger_4 \\phi_3\n\\end{eqnarray}\nand the EFT anomalous dimension is~\\cite{Chiu:2009ft,Chiu:2009mg}\n\n\\begin{eqnarray}\n\\bm{\\gamma} &=& \\left(2 \\gamma_Q + 2 \\gamma_\\phi\\right)\\openone +\n\\bm{\\gamma}_S\\nonumber \\\\ \n&& + \\frac{\\alpha_s}{\\pi} \\left( -\\frac43 i \\pi \\openone \\right)\\nonumber \\\\ \n&&+ \\frac{\\alpha_2}{\\pi}\\left(-\\frac32 i \\pi \\openone + \\left[ \\begin{array}{cc} \nT+U & 2(T-U) \\\\ \\frac38(T-U) & 0 \\end{array} \\right]\\right)\\nonumber \\\\ \n&&+\\frac{\\alpha_1}{\\pi} \\left(2 Y_Q Y_\\phi (T-U) - i \\pi (Y_Q^2+Y_\\phi^2)\\right)\n\\end{eqnarray}\nwhere $Y_\\phi=1\/2$, and\n\\begin{eqnarray}\n\\gamma_\\phi &=&\\left(\\frac34 \\frac{\\alpha_2}{4\\pi}+\\frac {1} {4} \\frac{\\alpha_1}{4\\pi} \\right)\\left(2 \\log \\frac{s}{\\mu^2}-4\\right)+3 \\frac{y_t^2}{16\\pi^2}\\,.\n\\end{eqnarray}\nThe consistency condition Eq.~(\\ref{conscond}) is again satisfied by our matching results. Note that the $y_t^2$ term in $\\gamma_\\phi$ is consistent with the top-quark loop contribution to the matching Eq.~(\\ref{tloop}).\n\n\\section{Relation between the $S$-Matrix and the Matching Coefficient}\\label{sec:smatrix}\n\nThe results in this paper are for the on-shell diagrams with dimensional regularization used to regulate the ultraviolet and infrared divergences, and all low-energy scales set to zero. 
The total amplitude has the form\n\\begin{eqnarray}\nA &=& \\alpha \\mu^{2\\epsilon} T + \\alpha^2 \\mu^{2 \\epsilon} L + \\ldots\\,,\n\\end{eqnarray}\nwhere $T$ is the tree amplitude, and the one-loop amplitude $L$ contains $1\/\\epsilon$ UV and IR divergences,\n\\begin{eqnarray}\nL &=& \\frac{C_2}{\\epsilon^2}+\\frac{C_1+D_1}{\\epsilon}+\\frac{C_2}{\\epsilon} \\log \\frac{\\mu^2}{s}\\nonumber \\\\ \n&&+\\left(C_1+D_1\\right) \\log \\frac{\\mu^2}{s} +\\frac12 C_2 \\log^2 \\frac{\\mu^2}{s}+ F(s,t)\\, .\\nonumber \\\\ \n\\end{eqnarray}\nHere $C_{1,2}$ are coefficients of the IR divergences, and $D_1$ is the coefficient of the UV divergence.\nSince we have set scaleless integrals to zero, we cannot distinguish IR and UV divergences, and our calculation thus gives $C_1+D_1$, but not each term separately. Note that the coefficient of the $\\log \\mu^2$ term is proportional to the sum of the $1\/\\epsilon$ UV plus IR singularities.\n\nTo this must be added the counterterm graphs, which cancel the ultraviolet divergence $D_1\/\\epsilon$, to give the renormalized $S$-matrix element\n\\begin{eqnarray}\nS &=& \\alpha \\mu^{2\\epsilon} T + \\alpha^2 \\mu^{2 \\epsilon}\\Biggl\\{\\frac{C_2}{\\epsilon^2}+\\frac{C_1}{\\epsilon}+\\frac{C_2}{\\epsilon} \\log \\frac{\\mu^2}{s}\\nonumber \\\\ \n\t&&+\\left(C_1+D_1\\right) \\log \\frac{\\mu^2}{s} +\\frac12 C_2 \\log^2 \\frac{\\mu^2}{s}+ F(s,t)\\Biggr\\}\\nonumber \\\\ \n&=& A - \\alpha^2 \\mu^{2 \\epsilon}\\frac{D_1}{\\epsilon}\n\\label{93}\n\\label{smatrix}\n\\end{eqnarray}\nThe counterterm graphs must cancel all the UV singularities, since the theory is renormalizable, so there is no overall $1\/\\epsilon$ divergence times a $q \\bar q VV$ or $q \\bar q \\phi^\\dagger\\phi$ operator. Note that the counterterm graphs cancel $D_1\/\\epsilon$, but not $D_1 \\log \\mu^2\/s$.\n\nThe renormalized $S$-matrix has $1\/\\epsilon$ divergences which are purely IR. 
These IR divergences lead to IR-divergent cross-sections for parton-parton scattering in the massless theory. In QCD, the IR divergences cancel when computing a physical process involving IR-safe observables. A textbook example is the cancellation of IR divergences between $e^+ e^- \\to q \\bar q$ at one-loop, and the tree-level rate for $e^+ e^- \\to q \\bar q g$, to give an IR-safe cross-section for $e^+ e^- \\to \\text{hadrons}$ at order $\\alpha_s$.\n\n\n The renormalized $S$-matrix satisfies the renormalization group equation\n\\begin{eqnarray}\n\\left[ \\mu \\frac{\\partial}{\\partial \\mu}+\\beta(g,\\epsilon) \\frac{\\partial}{\\partial g} \\right] S &=& 0\n\\label{srge}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n\\beta(g,\\epsilon) &=& - \\epsilon g - \\frac{b_0 g^3}{16\\pi^2} + \\ldots\n\\end{eqnarray}\nis the $\\beta$-function in $4-2\\epsilon$ dimensions, with $b_0=11 C_A\/3 -2\\trwf{}\/3-\\trcs{} \/3$. Applying Eq.~(\\ref{srge}) to Eq.~(\\ref{smatrix}) shows that\n\\begin{eqnarray}\nD_1 &=& \\frac{b_0}{4\\pi} T\\, .\n\\label{dvalue}\n\\end{eqnarray}\nThe one-loop counterterm contribution is equal to the one-loop $\\beta$-function times the tree-level amplitude. Thus we do not need to explicitly compute the counterterm graphs.\nEquation~(\\ref{dvalue}) and Eq.~(\\ref{93}) give\n\\begin{eqnarray} \nS &=& \\alpha \\mu^{2\\epsilon} T + \\alpha^2 \\mu^{2 \\epsilon} L - \\alpha^2 \\mu^{2 \\epsilon}\\frac{b_0}{4\\pi} \n\\frac{1}{\\epsilon} T + \\ldots\\,,\n\\label{expr}\n\\end{eqnarray}\nwhich relates the renormalized $S$-matrix to the matching condition. An expression analogous to Eq.~(\\ref{expr}) can be derived to higher orders. Eq.~(\\ref{expr}) is the renormalized $S$-matrix, so all $1\/\\epsilon$ singularities are IR divergences, which are present in $S$-matrix elements for massless particles.\n\nUsing Eq.~(\\ref{expr}), we have computed the $q \\bar q \\to g g$ cross-section and verified that it agrees with Ellis and Sexton~\\cite{Ellis:1985er}. 
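The structure of this argument can be checked symbolically. The sketch below (using sympy; $T$, $C_1$, $C_2$, $F$ and $b_0$ are generic placeholders, not standard-model quantities) verifies two statements: the $\\log \\mu^2\/s$ terms of $L$ are the $\\epsilon$-expansion of an overall $(\\mu^2\/s)^\\epsilon$ multiplying the pole terms, and applying the operator in Eq.~(\\ref{srge}) to the renormalized amplitude of Eq.~(\\ref{93}) forces the subtracted pole coefficient in Eq.~(\\ref{expr}) to be $b_0 T\/(4\\pi)$.

```python
import sympy as sp

# Two toy symbolic checks of this section's structure.  T, C1, C2, D1, F, b0
# are generic placeholders, not standard-model quantities.
g, mu, eps, s = sp.symbols('g mu epsilon s', positive=True)
T, C1, C2, D1, F, b0 = sp.symbols('T C1 C2 D1 F b0')
alpha = g**2 / (4 * sp.pi)
ell = sp.log(mu**2 / s)

# (1) The log(mu) terms of L are the epsilon expansion of an overall
#     (mu^2/s)^eps multiplying the pole terms.
poles = C2 / eps**2 + (C1 + D1) / eps
prod = sp.expand(sp.exp(eps * ell).series(eps, 0, 3).removeO() * poles)
L_logs = (poles + C2 * ell / eps + (C1 + D1) * ell
          + sp.Rational(1, 2) * C2 * ell**2)
expansion_ok = sp.limit(sp.expand(prod - L_logs), eps, 0) == 0

# (2) Apply [mu d/dmu + beta(g, eps) d/dg] to the renormalized S-matrix:
#     the O(g^2) terms cancel identically, and the O(g^4, eps^0) terms fix
#     the subtracted pole coefficient to (b0 / 4 pi) T.
bracket = (C2 / eps**2 + C1 / eps + C2 * ell / eps + (C1 + D1) * ell
           + sp.Rational(1, 2) * C2 * ell**2 + F)
S = alpha * mu**(2 * eps) * T + alpha**2 * mu**(2 * eps) * bracket
beta = -eps * g - b0 * g**3 / (16 * sp.pi**2)
rge = sp.expand(mu * sp.diff(S, mu) + beta * sp.diff(S, g))
order_g2 = sp.simplify(rge.coeff(g, 2))
D1_sol = sp.solve(sp.Eq(rge.coeff(g, 4).subs(eps, 0), 0), D1)[0]
```

Both checks go through for arbitrary $C_1$, $C_2$ and $F$, consistent with the observation above that the counterterm graphs never need to be computed explicitly.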
Some terms in $A$ do not interfere with the tree amplitude, and hence do not contribute to the cross-section at order $\\alpha^3$. The tree-level amplitude only has non-zero helicity amplitudes for $+-$ and $-+$ polarized gauge bosons, so only the real parts of these one-loop helicity amplitudes are checked by the cross-section results of Ref.~\\cite{Ellis:1985er}. For example, the matrix element $\\mathcal{M}_1$ only contributes to $++$ and $--$ polarization states, and so does not interfere with the tree amplitude. We have also computed the one-loop helicity amplitudes for $++$, $+-$, $-+$ and $--$ polarized gauge bosons from the $S$-matrix Eq.~(\\ref{expr}), and verified that they agree with the results in Ref.~\\cite{Kunszt:1993sd} for an $SU(N)$ gauge theory. This provides a check on the result of Sec.~\\ref{sec:T2d}, which is proportional to $\\mathcal{M}_1$.\n\n\n\\section{Conclusion}\n\nWe have computed the high-scale matching at one-loop for vector boson production $q \\bar q \\to V^a_i V^b_j$ and scalar production $q \\bar q \\to \\phi^\\dagger \\phi$, for an arbitrary gauge theory, and given the group theory factors for the standard model. When combined with the EFT results of Refs.~\\cite{Chiu:2009mg,Chiu:2009ft}, this gives the renormalization group improved amplitudes for gauge boson and Higgs production in the standard model. Numerical plots using these results were already presented in Refs.~\\cite{Chiu:2009mg,Chiu:2009ft}. The electroweak corrections to standard model processes at TeV energies are substantial; for example the correction to transverse $W$ pair production at 2~TeV is $37$\\%.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nStatisticians, like all scientists, are acutely aware that the clock speeds on their desktops and laptops have stalled. Does this mean that statistical computing has hit a wall? 
The answer fortunately is no, but the hardware advances that we routinely expect have taken an interesting detour. Most computers now sold have two to eight processing cores. Think of these as separate CPUs on the same chip. Naive programmers rely on sequential algorithms and often fail to take advantage of more than a single core. Sophisticated programmers, the kind who work for commercial firms such as Matlab, eagerly exploit parallel programming. However, multicore CPUs do not represent the only road to the success of statistical computing.\n\nGraphics processing units (GPUs) have caught the scientific community by surprise. These devices are designed for graphics rendering in computer animation and games. Propelled by these nonscientific markets, the old technology of numerical (array) coprocessors has advanced rapidly. Highly parallel GPUs are now making computational inroads against traditional CPUs in image processing, protein folding, stock options pricing, robotics, oil exploration, data mining, and many other areas \\cite{Owens:2007:ASO}. We are starting to see orders of magnitude improvement on some hard computational problems. Three companies, Intel, NVIDIA, and AMD\/ATI, dominate the market. Intel is struggling to keep up with its more nimble competitors.\n\nModern GPUs support more vector and matrix operations, stream data faster, and possess more local memory per core than their predecessors. They are also readily available as commodity items that can be inserted as video cards on modern PCs. GPUs have been criticized for their hostile programming environment and lack of double precision arithmetic and error correction, but these faults are being rectified. The CUDA programming environment \\cite{cuda-book} for NVIDIA chips is now easing some of the programming chores. We could say more about near-term improvements, but most pronouncements would be obsolete within months.\n\nOddly, statisticians have been slow to embrace the new technology. 
\\citet{silberstein08} first demonstrated the potential for GPUs in fitting simple Bayesian networks. Recently \\citet{suchard09} have seen greater than $100$-fold speed-ups in MCMC simulations in molecular phylogeny. \\citet{holmes-tech-report} and \\citet{tibbets09} are following suit with Bayesian model fitting via particle filtering and slice sampling. Finally, work is underway to port common data mining techniques such as hierarchical clustering and multi-factor dimensionality reduction onto GPUs \\citep{sinnott-armstrong09}. These efforts constitute the first wave of an eventual flood of statistical and data mining applications. The porting of GPU tools into the R environment will undoubtedly accelerate the trend \n\\cite{buckner09}.\n\nNot all problems in computational statistics can benefit from GPUs. Sequential algorithms are resistant unless they can be broken into parallel pieces. Even parallel algorithms can be problematic if the entire range of data must be accessed by each GPU. Because they have limited memory, GPUs are designed to operate on short streams of data. The greatest speedups occur when all of the cores on a card perform the same arithmetic operation simultaneously. Effective applications of GPUs in optimization involve both separation of data and separation of parameters. \n\nIn the current paper, we illustrate how GPUs can work hand in glove with the MM algorithm, a generalization of the EM algorithm. In many optimization problems, the MM algorithm explicitly separates parameters by replacing the objective function by a sum of surrogate functions, each of which involves a single parameter. Optimization of the one-dimensional surrogates can be accomplished by assigning each subproblem to a different core. Provided the different cores each access just a slice of the data, the parallel subproblems execute quickly. By construction the new point in parameter space improves the value of the objective function. 
In other words, MM algorithms are iterative ascent or descent algorithms. If they are well designed, then they separate parameters in high-dimensional problems. This is where GPUs enter. They offer most of the benefits of distributed computer clusters at a fraction of the cost. For this reason alone, computational statisticians need to pay attention to GPUs.\n\n\\begin{figure}[htpb]\n\\begin{center}\n$$\n\\begin{array}{cc}\n\\includegraphics[width=2.3in]{RosenbrockSurface.eps} & \\includegraphics[width=2.3in]{RosenbrockMM-q0.eps}\n\\end{array}\n$$\n\\end{center}\n\\caption{Left: The Rosenbrock (banana) function (the lower surface) and a majorization function at point (-1,-1) (the upper surface). Right: MM iterates.} \n\\label{fig:Rosenbrock}\n\\end{figure}\n\nBefore formally defining the MM algorithm, it may help the reader to walk through a simple numerical example stripped of statistical content. Consider the Rosenbrock test function\n\\begin{eqnarray}\nf({\\bf x}) & = & 100(x_1^2-x_2)^2+(x_1-1)^2 \t\\label{eqn:Rosenbrock-objective} \\\\\n& = & 100(x_1^4 + x_2^2 - 2x_1^2 x_2) + (x_1^2 - 2x_1 + 1), \\nonumber\n\\end{eqnarray}\nfamiliar from the minimization literature. As we iterate toward the minimum at ${\\bf x} ={\\bf 1} = (1,1)$, we construct a surrogate function that separates parameters. This is done by exploiting the obvious majorization\n\\begin{eqnarray*}\n- 2 x_1^2 x_2 & \\le & x_1^4 + x_2^2 + (x_{n1}^2 + x_{n2})^2 - 2 (x_{n1}^2 + x_{n2}) (x_1^2 + x_2),\n\\end{eqnarray*}\nwhere equality holds when ${\\bf x}$ and the current iterate ${\\bf x}_n$ coincide. 
It follows that \n$f({\\bf x})$ itself is majorized by the sum of the two surrogates\n\\begin{eqnarray*}\ng_1(x_1 \\mid {\\bf x}_n) & = & 200 x_1^4 - [200( x_{n1}^2 + x_{n2})-1] x_1^2 - 2x_1 +1 \\\\\ng_2(x_2 \\mid {\\bf x}_n) & = & 200 x_2^2 - 200( x_{n1}^2+x_{n2} ) x_2 + (x_{n1}^2 + x_{n2})^2.\n\\end{eqnarray*}\nThe left panel of Figure~\\ref{fig:Rosenbrock} depicts the Rosenbrock function and its majorization\n$g_1(x_1 \\mid {\\bf x}_n) + g_2(x_2 \\mid {\\bf x}_n)$ at the point $-{\\bf 1}$. \n\nAccording to the MM recipe, at each iteration one must minimize the quartic polynomial $g_1(x_1 \\mid {\\bf x}_n)$ and the quadratic polynomial $g_2(x_2 \\mid {\\bf x}_n)$. The quartic possesses either a single global minimum or two local minima separated by a local maximum. These minima are the roots of the cubic function $g_1'(x_1 \\mid {\\bf x}_n)$ and can be explicitly computed. We update $x_1$ by the root corresponding to the global minimum and $x_2$ via $x_{n+1,2} = \\frac{1}{2}(x_{n1}^2+x_{n2})$. The right panel of Figure~\\ref{fig:Rosenbrock} displays the iterates starting from ${\\bf x}_0 = -{\\bf 1}$. These immediately jump into the Rosenbrock valley and then slowly descend to ${\\bf 1}$. \n\nSeparation of parameters in this example makes it easy to decrease the objective function. This almost trivial advantage is amplified when we optimize functions depending on tens of thousands to millions of parameters. In these settings, Newton's method and variants such as Fisher's scoring are fatally handicapped by the need to store, compute, and invert huge Hessian or information matrices. On the negative side of the balance sheet, MM algorithms are often slow to converge. This disadvantage is usually outweighed by the speed of their updates even in sequential mode. 
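The two-step iteration just described is easy to express in code. The following NumPy sketch (our own illustration, not the paper's implementation) minimizes the quartic surrogate by extracting the real roots of the cubic $g_1'$ and applies the closed-form update for $x_2$:

```python
import numpy as np

def rosenbrock(x):
    return 100.0 * (x[0] ** 2 - x[1]) ** 2 + (x[0] - 1.0) ** 2

def mm_step(x):
    """One MM step: minimize the separated surrogates g1 and g2."""
    s = x[0] ** 2 + x[1]                     # x_{n1}^2 + x_{n2}
    # Stationary points of g1 are the real roots of the cubic
    # g1'(t) = 800 t^3 - 2 [200 s - 1] t - 2.
    roots = np.roots([800.0, 0.0, -2.0 * (200.0 * s - 1.0), -2.0])
    real = roots[np.abs(roots.imag) < 1e-8].real
    g1 = 200.0 * real ** 4 - (200.0 * s - 1.0) * real ** 2 - 2.0 * real + 1.0
    x1 = real[np.argmin(g1)]                 # root giving the global minimum of g1
    x2 = 0.5 * s                             # closed-form minimizer of g2
    return np.array([x1, x2])

x = np.array([-1.0, -1.0])                   # start at -1 as in the figure
for _ in range(500):
    x = mm_step(x)
```

Every step decreases the Rosenbrock function: a quick drop into the valley followed by a slow crawl toward the minimum.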
If one can harness the power of parallel processing GPUs, then MM algorithms become the method of choice for many high-dimensional problems.\n\nWe conclude this introduction by sketching a roadmap to the rest of the paper. Section~\\ref{sec:MM} reviews the MM algorithm. Section~\\ref{sec:examples} discusses three high-dimensional MM examples. Although the algorithm in each case is known, we present brief derivations to illustrate how simple inequalities drive separation of parameters. We then implement each algorithm on a realistic problem and compare running times in sequential and parallel modes. We purposefully omit programming syntax since many tutorials already exist for this purpose, and material of this sort is bound to be ephemeral. Section \\ref{discussion_section} concludes with a brief discussion of other statistical applications of GPUs and other methods of accelerating optimization algorithms.\n\n\\section{MM Algorithms}\n\\label{sec:MM}\n\nThe MM algorithm, like the EM algorithm, is a principle for creating optimization algorithms. In minimization the acronym MM stands for majorization-minimization; in maximization it stands for minorization-maximization. Both versions are convenient in statistics. For the moment we will concentrate on maximization.\n\nLet $f(\\bftheta)$ be the objective function whose maximum we seek. Its argument $\\bftheta$ can be high-dimensional and vary over a constrained subset $\\Theta$ of Euclidean space. An MM algorithm involves minorizing $f(\\bftheta)$ by a surrogate function $g(\\bftheta \\mid \\bftheta_n)$ anchored at the current iterate $\\bftheta_n$ of the search. The subscript $n$ indicates iteration number throughout this article. If $\\bftheta_{n+1}$ denotes the maximum of $g(\\bftheta \\mid \\bftheta_n)$ with respect to its left argument, then the MM principle declares that $\\bftheta_{n+1}$ increases \n$f(\\bftheta)$ as well. Thus, MM algorithms revolve around a basic ascent property. 
\n\nMinorization is defined by the two properties\n\\begin{eqnarray}\nf(\\bftheta_n) & = & g( \\bftheta_n \\mid \\bftheta_n) \\label{minorization_definition1} \\\\\nf(\\bftheta) & \\ge & g(\\bftheta \\mid \\bftheta_n)\\: , \\quad \\quad \\bftheta \\ne \\bftheta_n . \n\\label{minorization_definition2}\n\\end{eqnarray}\nIn other words, the surface $\\bftheta \\mapsto g(\\bftheta \\mid \\bftheta_n)$ lies below the surface $\\bftheta \\mapsto f(\\bftheta)$ and is tangent to it at the point $\\bftheta=\\bftheta_n$. Construction of the minorizing function $g(\\bftheta \\mid \\bftheta_n)$ constitutes the first M of the MM algorithm. In our examples \n$g(\\bftheta \\mid \\bftheta_n)$ is chosen to separate parameters. \n\nIn the second M of the MM algorithm, one maximizes the surrogate $g(\\bftheta \\mid \\bftheta_n)$ rather than \n$f(\\bftheta)$ directly. It is straightforward to show that the maximum point $\\bftheta_{n+1}$ satisfies the ascent property $f(\\bftheta_{n+1}) \\ge f(\\bftheta_n)$. The proof \n\\begin{eqnarray*}\nf(\\bftheta_{n+1}) & \\ge & g(\\bftheta_{n+1} \\mid \\bftheta_n) \\;\\; \\ge \\;\\; g(\\bftheta_n \\mid \\bftheta_n)\n\\;\\; = \\;\\; f(\\bftheta_n)\n\\end{eqnarray*}\nreflects definitions (\\ref{minorization_definition1}) and (\\ref{minorization_definition2}) and the choice of $\\bftheta_{n+1}$. The ascent property is the source of the MM algorithm's numerical stability\nand remains valid if we merely increase $g(\\bftheta \\mid \\bftheta_n)$ rather than maximize it. In many problems MM updates are delightfully simple to code, intuitively compelling, and automatically consistent with parameter constraints. In minimization we seek a majorizing function $g(\\bftheta \\mid \\bftheta_n)$ lying above the surface $\\bftheta \\mapsto f(\\bftheta)$ and tangent to it at the point $\\bftheta=\\bftheta_n$. Minimizing $g(\\bftheta \\mid \\bftheta_n)$ drives $f(\\bftheta)$ downhill. 
\n\nThe celebrated Expectation-Maximization (EM) algorithm \\cite{Dempster77EM,McLachlan08EMBook} is a special case of the MM algorithm. The $Q$-function produced in the E step of the EM algorithm constitutes a minorizing function of the loglikelihood. Thus, both EM and MM share the same advantages: simplicity, stability, graceful adaptation to constraints, and the tendency to avoid large matrix inversion. The more general MM perspective frees algorithm derivation from the missing data straitjacket and invites wider applications. For example, our multi-dimensional scaling (MDS) and non-negative matrix factorization (NNMF) examples involve no likelihood functions. \\citet{WuLange09EMMM} briefly summarize the history of the MM algorithm and its relationship to the EM algorithm. \n\nThe convergence properties of MM algorithms are well-known \\citep{Lange04Optm}. In particular,\nfive properties of the objective function $f(\\bftheta)$ and the MM algorithm map $\\bftheta \\mapsto M(\\bftheta)$ guarantee convergence to a stationary point of $f(\\bftheta)$: (a) $f(\\bftheta)$ is coercive on its open domain; (b) $f(\\bftheta)$ has only isolated stationary points; (c) $M(\\bftheta)$ is continuous; (d) $\\bftheta^*$ is a fixed point of $M(\\bftheta)$ if and only if $\\bftheta^*$ is a stationary point of \n$f(\\bftheta)$; and (e) $f[M(\\bftheta^*)] \\ge f(\\bftheta^*)$, with equality if and only if $\\bftheta^*$ is a fixed point of $M(\\bftheta)$. These conditions are easy to verify in many applications. 
The local rate\nof convergence of an MM algorithm is intimately tied to how well the surrogate function \n$g(\\bftheta \\mid \\bftheta^*)$ approximates the objective function $f(\\bftheta)$ near the optimal point \n$\\bftheta^*$.\n\n\n\n\n\n\n\\section{Numerical Examples}\n\\label{sec:examples}\n\nIn this section, we compare the performances of the CPU and GPU implementations of three classical MM algorithms coded in C++: (a) non-negative matrix factorization (NNMF), (b) positron emission tomography (PET), and (c) multidimensional scaling (MDS). In each case we briefly derive the algorithm from the MM perspective. For the CPU version, we iterate until the relative change \n\\begin{eqnarray*}\n\\frac{|f(\\bftheta_n)-f(\\bftheta_{n-1})|}{|f(\\bftheta_{n-1})|+1}\n\\end{eqnarray*}\nof the objective function $f(\\bftheta)$ between successive iterations falls below a pre-set threshold $\\epsilon$ or the number of iterations reaches a pre-set number $n_{\\rm max}$, whichever comes first. In these examples, we take $\\epsilon = 10^{-9}$ and $n_{\\rm max}=100,000$. For ease of comparison, we iterate the GPU version for the same number of steps as the CPU version. Overall, we see anywhere from a 22-fold to 112-fold decrease in total run time. The source code is freely available from the first author.\n\nTable~\\ref{table:system-config} shows how our desktop system is configured. Although the CPU is a high-end processor with four cores, we use just one of these for ease of comparison. In practice, it takes considerable effort to load balance the various algorithms across multiple CPU cores. With 240 GPU cores, the GTX 280 GPU card delivers a peak performance of about 933 GFlops in single precision. This card is already obsolete. Newer cards possess twice as many cores, and up to four cards can fit inside a single desktop computer. It is relatively straightforward to program multiple GPUs. 
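In code, the relative-change stopping rule above is a one-liner (a minimal sketch; the function name is ours, and the default `eps` matches the threshold quoted in the text):

```python
def converged(f_new, f_old, eps=1e-9):
    """Relative-change test: |f_n - f_{n-1}| / (|f_{n-1}| + 1) < eps."""
    return abs(f_new - f_old) / (abs(f_old) + 1.0) < eps
```

The $+1$ in the denominator guards against division by zero when the objective passes near zero.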
Previous-generation GPU hardware is largely limited to single precision, which is a worry in scientific computing. To assess the extent of roundoff error, we display the converged values of the objective functions to ten significant digits. Only rarely is the GPU value far off the CPU mark. Finally, the extra effort in programming the GPU version is relatively light. Exploiting the standard CUDA library \\cite{cuda-book}, it takes 77, 176, and 163 extra lines of GPU code to implement the NNMF, PET, and MDS examples, respectively. \n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{ccc}\n\\toprule\n & CPU & GPU\t\\\\\n\\midrule\nModel & Intel Core 2 & NVIDIA GeForce \\\\\n& Extreme X9440 & GTX 280\t\\\\\n\\# Cores & 4 & 240\t\\\\\nClock & 3.2G & 1.3G \\\\\nMemory & 16G & 1G\t\\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\caption{Configuration of the desktop system}\n\\label{table:system-config}\n\\end{table}\n\n\\subsection{Non-Negative Matrix Factorizations}\n\nNon-negative matrix factorization (NNMF) is an alternative to principal component analysis useful in modeling, compressing, and interpreting nonnegative data such as observational counts and images. The\narticles \\cite{LeeSeung99NNMF,LeeSeung01NNMFAlgo,Berry07NNMF} discuss in detail algorithm development and statistical applications of NNMF. The basic problem is to approximate a data matrix $\\bfX$ with nonnegative entries $x_{ij}$ by a product $\\bfV \\bfW$ of two low rank matrices $\\bfV$ and $\\bfW$ with nonnegative entries $v_{ik}$ and $w_{kj}$. Here $\\bfX$, $\\bfV$, and $\\bfW$ are $p \\times q$, $p \\times r$, and $r \\times q$, respectively, with $r$ much smaller than $\\min\\{p,q\\}$. 
One version of NNMF minimizes the objective function\n\\begin{eqnarray}\nf(\\bfV,\\bfW) & = & \\| \\bfX - \\bfV \\bfW \\|_{\\text{F}}^2 \\;\\; = \\;\\; \n\\sum_i \\sum_j \\Big( x_{ij} - \\sum_k v_{ik} w_{kj} \\Big)^2, \\label{eqn:nnmf-objfn}\n\\end{eqnarray}\nwhere $\\|\\cdot\\|_{\\text{F}}$ denotes the Frobenius norm. To get an idea of the scale of NNMF imaging problems, $p$ (number of images) can range over $10^1-10^4$, $q$ (number of pixels per image) can range over $10^2-10^4$, and one seeks a rank $r$ approximation of about 50. Notably, part of the winning solution of the Netflix challenge relies on variations of NNMF \\cite{KorBel09}. For the Netflix data matrix, $p=480,000$ (raters), \n$q=18,000$ (movies), and $r$ ranged from 20 to 100.\n\nExploiting the convexity of the function $x \\mapsto (x_{ij} - x)^2$, one can derive the inequality \n\\begin{eqnarray*}\n\\Big( x_{ij} - \\sum_k v_{ik} w_{kj} \\Big)^2 & \\le & \\sum_k \\frac{a_{nikj}}{b_{nij}} \\left( x_{ij} - \\frac{b_{nij}}{a_{nikj}} v_{ik} w_{kj} \\right)^2\n\\end{eqnarray*}\nwhere $a_{nikj} = v_{nik} w_{nkj}$ and $b_{nij} = \\sum_k a_{nikj}$. This leads to the surrogate function\n\\begin{eqnarray}\ng(\\bfV,\\bfW \\mid \\bfV_{n},\\bfW_{n}) & = & \\sum_i \\sum_j \\sum_k \\frac{a_{nikj}}{b_{nij}} \\left( x_{ij} - \\frac{b_{nij}}{a_{nikj}} v_{ik} w_{kj} \\right)^2\n\\label{frobenius_majorization}\n\\end{eqnarray}\nmajorizing the objective function $f(\\bfV,\\bfW) = \\|\\bfX-\\bfV \\bfW\\|_{\\text{F}}^2$. Although the majorization (\\ref{frobenius_majorization}) does not achieve a complete separation of parameters, it does if we fix $\\bfV$ and update $\\bfW$ or vice versa. 
This strategy is called block relaxation.\n\nIf we elect to minimize $g(\\bfV,\\bfW \\mid \\bfV_{n},\\bfW_{n})$ holding $\\bfW$ fixed at $\\bfW_n$, then the stationarity condition for $\\bfV$ reads\n\\begin{eqnarray*}\n\\frac{\\partial}{\\partial v_{ik}} g(\\bfV,\\bfW_{n} \\mid \\bfV_{n},\\bfW_{n}) & = & -2\\sum_j \\Big( x_{ij} - \\frac{b_{nij}}{a_{nikj}} v_{ik} w_{nkj} \\Big) w_{nkj} \\;\\; = \\;\\; 0.\n\\end{eqnarray*}\nIts solution furnishes the simple multiplicative update\n\\begin{eqnarray}\nv_{n+1,ik} & = & v_{nik} \\frac{\\sum_j x_{ij} w_{nkj}}{\\sum_j b_{nij} w_{nkj}}. \\label{nnmf_v_update}\n\\end{eqnarray}\nLikewise the stationary condition\n\\begin{eqnarray*}\n\\frac{\\partial}{\\partial w_{kj}} g(\\bfV_{n+1},\\bfW \\mid \\bfV_{n+1},\\bfW_{n})& = & 0\n\\end{eqnarray*}\ngives the multiplicative update\n\\begin{eqnarray}\nw_{n+1,kj} & = & w_{nkj} \\frac{\\sum_i x_{ij} v_{n+1,ik}}{\\sum_i c_{nij} v_{n+1,ik}}, \\label{nnmf_w_update}\n\\end{eqnarray}\nwhere $c_{nij} = \\sum_k v_{n+1,ik}w_{nkj}$. Close inspection of the multiplicative updates (\\ref{nnmf_v_update}) and (\\ref{nnmf_w_update}) shows that their numerators depend on the matrix products \n$\\bfX \\bfW_n^t$ and $\\bfV_{n+1}^t \\bfX $ and their denominators depend on the matrix products $\\bfV_n \\bfW_n \\bfW_n^t$ and $\\bfV_{n+1}^t \\bfV_{n+1} \\bfW_n$. Large matrix multiplications are very fast on GPUs because CUDA implements in parallel the BLAS (basic linear algebra subprograms) library widely applied in numerical analysis \\cite{cublas-book08}. Once the relevant matrix products are available, each elementwise update of $v_{ik}$ or $w_{kj}$ involves just a single multiplication and division. These scalar operations are performed in parallel through hand-written GPU code. 
Algorithm~\\ref{algo:nnmf-multiplicative} summarizes the steps in performing NNMF.\n\n\\begin{algorithm}\n\\begin{algorithmic}\n\\STATE Initialize: Draw $v_{0ik}$ and $w_{0kj}$ uniform on (0,1) for all $1 \\le i \\le p$, $1 \\le k \\le r$, $1 \\le j \\le q$\n\\REPEAT\n\\STATE Compute $\\mathbf{X} \\mathbf{W}_{n}^t$ and $\\mathbf{V}_{n}\\mathbf{W}_{n}\\mathbf{W}_{n}^t$\n\\STATE $v_{n+1,ik} \\leftarrow v_{nik} \\cdot \\{\\mathbf{X} \\mathbf{W}_{n}^t \\}_{ik} \\, \/ \\, \\{\\mathbf{V}_{n}\\mathbf{W}_{n}\\mathbf{W}_{n}^t\\}_{ik}$ for all $1 \\le i \\le p$, $1 \\le k \\le r$\n\\STATE Compute $\\mathbf{V}_{n+1}^t \\mathbf{X} $ and $\\mathbf{V}_{n+1}^t\\mathbf{V}_{n+1}\\mathbf{W}_n$\n\\STATE $w_{n+1,kj} \\leftarrow w_{nkj} \\cdot \\{\\bfV_{n+1}^t \\bfX\\}_{kj} \\, \/ \\, \\{\\bfV_{n+1}^t\\bfV_{n+1}\\bfW_{n}\\}_{kj}$ for all $1 \\le k \\le r$, $1 \\le j \\le q$\n\\UNTIL{convergence occurs}\n\\end{algorithmic}\n\\caption{(NNMF) Given $\\bfX \\in \\mathbb{R}_+^{p \\times q}$, find $\\bfV \\in \\mathbb{R}_+^{p \\times r}$ and $\\bfW \\in \\mathbb{R}_+^{r \\times q}$ minimizing $\\|\\bfX - \\bfV \\bfW\\|_{\\text{F}}^2$.}\n\\label{algo:nnmf-multiplicative}\n\\end{algorithm}\n\nWe now compare CPU and GPU versions of the multiplicative NNMF algorithm on a training set of face images. Database \\#1 from the MIT Center for Biological and Computational Learning (CBCL) \\cite{MITCBCL} reduces to a matrix $\\bfX$ containing $p=2,429$ gray scale face images with $q=19 \\times 19 = 361$ pixels per face. Each image (row) is scaled to have mean and standard deviation 0.25. Figure\n\\ref{fig:approx-face} shows the recovery of the first face in the database using a rank $r=49$ decomposition. The 49 basis images (rows of $\\bfW$) represent different aspects of a face. The rows of $\\bfV$ contain the coefficients of these parts estimated for the various faces. Some of these facial features are immediately obvious in the reconstruction. 
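In NumPy the two multiplicative updates are a few lines each; the sketch below (our own CPU-only illustration, with a small `eps` guard against division by zero added) makes clear that the work is dominated by the four matrix products that the GPU parallelizes:

```python
import numpy as np

def nnmf(X, r, n_iter=100, seed=0):
    """Multiplicative NNMF updates minimizing ||X - V W||_F^2 (CPU sketch)."""
    rng = np.random.default_rng(seed)
    p, q = X.shape
    V = rng.uniform(size=(p, r))               # uniform (0,1) initialization
    W = rng.uniform(size=(r, q))
    eps = 1e-12                                # guard against division by zero
    for _ in range(n_iter):
        V *= (X @ W.T) / (V @ W @ W.T + eps)   # elementwise v-update
        W *= (V.T @ X) / (V.T @ V @ W + eps)   # elementwise w-update
    return V, W
```

Once the matrix products are formed, each entry of `V` and `W` is touched by one multiplication and one division, exactly the scalar work described above.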
Table~\\ref{table:nnmf-faces} compares the run times of Algorithm~\\ref{algo:nnmf-multiplicative} implemented on our CPU and GPU respectively. We observe a 22 to 112-fold speed-up in the GPU implementation. Run times for the GPU version depend primarily on the number of iterations to convergence and very little on the rank $r$ of the approximation. Run times of the CPU version scale linearly in both the number of iterations and $r$.\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{crrrrrc}\n\\toprule\n& & \\multicolumn{2}{c}{CPU} & \\multicolumn{2}{c}{GPU} & \\\\\n \\cmidrule(lr){3-4} \\cmidrule(lr){5-6}\nRank $r$ & Iters & Time & Function & Time & Function & Speedup \\\\\n\\midrule\n10 & 25459 & 1203 & 106.2653503 & 55 & 106.2653504 & 22 \\\\\n20 & 87801 & 7564 & 89.56601262 & 163 & 89.56601287 & 46 \\\\\n30 & 55783 & 7013 & 78.42143486 & 103 & 78.42143507 & 68 \\\\\n40 & 47775 & 7880 & 70.05415929 & 119 & 70.05415950 & 66 \\\\\n50 & 53523 & 11108 & 63.51429261 & 121 & 63.51429219 & 92 \\\\\n60 & 77321 & 19407 & 58.24854375 & 174 & 58.24854336 & 112 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\caption{Run-time (in seconds) comparisons for NNMF on the MIT CBCL face image data. The dataset contains $p=2,429$ faces with $q = 19 \\times 19 = 361$ pixels per face. The columns labeled Function refer to the converged value of the objective function.}\n\\label{table:nnmf-faces}\n\\end{table}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=4.7in]{FaceFigure}\n\\end{center}\n\\caption{Approximation of a face image by rank-49 NNMF: coefficients $\\times$ basis images = approximate image.}\n\\label{fig:approx-face}\n\\end{figure}\n\nIt is worth stressing a few points. First, the objective function~(\\ref{eqn:nnmf-objfn}) is convex in $\\bfV$ for $\\bfW$ fixed, and vice versa but not jointly convex. Thus, even though the MM algorithm enjoys the descent property, it is not guaranteed to find the global minimum \\citep{Berry07NNMF}. 
There are two good alternatives to the multiplicative algorithm. First, pure block relaxation can be conducted by alternating least squares (ALS). In updating $\\bfV$ with $\\bfW$ fixed, ALS omits majorization and solves the $p$ separated nonnegative least squares problems\n\\begin{eqnarray*}\n\t\\min_{\\bfV(i,:)} \\|\\bfX(i,:) - \\bfV(i,:)\\bfW\\|_2^2 \\quad \\text{ subject to } \\bfV(i,:) \\ge 0,\n\\end{eqnarray*}\nwhere $\\bfV(i,:)$ and $\\bfX(i,:)$ denote the $i$-th row of the corresponding matrices. Similarly, in updating $\\bfW$ with $\\bfV$ fixed, ALS solves $q$ separated nonnegative least squares problems. Another possibility is to change the objective function to\n\\begin{eqnarray*}\n\tL(\\bfV,\\bfW) & = & \\sum_i \\sum_j \\Big[ x_{ij} \\ln \\Big( \\sum_k v_{ik} w_{kj} \\Big) - \\sum_k v_{ik} w_{kj} \\Big]\n\\end{eqnarray*}\naccording to a Poisson model for the counts $x_{ij}$ \\cite{LeeSeung99NNMF}. This works even when some entries $x_{ij}$ fail to be integers, but the Poisson loglikelihood interpretation is lost. A pure MM algorithm for maximizing $L(\\bfV,\\bfW)$ is\n\\begin{eqnarray*}\nv_{n+1,ik} & = & v_{nik} \\sqrt{ \\frac{\\sum_j x_{ij} w_{nkj} \/ b_{nij}}{\\sum_j w_{nkj}} }, \\quad w_{n+1,kj} = w_{nkj} \\sqrt{ \\frac{\\sum_i x_{ij} v_{nik} \/ b_{nij}}{\\sum_i v_{nik}}}.\n\\end{eqnarray*}\nDerivation of these variants of Lee and Seung's \\cite{LeeSeung99NNMF} Poisson updates is left to the reader.\n\n\\subsection{Positron Emission Tomography}\n\nThe field of computed tomography has exploited EM algorithms for many years. In positron emission tomography (PET), the reconstruction problem consists of estimating the Poisson emission intensities $\\bflambda = (\\lambda_1,\\ldots,\\lambda_p)$ of $p$ pixels arranged in a 2-dimensional grid surrounded by an array of photon detectors. The observed data are coincidence counts $(y_1, \\ldots, y_d)$ along $d$ lines of flight connecting pairs of photon detectors. 
The loglikelihood under the PET model is\n\\begin{eqnarray*}\nL(\\bflambda) & = & \\sum_i \\Big[y_i \\ln\\Big( \\sum_j\ne_{ij} \\lambda_j\\Big)-\\sum_j e_{ij} \\lambda_j\\Big] ,\n\\end{eqnarray*}\nwhere the $e_{ij}$ are constants derived from the geometry of the grid and the detectors. Without loss of generality, one can assume $\\sum_i e_{ij}=1$ for each $j$. It is straightforward to derive the traditional EM algorithm \\cite{Lange84PET,Vardi85PET} from the MM perspective using the concavity of the function $\\ln s$. Indeed, application of Jensen's inequality produces the minorization \n\\begin{eqnarray*}\nL(\\bflambda) & \\ge & \\sum_i y_i \\sum_j w_{nij}\n\\ln\\Big( \\frac{e_{ij} \\lambda_j}{w_{nij}} \\Big) -\\sum_i \\sum_j e_{ij} \\lambda_j\n\\;\\; = \\;\\; Q(\\bflambda \\mid \\bflambda_n),\n\\end{eqnarray*}\nwhere $w_{nij} = e_{ij} \\lambda_{nj}\/(\\sum_k e_{ik} \\lambda_{nk})$. This maneuver again separates parameters. The stationarity conditions for the surrogate $Q(\\bflambda \\mid \\bflambda_n)$ supply the parallel updates\n\\begin{eqnarray}\n\\lambda_{n+1,j} & = & \\frac{\\sum_i y_i w_{nij}}{\\sum_i e_{ij}}. \\label{PET_algorithm}\n\\end{eqnarray}\n\nThe convergence of the PET algorithm (\\ref{PET_algorithm}) is frustratingly slow, even under systematic acceleration \\cite{Roland07PET,ZhouAlexanderLange09QN}. Furthermore, the reconstructed images are of poor quality with a grainy appearance. The early remedy of premature halting of the algorithm cuts computational cost but is entirely {\\it ad hoc}, and the final image depends on initial conditions. A better option is to add a roughness penalty to the loglikelihood. This device not only produces better images but also accelerates convergence. 
Thus, we maximize the penalized loglikelihood\n\\begin{eqnarray}\nf(\\bflambda) & = & L(\\bflambda) -\\frac{\\mu}{2} \\sum_{\\{j,k\\}\n\\in {\\cal N}} (\\lambda_j - \\lambda_k)^2 \\label{eqn:PET-objpen}\t\n\\end{eqnarray}\nwhere $\\mu$ is the roughness penalty constant, and ${\\cal N}$ is the neighborhood system that pairs spatially adjacent pixels. An absolute value penalty is less likely to deter the formation of edges than a square penalty, but it is easier to deal with a square penalty analytically, and we adopt it for the sake of simplicity. In practice, visual inspection of the recovered images guides the selection of the roughness penalty constant $\\mu$.\n\nTo maximize $f(\\bflambda)$ by an MM algorithm, we must minorize the penalty in a manner consistent with the separation of parameters. In view of the evenness and convexity of the function $s^2$, we have \n\\begin{eqnarray*}\n(\\lambda_j - \\lambda_k)^2 & \\leq & \\frac{1}{2} (2 \\lambda_j -\n\\lambda_{nj} - \\lambda_{nk})^2\n+ \\frac{1}{2} (2 \\lambda_k - \\lambda_{nj} - \\lambda_{nk})^2 .\n\\end{eqnarray*}\nEquality holds if $\\lambda_j+\\lambda_k = \\lambda_{nj} + \\lambda_{nk}$, which is true when $\\bflambda = \\bflambda_n$. 
Combining our two minorizations furnishes the surrogate function\n\\begin{eqnarray*}\n\tg(\\bflambda \\mid \\bflambda_n) = Q(\\bflambda \\mid \\bflambda_n) -\\frac{\\mu}{4} \n\\sum_{\\{j,k\\} \\in {\\cal N}}\\Big[(2 \\lambda_j - \\lambda_{nj} - \\lambda_{nk})^2 + (2 \\lambda_k - \\lambda_{nj} - \\lambda_{nk})^2 \\Big].\n\\end{eqnarray*}\nTo maximize $g(\\bflambda \\mid \\bflambda_n)$, we define ${\\cal N}_j = \\{k: \\{j,k\\} \\in {\\cal N}\\}$ and set the partial derivative\n\\begin{eqnarray}\n\\frac{\\partial}{\\partial \\lambda_j} g(\\bflambda \\mid \\bflambda_n) & = & \\sum_i\n\\Big[\\frac{y_i w_{nij}}{\\lambda_j}-e_{ij}\\Big] - \\mu \\sum_{k \\in {\\cal N}_j} (2\n\\lambda_j-\\lambda_{nj} - \\lambda_{nk})\n \\label{eqn:next-lambda}\n\\end{eqnarray}\nequal to 0 and solve for $\\lambda_{n+1,j}$. Multiplying equation (\\ref{eqn:next-lambda}) by $\\lambda_j$ produces a quadratic with roots of opposite signs. We take the positive root\n\\begin{eqnarray*}\n\\lambda_{n+1,j} & = & \\frac{-b_{nj} - \\sqrt{b_{nj}^{2} - 4a_jc_{nj}}}{2a_j},\n\\end{eqnarray*}\nwhere\n\\begin{eqnarray*}\na_{j} & = & -2 \\mu \\sum_{k \\in {\\cal N}_j} 1, \\quad \nb_{nj} \\;\\; = \\;\\; \\mu \\sum_{k \\in {\\cal N}_j} (\\lambda_{nj}+\\lambda_{nk}) -1, \\quad\nc_{nj} \\;\\; = \\;\\; \\sum_i y_i w_{nij} .\n\\end{eqnarray*}\nAlgorithm \\ref{algo:PET} summarizes the complete MM scheme. Obviously, complete parameter separation is crucial. The quantities $a_j$ can be computed once and stored. The quantities $b_{nj}$ and $c_{nj}$ are computed for each $j$ in parallel. To improve GPU performance in computing the sums over $i$, we exploit the widely available parallel sum-reduction techniques \\citep{silberstein08}. Given these results, a specialized but simple GPU code computes the updates $\\lambda_{n+1,j}$ for each $j$ in parallel.\n\nTable~\\ref{table:pet} compares the run times of the CPU and GPU implementations for a simulated PET image \\citep{Roland07PET}. 
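A CPU-only sketch of one penalized update may help fix ideas. The code is our own illustration, assuming `neighbors[j]` lists the pixels adjacent to pixel $j$ and that the columns of `E` have been scaled to unit $l_1$ norm:

```python
import numpy as np

def pet_mm_step(lam, E, y, mu, neighbors):
    """One MM update for the penalized PET loglikelihood (columns of E sum to 1)."""
    p = E.shape[1]
    proj = E @ lam                          # (E lambda)_i = sum_j e_ij lambda_j
    c = lam * (E.T @ (y / proj))            # c_nj = sum_i y_i w_nij
    lam_new = np.empty(p)
    for j in range(p):
        Nj = neighbors[j]
        a = -2.0 * mu * len(Nj)                              # a_j
        b = mu * (len(Nj) * lam[j] + lam[Nj].sum()) - 1.0    # b_nj
        lam_new[j] = (-b - np.sqrt(b * b - 4.0 * a * c[j])) / (2.0 * a)
    return lam_new
```

The loop over $j$ is embarrassingly parallel, which is exactly what the GPU version exploits.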
The true image, depicted at the top of Figure~\ref{fig:pet}, has $p=64 \times 64=4,096$ pixels and is interrogated by $d=2,016$ detectors. Overall we see a 43- to 53-fold reduction in run times with the GPU implementation. Figure~\ref{fig:pet} displays the true image and the estimated images under penalties of $\mu=0$, $10^{-7}$, $10^{-6}$, and $10^{-5}$. Without penalty ($\mu=0$), the algorithm fails to converge in 100,000 iterations.\n\n\begin{algorithm}\n\begin{algorithmic}\n\STATE Scale $\mathbf{E}$ to have unit $l_1$ column norms.\n\STATE Compute $|{\cal N}_j| = \sum_{k: \{j,k\} \in {\cal N}} 1$ and $a_j \leftarrow -2 \mu |{\cal N}_j|$ for all $1 \le j \le p$.\n\STATE Initialize: $\lambda_{0j} \leftarrow 1$, $j=1,\ldots,p$.\n\REPEAT\n\STATE $z_{nij} \leftarrow (y_i e_{ij}\lambda_{nj})\/(\sum_k e_{ik} \lambda_{nk})$ for all $1 \le i \le d$, $1 \le j \le p$\n\FOR{$j=1$ to $p$}\n\STATE $b_{nj} \leftarrow \mu(|{\cal N}_j|\lambda_{nj}+\sum_{k \in {\cal N}_j} \lambda_{nk})-1$\n\STATE $c_{nj} \leftarrow \sum_i z_{nij}$\n\STATE $\lambda_{n+1,j} \leftarrow (-b_{nj}-\sqrt{b_{nj}^{2}-4a_jc_{nj}})\/(2a_j)$\n\ENDFOR\n\UNTIL{convergence occurs}\n\end{algorithmic}\n\caption{(PET Image Recovery) Given the coefficient matrix $\mathbf{E} \in \mathbb{R}_+^{d \times p}$, coincidence counts ${\bf y} = (y_1,\ldots,y_d) \in \mathbb{Z}_+^d$, and roughness parameter $\mu>0$, find the intensity vector $\bflambda=(\lambda_1,\ldots,\lambda_p) \in \mathbb{R}_+^p$ that maximizes the objective function~(\ref{eqn:PET-objpen}).}\n\label{algo:PET}\n\end{algorithm}\n\n\begin{sidewaystable}\n\begin{center}\n\begin{tabular}{crrrrrrcrrrc}\n\toprule\n& \multicolumn{3}{c}{CPU} & \multicolumn{4}{c}{GPU} & \multicolumn{4}{c}{QN(10) on CPU} \\\n \cmidrule(lr){2-4} \cmidrule(lr){5-8} \cmidrule(lr){9-12}\nPenalty $\mu$ & Iters & Time & Function & Iters & Time & Function & Speedup & Iters & Time & Function & Speedup \\\n\midrule\n0 & 
100000 & 14790 & -7337.152765 & 100000 & 282 & -7337.153387 & 52 & 6549 & 2094 & -7320.100952 & n\/a \\\\\n$10^{-7}$ & 24457 & 3682 & -8500.083033 & 24457 & 70 & -8508.112249 & 53 & 251 & 83 & -8500.077057 & 44 \\\\\n$10^{-6}$ & 6294 & 919 & -15432.45496 & 6294 & 18 & -15432.45586 & 51 & 80 & 29 & -15432.45366 & 32 \\\\\n$10^{-5}$ & 589 & 86 & -55767.32966 & 589 & 2 & -55767.32970 & 43 & 19 & 9 & -55767.32731 & 10 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\caption{Comparison of run times (in seconds) for a PET imaging problem on the simulated data in \\cite{Roland07PET}. The image has $p=64 \\times 64=4,096$ pixels and is interrogated by $d=2,016$ detectors. The columns labeled Function refer to the converged value of the objective function. The results under the heading {\\tt $QN(10)$ on CPU} invoke quasi-Newton acceleration \\cite{ZhouAlexanderLange09QN} with 10 secant conditions.}\n\\label{table:pet}\n\\end{sidewaystable}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=2in]{pet-true.eps}\n$$\n\\begin{array}{cc}\n\t\\includegraphics[width=2in]{pet-gpu-penalty-0.eps} & \\includegraphics[width=2in]{pet-gpu-penalty-1e-7.eps}\t\\\\\n\t\\includegraphics[width=2in]{pet-gpu-penalty-1e-6.eps} & \\includegraphics[width=2in]{pet-gpu-penalty-1e-5.eps}\t\\\\\n\\end{array}\n$$\n\\end{center}\n\\caption{The true PET image (top) and the recovered images with penalties $\\mu= 0$, $10^{-7}$, $10^{-6}$, and $10^{-5}$.}\n\\label{fig:pet}\n\\end{figure}\n\n\\subsection{Multidimensional Scaling}\n\nMultidimensional scaling (MDS) was the first statistical application of the MM principle \\cite{deLeeuw76MDS,deLeeuwHeiser77MDS}. MDS represents $q$ objects as faithfully as possible in $p$-dimensional space given a nonnegative weight $w_{ij}$ and a nonnegative dissimilarity measure $y_{ij}$ for each pair of objects $i$ and $j$. 
If $\\bftheta^i \\in \\mathbb{R}^p$ is the position of object $i$, then the $p \\times q$ parameter matrix $\\bftheta = (\\bftheta^1, \\ldots, \\bftheta^q)$ is estimated by minimizing the stress function\n\\begin{eqnarray}\nf(\\bftheta)\t&=& \\sum_{1 \\le i < j \\le q} w_{ij} (y_{ij} - \\|\\bftheta^i-\\bftheta^j\\|)^2 \\label{eqn:stress} \\\\\n&=& \\sum_{i < j} w_{ij} y_{ij}^2 - 2 \\sum_{i < j} w_{ij} y_{ij} \\|\\bftheta^i - \\bftheta^j\\| \n + \\sum_{i < j} w_{ij} \\|\\bftheta^i - \\bftheta^j\\|^2, \\nonumber\n\\end{eqnarray}\nwhere $\\|\\bftheta^i - \\bftheta^j\\|$ is the Euclidean distance between $\\bftheta^i$ and $\\bftheta^j$. The stress function~(\\ref{eqn:stress}) is invariant under translations, rotations, and reflections of $\\mathbb{R}^p$. To avoid translational and rotational ambiguities, we take $\\bftheta^1$ to be the origin and the first $p-1$ coordinates of $\\bftheta^2$ to be 0. Switching the sign of \n$\\theta^2_{p}$ leaves the stress function invariant. Hence, convergence to one member of a pair of reflected minima immediately determines the other member. \n\nGiven these preliminaries, we now review the derivation of the MM algorithm presented in \\citep{Lange00OptTrans}. Because we want to minimize the stress, we majorize it. The middle term in the stress~(\\ref{eqn:stress}) is majorized by the Cauchy-Schwartz inequality\n\\begin{eqnarray*}\n- \\|\\bftheta^i - \\bftheta^j\\| & \\le & - \\frac{(\\bftheta^i - \\bftheta^j)^t (\\bftheta^i_n - \\bftheta^j_n)}\n{\\|\\bftheta^i_n - \\bftheta^j_n\\|}.\n\\end{eqnarray*}\nTo separate the parameters in the summands of the third term of the stress, we invoke the convexity of the Euclidean norm $\\|\\cdot\\|$ and the square function $s^2$. 
These maneuvers yield\n\begin{eqnarray*}\n\|\bftheta^i-\bftheta^j\|^2 & = & \Big\|\frac{1}{2}\Big[2\bftheta^i - (\bftheta^i_n+\bftheta^j_n)\Big]\n- \frac{1}{2}\Big[2\bftheta^j -(\bftheta^i_n+\bftheta^j_n)\Big]\Big\|^2 \\\n& \le & 2 \Big\|\bftheta^i - \frac 12 (\bftheta^i_n + \bftheta^j_n)\Big\|^2 \n+ 2 \Big\|\bftheta^j - \frac 12 (\bftheta^i_n+\bftheta^j_n)\Big\|^2.\n\end{eqnarray*}\nAssuming that $w_{ij}=w_{ji}$ and $y_{ij}=y_{ji}$, the surrogate function therefore becomes\n\begin{eqnarray*}\ng(\bftheta \mid \bftheta_n) &=& 2 \sum_{i < j} w_{ij} \left[ \Big\|\bftheta^i - \frac 12 (\bftheta^i_n + \bftheta^j_n)\Big\|^2 - \frac{y_{ij} (\bftheta^i)^t (\bftheta^i_n - \bftheta^j_n)}{\|\bftheta^i_n-\bftheta^j_n\|}\right] \\\n& & + 2 \sum_{i < j} w_{ij} \left[ \Big\|\bftheta^j - \frac 12 (\bftheta^i_n + \bftheta^j_n)\Big\|^2 + \frac{y_{ij} (\bftheta^j)^t (\bftheta^i_n - \bftheta^j_n)}{\|\bftheta^i_n-\bftheta^j_n\|}\right]\n\end{eqnarray*}\nup to an irrelevant constant. The surrogate separates the parameter vectors $\bftheta^i$. Setting its gradient with respect to $\bftheta^i$ equal to $\mathbf{0}$ yields the parallel updates\n\begin{eqnarray*}\n\bftheta^i_{n+1} & = & \frac{1}{2 w_{i \cdot}} \Big[ \bftheta^i_n (w_{i \cdot} + z_{ni \cdot}) + \sum_{j \ne i} (w_{ij} - z_{nij}) \bftheta^j_n \Big],\n\end{eqnarray*}\nwhere $z_{nij} = w_{ij} y_{ij}\/\|\bftheta^i_n - \bftheta^j_n\|$, $w_{i \cdot} = \sum_{j \ne i} w_{ij}$, and $z_{ni \cdot} = \sum_{j \ne i} z_{nij}$. Algorithm~\ref{algo:mds} organizes these steps for parallel execution. Table~\ref{table:mds} compares the run times of the CPU and GPU implementations on the 2005 House of Representatives roll call data with $q=401$ representatives, and Figure~\ref{fig:mds} displays the recovered configuration for $p=3$.\n\nWhile MDS in $p>3$ dimensional spaces may sound artificial, there are situations where this is standard practice. First, MDS is foremost a dimension reduction tool, and it is desirable to keep $p>3$ to maximize explanatory power. Second, the stress function tends to have multiple local minima in low dimensions \cite{Groenen96Tunneling}. A standard optimization algorithm like MM is only guaranteed to converge to a local minimum of the stress function. As the number of dimensions increases, most of the inferior modes disappear. One can formally demonstrate that the stress has a unique minimum when $p=q-1$ \cite{deLeeuw93,Groenen96Tunneling}. In practice, uniqueness can set in well before $p$ reaches $q-1$. In the recent work \cite{ZhouLange09Annealing}, we propose a ``dimension crunching\" technique that increases the chance of the MM algorithm converging to the global minimum of the stress function. In dimension crunching, we start optimizing the stress in a Euclidean space $\mathbb{R}^m$ with $m>p$. The last $m-p$ components of each column $\bftheta^i$ are gradually subjected to stiffer and stiffer penalties. 
In the limit as the penalty tuning parameter tends to $\infty$, we recover the global minimum of the stress in $\mathbb{R}^p$. This strategy inevitably incurs a computational burden when $m$ is large, but the MM+GPU combination comes to the rescue.\n\n\begin{algorithm}\n\begin{algorithmic}\n\STATE Precompute: $x_{ij} \leftarrow w_{ij} y_{ij}$ for all $1 \le i,j \le q$\n\STATE Precompute: $w_{i\cdot} \leftarrow \sum_j w_{ij}$ for all $1 \le i \le q$\n\STATE Initialize: Draw $\theta^{i}_{0k}$ uniformly on $[-1,1]$ for all $1 \le i \le q$, $1 \le k \le p$\n\REPEAT\n\STATE Compute $\mathbf{\Theta}_n^t \mathbf{\Theta}_n$\n\STATE $d_{nij} \leftarrow (\{\mathbf{\Theta}_n^t \mathbf{\Theta}_n\}_{ii} + \{\mathbf{\Theta}_n^t \mathbf{\Theta}_n\}_{jj} - 2 \{\mathbf{\Theta}_n^t \mathbf{\Theta}_n\}_{ij})^{1\/2}$ for all $1 \le i, j \le q$\n\STATE $z_{nij} \leftarrow x_{ij} \/ d_{nij}$ for all $1 \le i \ne j \le q$\n\STATE $z_{ni\cdot} \leftarrow \sum_{j \ne i} z_{nij}$ for all $1 \le i \le q$\n\STATE Compute $\mathbf{\Theta}_n (\mathbf{W}-\mathbf{Z}_n)$\n\STATE $\theta_{n+1,k}^i \leftarrow [\theta_{nk}^i (w_{i\cdot}+z_{ni\cdot}) + \{\mathbf{\Theta}_n (\mathbf{W}-\mathbf{Z}_n)\}_{ki}] \/ (2w_{i\cdot})$ for all $1 \le i \le q$, $1 \le k \le p$\n\UNTIL{convergence occurs}\n\end{algorithmic}\n\caption{(MDS) Given weights $\mathbf{W}$ and dissimilarities $\mathbf{Y} \in \mathbb{R}^{q \times q}$, find the matrix $\mathbf{\Theta} = [\bftheta^1, \ldots, \bftheta^q] \in \mathbb{R}^{p \times q}$ that minimizes the stress~(\ref{eqn:stress}).}\n\label{algo:mds}\n\end{algorithm}\n\n\begin{sidewaystable}\n\begin{center}\n\begin{tabular}{crrrrrrcrrrc}\n\toprule\n& \multicolumn{3}{c}{CPU} & \multicolumn{4}{c}{GPU} & \multicolumn{4}{c}{QN(20) on CPU} \\\n \cmidrule(lr){2-4} \cmidrule(lr){5-8} \cmidrule(lr){9-12}\nDim-$p$ & Iters & Time & Stress & Iters & Time & Stress & Speedup & Iters & Time & Stress & 
Speedup\t\\\\\n\\midrule\n2 & 3452 & 43 & 198.5109307 & 3452 & 1 & 198.5109309 & 43 & 530 & 16 & 198.5815072 & 3 \\\\\n3 & 15912 & 189 & 95.55987770 & 15912 & 6 & 95.55987813 & 32 & 1124 & 38 & 92.82984196 & 5 \\\\\n4 & 15965 & 189 & 56.83482075 & 15965 & 7 & 56.83482083 & 27 & 596 & 18 & 56.83478026 & 11 \\\\\n5 & 24604 & 328 & 39.41268434 & 24604 & 10 & 39.41268444 & 33 & 546 & 17 & 39.41493536 & 19 \\\\\n10 & 29643 & 441 & 14.16083986 & 29643 & 13 & 14.16083992 & 34 & 848 & 35 & 14.16077368 & 13 \\\\\n20 & 67130 & 1288 & 6.464623901 & 67130 & 32 & 6.464624064 & 40 & 810 & 43 & 6.464526731 & 30 \\\\\n30 & 100000 & 2456 & 4.839570118 & 100000 & 51 & 4.839570322 & 48 & 844 & 54 & 4.839140671 & n\/a \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\caption{Comparison of run times (in seconds) for MDS on the 2005 House of Representatives roll call data. The number of points (representatives) is $q=401$. The results under the heading {\\tt $QN(20)$ on CPU} invoke the quasi-Newton acceleration \\cite{ZhouAlexanderLange09QN} with 20 secant conditions.}\n\\label{table:mds}\n\\end{sidewaystable}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=4.5in]{mds-houseparty-3D.eps}\n\\end{center}\n\\caption{Display of the MDS results with $p=3$ coordinates on the 2005 House of Representatives roll call data.}\n\\label{fig:mds}\n\\end{figure}\n\n\\section{Discussion}\n\\label{discussion_section}\n\nThe rapid and sustained increases in computing power over the last half century have transformed statistics. Every advance has encouraged statisticians to attack harder and more sophisticated problems. We tend to take the steady march of computational efficiency for granted, but there are limits to a chip's clock speed, power consumption, and logical complexity. Parallel processing via GPUs is the technological\ninnovation that will power ambitious statistical computing in the coming decade. 
Once the limits of parallel processing are reached, we may see quantum computers take off. In the meantime statisticians should learn how to harness GPUs productively.\n\nWe have argued by example that high-dimensional optimization is driven by parameter and data separation. It takes both to exploit the parallel capabilities of GPUs. Block relaxation and the MM algorithm often generate ideal parallel algorithms. In our opinion the MM algorithm is the more versatile of the two generic strategies. Unfortunately, block relaxation does not accommodate constraints well and may generate sequential rather than parallel updates. Even when its updates are parallel, they may not be data separated. The EM algorithm is one of the most versatile tools in the statistician's toolbox. The MM principle generalizes the EM algorithm and shares its positive features. Scoring and Newton's methods become impractical in high dimensions. Despite these arguments in favor of MM algorithms, one should always keep in mind hybrid algorithms such as the one we implemented for NNMF.\n\nAlthough none of our data sets is really large by today's standards, they do demonstrate that a good GPU implementation can easily achieve one to two orders of magnitude improvement over a single CPU core. Admittedly, modern CPUs come with 2 to 8 cores, and distributed computing over CPU-based clusters remains an option. But this alternative also carries a hefty price tag. The NVIDIA GTX280 GPU on which our examples were run drives 240 cores at a cost of several hundred dollars. High-end computers with 8 or more CPU nodes cost thousands of dollars. It would take 30 CPUs with 8 cores each to equal a single GPU at the same clock rate. Hence, GPU cards strike an effective and cost efficient balance.\n\nThe simplicity of MM algorithms often comes at a price of slow (at best linear) convergence. Our MDS, NNMF, and PET (without penalty) examples are cases in point. 
Slow convergence is a concern as statisticians head into an era dominated by large data sets and high-dimensional models. Think about the scale of the Netflix data matrix. The speed of any iterative algorithm is determined by both the computational cost per iteration and the number of iterations until convergence. GPU implementation reduces the first cost. Computational statisticians also have a bag of software tricks to decrease the number of iterations \cite{MengRubin93ECM,Jamshidian93ConjGrad,LiuRubin94ECME,Lange95QuasiNewton,Jamshidian97,MengVanDyk97,Varadhan08SQUAREM}. For instance, the recent paper~\cite{ZhouAlexanderLange09QN} proposes a quasi-Newton acceleration scheme particularly suitable for high-dimensional problems. The scheme is off-the-shelf and broadly applies to any search algorithm defined by a smooth algorithm map. The acceleration requires only modest increments in storage and computation per iteration. Tables~\ref{table:pet} and~\ref{table:mds} also list the results of this quasi-Newton acceleration of the CPU implementation for the PET and MDS examples. As the tables make evident, quasi-Newton acceleration significantly reduces the number of iterations until convergence. The accelerated algorithm usually locates an equally good or better mode while cutting run times compared to the unaccelerated algorithm. We have tried the quasi-Newton acceleration on our GPU hardware with mixed results. We suspect that the lack of full double precision on the GPU is the culprit. When full double precision becomes widely available, the combination of GPU hardware acceleration and algorithmic software acceleration will be extremely potent.\n\nSuccessful acceleration methods will also facilitate attacking another nagging problem in computational statistics, namely multimodality. No one knows how often statistical inference is fatally flawed because a standard optimization algorithm converges to an inferior mode. 
The current remedy of choice is to start a search algorithm from multiple random points. Algorithm acceleration is welcome because the number of starting points can be enlarged without an increase in computing time. As an alternative to multiple starting points, our recent paper~\\cite{ZhouLange09Annealing} suggests modifications of several standard MM algorithms that increase the chance of locating better modes. These simple modifications all involve variations on deterministic annealing~\\cite{Ueda98DAEM}.\n\nOur treatment of simple classical examples should not hide the wide applicability of the powerful MM+GPU combination. A few other candidate applications include penalized estimation of haplotype frequencies in genetics \\cite{ayers08}, construction of biological and social networks under a random multigraph model \\cite{ranola10}, and data mining with a variety of models related to the multinomial distribution \\cite{zhou10}. Many mixture models will benefit as well from parallelization, particularly in assigning group memberships. Finally, parallelization is hardly limited to optimization. We can expect to see many more GPU applications in MCMC sampling. Given the computationally intensive nature of MCMC, the ultimate payoff may even be higher in the Bayesian setting than in the frequentist setting. Of course realistically, these future triumphs will require a great deal of thought, effort, and education. There is usually a desert to wander and a river to cross before one reaches the promised land.\n\n\\section*{Acknowledgements}\nM.S. acknowledges support from NIH grant R01 GM086887. K.L. was supported by United States Public Health Service grants GM53275 and MH59490. 
\n\n\section{Introduction}\n\nThe production of heavy quarkonia, the QCD bound states of heavy-quark pairs\n($Q\bar{Q}$), serves as an ideal laboratory to probe both the perturbative and\nnonperturbative aspects of QCD.\nThe effective quantum field theory of nonrelativistic QCD (NRQCD)\n\cite{Caswell:1985ui} endowed with the factorization formalism introduced by\nBodwin, Braaten, and Lepage \cite{Bodwin:1994jh} is nowadays the most\nfavored theoretical approach to the study of heavy-quarkonium production and decay.\nIn this framework, the theoretical predictions are separated into\nprocess-dependent short-distance coefficients (SDCs) and supposedly universal\nlong-distance matrix elements (LDMEs).\nThe SDCs may be calculated perturbatively as expansions in the strong-coupling\nconstant $\alpha_{s}$, while the LDMEs are predicted to scale with definite\npowers of the relative velocity $v$ of the heavy quarks in the quarkonium rest\nframe \cite{Lepage:1992tx}.\nIn this way, the theoretical calculations are organized as double expansions in\n$\alpha_s$ and $v$.\nIn contrast to the color-singlet (CS) model, in which the $Q\bar{Q}$ pair must\nbe in the CS state that shares the spin $S$, orbital angular momentum $L$, and\ntotal angular momentum $J$ with the considered quarkonium, NRQCD accommodates\nall possible Fock states $n={}^{2S+1}\!L_{J}^{[a]}$, where $a=1,8$ stands for\nCS and color octet (CO), respectively.\n\nDuring the past two decades, tests of NRQCD factorization and the universality\nof the LDMEs were performed in a vast number of experimental and theoretical\nworks.\nDespite numerous great successes, there are still some challenges in $J\/\psi$\nhadroproduction.\nAs for the prompt yield of unpolarized $J\/\psi$ mesons, the\nnext-to-leading-order (NLO) QCD corrections were calculated for all the\nrelevant channels, including ${}^3\!S_1^{[1]}$ \cite{Campbell:2007ws},\n${}^3\!S_1^{[8]}$, 
${}^1\\!S_0^{[8]}$ \\cite{Gong:2008sn}, and ${}^3\\!P_{J}^{[8]}$\n\\cite{Butenschoen:2010rq,Ma:2010yw} for direct production and the feed down\nfrom $\\psi^\\prime$ mesons, and\n${}^3\\!P_{J}^{[1]}$ \\cite{Ma:2010vd,Butenschoen:2013pxa}\nfor the feed down from $\\chi_{cJ}$ mesons.\nAs for the LDMEs, different ways of fitting lead to different results.\nBy a global fit to the world's $J\/\\psi$ data from hadroproduction,\nphotoproduction, two-photon scattering, and $e^{+}e^{-}$ annihilation, the\nthree CO LDMEs relevant for direct production were successfully pinned down\n\\cite{Butenschoen:2011yh} in a way compatible with the velocity scaling rules\n\\cite{Lepage:1992tx}, which greatly supported their universality.\nUsing these LDMEs, the $J\/\\psi$ polarization in hadroproduction was predicted\nto be largely transversal \\cite{Butenschoen:2012px}.\nBy fitting to the CDF Run II measurements of $J\/\\psi$ yield and polarization\nfor transverse momenta $p_T>7~\\mathrm{GeV}$, the authors of\nRef.~\\cite{Chao:2012iv} obtained two linear combinations of the three LDMEs,\nwhich led to an almost unpolarized prediction for the LHC.\nFitting to the prompt $J\/\\psi$ yield measured for $p_T>7~\\mathrm{GeV}$ by\nCDF~II and LHCb, including also the feed-down contributions from the\n$\\chi_{cJ}$ and $\\psi^\\prime$ mesons, a third set of LDMEs was obtained, which\nresulted in a moderately transverse $J\/\\psi$ polarization at the LHC\n\\cite{Gong:2012ug}.\nAll of these three LDME sets can describe well the $J\/\\psi$ yield at the LHC.\nUnfortunately, none of the resulting predictions for $J\/\\psi$ polarization is\nconsistent with the latest measurements by CMS \\cite{Chatrchyan:2013cla} and\nLHCb \\cite{Aaij:2013nlm}.\nThe above results reflect the fact that the NRQCD prediction of prompt $J\/\\psi$\npolarization strongly depends on the actual values of the LDMEs.\nTo test the universality of the LDMEs in a meaningful way and to clarify the\n$J\/\\psi$ polarization puzzle, it 
is useful to investigate some other effects in\nthe determination of the LDMEs, such as the higher-order relativistic\ncorrections.\n\nIn the heavy-quarkonium system, we actually have $v^2\\sim \\alpha_{s}(2m_{Q})$,\nwhich is not very small.\nIn some cases, it was found that the higher-order $v^2$ corrections are even as\nimportant as the higher-order $\\alpha_s$ corrections.\nFor example, the relativistic corrections played an important role in resolving\nboth the double-charmonium \\cite{He:2007te} and $J\/\\psi+X_{\\mathrm{non-cc}}$\nproduction problems at the $B$ factories \\cite{He:2009uf,Jia:2009np}.\nIn $J\/\\psi$ hadroproduction, the $v^2$ corrections in the CO channels\n${}^1\\!S_0^{[8]}$ and ${}^3\\!S_1^{[8]}$ were found to be significant in the\nlarge-$p_T$ region \\cite{Xu:2012am}, although they are tiny in the CS\n${}^3\\!S_1^{[1]}$ channel \\cite{Fan:2009zq}.\nIn double-quarkonium hadroproduction, the $v^2$ corrections also turned out to\nbe significant \\cite{Martynenko:2012tf}, especially in the CO channels\n\\cite{Li:2013csa}.\nIn the test of the hypothesis $X(3872)=\\chi_{c1}^\\prime$ in hadroproduction,\nappreciable $v^2$ corrections were encountered in the ${}^3\\!P_1^{[1]}$ channel\n\\cite{Butenschoen:2013pxa}.\nAll these observations provide a strong motivation for us to systematically\nstudy the $v^2$ corrections to the cross sections of prompt $J\/\\psi$\nphotoproduction and hadroproduction.\nThis will allow us to render global fits of the contributing LDMEs more\nreliable and to deepen our understanding of their universality.\nWhile the SDCs of direct $J\/\\psi$ production immediately carry over to the\nfeed down from the $\\psi^\\prime$ mesons, the feed down from the $\\chi_{cJ}$\nmesons requires a separate calculation.\n\nThe remainder of this paper is organized as follows.\nIn Sec.~\\ref{sec:two}, we explain how we calculate all the relevant SDCs.\nOur numerical results are presented in Sec.~\\ref{sec:three}.\nOur conclusions are 
contained in Sec.~\\ref{sec:four}.\nOur analytic results are listed in the Appendix.\n\n\\section{NRQCD factorization formula}\n\\label{sec:two}\n\nInvoking the Weizs\\\"{a}cker--Williams approximation and the factorization\ntheorem of the QCD parton model, the cross sections for the photoproduction or\nhadroproduction of the hadron $H=J\/\\psi,\\chi_{cJ},\\psi^\\prime$ may be written\nas\n\\cite{Butenschoen:2011yh}\n\\begin{equation}\\label{xs}\n\\sigma(AB\\to H+X)\n=\\sum_{i,j,k} \\int dx_1 dy_1 d x_2\\, f_{i\/A}(x_1)f_{j\/i}(y_1)f_{k\/B}(x_2)\n\\hat{\\sigma}(jk\\to H +X),\n\\end{equation}\nwhere $f_{i\/A}(x)$ is the parton distribution function (PDF) of the parton $i$\nin the hadron $A=p,\\bar{p}$ or the flux function of the photon $i=\\gamma$ in\nthe charged lepton $A=e^-,e^+$, $f_{j\/i}(y_1)$ is $\\delta_{ij}\\delta(1-y_1)$ or\nthe PDF of the parton $j$ in the resolved photon $i=\\gamma$, and\n$\\hat{\\sigma}(jk\\to H+X)$ is the partonic cross section.\nIn NRQCD through relative order $v^2$, the latter is factorized as\n\\cite{Bodwin:1994jh}\n\\begin{equation}\n\\hat{\\sigma}(ij\\to H +X)=\n\\sum_{n}\\left(\n\\frac{F_{ij}(n)}{m_c^{d_{\\mathcal{O}(n)}-4}}\\langle\\mathcal{O}^{H}(n)\\rangle+\n\\frac{G_{ij}(n)}{m_c^{d_{\\mathcal{P}(n)}-4}}\\langle\\mathcal{P}^{H}(n)\\rangle\n\\right),\n\\label{eq:fg}\n\\end{equation}\nwhere $\\mathcal{O}^H(n)$ is the four-quark operator pertaining to the\ntransition $n\\to H$ at leading order (LO) in $v$, with mass dimension\n$d_{\\mathcal{O}(n)}$; $\\mathcal{P}^H(n)$ is related to its $v^2$ correction and\ncarries mass dimension $d_{\\mathcal{P}(n)}=d_{\\mathcal{O}(n)}+2$; and $F_{ij}(n)$ and\n$G_{ij}(n)$ are the appropriate SDCs of the partonic subprocesses\n$i+j\\to c\\bar{c}(n)+X$.\nWorking in the fixed-flavor-number scheme, the parton $i$ runs over the gluon\n$g$ and the light quarks $q=u,d,s$ and antiquarks $\\bar{q}$.\n\nAccording to the velocity scaling rules \\cite{Lepage:1992tx}, the leading\ncontributions to direct 
$J\/\\psi$ and $\\psi^\\prime$ production are due to the\n${}^3\\!S_1^{[1]}$, ${}^3\\!S_1^{[8]}$, ${}^1\\!S_0^{[8]}$, and ${}^3\\!P_J^{[8]}$\nchannels, and those to direct $\\chi_{cJ}$ production are due to the\n${}^3\\!P_J^{[1]}$ and ${}^3\\!S_1^{[8]}$ channels.\nAccordingly, prompt $J\/\\psi$ photoproduction and hadroproduction proceeds at LO\nthrough the partonic subprocesses\n\\begin{eqnarray}\ng+\\gamma &\\to& c\\bar{c}({}^3\\!S_1^{[1,8]},{}^1\\!S_0^{[8]},{}^3\\!P_J^{[8]})+g,\n\\nonumber\\\\\nq(\\bar{q})+\\gamma &\\to& c\\bar{c}({}^3\\!S_1^{[8]},{}^1\\!S_0^{[8]},{}^3\\!P_J^{[8]})+q(\\bar{q}),\n\\nonumber\\\\\ng+g &\\to& c\\bar{c}({}^3\\!S_1^{[1,8]},{}^1\\!S_0^{[8]},{}^3\\!P_J^{[1,8]})+g,\n\\nonumber\\\\\nq(\\bar{q})+g &\\to& c\\bar{c}({}^3\\!S_1^{[8]},{}^1\\!S_0^{[8]},{}^3\\!P_J^{[1,8]})+q(\\bar{q}),\n\\nonumber\\\\\n\\bar{q}+q &\\to& c\\bar{c}({}^3\\!S_1^{[8]},{}^1\\!S_0^{[8]},{}^3\\!P_J^{[1,8]})+g.\n\\label{eq:sub}\n\\end{eqnarray}\nWe adopt the definitions of the relevant four-quark operators\n$\\mathcal{O}^H(n)$ from the literature \\cite{Bodwin:1994jh,Brambilla:2008zg}\nand define the corresponding four-quark operators $\\mathcal{P}^H(n)$ as:\n\\begin{eqnarray}\n\\mathcal{P}^{J\/\\psi}(^{3}S_{1}^{[1]})&=&\\chi^{\\dagger}\\boldsymbol{\\sigma}^{i}\\psi\n(a^{\\dagger}_{J\/\\psi}a_{J\/\\psi})\\psi^{\\dagger}\\boldsymbol{\\sigma}^{i}\\left(-\\frac{i}{2}\n\\overleftrightarrow{\\boldsymbol{D}}\\right)^{2}\\chi+{\\rm H.c.},\n\\nonumber\\\\\n\\mathcal{P}^{J\/\\psi}(^{1}S_{0}^{[8]})&=&\\chi^{\\dagger}T^{a}\\psi\n(a^{\\dagger}_{J\/\\psi}a_{J\/\\psi})\\psi^{\\dagger}T^{a}\\left(-\\frac{i}{2}\n\\overleftrightarrow{\\boldsymbol{D}}\\right)^{2}\\chi+{\\rm 
H.c.},\n\\nonumber\\\\\n\\mathcal{P}^{J\/\\psi}(^{3}P_{J}^{[8]})&=&\\chi^{\\dagger}\\boldsymbol{\\sigma}^{i}\n\\left(-\\frac{i}{2}\\overleftrightarrow{\\boldsymbol{D}^{j}}\\right)T^{a}\\psi\n(a^{\\dagger}_{J\/\\psi}a_{J\/\\psi})\\psi^{\\dagger}\\boldsymbol{\\sigma}^{i}T^{a}\n\\left(-\\frac{i}{2}\\overleftrightarrow{\\boldsymbol{D}^{j}}\\right)\n\\left(-\\frac{i}{2}\n\\overleftrightarrow{\\boldsymbol{D}}\\right)^{2}\\chi+{\\rm H.c.},\n\\nonumber\\\\\n\\mathcal{P}^{\\chi_{c0}}(^{3}P_{0}^{[1]})&=&\\frac{1}{3}\\chi^{\\dagger}\n\\left(-\\frac{i}{2}\\overleftrightarrow{\\boldsymbol{D}}\\cdot\\boldsymbol{\\sigma}\n\\right)\\psi\n(a^{\\dagger}_{\\chi_{c0}}a_{\\chi_{c0}})\\psi^{\\dagger}\\left(-\\frac{i}{2}\n\\overleftrightarrow{\\boldsymbol{D}}\\cdot\\boldsymbol{\\sigma}\\right)\n\\left(-\\frac{i}{2}\n\\overleftrightarrow{\\boldsymbol{D}}\\right)^{2}\\chi+{\\rm H.c.},\n\\nonumber\\\\\n\\mathcal{P}^{\\chi_{c1}}(^{3}P_{1}^{[1]})&=&\\frac{1}{2}\\chi^{\\dagger}\n\\left(-\\frac{i}{2}\\overleftrightarrow{\\boldsymbol{D}}\\times\\boldsymbol{\\sigma}\n\\right)\\psi\n(a^{\\dagger}_{\\chi_{c1}}a_{\\chi_{c1}})\\psi^{\\dagger}\\left(-\\frac{i}{2}\n\\overleftrightarrow{\\boldsymbol{D}}\\times\\boldsymbol{\\sigma}\\right)\n\\left(-\\frac{i}{2}\n\\overleftrightarrow{\\boldsymbol{D}}\\right)^{2}\\chi+{\\rm H.c.},\n\\nonumber\\\\\n\\mathcal{P}^{\\chi_{c2}}(^{3}P_{2}^{[1]})&=&\\frac{1}{2}\\chi^{\\dagger}\n\\left(-\\frac{i}{2}\\overleftrightarrow{\\boldsymbol{D}}^{(i}\\boldsymbol{\\sigma}^{j)}\n\\right)\n\\psi(a^{\\dagger}_{\\chi_{c2}}a_{\\chi_{c2}})\\psi^{\\dagger}\\left(-\\frac{i}{2}\n\\overleftrightarrow{\\boldsymbol{D}}^{(i}\\boldsymbol{\\sigma}^{j)}\\right)\n\\left(-\\frac{i}{2}\n\\overleftrightarrow{\\boldsymbol{D}}\\right)^{2}\\chi+{\\rm 
H.c.},\n\\nonumber\\\\\n\\mathcal{P}^{H}(^{3}S_{1}^{[8]})&=&\\chi^{\\dagger}\\boldsymbol{\\sigma}^{i}T^{a}\\psi\n(a^{\\dagger}_{H}a_{H})\\psi^{\\dagger}\\boldsymbol{\\sigma}^{i}T^{a}\\left(-\\frac{i}{2}\n\\overleftrightarrow{\\boldsymbol{D}}\\right)^{2}\\chi+{\\rm H.c.},\n\\nonumber\\\\\n\\mathcal{P}^{H}(^{3}S_{1}^{[8]},^{3}D_{1}^{[8]})&=&\\sqrt{\\frac{3}{5}}\\chi^{\\dagger}\n\\sigma^{i}T^{a}\\psi (a^{\\dagger}_{H}a_{H})\\psi^{\\dagger}\\boldsymbol{\\sigma}^{j}\n\\boldsymbol{K}^{ij}T^{a}\\chi+{\\rm H.c.},\n\\label{eq:pop}\n\\end{eqnarray}\nwhere\n$\\overleftrightarrow{\\boldsymbol{D}}^{(i}\\boldsymbol{\\sigma}^{j)}\n=(\\overleftrightarrow{\\boldsymbol{D}}^{i}\\boldsymbol{\\sigma}^{j}\n+\\overleftrightarrow{\\boldsymbol{D}}^{j}\\boldsymbol{\\sigma}^{i})\/2\n-\\overleftrightarrow{\\boldsymbol{D}}\\cdot\\boldsymbol{\\sigma}\\delta^{ij}\/3$ and\n$\\boldsymbol{K}^{ij}\n=(-i\/2)^2(\\overleftrightarrow{\\boldsymbol{D}}^{i}\n\\overleftrightarrow{\\boldsymbol{D}}^{j}\n-\\overleftrightarrow{\\boldsymbol{D}}^{2}\\delta^{ij}\/3)$.\nOur definition of the $S$-$D$ mixing operator\n$\\mathcal{P}^{H} (^{3}S_{1}^{[8]},^{3}D_{1}^{[8]})$ differs from that in\nRefs.~\\cite{Bodwin:1994jh,Brambilla:2008zg}, where a linear combination of\n$\\mathcal{P}^{H}(^{3}S_{1}^{[8]})$ and\n$\\mathcal{P}^{H}(^{3}S_{1}^{[8]},^{3}D_{1}^{[8]})$ in Eq.~(\\ref{eq:pop}) is\nused instead.\nAt order $v^2$, there are no heavy-quark spin symmetries among the\n$\\mathcal{O}^H(n)$ operators, but they still hold among the $\\mathcal{P}^H(n)$\noperators, yielding the 
relationships\n\\begin{eqnarray}\n\\langle\\mathcal{P}^{J\/\\psi}(^{3}P_{0}^{[8]})\\rangle\n&=&\\frac{1}{2J+1}\\langle\\mathcal{P}^{J\/\\psi}(^{3}P_{J}^{[8]})\\rangle,\n\\nonumber\\\\\n\\langle\\mathcal{P}^{\\chi_{c0}}(^{3}S_{1}^{[8]})\\rangle\n&=&\\frac{1}{2J+1}\\langle\\mathcal{P}^{\\chi_{cJ}}(^{3}S_{1}^{[8]})\\rangle,\n\\nonumber\\\\\n\\langle\\mathcal{P}^{\\chi_{c0}}(^{3}P_{0}^{[1]})\\rangle\n&=&\\frac{1}{2J+1}\\langle\\mathcal{P}^{\\chi_{cJ}}(^{3}P_{J}^{[1]})\\rangle,\n\\nonumber\\\\\n\\langle\\mathcal{P}^{\\chi_{c0}}(^{3}S_{1}^{[8]},^{3}D_{1}^{[8]})\\rangle\n&=&-\\frac{2}{3}\\langle\\mathcal{P}^{\\chi_{c1}}(^{3}S_{1}^{[8]},^{3}D_{1}^{[8]})\\rangle\n=2\\langle\\mathcal{P}^{\\chi_{c2}}(^{3}S_{1}^{[8]},^{3}D_{1}^{[8]})\\rangle.\n\\end{eqnarray}\n\nThe SDCs $F_{ij}(n)$ and $G_{ij}(n)$ may be obtained perturbatively by matching\nthe QCD and NRQCD calculations via the condition\n\\begin{equation}\\label{mathching}\n\\sigma(c\\bar{c})|_{\\mathrm{pert\\;QCD}}\n=\\sum_{n}\\frac{F_{n}(\\Lambda)}{m_c^{d_{\\mathcal{O}(n)}-4}}\n\\langle0|\\mathcal{O}_{n}^{c\\bar{c}}(\\Lambda)|0\\rangle|_{\\mathrm{pert\\;NRQCD}}\n+\\sum_{n}\\frac{G_{n}(\\Lambda)}{m_c^{d_{\\mathcal{P}(n)}-4}}\n\\langle0|\\mathcal{P}_{n}^{c\\bar{c}}(\\Lambda)|0\\rangle|_{\\mathrm{pert\\;NRQCD}}.\n\\end{equation}\nThe left-hand side of Eq.~(\\ref{mathching}) may be computed directly using the\nspinor projection method developed in Ref.~\\cite{Kuhn:1979bb}, by which the\nproduct of Dirac spinors $v(P\/2-q)\\bar{u}(P\/2+q)$ is projected onto the\nconsidered $^{2S+1}L_J$ state in a Lorentz-covariant form.\nIn an arbitrary reference frame, the four-momenta $P\/2+q$ and $P\/2-q$ of the\nheavy quark and antiquark may be related to those in the quarkonium rest frame\nas\n\\begin{equation}\n\\frac{P}{2}+q=L\\left(\\frac{P_r}{2}+\\boldsymbol{q}\\right),\n\\qquad\n\\frac{P}{2}-q=L\\left(\\frac{P_r}{2}-\\boldsymbol{q}\\right),\n\\end{equation}\nwhere $P^{\\mu}_r=(2E_q,\\boldsymbol{0})$, 
$E_q=\\sqrt{m_c^2+\\boldsymbol{q}^2}$,\n$2\\boldsymbol{q}$ is the relative three-momentum between the two quarks in the\nquarkonium rest frame, and $L^{\\mu}_{\\phantom{\\mu}\\nu}$ is the Lorentz\ntransformation matrix for the boost from the quarkonium rest frame to the\nconsidered reference frame.\nTo all orders in $v^2$, the projectors onto the spin-singlet $(S=0)$ and\nspin-triplet $(S=1)$ states in the quarkonium rest frame read\n\\cite{Bodwin:2002hg}\n\\begin{eqnarray}\n\\sum_{\\lambda_{1},\\lambda_{2}}v(-\\boldsymbol{q},\\lambda_{2})\n\\bar{u}(\\boldsymbol{q},\\lambda_{1})\n\\left\\langle\\frac{1}{2},\\lambda_{1};\\frac{1}{2},\\lambda_{2}|0,0\\right\\rangle\n&=&\\frac{E_q+m_c}{\\sqrt{2}}\n\\left(1-\\frac{\\boldsymbol{\\alpha}\\cdot\\boldsymbol{q}}{E_q+m_c}\\right)\n\\nonumber\\\\\n&&{}\\times\n\\gamma^{5}\\frac{1+\\gamma^{0}}{2}\n\\left(1+\\frac{\\boldsymbol{\\alpha}\\cdot\\boldsymbol{q}}{E_q+m_c}\\right)\\gamma^{0},\n\\nonumber\\\\\n\\sum_{\\lambda_{1},\\lambda_{2}}v(-\\boldsymbol{q},\\lambda_{2})\n\\bar{u}(\\boldsymbol{q},\\lambda_{1})\n\\left\\langle\\frac{1}{2},\\lambda_{1};\\frac{1}{2},\\lambda_{2}|1,\n\\boldsymbol{\\epsilon}\\right\\rangle\n&=&\\frac{E_q+m_c}{\\sqrt{2}}\n\\left(1-\\frac{\\boldsymbol{\\alpha}\\cdot\\boldsymbol{q}}{E_q+m_c}\\right)\n\\nonumber\\\\\n&&{}\\times\n\\boldsymbol{\\alpha}\\cdot\\boldsymbol{\\epsilon}\\frac{1+\\gamma^{0}}{2}\n\\left(1+\\frac{\\boldsymbol{\\alpha}\\cdot\\boldsymbol{q}}{E_q+m_c}\\right)\\gamma^{0}.\n\\end{eqnarray}\nIn an arbitrary reference frame, they 
become\n\\begin{eqnarray}\n\\sum_{\\lambda_{1},\\lambda_{2}}v(-q,\\lambda_{2})\\bar{u}(q,\\lambda_{1})\n\\left\\langle\\frac{1}{2},\\lambda_{1};\\frac{1}{2},\\lambda_{2}|0,0\\right\\rangle\n&=&\\frac{-1}{2\\sqrt{2}(E_q+m_c)}\n\\left(\\frac{\\slashed{P}}{2}-\\slashed{q}-m_c\\right)\n\\nonumber\\\\\n&&{}\\times\n\\gamma^{5}\\frac{\\slashed{P}+2E_q}{2E_q}\n\\left(\\frac{\\slashed{P}}{2}+\\slashed{q}+m_c\\right),\n\\nonumber\\\\\n\\sum_{\\lambda_{1},\\lambda_{2}}v(-q,\\lambda_{2})\\bar{u}(q,\\lambda_{1})\n\\left\\langle\\frac{1}{2},\\lambda_{1};\\frac{1}{2},\\lambda_{2}|1,\\epsilon\n\\right\\rangle\n&=&\\frac{-1}{2\\sqrt{2}(E_q+m_c)}\n\\left(\\frac{\\slashed{P}}{2}-\\slashed{q}-m_c\\right)\n\\nonumber\\\\\n&&{}\\times\n\\slashed{\\epsilon}\\frac{\\slashed{P}+2E_q}{2E_q}\n\\left(\\frac{\\slashed{P}}{2}+\\slashed{q}+m_c\\right).\n\\end{eqnarray}\nNote that the normalization of the Dirac spinors is $\\bar{u}u=-\\bar{v}v=m_c^2$.\nWith the help of the spinor projection method, the partonic scattering\namplitude $M(ij\\to c\\bar{c}(n)+X)$ may then be expanded in the relative\nmomentum $q$.\nTo this end, we write\n\\begin{equation}\nM(ij\\to c\\bar{c}(n)+X)=\\sqrt{\\frac{m_c}{E_q}}A(q),\n\\label{eq:exp}\n\\end{equation}\nwhere the factor $\\sqrt{m_c\/E_q}$ stems from the relativistic normalization of\nthe $c\\bar{c}(n)$ state and\n\\begin{eqnarray}\nA(q)&=&\\sum_{\\lambda_{1},\\lambda_{2}}\\sum_{k,l}\n\\left\\langle\\frac{1}{2},\\lambda_{1};\\frac{1}{2},\\lambda_{2}|S,S_z\\right\\rangle\n\\langle3,k;\\bar{3},l|1(8,a)\\rangle\n\\nonumber\\\\\n&&{}\\times\\mathcal{A}\\left(ij\\to c_{\\lambda_1,k}\\left(\\frac{P}{2}+q\\right)\n\\bar{c}_{\\lambda_2,l}\\left(\\frac{P}{2}-q\\right)+X\\right).\n\\label{eq:A}\n\\end{eqnarray}\nHere, $\\langle3,k;\\bar{3},l|1\\rangle=\\delta_{kl}\/\\sqrt{N_c}$ and\n$\\langle3,k;\\bar{3},l|8,a\\rangle=\\sqrt{2}\\,T^{a}_{kl}$ are the color-SU(3)\nClebsch--Gordan coefficients for the $c\\bar{c}(n)$ pair projected onto CS and CO\nstates, respectively, 
and\n$\\mathcal{A}(ij\\to c_{\\lambda_1,k}(P\/2+q)\\bar{c}_{\\lambda_2,l}(P\/2-q)+X)$ is\nthe standard Feynman amplitude.\nDefining\n\\begin{equation}\nA_{\\alpha_1\\cdots\\alpha_N}(0)\n=\\left.\\frac{\\partial^NA(q)}{\\partial q^{\\alpha_1}\\cdots\\partial q^{\\alpha_N}}\n\\right|_{q=0},\n\\end{equation}\nwe may write the expansion of Eq.~(\\ref{eq:A}) in $q$ as\n\\begin{equation}\nA(q)=A(0)+q^{\\alpha_1}A_{\\alpha_1}(0)\n+\\frac{1}{2}q^{\\alpha_1}q^{\\alpha_2}A_{\\alpha_1\\alpha_2}(0)\n+\\frac{1}{6}q^{\\alpha_1}q^{\\alpha_2}q^{\\alpha_3}A_{\\alpha_1\\alpha_2\\alpha_3}(0)\n+\\cdots.\n\\label{eq:Aq}\n\\end{equation}\nFor $S$- and $D$-wave states, only the terms with even powers in $q$\ncontribute, while for $P$-wave states it is the other way around.\nTo calculate the relativistic corrections for the production of $S$- and\n$P$-wave states, we need to decompose the higher-rank tensor products of $q$\nfactors in Eq.~(\\ref{eq:Aq}) into their irreducible representations and to\nretain the $L=S$ and $L=P$ terms, respectively.\nThrough order $v^2$, we thus obtain,\nfor $n={}^3\\!S_{1}^{[1]},{}^3\\!S_{1}^{[8]},{}^1\\!S_{0}^{[8]}$,\n\\begin{equation}\nM(i j\\to c\\bar{c}(n)+X)=\\sqrt{\\frac{m_c}{E_q}}\n\\left[A(0)+\\frac{|\\boldsymbol{q}|^2}{6} \\Pi^{\\alpha_1\\alpha_2}A_{\\alpha_1\\alpha_2}(0)\n\\right],\n\\end{equation}\nand, for $n={}^3\\!P_{J}^{[1]},{}^3\\!P_{J}^{[8]}$,\n\\begin{equation}\nM(i j\\to c\\bar{c}(n)+X)=\\sqrt{\\frac{m_c}{E_q}}q^{\\alpha_1}\\left[A_{\\alpha_1}(0)\n-\\frac{|\\boldsymbol{q}|^2}{30}\n\\Pi_{\\alpha_1}^{\\phantom{\\alpha_1}\\alpha_2\\alpha_3\\alpha_4}A_{\\alpha_2\\alpha_3\\alpha_4}(0)\\right],\n\\end{equation}\nwhere\n$\\Pi^{\\alpha_1\\alpha_2}\n=-g^{\\alpha_1\\alpha_2}+P^{\\alpha_1}P^{\\alpha_2}\/(4E_q^2)$ and\n$\\Pi^{\\alpha_1\\alpha_2\\alpha_3\\alpha_4}=\\Pi^{\\alpha_1\\alpha_2}\\Pi^{\\alpha_3\\alpha_4}\n+\\Pi^{\\alpha_1\\alpha_3}\\Pi^{\\alpha_2\\alpha_4}+\\Pi^{\\alpha_2\\alpha_3}\\Pi^{\\alpha_1\\alpha_4}$.\nIn the $D$-wave case, we only 
need the amplitude at LO in $v^2$, which is\n\\begin{equation}\nM(i j\\to c\\bar{c}(^3D_J^{[8]})+X)\n=\\sqrt{\\frac{m_c}{E_q}}\\,\\frac{1}{2}q^{\\alpha_1}q^{\\alpha_2}A_{\\alpha_1\\alpha_2}(0).\n\\end{equation}\n\nWe are now in a position to perform the matching between the calculations in\nNRQCD and full QCD.\nWe thus obtain\n\\begin{eqnarray}\n\\frac{F_{ij}(n)}{m_c^{d_{\\mathcal{O}(n)}-4}}&=&\\frac{1}{2s}\n\\int d\\mathrm{LIPS}|\\overline{M}_{ij}(n)|^2,\n\\nonumber\\\\\n\\frac{G_{ij}(n)}{m_c^{d_{\\mathcal{P}(n)}-4}}&=&\\frac{1}{2s}\\int d\\mathrm{LIPS}\n\\left(K|\\overline{M}_{ij}(n)|^2+|\\overline{N}_{ij}(n)|^2\\right),\n\\label{eq:fg1}\n\\end{eqnarray}\nwhere $\\sqrt{s}$ is the invariant mass of the incoming partons and\n\\begin{eqnarray}\n|\\overline{M}_{ij}(n)|^2&=&\n\\overline{\\sum_{L_z}}|A(0)|^2\\bigg|_{\\boldsymbol{q}^2=0},\n\\nonumber\\\\\n|\\overline{N}_{ij}(n)|^2&=&\n\\overline{\\sum_{L_z}}\n\\left\\{\\frac{\\partial}{\\partial\\boldsymbol{q}^2}\n\\left[\\frac{m_c}{E_q}|A(0)|^2\\right]\n+\\frac{1}{3}\\Pi^{\\alpha_1\\alpha_2}\\mathop{\\mathrm{Re}}\\nolimits[A^*(0)A_{\\alpha_1\\alpha_2}(0)]\\right\\}\n\\bigg|_{\\boldsymbol{q}^2=0},\n\\label{eq:mnd}\n\\end{eqnarray}\nfor 
$n={}^3\\!S_{1}^{[1]},{}^3\\!S_{1}^{[8]},{}^1\\!S_{0}^{[8]}$;\n\\begin{eqnarray}\n|\\overline{M}_{ij}(n)|^2&=&\\overline{\\sum_{S_z,L_z,J_z}}\n|\\langle1,L_z;1,S_z|J,J_z\\rangle\n\\epsilon^{\\ast\\alpha}_{L_z}A_{\\alpha}(0)|^2\\bigg|_{\\boldsymbol{q}^2=0},\n\\nonumber\\\\\n|\\overline{N}_{ij}(n)|^2&=&\\left\\{\n\\overline{\\sum_{S_z,L_z,J_z}}\\frac{\\partial}{\\partial\\boldsymbol{q}^2}\n\\left[\\frac{m_c}{E_q}|\\langle1,L_z;1,S_z|J,J_z\\rangle\n\\epsilon^{\\ast\\alpha}_{L_z}A_{\\alpha}(0)|^2\\right]\n\\right.\\nonumber\\\\\n&&{}-\\frac{1}{15}\\overline{\\sum_{S_z,L_z,J_z}}\\langle1,L_z;1,S_z|J,J_z\\rangle\n\\overline{\\sum_{S^{\\prime}_z,L^{\\prime}_z,J^{\\prime}_z}}\n\\langle1,L^{\\prime}_z;1,S^{\\prime}_z|J,J^{\\prime}_z\\rangle\n\\nonumber\\\\\n&&{}\\times\\left.\\vphantom{\\overline{\\sum_{S_z,L_z,J_z}}}\n\\Pi_\\alpha^{\\phantom{\\alpha}\\alpha_1\\alpha_2\\alpha_3}\n\\mathop{\\mathrm{Re}}\\nolimits[\\epsilon^{\\ast\\alpha}_{L_z}\\epsilon^{\\beta}_{L^{\\prime}_z}\nA_{\\beta}^*(0)A_{\\alpha_1\\alpha_2\\alpha_3}(0)]\\right\\}\\bigg|_{\\boldsymbol{q}^2=0},\n\\qquad\n\\end{eqnarray}\nfor $n={}^3\\!P_{J}^{[1]},{}^3\\!P_{J}^{[8]}$; and\n\\begin{equation}\n|\\overline{N}_{ij}(n)|^2=\\overline{\\sum_{S_z,L_z,J_z}}\n\\langle2,L_z;1,S_z|1,J_z\\rangle\n\\mathop{\\mathrm{Re}}\\nolimits[\\epsilon^{\\ast\\alpha\\beta}_{L_z}A^*(0)A_{\\alpha\\beta}(0)]\n\\bigg|_{\\boldsymbol{q}^2=0},\n\\label{eq:mnm}\n\\end{equation}\nfor the ${}^3\\!S_1^{[8]}$-${}^3\\!D_1^{[8]}$ mixing term.\nHere, $\\epsilon_{L_z}^{\\alpha}$ ($\\epsilon_{L_z}^{\\alpha\\beta}$) is the\npolarization four-vector (four-tensor) for $L=P$ ($D$), the symbol $\\sum$\nalso implies the summation over the polarizations of the other external\npartons, and the bar implies the average over the spins and colors of the\nincoming partons and those of the $c\\bar{c}(n)$ state.\nThe factor $K$ in Eq.~(\\ref{eq:fg1}) contains the $v^2$ corrections to the\nphase space and depends on the kinematic variables that we are interested 
in.\nIn the cases under consideration here, we have $K=-4\/(s-4m_c^2)$. \nWe generate the Feynman diagrams using the FeynArts package\n\\cite{Kublbeck:1990xc} and compute the squared amplitudes using the FeynCalc\npackage \\cite{Mertig:1990an}.\nIn the Appendix, we present our results for $|\\overline{N}_{ij}(n)|^2$.\n\n\nWe reproduce the well-known results for $|\\overline{M}_{ij}(n)|^2$, which were\ncalled $|\\mathcal{A}|^2$ in Ref.~\\cite{Cho:1995vh} and\n$|\\mathcal{M}^{\\prime}|^2$ in Ref.~\\cite{Ko:1996xw}.\nWe may also compare some of our results for $|\\overline{N}_{ij}(n)|^2$ with the\nliterature.\nThe $v^2$ corrections to direct $J\/\\psi$ photoproduction were first studied in\nRef.~\\cite{Jung:1993cd} within the relativistic quark model, which accounts for\nthe CS contributions.\nWe reproduce Eq.~(23) in Ref.~\\cite{Jung:1993cd} if we do not expand\n$E_q$, but set $E_q=m_{J\/\\psi}\/2$.\nThe relativistic corrections to direct $J\/\\psi$ hadroproduction were considered\nin Ref.~\\cite{Xu:2012am} within NRQCD.\nWe find agreement with Ref.~\\cite{Xu:2012am}, except for some typographical\nerrors in Eq.~(A4) therein, which corresponds to\n$|\\overline{N}_{gg}({}^3\\!S_1^{[8]})|^2$ in our notation.\nFurthermore, the ${}^3\\!S_1^{[8]}$-${}^3\\!D_1^{[8]}$ mixing contribution was\nnot considered there.\nApart from correcting the misprints in Eq.~(A4) of Ref.~\\cite{Xu:2012am}, our\npaper reaches beyond the available literature by studying the CO contributions\nto direct $J\/\\psi$ photoproduction, the ${}^3\\!S_1^{[8]}$-${}^3\\!D_1^{[8]}$\nmixing contribution to direct $J\/\\psi$ hadroproduction, and the feed-down\ncontributions to $J\/\\psi$ hadroproduction.\n\n\\section{Phenomenological results}\n\\label{sec:three}\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{ccc}\n\\includegraphics[scale=0.50]{pt2dis.eps} &\n\\includegraphics[scale=0.50]{wwdis.eps} &\n\\includegraphics[scale=0.50]{zzdis.eps} \\\\\n(a) & (b) & (c) \\\\\n\\end{tabular}\n\\caption{Ratios 
$R(n)$ for $J\/\\psi$ direct photoproduction under HERA~II\nkinematic conditions as functions of (a) $p_T^2$, (b) $W$, and (c) $z$.}\n\\label{fig:one}\n\\end{center}\n\\end{figure}\n\nWe are now in a position to investigate the phenomenological significance of\nthe $v^2$ corrections in prompt $J\/\\psi$ photoproduction and hadroproduction.\nIn our numerical analysis, we use $m_c=1.5$~GeV, $\\alpha=1\/137.036$, the LO\nformula for $\\alpha_s^{(n_f)}(\\mu_r)$ with $n_f=4$ active quark flavors and\nasymptotic scale parameter $\\Lambda_\\mathrm{QCD}^{(4)}=215$~MeV\n\\cite{Pumplin:2002vw}, the CTEQ6L1 set for proton PDFs \\cite{Pumplin:2002vw},\nthe photon flux function given in Eq.~(5) of Ref.~\\cite{Kniehl:1996we} with\n$Q_{\\rm max}^2=2.5~\\mathrm{GeV}^2$ \\cite{Aaron:2010gz}, and the choice\n$\\mu_r=\\mu_f=\\sqrt{p_T^2+4m_c^2}$ for the renormalization and factorization\nscales.\nAccording to Eqs.~(\\ref{xs}) and (\\ref{eq:fg}), the hadronic cross sections of\nthe direct photoproduction and hadroproduction of the charmonia\n$H=J\/\\psi,\\chi_{cJ},\\psi^\\prime$, differential in some observable $x$, may be\ngenerically written as\n\\begin{equation}\n\\frac{d\\sigma}{dx}=\\sum_n\\left(\n\\frac{dF(n)}{dx}\\,\\frac{\\langle\\mathcal{O}^{H}(n)\\rangle}\n{m_c^{d_{\\mathcal{O}(n)}-4}}\n+\\frac{dG(n)}{dx}\\,\\frac{\\langle\\mathcal{P}^{H}(n)\\rangle}\n{m_c^{d_{\\mathcal{P}(n)}-4}}\\right),\n\\end{equation}\nwhere it is understood that $dF(n)\/dx=0$ if $n$ stands for\n${}^3\\!S_1^{[8]}$-${}^3\\!D_1^{[8]}$ mixing.\nFor all other channels $n$, the relative $v^2$ corrections are\n$R(n)\\langle\\mathcal{P}^{H}(n)\\rangle\/(m_c^2\\langle\\mathcal{O}^{H}(n)\\rangle)$,\nwhere $R(n)=dG(n)\/dx\/(dF(n)\/dx)$ is a dimensionless ratio.\nTo standardize the numerical discussion, we also introduce the quantity\n$R(n)=dG(n)\/dx\/(dF({}^3\\!S_1^{[8]})\/dx)$ for the case when $n$ stands for\n${}^3\\!S_1^{[8]}$-${}^3\\!D_1^{[8]}$ mixing.\nAccording to the velocity scaling rules 
\\cite{Lepage:1992tx}, the weight\n$\\langle\\mathcal{P}^{H}(n)\\rangle\/(m_c^2\\langle\\mathcal{O}^{H}(n)\\rangle)$ of\n$R(n)$ is of order $v^2$, which is approximately 0.23 for charmonium\n\\cite{Bodwin:2007fz,Guo:2011tz}.\nFor definiteness, we ignore these weights in the following and concentrate on\nthe SDC ratios $R(n)$ instead.\nFurthermore, we limit ourselves to the direct production of $J\/\\psi$ mesons and\ntheir production via the feed down from the $\\chi_{cJ}$ mesons.\nIn the latter case, the branching fractions $B(\\chi_{cJ}\\to J\/\\psi+X)$ drop out\nin the ratios $R(n)$.\nHowever, there remains a kinematic effect on the transverse momentum.\nSince $p_T\\equiv p_T^{J\/\\psi}\\gg M_{\\chi_{cJ}}-M_{J\/\\psi}$, we may approximate\n$p_T^{J\/\\psi}=p_T^{\\chi_{cJ}}M_{J\/\\psi}\/M_{\\chi_{cJ}}$, with\n$M_{J\/\\psi}=3.097~\\mathrm{GeV}$, $M_{\\chi_{c0}}=3.415~\\mathrm{GeV}$,\n$M_{\\chi_{c1}}=3.511~\\mathrm{GeV}$, and $M_{\\chi_{c2}}=3.556~\\mathrm{GeV}$\n\\cite{Beringer:1900zz}.\n\nWe consider three typical experimental environments, namely, Run~II at HERA,\nRun~I at the Tevatron, and the LHCb setup at the LHC.\nAt HERA~II, the cross section of prompt $J\/\\psi$ photoproduction was measured\nat center-of-mass energy $\\sqrt{S}=319~\\mathrm{GeV}$ differential in $p_T^2$,\n$W=\\sqrt{(p_p+p_{\\gamma})^2}$, and\n$z=p_{J\/\\psi}\\cdot p_{p}\/(p_{\\gamma}\\cdot p_{p})$ \\cite{Aaron:2010gz}, where\n$p_{p}$, $p_{\\gamma}$, and $p_{J\/\\psi}$ are the four-momenta of the proton,\nphoton, and $J\/\\psi$ meson, respectively, imposing in turn two of the\nacceptance cuts $1~\\mathrm{GeV}^27~\\mathrm{GeV}$ say, then the inclusion of the $v^2$ corrections would not\nimprove the quality of the fit, but rather induce strong correlations between\n$\\langle\\mathcal{O}^{H}(n)\\rangle$ and $\\langle\\mathcal{P}^{H}(n)\\rangle$ for\neach $n$, both in sign and magnitude.\nTo reduce these correlations or even enable independent determinations of the\nLDMEs in the enlarged set, it 
is indispensable to reduce the low-$p_T$ cut in\nhadroproduction, to $p_T>3~\\mathrm{GeV}$ say, and to include photoproduction\ndata in the fit.\nIn a fit with such a low-$p_T$ cut in hadroproduction, the inclusion of the\n$v^2$ corrections might improve the goodness of fit: the notorious lack of\nturnover of the NLO NRQCD predictions in the low-$p_T$ region, which causes\nthem to overshoot the experimental data there, might be cured at least to some\nextent because all the ratios $R(n)$ are amplified as $p_T$ is decreased.\nIn conclusion, the addition of $v^2$ corrections is expected to have a\nsignificant impact on state-of-the-art determinations of CO LDMEs through\nglobal fits to prompt $J\/\\psi$ production data and might improve the goodness\nof such fits.\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics[scale=0.75]{ptdis_lhcb1.eps} &\n\\includegraphics[scale=0.75]{ptdis_lhcb2.eps} \\\\\n(a) & (b) \\\\\n\\end{tabular}\n\\caption{Ratios $R(n)$ for $J\/\\psi$ hadroproduction (a) in the direct mode\nand (b) via the feed down from $\\chi_{cJ}$ mesons under LHCb kinematic\nconditions as functions of $p_T$.}\n\\label{fig:three}\n\\end{center}\n\\end{figure}\n\n\\section{Conclusions}\n\\label{sec:four}\n\nWe performed a systematic study of the $v^2$ corrections to the yields of\nprompt $J\/\\psi$ photoproduction and hadroproduction, providing the relevant\nSDCs in analytic form.\nSpecifically, this includes the partonic subprocesses listed in\nEq.~(\\ref{eq:sub}) in combination with the $c\\bar{c}$ Fock states\n$n={}^3\\!S_1^{[1]},{}^3\\!P_J^{[1]},{}^3\\!S_1^{[8]},{}^1\\!S_0^{[8]},\n{}^3\\!P_J^{[8]}$, and ${}^3\\!S_1^{[8]}$-${}^3\\!D_1^{[8]}$ mixing.\nWe compared our results with the literature wherever previous results are\navailable.\nWe assessed the phenomenological significance of the $v^2$ corrections in the\nvarious channels by studying their ratios with respect to the corresponding LO\nresults.\nAssuming the relevant LDMEs 
$\\langle\\mathcal{O}^H(n)\\rangle$ and\n$\\langle\\mathcal{P}^H(n)\\rangle$ to obey the hierarchy predicted by the\nvelocity scaling rules \\cite{Lepage:1992tx}, we found that $v^2$ corrections of\nup to 50\\% are realistic.\nWe thus conclude that it is indispensable to include $v^2$ corrections in\ndeterminations of CO LDMEs by global data fits, the more so as the $v^2$\ncorrections in the various channels greatly differ between photoproduction and\nhadroproduction.\n\n\\section*{Acknowledgments}\n\nZ.-G.H. would like to thank Yu-Jie Zhang for a communication about the results\nin Ref.~\\cite{Xu:2012am}.\nThis work was supported in part by the German Federal Ministry for Education\nand Research BMBF through Grant No.~05H12GUE.\n\n\\section{Appendix}\n\nIn this appendix, we present analytic expressions for the nonvanishing SDCs\n$|\\overline{N}_{ij}(n)|^2$ appropriate for prompt $J\/\\psi$ photoporduction and\nhadroproduction evaluated according to Eqs.~(\\ref{eq:mnd})--(\\ref{eq:mnm}).\nFor the partonic subprocess $i(p_1)+j(p_2)\\to c\\bar{c}(P)+X$, the Mandelstam\nvariables are defined as $s=(p_1+p_2)^2$, $t=(p_1-P)^2|_{\\boldsymbol{q}=0}$, and\n$u=(p_2-P)^2|_{\\boldsymbol{q}=0}$ and satisfy $s+t+u=4m_c^2$.\nNote that our results for hadroproduction can also be applied to resolved\nphotoproduction.\n\n\\subsection{Photoproduction}\n\n$g+\\gamma\\to c\\bar{c}({}^3\\!S_1^{[1]})+g$:\n\n\n\\begin{eqnarray}\n&&|\\overline{N}_{g\\gamma}({}^3\\!S_1^{[1]})|^2\n=\\frac{8192 \\pi ^3 \\alpha \\alpha_s^2} {729 m_c\n\\left(s-4 m_c^2\\right)^3 \\left(4 m_c^2-t\\right)^3 (s+t)^3}\\nonumber\\\\\n&&\\times\\Big{[}2048 m_c^{10} \\left(3 s^2+2 s t+3 t^2\\right)-256 m_c^8\n\\left(5 s^3-8 s^2 t+4 s t^2+5 t^3\\right)-64 m_c^6 (15 s^4+68 s^3 t\\nonumber\\\\\n&&+62 s^2 t^2+32 s t^3+15 t^4)+16 m_c^4 \\left(21 s^5+87 s^4 t+130 s^3 t^2\n+106 s^2 t^3+51 s t^4+21 t^5\\right)\\nonumber\\\\\n&&-4 m_c^2 \\left(7 s^6+30 s^5 t+59 s^4 t^2+64 s^3 t^3+47 s^2 t^4+18 s t^5+7 t^6\\right)-s t 
(s+t)\n\\left(s^2+s t+t^2\\right)^2\\Big{]}\\nonumber\\\\\n\\end{eqnarray}\n\n$g+\\gamma\\to c\\bar{c}({}^3\\!S_1^{[8]})+g$:\n\n\\begin{equation}\n|\\overline{N}_{g\\gamma}({}^3\\!S_1^{[8]})|^2\n=\\frac{15}{8}|\\overline{N}_{g\\gamma}(^3S_1^{[1]})|^2\n\\end{equation}\n\n$g+\\gamma\\to c\\bar{c}({}^1\\!S_0^{[8]})+g$:\n\n\\begin{eqnarray}\n&&|\\overline{N}_{g\\gamma}({}^1\\!S_0^{[8]})|^2\n=\\frac{128 \\pi ^3 \\alpha \\alpha_s^2 s t}{9 m_c^3 \\left(4 m_c^2\n-s\\right)^3\\left(4 m_c^2-t\\right)^3 (s+t)^3 \\left(-4 m_c^2+s+t\\right)}\\nonumber\\\\\n&&\\times\\Big{[}131072 m_c^{14}-4096 m_c^{12} (23 s+11 t)+1024 m_c^{10} \\left(33 s^2\n+34 s t-3 t^2\\right)-256 m_c^8 (s+t) \\nonumber\\\\\n&&\\left(7 s^2-11 s t-29 t^2\\right)-64 m_c^6 \\left(13 s^4+94 s^3 t+170 s^2 t^2+142 s t^3\n+37 t^4\\right)+16 m_c^4 (23 s^5\\nonumber\\\\\n&&+112 s^4 t+214 s^3 t^2+226 s^2 t^3+112 s t^4+23 t^5)-4 m_c^2 (9 s^6+50 s^5 t\n+116 s^4 t^2+138 s^3 t^3\\nonumber\\\\\n&&+104 s^2 t^4+38 s t^5+9 t^6)+5 s t (s+t) \\left(s^2+s t+t^2\\right)^2\\Big{]}\n\\end{eqnarray}\n\n$g+\\gamma\\to c\\bar{c}({}^3\\!P_J^{[8]})+g$:\n\n\n\\begin{eqnarray}\n&&|\\overline{N}_{g\\gamma}({}^3\\!P_J^{[8]})|^2\n=\\frac{128 \\pi ^3 \\alpha \\alpha_s^2}{15 m_c^5\n\\left(s-4 m_c^2\\right)^4 \\left(t-4 m_c^2\\right)^4 (s+t)^4\n\\left(-4m_c^2+s+t\\right)}\\nonumber\\\\\n&&\\times\\Big{[}3145728 m_c^{18} (s+t) \\left(s^2+8 s t+t^2\\right)\n-65536 m_c^{16} (18 s^4+113 s^3 t+370 s^2 t^2-27 s t^3\\nonumber\\\\\n&&+18 t^4)-32768 m_c^{14} \\left(39 s^5+318 s^4 t+459 s^3 t^2+929 s^2 t^3\n+548 s t^4+39 t^5\\right)+16384 m_c^{12} \\nonumber\\\\\n&&\\left(63 s^6+519 s^5 t+1200 s^4 t^2+2043 s^3 t^3+1805 s^2 t^4\n+629 s t^5+63 t^6\\right)-2048 m_c^{10} (144 s^7 \\nonumber\\\\\n&&+1231s^6 t+3673 s^5 t^2+7074 s^4 t^3+7874 s^3 t^4+4843 s^2 t^5\n+1301 s t^6 +144 t^7)+512 m_c^8 \\nonumber\\\\\n&&\\left(75 s^8+662 s^7 t+2626 s^6 t^2+5981 s^5 t^3+7990 s^4 t^4+6371 s^3 t^5\n+3206 s^2 t^6+642 s t^7+75 t^8\\right)\\nonumber\\\\\n&&-128 m_c^6 (15 
s^9+162 s^8 t+964 s^7 t^2+2725 s^6 t^3\n+4364 s^5 t^4+4034 s^4 t^5+2635 s^3 t^6+1114 s^2 t^7\\nonumber\\\\\n&&+132 s t^8+15 t^9)+16 m_c^4 s t (27 s^8+502 s^7 t\n+1968 s^6 t^2+3774 s^5t^3+4338 s^4 t^4+3114 s^3 t^5\\nonumber\\\\\n&&+1668 s^2 t^6+502 s t^7+27 t^8)-16 m_c^2 s^2 t^2 (s+t)\n(25 s^6+118 s^5 t+259 s^4 t^2+310 s^3 t^3\\nonumber\\\\\n&&+244 s^2 t^4+103 s t^5+25 t^6)+31 s^3 t^3 (s+t)^2\n\\left(s^2+s t+t^2\\right)^2\\Big{]}\n\\end{eqnarray}\n\n$g+\\gamma\\to c\\bar{c}({}^3\\!S_1^{[8]},{}^3\\!D_1^{[8]})+g$:\n\n\\begin{eqnarray}\n&&|\\overline{N}_{g\\gamma}({}^3\\!S_1^{[8]},{}^3\\!D_1^{[8]})|^2\n=\\frac{2048 \\sqrt{5\/3}\n\\pi ^3 \\alpha \\alpha_{s}^{2}}{81 m_c \\left(s-4 m_c^2\\right)^4\n\\left(t-4 m_c^2\\right)^4 (s+t)^4} \\Big{[}32768 m_c^{14} (s+t) (3 s^2\\nonumber\\\\\n&&+5 s t+3 t^2)-4096 m_c^{12} \\left(s^2+5 s t+t^2\\right)\n\\left(11 s^2+18 s t+11 t^2\\right)-1024 m_c^{10} (s+t) (10 s^4\\nonumber\\\\\n&&-109 s^3 t-202 s^2 t^2-109 s t^3+10 t^4)+256 m_c^8\n(36 s^6-27 s^5 t-395 s^4 t^2-640 s^3 t^3\\nonumber\\\\\n&&-395 s^2 t^4-27 s t^5+36 t^6)-64 m_c^6 (s+t)\n(28 s^6-3 s^5 t-250 s^4 t^2-400 s^3 t^3-250 s^2 t^4\\nonumber\\\\\n&&-3 s t^5+28 t^6)+16 m_c^4 (7 s^8+4 s^7 t\n-100 s^6 t^2-323 s^5 t^3-440 s^4t^4-323 s^3 t^5-100 s^2 t^6\\nonumber\\\\\n&&+4 s t^7+7 t^8)+4 m_c^2 s t (s+t)\n\\left(6 s^6+22 s^5 t+51 s^4 t^2+66 s^3 t^3+51 s^2 t^4+22 s t^5+6 t^6\\right)\\nonumber\\\\\n&&-s^2 t^2 (s+t)^2 \\left(s^2+s t+t^2\\right)^2\\Big{]}\n\\end{eqnarray}\n\n\n\n$q(\\bar{q})+\\gamma\\to c\\bar{c}({}^3\\!S_1^{[8]})+q(\\bar{q})$:\n\n\n\\begin{eqnarray}\n&&|\\overline{N}_{q(\\bar{q})\\gamma}({}^3\\!S_1^{[8]})|^2\n=\\frac{8 \\pi ^3 \\alpha \\alpha_s^2 e_q^2}\n{27 m_c^5 s t \\left(4 m_c^2-s\\right)}\\nonumber\\\\\n&&\\times\\Big{[}640 m_c^6-160 m_c^4 (2 s+t)+4 m_c^2 \\left(27 s^2+10 s t\n+5 t^2\\right)-11 s \\left(s^2+t^2\\right)\\Big{]}\n\\end{eqnarray}\n\n$q(\\bar{q})+\\gamma\\to 
c\\bar{c}({}^1\\!S_0^{[8]})+q(\\bar{q})$:\n\n\n\\begin{eqnarray}\n|\\overline{N}_{q(\\bar{q})\\gamma}({}^1\\!S_0^{[8]})|^2\n=\\frac{256 \\pi ^3 \\alpha \\alpha_s^2 \\left[m_c^2 \\left(44 s^3\n+92 s^2 t-4 s t^2+44 t^3\\right)-5 s (s+t)\\left(s^2+t^2\\right)\\right]}{81 m_c^3\n \\left(s-4 m_c^2\\right) (s+t)^3 \\left(-4 m_c^2+s+t\\right)}\n\\end{eqnarray}\n\n$q(\\bar{q})+\\gamma\\to c\\bar{c}({}^3\\!P_J^{[8]})+q(\\bar{q})$:\n\n\n\\begin{eqnarray}\n&&|\\overline{N}_{q(\\bar{q})\\gamma}({}^3\\!P_J^{[8]})|^2\n=\\frac{256 \\pi ^3 \\alpha \\alpha_s^2}\n{135 m_c^5 \\left(s-4 m_c^2\\right) (s+t)^4 \\left(-4 m_c^2+s+t\\right)}\\nonumber\\\\\n&&\\times\\Big{[}512 m_c^6 \\left(5 s^2+26 s t+25 t^2\\right)+\n64 m_c^4 \\left(s^3-23 s^2 t-111 s t^2-19 t^3\\right)+4 m_c^2 (s+t)\\nonumber\\\\\n&& (57 s^3+169 s^2 t-3 s t^2+61 t^3)-31 s (s+t)^2 \\left(s^2+t^2\\right)\\Big{]}\n\\end{eqnarray}\n\n$q(\\bar{q})+\\gamma\\to c\\bar{c}({}^3\\!S_1^{[8]},{}^3\\!D_1^{[8]})+q(\\bar{q})$:\n\n\\begin{eqnarray}\n&&|\\overline{N}_{q(\\bar{q})\\gamma}({}^3\\!S_1^{[8]},{}^3\\!D_1^{[8]})|^2\n=-\\frac{16 \\sqrt{5\/3} \\pi ^3 \\alpha \\alpha_{s}^{2}\ne_q^2 \\left[32 m_c^4-8 m_c^2 (s+t)+s^2+t^2\\right]}{9 m_c^5 s t}\n\\end{eqnarray}\n\n\\subsection{Hadroproduction}\n\n$g+g\\to c\\bar{c}({}^3\\!S_1^{[1]})+g$:\n\n\n\\begin{eqnarray}\n|\\overline{N}_{gg}({}^3\\!S_1^{[1]})|^2=\\frac{15\\alpha_s}{128\\alpha}\n|\\overline{N}_{g\\gamma}({}^3\\!S_1^{[1]})|^2\n\\end{eqnarray}\n\n\n$g+g\\to c\\bar{c}({}^3\\!P_0^{[1]})+g$:\n\n\n\\begin{eqnarray}\n&&|\\overline{N}_{gg}({}^3\\!P_0^{[1]})|^2\n=\\frac{8 \\pi ^3 \\alpha_{s}^{3}}\n{45 m_c^5 s t \\left(4 m_c^2-s\\right)^5\n\\left(4 m_c^2-t\\right)^5 (s+t)^5 \\left(-4 m_c^2+s+t\\right)}\\nonumber\\\\\n&&\\times\\Big{[}251658240 (s+t)^3 \\left(11 s^2+30 t s+11 t^2\\right)\nm_c^{24}-4194304 (s+t)^2 (5 s+3 t) \\big{(}231 s^3+847 t s^2\\nonumber\\\\\n&&+921 t^2 s+385 t^3\\big{)} m_c^{22}+1048576 (s+t)^2\n\\big{(}4155 s^5+19680 t s^4+36066 t^2 s^3+34246 t^3 
s^2\\nonumber\\\\\n&&+17160 t^4 s+3795 t^5\\big{)} m_c^{20}-262144\n\\big{(}9735 s^8+69804 t s^7+219507 t^2 s^6+399776 t^3 s^5\\nonumber\\\\\n&&+466128 t^4 s^4+359956 t^5 s^3+181147 t^6 s^2+54864 t^7 s\n+7755 t^8\\big{)} m_c^{18}+65536 \\big{(}15750 s^9\\nonumber\\\\\n&&+119943 t s^8+411033 t^2 s^7+838246 t^3 s^6+1131460 t^4 s^5\n+1057300 t^5 s^4+688706 t^6 s^3\\nonumber\\\\\n&&+304133 t^7 s^2+83403 t^8 s+10890 t^9\\big{)} m_c^{16}\n-32768 \\big{(}8955 s^{10}+72307 t s^9+267272 t^2 s^8\\nonumber\\\\\n&&+600188 t^3 s^7+913993 t^4 s^6+992452 t^5 s^5+782713 t^6 s^4\n+445538 t^7 s^3+177082 t^8 s^2\\nonumber\\\\\n&&+44767 t^9 s+5445 t^{10}\\big{)} m_c^{14}+4096 \\big{(}14235 s^{11}\n+121663 t s^{10}+482477 t^2 s^9+1181531 t^3 s^8\\nonumber\\\\\n&&+1997582 t^4 s^7+2459484 t^5 s^6+2258884 t^6 s^5+1553862 t^7 s^4\n+791411 t^8 s^3+287757 t^9 s^2\\nonumber\\\\\n&&+67783 t^{10} s+7755 t^{11}\\big{)} m_c^{12}-1024 \\big{(}7575 s^{12}\n+69080 t s^{11}+295378 t^2 s^{10}+788288 t^3 s^9\\nonumber\\\\\n&&+1469633 t^4 s^8+2024628 t^5 s^7+2117532 t^6 s^6+1697668 t^7 s^5\n+1040113 t^8 s^4+480268 t^9 s^3\\nonumber\\\\\n&&+160638 t^{10} s^2+35180 t^{11} s+3795 t^{12}\\big{)}\nm_c^{10}+256 \\big{(}2415 s^{13}+24464 t s^{12}+115885 t^2 s^{11}\\nonumber\\\\\n&&+342339 t^3 s^{10}+708615 t^4 s^9+1090700 t^5 s^8+1287226 t^6 s^7\n+1180426 t^7 s^6+842220 t^8 s^5\\nonumber\\\\\n&&+464375 t^9 s^4+194239 t^{10} s^3+58985 t^{11} s^2+11684 t^{12} s\n+1155 t^{13}\\big{)} m_c^8-64 \\big{(}345 s^{14}\\nonumber\\\\\n&&+4528 t s^{13}+25895 t^2 s^{12}+89188 t^3 s^{11}+211442 t^4 s^{10}\n+369630 t^5 s^9+495006 t^6 s^8+517164 t^7 s^7\\nonumber\\\\&&\n+424126 t^8 s^6+271810 t^9 s^5+134322 t^{10} s^4+49688 t^{11} s^3\n+12995 t^{12} s^2+2128 t^{13} s+165 t^{14}\\big{)} m_c^6\\nonumber\\\\\n&&+16 s t (s+t) \\big{(}259 s^{12}+2323 t s^{11}+9736 t^2 s^{10}\n+25649 t^3 s^9+47816 t^4 s^8+66583 t^5 s^7+71232 t^6 s^6\\nonumber\\\\\n&&+58883 t^7 s^5+37456 t^8 s^4+17829 t^9 s^3+6056 t^{10} s^2+1303 t^{11} s\n+139 
t^{12}\\big{)} m_c^4-4 s^2 t^2 (s+t)^2 \\nonumber\\\\\n&&\\left(s^2+t s+t^2\\right)^2 \\left(87 s^6+372 t s^5+746 t^2 s^4\n+854 t^3 s^3+666 t^4 s^2+292 t^5 s+67 t^6\\right) m_c^2\\nonumber\\\\\n&&+13 s^3 t^3 (s+t)^3 \\left(s^2+t s+t^2\\right)^4\\Big{]}\n\\end{eqnarray}\n\n$g+g\\to c\\bar{c}({}^3\\!P_1^{[1]})+g$:\n\n\n\\begin{eqnarray}\n&&|\\overline{N}_{gg}({}^3\\!P_1^{[1]})|^2\n=\\frac{16 \\pi ^3 \\alpha_{s}^{3}}\n{45 m_c^5 \\left(s-4 m_c^2\\right)^5\n\\left(4 m_c^2-t\\right)^5 (s+t)^5}\\nonumber\\\\\n&&\\times\\Big{[}2097152 m_c^{20} (s+t)^2 \\left(s^2+t^2\\right)-131072 m_c^{18}\n\\left(11 s^5-35 s^4 t-40 s^3 t^2+5 s t^4+11 t^5\\right)\\nonumber\\\\\n&&-32768 m_c^{16} \\left(125 s^6+864 s^5 t+1419 s^4 t^2+1164 s^3 t^3+739 s^2\nt^4+404 s t^5+85 t^6\\right)\\nonumber\\\\\n&&+8192 m_c^{14} (818 s^7+4579 s^6 t+9054 s^5 t^2+9591 s^4 t^3\n+7291 s^3 t^4+4914 s^2 t^5+2379 s t^6\\nonumber\\\\\n&&+538 t^7)-2048 m_c^{12} (2106 s^8+11940 s^7 t+27483 s^6 t^2+35428 s^5 t^3\n+31746 s^4 t^4+23628 s^3 t^5\\nonumber\\\\\n&&+14863 s^2 t^6+6300 s t^7+1306 t^8)+512m_c^{10} (2905 s^9+17675 s^8 t\n+46718 s^7 t^2+71799 s^6 t^3\\nonumber\\\\\n&&+75989 s^5 t^4+63429 s^4 t^5+44939 s^3 t^6+25058 s^2 t^7+9275 s t^8+1705 t^9)\\nonumber\\\\\n&&-128 m_c^8 (2265 s^{10}+15280 s^9 t+46309 s^8 t^2+84008 s^7 t^3+\n105380 s^6 t^4+101476 s^5 t^5\\nonumber\\\\\n&&+79720 s^4t^6+51188 s^3 t^7+24729 s^2 t^8+7940 s t^9+1265 t^{10})\n+32 m_c^6 (944 s^{11}+7381 s^{10} t\\nonumber\\\\\n&&+26060 s^9 t^2+56013 s^8t^3+83898 s^7 t^4+95046 s^6 t^5+85066 s^5 t^6+61038 s^4 t^7+34133 s^3 t^8\\nonumber\\\\\n&&+14020 s^2 t^9+3821 s t^{10}+504 t^{11})-8 m_c^4\n (164 s^{12}+1700 s^{11} t+7455 s^{10} t^2+19660 s^9 t^3\\nonumber\\\\\n&&+35909 s^8 t^4+48760 s^7 t^5+51084 s^6 t^6+41660 s^5 t^7+26329 s^4 t^8+12440 s^3\n t^9+4175 s^2 t^{10}\\nonumber\\\\&&\n+900 s t^{11}+84 t^{12})+2 m_c^2 s t (s+t) \\left(s^2+s t+t^2\\right)^2 (106 s^6+490 s^5 t+981 s^4\n t^2+1092 s^3 t^3\\nonumber\\\\\n&&+821 s^2 t^4+330 s t^5+66 t^6)-11 
s^2 t^2 (s+t)^2 \\left(s^2+s t+t^2\\right)^4\\Big{]}\n\\end{eqnarray}\n\n$g+g\\to c\\bar{c}({}^3\\!P_2^{[1]})+g$:\n\n\n\\begin{eqnarray}\n&&|\\overline{N}_{gg}({}^3\\!P_2^{[1]})|^2\n=\\frac{16 \\pi ^3 \\alpha_s^3}\n{225 m_c^5 s t \\left(4 m_c^2-s\\right)^5\n\\left(4 m_c^2-t\\right)^5 (s+t)^5 \\left(-4 m_c^2+s+t\\right)}\\nonumber\\\\\n&&\\times\\Big{[}100663296 (s+t)^3 \\left(15 s^2+38 t s+15 t^2\\right)\nm_c^{24}-8388608 (s+t)^2 (315 s^4+1312 ts^3\\nonumber\\\\\n&&+2010 t^2 s^2+1252 t^3 s+315 t^4) m_c^{22}+524288 (s+t)\n(4620 s^6+25287 t s^5+58998 t^2 s^4\\nonumber\\\\\n&&+73262 t^3 s^3+53438 t^4 s^2+21927 t^5 s+4140 t^6)\nm_c^{20}-262144 (5550 s^8+36807 t s^7\\nonumber\\\\\n&&+109272 t^2 s^6+189017 t^3 s^5+212562 t^4 s^4+162877 t^5\n s^3+84442 t^6 s^2+27387 t^7 s\\nonumber\\\\\n&&+4230 t^8)m_c^{18}+32768 (18360 s^9+126189 t s^8\n+389652 t^2 s^7+714467 t^3 s^6+879782 t^4 s^5\\nonumber\\\\\n&&+782522 t^5 s^4+517507 t^6 s^3+248752 t^7 s^2+78429 t^8 s+11880 t^9)\nm_c^{16}-8192 (21240 s^{10}\\nonumber\\\\\n&&+155344 t s^9+504167 t^2 s^8+967724 t^3 s^7+1252087 t^4 s^6\n+1199524 t^5 s^5+911047 t^6 s^4\\nonumber\\\\\n&&+558524 t^7 s^3+262447 t^8 s^2+81544 t^9 s+11880 t^{10})\n m_c^{14}+2048 (17100 s^{11}+136829 t s^{10}\\nonumber\\\\\n&&+477883 t^2 s^9+972667 t^3 s^8+1311058 t^4 s^7+1293789 t^5 s^6\n+1038389 t^6 s^5+732018 t^7 s^4\\nonumber\\\\&&\n+446587 t^8 s^3+210243 t^9 s^2+63149 t^{10} s+8460 t^{11})\nm_c^{12}-512 (9180 s^{12}+82966 t s^{11}\\nonumber\\\\\n&&+321992 t^2 s^{10}+716914 t^3 s^9+1033387 t^4 s^8+1055592 t^5 s^7\n+855762 t^6 s^6+634352 t^7 s^5\\nonumber\\\\\n&&+454727 t^8 s^4+283274 t^9 s^3+129252 t^{10} s^2+35566\n t^{11} s+4140 t^{12}) m_c^{10}+128 (2940 s^{13}\\nonumber\\\\\n&&+31855 t s^{12}+143846 t^2 s^{11}+365607 t^3 s^{10}+592170 t^4 s^9+663334\n t^5 s^8+565754 t^6 s^7\\nonumber\\\\\n&&+427154 t^7 s^6+326754 t^8 s^5+239410 t^9 s^4+141047 t^{10} s^3\n+57346 t^{11} s^2+13375 t^{12} s \\nonumber\\\\\n&&+1260 t^{13}) m_c^8-32 (420 s^{14}+6548 t 
s^{13}\n+37651 t^2 s^{12}+116096 t^3 s^{11}+224575 t^4 s^{10}\\nonumber\\\\\n&&+299040 t^5 s^9+299220 t^6 s^8+251484 t^7\n s^7+199880 t^8 s^6+152480 t^9 s^5+99555 t^{10} s^4\\nonumber\\\\\n&&+48976 t^{11} s^3+16051 t^{12} s^2+2828 t^{13} s+180 t^{14}) m_c^6\n+8 s t (s+t) (464 s^{12}+4070 t s^{11}\\nonumber\\\\\n&&+15035 t^2 s^{10}+32905 t^3 s^9+49483 t^4 s^8+56786 t^5 s^7\n+54168 t^6 s^6+44806 t^7 s^5+32183 t^8 s^4\\nonumber\\\\\n&&+18405 t^9 s^3+7675 t^{10} s^2+2030 t^{11} s+224 t^{12})\nm_c^4-2 s^2 t^2 (s+t)^2 \\left(s^2+t s+t^2\\right)^2(138 s^6\\nonumber\\\\\n&&+552 t s^5+943 t^2 s^4+922 t^3 s^3+783 t^4 s^2+392 t^5 s+98 t^6)\nm_c^2+7 s^3 t^3 (s+t)^3 \\left(s^2+t s+t^2\\right)^4\\Big{]}\\nonumber\\\\\n\\end{eqnarray}\n\n$g+g\\to c\\bar{c}({}^3\\!S_1^{[8]})+g$:\n\n\n\\begin{eqnarray}\n&&|\\overline{N}_{gg}({}^3\\!S_1^{[8]})|^2\n=\\frac{\\pi ^3 \\alpha_{s}^{3}}{54 m_c^5\n\\left(s-4 m_c^2\\right)^3 \\left(4 m_c^2-t\\right)^3 (s+t)^3}\\nonumber\\\\\n&&\\times\\Big{[}16384 m_c^{14} \\left(87 s^2+22 s t+87 t^2\\right)-4096 m_c^{12}\n\\left(14 s^3-449 s^2 t-221 s t^2+14 t^3\\right)\\nonumber\\\\\n&&-2048 m_c^{10} \\left(480 s^4+1969 s^3 t+2182 s^2 t^2+1384 s t^3\n+399 t^4\\right)+256 m_c^8 (2910 s^5 \\nonumber\\\\\n&&+11616 s^4 t+17509 s^3 t^2+15433 s^2 t^3+8178 s t^4+2100 t^5)-64 m_c^6\n(4048 s^6+17148 s^5 t \\nonumber\\\\\n&&+31739 s^4 t^2+35722 s^3 t^3+25841 s^2t^4+11412 s t^5+2590 t^6)+16 m_c^4\n(2754 s^7+12968 s^6 t\\nonumber\\\\\n&&+28779 s^5 t^2+39352 s^4 t^3+35950 s^3 t^4+22137 s^2 t^5+8270 st^6+1620 t^7)\\nonumber\\\\\n&&-108 m_c^2 \\left(s^2+s t+t^2\\right) \\left(27 s^6+131 s^5 t+269 s^4 t^2\n+302 s^3 t^3+227 s^2 t^4+89 s t^5+15t^6\\right)\\nonumber\\\\\n&&+297 s t (s+t) \\left(s^2+s t+t^2\\right)^3\\Big{]}\n\\end{eqnarray}\n\n\n\n$g+g\\to c\\bar{c}({}^1\\!S_0^{[8]})+g$:\n\n\\begin{eqnarray}\n&&|\\overline{N}_{gg}({}^1\\!S_0^{[8]})|^2=\\frac{5 \\pi ^3 \\alpha_s^3}\n{3 m_c^3 s t \\left(4 m_c^2-s\\right)^3 \\left(4 m_c^2\n-t\\right)^3 (s+t)^3 \\left(-4 
m_c^2+s+t\\right)}\\nonumber\\\\\n&&\\times\\Big{[}65536 m_c^{16} \\left(5 s^3+20 s^2 t+8 s t^2+5 t^3\\right)\n-16384 m_c^{14} (25 s^4+116 s^3 t+110 s^2 t^2+56 s t^3\\nonumber\\\\\n&&+25 t^4 )+12288 m_c^{12} \\left(28 s^5+129 s^4 t+189 s^3 t^2\n+137 s^2 t^3+65 s t^4+20 t^5\\right)\\nonumber\\\\\n&&-2048 m_c^{10} \\left(87 s^6+426 s^5 t+801 s^4 t^2+806 s^3 t^3\n+501 s^2 t^4+204 s t^5+45 t^6\\right)\\nonumber\\\\\n&&+256 m_c^8 \\left(222 s^7+1182 s^6 t+2653 s^5 t^2+3385 s^4 t^3\n+2749 s^3 t^4+1501 s^2 t^5+522 s t^6+90 t^7\\right)\\nonumber\\\\\n&&-64 m_c^6 (180 s^8+1020 s^7 t+2591 s^6 t^2+3964 s^5t^3\n+4006 s^4 t^4+2812 s^3 t^5+1355 s^2 t^6\\nonumber\\\\\n&&+408 s t^7+60 t^8)+16 m_c^4 (85 s^9+516 s^8 t+1456 s^7 t^2\n+2561 s^6 t^3+3096 s^5 t^4+2688 s^4 t^5\\nonumber\\\\\n&&+1673 s^3 t^6+724 s^2 t^7+192 s t^8+25 t^9)-4 m_c^2\n\\left(s^2+s t+t^2\\right)^2 (17 s^6+88 s^5 t+167 s^4 t^2\\nonumber\\\\\n&&+168 s^3 t^3+119 s^2 t^4+40 s t^5+5 t^6)+5 s t (s+t) \\left(s^2+s t+t^2\\right)^4\\Big{]}\n\\end{eqnarray}\n\n$g+g\\to c\\bar{c}({}^3\\!P_J^{[8]})+g$:\n\n\\begin{eqnarray}\n&&|\\overline{N}_{gg}({}^3\\!P_J^{[8]})|^2= \\frac{\\pi ^3 \\alpha_s^3}\n{m_c^5 s t \\left(s-4 m_c^2\\right)^4 \\left(t\n-4 m_c^2\\right)^4 (s+t)^4 \\left(-4 m_c^2+s+t\\right)}\\nonumber\\\\\n&&\\times\\Big{[}1048576 m_c^{20} \\left(115 s^4+435 s^3 t+564 s^2 t^2\n+295 s t^3+115 t^4\\right)-524288 m_c^{18}(345 s^5 \\nonumber\\\\\n&&+1571 s^4 t+2683 s^3 t^2+2148 s^2 t^3+1156 s t^4+345 t^5)\n+65536 m_c^{16} (2235 s^6+11102 s^5 t\\nonumber\\\\\n&&+22740 s^4 t^2+23720 s^3 t^3+16400 s^2 t^4+7932 s t^5\n+1955 t^6)-16384 m_c^{14} (4710 s^7\\nonumber\\\\\n&&+24932 s^6 t+57177 s^5 t^2+71731 s^4 t^3+60491 s^3 t^4\n+37287 s^2 t^5+16102 s t^6+3450 t^7 )\\nonumber\\\\\n&&+4096 m_c^{12} (6660 s^8+38018 s^7 t+94927 s^6 t^2\n+134978 s^5 t^3+132094 s^4 t^4+97708 s^3 t^5\\nonumber\\\\\n&&+54647 s^2 t^6+21708 s t^7+4140 t^8)-1024 m_c^{10}\n(6390 s^9+39787 s^8 t+107560 s^7 t^2\\nonumber\\\\\n&&+168509 s^6 t^3+183028 s^5 
t^4+154598 s^4 t^5+104319 s^3 t^6\n+54540 s^2 t^7+20307 s t^8+3450 t^9)\\nonumber\\\\\n&&+256 m_c^8 (4055 s^{10}+27855 s^9 t+82334 s^8 t^2\n+142320 s^7 t^3+169197 s^6 t^4+156936 s^5 t^5\\nonumber\\\\\n&&+120977 s^4 t^6+77050 s^3 t^7+38034 s^2 t^8+13095 s t^9\n+1955 t^{10})-64 m_c^6 (1530 s^{11}\\nonumber\\\\\n&&+12009 s^{10} t+40145 s^9 t^2+79031 s^8 t^3+106482 s^7 t^4\n+110003 s^6 t^5+93903 s^5 t^6+68322 s^4 t^7\\nonumber\\\\\n&&+40531 s^3 t^8+18105 s^2 t^9+5449 s t^{10}+690 t^{11})\n+16 m_c^4 (255 s^{12}+2610 s^{11} t+10798 s^{10} t^2\\nonumber\\\\\n&&+26205 s^9 t^3+43547 s^8 t^4+54320 s^7 t^5+53906 s^6 t^6\n+43520 s^5 t^7+28607 s^4 t^8+14485 s^3 t^9\\nonumber\\\\\n&&+5278 s^2 t^{10}+1230 s t^{11}+115 t^{12})-4 m_c^2 s t\n(s+t) \\left(s^2+s t+t^2\\right)^2 (150 s^6+696 s^5 t\\nonumber\\\\\n&&+1299 s^4 t^2+1358 s^3 t^3+1059 s^2 t^4+456 s t^5+90 t^6)\n+31 s^2 t^2 (s+t)^2 \\left(s^2+s t+t^2\\right)^4\\Big{]}\n\\end{eqnarray}\n\n$g+g\\to c\\bar{c}({}^3\\!S_1^{[8]},{}^3\\!D_1^{[8]})+g$:\n\n\\begin{eqnarray}\n&&|\\overline{N}_{gg}({}^3\\!S_1^{[8]},{}^3\\!D_1^{[8]})|^2\n=-\\frac{\\pi ^3 \\alpha_{s}^{3}} {108 \\sqrt{15} m_c^5\n\\left(s-4 m_c^2\\right)^4 \\left(t-4 m_c^2\\right)^4 (s+t)^4} \\nonumber\\\\\n&&\\times\\Big{[}-524288 m_c^{20} \\left(4463 s^2\n+25840 s t+5057 t^2\\right)+65536 m_c^{18} (s+t) (7767s^2+145760 s t\\nonumber\\\\\n&&+13689 t^2)+16384 m_c^{16} \\left(40922 s^4+83203 s^3 t\n-78401 s^2 t^2+46753 s t^3+25451 t^4\\right)\\nonumber\\\\\n&&-4096 m_c^{14} (s+t) \\left(41543 s^4+579922 s^3 t\n+620206 s^2 t^2+525220 s t^3+20753 t^4\\right)-3072 m_c^{12}\\nonumber\\\\\n&&\\left(40919 s^6-153453 s^5 t-767319 s^4 t^2-1075115 s^3 t^3-737820 s^2 t^4\n-133812 s t^5+45008 t^6\\right)\\nonumber\\\\\n&&+256 m_c^{10} (s+t) (302824 s^6+248112 s^5 t-1111219 s^4 t^2\n-1766115 s^3 t^3-1102093 s^2 t^4\\nonumber\\\\\n&&+250353 s t^5+304390 t^6 )-64 m_c^8 (287151 s^8\n+1147982 s^7 t+1476785 s^6 t^2+462324 s^5 t^3\\nonumber\\\\\n&&-288201 s^4 t^4+391881 s^3 t^5+1418006 s^2 
t^6+1132457 s t^7\n+286503 t^8)+16 m_c^6 (s+t)\\nonumber\\\\\n&&(136080 s^8+645678 s^7 t+1174264 s^6 t^2+1234125 s^5 t^3\n+1166106 s^4 t^4+1190394 s^3 t^5\\nonumber\\\\\n&&+1143268 s^2 t^6+640305 s t^7+136080 t^8)-4 m_c^4\n(27216 s^{10}+226800 s^9 t+774349 s^8 t^2\\nonumber\\\\\n&&+1531962 s^7 t^3+2158036 s^6 t^4+2377567 s^5 t^5\n+2135518 s^4 t^6+1512603 s^3 t^7+768949 s^2 t^8\\nonumber\\\\\n&&+226800 s t^9+27216 t^{10})+m_c^2 s t (s+t)\n(18144 s^8+104976 s^7 t+274390 s^6 t^2+449444 s^5 t^3\\nonumber\\\\\n&&+528929 s^4 t^4+449390 s^3 t^5+274363 s^2 t^6+104976 s t^7+18144 t^8)\\nonumber\\\\\n&&-1620 s^2 t^2 (s+t)^2 \\left(s^2+s t+t^2\\right)^3\\Big{]}\n\\end{eqnarray}\n\n\n$q(\\bar{q})+g\\to c\\bar{c}({}^3\\!P_0^{[1]})+q(\\bar{q})$:\n\n\n\\begin{eqnarray}\n&&|\\overline{N}_{q(\\bar{q})g}({}^3\\!P_0^{[1]})|^2\n=\\frac{16 \\pi ^3 \\alpha_{s}^{3}}\n{405 m_c^5 t \\left(4 m_c^2-s\\right) \\left(4 m_c^2- t\\right)^5}\n\\left(12 m_c^2-t\\right) \\Big{[}87040 m_c^{10}-256 m_c^8 \\nonumber\\\\\n&& (285 s+278 t)+256 m_c^6 \\left(115 s^2 +166 s t\n+81 t^2\\right)-32 m_c^4 \\big{(}115 s^3+311 s^2 t + 243 s t^2+77 t^3\\big{)}\\nonumber\\\\\n&& +4 m_c^2 t \\left(96 s^3+188 s^2 t+120 s t^2+23 t^3\\right)\n-13 s t^2 \\left(2 s^2+2 s t+t^2\\right)\\Big{]}\n\\end{eqnarray}\n\n$q(\\bar{q})+g\\to c\\bar{c}({}^3\\!P_1^{[1]})+q(\\bar{q})$:\n\n\n\\begin{eqnarray}\n&&|\\overline{N}_{q(\\bar{q})g}({}^3\\!P_1^{[1]})|^2\n=\\frac{32 \\pi ^3 \\alpha_s^3}\n{405 m_c^5 \\left(4 m_c^2-s\\right)\n\\left(4 m_c^2-t\\right)^5} \\Big{[}256 m_c^8 (124 s+21 t)\n-64 m_c^6 \\nonumber\\\\&&(288 s^2\n+341 s t+63 t^2)+16 m_c^4\n(164 s^3+456 s^2 t+321 s t^2+63 t^3)-4 m_c^2 t(106 s^3\\nonumber\\\\\n&&+190 s^2 t+115 s t^2+21 t^3)+11 s t^2 \\left(2 s^2+2 s t+t^2\\right)\\Big{]}\n\\end{eqnarray}\n\n$q(\\bar{q})+g\\to c\\bar{c}({}^3\\!P_2^{[1]})+q(\\bar{q})$:\n\n\n\\begin{eqnarray}\n&&|\\overline{N}_{q(\\bar{q})g}({}^3\\!P_2^{[1]})|^2\n=\\frac{32 \\pi ^3 \\alpha_s^3}\n{2025 m_c^5 t \\left(4 m_c^2-s\\right) \\left(4 
m_c^2-t\\right)^5}\n \\Big{[}614400 m_c^{12}-34816 m_c^{10}(15 s\\nonumber\\\\\n&&+13 t)+256 m_c^8 \\left(840 s^2+1386 s t+443 t^2\\right)\n-64 m_c^6 \\left(420 s^3+1708 s^2 t+1315 s t^2+177 t^3\\right)\\nonumber\\\\\n&&+16 m_c^4 t \\left(464 s^3+1060 s^2 t+519 s t^2+43\n t^3\\right)-4 m_c^2 t^2 \\left(138 s^3+206 s^2 t+87 s t^2+17 t^3\\right)\\nonumber\\\\\n&&+7 s t^3 \\left(2 s^2+2 s t+t^2\\right)\\Big{]}\n\\end{eqnarray}\n\n$q(\\bar{q})+g\\to c\\bar{c}({}^3\\!S_1^{[8]})+q(\\bar{q})$:\n\n\\begin{eqnarray}\n&&|\\overline{N}_{q(\\bar{q})g}({}^3\\!S_1^{[8]})|^2=\\frac{\\pi ^3 \\alpha_s^3}\n{81 m_c^5 s t \\left(4 m_c^2-s\\right) (s+t)^3}\n\\Big{[}128 m_c^6 \\left(20 s^3+69 s^2 t-39 s t^2+20 t^3\\right)\\nonumber\\\\\n&&-32 m_c^4 \\left(40 s^4+113 s^3 t+27 s^2 t^2+10 s t^3+20 t^4\\right)\n+4 m_c^2 (108 s^5+193 s^4 t+41 s^3 t^2\\nonumber\\\\\n&&+225 s^2 t^3+s t^4+20 t^5)-11 s \\left(4 s^5+3 s^4 t+7 s^3\n t^2+7 s^2 t^3+3 s t^4+4 t^5\\right)\\Big{]}\n\\end{eqnarray}\n\n$q(\\bar{q})+g\\to c\\bar{c}({}^1\\!S_0^{[8]})+q(\\bar{q})$:\n\n\\begin{eqnarray}\n&&|\\overline{N}_{q(\\bar{q})g}({}^1\\!S_0^{[8]})|^2=\\frac{10 \\pi ^3 \\alpha_s^3\n\\left[m_c^2 \\left(44 s^3+92 s^2 t-4 s t^2+44 t^3\\right)\n-5 s \\left(s^3+s^2 t+s t^2+t^3\\right)\\right]}{27 m_c^3\n\\left(s-4 m_c^2\\right) (s+t)^3 \\left(-4 m_c^2+s+t\\right)}\n\\end{eqnarray}\n\n$q(\\bar{q})+g\\to c\\bar{c}({}^3\\!P_J^{[8]})+q(\\bar{q})$:\n\n\\begin{eqnarray}\n&&|\\overline{N}_{q(\\bar{q})g}({}^3\\!P_J^{[8]})|^2=\\frac{2 \\pi ^3 \\alpha_s^3}\n{9 m_c^5 \\left(s-4 m_c^2\\right) (s+t)^4 \\left(-4 m_c^2+s+t\\right)}\n\\Big{[}512 m_c^6 \\left(5 s^2+26 s t+25 t^2\\right)\\nonumber\\\\\n&&+64 m_c^4 \\left(s^3-23 s^2 t-111 s t^2-19 t^3\\right)+4 m_c^2 (s+t)\n\\left(57 s^3+169 s^2 t-3 s t^2+61 t^3\\right)\\nonumber\\\\\n&&-31 s (s+t)^2 \\left(s^2+t^2\\right)\\Big{]}\n\\end{eqnarray}\n\n$q(\\bar{q})+g\\to 
c\\bar{c}({}^3\\!S_1^{[8]},{}^3\\!D_1^{[8]})+q(\\bar{q})$:\n\n\\begin{eqnarray}\n&&|\\overline{N}_{q(\\bar{q})g}({}^3\\!S_1^{[8]},{}^3\\!D_1^{[8]})|^2\n=-\\frac{2 \\pi ^3 \\alpha_{s}^{3}}\n{27 \\sqrt{15} m_c^5 s t (s+t)^4}\n\\Big{[}128 m_c^4 \\left(5 s^4+2 s^3 t+21 s^2 t^2+2 s t^3+5 t^4\\right)\\nonumber\\\\\n&&-32 m_c^2 \\left(5 s^5+16 s^4 t+14 s^3 t^2+14 s^2 t^3+16 s t^4+5 t^5\\right)\\nonumber\\\\\n&&+5 (s+t)^2 \\left(4 s^4-s^3 t+8 s^2 t^2-s t^3+4 t^4\\right)\\Big{]}\n\\end{eqnarray}\n\n\n$\\bar{q}+q\\to c\\bar{c}({}^3\\!P_0^{[1]})+g$:\n\n\n\\begin{eqnarray}\n&&|\\overline{N}_{\\bar{q}q}({}^3\\!P_0^{[1]})|^2\n=-\\frac{128 \\pi ^3 \\alpha_{s}^{3}\\left(12 m_c^2-s\\right)\n\\left(880 m_c^4-112 m_c^2 s+13 s^2\\right) } {3645 m_c^5 s\n\\left(4 m_c^2-s\\right)^5}\n\\nonumber\\\\ &&\\times\\big{[}2 t \\left(s-4 m_c^2\\right)+\\left(s-4 m_c^2\\right)^2+2 t^2\\big{]}\n\\end{eqnarray}\n\n$\\bar{q}+q\\to c\\bar{c}({}^3\\!P_1^{[1]})+g$:\n\n\n\\begin{eqnarray}\n&&|\\overline{N}_{\\bar{q}q}({}^3\\!P_1^{[1]})|^2\n=\\frac{256 \\pi ^3 \\alpha_s^3}\n{1215 m_c^5 \\left(s-4 m_c^2\\right)^5}\n \\Big{[}64 m_c^6 (31 s+84 t)-16 m_c^4 (73 s^2+150 s t\\nonumber\\\\\n&&+84 t^2)+4 m_c^2 s \\left(53 s^2+88 s t+66 t^2\\right)\n-11 s^2 \\left(s^2+2 s t+2 t^2\\right)\\Big{]}\n\\end{eqnarray}\n\n$\\bar{q}+q\\to c\\bar{c}({}^3\\!P_2^{[1]})+g$:\n\n\n\\begin{eqnarray}\n&&|\\overline{N}_{\\bar{q}q}({}^3\\!P_2^{[1]})|^2\n=\\frac{256 \\pi ^3 \\alpha_s^3}{6075 m_c^5 s \\left(s-4 m_c^2\\right)^5}\n \\Big{[}92160 m_c^{10}-512 m_c^8 (71 s+90 t)+64 m_c^6 \\nonumber\\\\\n&&(33 s^2+404 s t+180 t^2)-112 m_c^4 s \\left(s^2+46 s t+32 t^2\\right)\n+4 m_c^2 s^2 \\left(33 s^2+112 s t+98 t^2\\right)\\nonumber\\\\\n&&-7 s^3 \\left(s^2+2 s t+2 t^2\\right)\\Big{]}\n\\end{eqnarray}\n\n$\\bar{q}+q\\to c\\bar{c}({}^3\\!S_1^{[8]})+g$:\n\n\\begin{eqnarray}\n&&|\\overline{N}_{\\bar{q}q}({}^3\\!S_1^{[8]})|^2=\\frac{8 \\pi ^3 \\alpha_s^3}\n{243 m_c^5 t \\left(4 m_c^2-s\\right)^3 \\left(-4 m_c^2+s+t\\right)}\n\\Big{[}45056 
m_c^{10}\\nonumber\\\\\n&&-256 m_c^8 (84 s+205 t)+64 m_c^6 \\left(128 s^2+496 s t\n+475 t^2\\right)-16 m_c^4 (224 s^3+780 s^2 t \\nonumber\\\\\n&&+1029 s t^2+540 t^3)+4 m_c^2 \\left(180 s^4+676 s^3 t+1155 s^2 t^2\n+936 s t^3+270 t^4\\right)\\nonumber\\\\\n&&-11 s \\left(4 s^4+17 s^3 t+35 s^2 t^2+36 s t^3+18 t^4\\right)\\Big{]}\n\\end{eqnarray}\n\n$\\bar{q}+q\\to c\\bar{c}({}^1\\!S_0^{[8]})+g$:\n\n\\begin{eqnarray}\n&&|\\overline{N}_{\\bar{q}q}({}^1\\!S_0^{[8]})|^2=-\\frac{400 \\pi ^3 \\alpha_s^3\n\\left[16 m_c^4-8 m_c^2 (s+t)+s^2+2 s t+2 t^2\\right]}{81 m_c^3 s\n\\left( s-4 m_c^2\\right)^2}\n\\end{eqnarray}\n\n$\\bar{q}+q\\to c\\bar{c}({}^3\\!P_J^{[8]})+g$:\n\n\\begin{eqnarray}\n&&|\\overline{N}_{\\bar{q}q}({}^3\\!P_J^{[8]})|^2=\\frac{-16 \\pi ^3 \\alpha_s^3}\n{27 m_c^5 s \\left(s-4 m_c^2\\right)^4}\\Big{[}29440 m_c^8\n-128 m_c^6 (68 s+115 t)+32 m_c^4 (59 s^2\\nonumber\\\\\n&&+205 s t+115 t^2)-8 m_c^2 s \\left(64 s^2+121 s t+90 t^2\\right)\n+31 s^2 \\left(s^2+2 s t+2 t^2\\right)\\Big{]}\n\\end{eqnarray}\n\n$\\bar{q}+q\\to c\\bar{c}({}^3\\!S_1^{[8]},{}^3\\!D_1^{[8]})+g$:\n\n\\begin{eqnarray}\n&&|\\overline{N}_{\\bar{q}q}({}^3\\!S_1^{[8]},{}^3\\!D_1^{[8]})|^2\n=-\\frac{16 \\pi ^3 \\alpha_{s}^{3}}\n{81 \\sqrt{15} m_c^5 t \\left(s-4 m_c^2\\right)^4\n\\left(-4 m_c^2+s+t\\right)} \\Big{[}81920 m_c^{12}\\nonumber\\\\\n&&-1024 m_c^{10} (80 s+157 t)+256 m_c^8\n\\left(140 s^2+569 s t+535 t^2\\right)-128 m_c^6 (80 s^3+425 s^2 t\\nonumber\\\\\n&&+710 s t^2+378 t^3 )+32 m_c^4 \\left(70 s^4+353 s^3 t\n+705 s^2 t^2+630 s t^3+189 t^4\\right)-4 m_c^2 s (80 s^4 \\nonumber\\\\\n&&+353 s^3 t+700 s^2 t^2+684 s t^3+252 t^4)+5 s^2\n\\left(4 s^4+17 s^3 t+35 s^2 t^2+36 s t^3+18\n t^4\\right)\\Big{]}\n\\end{eqnarray}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Sensorimotor Control Across Weathers}\\label{sec:method}\n\nIn this section, we introduce a computational framework for transferring knowledge of ground truth labels from one weather condition to 
multiple different scenarios without any semantic labels or additional human labeling effort. Figure~\\ref{fig:highlevel} gives a high-level overview of the framework. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\linewidth]{images\/TLwithoutSemantics.pdf}\n \\caption{This figure gives a high-level overview of the 3 steps for transferring knowledge between two domains $D_0$ and $D_1$ for the purpose of sensorimotor control. Ground truth steering data for only a limited number of images from domain $D_0$ is available. Details of the framework are provided in Section \\ref{sec:method}.}\n \\label{fig:highlevel}\n\\end{figure}\n\n\\subsection{Teacher End-to-End Training} \nIn this step, the teacher model is trained end-to-end in a supervised manner to predict the steering command of the vehicle from the raw RGB images generated by the camera placed at the front of the ego-vehicle. The training data is collected by an expert driver only once for that particular weather scenario. We refer to images recorded under the weather condition in which this data was collected as belonging to domain $D_0$. Note that the teacher model is itself divided into a Feature Extraction Module (FEM) $F_0$ and a control module $C_0$. The raw image (belonging to $D_0$) is passed through $F_0$ to retrieve a lower-dimensional feature representation. This feature representation is in turn fed to $C_0$, which predicts the steering angle. A depiction of the model is shown in Figure~\\ref{fig:end2end}. The FEM $F_0$ is a sequential combination of 4 units, where each unit comprises a convolutional, a pooling, and an activation layer. The output of unit 4 is flattened to a size of 800, which is in turn fed as input to the control module $C_0$. The control module $C_0$ is a series of fully connected layers and outputs the steering command. 
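The teacher architecture described above can be sketched as follows. This is a minimal PyTorch-style illustration, not the authors' implementation: the channel counts, kernel and pooling sizes, the hidden width of the control module, and the 80x80 input resolution are illustrative assumptions; only the 4-unit FEM structure, the 800-dimensional flattened feature, and the single steering output are stated in the text.

```python
import torch
import torch.nn as nn

class FEMUnit(nn.Module):
    """One FEM unit: a convolutional, a pooling, and an activation layer."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),  # sizes assumed
            nn.MaxPool2d(2),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.block(x)

class Teacher(nn.Module):
    """FEM F_0 (4 units, output flattened to 800) followed by control module C_0."""
    def __init__(self):
        super().__init__()
        self.fem = nn.Sequential(FEMUnit(3, 8), FEMUnit(8, 16),
                                 FEMUnit(16, 32), FEMUnit(32, 32))
        # Control module C_0: fully connected layers mapping the 800-dim
        # feature to a single steering command.
        self.control = nn.Sequential(nn.Linear(800, 100), nn.ReLU(),
                                     nn.Linear(100, 1))

    def forward(self, x):
        # For an 80x80 input, four 2x2 poolings give 5x5 maps with 32
        # channels, i.e. a 32*5*5 = 800-dimensional flattened feature.
        feat = torch.flatten(self.fem(x), start_dim=1)
        return self.control(feat)
```

With these assumed sizes, a batch of 80x80 RGB images yields one steering value per image.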
\n\n\\noindent{\\textbf{Auxiliary network.}} The number of labeled images may be limited, or the model may be too large for the task at hand; in either case, the model may tend to overfit. Therefore, to mitigate the effect of overfitting during training, $F_0$ additionally uses auxiliary networks connected to its intermediate layers~\\cite{SzegedyCVPR2015}. Each of the auxiliary networks has a control module $C_0$ with shared weights. The projection layers $P_1$, $P_2$, and $P_3$ project the feature maps of the intermediate layers to the input dimension of $C_0$, \\emph{i.e.}, 800. The overall output of the teacher model is the weighted sum of the outputs of the auxiliary networks. The loss for each of the control modules is the mean squared error (MSE) between the ground truth label provided by the expert and that predicted by $C_0$, and the overall loss is a weighted sum of the losses from each of the 4 auxiliary networks:\n\n\\begin{align*}\n \\mathcal{L} &=\\sum_{i=1}^{4} \\alpha_i \\cdot \\mathcal{L}_{i},\n &\\text{s.t.} \\sum_{i=1}^{4} \\alpha_i = 1,\n\\end{align*}\n\nwhere $\\alpha_i$ and $\\mathcal{L}_i$ are the weight and the error for the auxiliary network obtained from intermediate unit $i$ of the FEM $F_0$. The error functions are calculated as follows:\n\n\\begin{equation*}\n \\mathcal{L}_i = \\frac{1}{N}\\sum_{j=1}^{N}(y_j - O_{ij})^2,\n\\end{equation*}\n\nwhere $y_j$ is the ground truth steering angle obtained from the expert driver for sample $j$, and $N$ denotes the total number of samples. $O_{ij}$ is the output of the control module corresponding to the $i$th auxiliary network for the $j$th sample. \n\nThe weights $\\alpha_i$ are themselves learned by a separate weight network. The auxiliary network that has the greatest contribution towards the overall output would also have the highest relative weight. 
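The weighted auxiliary loss above can be sketched numerically as below. The text only requires that the $\alpha_i$ are produced by a separate weight network and sum to one; the softmax normalization of that network's raw outputs is our assumption of one simple way to enforce the constraint.

```python
import numpy as np

def overall_loss(aux_outputs, targets, weight_logits):
    """Weighted sum of per-auxiliary-network MSE losses.

    aux_outputs   : list of 4 prediction arrays O_i, one per auxiliary network.
    targets       : ground truth steering angles y_j from the expert driver.
    weight_logits : raw outputs of the (hypothetical) weight network; a softmax
                    enforces the constraint sum_i alpha_i = 1.
    """
    w = np.exp(weight_logits - np.max(weight_logits))
    alpha = w / w.sum()
    # Per-network MSE losses L_i = (1/N) * sum_j (y_j - O_ij)^2.
    losses = np.array([np.mean((targets - o) ** 2) for o in aux_outputs])
    return float(alpha @ losses), alpha
```

With equal logits, each $\alpha_i = 0.25$ and the overall loss reduces to the plain average of the four per-network MSEs.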
This is important in the case of limited data, wherein not all layers may be essential to train the model. In such a case, the weights of the shallower auxiliary networks would be higher in comparison to those of the deeper auxiliary networks. Hence, a significant contribution towards the overall prediction would come from these shallow layers, thereby making the deeper layers effectively dormant. An extreme case would be when the labeled data is so limited that even the first layer is enough to give a correct model prediction. In such a case, $\\alpha_1 = 1$ and all other $\\alpha_i = 0, \\ \\text{for}\\ i = 2,3,4$.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{images\/WeightedEnd2EndTraining.pdf}\n \\caption{The figure depicts the general architecture of the model, comprising the FEM and the auxiliary control modules.}\n \\label{fig:end2end}\n\\end{figure}\n\n\\subsection{Knowledge Transfer} \n\nAs described in step 2 of Figure~\\ref{fig:highlevel}, knowledge of ground truth labels from domain $D_0$ is transferred to domain $D_1$ using a teacher-student architecture. The output of the auxiliary networks acts as the teacher to provide supervised information to train the student.\n\nWe use the FEM $F_0$ and control module $C_0$ (together referred to as the teacher), trained on images belonging to domain $D_0$ for which ground-truth steering labels are available, to transfer knowledge to a different combination of FEM $F_1$ and control module $C_1$ (referred to as the student) for domain $D_1$, for which we have access to only unlabeled images. The procedure is detailed in the following steps:\n\n\\begin{enumerate}\n\\item Image $I_0$ belonging to domain $D_0$ is passed through an image-translation network to generate image $I_1$ belonging to domain $D_1$, such that the semantic information is preserved while the weather condition is modified. 
The works of~\\cite{ZhuICCV2017, HuangECCV2018, MinjunECCV2018} describe methods for training such a translation network in an unsupervised manner using generative adversarial networks (GANs); we use \\cite{ZhuICCV2017} for our experiments. A positive implication of using these networks is that they preserve the semantics of the scene, and hence the steering angle label also remains the same.\n\\item \\noindent{\\bf Hard loss:} If $I_0$ happens to have a ground truth (\\emph{hard}) label, then the weights of the student network are updated with these labels and the loss is referred to as the \\emph{hard loss}. \\noindent{\\bf Soft loss:} Otherwise, a forward pass can also be done by passing $I_0$ through the teacher. Meanwhile, the corresponding image $I_1$ is passed through the student network. The output of the teacher can then be used as a soft target for updating the weights of the student via the soft loss. The overall loss is the weighted average of the soft and hard losses, where the weights indicate the relative importance given to the soft targets in relation to the ground truth labels.\n\\end{enumerate}\n\nNote that the student network can be fed not only images from domain $D_1$ but images from multiple domains, including domain $D_0$. Hence, the student network would not only be capable of predicting the steering for multiple domains but would also act as a regularizer for better generalization (see P1 in Section~\\ref{sec:discussion}).\n\\subsection{Substitution}\nThis refers to step 3 described in Figure \\ref{fig:highlevel}. At inference time, the teacher network can be substituted with the student network to predict the correct steering command on images from all domains that the student encountered during training.\n\\section{Related Work}\\label{sec:related_work}\n\nVision-based autonomous driving approaches have been studied extensively in academic and industrial settings~\\cite{JanaiArXiv2017}. 
Many real-world~\\cite{GeigerCVPR2012,CordtsCVPR2016,XuCVPR2017} as well as synthetic~\\cite{RosCVPR2016,GaidonCVPR2016,RichterICCV2017,RichterECCV2016} datasets for autonomous driving research have become available. In recent years, neural network approaches have significantly advanced the state-of-the-art in computer vision tasks. In particular, end-to-end learning for sensorimotor control has recently gained considerable interest in the vision and robotics communities. In this context, different approaches to autonomous driving are studied: modular pipelines~\\cite{ThrunJFR2006}, imitation learning~\\cite{PomerleauNIPS1989}, conditional imitation learning~\\cite{Codevilla2018ICRA}, and direct perception~\\cite{ChenICCV2015}.\n\n\\noindent{\\textbf{Embodied agent evaluation. }}Most available datasets~\\cite{GeigerCVPR2012, CordtsCVPR2016} cannot be used for evaluating online driving performance due to their static nature. The evaluation of driving models on realistic data is challenging and often not feasible. Therefore, a lot of interest has emerged in building photo-realistic simulators~\\cite{MullerIJCV2018, ShahFSR2017, DosovitskiyCoRL2017} to analyze those models. However, despite the availability of such simulation engines, there is currently no universally accepted benchmark to evaluate vision-based control agents. Our experimental setup is therefore a step towards a field in which it is not yet established how to evaluate and measure the performance of such models~\\cite{AndersonArXiv2018, CodevillaECCV2018}. \n\n\\noindent{\\textbf{Unpaired image-to-image translation networks. }}Unsupervised image-to-image translation techniques are rapidly making progress in generating high-fidelity images across various domains~\\cite{ZhuICCV2017, LiuNIPS2017, HuangECCV2018, MinjunECCV2018}. Our framework is agnostic to any particular method. 
Hence, continual improvements in these networks can easily be integrated into our framework by replacing a previous network.\n\n\\noindent{\\textbf{Transfer learning via semantic modularity. }}Several works have used semantic labels of the scene as an \nintermediate representation for transferring knowledge between domains. In the context of autonomous driving, the authors of~\\cite{MullerCoRL2018} proposed to map the driving policy utilizing semantic segmentation to a local trajectory plan in order to transfer between simulation and real-world data. Furthermore, to make a reinforcement learning model trained in a virtual environment work in the real world, the authors of~\\cite{YouBMVC2017} likewise utilize an intermediate semantic representation to translate virtual to real images. However, there is still little work on generalizing driving models across weathers. The work of~\\cite{WenzelCoRL2018} showed how to transfer knowledge between different weather conditions using a semantic map of the scene. In contrast, in this paper, we demonstrate the possibility of transferring knowledge between weathers even without semantic labels. \n\n\\noindent{\\textbf{Knowledge distillation. }}Originally, knowledge distillation~\\cite{HintonArXiv2015} was used for network compression: the student network is smaller than the teacher while maintaining its accuracy. The authors of~\\cite{YangArXiv2018}, however, focus on extracting knowledge from a trained (teacher) network to guide another (student) network in a separate training process. Furthermore,~\\cite{ShenArXiv2016} used a slightly modified version of knowledge distillation for the task of pedestrian detection. 
In this work, we also use a teacher-student architecture, but to leverage unlabeled data for sensorimotor control.\n\\section{Introduction}\\label{sec:introduction}\n\nThe ubiquity of tremendous processing power in contemporary computing units has proliferated the use of deep learning-based approaches in control applications. In particular, supervised deep learning methods have made great strides in sensorimotor control, whether it be for autonomous driving~\\cite{BojarskiArXiv2016}, robot perception~\\cite{KaufmannCoRL2018}, or manipulation tasks~\\cite{LevineJMLR2016, NairICRA2017, ZhangICRA2018}. However, the performance of such models is heavily dependent on the availability of ground truth labels. To have the best generalization capability, one should annotate data for all possible scenarios. Nonetheless, obtaining labels of high quality is a tedious, time-consuming, and error-prone process. \n\nWe propose to instead utilize the information available for one domain and transfer it to a different one without human supervision, as shown in Figure~\\ref{fig:intro_fig}. This is particularly helpful for many robotic applications wherein a robotic system trained in one environment should generalize across different environments without human intervention. For example, in simultaneous localization and mapping (SLAM), it is very important that the algorithm is robust to different lighting conditions~\\cite{NewmanCVPR2017}. \nIn the context of autonomous driving, transferring knowledge from simulation to the real world or between different weather conditions is of high relevance. Recently,~\\cite{MullerCoRL2018, YouBMVC2017, WenzelCoRL2018} have attempted to tackle these problems by dividing the task of vehicle control into different modules, where each module specializes in extracting features from a particular domain. In these works, semantic labels are used as an intermediate representation for transferring knowledge between different domains. 
However, obtaining these semantic labels requires human effort which is time-consuming, expensive, and error-prone \\cite{WenzelCoRL2018}. In this work, we instead propose to use a teacher-student learning-based approach to generalize sensorimotor control across weather conditions without the need for extra annotations, \\emph{e.g.}, semantic segmentation labels. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.75\\linewidth]{images\/intro_fig.pdf}\n \\caption{Teacher-student training for generalizing sensorimotor control across weather conditions. \\textbf{Top:} The teacher network, trained on ground truth data collected on sunny weather is capable of predicting the correct steering angle when tested on this condition. \\textbf{Middle:} However, the teacher fails to predict the correct steering when tested on an input image from a different domain (rainy weather). \\textbf{Bottom:} With our proposed framework, the student network trained with supervised information from the teacher network is capable of predicting the correct steering for the rainy weather. This is done without any additional ground truth labels or semantic information.}\n \\label{fig:intro_fig}\n\\end{figure}\n\nTo this end, we make the following contributions:\n\n\\begin{itemize}\n\\item We demonstrate how knowledge of ground truth data for steering angles can be transferred from one weather scenario to multiple different weather conditions. This is achieved without the additional requirement of having semantic labels. We make use of an image-to-image translation network to transfer the images between different domains while preserving information necessary for taking a driving decision.\n\\item We show how the proposed method can also utilize images without ground truth steering commands to train the models using a teacher-student framework. The teacher provides relevant supervised information regarding the unlabeled images to train the features of the student. 
Hence, we can eliminate the need for an expert driver for data collection across diverse conditions.\n\\item If the sample data with ground truth labels is limited, then the teacher and student models may tend to overfit. To overcome this, we propose using weighted auxiliary networks connected to the intermediate layers of these models. During inference, the model size can be reduced by eliminating auxiliary layers with low weights without reducing accuracy. \n\\end{itemize}\n\nIn the following sections, we first review related work. We then present the details of our method, followed by an analysis of our model's performance. Finally, we discuss various parts of our model.\n\\section{Experiments}\\label{sec:experiments}\n\nIn this section, we evaluate our approach on the CARLA simulator~\\cite{DosovitskiyCoRL2017} version 0.8.2. It provides a total of 15 different weather conditions (labeled from 0 to 14) for two towns, \\emph{Town1} and \\emph{Town2}, respectively.\n\n\\subsection{Evaluation Metrics}\n\nFinding appropriate evaluation metrics is rather challenging for navigation and driving tasks, and there is no unique way to quantify them. The authors of~\\cite{AndersonArXiv2018} discuss different problem statements for embodied navigation and, based on these discussions, present evaluation metrics for some standard scenarios. In~\\cite{CodevillaECCV2018}, a more extensive study on evaluation metrics for vision-based driving models is carried out. In particular, they analyzed the difference between online and offline evaluation metrics for driving tasks. Their preliminary results showed that driving models can have similar mean squared error (MSE) but drastically different driving performance. As a result, it is not straightforward to link offline to online performance due to the low correlation between them. 
Nevertheless, the authors of \\cite{CodevillaECCV2018} found that among offline metrics not requiring additional parameters, the mean absolute error between the ground truth driving commands and the predicted ones yields the highest correlation with online driving performance. \n\nIn addition to using this offline metric, we evaluate the online performance of the models when executing multiple and diverse turns around corners, since this is a much more challenging task than simply moving in a straight line. The online performance is tested on the CARLA simulator across all 15 weather conditions. For each weather condition, we evaluate the models on multiple different turns. In all experiments, the starting position of the vehicle is just before the curve. The duration of the turn is fixed to 120 frames because this covers the entire curvature of the turn. We report the percentage of time the car remains within the driving lane as a measure of success. \n\n\\subsection{Dataset}\nFor collecting ground truth training data, we navigate through the city using the autopilot mode. To demonstrate the superiority of our method, we collect a limited sample size of 6500 images for weather condition 0, of which only 3200 are labeled with ground truth steering commands. Using our proposed method, we aim to transfer knowledge to the remaining 14 weather scenarios. Also, note that none of the 6500 images have any semantic labels.\n\nThe 3200 sample images with ground truth data are only available for \\emph{Town2}, whereas all the offline and online evaluations are performed on \\emph{Town1}. To focus attention on the effectiveness of our approach and preserve compatibility with prior work~\\cite{BojarskiArXiv2016, XuCVPR2017, CodevillaECCV2018}, the models are trained to predict the steering angle of the car while keeping the throttle fixed. The steering angles in CARLA are normalized to values between -1 and 1. 
The angle in degrees corresponding to these normalized values depends on the vehicle being used; the default vehicle used in our experiments has a maximum steering angle of \\SI{70}{\\degree}.\n\n\\subsection{Models}\\label{subsec:models}\nThe offline and online performance of the models described in this section are given in Figure~\\ref{fig:l1errorplot} and Table~\\ref{tab:turns}, respectively. Figure~\\ref{fig:l1errorplot} shows the mean absolute error between the actual steering command and that predicted by each of the models. Table~\\ref{tab:turns} contains the percentage for which the ego-vehicle remains within the driving lane while making turning maneuvers executed by the models across the 15 weather scenarios.\n\n\\noindent{\\bf Oracle: Steering labels for all weathers. }Here we assume that we have access to the ground truth steering commands across all 15 different weather conditions for \\emph{Town1}. Since we also evaluate the models on \\emph{Town1} across all the weather conditions, we find in both the offline and online evaluation metrics that this model achieves the highest accuracy; hence, it serves as an upper bound for evaluating the other models along with our approach.\n\n\\noindent{\\bf Model~\\cite{WenzelCoRL2018}: Steering and semantic labels for weather 0. }Here we adopt the approach of~\\cite{WenzelCoRL2018}, wherein the semantic labels of the images are additionally available for the 3200 labeled samples on weather 0. This additional information is used to first train what we refer to as the feature extraction module (FEM) in a supervised manner. The FEM, in this case, is trained as an encoder-decoder architecture. The encoder encodes the input image into a lower-dimensional latent vector, while the decoder reconstructs the semantic map of the image from the latent vector. The latent vector is then used to train the control module from the ground truth steering labels. 
The FEM and control modules are hence trained independently and without any auxiliary networks. This FEM, trained on the semantics of weather 0, is used as a teacher to train the student, which is capable of producing the semantics of all the other 14 weather conditions. The authors of~\\cite{WenzelCoRL2018} used the method of~\\cite{ZhuICCV2017} and provide 10 separate networks for translating from weather 0 to weathers 2, 3, 4, 6, 8, 9, 10, 11, 12, and 13, respectively. The translated images for each of the 10 weather conditions, along with weather 0, are fed in equal proportion to train the student. We would particularly like to evaluate our method, which does not have access to any semantic labels, against this model. In addition, we also evaluate the performance of this method on the model provided by the paper, which was trained with more than 30000 samples from both \\emph{Town1} and \\emph{Town2}. The performance of this model on \\emph{Town1} is far superior, since it was trained on much more data and also had access to ground truth data from \\emph{Town1}.\n\n\\noindent{\\bf Teacher: Steering angles for weather 0. }This model is trained using only the available labeled data for weather 0 in an end-to-end manner. It performs poorly on the unseen weather conditions, particularly on conditions 3-14, which are considerably different in visual appearance from weather 0. Despite this poor performance, the model can be used as a teacher to train a student that predicts the correct steering angles for weather conditions 1-14, for which no ground truth data exists. This approach is described in the next model. Also, note that the unlabeled data remains unutilized here.\n\n\\noindent{\\bf Ours: Steering angles for weather 0. 
}This model is trained using the method described in Section~\\ref{sec:method}, wherein knowledge is transferred from the teacher network, trained on images and ground truth steering commands from weather 0, to the student network, which is capable of handling images from all weathers 0-14. For a fair comparison against the model trained with semantic labels (Model~\\cite{WenzelCoRL2018}, described earlier), we use the same data and generative models to translate even the unlabeled images to weathers 2, 3, 4, 6, 8, 9, 10, 11, 12, and 13. These generated images can then be fed to the student model for predicting the correct steering angles for all the 15 weather conditions.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.9\\linewidth]{images\/comparison.pdf}\n \\caption{This plot shows the mean absolute error between the actual steering angle and that predicted by the 5 different models (see subsection \\ref{subsec:models}) on data collected across the 15 different weather conditions on \\emph{Town1}. 
Lower is better.}\n \\label{fig:l1errorplot}\n\\end{figure}\n\n\\begin{table*}\n\\begin{center}\n\\resizebox{\\linewidth}{!}{\n\\begin{tabular}{|l||l|l|l|l|l|l|l|l|l|l|l|l|l|l|l|l||l|}\n\\hline\n& & \\multicolumn{16}{c|}{Weather Conditions}\\\\\n\\cline{3-18}\nMethod & Trained on & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & overall \\\\\n\\hline\\hline\nOracle & \\emph{Town1} & 99.79 & 99.90 & 100 & 97.40 & 98.96 & 99.27 & 98.13 & 98.85 & 98.27 & 99.90 & 99.27 & 96.35 & 93.85 & 93.96 & 96.35 & 98.02\\\\\nModel~\\cite{WenzelCoRL2018} & \\emph{Town1\\&2} & 99.06 & 93.44 & 98.85 & 98.75 & 97.92 & 98.23 & 97.60 & 96.56 & 91.15 & 96.04 & 97.29 & 95.00 & 94.69 & 82.08 & 95.41 & 95.47 \\\\\nModel~\\cite{WenzelCoRL2018} & \\emph{Town2} & 68.33 & 67.71 & 50.00 & 71.77 & 67.40 & 64.38 & 63.85 & 63.65 & 61.88 & 71.35 & 51.35 & 67.50 & 58.33 & 61.67 & 66.98 & 63.74\\\\\nTeacher & \\emph{Town2} & 92.19 & 92.40 & 82.12 & 44.38 & 51.77 & 73.65 & 32.50 & 61.56 & 49.48 & 80.10 & 60.63 & 48.54 & 35.20 & 34.27 & 50.52 & 59.29 \\\\\nOurs & \\emph{Town2} & 93.96 & 95.21 & 81.25 & 99.90 & 100 & 94.17 & 90.42 & 79.69 & 77.19 & 86.77 & 84.58 & 65.63 & 68.54 & 58.44 & 80.73 & 83.77 \\\\\n\\hline\\hline\nOurs (Auxiliary network 1) & \\emph{Town2} & 93.96 & 93.44 & 80.73 & 92.40 & 100 & 99.69 & 90.42 & 80.10 & 77.19 & 87.50 & 91.98 & 67.40 & 66.25 & 57.29 & 81.15 & 83.97\\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\caption{This table shows the percentage for which the ego-vehicle remains within the driving lane while executing a turn for the models across the 15 different weather scenarios on \\emph{Town1}. Higher is better.}\n\\label{tab:turns}\n\\end{table*}\n\\section{Conclusion}\\label{sec:conclusion}\nIn this work, we showed how a teacher-student learning-based approach can leverage limited labeled data for transferring knowledge between multiple different domains. 
Our approach, specifically designed to work for sensorimotor control tasks, learns to accurately predict the steering angle under a wide range of conditions. Experimental results showed the effectiveness of the proposed method, even without having access to semantic labels as an intermediate representation between weather conditions. This framework may be extendable to other application areas for which a certain domain has ground truth data and shares a common characteristic with other domains for which no labels are available. \n\n\\section{Discussion}\\label{sec:discussion}\n\nIn this section, we discuss some critical insights from the experimental observations we obtained while evaluating the models. Below are some points on which we found it worthwhile to comment, based on the results in Figure~\\ref{fig:l1errorplot} and Table~\\ref{tab:turns}.\n\n\\noindent{\\bf P1 - Better regularization:} It is interesting to observe that the teacher model, trained only on the available 3200 labeled samples from \\emph{Town2} on weather 0, has a worse offline performance for \\emph{Town1} on weather 0 in comparison to our method. This seems to imply that our approach, which has been trained on multiple kinds of weather, has better generalization capabilities and can even outperform its teacher when evaluated in a different town. Hence, an additional positive consequence of training the student with generated images from multiple diverse domains is that it acts as a regularizer, tending to prevent overfitting to one specific domain.\n\n\\noindent{\\bf P2 - Semantic inconsistency:} Note that Model~\\cite{WenzelCoRL2018}, in addition to having the same data and labels as our approach, also has access to ground truth semantic labels. Yet, its performance is significantly worse. Upon investigation, we found that due to the limited number of semantic labels, the FEM trained as an encoder-decoder architecture seemed to be overfitting to the available data. 
Hence, when tested on unseen environments, the semantic segmentation output of the module breaks. The latent vector representing these broken semantics is then fed to the control module, which is incapable of predicting the correct steering command. Figure~\\ref{fig:semantics} shows some sample images with the corresponding semantic segmentation outputs, which are considerably different from the true semantics of the scene.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\linewidth]{images\/semantics.pdf}\n \\caption{This plot shows three sample images (column 1) with the corresponding semantic segmentation output by the model (column 2) for 3 different weathers. The segmentation produced by the model does not reflect the actual semantic characteristics of the scene (column 3).}\n \\label{fig:semantics}\n\\end{figure}\n\n\\noindent{\\bf P3 - Modular training constraints:} Furthermore, the modular approach of Model~\\cite{WenzelCoRL2018}, wherein the FEM and control module are trained independently as opposed to an end-to-end model, proved to be a bottleneck for learning universal features. Moreover, training the control module well rests on the assumption that the FEM works perfectly, which is not the case. Hence, the overall error of the modular pipeline would be an accumulation of the errors of the independent FEMs and control modules. We found that if we also shift the training of our approach to a modular one, then performance deteriorates. This can be done in our approach by updating only the weights of the FEM of the student from the output features of the FEM of the teacher.\n\n\\noindent{\\bf P4 - Auxiliary weights:} To prevent overfitting of the models trained on limited data, we used a weighted sum of the outputs of the auxiliary layers. The weights themselves were learned as part of the training. 
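The weighted combination of the auxiliary outputs can be sketched as follows (the softmax normalization and all numbers are illustrative assumptions; the paper states only that the weights are learned during training):

```python
import numpy as np

def combine_auxiliary(aux_preds, logits):
    """Weighted sum of the auxiliary networks' steering predictions.

    The weights are learned during training; here they are normalized
    with a softmax (an assumption) so that they sum to 1.
    """
    w = np.exp(logits - np.max(logits))
    w = w / w.sum()
    return float(np.dot(w, aux_preds)), w

# Four hypothetical auxiliary predictions of the normalized steering command.
aux_preds = np.array([0.12, 0.10, 0.15, 0.09])
# Hypothetical learned logits in which almost all of the weight has
# collapsed onto the first auxiliary network.
logits = np.array([5.0, -1.0, -1.5, -2.0])

steering, w = combine_auxiliary(aux_preds, logits)
```

When the weights are this concentrated, dropping all but the first auxiliary network changes the combined prediction only marginally.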
Once training of our student model was complete, we found that more than \\SI{97}{\\percent} of the weight was held by the first auxiliary network. This seemed to imply that only the first unit of the FEM is enough for predicting the steering command. Hence, the remaining units provide no additional information to the model. We thus evaluated our model based on the output of the first auxiliary network rather than on the weighted sum of the 4 auxiliary networks. The online evaluation of this approach is given in Table~\\ref{tab:turns} in the row labeled \\emph{Ours (Auxiliary network 1)}. It is interesting to note that this approach is comparable in performance to the original one. Therefore, at test time we can prune the network to a smaller size by making predictions based only on the first auxiliary network and removing the remaining 3 auxiliary networks. This results in less computation and faster inference. \n\n\\noindent{\\bf P5 - Online vs. offline evaluation:} Figure~\\ref{fig:auxillaryUnit1} shows an offline evaluation of the two variations of our method described in the previous point across the 15 weather conditions. Note that apart from weathers 0, 1, and 2, the two curves are indistinguishable from one another. However, the online evaluation results do not agree with this observation. For weathers 3, 5, 7, and 9-14, the online performance differs despite the offline metric being the same. This confirms the intuition presented in~\\cite{AndersonArXiv2018} regarding the problems associated with evaluating embodied agents in offline scenarios. The topic of finding a correlation between offline evaluation metrics and online performance has therefore recently started to gain traction. It is thus important to come up with a universal metric for evaluating various algorithms across the same benchmark. 
In the absence of such a benchmark, we created our own for the evaluation of the different approaches.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.85\\linewidth]{images\/comparison_aux.pdf}\n \\caption{This plot shows the mean absolute error between the ground truth steering label and that predicted by the two models. The \\textcolor{new_blue}{\\textbf{blue}} curve is the weighted sum of all the 4 auxiliary networks of our model. The \\textcolor{new_orange}{\\textbf{orange}} line depicts the output of only the first auxiliary network of our model.}\n \\label{fig:auxillaryUnit1}\n\\end{figure}\n\n\\noindent{\\bf P6 - Activation maps:} To understand the behavior of the model, which also works with only the first auxiliary network, we took the sum of the activation maps of the first unit of the FEM of the student and displayed it as a heatmap, as shown in Figure~\\ref{fig:stud_filter}, for a sample of 2 images. We see that the activation maps are most prominent in regions where there are lane markings, sidewalks, cars, or barriers. Knowing these cues seems to be enough for the network to make an appropriate driving decision in most cases. Therefore, the features extracted by the preliminary layers of the model are already enough to detect these objects of interest. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\linewidth]{images\/stud_filter.pdf}\n \\caption{This figure shows the sum of the activation maps of the first unit of the FEM of the student model, displayed as a heatmap, for a sample taken from 2 different weather conditions. 
The activation maps are more prominent in regions where there are lane markings, sidewalk boundaries, other vehicles, or barriers.}\n \\label{fig:stud_filter}\n\\end{figure}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\n\nThe notion of amenability for monoidal categories first appeared in\nPopa's seminal work~\\cite{MR1278111} on classification of subfactors\nas a crucial condition defining a class of inclusions admitting good\nclassification. He then gave various characterizations of this property\nanalogous to the usual amenability conditions for discrete groups: a\nKesten type condition on the norm of the principal graph, a F{\\o}lner\ntype condition on the existence of almost invariant sets, and a\nShannon--McMillan--Breiman type condition on relative entropy, to name\na few.\n\nThis stimulated a number of interesting developments in related fields\nof operator algebras. First, Longo and Roberts~\\cite{MR1444286}\ndeveloped a general theory of dimension for C$^*$-tensor categories,\nand indicated that the language of sectors\/subfactors is well suited\nfor studying amenability in this context. Then Hiai and\nIzumi~\\cite{MR1644299} studied amenability for fusion\nalgebras\/hypergroups endowed with a probability measure, and obtained\nmany characterizations of this property in terms of random walks and\nalmost invariant vectors in the associated $\\ell^p$-spaces. These\nstudies were followed by the work of Hayashi and\nYamagami~\\cite{MR1749868}, who established a way to realize amenable\nmonoidal categories as bimodule categories over the hyperfinite II$_1$\nfactor.\n\nIn addition to the subfactor theory, another source of interesting\nmonoidal categories is the theory of quantum groups. 
In this\nframework, the amenability question concerns the existence of almost\ninvariant vectors and invariant means for a discrete quantum group, or\nsome property of the dimension function on the category of unitary\nrepresentations of a compact quantum group~\\citelist{\\cite{MR1679171}\\cite{MR2276175}\\cite{MR2113848}}. Here, one should be aware that there are two different notions of amenability involved. One is the coamenability of compact quantum groups (equivalently, amenability of their discrete duals) considered in the regular representations; the\nother is the amenability of representation categories. These notions\ncoincide only for quantum groups of Kac type.\n\n\\smallskip\n\nIn yet another direction, Izumi~\\cite{MR1916370} developed a theory of\nnoncommutative Poisson boundaries for discrete quantum groups in order to study\nthe minimality (or lack thereof) of infinite tensor product type\nactions of compact quantum groups. From the subsequent\nwork~\\citelist{\\cite{MR2200270}\\cite{MR2335776}} it became\nincreasingly clear that for coamenable compact quantum groups the\nPoisson boundary captures a very elaborate difference between the two\namenability conditions. Later, an important result on noncommutative\nPoisson boundaries was obtained by De Rijdt and Vander\nVennet~\\cite{MR2664313}, who found a way to compute the boundaries\nthrough monoidal equivalences. In light of the categorical duality\nfor compact quantum group actions recently developed\nin~\\citelist{\\cite{MR3121622}\\cite{neshveyev-mjm-categorification}},\nthis result suggests that the Poisson boundary should really be an\nintrinsic notion of the representation category $\\Rep G$ itself,\nrather than of the choice of a fiber functor giving a concrete\nrealization of $\\Rep G$ as a category of Hilbert spaces. 
Starting from\nthis observation, in this paper we define Poisson boundaries for\nmonoidal categories.\n\nTo be more precise, our construction takes a rigid C$^*$-tensor\ncategory $\\mathcal{C}$ with simple unit and a probability measure $\\mu$ on the\nset $\\Irr(\\mathcal{C})$ of isomorphism classes of simple objects, and gives\nanother C$^*$-tensor category $\\mathcal{P}$ together with a unitary tensor\nfunctor $\\Pi\\colon\\mathcal{C}\\to\\mathcal{P}$. Although the category~$\\mathcal{P}$ is defined\npurely categorically, there are several equivalent ways to describe\nit, or at least its morphism sets, that are more familiar to the operator\nalgebraists. One is an analogue of the standard description of\nclassical Poisson boundaries as ergodic components of the time\nshift. Another is in terms of relative commutants of von Neumann\nalgebras, in the spirit\nof~\\citelist{\\cite{MR1444286}\\cite{MR1749868}\\cite{MR1916370}}. For\ncategories arising from subfactors and quantum groups, this can be\nmade even more concrete. For subfactors, computing the Poisson boundary essentially corresponds to passing to the standard model of a subfactor~\\cite{MR1278111}. For quantum groups, not surprisingly as this was our initial motivation, the Poisson boundary of the representation category of $G$ can be described in terms of the Poisson boundary of~$\\hat G$. The last\nresult will be discussed in detail in a separate\npublication~\\cite{arXiv:1310.4407}, since we also want to describe the\naction of $\\hat G$ on the boundary in categorical terms and this would\nlead us away from the main subject of this paper.\n\n\\smallskip\n\nOur main result is that if $\\mathcal{P}$ has simple unit, which corresponds to\nergodicity of the classical random walk defined by $\\mu$ on\n$\\Irr(\\mathcal{C})$, then $\\Pi\\colon\\mathcal{C}\\to\\mathcal{P}$ is a universal unitary tensor\nfunctor which induces the amenable dimension function on $\\mathcal{C}$. 
From\nthis we conclude that $\\mathcal{C}$ is amenable if and only if there exists a\nmeasure $\\mu$ such that $\\Pi$ is a monoidal equivalence. The last\nresult is a direct generalization of the famous characterization of\namenability of discrete groups in terms of their Poisson boundaries\ndue to Furstenberg~\\cite{MR0352328}, Kaimanovich and\nVershik~\\cite{MR704539}, and Rosenblatt~\\cite{MR630645}. From this\ncomparison it should be clear that, contrary to the usual\nconsiderations in subfactor theory, it is not enough to work only with\nfinitely supported measures, since there are amenable groups which do\nnot admit any finitely supported ergodic\nmeasures~\\cite{MR704539}. The characterization of amenability in\nterms of Poisson boundaries generalizes several results\nin~\\citelist{\\cite{MR1278111}\\cite{MR1444286}\\cite{MR1749868}}. Our\nmain result also allows us to describe functors that factor through\n$\\Pi$ in terms of categorical invariant means. For quantum groups\nthis essentially reduces to the equivalence between coamenability of\n$G$ and amenability of~$\\hat\nG$~\\citelist{\\cite{MR2276175}\\cite{MR2113848}}.
This idea will be used\nin~\\cite{classification} to classify a class of compact quantum\ngroups.\n\n\\smallskip\n\n\\paragraph{\\bf Acknowledgement} M.Y.~thanks M.~Izumi, S.~Yamagami, T.~Hayashi, and R.~Tomatsu for their interest and encouragement at various stages of\nthe project.\n\n\\bigskip\n\n\\section{Preliminaries}\n\\label{sec:preliminaries}\n\n\\subsection{Monoidal categories}\n\nThe principal subject of this paper is \\emph{rigid C$^*$-tensor\ncategories}. By now there are many materials covering the basics of\nthis subject, see for\nexample~\\citelist{\\cite{MR2091457}\\cite{arXiv:0804.3587v3} \\cite{neshveyev-tuset-book}}\nand references therein. We mainly follow the conventions\nof~\\cite{neshveyev-tuset-book}, but for the convenience of the reader\nwe summarize the basic definitions and facts below.\n\n\\smallskip\n\nA \\emph{C$^*$-category} is a category $\\mathcal{C}$ whose morphism sets\n$\\mathcal{C}(U, V)$ are complex Banach spaces endowed with complex conjugate\ninvolution $\\mathcal{C}(U, V) \\to \\mathcal{C}(V, U)$, $T \\mapsto T^*$ satisfying the\nC$^*$-identity. Unless said otherwise, we always assume that $\\mathcal{C}$ is\nclosed under direct sums and subobjects. The latter means that any\nidempotent in the endomorphism ring $\\End{\\mathcal{C}}{X} = \\mathcal{C}(X, X)$ comes\nfrom a direct summand of~$X$.\n\nA C$^*$-category is said to be \\emph{semisimple} if any object is\nisomorphic to a direct sum of simple (that is, with the endomorphism\nring $\\mathbb{C}$) objects. We then denote the isomorphism classes of simple\nobjects by~$\\Irr (\\mathcal{C})$ and assume that this is an at most countable\nset. 
Many results admit formulations which do not require this\nassumption and can be proved by considering subcategories generated by\ncountable sets of simple objects, but we leave this matter to the\ninterested reader.\n\nA \\emph{unitary functor}, or a \\emph{C$^*$-functor}, is a linear\nfunctor of C$^*$-categories $F\\colon \\mathcal{C} \\to \\mathcal{C}'$ satisfying $F(T^*)\n= F(T)^*$.\n\nIn this paper we frequently perform the following operation: starting\nfrom a C$^*$-category $\\mathcal{C}$, we replace the morphism sets by some\nlarger system $\\mathcal{D}(X, Y)$ naturally containing the original $\\mathcal{C}(X,\nY)$. Then we perform the \\emph{idempotent completion} to construct a\nnew category $\\mathcal{D}$. That is, we regard the projections $p\n\\in \\End{\\mathcal{D}}{X}$ as objects in the new category, and take $q \\mathcal{D}(X,\nY) p$ as the morphism set from the object represented by $p\n\\in \\End{\\mathcal{D}}{X}$ to the one represented by $q \\in \\End{\\mathcal{D}}{Y}$. Then the\nembeddings $\\mathcal{C}(X, Y) \\to \\mathcal{D}(X, Y)$ can be considered as a\nC$^*$-functor $\\mathcal{C} \\to \\mathcal{D}$.\n\nA \\emph{C$^*$-tensor category} is a C$^*$-category endowed with a\nunitary bifunctor $\\otimes \\colon \\mathcal{C} \\times \\mathcal{C} \\to \\mathcal{C}$, a\ndistinguished object $\\mathds{1} \\in \\mathcal{C}$, and natural isomorphisms\n\\begin{align*} \\mathds{1} \\otimes U &\\simeq U \\simeq U \\otimes \\mathds{1},& \\Phi({U,\nV, W})&\\colon (U \\otimes V) \\otimes W \\to U \\otimes (V \\otimes W)\n\\end{align*} satisfying certain compatibility conditions.\n\nA \\emph{unitary tensor functor}, or a \\emph{C$^*$-tensor functor},\nbetween two C$^*$-tensor categories $\\mathcal{C}$ and $\\mathcal{C}'$ is given by a\ntriple $(F_0, F, F_2)$, where $F$ is a C$^*$-functor $\\mathcal{C} \\to \\mathcal{C}'$,\n$F_0$ is a unitary isomorphism $\\mathds{1}_{\\mathcal{C}'} \\to F(\\mathds{1}_\\mathcal{C})$, and $F_2$\nis a natural unitary isomorphism $F(U) 
\\otimes F(V) \\to F(U \\otimes\nV)$, which are compatible with the structure morphisms of $\\mathcal{C}$ and\n$\\mathcal{C}'$. As a rule, we denote tensor functors by just one symbol\n$F$.\n\nWhen $\\mathcal{C}$ is a strict C$^*$-tensor category and $U \\in \\mathcal{C}$, an\nobject $V$ is said to be a \\emph{dual object} of $U$ if there are\nmorphisms $R \\in \\mathcal{C}(\\mathds{1}, V \\otimes U)$ and $\\bar{R} \\in \\mathcal{C}(\\mathds{1}, U\n\\otimes V)$ satisfying the conjugate equations\n\\begin{align*} (\\iota_{V} \\otimes \\bar{R}^*) (R \\otimes \\iota_{V}) &=\n\\iota_{V},& (\\iota_{U} \\otimes R^*) (\\bar{R} \\otimes \\iota_{U}) &=\n\\iota_{U}.\n\\end{align*} If any object in $\\mathcal{C}$ admits a dual, $\\mathcal{C}$ is said to be\n\\emph{rigid} and we denote a choice of a dual of $U \\in \\mathcal{C}$\nby~$\\bar{U}$. A rigid C$^*$-tensor category (with simple unit) has\nfinite dimensional morphism spaces and hence is automatically\nsemisimple. The quantity\n$$\nd^\\mathcal{C}(U) =\\min_{(R, \\bar{R})} \\norm{R} \\norm{\\bar{R}}\n$$\nis called the \\emph{intrinsic dimension} of $U$, where $(R, \\bar{R})$\nruns through the set of solutions of the conjugate equations as above. We\nomit the superscript $\\mathcal{C}$ when there is no danger of confusion. A\nsolution $(R, \\bar{R})$ of the conjugate equations for $U$ is called\n\\emph{standard} if\n$$\n\\|R\\|=\\|\\bar R\\|=d(U)^{1\/2}.\n$$\nSolutions of the conjugate equations for $U$ are unique up to the\ntransformations $$(R, \\bar{R}) \\mapsto ((T^* \\otimes \\iota) R, (\\iota\n\\otimes T^{-1}) \\bar{R}).$$ Furthermore, if $(R,\\bar R)$ is standard,\nthen such a transformation defines a standard solution if and only if\n$T$ is unitary.\n\nIn a rigid C$^*$-tensor category $\\mathcal{C}$ we often fix standard solutions\n$(R_U,\\bar R_U)$ of the conjugate equations for every object $U$. 
Then\n$\\mathcal{C}$ becomes \\emph{spherical} in the sense that one has the equality\n$R_U^* (\\iota \\otimes T) R_U = \\bar{R}_U^* (T \\otimes \\iota)\n\\bar{R}_U$ for any $T \\in \\End{\\mathcal{C}}{U}$. The normalized linear\nfunctional\n$$\n\\tr_U(T) = \\frac{1}{d(U)} R_U^* (\\iota \\otimes T)\nR_U=\\frac{1}{d(U)}\\bar{R}_U^* (T \\otimes \\iota) \\bar{R}_U\n$$\nis a tracial state on the finite dimensional C$^*$-algebra\n$\\End{\\mathcal{C}}{U}$. It is independent of the choice of a standard\nsolution.\n\nGiven a rigid C$^*$-tensor category $\\mathcal{C}$, if $[U]$ and $[V]$ are\nelements of $\\Irr(\\mathcal{C})$, we can define their product in\n$\\mathbb{Z}_+[\\Irr(\\mathcal{C})]$ by putting\n$$\n[U] \\cdot [V] = \\sum_{[W] \\in \\Irr (\\mathcal{C})} \\dim \\mathcal{C}(W, U \\otimes V)\n[W],\n$$\nthus getting a semiring $\\mathbb{Z}_+[\\Irr(\\mathcal{C})]$. Extending this formula by\nbilinearity, we obtain a ring structure on $\\mathbb{Z}[\\Irr(\\mathcal{C})]$. The map\n$[U] \\mapsto d(U)$ extends to a ring homomorphism $\\mathbb{Z}[\\Irr(\\mathcal{C})] \\to\n\\mathbb{R}$. The pair $(\\mathbb{Z}[\\Irr(\\mathcal{C})], d)$ is called the \\emph{fusion\nalgebra} of $\\mathcal{C}$. In general, a ring homomorphism $d'\\colon\n\\mathbb{Z}[\\Irr(\\mathcal{C})] \\to \\mathbb{R}$ satisfying $d'([U]) > 0$ and $d'([U])=d'([\\bar\nU])$ for every $[U] \\in \\Irr(\\mathcal{C})$ is said to be a \\emph{dimension\nfunction} on $\\mathcal{C}$.\n\nFor a rigid C$^*$-tensor category $\\mathcal{C}$, the right multiplication by\n$[U] \\in \\Irr(\\mathcal{C})$ on $\\mathbb{Z}[\\Irr(\\mathcal{C})]$ can be considered as a densely\ndefined operator $\\Gamma_U$ on $\\ell^2(\\Irr(\\mathcal{C}))$. This definition\nextends to arbitrary objects of $\\mathcal{C}$ by the formula $\\Gamma_U =\n\\sum_{[V] \\in \\Irr(\\mathcal{C})} \\dim(V, U) \\Gamma_V$. 
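As a standard illustration (an example not taken from the text): for the representation category of $SU(2)$, with simple objects $U_0, U_1, U_2, \ldots$ indexed by the irreducible representations, the fusion rules give

```latex
[U_1]\cdot[U_n] = [U_{n-1}] + [U_{n+1}] \quad (n \ge 1),
\qquad
[U_1]\cdot[U_0] = [U_1],
```

so that $\Gamma_{U_1}$ is the adjacency operator of the half-line $\mathbb{Z}_+$, whose norm is $2 = d(U_1)$.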
If $d'$ is a dimension\nfunction on $\\mathcal{C}$, one has the estimate $$\\norm{\\Gamma_U}_{B(\\ell^2(\n\\Irr(\\mathcal{C})))} \\le d'(U).$$ If the equality holds for all objects $U$,\nthen the dimension function $d'$ is called \\emph{amenable}. Clearly,\nthere can be at most one amenable dimension function. If the intrinsic\ndimension function is amenable, then $\\mathcal{C}$ itself is called amenable.\n\n\\subsection{Categories of functors} \\label{sstensorfunctors}\n\nGiven a rigid C$^*$-tensor category $\\mathcal{C}$ we will consider the\ncategory of unitary tensor functors from $\\mathcal{C}$ into C$^*$-tensor\ncategories. Its objects are pairs $(\\mathcal{A},E)$, where $\\mathcal{A}$ is a\nC$^*$-tensor category and $E\\colon\\mathcal{C}\\to\\mathcal{A}$ is a unitary tensor\nfunctor. The morphisms $(\\mathcal{A},E)\\to(\\mathcal{B},F)$ are unitary tensor functors\n$G\\colon\\mathcal{A}\\to\\mathcal{B}$, considered up to natural unitary monoidal\nisomorphisms,\\footnote{Therefore the category of functors from $\\mathcal{C}$\nwe consider here is different from the category $\\mathpzc{Tens}(\\mathcal{C})$ defined\nin \\cite{arXiv:1310.4407}, where we wanted to distinguish between\nisomorphic functors and defined a more refined notion of morphisms.}\nsuch that $GE$ is naturally unitarily isomorphic to $F$.\n\nA more concrete way of thinking of this category is as follows. First\nof all we may assume that~$\\mathcal{C}$ is strict. Consider a unitary tensor\nfunctor $E\\colon\\mathcal{C}\\to\\mathcal{A}$. The functor $E$ is automatically faithful\nby semisimplicity and existence of conjugates in $\\mathcal{C}$. It follows\nthat by replacing the pair $(\\mathcal{A},E)$ by an isomorphic one, we may\nassume that $\\mathcal{A}$ is a strict C$^*$-tensor category containing $\\mathcal{C}$\nand $E$ is simply the embedding functor. 
Namely, define the new sets\nof morphisms between objects $U$ and $V$ in $\\mathcal{C}$ as $\\mathcal{A}(E(U),E(V))$,\nand then complete the category we thus obtain with respect to\nsubobjects.\n\nAssume now that we have two strict C$^*$-tensor categories $\\mathcal{A}$ and\n$\\mathcal{B}$ containing $\\mathcal{C}$, and let $E\\colon\\mathcal{C}\\to \\mathcal{A}$ and\n$F\\colon\\mathcal{C}\\to\\mathcal{B}$ be the embedding functors. Assume $[G]\\colon\n(\\mathcal{A},E)\\to(\\mathcal{B},F)$ is a morphism. This means that there exist unitary\nisomorphisms $\\eta_U\\colon G(U)\\to U$ in $\\mathcal{B}$ such that\n$G(T)=\\eta_V^{-1}T\\eta_U$ for any morphism $T\\in\\mathcal{C}(U,V)$, and the\nmorphisms $$G_{2}(U,V)\\colon G(U)\\otimes G(V)\\to G(U\\otimes V)$$\ndefining the tensor structure of $G$ restricted to $\\mathcal{C}$ are given by\n$G_{2}(U,V)=\\eta_{U\\otimes V}^{-1}(\\eta_U\\otimes\\eta_V)$. For\nobjects~$U$ of~$\\mathcal{A}$ that are not in $\\mathcal{C}$ put\n$\\eta_U=1\\in\\mathcal{B}(G(U))$. We can then define a new unitary tensor\nfunctor $\\tilde G\\colon \\mathcal{A}\\to\\mathcal{B}$ by letting $\\tilde G(U)=U$ for\nobjects $U$ in $\\mathcal{C}$ and $\\tilde G(U)=G(U)$ for the remaining objects,\n$\\tilde G(T)=\\eta_VG(T)\\eta_U^{-1}$ for morphisms, and $\\tilde\nG_2(U,V)=\\eta_{U\\otimes\nV}G_2(U,V)(\\eta_U^{-1}\\otimes\\eta_V^{-1})$. Then $[G]=[\\tilde G]$ and\nthe restriction of $\\tilde G$ to $\\mathcal{C}\\subset\\mathcal{A}$ coincides with the\nembedding (tensor) functor $\\mathcal{C}\\to\\mathcal{B}$.
If, furthermore, $\\mathcal{A}$ is generated by\nthe objects of $\\mathcal{C}$ then $[G]$ is completely determined by the maps\n$\\mathcal{A}(U,V)\\to\\mathcal{B}(U,V)$ extending the identity maps on $\\mathcal{C}(U,V)$ for all\nobjects $U$ and $V$ in $\\mathcal{C}$.\n\n\\subsection{Subfactor theory}\n\nLet $N \\subset M$ be an inclusion of von Neumann algebras represented\non a Hilbert space $H$. There is a canonical bijective correspondence\nbetween the normal semifinite faithful operator valued weights $\\Phi\n\\colon M \\to N$ and the ones $\\Psi\\colon N' \\to M'$ in terms of\nspatial derivatives~\\cite{MR561983}. Given such a $\\Phi$, one\nassociates a unique $\\Psi$ denoted by $\\Phi^{-1}$ and characterized by\nthe equation\n$$\n\\frac{d \\omega \\Phi}{d \\omega'} = \\frac{d \\omega}{d \\omega'\n\\Phi^{-1}},\n$$\nwhere $\\omega$ and $\\omega'$ are any choices of normal semifinite\nfaithful weights on $N$ and $M'$.\n\nIf $E$ is a normal faithful conditional expectation from $M$ to $N$,\nits \\emph{index} $\\Ind E$ can be defined as\n$E^{-1}(1)$~\\cite{MR829381}. Suppose that $M$ and $N$ are factors\nadmitting conditional expectations of finite index. Then\nthere is a unique choice of $E$ which minimizes $\\Ind E$. This $E$ is\ncalled the \\emph{minimal conditional expectation} of the subfactor $N\n\\subset M$~\\cite{MR976765}.\n\nSuppose that $N \\subset M$ is a subfactor endowed with a normal\nconditional expectation of finite index $E\\colon M \\to N$. We then\nobtain a von Neumann algebra $M_1$ called the \\emph{basic extension}\nof $N\\subset M$ with respect to $E$, as follows. Taking a normal\nsemifinite faithful weight $\\psi$ on $N$, the algebra $M_1 \\subset\nB(L^2(M, \\psi E))$ is generated by~$M$ and the orthogonal projection\n$e_N$, called the Jones projection, onto $L^2(N, \\psi) \\subset L^2(M,\n\\psi E)$. One has the equality $M_1 = J N' J$, where $J$ is the\nmodular conjugation of $M$ with respect to~$\\psi E$. 
From the above\ncorrespondence of operator valued weights, there is a canonical\nconditional expectation $E_1 \\colon M_1 \\to M$ which has the same\nindex as $E$, namely, $E_1=(\\Ind E)^{-1}JE^{-1}(J\\cdot\nJ)J$. Iterating this procedure, we obtain a tower of von Neumann\nalgebras\n$$\nN \\subset M \\subset M_1 \\subset M_2 \\subset \\cdots.\n$$\nThe higher relative commutants\n$$\nN' \\cap M_k = \\{ x \\in M_k \\mid \\forall y \\in N \\colon x y = y x \\}\n$$\nare finite dimensional C$^*$-algebras. The algebras $M' \\cap M_{2 k}$\n($k \\in \\mathbb{N}$) can be considered as the endomorphism rings of $M\n\\otimes_N M \\otimes_N \\cdots \\otimes_N M$ in the category of\n$M$-bimodules, and there are similar interpretations for the $N' \\cap\nM_{2k + 1}$, etc., in terms of the $N$-bimodules, $M$-$N$-modules, and\n$N$-$M$-modules.\n\n\\subsection{Relative entropy}\n\nAn important numerical invariant for inclusions of von Neumann\nalgebras, closely related to index, is relative entropy. For this part\nwe follow the exposition of~\\cite{MR2251116}. \n\nWhen $\\varphi$ and\n$\\psi$ are positive linear functionals on a C$^*$-algebra $M$, we\ndenote their relative entropy by $S(\\varphi,\\psi)$. If $M$ is finite\ndimensional, it can be defined as\n$$\nS(\\varphi,\\psi)=\\begin{cases}\\Tr(Q_\\varphi(\\log Q_\\varphi-\\log\nQ_\\psi)),& \\text{if}\\ \\ \\varphi\\le\\lambda\\psi\\ \\ \\text{for some}\\ \\\n\\lambda>0,\\\\+\\infty,&\\text{otherwise,}\\end{cases}\n$$\nwhere $\\Tr$ is the canonical trace on $M$, which takes value $1$ on\nevery minimal projection in $M$, and $Q_\\varphi\\in M$ is the density\nmatrix of $\\varphi$, so that we have $\\varphi(x) =\\Tr( x\nQ_\\varphi)$. 
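For instance (a standard special case, stated here for illustration), if $M = \mathbb{C}^n$ is abelian and the states $\varphi$ and $\psi$ correspond to probability vectors $(p_i)_i$ and $(q_i)_i$, then the density matrices are diagonal and the formula reduces to the classical Kullback--Leibler divergence

```latex
S(\varphi,\psi) = \sum_{i=1}^{n} p_i(\log p_i - \log q_i),
```

which is finite precisely when the support of $(p_i)_i$ is contained in that of $(q_i)_i$.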
For a single positive linear functional $\\psi$ on a\nfinite dimensional $M$, we also have its von Neumann entropy defined\nas $S(\\psi)=-\\Tr(Q_\\psi\\log Q_\\psi)$.\n\n\\smallskip\n\nGiven an inclusion of C$^*$-algebras $N\\subset M$ and a state\n$\\varphi$ on $M$, the \\emph{relative entropy} $H_\\varphi(M|N)$ (also\ncalled \\emph{conditional entropy} in classical probability\ntheory) is defined as the supremum of the quantities\n$$\n\\sum_i(S(\\varphi_i,\\varphi)-S(\\varphi_i|_N,\\varphi|_N)),\n$$\nwhere $(\\varphi_i)_i = (\\varphi_1, \\ldots, \\varphi_k)$ runs through\nthe tuples of positive linear functionals on $M$ satisfying $\\varphi =\n\\sum^k_{i=1} \\varphi_i$. If $M$ is finite dimensional, this can also\nbe written as\n$$\nH_\\varphi(M | N) = S(\\varphi) - S(\\varphi|_N) + \\sup_{(\\varphi_i)_i}\n\\sum_i \\bigl(S(\\varphi_i|_N) - S(\\varphi_i)\\bigr),\n$$\nwhere the supremum is again taken over all finite decompositions of\n$\\varphi$.\n\nRelative entropy has the following lower semicontinuity\nproperty. Suppose that $N \\subset M$ is an inclusion of von Neumann\nalgebras and $\\varphi$ is a normal state on $M$. Suppose that $B_i\n\\subset A_i$ ($i=1,2,\\ldots$) are increasing sequences of subalgebras\n$B_i \\subset N$, $A_i \\subset M$ such that $\\cup_i A_i$ and $\\cup_i\nB_i$ are $s^*$-dense in~$M$ and~$N$, respectively. Then one has\nthe estimate $$H_\\varphi(M | N) \\le \\liminf_i H_\\varphi(A_i | B_i).$$\n\nIf $N \\subset M$ is an inclusion of von Neumann algebras and $E \\colon\nM \\to N$ is a normal conditional expectation, the relative entropy of\n$M$ and $N$ with respect to $E$ is defined by\n$$\nH_E(M | N) = \\sup_\\varphi H_\\varphi(M | N),\n$$\nwhere $\\varphi$ runs through the normal states on $M$ satisfying\n$\\varphi = \\varphi E$~\\cite{MR1096438}. 
If $M$ and $N$ are factors,\nthen we have the estimate $H_E(M|N)\\le\\log\\Ind E$.\n\n\\bigskip\n\n\\section{Categorical Poisson boundary}\n\nLet $\\mathcal{C}$ be a strict rigid C$^*$-tensor category satisfying our\nstandard assumptions: it is closed under direct sums and subobjects,\nthe tensor unit is simple, and $\\Irr(\\mathcal{C})$ is at most countable.\n\n\\smallskip\n\nLet $\\mu$ be a probability measure on $\\Irr(\\mathcal{C})$. The Poisson\nboundary of $(\\mathcal{C},\\mu)$ will be a new C$^*$-tensor category $\\mathcal{P}$,\npossibly with nonsimple unit, together with a unitary tensor functor\n$\\Pi\\colon\\mathcal{C}\\to\\mathcal{P}$. In this section we define $(\\mathcal{P},\\Pi)$ in purely\ncategorical terms. In the next section we will give several more\nconcrete descriptions of this construction.\n\n\\smallskip\n\nRecall that $\\tr_X$ stands for the normalized categorical\ntrace on $\\End{\\mathcal{C}}{X}$. More generally, for objects~$U$ and~$V$ we\ndenote by\n$$\n\\tr_X\\otimes\\iota\\colon\\mathcal{C}(X\\otimes U,X\\otimes V)\\to\\mathcal{C}(U,V)\n\\ \\ \\text{and}\\ \\ \\iota\\otimes\\tr_X\\colon\\mathcal{C}(U\\otimes X,V\\otimes\nX)\\to\\mathcal{C}(U,V)\n$$\nthe normalized partial categorical traces. Namely, if\n$R_X\\colon\\mathds{1}\\to\\bar X\\otimes X$ and $\\bar R_X\\colon \\mathds{1}\\to\nX\\otimes\\bar X$ form a standard solution of the conjugate equations\nfor $X$, then\n$$\n(\\tr_X\\otimes\\iota)(T)=d(X)^{-1}(R_X^*\\otimes\\iota)(\\iota\\otimes\nT)(R_X\\otimes\\iota),\n\\ \\ (\\iota\\otimes\\tr_X)(T)=d(X)^{-1}(\\iota\\otimes\\bar\nR_X^*)(T\\otimes\\iota)(\\iota\\otimes\\bar R_X).\n$$\n\nFor an object $U$ consider the functor $\\iota\\otimes\nU\\colon\\mathcal{C}\\to\\mathcal{C}$. 
Given two objects $U$ and $V$, consider the space\n$\\Nat(\\iota\\otimes U,\\iota\\otimes V)$ of natural transformations from\n$\\iota\\otimes U$ to $\\iota\\otimes V$, so elements of\n$\\Nat(\\iota\\otimes U,\\iota\\otimes V)$ are collections\n$\\eta=(\\eta_X)_X$ of morphisms $\\eta_X\\colon X\\otimes U\\to X\\otimes V$\nthat are natural in $X$. For every object $X$ we can define a linear operator\n$P_X$ on $\\Nat(\\iota\\otimes U,\\iota\\otimes V)$ by\n$$\nP_X(\\eta)_Y=(\\tr_X\\otimes\\iota)(\\eta_{X\\otimes Y}).\n$$\nDenote by $\\hat{\\mathcal{C}}(U, V) \\subset \\Nat(\\iota\\otimes U,\\iota\\otimes\nV)$ the subspace of bounded natural transformations, that is, of\nelements $\\eta$ such that $\\sup_Y\\|\\eta_Y\\|<\\infty$. This is a Banach\nspace, and the operator $P_X$ defines a contraction on it. It is also\nclear that the operator $P_X$ depends only on the isomorphism class of\n$X$.\n\nFor every $s\\in \\Irr(\\mathcal{C})$ fix a representative $U_s$. We write\n$\\tr_s$ instead of $\\tr_{U_s}$, $P_s$ instead of $P_{U_s}$, and so\non. Similarly, for a natural transformation $\\eta\\colon\\iota\\otimes\nU\\to\\iota\\otimes V$ we will write $\\eta_s$ instead of\n$\\eta_{U_s}$. Let us also denote by $e\\in\\Irr(\\mathcal{C})$ the index\ncorresponding to $\\mathds{1}$. For convenience we assume that\n$U_e=\\mathds{1}$. Define an involution $s\\mapsto\\bar s$ on $\\Irr(\\mathcal{C})$ such that $U_{\\bar s}$\nis a dual object to $U_s$.\n\nConsider now the operator\n$$\nP_\\mu=\\sum_s\\mu(s)P_s.\n$$\nThis is a well-defined contraction on $\\hat{\\mathcal{C}}(U, V)$. We say that a\nbounded natural transformation $\\eta\\colon\\iota\\otimes\nU\\to\\iota\\otimes V$ is $P_\\mu$-{\\em harmonic} if\n$$P_\\mu(\\eta)=\\eta.$$\nAny morphism $T\\colon U\\to V$ defines a bounded natural transformation\n$(\\iota_X\\otimes T)_X$, which is obviously $P_\\mu$-harmonic for every\n$\\mu$. 
When there is no ambiguity we denote this natural\ntransformation simply by $T$.\n\nThe composition of harmonic transformations is in general not\nharmonic. But we can define a new composition as follows.\n\n\\begin{proposition} \\label{pproduct} Given bounded $P_\\mu$-harmonic\n natural transformations $\\eta\\colon\\iota\\otimes U\\to\\iota\\otimes V$\n and $\\nu\\colon\\iota\\otimes V\\to\\iota\\otimes W$, the limit\n$$\n(\\nu\\cdot\\eta)_X=\\lim_{n\\to\\infty}P^n_\\mu(\\nu\\eta)_X\n$$\nexists for all objects $X$ and defines a bounded $P_\\mu$-harmonic\nnatural transformation $\\iota\\otimes U\\to\\iota\\otimes W$. Furthermore,\nthe composition $\\cdot$ is associative.\n\\end{proposition}\n\nNote that since the spaces $\\mathcal{C}(X\\otimes U,X\\otimes W)$ are finite\ndimensional by our assumptions on $\\mathcal{C}$, the notion of a limit is\nunambiguous.\n\n\\begin{proof}[Proof of Proposition~\\ref{pproduct}] This is an immediate\nconsequence of results of Izumi~\\cite{MR2995726} (another proof will\nbe given in Section~\\ref{sec:poiss-bound-from}). Namely, replacing\n$U$, $V$ and $W$ by their direct sum we may assume that\n$U=V=W$. Then\n$$\n\\hat{\\mathcal{C}}(U) = \\hat{\\mathcal{C}}(U, U) \\cong\n\\ell^\\infty\\text{-}\\bigoplus_s\\End{\\mathcal{C}}{U_s\\otimes U}\n$$\nis a von Neumann algebra and $P_\\mu$ is a normal unital completely\npositive map on it. By \\cite{MR2995726}*{Corollary~5.2} the subspace\nof $P_\\mu$-invariant elements is itself a von Neumann algebra with\nproduct $\\cdot$ such that $x\\cdot y$ is the $s^*$-limit of the\nsequence $\\{P^n_\\mu(xy)\\}_n$. 
\\end{proof}\n\nUsing this product on harmonic elements we can define a new\nC$^*$-tensor category $\\mathcal{P}=\\mathcal{P}_{\\mathcal{C},\\mu}$ and a unitary tensor functor\n$\\Pi=\\Pi_{\\mathcal{C},\\mu}\\colon\\mathcal{C}\\to\\mathcal{P}$ as follows.\n\nFirst consider the category $\\tilde\\mathcal{P}$ with the same objects as in\n$\\mathcal{C}$, but define the new spaces $\\tilde\\mathcal{P}(U,V)$ of morphisms as the\nspaces of bounded $P_\\mu$-harmonic natural transformations\n$\\iota\\otimes U\\to\\iota\\otimes V$. Define the composition of morphisms\nas in Proposition~\\ref{pproduct}. We thus get a C$^*$-category,\npossibly without subobjects. Furthermore, the C$^*$-algebras\n$\\End{\\tilde\\mathcal{P}}{U}$ are von Neumann algebras.\n\nNext, we define the tensor product of objects in the same way as in\n$\\mathcal{C}$, and define the tensor product of morphisms by\n$$\n\\nu\\otimes\\eta=(\\nu\\otimes\\iota)\\cdot(\\iota\\otimes\\eta).\n$$\nHere, given $\\nu\\colon \\iota\\otimes U\\to\\iota\\otimes V$ and\n$\\eta\\colon\\iota\\otimes W\\to\\iota\\otimes Z$, the natural\ntransformation $\\nu\\otimes\\iota_Z\\colon\\iota\\otimes U\\otimes\nZ\\to\\iota\\otimes V\\otimes Z$ is defined by\n$$\n(\\nu\\otimes\\iota_Z)_X=\\nu_X\\otimes\\iota_Z,\n$$\nwhile the natural transformation $\\iota_U\\otimes\\eta\\colon\\iota\\otimes\nU\\otimes W\\to\\iota\\otimes U\\otimes Z$ is defined by\n$$\n(\\iota_U\\otimes\\eta)_X=\\eta_{X\\otimes U}.\n$$\nWe remark that $\\nu\\otimes\\iota$ and $\\iota\\otimes\\eta$ are still\n$P_\\mu$-harmonic due to the identities\n$$\nP_X(\\nu\\otimes\\iota) = P_X(\\nu)\\otimes\\iota,\\ \\ P_X(\\iota\\otimes\\eta)\n= \\iota\\otimes P_X(\\eta).\n$$\nNote also that by naturality of $\\eta$ we have\n$(\\nu_X\\otimes\\iota_Z)\\eta_{X\\otimes U}=\\eta_{X\\otimes\n V}(\\nu_X\\otimes\\iota_Z)$, which implies that\n$$\n\\nu\\otimes\\eta=(\\iota\\otimes\\eta)\\cdot(\\nu\\otimes\\iota).\n$$\nThis shows that 
$\\otimes\\colon\\tilde\\mathcal{P}\\times\\tilde\\mathcal{P}\\to\\tilde\\mathcal{P}$ is\nindeed a bifunctor.\n\nFinally, complete the category $\\tilde\\mathcal{P}$ with respect to\nsubobjects. This is our C$^*$-tensor category~$\\mathcal{P}$, possibly with\nnonsimple unit. The unitary tensor functor $\\Pi\\colon\\mathcal{C}\\to\\mathcal{P}$ is\ndefined in the obvious way: it is the strict tensor functor which is\nthe identity map on objects and $\\Pi(T)=(\\iota_X\\otimes T)_X$ on\nmorphisms. We will often omit $\\Pi$ and simply consider $\\mathcal{C}$ as a\nC$^*$-tensor subcategory of $\\mathcal{P}$.\n\n\\begin{definition} The pair $(\\mathcal{P},\\Pi)$ is called the {\\em Poisson\n boundary} of $(\\mathcal{C},\\mu)$. We say that the Poisson boundary is\n trivial if $\\Pi\\colon\\mathcal{C}\\to\\mathcal{P}$ is an equivalence of categories, or\n in other words, for all objects~$U$ and~$V$ in $\\mathcal{C}$ the only\n bounded $P_\\mu$-harmonic natural transformations $\\iota\\otimes\n U\\to\\iota\\otimes V$ are the transformations of the form\n $\\eta=(\\iota_X\\otimes T)_X$ for $T\\in\\mathcal{C}(U,V)$.\n\\end{definition}\n\nThe algebra $\\mathcal{P}(\\mathds{1})$ is determined by the random walk on $\\Irr(\\mathcal{C})$\nwith transition probabilities\n$$\np_\\mu(s,t)=\\sum_r\\mu(r)m^t_{rs}\\frac{d(t)}{d(r) d(s)},\n$$\nwhere $d(s)=d(U_s)$ and $m^t_{rs}=\\dim\\mathcal{C}(U_t,U_r\\otimes\nU_s)$. Namely, if we identify $\\hat\\mathcal{C}(\\mathds{1})$ with\n$$\n\\ell^\\infty\\text{-}\\bigoplus_s\\mathcal{C}(U_s)=\\ell^\\infty(\\Irr(\\mathcal{C})),\n$$\nthen the operator $P_\\mu$ on $\\hat\\mathcal{C}(\\mathds{1})$ is the Markov operator\ndefined by $p_\\mu$, so $(P_\\mu f)(s)=\\sum_tp_\\mu(s,t)f(t)$. Therefore\n$\\mathcal{P}(\\mathds{1})$ is the algebra of bounded measurable functions on the\nPoisson boundary, in the usual probabilistic sense, of the random walk\non $\\Irr(\\mathcal{C})$ with transition probabilities~$p_\\mu(s,t)$. 
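To make the random walk concrete, here is a small numerical sketch (an illustration, not part of the original text). The fusion ring of $\\operatorname{Rep} S_3$ has three simple objects of dimensions $1,1,2$, with $\\mathrm{std}\\otimes\\mathrm{std}\\cong\\mathds{1}\\oplus\\mathrm{sgn}\\oplus\\mathrm{std}$. Taking $\\mu=\\delta_{\\mathrm{std}}$, which is generating, the matrix $p_\\mu(s,t)$ is stochastic and the weights $d(s)^2$ form a stationary measure for it:

```python
import numpy as np

# Simple objects of Rep(S3) and their dimensions d(s).
objs = ["triv", "sgn", "std"]
d = np.array([1.0, 1.0, 2.0])

# Fusion multiplicities m[t][r][s] = dim Hom(U_t, U_r x U_s).
m = np.zeros((3, 3, 3))
for i in range(3):
    m[i][0][i] = m[i][i][0] = 1.0   # triv x X = X = X x triv
m[0][1][1] = 1.0                    # sgn x sgn = triv
m[2][1][2] = m[2][2][1] = 1.0       # sgn x std = std = std x sgn
for i in range(3):
    m[i][2][2] = 1.0                # std x std = triv + sgn + std

mu = np.array([0.0, 0.0, 1.0])      # mu = delta_std (generating)

# p_mu(s, t) = sum_r mu(r) m[t][r][s] d(t) / (d(r) d(s))
p = np.zeros((3, 3))
for s in range(3):
    for t in range(3):
        p[s, t] = sum(mu[r] * m[t][r][s] * d[t] / (d[r] * d[s])
                      for r in range(3))

print(p.sum(axis=1))   # each row sums to 1, so p is a stochastic matrix
stationary = d ** 2    # the measure m = sum_s d(s)^2 delta_s
print(stationary @ p)  # equals stationary: [1, 1, 4]
```

Since this fusion ring is finite and the chain is irreducible and aperiodic, the walk has a unique stationary distribution $d(s)^2/\\sum_t d(t)^2$ and its probabilistic Poisson boundary is trivial.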
We say\nthat~$\\mu$ is \\emph{ergodic}, if this boundary is trivial, that is,\nthe tensor unit of~$\\mathcal{P}$ is simple.\n\nWe say that $\\mu$ is \\emph{symmetric} if $\\mu(s)=\\mu(\\bar s)$ for all\n$s$, and that $\\mu$ is \\emph{generating} if every simple object\nappears in the decomposition of $U_{s_1}\\otimes\\dots \\otimes U_{s_n}$\nfor some $s_1,\\dots,s_n\\in\\supp\\mu$ and $n\\ge1$. Equivalently, $\\mu$\nis generating if $\\cup_{n\\ge1}\\supp\\mu^{*n}=\\Irr(\\mathcal{C})$, where the\nconvolution of probability measures on $\\Irr(\\mathcal{C})$ is defined by\n$$\n(\\nu*\\mu)(t)=\\sum_{s,r}\\nu(s)\\mu(r)m^t_{sr}\\frac{d(t)}{d(s) d(r)}.\n$$\nWe will write $\\mu^n$ instead of $\\mu^{*n}$. The definition of the\nconvolution is motivated by the identity $P_\\mu P_\\nu=P_{\\nu*\\mu}$.\n\nWe remark that a symmetric ergodic measure $\\mu$, or even an ergodic\nmeasure with symmetric support, is automatically generating. Indeed,\nthe symmetry assumption implies that we have a well-defined\nequivalence relation on $\\Irr(\\mathcal{C})$ such that $s\\sim t$ if and only if\n$t$ can be reached from $s$ with nonzero probability in a finite\nnonzero number of steps. Then any bounded function on $\\Irr(\\mathcal{C})$ that\nis constant on equivalence classes is $P_\\mu$-harmonic. Hence $\\mu$ is\ngenerating by the ergodicity assumption.\n\n\\smallskip\n\nLet us say that $\\mathcal{C}$ is \\emph{weakly amenable} if the fusion algebra\n$(\\mathbb{Z}[\\Irr(\\mathcal{C})],d)$ is weakly amenable in the sense of Hiai and\nIzumi~\\cite{MR1644299}, that is, there exists a left invariant mean on\n$\\ell^\\infty(\\Irr(\\mathcal{C}))$. By definition this is a state $m$ such that\n$m(P_s(f))=m(f)$ for all $f\\in\\ell^\\infty(\\Irr(\\mathcal{C}))$ and\n$s\\in\\Irr(\\mathcal{C})$. Of course, it is also possible to define right\ninvariant means, and by \\cite{MR1644299}*{Proposition~4.2} if there\nexists a left or right invariant mean, then there exists a\nbi-invariant mean. 
By the same proposition amenability implies weak\namenability, as the term suggests. But as opposed to the group case,\nin general, the converse is not true. Using this terminology let us\nrecord the following known result.\n\n\\begin{proposition} \\label{pweakamen} An ergodic probability measure\non $\\Irr(\\mathcal{C})$ exists if and only if $\\mathcal{C}$ is weakly\namenable. Furthermore, if an ergodic measure exists, then it can be\nchosen to be symmetric and with support equal to the entire space\n$\\Irr(\\mathcal{C})$.\n\\end{proposition}\n\n\\begin{proof} If $\\mu$ is an ergodic measure, then any weak$^*$ limit point of\nthe sequence $n^{-1}\\sum^{n-1}_{k=0}\\mu^k$ defines a right invariant\nmean. For random walks on groups this implication was observed by\nFurstenberg. The other direction is proved\nin~\\cite{MR1749868}*{Theorem~2.5}. It is an analogue of a result of\nKaimanovich--Vershik and Rosenblatt. \\end{proof}\n\nIt should be remarked that if the fusion algebra of $\\mathcal{C}$ is weakly\namenable and finitely generated, in general it is not possible to find\na finitely supported ergodic\nmeasure~\\cite{MR704539}*{Proposition~6.1}.\n\n\\smallskip\n\nTo finish the section, let us show that, not surprisingly, categorical\nPoisson boundaries are of interest only for infinite categories.\n\n\\begin{proposition}\\label{pfb} Assume $\\mathcal{C}$ is finite, meaning that\n$\\Irr(\\mathcal{C})$ is finite, and $\\mu$ is generating. Then the Poisson\nboundary of $(\\mathcal{C},\\mu)$ is trivial.\n\\end{proposition}\n\n\\begin{proof} The proof is similar to the proof of triviality of the Poisson\nboundary of a random walk on a finite set based on the maximum\nprinciple. Fix an object $U$ in $\\mathcal{C}$ and assume that\n$\\eta\\in\\hat{\\mathcal{C}}(U)$ is positive and $P_\\mu$-harmonic. We claim that\nif $\\eta\\ne0$ then there exists a positive nonzero morphism\n$T\\in\\End{\\mathcal{C}}{U}$ such that $\\eta\\ge T$. 
Assuming that the claim is\ntrue, we can then choose a maximal $T$ with this property. Applying\nagain the claim to the element $\\eta-T$, we conclude that $\\eta=T$ by\nmaximality.\n\nIn order to prove the claim observe that $\\eta_e\\in\\End{\\mathcal{C}}{U}$ is\nnonzero. Indeed, by assumption there exists~$s$ such that\n$\\eta_s\\ne0$. Since the categorical traces are faithful, and therefore\npartial categorical traces are faithful completely positive maps, it\nfollows that $P_s(\\eta)_e\\ne0$. Since $s\\in\\supp\\mu^n$ for some\n$n\\ge1$, we conclude that $\\eta_e=P_{\\mu^n}(\\eta)_e\\ne0$.\n\nDenote the positive nonzero element $\\eta_e\\in\\End{\\mathcal{C}}{U}$ by\n$S$. Fix $s\\in\\Irr(\\mathcal{C})$. Let $(R_s,\\bar R_s)$ be a standard solution\nof the conjugate equations for $U_s$, and $p\\in \\End{\\mathcal{C}}{\\bar\nU_s\\otimes U_s}$ be the projection defined by $p=d(s)^{-1}R_sR_s^*$. By\nnaturality of $\\eta$ we then have $\\eta_{\\bar U_s\\otimes U_s}\\ge\np\\otimes S$, whence\n$$\nP_{\\bar s}(\\eta)_s\\ge(\\tr_{\\bar s}\\otimes\\iota)(p)\\otimes\nS=d(s)^{-2}(\\iota\\otimes S).\n$$\nUsing the generating property of $\\mu$ and finiteness of $\\Irr(\\mathcal{C})$,\nwe conclude that there exists a number $\\lambda>0$ such that\n$\\eta_s\\ge \\iota\\otimes\\lambda S$ for all $s$. This proves the\nclaim. \\end{proof}\n\n\\bigskip\n\n\\section{Realizations of the Poisson boundary}\n\nAs in the previous section, we fix a strict rigid C$^*$-tensor\ncategory $\\mathcal{C}$ and a probability measure~$\\mu$ on~$\\Irr(\\mathcal{C})$. In\nSections~\\ref{sec:longo-roberts-appr} and~\\ref{sHY} we will in\naddition assume that $\\mu$ is generating. Let $\\Pi\\colon\\mathcal{C}\\to\\mathcal{P}$ be\nthe Poisson boundary of $(\\mathcal{C},\\mu)$. Our goal is to give several\ndescriptions of the algebras~$\\mathcal{P}(U)$ of harmonic elements.\n\n\\subsection{Time shift on the categorical path space}\n\\label{sec:poiss-bound-from}\n\nFix an object $U$. 
Denote by $M^{(0)}_U$ the von Neumann algebra\n$\\hat{\\mathcal{C}}(U)\\cong\\ell^\\infty{\\text-}\\oplus_s\\End{\\mathcal{C}}{U_s\\otimes\nU}$. More generally, for every $n\\ge0$ consider the von Neumann\nalgebra\n$$\nM^{(n)}_U=\\Endb(\\iota_{\\mathcal{C}^{n+1}}\\otimes U),\n$$\nso $M^{(n)}_U$\nconsists of bounded collections\n$\\eta=(\\eta_{X_{n},\\dots,X_0})_{X_n,\\dots,X_0}$ of natural in\n$X_n,\\dots,X_0$ endomorphisms of $X_n\\otimes\\dots \\otimes X_0\\otimes\nU$. We consider $M^{(n)}_U$ as a subalgebra of~$M^{(n+1)}_U$ using the\nembedding\n$$\n(\\eta_{X_{n},\\dots,X_0})_{X_n,\\dots,X_0}\\mapsto(\\iota_{X_{n+1}}\\otimes\n\\eta_{X_n,X_{n-1},\\dots,X_0})_{X_{n+1},\\dots,X_0}.\n$$\nDefine a conditional\nexpectation $E_{n+1,n}\\colon M^{(n+1)}_U\\to M^{(n)}_U$ by\n$$\nE_{n+1,n}(\\eta)_{X_n,\\dots,X_0} = \\sum_s \\mu(s)(\\tr_s\\otimes\\iota)\n(\\eta_{U_s,X_n,\\dots,X_0}).\n$$\nTaking compositions of such conditional expectations we get normal\nconditional expectations $$E_{n,0}\\colon M^{(n)}_U\\to M^{(0)}_U.$$\nThese conditional expectations are not faithful for $n\\ge1$ unless the\nsupport of $\\mu$ is the entire space $\\Irr(\\mathcal{C})$. The support of\n$E_{n,0}$ is a central projection, and we denote by $\\mathcal{M}^{(n)}_U$ the\nreduction of $M^{(n)}_U$ by this projection. More concretely, we have\na canonical isomorphism\n\\begin{equation}\\label{eq:M-n-U-defn}\n \\mathcal{M}^{(n)}_U \\cong \\ell^\\infty\\text{-}\\bigoplus_{\\substack{s_n,\\dots,s_1\\in\\supp\\mu\\\\ s_0\\in\\Irr(\\mathcal{C})}}\n\\End{\\mathcal{C}}{U_{s_n}\\otimes\\dots\\otimes U_{s_0}\\otimes U}.\n\\end{equation} The conditional expectations $E_{n,0}$ define normal\nfaithful conditional expectations\n$\\mathcal{E}_{n,0}\\colon\\mathcal{M}^{(n)}_U\\to\\mathcal{M}^{(0)}_U=M^{(0)}_U$, and similarly\n$E_{n+1,n}$ define conditional expectations $\\mathcal{E}_{n+1,n}$. 
Denote by\n$\\mathcal{M}_U$ the von Neumann algebra obtained as the inductive\nlimit of the algebras $\\mathcal{M}^{(n)}_U$ with respect to $\\mathcal{E}_{n,0}$. In other\nwords, take any faithful normal state $\\phi^{(0)}_U$ on $\\mathcal{M}^{(0)}_U$. By\ncomposing it with the conditional expectation $\\mathcal{E}_{n,0}$ we get a\nstate $\\phi^{(n)}_U$ on $\\mathcal{M}^{(n)}_U$. Together these states define a\nstate on $\\cup_n\\mathcal{M}^{(n)}_U$. Finally, complete $\\cup_n\\mathcal{M}^{(n)}_U$ to a\nvon Neumann algebra in the GNS-representation corresponding to this\nstate. Denote the corresponding normal state on $\\mathcal{M}_U$\nby~$\\phi_U$.\n\nNote that if we start with a trace on\n$\\mathcal{M}^{(0)}_U$ which is a convex combination of the traces\n$\\tr_{U_s\\otimes U}$, then the corresponding state $\\phi_U$\non $\\mathcal{M}_U$ is tracial. Since it is faithful on $\\mathcal{M}^{(n)}_U$\nfor every~$n$, it is faithful on~$\\mathcal{M}_U$. This shows that\n$\\mathcal{M}_U$ is a finite von Neumann algebra. Furthermore, the\n$\\phi_U$-preserving normal faithful conditional expectation\n$\\mathcal{E}_n\\colon\\mathcal{M}_U\\to \\mathcal{M}^{(n)}_U$ coincides with $\\mathcal{E}_{n+1,n}$\non $\\mathcal{M}^{(n+1)}_U$. It follows that on the dense algebra\n$\\cup_m\\mathcal{M}^{(m)}_U$ the conditional expectation $\\mathcal{E}_n$ is the limit, in\nthe pointwise $s^*$-topology, of $\\mathcal{E}_{n+1,n}\\mathcal{E}_{n+2,n+1}\\dots\n\\mathcal{E}_{m+1,m}$ as $m\\to\\infty$. 
Hence $\\mathcal{E}_n$ is independent of the choice\nof a faithful normal trace $\\phi^{(0)}_U$ as above.\n\n\\smallskip\n\nDefine a unital endomorphism $\\theta_U$ of\n$\\cup_nM^{(n)}_U$ such that $\\theta_U(M^{(n)}_U)\\subset M^{(n+1)}_U$\nby\n$$\n\\theta_U(\\eta)_{X_{n+1},\\dots,X_0}=\\eta_{X_{n+1},\\dots,X_2,X_1\\otimes\nX_0}.\n$$\nConsidering $\\mathcal{M}^{(k)}_U$ as a quotient of $M^{(k)}_U$ we get a unital\nendomorphism of $\\cup_n\\mathcal{M}^{(n)}_U$.\n\n\\begin{lemma} \\label{lshift}\nThe endomorphism $\\theta_U$ of $\\cup_n\\mathcal{M}^{(n)}_U$\nextends to a normal faithful endomorphism of~$\\mathcal{M}_U$, which\nwe continue to denote by $\\theta_U$.\n\\end{lemma}\n\n\\begin{proof} Consider the normal semifinite faithful (n.s.f.)~trace\n$\\psi^{(0)}_U=\\sum_sd(s)^2\\tr_{U_s\\otimes U}$ on\n$$\\mathcal{M}^{(0)}_U\\cong\\ell^\\infty\\text{-}\\bigoplus_s\\End{\\mathcal{C}}{U_s\\otimes U}$$ and put\n$\\psi_U=\\psi^{(0)}_U\\mathcal{E}_0$. Then $\\psi_U$ is an n.s.f.~trace. In order\nto prove the lemma it suffices to show that the restriction of\n$\\psi_U$ to $\\cup_n\\mathcal{M}^{(n)}_U$ is $\\theta_U$-invariant. Indeed, if the\ninvariance holds, then we can define an isometry $U$ on\n$L^2(\\mathcal{M}_U,\\psi_U)$ by\n$U\\Lambda_{\\psi_U}(x)=\\Lambda_{\\psi_U}(\\theta_U(x))$ for\n$x\\in\\cup_n\\mathcal{M}^{(n)}_U$ such that $\\psi_U(x^*x)<\\infty$. Let $H\\subset\nL^2(\\mathcal{M}_U,\\psi_U)$ be the image of~$U$ and $\\mathcal{M}$ be the von Neumann\nalgebra generated by the image of $\\theta_U$. Then $H$ is\n$\\mathcal{M}$-invariant. We can choose $0\\le e_i\\le1$ such that\n$\\theta_U(e_i)\\to1$ strongly and $\\psi_U(e_i)<\\infty$. Now, if\n$x\\in\\mathcal{M}_+$ is such that $x|_H=0$, then\n$\\psi_U(\\theta_U(e_i)x\\theta_U(e_i))=0$, and by lower semicontinuity\nwe get $\\psi_U(x)=0$, so $x=0$. 
Therefore we can define $\\theta_U$ as\nthe composition of the map $\\mathcal{M}_U\\to B(H)$, $x\\mapsto UxU^*$, with the\ninverse of the map $\\mathcal{M}\\to \\mathcal{M}|_H$.\n\nIt remains to check the invariance. By definition we have\n$\\mathcal{E}_{n+2,n+1}\\theta_U=\\theta_U\\mathcal{E}_{n+1,n}$ on $\\mathcal{M}^{(n+1)}_U$ for all\n$n\\ge0$. This implies that $\\mathcal{E}_{n+1}\\theta_U=\\theta_U\\mathcal{E}_n$ on\n$\\cup_k\\mathcal{M}^{(k)}_U$. It follows that for any $x\\in \\cup_n\\mathcal{M}^{(n)}_U$ we\nhave\n$$\n\\psi_U\\theta_U(x)=\\psi_U\\mathcal{E}_1\\theta_U(x)=\\psi_U\\theta_U\\mathcal{E}_0(x)\n=\\psi_U\\mathcal{E}_0\\theta_U\\mathcal{E}_0(x).\n$$\nThis implies that it suffices to show that $\\psi_U\\mathcal{E}_0\\theta_U=\\psi_U$\non~$\\mathcal{M}^{(0)}_U$. Since $\\tr_{U_s\\otimes U}=\\tr_s(\\iota\\otimes\\tr_U)$,\nit is enough to consider the case $U=\\mathds{1}$. Note also that\n$\\mathcal{E}_0\\theta_U=P_\\mu$ on~$\\mathcal{M}^{(0)}_U$. Thus we have to check that\n$\\psi_\\mathds{1} P_\\mu=\\psi_\\mathds{1}$ on\n$\\mathcal{M}^{(0)}_\\mathds{1}\\cong\\ell^\\infty(\\Irr(\\mathcal{C}))$. This is equivalent to the\neasily verifiable identity $\\mu*m=m$, where\n$m=\\sum_sd(s)^2\\delta_s$. \\end{proof}\n\nWe call the endomorphism $\\theta_U$ of $\\mathcal{M}_U$ the {\\em time\nshift}. Now, take $\\eta\\in\\mathcal{M}^{(0)}_U$. Then for every $n\\ge0$ we can\ndefine an element $\\eta^{(n)}\\in M^{(n)}_U$ by\n$$\n\\eta^{(n)}_{X_n,\\dots,X_0}=\\eta_{X_n\\otimes\\dots\\otimes X_0}.\n$$\nConsider the image of $\\eta^{(n)}$ in $\\mathcal{M}^{(n)}_U$ and denote it again\nby $\\eta^{(n)}$, since this is the only element we are interested\nin. Then $\\eta$ is $P_\\mu$-harmonic if and only if\n$\\mathcal{E}_{1,0}(\\eta^{(1)})=\\eta$, and in this case\n$\\mathcal{E}_{n+1,n}(\\eta^{(n+1)})=\\eta^{(n)}$ for all $n$. Therefore if $\\eta$\nis $P_\\mu$-harmonic, then the sequence $\\{\\eta^{(n)}\\}_n$ is a\nmartingale. 
Denote by $\\eta^{(\\infty)}\\in\\mathcal{M}_U$ its $s^*$-limit.\n\n\\begin{proposition} \\label{prop:lr-emb} The map\n$\\eta\\mapsto\\eta^{(\\infty)}$ is an isomorphism between the von Neumann\nalgebra $\\mathcal{P}(U)$ of $P_\\mu$-harmonic bounded natural transformations\n$\\iota\\otimes U\\to\\iota\\otimes U$ and the fixed point algebra\n$\\mathcal{M}^{\\theta_U}_U$. The inverse map is given by $x\\mapsto \\mathcal{E}_0(x)$.\n\\end{proposition}\n\n\\begin{proof} By definition we have $\\eta^{(n)}=\\theta^n_U(\\eta)$. It follows\nthat if $\\eta$ is $P_\\mu$-harmonic, so that\n$\\eta^{(n)}\\to\\eta^{(\\infty)}$, then the element $\\eta^{(\\infty)}$ is\n$\\theta_U$-invariant. We also clearly have\n$\\mathcal{E}_0(\\eta^{(\\infty)})=\\eta$.\n\nConversely, take $x\\in\\mathcal{M}_U^{\\theta_U}$. The proof of\nLemma~\\ref{lshift} implies that $\\mathcal{E}_{n+1}\\theta_U=\\theta_U\\mathcal{E}_n$. Hence\nthe martingale $\\{x_n=\\mathcal{E}_n(x)\\}_n$ has the property\n$x_{n+1}=\\theta_U(x_n)$. As $\\mathcal{E}_0\\theta_U=P_\\mu$ on~$\\mathcal{M}^{(0)}_U$, we\nconclude that $x_0$ is $P_\\mu$-harmonic and $x_0^{(\\infty)}=x$.\n\nWe have thus proved that the maps in the formulation are inverse to\neach other. Since they are unital completely positive, they must be\nisomorphisms. \\end{proof}\n\nThe bijection between $\\mathcal{P}(U)$ and $\\mathcal{M}^{\\theta_U}_U$ could be used to\ngive an alternative proof of Proposition~\\ref{pproduct}. Namely, we\ncould define a product $\\cdot$ on harmonic elements by\n$\\nu\\cdot\\eta=\\mathcal{E}_0(\\nu^{(\\infty)}\\eta^{(\\infty)})$. 
Since\n$\\nu^{(\\infty)}\\eta^{(\\infty)}$ is the $s^*$-limit of the elements\n$\\nu^{(n)}\\eta^{(n)}=(\\nu\\eta)^{(n)}$, and\n$\\mathcal{E}_0((\\nu\\eta)^{(n)})=P^n_\\mu(\\nu\\eta)$, it follows that\n$P^n_\\mu(\\nu\\eta)\\to \\nu\\cdot\\eta$ in the $s^*$-topology, which is\nequivalent to saying that $P^n_\\mu(\\nu\\eta)_X\\to (\\nu\\cdot\\eta)_X$ for\nevery $X$.\n\n\\subsection{Relative commutants: Izumi--Longo--Roberts approach}\n\\label{sec:longo-roberts-appr}\n\nWe will now modify the construction of the algebras $\\mathcal{M}_U$ to get\nalgebras $\\mathcal{N}_U$ and an identification of $\\mathcal{P}(U)$ with\n$\\mathcal{N}_\\mathds{1}'\\cap\\mathcal{N}_U$. Conceptually, instead of considering all paths\nof the random walk defined by $\\mu$, we consider only paths starting\nat the unit object. The time shift is no longer defined on this space,\nbut by considering a larger space we can still get a description of\n$\\mathcal{P}(U)$ in simple von Neumann algebraic terms. For this to work we\nhave to assume that $\\mu$ is generating, so that we can reach any\nsimple object from the unit.\n\nThis identification of harmonic elements is closely related to Izumi's\ndescription of Poisson boundaries of discrete quantum\ngroups~\\cite{MR1916370}. A similar construction was also used by Longo\nand Roberts using sector theory~\\cite{MR1444286}. More precisely, they\nworked with a somewhat limited form of $\\mu$ and what we obtain is a\npossibly infinite von Neumann algebra for what corresponds to the\nfinite gauge-invariant von Neumann subalgebra in their work.\n\n\\smallskip\n\nWe first put $V = \\oplus_{s \\in \\supp \\mu} U_s$. In the case $\\supp\n\\mu$ is infinite, this should be understood only as a suggestive\nnotation which does not make sense inside $\\mathcal{C}$. 
Given an object $U$,\nby $\\mathcal{C}(V^{\\otimes n}\\otimes U)$ we understand the space\n$$\n\\bigoplus_{s_*, s'_* \\in \\supp \\mu^n} \\mathcal{C}(U_{s_n} \\otimes \\cdots\n\\otimes U_{s_1} \\otimes U, U_{s'_n} \\otimes \\cdots\\otimes U_{s'_1}\n\\otimes U)\n$$\nendowed with the obvious $*$-algebra structure. Similarly to\nSection~\\ref{sec:poiss-bound-from} we have completely positive maps\n$$\n\\mathcal{E}_{n+1,n}=\\sum_s \\mu(s) (\\tr_s \\otimes \\iota)\\colon \\mathcal{C}(V^{\\otimes\n(n+1)}\\otimes U)\\to \\mathcal{C}(V^{\\otimes n}\\otimes U),\n$$\nand taking the composition of these maps we get maps\n$$\n\\mathcal{E}_{n,0}\\colon \\mathcal{C}(V^{\\otimes n}\\otimes U)\\to \\mathcal{C}(U).\n$$\nThen $\\omega_U^{(n)} = \\tr_U \\mathcal{E}_{n,0}$ is a state on $\\mathcal{C}(V^{\\otimes\nn}\\otimes U)$. We denote by $\\mathcal{N}^{(n)}_U$ the von Neumann algebra\ngenerated by $\\mathcal{C}(V^{\\otimes n}\\otimes U)$ in the GNS-representation\ndefined by this state. The elements of $\\mathcal{N}_U^{(n)}$ are represented\nby certain bounded families in the direct product of the morphism\nsets $$\\mathcal{C}(U_{s_n} \\otimes \\cdots \\otimes U_{s_1} \\otimes U, U_{s'_n}\n\\otimes \\cdots \\otimes U_{s'_1} \\otimes U).$$ Since the positive\nelements of $\\mathcal{N}_U^{(n)}$ have positive diagonal entries, the state\n$\\omega_U^{(n)}$ is faithful on $\\mathcal{N}_U^{(n)}$.\n\nThere is a natural diagonal embedding $\\mathcal{N}_U^{(n)} \\rightarrow\n\\mathcal{N}_U^{(n+1)}$ defined by $T \\mapsto \\iota_V \\otimes T$. The map\n$\\mathcal{E}_{n+1,n}$ extends then to a normal conditional expectation\n$\\mathcal{N}_U^{(n+1)} \\rightarrow \\mathcal{N}_U^{(n)}$ such that\n$\\omega^{(n)}_U\\mathcal{E}_{n+1,n}=\\omega^{(n+1)}_U$. This way we obtain an\ninductive system $(\\mathcal{N}_U^{(n)}, \\omega_U^{(n)})_n$ of von Neumann\nalgebras, and we let $(\\mathcal{N}_U, \\omega_U)$ be the von Neumann algebra\nand the faithful state obtained as the limit. 
As in\nSection~\\ref{sec:poiss-bound-from}, composing the conditional\nexpectations $\\mathcal{E}_{n+1,n}$ and passing to the limit we get\n$\\omega_U$-preserving conditional expectations\n$\\mathcal{E}_n\\colon\\mathcal{N}_U\\to\\mathcal{N}^{(n)}_U$.\n\n\\smallskip\n\nWhen $U = \\mathds{1}$, we simply write $\\mathcal{N}^{(n)}$ and $\\mathcal{N}$ instead of\n$\\mathcal{N}^{(n)}_\\mathds{1}$ and $\\mathcal{N}_\\mathds{1}$. If $U'$ and $U$ are objects in~$\\mathcal{C}$,\nthen the map $x\\mapsto x\\otimes\\iota_U$ defines an embedding\n$\\mathcal{N}_{U'}\\hookrightarrow\\mathcal{N}_{U'\\otimes U}$. In particular, the\nalgebra $\\mathcal{N}$ is contained in any of $\\mathcal{N}_U$.\n\n\\smallskip\n\nWhen $\\eta$ is a natural transformation in $\\hat\\mathcal{C}(U)$,\nthe morphism $$\\eta_{V^{\\otimes n}} = \\oplus_{s_*} \\eta_{U_{s_n}\n\\otimes \\cdots \\otimes U_{s_1}}$$ defines an element in the diagonal\npart of $\\mathcal{N}_U^{(n)}$, which we denote by $\\eta^{[n]}$. Note that\nthe direct summand $s_0 = e$ of~\\eqref{eq:M-n-U-defn} can be\nidentified with the diagonal part of $\\mathcal{N}_U^{(n)}$, and $\\eta^{[n]}$\nsimply becomes the component of $\\eta^{(n)}$ in this summand. If\n$\\eta$ is $P_\\mu$-harmonic, the sequence $\\{\\eta^{[n]}\\}_n$ forms a\nmartingale and defines an element $\\eta^{[\\infty]} \\in\n\\mathcal{N}_U$.\n\n\\begin{proposition}\n\\label{prop:p-bdry-as-rel-comm-LR0} For every object $U$ in $\\mathcal{C}$, the\nmap $\\eta \\mapsto\\eta^{[\\infty]}$ defines an isomorphism of von\nNeumann algebras $\\End{\\mathcal{P}}{U}\\cong \\mathcal{N}'\\cap\\mathcal{N}_{U}$.\n\\end{proposition}\n\n\\begin{proof} If $\\eta$ is a harmonic element in $\\hat\\mathcal{C}(U)$, the\nnaturality implies that the elements $\\eta_{V^{\\otimes m}}$ commute\nwith the image of $\\End{\\mathcal{C}}{V^{\\otimes n}}$ for $m \\ge n$. Thus,\n$\\eta^{[\\infty]} = \\lim_m \\eta_{V^{\\otimes m}}$ is in the relative\ncommutant. 
Since~$\\mu$ is generating, it is also clear that the map\n$\\eta \\mapsto \\eta^{[\\infty]}$ is injective.\n\nTo construct the inverse map, take an element\n$x\\in\\mathcal{N}'\\cap\\mathcal{N}_{U}$. Then $x_n = \\mathcal{E}_n(x)$ is an element of\n$(\\mathcal{N}^{(n)})' \\cap \\mathcal{N}_{U}^{(n)}$. Hence, for every $n\\ge1$ and\n$s\\in\\supp\\mu^n$, there is a morphism $x_{n, s} \\in \\End{\\mathcal{C}}{U_s\n\\otimes U}$ such that $x_n$ is the direct sum of the $x_{n, s}$ (with\nmultiplicities). It follows that we can choose $\\eta(n)\\in \\hat\\mathcal{C}(U)$\nsuch that $\\|\\eta(n)\\|\\le\\|x\\|$ and $x_n=\\eta(n)^{[n]}$. The elements\n$\\eta(n)$ are not uniquely determined; only their components\ncorresponding to $s\\in\\supp\\mu^n$ are. The identity $\\mathcal{E}_{n+1,\nn}(x_{n+1}) = x_n$ translates into $P_\\mu(\\eta(n+1))_s=\\eta(n)_s$ for\n$s\\in\\supp\\mu^n$.\n\nWe now define an element $\\eta\\in\\Endb(U)$ by letting\n$$\n\\eta_s=\\eta(n)_s\\ \\ \\text{if}\\ \\ s\\in\\supp\\mu^n\\ \\ \\text{for some}\\ \\\nn\\ge1.\n$$\nIn order to see that this definition is unambiguous, assume\n$s\\in(\\supp\\mu^n)\\cap(\\supp\\mu^{n+k})$ for some~$n$ and~$k$. Then by\nthe $0$-$2$ law, see~\\cite{MR2034922}*{Proposition~2.12}, we have\n$\\|P^m_\\mu-P^{m+k}_\\mu\\|\\to0$ as $m\\to\\infty$. Since the sequence\n$\\{\\eta(m)\\}_m$ is bounded and we have\n$\\eta(n)_s=P^{m+k}_\\mu(\\eta(n+m+k))_s$ and\n$\\eta(n+k)_s=P^{m}_\\mu(\\eta(n+m+k))_s$, letting $m\\to\\infty$ we\nconclude that $\\eta(n)_s=\\eta(n+k)_s$. Hence $\\eta$ is well-defined,\n$P_\\mu$-harmonic, and $x_n=\\eta^{[n]}$. Therefore $x=\\eta^{[\\infty]}$.\n\nThe linear isomorphism $\\mathcal{P}(U)\\to\\mathcal{N}'\\cap\\mathcal{N}_U$ and its inverse that\nwe have constructed are unital and completely positive, hence they\nare isomorphisms of von Neumann algebras. 
\\end{proof}\n\nAs in the case of Proposition~\\ref{prop:lr-emb}, the linear\nisomorphism $\\mathcal{P}(U)\\cong\\mathcal{N}'\\cap\\mathcal{N}_U$ could be used to give an\nalternative proof of Proposition~\\ref{pproduct}, at least for\ngenerating measures.\n\n\\smallskip\n\nApplying Proposition~\\ref{prop:p-bdry-as-rel-comm-LR0} to $U=\\mathds{1}$ we\nget the following.\n\n\\begin{corollary}\\label{cor:ergod-factor} The von Neumann algebra\n$\\mathcal{N}$ is a factor if and only if $\\mu$ is ergodic.\n\\end{corollary}\n\nUnder a mildly stronger assumption on the measure we can prove a\nbetter result than Proposition~\\ref{prop:p-bdry-as-rel-comm-LR0},\nwhich will be important later.\n\n\\begin{proposition}\n\\label{prop:p-bdry-as-rel-comm-LR} Assume that for any\n$s,t\\in\\Irr(\\mathcal{C})$ there exists $n\\ge0$ such\nthat $$\\supp(\\mu^n*\\delta_s)\\cap\\supp(\\mu^n*\\delta_t)\\ne\\emptyset.$$\nThen for any objects $U$ and $U'$ in $\\mathcal{C}$, the map $\\eta \\mapsto\n(\\iota_{U'} \\otimes \\eta)^{[\\infty]}$ defines an isomorphism of von\nNeumann algebras $\\End{\\mathcal{P}}{U}\\cong \\mathcal{N}_{U'}'\\cap\\mathcal{N}_{U' \\otimes\nU}$.\n\\end{proposition}\n\n\\begin{proof} That we get a map $\\End{\\mathcal{P}}{U}\\to \\mathcal{N}_{U'}'\\cap\\mathcal{N}_{U' \\otimes\nU}$ does not require any assumptions on $\\mu$ and is easy to see: if\n$\\eta$ is a harmonic element in $\\hat\\mathcal{C}(U)$, the\nnaturality implies that the elements $\\eta_{V^{\\otimes m} \\otimes U'}$\ncommute with $\\End{\\mathcal{C}}{V^{\\otimes n} \\otimes U'}$ for $m \\ge n$, and\nhence $(\\iota_{U'} \\otimes \\eta)^{[\\infty]} = \\lim_m \\eta_{V^{\\otimes\nm} \\otimes U'}$ lies in $ \\mathcal{N}_{U'}'\\cap\\mathcal{N}_{U' \\otimes U}$.\n\nTo construct the inverse map assume first $U'=U_t$ for some $t$. Take\n$x\\in \\mathcal{N}_{U'}'\\cap\\mathcal{N}_{U' \\otimes U}$. 
Similarly to the proof of\nProposition~\\ref{prop:p-bdry-as-rel-comm-LR0} we can find elements\n$\\eta(n)\\in\\hat{\\mathcal{C}}(U)$ such that $\\|\\eta(n)\\|\\le\\|x\\|$ and\n$\\mathcal{E}_n(x)=(\\iota_{U'}\\otimes\\eta(n))^{[n]}$. The identity $\\mathcal{E}_{n+1,\nn}(x_{n+1}) = x_n$, where $x_n=\\mathcal{E}_n(x)$, means now that\n$P_\\mu(\\eta(n+1))_s=\\eta(n)_s$ for\n$s\\in\\supp(\\mu^n*\\delta_t)$. We want to define an element\n$\\eta\\in\\hat\\mathcal{C}(U)$ by\n$$\n\\eta_s=\\eta(n)_s\\ \\ \\text{if}\\ \\ s\\in\\supp(\\mu^n*\\delta_t)\\ \\\n\\text{for some}\\ \\ n\\ge1.\n$$\nAs in the proof of Proposition~\\ref{prop:p-bdry-as-rel-comm-LR0}, in\norder to see that $\\eta$ is well-defined, it suffices to show that if\n$s\\in\\supp(\\mu^n*\\delta_t)\\cap\\supp(\\mu^{n+k}*\\delta_t)$ for some~$n$\nand~$k$, then $\\|P^m_\\mu-P^{m+k}_\\mu\\|\\to0$ as $m\\to\\infty$. Since\n$\\mu$ is assumed to be generating, there exists $l$ such that\n$t\\in\\supp\\mu^l$. But then\n$$\ns\\in (\\supp\\mu^{n+l})\\cap(\\supp\\mu^{n+l+k}),\n$$\nso the convergence $\\|P^m_\\mu-P^{m+k}_\\mu\\|\\to0$ indeed holds by the\n$0$-$2$ law. This finishes the proof of the proposition for $U'=U_t$,\nand we see that no assumption in addition to the generating property\nof $\\mu$ is needed in this case.\n\n\\smallskip\n\nConsider now an arbitrary $U'$. Decompose $U'$ into a direct sum of\nsimple objects:\n$$\nU'\\cong U_{s_1}\\oplus\\dots\\oplus U_{s_n}.\n$$\nDenote by $p_i\\in\\mathcal{C}(U')$ the corresponding projections. Then the\ninclusion $p_i\\mathcal{N}_{U'}p_i\\subset p_i\\mathcal{N}_{U'\\otimes U}p_i$ can be\nidentified with $\\mathcal{N}_{U_{s_i}}\\subset \\mathcal{N}_{U_{s_i}\\otimes U}$.\n\nTake $x\\in \\mathcal{N}_{U'}'\\cap\\mathcal{N}_{U'\\otimes U}$. Then $x$ commutes with\n$p_i$. Since the element $xp_i$ lies in\n$\\mathcal{N}_{U_{s_i}}'\\cap\\mathcal{N}_{U_{s_i}\\otimes U}$, it is defined by a\n$P_\\mu$-harmonic element $\\eta(i)\\in\\hat{\\mathcal{C}}(U)$. 
In terms\nof these elements the condition that $\\mathcal{E}_n(x)$ commutes with\n$\\mathcal{C}(V^{\\otimes n}\\otimes U')$ means that $\\eta(i)_s=\\eta(j)_s$\nwhenever $s\\in\\supp(\\mu^n*\\delta_{s_i})\\cap\\supp(\\mu^n*\\delta_{s_j})$,\nwhile to finish the proof we need the equality $\\eta(i)=\\eta(j)$.\n\nFix $s\\in\\Irr(\\mathcal{C})$ and indices $i$ and $j$. By assumption there\nexists $t\\in\\supp(\\mu^n*\\delta_{s_i})\\cap\\supp(\\mu^n*\\delta_{s_j})$\nfor some $n$. Since $\\mu$ is generating, there exists $m$ such that\n$s\\in\\supp(\\mu^m*\\delta_t)$. Then\n$$\ns\\in \\supp(\\mu^{m+n}*\\delta_{s_i})\\cap\\supp(\\mu^{m+n}*\\delta_{s_j}),\n$$\nand therefore $\\eta(i)_s=\\eta(j)_s$. \\end{proof}\n\nNote that the proof shows that the additional assumption on the\nmeasure is not only sufficient but is also necessary for the result to\nbe true. Even for symmetric ergodic measures this condition does not\nalways hold: take the random walk on $\\mathbb{Z}$ defined by the measure\n$\\mu=2^{-1}(\\delta_{-1}+\\delta_1)$. At the same time this condition is\nsatisfied, for example, for any generating measure $\\mu$ with\n$\\mu(e)>0$. Indeed, for such a measure we can find $n$ such that\n$s\\in\\supp(\\mu^n*\\delta_t)$, and then $s\\in\\supp(\\mu^n*\\delta_s)\\cap\n\\supp(\\mu^n*\\delta_t)$.\n\n\\smallskip\n\nApplying the proposition to $U=\\mathds{1}$ we get the following result.\n\n\n\\begin{corollary}\\label{cor:engod-factor} Assume $\\mu$ is ergodic and\nsatisfies the assumption of\nProposition~\\ref{prop:p-bdry-as-rel-comm-LR}. Then $\\mathcal{N}_U$ is a\nfactor for every object $U$ in $\\mathcal{C}$.\n\\end{corollary}\n\n\\begin{remark}\\label{rmultiplicity} It is sometimes convenient to\nconsider slightly more general constructions allowing\nmultiplicities. Namely, instead of $V=\\oplus_{s\\in\\supp\\mu}U_s$ we\ncould take $V=\\oplus_{i\\in I}U_{s_i}$, where $(s_i)_{i\\in I}$ is any\nfinite or countable collection of elements running through\n$\\supp\\mu$. 
For the state on $\\mathcal{C}(V)$ we could take\n$\\mathcal{C}(U_{s_i},U_{s_j})\\ni T \\mapsto\\delta_{ij}\\lambda_i\\tr_{s_i}(T)$,\nwhere $\\lambda_i>0$ are any numbers such that $\\sum_{i\\colon\ns_i=s}\\lambda_i=\\mu(s)$ for all $s\\in\\supp\\mu$. All the above results\nwould remain true, with essentially identical proofs.\n\\end{remark}\n\n\\subsection{Relative commutants: Hayashi--Yamagami\n approach} \\label{sHY}\n\nWe will now explain a modification of the Izumi--Longo--Roberts\nconstruction due to Hayashi and Yamagami~\\cite{MR1749868}. Its\nadvantage is that, at the expense of introducing an extra variable in\na II$_1$ factor, we can stay in the framework of finite von Neumann\nalgebras.\n\n\\smallskip\n\nWe continue to assume that $\\mu$ is generating. We will use a slightly\ndifferent notation compared to~\\cite{MR1749868} to be more consistent\nwith the previous sections.\n\nLet $\\mathcal{R}$ be the hyperfinite II$_1$-factor and $\\tau$ be the\nunique normal tracial state on $\\mathcal{R}$. Choose a partition of\nunity by projections $(e_s)_{s \\in \\supp \\mu}$ in $\\mathcal{R}$ which satisfy\n$$\n\\tau(e_s) = \\frac{\\mu(s)}{c d(s)},\\ \\ \\text{where}\\ \\\nc=\\sum_{s\\in\\supp\\mu}\\frac{\\mu(s)}{d(s)}.\n$$\nWhen $(s_n, \\ldots, s_1) \\in (\\supp \\mu)^n$, we write $e_{s_*} =\ne_{s_n} \\otimes \\cdots \\otimes e_{s_1} \\in \\mathcal{R}^{\\otimes\nn}$. As in Section~\\ref{sec:longo-roberts-appr}, put\n$V=\\oplus_{s\\in\\supp\\mu}U_s$. 
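\n\nNote that the normalization constant $c$ is chosen precisely so that\nthe traces of the projections $e_s$ sum up to one: since\n$c\\,d(s)\\tau(e_s)=\\mu(s)$, we have\n$$\n\\sum_{s\\in\\supp\\mu}\\tau(e_s)=\\frac{1}{c}\\sum_{s\\in\\supp\\mu}\\frac{\\mu(s)}{d(s)}=1,\n$$\nso such a partition of unity indeed exists in $\\mathcal{R}$.\n\n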
Now, for a fixed object $U$ in $\\mathcal{C}$,\ninstead of the algebra $\\mathcal{C}(V^{\\otimes n}\\otimes U)$ used there,\nconsider the algebra\n$$\n\\tilde\\mathcal{C}(V^{\\otimes n}\\otimes U)=\\bigoplus_{s_*, s'_* \\in (\\supp\n\\mu)^n} \\mathcal{C}(U_{s_n} \\otimes \\cdots \\otimes U_{s_1} \\otimes U, U_{s'_n}\n\\otimes \\cdots \\otimes U_{s'_1} \\otimes U) \\otimes e_{s'_*}\n\\mathcal{R}^{\\otimes n} e_{s_*}.\n$$\nIt carries a tracial state $\\tau^{(n)}_U$ defined by\n$$\n\\tau^{(n)}_U(T\\otimes x)=\\delta_{s_*,s'_*}c^nd(s_1)\\dots\nd(s_n)\\tr_{U_{s_n} \\otimes\\dots\\otimes U_{s_1}\\otimes\nU}(T)\\tau^{\\otimes n}(x)\n$$\nfor $T\\otimes x\\in \\mathcal{C}(U_{s_n} \\otimes \\cdots \\otimes U_{s_1} \\otimes\nU, U_{s'_n} \\otimes \\cdots \\otimes U_{s'_1} \\otimes U) \\otimes\ne_{s'_*} \\mathcal{R}^{\\otimes n} e_{s_*}$. Let $\\mathcal{A}^{(n)}_U$ be the von\nNeumann algebra generated by $\\tilde\\mathcal{C}(V^{\\otimes n}\\otimes U)$ in\nthe GNS-representation defined by $\\tau^{(n)}_U$. These algebras form\nan inductive system under the embeddings\n$$\n\\mathcal{A}^{(n)}_U\\hookrightarrow\\mathcal{A}^{(n+1)}_U,\\ \\ T\\otimes x\\mapsto\n\\sum_{s\\in\\supp\\mu}(\\iota_s\\otimes T)\\otimes(e_s\\otimes x).\n$$\nPassing to the limit we get a von Neumann algebra $\\mathcal{A}_U$ equipped with\na faithful tracial state $\\tau_U$. We write $\\mathcal{A}$ for $\\mathcal{A}_\\mathds{1}$.\n\nGiven $\\eta\\in\\hat{\\mathcal{C}}(U)$, consider the elements\n$$\n\\eta^{\\{n\\}}=\\sum_{s_*\\in(\\supp \\mu)^n} \\eta_{U_{s_n} \\otimes \\cdots\n\\otimes U_{s_1}}\\otimes e_{s_*}\\in\\mathcal{A}^{(n)}_U.\n$$\nIf $\\eta$ is $P_\\mu$-harmonic, then the sequence $\\{\\eta^{\\{n\\}}\\}_n$\nforms a martingale with respect to the $\\tau_U$-preserving conditional\nexpectations $\\mathcal{E}_n\\colon\\mathcal{A}_U\\to\\mathcal{A}^{(n)}_U$. Denote its limit by\n$\\eta^{\\{\\infty\\}}$. 
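\n\nLet us spell out the martingale property, which is the analogue of\nthe classical fact that a harmonic function evaluated along a Markov\nchain is a martingale. Since $c\\,d(s)\\tau(e_s)=\\mu(s)$, a direct\ncomputation shows that\n$$\n\\mathcal{E}_n(\\eta^{\\{n+1\\}})=\\sum_{s_*\\in(\\supp \\mu)^n}\\Big(\\sum_{s\\in\\supp\\mu}\\mu(s)(\\tr_{U_s}\\otimes\\iota)(\\eta_{U_s\\otimes\nU_{s_n} \\otimes \\cdots \\otimes U_{s_1}})\\Big)\\otimes e_{s_*}=(P_\\mu(\\eta))^{\\{n\\}},\n$$\nwhich equals $\\eta^{\\{n\\}}$ exactly when $\\eta$ is $P_\\mu$-harmonic.\n\n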
Then we get the following analogues of\nPropositions~\\ref{prop:p-bdry-as-rel-comm-LR0}\nand~\\ref{prop:p-bdry-as-rel-comm-LR}, with almost identical proofs,\nwhich we omit.\n\n\\begin{proposition}\\label{prop:PP-equals-HY} For every object $U$ in\n$\\mathcal{C}$, the map $\\eta \\mapsto\\eta^{\\{\\infty\\}}$ defines an isomorphism\nof von Neumann algebras $\\End{\\mathcal{P}}{U}\\cong \\mathcal{A}'\\cap\\mathcal{A}_{U}$. If in\naddition to the generating property the measure $\\mu$ satisfies the\nassumption of Proposition~\\ref{prop:p-bdry-as-rel-comm-LR}, then also\nthe map $\\eta \\mapsto(\\iota_{U'}\\otimes\\eta)^{\\{\\infty\\}}$ defines an\nisomorphism of von Neumann algebras $\\End{\\mathcal{P}}{U}\\cong\n\\mathcal{A}_{U'}'\\cap\\mathcal{A}_{U'\\otimes U}$ for any object $U'$.\n\\end{proposition}\n\nThe work of Hayashi and Yamagami contains much more than the\nconstruction of the algebras $\\mathcal{A}_U$ and, in fact, allows us to\ndescribe, under mild additional assumptions on $\\mu$, not only the\nmorphisms but the entire Poisson boundary $\\Pi\\colon\\mathcal{C}\\to\\mathcal{P}$ in\nterms of Hilbert bimodules over~$\\mathcal{A}$.\n\nFor objects $X$ and $Y$ consider their direct sum $X\\oplus Y$, and\ndenote by $p_X,p_Y\\in\\mathcal{C}(X\\oplus Y)$ the corresponding projections. We\ncan consider $p_X$ and $p_Y$ as projections in $\\mathcal{A}_{X\\oplus Y}$, then\n$p_X(\\mathcal{A}_{X\\oplus Y})p_X \\cong \\mathcal{A}_X$ and $p_Y(\\mathcal{A}_{X\\oplus Y})p_Y \\cong\n\\mathcal{A}_Y$. 
Put\n$$\n\\mathcal{A}_{X,Y}=p_Y(\\mathcal{A}_{X\\oplus Y})p_X.\n$$\nThe $\\mathcal{A}_Y$-$\\mathcal{A}_X$-module $\\mathcal{A}_{X,Y}$ can be described as an inductive\nlimit of completions of the spaces\n$$\n\\tilde\\mathcal{C}(V^{\\otimes n}\\otimes X,V^{\\otimes n}\\otimes\nY)=\\bigoplus_{s_*, s'_* \\in (\\supp \\mu)^n} \\mathcal{C}(U_{s_n} \\otimes \\cdots\n\\otimes U_{s_1} \\otimes X, U_{s'_n} \\otimes \\cdots \\otimes U_{s'_1}\n\\otimes Y) \\otimes e_{s'_*} \\mathcal{R}^{\\otimes n} e_{s_*}.\n$$\nDenote by $\\mathcal{H}_X$ the Hilbert space completion of $\\mathcal{A}_{\\mathds{1},X}$ with\nrespect to the scalar product $$(x,y)=\\tau_\\mathds{1}(y^*x).$$ Then $\\mathcal{H}_X$\nis a Hilbert $\\mathcal{A}_X$-$\\mathcal{A}$-module (it is denoted by $X_\\infty$\nin~\\cite{MR1749868}). Viewing $\\mathcal{H}_X$ as a Hilbert bimodule over $\\mathcal{A}$,\nwe get a unitary functor $F$ from $\\mathcal{C}$ into the category $\\mathrm{Hilb}_\\mathcal{A}$\nof Hilbert bimodules over~$\\mathcal{A}$ such that $F(U)=\\mathcal{H}_U$ on objects and\nin the obvious way on morphisms in $\\mathcal{C}$. We want to make~$F$ into a\ntensor functor. By the computation on pp.~40--41 of~\\cite{MR1749868}\nthe map\n$$\n\\tilde\\mathcal{C}(V^{\\otimes n},V^{\\otimes n}\\otimes X)\\otimes\n\\tilde\\mathcal{C}(V^{\\otimes n},V^{\\otimes n}\\otimes Y)\\to\\tilde\n\\mathcal{C}(V^{\\otimes n},V^{\\otimes n}\\otimes X\\otimes Y),\n$$\n$$\n(S \\otimes a)\\otimes (T\\otimes b)\\mapsto (S\\otimes\\iota_Y)T \\otimes\nab,\n$$\ndefines an isometry\n$$\nF_2(X,Y)\\colon \\mathcal{H}_X\\otimes_\\mathcal{A}\\mathcal{H}_Y\\to\\mathcal{H}_{X\\otimes Y}.\n$$\n\n\\begin{lemma} \\label{lcondmeas} Assume that for every $s\\in\\Irr(\\mathcal{C})$\nwe have\n$$\n(\\mu^n*\\delta_s)(\\supp\\mu^n)\\to1\\ \\ \\text{as}\\ \\ n\\to\\infty.\n$$\nThen the maps $F_2(X,Y)$ are unitary.\n\\end{lemma}\n\n\\begin{proof} It suffices to prove the lemma for simple objects. Assume $X=U_s$\nfor some $s$. 
For every $n\\ge1$ and $s_*\\in(\\supp\\mu)^n$, let\n$p_{s_*}^{(n)}\\in\\mathcal{C}(U_{s_n}\\otimes\\dots\\otimes U_{s_1}\\otimes X)$ be\nthe projection onto the direct sum of the isotypic components\ncorresponding to $U_t$ for some $t\\in\\supp\\mu^n$. Put\n$$\np^{(n)}=\\sum_{s_* \\in (\\supp \\mu)^n}p^{(n)}_{s_*}\\otimes e_{s_*}\\in\\mathcal{A}^{(n)}_{U_s}.\n$$\nThen $\\tau_X(p^{(n)})=(\\mu^n*\\delta_s)(\\supp\\mu^n)$. Therefore by\nassumption $p^{(n)}\\to1$ in the $s^*$-topology. It follows that to\nprove the lemma it suffices to show that if\n$$\nT\\otimes x\\in \\mathcal{C}(U_{s_n} \\otimes \\cdots \\otimes U_{s_1}, U_{s'_n}\n\\otimes \\cdots \\otimes U_{s'_1} \\otimes X\\otimes Y) \\otimes e_{s'_*}\n\\mathcal{R}^{\\otimes n} e_{s_*}\n$$\nis such that $p^{(n)}(T\\otimes x)=T\\otimes x$, then $T\\otimes x$ is in\nthe image of $F_2(X,Y)$. The assumption on $T$ means that the simple\nobjects appearing in the decomposition of $U_{s'_n} \\otimes \\cdots\n\\otimes U_{s'_1} \\otimes X$ appear also in the decomposition of\n$U_{t_n}\\otimes\\dots\\otimes U_{t_1}$ for $t_*\\in(\\supp\\mu)^n$. This\nimplies that $T$ can be written as a finite direct sum of morphisms of\nthe form $(S\\otimes\\iota_Y)R$, with $R\\in\\mathcal{C}(U_{s_n} \\otimes \\cdots\n\\otimes U_{s_1},U_{t_n}\\otimes\\dots\\otimes U_{t_1}\\otimes Y)$ and\n$S\\in\\mathcal{C}(U_{t_n} \\otimes \\cdots \\otimes\nU_{t_1},U_{s'_n}\\otimes\\dots\\otimes U_{s'_1}\\otimes Y)$. Since we also\nhave density of $e_{s'_*}\\mathcal{R} e_{t_*}\\mathcal{R} e_{s_*}$ in $e_{s'_*}\\mathcal{R}\ne_{s_*}$, this proves the lemma. \\end{proof}\n\nWe remark that the assumption of the lemma is obviously satisfied if\n$\\supp\\mu=\\Irr(\\mathcal{C})$. 
It is also satisfied if $\\mu$ is ergodic and\n$\\mu(e)>0$, since then $\\|\\mu^n*\\delta_s-\\mu^n\\|_1\\to0$ by\n\\cite{MR1644299}*{Proposition~3.3}.\n\n\\smallskip\n\nOnce the maps $F_2(X,Y)$ are unitary, it is easy to see that $(F,F_2)$\nis a unitary tensor functor $\\mathcal{C}\\to\\mathrm{Hilb}_\\mathcal{A}$.\n\n\\begin{proposition} \\label{pHYrealization} Assume the measure $\\mu$\nsatisfies the assumption of Lemma~\\ref{lcondmeas}. Let $\\mathcal{B}$ be the\nfull C$^*$-tensor subcategory of $\\mathrm{Hilb}_\\mathcal{A}$ generated by the image of\n$F\\colon\\mathcal{C}\\to\\mathrm{Hilb}_\\mathcal{A}$. Then the Poisson boundary\n$\\Pi\\colon\\mathcal{C}\\to\\mathcal{P}$ of $(\\mathcal{C},\\mu)$ is isomorphic to\n$F\\colon\\mathcal{C}\\to\\mathcal{B}$.\n\\end{proposition}\n\n\\begin{proof} The functor $F$ extends to the full subcategory $\\tilde\\mathcal{P}$ of\n$\\mathcal{P}$ formed by the objects of $\\mathcal{C}$ using the isomorphisms\n$\\mathcal{P}(U)\\cong\\mathcal{A}'\\cap\\mathcal{A}_U$. It follows immediately by definition that\nthis way we get a unitary tensor functor $E\\colon\\tilde\\mathcal{P}\\to\\mathcal{B}$ if\nwe put $E_2(X,Y)=F_2(X,Y)$. We then extend this functor to a unitary\ntensor functor $\\mathcal{P}\\to\\mathcal{B}$, which we continue to denote by $E$. To\nprove the proposition it remains to show that $E$ is fully\nfaithful. In other words, we have to show that the left action of\n$\\mathcal{A}_U$ on $\\mathcal{H}_U$ defines an isomorphism\n$\\mathcal{A}'\\cap\\mathcal{A}_U\\cong\\Endd_{\\mathcal{A}\\mhyph\\mathcal{A}}(\\mathcal{H}_U)$.\n\nLet us check the stronger statement that the left action defines an\nisomorphism $\\mathcal{A}_U\\cong\\Endd_{\\,\\mhyph\\mathcal{A}}(\\mathcal{H}_U)$. 
Recalling how $\\mathcal{H}_U$ was\nconstructed using complementary projections in $\\mathcal{A}_{\\mathds{1}\\oplus U}$, it\nbecomes clear that the map $\\mathcal{A}_U\\to\\Endd_{\\,\\mhyph\\mathcal{A}}(\\mathcal{H}_U)$ is always\nsurjective, and it is injective if and only if the projection\n$p_\\mathds{1}\\in\\mathcal{A}_{\\mathds{1}\\oplus U}$ has central support $1$. Using the\nFrobenius reciprocity isomorphism\n$$\n\\mathcal{C}(V^{\\otimes n}\\otimes U)\\cong \\mathcal{C}(V^{\\otimes n},V^{\\otimes\nn}\\otimes U\\otimes\\bar U),\n$$\nit is easy to check that $\\mathcal{H}_{U\\otimes\\bar U}\\cong L^2(\\mathcal{A}_U,\\tau_U)$\nas a Hilbert $\\mathcal{A}_U$-$\\mathcal{A}$-module. Hence the representation of~$\\mathcal{A}_U$ on\n$\\mathcal{H}_{U\\otimes\\bar U}$ is faithful. Since $\\mathcal{H}_{U\\otimes\\bar U}\\cong\n\\mathcal{H}_U\\otimes_\\mathcal{A}\\mathcal{H}_{\\bar U}$, it follows that the representation of\n$\\mathcal{A}_U$ on~$\\mathcal{H}_U$ is faithful as well. \\end{proof}\n\nA similar result could also be proved using the algebras $\\mathcal{N}_U$ from\nSection~\\ref{sec:longo-roberts-appr} instead of $\\mathcal{A}_U$. The situation\nwould be marginally more complicated, since in dealing with the Connes\nfusion tensor product~$\\otimes_\\mathcal{N}$ we would have to take into\naccount the modular group of $\\omega_U$. We are not going to pursue\nthis topic here, although it could provide an alternative\nroute to Proposition~\\ref{pminindex} below.\n\n\\bigskip\n\n\\section{A universal property of the Poisson\nboundary}\\label{suniversal}\n\nLet $\\mathcal{C}$ be a weakly amenable strict C$^*$-tensor category. Fix an\nergodic probability measure $\\mu$ on~$\\Irr(\\mathcal{C})$. Recall that such a\nmeasure exists by Proposition~\\ref{pweakamen}. 
Let\n$\\Pi\\colon\\mathcal{C}\\to\\mathcal{P}$ be the Poisson boundary of~$(\\mathcal{C},\\mu)$.\n\nFor an object $U$ in $\\mathcal{C}$ define\n$$\nd_{\\mathrm{min}}^\\mathcal{C}(U)=\\inf d^\\mathcal{A}(F(U)),\n$$\nwhere the infimum is taken over all unitary tensor functors\n$F\\colon\\mathcal{C}\\to\\mathcal{A}$. We will show in the next section that $d_{\\mathrm{min}}^\\mathcal{C}$\nis the amenable dimension function on $\\mathcal{C}$. The goal of the present\nsection is to prove the following.\n\n\\begin{theorem} \\label{tuniversal1} The Poisson boundary\n$\\Pi\\colon\\mathcal{C}\\to\\mathcal{P}$ is a universal unitary tensor functor such that\n$d_{\\mathrm{min}}^\\mathcal{C}=d^\\mathcal{P}\\Pi$.\n\\end{theorem}\n\nIn other words, $d_{\\mathrm{min}}^\\mathcal{C}=d^\\mathcal{P}\\Pi$ and for any unitary tensor\nfunctor $F\\colon\\mathcal{C}\\to\\mathcal{A}$ such that $d_{\\mathrm{min}}^\\mathcal{C}=d^\\mathcal{A} F$ there exists a\nunique, up to a natural unitary monoidal isomorphism, unitary tensor\nfunctor $\\Lambda\\colon\\mathcal{P}\\to\\mathcal{A}$ such that $\\Lambda\\Pi\\cong F$.\n\n\\smallskip\n\nConsider a unitary tensor functor $F\\colon\\mathcal{C}\\to\\mathcal{A}$, with no\nrestriction on the dimension function. As we discussed in\nSection~\\ref{sstensorfunctors}, we may assume that $\\mathcal{A}$ is strict,\n$\\mathcal{C}$ is a C$^*$-tensor subcategory of $\\mathcal{A}$ and $F$ is the embedding\nfunctor. Motivated by Izumi's Poisson integral~\\cite{MR1916370} we\nwill define linear maps\n$$\n\\Theta_{U,V}\\colon\\mathcal{A}(U,V)\\to\\mathcal{P}(U,V).\n$$\nWe will write $\\Theta_U$ for $\\Theta_{U,U}$ and often omit the\nsubscripts altogether, if there is no danger of confusion. The proof\nof the theorem will be based on analysis of the multiplicative domain\nof $\\Theta$.\n\n\\smallskip\n\nFor every object $U$ in $\\mathcal{C}$ fix a standard solution $(R_U,\\bar R_U)$\nof the conjugate equations in $\\mathcal{C}$. 
Define a faithful state $\\psi_U$\non $\\End{\\mathcal{A}}{U}$ by\n$$\n\\psi_U(T)=||\\bar R_U||^{-2}\\bar R_U^*(T\\otimes\\iota)\\bar R_U.\n$$\nSince any other standard solution has the form\n$((u\\otimes\\iota)R_U,(\\iota\\otimes u)\\bar R_U)$ for a unitary $u$,\nthis definition is independent of any choices. More generally, we can\ndefine in a similar way ``slice maps''\n$$\n\\iota\\otimes\\psi_V\\colon \\End{\\mathcal{A}}{U\\otimes V}\\to \\End{\\mathcal{A}}{U}.\n$$\nThen, since $((\\iota\\otimes R_U\\otimes \\iota)R_V,(\\iota\\otimes \\bar\nR_V\\otimes \\iota)\\bar R_U)$ is a standard solution for $U\\otimes V$,\nwe get\n\\begin{equation} \\label{eslice1} \\psi_{U\\otimes V}=\\psi_U(\\iota\\otimes\n\\psi_V).\n\\end{equation}\n\nBy definition the state $\\psi_U$ extends the trace $\\tr_U$ on\n$\\End{\\mathcal{C}}{U}$.\n\n\\begin{lemma} The subalgebra $\\End{\\mathcal{C}}{U}\\subset \\End{\\mathcal{A}}{U}$ is\ncontained in the centralizer of the state $\\psi_U$.\n\\end{lemma}\n\n\\begin{proof} If $u$ is a unitary in $\\mathcal{C}(U)$, then the state $\\psi_U(u\\cdot u^*)$\nis defined similarly to $\\psi_U$, but using the solution\n$((\\iota\\otimes u^*)R_U,(u^*\\otimes\\iota)\\bar R_U)$ of the conjugate\nequations for $U$. Since $\\psi_U$ is independent of the choice of\nstandard solutions, it follows that $\\psi_U(u\\cdot u^*)=\\psi_U$. But\nthis exactly means that $\\mathcal{C}(U)$ is contained in the centralizer of\n$\\psi_U$. \\end{proof}\n\nIt follows that there exists a unique $\\psi_U$-preserving conditional\nexpectation $E_U\\colon \\End{\\mathcal{A}}{U}\\to \\End{\\mathcal{C}}{U}$.\nFor objects $U$ and $V$ we can consider $\\mathcal{A}(U,V)$ as a subspace of\n$\\End{\\mathcal{A}}{U\\oplus V}$. 
Then $E_{U\\oplus V}$ defines a linear map\n$$\nE_{U,V}\\colon\\mathcal{A}(U,V)\\to\\mathcal{C}(U,V).\n$$\nAgain, we omit the subscripts when convenient.\n\n\\begin{lemma} \\label{lcondexp1} The maps $E_{U,V}$ satisfy the\nfollowing properties:\n\\begin{enumerate}\n\\item $E_{U,V}(T)^*=E_{V,U}(T^*)$;\n\\item if $T\\in\\mathcal{A}(U,V)$ and $S\\in\\mathcal{C}(V,W)$, then\n$E_{U,W}(ST)=SE_{U,V}(T)$;\n\\item for any object $X$ in $\\mathcal{C}$ we have $E_{U\\otimes\nX,V\\otimes X}(T\\otimes\\iota_X)=E_{U,V}(T)\\otimes\\iota_X$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof} Properties (i) and (ii) follow immediately from the corresponding\nproperties of conditional expectations. To prove (iii), it suffices to\nconsider the case $U=V$. Take $S\\in\\End{\\mathcal{C}}{U\\otimes X}$. Then we\nhave to check that\n$$\n\\psi_{U\\otimes X}(S(T\\otimes\\iota))=\\psi_{U\\otimes X}(S(E(T)\\otimes\n\\iota)).\n$$\nThis follows from \\eqref{eslice1} and the fact that by definition we\nhave $(\\iota\\otimes\\psi_X)(S)\\in\\End{\\mathcal{C}}{U}$. \\end{proof}\n\nNow, given a morphism $T\\in \\mathcal{A}(U,V)$, define a bounded natural\ntransformation $\\Theta_{U,V}(T)\\colon\\iota\\otimes U\\to\\iota\\otimes V$\nof functors on $\\mathcal{C}$ by\n$$\n\\Theta_{U,V}(T)_X=E_{X\\otimes U,X\\otimes V}(\\iota_X\\otimes T).\n$$\n\n\\begin{lemma}\\label{luniharmonic} The natural transformation\n$\\Theta_{U,V}(T)$ is $P_X$-harmonic for any object $X$ in $\\mathcal{C}$.\n\\end{lemma}\n\n\\begin{proof} It suffices to consider the case $U=V$. We claim that\n$$\n(\\tr_X\\otimes \\iota)E(\\iota_X\\otimes T)=E(T).\n$$\nIndeed, for any $S\\in \\End{\\mathcal{C}}{U}$ we have\n$$\n\\tr_U\\big(S(\\tr_X\\otimes \\iota)E(\\iota_X\\otimes T)\\big) =\\tr_{X\\otimes\nU}(E(\\iota_X\\otimes ST))\n=\\psi_{X\\otimes U}(\\iota\\otimes ST)\n=\\psi_U(ST)=\\tr_U(SE(T)),\n$$\nwhere in the third equality we used \\eqref{eslice1}. 
This proves the\nclaim.\n\nWe now compute:\n$$\nP_X(\\Theta(T))_Y=(\\tr_X\\otimes\\iota)(\\Theta(T)_{X\\otimes\nY})=(\\tr_X\\otimes\\iota_{Y}\\otimes\\iota_U)(E(\\iota_{X}\\otimes\\iota_Y\\otimes\nT))\n=E(\\iota_Y\\otimes T)=\\Theta(T)_Y,\n$$\nso $\\Theta(T)$ is $P_X$-harmonic. \\end{proof}\n\nIt follows that $\\Theta_{U,V}$ is a well-defined linear map\n$\\mathcal{A}(U,V)\\to \\mathcal{P}(U,V)$.\n\n\\begin{lemma} \\label{ltheta} The maps $\\Theta_{U,V}$ satisfy the\nfollowing properties:\n\\begin{itemize}\n\\item[{\\rm (i)}] $\\Theta_{U,V}(T)^*=\\Theta_{V,U}(T^*)$;\n\\item[{\\rm (ii)}] if $T\\in\\mathcal{A}(U,V)$ and $S\\in\\mathcal{C}(V,W)$, then\n$\\Theta_{U,W}(ST)=S\\Theta_{U,V}(T)$;\n\\item[{\\rm (iii)}] for any object $X$ in $\\mathcal{C}$ we have\n$\\Theta_{U\\otimes X,V\\otimes\nX}(T\\otimes\\iota_X)=\\Theta_{U,V}(T)\\otimes\\iota_X$ and\n$\\Theta_{X\\otimes U,X\\otimes V}(\\iota_X\\otimes T)=\\iota_X\\otimes\n\\Theta_{U,V}(T)$;\n\\item[{\\rm (iv)}] the maps $\\Theta_U\\colon\\End{\\mathcal{A}}{U}\\to\\End{\\mathcal{P}}{U}$\nare unital, completely positive and faithful.\n\\end{itemize}\n\\end{lemma}\n\n\\begin{proof} All these properties are immediate consequences of the definitions\nand the properties of the maps $E_{U,V}$ given in\nLemma~\\ref{lcondexp1}. We would like only to point out that the\nproperty $\\Theta(\\iota\\otimes T)=\\iota\\otimes\\Theta(T)$ follows from\nthe definition of the tensor product in $\\mathcal{P}$; the corresponding\nproperty for the maps $E$ is neither satisfied nor needed. \\end{proof}\n\nOur goal now is to understand the multiplicative domains of the maps\n$\\Theta_U\\colon\\End{\\mathcal{A}}{U}\\to\\End{\\mathcal{P}}{U}$. We will first show that\nthese domains cannot be very large. More precisely, assume we have an\nintermediate C$^*$-tensor category $\\mathcal{C}\\subset\\mathcal{B}\\subset\\mathcal{A}$ such that\n$d^\\mathcal{A}=d^\\mathcal{B}$ on $\\mathcal{C}$. 
For an object $U$ in $\\mathcal{C}$ denote by\n$E^\\mathcal{B}_U\\colon\\End{\\mathcal{A}}{U}\\to\\mathcal{B}(U)$ the conditional expectation\npreserving the categorical trace on $\\mathcal{A}$. Then we have the following\nresult inspired by \\cite{MR2335776}*{Lemma~4.5}.\n\n\\begin{lemma} \\label{lthetafactor} We have $\\Theta_U=\\Theta_U\nE^\\mathcal{B}_U$.\n\\end{lemma}\n\n\\begin{proof} We will first show that a similar property holds for the maps $E$,\nso $E_U=E_UE^\\mathcal{B}_U$.\n\nConsider the normalized categorical trace $\\tr^\\mathcal{A}_U$ on\n$\\End{\\mathcal{A}}{U}$. We have $\\psi_U=\\tr^\\mathcal{A}_U(\\cdot\\, Q)$ for some\n$Q\\in\\End{\\mathcal{A}}{U}$. The identity $E_U=E_UE^\\mathcal{B}_U$ holds if and only if\nthe conditional expectation $E^\\mathcal{B}_U$ is $\\psi_U$-preserving, or\nequivalently, $Q\\in\\mathcal{B}(U)$.\n\nBy assumption we have $d^\\mathcal{A}(U)=d^\\mathcal{B}(U)$ for every object $U$ in\n$\\mathcal{C}$. It follows that a standard solution $(R^\\mathcal{B}_U,\\bar R^\\mathcal{B}_U)$ of\nthe conjugate equations for $U$ and $\\bar U$ in $\\mathcal{B}$ remains standard\nin $\\mathcal{A}$. We have $\\bar R_U=(T\\otimes\\iota)\\bar R^\\mathcal{B}_U$ for a uniquely\ndefined $T\\in\\mathcal{B}(U)$. Then $Q=\\frac{d^\\mathcal{B}(U)}{d^\\mathcal{C}(U)}TT^*\\in\\mathcal{B}(U)$.\n\n\\smallskip\n\nWe also need the simple property $E^\\mathcal{B}_{X\\otimes U}(\\iota_X\\otimes\nT)=\\iota_X\\otimes E^\\mathcal{B}_U(T)$. 
This is proved similarly to\nLemma~\\ref{lcondexp1}(iii), using that $\\tr^\\mathcal{A}_{X\\otimes\nU}=\\tr^\\mathcal{A}_U(\\tr^\\mathcal{A}_X\\otimes\\iota)$ and the fact that $\\tr^\\mathcal{A}$ is\ndefined using standard solutions in $\\mathcal{B}$, so that\n$(\\tr^\\mathcal{A}_X\\otimes\\iota)(\\mathcal{B}(X\\otimes U))\\subset\\mathcal{B}(U)$.\n\n\\smallskip\n\nThe equality $\\Theta_U E^\\mathcal{B}_U=\\Theta_U$ is now immediate:\n$$\n\\Theta E^\\mathcal{B}(T)_X=E(\\iota_X\\otimes E^\\mathcal{B}(T))=E E^\\mathcal{B}(\\iota_X\\otimes T)\n=E(\\iota_X\\otimes T)=\\Theta(T)_X.\n$$\nThis proves the assertion.\n\\end{proof}\n\nSince the completely positive map $\\Theta_U$ is faithful, the\nmultiplicative domain of $\\Theta_U=\\Theta_UE^\\mathcal{B}_U$ is contained in\nthat of $E^\\mathcal{B}_U$, which is exactly $\\mathcal{B}(U)$. Therefore to find this\ndomain we have to consider the smallest possible subcategory that\ncontains $\\mathcal{C}$ and still defines the same dimension function as~$\\mathcal{A}$.\n\n\\begin{lemma} \\label{luniversalau} For every object $U$ in $\\mathcal{C}$ there\nexists a unique positive invertible element $a_U\\in{\\mathcal{A}}(U)$ such that\n$$\n(\\iota\\otimes a_U^{1\/2})R_U\\ \\ \\text{and}\\ \\\n(a^{-1\/2}_U\\otimes\\iota)\\bar R_U\n$$\nform a standard solution of the conjugate equations for $U$ in $\\mathcal{A}$.\n\\end{lemma}\n\n\\begin{proof} We can find an invertible element $T\\in\\End{\\mathcal{A}}{U}$ such that\n$(\\iota\\otimes T)R_U$ and $((T^*)^{-1}\\otimes\\iota)\\bar R_U$ form a\nstandard solution in $\\mathcal{A}$. Then we can take $a_U=T^*T$, since\n$Ta_U^{-1\/2}$ is unitary and hence the morphisms $(\\iota\\otimes\na_U^{1\/2})R_U$ and $(a^{-1\/2}_U\\otimes\\iota)\\bar R_U$ still form a\nstandard solution.\n\nAny other standard solution for $U$ and $\\bar U$ has the form\n$(\\iota\\otimes va_U^{1\/2})R_U$, $(va^{-1\/2}_U\\otimes\\iota)\\bar R_U$\nfor a unitary $v\\in\\End{\\mathcal{A}}{U}$. 
By uniqueness of the polar\ndecomposition the element $va_U^{1\/2}$ is positive only if $v=1$. \\end{proof}\n\nNote that if we replace $(R_U,\\bar R_U)$ by $((\\iota\\otimes\nu)R_U,(u\\otimes\\iota)\\bar R_U)$ for a unitary $u\\in\\End{\\mathcal{C}}{U}$, then\n$a_U$ gets replaced by $ua_Uu^*$.\n\n\\begin{lemma} \\label{luniversaldim} For every object $U$ in $\\mathcal{C}$ we\nhave $d^\\mathcal{P}(U)\\le d^\\mathcal{A}(U)$, and if the equality holds, then we have\n$\\Theta_U(a_U)^{-1}=\\Theta_U(a_U^{-1})$.\n\\end{lemma}\n\n\\begin{proof} As usual, we omit the subscript $U$ in the computations. Consider\nthe solution\n$$\nr=(\\iota\\otimes \\Theta(a)^{1\/2})R,\\ \\ \\bar\nr=(\\Theta(a)^{-1\/2}\\otimes\\iota)\\bar R\n$$\nof the conjugate equations for $U$ in $\\mathcal{P}$. Then from the equality\n$$\nr^*r=R^*(\\iota\\otimes \\Theta(a))R=\\Theta(R^*(\\iota\\otimes a)R),\n$$\nwe have $\\|r\\|=d^{\\mathcal{A}}(U)^{1\/2}$. On the other hand, we also have\n$$\n\\bar r^*\\bar r=\\bar R^*(\\Theta(a)^{-1}\\otimes \\iota)\\bar R.\n$$\nBy Jensen's inequality for positive maps and the fact that the\nfunction $t\\mapsto t^{-1}$ on $(0,+\\infty)$ is operator convex (see\ne.g.~\\cite{MR2251116}*{B.2}), we have\n$\\Theta(a)^{-1}\\le\\Theta(a^{-1})$. Hence we have the estimate\n$$\n\\bar r^*\\bar r\\le\\bar R^*(\\Theta(a^{-1})\\otimes \\iota)\\bar\nR=\\Theta(\\bar R^*(a^{-1}\\otimes \\iota)\\bar R),\n$$\nand we conclude that $\\|\\bar r\\|\\le d^{\\mathcal{A}}(U)^{1\/2}$. Hence\n$d^\\mathcal{P}(U)\\le d^\\mathcal{A}(U)$, and if the equality holds, then we have $\\|\\bar\nr\\|= d^{\\mathcal{A}}(U)^{1\/2}$ and\n$$\n\\bar R^*(\\Theta(a)^{-1}\\otimes \\iota)\\bar R=\\bar\nR^*(\\Theta(a^{-1})\\otimes \\iota)\\bar R.\n$$\nSince $T\\mapsto \\bar R^*(T\\otimes \\iota)\\bar R$ is a faithful positive\nlinear functional on $\\End{\\mathcal{P}}{U}$, this is equivalent to\n$\\Theta(a)^{-1}=\\Theta(a^{-1})$. 
\\end{proof}\n\nIf we have $d^\\mathcal{P}(U)=d^\\mathcal{A}(U)$, we can then apply the following general result,\nwhich is surely well-known.\n\n\\begin{lemma} \\label{lmultdomain} Assume $\\theta\\colon A\\to B$ is a\nunital completely positive map of C$^*$-algebras and $a\\in A$ is a\npositive invertible element such that\n$\\theta(a)^{-1}=\\theta(a^{-1})$. Then $a$ lies in the multiplicative\ndomain of $\\theta$.\n\\end{lemma}\n\n\\begin{proof} It suffices to show that $a^{1\/2}$ lies in the multiplicative\ndomain. This, in turn, is equivalent to the equality\n$\\theta(a)^{1\/2}=\\theta(a^{1\/2})$.\n\nUsing Jensen's inequality and operator convexity of the functions\n$t\\mapsto -t^{1\/2}$ and $t\\mapsto t^{-1}$, we have\n$$\n\\theta(a)^{1\/2}\\ge\\theta(a^{1\/2}),\\ \\\n\\theta(a^{-1})^{1\/2}\\ge\\theta(a^{-1\/2})\\ \\ \\text{and}\\ \\\n\\theta(a^{-1\/2})^{-1}\\le\\theta(a^{1\/2}).\n$$\nThe second and the third inequalities imply\n$$\n\\theta(a^{-1})^{-1\/2}\\le \\theta(a^{1\/2}).\n$$\nSince $\\theta(a^{-1})=\\theta(a)^{-1}$, this gives $\\theta(a)^{1\/2}\\le\n\\theta(a^{1\/2})$. Hence $\\theta(a)^{1\/2}=\\theta(a^{1\/2})$. \\end{proof}\n\nTo finish the preparation for the proof of Theorem~\\ref{tuniversal1}\nwe consider the maps $\\Theta$ for $\\mathcal{A}=\\mathcal{P}$.\n\n\\begin{lemma} \\label{lthetaid} The maps\n$\\Theta_{U,V}\\colon\\mathcal{P}(U,V)\\to\\mathcal{P}(U,V)$ defined by the functor\n$\\Pi\\colon\\mathcal{C}\\to\\mathcal{P}$ are the identity maps.\n\\end{lemma}\n\n\\begin{proof} It suffices to consider $U=V$. Take $\\eta\\in\\End{\\mathcal{P}}{U}$. Let us\nshow first that $\\Theta(\\eta)_\\mathds{1}=\\eta_\\mathds{1}$, that is,\n$E(\\eta)=\\eta_\\mathds{1}$. 
In other words, we have to check that for any\n$S\in\End{\mathcal{C}}{U}$ we have\n$$\n\psi_U(S\eta)=\tr_U(S\eta_\mathds{1}).\n$$\nThis follows immediately by definition, since\n$$\n(\bar R_U^*(S\eta\otimes\iota)\bar R_U)_\mathds{1}=\bar\nR_U^*(S\eta_\mathds{1}\otimes\iota)\bar R_U.\n$$\n\nNow, for any object $X$ in $\mathcal{C}$, we have\n$$\n\Theta(\eta)_X=E(\iota_X\otimes\eta)=\Theta(\iota_X\otimes\eta)_\mathds{1}=(\iota_X\otimes\eta)_\mathds{1}=\eta_X.\n$$\nTherefore we have $\Theta(\eta)=\eta$. \end{proof}\n\n\begin{proof}[Proof of Theorem~\ref{tuniversal1}] The equality\n$d_{\mathrm{min}}^\mathcal{C}(U)=d^\mathcal{P}(U)$ for objects $U$ in $\mathcal{C}$ follows from\nLemma~\ref{luniversaldim}.\n\n\smallskip\n\nLet $F\colon\mathcal{C}\to\mathcal{A}$ be a unitary tensor functor such that\n$d_{\mathrm{min}}^\mathcal{C}=d^\mathcal{A} F$. As above, we assume that $F$ is simply an\nembedding functor. Consider the minimal subcategory\n$\tilde\mathcal{B}\subset\mathcal{A}$ containing $\mathcal{C}\subset\mathcal{A}$ and the\nmorphisms~$\iota_V\otimes a_U\otimes\iota_W$ for all objects $V$, $U$\nand $W$ in $\mathcal{C}$, where $a_U\in\End{\mathcal{A}}{U}$ are the morphisms defined\nin Lemma~\ref{luniversalau}. This is a C$^*$-tensor subcategory, in\ngeneral without subobjects. Complete~$\tilde\mathcal{B}$ with respect to\nsubobjects to get a C$^*$-tensor category~$\mathcal{B}$. By adding more\nobjects to $\mathcal{A}$ we may assume without loss of generality that\n$\mathcal{B}\subset\mathcal{A}$. Lemmas~\ref{ltheta},~\ref{luniversaldim}\nand~\ref{lmultdomain} imply that the maps $\Theta_{U,V}$ define a\nstrict unitary tensor functor $\tilde\mathcal{B}\to\mathcal{P}$. Thus $\tilde\mathcal{B}$ is\nunitarily monoidally equivalent to a C$^*$-tensor subcategory\n$\tilde\mathcal{P}\subset \mathcal{P}$, possibly without subobjects. 
Completing\n$\\tilde\\mathcal{P}$ with respect to subobjects we get a C$^*$-tensor\nsubcategory $\\mathcal{P}'\\subset\\mathcal{P}$, which is unitarily monoidally equivalent\nto $\\mathcal{B}$.\n\nWe claim that the embedding functor $\\mathcal{P}'\\to\\mathcal{P}$ is a unitary monoidal\nequivalence. Indeed, by construction we have\n$d^{\\mathcal{P}'}(U)=d_{\\mathrm{min}}^\\mathcal{C}(U)$ for every object $U$ in $\\mathcal{C}$. By\nLemmas~\\ref{lthetafactor} and~\\ref{lthetaid} it follows then that the\nidentity maps $\\End{\\mathcal{P}}{U}\\to\\End{\\mathcal{P}}{U}$ factor through the\nconditional expectations\n$E^{\\mathcal{P}'}_U\\colon\\End{\\mathcal{P}}{U}\\to\\End{\\mathcal{P}'}{U}$. Hence\n$\\End{\\mathcal{P}}{U}=\\End{\\mathcal{P}'}{U}$. Since the objects of $\\mathcal{C}$ generate\n$\\mathcal{P}$, this implies that the embedding functor $\\mathcal{P}'\\to\\mathcal{P}$ is a\nunitary monoidal equivalence.\n\nWe have therefore shown that $\\mathcal{P}$ and $\\mathcal{B}$ are unitarily monoidally\nequivalent, and furthermore, by properties of the maps $\\Theta$ such\nan equivalence $\\Lambda\\colon\\mathcal{P}\\to\\mathcal{B}$ can be chosen to be the\nidentity tensor functor on $\\mathcal{C}$. Considered as a functor $\\mathcal{P}\\to\\mathcal{A}$,\nthe unitary tensor functor $\\Lambda$ gives the required factorization\nof $F\\colon\\mathcal{C}\\to\\mathcal{A}$.\n\n\\smallskip\n\nIt remains to prove uniqueness. Denote by $\\rho_U\\in\\End{\\mathcal{P}}{U}$ the\nelements $a_U$ constructed in Lemma~\\ref{luniversalau} for the\ncategory $\\mathcal{P}$. By the uniqueness part of that lemma, it is clear that\nany unitary tensor functor $\\Lambda\\colon\\mathcal{P}\\to\\mathcal{A}$ extending the\nembedding functor $\\mathcal{C}\\to\\mathcal{A}$ must map $\\rho_U\\in\\End{\\mathcal{P}}{U}$ into\n$a_U\\in\\End{\\mathcal{A}}{U}$. 
But this completely determines $\\Lambda$ up to a\nunitary monoidal equivalence, since by the above considerations the\ncategory $\\mathcal{P}$ is obtained from $\\mathcal{C}$ by adding the morphisms $\\rho_U$\nand then completing the new category with respect to subobjects. \\end{proof}\n\nWe finish the section with a couple of corollaries.\n\n\\smallskip\n\nThe universality of the Poisson boundary implies that up to an\nisomorphism the boundary does not depend on the choice of an ergodic\nmeasure. But the proof shows that a stronger result is true.\n\n\\begin{corollary} \\label{cindepend} Let $\\mathcal{C}$ be a weakly amenable\nC$^*$-tensor category and $\\mu$ be an ergodic probability measure on\n$\\Irr(\\mathcal{C})$. Then any bounded $P_\\mu$-harmonic natural transformation\nis $P_s$-harmonic for every $s\\in\\Irr(\\mathcal{C})$, so the Poisson boundary\n$\\Pi\\colon\\mathcal{C}\\to\\mathcal{P}$ of $(\\mathcal{C},\\mu)$ does not depend on the choice of\nan ergodic measure.\n\\end{corollary}\n\n\\begin{proof} By Lemma~\\ref{lthetaid} the maps\n$\\Theta_{U,V}\\colon\\mathcal{P}(U,V)\\to\\mathcal{P}(U,V)$ are the identity maps, while\nby Lem\\-ma~\\ref{luniharmonic} their images consist of elements that\nare $P_s$-harmonic for all $s$. \\end{proof}\n\nWhen $\\mathcal{C}$ is amenable, then $d_{\\mathrm{min}}^\\mathcal{C}=d^\\mathcal{C}$ and we get the\nfollowing.\n\n\\begin{corollary} \\label{camenb} If $\\mathcal{C}$ is an amenable C$^*$-tensor\ncategory, then its Poisson boundary with respect to any ergodic\nprobability measure on $\\Irr(\\mathcal{C})$ is trivial. In other words, any\nbounded natural transformation $\\iota\\otimes U\\to\\iota\\otimes V$ which\nis $P_s$-harmonic for all $s\\in\\Irr(\\mathcal{C})$, is defined by a morphism\nin~$\\mathcal{C}(U,V)$.\n\\end{corollary}\n\n\\begin{proof} The identity functor $\\mathcal{C}\\to\\mathcal{C}$ is already universal, so it is\nisomorphic to the Poisson boundary. 
\\end{proof}\n\nWe remark that if we were interested only in proving this corollary, a\nmajority of the above arguments, being applied to the functor\n$\\mathcal{C}\\to\\mathcal{P}$, would become either trivial or unnecessary. Namely, in\nthis case a standard solution of the conjugate equations in~$\\mathcal{C}$\nremains standard in $\\mathcal{P}$, so we have $E_U=E^\\mathcal{C}_U$, and the key parts\nof the proof are contained in Lemmas~\\ref{lthetafactor}\nand~\\ref{lthetaid}. The first lemma shows that given\n$\\eta\\in\\End{\\mathcal{P}}{U}$ we have $E(\\iota_X\\otimes\\eta)=E(\\iota_X\\otimes\nE(\\eta))$, while the second shows that\n$E(\\iota_X\\otimes\\eta)=\\eta_X$. Since $E(\\iota_X\\otimes\nE(\\eta))=\\iota_X\\otimes E(\\eta)$, we therefore see that $\\eta$\ncoincides with $E(\\eta)\\in\\End{\\mathcal{C}}{U}$.\n\n\\smallskip\n\nCorollary~\\ref{camenb} is more or less known: in view of\nProposition~\\ref{prop:PP-equals-HY}, for measures considered\nin~\\cite{MR1749868} it is equivalent\nto~\\cite{MR1749868}*{Theorem~7.6}. For an even more restrictive class\nof measures the result also follows\nfrom~\\cite{MR1444286}*{Theorem~5.16}.\n\n\\bigskip\n\n\\section{Amenability of the minimal dimension function}\n\\label{sec:amen-minim-dimens}\n\nAs in the previous section, let $\\mathcal{C}$ be a weakly amenable strict\nC$^*$-tensor category. We defined the dimension function $d_{\\mathrm{min}}^\\mathcal{C}$\non $\\mathcal{C}$ as the infimum of dimension functions under all possible\nembeddings of $\\mathcal{C}$ into C$^*$-tensor categories, and showed that it\nis indeed a dimension function realized by the Poisson boundary of\n$\\mathcal{C}$ with respect to any ergodic measure. 
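Before stating the main result, a quick numerical illustration of the quantities involved may be helpful. This is an informal aside, not part of the argument: we assume the standard Fibonacci fusion rules, with simple objects $\mathds{1}$ and $\tau$ and $\tau\otimes\tau\cong\mathds{1}\oplus\tau$, for which the minimal dimension of $\tau$ is the golden ratio, and check that it coincides with the norm of the fusion matrix $\Gamma_\tau$.

```python
import math

# Fibonacci fusion rules (assumed): simple objects 1 and tau with
# tau (x) tau = 1 (+) tau.  Fusion matrix of tensoring by tau,
# rows/columns indexed by (1, tau):
gamma_tau = [[0, 1],
             [1, 1]]

# ||Gamma_tau|| is the largest eigenvalue of this symmetric matrix;
# for a 2x2 matrix [[a, b], [b, d]] it equals ((a+d) + sqrt((a-d)^2 + 4b^2))/2.
a, b, d = gamma_tau[0][0], gamma_tau[0][1], gamma_tau[1][1]
norm_gamma = ((a + d) + math.sqrt((a - d) ** 2 + 4 * b * b)) / 2

# The intrinsic (here also minimal) dimension of tau solves d^2 = 1 + d,
# forced by the fusion rule, i.e. it is the golden ratio.
d_tau = (1 + math.sqrt(5)) / 2

print(norm_gamma, d_tau)  # both equal (1 + sqrt(5))/2 ~ 1.6180339887
```

The agreement of the two numbers in this toy example is an instance of the amenability phenomenon studied in this section; in general neither $d_{\mathrm{min}}^\mathcal{C}(U)$ nor $\|\Gamma_U\|$ is computable so directly.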
The goal of this section is\nto prove the following.\n\n\begin{theorem} \label{tminimal} The dimension function $d_{\mathrm{min}}^\mathcal{C}$ is\n amenable, that is, $d_{\mathrm{min}}^\mathcal{C}(U)=\|\Gamma_U\|$ holds for every\n object $U$ in $\mathcal{C}$.\n\end{theorem}\n\nWe remark that already the fact that the fusion algebra of a weakly\namenable C$^*$-tensor category admits an amenable dimension function\nis nontrivial. We do not know whether this is true for weakly amenable\ndimension functions on fusion algebras that are not of categorical\norigin. If the fusion algebra is commutative, this is true by a result\nof Yamagami~\cite{MR1721584}.\n\n\smallskip\n\nLet $\mu$ be an ergodic probability measure on $\Irr(\mathcal{C})$ and\nconsider the corresponding Poisson boundary $\Pi\colon\mathcal{C}\to\mathcal{P}$. By\nTheorem~\ref{tuniversal1} we already know that\n$d_{\mathrm{min}}^\mathcal{C}=d^\mathcal{P}\Pi$. Therefore Theorem~\ref{tminimal} is equivalent\nto saying that $d^\mathcal{P}\Pi$ is the amenable dimension function on\n$\mathcal{C}$.\n\n\smallskip\n\nWe will use the realization of harmonic transformations as elements of\n$\mathcal{N}'\cap\mathcal{N}_U$ given in Section~\ref{sec:longo-roberts-appr}. It\nwill also be important to work with factors. Therefore we assume that\nin addition to being ergodic the measure $\mu$ is generating and\nsatisfies the assumption of\nProposition~\ref{prop:p-bdry-as-rel-comm-LR} (recall that for the\nlatter it suffices to require $\mu(e)>0$). Recall once again that by\nProposition~\ref{pweakamen} such a measure exists. We also recall that\nby Corollary~\ref{cindepend} the Poisson boundary does not depend on\nthe ergodic measure, but its realization in terms of relative\ncommutants does. 
We then have the following expected (in view of\nProposition~\ref{pHYrealization} and the discussion following it), but\ncrucial, result.\n\n\begin{proposition} \label{pminindex} For every object $U$ in $\mathcal{C}$,\nwe have $d^\mathcal{P}(U)=[\mathcal{N}_U\colon\mathcal{N}]_0^{1\/2}$, where\n$[\mathcal{N}_U\colon\mathcal{N}]_0$ is the minimal index of the subfactor\n$\mathcal{N}\subset\mathcal{N}_U$.\n\end{proposition}\n\nBefore we turn to the proof, recall the construction of\n$\mathcal{N}_U$. Consider $V=\oplus_{s\in\supp\mu}U_s$. We will treat $V$\nas a well-defined object. If $\supp\mu$ is infinite, to be\nrigorous, in what follows we have to replace $V$ by finite sums of\nobjects $U_s$, $s\in\supp\mu$, and then pass to the limit, but we will\nomit this simple, repetitive argument. With this understanding,\n$\mathcal{N}_U$ is the inductive limit of the algebras\n$\mathcal{N}^{(n)}_U=\mathcal{C}(V^{\otimes n}\otimes U)$ equipped with the faithful\nstates $\omega^{(n)}_U$.\n\nGiven another object $U'$, the partial trace $\iota \otimes \tr_{U}$\ndefines, for each $n$, a conditional expectation $\mathcal{N}_{U' \otimes\nU}^{(n)} \to \mathcal{N}_{U'}^{(n)}$ which preserves the state\n$\omega^{(n)}_{U'\otimes U}$. The conditional expectation $\mathcal{N}_{U'\n\otimes U} \to \mathcal{N}_{U'}$ that we get in the limit is denoted by\n$E_{U',U}$, or simply by $E_U$ if there is no danger of confusion. 
Fix\na standard solution $(R_U,\\bar R_U)$ of the conjugate equations for\n$U$ in $\\mathcal{C}$.\n\n\\begin{lemma}\\label{l:basic-ext} The index of the conditional\nexpectation $E_U\\colon\\mathcal{N}_U\\to\\mathcal{N}$ equals $d^\\mathcal{C}(U)^2$, the\ncorresponding basic extension is $\\mathcal{N}_U\\subset\\mathcal{N}_{U \\otimes\n\\bar{U}}$, with the Jones projection $e_U= d^\\mathcal{C}(U)^{-1} \\bar{R}_U\n\\bar{R}_U^* \\in\\mathcal{N}^{(0)}_{U\\otimes\\bar U}\\subset \\mathcal{N}_{U \\otimes\n\\bar{U}}$ and the conditional expectation\n$E_{\\bar{U}}\\colon\\mathcal{N}_{U\\otimes\\bar U}\\to\\mathcal{N}_U$.\n\\end{lemma}\n\n\\begin{proof} By the abstract characterization of the basic extension\n\\cite{MR1245827}*{Theorem~8} it suffices to check the following three\nproperties: $E_{\\bar U}(e_U)=d^\\mathcal{C}(U)^{-2}1$, $E_{\\bar\nU}(xe_U)e_U=d^\\mathcal{C}(U)^{-2}xe_U$ for all $x\\in\\mathcal{N}_{U\\otimes\\bar U}$,\nand $e_Uxe_U=E_U(x)e_U$ for all $x\\in\\mathcal{N}_U$. The first and the third\nproperties are immediate by definition. 
To prove the second, it is\nenough to show that for all $x\\in\\mathcal{C}(X\\otimes U\\otimes\\bar U)$ we have\n$$\nd^\\mathcal{C}(U)\\big((\\iota\\otimes\\tr_{\\bar U})(x(\\iota_X\\otimes \\bar R_U\\bar\nR^*_U))\\otimes\\iota_{\\bar U}\\big) (\\iota_X\\otimes\\bar R_U\\bar\nR_U^*)=x(\\iota_X\\otimes\\bar R_U\\bar R_U^*).\n$$\nThe left hand side equals\n\\begin{gather*}\n (\\iota_{X}\\otimes\\iota_{U}\\otimes R^*_U\\otimes\\iota_{\\bar U})\n (x\\otimes\\iota_{U}\\otimes\\iota_{\\bar U}) (\\iota_X\\otimes \\bar\n R_U\\bar R^*_U\\otimes\\iota_{U}\\otimes\\iota_{\\bar U})\n (\\iota_{X}\\otimes\\iota_{U}\\otimes R_U\\otimes\\iota_{\\bar U})\n (\\iota_X\\otimes\\bar R_U\\bar R_U^*)\\\\\n\\begin{split}\n &= (\\iota_{X}\\otimes\\iota_{U}\\otimes R^*_U\\otimes\\iota_{\\bar U})\n (x\\otimes\\iota_{U}\\otimes\\iota_{\\bar U}) (\\iota_X\\otimes \\bar\n R_U\\otimes\\iota_{U}\\otimes\\iota_{\\bar U}) (\\iota_X\\otimes\\bar R_U\\bar R_U^*)\\\\\n &= (\\iota_{X}\\otimes\\iota_{U}\\otimes R^*_U\\otimes\\iota_{\\bar U})\n (x\\otimes\\iota_{U}\\otimes\\iota_{\\bar U}) (\\iota_X\\otimes \\bar\n R_U\\otimes\\bar R_U) (\\iota_X\\otimes\\bar R_U^*)\\\\\n &=x(\\iota_X\\otimes\\bar R_U\\bar R_U^*),\n\\end{split}\n\\end{gather*}\nwhich proves the lemma.\n\\end{proof}\n\nThis lemma implies in particular that there exists a unique\nrepresentation $$\\pi\\colon\\mathcal{N}_{U\\otimes\\bar U}\\to\nB(L^2(\\mathcal{N}_U,\\omega_U))$$ that extends the representation of $\\mathcal{N}_U$\nand is such that $\\pi(e_U)$ is the projection onto the closure of\n$\\Lambda_{\\omega_U}(\\mathcal{N})\\subset L^2(\\mathcal{N}_U,\\omega_U)$.\n\n\\begin{lemma} \\label{lpi} The representation\n $\\pi\\colon\\mathcal{N}_{U\\otimes\\bar U}\\to B(L^2(\\mathcal{N}_U,\\omega_U))$ is given\n by\n$$\n\\pi(x)\\Lambda_{\\omega_U}(y) = \\Lambda_{\\omega_U}((\\iota\\otimes\nR_U^*)(x\\otimes\\iota_U)(y\\otimes\\iota_{\\bar U}\\otimes\n\\iota_U)(\\iota\\otimes \\bar R_U\\otimes\\iota_U))\n$$\nfor 
$x\\in\\cup_n\\mathcal{N}^{(n)}_{U\\otimes\\bar U}$ and\n$y\\in\\cup_n\\mathcal{N}^{(n)}_U$.\n\\end{lemma}\n\n\\begin{proof} Let us write $\\tilde\\pi(x)$ for the operators in the formulation\nof the lemma. The origin of the formula for $\\tilde\\pi$ is the\nFrobenius reciprocity isomorphism\n$$\n\\mathcal{C}(V^{\\otimes n}\\otimes U) \\cong \\mathcal{C}(V^{\\otimes n}, V^{\\otimes\n n}\\otimes U\\otimes \\bar U),\\ \\ T\\mapsto (T\\otimes\\iota_{\\bar\n U})(\\iota_{V^{\\otimes n}}\\otimes\\bar R_U),\n$$\nwith inverse $S\\mapsto (\\iota\\otimes R^*_U)(S\\otimes\\iota_{U})$. Up to\nscalar factors these isomorphisms become unitary once we equip both\nspaces with scalar products defined by the states $\\omega^{(n)}_U$ and\n$\\omega^{(n)}_\\mathds{1}$, respectively. The algebra $\\mathcal{C}(V^{\\otimes\n n}\\otimes U\\otimes \\bar U)$ is represented on $ \\mathcal{C}(V^{\\otimes\n n},V^{\\otimes n}\\otimes U\\otimes \\bar U)$ by the operators of\nmultiplication on the left. Being written on the space $\\mathcal{C}(V^{\\otimes\n n}\\otimes U)$, this representation is exactly $\\tilde\\pi$. Therefore\n$\\tilde\\pi$ certainly defines a representation of the $*$-algebra\n$\\cup_n\\mathcal{N}^{(n)}_{U\\otimes\\bar U}$ on the dense subspace $\\cup_n\nL^2(\\mathcal{N}^{(n)}_U,\\omega^{(n)}_U)$ of $L^2(\\mathcal{N}_U,\\omega_U)$. 
In order\nto see that this representation extends to a normal representation of\n$\\mathcal{N}_{U\\otimes\\bar U}$, observe that the vector\n$\\Lambda_{\\omega_U}(1)$ is cyclic and\n$$\n\\left(\\tilde\\pi(x) \\Lambda_{\\omega_U}(1), \\Lambda_{\\omega_U}(1)\\right)\n= d^\\mathcal{C}(U)^2 \\omega_{U\\otimes\\bar U}(e_{U} x e_{U}),\n$$\nsince for every $z\\in\\mathcal{C}(U\\otimes\\bar U)$ we have\n\\begin{align*} \\tr_U((\\iota_U\\otimes R_U^*)(z\\otimes\\iota_U)(\\bar\nR_U\\otimes\\iota_U)) &=d^\\mathcal{C}(U)^{-1}\\bar R_U^*(\\iota_U\\otimes\nR_U^*\\otimes\\iota_{\\bar U})(z\\otimes\\iota_U\\otimes\\iota_{\\bar U})(\\bar\nR_U\\otimes\\iota_U\\otimes\\iota_{\\bar U})\\bar R_U\\\\ &=d^\\mathcal{C}(U)^{-1}\\bar\nR_U^*z\\bar R_U=d^\\mathcal{C}(U)\\tr_{U\\otimes\\bar U}(z\\bar R_U^*\\bar R_U)\\\\\n&=d^\\mathcal{C}(U)^2\\tr_{U\\otimes\\bar U}(ze_U)=d^\\mathcal{C}(U)^2\\tr_{U\\otimes\\bar\nU}(e_Uze_U).\n\\end{align*}\n\nIt is clear that\n$\\tilde\\pi(x)\\Lambda_{\\omega_U}(y)=\\Lambda_{\\omega_U}(xy)$ for\n$x\\in\\cup_n\\mathcal{N}^{(n)}_U$, so $\\tilde\\pi$ extends the representation of\n$\\mathcal{N}_U$ on $L^2(\\mathcal{N}_U,\\omega_U)$. Therefore to prove that\n$\\pi=\\tilde\\pi$ it remains to show that $\\tilde\\pi(e_U)$ is the\nprojection onto $\\overline{\\Lambda_{\\omega_U}(\\mathcal{N})}$, that~is,\n$$\n\\tilde\\pi(e_U)\\Lambda_{\\omega_U}(y)=\\Lambda_{\\omega_U}(E_U(y))\\ \\\n\\text{for}\\ \\ y\\in\\cup_n\\mathcal{N}^{(n)}_U.\n$$\nBut this is obvious, as $(\\iota_U\\otimes R^*_U)(\\bar R_U\\bar\nR^*_U\\otimes\\iota_U)=\\bar R^*_U\\otimes\\iota_U$. \\end{proof}\n\nIt is easy to describe the modular group $\\sigma^{\\omega_U}$ of\n$\\omega_U$. 
For $s_* = (s_1, \ldots, s_n) \in (\supp \mu)^n$, let us\nput\n$$\n\delta_{s_*} = \frac{\mu(s_1) \cdots \mu(s_n)}{d^\mathcal{C}(U_{s_1}) \cdots\nd^\mathcal{C}(U_{s_n})}.\n$$\nThen\n$$\n\sigma^{\omega_U}_t(x)=\left(\frac{\delta_{s'_*}}{\delta_{s_*}}\right)^{it}x\\n\ \ \text{for}\ \ x\in \mathcal{C}(U_{s_n} \otimes \cdots \otimes U_{s_1}\otimes\nU, U_{s'_n} \otimes \cdots \otimes U_{s'_1}\otimes U).\n$$\nWhat matters for us is that since the automorphisms\n$\sigma^{\omega_U}_t$ are approximately implemented by unitaries in\n$\mathcal{N}$, the relative commutant $\mathcal{N}'\cap\mathcal{N}_U$ is contained in the\ncentralizer of the state $\omega_U$.\n\n\smallskip\n\nConsider the modular conjugation $J=J_{\omega_U}$ on\n$L^2(\mathcal{N}_U,\omega_U)$. By Lemma~\ref{l:basic-ext} and definition of\nthe basic extension we have\n$$\nJ\mathcal{N}'J=\pi(\mathcal{N}_{U\otimes\bar U}).\n$$\nTherefore the map $x\mapsto Jx^*J$ defines a $*$-anti-isomorphism\n$$\n\mathcal{N}'\cap\mathcal{N}_U\cong \mathcal{N}_U'\cap\mathcal{N}_{U\otimes\bar U}.\n$$\nIdentifying these relative commutants with $\mathcal{P}(U)$ and $\mathcal{P}(\bar U)$,\nrespectively, we get a $*$-anti-isomorphism $\mathcal{P}(U)\cong\mathcal{P}(\bar U)$,\nwhich we denote by $\eta\mapsto\eta^\vee$.\n\n\begin{lemma} \label{lvee} For every $\eta\in\mathcal{P}(U)$ we have\n$$\n\eta^\vee=(R^*_U\otimes\iota_{\bar U})(\iota_{\bar\nU}\otimes\eta\otimes\iota_{\bar U})(\iota_{\bar U}\otimes\bar R_U).\n$$\n\end{lemma}\n\n\begin{proof} Consider the element $\tilde\eta=(R^*_U\otimes\iota_{\bar\nU})(\iota_{\bar U}\otimes\eta\otimes\iota_{\bar\nU})(\iota_{\bar U}\otimes\bar R_U)$. 
In terms of families of morphisms this means that\n$$\n\\tilde\\eta_X=(\\iota_X\\otimes R^*_U\\otimes\\iota_{\\bar\nU})(\\eta_{X\\otimes\\bar U}\\otimes\\iota_{\\bar\nU})(\\iota_X\\otimes\\iota_{\\bar U}\\otimes\\bar R_U),\n$$\nor equivalently,\n\\begin{equation}\\label{evee} (\\iota_X\\otimes\nR^*_U)(\\tilde\\eta_X\\otimes\\iota_U)=(\\iota_X\\otimes\nR^*_U)\\eta_{X\\otimes\\bar U}.\n\\end{equation}\n\nFor every $n$ consider the projection $p_n\\colon\nL^2(\\mathcal{N}_U,\\omega_U)\\to L^2(\\mathcal{N}^{(n)}_U,\\omega^{(n)}_U)$. Let\n$x\\in\\mathcal{N}'\\cap\\mathcal{N}_U$ be the element corresponding to $\\eta$, and\n$\\tilde x\\in\\mathcal{N}_U'\\cap\\mathcal{N}_{U\\otimes\\bar U}$ be the element\ncorresponding to $\\tilde\\eta$. By Lemma~\\ref{lpi} and the way we\nrepresent $\\tilde\\eta$ by $\\tilde x$, for every $y\\in\\mathcal{N}^{(n)}_U$ we\nhave\n\\begin{equation} \\label{evee2} p_n\\pi(\\tilde\nx)\\Lambda_{\\omega_U}(y)=\\Lambda_{\\omega_U}((\\iota\\otimes\nR_U^*)(\\tilde\\eta_{V^{\\otimes n}\\otimes\nU}\\otimes\\iota_U)(y\\otimes\\iota_{\\bar U}\\otimes \\iota_U)(\\iota\\otimes\n\\bar R_U\\otimes\\iota_U)).\n\\end{equation} On the other hand, since $x$ is contained in the\ncentralizer of $\\omega_U$, we have\n\\begin{align*}\np_nJx^*J\\Lambda_{\\omega_U}(y)&=p_n\\Lambda_{\\omega_U}(yx)=\\Lambda_{\\omega_U}(y\n\\eta_{V^{\\otimes n}})\\\\ &=\\Lambda_{\\omega_U}((\\iota\\otimes\nR_U^*)(y\\otimes\\iota_{\\bar U}\\otimes \\iota_U)(\\iota\\otimes \\bar\nR_U\\otimes\\iota_U)\\eta_{V^{\\otimes n}})\\\\\n&=\\Lambda_{\\omega_U}((\\iota\\otimes R_U^*)\\eta_{V^{\\otimes n}\\otimes\nU\\otimes\\bar U}(y\\otimes\\iota_{\\bar U}\\otimes \\iota_U)(\\iota\\otimes\n\\bar R_U\\otimes\\iota_U)).\n\\end{align*} By \\eqref{evee} the last expression equals \\eqref{evee2},\nso\n$$\np_n\\pi(\\tilde x)\\Lambda_{\\omega_U}(y)=p_nJx^*J\\Lambda_{\\omega_U}(y).\n$$\nSince this is true for all $n$ and $y\\in\\mathcal{N}^{(n)}_U$, we conclude\nthat $\\pi(\\tilde x)=Jx^*J$. 
\end{proof}\n\n\begin{proof}[Proof of Proposition~\ref{pminindex}] The operator-valued weights\nfrom $\mathcal{N}_U$ to $\mathcal{N}$ are parametrized by the positive elements $a\n\in \mathcal{N}'\cap\mathcal{N}_U$ by $a \mapsto E^a$, where $E^a$ is defined by\n$E^a(x) = E_U(a^{1\/2} x a^{1\/2})$. The map $E^a$ is a conditional\nexpectation if and only if the normalization condition $E_U(a)=1$\nholds. Moreover, by the proof of~\cite{MR976765}*{Theorem~1},\n$(E^a)^{-1}$ is given by $x \mapsto E_U^{-1}(a^{-1\/2} x\na^{-1\/2})$. Therefore we have\n$$\n[\mathcal{N}_U\colon\mathcal{N}]_0=\min_{\substack{a\in \mathcal{N}'\cap \mathcal{N}_U,\\\na>0}}E_U(a)E^{-1}_U(a^{-1})=\min_{\substack{a\in \mathcal{N}'\cap \mathcal{N}_U,\\\na>0}}d^\mathcal{C}(U)^2E_U(a)\tilde E_U(Ja^{-1}J),\n$$\nwhere $\tilde E_U=d^\mathcal{C}(U)^{-2}JE_U^{-1}(J\cdot\nJ)J\colon\pi(\mathcal{N}_{U\otimes\bar U})\to\mathcal{N}_U$.\n\nIf $a\in \mathcal{N}'\cap \mathcal{N}_U$ corresponds to $\eta\in\mathcal{P}(U)$, we have\n$$\nE_U(a)=d^{\mathcal{C}}(U)^{-1}\bar R_U^*(\eta\otimes\iota)\bar R_U.\n$$\nBy Lemma~\ref{l:basic-ext} we have $\tilde E_U(\pi(x))=\pi(E_{\bar\nU}(x))$ for $x\in\mathcal{N}_{U\otimes\bar U}$. Hence by Lemma~\ref{lvee} we\nget\n\begin{align*} \tilde\nE_U(Ja^{-1}J)&=d^{\mathcal{C}}(U)^{-1}R_U^*((\eta^{-1})^\vee\otimes\iota)R_U\\\n&=d^{\mathcal{C}}(U)^{-1}R_U^*(R^*_U\otimes\iota_{\bar\nU}\otimes\iota_U)(\iota_{\bar U}\otimes\eta^{-1}\otimes\iota_{\bar\nU}\otimes\iota_U)(\iota_{\bar U}\otimes\bar R_U\otimes\iota_U)R_U\\\n&=d^{\mathcal{C}}(U)^{-1}R_U^*(\iota_{\bar U}\otimes\eta^{-1})R_U.\n\end{align*} We thus conclude that $[\mathcal{N}_U\colon\mathcal{N}]_0$ is the\nminimum of the products of the scalars\n$$\n\bar R_U^*(\eta\otimes\iota)\bar R_U\ \ \text{and}\ \\nR_U^*(\iota\otimes\eta^{-1})R_U\n$$\nover all positive invertible $\eta\in\mathcal{P}(U)$. 
This is exactly\n$d^\\mathcal{P}(U)^2$. \\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{tminimal}] The estimate $\\| \\Gamma_U \\| \\le\nd^\\mathcal{P}(U)$ comes for free. We thus need to prove the opposite\ninequality.\n\nLet $E^\\mathcal{P}_U\\colon\\mathcal{N}_U\\to\\mathcal{N}$ be the minimal conditional\nexpectation. Let us first assume that $\\mathcal{N}_U$ (and hence $\\mathcal{N}$) is\ninfinite. Then by Proposition~\\ref{pminindex} and\n\\cite{MR1096438}*{Corollary~7.2} we have the equalities\n$$\n2\\log d^\\mathcal{P}(U)=\\log \\Ind E^\\mathcal{P}_U =H_{E^\\mathcal{P}_U}(\\mathcal{N}_U | \\mathcal{N}).\n$$\nLet $\\epsilon > 0$ and $\\psi$ be a normal state on $\\mathcal{N}_U$ such that\n$$H_{\\psi}(\\mathcal{N}_U | \\mathcal{N}) \\ge 2 \\log d^\\mathcal{P}(U)\n- \\epsilon.$$\n\nWhen $A$ is a finite subset of $\\supp \\mu$, consider the projection\n$p_A = \\oplus_{s \\in A} \\iota_s$ in $\\mathcal{N}^{(1)}$. If $A_1, \\ldots,\nA_n$ are finite subsets of $\\supp\\mu$, then $p_{A_*} = p_{A_n} \\otimes\n\\cdots \\otimes p_{A_1}$ is a projection in $\\mathcal{N}^{(n)}$, and we\nconsider the corresponding corner\n$$\n\\mathcal{N}_U^{A_*} = p_{A_*}\\mathcal{N}_U^{(n)} p_{A_*} = \\bigoplus_{\\substack{s_i,\ns'_i \\in A_i\\\\i=1,\\ldots,n}} \\mathcal{C}(U_{s_n} \\otimes \\cdots \\otimes\nU_{s_1} \\otimes U, U_{s'_n} \\otimes \\cdots \\otimes U_{s'_1} \\otimes U)\n$$\nin $\\mathcal{N}^{(n)}_U$ and the similarly defined corner $\\mathcal{N}^{A_*}$ in\n$\\mathcal{N}^{(n)}$. When $\\psi(p_{A_*})\\ne0$, define also a state\n$\\psi_{A_*}$ on~$\\mathcal{N}_U^{A_*} $ by $\n\\psi_{A_*}=\\psi(p_{A_*})^{-1}\\psi(p_{A_*}\\cdot p_{A_*}). 
$ By the\nlower semicontinuity of relative entropy, we can find~$n$ and finite\nsets $A_1, \\ldots, A_n$ such that\n$$\nH_{\\psi_{A_*}}(\\mathcal{N}^{A_*}_U | \\mathcal{N}^{A_*}) \\ge H_\\psi(\\mathcal{N}_U | \\mathcal{N}) -\n\\epsilon.\n$$\nBy Proposition~\\ref{prop:stt-rel-entropy-graph-norm}, the inclusion\nmatrix $\\Gamma_{A_*, U}$ of $\\mathcal{N}^{A_*} \\subset \\mathcal{N}_U^{A_*}$\nsatisfies $$2 \\log \\| \\Gamma_{A_*,U} \\|\\ge H_{\\psi_{A_*}}(\n\\mathcal{N}_U^{A_*} | \\mathcal{N}^{A_*}).$$ Therefore we have the estimate\n$$\n\\log \\| \\Gamma_{A_*, U} \\|\\ge\\log d^\\mathcal{P}(U)- \\epsilon.\n$$\nBut the transpose of the matrix $\\Gamma_{A_*,U}$ is obtained from\n$\\Gamma_U$ by considering only columns that correspond to the simple\nobjects appearing in the decomposition of $U_{s_n} \\otimes \\cdots\n\\otimes U_{s_1} $ for $s_i\\in A_i$, and then removing the zero\nrows. Hence\n$$\n\\|\\Gamma_U \\|\\ge \\| \\Gamma_{A_*,U} \\|.\n$$\nSince $\\epsilon$ was arbitrary, we thus get $\\| \\Gamma_U \\| \\ge\nd^\\mathcal{P}(U)$.\n\n\\smallskip\n\nIf $\\mathcal{N}_U$ is finite, we consider the inclusion $\\mathcal{N} \\mathbin{\\bar{\\otimes}} M\n\\subset \\mathcal{N}_U \\mathbin{\\bar{\\otimes}} M$ for some infinite hyperfinite von Neumann\nalgebra $M$ with a prescribed strongly operator dense increasing\nsequence $M_{n_k}(\\mathbb{C})\\subset M$; for example, we could take a Powers\nfactor $R_\\lambda$ with the usual copies of $M_2(\\mathbb{C})^{\\otimes k}$ in\nit. Then the minimal conditional expectation $\\mathcal{N}_U \\mathbin{\\bar{\\otimes}} M \\to\n\\mathcal{N} \\mathbin{\\bar{\\otimes}} M$ is given by $E^\\mathcal{P}_U \\otimes \\iota$, and its index\nequals that of $E^\\mathcal{P}_U$. Since the inclusion matrix of $\\mathcal{N}^{A_*}\n\\otimes M_{n_k}(\\mathbb{C}) \\subset \\mathcal{N}_U^{A_*} \\otimes M_{n_k}(\\mathbb{C})$ is the\nsame as that of $\\mathcal{N}^{A_*} \\subset \\mathcal{N}_U^{A_*}$, we can then argue\nin the same way as above. 
\\end{proof}\n\nSince amenability of dimension functions is preserved under\nhomomorphisms of fusion algebras by\n\\cite{MR1644299}*{Proposition~7.4}, we get the following corollary.\n\n\\begin{corollary}\\label{cor:from-P-eq-C-to-amen} Let\n$\\Pi\\colon\\mathcal{C}\\to\\mathcal{P}$ be the Poisson boundary of a rigid C$^*$-tensor\ncategory with respect to an ergodic probability measure on\n$\\Irr(\\mathcal{C})$. Then $\\mathcal{P}$ is an amenable C$^*$-tensor category.\n\\end{corollary}\n\nCombining this with Corollary~\\ref{camenb} we get the following\ncategorical version of the\nFurstenberg--Kaimanovich--Vershik--Rosenblatt characterization of\namenability.\n\n\\begin{theorem} \\label{tFKVR} A rigid C$^*$-tensor category $\\mathcal{C}$ is\namenable if and only if there is a probability measure~$\\mu$ on\n$\\Irr(\\mathcal{C})$ such that the Poisson boundary of $(\\mathcal{C},\\mu)$ is\ntrivial. Furthermore, the Poisson boundary of an amenable C$^*$-tensor\ncategory is trivial for any ergodic probability measure.\n\\end{theorem}\n\nTherefore we can say that while weak amenability can be detected by\nstudying classical Poisson boundaries of random walks on the fusion\nalgebra, for amenability we have to consider noncommutative, or\ncategorical, random walks. We can also say that nontriviality of the\nPoisson boundary $\\Pi\\colon\\mathcal{C}\\to\\mathcal{P}$ with respect to an ergodic\nmeasure shows how far a weakly amenable category $\\mathcal{C}$ is from being\namenable.\n\n\\bigskip\n\n\\section{Amenable functors}\n\nIn this section we will give another characterization of amenability\nin terms of invariant means. We know that on the level of fusion\nalgebras existence of invariant means is not enough for\namenability. 
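To fix intuition for what such a classical random walk on a fusion algebra looks like, here is a minimal sketch under assumed standard $SU(2)$ fusion conventions (the transition formula $p(n\to t)=N_{n,1}^t\,d(t)\/(d(n)d(1))$ is the usual one for random walks on fusion algebras and is not taken from this section):

```python
# Classical random walk on the fusion algebra of SU(2) (illustrative,
# assumed conventions): Irr = {0, 1, 2, ...} with d(n) = n + 1 and
# fusion rules n (x) 1 = (n-1) (+) (n+1).  For the step measure
# mu = delta_1 the transition probabilities of the induced walk are
#   p(n -> t) = N_{n,1}^t * d(t) / (d(n) * d(1)).
N = 20  # truncation level; rows at the cut are incomplete

P = [[0.0] * (N + 1) for _ in range(N + 1)]
for n in range(N + 1):
    d_n = n + 1
    if n >= 1:
        P[n][n - 1] = n / (2 * d_n)        # weight d(n-1)/(d(n)d(1))
    if n + 1 <= N:
        P[n][n + 1] = (n + 2) / (2 * d_n)  # weight d(n+1)/(d(n)d(1))

# The Frobenius identity sum_t N_{n,1}^t d(t) = d(n) d(1) makes every
# untruncated row a probability vector:
row_sums = [sum(row) for row in P[:N]]
print(row_sums)  # all equal to 1.0 up to rounding
```

Kernels of this type act on $\ell^\infty(\Irr(\mathcal{C}))$ and see only the fusion ring, which is why invariant means at this level cannot capture amenability.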
Therefore we need a more refined categorical notion.\n\n\\begin{definition} Let $\\mathcal{C}$ be a C$^*$-tensor category and\n $F\\colon\\mathcal{C}\\to\\mathcal{A}$ be a unitary tensor functor into a C$^*$-tensor\n category $\\mathcal{A}$ with possibly nonsimple unit. A \\emph{right invariant\n mean} for $F$ is a collection $m=(m_{U,V})_{U,V}$ of linear\n maps $$m_{U,V}\\colon \\hat{\\mathcal{C}}(U, V) \\to \\mathcal{A}(F(U),F(V))$$ that are\n natural in $U$ and $V$ and satisfy the following properties:\n\\begin{enumerate}\n\\item the maps $m_U=m_{U,U}\\colon \\hat{\\mathcal{C}}(U)\\to\n\\End{\\mathcal{A}}{F(U)}$ are unital and positive;\n\n\\medskip\n\n\\item for any $\\eta\\in \\hat{\\mathcal{C}}(U, V)$ and any object $Y$ in\n $\\mathcal{C}$ we have\n$$\nm_{U\\otimes Y,V\\otimes\nY}(\\eta\\otimes\\iota_Y)=F_2(m_{U,V}(\\eta)\\otimes\\iota_{F(Y)});\n$$\n\n\\item for any $\\eta\\in \\hat{\\mathcal{C}}(U, V)$ and any object $Y$ in\n $\\mathcal{C}$ we have\n$$\nm_{Y\\otimes U,Y\\otimes V}(\\iota_Y\\otimes\\eta)=F_2(\\iota_{F(Y)}\\otimes\nm_{U,V}(\\eta)).\n$$\n\\end{enumerate}\n\nIf a right invariant mean for $F$ exists, we say that $F$ is\n\\emph{amenable}.\n\\end{definition}\n\nNote that naturality of $m_{U,V}$ and property (i) in the above\ndefinition easily imply that the maps~$m_U$ are completely positive,\nand $m_{U,V}(\\eta)^*=m_{V,U}(\\eta^*)$. As usual, we omit subscripts\nand simply write $m$ instead of $m_{U, V}$ when there is no confusion.\n\n\\smallskip\n\nThe relevance of this notion for categorical random walks is explained\nby the following simple observation, similar to the easy part of\nProposition~\\ref{pweakamen}.\n\n\\begin{proposition} \\label{ppoissonamenability} Let $\\mathcal{C}$ be a rigid\nC$^*$-tensor category, $\\mu$ be a probability measure on $\\Irr(\\mathcal{C})$,\nand $\\Pi\\colon\\mathcal{C}\\to\\mathcal{P}$ be the Poisson boundary of~$(\\mathcal{C},\\mu)$. 
Then\nthe functor $\\Pi\\colon\\mathcal{C}\\to\\mathcal{P}$ is amenable.\n\\end{proposition}\n\n\\begin{proof} Fix a free ultrafilter $\\omega$ on $\\mathbb{N}$, and then define\n$$\nm(\\eta)_X=\\lim_{n\\to\\omega}\\frac{1}{n}\\sum^{n-1}_{k=0}P^k_\\mu(\\eta)_X.\n$$\nAll the required properties of a right invariant mean follow\nimmediately by definition. For example, property (iii) in the\ndefinition follows from the identity $P_X(\\iota_Y\\otimes\n\\eta)=\\iota_Y\\otimes P_X(\\eta)$. \\end{proof}\n\nFor functors into categories with nonsimple units we do not have much\ninsight into the meaning of amenability. But if we fall back to our\nstandard assumption of simplicity of tensor units, we have the\nfollowing result.\n\n\\begin{theorem}\\label{tamnefunctor} Let $\\mathcal{C}$ be a rigid C$^*$-tensor\ncategory and $F\\colon\\mathcal{C}\\to\\mathcal{A}$ be a unitary tensor functor. Then~$F$\nis amenable if and only if $\\mathcal{C}$ is weakly amenable and $d^\\mathcal{A} F$ is\nthe amenable dimension function on~$\\mathcal{C}$.\n\\end{theorem}\n\nLet $F\\colon\\mathcal{C}\\to\\mathcal{A}$ be an amenable unitary tensor functor with a\nright invariant mean $m$. For simplicity we assume as usual that $\\mathcal{C}$\nand $\\mathcal{A}$ are strict and $F$ is an embedding functor. 
Let us start by\nshowing that existence of $F$ implies weak amenability.\n\n\begin{lemma} \label{lweakamen} The linear functional\n$m_\mathds{1}\colon\Endb(\iota_\mathcal{C})\cong\ell^\infty(\Irr(\mathcal{C}))\to\End{\mathcal{A}}{\mathds{1}}\cong\mathbb{C}$\nis a right invariant mean on the fusion algebra of $\mathcal{C}$ equipped with\nthe dimension function $d^\mathcal{C}$.\n\end{lemma}\n\n\begin{proof} In addition to the operators $P_X$ on $\hat\mathcal{C}(\mathds{1})$ we\nnormally use, we also have the operators $Q_X$ given by\n$$\nQ_X(\eta)_Y=(\iota_Y\otimes\tr_X)(\eta_{Y\otimes\nX})=d^\mathcal{C}(X)^{-1} (\iota_Y\otimes\bar R_X^*)(\eta_{Y\otimes\nX}\otimes\iota_{\bar X})(\iota_Y\otimes \bar R_X),\n$$\nwhere $(R_X,\bar R_X)$ is a standard solution of the conjugate\nequations for $X$ in $\mathcal{C}$. Since\n$$\n\eta_{Y\otimes X}\otimes\iota_{\bar\nX}=(\iota_X\otimes\eta\otimes\iota_{\bar X})_Y,\n$$\nwe can write this as\n$$\nQ_X(\eta)=d^\mathcal{C}(X)^{-1}\bar R^*_X(\iota_X\otimes\eta\otimes\iota_{\bar\nX})\bar R_X.\n$$\nApplying the invariant mean we get\n$$\nm(Q_X(\eta))=d^\mathcal{C}(X)^{-1}\bar R^*_X(\iota_X\otimes\nm(\eta)\otimes\iota_{\bar X})\bar R_X=m(\eta).\n$$\nThus $m_\mathds{1}$ is a right invariant mean on\n$\ell^\infty(\Irr(\mathcal{C}))$. \end{proof}\n\nSince $\mathcal{C}$ is weakly amenable, we can choose an ergodic probability\nmeasure and consider the corresponding Poisson boundary\n$\Pi\colon\mathcal{C}\to\mathcal{P}$. 
We then have the following result, which has its\norigin in Tomatsu's considerations in~\\cite{MR2335776}*{Section~4}.\n\n\\begin{lemma} \\label{lmult2} For every object $U$ in $\\mathcal{C}$ the map\n$\\Lambda_U\\colon\\End{\\mathcal{P}}{U}\\to\\End{\\mathcal{A}}{U}$ obtained by\nrestricting~$m_U$ to~$\\End{\\mathcal{P}}{U}$ is multiplicative.\n\\end{lemma}\n\n\\begin{proof} Recall that in Section~\\ref{suniversal} we constructed faithful\nunital completely positive maps\n$\\Theta_U\\colon\\End{\\mathcal{A}}{U}\\to\\End{\\mathcal{P}}{U}$, $\\Theta_U(T)_X=E_{X\\otimes\nU}(\\iota\\otimes T)$. By faithfulness of $\\Theta_U$, the multiplicative\ndomain of $\\Theta_U\\Lambda_U$ is contained in that of\n$\\Lambda_U$. Therefore in order to prove the lemma it suffices to show\nthat $\\Theta_U\\Lambda_U$ is the identity map.\n\nLet us show first that for any $\\eta\\in\\End{\\mathcal{P}}{U}$ we have $\\Theta\n\\Lambda(\\eta)_\\mathds{1}=\\eta_\\mathds{1}$, that is,\n$$\nE(m(\\eta))=\\eta_\\mathds{1}.\n$$\nTake $S\\in\\End{\\mathcal{C}}{U}$. Then we have\n$$\n\\tr_U(SE(m(\\eta)))=\\psi_U(m(S\\eta))=d(U)^{-1}\\bar\nR_U^*(m(S\\eta)\\otimes\\iota)\\bar R_U =d(U)^{-1}m(\\bar\nR_U^*(S\\eta\\otimes\\iota)\\bar R_U).\n$$\nSince the element $\\bar R_U^*(S\\eta\\otimes\\iota)\\bar R_U$ lies in\n$\\End{\\mathcal{P}}{\\mathds{1}}$, it is scalar. 
This scalar must be equal to\n$$\\bar R_U^*(S\\eta_\\mathds{1}\\otimes\\iota)\\bar R_U=\\tr_U(S\\eta_\\mathds{1}).$$\nHence we obtain\n$$\n\\tr_U(SE(m(\\eta)))=\\tr_U(S\\eta_\\mathds{1}),\n$$\nand since this is true for all $S$, we get $\\Theta\n\\Lambda(\\eta)_\\mathds{1}=\\eta_\\mathds{1}$.\n\nNow, for any object $X$ in $\\mathcal{C}$, we use the above equality for\n$\\iota_X\\otimes\\eta$ instead of $\\eta$ and get\n$$\n\\Theta \\Lambda(\\eta)_X=E(\\iota_X\\otimes\nm(\\eta))=E(m(\\iota_X\\otimes\\eta))=\\Theta\n\\Lambda(\\iota_X\\otimes\\eta)_\\mathds{1} =(\\iota_X\\otimes\\eta)_\\mathds{1}=\\eta_X,\n$$\nwhich implies the desired equality $\\Theta \\Lambda(\\eta)=\\eta$. \\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{tamnefunctor}] Consider an amenable unitary\ntensor functor $F\\colon\\mathcal{C}\\to\\mathcal{A}$.\nBy Lemma~\\ref{lweakamen} we know that\n$\\mathcal{C}$ is weakly amenable. By Lemma~\\ref{lmult2} and the definition of\ninvariant means, any right invariant mean for $F$ defines a strict\nunitary tensor functor $\\Lambda\\colon\\tilde\\mathcal{P}\\to\\mathcal{A}$, where\n$\\tilde\\mathcal{P}\\subset\\mathcal{P}$ is the full subcategory consisting of objects in\n$\\mathcal{C}$. Extend this functor to a unitary tensor functor\n$\\Lambda\\colon\\mathcal{P}\\to\\mathcal{A}$. Then $d^\\mathcal{A}(U)\\le d^\\mathcal{P}(U)$ for any object $U$\nin $\\mathcal{C}$, but since by Theorem~\\ref{tminimal} the dimension function\n$d^\\mathcal{P}\\Pi$ on $\\mathcal{C}$ is amenable, we conclude that\n$d^\\mathcal{A}(U)=d^\\mathcal{P}(U)=\\|\\Gamma_U\\|$.\n\n\\smallskip\n\nConversely, assume $\\mathcal{C}$ is weakly amenable and $F\\colon\\mathcal{C}\\to\\mathcal{A}$ is a\nunitary tensor functor such that $d^\\mathcal{A} F$ is the amenable dimension\nfunction. Then by Theorem~\\ref{tuniversal1} there exists a unitary\ntensor functor $\\Lambda\\colon\\mathcal{P}\\to\\mathcal{A}$ such that $\\Lambda\\Pi\\cong\nF$. 
By Proposition~\\ref{ppoissonamenability} there exists a right\ninvariant mean for the functor $\\Pi\\colon\\mathcal{C}\\to\\mathcal{P}$. Composing it with\nthe functor $\\Lambda$ we get a right invariant mean for $\\Lambda\\Pi$,\nfrom which we get a right invariant mean for $F$. \\end{proof}\n\nApplying Theorem~\\ref{tamnefunctor} to the identity functor we get a\ncharacterization of amenability of tensor categories in terms of\ninvariant means.\n\n\\begin{theorem} A rigid C$^*$-tensor category $\\mathcal{C}$ is amenable if and\nonly if the identity functor $\\mathcal{C}\\to\\mathcal{C}$ is amenable.\n\\end{theorem}\n\nNote that by the proof of Theorem~\\ref{tamnefunctor}, given an\namenable C$^*$-tensor category $\\mathcal{C}$, we can construct a right\ninvariant mean for the identity functor as follows. Choose an ergodic\nprobability measure $\\mu$ on $\\Irr(\\mathcal{C})$ and a free ultrafilter\n$\\omega$ on $\\mathbb{N}$. Then we can define\n$$\nm(\\eta)=\\lim_{n\\to\\omega}\\frac{1}{n}\\sum^{n-1}_{k=0}\nP^k_\\mu(\\eta)_\\mathds{1}.\n$$\nOn the other hand, the construction of a right invariant mean for a\nfunctor $F\\colon\\mathcal{C}\\to\\mathcal{A}$ such that $\\mathcal{C}$ is weakly amenable, but not\namenable, and $d^\\mathcal{A} F$ is the amenable dimension function, is more\nelusive, as it relies on the existence of a factorization of $F$\nthrough the Poisson boundary $\\mathcal{C}\\to\\mathcal{P}$.\n\n\\bigskip\n\n\\section{Amenability of quantum groups and subfactors}\n\nIn this section we apply some of our results to categories considered\nin the theory of compact quantum groups and subfactor theory.\n\n\\subsection{Quantum groups}\n\nLet $G$ be a compact quantum group. We follow the conventions\nof~\\cite{arXiv:1310.4407}. 
In particular, the algebra $\mathbb{C}[G]$ of\nregular functions on $G$ is a Hopf $*$-algebra, and by a finite\ndimensional unitary representation of $G$ we mean a unitary element\n$U\in B(H_U)\otimes\mathbb{C}[G]$, where $H_U$ is a finite dimensional Hilbert\nspace, such that $(\iota\otimes\Delta)(U)=U_{12}U_{13}$. Finite\ndimensional unitary representations form a rigid C$^*$-tensor category\n$\Rep G$, with the tensor product of $U$ and $V$ defined by\n$U_{13}V_{23}\in B(H_U)\otimes B(H_V)\otimes\mathbb{C}[G]$. The categorical\ndimension of $U$ is equal to the quantum dimension, given by the trace $\Tr(\rho_U)$ of the Woronowicz character.\n\n\smallskip\n\nFrom the universal property of the Poisson boundary it is easy to\ndeduce that the Poisson boundary of $\Rep G$ with respect to any\nergodic measure is the forgetful functor $\Rep G\to \Rep K$, where\n$K\subset G$ is the maximal quantum subgroup of $G$ of Kac type. This\nwill be discussed in detail in~\cite{classification}. Here we want\nonly to concentrate on various notions of amenability.\n\n\smallskip\n\nRecall that $G$ is called \emph{coamenable} if $\|\Gamma_U\|=\dim H_U$\nfor every finite dimensional unitary representation $U$. There are a\nnumber of equivalent conditions, but using this definition as our\nstarting point we immediately get that\n$$\n\Rep G \ \ \text{is amenable}\Leftrightarrow G\ \ \text{is coamenable,\n  and of Kac type}.\n$$\nIndeed, amenability of $\Rep G$ means precisely that $\|\Gamma_U\|=d(U)$\nfor all $U$, where $d$ is the categorical dimension. Since\n$U\mapsto\dim H_U$ is a dimension function on $\Rep G$, we have\n$\|\Gamma_U\|\le\dim H_U\le d(U)$, and $\dim H_U=d(U)$ for all $U$\nexactly when $G$ is of Kac type. Hence amenability of $\Rep G$ forces\n$\|\Gamma_U\|=\dim H_U=d(U)$, and the converse implication follows by\ncombining the same equalities.\n\nCoamenability of $G$ is known to be equivalent to amenability of the\ndual discrete quantum group~$\hat G$. 
Recall that the algebra of\nbounded functions on $\hat G$ is defined by $\ell^\infty(\hat\nG)=\ell^\infty\text{-}\oplus_{s\in\Irr(G)}B(H_s)$, and the coproduct\n${\hat\Delta}\colon\ell^\infty(\hat G)\to\ell^\infty(\hat\nG)\mathbin{\bar{\otimes}}\ell^\infty(\hat G)$ is defined by duality from the product\non $\mathbb{C}[G]$, if we view $\ell^\infty(\hat G)$ as a subspace of\n$\mathbb{C}[G]^*$ by associating to a functional $\omega\in\mathbb{C}[G]^*$ the\ncollection of operators $\pi_s(\omega)=(\iota\otimes\omega)(U_s)\in\nB(H_s)$, $s\in\Irr(G)$. The quantum group $\hat G$ is called amenable\nif there exists a right invariant mean on $\hat G$, that is, a state\n$m$ on $\ell^\infty(\hat G)$ such that\n$$\nm(\iota\otimes\phi){\hat\Delta}=\phi(1)m\ \ \text{for any normal linear\nfunctional}\ \phi\ \text{on}\ \ell^\infty(\hat G).\n$$\nThe restriction of such an invariant mean to $Z(\ell^\infty(\hat\nG))\cong\ell^\infty(\Irr(G))$ defines a right invariant mean on the\nfusion algebra of $\Rep G$ equipped with the quantum dimension\nfunction. Therefore\n$$\n\hat G \ \ \text{is amenable}\Rightarrow \Rep G\ \ \text{is weakly\namenable}.\n$$\n\nAmong various known characterizations of coamenability the implication\n($\hat G$ is amenable $\Rightarrow$ $G$ is coamenable) is probably the\nmost nontrivial. This was proved independently\nin~\cite{MR2276175}*{Theorem~3.8}\nand~\cite{MR2113848}*{Corollary~9.6}. 
We will show now that our\nresults on amenable functors are generalizations of this.\n\n\begin{theorem} If $\hat G$ is amenable, then the forgetful functor\n$F\colon\Rep G\to\mathrm{Hilb}_f$ is amenable, and therefore $G$ is\ncoamenable.\n\end{theorem}\n\n\begin{proof} We will only consider the case when $\Irr(G)$ is at most\ncountable, so that $\Rep G$ satisfies our standing assumptions; the\ngeneral case can be easily deduced from this.\n\nAs discussed in \cite{arXiv:1310.4407}*{Section~4.1}, the space\n$\hat{\mathcal{C}}(U, V)$ can be identified with the space of elements\n$$\n\eta\in \ell^\infty(\hat G)\otimes B(H_U,H_V)\ \ \text{such that}\ \\nV^*_{31}(\alpha\otimes\iota)(\eta)U_{31}=1\otimes \eta,\n$$\nwhere $\alpha\colon \ell^\infty(\hat G)\to L^\infty(G)\bar\otimes\n\ell^\infty(\hat G)$ is the left adjoint action of $G$. Under this\nidentification we have\n$$\n\iota_Y\otimes\eta=(\iota\otimes\pi_Y\otimes\iota)({\hat\Delta}\otimes\iota)(\eta),\n$$\nwhere $\pi_Y\colon\ell^\infty(\hat G)\to B(H_Y)$ is the representation\ndefined by $Y$, while the element $\eta\otimes\iota_Y$ has the obvious\nmeaning. From this we immediately see that if $m$ is a right invariant\nmean on $\hat G$, then the maps $m\otimes\iota\colon \ell^\infty(\hat\nG)\otimes B(H_U,H_V)\to B(H_U,H_V)$ define a right invariant mean for\n$F$. Thus $F$ is amenable. By Theorem~\ref{tamnefunctor} we conclude\nthat $\|\Gamma_U\|=\dim F(U)=\dim H_U$ for every $U$, so $G$ is\ncoamenable. \end{proof}\n\n\subsection{Subfactor theory}\n\nLet $N\subset M$ be a finite index inclusion of II$_1$-factors. Denote\nby $\tau$ the tracial state on $M$, and by $E$ the trace-preserving\nconditional expectation $M\to N$. We denote $[M : N]=\Ind E$, and\nthe minimal index of $N \subset M$ by $[M : N]_0$. 
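Recall also the standard facts that the minimal index is multiplicative and that it is preserved under passing to the basic extension: if $M_1$ is the basic\nextension of $N\subset M$, then $[M_1 : M]_0=[M : N]_0$, so that\n$$\n[M_1 : N]_0=[M_1 : M]_0\,[M : N]_0=[M : N]_0^2.\n$$\n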
Put $M_{-1}=N$,\n$M_0=M$, and choose a tunnel\n$$\n\\dots\\subset M_{-3}\\subset M_{-2}\\subset M_{-1}\\subset M_0,\n$$\nso that $M_{-n+1}$ is the basic extension of $M_{-n-1}\\subset M_{-n}$\nfor all $n\\ge1$. For every $j\\le 1$ denote by $M_j^\\mathrm{st}\\subset M_j$ the\n$s^*$-closure of $\\cup_{n\\ge1}(M_{j-n}'\\cap M_j)$ with respect to the\nrestriction of $\\tau$. The inclusion $N^\\mathrm{st}\\subset M^\\mathrm{st}$ of finite\nvon Neumann algebras is called a standard model of $N\\subset\nM$~\\cite{MR1278111}.\n\nLet $\\mathcal{B}_N(M)$ be the full C$^*$-tensor subcategory of the category\n$\\mathrm{Hilb}_N$ of Hilbert bimodules over~$N$ generated by~$L^2(M)$.\nLet $M_1$ be the basic extension of $N\\subset M$, so that\n$\\Endd_{N\\mhyph N}(L^2(M))\\cong N'\\cap M_1$. The embedding $N \\to M_1$\ninduces a morphism $L^2(N) \\to L^2(M) \\otimes_N L^2(M)$ in $\\mathcal{B}_N(M)$,\nwhich defines a solution of the conjugate equations for $L^2(M)$ up to a scalar normalization. Moreover, it can be shown (compare with Proposition~\\ref{pminindex}) that the categorical trace corresponds to the minimal conditional expectation $M_1\\to N$, and consequently $d(L^2(M))=[M_1 : N]^{1\/2}_0=[M : N]_0$. It is also known, see Proposition~\\ref{pocneanu}, that the inductive system of the algebras\n$\\Endd_{N\\mhyph N}(L^2(M)^{\\otimes_Nn})$, with respect to the embeddings\n$T\\mapsto \\iota_{L^2(M)}\\otimes T$, can be identified with\n$(M_{-2n+1}'\\cap M_1)_{n\\ge1}$ in such a way that the shift\nendomorphism $T\\mapsto T\\otimes\\iota_{L^2(M)}$ of\n$\\cup_{n\\ge1}\\Endd_{N\\mhyph N}(L^2(M)^{\\otimes_Nn})$ corresponds to the\nendomorphism $\\gamma^{-1}$ of $\\cup_{n\\ge1}(M_{-2n+1}'\\cap M_1)$,\nwhere $\\gamma$ is the canonical shift.\n\nThe normalized categorical trace on $\\Endd_{N\\mhyph N}(L^2(M))$ defines a\nprobability measure~$\\mu_\\mathrm{st}$ on the set of isomorphism classes of\nsimple submodules of $L^2(M)$. 
More explicitly, it can be shown that\nthe value of the normalized categorical trace on any minimal\nprojection $p\in N'\cap M_1$ equals\n$$\n(\tau(p)\tau'(p))^{1\/2}\frac{[M : N]}{[M : N]_0},\n$$\nwhere $\tau'$ is the unique tracial state on $N'\subset B(L^2(M))$. See~\cite{MR976765}*{Section~2}\nand~\cite{MR1278111}*{Section~1.3.6} for related results. Then the\nmeasure~$\mu_\mathrm{st}$ is defined by\n$$\n\mu_\mathrm{st}([pL^2(M)])=m_p(\tau(p)\tau'(p))^{1\/2}\frac{[M :\nN]}{[M : N]_0},\n$$\nwhere $m_p$ is the multiplicity of $pL^2(M)$ in $L^2(M)$.\n\nRecall that an inclusion for which $[M : N]=[M : N]_0$ is called\nextremal. From the above considerations we see that, unless $N\subset M$ is\nextremal, the categorical trace defines a tracial state on\n$\cup_{n\ge1}(M_{-2n+1}'\cap M_1)$ that is different from~$\tau$.\n\n\smallskip\n\nLet us first review what our results say about $(\mathcal{B}_N(M),\mu_\mathrm{st})$ for extremal inclusions. From the identification of $\cup_{n\ge1}\Endd_{N\mhyph N}(L^2(M)^{\otimes_Nn})$ with $\cup_{n\ge1}(M_{-2n+1}'\cap M_1)$ we conclude that the von Neumann algebra $\mathcal{N}_{L^2(M)}$ constructed in Section~\ref{sec:longo-roberts-appr} is\nisomorphic to $M_1^\mathrm{st}$. More precisely, we take $V=L^2(M)$ for the\nconstruction of~$\mathcal{N}_{L^2(M)}$, so unless $N'\cap M_1$ is abelian, we\napply the modification of our construction of the algebras~$\mathcal{N}_U$\ndiscussed in Remark~\ref{rmultiplicity}. The subalgebra\n$\mathcal{N}\subset\mathcal{N}_{L^2(M)}$ then corresponds to $N^\mathrm{st}=\gamma^{-1}(M_1^\mathrm{st})\subset\nM_1^\mathrm{st}$. In particular, $N^\mathrm{st}$ is a factor if and only if $\mu_\mathrm{st}$ is ergodic. 
Proposition~\\ref{prop:p-bdry-as-rel-comm-LR0}\ntranslates into the following statement, which is closely related to a\nresult of Izumi~\\cite{MR2059809}.\n\n\\begin{proposition} \\label{pIzumi} Let $\\Pi\\colon\\mathcal{B}_N(M)\\to\\mathcal{P}$ be the Poisson boundary of $(\\mathcal{B}_N(M),\\mu_\\mathrm{st})$. Then, assuming that $N\\subset M$ is extremal, we have\n$$\n\\mathcal{P}(L^2(M))\\cong ({N^\\mathrm{st}})'\\cap M_1^\\mathrm{st}.\n$$\n\\end{proposition}\n\nMore generally, by the same argument we have\n$\\mathcal{P}(L^2(M)^{\\otimes_Nn})\\cong ({M_{-2n+1}^\\mathrm{st}})'\\cap M_1^\\mathrm{st}$. Since\n$L^2(M)$ contains a copy of the unit object induced by the inclusion\n$N \\to M$, we have $\\mu_\\mathrm{st}(e)>0$. Hence the supports of $\\mu^n_\\mathrm{st}$\nare increasing, and therefore the isomorphisms\n$\\mathcal{P}(L^2(M)^{\\otimes_Nn})\\cong ({M_{-2n+1}^\\mathrm{st}})'\\cap M_1^\\mathrm{st}$\ncompletely describe the morphisms in the category $\\mathcal{P}$. In fact,\nrecalling that $M_1^\\mathrm{st}$ is the basic extension of $N^\\mathrm{st}\\subset\nM^\\mathrm{st}$, see ~\\cite{MR1278111}*{Section~1.4.3}, we may conclude that\n$\\mathcal{P}$ can be identified with $\\mathcal{B}_{N^\\mathrm{st}}(M^\\mathrm{st})$. We leave it to the\ninterested reader to find a good description of the functor $\\Pi\\colon\n\\mathcal{B}_N(M)\\to\\mathcal{B}_{N^\\mathrm{st}}(M^\\mathrm{st})$.\n\n\\smallskip\n\nConsider the principal graph $\\Gamma_{N,M}$ of $N\\subset M$. Then\n$\\Gamma_{L^2(M)}$ can be identified with $\\Gamma_{M,M_1}\n\\Gamma_{M,M_1}^t$. 
Recall also that we have the equality\n$\\|\\Gamma_{N,M}\\| = \\|\\Gamma_{M,M_1}\\|$\nby~\\cite{MR1278111}*{Section~1.3.5}.\n\nTurning now to Theorem~\\ref{tminimal} and Proposition~\\ref{pminindex},\nwe get the following result (again, to be more precise we use the\nmodification of the construction of~$\\mathcal{N}_U$ described in\nRemark~\\ref{rmultiplicity}).\n\n\\begin{theorem} \\label{tPopaext} Assume $N \\subset M$ is extremal and\n $N^\\mathrm{st}$ is a factor. Then we have\n$$\n\\|\\Gamma_{N,M}\\|^4=[M_1^\\mathrm{st} : N^\\mathrm{st}]_0.\n$$\n\\end{theorem}\n\nIf $M^\\mathrm{st}$ is also a factor, this can of course be formulated as\n$\\|\\Gamma_{N,M}\\|^2=[M^\\mathrm{st} : N^\\mathrm{st}]_0$.\n\n\\smallskip\n\nApplying Theorem~\\ref{tFKVR} we get the following result, which recovers part of Popa's characterization of extremal subfactors with strongly amenable standard invariant~\\cite{MR1278111}*{Theorem~5.3.1}.\n\n\\begin{theorem}\\label{thm:amen-equiv-high-rel-comm-std-mdl}\n Assume $N \\subset M$ is extremal. The following conditions are\n equivalent:\n\\begin{enumerate}\n\\item $N^\\mathrm{st}$ is a factor and $\\|\\Gamma_{N,M}\\|^2=[M : N]_0$;\n\\item $(M^\\mathrm{st}_{-2n+1})'\\cap M_1^\\mathrm{st}=M_{-2n+1}'\\cap M_1$ for all\n$n\\ge1$.\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof} As we already observed, the condition that $N^\\mathrm{st}$ is a factor in (i) means exactly that the measure $\\mu_\\mathrm{st}$ is ergodic. The condition $\\|\\Gamma_{N,M}\\|^2=[M : N]_0$ means that $\\|\\Gamma_{L^2(M)}\\|=d(L^2(M))$. 
Since the module $L^2(M)$ is self-dual and generates $\\mathcal{B}_N(M)$, this condition is equivalent to the amenability of $\\mathcal{B}_N(M)$.\n\nOn the other hand, by Proposition~\\ref{pIzumi} and its extension to\nthe modules $L^2(M)^{\\otimes_Nn}$ discussed above, condition (ii) is\nequivalent to triviality of the Poisson boundary of\n$(\\mathcal{B}_N(M),\\mu_\\mathrm{st})$.\n\nThis shows that the equivalence of (i) and (ii) is indeed a\nconsequence of Theorem~\\ref{tFKVR}. \\end{proof}\n\nIf we write the proof of the implication (ii)$\\Rightarrow$(i) in terms\nof the algebras $M_{-2n+1}'\\cap M_1$ instead of\n$\\Endd_{N\\mhyph N}(L^2(M)^{\\otimes_Nn})$, we get an argument similar to Popa's\nproof based on~\\cite{MR1111570}, which was our inspiration. On the\nother hand, our proof of (i)$\\Rightarrow$(ii) seems to be very\ndifferent from Popa's arguments.\n\n\\smallskip\n\nNext, let us comment on the nonextremal case. One possibility is to consider the completion of $\\cup_{n\\ge1}(M_{j-n}'\\cap M_j)$ with respect to the trace induced by the minimal conditional expectation (that is, the categorical trace) instead of $\\tau$. Then all the above statements continue to hold if we replace $N^\\mathrm{st}$ and $M^\\mathrm{st}$ by the corresponding new von Neumann\nalgebras. Note that the inclusion $N^\\mathrm{st}\\subset M^\\mathrm{st}$ defined this way is the standard model in the\nconventions of~\\cite{MR1339767}. Then, for example, the implication (i)$\\Rightarrow$(ii) in Theorem~\\ref{thm:amen-equiv-high-rel-comm-std-mdl}\ncorresponds to~\\cite{MR1339767}*{Lemma~5.2}.\n\nBut some results, notably~Theorem~\\ref{tPopaext}, continue to hold for the\ninclusion $N^\\mathrm{st} \\subset M^\\mathrm{st}$ defined with respect to $\\tau$ in the\nnonextremal case also. 
The proof goes in basically the same way as in the extremal case, by noting that the proof of the inequality $\|\Gamma_U\|^2\ge [\mathcal{N}_U :\mathcal{N}]_0$ in Theorem~\ref{tminimal} did not depend on how exactly the inductive limit of the algebras $\mathcal{C}(V^{\otimes n}\otimes U)$ was completed to\nget the factors $\mathcal{N}\subset\mathcal{N}_U$. Therefore we have\n$$\n\|\Gamma_{N,M}\|^4\ge [M_1^\mathrm{st} : N^\mathrm{st}]_0.\n$$\nThe opposite inequality can be proved either by realizing that the\ndimension function on $\mathcal{B}_{N^\mathrm{st}}(M_1^\mathrm{st})$ defines a dimension\nfunction on $\mathcal{B}_N(M_1)$, or by the following string of\n(in)equalities:\n$$\n\|\Gamma_{N,M}\|^4=\|\Gamma_{N,M_1}\|^2\le\n\|\Gamma_{N^\mathrm{st},M_1^\mathrm{st}}\|^2 \le [M_1^\mathrm{st} : N^\mathrm{st}]_0,\n$$\ncompare with~\cite{MR1278111}*{p.~235}. We remark that from this one\ncan easily obtain the implication (ii)$\Rightarrow$(vii)\nin~\cite{MR1278111}*{Theorem~5.3.2} promised in~\cite{MR1278111}.\n\n\bigskip\n\n