diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzbkbi" "b/data_all_eng_slimpj/shuffled/split2/finalzzbkbi" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzbkbi" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nA finite configuration of points on the unit sphere $S^{n-1}$ in $\\mathbb{R}^n$\nis \\emph{balanced} if it is in equilibrium (possibly unstable) under\nall pairwise forces depending only on distance, assuming the points are\nconfined to the surface of the sphere. In other words, the net forces\nacting on the points are all orthogonal to the sphere. As is usual in\nphysics, any two distinct particles exert forces on each other,\ndirected oppositely and with magnitude equal to some fixed function of\nthe Euclidean distance between them. The net force on each point is the\nsum of the contributions from the other points.\n\nFor example, the vertices of any regular polyhedron are balanced. On\nthe other hand, most configurations are not balanced. Even if some\npoints are in equilibrium under one force law, there is no reason to\nexpect that they will be in equilibrium under every force law, and\nusually they will not be. The balanced configurations are quite\nremarkable.\n\nThe condition of being balanced was defined by Leech in \\cite{Leech}.\nIt arises in the search for energy-minimizing point configurations on\nspheres. Given a potential function, typically an inverse-power law,\nhow should we arrange some particles to minimize the total potential\nenergy? This problem originated in Thomson's model of the atom in\n\\cite[p.~255]{T}. Of course, that model was superseded by quantum\nmechanics, but it remains of considerable mathematical interest. It\nprovides a natural measure of how well distributed points are on the\nsurface of the sphere, and it also offers the possibility of\ncharacterizing important or beautiful configurations via extremal\nproperties.\n\nIn most cases the optimal configuration depends on the potential\nfunction, but occasionally it does not. In \\cite{CK}, Cohn and Kumar\nintroduced the concept of \\emph{universally optimal} configurations,\nwhich minimize energy not only under all inverse-power laws but also\nunder the broader class of completely monotonic potential functions (as\nfunctions of squared Euclidean distance). In $\\mathbb{R}^2$ the vertices of any\nregular polygon form a universally optimal configuration. The vertex\nsets of the regular tetrahedron, octahedron, or icosahedron are\nuniversally optimal, but there are no larger examples in $\\mathbb{R}^3$.\nHigher-dimensional examples include the vertices of the regular simplex\nand cross polytope (or hyperoctahedron), and also various exceptional\nexamples, notably the vertices of the regular $600$-cell, the $E_8$\nroot system, the Schl\\\"afli configuration of $27$ points in $\\mathbb{R}^6$\ncorresponding to the $27$ lines on a cubic surface, and the minimal\nvectors of the Leech lattice. A number of the sporadic finite simple\ngroups act on universal optima. See Tables~1 and~2 in \\cite{BBCGKS} for\na list of the known and conjectured universal optima, as well as a\ndiscussion of how many more there might be. (They appear to be quite\nrare.)\n\nEvery universal optimum is balanced (as we will explain below), but\nbalanced configurations do not necessarily minimize energy even\nlocally. In the space of configurations, balanced configurations are\nuniversal critical points for energy, but they are frequently saddle\npoints. 
For example, the vertices of a cube are balanced but one can\nlower the energy by rotating the vertices of a facet. Nevertheless,\nbeing balanced is an important necessary condition for universal\noptimality.\n\nThe simplest reason a configuration would be balanced is due to its\nsymmetry: the net forces inherit this symmetry, which can constrain\nthem to point orthogonally to the sphere. More precisely, call a finite\nsubset $\\mathcal{C} \\subset S^{n-1}$ \\emph{group-balanced} if for every\n$x \\in \\mathcal{C}$, the stabilizer of $x$ in the isometry group of\n$\\mathcal{C}$ fixes no vectors in $\\mathbb{R}^n$ other than the multiples of\n$x$. A group-balanced configuration must be balanced, because the net\nforce on $x$ is invariant under the stabilizer of $x$ and is thus\northogonal to the sphere.\n\nIn his 1957 paper \\cite{Leech}, Leech completely classified the\nbalanced configurations in $S^2$. His classification shows that they\nare all group-balanced, and in fact the complete list can be derived\neasily from this assertion using the classification of finite subgroups\nof $O(3)$. However, Leech's proof is based on extensive case analysis,\nand it does not separate cleanly in this way. Furthermore, the\ntechniques do not seem to apply to higher dimensions.\n\nIt is natural to wonder whether all balanced configurations are\ngroup-balanced in higher dimensions. If true, that could help explain\nthe symmetry of the known universal optima. However, in this paper we\nshow that balanced configurations need not be group-balanced. Among\nseveral counterexamples, we construct a configuration of $25$ points in\n$\\mathbb{R}^{12}$ that is balanced yet has no nontrivial symmetries.\n\nThis result is compatible with the general philosophy that it is\ndifficult to find conditions that imply symmetry in high dimensions,\nshort of simply imposing the symmetry by fiat. We prove that if a\nconfiguration is a sufficiently strong spherical design, relative to\nthe number of distances between points in it, then it is automatically\nbalanced (see Theorem~\\ref{theorem:main}). Every spectral embedding of\na strongly regular graph satisfies this condition (see\nSection~\\ref{section:srg}). There exist strongly regular graphs with no\nnontrivial symmetries, and their spectral embeddings are balanced but\nnot group-balanced.\n\nBefore we proceed to the proofs, it is useful to rephrase the condition\nof being balanced as follows: a configuration $\\mathcal{C}$ is balanced\nif and only if for every $x \\in \\mathcal{C}$ and every real number $u$,\nthe sum $S_u(x)$ of all $y \\in \\mathcal{C}$ whose inner product with\n$x$ is $u$ is a multiple of~$x$. The reason is that the contribution to\nthe net force on~$x$ from the particles at a fixed distance is in the\nspan of $x$ and $S_u(x)$. Since we are using arbitrary force laws, each\ncontribution from a fixed distance must independently be orthogonal to\nthe sphere (since we can weight them however we desire). Note that a\ngroup-balanced configuration $\\mathcal{C}$ clearly satisfies this\ncriterion: for every $x \\in \\mathcal{C}$ and every real number $u$, the\nsum $S_u(x)$ is itself fixed by the stabilizer of $x$ and hence must be\na multiple of $x$.\n\nAn immediate consequence of this characterization of balanced\nconfigurations is that it is easy to check whether a given\nconfiguration is balanced. By contrast, it seems difficult to check\nwhether a configuration is universally optimal. 
For example, the paper\n\\cite{BBCGKS} describes a $40$-point configuration in $\\mathbb{R}^{10}$ that\nappears to be universally optimal, but so far no proof is known.\n\nSo far we have not explained why universal optima must be balanced. Any\noptimal configuration must be in equilibrium under the force laws\ncorresponding to the potential functions it minimizes, but no\nconfiguration could possibly minimize all potential functions\nsimultaneously (universal optima minimize a large but still restricted\nclass of potentials). The explanation is that a configuration is\nbalanced if and only if it is balanced for merely the class of\ninverse-power force laws. In the latter case, we cannot weight the\nforce contributions from different distances independently. However, as\nthe exponent of the force law tends to infinity, the force contribution\nfrom the shortest distance will dominate unless it acts orthogonally to\nthe sphere. This observation can be used to isolate each force\ncontribution in order by distance. Alternatively, we can argue that\nthe configuration is balanced under any linear combination of\ninverse-power laws and hence any polynomial in the reciprocal of\ndistance. We can then isolate any single distance by choosing that\npolynomial to vanish at all the other distances.\n\n\\section{Spherical designs}\n\nRecall that a \\emph{spherical $t$-design} in $S^{n-1}$ is a (non-empty)\nfinite subset $\\mathcal{C}$ of $S^{n-1}$ such that for every polynomial\n$p \\colon \\mathbb{R}^n \\to \\mathbb{R}$ of total degree at most~$t$, the average of $p$\nover $\\mathcal{C}$ equals its average over all of $S^{n-1}$. In other\nwords,\n$$\n\\frac{1}{|\\mathcal{C}|}\\sum_{x \\in \\mathcal{C}} p(x)\n= \\frac{1}{\\mathop{\\textup{vol}}(S^{n-1})} \\int_{S^{n-1}} p(x) \\, d\\mu(x),\n$$\nwhere $\\mu$ denotes the surface measure on $S^{n-1}$ and\n$\\mathop{\\textup{vol}}(S^{n-1})$ is of course not the volume of the enclosed ball but\nrather $\\int_{S^{n-1}} d\\mu(x)$.\n\n\\begin{theorem} \\label{theorem:main}\nLet $\\mathcal{C} \\subset S^{n-1}$ be a spherical $t$-design. If for\neach $x \\in \\mathcal{C}$, $$|\\{\\langle x,y \\rangle : y \\in \\mathcal{C},\ny \\ne \\pm x\\}| \\le t,$$ then $\\mathcal{C}$ is balanced.\n\\end{theorem}\n\nHere, $\\langle\\cdot,\\cdot\\rangle$ denotes the usual Euclidean inner\nproduct.\n\n\\begin{proof}\nLet $x$ be any element of $\\mathcal{C}$, and let $u_1,\\dots,u_k$ be the\ninner products between $x$ and the elements of $\\mathcal{C}$ other than\n$\\pm x$. By assumption, $k \\le t$. We wish to show that for each $i$,\nthe sum $S_{u_i}(x)$ of all $z \\in \\mathcal{C}$ such that $\\langle z,x\n\\rangle = u_i$ is a multiple of $x$.\n\nGiven any vector $y \\in \\mathbb{R}^n$ and integer $i$ satisfying $1 \\le i \\le\nk$, define the degree $k$ polynomial $p \\colon \\mathbb{R}^n \\to \\mathbb{R}$ by\n$$\np(z) = \\langle y,z \\rangle \\prod_{j \\,:\\, 1 \\le j \\le k, \\, j \\ne i}\n \\big(\\langle x,z \\rangle - u_j\\big).\n$$\n\nSuppose now that $y$ is orthogonal to $x$. Then the average of $p$ over\n$S^{n-1}$ vanishes, because on the cross sections of the sphere on\nwhich $\\langle x,z \\rangle$ is constant, each factor $\\langle x,z\n\\rangle - u_j$ is constant, while $\\langle y,z \\rangle$ is an odd\nfunction on such a cross section. 
More precisely, under the map $z\n\\mapsto 2 \\langle x,z \\rangle x - z$ (which preserves the component\nof~$z$ in the direction of~$x$ and multiplies everything orthogonal to\n$x$ by $-1$), the inner product with~$x$ is preserved while the inner\nproduct with~$y$ is multiplied by~$-1$. Since $\\mathcal{C}$ is a\n$t$-design, it follows that the sum of $p(z)$ over $z \\in \\mathcal{C}$\nalso vanishes.\n\nMost of the terms in this sum vanish: when $z = \\pm x$, we have\n$\\langle y,z \\rangle = 0$, and when $\\langle x,z \\rangle = u_j$ the\nproduct vanishes unless $j=i$. It follows that the sum of $p(z)$ over\n$z \\in \\mathcal{C}$ equals\n$$\n\\sum_{z \\in \\mathcal{C} \\,:\\, \\langle z,x \\rangle = u_i}\n\\langle y,z \\rangle\n\\prod_{j \\,:\\, 1 \\le j \\le k, \\, j \\ne i} (u_i - u_j)\n=\n\\left(\\prod_{j \\,:\\, 1 \\le j \\le k, \\, j \\ne i} (u_i - u_j)\\right)\n\\big\\langle y, S_{u_i}(x) \\big\\rangle.\n$$\n\nBecause the first factor is nonzero, we conclude that $S_{u_i}(x)$ must\nbe orthogonal to $y$. Because this holds for all $y$ orthogonal\nto~$x$, it follows that $S_{u_i}(x)$ is a multiple of $x$, as desired.\n\\end{proof}\n\n\\emph{Examples.} The vertices of a cube form a spherical $3$-design,\nand only two inner products other than $\\pm1$ occur between them, so\nTheorem~\\ref{theorem:main} implies that the cube is balanced. On the\nother hand, not every group-balanced configuration satisfies the\nhypotheses of the theorem. For example, the configuration in $S^2$\nformed by the north and south poles and a ring of $k$ equally spaced\npoints around the equator is group-balanced, but it is not even a\n$2$-design if $k \\ne 4$. In Section~\\ref{section:srg} we will show\nthat Theorem~\\ref{theorem:main} applies to some configurations that are\nnot group-balanced, so the two sufficient conditions for being balanced\nare incomparable.\n\n\\section{Counterexamples from strongly regular graphs}\n\\label{section:srg}\n\nEvery spectral embedding of a strongly regular graph is both a\nspherical $2$-design and a $2$-distance set, so by\nTheorem~\\ref{theorem:main} they are all balanced. Recall that to form\na spectral embedding of a strongly regular graph with $N$ vertices, one\northogonally projects the standard orthonormal basis of $\\mathbb{R}^N$ to a\nnontrivial eigenspace of the adjacency matrix of the graph. See\nSections~2 and~3 of \\cite{CGS} for a brief review of the theory of\nspectral embeddings. Theorem~4.2 in \\cite{CGS} gives the details of the\nresult that every spectral embedding is a $2$-design, a fact previously\nnoted as part of Example~9.1 in \\cite{DGS}.\n\nThe symmetry group of such a configuration is simply the combinatorial\nautomorphism group of the graph, so it suffices to find a strongly\nregular graph with no nontrivial automorphisms. According to Brouwer's\ntables \\cite{B1}, the smallest such graph is a $25$-vertex graph with\nparameters $(25,12,5,6)$ (the same as those of the Paley graph for the\n\\hbox{$25$-element} field), which has a spectral embedding in\n$\\mathbb{R}^{12}$. 
See Figure~\\ref{figure:srg25} for an adjacency matrix.\nVerifying that this graph has no automorphisms takes a moderate amount\nof calculation, best done by computer.\n\n\\begin{figure}\n{\\tiny \\begin{center}\n\\begin{tabular}{ccccccccccccccccccccccccc}\n0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0\\\\\n1 0 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0\\\\\n1 1 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1\\\\\n1 1 1 0 0 0 0 1 1 1 0 0 0 1 1 1 0 0 0 1 1 1 0 0 0\\\\\n1 1 1 0 0 0 0 1 0 0 1 1 0 1 0 0 1 1 0 1 0 0 1 1 0\\\\\n1 1 1 0 0 0 0 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1\\\\\n1 1 1 0 0 0 0 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1\\\\\n1 0 0 1 1 0 0 0 1 1 1 0 0 1 0 1 0 0 1 0 0 0 1 1 1\\\\\n1 0 0 1 0 1 0 1 0 1 0 1 0 0 1 0 1 1 0 0 0 1 1 0 1\\\\\n1 0 0 1 0 0 1 1 1 0 0 0 1 0 0 0 1 1 1 1 1 0 0 1 0\\\\\n1 0 0 0 1 1 0 1 0 0 0 1 1 1 1 0 0 0 1 0 1 1 0 1 0\\\\\n1 0 0 0 1 0 1 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 1 0 0\\\\\n1 0 0 0 0 1 1 0 0 1 1 1 0 1 0 1 1 0 0 1 1 0 0 0 1\\\\\n0 1 0 1 1 0 0 1 0 0 1 0 1 0 0 1 1 1 0 0 1 1 0 0 1\\\\\n0 1 0 1 0 1 0 0 1 0 1 1 0 0 0 1 1 0 1 1 0 1 0 1 0\\\\\n0 1 0 1 0 0 1 1 0 0 0 1 1 1 1 0 0 0 1 1 0 0 1 0 1\\\\\n0 1 0 0 1 1 0 0 1 1 0 0 1 1 1 0 0 1 0 1 0 0 0 1 1\\\\\n0 1 0 0 1 0 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 1 1 0 0\\\\\n0 1 0 0 0 1 1 1 0 1 1 0 0 0 1 1 0 1 0 0 1 0 1 1 0\\\\\n0 0 1 1 1 0 0 0 0 1 0 1 1 0 1 1 1 0 0 0 1 0 1 1 0\\\\\n0 0 1 1 0 1 0 0 0 1 1 0 1 1 0 0 0 1 1 1 0 1 1 0 0\\\\\n0 0 1 1 0 0 1 0 1 0 1 1 0 1 1 0 0 1 0 0 1 0 0 1 1\\\\\n0 0 1 0 1 1 0 1 1 0 0 1 0 0 0 1 0 1 1 1 1 0 0 0 1\\\\\n0 0 1 0 1 0 1 1 0 1 1 0 0 0 1 0 1 0 1 1 0 1 0 0 1\\\\\n0 0 1 0 0 1 1 1 1 0 0 0 1 1 0 1 1 0 0 0 0 1 1 1 0\n\\end{tabular}\n\\end{center}}\n\\caption{An adjacency matrix of a (25,12,5,6) strongly regular graph\nwith no nontrivial automorphisms.}\\label{figure:srg25}\n\\end{figure}\n\nIn fact, there are two such graphs with no nontrivial automorphisms\n(the other is the complement of the graph in\nFigure~\\ref{figure:srg25}). Paulus classified the $(25,12,5,6)$\nstrongly regular graphs in \\cite{P}; unfortunately, his paper was never\npublished. There are fifteen such graphs, whose automorphism groups\nhave a variety of sizes: two have order $1$, four have order $2$, two\nhave order $3$, four have order $6$, two have order $72$, and one has\norder $600$ (the Paley graph). See \\cite{B2} for more information.\n\nThe Paulus graphs give the lowest-dimensional balanced configurations\nwe have found that have trivial symmetry groups. However, there are\nlower-dimensional counterexamples (with some symmetry but not enough to\nbe group-balanced). The lowest-dimensional one we have constructed is\nin $\\mathbb{R}^7$, and it can be built as follows; fortunately, no computer\ncalculations are needed.\n\nLet $\\mathcal{C}_n$ consist of the $n(n+1)\/2$ midpoints of the edges of\na regular simplex in $\\mathbb{R}^n$ (scaled so that $\\mathcal{C}_n \\subset\nS^{n-1}$). This configuration is a $2$-distance set, with the\ndistances corresponding to whether the associated edges of the simplex\nintersect or not. 
To compute the inner products, note that if\n$x_1,\\dots,x_{n+1}$ are the vertices of a regular simplex with\n$|x_i|^2=1$ for all $i$ (and hence $\\langle x_i,x_j\\rangle=-1\/n$ for $i\n\\ne j$), then for $i \\ne j$ and $k \\ne \\ell$,\n$$\n\\big\\langle x_i+x_j,x_k+x_\\ell \\big\\rangle = \\begin{cases}\n2-2\/n & \\textup{if $\\{i,j\\} = \\{k,\\ell\\}$,}\\\\\n1-3\/n & \\textup{if $|\\{i,j\\} \\cap \\{k,\\ell\\}|=1$, and}\\\\\\\n-4\/n & \\textup{if $\\{i,j\\} \\cap \\{k,\\ell\\} = \\emptyset$.}\n\\end{cases}\n$$\nThus, when we renormalize the vectors $x_i+x_j$ to lie on the unit\nsphere, we find that the inner products between them are\n$(1-3\/n)\/(2-2\/n) = (n-3)\/(2n-2)$ and $-(4\/n)\/(2-2\/n) = -2\/(n-1)$.\n\nFor $n>3$, the symmetry group of $\\mathcal{C}_n$ is the same as that of\nthe original simplex (namely the symmetric group on the vertices of the\nsimplex). Clearly, that group acts on $\\mathcal{C}_n$. To see that\n$\\mathcal{C}_n$ has no other symmetries, we will show that the original\nsimplex can be constructed from it in such a way as to be preserved by\nall symmetries of $\\mathcal{C}_n$. Specifically, consider the subsets\nof $\\mathcal{C}_n$ of size $n$ in which all pairs of points are at the\nminimal distance; the sums of these subsets are proportional to the\nvertices of the original simplex. To see why, note that such a subset\ncorresponds to a collection of $n$ pairwise intersecting edges of the\noriginal simplex. They must be exactly the edges containing one of the\nvertices of the simplex: once two intersecting edges are specified,\nonly one other edge can intersect both without containing their common\nvertex, so at most three edges can intersect pairwise without\ncontaining a common vertex. (Note that this conclusion genuinely\nrequires that $n>3$, because $\\mathcal{C}_3$ is an octahedron, which\nhas more symmetry than the tetrahedron from which it was derived.)\n\nWhen $n=7$ the inner products in $\\mathcal{C}_7$ are simply $\\pm 1\/3$.\nThe coincidence that these inner products are negatives of each other\nis deeper than it appears, and it plays a role in several useful\nconstructions. For example, the union of $\\mathcal{C}_7$ and its\nantipode $-\\mathcal{C}_7$ is a $3$-distance set, while in other\ndimensions it would be a $5$-distance set. In fact, $\\mathcal{C}_7\n\\cup (-\\mathcal{C}_7)$ is the unique $56$-point universal optimum in\n$\\mathbb{R}^7$, and it is invariant under the Weyl group of~$E_7$. We will\nmake use of the unusual inner products in $\\mathcal{C}_7$ to construct\na modification of it that is balanced but not group-balanced.\n\nWithin $\\mathcal{C}_7$, there are regular tetrahedra (i.e., quadruples\nof points with all inner products $-1\/3$). Geometrically, such a\ntetrahedron corresponds to a set of four disjoint edges in the original\nsimplex, and there is a unique such set up to symmetry, since the\nsimplex in $\\mathbb{R}^7$ has eight vertices and all permutations of these\nvertices are symmetries. Choose a set of four disjoint edges and call\nthem the distinguished edges.\n\nWe now define a modified configuration $\\mathcal{C}_7'$ by replacing\neach point in this tetrahedron by its antipode. Replacing the regular\ntetrahedron preserves the $2$-design property, because the tetrahedron\nis itself a $2$-design within the $2$-sphere it spans. In particular,\nfor every polynomial of total degree at most $2$, its sum over the\noriginal tetrahedron is the same as its sum over the antipodal\ntetrahedron. 
Furthermore, when we replace the tetrahedron, all inner\nproducts remain $\\pm 1\/3$ (some are simply multiplied by $-1$). Thus,\nthe resulting configuration $\\mathcal{C}'_7$ remains both a\n$2$-distance set and a $2$-design, so it is balanced by\nTheorem~\\ref{theorem:main}.\n\nHowever, the process of inverting a tetrahedron reduces the symmetry\ngroup.\n\n\\begin{lemma}\n\\label{lemma:group} The configuration $\\mathcal{C}'_7$ has only\n$4!\\cdot 2^4 = 384$ symmetries, namely the permutations of the vertices\nof that original simplex that preserve the set of four distinguished\nedges.\n\\end{lemma}\n\n\\begin{proof}\nThere are clearly $4! \\cdot 2^4$ symmetries of $\\mathcal{C}_7$ that\npreserve the set of distinguished edges of the simplex (they can be\npermuted arbitrarily and their endpoints can be swapped). All of these\nsymmetries preserve $\\mathcal{C}'_7$.\n\nTo show that there are no further symmetries, it suffices to show that\nthe distinguished tetrahedron in $\\mathcal{C}'_7$ is preserved under\nall symmetries. (For then the antipodal tetrahedron is also preserved,\nand hence $\\mathcal{C}_7$ is preserved as well.) Label the vertices of\nthe original simplex $1,2,\\dots,8$, and suppose that the distinguished\nedges correspond to the pairs $12$, $34$, $56$, and $78$. Label the\npoints of $\\mathcal{C}'_7$ by the pairs for the corresponding edges.\n\nThere are at most two orbits under the symmetry group of\n$\\mathcal{C}'_7$, one containing $12$, $34$, $56$, and $78$ and the\nother containing the remaining points. We wish to show that these sets\ndo not in fact form a single orbit. To separate the two orbits, we\nwill count the number of regular tetrahedra each point is contained in.\n(We drop the word ``regular'' below.) The answer will be seven for the\nfour distinguished points and eleven for the other points, so they\ncannot lie in the same orbit.\n\nBefore beginning, we need a criterion for when the inner product\nbetween two points in $\\mathcal{C}'_7$ is $-1\/3$. If both points are\ndistinguished or both are non-distinguished, then that occurs exactly\nwhen their label pairs are disjoint. If one point is distinguished and\nthe other is not, then it occurs exactly when their label pairs\nintersect.\n\nNow it is straightforward to count the tetrahedra containing a\ndistinguished point, without loss of generality $12$. There is one\ntetrahedron of distinguished points, namely $\\{12,34,56,78\\}$. If we\ninclude a second distinguished point, say $34$, then there are two ways\nto complete the tetrahedron using two non-distinguished points, namely\n$\\{12,34,13,24\\}$ and $\\{12,34,14,23\\}$ (the two additional pairs must\nbe disjoint and each intersect both $12$ and $34$). Because there are\nthree choices for the second distinguished point, this yields six\ntetrahedra. Finally, it is impossible to form a tetrahedron using $12$\nand three non-distinguished points (one cannot choose three disjoint\npairs that each intersect $12$). Thus, $12$ is contained in seven\ntetrahedra.\n\nTo complete the proof, we need only show that a non-distinguished\npoint, without loss of generality $13$, is contained in more than seven\ntetrahedra. There is a unique tetrahedron containing $13$ and two\ndistinguished points, namely $\\{13, 12, 34, 24\\}$. (There are only two\ndistinguished points that overlap with $13$, namely $12$ and $34$; then\nthe fourth point $24$ is determined.) 
No tetrahedron can contain a\nsingle distinguished point, as we saw in the previous paragraph, and if\na tetrahedron contains three distinguished points then it must contain\nthe fourth. Thus, the only remaining possibility is that all the points\nare non-distinguished. The three other points in the tetrahedron must\nbe labeled with disjoint pairs from $\\{2,4,5,6,7,8\\}$, and the labels\n$56$ and $78$ are not allowed (because those points are distinguished).\nThere are $6!\/(2!^3\\cdot3!)=15$ ways to split $\\{2,4,5,6,7,8\\}$ into\nthree disjoint pairs. Among them, three contain the pair $56$, three\ncontain the pair $78$, and one contains both pairs. Thus, there are\n$15-3-3+1=10$ possibilities containing neither $56$ nor~$78$. In\ntotal, the point $13$ is contained in eleven tetrahedra. Since it is\ncontained in more than seven tetrahedra, we see that $12$ and $13$ are\nin different orbits, as desired.\n\\end{proof}\n\nBy Lemma~\\ref{lemma:group}, there are two orbits of points in\n$\\mathcal{C}'_7$, namely the four points in the tetrahedron and the\nremaining $24$ points. The stabilizer of any point in the large orbit\nactually fixes two such points. Specifically, consider the edge in the\noriginal simplex that corresponds to the point. It shares its vertices\nwith two of the four distinguished edges (each vertex is in a unique\ndistinguished edge), and there is another edge that connects the other\ntwo vertices of those distinguished edges. For example, in the\nnotation of the proof of Lemma~\\ref{lemma:group}, the edge $13$ has the\ncompanion~$24$. This second edge has the same stabilizer as the first.\nIt follows that $\\mathcal{C}'_7$ is not group-balanced.\n\nIf we interpret the configuration $\\mathcal{C}'_7$ as a graph by using\nits shorter distance to define edges, then we get a strongly regular\ngraph, with parameters $(28,12,6,4)$, the same as those of\n$\\mathcal{C}_7$. In fact, every $2$-design $2$-distance set yields a\nstrongly regular graph, by Theorem~7.4 of \\cite{DGS}. We have checked\nusing Brouwer's list \\cite{B1} that spectral embeddings of strongly\nregular graphs do not yield counterexamples in lower dimensions. It\nsuffices to consider graphs with at most $27$ vertices, since by\nTheorem~4.8 in \\cite{DGS} no two-distance set in $S^5$ contains more\nthan $27$ points. Aside from the degenerate case of complete\nmultipartite graphs and their complements, the full list of strongly\nregular graphs with spectral embeddings in six or fewer dimensions is\nthe pentagon, the Paley graph on $9$ vertices, the Petersen graph, the\nPaley graph on $13$ vertices, the line graph of $K_6$, the Clebsch\ngraph, the Shrikhande graph, the $4 \\times 4$ lattice graph, the line\ngraph of $K_7$, the Schl\\\"afli graph, and the complements of these\ngraphs. It is straightforward to check that these graphs all have\ngroup-balanced spectral embeddings. Of course there may be\nlow-dimensional counterexamples of other forms.\n\nWe suspect that there are no counterexamples in $\\mathbb{R}^4$:\n\n\\begin{conjecture}\nIn $\\mathbb{R}^4$, every balanced configuration is group-balanced.\n\\end{conjecture}\n\nIf true, this conjecture would lead to a complete classification of\nbalanced configurations in $\\mathbb{R}^4$, because all the finite subgroups of\n$O(4)$ are known (see for example \\cite{CS}). 
It is likely that using\nsuch a classification one could prove completeness for the list of\nknown universal optima in $\\mathbb{R}^4$, namely the regular simplices, cross\npolytope, and $600$-cell, but we have not completed this calculation.\n\nIn $\\mathbb{R}^5$ or $\\mathbb{R}^6$, we are not willing to hazard a guess as to whether\nall balanced configurations are group-balanced. The construction of\n$\\mathcal{C}'_7$ uses such an ad hoc approach that it provides little\nguidance about lower dimensions.\n\n\\section{Counterexamples from lattices}\n\nIn higher dimensions, we can use lattices to construct counterexamples\nthat do not arise from strongly regular graphs. For example, consider\nthe lattice $\\Lambda(G_2)$ in the Koch-\\kern-.15exVenkov list of extremal even\nunimodular lattices in $\\mathbb{R}^{32}$ (see \\cite[p.~212]{KV} or the\nNebe-Sloane catalogue~\\cite{NS} of lattices). This lattice has\n$146880$ minimal vectors. When they are renormalized to be unit\nvectors, only five inner products occur besides $\\pm 1$ (namely, $\\pm\n1\/2$, $\\pm 1\/4$, and $0$). By Corollary~3.1 of \\cite{BV}, this\nconfiguration is a spherical $7$-design. Hence, by\nTheorem~\\ref{theorem:main} it is balanced. However,\n$\\mathop{\\textup{Aut}}(\\Lambda(G_2))$ is a relatively small group, of order $3 \\cdot\n2^{12}$, and one can check by computer calculations that some minimal\nvectors have trivial stabilizers. (The lattice is generated by its\nminimal vectors, and thus it and its kissing configuration have the\nsame symmetry group.) The kissing configuration of $\\Lambda(G_2)$ is\ntherefore balanced but not group-balanced.\n\nThe case of $\\Lambda(G_2)$ is particularly simple since some\nstabilizers are trivial, but one can also construct lower-dimensional\ncounterexamples using lattices. For example, let $L$ be the unique\n$2$-modular lattice in dimension~$20$ with Gram matrix\ndeterminant~$2^{10}$, minimal norm~$4$, and automorphism group $2 \\cdot\nM_{12} \\cdot 2$ (see \\cite[p.~101]{BV} or \\cite{NS}). The kissing\nnumber of $L$ is $3960$, and its automorphism group (which is, as\nabove, the same as the symmetry group of its kissing configuration)\nacts transitively on the minimal vectors. The kissing configuration is\na spherical $5$-design (again by Corollary~3.1 in \\cite{BV}), and only\nfive distances occur between distinct, non-antipodal points. Thus, by\nTheorem~\\ref{theorem:main} the kissing configuration of~$L$ is\nbalanced. However, computer calculations show that the stabilizer of a\npoint fixes a $2$-dimensional subspace of $\\mathbb{R}^{20}$, and thus the\nconfiguration is not group-balanced.\n\nThe kissing configurations of the extremal even unimodular lattices\n$P_{48p}$ and $P_{48q}$ in $\\mathbb{R}^{48}$ (see \\cite[p.~149]{CSl}) are also\nbalanced but not group-balanced. They have $52416000$ minimal vectors,\nwith inner products $\\pm 1$, $\\pm 1\/2$, $\\pm 1\/3$, $\\pm 1\/6$, and $0$\nafter rescaling to the unit sphere. By Corollary~3.1 in \\cite{BV}, the\nkissing configurations are spherical $11$-designs, so by\nTheorem~\\ref{theorem:main} they are balanced. However, in both cases\nthere are points with trivial stabilizers, so they are not\ngroup-balanced. Checking this is more computationally intensive than\nin the previous two cases. Fortunately, for the bases listed in\n\\cite{NS}, in both cases the first basis vector is a minimal vector\nwith trivial stabilizer, and this triviality is easily established by\nsimply enumerating the entire orbit. 
(The automorphism groups of\n$P_{48p}$ and $P_{48q}$ have orders $72864$ and $103776$,\nrespectively.) We expect that the same holds for every extremal even\nunimodular lattice in $\\mathbb{R}^{48}$, but they have not been fully\nclassified and we do not see how to prove it except for checking each\ncase individually.\n\n\\section{Euclidean balanced configurations}\n\nThe concept of a balanced configuration generalizes naturally to\nEuclidean space: a discrete subset $\\mathcal{C} \\subset \\mathbb{R}^n$ is\n\\emph{balanced} if for every $x \\in \\mathcal{C}$ and every distance\n$d$, the set $\\{y \\in \\mathcal{C} : |x-y|=d\\}$ either is empty or has\ncentroid $x$. As in the spherical case, this characterization is\nequivalent to being in equilibrium under all pairwise forces that\nvanish past some radius (to avoid convergence issues).\n\nThe concept of a group-balanced configuration generalizes as well. Let\n$\\mathop{\\textup{Aut}}(\\mathcal{C})$ denote the set of rigid motions of $\\mathbb{R}^n$\npreserving $\\mathcal{C}$. Then $\\mathcal{C}$ is \\emph{group-balanced}\nif for every $x \\in \\mathcal{C}$, the stabilizer of $x$ in\n$\\mathop{\\textup{Aut}}(\\mathcal{C})$ fixes only the point $x$. For example, every\nlattice in Euclidean space is group-balanced, because the stabilizer of\neach lattice point contains the operation of reflection through that\npoint. Clearly, group-balanced configurations are balanced, because the\ncentroid of $\\{y \\in \\mathcal{C} : |x-y|=d\\}$ is fixed by the\nstabilizer of $x$.\n\n\\begin{conjecture} \\label{conjecture:R2}\nEvery balanced discrete subset of $\\mathbb{R}^2$ is group-balanced.\n\\end{conjecture}\n\nConjecture~\\ref{conjecture:R2} can likely be proved using ideas similar\nto those used by Leech in \\cite{Leech} to prove the analogue for $S^2$,\nbut we have not completed a proof.\n\n\\begin{conjecture} \\label{conjecture:highdim}\nIf $n$ is sufficiently large, then there exists a discrete subset of\n$\\mathbb{R}^n$ that is balanced but not group-balanced.\n\\end{conjecture}\n\nOne might hope to prove Conjecture~\\ref{conjecture:highdim} using an\nanalogue of Theorem~\\ref{theorem:main}. Although we have not succeeded\nwith this approach, one can indeed generalize several of the\ningredients to Euclidean space: the analogue of a polynomial is a\nradial function from $\\mathbb{R}^n$ to $\\mathbb{R}$ whose Fourier transform has compact\nsupport (i.e., the function is an entire function of exponential type),\nand the analogue of the degree of the polynomial is the radius of the\nsupport. Instead of having a bounded number of roots, such a function\nhas a bounded density of roots. The notion of a spherical design also\ngeneralizes to Euclidean space as follows. A configuration\n$\\mathcal{C}$ with density $1$ (i.e., one point per unit volume in\nspace) is a ``Euclidean $r$-design'' if whenever $f$ is a radial\nSchwartz function with $\\mathop{\\textup{supp}}\\big(\\widehat{f}\\,\\big) \\subseteq B_r(0)$,\nthe average of $\\sum_{y \\in \\mathcal{C}} f(x-y)$ over $x \\in\n\\mathcal{C}$ equals $\\int f(x-y) \\, dy = \\widehat{f}(0)$. (The average\nor even the sum may not make sense if $\\mathcal{C}$ is pathological,\nbut for example they are always well-defined for periodic\nconfigurations.) 
It is plausible that an analogue of\nTheorem~\\ref{theorem:main} is true in the Euclidean setting, but we\nhave not attempted to state or prove a precise analogue, because it is\nnot clear that it would have any interesting applications.\n\n\\section*{Acknowledgements}\n\nWe thank Richard Green and the anonymous referee for their suggestions\nand feedback.\n\n\\bibliographystyle{amsalpha}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsection*{Abstract}\nToday's big data applications generate hundreds or even thousands of terabytes of data. Commonly, Java-based applications are used for further analysis. A single commodity machine, for example in a data center or typical cloud environment, cannot store and process the vast amounts of data, making distribution mandatory. Thus, the machines have to use interconnects to exchange data or coordinate data analysis. However, commodity interconnects used in such environments, e.g. Gigabit Ethernet, cannot provide the high throughput and low latency of alternatives like InfiniBand to speed up data analysis of the target applications. In this report, we describe the design and implementation of Ibdxnet, a low-latency and high-throughput transport providing the benefits of InfiniBand networks to Java applications. Ibdxnet is part of the Java-based DXNet library, a highly concurrent and simple to use messaging stack with transparent serialization of messaging objects and a focus on very small messages (< 64 bytes). Ibdxnet implements the transport interface of DXNet in Java and a custom C++ library in native space using JNI. Several optimizations in both spaces minimize context switching overhead between Java and C++ without burdening message latency or throughput. Communication is implemented using the messaging verbs of the ibverbs library, complemented by automatic connection management in the native library. We compared DXNet with the Ibdxnet transport to the MPI implementations FastMPJ and MVAPICH2. For small messages up to 64 bytes using multiple threads, DXNet with the Ibdxnet transport achieves a bi-directional message rate of 10 million messages per second and surpasses FastMPJ by a factor of 4 and MVAPICH2 by a factor of 2. Furthermore, DXNet scales well on a high load all-to-all communication with up to 8 nodes, achieving a total aggregated message rate of 43.4 million messages per second for small messages and a throughput saturation of 33.6 GB\/s with only 2 KB message size.\n\n\\section{Introduction}\n\nInteractive applications, especially on the web \\cite{facebook2, Liu:2016:ECI:2964797.2964815}, simulations \\cite{doi:10.1093\/bioinformatics\/btt055} or online data analysis \\cite{Desikan:2005:IPR:1062745.1062885, 6547630, DOI:10.1007\/978-3-319-55699-4_20} have to process terabytes of data often consisting of small objects. For example, social networks are storing graphs with trillions of edges, resulting in a per-object size of less than 64 bytes for the majority of objects \\cite{Ching:2015:OTE:2824032.2824077}. Other graph examples are brain simulations with billions of neurons and thousands of connections each \\cite{IntroducingGraph500} or search engines for billions of indexed web pages \\cite{Gulli:2005:IWM:1062745.1062789}. To provide high interactivity to the user, low latency is a must in many of these application domains. 
Furthermore, it is also important in the domain of mobile networks moving state management into the cloud \\cite{Kablan:2015:SNF:2785989.2785993}.\n\nBig data applications process vast amounts of data which require either an expensive supercomputer or distributed platforms, like clusters or cloud environments \\cite{HASHEM201598}. High performance interconnects, such as InfiniBand, play a key role in keeping processing and response times low, especially for highly interactive and always online applications. Today, many cloud providers, e.g. Microsoft, Amazon or Google, offer instances equipped with InfiniBand.\n\nInfiniBand offers messaging verbs and RDMA, both providing one way single digit microsecond latencies. It depends on the application requirements whether messaging verbs or RDMA is the better choice to ensure optimal performance \\cite{Su:2017:RRF:3064176.3064189}.\n\nIn this report, we focus on Java-based parallel and distributed applications, especially big data applications, which commonly communicate with remote nodes using asynchronous and synchronous messages \\cite{Ching:2015:OTE:2824032.2824077, Ekanayake:2016:SJH:2972969.2972972, Dean:2008:MSD:1327452.1327492, Zaharia:2016:ASU:3013530.2934664}. Unfortunately, accessing InfiniBand verbs from Java is not a built-in feature of the commonly used JVMs. There are several external libraries, wrappers or JVMs with built-in support available, but all trade performance for transparency or require proprietary environments (\\S \\ref{related_work_java_ib}). To use InfiniBand from Java, one can rely on available (Java) MPI implementations. But these do not provide features such as serialization for messaging objects or automatic connection management (\\S \\ref{related_work_mpi}).\n\nWe developed the network subsystem DXNet (\\S \\ref{dxnet}) which provides transparent and simple to use sending and event based receiving of synchronous and asynchronous messages with transparent serialization of messaging objects \\cite{dxnet}. It is optimized for high concurrency on all operations by implementing lock-free synchronization. DXNet is implemented in Java, open source and available at Github \\cite{dxnetgithub}.\n\nIn this report, we propose Ibdxnet, a transport for the DXNet network subsystem. The transport uses reliable messaging verbs to implement InfiniBand support for DXNet and provides low latency and high throughput messaging for Java.\n\nIbdxnet implements scalable and automatic connection and queue pair management, the \\textit{msgrc} transport engine, which uses InfiniBand messaging verbs, and a JNI interface. We present best practices applied to ensure scalability across multiple threads and nodes when working with InfiniBand verbs by elaborating on the implementation details of Ibdxnet. We carefully designed an efficient and low latency JNI layer to connect the native Ibdxnet subsystem to the Java-based IB transport in DXNet. The IB transport uses the JNI layer to interface with Ibdxnet, extends DXNet's outgoing ring buffer for InfiniBand usage and implements scalable scheduling of outgoing data for many simultaneous connections. 
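\n\nTo illustrate the kind of boundary such a JNI layer defines, the following simplified sketch shows how a Java transport could declare native entry points into a native messaging library. All names are placeholders chosen for this sketch and do not reflect the actual Ibdxnet interface described in Section \\ref{ibdxnet_native}; the sketch merely assumes that only primitive values such as node IDs, buffer offsets and lengths cross the language boundary.\n\\begin{verbatim}\n// Hypothetical sketch of a JNI boundary between a Java\n// transport and a native InfiniBand messaging library (all\n// names are placeholders, not the actual Ibdxnet interface).\n// The native library is assumed to be loaded beforehand via\n// System.loadLibrary.\npublic final class NativeMsgLibSketch {\n\n    // initialize the native subsystem (connection manager,\n    // send/receive threads)\n    public static native boolean init(short ownNodeId,\n            int maxConnections);\n\n    // announce serialized data in the native outgoing\n    // ring buffer (ORB)\n    public static native void postData(short targetNodeId,\n            long offset, int length);\n\n    // clean shutdown of the native subsystem\n    public static native void shutdown();\n\n    // upcall invoked from native code once a received buffer\n    // is complete; the Java side pushes it to the IBQ\n    private static void onReceived(short sourceNodeId,\n            long bufferAddress, int length) {\n        // de-serialization and dispatching happen in Java\n    }\n}\n\\end{verbatim}\nKeeping the boundary this narrow, passing only primitive values such as node IDs, offsets and lengths instead of objects, is one way to keep the per-call JNI overhead low.\n\n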
We evaluated DXNet with the IB transport and Ibdxnet, and compared them to two MPI implementations supporting InfiniBand: the well-known MVAPICH2 and the Java-based FastMPJ.\n\nThough MPI is discussed in related work (\\S \\ref{related_work_mpi}) and two implementations are evaluated and compared to DXNet (\\S \\ref{eval}), neither DXNet, the IB transport, nor Ibdxnet implements the MPI standard. The term \\textit{messaging} is used by DXNet to simply refer to exchanging data in the form of messages (i.e. additional metadata identifies messages on receive). DXNet does not implement any MPI primitives defined by the standard. Various low-level libraries to use InfiniBand in Java are not compared in this report, but in a separate one.\n\nThe report is structured in the following way: In Section \\ref{dxnet}, we present a summary of DXNet and its aspects important to this report. In Section \\ref{related_work}, we discuss related work which includes a brief summary of available libraries and middleware for interfacing InfiniBand in Java applications. MPI and selected implementations supporting InfiniBand are presented as available middleware solutions and compared to DXNet. Lastly, we discuss target applications in the field of Big-Data which benefit from InfiniBand usage. Section \\ref{ib_basics} covers InfiniBand basics which are of concern for this report. Section \\ref{java_and_native} discusses JNI usage and presents best practices for low latency interfacing with native code from Java using JNI. Section \\ref{overview_infiniband_transport} gives a brief overview of DXNet's multi layered stack when using InfiniBand. Implementation details of the native part Ibdxnet are given in Section \\ref{ibdxnet_native} and the IB transport in Java is presented in Section \\ref{transport_impl_java}. Section \\ref{eval} presents and compares the experimental results of MVAPICH2, FastMPJ and DXNet. Conclusions are presented in Section \\ref{conclusions}.\n\n\\section{DXNet}\n\\label{dxnet}\nDXNet is a network library for Java targeting, but not limited to, highly concurrent big data applications. DXNet implements an \\textbf{asynchronous event driven messaging} approach with a simple and easy to use application interface. \\textbf{Messaging} describes \\textbf{transparent sending and receiving of complex (even nested) data structures} with implicit serialization and de-serialization. Furthermore, DXNet provides a built-in primitive for transparent \\textbf{request-response communication}.\n\nDXNet is optimized for highly multi-threaded sending and receiving of small messages by using \\textbf{lock-free data structures, fast concurrent serialization, zero copy and zero allocation}. The core of DXNet provides \\textbf{automatic connection and buffer management}, serialization of message objects and an interface for implementing different transports. Currently, an Ethernet transport using Java NIO sockets and an InfiniBand transport using \\textit{ibverbs} (\\S \\ref{ibdxnet_native}) are implemented.\n\nThe following subsections describe the most important aspects of DXNet and its core which are depicted in Figure \\ref{dxnet_simple_fig} and relevant for further sections of this report. A more detailed insight is given in a dedicated paper \\cite{dxnet}. 
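\n\nThe following simplified sketch illustrates this programming model from the application's point of view: message objects are addressed to a nodeID, sent asynchronously and delivered to pre-registered callbacks. All class and method names are placeholders invented for this sketch and do not reflect DXNet's actual API; the stub types only exist to keep the sketch self-contained.\n\\begin{verbatim}\n// Hypothetical, self-contained sketch of asynchronous, event\n// driven messaging as described in this section (names are\n// placeholders and do not reflect DXNet's actual API).\npublic class MessagingModelSketch {\n\n    // stand-in for a serializable message addressed to a\n    // 16-bit nodeID\n    static class SketchMessage {\n        final short destination;\n        SketchMessage(short destination) {\n            this.destination = destination;\n        }\n    }\n\n    // callback interface, invoked by handler threads for\n    // de-serialized incoming messages\n    interface SketchReceiver {\n        void onIncomingMessage(SketchMessage msg);\n    }\n\n    // stand-in for the messaging core: asynchronous send,\n    // event based receive\n    interface SketchNetwork {\n        void registerReceiver(Class<? extends SketchMessage> type,\n                SketchReceiver receiver);\n        void sendMessage(SketchMessage msg); // returns immediately\n    }\n\n    static void example(SketchNetwork net, short remoteNodeId) {\n        net.registerReceiver(SketchMessage.class, msg -> {\n            // application specific processing in the callback\n        });\n        // fire and forget, serialized and aggregated internally\n        net.sendMessage(new SketchMessage(remoteNodeId));\n    }\n}\n\\end{verbatim}\nSerialization of the message object and aggregation into per-connection buffers happen transparently behind the send call, as described in the following subsections.\n\n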
The source code is available at Github \\cite{dxnetgithub}.\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.0in]{dxnet_simple.png}\n\t\\caption{Simplified DXNet Architecture}\n\t\\label{dxnet_simple_fig}\n\\end{figure}\n\n\\subsection{Automatic Connection Management}\n\\label{dxnet_con_man}\nTo relieve the programmer from explicit connection creation, handling and cleanup, DXNet implements automatic and transparent connection creation, handling and cleanup. Nodes are addressed using an \\textbf{abstract and unique 16-bit nodeID}. Address mappings must be registered to allow associating the nodeIDs of each remote node with a corresponding implementation dependent endpoint (e.g. socket, queue pair). To provide scalability with up to hundreds of simultaneous connections, our event driven system does not create one thread per connection. A \\textbf{new connection is created automatically} once the first message is either sent to a destination or received from one. Connections are closed once a configurable connection limit is reached, using a least recently used strategy. Faulty connections (e.g. remote node not reachable anymore) are handled and cleaned up by the manager. Connection errors or timeouts are propagated to the application using exceptions.\n\n\\subsection{Sending of Messages}\n\\label{dxnet_send}\n\\textbf{Messages} are serialized Java objects and sent \\textbf{asynchronously} without waiting for a completion. A message can be targeted towards one or multiple receivers. Using the message type \\textbf{Request}, it is sent to one receiver, only. When sending a request, the sender waits until \\textbf{receiving a corresponding response} message (transparently handled by DXNet) or skips waiting and collects the response later.\n\nWe expect applications calling DXNet concurrently with \\textbf{multiple threads} to send messages. Every message is automatically and concurrently serialized into the \\textbf{Outgoing Ring Buffer (ORB)}, a natively allocated and lock-free ring buffer. \\textbf{Messages are automatically aggregated} which increases send throughput. The ORB, one per connection, is allocated in native memory to allow \\textbf{direct and zero-copy access} by the low-level transport. A transport runs a decoupled dedicated thread which removes the serialized and ready to send data from the ORB and forwards it to the hardware.\n\n\\subsection{Receiving of Messages}\n\\label{dxnet_receive}\nThe network transport handles incoming data by writing it to \\textbf{pooled native buffers} to avoid burdening the Java garbage collection. Depending on how a transport writes and reads data, the buffers might contain fully serialized messages or just fragments. Every received buffer is pushed to the ring buffer based \\textbf{Incoming Buffer Queue (IBQ)}. Both the buffer pool and the IBQ are shared among all connections. \\textbf{Dedicated handler threads} pull buffers from the IBQ and process them asynchronously by de-serializing them and creating Java message objects. The messages are passed to \\textbf{pre-registered callback methods} of the application.\n\n\\subsection{Flow Control}\nDXNet implements its own \\textbf{flow control (FC)} mechanism to avoid flooding a remote node with many (very small) messages. This would result in an increased overall latency and lower throughput if the receiving node cannot keep up with processing incoming messages. On sending a message, the per connection dedicated FC checks if a configurable threshold is exceeded. 
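\n\nA simplified sketch of such per-connection accounting on the sender side, anticipating the window based confirmations described in the following, could look like this; names, types and the blocking strategy are placeholders and do not reflect DXNet's actual implementation.\n\\begin{verbatim}\n// Simplified sketch of sender-side flow control accounting\n// for one connection (placeholder names; not DXNet's actual\n// implementation).\nfinal class FlowControlSketch {\n    private final int thresholdBytes;  // configurable threshold\n    private final int windowSizeBytes; // configurable window size\n    private int unconfirmedBytes;      // sent, not yet confirmed\n\n    FlowControlSketch(int thresholdBytes, int windowSizeBytes) {\n        this.thresholdBytes = thresholdBytes;\n        this.windowSizeBytes = windowSizeBytes;\n    }\n\n    // called before data is handed to the transport; blocks the\n    // send thread while too much unconfirmed data is in flight\n    synchronized void dataToSend(int numBytes)\n            throws InterruptedException {\n        while (unconfirmedBytes + numBytes > thresholdBytes) {\n            wait();\n        }\n        unconfirmedBytes += numBytes;\n    }\n\n    // called when the receiver confirms processed windows\n    synchronized void handleConfirmation(int numWindows) {\n        unconfirmedBytes -= numWindows * windowSizeBytes;\n        notifyAll();\n    }\n}\n\\end{verbatim}\n\n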
This threshold describes the \\textbf{number of bytes sent by the current node but not fully processed by the receiving node}. Once the configurable threshold is exceeded, the receiving node slices the number of bytes received into equally sized windows (window size configurable) and sends the number of windows confirmed back to the source node. Once the sender receives this confirmation, the number of bytes sent but not processed is \\textbf{reduced by the number of received windows multiplied by the configured window size}. If an application send thread was previously blocked due to exceeding this threshold, it can now continue with processing.\n\n\\subsection{Transport Interface}\n\\label{dxnet_transport_interface}\nDXNet provides a transport interface allowing implementations of different transport types. On initialization of DXNet, one of the implemented transports can be selected. Afterwards, when using DXNet, the transport is transparent to the application. The following tasks must be handled by every transport implementation:\n\n\\begin{itemize}\n    \\item Connection: Create, close and cleanup\n    \\item Get ready to send data from ORB and send it (ORB triggers callback once data is available)\n    \\item Handle received data by pushing it to the IBQ\n    \\item Manage flow control when sending\/receiving data\n\\end{itemize}\n\nEvery other task that is not exposed directly by one of the following methods must be handled internally by the transport. The core of DXNet relies on the following methods of abstract Java classes\/interfaces which must be implemented by every transport:\n\n\\begin{itemize}\n    \\item Connection: open, close, dataPosted\n    \\item ConnectionManager: createConnection, closeConnection\n    \\item FlowControl: sendFlowControlData, getAndResetFlowControlData\n\\end{itemize}\n\nWe elaborate on further details about the transport interface in Section \\ref{transport_impl_java} where we describe the transport implementation for Ibdxnet.\n\n\\section{Related Work}\n\\label{related_work}\nRelated work discusses different topics which are of interest to DXNet with the IB transport and Ibdxnet. First, we present a summary of our evaluation results of existing solutions to use InfiniBand in Java applications (\\S \\ref{related_work_java_ib}). These results were important before developing Ibdxnet. Next, we compare DXNet and the MPI standard (\\S \\ref{related_work_mpi}), followed by MPI implementations supporting InfiniBand (\\S \\ref{related_work_mpi_impls}) and UCX (\\S \\ref{rel_work_other}). To our knowledge, this concludes the list of available middleware offering higher level networking primitives comparable to DXNet's. In the last Subsection \\ref{rel_work_big_data}, we discuss big-data systems and applications supporting InfiniBand as target applications of interest to DXNet.\n\n\\subsection{Java and InfiniBand}\n\\label{related_work_java_ib}\nBefore developing Ibdxnet and the InfiniBand transport for DXNet, we evaluated available (low-level) solutions for leveraging InfiniBand hardware in Java applications. This includes using NIO sockets with \\textbf{IP over InfiniBand (IPoIB)} \\cite{ipoib}, \\textbf{jVerbs} \\cite{Stuedi:2013:JUL:2523616.2523631}, \\textbf{JSOR} \\cite{jsor}, \\textbf{libvma} \\cite{libvma} and \\textbf{native c-verbs with ibverbs}. 
Extensive experiments analyzing throughput and latency of both messaging verbs and RDMA were conducted to determine a suitable candidate for using InfiniBand with Java applications and are published in a separate report.\n\nSummarized, the results show that transparent solutions like IPoIB, libvma or JSOR, which allow existing socket-based applications to send and receive data transparently over InfiniBand hardware, are not able to deliver adequate overall throughput and latency. For the verbs-based libraries, jVerbs gets close to the native ibverbs performance but, like JSOR, requires a proprietary JVM to run. Overall, none of the analyzed solutions, other than ibverbs, delivers adequate performance. Furthermore, we want DXNet to stay independent of the JVM when using InfiniBand hardware. Thus, we decided to use the native ibverbs library with the Java Native Interface to avoid the known performance issues of the evaluated solutions.\n\n\\subsection{MPI}\n\\label{related_work_mpi}\nThe message passing interface \\cite{Forum:1994:MMI:898758} defines a standard for high level networking primitives to send and receive data between local and remote processes, typically used for HPC applications.\n\nUsing MPI, an application can send and receive primitive data types, arrays of primitive data types, as well as derived data types such as vector and indexed types. The synchronous primitives \\textit{MPI\\_Send} and \\textit{MPI\\_Recv} perform these operations in blocking mode. The asynchronous operations \\textit{MPI\\_Isend} and \\textit{MPI\\_Irecv} allow non-blocking communication. A status handle is returned with each started asynchronous operation. This can be used to check the completion of the operation or to actively wait for one or multiple completions using \\textit{MPI\\_Wait} or \\textit{MPI\\_Waitall}. Furthermore, there are various collective primitives which implement more advanced operations such as scatter, gather or reduce.\n\nSending and receiving of data with MPI requires the application to issue a receive for every send with a target buffer that can hold at least the amount of data sent by the remote. DXNet relieves the application from this responsibility. Application threads can send messages with variable size and DXNet manages the buffers used for sending and receiving. The application does not have to issue any receive operations or actively wait for data to arrive. Incoming messages are dispatched to pre-registered callback handlers by dedicated handler threads of DXNet.\n\nDXNet supports transparent serialization and de-serialization of complex (even nested) data types (Java objects) for messages. MPI primitives for sending and receiving data require the application to use one of the supported data types and do not offer serialization for more complex data types such as objects. However, the MPI implementation can benefit from the lack of serialization by avoiding any copying of data entirely. Due to the nature of serialization, DXNet has to create a (serialized) \\\"copy\\\" of the message when serializing it into the ORB. Analogously, data is copied when a message is created from incoming data during de-serialization.\n\nMessages in DXNet are sent asynchronously while requests offer active waiting or probing for the corresponding response. These communication patterns can also be applied by applications using MPI. The communication primitives currently provided by DXNet are limited to messages and request-response. 
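\n\nAs an illustration of the latter pattern, a request in this model is sent like a message but additionally yields a handle on which the sender can either actively wait for or later probe the corresponding response. The following sketch uses placeholder names and stub types that do not reflect DXNet's actual API.\n\\begin{verbatim}\n// Simplified sketch of the request-response pattern described\n// above (placeholder names; not DXNet's actual API).\npublic class RequestResponseSketch {\n\n    interface ResponseHandle {\n        // non-blocking probe for the response\n        boolean isAvailable();\n        // actively wait until the response arrived\n        Object waitForResponse() throws InterruptedException;\n    }\n\n    interface SketchNetwork {\n        // send a request and return a handle for its response\n        ResponseHandle sendRequest(short destination,\n                Object payload);\n    }\n\n    static Object example(SketchNetwork net, short remote,\n            Object payload) throws InterruptedException {\n        ResponseHandle handle = net.sendRequest(remote, payload);\n        // other work can be done before collecting the response\n        return handle.waitForResponse();\n    }\n}\n\\end{verbatim}\n\n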
Nevertheless, using these two primitives, other MPI primitives, such as scatter, gather or reduce, can be implemented by the application if required.\n\nDXNet does not implement multiple protocols for different buffer sizes like MPI with eager and rendezvous. A transport for DXNet might implement such a protocol but our current implementations for Ethernet and InfiniBand do not. The aggregated data available in the ORB is either sent as a whole or sliced and sent as multiple buffers. The transport on the receiving side passes the stream of buffers to DXNet and puts them into the IBQ. Afterwards, the buffers are re-connected to a stream of data by the MCC before extracting and processing the messages.\n\nAn instance using DXNet runs within one process of a Big Data application with one or multiple application threads. Typically, one DXNet instance runs per cluster node. This allows the application to dynamically scale the number of threads up or down within the same DXNet instance as needed. Furthermore, fast communication between multiple threads within the same process is possible, too.\n\nCommonly, an MPI application runs a single thread per process. Multiple processes are spawned according to the number of cores per node with IPC fully based on MPI. MPI does offer different thread modes which includes issuing MPI calls using different threads in a process. Typically, this mode is used in combination with OpenMP \\cite{openmp}. However, it is not supported by all MPI implementations which also offer InfiniBand support (\\S \\ref{related_work_mpi_impls}). Furthermore, DXNet supports dynamic up and down scaling of instances. MPI implementations support up-scaling (for non singletons) but down scaling is considered an issue for many implementations. Processes cannot be removed entirely and might cause other processes to get stuck or crash.\n\nConnection management and identifying remote nodes are similar with DXNet and MPI. However, DXNet does not come with deployment tools such as \\textit{mpirun} which assigns the ids\/ranks to identify the instances. This intentional design decision allows existing applications to integrate DXNet without restrictions to the bootstrapping process of the application. Furthermore, DXNet supports dynamically adding and removing instances. With MPI, an application must be created by using the MPI environment. MPI applications must be run using a special coordinator such as \\textit{mpirun}. If executed without a communicator, an MPI world is limited to the current process it is created in which doesn't allow communication with any other instances. Separate MPI worlds can be connected but the implementation must support this feature. To our knowledge, there is no implementation (with InfiniBand support) that currently supports this.\n\n\\subsection{MPI Implementations Supporting InfiniBand}\n\\label{related_work_mpi_impls}\nThis section only considers MPI implementations supporting InfiniBand directly. Naturally, IPoIB can be used to run any MPI implementation supporting Ethernet networks over InfiniBand. But, as previously discussed (\\S \\ref{related_work_java_ib}), the network performance is very limited when using IPoIB.\n\n\\textbf{MVAPICH2} is a MPI library \\cite{4343853} supporting various network interconnects, such as Ethernet, iWARP, Omni-Path, RoCE and InfiniBand. MVAPICH2 includes features like RDMA fast path or RDMA operations for small message transfers and is widely used on many clusters over the world. 
\\textbf{Open MPI} \\cite{openmpi} is an open source implementation of the MPI standard (currently full 3.1 conformance) supporting a variety of interconnects, such as Ethernet using TCP sockets, RoCE, iWARP and InfiniBand. \n\n\\textbf{mpiJava} \\cite{mpijava} implements the MPI standard by a collection of wrapper classes that call native MPI implementations, such as MVAPICH2 or OpenMPI, through JNI. The wrapper based approach provides efficient communication relying on native libraries. However, it is not threadsafe and, thus, is not able to take advantage of multi-core systems using multithreading.\n\n\\textbf{FastMPJ} \\cite{Exposito:2014aa} uses Java Fast Sockets \\cite{Taboada:2008:JFS:1456731.1457122} and ibvdev to provide a MPI implementation for parallel systems using Java. Initially, \\textbf{ibvdev} \\cite{Exposito2012} was implemented as a low-level communication device for \\textbf{MPJ Express} \\cite{MPJExpress}, a Java MPI implementation of the mpiJava 1.2 API specification. ibvdev implements InfiniBand support using the low-level verbs API and can be integrated into any parallel and distributed Java application. FastMPJ optimizes MPJ Express collective primitives and provides efficient non-blocking communication. Currently, FastMPJ supports issuing MPI calls using a single thread, only.\n\n\\subsection{Other Middleware}\n\\label{rel_work_other}\n\\textbf{UCX} \\cite{7312665} is a network stack designed for next generation systems for applications with an highly multi-threaded environment. It provides three independent layers: UCS is a service layer with different cross platform utilities, such as atomic operations, thread safety, memory management and data structures. The transport layer UCT abstracts different hardware architectures and their low-level APIs, and provides an API to implement communication primitives. UCP implements high level protocols such as MPI or PGAS programming models by using UCT.\n\nUCX aims to be a common computing platform for multi-threaded applications. However, DXNet does not and, thus, does not include its own atomic operations, thread safety or memory management for data structures. Instead, it relies on the multi-threading utilities provided by the Java environment. DXNet does abstract different hardware like UCX but only network interconnects and not GPUs or other co-processors. Furthermore, DXNet is a simple networking library for Java applications and does not implement MPI or PGAS models. Instead, it provides simple asynchronous messaging and synchronous request-response communication, only.\n\n\\subsection{Target Applications using InfiniBand}\n\\label{rel_work_big_data}\nProviding high throughput and low latency, InfiniBand is a technology which is widely used in various big-data applications.\n\n\\textbf{Apache Hadoop} \\cite{Islam:2012:HPR:2388996.2389044} is a well known Java big-data processing framework for large scale data processing using the MapReduce programming model. It uses the Hadoop Distributed File System for storing and accessing application data which supports InfiniBand interconnects using RDMA. Also implemented in Java, \\textbf{Apache Spark} is a framework for big-data processing offering the domain-specific-language Spark SQL, a stream processing and machine learning extension and the graph processing framework GraphX. 
It supports InfiniBand hardware using an additional RDMA plugin \cite{sparkrdma}.\n\nNumerous key-value storages for big-data applications have been proposed that use InfiniBand and RDMA to provide low latency data access for highly interactive applications.\n\n\textbf{RAMCloud} \cite{Ousterhout:2015:RSS:2818727.2806887} is a distributed key-value storage optimized for low latency data access using InfiniBand with messaging verbs. Multiple transports are implemented for network communication, e.g. using reliable and unreliable connections with InfiniBand and Ethernet with unreliable connections. \textbf{FaRM} \cite{179767} implements a key-value and graph storage using a shared memory architecture with RDMA. It performs well with a throughput of 167 million key-value lookups per second at 31 \textmu s latency using 20 machines. \textbf{Pilaf} \cite{Mitchell:2013:UOR:2535461.2535475} also implements a key-value storage using RDMA for get operations and messaging verbs for put operations. \textbf{MICA} \cite{179747} implements a key-value storage with a focus on NUMA architectures. It maps each CPU core to a partition of data and communicates with a request-response approach over unreliable connections. \textbf{HERD} \cite{Kalia:2014:URE:2619239.2626299} borrows the design of MICA and implements networking using RDMA writes for the request to the server and messaging verbs for the response back to the client.\n\n\n\section{InfiniBand and ibverbs Basics}\n\label{ib_basics}\nThis section covers the most important aspects of the InfiniBand hardware and the native ibverbs library which are relevant for this report. Abbreviations introduced here (most of them commonly used in the InfiniBand context) are used throughout the report from this point on.\n\nThe \textbf{host channel adapter (HCA)} connected to the PCI bus of the host system is the network device for communicating with other nodes. The offloading engine of the HCA processes outgoing and incoming data asynchronously and is connected to other nodes using copper or optical cables via one or multiple switches. The \textbf{ibverbs} API provides the interface to communicate with the HCA either by exchanging data using Remote Direct Memory Access (RDMA) or messaging verbs. \n\nA \textbf{queue pair (QP)} identifies a physical connection to a remote node when using \textbf{reliable connected (RC)} communication. Using non-connected \textbf{unreliable datagram (UD)} communication, a single QP is sufficient to send data to multiple remotes. A QP consists of one \textbf{send queue (SQ)} and one \textbf{receive queue (RQ)}. With RC communication, a QP's SQ and RQ are always cross connected with a target's QP, e.g. node 0's SQ connects to node 1's RQ and node 0's RQ to node 1's SQ.\n\nIf an application wants to send data, it posts a \textbf{work request (WR)}, containing a pointer to the buffer to send and its length, to the SQ. A corresponding WR must be posted to the RQ of the connected QP on the target node to receive the data. This WR also contains a pointer to a buffer, and its size, to which any incoming data is written.\n\nOnce the data is sent, a \textbf{work completion (WC)} is generated and added to a \textbf{completion queue (CQ)} associated with the SQ. A WC is also generated and added to the CQ associated with the remote's RQ once the data has arrived. The WC of the send task tells the application that the data was successfully sent to the remote (or provides error information otherwise). 
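\n\nAs a minimal, hedged sketch of this workflow (assuming an already created and connected QP, a registered memory region and an associated CQ), posting a single send WR and waiting for its WC with ibverbs could look as follows. Error handling is reduced to a minimum and the busy-polling loop is for illustration only; the engine described later deliberately avoids polling until completion (\S \ref{msgrc}).\n\n\begin{lstlisting}[caption={Minimal sketch: posting a send WR and polling its WC with ibverbs},label=ibverbs_send_sketch, xleftmargin=4.0ex]\n#include <cstdint>\n#include <cstring>\n#include <infiniband/verbs.h>\n\n// Assumptions: qp, cq and mr (registered buffer) were set up beforehand.\nint sendBuffer(struct ibv_qp* qp, struct ibv_cq* cq,\n               struct ibv_mr* mr, void* buffer, uint32_t length) {\n    struct ibv_sge sge;\n    sge.addr = (uintptr_t) buffer;   // buffer must lie within the registered MR\n    sge.length = length;\n    sge.lkey = mr->lkey;\n\n    struct ibv_send_wr wr;\n    memset(&wr, 0, sizeof(wr));\n    wr.wr_id = 1;                    // application defined id, returned with the WC\n    wr.sg_list = &sge;\n    wr.num_sge = 1;\n    wr.opcode = IBV_WR_SEND;\n    wr.send_flags = IBV_SEND_SIGNALED;\n\n    struct ibv_send_wr* badWr = NULL;\n    if (ibv_post_send(qp, &wr, &badWr)) {\n        return -1;                   // posting to the SQ failed\n    }\n\n    // For illustration only: busy poll the CQ until the WC for this WR appears\n    struct ibv_wc wc;\n    int num;\n    do {\n        num = ibv_poll_cq(cq, 1, &wc);\n    } while (num == 0);\n\n    return (num < 0 || wc.status != IBV_WC_SUCCESS) ? -1 : 0;\n}\n\end{lstlisting}\n\n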
On the remote receiving the data, the WC indicates that the buffer attached to the previously posted WR is now filled with the remote's data.\n\nWhen serving multiple connections, not every single SQ and RQ needs a dedicated CQ. A single CQ can be used as a \textbf{shared completion queue (SCQ)} with multiple SQs or RQs. Furthermore, when receiving data from multiple sources, instead of managing many RQs to provide buffers for incoming data, a \textbf{shared receive queue (SRQ)} can be used on multiple QPs instead of single RQs.\n\nWhen attaching a buffer to a WR, it is attached as a \textbf{scatter gather element (SGE)} of a \textbf{scatter gather list (SGL)}. For sending, the SGL allows the offloading engine to gather the data from many scattered buffers and send it as one WR. For receiving, the received data is scattered to one or multiple buffers by the offloading engine.\n\n\section{Low Latency Data Exchange Between Java and C}\n\label{java_and_native}\nIn this section, we describe our experiences with and best practices for the Java Native Interface (JNI) to avoid performance penalties for latency sensitive applications. These are applied to various implementation aspects of the IB transport which are further explained in their dedicated sections.\n\nUsing JNI is mandatory if the Java space has to interface with native code, e.g. for IO operations or when using native libraries. As we decided to use the low-level ibverbs library to benefit from full control, high flexibility and low latency (\S \ref{related_work_java_ib}), we had to ensure that interfacing with native code from Java does not introduce too much overhead compared to the already existing and evaluated solutions.\n\nThe Java Native Interface (JNI) allows Java programmers to call native code from C\/C++ libraries. It is a well-known method to interface with native libraries that are not available in Java or to access IO using system calls or other native libraries. When calling code of a native library, the library has to expose and implement a predefined interface which allows the JVM to connect the native functions to natively declared Java methods in a Java class. With every call from Java to the native space and vice versa, a context switch has to be executed by the JVM environment. This involves tasks related to thread and cache management, adding latency to every native call. This increases the duration of such a call, which is crucial, especially regarding the low latency of IB.\n\n\textbf{Exchanging data with a native library without adding considerable overhead is challenging}. For single primitive values, passing parameters to functions is convenient and does not add any considerable overhead. However, access to Java classes or arrays from native space requires synchronization with the JVM (and its garbage collector) which is very expensive and must be avoided. Alternatively, one can use ByteBuffers allocated as DirectByteBuffers which allocate their memory in native memory. Java can access the memory through the ByteBuffer and the native library can get the native address and the size of the buffer with the functions \texttt{GetDirectBufferAddress} and \texttt{GetDirectBufferCapacity}. 
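\n\nFor illustration, a native JNI function receiving a DirectByteBuffer and resolving its native address could be sketched as follows; the class and method names are placeholders and not part of Ibdxnet:\n\n\begin{lstlisting}[caption={Sketch: resolving the native address of a DirectByteBuffer in JNI},label=jni_dbb_sketch, xleftmargin=4.0ex]\n#include <cstdint>\n#include <jni.h>\n\n// Hypothetical native method of a Java class "Example":\n// private static native long getBufferAddress(java.nio.ByteBuffer directBuffer);\nextern "C" JNIEXPORT jlong JNICALL\nJava_Example_getBufferAddress(JNIEnv* env, jclass, jobject directBuffer) {\n    void* addr = env->GetDirectBufferAddress(directBuffer);\n    jlong capacity = env->GetDirectBufferCapacity(directBuffer);\n\n    if (addr == NULL || capacity <= 0) {\n        return 0;   // not a direct buffer or not accessible\n    }\n\n    // The native code can now read/write the buffer contents directly via addr.\n    return (jlong) (uintptr_t) addr;\n}\n\end{lstlisting}\n\n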
However, these two calls increase the latency by tens to even hundreds of microseconds (with high variation).\n\nThis problem can be solved by \textbf{allocating a buffer in the native space, passing its address and size} to the Java space and \textbf{accessing it using the Unsafe API} or wrapping it as a newly allocated (Direct) ByteBuffer. The latter requires reflection to access the constructor of the DirectByteBuffer and set the address and size fields. We decided to use the Unsafe API because we map native structs and do not require any of the additional features the ByteBuffer provides. The native address is cached which allows fast exchange of data from Java to native and vice versa. To improve convenience when accessing fields of a data structure, a helper class with getter and setter wrapper methods is created to access the fields of the native struct.\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.3in]{jni_merged.pdf}\n\t\caption{Microbenchmarks to evaluate JNI call overhead and data exchange overhead using different types of memory access}\n\t\label{jni_overhead}\n\end{figure}\n\nWe evaluated different means of passing data from Java to native and vice versa as well as the function\/method call overhead. Figure \ref{jni_overhead} shows the results of the microbenchmarks used to evaluate the JNI call overhead as well as the overhead of different memory access methods. The results displayed are the averages of three runs of each benchmark executing the operation 100,000,000 times. A warm-up of 1,000 operations precedes each benchmark run. For JNI context switching, we measured the latency introduced by switching from Java to native (jtn), native to Java (ntj), native to Java with exception checking (ntjexc) and native to Java with thread detaching (ntjdet). For exchanging data between Java and native, we measured the latency introduced by accessing a 64 byte buffer in both spaces for a primitive Java byte array (ba), a Java DirectByteBuffer (dbb) and Unsafe (u). The benchmarks were executed on a machine with an Intel Core i7-5820K CPU and a Java 1.8 runtime.\n\nThe results show that the average cost of a single context switch is negligible with an average switching time of only up to 0.1 \textmu s. We exchange data using primitive function arguments only. Data structures are mapped and accessed as C-structs in the native space. In Java, we access the native C-structs using a helper class which utilizes the Unsafe library \cite{javaunsafe} as this is the fastest method in both spaces.\n\nThese results influenced the important design decision to \textbf{run native threads, attached once as daemon threads to the JVM}, which call to Java instead of Java threads calling native methods (\S \ref{send_thread}, \S \ref{receive_thread}). Furthermore, we avoid using any of the JNI provided helper functions where possible \cite{Liang:1999:JNI:520155}. For example: attaching a thread to the JVM involves expensive operations like creating a new Java thread object and various state changes to the JVM environment. Avoiding them on every context switch is crucial to latency and performance on every call.\n\nLastly, we minimized the number of calls to the Java space by combining multiple tasks into a single cross-space call instead of issuing multiple calls. For inter-space communication, we rely heavily on communication via buffers mapped to structs in native space and wrapper classes in Java (see above). This is highly application dependent and not always possible. 
But if possible and applied, this can improve the overall performance.\n\nWe applied this technique of combining multiple tasks into a single cross-space call to sending and receiving of data to minimize latency and context switching overhead. The native send and receive threads implement the most latency critical logic in the native space instead of simply wrapping ibverbs functions to be exposed to Java (\S \ref{send_thread} and \ref{receive_thread}). The counterpart to the native logic is implemented in Java (\S \ref{transport_impl_java}). In the end, we are able to reduce sending and receiving of data to a single context switching call.\n\n\section{Overview Ibdxnet and Java InfiniBand Transport}\n\label{overview_infiniband_transport}\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=2.0in]{dxnet_ib_simple.png}\n\t\caption{Layered architecture of DXNet with the IB transport and Ibdxnet (using the msgrc engine). Threads are colored green.}\n\t\label{dxnet_ib_simple}\n\end{figure}\n\nThis section gives a brief top-down introduction to the full transport implementation. Figure \ref{dxnet_ib_simple} depicts the different components and layers involved when using InfiniBand with DXNet. The \textbf{Java InfiniBand transport (IB transport)} (\S \ref{transport_impl_java}) implements DXNet's transport interface (\S \ref{dxnet_transport_interface}) and uses JNI to connect to the native counterpart. \textbf{Ibdxnet} uses the native ibverbs library to access the hardware and provides a separate subsystem for connection management, sending and receiving data. Furthermore, it implements a set of functions for the Java Native Interface to connect to the Java implementation.\n\n\section{Ibdxnet: Native InfiniBand Subsystem with Transport Engine}\n\label{ibdxnet_native}\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.0in]{ibdxnet_simple.png}\n\t\caption{Simplified architecture of Ibdxnet with the msgrc transport engine}\n\t\label{ibdxnet_simple}\n\t\vspace{-20pt}\n\end{figure}\n\nThis section elaborates on the implementation details of our native InfiniBand subsystem \textbf{Ibdxnet} which is used by the IB transport implementation in DXNet to utilize InfiniBand hardware. Ibdxnet provides the following key features: a basic foundation with re-usable components for implementations using different means of communication (e.g. messaging verbs, RDMA) or protocols, automatic connection management and transport engines using different communication primitives. Figure \ref{ibdxnet_simple} shows an outline of the different components involved.\n\nIbdxnet provides an \textbf{automatic connection and QP manager} (\S \ref{scalable_connection_management}) which can be used by every transport engine. An interface for the connection manager and a connection object allows implementations for different transport engines. The engine \textbf{msgrc} (see Figure \ref{ibdxnet_simple}) uses the provided connection management and is based on RC messaging verbs. The engine \textbf{msgud} using UD messaging verbs is already implemented and will be discussed and extensively evaluated in a separate publication.\n\nA \textbf{transport engine} implements its own protocol to send\/receive data and exposes a low-level interface. It creates an abstraction layer to hide direct interaction with the ibverbs library. 
Through the low-level interface, a transport implementation (\S \ref{transport_impl_java}) provides data-to-send and forwards received data for further processing. For example: the low-level interface of the msgrc engine does not provide concurrency control or serialization mechanisms for messages. It accepts a stream of data in one or multiple buffers for sending and provides buffers forming a stream of data on receive (\S \ref{msgrc}). This engine is connected to the Java transport counterpart via JNI and uses the existing infrastructure of DXNet (\S \ref{transport_impl_java}).\n\nFurthermore, we implemented a \textbf{loopback}-like standalone transport for debugging and for measuring the performance of the native engine only. The loopback transport creates a continuous stream of data for sending to one or multiple nodes and throws away any data received. This ensures that sending and receiving introduce no additional overhead and allows measuring the performance of different low-level aspects of our implementation. This was used to determine the maximum possible throughput with Ibdxnet (\S \ref{eval_dxnet_nodes_tp}).\n\nIn the following sections, we explain the implementation details of Ibdxnet's connection manager (\S \ref{scalable_connection_management}) and the messaging engine msgrc (\S \ref{msgrc}). Additionally, we describe best practices for using the ibverbs API and optimizations for optimal hardware utilization. Furthermore, we elaborate on how Ibdxnet connects to the IB transport in Java using JNI and how we implemented low overhead data exchange between Java and native space.\n\n\subsection{Dynamic, Scalable and Concurrent Connection Management}\n\label{scalable_connection_management}\nEfficient connection management for many nodes is a challenging task. For example, hundreds of application threads want to send data to a node but the connection is not yet established. Who creates the connection and synchronizes access by other threads? How can synchronization overhead or blocking of threads that want to get an already established connection be avoided? How is the lifetime of a connection managed?\n\nThese challenges are addressed by a \textbf{dedicated connection manager in Ibdxnet}. The connection manager handles all tasks required to establish and manage connections and hides them from the higher level application. For our higher level Java transport (\S \ref{transport_con_man}), complexity and latency of connection setup are reduced by avoiding context switching. \n\nFirst, we explain how nodes are identified, what a connection contains and how online\/offline nodes are discovered and handled. Next, we describe how existing connections are accessed and non-existing connections are created on the fly during application runtime. We explain in detail how a connection creation job is handled by the internal job manager and how connection data is exchanged with the remote in order to create a QP. 
Lastly, we briefly describe our previous attempt which failed to address the above challenges properly.\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.0in]{ibdxnet_conman1.pdf}\n\t\caption{Connection manager: Creating non-existing connections (send thread: node 1 to node 0) and re-using existing connections (recv thread: node 1 to node 5).}\n\t\label{ibdxnet_conman1}\n\end{figure}\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.0in]{ibdxnet_conman2.pdf}\n\t\caption{Connection manager: Automatic connection creation with QP data exchange (node 3 to node 0). The job \textit{CR0} is added to the back of the queue to initiate this process. The dedicated thread processes the queue by removing jobs from the front and processing them according to their type.}\n\t\label{ibdxnet_conman2}\n\end{figure}\n\nA node is identified by \textbf{a unique 16-bit integer nodeID (NID)}. The NID is assigned to a node on start of the connection manager and cannot be changed during runtime. A connection consists of the source NID (the current node) and the destination NID (the target remote node). Depending on the transport implementation, an existing connection holds one or multiple ibverbs QPs, buffers and other data necessary to send and receive data using that connection. The connection manager provides a \textbf{connection interface for the transport engines} which allows them to implement their own type of connection. The following example describes a connection with a single QP only.\n\nBefore a connection to a remote node can be established, the remote node must be discovered and known as available. The job type \textbf{node discovery} (further details about the job system follow in the next paragraphs) detects online\/offline nodes using UDP sockets over Ethernet. On startup, a list of node hostnames is provided to the connection manager. The list can be extended by adding\/removing entries during runtime for dynamic scaling. The discovery job tries to contact all non-discovered nodes of that list at regular intervals. When a node is discovered, it is removed from the list and marked as discovered. A connection can only be established with an already discovered node. If a connection to a node was already created and is lost (e.g. node crash), the NID is added back to the list in order to re-discover the node on the next iteration of the job. Node discovery is mandatory for InfiniBand in order to exchange QP information on connection creation.\n\nFigure \ref{ibdxnet_conman1} shows how existing connections are accessed and new connections are created when two threads, e.g. a send and a receive thread, are accessing the connection manager. The send thread wants to send new data to node 0 and the receive thread has received some data (e.g. from a SRQ). It has to forward it for further processing which requires information stored in each connection (e.g. a queue for the incoming data). If the connection is already established (the receive thread gets the connection to node 5), a connection handle (\textit{H5}) is returned to the calling thread. If no connection has been established so far (the send thread wants to get the connection to node 0), \textbf{a job to create the specific connection} (\textit{CR0} = create to node 0) is added to the internal job queue. 
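\n\nA minimal sketch of this get-or-create pattern is shown below. All names (the connection table, the job queue, the condition variable) are illustrative and do not reflect Ibdxnet's actual classes.\n\n\begin{lstlisting}[caption={Sketch of the get-or-create pattern used when acquiring a connection (illustrative)},label=conman_sketch, xleftmargin=4.0ex]\n#include <atomic>\n#include <condition_variable>\n#include <cstdint>\n#include <mutex>\n\nstruct Connection;                       // holds QP(s), buffers, ... (omitted here)\n\nstatic std::atomic<Connection*> g_connections[65536];   // indexed by 16-bit NID\nstatic std::mutex g_lock;\nstatic std::condition_variable g_connectionCreated;\n\nvoid EnqueueCreateJob(uint16_t nodeId);  // pushes a create job (e.g. CR0) to the job queue\n\nConnection* GetConnection(uint16_t nodeId) {\n    Connection* con = g_connections[nodeId].load();\n    if (con != nullptr) {\n        return con;                      // fast path: connection already established\n    }\n\n    EnqueueCreateJob(nodeId);\n\n    // Block until the dedicated job thread finished QP creation and the\n    // QP data exchange with the remote node.\n    std::unique_lock<std::mutex> lock(g_lock);\n    g_connectionCreated.wait(lock, [&] { return g_connections[nodeId].load() != nullptr; });\n    return g_connections[nodeId].load();\n}\n\end{lstlisting}\n\n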
The calling thread has to wait until the job is dispatched and the connection is created before being able to send the data.\n\nFigure \ref{ibdxnet_conman2} shows how connection creation is handled by the internal job thread. The job \textit{CR0} (yielded by the send thread from the previous example in Figure \ref{ibdxnet_conman1}) is pushed to the back of the job queue. The job queue might contain jobs which affect different connections, i.e. there is no dedicated per connection queue. \textbf{The dedicated connection manager thread} processes the queue by removing a job from the front and dispatching it by type. There are three types of jobs: create a connection to a node with a given NID, discover other connection managers, and close an existing connection to a node.\n\nTo create a new connection with a remote node, the current node has to create an ibverbs QP with a SQ and RQ. Both queues are cross-connected to a remote QP (send with recv, recv with send) which requires data exchange using another communication channel (sockets over Ethernet). For the job \textit{CR0}, the thread creates a new QP on the current node (3) and exchanges its QP data with the remote it wants to connect to (0) using UDP sockets. The remote (0) also creates a QP and uses the received connection information (of 3). It replies with its own QP data (0 to 3) to complete QP creation. The newly established connection is added to the connection table and is now accessible (by the send and receive thread).\n\nLastly, we briefly describe our lessons learned from our first attempt at an automatic connection manager. It relied on active connection creation: the first thread calling the connection manager to acquire a connection creates it on the fly, if it does not exist. The calling thread executes the connection data exchange, waits for the remote data and finishes connection creation. This requires coordination of all threads accessing the connection manager either to create a new connection or to get an existing one. It introduced a very complex architecture with high synchronization overhead and latency, especially when many threads are concurrently accessing the connection manager. Furthermore, it was error-prone and difficult to debug. We encountered severe performance issues when creating connections with one hundred nodes in a very short time range (e.g. all-to-all communication). This resulted in connection creation times of up to half a minute. Even with a small setup of 4 to 8 nodes, creating a connection could take up to a few seconds if multiple threads tried to create the same or different connections simultaneously.\n\n\subsection{msgrc: Transport Engine for Messaging using RC QPs}\n\label{msgrc}\nThis section describes the \textbf{msgrc} transport engine. It uses reliable QPs to implement messaging using a dedicated send and receive thread. The engine's interface allows a transport to provide a stream of data (to send) in the form of variable sized buffers and provides a stream of (received) data to a registered callback handler. \n\nThis interface is rather low-level and the backend does not implement any means of serialization\/deserialization for sending\/receiving complex data structures. In combination with DXNet (\S \ref{dxnet}), the logic for these tasks resides in the Java space with DXNet and is shared with other transports such as the NIO Ethernet transport \cite{dxnetnio}. However, there are no restrictions to implement these higher level components for the msgrc engine natively, if required. 
Further details on how the msgrc engine is connected with the Java transport counterpart are given in Section \ref{transport_impl_java}.\n\nThe following subsections explain the general architecture and interface of the transport, sending and receiving of data using dedicated threads and how various features of InfiniBand were used for optimal hardware utilization.\n\n\subsubsection{Architecture}\n\label{engine_architecture}\nThis section explains the basic architecture as well as the low-level interface of the engine. Figure \ref{ibdxnet_simple} includes the msgrc transport and can be referred to for an abstract representation of the most important components. The engine relies on our dedicated connection manager (\S \ref{scalable_connection_management}) for connection handling. We decided to use one dedicated thread for sending (\S \ref{send_thread}) and one for receiving (\S \ref{receive_thread}) to benefit from the following advantages: a clear separation of responsibilities resulting in a less complex architecture, no scheduling of send\/receive jobs as required when using a single thread for both, and higher concurrency because we can run both threads on different CPU cores concurrently. The architecture allows us to create decoupled pipeline stages using lock-free queues and ring buffers. Thereby, we avoid complex and slow synchronization between the two threads and with hundreds of threads concurrently accessing shared resources.\n\n\subsubsection{Engine interface}\n\label{engine_interface}\n\n\begin{lstlisting}[caption={Structures and callback of the msgrc engine's send interface},label=send_interface, xleftmargin=4.0ex]\nstruct NextWorkPackage {\n uint32_t posBackRel;\n uint32_t posFrontRel;\n uint8_t flowControlData;\n uint16_t nodeId;\n};\n\nstruct PrevWorkPackageResults {\n uint16_t nodeId;\n uint32_t numBytesPosted;\n uint32_t numBytesNotPosted;\n uint8_t fcDataPosted;\n uint8_t fcDataNotPosted;\n};\n\nstruct CompletedWorkList {\n uint16_t numNodes;\n uint32_t bytesWritten[NODE_ID_MAX_NUM_NODES];\n uint8_t fcDataWritten[NODE_ID_MAX_NUM_NODES];\n uint16_t nodeIds[];\n};\n\nNextWorkPackage* GetNextDataToSend(PrevWorkPackageResults* prevResults, CompletedWorkList* completionList);\n\end{lstlisting}\n\nThe low-level interface gives the target transport fine-grained control over the engine. The interface for sending data is depicted in Listing \ref{send_interface} and the one for receiving in Listing \ref{recv_interface}. Both interfaces create an abstraction hiding connection and QP management as well as how the hardware is driven with the ibverbs library. For sending data, the interface provides the callback \textit{GetNextDataToSend}. This function is called by the send thread to pull new data to send from the transport (e.g. from the ORB, see \ref{transport_send}). When called, an instance of each of the two structures \textit{PrevWorkPackageResults} and \textit{CompletedWorkList} is passed to the implementation of the callback as a parameter: the first contains information about the previous call to the function and how much data was actually sent. If the SQ is full, no further data can be sent. Instead of introducing an additional callback, we combine getting the next data with returning information about the previous send call to reduce call overhead (important for JNI access). The second parameter contains data about completed work requests, i.e. data sent for the transport. This must be used in the transport to mark data processed (e.g. 
moving the pointers of the ORB).\n\n\begin{lstlisting}[caption={Structure and callback of the msgrc engine's receive interface},label=recv_interface, xleftmargin=4.0ex]\nstruct ReceivedPackage {\n uint32_t count;\n struct Entry {\n uint16_t sourceNodeId;\n uint8_t fcData;\n IbMemReg* data;\n void* dataRaw;\n uint32_t dataLength;\n } m_entries[];\n};\n\nstruct IncomingRingBuffer {\n uint32_t m_usedEntries;\n uint32_t m_front;\n uint32_t m_back;\n uint32_t m_size;\n\n struct Entry {\n con::NodeId m_sourceNodeId;\n uint8_t m_fcData;\n uint32_t m_dataLength;\n core::IbMemReg* m_data;\n void* m_dataRaw;\n } m_entries[];\n};\n\nuint32_t Received(IncomingRingBuffer* ringBuffer);\n\nvoid ReturnBuffer(IbMemReg* buffer);\n\end{lstlisting}\n\nIf data is received, the receive thread calls the callback function \textit{Received} with an instance of the \textit{IncomingRingBuffer} structure as its parameter. This parameter holds a list of received buffers with their source NID. The transport can iterate this list and forward the buffers for further processing such as de-serialization. The transport has to return the number of elements processed and, thus, is able to control the amount of buffers it processes. Once the received buffers are processed by the transport, they must be returned to the \textit{RecvBufferPool} by calling \textit{ReturnBuffer} to allow re-using them for further receives.\n\n\subsubsection{Sending of Data}\n\label{send_thread}\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.3in]{msgrc_sge.png}\n\t\caption{Example for sending and receiving data using scatter gather elements: Get data (aggregated messages) from ORB, send 1 SGE. Receive data scattered to multiple receive buffers.}\n\t\label{msgrc_sge}\n\end{figure}\n\nThis section explains the data and control flow of the \textbf{dedicated send thread} which \textbf{asynchronously} drives the engine for sending data. Listing \ref{send_thread_code} depicts a simplified version of the contents of its main loop with the relevant aspects for this section. Details of the functions involved in the main flow are explained further below.\n\n\begin{lstlisting}[caption={Send thread main flow (simplified)},label=send_thread_code, xleftmargin=4.0ex]\nworkPackage = GetNextDataToSend(prevWorkResults, completionList);\nReset(prevWorkResults);\nReset(completionList);\n\nif (workPackage != NULL) {\n\tconnection = GetConnection(workPackage.nodeId);\n\tprevWorkResults = SendData(connection, workPackage);\n\tReturnConnection(connection);\n}\n\ncompletionList = PollCompletions();\n\end{lstlisting}\n\nThe loop starts with getting a \textit{workPackage}, the next data to send (line 1), using the engine's low-level interface (\S \ref{engine_interface}). The instance \textit{prevWorkResults} contains information about posted and non-posted data from the previous loop iteration. The instance \textit{completionList} holds data about completed sends. Both instances are reset\/nulled (lines 2-3) for re-use in the current iteration. \n\nIf the \textit{workPackage} is valid (line 5), i.e. data to send is available, the \textit{nodeId} from that package is used to get the \textit{connection} to the send target from the connection manager (line 6). The \textit{connection} and \textit{workPackage} are passed to the \textit{SendData} function (line 7). It processes the \textit{workPackage} and returns how much data was processed, i.e. 
posted to the SQ of the connection, and how much data could not be processed. The latter happens if the SQ is full and must be tracked so no data is lost. Afterwards, the thread returns the \textit{connection} to the connection manager (line 8).\n\nAt the end of a loop iteration, the thread polls the SCQ to remove any available WCs. \textbf{We share the completion queue among all SQs\/connections to avoid iterating over many connections for a task}. The loop iteration ends and the thread starts from the beginning by calling \textit{GetNextDataToSend}, providing the work results of the previous iteration. Data about polled WCs from the SCQ is stored in the \textit{completionList} and forwarded via the interface (to the transport).\n\nIf no data is available (line 5), lines 6-8 are skipped and the thread executes a completion poll only. This is important to ensure that any outstanding WCs are processed and passed to the transport (via the \textit{completionList} when calling \textit{GetNextDataToSend}). Otherwise, if no data is sent for a while, the transport will not receive any information about previously processed data. This leads to false assumptions about the available buffer space for sending data, e.g. assuming that data fits into the buffer but actually does not because the processed buffer space is not freed, yet.\n\nIn the following paragraphs, we further explain how the functions \textit{SendData} and \textit{PollCompletions} make optimal use of the ibverbs library and how this cooperates with the interleaved control flow of the main thread loop explained above.\n\nThe \textbf{SendData} function is responsible for preparing and posting FC data and normal data (payload). FC data, which determines the number of flow control windows to confirm, is a small number (< 128) and, thus, does not require a lot of space. We post it as part of the \textbf{immediate data}, which can hold up to 4 bytes, with the WR instead of using a separate side channel, e.g. another QP. This avoids the overhead of posting to and polling another QP which \textbf{benefits overall performance, especially with many simultaneous connections}. With FC data using 1 byte of the immediate data field, we use a further 2 bytes to include the NID of the source node. This allows us to identify the source of the incoming WC on the remote. Otherwise, identifying the source would be very inconvenient: the only information provided with the incoming WC is the sender's unique physical QP id. In our case, this id would have to be mapped to the corresponding NID of the sender, introducing an indirection every time a package arrives, which hurts performance.\n\nFor sending normal data (payload), the provided \textit{workPackage} holds two pointers, front and back, which enclose a memory area of data to send. This memory area belongs to a buffer (e.g. the ORB) which was registered with the protection domain on start to allow access by the HCA. Figure \ref{msgrc_sge} depicts an example with three (aggregated) ready to send messages in the ORB. We create a WR for the data to send and provide a single \textbf{SGE which takes the pointers of the enclosed memory area}. The HCA will directly read from that area without further copying of the data (zero copy). For buffer wrap-arounds, two SGEs are created and attached to one WR: one SGE for the data from the front pointer to the end of the buffer, another SGE for the data from the start of the buffer to the back pointer. 
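\n\nThe following sketch illustrates how such a SGE list could be assembled from the enclosed ORB area and posted; the variable names and the exact packing of the immediate data field are illustrative assumptions, not Ibdxnet's actual code.\n\n\begin{lstlisting}[caption={Sketch: building the SGE list for one send WR from the ORB area enclosed by front and back},label=sge_sketch, xleftmargin=4.0ex]\n#include <arpa/inet.h>\n#include <cstring>\n#include <infiniband/verbs.h>\n\n// Illustrative sketch: orbAddr/orbSize describe the registered ORB, orbMr is its ibv_mr,\n// front/back are the positions taken from the work package.\nstatic int postOrbArea(struct ibv_qp* qp, struct ibv_mr* orbMr, uint64_t orbAddr,\n                       uint32_t orbSize, uint32_t front, uint32_t back,\n                       uint8_t fcData, uint16_t ownNodeId) {\n    struct ibv_sge sges[2];\n    int numSges = 0;\n\n    if (front <= back) {\n        // contiguous area: one SGE from front to back\n        sges[numSges++] = { orbAddr + front, back - front, orbMr->lkey };\n    } else {\n        // wrap-around: front to end of buffer, then start of buffer to back\n        sges[numSges++] = { orbAddr + front, orbSize - front, orbMr->lkey };\n        sges[numSges++] = { orbAddr, back, orbMr->lkey };\n    }\n\n    struct ibv_send_wr wr;\n    memset(&wr, 0, sizeof(wr));\n    wr.sg_list = sges;\n    wr.num_sge = numSges;\n    wr.opcode = IBV_WR_SEND_WITH_IMM;\n    wr.send_flags = IBV_SEND_SIGNALED;\n    // assumed packing: 1 byte FC data, 2 bytes source NID in the immediate data field\n    wr.imm_data = htonl(((uint32_t) fcData << 16) | ownNodeId);\n\n    struct ibv_send_wr* badWr = NULL;\n    return ibv_post_send(qp, &wr, &badWr);\n}\n\end{lstlisting}\n\n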
If the size of the area to send (sum of all SGEs) exceeds the maximum configurable receive size, the data to send must be sliced into multiple WRs. \textbf{Multiple WRs are chained to a linked list to minimize call overhead} when posting them to the SQ using \textit{ibv\_post\_send}. This greatly increases performance compared to posting multiple standalone WRs with single calls.\n\nThe number of SGEs of a WR can be 0 if no normal data is available to send but FC data is available. To send FC data only, we write it to the immediate data field of a WR along with our source NID and post it without any SGEs attached which results in a zero-length data WR. \n\nThe \textbf{PollCompletions} function calls \textit{ibv\_poll\_cq} \textbf{once to poll for any completions available} on the SCQ. A SCQ is used instead of per connection CQs to avoid iterating the CQs of all connections which impacts performance. The send thread keeps track of the number of posted WRs and, thus, knows how many WCs are outstanding and expected to arrive on the SCQ. If none are expected, polling is skipped. \textit{ibv\_poll\_cq} is called only once per PollCompletions call, and every call tries to poll WCs in batches to keep the call overhead minimal.\n\nExperiments have shown that most calls to \textit{ibv\_poll\_cq}, even under high load, return empty, i.e. no WRs have completed. Thus, polling the SCQ until at least one completion is received is the wrong approach and greatly impacts overall performance. If the SQ of another connection is not full and there is data available to send, this method wastes CPU resources on busy polling instead of processing further data to send. The performance impact (resulting in low throughput) increases with the number of simultaneous connections being served. Furthermore, this increases the chance of SQs running empty because time is wasted on waiting for completions instead of keeping all SQs filled. \textbf{Full SQs ensure that the HCA is kept busy which is the key to optimal performance}.\n\n\subsubsection{Receiving of Data}\n\label{receive_thread}\n\n\begin{lstlisting}[caption={Receive thread main flow (simplified)},label=recv_thread_code, xleftmargin=4.0ex]\nworkCompletions = PollCompletions();\n\nif (recvQueuePending < ibqSize) {\n Refill();\n}\n\nif (workCompletions > 0) {\n\tProcessCompletions(workCompletions);\n}\n\nif (!IncomingRingBufferIsEmpty()) {\n DispatchReceived();\n}\n\end{lstlisting}\n\nAnalogous to Section \ref{send_thread}, this section explains the data and control flow of the \textbf{dedicated receive thread} which \textbf{asynchronously} drives the engine for receiving data. Listing \ref{recv_thread_code} depicts a simplified version of its main loop with the relevant aspects for this section. Details of the functions involved in the main flow are explained further below.\n\nData is received using a SRQ and SCQ instead of multiple receive and completion queues. This avoids iterating over all open connections and checking for data availability, which introduces overhead with an increasing number of simultaneous connections. Equally sized buffers for receiving data (configurable size and amount) are pooled and returned for re-use by the transport, once processed (\S \ref{engine_interface}).\n\nThe loop starts by calling \textit{PollCompletions} (line 1) to poll the SCQ for WCs. Before processing the WCs returned, the SRQ is refilled by calling \textit{Refill} (line 4), if it is not completely filled, yet. 
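\n\nThe chaining of receive WRs used by \textit{Refill} (described below) can be sketched as follows. For brevity, each WR carries a single SGE here, whereas the engine attaches a configurable number of SGEs per WR; all names and the fixed batch limit are illustrative assumptions.\n\n\begin{lstlisting}[caption={Sketch: refilling the SRQ with a chain of receive WRs posted in a single call},label=srq_refill_sketch, xleftmargin=4.0ex]\n#include <cstdint>\n#include <cstring>\n#include <infiniband/verbs.h>\n\n// Illustrative sketch, not Ibdxnet's actual code: build 'count' receive WRs (count <= 64),\n// each with one SGE from the pooled receive buffers, chain them and post them at once.\nstatic int refillSrq(struct ibv_srq* srq, struct ibv_mr* mr,\n                     char* buffers[], uint32_t bufferSize, int count) {\n    struct ibv_sge sges[64];\n    struct ibv_recv_wr wrs[64];\n\n    for (int i = 0; i < count; i++) {\n        sges[i].addr = (uintptr_t) buffers[i];\n        sges[i].length = bufferSize;\n        sges[i].lkey = mr->lkey;\n\n        memset(&wrs[i], 0, sizeof(wrs[i]));\n        wrs[i].wr_id = (uint64_t) i;\n        wrs[i].sg_list = &sges[i];\n        wrs[i].num_sge = 1;\n        wrs[i].next = (i + 1 < count) ? &wrs[i + 1] : NULL;   // chain to a linked list\n    }\n\n    struct ibv_recv_wr* badWr = NULL;\n    return ibv_post_srq_recv(srq, &wrs[0], &badWr);           // one call for the whole chain\n}\n\end{lstlisting}\n\n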
Next, if any WCs were polled previously, they are processed by calling \textbf{ProcessCompletions} (line 8). This step pushes them to the \textbf{Incoming Ring Buffer (IRB)}, a temporary ring buffer, before dispatching them. Finally, if the IRB is not empty (line 11), the thread tries to forward the contents of the IRB by calling \textit{DispatchReceived} via the interface to the transport (\S \ref{engine_interface}).\n\nThe following paragraphs further elaborate on how \textit{PollCompletions}, \textit{Refill}, \textit{ProcessCompletions} and \textit{DispatchReceived} make optimal use of the ibverbs library and how this cooperates with the interleaved control flow of the main thread loop explained above.\n\nThe \textbf{PollCompletions} function is very similar to the one explained in Section \ref{send_thread}. WCs are polled in batches of at most the currently available IRB space and buffered before being processed.\n\nThe \textbf{Refill} function adds new receive WRs to the SRQ, if the SRQ is not completely filled and receive buffers from the receive buffer pool are available. Every WR consists of a configurable number of SGEs which make up the maximum receive size. This is also the limiting size the send thread can post with a single WR (sum of the sizes of the SGE list). Using this method, the receive thread does not have to take care of any software slicing of received data because the HCA scatters one big chunk of sent data transparently to multiple (smaller) receive buffers on the receiver side. At last, \textit{Refill} chains the WRs to a linked list which is posted with a single call to \textit{ibv\_post\_srq\_recv} for minimal overhead.\n\nIf WCs are buffered from the previous call to \textit{PollCompletions}, the \textbf{ProcessCompletions} function iterates this list of WCs. For each WC of the list, it gets the source NID and FC data from the immediate data field. If the recv length of the WC is non-zero, the attached SGEs contain the received data scattered to the receive buffers of the SGE list.\n\nAs the receive thread does not know or have any means of determining the size of the next incoming data, the challenge is optimal receive buffer usage with minimal internal fragmentation. Here, fragmentation describes the amount of receive buffers provided with a WR as SGEs in relation to the amount of received data written to that block of buffers. The less data written to the buffers, the higher the fragmentation. In the example shown in Figure \ref{msgrc_sge}, the three aggregated and serialized messages are received in five buffers but the last buffer is not completely used.\n\nThis fragmentation cannot be avoided but must be handled to prevent negative effects like empty buffer pools or low per buffer utilization. Receive buffers\/SGEs of a WR that do not contain any received data, because the amount of received data is less than the total size of the buffers of the SGE list, are pushed back to the buffer pool. All receive buffers of the SGE list that contain valid received data are pushed to the IRB (in the order they were received). \n\nDepending on the target application, the fragmentation degree can be lowered by configuring the receive buffer and pool sizes accordingly. Applications typically sending small messages perform well with small receive buffer sizes. 
However, throughput might decrease slightly for applications sending mainly big messages with small receive buffer sizes, as more WRs are required per data send (the data is sliced into multiple WRs).\n\nIf the IRB contains any elements, the \textbf{DispatchReceived} function tries to forward them to the transport via the \textit{Received} callback (\S \ref{engine_interface}). The callback returns the number of elements it consumed from the IRB and, thus, is allowed to consume none up to all available elements. The consumed buffers are returned asynchronously to the receive buffer pool by the transport, once it has finished processing them.\n\n\subsubsection{Load Adaptive Thread Parking}\n\label{thread_parking}\n\nThe send and receive threads must be kept busy running their loops to send and receive data as fast as possible to ensure low latency. However, pure busy polling without any sleeping or yielding introduces high CPU load and permanently occupies two cores of the CPU. This is unnecessary during periods when the network is not used frequently. We do not want the send and receive threads to waste CPU resources and, thereby, decrease the overall node performance. Experiments have shown that simply adding sleep or yield operations highly impacts network latency and throughput and introduces high fluctuations \cite{dxnet}.\n\nTo solve this, we used a simple but efficient wait pattern we call \textit{load adaptive thread parking}. After a defined amount of time (e.g. 100 ms) of polling without data being available, the thread enters a yield phase and calls yield on every loop iteration if no data is available. After another timeframe has passed (e.g. 1 sec), the thread enters a parking phase calling sleep\/park with a minimum value of 1 ns on every loop iteration, reducing CPU load significantly. The lowest possible value (1 ns) ensures that the scheduler of the operating system puts the thread to sleep for the shortest possible period of time. Once data is available, the current phase is interrupted and the timer is reset. This ensures busy looping for the next iterations, keeping latency low for successive messages and under high load. For further details including evaluation results refer to our DXNet publication \cite{dxnet}.\n\n\section{IB Transport Implementation in DXNet (Java)}\n\label{transport_impl_java}\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=2.8in]{ibdxnet_overview.pdf}\n\t\caption{Components of Ibdxnet, IB transport and DXNet involved in data and control flow (simplified).}\n\t\label{transport_java}\n\end{figure}\n\nThis section describes the transport implementation for DXNet in Java which utilizes the low-level transport engines, e.g. msgrc (\S \ref{msgrc}), provided by Ibdxnet (\S \ref{ibdxnet_native}). We describe the native interface which implements the low-level interface exposed by the engine (\S \ref{engine_interface}) and how it is used in the DXNet IB transport for higher level connection management (\S \ref{transport_con_man}), sending serialized data from the ORB (\S \ref{transport_send}) and handling incoming receive buffers from remote nodes (\S \ref{transport_recv}).\n\nFigure \ref{transport_java} depicts the involved components with the main aspects of their data and control flow which are referred to in the following subsections.\n\nIf an application wants to send one or multiple messages, it calls DXNet which serializes them into the ORB and signals the WriteInterestManager (WIM) about available data (\S \ref{dxnet_send}). 
The native send thread periodically checks the WIM for data to send and, if available, gets it from the ORB. Depending on the size, the data to send might be sliced into multiple elements which are posted to the SQ as one or multiple work requests (\S \ref{send_thread}).\n\nReceived data on the recv queue is written to one or multiple buffers (depending on the amount of data) from a native buffer pool (\S \ref{receive_thread}). Without further processing, the buffers are forwarded to the Java space and pushed to the IncomingBufferQueue (IBQ). DXNet's de-serialization processes the buffers in order and creates messages (Java objects) which are dispatched to pre-registered callbacks using dedicated message handler threads (\S \ref{dxnet_receive}).\n\n\subsection{Connection Handling}\n\label{transport_con_man}\nFor implementing new transports, DXNet provides an interface to create transport-specific connection types. The DXNet core, which is shared across all transport implementations, manages the connections for the target application by automatically creating new connections on demand or closing connections if a configurable threshold is exceeded (\S \ref{dxnet_con_man}). \n\nFor the IB transport implementation, the derived connection does not have to store further data or implement additional functionality. This is already stored and handled by the connection manager of Ibdxnet. It reduces overall architectural complexity by avoiding split functionality between Java and native space. Furthermore, it avoids context switching between Java and native code. \n\nOnly the NID of either the target node to send to or the source node of the received data is exchanged between the Java and native space. Thus, \textbf{connection setup} in the transport implementation in Java is limited to creating the Java connection object for DXNet's connection manager. \textbf{Connection close and cleanup} is similar, with an additional callback to the native library to signal to Ibdxnet's connection management that a connection was closed.\n\n\subsection{Dispatch of Ready-to-send Data}\n\label{transport_send}\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.0in]{ibdxnet_wim.pdf}\n\t\caption{Internals of the Write Interest Manager (WIM).}\n\t\label{ibdxnet_wim}\n\end{figure}\n\nThe engine msgrc runs a dedicated thread for sending data. The send thread pulls new data from the transport via the \textit{GetNextDataToSend} function of the low-level interface (\S \ref{engine_interface}, \S \ref{send_thread}). In order to make this and other callbacks (for connection management and receiving data) available to the IB transport, a lightweight JNI binding following the guidelines explained in Section \ref{java_and_native} was created. The transport implements the \textit{GetNextDataToSend} function exposed by the JNI binding. To get new data to send, the send thread calls the JNI binding which is implemented in the IB transport in Java.\n\nNext, we elaborate on the implementation of \textit{GetNextDataToSend} in the IB transport, how the send thread gets data to send and how the different states of the data (posted, not posted, send completed) are handled in combination with the existing ORB data structure.\n\nApplication threads using DXNet and sending messages are concurrently serializing them into the ORB (\S \ref{dxnet_send}). Once serialization completes, the thread signals the transport that there is ready to send (RTS) data in the ORB. 
For the IB transport, this signal \textbf{adds a write interest to the dedicated Write Interest Manager (WIM)}. The WIM manages interest tokens using a lock-free list (based on a ring buffer) and per connection atomic counters for both RTS normal data from the ORB and FC data. Each type has a separate atomic counter but, if not explicitly stated, we refer to them as one for ease of comprehension.\n\nThe list contains the nodeIDs of the connections that have RTS data in the order they were added. The atomic counter is used to keep track of the number of interests signalled, i.e. the number of times the callback was triggered for the selected NID. \n\nFigure \ref{ibdxnet_wim} depicts this situation with two threads (T1 and T2) which finished serializing data to the ORBs of two independent connections (3 and 2). The table with atomic counters keeps track of the number of signalled interests for RTS data\/messages per connection. By calling \textit{GetNextDataToSend}, the send thread from Ibdxnet checks a lock-free list which contains the nodeIDs of the connections with at least one write interest available. The nodeIDs are added to the list in order, but only if they are not already in the list. This is detected by checking if the atomic counter returned 0 after a fetch and add operation. This mechanism ensures that data from many connections is processed in a round robin fashion. Furthermore, avoiding duplicates in the queue sets an upper bound for the memory requirement, which is \textit{sizeof(nodeID) * maxNumConnections}. Otherwise, the queue could grow depending on the load and number of active connections. If the queue of the WIM is empty, the send thread aborts and returns to the native space.\n\nThe send thread uses the NID it removed from the queue to get and reset the number of interests of the corresponding atomic counter. If there are any interests available for FC data, the send thread processes them by getting the FC from the connection and reading, but not yet removing, the stored FC data. For interests concerning normal data, the send thread gets the ORB from the connection and reads the current front and back pointers. The pointers of the ORB are not modified, only read (details below). With this data, along with the NID of the connection, the send thread returns to the native space for processing (\S \ref{send_thread}).\n\nEvery time the send thread returns to the Java space to get more data to send, it carries the parameters \textit{prevWorkResults}, which contains data about the previous send operation, and \textit{completionList}, which contains data about completed WRs, i.e. data send confirmations (\S \ref{send_thread}). For performance reasons, this data resides in native memory as structs and is mapped and accessed using DirectByteBuffers (\S \ref{java_and_native}).\n\nThe asynchronous workflow used to send and receive data by posting WRs and polling WCs must be reflected by updating the ORB and FC accordingly. Depending on the fill level of the SQ, the send thread might not be able to post all normal data or FC it retrieved in the previous iteration. The \textit{prevWorkResults} parameter contains this information about how much normal and FC data was and was not processed. This information must be preserved for the next send operation to avoid sending data multiple times. 
For the ORB however, we cannot simply move the front pointer because this would free up memory which is not yet confirmed to be sent.\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.0in]{ib_ringbuffer.pdf}\n\t\caption{Extended outgoing ring buffer used by IB transport.}\n\t\label{ib_ringbuffer}\n\end{figure}\n\nThus, we introduce a second front pointer, front posted, which is only known to and modified by the send thread and allows it to keep track of already posted data. Figure \ref{ib_ringbuffer} depicts the most important aspects of the enhanced ORB which is used for the IB transport. In total, this creates three virtual areas of memory designated to the following states:\n\begin{itemize}\n \item Data posted but not confirmed: front to front posted\n \item Data RTS and not posted: front posted to back\n \item Free memory for send threads to serialize to: back to front\n\end{itemize}\n\nUsing the parameter \textit{prevWorkResults}, the front posted pointer is moved by the amount of data posted. Any non-posted data remains unprocessed (front posted is not moved to cover the entire area of RTS data). For data provided with the parameter \textit{completionList}, the front pointer is updated according to the number of bytes now confirmed to be sent. A similar but less complex approach is applied to updating FC.\n\n\subsection{Process Incoming Buffers}\n\label{transport_recv}\n\nThe dedicated receive thread of msgrc pushes received data to the low-level interface. Analogous to how RTS data is pulled from the IB transport via the JNI binding, the receive thread uses a receive function provided by the binding to push the received buffers to the IB transport in Java space. All received buffers are stored as a batch in the \textit{ReceivedPackage} data structure (\S \ref{engine_interface}) to minimize context switching overhead. For performance reasons, this data resides in native memory as structs and is mapped and accessed using DirectByteBuffers (\S \ref{java_and_native}).\n\nThe receive thread iterates the package in Java space, dispatches received FC data to each connection and pushes the received buffers (including the connection of the source node) to the IBQ (\S \ref{dxnet_receive}). The buffers are handled and processed asynchronously by the MessageCreationCoordinator and one or multiple MessageHandlers of the DXNet core (all of them Java threads). Once the buffers are processed (de-serializing their contents), the Java threads return them asynchronously to the transport engine's receive buffer pool (\S \ref{receive_thread}).\n\n\n\section{Evaluation}\n\label{eval}\n\nFor better readability, we refer to DXNet with the IB transport, Ibdxnet and the msgrc engine as DXNet from here onwards.\n\nWe implemented commonly used microbenchmarks to compare DXNet to two MPI implementations supporting InfiniBand: MVAPICH2 and FastMPJ. We decided to compare against two MPI implementations for the following reasons: To the best of our knowledge, there is no other system available that offers all features of DXNet, and big data applications implementing their own dedicated network stack do not offer it as a separate application\/library like DXNet does. MPI can be used to partially cover some features of DXNet but not all (\S \ref{related_work}). We are aware that MPI is targeting a different application domain, mainly HPC, whereas DXNet is targeting big data. 
However, MPI has already been used in big data applications as well, and several aspects related to the network stack and the technologies overlap in both application domains.\n\nBandwidth with two nodes is compared using typical uni- and bi-directional benchmarks. We also compared scalability using an all-to-all benchmark (worst-case scenario) with up to 8 nodes. Latency is compared by measuring the RTT with a request-response communication pattern. These benchmarks are executed single threaded to compare all three systems.\n\nFurthermore, we compared how DXNet and MVAPICH2 perform in a multi-threaded environment which is typical for big data but not HPC applications. However, we can only compare them using three benchmarks. A multi-threaded latency benchmark is not possible since it would require MVAPICH2 to implement additional infrastructure to store and map requests to responses and to dynamically dispatch incoming data to handler callbacks on multiple receive threads (similar to DXNet). MVAPICH2 does not provide such a processing pipeline. FastMPJ cannot be compared at all here because it only supports single threaded environments. Table \\ref{eval_benchmarks} summarizes the systems and benchmarks executed.\n\nAll benchmarks were executed on up to 8 nodes of our private cluster, each with a single socket Intel Xeon CPU E5-1650 v3, 6 cores running at 3.50 GHz and 64 GB RAM. The nodes are running Ubuntu 16.04 with kernel version 4.4.0-57. All nodes are equipped with a Mellanox MT27500 HCA, connected with 56 Gbps links to a single Mellanox SX6015 18 port switch. For Java applications, we used the Oracle JVM version 1.8.0\\_151.\n\n\\begin{table}[]\n\\centering\n\\begin{tabular}{|l|c|c|c|}\n\\hline\n & \\multicolumn{1}{l|}{FastMPJ} & \\multicolumn{1}{l|}{MVAPICH2} & \\multicolumn{1}{l|}{DXNet} \\\\ \\hline\nUni-dir. TP ST & x & x & x \\\\ \\hline\nBi-dir. TP ST & x & x & x \\\\ \\hline\nLatency ST & x & x & x \\\\ \\hline\nAll-to-all TP ST & x & x & x \\\\ \\hline\nUni-dir. TP MT & & & x \\\\ \\hline\nBi-dir. TP MT & & x & x \\\\ \\hline\nLatency MT & & & x \\\\ \\hline\nAll-to-all MT & & & x \\\\ \\hline\n\\end{tabular}\n\\caption{Summary of benchmarks and systems. TP = throughput, ST = single threaded, MT = multi-threaded}\n\\label{eval_benchmarks}\n\\end{table}\n\n\\subsection{Benchmarks}\n\\label{benchmarks}\n\nThe \\textit{osu} benchmarks included with MVAPICH2 implement typical micro benchmarks to measure uni- and bi-directional bandwidth and uni-directional latency which reflect basic usage of any network stack for point-to-point communication. \\textit{osu\\_latency} is used as a foundation and extended with recording of all RTTs to determine the 95th, 99th and 99.9th percentile after execution. The latency measured is the full RTT from when the source sends a request to the destination up to when the corresponding response is received by the source. For evaluating throughput, the benchmarks \\textit{osu\\_bw} and \\textit{osu\\_bibw} were combined into a single benchmark and extended to enable all-to-all bi-directional execution with more than two nodes. We consider this a relevant benchmark to show if the system is capable of handling multiple connections under high load. This is a common situation found in big data applications as well as backend storages \\cite{yahoo}. On all-to-all, every node receives from all other nodes and sends messages to all other nodes in a round robin fashion. 
The bi-directional and all-to-all results presented are the aggregated send throughputs of all participating nodes. We added options to support multi-threaded sending and receiving using a configurable number of send and receive threads. As the per-processor core count increases, the multi-threading aspect becomes more and more important. Furthermore, our target application domain big data relies heavily on multi-threaded environments.\n\nFor the evaluation of FastMPJ, we ported the \\textit{osu} benchmarks to Java. The benchmarks for evaluating a multi-threaded MPI process were omitted because FastMPJ does not support multi-threaded processes. DXNet comes with its own benchmarks already implemented which are comparable to the \\textit{osu} benchmarks.\n\nThe \\textit{osu} benchmarks use a configurable parameter \\textit{window\\_size} (WS) which denotes the number of messages sent in a single batch. Since MPI does not support implicit message aggregation like DXNet, we executed all MPI experiments with increasing WS to determine bandwidth peaks and saturation under optimal conditions and ensure a fair comparison to DXNet's built in aggregation. No MPI collectives are required for the benchmarks and, thus, aren't evaluated.\n\nAll benchmarks are executed three times and their variance is displayed using error bars. Throughputs are specified in GB\/s, latencies\/RTTs in us and message rates in mmps (million messages per second). All throughput benchmarks send 100 million messages and all latency benchmarks 10 million messages. The total number of messages is incrementally halved starting with 4 kb message size to avoid unnecessary long running benchmark runs. All throughputs measured are based on the total amount of sent payload bytes. This does not include any overhead like message headers or envelopes that are required by the systems for message identification or routing.\n\nFurthermore, we included the results of the ib perf tools \\textit{ib\\_write\\_bw} and \\textit{ib\\_write\\_lat} as baselines to all end-to-end type benchmarks. These simple perf tools cannot be compared directly to the complex systems evaluated. But, these baselines show the best possible network performance (without any overhead by the evaluated system) and for rough comparisons of the systems across multiple plots. We chose parameters that reflect the configuration values of DXNet as close as possible (but still allow comparisons to FastMPJ and MVAPICH2 as well): receive queue size 2000 and send queue size 20 for both bandwidth and latency measurements; 100,000,000 messages for bandwidth and 10,000,000 for latency.\n\n\\subsection{DXNet with Ibdxnet Transport}\n\n\\begin{table*}[t]\n\\centering\n\\begin{tabular}{|l|l|}\n\\hline\nIBQ max. capacity buffer count & 8192 \\\\ \\hline\nIBQ max. capacity aggregated data size & 128 MB \\\\ \\hline\nMessage handlers & varying (see experiments) \\\\ \\hline\nIB SQ size (per connection) & 20 \\\\ \\hline\nIB SRQ size & 2000 (default value for up to 100 connections) \\\\ \\hline\nMax. 
connection limit & 100 \\\\ \\hline\nRecv buffer pool capacity & 4 GB \\\\ \\hline\nFlow control window & 16 MB \\\\ \\hline\nFlow control threshold & 0.1 \\\\ \\hline\nReceive buffer size (for small message sizes 1 - 16 kb) & 32 kb \\\\ \\hline\nSGEs per WR (for small message sizes 1 - 16 kb) & 4 \\\\ \\hline\nReceive buffer size (for medium\/large message sizes 32 kb - 1 MB) & 1 MB \\\\ \\hline\nSGEs per WR (for medium\/large message sizes 32 kb - 1 MB) & 1 \\\\ \\hline\n\\end{tabular}\n\\caption{DXNet configuration values for experiments}\n\\label{dxnet_config}\n\\end{table*}\n\nWe configured DXNet using the parameters depicted in Table \\ref{dxnet_config}. The configuration values were determined with various debugging statistics and experiments, and are currently considered optimal configuration parameters.\n\nFor comparing single threaded performance, the number of application threads and message handlers (referred to as MH) is limited to one each to allow comparing it to FastMPJ and MVAPICH2. DXNet's multi-threaded architecture does not allow combining the logic of the application send thread and a message handler into a single thread. Thus, DXNet's ``single threaded'' benchmarks are always executed with one dedicated send and one dedicated receive thread.\n\nThe following subsections present the results of the various benchmarks. First, we present the results of all single threaded benchmarks with one send thread: uni- and bi-directional throughput, uni-directional latency and all-to-all with increasing node count. Afterwards, the results of the same four benchmarks are presented with multiple send threads.\n\n\\subsubsection{Uni-directional Throughput}\n\\label{eval_dxnet_uni_tp}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{dxnet_uni_0_1_1.pdf}\n\t\\caption{\\textbf{DXNet}: 2 nodes, uni-directional throughput and message rate with one application send thread, increasing message size and number of message handlers}\n\t\\label{eval_dxnet_uni_bw}\n\\end{figure}\n\nThe results of the uni-directional benchmark are depicted in figure \\ref{eval_dxnet_uni_bw}. Considering one MH, DXNet's throughput peaks at 5.9 GB\/s at a message size of 16 kb. For larger messages (32 kb to 1 MB), one MH is not sufficient to de-serialize and dispatch all incoming messages fast enough and drops to a peak bandwidth of 5.4 GB\/s. However, this can be resolved by simply using two MHs. Now, DXNet's throughput peaks and saturates at 5.9 GB\/s with a message size of just 4 kb and stays saturated up to 1 MB. Message sizes smaller than 4 kb also benefit significantly from the shorter receive processing times by utilizing two MHs. Further MHs can still improve performance but only slightly for a few message sizes.\n\nFor small messages up to 64 bytes, DXNet achieves peak message rates of 4.0-4.5 mmps using one MH. Multiple MHs cannot significantly increase performance for such small messages further. However, with growing message size (512 byte to 16 kb), the message rate can be increased with two message handlers.\n\nCompared to the baseline performance of \\textit{ib\\_send\\_bw}, DXNet's peak performance is approx. 0.5 to 1.0 mmps less. With increasing message size, this gap closes and DXNet even surpasses the baseline 1 kb to 32 kb message sizes when using multiple threads. DXNet peaks close to the baseline's peak performance of 6.0 GB\/s. The results with small message sizes are fluctuating independent of the number of MHs. 
This can be observed on all other benchmarks with DXNet measuring message\/payload throughput as well. It is a common issue which can be observed when running high load throughput benchmarks using the bare ibverbs API as well.\n\nThis benchmark shows that DXNet is capable of handling a vast amount of small messages efficiently. The application send thread and, thus, the user does not have to bother with aggregating messages explicitly because DXNet handles this transparently and efficiently. The overall performance benefits from multiple message handlers increasing receive throughput. Large messages do impact performance with one MH because the de-serialization of data consumes most of the processing time during receive. However, simply adding at least another MH solves this issue and further increases performance.\n\n\\subsubsection{Bi-directional Throughput}\n\\label{eval_dxnet_bi_tp}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{dxnet_bi_0_2_1_1.pdf}\n\t\\caption{\\textbf{DXNet}: 2 nodes, bi-directional throughput and message rate with one application send thread, increasing message size and number of message handlers}\n\t\\label{eval_dxnet_bi_bw}\n\\end{figure}\n\nFigure \\ref{eval_dxnet_bi_bw} depicts the results of the bi-directional benchmark with one send thread. With one MH, the aggregated throughput peaks at approx. 10.4 GB\/s for 8 kb. Using two message handlers, the fluctuations starting with 16 kb messages using one MH can be resolved (as already explained in \\ref{eval_dxnet_uni_tp}). Further increasing the performance using four MHs is not possible in this benchmark and actually degrades it for 512 byte to 2 kb message sizes. DXNet's throughput peaks at approx. 10.4 GB\/s and saturates with a message size of 32 kb.\n\nThe peak aggregated message rate for small messages up to 64 bytes is varying from approx. 6 to 6.9 mmps with one MH. Using more MHs cannot improve performance significantly for this benchmark. Due to the multi-threaded and highly pipelined architecture of DXNet, these variations cannot be avoided, especially when exclusively handling many small messages.\n\nCompared to the baseline performance of \\textit{ib\\_send\\_bw}, there is still room for improvement for DXNet's performance on small message sizes (up to 2.5 mmps difference). For medium message sizes, \\textit{ib\\_send\\_bw} yields slightly higher throughput for up to 1 kb message size. But, DXNet surpasses \\textit{ib\\_send\\_bw} on 1 kb to 16 kb message size. DXNet's peak performance is approx. 1.1 GB\/sec less than \\textit{ib\\_send\\_bw}'s (11.5 GB\/sec).\n\nOverall, this benchmark shows that DXNet can deliver great performance especially for small messages similar to the uni-directional benchmark (\\S \\ref{eval_dxnet_uni_tp}).\n\n\\subsubsection{Uni-directional Latency}\n\\label{eval_dxnet_uni_lat}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{dxnet_uni_lat_3_1_1_st.pdf}\n\t\\caption{\\textbf{DXNet}: 2 nodes, uni-directional RTT and message rate with one application send thread, increasing message size}\n\t\\label{fig_eval_dxnet_uni_lat}\n\\end{figure}\n\nFigure \\ref{fig_eval_dxnet_uni_lat} depicts the average RTTs as well as the 95th, 99th and 99.9th percentile of the uni-directional latency benchmark with one send thread and one MH. For message sizes up to 512 bytes, DXNet achieves an avg. 
RTT of 7.8 to 8.3 \\textmu s, a 95th percentile of 8.5 to 8.9 \\textmu s, a 99th percentile of 8.9 to 9.2 and 99.9th percentile of 11.8 to 12.7 \\textmu s. This results in a message rate of approx 0.1 mmps. As expected, starting with 1 kb message size, latency increases with increasing message size.\n\nThe RTT can be broken down into three parts: DXNet, Ibdxnet and hardware processing. Taking the lowest avg. of 7.8 \\textmu s, DXNet requires approx. 3.5 \\textmu s of the total RTT (the full breakdown is published in our other publication \\cite{dxnet}) and the hardware approx. 2.0 \\textmu s (assuming avg. one way latency of 1 \\textmu s for the used hardware). Message de- and serialization as well as message object creation and dispatching are part of DXNet. For Ibdxnet, this results in approx. 2.3 \\textmu s processing time which includes JNI context switching as well as several pipeline stages explained in the earlier sections.\n\nCompared to the baseline performance of \\textit{ib\\_send\\_lat}, DXNet's latency is significantly higher. Obviously, additional latency cannot be avoided with such a long and complex processing pipeline. Considering the breakdown mentioned above, the native part Ibdxnet, which calls ibverbs to send and receive data, is to some degree comparable to the minimal perf tool \\textit{ib\\_send\\_bw}. With a total of 2.3 \\textmu s (of the full pipeline's 7.8 \\textmu s), the total RTT is just slightly higher than \\textit{ib\\_send\\_bw}'s 1.8 \\textmu s. But, Ibdxnet already includes various data structures for state handling and buffer scheduling (\\S \\ref{send_thread}, \\S \\ref{receive_thread}) which \\textit{ib\\_send\\_bw} doesn't. Buffers for sending data are re-used instantly and the data received is discarded immediately.\n\n\\subsubsection{All-to-all Throughput with up to 8 Nodes}\n\\label{eval_dxnet_nodes_tp}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{dxnet_bi_0_2_1_1_nodes.pdf}\n\t\\caption{\\textbf{DXNet}: 2 to 8 nodes, all-to-all aggregated send throughput and message rate with one application send thread, increasing message size and one message handler}\n\t\\label{eval_dxnet_nodes}\n\\end{figure}\n\nFigure \\ref{eval_dxnet_nodes} shows the aggregated send throughput and message rate of all participating nodes (up to 8) executing the all-to-all benchmark with one send thread and one MH with increasing message size. For small up to 64 byte messages, peak message rates of 7.0 mmps, 14.5 mmps, 20.1 mmps and 25.6 mmps are achieved for 2, 4, 6 and 8 nodes. Throughput increases with increasing node count peaking at 8 kb message size with 10.4 GB\/s for 2 nodes. The peaks for 4, 6 and 8 nodes are reached with 16 kb message size at 18.9 GB\/s, 26.0 GB\/s and 32.4 GB\/s. Incrementally adding two nodes, throughput is increased by 8.5 GB\/s (for 2 to 4 nodes), by 7.1 GB\/s (for 4 to 6 nodes) and 6.4 GB\/s (for 6 to 8 nodes). One would expect approx. equally large throughput increments but the gain is noticeably lowered with every two nodes added.\n\nWe tried different configuration parameters for DXNet and ibverbs like different MTU sizes, SGE counts, receive buffer sizes, WRs per SQ\/SRQ or CQ size. No combination of settings allowed us to improve this situation.\n\nWe assume that the all-to-all communication pattern puts high stress on the HCA which, at some point, cannot keep up with processing outstanding requests. 
To first rule out any software issues with DXNet, we implemented a low-level ``loopback''-like test which uses only the native part of Ibdxnet. The loopback test does not involve any dynamic message posting when sending data or data processing when receiving. Instead, a buffer equal to the size of the ORB is processed by Ibdxnet's send thread on every iteration and posted to every participating SQ. This ensures that all SQs are filled and are quickly refilled once at least one WR was processed. When receiving data on the SRQ, all buffers received are directly put back into the pool without processing and the SRQ is refilled. This ensures that no additional processing overhead is added for sending and receiving data. Thus, Ibdxnet's loopback test comes close to a perf-tool-like benchmark. We executed the benchmark with 2, 4, 6 and 8 nodes which yielded aggregated throughputs of 11.7 GB\/s, 21.7 GB\/s, 28.3 GB\/s and 34.0 GB\/s. \n\nThese results are very close to the performance of the full DXNet stack but don't rule out all software related issues yet. The overall aggregated bandwidth could still somehow be limited by Ibdxnet. Thus, we executed another benchmark which first executes all-to-all communication with up to 8 nodes and then, once bandwidth is saturated, switches to a ring formation for communication without restarting the benchmark (every node sends only to its successor determined by NID). \n\nOnce the nodes switch the communication pattern during execution, the per node aggregated bandwidth increases very quickly and reaches a maximum aggregated bandwidth of approx. $(11.7 \/ 2 \\times num\\_nodes)$ GB\/s independent of the number of nodes used. This rules out total bandwidth limitations for software and hardware. Furthermore, we can now rule out any performance issues in DXNet or even ibverbs with connection management (e.g. too many QPs allocated).\n\nThis leads to the assumption that the HCA cannot keep up with processing outstanding WRs when SQs are under high load (always filled with WRs). With more than 3 SQs per node, the total bandwidth drops noticeably. Similar results with other systems further support this assumption (\\S \\ref{eval_fmpj_nodes} and \\ref{eval_mva_nodes}).\n\n\\subsubsection{Uni-directional Throughput Multi-threaded}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{dxnet_uni_0_1_4.pdf}\n\t\\caption{\\textbf{DXNet}: 2 nodes, uni-directional throughput and message rate with multiple application send threads, increasing message size and 4 message handlers}\n\t\\label{eval_dxnet_uni_bw_mt}\n\\end{figure}\n\nFigure \\ref{eval_dxnet_uni_bw_mt} shows the uni-directional benchmark executed with 4 MHs and 1 to 16 send threads. For 1 to 4 send threads throughput saturates at 5.9 GB\/s at either 4 kb or 8 kb messages. For 256 byte to 8 kb, using one thread yields better throughput than two or sometimes four threads. However, running the benchmark with 8 and 16 send threads increases overall throughput significantly for all messages greater than 32 bytes with saturation starting at 2 kb message size. DXNet's pipeline benefits from the many threads posting messages to the ORB concurrently. This results in greater aggregation of multiple messages and allows higher buffer utilization for the underlying transport.\n\nDXNet also increases message throughput on small message sizes up to 512 byte, from approx. 4.0 mmps up to 6.7 mmps for 16 send threads. 
Again, performance is slightly worse with two and four compared to a single thread.\n\nFurthermore, DXNet even surpasses the baseline performance of \\textit{ib\\_send\\_bw} when using multiple send threads. However, the peak performance cannot be improved further which shows the current limit of DXNet for this benchmark and the hardware used.\n\n\\subsubsection{Bi-directional Throughput Multi-threaded}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{dxnet_bi_0_2_1_4_mt.pdf}\n\t\\caption{\\textbf{DXNet}: 2 nodes, bi-directional throughput and message rate with multiple application send threads, increasing message size and 4 message handlers}\n\t\\label{eval_dxnet_bi_bw_mt}\n\\end{figure}\n\nFigure \\ref{eval_dxnet_bi_bw_mt} shows the bi-directional benchmark executed with 4 MHs and 1 to 16 send threads. With more than one send thread, the aggregated throughput peaks at approx. 10.4 and 10.7 GB\/s with messages sizes of 2 and 4 kb. DXNet delivers higher throughputs for all medium and small messages with increasing send thread count. The baseline performance of \\textit{ib\\_send\\_bw} is reached on small message sizes and even surpassed with medium sized messages up to 16 kb. The peak throughput is not reached showing DXNet's current limit with the used hardware.\n\nThe overall performance with 8 and 16 send threads don't differ noticeably which indicates saturation of DXNet's processing pipeline. For small messages (less than 512 byte), the message rates also increase with increasing send thread count. Again, saturation starts with 8 send threads with a message rate of approx. 8.6 to 10.2 mmps.\n\nDXNet is capable of handling a multi-threaded environment under high load with CPU over-provisioning and still delivers high throughput. Especially for small messages, DXNet's pipeline even benefits from the highly concurrent activity by aggregating many messages. This results in higher buffer utilization and, for the user, higher overall throughput.\n\n\\subsubsection{Uni-directional Latency Multi-threaded}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{dxnet_uni_lat_3_1_1.pdf}\n\t\\caption{\\textbf{DXNet}: 2 nodes, uni-directional avg. RTT and message rate with multiple application send threads, increasing message size and 4 message handlers}\n\t\\label{eval_dxnet_uni_lat_mt}\n\\end{figure}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{dxnet_uni_lat_3_1_1_2.pdf}\n\t\\caption{\\textbf{DXNet}: 2 nodes, uni-directional 95th, 99th and 99.9th percentile RTT and message rate with multiple application send threads, increasing message size and 4 message handlers}\n\t\\label{eval_dxnet_uni_lat_mt2}\n\\end{figure}\n\nFigure \\ref{eval_dxnet_uni_lat_mt} depicts the avg. RTT and message rate of the uni-directional latency benchmark with up to 16 send threads and 4 MHs. The 95th, 99th and 99.9th percentiles are depicated in figure \\ref{eval_dxnet_uni_lat_mt2}. DXNet keeps a very stable avg. RTT of 8.1 \\textmu s for message sizes of 1 to 512 bytes with one send thread. Using two send threads, this value just slightly increases. With four or more send threads the avg. RTT increases to approx 9.3 \\textmu s. 
When the total number of threads, which includes DXNet's internal threads, MHs and send threads, exceeds the core count of the CPU, DXNet switches to different parking strategies for the different thread types which slightly increase latency but greatly reduce overall CPU load (\\S \\ref{thread_parking}).\n\nThe message rate can be increased up to 0.33 mmps with up to 4 send threads as, practically, every send thread can use a free MH out of the 4 available. With 8 and 16 send threads, the MHs on the remote node must be shared and DXNet's over-provisioning is active which reduces the overall throughput. The percentiles shown in figure \\ref{eval_dxnet_uni_lat_mt2} reflect this situation very well and increase noticeably. \n\nWith a single thread, as already discussed in \\ref{eval_dxnet_uni_lat}, the difference between the avg. (7.8 to 8.3 \\textmu s) and 99.9th percentile (11.8 to 12.7 \\textmu s) RTT for message sizes less than 1 kb is approx. 4 to 5 \\textmu s. When doubling the send thread count, the 99.9th percentiles roughly double as well. When over-provisioning the CPU, we cannot avoid the higher than usual RTT caused by the increasing number of messages getting posted.\n\n\\subsubsection{All-to-all Throughput with up to 8 Nodes Multi-threaded}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{dxnet_bi_0_2_16_4_nodes_mt.pdf}\n\t\\caption{\\textbf{DXNet}: 2 to 8 nodes, all-to-all aggregated send throughput and message rate with 16 application send threads, increasing message size and 4 message handlers}\n\t\\label{eval_dxnet_nodes_mt}\n\\end{figure}\n\nFigure \\ref{eval_dxnet_nodes_mt} shows the results of the all-to-all benchmark with up to 8 nodes, 16 application send threads and 4 message handlers. Compared to the single threaded results (\\S \\ref{eval_dxnet_nodes_tp}), DXNet achieves slightly higher throughputs for all node counts: for two nodes, throughput saturates at 4 kb message size and peaks at 10.7 GB\/s; for 4 nodes, throughput saturates at 4 kb message size and peaks at 19.5 GB\/s; for 6 nodes, throughput saturates at 2 kb message size and peaks at 27.0 GB\/s; for 8 nodes, throughput saturates at 2 kb message size and peaks at 33.6 GB\/s. However, the message rate is improved significantly for small messages up to 64 byte with 8.4 to 10.3 mmps, 18.9 to 21.1 mmps, 27.6 to 31.4 mmps and 33.2 to 43.4 mmps for 2 to 8 nodes.\n\nThese results show that DXNet delivers high throughputs and message rates under high loads with increasing node and thread count. Small messages profit significantly through better aggregation and buffer utilization.\n\n\\subsubsection{Summary Results}\n\nThis section briefly summarizes the most important results and numbers of the previous benchmarks. All values are considered ``up to'' and show the possible peak performance in the given benchmark.\n\n\\textbf{Single-threaded}:\n\\begin{itemize}\n \\item \\textbf{Uni-directional throughput} One MH: saturation with 16 kb messages, peak throughput at 5.9 GB\/s; Two MHs: saturation with 4 kb messages, peak throughput at 5.9 GB\/s; Peak message rate of 4.0 to 4.5 mmps for small messages up to 64 bytes \n \\item \\textbf{Bi-directional throughput} Saturation with 8 kb messages at 10.4 GB\/s with one MH; Peak message rate of 6.0 to 6.9 mmps for small messages up to 64 bytes\n \\item \\textbf{Uni-directional latency} up to 512 byte messages: avg. 
7.8 to 8.3 \\textmu s, 95th percentile of 8.5 to 8.9 \\textmu s, 99th percentile of 8.9 to 9.2, 99.9th percentile of 11.8 to 12.7 \\textmu s; Peak message rate of 0.1 mmps.\n \\item \\textbf{All-to-all nodes} With 8 nodes: Total aggregated peak throughput of 32.4 GB\/s, saturation with 16 kb message size; Peak message rate of 25.6 mmps for small messages up to 64 bytes.\n\\end{itemize}\n\n\\textbf{Multi-threaded}:\nOverall, DXNet benefits from higher message aggregation through multiple outstanding messages in the ORB posted concurrently by many threads.\n\\begin{itemize}\n \\item \\textbf{Uni-directional throughput} Saturation at 5.9 GB\/s at 4 kb message size\n \\item \\textbf{Bi-directional throughput} Overall improved throughput for many message sizes, saturation at 10.7 GB\/s with 4 kb message size, message rate of 8.6 to 10.2 mmps for small messages up to 64 bytes\n \\item \\textbf{Uni-directional latency} Slightly higher latencies than single threaded as long as enough MHs serve available send threads. Message rate can be increased with additional send threads but at the cost of increasing avg. latency. The 99.9th percentiles roughly double when doubling the number of send threads.\n \\item \\textbf{All-to-all nodes}: With up to 8 nodes, 33.6 GB\/s peak throughput, saturation at 2 kb message size, 33.2 to 43.4 mmps for up to 64 byte messages \n\\end{itemize}\n\n\\subsection{FastMPJ}\n\\label{eval_fmpj}\n\nThis section describes the results of the benchmarks executed with FastMPJ and compares them to the results of DXNet presented in the previous sections. We used FastMPJ 1.0\\_7 with the device \\textit{ibvdev} to run the benchmarks on InfiniBand hardware. The \\textit{osu} benchmarks of MVAPICH2 were ported to Java (\\S \\ref{benchmarks}) and used for all following experiments. Since FastMPJ does not support multithreading in a single process, all benchmarks were executed single threaded and compared to the single threaded results of DXNet, only.\n\n\\subsubsection{Uni-directional Throughput}\n\\label{eval_fmpj_uni_tp}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{fmpj_uni_bw.pdf}\n\t\\caption{\\textbf{FastMPJ}: 2 nodes, uni-directional throughput and message rate with increasing message and window size}\n\t\\label{eval_fmpj_unibw}\n\\end{figure}\n\nFigure \\ref{eval_fmpj_unibw} shows the results of executing the uni-directional benchmark with two nodes with increasing message size. Furthermore, the benchmark was executed with increasing WS to ensure bandwidth saturation. As expected, throughput increases with increasing message size and bandwidth saturation starts at a medium message size of 64k with approx. 5.7 GB\/s. The actual peak throughput is reached with large 512k message for a WS of 64 with 5.9 GB\/s. \n\nFor small message sizes up to 512 byte and independent of the WS, FastMPJ achieves a message rate of approx. 1.0 mmps. Furthermore, the results show that the WS doesn't matter for message sizes up to 64 KB. For 128 KB to 1 MB, FastMPJ profits from explicit aggregation with increasing WS. This indicates that ibvdev might include some message aggregation mechanism.\n\nCompared to the baseline performance of \\textit{ib\\_send\\_bw}, FastMPJ's performance is always inferior to it with a peak performance of 5.9 GB\/s close to \\textit{ib\\_send\\_bw}'s with 6.0 GB\/s.\n\nCompared to the results of DXNet (\\S \\ref{eval_dxnet_uni_tp}), DXNet's throughput saturates and peaks earlier at a message size of 16 kb with 5.9 GB\/s. 
However, if using one MH, throughput drops for larger messages down to 5.4 GB\/s due to increased message processing time (de-serialization). Such a mechanism is absent from FastMPJ, whereas DXNet can further improve performance by using two MHs. With two MHs, DXNet's throughput peaks even earlier at 5.9 GB\/s with 4 kb message size. For small messages of up to 64 bytes, DXNet achieves 4.0 to 4.5 mmps compared to FastMPJ with 1.0 mmps.\n\n\\subsubsection{Bi-directional Throughput}\n\\label{eval_fmpj_bi_tp}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{fmpj_bi_bw.pdf}\n\t\\caption{\\textbf{FastMPJ}: 2 nodes, bi-directional (aggregated) throughput and message rate with increasing message and window size}\n\t\\label{eval_fmpj_bibw}\n\\end{figure}\n\nThe results of the bi-directional benchmark are depicted in figure \\ref{eval_fmpj_bibw}. Again, throughput increases with increasing message size peaking at 10.8 GB\/s with WS 2 and large 512 kb messages. However, when handling messages of 128 kb and greater, throughput peaks at approx. 10.2 GB\/s for the WSs 4 to 32 and saturation varies depending on the WS. For WSs 4 to 32, throughput is saturated with 64 kb messages, for WSs 1 and 2 at 512 kb. Starting at 128 kb message size, WSs of 1 and 2 achieve slightly better results than the greater WSs. Especially WS 64 drops significantly with message sizes of 128 kb and greater. However, for message sizes of 64 kb to 512 kb, FastMPJ profits from explicit aggregation.\n\nCompared to the uni-directional results (\\S \\ref{eval_fmpj_uni_tp}), FastMPJ does profit to some degree from explicit aggregation for small messages with 1 to 128 bytes. WS 1 to 16 allow higher message throughputs with WS 16 as an optimal value peaking at approx. 2.4 mmps for 1 to 128 byte messages. Greater WSs degrade message throughput significantly. However, this does not apply to message sizes of 256 bytes where greater explicit aggregation always increases message throughput.\n\nCompared to the baseline performance of \\textit{ib\\_send\\_bw}, FastMPJ's performance is again always inferior to it with a difference in peak performance of 0.7 GB\/s (10.8 GB\/s to 11.5 GB\/s).\n\nWhen comparing to DXNet's results (\\S \\ref{eval_dxnet_bi_tp}), the throughputs are nearly equal with 10.7 GB\/s also at 512 kb message size. However, DXNet outperforms FastMPJ for medium sized messages by reaching a peak throughput of 10.4 GB\/s for just 8 kb messages. Even with a WS of 64, FastMPJ can only achieve 6.3 GB\/s aggregated throughput here. For small messages of up to 64 bytes, DXNet clearly outperforms FastMPJ with 6 to 7.2 mmps compared to 1.9 to 2.1 mmps with a WS of 16.\n\n\\subsubsection{Uni-directional Latency}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{fmpj_lat.pdf}\n\t\\caption{\\textbf{FastMPJ}: 2 nodes, uni-directional latency and message rate with increasing message and window size}\n\t\\label{eval_fmpj_lat}\n\\end{figure}\n\nThe results of the latency benchmark are depicted in figure \\ref{eval_fmpj_lat}. FastMPJ achieves a very low average RTT of 2.4 \\textmu s for up to 16 byte messages. This just slightly increases to 2.8 \\textmu s for up to 128 byte messages and to 4.5 \\textmu s for up to 512 byte messages. The 95th percentile is 3 \\textmu s for up to 64 byte messages, which slightly increases to 5 \\textmu s for up to 512 byte messages. 
Message sizes up to 16 bytes achieve a 7.7 \\textmu s RTT for the 99th percentile with up to 10 \\textmu s for up to 512 byte messages. For the 99.9th percentile, message sizes up to 16 byte fluctuate slightly with an RTT of 14.5 to 15.5 \\textmu s. This continues for 32 byte to 2 kb with a low of 16.3 \\textmu s and a high of 19.5 \\textmu s. The average message rate peaks at approx. 0.41 mmps for up to 16 byte messages.\n\nCompared to the baseline performance of \\textit{ib\\_send\\_lat}, FastMPJ's average RTT comes close to its 1.8 \\textmu s and closes that gap slightly further starting with 256 byte message size. \n\nComparing the avg. RTT and 95th percentile to DXNet's results (\\S \\ref{eval_dxnet_uni_lat}), FastMPJ outperforms DXNet with an up to four times lower RTT. This is also reflected by the message rate of 0.41 mmps for FastMPJ and 0.1 mmps for DXNet. The breakdown given in Section \\ref{eval_dxnet_uni_lat} explains the rather high RTTs and the amount of processing time spent by DXNet on major sections of the pipeline. However, even though DXNet's avg. RTT for message sizes up to 512 byte is higher than FastMPJ's, DXNet achieves lower 99th (8.9 to 9.2 \\textmu s) and 99.9th percentiles (11.8 to 12.7 \\textmu s) than FastMPJ.\n\n\\subsubsection{All-to-all with Increasing Node Count}\n\\label{eval_fmpj_nodes}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{fmpj_bi_4nodes.pdf}\n\t\\caption{\\textbf{FastMPJ}: 4 nodes, all-to-all (aggregated) throughput and message rate with increasing message and window size}\n\t\\label{eval_fmpj_4nodes}\n\\end{figure}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{fmpj_bi_6nodes.pdf}\n\t\\caption{\\textbf{FastMPJ}: 6 nodes, all-to-all (aggregated) throughput and message rate with increasing message and window size}\n\t\\label{eval_fmpj_6nodes}\n\\end{figure}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{fmpj_bi_8nodes.pdf}\n\t\\caption{\\textbf{FastMPJ}: 8 nodes, all-to-all (aggregated) throughput and message rate with increasing message and window size}\n\t\\label{eval_fmpj_8nodes}\n\\end{figure}\n\nFigures \\ref{eval_fmpj_4nodes}, \\ref{eval_fmpj_6nodes} and \\ref{eval_fmpj_8nodes} show the aggregated send throughputs and message rates of the all-to-all benchmark running on 4, 6 and 8 nodes. The results for 2 nodes were already discussed in \\ref{eval_fmpj_bi_tp} and are depicted in figure \\ref{eval_fmpj_bibw}. The results for WS 64 and messages greater than 64 kb are absent because FastMPJ hangs (no error output) on message sizes greater than 64 kb with WS 64. We couldn't resolve this by re-running the benchmark several times and with different configuration parameters like increasing buffer sizes.\n\nFastMPJ scales well with increasing node count on all-to-all communication with the following peak throughputs: 10.8 GB\/s with WS 2 and 512 kb messages on 2 nodes, 19.2 GB\/s with WS 64 and 1 MB messages on 4 nodes, 26.3 GB\/s with WS 32 and 1 MB messages on 6 nodes, 32.7 GB\/s with WS 32 and 1 MB messages on 8 nodes. This results in per node send throughputs of 5.1 GB\/s, 4.8 GB\/s, 4.38 GB\/s and 4.08 GB\/s. The gradually decreasing per node throughput seems to be a non-software-related issue as explained in Section \\ref{eval_dxnet_nodes_tp}. 
For small messages up to 64 bytes, FastMPJ achieves the following peak message rates: 2.4 mmps for WS 16 on 2 nodes, 7.2 mmps for WS 16 on 4 nodes, 7.6 mmps for WS 16 on 6 nodes and 9.5 mmps for WS 8 on 8 nodes.\n\nDXNet also reaches peak throughputs close to FastMPJ's (\\S \\ref{eval_dxnet_nodes_tp}) on all node counts. However, DXNet saturates bandwidth very early with just 8 kb and 16 kb message sizes. Furthermore, DXNet outperforms FastMPJ's message rates for small messages on all node counts by up to three times (7.0 mmps, 15.0 mmps, 21.1 mmps and 27.3 mmps).\n\n\\subsubsection{Summary Results}\n\nThis section briefly summarizes the most important results and key numbers of the previous benchmarks. All values are considered ``up to'' and show the possible peak performance in the given benchmark and are single-threaded only. All results benefit from explicit aggregation using the WS.\n\n\\begin{itemize}\n \\item \\textbf{Uni-directional throughput} Saturation at 64 kb message size with 5.7 GB\/s; Peak throughput at 512 kb message size with 5.9 GB\/s; 1.0 mmps for message sizes up to 64 byte\n \\item \\textbf{Bi-directional throughput} Saturation at 64 kb message size with 10.8 GB\/s; 2.4 mmps for message sizes up to 128 byte\n \\item \\textbf{Uni-directional latency} For up to 512 byte messages: avg. RTT of 2.4 to 4.5 \\textmu s, 95th percentile of 3 to 5 \\textmu s; 99th percentile of 7.7 to 10 \\textmu s; 99.9th percentile of 16.3 to 19.5 \\textmu s \n \\item \\textbf{All-to-all nodes} With 8 nodes: Total aggregated peak throughput of 32.7 GB\/s, saturation with 1 MB message size; Peak message rate of 9.5 mmps for small messages up to 64 byte\n\\end{itemize}\n\nComparing FastMPJ to DXNet's single threaded results, DXNet outperforms FastMPJ on small messages with an up to 4 times higher message rate on both uni- and bi-directional benchmarks. However, FastMPJ achieves a lower average and 95th percentile latency on the uni-directional latency benchmark. But, even with a more complicated and dynamic pipeline, DXNet achieves lower 99th and 99.9th percentiles than FastMPJ, demonstrating high stability. On all-to-all communication with up to 8 nodes, DXNet reaches similar throughputs to FastMPJ's for large messages but outperforms FastMPJ's message rate by up to three times for small messages. \\textbf{DXNet is always better for small messages}.\n\n\\subsection{MVAPICH2}\n\\label{eval_mvapich2}\n\nThis section describes the results of the benchmarks executed with MVAPICH2 and compares them to the results of DXNet. All \\textit{osu} benchmarks (\\S \\ref{benchmarks}) were executed with MVAPICH2-2.3. Since MVAPICH2 supports MPI calls with multiple threads of the same process, some benchmarks were executed single and multi-threaded. 
We set the following environmental variables for optimal performance and comparability: \n\\begin{itemize}\n \\item MV2\\_DEFAULT\\_MAX\\_SEND\\_WQE=128\n \\item MV2\\_DEFAULT\\_MAX\\_RECV\\_WQE=128\n \\item MV2\\_SRQ\\_SIZE=1024\n \\item MV2\\_USE\\_SRQ=1\n \\item MV2\\_ENABLE\\_AFFINITY=1\n\\end{itemize}\n\nAdditionally for the multi-threaded benchmarks, the following environmental variables were set: \n\\begin{itemize}\n \\item MV2\\_CPU\\_BINDING\\_POLICY=hybrid\n \\item MV2\\_THREADS\\_PER\\_PROCESS=X (where X equals the number of threads we used when executing the benchmark)\n \\item MV2\\_HYBRID\\_BINDING\\_POLICY=linear\n\\end{itemize}\n\n\\subsubsection{Uni-directional Throughput}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{mva_tp_uni_st.pdf}\n\t\\caption{\\textbf{MVAPICH2}: 2 nodes, uni-directional throughput and message rate, single threaded with increasing message and window size}\n\t\\label{eval_mva_uni_st}\n\\end{figure}\n\nThe results of the uni-directional single threaded benchmark are depicted in figure \\ref{eval_mva_uni_st}. The overall throughput increases with increasing message size, peaking at 5.9 GB\/s with multiple WS on large messages: 512 kb with 64 WS, 256 kb with 32 WS, 512 KB with 8 WS, 512 kb with 4 WS and 1 MB with 2 WS. Bandwidth saturation starts at approx. 64 kb to 128 kb for WSs of 16 or greater. This also applies to smaller messages with up to 64 bytes. Reaching a peak of 4.0 mmps is only possible with WS 64. If send calls are not batched explicitly, message rates are rather low (0.45 mmps for WS 1).\n\nCompared to the baseline performance of \\textit{ib\\_send\\_bw}, MVAPICH2's peak performance is approx. 1.0 mmps less for small messages. With increasing message size, on a WS of 64, the performance comes close to the baseline and even exceeds it for 2 kb to 8 kb messages. MVAPICH2 peaks very close to the baseline's peak performance of 6.0 GB\/s.\n\nDXNet achieves very similar results (\\S \\ref{eval_dxnet_uni_tp}) compared to MVAPICH2 but without relying on explicit aggregation. DXNet's throughput saturates and peaks earlier at a message size of 16 kb with 5.9 GB\/s. However, if using one MH, throughput drops for larger messages down to 5.4 GB\/s due to increased message processing time (de-serialization). As already explained in Section \\ref{eval_fmpj_uni_tp}, this can be resolved by using two MHs. For small messages of up to 64 bytes, DXNet achieves an equal to slightly higher message rate of 4.0 to 4.5 mmps.\n\n\\subsubsection{Bi-directional Throughput}\n\\label{eval_mva_bench_bi_st}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{mva_tp_bi_st.pdf}\n\t\\caption{\\textbf{MVAPICH2}: 2 nodes, bi-directional throughput and message rate, single threaded with increasing message and window size}\n\t\\label{eval_mva_bi_st}\n\\end{figure}\n\nThe results of the bi-directional single threaded benchmark are depicted in figure \\ref{eval_mva_bi_st}. Overall throughput increases with message size and, like on the uni-directional benchmark, benefits a lot from greater WSs. The aggregated throughput peaks at 11.1 GB\/s with 512 kb messages on multiple WSs. Throughputs for 128 byte to 512 kb message sizes benefit from explicit aggregation. The message rate for small messages up to 64 bytes do not always profit from explicit aggregation. Message rate increases with WS 1 to 8 and peaks at 4.7 mmps with WS 8. 
However, greater WSs degrade the message rate slightly compared to the optimal case.\n\nCompared to the baseline performance of \\textit{ib\\_send\\_bw}, MVAPICH2's peak performance for small messages is approx. half of \\textit{ib\\_send\\_bw}'s 9.5 mmps. With increasing message size, the throughput of MVAPICH2 comes close to \\textit{ib\\_send\\_bw}'s with WS 64 and 32 for 4 and 8 kb messages only. Peak throughput for large messages comes close to \\textit{ib\\_send\\_bw}'s 11.5 GB\/s.\n\nCompared to DXNet's results (\\S \\ref{eval_dxnet_bi_tp}), the aggregated throughput is slightly higher than DXNet's (10.7 GB\/s). However, DXNet outperforms MVAPICH2 for medium sized messages by reaching a peak throughput of 10.4 GB\/s compared to 9.5 GB\/s (on WS 64) for just 8 kb messages. Furthermore, DXNet offers a higher message rate of 6 to 7.2 mmps on small messages up to 64 bytes. DXNet achieves overall higher performance without relying on explicit message aggregation.\n\n\\subsubsection{Uni-directional Latency}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{mva_lat_uni_st.pdf}\n\t\\caption{\\textbf{MVAPICH2}: 2 nodes, uni-directional latency and message rate, single threaded with increasing message size}\n\t\\label{eval_mva_uni_lat_st}\n\\end{figure}\n\nFigure \\ref{eval_mva_uni_lat_st} shows the results of the uni-directional single threaded latency benchmark. MVAPICH2 achieves a very low average RTT of 2.1 to 2.4 \\textmu s for up to 64 byte messages and up to 3.9 \\textmu s for up to 512 byte messages. The 95th, 99th and 99.9th percentile are just slightly higher than the average RTT with 2.2 to 4.0 \\textmu s for the 95th, 2.4 to 4.0 \\textmu s for the 99th and 2.4 to 5.0 \\textmu s for the 99.9th (for up to 512 byte message size). This results in an average message rate of 0.43 to 0.47 mmps for up to 64 byte messages and 0.25 to 0.40 mmps for 128 to 512 byte messages.\n\nCompared to the baseline performance of \\textit{ib\\_send\\_lat}, MVAPICH2's average, 95th, 99th, and 99.9th percentile RTT are very close to the baseline. With a minimum of 2.1 \\textmu s for the average latency and a maximum of 5.0 \\textmu s for the 99.9th percentile on small messages, MVAPICH2 shows that its overall overhead is very low.\n\nCompared to DXNet's results (\\S \\ref{fig_eval_dxnet_uni_lat}), MVAPICH2 achieves an overall lower latency. DXNet's average with 7.8 to 8.3 \\textmu s is nearly four times higher. The 95th (8.5 to 8.9 \\textmu s), 99th (8.9 to 9.2 \\textmu s) and 99.9th percentile (11.8 to 12.7 \\textmu s) are also at least two to three times higher. MVAPICH2 implements only a very thin layer of abstraction. Application threads issuing MPI calls are pinned to cores and call ibverbs functions directly after passing through these few layers of abstraction. DXNet however implements multiple pipeline stages with de\/-serialization and multiple (JNI) context\/thread switches. Naturally, data passing through such a long pipeline takes longer to process, which impacts overall latency. 
However, DXNet traded latency for multithreading support and performance as well as efficient handling of small messages.\n\n\\subsubsection{All-to-all Throughput with up to 8 Nodes}\n\\label{eval_mva_nodes}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{mva_bi_4nodes.pdf}\n\t\\caption{\\textbf{MVAPICH2}: 4 nodes, aggregated send throughputs and message rates, single threaded with increasing message size}\n\t\\label{eval_mva_nodes4}\n\\end{figure}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{mva_bi_6nodes.pdf}\n\t\\caption{\\textbf{MVAPICH2}: 6 nodes, aggregated send throughputs and message rates, single threaded with increasing message size}\n\t\\label{eval_mva_nodes6}\n\\end{figure}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{mva_bi_8nodes.pdf}\n\t\\caption{\\textbf{MVAPICH2}: 8 nodes, aggregated send throughputs and message rates, single threaded with increasing message size}\n\t\\label{eval_mva_nodes8}\n\\end{figure}\n\nFigures \\ref{eval_mva_nodes4}, \\ref{eval_mva_nodes6} and \\ref{eval_mva_nodes8} show the results of executing the all-to-all benchmark with 4, 6 and 8 nodes. The results for 2 nodes are depicted in figure \\ref{eval_mva_bi_st}. \n\nMVAPICH2 achieves a peak throughput of 19.5 GB\/s with 128 kb messages on WSs 16, 32 and 64, with saturation starting at approx. 32 kb message size. WS 8 gets close to the peak throughput as well but the remaining WSs peak lower for messages greater than 32 kb. Minor fluctuations appear for WS 1 to 16 for 1 kb to 16 kb messages. For small messages of up to 512 byte, the smaller the WS the better the performance. With WS 2, a message rate of 8.4 to 8.8 mmps for up to 64 byte messages is achieved and 6.6 to 8.8 mmps for up to 512 byte. \n\nRunning the benchmark with 6 nodes, MVAPICH2 hits a peak throughput of 27.3 GB\/s with 512 kb messages on WSs 16, 32 and 64. Saturation starts with a message size of approx. 64 to 128 kb depending on the WS. For 1 kb to 32 kb messages, the fluctuations increased compared to executing the benchmark with 4 nodes. Again, the message rate is degraded when using large WSs for small messages. An optimal message rate of 11.9 to 13.1 mmps is achieved with WS 2 for up to 64 byte messages.\n\nWith 8 nodes, the benchmark peaks at 33.3 GB\/s with 64 kb messages on a WS of 64. Again, the WS does matter for large messages as well, with WSs 16, 32 and 64 reaching the peak throughput and starting saturation at approx. 128 kb message size. The remaining WSs peak significantly lower. The fluctuations for mid range message sizes of 1 kb to 64 kb increased further compared to 6 nodes. Most notably, the performance with 4 kb messages and WS 4 is nearly 10 GB\/s better than 4 kb with WS 64. With up to 64 byte messages, a message rate of 16.5 to 17.8 mmps is achieved. For up to 512 byte messages, the message rate varies with 13.5 to 17.8 mmps. As with the previous node counts, a smaller WS increases the message rate significantly while larger WSs degrade performance by a factor of two.\n\nMVAPICH2 has the same ``scalability issues'' as DXNet (\\S \\ref{eval_dxnet_nodes_tp}) and FastMPJ (\\S \\ref{eval_fmpj_nodes}). The maximum achievable bandwidth matches what was determined with the other systems. With the same results on three different systems, it is very unlikely that this is some kind of software issue like a bug or bad implementation but most likely a hardware limitation. 
So far, we haven't seen this issue discussed in any other publication and think it is noteworthy to know what the hardware is currently capable of.\n\nCompared to DXNet (\\S \\ref{eval_dxnet_nodes_tp}), MVAPICH2 reaches slightly higher peak throughputs for large messages. However, this peak as well as saturation is reached later at 32 to 512 kb messages compared to DXNet with approx. 16 kb. The fluctuations for mid range size messages cannot be compared as DXNet does not rely on explicit aggregation. For small messages up to 64 byte, DXNet achieves significantly higher message rates, with peaks at 7.0 mmps, 15.0 mmps, 21.1 mmps and 27.3 mmps for 2 to 8 nodes, compared to MVAPICH2.\n\n\\subsubsection{Bi-directional Throughput Multi-threaded}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{mva_tp_bi_mt.pdf}\n\t\\caption{\\textbf{MVAPICH2}: 2 nodes, bi-directional throughput and message rate, multi-threaded with one send and one recv thread with increasing message and window size}\n\t\\label{eval_mva_bi_mt}\n\\end{figure}\n\nFigure \\ref{eval_mva_bi_mt} shows the results of the bi-directional multi-threaded benchmark with two threads (on each node): a separate thread each for sending and receiving. In our case, this is the simplest multi-threading configuration to utilize more than one thread for MPI calls. The plot shows highly fluctuating results of the three runs executed as well as overall low throughput compared to the single threaded results (\\S \\ref{eval_mva_bench_bi_st}). Throughput peaks at 8.8 GB\/s with a message size of 512 kb for WS 16. A message rate of 0.78 to 1.19 mmps is reached for up to 64 byte messages for WS 32.\n\nWe tried varying the configuration values (e.g. queue sizes, buffer sizes, buffer counts) but could not find configuration parameters that yielded significantly better, especially less fluctuating, results. Furthermore, the benchmarks could not be finished when sending 100,000,000 messages. When using \\textit{MPI\\_THREAD\\_MULTIPLE}, the memory consumption increases continuously and exhausts the total memory available on our machine (64 GB). We reduced the number of messages to 1,000,000 which still consumes approx. 20\\% of the total main memory but at least executes and finishes within a reasonable time. This does not happen with the widely used \\textit{MPI\\_THREAD\\_SINGLE} mode.\n\nMVAPICH2 implements multi-threading support using a single global lock for various MPI calls which include \\textit{MPI\\_Isend} and \\textit{MPI\\_Irecv} used in the benchmark. This fulfils the requirements described in the MPI standard and avoids a complex architecture with lock-free data structures. However, a single global lock reduces concurrency significantly and does not scale well with increasing thread count \\cite{MPIMultithreading17}. This effect impacts performance less on applications with short bursts and low thread count. However, for multi-threaded applications under high load, a single-threaded approach with one dedicated thread driving the network decoupled from the application threads might be a better solution. Data between application threads and the network thread can be exchanged using data structures such as buffers, queues or pools, as provided by DXNet (see the sketch below).\n\nMVAPICH2's implementation of multi-threading does not allow improving performance by increasing the send or receive thread counts. 
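As a minimal sketch of the decoupled design referred to above (hypothetical names, not DXNet's actual API), application threads hand messages to a single dedicated network thread through a bounded queue instead of serializing all calls behind a global lock:

\begin{verbatim}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Application threads enqueue messages; one network thread drains
// the queue and is the only thread touching the transport.
public class DedicatedNetworkThreadSketch {
    private final BlockingQueue<byte[]> sendQueue =
            new ArrayBlockingQueue<>(1024);
    private volatile boolean running = true;

    // Called concurrently by any number of application threads;
    // blocks if the queue is full (back pressure).
    public void send(byte[] message) throws InterruptedException {
        sendQueue.put(message);
    }

    // Run by the single network thread.
    public void runNetworkThread() throws InterruptedException {
        while (running) {
            byte[] message = sendQueue.poll(100, TimeUnit.MILLISECONDS);
            if (message != null) {
                post(message); // e.g. serialize into an ORB or post a WR
            }
        }
    }

    private void post(byte[] message) {
        // placeholder for the actual transport call
    }

    public void shutdown() {
        running = false;
    }
}
\end{verbatim}

Compared to a global lock, contention is limited to the queue, and the network thread can aggregate whatever is currently enqueued before posting it.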
Thus, further multi-threaded experiments using MVAPICH2 are not reasonable.\n\n\\subsubsection{Summary Results}\n\nThis section briefly summarizes the most important results and numbers of the previous benchmarks. All values are considered ``up to'' and show the possible peak performance in the given benchmark.\n\\textbf{Single-threaded}:\n\\begin{itemize}\n \\item \\textbf{Uni-directional throughput} Saturation with 64 kb to 128 kb message size, peak at 5.9 GB\/s; Message rate of 4.0 mmps for up to 64 byte messages\n \\item \\textbf{Bi-directional throughput} Saturation at 512 kb message size, peak at 11.1 GB\/s; Message rate of 4.7 mmps for up to 64 byte messages\n \\item \\textbf{Uni-directional latency} For up to 64 byte message size: 2.1 to 2.4 \\textmu s average latency and 2.4 to 5.0 \\textmu s for 99.9th percentile; 0.43 to 0.47 mmps message rate \n \\item \\textbf{All-to-all nodes} For 8 nodes: peak at 33.3 GB\/s with 64 kb message size on WS 64, WS matters for large messages; Message rate of 16.5 to 17.8 mmps for up to 64 byte messages\n \\item \\textbf{Bi-directional throughput multi-threaded}: High fluctuations with low throughputs caused by global locking, 8.8 GB\/s peak throughput at 512 kb message size; Message rate of 0.78 to 1.19 mmps for up to 64 byte messages\n\\end{itemize}\n\nCompared to DXNet, the uni-directional results are similar but DXNet does not require explicit message aggregation to deliver high throughput. On bi-directional communication, MVAPICH2 achieves a slightly higher aggregated peak throughput than DXNet but DXNet performs better by approx. 0.9 GB\/s on medium sized messages. DXNet outperforms MVAPICH2 on small messages with an up to 1.8 times higher message rate. But, MVAPICH2 clearly outperforms DXNet on the uni-directional latency benchmark with an overall lower average, 95th, 99th and 99.9th percentile latency. On all-to-all communication with up to 8 nodes, MVAPICH2 reaches slightly higher peak throughputs for large messages but DXNet reaches its saturation earlier and performs significantly better on small message sizes up to 64 bytes.\n\nThe low multi-threading performance of MVAPICH2 cannot be compared to DXNet's due to the following reasons: First, MVAPICH2 implements synchronization using a global lock which is the simplest but often least performant method to ensure thread safety. Second, MVAPICH2, like many other MPI implementations, typically creates multiple processes (one process per core) to enable concurrency on a single processor socket. However, as already discussed in related work (\\S \\ref{related_work}), this programming model is not suitable for all application domains, especially in big data applications.\n\n\\textbf{DXNet is better for small messages and multi-threaded access as required in big data applications.}\n\n\\section{Conclusions}\n\\label{conclusions}\n\nWe presented Ibdxnet, a transport for the Java messaging library DXNet which allows multi-threaded Java applications to benefit from low latency and high throughput using InfiniBand hardware. DXNet provides transparent connection management, concurrency handling, message serialization and hides the transport which allows the application to switch from Ethernet to InfiniBand hardware transparently, if the hardware is available. Ibdxnet's native subsystem provides dynamic, scalable, concurrent and automatic connection management and the msgrc messaging engine implementation. 
The msgrc engine uses dedicated send and receive threads to drive RC QPs asynchronously, which ensures scalability with many nodes. Load adaptive parking avoids high CPU load when idle but ensures low latency when busy. SGEs are used to simplify buffer handling and increase buffer utilization when sending data provided by the higher level DXNet core. A carefully crafted architecture minimizes context switching between Java and the native space as well as exchanging data using shared memory buffers. The evaluation shows that DXNet with the Ibdxnet transport can keep up with FastMPJ and MVAPICH2 on single threaded applications and even exceed them in multi-threaded applications under high load. DXNet with Ibdxnet is capable of handling concurrent connections and data streams with up to 8 nodes. Furthermore, multi-threaded applications benefit significantly from the multi-threaded aware architecture.\n\nThe following topics are of interest for future research with DXNet and Ibdxnet:\n\\begin{itemize}\n \\item Experiments with more than 100 nodes on our university's cluster\n \\item Evaluate DXNet with the key-value store DXRAM using the YCSB and compare it to RAMCloud\n \\item Implementation and evaluation of a UD QP based transport engine\n \\item Hybrid mode for DXNet: Analyze if applications benefit from using Ethernet and InfiniBand if both are available\n \\item RDMA path: Boost performance for applications like key-value storages\n\\end{itemize}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe unification of gravity and electromagnetism has remained an unfulfilled goal in spite of many efforts by stalwarts like Einstein, Weyl, Kaluza and Schr\\\"{o}dinger. This has been largely due to the unrealistic dream of a {\\em unitary classical field theory} to achieve the unification of gravity and electromagnetism and at the same time that of waves and particles by having particles as spherically symmetric singularity-free solutions of the field equations. No singularity-free solutions of these field equations, however, have been found. Moreover, after the advent of quantum theory, the unification of particles and fields has acquired a completely different significance. The quantum theoretic unification of the electromagnetic and weak nuclear forces has also opened up a new perspective on the unification of forces. Nevertheless, all attempts to unify gravity with other forces have remained unsuccessful so far because of a fundamental incompatibility between quantum mechanics and general relativistic gravity, namely, quantum theories can be constructed only on a fixed nondynamical space-time background whereas General Relativity requires diffeomorphism invariance. This has led in recent years to the view that perhaps gravity is an {\\em emergent} rather than a fundamental field, and hence does not need to be quantized \\cite{clgr}. In this situation it would be worthwhile once again to revisit the earlier attempts at unification of classical electromagnetism and gravity without invoking extra dimensions. \n\nIn this context an almost completely ignored paper of S. N. Bose \\cite{bose} is of particular interest. In this paper Bose generalized the equations of Einstein's unitary field theory and derived them from a variational principle. This resulted in interesting mathematical solutions. 
However, Bose included terms that broke an important symmetry of the Einstein action, namely, $\\Lambda$-{\\em transformation invariance} \\cite{einstein} or, in modern parlance, {\\em projective invariance} \\cite{proj} that is necessary for a true geometric unification. In what follows a slightly different approach will be taken that is in conformity with modern developments in unified theories.\n\n\\section{A Projective Invariant Unified Theory}\n\nLet the starting point be a 4-dimensional manifold ${\\cal{E}}$ with signature $(+,+,+,\\\\-)$, a non-symmetric tensor $g^{\\mu\\nu}$ and a non-symmetric affine connection $\\Gamma$ with the property \n\\beq\n\\Gamma_\\mu = \\frac{1}{2}\\left(\\Gamma^\\lambda_{\\mu\\lambda} - \\Gamma^\\lambda_{\\lambda\\mu}\\right) \\neq 0.\n\\eeq \nLet\n\\beq\n\\Gamma^\\lambda_{\\mu\\nu} = \\Gamma^\\lambda_{(\\mu\\nu)} + \\Gamma^\\lambda_{[\\mu\\nu]}\n\\eeq where\n\\ben\n\\Gamma^\\lambda_{(\\mu\\nu)} &=& \\frac{1}{2}\\left(\\Gamma^\\lambda_{\\mu\\nu} + \\Gamma^\\lambda_{\\nu\\mu}\\right),\\\\\n\\Gamma^\\lambda_{[\\mu\\nu]} &=& \\frac{1}{2}\\left(\\Gamma^\\lambda_{\\mu\\nu} - \\Gamma^\\lambda_{\\nu\\mu}\\right), \n\\een and\n\\beq\n\\Gamma_\\mu = \\Gamma^\\lambda_{[\\mu\\lambda]} \\neq 0. \n\\eeq\nThis condition turns out to be of crucial importance as $\\Gamma_\\mu$ acts, as we will see, as the common source term for the electromagnetic and gravitational fields.\n$\\Gamma^\\lambda_{[\\mu\\nu]}$ is known as the Cartan torsion tensor which will be related to the electromagnetic field.\n\nConsider a non-symmetric tensor of the form\n\\beq\nE_{\\mu\\nu} = \\Gamma^\\lambda_{\\mu\\nu,\\,\\lambda} - \\frac{1}{2}\\left(\\Gamma^\\lambda_{\\mu\\lambda,\\,\\nu} + \\Gamma^\\lambda_{\\nu\\lambda,\\,\\mu} \\right) + 2\\Gamma^\\xi_{\\mu\\nu}\\Gamma^\\lambda_{\\xi\\lambda} - 2\\Gamma^\\xi_{\\mu\\lambda}\\Gamma^\\lambda_{\\xi\\nu}. \\label{E} \n\\eeq\nThis tensor is both {\\em transposition invariant} and {\\em$\\Lambda$-transformation invariant}. These are two symmetries that restrict the number of possible covariant terms in a nonsymmetric theory \\cite{einstein}. \n\n{\\flushleft{\\em Transposition symmetry}}\n\nLet $\\widetilde{\\Gamma^\\lambda_{\\mu\\nu}} = \\Gamma^\\lambda_{\\nu\\mu}$ and $\\widetilde{g_{\\mu\\nu}} = g_{\\nu\\mu}$. Then terms that are invariant under the simultaneous replacements of $\\Gamma^\\lambda_{\\mu\\nu}$ and $g_{\\mu\\nu}$ by $\\widetilde{\\Gamma^\\lambda_{\\mu\\nu}}$ and $\\widetilde{g_{\\mu\\nu}}$, followed by the interchange of the two lower indices, are called transposition invariant. For example, the tensor\n\\beq\nE^\\prime_{\\mu\\nu} = \\Gamma^\\lambda_{\\mu\\nu,\\,\\lambda} - \\Gamma^\\lambda_{\\mu\\lambda,\\,\\nu} + \\Gamma^\\xi_{\\mu\\nu}\\Gamma^\\lambda_{\\xi\\lambda} - \\Gamma^\\xi_{\\mu\\lambda}\\Gamma^\\lambda_{\\xi\\nu}\n\\eeq is not transposition invariant because it is transposed to\n\\beq\n\\widetilde{E^\\prime_{\\mu\\nu}} = \\Gamma^\\lambda_{\\mu\\nu,\\,\\lambda} - \\Gamma^\\lambda_{\\nu\\lambda,\\,\\mu} + \\Gamma^\\xi_{\\mu\\nu}\\Gamma^\\lambda_{\\xi\\lambda} - \\Gamma^\\xi_{\\mu\\lambda}\\Gamma^\\lambda_{\\xi\\nu} \\neq E^\\prime_{\\mu\\nu}. 
\n\\eeq\nBut (\\ref{E}) is transposition invariant.\n\n{\\flushleft{\\em $\\Lambda$-transformation or projective symmetry}}\n\nDefine the transformations\n\\ben\n\\Gamma^{\\lambda *}_{\\mu\\nu} &=& \\Gamma^{\\lambda}_{\\mu\\nu} + \\delta^\\lambda_\\mu \\Lambda_{,\\,\\nu},\\nonumber\\\\\ng^{\\mu\\nu *} &=& g^{\\mu\\nu}\\label{proj},\n\\een \nwhere $\\Lambda$ is an arbitrary function of the coordinates, $\\delta^\\lambda_\\mu$ is the Kronecker tensor, and the comma denotes the partial derivative. It is easy to check that $E_{\\mu\\nu}$ given by (\\ref{E}) and hence $g^{\\mu\\nu}E_{\\mu\\nu}$ are invariant under these transformations. What this means is that a theory characterized by $E_{\\mu\\nu}$ cannot determine the $\\Gamma$-field completely but only up to an arbitrary function $\\Lambda$. Hence, in such a theory, $\\Gamma$ and $\\Gamma^*$ represent the same field. Further, this {\\em $\\Lambda$-transformation} produces a non-symmetric $\\Gamma^*$ from a $\\Gamma$ that is symmetric or anti-symmetric in the lower indices. Hence, the symmetry condition for $\\Gamma$ loses objective significance. This sets the ground for a genuine unification of the gravitational and electromagnetic fields, the former determined by the symmetric part of the tensor $E_{\\mu\\nu}$ and the latter by its antisymmetric part.\n\nSeparating the symmetric and antisymmetric parts of $E_{\\mu\\nu}$, and using the definitions\n\\ben\nR^\\prime_{\\mu\\nu}&=& \\Gamma^\\lambda_{(\\mu\\nu),\\,\\lambda} - \\frac{1}{2}\\left(\\Gamma^\\lambda_{(\\mu\\lambda),\\,\\nu} + \\Gamma^\\lambda_{(\\nu\\lambda),\\,\\mu} \\right) + \\Gamma^\\xi_{(\\mu\\nu)}\\Gamma^\\lambda_{(\\xi\\lambda)} - \\Gamma^\\xi_{(\\mu\\lambda)}\\Gamma^\\lambda_{(\\xi\\nu)},\\\\\nG^\\lambda_{\\mu\\nu} &=& \\Gamma^\\lambda_{[\\mu\\nu]} + \\frac{1}{3}\\delta^\\lambda_\\mu \\Gamma_\\nu - \\frac{1}{3}\\delta^\\lambda_\\nu \\Gamma_\\mu,\\label{G}\\\\\nG^\\lambda_{\\mu\\nu;\\,\\lambda} &=& G^\\lambda_{\\mu\\nu,\\,\\lambda} - G^\\lambda_{\\mu\\xi}\\Gamma^\\xi_{(\\lambda\\nu)} - G^\\lambda_{\\xi\\nu}\\Gamma^\\xi_{(\\mu\\lambda)} + G^\\xi_{\\mu\\nu}\\Gamma^\\lambda_{(\\xi\\lambda)},\n\\een \none can show that\n\\ben\nE_{\\mu\\nu} &=& \\frac{1}{2}\\left[R^\\prime_{\\mu\\nu} - G^\\lambda_{\\mu\\xi}G^\\xi_{\\lambda\\nu} + \\frac{1}{3}\\Gamma_\\mu \\Gamma_\\nu -\\frac{1}{2}(\\Gamma_{\\mu,\\,\\nu} + \\Gamma_{\\nu,\\,\\mu}) + \\Gamma^\\lambda_{(\\mu\\nu)}\\Gamma_\\lambda\\right]\\nonumber\\\\ &+& \\frac{1}{2}\\left[G^\\lambda_{\\mu\\nu;\\,\\lambda} - \\frac{1}{3}(\\Gamma_{\\mu,\\,\\nu} - \\Gamma_{\\nu,\\,\\mu})\\right].\n\\een \nNotice that by construction $G^\\lambda_{\\mu\\lambda} = 0$. 
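Indeed, contracting (\ref{G}) over $\nu$ and $\lambda$ and using $\Gamma^\lambda_{[\mu\lambda]} = \Gamma_\mu$ together with $\delta^\lambda_\lambda = 4$, one finds\n\beq\nG^\lambda_{\mu\lambda} = \Gamma^\lambda_{[\mu\lambda]} + \frac{1}{3}\delta^\lambda_\mu \Gamma_\lambda - \frac{1}{3}\delta^\lambda_\lambda \Gamma_\mu = \Gamma_\mu + \frac{1}{3}\Gamma_\mu - \frac{4}{3}\Gamma_\mu = 0.\n\eeq\n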
One can now write an invariant Lagrangian density\n\\ben\n{\\cal{L}} &=& \\frac{1}{\\kappa} \\sqrt{\\vert g\\vert}g^{\\mu\\nu}E_{\\mu\\nu} = \\frac{1}{\\kappa}\\left(s^{\\mu\\nu} + a^{\\mu\\nu}\\right)E_{\\mu\\nu}\\nonumber\\\\ \n&=& \\frac{1}{\\kappa} s^{\\mu\\nu}\\left[R_{\\mu\\nu} - G^\\lambda_{\\mu\\xi}G^\\xi_{\\lambda\\nu} + \\frac{1}{3}\\Gamma_\\mu \\Gamma_\\nu + \\Gamma^\\lambda_{(\\mu\\nu)}\\Gamma_\\lambda - \\Gamma_{\\nu,\\,\\nu} \\right]\\nonumber\\\\ &+& \\frac{1}{\\kappa} a^{\\mu\\nu}\\left[G^\\lambda_{\\mu\\nu;\\,\\lambda} - \\frac{1}{3}(\\Gamma_{\\mu,\\,\\nu} - \\Gamma_{\\nu,\\,\\mu})\\right],\\label{L}\n\\een\nwhere\n\\ben\nR_{\\mu\\nu}&=& \\Gamma^\\lambda_{(\\mu\\nu),\\,\\lambda} - \\Gamma^\\lambda_{(\\mu\\lambda),\\,\\nu} + \\Gamma^\\xi_{(\\mu\\nu)}\\Gamma^\\lambda_{(\\xi\\lambda)} - \\Gamma^\\xi_{(\\mu\\lambda)}\\Gamma^\\lambda_{(\\xi\\nu)},\\\\\ns^{\\mu\\nu} &=& \\frac{1}{2}\\sqrt{\\vert g\\vert}\\left(g^{\\mu\\nu} + g^{\\nu\\mu}\\right) \\equiv \\sqrt{\\vert g\\vert}g^{(\\mu\\nu)},\\\\\na^{\\mu\\nu} &=& \\frac{1}{2}\\sqrt{\\vert \\overline{g}\\vert}\\left(g^{\\mu\\nu} - g^{\\nu\\mu}\\right) \\equiv \\sqrt{\\vert g\\vert}g^{[\\mu\\nu]},\\\\\n\\vert \\overline{g}\\vert &=& \\vert g\\vert, \n\\een and $\\kappa$ is an arbitrary constant of the dimension of inverse force.\nLet us therefore consider the variation\n\\beq\n\\delta I = \\delta \\int {\\cal{L}}\\, d^4 x = 0.\n\\eeq\nArbitrary variations of $s^{\\mu\\nu}$ and $a^{\\mu\\nu}$ while keeping the connections fixed (generalized Palatini variations) give rise to the field equations\n\\ben\nR_{\\mu\\nu} - G^\\lambda_{\\mu\\xi}G^\\xi_{\\lambda\\nu} + \\frac{1}{3} \\Gamma_\\mu \\Gamma_\\nu + \\Gamma^\\lambda_{(\\mu\\nu)}\\Gamma_\\lambda &=& 0,\\label{A}\\\\\nG^\\lambda_{\\mu\\nu;\\,\\lambda} - \\frac{1}{3}\\left(\\Gamma_{\\mu,\\,\\nu} - \\Gamma_{\\nu,\\,\\mu}\\right) &=& 0. \\label{B}.\n\\een\nThe coefficients of $s^{\\mu\\nu}$ and $a^{\\mu\\nu}$ in the Lagrangian (\\ref{L}) are respectively the symmetric and anti-symmetric curvature tensors in the theory. These variational equations show that these tensors vanish. They also show that $\\Gamma_\\mu$ acts as the common source of $G^\\lambda_{\\mu\\nu,\\,;\\lambda}$ and $R_{\\mu\\nu}$. In a theory in which $\\Gamma_\\mu =0$, $G^\\lambda_{\\mu\\nu,\\,;\\lambda}$ and $R_{\\mu\\nu}$ would have no common source, and the two cannot be said to be genuinely unified.\n\n\nTo derive the equations of connection, one can use a variational principle with an undetermined Lagrange multiplier $k^\\mu$, namely\n\\beq\n\\delta\\int \\left({\\kappa\\cal{L}} - 2k^\\mu G^\\lambda_{\\mu\\lambda}\\right) d^4 x = 0,\n\\eeq \nin which all the $24$ components of $G^\\lambda_{\\mu\\nu}$ are treated to be independent although, as we have seen, they are not because $G^\\lambda_{\\mu\\lambda} = 0$. One then obtains the equation (see Appendix for details)\n\\beq\ng^{\\mu\\nu}_{,\\,\\lambda} + g^{\\mu\\alpha}\\Gamma^{\\prime\\nu}_{\\lambda\\alpha} + g^{\\alpha\\nu}\\Gamma^{\\prime\\mu}_{\\alpha\\lambda} = 3 g^{\\mu\\nu}\\Phi_\\lambda \n\\eeq\nwith the affine connections $\\Gamma^\\prime$ given by Eqns. 
(\\ref{affine1}) and (\\ref{affine2}) in the Appendix, and\n\\ben\n\\Phi_\\lambda &=& \\frac{g_{[\\lambda\\beta]}k^\\beta}{\\sqrt{\\vert g\\vert}},\\\\\nk^\\beta &=& \\frac{1}{3}\\left(s^{\\beta\\nu}\\Gamma_\\nu + \\frac{3}{2} s^{\\lambda\\nu} \\Gamma^\\beta_{(\\lambda\\nu)} \\right),\\label{k}\n\\een\nand\n\\ben\ns^{\\mu\\alpha}_{,\\,\\alpha} + s^{\\alpha\\beta}\\Gamma^\\mu_{(\\alpha\\beta)} + a^{\\alpha\\beta}G^\\mu_{\\alpha\\beta} &=& 0,\\\\\na^{\\mu\\nu}_{,\\,\\nu} &=& 3k^\\mu.\\label{kmu}\n\\een The last equation implies\n\\beq\nk^\\mu_{,\\,\\mu} = 0.\n\\eeq\nEqn. (\\ref{k}) determines this 4-vector $k^\\mu$ and constrains the number of independent components of $G^\\lambda_{\\mu\\nu}$ to be $20$ in accordance with the property $G^\\lambda_{\\mu\\lambda} = 0$.\nIf the determinant\n\\beq\n\\vert \\vert g_{[\\lambda\\beta]}\\vert \\vert = \\left(g_{12}g_{34} + g_{23}g_{14} + g_{31}g_{24}\\right)^2 = 0, \n\\eeq\none can have $\\Phi_\\lambda = 0$ but $k^\\mu \\neq 0$. It is possible in this case to relate the electromagnetic field intensity with $a^{\\mu\\nu}$ through the relation\n\\beq\n{\\mathfrak{F}}^{\\mu\\nu} = e c {\\mathcal{R}} a^{\\mu\\nu}\\label{emf}\n\\eeq with a non-zero curvature\n\\beq\n{\\mathcal{R}} = g^{[\\mu\\nu]}E_{[\\mu\\nu]}.\n\\eeq Thus, $(e c {\\mathcal{R}}\\sqrt{\\vert g\\vert})\\left(g_{23}, g_{31}, g_{12}\\right)$ are components of the magnetic field $\\vec{B}$ and $(e c {\\mathcal{R}}\\sqrt{\\vert g\\vert})\\left(g_{41}, g_{42}, g_{43}\\right)$ those of the electric field $i\\vec{E}$ which satisfy the condition $\\vec{E}\\,. \\vec{B} = 0$, $e$ being the electric charge and $c = \\frac{1}{\\sqrt{\\epsilon_0 \\mu_0}}$. In the absence of electrically charged matter, $e = 0$, and hence $\\vec{E}$ and $\\vec{B}$ vanish, but $a^{\\mu\\nu}\\neq 0$, and the geometric structure of the electromagnetic field remains. It is only with the introduction of charged matter, as we will see, that this geometric structure acquires physical dimensions.\n\nThe equations of connection are then of the form \n\\beq\ng^{\\mu\\nu}_{,\\,\\lambda} + g^{\\mu\\alpha}\\Gamma^{\\prime\\nu}_{\\lambda\\alpha} + g^{\\alpha\\nu}\\Gamma^{\\prime\\mu}_{\\alpha\\lambda} = 0,\\label{C}\n\\eeq\nand one also has \n\\beq\n{\\mathfrak{F}}^{\\mu\\nu}_{,\\,\\nu} = e c{\\mathcal{R}}_{,\\,\\nu} a^{\\mu\\nu} + e c{\\mathcal{R}}\\, a^{\\mu\\nu}_{,\\,\\nu} = e {\\mathfrak{J}}^\\mu_{em}.\\label{jem}\n\\eeq When ${\\mathfrak{J}}^\\mu_{em} = 0$,\n\\beq\na^{\\mu\\nu}_{,\\,\\nu} = - {\\mathcal{R}}^{-1}{\\mathcal{R}}_{,\\,\\nu}a^{\\mu\\nu} = 3 k^\\mu.\n\\eeq\nThis is the case in the projective invariant limit with no particles present.\n\nHowever, as we have seen, $R_{\\mu\\nu}$ and $G^\\lambda_{\\mu\\nu,\\,;\\lambda}$ cannot be objectively separated and identified with the physical gravitational and electromagnetic fields respectively because of projective invariance. In the observable universe at present, however, this symmetry is badly broken in the sense that the electromagnetic and gravitational fields can be objectively separated and identified, and the electric charge $e$ and the gravitational charge $\\kappa = 8\\pi G\/c^4$ are widely different. Furthermore, there are no charged particles in the theory which can be shown to be singularity-free solutions of the classical field equations and which can act as the source ${\\mathfrak{J}}^\\mu_{em}$ of the electromagnetic field. This makes the projective invariant classical theory with $\\Gamma_\\mu \\neq 0$ incomplete. 
Furthermore, there are strong and weak nuclear interactions and quantum mechanical effects that need to be taken into account. These are the issues that will be addressed in the following sections. \n\n\\section{Matter and Projective Symmetry Breaking}\n\nLet us first consider projective symmetry breaking. In the perspective of modern developments in unified theories, it would be natural to think of a symmetry breaking transition at some appropriate stage of the evolution of the universe that separates the gravitational and electromagnetic fields objectively and physically. Such a scenario would be possible provided there is some natural mechanism to break the {\\em $\\Lambda$-transformation} or projective symmetry of the action so that the symmetric and anti-symmetric parts of the connection can be objectively separated. The symmetry condition for the connection $\\Gamma$ characteristic of Riemann manifolds and Einstein's gravitational theory based on them would then acquire objective significance. From such a symmetry breaking would then emerge the observed space-time world endowed with a symmetric dynamical metric field $g_{(\\mu\\nu)}$ encoding gravity as well as the anti-symmetric field $g_{[\\mu\\nu]}$ (resulting from torsion) encoding electromagnetism. The most natural stage for such a symmetry breaking to occur would be the emergence of matter fields at the end of a non-symmetric affine field dominated phase of a {\\em premetric} universe. This is because a matter Lagrangian ${\\cal{L}}_m$ obtained by minimally coupling the matter to the connection is not generally projective invariant. \n\nIn such a theory one would have\n\\beq\n{\\mathfrak{J}}^\\mu_{em} = \\left[\\sum_i \\bar{\\psi}_i\\gamma^\\mu \\psi_i + \\sum_j\\bar{\\phi}_j\\beta^\\mu \\phi_j + \\cdots \\right]\n\\eeq\nwhere $\\psi_i$ are Dirac wavefunctions describing spin-$\\frac{1}{2}$ particles, $\\phi_j$ are Kemmer-Duffin wavefunctions describing spin-0 and spin-1 particles, the $\\beta$s being the Kemmer-Duffin-Petiau matrices, and the dots represent higher spin wavefunctions if any. 
\n\nThe broken symmetric Lagrangian density including the matter wavefunctions can then take the general form \n\ben\n{\cal{L}} ^\prime &=& \frac{1}{\kappa}s^{\mu\nu}\left[R_{\mu\nu} - k^\prime \left(G^\lambda_{\mu\xi}G^\xi_{\lambda\nu} - \frac{1}{3} \Gamma_\mu \Gamma_\nu - \Gamma^\lambda_{(\mu\nu)}\Gamma_\lambda\right)\right]\nonumber\\ &+& \frac{1}{\kappa}a^{\mu\nu}\left[G^\lambda_{\mu\nu;\lambda} - \frac{k}{3}(\Gamma_{\mu,\,\nu} - \Gamma_{\nu,\,\mu})\right]\nonumber\\ &+& {\cal{L}}_m (\psi, g, \Gamma)\n\een \nwhere now $\kappa = 8\pi G\/c^4$ and $k$ and $k^\prime$ are arbitrary dimensionless constants to be determined by experiments.\nHence, varying $s^{\mu\nu}$ and $a^{\mu\nu}$ together with the connections,\none obtains \n\ben\nR_{\mu\nu} - \frac{1}{2}g_{(\mu\nu)}R &=& -\kappa \left[T^{m}_{(\mu\nu)} + T^{em}_{(\mu\nu)}\right], \label{gr}\\\nG^\lambda_{\mu\nu;\,\lambda} &=& \frac{k}{3} \left(\Gamma_{\mu,\,\nu} - \Gamma_{\nu,\,\mu}\right)\label{em},\n\een where\n\ben\nT^m_{(\mu\nu)} &=& -\frac{2}{\sqrt{\vert g\vert}}\frac{\delta \left(\sqrt{\vert g\vert}{\cal{L}}_m \right)}{\delta g^{(\mu\nu)}},\label{F5}\\\nT^{em}_{(\mu\nu)} &=& \frac{k^\prime}{\kappa}\left[\frac{1}{3}\Gamma_\mu \Gamma_\nu + \Gamma^\lambda_{(\mu\nu)}\Gamma_\lambda - G^\lambda_{\mu\xi}G^\xi_{\lambda\nu}\right]\n\een and\n\ben\n\Gamma_\mu &=& \frac{1}{c {\mathcal{R}}}{\mathfrak{J}}^{em}_\mu - \frac{1}{e c{\mathcal{R}}^2} {\mathcal{R}}_{,\, \nu} {\mathfrak{F}}^{\,\,\nu}_{\mu} - \frac{3}{2} s^{\lambda\nu}\Gamma_{\mu, (\lambda\nu)},\\\n\Gamma^\lambda_{(\mu\nu)} &=& \frac{1}{2}g^{(\lambda\rho)}\left(g_{(\rho\mu,\,\nu)} + g_{(\rho\nu,\,\mu)} - g_{(\mu\nu,\,\rho)} \right),\\\n\Gamma_{\mu, (\lambda\nu)} &=& \frac{1}{2}\left(g_{(\mu\lambda),\,\nu} + g_{(\mu\nu),\,\lambda} - g_{(\lambda\nu),\,\mu}\right).\n\een The first of these equations follows from Eqn. (\ref{jem}) and Eqns. (\ref{k}), (\ref{kmu}) and (\ref{emf}), and the other two are consequences of Eqn. (\ref{C}) for the symmetric part. \nThe constant $k^\prime$ must be chosen to fit experimental data on electromagnetic contributions to the total stress-energy tensor. A comparison of Eqns. (\ref{gr}) and \n(\ref{em}) suggests that we identify $k =3\sqrt{\alpha}$ where $\alpha = 1\/137$ is the dimensionless fine structure constant so that the two fundamental constants $\kappa$ and $\alpha$ determine the strengths of the couplings of the sources to the symmetric and antisymmetric curvature tensors in the theory. \n\nIn the projective invariant limit, $k^\prime = k = 1$ and $\kappa \rightarrow 0$ so that the matter Lagrangian density can be ignored in comparison with the other terms. Hence, using $k = 3\sqrt{\alpha}$ with $k = 1$, the theory predicts that in the invariant limit $\alpha_{sym} = \frac{1}{9}$.\n\nNotice also that $T^{em}_{(\mu\nu)}$ differs from the standard general relativistic form of the electromagnetic stress-energy tensor\n\beq\n\frac{1}{\mu_0}\left(F^{\mu\alpha}F^\nu\,_{\alpha} - \frac{1}{4}g^{(\mu\nu)}F_{\alpha\beta} F^{\alpha\beta}\right).\n\eeq Hence, the predictions of the theory regarding the effects of the electromagnetic stress tensor on gravity differ from those of standard General Relativity, and can therefore be tested in principle. These effects are being investigated and will be reported elsewhere. 
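\n\nAn elementary way to see that the two forms differ is to compare their traces: contracting the standard tensor above with $g_{(\mu\nu)}$ gives, in four dimensions,\n\beq\ng_{(\mu\nu)}\,\frac{1}{\mu_0}\left(F^{\mu\alpha}F^\nu\,_{\alpha} - \frac{1}{4}g^{(\mu\nu)}F_{\alpha\beta} F^{\alpha\beta}\right) = \frac{1}{\mu_0}\left(F_{\alpha\beta}F^{\alpha\beta} - F_{\alpha\beta}F^{\alpha\beta}\right) = 0,\n\eeq\nwhereas the trace of $T^{em}_{(\mu\nu)}$ given above is not manifestly zero.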
\n\n\\section{Quantization}\nLet us now see how the above scheme can be incorporated into the current understanding of gravitation and the quantum theory of matter and radiation. We first note that the electromagnetic gauge potential $A_\\mu$ has played no role so far in our considerations. That is because $A_\\mu$ is required for minimal coupling with charged matter which is absent in a projective invariant theory. Charged matter is represented by complex quantum mechanical wavefunctions whose imaginary parts are arbitrary local phases, a typical quantum mechanical feature that is wholly absent in classical theories of matter. $A_\\mu$ has the geometric significance of a connection associated with a horizontal subspace of a principal bundle $P = (E, \\Pi, M, G)$ (associated with the phase) whose projection is $\\Pi: E \\rightarrow M$. The 1-form $A = A_\\mu dx^\\mu$ transforms as $hAh^{-1} + hdh$ under a group transformation $g^\\prime = hg, g\\in G$. There is a curvature associated with this 1-form given by $F^\\prime = F^\\prime_{\\mu\\nu}dx^\\mu \\wedge dx^\\nu$ with $F^\\prime_{\\mu\\nu} = \\partial_\\mu A_\\nu - \\partial_\\nu A_\\mu$. It transforms as $hF^{\\,\\prime} h^{-1}$. $F^{\\,\\prime}_{\\mu\\nu}$ is identified with the electromagnetic field. In this case $G = U(1)$, and $F^{\\,\\prime}$ is invariant. The bundle is trivial since the base space $M$ is flat and $E = M \\times G$ everywhere. \n\nThe first change that is needed is the replacement of the principal bundle $P$ by $P^\\prime = (E, \\Pi, {\\cal{E}}, G)$ whose base space is the manifold ${\\cal{E}}$ of the theory and whose local structure group $G$ is ideally a simple Lie group which can be broken down to $SU(3)\\times SU(2) \\times U(1)$, the symmetry group of the Standard Model. Such a bundle is only locally $M \\times G$. In keeping with current theory, we also require that matter wavefunctions be introduced as global sections of vector bundles associated with specific representations of $G$. Since the electromagnetic field is also defined on the manifold ${\\cal{E}}$ by the relation (\\ref{emf}), the compatibility requirement \n\\beq\n\\Pi {\\mathfrak{F}}^{\\,\\prime\\mu\\nu} = {\\mathfrak{F}}^{\\mu\\nu} = e c {\\mathcal{R}}\\, a^{\\mu\\nu}\n\\eeq\nmust be satisfied, where ${\\mathfrak{F}}^{\\,\\prime\\mu\\nu} = \\sqrt{\\vert g\\vert}g^{(\\mu\\alpha)}g^{(\\nu\\beta)}F^\\prime_{\\alpha\\beta}$. This immediately implies that quantization of $F^\\prime_{\\mu\\nu}$ leads to quantization of $a^{\\mu\\nu}$. Other gauge fields in the theory belonging to non-Abelian groups are not subject to this compatibility requirement.\n\nFinally, let us consider the quantization of the gravitational field. It is important to emphasize that whereas the metric tensor $g_{(\\mu\\nu)}$ plays a fundamental role in a Riemannian manifold and the connections (Christoffel symbols) are derived from it, in an affine manifold like ${\\cal{E}}$ the tensor $g_{\\mu\\nu}$ and the non-symmetric connections $\\Gamma$ play equally fundamental roles, and the two are later related through the equations of connection. Now, quantization requires the existence of a symplectic manifold with a nondegenerate $2$-form $\\omega$ on which Poisson brackets can be defined. Following Dirac's prescription, these Poisson brackets can then be replaced by commutators to quantize a classical theory. This is the canonical quantization procedure. 
Therefore, to quantize the fields in our theory we need to construct the $2$-form $\\omega = E_{\\mu\\nu} dx^\\mu \\wedge dx^\\nu$. Because of the antisymmetry of the wedge product, $\\omega = E_{[\\mu\\nu]}dx^\\mu \\wedge dx^\\nu$ which shows that only the antisymmetric part of $E_{\\mu\\nu}$ contributes to $\\omega$. However, as we have seen, the splitting of $E_{\\mu\\nu}$ into a symmetric and an antisymmetric part has no objective significance due to projective invariance. Hence, canonical quantization cannot be carried out with any objective significance on such a manifold. But, it can be done after the invariance is broken, and then it is at once clear why the symmetric part $E_{(\\mu\\nu)}$ associated with gravity cannot be quantized canonically while the antisymmetric part associated with electromagnetism can be. Thus, in the theory developed here, though both gravity and electromagnetism are emergent fields from a premetric, prequantum manifold ${\\cal{E}}$, gravity remains classical while electromagnetism can be quantized. \n\n\\section{Concluding Remarks}\nWe have seen that the unification of electromagnetism and gravity into a single geometric entity is beautifully accomplished in a theory with nonsymmetric connection and $\\Gamma_\\mu \\neq 0$, the unifying symmetry being projective symmetry. Matter wavefunctions appear in the theory as global sections of vector bundles associated with specific representations of an appropriate simple Lie group $G$ that can be broken down to $SU(3)\\times SU(2) \\times U(1)$, the symmetry group of the Standard Model. The matter Lagrangian breaks projective invariance, generating classical relativistic gravity and quantum electromagnetism. This is possible because the original non-symmetric manifold ${\\cal{E}}$ is assumed to be smooth. Hence, for the theory to be valid, the symmetry breaking transition must occur at a larger scale than the Planck length. The theory predicts $\\alpha_{sym} = \\frac{1}{9}$ below this scale.\n\nIn the projective invariant premetric phase the fourth dimension of the manifold ${\\cal{E}}$ with negative signature cannot be identified with physical time. Also, this manifold is affine and has no origin. These features of the theory have important implications for the origin (and possibly also the dissolution) of the observed universe which need to be further explored but are beyond the scope of this paper.\n\n\\section{Acknowledgement}\nI dedicate this paper to the memory of my research guide and teacher, the Late Professor Satyendranath Bose whose 1953 paper is its inspiration. 
I thank the National Academy of Sciences, India for the grant of a Senior Scientist Platinum Jubilee Fellowship which enabled this work to be undertaken.\n\n\\section{Appendix}\n\nIn order to derive the equations of connection, let us first write \n\\beq\n\\kappa{\\cal{L}} = H + \\frac{d X^\\lambda}{dx^\\lambda}\\label{H}\n\\eeq\nwith \n\\ben\nX^\\lambda &=& s^{\\mu\\nu}\\Gamma_{(\\mu\\nu)}^\\lambda - s^{\\mu\\lambda}\\Gamma^\\nu_{(\\mu\\nu)} + a^{\\mu\\nu}G^\\lambda_{\\mu\\nu} + \\frac{2}{3} a^{\\mu\\lambda}\\Gamma_\\mu + \\Gamma^\\lambda\\nonumber\\\\\nH &=& - s^{\\mu\\nu}_{,\\,\\lambda}\\Gamma^\\lambda_{(\\mu\\nu)} + s^{\\mu\\lambda}_{,\\,\\lambda}\\Gamma^\\nu_{{(\\mu\\nu)}} + s^{\\mu\\nu}\\left(\\Gamma^\\xi_{(\\mu\\nu)}\\Gamma^\\lambda_{(\\xi\\lambda)} - \\Gamma^\\xi_{(\\mu\\lambda)}\\Gamma^\\lambda_{(\\xi\\nu)}\\right)\\nonumber\\\\ &+& s^{\\mu\\nu}\\left(-G^\\lambda_{\\mu\\xi}G^\\xi_{\\lambda\\nu} + \\frac{1}{3}\\Gamma_\\mu \\Gamma_\\nu\\right) + s^{\\mu\\nu}\\Gamma^\\lambda_{(\\mu\\nu)}\\Gamma_\\lambda - a^{\\mu\\nu}_{,\\,\\lambda}G^\\lambda_{\\mu\\nu} \\nonumber\\\\ &+& a^{\\mu\\nu}\\left( - G^\\lambda_{\\mu\\xi}\\Gamma^\\xi_{(\\lambda\\nu)} - G^\\lambda_{\\xi\\nu}\\Gamma^\\xi_{(\\mu\\lambda)} + G^\\xi_{\\mu\\nu}\\Gamma^\\lambda_{(\\xi\\lambda)} \\right) - \\frac{2}{3} a^{\\mu\\lambda}_{,\\,\\lambda}\\Gamma_\\mu.\n\\een Thus, $H$ is free of the partial derivatives of $\\Gamma^\\lambda_{(\\mu\\nu)}, G^\\lambda_{\\mu\\nu}$ and $\\Gamma_\\mu$, and the four-divergence term in the integral $I$ is equal to a surface integral at infinity on which all arbitrary variations are taken to vanish. \n\nNow, it follows from the definition of $G^\\lambda_{\\mu\\nu}$ that $G^\\lambda_{\\mu\\lambda} = 0$, and hence all the 24 components of $G^\\lambda_{\\mu\\nu}$ are not independent. 
Remembering that these four relations must always hold good in the variations of the elements $\\Gamma^\\lambda_{(\\mu\\nu)}, G^\\lambda_{\\mu\\nu}, \\Gamma_\\mu$, one can use the method of undetermined Lagrange multipliers $k^\\mu$ to derive the equations of connection by varying the function\n\\beq\nH - 2k^\\mu G^\\lambda_{\\mu\\lambda}.\n\\eeq \nThe resulting equations are\n\\ben\ns^{\\mu\\nu}_{,\\,\\lambda} + s^{\\mu\\alpha}\\Gamma^\\nu_{(\\lambda\\alpha)} + s^{\\alpha\\nu}\\Gamma^\\mu_{(\\alpha\\lambda)} - s^{\\mu\\nu}\\Gamma^\\alpha_{(\\lambda\\alpha)} = -[a^{\\mu\\alpha}G^\\nu_{\\lambda\\alpha} + a^{\\alpha\\nu}G^\\mu_{\\alpha\\lambda}]\\label{1} \\\\\na^{\\mu\\nu}_{,\\,\\lambda} + a^{\\mu\\alpha}\\Gamma^\\nu_{(\\lambda\\alpha)} + a^{\\alpha\\nu}\\Gamma^\\mu_{(\\alpha\\lambda)} - a^{\\mu\\nu}\\Gamma^\\alpha_{(\\lambda\\alpha)} - k^\\mu\\delta^\\nu_\\lambda + k^\\nu\\delta^\\mu_\\lambda = - [s^{\\mu\\alpha}G^\\nu_{\\lambda\\alpha} + s^{\\alpha\\nu}G^\\mu_{\\alpha\\lambda}]\\label{2}\n\\een \nand\n\\beq\na^{\\mu\\nu}_{,\\,\\nu} - s^{\\mu\\nu}\\Gamma_\\nu- \\frac{3}{2} s^{\\lambda\\nu} \\Gamma^\\mu_{(\\lambda\\nu)} = 0.\n\\eeq\nIt follows from these equations that\n\\ben\ns^{\\mu\\alpha}_{,\\,\\alpha} + s^{\\alpha\\beta}\\Gamma^\\mu_{(\\alpha\\beta)} + a^{\\alpha\\beta}G^\\mu_{\\alpha\\beta} &=& 0,\\\\\na^{\\mu\\nu}_{,\\,\\nu} &=& 3k^\\mu,\n\\een which imply\n\\ben\nk^\\mu_{,\\,\\mu} &=& 0,\\\\\nk^\\mu &=& \\frac{1}{3}\\left(s^{\\mu\\nu}\\Gamma_\\nu + \\frac{3}{2} s^{\\lambda\\nu} \\Gamma^\\mu_{(\\lambda\\nu)}\\right).\n\\een\nAdding (\\ref{1}) and (\\ref{2}), we get\n\\ben\ng^{\\prime\\mu\\nu}_{,\\lambda} &+& g^{\\prime\\mu\\alpha}\\left(\\Gamma^\\nu_{(\\lambda\\alpha)} + G^\\nu_{\\lambda\\alpha}\\right) + g^{\\prime\\alpha\\nu}\\left(\\Gamma^\\mu_{(\\alpha\\lambda)} + G^\\mu_{\\alpha\\lambda}\\right) - g^{\\prime\\mu\\nu}\\Gamma^\\alpha_{(\\lambda\\alpha)}\\nonumber\\\\ &=& k^\\mu \\delta^\\nu_\\lambda - k^\\nu \\delta^\\mu_\\lambda \\label{X}\n\\een where $g^{\\prime\\mu\\nu} = \\sqrt{\\vert g\\vert} g^{\\mu\\nu}$.\nMultiplying (\\ref{X}) by $g^\\prime_{\\mu\\nu}$ and using the results\n\\beq\ng^{\\prime\\mu\\nu}g^\\prime_{\\mu\\lambda} = \\delta^\\nu_\\lambda,\\,\\,\\,\\,\\,\\,g^{\\prime\\mu\\nu}g^\\prime_{\\lambda\\nu} = \\delta^\\mu_\\lambda,\\,\\,\\,\\,\\,\\,G^\\lambda_{\\alpha\\lambda} = 0,\\label{Y} \n\\eeq\nwe first observe that\n\\ben\n\\Gamma^\\alpha_{(\\lambda\\alpha)} &=& \\frac{\\vert g\\vert_{,\\,\\lambda}}{2 \\sqrt{\\vert g\\vert}} + \\frac{1}{2}\\left(g^\\prime_{\\lambda\\beta} - g^\\prime_{\\beta\\lambda}\\right)k^\\beta\\nonumber\\\\\n&\\equiv& \\frac{\\vert g\\vert_{,\\,\\lambda}}{2 \\sqrt{\\vert g\\vert}} + g^\\prime_{[\\lambda\\beta]}k^\\beta \\label{Z}\n\\een\nHence, dividing (\\ref{X}) by $\\sqrt{\\vert g\\vert}$, and also using (\\ref{Z}) and the results\n\\beq\ng^{\\mu\\alpha}g_{\\beta\\alpha}k^\\beta = k^\\mu\\,\\,\\,\\, {\\rm and} \\,g^{\\alpha\\nu}g_{\\alpha\\beta}k^\\beta = k^\\nu,\n\\eeq\nwe get\n\\ben\ng^{\\mu\\nu}_{,\\,\\lambda} &+& g^{\\mu\\alpha}\\left(\\Gamma^\\nu_{(\\lambda\\alpha)} + G^\\nu_{\\lambda\\alpha} + \\frac{1}{\\sqrt{\\vert g\\vert}}(g_{\\lambda\\beta}k^\\beta \\delta^\\nu_\\alpha - g_{\\beta\\alpha}k^\\beta \\delta^\\nu_\\lambda) \\right)\\nonumber\\\\ &+& g^{\\alpha\\nu}\\left(\\Gamma^\\mu_{(\\alpha\\lambda)} + G^\\mu_{\\alpha\\lambda} + \\frac{1}{\\sqrt{\\vert g\\vert}}(g_{\\alpha\\beta}k^\\beta \\delta^\\mu_\\lambda - g_{\\beta\\lambda}k^\\beta \\delta^\\mu_\\alpha)\\right)\\nonumber\\\\ &=& 
3g^{\\mu\\nu}\\frac{g_{[\\lambda\\beta]}k^\\beta}{\\sqrt{\\vert g\\vert}}\\label{x} \n\\een\nNow, define the new affine coefficients\n\\ben\n\\Gamma^{\\prime\\nu}_{\\lambda\\alpha} &=& \\left(\\Gamma^\\nu_{(\\lambda\\alpha)} + G^\\nu_{\\lambda\\alpha} + \\frac{1}{\\sqrt{\\vert g\\vert}}(g_{\\lambda\\beta}k^\\beta \\delta^\\nu_\\alpha - g_{\\beta\\alpha}k^\\beta \\delta^\\nu_\\lambda) \\right) \\label{affine1}\\\\\n\\Gamma^{\\prime\\mu}_{\\alpha\\lambda} &=& \\left(\\Gamma^\\mu_{(\\alpha\\lambda)} + G^\\mu_{\\alpha\\lambda} + \\frac{1}{\\sqrt{\\vert g\\vert}}(g_{\\alpha\\beta}k^\\beta \\delta^\\mu_\\lambda - g_{\\beta\\lambda}k^\\beta \\delta^\\mu_\\alpha)\\right) \\label{affine2} \n\\een\nand\n\\beq\n\\Phi_\\lambda = \\frac{g_{[\\lambda\\beta]}k^\\beta}{\\sqrt{\\vert g\\vert}}\\\\\n\\eeq\nThen, Eqn. (\\ref{x}) can be written in the form\n\\beq\ng^{\\mu\\nu}_{,\\,\\lambda} + g^{\\mu\\alpha}\\Gamma^{\\prime\\nu}_{\\lambda\\alpha} + g^{\\alpha\\nu}\\Gamma^{\\prime\\mu}_{\\alpha\\lambda} = 3g^{\\mu\\nu}\\Phi_\\lambda. \n\\eeq\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nIn recent years it has been suggested that the medium of quarks and\ngluons produced in heavy ion collisions at RHIC goes through a\nstrongly-coupled phase at least during some period of its evolution\n\\cite{Teaney:2003kp,Shuryak:2006se,Huovinen:2001cy,Teaney:2001av}. The\nAnti-de Sitter space\/Conformal Field Theory (AdS\/CFT) correspondence\n\\cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj} is often used to\nstudy the dynamics of this strongly-coupled medium\n\\cite{Janik:2005zt,Janik:2006ft,Heller:2007qt,Benincasa:2007tp,Kovchegov:2007pq,Kajantie:2008rx,Grumiller:2008va,Gubser:2008pc,Albacete:2008vs,Albacete:2009ji,Lin:2009pn,Gubser:2009sx,AlvarezGaume:2008fx,Nastase:2008hw,Kovchegov:2009du,Taliotis:2010pi,Lin:2010cb,Chesler:2009cy,Gubser:2010ze,Chesler:2010bi,Beuf:2009cx,Beuf:2008ep}:\nwhile it is valid only for ${\\cal N} =4$ super-Yang-Mills (SYM)\ntheory, there is a possibility that the qualitative (and some of the\nquantitative) results obtained from AdS\/CFT correspondence may be\napplied to the real-world case of QCD.\n\n\nThe main thrust of the efforts to study the dynamics of the medium\nproduced in heavy ion collisions using AdS\/CFT correspondence has been\ndirected toward understanding how (and when) the medium isotropizes\nand thermalizes\n\\cite{Janik:2005zt,Janik:2006ft,Heller:2007qt,Benincasa:2007tp,Kovchegov:2007pq,Kajantie:2008rx,Grumiller:2008va,Gubser:2008pc,Albacete:2008vs,Albacete:2009ji,Lin:2009pn,Gubser:2009sx,AlvarezGaume:2008fx,Nastase:2008hw,Kovchegov:2009du,Taliotis:2010pi,Lin:2010cb,Chesler:2009cy,Chesler:2010bi,Beuf:2009cx}.\nThe existing approaches can be divided into two categories: while some\nstudies concentrated on the dynamics of the produced medium in the\nforward light-cone without analyzing the production mechanism for the\nmedium\n\\cite{Janik:2005zt,Kovchegov:2007pq,Janik:2006ft,Heller:2007qt,Benincasa:2007tp,Chesler:2009cy,Beuf:2009cx},\na large amount of work has been concentrated on studying the\ncollisions by modeling the heavy ions with shock waves in AdS$_5$ and\nattempting to solve Einstein equations in the bulk for a collision of\ntwo AdS$_5$ shock waves\n\\cite{Kajantie:2008rx,Grumiller:2008va,Gubser:2008pc,Albacete:2008vs,Albacete:2009ji,Lin:2009pn,Gubser:2009sx,AlvarezGaume:2008fx,Nastase:2008hw,Kovchegov:2009du,Taliotis:2010pi,Lin:2010cb,Chesler:2010bi}.\nMany of the existing calculations strive to obtain the 
expectation\nvalue of the energy-momentum tensor $\\langle T_{\\mu\\nu} \\rangle$ of\nthe produced medium in the boundary gauge theory\n\\cite{Janik:2005zt,Janik:2006ft,Heller:2007qt,Benincasa:2007tp,Kovchegov:2007pq,Kajantie:2008rx,Grumiller:2008va,Albacete:2008vs,Albacete:2009ji,Taliotis:2010pi,Lin:2010cb,Chesler:2009cy,Chesler:2010bi,Beuf:2009cx},\nsince this is the quantity most relevant for addressing the question\nof the isotropization of the medium. Other works address the general\nquestion of thermalization by noticing that it corresponds to creation\nof a black hole in the AdS bulk, and by constructing a physical proof\nof the black hole formation with the help of a trapped surface\nanalysis\n\\cite{Gubser:2008pc,Lin:2009pn,Gubser:2009sx,AlvarezGaume:2008fx,Nastase:2008hw,Kovchegov:2009du}.\n\n\nIn this work we concentrate on a different observable characterizing\nheavy ion collisions: we study correlation functions in the produced\nexpanding strongly-coupled medium. Correlation functions have become a\npowerful tool for the analysis of data coming out of heavy ion\ncollisions, allowing for a quantitative measure of a wide range of\nphenomena, from Hanbury-Brown--Twiss (HBT) interferometry\n\\cite{Adler:2001zd}, to jet quenching \\cite{Adler:2002tq} and Color\nGlass Condensate (CGC) \\cite{Braidot:2010ig}. In recent years a new\npuzzling phenomenon was discovered in the two-particle correlation\nfunctions measured in $Au+Au$ collisions at Relativistic Heavy Ion\nCollider (RHIC) \\cite{Adams:2005ph,Adare:2008cqb,Alver:2009id}: the\nexperiments see correlations with a rather small azimuthal angle\nspread, but with a rather broad (up to several units) distribution in\nrapidity. This type of correlation is referred to as ``the ridge''.\nMore recently the ridge correlations have been seen in\nhigh-multiplicity proton-proton collisions at the Large Hadron\nCollider (LHC) \\cite{Khachatryan:2010gv}, as well as in the\npreliminary data on $Pb+Pb$ collisions at LHC.\n\n\nSeveral theoretical explanations have been put forward to account for\nthe ridge correlations. They can be sub-divided into two classes:\nperturbative and non-perturbative. Perturbative explanations, put\nforward in the CGC framework in\n\\cite{Dumitru:2008wn,Gavin:2008ev,Dusling:2009ni,Dumitru:2010iy,Dumitru:2010mv,Kovner:2010xk},\nare based on the long-range rapidity correlations present in the\ninitial state of a heavy ion collision due to CGC classical gluon\nfields\n\\cite{McLerran:1993ni,Kovner:1995ja,Kovchegov:1997ke,Kovchegov:1999ep}\n(see \\cite{Jalilian-Marian:2005jf,Weigert:2005us,Iancu:2003xm} for\nreviews of CGC physics). In \\cite{Gavin:2008ev,Dumitru:2008wn} the\nauthors invoke causality to argue that long-range rapidity correlation\ncan only arise in the early times after the collision, since at later\ntimes the regions at different rapidities become causally\ndisconnected. This is illustrated in \\fig{spacetime}, where one can\nsee that the gray-shaded causal pasts of two particles produced in the\ncollision (labeled by arrows with momenta $k_1$ and $k_2$) overlap\nonly at very early time (the red-shaded region). The authors of\n\\cite{Gavin:2008ev} then suggest that the late-time radial flow due to\nhydrodynamic evolution would lead to azimuthal correlations\ncharacteristic of the ``ridge''. Alternatively, the authors of\n\\cite{Dumitru:2008wn,Dusling:2009ni,Dumitru:2010mv,Dumitru:2010iy}\nhave identified a class of Feynman diagrams which generate azimuthal\ncorrelations in nucleus--nucleus collisions. 
\n\n\nThe CGC correlations found in\n\\cite{Dumitru:2008wn,Dusling:2009ni,Dumitru:2010mv,Dumitru:2010iy} are\nbased on purely perturbative small-coupling physics: however, it\nremains to be shown whether such perturbative dynamics contains large\nenough azimuthal correlations to account for all of the observed\n``ridge'' phenomenon. In the scenario of\n\\cite{Gavin:2008ev,Dumitru:2008wn} CGC dynamics provides rapidity\ncorrelations, while azimuthal correlations are generated by\nhydrodynamic evolution. As we have already mentioned, it is possible\nthat the medium created at RHIC is strongly-coupled\n\\cite{Teaney:2003kp,Shuryak:2006se,Huovinen:2001cy,Teaney:2001av}: if\nso, hydrodynamic evolution would then be a non-perturbative effect,\nmaking the scenario proposed in \\cite{Gavin:2008ev,Dumitru:2008wn}\nimplicitly non-perturbative. Purely non-perturbative explanations of\nthe ``ridge'' include parton cascade models \\cite{Werner:2010ss},\nhadronic string models \\cite{Konchakovski:2008cf}, and event-by-event\nhydrodynamic simulations \\cite{Takahashi:2009na}. The causality\nargument of \\cite{Gavin:2008ev,Dumitru:2008wn} is valid in the\nnon-perturbative case as well: one needs correlations in the initial\nstate, either due to soft pomeron\/hadronic strings interactions\n\\cite{Werner:2010ss,Konchakovski:2008cf}, or due to initial-state\nfluctuations \\cite{Takahashi:2009na}, in order to obtain long-range\nrapidity correlations. In this work we will use AdS\/CFT to address the\ntheoretical question whether long-range rapidity correlations are\npresent in the non-perturbative picture of heavy ion collisions. At\nthe same time we recognize that a complete understanding of whether\nthe ``ridge'' correlations observed at RHIC and LHC are perturbative\n(CGC) or non-perturbative in nature is still an open problem left for\nfuture studies.\n\n\n\n\\begin{figure}[th]\n\\begin{center}\n\\epsfxsize=8cm\n\\leavevmode\n\\hbox{\\epsffile{spacetime.eps}}\n\\end{center}\n\\caption{Space-time picture of a heavy ion collision demonstrating\n how long-range rapidity correlations can be formed only in the\n initial stages of the collision, as originally pointed out in\n \\cite{Gavin:2008ev,Dumitru:2008wn}. Gray shaded regions denote\n causal pasts of the two produced particles with four-momenta $k_1$\n and $k_2$, with their overlap region highlighted in red. We have\n drawn the lines of constant proper time $\\tau$ and constant\n space-time rapidity $\\eta$ to guide the eye and to underscore that\n late-time emission events for the two particles are likely to be\n causally disconnected. }\n\\label{spacetime}\n\\end{figure}\n\nThe goal of the present work is to study long-range rapidity\ncorrelations in heavy ion collisions in the strongly-coupled AdS\/CFT\nframework. In order to test for the long range rapidity correlations\nobserved in heavy ion collisions, we would like to study the two-point\nfunction $\\langle\\tr F_{\\mu\\nu}^2(x) \\, \\tr\nF_{\\rho\\sigma}^2(y)\\rangle$ of glueball operators $\\tr\n\\left(F_{\\mu\\nu}^2 \\right)$ right after the collision but before the\nthermalization. According to causality arguments of\n\\cite{Gavin:2008ev,Dumitru:2008wn}, one expects that the long range\ncorrelations in rapidity should occur at such early times. The choice\nof observable is mainly governed by calculational simplicity. 
The\nmetric for the early times after the collision of two shock waves in\nAdS$_5$ was obtained in\n\\cite{Grumiller:2008va,Albacete:2008vs,Albacete:2009ji}: after\nformulating the problem in Sec. \\ref{general} and presenting our\ngeneral expectation for the answer in Sec. \\ref{simple}, we use this\nmetric to calculate the correlation function of two glueball operators\nin Sec. \\ref{Correlators}. (Since the glueball operator corresponds to\nthe massless scalar field in the bulk, we compute the two-point\nfunction of the scalar field in the background of the colliding shock\nwaves metric.) Our main result is that we do find long-range rapidity\ncorrelations in the strongly-coupled initial state, albeit with a\nrather peculiar rapidity dependence: the two-glueball correlation\nfunction scales as\n\\begin{align}\n \\label{eq:ampl}\n C (k_1, k_2) \\, \\sim \\, \\cosh \\left( 4 \\, \\Delta y \\right)\n\\end{align}\nwith the (large) rapidity interval $\\Delta y$ between them. We also\nshow in Sec. \\ref{Correlators} that the correlator of two\nenergy-momentum tensors $\\langle T_{2}^1 (x) \\, T_{2}^1 (y) \\rangle$\n(with $1,2$ transverse directions) exhibits the same long-range\nrapidity correlations. This should be contrasted with the CGC result,\nin which the correlations are at most flat in rapidity\n\\cite{Dumitru:2008wn,Dumitru:2010iy,Dumitru:2010mv,Gavin:2008ev}.\nIndeed the growth of correlations with rapidity interval in\n\\eq{eq:ampl} also contradicts experimental data\n\\cite{Adams:2005ph,Adare:2008cqb,Alver:2009id,Khachatryan:2010gv}.\nAlthough we should not {\\it a priori} expect an agreement between\nAdS\/CFT calculations and experimental QCD data, we argue in Sec.\n\\ref{sum} that inclusion of higher-order corrections in the AdS\ncalculation along the lines of \\cite{Albacete:2009ji} should help \nto flatten out such growth, though it is a very difficult problem to\ndemonstrate this explicitly.\n\n\nUsing the causality argument of \\cite{Gavin:2008ev,Dumitru:2008wn}\nillustrated in \\fig{spacetime} we also expect that after\nthermalization the rapidity correlations should only be short-ranged.\nAs a result, due to causality, the initial long ranged correlations\ncan not be ``washed away'' and will be observed at later times.\nThis explanation is analogous to the resolution of the `horizon\nproblem' in the cosmic microwave background radiation (CMB), where the\nobserved near-homogeneity of the CMB suggests that the universe was\nextremely homogeneous at the time of the last scattering even over\ndistance scales that could not have been in causal contact in the\npast. This problem was solved by assuming that the universe, when it\nwas still young and extremely homogeneous, went through a very rapid\nperiod of expansion (inflation). As a consequence of inflation,\ndifferent regions of the universe became causally disconnected, while\npreserving the initial homogeneity.\nThe idea that we pursue here for the heavy ion collisions seems to be\nof similar nature. To verify the statement that late-time dynamics can\nnot generate (or otherwise affect) long-range rapidity correlations we\nstudy glueball correlation again in Sec. \\ref{late-times} now using\nthe metric found by Janik and Peschanski \\cite{Janik:2005zt}, which is\ndual to Bjorken hydrodynamics \\cite{Bjorken:1982qr}. 
(This is done in\nthe absence an analytic solution of the problem of colliding shock\nwaves: despite some recent progress\n\\cite{Grumiller:2008va,Albacete:2008vs,Albacete:2009ji} the late-time\nmetric is unknown at present.) Performing a perturbative estimate, we\nfind that, indeed, only short-range rapidity correlations result from\nthe gauge theory dynamics dual to the Janik and Peschanski metric.\n\nWe summarize our results in Sec. \\ref{sum}.\n\n\n\n\\section{Generalities and Problem Setup}\n\\label{general}\n\n\\subsection{AdS\/CFT Tools}\n\n\nWe start with a metric for a single shock wave moving along a light\ncone in the $x^+$ direction \\cite{Janik:2005zt} in Fefferman--Graham\ncoordinates \\cite{F-G}:\n\\begin{equation}\\label{1nuc}\n ds^2 \\, = \\, \\frac{L^2}{z^2} \\, \\left\\{ -2 \\, dx^+ \\, dx^- + t_1 (x^-) \\, z^4 \\, d\n x^{- \\, 2} + d x_\\perp^2 + d z^2 \\right\\}\n\\end{equation}\nwhere\n\\begin{align}\\label{t1}\n t_1 (x^-) \\, \\equiv \\, \\frac{2 \\, \\pi^2}{N_c^2} \\, \\langle T_{1 \\,\n --} (x^-) \\rangle.\n\\end{align}\nHere $x^\\pm = \\frac{x^0 \\pm x^3}{\\sqrt{2}}$, ${\\un x} = (x^1, x^2)$,\n$d x_\\perp^2 = (d x^1)^2 + (d x^2)^2$, $z$ is the coordinate\ndescribing the 5th dimension such that the ultraviolet (UV) boundary\nof the AdS space is at $z=0$, and $L$ is the radius of the AdS space.\nAccording to holographic renormalization \\cite{deHaro:2000xn},\n$\\langle T_{--} (x^-) \\rangle$ is the expectation value of the\nenergy-momentum tensor for a single ultrarelativistic nucleus moving\nalong the light-cone in the $x^+$-direction in the gauge theory. We\nassume that the nucleus is made out of nucleons consisting of $N_c^2$\n``valence gluons'' each, such that $\\langle T_{--} (x^-) \\rangle\n\\propto N_c^2$, and the metric \\peq{1nuc} has no $N_c^2$-suppressed\nterms in it. The metric in \\eq{1nuc} is a solution of Einstein\nequations in AdS$_5$:\n\\begin{align}\n \\label{ein}\n R_{\\mu\\nu} + \\frac{4}{L^2} \\, g_{\\mu\\nu} = 0.\n\\end{align}\n\nImagine a collision of the shock wave \\peq{1nuc} with another similar\nshock wave moving in the light cone $x^-$ direction described by the\nmetric\n\\begin{align}\\label{2nuc}\n ds^2 \\, = \\, \\frac{L^2}{z^2} \\, \\left\\{ -2 \\, dx^+ \\, dx^- + t_2\n (x^+) \\, z^4 \\, d x^{+ \\, 2} + d x_\\perp^2 + d z^2 \\right\\}\n\\end{align}\nwith\n\\begin{align}\\label{t2}\n t_2 (x^+) \\, \\equiv \\, \\frac{2 \\, \\pi^2}{N_c^2} \\, \\langle T_{2 \\,\n ++} (x^+) \\rangle.\n\\end{align}\nHere we will consider the high-energy approximation, in which the\nshock waves' profiles are given by delta-functions,\n\\begin{align}\\label{deltas}\n t_1 (x^-) = \\mu_1 \\, \\delta (x^-), \\ \\ \\ t_2 (x^+) = \\mu_2 \\, \\delta\n (x^+).\n\\end{align}\nThe two scales $\\mu_1$ and $\\mu_2$ can be expressed in terms of the\nphysical parameters in the problem since we picture the shock waves as\ndual to the ultrarelativistic heavy ions in the boundary gauge theory\n\\cite{Albacete:2008vs,Albacete:2008ze} :\n\\begin{align}\\label{mus}\n \\mu_{1} \\sim p_{1}^+ \\, \\Lambda_1^2 \\, A_1^{1\/3}, \\ \\ \\ \\mu_{2} \\sim\n p_{2}^- \\, \\Lambda_2^2 \\, A_2^{1\/3}.\n\\end{align}\nHere $p_1^+$, $p_2^-$ are the large light-cone momenta per nucleon,\n$A_1$ and $A_2$ are atomic numbers, and $\\Lambda_1$ and $\\Lambda_2$\nare the typical transverse momentum scales in the two nuclei\n\\cite{Albacete:2008vs}. 
Note that $\\mu_1$ and $\\mu_2$ are independent\nof $N_c$.\n\n\nThe exact analytical solution of Einstein equations \\peq{ein} starting\nwith the superposition of the metrics \\peq{1nuc} and \\peq{2nuc} before\nthe collision, and generating the resulting non-trivial metric after\nthe collisions, is not known. Instead one constructs perturbative\nexpansion of the solution of Einstein equations in powers of $t_1$ and\n$t_2$, or, equivalently, $\\mu_1$ and $\\mu_2$\n\\cite{Grumiller:2008va,Albacete:2008vs,Albacete:2009ji,Taliotis:2010pi,Lin:2010cb}.\nAt present the metric is known up to the fourth order in $\\mu$'s\n\\cite{Grumiller:2008va,Albacete:2008vs,Albacete:2009ji}, and also a\nresummation to all-orders in $\\mu_2$ ($\\mu_1$) while keeping $\\mu_1$\n($\\mu_2$) at the lowest order has been performed in\n\\cite{Albacete:2009ji}. The validity of the perturbatively obtained\nmetric is limited to early proper times $\\tau = \\sqrt{2 \\, x^+ \\,\n x^-}$, see e.g. \\cite{Albacete:2009ji} (though indeed the\nfully-resummed series in powers of $\\mu_1$, $\\mu_2$ would be valid\neverywhere). Since here we are interested in the early-time\ncorrelations (and due to complexity of the $\\mu_2$-resummed metric\nobtained in \\cite{Albacete:2009ji}), we limit ourselves to the $O\n(\\mu_1 \\, \\mu_2)$ metric obtained in\n\\cite{Albacete:2008vs,Grumiller:2008va} in the Fefferman--Graham\ncoordinates:\n\\begin{align}\\label{2nuc_gen}\n ds^2 \\, = \\, \\frac{L^2}{z^2} \\, \\bigg\\{ -\\left[ 2 + G (x^+, x^-, z)\n \\right] \\, dx^+ \\, dx^- + \\left[ t_1 (x^-) \\, z^4 + F (x^+, x^-, z)\n \\right] \\, d x^{- \\, 2} \\notag \\\\ + \\left[ t_2 (x^+) \\, z^4 +\n {\\tilde F} (x^+, x^-, z) \\right] \\, d x^{+ \\, 2} + \\left[ 1 + H\n (x^+, x^-, z) \\right] \\, d x_\\perp^2 + d z^2 \\bigg\\}.\n\\end{align}\nThe components of the metric at the order-$\\mu_1 \\, \\mu_2$ are\n\\begin{align}\\label{LO}\n F (x^+, x^-, z) \\, & = \\, - \\lambda_1 (x^+, x^-) \\, z^4 -\n \\frac{1}{6} \\, \\partial_-^2 h_0 (x^+, x^-) \\, z^6 - \\frac{1}{16} \\,\n \\partial_-^2\n h_1 (x^+, x^-) \\, z^8 \\notag \\\\\n {\\tilde F} (x^+, x^-, z) \\, & = \\, - \\lambda_2 (x^+, x^-) \\, z^4 -\n \\frac{1}{6} \\, \\partial_+^2 h_0 (x^+, x^-) \\, z^6 - \\frac{1}{16} \\,\n \\partial_+^2\n h_1 (x^+, x^-) \\, z^8 \\notag \\\\\n G (x^+, x^-, z) \\, & = \\, - 2 \\, h_0 (x^+, x^-) \\, z^4 - 2 \\, h_1\n (x^+, x^-) \\, z^6 + \\frac{2}{3} \\, t_1 (x^-) \\, t_2 (x^+) \\, z^8 \\notag \\\\\n H (x^+, x^-, z) \\, & = \\, h_0 (x^+, x^-) \\, z^4 + h_1 (x^+, x^-) \\,\n z^6,\n\\end{align}\nwhere we defined \\cite{Albacete:2008vs}\n\\begin{align}\\label{LOstuff}\n h_0 (x^+, x^-) \\, & = \\, \\frac{8}{\\partial_+^2 \\, \\partial_-^2} \\,\n t_1 (x^-) \\, t_2 (x^+), \\ \\ \\ h_1 (x^+, x^-) \\, = \\, \\frac{4}{3 \\,\n \\partial_+ \\, \\partial_-} \\, t_1 (x^-) \\, t_2 (x^+) \\notag \\\\\n \\lambda_1 (x^+, x^-) \\, & = \\, \\frac{\\partial_{-}}{\\partial_{+}} \\,\n h_0 (x^+, x^-), \\ \\ \\ \\lambda_2 (x^+, x^-) \\, = \\,\n \\frac{\\partial_{+}}{\\partial_{-}} \\, h_0 (x^+, x^-)\n\\end{align}\nalong with the definition of the causal integrations\n\\begin{align}\\label{ints}\n \\frac{1}{\\partial_{+}} [\\ldots](x^+) \\, \\equiv \\,\n \\int\\limits_{-\\infty}^{x^+} \\, d x'^+ \\, [\\ldots](x'^+), \\ \\ \\\n \\frac{1}{\\partial_{-}} [\\ldots](x^-) \\, \\equiv \\,\n \\int\\limits_{-\\infty}^{x^-} \\, d x'^- \\, [\\ldots](x'^-).\n\\end{align}\n\n\nBelow we will calculate correlation functions of the glueball\noperators\n\\begin{align}\\label{Jdef}\n J(x) \\, \\equiv \\, 
\frac{1}{2} \, \tr [F_{\mu\nu} \, F^{\mu\nu}]\n\end{align}\nin the boundary gauge theory.\footnote{When defining the glueball\n  operator we assume that in the boundary theory the gluon field\n  $A_\mu^a$ is defined without absorbing the gauge coupling $g_{YM}$\n  in it, such that the field strength tensor $F_{\mu\nu}^a$ contains\n  the coupling $g_{YM}$.} According to the standard AdS\/CFT\nprescription,\footnote{Since $\Delta =4$, with $\Delta$ the conformal\n  dimension of $J(x)$, the mass of the dual scalar field, $m^2 =\n  \Delta \, (\Delta-4)$, is zero.} the glueball operator is dual to\nthe massless scalar (dilaton) field $\phi$ in the AdS$_5$ bulk\n\cite{Klebanov:2000me} with the action\n\begin{align}\n  S^\phi \, = \, - \frac{N_c^2}{16 \, \pi^2 \, L^3} \, \int d^4 x \, d\n  z \, \sqrt{-g} \, g^{MN} \, \partial_M \phi (x,z) \, \partial_N \phi\n  (x,z),\n\end{align}\nwhere $M,N = (\mu,z)$, $\mu = (0,1,2,3)$ and $x^{\mu}$ correspond to\n4D field theory coordinates, while $z$ is the coordinate along the\nextra fifth (holographic) dimension. (As usual $g = \det {g_{MN}}$.)\n\nThe equation of motion (EOM) for the scalar field is\n\begin{align}\label{eom}\n  \frac{1}{\sqrt{-g}} \, \partial_M \left[ \sqrt{-g}~g^{MN}\partial_N\n    \phi (x,z)\right] \, = \, 0.\n\end{align}\nUsing \eq{eom}, the dilaton action evaluated on the classical solution\ncan be cast in the following form convenient for the calculation of\ncorrelation functions:\n\begin{align}\n  \label{dil_action}\n  S^\phi_{cl} \, = \, \frac{N_c^2}{16 \, \pi^2 \, L^3} \, \int d^4 x\n  \, \left[ \sqrt{-g} \, g^{zz} \, \phi(x,z) \, \partial_z \phi(x,z)\n  \right] \Bigg|_{z=0} \, = \, \frac{N_c^2}{16 \, \pi^2} \, \int d^4 x\n  \, \phi_B (x) \, \left[ \frac{1}{z^3} \, \partial_z \phi(x,z)\n  \right] \Bigg|_{z=0}.\n\end{align}\nIn arriving at the expression on the right of \eq{dil_action} we have\nused the metric in Eqs. \peq{2nuc_gen}, \peq{LO}, and \peq{LOstuff},\nalong with the standard assumption that the fields $\phi$ have the\nfollowing boundary condition (BC) at the UV boundary, $\phi(x,z\to 0)\n= \phi_B(x)$, which allowed us to approximate near $z=0$\n\begin{align}\n  \label{g_eq}\n  g \, = \, - \frac{L^{10}}{z^{10}} \, \left( 1 - \frac{1}{3} \, z^8\n    \, t_1 (x^-) \, t_2 (x^+) \right) \, \approx \, -\n  \frac{L^{10}}{z^{10}}.\n\end{align}\nIn arriving at \eq{dil_action} we have also demanded that\footnote{As\n  one can see later, our classical solutions satisfy this condition.}\n\begin{align}\n  \sqrt{-g} \, g^{zz} \, \phi(x,z) \, \partial_z \phi(x,z) \,\n  \rightarrow \, 0 \ \ \ \text{as} \ \ \ z \rightarrow \infty.\n\end{align}\n\nDefine the retarded Green function of the glueball operator \peq{Jdef}\n(averaged in the heavy ion collision background),\n\begin{align}\n  \label{retG}\n  G_R (x_1, x_2) \, = \, - i \, \theta (x_1^0 - x_2^0) \, \langle\n  [J(x_1), J(x_2)] \rangle.\n\end{align}\nAccording to the AdS\/CFT correspondence the contribution to the\nretarded Green function coming from the medium produced in the\ncollision is given by \cite{Son:2002sd}\footnote{As was shown in\n  \cite{Herzog:2002pc,Skenderis:2008dg} the right-hand side of\n  \eq{Sdiff} contains contributions of both the retarded and advanced\n  Green functions $G_R$ and $G_A$. 
In the lowest-order calculation we\n are going to perform here the Green functions are real, and, since\n $\\text{Re} \\, G_R = \\text{Re} \\, G_F = \\text{Re} \\, G_A$ (with $G_F$\n the Feynman Green function defined below in \\eq{GF}), we do not need\n to address the question of disentangling the contributions of\n different wave functions to \\eq{Sdiff} and will adopt the convention\n of \\cite{Mueller:2008bt,Avsar:2009xf} by calling the object in\n \\eq{Sdiff} a retarded Green function.}\n\\begin{align}\n \\label{Sdiff}\n G_R (x_1, x_2) \\, = \\, \\frac{\\delta^2 [S^\\phi_{cl} - S_0]}{\\delta\n \\phi_B (x_1) \\, \\delta \\phi_B (x_2)},\n\\end{align}\nwhere we subtract the action $S_0$ of the scalar field in the empty\nAdS$_5$ space to remove the contribution of the retarded Green\nfunction in the vacuum. The latter has nothing to do with the\nproperties of the medium produced in the collision and has to be\ndiscarded.\n\n\nLater we will be interested in the Fourier transform of the retarded\nGreen function\n\\begin{align}\n \\label{Gr_mom}\n G_R (k_1, k_2) \\, = \\, \\int d^4 x_1 \\, d^4 x_2 \\, e^{-i \\, k_1 \\cdot\n x_1 -i \\, k_2 \\cdot x_2} \\, G_R (x_1, x_2).\n\\end{align}\n(We are working in the $(-,+,+,+)$ metric in the boundary four\ndimensions.)\n\n\n\n\n\n\\subsection{Kinematics}\n\\label{kine}\n\nWe have defined above $k^{\\pm} = (k^0 \\pm k^3)\/\\sqrt{2}$, ${\\un k} =\n(k^1, k^2)$, $k_\\perp = |{\\un k}|$ and $k^2 = k_{\\bot}^2 - 2 \\, k^+ \\,\nk^- \\, = \\, -m^2$. The particle rapidity, defined as, $y =\n\\frac{1}{2}\\, \\ln \\frac{k^+}{k^-}$, is a useful variable, since the\nrapidity difference between any pair of particles remains unchanged if\nwe go from the center of mass frame to any other frame by performing a\nboost along the longitudinal direction, $x^3$. On the other hand,\nwhen $k^0 \\gg m $, $y \\approx y_p = \\ln \\cot (\\theta\/2)$, where $y_P$\nis pseudorapidity, and $\\theta$ is the angle at which the particle\nemerges in the center of mass frame. Furthermore, defining $m_{\\bot}\n\\equiv \\sqrt{k_{\\bot}^2+m^2}$, we can rewrite the light-cone\ncomponents of the momentum as: $k^+ = m_{\\bot}e^y\/\\sqrt{2}$ and $k^- =\nm_{\\bot}e^{-y}\/\\sqrt{2}$. In the case when $k_{\\bot}^2 \\gg m^2$ one\nhas $k^+k^- \\approx k^2_{\\bot}\/2$.\n\nConsider two identical on mass-shell particles with momenta $k_1 =\n(k_1^+, k^-_1, {\\un k}_{1})$ and $k_2 = (k_2^+,k^-_2, {\\un k}_{2})$.\nAssuming $k^2_1 = k^2_2 = -m^2$ and ${\\un k}_{1} = {\\un k}_{2} = {\\un\n k}$, we obtain\n\\begin{align}\n q^2 \\equiv (k_2-k_1)^2 = -2 \\, m^2 -2 \\, k_1 \\cdot k_2 \\, = \\,\n 4 \\, m^2_{\\bot} \\, \\sinh^2 \\frac{\\Delta y}{2} > 0 \\ ,\n\\end{align}\nwhere $\\Delta y = y_2 - y_1$ with $y_1$ and $y_2$ the rapidities of\nthe two particles. In case when $k^2_{\\bot} \\gg m^2$ and $\\Delta y \\gg\n1$, we have $q^2 \\approx 2 \\, k_{\\bot}^2 \\cosh \\Delta y \\approx\nk_{\\bot}^2 e^{\\Delta y}$. It is worth noting that the momentum\ndifference is space-like, since $q^2 \\equiv Q^2 > 0$.\n\n\n\n\n\\subsection{Defining the observable in the boundary gauge theory}\n\n\nLet us now specify the observable we want to calculate in the boundary\ngauge theory. Our primary goal is to study rapidity correlations using\nAdS\/CFT. Ideally one would like to find correlations between produced\nparticles. However, ${\\mathcal N} =4$ SYM theory has no bound states,\nand, at strong coupling, it does not make sense to talk about\nindividual supersymmetric particles. 
Therefore we will study\ncorrelators of operators, starting with the glueball operator defined\nin \\eq{Jdef}. One can think of the glueballs as external probes to\n${\\mathcal N} =4$ SYM theory (in the sense of being particles from\nsome other theory in four dimensions), which couple to the gluons in\n${\\mathcal N} =4$ SYM, and therefore can be produced in the collision.\nLater on we will also consider correlators of the energy-momentum\ntensor $T_{\\mu\\nu}$, which should be also thought of as an operator\ncoupling to a particle (in four dimensions) external to the ${\\mathcal\n N} =4$ SYM theory.\n\nWe start with the glueball production. To study two-particle\ncorrelations we need to find the two-particle multiplicity\ndistribution\n\\begin{align}\n \\label{N2}\n \\frac{d^6 N}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2}\n\\end{align}\nwhere $k_{1}^{\\perp}$, $y_1$ and $k_{2}^{\\perp}$, $y_2$ are the\ntransverse momenta of the produced particles (glueballs) and their\nrapidities, and $d^2 k \\equiv d k^1 \\, d k^2$. As usual we can\ndecompose the two-particle multiplicity distribution into the\nuncorrelated and correlated pieces\n\\begin{align}\n \\label{2terms}\n \\frac{d^6 N}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2} \\, = \\, \\frac{d^3\n N}{d^2 k_1 \\, d y_1} \\, \\frac{d^3 N}{d^2 k_2 \\, d y_2} + \\frac{d^6\n N_{corr}}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2}.\n\\end{align}\nWe are interested in computing the second (correlated) term on the\nright hand side of \\peq{2terms}. We begin by writing it as\n\\begin{align}\n \\label{ampl^2}\n \\frac{d^6 N_{corr}}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2} \\, \\propto\n \\, \\langle | M (k_1, k_2) |^2 \\rangle\n\\end{align}\nwhere $M (k_1, k_2)$ is the two-particle production amplitude. (Note\nthat since we are primarily interested in rapidity dependence of\ncorrelators, we are not keeping track of prefactors and other\ncoefficients not containing two-particle correlations.)\n\nFor the correlated term in \\eq{2terms} the amplitude of inclusive\ntwo-glueball production in a heavy ion collision is\n\\begin{align}\n \\label{ampl1}\n M (k_1, k_2) \\, \\propto \\, \\int d^4 x_1 \\, d^4 x_2 \\, e^{- i \\, k_1\n \\cdot x_1 - i \\, k_2 \\cdot x_2} \\, \\langle n | \\, T \\left\\{ J\n (x_1) \\, J (x_2) \\right\\} | A_1, A_2 \\rangle,\n\\end{align}\nwhich is a consequence of the LSZ reduction formula with $T$ denoting\ntime-ordering. Here $| n \\rangle$ denotes an arbitrary state of the\ngauge theory which describes other particles which may be produced in\na collision apart from the two glueballs.\n\nThe state $| A_1, A_2 \\rangle$ can be thought of as the vacuum in the\npresence of a source, with the source being the two nuclei with atomic\nnumbers $A_1$ and $A_2$. Consider first the expectation value of the\nenergy-momentum operator $\\langle T_{\\mu\\nu} \\rangle$ in a nuclear\ncollision. According to the standard prescription we can write it as\n\\begin{align}\n \\label{Tmn}\n \\langle T_{\\mu\\nu} (x) \\rangle \\, = \\, \\frac{\\int {\\cal D} A_\\mu \\,\n e^{i \\, S [A]} \\, W_+ [A] \\, W_- [A] \\, T_{\\mu\\nu} (x)}{\\int {\\cal\n D} A_\\mu \\, e^{i \\, S [A]} \\, W_+ [A] \\, W_- [A]}\n\\end{align}\nwhere $S [A]$ is the action of the gauge theory. For simplicity we\nonly explicitly show the integrals over gauge fields in \\eq{Tmn},\nimplying the integrals over all other fields in the theory. The\nobjects $W_+ [A]$ and $W_- [A]$ are some functionals of the fields in the theory describing the\ntwo colliding nuclei. 
For instance, in the perturbative QCD approaches\nsuch as CGC, these operators are Wilson lines along $x^-=0$ and $x^+\n=0$ light cone directions\n\\cite{McLerran:1993ni,Kovner:1995ja,Kovchegov:1997ke,Kovchegov:1999ep}.\n\\footnote{Calculation of the expectation value of $T_{\\mu\\nu}$ in CGC\n is reduced to perturbative evaluation\/resummation of \\eq{Tmn} (see\n e.g. \\cite{Kovchegov:2005az} for an example of such calculation).}\n\nUsing operators and states in Heisenberg picture one can rewrite\n\\eq{Tmn} as\n\\begin{align}\n \\label{eq:ave}\n \\langle T_{\\mu\\nu} (x) \\rangle \\, = \\, \\langle A_1, A_2 | T_{\\mu\\nu}\n (x) |A_1, A_2 \\rangle.\n\\end{align}\nComparing \\eq{eq:ave} to \\eq{Tmn} clarifies the meaning of the $| A_1,\nA_2 \\rangle$ state by demonstrating that the averaging in \\eq{eq:ave}\nis over a state of vacuum in the presence of nuclear sources (which of\ncourse strongly disturb the vacuum).\n\n\nUsing \\eq{ampl1} in \\eq{ampl^2} we obtain\n\\begin{align}\n \\label{corr2}\n \\frac{d^6 N_{corr}}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2} \\, \\propto\n \\, \\int d^4 x_1 \\, d^4 x_2 \\, d^4 x'_1 \\, d^4 x'_2 \\, e^{- i \\, k_1\n \\cdot (x_1 - x'_1) - i \\, k_2 \\cdot (x_2 - x'_2)} \\notag \\\\ \\times\n \\, \\sum_n \\, \\langle A_1, A_2 | \\, {\\overline T} \\left\\{ J (x'_1) \\,\n J (x'_2) \\right\\} | n \\rangle \\ \\langle n | \\, T \\left\\{ J (x_1)\n \\, J (x_2) \\right\\} | A_1, A_2 \\rangle\n\\end{align}\nwhere ${\\overline T}$ denotes the inverse time-ordering and we have\nused the fact that $J (x)$ is a hermitean operator. Summing over a\ncomplete set of states $| n \\rangle$ yields\n\\begin{align}\n \\label{corr3}\n \\frac{d^6 N_{corr}}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2} \\, &\n \\propto \\, \\int d^4 x_1 \\, d^4 x_2 \\, d^4 x'_1 \\, d^4 x'_2 \\, e^{- i\n \\, k_1 \\cdot (x_1 - x'_1) - i \\, k_2 \\cdot (x_2 - x'_2)} \\notag \\\\\n & \\times \\, \\langle A_1, A_2 | \\, {\\overline T} \\left\\{ J (x'_1) \\,\n J (x'_2) \\right\\} \\, T \\left\\{ J (x_1) \\, J (x_2) \\right\\} | A_1,\n A_2 \\rangle.\n\\end{align}\n\n\nAs one could have expected, in order to calculate two-particle\nproduction, we need to calculate a 4-point function given in\n\\eq{corr3}. This is, in general, a difficult task: instead we will use\nthe following simplification. 
Begin by replacing the complete set of\nstates $| n \\rangle$ by states ${\\cal O}_n (x) \\, |A_1, A_2 \\rangle$\nobtained by acting on our ``vacuum'' state $| A_1, A_2 \\rangle$ by a\ncomplete orthonormal set of gauge theory operators ${\\cal O}_n (x)$,\nsuch that\n\\begin{align}\n \\label{unity}\n \\mathds{1} \\, = \\, \\sum_n \\, | n \\rangle \\ \\langle n | \\, \\, = \\,\n \\sum_n \\, \\int \\, d^4 x \\, {\\cal O}_n (x) \\, | A_1, A_2 \\rangle \\ \\,\n \\langle A_1, A_2 | \\, {\\cal O}_n^\\dagger (x)\n\\end{align}\nwith the normalization condition\n\\begin{align}\n \\label{norm}\n \\langle A_1, A_2 | \\, {\\cal O}_m^\\dagger (y) \\, {\\cal O}_n (x) \\, |\n A_1, A_2 \\rangle \\, = \\, \\delta_{nm} \\, \\delta^{(4)} (x-y).\n\\end{align}\nUsing \\eq{unity} in \\eq{corr2} we write\n\\begin{align}\n \\label{corr4}\n & \\frac{d^6 N_{corr}}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2} \\,\n \\propto \\, \\int d^4 x_1 \\, d^4 x_2 \\, d^4 x'_1 \\, d^4 x'_2 \\, e^{- i\n \\, k_1 \\cdot (x_1 - x'_1) - i \\, k_2 \\cdot (x_2 - x'_2)} \\, \\sum_n\n \\, \\int \\, d^4 x \\notag \\\\ & \\times \\, \\langle A_1, A_2 | \\,\n {\\overline T} \\left\\{ J (x'_1) \\, J (x'_2) \\, {\\cal O}_n (x)\n \\right\\} \\, | A_1, A_2 \\rangle \\ \\, \\langle A_1, A_2 | \\, T \\left\\{\n {\\cal O}_n^\\dagger (x) \\, J (x_1) \\, J (x_2) \\right\\} | A_1, A_2\n \\rangle.\n\\end{align}\nTo evaluate \\eq{corr4} we have to insert all possible operators ${\\cal\n O}_n (x)$ from the orthonormal set in it. Noting that $J (x)$ is a\ngauge-invariant color-singlet operator, we conclude that only\ncolor-singlet ${\\cal O}_n (x)$ would contribute. Also, since the final\nstate in a scattering problem should be an observable, the operators\n${\\cal O}_n$ should be hermitean. The set of contributing ${\\cal O}_n\n(x)$'s should therefore include the identity operator, $J(x)$,\n$T_{\\mu\\nu} (x)$, etc.\n\n\nAs we will see below, since we are using the metric \\peq{2nuc_gen},\nwhich is a perturbative solution of Einstein equations to order $\\mu_1\n\\, \\mu_2$, we can only calculate correlators to order $\\mu_1 \\, \\mu_2$\nas well. Moreover, correlators which are independent of $\\mu_1$ and\n$\\mu_2$ are vacuum correlators that we are not interested in.\nCorrelators of order $\\mu_1$ or $\\mu_2$ correspond to performing deep\ninelastic scattering (DIS) on a single shock wave similar to\n\\cite{Mueller:2008bt,Avsar:2009xf,Kovchegov:2010uk}, and are thus not\ndirectly relevant to the problem of heavy ion collisions at hand. Thus\nin this paper we are only interested in correlators exactly at the\norder $\\mu_1 \\, \\mu_2$ in the expansion in the two shock waves. Using\nsuch power counting it is easy to see that inserting the identity\noperator (normalized to one to satisfy \\eq{norm}) into \\eq{corr4} in\nplace of ${\\cal O}_n$'s would give us a contribution of the order of\n$\\mu_1^2 \\, \\mu_2^2$, which is the lowest order contribution to double\nglueball production. Inserting $J(x)$ or $T_{\\mu\\nu} (x)$ into\n\\eq{corr4} instead of ${\\cal O}_n$'s would give zero. One can also see\nthat replacing ${\\cal O}_n$'s by higher (even) powers of $J(x)$ or\n$T_{\\mu\\nu} (x)$ (properly orthogonalized) in \\eq{corr4} would\ngenerate non-zero contributions, which are either higher order in\n$\\mu_1$ and $\\mu_2$ or $N_c^2$-suppressed. 
We therefore insert the\nidentity operator into \\eq{corr4}, which in the color space can be\nwritten as ${\\bf 1} = \\delta^{ab}\/N_c$ to satisfy normalization in\n\\eq{norm}, and write\n\\begin{align}\n \\label{corr5}\n & \\frac{d^6 N_{corr}}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2} \\,\n \\propto \\, \\int d^4 x_1 \\, d^4 x_2 \\, d^4 x'_1 \\, d^4 x'_2 \\, e^{- i\n \\, k_1 \\cdot (x_1 - x'_1) - i \\, k_2 \\cdot (x_2 - x'_2)} \\notag \\\\\n & \\times \\, \\frac{1}{N_c^2} \\, \\langle A_1, A_2 | \\, {\\overline T}\n \\left\\{ J (x'_1) \\, J (x'_2) \\right\\} | A_1, A_2 \\rangle \\ \\,\n \\langle A_1, A_2 | \\, T \\left\\{ J (x_1) \\, J (x_2) \\right\\} | A_1,\n A_2 \\rangle \\, \\left[1 + O (1\/N_c^2) \\right].\n\\end{align}\nWe have thus reduced the problem of two-glueball production to\ncalculation of two-point correlation functions! Note that the\nprefactor of $1\/N_c^2$ makes the $N_c$ counting right: since each\nconnected correlator is order-$N_c^2$, we see from \\eq{corr5} that the\ncorrelated two-particle multiplicity scales as $N_c^2$ as well, in\nagreement with perturbative calculations\n\\cite{Dumitru:2008wn,Gavin:2008ev,Dusling:2009ni,Dumitru:2010iy,Dumitru:2010mv}.\n\nDefining Feynman Green function\n\\begin{align}\n \\label{GF}\n G_F (k_1, k_2) \\, = \\, \\int d^4 x_1 \\, d^4 x_2 \\, e^{- i \\, k_1\n \\cdot x_1 - i \\, k_2 \\cdot x_2} \\, \\langle A_1, A_2 | \\, T \\left\\{\n J (x_1) \\, J (x_2) \\right\\} | A_1, A_2 \\rangle\n\\end{align}\nwe can summarize \\eq{corr5} as\n\\begin{align}\n \\label{corr6}\n \\frac{d^6 N_{corr}}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2} \\, \\propto\n \\, \\frac{1}{N_c^2} \\, |G_F (k_1, k_2)|^2.\n\\end{align}\n\nWith the help of the retarded Green function\n \\begin{align}\n \\label{GR}\n G_R (k_1, k_2) \\, = \\, - i \\, \\int d^4 x_1 \\, d^4 x_2 \\, e^{- i \\,\n k_1 \\cdot x_1 - i \\, k_2 \\cdot x_2} \\, \\theta (x_1^0 - x_2^0) \\,\n \\langle A_1, A_2 | \\, \\left[ J (x_1) , J (x_2) \\right] | A_1, A_2\n \\rangle\n \\end{align}\n and using the fact that at zero temperature $|G_F|^2 = |G_R|^2$\n \\cite{Son:2002sd}, we rewrite \\eq{corr6} as\n\\begin{align}\n \\label{corr7}\n \\frac{d^6 N_{corr}}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2} \\, \\propto\n \\, \\frac{1}{N_c^2} \\, |G_R (k_1, k_2)|^2.\n\\end{align}\nTherefore we need to calculate the two-point retarded Green function\nat the order $\\mu_1 \\, \\mu_2$. This is exactly the kind of Green\nfunction one can calculate using the AdS\/CFT techniques of Eqs.\n\\peq{Sdiff} and \\peq{Gr_mom}.\n\n\n\n\n\\section{A Simple Physical Argument}\n\\label{simple}\n\nBefore we present the full calculation of the two-particle\ncorrelations in AdS, we would like to give a simple heuristic argument\nof what one may expect from such a calculation. First of all, as we\nhave noted already, we are going to expand the Green function, and,\ntherefore, the bulk field $\\phi$ into powers of $\\mu_1$ and $\\mu_2$,\nstopping at the order-$\\mu_1 \\, \\mu_2$. To find the field $\\phi$ at\nthe order-$\\mu_1 \\, \\mu_2$ one has to solve \\eq{eom} with the metric\ntaken up to the order $\\mu_1 \\, \\mu_2$. Since we are interested in the\nlong-range rapidity correlations, our goal is to obtain the leading\nrapidity contribution from the calculation. 
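To see explicitly which momentum structures can carry a large rapidity
dependence, it is instructive to tabulate the light-cone bilinears for two
on-shell particles with equal transverse momenta. The short Python sketch
below does this numerically; the values of $m$, $k_\perp$ and the rapidities
are illustrative only, and the conventions are those of Section~\ref{kine},
$k^\pm = m_\perp \, e^{\pm y}/\sqrt{2}$.
\begin{verbatim}
import numpy as np

# Illustrative values only: glueball mass m and common transverse momentum.
m, kperp = 1.0, 3.0
mperp = np.sqrt(kperp**2 + m**2)

def kpm(y):
    # (k^+, k^-) of an on-shell particle at rapidity y: m_perp e^{+-y}/sqrt(2)
    return mperp*np.exp(y)/np.sqrt(2), mperp*np.exp(-y)/np.sqrt(2)

for dy in (1.0, 3.0, 6.0):
    (k1p, k1m), (k2p, k2m) = kpm(0.0), kpm(dy)
    print(f"dy={dy}: k1+k2- = {k1p*k2m:.4g} (falls as exp(-dy)), "
          f"k1-k2+ = {k1m*k2p:.4g} (grows as exp(+dy)), "
          f"k1+k1- = {k1p*k1m:.4g} (= m_perp^2/2, rapidity independent)")
\end{verbatim}
Only the mixed products $k_1^\mp \, k_2^\pm$ grow or fall exponentially with
the rapidity difference, while $k_i^+ \, k_i^-$ does not depend on it.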
Analyzing Eqs.\n\\peq{Gr_mom}, \\peq{dil_action}, and \\peq{Sdiff}, one can conclude that\nthe leading large-rapidity contribution comes from terms with the\nhighest number of factors of light-cone momenta, i.e., from terms like\n$k_1^+ \\, k_2^-$ and $k_1^- \\, k_2^+$ (but clearly not from $k_1^+ \\,\nk_1^- = m_\\perp^2\/2$ which is rapidity-independent). Taking $M=N=-$\nin \\eq{eom} one obtains, among other terms, the following\n(leading-rapidity) contribution:\n\\begin{align}\\label{contr1}\ng^{--}_{(2)} \\ \\partial_-^2 \\, \\phi_0,\n\\end{align}\nwhere $\\phi_0$ is the field at the order $(\\mu_1)^0 \\, (\\mu_2)^0$ and\n$g^{MN}_{(2)}$ is the metric at order-$\\mu_1 \\, \\mu_2$. Concentrating\non order-$z^4$ terms in the metric, which, according to holographic\nrenormalization \\cite{deHaro:2000xn}, are proportional to the\nenergy-momentum tensor in the boundary theory, and remembering that\nthe latter is rapidity-independent at order-$\\mu_1 \\, \\mu_2$\n\\cite{Grumiller:2008va,Albacete:2008vs}, we use energy-momentum\nconservation, $\\partial_\\mu \\, T^{\\mu\\nu} =0$, which, in particular,\nimplies that $\\partial_- \\, T^{--} + \\partial_+ \\, T^{+-} = 0$, to\nwrite\n\\begin{align}\n g^{--}_{(2)} \\, = \\, -\n \\frac{\\partial_+}{\\partial_-} \\, g^{+-}_{(2)}.\n\\end{align}\nTherefore \\eq{contr1} contains the term\n\\begin{align}\\label{contr2}\n - \\left( \\frac{\\partial_+}{\\partial_-} \\, g^{+-}_{(2)} \\right) \\\n \\partial_-^2 \\, \\phi_0,\n\\end{align}\nwhich contributes to the field $\\phi$ at order-$\\mu_1 \\, \\mu_2$, and,\nas follows from \\eq{Gr_mom}, resulting in a contribution to the\nretarded Green function in momentum space proportional to\n\\begin{align}\\label{contr3}\n G_R \\, \\sim \\, \\frac{k_1^-}{k_1^+} \\, {\\tilde g}^{+-}_{(2)} \\\n (k_2^+)^2\n\\end{align}\nwith ${\\tilde g}^{+-}$ the Fourier transform of $g^{+-}$ into momentum\nspace. Since metric component ${\\tilde g}^{+-}$ at the order-$\\mu_1 \\,\n\\mu_2$ can not be rapidity-dependent\n\\cite{Grumiller:2008va,Albacete:2008vs}, we see that \\eq{contr3} gives\n\\begin{align}\\label{contr4}\n G_R\\big|_{|\\Delta y| \\gg 1} \\, \\sim \\, e^{2 \\, (y_2 - y_1)} \\, = \\,\n e^{2 \\, \\Delta y}.\n\\end{align}\nAdding the $k_1 \\leftrightarrow k_2$ term, arising from the $g^{++}$\ncomponent of the metric in \\eq{eom}, we get\n\\begin{align}\\label{contr5}\n G_R\\big|_{|\\Delta y| \\gg 1} \\, \\sim \\, \\cosh ({2 \\, \\Delta y}).\n\\end{align}\nDefining the correlation function\n\\begin{align}\\label{corrdef}\n C (k_1, k_2) \\, \\equiv \\, \\frac{\\frac{d^6 N_{corr}}{d^2 k_1 \\, d y_1\n \\, d^2 k_2 \\, d y_2}}{\\frac{d^3 N}{d^2 k_1 \\, d y_1} \\,\n \\frac{d^3 N}{d^2 k_2 \\, d y_2}}\n\\end{align}\nand using Eqs. \\peq{contr5} and \\peq{corr7} to evaluate it we observe\nthat at large rapidity intervals it scales as\n\\begin{align}\\label{corrf}\n C (k_1, k_2) \\big|_{|\\Delta y| \\gg 1} \\, \\sim \\, \\cosh ({4 \\, \\Delta\n y}).\n\\end{align}\n\nIndeed the argument we have just presented relies on several\nassumptions: in particular it assumes that no other term in the metric\nwould cancel correlations arising from the terms we have considered.\nTo make sure that this is indeed the case we will now present the full\ncalculation. 
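As a quick aside, using \eq{corr7}, the $\cosh (4 \, \Delta y)$ behavior in
\eq{corrf} follows from \eq{contr5} simply because
$\cosh^2 (2\,\Delta y) = [1 + \cosh (4\,\Delta y)]/2$. The snippet below is a
trivial numerical check of this step (the overall normalization is not
tracked, as everywhere in this Section):
\begin{verbatim}
import numpy as np

# |G_R|^2 for G_R ~ cosh(2 dy): the ratio below tends to 1/2, i.e. the
# correlated piece scales as cosh(4 dy) up to a constant prefactor.
for dy in (0.5, 2.0, 4.0, 6.0):
    print(f"dy={dy}: cosh(2dy)^2 / cosh(4dy) = "
          f"{np.cosh(2*dy)**2/np.cosh(4*dy):.6f}")
\end{verbatim}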
The result of our simplistic argument given in\n\\eq{corrf} would still turn out to be valid at the end of this\ncalculation.\n\n\n\n\\section{Two-Point Correlation Function at Early Times}\n\\label{Correlators}\n\n\n\\subsection{Glueball correlator}\n\\label{glueball}\n\nWe now proceed to the calculation of the retarded Green function in\nthe background of the metric \\peq{2nuc_gen}, following the AdS\/CFT\nprescription outlined in Eqs. \\peq{Gr_mom}, \\peq{dil_action}, and\n\\peq{Sdiff}.\n\n\n\\subsubsection{Bulk scalar field}\n\nFirst we have to find the classical scalar field $\\phi$. Similar to\nthe way the metric \\peq{2nuc_gen} was constructed in\n\\cite{Albacete:2008vs}, we will build the scalar field $\\phi$\norder-by-order in the powers of $\\mu_1$ and $\\mu_2$, assuming $\\mu_1$\nand $\\mu_2$ are small perturbations. We would like to find the\nsolution of \\eq{eom} up to order $\\cO(\\mu_1\\mu_2)$. For this we use\nthe following expansion,\n\\begin{align}\\label{expansion}\n \\phi(x,z) = \\phi_0(x,z) + \\phi_a(x,z) + \\phi_b(x,z) + \\phi_2(x,z) +\n \\ldots \\ ,\n\\end{align}\nwhere $\\phi_0 \\sim \\cO(\\mu^0_{1,2})$, $\\phi_{a,b} \\sim \\cO(\\mu_{1,2})$\nand $\\phi_2 \\sim \\cO(\\mu_1\\mu_2)$. We will use the standard method\n(see e.g. \\cite{Mueller:2008bt,Avsar:2009xf,Kovchegov:2010uk}) and\ndemand that the boundary conditions at $z \\rightarrow 0$ are as\nfollows:\n\\begin{align}\\label{bc}\n \\phi_0(x,z \\rightarrow 0) \\, = \\, \\phi_B (x), \\, \\, \\, \\phi_a(x,z\n \\rightarrow 0) \\, = \\, \\phi_b(x,z \\rightarrow 0) \\, = \\, \\phi_2(x,z\n \\rightarrow 0) \\, = \\, \\ldots \\, = \\, 0.\n\\end{align}\nIn this case the variation of the classical action with respect to\nboundary value of the field $\\phi_B$ required in \\eq{Sdiff} is\nstraightforward.\n\n\nUsing \\eq{2nuc_gen} in \\eq{eom}, and expanding the linear operator in\nthe latter in powers of $\\mu_1$ and $\\mu_2$ up to order-$\\mu_1 \\,\n\\mu_2$ with the help of \\peq{LO} and \\peq{LOstuff}, the EOM can be\nwritten explicitly in the form\n\\begin{align}\\label{MainEOM}\n \\left[ \\Box_5 + z^4 \\, t_1 \\, \\partial^2_+ + z^4 \\, t_2 \\,\n \\partial^2_- + \\frac{1}{12} \\, z^4 \\, \\hat{M} \\right] \\, \\phi(x,z)\n = 0 \\ .\n\\end{align}\nTaking into account that $t_1 = t_1(x^-)$ and $t_2 = t_2(x^+)$, we\ngive the following list of definitions:\n\\begin{align}\n\\Box_5 & \\, \\equiv \\, -\\partial^2_z + \\frac{3}{z} \\, \\partial_z + \\Box_4 \\ ,\n\\ \\ \\ \\ \\ \\ \\ \\ \\Box_4 \\, \\equiv \\, 2 \\, \\partial_+\\partial_- -\n\\nabla_{\\bot}^2 \\ , \\ \\ \\ \\ \\ \\ \\ \\ \\frac{1}{\\partial_{\\pm}} \\equiv\n\\int^{x^{\\pm}}_{-\\infty} dx'^{\\pm} \\ ,\n\\\\ \\nonumber\n\\hat{M} & \\, \\equiv \\, \\left(\\hat{D} + z^4 \\right) \\, t_1 \\, t_2 \\,\n\\nabla_{\\bot}^2 - \\frac{\\partial_+}{\\partial_-} \\, \\hat{D} \\, t_1 \\,\nt_2 \\, \\partial_-^2 - \\frac{\\partial_-}{\\partial_+} \\, \\hat{D} \\, t_1\n\\, t_2 \\, \\partial_+^2 + 2 \\, \\left(\\hat{D} + 5 \\, z^4 \\right) \\, t_1\n\\, t_2 \\, \\partial_+ \\partial_- \\\\ \\nonumber\n&+ 5 \\, z^4 \\, t_1 \\, (\\partial_+ \\, t_2) \\, \\partial_- + 5 \\, z^4 \\,\nt_2 \\, (\\partial_- \\, t_1) \\, \\partial_+ + 10 \\, z^3 \\, t_1 \\, t_2 \\,\n\\partial_z + 2 \\, z^4 \\, t_1 \\, t_2 \\, \\partial_z^2 \\ , \\\\ \\nonumber\n\\hat{D} & \\, \\equiv \\, 96 \\, \\frac{1}{\\partial^2_+} \\,\n\\frac{1}{\\partial^2_-} + 16 \\, z^2 \\, \\frac{1}{\\partial_+} \\,\n\\frac{1}{\\partial_+} + z^4 \\ .\n\\end{align}\nSubstituting expansion (\\ref{expansion}) into (\\ref{MainEOM}), 
and\ngrouping different powers of $\\mu_1$ and $\\mu_2$ together we end up\nwith the following set of equations, listed here along with their\nboundary conditions:\n\\begin{subequations}\\label{eom3}\n\\begin{align}\n &\\Box_5 \\phi_0(x,z) = 0 \\ , \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\phi_0(x,z\\to0) = \\phi_B(x) \\ , \\label{eom0} \\\\\n &\\Box_5 \\phi_a(x,z) = - z^4 \\, t_1(x^-) \\, \\partial_+^2 \\, \\phi_0(x,z) \\ , \\ \\ \\ \\ \\ \\phi_a(x,z\\to0) = 0 \\ , \\label{eoma} \\\\\n &\\Box_5 \\phi_b(x,z) = - z^4 \\, t_2(x^+) \\, \\partial_-^2 \\, \\phi_0(x,z) \\ , \\ \\ \\ \\ \\ \\phi_b(x,z\\to0) = 0 \\ , \\label{eomb} \\\\\n &\\Box_5 \\phi_2(x,z) = - z^4 \\, t_1(x^-) \\, \\partial_+^2 \\,\n \\phi_b(x,z) - z^4 \\, t_2(x^+) \\, \\partial_-^2 \\, \\phi_a(x,z) -\n \\frac{z^4}{12} \\, \\hat{M} \\, \\phi_0(x,z) \\ , \\ \\ \\ \\ \\\n \\phi_2(x,z\\to0) = 0 \\ , \\label{eom2}\n\\end{align}\n\\end{subequations}\nwhere we also imply that all the solutions should be regular at $z\\to\n\\infty$.\n\nTo solve equations \\peq{eom3} it is convenient to introduce a Green\nfunction $G(x,z,z')$ satisfying the equation\n\\begin{align}\\label{Green1}\n \\Box_5 \\, G(x,z,z') \\, = \\, z'^3 \\, \\delta(z-z').\n\\end{align}\nThe Green function can be written as\n\\begin{align}\\label{Green2}\n G(x,z,z') \\, = \\, z^2 \\, z'^2 \\, I_2(z_< \\sqrt{\\Box_4}) \\, K_2(z_>\n \\sqrt{\\Box_4}) \\ ,\n\\end{align}\nwhere $z_{\\{<,>\\}} = {\\rm \\{min,max\\}}\\{z,z'\\}$. We can rewrite the\ninverse of $\\Box_5$ operator as\n\\begin{align}\n \\frac{1}{\\Box_5} f(x,z) \\equiv \\int^{\\infty}_0 \\frac{dz'}{z'^3} \\,\n G(x,z,z') \\, f(x,z').\n\\end{align}\nSolving the first equation in \\peq{eom3} we find\n\\begin{align}\\label{free}\n \\phi_0(x,z) = \\frac{1}{2}z^2 \\Box_4 K_2(z\\sqrt{\\Box_4})\\phi_B(x) \\ .\n\\end{align}\nFrom Eqs. \\peq{eoma}, \\peq{eomb}, and \\eq{eom2} we have\n\\begin{align}\n \\phi_a(x,z) &= - \\frac{1}{\\Box_5} \\left[z^4 \\, t_1 \\, \\partial_+^2\n \\, \\phi_0\\right] \\ , \\ \\ \\ \\ \\ \\ \\ \\\n \\phi_b(x,z) = - \\frac{1}{\\Box_5} \\left[z^4 \\, t_2 \\, \\partial_-^2 \\,\n \\phi_0\\right] \\ , \\label{solab} \\\\\n \\phi_2(x,z) &= \\frac{1}{\\Box_5} \\, z^4 \\, t_1 \\, \\partial_+^2 \\,\n \\frac{1}{\\Box_5} \\, z^4 \\, t_2 \\, \\partial_-^2 \\, \\phi_0 +\n \\frac{1}{\\Box_5} \\, z^4 \\, t_2 \\, \\partial_-^2 \\, \\frac{1}{\\Box_5}\n \\, z^4 \\, t_1 \\, \\partial_+^2 \\, \\phi_0 - \\frac{1}{\\Box_5} \\, z^4 \\,\n \\frac{\\hat{M}}{12} \\, \\phi_0 \\ . \\label{sol2}\n\\end{align}\nWe have constructed the bulk scalar field which we need to find the\ncorrelation function.\n\n\n\n\n\n\n\\subsubsection{Glueball correlation function}\n\n\nWe can now calculate the retarded glueball correlation function using\n\\eq{sol2} in Eqs. \\peq{dil_action}, \\peq{Sdiff}, and \\peq{Gr_mom}. It\nis straightforward to check that\n\\begin{align}\\label{Gexp}\n &\\left[\\frac{1}{z^3} \\, \\partial_z \\, G(x,z,z')\\right]_{z\\to0} \\, =\n \\, \\frac{1}{2} \\, z'^2 \\, \\Box_4 \\, K_2(z' \\, \\sqrt{\\Box_4}) \\ .\n\\end{align}\nUsing \\eq{Gexp}, along with Eqs. 
\\peq{dil_action}, \\peq{Sdiff}, and\n\\peq{Gr_mom}, we obtain\n\\begin{align}\n \\label{eq:G1}\n G_R (k_1, k_2) \\, = \\, \\frac{N_c^2}{16} \\,\\mu_1 \\, \\mu_2 \\,\n \\delta^{(2)}({\\un k}_{1} + {\\un k}_{2}) \\, k_1^2 \\, k_2^2 \\, \\left[\n F(k_1,k_2) + F(k_2,k_1) \\right] \\ ,\n\\end{align}\nwhere\n\\begin{align}\n \\label{eq:FAB}\n F(k_1,k_2) \\equiv & F_\\text{I} (k_1,k_2) + F_\\text{II} (k_1,k_2)\n\\end{align}\nwith\n\\begin{align}\\label{eq:FI}\n F_\\text{I} (k_1,k_2) = & \\int^{\\infty}_0 dz~z^5 \\, K_2 \\left(z \\,\n \\sqrt{k_1^2}\\right) \\,\n\\int^{\\infty}_0 dz'~z'^5 \\, K_2\\left(z' \\, \\sqrt{k_2^2} \\right) \\notag \\\\\n& \\times \\, \\left[(k_1^-k_2^+)^2\n I_2\\left(Q_1z_<\\right)K_2\\left(Q_1z_>\\right) + (k_1^+k_2^-)^2\n I_2\\left(Q_2z_<\\right)K_2\\left(Q_2z_>\\right)\\right]\n\\end{align}\nand\n\\begin{align}\\label{eq:FII}\n F_\\text{II} (k_1,k_2) = &\\frac{k^2_{2\\bot}}{12}\\int^{\\infty}_0 dz~z^5 K_2\n \\left(z \\, \\sqrt{k_1^2} \\right)\n\\left[\\frac{96}{(k_1^+k_1^-)^2} - \\frac{16 \\, z^2}{k_1^+k_1^-}\\right]\nK_2\\left(z \\, \\sqrt{k_2^2}\\right) \\notag \\\\[7pt] \\nonumber\n&-\n\\frac{1}{12}\\left[\\frac{k_1^-k_2^{+2}}{k_1^+}+\\frac{k_1^+k_2^{-2}}{k_1^-}\\right] \\,\n\\int^{\\infty}_0 dz~z^5 K_2 \\left( z \\, \\sqrt{k_1^2} \\right)\n\\left[\\frac{96}{(k_1^+k_1^-)^2} - \\frac{16 \\, z^2}{k_1^+k_1^-} + z^4 \\right]\n\\, K_2 \\left(z \\, \\sqrt{k_2^2} \\right) \\\\[7pt] \\nonumber\n&+\\frac{1}{6} \\, k_2^+k_2^-\\int^{\\infty}_0 dz~z^5 \\, K_2 \\left( z \\,\n \\sqrt{k_1^2} \\right)\n\\left[\\frac{96}{(k_1^+k_1^-)^2} - \\frac{16 \\, z^2}{k_1^+k_1^-} + 8 \\,\n z^4 \\right] K_2 \\left(z \\, \\sqrt{k_2^2} \\right) \\\\[7pt] \\nonumber &-\n\\frac{5}{12}\\left[2 \\, k_2^+k_2^- + k_2^+k_1^- +\n k_2^-k_1^+\\right]\\int^{\\infty}_0 dz~z^9 \\, K_2 \\left( z\\,\n \\sqrt{k_1^2} \\right)\nK_2 \\left(z \\, \\sqrt{k_2^2} \\right) \\\\[7pt]\n&+ \\frac{4 \\, k_2^2}{3} \\, \\int^{\\infty}_0 dz~z^8 \\, K_2 \\left( z\\,\n \\sqrt{k_1^2} \\right) \\, K_1\\left(z \\, \\sqrt{k_2^2} \\right)\\ .\n\\end{align}\nWe have defined\n\\begin{align}\n \\label{eq:Q12}\n Q_1^2 \\, = \\, 2 \\, k_1^- \\, k_2^+ + k_\\perp^2, \\ \\ \\ Q_2^2 \\, = \\, 2\n \\, k_1^+ \\, k_2^- + k_\\perp^2,\n\\end{align}\nwith $k_{1, \\perp} = k_{2, \\perp} = k_\\perp$.\n\nBefore evaluating the obtained expressions further, let us comment on\nsome of their features. First one may note that \\eq{eq:G1}\ncontains a delta-function of transverse momenta of the two glueballs\n$\\delta^{(2)}({\\un k}_{1} + {\\un k}_{2})$. This demonstrates that at\nthe lowest non-trivial order in $\\mu_1$ and $\\mu_2$ expansion\n(order-$\\mu_1 \\, \\mu_2$) there will be nothing else produced in the\nshock wave collision apart from the two glueballs. Note that indeed a\nnon-zero $\\langle T_{\\mu\\nu} \\rangle$ in the forward light-cone at the\norder-$\\mu_1 \\, \\mu_2$ found in\n\\cite{Grumiller:2008va,Albacete:2008vs} indicates that a medium is\ncreated: however this strongly-coupled medium in the ${\\cal N} =4$ SYM\ntheory without bound states and confinement does not fragment into\nindividual particles, and at late times simply results in a very low\n(and decreasing) energy density created in the collision, similar to\nthe asymptotic future of Bjorken hydrodynamics dual found in\n\\cite{Janik:2005zt}. Since in our calculation we have explicitly\nprojected out two glueballs with fixed momenta in the final state,\nthose two glueballs are all that is left carrying transverse momentum\nin the forward light-cone. 
(Leftovers of the original shock waves may\nalso be present, though they would not carry any transverse momentum.)\nThis picture is in agreement with the dominance of elastic processes\nin high energy scattering in the AdS\/CFT framework suggested in\n\\cite{Levin:2008vj}.\n\n\nAnother important aspect of the result in Eqs. \\peq{eq:FI} and\n\\peq{eq:FII} above is that the integrals over $z$ and $z'$ diverge for\ntime-like momenta $k_1$ and $k_2$, i.e., for $k_1^2 = - m^2$ and\n$k_2^2 = - m^2$ corresponding to production of physical glueballs of\nmass $m$. This result should be expected in ${\\cal N} =4$ SYM theory:\nsince there are no bound states in this theory, we conclude that there\nare no glueballs. Thinking of Bessel functions $K_2 (z\n\\sqrt{k_{1,2}^2})$ in Eqs. \\peq{eq:FI} and \\peq{eq:FII} as\ncontributing to the wave functions of glueballs in AdS$_5$ space\n\\cite{Brodsky:2003px,Polchinski:2002jw,Polchinski:2000uf}, we conclude\nthat the lack of glueball bound states in the theory manifests itself\nthrough de-localization of these wave functions, resulting in ``bound\nstates'' of infinite radii, both in the bulk and in the boundary\ntheory (if we identify the holographic coordinate $z$ with the inverse\nmomentum scale on the UV boundary). Since the glueballs for us have\nalways been some external probes of the ${\\cal N} =4$ SYM theory, we\nconclude that one has to define the probes by re-defining their\nwavefunctions. This can be accomplished, for instance, by introducing\nconfinement in the theory, by using either the ``hard-wall'' or\n``soft-wall'' models\n\\cite{Polchinski:2001tt,BoschiFilho:2002vd,Brodsky:2003px,Erlich:2005qh,DaRold:2005zs,BoschiFilho:2005yh,Grigoryan:2007vg,Karch:2006pv,Karch:2010eg,Grigoryan:2007my}.\nThe inverse confinement scale would define the typical size of the\nbound states. Indeed such procedure would introduce a model-dependent\nuncertainty associated with mimicking confinement in AdS\/CFT, but is\nunavoidable in order to define glueball probes. Besides, our main goal\nhere is to calculate long-range rapidity correlations, which are not\naffected (apart from a prefactor) by the exact shape of the glueball\nAdS$_5$ wave functions. We therefore model confinement by modeling\nthe glueball (external source) AdS wave functions by simply replacing\n$K_2 (z \\sqrt{k_{1,2}^2}) \\rightarrow K_2 (z \\, \\Lambda)$ in Eqs.\n\\peq{eq:FI} and \\peq{eq:FII} with $\\Lambda >0$ related to confinement\nmomentum scale. We then rewrite Eqs. 
\\peq{eq:FI} and \\peq{eq:FII} as\n\\begin{align}\\label{eq:FI2}\n F_\\text{I} (k_1,k_2) = & \\int^{\\infty}_0 dz~z^5 \\, K_2 \\left(z \\,\n \\Lambda \\right) \\,\n \\int^{\\infty}_0 dz'~z'^5 \\, K_2\\left(z' \\, \\Lambda \\right) \\notag \\\\\n & \\times \\, \\left[(k_1^-k_2^+)^2\n I_2\\left(Q_1z_<\\right)K_2\\left(Q_1z_>\\right) + (k_1^+k_2^-)^2\n I_2\\left(Q_2z_<\\right)K_2\\left(Q_2z_>\\right)\\right],\n\\end{align}\n\\begin{align}\\label{eq:FII2}\n F_\\text{II} (k_1,k_2) = &\\frac{k^2_{\\bot}}{12}\\int^{\\infty}_0 dz~z^5\n \\, \\left[ K_2 \\left(z \\, \\Lambda \\right) \\right]^2\n \\left[\\frac{384}{m_\\perp^4} - \\frac{32 \\, z^2}{m_\\perp^2}\\right]\n \\notag \\\\[7pt] \\nonumber\n &-\n \\frac{1}{12}\\left[\\frac{k_1^-k_2^{+2}}{k_1^+}+\\frac{k_1^+k_2^{-2}}{k_1^-}\\right]\n \\, \\int^{\\infty}_0 dz~z^5 \\, \\left[ K_2 \\left( z \\, \\Lambda \\right)\n \\right]^2\n\\left[\\frac{384}{m_\\perp^4} - \\frac{32 \\, z^2}{m_\\perp^2} + z^4\n\\right] \\\\[7pt] \\nonumber\n&+\\frac{1}{12} \\, m_\\perp^2 \\, \\int^{\\infty}_0 dz~z^5 \\, \\left[ K_2\n \\left( z \\, \\Lambda \\right) \\right]^2\n\\left[\\frac{384}{m_\\perp^4} - \\frac{32 \\, z^2}{m_\\perp^2} + 8 \\, z^4\n\\right] \\\\[7pt] \\nonumber &- \\frac{5}{12}\\left[m_\\perp^2 + k_2^+k_1^-\n + k_2^-k_1^+\\right]\\int^{\\infty}_0 dz~z^9 \\, \\left[ K_2 \\left( z\\,\n \\Lambda \\right) \\right]^2 \\\\[7pt]\n&+ \\frac{4 \\, m^2}{3} \\, \\int^{\\infty}_0 dz~z^8 \\, K_2 \\left( z\\,\n \\Lambda \\right) \\, K_1\\left(z \\, \\Lambda \\right)\\ ,\n\\end{align}\nwhere we have also replaced all rapidity-independent factors with\npowers of either glueball mass $m$ or $m_\\perp = \\sqrt{k_\\perp^2 +\n m^2}$.\n\n\nThe contributions in Eqs. \\peq{eq:FI2} and \\peq{eq:FII2} (or those in\nEqs. \\peq{eq:FI} and \\peq{eq:FII}) to the retarded Green function\n\\peq{Gr_mom} are shown diagrammatically in \\fig{graphs} in terms of\nWitten diagrams. There the wiggly lines represent gravitons, while\nthe dashed line denotes the scalar field. Crosses represent insertions\nof the boundary energy-momentum tensors of the two shock waves\n($\\mu_1$ and $\\mu_2$). $F_\\text{I}$ from \\eq{eq:FI2} corresponds to\nthe diagram on the left of \\fig{graphs}, while $F_\\text{II}$ from\n\\eq{eq:FII2} is given by the term on the right of \\fig{graphs}.\n\n\n\\begin{figure}[th]\n\\begin{center}\n\\epsfxsize=9cm\n\\leavevmode\n\\hbox{\\epsffile{graphs.eps}}\n\\end{center}\n\\caption{Diagrammatic representation of the correlation function\n calculated in this Section.}\n\\label{graphs}\n\\end{figure}\n\n\nIt is important to note that the Green function given by Eqs.\n\\peq{eq:G1}, \\peq{eq:FAB}, \\peq{eq:FI2}, and \\peq{eq:FII2} is indeed\nreal, justifying the assumption we employed in stating that \\eq{Sdiff}\nprovides us a retarded Green function. This can also be seen from the\ndiagrams in \\fig{graphs} in which one can not cut the scalar\npropagator. The imaginary part of $G_R$ appears at higher order in\n$\\mu_1 \\, \\mu_2$, when one has more graviton insertions in the scalar\npropagator, allowing for non-zero cuts of the latter.\n\n\nLet us now study the large-rapidity interval asymptotics of the\nobtained correlation function \\peq{eq:G1}. 
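The $z$-integrals that control this limit are simple moments of the modified
wave functions, $\int_0^\infty dz \, z^n \, [K_2(z\,\Lambda)]^2$. As a
numerical cross-check of the coefficient $2048/7$ appearing in \eq{FIapp2}
below, these moments can be evaluated directly; the sketch uses scipy, the
value of $\Lambda$ is an arbitrary illustrative choice, and only the
dimensionless combinations are quoted:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

Lam = 1.3   # illustrative confinement scale

def moment(n):
    # int_0^infty dz z^n [K_2(z Lam)]^2, made dimensionless with Lam^(n+1)
    val, _ = quad(lambda z: z**n * kv(2, z*Lam)**2, 0.0, np.inf, limit=200)
    return val * Lam**(n + 1)

print(moment(9), 2048/7)      # n = 9 moment vs. the coefficient in (FIapp2)
print(moment(5), moment(7))   # lower moments entering F_II
\end{verbatim}
The $n=9$ moment reproduces $2048/7 \approx 292.6$, which is precisely the
coefficient appearing in \eq{FIapp2}.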
One can deduce from the\nkinematics described in Section \\ref{kine} that\n\\begin{align}\n &k^+_1k^-_2 = \\frac{m^2_{\\bot}}{2} \\, e^{-\\Delta y} \\ , \\ \\ \\ \\ \\\n k^-_1k^+_2 = \\frac{m^2_{\\bot}}{2} \\, e^{\\Delta y} \\ ,\n\\end{align}\nsuch that when $\\Delta y = y_2 - y_1 \\gg 1$ we have\n\\begin{align}\n Q^2_1 = k^2_{\\bot} + m_{\\bot}^2 \\, e^{\\Delta y} \\approx m^2_{\\bot}\n \\, e^{\\Delta y} \\ , \\ \\ \\ Q^2_2 = k^2_{\\bot} + 2 \\, m_{\\bot}^2 \\,\n e^{-\\Delta y} \\approx k^2_{\\bot}.\n\\end{align}\nTherefore, the contribution from \\eq{eq:FI2} becomes\n\\begin{align}\\label{FIapp1}\n F_I (k_1,k_2) \\big|_{\\Delta y \\gg 1} \\, \\approx \\, \\int^{\\infty}_0\n dz~z^5 \\, K_2 \\left(z \\, \\Lambda \\right) \\, \\int^{\\infty}_0 dz'~z'^5\n \\, K_2\\left(z' \\, \\Lambda \\right) \\, (k_1^-k_2^+)^2\n I_2\\left(Q_1z_<\\right) \\, K_2 \\left(Q_1z_>\\right).\n\\end{align}\nTo determine the large-$Q_1$ asymptotics of $I_2\\left(Q_1z_<\\right) \\,\nK_2 \\left(Q_1z_>\\right)$ note that, according to Eqs. \\peq{Green1} and\n\\peq{Green2}, $z^2 \\, z'^2 \\, I_2\\left(Q_1z_<\\right) \\, K_2\n\\left(Q_1z_>\\right)$ satisfies\n\\begin{align}\n \\label{eq:IK}\n \\left[ - \\partial_z^2 + \\frac{3}{z} \\, \\partial_z + Q_1^2 \\right] \\,\n z^2 \\, z'^2 \\, I_2\\left(Q_1z_<\\right) \\, K_2 \\left(Q_1z_>\\right) \\,\n = \\, z'^3 \\, \\delta (z - z').\n\\end{align}\nHence, for $Q_1$ larger than the inverse of the typical variation in\n$z$ we have\n\\begin{align}\n \\label{eq:IK2}\n z^2 \\, z'^2 \\, I_2\\left(Q_1z_<\\right) \\, K_2 \\left(Q_1z_>\\right)\n \\bigg|_{\\text{large} \\, Q_1} \\, \\approx \\, \\frac{z'^3}{Q_1^2} \\, \\delta\n (z - z'),\n\\end{align}\nwhich, when used in \\eq{FIapp1} yields\n\\begin{align}\n \\label{FIapp2}\n F_I (k_1,k_2) \\big|_{\\Delta y \\gg 1} \\, \\approx \\, \\frac{2048}{7} \\,\n \\frac{(k_1^-k_2^+)^2}{Q_1^2 \\, \\Lambda^{10}} \\, \\approx \\,\n \\frac{512}{7} \\, \\frac{m_\\perp^2}{\\Lambda^{10}} \\, e^{\\Delta y}.\n\\end{align}\nThis result implies that the rapidity correlations coming from this\nterm grow as $e^{\\Delta y}$ at the early stages after the collision.\n\n\nOn the other hand, the dominant contributions from the second term,\n$F_\\text{II} (k_1,k_2)$, are coming from the expressions in the second\nand the fourth lines of \\eq{eq:FII2}. They give\n\\begin{align}\\label{FIIapp}\n F_\\text{II} (k_1,k_2) \\big|_{\\Delta y \\gg 1} &\\approx -\n \\frac{1}{12}\\left[\\frac{k_1^-k_2^{+2}}{k_1^+}+\\frac{k_1^+k_2^{-2}}{k_1^-}\\right]\n \\, \\int^{\\infty}_0 dz~z^5 \\, \\left[ K_2 \\left( z \\, \\Lambda \\right)\n \\right]^2 \\left[\\frac{384}{m_\\perp^4} - \\frac{32 \\, z^2}{m_\\perp^2}\n + z^4 \\right] \\\\ \\nonumber &- \\frac{5}{12} \\, \\left(k_2^+k_1^- +\n k_1^+k_2^-\\right) \\, \\int^{\\infty}_0 dz~z^9 \\, \\left[ K_2 \\left( z\n \\, \\Lambda \\right) \\right]^2 \\\\ \\nonumber & \\approx -\n \\frac{256}{21} \\, \\frac{m^2_{\\bot}}{\\Lambda^{10}} \\, e^{2 \\, \\Delta\n y} \\, \\left[ 1 - 3 \\, \\frac{\\Lambda^2}{m_\\perp^2} + \\frac{42}{5}\n \\, \\frac{\\Lambda^4}{m_\\perp^4} \\right] - \\frac{1280}{21} \\,\n \\frac{m^2_{\\bot}}{\\Lambda^{10}} \\, e^{\\Delta y}.\n\\end{align}\n\n\nCombining Eqs. \\peq{FIapp2} and \\peq{FIIapp} in Eqs. 
\\peq{eq:G1} and\n\\peq{eq:FAB} we obtain\n\\begin{align}\n \\label{GRlargey}\n G_R (k_1, k_2)\\big|_{\\Delta y \\gg 1} \\, \\approx \\, - \\frac{64}{21}\n \\, \\frac{N_c^2 \\,\\mu_1 \\, \\mu_2 \\, m^4 \\, m_\\perp^2}{\\Lambda^{10}}\n \\, \\delta^{(2)}({\\un k}_{1} + {\\un k}_{2}) \\, \\left\\{ e^{2 \\, \\Delta\n y} \\, \\left[ 1 - 3 \\, \\frac{\\Lambda^2}{m_\\perp^2} + \\frac{42}{5}\n \\, \\frac{\\Lambda^4}{m_\\perp^4} \\right] + e^{\\Delta y} \\right\\},\n\\end{align}\nwhich, dropping the second term in the curly brackets and using the $+\n\\leftrightarrow -$ symmetry of the problem can be generalized to\n\\begin{align}\n \\label{GR_gen}\n G_R (k_1, k_2)\\big|_{|\\Delta y| \\gg 1} \\, \\approx \\, & -\n \\frac{128}{21} \\, \\frac{N_c^2 \\,\\mu_1 \\, \\mu_2 \\, m^4 \\,\n m_\\perp^2}{\\Lambda^{10}} \\, \\delta^{(2)}({\\un k}_{1} + {\\un\n k}_{2}) \\, \\cosh ({2 \\, \\Delta y}) \\, \\left[ 1 - 3 \\,\n \\frac{\\Lambda^2}{m_\\perp^2} + \\frac{42}{5} \\,\n \\frac{\\Lambda^4}{m_\\perp^4} \\right].\n\\end{align}\nWe thus conclude that at large rapidity separations\n\\begin{align}\n \\label{GRlarge}\n G_R (k_1, k_2)\\big|_{|\\Delta y| \\gg 1} \\, \\sim \\, \\cosh ({2 \\,\n \\Delta y})\n\\end{align}\nin agreement with our estimate in \\eq{contr5}.\n\nUsing \\eq{corr7} we conclude that\n\\begin{align}\n \\frac{d^6 N_{corr}}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2}\n \\Bigg|_{|\\Delta y| \\gg 1} \\, \\sim \\, \\cosh ({4 \\, \\Delta y})\n\\end{align}\nsuch that the two-glueball correlation function defined in\n\\eq{corrdef} scales as\n\\begin{align}\\label{Corr_fin}\n C (k_1, k_2) \\big|_{|\\Delta y| \\gg 1} \\, \\sim \\, \\cosh ({4 \\, \\Delta\n y}),\n\\end{align}\njust like in \\eq{corrf}. We have demonstrated the presence of\nlong-range rapidity correlations in case of strongly-coupled\nhigh-energy heavy ion collisions. The rapidity shape of the obtained\ncorrelations is very different from the ``ridge'' correlation observed\nexperimentally at RHIC and at LHC\n\\cite{Adams:2005ph,Adare:2008cqb,Alver:2009id,Khachatryan:2010gv}. It\nis possible that higher order in $\\mu_1$ and $\\mu_2$ corrections would\nmodify the rapidity shape of the correlation, putting it more in-line\nwith experiments. We will return to this point in Sec. \\ref{sum}.\n\n\nLet us now pause to determine the parameter of our approximation.\nUntil now we have, somewhat loosely, referred to our approximation as\nto an expansion in $\\mu_1$ and $\\mu_2$. However, these parameters have\ndimensions of mass cubed, and can not be expanded in. From \\eq{GR_gen}\nwe may suggest that the dimensionless expansion parameters are $\\mu_1\n\/ \\Lambda^3$ and $\\mu_2 \/ \\Lambda^3$, where $\\Lambda$ is the inverse\nglueball size. Thus our result in \\eq{GR_gen} dominates the\ncorrelation function only for\n\\begin{align}\n \\label{conds}\n \\frac{\\mu_1}{\\Lambda^3} \\ll 1, \\ \\ \\ \\frac{\\mu_2}{\\Lambda^3} \\ll 1.\n\\end{align}\nSince, as can be seen from \\eq{mus}, $\\mu_1$ and $\\mu_2$ are\nenergy-dependent, these conditions limit the energy range of\napplicability of \\eq{GR_gen}. \\eq{conds} also makes clear physical\nsense: since the metric \\peq{2nuc_gen} with the coefficients given by\nEqs. 
\\peq{LO} and \\eq{LOstuff} is valid only for early proper times\n$\\tau$ satisfying $\\mu_{1,2} \\, \\tau^3 \\ll 1$\n\\cite{Albacete:2008vs,Albacete:2009ji}, we see that the glueballs have\nto be small enough, $1\/\\Lambda \\approx \\tau \\approx \\mu_{1,2}^{-1\/3}$,\nto be able to resolve (and be sensitive to) the metric at such early\ntimes.\n\nNote also that the obtained Green function \\peq{GR_gen} is not a\nmonotonic function of $m_\\perp$: for $m_\\perp \\ll \\Lambda$ it grows\nwith $m_\\perp$ as $m_\\perp^2$, but, for $m_\\perp \\gg \\Lambda$ it falls\noff as $1\/m_\\perp^2$, peaking at $m_\\perp^2 = (28\/5) \\, \\Lambda^2$.\nThis translates into correlation function $C (k_1, k_2)$ first growing\nwith $m_\\perp$ (and, therefore, $k_\\perp$) as $m_\\perp^4$ for $m_\\perp\n\\ll \\Lambda$, and then decreasing as $1\/m_\\perp^4$ for $m_\\perp \\ll\n\\Lambda$. Similar non-monotonic behavior has been observed for\n``ridge'' correlation experimentally\n\\cite{Adams:2005ph,Adare:2008cqb,Alver:2009id,Khachatryan:2010gv}.\nWhile in CGC-based approaches\n\\cite{Dumitru:2008wn,Gavin:2008ev,Dusling:2009ni,Dumitru:2010iy,Dumitru:2010mv,Kovner:2010xk}\nthe maximum of the correlation function is given by the saturation\nscale $Q_s$, and happens at $k_\\perp \\approx Q_s$, in our AdS\/CFT case\nthe maximum appears to be related to the inverse size of the produced\nbound state and its mass, such that it takes place at $k_\\perp \\approx\n\\sqrt{\\Lambda^2 - m^2}$. At this point it is not clear though whether\nsuch conclusion is a physical prediction or an artifact of the\nperturbative solution of the problem in the AdS space.\n\n\nIn order to make a more detailed comparison with experiment one needs\nto improve on our AdS\/CFT approach both by calculating higher-order\ncorrections in $\\mu_1$ and $\\mu_2$, and, possibly, by implementing\nnon-conformal QCD features, such as confinement, along the lines of\nthe AdS\/QCD models\n\\cite{Polchinski:2001tt,BoschiFilho:2002vd,Brodsky:2003px,Erlich:2005qh,DaRold:2005zs,BoschiFilho:2005yh,Grigoryan:2007vg,Karch:2006pv,Karch:2010eg,Grigoryan:2007my}.\nThe latter modification would certainly change our glueball wave\nfunctions in the bulk, modifying the Bessel functions in\nEqs.~\\peq{eq:FI2} and \\peq{eq:FII2}. However, while the use of AdS\/QCD\ngeometry may affect the $m_\\perp$-dependence of the correlation\nfunction \\peq{GR_gen}, one may see from Eqs.~\\peq{eq:FI2} and\n\\peq{eq:FII2} that such modification would not affect our main\nconclusion about the rapidity-dependence of the correlations shown in\n\\eq{Corr_fin}. The leading large-rapidity asymptotics of the\ncorrelation function \\peq{Corr_fin} results from the second term on\nthe right-hand-side of \\eq{eq:FII2}: modifying the glueball wave\nfunction would only change the coefficient in front of the\nrapidity-dependent part.\\footnote{As the integrand in that term is\n positive-definite for any glueball wave function, the coefficient\n can not vanish.} Since the growth of correlations with rapidity does\nnot reproduce experimental data\n\\cite{Adams:2005ph,Adare:2008cqb,Alver:2009id,Khachatryan:2010gv}, our\nconclusion is that the inclusion of higher-order corrections in\n$\\mu_1$ and $\\mu_2$ is the only possibility for AdS\/CFT (or AdS\/QCD)\ncalculations to get in line with the data.\n\n\n\n\\subsection{Energy-momentum tensor correlator}\n\nWe have shown that there are long-range rapidity correlations in the\nglueball operator of \\eq{Jdef} in the strong-coupling heavy ion\ncollisions. 
At the same time we would like to extend this statement to\ncorrelations of other operators. Energy-momentum tensor is a natural\nnext candidate. Indeed the glueball operator \\peq{Jdef} is a part of\nthe energy-momentum tensor: hence correlations in $\\langle J(x) \\,\nJ(y) \\rangle$ probably imply correlations in $\\langle T_{\\mu\\nu} (x)\n\\, T_{\\mu\\nu} (y) \\rangle$ as well. To show this is true we will\npresent an argument below, largely following\n\\cite{Policastro:2002se,Kovtun:2004de}.\n\n\nConsider a field theory whose dual holographic description is given\nby the metric of the general form\n\\begin{align}\\label{genmet}\n ds^2 &= g^{(0)}_{MN} \\, dx^M \\, dx^N = f(x^+,x^-,z) \\, dx^2_{\\bot} +\n g_{\\mu\\nu}(x^+,x^-,z) \\, d\\xi^{\\mu} \\, d\\xi^{\\nu} \\ ,\n\\end{align}\nwhere ${\\un x} = (x^1, x^2)$,\n$d x_\\perp^2 = (d x^1)^2 + (d x^2)^2$\nand $\\xi^{\\mu} = (x^+,x^-,z)$. Now, consider small perturbations\naround the metric independent of $x^1, x^2$, $g_{MN}= g_{MN}^{(0)} +\nh_{MN} (x^+,x^-,z)$. We will work in the $h_{Mz} =0$ gauge. The metric\n\\peq{genmet} has a rotational $O$(2) symmetry in the transverse plane.\nUnder the transverse rotations one may naively expect $\\{ h_{11},\nh_{12}, h_{22} \\}$ components to transform as tensors, $\\{ h_{01},\nh_{31}, h_{02}, h_{32} \\}$ components to transform as vectors, and $\\{\nh_{00}, h_{03}, h_{33} \\}$ components to be scalars under rotations.\nHowever, rewriting the transverse part of the metric as\n\\begin{align}\n \\label{trmet}\n \\left(\n \\begin{array}{cc}\n h_{11} & h_{12} \\\\\n h_{21} & h_{22}\n \\end{array}\n\\right) \\, = \\,\n\\left(\n \\begin{array}{cc}\n (h_{11} + h_{22})\/2 & 0 \\\\\n 0 & (h_{11} + h_{22})\/2\n \\end{array}\n\\right) +\n\\left(\n \\begin{array}{cc}\n (h_{11} - h_{22})\/2 & h_{12} \\\\\n h_{21} & - (h_{11} - h_{22})\/2\n \\end{array}\n\\right)\n\\end{align}\nwe see that $h_{11} + h_{22}$ is also invariant under $O$(2)\ntransverse plane rotations. Hence the final classification of the\nmetric components under $O$(2) rotations is: $\\{ h_{11} - h_{22},\nh_{12} \\}$ are in the tensor representation, $\\{ h_{01}, h_{31},\nh_{02}, h_{32} \\}$ are vectors, and $\\{ h_{00}, h_{03}, h_{33}, h_{11}\n+ h_{22} \\}$ are scalars \\cite{Policastro:2002se,Kovtun:2004de}.\n\n\nUsing the above classification we see that we can assume that the only\nnon-vanishing component of $h_{MN}$ is $h_{12} = h_{21} =\nh_{12}(x^+,x^-,z)$. It is in the tensor representation and, as can be\nseen with the help of \\eq{trmet}, by rotating in the transverse plane\nwe can always find a coordinate system in which $h_{11} - h_{22} =0$\nand $h_{12} = h_{21}$ remains the only non-zero metric component in\nthe tensor representation. 
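This counting is easy to verify numerically (a small sketch with arbitrary
illustrative entries for the transverse block $h_{ij}$, not a statement about
the actual solution): under a transverse rotation the combination
$h_{11}+h_{22}$ of \eq{trmet} is unchanged, while $(h_{11}-h_{22},\,2h_{12})$
rotate into each other, so a suitably chosen rotation angle indeed removes
$h_{11}-h_{22}$.
\begin{verbatim}
import numpy as np

# Arbitrary illustrative transverse block of the metric perturbation.
h = np.array([[0.7, 0.3],
              [0.3, -0.2]])

def rotate(h, theta):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ h @ R.T

hp = rotate(h, 0.37)
print(np.trace(hp), np.trace(h))        # trace (scalar part) is invariant

# Rotation angle chosen so that h11 - h22 vanishes, leaving only h12
# in the tensor (spin-2) representation.
theta0 = 0.5*np.arctan2(h[0, 0] - h[1, 1], 2*h[0, 1])
h0 = rotate(h, theta0)
print(h0[0, 0] - h0[1, 1], h0[0, 1])    # ~0 and nonzero
\end{verbatim}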
Since all other components of the metric\nare in other representations of the $O$(2) symmetry group, they do not\nmix with $h_{12}$ in Einstein equations, and can be safely put to zero\n\\cite{Policastro:2002se,Kovtun:2004de}.\n\n\nSubstituting the metric $g_{MN}= g_{MN}^{(0)} + h_{MN} (x^+,x^-,z)$\nwith $g_{MN}^{(0)}$ given by \\peq{genmet} into Einstein equations\n\\peq{ein}, and expanding the result to linear order in $h_{12}$ we get\n\\cite{Policastro:2002se,Kovtun:2004de}\n\\begin{align}\n \\Box \\, h_{12} - 2 \\, \\frac{\\partial^{\\mu} f}{f} \\, \\partial_{\\mu}\n h_{12} + 2 \\, \\frac{(\\partial f)^2}{f^2} \\, h_{12} - \\frac{\\Box\n f}{f} \\, h_{12} = 0\\, ,\n\\label{minscal}\n\\end{align}\nwhere\n\\begin{align}\n \\label{box3}\n \\Box \\, = \\, \\frac{1}{\\sqrt{-g}} \\, \\partial_M \\left[ \\sqrt{-g} \\, g^{MN} \\, \\partial_N \\ldots \\right]\n \n\\end{align}\nand $(\\partial f)^2 = g^{MN} \\, \\partial_M f \\, \\partial_N f$.\nChanging the variable from $h_{12}$ to $h^1_2=h_{12}\/f$, one can see\nthat $h^1_2$ indeed satisfies the equation for a minimally coupled\nmassless scalar \\cite{Policastro:2002se,Kovtun:2004de}:\n\\begin{align}\n \\Box \\, h^1_2 = 0.\n\\end{align}\nTherefore, since our metric \\peq{2nuc_gen} falls into the category of\n\\eq{genmet}, the analysis of Sec \\ref{glueball} applies to the metric\ncomponent $h_2^1$. Defining the retarded Green function for the\n$T_2^1$ components of the energy-momentum tensor (EMT) by\n\\begin{align}\n \\label{Gr_ten}\n G^{EMT}_R (k_1, k_2) \\, = \\, - i \\, \\int d^4 x_1 \\, d^4 x_2 \\, e^{-\n i \\, k_1 \\cdot x_1 - i \\, k_2 \\cdot x_2} \\, \\theta (x_1^0 - x_2^0)\n \\, \\langle A_1, A_2 | \\, \\left[ T_2^1 (x_1) , T_2^1 (x_2) \\right] |\n A_1, A_2 \\rangle\n\\end{align}\nwe conclude that, similar to the glueball operator,\n\\begin{align}\n \\label{GRlargeEMT}\n G^{EMT}_R (k_1, k_2)\\big|_{|\\Delta y| \\gg 1} \\, \\sim \\, \\cosh ({2 \\,\n \\Delta y}).\n\\end{align}\nHence we have shown that the correlators of EMT operators exhibit the\nsame long-range rapidity correlations as the glueball correlators. It\nis therefore very likely that such correlations are universal and are\nalso present in correlators of other operators.\n\n\n\n\n\\section{Estimate of the Two-Point Correlation Function at Late Times}\n\\label{late-times}\n\n\nOur conclusion about long-range rapidity correlations was derived\nusing the metric \\peq{2nuc_gen} which is valid only at very early\ntimes after a shock wave collision. As discussed in the Introduction,\nwe do not expect the interactions at later times to affect these\ncorrelations, since different-rapidity regions of the produced medium\nbecome causally disconnected at late times. To check that no\nlong-range rapidity correlations can arise from the late-time dynamics\none would have to calculate the correlation function \\peq{corrdef} in\nthe full metric produced in a shock wave collision including all\npowers of $\\mu_1$ and $\\mu_2$. Since no such analytical solution\nexists, instead we will use the metric dual to Bjorken hydrodynamics\n\\cite{Bjorken:1982qr} constructed in \\cite{Janik:2005zt}. One has to\nbe careful in interpreting the result we obtain in this Section:\nBjorken hydrodynamics \\cite{Bjorken:1982qr} is rapidity-independent,\nwhile there are reasons to believe that the medium produced in a shock\nwave collision would exhibit rapidity dependence, as indicated by\nperturbative solutions of Einstein equations done in\n\\cite{Grumiller:2008va,Albacete:2008vs,Albacete:2009ji}. 
Nonetheless,\nwe expect that our calculation below would be a good initial estimate\nof the late-time rapidity correlations.\n\n\nThe dual geometry corresponding to the perfect fluid was obtained by\nJanik and Peschanski in \\cite{Janik:2005zt}. It can be written as\n\\begin{align}\\label{JPmetric}\n ds^2 = L^2 \\, \\left\\{ -\\frac{1}{z^2}\\frac{\\left(1 -\n z^4\/z_h^4(\\tau)\\right)^2}{1 + z^4\/z_h^4(\\tau)}d\\tau^2 +\n \\frac{\\left(1 + z^4\/z_h^4(\\tau)\\right)}{z^2}\\left(\\tau^2 d\\eta^2 +\n dx_{\\bot}^2\\right) + \\frac{dz^2}{z^2} \\right\\} \\ ,\n\\end{align}\nwhere $\\tau = \\sqrt{2x^+x^-}$ is proper time, $\\eta =\n\\frac{1}{2}\\ln(x^+\/x^-)$ is space-time rapidity, and $z_h(\\tau) =\n\\left(\\frac{3}{\\cE_0}\\right)^{1\/4}\\tau^{1\/3}$ (with $\\cE_0$ some\ndimensionful quantity) determines the position of the dynamical\nhorizon in AdS$_5$ such that the Hawking temperature is\n\\begin{align}\nT(\\tau) = \\frac{\\sqrt{2}}{\\pi z_h(\\tau)} =\n\\frac{\\sqrt{2}}{\\pi}\\left(\\frac{\\cE_0}{3}\\right)^{1\/4}~\\tau^{-1\/3} \\\n.\n\\end{align}\n\n\nUnfortunately finding the glueball correlation function in Bjorken\nhydrodynamic state is equivalent to finding boundary-to-boundary\nscalar propagator in the background of the Janik-Peschanski metric\n\\peq{JPmetric}, which is a daunting task: such propagator has not yet\nbeen found even for the static AdS Schwarzschild black hole metric.\nInstead, to estimate the correlations we will perform a perturbative\ncalculation.\n\nAt late times, when $\\tau \\gg \\cE_0^{-3\/8}$, assuming either that $z$\nis fixed or is bounded from the above (by let us say an infrared (IR)\ncutoff coming from the definition of the glueball wave function), we\ncan consider the ratio $u(\\tau) \\equiv z\/z_{h}(\\tau) \\ll 1 $ to be a\nsmall quantity. If so, we can expand the EOM for the scalar field\n\\peq{eom} up to $\\cO(u^4)$ obtaining\n\\begin{align}\\label{EOMJP}\n &\\Box_5 \\phi(\\tau, \\eta, x_{\\bot}, z) + u^4 \\, \\left[4 \\,\n \\partial^2_{\\tau}\n - \\Box_4 \\right] \\, \\phi(\\tau, \\eta, x_{\\bot}, z) = 0 \\ , \\\\\n \\nonumber\n&\\Box_5\\phi \\equiv -z^3 \\partial_z\\left(\\frac{1}{z^3}\\partial_z\n\\phi\\right) + \\Box_4 \\phi \\ , \\ \\ \\ \\ \\ \\\n\\Box_4\\phi \\equiv \\frac{1}{\\tau}\\partial_{\\tau}\\left(\\tau\n \\partial_{\\tau}\\phi\\right) - \\frac{1}{\\tau^2}\\partial^2_{\\eta}\\phi -\n\\nabla^2_{\\bot}\\phi = \\left(2\\partial_+\\partial_- -\n \\nabla^2_{\\bot}\\right)\\phi \\ .\n\\end{align}\n\n\nExpanding the scalar field in the powers of $u$ we write\n\\begin{align}\n \\label{eq:phi_exp}\n \\phi = \\phi_0 + \\phi_1 + \\ldots\n\\end{align}\nwhere $\\phi_0 \\sim \\cO\\left(u^0 \\right)$ and $\\phi_1 \\sim\n\\cO\\left(u^{4}\\right)$. 
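For orientation, the smallness of the expansion parameter for the metric
\peq{JPmetric} can be made quantitative with a rough numerical sketch; the
values of $\cE_0$ and of the fixed $z$ below are illustrative placeholders,
not an attempt to model realistic scales:
\begin{verbatim}
import numpy as np

E0, z = 2.0, 0.8   # illustrative energy-scale parameter and fixed z ~ 1/Lambda

def z_h(tau):
    # position of the dynamical horizon, z_h = (3/E0)^(1/4) tau^(1/3)
    return (3.0/E0)**0.25 * tau**(1.0/3.0)

for tau in (1.0, 10.0, 100.0, 1000.0):
    u = z/z_h(tau)
    T = np.sqrt(2.0)/(np.pi*z_h(tau))
    print(f"tau={tau:7g}: z_h={z_h(tau):6.3f}  T={T:6.4f}  u^4={u**4:.2e}")
\end{verbatim}
The temperature falls off as $\tau^{-1/3}$, while $u^4$, which controls the
size of the correction $\phi_1$ relative to $\phi_0$, falls off as
$\tau^{-4/3}$, so truncating the expansion \peq{eq:phi_exp} at
$\phi_0 + \phi_1$ is self-consistent at late times.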
Substituting this back into Eq.~(\\ref{EOMJP}),\nwe get\n\\begin{align}\\label{EOMJP2}\n &\\Box_5 \\, \\phi_0 = 0 \\ , \\ \\ \\ \\ \\ \\ \\ \\Box_5 \\, \\phi_1 = -\n \\frac{\\cE_0}{3} \\, \\frac{z^4}{\\tau^{4\/3}} \\, \\left[4 \\,\n \\partial^2_{\\tau} - \\Box_4\\right] \\, \\phi_0 \\ .\n\\end{align}\nThe solution for $\\phi_0$ was found above and is given in \\eq{free}.\nWe write the solution for $\\phi_1$ as\n\\begin{align}\n \\label{phi1}\n \\phi_1 = - \\frac{\\cE_0}{3} \\, \\frac{1}{\\Box_5} \\,\n \\frac{z^4}{\\tau^{4\/3}} \\, \\left[4 \\, \\partial^2_{\\tau} - \\Box_4\n \\right] \\, \\phi_0 \\, \\approx \\, \\frac{\\cE_0}{3} \\, \\frac{1}{\\Box_5}\n \\, \\frac{z^4}{\\tau^{4\/3}} \\, \\Box_4 \\, \\phi_0\n\\end{align}\nwhere in the last step we neglected $\\partial^2_{\\tau}$, since a\nderivative like this generates $O (1\/\\tau^2)$ corrections (at fixed\n$u$), which were neglected in constructing the original metric\n\\peq{JPmetric} and are thus outside of the precision of our\napproximation. We are now ready to calculate the retarded Green\nfunction. Using \\eq{phi1} in Eqs. \\peq{Sdiff}, \\peq{dil_action}, and\n\\peq{Gr_mom}, and employing \\eq{Gexp} yields\n\\begin{align}\n \\label{GrBj1}\n G_R^{Bj} (k_1, k_2)\\big|_{O(1\/z_h^4)} \\, = \\, - \\frac{N_c^2 \\, \\cE_0\n \\, m^6}{24} \\, \\delta^2 ({\\un k}_1 + {\\un k}_2) \\, \\int^{\\infty}_0\n dz~z^5 \\, K_2 \\left( z \\, \\sqrt{k_1^2} \\right) \\, K_2 \\left( z \\,\n \\sqrt{k_2^2} \\right) \\notag \\\\ \\times \\, \\int_0^\\infty d x^+ \\, d\n x^- \\, e^{i \\, x^+ \\, (k_1^- + k_2^-) + i \\, x^- \\, (k_1^+ + k_2^+)}\n \\, \\frac{1}{\\tau^{4\/3}}\n\\end{align}\nwhere we have replaced $k_1^2$ and $k_2^2$ with $-m^2$ everywhere\nexcept for the arguments of the Bessel functions. The integrals over\n$x^+$ and $x^-$ in \\eq{GrBj1} run from $0$ to $\\infty$ since the\nmatter only exists in the forward light-cone. (On top of that the\nmetric \\peq{JPmetric} is valid at late times only, for $u \\ll 1$, such\nthat the actual $x^+$ and $x^-$ integration region should be even more\nrestricted, possibly suppressing the correlations we are about to\nobtain even more.)\n\nJust like in the case of the early times considered in Sec.\n\\ref{glueball}, the integral over $z$ in \\eq{GrBj1} is divergent for\ntime-like momenta $k_1$ and $k_2$. Similar to what we did in Sec.\n\\ref{glueball}, we recognize the Bessel functions in \\eq{GrBj1} as the\nglueball wave functions in the bulk, which need to be modified to\nreflect the finite size of glueballs, which do not exist in ${\\cal N}\n=4$ SYM theory. 
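The $z$-integration performed in the next step involves a single moment of
the modified wave functions, $\int_0^\infty dz \, z^5 \, [K_2(z\,\Lambda)]^2$.
Evaluating it numerically (a minimal sketch with an arbitrary illustrative
$\Lambda$) one can check that it is consistent with the change of the
prefactor from $1/24$ in \eq{GrBj1} to $4/15$ in \eq{GrBj2} below:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

Lam = 1.3   # illustrative confinement scale

val, _ = quad(lambda z: z**5 * kv(2, z*Lam)**2, 0.0, np.inf, limit=200)
print(val * Lam**6)               # dimensionless moment
print(val * Lam**6 / 24, 4/15)    # both ~ 0.2667: prefactor 1/24 -> 4/15
\end{verbatim}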
Replacing $K_2 (z \\sqrt{k_{1,2}^2}) \\rightarrow K_2 (z\n\\, \\Lambda)$ in \\eq{GrBj1} and integrating over $z$ yields\n\\begin{align}\n \\label{GrBj2}\n G_R^{Bj} (k_1, k_2)\\big|_{O(1\/z_h^4)} \\, = \\, - \\frac{4 \\, N_c^2 \\,\n \\cE_0 \\, m^6}{15 \\, \\Lambda^6} \\, \\delta^2 ({\\un k}_1 + {\\un k}_2)\n \\, \\int_0^\\infty d x^+ \\, d x^- \\, e^{i \\, x^+ \\, (k_1^- + k_2^-) +\n i \\, x^- \\, (k_1^+ + k_2^+)} \\, \\frac{1}{\\tau^{4\/3}}.\n\\end{align}\n\n\n\nEvaluating the integrals left in \\eq{GrBj2},\n\\begin{align}\n \\int_0^\\infty d x^+ \\, d x^- \\, e^{i \\, x^+ \\, (k_1^- + k_2^-) + i\n \\, x^- \\, (k_1^+ + k_2^+)} \\, \\frac{1}{(2 \\, x^+ \\, x^-)^{2\/3}} \\,\n = \\, \\frac{N}{(k_1^+ + k_2^+)^{1\/3}(k_1^- + k_2^-)^{1\/3}} \\ ,\n\\end{align}\nwhere\n\\begin{align}\n N \\, = \\, \\frac{\\Gamma^2 \\left( \\frac{1}{3} \\right) \\, e^{i \\, \\pi\n \/3}}{2^{2\/3}},\n\\end{align}\nwe obtain\n\\begin{align}\n \\label{GrBj3}\n G_R^{Bj} (k_1, k_2)\\big|_{O(1\/z_h^4)} \\, = \\, - \\frac{4 \\, N_c^2 \\,\n \\cE_0 \\, m^6}{15 \\, \\Lambda^6} \\, \\delta^2 ({\\un k}_1 + {\\un k}_2)\n \\, \\frac{N}{m^{2\/3}_{\\bot} \\, (1+ \\cosh \\Delta y)^{1\/3}}.\n\\end{align}\nThe corresponding two-glueball correlation function scales as\n\\begin{align}\\label{CBj}\n C^{Bj} (k_1, k_2)\\big|_{|\\Delta y| \\gg 1} \\sim\n \\frac{1}{m^{4\/3}_{\\bot} \\, (\\cosh \\Delta y)^{2\/3}}.\n\\end{align}\nWe conclude that rapidity correlations coming from the AdS dual of\nBjorken hydrodynamics are suppressed at large rapidity interval, at\nleast in the perturbative estimate we have performed. This result\nappears to agree with the causality argument\n\\cite{Gavin:2008ev,Dumitru:2008wn} making appearance of long-range\nrapidity correlations unlikely at late times. Moreover, the locality\nof $C^{Bj}$ in rapidity suggests that late-time dynamics is not likely\nto affect long-range rapidity correlations coming from the early\nstages of the collision: hydrodynamic evolution can not ``wash out''\nsuch long-range rapidity correlations.\n\nNote that the complete momentum space two-glueball correlation\nfunction receives contributions from all regions of coordinate space,\ni.e., from all $x_1$ and $x_2$. In Sec. \\ref{Correlators} we have\ncalculated the contribution arising from early proper times, while\nhere we have estimated the late-time contribution. One may expect that\nin the complete result the two contributions coming from different\nintegration regions would simply add together: in such case clearly\nthe early-time contribution in \\eq{Corr_fin} would dominate for large\nrapidity intervals, leading to long-range rapidity correlations\narising in the collision.\n\n\n\n\n\n\\section{Summary}\n\\label{sum}\n\nLet us summarize by first restating that we have found long-range\nrapidity correlations in the initial stages of strongly-coupled heavy\nion collisions as described by AdS\/CFT correspondence. We expect that\ndue to causality the correlations would survive the late-time\nevolution of the produced medium, though one needs to have a full\nsolution of the shock wave collision problem to be able to verify this\nassertion. 
The long-range rapidity correlations may be relevant for\nthe description of the ``ridge'' correlation observed in heavy ion and\nproton-proton collisions\n\\cite{Adams:2005ph,Adare:2008cqb,Alver:2009id,Khachatryan:2010gv}.\nIndeed ``ridge'' correlation is characterized not only by the\nlong-range rapidity correlation, but also by a narrow zero-angle\nazimuthal correlation between the triggered and associated particles.\nAs was suggested in \\cite{Gavin:2008ev,Dumitru:2008wn} such azimuthal\ncorrelation may be due to the radial flow of the produced medium. The\nadvantage of the AdS\/CFT approach to the problem is that the full\nsolution to the problem for a collision of two shock waves with some\nnon-trivial transverse profiles would have radial flow included in the\nevolution of the dual metric, and would be able to demonstrate whether\nradial flow is sufficient to lead to the ``ridge'' phenomenon. Indeed\nsuch calculation appears to be prohibitively complicated to do\nanalytically at the moment.\n\n\nThe correlations we found grow very fast with rapidity interval, as\none can see from \\eq{corrf}, while the experimentally observed\ncorrelation\n\\cite{Adams:2005ph,Adare:2008cqb,Alver:2009id,Khachatryan:2010gv} is\nat most flat in rapidity. This result may lead to the conclusion that\nthe initial stages of heavy ion collisions can not be\nstrongly-coupled, since this contradicts existing observations. At the\nsame time, it may happen that higher-order corrections in $\\mu_1$ and\n$\\mu_2$ would affect this rapidity dependence, flattening the\nresulting distribution. On yet another hand, such higher-order\ncorrections become important at later times, and eventually causality\nmay prohibit further late-time modification to the long-range rapidity\ncorrelations. More work is needed to clarify this important question\nabout the rapidity-shape of the correlations coming from the solution\nof the full problem in AdS.\n\n\nAssuming that the issue of rapidity shape would be resolved, we would\nalso like to point out that $k_T$-dependence of obtained correlator\n\\peq{GR_gen} closely resembles that reported in the data\n\\cite{Adams:2005ph,Adare:2008cqb,Alver:2009id,Khachatryan:2010gv}: it\nstarts out growing with $k_T$ at low-$k_T$, and, at higher $k_T$, it\nfalls off with $k_T$. The location of the maximum of the correlator in\nour case was determined by the mass and size of the produced\nparticles, and was thus energy-independent. It is possible that the\nsolution of the full problem, resumming all powers of $\\mu_1$ and\n$\\mu_2$ would lead to the maximum of the correlation function given by\n$\\mu_{1,2}^{1\/3}$, which in turn would be inversely proportional to\nthe thermalization time \\cite{Grumiller:2008va,Kovchegov:2009du}, thus\nproviding an independent way of measuring this quantity. Again more\nresearch is needed to explore this possibility.\n\n\n\n\n\n\n\n\n\n\\acknowledgments\n\nThis research is sponsored in part by the U.S. Department of Energy\nunder Grant No. DE-SC0004286.\n\n\n\n\\providecommand{\\href}[2]{#2}\\begingroup\\raggedright","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn those areas of particle phenomenology which require addressing\nnon-perturbative effects, Lattice QCD plays an increasingly\nsignificant role, being a first-principles method. The rapid advances in\ncomputational performance as well as algorithmic techniques\nare allowing for better control of various errors associated\nwith lattice calculations. 
\n\nIn this paper we address the phenomenology of $K^0\\to\\pi\\pi$ decays.\nOne of the long-standing puzzles is the ``$\\Delta I=1\/2$ rule'',\nwhich is the observation that the transition channel with isospin changing\nby 1\/2 is enhanced 22 times with respect to transitions\nwith isospin changing by 3\/2. \nThe strong interactions are essential for\nexplaining this effect within the Standard Model.\nSince the energy scales involved in these decays are rather small,\ncomputations in quantum chromodynamics (QCD) have to be done\nusing a non-perturbative method such as Lattice QCD. Namely,\nLattice QCD is used to calculate hadronic matrix elements of \nthe operators appearing in the effective weak Hamiltonian. \n\nThere have been so far several other attempts to study matrix elements of the \noperators relevant for $\\Delta I=1\/2$ rule on the \nlattice~\\cite{KilcupSharpe,BernardSoni,MartinelliMaiani},\nbut they fell short of desired accuracy.\nIn addition, several groups~\\cite{Twopi,BernardSoniTwopi} have studied matrix\nelements $\\langle \\pi^+\\pi^0|O_i|K^+\\rangle$, which describe\nonly $\\Delta I=3\/2$, not $\\Delta I=1\/2$ transition.\nIn the present simulation, the statistics is finally under control\nfor $\\Delta I=1\/2$ amplitude. \n\nOur main work is in calculating matrix elements \n$\\langle\\pi^+ |O_i|K^+\\rangle$ and $\\langle 0|O_i|K^0\\rangle$ for\nall basis operators (introduced in Sec.~\\ref{sec:framework}). \nThis is enough to recover matrix elements\n$\\langle\\pi\\pi|O_i|K^0\\rangle$ using chiral perturbation theory\nin the lowest order, although this procedure suffers from \nuncertainties arising from ignoring higher orders (in particular, \nfinal state interactions). The latter matrix elements are an essential\npart of the phenomenological expressions for $\\Delta I=1\/2$ \nand \\mbox{$\\Delta I=3\/2$} amplitudes, as well as $\\varepsilon '\/\\varepsilon$.\nThe ratio of the amplitudes computed in this way\nconfirms significant enhancement of $\\Delta I=1\/2$ channel,\nalthough systematic uncertainties preclude a definite answer.\n\nIn addition, we address a related issue of \n$\\varepsilon '\/\\varepsilon$ -- the direct \nCP-violation parameter in the neutral kaon system.\nAs of the day of writing, the experimental data are somewhat \nambiguous about this parameter: the group at CERN (NA48)~\\cite{CERN} \nreports $\\mbox{Re} (\\varepsilon '\/\\varepsilon) = \\mbox{\n$(23 \\pm 7) \\times 10^{-4},$}$ \nwhile the Fermilab group \\mbox{(E731)}~\\cite{Fermilab} has found \n$\\mbox{Re} (\\varepsilon '\/\\varepsilon) = \\mbox{$(7.4 \\pm 6.0) \\times 10^{-4}$.}$ \nThere is a hope that the discrepancy between the two reports will soon\nbe removed in a new generation of experiments.\n\nOn the theoretical side, the progress in estimating \n$\\varepsilon '\/\\varepsilon$ in the Standard\nModel is largely slowed down by the unknown matrix elements~\\cite{buras} \nof the appropriate operators.\nThe previous attempts~\\cite{KilcupSharpe,BernardSoni,MartinelliMaiani} \nto compute them on the lattice did not take\ninto account operator matching. In this work we repeat this calculation\nwith better statistics and better investigation of systematic \nuncertainties. We are using perturbative operator matching. In some cases\nit does not work, so we explore alternatives and come up with a\npartially non-perturbative renormalization procedure. The associated\nerrors are estimated to be large. This is currently the biggest stumbling \nblock in computing $\\varepsilon '\/\\varepsilon$. 
\n\nThe paper is structured as follows. In the Section~\\ref{sec:Framework}\nwe show the context of our calculations, define the quantities\nwe are looking after and discuss a number of theoretical points\nrelevant for the calculation. Section~\\ref{sec:lattice details}\ndiscusses issues pertaining to the lattice simulation. \nIn Section~\\ref{sec:di12} we present the results and discuss systematic errors\nfor $\\Delta I=1\/2$ rule amplitudes.\nIn Section~\\ref{sec:pert} we explain how the operator matching problem \ntogether with other systematic errors preclude a reliable calculation of \n$\\varepsilon '\/\\varepsilon$, and give our best estimates\nfor this quantity in Section~\\ref{sec:epsp_res}.\nSection~\\ref{sec:conclusion} contains the conclusion. In the Appendix\nwe give details about the quark operators and sources, and\nprovide explicit expressions for all contractions and matrix\nelements for reference purposes.\n\n\\section{Theoretical framework}\n\\label{sec:Framework}\n\n\\subsection{Framework and definitions}\n\\label{sec:framework}\n\nThe standard approach to describe the problems in question is to\nuse the Operator Product Expansion at the $M_W$ scale and use the\nRenormalization Group equations to translate the effective weak\ntheory to more \nconvenient scales ($\\mu \\sim$~2--4~GeV). At these scales the effective \nHamiltonian for $K\\to\\pi\\pi$ decays is the following linear \nsuperposition~\\cite{buras}:\n\\begin{equation}\nH_{\\mathrm W}^{\\mathrm eff} = \n\\frac{G_F}{\\sqrt{2}} V_{ud}\\,V^*_{us} \\sum_{i=1}^{10} \\Bigl[\nz_i(\\mu) + \\tau y_i(\\mu) \\Bigr] O_i (\\mu) \n \\, , \n\\end{equation}\nwhere $z_i$ and $y_i$ \nare Wilson coefficients (currently known at two-loop order), \n$\\tau \\equiv - V_{td}V_{ts}^{*}\/V_{ud} V_{us}^{*}$, \nand $O_i$ are basis of four-fermions operators defined as follows:\n\\begin{eqnarray}\n\\label{eq:ops1}\nO_1 & = & (\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5) u_\\beta )\n(\\bar{u}_\\beta \\gamma^\\mu (1-\\gamma_5)d_\\alpha ) \\\\\nO_2 & = & (\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)u_\\alpha)\n(\\bar{u}_\\beta \\gamma^\\mu (1-\\gamma_5)d_\\beta ) \\\\\n\\label{eq:ops3}\nO_3 & = & (\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)d_\\alpha )\n\\sum_q(\\bar{q}_\\beta \\gamma^\\mu (1-\\gamma_5)q_\\beta ) \\\\\nO_4 & = & (\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)d_\\beta )\n\\sum_q(\\bar{q}_\\beta \\gamma^\\mu (1-\\gamma_5)q_\\alpha ) \\\\\nO_5 & = & (\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)d_\\alpha )\n\\sum_q(\\bar{q}_\\beta \\gamma^\\mu (1+\\gamma_5)q_\\beta ) \\\\\nO_6 & = & (\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)d_\\beta )\n\\sum_q(\\bar{q}_\\beta \\gamma^\\mu (1+\\gamma_5)q_\\alpha ) \\\\\nO_7 & = & \\frac{3}{2}(\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)d_\\alpha )\n\\sum_q e_q (\\bar{q}_\\beta \\gamma^\\mu (1+\\gamma_5)q_\\beta ) \\\\\nO_8 & = & \\frac{3}{2}(\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)d_\\beta )\n\\sum_q e_q (\\bar{q}_\\beta \\gamma^\\mu (1+\\gamma_5)q_\\alpha ) \\\\\nO_9 & = & \\frac{3}{2}(\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)d_\\alpha )\n\\sum_q e_q (\\bar{q}_\\beta \\gamma^\\mu (1-\\gamma_5)q_\\beta ) \\\\ \nO_{10} & = & \\frac{3}{2}(\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)d_\\beta )\n\\sum_q e_q (\\bar{q}_\\beta \\gamma^\\mu (1-\\gamma_5)q_\\alpha ) \n\\label{eq:ops10}\n\\end{eqnarray}\nHere $\\alpha$ and $\\beta$ are color indices, $e_q$ is quark\nelectric charge, and summation is done over all light quarks. 
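To make the bookkeeping of Wilson coefficients and matrix elements explicit, the assembly of an amplitude from this basis can be sketched schematically as follows; all numerical values below are placeholders for illustration only and are not the coefficients or matrix elements used in this work.
\begin{verbatim}
import numpy as np

# A = (G_F/sqrt(2)) V_ud V_us^* sum_i [z_i(mu) + tau*y_i(mu)] <O_i(mu)>
G_F = 1.16637e-5                # GeV^{-2}
Vud, Vus = 0.974, 0.225         # CKM factors (magnitudes)
tau = -0.0016 - 0.0007j         # -V_td V_ts^*/(V_ud V_us^*), placeholder

z = np.array([-0.40, 1.20, 0.01, -0.03, 0.01, -0.03, 0.0, 0.0, 0.0, 0.0])
y = np.array([ 0.00, 0.00, 0.03, -0.06, 0.04, -0.08, 0.0, 0.07, -0.01, 0.0])
O = np.array([ 0.02, 0.05, 0.00, 0.01, -0.01, -0.04, 0.0, 0.03, 0.00, 0.0])  # <O_i>, GeV^3

A = G_F / np.sqrt(2) * Vud * Vus * np.sum((z + tau * y) * O)
# The imaginary part, which drives epsilon', enters only through tau*y_i.
print("Re A =", A.real, "GeV   Im A =", A.imag, "GeV")
\end{verbatim}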
\n\nIsospin amplitudes are defined as \n\\begin{equation}\n\\label{amp}\nA_{0,2}e^{i\\delta_{0,2}} \\equiv \\langle (\\pi\\pi )_{I=0,2}|H_W|K^0\\rangle ,\n\\end{equation}\nwhere $\\delta_{0,2}$ are the final state interaction phases of the\ntwo channels. Experimentally\n\\begin{equation}\n\\omega = \\mbox{Re} A_0 \/\\mbox{Re} A_2 \\simeq 22 \\, .\n\\end{equation}\n\nDirect CP violation parameter $\\varepsilon '$ is defined in terms \nof imaginary parts of these amplitudes:\n\\begin{equation}\n\\varepsilon ' = -\\frac{\\mbox{Im} A_0 - \\omega \\mbox{Im} A_2}{\\sqrt{2}\\omega\\mbox{Re} A_0}\n e^{i(\\pi\/2 + \\delta_2 - \\delta_0)}.\n\\end{equation}\nExperiments are measuring the quantity $\\mbox{Re} \\varepsilon '\/\\varepsilon$, \nwhich is given by\n\\begin{equation}\n\\label{eq:epsp}\n\\mbox{Re} \\,\\frac{\\varepsilon '}{\\varepsilon} \\simeq\n\\frac{G_F}{2\\omega |\\varepsilon |\\mbox{Re}{A_0}} \\,\n\\mbox{Im}\\, \\lambda_t \\, \\,\n \\left[ \\Pi_0 - \\omega \\: \\Pi_2 \\right] ,\n\\end{equation}\nwhere\n\\begin{eqnarray}\n\\label{P0}\n \\Pi_0 & = & \\sum_i y_i \\, \\langle (\\pi\\pi )_{I=0}|O_i^{(0)}|K^0\\rangle \n(1 - \\Omega_{\\eta +\\eta '}) \\\\\n\\label{P2}\n \\Pi_2 & = & \\sum_i y_i \\, \\langle (\\pi\\pi )_{I=2}|O_i^{(2)}|K^0\\rangle \n\\end{eqnarray}\nwith $\\mbox{Im}\\, \\lambda_t \\equiv \\mbox{Im} V_{td}V^*_{ts}$, and where\n$\\Omega_{\\eta + \\eta'} \\sim 0.25\\pm 0.05$ takes into account the effect\nof isospin breaking in quark masses ($m_u \\neq m_d$). $O_i^{(0)}$ and\n$O_i^{(2)}$ are isospin 0 and 2 parts of the basis operators.\nTheir expressions are given in the Appendix for completeness.\n\n\\subsection{Treatment of charm quark}\n\nThe effective Hamiltonian given above is obtained in the continuum \ntheory in which the top, bottom and charm quarks are integrated out. \n(In particular, the summation in Eqs.~(\\ref{eq:ops3}--\\ref{eq:ops10}) \nis done over $u$, $d$ and $s$ quarks.) This makes sense only when\nthe scale $\\mu$ is sufficiently low compared to the charm quark mass.\nAs mentioned in Ref.~\\cite{charm}, at scales comparable to $m_c$ \nhigher-dimensional \noperators can contribute considerably. Then one should consider\nan expanded set of operators including those containing the charm quark.\nLattice treatment of the charm quark is possible but\nin practice quite limited, for example by having to work at much\nsmaller lattice spacings and having a more complicated set\nof operators and contractions. Therefore\nwe have opted to work in the effective theory in which the charm quark\nis integrated out. Since we typically use $\\mu \\sim 2$~GeV in our\nsimulations, this falls into a dangerous region. We hope that\nthe effects of higher-dimensional operators can still be neglected, but\nstrictly speaking this issue should be separately investigated.\n\n\\subsection{Calculating $\\langle \\pi\\pi|O_i|K^0\\rangle$.}\n\nAs was shown by Martinelli and Testa~\\cite{testa}, two-particle\nhadronic\nstates are very difficult to construct on the lattice (and in general,\nin any Euclidean description). 
We have\nto use an alternative procedure to calculate the matrix elements \nappearing in Eqs.~(\\ref{amp},\\ref{P0},\\ref{P2}).\nWe choose the method ~\\cite{bernard} in which the lowest-order\nchiral perturbation theory is used to relate \n$\\langle \\pi\\pi |O_i|K^0\\rangle$ to matrix elements involving one-particle states:\n\\begin{eqnarray}\n\\label{eq:chpt1}\n\\langle \\pi^+\\pi^-|O_i|K^0\\rangle & = & \\frac{m_K^2-m_\\pi^2}{f}\\gamma \\\\\n\\langle \\pi^+|O_i|K^+\\rangle & = & (p_\\pi \\cdot p_K)\\gamma - \n \\frac{m_s+m_d}{f}\\delta \\\\\n\\label{eq:chpt3} \n\\langle 0|O_i|K^0\\rangle & = & (m_s-m_d)\\delta ,\n\\end{eqnarray}\nwhere $f$ is the lowest-order pseudoscalar decay constant.\nThe masses in the first of these formulae are the physical meson masses,\nwhile the quark masses and the momenta in the second and third formulae\nare meant to be from actual simulations on the lattice\n(done with unphysical masses). \nThese relationships ignore higher order terms in the chiral expansion, \nmost importantly the final state interactions. \nTherefore this method suffers from a significant uncertainty. \nGolterman and Leung~\\cite{golterman} have computed one-loop correction \nfor $\\Delta I=3\/2$ amplitude in chiral perturbation theory. \nThey find this correction can be large, up to 30\\% or 60\\%, depending\non the values of unknown contact terms and the cut-off. \n\n\\section{Lattice techniques}\n\\label{sec:lattice details}\n\n\\subsection{Mixing with lower-dimensional operators.}\n\nEqs.~(\\ref{eq:chpt1}--\\ref{eq:chpt3}) handle unphysical\n$s \\leftrightarrow d$ mixing in $\\langle\\pi^+|O_i|K^+\\rangle$ \nby subtracting the unphysical part proportional to \n$\\langle 0|O_i|K^0\\rangle$. This is equivalent to subtracting\nthe operator \n\\begin{equation}\nO_{sub} \\equiv (m_d+m_s)\\bar{s}d + (m_d-m_s)\\bar{s}\\gamma_5d \\,.\n\\label{eq:SubOp}\n\\end{equation}\nAs shown by Kilcup, Sharpe {\\it et al.} in Refs.~\\cite{ToolKit,WeakME}, \nthese statements are also true\non the lattice if one uses staggered fermions. A number of Ward identities\ndiscussed in these references show that lattice formulation with\nstaggered fermions retains \nthe essential chiral properties of the continuum theory. In particular,\n$O_{sub}$ defined in Eq.~\\ref{eq:SubOp} is the only lower-dimensional \noperator appears in mixing with the basis operators.\n(Lower-dimensional operators have to be subtracted non-perturbatively\nsince they are multiplied by powers of $a^{-1}$.)\nWe employ the non-perturbative procedure suggested in Ref.~\\cite{WeakME}:\n\\begin{equation}\n\\label{eq:sub1}\n\\langle \\pi^+\\pi^-|O_i|K^0\\rangle = \n\\langle \\pi^+|O_i - \\alpha_i O_{sub}|K^+\\rangle \\cdot \\frac{m_K^2-m_\\pi^2}\n{(p_\\pi\\cdot p_K)f} \\, , \n\\end{equation}\nwhere $\\alpha_i$ are found from \n\\begin{equation}\n0 = \\langle 0|O_i - \\alpha_i O_{sub}|K^0\\rangle \\, .\n\\label{eq:sub2}\n\\end{equation}\nThis procedure is equivalent to the lattice version of \nEqs.~(\\ref{eq:chpt1}--\\ref{eq:chpt3}) and allows subtraction \ntimeslice by timeslice.\n\nThroughout our simulation we use only degenerate mesons, i.e. $m_s=m_d=m_u$.\nSince only negative parity part of $O_{sub}$ contributes in \nEq.~(\\ref{eq:sub2}), one naively expects infinity when calculating \n$\\alpha_i$. 
However, matrix elements \n$\\langle 0|O_i|K^0\\rangle$ of all basis operators \nvanish when $m_s=m_d$ due to invariance of both the Lagrangian\nand all the operators in question under the CPS symmetry, which\nis defined as the CP symmetry combined with interchange of $s$ and $d$ \nquarks. Thus calculation of $\\alpha_i$ requires taking the first derivative \nof $\\langle 0|O_i|K^0\\rangle$ with respect to $(m_d-m_s)$. In order\nto evaluate the first derivative numerically, we insert another\nfermion matrix inversion in turn into all propagators involving\nthe strange quark. Detailed expressions for all contractions \nare given in the Appendix.\n\n\\begin{figure}[p]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=12cm \\epsfbox{diag.eps}}\n\\end{center}\n\\caption{Five diagrams types needed to be computed: (a) ``Eight'';\n(b) ``Eye''; (c) ``Annihilation''; (d) ``Subtraction''; (e)\ntwo-point function.}\n\\label{diagrams}\n\\end{figure} \n\n\\subsection{Diagrams to be computed}\n\n\\label{sec:diag}\n\nAccording to Eqs.~(\\ref{eq:sub1},\\ref{eq:sub2}), we need to compute three\ndiagrams involving four-fermion operators (shown in Fig.~\\ref{diagrams})\nand a couple of bilinear contractions. The ``eight'' contraction type \n(Fig.~\\ref{diagrams}a) is relatively cheap to compute. It is the only\ncontraction needed for the $\\Delta I=3\/2$ amplitude.\nThe ``eye'' and ``annihilation'' \ndiagrams (Fig.~\\ref{diagrams}b and~\\ref{diagrams}c) are much more \nexpensive since they involve calculation of propagators from\nevery point in space-time. \n\n\n\\subsection{Lattice parameters and other details}\n\nThe parameters of simulation are listed in the Table~\\ref{tab:parameters}.\nWe use periodic boundary conditions in both space and time.\nOur main ``reference'' ensemble is a set of quenched configurations\nat $\\beta \\equiv 6\/g^2 =6.0$ ($Q_1$). In addition, we use an\nensemble with a larger lattice volume ($Q_2$), an ensemble\nwith $\\beta =6.2$ ($Q_3$) for checking the lattice spacing dependence,\nand an ensemble with 2 dynamical flavors ($m=0.01$) generated by the \nColumbia group, used for checking the impact of quenching. \nThe ensembles were obtained using 4 sweeps of $SU(2)$ overrelaxed\nand 1 sweep of $SU(2)$ heatbath algorithm\\footnote{except for the dynamical \nset which was obtained by R-algorithm~\\cite{Columbia1}}. The configurations \nwere separated by\n1000 sweeps, where one sweep includes three $SU(2)$ subgroups updates.\n\n\\begin{table}[tbh]\n\\caption{Simulation parameters}\n\\label{tab:parameters}\n\\begin{tabular}{ccccccc}\n\\hline\\hline\nEnsemble & $N_f$ & $\\beta $ & Size & L, fm & Number of & Quark masses\\\\\nname & & & & & configurations & used \\\\\n\\hline\n$Q_1$ & 0 & 6.0 & $16^3\\times (32\\times 4)$ & 1.6 & 216 & 0.01 --- 0.05 \\\\\n$Q_2$ & 0 & 6.0 & $32^3\\times (64\\times 2)$ & 3.2 & 26 & 0.01 --- 0.05 \\\\\n$Q_3$ & 0 & 6.2 & $24^3\\times (48\\times 4)$ & 1.7 & 26 & 0.005 --- 0.03 \\\\\n$D$ & 2 & 5.7 & $16^3\\times (32\\times 4)$ & 1.6 & 83 & 0.01 --- 0.05 \\\\\n\\hline\\hline\n\\end{tabular}\n\\end{table}\n\nWe use the standard staggered fermion action. \nFermion matrices are inverted by conjugate gradient.\nJackknife is used for statistical analysis. \n\nAs explained below, we have extended the lattice 4\ntimes\\footnote{for all ensemble except the biggest volume, \nwhich we extend 2 times.}\nin time dimension by copying gauge links. This is done in order to \nget rid of excited states contamination and wrap-around effects. 
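Returning for a moment to the statistical analysis: the single-elimination jackknife used for all quoted errors amounts to the following generic procedure (a schematic sketch with synthetic input, not the production code).
\begin{verbatim}
import numpy as np

def jackknife(samples, estimator=np.mean):
    # Single-elimination jackknife: central value and error of an estimator.
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    theta_del = np.array([estimator(np.delete(samples, i)) for i in range(n)])
    err = np.sqrt((n - 1) / n * np.sum((theta_del - theta_del.mean())**2))
    return estimator(samples), err

# Synthetic example: one observable measured on 216 configurations, as for Q_1.
data = np.random.default_rng(0).normal(loc=0.42, scale=0.05, size=216)
mean, err = jackknife(data)
print("estimate = %.4f +/- %.4f" % (mean, err))
\end{verbatim}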
\n\nThe lattice spacing values for quenched ensembles were obtained by \nperforming a fit in\nthe form of the asymptotic scaling to the quenched data of $\\rho$ meson\nmass given elsewhere~\\cite{spectrum}. Lattice spacing for the dynamical \nensemble is also set by the $\\rho$ mass~\\cite{Columbia}. \n\nSome other technicalities are as follows.\nWe work in the two flavor formalism. We use local wall sources\nthat create pseudoscalar mesons at rest.\n(Smearing did not have a substantial effect.) \nThe mesons are degenerate ($m_s=m_d=m_u$, $m_\\pi=m_K$).\nWe use staggered\nfermions and work with gauge-invariant operators, since the \ngauge symmetry enables significant reduction of the list of \npossible mixing operators. The staggered flavour structure\nis assigned depending on the contraction type.\nOur operators are tadpole-improved. This \nserves to `improve'' the perturbative expansion at a later stage \nwhen we match the lattice and continuum operators.\nFor calculating fermion loops we employ the $U(1)$ pseudofermion\nstochastic estimator. \nMore details and explanation of some of these \nterms can be found in the Appendix.\n\n\\begin{figure}[!hbt]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=7cm \\epsfbox{setup.eps}}\n\\end{center}\n\\caption{The general setup of calculation of\n$\\langle \\pi^+|O_i|K^+\\rangle$ (without loss of generality, \nan ``eight'' contraction is shown). The kaon source is\nat the timeslice 0, while the pion sink is at the timeslice $T$.\nThe operator is inserted at a variable time $t$. The result of this\ncontraction is proportional to the product of two exponentials\nshown in the figure.}\n\\label{setup}\n\\end{figure} \n\n\\subsection{Setup for calculating matrix elements of four-fermion \\\\\noperators}\n\n\nConsider the setup for calculation of $\\langle \\pi^+|O_i|K^+\\rangle$.\nKaons are created at $t_0=0$, the operators are inserted at \na variable time $t$, and the pion sink is located at the\ntime $T$ (see Fig.~\\ref{setup}), where $T$ is sufficiently large.\nIn principle, a number of states with pseudoscalar quantum numbers\ncan be created by the kaon source.\nEach state's contribution is proportional to $\\sqrt{Z}e^{-m|t|}$, so the \nlightest state (kaon) dominates at large enough $t$.\nAnalogously, states annihilated by the sink contribute proportionally\nto $\\sqrt{Z}e^{-m|T-t|}$, which is dominated by the pion. \n\n\nIn this work kaon and pion have equal mass.\nIn the middle of the lattice, where $t$\nis far enough from both 0 and $T$, we expect to see a plateau, \ncorresponding to $Z e^{-m_\\pi T}\\langle\\pi|O|K\\rangle$. \nThis plateau is our working region (see Fig.~\\ref{plateau}). \n\nAs concerns the kaon annihilation matrix elements \n$\\langle 0|O_i|K^0\\rangle$, we only need their ratio to \n$\\langle 0|\\overline{s}\\gamma_5 d|K^0\\rangle$, in which the \nfactors $\\sqrt{Z}e^{-mt}$ cancel. Indeed, we observe a rather steady\nplateau (Fig.~\\ref{ann}). \n\n\\begin{figure}[!bht]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=5cm \\epsfbox{ratio.eps}}\n\\end{center}\n\\caption{$B$ ratio is formed by dividing the four-fermion matrix element\nby the product of two-point functions, typically involving $A_\\mu$\nor $P$ bilinears. All the operators involved are inserted at the same\ntimeslice $t$, and summation is done over spatial volume. The external \nmeson sources \nare located at timeslices 0 and $T$, both in the numerator and the \ndenominator. 
This enables cancellation of some common factors.}\n\\label{ratio}\n\\end{figure} \n\n\\subsection{$B$ ratios}\n\nIt has become conventional\nto express the results for matrix elements\nin terms of so-called $B$ ratios, which are the ratios of desired \nfour-fermion matrix elements to their\nvalues obtained by vacuum saturation approximation (VSA).\nFor example, the $B$ ratios of operators $O_2$ and $O_4$ are formed by\ndividing the full matrix element by the product of axial-current\ntwo-point functions (Fig.~\\ref{ratio}).\nWe expect the denominator to form a plateau \nin the middle of the lattice, equal to\n$Z e^{-m_\\pi T} \\, \\langle\\pi|A_\\mu|0\\rangle \\,\\cdot \\,\n\\langle 0|A^\\mu|K\\rangle$,\nwhere $A^\\mu$ are the axial vector currents with appropriate flavor quantum\nnumbers for kaon and pion. The\nfactor $Z e^{-m_\\pi T}$ cancels, leaving the desirable ratio\n$\\langle\\pi|O|K\\rangle \\, \/ \\,\n(\\langle\\pi|A_\\mu|0\\rangle\\, \\cdot \\, \\langle 0|A^\\mu|K\\rangle)$. \nApart from common normalization factors, \na number of systematic uncertainties also tend to cancel in this ratio,\nincluding the uncertainty in the lattice spacing, quenching and\nin some cases perturbative correction uncertainty. \nTherefore, it is sometimes reasonable to give lattice answers in terms\nof the $B$ ratios. \n\nHowever, eventually the physical matrix element\nneeds to be reconstructed by using the known experimental parameters \n(namely $f_K$) to compute VSA. In some cases, such as for operators\n$O_5$---$O_8$, the VSA itself is known very imprecisely due to the\nfailure of perturbative matching (see Sec.~\\ref{sec:pert}).\nThen it is more reasonable to give answers in terms of matrix elements\nin physical units. We have adopted the strategy of expressing all matrix \nelements in units of $\\langle\\pi|A_\\mu|0\\rangle \\, \\langle 0|A^\\mu|K\\rangle\n= (f_K^{latt})^2 m_M^2$ at an intermediate stage, and using \npre-computed $f_K^{latt}$ at the given meson mass to convert to physical \nunits. This method is sensitive to the choice of the lattice spacing. \n\n\\begin{figure}[!hbt]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=7cm \\epsfbox{IB2vst.eps}}\n\\end{center}\n\\caption{An example of the signal we get for one of the $B$ ratios\n(in this case, for the ``eye'' part of the $O_2$ operator on $Q_1$ ensemble). \nThe wall sources are at $t=1$ and $t=49$. We see that\nthe excited states quickly disappear and a stable, well-distinguished\nplateau is observed. We perform jackknife averaging in the range of $t$\nfrom 12 to 37 (shown with the horizontal lines). It is important to \nconfirm the existence of the plateau for reliability of the results.}\n\\label{plateau}\n\\end{figure} \n\n\\begin{figure}[!htb]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=6.5cm \\epsfbox{AB2vst.eps}}\n\\end{center}\n\\caption{An example of the signal for \n$\\langle 0|O_2|K^0\\rangle\\,\/\\,\n[(m_d-m_s)\\,\\langle 0|\\overline{s}\\gamma_5 d|K^0\\rangle]$ on $Q_1$\nensemble. The kaon source is at $t=1$. We average over the range\nof $t$ from 5 to 12 (shown with horizontal lines).}\n\\label{ann}\n\\end{figure} \n\nIt is very important to check that the time distance between the\nkaon and pion sources $T$ is large enough so \nthat the excited states do not contribute. That is, the plateau\nin the middle of the lattice should be sufficiently flat,\nand the $B$ ratios should not depend on $T$. 
We have found that \nin order to satisfy this requirement the lattice has to be\nartificially extended in time direction by using\na number of copies of the gauge links (4 in the case of the small\nvolume lattices, 2 otherwise). We are using $T=72$ for $Q_3$ \n($\\beta =6.2$) ensemble, and $T=48$ for the rest.\nAn example of a plateau that we obtain\nwith this choice of $T$ is shown in Fig.~\\ref{plateau}.\nTo read off the result, we average over the whole extension\nof the plateau, and use jackknife to estimate the statistical\nerror in this average. \n\n\n\\section{$\\Delta I=1\/2$ rule results}\n\\label{sec:di12}\n\nUsing the data obtained for matrix elements of basis operators,\nin this section we report numerical results for $\\mbox{Re} A_0$\nand $\\mbox{Re} A_2$ amplitudes as well as their ratio. We discuss these\namplitudes separately since the statistics for $\\mbox{Re} A_2$ is\nmuch better and the continuum limit extrapolation is possible. \n\n\\subsection{$\\mbox{Re} A_2$ results}\n\\label{sec:A2}\n\nThe expression for $\\mbox{Re} A_2$ can be written as\n\\begin{equation}\n\\mbox{Re} A_2 = \\frac{G_F}{\\sqrt{2}}\\, V_{ud}V_{us}^*\\, z_+(\\mu )\n\\langle O_2\\rangle _2,\n\\end{equation}\nwhere $z_+ (\\mu )$ is a Wilson coefficient and\n\\begin{equation}\n\\langle O_2\\rangle _2 \\equiv \\langle (\\pi\\pi)_{I=2}|O_2^{(2)}|K\\rangle .\n\\end{equation}\nHere\n\\begin{eqnarray}\nO_2^{(2)} = O_1^{(2)} & = & \\frac{1}{3} \n[ (\\overline{s}\\gamma_\\mu(1-\\gamma_5)u)(\\overline{u}\\gamma^\\mu(1-\\gamma_5)d) \n\\nonumber \\\\ & &\n+(\\overline{s}\\gamma_\\mu(1-\\gamma_5)d)(\\overline{u}\\gamma^\\mu(1-\\gamma_5)u)\n-(\\overline{s}\\gamma_\\mu(1-\\gamma_5)d)(\\overline{d}\\gamma^\\mu(1-\\gamma_5)d).\n\\end{eqnarray}\nIn the lowest order chiral perturbation theory the matrix element\n$\\langle O_2\\rangle_2$ can be expressed as\n\\begin{equation}\n\\langle O_2\\rangle _2 \n= \\sqrt{2} \\,\\frac{m_K^2-m_\\pi^2}{f} \n\\,\\frac{\\langle\\pi^+|O_2^{(2)}|K^+\\rangle}{m^2}.\n\\end{equation}\nThe latter matrix element involves only ``eight'' diagrams. Moreover, \nin the limit of preserved $SU(3)_{\\mathrm flavor}$ symmetry\nit is directly related~\\cite{donoghue} to parameter $B_K$ (which is the \n$B$ ratio of the neutral kaon mixing operator \n$O_K= (\\overline{s}\\gamma_L d) \\;(\\overline{s}\\gamma_L d)$), so that\n\\begin{equation}\n\\langle O_2\\rangle _2 \n= \\frac{4\\sqrt{2}}{9} \\,\\frac{m_K^2-m_\\pi^2}{f_{\\mbox{exp}}} \n\\,f_{\\mbox{latt}}^2\\,B_K \\, ,\n\\end{equation}\n\nParameter $B_K$ is rather well studied (for example, by \nKilcup, Pekurovsky~\\cite{PK1} and JLQCD collaboration~\\cite{jlqcd}).\nQuenched chiral perturbation theory~\\cite{sharpe1} predicts \nthe chiral behaviour of the form \n\\mbox{$B_K=a+bm_K^2+c\\;m_K^2\\log{m_K^2}$,} which \nfits the data well (see Fig.~\\ref{Bk}) and\nyields a finite non-zero value in the chiral limit. \nNote that $\\mbox{Re} A_2$ is proportional\nto the combination $B_K f_{\\mbox{latt}}^2$, and since both multipliers \nhave a significant\ndependence on the meson mass (Figs.~\\ref{Bk} and~\\ref{fk}), \n$\\mbox{Re} A_2$ is very sensitive to that mass.\nFig.~\\ref{A2} shows $\\mbox{Re} A_2$ data for the dynamical ensemble, based on\n$B_K$ values we have reported elsewhere~\\cite{PK1}. \nWhich meson mass\nshould be used to read off the result becomes an open question. 
\nIf known, the higher order chiral terms would remove this ambiguity.\nForced to make a choice, we \nextrapolate to \\mbox{$M^2=(m_K^2+m_\\pi^2)\/2$}.\nUsing our data for $B_K$ in quenched QCD and taking\nthe continuum limit we obtain:\n$\\mbox{Re} A_2 = (1.7 \\pm 0.1)\\cdot 10^{-8}\\;\\mbox{GeV}$,\nwhere the error is only statistical,\nto be compared with the experimental result\n\\mbox{$\\;\\mbox{Re} A_2 = 1.23 \\cdot 10^{-8}\\;\\mbox{GeV}$. }\n\n\\begin{figure}[htb]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=6cm \\epsfbox{Bk.dyn.eps}}\n\\end{center}\n\\caption{Parameter $B_K$ in NDR $\\overline{\\mathrm MS}$ scheme at 2 GeV\non the dynamical ensemble vs. the meson mass squared. The fit\nis of the form \\mbox{$B_K=a+bm_K^2+c\\;m_K^2\\log{m_K^2}$.} The vertical line\nhere and in the other plots below marks the physical kaon mass.}\n\\label{Bk}\n\\end{figure} \n\\begin{figure}[tbh]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=6cm \\epsfbox{fpi.dyn.eps}}\n\\end{center}\n\\caption{Pseudoscalar decay constant ($F_\\pi = 93$ MeV experimentally) on \nthe dynamical ensemble vs. meson mass squared. The fit is of the same \nform as $B_K$.}\n\\label{fk}\n\\end{figure} \n\\clearpage\n\n\\begin{figure}[htb]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=8cm \\epsfbox{A2.dyn.eps}}\n\\end{center}\n\\caption{$\\mbox{Re} A_2$ for the dynamical ensemble. The fit is of\nthe same form as $B_K$. The horizontal line is the experimental value\nof 1.23 GeV. }\n\\label{A2}\n\\end{figure} \n\nHigher order chiral terms (including the meson mass dependence)\nare the largest systematic error in this determination.\nAccording to Golterman and Leung~\\cite{golterman}, one-loop corrections\nin (quenched) chiral perturbation theory are expected to be\nas large as $30\\%$ or $60\\%$. Other uncertainties (from lattice spacing \ndetermination, from perturbative operator matching and\nfrom using finite lattice volume) are much smaller.\n\n\n\\subsection{$\\mbox{Re} A_0$ results}\n\nUsing Eqs.~(\\ref{eq:sub1},\\ref{eq:sub2}), $\\mbox{Re} A_0$ can be expressed \nas\\footnote{In our normalization $\\mbox{Re} A_0 = 27.2 \\cdot 10^{-8}$.}\n\\begin{equation}\n\\mbox{Re} A_0 = \\frac{G_F}{\\sqrt{2}}V_{ud}V_{us}^* \\frac{m_K^2-m_\\pi^2}{f}\n\\sum_i z_i R_i ,\n\\end{equation}\nwhere $z_i$ are Wilson coefficients and \n$$\nR_i \\equiv \\frac{\\langle \\pi^+|O_i^{(0)}|K^+\\rangle_s}{m^2}.\n$$\nThe subscript '$s$' indicates that these matrix elements already\ninclude subtraction of \\linebreak $\\langle \\pi^+|O_{sub}|K^+\\rangle$. \nAll contraction types are needed, including the expensive ``eyes''\nand ``annihilations''.\n$O_i^{(0)}$ are isospin 0 parts of operators \n$O_i$ (given in the Appendix for completeness). 
For example,\n\\begin{eqnarray}\nO_1^{(0)} & = & \\frac{2}{3} \n(\\overline{s}\\gamma_\\mu (1-\\gamma_5)d)(\\overline{u}\\gamma^\\mu (1-\\gamma_5)u)\n-\\frac{1}{3}(\\overline{s}\\gamma_\\mu (1-\\gamma_5)u)(\\overline{u}\\gamma^\\mu \n(1-\\gamma_5)d) \\nonumber \\\\\n& + & \\frac{1}{3}(\\overline{s}\\gamma_\\mu (1-\\gamma_5)d)\n(\\overline{d}\\gamma^\\mu (1-\\gamma_5)d) \\\\\nO_2^{(0)} & = & \\frac{2}{3} \n(\\overline{s}\\gamma_\\mu (1-\\gamma_5)u)(\\overline{u}\\gamma^\\mu (1-\\gamma_5)d)\n-\\frac{1}{3}(\\overline{s}\\gamma_\\mu (1-\\gamma_5)d)(\\overline{u}\\gamma^\\mu \n(1-\\gamma_5)u) \\nonumber \\\\\n& + & \\frac{1}{3}(\\overline{s}\\gamma_\\mu (1-\\gamma_5)d)\n(\\overline{d}\\gamma^\\mu (1-\\gamma_5)d) \n\\end{eqnarray}\n\nThe results for quenched $\\beta =6.0$ and $\\beta =6.2$ ensembles\nare shown in Fig.~\\ref{A0}. Dependence on the \nmeson mass is small, so there is no big ambiguity about the mass\nprescription as in the $\\mbox{Re} A_2$ case. \nSome lattice spacing dependence may be present (Fig.~\\ref{A0cont}), \nalthough the statistics\nfor $\\beta =6.2$ ensemble is too low at this moment.\n\nThe effect of\nthe final state interactions (contained in the higher order\nterms of the chiral perturbation theory) is likely to be large.\nThis is the biggest and most poorly estimated\nuncertainty. \n\nAn operator matching uncertainty arises due to mixing of $O_2$ with $O_6$ \noperator through penguin diagrams in lattice perturbation\ntheory. This is explained in the Section~\\ref{sec:A0pert}. We estimate\nthis uncertainty at 20\\% for all ensembles. \n\nAs for other uncertainties, we have checked the lattice volume\ndependence by comparing ensembles $Q_1$ and $Q_2$ (1.6 and 3.2 fm\nat $\\beta =6.0$).\nThe dependence was found to be small, so we consider $(1.6 \\;{\\mathrm fm})^3$ \nas a volume large enough to hold the system. We have also checked the effect\nof quenching and found it to be small compared to noise\n(see Fig.~\\ref{A0quench}). \n\nThe breakdown of contributions of various basis\noperators to $\\mbox{Re} A_0$ is shown in Fig.~\\ref{A0hist}.\nBy far, $O_2$ plays the most important role, whereas penguins\nhave only a small influence. \n\n\\begin{figure}[htbp]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=10cm \\epsfbox{A0m.eps}}\n\\end{center}\n\\caption{$\\mbox{Re} A_0$ for quenched ensembles plotted against the meson mass \nsquared. The upper group of points\nis for ensembles $Q_1$ and $Q_2$, while the lower group is for $Q_3$. \nOnly statistical errors are shown. }\n\\label{A0}\n\\end{figure} \n\n\\begin{figure}[tbh]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=6cm \\epsfbox{A0vsa2.eps}}\n\\end{center}\n\\caption{$\\mbox{Re} A_0$ for quenched ensembles plotted against lattice spacing\nsquared. The horizontal line shows the experimental result of \n$27.2\\cdot 10^{-8}$ GeV. 
Only statistical errors are shown.}\n\\label{A0cont}\n\\end{figure} \n\n\\begin{figure}[htb]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=6cm \\epsfbox{A0m.dyn.eps}}\n\\end{center}\n\\caption{Comparison of quenched ($Q_1$) and dynamical results for $\\mbox{Re} A_0$\nat comparable lattice spacings.}\n\\label{A0quench}\n\\end{figure} \n\n\\clearpage\n\n\\begin{figure}[!tbh]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=6cm \\epsfbox{A0comp.eps}}\n\\end{center}\n\\caption{Contribution of various operators to $\\mbox{Re} A_0$.}\n\\label{A0hist}\n\\end{figure} \n\n\\begin{figure}[!bth]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=10cm \\epsfbox{omega.eps}}\n\\end{center}\n\\caption{The ratio $\\mbox{Re} A_2\/\\mbox{Re} A_0$ versus the meson mass squared\nfor quenched and dynamical ensembles.\nEnsembles $Q_1$ and $D$ have comparable lattice spacings. \nThe dynamical ensemble data were used for the fit. \nThe big slope of the fit line is accounted for by the\nmass dependence of $\\mbox{Re} A_2$. The horizontal line shows the experimental\nvalue of $1\/22$. The error bars show only the statistical errors \nobtained by jackknife.}\n\\label{omega}\n\\end{figure} \n\n\\subsection{Amplitude ratio}\n\nShown in Fig.~\\ref{omega} is the ratio $\\mbox{Re} A_2\/\\mbox{Re} A_0$ as directly \ncomputed on the lattice for quenched and dynamical data sets.\nThe data exhibit strong dependence on\nthe meson mass, due to $\\mbox{Re} A_2$ chiral behaviour (compare with \nFig.~\\ref{A2}). \n\nWithin our errors the results seem to confirm, indeed, the \ncommon belief that most of the $\\Delta I=1\/2$ enhancement comes \nfrom the ``eye'' and ``annihilation'' diagrams. The exact amount\nof enhancement is broadly consistent with experiment while being\nsubject to considerable uncertainty due to higher-order chiral terms.\nOther systematic errors are the same as those described \nin the previous Subsection.\n\n\\section{Operator matching}\n\n\\label{sec:pert}\n\nAs mentioned before, we have computed the matrix elements of all \nrelevant operators with an acceptable statistical accuracy.\nThese are regulated in the lattice renormalization \nscheme. To get physical results, \noperators need to be matched to the same scheme in which the Wilson\ncoefficients were computed in the continuum, namely $\\overline{\\mathrm MS}$\nNDR. While perturbative matching works quite well for\n$\\mbox{Re} A_0$ and $\\mbox{Re} A_2$, it seems to break down severely for\nmatching operators relevant for $\\varepsilon '\/\\varepsilon$.\n\n\\subsection{Perturbative matching and $\\mbox{Re} A_0$}\n\n\\label{sec:A0pert}\n\nConventionally, lattice and continuum operators are matched using\nlattice perturbation theory:\n\\begin{equation}\n\\displaystyle\n\\label{eq:matching}\nO_i^{\\it cont}(q^*) = O_i^{\\it lat} + \\displaystyle\\frac{g^2(q^*a)}{16\\pi^2}\\displaystyle\\sum_j(\\gamma_{ij}\\ln (q^*a)\n + C_{ij})O_j^{\\it lat} + O(g^4) + O(a^n) ,\n\\end{equation}\nwhere $\\gamma_{ij}$ is the one-loop anomalous dimension matrix \n(the same in the continuum \nand on the lattice), and $C_{ij}$ are finite coefficients calculated\nin one-loop lattice perturbation theory~\\cite{Ishizuka,PatelSharpe}. 
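In practice Eq.~(\ref{eq:matching}) acts as a linear map on the vector of bare lattice matrix elements. A schematic two-operator example (with toy values of $\gamma_{ij}$ and $C_{ij}$, not the actual coefficients of Refs.~\cite{Ishizuka,PatelSharpe}) is:
\begin{verbatim}
import numpy as np

def match_one_loop(O_lat, gamma, C, g2, a, q_star):
    # O^cont_i = O^lat_i + g^2/(16 pi^2) sum_j [gamma_ij ln(q* a) + C_ij] O^lat_j
    M = np.eye(len(O_lat)) + g2/(16*np.pi**2) * (gamma*np.log(q_star*a) + C)
    return M @ O_lat

O_lat = np.array([1.00, -0.30])                  # bare lattice matrix elements (toy)
gamma = np.array([[ 2.0, -6.0], [0.0,   4.0]])   # toy anomalous-dimension matrix
C     = np.array([[-10.0, 3.0], [1.0, -45.0]])   # toy finite coefficients
a     = 0.5                                      # lattice spacing in GeV^{-1}
print(match_one_loop(O_lat, gamma, C, g2=1.8, a=a, q_star=1.0/a))
print(match_one_loop(O_lat, gamma, C, g2=1.8, a=a, q_star=np.pi/a))
\end{verbatim}
The spread between the two choices of $q^*$ gives a rough idea of the size of the neglected higher-order terms.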
\nWe use the ``horizontal matching'' \nprocedure~\\cite{horizontal}, whereby the same coupling constant\nas in the continuum ($g_{\\overline{MS}}$) is used.\nThe operators are matched at an intermediate scale \n$q^*$ and evolved using the continuum renormalization\ngroup equations to the reference scale $\\mu$, which we take \nto be 2 GeV.\n\nIn calculation of $\\mbox{Re} A_0$ and $\\mbox{Re} A_2$, the main contributions\ncome from left-left operators. One-loop renormalization\nfactors for such (gauge-invariant) operators were computed by\nIshizuka and Shizawa~\\cite{Ishizuka} (for current-current diagrams)\nand by Patel and Sharpe~\\cite{PatelSharpe} (for penguins). \nThese factors are fairly small, so at the first glance the perturbation theory\nseems to work well, in contrast to the case of left-right operators \nessential for estimating $\\varepsilon '\/\\varepsilon$, as described below.\nHowever, even in the case of $\\mbox{Re} A_0$ there is a certain\nambiguity due to mixing of $O_2$ operator with $O_6$ through\npenguin diagrams. The matrix element of $O_6$ is rather large, so\nit heavily influences $\\langle O_2\\rangle$ in spite of the small\nmixing coefficient. Operator $O_6$ receives enormous renormalization \ncorrections in the first order, as discussed below. Therefore, there\nis an ambiguity as to whether the mixing should be evaluated\nwith renormalized or bare $O_6$. Equivalently, the higher-order\ndiagrams (such as Fig.~\\ref{higher-order}b and~\\ref{higher-order}d) \nmay be quite important. \n\nIn order to estimate the uncertainty of neglecting higher-order diagrams,\nwe evaluate the mixing with $O_6$ renormalized\nby the partially non-perturbative procedure described below, and\ncompare with results obtained by evaluating mixing with bare $O_6$.\nThe first method amounts to resummation of those higher-order diagrams \nbelonging to type (b) in Fig.~\\ref{higher-order}, while the second method\nignores all higher-than-one-order corrections. \nResults quoted in the previous Section\nwere obtained by the first method, which is also close to using\ntree-level matching. The second method would produce\nvalues of $\\mbox{Re} A_0$ lower by about 20\\%.\nThus we consider 20\\% a conservative estimate of the matching uncertainty. \n\nIn calculating $\\varepsilon '\/\\varepsilon$ the operator \nmatching issue becomes a much more serious obstacle as explained below.\n\n\\begin{figure}[htb]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=8cm \\epsfbox{O6m.dyn.eps}}\n\\end{center}\n\\caption{Three contributions to $\\langle O_6\\rangle$ matrix element:\n``eight'' (boxes), ``eye'' (diamonds) and ``subtraction'' (crosses).\nThese data represent bare operators for the dynamical ensemble.\nThe fit is done for the sum total of all contributions. All errors\nwere combined by jackknife.}\n\\label{fig:O6}\n\\end{figure} \n\n\\subsection{Problems with perturbative matching}\n\nThe value of $\\varepsilon '\/\\varepsilon$ depends on a number of \nsubtle cancellations\nbetween matrix elements. In particular, $O_6$ and $O_8$\nhave been so far considered the most important operators\nwhose contributions have opposite signs and almost cancel. Furthermore,\nmatrix element of individual operators contain three main components \n(``eights'', ``eyes'',\nand ``subtractions''), which again conspire to almost cancel each other\n(see Fig.~\\ref{fig:O6}). \nThus $\\varepsilon '\/\\varepsilon$ is extremely sensitive to each of these \ncomponents, and in particular to their matching. 
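A rough way to quantify this sensitivity is the following. If the two competing pieces of the combination $\Pi_0 - \omega\,\Pi_2$ entering Eq.~(\ref{eq:epsp}) are separately of size $|\Pi_0|$ but cancel down to a much smaller residual, an uncertainty $\delta\Pi_0$ in either piece is amplified in relative terms,
\begin{equation}
\frac{\delta\left(\Pi_0 - \omega\,\Pi_2\right)}{\left|\Pi_0 - \omega\,\Pi_2\right|}
\;\sim\;
\frac{\left|\Pi_0\right|}{\left|\Pi_0 - \omega\,\Pi_2\right|}\,
\frac{\delta\Pi_0}{\left|\Pi_0\right|}\,,
\end{equation}
and the same amplification applies to the internal cancellations among the ``eights'', ``eyes'' and ``subtractions'' within each matrix element. With matching corrections of order $100\%$, as found below, neither the magnitude nor the sign of $\varepsilon '/\varepsilon$ is protected.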
\n\n\\begin{table}[tbh]\n\\caption{$\\langle O_6\\rangle$ in arbitrary units with one-loop perturbative\nmatching using two values of $q^*$ for $Q_1$ ensemble. For comparison, \nthe results with no matching (``bare'') are given.}\n\\label{tab:O6pert}\n\\begin{tabular}{l|ccccc}\n\\hline\\tableline\nQuark mass & 0.01 & 0.02 & 0.03 & 0.04 & 0.05 \\\\\n\\hline\n$q^* =1\/a$ & \n$0.1 \\pm 1.2 $ &\n$-0.9 \\pm 0.4$ &\n$-1.2 \\pm 0.2$ &\n$-1.6 \\pm 0.3$ &\n$-1.1 \\pm 0.2$ \\\\\n$q^*=\\pi \/a$ &\n$-13.1 \\pm 1.8$ &\n$ -9.0 \\pm 0.5$ &\n$ -7.1 \\pm 0.3$ &\n$ -6.3 \\pm 0.5$ &\n$ -4.6 \\pm 0.5$ \\\\\nBare &\n$-55.6 \\pm 5.0$ &\n$-35.4 \\pm 1.5$ &\n$-27.0 \\pm 0.9$ &\n$-22.3 \\pm 1.4$ &\n$-16.4 \\pm 1.5$ \\\\\n\\hline\\tableline\n\\end{tabular}\n\\end{table}\n\nConsider fermion contractions with operators such as\\footnote{\nWe apologize for slightly confusing notation:\nwe use the same symbols here for\noperators as in the Appendix for types of contractions.}\n\\begin{eqnarray}\n\\label{eq:O6ops1}\n(PP)_{EU} & = & (\\overline{s} \\gamma_5 \\otimes \\xi_5 u) \n(\\overline{u} \\gamma_5 \\otimes \\xi_5 d) \\\\\n(SS)_{IU} & = & (\\overline{s} {\\displaystyle\\mathbf 1} \\otimes {\\displaystyle\\mathbf 1} d) \n(\\overline{d} {\\displaystyle\\mathbf 1} \\otimes {\\displaystyle\\mathbf 1} d) \\\\\n\\label{eq:O6ops3}\n(PS)_{A2U} & = & (\\overline{s} \\gamma_5 \\otimes \\xi_5 d) \n(\\overline{d} {\\displaystyle\\mathbf 1} \\otimes {\\displaystyle\\mathbf 1} d) ,\n\\end{eqnarray}\nwhich are main parts of, correspondingly, ``eight'', ``eye'' and \n``subtraction''\ncomponents of $O_6$ and $O_8$ (see the Appendix). The \nfinite renormalization coefficients for these operators\nhave been computed in Ref.~\\cite{PatelSharpe}. \nThe diagonal coefficients are very large, so the\ncorresponding one-loop corrections are in the \nneighborhood of $-100\\%$. In addition, they strongly depend\non which $q^*$ is used (refer to Table~\\ref{tab:O6pert}).\nThus perturbation theory fails in reliably \nmatching the operators in Eqs.~(\\ref{eq:O6ops1}--\\ref{eq:O6ops3}). \n\nThe finite coefficients for other (subdominant)\noperators, for example\n$(PP)_{EF}$, $(SS)_{EU}$ and $(SS)_{EF}$,\nare not known for formulation with gauge-invariant \noperators\\footnote{Patel and Sharpe~\\cite{PatelSharpe}\nhave computed corrections for gauge-noninvariant operators.\nOperators in Eqs.~(\\ref{eq:O6ops1})--(\\ref{eq:O6ops3})\nhave zero distances, so the corrections are the same for\ngauge invariant and non-invariant operators.\nRenormalization of other operators\n(those having non-zero distances) differs from the \ngauge-noninvariant operators.}.\nFor illustration purposes,\nin Table~\\ref{tab:O6pert} we have used coefficients for gauge\nnon-invariant operators computed in Ref.~\\cite{PatelSharpe}, but \nstrictly speaking this is not justified. \n\nTo summarize, perturbative matching does not work and\nsome of the coefficients are even poorly known. A solution\nwould be to use a non-perturbative matching procedure, such as\ndescribed by Donini {\\it et al.}~\\cite{non-pert}. We have not completed\nthis procedure. 
Nevertheless, can we say anything\nabout $\\varepsilon '\/\\varepsilon$ at this moment?\n\n\\subsection{Partially nonperturbative matching}\n\n\\label{sec:ansatz}\n\nAs a temporary solution, we have adopted a partially\nnonperturbative operator matching procedure, which makes use of\nbilinear renormalization coefficients $Z_P$ and $Z_S$.\nWe compute the latter~\\cite{PKbil}\nfollowing the non-perturbative method suggested by Martinelli \n{\\it et al.}~\\cite{martinelli}. Namely we study the inverse\nof the ensemble-averaged quark propagator at large off-shell momenta\nin a fixed (Landau) gauge. \nAn estimate of the\nrenormalization of four-fermion operators can be obtained as follows. \n\n\\begin{figure}[htb]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=8cm \\epsfbox{Oppren.eps}}\n\\end{center}\n\\caption{Example of one loop diagrams arising in \nrenormalization of four-fermion operators: in type (a) no propagator\ncrosses the axis, and type (b) includes the rest of the diagrams.}\n\\label{Opp}\n\\end{figure} \n\nConsider\nrenormalization of the pseudoscalar--pseudoscalar operator in\nEq.~(\\ref{eq:O6ops1}). \nAt one-loop level, the diagonal renormalization coefficient \n$C_{PP}$ (involving diagrams shown in Fig.~\\ref{Opp}) \nis almost equal to twice the pseudoscalar bilinear correction $C_P$. \nThis suggests that, at least at one-loop level,\nthe renormalization of $(PP)_{EU}$ comes mostly from diagrams\nin which no gluon propagator crosses the vertical axis of the diagram\n(for example, diagram $(a)$ in Fig.~\\ref{Opp}), and very little\nfrom the rest of the diagrams\n(such as diagram $(b)$ in Fig.~\\ref{Opp}). In other words, the\nrenormalization of $(PP)_{EU}$ would be identical to \nthe renormalization of product of two pseudoscalar bilinears,\nwere it not for the diagrams of type $(b)$, which give a subdominant\ncontribution. Mathematically, \n$$\n(PP)_{EU}^{\\mathrm{cont}} = (PP)_{EU}^{\\mathrm{latt}}\\; Z_{PP} + ... \\, ,\n$$\nwith\n\\begin{equation}\nZ_{PP} = Z_P^2 (1 + \\frac{g^2}{16\\pi^2} \\widetilde{C_{PP}} + O(g^4))\\, ,\n\\label{eq:Zpp}\n\\end{equation}\n\\begin{equation}\nZ_P = 1 + \\frac{g^2}{16\\pi^2} C_P + O(g^4)\\, ,\n\\label{eq:Zp}\n\\end{equation}\nand dots indicate mixing with other operators (non-diagonal part).\nThe factor $\\widetilde{C_{PP}} \\equiv C_{PP} - 2 C_P$ contains\ndiagrams of type $(b)$ in Fig.~\\ref{Opp} and is quite small.\n\nIn order to proceed, it may be reasonable to {\\bf assume} that the same \nholds at all orders in perturbation\ntheory, namely the diagrams of type $(c)$ in Fig.~\\ref{higher-order} give\nsubdominant contribution compared to type $(a)$ of the same\nFigure. This assumption should be verified\nseparately by performing non-perturbative renormalization procedure\nfor four-fermion operators. If this ansatz is true, we can substitute\nthe non-perturbative value of $Z_P$ into Eq.~(\\ref{eq:Zpp}) instead\nof using the perturbative expression from Eq.~(\\ref{eq:Zp}).\nThus a partially nonperturbative estimate of $(PP)_U^{\\mathrm cont}$\nis obtained. This procedure is quite similar to the tadpole\nimprovement idea: the bulk of diagonal renormalization is\ncalculated non-perturbatively, while the rest is reliably computed\nin perturbation theory. 
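As a numerical illustration of this construction (toy inputs only; the actual $Z_P$ comes from the nonperturbative bilinear study of Ref.~\cite{PKbil}):
\begin{verbatim}
import numpy as np

def Z_PP(Z_P, c_pp_tilde, g2):
    # Partially nonperturbative renormalization of (PP)_EU: the bulk is the
    # nonperturbative bilinear factor Z_P^2, while the small remainder
    # C~_PP = C_PP - 2 C_P (type-(b) diagrams) is kept at one loop.
    return Z_P**2 * (1.0 + g2/(16*np.pi**2) * c_pp_tilde)

print(Z_PP(Z_P=0.24, c_pp_tilde=-1.5, g2=1.8))   # ~0.057, dominated by Z_P^2

# For contrast, a purely one-loop estimate with a large negative C_PP gives a
# correction of order -100% and is not trustworthy:
print(1.0 + 1.8/(16*np.pi**2) * (-90.0))         # ~ -0.03
\end{verbatim}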
\nAnalogously we obtain diagonal renormalization\nof operators $(SS)_{IU}$ and $(PS)_{A2U}$ by using\n$Z_{SS} = Z_S^2(1+\\frac{g^2}{16\\pi^2} \\widetilde{C_{SS}} + O(g^4))$ and \n$Z_{PS} = Z_S Z_P(1+\\frac{g^2}{16\\pi^2} \\widetilde{C_{PS}} + O(g^4))$.\nWe note that $Z_P \\neq Z_S$, even though they are equal in perturbation\ntheory. We match operators at the scale $q^*=1\/a$ and use the\ncontinuum two-loop anomalous dimension to evolve to $\\mu =2$ GeV.\n\nUnfortunately, the above procedure does not solve completely the problem \nof operator renormalization, since it deals only with diagonal \nrenormalization of the zero-distance operators in\nEqs.~(\\ref{eq:O6ops1}---\\ref{eq:O6ops3}). Even though these operators\nare dominant in contributing to $\\varepsilon '\/\\varepsilon$, other\noperators (such as $(SS)_{EU}$ and $(PP)_{EF}$)\ncan be important due to mixing with the dominant ones.\nThe mixing coefficients for these operators are not known \neven in perturbation theory. For a reasonable estimate we use\nthe coefficients\nobtained for gauge non-invariant operator mixing~\\cite{PatelSharpe}.\n\nSecondly, since renormalization of operators $(PP)_{EU}$, $(SS)_{IU}$ \nand $(PS)_{A2U}$ is dramatic\\footnote{For example, at $m_q=0.01$ and \n$\\mu =2$ GeV for $Q_1$ ensemble we obtain $Z_{PP} = 0.055 \\pm 0.007$, \n$Z_{PS} = 0.088 \\pm 0.007$ and $Z_{SS} = 0.142 \\pm 0.010$.}, their \ninfluence on other operators \nthrough non-diagonal mixing is ambiguous at one-loop order, \neven if the mixing coefficients are known.\nThe ambiguity is due to higher\norder diagrams (for example, those shown in Fig.~\\ref{higher-order}). \nIn order to partially resum them\nwe use operators $(PP)_{EU}$, $(SS)_{IU}$ and $(PS)_{A2U}$ \nmultiplied by factors $Z_P^2$, $Z_S^2$ and $Z_P Z_S$, correspondingly,\nwhenever they appear in non-diagonal mixing with other operators\n\\footnote{\nA completely analogous scheme was used for mixing of $O_6$ with $O_2$ \nthrough penguins when evaluating $\\mbox{Re} A_0$.}. \nThis is equivalent to evaluating the diagrams of type ($a$) and ($b$)\nin Fig.~\\ref{higher-order} at all orders, but ignoring the rest\nof the diagrams (such as diagrams ($c$) and ($d$) in Fig~\\ref{higher-order})\nat all orders higher than first.\nTo estimate a possible error in this procedure\nwe compare with a simpler one, whereby bare operators\nare used in non-diagonal corrections (i.e. we apply strictly one-loop \nrenormalization).\nThe difference in $\\varepsilon '\/\\varepsilon$ between these two approaches\nis of the same order or even less than the error due to uncertainties in \ndetermination\nof $Z_P$ and $Z_S$ (see Tables~\\ref{tab:epsp1} and~\\ref{tab:epsp2}). \n\n\\begin{figure}[htbp]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=14cm \\epsfbox{higher-order.eps}}\n\\end{center}\n\\caption{Example of four kinds of diagrams with arbitrary number of loops\narising in renormalization \nof four-fermion operators: in (a) and (b) no propagator\ncrosses the box or the axis; (c) and (d) exemplify the\nrest of the diagrams. The rectangular drawn in dotted line in (b)\ncorresponds to operator structure $PP_{EU}$.}\n\\label{higher-order}\n\\end{figure} \n\n\\begin{figure}[htbp]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=8cm \\epsfbox{eps.q60.eps}}\n\\end{center}\n\\caption{A rough estimate of $\\varepsilon '\/\\varepsilon$ \nfor $Q_1$ ensemble using the partially-nonperturbative procedure\ndescribed in text. 
Three sets of points correspond to \nusing experimental $\\mbox{Re} A_0$ and $\\mbox{Re} A_2$ in \nEq.~(\\ref{eq:epsp}) (crosses),\nusing our $\\mbox{Re} A_0$ but experimental $\\omega$ (diamonds), or using \n$\\mbox{Re} A_0$ and $\\mbox{Re} A_2$ obtained from our calculations (squares).\nAll other details are the same as in Table~\\ref{tab:epsp1}.\nThe shown error is a combination of the statistical error, \na matching error coming from uncertainties in the determination of $Z_P$ \nand $Z_S$, and an uncertainty in non-diagonal mixing of\nsubdominant operators.}\n\\label{epsp}\n\\end{figure} \n\n\\clearpage\n\n\n\\section{Estimates of $\\varepsilon '\/\\varepsilon$}\n\n\\label{sec:epsp_res}\n\nWithin the procedure outlined in the previous section we have found that \n$\\langle O_6\\rangle$ has a different sign from the expected one.\nThis translates into a negative or very slightly positive value of \n$\\varepsilon '\/\\varepsilon$ (Tables~\\ref{tab:epsp1} and~\\ref{tab:epsp2}\nand Fig.~\\ref{epsp}). \n\nIf the assumptions about the subdominant diagrams made in the previous \nsection are valid, our results would contradict the present\nexperimental results, which favor a positive value of \n$\\varepsilon '\/\\varepsilon$. They would also change the existing\ntheoretical picture~\\cite{buras} due to the change of sign of \n$\\langle O_6\\rangle$.\n\nFinite volume and quenching effects were found small\ncompared to noise. \nThe main uncertainty in $\\varepsilon '\/\\varepsilon$ \ncomes from operator matching, diagonal and non-diagonal. \nFor diagonal matching the uncertainty comes from (1) errors in the\ndetermination of $Z_P$ and $Z_S$ non-perturbatively and from\n(2) unknown degree of validity of our ansatz in Sec.~\\ref{sec:ansatz}. \nFor non-diagonal matching, the error is due to (3) unknown\nnon-diagonal coefficients in mixing matrix and (4) ambiguity \nof accounting higher-order corrections. \nThe error (1), as well as the statistical error, is quoted in \nTables~\\ref{tab:epsp1} and~\\ref{tab:epsp2}. The size\nof the error (4) can be judged by the spread\nin $\\varepsilon '\/\\varepsilon$ between two different\napproaches to higher-order corrections (strictly one-loop and partial\nresummation), also presented in\nTables~\\ref{tab:epsp1} and~\\ref{tab:epsp2}. The error (3) is likely\nto be of the same order as the error (4).\nThe error (2) is uncontrolled at this point, since it \nis difficult to rigorously check our assumption made \nin Sec.~\\ref{sec:ansatz}. In Fig.~\\ref{epsp} we combine\nthe statistical error with errors (1) and (4) in quadrature.\n\nThe uncertainty due to operator matching is common to any method\nto compute the relevant matrix elements on the lattice (at least, with \nstaggered fermions). In addition, our method has an inherent\nuncertainty due to dropping the higher order chiral terms.\nLattice spacing dependence of $\\varepsilon '\/\\varepsilon$ is\nunclear at this point, but it may be significant.\n\nWe note that there are several ways to compute $\\varepsilon '\/\\varepsilon$.\nOne can use the experimental values of $\\mbox{Re} A_0$ and $\\mbox{Re} A_2$ in \nEq.~(\\ref{eq:epsp}), or one can use the values obtained on the lattice.\nOne can also adopt an intermediate strategy of using the experimental\namplitude ratio $\\omega$ and computed $\\mbox{Re} A_0$. When the higher-order\nchiral corrections are taken into account and the continuum limit is taken\n(so that $\\omega = 22$),\nthese three methods should converge. 
At this point any of them\ncan be used, and we compare them in Tables~\\ref{tab:epsp1} \nand~\\ref{tab:epsp2}.\n\nIn view of the issues raised above, $\\varepsilon '\/\\varepsilon$ is an \nextremely fragile quantity. The rough estimates in Tables~\\ref{tab:epsp1} \nand~\\ref{tab:epsp2} and Fig.~\\ref{epsp} should be used with extreme \ncaution. \n\n\\section{Conclusion}\n\n\\label{sec:conclusion}\n\nWe have presented in detail the setup of our calculation of\nhadronic matrix elements of all operators in the basis\ndefined in Eqs.~(\\ref{eq:ops1}--\\ref{eq:ops10}).\nWe have obtained statistically significant data for all operators.\nBased on these data we make theoretical estimates of the $\\mbox{Re} A_0$ and\n$\\mbox{Re} A_2$ amplitudes as well as $\\varepsilon '\/\\varepsilon$.\n\nSimulation results show that the enhancement of the $\\Delta I=1\/2$ \ntransition is roughly consistent with the experimental findings. \nHowever, the uncertainty due to higher order\nchiral terms is very large. If these terms are calculated\nin the future, a more definite prediction for physical amplitudes\ncan be obtained using our present data for matrix elements.\nSimulations should be repeated at a few more values of $\\beta$ \nin the future in order to take the continuum limit. \n\nCalculation of $\\varepsilon '\/\\varepsilon$ is further complicated \nby the failure\nof perturbation theory in operator matching. We give our\ncrude estimates, but in order to achieve real progress, \nthe full nonperturbative matching procedure should be performed.\n\nWe appreciate L. Venkataraman's help in developing \nCRAY-T3E software. \nWe thank the Ohio Supercomputing Center and National Energy Research \nScientific Computing Center (NERSC) for the CRAY-T3E time. We thank \nthe Columbia University group for access to their dynamical \nconfigurations. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}