diff --git a/.gitattributes b/.gitattributes index 92030287fde7e0d517219816a635f4abba81cc99..cbc89d899c8eb8ebb1b6799f9b41702175ddf5f4 100644 --- a/.gitattributes +++ b/.gitattributes @@ -227,3 +227,4 @@ data_all_eng_slimpj/shuffled/split/split_finalac/part-13.finalac filter=lfs diff data_all_eng_slimpj/shuffled/split/split_finalac/part-19.finalac filter=lfs diff=lfs merge=lfs -text data_all_eng_slimpj/shuffled/split/split_finalac/part-12.finalac filter=lfs diff=lfs merge=lfs -text data_all_eng_slimpj/shuffled/split/split_finalac/part-04.finalac filter=lfs diff=lfs merge=lfs -text +data_all_eng_slimpj/shuffled/split/split_finalac/part-08.finalac filter=lfs diff=lfs merge=lfs -text diff --git a/data_all_eng_slimpj/shuffled/split/split_finalac/part-08.finalac b/data_all_eng_slimpj/shuffled/split/split_finalac/part-08.finalac new file mode 100644 index 0000000000000000000000000000000000000000..928a01600349d8fe5d8535e5e40b709ca4f618c3 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split/split_finalac/part-08.finalac @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e731dae5423f927f47754fe9f6def10f32a223a4731135929965a52ab6138d7 +size 12576689640 diff --git a/data_all_eng_slimpj/shuffled/split2/finalzyrq b/data_all_eng_slimpj/shuffled/split2/finalzyrq new file mode 100644 index 0000000000000000000000000000000000000000..5c4199dfb4ec2ccb301fffdf3462bcc559ff4728 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzyrq @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nIt is not an uncommon perception among the practitioners of machine learning and of theoretical many-body physics that some ideas of physics, most notably those of equilibrium and nonequilibrium statistical physics, might have significance in the fundamental understanding of the machine learning dynamics. 
Such sentiment, and progress along this direction, has continued for some time and is still actively pursued, mostly by researchers in the machine learning community~\\cite{welling11,fox15,mandt16,chaudhari18,baldassi19,ganguli13,ganguli19,ganguli20,yaida18}. The belief in a statistical-physics foundation of machine learning will obviously be strengthened by more examples of ideas originating from statistical physics and then manifesting themselves in machine learning. Here we establish one such connection, relating a fundamental theorem in near-equilibrium statistical physics~\\cite{ao04,KAT05,ao06,kwon11a,kwon11} to the theory of learning dynamics~\\cite{welling11,fox15,mandt16,yaida18,ganguli13,ganguli19}, in particular where the learning process is {\\it linear} and described by a stochastic equation similar to what governs the Ornstein-Uhlenbeck processes~\\cite{risken96}. The theorem in question is the fluctuation-dissipation theorem (FDT). \n\nThe FDT in a strict sense refers to specific relations that hold between correlation functions and response functions of physical systems under equilibrium~\\cite{risken96}. Here we use the term in a more relaxed sense, referring to mathematical identities among the observable quantities under the stationary state condition. The difference between the equilibrium and the stationary state is revealed by the existence of an anti-symmetric matrix $\\v Q$~\\cite{ao04,KAT05,ao06,kwon11a,kwon11}, which will be defined shortly. The FDT is illustrated most simply in the Langevin dynamics of a single particle subject simultaneously to dissipative and stochastic forces \n\\begin{eqnarray} \\dot{x} = -\\gamma x + f(t) \\end{eqnarray}\nwhere, in the context of Newtonian motion, $x$ represents the velocity of a particle in one dimension, $-\\gamma x$ is the resistive force, and $f(t)$ is the random force coming from the environment. 
On integrating the first-order differential equation we obtain the formally exact solution $x(t) = e^{-\\gamma t} [ x(0) + \\int_0^t e^{\\gamma t'} f(t') dt' ]$ which, in the long-time limit ($t\\rightarrow \\infty$), yields the average \n\\begin{eqnarray} \n\\langle x^2 \\rangle = 2D e^{-2\\gamma t} \\int_0^t dt' e^{2\\gamma t'} = D\/\\gamma \\end{eqnarray}\nassuming the white-noise correlation\n$\\langle f(t) f(t') \\rangle = 2D \\delta (t - t' )$. The competing tendencies of the dissipation ($\\gamma$) and fluctuation ($D$) find balance through the identity. \n\nMulti-dimensional generalization of the Langevin dynamics finds expression in \n\\begin{eqnarray} \\dot{\\v x} = -{\\bf \\Gamma} {\\v x} + {\\v f} ( t) \\label{eq:1.1} \\end{eqnarray}\nwith $n$-dimensional variables $\\v x = (x_1 , \\cdots, x_n )$, the $n\\times n$ dissipation matrix $\\bf \\Gamma$, and the $n$-dimensional stochastic force vector $\\v f$ obeying the zero mean $\\langle \\v f \\rangle =0$ and the variance $\\langle \\v f ( t) \\v f^T (t' ) \\rangle = 2 \\v D \\delta (t-t')$, in terms of the $n\\times n$ diffusion matrix $\\v D$. From the exact solution $\\v x(t) = e^{-\\v \\Gamma t} [ \\v x (0) + \\int_{0}^{t} e^{\\v \\Gamma t'} \\v f (t') dt' ]$ we derive the long-time correlation average \n\\begin{eqnarray}\n\\bm \\Sigma(t) & = & \\langle \\v x(t) \\v x^T(t) \\rangle \\nonumber \\\\\n& = & 2 \\int_{0}^{t} dt' e^{\\v \\Gamma (t'-t)} \\v D e^{\\v \\Gamma^T(t'-t)}\n\\end{eqnarray}\nand the following identity for $\\bm \\Sigma = \\bm \\Sigma (t \\rightarrow \\infty)$: \n\\begin{eqnarray}\n\\v \\Gamma \\bm \\Sigma + \\bm \\Sigma \\v \\Gamma^T = 2\\v D . \\label{eq:2.13} \n\\end{eqnarray}\nThis identity relates the diffusion matrix $\\v D$ with the dissipation matrix $\\bf \\Gamma$ through the correlation matrix $\\bm \\Sigma$ in the stationary state, for the Ornstein-Uhlenbeck processes with constant $\\bf \\Gamma$ and $\\v D$~\\cite{ao04,KAT05}. 
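The identity (\ref{eq:2.13}) is straightforward to check numerically. The sketch below is our own illustration (not part of the original analysis); it assumes a symmetric positive-definite $\Gamma$, so that the stationary integral can be evaluated exactly in the eigenbasis of $\Gamma$, and then verifies the fluctuation-dissipation balance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dissipation matrix Gamma: symmetric positive definite (hence a stable drift),
# together with a positive-definite diffusion matrix D. All choices are ours.
n = 4
A = rng.standard_normal((n, n))
Gamma = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n))
D = B @ B.T + np.eye(n)

# Stationary covariance Sigma = 2 * int_0^inf exp(-Gamma t) D exp(-Gamma^T t) dt.
# In the eigenbasis Gamma = V diag(lam) V^T the integral is exact:
#   (V^T Sigma V)_ij = 2 (V^T D V)_ij / (lam_i + lam_j)
lam, V = np.linalg.eigh(Gamma)
D_tilde = V.T @ D @ V
Sigma = V @ (2.0 * D_tilde / (lam[:, None] + lam[None, :])) @ V.T

# Fluctuation-dissipation identity: Gamma Sigma + Sigma Gamma^T = 2 D
residual = Gamma @ Sigma + Sigma @ Gamma.T - 2.0 * D
print(np.max(np.abs(residual)))  # machine-precision level
```

Replacing the exact eigenbasis integral by an Euler--Maruyama simulation of Eq. (\ref{eq:1.1}) reproduces the same identity up to sampling error.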
Extensions and applications of the theorem both in physical systems and machine learning have since appeared~\\cite{kwon11,fox15,mandt16}. Thanks to the identity, one can write the matrix $\\bf \\Gamma \\bm \\Sigma$ as the sum of the symmetric ($\\v D$) and anti-symmetric ($\\v Q$) matrices:\n\\begin{eqnarray} \\bm \\Gamma \\bm \\Sigma = \\v D + \\v Q . \\label{eq:decomposition} \\end{eqnarray}\nIt was pointed out in Ref. \\onlinecite{kwon11a} that $\\v Q =0$ implies detailed balance; otherwise one should allow for the possibility $\\v Q \\neq 0$ in the decomposition, Eq. (\\ref{eq:decomposition}). \n\n\nIn Sec. \\ref{sec:FDT-for-W}, we derive an analogous mathematical identity for the stochastic linear learning dynamics. This is then verified, in Sec. \\ref{sec:experiments}, through numerical experiments on several well-known machine learning datasets. Implications of our work are discussed in Sec. \\ref{sec:discussion}. \n\n\n\\section{FDT in Learning Dynamics} \n\\label{sec:FDT-for-W}\n\nIn the learning dynamics one is confronted with a collection of input vectors $\\v x_\\alpha$ (e.g. pixels in a jpg file re-formatted as a one-dimensional vector) and output vectors $\\v y_\\alpha$ (e.g. classification of the picture as an image of a cat or a dog), where $1 \\le \\alpha \\le N$ runs over the entire dataset called the {\\it batch}. In the linear learning dynamics one is interested in finding the matrix $\\v W$ that minimizes the error (the second line drops a $\\v W$-independent term $\\frac{1}{2N}\\sum_\\alpha {\\bf y}_\\alpha^T {\\bf y}_\\alpha$)\n\\begin{eqnarray} E & = & \\frac{1}{2N} \\sum_{\\alpha=1}^N ({\\bf y}_\\alpha - \\v W \\v x_\\alpha )^T ({\\bf y}_\\alpha - \\v W \\v x_\\alpha ) \\nonumber \\\\\n& \\equiv & \\frac{1}{2} {\\rm Tr} [ {\\bf \\Sigma}_{xx} {\\bf W}^T {\\bf W} - {\\bf W}^T {\\bf \\Sigma}_{yx} - {\\bf \\Sigma}^T_{yx} {\\bf W} ] . 
\\label{eq:error-function}\n\\end{eqnarray}\nThe two correlation functions appearing in the second line are \n\\begin{eqnarray} {\\bf \\Sigma}_{xx} = \\frac{1}{N} \\sum_{\\alpha=1}^N {\\bf x}_\\alpha {\\bf x}_\\alpha^T , ~~ {\\bf \\Sigma}_{yx} = \\frac{1}{N} \\sum_{\\alpha=1}^N {\\bf y}_\\alpha {\\bf x}_\\alpha^T. \\label{eq:Sigma-xx-definition}\\end{eqnarray}\nThe gradient descent (GD) method of finding the optimal $\\v W$ results in the first-order differential equation for $\\v W$~\\cite{ganguli13,ganguli19}:\n\\begin{eqnarray} \\frac{d \\bf W }{dt} = - \\frac{\\delta E}{\\delta \\v W} = - {\\bf W} {\\bf \\Sigma}_{xx} + { \\bf \\Sigma}_{yx} . \\label{eq:1.10} \\end{eqnarray}\nThe full solution is given by ${\\bf W}(t) = {\\bf W}(0) e^{-{\\bf \\Sigma}_{xx} t} + \\v W_0 (1 - e^{-{\\bf \\Sigma}_{xx} t} )$ where $\\v W_0 = {\\bf \\Sigma}_{yx} {\\bf \\Sigma}_{xx}^{-1}$ offers the equilibrium solution. \n\nAn interesting connection to the Langevin dynamics and FDT arises when we treat $\\bm \\Sigma_{xx}$ and $\\bm \\Sigma_{yx}$ in the dynamics of Eq. (\\ref{eq:1.10}) as a mini-batch (not a full-batch) average. At each stage of $\\v W$-evolution one picks a different, randomly chosen mini-batch to compute the average $\\bm \\Sigma_{xx} (t) = N_m^{-1} \\sum_{\\alpha \\in B(t)} {\\bf x}_\\alpha {\\bf x}_\\alpha^T$ and ${\\bf \\Sigma}_{yx} (t) = N_m^{-1} \\sum_{\\alpha \\in B(t) } {\\bf y}_\\alpha {\\bf x}_\\alpha^T$, where $N_m$ is the mini-batch size and $B(t)$ is the particular mini-batch chosen at the time $t$. The $\\v W$-dynamics according to the stochastic gradient descent (SGD) scheme becomes \n\\begin{eqnarray} \\frac{d \\v W}{dt} = - \\v W \\bm \\Sigma_{xx}(t) + \\bm \\Sigma_{yx} (t) . \\label{eq:stochastic-W-dynamics} \\end{eqnarray}\nPhrased in the language of Langevin dynamics, both the dissipative ($\\bm \\Sigma_{xx} (t)$) and the stochastic ($\\bm \\Sigma_{yx} (t)$) forces are time-dependent. 
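The drift term in Eq. (\ref{eq:1.10}) can be obtained term by term from the trace form of the error function; the short derivation below is our own filling-in of this step, using $\partial_{\v W}{\rm Tr}[{\bf \Sigma}_{xx}{\bf W}^T{\bf W}] = 2{\bf W}{\bf \Sigma}_{xx}$ (valid since ${\bf \Sigma}_{xx}$ is symmetric) and $\partial_{\v W}{\rm Tr}[{\bf W}^T{\bf \Sigma}_{yx}] = \partial_{\v W}{\rm Tr}[{\bf \Sigma}_{yx}^T{\bf W}] = {\bf \Sigma}_{yx}$:

```latex
\begin{eqnarray}
\frac{\delta E}{\delta \v W} = \frac{1}{2}\left( 2 {\bf W} {\bf \Sigma}_{xx}
 - {\bf \Sigma}_{yx} - {\bf \Sigma}_{yx} \right)
 = {\bf W} {\bf \Sigma}_{xx} - {\bf \Sigma}_{yx} ,
\end{eqnarray}
```

which is minus the right-hand side of Eq. (\ref{eq:1.10}).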
We can re-write the variables in the equation explicitly as the sum of the stationary (time-independent) and the fluctuating (time-dependent) parts,\n\\begin{eqnarray} \\v W (t) & \\rightarrow & \\v W_0 + \\v W (t), \\nonumber \\\\\n\\bm \\Sigma_{xx} (t) & \\rightarrow & \\bm \\Sigma_{xx} + \\bm \\Sigma_{xx} (t), \\nonumber \\\\\n\\bm \\Sigma_{yx} (t) & \\rightarrow &\\bm \\Sigma_{yx} + \\bm \\Sigma_{yx} (t), \\label{eq:re-definition} \\end{eqnarray}\nand work with the equation\n\\begin{eqnarray}\n\\frac{d\\v W }{dt}\\! =\\! -\\v W ( \\bm \\Sigma_{xx} \\!+\\! \\bm \\Sigma_{xx} (t) ) \\!+\\! \\bm \\Sigma_{yx} (t) \\!-\\! \\v W_0 \\bm \\Sigma_{xx} (t) . \\nonumber \\\\ \\label{eq:modified-stochastic-DE} \\end{eqnarray}\nAlthough the exact solution to this equation can be found in the form of a Wiener integral (see Appendix \\ref{appendix:A}), we will here assume a simplified situation where $\\bm \\Sigma_{xx} (t) =0$ on the right-hand side of the equation. Relaxing the assumption will not change the overall conclusion as long as $\\bm \\Sigma_{xx} (t)$ is small --- see Appendix \\ref{appendix:A}. The stochastic learning dynamics is now reduced to an Ornstein-Uhlenbeck process~\\cite{risken96} and allows a simple solution\n\\begin{eqnarray} \\v W(t) = \\left[ \\v W (0) + \\int_0^t \\bm \\Sigma_{yx} (t') e^{\\bm \\Sigma_{xx} t' } dt' \\right] e^{-\\bm \\Sigma_{xx} t } . \\label{eq:approximate-W} \\end{eqnarray}\n\n\nWe can write down the long-time correlation matrix\n\\begin{eqnarray}\n\\bm \\Sigma_{WW} (t) & = & \\langle \\v W^T(t) \\v W(t) \\rangle \\nonumber \\\\\n& = & \\int_{0}^{t} dt' \\int_{0}^{t} dt'' e^{\\bm \\Sigma_{xx} (t'-t)} \\langle \\bm \\Sigma_{yx}^T(t')\\bm \\Sigma_{yx} (t'') \\rangle e^{\\bm \\Sigma_{xx} (t''-t)} \\nonumber \\\\\n& = & \\int_{0}^{t} dt' e^{\\bm \\Sigma_{xx} (t'-t)} 2\\v D e^{\\bm \\Sigma_{xx} (t'-t)}\n\\end{eqnarray}\nassuming $ \\langle \\bm \\Sigma_{yx}^T(t')\\bm \\Sigma_{yx} (t'') \\rangle = 2\\v D\\delta(t'-t'')$. 
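The $t \to \infty$ limit can equivalently be taken by differentiating this integral with respect to $t$ (an intermediate step we spell out for clarity); the boundary term at $t'=t$ produces $2\v D$ while the $t$-dependence of the exponentials produces the dissipative terms:

```latex
\begin{eqnarray}
\frac{d}{dt} \bm \Sigma_{WW} (t) = 2 \v D
 - \bm \Sigma_{xx} \bm \Sigma_{WW} (t)
 - \bm \Sigma_{WW} (t) \bm \Sigma_{xx} .
\end{eqnarray}
```

Setting the left-hand side to zero in the stationary state fixes the long-time correlator as the balance between fluctuation ($\v D$) and dissipation ($\bm \Sigma_{xx}$).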
From this follows the identity \n\\begin{eqnarray}\n\\bm \\Sigma_{xx} \\bm \\Sigma_{WW} + \\bm \\Sigma_{WW} \\bm \\Sigma_{xx} = 2\\v D \\label{eq:FDT-for-W}\n\\end{eqnarray}\nfor $\\bm \\Sigma_{WW} \\equiv \\bm \\Sigma_{WW} (t \\rightarrow \\infty )$. This is the FDT-type identity in the stochastic linear learning dynamics and our central result (a more refined form of FDT exists --- see Appendix \\ref{appendix:B}). In the expression (\\ref{eq:FDT-for-W}), $\\bm \\Sigma_{xx}$ is the full-batch correlation matrix given in Eq. (\\ref{eq:Sigma-xx-definition}). Restoring the original definition, we have \n\\begin{eqnarray} \n\\langle ( \\bm \\Sigma_{yx} (t) \\!-\\! \\bm \\Sigma_{yx} )^T ( \\bm \\Sigma_{yx} (t') \\!-\\! \\bm \\Sigma_{yx} ) \\rangle & = & 2\\v D \\delta(t -t') \\nonumber \\\\\n\\langle [ \\v W (t) - \\v W_0 ]^T [\\v W (t) - \\v W_0 ] \\rangle & = & \\bm \\Sigma_{WW} . \n\\label{eq:definitions} \n\\end{eqnarray}\n\nThe full-batch input-input correlation matrix $\\bm \\Sigma_{xx}$ provides a sort of dissipative force while the (fluctuating part of the) input-output correlation function plays the role of the stochastic force in the learning dynamics, according to Eq. (\\ref{eq:modified-stochastic-DE}). The correlator of the learning matrix, i.e. $\\bm \\Sigma_{WW}$, is obtained as the balance between the two tendencies. \n\n\n\\section{Numerical Experiments}\n\\label{sec:experiments} \n\nFor sufficiently small time $t=h$ we can solve the stochastic equation (\\ref{eq:stochastic-W-dynamics}) approximately\n\\begin{eqnarray}\n\\v W (h) &\\approx & \\left[ \\v W (0) + \\int_{0}^{h} \\bm \\Sigma_{yx} (t')e^{\\bm \\Sigma_{xx} (0) t'} dt' \\right]e^{-\\bm \\Sigma_{xx} (0) h} \\nonumber \\\\\n& \\approx & \\v W (0) [1 - \\bm \\Sigma_{xx} (0) h ] + \\int_{0}^{h} \\bm \\Sigma_{yx} (t') dt' . 
\\label{eq:W-h} \n\\end{eqnarray}\nWe can further divide up the interval $t \\in [0, h]$ into $M$ equal segments, each of width $\\varepsilon \\equiv h\/M$, and use the discrete formula\n$\\bm \\Sigma_{xx} (0) \\rightarrow M^{-1} \\sum_{i = 1}^M \\bm \\Sigma_{xx} (i \\cdot \\varepsilon)$ and \n$ \\int_{0}^{h} \\bm \\Sigma_{yx} (t') dt' \\rightarrow \\varepsilon \\sum_{i=1}^M \\bm \\Sigma_{yx} ( i \\cdot \\varepsilon )$. In the end, Eq. (\\ref{eq:W-h}) turns into a recursive formula\n\\begin{eqnarray} \\v W^{(n+1)} = \\v W^{(n)} [ 1- \\varepsilon \\bm \\Sigma_{xx}^{(n)} ] + \\varepsilon \\bm \\Sigma_{yx}^{(n)}\n\\label{eq:W-n} \\end{eqnarray}\nwhere $\\bm \\Sigma_{xx}^{(n)}$ and $\\bm \\Sigma_{yx}^{(n)}$ are averages over the mini-batch of size $M N_m$. At sufficiently large $n$, $\\v W^{(n)}$ executes a steady-state fluctuation around the minimum $\\v W_0$. \n\n\\begin{figure*}[ht]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{fig1.png}\n\\caption{Fluctuation analysis for (a) MNIST, (b) CIFAR-10, and (c) EMNIST datasets. (top) Plots of $\\v D$ obtained from each dataset. (middle) Plots of $\\bm \\Sigma_{xx} \\bm \\Sigma_{WW} + \\bm \\Sigma_{WW} \\bm \\Sigma_{xx}$. (bottom) Normalized Fourier components for $\\v D$ (red) and $\\bm \\Sigma_{xx} \\bm \\Sigma_{WW} + \\bm \\Sigma_{WW} \\bm \\Sigma_{xx}$ (blue) plotted along $\\v k = (k_x , 0)$ with $k_0 = 2\\pi\/a$.} \n\\label{fig:MNIST-CIFAR}\n\\end{figure*}\n\nTo test the validity of the FDT in stochastic linear learning derived in Eq. (\\ref{eq:FDT-for-W}), we employ three representative datasets: MNIST, CIFAR-10 and EMNIST Letters (abbreviated as EMNIST from here on)~\\cite{cohen17}. MNIST and CIFAR-10 consist of ten different classes, i.e., output vectors $\\v y^\\alpha$, represented by one-hot vectors $(1,0, \\cdots, 0)$ through $(0, \\cdots, 0, 1)$. The twenty-six letters of the alphabet are represented by as many output vectors in the case of EMNIST. The pixel sizes are $28\\times28$ for both MNIST and EMNIST, and $32\\times32$ for CIFAR-10. 
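On synthetic data, the update rule of Eq. (\ref{eq:W-n}) can be sketched as follows. This is a toy stand-in for the actual experiments: the Gaussian data model, dataset sizes, and learning rate are our own choices, and the public repository referenced in the acknowledgments contains the real code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: inputs x in R^p, targets y = W_true x + noise in R^q.
N, p, q = 20000, 8, 3
X = rng.standard_normal((N, p))
W_true = rng.standard_normal((q, p))
Y = X @ W_true.T + 0.1 * rng.standard_normal((N, q))

# Full-batch correlators and the error-minimizing W_0 = Sigma_yx Sigma_xx^{-1}.
Sigma_xx = X.T @ X / N
Sigma_yx = Y.T @ X / N
W0 = Sigma_yx @ np.linalg.inv(Sigma_xx)

# Stochastic recursion W^{(n+1)} = W^{(n)}[1 - eps Sigma_xx^{(n)}] + eps Sigma_yx^{(n)},
# with the correlators averaged over a freshly drawn mini-batch at every step.
eps, Nm, steps = 0.05, 64, 4000
W = np.zeros((q, p))
for _ in range(steps):
    idx = rng.integers(0, N, size=Nm)
    xb, yb = X[idx], Y[idx]
    W = W @ (np.eye(p) - eps * (xb.T @ xb / Nm)) + eps * (yb.T @ xb / Nm)

# Convergence diagnostic: matrix inner product of W and W0 divided by their norms.
cos_theta = np.sum(W * W0) / (np.linalg.norm(W) * np.linalg.norm(W0))
print(cos_theta)  # approaches unity in the steady state
```

Once $\cos\theta$ saturates, the steady-state snapshots of $W$ and of the mini-batch correlators can be accumulated to estimate $\bm\Sigma_{WW}$ and $\v D$, as described below.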
Updating $\\v W (t)$ according to the SGD algorithm outlined in Eqs. \n(\\ref{eq:W-h}) and (\\ref{eq:W-n}), we found good convergence to the error-minimizing value $\\v W_0 = \\bm \\Sigma_{xy} \\bm \\Sigma_{xx}^{-1}$ by measuring the inner product of the $\\v W(t)$ and $\\v W_0$ divided by their norms approaching unity: $\\cos \\theta(t) = \\v W (t) \\cdot \\v W_0 \/ \\| \\v W (t) \\| \\| \\v W_0 \\|$. The inner product of two matrices is defined by taking a product of the matrix elements sharing the same $(ij)$ index and making a sum over all $(ij)$'s.\n\nOnce the steady state is reached, e.g. $\\cos \\theta \\gtrsim 0.999$, we begin analyzing the small fluctuations by calculating the two correlators in Eq. (\\ref{eq:definitions}) by taking averages $\\langle \\cdots \\rangle$ over several tens of thousands of $\\v W^{(n)}$'s and $\\bm \\Sigma_{yx}^{(n)}$'s. To deduce the diffusion matrix $\\v D$ in Eq. (\\ref{eq:definitions}) we take the equal-time correlator $t=t'$ and compute the average of $\\bm \\Sigma_{yx}^{(n)}$. This gives the $\\v D$ matrix up to an overall constant. In the end, a good proportionality between $\\bm \\Sigma_{xx} \\bm \\Sigma_{WW} + \\bm \\Sigma_{WW} \\bm \\Sigma_{xx}$ and $\\v D$ is found as shown in Fig. \\ref{fig:MNIST-CIFAR} for all the datasets tested. It turns out the correlators exhibit a highly periodic structure with period $a$ coming from the $a\\times a$ pixel size of each dataset. (The original $a=28$ dimension of the MNIST and EMNIST was chopped at the boundary to $a=24$. Otherwise it was difficult to get the full-batch inverse $\\bm \\Sigma_{xx}^{-1}$.) \n\nDue to the highly periodic structure of the real-space images of $\\bm \\Sigma_{xx} \\bm \\Sigma_{WW} + \\bm \\Sigma_{WW} \\bm \\Sigma_{xx}$ and $\\v D$, only a handful of Fourier peaks at $\\v k= (k_x , k_y)$ given by multiples of $2\\pi\/a$ were significant. 
Figure \\ref{fig:MNIST-CIFAR} shows the Fourier components along $\\v k = (k_x , 0)$ normalized by the value at $\\v k = (0,0)$. \nThe near-perfect match in the Fourier analysis of both $\\bm \\Sigma_{xx} \\bm \\Sigma_{WW} + \\bm \\Sigma_{WW} \\bm \\Sigma_{xx}$ and $\\v D$ is not {\\it a priori} obvious, and must be attributed to the FDT theorem at work in the stochastic linear learning dynamics. \n\n\\section{Discussion}\n\\label{sec:discussion}\n\nOur work addresses a FDT type relation in the stochastic linear learning dynamics. The relation derived in Eq. (\\ref{eq:FDT-for-W}) is found to hold quite well for a number of machine learning datasets. The analogy to the Langevin dynamics naturally gives rise to an interpretation of the input covariance matrix $\\bm \\Sigma_{xx}$ as the effective friction, and the input-output variance $\\bm \\Sigma_{yx}$ as the effective stochastic force in the learning dynamics. \n\nWe have made several attempts to go beyond the simple stochastic linear learning scheme. For one, we tried placing a CNN layer before the neural network layer $\\v W$. As shown in Appendix \\ref{appendix:C}, this formulation naturally leads to FDT in terms of the CNN-filtered input data sets $\\v X^\\alpha = \\v C \\otimes \\v x^\\alpha$, where $\\otimes$ represents the CNN operation. The FDT holds with respect to the renormalized datasets $\\v X^\\alpha$. In another attempt, we tried introducing non-linearity explicitly by using an alternative error function $E = (2N)^{-1} \\sum_{\\alpha =1}^N \\sum_{i=1}^n (y^\\alpha_i - z^\\alpha_i )^2$ with the sigmoid function $z^\\alpha_i = [ e^{-\\sum_{j=1}^n W_{ij} x_j^\\alpha} + 1 ]^{-1}$ parameterized by the learning matrix $\\v W$. Such formulation leads to the dynamics $d \\v W \/dt$ that is, unfortunately, highly non-linear and defies further analytical treatment. \n\nThe FDT type relation in the stochastic learning was noticed some years earlier by Yaida~\\cite{yaida18}. 
His derivation of the so-called FDT relation avoids any use of an explicit error function and relies solely on the stationary property of observables after the learning process has saturated. It is a powerful formulation in the sense that the relations apply to an arbitrary learning architecture with non-linearities. On the other hand, by avoiding the stochastic differential equation formulation, the connection that his relations have with the FDT in statistical physics becomes somewhat vague. More seriously, when our error function is used to work out his formulas, the outcome does not match our FDT formula derived in Eq. (\\ref{eq:FDT-for-W}). This leads us to suspect that there may be multiple FDT-type theorems governing the stationary states of learning, with our formula and his addressing different facets. \n\nWe have investigated whether, writing $\\bm \\Sigma_{xx} \\bm \\Sigma_{WW}$ in Eq. (\\ref{eq:FDT-for-W}) as the sum $\\bm \\Sigma_{xx} \\bm \\Sigma_{WW} = \\v D + \\v Q$, there will be a significant contribution of the anti-symmetric matrix $\\v Q$. A crude measure of the significance of $\\v Q$ relative to $\\v D$ is the maximum value of the matrix elements in $\\v Q$ divided by that of $\\v D$. The results are 0.12, 0.096, 0.045 for MNIST, CIFAR-10, and EMNIST, respectively, suggesting that the anti-symmetric components are probably very small and insignificant. \n\n\\acknowledgments \nThe Python code used in the numerical experiment can be found at https:\/\/github.com\/lemonseed117\/FDT-Stochastic.git. J. H. H. acknowledges fruitful discussion with and input on the manuscript from Ping Ao, J. H. Jo, S. B. Lim, J. D. Noh, Vinit Singh, and Hayong Yun.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:Intro}\n\nIn the study of the geometry of Hitchin's fibration a recurring problem has been to understand how much of this geometry is determined by the smooth part of the fibration. 
Ng\\^ o's support theorem provides a tool to formulate and sometimes to prove a precise version of this question for general fibrations equipped with an action of a family of polarized abelian group schemes (see \\cite{ngoaf}). In particular, for variants of the fibration parameterizing Higgs bundles with poles, Chaudouard and Laumon in \\cite{ch-la} proved that the only perverse cohomology sheaves appearing in the decomposition of the direct image of the constant sheaf are the intermediate extensions of the local systems on the smooth locus, that is the perverse cohomology sheaves are supported over the whole base. In particular, all of the cohomology is determined, in principle, by the monodromy of the cohomology of the smooth fibers. As is explained in the last section of \\cite{ch-la},\nunfortunately, this method does not apply to the original symplectic (no poles) version of the Hitchin fibration.\nMotivated by the $P=W$ conjecture \\cite{dCHM}, one would like to understand the perverse filtration of the fibration better and for this it is important to determine whether this result extends to this case as well. Surprisingly, we do find new supports for every rank $n\\geq 2$, and new cohomological contributions for any rank $n>2$.\n\n\nBefore we explain the general strategy of our approach, let us concisely state our main result. In order to do this, let us briefly introduce the standard notation that we use, which is recalled in more detail in \\cref{sec:NotationAndBackground}. We fix a smooth projective curve $C$ and denote by $h_{n}\\colon \\mathcal{M}_{n}^d \\to \\ensuremath{\\mathcal{A}_n}$ Hitchin's fibration for $\\textrm{GL}_n$ and $d$ an integer coprime to $n$, i.e., $\\mathcal{M}_{n}^d$ is the moduli space of semistable Higgs bundles of rank $n$ and degree $d$ on $C$. 
The base $\\mathcal{A}_n$ is an affine space parameterizing spectral curves $C_a\\in T^*C$ that are of degree $n$ over $C$.\nFor any partition $\\un{n}=(n_i)_{i=1,\\dots,k}$ of $n$ there is a closed subvariety $S_{\\un{n}}\\subset \\mathcal{A}_n$, which is the closure of the subset $S_{\\un{n}}^\\times\\subset S_{\\ve{n}}$ of reducible nodal curves having smooth irreducible components of degree $n_i$ over $C$ (see \\cref{rem:bertini}). Also we denote by $\\ensuremath{\\mathcal{A}_n}^{\\red}\\subset \\ensuremath{\\mathcal{A}_n}$ the open subset parameterizing reduced spectral curves. Using these notions our main results can be summarized as follows (note that according to our convention in \\S\\ref{sec:NotationAndBackground}, the local systems given by the $r$-th cohomology of \nthe smooth fibers of $h_{n}$ contribute to $^p\\!\\!{\\mathscr H}^{r}({\\mathbbm R}{h_{n}}_{*}\\mathbbm{Q})$):\n\n\\begin{theorem*}[\\Cref{prop:SuppOnlySn} and \\Cref{thm:main1}]\nLet $h_{n}:\\mathcal{M}_{n}^d \\to \\ensuremath{\\mathcal{A}_n}$ be the Hitchin map.\n\tIf $S\\subset \\ensuremath{\\mathcal{A}_n}$ is a support of $^p\\!\\!{\\mathscr H}^{r}({\\mathbbm R}{h_{n}}_{*}\\mathbbm{Q})$ for any $r$ with $S\\cap \\ensuremath{\\mathcal{A}_n}^{\\red}\\neq \\varnothing$ then $S=S_{\\ve{n}}$ for some partition $\\un{n}$.\n\nMoreover, for every partition $\\ve{n}$ of $n$, the stratum $S_{\\ve{n}}$ is a support for all of the sheaves\n\t\\begin{equation*}\\label{eq:range}\n\t^p\\!\\!{\\mathscr H}^{r}({\\mathbbm R} {h_{n}}_{*}\\mathbbm{Q}) \n\t\\hbox{ with } \\delta^{\\aff}(\\ve{n}) \\leq r \\leq {2\\dim \\ensuremath{\\mathcal{A}_n} -\\delta^{\\aff}(\\ve{n})} \n\t\\end{equation*}\n\twhere $\\delta^{\\aff}(\\ve{n})=\\sum_{i l &\\hbox{ for } l> \\codim Y,\n\\end{align*}\ni.e., this is the usual t-structure, but shifted by $\\dim X$. This will be useful for us, as we will study restrictions of perverse sheaves to closed subvarieties and we can then avoid to shift the constant sheaf. 
\n\nA semisimple perverse sheaf on a complex variety $X$ is a complex of the form $P=\\bigoplus_\\alpha \\IC(Y_\\alpha, L_\\alpha)$,\nwhere $Y_\\alpha \\subseteq X$ are irreducible closed subvarieties and $L_\\alpha$ are semisimple local systems defined \non dense open subsets of the $Y_\\alpha$'s. The generic points of the $Y_\\alpha$'s\nare called the {\\em supports } of $P$. \n\nIf \n$h\\colon M \\to X$ is a proper map between smooth varieties, the decomposition theorem of \\cite{bbd} says that \n$${\\mathbbm R} h_*\\mathbbm{Q} \\simeq \\bigoplus_{k\\geq 0}\t \\,\\,\n^p\\!\\!{\\mathscr H}^k({\\mathbbm R} h_*\\mathbbm{Q})[-k]$$\nwhere all $^p\\!\\!{\\mathscr H}^k({\\mathbbm R} h_*\\mathbbm{Q})$ are semisimple perverse sheaves. The union of supports of the perverse sheaves $^p\\!\\!{\\mathscr H}^k({\\mathbbm R} h_*\\mathbbm{Q})$\nis the set of {\\em supports} of the map $h$ (see \\cite[\\S 7]{NgoLEmme}).\n\nIf $Y$ is a support of ${\\mathbbm R} h_*\\mathbbm{Q} \\simeq \\bigoplus_k \\,\\,\n^p\\!\\!{\\mathscr H}^k({\\mathbbm R} h_*\\mathbbm{Q})[-k]$\nwe denote by \n\\begin{align*}\nn^+_Y({\\mathbbm R} h_*\\mathbbm{Q})&:= \\max\\{ k | Y \\text{ is a support of } ^p\\!\\!{\\mathscr H}^k({\\mathbbm R} h_*\\mathbbm{Q}) \\}\\\\\nn^-_Y({\\mathbbm R} h_*\\mathbbm{Q})&:= \\min\\{ k | Y \\text{ is a support of } ^p\\!\\!{\\mathscr H}^k({\\mathbbm R} h_*\\mathbbm{Q}) \\}\n\\end{align*}\n\nWe say that a semisimple complex $K=\\bigoplus_k \\,\\,\n^p\\!\\!{\\mathscr H}^k({\\mathbbm R} h_*\\mathbbm{Q})[-k]$ has {\\em no proper supports} if $X$ is the only support of $K$.\n\\subsection{The Hitchin fibration}\\label{subsec:HitchinFibration}\nWe fix a nonsingular, connected, projective curve $C$ of genus $g\\geq 2$, an integer $n \\in \\zed_{\\geq 1}$, and \nan integer $d \\in \\zed$ such that ${\\rm gcd}(n,d)=1$. We denote by $K_C$ the canonical bundle of $C$. 
\n\nWe denote by $\\Higgs_n^d$ the moduli stack of Higgs bundle of rank $n$ and degree $d$ on $C$, i.e., it parametrizes pairs $(E,\\phi) $ where $E$ is a vector bundle of rank $n$ and degree $d$ on $C$ and $\\phi\\in H^0(C,\\End(E)\\otimes K_C)$. \n\nWe denote by $\\mathcal{M}_{n}^{d}$ the coarse moduli space of stable Higgs bundles of rank $n$ and degree $d$, where as usual, stability\nis defined by imposing the inequality $\\deg (F)\/{\\rm rank}(F) < \\deg (E)\/{\\rm rank} (E)$ for every $\\phi$-invariant\nproper sub-bundle $F \\subseteq E$. \n\nBecause of our assumption that $n$ and $d$ are coprime $\\mathcal{M}_{n}^{d}$ is an irreducible, nonsingular, quasi-projective variety of dimension \n\\begin{equation}\\label{eq:dimMn}\n\\dim (\\mathcal{M}_{n}^{d})= n^2(2g-2)+2=: 2d_{n}.\n\\end{equation}\nThe cotangent space $T^*\\mathcal{N}_{n}^{d}$ of the moduli space $\\mathcal{N}_{n}^{d}$ of stable rank $n$ and degree $d$ vector bundles on $C$ is a dense open subvariety of $\\mathcal{M}_{n}^{d}$.\nThe Hitchin base is defined to be the vector space\n\\begin{equation}\\label{anz}\n\\ensuremath{\\mathcal{A}_n}:= \\prod_{i=1}^n H^0(C, K_C^{\\otimes i}),\n\\end{equation}\nwhich has dimension\n\\begin{equation}\\label{dan}\n \\dim (\\ensuremath{\\mathcal{A}_n})= \\frac{1}{2} \\dim (\\mathcal{M}_n^d) = n^2(g-1)+1= d_{n}.\n\\end{equation}\nThe Hitchin morphisms\n\\begin{equation}\\label{himo}\n\\un{h}_n^d\\colon \\Higgs_n^d \\to \\ensuremath{\\mathcal{A}_n} \\text{ and }\nh_{n}^d: \\mathcal{M}_n^d \\to \\ensuremath{\\mathcal{A}_n}\n\\end{equation}\nassigns to any Higgs bundle $(E,\\phi)$, the coefficients of the characteristic polynomial of $\\phi$.\nThe morphism $h_{n}^d$ is proper, flat of relative dimension $d_{n}=n^2(g-1)+1$, it has connected fibers, and it is often called the Hitchin fibration.\n\nSince the degree $d$ doesn't play any role in what follows, as long as it is coprime to the rank $n$, we will not indicate it from now on, and simply write $\\mathcal{M}_{n}$ 
for $\\mathcal{M}_n^d$\nand $h_{n}$ for $h_{n}^d$.\n\n\\subsection{Spectral curves and the BNR-correspondence}\\label{spcv}\n\nAs the key to the geometry of the fibers of the Hitchin fibration $h_n$ is their description as compactified jacobians of spectral curves through the Beauville--Narasimhan--Ramanan--correspondence we also recall this briefly.\n\nAny $a\\in \\ensuremath{\\mathcal{A}_n}$ defines a curve $C_a$, called spectral curve, in the total space of the cotangent bundle $T^*C={\\rm Tot}_C(K_C)$ by viewing $a$ as a monic polynomial of degree $n$ with coefficient of the degree $n-i$ term in $H^0(C,K_C^{\\otimes i})$. This defines a flat family $C_{\\mathcal{A}}\\to \\mathcal{A}$ of projective curves. \n\n\nThe natural projection $\\pi: C_a \\to C$, exhibits the spectral curve as a degree $n$ cover of $C$, but $C_a$ can be singular, non-reduced and reducible.\n\nWe denote by $\\ensuremath{\\mathcal{A}_n}^{\\mathrm{red}} \\subset \\ensuremath{\\mathcal{A}_n}$ the subset corresponding to reduced spectral curves, by $\\ensuremath{\\mathcal{A}_n}^{\\mathrm{int}} \\subset\\ensuremath{\\mathcal{A}_n}^{\\mathrm{red}} $ the subset corresponding to integral spectral curves and by $\\ensuremath{\\mathcal{A}_n}^{{\\times}}\\subset \\ensuremath{\\mathcal{A}_n}$ the open subset corresponding to nodal spectral curves. For us reducible spectral curves will be of particular interest.\n\nWhen viewed as an effective divisor on the surface $T^*C$, any spectral curve $C_a$\ncan be written uniquely as \n\\begin{equation}\\label{eq:spectralcurvedecomp}\nC_a = \\sum_{k=1}^s m_k{C}_{a_k}, \n\\end{equation}\nwhere the $a_k$ are the distinct irreducible factors of the characteristic polynomial and $m_k$ their multiplicities.\nIn particular, the $C_{a_k}$ are integral and pairwise distinct curves which are\nspectral curves of some degree $n_k$. 
We then have \n\\begin{equation}\\label{eq:DecompositionOfN}\nn=\\sum_{k=1}^r m_k n_k.\n\\end{equation}\nFor $\\un{n}=(n_k)_k \\in {\\mathbbm Z}_{>0}^r$ we write\n$$\\mathcal{A}_{\\un{n}} := \\prod_{k=1}^r \\mathcal{A}_{n_k}.$$\nThen, for $\\un{n},\\un{m} \\in {\\mathbbm Z}_{>0}^r$ satisfying (\\ref{eq:DecompositionOfN}), multiplication of polynomials $(p_k)_k \\mapsto \\prod p_k^{m_k}$ defines a finite morphism \n$$ \\mult_{\\un{m},\\un{n}}\\colon \\mathcal{A}_{\\un{n}} \\to \\mathcal{A}_n$$\nand we denote by $S_{\\un{m},\\un{n}}$ its image. For $\\un{m}=\\un{1}=(1,\\dots,1)$, the generic point of the image corresponds to a reduced spectral curve and we abbreviate $$S_{\\un{n}}:= S_{\\un{1},\\un{n}} \\text{ and } \\mult_{\\un{n}}:=\\mult_{\\un{1},\\un{n}}.$$\nThe generic spectral curves defined by points in these subsets are rather simple.\n\n\\begin{lemma}\\label{rem:bertini}\n\\begin{enumerate}\n\t\\item[]\n\t\\item For every $a\\in \\mathcal{A}$ the spectral curve $C_a$ is connected.\n\t\\item For every $\\un{n},\\un{m}$ satisfying $\\sum m_kn_k=n$ there is a dense open subset $$S_{\\un{m},\\un{n}}^{\\times} \\subset S_{\\un{m},\\un{n}}$$ such that for $a\\in S^{\\times}_{\\un{m},\\un{n}}$ the reduced curve $C_a^{\\red} \\subset C_a$ is nodal with nonsingular irreducible components. \n\t\n\tIn particular, since every irreducible component of $C_a$ has genus at least $g\\geq 2$, the curves $C_a^{\\red}$ are stable curves in the sense of Deligne-Mumford \\cite{DM} for all $a\\in S^{\\times}_{\\un{m},\\un{n}}$.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n\tThis is a consequence of Bertini's theorems, because the spectral curves $C_{a_k} \\subset T^*C \\subset {\\mathbbm P}:={\\mathbbm P}_C(\\mathcal{O}_C \\oplus K_C)$ are defined by general sections of the relative $\\mathcal{O}_{{\\mathbbm P}}(n_k)$, which is \n\tbig and generated by global sections in our case. 
This implies that the subset of $\\mathcal{A}_{n_k}$ defining smooth curves is dense and open and $S^{\\times}_{\\un{m},\\un{n}}$ is the image of the dense open subset of $\\mathcal{A}_{\\un{n}}$ where the curves intersect transversally.\n\\end{proof}\n\nGiven a Higgs bundle $(E,\\phi)$ with $h(E,\\phi)=a$, we can consider $E$ as a coherent sheaf on $C_a$, because sheaves on $T^*C$ can be viewed as $\\mathcal{O}_C$-modules equipped with an action of the $\\mathcal{O}_C$-algebra $\\oplus_{i\\geq 0} K_C^{\\otimes -i}$. The Cayley--Hamilton theorem then says that the module $\\mathcal{F}_{E,\\phi}$ defined by $(E,\\phi)$ is supported on $C_a$ and is a torsion free sheaf of rank $1$ on $C_a$ (a notion that in the case of non-reduced curves was introduced by Schaub \\cite{Schaub}). Conversely, given a torsion free sheaf $\\mathcal{F}$ of rank $1$ on $C_a$, the sheaf $\\mathcal{E}=\\pi_{a,*}\\mathcal{F}$ is a vector bundle of rank $n$ on $C$ that comes equipped with a Higgs field $\\phi$.\n\nThat this procedure in fact induces an equivalence was proved in increasing generality by Hitchin, Beauville--Narasimhan--Ramanan, and Schaub. To state this result let us denote by $\\Coh_{1,C_\\mathcal{A}}^{tf} \\to \\mathcal{A}$ the stack of torsion free sheaves of rank $1$ on spectral curves.\n\\begin{theorem}[\\cite{Hitchin,BNR,Schaub}]\n\tThe functor $(E,\\phi) \\mapsto \\mathcal{F}_{E,\\phi}$ induces an equivalence $\\Higgs_n\\cong \\Coh_{1,C_\\mathcal{A}}^{tf}$. Under this equivalence the stack $\\Higgs_n^d$ is identified with the substack of torsion free sheaves of rank $1$ and Euler-characteristic $\\chi=d+n(1-g)$.\n\\end{theorem}\nIn \\cite{Schaub} (see \\cite[Remarque 4.2]{ch-la}) it was explained how stability of Higgs bundles translates into a stability condition for sheaves on spectral curves.\n\nLet $a\\in S_{\\un{n}}$ be a point that defines a reducible, reduced spectral curve $C_a$ with irreducible components $C_{a_1},\\dots,C_{a_k}$. 
\nIn this case a torsion-free rank $1$ sheaf $\\mathcal{F}$ on $C_a$ defines a stable Higgs bundle of degree $d$ if and only if for all proper subcurves $C_I=\\cup_{i\\in I} C_{a_i} \\subsetneq C_a$ we have\n$$ \\chi(\\mathcal{F}_{C_I}) \\geq \\sum_{i \\in I} (n_i\\cdot (\\frac{d}{n}+1-g)),$$\nwhere $\\chi$ is the Euler characteristic and $\\mathcal{F}_{C_I}$ is the maximal torsion-free quotient of $\\mathcal{F}|_{C_I}$. \n\n\\begin{remark}\\label{rem:BNRstable}\n\tThis notion of stability coincides with a stability notion for compactified jacobians (see e.g., \\cite{MRV1}) with respect to the polarization $ \\un{q}:=(n_i\\cdot (\\frac{d}{n}+1-g))_i$, which is a general polarization as $\\gcd(n,d)=1$.\n\t\n\tIn particular the restriction of the Hitchin fibration $h_{n} \\colon \\mathcal{M}_{n}\\to \\ensuremath{\\mathcal{A}_n}$ to $\\mathcal{A}^{\\red}$ is a fine relative compactified jacobian for the family $C_{\\mathcal{A}}|_{\\mathcal{A}^{\\red}}$ in the sense of Esteves \\cite[Theorem A]{EstevesTAMS}.\n\\end{remark}\nFinally we recall the $\\delta^{\\aff}$-invariant of our spectral curves. For any spectral curve $C_a$ we denote by $J_a:=\\Pic^{\\un{0}}_{C_a}$ the generalized jacobian of $C_a$, which is the group scheme parameterizing line bundles on $C_a$ that have degree $0$ on all irreducible components of $C_a$. The $J_a$ are the fibers of a group scheme $J_{\\mathcal{A}} \\to \\mathcal{A}_n$, which acts on $\\mathcal{M}_{n}$.\n\nFor every $a$ the connected group scheme $J_a$ has a canonical filtration\n$$ 0 \\to J_a^{\\aff} \\to J_a \\to J_a^{\\mathrm{proj}} \\to 0$$\nwhere $J_a^{\\aff}$ is affine, $J_a^{\\mathrm{proj}}$ is projective and both are connected. 
One defines\n$$ \\delta^{\\aff}(C_a):= \\dim( J_a^{\\aff}).$$\n\n\\begin{remark}\\label{rem:deltanode}\n\tIf $C_a$ is a reduced, connected curve and $\\nu\\colon \\widetilde{C}_a \\to C_a$ is the normalization, then $\\nu^*$ defines an isomorphism $J_a^{\\mathrm{proj}}\\cong \\Pic_{\\widetilde{C}_a}^0$. In this case \n\\begin{equation}\\label{eq:deltaff}\n\\delta^{\\aff}(C_a) =\\dim H^0(C_a,\\nu_*\\mathcal{O}_{\\widetilde{C}_a}\/\\mathcal{O}_{C_a})+1-\\#(\\pi_0(\\widetilde{C}_a)).\n\\end{equation}\nIf, furthermore, the only singularities of $C_a$ are nodes, we have\n\\begin{equation}\\label{deltaff_nodes}\n\\delta^{\\aff}(C_a) =\\# (\\text{nodes})+1-\\#(\\pi_0(\\widetilde{C}_a))=1-\\chi(\\Gamma)=\\dim H^1(\\Gamma),\n\\end{equation}\t\nwhere $\\Gamma$ is the dual graph of the curve $C_a$.\n\\end{remark}\n\nThe function $a \\mapsto \\delta^{\\aff}(C_a)$ is upper semi-continuous by \\cite[X, Remark 8.7]{sga3.2.2}, i.e. there are closed subsets\n$$\\mathcal{A}_n^{\\geq \\delta}:=\\{ a \\in \\mathcal{A}_n | \\delta^{\\aff}(C_a)\\geq \\delta\\} \\subseteq \\mathcal{A}_n.$$\n\n\\begin{remark}\nFor a flat family $C_Y \\to Y$ of projective curves over an irreducible scheme $Y$ with generic point $\\eta_Y$, we set $\\delta^{\\aff}(Y):= \\delta^{\\aff}(C_{\\eta_Y})$ and call it the generic $\\delta^{\\aff}$-invariant on $Y$.\t\n\\end{remark}\t\n\n\\begin{lemma}\\label{lem:deltasn}\n\tLet $\\un{n}$ be a partition of $n$ and let $a\\in S_{\\un{n}}$ be the generic point. Then we have \n\t$$\\codim S_{\\un{n}}= \\delta^{\\aff}(C_a)=:\\delta^{\\aff}(\\un{n}).$$\n\\end{lemma}\n\\begin{proof}\n\tWe know that $\\dim S_{\\un{n}} = \\dim \\mathcal{A}_{\\un{n}} = \\sum_{i=1}^k (n_i^2 (g-1)+1).$\n\t\n\tBy \\cref{rem:bertini} for a general $a=(a_k)\\in \\mathcal{A}_{\\un{n}}$ the spectral curve $C_a$ has $k$ smooth components intersecting transversally. 
As each component is defined by a polynomial in $\\mathcal{A}_{n_i}=\\oplus_{r=1}^{n_i} H^0(C, K_C^{\\otimes r})$ we have \n$$ \\# (C_{a_i}\\cap C_{a_j}) = n_in_j (2g-2).$$\nBy Remark \\ref{rem:deltanode} for any nodal curve $D$ the $\\delta^{\\aff}$-invariant is equal to the number of nodes minus the number of irreducible components plus the number of connected components. Therefore \n \\begin{align*}\n \\dim S_{\\un{n}} + \\delta^{\\aff}(C_a) &= \\left(\\sum_{i=1}^k n_i^2 (g-1)\\right) + k + \\sum_{i<j} n_in_j(2g-2) - k + 1\\\\\n &= \\left(\\sum_{i=1}^k n_i\\right)^2 (g-1) + 1 = \\dim \\ensuremath{\\mathcal{A}_n},\n \\end{align*}\nwhich proves the claim.\n\\end{proof}\n\n\\begin{proposition}\\label{prop:splitperverse}\n\tLet $K=\\bigoplus_{i} \\IC(\\mathcal{R}_i)[-i]$ be a split semisimple complex on $X$ satisfying the relative Hard Lefschetz (RHL) symmetry, and let $i:Z \\hookrightarrow X$ be a closed subvariety such that $i^*K$ is split semisimple. Then each $i^*\\IC(\\mathcal{R}_i)$ is perverse, i.e.\n\t$$\\, ^p\\!\\!{\\mathscr H}^{i}(i^*K)= i^*\\IC(\\mathcal{R}_i).$$\n\\end{proposition}\n\\begin{proof}\n\tLet $k_0$ be the largest index for which $i^*\\IC(\\mathcal{R}_{k_0})$ has a perverse constituent appearing with a nonzero shift. A constituent with shift $j>0$ would contribute a summand in \n\t$\\, ^p\\!\\!{\\mathscr H}^{k_0 +j}(i^*K)$ which violates the RHL symmetry.\n\\end{proof}\n\\begin{lemma}\n\tLet $\\ensuremath{\\mathcal{L}}$ be a semisimple local system on an open dense subset $U\\subset X$ and let $P=\\IC(\\ensuremath{\\mathcal{L}})$ be its intersection cohomology complex. \n\tLet $i:Z \\hookrightarrow X$ be a closed subvariety such that: \n\t\\begin{enumerate}\n\t\t\\item \n\t\t$U \\bigcap Z$ is Zariski dense in $Z$.\n\t\t\\item\n\t\tThe complex $i^*P$ is perverse semisimple.\n\t\\end{enumerate}\n\tThen \n\t$$ i^*P = \\oplus_k \\IC(Z^{k},\\ensuremath{\\mathcal{L}}^{k})$$\n\twhere $Z^k$ is the union of the irreducible components of $\\mathrm{Supp}\\, \\mathcal{H}^{k}(P)\\bigcap Z$ of codimension $k$ in $Z$ and $\\ensuremath{\\mathcal{L}}^k = \\mathcal{H}^k(i^*P)$ on the smooth part of the dense open subset of $Z^k$ where this sheaf is a local system.\t\n\\end{lemma}\n\\begin{proof}\nRecall that, if $Q$ is a perverse semisimple sheaf on $Z$, \nthen we have a canonical decomposition \n\\begin{equation}\\label{candec}\nQ=\\bigoplus_{k=0}^n \\IC(Z^k, \\ensuremath{\\mathcal{L}}^k)\n\\end{equation}\nwhere, for every $k$,\n$Z^k$ is a closed subvariety of $Z$ of codimension $k$, and $\\ensuremath{\\mathcal{L}}^k$ is a semisimple local system on an open set $Z^{k,\\circ}$ of $Z^k$.\nNote that $Z^k$ is allowed to be reducible and $\\ensuremath{\\mathcal{L}}^k$ may have different rank on the different components of 
$Z^{k,\\circ}$.\n\nThe subsets $Z^k$ and local systems $\\ensuremath{\\mathcal{L}}^k$ afford an easy characterization, which follows immediately from the strong support condition\nfor the intersection cohomology complex (\\Cref{subsec:ConventionIC}), i.e.: \nFor every $k$, the closed subset $Z^k$ is the union of the $k$-codimensional components of $\\mathrm{Supp}\\, \\mathcal{H}^{k}(Q)$ and if $x \\in Z^{k,\\circ}$, then there is a canonical isomorphism $\\ensuremath{\\mathcal{L}}^k_{x}= \\mathcal{H}^{k}(Q)_{x}.$\n\\end{proof}\n\n\n\t\\begin{remark}\\label{rem:transverseorsplit}\n\t\tNotice that, by the support condition (see \\cref{subsec:ConventionIC}) for the intersection cohomology complex, we have that\n\t\t$\\codim \\mathrm{Supp}\\, \\mathcal{H}^{k}(\\IC(\\ensuremath{\\mathcal{L}})) \\geq k+1$. Thus, \n\t\t if $Z$ intersects $\\mathrm{Supp}\\, \\mathcal{H}^{k}(\\IC(\\ensuremath{\\mathcal{L}}))$ properly, then\n\t\t\t$\\mathrm{Supp}\\, \\mathcal{H}^{k}(\\IC(\\ensuremath{\\mathcal{L}}))\\bigcap Z $ has codimension at least $k +1$ in $Z$ and therefore it cannot contribute a perverse summand. On the other hand,\n\t\t\tif $\\codim \\mathrm{Supp}\\, \\mathcal{H}^{k}(\\IC(\\ensuremath{\\mathcal{L}}))\\bigcap Z < k$, then $i_*i^*P$ is not perverse.\n\t\tSo if $i^*P$ is perverse on $Z$ then the splitting off of the restriction of an intersection cohomology complex is governed by a precise failure of transversality \tof $Z$ to the supports of the cohomology sheaves.\n\t\\end{remark}\n\\begin{remark}\\label{purity} In the situation of Proposition \\ref{prop:splitperverse}\n\tassume $K$ is pure of weight $0$ so that by our assumptions $i^*K$ is pure of weight $0$ too. 
\n\tThen the local systems $\\ensuremath{\\mathcal{L}}_{i}^{k} $ are pure of weight $i+k$.\n\tNotice that since $\\mathcal{R}_i$ is of weight $i$, by purity we have \n\t$$\n\t\\mathrm{weight} (\\mathcal{H}^k(\\IC(\\mathcal{R}_i)))_x \\leq i+k,\n\t$$\n\ttherefore the local systems $\\ensuremath{\\mathcal{L}}_{i}^{k}$ are the maximal weight quotients of the cohomology sheaves.\n\\end{remark}\n\n\\begin{remark}\\label{forex}\n\tThe assumptions of the \\cref{prop:splitperverse} are met when $K={\\mathbbm R} f_*\\mathbbm{Q}$ for $f$ a projective map satisfying the support theorem,\n\tand $Z\\subset X$ is a local complete intersection such that $f^{-1}(Z)$ is nonsingular. \n\\end{remark}\n\n\\subsection{The Kodaira--Spencer map for spectral curves}\n\nWe want to apply Remark \\ref{forex} to compare the cohomology of the Hitchin fibration to the cohomology of relative compactified jacobians for versal families of spectral curves. To verify the assumptions that $Z$ is a local complete intersection we need to describe the Kodaira--Spencer map for the family of spectral curves over $\\mathcal{A}_n$. \n\nFor any point $a\\in \\mathcal{A}_n$ we denote by $\\mathcal{I}_{C_a} \\subset \\mathcal{O}_{T^*C}$ the ideal sheaf defining $C_a \\subset T^*C$. \nRecall that embedded deformations of $C_a \\subset T^*C$ are described by the cotangent complex \n$$ {\\mathbbm L}_{C_a\/T^*C} =[\\mathcal{I}_a\/\\mathcal{I}_a^2 \\to 0]$$ \nwhich concentrated in degree $[-1,0]$. Considering the composition $C_a \\hookrightarrow T^*C \\to \\Spec k$ we see that the cotangent complex of $C_a$ is \n$$ {\\mathbbm L}_{C_a} = \\left[\\mathcal{I}_a\/\\mathcal{I}_a^2 \\to \\left(\\Omega_{T^*C}|_{C_a}\\right)\\right].$$\nNow the universal spectral curve over $\\mathcal{A}_n$ defines a Kodaira--Spencer map\n\\[ KS_a\\colon T_a\\mathcal{A}_n \\to H^1(C_a,{\\mathbbm L}_{C_a}^\\vee) = \\Ext^1({\\mathbbm L}_{C_a},\\mathcal{O}_{C_a}). 
\\]\nWe know that the ${\\mathbbm G}_m$-action on $\\mathcal{A}$ and the translation action $H^0(C,K_C)\\times \\mathcal{A}_n \\to \\mathcal{A}_n$ lift to the universal spectral curve $C_{\\mathcal{A}_n}\\to \\mathcal{A}_n$ and therefore induce trivial deformations of $C_a$. \nLet us denote by \n\\[ \\dmult\\colon k= \\Lie({\\mathbbm G}_m) \\to T_a\\mathcal{A}\\cong \\oplus_{i=1}^n H^0(C,K_C^{\\otimes i}) \\]\nthe derivative of the ${\\mathbbm G}_m$-action and by\n\\[ \\dshift \\colon H^0(C,K_C) \\to T_a\\mathcal{A}\\cong \\oplus_{i=1}^n H^0(C,K_C^{\\otimes i}) \\]\nthe derivative of the translation. We will show in \\Cref{Lem:KodairaSpencerComputation} below that the span of the images of these maps is the kernel of the Kodaira--Spencer map.\n\nLet us also recall that $S_{n,1}\\subset \\mathcal{A}_n$ is the locus of spectral curves that are given by the $n$-th infinitesimal neighborhood of a section in $T^*C$.\n\\begin{lemma}[Kodaira--Spencer map for $C_a$]\\label{Lem:KodairaSpencerComputation}\n\t\\begin{enumerate}\n\t\t\\item[]\n\t\t\\item For any point $a\\in \\mathcal{A}_n-S_{n,1}$ the kernel of the Kodaira--Spencer map $KS_a$ is the direct sum of the images of $\\dmult$ and $\\dshift$, i.e., the map $KS_a$ factors as \n\t\t$$ T_a \\mathcal{A} \\twoheadrightarrow (T_a \\mathcal{A})\/(H^0(C,\\mathcal{O}_C\\oplus K_C)) \\hookrightarrow H^1(C_a,{\\mathbbm L}_{C_a}^\\vee).$$\n\t\t\\item \tFor $a \\in S_{n,1} \\subset \\mathcal{A}_n$ the kernel of the Kodaira--Spencer map $KS_a$ is equal to the image of $\\dshift$, i.e., the map $KS_a$ factors as \n\t\t$$ T_a \\mathcal{A} \\twoheadrightarrow (T_a \\mathcal{A})\/(H^0(C,K_C)) \\hookrightarrow H^1(C_a,{\\mathbbm L}_{C_a}^\\vee).$$\n\t\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n\tLet us first describe the sheaves occurring in ${\\mathbbm L}_{C_a}$ more explicitly.\n\tThe cotangent bundle $T^*C$ is the relative spectrum of the $\\mathcal{O}_C$-algebra \n\t$$\\Sym^\\bullet K_C^{\\otimes -1} = \\oplus_{r=0}^\\infty K_C^{\\otimes 
-r}$$ and the spectral curve $C_a\\subset T^*C$ is defined by the ideal generated by the image of the map $K_C^{\\otimes -n} \\to \\oplus_{r=0}^\\infty K_C^{\\otimes -r}$ given by $\\alpha \\mapsto \\alpha + a_1\\alpha + \\dots + a_n\\alpha$. Therefore, denoting by $\\pi_a \\colon C_a \\to C$ the projection we see that\n\t\\[\\pi_{a,*} \\mathcal{O}_{C_a} \\cong \\oplus_{r=0}^{n-1}K_C^{\\otimes -r}\\]\n\tand\n\t\\[ \\mathcal{I}_a|_{C_a}= \\mathcal{I}_{a}\/\\mathcal{I}_{a}^2 \\cong \\pi_a^* K_C^{\\otimes -n}.\\] \n\t \n\n\tThe dual of the canonical map \n\t$$ {\\mathbbm L}_{C_a}= \\left[\\mathcal{I}_a\/\\mathcal{I}_a^2 \\to \\left(\\Omega_{T^*C}|_{C_a}\\right)\\right] \\to [\\mathcal{I}_a\/\\mathcal{I}_a^2 \\to 0]= {\\mathbbm L}_{C_a\/T^*C} $$\n\tis given by\n\t$$ [0 \\to \t(\\mathcal{I}_a\/\\mathcal{I}_a^2)^\\vee ] \\to [ T_{T^*C}|_{C_a} \\to (\\mathcal{I}_a\/\\mathcal{I}_a^2)^\\vee ].$$\n\tNote that\n\t$$ H^0(C_a,(\\mathcal{I}_a\/\\mathcal{I}_a^2)^\\vee) \\cong H^0(C, \\oplus_{r=1}^n K_C^{\\otimes{r}})=T_a\\ensuremath{\\mathcal{A}_n}$$\n\tis the space of embedded deformations of $C_a\\subset T^*C$.\n\t\n\tTaking cohomology of the exact triangle of complexes:\n\t$$ \\to [0 \\to \t(\\mathcal{I}_a\/\\mathcal{I}_a^2)^\\vee ] \\to {\\mathbbm L}_{C_a}^\\vee \\map{p} [ T_{T^*C}|_{C_a} \\to 0] \\to $$\n\twe obtain a long exact sequence:\n\t\\begin{equation}\\label{seq:kslong} 0 \\to H^0(C_a,{\\mathbbm L}_{C_a}^\\vee) \\map{H^0(p)} H^0(C_a, T_{T^*C}|_{C_a} ) \\map{\\delta} H^0(C_a, (\\mathcal{I}_a\/\\mathcal{I}_a^2)^\\vee) \\map{KS_a} H^1(C_a,{\\mathbbm L}_{C_a}^\\vee) \\to \\dots\\end{equation}\n\tTo conclude we will compute the dimension of $ H^0(C_a, T_{T^*C}|_{C_a})$ and then compare it to the dimension of the image of $\\dmult$ and $\\dshift$.\n\t\n\tRestricting the relative tangent sequence $0\\to \\pi^* K_C \\to T_{T^*C} \\to \\pi^* TC \\to 0$ on $T^*C$ to $C_a$ we get \n\t$$ 0\\to \\pi_a^* K_C \\to T_{T^*C}|_{C_a} \\to \\pi_a^* K_C^{\\otimes -1} \\to 0.$$\n\tApplying 
$\\pi_{a,*}$ and the projection we find: \n\t$$ 0 \\to \\oplus_{r=0}^{n-1} K_C^{\\otimes (1-r)} \\to \\pi_* T_{T^*C}|_{C_a} \\to \\oplus_{r=0}^{n-1} K_C^{\\otimes (-1-r)} \\to 0.$$\n\tIn particular we see that \\[H^0(C_a, T_{T^*C}|_{C_a} ) \\cong H^0(C_a,\\pi_a^*K_C)=H^0(C,K_C) \\oplus H^0(C,\\mathcal{O}_C).\\]\n\tThus we have an exact sequence:\n\t\\begin{equation}\\label{eq:KSmap}\n H^0(C_a,\\pi_a^*K_C)=H^0(C,K_C) \\oplus H^0(C,\\mathcal{O}_C) \\map{\\delta} T_a\\ensuremath{\\mathcal{A}_n} \\map{KS_a} H^1(C_a,{\\mathbbm L}_{C_a}^\\vee),\n\t\\end{equation}\n\twhere the map $\\delta$ is determined by the differential in the cotangent complex ${\\mathbbm L}_{C_a}^\\vee$.\n\t \n\tNow let us determine the dimension of the image of $\\dmult$ and $\\dshift$. The ${\\mathbbm G}_m$-action is given by the action of weight $i$ on $H^0(C,K_C^{\\otimes i})$, so at $a$ the element $c \\in {\\mathbbm C}=\\Lie({\\mathbbm G}_m)$ defines the tangent vector $(a_i+ c\\, i\\, a_i\\cdot\\epsilon ) \\in T_a\\ensuremath{\\mathcal{A}_n} \\subset \\ensuremath{\\mathcal{A}_n}({\\mathbbm C}[\\epsilon]\/(\\epsilon^2))$. \n\t\n\tSimilarly, as $a$ is given by the coefficients of a characteristic polynomial, translation by an element $\\omega \\in H^0(C,K_C)$ sends a polynomial $p(t)$ to $p(t-\\omega)$. Thus the derivative at $a=(a_i)$ is $(a_i - (n-i+1) \\omega a_{i-1} \\cdot\\epsilon)$ where we put $a_0:=1$.\n\t\n\tIn particular these vector fields are linearly independent unless $a_{i}= (-1)^i {n \\choose i} \\omega^i$ for some $\\omega \\in H^0(C,K_C)$, i.e., $a\\in S_{n,1}$. This shows that the kernel of $KS_a$ has dimension $\\geq g+1$ for $a\\not\\in S_{n,1}$. 
By equation (\\ref{eq:KSmap}) we know that the dimension is $\\leq g+1$, so this shows the first claim.\n\t\n\tIf $a$ is the $n$-fold multiple of a section, then the spectral curve $C_a$ admits a continuous family of automorphisms, given by multiplication by the nilpotent generator; therefore $ H^0(C_a,{\\mathbbm L}_{C_a}^\\vee)$, which is the tangent space to the automorphism group of $C_a$, is at least $1$-dimensional. Thus the second claim follows from (\\ref{seq:kslong}).\n\\end{proof}\n\nLet us now apply this result to the restriction of the Hitchin fibration to the subset of nodal curves. Let $g_n=d_n= \\dim \\ensuremath{\\mathcal{A}_n}$ be the arithmetic genus of the spectral curves $C_a$. We denote by $\\overline{\\mathcal{M}}_{g_n}$ the stack of stable curves of genus $g_n$. Then by \\cref{rem:bertini} the universal family of spectral curves $C_{\\ensuremath{\\mathcal{A}_n}}$ induces a morphism \n\\[f_{\\nod}\\colon \\ensuremath{\\mathcal{A}_n}^{\\nod} \\to \\overline{\\mathcal{M}}_{g_n}.\\]\nRecall from Remark \\ref{rem:BNRstable} that for any $a\\in \\mathcal{A}^{\\nod}$ the stability condition for Higgs bundles defines a general polarization $\\un{q}$ for rank $1$ torsion free sheaves on $C_a$ in the sense of \\cite{MRV1}.\nAs this polarization only depends on the fixed values $n,d$ and the genera of the irreducible components of $C_a$ the induced polarization on the fibers of a semiuniversal deformation again has this property and therefore is independent of the choice of $a$. 
Thus we obtain a polarization $\\un{q}$ on an open neighborhood $\\mathcal{U}\\subset \\overline{\\mathcal{M}}_{g_n}$ of $f_{\\nod}(\\ensuremath{\\mathcal{A}_n}^{\\nod})$.\n\n\n\nThe following is a consequence of the work of Esteves \\cite[Theorem A]{EstevesTAMS} and Melo--Rapagnetta--Viviani \\cite[Theorem C]{MRV1}.\n\\begin{proposition}\n\tThere exists an open substack $\\mathcal{U} \\subset \\overline{\\mathcal{M}}_{g_n}$ containing $f_{\\nod}(\\ensuremath{\\mathcal{A}_n}^{\\nod})$ such that the compactified jacobian parametrizing $\\un{q}$-stable torsion free sheaves, defined on any cover of $\\mathcal{U}$ that admits a universal family, descends to a regular and irreducible Deligne--Mumford stack $u\\colon \\overline{J}_{\\mathcal{U}}(\\un{q}) \\to \\mathcal{U}$. The map $u$ is representable and locally projective.\n\\end{proposition}\n\\begin{proof}\n\tBy \\cite[Theorem C]{MRV1}, for any general polarization $\\un{q}$ the algebraic stack of $\\un{q}$-stable rank $1$ torsion free sheaves admits a projective geometric coarse moduli space for any versal deformation of a reduced curve with planar singularities, and these spaces are regular and irreducible. We therefore obtain relative compactified jacobians $\\overline{J}_{\\Spec R}(\\un{q})$ on \\'etale neighborhoods of any point $a\\in \\mathcal{U}$. 
As these spaces are geometric coarse moduli spaces they are canonically isomorphic on the intersections of these neighborhoods and therefore define an \\'etale covering of a stack $\\overline{J}_{\\mathcal{U}}(\\un{q})$.\n\\end{proof}\t\n\nCombining this result with our computation of the Kodaira--Spencer map for the family $C_{\\mathcal{A}}$ (\\cref{Lem:KodairaSpencerComputation}) we deduce:\n\n\\begin{corollary}\\label{cor:map_to_moduli}\n\tFor every partition $\\ve{n} $ of $n$, let $a \\in S_{\\ve{n}}^{\\times}$ (see \\cref{rem:bertini}).\n\tGiven a subvariety $ \\Sigma_a $ passing through $a$ and intersecting $S_{\\ve{n}}$ transversally, \n\tthe classifying map $f_{\\Sigma_a}\\colon\\Sigma_a \\to \\mathcal{U} \\subset \\overline{\\mathcal{M}}_{g_n}$ is unramified on an open neighborhood of $a$ in $\\Sigma_a$.\n\tFurthermore we have a cartesian diagram \n\t\\begin{equation}\\label{equ:cartesian}\n\t\\xymatrix{\n\t\t{h_{n}}^{-1}(\\Sigma_a) \\ar[rd]_{{h_{n}}_|}\\ar[r]^{\\simeq} & \\ov{J}_{\\mathcal{U}}\\times_{\\mathcal{U}} \\Sigma_a \\ar[r]\\ar[d] & \\ov{J}_{\\mathcal{U}} \\ar[d]^{u}\\\\\n\t\t&\\Sigma_a \\ar[r]^-{f_{\\Sigma_a}} & \\mathcal{U}.\n\t}\n\t\\end{equation}\n\\end{corollary}\n\n\\begin{remark}\\label{rem:compwithversal}\n\tAs the map $f_{\\Sigma_a}$ is unramified (i.e., \\'etale locally a closed embedding) it is a local complete intersection morphism. By Remark \\ref{forex} and the restriction result for semisimple complexes (\\cref{prop:splitperverse}) we can therefore compute the supports of ${\\mathbbm R} h_{n,*}\\mathbbm{Q}$ on $\\Sigma_a$ as the restrictions of those of ${\\mathbbm R} u_* \\mathbbm{Q}$.\n\\end{remark}\n\n\n\n\\section{The partition strata are supports}\\label{sec:sn_supports}\n\n\\subsection{Description of the $\\IC$ complexes for families with full support}\nHere we recall a few facts on the Cattani--Kaplan--Schmid (CKS) complex, introduced in \\cite{cks, KK}, see also \\cite[\\S 3]{MSV}. 
This will finally allow us to reduce our problem to a homology calculation of an explicit combinatorially defined complex attached to the dual graph of a nodal curve.\n\nAssume $B$ is a complex manifold of dimension $n$, $D \\subset {B}$ is a normal crossing divisor, and\n$\\ensuremath{\\mathcal{L}}$ is a local system on $B \\setminus D$ with unipotent monodromies $\\{T_i\\}$ around the components of $D$. We work locally, near a point $p \\in B$. After picking a holomorphic chart $U \\subset B$ in a neighborhood of $p$, we may assume $p$ to be the origin in a polydisc $\\Delta^n$ and the divisor $D$ to have equation $\\prod_{i=1}^l z_i=0$. Thus $U \\bigcap (B \\setminus D)\\simeq (\\Delta^*)^l \\times \\Delta^{n-l} $, where $\\Delta^*$ is the punctured unit-disc.\nUp to taking a slice transverse to the stratum of $D$ to which $p$ belongs, we may assume $l=n$, and we denote by $i_p: \\{*\\} \\to B$ the closed embedding. The local system on $(\\Delta^*)^n$ is described by the stalk at a base point, a vector space $L$, and $n$ commuting nilpotent endomorphisms $N_i =\\log T_i: L \\to L$.\nGiven a subset $I=\\{i_1, \\cdots, i_k\\} \\subset \\{1, \\cdots , n\\}$, with $1 \\leq i_1 < \\cdots < i_k \\leq n$, we write $N_I := N_{i_1} \\circ \\cdots \\circ N_{i_k}$. By \\cite{cks, KK}, the stalks at $p$ of the cohomology sheaves of $\\IC(\\ensuremath{\\mathcal{L}})$ are computed by the CKS complex ${\\mathbf C}^{\\bullet}( \\{ N_j \\}, L)$, whose degree $k$ term is $\\bigoplus_{|I|=k} N_I L$, with differentials induced by the maps $N_j$.\n\nWe now specialize to the case of interest: $C_\\times$ is a nodal curve with dual graph $\\Gamma$, $\\mathcal{C}_\\eta$ is a nearby smooth curve in a smoothing family, and the logarithms of the monodromy operators $N_j$, indexed by the nodes of $C_\\times$, i.e., by the edges of $\\Gamma$, act on $H^1(\\mathcal{C}_\\eta)$ and hence on $L=\\bigwedge^i H^1(\\mathcal{C}_\\eta)$.\n\\begin{lemma}\\label{lem:bond_matroid}\nFor every $i$ we have $H^{r}({\\mathbf C}^{\\bullet}( \\{ N_j \\}, \\bigwedge^i H^1(\\mathcal{C}_\\eta)))=0$ for $r > \\delta^{\\aff}= \\dim H^1(\\Gamma)$.\n\\end{lemma}\nLemma \\ref{lem:bond_matroid} justifies the following definition:\n\\begin{definition}\\label{def:GraphComplex}\nGiven a connected graph $\\Gamma$, with set of edges $\\mathrm{E}$,\n\twe write $\\mathscr{C}(\\Gamma)$ for the collection of subsets of $\\mathrm{E}$ whose removal does not disconnect $\\Gamma$.\n\tIn other words, a subset $I\\subseteq \\mathrm{E}$ belongs to $\\mathscr{C}(\\Gamma)$ if and only if $\\Gamma\\setminus I$ is connected. \t\n\\end{definition} \n\\begin{remark}\\label{rem:matr}\nIn the literature $\\mathscr{C}(\\Gamma)$ is the family of independent subsets in what is known as the {\\em bond, or cographic, matroid } of the graph $\\Gamma$. 
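An elementary illustration of this notion (the cycle graph, a hypothetical example added for concreteness): if $\\Gamma$ is a cycle with $m\\geq 2$ edges, then removing any single edge leaves $\\Gamma$ connected, while removing two or more edges disconnects it, so that\n$$ \\mathscr{C}(\\Gamma)=\\{\\emptyset\\}\\cup\\bigl\\{\\{e\\}\\mid e \\in \\mathrm{E}\\bigr\\},$$\nconsistently with the fact that the bond matroid of a cycle has rank $|\\mathrm{E}|-|\\mathrm{V}|+1=1$. 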
\nThe set $\\mathscr{C}(\\Gamma)$ is partially ordered with respect to inclusions and we denote by $|\\mathscr{C}(\\Gamma)|$ the associated simplicial complex, i.e.,\nthe complex whose $k$-dimensional faces are the $(k+1)$-element subsets of edges belonging to $\\mathscr{C}(\\Gamma)$. \n\\end{remark}\nFor any matroid $M$ the simplicial complex $|\\In(M)|$ of its independent subsets has special properties:\n\\begin{theorem}\\cite[Theorem 7.3.3, Theorem 7.8.1]{bjorner}\\label{shellable_mat}\nThe simplicial complex $|\\In(M)|$ of independent subsets associated with a rank $r$ matroid \nis shellable and has the homotopy type of a bouquet of \n$(r-1)$-dimensional spheres.\n\\end{theorem}\n\nThe rank of $\\mathscr{C}(\\Gamma)$ is the cardinality of the complement of a spanning tree, namely\n$|\\mathrm{E}|-|\\mathrm{V}|+1$, which, in the case of the dual graph of a nodal curve $C_\\times$, equals \n$ \\delta^{\\aff}(C_\\times)$, hence:\n\n\\begin{corollary}\\label{shellable}\nLet $\\Gamma$ be the dual graph of a nodal curve $C_\\times$. 
\nThen $|\\mathscr{C}(\\Gamma)|$ is homotopy equivalent to a bouquet of $(\\delta^{\\aff}-1)$-dimensional spheres:\n\\[ |\\mathscr{C}(\\Gamma)| \\simeq \\bigvee S^{\\delta^{\\aff}-1}.\\]\n\\end{corollary} \n\n\\begin{corollary}\\label{cor:highest_weight_cohomology}\nLet $C_\\times$ be a nodal stable curve, and ${\\mathbf C}^{\\bullet}( \\{ N_j \\}, \\bigwedge^i H^1(\\mathcal{C}_\\eta))$ be the associated CKS complex.\nThen, for every $i=0, \\cdots, 2g$ we have\n\\begin{enumerate}\n\\item $H^{r}({\\mathbf C}^{\\bullet}( \\{ N_j \\}, \\bigwedge^i H^1(\\mathcal{C}_\\eta)))=0 \\text{ for } r > \\delta^{\\aff}.$\n\\item\\label{item:highestweight}\nFor the highest weight quotient of ${\\mathbf C}^{\\bullet}( \\{ N_j \\}, \\bigwedge^i H^1(\\mathcal{C}_\\eta))$ \nwe have \n$$Gr_{i+\\delta^{\\aff}}^WH^{r}({\\mathbf C}^{\\bullet}( \\{ N_j \\}, \\bigwedge^i H^1(\\mathcal{C}_\\eta))) = 0 \\hbox{ if }r < \\delta^{\\aff},$$ \n$$Gr_{i+\\delta^{\\aff}}^WH^{\\delta^{\\aff}}({\\mathbf C}^{\\bullet}( \\{ N_j \\}, \\bigwedge^i H^1(\\mathcal{C}_\\eta)))= \n\\bigwedge^{i-\\delta^{\\aff}}H^1(\\widetilde{C}_\\times)\\otimes H^{\\delta^{\\aff}-1}(|\\mathscr{C}(\\Gamma)|)(-\\delta^{\\aff}).$$\n\\end{enumerate}\n\\end{corollary}\n\n\\begin{proof}\nThe first statement follows from \\cref{lem:bond_matroid}.\nTo prove (\\ref{item:highestweight}) notice that $\\bigwedge^{\\delta^{\\aff}} H_1(\\Gamma)$ is one-dimensional, since $\\delta^{\\aff}=\\dim H_1(\\Gamma)$. Similarly the image of $N_I$ is isomorphic to $\\bigwedge^{\\delta^{\\aff}-|I|} (H_1(\\Gamma-I))$ which is also one-dimensional. So for every $k$, the degree $k$ part of the complex has a basis consisting of the non-disconnecting cardinality $k$ subsets of the edge set, namely precisely the $(k-1)$-cells of $|\\mathscr{C}(\\Gamma)| $. It is easy to check that the boundary maps coincide with the maps of the complex computing the reduced homology of $|\\mathscr{C}(\\Gamma)| $. 
It then follows from \\cref{eq:weight_exterior} that, for $i \\geq \\delta^{\\aff}$, the cohomology of the CKS complex in highest weight is computed by the tensor product of the vector space\n$\\bigwedge^{i-\\delta^{\\aff}}H^1(\\widetilde{C}_{\\times}) $ with the complex computing the reduced homology of $|\\mathscr{C}(\\Gamma)|$, twisted by $(-\\delta^{\\aff})$ and shifted by one.\n\\end{proof}\n\\begin{remark}\\label{rem:purity}\nThe vanishing of the cohomology groups in the range above can also be derived from the purity result \\cref{e:purity}.\n\\end{remark}\n\n\\subsection{Description of the summands supported on the partition strata}\n\nWe need the following well-known, elementary estimate:\n\\begin{lemma}\\label{lem:estimate}\nLet $\\pi: \\mathcal{C} \\to B$ be a flat projective family of locally planar reduced curves of arithmetic genus $g$ such that the \ncompactified jacobian family $\\pi^J: \\ov{J}_{\\mathcal{C}} \\to B$, \nrelative to a choice of a fine polarization, exists and has nonsingular total space. Let $B^\\circ \\subset B$ be a dense open set such that the restriction\n$\\pi\\colon \\mathcal{C}_{B^\\circ} \\to B^\\circ$ is smooth, and denote by $R^1$ the local system on $B^\\circ$:\n$$\nR^1:={R^1\\pi_* \\mathbbm{Q}}_{|B^\\circ}.\n$$\nThen\n\\begin{equation}\\label{equ: estimate}\n\\mathcal{H}^r(\\IC(\\bigwedge^i R^1 ))=0 \\text{ for } r>i.\n\\end{equation}\n\n\\end{lemma}\n\\begin{proof}\nIt follows from the decomposition theorem for $\\pi^J: \\ov{J}_{\\mathcal{C}} \\to B$ that $\\IC(\\bigwedge^i R^1 )[-i]$ is a direct summand in ${\\mathbbm R} \\pi^J_*\\mathbbm{Q}$:\n\\begin{equation}\\label{eq:dtpi}\n\\IC(\\bigwedge^i R^1 )[-i] \\subset {\\mathbbm R} \\pi^J_*\\mathbbm{Q}\n\\end{equation}\nAssume $\\mathcal{H}^r(\\IC(\\bigwedge^i R^1 ))_b\\neq 0$ for $b\\in B$ and some $r >i$. 
It then follows from the relative Hard Lefschetz theorem that we may assume $i\\geq g$.\nTaking stalks of the cohomology sheaf $\\mathcal{H}^{r+i}$ at $b$ in \\cref{eq:dtpi} above we have\n\\begin{equation*}\n0 \\neq \\mathcal{H}^{r}(\\IC(\\bigwedge^i R^1 ))_b\\subset H^{r+i}(\\ov{J}_{\\mathcal{C}}(b)),\n\\end{equation*} \nwhich is a contradiction since $r+i>2i \\geq 2g=2\\dim \\ov{J}_{\\mathcal{C}}(b)$.\n\\end{proof}\nWe will rely on the following result (\\cite[Theorem 5.11]{MSV}):\n\\begin{theorem}\\label{thm:hd_cj}\n\tLet $\\pi:\\mathcal{C} \\to B$ be a projective versal family of curves with locally planar singularities and arithmetic genus $g$, and let $\\pi^J:\\ov{J}_{\\mathcal{C}}\\to B$ \n\tbe a relative fine compactified Jacobian. \n\tThen no summand of ${\\mathbbm R} \\pi^J_* \\mathbbm{Q}$ has positive-codimensional support,\n\tnamely, \n\t\\begin{equation*}\n\t{\\mathbbm R} \\pi^J_* \\mathbbm{Q}= \\bigoplus_{i=0}^{2g} \\IC(\\bigwedge^i R^1)[-i]\n\t\\end{equation*}\n\twhere $R^1$ is defined as in \\cref{lem:estimate}.\n\\end{theorem}\n\nThe following is one of the main results of this paper: \n\\begin{theorem}\\label{thm:main1}\nLet $h_{n}:\\mathcal{M}_{n} \\to \\ensuremath{\\mathcal{A}_n}$ be the Hitchin map.\n\\begin{enumerate}\n\\item \nFor every partition $\\ve{n}$ of $n$, the stratum $S_{\\ve{n}}$ is a support for all the sheaves\n\\begin{equation*}\\label{eq:range_supp}\n^p\\!\\!{\\mathscr H}^{r}({\\mathbbm R} {h_{n}}_{*}\\mathbbm{Q}) \n\\hbox{ with } \\delta^{\\aff}(\\ve{n}) \\leq r \\leq {2\\dim \\ensuremath{\\mathcal{A}_n} -\\delta^{\\aff}(\\ve{n})}\n\\end{equation*}\n\\item\\label{item:loc_sys}\nMore precisely, for every $r$ in the range of (\\ref{eq:range_supp}), there is a direct summand in \n$^p\\!\\!{\\mathscr H}^{r}({\\mathbbm R} {h_{n}}_{*}\\mathbbm{Q}) $ which is the intermediate extension of the \nlocal system $\\ensuremath{\\mathcal{L}}_{r, \\ve{n}}$ on the open set $ {S_{\\ve{n}}^{\\times}} \\subset S_{\\ve{n}}$, \nwhose stalk at a 
point $a\\in S_{\\ve{n}}^{\\times}$ is \n\\begin{equation*}\\label{eq:loc_sys}\n({\\ensuremath{\\mathcal{L}}_{r, \\ve{n}}})_a = H^{\\delta^{\\aff}(\\ve{n})-1}\\left( |\\mathscr{C}(\\Gamma_{\\ve{n}})| \\right)(-\\delta^{\\aff})\\otimes \\bigwedge^{r-\\delta^{\\aff}}H^1(\\widetilde{C}_a)\n\\end{equation*}\nand underlying a variation of pure Hodge structures of weight $r+\\delta^{\\aff}(\\ve{n})$.\n\\end{enumerate}\n\\end{theorem}\n\n{\n\\begin{remark}\nTheorem \\ref{thm:main1} holds in the context of M. Saito's mixed Hodge modules (see \\cite[Appendix]{drs}).\nIn particular, the resulting direct summands of the pure Hodge structures given by the cohomology groups $H^k(\\mathcal{M}_{n},\\mathbbm{Q})$ (these are pure since they coincide with the cohomology of the nilpotent cone (fiber of the Hitchin map over the origin)) are pure Hodge substructures. \n\\end{remark}\n}\n\n\\begin{proof}\nSince, by \\cref{thm:hidiscrHitc}, the stratum $S_{\\ve{n}}$ is a $\\delta^{\\aff}(\\ve{n})$-codimensional component of $\\Delta^{\\delta^{\\aff}(\\ve{n})}(h_{n})$, either \n$^p\\!\\!{\\mathscr H}^{r}({\\mathbbm R} {h_{n}}_{*}\\mathbbm{Q})$ has a summand which is {\\em fully} supported at $S_{\\ve{n}}$ or none of its summands intersect $S_{\\ve{n}}$, therefore\nit is enough to consider a general point $a\\in S_{\\ve{n}}$ corresponding to a nodal spectral curve $C_a$ with $l(\\ve{n})$ smooth components, by \\cref{rem:bertini}.\nLet $\\Sigma_a$ as in \\cref{cor:map_to_moduli} be a transversal slice to $S_{\\ve{n}}$ at $a$. Since $S_{\\ve{n}}$ has codimension $\\delta^{\\aff}(\\ve{n})$, we have $\\dim \\Sigma_a= \\delta^{\\aff}(\\ve{n})$. Furthermore,\nby transversality, $h_{n}^{-1}(\\Sigma_a)$ is nonsingular, and we have the diagram (\\ref{equ:cartesian}). 
Let $U^o\\subset U$ be an open set where the universal curve \n$\\pi: \\mathcal{C}_{U^o} \\to U^o$ is smooth, and denote by $R^1$ the local system \n$$\nR^1:={R^1\\pi_* \\mathbbm{Q}}_{U^o}.\n$$\nSince the family $\\pi: \\mathcal{C}_{U} \\to U$ is versal, we have, by \\cite[Theorem 5.11]{MSV}, which we recalled in \\cref{thm:hd_cj}, that \n\\begin{equation}\\label{equ:dec_thm_vsl}\n{\\mathbbm R} \\pi^J_* \\mathbbm{Q} \\simeq \\bigoplus_{i=0}^{2d_{n}} \\IC(\\bigwedge^iR^1)[-i]. \n\\end{equation}\nBy proper base change and the isomorphism in Diagram (\\ref{equ:cartesian}) we have that\n\\begin{equation}\n{{\\mathbbm R} {h_{n}}_* \\mathbbm{Q}}_{|\\Sigma_a} \\simeq i^*({\\mathbbm R} \\pi^J_* \\mathbbm{Q} )\n\\end{equation}\nis split semisimple. Since furthermore, $\\Sigma_a $ intersect the open set $U^o$, the hypotheses of \\cref{prop:splitperverse} are met.\nIn particular\n\\begin{equation}\\label{equ:perv_coh}\n^p\\!\\!{\\mathscr H}^i({{\\mathbbm R} {h_{n}}_* \\mathbbm{Q}}_{|\\Sigma_a}) = ^p\\!\\!{\\mathscr H}^i(i^*{\\mathbbm R} \\pi^J_* \\mathbbm{Q}) = i^* \\IC(\\bigwedge^iR^1).\n\\end{equation}\nBy stratification theory it is clear that\n$S_{\\ve{n}}$ is a support for $^p\\!\\!{\\mathscr H}^i({{\\mathbbm R} {h_{n}}_* \\mathbbm{Q}})$ if and only if $a$ is a support for $^p\\!\\!{\\mathscr H}^i({{\\mathbbm R} {h_{n}}_* \\mathbbm{Q}}_{|\\Sigma_a})$.\nSince $\\dim \\Sigma_a= \\delta^{\\aff}(\\ve{n})$, by \\cref{cor:point} and \\cref{equ:perv_coh}, this happens if and only if\n\\begin{equation}\\label{equ:new_sprt}\n\\mathcal{H}^{\\delta^{\\aff}(\\ve{n})}(\\IC(\\bigwedge^iR^1))_a \\neq 0.\n\\end{equation}\nBy \\cref{lem:estimate} this is possible only if $i>\\delta^{\\aff}(\\ve{n})$. 
On the other hand, \nwhen this is the case, \cref{cor:highest_weight_cohomology} tells us \nthat \cref{equ:new_sprt} holds.\n\end{proof}\n\n\n\n\begin{corollary}\label{cor:fin_mon}\nFor every $\ve{n}$, and for $r=\delta^{\aff}(\ve{n})$ and $r={2\dim(\ensuremath{\mathcal{A}_n})- \delta^{\aff}(\ve{n})}$, the pull backs of the local systems \n${\ensuremath{\mathcal{L}}_{r, \ve{n}}}$ to $\mathcal{A}^{\nod}_{\un{n}}$ have trivial monodromy.\n\end{corollary}\t\n\begin{proof}\nThe irreducible components of the Jacobian of the spectral curve $C_a$ are indexed by the degrees of the restriction of the line bundles to the components of $C_a$. \nTherefore, the sheaf of irreducible components of $C_a$ is constant on $\mathcal{A}^{\nod}_{\un{n}}$. The local system in maximal perversity\n\[\n({\ensuremath{\mathcal{L}}_{{2\dim(\ensuremath{\mathcal{A}_n})- \delta^{\aff}(\ve{n})}, \ve{n}}})_a = H^{\delta^{\aff}(\ve{n})-1}\left( |\mathscr{C}(\Gamma_{\ve{n}})| \right)(-\delta^{\aff})\n\otimes \bigwedge^{h^1(\ve{n})}H^1(\widetilde{C_a})\n\]\nis, by the decomposition theorem, a subsheaf of the $\mathbbm{Q}$-linearization of the sheaf of irreducible components, see the argument in \cref{prop:SuppOnlySn}; therefore its pullback to $\mathcal{A}^{\nod}_{\un{n}}$\nis constant. This is also true for ${\ensuremath{\mathcal{L}}_{\delta^{\aff}(\ve{n}), \ve{n}}}$, which is isomorphic to it (up to a Tate twist). \n\end{proof}\t\n\n\begin{remark}\label{rem:fin_mon}\n\tThe generic Galois group of the finite map \n$\mult_{\un{n}}: \mathcal{A}^{\nod}_{\un{n}} \longrightarrow S^{\nod}_{\un{n}}$\nis the subgroup of the symmetric group $S_r$ stabilizing the partition $\ve{n}$ of $n$. Writing $\ve{n}=1^{\alpha_1}\cdots n^{\alpha_n}$, i.e. 
letting $\\alpha_i$ be the number of elements in $\\ve{n}$ equal to $i$,\nthis subgroup is \n\\begin{equation}\\label{subgroup}\n\\prod_i S_{\\alpha_i} \\subseteq S_r.\n\\end{equation}\nIn particular the sheaves $\\ensuremath{\\mathcal{L}}_{\\delta^{\\aff}( \\ve{n}), \\ve{n}}$ and \n\t$\\ensuremath{\\mathcal{L}}_{{2\\dim\\ensuremath{\\mathcal{A}_n} - \\delta^{\\aff}(\\ve{n})}, \\ve{n}}$ are constant if $n_i\\neq n_j$ for all $i\\neq j$.\n\\end{remark}\n\n\n\n\\section{The monodromy of the new local systems}\\label{sec:mon_loc_sys}\nIn this section we determine the monodromy of the local systems ${\\ensuremath{\\mathcal{L}}_{{2\\dim\\ensuremath{\\mathcal{A}_n} - \\delta^{\\aff}(\\ve{n})}, \\ve{n}}}$ and \n${\\ensuremath{\\mathcal{L}}_{\\delta^{\\aff}(\\ve{n}), \\ve{n}}}$. Notice that, by the relative Hard Lefschetz theorem, these local system differ only by a Tate twist, as it can also be seen directly from \\cref{thm:main1} (\\ref{item:loc_sys}).\n\nIt follows from \\cref{cor:fin_mon}\nand \\cref{rem:fin_mon} that it is enough to compute the associated representation of \nthe subgroup of \\cref{subgroup}. {In the main theorem \\cref{thm:main1} we already reduced this to the computation of the action on a combinatorially defined complex constructed from the dual graph of a nodal curve, which was a special case of the simplicial complex defined by a matroid (for terminology about matroids see \\cite{white}).}\n\nLet us denote by $\\Gamma_r$ the complete graph on $r$ vertices for $r\\geq 2$. As before we will denote by $|\\mathscr{C}(\\Gamma_r)|$ the simplicial complex defined by the cographic matroid of $\\Gamma_r$, i.e., its $k$-simplices are the subsets of $k+1$ edges of $\\Gamma_r$ that do not disconnect the graph. 
\n\nThe following result is a combination of well known results on matroids and a result of Stanley.\n\begin{proposition}\label{repsym}\n\tFor any $r \geq 2$ the cohomology group \n$H^{\mathrm{top}}(|\mathscr{C}(\Gamma_{r})|)$ has rank $(r-1)!$ and, with its natural structure of $S_r$-module, is isomorphic to the representation induced by a primitive character of a maximal cyclic subgroup.\n\end{proposition}\nTo deduce this result let us introduce the simplicial complex $\Nspan(\Gamma_r)$ defined by the subsets of edges that do not span $\Gamma_r$. Let us denote by $\Flat(\Gamma_r)$ the lattice of partitions of $\{1,\dots,r\}$, which in the language of matroids corresponds to the poset of flats of the cographic matroid, because a flat in this case is a partition into complete subgraphs. To this lattice one attaches the simplicial complex $\Delta(\Flat(\Gamma_r))$ whose $k$-simplices are chains of partitions $p_{disc} < p_1 < \dots < p_k < p_{triv}$ where $p_{disc}$ is the discrete partition and $p_{triv}$ is the trivial partition. \n\nWe need the following result which is a general fact on matroids: \n\begin{lemma}\label{lem:cographicflat}\nLet $N={r \choose 2}$ denote the number of edges of $\Gamma_r$. We have natural isomorphisms\n\begin{equation}\nH^i(|\mathscr{C}(\Gamma_{r})|) \simeq H_{N-3-i}(|\Nspan(\Gamma_r)|) \simeq H_{N-3-i}(|\Delta(\Flat(\Gamma_r))|).\n\end{equation}\nThe second isomorphism is $S_r$-equivariant, while in the first isomorphism the $S_r$-rep\-re\-sen\-ta\-tions differ by the sign character. \n\end{lemma}\n\begin{proof}\nConsider the boundary of the complex of all subsets of the edges of $\Gamma_r$. 
Its geometric realization is the boundary of an $(N-1)$-simplex, i.e.\ an $(N-2)$-dimensional sphere.\n\nThe first isomorphism is the content of Exercise 7.43 of \cite{bjorner} and amounts to combinatorial Alexander duality, once one notices that $\mathscr{C}(\Gamma_{r})$ and $\Nspan(\Gamma_r)$ are Alexander dual complexes in $\partial \Delta^{N-1}$ (see \cite{bjotan} for a quick proof of Alexander duality which is adapted to this context). We see that the isomorphism is twisted by the sign representation considering the action of $S_r$ on the top cohomology of the ambient sphere (\cite[Theorem 2.4]{Stanley}).\n\nThe second isomorphism, due to Folkman, is \cite[Theorem 3.1]{folk}, using that the set of edges of $\Gamma_r$ forms a crosscut of the partition lattice. To see that this isomorphism is $S_r$-equivariant we briefly recall Folkman's argument. \n\nNote that for any edge $e$ the subcomplex $L_e$ of $\Delta(\Flat(\Gamma_r))$ formed by the simplices that are contained in a simplex that satisfies $p_1=e$ is contractible. Moreover for any non-spanning subset $I$ of edges the intersection $\cap_{e\in I} L_e$ is contractible to the $0$-simplex given by the partition defined by the subgraph $I$ (see \cite[Section 3]{folk}). \n\nThus the cohomology of $\Delta(\Flat(\Gamma_r))$ can be computed from the nerve of the covering given by the subcomplexes $L_e$ and this agrees with the cohomology of $|\Nspan(\Gamma_r)|$. 
\n\\end{proof}\n\\begin{proof}(of Proposition \\ref{repsym})\nApplying the previous lemma, the computation reduces to the computation of the homology of the lattice of partitions which was determined in \\cite[Theorem 7.3]{Stanley} to be the representation induced by a primitive character of a maximal cyclic subgroup tensored with the sign representation.\n\\end{proof}\n\nThe dual graph $\\Gamma$ of a spectral curve in $S_{\\ve{n}}^\\times$ contains a complete graph on the vertices, but it will have multiple edges between the vertices.\n\nLet us therefore fix some notation. Given a graph $\\Gamma$ and $I$ a subset of edges let us denote by $\\widehat{\\Gamma}_{I}$ the graph obtained by doubling the edges in $I$, i.e. for every edge $e\\in I$ we add an edge $\\widehat{e}$ connecting the same vertices as $e$. \n \n\\begin{proposition}\nLet $\\Gamma $ be a graph, let $I$ be a non-empty subset of edges. Let $ |\\mathscr{C}(\\Gamma)|$ and $|\\mathscr{C}(\\widehat{\\Gamma}_{I})|$ be the simplicial complexes associated to $\\Gamma$ and $\\widehat{\\Gamma}_{I}$. \nThen, for every $\\ell$, there is a canonical isomorphism\n\\begin{equation}\nH_{\\ell}\\left( |\\mathscr{C}(\\Gamma)| \\right)\\simeq H_{\\ell+|I|} ( |\\mathscr{C}(\\widehat{\\Gamma}_{I})| ).\n\\end{equation}\nIf a finite group $G$ acts on $\\Gamma$ preserving $I$, then these isomorphisms are $G$-equivariant.\n\\end{proposition}\n\n\\begin{proof}\nIt is a direct application of the deletion-contraction sequence: Let us first assume that $I=\\{e\\}$ consists of a single edge.\nThen the set of faces in $\\mathscr{C}(\\widehat{\\Gamma}_{I})$ is the disjoint union of the set of those which contain a doubled edge $\\widehat{e}$ and those who don't. The subcomplex of those faces not containing $\\widehat{e}$ is the simplicial complex of the graph $\\widehat{\\Gamma}_{I}\/{\\widehat{e}}$ obtained by removing $\\widehat{e}$ and collapsing the vertices joined by $\\widehat{e}$. 
We therefore get an exact sequence of chain complexes \n\begin{equation}\n0 \longrightarrow C_\bullet(\widehat{\Gamma}_{I}\/\widehat{e}) \longrightarrow C_\bullet(\widehat{\Gamma}_{I}) \longrightarrow C_{\bullet-1}(\Gamma) \longrightarrow 0.\n\end{equation}\nNote that the edge $e$ becomes a loop in the graph $\widehat{\Gamma}_{I}\/{\widehat{e}}$, hence $|\mathscr{C}(\widehat{\Gamma}_{I}\/\widehat{e})|$ is a cone and has vanishing homology.\n\nBy induction this shows that the morphism $C_\bullet(\widehat{\Gamma}_{I}) \to C_{\bullet-|I|}(\Gamma)$ induced by mapping those faces in $\widehat{\Gamma}_{I}$ that contain all of the doubled edges to their intersections with $\Gamma$ induces an isomorphism in homology. \n\end{proof}\n\n\n\n\begin{corollary}\label{cor:rank_loc_syst}\nLet $\ve{n}= n_1 \geq n_2 \geq \cdots \geq n_r=1^{\alpha_1}\cdots n^{\alpha_n}$ be a partition of $n$. The rank of the local system $\ensuremath{\mathcal{L}}_{{\delta^{\aff}(\ve{n})+i}, \ve{n}}$ is \n\begin{equation}\n\mathrm{rank}\,\ensuremath{\mathcal{L}}_{{\delta^{\aff}(\ve{n})+i}, \ve{n}}=\n(r-1)!\binom{{2(\dim\ensuremath{\mathcal{A}_n}-\delta^{\aff})}}{i}.\n\end{equation}\nThe monodromy of the (isomorphic) local systems ${\ensuremath{\mathcal{L}}_{\delta^{\aff}(\ve{n}), \ve{n}}}$ and \n${\ensuremath{\mathcal{L}}_{{2\dim\ensuremath{\mathcal{A}_n} - \delta^{\aff}(\ve{n})}, \ve{n}}}$ is given by the restriction to the subgroup \n$\prod_i S_{\alpha_i} \subseteq S_r$ \nof the representation of $S_r$ induced by a primitive character of a maximal cyclic subgroup. \n\end{corollary}\n\n\begin{remark}\label{n=2case}\nIf $n=2$ and $\Gamma_2$ is the graph with two vertices joined by $2g-2$ edges, it is immediately seen that $ |\mathscr{C}({\Gamma }_2)|$ is a sphere of dimension $2g-4$. The corresponding representation is, for $g\neq 2$, the sign representation. 
For $g=2$ we have a zero-dimensional sphere, namely two points, and the relevant representation is that on {\em reduced} homology.\n\end{remark}\n\section{Appendix: The derivative of the Hitchin morphism is dual to the derivative of the action}\label{appe}\n\nThe duality statement from the title of the section is certainly known, but we could not find a reference for it. Although we only apply the result for the group $\textrm{GL}_n$, it turns out that the proof is most easily explained in the more general setting of Higgs bundles for reductive groups. This is because in the case of $\textrm{GL}_n$ it is easy to lose track of implicit identifications between the Lie algebra and its dual.\n\n\subsection{Reminder on $G$-Higgs bundles}\n\nWe keep working over ${\mathbbm C}$ and use our fixed smooth projective curve $C$. In addition let $G$ be a connected reductive group with Lie algebra $\mathfrak{g}=\Lie(G)$. We will denote the dual of $\mathfrak{g}$ by $\mathfrak{g}^*$.\n\nGiven a $G$-torsor $\mathcal{P}\to C$ and a representation $\rho\colon G \to \textrm{GL}(V)$ we will denote by $\mathcal{P}(V):=\mathcal{P}\times^G V$ the associated vector bundle.\n\nA $G$-Higgs bundle on $C$ is a pair $(\mathcal{P},\phi)$ where $\mathcal{P}\to C$ is a $G$-torsor and $\phi \in H^0(C,\mathcal{P}(\mathfrak{g}^*)\otimes K_C)$ is a global section of the coadjoint bundle twisted by $K_C$. We denote by \n$$ \Higgs_G := \left\langle (\mathcal{P},\phi) | \mathcal{P} \in \Bun_G, \phi \in H^0(C,\mathcal{P}(\mathfrak{g}^*)\otimes K_C) \right\rangle$$\nthe stack of $G$-Higgs bundles over $C$, which is the cotangent stack to the stack of $G$-bundles on $C$.\n\n\begin{remark}\n\tThe above definition follows the convention of \cite{BeilinsonDrinfeld}. In the literature on $G$-Higgs bundles it is also common to choose a $G$-invariant inner product $(,)$ on $\mathfrak{g}$ and use it to identify $\mathfrak{g} \cong \mathfrak{g}^*$. 
To state the results in an invariant form it seems to be most convenient to avoid this choice. As a consequence we will formulate some notions for the dual $\mathfrak{g}^*$ that are commonly used for $\mathfrak{g}$ for Higgs bundles, i.e., to use coadjoint orbits instead of adjoint orbits. \n\end{remark}\n\nLet us recall from \cite{NgoHitchin} how to view $G$-Higgs bundles as sections of a morphism of stacks.\n\n\begin{lemma}\label{Lem:NgoHiggsDescription} The category of Higgs bundles $(\mathcal{P},\phi)$ on $C$ is equivalent to the category of 2-commutative diagrams\n$$\xymatrix{\n& [\mathfrak{g}^*\/G\times {\mathbbm G}_m]\ar[d]\\\nC \ar[r]_-{K_C}\ar[ur]^-{(\mathcal{P},\phi)} & B{\mathbbm G}_m,\n}$$\nwhere $B{\mathbbm G}_m$ is the classifying stack of line bundles, $K_C$ is the map defined by the canonical bundle on $C$ and $[\mathfrak{g}^*\/G\times {\mathbbm G}_m]$ is the quotient stack defined by the product of the coadjoint action of $G$ on $\mathfrak{g}^*$ and the standard scaling action of ${\mathbbm G}_m$ on the vector space $\mathfrak{g}^*$.\n\end{lemma}\n\begin{proof}\nThis is not hard to unravel: By definition a $G$-torsor on $C$ is the same as a map $C \to BG =[\Spec{{\mathbbm C}}\/G]$, so the pair $\mathcal{P},K_C$ defines a map $C\to [B (G\times {\mathbbm G}_m)].$ Now for any representation $\rho \colon G\times {\mathbbm G}_m \to \textrm{GL}(V)$ the associated bundle is the pull back of the morphism $[V\/G\times{\mathbbm G}_m] \to B(G\times {\mathbbm G}_m)$ and applying this to the representation on $\mathfrak{g}^*$ we see that $\mathcal{P}(\mathfrak{g}^*)\otimes K_C = C\times_{B(G\times {\mathbbm G}_m)} [\mathfrak{g}^*\/(G\times {\mathbbm G}_m)]$. 
Therefore the datum of a section of this bundle is equivalent to a section of \n$$\xymatrix{\n\t& [\mathfrak{g}^*\/G\times {\mathbbm G}_m]\ar[d]\\\n\tC \ar[r]_-{(\mathcal{P},K_C)}\ar[ur]^-{(\mathcal{P},\phi)} & B(G\times {\mathbbm G}_m).\n}$$\n\end{proof}\n\n\subsection{Deformations of $G$-Higgs bundles}\nAs the main aim of the section is to compare derivatives of morphisms from and to $\Higgs_G$, we need to recall the basic results on deformations of Higgs bundles.\n\nTo a Higgs bundle $(\mathcal{P},\phi)$ we attach the complex of vector bundles on $C$\n\t$$\mathcal{C}(\mathcal{P},\phi):=[ \mathcal{P}(\mathfrak{g}) \map{\ad^*()(\phi)} \mathcal{P}(\mathfrak{g}^*) \otimes K_C],$$\nwhere $\ad^*\colon \mathfrak{g} \times \mathfrak{g}^* \to \mathfrak{g}^*$ denotes the coadjoint action of $\mathfrak{g}$ on $\mathfrak{g}^*$.\n\n\begin{lemma}[\cite{Nitsure}]\nThe tangent space of the deformation functor of $G$-Higgs bundles at $(\mathcal{P},\phi)\in \Higgs_G$ is given by $H^1(C,\mathcal{C}(\mathcal{P},\phi))$ and automorphisms of deformations that extend the identity of $(\mathcal{P},\phi)$ are given by $H^0(C,\mathcal{C}(\mathcal{P},\phi))$.\n\end{lemma}\n\begin{proof}\n\tThe deformation theory argument for the computation of the tangent space to $\Higgs_G$ can be found in \cite{Nitsure}. In the language of \cref{Lem:NgoHiggsDescription} we have a cartesian diagram:\t\n\t$$\xymatrix{\n\t\t[\mathfrak{g}^* \otimes K_C\/G]\ar[d]^{p_{K_C}}\ar[r] & [\mathfrak{g}^*\/G\times {\mathbbm G}_m] \ar[d]^p\\\n\t\tC \ar[r]^-{K_C} & B{\mathbbm G}_m =[\Spec k\/{\mathbbm G}_m]\n\t}$$ \n\tand Higgs bundles are sections of the map $p_{K_C}$. \n\t\n\tNow the tangent stack to any quotient stack $[X\/G]$ can be described as the quotient of the complex of $G$-vector bundles $\Lie(G) \times X \to TX$ on $X$, which we think of as a complex in degree $[-1,0]$. 
\n\t\n\tTherefore the tangent complex to the stack $[\\mathfrak{g}^*\/G]$ (which lives in degree $[-1,0]$) is given by the $G$-equivariant complex \n\t$$[\\mathfrak{g} \\map{\\ad^*} \\mathfrak{g}^*]$$\n\ton $\\mathfrak{g}^*$ and thus the tangent complex to $p_{K_C}$ over $\\mathfrak{g}^*\\otimes K_C$ is given by $$[\\mathfrak{g}\\otimes \\mathcal{O}_C \\to \\mathfrak{g}^* \\otimes K_C].$$ \n\t\n\tDeformations of $(\\mathcal{P},\\phi)$ are deformations of the corresponding section $(\\mathcal{P},\\phi) \\colon C\\to [\\mathfrak{g}^* \\otimes K_C\/G]$ and the pull back of the tangent complex at this section is $$[\\mathcal{P}(\\mathfrak{g}) \\map{\\ad^*()(\\phi)} \\mathcal{P}(\\mathfrak{g}^*) \\otimes K_C].$$ \n\\end{proof}\t\t\n\n\\begin{remark}\n\tFor any Higgs bundle $(\\mathcal{P},\\phi)$ the complex $\\mathcal{C}(\\mathcal{P},\\phi)$ is self-dual with respect to the duality defined by $\\cHom(\\,\\cdot\\, , K_C[1])$.\n\tTherefore Serre-duality induces pairings \n\t$$H^i(C,\\mathcal{C}(\\mathcal{P},\\phi)) \\times H^{2-i}(C,\\mathcal{C}(\\mathcal{P},\\phi)) \\to {\\mathbbm C}$$\n\tthat for $i=1$ define the standard 2-form $\\omega_{\\Higgs}$ on $\\Higgs_G=T^*\\Bun$.\n\\end{remark}\n\n\n\\subsection{The Hitchin morphism}\n\nThe Hitchin morphism for $G$-Higgs bundles is defined as follows. Denote by $\\chi$ the quotient map $$\\chi\\colon \\mathfrak{g}^* \\to \\mathfrak{g}^*\/\\!\/G =\\car^*,$$ where $\\car^*=\\Spec (\\Sym^\\bullet \\mathfrak{g})^G$. \n\n\\begin{remark}\n\tAs usual, a choice of homogeneous invariant polynomials would give an isomorphism $\\car^* \\cong \\Spec k[f_1,\\dots,f_r] \\cong {\\mathbbm A}^r$ identifying $\\car^*$ with an affine space. 
The map $\\chi$ is equivariant with respect to the ${\\mathbbm G}_m$ action on $\\mathfrak{g}^*$ and the induced action on $\\Spec (\\Sym^\\bullet \\mathfrak{g})^G$, whose weights are given by the degrees of the invariant polynomials $f_i$.\n\\end{remark}\n\nWe denote by $\\car^*_{K_C}=(\\mathfrak{g}^*\\times K_C\/\\!\/G) \\to C$ the corresponding affine bundle and by \n$$\\mathcal{A}_G := H^0 (C, \\car^*_{K_C})$$\nthe base of the Hitchin morphism. Again, any choice of invariant polynomials for $G$ defines an isomorphism $\\mathcal{A}_G \\cong \\oplus_i H^0(C,K_C^i)$, but it will be more convenient to avoid such a choice.\n\nThe map $\\overline{\\chi}\\colon[\\mathfrak{g}^*\/G\\times {\\mathbbm G}_m] \\to [\\car^*\/{\\mathbbm G}_m]$ then induces a map\n$$h_G\\colon \\Higgs_G \\to \\mathcal{A}= H^0(C,\\car^*_{K_C}),$$\nwhich is often denoted as $h_G(\\mathcal{P},\\phi)=:\\chi(\\phi)$.\n\n\\subsection{The regular centralizer}\n\nTo define the Hitchin morphism and the analog of the action of the Jacobian of the spectral curve we now recall the construction of the regular centralizer groups from \\cite{NgoHitchin}. \n\nLet us fix the standard notations. The group $G$ acts on $\\mathfrak{g}$ via the adjoint action, which we will denote by $\\Ad\\colon G \\to \\textrm{GL}(\\mathfrak{g})$ the derivative of this action is denoted $\\ad\\colon \\mathfrak{g} \\to \\End(\\mathfrak{g})$. Similarly $\\Ad^*\\colon G \\to \\textrm{GL}(\\mathfrak{g}^*)$ denotes the dual action given by $\\Ad^*(g)(\\phi)(\\underline{\\quad}):=\\phi(\\Ad(g)^{-1}.\\underline{\\quad})$, so that its derivative is $ad^*(X)=-\\ad(X)^t$. \n\nFor an element $\\varphi\\in \\mathfrak{g}^*$ we denote its centralizer in $G$ by \n$C(\\varphi):=\\{ g\\in G | \\Ad^*(G)(\\varphi)=\\varphi\\}$ and by $\\mathfrak{g}^{\\varphi}:=\\{A\\in \\mathfrak{g} | \\ad^*(A)(\\varphi)=0\\}$ its Lie algebra. 
The groups $C_G(\\varphi)$ define a group scheme \n$$C_{\\mathfrak{g}^*} :=\\{(g,\\varphi)\\in G\\times \\mathfrak{g}^*| \\Ad^*(g)(\\varphi)=\\varphi\\} \\to \\mathfrak{g}^*$$ over $\\mathfrak{g}^*$. The set of regular elements $\\mathfrak{g}^{*,\\reg}\\subset \\mathfrak{g}^*$ is defined to be the subset of those elements for which $\\dim C_G(\\varphi)=\\rank(G)$ is minimal.\n\nThe restriction $C_{\\mathfrak{g}{*,reg}}$ of $C_{\\mathfrak{g}^*}$ to the space of regular elements descends to a group scheme $J_{\\car^*}$ on $\\car^*=\\mathfrak{g}^*\/\\!\/ G$, called the regular centralizer. The group scheme $J_{\\car^*}$ comes equipped with a natural map \n$$m\\colon \\chi^*J_{\\car^*} \\to C_{\\mathfrak{g}^*} \\subset G\\times \\mathfrak{g}^*$$ \nwhich is defined to be the unique regular map extending the natural isomorphism $\\chi^*J_{\\car^*}|_{\\mathfrak{g}^{*,\\reg}} \\cong C_{\\mathfrak{g}^{*,reg}}$. We denote by $dm$ the induced map on Lie algebras $$dm \\colon \\chi^*Lie(J_{\\car^*}) \\to \\Lie(C_{\\mathfrak{g}^*} )\\to \\mathfrak{g} \\times \\mathfrak{g}^*.$$\n\n\\begin{notation}\n\tAs in \\cite{NgoHitchin} we will need to keep track of the action of the multiplicative group ${\\mathbbm G}_m$ on our objects. We will denote by ${\\mathbbm C}(n)$ the one dimensional vector space with the ${\\mathbbm G}_m$ action given by the $n$-th power of the standard action. For any vector bundle $E$ with a ${\\mathbbm G}_m$-action we will denote by $E(n):=E\\otimes {\\mathbbm C}(n)$.\n\\end{notation}\n\n\\begin{remark}\\label{rem:dm}\n\tOn $\\mathfrak{g}^*$ the group ${\\mathbbm G}_m$ acts by scalar multiplication which induces an action on $\\car^*=\\mathfrak{g}^*\/\\!\/G$. 
The action on $\\mathfrak{g}^*$ also preserves centralizers and thus induces an action on $C_{G,\\mathcal{G}}$, given by $$t.(g,\\varphi):=(g,t\\varphi).$$ In particular this action preserves $\\mathfrak{g}^{*,\\reg}$ and thus $C|_{\\mathfrak{g}^{*,\\reg}}$ even descends to a group $\\overline{J}$ over $[\\car^*\/{\\mathbbm G}_m]$.\n\t\n\tNote that the formula for the ${\\mathbbm G}_m$ action shows that the derivative\n\t\\begin{equation*}\ndm\\colon \\chi^*Lie(J_{\\car^*}) \\to \\mathfrak{g} \\times \\mathfrak{g}^*\n\t\\end{equation*} is equivariant for the ${\\mathbbm G}_m$--action that on $\\mathfrak{g} \\times \\mathfrak{g}^*$ is given by the trivial action on the first factor $\\mathfrak{g}$ and the standard action on the second factor $\\mathfrak{g}^*$. Therefore, identifying $\\mathfrak{g}(-1) \\times \\mathfrak{g}^* \\cong T^*\\mathfrak{g}^*$ we can interpret $dm$ as a morphism\n\t\\begin{equation}\\label{eq:dm}\n\tdm\\colon \\chi^*Lie(J_{\\car^*})(-1) \\to T^*\\mathfrak{g}^* \n\t\\end{equation}\n\tThe restriction of this map to $\\mathfrak{g}^{*,\\reg}$ is injective, as $m$ was injective over $\\mathfrak{g}^{*,\\reg}$.\n\\end{remark}\n\n\\begin{remark}\\label{rem:dchi}\t\n\tThe map $\\chi\\colon \\mathfrak{g}^{*} \\to \\car^*$ is by definition $G$-invariant and equivariant with respect to the ${\\mathbbm G}_m$ action, therefore its derivative\n\t\\begin{equation}\\label{eq:dchi}\n\td\\chi \\colon \\mathfrak{g}^* \\times \\mathfrak{g}^* = T\\mathfrak{g}^* \\to \\chi^* T\\car^*\n\t\\end{equation}\n\tis also equivariant with respect to the induced ${\\mathbbm G}_m$ action and the restriction \n\t$$d\\chi|_{\\mathfrak{g}^{*,\\reg}} \\colon T \\mathfrak{g}^{*,\\reg} = \\mathfrak{g}^* \\times \\mathfrak{g}^{*,\\reg} \\to \\chi^* T\\car^*|_{\\mathfrak{g}^{*,\\reg}}$$ is surjective, because the map $\\chi\\colon \\mathfrak{g}^{*} \\to \\car^*$ admits a section $\\kappa\\colon \\car^* \\to \\mathfrak{g}^{*,\\reg}\\subset \\mathfrak{g}^*$ called the Kostant 
section.\t\t\n\\end{remark}\n\nThe following observation is the group theoretic origin of the duality result for the Hitchin fibration.\n\\begin{lemma}\\label{lem:LocalPairing}\n\tThe canonical pairing $$\\langle \\,, \\, \\rangle \\colon T\\mathfrak{g}^* \\times_{\\mathfrak{g}^*} T^*\\mathfrak{g}^* \\to {\\mathbbm C}$$\n\tinduces a $G\\times{\\mathbbm G}_m$-equivariant perfect pairing \n $$\\chi^*Lie(J_{\\car^*})|_{\\mathfrak{g}^{*,\\reg}}(-1) \\times_{\\mathfrak{g}^*} \\chi^* T\\car^*|_{\\mathfrak{g}^{*,\\reg}} \\to {\\mathbbm C}(0)$$\n and thereby an isomorphism\n $$\\Lie(J)^*(1) \\cong T\\car^*.$$ \n\\end{lemma}\n\\begin{proof}\n\tFrom s \\cref{rem:dm},\\cref{rem:dm} we know that $\\chi^*Lie(J_{\\car^*})|_{\\mathfrak{g}^{*,\\reg}}(-1)$ is a subbundle of $T^*\\mathfrak{g}^{*,\\reg}$ and $\\chi^* T\\car^*|_{\\mathfrak{g}^{*,\\reg}}$ is a quotient of $T\\mathfrak{g}^{*,\\reg}$ and both have the same dimension.\n\t\n\tAs the map $\\chi$ is constant on $G$-orbits, the tangent space to a $G$ orbit is in the kernel of $d\\chi$, i.e., for every $\\varphi\\in \\mathfrak{g}^*$\n\t$$ V_\\varphi:=\\mathrm{Im}\\,( \\mathfrak{g} \\map{\\ad^*({\\quad})(\\varphi)} T_\\varphi\\mathfrak{g}^* =\\mathfrak{g}^* )\\subset \\ker(d\\chi).$$\n\tIf $\\varphi\\in\\mathfrak{g}^{*,\\reg}$ is regular we have $\\dim V_\\varphi = \\dim \\mathfrak{g}\/\\mathfrak{g}^\\varphi = \\dim \\mathfrak{g} - \\dim \\car^*$. As $d\\chi$ is surjective in this case we find $V_\\varphi=\\ker(d\\chi)$ for $\\varphi\\in \\mathfrak{g}^{*,\\reg}$. \n\t\n\tNow $G$-invariance of the pairing $\\langle \\,, \\, \\rangle$ i.e., $\\langle g.\\varphi,g.A\\rangle=\\langle\\varphi,A\\rangle$ for all $g\\in G,\\varphi\\in \\mathfrak{g}^*,A\\in\\mathfrak{g}$ implies that for all $X\\in\\mathfrak{g}$ we have\n\t$$\\langle \\ad^*(X)(\\varphi),A\\rangle=\\langle\\varphi,-\\ad(X)(A)\\rangle=-\\langle\\ad^*(A)(\\varphi),X\\rangle.$$ \n\tThis implies that $V_\\varphi^\\perp=\\mathfrak{g}^\\varphi$ and this implies our claim. 
\n\\end{proof}\t\n\n\n\n\\begin{remark}\\label{LocalComputationForGLn}\n\tFor $G=\\textrm{GL}_n$ the above can be rephrased in terms of coordinates. In this case $\\car^*\\cong {\\mathbbm A}^n$ is the space of characteristic polynomials of matrices. In order to compute the differential $d\\chi$ of the map $\\chi \\colon \\mathfrak{gl}_n \\to \\car^*$ it is convenient to choose the coordinates $\\chi(\\varphi):=(\\frac{1}{i}\\Trace(\\varphi^i))_{i=1\\dots n}$. Then $d\\chi_\\varphi\\colon \\mathfrak{gl}_n \\to k^n$ is given by $X \\mapsto (\\Trace(\\varphi^{i-1}X))_{i=1 \\dots n}$.\n\t\n\tThe regular centralizer group scheme can also be described explicitly: For any monic polynomial $p(t)\\in k[t]$ we define $J_p:=(k[t]\/p(t))^*$ as the unit group of the algebra $k[t]\/p(t)$, which defines an $n$-dimensional commutative group scheme $J$ over ${\\mathbbm A}^n$. As a matrix $\\varphi$ is regular if and only if its characteristic polynomial $p_\\varphi$ is its minimal polynomial, we see that the assignment $J_{p_\\varphi} \\to \\textrm{GL}_n$ given by $f(t) \\mapsto f(\\varphi)$ is injective for regular matrices $\\varphi$ and therefore identifies $J_{p_\\varphi}$ with the centralizer of $\\varphi$. By definition of the regular centralizer the map $\\chi^*(J)\\to I \\subset \\textrm{GL}_n \\times \\mathfrak{gl}_n$ is given by the unique extension of the canonical map on $\\mathfrak{gl}^{\\reg}$. As the formula $f(t)\\mapsto f(\\varphi)$ is well defined for all $\\varphi$ it gives this extension.\n\t\n\tWe also observe that $s \\in {\\mathbbm G}_m$ acts on $\\car^*$ by $p\\mapsto s.p$, where $s.p$ is the polynomial given by multiplying the coefficient of $t^{n-i}$ by $s^i$. This lifts to an action $J_p \\to J_{s.p}$, given by $t\\mapsto st$ and this is compatible with the above map $f(t) \\mapsto f(\\varphi)$. \n\t\n\tNote that $\\Lie(J_p)\\cong k[t]\/(p(t))$ (as $1+\\epsilon f(t)$ is an invertible element of $k[\\epsilon,t]\/(\\epsilon^2,p(t))$ for all $f$). 
Finally the standard basis $1,t,\dots, t^{n-1}$ of $k[t]\/p(t)$ defines an isomorphism $\Lie(J) \cong k^n \times \car^*$.\n\t\n\tThus for any $\varphi$ the map $dm \colon k^n \cong \Lie(J_p) \to \mathfrak{gl}_n$ is given by $(a_i)\mapsto \sum_{i=0}^{n-1} a_i \varphi^i$.\n\t\n\tFinally, we use the pairing $(A,B):=\Trace(AB)$ on $\mathfrak{gl}_n$. With respect to this form the dual of the map $k^n \cong \Lie(J_p) \to \mathfrak{gl}_n$ is therefore given by $$X \mapsto (\Trace(\varphi^iX))_{i=0\dots n-1}$$ which is $d\chi_\varphi$.\t\n\end{remark}\n\n\begin{remark}\label{rem:LocalVersionOfDuality}\n\tWe can reformulate the above Lemma as a duality statement on $[\mathfrak{g}^*\/G]$: As for any quotient stack, the tangent stack to this quotient is defined by the complex \n\t$$ [\mathfrak{g} \times \mathfrak{g}^* \map{(A,\phi) \mapsto (\ad^*(A)(\phi),\phi)} T\mathfrak{g}^* = \mathfrak{g}^* \times \mathfrak{g}^*],$$\n\ti.e., the quotient stack of these bundles is the pull-back of the tangent stack to $\mathfrak{g}^*$. 
This complex is self-dual up to a shift by $1$.\n\t\n\tConsidering $\chi$ as a morphism $\overline{\chi}\colon [\mathfrak{g}^*\/G] \to \car^*$ the differential becomes the morphism \n\t$$ [\mathfrak{g} \times \mathfrak{g}^* \map{(A,\phi) \mapsto (\ad^*(A)(\phi),\phi)} \mathfrak{g}^* \times \mathfrak{g}^*] \map{(0,d\chi)} [ 0 \to T\car^*].$$\n\t\n\tSimilarly, as the morphism $\chi^*J \to I$ is $G$-equivariant (because $J$ was defined by descending $I|_{\mathfrak{g}^{*,\reg}}$), the morphism $dm$ defines a $G$-equivariant morphism $$[\chi^*\Lie(J) \to 0] \map{(dm,0)} [\mathfrak{g} \times \mathfrak{g}^* \map{(A,\phi) \mapsto (\ad^*(A)(\phi),\phi)} \mathfrak{g}^* \times \mathfrak{g}^*].$$\n\t\n\t \cref{lem:LocalPairing} says that these morphisms are ${\mathbbm G}_m$-equivariantly dual to each other up to a shift by $1$ of the complex and twisting the action by $(1)$.\n\end{remark}\n\subsection{The global regular centralizer}\n\nLet us recall the global version of the regular centralizer as explained in \cite[Section 4]{NgoHitchin}: We saw that the regular centralizer defines a group scheme $\overline{J}$ on $[\car^*\/{\mathbbm G}_m]$, that we can pull back to a group scheme $J_{\car^*_{K_C}}$ on $\car^*_{K_C}=C \times_{B{\mathbbm G}_m} [\car^*\/{\mathbbm G}_m]$ which we pull back via the tautological map $\mathcal{A}_G\times C\to \car^*_{K_C}$ to define a group scheme $J_{\mathcal{A}_G}$ on $\mathcal{A}_G\times C$. \n\nSimilarly, the pull back of the sheaf of centralizers $\overline{C}_{\mathfrak{g}^*}$ on $[\mathfrak{g}^*\/G\times {\mathbbm G}_m]$ under the classifying map $\Higgs_G \times C \to [\mathfrak{g}^*\/G\times {\mathbbm G}_m]$ is denoted $C_{\Higgs\times C}$. 
By construction $C_{\\Higgs\\times C}=\\Aut(\\mathcal{E}_{\\textrm{\\tiny univ}},\\phi_{\\textrm{\\tiny univ}})$ is identified with the group of $G$-automorphisms of the universal Higgs bundle $(\\mathcal{E}_{\\textrm{\\tiny univ}},\\phi_{\\textrm{\\tiny univ}})$ that preserve $\\phi_{\\textrm{\\tiny univ}}$.\n\nThe map $\\chi^*J \\to C_{\\mathfrak{g}^*}$ therefore induces a natural morphism $$\\iota\\colon(h\\times id_C)^*J_{\\mathcal{A}} \\to \\Aut(\\mathcal{E}_{\\textrm{\\tiny univ}},\\phi_{\\textrm{\\tiny univ}}).$$ \n\nOver $\\mathcal{A}_G$ one defines the group scheme $P_{\\mathcal{A}_G}$ of $J_{\\mathcal{A}_G}$-torsors on $C$, i.e., at a point $a\\colon C \\to \\car^*_{\\Omega}$ is given by the torsors of the group scheme of $a^*J_{\\mathcal{A}_G}$ on $C$. Then $\\iota$ induces an action $\\act\\colon P_{\\mathcal{A}_G}\\times_{\\mathcal{A}_G} \\Higgs_G \\to \\Higgs_G$.\n\n\n\\subsection{The duality statement}\nWe can now formulate the main result of this section:\n\\begin{proposition}\\label{prop:dactdhdual}\n\tThere exists a canonical isomorphism $\\Lie(P_{\\mathcal{A}_G}\/\\mathcal{A}_G) \\cong T^*\\mathcal{A}_G$ such that the morphisms \n\t$$d\\act\\colon h^*\\Lie (P_{\\mathcal{A}_G}\/\\mathcal{A}_G) \\to T\\Higgs_G$$\n\tand $$dh\\colon T\\Higgs_G \\to h^*(T\\mathcal{A}_G)$$\n\tbecome dual to each other with respect to the symplectic form $\\omega_{\\Higgs}$.\n\\end{proposition}\n\n\\begin{proof}\n\tThe result follows from the local statement \\cref{lem:LocalPairing} as follows:\n\tThe regular centralizers $P_a$ are defined to be $a^*J_{\\mathcal{A}}$-torsors, so $\\Lie(P_a)=H^1(C,\\Lie(a^*J_{\\mathcal{A}_G}))$. \n\t\n\tFor any Higgs bundle $(\\mathcal{E},\\phi)$ the action of $\\act_{(\\mathcal{E},\\phi)}\\colon \\Lie(P_a) \\to h^{-1}(a)$ is induced from $\\iota\\colon(h\\times id_C)^*J_{\\mathcal{A}_G} \\to C_{(\\mathcal{E},\\phi)}\\subset \\Aut(\\mathcal{E}\/C)$. 
\n\t\n\tTherefore applying $\\Lie$ we find that the differential of the action is induced from the morphism of complexes \n\t$$ [\\Lie(a^*J_{\\mathcal{A}}) \\to 0 ] \\map{(d\\act,0)} [ \\ad(\\mathcal{E}) \\map{\\ad^*()(\\phi)} \\ad(\\mathcal{E})^* \\otimes \\Omega]$$\n\tafter passing to $H^1$.\n\t\n\n\n\n\n\t\n\tBy \\cref{lem:LocalPairing} we know that this map of complexes is, up to tensoring with $K_C[-1]$, dual to the map\n\t$$[ \\ad(\\mathcal{E}) \\map{\\ad^*()(\\phi)} \\ad(\\mathcal{E})^* \\otimes K_C] \\map{(0,d\\chi)} [0 \\to T\\car^*_{K_C}]$$\n\tthat induces $dh$ by \\cref{rem:LocalVersionOfDuality}. Therefore, applying Serre duality to $H^1$ of the above complexes, we obtain the proposition. \n\\end{proof}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nChange in the topological structure of the ground state, driven by disorders, has been intensively investigated recently, and it is thought to be responsible for potentially novel criticality which involves the interplay of topological structures, disorders, and interactions \\cite{Ruy12}. Such a change of topological structure is reflected in magneto-electrical transport phenomena such as anomalous Hall and spin Hall effects \\cite{Nagaosa10}, negative longitudinal magnetoresistance \\cite{Nielsen83,HJKim13}, the chiral magnetic effect \\cite{Fuku08}, and so on. Even if the topology of the ground state is trivial, its anomalous geometric (local) structure, described by the Berry curvature or determined by spin chirality, can also be affected by disorders \\cite{Xiao10}, showing an interesting variation of magneto-electrical transport, for example a crossover from weak anti-localization to weak localization driven by randomness. 
BiTeI may be an appropriate platform to investigate the interplay between the Berry phase and disorder, originating from its unique electronic structure with broken inversion symmetry.\n\nInversion symmetry breaking in BiTeI splits a single degenerate band near the hexagonal face center of the Brillouin zone, referred to as the A point, into an inner band with a left-handed or ``positive'' spin-chiral configuration and an outer one with a right-handed or ``negative'' configuration, whose spin structures are intimately locked with momentum\\cite{Ish11,Cre12,Lan12}. As a result, the low-energy physics of this inversion-symmetry-broken material is governed by two distinct Fermi surfaces when the Fermi energy $E_F$ lies near the band-touching point generated by the Rashba spin-orbit interaction. See Fig. 1(a). Dynamics of electrons on the inner Fermi surface (IFS) is described by the Weyl equation, exhibiting the change of the Fermi surface character from electron-like to hole-like across the Weyl point. Indeed, a nontrivial Berry phase of $\\pi$ has been detected for both the IFS and outer Fermi surface (OFS) in the Shubnikov-de Haas measurements \\cite{Mura13,Park}. Thus, this system is expected to show the physics of a Weyl\/Dirac metal \\cite{Balents11} with an interesting response to disorder \\cite{Hosur12,Bjorn14,Roy14}. \n\nUp to now, however, this important point in BiTeI has been overlooked. Most electrical transport studies have focused on measurements at high magnetic fields to detect Shubnikov-de Haas or quantum oscillations \\cite{Mura13,Park,Bell13,Martin13}. Probably, this is because Shubnikov-de Haas or quantum oscillations are considered to be among the few experimental techniques that provide essential information about the nontrivial Berry phase in this system \\cite{Mura13,Park}. 
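The two concentric Fermi surfaces described above can be made explicit from the in-plane Rashba dispersion $\varepsilon_\pm(k_\perp)=\hbar^2k_\perp^2/2m\pm\alpha_R k_\perp$. The sketch below works in dimensionless units ($\hbar^2/m=\alpha_R=1$, energies measured from the band-touching point) with an arbitrary Fermi level; the numbers are illustrative, not fitted BiTeI parameters.

```python
import math

# In-plane Rashba dispersion in dimensionless units (hbar^2/m = alpha_R = 1),
# energies measured from the band-touching (Weyl) point at k = 0:
#   eps_-(k) = k**2/2 - k   (lower band; minimum -1/2 at k = 1)
# For -1/2 < E_F < 0 the lower band is crossed twice, giving the inner (IFS)
# and outer (OFS) Fermi radii in the k_z = 0 plane.
E_F = -0.18                       # hypothetical Fermi level below the Weyl point
root = math.sqrt(1 + 2 * E_F)
k_in, k_out = 1 - root, 1 + root  # two concentric Fermi radii
print(k_in, k_out)
```

For $E_F$ above the band-touching point both chiral branches are crossed instead, which is the electron-like IFS regime discussed later for the high-density samples.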
Another direction of research in BiTeI, in connection with its nontrivial topology, is to induce a topological quantum phase transition and a topological insulator under pressure, first proposed by Nagaosa and his colleagues \\cite{Bah12}. Indeed, closing of the energy gap and some indirect signatures of the topological quantum phase transition were observed experimentally \\cite{Xi13,Tran14}, but the nature of this topological critical point and of a topological insulator under pressure are still elusive, particularly from the experimental point of view. \n\nIn this paper, we investigate the interplay between the Berry phase and randomness in magneto-electrical transport properties of BiTeI. By analyzing the Hall and magnetoresistivity of Fermi-energy-tuned BiTeI single crystals, particularly at low magnetic fields, we reveal an extreme disparity of the mobility values between IFS and OFS near the Weyl point, where the IFS mobility becomes colossally enhanced, intimately related with anti-localization in electrical transport. Based on the self-consistent Born approximation, we explain this disparity and ``divergent'' IFS mobility near the Weyl point quantitatively. We identify this fixed-point solution for BiTeI as a diffusive helical Fermi liquid, characterized by a pair of concentric spin-chiral Fermi surfaces with negligible inter-valley scattering. Our theoretical analysis indicates the existence of a crossover in the ``topological'' structure or geometric phase toward a conventional diffusive Fermi liquid when the stronger-disorder-enhanced inter-valley scattering destroys the spin-chiral property. However, we realize that this mean-field theory for disorders fails to describe the universal scaling in Hall resistivity, which is another main experimental result. 
We speculate that this failure in the self-consistent Born analysis implies the existence of mass renormalization of the IFS near the Weyl point, possibly resulting from enhanced interactions between electrons near the Weyl point.\n\nThe main experimental observations made on six Fermi-energy-tuned BiTeI single crystals are (1) an anomalous weak-field feature in the Hall resistivity $\\rho_H(B)$, (2) an unconventional magnetic field $B$ dependence of the magnetoresistance (MR), which is in stark contrast with the usual $B$-quadratic MR in a metal, and (3) a universal scaling of the Hall resistivity. The first experimental result is analyzed and understood based on a picture that two types of charge carriers exist in BiTeI: one with small mobility and the other with very large mobility. \nIndeed, we find that the overall negative slope in $\\rho_H(B)$ is determined by electrons on the OFS.\nHowever, we also observe the deviation of $\\rho_H$ from the linear dependence in the low-$B$ region.\nWe assign it to the contribution of the Weyl fermions in the IFS. \n\nThe second result about MR is also consistent with the existence of two kinds of charge carriers in that the total electrical conductivity \n$\\sigma_{total}$ in a magnetic field is decomposed into two channels of conduction given by $\\sigma_{total} =\\sigma_{OFS}+\\sigma_{WF}$,\n where $\\sigma_{OFS}$ and $\\sigma_{WF}$ are the conductivity contributions of the OFS and IFS, respectively. One can rewrite $\\sigma_{total}$ as $\\sigma_{total} = \\sigma_c +\\Delta\\sigma^N_{out} +\\Delta\\sigma^N_{in} +\\Delta\\sigma_{WAL}$ with $\\Delta\\sigma_{WAL} \\propto \\sqrt{B}$, where $\\sigma_c$, $\\Delta\\sigma^N_{out(in)}$, and $\\Delta\\sigma_{WAL}$ are the field-independent conductivity, the conductivity contribution of the OFS (IFS), and the weak anti-localization correction in three dimensions (3D), respectively. The explicit form of each component will be given later. 
One important outcome of this MR analysis is the confirmation of the 3D weak anti-localization contribution in $\\sigma_{total}(B)$.\n The analysis of $\\rho_H(B)$ and $\\sigma_{total}$ enables us to extract separately the mobility values of the charge carriers in the OFS and IFS for all six samples. We plot the mobility values as a function of $E_F$ in Fig. 1(b). These data show an extreme disparity of the mobility values between the IFS and OFS and a ``divergent'' IFS mobility near the Weyl point. The detailed procedure by which the IFS and OFS mobility values \n are obtained will be presented in subsequent sections. \n Here, we emphasize that the mobility disparity and ``divergent'' IFS mobility near the Weyl point are determined not by the scattering time but by the transport time. As the transport time is a scattering time weighted more heavily by backward scattering processes, the chiral nature is reflected more in the transport time.\n \n\nThe rest of the paper is organized as follows. In Sec. II, we discuss the sample synthesis, magneto-electrical transport experiments, and analysis of the data in detail. In this section, we introduce two-carrier models for $\\rho_H(B)$ and $\\sigma_{total}(B)$, which are necessary to \nquantitatively explain the low-field features observed in $\\rho_H(B)$ and $\\sigma_{total}(B)$. As an outcome of the analysis, we determine the OFS and IFS mobility values of six BiTeI single crystals with different $E_F$ [Fig. 1(b)]. In Sec. III, we calculate the IFS and OFS mobility values based on the Rashba model within the self-consistent Born approximation. Here we consider two different cases: one considers only the intra-valley forward scattering and the other includes both intra- and inter-valley scattering. It is revealed that the experimental IFS and OFS mobility values are quite well reproduced within this model in the absence of inter-valley scattering or for weak inter-valley scattering. 
Besides, we predict how the ground state of BiTeI changes as the disorder increases by using renormalization group (RG) arguments. According to these, \nthe inter-valley scattering smears out the spin-chirality with increasing disorder, leading to a topological crossover or a weak version of a topological phase transition driven by disorder. This is also accompanied by a change of the quantum correction in electrical transport from weak anti-localization to weak localization. \nWithin this picture, the BiTeI single crystals which we investigate are in a weakly disordered region with negligible inter-valley scattering, called a diffusive helical Fermi liquid. In Sec. IV, we discuss implications of the experimental results based on the theory introduced in Sec. III. In fact, we find the existence of universal scaling in the Hall resistivity from the experimental results. \nThis scaling, however, is not reproduced within the self-consistent Born approximation. This necessitates mass renormalization in the IFS beyond the independent electron picture, especially near the Weyl point. We conclude in Sec. V with a brief discussion of our main results. \n\n\n\n\\section{Experiment}\n\\subsection{Sample synthesis}\nSingle crystals of BiTeI were grown by a modified Bridgman method. We prepared more than 20 samples \nand tried to vary the carrier density $n$ by adding a small amount of extra Bi. \nAs the amount of additionally inserted Bi is quite small, X-ray diffraction measurements do not detect any change of structure\n in the doped samples.\nWe selected six single crystals ($\\#$1 $-$ $\\#$6). \nAll the as-grown single crystals were degenerate semiconductors, exhibiting a metallic behavior. \nCarrier densities $n$ were determined by the linear part of the Hall resistivity. \nTheir signs are all negative, implying that the dominant charge carriers are electrons, presumably those of the OFS. 
\nCarrier densities from the linear part were determined to be 0.10, 0.30, 0.35, 0.80, 3.9, and \n6.4$\\times10^{20}$ cm$^{-3}$ for $\\#$1 $-$ $\\#$6, respectively. \nEstimated from the linear part of $\\rho_H$, the Fermi energies from the bottom of the conduction band are 40, 90, 100, 170, 550, and 760 meV for the six samples. As the Weyl point is located at 113 meV from the bottom of the conduction band \\cite{Bahramy11}, the former three (the latter three) should have positive (negative) charge carriers in the IFS. Later we will show that this, in fact, is consistent with the sign change in the deviation of $\\rho_H$ from the linear dependence.\nThe temperature dependence of the resistivity $\\rho(T)$ is presented in Fig. 2, \nshowing the overall decrease of the $\\rho(T)$ curves with the increase of $n$. \nThis behavior confirms that our samples are in the regime of typical degenerate semiconductors.\nSpecifying the distribution of the excess Bi and volatile I in the BiTeI samples can provide an important clue about the nature of disorder in this system,\n especially in connection with the results obtained in the present \ntransport experiments. Even though we verified that our single crystals are homogeneous and uniform on a macroscopic scale, probably because of the small amount of excess Bi, it is considered that nanoscale inhomogeneity can still exist. What type of local disorders or defects can promote intra- or inter-valley scattering in BiTeI is a very important question which should be addressed in future studies. The six samples investigated in the present study are considered to be in a weakly disordered region based on our final results, which suggests that the effects of disorder on electrical transport are nearly equal, at least among those samples. 
\n\n\n\\subsection{Analysis of magnetoresistivity and Hall resistivity}\nThe transverse MR = $(\\rho(B) - \\rho(0))\/\\rho(0)$, with $\\rho(B)$ and $\\rho(0)$ the resistivity at $B$ and $B = 0$, respectively, \nand the Hall resistivity $\\rho_H(B)$ for $\\#1 - \\#6$ are measured at 4.2 K and up to $B$ = 4 T. \nWhile the magnitude of MR is only a few percent even at $B$ = 4 T for all samples, $\\#1 - \\#5$ show weak-field anomalies, which deviate significantly from the conventional $B$-quadratic behavior, whereas $\\#6$ with the largest $n$ does not, as presented in Fig. 3(a). In particular, the MR for $\\#1$ \npossesses a pronounced dip in the weak-field region. Even beyond the region of the dip, the MR does not recover the $B$-quadratic behavior. \nSamples $\\#1 - \\#5$ exhibit essentially the same features. \n\n\nHall resistivity curves are almost linear with negative slopes, suggesting the existence of ``normal'' negative charge carriers. \nHowever, a more careful inspection of the low-field region reveals tiny weak-field anomalies, displayed in Fig. 3(b), \nwhich plots the deviation $\\Delta \\rho_H$, where the overall linear dependence is subtracted from the Hall resistivity $\\rho_H(B)$. \nThese data indicate that the Hall resistivity curves deviate from linearity significantly in the field region where a corresponding dip in MR is observed. While the deviations in $\\#2 - \\#5$ are confined \nto $-1$ T $<$ $B < 1$ T, they extend to $-4$ T $< B < 4$ T for $\\#1$ and $\\#6$. The shape of $\\Delta \\rho_H$ in Fig. 3(b)\n is reminiscent of the general case for the Hall resistivity with two types of charge carriers \\cite{Wang14}. \n In the limit that one mobility is much larger than the other, the formula in the low-field region is simplified into\n\\begin{equation}\n\\rho_H \\approx \\frac{1}{n_1ec}\\frac{B}{1+(\\mu_1B)^2}+\\frac{B}{n_2ec},\n\\end{equation}\nwhere $n_1$ and $n_2$ are the carrier densities with larger and smaller mobility, \nand $\\mu_1$ is the larger mobility. 
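As a minimal numerical sketch of Eq. (1) (in SI units, with the Gaussian factor $c$ dropped; all parameter values are purely illustrative, not the measured ones), one can check that the deviation from the linear background is extremal at $B=1/\mu_1$, which is why the weak-field anomaly directly encodes the high-mobility value:

```python
import numpy as np

# Two-carrier Hall resistivity in the limit mu1 >> mu2, Eq. (1):
#   rho_H(B) ~ (1/(n1*e)) * B/(1 + (mu1*B)**2) + B/(n2*e)
# (SI units; the factor c in Eq. (1) is the Gaussian-units convention.)
e = 1.602e-19            # elementary charge [C]
n1, n2 = 1e24, 1e26      # hypothetical carrier densities [m^-3]
mu1 = 10.0               # hypothetical large mobility [m^2/Vs]

B = np.linspace(-4.0, 4.0, 2001)                   # field [T]
rho_H = B / (n1 * e * (1 + (mu1 * B)**2)) + B / (n2 * e)
delta = rho_H - B / (n2 * e)     # subtract the overall linear part

# d(delta)/dB = 0 at B = 1/mu1, so the anomaly peaks there:
B_peak = B[np.argmax(delta)]
print(B_peak)
```

The same logic with holes on the inner surface flips the sign of the first term, reproducing the sign change of the anomaly discussed next.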
If this simple expression explains the origin of the weak-field anomaly well, \nit suggests the existence of a charge carrier with extremely high mobility, whose value corresponds to \nthe maximum or minimum of $\\Delta \\rho_H$. Indeed, the first term of Eq. (1) fits the $\\Delta \\rho_H(B)$ data quite well, giving the mobility\nvalue of the high-mobility carrier as shown in Fig. 4.\nWe also observe that the sign of $\\Delta \\rho_H$ is positive for $\\#1 - \\#3$ \nand negative for $\\#4 - \\#6$, respectively, which implies that the charge carrier with extremely high mobility \nis hole-type for $\\#1 - \\#3$ and electron-type for $\\#4 - \\#6$.\n\nConsidering the band structure of BiTeI near $E_F$, \nwe assign the charge carrier with extremely high mobility and the other one to be the Weyl fermion in the IFS and the OFS electron, respectively. \nWhile the OFS mobility is estimated from the linear part of the Hall resistivity and the residual resistivity, the mobility of the Weyl fermions can be obtained from the fitting of $\\Delta\\rho_H$ to the first term of Eq. (1). Our analysis based on Eq. (1) turns out to explain $\\Delta\\rho_H$ at a quantitative level. The mobility values of the Weyl fermion and the OFS carrier are plotted for the six samples in Fig. 1(b) as a function of $E_F$.\n\nIt is appealing that the simple formula of Eq. (1) for the Hall resistivity explains the low-field region quite well. However, one might speculate that there must be an anomalous Hall effect, either intrinsic (Berry curvature) or extrinsic (side jump or skew scattering) \\cite{Nagaosa10}, because there are Weyl fermions in BiTeI. Although we cannot rule out the appearance of the extrinsic anomalous Hall effect, we strongly believe that the anomalous Hall effect induced by Berry curvature does not exist. 
The intrinsic anomalous Hall conductivity can be classified into two contributions, one of which results from the contribution of all states below the Fermi energy, given by the distance of momentum space between a pair of Weyl points \\cite{Chen14,Goswami13}, and the other of which originates from the contribution of Fermi surfaces with Berry phase. The second is non-universal \\cite{Haldane04,Xiao10,Nagaosa10}. Since a pair of Weyl points exists at the same momentum point, the first contribution vanishes. On the other hand, the second contribution from IFS and OFS may still exist, giving rise to an offset near the zero-field region. However, both contributions from the OFS and IFS will be cancelled because the sum of their Berry phases vanishes.\n\n\nAs in Hall resistivity, we also assume the existence of two conductivity channels. \nThen, the total contribution of electrical conductivity in BiTeI is given by $\\sigma_{total}=\\sigma_{OFS}+\\sigma_{WF}$, \nwhere $\\sigma_{OFS}=\\frac{\\sigma_{out}+\\Delta\\sigma^{out}_{WAL}}{\\sigma_{out}^{-2}(\\sigma_{out}+\\Delta\\sigma^{out}_{WAL})^2+\\omega^2_{out}\\tau^2_{out}}$\nis the conductivity from the OFS and $\\sigma_{WF}=\\frac{\\sigma_{in}+\\Delta\\sigma^{in}_{WAL}}{\\sigma_{in}^{-2}(\\sigma_{in}+\\Delta\\sigma^{in}_{WAL})^2+\\omega^2_{in}\\tau^2_{in}}$ is that from Weyl fermions of the IFS. These expressions can be derived from the Boltzmann-equation approach, where the role of the Berry phase is introduced into the Boltzmann equation via the weak anti-localization correction phenomenologically \\cite{Ki14}.\n$\\sigma_{in(out)}$ and $\\Delta\\sigma_{WAL}^{in(out)}$ are the residual conductivity at zero magnetic field and the weak anti-localization correction, respectively. $\\omega_{in(out)}$ is the cyclotron frequency, and $\\tau_{in(out)}$ is the transport time. 
\nEmploying $\\Delta\\sigma_{WAL}^{in(out)}=a_{in(out)}\\sqrt{B}$ in three dimensions, we are allowed to assume $\\sigma_{in(out)} \\gg \\Delta\\sigma_{WAL}^{in(out)}$ in the weak-field region. Then, these equations are simplified as follows: \n$\\sigma_{OFS} \\approx \\frac{\\sigma_{out}+\\Delta\\sigma^{out}_{WAL}}{1+\\omega^2_{out}\\tau^2_{out}} \\approx \\sigma_{out}^N+\\Delta\\sigma^{out}_{WAL}$ and\n$\\sigma_{WF} \\approx \\frac{\\sigma_{in}+\\Delta\\sigma^{in}_{WAL}}{1+\\omega^2_{in}\\tau^2_{in}} \\approx \\sigma_{in}^N+\\Delta\\sigma^{in}_{WAL}$, respectively, where $\\sigma_{out}^N = (\\rho_{out}+A_{out}B^2)^{-1} \\approx \\rho_{out}^{-1}-\\rho_{out}^{-2}A_{out}B^2$ \nwith $\\rho_{out} \\gg A_{out}B^2$ and $\\sigma_{in}^N = (\\rho_{in}+A_{in}B^2)^{-1}$. \nThe total magneto-electrical conductivity is finally written as $\\sigma_{total} = \\sigma_c + \\Delta\\sigma_{out}^N + \\Delta\\sigma_{in}^N + \\Delta\\sigma_{WAL}$ with $\\Delta\\sigma_{WAL} = \\Delta\\sigma_{WAL}^{out}+\\Delta\\sigma_{WAL}^{in}$,\n where all field-independent constants are collected into $\\sigma_c$.\n \nFig. 5(a) shows the decomposition of the magneto-electrical conductivity, $\\Delta\\sigma = \\sigma_{total} - \\sigma_c$, for the sample $\\#1$. \nThe sample $\\#6$ with the highest $n$ is described only with $\\Delta\\sigma_{out}^{N}$, presumably because $E_F$ is far away from the Weyl point. On the other hand, for the other samples, all the other terms are necessary to describe the magneto-electrical conductivity properly. Performing successful decompositions, we isolate the weak anti-localization correction in Fig. 5(b), where all samples except for $\\#6$ exhibit the scaling behavior with $\\sqrt{b}$ dependence, where $b = \\hbar \\omega\/E_F$ is a dimensionless reduced magnetic field and \n$\\omega$ is the cyclotron frequency. The existence of $\\Delta\\sigma_{WAL}$ for $\\#1 - \\#5$ \njustifies the validity of our data analysis. 
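The decomposition described above amounts to a linear fit in the basis $\{B^2,\sqrt{B}\}$. A sketch with synthetic data (hypothetical coefficients, not the fitted sample values) shows how normal channels varying as $-B^2$ and the $\sqrt{B}$ weak anti-localization term are separated, and how a mobility follows from the quadratic MR coefficient via $\mu=\sqrt{A/\rho}$:

```python
import numpy as np

# Synthetic weak-field data: two "normal" channels varying as -B^2
# (conductivity suppressed by the field) plus a 3D weak anti-localization
# term a*sqrt(B). All coefficients are hypothetical placeholders.
B = np.linspace(0.01, 4.0, 400)
c_out, c_in, a = 2.0, 0.5, -3.0
dsigma = -(c_out + c_in) * B**2 + a * np.sqrt(B)

# Separate the channels by linear least squares in the basis {B^2, sqrt(B)}:
X = np.column_stack([B**2, np.sqrt(B)])
coef, *_ = np.linalg.lstsq(X, dsigma, rcond=None)   # -> [-(c_out+c_in), a]

# The quadratic MR coefficient A and zero-field resistivity rho give the
# mobility via mu = sqrt(A/rho); e.g. for a single hypothetical channel:
rho_out, A_out = 1.0e-5, 4.0e-9     # hypothetical [Ohm m], [Ohm m/T^2]
mu_out = np.sqrt(A_out / rho_out)   # ~0.02 m^2/Vs in this example
print(coef, mu_out)
```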
\n\nOur analysis of the magneto-electrical conductivity demonstrates the existence of two types of charge carriers, one of which has an extremely high mobility, identified with Weyl fermions on the IFS, given by $\\mu^2_{WF} = A_{in}\/\\rho_{in}$. Fig. 1(b) \ndisplays the mobility as a function of the Fermi energy $E_F$, \nwhere the value of $\\mu_{out} = \\sqrt{A_{out}\/\\rho_{out}}$ [(black) open circles] is of the order of $0.01 \\sim 0.03$ m$^2$\/Vs \nwhile that of $\\mu_{WF}$ [(red) open squares] is two or three orders of magnitude larger than $\\mu_{out}$. In particular, $\\mu_{WF}$ reaches its maximum when $E_F$ is closest to the Weyl point. The enhancement of $\\mu_{WF}$, compared to $\\mu_{out}$, is partially a consequence of a reduced phase space available for the scattering in the IFS, which is an intrinsic property of the Weyl metal as derived in the following theoretical sections. It is noted that the mobility values deduced from $\\Delta\\rho_H$ are very similar to those from the MR analysis. \n\n\n\n\\section{Calculation of mobility values within the Rashba model}\n\\subsection{Effective model Hamiltonian} \n\nWe start from the Rashba model with potential randomness:\n\\begin{eqnarray*}\nS[\\bar{\\Psi}_{i\\alpha}(\\tau,\\bm{x}),\\Psi_{i\\alpha}(\\tau,\\bm{x})]&=&\\frac{1}{2}\\int^{\\beta}_{0}d\\tau\\int d^{d}\\bm{x}\\biggr[\\bar{\\Psi}_{i\\alpha}(\\tau,\\bm{x})\\left(\\partial_{\\tau}-\\frac{\\hbar^{2}\\nabla^{2}}{2m}-E_{F}\\right)\\Psi_{i\\alpha}(\\tau,\\bm{x})\\\\\n&&+\\bar{\\Psi}_{i\\alpha}(\\tau,\\bm{x})\\lambda_{R}\\bm{\\sigma}^{\\textrm{spin}}_{\\alpha\\beta}\\cdot\\left(\\bm{E}\\times(-\\imath\\nabla)\\right)\\Psi_{i\\beta}(\\tau,\\bm{x})+\\bar{\\Psi}_{i\\alpha}(\\tau,\\bm{x})V(\\bm{x})\\Psi_{i\\alpha}(\\tau,\\bm{x})\\biggl] ,\n\\end{eqnarray*}\nwhere $\\lambda_{R}$ is the Rashba coupling constant and $V(\\bm{x})$ is a random potential. The indices $``i\"$ and $``\\alpha\"$ stand for the time-reversal and spin components, respectively. 
``Time-reversal symmetrized\" basis is given by\n\\begin{eqnarray*}\n\\Psi(\\tau)=\\begin{pmatrix}\\psi(\\tau)\\\\\\imath\\sigma^{\\textrm{spin}}_{y}\\psi^{*}(-\\tau)\\end{pmatrix}=\\begin{pmatrix}\\psi_{\\uparrow}(\\tau)\\\\\\psi_{\\downarrow}(\\tau)\\\\\\psi^{*}_{\\downarrow}(-\\tau)\\\\-\\psi^{*}_{\\uparrow}(-\\tau)\\end{pmatrix} ~ \\& ~~ \\bar{\\Psi}(\\tau)=\\Psi^{\\dagger}(\\tau)I^{\\textrm{spin}}\\otimes\\sigma^{\\textrm{tr}}_{z}=\\begin{pmatrix}\\psi^{*}_{\\uparrow}(\\tau)\\\\\\psi^{*}_{\\downarrow}(\\tau)\\\\-\\psi_{\\downarrow}(-\\tau)\\\\\\psi_{\\uparrow}(-\\tau)\\end{pmatrix}^{T}.\n\\end{eqnarray*}\n\nTaking into account the BiTeI band structure given by $\\bm{E}=E\\hat{z} ~ \\left(\\alpha_{R}=\\lambda_{R}E\\right)$ and moving on the momentum and frequency space, we obtain\n\\fontsize{10}{10}\n\\begin{eqnarray*}\n&&S[\\bar{\\Psi}_{i\\alpha A(n)}(\\bm{k}),\\Psi_{i\\alpha A(n)}(\\bm{k})]\\\\\n&=&\\frac{1}{2}\\sum_{n}\\int\\frac{d^{d}\\bm{k}}{(2\\pi)^{d}}\\biggr[\\bar{\\Psi}_{i\\alpha A(n)}(\\bm{k})\\left(-\\imath\\omega_{n}+\\frac{\\hbar^{2}\\bm{k}^{2}}{2m}-E_{F}\\right)\\Psi_{i\\alpha A(n)}(\\bm{k})+\\bar{\\Psi}_{i\\alpha n}(\\bm{k})\\alpha_{R}\\left(k_{x}(\\sigma_{y})_{\\alpha\\beta}-k_{y}(\\sigma_{x})_{\\alpha\\beta}\\right)\\Psi_{j\\beta A(n)}(\\bm{k})\\\\\n&&+\\int\\frac{d^{d}\\bm{q}}{(2\\pi)^{d}}\\bar{\\Psi}_{i\\alpha A(n)}(\\bm{k}+\\bm{q})V(\\bm{q})\\Psi_{i\\alpha A(n)}(\\bm{k})\\biggl] ,\n\\end{eqnarray*}\n\\normalsize\nwhere $A$ stands for ``retarded\" ($\\mathcal{R}$) or ``advanced\" ($\\mathcal{A}$). For example, $\\mathcal{A}(n)$ corresponds to a negative frequency whose magnitude is $\\left |\\omega_{n}\\right |$. 
Diagonalizing this effective Rashba Hamiltonian based on the following momentum-dependent unitary matrix \n\\fontsize{11}{11}\n\\begin{eqnarray*}\nU^{\\dagger}(\\bm{k})I^{\\textrm{spin}}U(\\bm{k})=I^{\\textrm{spin}} ~ \\& ~ U(\\bm{k})\\left(k_{x}\\sigma^{\\textrm{spin}}_{y}-k_{y}\\sigma^{\\textrm{spin}}_{x}\\right)U^{\\dagger}(\\bm{k})=\\sigma^{\\textrm{spin}}_{z}~\\Rightarrow~U(\\bm{k})=\\frac{1}{\\sqrt{2}}\\begin{pmatrix} e^{\\imath\\frac{\\phi(\\bm{k})}{2}}&-\\imath e^{-\\imath\\frac{\\phi(\\bm{k})}{2}}\\\\ e^{\\imath\\frac{\\phi(\\bm{k})}{2}}&\\imath e^{-\\imath\\frac{\\phi(\\bm{k})}{2}}\\end{pmatrix} ,\n\\end{eqnarray*}\n\\normalsize\nwe obtain \n\\begin{eqnarray*}\n&&S[\\bar{\\Phi}_{i\\alpha n}(\\bm{k}),\\Phi_{i\\alpha n}(\\bm{k})]\\\\\n&=&\\frac{1}{2}\\sum_{n}\\int\\frac{d^{d}\\bm{k}}{(2\\pi)^{d}}\\biggr[\\bar{\\Phi}_{i\\alpha n}(\\bm{k})\\left(-\\imath\\omega_{n}+\\frac{\\hbar^{2}\\bm{k}^{2}}{2m}-E_{F}\\right)\\Phi_{i\\alpha n}(\\bm{k})+\\bar{\\Phi}_{i\\alpha n}(\\bm{k})\\alpha_{R}(\\sigma^{\\textrm{spin}}_{z})_{\\alpha\\beta}\\sqrt{k_{x}^{2}+k_{y}^{2}}\\Phi_{j\\beta n}(\\bm{k})\\\\\n&&~~~+\\int\\frac{d^{d}\\bm{q}}{(2\\pi)^{d}}\\bar{\\Phi}_{i\\alpha n}(\\bm{k}+\\bm{q})U_{\\alpha\\beta}(\\bm{k}+\\bm{q})V(\\bm{q})U^{\\dagger}_{\\beta\\gamma}(\\bm{k})\\Phi_{i\\gamma n}(\\bm{k})\\biggl]\n\\end{eqnarray*}\nwhere $\\Phi_{i\\alpha n}(\\bm{k})=U_{\\alpha\\beta}(\\bm{k})\\Psi_{i\\beta n}(\\bm{k})$ is an eigenfunction field and the index of $\\alpha$ represents spin chirality, identified with ``$+$\" or ``$-$\". 
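Since the subsequent steps rely on this diagonalization, note that the stated $U(\bm{k})$ satisfies $U(\bm{k})\left(k_x\sigma_y-k_y\sigma_x\right)U^\dagger(\bm{k})=\sqrt{k_x^2+k_y^2}\,\sigma_z$, i.e., $\sigma_z$ times the in-plane momentum magnitude, matching the $\alpha_R\sigma_z^{\textrm{spin}}\sqrt{k_x^2+k_y^2}$ term of the diagonalized action. A quick numerical verification (with an arbitrary test momentum):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

kx, ky = 0.6, -0.8            # arbitrary in-plane momentum
k = np.hypot(kx, ky)          # |k_perp| (= 1.0 for this choice)
phi = np.arctan2(ky, kx)      # azimuthal angle phi(k)

# U(k) as given in the text
U = np.array([[np.exp(1j * phi / 2), -1j * np.exp(-1j * phi / 2)],
              [np.exp(1j * phi / 2),  1j * np.exp(-1j * phi / 2)]]) / np.sqrt(2)

lhs = U @ (kx * sy - ky * sx) @ U.conj().T
assert np.allclose(U @ U.conj().T, np.eye(2))   # U is unitary
assert np.allclose(lhs, k * sz)                 # diagonal, eigenvalues +/- |k_perp|
```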
\n\nPerforming the disorder average within the replica trick, the effective replicated Rashba action becomes\n\\fontsize{9.4}{9.4}\n\\begin{eqnarray*}\n&&S[\\bar{\\Phi}^{a}_{i\\alpha n}(\\bm{k}),\\Phi_{i\\alpha n}^{a}(\\bm{k})]\\\\\n&=&\\frac{1}{2}\\sum_{n}\\int\\frac{d^{d}\\bm{k}}{(2\\pi)^{d}}\\biggr[\\bar{\\Phi}^{a}_{i\\alpha n}(\\bm{k})\\left(-\\imath\\omega_{n}+\\frac{\\hbar^{2}\\bm{k}^{2}}{2m}-E_{F}\\right)\\Phi^{a}_{i\\alpha n}(\\bm{k})+\\bar{\\Phi}^{a}_{i\\alpha n}(\\bm{k})\\alpha_{R}\\left(\\sigma_{z}^{\\textrm{spin}}\\right)_{\\alpha\\beta}\\sqrt{k_{x}^{2}+k_{y}^{2}}\\Phi^{a}_{i\\beta n}(\\bm{k})\\biggl]\\\\\n&&-\\sum_{nm}\\int\\frac{d^{d}\\bm{k}}{(2\\pi)^{d}}\\int\\frac{d^{d}\\bm{k'}}{(2\\pi)^{d}}\\int\\frac{d^{d}\\bm{q}}{(2\\pi)^{d}}\\frac{\\Gamma}{8}\\bar{\\Phi}^{a}_{i\\alpha n}(\\bm{k}+\\bm{q})M_{\\alpha\\alpha'}(\\bm{k}+\\bm{q},\\bm{k})\\Phi^{a}_{i\\alpha' n}(\\bm{k})\\bar{\\Phi}^{b}_{j\\beta m}(\\bm{k}'-\\bm{q})M_{\\beta\\beta'}(\\bm{k}'-\\bm{q},\\bm{k}')\\Phi^{b}_{j\\beta'm}(\\bm{k}') ,\n\\end{eqnarray*}\n\\normalsize\nwhere the free energy is given by $\\mathcal{F}=-T\\lim_{R \\to 0}\\frac{1}{R}\\left(\\int\\mathcal{D}(\\bar{\\Phi}^{a}_{i\\alpha},\\Phi^{a}_{i\\alpha})e^{-S[\\bar{\\Phi}^{a}_{i\\alpha},\\Phi^{a}_{i\\alpha}]}-1\\right)$. 
The product of unitary matrices is \n\\fontsize{10}{10}\n\\begin{eqnarray*}\nM(\\bm{k}+\\bm{q},\\bm{k})\\equiv U(\\bm{k}+\\bm{q})U^{\\dagger}(\\bm{k})= \\begin{pmatrix}\\cos{\\left(\\frac{\\phi(\\bm{k}+\\bm{q})-\\phi(\\bm{k})}{2}\\right)}&\\imath\\sin{\\left(\\frac{\\phi(\\bm{k}+\\bm{q})-\\phi(\\bm{k})}{2}\\right)}\\\\\\imath\\sin{\\left(\\frac{\\phi(\\bm{k}+\\bm{q})-\\phi(\\bm{k})}{2}\\right)}&\\cos{\\left(\\frac{\\phi(\\bm{k}+\\bm{q})-\\phi(\\bm{k})}{2}\\right)}\\end{pmatrix}.\n\\end{eqnarray*}\n\\normalsize\n\n\\subsection{A self-consistent Born approximation}\n\nWe perform the Hubbard-Stratonovich transformation in the particle-hole singlet channel of $\\Phi^{a}_{i\\alpha n}\\bar{\\Phi}^{b}_{j\\beta m}$, and obtain\n\\fontsize{10.5}{10.5}\n\\begin{eqnarray*}\n&&S[\\bar{\\Phi}^{a}_{i\\alpha n}(\\bm{k}),\\Phi^{a}_{i\\alpha n}(\\bm{k});Q^{ab}_{ij;\\alpha\\beta;nm}(\\bm{q})]\\\\\n&=&\\frac{1}{2}\\sum_{n}\\int\\frac{d^{d}\\bm{k}}{(2\\pi)^{d}}\\left[\\bar{\\Phi}^{a}_{i\\alpha n}(\\bm{k})\\left(-\\imath\\omega_{n}+\\frac{\\hbar^{2}\\bm{k}^{2}}{2m}-E_{F}\\right)\\Phi^{a}_{i\\alpha n}(\\bm{k})+\\bar{\\Phi}^{a}_{i\\alpha n}(\\bm{k})\\alpha_{R}\\left(\\sigma^{\\textrm{spin}}_{z}\\right)_{\\alpha\\beta}\\sqrt{k_{x}^{2}+k_{y}^{2}}\\Phi^{a}_{i\\beta n}(\\bm{k})\\right]\\\\\n&&+\\sum_{nm}\\int\\frac{d^{d}\\bm{k}}{(2\\pi)^{d}}\\int\\frac{d^{d}\\bm{q}}{(2\\pi)^{d}}\\biggr[-\\frac{\\imath}{2}\\bar{\\Phi}^{a}_{i\\alpha n}(\\bm{k}+\\bm{q})M_{\\alpha\\alpha'}(\\bm{k}+\\bm{q},\\bm{k})Q^{ab}_{ij;\\alpha'\\beta';nm}(\\bm{q})M_{\\beta'\\beta}(\\bm{k}-\\bm{q},\\bm{k})\\Phi^{b}_{j\\beta m}(\\bm{k})\\\\\n&&+\\frac{1}{2\\Gamma}M_{\\alpha\\alpha'}(\\bm{k}+\\bm{q},\\bm{k})Q^{ab}_{ij;\\alpha'\\beta;nm}(\\bm{q})M_{\\beta\\beta'}(\\bm{k}-\\bm{q},\\bm{k})Q^{ba}_{ji;\\beta'\\alpha;mn}(-\\bm{q})\\biggl].\n\\end{eqnarray*}\n\\normalsize\nIntegrating over fermionic degrees of freedom, we obtain 
\n\\fontsize{10.5}{10.5}\n\\begin{eqnarray*}\n&&S[Q^{ab}_{ij;\\alpha\\beta;A(n)B(m)}(\\bm{q})]\\\\\n&=&-\\frac{1}{2}\\textrm{tr}\\textrm{ln}\\biggr[\\delta^{ab}\\delta_{AB}\\left\\{\\delta_{ij}\\delta_{\\alpha\\beta}\\left(-\\imath\\omega_{n}+\\frac{\\hbar^{2}\\bm{k}^{2}}{2m}-E_{F}\\right)+\\alpha_{R}(\\sigma^{\\textrm{spin}}_{z})_{\\alpha\\beta}\\sqrt{k_{x}^{2}+k_{y}^{2}}\\right\\}\\\\\n&&-\\imath M_{\\alpha\\alpha'}(\\bm{k}+\\bm{q},\\bm{k})Q^{ab}_{ij;\\alpha'\\beta';A(n)B(m)}(\\bm{q})M_{\\beta'\\beta}(\\bm{k}-\\bm{q},\\bm{k})\\biggl]\\\\\n&&+\\sum_{nm}\\int\\frac{d^{d}\\bm{k}}{(2\\pi)^{d}}\\int\\frac{d^{d}\\bm{q}}{(2\\pi)^{d}}\\biggr[\\frac{1}{2\\Gamma}M_{\\alpha\\alpha'}(\\bm{k}+\\bm{q},\\bm{k}) Q^{ab}_{ij;\\alpha'\\beta;A(n)B(m)}(\\bm{q})M_{\\beta\\beta'}(\\bm{k}-\\bm{q},\\bm{k})Q^{ba}_{ji;\\beta'\\alpha;B(m)A(n)}(-\\bm{q})\\biggl].\n\\end{eqnarray*}\n\\normalsize\nMinimizing this effective free energy with respect to the matrix field $Q^{ab}_{ij;\\alpha\\beta;A(n)B(m)}(\\bm{q})$, we obtain the saddle-point equation given by\n\\fontsize{10.5}{10.5}\n\\begin{eqnarray*}\n\\frac{2\\imath}{\\Gamma}Q^{ab}_{ij;\\alpha\\beta;A(n)B(m)}(\\bm{q})=\\textrm{tr}\\left[{G^{ab}_{ij;\\alpha\\beta;A(n)B(m)}}^{-1}(\\bm{k})-\\imath M_{\\alpha\\alpha'}(\\bm{k}+\\bm{q},\\bm{k})Q^{ab}_{ij;\\alpha'\\beta';A(n)B(m)}(\\bm{q})M_{\\beta'\\beta}(\\bm{k}-\\bm{q},\\bm{k})\\right]^{-1} ,\n\\end{eqnarray*}\n\\normalsize\nwhere\n\\begin{eqnarray*}\n\\left[G^{ab}_{ij;\\alpha\\beta;A(n)B(m)}(\\bm{k})\\right]^{-1}=\\delta^{ab}\\delta_{ij}\\delta_{AB} \\left\\{\\delta_{\\alpha\\beta}\\left(-\\imath\\omega_{n}+\\frac{\\hbar^{2}\\bm{k}^{2}}{2m}-E_{F}\\right)+\\alpha_{R}(\\sigma^{\\textrm{spin}}_{z})_{\\alpha\\beta}\\sqrt{k_{x}^{2}+k_{y}^{2}}\\right\\} \n\\end{eqnarray*}\nis the inverse of the fermion Green's function.\n\n\\subsubsection{Saddle-point analysis I}\n\nFocusing on the forward scattering described by $Q_{MF}=Q(\\bm{0})$,\nwe obtain mean-field equations 
of\n\\begin{eqnarray*}\n\\frac{2\\imath}{\\Gamma}Q_{\\scriptscriptstyle{++}}&=&\\frac{G_{\\scriptscriptstyle{--}}^{-1}-\\imath Q_{\\scriptscriptstyle{--}}}{\\left(G_{\\scriptscriptstyle{++}}^{-1}-\\imath Q_{\\scriptscriptstyle{++}}\\right)\\left(G_{\\scriptscriptstyle{--}}^{-1}-\\imath Q_{\\scriptscriptstyle{--}}\\right)+Q_{\\scriptscriptstyle{+-}}Q_{\\scriptscriptstyle{-+}}}\\\\\n\\frac{2\\imath}{\\Gamma}Q_{\\scriptscriptstyle{+-}}&=&\\frac{\\imath Q_{\\scriptscriptstyle{+-}}}{\\left(G_{\\scriptscriptstyle{++}}^{-1}-\\imath Q_{\\scriptscriptstyle{++}}\\right)\\left(G_{\\scriptscriptstyle{--}}^{-1}-\\imath Q_{\\scriptscriptstyle{--}}\\right)+Q_{\\scriptscriptstyle{+-}}Q_{\\scriptscriptstyle{-+}}}\\\\\n\\frac{2\\imath}{\\Gamma}Q_{\\scriptscriptstyle{-+}}&=&\\frac{\\imath Q_{\\scriptscriptstyle{-+}}}{\\left(G_{\\scriptscriptstyle{++}}^{-1}-\\imath Q_{\\scriptscriptstyle{++}}\\right)\\left(G_{\\scriptscriptstyle{--}}^{-1}-\\imath Q_{\\scriptscriptstyle{--}}\\right)+Q_{\\scriptscriptstyle{+-}}Q_{\\scriptscriptstyle{-+}}}\\\\\n\\frac{2\\imath}{\\Gamma}Q_{\\scriptscriptstyle{--}}&=&\\frac{G_{\\scriptscriptstyle{++}}^{-1}-\\imath Q_{\\scriptscriptstyle{++}}}{\\left(G_{\\scriptscriptstyle{++}}^{-1}-\\imath Q_{\\scriptscriptstyle{++}}\\right)\\left(G_{\\scriptscriptstyle{--}}^{-1}-\\imath Q_{\\scriptscriptstyle{--}}\\right)+Q_{\\scriptscriptstyle{+-}}Q_{\\scriptscriptstyle{-+}}},\n\\end{eqnarray*}\nwhere \n\\begin{eqnarray*}\nG=\\begin{pmatrix} G_{\\scriptscriptstyle{++}}&0\\\\0&G_{\\scriptscriptstyle{--}}\\end{pmatrix} ~ \\& ~ Q=\\begin{pmatrix} Q_{\\scriptscriptstyle{++}}&Q_{\\scriptscriptstyle{+-}}\\\\Q_{\\scriptscriptstyle{-+}}&Q_{\\scriptscriptstyle{--}}\\end{pmatrix} \n\\end{eqnarray*}\nand the forward scattering doesn't change spin orientations, resulting in $M(\\bm{k},\\bm{k})=I$. It is straightforward to see $Q_{\\scriptscriptstyle{+-}}=0$ due to spin chirality. 
\nThen, we reach the following expression\n\fontsize{10.5}{10.5}\n\begin{eqnarray*}\n\frac{2\imath}{\Gamma}Q^{ab}_{ij;\pm\pm;A(n)A(n)}(\bm{0})=\int\frac{d^{d}\bm{k}}{(2\pi)^{d}}\frac{1}{\delta^{ab}\delta_{AB}\left\{\delta_{ij}\left(-\imath\omega_{n}+\frac{\hbar^{2}\bm{k}^{2}}{2m}-E_{F}\right)\pm\alpha_{R}\sqrt{k_{x}^{2}+k_{y}^{2}}\right\}-\imath Q^{ab}_{ij;\pm\pm;A(n)A(n)}(\bm{0})}.\n\end{eqnarray*}\n\normalsize\n\nIn order to solve the above equation, we introduce the mean-field ansatz of\n\begin{eqnarray*}\n(Q_{\textrm{MF}})^{ab}_{ij;\alpha\beta;AB}=\frac{\pi}{2}N_{F}\Gamma\delta^{ab}\delta_{ij}\delta_{\alpha\beta}F_{\alpha}(r)\Lambda_{AB} ,\n\end{eqnarray*}\nwhere $N_{F}=mk_{F}\/2\pi^{2}\hbar^{2}$ is the density of states (without the spin-degeneracy factor of $2$) at the Fermi level with $\alpha_{R}=0$ and $\Lambda_{AB}=\textrm{diag}(1,-1)$ is the diagonal matrix for the retarded and advanced sectors. $F_{\alpha}(r)$ is a function of $r=\frac{2\hbar^{2}E_{F}}{m\alpha_{R}^{2}}$, regarded as an order parameter to be determined from this self-consistent equation. Considering $\alpha=\beta=+$ and $A=B=\mathcal{R}$, we obtain\n\begin{eqnarray*}\n\imath \pi N_{F}F_{+} = \int\frac{d^{3}\bm{k}}{(2\pi)^{3}}\frac{1}{-\imath\omega_{n}+\frac{\hbar^{2}\bm{k}^{2}}{2m}-E_{F}+\alpha_{R}\sqrt{k_{x}^{2}+k_{y}^{2}}-\frac{\imath\pi N_{F}\Gamma F_{+}}{2}}.\n\end{eqnarray*}\nSince this integrand is not spherically symmetric, depending on $k_{z}$ and $\sqrt{k_{x}^{2}+k_{y}^{2}}$ separately, we need to be careful with the $k_{z}$ integration. We do not show this procedure here. 
Performing the momentum integration, we find\n\n\begin{eqnarray*}\n&& F_{+}(r)\n= \frac{\pi}{2}\frac{1}{\sqrt{r}\sqrt{1+r}}\biggr[\frac{8}{3}+2r-\frac{4}{3}\sqrt{r+1}\biggr\{\left(r+2\right)\mathcal{E}\left(\frac{1}{r+1}\right)-r\mathcal{K}\left(\frac{1}{r+1}\right)\biggl\}\biggl],\n\end{eqnarray*}\nwhere $\mathcal{K}(x)$ and $\mathcal{E}(x)$ are the complete elliptic integrals of the first and second kind \cite{Mathematica}. In the same way, we find \n\begin{eqnarray*}\n&& F_{-}(r)= \frac{\pi}{2}\frac{1}{\sqrt{r}\sqrt{1+r}}\biggr[\frac{8}{3}+2r+\frac{4}{3}\sqrt{r+1}\biggr\{\left(r+2\right)\mathcal{E}\left(\frac{1}{r+1}\right)-r\mathcal{K}\left(\frac{1}{r+1}\right)\biggl\}\biggl],\n\end{eqnarray*}\nwhere the sign in front of the elliptic integrals has been changed. As a result, the two kinds of scattering times are given by\n\begin{eqnarray*}\n\tau_{+}=\frac{1}{2Q_{+}(r)}=\frac{1}{\pi N_{F}\Gamma F_{+}(r)} ~ \& ~ \tau_{-}=\frac{1}{2Q_{-}(r)}=\frac{1}{\pi N_{F}\Gamma F_{-}(r)}\n\end{eqnarray*}\nfor the inner and outer Fermi surfaces, respectively. Considering that the scattering time is expressed as $\tau=\frac{1}{\pi N_{F}\Gamma}$ for the normal diffusive Fermi-liquid state, one may regard the additional factor of $F_{\pm}(r)$ as resulting from the presence of the Rashba spin-orbit coupling, which modifies the density of states for the inner and outer Fermi surfaces, respectively. Finally, we obtain the diffusion constants, given by\n\begin{eqnarray*}\nD_{\pm}=\hbar v_{F}^{2}\tau_{\pm} = \frac{2\pi\alpha_{R}\hbar^{3}}{m^{2}\Gamma}\frac{(1+r)}{\sqrt{r}F_{\pm}(r)}.\n\end{eqnarray*}\n\nAlthough we did not show the integration procedure in detail, these diffusion coefficients are justified only when $r \geq 0$. Since the inner Fermi surface has no density of states when the Fermi energy lies below the Weyl point (FIG. \ref{FS_3d}), we treat this case separately from the other. 
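The functions $F_{\pm}(r)$ can be evaluated numerically with standard routines for the complete elliptic integrals; a self-contained sketch using the arithmetic-geometric mean, in the parameter convention $m=1/(r+1)$ of the expressions above (function names are ours):

```python
import math

def ellipKE(m, tol=1e-15):
    """Complete elliptic integrals K(m), E(m) in the parameter convention
    (m = k^2), computed with the arithmetic-geometric mean; requires 0 <= m < 1."""
    a, b, c = 1.0, math.sqrt(1.0 - m), math.sqrt(m)
    csum, pow2 = 0.5 * c * c, 0.5      # accumulates sum of 2^(n-1) c_n^2
    while abs(c) > tol:
        a, b, c = 0.5 * (a + b), math.sqrt(a * b), 0.5 * (a - b)
        pow2 *= 2.0
        csum += pow2 * c * c
    K = math.pi / (2.0 * a)
    return K, K * (1.0 - csum)

def F(r, sign):
    """F_+(r) for sign=+1 (inner FS) and F_-(r) for sign=-1 (outer FS); r > 0."""
    K, E = ellipKE(1.0 / (r + 1.0))
    brace = (r + 2.0) * E - r * K
    return (math.pi / 2.0) / (math.sqrt(r) * math.sqrt(1.0 + r)) * (
        8.0 / 3.0 + 2.0 * r - sign * (4.0 / 3.0) * math.sqrt(r + 1.0) * brace)

# AGM values against the known K(m=1/2), E(m=1/2)
K, E = ellipKE(0.5)
assert abs(K - 1.8540746773013719) < 1e-12
assert abs(E - 1.3506438810476755) < 1e-12

# the outer Fermi surface carries the larger factor: F_-(r) > F_+(r) > 0
assert F(1.0, -1) > F(1.0, +1) > 0.0
```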
As a result, we find the general formula valid for both cases of $r \geq 0$ and $r < 0$, given by\n\fontsize{8.5}{8.5}\n\begin{eqnarray*}\nD_{\pm}(r)=\frac{4\alpha_{R}\hbar^{3}}{m^{2}\Gamma}\frac{(1+r)^{\frac{3}{2}}}{\textrm{Re}\left[\frac{8}{3}+2r-\frac{4}{3}\sqrt{r+1}\left\{(r+2)\mathcal{E}\left(\frac{1}{r+1}\right)-r\mathcal{K}\left(\frac{1}{r+1}\right)\right\}\right]-\Theta(-r)\left(\frac{8}{3}-\frac{8}{3}\sqrt{1-\left | r\right |}+\frac{2}{3}\left | r\right |\sqrt{1-\left | r\right |}-2\left | r\right |\right)} .\n\end{eqnarray*}\n\normalsize\nIn order to compare our analytic expressions with the experimental data, we need to obtain the mobility. Resorting to Einstein's relation $D=\mu k_{B}T\/e$, we have\n\fontsize{4.7}{2}\n\begin{eqnarray*}\n\mu_{\pm}(E_{F})=A\frac{(1+bE_{F})^{\frac{3}{2}}}{\textrm{Re}\left[\frac{8}{3}+2bE_{F}-\frac{4}{3}\sqrt{bE_{F}+1}\left\{(bE_{F}+2)\mathcal{E}\left(\frac{1}{bE_{F}+1}\right)-bE_{F}\mathcal{K}\left(\frac{1}{bE_{F}+1}\right)\right\}\right]-\Theta(-bE_{F})\left(\frac{8}{3}-\frac{8}{3}\sqrt{1-\left | bE_{F}\right |}+\frac{2}{3}\left | bE_{F}\right |\sqrt{1-\left | bE_{F}\right |}-2\left | bE_{F}\right |\right)} ,\n\end{eqnarray*}\n\normalsize\nwhere $A=\frac{2\alpha_{R}\hbar^{3}}{m^{2}\Gamma ek_{B}T}$ and $b=\frac{2\hbar^{2}}{m\alpha_{R}^{2}}$. In the experiment, the Weyl point was observed at 113 meV above the bottom of the conduction band. In our model, we set $E_{W}=0$ and the conduction-band minimum is $-\frac{m\alpha_{R}^{2}}{2\hbar^{2}}$, so $b=\frac{2\hbar^{2}}{m\alpha_{R}^{2}}=\frac{1}{0.113eV}\simeq$ 8.85 $(eV)^{-1}$. Based on the formula with this value, we fit the experimental data and obtain the result of FIG. \ref{diffusion_constant_3d}, where $A=$ 0.984 $[m^{2}\/Vs]$. \n\n\n\subsubsection{Saddle-point analysis II}\n\nPreviously, we did not take into account effects of inter-valley scattering. 
Taking $\\bm{q}=-2\\bm{k}-\\bm{a}$ where $\\bm{a}=\\frac{2m\\alpha_{R}}{\\hbar^{2}}\\frac{k_{x}\\hat{x}+k_{y}\\hat{y}}{\\sqrt{k_{x}^{2}+k_{y}^{2}}}$, the effective Rashba action becomes\n\\fontsize{10}{10}\n\\begin{eqnarray*}\n&&S[Q^{ab}_{ij;\\alpha\\beta;A(n)B(m)}(-2\\bm{k}-\\bm{a})]\\\\\n&=&-\\frac{1}{2}\\textrm{tr}\\textrm{ln}\\biggr[\\delta^{ab}\\delta_{AB}\\left\\{\\delta_{ij}\\delta_{\\alpha\\beta}\\left(-\\imath\\omega_{n}+\\frac{\\hbar^{2}\\bm{k}^{2}}{2m}-E_{F}\\right)+\\alpha_{R}(\\sigma^{\\textrm{spin}}_{z})_{\\alpha\\beta}\\sqrt{k_{x}^{2}+k_{y}^{2}}\\right\\}\\\\\n&&-\\imath M_{\\alpha\\alpha'}(-\\bm{k}-\\bm{a},-\\bm{k})Q^{ab}_{ij;\\alpha'\\beta';A(n)B(m)}(-2\\bm{k}-\\bm{a})M_{\\beta'\\beta}(\\bm{k}+\\bm{a},\\bm{k})\\biggl]\\\\\n&&+\\sum_{nm}\\int\\frac{d^{d}\\bm{k}}{(2\\pi)^{d}}\\frac{1}{2\\Gamma}\\biggr[M_{\\alpha\\alpha'}(-\\bm{k}-\\bm{a},-\\bm{k})Q^{ab}_{ij;\\alpha'\\beta;A(n)B(m)}(-2\\bm{k}-\\bm{a})M_{\\beta\\beta'}(\\bm{k}+\\bm{a},\\bm{k}) Q^{ba}_{ji;\\beta'\\alpha;B(m)A(n)}(2\\bm{k}+\\bm{a})\\biggl].\n\\end{eqnarray*}\n\\normalsize\nSince $\\bm{k}+\\bm{a}$ and $\\bm{k}$ are in the same direction on the $xy-$plane, we still have $M(-\\bm{k}-\\bm{a},-\\bm{k})=M(\\bm{k}+\\bm{a},\\bm{k})=I$. 
Then, the above expression is simplified as follows\n\\fontsize{10.5}{10.5}\n\\begin{eqnarray*}\n&&S[Q^{ab}_{ij;\\alpha\\beta;A(n)B(m)}(-2\\bm{k}-\\bm{a})]\\\\\n&=&-\\frac{1}{2}\\textrm{tr}\\textrm{ln}\\biggr[\\delta^{ab}\\delta_{AB}\\left\\{\\delta_{ij}\\delta_{\\alpha\\beta}\\left(-\\imath\\omega_{n}+\\frac{\\hbar^{2}\\bm{k}^{2}}{2m}-E_{F}\\right)+\\alpha_{R}(\\sigma^{\\textrm{spin}}_{z})_{\\alpha\\beta}\\sqrt{k_{x}^{2}+k_{y}^{2}}\\right\\}-\\imath Q^{ab}_{ij;\\alpha\\beta;nm}(-2\\bm{k}-\\bm{a})\\biggl]\\\\\n&&+\\sum_{nm}\\frac{1}{2\\Gamma}\\biggr[Q^{ab}_{ij;\\alpha\\beta;A(n)B(m)}(-2\\bm{k}-\\bm{a})Q^{ba}_{ji;\\beta\\alpha;B(m)A(n)}(2\\bm{k}+\\bm{a})\\biggl].\n\\end{eqnarray*}\n\\normalsize\nUnfortunately, this effective action is not diagonal in the presence of such a $Q(2\\bm{k}+\\bm{a})$ matrix. We can resolve this difficulty, choosing a better basis as\n\\fontsize{9.5}{9.5}\n\\begin{eqnarray*}\n&&\\bar{\\phi}(\\bm{k})\\left[G^{-1}(\\bm{k})\\right]\\phi(\\bm{k})+\\bar{\\phi}(-\\bm{k}-\\bm{a})\\left[G^{-1}(-\\bm{k}-\\bm{a})\\right]\\phi(-\\bm{k}-\\bm{a})+\\bar{\\phi}(-\\bm{k}-\\bm{a})\\left[-\\imath Q(-2\\bm{k}-\\bm{a})\\right]\\phi(\\bm{k})\\\\\n&&+\\bar{\\phi}(\\bm{k})\\left[-\\imath Q(2\\bm{k}+\\bm{a})\\right]\\phi(-\\bm{k}-\\bm{a})\\\\\n&=&\\begin{pmatrix}\\bar{\\phi}(\\bm{k}),&\\bar{\\phi}(-\\bm{k}-\\bm{a})\\end{pmatrix}\\begin{pmatrix} G^{-1}(\\bm{k})&-\\imath Q(\\bm{2\\bm{k}+\\bm{a}})\\\\-\\imath Q(-2\\bm{k}-\\bm{a})&G^{-1}(-\\bm{k}-\\bm{a})\\end{pmatrix}\\begin{pmatrix}\\phi(\\bm{k})\\\\\\phi(-\\bm{k}-\\bm{a})\\end{pmatrix}\\\\\n&=&\\begin{pmatrix}\\bar{\\phi}_{+}(\\bm{k})\\\\\\bar{\\phi}_{-}(\\bm{k})\\\\\\bar{\\phi}_{+}(-\\bm{k}-\\bm{a})\\\\\\bar{\\phi}_{-}(-\\bm{k}-\\bm{a})\\end{pmatrix}^{T}\\begin{pmatrix} G^{-1}_{\\scriptscriptstyle{++}}(\\bm{k})&0&0&-\\imath Q_{\\scriptscriptstyle{+-}}(\\bm{2\\bm{k}+\\bm{a}})\\\\0&G^{-1}_{\\scriptscriptstyle{--}}(\\bm{k})&-\\imath Q_{\\scriptscriptstyle{-+}}(2\\bm{k}+\\bm{a})&0\\\\0&-\\imath 
Q_{\\scriptscriptstyle{+-}}(-2\\bm{k}-\\bm{a})&G^{-1}_{\\scriptscriptstyle{++}}(-\\bm{k}-\\bm{a})&0\\\\-\\imath Q_{\\scriptscriptstyle{-+}}(-2\\bm{k}-\\bm{a})&0&0&G^{-1}_{\\scriptscriptstyle{--}}(-\\bm{k}-\\bm{a})\\end{pmatrix}\\begin{pmatrix}\\phi_{+}(\\bm{k})\\\\\\phi_{-}(\\bm{k})\\\\\\phi_{+}(-\\bm{k}-\\bm{a})\\\\\\phi_{-}(-\\bm{k}-\\bm{a})\\end{pmatrix}.\n\\end{eqnarray*}\n\\normalsize\n\nThis expanded matrix can be made to be a block-diagonal form, so we are allowed to consider two components only:\n\\begin{eqnarray*}\n\\begin{pmatrix}\\bar{\\phi}_{+}(\\bm{k}),&\\bar{\\phi}_{-}(-\\bm{k}-\\bm{a})\\end{pmatrix}\\begin{pmatrix} G^{-1}_{\\scriptscriptstyle{++}}(\\bm{k})&-\\imath Q_{\\scriptscriptstyle{+-}}(\\bm{2\\bm{k}+\\bm{a}})\\\\-\\imath Q_{\\scriptscriptstyle{-+}}(-2\\bm{k}-\\bm{a})&G^{-1}_{\\scriptscriptstyle{--}}(-\\bm{k}-\\bm{a})\\end{pmatrix}\\begin{pmatrix}\\phi_{+}(\\bm{k})\\\\\\phi_{-}(-\\bm{k}-\\bm{a})\\end{pmatrix}.\n\\end{eqnarray*}\nAs a result, we find self-consistent equations for inter-valley scattering\n\\fontsize{11}{11}\n\\begin{eqnarray*}\n\\frac{2\\imath}{\\Gamma}\\cdot0&=&\\int\\frac{d^{3}\\bm{k}}{(2\\pi)^{3}}\\frac{G_{\\scriptscriptstyle{--}}^{-1}(-\\bm{k}-\\bm{a})}{G^{-1}_{\\scriptscriptstyle{++}}(\\bm{k})G^{-1}_{\\scriptscriptstyle{--}}(-\\bm{k}-\\bm{a})+Q_{\\scriptscriptstyle{+-}}(2\\bm{k}+\\bm{a})Q_{\\scriptscriptstyle{-+}}(-2\\bm{k}-\\bm{a})}\\\\\n\\frac{2\\imath}{\\Gamma}Q_{\\scriptscriptstyle{+-}}(2\\bm{k}+\\bm{a})&=&\\int\\frac{d^{3}\\bm{k}}{(2\\pi)^{3}}\\frac{\\imath Q_{\\scriptscriptstyle{+-}}(2\\bm{k}+\\bm{a})}{G_{\\scriptscriptstyle{++}}^{-1}(\\bm{k})G_{\\scriptscriptstyle{--}}^{-1}(-\\bm{k}-\\bm{a})+Q_{\\scriptscriptstyle{+-}}(2\\bm{k}+\\bm{a})Q_{\\scriptscriptstyle{-+}}(-2\\bm{k}-\\bm{a})}\\\\\n\\frac{2\\imath}{\\Gamma}Q_{\\scriptscriptstyle{-+}}(-2\\bm{k}-\\bm{a})&=&\\int\\frac{d^{3}\\bm{k}}{(2\\pi)^{3}}\\frac{\\imath 
Q_{\\scriptscriptstyle{-+}}(-2\\bm{k}-\\bm{a})}{G_{\\scriptscriptstyle{++}}^{-1}(\\bm{k})G_{\\scriptscriptstyle{--}}^{-1}(-\\bm{k}-\\bm{a})+Q_{\\scriptscriptstyle{+-}}(2\\bm{k}+\\bm{a})Q_{\\scriptscriptstyle{-+}}(-2\\bm{k}-\\bm{a})}\\\\\n\\frac{2\\imath}{\\Gamma}\\cdot0&=&\\int\\frac{d^{3}\\bm{k}}{(2\\pi)^{3}}\\frac{G_{\\scriptscriptstyle{++}}^{-1}(\\bm{k})}{G_{\\scriptscriptstyle{++}}^{-1}(\\bm{k}) G_{\\scriptscriptstyle{--}}^{-1}(-\\bm{k}-\\bm{a})+Q_{\\scriptscriptstyle{+-}}(2\\bm{k}+\\bm{a})Q_{\\scriptscriptstyle{-+}}(-2\\bm{k}-\\bm{a})},\n\\end{eqnarray*}\n\\normalsize\nwhere $Q_{\\scriptscriptstyle{++}(\\scriptscriptstyle{--})}(\\pm(2\\bm{k}+\\bm{a}))$ turn out to vanish due to spin chirality. Note that $G_{\\scriptscriptstyle{++}}(\\bm{k})$ being on the Fermi surface means $G_{\\scriptscriptstyle{--}}(-\\bm{k}-\\bm{a})=G_{\\scriptscriptstyle{--}}(\\bm{k}+\\bm{a})$ should also be on the Fermi surface.\nThus, $Q_{\\scriptscriptstyle{+-}}$ doesn't have to vanish in this case. \nLinearizing the energy spectrum around the inner Fermi surface, we obtain\n\\begin{eqnarray*}\n\\frac{2\\imath}{\\Gamma}Q_{\\scriptscriptstyle{+-}}&=&\\int^{2\\pi}_{0}\\frac{d\\phi}{2\\pi}\\int^{\\pi}_{0}\\frac{d\\theta J_{+}(\\sin{\\theta})}{2\\pi}\\int^{\\infty}_{-\\infty}\\frac{d\\varepsilon}{2\\pi}\\frac{\\imath Q_{\\scriptscriptstyle{+-}}}{J(\\sin{\\theta})\\left(\\hbar v_{F}\\varepsilon\\right)^{2}+\\left | Q_{\\scriptscriptstyle{+-}}\\right |^{2}}\n\\end{eqnarray*}\nwhere $J_{+}(\\sin{\\theta})$ is a Jacobian factor from expanding $\\bm{k}$ around the inner Fermi surface and $J(\\sin{\\theta})$ is a Jacobian factor from connecting the integral on the outer Fermi surface to the integral on the inner fermi surface. Other two equations are satisfied automatically, identically zero. 
Straightforward calculations give rise to the final expression\n\\begin{eqnarray*}\n\\left | Q_{\\scriptscriptstyle{+-}}\\right |=\\frac{\\Gamma}{2}\\frac{1}{2\\hbar v_{F}}\\left(\\frac{m\\alpha_{R}}{2\\pi\\hbar^{4}}\\right)^{2}\\left(ar+br^{2}+cr^{3}\\right)=\\frac{\\pi N_{F}\\Gamma}{4}\\sqrt{\\frac{r}{1+r}}\\left(a+br+cr^{2}\\right)\n\\end{eqnarray*}\nwhere $a=9.42\\times10^{-3},~b=2.36\\times10^{-1}$ and $c=-3.62\\times10^{-2}$.\n\n\n\nIn the presence of the off-diagonal term of $Q_{\\scriptscriptstyle{+-}}$, the fermion propagator is altered as follows\n\\fontsize{9}{9}\n\\begin{eqnarray*}\n\\begin{pmatrix} G^{-1}_{\\scriptscriptstyle{++}}-\\imath Q_{\\scriptscriptstyle{++}}&-\\imath Q_{\\scriptscriptstyle{+-}}\\\\-\\imath Q_{\\scriptscriptstyle{-+}}&J\\left(G_{\\scriptscriptstyle{--}}^{-1}-\\imath Q_{\\scriptscriptstyle{--}}\\right)\\end{pmatrix}^{-1}=\\frac{1}{\\left(G_{\\scriptscriptstyle{++}}^{-1}-\\imath Q_{\\scriptscriptstyle{++}}\\right)J\\left(G^{-1}_{\\scriptscriptstyle{--}}-\\imath Q_{\\scriptscriptstyle{--}}\\right)+\\left | Q_{\\scriptscriptstyle{+-}}\\right |^{2}}\\begin{pmatrix} J\\left(G_{\\scriptscriptstyle{--}}^{-1}-\\imath Q_{\\scriptscriptstyle{--}}\\right)&\\imath Q_{\\scriptscriptstyle{+-}}\\\\\\imath Q_{\\scriptscriptstyle{-+}}&G_{\\scriptscriptstyle{++}}^{-1}-\\imath Q_{\\scriptscriptstyle{++}}\\end{pmatrix}.\n\\end{eqnarray*}\n\\normalsize\nThen, the effective inner-fermion propagator is given by\n\\begin{eqnarray*}\nG_{\\scriptscriptstyle{++}\\textrm{eff}}^{-1}(\\bm{k})&=&\\frac{1}{G^{-1}_{\\scriptscriptstyle{++}}(\\bm{k})-\\imath Q_{\\scriptscriptstyle{++}}(\\bm{0})+\\frac{\\left | Q_{\\scriptscriptstyle{+-}}(2\\bm{k}+\\bm{a})\\right |^{2}}{J(\\Omega)\\left(G_{\\scriptscriptstyle{--}}^{-1}(-\\bm{k}-\\bm{a})-\\imath Q_{\\scriptscriptstyle{--}}(\\bm{0})\\right)}}.\n\\end{eqnarray*}\nTaking $\\bm{k}=\\bm{k}^{+}_{F}$, where $\\bm{k}_{F}^{+}$ is the Fermi momentum of the inner Fermi surface,\nwe find 
\n\\begin{eqnarray*}\nQ_{\\scriptscriptstyle{++}\\textrm{eff}}(\\bm{0})=Q_{\\scriptscriptstyle{++}}(\\bm{0})\\left[1-\\frac{\\left | Q_{\\scriptscriptstyle{+-}}(2\\bm{k}_{F}^{+}+\\bm{a})\\right |^{2}}{Q_{\\scriptscriptstyle{++}}(\\bm{0})Q_{\\scriptscriptstyle{--}}(\\bm{0})}\\right]=\\frac{\\pi}{2}N_{F}\\Gamma F_{+}\\left[1-\\frac{r}{4(1+r)}\\frac{\\left(a+br+cr^2\\right)^{2}}{F_{+}F_{-}}\\right].\n\\end{eqnarray*}\nAccordingly, scattering times are modified as\n\\begin{eqnarray*}\n\\tau_{\\pm\\textrm{eff}}=\\frac{1}{2Q_{\\scriptscriptstyle{\\pm\\pm}\\textrm{eff}}}=\\frac{1}{2\\frac{\\pi}{2}N_{F}\\Gamma F_{\\pm}\\left[1-\\frac{r}{4(1+r)}\\frac{\\left(a+br+cr^2\\right)^{2}}{F_{+}F_{-}}\\right]}.\n\\end{eqnarray*}\nAs a result, diffusion constants are given by\n\\begin{eqnarray*}\nD_{\\pm\\textrm{eff}}(r)=\\hbar v_{F}^{2}\\tau_{\\pm\\textrm{eff}} = \\frac{2\\pi\\alpha_{R}\\hbar^{3}}{m^{2}\\Gamma}\\frac{1+r}{\\sqrt{r}}\\frac{1}{F_{\\pm}\\left(1-\\frac{r}{4(1+r)}\\frac{\\left(a+br+cr^2\\right)^{2}}{F_{+}F_{-}}\\right)}\n\\end{eqnarray*}\nwhere $F_{\\pm}(r)$ are \n\\begin{eqnarray*}\nF_{\\pm}(r)&=&\\frac{\\pi}{2}\\frac{1}{\\sqrt{r}\\sqrt{1+r}}\\textrm{Re}\\left[\\frac{8}{3}+2r-\\frac{4}{3}\\sqrt{r+1}\\left\\{(r+2)\\mathcal{E}\\left(\\frac{1}{r+1}\\right)-r\\mathcal{K}\\left(\\frac{1}{r+1}\\right)\\right\\}\\right]\\\\\n&&-\\Theta(-r)\\left(\\frac{8}{3}-\\frac{8}{3}\\sqrt{1-\\left | r\\right |}+\\frac{2}{3}\\left | r\\right |\\sqrt{1-\\left | r\\right |}-2\\left | r\\right |\\right) ,\n\\end{eqnarray*}\nthe same as before. These results are summarized and compared to experiments in FIG. 
\ref{diffusion_constant_3d_with_offQ}, where this fixed-point solution (the presence of the off-diagonal component) turns out to reduce the mobilities of fermions at both the inner and outer Fermi surfaces.\n\n\subsection{Two different cases within the self-consistent Born approximation}\n\nPreviously, we considered two types of solutions for the fermion Green's function within the Born approximation: one contains effects of the intra-valley forward scattering only, and the other introduces effects of both intra- and inter-valley scattering into the fermion Green's function. It is natural to expect that the former solution would be justified when effects of disorder scattering are not strong. On the other hand, the second solution is expected to be realized when disorder scattering becomes more relevant than in the first case. A crucial difference between these two solutions lies in spin chirality. Intra-valley scattering preserves the spin chirality while inter-valley scattering destroys it. As a result, we predict that the weak antilocalization turns into the weak localization in the presence of inter-valley scattering as disorder strength increases. This crossover behavior may be regarded as a weak version of a topological phase transition driven by disorder, although BiTeI is a topologically trivial metallic state.\n\n\subsection{Self-consistent Born approximation as a fixed-point solution}\n\nThe solution based on the self-consistent Born approximation can be regarded as an effective theory for the corresponding diffusive Fermi-liquid fixed point. 
In order to understand this statement, we consider the following renormalization group equation for the disorder strength up to one-loop order\n\begin{eqnarray*}\n\frac{d\Gamma_{ss'}}{dt}=\Gamma_{ss'}-\Gamma_{ss_{1}}\mathcal{C}_{s_{1}s_{2}}\Gamma_{s_{2}s'} .\n\end{eqnarray*}\n$\Gamma_{\scriptscriptstyle{++}(\scriptscriptstyle{--})}$ is the scattering rate or variance within the inner (outer) Fermi surface and $\Gamma_{\scriptscriptstyle{+-}(\scriptscriptstyle{-+})}$ is that between the inner and outer Fermi surfaces. The first term ensures the relevance of disorder scattering at the tree level when there is a Fermi surface. Such relevant disorder scattering becomes weak through quantum fluctuations, where the disorder potential is screened by particle-hole excitations. $\mathcal{C}_{s_{1}s_{2}}$ are positive constants, computed from quantum corrections at the one-loop level. $t$ is the renormalization-group transformation scale.\n\nThese renormalization group equations can be rewritten as follows\n\begin{eqnarray*}\n\frac{d\Gamma_{\scriptscriptstyle{++}}}{dt}&=&\Gamma_{\scriptscriptstyle{++}}-\Gamma_{\scriptscriptstyle{++}}C_{\scriptscriptstyle{++}}\Gamma_{\scriptscriptstyle{++}} -\Gamma_{\scriptscriptstyle{+-}}C_{\scriptscriptstyle{--}} \Gamma_{\scriptscriptstyle{-+}}\\\n\frac{d\Gamma_{\scriptscriptstyle{+-}}}{dt}&=&\Gamma_{\scriptscriptstyle{+-}}-\Gamma_{\scriptscriptstyle{++}}C_{\scriptscriptstyle{++}}\Gamma_{\scriptscriptstyle{+-}} -\Gamma_{\scriptscriptstyle{+-}}C_{\scriptscriptstyle{--}} \Gamma_{\scriptscriptstyle{--}}\\\n\frac{d\Gamma_{\scriptscriptstyle{-+}}}{dt}&=&\Gamma_{\scriptscriptstyle{-+}}-\Gamma_{\scriptscriptstyle{-+}}C_{\scriptscriptstyle{++}}\Gamma_{\scriptscriptstyle{++}} -\Gamma_{\scriptscriptstyle{--}}C_{\scriptscriptstyle{--}} 
\\Gamma_{\\scriptscriptstyle{-+}}\\\\\n\\frac{d\\Gamma_{\\scriptscriptstyle{--}}}{dt}&=&\\Gamma_{\\scriptscriptstyle{--}}-\\Gamma_{\\scriptscriptstyle{--}}C_{\\scriptscriptstyle{--}}\\Gamma_{\\scriptscriptstyle{--}} -\\Gamma_{\\scriptscriptstyle{-+}}C_{\\scriptscriptstyle{++}} \\Gamma_{\\scriptscriptstyle{+-}} .\n\\end{eqnarray*} \nFixed points are determined by $d \\Gamma_{ss'} \/ d t = 0$, resulting in\n\\begin{eqnarray*}\n&&\\Gamma_{\\scriptscriptstyle{++}}-\\Gamma_{\\scriptscriptstyle{++}}^{2}C_{\\scriptscriptstyle{++}}-\\Gamma_{\\scriptscriptstyle{+-}}\\Gamma_{\\scriptscriptstyle{-+}}C_{\\scriptscriptstyle{--}} =0\\\\\n&&\\Gamma_{\\scriptscriptstyle{+-}}\\left(1-\\Gamma_{\\scriptscriptstyle{++}}C_{\\scriptscriptstyle{++}}-\\Gamma_{\\scriptscriptstyle{--}}C_{\\scriptscriptstyle{--}}\\right)=0\\\\\n&&\\Gamma_{\\scriptscriptstyle{-+}}\\left(1-\\Gamma_{\\scriptscriptstyle{++}}C_{\\scriptscriptstyle{++}}-\\Gamma_{\\scriptscriptstyle{--}}C_{\\scriptscriptstyle{--}}\\right)=0\\\\\n&&\\Gamma_{\\scriptscriptstyle{--}}-\\Gamma_{\\scriptscriptstyle{--}}^{2}C_{\\scriptscriptstyle{--}}-\\Gamma_{\\scriptscriptstyle{-+}}\\Gamma_{\\scriptscriptstyle{+-}}C_{\\scriptscriptstyle{++}} =0.\n\\end{eqnarray*}\n\nFirst, we consider the case with the absence of inter-valley scattering, given by $\\Gamma_{\\scriptscriptstyle{+-}}=\\Gamma_{\\scriptscriptstyle{-+}}=0$. 
Then, we obtain\n\begin{eqnarray*}\n\Gamma_{\scriptscriptstyle{++}}-\Gamma_{\scriptscriptstyle{++}}^{2}C_{\scriptscriptstyle{++}}=0 ~ \& ~ \Gamma_{\scriptscriptstyle{--}}-\Gamma_{\scriptscriptstyle{--}}^{2} C_{\scriptscriptstyle{--}}=0.\n\end{eqnarray*}\n$(\Gamma_{\scriptscriptstyle{++}},\Gamma_{\scriptscriptstyle{--}})=\left\{(0,0),~(0,1\/C_{\scriptscriptstyle{--}}),~(1\/C_{\scriptscriptstyle{++}},0)\right\}$ are unstable fixed points, and $(\Gamma_{\scriptscriptstyle{++}},\Gamma_{\scriptscriptstyle{--}})=(1\/C_{\scriptscriptstyle{++}},1\/C_{\scriptscriptstyle{--}})$ is the only stable fixed point. This stable fixed point is described by the first self-consistent Born approximation without inter-valley scattering, where spin chirality is well defined.\n\nNext, we consider the presence of inter-valley scattering, given by $\Gamma_{\scriptscriptstyle{+-}}=\Gamma_{\scriptscriptstyle{-+}}$ and $\Gamma_{\scriptscriptstyle{+-}}\neq0$. Then, we obtain \n\begin{eqnarray*}\n&&\Gamma_{\scriptscriptstyle{++}}C_{\scriptscriptstyle{++}}+\Gamma_{\scriptscriptstyle{--}}C_{\scriptscriptstyle{--}}=1\\\n&&\Gamma_{\scriptscriptstyle{++}}-\Gamma_{\scriptscriptstyle{++}}^{2}C_{\scriptscriptstyle{++}}-\Gamma_{\scriptscriptstyle{+-}}^{2}C_{\scriptscriptstyle{--}}=0\\\n&&\Gamma_{\scriptscriptstyle{--}}-\Gamma_{\scriptscriptstyle{--}}^{2}C_{\scriptscriptstyle{--}}-\Gamma_{\scriptscriptstyle{+-}}^{2}C_{\scriptscriptstyle{++}}=0.\n\end{eqnarray*}\nSolving these equations, we find two fixed points:\n\begin{eqnarray*}\n(\Gamma_{\scriptscriptstyle{++}},\Gamma_{\scriptscriptstyle{--}})&=&\left(\frac{1+\sqrt{1-4C_{\scriptscriptstyle{++}}C_{\scriptscriptstyle{--}} 
\\Gamma_{\\scriptscriptstyle{+-}}^{2}}}{2C_{\\scriptscriptstyle{++}}},\\frac{1-\\sqrt{1-4C_{\\scriptscriptstyle{--}}C_{\\scriptscriptstyle{++}}\\Gamma_{\\scriptscriptstyle{+-}}^{2}}}{2C_{\\scriptscriptstyle{--}}}\\right)\\\\\n(\\Gamma_{\\scriptscriptstyle{++}},\\Gamma_{\\scriptscriptstyle{--}})&=&\\left(\\frac{1-\\sqrt{1-4C_{\\scriptscriptstyle{++}}C_{\\scriptscriptstyle{--}} \\Gamma_{\\scriptscriptstyle{+-}}^{2}}}{2C_{\\scriptscriptstyle{++}}},\\frac{1+\\sqrt{1-4C_{\\scriptscriptstyle{--}}C_{\\scriptscriptstyle{++}}\\Gamma_{\\scriptscriptstyle{+-}}^{2}}}{2C_{\\scriptscriptstyle{--}}}\\right) .\n\\end{eqnarray*} \nWhen $\\Gamma_{\\scriptscriptstyle{+-}}$ satisfies $1-4C_{\\scriptscriptstyle{++}}C_{\\scriptscriptstyle{--}} \\Gamma_{\\scriptscriptstyle{+-}}^{2} \\approx 0$, we find that these two fixed points emerge into $(\\Gamma_{\\scriptscriptstyle{++}},\\Gamma_{\\scriptscriptstyle{--}}) \\approx (1\/C_{\\scriptscriptstyle{++}},1\/C_{\\scriptscriptstyle{--}})$. This fixed point is described by the second solution of the Born approximation in the presence of inter-valley scattering, in which the spin chirality is smeared out. This may be regarded as an intermediate solution with spin chirality before the ``topological phase transition\" toward normal diffusive Fermi liquids without spin chirality appears. Table. \\ref{table1} summarizes our results, where ``WAL\" and ``WL\" represent weak antilocalization and weak localization, respectively. 
\n\n\n\\begin{table}[ht]\n\\centering\n\\caption{Two ground states of self-consistent Born approximation and two fixed points of the renormalization group analysis.}\n\\renewcommand{\\arraystretch}{1.4}\n\\begin{tabular}{>{\\centering}m{2.1in} >{\\centering}m{2.1in} >{\\centering\\arraybackslash}m{1.6in}}\n\\hline\n& Diffusive Helical Fermi Liquid & Diffusive Fermi Liquid \\\\\n\\hline\nFixed point & \\shortstack[c]{~\\\\$\\Gamma_{\\scriptscriptstyle{++}}\\neq0$,~$\\Gamma_{\\scriptscriptstyle{--}}\\neq0$,\\\\and $\\Gamma_{\\scriptscriptstyle{+-}}=0$} & \\shortstack[c]{$\\Gamma_{\\scriptscriptstyle{++}}\\neq0$,~$\\Gamma_{\\scriptscriptstyle{--}}\\neq0$,\\\\and $\\Gamma_{\\scriptscriptstyle{+-}}\\neq0$} \\\\\n\\shortstack[c]{~\\\\Ground State\\\\~~(Self-consistent Born analysis)~~} & \\shortstack[c]{~\\\\$Q_{\\scriptscriptstyle{++}}\\neq0$,~$Q_{\\scriptscriptstyle{--}}\\neq0$,\\\\and $Q_{\\scriptscriptstyle{+-}}=0$} & \\shortstack[c]{~\\\\$Q_{\\scriptscriptstyle{++}}\\neq0$,~$Q_{\\scriptscriptstyle{--}}\\neq0$,\\\\and $Q_{\\scriptscriptstyle{+-}}\\neq0$}\\\\\nTransport property & WAL & \\shortstack[c]{~\\\\WAL$\\rightarrow$WL\\\\(Crossover)}\\\\\n\\hline\n\\end{tabular}\n\\label{table1}\n\\end{table}\n\n\n\\section{Discussion}\nConsidering that the only relevant energy scales are the cyclotron energy $\\hbar \\omega$ and the Fermi energy $E_F$ in the IFS, it is natural to introduce a single parameter $b = \\hbar \\omega\/E_F = (\\hbar e\/m_{WF}E_F)B$ for the Hall resistivity contribution from the Weyl fermions, anticipating the scaling behavior for $\\Delta \\rho_H$ \\cite{HJKim11}, where $m_{WF}$ is an effective mass of the Weyl fermion and $\\Delta \\rho_H$ is the Hall resistivity component deviating from the linearity. Indeed, we found a scaling property in $\\Delta \\rho_H$, presented in Fig. 9(a), where the y-axis should be also scaled as the magnitude of $\\Delta \\rho_H$ is inversely proportional to the carrier density. 
This scaling analysis enables us to estimate $m_{WF}$, whose values for all six samples are plotted in Fig. 9(b) as a function of the corresponding $E_F$. $m_{WF}$ is on the order of \n$10^{-6} - 10^{-4} m_0$, where $m_0$ is the electron mass, and it exhibits a singular behavior with a minimum at the Weyl point.\n\nIt is straightforward to find $m_{WF} = m - \frac{\alpha_R}{\sqrt{\alpha_R^2+\frac{2 \hbar^2}{m}E_F}}m$ from the Rashba Hamiltonian with degenerate parabolic bands, where $m$ is the bare band mass given by the curvature of the parabolic band and the Rashba coupling constant $\alpha_R$ determines the energy of the Weyl point from the bottom of the conduction band, given by $E_{WF} =\frac{1}{2}\frac{m}{\hbar^2}\alpha_R^2$. Considering an overall shift for the Fermi energy and taking the limit of $\alpha_R^2 \gg \frac{2 \hbar^2}{m}E_F$, we obtain $m_{WF} \approx \frac{\hbar^2}{\alpha_R^2} \mid E_{F}-E_{WF} \mid$. This equation describes the zero mass at the Weyl point quite well. However, compared to the experimental result, the mass increases rather steeply as $E_F$ deviates from the Weyl point. In the opposite limit of a strictly linear dispersion, the mass is zero even away from the Weyl point. According to the density functional theory (DFT) \cite{Bahramy11}, the dispersion near the Weyl point is neither quadratic nor linear. Therefore, to parameterize the degree of deviation from the linear dispersion, we introduce a phenomenological equation,\n$\frac{m_{WF}}{m} \approx \frac{1}{2} | 1-(\frac{v_{linear}}{v_{real}} )^2 |\frac{\mid E_F-E_{WF} \mid}{E_{WF}}$. \nAssuming $\frac{v_{linear}}{v_{real}} \approx 1+\varepsilon$, we obtain $\varepsilon \sim 2.2 \times 10^{-2}$. This result implies that all higher-order terms of the curvature in the dispersion amount to only a few \%, and thus the real dispersion in BiTeI near the Weyl point is very close to linear. 
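The quoted small-$E_F$ limit follows from the exact Rashba expression for $m_{WF}$; a quick numerical check in units $\hbar=m=1$ with an illustrative $\alpha_R$, measuring $E_F$ from the Weyl point:

```python
import math

# Weyl-fermion mass from the Rashba Hamiltonian, hbar = m = 1, E_F measured
# from the Weyl point; alpha_R is an illustrative value of the Rashba coupling.
def m_WF(EF, alpha_R=1.0):
    return 1.0 - alpha_R / math.sqrt(alpha_R**2 + 2.0 * EF)

# exactly zero mass at the Weyl point
assert m_WF(0.0) == 0.0

# small-E_F limit: m_WF ~ (hbar^2 / alpha_R^2) * E_F, here within 2%
EF = 0.01
approx = EF / 1.0**2
assert abs(m_WF(EF) - approx) / approx < 0.02
```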
Though the dispersion is mainly determined by periodic ionic potentials, we do not exclude contributions to the linear dispersion resulting from electron interactions.\n\nThe universal scaling of the Hall resistivity discussed above is quite consistent with the extreme disparity of the mobility and the divergent IFS mobility. Representing the Hall resistivity $\Delta \rho_H(B) =\frac{1}{nec} \frac{B}{1+\mu^2B^2}$ as\n $\Delta \rho_H(b)=\frac{1}{nec}\frac{\frac{m_{WF}E_F}{\hbar e}b}{1+\mu^2(\frac{m_{WF}E_F}{\hbar e})^2b^2}$ with the dimensionless magnetic field $b$ discussed before, we obtain the scaling expression of $\frac{\Delta \rho_H(b)}{\Delta \rho_H(b=1)}=\frac{(1+\mu_{sc}^2)b}{1+\mu_{sc}^2b^2}$, \n where $\mu_{sc}=\frac{m_{WF}E_F}{\hbar e}\mu$ is a scaled mobility. This scaled mobility, being a universal constant, does not depend on the Fermi energy. Inserting the empirical formula introduced above for the Weyl-fermion mass into the mobility, we find the following expressions for the IFS and OFS mobility, given by $\mu_{IFS}(E_F)=2\frac{\hbar e \mu_{sc}}{m|1-(\frac{v_{linear}}{v_{real}})^2|} \frac{E_F}{|E_F||E_F-E_{WF}|}$ \n and $\mu_{OFS}(E_F)=\frac{\hbar e\mu_{sc}}{2m|E_F|}$, respectively. As $E_F$ is inversely proportional to $m$ for the charge carrier on the OFS,\n $\mu_{OFS}$ is constant and $\mu_{IFS}$ follows $\mu_{IFS}(E_F) \propto \frac{1}{|E_F-E_{WF}|}$ \n with the ratio of $\frac{\mu_{IFS}(E_F)}{\mu_{OFS}(E_F)}=\frac{m_{OFS}}{m_{WF}}=\frac{4}{|1-(\frac{v_{linear}}{v_{real}})^2|}\frac{E_{WF}}{|E_F-E_{WF}|} \approx 10^3 \sim 10^4$. Indeed, this scaling argument is consistent with the ``divergent'' $\mu_{IFS}$ at the Weyl point shown in the experimental result [Fig. 1(b)]. 
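The scaling form of the Hall resistivity is normalized at $b=1$ by construction and has its maximum at $b=1/\mu_{sc}$; a quick check with an arbitrary value of $\mu_{sc}$:

```python
# Scaled Hall resistivity f(b) = Delta rho_H(b) / Delta rho_H(b=1)
# for an arbitrary illustrative value of the scaled mobility mu_sc.
mu_sc = 0.7
f = lambda b: (1.0 + mu_sc**2) * b / (1.0 + mu_sc**2 * b**2)

# normalized at b = 1 by construction
assert abs(f(1.0) - 1.0) < 1e-12

# the curve has its maximum at b = 1/mu_sc, where df/db = 0
b_peak = 1.0 / mu_sc
assert all(f(b_peak) >= f(0.01 * k) for k in range(1, 500))
```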
Note that while the mass ratio between the IFS and OFS in this argument is mostly determined by the empirical factor\n $\varepsilon \approx \frac{v_{linear}}{v_{real}}-1$ introduced above, the ``divergent'' behavior of $\mu_{IFS}$ at the Weyl point is given by \n $\frac{1}{|E_F-E_{WF}|}$ and in fact, this term is inherent in the Rashba model.\n\nIn order to understand the origin of the divergent IFS mobility and the extreme disparity between the IFS and OFS mobility near the Weyl point, \nwe have performed the self-consistent Born analysis for the Rashba Hamiltonian, which is a mean-field theory in the presence of disorder with no consideration of electron correlation. Here we summarize the main results of the perturbative renormalization group analysis to understand how a fixed-point phase is determined. In general, disorder strength increases at the short-distance scale in three dimensions because a huge number of electrons on the Fermi surfaces are affected by disorder potentials. On the other hand, it decreases at the long-distance scale because disorder potentials are effectively screened. As a result, a balance is achieved, and it gives rise to a finite-disorder fixed point, which is known as a diffusive Fermi liquid. In the present problem, we found two types of fixed points: one contains the effects of the intra-valley forward scattering only and the other considers both intra- and inter-valley scattering. We have performed the self-consistent Born approximation and found a fixed-point solution for the electron Green's function in both cases. Then, we have calculated transport coefficients, evaluating current-current correlation functions with this mean-field-theory propagator. \n\nFig. 8 shows that the self-consistent Born analysis describes our experimental data quantitatively, where lines and discrete points represent theoretical curves and experimental results, respectively. 
It is natural to expect that the presence of inter-valley scattering reduces the mobility. However, effects of inter-valley scattering are not relevant for describing the experimental data in the present case\n because it suppresses spin chirality and the weak anti-localization. This fixed point is distinguished from a conventional diffusive Fermi liquid because of its definite chirality, and we name it a diffusive helical Fermi liquid. Thus, our BiTeI single crystals are weakly disordered, with negligible inter-valley scattering, and their ground state is considered to be a diffusive helical Fermi liquid. We would like to emphasize that only one fitting parameter, related to the variance of the disorder potential at the fixed point, is used in this comparison, whereas all other parameters are determined by the experiment.\n\nIt is straightforward to understand the divergent IFS mobility within the framework of the self-consistent Born approximation. As the IFS density of states vanishes, approaching the Weyl point, the scattering rate also becomes zero at the Weyl point. However, it is difficult to explain the experimentally confirmed scaling of Hall resistivity within the same framework. In fact, we find that the scaled mobility \n$\\mu_{sc} = \\frac{m_{WF}E_F}{\\hbar e} \\mu$ is not independent of the Fermi energy $E_F$ when the mobility $\\mu$ evaluated from the self-consistent mean-field analysis is used. The independence of $\\mu_{sc}$ from $E_F$ is achieved only when the divergence of the IFS mobility is exactly cancelled by the mass reduction near the Weyl point. In the scaling argument, we obtained $m_{WF} \\propto |E_F - E_{WF}|\/E_F$. On the other hand, Born mean-field theory gives $\\mu_{IFS}(E_F) \\propto \\frac{1}{|E_F-E_{WF}|^{\\kappa}}$ with $\\kappa$ larger than 1. 
Thus, the $\\mu_{sc}$ \n is not independent of $E_F$ in the self-consistent mean-field analysis.\n \n One way to resolve this inconsistency is to take into account the role of effective interactions between electrons near the Weyl point. As the IFS density of states vanishes at the Weyl point, effective interactions can be enhanced due to the weaker screening. In fact, this is what happens in graphene. Correlation effects indeed reshape the linear band dispersion of graphene \\cite{Elias11,Siegel11,Kotov12,Chae12}. A possible interplay among inversion symmetry breaking (spin chirality), disorder, and effective interactions may lead to a novel interacting diffusive fixed point, which allows the universal scaling in the Hall resistivity.\n \nWhen disorder becomes stronger, it is possible that a topological structure (geometric phase) in the ground-state wave-function changes. Previously, we considered two types of fixed points, corresponding to the absence and presence of inter-valley scattering, respectively. The former solution would be justified when the effects of disorder scattering are not strong, and is called a diffusive helical Fermi-liquid state. On the other hand, the latter solution is expected to be realized when inter-valley scattering becomes relevant, and is identified with a diffusive Fermi-liquid state. A crucial difference is spin chirality. Intra-valley scattering preserves the spin chirality while inter-valley scattering destroys it. As a result, we predict that the weak anti-localization turns into weak localization as the disorder strength increases and inter-valley scattering becomes strong. 
This crossover behavior from the diffusive helical Fermi liquid to the conventional diffusive Fermi liquid may be regarded as a weak version of a topological phase transition driven by disorder, although BiTeI is topologically trivial.\n\n\\section{Conclusion}\nIn conclusion, we uncovered that the interplay between disorder and inversion symmetry breaking is responsible for (1) divergent mobility in the inner chiral Fermi surface (FS), (2) extreme disparity of the mobility values between the inner and outer chiral FS, and (3) universal scaling in the Hall resistivity. Based on the self-consistent Born approximation, we could consistently explain observations (1) and (2), quantitatively reproducing mobility values of the inner and outer FS as a function of the Fermi energy. However, the universal scaling of the Hall resistivity cannot be accounted for within this mean-field theory, which indicates the existence of mass renormalization of the inner Fermi surface near the Weyl point, possibly originating from electron correlation due to weaker screening near the Weyl point. \\\\\n\n\\acknowledgements\nThis study was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science, and Technology (No. 2014R1A1A1002263). KS was also supported by the Ministry of Education, Science, and Technology (No. 2012R1A1B3000550 and No. 2011-0030785) of the National Research Foundation of Korea (NRF) and by TJ Park Science Fellowship of the POSCO TJ Park Foundation. MS was also supported by YO-COE Foundation from Yamagata University. MS wishes to express his thanks to Prof. T. Iwata for his support. 
\n\n$^{\\ast}$ Both authors contributed equally to the present paper \\\\\n$^{\\dagger}$ Corresponding author; hjkim76@daegu.ac.kr \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn October 2019, we were invited to join a computing challenge on 3-XORSAT problems launched by some colleagues at the University of Southern California \\cite{USCchallenge}.\nThe idea behind the challenge was to compare the actual performance of the best available computing platforms, including quantum computers, in solving a particularly hard optimization problem.\nQuantum computing is becoming practical these days, and many different computing devices based on quantum technologies are becoming available (D-Wave, Google and IBM, to cite only the best known).\nSo it is natural to ask whether any of these quantum devices available today can do better than classical (i.e.\\ non-quantum) computing machines.\n\nWe decided to join this 3-XORSAT challenge with a proposal combining new algorithmic ideas and a highly optimized GPU implementation.\nWe are not going to discuss in detail the results of the 3-XORSAT challenge, which will appear elsewhere \\cite{USCchallenge}. We just remark that the performance of our algorithm running on commercial Nvidia GPUs is at least two orders of magnitude better than that of the other devices that entered the 3-XORSAT challenge: the D-Wave quantum annealing processor \\cite{DWave}, the Memcomputing machine \\cite{Memcomputing}, the Fujitsu digital annealer \\cite{fujitsu} and Toshiba's simulated bifurcation machine \\cite{toshiba}.\n\nThis is clearly not the end of the story, as quantum technologies are evolving very fast and presumably will become competitive soon (eventually getting what is called a quantum advantage). 
Nonetheless, we believe it is very important to clarify what is today the state of the art in the ``classical vs.\\ quantum computation challenge''.\n\nIn the present manuscript we report the ideas and the technical details that made our algorithm rank first on the hard optimization problems presented at the 3-XORSAT challenge.\n\nThe manuscript is organized as follows. First we recap the known physical properties of these hard optimization problems (especially their energy landscape). Then we describe the algorithm we decided to use, with a particular emphasis on the use of a large number of clones and how this can be implemented efficiently on classical computers. We also provide a description of the technical choices that made our GPU implementation extremely efficient, even if the problem, being defined on a random graph topology, would in principle not be ideal for a platform like the GPU. Finally, we discuss the numerical results about the time-to-solution (TTS), proposing an improved way of measuring the largest percentiles of TTS. We finish with a few concluding remarks.\n\n\\section{The model and its energy landscape}\n\nThe optimization problem that has been presented to the contenders at the 3-XORSAT challenge is the search for the ground state of a model based on Ising variables. The model is well known in statistical physics under the name of \\textit{diluted 3-spin ferromagnetic model} \\cite{franz2001ferromagnet}. In the computer science literature, it corresponds to a constraint satisfaction problem known under the name of \\textit{3-xorsat} \\cite{dubois20023}.\nIn this paper, we use the statistical physics formulation of the model, but switching to its computer science formulation requires just a change of variables.\n\nThe model is defined by the Hamiltonian\n\\begin{equation}\\label{eq:Hamiltonian}\n H[s]\\equiv -\\sum_{(i,j,k)\\in E} s_i s_j s_k\\;,\n\\end{equation}\nwhere $s_i=\\pm 1$ are $N$ Ising spins. 
The sum over the set $E$ of triplets $(i,j,k)$ is what defines the interaction topology. The instances provided in the 3-XORSAT challenge were generated on a random regular graph of fixed degree 3. In other words, the set $E$ is made of $N$ triplets randomly chosen under the constraints that in each triplet the 3 indices are different and each index appears in exactly 3 triplets.\nFrom the definition of the Hamiltonian $H[s]$ in Eq.~(\\ref{eq:Hamiltonian}), it is clear that the ground state is the configuration $s^*$ with all $s^*_i=1$. However, our algorithm, publicly available at \\cite{algo}, searches for the ground state without computing the magnetization, having access only to the energy, so there is no need to ``hide'' the solution by a gauge transformation. The organizers of the competition are expected to check that the same is true for all the other contenders.\n\nOne may argue that such a model should be easy to optimize, because all interactions are ferromagnetic. However, it is well known that such a model shows the same glassy physics as a disordered model \\cite{franz2001ferromagnet,ricci2010being} because the 3-spin interaction can be satisfied in many ways and this generates frustration in the system during the optimization.\\footnote{The careful reader may have noticed that the problem of satisfying all interactions in $H[s]$, i.e.\\ $s_i s_j s_k = 1$, is equivalent to the problem of solving linear equations modulo 2, $(x_i+x_j+x_k)(\\text{mod } 2)=0$ where $s_i=(-1)^{x_i}$. This problem can be solved in polynomial time, e.g.\\ by Gaussian elimination. However, as discussed in previous publications \\cite{barthel2002hiding}, the problem can be slightly modified, preserving the same physical behavior and making the polynomial algorithm no longer useful. 
The competition was restricted to algorithms which are robust with respect to such a change: using Gaussian elimination, or algorithms derived from it, was forbidden.} \n\nActually, in the 3-XORSAT challenge, an equivalent formulation has been used, where the variables are twice in number (one $\\eta$ variable is added per constraint) and variables interact only pairwise \\cite{hen2019equation}. This has been done to allow devices implementing only pairwise interactions to enter the competition. The resulting Hamiltonian $H_2[s,\\eta]$ is such that $\\sum_\\eta H_2[s,\\eta]=H[s]$. We have performed such a marginalization on the instances provided, so our algorithm minimizes the cost function $H[s]$ given in Eq.~(\\ref{eq:Hamiltonian}).\n\nThe 3-spin on 3-regular random graph (3S3R) model offers a paradigmatic example of a \\emph{golf course} energy landscape.\nThe thermodynamics of the model has been exactly solved \\cite{mezard2003two,montanari2003nature,krzakala2010following} and its dynamics has been accurately studied numerically \\cite{montanari2004cooling,krzakala2010following}.\nThe picture that emerges from these studies is exactly the one that goes under the name of ``random first order transition'' in the physics of glasses \\cite{kirkpatrick1987connections,kirkpatrick1989scaling,castellani2005spin,biroli2012random}.\nThere exists an exponential (in $N$) number of metastable states that dominate the Gibbs measure below the dynamical critical temperature $T_d\\simeq 0.51$, such that for $T<T_d$ the dynamics remains trapped among these metastable states; thermodynamically, however, the model has no finite-temperature transition and for any $T>0$ it is in a paramagnetic phase. 
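For concreteness, the triplet topology and the cost function of Eq.~(\\ref{eq:Hamiltonian}) can be sampled and evaluated in a few lines. This is our own illustrative sketch (a configuration model with rejection), not the generator used by the challenge organizers:

```python
import random

def random_3s3r(N, rng):
    """Sample N triplets of distinct indices with every index appearing in
    exactly 3 triplets (configuration model with rejection; a truly uniform
    sampler over such hypergraphs needs a little more care)."""
    while True:
        stubs = [i for i in range(N) for _ in range(3)]
        rng.shuffle(stubs)
        triplets = [tuple(stubs[3 * a:3 * a + 3]) for a in range(N)]
        if all(len(set(t)) == 3 for t in triplets):
            return triplets

def energy(s, triplets):
    """H[s] = -sum of s_i s_j s_k over the triplets in E, Eq. (1)."""
    return -sum(s[i] * s[j] * s[k] for (i, j, k) in triplets)

E = random_3s3r(24, random.Random(0))
print(energy([1] * 24, E))   # -24: the all-ones ground state satisfies all N triplets
```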
This means that in thermal equilibrium, the dynamics is unlikely to reach the ferromagnetic ground state $s^*$.\nEven if, by chance, $s^*$ is reached, the dynamics will soon leave that configuration to thermalize again in the paramagnetic state.\n\nIn Ref.~\\cite{bellitti2021entropic}, a new class of heuristic algorithms has been introduced with the aim of proving that entropic barriers are the main source of computational complexity in the optimization of the 3S3R model. These \\emph{quasi-greedy} (QG) algorithms perform, whenever possible and with high probability, a step that decreases the energy, but when they reach a local minimum they keep flipping spins that enter at least one violated interaction.\nThese algorithms have several advantages: (i) they converge fast to the interesting low-energy part of the configurational space; (ii) they keep the system evolving even in the presence of many local minima, but without increasing the energy too much (that would make the search ineffective since it would be run in an uninteresting region); (iii) once the ground state $s^*$ is found, the algorithm stops and thus does not escape from the solution of the problem (without the need to check for it after every single spin flip).\n\nCalling $w_k$ the probability of flipping a spin entering $k$ unsatisfied interactions, in Ref.~\\cite{bellitti2021entropic} the QG algorithm with $w_0=0$ and $w_2=w_3=1$ was studied numerically. The probability of finding the solution $s^*$ was found to reach a maximum close to $w_1=0.05$, with a median TTS growing approximately as $\\exp(0.0835 N)$.\nBuilding on these very promising results, we have developed here a very optimized version of the QG algorithm.\n\nThe QG algorithm can also be viewed as an imperfect Metropolis algorithm not satisfying detailed balance, since by setting $w_0=w_1^3$ we would have an algorithm satisfying detailed balance for the 3S3R model. 
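A single sweep of the QG dynamics can be sketched as follows; the tiny 4-spin instance and the sequential update order are our own illustrative choices (the actual implementation updates spins very differently, as described below for the GPU code):

```python
import random

# Hypothetical toy instance with the 3S3R topology: 4 spins, 4 triplets,
# each spin entering exactly 3 interactions.
triplets = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
containing = [[t for t in triplets if i in t] for i in range(4)]

def qg_sweep(s, rng, w1=0.055):
    """One sweep of the quasi-greedy dynamics: a spin entering k violated
    interactions is flipped with probability w_k, with w0 = 0 and w2 = w3 = 1."""
    w = (0.0, w1, 1.0, 1.0)
    for i in range(len(s)):
        k = sum(1 for (a, b, c) in containing[i] if s[a] * s[b] * s[c] < 0)
        if rng.random() < w[k]:
            s[i] = -s[i]
    return s

rng = random.Random(1)
print(qg_sweep([1, 1, 1, 1], rng))    # [1, 1, 1, 1]: the ground state is a fixed point
print(qg_sweep([-1, 1, 1, 1], rng))   # [1, 1, 1, 1]: spin 0 has k = 3 and flips with w3 = 1
```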
Setting $w_0=0$ breaks detailed balance, but brings two advantages: large energy jumps are forbidden (they are not strictly required in a search limited by entropic barriers), and once the ground state $s^*$ is found, the algorithm stops.\n\nThe latter property is extremely useful because any efficient implementation of the QG algorithm must perform a large number of steps before checking for the energy (a check that takes a time comparable to a sweep of the QG algorithm).\nThe condition $w_0=0$ ensures that a ground state found during the dynamical evolution will not be lost\nbetween two successive measurements.\n\n\\section{The search by rare events requires many clones}\n\nAs discussed in \\cite{bellitti2021entropic}, the search for the ground state is slowed down by entropic barriers, i.e.\\ the search for the right well leading from the $e_\\text{th}$ manifold to the ground state configuration $s^*$ is like ``finding a needle in a haystack''.\nFor this reason, instead of having a single copy of the system evolving for a very long time, it turns out to be more appropriate to follow a large number of copies of the system (evolving independently) for a shorter time, starting each one from a different random initial condition: we call these \\emph{clones}.\n\nThe rationale behind this choice is that the evolution on the marginal manifold at $e_\\text{th}$ is not fast enough to allow a single clone to visit the entire manifold in a reasonable time.\nSo if the clone starts from an unfavourable initial condition, its search is bound to fail even if it keeps evolving for a very long time.\n\nWe are facing a typical phenomenon ruled by rare events: in the large $N$ limit, for a typical initial condition, the QG algorithm gets stuck at $e_\\text{th}$ and fails to find $s^*$, but there are rare initial conditions that allow the QG algorithm to find the solution $s^*$ in a short time.\nThe probability of such rare initial conditions (that roughly coincide with the basin of
attraction of $s^*$) is exponentially small in $N$, as in any large deviation process.\n\nOne has to make a choice between the following two extreme strategies: running a single clone for a time scaling exponentially in $N$ or running a number of clones scaling exponentially with $N$ for a finite time. In principle, one should optimize over all choices in between these extremes, at a fixed total amount of computing time.\n\nOur choice has been to run the largest possible number of clones. This turns out to be the best choice for several reasons. It reduces fluctuations and, if the number of clones is large enough, we can derive analytic predictions for setting the running time to an optimal value (see below). Moreover, it is very unlikely for a single clone (or a few clones) to find the solution, while when running a huge number of clones some of them can find the solution, thus allowing us to estimate the mean TTS.\nFinally, running a large number of clones is highly beneficial from the coding point of view, since the clones can evolve in parallel, leading to a drastic reduction in running times.\n\n\\section{Basic information about the GPU implementation}\n\nWe implemented the algorithm described above in a CUDA code that looks for the ground state of the given problem instance using thousands of clones concurrently.\nSince spins can only have two values, we use a multi-spin coding technique by packing values from different clones into 32-bit words \\cite{jacobs1981multi}. This allows us to update 32 distinct clones in parallel by using Boolean operations on the spin words (we use the same random number for the 32 clones in the same word). Moreover, we use the natural thread parallelism, evolving different clones in each GPU core. 
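The multi-spin coding idea can be illustrated in plain Python, with bit $b$ of a machine word holding spin $i$ of clone $b$; the actual implementation applies the same Boolean expressions in CUDA:

```python
# With the bit encoding 0 -> +1 and 1 -> -1, the product s_i s_j s_k of clone b
# is -1 exactly when the XOR of the three corresponding bits is 1, so a single
# Boolean expression evaluates an interaction for 32 clones at once.
N_BITS = 32
MASK = (1 << N_BITS) - 1

def unsat_word(spins, i, j, k):
    """Word whose bit b is set iff interaction (i,j,k) is violated in clone b."""
    return (spins[i] ^ spins[j] ^ spins[k]) & MASK

def energies(spins, triplets):
    """Per-clone energy: H[s] = 2 * (#violated triplets) - N."""
    n_unsat = [0] * N_BITS
    for (i, j, k) in triplets:
        w = unsat_word(spins, i, j, k)
        for b in range(N_BITS):
            n_unsat[b] += (w >> b) & 1
    return [2 * u - len(triplets) for u in n_unsat]

# Toy check on a hypothetical 4-triplet instance: all-zero words encode the
# all +1 ground state in every clone, hence energy -N = -4 for each clone.
triplets = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(energies([0, 0, 0, 0], triplets))
```

Flipping spin $i$ in a chosen subset of clones is then a single XOR with a mask word, `spins[i] ^= flip_mask`, which is what makes the Boolean update so cheap.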
Finally, multiple GPUs can be used simultaneously by executing the same code with different random seeds.\nSo, in the end, we have three levels of parallelism: $i)$ multi-spin coding; $ii)$ thread level; $iii)$ multiple independent executions on distinct GPUs. Although the code is able to fit the number of threads to the actual number of cores available on the GPU in use, most of the runs have been executed on Volta 100 GPUs featuring 5120 cores. On those GPUs, the total number of clones was $N_\\text{cl}=327680$.\n\nOne more crucial aspect of our GPU implementation is the partitioning of the variables into independent sets. In this way, the spin update procedure can be performed in parallel inside each independent set.\n\nFurther details on the GPU implementation and on the optimization and fine-tuning of the code are reported in the Supplementary Material.\n\n\\section{Numerical results}\n\nWe have been provided with 100 instances for different problem sizes. After some preliminary runs, we decided to focus our attention on sizes 256, 512 and 640, which, once transformed back to the form of a 3S3R model, correspond to $N=128,\\,256,\\,320$.\n\n\\begin{figure}\n \\onefigure[width=0.8\\columnwidth]{w1.pdf}\n \\caption{Preliminary runs on instance \\#38 of size $N=256$ allowed us to estimate the optimal value $w_1=0.054(1)$ for the only parameter of the QG algorithm.}\n \\label{fig:w1}\n\\end{figure}\n\nThe QG algorithm depends on a single parameter, $w_1$, which is the probability of flipping a variable entering one unsatisfied interaction and two satisfied interactions. The other parameters are fixed: $w_0=0$ (the ground state is a fixed point of the QG algorithm) and $w_2=w_3=1$ (the QG algorithm decreases the energy whenever possible).\nOur preliminary runs also served to optimize over $w_1$. In Fig.~\\ref{fig:w1} we show the mean TTS in a given instance of size $N=256$. 
The quadratic interpolation of the data gives an optimal value $w_1=0.054(1)$, with a negligible dependence on the problem size (at least for $N\\ge 128$). So hereafter we fix $w_1=0.055$.\nAlthough the QG algorithm does not satisfy detailed balance, we can associate a pseudo-temperature to the value of $w_1$ by the relation $w_1=\\exp(-2\/T_\\text{run})$, where $\\exp(-2\/T_\\text{run})$ is the acceptance probability for such a flip (which increases the energy by 2) in the Metropolis algorithm running at temperature $T_\\text{run}$. It is worth noticing that for $w_1=0.055$ we have $T_\\text{run}\\simeq 0.69$, which is slightly above the dynamical transition temperature $T_d\\simeq 0.51$ \\cite{krzakala2010following}.\nSince the QG algorithm is stochastic, the TTS is a random variable whose probability distribution is often measured via its percentiles $\\text{TTS}_p$, defined by $\\mathbb{P}[\\text{TTS}<\\text{TTS}_p] = p\/100$.\nThe organizers of the 3-XORSAT competition asked the participants to estimate the 99-th percentile $\\text{TTS}_{99}$ for each of the 100 instances of a given size and to report the median value (over the instances). This is the time required to solve with 99\\% probability an instance of median hardness. As we will discuss in detail, we are not confident that this is the best metric for evaluating the performance of the proposed algorithms.\n\nMost of our simulations have been executed on Nvidia V100 GPUs running in parallel $N_\\text{cl}=327680$ clones. \nThe TTS for a single run of our QG algorithm is given by the shortest among the $N_\\text{cl}$ times each clone requires to reach the solution. 
Under the assumption, which we have checked numerically with great accuracy, that the cumulative distribution of the single-clone time to reach a solution starts linearly at the origin, we have that the TTS (the best of the $N_\\text{cl}$ clones) is exponentially distributed as (see SM)\n\\begin{equation}\\label{eq:expTTS}\n \\mathbb{P}[\\text{TTS}>t] = \\exp(-t\/\\tau)\\,.\n\\end{equation}\nA check of the above equation is reported in Fig.~\\ref{fig:N256_fig1} where we plot in a semilogarithmic scale the probability that the TTS is larger than a given time $t$ (in seconds) for 20 instances of size $N=256$. Data have been obtained running the QG algorithm 1008 times and sorting the corresponding 1008 values of the TTS. We observe that the exponential distribution (which is linear in a semilogarithmic scale) describes the data very well down to small probabilities.\nA consequence of this observation is that the TTS of our QG algorithm with a very large number of clones can be perfectly described in terms of the single timescale $\\tau$, \\emph{the mean TTS}, which depends solely on the particular instance under study.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{N256_fig1.pdf}\n \\caption{Cumulative probability distribution of TTS for 20 instances of size $N=256$.}\n \\label{fig:N256_fig1}\n\\end{figure}\n\nIn Fig.~\\ref{fig:N256_fig1} the crossing of the data with the horizontal dotted line determines the value of $\\text{TTS}_{99}$. We believe that this is not the best estimate for the time such that the QG algorithm finds the solution with probability 99\\%. As a matter of fact, a better estimate, affected by much smaller fluctuations, is given by $-\\log(0.01)\\;\\tau$. The latter estimate is much more robust than $\\text{TTS}_{99}$ since it is obtained from all the measured TTS values. 
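This is easy to verify in a simulation. In the sketch below (hypothetical parameters, not challenge data) the single-clone solve time is uniform on $(0,1)$, whose CDF is linear at the origin, and the best of $N_\\text{cl}$ independent clones is then exponential to a very good approximation:

```python
import math
import random

rng = random.Random(7)
N_cl, n_runs = 200, 5000
# TTS of one run = best time among N_cl independent clones.
tts = [min(rng.random() for _ in range(N_cl)) for _ in range(n_runs)]

tau_hat = sum(tts) / n_runs            # mean TTS; here tau is roughly 1/N_cl
tts99 = -math.log(0.01) * tau_hat      # tau-based estimate of TTS_99
frac = sum(t < tts99 for t in tts) / n_runs
print(round(tau_hat, 4), round(frac, 3))   # close to 0.005 and 0.99
```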
Moreover, $\\text{TTS}_{99}$ requires the execution of the algorithm at least 100 times, whereas $\\tau$ can be safely estimated from a much smaller number of measurements.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{N256_fig2.pdf}\n \\caption{Different estimates of $\\tau$, the mean TTS, for the 100 instances of size $N=256$.}\n \\label{fig:N256_fig2}\n\\end{figure}\n\nIn general, the value of $\\text{TTS}_p$ can be better computed via $-\\log(1-p\/100)\\tau$, after the values of $\\tau$ have been estimated. In Fig.~\\ref{fig:N256_fig2}, we plot for each of the 100 instances of size $N=256$ the mean TTS $\\tau$ and three equivalent estimates obtained from $\\text{TTS}_{50}$ (the median), $\\text{TTS}_{90}$ and $\\text{TTS}_{99}$.\nFor each instance, the four estimates are very close, whereas they vary a lot when changing the instance. A more careful inspection of the data in Fig.~\\ref{fig:N256_fig2} highlights that the mean TTS $\\tau$ is always in the middle of the group of the four estimates, while the estimate based on $\\text{TTS}_{99}$ is sometimes far from the others.\nThe above observations suggest that using $\\tau$ instead of $\\text{TTS}_{99}$ would provide more reliable and stable results in the analysis of algorithm performance.\n\n\\section{Running with a short timeout}\n\nWhat is the best possible way to estimate $\\tau$? Obviously, having unbounded computing power, one could simply execute the search $N_\\text{run}$ times and just take the average of all the TTS values.\nBut for a process that requires a computing time growing exponentially with the problem size $N$, this naive approach soon becomes unfeasible.\n\nNonetheless, we deduce from the data shown in Fig.~\\ref{fig:N256_fig1} and from the cumulative distribution in Eq.~(\\ref{eq:expTTS}) that runs with a very short TTS always exist for any $\\tau$, although they become very rare for large values of $\\tau$.\n\nSo we can adopt a different search strategy. 
Instead of letting every run finish by reaching the solution $s^*$ (which is found sooner or later, since the dynamical process we simulate is ergodic for finite $N$ values), we can set a timeout $t_\\text{max}$ such that the QG algorithm reports a failure if the solution is not found in a time shorter than $t_\\text{max}$.\n\nThis \\emph{early stop} strategy has several advantages.\nThe use of a timeout prevents very long runs: this is very useful, not only because it stops in advance those unfortunate runs that would take an atypically long time, but also because it makes all runs of similar duration (and this is very useful when planning a large group of parallel runs).\nMore importantly, the algorithm with a timeout can also be run for very large sizes, for which the algorithm without any timeout would take too long to finish. The data for problems of size $N=320$ have been obtained with this strategy, and a sensible estimate would have been otherwise impossible to get.\n\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{timeout.pdf}\n \\caption{Estimates of mean TTS $\\tau$ in all the 100 instances of size $N=256$ obtained from runs with a short timeout.}\n \\label{fig3}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{99perc.pdf}\n \\caption{99-th percentiles $\\text{TTS}_{99}$ for the median instances of several problem sizes.}\n \\label{fig:99perc}\n\\end{figure}\n\nIf the timeout $t_\\text{max}$ is much shorter than the mean TTS, $t_\\text{max} \\ll \\tau$, only a small fraction of runs will find the solution. 
By running $N_\\text{run}$ runs with a timeout $t_\\text{max}$, we can estimate $\\tau$ from the number $n$ of successful runs as follows.\nThe posterior distribution on $\\tau$ given that we observe $n$ successful runs among $N_\\text{run}$ is proportional to\n\\begin{equation*}\nP(\\tau|n) \\propto \\frac{1}{\\tau} \\binom{N_\\text{run}}{n} \\left(1-e^{-t_\\text{max}\/\\tau}\\right)^n \\left(e^{-t_\\text{max}\/\\tau}\\right)^{N_\\text{run}-n}\\;,\n\\end{equation*}\nwhere the factor $1\/\\tau$ before the binomial coefficient is the prior on $\\tau$ and it is such that, before taking any measurement, the probability measure is uniform on the variable $\\ln\\tau$.\nSince $t_\\text{max}\\ll\\tau$ we can simplify the posterior to the following normalized distribution\n\\begin{equation*}\nP(\\tau|n)=\\frac{\\mathcal{T}_\\text{tot}^n}{(n-1)!}\\frac{e^{-\\mathcal{T}_\\text{tot}\/\\tau}}{\\tau^{n+1}}\\;,\n\\end{equation*}\nwith $\\mathcal{T}_\\text{tot}=N_\\text{run}t_\\text{max}$ being the total running time. 
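Since the posterior for $1\/\\tau$ is a Gamma distribution with shape $n$ and rate $\\mathcal{T}_\\text{tot}$, point estimates and credible intervals for $\\tau$ are cheap to obtain. A sketch with hypothetical numbers (not our measured data):

```python
import random

def tau_posterior(n, n_run, t_max, rng, n_samp=100_000):
    """Summaries of the simplified posterior P(tau|n) ~ exp(-T_tot/tau)/tau^(n+1):
    1/tau is Gamma(shape n, rate T_tot), so the posterior mean of tau is
    T_tot/(n-1) for n >= 2; a credible interval follows by sampling."""
    T_tot = n_run * t_max
    mean = T_tot / (n - 1)
    samp = sorted(T_tot / rng.gammavariate(n, 1.0) for _ in range(n_samp))
    lo, hi = samp[int(0.025 * n_samp)], samp[int(0.975 * n_samp)]
    return mean, lo, hi

# Hypothetical numbers: n = 20 successes out of 100000 runs with a 5 s timeout.
mean, lo, hi = tau_posterior(20, 100_000, 5.0, random.Random(3))
print(round(mean), round(lo), round(hi))   # mean = 500000/19, about 26316 s
```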
Getting an estimate of $\\tau$ from this posterior distribution is straightforward and the results are shown in Fig.~\\ref{fig3} for timeouts of 5, 10 and 20 seconds.\n\nWe see from the data in Fig.~\\ref{fig3} that the estimates of $\\tau$ from runs with a rather short timeout are very accurate for most of the samples: only for samples whose mean TTS is almost 2 orders of magnitude larger than the timeout did the estimate turn out to be larger, but still compatible within error bars.\nIn particular we notice that for the median instance the estimates of $\\tau$ (and thus of the 99-th percentile $\\text{TTS}_{99}$) obtained from runs with a very short timeout are perfectly fine and allow us to save a great amount of time.\n\nWe summarize in Table \\ref{table1} our best estimates for $\\text{TTS}_{99}$ in the median instance, and in Fig.~\\ref{fig:99perc}, we plot the values reported in the table together with the best-fitting exponential growth, $\\text{TTS}_{99} \\propto \\exp(a N)$ with $a=0.0786(4)$.\n\n\\begin{table}[!ht]\n \\centering\n \\begin{tabular}{|llll|} \n \\hline\n & from all runs & from runs & timeout \\\\\n $N$ & (no timeout) & with timeout & value \\\\ [0.5ex]\n \\hline\n 128 & 0.0275 $\\pm$ 0.0005 & --- & --- \\\\\n 256 & 640 $\\pm$ 20 & 700 $\\pm$ 100 & 4.5 \\\\\n 320 & --- & 130k $\\pm$ 30k & 450 \\\\ [1ex] \n \\hline\n \\end{tabular}\n \\caption{Values of the 99-th percentile $\\text{TTS}_{99}$ for the median instances of several sizes. 
All times are expressed in seconds.}\n \\label{table1}\n\\end{table}\n\n\\section{Analytic prediction for exponent $a$ in $\\ln\\tau\\sim a N$}\n\nAlthough the QG algorithm is heuristic and does not satisfy the detailed balance condition, we can still obtain an approximate analytical estimate of the exponent $a$ ruling the growth of $\\tau$ with $N$, assuming the dynamics takes place in contact with a thermal bath at an effective temperature $T_\\text{run}=-2\/\\log(w_1)$.\nIn thermal equilibrium, we expect the time to visit the ground state $s^*$ to be related to the free-energy barrier between the paramagnetic state and the ordered state around $s^*$.\nWe need to compute the free-energy as a function of the magnetization $m=\\sum_i s_i \/ N$.\n\nWe consider a $K$-spin model on a $K$-regular random graph (the model we simulated has $K=3$, but it is worth presenting the analytic computation for a generic $K$ value). In order to set the magnetization to an arbitrary value, we add an external field $b$ to the Hamiltonian: $H[s]-b \\sum_i s_i$. Using the cavity method for sparse models \\cite{mezard2001bethe,mezard2003cavity,mezard2009information} we can write the free-energy at temperature $T=1\/\\beta$ in the following variational form\n\\begin{eqnarray}\n -\\beta f &=& \\log(Z_i) + \\log(Z_a) - K \\log(Z_{ai}) - b\\,m\\;,\\label{eq:f}\\\\\n Z_i &=& \\frac{2 \\cosh(\\beta(Ku+b))}{(2 \\cosh(\\beta u))^K}\\;,\\nonumber\\\\\n Z_a &=& \\cosh(\\beta) \\left(1+\\tanh(\\beta) \\tanh(\\beta h)^K\\right)\\;,\\nonumber\\\\\n Z_{ai} &=& \\frac{\\cosh(\\beta(u+h))}{2\\cosh(\\beta u)\\cosh(\\beta h)}\\;,\\nonumber\n\\end{eqnarray}\nthat needs to be extremized with respect to the external field $b$ and the cavity fields $u$ and $h$. 
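The extremization can also be carried out numerically. Below is a minimal sketch for $K=3$ (our own illustration); note that we evaluate the edge term with a minus sign, $-K\\log Z_{ai}$, the convention under which the paramagnetic limit reproduces $f_\\text{para}=-\\log(2\\cosh\\beta)\/\\beta$:

```python
import math

def minus_beta_f(beta, b, u, h, K=3):
    """-beta*f = log Z_i + log Z_a - K*log Z_ai - b*m, with Z_i, Z_a, Z_ai
    as in the variational form given in the text."""
    Zi = 2 * math.cosh(beta * (K * u + b)) / (2 * math.cosh(beta * u)) ** K
    Za = math.cosh(beta) * (1 + math.tanh(beta) * math.tanh(beta * h) ** K)
    Zai = math.cosh(beta * (u + h)) / (2 * math.cosh(beta * u) * math.cosh(beta * h))
    m = math.tanh(beta * (K * u + b))
    return math.log(Zi) + math.log(Za) - K * math.log(Zai) - b * m

def cavity_fields(beta, b, K=3, u=1.0, n_iter=500):
    """Fixed-point iteration of the stationarity conditions
    h = (K-1)*u + b  and  tanh(beta*u) = tanh(beta)*tanh(beta*h)^(K-1)."""
    for _ in range(n_iter):
        h = (K - 1) * u + b
        u = math.atanh(math.tanh(beta) * math.tanh(beta * h) ** (K - 1)) / beta
    return u, h

beta = 2.0
u, h = cavity_fields(beta, 0.0)                  # ferromagnetic solution at low T
f_ferro = -minus_beta_f(beta, 0.0, u, h) / beta
f_para = -math.log(2 * math.cosh(beta)) / beta
print(f_para < f_ferro)   # True: the paramagnet keeps the lower free energy
```

A positive gap between the ordered branch and the paramagnetic one is precisely what makes equilibrium visits to the state around $s^*$ exponentially rare.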
The saddle point equations read\n\\begin{eqnarray}\n m &=& \\tanh(\\beta(Ku+b))\\;,\\nonumber\\\\\n h &=& (K-1)u+b\\;,\\label{eq:spe}\\\\\n \\tanh(\\beta u) &=& \\tanh(\\beta) \\tanh(\\beta h)^{K-1}\\;.\\nonumber\n\\end{eqnarray}\nThe paramagnetic solution to Eq.~(\\ref{eq:spe}) has $u=h=m=0$ and $f=f_\\text{para}\\equiv-\\log(2\\cosh(\\beta))\/\\beta$.\nThe ferromagnetic solution has $u,h,m>0$ and it exists only for temperatures $T$ below a threshold value. Suppose there is a relation of minimal degree $n > 2$:\n\\begin{align*}\nr = \\sum_{M_2\\le a \\le M_1} c_a x_2^ax_1^{n-a},\n\\end{align*}\nwhere $M_1 = \\min \\{n, N-1\\}$, $M_2 = \\max \\{0, n + 1 - N\\}$. Then\n\\begin{align*}\n\\Delta(r)&= \\sum_{M_2\\le a \\le M_1} c_a \\Big(\\sum_{i=0}^a \\binom{a}{i}_{k^2} x_2^i\\otimes x_2^{a-i}\\Big)\n\\Big(\\sum_{j=0}^{n-a} \\binom{n-a}{j}_{k^2} x_1^j\\otimes x_1^{n-a-j}\\Big) \\\\\n&= \\sum_{M_2\\le a \\le M_1}\\sum_{i=0}^a\\sum_{j=0}^{n-a} c_a \\binom{a}{i}_{k^2}\\binom{n-a}{j}_{k^2} (kp)^{(a-i)j} x_2^ix_1^j\\otimes x_2^{a-i}x_1^{n-a-j}.\n\\end{align*}\nThus\n\\begin{align*}\n\\partial_1(r) &= \\sum_{M_2\\le a \\le \\min \\{M_1, n-1\\}} c_a (n-a)_{k^2} (kp)^{a}\\, x_2^{a}x_1^{n-a-1},\n\\\\ \\partial_2(r) &= \\sum_{\\max\\{M_2, 1\\} \\le a \\le M_1} c_a (a)_{k^2} x_2^{a-1}x_1^{n-a}.\n\\end{align*}\nBy minimality of $n$, $\\partial_1(r) = \\partial_2(r) =0$, hence\n\\begin{align*}\nc_a (n-a)_{k^2} &= 0,& M_2 &\\le a \\le \\min \\{M_1, n-1\\}, \\\\ c_a (a)_{k^2} &= 0,& \\max\\{M_2, 1\\} &\\le a \\le M_1.\n\\end{align*}\nIf any $ c_a\\neq 0 $, then either $(a)_{k^2}$ or $(n-a)_{k^2} = 0$, but this contradicts \nthe definition of $N$. Therefore the coefficients $ c_a $ are trivial and $\\pi$ is bijective.\n\\end{proof}\n\nWe close this Subsection with a discussion of the Nichols algebras arising from the equivalence in \\cite{H}. 
\nFirst, \\ref{item:hiet-b} and \\ref{item:hiet-c} give rise to the same braiding, that is\n\\begin{align}\\label{eq:braiding21bc}\n(c'(x_i \\otimes x_j))_{i, j \\in \\I_2} = \\begin{pmatrix}\nk^2 x_1\\otimes x_1 & kp x_2\\otimes x_1 \\\\\nkq x_1\\otimes x_2 + (k^2-pq) x_2\\otimes x_1 & k^2 x_2\\otimes x_2\n\\end{pmatrix}.\n\\end{align}\nSince \\ref{item:hiet-b} is a change of basis, the Nichols algebras are isomorphic.\nSecond, \\ref{item:hiet-a} gives rise to the braiding\n\\begin{align}\\label{eq:braiding21a}\n(c''(x_i \\otimes x_j))_{i, j \\in \\I_2} = \\begin{pmatrix}\nk^2 x_1\\otimes x_1 & kq x_2\\otimes x_1 \\\\\nkp x_1\\otimes x_2 + (k^2-pq) x_2\\otimes x_1 & k^2 x_2\\otimes x_2\n\\end{pmatrix}.\n\\end{align}\nBut \\eqref{eq:braiding21a} is \\eqref{eq:braiding21bc} up to $p \\leftrightarrow q$, so no new Nichols algebra arises.\n\nThird, \\ref{item:hiet-a} composed with \\ref{item:hiet-c} gives the initial $ \\mathfrak{R}_{2, 1}$ \nup to $p \\leftrightarrow q$, so no new Nichols algebra arises.\n\n\\subsection{Case $ \\mathfrak{R}_{2, 2}$}\\label{subsec:hit22} \nWe assume that $ k, p, q\\neq 0 $ and $ k^2\\neq pq $. The associated braiding is\n\\begin{align*}\n(c(x_i \\otimes x_j))_{i, j \\in \\I_2} = \\begin{pmatrix}\nk^2 x_1\\otimes x_1 & kq x_2\\otimes x_1 + (k^2-pq) x_1\\otimes x_2 \\\\\nkp x_1\\otimes x_2 & -pq x_2\\otimes x_2 \n\\end{pmatrix}.\n\\end{align*}\nLet $N_1 = \\begin{cases} \\ord k^2, &\\text{if } 1 \\neq \\ord k^2; \\\\ \\infty, &\\text{otherwise.}\n\\end{cases}$ and $N_2 = \\begin{cases} \\ord (-pq), &\\text{if } 1 \\neq \\ord (-pq); \\\\ \\infty, &\\text{otherwise.}\n\\end{cases}$\n\n\\begin{pro} If $ k^2 \\neq -1 $ and $ pq \\neq 1$, then there are no quadratic relations.\n\tOtherwise, the Nichols algebras are\n\\begin{align}\\label{eqn:Nichols2,2}\n{\\mathcal B}(V) &= T(V) \/ \\langle x_1x_2 - kqx_2x_1, r_1, r_2\\rangle \n\\end{align}\nwhere $ r_i= x_i^{N_i}, \\, i\\in \\I_2$, only if $ N_i<\\infty $. 
Also $ \\{ x_2^{a_2}x_1^{a_1}: a_i \\in \\I_{0, N_i-1}\\}$ \nis a PBW-basis and $ \\GK {\\mathcal B}(V) = |\\{i\\in\\I_2: N_i =\\infty\\}|$; if 0, then $\\dim {\\mathcal B}(V) = N_1N_2$.\n\\end{pro}\n\n\\begin{proof} Similar to the proof of Proposition \\ref{prop:case2,1}.\n\\end{proof}\n\nWe next discuss the Nichols algebras arising from the equivalence in \\cite{H}. \n First, \\ref{item:hiet-a} gives rise to the braiding\n\\begin{align}\\label{eq:braiding22a}\n(c'(x_i \\otimes x_j))_{i, j \\in \\I_2} = \\begin{pmatrix}\nk^2 x_1\\otimes x_1 & kq x_2\\otimes x_1 \\\\\nkp x_1\\otimes x_2 + (k^2-pq) x_2\\otimes x_1 & -pq x_2\\otimes x_2 \n\\end{pmatrix}.\n\\end{align}\n\nLet $N_1$ and $N_2$ be as above.\n\n\\begin{pro} If $ k^2 \\neq -1 $ and $ pq \\neq 1$, then there are no quadratic relations.\n\tOtherwise, the Nichols algebras are\n\t\\begin{align}\\label{eqn:Nichols2,2a}\n\t{\\mathcal B}(V) &=T(V)\/ \\langle x_2x_1 - kpx_1x_2, r_1, r_2\\rangle \n\t\\end{align}\nwhere $ r_i= x_i^{N_i}, \\, i\\in \\I_2$, only if $ N_i<\\infty $. 
Also $ \\{ x_2^{a_2}x_1^{a_1}: a_i \\in \\I_{0, N_i-1}\\}$ \nis a PBW-basis and $ \\GK {\\mathcal B}(V) = |\\{i\\in\\I_2: N_i =\\infty\\}|$; if 0, then $\\dim {\\mathcal B}(V) = N_1N_2$.\n\\end{pro}\n\n\\begin{proof} Similar to the proof of Proposition \\ref{prop:case2,1}.\n\\end{proof}\n\nSecond, \\ref{item:hiet-c} gives rise to the braiding\n\\begin{align}\\label{eq:braiding22c}\n(c''(x_i \\otimes x_j))_{i, j \\in \\I_2} = \\begin{pmatrix}\nk^2 x_1\\otimes x_1 & kp x_2\\otimes x_1 \\\\\nkq x_1\\otimes x_2 + (k^2-pq) x_2\\otimes x_1 & -pq x_2\\otimes x_2 \n\\end{pmatrix}.\n\\end{align}\nBut \\eqref{eq:braiding22c} is \\eqref{eq:braiding22a} up to $p \\leftrightarrow q$, so no new Nichols algebra arises here.\n\nThird, \\ref{item:hiet-a} composed with \\ref{item:hiet-c} gives the initial $ \\mathfrak{R}_{2, 2}$ \nup to $p \\leftrightarrow q$, so no new Nichols algebra arises.\n\n\n\n\\subsection{Case $ \\mathfrak{R}_{2, 3}$}\\label{subsec:hit23} We assume that $ k\\neq 0$, and either $p\\neq 0$, or $q\\neq 0$, or $s\\neq 0$. \nThe associated braiding is $(c(x_i \\otimes x_j))_{i, j \\in \\I_2} =$\n\\begin{align*}\n= \\begin{pmatrix}\nk x_1\\otimes x_1 & k x_2\\otimes x_1 + q x_1\\otimes x_1 \\\\\nk x_1\\otimes x_2 + p x_1\\otimes x_1 & k x_2\\otimes x_2 + s x_1\\otimes x_1 + p x_2\\otimes x_1 + q x_1\\otimes x_2\n\\end{pmatrix}.\n\\end{align*}\n\n\n\\begin{rem}\nThe braided vector spaces $\\mathcal{V}(\\epsilon, 2)$ considered in \\cite[\\S 1.2]{AAH} \nfit in this case taking $k=\\epsilon$, $q=1$ and $p = s = 0$. \nIn particular, the Jordan plane $\\mathcal{V}(1, 2)$ and the super Jordan plane \n$\\mathcal{V}(-1, 2)$ belong to this case. 
\n\\end{rem}\n\nTo state the next result, we need the notation \n\\begin{align*}\nx_{21} = [x_2, x_1]_c = x_2x_1 - \\mu (c(x_2\\otimes x_1)).\n\\end{align*}\n\n\\begin{table}[ht]\n\t\\caption{Nichols algebras of type $ \\mathfrak{R}_{2, 3} $}\\label{tab:h23}\n\t\\begin{center}\n\t\t\\begin{tabular}{| c | p{1cm} | c | p{4,9cm} | c | c |}\\hline\n\t\t\t$k$ & $p$ & $ s $ & $ {\\mathcal J}(V)$ & Basis &$\\GK$\n\t\t\t\\\\\\hline\n\t\t\t\\begin{small}\n\t\t\t\t$-1$ \\end{small} & $ -q $ & $ q^2 $ &$ \\langle x_1^2, x_2^2 - qx_1x_2, x_1x_2 + x_2x_1 \\rangle$ & $ (*_{2, 3})_1 $ \n\t\t\t& \\begin{small} $0, \\dim = 4$ \\end{small}\n\t\t\t\\\\\\cline{3-6}\n\t\t\t & & $ \\neq q^2 $ &$\\langle x_1^2, x_1x_2 + x_2x_1 \\rangle$ & $ (*_{2, 3})_2 $ & $1$\n\t\t\t\\\\\\cline{2-6}\n\t\t\t & \\begin{small} $ \\neq -q $\\end{small} & & \\begin{small} \n\t\t\t \t$\\langle x_1^2, x_2x_{21} + (p-q)x_1x_{21} - x_{21}x_2 \\rangle$\\end{small} & $ (*_{2, 3})_3 $& $2$\n\t\t\t\\\\\\hline\n\t\t\t$1$ & & &$\\langle \\dfrac{q-p}{2}x_1^2 - x_1x_2 +x_2x_1 \\rangle$ & $ (*_{2, 3})_4 $ & $2$\n\t\t\t\\\\\\hline \n\t\t\\end{tabular}\n\t\\end{center}\n\\end{table}\n\n\\begin{pro} \\label{prop:Nichols23}\nIf $k \\neq \\pm 1$, then there are no quadratic relations. \nOtherwise, the Nichols algebras are as in Table \\ref{tab:h23}, where\n\\begin{align*}\n (*_{2, 3})_1 &= \\{x_1^{a_1}x_2^{a_2}: 0\\leq a_i \\leq 1 \\} ;\\\\ \n(*_{2, 3})_2 &= \\{x_1^{a_1}x_2^{a_2}: 0\\leq a_1 \\leq 1, 0\\leq a_2 < \\infty \\};\\\\\n(*_{2, 3})_3 &= \\{x_1^{a}x_{21}^{b}x_2^{c}: 0\\leq a \\leq 1, 0\\leq b, c < \\infty \\}; \\\\\n(*_{2, 3})_4 &= \\{x_1^{a_1}x_2^{a_2}: 0\\leq a_i < \\infty \\}.\n\\end{align*}\n\\end{pro}\n\n\\begin{proof} Set $u = \\lambda_1x_1^2+\\lambda_2x_1x_2+\\lambda_3x_2x_1+\\lambda_4x_2^2$. 
Then\n\\begin{align*}\n\\Delta(u) &= u\\otimes 1 + 1\\otimes u + (\\lambda_1(1+k)+\\lambda_2q+\\lambda_3p+\\lambda_4s)x_1\\otimes x_1 \\\\\n&+ \\lambda_4(1+k)x_2\\otimes x_2 + (\\lambda_2k+\\lambda_3+\\lambda_4p)x_2\\otimes x_1 \\\\\n&+ (\\lambda_2+\\lambda_3k+\\lambda_4q)x_1\\otimes x_2.\n\\end{align*}\nThen all the assertions on quadratic relations hold. The claim in the \nfirst row of Table \\ref{tab:h23} follows then easily. \nTo simplify the discussion of the rest, we consider three cases:\n\\begin{enumerate}[leftmargin=*,label=\\rm{(\\roman*)}] \n\\item \\label{case1:23} $k=-1$, $ p=-q $ and $ s\\neq q^2 $;\n\\item \\label{case2:23} $k=-1$, $ p\\neq -q $;\n\\item \\label{case3:23} $k=1$.\n\\end{enumerate}\nCase \\ref{case1:23}: We first prove by induction that for $n\\geq 2$\n\\begin{align*}\n\\partial_1(x_2^n) &= \\begin{cases}\n\\dfrac{n}{2}qx_2^{n-1}+(\\dfrac{n(n-2)}{4}q^2 +\\dfrac{n}{2}s)x_1x_2^{n-2}, & \\textrm{if } n \\textrm{ is even};\\\\\n-\\dfrac{n-1}{2}qx_2^{n-1}+ \\Big(\\dfrac{(n-1)(n-3)}{4}q^2 & \\hspace{-20pt}+\\dfrac{n-1}{2}s\\Big)x_1x_2^{n-2}, \n\\\\& \\textrm{if } n \\textrm{ is odd}.\n\\end{cases} \\\\\n\\partial_2(x_2^n)&= \\begin{cases}\n-\\dfrac{n}{2}q x_1x_2^{n-2}, & \\textrm{if } n \\textrm{ is even};\\\\\nx_2^{n-1}-\\dfrac{n-1}{2}qx_1x_2^{n-2}, & \\textrm{if } n \\textrm{ is odd}.\n\\end{cases}\n\\end{align*}\nFrom this, we prove again by induction that, for $n\\geq 2$\n\\begin{align*}\n\\partial_1(x_1x_2^{n-1})&= \\begin{cases}\nx_2^{n-1}+\\dfrac{n}{2}qx_1x_2^{n-2}, & \\textrm{if } n \\textrm{ is even};\\\\\nx_2^{n-1}-\\dfrac{n-1}{2}qx_1x_2^{n-2}, & \\textrm{if } n \\textrm{ is odd}.\n\\end{cases}\\\\\n\\partial_2(x_1x_2^{n-1})&= \\begin{cases}\n-x_1x_2^{n-2} , & \\textrm{if } n \\textrm{ is even};\\\\\n0, & \\textrm{if } n \\textrm{ is odd}.\n\\end{cases}\n\\end{align*}\nLet $\\mathfrak{B} = T(V)\/ \\langle x_1^2, x_1x_2 + x_2x_1\\rangle$ and let $\\pi: \\mathfrak{B}\\to {\\mathcal B}(V)$\nbe the natural projection. 
A standard argument shows that $(*_{2, 3})_2$ linearly spans $\\mathfrak{B}$. \nAssume that the image of $(*_{2, 3})_2$ under $\\pi$ is not linearly independent\nand pick a homogeneous \nrelation of minimal degree $n > 2$\n\\begin{align*}\nr = c_1x_2^n + c_2x_1x_2^{n-1}.\n\\end{align*}\n\nAssume that $n$ is odd. Then\n\\begin{align*}\n0 &= \\partial_2(r) = c_1 \\left(x_2^{n-1}-\\dfrac{n-1}{2}qx_1x_2^{n-2}\\right)\n\\implies c_1=0 \\implies \\\\\n0 &= \\partial_1(r) = c_2\\left(x_2^{n-1}-\\dfrac{n-1}{2}qx_1x_2^{n-2}\\right) \\implies r = 0.\n\\end{align*}\nAssume that $n$ is even. Then\n\\begin{align*}\n0 = \\partial_1(r) &= c_1\\left(\\dfrac{n}{2}qx_2^{n-1}+\\left(\\dfrac{n(n-2)}{4}q^2 +\\dfrac{n}{2}s\\right) x_1x_2^{n-2}\\right)\\\\\n&+c_2\\left(x_2^{n-1}+\\dfrac{n}{2}qx_1x_2^{n-2}\\right)\\\\\n&= \\left(\\dfrac{n}{2}qc_1+c_2\\right)x_2^{n-1} + \\left(\\left(\\dfrac{n(n-2)}{4}q^2 +\\dfrac{n}{2}s\\right)c_1+\\dfrac{n}{2}qc_2\\right)x_1x_2^{n-2};\\\\\n0 = \\partial_2(r) &= -c_1\\dfrac{n}{2}q x_1x_2^{n-2}-c_2 x_1x_2^{n-2} =(-\\dfrac{n}{2}qc_1 -c_2)x_1x_2^{n-2}.\n\\end{align*}\nHence\n\\begin{align*}\n\\dfrac{nq}{2}c_1+c_2 &= 0, &\n\\left(\\dfrac{n(n-2)q^2}{4} +\\dfrac{ns}{2}\\right) c_1 + \\dfrac{nq}{2}c_2 &= 0.\n\\end{align*}\n\nThe system above has a nontrivial solution iff $s = q^2$; since $s \\neq q^2$ in the present case, $r = 0$. The claim in row 2 of Table \\ref{tab:h23} is established. \n\n\\smallbreak\n\nCase \\ref{case2:23}: By analogy with the super Jordan plane $\\mathcal{V}(-1, 2)$, \nwe look for cubic relations and obtain the following one by \\eqref{eq:deriv-criteria}:\n\\begin{align*}\n0=x_2^2x_1 + (p-q)x_1x_2x_1 - x_1x_2^2 = x_2x_{21} + (p-q)x_1x_{21} - x_{21}x_2.\n\\end{align*}\nLet $\\mathfrak{B} = T(V)\/ \\langle x_1^2, x_2x_{21} + (p-q)x_1x_{21} - x_{21}x_2\\rangle$. \nObserve that $x_1x_{21}= x_{21}x_1$ in $\\mathfrak{B}$. \nArguing as in \\cite{AAH}, that is using the commutation relations, \nwe see that $(*_{2, 3})_3$ is a system of linear generators of $\\mathfrak{B}$. 
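The derivations used here can also be computed mechanically on the tensor algebra: writing $c(x_j\otimes x_i)=\sum_{a,b}C^{a,b}_{j,i}\,x_a\otimes x_b$, the left derivations obey $\partial_a(x_jw)=\delta_{aj}\,w+\sum_{i,b}C^{a,b}_{j,i}\,x_b\,\partial_i(w)$. The sketch below (plain Python; the sample values $k=-1$, $p=2$, $q=3$, $s=5$ with $p\neq -q$ are our choice) checks that both derivations of the cubic land in $\Bbbk x_1^2$, hence vanish in ${\mathcal B}(V)$:

```python
from collections import defaultdict

# Braiding of R_{2,3}: c(x_j (x) x_i) = sum_{a,b} C[j,i][a,b] x_a (x) x_b,
# here with k = -1 and sample values p, q, s of our choosing (p != -q).
k, p, q, s = -1, 2, 3, 5
C = {
    (1, 1): {(1, 1): k},
    (1, 2): {(2, 1): k, (1, 1): q},
    (2, 1): {(1, 2): k, (1, 1): p},
    (2, 2): {(2, 2): k, (1, 1): s, (2, 1): p, (1, 2): q},
}

def derivative(a, elem):
    """Left derivation d_a on T(V); elem maps words (tuples over {1,2}) to coeffs.
    Rule: d_a(x_j w) = delta_{aj} w + sum_{i,b} C[j,i][a,b] * x_b d_i(w)."""
    out = defaultdict(int)
    for word, coeff in elem.items():
        if not word:
            continue                    # d_a(1) = 0
        j, w = word[0], word[1:]
        if a == j:
            out[w] += coeff
        for i in (1, 2):
            for w2, c2 in derivative(i, {w: 1}).items():
                for (aa, b), cc in C[(j, i)].items():
                    if aa == a:
                        out[(b,) + w2] += coeff * cc * c2
    return {wd: c for wd, c in out.items() if c}

# Cubic candidate r = x2 x2 x1 + (p - q) x1 x2 x1 - x1 x2 x2:
r = {(2, 2, 1): 1, (1, 2, 1): p - q, (1, 2, 2): -1}
d1, d2 = derivative(1, r), derivative(2, r)
# Both are supported on the single word x1 x1, which is already a relation,
# so d_1(r) = d_2(r) = 0 in B(V), confirming that r is a relation.
```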
\nWe need the formulae of the derivations on the elements of this generating set.\nFirst,\n\\begin{align*}\n\\partial_1(x_{21}^b)&=b(p+q)x_1x_{21}^{b-1}, &\\partial_2(x_{21}^b)&=0,& b&\\geq 1.\n\\end{align*}\nFor $i\\in \\I_2$, $c\\geq 0$, set\n\\begin{align*}\n\\partial_i(x_{2}^c) &=\\partial_{i,c}=\\partial_{i,c,0}+x_1\\partial_{i,c,1},&\n\\text{where } \\partial_{i,c,j} \\in \\Bbbk \\{x_{21}^bx_2^d: b,d \\ge 0\\}.\n\\end{align*}\n\nStraightforward calculations show that, for $ b\\geq 1$, $c\\geq 0$,\n\\begin{align}\n\\partial_1(x_{21}^bx_2^c)&=x_{21}^b(\\partial_{1,c}-2bq\\partial_{2,c})+b(p+q)x_1x_{21}^{b-1}x_2^c, \\\\\n\\partial_2(x_{21}^bx_2^c)&=x_{21}^{b}\\partial_{2,c},\\\\\n\\partial_1(x_1x_{21}^bx_2^c)&=x_{21}^bx_2^c-x_1x_{21}^b\\partial_{1,c,0}+(2b+1)qx_1x_{21}^b\\partial_{2,c,0}, \\\\\n\\partial_2(x_1x_{21}^bx_2^c)&=-x_1x_{21}^{b}\\partial_{2,c,0},\n\\\\\n\\label{eqn:superjordan}\n\\partial_{2,c,0} &= \\begin{cases}\n0, & \\textrm{if } c \\textrm{ is even},\\\\\nx_2^{c-1}, & \\textrm{if } c \\textrm{ is odd}.\n\\end{cases}\n\\end{align}\n\nAssume that the image of $(*_{2, 3})_3$ under the projection $\\pi: \\mathfrak{B}\\to {\\mathcal B}(V)$ is not linearly independent.\nPick a non-trivial homogeneous linear combination $r$ of minimal degree $N\\geq 4$. \n\nSuppose first that $ N $ is odd. 
Then there are scalars $\\lambda_{b}$, $\\mu_{t}$ such that\n\\begin{align*}\nr = \\sum_{0 \\le b \\le \\frac{N-1}{2}} \\lambda_{b}\\,x_{21}^bx_{2}^{N-2b} +\\sum_{0 \\le t \\le \\frac{N-1}{2}} \\mu_{t} \\, x_1x_{21}^{t}x_{2}^{N-1-2t}.\n\\end{align*}\nApplying $ \\partial_2 $ to $r$, we obtain\n\\begin{align*}\n0 &= \\sum_{0 \\le b \\le \\frac{N-1}{2}} \\lambda_{b}x_{21}^b\\partial_{2,N-2b} -\\sum_{0 \\le t \\le \\frac{N-1}{2}} \\mu_{t}x_1x_{21}^{t}\\partial_{2,N-1-2t,0} \\\\\n&= \\sum_{0 \\le b \\le \\frac{N-1}{2}} \\lambda_{b}x_{21}^b(\\partial_{2,N-2b,0} + x_1\\partial_{2,N-2b,1}) -\\sum_{0 \\le t \\le \\frac{N-1}{2}} \\mu_{t}x_1x_{21}^{t}\\partial_{2,N-1-2t,0}\\\\\n&\\stackrel{\\eqref{eqn:superjordan}}{=} \\sum_{0 \\le b \\le \\frac{N-1}{2}} \\lambda_{b}x_{21}^bx_2^{N-1-2b} + \\lambda_{b}x_1x_{21}^b\\partial_{2,N-2b,1}.\n\\end{align*}\nFrom this, we see that $\\lambda_{b}= 0$, $b= 0, 1, \\cdots, \\frac{N-1}{2}$. Therefore\n\\begin{align*}\n0 &= \\partial_1(r) \\stackrel{\\eqref{eqn:superjordan}}{=} \\sum_{0 \\le t \\le \\frac{N-1}{2}} \\mu_{t}(x_{21}^{t}x_2^{N-1-2t}-x_1x_{21}^{t}\\partial_{1,N-1-2t,0});\n\\end{align*}\nhence $r=0$. \n\nAssume next that $N$ is even. 
Then there are scalars $\\lambda_{b}$, $\\mu_{t}$ such that\n\\begin{align*}\nr = \\sum_{0 \\le b \\le \\frac{N}{2}} \\lambda_{b}x_{21}^bx_{2}^{N-2b} +\\sum_{0 \\le t \\le \\frac{N -2}{2}} \\mu_{t}x_1x_{21}^{t}x_{2}^{N-1-2t}.\n\\end{align*}\nThus\n\\begin{align}\\label{eqn:1superjordan}\n\\begin{split}\n0 &= \\partial_2(r) \\stackrel{\\eqref{eqn:superjordan}}{=} \\sum_{0 \\le b \\le \\frac{N}{2}} \\lambda_{b}x_1x_{21}^b\\partial_{2,N-2b,1} -\\sum_{0 \\le t \\le \\frac{N -2}{2}} \\mu_{t}x_1x_{21}^{t}x_2^{N-2-2t}\\\\\n&= \\sum_{0 \\le b \\le \\frac{N -2}{2}} \\lambda_{b}x_1x_{21}^b\\partial_{2,N-2b,1} -\\sum_{0 \\le t \\le \\frac{N -2}{2}} \\mu_{t}x_1x_{21}^{t}x_2^{N-2-2t}.\n\\end{split}\n\\end{align}\nApplying $\\partial_1$ to $r$, we obtain for some $z \\in {\\mathcal B}^{N-2}(V)$\n\\begin{align*}\n0&= \\sum_{0 \\le b \\le \\frac{N}{2}} \\lambda_{b}x_{21}^b(\\partial_{1,N-2b,0}-2bq\\partial_{2,N-2b,0}) +\\sum_{0 \\le t \\le \\frac{N -2}{2}} \\mu_{t}x_{21}^{t}x_{2}^{N-1-2t}+x_1z\\\\\n&\\stackrel{\\eqref{eqn:superjordan}}{=} \\sum_{0 \\le b \\le \\frac{N -2}{2}} \\lambda_{b}x_{21}^b\\partial_{1,N-2b,0} +\\sum_{0 \\le t \\le \\frac{N -2}{2}} \\mu_{t}x_{21}^{t}x_{2}^{N-1-2t}+x_1 z.\n\\end{align*}\nIn particular,\n\\begin{align}\\label{eqn:2superjordan}\n0&= \\sum_{0 \\le b \\le \\frac{N -2}{2}} \\lambda_{b}x_{21}^b\\partial_{1,N-2b,0} +\\sum_{0 \\le t \\le \\frac{N -2}{2}} \\mu_{t}x_{21}^{t}x_{2}^{N-1-2t}.\n\\end{align}\nAlso observe that if $n\\geq 2$ is even, then we obtain that for some $w_i\n\\in \\Bbbk \\{x_{21}^bx_2^d: b,d \\ge 0, \\, b+d =n+i-5\\}$, $ i\\in \\I_2 $, \n\\begin{align*}\n\\partial_{2,n,1}&= \\frac{n}{2}px_{2}^{n-2} +x_{21} w_1, &\\partial_{1,n,0}&= \\frac{n}{2}qx_{2}^{n-1} +x_{21} w_2.\n\\end{align*}\nLooking at the terms $ x_1x_2^{N-2} $ in \\eqref{eqn:1superjordan} and $ x_2^{N-1} $\nin \\eqref{eqn:2superjordan}, we get \n\\begin{align*}\n\\begin{cases}\n\\frac{N}{2}p\\lambda_{0} - \\mu_{0} =0\\\\\n\\frac{N}{2}q\\lambda_{0} + \\mu_{0} 
=0\n\\end{cases}\n\\end{align*}\nwhose determinant is $ \\frac{N}{2}(p+q)\\neq 0 $. Thus $ \\lambda_{0} = \\mu_{0} =0 $. Similarly, we prove that \n$\\lambda_{i} = \\mu_{i} =0$ for $i=0, 1, \\cdots, \\frac{N-2}{2} $. Only $\\lambda_{\\frac{N}{2}}$ remains, but \n\\begin{align*}\n\\partial_1(x_{21}^{\\frac{N}{2}})= \\frac{N(p+q)}{2}x_1x_{21}^{\\frac{N-2}{2}}\n\\end{align*} \nhence $r=0$. \n\n\\smallbreak\nCase \\ref{case3:23}: \nWe start with the following claim, whose proof is straightforward:\n\t\\begin{align}\\label{eqn:1of2,3}\nc(x_1^n\\otimes x_2)&= x_2\\otimes x_1^n +nq x_1\\otimes x_1^n,\\quad n\\geq 1.\n\t\\end{align}\nLet $\\mathfrak{B} = T(V)\/ \\langle \\frac{q-p}{2}x_1^2 - x_1x_2 +x_2x_1\\rangle$. \nBy a standard argument, $(*_{2, 3})_4$ linearly spans $\\mathfrak{B}$. \nWe need the formulae of the derivations on $(*_{2, 3})_4$.\nFirst, we set \n\\begin{align*}\n\\partial_2(x_2^n) &= \\sum_{0\\le j\\le n-1} d_{j}^{(n-1)} x_1^jx_2^{n-1-j}, \\quad n\\geq 1,\n\\end{align*}\nand claim that \n\\begin{align}\\label{eq:3of2.3}\nd_{0}^{(n-1)}&=n, \\quad n\\geq 1.\n\\end{align}\nIndeed, the case $ n=1 $ is clear. 
Assume that \\eqref{eq:3of2.3} holds for $n$.\nProjecting $\\Delta(x_2^{n+1})$ to $V \\otimes {\\mathcal B}^{n}(V)$, we get\n\\begin{align*}\n\\sum_{i \\in \\I_2} x_i\\otimes \\partial_i(x_2^{n+1})&= (x_2\\otimes 1)(1\\otimes x_2^n) + (1\\otimes x_2)(x_1\\otimes \\partial_1(x_2^n) + x_2\\otimes \\partial_2(x_2^n))\\\\\n&= x_1\\otimes (x_2\\partial_1(x_2^n)+px_1\\partial_1(x_2^n)+sx_1\\partial_2(x_2^n)+qx_2\\partial_2(x_2^n)) \\\\\n&+x_2\\otimes (x_2^n+x_2\\partial_2(x_2^n)+px_1\\partial_2(x_2^n)).\n\\end{align*}\nHence, by the inductive hypothesis,\n\\begin{align*}\n\\partial_2(x_2^{n+1})&= x_2^n+x_2\\partial_2(x_2^n)+px_1\\partial_2(x_2^n) \\\\\n&= x_2^n+x_2(nx_2^{n-1}+\\sum_{1\\le j\\le n-1} d_{j}^{(n-1)} x_1^jx_2^{n-1-j})+px_1\\partial_2(x_2^n) \\\\\n&= (n+1)x_2^n +x_1(x_2+\\dfrac{p-q}{2}x_1)\\sum_{1\\le j\\le n-1} d_{j}^{(n-1)} x_1^{j-1}x_2^{n-1-j} \\\\\n&+px_1\\partial_2(x_2^n),\n\\end{align*}\nand the claim is proved.\n\nObserve that, by \\eqref{eqn:1of2,3}, for $ n\\geq 3 $\n\\begin{align}\\label{eq:1of2.3}\n\\partial_2(x_1^ix_2^{n-i})&=\\begin{cases}\n0, & \\textrm{if } i=n;\\\\\nx_1^{i}\\partial_2(x_2^{n-i}), & \\textrm{if } 0\\leq i2$. By \\eqref{eq:1of2.3}\n\\begin{align}\\label{eq:2of2.3}\n0 = \\partial_2(r) = \\sum_{0\\le i\\le n-1} c_i x_1^{i}\\partial_2(x_2^{n-i}).\n\\end{align}\nObserve that the term $ x_2^{n-1} $ appears only one time in \\eqref{eq:2of2.3}. Furthermore, by \\eqref{eq:3of2.3}, we can rewrite \\eqref{eq:2of2.3} as\n\\begin{align}\\label{eq:4of2.3}\n0= \\partial_2(r) = c_0 nx_2^{n-1} + \\sum_{1\\le j\\le n-1}m_jx_1^jx_2^{n-1-j} \n\\end{align}\nfor some $ m_j\\in \\Bbbk $. By minimality of $n$, we obtain $ c_0 = 0 $. Similarly, we can replace \\eqref{eq:4of2.3} by\n\\begin{align}\n0= \\partial_2(r) = c_1 (n-1)x_1x_2^{n-2} + \\sum_{2\\le j\\le n-1}m_j'x_1^jx_2^{n-1-j} \n\\end{align}\nwhat gives us $ c_1=0 $. Inductively, we get $ c_i = 0, \\, 0\\leq i 2$. 
By \\eqref{eq:1of1.3},\n\\begin{align}\\label{eq:2of1.3}\n0 = \\partial_2(r) = \\sum_{0\\le i\\le n-1} c_i x_1^{i}\\partial_2(x_2^{n-i}).\n\\end{align}\nLooking at the monomials $ x_2^{n-i} $in \\eqref{eq:2of1.3} as in case \\ref{case3:23} of the proof of Proposition \\ref{prop:Nichols23}, we get $ c_i = 0, \\, 0\\leq i 0.\n\\end{align*}\n\nThe claim is proved. Then we check, by an inductive argument, that \n\\begin{align*}\n\\partial_1(x_1^a) &= \\begin{cases}\n(\\frac{a-2}{2})_k \\, x_1^{a-1}, \\qquad \\,\\, & \\textrm{if } a \\textrm{ is even;}\\\\\n(\\frac{a-1}{2})_k \\, x_1^{a-1}, & \\textrm{if } a \\textrm{ is odd.}\n\\end{cases} \\\\\n\\partial_2(x_1^a) &= \\begin{cases}\nq(\\frac{a-2}{2})_k \\, x_1^{a-2}x_2, & \\textrm{if } a \\textrm{ is even;}\\\\\nq(\\frac{a-3}{2})_k \\, x_1^{a-3}x_2x_1, & \\textrm{if } a \\textrm{ is odd, } a\\geq 3;\\\\\n0, & \\textrm{if } a =1,\n\\end{cases}\n\\end{align*}\n$ a\\in \\mathbb{N}$.\nAlso, we have that, for $ b\\geq 1$\n\\begin{align*}\n\\partial_1((x_2x_1)^b) &= 0, && \\partial_2((x_2x_1)^b) = (2b-1)_k \\, x_1(x_2x_1)^{b-1}.\n\\end{align*}\nTherefore, for $ a,b\\geq 1 $ \n\\begin{align*}\n\\partial_1(x_1^a(x_2x_1)^b) &= \\begin{cases}\n(\\frac{a-2}{2})_k \\, x_1^{a-1}(x_2x_1)^b, & \\textrm{if } a \\textrm{ is even;}\\\\\n(\\frac{a-1}{2}+2b)_k \\, x_1^{a-1}(x_2x_1)^b, & \\textrm{if } a \\textrm{ is odd.}\n\\end{cases} \\\\\n\\partial_2(x_1^a(x_2x_1)^b) &= \\begin{cases}\n(\\frac{a-2}{2}+2b)_k \\, x_1^{a+1}(x_2x_1)^{b-1}, & \\textrm{if } a \\textrm{ is even;}\\\\\nq(\\frac{a-3}{2})_k \\, x_1^{a-3}(x_2x_1)^{b+1}, & \\textrm{if } a \\textrm{ is odd, } a\\geq 3;\\\\\n0, & \\textrm{if } a =1.\n\\end{cases}\n\\end{align*}\nand then\n\\begin{align*}\n\\partial_1(x_1^a(x_2x_1)^bx_2) &= \\begin{cases}\n\\partial_1(x_1^a(x_2x_1)^b)x_2, & \\textrm{if } a \\textrm{ is even;}\\\\\n(\\frac{a+1}{2}+2b)_k \\, x_1^{a-1}(x_2x_1)^bx_2, & \\textrm{if } a \\textrm{ is odd.}\n\\end{cases} \\\\\n\\partial_2(x_1^a(x_2x_1)^bx_2) &= 
\\begin{cases}\n(\\frac{a}{2}+2b)_k \\, x_1^{a+1}(x_2x_1)^{b-1}x_2, & \\textrm{if } a \\textrm{ is even;}\\\\\n\\partial_2(x_1^a(x_2x_1)^b)x_2, & \\textrm{if } a \\textrm{ is odd, } a\\geq 3;\\\\\n0, & \\textrm{if } a =1.\n\\end{cases}\n\\end{align*}\nWe then proceed as in the previous case. Namely, let $r \\in \\ker (\\widetilde{{\\mathcal B}} \\to {\\mathcal B}(V))$\nbe an homogeneous relation of degree $n$ with $n\\geq 3$ minimal.\nWe consider separately the cases $n$ odd and $n$ even.\nUsing the derivations, we see that the relations $r_{1,N}$ and $r_{2,N}$ hold.\nAlso, any other relation arises in higher degree.\nIn this way, rows $3$ and $4$ of Table \\ref{tab:h14} are established.\n\\end{proof}\n\n\nWe finally discuss the Nichols algebras arising from the equivalence in \\cite{H}. \nFirst, \\ref{item:hiet-a} and \\ref{item:hiet-a} composed with \\ref{item:hiet-c} give rise to the same braiding\n\\begin{align*}\n(c'(x_i \\otimes x_j))_{i, j \\in \\I_2} = \\begin{pmatrix}\nq x_2\\otimes x_2 & k x_1\\otimes x_2 \\\\\nk x_2\\otimes x_1 & p x_1\\otimes x_1\n\\end{pmatrix}\n\\end{align*}\nwhich is $ \\mathfrak{R}_{1, 4}$ up to $p \\leftrightarrow q$, so no new Nichols algebra arises.\n\nSecond, \\ref{item:hiet-c} gives the initial $ \\mathfrak{R}_{1, 4}$, so no new Nichols algebra arises.\n\n\n\n\n\\subsection{Case $ \\mathfrak{R}_{0, 1}$}\\label{subsec:hit01} The associated braiding is\n\\begin{align*}\n(c(x_i \\otimes x_j))_{i, j \\in \\I_2} = \\begin{pmatrix}\nkx_1\\otimes x_1 & -kx_2\\otimes x_1 \\\\\n-kx_1\\otimes x_2 & kx_2\\otimes x_2 + kx_1\\otimes x_1\n\\end{pmatrix}.\n\\end{align*}\n\n\\begin{pro} \nIf $k^2 \\neq 1$, then there are no quadratic relations. 
\n\n\\smallbreak\nIf $k=1$, then \n\\begin{align}\\label{eqn:Nichols10,k1}\n{\\mathcal B}(V) =T(V)\/ \\langle x_1x_2 + x_2x_1 \\rangle,\n\\end{align}\n$B_1 = \\{x_1^{a_1}x_2^{a_2}: a_i \\in \\mathbb{N}_0 \\}$ is a PBW-basis of ${\\mathcal B}(V)$ and \n$\\GK {\\mathcal B}(V) = 2$.\n\n\\smallbreak\nIf $k=-1$, then \n\\begin{align}\\label{eqn:Nichols10,k-1}\n{\\mathcal B}(V) =T(V)\/ \\langle x_1^2, x_1x_2 - x_2x_1 \\rangle,\n\\end{align}\n$B_2 = \\{x_1^{a_1}x_2^{a_2}: a_1 \\in \\I_{0,1}, \\, a_2 \\in \\mathbb{N}_0 \\}$ is a PBW-basis of $ {\\mathcal B}(V) $. Hence \\newline\n$\\GK {\\mathcal B}(V) = 1$.\n\\end{pro}\n\n\\begin{proof} Set $u = \\lambda_1x_1^2+\\lambda_2x_1x_2+\\lambda_3x_2x_1+\\lambda_4x_2^2$. Then\n\\begin{align*}\n\\Delta(u) &= u\\otimes 1 + 1\\otimes u + ((1+k)\\lambda_1+k\\lambda_4)x_1\\otimes x_1 +(1+k)\\lambda_4 x_2\\otimes x_2 \\\\\n&+ (\\lambda_2-k\\lambda_3)x_1\\otimes x_2 + (\\lambda_3-k\\lambda_2)x_2\\otimes x_1.\n\\end{align*}\nFrom here, all assertions about quadratic relations follow. \n\nCase $k =1$: Let $\\mathfrak{B} = T(V)\/ \\langle x_1x_2 + x_2x_1 \\rangle$. \nBy a standard argument, $B_1$ is a basis of $\\mathfrak{B}$. Note that, for $n\\geq 2$\n\\begin{align*}\n\\partial_1(x_2^n)&= \\binom{n}{2} x_1x_2^{n-2}, & \\partial_2(x_2^n)&=n x_2^{n-1}.\n\\end{align*}\nLet $ n\\geq 3 $ and $i \\in \\I_{2, n-2}$. Then\n\\begin{align}\\label{eqn:prop:0.1}\n\\begin{split}\n\\partial_1(x_1^ix_2^{n-i})&= \\binom{n-i}{2} x_1^{i+1}x_2^{n-i-2}+ix_1^{i-1}x_2^{n-i}, \\\\\n\\partial_2(x_1^ix_2^{n-i})&=(-1)^i (n-i) x_1^ix_2^{n-i-1}.\n\\end{split}\n\\end{align}\nObserve that the formulae \\eqref{eqn:prop:0.1} also hold for $i\\in \\{0, 1, n-1, n\\} $ setting to $0$ \nthe terms that are not well defined. \n\nAssume that the image of $B_1$ under the projection $\\pi: \\mathfrak{B}\\to {\\mathcal B}(V)$ is not linearly independent. \nPick $0 \\neq r = \\sum_{i=0}^N c_i x_1^ix_2^{N-i}$ a homogeneous relation of minimal degree $ N>2$. 
Thus\n\\begin{align*}\n0 &= \\partial_2(r) = \\sum_{i=0}^{N-1} c_i (-1)^i(N-i) x_1^ix_2^{N-i-1}.\n\\end{align*}\nThen, $ c_i = 0 $ for $i \\in \\I_{0, N-1}$. But $ \\partial_1(x_1^N) = N x_1^{N-1} $, hence $r = 0$.\n\n\\smallbreak\nCase $k = -1$: Let $\\mathfrak{B} = T(V)\/ \\langle x_1^2, x_1x_2 - x_2x_1 \\rangle$. \nBy a standard argument, $B_2$ is a basis of $\\mathfrak{B}$. Note that, for $n\\geq 2$\n\\begin{align*}\n\\partial_1(x_2^n) &= \\begin{cases}\n-\\frac{n}{2} x_1x_2^{n-2}, & \\textrm{if } n \\textrm{ is even;}\\\\\n-\\frac{n-1}{2} x_1x_2^{n-2}, & \\textrm{if } n \\textrm{ is odd.}\n\\end{cases}, & \\partial_2(x_2^n)&=\\begin{cases}\n0, & \\textrm{if } n \\textrm{ is even;}\\\\\nx_2^{n-1}, & \\textrm{if } n \\textrm{ is odd.}\n\\end{cases}.\n\\end{align*}\nThen, for $n\\geq 2$,\n\\begin{align*}\n\\partial_1(x_1x_2^n) &= x_2^{n}, & \\partial_2(x_1x_2^n)&=\\begin{cases}\n0, & \\textrm{if } n \\textrm{ is even;}\\\\\nx_1x_2^{n-1}, & \\textrm{if } n \\textrm{ is odd.}\n\\end{cases}.\n\\end{align*}\n\n\nSuppose that the image of $B$ under the projection $\\pi: \\mathfrak{B}\\to {\\mathcal B}(V)$ is not linearly independent. \nPick $r = c_1 x_2^{N} + c_2 x_1x_2^{N-1}$ a homogeneous non-trivial relation of minimal degree $ N>2$. Applying $ \\partial_1 $ to $ r $, we obtain $r = 0$.\n\\end{proof}\n\nWe next discuss the Nichols algebras arising from the equivalence in \\cite{H}. 
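The inductive steps behind the derivative formulas in the case $k=1$ above can be mechanized: in $\mathfrak{B}$ one has $x_2x_1=-x_1x_2$, and projecting $\Delta(x_2^{n+1})$ onto $V\otimes{\mathcal B}^n(V)$ yields the recursions $\partial_2(x_2^{n+1})=x_2^n+x_2\partial_2(x_2^n)$ and $\partial_1(x_2^{n+1})=-x_2\partial_1(x_2^n)+x_1\partial_2(x_2^n)$. A small Python sketch (helper names ours) confirming the closed forms $\partial_2(x_2^n)=nx_2^{n-1}$ and $\partial_1(x_2^n)=\binom{n}{2}x_1x_2^{n-2}$:

```python
from math import comb

# Basis monomials x1^a x2^b of the k = 1 quotient are encoded as dicts
# {(a, b): coeff}; the relation x1 x2 + x2 x1 = 0 gives x2 * x1^a = (-1)^a x1^a x2.

def x2_mul(e):                       # left multiplication by x2
    return {(a, b + 1): (-1) ** a * c for (a, b), c in e.items()}

def x1_mul(e):                       # left multiplication by x1
    return {(a + 1, b): c for (a, b), c in e.items()}

def neg(e):
    return {m: -c for m, c in e.items()}

def add(e1, e2):
    out = dict(e1)
    for m, c in e2.items():
        out[m] = out.get(m, 0) + c
    return {m: c for m, c in out.items() if c}

d1, d2 = {}, {(0, 0): 1}             # d1(x2) = 0 and d2(x2) = 1
for n in range(1, 10):
    # closed forms: d2(x2^n) = n x2^{n-1},  d1(x2^n) = binom(n, 2) x1 x2^{n-2}
    assert d2 == {(0, n - 1): n}
    assert d1 == ({(1, n - 2): comb(n, 2)} if n >= 2 else {})
    # inductive step from the coproduct projection
    d1 = add(neg(x2_mul(d1)), x1_mul(d2))
    d2 = add({(0, n): 1}, x2_mul(d2))
```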
\nFirst, \\ref{item:hiet-a}, \\ref{item:hiet-b} and \\ref{item:hiet-a} composed with \\ref{item:hiet-c} give rise to the same braiding\n\\begin{align*}\n(c'(x_i \\otimes x_j))_{i, j \\in \\I_2} = \\begin{pmatrix}\nkx_1\\otimes x_1 + kx_2\\otimes x_2 & -kx_2\\otimes x_1 \\\\\n-kx_1\\otimes x_2 & kx_2\\otimes x_2 \n\\end{pmatrix}.\n\\end{align*}\nSince \\ref{item:hiet-b} is a change of basis, the Nichols algebras are isomorphic.\n\nFinally, \\ref{item:hiet-c} gives the initial braiding\n$ \\mathfrak{R}_{0, 1}$, so no new Nichols algebra arises.\n\n\n\\section{Appendix}\n\nHere we collect all isomorphism classes of algebras arising as Nichols algebras in Theorem \\ref{th:main}, \nsee the information in Table \\ref{tab:general}. In many cases a change of variables is needed, and we leave to the reader its explicit calculation. All the algebras are of the form $T(W)\/ {\\mathcal J}$, where $W$ has a basis $y_1, y_2$. \n\n\\newcommand{\\zeta}{\\zeta}\n\\newcommand{\\eta}{\\eta}\n\\begin{table}[ht]\n\t\\caption{Algebras arising as Nichols algebras of rank 2 }\\label{tab:appendix}\n\t\\begin{center}\n\t\t\\begin{tabular}{|p{3cm} | p{5cm} | p{3cm} | } \n\t\\hline\tAlgebra & ${\\mathcal J}$ \t& Parameters\t\\\\\t\\hline\n\\begin{small}quantum plane \\end{small} &\t$\\langle y_1y_2 - \\zeta y_2 y_1 \\rangle$\t\t& $\\zeta \\in \\Bbbk^{\\times}$ \n\t\t\t\\\\ \\hline\n\\begin{small}quantum plane \\end{small} &\t$\\langle y_1^M, y_1y_2 - \\zeta y_2 y_1 \\rangle$\t\t& $\\zeta \\in \\Bbbk^{\\times}$,\\newline $M \\in \\mathbb{N}_{\\ge2}$ \n\\\\ \\hline\n\\begin{small}quantum plane \\end{small} &\t$\\langle y_1^M, y_1y_2 - \\zeta y_2 y_1, y_2^N \\rangle$\t\t& $\\zeta \\in \\Bbbk^{\\times}$, \\newline $M, N \\in \\mathbb{N}_{\\ge2}$ \n\\\\ \\hline\n\\begin{small}deformation of\\newline a quantum plane \\end{small} &\t$\\langle y_2^2 - \\zeta y_1^2, y_1y_2 -\\eta y_2 y_1, y_1^N \\rangle$\t\t& $\\eta, \\zeta \\in \\Bbbk^{\\times}$, \\newline\n$N \\in \\mathbb{N}_{\\ge 3}$ \n\\\\ 
\\hline\n\\begin{small}deformation of\\newline a quantum plane \\end{small} &\t$\\langle y_2^2 - \\zeta y_1^2, y_1y_2 -\\eta y_2 y_1 \\rangle$\t\t& $\\eta, \\zeta \\in \\Bbbk^{\\times}$ \n\\\\ \\hline\n\\begin{small}deformation of\\newline an exterior algebra \\end{small}&\t$\\langle y_1^2, y_2^2 - y_1 y_2, y_1 y_2 + y_2 y_1 \\rangle$\t\t& \n\\\\ \\hline\n&\t$\\langle y_1^2 - \\zeta y_2^2, y_1y_2 + \\epsilon y_2 y_1, y_1y_2^N \\rangle$\t\t& $\\zeta \\in \\Bbbk^{\\times}$, \n $\\epsilon \\in \\mathbb{G}_2$, \\newline $N \\in \\mathbb{N}_{\\ge2}$\n\\\\ \\hline\n\\begin{small}Jordan plane \\end{small} &\t$\\langle y_1^2 -y_1y_2 + y_2 y_1 \\rangle$\t\t& \n\\\\ \\hline\n\\begin{small} super Jordan plane \\end{small} &\t$\\langle y_1^2, y_2^2y_1 - y_1 y_2 y_1 - y_1 y_2^2 \\rangle$ & \n\\\\ \\hline\n&\t$\\langle y_1 y_2, y_2 y_1 \\rangle$\t\t&\n\\\\ \\hline\n &\t$\\langle y_1 y_2, y_2 y_1, y_1^{2N}-\\zeta y_2^{2N} \\rangle$\t\t& $\\zeta \\in \\Bbbk^{\\times}$, $N \\in \\mathbb{N}$ \n\\\\ \\hline\n &\t$\\langle y_1^2 - \\zeta y_2^2 \\rangle$\t& $\\zeta \\in \\Bbbk^{\\times}$ \n\\\\ \\hline\n &\t$\\langle y_1^2 - \\zeta y_2^2, r_{1,N}, r_{2,N} \\rangle$, \\newline\n cf. 
\\eqref{eqn:relation1of1.4}, \\eqref{eqn:relation2of1.4}\t\t& $\\zeta \\in \\Bbbk^{\\times}$, $N \\in \\mathbb{N}_{\\ge 3}$ \n\\\\ \\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\\end{table}\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbkbi b/data_all_eng_slimpj/shuffled/split2/finalzzbkbi new file mode 100644 index 0000000000000000000000000000000000000000..9dbbff7af18f3882f185859a32510ce7ab5c0e30 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbkbi @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nA finite configuration of points on the unit sphere $S^{n-1}$ in $\\mathbb{R}^n$\nis \\emph{balanced} if it is in equilibrium (possibly unstable) under\nall pairwise forces depending only on distance, assuming the points are\nconfined to the surface of the sphere. In other words, the net forces\nacting on the points are all orthogonal to the sphere. As is usual in\nphysics, any two distinct particles exert forces on each other,\ndirected oppositely and with magnitude equal to some fixed function of\nthe Euclidean distance between them. The net force on each point is the\nsum of the contributions from the other points.\n\nFor example, the vertices of any regular polyhedron are balanced. On\nthe other hand, most configurations are not balanced. Even if some\npoints are in equilibrium under one force law, there is no reason to\nexpect that they will be in equilibrium under every force law, and\nusually they will not be. The balanced configurations are quite\nremarkable.\n\nThe condition of being balanced was defined by Leech in \\cite{Leech}.\nIt arises in the search for energy-minimizing point configurations on\nspheres. Given a potential function, typically an inverse-power law,\nhow should we arrange some particles to minimize the total potential\nenergy? This problem originated in Thomson's model of the atom in\n\\cite[p.~255]{T}. 
Of course, that model was superseded by quantum\nmechanics, but it remains of considerable mathematical interest. It\nprovides a natural measure of how well distributed points are on the\nsurface of the sphere, and it also offers the possibility of\ncharacterizing important or beautiful configurations via extremal\nproperties.\n\nIn most cases the optimal configuration depends on the potential\nfunction, but occasionally it does not. In \\cite{CK}, Cohn and Kumar\nintroduced the concept of \\emph{universally optimal} configurations,\nwhich minimize energy not only under all inverse-power laws but also\nunder the broader class of completely monotonic potential functions (as\nfunctions of squared Euclidean distance). In $\\mathbb{R}^2$ the vertices of any\nregular polygon form a universally optimal configuration. The vertex\nsets of the regular tetrahedron, octahedron, or icosahedron are\nuniversally optimal, but there are no larger examples in $\\mathbb{R}^3$.\nHigher-dimensional examples include the vertices of the regular simplex\nand cross polytope (or hyperoctahedron), and also various exceptional\nexamples, notably the vertices of the regular $600$-cell, the $E_8$\nroot system, the Schl\\\"afli configuration of $27$ points in $\\mathbb{R}^6$\ncorresponding to the $27$ lines on a cubic surface, and the minimal\nvectors of the Leech lattice. A number of the sporadic finite simple\ngroups act on universal optima. See Tables~1 and~2 in \\cite{BBCGKS} for\na list of the known and conjectured universal optima, as well as a\ndiscussion of how many more there might be. (They appear to be quite\nrare.)\n\nEvery universal optimum is balanced (as we will explain below), but\nbalanced configurations do not necessarily minimize energy even\nlocally. In the space of configurations, balanced configurations are\nuniversal critical points for energy, but they are frequently saddle\npoints. 
For example, the vertices of a cube are balanced but one can\nlower the energy by rotating the vertices of a facet. Nevertheless,\nbeing balanced is an important necessary condition for universal\noptimality.\n\nThe simplest reason a configuration would be balanced is due to its\nsymmetry: the net forces inherit this symmetry, which can constrain\nthem to point orthogonally to the sphere. More precisely, call a finite\nsubset $\\mathcal{C} \\subset S^{n-1}$ \\emph{group-balanced} if for every\n$x \\in \\mathcal{C}$, the stabilizer of $x$ in the isometry group of\n$\\mathcal{C}$ fixes no vectors in $\\mathbb{R}^n$ other than the multiples of\n$x$. A group-balanced configuration must be balanced, because the net\nforce on $x$ is invariant under the stabilizer of $x$ and is thus\northogonal to the sphere.\n\nIn his 1957 paper \\cite{Leech}, Leech completely classified the\nbalanced configurations in $S^2$. His classification shows that they\nare all group-balanced, and in fact the complete list can be derived\neasily from this assertion using the classification of finite subgroups\nof $O(3)$. However, Leech's proof is based on extensive case analysis,\nand it does not separate cleanly in this way. Furthermore, the\ntechniques do not seem to apply to higher dimensions.\n\nIt is natural to wonder whether all balanced configurations are\ngroup-balanced in higher dimensions. If true, that could help explain\nthe symmetry of the known universal optima. However, in this paper we\nshow that balanced configurations need not be group-balanced. Among\nseveral counterexamples, we construct a configuration of $25$ points in\n$\\mathbb{R}^{12}$ that is balanced yet has no nontrivial symmetries.\n\nThis result is compatible with the general philosophy that it is\ndifficult to find conditions that imply symmetry in high dimensions,\nshort of simply imposing the symmetry by fiat. 
We prove that if a\nconfiguration is a sufficiently strong spherical design, relative to\nthe number of distances between points in it, then it is automatically\nbalanced (see Theorem~\\ref{theorem:main}). Every spectral embedding of\na strongly regular graph satisfies this condition (see\nSection~\\ref{section:srg}). There exist strongly regular graphs with no\nnontrivial symmetries, and their spectral embeddings are balanced but\nnot group-balanced.\n\nBefore we proceed to the proofs, it is useful to rephrase the condition\nof being balanced as follows: a configuration $\\mathcal{C}$ is balanced\nif and only if for every $x \\in \\mathcal{C}$ and every real number $u$,\nthe sum $S_u(x)$ of all $y \\in \\mathcal{C}$ whose inner product with\n$x$ is $u$ is a multiple of~$x$. The reason is that the contribution to\nthe net force on~$x$ from the particles at a fixed distance is in the\nspan of $x$ and $S_u(x)$. Since we are using arbitrary force laws, each\ncontribution from a fixed distance must independently be orthogonal to\nthe sphere (since we can weight them however we desire). Note that a\ngroup-balanced configuration $\\mathcal{C}$ clearly satisfies this\ncriterion: for every $x \\in \\mathcal{C}$ and every real number $u$, the\nsum $S_u(x)$ is itself fixed by the stabilizer of $x$ and hence must be\na multiple of $x$.\n\nAn immediate consequence of this characterization of balanced\nconfigurations is that it is easy to check whether a given\nconfiguration is balanced. By contrast, it seems difficult to check\nwhether a configuration is universally optimal. For example, the paper\n\\cite{BBCGKS} describes a $40$-point configuration in $\\mathbb{R}^{10}$ that\nappears to be universally optimal, but so far no proof is known.\n\nSo far we have not explained why universal optima must be balanced. 
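(As a brief aside on the remark above that balancedness is easy to check: the following Python sketch implements the $S_u(x)$ criterion directly. The rounding tolerance and the two example configurations are illustrative choices, not from the paper.)

```python
import itertools

import numpy as np

def is_balanced(points, tol=1e-9):
    """Check the criterion above: for every x in C and every inner
    product u, the sum S_u(x) of all y in C with <x, y> = u must be a
    multiple of x (points are assumed to lie on the unit sphere)."""
    pts = [np.asarray(p, dtype=float) for p in points]
    for x in pts:
        groups = {}
        for y in pts:
            groups.setdefault(round(float(np.dot(x, y)), 9), []).append(y)
        for g in groups.values():
            s = np.sum(g, axis=0)
            # s is a multiple of x iff its component orthogonal to x vanishes
            if np.linalg.norm(s - np.dot(s, x) * x) > tol:
                return False
    return True

# The vertices of a cube are balanced ...
cube = [np.array(v) / np.sqrt(3.0)
        for v in itertools.product([-1, 1], repeat=3)]
print(is_balanced(cube))  # True

# ... but three orthonormal vectors are not: for x = e1, the points at
# inner product 0 sum to e2 + e3, which is not a multiple of e1.
print(is_balanced([(1.0, 0, 0), (0, 1.0, 0), (0, 0, 1.0)]))  # False
```

For the cube, each sum $S_u(x)$ works out to $\pm x$, so the check succeeds.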
Any\noptimal configuration must be in equilibrium under the force laws\ncorresponding to the potential functions it minimizes, but no\nconfiguration could possibly minimize all potential functions\nsimultaneously (universal optima minimize a large but still restricted\nclass of potentials). The explanation is that a configuration is\nbalanced if and only if it is balanced for merely the class of\ninverse-power force laws. In the latter case, we cannot weight the\nforce contributions from different distances independently. However, as\nthe exponent of the force law tends to infinity, the force contribution\nfrom the shortest distance will dominate unless it acts orthogonally to\nthe sphere. This observation can be used to isolate each force\ncontribution in order by distance. Alternatively, we can argue that\nthe configuration is balanced under any linear combination of\ninverse-power laws and hence any polynomial in the reciprocal of\ndistance. We can then isolate any single distance by choosing that\npolynomial to vanish at all the other distances.\n\n\\section{Spherical designs}\n\nRecall that a \\emph{spherical $t$-design} in $S^{n-1}$ is a (non-empty)\nfinite subset $\\mathcal{C}$ of $S^{n-1}$ such that for every polynomial\n$p \\colon \\mathbb{R}^n \\to \\mathbb{R}$ of total degree at most~$t$, the average of $p$\nover $\\mathcal{C}$ equals its average over all of $S^{n-1}$. In other\nwords,\n$$\n\\frac{1}{|\\mathcal{C}|}\\sum_{x \\in \\mathcal{C}} p(x)\n= \\frac{1}{\\mathop{\\textup{vol}}(S^{n-1})} \\int_{S^{n-1}} p(x) \\, d\\mu(x),\n$$\nwhere $\\mu$ denotes the surface measure on $S^{n-1}$ and\n$\\mathop{\\textup{vol}}(S^{n-1})$ is of course not the volume of the enclosed ball but\nrather $\\int_{S^{n-1}} d\\mu(x)$.\n\n\\begin{theorem} \\label{theorem:main}\nLet $\\mathcal{C} \\subset S^{n-1}$ be a spherical $t$-design. 
If for\neach $x \\in \\mathcal{C}$, $$|\\{\\langle x,y \\rangle : y \\in \\mathcal{C},\ny \\ne \\pm x\\}| \\le t,$$ then $\\mathcal{C}$ is balanced.\n\\end{theorem}\n\nHere, $\\langle\\cdot,\\cdot\\rangle$ denotes the usual Euclidean inner\nproduct.\n\n\\begin{proof}\nLet $x$ be any element of $\\mathcal{C}$, and let $u_1,\\dots,u_k$ be the\ninner products between $x$ and the elements of $\\mathcal{C}$ other than\n$\\pm x$. By assumption, $k \\le t$. We wish to show that for each $i$,\nthe sum $S_{u_i}(x)$ of all $z \\in \\mathcal{C}$ such that $\\langle z,x\n\\rangle = u_i$ is a multiple of $x$.\n\nGiven any vector $y \\in \\mathbb{R}^n$ and integer $i$ satisfying $1 \\le i \\le\nk$, define the degree $k$ polynomial $p \\colon \\mathbb{R}^n \\to \\mathbb{R}$ by\n$$\np(z) = \\langle y,z \\rangle \\prod_{j \\,:\\, 1 \\le j \\le k, \\, j \\ne i}\n \\big(\\langle x,z \\rangle - u_j\\big).\n$$\n\nSuppose now that $y$ is orthogonal to $x$. Then the average of $p$ over\n$S^{n-1}$ vanishes, because on the cross sections of the sphere on\nwhich $\\langle x,z \\rangle$ is constant, each factor $\\langle x,z\n\\rangle - u_j$ is constant, while $\\langle y,z \\rangle$ is an odd\nfunction on such a cross section. More precisely, under the map $z\n\\mapsto 2 \\langle x,z \\rangle x - z$ (which preserves the component\nof~$z$ in the direction of~$x$ and multiplies everything orthogonal to\n$x$ by $-1$), the inner product with~$x$ is preserved while the inner\nproduct with~$y$ is multiplied by~$-1$. Since $\\mathcal{C}$ is a\n$t$-design, it follows that the sum of $p(z)$ over $z \\in \\mathcal{C}$\nalso vanishes.\n\nMost of the terms in this sum vanish: when $z = \\pm x$, we have\n$\\langle y,z \\rangle = 0$, and when $\\langle x,z \\rangle = u_j$ the\nproduct vanishes unless $j=i$. 
It follows that the sum of $p(z)$ over\n$z \\in \\mathcal{C}$ equals\n$$\n\\sum_{z \\in \\mathcal{C} \\,:\\, \\langle z,x \\rangle = u_i}\n\\langle y,z \\rangle\n\\prod_{j \\,:\\, 1 \\le j \\le k, \\, j \\ne i} (u_i - u_j)\n=\n\\left(\\prod_{j \\,:\\, 1 \\le j \\le k, \\, j \\ne i} (u_i - u_j)\\right)\n\\big\\langle y, S_{u_i}(x) \\big\\rangle.\n$$\n\nBecause the first factor is nonzero, we conclude that $S_{u_i}(x)$ must\nbe orthogonal to $y$. Because this holds for all $y$ orthogonal\nto~$x$, it follows that $S_{u_i}(x)$ is a multiple of $x$, as desired.\n\\end{proof}\n\n\\emph{Examples.} The vertices of a cube form a spherical $3$-design,\nand only two inner products other than $\\pm1$ occur between them, so\nTheorem~\\ref{theorem:main} implies that the cube is balanced. On the\nother hand, not every group-balanced configuration satisfies the\nhypotheses of the theorem. For example, the configuration in $S^2$\nformed by the north and south poles and a ring of $k$ equally spaced\npoints around the equator is group-balanced, but it is not even a\n$2$-design if $k \\ne 4$. In Section~\\ref{section:srg} we will show\nthat Theorem~\\ref{theorem:main} applies to some configurations that are\nnot group-balanced, so the two sufficient conditions for being balanced\nare incomparable.\n\n\\section{Counterexamples from strongly regular graphs}\n\\label{section:srg}\n\nEvery spectral embedding of a strongly regular graph is both a\nspherical $2$-design and a $2$-distance set, so by\nTheorem~\\ref{theorem:main} they are all balanced. Recall that to form\na spectral embedding of a strongly regular graph with $N$ vertices, one\northogonally projects the standard orthonormal basis of $\\mathbb{R}^N$ to a\nnontrivial eigenspace of the adjacency matrix of the graph. See\nSections~2 and~3 of \\cite{CGS} for a brief review of the theory of\nspectral embeddings. 
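As an illustration, the construction can be carried out numerically in a few lines; the sketch below uses the pentagon (the $5$-cycle, a strongly regular graph with parameters $(5,2,0,1)$), and the eigenvalue selection and rounding are illustrative details:

```python
import numpy as np

# Spectral embedding of the pentagon: project the standard basis of R^5
# onto a nontrivial eigenspace of the adjacency matrix, then renormalize
# the projections to the unit sphere.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

w, V = np.linalg.eigh(A)
# the eigenvalue 2*cos(72 deg) = (sqrt(5) - 1)/2 has multiplicity 2
mask = np.isclose(w, (np.sqrt(5.0) - 1.0) / 2.0)
B = V[:, mask]                  # eigenspace coordinates of the projections
X = B / np.linalg.norm(B, axis=1, keepdims=True)

# The embedding is a 2-distance set: the off-diagonal inner products are
# cos(144 deg) and cos(72 deg) only.
ips = sorted({round(float(np.dot(X[i], X[j])), 6)
              for i in range(n) for j in range(i + 1, n)})
print(ips)  # [-0.809017, 0.309017]
```

The two off-diagonal inner products confirm that this embedding is the regular pentagon on the unit circle.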
Theorem~4.2 in \\cite{CGS} gives the details of the\nresult that every spectral embedding is a $2$-design, a fact previously\nnoted as part of Example~9.1 in \\cite{DGS}.\n\nThe symmetry group of such a configuration is simply the combinatorial\nautomorphism group of the graph, so it suffices to find a strongly\nregular graph with no nontrivial automorphisms. According to Brouwer's\ntables \\cite{B1}, the smallest such graph is a $25$-vertex graph with\nparameters $(25,12,5,6)$ (the same as those of the Paley graph for the\n\\hbox{$25$-element} field), which has a spectral embedding in\n$\\mathbb{R}^{12}$. See Figure~\\ref{figure:srg25} for an adjacency matrix.\nVerifying that this graph has no automorphisms takes a moderate amount\nof calculation, best done by computer.\n\n\\begin{figure}\n{\\tiny \\begin{center}\n\\begin{tabular}{ccccccccccccccccccccccccc}\n0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0\\\\\n1 0 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0\\\\\n1 1 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1\\\\\n1 1 1 0 0 0 0 1 1 1 0 0 0 1 1 1 0 0 0 1 1 1 0 0 0\\\\\n1 1 1 0 0 0 0 1 0 0 1 1 0 1 0 0 1 1 0 1 0 0 1 1 0\\\\\n1 1 1 0 0 0 0 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1\\\\\n1 1 1 0 0 0 0 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1\\\\\n1 0 0 1 1 0 0 0 1 1 1 0 0 1 0 1 0 0 1 0 0 0 1 1 1\\\\\n1 0 0 1 0 1 0 1 0 1 0 1 0 0 1 0 1 1 0 0 0 1 1 0 1\\\\\n1 0 0 1 0 0 1 1 1 0 0 0 1 0 0 0 1 1 1 1 1 0 0 1 0\\\\\n1 0 0 0 1 1 0 1 0 0 0 1 1 1 1 0 0 0 1 0 1 1 0 1 0\\\\\n1 0 0 0 1 0 1 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 1 0 0\\\\\n1 0 0 0 0 1 1 0 0 1 1 1 0 1 0 1 1 0 0 1 1 0 0 0 1\\\\\n0 1 0 1 1 0 0 1 0 0 1 0 1 0 0 1 1 1 0 0 1 1 0 0 1\\\\\n0 1 0 1 0 1 0 0 1 0 1 1 0 0 0 1 1 0 1 1 0 1 0 1 0\\\\\n0 1 0 1 0 0 1 1 0 0 0 1 1 1 1 0 0 0 1 1 0 0 1 0 1\\\\\n0 1 0 0 1 1 0 0 1 1 0 0 1 1 1 0 0 1 0 1 0 0 0 1 1\\\\\n0 1 0 0 1 0 1 0 1 1 0 1 0 1 0 0 1 0 1 0 1 1 1 0 0\\\\\n0 1 0 0 0 1 1 1 0 1 1 0 0 0 1 1 0 1 0 0 1 0 1 1 0\\\\\n0 0 1 1 1 0 0 0 0 1 0 1 1 0 1 1 1 0 0 0 1 0 1 1 0\\\\\n0 0 1 1 0 1 0 0 0 1 1 0 1 1 
0 0 0 1 1 1 0 1 1 0 0\\\\\n0 0 1 1 0 0 1 0 1 0 1 1 0 1 1 0 0 1 0 0 1 0 0 1 1\\\\\n0 0 1 0 1 1 0 1 1 0 0 1 0 0 0 1 0 1 1 1 1 0 0 0 1\\\\\n0 0 1 0 1 0 1 1 0 1 1 0 0 0 1 0 1 0 1 1 0 1 0 0 1\\\\\n0 0 1 0 0 1 1 1 1 0 0 0 1 1 0 1 1 0 0 0 0 1 1 1 0\n\\end{tabular}\n\\end{center}}\n\\caption{An adjacency matrix of a (25,12,5,6) strongly regular graph\nwith no nontrivial automorphisms.}\\label{figure:srg25}\n\\end{figure}\n\nIn fact, there are two such graphs with no nontrivial automorphisms\n(the other is the complement of the graph in\nFigure~\\ref{figure:srg25}). Paulus classified the $(25,12,5,6)$\nstrongly regular graphs in \\cite{P}; unfortunately, his paper was never\npublished. There are fifteen such graphs, whose automorphism groups\nhave a variety of sizes: two have order $1$, four have order $2$, two\nhave order $3$, four have order $6$, two have order $72$, and one has\norder $600$ (the Paley graph). See \\cite{B2} for more information.\n\nThe Paulus graphs give the lowest-dimensional balanced configurations\nwe have found that have trivial symmetry groups. However, there are\nlower-dimensional counterexamples (with some symmetry but not enough to\nbe group-balanced). The lowest-dimensional one we have constructed is\nin $\\mathbb{R}^7$, and it can be built as follows; fortunately, no computer\ncalculations are needed.\n\nLet $\\mathcal{C}_n$ consist of the $n(n+1)\/2$ midpoints of the edges of\na regular simplex in $\\mathbb{R}^n$ (scaled so that $\\mathcal{C}_n \\subset\nS^{n-1}$). This configuration is a $2$-distance set, with the\ndistances corresponding to whether the associated edges of the simplex\nintersect or not. 
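This two-distance claim is easy to confirm numerically before carrying out the exact computation; a Python sketch (the simplex coordinates below are one standard choice, and $n = 7$ anticipates the case used later):

```python
import itertools

import numpy as np

n = 7
# A regular simplex with n + 1 unit vertices: center the standard basis
# of R^{n+1} and renormalize; the resulting vectors have pairwise inner
# products -1/n and span an n-dimensional subspace.
S = np.eye(n + 1) - 1.0 / (n + 1)
S /= np.linalg.norm(S, axis=1, keepdims=True)

# C_n: the normalized midpoints x_i + x_j of the edges of the simplex.
C = np.array([S[i] + S[j]
              for i, j in itertools.combinations(range(n + 1), 2)])
C /= np.linalg.norm(C, axis=1, keepdims=True)

# Exactly two inner products occur between distinct points of C_7.
ips = sorted({round(float(np.dot(C[a], C[b])), 6)
              for a in range(len(C)) for b in range(a + 1, len(C))})
print(ips)  # [-0.333333, 0.333333]
```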
To compute the inner products, note that if\n$x_1,\\dots,x_{n+1}$ are the vertices of a regular simplex with\n$|x_i|^2=1$ for all $i$ (and hence $\\langle x_i,x_j\\rangle=-1\/n$ for $i\n\\ne j$), then for $i \\ne j$ and $k \\ne \\ell$,\n$$\n\\big\\langle x_i+x_j,x_k+x_\\ell \\big\\rangle = \\begin{cases}\n2-2\/n & \\textup{if $\\{i,j\\} = \\{k,\\ell\\}$,}\\\\\n1-3\/n & \\textup{if $|\\{i,j\\} \\cap \\{k,\\ell\\}|=1$, and}\\\\\n-4\/n & \\textup{if $\\{i,j\\} \\cap \\{k,\\ell\\} = \\emptyset$.}\n\\end{cases}\n$$\nThus, when we renormalize the vectors $x_i+x_j$ to lie on the unit\nsphere, we find that the inner products between them are\n$(1-3\/n)\/(2-2\/n) = (n-3)\/(2n-2)$ and $-(4\/n)\/(2-2\/n) = -2\/(n-1)$.\n\nFor $n>3$, the symmetry group of $\\mathcal{C}_n$ is the same as that of\nthe original simplex (namely the symmetric group on the vertices of the\nsimplex). Clearly, that group acts on $\\mathcal{C}_n$. To see that\n$\\mathcal{C}_n$ has no other symmetries, we will show that the original\nsimplex can be constructed from it in such a way as to be preserved by\nall symmetries of $\\mathcal{C}_n$. Specifically, consider the subsets\nof $\\mathcal{C}_n$ of size $n$ in which all pairs of points are at the\nminimal distance; the sums of these subsets are proportional to the\nvertices of the original simplex. To see why, note that such a subset\ncorresponds to a collection of $n$ pairwise intersecting edges of the\noriginal simplex. They must be exactly the edges containing one of the\nvertices of the simplex: once two intersecting edges are specified,\nonly one other edge can intersect both without containing their common\nvertex, so at most three edges can intersect pairwise without\ncontaining a common vertex. 
(Note that this conclusion genuinely\nrequires that $n>3$, because $\\mathcal{C}_3$ is an octahedron, which\nhas more symmetry than the tetrahedron from which it was derived.)\n\nWhen $n=7$ the inner products in $\\mathcal{C}_7$ are simply $\\pm 1\/3$.\nThe coincidence that these inner products are negatives of each other\nis deeper than it appears, and it plays a role in several useful\nconstructions. For example, the union of $\\mathcal{C}_7$ and its\nantipode $-\\mathcal{C}_7$ is a $3$-distance set, while in other\ndimensions it would be a $5$-distance set. In fact, $\\mathcal{C}_7\n\\cup (-\\mathcal{C}_7)$ is the unique $56$-point universal optimum in\n$\\mathbb{R}^7$, and it is invariant under the Weyl group of~$E_7$. We will\nmake use of the unusual inner products in $\\mathcal{C}_7$ to construct\na modification of it that is balanced but not group-balanced.\n\nWithin $\\mathcal{C}_7$, there are regular tetrahedra (i.e., quadruples\nof points with all inner products $-1\/3$). Geometrically, such a\ntetrahedron corresponds to a set of four disjoint edges in the original\nsimplex, and there is a unique such set up to symmetry, since the\nsimplex in $\\mathbb{R}^7$ has eight vertices and all permutations of these\nvertices are symmetries. Choose a set of four disjoint edges and call\nthem the distinguished edges.\n\nWe now define a modified configuration $\\mathcal{C}_7'$ by replacing\neach point in this tetrahedron by its antipode. Replacing the regular\ntetrahedron preserves the $2$-design property, because the tetrahedron\nis itself a $2$-design within the $2$-sphere it spans. In particular,\nfor every polynomial of total degree at most $2$, its sum over the\noriginal tetrahedron is the same as its sum over the antipodal\ntetrahedron. Furthermore, when we replace the tetrahedron, all inner\nproducts remain $\\pm 1\/3$ (some are simply multiplied by $-1$). 
Thus,\nthe resulting configuration $\\mathcal{C}'_7$ remains both a\n$2$-distance set and a $2$-design, so it is balanced by\nTheorem~\\ref{theorem:main}.\n\nHowever, the process of inverting a tetrahedron reduces the symmetry\ngroup.\n\n\\begin{lemma}\n\\label{lemma:group} The configuration $\\mathcal{C}'_7$ has only\n$4!\\cdot 2^4 = 384$ symmetries, namely the permutations of the vertices\nof that original simplex that preserve the set of four distinguished\nedges.\n\\end{lemma}\n\n\\begin{proof}\nThere are clearly $4! \\cdot 2^4$ symmetries of $\\mathcal{C}_7$ that\npreserve the set of distinguished edges of the simplex (they can be\npermuted arbitrarily and their endpoints can be swapped). All of these\nsymmetries preserve $\\mathcal{C}'_7$.\n\nTo show that there are no further symmetries, it suffices to show that\nthe distinguished tetrahedron in $\\mathcal{C}'_7$ is preserved under\nall symmetries. (For then the antipodal tetrahedron is also preserved,\nand hence $\\mathcal{C}_7$ is preserved as well.) Label the vertices of\nthe original simplex $1,2,\\dots,8$, and suppose that the distinguished\nedges correspond to the pairs $12$, $34$, $56$, and $78$. Label the\npoints of $\\mathcal{C}'_7$ by the pairs for the corresponding edges.\n\nThere are at most two orbits under the symmetry group of\n$\\mathcal{C}'_7$, one containing $12$, $34$, $56$, and $78$ and the\nother containing the remaining points. We wish to show that these sets\ndo not in fact form a single orbit. To separate the two orbits, we\nwill count the number of regular tetrahedra each point is contained in.\n(We drop the word ``regular'' below.) The answer will be seven for the\nfour distinguished points and eleven for the other points, so they\ncannot lie in the same orbit.\n\nBefore beginning, we need a criterion for when the inner product\nbetween two points in $\\mathcal{C}'_7$ is $-1\/3$. 
If both points are\ndistinguished or both are non-distinguished, then that occurs exactly\nwhen their label pairs are disjoint. If one point is distinguished and\nthe other is not, then it occurs exactly when their label pairs\nintersect.\n\nNow it is straightforward to count the tetrahedra containing a\ndistinguished point, without loss of generality $12$. There is one\ntetrahedron of distinguished points, namely $\\{12,34,56,78\\}$. If we\ninclude a second distinguished point, say $34$, then there are two ways\nto complete the tetrahedron using two non-distinguished points, namely\n$\\{12,34,13,24\\}$ and $\\{12,34,14,23\\}$ (the two additional pairs must\nbe disjoint and each intersect both $12$ and $34$). Because there are\nthree choices for the second distinguished point, this yields six\ntetrahedra. Finally, it is impossible to form a tetrahedron using $12$\nand three non-distinguished points (one cannot choose three disjoint\npairs that each intersect $12$). Thus, $12$ is contained in seven\ntetrahedra.\n\nTo complete the proof, we need only show that a non-distinguished\npoint, without loss of generality $13$, is contained in more than seven\ntetrahedra. There is a unique tetrahedron containing $13$ and two\ndistinguished points, namely $\\{13, 12, 34, 24\\}$. (There are only two\ndistinguished points that overlap with $13$, namely $12$ and $34$; then\nthe fourth point $24$ is determined.) No tetrahedron can contain a\nsingle distinguished point, as we saw in the previous paragraph, and if\na tetrahedron contains three distinguished points then it must contain\nthe fourth. Thus, the only remaining possibility is that all the points\nare non-distinguished. The three other points in the tetrahedron must\nbe labeled with disjoint pairs from $\\{2,4,5,6,7,8\\}$, and the labels\n$56$ and $78$ are not allowed (because those points are distinguished).\nThere are $6!\/(2!^3\\cdot3!)=15$ ways to split $\\{2,4,5,6,7,8\\}$ into\nthree disjoint pairs. 
Among them, three contain the pair $56$, three\ncontain the pair $78$, and one contains both pairs. Thus, there are\n$15-3-3+1=10$ possibilities containing neither $56$ nor~$78$. In\ntotal, the point $13$ is contained in eleven tetrahedra. Since it is\ncontained in more than seven tetrahedra, we see that $12$ and $13$ are\nin different orbits, as desired.\n\\end{proof}\n\nBy Lemma~\\ref{lemma:group}, there are two orbits of points in\n$\\mathcal{C}'_7$, namely the four points in the tetrahedron and the\nremaining $24$ points. The stabilizer of any point in the large orbit\nactually fixes two such points. Specifically, consider the edge in the\noriginal simplex that corresponds to the point. It shares its vertices\nwith two of the four distinguished edges (each vertex is in a unique\ndistinguished edge), and there is another edge that connects the other\ntwo vertices of those distinguished edges. For example, in the\nnotation of the proof of Lemma~\\ref{lemma:group}, the edge $13$ has the\ncompanion~$24$. This second edge has the same stabilizer as the first.\nIt follows that $\\mathcal{C}'_7$ is not group-balanced.\n\nIf we interpret the configuration $\\mathcal{C}'_7$ as a graph by using\nits shorter distance to define edges, then we get a strongly regular\ngraph, with parameters $(28,12,6,4)$, the same as those of\n$\\mathcal{C}_7$. In fact, every $2$-design $2$-distance set yields a\nstrongly regular graph, by Theorem~7.4 of \\cite{DGS}. We have checked\nusing Brouwer's list \\cite{B1} that spectral embeddings of strongly\nregular graphs do not yield counterexamples in lower dimensions. It\nsuffices to consider graphs with at most $27$ vertices, since by\nTheorem~4.8 in \\cite{DGS} no two-distance set in $S^5$ contains more\nthan $27$ points. 
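As an aside, the tetrahedron counts of seven and eleven from the proof of Lemma~\ref{lemma:group} can be confirmed by exhaustive search; a Python sketch, with edge labels as in the proof:

```python
import itertools

# Points of C'_7, labeled by the 28 edges {i, j} of the simplex on
# vertices 1..8; D is the distinguished (inverted) tetrahedron.
points = [frozenset(p) for p in itertools.combinations(range(1, 9), 2)]
D = {frozenset(p) for p in [(1, 2), (3, 4), (5, 6), (7, 8)]}

def ip_is_minus_third(p, q):
    # Inner product -1/3: disjoint labels if both or neither point is
    # distinguished, intersecting labels otherwise.
    if (p in D) == (q in D):
        return not (p & q)
    return bool(p & q)

def tetrahedra_through(x):
    # Regular tetrahedra = 4-sets with all pairwise inner products -1/3.
    return sum(1 for t in itertools.combinations(points, 4)
               if x in t and all(ip_is_minus_third(p, q)
                                 for p, q in itertools.combinations(t, 2)))

print(tetrahedra_through(frozenset({1, 2})))  # 7  (distinguished point)
print(tetrahedra_through(frozenset({1, 3})))  # 11 (non-distinguished point)
```

The search ranges over all $\binom{28}{4} = 20475$ quadruples, so it completes essentially instantly.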
Aside from the degenerate case of complete\nmultipartite graphs and their complements, the full list of strongly\nregular graphs with spectral embeddings in six or fewer dimensions is\nthe pentagon, the Paley graph on $9$ vertices, the Petersen graph, the\nPaley graph on $13$ vertices, the line graph of $K_6$, the Clebsch\ngraph, the Shrikhande graph, the $4 \\times 4$ lattice graph, the line\ngraph of $K_7$, the Schl\\\"afli graph, and the complements of these\ngraphs. It is straightforward to check that these graphs all have\ngroup-balanced spectral embeddings. Of course there may be\nlow-dimensional counterexamples of other forms.\n\nWe suspect that there are no counterexamples in $\\mathbb{R}^4$:\n\n\\begin{conjecture}\nIn $\\mathbb{R}^4$, every balanced configuration is group-balanced.\n\\end{conjecture}\n\nIf true, this conjecture would lead to a complete classification of\nbalanced configurations in $\\mathbb{R}^4$, because all the finite subgroups of\n$O(4)$ are known (see for example \\cite{CS}). It is likely that using\nsuch a classification one could prove completeness for the list of\nknown universal optima in $\\mathbb{R}^4$, namely the regular simplices, cross\npolytope, and $600$-cell, but we have not completed this calculation.\n\nIn $\\mathbb{R}^5$ or $\\mathbb{R}^6$, we are not willing to hazard a guess as to whether\nall balanced configurations are group-balanced. The construction of\n$\\mathcal{C}'_7$ uses such an ad hoc approach that it provides little\nguidance about lower dimensions.\n\n\\section{Counterexamples from lattices}\n\nIn higher dimensions, we can use lattices to construct counterexamples\nthat do not arise from strongly regular graphs. For example, consider\nthe lattice $\\Lambda(G_2)$ in the Koch-\\kern-.15exVenkov list of extremal even\nunimodular lattices in $\\mathbb{R}^{32}$ (see \\cite[p.~212]{KV} or the\nNebe-Sloane catalogue~\\cite{NS} of lattices). This lattice has\n$146880$ minimal vectors. 
When they are renormalized to be unit\nvectors, only five inner products occur besides $\\pm 1$ (namely, $\\pm\n1\/2$, $\\pm 1\/4$, and $0$). By Corollary~3.1 of \\cite{BV}, this\nconfiguration is a spherical $7$-design. Hence, by\nTheorem~\\ref{theorem:main} it is balanced. However,\n$\\mathop{\\textup{Aut}}(\\Lambda(G_2))$ is a relatively small group, of order $3 \\cdot\n2^{12}$, and one can check by computer calculations that some minimal\nvectors have trivial stabilizers. (The lattice is generated by its\nminimal vectors, and thus it and its kissing configuration have the\nsame symmetry group.) The kissing configuration of $\\Lambda(G_2)$ is\ntherefore balanced but not group-balanced.\n\nThe case of $\\Lambda(G_2)$ is particularly simple since some\nstabilizers are trivial, but one can also construct lower-dimensional\ncounterexamples using lattices. For example, let $L$ be the unique\n$2$-modular lattice in dimension~$20$ with Gram matrix\ndeterminant~$2^{10}$, minimal norm~$4$, and automorphism group $2 \\cdot\nM_{12} \\cdot 2$ (see \\cite[p.~101]{BV} or \\cite{NS}). The kissing\nnumber of $L$ is $3960$, and its automorphism group (which is, as\nabove, the same as the symmetry group of its kissing configuration)\nacts transitively on the minimal vectors. The kissing configuration is\na spherical $5$-design (again by Corollary~3.1 in \\cite{BV}), and only\nfive distances occur between distinct, non-antipodal points. Thus, by\nTheorem~\\ref{theorem:main} the kissing configuration of~$L$ is\nbalanced. However, computer calculations show that the stabilizer of a\npoint fixes a $2$-dimensional subspace of $\\mathbb{R}^{20}$, and thus the\nconfiguration is not group-balanced.\n\nThe kissing configurations of the extremal even unimodular lattices\n$P_{48p}$ and $P_{48q}$ in $\\mathbb{R}^{48}$ (see \\cite[p.~149]{CSl}) are also\nbalanced but not group-balanced. 
They have $52416000$ minimal vectors,\nwith inner products $\\pm 1$, $\\pm 1\/2$, $\\pm 1\/3$, $\\pm 1\/6$, and $0$\nafter rescaling to the unit sphere. By Corollary~3.1 in \\cite{BV}, the\nkissing configurations are spherical $11$-designs, so by\nTheorem~\\ref{theorem:main} they are balanced. However, in both cases\nthere are points with trivial stabilizers, so they are not\ngroup-balanced. Checking this is more computationally intensive than\nin the previous two cases. Fortunately, for the bases listed in\n\\cite{NS}, in both cases the first basis vector is a minimal vector\nwith trivial stabilizer, and this triviality is easily established by\nsimply enumerating the entire orbit. (The automorphism groups of\n$P_{48p}$ and $P_{48q}$ have orders $72864$ and $103776$,\nrespectively.) We expect that the same holds for every extremal even\nunimodular lattice in $\\mathbb{R}^{48}$, but they have not been fully\nclassified and we do not see how to prove it except for checking each\ncase individually.\n\n\\section{Euclidean balanced configurations}\n\nThe concept of a balanced configuration generalizes naturally to\nEuclidean space: a discrete subset $\\mathcal{C} \\subset \\mathbb{R}^n$ is\n\\emph{balanced} if for every $x \\in \\mathcal{C}$ and every distance\n$d$, the set $\\{y \\in \\mathcal{C} : |x-y|=d\\}$ either is empty or has\ncentroid $x$. As in the spherical case, this characterization is\nequivalent to being in equilibrium under all pairwise forces that\nvanish past some radius (to avoid convergence issues).\n\nThe concept of a group-balanced configuration generalizes as well. Let\n$\\mathop{\\textup{Aut}}(\\mathcal{C})$ denote the set of rigid motions of $\\mathbb{R}^n$\npreserving $\\mathcal{C}$. Then $\\mathcal{C}$ is \\emph{group-balanced}\nif for every $x \\in \\mathcal{C}$, the stabilizer of $x$ in\n$\\mathop{\\textup{Aut}}(\\mathcal{C})$ fixes only the point $x$. 
For example, every\nlattice in Euclidean space is group-balanced, because the stabilizer of\neach lattice point contains the operation of reflection through that\npoint. Clearly, group-balanced configurations are balanced, because the\ncentroid of $\\{y \\in \\mathcal{C} : |x-y|=d\\}$ is fixed by the\nstabilizer of $x$.\n\n\\begin{conjecture} \\label{conjecture:R2}\nEvery balanced discrete subset of $\\mathbb{R}^2$ is group-balanced.\n\\end{conjecture}\n\nConjecture~\\ref{conjecture:R2} can likely be proved using ideas similar\nto those used by Leech in \\cite{Leech} to prove the analogue for $S^2$,\nbut we have not completed a proof.\n\n\\begin{conjecture} \\label{conjecture:highdim}\nIf $n$ is sufficiently large, then there exists a discrete subset of\n$\\mathbb{R}^n$ that is balanced but not group-balanced.\n\\end{conjecture}\n\nOne might hope to prove Conjecture~\\ref{conjecture:highdim} using an\nanalogue of Theorem~\\ref{theorem:main}. Although we have not succeeded\nwith this approach, one can indeed generalize several of the\ningredients to Euclidean space: the analogue of a polynomial is a\nradial function from $\\mathbb{R}^n$ to $\\mathbb{R}$ whose Fourier transform has compact\nsupport (i.e., the function is an entire function of exponential type),\nand the analogue of the degree of the polynomial is the radius of the\nsupport. Instead of having a bounded number of roots, such a function\nhas a bounded density of roots. The notion of a spherical design also\ngeneralizes to Euclidean space as follows. A configuration\n$\\mathcal{C}$ with density $1$ (i.e., one point per unit volume in\nspace) is a ``Euclidean $r$-design'' if whenever $f$ is a radial\nSchwartz function with $\\mathop{\\textup{supp}}\\big(\\widehat{f}\\,\\big) \\subseteq B_r(0)$,\nthe average of $\\sum_{y \\in \\mathcal{C}} f(x-y)$ over $x \\in\n\\mathcal{C}$ equals $\\int f(x-y) \\, dy = \\widehat{f}(0)$. 
(The average\nor even the sum may not make sense if $\\mathcal{C}$ is pathological,\nbut for example they are always well-defined for periodic\nconfigurations.) It is plausible that an analogue of\nTheorem~\\ref{theorem:main} is true in the Euclidean setting, but we\nhave not attempted to state or prove a precise analogue, because it is\nnot clear that it would have any interesting applications.\n\n\\section*{Acknowledgements}\n\nWe thank Richard Green and the anonymous referee for their suggestions\nand feedback.\n\n\\bibliographystyle{amsalpha}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsection*{Abstract}\nToday's big data applications generate hundreds or even thousands of terabytes of data. Commonly, Java-based applications are used for further analysis. A single commodity machine, for example in a data center or typical cloud environment, cannot store and process these vast amounts of data, making distribution mandatory. Thus, the machines have to use interconnects to exchange data or coordinate data analysis. However, commodity interconnects used in such environments, e.g. Gigabit Ethernet, cannot provide the high throughput and low latency of alternatives like InfiniBand, which are needed to speed up data analysis for the target applications. In this report, we describe the design and implementation of Ibdxnet, a low-latency and high-throughput transport providing the benefits of InfiniBand networks to Java applications. Ibdxnet is part of the Java-based DXNet library, a highly concurrent and simple-to-use messaging stack with transparent serialization of messaging objects and a focus on very small messages (< 64 bytes). Ibdxnet implements the transport interface of DXNet in Java and a custom C++ library in native space using JNI. Several optimizations in both spaces minimize context switching overhead between Java and C++ without burdening message latency or throughput. 
Communication is implemented using the messaging verbs of the ibverbs library, complemented by automatic connection management in the native library. We compared DXNet with the Ibdxnet transport to the MPI implementations FastMPJ and MVAPICH2. For small messages up to 64 bytes using multiple threads, DXNet with the Ibdxnet transport achieves a bi-directional message rate of 10 million messages per second and surpasses FastMPJ by a factor of 4 and MVAPICH2 by a factor of 2. Furthermore, DXNet scales well in a high-load all-to-all communication with up to 8 nodes, achieving a total aggregated message rate of 43.4 million messages per second for small messages and a throughput saturation of 33.6 GB\/s with a message size of only 2 KB.\n\n\\section{Introduction}\n\nInteractive applications, especially on the web \\cite{facebook2, Liu:2016:ECI:2964797.2964815}, simulations \\cite{doi:10.1093\/bioinformatics\/btt055} or online data analysis \\cite{Desikan:2005:IPR:1062745.1062885, 6547630, DOI:10.1007\/978-3-319-55699-4_20} have to process terabytes of data, often consisting of small objects. For example, social networks store graphs with trillions of edges, resulting in a per-object size of less than 64 bytes for the majority of objects \\cite{Ching:2015:OTE:2824032.2824077}. Other graph examples include brain simulations with billions of neurons and thousands of connections each \\cite{IntroducingGraph500} or search engines for billions of indexed web pages \\cite{Gulli:2005:IWM:1062745.1062789}. To provide high interactivity to the user, low latency is a must in many of these application domains. Furthermore, it is also important in the domain of mobile networks, which move state management into the cloud \\cite{Kablan:2015:SNF:2785989.2785993}.\n\nBig data applications process vast amounts of data, which requires either an expensive supercomputer or a distributed platform, like a cluster or cloud environment \\cite{HASHEM201598}. 
High performance interconnects, such as InfiniBand, play a key role in keeping processing and response times low, especially for highly interactive, always-online applications. Today, many cloud providers, e.g. Microsoft, Amazon or Google, offer instances equipped with InfiniBand.\n\nInfiniBand offers messaging verbs and RDMA, both providing one-way single-digit microsecond latencies. It depends on the application requirements whether messaging verbs or RDMA is the better choice to ensure optimal performance \\cite{Su:2017:RRF:3064176.3064189}.\n\nIn this report, we focus on Java-based parallel and distributed applications, especially big data applications, which commonly communicate with remote nodes using asynchronous and synchronous messages \\cite{Ching:2015:OTE:2824032.2824077, Ekanayake:2016:SJH:2972969.2972972, Dean:2008:MSD:1327452.1327492, Zaharia:2016:ASU:3013530.2934664}. Unfortunately, accessing InfiniBand verbs from Java is not a built-in feature of the commonly used JVMs. There are several external libraries, wrappers and JVMs with built-in support available, but all trade performance for transparency or require proprietary environments (\\S \\ref{related_work_java_ib}). To use InfiniBand from Java, one can rely on available (Java) MPI implementations. However, these do not provide features such as serialization of message objects or automatic connection management (\\S \\ref{related_work_mpi}).\n\nWe developed the network subsystem DXNet (\\S \\ref{dxnet}) which provides transparent and simple to use sending and event-based receiving of synchronous and asynchronous messages with transparent serialization of messaging objects \\cite{dxnet}. It is optimized for high concurrency on all operations by implementing lock-free synchronization. DXNet is implemented in Java; it is open source and available on GitHub \\cite{dxnetgithub}.\n\nIn this report, we propose Ibdxnet, a transport for the DXNet network subsystem. 
The transport uses reliable messaging verbs to implement InfiniBand support for DXNet and provides low latency and high throughput messaging for Java.\n\nIbdxnet implements scalable and automatic connection and queue pair management, the \\textit{msgrc} transport engine, which uses InfiniBand messaging verbs, and a JNI interface. We present best practices applied to ensure scalability across multiple threads and nodes when working with InfiniBand verbs by elaborating on the implementation details of Ibdxnet. We carefully designed an efficient and low latency JNI layer to connect the native Ibdxnet subsystem to the Java-based IB transport in DXNet. The IB transport uses the JNI layer to interface with Ibdxnet, extends DXNet's outgoing ring buffer for InfiniBand usage and implements scalable scheduling of outgoing data for many simultaneous connections. We evaluated DXNet with the IB transport and Ibdxnet, and compared them to two MPI implementations supporting InfiniBand: the well-known MVAPICH2 and the Java-based FastMPJ.\n\nThough MPI is discussed in related work (\\S \\ref{related_work_mpi}) and two implementations are evaluated and compared to DXNet (\\S \\ref{eval}), neither DXNet, the IB transport, nor Ibdxnet implements the MPI standard. The term \\textit{messaging} is used by DXNet to simply refer to exchanging data in the form of messages (i.e. additional metadata identifies messages on receive). DXNet does not implement any MPI primitives defined by the standard. Various low-level libraries to use InfiniBand in Java are not compared in this report, but in a separate one.\n\nThe report is structured in the following way: In Section \\ref{dxnet}, we present a summary of DXNet and its aspects important to this report. In Section \\ref{related_work}, we discuss related work, which includes a brief summary of available libraries and middleware for interfacing InfiniBand in Java applications. 
MPI and selected implementations supporting InfiniBand are presented as available middleware solutions and compared to DXNet. Lastly, we discuss target applications in the field of Big-Data which benefit from InfiniBand usage. Section \\ref{ib_basics} covers InfiniBand basics which are of concern for this report. Section \\ref{java_and_native} discusses JNI usage and presents best practices for low latency interfacing with native code from Java using JNI. Section \\ref{overview_infiniband_transport} gives a brief overview of DXNet's multi-layered stack when using InfiniBand. Implementation details of the native part Ibdxnet are given in Section \\ref{ibdxnet_native}, and the IB transport in Java is presented in Section \\ref{transport_impl_java}. Section \\ref{eval} presents and compares the experimental results of MVAPICH2, FastMPJ and DXNet. Conclusions are presented in Section \\ref{conclusions}.\n\n\\section{DXNet}\n\\label{dxnet}\nDXNet is a network library for Java targeting, but not limited to, highly concurrent big data applications. DXNet implements an \\textbf{asynchronous event driven messaging} approach with a simple and easy to use application interface. \\textbf{Messaging} describes \\textbf{transparent sending and receiving of complex (even nested) data structures} with implicit serialization and de-serialization. Furthermore, DXNet provides a built-in primitive for transparent \\textbf{request-response communication}.\n\nDXNet is optimized for highly multi-threaded sending and receiving of small messages by using \\textbf{lock-free data structures, fast concurrent serialization, zero copy and zero allocation}. The core of DXNet provides \\textbf{automatic connection and buffer management}, serialization of message objects and an interface for implementing different transports. 
Currently, an Ethernet transport using Java NIO sockets and an InfiniBand transport using \\textit{ibverbs} (\\S \\ref{ibdxnet_native}) are implemented.\n\nThe following subsections describe the most important aspects of DXNet and its core which are depicted in Figure \\ref{dxnet_simple_fig} and relevant for further sections of this report. A more detailed insight is given in a dedicated paper \\cite{dxnet}. The source code is available on GitHub \\cite{dxnetgithub}.\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.0in]{dxnet_simple.png}\n\t\\caption{Simplified DXNet Architecture}\n\t\\label{dxnet_simple_fig}\n\\end{figure}\n\n\\subsection{Automatic Connection Management}\n\\label{dxnet_con_man}\nTo relieve the programmer from explicit connection creation, handling and cleanup, DXNet implements automatic and transparent connection creation, handling and cleanup. Nodes are addressed using an \\textbf{abstract and unique 16-bit nodeID}. Address mappings must be registered to allow associating the nodeIDs of each remote node with a corresponding implementation-dependent endpoint (e.g. socket, queue pair). To provide scalability with up to hundreds of simultaneous connections, our event-driven system does not create one thread per connection. A \\textbf{new connection is created automatically} once the first message is either sent to a destination or received from one. Connections are closed once a configurable connection limit is reached, using a least-recently-used strategy. Faulty connections (e.g. remote node not reachable anymore) are handled and cleaned up by the manager. Connection errors and timeouts are propagated to the application using exceptions.\n\n\\subsection{Sending of Messages}\n\\label{dxnet_send}\n\\textbf{Messages} are serialized Java objects and sent \\textbf{asynchronously} without waiting for a completion. A message can be targeted towards one or multiple receivers. 
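The automatic connection management described in the previous subsection can be illustrated with a minimal sketch. All class and method names here are our own illustration, not DXNet's actual API; the real manager additionally handles faulty connections, timeouts and transport-specific endpoints:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal, hypothetical sketch of DXNet-style automatic connection
// management: connections are keyed by a 16-bit nodeID, created lazily
// on the first send to or receive from a node, and evicted in
// least-recently-used order once a configurable limit is reached.
public class ConnectionCache {
    static class Connection {
        final short nodeId;
        Connection(short nodeId) { this.nodeId = nodeId; }
        void close() { /* transport-specific cleanup, e.g. destroy the QP */ }
    }

    private final Map<Short, Connection> connections;

    public ConnectionCache(int maxConnections) {
        // accessOrder = true turns the LinkedHashMap into an LRU list
        this.connections = new LinkedHashMap<Short, Connection>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Short, Connection> eldest) {
                if (size() > maxConnections) {
                    eldest.getValue().close(); // evict least-recently-used
                    return true;
                }
                return false;
            }
        };
    }

    // Called on the first message sent to or received from nodeId
    public Connection get(short nodeId) {
        return connections.computeIfAbsent(nodeId, Connection::new);
    }

    public int size() { return connections.size(); }
}
```

The sketch is single-threaded for brevity; a real implementation must synchronize access, since application send threads and the receive path request connections concurrently.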
A message of type \\textbf{Request} is sent to exactly one receiver. When sending a request, the sender waits until \\textbf{receiving a corresponding response} message (transparently handled by DXNet) or skips waiting and collects the response later.\n\nWe expect applications calling DXNet concurrently with \\textbf{multiple threads} to send messages. Every message is automatically and concurrently serialized into the \\textbf{Outgoing Ring Buffer (ORB)}, a natively allocated and lock-free ring buffer. \\textbf{Messages are automatically aggregated}, which increases send throughput. The ORB, one per connection, is allocated in native memory to allow \\textbf{direct and zero-copy access} by the low-level transport. A transport runs a decoupled dedicated thread which removes the serialized, ready-to-send data from the ORB and forwards it to the hardware.\n\n\\subsection{Receiving of Messages}\n\\label{dxnet_receive}\nThe network transport handles incoming data by writing it to \\textbf{pooled native buffers} to avoid burdening the Java garbage collector. Depending on how a transport writes and reads data, the buffers might contain fully serialized messages or just fragments. Every received buffer is pushed to the ring-buffer-based \\textbf{Incoming Buffer Queue (IBQ)}. Both the buffer pool and the IBQ are shared among all connections. \\textbf{Dedicated handler threads} pull buffers from the IBQ and process them asynchronously by de-serializing them and creating Java message objects. The messages are passed to \\textbf{pre-registered callback methods} of the application.\n\n\\subsection{Flow Control}\nDXNet implements its own \\textbf{flow control (FC)} mechanism to avoid flooding a remote node with many (very small) messages. This would result in an increased overall latency and lower throughput if the receiving node cannot keep up with processing incoming messages. 
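The event-driven receive path described in the Receiving of Messages subsection (IBQ, dedicated handler threads, pre-registered callbacks) can be sketched as follows. This is a simplified, hypothetical model with our own names; buffer pooling and the actual de-serialization into message objects are elided:

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Minimal sketch of a DXNet-style receive path: the transport pushes
// received buffers into an Incoming Buffer Queue (IBQ); dedicated
// handler threads pull buffers, de-serialize them (elided here) and
// dispatch them to pre-registered callbacks, so the application never
// has to poll or post receive operations itself.
public class ReceivePath {
    private final BlockingQueue<byte[]> ibq = new ArrayBlockingQueue<>(1024);
    private final Map<Byte, Consumer<byte[]>> callbacks = new ConcurrentHashMap<>();

    public ReceivePath(int handlerThreads) {
        for (int i = 0; i < handlerThreads; i++) {
            Thread t = new Thread(this::handlerLoop);
            t.setDaemon(true);
            t.start();
        }
    }

    // Application: pre-register a callback for a message type
    // (here, naively encoded as the first byte of the buffer).
    public void register(byte type, Consumer<byte[]> callback) {
        callbacks.put(type, callback);
    }

    // Transport: push a received buffer (pooled native memory in DXNet).
    public void push(byte[] buffer) throws InterruptedException {
        ibq.put(buffer);
    }

    private void handlerLoop() {
        try {
            while (true) {
                byte[] buffer = ibq.take(); // de-serialization would happen here
                Consumer<byte[]> cb = callbacks.get(buffer[0]);
                if (cb != null) {
                    cb.accept(buffer);
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Sharing one IBQ among all connections, as DXNet does, keeps the number of handler threads independent of the number of connections.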
On sending a message, the per-connection FC checks whether a configurable threshold is exceeded. This threshold describes the \\textbf{number of bytes sent by the current node but not fully processed by the receiving node}. Once the configurable threshold is exceeded, the receiving node slices the number of bytes received into equally sized windows (window size configurable) and sends the number of confirmed windows back to the source node. Once the sender receives this confirmation, the number of bytes sent but not processed is \\textbf{reduced by the number of received windows multiplied by the configured window size}. If an application send thread was previously blocked due to exceeding this threshold, it can now continue with processing.\n\n\\subsection{Transport Interface}\n\\label{dxnet_transport_interface}\nDXNet provides a transport interface allowing implementations of different transport types. On initialization of DXNet, one of the implemented transports can be selected. Afterwards, when using DXNet, the transport is transparent to the application. The following tasks must be handled by every transport implementation:\n\n\\begin{itemize}\n \\item Connection: Create, close and cleanup\n \\item Get ready-to-send data from the ORB and send it (ORB triggers callback once data is available)\n \\item Handle received data by pushing it to the IBQ\n \\item Manage flow control when sending\/receiving data\n\\end{itemize}\n\nEvery other task that is not exposed directly by one of the following methods must be handled internally by the transport. 
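The flow-control accounting described in the Flow Control subsection can be sketched as follows. This is our own simplification with assumed names; the blocking and waking of application send threads is elided, and the real FC only starts confirming once data actually arrives:

```java
// Minimal sketch of DXNet-style flow-control accounting: the sender
// tracks bytes sent but not yet confirmed by the receiver; the receiver
// slices processed bytes into equally sized windows and confirms full
// windows; the sender subtracts confirmedWindows * windowSize.
public class FlowControl {
    private final int threshold;   // max unconfirmed bytes before send threads block
    private final int windowSize;  // granularity of confirmations

    private long unconfirmedBytes;    // sender side
    private long unconfirmedReceived; // receiver side, not yet confirmed

    public FlowControl(int threshold, int windowSize) {
        this.threshold = threshold;
        this.windowSize = windowSize;
    }

    // Sender: account for posted data; true means the thread must block.
    public boolean onSend(int bytes) {
        unconfirmedBytes += bytes;
        return unconfirmedBytes >= threshold;
    }

    // Receiver: slice processed bytes into full windows to confirm.
    public int onReceive(int bytes) {
        unconfirmedReceived += bytes;
        int windows = (int) (unconfirmedReceived / windowSize);
        unconfirmedReceived -= (long) windows * windowSize;
        return windows; // number of windows sent back to the source node
    }

    // Sender: confirmation arrived; blocked threads may resume below threshold.
    public void onConfirmation(int windows) {
        unconfirmedBytes -= (long) windows * windowSize;
    }

    public long unconfirmed() { return unconfirmedBytes; }
}
```

Confirming in window granularity rather than per byte keeps the amount of flow-control traffic small, at the cost of a bounded remainder of unconfirmed bytes.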
The core of DXNet relies on the following methods of abstract Java classes\/interfaces which must be implemented by every transport:\n\n\\begin{itemize}\n \\item Connection: open, close, dataPosted\n \\item ConnectionManager: createConnection, closeConnection\n \\item FlowControl: sendFlowControlData, getAndResetFlowControlData\n\\end{itemize}\n\nWe elaborate on further details about the transport interface in Section \\ref{transport_impl_java} where we describe the transport implementation for Ibdxnet.\n\n\\section{Related Work}\n\\label{related_work}\nRelated work discusses different topics which are of interest to DXNet with the IB transport and Ibdxnet. First, we present a summary of our evaluation results of existing solutions to use InfiniBand in Java applications (\\S \\ref{related_work_java_ib}). These results were important before developing Ibdxnet. Next, we compare DXNet and the MPI standard (\\S \\ref{related_work_mpi}), followed by MPI implementations supporting InfiniBand (\\S \\ref{related_work_mpi_impls}) and UCX (\\S \\ref{rel_work_other}). To our knowledge, this concludes the list of available middleware offering higher level networking primitives comparable to DXNet's. In the last Subsection \\ref{rel_work_big_data}, we discuss big-data systems and applications supporting InfiniBand as target applications of interest to DXNet.\n\n\\subsection{Java and InfiniBand}\n\\label{related_work_java_ib}\nBefore developing Ibdxnet and the InfiniBand transport for DXNet, we evaluated available (low-level) solutions for leveraging InfiniBand hardware in Java applications. This includes using NIO sockets with \\textbf{IP over InfiniBand (IPoIB)} \\cite{ipoib}, \\textbf{jVerbs} \\cite{Stuedi:2013:JUL:2523616.2523631}, \\textbf{JSOR} \\cite{jsor}, \\textbf{libvma} \\cite{libvma} and \\textbf{native c-verbs with ibverbs}. 
Extensive experiments analyzing throughput and latency of both messaging verbs and RDMA were conducted to determine a suitable candidate for using InfiniBand with Java applications and are published in a separate report.\n\nIn summary, the results show that transparent solutions like IPoIB, libvma or JSOR, which allow existing socket-based applications to send and receive data transparently over InfiniBand hardware, are not able to deliver adequate overall throughput and latency. For the verbs-based libraries, jVerbs gets close to the native ibverbs performance but, like JSOR, requires a proprietary JVM to run. Overall, none of the analyzed solutions, other than ibverbs, delivers adequate performance. Furthermore, we want DXNet to stay independent of the JVM when using InfiniBand hardware. Thus, we decided to use the native ibverbs library with the Java Native Interface to avoid the known performance issues of the evaluated solutions.\n\n\\subsection{MPI}\n\\label{related_work_mpi}\nThe Message Passing Interface \\cite{Forum:1994:MMI:898758} defines a standard for high level networking primitives to send and receive data between local and remote processes, typically used for HPC applications.\n\nAn application can send and receive primitive data types, arrays, derived types or vectors of primitive data types, and indexed data types using MPI. The synchronous primitives \\textit{MPI\\_Send} and \\textit{MPI\\_Recv} perform these operations in blocking mode. The asynchronous operations \\textit{MPI\\_Isend} and \\textit{MPI\\_Irecv} allow non-blocking communication. A status handle is returned with each started asynchronous operation. This can be used to check the completion of the operation or to actively wait for one or multiple completions using \\textit{MPI\\_Wait} or \\textit{MPI\\_Waitall}. 
Furthermore, there are various collective primitives which implement more advanced operations such as scatter, gather or reduce.\n\nSending and receiving of data with MPI requires the application to issue a receive for every send with a target buffer that can hold at least the amount of data sent by the remote. DXNet relieves the application from this responsibility. Application threads can send messages with variable size and DXNet manages the buffers used for sending and receiving. The application does not have to issue any receive operations or actively wait for data to arrive. Incoming messages are dispatched to pre-registered callback handlers by dedicated handler threads of DXNet.\n\nDXNet supports transparent serialization and de-serialization of complex (even nested) data types (Java objects) for messages. MPI primitives for sending and receiving data require the application to use one of the supported data types and do not offer serialization for more complex data types such as objects. However, the MPI implementation can benefit from the lack of serialization by avoiding any copying of data entirely. Due to the nature of serialization, DXNet has to create a (serialized) \"copy\" of the message when serializing it into the ORB. Analogously, data is copied when a message is created from incoming data during de-serialization.\n\nMessages in DXNet are sent asynchronously while requests offer active waiting or probing for the corresponding response. These communication patterns can also be applied by applications using MPI. The communication primitives currently provided by DXNet are limited to messages and request-response. Nevertheless, using these two primitives, other MPI primitives, such as scatter, gather or reduce, can be implemented by the application if required.\n\nDXNet does not implement multiple protocols for different buffer sizes like MPI with eager and rendezvous. 
A transport for DXNet might implement such a protocol, but our current implementations for Ethernet and InfiniBand do not. The aggregated data available in the ORB is either sent as a whole or sliced and sent as multiple buffers. The transport on the receiving side passes the stream of buffers to DXNet and puts them into the IBQ. Afterwards, the buffers are re-connected to a stream of data by the MCC before extracting and processing the messages.\n\nAn instance using DXNet runs within one process of a Big Data application with one or multiple application threads. Typically, one DXNet instance runs per cluster node. This allows the application to dynamically scale the number of threads up or down within the same DXNet instance as needed. Furthermore, fast communication between multiple threads within the same process is possible, too.\n\nCommonly, an MPI application runs a single thread per process. Multiple processes are spawned according to the number of cores per node with IPC fully based on MPI. MPI does offer different thread modes, which include issuing MPI calls from different threads in a process. Typically, this mode is used in combination with OpenMP \\cite{openmp}. However, it is not supported by all MPI implementations which also offer InfiniBand support (\\S \\ref{related_work_mpi_impls}). Furthermore, DXNet supports dynamic up- and down-scaling of instances. MPI implementations support up-scaling (for non-singletons), but down-scaling is considered an issue for many implementations. Processes cannot be removed entirely and might cause other processes to get stuck or crash.\n\nConnection management and identifying remote nodes are similar in DXNet and MPI. However, DXNet does not come with deployment tools such as \\textit{mpirun}, which assigns the ids\/ranks to identify the instances. This intentional design decision allows existing applications to integrate DXNet without restrictions to the bootstrapping process of the application. 
Furthermore, DXNet supports dynamically adding and removing instances. With MPI, an application must be created by using the MPI environment. MPI applications must be run using a special coordinator such as \\textit{mpirun}. If executed without a coordinator, an MPI world is limited to the process it is created in, which does not allow communication with any other instances. Separate MPI worlds can be connected but the implementation must support this feature. To our knowledge, there is no implementation (with InfiniBand support) that currently supports this.\n\n\\subsection{MPI Implementations Supporting InfiniBand}\n\\label{related_work_mpi_impls}\nThis section only considers MPI implementations supporting InfiniBand directly. Naturally, IPoIB can be used to run any MPI implementation supporting Ethernet networks over InfiniBand. But, as previously discussed (\\S \\ref{related_work_java_ib}), the network performance is very limited when using IPoIB.\n\n\\textbf{MVAPICH2} is an MPI library \\cite{4343853} supporting various network interconnects, such as Ethernet, iWARP, Omni-Path, RoCE and InfiniBand. MVAPICH2 includes features like RDMA fast path or RDMA operations for small message transfers and is widely used on many clusters around the world. \\textbf{Open MPI} \\cite{openmpi} is an open source implementation of the MPI standard (currently full 3.1 conformance) supporting a variety of interconnects, such as Ethernet using TCP sockets, RoCE, iWARP and InfiniBand. \n\n\\textbf{mpiJava} \\cite{mpijava} implements the MPI standard by a collection of wrapper classes that call native MPI implementations, such as MVAPICH2 or OpenMPI, through JNI. The wrapper-based approach provides efficient communication relying on native libraries. 
However, it is not thread-safe and thus not able to take advantage of multi-core systems using multithreading.\n\n\\textbf{FastMPJ} \\cite{Exposito:2014aa} uses Java Fast Sockets \\cite{Taboada:2008:JFS:1456731.1457122} and ibvdev to provide an MPI implementation for parallel systems using Java. Initially, \\textbf{ibvdev} \\cite{Exposito2012} was implemented as a low-level communication device for \\textbf{MPJ Express} \\cite{MPJExpress}, a Java MPI implementation of the mpiJava 1.2 API specification. ibvdev implements InfiniBand support using the low-level verbs API and can be integrated into any parallel and distributed Java application. FastMPJ optimizes MPJ Express collective primitives and provides efficient non-blocking communication. Currently, FastMPJ supports issuing MPI calls from a single thread only.\n\n\\subsection{Other Middleware}\n\\label{rel_work_other}\n\\textbf{UCX} \\cite{7312665} is a network stack designed for next-generation systems and applications with a highly multi-threaded environment. It provides three independent layers: UCS is a service layer with different cross-platform utilities, such as atomic operations, thread safety, memory management and data structures. The transport layer UCT abstracts different hardware architectures and their low-level APIs, and provides an API to implement communication primitives. UCP implements high-level protocols such as MPI or PGAS programming models by using UCT.\n\nUCX aims to be a common computing platform for multi-threaded applications. However, DXNet does not, and thus does not include its own atomic operations, thread safety or memory management for data structures. Instead, it relies on the multi-threading utilities provided by the Java environment. DXNet does abstract different hardware like UCX, but only network interconnects and not GPUs or other co-processors. Furthermore, DXNet is a simple networking library for Java applications and does not implement MPI or PGAS models. 
Instead, it provides simple asynchronous messaging and synchronous request-response communication only.\n\n\\subsection{Target Applications using InfiniBand}\n\\label{rel_work_big_data}\nProviding high throughput and low latency, InfiniBand is a technology which is widely used in various big-data applications.\n\n\\textbf{Apache Hadoop} \\cite{Islam:2012:HPR:2388996.2389044} is a well-known Java big-data processing framework for large-scale data processing using the MapReduce programming model. It uses the Hadoop Distributed File System for storing and accessing application data, which supports InfiniBand interconnects using RDMA. Also implemented in Java, \\textbf{Apache Spark} is a framework for big-data processing offering the domain-specific language Spark SQL, a stream processing and machine learning extension and the graph processing framework GraphX. It supports InfiniBand hardware using an additional RDMA plugin \\cite{sparkrdma}.\n\nNumerous key-value storages for big-data applications have been proposed that use InfiniBand and RDMA to provide low latency data access for highly interactive applications.\n\n\\textbf{RAMCloud} \\cite{Ousterhout:2015:RSS:2818727.2806887} is a distributed key-value storage optimized for low latency data access using InfiniBand with messaging verbs. Multiple transports are implemented for network communication, e.g. using reliable and unreliable connections with InfiniBand and Ethernet with unreliable connections. \\textbf{FaRM} \\cite{179767} implements a key-value and graph storage using a shared memory architecture with RDMA. It performs well with a throughput of 167 million key-value lookups and 31 \\textmu s latency using 20 machines. \\textbf{Pilaf} \\cite{Mitchell:2013:UOR:2535461.2535475} also implements a key-value storage using RDMA for get operations and messaging verbs for put operations. \\textbf{MICA} \\cite{179747} implements a key-value storage with a focus on NUMA architectures. 
It maps each CPU core to a partition of data and communicates using a request-response approach using unreliable connections. \\textbf{HERD} \\cite{Kalia:2014:URE:2619239.2626299} borrows the design of MICA and implements networking using RDMA writes for the request to the server and messaging verbs for the response back to the client.\n\n\n\\section{InfiniBand and ibverbs Basics}\n\\label{ib_basics}\nThis section covers the most important aspects of the InfiniBand hardware and the native ibverbs library which are relevant for this report. Abbreviations introduced here (most of them commonly used in the InfiniBand context) are used throughout the report from this point on.\n\nThe \\textbf{host channel adapter (HCA)} connected to the PCI bus of the host system is the network device for communicating with other nodes. The offloading engine of the HCA processes outgoing and incoming data asynchronously and is connected to other nodes using copper or optical cables via one or multiple switches. The \\textbf{ibverbs} API provides the interface to communicate with the HCA either by exchanging data using Remote Direct Memory Access (RDMA) or messaging verbs. \n\nA \\textbf{queue pair (QP)} identifies a physical connection to a remote node when using \\textbf{reliable connected (RC)} communication. Using non connected \\textbf{unreliable datagram (UD)} communication, a single QP is sufficient to send data to multiple remotes. A QP consists of one \\textbf{send queue (SQ)} and one \\textbf{receive queue (RQ)}. On RC communication, a QP's SQ and RQ are always cross connected with a target's QP, e.g. node 0 SQ connects to node 1 RQ and node 0 RQ to node 1 SQ.\n\nIf an application wants to send data, it posts a \\textbf{work request (WR)} containing a pointer to the buffer to send and the length to the SQ. A corresponding WR must be posted on the RQ of the connected QP on the target node to receive the data. 
This WR also contains a pointer to a buffer and its size to receive any incoming data to.\n\nOnce the data is sent, a \\textbf{work completion (WC)} is generated and added to a \\textbf{completion queue (CQ)} associated with the SQ. A WC is also generated on the CQ associated with the remote's RQ receiving the data, once the data has arrived. The WC of the send task tells the application that the data was successfully sent to the remote (or provides error information otherwise). On the remote receiving the data, the WC indicates that the buffer attached to the previously posted WR is now filled with the remote's data.\n\nWhen serving multiple connections, not every SQ and RQ needs a dedicated CQ. A single CQ can be used as a \\textbf{shared completion queue (SCQ)} with multiple SQs or RQs. Furthermore, when receiving data from multiple sources, instead of managing many RQs to provide buffers for incoming data, a \\textbf{shared receive queue (SRQ)} can be used on multiple QPs instead of single RQs.\n\nWhen attaching a buffer to a WR, it is attached as a \\textbf{scatter gather element (SGE)} of a \\textbf{scatter gather list (SGL)}. For sending, the SGL allows the offloading engine to gather the data from many scattered buffers and send it as one WR. For receiving, the received data is scattered to one or multiple buffers by the offloading engine.\n\n\\section{Low Latency Data Exchange Between Java and C}\n\\label{java_and_native}\nIn this section, we describe our experiences with and best practices for the Java Native Interface (JNI) to avoid performance penalties for latency-sensitive applications. These are applied to various implementation aspects of the IB transport which are further explained in their dedicated sections.\n\nUsing JNI is mandatory if the Java space has to interface with native code, e.g. for IO operations or when using native libraries. 
As we decided to use the low-level ibverbs library to benefit from full control, high flexibility and low latency (\\S \\ref{related_work_java_ib}), we had to ensure that interfacing with native code from Java does not introduce too much overhead compared to the already existing and evaluated solutions.\n\nThe Java Native Interface (JNI) allows Java programmers to call native code from C\/C++ libraries. It is a well-known method to interface with native libraries that are not available in Java or to access IO using system calls or other native libraries. When calling code of a native library, the library has to expose and implement a predefined interface, which allows the JVM to connect the native functions to natively declared Java methods in a Java class. With every call from Java to native space and vice versa, the JVM environment has to execute a context switch. This involves tasks related to thread and cache management, adding latency to every native call and increasing its duration, which is critical considering the low latencies of IB.\n\n\\textbf{Exchanging data with a native library without adding considerable overhead is challenging}. For single primitive values, passing parameters to functions is convenient and does not add any considerable overhead. However, access to Java classes or arrays from native space requires synchronization with the JVM (and its garbage collector), which is very expensive and must be avoided. Alternatively, one can use ByteBuffers allocated as DirectByteBuffers, which places the memory in native memory. Java can access the memory through the ByteBuffer, and the native library can get the native address and size of the buffer with the functions \\texttt{GetDirectBufferAddress} and \\texttt{GetDirectBufferCapacity}. 
However, these two calls increase the latency by tens to even hundreds of microseconds (with high variation).\n\nThis problem can be solved by \\textbf{allocating a native buffer in the native space, passing its address and size} to the Java space and \\textbf{accessing it using the Unsafe API}, or wrapping it as a newly allocated (Direct) ByteBuffer. The latter requires reflection to access the constructor of the DirectByteBuffer and set the address and size fields. We decided to use the Unsafe API because we map native structs and do not require any of the additional features the ByteBuffer provides. The native address is cached, which allows fast exchange of data from Java to native and vice versa. To improve convenience when accessing fields of a data structure, a helper class with getter and setter wrapper methods is created to access the fields of the native struct.\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.3in]{jni_merged.pdf}\n\t\\caption{Microbenchmarks to evaluate JNI call overhead and data exchange overhead using different types of memory access}\n\t\\label{jni_overhead}\n\\end{figure}\n\nWe evaluated different means of passing data from Java to native and vice versa as well as the function\/method call overhead. Figure \\ref{jni_overhead} shows the results of the microbenchmarks used to evaluate JNI call overhead as well as overhead of different memory access methods. The results displayed are the averages of three runs of each benchmark executing the operation 100,000,000 times. A warm-up of 1,000 operations precedes each benchmark run. For JNI context switching, we measured the latency introduced by Java to native (jtn), native to Java (ntj), native to Java with exception checking (ntjexc) and native to Java with thread detaching (ntjdet). 
For exchanging data between Java and native, we measured the latency introduced by accessing a 64-byte buffer in both spaces for a primitive Java byte array (ba), Java DirectByteBuffer (dbb) and Unsafe (u). The benchmarks were executed on a machine with an Intel Core i7-5820K CPU and a Java 1.8 runtime.\n\nThe results show that the average cost of a single context switch is negligible, with an average switching time of only up to 0.1 \\textmu s. We exchange data using primitive function arguments only. Data structures are mapped and accessed as C-structs in the native space. In Java, we access the native C-structs using a helper class which utilizes the Unsafe library \\cite{javaunsafe}, as this is the fastest method in both spaces.\n\nThese results influenced the important design decision to \\textbf{run native threads, attached once as daemon threads to the JVM}, which call to Java instead of Java threads calling native methods (\\S \\ref{send_thread}, \\S \\ref{receive_thread}). Furthermore, we avoid using any of the JNI-provided helper functions where possible \\cite{Liang:1999:JNI:520155}. For example: attaching a thread to the JVM involves expensive operations like creating a new Java thread object and various state changes to the JVM environment. Avoiding them on every context switch is crucial to latency and performance on every call.\n\nLastly, we minimized the number of calls to the Java space by combining multiple tasks into a single cross-space call instead of issuing multiple calls. For inter-space communication, we rely heavily on communication via buffers mapped to structs in native space and wrapper classes in Java (see above). This is highly application dependent and not always possible. But if applicable, this can improve the overall performance.\n\nWe applied this technique of combining multiple tasks into a single cross-space call to sending and receiving of data to minimize latency and context switching overhead. 
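The Unsafe-based helper-class pattern described above can be sketched as follows. The struct layout and all names are hypothetical illustrations, not Ibdxnet's actual data structures; for self-containment, the example allocates the native memory itself instead of receiving a cached address from the native library:

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

// Sketch of a helper class for accessing a native C-struct from Java via
// sun.misc.Unsafe. Assumed (hypothetical) native layout:
//   struct Ctx { uint32_t length; /* 4 bytes padding */ uint64_t bufferAddr; };
// The native address is obtained once (in Ibdxnet, passed up via JNI),
// cached, and all field accesses become plain offset reads/writes.
public class NativeStructAccessor {
    private static final Unsafe UNSAFE;
    static {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            UNSAFE = (Unsafe) f.get(null);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Byte offsets of the struct fields; these must match the native layout
    private static final int OFFSET_LENGTH = 0;
    private static final int OFFSET_BUFFER_ADDR = 8; // 8-byte aligned

    private final long structAddr; // cached native address of the struct

    public NativeStructAccessor(long structAddr) { this.structAddr = structAddr; }

    public int getLength() { return UNSAFE.getInt(structAddr + OFFSET_LENGTH); }
    public void setLength(int len) { UNSAFE.putInt(structAddr + OFFSET_LENGTH, len); }

    public long getBufferAddr() { return UNSAFE.getLong(structAddr + OFFSET_BUFFER_ADDR); }
    public void setBufferAddr(long addr) { UNSAFE.putLong(structAddr + OFFSET_BUFFER_ADDR, addr); }

    // For this self-contained sketch only: allocate/free the 16-byte struct
    public static long allocate() { return UNSAFE.allocateMemory(16); }
    public static void free(long addr) { UNSAFE.freeMemory(addr); }
}
```

Because the address is cached and no JNI helper functions are involved, each field access compiles down to a single memory access, matching the "u" results in the microbenchmark.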
The native send and receive threads implement the most latency critical logic in the native space, which goes beyond simply wrapping ibverbs functions to expose them to Java (\S \ref{send_thread} and \S \ref{receive_thread}). The counterpart to the native logic is implemented in Java (\S \ref{transport_impl_java}). In the end, we are able to reduce sending and receiving of data to a single context switching call.\n\n\section{Overview Ibdxnet and Java InfiniBand Transport}\n\label{overview_infiniband_transport}\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=2.0in]{dxnet_ib_simple.png}\n\t\caption{Layered architecture of DXNet with the IB transport and Ibdxnet (using the msgrc engine). Threads are colored green.}\n\t\label{dxnet_ib_simple}\n\end{figure}\n\nThis section gives a brief top-down introduction of the full transport implementation. Figure \ref{dxnet_ib_simple} depicts the different components and layers involved when using InfiniBand with DXNet. The \textbf{Java InfiniBand transport (IB transport)} (\S \ref{transport_impl_java}) implements DXNet's transport interface (\S \ref{dxnet_transport_interface}) and uses JNI to connect to the native counterpart. \textbf{Ibdxnet} uses the native ibverbs library to access the hardware and provides a separate subsystem for connection management, sending and receiving data. 
Furthermore, it implements a set of functions for the Java Native Interface to connect to the Java implementation.\n\n\section{Ibdxnet: Native InfiniBand Subsystem with Transport Engine}\n\label{ibdxnet_native}\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.0in]{ibdxnet_simple.png}\n\t\caption{Simplified architecture of Ibdxnet with the msgrc transport engine}\n\t\label{ibdxnet_simple}\n\t\vspace{-20pt}\n\end{figure}\n\nThis section elaborates on the implementation details of our native InfiniBand subsystem \textbf{Ibdxnet} which is used by the IB transport implementation in DXNet to utilize InfiniBand hardware. Ibdxnet provides the following key features: a basic foundation with re-usable components for implementations using different means of communication (e.g. messaging verbs, RDMA) or protocols, automatic connection management and transport engines using different communication primitives. Figure \ref{ibdxnet_simple} shows an outline of the different components involved.\n\nIbdxnet provides an \textbf{automatic connection and QP manager} (\S \ref{scalable_connection_management}) which can be used by every transport engine. An interface for the connection manager and a connection object allow implementations for different transport engines. The engine \textbf{msgrc} (see Figure \ref{ibdxnet_simple}) uses the provided connection management and is based on RC messaging verbs. The engine \textbf{msgud} using UD messaging verbs is already implemented and will be discussed and extensively evaluated in a separate publication.\n\nA \textbf{transport engine} implements its own protocol to send\/receive data and exposes a low-level interface. It creates an abstraction layer to hide direct interaction with the ibverbs library. Through the low-level interface, a transport implementation (\S \ref{transport_impl_java}) provides data-to-send and forwards received data for further processing. 
For example: the low-level interface of the msgrc engine does not provide concurrency control or serialization mechanisms for messages. It accepts a stream of data in one or multiple buffers for sending and provides buffers forming a stream of received data (\S \ref{msgrc}). This engine is connected to the Java transport counterpart via JNI and uses the existing infrastructure of DXNet (\S \ref{transport_impl_java}).\n\nFurthermore, we implemented a \textbf{loopback}-like standalone transport for debugging and for measuring the performance of the native engine in isolation. The loopback transport creates a continuous stream of data for sending to one or multiple nodes and throws away any data received. This ensures that sending and receiving introduce no additional overhead and allows measuring the performance of different low-level aspects of our implementation. This was used to determine the maximum possible throughput with Ibdxnet (\S \ref{eval_dxnet_nodes_tp}).\n\nIn the following sections, we explain the implementation details of Ibdxnet's connection manager (\S \ref{scalable_connection_management}) and the messaging engine msgrc (\S \ref{msgrc}). Additionally, we describe best practices for using the ibverbs API and optimizations for optimal hardware utilization. Furthermore, we elaborate on how Ibdxnet connects to the IB transport in Java using JNI and how we implemented low overhead data exchange between Java and native space.\n\n\subsection{Dynamic, Scalable and Concurrent Connection Management}\n\label{scalable_connection_management}\nEfficient connection management for many nodes is a challenging task. For example, hundreds of application threads want to send data to a node but the connection is not yet established. Who creates the connection and synchronizes access for other threads? How to avoid synchronization overhead or blocking of threads that want to get an already established connection? How to manage the lifetime of a connection? 
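One workable shape for answering these questions, mirroring the design this subsection describes (a connection table indexed by NID that is read lock-free by application threads, plus a job queue drained by a single dedicated thread), can be sketched as follows. All class and method names are illustrative assumptions, not Ibdxnet's actual API, and the sketch omits the QP creation and the UDP-based data exchange that the job thread would perform.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Hypothetical sketch: lock-free connection lookup with creation delegated
// to a dedicated job thread, so application threads never block on setup.
public class ConnectionTableSketch {
    public static final class Connection {
        public final int remoteNid;
        public Connection(int remoteNid) { this.remoteNid = remoteNid; }
    }

    // One slot per possible 16-bit NID; reads are wait-free.
    private final AtomicReferenceArray<Connection> table =
            new AtomicReferenceArray<>(65536);
    private final ConcurrentLinkedQueue<Integer> createJobs =
            new ConcurrentLinkedQueue<>();

    // Fast path for send/receive threads: if the connection does not exist,
    // enqueue a creation job and return null; the caller retries once the
    // job thread has established the connection.
    public Connection getOrRequest(int nid) {
        Connection con = table.get(nid);
        if (con == null) {
            createJobs.offer(nid);
        }
        return con;
    }

    // Executed only by the dedicated connection manager thread, which in the
    // real system would also create the QP and exchange its data here.
    public void processOneJob() {
        Integer nid = createJobs.poll();
        if (nid != null && table.get(nid) == null) {
            table.compareAndSet(nid, null, new Connection(nid));
        }
    }
}
```

Funneling all creation through one thread sidesteps the coordination problem entirely: readers never take a lock, and duplicate creation requests for the same NID collapse into a no-op in the job thread.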
\n\nThese challenges are addressed by a \textbf{dedicated connection manager in Ibdxnet}. The connection manager handles all tasks required to establish and manage connections and hides them from the higher level application. For our higher level Java transport (\S \ref{transport_con_man}), complexity and latency are reduced for connection setup by avoiding context switching. \n\nFirst, we explain how nodes are identified, the contents of a connection and how online\/offline nodes are discovered and handled. Next, we describe how existing connections are accessed and non-existing connections are created on the fly during application runtime. We explain the details of how a connection creation job is handled by the internal job manager and how connection data is exchanged with the remote node in order to create a QP. Finally, we briefly describe our previous attempt, which failed to address the above challenges properly.\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.0in]{ibdxnet_conman1.pdf}\n\t\caption{Connection manager: Creating non-existing connections (send thread: node 1 to node 0) and re-using existing connections (recv thread: node 1 to node 5).}\n\t\label{ibdxnet_conman1}\n\end{figure}\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.0in]{ibdxnet_conman2.pdf}\n\t\caption{Connection manager: Automatic connection creation with QP data exchange (node 3 to node 0). The job \textit{CR0} is added to the back of the queue to initiate this process. The dedicated thread processes the queue by removing jobs from the front and processing them according to their type.}\n\t\label{ibdxnet_conman2}\n\end{figure}\n\nA node is identified by \textbf{a unique 16-bit integer nodeID (NID)}. The NID is assigned to a node on start of the connection manager and cannot be changed during runtime. A connection consists of the source NID (the current node) and the destination NID (the target remote node). 
Depending on the transport implementation, an existing connection holds one or multiple ibverbs QPs, buffers and other data necessary to send and receive data using that connection. The connection manager provides a \textbf{connection interface for the transport engines} which allows them to implement their own type of connection. The following example describes a connection with a single QP only.\n\nBefore a connection to a remote node can be established, the remote node must be discovered and known as available. The job type \textbf{node discovery} (further details about the job system follow in the next paragraphs) detects online\/offline nodes using UDP sockets over Ethernet. On startup, a list of node hostnames is provided to the connection manager. The list can be extended by adding\/removing entries during runtime for dynamic scaling. The discovery job tries to contact all non-discovered nodes of that list in regular intervals. Once a node is discovered, it is removed from the list and marked as discovered. A connection can only be established with an already discovered node. If a connection to a node was already created and is lost (e.g. node crash), the NID is added back to the list in order to re-discover the node on the next iteration of the job. Node discovery is mandatory for InfiniBand in order to exchange QP information on connection creation.\n\nFigure \ref{ibdxnet_conman1} shows how existing connections are accessed and new connections are created when two threads, e.g. a send and a receive thread, are accessing the connection manager. The send thread wants to send new data to node 0 and the receive thread has received some data (e.g. from an SRQ) and has to forward it for further processing, which requires information stored in each connection (e.g. a queue for the incoming data). If the connection is already established (the receive thread gets the connection to node 5), a connection handle (\textit{H5}) is returned to the calling thread. 
If no connection has been established so far (the send thread wants to get the connection to node 0), \textbf{a job to create the specific connection} (\textit{CR0} = create to node 0) is added to the internal job queue. The calling thread has to wait until the job is dispatched and the connection is created before being able to send the data.\n\nFigure \ref{ibdxnet_conman2} shows how connection creation is handled by the internal job thread. The job \textit{CR0} (yielded by the send thread from the previous example in Figure \ref{ibdxnet_conman1}) is pushed to the back of the job queue. The job queue might contain jobs which affect different connections, i.e. there is no dedicated per-connection queue. \textbf{The dedicated connection manager thread} processes the queue by removing a job from the front and dispatching it by type. There are three types of jobs: create a connection to a node with a given NID, discover other connection managers and close an existing connection to a node.\n\nTo create a new connection with a remote node, the current node has to create an ibverbs QP with an SQ and RQ. Both queues are cross-connected to a remote QP (send with recv, recv with send) which requires data exchange using another communication channel (sockets over Ethernet). For the job \textit{CR0}, the thread creates a new QP on the current node (3) and exchanges its QP data with the remote it wants to connect to (0) using UDP sockets. The remote (0) also creates a QP and uses the received connection information (of 3). It replies with its own QP data (0 to 3) to complete QP creation. The newly established connection is added to the connection table and is now accessible (by the send and receive thread).\n\nFinally, we briefly describe the lessons learned from our first attempt at an automatic connection manager, which relied on active connection creation: the first thread calling the connection manager to acquire a connection creates it on the fly if it does not exist. 
The calling thread executes the connection exchange, waits for the remote data and finishes connection creation. This requires coordination of all threads accessing the connection manager, either to create a new connection or to get an existing one. It introduced a very complex architecture with high synchronization overhead and latency, especially when many threads are concurrently accessing the connection manager. Furthermore, it was error-prone and difficult to debug. We encountered severe performance issues when creating connections with one hundred nodes in a very short time frame (e.g. all-to-all communication). This resulted in connection creation times of up to half a minute. Even with a small setup of 4 to 8 nodes, creating a connection could take up to a few seconds if multiple threads tried to create the same or different connections simultaneously.\n\n\subsection{msgrc: Transport Engine for Messaging using RC QPs}\n\label{msgrc}\nThis section describes the \textbf{msgrc} transport engine. It uses reliable QPs to implement messaging using a dedicated send and receive thread. The engine's interface allows a transport to provide a stream of data (to send) in the form of variable-sized buffers and provides a stream of (received) data to a registered callback handler. \n\nThis interface is rather low-level and the backend does not implement any means of serialization\/deserialization for sending\/receiving of complex data structures. In combination with DXNet (\S \ref{dxnet}), the logic for these tasks resides in the Java space with DXNet and is shared with other transports such as the NIO Ethernet transport \cite{dxnetnio}. However, there are no restrictions to implement these higher level components for the msgrc engine natively, if required. 
Further details on how the msgrc engine is connected with the Java transport counterpart are given in Section \ref{transport_impl_java}.\n\nThe following subsections explain the general architecture and interface of the transport, sending and receiving of data using dedicated threads and how various features of InfiniBand were used for optimal hardware utilization.\n\n\subsubsection{Architecture}\n\label{engine_architecture}\nThis section explains the basic architecture as well as the low-level interface of the engine. Figure \ref{ibdxnet_simple} includes the msgrc transport and can be referred to for an abstract representation of the most important components. The engine relies on our dedicated connection manager (\S \ref{scalable_connection_management}) for connection handling. We decided to use one dedicated thread for sending (\S \ref{send_thread}) and one for receiving (\S \ref{receive_thread}) to benefit from the following advantages: a clear separation of responsibilities resulting in a less complex architecture, no scheduling of send\/receive jobs as required when using a single thread for both, and higher concurrency because we can run both threads on different CPU cores concurrently. The architecture allows us to create decoupled pipeline stages using lock-free queues and ring buffers. 
Thereby, we avoid complex and slow synchronization between the two threads as well as with the hundreds of threads concurrently accessing shared resources.\n\n\subsubsection{Engine interface}\n\label{engine_interface}\n\n\begin{lstlisting}[caption={Structures and callback of the msgrc engine's send interface},label=send_interface, xleftmargin=4.0ex]\nstruct NextWorkPackage {\n uint32_t posBackRel;\n uint32_t posFrontRel;\n uint8_t flowControlData;\n uint16_t nodeId;\n};\n\nstruct PrevWorkPackageResults {\n uint16_t nodeId;\n uint32_t numBytesPosted;\n uint32_t numBytesNotPosted;\n uint8_t fcDataPosted;\n uint8_t fcDataNotPosted;\n};\n\nstruct CompletedWorkList {\n uint16_t numNodes;\n uint32_t bytesWritten[NODE_ID_MAX_NUM_NODES];\n uint8_t fcDataWritten[NODE_ID_MAX_NUM_NODES];\n uint16_t nodeIds[];\n};\n\nNextWorkPackage* GetNextDataToSend(PrevWorkPackageResults* prevResults, CompletedWorkList* completionList);\n\end{lstlisting}\n\nThe low-level interface allows the target transport fine-grained control over the engine. The interface for sending data is depicted in Listing \ref{send_interface} and the one for receiving in Listing \ref{recv_interface}. Both interfaces create an abstraction hiding connection and QP management as well as how the hardware is driven with the ibverbs library. For sending data, the interface provides the callback \textit{GetNextDataToSend}. This function is called by the send thread to pull new data to send from the transport (e.g. from the ORB, see \ref{transport_send}). When called, an instance of each of the two structures \textit{PrevWorkPackageResults} and \textit{CompletedWorkList} is passed to the implementation of the callback as a parameter: the first contains information about the previous call to the function and how much data was actually sent. If the SQ is full, no further data can be sent. 
Instead of introducing an additional callback, we combine getting the next data with returning information about the previous send call to reduce call overhead (important for JNI access). The second parameter contains data about completed work requests, i.e. data sent for the transport. This must be used in the transport to mark data processed (e.g. moving the pointers of the ORB).\n\n\begin{lstlisting}[caption={Structure and callback of the msgrc engine's receive interface},label=recv_interface, xleftmargin=4.0ex]\nstruct ReceivedPackage {\n uint32_t count;\n struct Entry {\n uint16_t sourceNodeId;\n uint8_t fcData;\n IbMemReg* data;\n void* dataRaw;\n uint32_t dataLength;\n } m_entries[];\n};\n\nstruct IncomingRingBuffer {\n uint32_t m_usedEntries;\n uint32_t m_front;\n uint32_t m_back;\n uint32_t m_size;\n\n struct Entry {\n con::NodeId m_sourceNodeId;\n uint8_t m_fcData;\n uint32_t m_dataLength;\n core::IbMemReg* m_data;\n void* m_dataRaw;\n } m_entries[];\n};\n\nuint32_t Received(IncomingRingBuffer* ringBuffer);\n\nvoid ReturnBuffer(IbMemReg* buffer);\n\end{lstlisting}\n\nIf data is received, the receive thread calls the callback function \textit{Received} with an instance of the \textit{IncomingRingBuffer} structure as its parameter. This parameter holds a list of received buffers with their source NID. The transport can iterate this list and forward the buffers for further processing such as de-serialization. The transport has to return the number of elements it processed and, thus, is able to control the amount of buffers it consumes. 
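The contract of the \textit{Received} callback can be illustrated with a small Java sketch. The names (\textit{RecvEntry}, \textit{ReceiveHandler}, \textit{dispatch}) are hypothetical stand-ins for the native structures above, and a plain byte array stands in for the native receive buffer; the point is only the consumed-count protocol.

```java
import java.util.Queue;

// Sketch of the receive dispatch contract: the engine offers all pending
// entries of the incoming ring buffer, the handler returns how many it
// consumed, and only those are removed from the buffer.
public class ReceiveDispatchSketch {
    public static final class RecvEntry {
        public final short sourceNodeId;
        public final byte fcData;
        public final byte[] data; // stands in for the native receive buffer
        public RecvEntry(short nid, byte fc, byte[] data) {
            this.sourceNodeId = nid; this.fcData = fc; this.data = data;
        }
    }

    public interface ReceiveHandler {
        // Returns the number of entries consumed; may be 0 if the
        // transport currently cannot process more buffers.
        int received(RecvEntry[] entries, int count);
    }

    // Engine side: offer all pending entries, drop only what was consumed.
    public static int dispatch(Queue<RecvEntry> ringBuffer, ReceiveHandler handler) {
        RecvEntry[] batch = ringBuffer.toArray(new RecvEntry[0]);
        int consumed = handler.received(batch, batch.length);
        for (int i = 0; i < consumed; i++) {
            ringBuffer.poll();
        }
        return consumed;
    }
}
```

Letting the handler consume fewer entries than offered gives the transport natural back-pressure: unconsumed entries simply stay in the ring buffer for the next dispatch.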
Once the received buffers are processed by the transport, they must be returned to the receive buffer pool by calling \textit{ReturnBuffer} to allow re-using them for further receives.\n\n\subsubsection{Sending of Data}\n\label{send_thread}\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.3in]{msgrc_sge.png}\n\t\caption{Example for sending and receiving data using scatter gather elements: Get data (aggregated messages) from ORB, send 1 SGE. Receive data scattered to multiple receive buffers.}\n\t\label{msgrc_sge}\n\end{figure}\n\nThis section explains the data and control flow of the \textbf{dedicated send thread} which \textbf{asynchronously} drives the engine for sending data. Listing \ref{send_thread_code} depicts a simplified version of the contents of its main loop with the relevant aspects for this section. Details of the functions involved in the main flow are explained further below.\n\n\begin{lstlisting}[caption={Send thread main flow (simplified)},label=send_thread_code, xleftmargin=4.0ex]\nworkPackage = GetNextDataToSend(prevWorkResults, completionList);\nReset(prevWorkResults);\nReset(completionList);\n\nif (workPackage != NULL) {\n\tconnection = GetConnection(workPackage.nodeId);\n\tprevWorkResults = SendData(connection, workPackage);\n\tReturnConnection(connection);\n}\n\ncompletionList = PollCompletions();\n\end{lstlisting}\n\nThe loop starts with getting a \textit{workPackage}, the next data to send (line 1), using the engine's low-level interface (\S \ref{engine_interface}). The instance \textit{prevWorkResults} contains information about posted and non-posted data from the previous loop iteration. The instance \textit{completionList} holds data about completed sends. Both instances are reset\/nulled (lines 2-3) for re-use in the current iteration. \n\nIf the \textit{workPackage} is valid (line 5), i.e. 
data to send is available, the \textit{nodeId} from that package is used to get the \textit{connection} to the send target from the connection manager (line 6). The \textit{connection} and \textit{workPackage} are passed to the \textit{SendData} function (line 7). It processes the \textit{workPackage} and returns how much data was processed, i.e. posted to the SQ of the connection, and how much data could not be processed. The latter happens if the SQ is full and must be tracked to not lose any data. Afterwards, the thread returns the \textit{connection} to the connection manager (line 8).\n\nAt the end of a loop iteration, the thread polls the SCQ to remove any available WCs. \textbf{We share the completion queue among all SQs\/connections to avoid iterating over many connections for a task}. The loop iteration ends and the thread starts from the beginning by calling \textit{GetNextDataToSend}, providing the work results of the previous iteration. Data about polled WCs from the SCQ is stored in the \textit{completionList} and forwarded via the interface (to the transport).\n\nIf no data is available (line 5), lines 6-8 are skipped and the thread only executes a completion poll. This is important to ensure that any outstanding WCs are processed and passed to the transport (via the \textit{completionList} when calling \textit{GetNextDataToSend}). Otherwise, if no data is sent for a while, the transport will not receive any information about previously processed data. This leads to false assumptions about the available buffer space for sending data, e.g. 
assuming that data fits into the buffer but actually does not because the processed buffer space is not freed yet.\n\nIn the following paragraphs, we further explain how the functions \textit{SendData} and \textit{PollCompletions} make optimal use of the ibverbs library and how this cooperates with the interleaved control flow of the main thread loop explained above.\n\nThe \textbf{SendData} function is responsible for preparing and posting of FC data and normal data (payload). FC data, which determines the number of flow control windows to confirm, is a small number (< 128) and, thus, does not require a lot of space. We post it as part of the \textbf{immediate data}, which can hold up to 4 bytes of data, with the WR instead of using a separate side channel, e.g. another QP. This avoids the overhead of posting and polling another QP which \textbf{benefits overall performance, especially with many simultaneous connections}. With FC data using 1 byte of the immediate data field, we use a further 2 bytes to include the NID of the source node. This allows us to identify the source of the incoming WC on the remote. Otherwise, identifying the source would be very inconvenient: the only information provided with the incoming WC is the sender's unique physical QP id. In our case, this id must be mapped to the corresponding NID of the sender. However, this introduces an indirection every time a packet arrives which hurts performance.\n\nFor sending normal data (payload), the provided \textit{workPackage} holds two pointers, front and back, which enclose a memory area of data to send. This memory area belongs to a buffer (e.g. the ORB) which was registered with the protection domain on start to allow access by the HCA. Figure \ref{msgrc_sge} depicts an example with three (aggregated) ready-to-send messages in the ORB. We create a WR for the data to send and provide a single \textbf{SGE which takes the pointers of the enclosed memory area}. 
The HCA will directly read from that area without further copying of the data (zero copy). For buffer wrap-arounds, two SGEs are created and attached to one WR: one SGE for the data from the front pointer to the end of the buffer, another SGE for the data from the start of the buffer to the back pointer. If the size of the area to send (sum of all SGEs) exceeds the maximum configurable receive size, the data to send must be sliced into multiple WRs. \textbf{Multiple WRs are chained into a linked list to minimize call overhead} when posting them to the SQ using \textit{ibv\_post\_send}. This greatly increases performance compared to posting multiple standalone WRs with single calls.\n\nThe number of SGEs of a WR can be 0, if no normal data is available to send but FC data is available. To send FC data only, we write it to the immediate data field of a WR along with our source NID and post the WR without any SGEs attached which results in a zero-length data WR. \n\nThe \textbf{PollCompletions} function calls \textit{ibv\_poll\_cq}, \textbf{once, to poll for any completions available} on the SCQ. An SCQ is used instead of per-connection CQs to avoid iterating the CQs of all connections which impacts performance. The send thread keeps track of the number of posted WRs and, thus, knows how many WCs are outstanding and expected to arrive on the SCQ. If none are expected, polling is skipped. \textit{ibv\_poll\_cq} is called only once per \textit{PollCompletions} call, and every call tries to poll WCs in batches to keep the call overhead minimal.\n\nExperiments have shown that most calls to \textit{ibv\_poll\_cq}, even on high loads, will return empty, i.e. no WRs have completed. Thus, polling the SCQ until at least one completion is received is the wrong approach and greatly impacts overall performance. 
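Two mechanical details from the paragraphs above can be sketched compactly: packing FC data and the source NID into the 4-byte immediate data field, and splitting the ORB area enclosed by the front and back pointers into segments that honor buffer wrap-around and the maximum receive size. The exact bit layout of the immediate data is an assumption (the text fixes only the byte budget: 1 byte FC, 2 bytes NID), and the sketch flattens the WR/SGE distinction into a plain segment list for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch, not Ibdxnet's actual code.
public class SendPrepSketch {
    // Assumed immediate-data layout: bits 0-15 source NID, bits 16-23 FC data.
    public static int packImmediate(short sourceNid, byte fcData) {
        return (sourceNid & 0xFFFF) | ((fcData & 0xFF) << 16);
    }
    public static short immNid(int imm) { return (short) (imm & 0xFFFF); }
    public static byte immFc(int imm) { return (byte) ((imm >> 16) & 0xFF); }

    public static final class Segment {
        public final long offset;
        public final int length;
        Segment(long offset, int length) { this.offset = offset; this.length = length; }
    }

    // Splits the ring-buffer area [front, back) of a buffer of bufferSize
    // bytes into segments of at most maxRecvSize bytes, producing two
    // segments where the area wraps around the end of the buffer.
    public static List<Segment> segments(long front, long back, long bufferSize, int maxRecvSize) {
        List<Segment> out = new ArrayList<>();
        long length = back >= front ? back - front : bufferSize - front + back;
        long pos = front;
        while (length > 0) {
            long untilWrap = bufferSize - pos; // cannot cross the buffer end
            int len = (int) Math.min(Math.min(length, untilWrap), maxRecvSize);
            out.add(new Segment(pos, len));
            pos = (pos + len) % bufferSize;
            length -= len;
        }
        return out;
    }
}
```

In the real engine, segments sharing one WR would become its SGE list and segments beyond the maximum receive size would start a new WR in the chained list.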
If the SQ of another connection is not full and there is data available to send, this method wastes CPU resources on busy polling instead of processing further data to send. The performance impact (resulting in low throughput) increases with the number of simultaneous connections being served. Furthermore, this increases the chance of SQs running empty because time is wasted on waiting for completions instead of keeping all SQs filled. \textbf{Full SQs ensure that the HCA is kept busy which is the key to optimal performance}.\n\n\subsubsection{Receiving of Data}\n\label{receive_thread}\n\n\begin{lstlisting}[caption={Receive thread main flow (simplified)},label=recv_thread_code, xleftmargin=4.0ex]\nworkCompletions = PollCompletions();\n\nif (recvQueuePending < ibqSize) {\n Refill();\n}\n\nif (workCompletions > 0) {\n\tProcessCompletions(workCompletions);\n}\n\nif (!IncomingRingBufferIsEmpty()) {\n DispatchReceived();\n}\n\end{lstlisting}\n\nAnalogous to Section \ref{send_thread}, this section explains the data and control flow of the \textbf{dedicated receive thread} which \textbf{asynchronously} drives the engine for receiving data. Listing \ref{recv_thread_code} depicts a simplified version of its main loop with the relevant aspects for this section. Details of the functions involved in the main flow are explained further below.\n\nData is received using an SRQ and SCQ instead of multiple receive and completion queues. This avoids iterating over all open connections and checking for data availability which introduces overhead with an increasing number of simultaneous connections. Equally sized buffers for receiving data (configurable size and amount) are pooled and returned for re-use by the transport, once processed (\S \ref{engine_interface}).\n\nThe loop starts by calling \textit{PollCompletions} (line 1) to poll the SCQ for WCs. Before processing the WCs returned, the SRQ is refilled by calling \textit{Refill} (line 4), if it is not completely filled yet. 
Next, if any WCs were polled previously, they are processed by calling \textbf{ProcessCompletions} (line 8). This step pushes them to the \textbf{Incoming Ring Buffer (IRB)}, a temporary ring buffer, before dispatching them. Finally, if the IRB is not empty (line 11), the thread tries to forward the contents of the IRB by calling \textit{DispatchReceived} via the interface to the transport (\S \ref{engine_interface}).\n\nThe following paragraphs further elaborate on how \textit{PollCompletions}, \textit{Refill}, \textit{ProcessCompletions} and \textit{DispatchReceived} make optimal use of the ibverbs library and how this cooperates with the interleaved control flow of the main thread loop explained above.\n\nThe \textbf{PollCompletions} function is very similar to the one explained in Section \ref{send_thread}. WCs are polled in batches of at most the currently available IRB space and buffered before being processed.\n\nThe \textbf{Refill} function adds new receive WRs to the SRQ, if the SRQ is not completely filled and receive buffers from the receive buffer pool are available. Every WR consists of a configurable number of SGEs which make up the maximum receive size. This is also the limiting size the send thread can post with a single WR (sum of the sizes of the SGE list). Using this method, the receive thread does not have to take care of any software slicing of received data because the HCA scatters one big chunk of sent data transparently to multiple (smaller) receive buffers on the receiver side. Finally, \textit{Refill} chains the WRs into a linked list which is posted with a single call to \textit{ibv\_post\_srq\_recv} for minimal overhead.\n\nIf WCs are buffered from the previous call to \textit{PollCompletions}, the \textbf{ProcessCompletions} function iterates this list of WCs. For each WC of the list, it gets the source NID and FC data from the immediate data field. 
If the recv length of this WC is non-zero, the attached SGEs contain the received data scattered to the receive buffers of the SGE list.\n\nAs the receive thread does not know or have any means of determining the size of the next incoming data, the challenge is optimal receive buffer usage with minimal internal fragmentation. Here, fragmentation describes the amount of receive buffers provided with a WR as SGEs in relation to the amount of received data written to that block of buffers. The less data written to the buffers, the higher the fragmentation. In the example shown in Figure \ref{msgrc_sge}, the three aggregated and serialized messages are received in five buffers but the last buffer is not completely used.\n\nThis fragmentation cannot be avoided but must be handled to prevent negative effects like empty buffer pools or low per-buffer utilization. Receive buffers\/SGEs of a WR that do not contain any received data, because the amount of received data is less than the total size of the list of buffers of the SGE list, are pushed back to the buffer pool. All receive buffers of the SGE list that contain valid received data are pushed to the IRB (in the order they were received). \n\nDepending on the target application, the fragmentation degree can be lowered if one configures the receive buffer and pool sizes accordingly. Applications typically sending small messages perform well with small receive buffer sizes. However, throughput might decrease slightly for applications mainly sending big messages on small receive buffer sizes, requiring more WRs per data send (data sliced into multiple WRs).\n\nIf the IRB contains any elements, the \textbf{DispatchReceived} function tries to forward them to the transport via the \textit{Received} callback (\S \ref{engine_interface}). The callback returns the number of elements it consumed from the IRB and, thus, is allowed to consume none or up to what's available. 
The consumed buffers are returned asynchronously to the receive buffer pool by the transport, once it has finished processing them.\n\n\subsubsection{Load Adaptive Thread Parking}\n\label{thread_parking}\n\nThe send and receive threads must be kept busy running their loops to send and receive data as fast as possible to ensure low latency. However, pure busy polling without any sleeping or yielding introduces high CPU load and permanently occupies two cores of the CPU. This is unnecessary during periods when the network is not used frequently. We do not want the send and receive threads to waste CPU resources and, thereby, decrease the overall node performance. Experiments have shown that simply adding sleep or yield operations significantly impacts network latency and throughput and introduces high fluctuations \cite{dxnet}.\n\nTo solve this, we used a simple but efficient wait pattern we call \textit{load adaptive thread parking}. After a defined amount of time (e.g. 100 ms) of polling without data being available, the thread enters a yield phase and calls yield on every loop iteration if no data is available. After another time frame has passed (e.g. 1 sec), the thread enters a parking phase calling sleep\/park with a minimum value of 1 ns on every loop iteration, reducing CPU load significantly. The lowest value possible (1 ns) ensures that the scheduler of the operating system puts the thread to sleep for the shortest period of time possible. Once data is available, the current phase is interrupted and the timer is reset. This ensures busy looping for the next iterations, keeping latency low for successive messages and on high loads. 
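The load adaptive thread parking pattern just described can be sketched as follows. This is a minimal sketch, not Ibdxnet's actual code: the class and method names are illustrative, the thresholds default to the values mentioned in the text (100 ms until yielding, 1 s until parking), and the time source is injected so the phase transitions can be exercised deterministically.

```java
import java.util.concurrent.locks.LockSupport;
import java.util.function.LongSupplier;

// Sketch of load adaptive thread parking: busy poll first, then yield,
// then park for the shortest possible time; any work resets the backoff.
public class AdaptiveParker {
    public static final int PHASE_BUSY = 0, PHASE_YIELD = 1, PHASE_PARK = 2;

    private final long yieldAfterNs; // e.g. 100 ms in nanoseconds
    private final long parkAfterNs;  // e.g. 1 s in nanoseconds
    private final LongSupplier clock; // injected time source (nanoseconds)
    private long idleSinceNs;
    private boolean idle;

    public AdaptiveParker(long yieldAfterNs, long parkAfterNs, LongSupplier clock) {
        this.yieldAfterNs = yieldAfterNs;
        this.parkAfterNs = parkAfterNs;
        this.clock = clock;
    }

    // Called once per loop iteration; returns the phase that was applied.
    public int idleOrReset(boolean hadWork) {
        long now = clock.getAsLong();
        if (hadWork) {              // data available: reset, keep busy polling
            idle = false;
            return PHASE_BUSY;
        }
        if (!idle) {                // first idle iteration: start the timer
            idle = true;
            idleSinceNs = now;
        }
        long idleFor = now - idleSinceNs;
        if (idleFor >= parkAfterNs) {
            LockSupport.parkNanos(1); // shortest sleep the scheduler allows
            return PHASE_PARK;
        }
        if (idleFor >= yieldAfterNs) {
            Thread.yield();
            return PHASE_YIELD;
        }
        return PHASE_BUSY;          // still within the busy-poll window
    }
}
```

In the send and receive loops, `idleOrReset` would be called at the end of each iteration with a flag indicating whether the iteration found any data, so a single message immediately snaps the thread back to busy polling.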
For further details, including evaluation results, refer to our DXNet publication \cite{dxnet}.\n\n\section{IB Transport Implementation in DXNet (Java)}\n\label{transport_impl_java}\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=2.8in]{ibdxnet_overview.pdf}\n\t\caption{Components of Ibdxnet, IB transport and DXNet involved in data and control flow (simplified).}\n\t\label{transport_java}\n\end{figure}\n\nThis section describes the transport implementation for DXNet in Java, which utilizes the low-level transport engines, e.g. msgrc (\S \ref{msgrc}), provided by Ibdxnet (\S \ref{ibdxnet_native}). We describe the native interface which implements the low-level interface exposed by the engine (\S \ref{engine_interface}) and how it is used in the DXNet IB transport for higher level connection management (\S \ref{transport_con_man}), sending serialized data from the ORB (\S \ref{transport_send}) and handling incoming receive buffers from remote nodes (\S \ref{transport_recv}).\n\nFigure \ref{transport_java} depicts the involved components with the main aspects of their data and control flow, which are referred to in the following subsections.\n\nIf an application wants to send one or multiple messages, it calls DXNet which serializes them into the ORB and signals the WriteInterestManager (WIM) about available data (\S \ref{dxnet_send}). The native send thread periodically checks the WIM for data to send and, if available, gets it from the ORB. Depending on its size, the data to send might be sliced into multiple elements which are posted to the SQ as one or multiple work requests (\S \ref{send_thread}).\n\nReceived data on the recv queue is written to one or multiple buffers (depending on the amount of data) from a native buffer pool (\S \ref{receive_thread}). Without further processing, the buffers are forwarded to the Java space and pushed to the IncomingBufferQueue (IBQ). 
DXNet's de-serialization processes the buffers in order and creates messages (Java objects) which are dispatched to pre-registered callbacks using dedicated message handler threads (\S \ref{dxnet_receive}).\n\n\subsection{Connection Handling}\n\label{transport_con_man}\nDXNet provides an interface for creating transport-specific connection types that new transports have to implement. The DXNet core, which is shared across all transport implementations, manages the connections for the target application by automatically creating new connections on demand or closing connections if a configurable threshold is exceeded (\S \ref{dxnet_con_man}). \n\nFor the IB transport implementation, the derived connection does not have to store further data or implement functionality. This is already stored and handled by the connection manager of Ibdxnet, which reduces overall architectural complexity by avoiding functionality split between Java and native space. Furthermore, it avoids context switching between Java and native code. \n\nOnly the NID of either the target node to send to or the source node of the received data is exchanged between the Java and native space and vice versa. Thus, \textbf{Connection setup} in the transport implementation in Java is limited to creating the Java connection object for DXNet's connection manager. \textbf{Connection close and cleanup} is similar, with an additional callback to the native library to signal to Ibdxnet's connection management that a connection was closed.\n\n\subsection{Dispatch of Ready-to-send Data}\n\label{transport_send}\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.0in]{ibdxnet_wim.pdf}\n\t\caption{Internals of the Write Interest Manager (WIM).}\n\t\label{ibdxnet_wim}\n\end{figure}\n\nThe engine msgrc runs dedicated threads for sending data. 
The send thread pulls new data from the transport via the \textit{GetNextDataToSend} function of the low-level interface (\S \ref{engine_interface}, \S \ref{send_thread}). In order to make this and other callbacks (for connection management and receiving data) available to the IB transport, a lightweight JNI binding with the aspects explained in Section \ref{java_and_native} was created. The IB transport implements the \textit{GetNextDataToSend} function exposed by the JNI binding. To get new data to send, the send thread calls the JNI binding which is implemented in the IB transport in Java.\n\nNext, we elaborate on the implementation of \textit{GetNextDataToSend} in the IB transport, how the send thread gets data to send and how the different states of the data (posted, not posted, send completed) are handled in combination with the existing ORB data structure.\n\nApplication threads using DXNet and sending messages are concurrently serializing them into the ORB (\S \ref{dxnet_send}). Once serialization completes, the thread signals the transport that there is ready-to-send (RTS) data in the ORB. For the IB transport, this signal \textbf{adds a write interest to the dedicated Write Interest Manager (WIM)}. The WIM manages interest tokens using a lock-free list (based on a ring buffer) and a per connection atomic counter for both RTS normal data from the ORB and FC data. Each type has a separate atomic counter but, if not explicitly stated, we refer to them as one for ease of comprehension.\n\nThe list contains the nodeIDs of the connections that have RTS data in the order they were added. The atomic counter is used to keep track of the number of interests signaled, i.e. the number of times the callback was triggered for the selected NID. \n\nFigure \ref{ibdxnet_wim} depicts this situation with two threads (T1 and T2) which finished serializing data to the ORBs of two independent connections (3 and 2). 
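The deduplicating interest mechanism can be sketched as follows. This is a simplified, hypothetical model of the WIM: the real implementation uses a lock-free ring buffer and separate counters for normal data and FC interests, which are merged here for brevity:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the Write Interest Manager: per-connection atomic counters plus a
// queue of NIDs. A NID is enqueued only when its counter goes from 0 to 1,
// so each connection appears at most once in the queue.
public class WimSketch {
    private final AtomicInteger[] interests;
    private final Queue<Short> readyNids = new ConcurrentLinkedQueue<>();

    public WimSketch(int maxNumConnections) {
        interests = new AtomicInteger[maxNumConnections];
        for (int i = 0; i < maxNumConnections; i++) {
            interests[i] = new AtomicInteger(0);
        }
    }

    // Called by application threads after serializing a message to the ORB.
    public void addWriteInterest(short nid) {
        if (interests[nid].getAndIncrement() == 0) {
            readyNids.add(nid); // first interest for this connection: enqueue once
        }
    }

    // Called by the send thread: take the next connection in round robin order
    // and consume all its pending interests. Returns -1 if nothing is ready;
    // a return of 0 means the interests were already consumed and can be skipped.
    public int getNextInterests(short[] outNid) {
        Short nid = readyNids.poll();
        if (nid == null) return -1;
        outNid[0] = nid;
        return interests[nid].getAndSet(0); // number of signaled interests
    }
}
```

Because each NID appears at most once in the queue, its memory is bounded by \textit{sizeof(nodeID) * maxNumConnections}, matching the upper bound discussed below.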
The table with atomic counters keeps track of the number of signaled interests for RTS data\/messages per connection. By calling \textit{GetNextDataToSend}, the send thread from Ibdxnet checks a lock-free list which contains the nodeIDs of the connections with at least one write interest available. A nodeID is added to the list in order, but only if it is not already in the list. This is detected by checking if the fetch-and-add operation on the atomic counter returned 0. This mechanism ensures that data from many connections is processed in a round robin fashion. Furthermore, avoiding duplicates in the queue sets an upper bound on the memory requirement of \textit{sizeof(nodeID) * maxNumConnections}. Otherwise, the queue could grow depending on the load and the number of active connections. If the queue of the WIM is empty, the send thread aborts and returns to the native space.\n\nThe send thread uses the NID it removed from the queue to get and reset the number of interests of the corresponding atomic counter. If there are any interests available for FC data, the send thread processes them by getting the FC from the connection and getting, but not yet removing, the stored FC data. For interests concerning normal data, the send thread gets the ORB from the connection and reads the current front and back pointers. The pointers of the ORB are not modified, only read (details below). With this data, along with the NID of the connection, the send thread returns to the native space for processing (\S \ref{send_thread}).\n\nEvery time the send thread returns to the Java space to get more data to send, it carries the parameters \textit{prevWorkResults}, which contains data about the previous send operation, and \textit{completionList}, which contains data about completed WRs, i.e. data send confirmations (\S \ref{send_thread}). 
For performance reasons, this data resides in native memory as structs and is mapped and accessed using DirectByteBuffers (\S \ref{java_and_native}).\n\nThe asynchronous workflow used to send and receive data by posting WRs and polling WCs must be accommodated by updating the ORB and FC accordingly. Depending on the fill level of the SQ, the send thread might not be able to post all normal data or FC it retrieved in the previous iteration. The \textit{prevWorkResults} parameter contains the information how much normal and FC data was and was not processed. This information must be preserved for the next send operation to avoid sending data multiple times. For the ORB, however, we cannot simply move the front pointer because this would free memory that is not yet confirmed to be sent.\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.0in]{ib_ringbuffer.pdf}\n\t\caption{Extended outgoing ring buffer used by IB transport.}\n\t\label{ib_ringbuffer}\n\end{figure}\n\nThus, we introduce a second front pointer, front posted, which is only known to and modified by the send thread and allows it to keep track of already posted data. Figure \ref{ib_ringbuffer} depicts the most important aspects of the enhanced ORB which is used for the IB transport. In total, this creates three virtual areas of memory designated to the following states:\n\begin{itemize}\n \item Data posted but not confirmed: front to front posted\n \item Data RTS and not posted: front posted to back\n \item Free memory for send threads to serialize to: back to front\n\end{itemize}\n\nUsing the parameter \textit{prevWorkResults}, the front posted pointer is moved by the amount of data posted. Any unprocessed data remains unprocessed (front posted is not moved to cover the entire area of RTS data). For data provided with the parameter \textit{completionList}, the front pointer is updated according to the number of bytes now confirmed to be sent. 
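The pointer arithmetic for the three areas can be sketched as follows. This is a simplified model with hypothetical names and monotonically growing pointers; the real ORB is a ring buffer with wrap-around of serialized messages:

```java
// Sketch of the extended ORB pointer handling: front, frontPosted and back
// delimit the three areas described above. Simplified, non-wrapping model;
// names are illustrative, the actual ORB handles wrap-around.
public class OrbSketch {
    final int size;
    long front;       // start of posted-but-unconfirmed data
    long frontPosted; // start of RTS-but-not-posted data
    long back;        // start of free memory

    public OrbSketch(int size) { this.size = size; }

    // Application send threads serialized 'bytes' into the buffer.
    public void produce(long bytes) { back += bytes; }

    // Send thread posted 'bytes' as WRs to the SQ (from prevWorkResults).
    public void posted(long bytes) { frontPosted += bytes; }

    // 'bytes' confirmed sent via work completions (from completionList).
    public void confirmed(long bytes) { front += bytes; }

    public long postedUnconfirmed() { return frontPosted - front; }
    public long readyToSend()       { return back - frontPosted; }
    public long freeMemory()        { return size - (back - front); }
}
```

Only `confirmed` frees memory; `posted` merely shifts data from the RTS area to the unconfirmed area, which is exactly why the second front pointer is needed.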
A similar but less complex approach is applied to updating FC.\n\n\subsection{Process Incoming Buffers}\n\label{transport_recv}\n\nThe dedicated receive thread of msgrc pushes received data to the low-level interface. Analogous to how RTS data is pulled from the IB transport via the JNI binding, the receive thread uses the \textit{Received} function provided by the binding to push the received buffers to the IB transport into Java space. All received buffers are stored as a batch in the \textit{recvPackage} data structure (\S \ref{engine_interface}) to minimize context switching overhead. For performance reasons, this data resides in native memory as structs and is mapped and accessed using DirectByteBuffers (\S \ref{java_and_native}).\n\nThe receive thread iterates the package in Java space, dispatches received FC data to each connection and pushes the received buffers (including the connection of the source node) to the IBQ (\S \ref{dxnet_receive}). The buffers are handled and processed asynchronously by the MessageCreationCoordinator and one or multiple MessageHandlers of the DXNet core (all of them are Java threads). Once the buffers are processed (de-serializing their contents), the Java threads return them asynchronously to the transport engine's receive buffer pool (\S \ref{receive_thread}).\n\n\n\section{Evaluation}\n\label{eval}\n\nFor better readability, we refer to DXNet with the IB transport Ibdxnet and msgrc engine as DXNet from here onwards.\n\nWe implemented commonly used microbenchmarks to compare DXNet to two MPI implementations supporting InfiniBand: MVAPICH2 and FastMPJ. We decided to compare against two MPI implementations for the following reasons: To the best of our knowledge, there is no other system available that offers all features of DXNet, and big data applications implementing their dedicated network stack do not offer it as a separate application\/library like DXNet does. 
MPI can be used to partially cover some features of DXNet but not all (\S \ref{related_work}). We are aware that MPI is targeting a different application domain, mainly HPC, whereas DXNet is targeting big data. However, MPI has already been used in big data applications as well, and several aspects related to the network stack and the technologies overlap in both application domains.\n\nBandwidth with two nodes is compared using typical uni- and bi-directional benchmarks. We also compared scalability using an all-to-all benchmark (worst-case scenario) with up to 8 nodes. Latency is compared by measuring the RTT with a request-response communication pattern. These benchmarks are executed single threaded to compare all three systems.\n\nFurthermore, we compared how DXNet and MVAPICH2 perform in a multi-threaded environment which is typical for big data but not HPC applications. However, we can only compare them using three benchmarks. Multi-threaded latency cannot be compared since it would require MVAPICH2 to implement additional infrastructure to store and map requests to responses and to dynamically dispatch incoming data to handler callbacks on multiple receive threads (similar to DXNet). MVAPICH2 does not provide such a processing pipeline. FastMPJ cannot be compared at all here because it only supports single threaded environments. Table \ref{eval_benchmarks} summarizes the systems and benchmarks executed.\n\nAll benchmarks were executed on up to 8 nodes of our private cluster, each with a single socket Intel Xeon CPU E5-1650 v3 (6 cores running at 3.50 GHz) and 64 GB RAM. The nodes are running Ubuntu 16.04 with kernel version 4.4.0-57. All nodes are equipped with a Mellanox MT27500 HCA, connected with 56 Gbps links to a single Mellanox SX6015 18 port switch. 
For Java applications, we used the Oracle JVM version 1.8.0\_151.\n\n\begin{table}[]\n\centering\n\begin{tabular}{|l|c|c|c|}\n\hline\n & \multicolumn{1}{l|}{FastMPJ} & \multicolumn{1}{l|}{MVAPICH2} & \multicolumn{1}{l|}{DXNet} \\\\ \hline\nUni-dir. TP ST & x & x & x \\\\ \hline\nBi-dir. TP ST & x & x & x \\\\ \hline\nLatency ST & x & x & x \\\\ \hline\nAll-to-all TP ST & x & x & x \\\\ \hline\nUni-dir. TP MT & & & x \\\\ \hline\nBi-dir. TP MT & & x & x \\\\ \hline\nLatency MT & & & x \\\\ \hline\nAll-to-all MT & & & x \\\\ \hline\n\end{tabular}\n\caption{Summary of benchmarks and systems. TP = throughput, ST = single threaded, MT = multi-threaded}\n\label{eval_benchmarks}\n\end{table}\n\n\subsection{Benchmarks}\n\label{benchmarks}\n\nThe \textit{osu} benchmarks included with MVAPICH2 implement typical micro benchmarks to measure uni- and bi-directional bandwidth and uni-directional latency which reflect basic usage of any network stack for point-to-point communication. \textit{osu\_latency} is used as a foundation and extended with recording of all RTTs to determine the 95th, 99th and 99.9th percentile after execution. The latency measured is the full RTT from when the source sends a request to the destination until the corresponding response is received by the source. For evaluating throughput, the benchmarks \textit{osu\_bw} and \textit{osu\_bibw} were combined into a single benchmark and extended to enable all-to-all bi-directional execution with more than two nodes. We consider this a relevant benchmark to show whether the system is capable of handling multiple connections under high load. This is a common situation found in big data applications as well as backend storages \cite{yahoo}. On all-to-all, every node receives from all other nodes and sends messages to all other nodes in a round robin fashion. The bi-directional and all-to-all results presented are the aggregated send throughputs of all participating nodes. 
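The round robin destination selection in the all-to-all benchmark can be sketched as follows. This is a hypothetical helper, not taken from the benchmark code:

```java
// Sketch of round robin destination selection for the all-to-all benchmark:
// every node sends to all other nodes in turn, skipping itself.
public class AllToAllSketch {
    // Returns the destination node for the i-th message sent by 'self'
    // among 'numNodes' participants (0-based node IDs).
    public static int destination(int self, int i, int numNodes) {
        return (self + 1 + i % (numNodes - 1)) % numNodes;
    }
}
```

Cycling over `numNodes - 1` offsets guarantees that a node never selects itself and visits all peers in a fixed round robin order.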
We added options to support multi-threaded sending and receiving using a configurable number of send and receive threads. As per-processor core counts increase, the multi-threading aspect becomes more and more important. Furthermore, our target application domain big data relies heavily on multi-threaded environments.\n\nFor the evaluation of FastMPJ, we ported the \textit{osu} benchmarks to Java. The benchmarks for evaluating a multi-threaded MPI process were omitted because FastMPJ does not support multi-threaded processes. DXNet comes with its own benchmarks already implemented which are comparable to the \textit{osu} benchmarks.\n\nThe \textit{osu} benchmarks use a configurable parameter \textit{window\_size} (WS) which denotes the number of messages sent in a single batch. Since MPI does not support implicit message aggregation like DXNet, we executed all MPI experiments with increasing WS to determine bandwidth peaks and saturation under optimal conditions and ensure a fair comparison to DXNet's built-in aggregation. No MPI collectives are required for the benchmarks and, thus, are not evaluated.\n\nAll benchmarks are executed three times and their variance is displayed using error bars. Throughputs are specified in GB\/s, latencies\/RTTs in \textmu s and message rates in mmps (million messages per second). All throughput benchmarks send 100 million messages and all latency benchmarks 10 million messages. The total number of messages is incrementally halved starting with 4 kb message size to avoid unnecessarily long benchmark runs. All throughputs measured are based on the total amount of sent payload bytes. This does not include any overhead like message headers or envelopes that are required by the systems for message identification or routing.\n\nFurthermore, we included the results of the ib perf tools \textit{ib\_write\_bw} and \textit{ib\_write\_lat} as baselines for all end-to-end type benchmarks. 
These simple perf tools cannot be compared directly to the complex systems evaluated. However, these baselines show the best possible network performance (without any overhead of the evaluated systems) and allow rough comparisons of the systems across multiple plots. We chose parameters that reflect the configuration values of DXNet as closely as possible (but still allow comparisons to FastMPJ and MVAPICH2 as well): receive queue size 2000 and send queue size 20 for both bandwidth and latency measurements; 100,000,000 messages for bandwidth and 10,000,000 for latency.\n\n\subsection{DXNet with Ibdxnet Transport}\n\n\begin{table*}[t]\n\centering\n\begin{tabular}{|l|l|}\n\hline\nIBQ max. capacity buffer count & 8192 \\\\ \hline\nIBQ max. capacity aggregated data size & 128 MB \\\\ \hline\nMessage handlers & varying (see experiments) \\\\ \hline\nIB SQ size (per connection) & 20 \\\\ \hline\nIB SRQ size & 2000 (default value for up to 100 connections) \\\\ \hline\nMax. connection limit & 100 \\\\ \hline\nRecv buffer pool capacity & 4 GB \\\\ \hline\nFlow control window & 16 MB \\\\ \hline\nFlow control threshold & 0.1 \\\\ \hline\nReceive buffer size (for small message sizes 1 - 16 kb) & 32 kb \\\\ \hline\nSGEs per WR (for small message sizes 1 - 16 kb) & 4 \\\\ \hline\nReceive buffer size (for medium\/large message sizes 32 kb - 1 MB) & 1 MB \\\\ \hline\nSGEs per WR (for medium\/large message sizes 32 kb - 1 MB) & 1 \\\\ \hline\n\end{tabular}\n\caption{DXNet configuration values for experiments}\n\label{dxnet_config}\n\end{table*}\n\nWe configured DXNet using the parameters depicted in Table \ref{dxnet_config}. The configuration values were determined using various debugging statistics and experiments and are currently considered optimal.\n\nFor comparing single threaded performance, the number of application threads and message handlers (referred to as MH) is limited to one each to allow comparison with FastMPJ and MVAPICH2. 
DXNet's multi-threaded architecture does not allow combining the logic of the application send thread and a message handler into a single thread. Thus, DXNet's ``single threaded'' benchmarks are always executed with one dedicated send and one dedicated receive thread.\n\nThe following subsections present the results of the various benchmarks. First, we present the results of all single threaded benchmarks with one send thread: uni- and bi-directional throughput, uni-directional latency and all-to-all with increasing node count. Afterwards, the results of the same four benchmarks are presented with multiple send threads.\n\n\subsubsection{Uni-directional Throughput}\n\label{eval_dxnet_uni_tp}\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.4in]{dxnet_uni_0_1_1.pdf}\n\t\caption{\textbf{DXNet}: 2 nodes, uni-directional throughput and message rate with one application send thread, increasing message size and number of message handlers}\n\t\label{eval_dxnet_uni_bw}\n\end{figure}\n\nThe results of the uni-directional benchmark are depicted in figure \ref{eval_dxnet_uni_bw}. Considering one MH, DXNet's throughput peaks at 5.9 GB\/s at a message size of 16 kb. For larger messages (32 kb to 1 MB), one MH is not sufficient to de-serialize and dispatch all incoming messages fast enough, and throughput drops to a peak bandwidth of 5.4 GB\/s. However, this can be resolved by simply using two MHs. Now, DXNet's throughput peaks and saturates at 5.9 GB\/s with a message size of just 4 kb and stays saturated up to 1 MB. Message sizes smaller than 4 kb also benefit significantly from the shorter receive processing times by utilizing two MHs. Further MHs can still improve performance but only slightly for a few message sizes.\n\nFor small messages up to 64 bytes, DXNet achieves peak message rates of 4.0-4.5 mmps using one MH. Multiple MHs cannot increase performance significantly further for such small messages. 
However, with growing message size (512 byte to 16 kb), the message rate can be increased with two message handlers.\n\nCompared to the baseline performance of \textit{ib\_send\_bw}, DXNet's peak performance is approx. 0.5 to 1.0 mmps less. With increasing message size, this gap closes and DXNet even surpasses the baseline for 1 kb to 32 kb message sizes when using multiple MHs. DXNet peaks close to the baseline's peak performance of 6.0 GB\/s. The results for small message sizes fluctuate independently of the number of MHs. This can be observed in all other DXNet benchmarks measuring message\/payload throughput as well, and is a common issue when running high load throughput benchmarks, even with the bare ibverbs API.\n\nThis benchmark shows that DXNet is capable of handling a vast amount of small messages efficiently. The application send thread and, thus, the user does not have to bother with aggregating messages explicitly because DXNet handles this transparently and efficiently. The overall performance benefits from multiple message handlers increasing receive throughput. Large messages do impact performance with one MH because the de-serialization of data consumes most of the processing time during receive. However, simply adding at least one more MH solves this issue and further increases performance.\n\n\subsubsection{Bi-directional Throughput}\n\label{eval_dxnet_bi_tp}\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.4in]{dxnet_bi_0_2_1_1.pdf}\n\t\caption{\textbf{DXNet}: 2 nodes, bi-directional throughput and message rate with one application send thread, increasing message size and number of message handlers}\n\t\label{eval_dxnet_bi_bw}\n\end{figure}\n\nFigure \ref{eval_dxnet_bi_bw} depicts the results of the bi-directional benchmark with one send thread. With one MH, the aggregated throughput peaks at approx. 10.4 GB\/s for 8 kb. 
Using two message handlers, the fluctuations starting with 16 kb messages using one MH can be resolved (as already explained in \S \ref{eval_dxnet_uni_tp}). Further increasing the performance using four MHs is not possible in this benchmark and actually degrades it for 512 byte to 2 kb message sizes. DXNet's throughput peaks at approx. 10.4 GB\/s and saturates with a message size of 32 kb.\n\nThe peak aggregated message rate for small messages up to 64 bytes varies from approx. 6 to 6.9 mmps with one MH. Using more MHs cannot improve performance significantly for this benchmark. Due to the multi-threaded and highly pipelined architecture of DXNet, these variations cannot be avoided, especially when exclusively handling many small messages.\n\nCompared to the baseline performance of \textit{ib\_send\_bw}, there is still room for improvement for DXNet's performance on small message sizes (up to 2.5 mmps difference). For medium message sizes, \textit{ib\_send\_bw} yields slightly higher throughput up to 1 kb message size. However, DXNet surpasses \textit{ib\_send\_bw} for 1 kb to 16 kb message sizes. DXNet's peak performance is approx. 1.1 GB\/s less than \textit{ib\_send\_bw}'s (11.5 GB\/s).\n\nOverall, this benchmark shows that DXNet delivers high performance especially for small messages, similar to the uni-directional benchmark (\S \ref{eval_dxnet_uni_tp}).\n\n\subsubsection{Uni-directional Latency}\n\label{eval_dxnet_uni_lat}\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.4in]{dxnet_uni_lat_3_1_1_st.pdf}\n\t\caption{\textbf{DXNet}: 2 nodes, uni-directional RTT and message rate with one application send thread, increasing message size}\n\t\label{fig_eval_dxnet_uni_lat}\n\end{figure}\n\nFigure \ref{fig_eval_dxnet_uni_lat} depicts the average RTTs as well as the 95th, 99th and 99.9th percentile of the uni-directional latency benchmark with one send thread and one MH. 
For message sizes up to 512 bytes, DXNet achieves an avg. RTT of 7.8 to 8.3 \textmu s, a 95th percentile of 8.5 to 8.9 \textmu s, a 99th percentile of 8.9 to 9.2 \textmu s and a 99.9th percentile of 11.8 to 12.7 \textmu s. This results in a message rate of approx. 0.1 mmps. As expected, starting with 1 kb message size, latency increases with increasing message size.\n\nThe RTT can be broken down into three parts: DXNet, Ibdxnet and hardware processing. Taking the lowest avg. of 7.8 \textmu s, DXNet requires approx. 3.5 \textmu s of the total RTT (the full breakdown is published in our other publication \cite{dxnet}) and the hardware approx. 2.0 \textmu s (assuming an avg. one way latency of 1 \textmu s for the used hardware). Message de- and serialization as well as message object creation and dispatching are part of DXNet. For Ibdxnet, this leaves approx. 2.3 \textmu s of processing time, which includes JNI context switching as well as the several pipeline stages explained in the earlier sections.\n\nCompared to the baseline performance of \textit{ib\_send\_lat}, DXNet's latency is significantly higher. Obviously, additional latency cannot be avoided with such a long and complex processing pipeline. Considering the breakdown mentioned above, the native part Ibdxnet, which calls ibverbs to send and receive data, is to some degree comparable to the minimal perf tool \textit{ib\_send\_lat}. Ibdxnet's share of 2.3 \textmu s (of the full pipeline's 7.8 \textmu s) is just slightly higher than \textit{ib\_send\_lat}'s 1.8 \textmu s. But Ibdxnet already includes various data structures for state handling and buffer scheduling (\S \ref{send_thread}, \S \ref{receive_thread}) which \textit{ib\_send\_lat} doesn't. 
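The latency breakdown above can be summarized as follows (values taken from the measurements cited above):\n\begin{equation*}\nt_{RTT} \approx \underbrace{3.5}_{\mathrm{DXNet}} + \underbrace{2.3}_{\mathrm{Ibdxnet}} + \underbrace{2.0}_{\mathrm{hardware}} = 7.8~\mu\mathrm{s}\n\end{equation*}\n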
Buffers for sending data are re-used instantly and the data received is discarded immediately.\n\n\subsubsection{All-to-all Throughput with up to 8 Nodes}\n\label{eval_dxnet_nodes_tp}\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.4in]{dxnet_bi_0_2_1_1_nodes.pdf}\n\t\caption{\textbf{DXNet}: 2 to 8 nodes, all-to-all aggregated send throughput and message rate with one application send thread, increasing message size and one message handler}\n\t\label{eval_dxnet_nodes}\n\end{figure}\n\nFigure \ref{eval_dxnet_nodes} shows the aggregated send throughput and message rate of all participating nodes (up to 8) executing the all-to-all benchmark with one send thread and one MH with increasing message size. For small messages up to 64 byte, peak message rates of 7.0 mmps, 14.5 mmps, 20.1 mmps and 25.6 mmps are achieved for 2, 4, 6 and 8 nodes. Throughput increases with increasing node count, peaking at 8 kb message size with 10.4 GB\/s for 2 nodes. The peaks for 4, 6 and 8 nodes are reached with 16 kb message size at 18.9 GB\/s, 26.0 GB\/s and 32.4 GB\/s. Incrementally adding two nodes, throughput is increased by 8.5 GB\/s (for 2 to 4 nodes), by 7.1 GB\/s (for 4 to 6 nodes) and 6.4 GB\/s (for 6 to 8 nodes). One would expect approx. equally large throughput increments, but the gain noticeably decreases with every two nodes added.\n\nWe tried different configuration parameters for DXNet and ibverbs like different MTU sizes, SGE counts, receive buffer sizes, WRs per SQ\/SRQ or CQ size. No combination of settings improved this situation.\n\nWe assume that the all-to-all communication pattern puts high stress on the HCA which, at some point, cannot keep up with processing outstanding requests. To rule out any software issues with DXNet first, we implemented a low-level ``loopback'' like test which uses only the native part of Ibdxnet. 
The loopback test does not involve any dynamic message posting when sending data or data processing when receiving. Instead, a buffer equal in size to the ORB is processed by Ibdxnet's send thread on every iteration and posted to every participating SQ. This ensures that all SQs are filled and quickly refilled once at least one WR was processed. When receiving data on the SRQ, all buffers received are directly put back into the pool without processing and the SRQ is refilled. This ensures that no additional processing overhead is added for sending and receiving data. Thus, Ibdxnet's loopback test comes close to a perf tool like benchmark. We executed the benchmark with 2, 4, 6 and 8 nodes which yielded aggregated throughputs of 11.7 GB\/s, 21.7 GB\/s, 28.3 GB\/s and 34.0 GB\/s. \n\nThese results are very close to the performance of the full DXNet stack but do not rule out all software related issues yet. The overall aggregated bandwidth could still somehow be limited by Ibdxnet. Thus, we executed another benchmark which first executes all-to-all communication with up to 8 nodes and then, once bandwidth is saturated, switches to a ring formation for communication without restarting the benchmark (every node sends only to its successor determined by NID). \n\nOnce the nodes switch the communication pattern during execution, the per node aggregated bandwidth increases very quickly and reaches a maximum aggregated bandwidth of approx. $(11.7 \/ 2) \times num\_nodes$ GB\/s independent of the number of nodes used. This rules out total bandwidth limitations of software and hardware. Furthermore, we can now rule out any performance issues in DXNet or even ibverbs with connection management (e.g. too many QPs allocated).\n\nThis leads to the assumption that the HCA cannot keep up with processing outstanding WRs when SQs are under high load (always filled with WRs). With more than 3 SQs per node, the total bandwidth drops noticeably. 
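With the measured two-node aggregate of 11.7 GB\/s, the expected maximum for the ring formation follows as\n\begin{equation*}\nBW_{ring}(n) \approx \frac{11.7}{2} \times n~\mathrm{GB\/s}, \qquad BW_{ring}(8) \approx 46.8~\mathrm{GB\/s},\n\end{equation*}\nwhich is well above the 34.0 GB\/s measured all-to-all with 8 nodes and supports the assumption that the per-SQ load, not the total link bandwidth, is the bottleneck.\n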
Similar results with other systems further support this assumption (\S \ref{eval_fmpj_nodes} and \ref{eval_mva_nodes}).\n\n\subsubsection{Uni-directional Throughput Multi-threaded}\n\n\begin{figure}[!t]\n\t\centering\n\t\includegraphics[width=3.4in]{dxnet_uni_0_1_4.pdf}\n\t\caption{\textbf{DXNet}: 2 nodes, uni-directional throughput and message rate with multiple application send threads, increasing message size and 4 message handlers}\n\t\label{eval_dxnet_uni_bw_mt}\n\end{figure}\n\nFigure \ref{eval_dxnet_uni_bw_mt} shows the uni-directional benchmark executed with 4 MHs and 1 to 16 send threads. For 1 to 4 send threads, throughput saturates at 5.9 GB\/s at either 4 kb or 8 kb messages. For 256 byte to 8 kb, using one thread yields better throughput than two or sometimes four threads. However, running the benchmark with 8 and 16 send threads increases the overall throughput for all messages greater than 32 byte significantly, with saturation starting at 2 kb message size. DXNet's pipeline benefits from the many threads posting messages to the ORB concurrently. This results in greater aggregation of multiple messages and allows higher buffer utilization for the underlying transport.\n\nDXNet also increases the message throughput for small message sizes up to 512 byte from approx. 4.0 mmps to 6.7 mmps with 16 send threads. Again, performance is slightly worse with two and four threads compared to a single thread.\n\nFurthermore, DXNet even surpasses the baseline performance of \textit{ib\_send\_bw} when using multiple send threads. 
However, the peak performance cannot be improved further, which shows the current limit of DXNet for this benchmark and the hardware used.\n\n\\subsubsection{Bi-directional Throughput Multi-threaded}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{dxnet_bi_0_2_1_4_mt.pdf}\n\t\\caption{\\textbf{DXNet}: 2 nodes, bi-directional throughput and message rate with multiple application send threads, increasing message size and 4 message handlers}\n\t\\label{eval_dxnet_bi_bw_mt}\n\\end{figure}\n\nFigure \\ref{eval_dxnet_bi_bw_mt} shows the bi-directional benchmark executed with 4 MHs and 1 to 16 send threads. With more than one send thread, the aggregated throughput peaks at approx. 10.4 and 10.7 GB\/s with message sizes of 2 and 4 kb. DXNet delivers higher throughputs for all medium and small messages with increasing send thread count. The baseline performance of \\textit{ib\\_send\\_bw} is reached on small message sizes and even surpassed with medium sized messages up to 16 kb. The peak throughput is not reached, showing DXNet's current limit with the hardware used.\n\nThe overall performance with 8 and 16 send threads does not differ noticeably, which indicates saturation of DXNet's processing pipeline. For small messages (less than 512 byte), the message rates also increase with increasing send thread count. Again, saturation starts with 8 send threads with a message rate of approx. 8.6 to 10.2 mmps.\n\nDXNet is capable of handling a multi-threaded environment under high load with CPU over-provisioning and still delivers high throughput. Especially for small messages, DXNet's pipeline even benefits from the highly concurrent activity by aggregating many messages. 
This results in higher buffer utilization and, for the user, higher overall throughput.\n\n\\subsubsection{Uni-directional Latency Multi-threaded}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{dxnet_uni_lat_3_1_1.pdf}\n\t\\caption{\\textbf{DXNet}: 2 nodes, uni-directional avg. RTT and message rate with multiple application send threads, increasing message size and 4 message handlers}\n\t\\label{eval_dxnet_uni_lat_mt}\n\\end{figure}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{dxnet_uni_lat_3_1_1_2.pdf}\n\t\\caption{\\textbf{DXNet}: 2 nodes, uni-directional 95th, 99th and 99.9th percentile RTT and message rate with multiple application send threads, increasing message size and 4 message handlers}\n\t\\label{eval_dxnet_uni_lat_mt2}\n\\end{figure}\n\nFigure \\ref{eval_dxnet_uni_lat_mt} depicts the avg. RTT and message rate of the uni-directional latency benchmark with up to 16 send threads and 4 MHs. The 95th, 99th and 99.9th percentiles are depicted in figure \\ref{eval_dxnet_uni_lat_mt2}. DXNet keeps a very stable avg. RTT of 8.1 \\textmu s for message sizes of 1 to 512 bytes with one send thread. Using two send threads, this value just slightly increases. With four or more send threads the avg. RTT increases to approx. 9.3 \\textmu s. When the total number of threads, which includes DXNet's internal threads, MHs and send threads, exceeds the core count of the CPU, DXNet switches to different parking strategies for the different thread types which slightly increase latency but greatly reduce overall CPU load (\\S \\ref{thread_parking}).\n\nThe message rate can be increased up to 0.33 mmps with up to 4 send threads as, practically, every send thread can use a free MH out of the 4 available. With 8 and 16 send threads, the MHs on the remote must be shared and DXNet's over-provisioning is active which reduces the overall throughput. 
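The switch to parking strategies mentioned above can be illustrated with a minimal sketch of load-adaptive waiting; the spin, yield and park escalation and all thresholds here are illustrative assumptions, not DXNet's actual values (\\S \\ref{thread_parking}):

```java
import java.util.concurrent.locks.LockSupport;

// Minimal sketch of load-adaptive waiting: spin briefly for lowest
// latency, then yield, then park with a timeout to reduce CPU load when
// idle. Thresholds are illustrative; DXNet's actual strategy may differ.
public class AdaptiveWait {
    private static final int SPIN_LIMIT = 1000;  // busy-poll iterations
    private static final int YIELD_LIMIT = 100;  // yields before parking

    // Polls 'workAvailable' until it returns true, escalating from
    // spinning to yielding to timed parking. Returns rounds waited.
    static long awaitWork(java.util.function.BooleanSupplier workAvailable) {
        long rounds = 0;
        int spins = 0, yields = 0;
        while (!workAvailable.getAsBoolean()) {
            rounds++;
            if (spins < SPIN_LIMIT) {
                spins++;                         // hot path: lowest latency
            } else if (yields < YIELD_LIMIT) {
                yields++;
                Thread.yield();                  // give up the core briefly
            } else {
                LockSupport.parkNanos(100_000);  // 100 us park: low CPU load
            }
        }
        return rounds;
    }

    public static void main(String[] args) {
        final long[] counter = {0};
        long rounds = awaitWork(() -> ++counter[0] >= 10); // work arrives "late"
        System.out.println("waited rounds: " + rounds);
    }
}
```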
The percentiles shown in figure \\ref{eval_dxnet_uni_lat_mt2} reflect this situation very well and increase noticeably. \n\nWith a single thread, as already discussed in Section \\ref{eval_dxnet_uni_lat}, the difference of the avg. (7.8 to 8.3 \\textmu s) and 99.9th percentile (11.8 to 12.7 \\textmu s) RTT for message sizes less than 1 kb is approx. 4 to 5 \\textmu s. When doubling the send thread count, the 99.9th percentiles roughly double as well. When over-provisioning the CPU, we cannot avoid the higher than usual RTT caused by the increasing number of messages getting posted.\n\n\\subsubsection{All-to-all Throughput with up to 8 Nodes Multi-threaded}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{dxnet_bi_0_2_16_4_nodes_mt.pdf}\n\t\\caption{\\textbf{DXNet}: 2 to 8 nodes, all-to-all aggregated send throughput and message rate with 16 application send threads, increasing message size and 4 message handlers}\n\t\\label{eval_dxnet_nodes_mt}\n\\end{figure}\n\nFigure \\ref{eval_dxnet_nodes_mt} shows the results of the all-to-all benchmark with up to 8 nodes, 16 application send threads and 4 message handlers. Compared to the single threaded results (\\S \\ref{eval_dxnet_nodes_tp}), DXNet achieves slightly higher throughputs for all node counts: for two nodes, throughput saturates at 4 kb message size and peaks at 10.7 GB\/s; for 4 nodes, throughput saturates at 4 kb message size and peaks at 19.5 GB\/s; for 6 nodes, throughput saturates at 2 kb message size and peaks at 27.0 GB\/s; for 8 nodes, throughput saturates at 2 kb message size and peaks at 33.6 GB\/s. However, the message rate is improved significantly for small messages up to 64 byte with 8.4 to 10.3 mmps, 18.9 to 21.1 mmps, 27.6 to 31.4 mmps and 33.2 to 43.4 mmps for 2 to 8 nodes.\n\nThese results show that DXNet delivers high throughputs and message rates under high loads with increasing node and thread count. 
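The message rates and throughputs reported here are two views of the same measurement (throughput = rate $\\times$ message size). A small sketch, assuming decimal units, shows why small-message performance is reported in mmps rather than GB\/s: even the 43.4 mmps peak above moves less than 3 GB\/s of 64-byte payload:

```java
// Relation between message rate and payload throughput used throughout
// this evaluation: throughput = rate * size. Illustrative arithmetic;
// decimal units (1 GB/s = 1e9 byte/s, 1 mmps = 1e6 messages/s) assumed.
public class RateMath {
    static double payloadGBs(double mmps, int messageSizeBytes) {
        return mmps * 1e6 * messageSizeBytes / 1e9;
    }

    public static void main(String[] args) {
        // 43.4 mmps of 64-byte messages (8-node all-to-all peak above)
        // moves only ~2.78 GB/s of payload, far below link saturation.
        System.out.printf("%.2f GB/s%n", payloadGBs(43.4, 64));
    }
}
```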
Small messages profit significantly through better aggregation and buffer utilization.\n\n\\subsubsection{Summary Results}\n\nThis section briefly summarizes the most important results and numbers of the previous benchmarks. All values are considered ``up to'' and show the possible peak performance in the given benchmark.\n\n\\textbf{Single-threaded}:\n\\begin{itemize}\n \\item \\textbf{Uni-directional throughput} One MH: saturation with 16 kb messages, peak throughput at 5.9 GB\/s; Two MHs: saturation with 4 kb messages, peak throughput at 5.9 GB\/s; Peak message rate of 4.0 to 4.5 mmps for small messages up to 64 bytes \n \\item \\textbf{Bi-directional throughput} Saturation with 8 kb messages at 10.4 GB\/s with one MH; Peak message rate of 6.0 to 6.9 mmps for small messages up to 64 bytes\n \\item \\textbf{Uni-directional latency} up to 512 byte messages: avg. 7.8 to 8.3 \\textmu s, 95th percentile of 8.5 to 8.9 \\textmu s, 99th percentile of 8.9 to 9.2 \\textmu s, 99.9th percentile of 11.8 to 12.7 \\textmu s; Peak message rate of 0.1 mmps.\n \\item \\textbf{All-to-all nodes} With 8 nodes: Total aggregated peak throughput of 32.4 GB\/s, saturation with 16 kb message size; Peak message rate of 25.6 mmps for small messages up to 64 bytes.\n\\end{itemize}\n\n\\textbf{Multi-threaded}:\nOverall, DXNet benefits from higher message aggregation through multiple outstanding messages in the ORB posted concurrently by many threads.\n\\begin{itemize}\n \\item \\textbf{Uni-directional throughput} Saturation at 5.9 GB\/s at 4 kb message size\n \\item \\textbf{Bi-directional throughput} Overall improved throughput for many message sizes, saturation at 10.7 GB\/s with 4 kb message size, message rate of 8.6 to 10.2 mmps for small messages up to 64 bytes\n \\item \\textbf{Uni-directional latency} Slightly higher latencies than single threaded as long as enough MHs serve available send threads. Message rate can be increased with additional send threads but at the cost of increasing avg. 
latency. The 99.9th percentiles roughly double when doubling the number of send threads.\n \\item \\textbf{All-to-all nodes}: With up to 8 nodes, 33.6 GB\/s peak throughput, saturation at 2 kb message size, 33.2 to 43.4 mmps for up to 64 byte messages \n\\end{itemize}\n\n\\subsection{FastMPJ}\n\\label{eval_fmpj}\n\nThis section describes the results of the benchmarks executed with FastMPJ and compares them to the results of DXNet presented in the previous sections. We used FastMPJ 1.0\\_7 with the device \\textit{ibvdev} to run the benchmarks on InfiniBand hardware. The \\textit{osu} benchmarks of MVAPICH2 were ported to Java (\\S \\ref{benchmarks}) and used for all following experiments. Since FastMPJ does not support multithreading in a single process, all benchmarks were executed single threaded and compared only to the single-threaded results of DXNet.\n\n\\subsubsection{Uni-directional Throughput}\n\\label{eval_fmpj_uni_tp}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{fmpj_uni_bw.pdf}\n\t\\caption{\\textbf{FastMPJ}: 2 nodes, uni-directional throughput and message rate with increasing message and window size}\n\t\\label{eval_fmpj_unibw}\n\\end{figure}\n\nFigure \\ref{eval_fmpj_unibw} shows the results of executing the uni-directional benchmark with two nodes with increasing message size. Furthermore, the benchmark was executed with increasing WS to ensure bandwidth saturation. As expected, throughput increases with increasing message size and bandwidth saturation starts at a medium message size of 64 kb with approx. 5.7 GB\/s. The actual peak throughput is reached with large 512 kb messages for a WS of 64 with 5.9 GB\/s. \n\nFor small message sizes up to 512 byte and independent of the WS, FastMPJ achieves a message rate of approx. 1.0 mmps. Furthermore, the results show that the WS does not matter for message sizes up to 64 kb. For 128 kb to 1 MB, FastMPJ profits from explicit aggregation with increasing WS. 
This indicates that ibvdev might include some message aggregation mechanism.\n\nCompared to the baseline performance of \\textit{ib\\_send\\_bw}, FastMPJ's performance is always below the baseline, with its peak of 5.9 GB\/s coming close to \\textit{ib\\_send\\_bw}'s 6.0 GB\/s.\n\nCompared to the results of DXNet (\\S \\ref{eval_dxnet_uni_tp}), DXNet's throughput saturates and peaks earlier at a message size of 16 kb with 5.9 GB\/s. However, if using one MH, throughput drops for larger messages down to 5.4 GB\/s due to increased message processing time (de-serialization). FastMPJ has no comparable mechanism, whereas DXNet can further improve performance by using two MHs. With two MHs, DXNet's throughput peaks even earlier at 5.9 GB\/s with 4 kb message size. For small messages of up to 64 bytes, DXNet achieves 4.0 to 4.5 mmps compared to FastMPJ with 1.0 mmps.\n\n\\subsubsection{Bi-directional Throughput}\n\\label{eval_fmpj_bi_tp}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{fmpj_bi_bw.pdf}\n\t\\caption{\\textbf{FastMPJ}: 2 nodes, bi-directional (aggregated) throughput and message rate with increasing message and window size}\n\t\\label{eval_fmpj_bibw}\n\\end{figure}\n\nThe results of the bi-directional benchmark are depicted in figure \\ref{eval_fmpj_bibw}. Again, throughput increases with increasing message size, peaking at 10.8 GB\/s with WS 2 and large 512 kb messages. However, when handling messages of 128 kb and greater, throughput peaks at approx. 10.2 GB\/s for the WSs 4 to 32 and saturation varies depending on the WS. For WSs 4 to 32, throughput is saturated with 64 kb messages, for WSs 1 and 2 at 512 kb. Starting at 128 kb message size, WSs of 1 and 2 achieve slightly better results than the greater WSs. Especially WS 64 drops significantly with message sizes of 128 kb and greater. 
However, for message sizes of 64 kb to 512 kb, FastMPJ profits from explicit aggregation.\n\nCompared to the uni-directional results (\\S \\ref{eval_fmpj_uni_tp}), FastMPJ does profit to some degree from explicit aggregation for small messages with 1 to 128 bytes. WSs 1 to 16 allow higher message throughputs with WS 16 as an optimal value peaking at approx. 2.4 mmps for 1 to 128 byte messages. Greater WSs degrade message throughput significantly. However, this does not apply to message sizes of 256 bytes, where greater explicit aggregation always increases message throughput.\n\nCompared to the baseline performance of \\textit{ib\\_send\\_bw}, FastMPJ's performance is again always inferior to it with a difference in peak performance of 0.7 GB\/s (10.8 GB\/s to 11.5 GB\/s).\n\nWhen comparing to DXNet's results (\\S \\ref{eval_dxnet_bi_tp}), the peak throughputs are nearly equal with 10.7 GB\/s, also at 512 kb message size. However, DXNet outperforms FastMPJ for medium sized messages by reaching a peak throughput of 10.4 GB\/s for just 8 kb messages. Even with a WS of 64, FastMPJ can only achieve 6.3 GB\/s aggregated throughput here. For small messages of up to 64 bytes, DXNet clearly outperforms FastMPJ with 6 to 7.2 mmps compared to 1.9 to 2.1 mmps with a WS of 16.\n\n\\subsubsection{Uni-directional Latency}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{fmpj_lat.pdf}\n\t\\caption{\\textbf{FastMPJ}: 2 nodes, uni-directional latency and message rate with increasing message and window size}\n\t\\label{eval_fmpj_lat}\n\\end{figure}\n\nThe results of the latency benchmark are depicted in figure \\ref{eval_fmpj_lat}. FastMPJ achieves a very low average RTT of 2.4 \\textmu s for up to 16 byte messages. This just slightly increases to 2.8 \\textmu s for up to 128 byte messages and to 4.5 \\textmu s for up to 512 byte messages. 
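The percentile figures reported throughout this evaluation can be derived from the recorded RTT samples; the nearest-rank method sketched below is one common choice and an assumption, not necessarily the exact method used by the benchmarks:

```java
import java.util.Arrays;

// Minimal sketch of deriving the reported RTT percentiles from recorded
// samples using the nearest-rank method (an assumption; the benchmarks'
// exact method may differ).
public class Percentile {
    static double percentile(double[] samples, double p) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length); // nearest rank
        return sorted[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        double[] rttUs = new double[1000];
        for (int i = 0; i < rttUs.length; i++) rttUs[i] = 2.4 + i * 0.001;
        System.out.println("median: " + percentile(rttUs, 50));
    }
}
```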
The 95th percentile reaches 3 \\textmu s RTT for up to 64 byte messages, slightly increasing to 5 \\textmu s for up to 512 byte messages. For the 99th percentile, message sizes up to 16 bytes achieve a 7.7 \\textmu s RTT, increasing to 10 \\textmu s for up to 512 byte messages. For the 99.9th percentile, message sizes up to 16 byte fluctuate slightly with a RTT of 14.5 to 15.5 \\textmu s. This continues for 32 byte to 2 kb with a low of 16.3 \\textmu s and a high of 19.5 \\textmu s. The average message rate peaks at approx. 0.41 mmps for up to 16 byte messages.\n\nCompared to the baseline performance of \\textit{ib\\_send\\_lat}, FastMPJ's average RTT comes close to its 1.8 \\textmu s and closes that gap slightly further starting with 256 byte message size. \n\nComparing the avg. RTT and 95th percentile to DXNet's results (\\S \\ref{eval_dxnet_uni_lat}), FastMPJ outperforms DXNet with an up to four times lower RTT. This is also reflected by the message rate of 0.41 mmps for FastMPJ and 0.1 mmps for DXNet. The breakdown given in Section \\ref{eval_dxnet_uni_lat} explains the rather high RTTs and the amount of processing time spent by DXNet on major sections of the pipeline. However, even though DXNet's avg. RTT for message sizes up to 512 byte is higher than FastMPJ's, DXNet achieves lower 99th (8.9 to 9.2 \\textmu s) and 99.9th percentiles (11.8 to 12.7 \\textmu s) than FastMPJ.\n\n\\subsubsection{All-to-all with Increasing Node Count}\n\\label{eval_fmpj_nodes}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{fmpj_bi_4nodes.pdf}\n\t\\caption{\\textbf{FastMPJ}: 4 nodes, all-to-all (aggregated) throughput and message rate with increasing message and window size}\n\t\\label{eval_fmpj_4nodes}\n\\end{figure}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{fmpj_bi_6nodes.pdf}\n\t\\caption{\\textbf{FastMPJ}: 6 nodes, all-to-all (aggregated) throughput and message rate with increasing message and window size}\n\t\\label{eval_fmpj_6nodes}\n\\end{figure}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{fmpj_bi_8nodes.pdf}\n\t\\caption{\\textbf{FastMPJ}: 8 nodes, all-to-all (aggregated) throughput and message rate with increasing message and window size}\n\t\\label{eval_fmpj_8nodes}\n\\end{figure}\n\nFigures \\ref{eval_fmpj_4nodes}, \\ref{eval_fmpj_6nodes} and \\ref{eval_fmpj_8nodes} show the aggregated send throughputs and message rates of the all-to-all benchmark running on 4, 6 and 8 nodes. The results for 2 nodes were already discussed in Section \\ref{eval_fmpj_bi_tp} and are depicted in figure \\ref{eval_fmpj_bibw}. The results for WS 64 and messages greater than 64 kb are absent because FastMPJ hangs (no error output) on message sizes greater than 64 kb with WS 64. 
We couldn't resolve this by re-running the benchmark several times and with different configuration parameters like increasing buffer sizes.\n\nFastMPJ scales well with increasing node count on all-to-all communication with the following peak throughputs: 10.8 GB\/s with WS 2 and 512 kb messages on 2 nodes, 19.2 GB\/s with WS 64 and 1 MB messages on 4 nodes, 26.3 GB\/s with WS 32 and 1 MB messages on 6 nodes, 32.7 GB\/s with WS 32 and 1 MB messages on 8 nodes. This results in per node send throughputs of 5.1 GB\/s, 4.8 GB\/s, 4.38 GB\/s and 4.08 GB\/s. The gradually decreasing per node throughput seems to be a non-software-related issue as explained in Section \\ref{eval_dxnet_nodes_tp}. For small messages up to 64 bytes, FastMPJ achieves the following peak message rates: 2.4 mmps for WS 16 on 2 nodes, 7.2 mmps for WS 16 on 4 nodes, 7.6 mmps for WS 16 on 6 nodes and 9.5 mmps for WS 8 on 8 nodes.\n\nDXNet also reaches peak throughputs close to FastMPJ's (\\S \\ref{eval_dxnet_nodes_tp}) on all node counts. However, DXNet saturates bandwidth very early with just 8 kb and 16 kb message sizes. Furthermore, DXNet outperforms FastMPJ's message rates for small messages on all node counts by up to three times (7.0 mmps, 15.0 mmps, 21.1 mmps and 27.3 mmps).\n\n\\subsubsection{Summary Results}\n\nThis section briefly summarizes the most important results and key numbers of the previous benchmarks. All values are considered ``up to'' and show the possible peak performance in the given benchmark; all were obtained single-threaded. 
All results benefit from explicit aggregation using the WS.\n\n\\begin{itemize}\n \\item \\textbf{Uni-directional throughput} Saturation at 64 kb message size with 5.7 GB\/s; Peak throughput at 512 kb message size with 5.9 GB\/s; 1.0 mmps for message sizes up to 64 byte\n \\item \\textbf{Bi-directional throughput} Saturation at 64 kb message size with 10.8 GB\/s; 2.4 mmps for message sizes up to 128 byte\n \\item \\textbf{Uni-directional latency} For up to 512 byte messages: avg. RTT of 2.4 to 4.5 \\textmu s, 95th percentile of 3 to 5 \\textmu s; 99th percentile of 7.7 to 10 \\textmu s; 99.9th percentile of 16.3 to 19.5 \\textmu s \n \\item \\textbf{All-to-all nodes} With 8 nodes: Total aggregated peak throughput of 32.7 GB\/s, saturation with 1 MB message size; Peak message rate of 9.5 mmps for small messages up to 64 byte\n\\end{itemize}\n\nCompared to FastMPJ, DXNet's single-threaded results show an up to 4 times higher message rate for small messages on both uni- and bi-directional benchmarks. However, FastMPJ achieves a lower average and 95th percentile latency on the uni-directional latency benchmark. But, even with a more complicated and dynamic pipeline, DXNet achieves lower 99th and 99.9th percentiles than FastMPJ, demonstrating high stability. On all-to-all communication with up to 8 nodes, DXNet reaches similar throughputs to FastMPJ's for large messages but outperforms FastMPJ's message rate by up to three times for small messages. \\textbf{DXNet is always better for small messages}.\n\n\\subsection{MVAPICH2}\n\\label{eval_mvapich2}\n\nThis section describes the results of the benchmarks executed with MVAPICH2 and compares them to the results of DXNet. All \\textit{osu} benchmarks (\\S \\ref{benchmarks}) were executed with MVAPICH2-2.3. Since MVAPICH2 supports MPI calls with multiple threads of the same process, some benchmarks were executed single and multi-threaded. 
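The window size (WS) mechanism of the osu-style benchmarks, used for FastMPJ above and MVAPICH2 below, posts WS non-blocking sends back to back before waiting for their completion. A minimal sketch with a stand-in transport interface (not the actual FastMPJ or MVAPICH2 API):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the osu-style window mechanism (WS): WS non-blocking sends
// are posted back to back, then the sender waits for the whole window
// before posting the next batch. Transport and its calls are stand-ins.
public class WindowedSend {
    interface Transport {
        Runnable isend(byte[] buf);  // returns a completion handle (stand-in)
    }

    // Posts 'count' messages in windows of size 'ws'; returns the number
    // of wait phases, i.e. how often the sender blocks on completions.
    static int run(Transport t, byte[] msg, int count, int ws) {
        int waits = 0;
        int posted = 0;
        List<Runnable> pending = new ArrayList<>();
        while (posted < count) {
            pending.clear();
            for (int i = 0; i < ws && posted < count; i++, posted++) {
                pending.add(t.isend(msg));      // post without blocking
            }
            for (Runnable r : pending) r.run(); // wait for the whole window
            waits++;
        }
        return waits;
    }

    public static void main(String[] args) {
        Transport noop = buf -> () -> { };
        // Larger windows mean fewer wait phases, hence better aggregation.
        System.out.println(run(noop, new byte[64], 1000, 1));  // 1000 waits
        System.out.println(run(noop, new byte[64], 1000, 64)); // 16 waits
    }
}
```

This also illustrates why a WS of 1 leaves the link underutilized for small messages: the sender blocks after every single message instead of keeping several WRs outstanding.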
We set the following environment variables for optimal performance and comparability: \n\\begin{itemize}\n \\item MV2\\_DEFAULT\\_MAX\\_SEND\\_WQE=128\n \\item MV2\\_DEFAULT\\_MAX\\_RECV\\_WQE=128\n \\item MV2\\_SRQ\\_SIZE=1024\n \\item MV2\\_USE\\_SRQ=1\n \\item MV2\\_ENABLE\\_AFFINITY=1\n\\end{itemize}\n\nAdditionally for the multi-threaded benchmarks, the following environment variables were set: \n\\begin{itemize}\n \\item MV2\\_CPU\\_BINDING\\_POLICY=hybrid\n \\item MV2\\_THREADS\\_PER\\_PROCESS=X (where X equals the number of threads we used when executing the benchmark)\n \\item MV2\\_HYBRID\\_BINDING\\_POLICY=linear\n\\end{itemize}\n\n\\subsubsection{Uni-directional Throughput}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{mva_tp_uni_st.pdf}\n\t\\caption{\\textbf{MVAPICH2}: 2 nodes, uni-directional throughput and message rate, single threaded with increasing message and window size}\n\t\\label{eval_mva_uni_st}\n\\end{figure}\n\nThe results of the uni-directional single threaded benchmark are depicted in figure \\ref{eval_mva_uni_st}. The overall throughput increases with increasing message size, peaking at 5.9 GB\/s with multiple WSs on large messages: 512 kb with WS 64, 256 kb with WS 32, 512 kb with WS 8, 512 kb with WS 4 and 1 MB with WS 2. Bandwidth saturation starts at approx. 64 kb to 128 kb for WSs of 16 or greater. The dependence on the WS also applies to small messages with up to 64 bytes: reaching a peak of 4.0 mmps is only possible with WS 64. If send calls are not batched explicitly, message rates are rather low (0.45 mmps for WS 1).\n\nCompared to the baseline performance of \\textit{ib\\_send\\_bw}, MVAPICH2's peak performance is approx. 1.0 mmps less for small messages. With increasing message size, on a WS of 64, the performance comes close to the baseline and even exceeds it for 2 kb to 8 kb messages. 
MVAPICH2 peaks very close to the baseline's peak performance of 6.0 GB\/s.\n\nDXNet achieves very similar results (\\S \\ref{eval_dxnet_uni_tp}) compared to MVAPICH2 but without relying on explicit aggregation. DXNet's throughput saturates and peaks earlier at a message size of 16 kb with 5.9 GB\/s. However, if using one MH, throughput drops for larger messages down to 5.4 GB\/s due to increased message processing time (de-serialization). As already explained in Section \\ref{eval_fmpj_uni_tp}, this can be resolved by using two MHs. For small messages of up to 64 bytes, DXNet achieves an equal to slightly higher message rate of 4.0 to 4.5 mmps.\n\n\\subsubsection{Bi-directional Throughput}\n\\label{eval_mva_bench_bi_st}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{mva_tp_bi_st.pdf}\n\t\\caption{\\textbf{MVAPICH2}: 2 nodes, bi-directional throughput and message rate, single threaded with increasing message and window size}\n\t\\label{eval_mva_bi_st}\n\\end{figure}\n\nThe results of the bi-directional single threaded benchmark are depicted in figure \\ref{eval_mva_bi_st}. Overall throughput increases with message size and, like on the uni-directional benchmark, benefits a lot from greater WSs. The aggregated throughput peaks at 11.1 GB\/s with 512 kb messages on multiple WSs. Throughputs for 128 byte to 512 kb message sizes benefit from explicit aggregation. The message rate for small messages up to 64 bytes does not always profit from explicit aggregation. Message rate increases with WSs 1 to 8 and peaks at 4.7 mmps with WS 8. However, greater WSs degrade the message rate slightly compared to the optimal case.\n\nCompared to the baseline performance of \\textit{ib\\_send\\_bw}, MVAPICH2's peak performance for small messages is approx. half of \\textit{ib\\_send\\_bw}'s 9.5 mmps. With increasing message size, the throughput of MVAPICH2 comes close to \\textit{ib\\_send\\_bw}'s with WS 64 and 32 for 4 and 8 kb messages only. 
Peak throughput for large messages comes close to \\textit{ib\\_send\\_bw}'s 11.5 GB\/s.\n\nCompared to DXNet's results (\\S \\ref{eval_dxnet_bi_tp}), the aggregated throughput is slightly higher than DXNet's (10.7 GB\/s). However, DXNet outperforms MVAPICH2 for medium sized messages by reaching a peak throughput of 10.4 GB\/s compared to 9.5 GB\/s (on WS 64) for just 8 kb messages. Furthermore, DXNet offers a higher message rate of 6 to 7.2 mmps on small messages up to 64 bytes. DXNet achieves overall higher performance without relying on explicit message aggregation.\n\n\\subsubsection{Uni-directional Latency}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{mva_lat_uni_st.pdf}\n\t\\caption{\\textbf{MVAPICH2}: 2 nodes, uni-directional latency and message rate, single threaded with increasing message size}\n\t\\label{eval_mva_uni_lat_st}\n\\end{figure}\n\nFigure \\ref{eval_mva_uni_lat_st} shows the results of the uni-directional single threaded latency benchmark. MVAPICH2 achieves a very low average RTT of 2.1 to 2.4 \\textmu s for up to 64 byte messages and up to 3.9 \\textmu s for up to 512 byte messages. The 95th, 99th and 99.9th percentiles are just slightly higher than the average RTT with 2.2 to 4.0 \\textmu s for the 95th, 2.4 to 4.0 \\textmu s for the 99th and 2.4 to 5.0 \\textmu s for the 99.9th (for up to 512 byte message size). This results in an average message rate of 0.43 to 0.47 mmps for up to 64 byte messages and 0.25 to 0.40 mmps for 128 to 512 byte messages.\n\nCompared to the baseline performance of \\textit{ib\\_send\\_lat}, MVAPICH2's average, 95th, 99th, and 99.9th percentile RTT are very close to the baseline. With a minimum of 2.1 \\textmu s for the average latency and maximum of 5.0 \\textmu s for the 99.9th percentile on small messages, MVAPICH2 shows that its overall overhead is very low.\n\nCompared to DXNet's results (\\S \\ref{eval_dxnet_uni_lat}), MVAPICH2 achieves an overall lower latency. 
DXNet's average with 7.8 to 8.3 \\textmu s is nearly four times higher. The 95th (8.5 to 8.9 \\textmu s), 99th (8.9 to 9.2 \\textmu s) and 99.9th percentile (11.8 to 12.7 \\textmu s) are also at least two to three times higher. MVAPICH2 implements only a very thin layer of abstraction. Application threads issuing MPI calls are pinned to cores and call ibverbs functions directly after passing through these few abstraction layers. DXNet however implements multiple pipeline stages with de\/-serialization and multiple (JNI) context\/thread switches. Naturally, data passing through such a long pipeline takes longer to process, which impacts overall latency. However, DXNet traded latency for multithreading support and performance as well as efficient handling of small messages.\n\n\\subsubsection{All-to-all Throughput with up to 8 Nodes}\n\\label{eval_mva_nodes}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{mva_bi_4nodes.pdf}\n\t\\caption{\\textbf{MVAPICH2}: 4 nodes, aggregated send throughputs and message rates, single threaded with increasing message size}\n\t\\label{eval_mva_nodes4}\n\\end{figure}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{mva_bi_6nodes.pdf}\n\t\\caption{\\textbf{MVAPICH2}: 6 nodes, aggregated send throughputs and message rates, single threaded with increasing message size}\n\t\\label{eval_mva_nodes6}\n\\end{figure}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{mva_bi_8nodes.pdf}\n\t\\caption{\\textbf{MVAPICH2}: 8 nodes, aggregated send throughputs and message rates, single threaded with increasing message size}\n\t\\label{eval_mva_nodes8}\n\\end{figure}\n\nFigures \\ref{eval_mva_nodes4}, \\ref{eval_mva_nodes6} and \\ref{eval_mva_nodes8} show the results of executing the all-to-all benchmark with 4, 6 and 8 nodes. The results for 2 nodes are depicted in figure \\ref{eval_mva_bi_st}. 
\n\nMVAPICH2 achieves a peak throughput of 19.5 GB\/s with 128 kb messages on WSs 16, 32 and 64, with saturation starting at approx. 32 kb message size. WS 8 gets close to the peak throughput as well but the remaining WSs peak lower for messages greater than 32 kb. Minor fluctuations appear for WSs 1 to 16 for 1 kb to 16 kb messages. For small messages of up to 512 byte, the smaller the WS the better the performance. With WS 2, a message rate of 8.4 to 8.8 mmps for up to 64 byte messages is achieved and 6.6 to 8.8 mmps for up to 512 byte. \n\nRunning the benchmark with 6 nodes, MVAPICH2 hits a peak throughput of 27.3 GB\/s with 512 kb messages on WSs 16, 32 and 64. Saturation starts with a message size of approx. 64 to 128 kb depending on the WS. For 1 kb to 32 kb messages, the fluctuations increased compared to executing the benchmark with 4 nodes. Again, the message rate is degraded when using large WSs for small messages. An optimal message rate of 11.9 to 13.1 mmps is achieved with WS 2 for up to 64 byte messages.\n\nWith 8 nodes, the benchmark peaks at 33.3 GB\/s with 64 kb messages on a WS of 64. Again, the WS does matter for large messages as well with WSs 16, 32 and 64 reaching the peak throughput and starting saturation at approx. 128 kb message size. The remaining WSs peak significantly lower. The fluctuations for mid-range message sizes of 1 kb to 64 kb increased further compared to 6 nodes. Most notably, the performance with 4 kb messages and WS 4 is nearly 10 GB\/s better than 4 kb with WS 64. With up to 64 byte messages, a message rate of 16.5 to 17.8 mmps is achieved. For up to 512 byte messages, the message rate varies with 13.5 to 17.8 mmps. As with the previous node counts, a smaller WS increases the message rate significantly while larger WSs degrade performance by a factor of two.\n\nMVAPICH2 has the same ``scalability issues'' as DXNet (\\S \\ref{eval_dxnet_nodes_tp}) and FastMPJ (\\S \\ref{eval_fmpj_nodes}). 
The maximum achievable bandwidth matches what was determined with the other systems. With the same results on three different systems, it is very unlikely that this is some kind of software issue like a bug or bad implementation but most likely a hardware limitation. So far, we haven't seen this issue discussed in any other publication and think it is noteworthy to know what the hardware is currently capable of.\n\nCompared to DXNet (\\S \\ref{eval_dxnet_nodes_tp}), MVAPICH2 reaches slightly higher peak throughputs for large messages. However, this peak as well as saturation are reached later, at 32 to 512 kb messages, compared to DXNet with approx. 16 kb. The fluctuations for mid-range message sizes cannot be compared as DXNet does not rely on explicit aggregation. For small messages up to 64 byte, DXNet achieves significantly higher message rates, with peaks at 7.0 mmps, 15.0 mmps, 21.1 mmps and 27.3 mmps for 2 to 8 nodes, compared to MVAPICH2.\n\n\\subsubsection{Bi-directional Throughput Multi-threaded}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=3.4in]{mva_tp_bi_mt.pdf}\n\t\\caption{\\textbf{MVAPICH2}: 2 nodes, bi-directional throughput and message rate, multi-threaded with one send and one recv thread with increasing message and window size}\n\t\\label{eval_mva_bi_mt}\n\\end{figure}\n\nFigure \\ref{eval_mva_bi_mt} shows the results of the bi-directional multi-threaded benchmark with two threads (on each node): one separate thread each for sending and receiving. In our case, this is the simplest multi-threading configuration to utilize more than one thread for MPI calls. The plot shows highly fluctuating results over the three runs executed as well as overall low throughput compared to the single threaded results (\\S \\ref{eval_mva_bench_bi_st}). Throughput peaks at 8.8 GB\/s with a message size of 512 kb for WS 16. 
A message rate of 0.78 to 1.19 mmps is reached for up to 64 byte messages with WS 32.\n\nWe tried varying the configuration values (e.g. queue sizes, buffer sizes, buffer counts) but could not find configuration parameters that yielded significantly better, especially less fluctuating, results. Furthermore, the benchmarks could not be finished when sending 100,000,000 messages. When using \\textit{MPI\\_THREAD\\_MULTIPLE}, the memory consumption increases continuously and exhausts the total memory available on our machine (64 GB). We reduced the number of messages to 1,000,000 which still consumes approx. 20\\% of the total main memory but at least executes and finishes within a reasonable time. This does not happen with the widely used \\textit{MPI\\_THREAD\\_SINGLE} mode.\n\nMVAPICH2 implements multi-threading support using a single global lock for various MPI calls which includes \\textit{MPI\\_Isend} and \\textit{MPI\\_Irecv} used in the benchmark. This fulfils the requirements described in the MPI standard and avoids a complex architecture with lock-free data structures. However, a single global lock reduces concurrency significantly and does not scale well with increasing thread count \\cite{MPIMultithreading17}. This effect impacts performance less on applications with short bursts and low thread count. However, for multi-threaded applications under high load, a single-threaded approach with one dedicated thread driving the network, decoupled from the application threads, might be a better solution. Data between application threads and the network thread can be exchanged using data structures such as the buffers, queues and pools provided by DXNet.\n\nMVAPICH2's implementation of multi-threading does not allow improving performance by increasing the send or receive thread counts. 
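The decoupled design described above, one dedicated network thread fed by application threads through a shared queue, can be sketched as follows; the classes and calls are simplified stand-ins, not DXNet's actual implementation:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the decoupled design: application threads enqueue messages,
// one dedicated network thread drains the queue and "sends" them. The
// queue is the only synchronization point; no global lock around the
// transport. Class and method names are stand-ins, not DXNet's API.
public class DedicatedSender {
    private static final byte[] POISON = new byte[0]; // shutdown marker

    static int runOnce(int producers, int messagesEach) {
        try {
            BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(1024);
            AtomicInteger sent = new AtomicInteger();

            Thread network = new Thread(() -> { // the single thread driving the NIC
                try {
                    byte[] msg;
                    while ((msg = queue.take()) != POISON) {
                        sent.incrementAndGet();  // stand-in for the actual send
                    }
                } catch (InterruptedException ignored) { }
            });
            network.start();

            Thread[] apps = new Thread[producers];
            for (int i = 0; i < producers; i++) {
                apps[i] = new Thread(() -> {
                    for (int m = 0; m < messagesEach; m++) {
                        try {
                            queue.put(new byte[64]); // blocks when the queue is full
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                });
                apps[i].start();
            }
            for (Thread t : apps) t.join();
            queue.put(POISON);  // all producers done: stop the network thread
            network.join();
            return sent.get();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(runOnce(16, 1000)); // prints 16000
    }
}
```

Adding application threads here only increases contention on the queue, not on the transport itself, which is why this pattern scales better under high load than a global lock around every MPI call.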
Thus, further multi-threaded experiments using MVAPICH2 are not reasonable.\n\n\\subsubsection{Summary Results}\n\nThis section briefly summarizes the most important results and numbers of the previous benchmarks. All values are considered ``up to'' and show the possible peak performance in the given benchmark.\n\\textbf{Single-threaded}:\n\\begin{itemize}\n \\item \\textbf{Uni-directional throughput} Saturation with 64 kb to 128 kb message size, peak at 5.9 GB\/s; Message rate of 4.0 mmps for up to 64 byte messages\n \\item \\textbf{Bi-directional throughput} Saturation at 512 kb message size, peak at 11.1 GB\/s; Message rate of 4.7 mmps for up to 64 byte messages\n \\item \\textbf{Uni-directional latency} For up to 64 byte message size: 2.1 to 2.4 \\textmu s average latency and 2.4 to 5.0 \\textmu s for 99.9th percentile; 0.43 to 0.47 mmps message rate \n \\item \\textbf{All-to-all nodes} For 8 nodes: peak at 33.3 GB\/s with 64 kb message size on WS 64, WS matters for large messages; Message rate of 16.5 to 17.8 mmps for up to 64 byte messages\n \\item \\textbf{Bi-directional throughput multi-threaded}: High fluctuations with low throughputs caused by global locking, 8.8 GB\/s peak throughput at 512 kb message size; Message rate of 0.78 to 1.19 mmps for up to 64 byte messages\n\\end{itemize}\n\nCompared to DXNet, the uni-directional results are similar but DXNet does not require explicit message aggregation to deliver high throughput. On bi-directional communication, MVAPICH2 achieves a slightly higher aggregated peak throughput than DXNet but DXNet performs better by approx. 0.9 GB\/s on medium sized messages. DXNet outperforms MVAPICH2 on small messages with an up to 1.8 times higher message rate. However, MVAPICH2 clearly outperforms DXNet on the uni-directional latency benchmark with an overall lower average, 95th, 99th and 99.9th percentile latency.
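The throughput and message-rate figures above are two views of the same measurement, related by throughput = message rate times message size. A small helper (illustrative only; the numbers are taken from the summary above) makes the conversion explicit:

```python
def payload_throughput_gbs(mmps: float, msg_size_bytes: int) -> float:
    """Payload throughput in GB/s implied by a message rate given in
    million messages per second (mmps) at a fixed message size."""
    return mmps * 1e6 * msg_size_bytes / 1e9

# The 4.7 mmps bi-directional peak at 64 byte messages carries only about
# 0.3 GB/s of payload -- far below the 11.1 GB/s peak for large messages,
# which is why small-message performance is reported as a rate, not bandwidth.
assert abs(payload_throughput_gbs(4.7, 64) - 0.3008) < 1e-9
```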
On all-to-all communication with up to 8 nodes, MVAPICH2 reaches slightly higher peak throughputs for large messages but DXNet reaches its saturation earlier and performs significantly better on small message sizes up to 64 bytes.\n\nThe low multi-threading performance of MVAPICH2 cannot be compared to DXNet's for the following reasons: First, MVAPICH2 implements synchronization using a global lock, which is the simplest but often the least performant method to ensure thread safety. Second, MVAPICH2, like many other MPI implementations, typically creates multiple processes (one process per core) to enable concurrency on a single processor socket. However, as already discussed in related work (\\S \\ref{related_work}), this programming model is not suitable for all application domains, especially in big data applications.\n\n\\textbf{DXNet is better for small messages and multi-threaded access as required in big data applications.}\n\n\\section{Conclusions}\n\\label{conclusions}\n\nWe presented Ibdxnet, a transport for the Java messaging library DXNet which allows multi-threaded Java applications to benefit from low latency and high throughput using InfiniBand hardware. DXNet provides transparent connection management, concurrency handling, message serialization and hides the transport, which allows the application to switch from Ethernet to InfiniBand hardware transparently, if the hardware is available. Ibdxnet's native subsystem provides dynamic, scalable, concurrent and automatic connection management and the msgrc messaging engine implementation. The msgrc engine uses dedicated send and receive threads to drive RC QPs asynchronously, which ensures scalability with many nodes. Load adaptive parking avoids high CPU load when idle but still ensures low latency when busy. SGEs are used to simplify buffer handling and increase buffer utilization when sending data provided by the higher level DXNet core.
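The load adaptive parking mentioned above can be read as a three-phase wait strategy: busy-poll first for minimum latency, then yield the time slice, and finally park for short intervals so that an idle connection does not burn a core. A simplified sketch follows (Ibdxnet's native implementation and its thresholds differ; the constants here are arbitrary):

```python
import time

def adaptive_wait(poll, spin_iters=10_000, yield_iters=1_000, park_s=0.001):
    """Wait for poll() to become true with load-adaptive backoff:
    busy polling (lowest latency) -> yielding -> short parks (low CPU)."""
    for _ in range(spin_iters):        # phase 1: pure busy polling
        if poll():
            return
    for _ in range(yield_iters):       # phase 2: yield the time slice
        if poll():
            return
        time.sleep(0)
    while True:                        # phase 3: park briefly until work arrives
        if poll():
            return
        time.sleep(park_s)

# A poll source that becomes ready only after the busy-poll phase has passed:
READY_AFTER = 10_500
calls = {"n": 0}
def poll():
    calls["n"] += 1
    return calls["n"] > READY_AFTER

adaptive_wait(poll)
assert calls["n"] == 10_501  # became ready during the yield phase
```

The thresholds trade latency against CPU usage: larger spin counts keep latency low under load, while earlier parking reduces the load caused by idle connections.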
A carefully crafted architecture minimizes context switching between Java and the native space and exchanges data efficiently using shared memory buffers. The evaluation shows that DXNet with the Ibdxnet transport can keep up with FastMPJ and MVAPICH2 in single-threaded applications and even exceed them in multi-threaded applications under high load. DXNet with Ibdxnet is capable of handling concurrent connections and data streams with up to 8 nodes. Furthermore, multi-threaded applications benefit significantly from the multi-threading aware architecture.\n\nThe following topics are of interest for future research with DXNet and Ibdxnet:\n\\begin{itemize}\n \\item Experiments with more than 100 nodes on our university's cluster\n \\item Evaluate DXNet with the key-value store DXRAM using the YCSB and compare it to RAMCloud\n \\item Implementation and evaluation of a UD QP based transport engine\n \\item Hybrid mode for DXNet: Analyze whether applications benefit from using Ethernet and InfiniBand if both are available\n \\item RDMA path: Boost performance for applications like key-value storages\n\\end{itemize}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe unification of gravity and electromagnetism has remained an unfulfilled goal in spite of many efforts by stalwarts like Einstein, Weyl, Kaluza and Schr\\\"{o}dinger. This has been largely due to the unrealistic dream of a {\\em unitary classical field theory} to achieve the unification of gravity and electromagnetism and at the same time that of waves and particles by having particles as spherically symmetric singularity-free solutions of the field equations. No singularity-free solutions of these field equations, however, have been found. Moreover, after the advent of quantum theory, the unification of particles and fields has acquired a completely different significance.
The quantum theoretic unification of the electromagnetic and weak nuclear forces has also opened up a new perspective on the unification of forces. Nevertheless, all attempts to unify gravity with other forces have remained unsuccessful so far because of a fundamental incompatibility between quantum mechanics and general relativistic gravity, namely, quantum theories can be constructed only on a fixed nondynamical space-time background whereas General Relativity requires diffeomorphism invariance. This has led in recent years to the view that perhaps gravity is an {\\em emergent} rather than a fundamental field, and hence does not need to be quantized \\cite{clgr}. In this situation it would be worthwhile once again to revisit the earlier attempts at unification of classical electromagnetism and gravity without invoking extra dimensions. \n\nIn this context an almost completely ignored paper of S. N. Bose \\cite{bose} is of particular interest. In this paper Bose generalized the equations of Einstein's unitary field theory and derived them from a variational principle. This resulted in interesting mathematical solutions. However, Bose included terms that broke an important symmetry of the Einstein action, namely, $\\Lambda$-{\\em transformation invariance} \\cite{einstein} or, in modern parlance, {\\em projective invariance} \\cite{proj} that is necessary for a true geometric unification. 
In what follows a slightly different approach will be taken that is in conformity with modern developments in unified theories.\n\n\\section{A Projective Invariant Unified Theory}\n\nLet the starting point be a 4-dimensional manifold ${\\cal{E}}$ with signature $(+,+,+,-)$, a non-symmetric tensor $g^{\\mu\\nu}$ and a non-symmetric affine connection $\\Gamma$ with the property \n\\beq\n\\Gamma_\\mu = \\frac{1}{2}\\left(\\Gamma^\\lambda_{\\mu\\lambda} - \\Gamma^\\lambda_{\\lambda\\mu}\\right) \\neq 0.\n\\eeq \nLet\n\\beq\n\\Gamma^\\lambda_{\\mu\\nu} = \\Gamma^\\lambda_{(\\mu\\nu)} + \\Gamma^\\lambda_{[\\mu\\nu]}\n\\eeq where\n\\ben\n\\Gamma^\\lambda_{(\\mu\\nu)} &=& \\frac{1}{2}\\left(\\Gamma^\\lambda_{\\mu\\nu} + \\Gamma^\\lambda_{\\nu\\mu}\\right),\\\\\n\\Gamma^\\lambda_{[\\mu\\nu]} &=& \\frac{1}{2}\\left(\\Gamma^\\lambda_{\\mu\\nu} - \\Gamma^\\lambda_{\\nu\\mu}\\right), \n\\een and\n\\beq\n\\Gamma_\\mu = \\Gamma^\\lambda_{[\\mu\\lambda]} \\neq 0. \n\\eeq\nThis condition turns out to be of crucial importance as $\\Gamma_\\mu$ acts, as we will see, as the common source term for the electromagnetic and gravitational fields.\n$\\Gamma^\\lambda_{[\\mu\\nu]}$ is known as the Cartan torsion tensor which will be related to the electromagnetic field.\n\nConsider a non-symmetric tensor of the form\n\\beq\nE_{\\mu\\nu} = \\Gamma^\\lambda_{\\mu\\nu,\\,\\lambda} - \\frac{1}{2}\\left(\\Gamma^\\lambda_{\\mu\\lambda,\\,\\nu} + \\Gamma^\\lambda_{\\nu\\lambda,\\,\\mu} \\right) + 2\\Gamma^\\xi_{\\mu\\nu}\\Gamma^\\lambda_{\\xi\\lambda} - 2\\Gamma^\\xi_{\\mu\\lambda}\\Gamma^\\lambda_{\\xi\\nu}. \\label{E} \n\\eeq\nThis tensor is both {\\em transposition invariant} and {\\em $\\Lambda$-transformation invariant}. These are two symmetries that restrict the number of possible covariant terms in a nonsymmetric theory \\cite{einstein}.
\n\n{\\flushleft{\\em Transposition symmetry}}\n\nLet $\\widetilde{\\Gamma^\\lambda_{\\mu\\nu}} = \\Gamma^\\lambda_{\\nu\\mu}$ and $\\widetilde{g_{\\mu\\nu}} = g_{\\nu\\mu}$. Then terms that are invariant under the simultaneous replacements of $\\Gamma^\\lambda_{\\mu\\nu}$ and $g_{\\mu\\nu}$ by $\\widetilde{\\Gamma^\\lambda_{\\mu\\nu}}$ and $\\widetilde{g_{\\mu\\nu}}$, followed by the interchange of the two lower indices, are called transposition invariant. For example, the tensor\n\\beq\nE^\\prime_{\\mu\\nu} = \\Gamma^\\lambda_{\\mu\\nu,\\,\\lambda} - \\Gamma^\\lambda_{\\mu\\lambda,\\,\\nu} + \\Gamma^\\xi_{\\mu\\nu}\\Gamma^\\lambda_{\\xi\\lambda} - \\Gamma^\\xi_{\\mu\\lambda}\\Gamma^\\lambda_{\\xi\\nu}\n\\eeq is not transposition invariant because it is transposed to\n\\beq\n\\widetilde{E^\\prime_{\\mu\\nu}} = \\Gamma^\\lambda_{\\mu\\nu,\\,\\lambda} - \\Gamma^\\lambda_{\\nu\\lambda,\\,\\mu} + \\Gamma^\\xi_{\\mu\\nu}\\Gamma^\\lambda_{\\xi\\lambda} - \\Gamma^\\xi_{\\mu\\lambda}\\Gamma^\\lambda_{\\xi\\nu} \\neq E^\\prime_{\\mu\\nu}. \n\\eeq\nBut (\\ref{E}) is transposition invariant.\n\n{\\flushleft{\\em $\\Lambda$-transformation or projective symmetry}}\n\nDefine the transformations\n\\ben\n\\Gamma^{\\lambda *}_{\\mu\\nu} &=& \\Gamma^{\\lambda}_{\\mu\\nu} + \\delta^\\lambda_\\mu \\Lambda_{,\\,\\nu},\\nonumber\\\\\ng^{\\mu\\nu *} &=& g^{\\mu\\nu}\\label{proj},\n\\een \nwhere $\\Lambda$ is an arbitrary function of the coordinates, $\\delta^\\lambda_\\mu$ is the Kronecker tensor, and the comma denotes the partial derivative. It is easy to check that $E_{\\mu\\nu}$ given by (\\ref{E}) and hence $g^{\\mu\\nu}E_{\\mu\\nu}$ are invariant under these transformations. What this means is that a theory characterized by $E_{\\mu\\nu}$ cannot determine the $\\Gamma$-field completely but only up to an arbitrary function $\\Lambda$. Hence, in such a theory, $\\Gamma$ and $\\Gamma^*$ represent the same field. 
Further, this {\\em $\\Lambda$-transformation} produces a non-symmetric $\\Gamma^*$ from a $\\Gamma$ that is symmetric or anti-symmetric in the lower indices. Hence, the symmetry condition for $\\Gamma$ loses objective significance. This sets the ground for a genuine unification of the gravitational and electromagnetic fields, the former determined by the symmetric part of the tensor $E_{\\mu\\nu}$ and the latter by its antisymmetric part.\n\nSeparating the symmetric and antisymmetric parts of $E_{\\mu\\nu}$, and using the definitions\n\\ben\nR^\\prime_{\\mu\\nu}&=& \\Gamma^\\lambda_{(\\mu\\nu),\\,\\lambda} - \\frac{1}{2}\\left(\\Gamma^\\lambda_{(\\mu\\lambda),\\,\\nu} + \\Gamma^\\lambda_{(\\nu\\lambda),\\,\\mu} \\right) + \\Gamma^\\xi_{(\\mu\\nu)}\\Gamma^\\lambda_{(\\xi\\lambda)} - \\Gamma^\\xi_{(\\mu\\lambda)}\\Gamma^\\lambda_{(\\xi\\nu)},\\\\\nG^\\lambda_{\\mu\\nu} &=& \\Gamma^\\lambda_{[\\mu\\nu]} + \\frac{1}{3}\\delta^\\lambda_\\mu \\Gamma_\\nu - \\frac{1}{3}\\delta^\\lambda_\\nu \\Gamma_\\mu,\\label{G}\\\\\nG^\\lambda_{\\mu\\nu;\\,\\lambda} &=& G^\\lambda_{\\mu\\nu,\\,\\lambda} - G^\\lambda_{\\mu\\xi}\\Gamma^\\xi_{(\\lambda\\nu)} - G^\\lambda_{\\xi\\nu}\\Gamma^\\xi_{(\\mu\\lambda)} + G^\\xi_{\\mu\\nu}\\Gamma^\\lambda_{(\\xi\\lambda)},\n\\een \none can show that\n\\ben\nE_{\\mu\\nu} &=& \\frac{1}{2}\\left[R^\\prime_{\\mu\\nu} - G^\\lambda_{\\mu\\xi}G^\\xi_{\\lambda\\nu} + \\frac{1}{3}\\Gamma_\\mu \\Gamma_\\nu -\\frac{1}{2}(\\Gamma_{\\mu,\\,\\nu} + \\Gamma_{\\nu,\\,\\mu}) + \\Gamma^\\lambda_{(\\mu\\nu)}\\Gamma_\\lambda\\right]\\nonumber\\\\ &+& \\frac{1}{2}\\left[G^\\lambda_{\\mu\\nu;\\,\\lambda} - \\frac{1}{3}(\\Gamma_{\\mu,\\,\\nu} - \\Gamma_{\\nu,\\,\\mu})\\right].\n\\een \nNotice that by construction $G^\\lambda_{\\mu\\lambda} = 0$. 
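It is worth verifying explicitly that $G^\\lambda_{\\mu\\nu}$ is invariant under the $\\Lambda$-transformation (\\ref{proj}). From the definition of $\\Gamma_\\mu$ one finds $\\Gamma^*_\\mu = \\Gamma_\\mu - \\frac{3}{2}\\Lambda_{,\\,\\mu}$, while the torsion transforms as $\\Gamma^{\\lambda *}_{[\\mu\\nu]} = \\Gamma^\\lambda_{[\\mu\\nu]} + \\frac{1}{2}\\left(\\delta^\\lambda_\\mu \\Lambda_{,\\,\\nu} - \\delta^\\lambda_\\nu \\Lambda_{,\\,\\mu}\\right)$. Substituting in (\\ref{G}), the two contributions cancel:\n\\beq\nG^{\\lambda *}_{\\mu\\nu} = G^\\lambda_{\\mu\\nu} + \\frac{1}{2}\\left(\\delta^\\lambda_\\mu \\Lambda_{,\\,\\nu} - \\delta^\\lambda_\\nu \\Lambda_{,\\,\\mu}\\right) - \\frac{1}{2}\\left(\\delta^\\lambda_\\mu \\Lambda_{,\\,\\nu} - \\delta^\\lambda_\\nu \\Lambda_{,\\,\\mu}\\right) = G^\\lambda_{\\mu\\nu}.\n\\eeq\nThus the combination $G^\\lambda_{\\mu\\nu}$, unlike the torsion itself, has an objective meaning even before the projective symmetry is broken.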
One can now write an invariant Lagrangian density\n\\ben\n{\\cal{L}} &=& \\frac{1}{\\kappa} \\sqrt{\\vert g\\vert}g^{\\mu\\nu}E_{\\mu\\nu} = \\frac{1}{\\kappa}\\left(s^{\\mu\\nu} + a^{\\mu\\nu}\\right)E_{\\mu\\nu}\\nonumber\\\\ \n&=& \\frac{1}{\\kappa} s^{\\mu\\nu}\\left[R_{\\mu\\nu} - G^\\lambda_{\\mu\\xi}G^\\xi_{\\lambda\\nu} + \\frac{1}{3}\\Gamma_\\mu \\Gamma_\\nu + \\Gamma^\\lambda_{(\\mu\\nu)}\\Gamma_\\lambda - \\Gamma_{\\mu,\\,\\nu} \\right]\\nonumber\\\\ &+& \\frac{1}{\\kappa} a^{\\mu\\nu}\\left[G^\\lambda_{\\mu\\nu;\\,\\lambda} - \\frac{1}{3}(\\Gamma_{\\mu,\\,\\nu} - \\Gamma_{\\nu,\\,\\mu})\\right],\\label{L}\n\\een\nwhere\n\\ben\nR_{\\mu\\nu}&=& \\Gamma^\\lambda_{(\\mu\\nu),\\,\\lambda} - \\Gamma^\\lambda_{(\\mu\\lambda),\\,\\nu} + \\Gamma^\\xi_{(\\mu\\nu)}\\Gamma^\\lambda_{(\\xi\\lambda)} - \\Gamma^\\xi_{(\\mu\\lambda)}\\Gamma^\\lambda_{(\\xi\\nu)},\\\\\ns^{\\mu\\nu} &=& \\frac{1}{2}\\sqrt{\\vert g\\vert}\\left(g^{\\mu\\nu} + g^{\\nu\\mu}\\right) \\equiv \\sqrt{\\vert g\\vert}g^{(\\mu\\nu)},\\\\\na^{\\mu\\nu} &=& \\frac{1}{2}\\sqrt{\\vert \\overline{g}\\vert}\\left(g^{\\mu\\nu} - g^{\\nu\\mu}\\right) \\equiv \\sqrt{\\vert g\\vert}g^{[\\mu\\nu]},\\\\\n\\vert \\overline{g}\\vert &=& \\vert g\\vert, \n\\een and $\\kappa$ is an arbitrary constant of the dimension of inverse force.\nLet us therefore consider the variation\n\\beq\n\\delta I = \\delta \\int {\\cal{L}}\\, d^4 x = 0.\n\\eeq\nArbitrary variations of $s^{\\mu\\nu}$ and $a^{\\mu\\nu}$ while keeping the connections fixed (generalized Palatini variations) give rise to the field equations\n\\ben\nR_{\\mu\\nu} - G^\\lambda_{\\mu\\xi}G^\\xi_{\\lambda\\nu} + \\frac{1}{3} \\Gamma_\\mu \\Gamma_\\nu + \\Gamma^\\lambda_{(\\mu\\nu)}\\Gamma_\\lambda &=& 0,\\label{A}\\\\\nG^\\lambda_{\\mu\\nu;\\,\\lambda} - \\frac{1}{3}\\left(\\Gamma_{\\mu,\\,\\nu} - \\Gamma_{\\nu,\\,\\mu}\\right) &=& 0.
\\label{B}\n\\een\nThe coefficients of $s^{\\mu\\nu}$ and $a^{\\mu\\nu}$ in the Lagrangian (\\ref{L}) are respectively the symmetric and anti-symmetric curvature tensors in the theory. These variational equations show that these tensors vanish. They also show that $\\Gamma_\\mu$ acts as the common source of $G^\\lambda_{\\mu\\nu;\\,\\lambda}$ and $R_{\\mu\\nu}$. In a theory in which $\\Gamma_\\mu =0$, $G^\\lambda_{\\mu\\nu;\\,\\lambda}$ and $R_{\\mu\\nu}$ would have no common source, and the two cannot be said to be genuinely unified.\n\n\nTo derive the equations of connection, one can use a variational principle with an undetermined Lagrange multiplier $k^\\mu$, namely\n\\beq\n\\delta\\int \\left(\\kappa{\\cal{L}} - 2k^\\mu G^\\lambda_{\\mu\\lambda}\\right) d^4 x = 0,\n\\eeq \nin which all the $24$ components of $G^\\lambda_{\\mu\\nu}$ are treated as independent although, as we have seen, they are not because $G^\\lambda_{\\mu\\lambda} = 0$. One then obtains the equation (see Appendix for details)\n\\beq\ng^{\\mu\\nu}_{,\\,\\lambda} + g^{\\mu\\alpha}\\Gamma^{\\prime\\nu}_{\\lambda\\alpha} + g^{\\alpha\\nu}\\Gamma^{\\prime\\mu}_{\\alpha\\lambda} = 3 g^{\\mu\\nu}\\Phi_\\lambda \n\\eeq\nwith the affine connections $\\Gamma^\\prime$ given by Eqns. (\\ref{affine1}) and (\\ref{affine2}) in the Appendix, and\n\\ben\n\\Phi_\\lambda &=& \\frac{g_{[\\lambda\\beta]}k^\\beta}{\\sqrt{\\vert g\\vert}},\\\\\nk^\\beta &=& \\frac{1}{3}\\left(s^{\\beta\\nu}\\Gamma_\\nu + \\frac{3}{2} s^{\\lambda\\nu} \\Gamma^\\beta_{(\\lambda\\nu)} \\right),\\label{k}\n\\een\nand\n\\ben\ns^{\\mu\\alpha}_{,\\,\\alpha} + s^{\\alpha\\beta}\\Gamma^\\mu_{(\\alpha\\beta)} + a^{\\alpha\\beta}G^\\mu_{\\alpha\\beta} &=& 0,\\\\\na^{\\mu\\nu}_{,\\,\\nu} &=& 3k^\\mu.\\label{kmu}\n\\een The last equation implies\n\\beq\nk^\\mu_{,\\,\\mu} = 0.\n\\eeq\nEqn.
(\\ref{k}) determines this 4-vector $k^\\mu$ and constrains the number of independent components of $G^\\lambda_{\\mu\\nu}$ to be $20$ in accordance with the property $G^\\lambda_{\\mu\\lambda} = 0$.\nIf the determinant\n\\beq\n\\vert \\vert g_{[\\lambda\\beta]}\\vert \\vert = \\left(g_{12}g_{34} + g_{23}g_{14} + g_{31}g_{24}\\right)^2 = 0, \n\\eeq\none can have $\\Phi_\\lambda = 0$ but $k^\\mu \\neq 0$. It is possible in this case to relate the electromagnetic field intensity with $a^{\\mu\\nu}$ through the relation\n\\beq\n{\\mathfrak{F}}^{\\mu\\nu} = e c {\\mathcal{R}} a^{\\mu\\nu}\\label{emf}\n\\eeq with a non-zero curvature\n\\beq\n{\\mathcal{R}} = g^{[\\mu\\nu]}E_{[\\mu\\nu]}.\n\\eeq Thus, $(e c {\\mathcal{R}}\\sqrt{\\vert g\\vert})\\left(g_{23}, g_{31}, g_{12}\\right)$ are components of the magnetic field $\\vec{B}$ and $(e c {\\mathcal{R}}\\sqrt{\\vert g\\vert})\\left(g_{41}, g_{42}, g_{43}\\right)$ those of the electric field $i\\vec{E}$ which satisfy the condition $\\vec{E}\\,. \\vec{B} = 0$, $e$ being the electric charge and $c = \\frac{1}{\\sqrt{\\epsilon_0 \\mu_0}}$. In the absence of electrically charged matter, $e = 0$, and hence $\\vec{E}$ and $\\vec{B}$ vanish, but $a^{\\mu\\nu}\\neq 0$, and the geometric structure of the electromagnetic field remains. 
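The vanishing of this determinant has a compact algebraic meaning: the determinant of an antisymmetric $4\\times 4$ matrix is the square of its Pfaffian, which is precisely the bracketed combination above (with $g_{31} = -g_{13}$). A small numerical sketch (with arbitrary illustrative entries, not physical values) confirms the identity:

```python
from itertools import permutations

def det4(m):
    """4x4 determinant via the Leibniz formula."""
    def sign(p):
        s, q = 1, list(p)
        for i in range(4):
            while q[i] != i:
                j = q[i]
                q[i], q[j] = q[j], q[i]
                s = -s
        return s
    return sum(sign(p) * m[0][p[0]] * m[1][p[1]] * m[2][p[2]] * m[3][p[3]]
               for p in permutations(range(4)))

def antisym(g12, g13, g14, g23, g24, g34):
    """Antisymmetric matrix built from the independent components g_[mu nu]."""
    return [[0,    g12,  g13,  g14],
            [-g12, 0,    g23,  g24],
            [-g13, -g23, 0,    g34],
            [-g14, -g24, -g34, 0]]

g12, g13, g14, g23, g24, g34 = 2, 3, 5, 7, 11, 13
# Pfaffian written as in the text: g12*g34 + g23*g14 + g31*g24, with g31 = -g13
pf = g12*g34 + g23*g14 + (-g13)*g24
assert det4(antisym(g12, g13, g14, g23, g24, g34)) == pf**2
```

Hence $\\vert\\vert g_{[\\lambda\\beta]}\\vert\\vert = 0$ is exactly the condition that the Pfaffian $g_{12}g_{34} + g_{23}g_{14} + g_{31}g_{24}$ vanishes, which corresponds to the condition $\\vec{E}\\,.\\vec{B} = 0$ in the identification made above.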
It is only with the introduction of charged matter, as we will see, that this geometric structure acquires physical dimensions.\n\nThe equations of connection are then of the form \n\\beq\ng^{\\mu\\nu}_{,\\,\\lambda} + g^{\\mu\\alpha}\\Gamma^{\\prime\\nu}_{\\lambda\\alpha} + g^{\\alpha\\nu}\\Gamma^{\\prime\\mu}_{\\alpha\\lambda} = 0,\\label{C}\n\\eeq\nand one also has \n\\beq\n{\\mathfrak{F}}^{\\mu\\nu}_{,\\,\\nu} = e c{\\mathcal{R}}_{,\\,\\nu} a^{\\mu\\nu} + e c{\\mathcal{R}}\\, a^{\\mu\\nu}_{,\\,\\nu} = e {\\mathfrak{J}}^\\mu_{em}.\\label{jem}\n\\eeq When ${\\mathfrak{J}}^\\mu_{em} = 0$,\n\\beq\na^{\\mu\\nu}_{,\\,\\nu} = - {\\mathcal{R}}^{-1}{\\mathcal{R}}_{,\\,\\nu}a^{\\mu\\nu} = 3 k^\\mu.\n\\eeq\nThis is the case in the projective invariant limit with no particles present.\n\nHowever, as we have seen, $R_{\\mu\\nu}$ and $G^\\lambda_{\\mu\\nu;\\,\\lambda}$ cannot be objectively separated and identified with the physical gravitational and electromagnetic fields respectively because of projective invariance. In the observable universe at present, however, this symmetry is badly broken in the sense that the electromagnetic and gravitational fields can be objectively separated and identified, and the electric charge $e$ and the gravitational charge $\\kappa = 8\\pi G\/c^4$ are widely different. Furthermore, there are no charged particles in the theory which can be shown to be singularity-free solutions of the classical field equations and which can act as the source ${\\mathfrak{J}}^\\mu_{em}$ of the electromagnetic field. This makes the projective invariant classical theory with $\\Gamma_\\mu \\neq 0$ incomplete. In addition, there are strong and weak nuclear interactions and quantum mechanical effects that need to be taken into account. These are the issues that will be addressed in the following sections. \n\n\\section{Matter and Projective Symmetry Breaking}\n\nLet us first consider projective symmetry breaking.
In the perspective of modern developments in unified theories, it would be natural to think of a symmetry breaking transition at some appropriate stage of the evolution of the universe that separates the gravitational and electromagnetic fields objectively and physically. Such a scenario would be possible provided there is some natural mechanism to break the {\\em $\\Lambda$-transformation} or projective symmetry of the action so that the symmetric and anti-symmetric parts of the connection can be objectively separated. The symmetry condition for the connection $\\Gamma$ characteristic of Riemann manifolds and Einstein's gravitational theory based on them would then acquire objective significance. From such a symmetry breaking would then emerge the observed space-time world endowed with a symmetric dynamical metric field $g_{(\\mu\\nu)}$ encoding gravity as well as the anti-symmetric field $g_{[\\mu\\nu]}$ (resulting from torsion) encoding electromagnetism. The most natural stage for such a symmetry breaking to occur would be the emergence of matter fields at the end of a non-symmetric affine field dominated phase of a {\\em premetric} universe. This is because a matter Lagrangian ${\\cal{L}}_m$ obtained by minimally coupling the matter to the connection is not generally projective invariant. \n\nIn such a theory one would have\n\\beq\n{\\mathfrak{J}}^\\mu_{em} = \\left[\\sum_i \\bar{\\psi}_i\\gamma^\\mu \\psi_i + \\sum_j\\bar{\\phi}_j\\beta^\\mu \\phi_j + \\cdots \\right]\n\\eeq\nwhere $\\psi_i$ are Dirac wavefunctions describing spin-$\\frac{1}{2}$ particles, $\\phi_j$ are Kemmer-Duffin wavefunctions describing spin-0 and spin-1 particles, the $\\beta$s being the Kemmer-Duffin-Petiau matrices, and the dots represent higher spin wavefunctions if any. 
\n\nThe broken symmetric Lagrangian density including the matter wavefunctions can then take the general form \n\\ben\n{\\cal{L}} ^\\prime &=& \\frac{1}{\\kappa}s^{\\mu\\nu}\\left[R_{\\mu\\nu} - k^\\prime \\left(G^\\lambda_{\\mu\\xi}G^\\xi_{\\lambda\\nu} - \\frac{1}{3} \\Gamma_\\mu \\Gamma_\\nu - \\Gamma^\\lambda_{(\\mu\\nu)}\\Gamma_\\lambda\\right)\\right]\\nonumber\\\\ &+& \\frac{1}{\\kappa}a^{\\mu\\nu}\\left[G^\\lambda_{\\mu\\nu;\\lambda} - \\frac{k}{3}(\\Gamma_{\\mu,\\,\\nu} - \\Gamma_{\\nu,\\,\\mu})\\right]\\nonumber\\\\ &+& {\\cal{L}}_m (\\psi, g, \\Gamma)\n\\een \nwhere now $\\kappa = 8\\pi G\/c^4$ and $k$ and $k^\\prime$ are arbitrary dimensionless constants to be determined by experiments.\nHence, varying $s^{\\mu\\nu}$ and $a^{\\mu\\nu}$ together with the connections,\none obtains \n\\ben\nR_{\\mu\\nu} - \\frac{1}{2}g_{(\\mu\\nu)}R &=& -\\kappa \\left[T^{m}_{(\\mu\\nu)} + T^{em}_{(\\mu\\nu)}\\right], \\label{gr}\\\\\nG^\\lambda_{\\mu\\nu;\\,\\lambda} &=& \\frac{k}{3} \\left(\\Gamma_{\\mu,\\,\\nu} - \\Gamma_{\\nu,\\,\\mu}\\right)\\label{em},\n\\een where\n\\ben\nT^m_{(\\mu\\nu)} &=& -\\frac{2}{\\sqrt{\\vert g\\vert}}\\frac{\\delta \\left(\\sqrt{\\vert g\\vert}{\\cal{L}}_m \\right)}{\\delta g^{(\\mu\\nu)}},\\label{F5}\\\\\nT^{em}_{(\\mu\\nu)} &=& \\frac{k^\\prime}{\\kappa}\\left[\\frac{1}{3}\\Gamma_\\mu \\Gamma_\\nu + \\Gamma^\\lambda_{(\\mu\\nu)}\\Gamma_\\lambda - G^\\lambda_{\\mu\\xi}G^\\xi_{\\lambda\\nu}\\right]\n\\een and\n\\ben\n\\Gamma_\\mu &=& \\frac{1}{c {\\mathcal{R}}}{\\mathfrak{J}}^{em}_\\mu - \\frac{1}{e c{\\mathcal{R}}^2} {\\mathcal{R}}_{,\\, \\nu} {\\mathfrak{F}}^{\\,\\,\\nu}_{\\mu} - \\frac{3}{2} s^{\\lambda\\nu}\\Gamma_{\\mu, (\\lambda\\nu)},\\\\\n\\Gamma^\\lambda_{(\\mu\\nu)} &=& \\frac{1}{2}g^{(\\lambda\\rho)}\\left(g_{(\\rho\\mu,\\,\\nu)} + g_{(\\rho\\nu,\\,\\mu)} - g_{(\\mu\\nu,\\,\\rho)} \\right),\\\\\n\\Gamma_{\\mu, (\\lambda\\nu)} &=& \\frac{1}{2}\\left(g_{(\\mu\\lambda),\\,\\nu} + g_{(\\mu\\nu),\\,\\lambda} - 
g_{(\\lambda\\nu),\\,\\mu}\\right).\n\\een The first of these equations follows from Eqn. (\\ref{jem}) and Eqns. (\\ref{k}), (\\ref{kmu}) and (\\ref{emf}), and the other two are consequences of Eqn. (\\ref{C}) for the symmetric part. \nThe constant $k^\\prime$ must be chosen to fit experimental data on electromagnetic contributions to the total stress-energy tensor. A comparison of Eqns. (\\ref{gr}) and \n(\\ref{em}) suggests that we identify $k =3\\sqrt{\\alpha}$ where $\\alpha = 1\/137$ is the dimensionless fine structure constant so that the two fundamental constants $\\kappa$ and $\\alpha$ determine the strengths of the couplings of the sources to the symmetric and antisymmetric curvature tensors in the theory. \n\nIn the projective invariant limit, $k^\\prime = k = 1$ and $\\kappa \\rightarrow 0$ so that the matter Lagrangian density can be ignored in comparison with the other terms. Hence, the theory predicts that in the invariant limit $\\alpha_{sym} = \\frac{1}{9}$.\n\nNotice also that $T^{em}_{(\\mu\\nu)}$ differs from the standard general relativistic form of the electromagnetic stress-energy tensor\n\\beq\n\\frac{1}{\\mu_0}\\left(F^{\\mu\\alpha}F^\\nu\\,_{\\alpha} - \\frac{1}{4}g^{(\\mu\\nu)}F_{\\alpha\\beta} F^{\\alpha\\beta}\\right).\n\\eeq Hence, the predictions of the theory regarding the effects of the electromagnetic stress tensor on gravity differ from those of standard General Relativity and can, in principle, be tested. These effects are being investigated and will be reported elsewhere. \n\n\\section{Quantization}\nLet us now see how the above scheme can be incorporated into the current understanding of gravitation and the quantum theory of matter and radiation. We first note that the electromagnetic gauge potential $A_\\mu$ has played no role so far in our considerations. That is because $A_\\mu$ is required for minimal coupling with charged matter which is absent in a projective invariant theory.
Charged matter is represented by complex quantum mechanical wavefunctions whose imaginary parts are arbitrary local phases, a typical quantum mechanical feature that is wholly absent in classical theories of matter. $A_\\mu$ has the geometric significance of a connection associated with a horizontal subspace of a principal bundle $P = (E, \\Pi, M, G)$ (associated with the phase) whose projection is $\\Pi: E \\rightarrow M$. The 1-form $A = A_\\mu dx^\\mu$ transforms as $hAh^{-1} + hdh$ under a group transformation $g^\\prime = hg, g\\in G$. There is a curvature associated with this 1-form given by $F^\\prime = F^\\prime_{\\mu\\nu}dx^\\mu \\wedge dx^\\nu$ with $F^\\prime_{\\mu\\nu} = \\partial_\\mu A_\\nu - \\partial_\\nu A_\\mu$. It transforms as $hF^{\\,\\prime} h^{-1}$. $F^{\\,\\prime}_{\\mu\\nu}$ is identified with the electromagnetic field. In this case $G = U(1)$, and $F^{\\,\\prime}$ is invariant. The bundle is trivial since the base space $M$ is flat and $E = M \\times G$ everywhere. \n\nThe first change that is needed is the replacement of the principal bundle $P$ by $P^\\prime = (E, \\Pi, {\\cal{E}}, G)$ whose base space is the manifold ${\\cal{E}}$ of the theory and whose local structure group $G$ is ideally a simple Lie group which can be broken down to $SU(3)\\times SU(2) \\times U(1)$, the symmetry group of the Standard Model. Such a bundle is only locally $M \\times G$. In keeping with current theory, we also require that matter wavefunctions be introduced as global sections of vector bundles associated with specific representations of $G$. Since the electromagnetic field is also defined on the manifold ${\\cal{E}}$ by the relation (\\ref{emf}), the compatibility requirement \n\\beq\n\\Pi {\\mathfrak{F}}^{\\,\\prime\\mu\\nu} = {\\mathfrak{F}}^{\\mu\\nu} = e c {\\mathcal{R}}\\, a^{\\mu\\nu}\n\\eeq\nmust be satisfied, where ${\\mathfrak{F}}^{\\,\\prime\\mu\\nu} = \\sqrt{\\vert g\\vert}g^{(\\mu\\alpha)}g^{(\\nu\\beta)}F^\\prime_{\\alpha\\beta}$. 
This immediately implies that quantization of $F^\\prime_{\\mu\\nu}$ leads to quantization of $a^{\\mu\\nu}$. Other gauge fields in the theory belonging to non-Abelian groups are not subject to this compatibility requirement.\n\nFinally, let us consider the quantization of the gravitational field. It is important to emphasize that whereas the metric tensor $g_{(\\mu\\nu)}$ plays a fundamental role in a Riemannian manifold and the connections (Christoffel symbols) are derived from it, in an affine manifold like ${\\cal{E}}$ the tensor $g_{\\mu\\nu}$ and the non-symmetric connections $\\Gamma$ play equally fundamental roles, and the two are later related through the equations of connection. Now, quantization requires the existence of a symplectic manifold with a nondegenerate $2$-form $\\omega$ on which Poisson brackets can be defined. Following Dirac's prescription, these Poisson brackets can then be replaced by commutators to quantize a classical theory. This is the canonical quantization procedure. Therefore, to quantize the fields in our theory we need to construct the $2$-form $\\omega = E_{\\mu\\nu} dx^\\mu \\wedge dx^\\nu$. Because of the antisymmetry of the wedge product, $\\omega = E_{[\\mu\\nu]}dx^\\mu \\wedge dx^\\nu$ which shows that only the antisymmetric part of $E_{\\mu\\nu}$ contributes to $\\omega$. However, as we have seen, the splitting of $E_{\\mu\\nu}$ into a symmetric and an antisymmetric part has no objective significance due to projective invariance. Hence, canonical quantization cannot be carried out with any objective significance on such a manifold. But, it can be done after the invariance is broken, and then it is at once clear why the symmetric part $E_{(\\mu\\nu)}$ associated with gravity cannot be quantized canonically while the antisymmetric part associated with electromagnetism can be. 
Thus, in the theory developed here, though both gravity and electromagnetism are emergent fields from a premetric, prequantum manifold ${\\cal{E}}$, gravity remains classical while electromagnetism can be quantized. \n\n\\section{Concluding Remarks}\nWe have seen that the unification of electromagnetism and gravity into a single geometric entity is beautifully accomplished in a theory with nonsymmetric connection and $\\Gamma_\\mu \\neq 0$, the unifying symmetry being projective symmetry. Matter wavefunctions appear in the theory as global sections of vector bundles associated with specific representations of an appropriate simple Lie group $G$ that can be broken down to $SU(3)\\times SU(2) \\times U(1)$, the symmetry group of the Standard Model. The matter Lagrangian breaks projective invariance, generating classical relativistic gravity and quantum electromagnetism. This is possible because the original non-symmetric manifold ${\\cal{E}}$ is assumed to be smooth. Hence, for the theory to be valid, the symmetry breaking transition must occur at a larger scale than the Planck length. The theory predicts $\\alpha_{sym} = \\frac{1}{9}$ below this scale.\n\nIn the projective invariant premetric phase the fourth dimension of the manifold ${\\cal{E}}$ with negative signature cannot be identified with physical time. Also, this manifold is affine and has no origin. These features of the theory have important implications for the origin (and possibly also the dissolution) of the observed universe which need to be further explored but are beyond the scope of this paper.\n\n\\section{Acknowledgement}\nI dedicate this paper to the memory of my research guide and teacher, the Late Professor Satyendranath Bose whose 1953 paper is its inspiration. 
I thank the National Academy of Sciences, India for the grant of a Senior Scientist Platinum Jubilee Fellowship which enabled this work to be undertaken.\n\n\\section{Appendix}\n\nIn order to derive the equations of connection, let us first write \n\\beq\n\\kappa{\\cal{L}} = H + \\frac{d X^\\lambda}{dx^\\lambda}\\label{H}\n\\eeq\nwith \n\\ben\nX^\\lambda &=& s^{\\mu\\nu}\\Gamma_{(\\mu\\nu)}^\\lambda - s^{\\mu\\lambda}\\Gamma^\\nu_{(\\mu\\nu)} + a^{\\mu\\nu}G^\\lambda_{\\mu\\nu} + \\frac{2}{3} a^{\\mu\\lambda}\\Gamma_\\mu + \\Gamma^\\lambda\\nonumber\\\\\nH &=& - s^{\\mu\\nu}_{,\\,\\lambda}\\Gamma^\\lambda_{(\\mu\\nu)} + s^{\\mu\\lambda}_{,\\,\\lambda}\\Gamma^\\nu_{{(\\mu\\nu)}} + s^{\\mu\\nu}\\left(\\Gamma^\\xi_{(\\mu\\nu)}\\Gamma^\\lambda_{(\\xi\\lambda)} - \\Gamma^\\xi_{(\\mu\\lambda)}\\Gamma^\\lambda_{(\\xi\\nu)}\\right)\\nonumber\\\\ &+& s^{\\mu\\nu}\\left(-G^\\lambda_{\\mu\\xi}G^\\xi_{\\lambda\\nu} + \\frac{1}{3}\\Gamma_\\mu \\Gamma_\\nu\\right) + s^{\\mu\\nu}\\Gamma^\\lambda_{(\\mu\\nu)}\\Gamma_\\lambda - a^{\\mu\\nu}_{,\\,\\lambda}G^\\lambda_{\\mu\\nu} \\nonumber\\\\ &+& a^{\\mu\\nu}\\left( - G^\\lambda_{\\mu\\xi}\\Gamma^\\xi_{(\\lambda\\nu)} - G^\\lambda_{\\xi\\nu}\\Gamma^\\xi_{(\\mu\\lambda)} + G^\\xi_{\\mu\\nu}\\Gamma^\\lambda_{(\\xi\\lambda)} \\right) - \\frac{2}{3} a^{\\mu\\lambda}_{,\\,\\lambda}\\Gamma_\\mu.\n\\een Thus, $H$ is free of the partial derivatives of $\\Gamma^\\lambda_{(\\mu\\nu)}, G^\\lambda_{\\mu\\nu}$ and $\\Gamma_\\mu$, and the four-divergence term in the integral $I$ is equal to a surface integral at infinity on which all arbitrary variations are taken to vanish. \n\nNow, it follows from the definition of $G^\\lambda_{\\mu\\nu}$ that $G^\\lambda_{\\mu\\lambda} = 0$, and hence all the 24 components of $G^\\lambda_{\\mu\\nu}$ are not independent. 
Remembering that these four relations must always hold good in the variations of the elements $\\Gamma^\\lambda_{(\\mu\\nu)}, G^\\lambda_{\\mu\\nu}, \\Gamma_\\mu$, one can use the method of undetermined Lagrange multipliers $k^\\mu$ to derive the equations of connection by varying the function\n\\beq\nH - 2k^\\mu G^\\lambda_{\\mu\\lambda}.\n\\eeq \nThe resulting equations are\n\\ben\ns^{\\mu\\nu}_{,\\,\\lambda} + s^{\\mu\\alpha}\\Gamma^\\nu_{(\\lambda\\alpha)} + s^{\\alpha\\nu}\\Gamma^\\mu_{(\\alpha\\lambda)} - s^{\\mu\\nu}\\Gamma^\\alpha_{(\\lambda\\alpha)} = -[a^{\\mu\\alpha}G^\\nu_{\\lambda\\alpha} + a^{\\alpha\\nu}G^\\mu_{\\alpha\\lambda}]\\label{1} \\\\\na^{\\mu\\nu}_{,\\,\\lambda} + a^{\\mu\\alpha}\\Gamma^\\nu_{(\\lambda\\alpha)} + a^{\\alpha\\nu}\\Gamma^\\mu_{(\\alpha\\lambda)} - a^{\\mu\\nu}\\Gamma^\\alpha_{(\\lambda\\alpha)} - k^\\mu\\delta^\\nu_\\lambda + k^\\nu\\delta^\\mu_\\lambda = - [s^{\\mu\\alpha}G^\\nu_{\\lambda\\alpha} + s^{\\alpha\\nu}G^\\mu_{\\alpha\\lambda}]\\label{2}\n\\een \nand\n\\beq\na^{\\mu\\nu}_{,\\,\\nu} - s^{\\mu\\nu}\\Gamma_\\nu- \\frac{3}{2} s^{\\lambda\\nu} \\Gamma^\\mu_{(\\lambda\\nu)} = 0.\n\\eeq\nIt follows from these equations that\n\\ben\ns^{\\mu\\alpha}_{,\\,\\alpha} + s^{\\alpha\\beta}\\Gamma^\\mu_{(\\alpha\\beta)} + a^{\\alpha\\beta}G^\\mu_{\\alpha\\beta} &=& 0,\\\\\na^{\\mu\\nu}_{,\\,\\nu} &=& 3k^\\mu,\n\\een which imply\n\\ben\nk^\\mu_{,\\,\\mu} &=& 0,\\\\\nk^\\mu &=& \\frac{1}{3}\\left(s^{\\mu\\nu}\\Gamma_\\nu + \\frac{3}{2} s^{\\lambda\\nu} \\Gamma^\\mu_{(\\lambda\\nu)}\\right).\n\\een\nAdding (\\ref{1}) and (\\ref{2}), we get\n\\ben\ng^{\\prime\\mu\\nu}_{,\\lambda} &+& g^{\\prime\\mu\\alpha}\\left(\\Gamma^\\nu_{(\\lambda\\alpha)} + G^\\nu_{\\lambda\\alpha}\\right) + g^{\\prime\\alpha\\nu}\\left(\\Gamma^\\mu_{(\\alpha\\lambda)} + G^\\mu_{\\alpha\\lambda}\\right) - g^{\\prime\\mu\\nu}\\Gamma^\\alpha_{(\\lambda\\alpha)}\\nonumber\\\\ &=& k^\\mu \\delta^\\nu_\\lambda - k^\\nu \\delta^\\mu_\\lambda \\label{X}\n\\een where 
$g^{\\prime\\mu\\nu} = \\sqrt{\\vert g\\vert} g^{\\mu\\nu}$.\nMultiplying (\\ref{X}) by $g^\\prime_{\\mu\\nu}$ and using the results\n\\beq\ng^{\\prime\\mu\\nu}g^\\prime_{\\mu\\lambda} = \\delta^\\nu_\\lambda,\\,\\,\\,\\,\\,\\,g^{\\prime\\mu\\nu}g^\\prime_{\\lambda\\nu} = \\delta^\\mu_\\lambda,\\,\\,\\,\\,\\,\\,G^\\lambda_{\\alpha\\lambda} = 0,\\label{Y} \n\\eeq\nwe first observe that\n\\ben\n\\Gamma^\\alpha_{(\\lambda\\alpha)} &=& \\frac{\\vert g\\vert_{,\\,\\lambda}}{2 \\sqrt{\\vert g\\vert}} + \\frac{1}{2}\\left(g^\\prime_{\\lambda\\beta} - g^\\prime_{\\beta\\lambda}\\right)k^\\beta\\nonumber\\\\\n&\\equiv& \\frac{\\vert g\\vert_{,\\,\\lambda}}{2 \\sqrt{\\vert g\\vert}} + g^\\prime_{[\\lambda\\beta]}k^\\beta \\label{Z}\n\\een\nHence, dividing (\\ref{X}) by $\\sqrt{\\vert g\\vert}$, and also using (\\ref{Z}) and the results\n\\beq\ng^{\\mu\\alpha}g_{\\beta\\alpha}k^\\beta = k^\\mu\\,\\,\\,\\, {\\rm and} \\,g^{\\alpha\\nu}g_{\\alpha\\beta}k^\\beta = k^\\nu,\n\\eeq\nwe get\n\\ben\ng^{\\mu\\nu}_{,\\,\\lambda} &+& g^{\\mu\\alpha}\\left(\\Gamma^\\nu_{(\\lambda\\alpha)} + G^\\nu_{\\lambda\\alpha} + \\frac{1}{\\sqrt{\\vert g\\vert}}(g_{\\lambda\\beta}k^\\beta \\delta^\\nu_\\alpha - g_{\\beta\\alpha}k^\\beta \\delta^\\nu_\\lambda) \\right)\\nonumber\\\\ &+& g^{\\alpha\\nu}\\left(\\Gamma^\\mu_{(\\alpha\\lambda)} + G^\\mu_{\\alpha\\lambda} + \\frac{1}{\\sqrt{\\vert g\\vert}}(g_{\\alpha\\beta}k^\\beta \\delta^\\mu_\\lambda - g_{\\beta\\lambda}k^\\beta \\delta^\\mu_\\alpha)\\right)\\nonumber\\\\ &=& 3g^{\\mu\\nu}\\frac{g_{[\\lambda\\beta]}k^\\beta}{\\sqrt{\\vert g\\vert}}\\label{x} \n\\een\nNow, define the new affine coefficients\n\\ben\n\\Gamma^{\\prime\\nu}_{\\lambda\\alpha} &=& \\left(\\Gamma^\\nu_{(\\lambda\\alpha)} + G^\\nu_{\\lambda\\alpha} + \\frac{1}{\\sqrt{\\vert g\\vert}}(g_{\\lambda\\beta}k^\\beta \\delta^\\nu_\\alpha - g_{\\beta\\alpha}k^\\beta \\delta^\\nu_\\lambda) \\right) \\label{affine1}\\\\\n\\Gamma^{\\prime\\mu}_{\\alpha\\lambda} &=& 
\\left(\\Gamma^\\mu_{(\\alpha\\lambda)} + G^\\mu_{\\alpha\\lambda} + \\frac{1}{\\sqrt{\\vert g\\vert}}(g_{\\alpha\\beta}k^\\beta \\delta^\\mu_\\lambda - g_{\\beta\\lambda}k^\\beta \\delta^\\mu_\\alpha)\\right) \\label{affine2} \n\\een\nand\n\\beq\n\\Phi_\\lambda = \\frac{g_{[\\lambda\\beta]}k^\\beta}{\\sqrt{\\vert g\\vert}}\\\\\n\\eeq\nThen, Eqn. (\\ref{x}) can be written in the form\n\\beq\ng^{\\mu\\nu}_{,\\,\\lambda} + g^{\\mu\\alpha}\\Gamma^{\\prime\\nu}_{\\lambda\\alpha} + g^{\\alpha\\nu}\\Gamma^{\\prime\\mu}_{\\alpha\\lambda} = 3g^{\\mu\\nu}\\Phi_\\lambda. \n\\eeq\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nIn recent years it has been suggested that the medium of quarks and\ngluons produced in heavy ion collisions at RHIC goes through a\nstrongly-coupled phase at least during some period of its evolution\n\\cite{Teaney:2003kp,Shuryak:2006se,Huovinen:2001cy,Teaney:2001av}. The\nAnti-de Sitter space\/Conformal Field Theory (AdS\/CFT) correspondence\n\\cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj} is often used to\nstudy the dynamics of this strongly-coupled medium\n\\cite{Janik:2005zt,Janik:2006ft,Heller:2007qt,Benincasa:2007tp,Kovchegov:2007pq,Kajantie:2008rx,Grumiller:2008va,Gubser:2008pc,Albacete:2008vs,Albacete:2009ji,Lin:2009pn,Gubser:2009sx,AlvarezGaume:2008fx,Nastase:2008hw,Kovchegov:2009du,Taliotis:2010pi,Lin:2010cb,Chesler:2009cy,Gubser:2010ze,Chesler:2010bi,Beuf:2009cx,Beuf:2008ep}:\nwhile it is valid only for ${\\cal N} =4$ super-Yang-Mills (SYM)\ntheory, there is a possibility that the qualitative (and some of the\nquantitative) results obtained from AdS\/CFT correspondence may be\napplied to the real-world case of QCD.\n\n\nThe main thrust of the efforts to study the dynamics of the medium\nproduced in heavy ion collisions using AdS\/CFT correspondence has been\ndirected toward understanding how (and when) the medium isotropizes\nand 
thermalizes\n\\cite{Janik:2005zt,Janik:2006ft,Heller:2007qt,Benincasa:2007tp,Kovchegov:2007pq,Kajantie:2008rx,Grumiller:2008va,Gubser:2008pc,Albacete:2008vs,Albacete:2009ji,Lin:2009pn,Gubser:2009sx,AlvarezGaume:2008fx,Nastase:2008hw,Kovchegov:2009du,Taliotis:2010pi,Lin:2010cb,Chesler:2009cy,Chesler:2010bi,Beuf:2009cx}.\nThe existing approaches can be divided into two categories: while some\nstudies concentrated on the dynamics of the produced medium in the\nforward light-cone without analyzing the production mechanism for the\nmedium\n\\cite{Janik:2005zt,Kovchegov:2007pq,Janik:2006ft,Heller:2007qt,Benincasa:2007tp,Chesler:2009cy,Beuf:2009cx},\na large amount of work has been concentrated on studying the\ncollisions by modeling the heavy ions with shock waves in AdS$_5$ and\nattempting to solve Einstein equations in the bulk for a collision of\ntwo AdS$_5$ shock waves\n\\cite{Kajantie:2008rx,Grumiller:2008va,Gubser:2008pc,Albacete:2008vs,Albacete:2009ji,Lin:2009pn,Gubser:2009sx,AlvarezGaume:2008fx,Nastase:2008hw,Kovchegov:2009du,Taliotis:2010pi,Lin:2010cb,Chesler:2010bi}.\nMany of the existing calculations strive to obtain the expectation\nvalue of the energy-momentum tensor $\\langle T_{\\mu\\nu} \\rangle$ of\nthe produced medium in the boundary gauge theory\n\\cite{Janik:2005zt,Janik:2006ft,Heller:2007qt,Benincasa:2007tp,Kovchegov:2007pq,Kajantie:2008rx,Grumiller:2008va,Albacete:2008vs,Albacete:2009ji,Taliotis:2010pi,Lin:2010cb,Chesler:2009cy,Chesler:2010bi,Beuf:2009cx},\nsince this is the quantity most relevant for addressing the question\nof the isotropization of the medium. 
Other works address the general\nquestion of thermalization by noticing that it corresponds to the creation\nof a black hole in the AdS bulk, and by constructing a physical proof\nof black hole formation with the help of a trapped surface\nanalysis\n\\cite{Gubser:2008pc,Lin:2009pn,Gubser:2009sx,AlvarezGaume:2008fx,Nastase:2008hw,Kovchegov:2009du}.\n\n\nIn this work we concentrate on a different observable characterizing\nheavy ion collisions: we study correlation functions in the produced\nexpanding strongly-coupled medium. Correlation functions have become a\npowerful tool for the analysis of data coming out of heavy ion\ncollisions, allowing for a quantitative measure of a wide range of\nphenomena, from Hanbury-Brown--Twiss (HBT) interferometry\n\\cite{Adler:2001zd}, to jet quenching \\cite{Adler:2002tq} and the Color\nGlass Condensate (CGC) \\cite{Braidot:2010ig}. In recent years a new\npuzzling phenomenon was discovered in the two-particle correlation\nfunctions measured in $Au+Au$ collisions at the Relativistic Heavy Ion\nCollider (RHIC) \\cite{Adams:2005ph,Adare:2008cqb,Alver:2009id}: the\nexperiments see correlations with a rather small azimuthal angle\nspread, but with a rather broad (up to several units) distribution in\nrapidity. This type of correlation is referred to as ``the ridge''.\nMore recently the ridge correlations have been seen in\nhigh-multiplicity proton-proton collisions at the Large Hadron\nCollider (LHC) \\cite{Khachatryan:2010gv}, as well as in the\npreliminary data on $Pb+Pb$ collisions at the LHC.\n\n\nSeveral theoretical explanations have been put forward to account for\nthe ridge correlations. They can be sub-divided into two classes:\nperturbative and non-perturbative. 
Perturbative explanations, put\nforward in the CGC framework in\n\\cite{Dumitru:2008wn,Gavin:2008ev,Dusling:2009ni,Dumitru:2010iy,Dumitru:2010mv,Kovner:2010xk},\nare based on the long-range rapidity correlations present in the\ninitial state of a heavy ion collision due to CGC classical gluon\nfields\n\\cite{McLerran:1993ni,Kovner:1995ja,Kovchegov:1997ke,Kovchegov:1999ep}\n(see \\cite{Jalilian-Marian:2005jf,Weigert:2005us,Iancu:2003xm} for\nreviews of CGC physics). In \\cite{Gavin:2008ev,Dumitru:2008wn} the\nauthors invoke causality to argue that long-range rapidity correlation\ncan only arise in the early times after the collision, since at later\ntimes the regions at different rapidities become causally\ndisconnected. This is illustrated in \\fig{spacetime}, where one can\nsee that the gray-shaded causal pasts of two particles produced in the\ncollision (labeled by arrows with momenta $k_1$ and $k_2$) overlap\nonly at very early time (the red-shaded region). The authors of\n\\cite{Gavin:2008ev} then suggest that the late-time radial flow due to\nhydrodynamic evolution would lead to azimuthal correlations\ncharacteristic of the ``ridge''. Alternatively, the authors of\n\\cite{Dumitru:2008wn,Dusling:2009ni,Dumitru:2010mv,Dumitru:2010iy}\nhave identified a class of Feynman diagrams which generate azimuthal\ncorrelations in nucleus--nucleus collisions. \n\n\nThe CGC correlations found in\n\\cite{Dumitru:2008wn,Dusling:2009ni,Dumitru:2010mv,Dumitru:2010iy} are\nbased on purely perturbative small-coupling physics: however, it\nremains to be shown whether such perturbative dynamics contains large\nenough azimuthal correlations to account for all of the observed\n``ridge'' phenomenon. In the scenario of\n\\cite{Gavin:2008ev,Dumitru:2008wn} CGC dynamics provides rapidity\ncorrelations, while azimuthal correlations are generated by\nhydrodynamic evolution. 
As we have already mentioned, it is possible\nthat the medium created at RHIC is strongly-coupled\n\\cite{Teaney:2003kp,Shuryak:2006se,Huovinen:2001cy,Teaney:2001av}: if\nso, hydrodynamic evolution would then be a non-perturbative effect,\nmaking the scenario proposed in \\cite{Gavin:2008ev,Dumitru:2008wn}\nimplicitly non-perturbative. Purely non-perturbative explanations of\nthe ``ridge'' include parton cascade models \\cite{Werner:2010ss},\nhadronic string models \\cite{Konchakovski:2008cf}, and event-by-event\nhydrodynamic simulations \\cite{Takahashi:2009na}. The causality\nargument of \\cite{Gavin:2008ev,Dumitru:2008wn} is valid in the\nnon-perturbative case as well: one needs correlations in the initial\nstate, either due to soft pomeron\/hadronic strings interactions\n\\cite{Werner:2010ss,Konchakovski:2008cf}, or due to initial-state\nfluctuations \\cite{Takahashi:2009na}, in order to obtain long-range\nrapidity correlations. In this work we will use AdS\/CFT to address the\ntheoretical question whether long-range rapidity correlations are\npresent in the non-perturbative picture of heavy ion collisions. At\nthe same time we recognize that a complete understanding of whether\nthe ``ridge'' correlations observed at RHIC and LHC are perturbative\n(CGC) or non-perturbative in nature is still an open problem left for\nfuture studies.\n\n\n\n\\begin{figure}[th]\n\\begin{center}\n\\epsfxsize=8cm\n\\leavevmode\n\\hbox{\\epsffile{spacetime.eps}}\n\\end{center}\n\\caption{Space-time picture of a heavy ion collision demonstrating\n how long-range rapidity correlations can be formed only in the\n initial stages of the collision, as originally pointed out in\n \\cite{Gavin:2008ev,Dumitru:2008wn}. Gray shaded regions denote\n causal pasts of the two produced particles with four-momenta $k_1$\n and $k_2$, with their overlap region highlighted in red. 
We have\n drawn the lines of constant proper time $\\tau$ and constant\n space-time rapidity $\\eta$ to guide the eye and to underscore that\n late-time emission events for the two particles are likely to be\n causally disconnected. }\n\\label{spacetime}\n\\end{figure}\n\nThe goal of the present work is to study long-range rapidity\ncorrelations in heavy ion collisions in the strongly-coupled AdS\/CFT\nframework. In order to test for the long range rapidity correlations\nobserved in heavy ion collisions, we would like to study the two-point\nfunction $\\langle\\tr F_{\\mu\\nu}^2(x) \\, \\tr\nF_{\\rho\\sigma}^2(y)\\rangle$ of glueball operators $\\tr\n\\left(F_{\\mu\\nu}^2 \\right)$ right after the collision but before the\nthermalization. According to causality arguments of\n\\cite{Gavin:2008ev,Dumitru:2008wn}, one expects that the long range\ncorrelations in rapidity should occur at such early times. The choice\nof observable is mainly governed by calculational simplicity. The\nmetric for the early times after the collision of two shock waves in\nAdS$_5$ was obtained in\n\\cite{Grumiller:2008va,Albacete:2008vs,Albacete:2009ji}: after\nformulating the problem in Sec. \\ref{general} and presenting our\ngeneral expectation for the answer in Sec. \\ref{simple}, we use this\nmetric to calculate the correlation function of two glueball operators\nin Sec. \\ref{Correlators}. (Since the glueball operator corresponds to\nthe massless scalar field in the bulk, we compute the two-point\nfunction of the scalar field in the background of the colliding shock\nwaves metric.) Our main result is that we do find long-range rapidity\ncorrelations in the strongly-coupled initial state, albeit with a\nrather peculiar rapidity dependence: the two-glueball correlation\nfunction scales as\n\\begin{align}\n \\label{eq:ampl}\n C (k_1, k_2) \\, \\sim \\, \\cosh \\left( 4 \\, \\Delta y \\right)\n\\end{align}\nwith the (large) rapidity interval $\\Delta y$ between them. We also\nshow in Sec. 
\\ref{Correlators} that the correlator of two\nenergy-momentum tensors $\\langle T_{2}^1 (x) \\, T_{2}^1 (y) \\rangle$\n(with $1,2$ the transverse directions) exhibits the same long-range\nrapidity correlations. This should be contrasted with the CGC result,\nin which the correlations are at most flat in rapidity\n\\cite{Dumitru:2008wn,Dumitru:2010iy,Dumitru:2010mv,Gavin:2008ev}.\nIndeed, the growth of correlations with the rapidity interval in\n\\eq{eq:ampl} also contradicts experimental data\n\\cite{Adams:2005ph,Adare:2008cqb,Alver:2009id,Khachatryan:2010gv}.\nAlthough we should not {\\it a priori} expect agreement between\nAdS\/CFT calculations and experimental QCD data, we argue in Sec.\n\\ref{sum} that the inclusion of higher-order corrections in the AdS\ncalculation along the lines of \\cite{Albacete:2009ji} should help\nto flatten out such growth, though it is a very difficult problem to\ndemonstrate this explicitly.\n\n\nUsing the causality argument of \\cite{Gavin:2008ev,Dumitru:2008wn}\nillustrated in \\fig{spacetime} we also expect that after\nthermalization the rapidity correlations should only be short-ranged.\nAs a result, due to causality, the initial long-range correlations\ncannot be ``washed away'' and will be observed at later times.\nThis explanation is analogous to the resolution of the `horizon\nproblem' in the cosmic microwave background radiation (CMB), where the\nobserved near-homogeneity of the CMB suggests that the universe was\nextremely homogeneous at the time of the last scattering even over\ndistance scales that could not have been in causal contact in the\npast. This problem was solved by assuming that the universe, when it\nwas still young and extremely homogeneous, went through a very rapid\nperiod of expansion (inflation). 
As a consequence of inflation,\ndifferent regions of the universe became causally disconnected, while\npreserving the initial homogeneity.\nThe idea that we pursue here for heavy ion collisions seems to be\nof a similar nature. To verify the statement that late-time dynamics\ncannot generate (or otherwise affect) long-range rapidity correlations, we\nstudy glueball correlations again in Sec. \\ref{late-times}, now using\nthe metric found by Janik and Peschanski \\cite{Janik:2005zt}, which is\ndual to Bjorken hydrodynamics \\cite{Bjorken:1982qr}. (This is done in\nthe absence of an analytic solution of the problem of colliding shock\nwaves: despite some recent progress\n\\cite{Grumiller:2008va,Albacete:2008vs,Albacete:2009ji} the late-time\nmetric is unknown at present.) Performing a perturbative estimate, we\nfind that, indeed, only short-range rapidity correlations result from\nthe gauge theory dynamics dual to the Janik and Peschanski metric.\n\nWe summarize our results in Sec. \\ref{sum}.\n\n\n\n\\section{Generalities and Problem Setup}\n\\label{general}\n\n\\subsection{AdS\/CFT Tools}\n\n\nWe start with a metric for a single shock wave moving along a light\ncone in the $x^+$ direction \\cite{Janik:2005zt} in Fefferman--Graham\ncoordinates \\cite{F-G}:\n\\begin{equation}\\label{1nuc}\n ds^2 \\, = \\, \\frac{L^2}{z^2} \\, \\left\\{ -2 \\, dx^+ \\, dx^- + t_1 (x^-) \\, z^4 \\, d\n x^{- \\, 2} + d x_\\perp^2 + d z^2 \\right\\}\n\\end{equation}\nwhere\n\\begin{align}\\label{t1}\n t_1 (x^-) \\, \\equiv \\, \\frac{2 \\, \\pi^2}{N_c^2} \\, \\langle T_{1 \\,\n --} (x^-) \\rangle.\n\\end{align}\nHere $x^\\pm = \\frac{x^0 \\pm x^3}{\\sqrt{2}}$, ${\\un x} = (x^1, x^2)$,\n$d x_\\perp^2 = (d x^1)^2 + (d x^2)^2$, $z$ is the coordinate\ndescribing the fifth dimension such that the ultraviolet (UV) boundary\nof the AdS space is at $z=0$, and $L$ is the radius of the AdS space.\nAccording to holographic renormalization \\cite{deHaro:2000xn},\n$\\langle T_{--} (x^-) \\rangle$ is the 
expectation value of the\nenergy-momentum tensor for a single ultrarelativistic nucleus moving\nalong the light-cone in the $x^+$-direction in the gauge theory. We\nassume that the nucleus is made out of nucleons consisting of $N_c^2$\n``valence gluons'' each, such that $\\langle T_{--} (x^-) \\rangle\n\\propto N_c^2$, and the metric \\peq{1nuc} has no $N_c^2$-suppressed\nterms in it. The metric in \\eq{1nuc} is a solution of Einstein\nequations in AdS$_5$:\n\\begin{align}\n \\label{ein}\n R_{\\mu\\nu} + \\frac{4}{L^2} \\, g_{\\mu\\nu} = 0.\n\\end{align}\n\nImagine a collision of the shock wave \\peq{1nuc} with another similar\nshock wave moving in the light cone $x^-$ direction described by the\nmetric\n\\begin{align}\\label{2nuc}\n ds^2 \\, = \\, \\frac{L^2}{z^2} \\, \\left\\{ -2 \\, dx^+ \\, dx^- + t_2\n (x^+) \\, z^4 \\, d x^{+ \\, 2} + d x_\\perp^2 + d z^2 \\right\\}\n\\end{align}\nwith\n\\begin{align}\\label{t2}\n t_2 (x^+) \\, \\equiv \\, \\frac{2 \\, \\pi^2}{N_c^2} \\, \\langle T_{2 \\,\n ++} (x^+) \\rangle.\n\\end{align}\nHere we will consider the high-energy approximation, in which the\nshock waves' profiles are given by delta-functions,\n\\begin{align}\\label{deltas}\n t_1 (x^-) = \\mu_1 \\, \\delta (x^-), \\ \\ \\ t_2 (x^+) = \\mu_2 \\, \\delta\n (x^+).\n\\end{align}\nThe two scales $\\mu_1$ and $\\mu_2$ can be expressed in terms of the\nphysical parameters in the problem since we picture the shock waves as\ndual to the ultrarelativistic heavy ions in the boundary gauge theory\n\\cite{Albacete:2008vs,Albacete:2008ze} :\n\\begin{align}\\label{mus}\n \\mu_{1} \\sim p_{1}^+ \\, \\Lambda_1^2 \\, A_1^{1\/3}, \\ \\ \\ \\mu_{2} \\sim\n p_{2}^- \\, \\Lambda_2^2 \\, A_2^{1\/3}.\n\\end{align}\nHere $p_1^+$, $p_2^-$ are the large light-cone momenta per nucleon,\n$A_1$ and $A_2$ are atomic numbers, and $\\Lambda_1$ and $\\Lambda_2$\nare the typical transverse momentum scales in the two nuclei\n\\cite{Albacete:2008vs}. 
Note that $\\mu_1$ and $\\mu_2$ are independent\nof $N_c$.\n\n\nThe exact analytical solution of Einstein equations \\peq{ein} starting\nwith the superposition of the metrics \\peq{1nuc} and \\peq{2nuc} before\nthe collision, and generating the resulting non-trivial metric after\nthe collision, is not known. Instead, one constructs a perturbative\nexpansion of the solution of Einstein equations in powers of $t_1$ and\n$t_2$, or, equivalently, $\\mu_1$ and $\\mu_2$\n\\cite{Grumiller:2008va,Albacete:2008vs,Albacete:2009ji,Taliotis:2010pi,Lin:2010cb}.\nAt present the metric is known up to the fourth order in the $\\mu$'s\n\\cite{Grumiller:2008va,Albacete:2008vs,Albacete:2009ji}, and also a\nresummation to all orders in $\\mu_2$ ($\\mu_1$) while keeping $\\mu_1$\n($\\mu_2$) at the lowest order has been performed in\n\\cite{Albacete:2009ji}. The validity of the perturbatively obtained\nmetric is limited to early proper times $\\tau = \\sqrt{2 \\, x^+ \\,\n x^-}$, see e.g. \\cite{Albacete:2009ji} (though indeed the\nfully-resummed series in powers of $\\mu_1$, $\\mu_2$ would be valid\neverywhere). 
Since here we are interested in the early-time\ncorrelations (and due to complexity of the $\\mu_2$-resummed metric\nobtained in \\cite{Albacete:2009ji}), we limit ourselves to the $O\n(\\mu_1 \\, \\mu_2)$ metric obtained in\n\\cite{Albacete:2008vs,Grumiller:2008va} in the Fefferman--Graham\ncoordinates:\n\\begin{align}\\label{2nuc_gen}\n ds^2 \\, = \\, \\frac{L^2}{z^2} \\, \\bigg\\{ -\\left[ 2 + G (x^+, x^-, z)\n \\right] \\, dx^+ \\, dx^- + \\left[ t_1 (x^-) \\, z^4 + F (x^+, x^-, z)\n \\right] \\, d x^{- \\, 2} \\notag \\\\ + \\left[ t_2 (x^+) \\, z^4 +\n {\\tilde F} (x^+, x^-, z) \\right] \\, d x^{+ \\, 2} + \\left[ 1 + H\n (x^+, x^-, z) \\right] \\, d x_\\perp^2 + d z^2 \\bigg\\}.\n\\end{align}\nThe components of the metric at the order-$\\mu_1 \\, \\mu_2$ are\n\\begin{align}\\label{LO}\n F (x^+, x^-, z) \\, & = \\, - \\lambda_1 (x^+, x^-) \\, z^4 -\n \\frac{1}{6} \\, \\partial_-^2 h_0 (x^+, x^-) \\, z^6 - \\frac{1}{16} \\,\n \\partial_-^2\n h_1 (x^+, x^-) \\, z^8 \\notag \\\\\n {\\tilde F} (x^+, x^-, z) \\, & = \\, - \\lambda_2 (x^+, x^-) \\, z^4 -\n \\frac{1}{6} \\, \\partial_+^2 h_0 (x^+, x^-) \\, z^6 - \\frac{1}{16} \\,\n \\partial_+^2\n h_1 (x^+, x^-) \\, z^8 \\notag \\\\\n G (x^+, x^-, z) \\, & = \\, - 2 \\, h_0 (x^+, x^-) \\, z^4 - 2 \\, h_1\n (x^+, x^-) \\, z^6 + \\frac{2}{3} \\, t_1 (x^-) \\, t_2 (x^+) \\, z^8 \\notag \\\\\n H (x^+, x^-, z) \\, & = \\, h_0 (x^+, x^-) \\, z^4 + h_1 (x^+, x^-) \\,\n z^6,\n\\end{align}\nwhere we defined \\cite{Albacete:2008vs}\n\\begin{align}\\label{LOstuff}\n h_0 (x^+, x^-) \\, & = \\, \\frac{8}{\\partial_+^2 \\, \\partial_-^2} \\,\n t_1 (x^-) \\, t_2 (x^+), \\ \\ \\ h_1 (x^+, x^-) \\, = \\, \\frac{4}{3 \\,\n \\partial_+ \\, \\partial_-} \\, t_1 (x^-) \\, t_2 (x^+) \\notag \\\\\n \\lambda_1 (x^+, x^-) \\, & = \\, \\frac{\\partial_{-}}{\\partial_{+}} \\,\n h_0 (x^+, x^-), \\ \\ \\ \\lambda_2 (x^+, x^-) \\, = \\,\n \\frac{\\partial_{+}}{\\partial_{-}} \\, h_0 (x^+, x^-)\n\\end{align}\nalong with the definition of the 
causal integrations\n\\begin{align}\\label{ints}\n \\frac{1}{\\partial_{+}} [\\ldots](x^+) \\, \\equiv \\,\n \\int\\limits_{-\\infty}^{x^+} \\, d x'^+ \\, [\\ldots](x'^+), \\ \\ \\\n \\frac{1}{\\partial_{-}} [\\ldots](x^-) \\, \\equiv \\,\n \\int\\limits_{-\\infty}^{x^-} \\, d x'^- \\, [\\ldots](x'^-).\n\\end{align}\n\n\nBelow we will calculate correlation functions of the glueball\noperators\n\\begin{align}\\label{Jdef}\n J(x) \\, \\equiv \\, \\frac{1}{2} \\, \\tr [F_{\\mu\\nu} \\, F^{\\mu\\nu}]\n\\end{align}\nin the boundary gauge theory.\\footnote{When defining the glueball\n operator we assume that in the boundary theory the gluon field\n $A_\\mu^a$ is defined without absorbing the gauge coupling $g_{YM}$\n in it, such that the field strength tensor $F_{\\mu\\nu}^a$ contains\n the coupling $g_{YM}$.} According to the standard AdS\/CFT\nprescription,\\footnote{Since $\\Delta =4$, with $\\Delta$ the conformal\n dimension of $J(x)$ , the mass of the dual scalar field, $m^2 =\n \\Delta \\, (\\Delta-4)$, is zero.} the glueball operator is dual to\nthe massless scalar (dilaton) field $\\phi$ in the AdS$_5$ bulk\n\\cite{Klebanov:2000me} with the action\n\\begin{align}\n S^\\phi \\, = \\, - \\frac{N_c^2}{16 \\, \\pi^2 \\, L^3} \\, \\int d^4 x \\, d\n z \\, \\sqrt{-g} \\, g^{MN} \\, \\partial_M \\phi (x,z) \\, \\partial_N \\phi\n (x,z),\n\\end{align}\nwhere $M,N = (\\mu,z)$, $\\mu = (0,1,2,3)$ and $x^{\\mu}$ correspond to\n4D field theory coordinates, while $z$ is the coordinate along the\nextra fifth (holographic) dimension. 
(As usual $g = \\det {g_{MN}}$.)\n\nThe equation of motion (EOM) for the scalar field is\n\\begin{align}\\label{eom}\n \\frac{1}{\\sqrt{-g}} \\, \\partial_M \\left[ \\sqrt{-g}~g^{MN}\\partial_N\n \\phi (x,z)\\right] \\, = \\, 0.\n\\end{align}\nUsing \\eq{eom}, the dilaton action evaluated on the classical solution\ncan be cast in the following form convenient for the calculation of\ncorrelation functions:\n\\begin{align}\n \\label{dil_action}\n S^\\phi_{cl} \\, = \\, \\frac{N_c^2}{16 \\, \\pi^2 \\, L^3} \\, \\int d^4 x\n \\, \\left[ \\sqrt{-g} \\, g^{zz} \\, \\phi(x,z) \\, \\partial_z \\phi(x,z)\n \\right] \\Bigg|_{z=0} \\, = \\, \\frac{N_c^2}{16 \\, \\pi^2} \\, \\int d^4 x\n \\, \\phi_B (x) \\, \\left[ \\frac{1}{z^3} \\, \\partial_z \\phi(x,z)\n \\right] \\Bigg|_{z=0}.\n\\end{align}\nIn arriving at the expression on the right of \\eq{dil_action} we have\nused the metric in Eqs. \\peq{2nuc_gen}, \\peq{LO}, and \\peq{LOstuff},\nalong with the standard assumption that the fields $\\phi$ have the\nfollowing boundary condition (BC) at the UV boundary, $\\phi(x,z\\to 0)\n= \\phi_B(x)$, which allowed us to approximate near $z=0$\n\\begin{align}\n \\label{g_eq}\n g \\, = \\, - \\frac{L^{10}}{z^{10}} \\, \\left( 1 - \\frac{1}{3} \\, z^8\n \\, t_1 (x^-) \\, t_2 (x^+) \\right) \\, \\approx \\, -\n \\frac{L^{10}}{z^{10}}.\n\\end{align}\nIn arriving at \\eq{dil_action} we have also demanded that\\footnote{As\n one can see later, our classical solutions satisfy this condition.}\n\\begin{align}\n \\sqrt{-g} \\, g^{zz} \\, \\phi(x,z) \\, \\partial_z \\phi(x,z) \\,\n \\rightarrow \\, 0 \\ \\ \\ \\text{as} \\ \\ \\ z \\rightarrow \\infty.\n\\end{align}\n\nDefine the retarded Green function of the glueball operator \\peq{Jdef}\n(averaged in the heavy ion collision background),\n\\begin{align}\n \\label{retG}\n G_R (x_1, x_2) \\, = \\, - i \\, \\theta (x_1^0 - x_2^0) \\, \\langle\n [J(x_1), J(x_2)] \\rangle.\n\\end{align}\nAccording to the AdS\/CFT correspondence, the contribution to 
the\nretarded Green function coming from the medium produced in the\ncollision is given by \\cite{Son:2002sd}\\footnote{As was shown in\n \\cite{Herzog:2002pc,Skenderis:2008dg} the right-hand side of\n \\eq{Sdiff} contains contributions of both the retarded and advanced\n Green functions $G_R$ and $G_A$. In the lowest-order calculation we\n are going to perform here, the Green functions are real, and, since\n $\\text{Re} \\, G_R = \\text{Re} \\, G_F = \\text{Re} \\, G_A$ (with $G_F$\n the Feynman Green function defined below in \\eq{GF}), we do not need\n to address the question of disentangling the contributions of\n different Green functions to \\eq{Sdiff} and will adopt the convention\n of \\cite{Mueller:2008bt,Avsar:2009xf} by calling the object in\n \\eq{Sdiff} a retarded Green function.}\n\\begin{align}\n \\label{Sdiff}\n G_R (x_1, x_2) \\, = \\, \\frac{\\delta^2 [S^\\phi_{cl} - S_0]}{\\delta\n \\phi_B (x_1) \\, \\delta \\phi_B (x_2)},\n\\end{align}\nwhere we subtract the action $S_0$ of the scalar field in the empty\nAdS$_5$ space to remove the contribution of the retarded Green\nfunction in the vacuum. The latter has nothing to do with the\nproperties of the medium produced in the collision and has to be\ndiscarded.\n\n\nLater we will be interested in the Fourier transform of the retarded\nGreen function\n\\begin{align}\n \\label{Gr_mom}\n G_R (k_1, k_2) \\, = \\, \\int d^4 x_1 \\, d^4 x_2 \\, e^{-i \\, k_1 \\cdot\n x_1 -i \\, k_2 \\cdot x_2} \\, G_R (x_1, x_2).\n\\end{align}\n(We are working in the $(-,+,+,+)$ metric in the boundary four\ndimensions.)\n\n\n\n\n\n\\subsection{Kinematics}\n\\label{kine}\n\nWe have defined above $k^{\\pm} = (k^0 \\pm k^3)\/\\sqrt{2}$, ${\\un k} =\n(k^1, k^2)$, $k_\\perp = |{\\un k}|$ and $k^2 = k_{\\bot}^2 - 2 \\, k^+ \\,\nk^- \\, = \\, -m^2$. 
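As a quick numerical sanity check of these light-cone conventions (a minimal sketch; the sample mass and momentum values below are purely illustrative):

```python
import math

# Light-cone conventions of the text: k^{+-} = (k^0 +- k^3)/sqrt(2),
# mostly-plus metric (-,+,+,+), so k^2 = k_perp^2 - 2 k^+ k^-.
def light_cone(k0, k1, k2, k3):
    kp = (k0 + k3) / math.sqrt(2.0)
    km = (k0 - k3) / math.sqrt(2.0)
    return kp, km, (k1, k2)

# Illustrative on-shell momentum of mass m: k^0 = sqrt(m^2 + |vec k|^2)
m = 1.5
k1, k2, k3 = 0.6, 0.8, 2.0
k0 = math.sqrt(m**2 + k1**2 + k2**2 + k3**2)

kp, km, (kx, ky) = light_cone(k0, k1, k2, k3)
k_squared = kx**2 + ky**2 - 2.0 * kp * km
assert abs(k_squared - (-m**2)) < 1e-12  # reproduces k^2 = -m^2
```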
The particle rapidity, defined as $y =\n\\frac{1}{2}\\, \\ln \\frac{k^+}{k^-}$, is a useful variable, since the\nrapidity difference between any pair of particles remains unchanged if\nwe go from the center of mass frame to any other frame by performing a\nboost along the longitudinal direction, $x^3$. On the other hand,\nwhen $k^0 \\gg m$, $y \\approx y_P = \\ln \\cot (\\theta\/2)$, where $y_P$\nis the pseudorapidity, and $\\theta$ is the angle at which the particle\nemerges in the center of mass frame. Furthermore, defining $m_{\\bot}\n\\equiv \\sqrt{k_{\\bot}^2+m^2}$, we can rewrite the light-cone\ncomponents of the momentum as: $k^+ = m_{\\bot}e^y\/\\sqrt{2}$ and $k^- =\nm_{\\bot}e^{-y}\/\\sqrt{2}$. In the case when $k_{\\bot}^2 \\gg m^2$ one\nhas $k^+k^- \\approx k^2_{\\bot}\/2$.\n\nConsider two identical on-mass-shell particles with momenta $k_1 =\n(k_1^+, k^-_1, {\\un k}_{1})$ and $k_2 = (k_2^+,k^-_2, {\\un k}_{2})$.\nAssuming $k^2_1 = k^2_2 = -m^2$ and ${\\un k}_{1} = {\\un k}_{2} = {\\un\n k}$, we obtain\n\\begin{align}\n q^2 \\equiv (k_2-k_1)^2 = -2 \\, m^2 -2 \\, k_1 \\cdot k_2 \\, = \\,\n 4 \\, m^2_{\\bot} \\, \\sinh^2 \\frac{\\Delta y}{2} > 0 \\ ,\n\\end{align}\nwhere $\\Delta y = y_2 - y_1$ with $y_1$ and $y_2$ the rapidities of\nthe two particles. In the case when $k^2_{\\bot} \\gg m^2$ and $\\Delta y \\gg\n1$, we have $q^2 \\approx 2 \\, k_{\\bot}^2 \\cosh \\Delta y \\approx\nk_{\\bot}^2 e^{\\Delta y}$. It is worth noting that the momentum\ndifference is space-like, since $q^2 \\equiv Q^2 > 0$.\n\n\n\n\n\\subsection{Defining the observable in the boundary gauge theory}\n\n\nLet us now specify the observable we want to calculate in the boundary\ngauge theory. Our primary goal is to study rapidity correlations using\nAdS\/CFT. Ideally one would like to find correlations between produced\nparticles. However, ${\\mathcal N} =4$ SYM theory has no bound states,\nand, at strong coupling, it does not make sense to talk about\nindividual supersymmetric particles. 
Therefore we will study\ncorrelators of operators, starting with the glueball operator defined\nin \\eq{Jdef}. One can think of the glueballs as external probes to\n${\\mathcal N} =4$ SYM theory (in the sense of being particles from\nsome other theory in four dimensions), which couple to the gluons in\n${\\mathcal N} =4$ SYM, and therefore can be produced in the collision.\nLater on we will also consider correlators of the energy-momentum\ntensor $T_{\\mu\\nu}$, which should also be thought of as an operator\ncoupling to a particle (in four dimensions) external to the ${\\mathcal\n N} =4$ SYM theory.\n\nWe start with glueball production. To study two-particle\ncorrelations we need to find the two-particle multiplicity\ndistribution\n\\begin{align}\n \\label{N2}\n \\frac{d^6 N}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2}\n\\end{align}\nwhere $k_{1}^{\\perp}$, $y_1$ and $k_{2}^{\\perp}$, $y_2$ are the\ntransverse momenta of the produced particles (glueballs) and their\nrapidities, and $d^2 k \\equiv d k^1 \\, d k^2$. As usual we can\ndecompose the two-particle multiplicity distribution into the\nuncorrelated and correlated pieces\n\\begin{align}\n \\label{2terms}\n \\frac{d^6 N}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2} \\, = \\, \\frac{d^3\n N}{d^2 k_1 \\, d y_1} \\, \\frac{d^3 N}{d^2 k_2 \\, d y_2} + \\frac{d^6\n N_{corr}}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2}.\n\\end{align}\nWe are interested in computing the second (correlated) term on the\nright-hand side of \\peq{2terms}. We begin by writing it as\n\\begin{align}\n \\label{ampl^2}\n \\frac{d^6 N_{corr}}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2} \\, \\propto\n \\, \\langle | M (k_1, k_2) |^2 \\rangle\n\\end{align}\nwhere $M (k_1, k_2)$ is the two-particle production amplitude. 
(Note\nthat since we are primarily interested in rapidity dependence of\ncorrelators, we are not keeping track of prefactors and other\ncoefficients not containing two-particle correlations.)\n\nFor the correlated term in \\eq{2terms} the amplitude of inclusive\ntwo-glueball production in a heavy ion collision is\n\\begin{align}\n \\label{ampl1}\n M (k_1, k_2) \\, \\propto \\, \\int d^4 x_1 \\, d^4 x_2 \\, e^{- i \\, k_1\n \\cdot x_1 - i \\, k_2 \\cdot x_2} \\, \\langle n | \\, T \\left\\{ J\n (x_1) \\, J (x_2) \\right\\} | A_1, A_2 \\rangle,\n\\end{align}\nwhich is a consequence of the LSZ reduction formula with $T$ denoting\ntime-ordering. Here $| n \\rangle$ denotes an arbitrary state of the\ngauge theory which describes other particles which may be produced in\na collision apart from the two glueballs.\n\nThe state $| A_1, A_2 \\rangle$ can be thought of as the vacuum in the\npresence of a source, with the source being the two nuclei with atomic\nnumbers $A_1$ and $A_2$. Consider first the expectation value of the\nenergy-momentum operator $\\langle T_{\\mu\\nu} \\rangle$ in a nuclear\ncollision. According to the standard prescription we can write it as\n\\begin{align}\n \\label{Tmn}\n \\langle T_{\\mu\\nu} (x) \\rangle \\, = \\, \\frac{\\int {\\cal D} A_\\mu \\,\n e^{i \\, S [A]} \\, W_+ [A] \\, W_- [A] \\, T_{\\mu\\nu} (x)}{\\int {\\cal\n D} A_\\mu \\, e^{i \\, S [A]} \\, W_+ [A] \\, W_- [A]}\n\\end{align}\nwhere $S [A]$ is the action of the gauge theory. For simplicity we\nonly explicitly show the integrals over gauge fields in \\eq{Tmn},\nimplying the integrals over all other fields in the theory. The\nobjects $W_+ [A]$ and $W_- [A]$ are some functionals of the fields in the theory describing the\ntwo colliding nuclei. 
For instance, in the perturbative QCD approaches\nsuch as CGC, these operators are Wilson lines along the $x^-=0$ and $x^+\n=0$ light-cone directions\n\\cite{McLerran:1993ni,Kovner:1995ja,Kovchegov:1997ke,Kovchegov:1999ep}.\n\\footnote{Calculation of the expectation value of $T_{\\mu\\nu}$ in CGC\n is reduced to perturbative evaluation\/resummation of \\eq{Tmn} (see\n e.g. \\cite{Kovchegov:2005az} for an example of such a calculation).}\n\nUsing operators and states in the Heisenberg picture one can rewrite\n\\eq{Tmn} as\n\\begin{align}\n \\label{eq:ave}\n \\langle T_{\\mu\\nu} (x) \\rangle \\, = \\, \\langle A_1, A_2 | T_{\\mu\\nu}\n (x) |A_1, A_2 \\rangle.\n\\end{align}\nComparing \\eq{eq:ave} to \\eq{Tmn} clarifies the meaning of the $| A_1,\nA_2 \\rangle$ state by demonstrating that the averaging in \\eq{eq:ave}\nis over the vacuum state in the presence of nuclear sources (which of\ncourse strongly disturb the vacuum).\n\n\nUsing \\eq{ampl1} in \\eq{ampl^2} we obtain\n\\begin{align}\n \\label{corr2}\n \\frac{d^6 N_{corr}}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2} \\, \\propto\n \\, \\int d^4 x_1 \\, d^4 x_2 \\, d^4 x'_1 \\, d^4 x'_2 \\, e^{- i \\, k_1\n \\cdot (x_1 - x'_1) - i \\, k_2 \\cdot (x_2 - x'_2)} \\notag \\\\ \\times\n \\, \\sum_n \\, \\langle A_1, A_2 | \\, {\\overline T} \\left\\{ J (x'_1) \\,\n J (x'_2) \\right\\} | n \\rangle \\ \\langle n | \\, T \\left\\{ J (x_1)\n \\, J (x_2) \\right\\} | A_1, A_2 \\rangle\n\\end{align}\nwhere ${\\overline T}$ denotes the inverse time-ordering and we have\nused the fact that $J (x)$ is a hermitean operator. 
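The manipulation used in the next step, summing over a complete set of intermediate states, can be checked in a finite-dimensional toy model (Python with NumPy; the random matrices merely stand in for ${\overline T}\{JJ\}$ and $T\{JJ\}$, and the vector for $|A_1, A_2\rangle$):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6                                            # toy Hilbert-space dimension
# Columns of a unitary matrix form a complete orthonormal set of states |n>
Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))  # plays Tbar{J J}
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))  # plays T{J J}
a = rng.normal(size=d) + 1j * rng.normal(size=d)            # plays |A1, A2>

# Sum over intermediate states <a|A|n><n|B|a> ...
summed = sum((a.conj() @ A @ Q[:, n]) * (Q[:, n].conj() @ B @ a)
             for n in range(d))
# ... collapses to the single matrix element <a|AB|a>
direct = a.conj() @ A @ B @ a
```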
Summing over a\ncomplete set of states $| n \\rangle$ yields\n\\begin{align}\n \\label{corr3}\n \\frac{d^6 N_{corr}}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2} \\, &\n \\propto \\, \\int d^4 x_1 \\, d^4 x_2 \\, d^4 x'_1 \\, d^4 x'_2 \\, e^{- i\n \\, k_1 \\cdot (x_1 - x'_1) - i \\, k_2 \\cdot (x_2 - x'_2)} \\notag \\\\\n & \\times \\, \\langle A_1, A_2 | \\, {\\overline T} \\left\\{ J (x'_1) \\,\n J (x'_2) \\right\\} \\, T \\left\\{ J (x_1) \\, J (x_2) \\right\\} | A_1,\n A_2 \\rangle.\n\\end{align}\n\n\nAs one could have expected, in order to calculate two-particle\nproduction, we need to calculate a 4-point function given in\n\\eq{corr3}. This is, in general, a difficult task: instead we will use\nthe following simplification. Begin by replacing the complete set of\nstates $| n \\rangle$ by states ${\\cal O}_n (x) \\, |A_1, A_2 \\rangle$\nobtained by acting on our ``vacuum'' state $| A_1, A_2 \\rangle$ by a\ncomplete orthonormal set of gauge theory operators ${\\cal O}_n (x)$,\nsuch that\n\\begin{align}\n \\label{unity}\n \\mathds{1} \\, = \\, \\sum_n \\, | n \\rangle \\ \\langle n | \\, \\, = \\,\n \\sum_n \\, \\int \\, d^4 x \\, {\\cal O}_n (x) \\, | A_1, A_2 \\rangle \\ \\,\n \\langle A_1, A_2 | \\, {\\cal O}_n^\\dagger (x)\n\\end{align}\nwith the normalization condition\n\\begin{align}\n \\label{norm}\n \\langle A_1, A_2 | \\, {\\cal O}_m^\\dagger (y) \\, {\\cal O}_n (x) \\, |\n A_1, A_2 \\rangle \\, = \\, \\delta_{nm} \\, \\delta^{(4)} (x-y).\n\\end{align}\nUsing \\eq{unity} in \\eq{corr2} we write\n\\begin{align}\n \\label{corr4}\n & \\frac{d^6 N_{corr}}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2} \\,\n \\propto \\, \\int d^4 x_1 \\, d^4 x_2 \\, d^4 x'_1 \\, d^4 x'_2 \\, e^{- i\n \\, k_1 \\cdot (x_1 - x'_1) - i \\, k_2 \\cdot (x_2 - x'_2)} \\, \\sum_n\n \\, \\int \\, d^4 x \\notag \\\\ & \\times \\, \\langle A_1, A_2 | \\,\n {\\overline T} \\left\\{ J (x'_1) \\, J (x'_2) \\, {\\cal O}_n (x)\n \\right\\} \\, | A_1, A_2 \\rangle \\ \\, \\langle A_1, A_2 | \\, T 
\\left\\{\n {\\cal O}_n^\\dagger (x) \\, J (x_1) \\, J (x_2) \\right\\} | A_1, A_2\n \\rangle.\n\\end{align}\nTo evaluate \\eq{corr4} we have to insert all possible operators ${\\cal\n O}_n (x)$ from the orthonormal set in it. Noting that $J (x)$ is a\ngauge-invariant color-singlet operator, we conclude that only\ncolor-singlet ${\\cal O}_n (x)$ would contribute. Also, since the final\nstate in a scattering problem should be an observable, the operators\n${\\cal O}_n$ should be hermitean. The set of contributing ${\\cal O}_n\n(x)$'s should therefore include the identity operator, $J(x)$,\n$T_{\\mu\\nu} (x)$, etc.\n\n\nAs we will see below, since we are using the metric \\peq{2nuc_gen},\nwhich is a perturbative solution of Einstein equations to order $\\mu_1\n\\, \\mu_2$, we can only calculate correlators to order $\\mu_1 \\, \\mu_2$\nas well. Moreover, correlators which are independent of $\\mu_1$ and\n$\\mu_2$ are vacuum correlators that we are not interested in.\nCorrelators of order $\\mu_1$ or $\\mu_2$ correspond to performing deep\ninelastic scattering (DIS) on a single shock wave similar to\n\\cite{Mueller:2008bt,Avsar:2009xf,Kovchegov:2010uk}, and are thus not\ndirectly relevant to the problem of heavy ion collisions at hand. Thus\nin this paper we are only interested in correlators exactly at the\norder $\\mu_1 \\, \\mu_2$ in the expansion in the two shock waves. Using\nsuch power counting it is easy to see that inserting the identity\noperator (normalized to one to satisfy \\eq{norm}) into \\eq{corr4} in\nplace of ${\\cal O}_n$'s would give us a contribution of the order of\n$\\mu_1^2 \\, \\mu_2^2$, which is the lowest order contribution to double\nglueball production. Inserting $J(x)$ or $T_{\\mu\\nu} (x)$ into\n\\eq{corr4} instead of ${\\cal O}_n$'s would give zero. 
One can also see\nthat replacing ${\\cal O}_n$'s by higher (even) powers of $J(x)$ or\n$T_{\\mu\\nu} (x)$ (properly orthogonalized) in \\eq{corr4} would\ngenerate non-zero contributions, which are either higher order in\n$\\mu_1$ and $\\mu_2$ or $N_c^2$-suppressed. We therefore insert the\nidentity operator into \\eq{corr4}, which in the color space can be\nwritten as ${\\bf 1} = \\delta^{ab}\/N_c$ to satisfy normalization in\n\\eq{norm}, and write\n\\begin{align}\n \\label{corr5}\n & \\frac{d^6 N_{corr}}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2} \\,\n \\propto \\, \\int d^4 x_1 \\, d^4 x_2 \\, d^4 x'_1 \\, d^4 x'_2 \\, e^{- i\n \\, k_1 \\cdot (x_1 - x'_1) - i \\, k_2 \\cdot (x_2 - x'_2)} \\notag \\\\\n & \\times \\, \\frac{1}{N_c^2} \\, \\langle A_1, A_2 | \\, {\\overline T}\n \\left\\{ J (x'_1) \\, J (x'_2) \\right\\} | A_1, A_2 \\rangle \\ \\,\n \\langle A_1, A_2 | \\, T \\left\\{ J (x_1) \\, J (x_2) \\right\\} | A_1,\n A_2 \\rangle \\, \\left[1 + O (1\/N_c^2) \\right].\n\\end{align}\nWe have thus reduced the problem of two-glueball production to\ncalculation of two-point correlation functions! 
Note that the\nprefactor of $1\/N_c^2$ makes the $N_c$ counting right: since each\nconnected correlator is order-$N_c^2$, we see from \\eq{corr5} that the\ncorrelated two-particle multiplicity scales as $N_c^2$ as well, in\nagreement with perturbative calculations\n\\cite{Dumitru:2008wn,Gavin:2008ev,Dusling:2009ni,Dumitru:2010iy,Dumitru:2010mv}.\n\nDefining the Feynman Green function\n\\begin{align}\n \\label{GF}\n G_F (k_1, k_2) \\, = \\, \\int d^4 x_1 \\, d^4 x_2 \\, e^{- i \\, k_1\n \\cdot x_1 - i \\, k_2 \\cdot x_2} \\, \\langle A_1, A_2 | \\, T \\left\\{\n J (x_1) \\, J (x_2) \\right\\} | A_1, A_2 \\rangle\n\\end{align}\nwe can summarize \\eq{corr5} as\n\\begin{align}\n \\label{corr6}\n \\frac{d^6 N_{corr}}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2} \\, \\propto\n \\, \\frac{1}{N_c^2} \\, |G_F (k_1, k_2)|^2.\n\\end{align}\n\nWith the help of the retarded Green function\n \\begin{align}\n \\label{GR}\n G_R (k_1, k_2) \\, = \\, - i \\, \\int d^4 x_1 \\, d^4 x_2 \\, e^{- i \\,\n k_1 \\cdot x_1 - i \\, k_2 \\cdot x_2} \\, \\theta (x_1^0 - x_2^0) \\,\n \\langle A_1, A_2 | \\, \\left[ J (x_1) , J (x_2) \\right] | A_1, A_2\n \\rangle\n \\end{align}\n and using the fact that at zero temperature $|G_F|^2 = |G_R|^2$\n \\cite{Son:2002sd}, we rewrite \\eq{corr6} as\n\\begin{align}\n \\label{corr7}\n \\frac{d^6 N_{corr}}{d^2 k_1 \\, d y_1 \\, d^2 k_2 \\, d y_2} \\, \\propto\n \\, \\frac{1}{N_c^2} \\, |G_R (k_1, k_2)|^2.\n\\end{align}\nTherefore we need to calculate the two-point retarded Green function\nat the order $\\mu_1 \\, \\mu_2$. This is exactly the kind of Green\nfunction one can calculate using the AdS\/CFT techniques of Eqs.\n\\peq{Sdiff} and \\peq{Gr_mom}.\n\n\n\n\n\\section{A Simple Physical Argument}\n\\label{simple}\n\nBefore we present the full calculation of the two-particle\ncorrelations in AdS, we would like to give a simple heuristic argument\nfor what one may expect from such a calculation. 
First of all, as we\nhave noted already, we are going to expand the Green function, and,\ntherefore, the bulk field $\\phi$ into powers of $\\mu_1$ and $\\mu_2$,\nstopping at the order-$\\mu_1 \\, \\mu_2$. To find the field $\\phi$ at\nthe order-$\\mu_1 \\, \\mu_2$ one has to solve \\eq{eom} with the metric\ntaken up to the order $\\mu_1 \\, \\mu_2$. Since we are interested in the\nlong-range rapidity correlations, our goal is to obtain the leading\nrapidity contribution from the calculation. Analyzing Eqs.\n\\peq{Gr_mom}, \\peq{dil_action}, and \\peq{Sdiff}, one can conclude that\nthe leading large-rapidity contribution comes from terms with the\nhighest number of factors of light-cone momenta, i.e., from terms like\n$k_1^+ \\, k_2^-$ and $k_1^- \\, k_2^+$ (but clearly not from $k_1^+ \\,\nk_1^- = m_\\perp^2\/2$ which is rapidity-independent). Taking $M=N=-$\nin \\eq{eom} one obtains, among other terms, the following\n(leading-rapidity) contribution:\n\\begin{align}\\label{contr1}\ng^{--}_{(2)} \\ \\partial_-^2 \\, \\phi_0,\n\\end{align}\nwhere $\\phi_0$ is the field at the order $(\\mu_1)^0 \\, (\\mu_2)^0$ and\n$g^{MN}_{(2)}$ is the metric at order-$\\mu_1 \\, \\mu_2$. 
Concentrating\non order-$z^4$ terms in the metric, which, according to holographic\nrenormalization \\cite{deHaro:2000xn}, are proportional to the\nenergy-momentum tensor in the boundary theory, and remembering that\nthe latter is rapidity-independent at order-$\\mu_1 \\, \\mu_2$\n\\cite{Grumiller:2008va,Albacete:2008vs}, we use energy-momentum\nconservation, $\\partial_\\mu \\, T^{\\mu\\nu} =0$, which, in particular,\nimplies that $\\partial_- \\, T^{--} + \\partial_+ \\, T^{+-} = 0$, to\nwrite\n\\begin{align}\n g^{--}_{(2)} \\, = \\, -\n \\frac{\\partial_+}{\\partial_-} \\, g^{+-}_{(2)}.\n\\end{align}\nTherefore \\eq{contr1} contains the term\n\\begin{align}\\label{contr2}\n - \\left( \\frac{\\partial_+}{\\partial_-} \\, g^{+-}_{(2)} \\right) \\\n \\partial_-^2 \\, \\phi_0,\n\\end{align}\nwhich contributes to the field $\\phi$ at order-$\\mu_1 \\, \\mu_2$ and,\nas follows from \\eq{Gr_mom}, results in a contribution to the\nretarded Green function in momentum space proportional to\n\\begin{align}\\label{contr3}\n G_R \\, \\sim \\, \\frac{k_1^-}{k_1^+} \\, {\\tilde g}^{+-}_{(2)} \\\n (k_2^+)^2\n\\end{align}\nwith ${\\tilde g}^{+-}$ the Fourier transform of $g^{+-}$ into momentum\nspace. 
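With the light-cone parametrization $k^\pm = m_\bot e^{\pm y}/\sqrt{2}$ of the kinematics section, the momentum prefactor in \eq{contr3} is $\frac{k_1^-}{k_1^+}\,(k_2^+)^2 = \frac{m_\bot^2}{2}\, e^{2\,\Delta y}$, and its $k_1 \leftrightarrow k_2$ mirror carries $e^{-2\,\Delta y}$, so their sum builds up $m_\bot^2 \cosh(2\,\Delta y)$. A quick numerical check (Python, illustrative values):

```python
import math

def lc(mT, y):
    # k^+ = m_T e^{y}/sqrt(2), k^- = m_T e^{-y}/sqrt(2)
    return mT * math.exp(y) / math.sqrt(2.0), mT * math.exp(-y) / math.sqrt(2.0)

mT, y1, y2 = 1.0, 0.4, 3.1               # illustrative values
k1p, k1m = lc(mT, y1)
k2p, k2m = lc(mT, y2)
prefactor = (k1m / k1p) * k2p**2         # momentum structure of eq. (contr3)
mirror = (k2m / k2p) * k1p**2            # its k1 <-> k2 partner
expected = 0.5 * mT**2 * math.exp(2.0 * (y2 - y1))
```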
Since the metric component ${\\tilde g}^{+-}$ at the order-$\\mu_1 \\,\n\\mu_2$ cannot be rapidity-dependent\n\\cite{Grumiller:2008va,Albacete:2008vs}, we see that \\eq{contr3} gives\n\\begin{align}\\label{contr4}\n G_R\\big|_{|\\Delta y| \\gg 1} \\, \\sim \\, e^{2 \\, (y_2 - y_1)} \\, = \\,\n e^{2 \\, \\Delta y}.\n\\end{align}\nAdding the $k_1 \\leftrightarrow k_2$ term, arising from the $g^{++}$\ncomponent of the metric in \\eq{eom}, we get\n\\begin{align}\\label{contr5}\n G_R\\big|_{|\\Delta y| \\gg 1} \\, \\sim \\, \\cosh ({2 \\, \\Delta y}).\n\\end{align}\nDefining the correlation function\n\\begin{align}\\label{corrdef}\n C (k_1, k_2) \\, \\equiv \\, \\frac{\\frac{d^6 N_{corr}}{d^2 k_1 \\, d y_1\n \\, d^2 k_2 \\, d y_2}}{\\frac{d^3 N}{d^2 k_1 \\, d y_1} \\,\n \\frac{d^3 N}{d^2 k_2 \\, d y_2}}\n\\end{align}\nand using Eqs. \\peq{contr5} and \\peq{corr7} to evaluate it, we observe\nthat at large rapidity intervals it scales as\n\\begin{align}\\label{corrf}\n C (k_1, k_2) \\big|_{|\\Delta y| \\gg 1} \\, \\sim \\, \\cosh ({4 \\, \\Delta\n y}).\n\\end{align}\n\nAdmittedly, the argument we have just presented relies on several\nassumptions: in particular, it assumes that no other term in the metric\nwould cancel the correlations arising from the terms we have considered.\nTo make sure that this is indeed the case we will now present the full\ncalculation. The result \\peq{corrf} of our heuristic argument will\nturn out to remain valid at the end of that calculation.\n\n\n\n\\section{Two-Point Correlation Function at Early Times}\n\\label{Correlators}\n\n\n\\subsection{Glueball correlator}\n\\label{glueball}\n\nWe now proceed to the calculation of the retarded Green function in\nthe background of the metric \\peq{2nuc_gen}, following the AdS\/CFT\nprescription outlined in Eqs. \\peq{Gr_mom}, \\peq{dil_action}, and\n\\peq{Sdiff}.\n\n\n\\subsubsection{Bulk scalar field}\n\nFirst we have to find the classical scalar field $\\phi$. 
Similar to\nthe way the metric \\peq{2nuc_gen} was constructed in\n\\cite{Albacete:2008vs}, we will build the scalar field $\\phi$\norder-by-order in the powers of $\\mu_1$ and $\\mu_2$, assuming $\\mu_1$\nand $\\mu_2$ are small perturbations. We would like to find the\nsolution of \\eq{eom} up to order $\\cO(\\mu_1\\mu_2)$. For this we use\nthe following expansion,\n\\begin{align}\\label{expansion}\n \\phi(x,z) = \\phi_0(x,z) + \\phi_a(x,z) + \\phi_b(x,z) + \\phi_2(x,z) +\n \\ldots \\ ,\n\\end{align}\nwhere $\\phi_0 \\sim \\cO(\\mu^0_{1,2})$, $\\phi_{a,b} \\sim \\cO(\\mu_{1,2})$\nand $\\phi_2 \\sim \\cO(\\mu_1\\mu_2)$. We will use the standard method\n(see e.g. \\cite{Mueller:2008bt,Avsar:2009xf,Kovchegov:2010uk}) and\ndemand that the boundary conditions at $z \\rightarrow 0$ are as\nfollows:\n\\begin{align}\\label{bc}\n \\phi_0(x,z \\rightarrow 0) \\, = \\, \\phi_B (x), \\, \\, \\, \\phi_a(x,z\n \\rightarrow 0) \\, = \\, \\phi_b(x,z \\rightarrow 0) \\, = \\, \\phi_2(x,z\n \\rightarrow 0) \\, = \\, \\ldots \\, = \\, 0.\n\\end{align}\nIn this case the variation of the classical action with respect to\nboundary value of the field $\\phi_B$ required in \\eq{Sdiff} is\nstraightforward.\n\n\nUsing \\eq{2nuc_gen} in \\eq{eom}, and expanding the linear operator in\nthe latter in powers of $\\mu_1$ and $\\mu_2$ up to order-$\\mu_1 \\,\n\\mu_2$ with the help of \\peq{LO} and \\peq{LOstuff}, the EOM can be\nwritten explicitly in the form\n\\begin{align}\\label{MainEOM}\n \\left[ \\Box_5 + z^4 \\, t_1 \\, \\partial^2_+ + z^4 \\, t_2 \\,\n \\partial^2_- + \\frac{1}{12} \\, z^4 \\, \\hat{M} \\right] \\, \\phi(x,z)\n = 0 \\ .\n\\end{align}\nTaking into account that $t_1 = t_1(x^-)$ and $t_2 = t_2(x^+)$, we\ngive the following list of definitions:\n\\begin{align}\n\\Box_5 & \\, \\equiv \\, -\\partial^2_z + \\frac{3}{z} \\, \\partial_z + \\Box_4 \\ ,\n\\ \\ \\ \\ \\ \\ \\ \\ \\Box_4 \\, \\equiv \\, 2 \\, \\partial_+\\partial_- -\n\\nabla_{\\bot}^2 \\ , \\ \\ \\ \\ \\ \\ \\ \\ 
\\frac{1}{\\partial_{\\pm}} \\equiv\n\\int^{x^{\\pm}}_{-\\infty} dx'^{\\pm} \\ ,\n\\\\ \\nonumber\n\\hat{M} & \\, \\equiv \\, \\left(\\hat{D} + z^4 \\right) \\, t_1 \\, t_2 \\,\n\\nabla_{\\bot}^2 - \\frac{\\partial_+}{\\partial_-} \\, \\hat{D} \\, t_1 \\,\nt_2 \\, \\partial_-^2 - \\frac{\\partial_-}{\\partial_+} \\, \\hat{D} \\, t_1\n\\, t_2 \\, \\partial_+^2 + 2 \\, \\left(\\hat{D} + 5 \\, z^4 \\right) \\, t_1\n\\, t_2 \\, \\partial_+ \\partial_- \\\\ \\nonumber\n&+ 5 \\, z^4 \\, t_1 \\, (\\partial_+ \\, t_2) \\, \\partial_- + 5 \\, z^4 \\,\nt_2 \\, (\\partial_- \\, t_1) \\, \\partial_+ + 10 \\, z^3 \\, t_1 \\, t_2 \\,\n\\partial_z + 2 \\, z^4 \\, t_1 \\, t_2 \\, \\partial_z^2 \\ , \\\\ \\nonumber\n\\hat{D} & \\, \\equiv \\, 96 \\, \\frac{1}{\\partial^2_+} \\,\n\\frac{1}{\\partial^2_-} + 16 \\, z^2 \\, \\frac{1}{\\partial_+} \\,\n\\frac{1}{\\partial_-} + z^4 \\ .\n\\end{align}\nSubstituting expansion (\\ref{expansion}) into (\\ref{MainEOM}), and\ngrouping different powers of $\\mu_1$ and $\\mu_2$ together, we end up\nwith the following set of equations, listed here along with their\nboundary conditions:\n\\begin{subequations}\\label{eom3}\n\\begin{align}\n &\\Box_5 \\phi_0(x,z) = 0 \\ , \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\phi_0(x,z\\to0) = \\phi_B(x) \\ , \\label{eom0} \\\\\n &\\Box_5 \\phi_a(x,z) = - z^4 \\, t_1(x^-) \\, \\partial_+^2 \\, \\phi_0(x,z) \\ , \\ \\ \\ \\ \\ \\phi_a(x,z\\to0) = 0 \\ , \\label{eoma} \\\\\n &\\Box_5 \\phi_b(x,z) = - z^4 \\, t_2(x^+) \\, \\partial_-^2 \\, \\phi_0(x,z) \\ , \\ \\ \\ \\ \\ \\phi_b(x,z\\to0) = 0 \\ , \\label{eomb} \\\\\n &\\Box_5 \\phi_2(x,z) = - z^4 \\, t_1(x^-) \\, \\partial_+^2 \\,\n \\phi_b(x,z) - z^4 \\, t_2(x^+) \\, \\partial_-^2 \\, \\phi_a(x,z) -\n \\frac{z^4}{12} \\, \\hat{M} \\, \\phi_0(x,z) \\ , \\ \\ \\ \\ \\\n \\phi_2(x,z\\to0) = 0 \\ , \\label{eom2}\n\\end{align}\n\\end{subequations}\nwhere we also imply that all the solutions should be regular at $z\\to\n\\infty$.\n\nTo solve equations \\peq{eom3} it is
convenient to introduce a Green\nfunction $G(x,z,z')$ satisfying the equation\n\\begin{align}\\label{Green1}\n \\Box_5 \\, G(x,z,z') \\, = \\, z'^3 \\, \\delta(z-z').\n\\end{align}\nThe Green function can be written as\n\\begin{align}\\label{Green2}\n G(x,z,z') \\, = \\, z^2 \\, z'^2 \\, I_2(z_< \\sqrt{\\Box_4}) \\, K_2(z_>\n \\sqrt{\\Box_4}) \\ ,\n\\end{align}\nwhere $z_{\\{<,>\\}} = {\\rm \\{min,max\\}}\\{z,z'\\}$. We can rewrite the\ninverse of $\\Box_5$ operator as\n\\begin{align}\n \\frac{1}{\\Box_5} f(x,z) \\equiv \\int^{\\infty}_0 \\frac{dz'}{z'^3} \\,\n G(x,z,z') \\, f(x,z').\n\\end{align}\nSolving the first equation in \\peq{eom3} we find\n\\begin{align}\\label{free}\n \\phi_0(x,z) = \\frac{1}{2}z^2 \\Box_4 K_2(z\\sqrt{\\Box_4})\\phi_B(x) \\ .\n\\end{align}\nFrom Eqs. \\peq{eoma}, \\peq{eomb}, and \\eq{eom2} we have\n\\begin{align}\n \\phi_a(x,z) &= - \\frac{1}{\\Box_5} \\left[z^4 \\, t_1 \\, \\partial_+^2\n \\, \\phi_0\\right] \\ , \\ \\ \\ \\ \\ \\ \\ \\\n \\phi_b(x,z) = - \\frac{1}{\\Box_5} \\left[z^4 \\, t_2 \\, \\partial_-^2 \\,\n \\phi_0\\right] \\ , \\label{solab} \\\\\n \\phi_2(x,z) &= \\frac{1}{\\Box_5} \\, z^4 \\, t_1 \\, \\partial_+^2 \\,\n \\frac{1}{\\Box_5} \\, z^4 \\, t_2 \\, \\partial_-^2 \\, \\phi_0 +\n \\frac{1}{\\Box_5} \\, z^4 \\, t_2 \\, \\partial_-^2 \\, \\frac{1}{\\Box_5}\n \\, z^4 \\, t_1 \\, \\partial_+^2 \\, \\phi_0 - \\frac{1}{\\Box_5} \\, z^4 \\,\n \\frac{\\hat{M}}{12} \\, \\phi_0 \\ . \\label{sol2}\n\\end{align}\nWe have constructed the bulk scalar field which we need to find the\ncorrelation function.\n\n\n\n\n\n\n\\subsubsection{Glueball correlation function}\n\n\nWe can now calculate the retarded glueball correlation function using\n\\eq{sol2} in Eqs. \\peq{dil_action}, \\peq{Sdiff}, and \\peq{Gr_mom}. 
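As a numerical cross-check of the bulk Green function \peq{Green2} (a sketch using SciPy's modified Bessel functions; the values of $q \equiv \sqrt{\Box_4}$ acting on a single space-like mode and of $z'$ are arbitrary): away from $z = z'$ the function $z^2 z'^2\, I_2(q z_<)\, K_2(q z_>)$ solves the homogeneous radial equation, while the jump of its $z$-derivative across $z = z'$ equals $-z'^3$, reproducing the source in \peq{Green1}:

```python
from scipy.special import iv, kv

q = 1.7     # stands for sqrt(Box_4) on a single space-like 4d mode
zp = 0.9    # source position z'

def G(z):
    # z^2 z'^2 I_2(q z_<) K_2(q z_>), eq. (Green2)
    zl, zg = min(z, zp), max(z, zp)
    return z**2 * zp**2 * iv(2, q * zl) * kv(2, q * zg)

# (a) homogeneous equation -G'' + (3/z) G' + q^2 G = 0 away from z = z'
z, h = 1.5, 1e-4
d1 = (G(z + h) - G(z - h)) / (2 * h)
d2 = (G(z + h) - 2 * G(z) + G(z - h)) / h**2
residual = -d2 + (3.0 / z) * d1 + q**2 * G(z)

# (b) jump of dG/dz across z = z', expected to equal -z'^3
eps_, hs = 1e-3, 1e-5
d_above = (G(zp + eps_ + hs) - G(zp + eps_ - hs)) / (2 * hs)
d_below = (G(zp - eps_ + hs) - G(zp - eps_ - hs)) / (2 * hs)
jump = d_above - d_below
```

The jump condition follows from the Wronskian $I_2(x) K_2'(x) - K_2(x) I_2'(x) = -1/x$, which is what fixes the normalization of \peq{Green2}.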
It\nis straightforward to check that\n\\begin{align}\\label{Gexp}\n &\\left[\\frac{1}{z^3} \\, \\partial_z \\, G(x,z,z')\\right]_{z\\to0} \\, =\n \\, \\frac{1}{2} \\, z'^2 \\, \\Box_4 \\, K_2(z' \\, \\sqrt{\\Box_4}) \\ .\n\\end{align}\nUsing \\eq{Gexp}, along with Eqs. \\peq{dil_action}, \\peq{Sdiff}, and\n\\peq{Gr_mom}, we obtain\n\\begin{align}\n \\label{eq:G1}\n G_R (k_1, k_2) \\, = \\, \\frac{N_c^2}{16} \\,\\mu_1 \\, \\mu_2 \\,\n \\delta^{(2)}({\\un k}_{1} + {\\un k}_{2}) \\, k_1^2 \\, k_2^2 \\, \\left[\n F(k_1,k_2) + F(k_2,k_1) \\right] \\ ,\n\\end{align}\nwhere\n\\begin{align}\n \\label{eq:FAB}\n F(k_1,k_2) \\equiv & F_\\text{I} (k_1,k_2) + F_\\text{II} (k_1,k_2)\n\\end{align}\nwith\n\\begin{align}\\label{eq:FI}\n F_\\text{I} (k_1,k_2) = & \\int^{\\infty}_0 dz~z^5 \\, K_2 \\left(z \\,\n \\sqrt{k_1^2}\\right) \\,\n\\int^{\\infty}_0 dz'~z'^5 \\, K_2\\left(z' \\, \\sqrt{k_2^2} \\right) \\notag \\\\\n& \\times \\, \\left[(k_1^-k_2^+)^2\n I_2\\left(Q_1z_<\\right)K_2\\left(Q_1z_>\\right) + (k_1^+k_2^-)^2\n I_2\\left(Q_2z_<\\right)K_2\\left(Q_2z_>\\right)\\right]\n\\end{align}\nand\n\\begin{align}\\label{eq:FII}\n F_\\text{II} (k_1,k_2) = &\\frac{k^2_{2\\bot}}{12}\\int^{\\infty}_0 dz~z^5 K_2\n \\left(z \\, \\sqrt{k_1^2} \\right)\n\\left[\\frac{96}{(k_1^+k_1^-)^2} - \\frac{16 \\, z^2}{k_1^+k_1^-}\\right]\nK_2\\left(z \\, \\sqrt{k_2^2}\\right) \\notag \\\\[7pt] \\nonumber\n&-\n\\frac{1}{12}\\left[\\frac{k_1^-k_2^{+2}}{k_1^+}+\\frac{k_1^+k_2^{-2}}{k_1^-}\\right] \\,\n\\int^{\\infty}_0 dz~z^5 K_2 \\left( z \\, \\sqrt{k_1^2} \\right)\n\\left[\\frac{96}{(k_1^+k_1^-)^2} - \\frac{16 \\, z^2}{k_1^+k_1^-} + z^4 \\right]\n\\, K_2 \\left(z \\, \\sqrt{k_2^2} \\right) \\\\[7pt] \\nonumber\n&+\\frac{1}{6} \\, k_2^+k_2^-\\int^{\\infty}_0 dz~z^5 \\, K_2 \\left( z \\,\n \\sqrt{k_1^2} \\right)\n\\left[\\frac{96}{(k_1^+k_1^-)^2} - \\frac{16 \\, z^2}{k_1^+k_1^-} + 8 \\,\n z^4 \\right] K_2 \\left(z \\, \\sqrt{k_2^2} \\right) \\\\[7pt] \\nonumber &-\n\\frac{5}{12}\\left[2 \\, k_2^+k_2^- + 
k_2^+k_1^- +\n k_2^-k_1^+\\right]\\int^{\\infty}_0 dz~z^9 \\, K_2 \\left( z\\,\n \\sqrt{k_1^2} \\right)\nK_2 \\left(z \\, \\sqrt{k_2^2} \\right) \\\\[7pt]\n&+ \\frac{4 \\, k_2^2}{3} \\, \\int^{\\infty}_0 dz~z^8 \\, K_2 \\left( z\\,\n \\sqrt{k_1^2} \\right) \\, K_1\\left(z \\, \\sqrt{k_2^2} \\right)\\ .\n\\end{align}\nWe have defined\n\\begin{align}\n \\label{eq:Q12}\n Q_1^2 \\, = \\, 2 \\, k_1^- \\, k_2^+ + k_\\perp^2, \\ \\ \\ Q_2^2 \\, = \\, 2\n \\, k_1^+ \\, k_2^- + k_\\perp^2,\n\\end{align}\nwith $k_{1, \\perp} = k_{2, \\perp} = k_\\perp$.\n\nBefore evaluating the obtained expressions further, let us comment on\nsome of their features. First one may note that \\eq{eq:G1}\ncontains a delta-function of transverse momenta of the two glueballs\n$\\delta^{(2)}({\\un k}_{1} + {\\un k}_{2})$. This demonstrates that at\nthe lowest non-trivial order in $\\mu_1$ and $\\mu_2$ expansion\n(order-$\\mu_1 \\, \\mu_2$) there will be nothing else produced in the\nshock wave collision apart from the two glueballs. Note that indeed a\nnon-zero $\\langle T_{\\mu\\nu} \\rangle$ in the forward light-cone at the\norder-$\\mu_1 \\, \\mu_2$ found in\n\\cite{Grumiller:2008va,Albacete:2008vs} indicates that a medium is\ncreated: however this strongly-coupled medium in the ${\\cal N} =4$ SYM\ntheory without bound states and confinement does not fragment into\nindividual particles, and at late times simply results in a very low\n(and decreasing) energy density created in the collision, similar to\nthe asymptotic future of Bjorken hydrodynamics dual found in\n\\cite{Janik:2005zt}. Since in our calculation we have explicitly\nprojected out two glueballs with fixed momenta in the final state,\nthose two glueballs are all that is left carrying transverse momentum\nin the forward light-cone. 
(Leftovers of the original shock waves may\nalso be present, though they would not carry any transverse momentum.)\nThis picture is in agreement with the dominance of elastic processes\nin high energy scattering in the AdS\/CFT framework suggested in\n\\cite{Levin:2008vj}.\n\n\nAnother important aspect of the result in Eqs. \\peq{eq:FI} and\n\\peq{eq:FII} above is that the integrals over $z$ and $z'$ diverge for\ntime-like momenta $k_1$ and $k_2$, i.e., for $k_1^2 = - m^2$ and\n$k_2^2 = - m^2$ corresponding to production of physical glueballs of\nmass $m$. This result should be expected in ${\\cal N} =4$ SYM theory:\nsince there are no bound states in this theory, we conclude that there\nare no glueballs. Thinking of Bessel functions $K_2 (z\n\\sqrt{k_{1,2}^2})$ in Eqs. \\peq{eq:FI} and \\peq{eq:FII} as\ncontributing to the wave functions of glueballs in AdS$_5$ space\n\\cite{Brodsky:2003px,Polchinski:2002jw,Polchinski:2000uf}, we conclude\nthat the lack of glueball bound states in the theory manifests itself\nthrough de-localization of these wave functions, resulting in ``bound\nstates'' of infinite radii, both in the bulk and in the boundary\ntheory (if we identify the holographic coordinate $z$ with the inverse\nmomentum scale on the UV boundary). Since the glueballs for us have\nalways been some external probes of the ${\\cal N} =4$ SYM theory, we\nconclude that one has to define the probes by re-defining their\nwavefunctions. This can be accomplished, for instance, by introducing\nconfinement in the theory, by using either the ``hard-wall'' or\n``soft-wall'' models\n\\cite{Polchinski:2001tt,BoschiFilho:2002vd,Brodsky:2003px,Erlich:2005qh,DaRold:2005zs,BoschiFilho:2005yh,Grigoryan:2007vg,Karch:2006pv,Karch:2010eg,Grigoryan:2007my}.\nThe inverse confinement scale would define the typical size of the\nbound states. 
Indeed such procedure would introduce a model-dependent\nuncertainty associated with mimicking confinement in AdS\/CFT, but is\nunavoidable in order to define glueball probes. Besides, our main goal\nhere is to calculate long-range rapidity correlations, which are not\naffected (apart from a prefactor) by the exact shape of the glueball\nAdS$_5$ wave functions. We therefore model confinement by modeling\nthe glueball (external source) AdS wave functions by simply replacing\n$K_2 (z \\sqrt{k_{1,2}^2}) \\rightarrow K_2 (z \\, \\Lambda)$ in Eqs.\n\\peq{eq:FI} and \\peq{eq:FII} with $\\Lambda >0$ related to confinement\nmomentum scale. We then rewrite Eqs. \\peq{eq:FI} and \\peq{eq:FII} as\n\\begin{align}\\label{eq:FI2}\n F_\\text{I} (k_1,k_2) = & \\int^{\\infty}_0 dz~z^5 \\, K_2 \\left(z \\,\n \\Lambda \\right) \\,\n \\int^{\\infty}_0 dz'~z'^5 \\, K_2\\left(z' \\, \\Lambda \\right) \\notag \\\\\n & \\times \\, \\left[(k_1^-k_2^+)^2\n I_2\\left(Q_1z_<\\right)K_2\\left(Q_1z_>\\right) + (k_1^+k_2^-)^2\n I_2\\left(Q_2z_<\\right)K_2\\left(Q_2z_>\\right)\\right],\n\\end{align}\n\\begin{align}\\label{eq:FII2}\n F_\\text{II} (k_1,k_2) = &\\frac{k^2_{\\bot}}{12}\\int^{\\infty}_0 dz~z^5\n \\, \\left[ K_2 \\left(z \\, \\Lambda \\right) \\right]^2\n \\left[\\frac{384}{m_\\perp^4} - \\frac{32 \\, z^2}{m_\\perp^2}\\right]\n \\notag \\\\[7pt] \\nonumber\n &-\n \\frac{1}{12}\\left[\\frac{k_1^-k_2^{+2}}{k_1^+}+\\frac{k_1^+k_2^{-2}}{k_1^-}\\right]\n \\, \\int^{\\infty}_0 dz~z^5 \\, \\left[ K_2 \\left( z \\, \\Lambda \\right)\n \\right]^2\n\\left[\\frac{384}{m_\\perp^4} - \\frac{32 \\, z^2}{m_\\perp^2} + z^4\n\\right] \\\\[7pt] \\nonumber\n&+\\frac{1}{12} \\, m_\\perp^2 \\, \\int^{\\infty}_0 dz~z^5 \\, \\left[ K_2\n \\left( z \\, \\Lambda \\right) \\right]^2\n\\left[\\frac{384}{m_\\perp^4} - \\frac{32 \\, z^2}{m_\\perp^2} + 8 \\, z^4\n\\right] \\\\[7pt] \\nonumber &- \\frac{5}{12}\\left[m_\\perp^2 + k_2^+k_1^-\n + k_2^-k_1^+\\right]\\int^{\\infty}_0 dz~z^9 \\, \\left[ K_2 \\left( z\\,\n 
\\Lambda \\right) \\right]^2 \\\\[7pt]\n&+ \\frac{4 \\, m^2}{3} \\, \\int^{\\infty}_0 dz~z^8 \\, K_2 \\left( z\\,\n \\Lambda \\right) \\, K_1\\left(z \\, \\Lambda \\right)\\ ,\n\\end{align}\nwhere we have also replaced all rapidity-independent factors with\npowers of either glueball mass $m$ or $m_\\perp = \\sqrt{k_\\perp^2 +\n m^2}$.\n\n\nThe contributions in Eqs. \\peq{eq:FI2} and \\peq{eq:FII2} (or those in\nEqs. \\peq{eq:FI} and \\peq{eq:FII}) to the retarded Green function\n\\peq{Gr_mom} are shown diagrammatically in \\fig{graphs} in terms of\nWitten diagrams. There the wiggly lines represent gravitons, while\nthe dashed line denotes the scalar field. Crosses represent insertions\nof the boundary energy-momentum tensors of the two shock waves\n($\\mu_1$ and $\\mu_2$). $F_\\text{I}$ from \\eq{eq:FI2} corresponds to\nthe diagram on the left of \\fig{graphs}, while $F_\\text{II}$ from\n\\eq{eq:FII2} is given by the term on the right of \\fig{graphs}.\n\n\n\\begin{figure}[th]\n\\begin{center}\n\\epsfxsize=9cm\n\\leavevmode\n\\hbox{\\epsffile{graphs.eps}}\n\\end{center}\n\\caption{Diagrammatic representation of the correlation function\n calculated in this Section.}\n\\label{graphs}\n\\end{figure}\n\n\nIt is important to note that the Green function given by Eqs.\n\\peq{eq:G1}, \\peq{eq:FAB}, \\peq{eq:FI2}, and \\peq{eq:FII2} is indeed\nreal, justifying the assumption we employed in stating that \\eq{Sdiff}\nprovides us a retarded Green function. This can also be seen from the\ndiagrams in \\fig{graphs} in which one can not cut the scalar\npropagator. The imaginary part of $G_R$ appears at higher order in\n$\\mu_1 \\, \\mu_2$, when one has more graviton insertions in the scalar\npropagator, allowing for non-zero cuts of the latter.\n\n\nLet us now study the large-rapidity interval asymptotics of the\nobtained correlation function \\peq{eq:G1}. 
One can deduce from the\nkinematics described in Section \\ref{kine} that\n\\begin{align}\n &k^+_1k^-_2 = \\frac{m^2_{\\bot}}{2} \\, e^{-\\Delta y} \\ , \\ \\ \\ \\ \\\n k^-_1k^+_2 = \\frac{m^2_{\\bot}}{2} \\, e^{\\Delta y} \\ ,\n\\end{align}\nsuch that when $\\Delta y = y_2 - y_1 \\gg 1$ we have\n\\begin{align}\n Q^2_1 = k^2_{\\bot} + m_{\\bot}^2 \\, e^{\\Delta y} \\approx m^2_{\\bot}\n \\, e^{\\Delta y} \\ , \\ \\ \\ Q^2_2 = k^2_{\\bot} + m_{\\bot}^2 \\,\n e^{-\\Delta y} \\approx k^2_{\\bot}.\n\\end{align}\nTherefore, the contribution from \\eq{eq:FI2} becomes\n\\begin{align}\\label{FIapp1}\n F_I (k_1,k_2) \\big|_{\\Delta y \\gg 1} \\, \\approx \\, \\int^{\\infty}_0\n dz~z^5 \\, K_2 \\left(z \\, \\Lambda \\right) \\, \\int^{\\infty}_0 dz'~z'^5\n \\, K_2\\left(z' \\, \\Lambda \\right) \\, (k_1^-k_2^+)^2\n I_2\\left(Q_1z_<\\right) \\, K_2 \\left(Q_1z_>\\right).\n\\end{align}\nTo determine the large-$Q_1$ asymptotics of $I_2\\left(Q_1z_<\\right) \\,\nK_2 \\left(Q_1z_>\\right)$ note that, according to Eqs. 
\\peq{Green1} and\n\\peq{Green2}, $z^2 \\, z'^2 \\, I_2\\left(Q_1z_<\\right) \\, K_2\n\\left(Q_1z_>\\right)$ satisfies\n\\begin{align}\n \\label{eq:IK}\n \\left[ - \\partial_z^2 + \\frac{3}{z} \\, \\partial_z + Q_1^2 \\right] \\,\n z^2 \\, z'^2 \\, I_2\\left(Q_1z_<\\right) \\, K_2 \\left(Q_1z_>\\right) \\,\n = \\, z'^3 \\, \\delta (z - z').\n\\end{align}\nHence, for $Q_1$ larger than the inverse of the typical variation in\n$z$ we have\n\\begin{align}\n \\label{eq:IK2}\n z^2 \\, z'^2 \\, I_2\\left(Q_1z_<\\right) \\, K_2 \\left(Q_1z_>\\right)\n \\bigg|_{\\text{large} \\, Q_1} \\, \\approx \\, \\frac{z'^3}{Q_1^2} \\, \\delta\n (z - z'),\n\\end{align}\nwhich, when used in \\eq{FIapp1} yields\n\\begin{align}\n \\label{FIapp2}\n F_I (k_1,k_2) \\big|_{\\Delta y \\gg 1} \\, \\approx \\, \\frac{2048}{7} \\,\n \\frac{(k_1^-k_2^+)^2}{Q_1^2 \\, \\Lambda^{10}} \\, \\approx \\,\n \\frac{512}{7} \\, \\frac{m_\\perp^2}{\\Lambda^{10}} \\, e^{\\Delta y}.\n\\end{align}\nThis result implies that the rapidity correlations coming from this\nterm grow as $e^{\\Delta y}$ at the early stages after the collision.\n\n\nOn the other hand, the dominant contributions from the second term,\n$F_\\text{II} (k_1,k_2)$, are coming from the expressions in the second\nand the fourth lines of \\eq{eq:FII2}. 
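The coefficient in \eq{FIapp2} can be verified numerically: after the $\delta$-function approximation \peq{eq:IK2}, the remaining $z$-integral is $\int_0^\infty dz\, z^9 \left[K_2(\Lambda z)\right]^2 = \frac{2048}{7\,\Lambda^{10}}$. A quick check with SciPy (the value of $\Lambda$ is arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

Lam = 1.3   # arbitrary positive confinement-related scale Λ
# z-integral left over in F_I after the delta-function approximation
val, abserr = quad(lambda z: z**9 * kv(2, Lam * z)**2, 0.0, np.inf)
closed = 2048.0 / (7.0 * Lam**10)   # closed form behind eq. (FIapp2)
```

The integrand is regular at $z = 0$ (where $K_2(\Lambda z) \sim 2/(\Lambda z)^2$, so the integrand behaves as $z^5$) and decays exponentially at large $z$, so the quadrature converges quickly.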
They give\n\\begin{align}\\label{FIIapp}\n F_\\text{II} (k_1,k_2) \\big|_{\\Delta y \\gg 1} &\\approx -\n \\frac{1}{12}\\left[\\frac{k_1^-k_2^{+2}}{k_1^+}+\\frac{k_1^+k_2^{-2}}{k_1^-}\\right]\n \\, \\int^{\\infty}_0 dz~z^5 \\, \\left[ K_2 \\left( z \\, \\Lambda \\right)\n \\right]^2 \\left[\\frac{384}{m_\\perp^4} - \\frac{32 \\, z^2}{m_\\perp^2}\n + z^4 \\right] \\\\ \\nonumber &- \\frac{5}{12} \\, \\left(k_2^+k_1^- +\n k_1^+k_2^-\\right) \\, \\int^{\\infty}_0 dz~z^9 \\, \\left[ K_2 \\left( z\n \\, \\Lambda \\right) \\right]^2 \\\\ \\nonumber & \\approx -\n \\frac{256}{21} \\, \\frac{m^2_{\\bot}}{\\Lambda^{10}} \\, e^{2 \\, \\Delta\n y} \\, \\left[ 1 - 3 \\, \\frac{\\Lambda^2}{m_\\perp^2} + \\frac{42}{5}\n \\, \\frac{\\Lambda^4}{m_\\perp^4} \\right] - \\frac{1280}{21} \\,\n \\frac{m^2_{\\bot}}{\\Lambda^{10}} \\, e^{\\Delta y}.\n\\end{align}\n\n\nCombining Eqs. \\peq{FIapp2} and \\peq{FIIapp} in Eqs. \\peq{eq:G1} and\n\\peq{eq:FAB} we obtain\n\\begin{align}\n \\label{GRlargey}\n G_R (k_1, k_2)\\big|_{\\Delta y \\gg 1} \\, \\approx \\, - \\frac{64}{21}\n \\, \\frac{N_c^2 \\,\\mu_1 \\, \\mu_2 \\, m^4 \\, m_\\perp^2}{\\Lambda^{10}}\n \\, \\delta^{(2)}({\\un k}_{1} + {\\un k}_{2}) \\, \\left\\{ e^{2 \\, \\Delta\n y} \\, \\left[ 1 - 3 \\, \\frac{\\Lambda^2}{m_\\perp^2} + \\frac{42}{5}\n \\, \\frac{\\Lambda^4}{m_\\perp^4} \\right] + e^{\\Delta y} \\right\\},\n\\end{align}\nwhich, dropping the second term in the curly brackets and using the $+\n\\leftrightarrow -$ symmetry of the problem can be generalized to\n\\begin{align}\n \\label{GR_gen}\n G_R (k_1, k_2)\\big|_{|\\Delta y| \\gg 1} \\, \\approx \\, & -\n \\frac{128}{21} \\, \\frac{N_c^2 \\,\\mu_1 \\, \\mu_2 \\, m^4 \\,\n m_\\perp^2}{\\Lambda^{10}} \\, \\delta^{(2)}({\\un k}_{1} + {\\un\n k}_{2}) \\, \\cosh ({2 \\, \\Delta y}) \\, \\left[ 1 - 3 \\,\n \\frac{\\Lambda^2}{m_\\perp^2} + \\frac{42}{5} \\,\n \\frac{\\Lambda^4}{m_\\perp^4} \\right].\n\\end{align}\nWe thus conclude that at large rapidity 
separations\n\\begin{align}\n \\label{GRlarge}\n G_R (k_1, k_2)\big|_{|\Delta y| \gg 1} \, \sim \, \cosh ({2 \,\n \Delta y})\n\end{align}\nin agreement with our estimate in \eq{contr5}.\n\nUsing \eq{corr7} we conclude that\n\begin{align}\n \frac{d^6 N_{corr}}{d^2 k_1 \, d y_1 \, d^2 k_2 \, d y_2}\n \Bigg|_{|\Delta y| \gg 1} \, \sim \, \cosh ({4 \, \Delta y})\n\end{align}\nsuch that the two-glueball correlation function defined in\n\eq{corrdef} scales as\n\begin{align}\label{Corr_fin}\n C (k_1, k_2) \big|_{|\Delta y| \gg 1} \, \sim \, \cosh ({4 \, \Delta\n y}),\n\end{align}\njust like in \eq{corrf}. We have demonstrated the presence of\nlong-range rapidity correlations in the case of strongly-coupled\nhigh-energy heavy ion collisions. The rapidity shape of the obtained\ncorrelations is very different from that of the ``ridge'' correlation observed\nexperimentally at RHIC and at the LHC\n\cite{Adams:2005ph,Adare:2008cqb,Alver:2009id,Khachatryan:2010gv}. It\nis possible that higher-order corrections in $\mu_1$ and $\mu_2$ would\nmodify the rapidity shape of the correlation, putting it more in line\nwith experiment. We will return to this point in Sec. \ref{sum}.\n\n\nLet us now pause to determine the parameter of our approximation.\nUntil now we have, somewhat loosely, referred to our approximation as\nan expansion in $\mu_1$ and $\mu_2$. However, these parameters have\ndimensions of mass cubed, and cannot by themselves serve as expansion\nparameters. From \eq{GR_gen}\nwe may infer that the dimensionless expansion parameters are $\mu_1\n\/ \Lambda^3$ and $\mu_2 \/ \Lambda^3$, where $\Lambda$ is the inverse\nglueball size.
Thus our result in \eq{GR_gen} dominates the\ncorrelation function only for\n\begin{align}\n \label{conds}\n \frac{\mu_1}{\Lambda^3} \ll 1, \ \ \ \frac{\mu_2}{\Lambda^3} \ll 1.\n\end{align}\nSince, as can be seen from \eq{mus}, $\mu_1$ and $\mu_2$ are\nenergy-dependent, these conditions limit the energy range of\napplicability of \eq{GR_gen}. \eq{conds} also makes clear physical\nsense: since the metric \peq{2nuc_gen} with the coefficients given by\nEqs. \peq{LO} and \eq{LOstuff} is valid only for early proper times\n$\tau$ satisfying $\mu_{1,2} \, \tau^3 \ll 1$\n\cite{Albacete:2008vs,Albacete:2009ji}, we see that the glueballs have\nto be small enough, $1\/\Lambda \approx \tau \approx \mu_{1,2}^{-1\/3}$,\nto be able to resolve (and be sensitive to) the metric at such early\ntimes.\n\nNote also that the obtained Green function \peq{GR_gen} is not a\nmonotonic function of $m_\perp$: for $m_\perp \ll \Lambda$ it grows\nwith $m_\perp$ as $m_\perp^2$, but, for $m_\perp \gg \Lambda$ it falls\noff as $1\/m_\perp^2$, peaking at $m_\perp^2 = (28\/5) \, \Lambda^2$.\nThis translates into the correlation function $C (k_1, k_2)$ first growing\nwith $m_\perp$ (and, therefore, $k_\perp$) as $m_\perp^4$ for $m_\perp\n\ll \Lambda$, and then decreasing as $1\/m_\perp^4$ for $m_\perp \gg\n\Lambda$. Similar non-monotonic behavior has been observed for\nthe ``ridge'' correlation experimentally\n\cite{Adams:2005ph,Adare:2008cqb,Alver:2009id,Khachatryan:2010gv}.\nWhile in CGC-based approaches\n\cite{Dumitru:2008wn,Gavin:2008ev,Dusling:2009ni,Dumitru:2010iy,Dumitru:2010mv,Kovner:2010xk}\nthe maximum of the correlation function is given by the saturation\nscale $Q_s$, and happens at $k_\perp \approx Q_s$, in our AdS\/CFT case\nthe maximum appears to be related to the inverse size of the produced\nbound state and its mass, such that it takes place at $k_\perp \approx\n\sqrt{\Lambda^2 - m^2}$.
At this point it is not clear, though, whether\nsuch a conclusion is a physical prediction or an artifact of the\nperturbative solution of the problem in the AdS space.\n\n\nIn order to make a more detailed comparison with experiment one needs\nto improve on our AdS\/CFT approach both by calculating higher-order\ncorrections in $\mu_1$ and $\mu_2$, and, possibly, by implementing\nnon-conformal QCD features, such as confinement, along the lines of\nthe AdS\/QCD models\n\cite{Polchinski:2001tt,BoschiFilho:2002vd,Brodsky:2003px,Erlich:2005qh,DaRold:2005zs,BoschiFilho:2005yh,Grigoryan:2007vg,Karch:2006pv,Karch:2010eg,Grigoryan:2007my}.\nThe latter modification would certainly change our glueball wave\nfunctions in the bulk, modifying the Bessel functions in\nEqs.~\peq{eq:FI2} and \peq{eq:FII2}. However, while the use of AdS\/QCD\ngeometry may affect the $m_\perp$-dependence of the correlation\nfunction \peq{GR_gen}, one may see from Eqs.~\peq{eq:FI2} and\n\peq{eq:FII2} that such a modification would not affect our main\nconclusion about the rapidity-dependence of the correlations shown in\n\eq{Corr_fin}.
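The numerical coefficients in Eqs. \peq{FIapp2} and \peq{FIIapp} are fixed by three moments of the squared glueball profile, $\int_0^\infty dz\, z^{5,7,9}\, [K_2(z\Lambda)]^2 = (32/5)\Lambda^{-6},\ (192/7)\Lambda^{-8},\ (2048/7)\Lambda^{-10}$; in particular, the ratios $-3$ and $42/5$ inside the square bracket of \eq{FIIapp} follow from these values. A quick scipy cross-check of the moments (an illustrative sketch, not part of the original calculation):

```python
# Numerical check of the Bessel moments behind the coefficients in
# Eqs. (FIapp2) and (FIIapp); e.g.
#   int_0^inf dz z^9 [K_2(z L)]^2 = (2048/7) / L^10
# is the integral that, with the kinematic prefactors, yields 2048/7
# in (FIapp2) and 1280/21 in (FIIapp).
from scipy.integrate import quad
from scipy.special import kv

def moment(n, lam=1.0):
    """int_0^inf z^n [K_2(lam z)]^2 dz; upper limit 50/lam is effectively infinity."""
    val, _ = quad(lambda z: z**n * kv(2, lam * z)**2, 0.0, 50.0 / lam)
    return val

for n, exact in [(5, 32 / 5), (7, 192 / 7), (9, 2048 / 7)]:
    assert abs(moment(n, 1.0) - exact) < 1e-6 * exact
```

Rescaling $z \to z/\Lambda$ shows that each moment scales as the corresponding inverse power of $\Lambda$, which is why the whole correlator carries the overall $1/\Lambda^{10}$.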
The leading large-rapidity asymptotics of the\ncorrelation function \peq{Corr_fin} results from the second term on\nthe right-hand-side of \eq{eq:FII2}: modifying the glueball wave\nfunction would only change the coefficient in front of the\nrapidity-dependent part.\footnote{As the integrand in that term is\n positive-definite for any glueball wave function, the coefficient\n cannot vanish.} Since the growth of correlations with rapidity does\nnot reproduce experimental data\n\cite{Adams:2005ph,Adare:2008cqb,Alver:2009id,Khachatryan:2010gv}, our\nconclusion is that the inclusion of higher-order corrections in\n$\mu_1$ and $\mu_2$ is the only possibility for AdS\/CFT (or AdS\/QCD)\ncalculations to get in line with the data.\n\n\n\n\subsection{Energy-momentum tensor correlator}\n\nWe have shown that there are long-range rapidity correlations in the\nglueball operator of \eq{Jdef} in strong-coupling heavy ion\ncollisions. At the same time we would like to extend this statement to\ncorrelations of other operators. The energy-momentum tensor is a natural\nnext candidate. Indeed, the glueball operator \peq{Jdef} is a part of\nthe energy-momentum tensor: hence correlations in $\langle J(x) \,\nJ(y) \rangle$ probably imply correlations in $\langle T_{\mu\nu} (x)\n\, T_{\mu\nu} (y) \rangle$ as well. To show that this is true we will\npresent an argument below, largely following\n\cite{Policastro:2002se,Kovtun:2004de}.\n\n\nConsider a field theory whose dual holographic description is given\nby the metric of the general form\n\begin{align}\label{genmet}\n ds^2 &= g^{(0)}_{MN} \, dx^M \, dx^N = f(x^+,x^-,z) \, dx^2_{\bot} +\n g_{\mu\nu}(x^+,x^-,z) \, d\xi^{\mu} \, d\xi^{\nu} \ ,\n\end{align}\nwhere ${\un x} = (x^1, x^2)$,\n$d x_\perp^2 = (d x^1)^2 + (d x^2)^2$\nand $\xi^{\mu} = (x^+,x^-,z)$. Now, consider small perturbations\naround the metric independent of $x^1, x^2$: $g_{MN}= g_{MN}^{(0)} +\nh_{MN} (x^+,x^-,z)$.
We will work in the $h_{Mz} =0$ gauge. The metric\n\\peq{genmet} has a rotational $O$(2) symmetry in the transverse plane.\nUnder the transverse rotations one may naively expect $\\{ h_{11},\nh_{12}, h_{22} \\}$ components to transform as tensors, $\\{ h_{01},\nh_{31}, h_{02}, h_{32} \\}$ components to transform as vectors, and $\\{\nh_{00}, h_{03}, h_{33} \\}$ components to be scalars under rotations.\nHowever, rewriting the transverse part of the metric as\n\\begin{align}\n \\label{trmet}\n \\left(\n \\begin{array}{cc}\n h_{11} & h_{12} \\\\\n h_{21} & h_{22}\n \\end{array}\n\\right) \\, = \\,\n\\left(\n \\begin{array}{cc}\n (h_{11} + h_{22})\/2 & 0 \\\\\n 0 & (h_{11} + h_{22})\/2\n \\end{array}\n\\right) +\n\\left(\n \\begin{array}{cc}\n (h_{11} - h_{22})\/2 & h_{12} \\\\\n h_{21} & - (h_{11} - h_{22})\/2\n \\end{array}\n\\right)\n\\end{align}\nwe see that $h_{11} + h_{22}$ is also invariant under $O$(2)\ntransverse plane rotations. Hence the final classification of the\nmetric components under $O$(2) rotations is: $\\{ h_{11} - h_{22},\nh_{12} \\}$ are in the tensor representation, $\\{ h_{01}, h_{31},\nh_{02}, h_{32} \\}$ are vectors, and $\\{ h_{00}, h_{03}, h_{33}, h_{11}\n+ h_{22} \\}$ are scalars \\cite{Policastro:2002se,Kovtun:2004de}.\n\n\nUsing the above classification we see that we can assume that the only\nnon-vanishing component of $h_{MN}$ is $h_{12} = h_{21} =\nh_{12}(x^+,x^-,z)$. It is in the tensor representation and, as can be\nseen with the help of \\eq{trmet}, by rotating in the transverse plane\nwe can always find a coordinate system in which $h_{11} - h_{22} =0$\nand $h_{12} = h_{21}$ remains the only non-zero metric component in\nthe tensor representation. 
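As a quick consistency check of this classification (an illustrative sympy sketch, not part of the paper), one can rotate the transverse block explicitly and verify that the trace is a scalar while the traceless pair $(h_{11}-h_{22},\, h_{12})$ mixes into itself with angle $2\theta$, i.e. carries spin 2 under $O(2)$:

```python
import sympy as sp

theta = sp.symbols('theta', real=True)
h11, h12, h22 = sp.symbols('h11 h12 h22', real=True)

H = sp.Matrix([[h11, h12], [h12, h22]])           # transverse block of h_MN
R = sp.Matrix([[sp.cos(theta), -sp.sin(theta)],
               [sp.sin(theta),  sp.cos(theta)]])  # O(2) rotation by theta
Hp = R * H * R.T                                  # rotated components

# the trace h11 + h22 is invariant (a scalar under rotations)
assert sp.simplify(Hp[0, 0] + Hp[1, 1] - (h11 + h22)) == 0

# the traceless pair (h11 - h22, h12) rotates with angle 2*theta (spin 2)
assert sp.simplify(Hp[0, 0] - Hp[1, 1]
                   - ((h11 - h22) * sp.cos(2 * theta)
                      - 2 * h12 * sp.sin(2 * theta))) == 0
assert sp.simplify(Hp[0, 1]
                   - (h12 * sp.cos(2 * theta)
                      + sp.Rational(1, 2) * (h11 - h22) * sp.sin(2 * theta))) == 0
```

The factor of $2\theta$ is exactly why these two components sit in the tensor (spin-2) representation, while the trace joins the scalars.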
Since all other components of the metric\nare in other representations of the $O$(2) symmetry group, they do not\nmix with $h_{12}$ in Einstein equations, and can be safely put to zero\n\\cite{Policastro:2002se,Kovtun:2004de}.\n\n\nSubstituting the metric $g_{MN}= g_{MN}^{(0)} + h_{MN} (x^+,x^-,z)$\nwith $g_{MN}^{(0)}$ given by \\peq{genmet} into Einstein equations\n\\peq{ein}, and expanding the result to linear order in $h_{12}$ we get\n\\cite{Policastro:2002se,Kovtun:2004de}\n\\begin{align}\n \\Box \\, h_{12} - 2 \\, \\frac{\\partial^{\\mu} f}{f} \\, \\partial_{\\mu}\n h_{12} + 2 \\, \\frac{(\\partial f)^2}{f^2} \\, h_{12} - \\frac{\\Box\n f}{f} \\, h_{12} = 0\\, ,\n\\label{minscal}\n\\end{align}\nwhere\n\\begin{align}\n \\label{box3}\n \\Box \\, = \\, \\frac{1}{\\sqrt{-g}} \\, \\partial_M \\left[ \\sqrt{-g} \\, g^{MN} \\, \\partial_N \\ldots \\right]\n \n\\end{align}\nand $(\\partial f)^2 = g^{MN} \\, \\partial_M f \\, \\partial_N f$.\nChanging the variable from $h_{12}$ to $h^1_2=h_{12}\/f$, one can see\nthat $h^1_2$ indeed satisfies the equation for a minimally coupled\nmassless scalar \\cite{Policastro:2002se,Kovtun:2004de}:\n\\begin{align}\n \\Box \\, h^1_2 = 0.\n\\end{align}\nTherefore, since our metric \\peq{2nuc_gen} falls into the category of\n\\eq{genmet}, the analysis of Sec \\ref{glueball} applies to the metric\ncomponent $h_2^1$. 
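The change of variables $h^1_2 = h_{12}/f$ can be made explicit (a short derivation sketch, using only the Leibniz property of the Laplacian \peq{box3}; since $f$ and $h_{12}$ depend only on $\xi^\mu = (x^+, x^-, z)$, the $\mu$-sums below equal the full $M$-sums):

```latex
% Write h_{12} = f\,\varphi with \varphi \equiv h^1_2, and use the
% Leibniz-type identity for the scalar Laplacian (box3):
%   \Box (f\varphi) = f\,\Box\varphi + \varphi\,\Box f
%                     + 2\, g^{MN} \partial_M f \, \partial_N \varphi .
% Substituting into (minscal), the f-derivative terms cancel in pairs:
\begin{align}
  0 &= \Box (f\varphi)
     - 2\,\frac{\partial^{\mu} f}{f}\,\partial_{\mu} (f\varphi)
     + 2\,\frac{(\partial f)^2}{f^2}\, f\varphi
     - \frac{\Box f}{f}\, f\varphi \notag \\
    &= f\,\Box\varphi
     + \underbrace{2\,\partial f \cdot \partial\varphi
       - 2\,\partial f \cdot \partial\varphi}_{=\,0}
     + \underbrace{\varphi\,\Box f - \varphi\,\Box f}_{=\,0}
     + \underbrace{2\,\frac{(\partial f)^2}{f}\,\varphi
       - 2\,\frac{(\partial f)^2}{f}\,\varphi}_{=\,0}
     \; = \; f\,\Box\varphi \, ,
\end{align}
% hence the linearized equation is equivalent to \Box h^1_2 = 0,
% the equation of a minimally coupled massless scalar.
```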
Defining the retarded Green function for the\n$T_2^1$ components of the energy-momentum tensor (EMT) by\n\\begin{align}\n \\label{Gr_ten}\n G^{EMT}_R (k_1, k_2) \\, = \\, - i \\, \\int d^4 x_1 \\, d^4 x_2 \\, e^{-\n i \\, k_1 \\cdot x_1 - i \\, k_2 \\cdot x_2} \\, \\theta (x_1^0 - x_2^0)\n \\, \\langle A_1, A_2 | \\, \\left[ T_2^1 (x_1) , T_2^1 (x_2) \\right] |\n A_1, A_2 \\rangle\n\\end{align}\nwe conclude that, similar to the glueball operator,\n\\begin{align}\n \\label{GRlargeEMT}\n G^{EMT}_R (k_1, k_2)\\big|_{|\\Delta y| \\gg 1} \\, \\sim \\, \\cosh ({2 \\,\n \\Delta y}).\n\\end{align}\nHence we have shown that the correlators of EMT operators exhibit the\nsame long-range rapidity correlations as the glueball correlators. It\nis therefore very likely that such correlations are universal and are\nalso present in correlators of other operators.\n\n\n\n\n\\section{Estimate of the Two-Point Correlation Function at Late Times}\n\\label{late-times}\n\n\nOur conclusion about long-range rapidity correlations was derived\nusing the metric \\peq{2nuc_gen} which is valid only at very early\ntimes after a shock wave collision. As discussed in the Introduction,\nwe do not expect the interactions at later times to affect these\ncorrelations, since different-rapidity regions of the produced medium\nbecome causally disconnected at late times. To check that no\nlong-range rapidity correlations can arise from the late-time dynamics\none would have to calculate the correlation function \\peq{corrdef} in\nthe full metric produced in a shock wave collision including all\npowers of $\\mu_1$ and $\\mu_2$. Since no such analytical solution\nexists, instead we will use the metric dual to Bjorken hydrodynamics\n\\cite{Bjorken:1982qr} constructed in \\cite{Janik:2005zt}. 
One has to\nbe careful in interpreting the result we obtain in this Section:\nBjorken hydrodynamics \cite{Bjorken:1982qr} is rapidity-independent,\nwhile there are reasons to believe that the medium produced in a shock\nwave collision would exhibit rapidity dependence, as indicated by\nperturbative solutions of Einstein equations done in\n\cite{Grumiller:2008va,Albacete:2008vs,Albacete:2009ji}. Nonetheless,\nwe expect that our calculation below will be a good initial estimate\nof the late-time rapidity correlations.\n\n\nThe dual geometry corresponding to the perfect fluid was obtained by\nJanik and Peschanski in \cite{Janik:2005zt}. It can be written as\n\begin{align}\label{JPmetric}\n ds^2 = L^2 \, \left\{ -\frac{1}{z^2}\frac{\left(1 -\n z^4\/z_h^4(\tau)\right)^2}{1 + z^4\/z_h^4(\tau)}d\tau^2 +\n \frac{\left(1 + z^4\/z_h^4(\tau)\right)}{z^2}\left(\tau^2 d\eta^2 +\n dx_{\bot}^2\right) + \frac{dz^2}{z^2} \right\} \ ,\n\end{align}\nwhere $\tau = \sqrt{2x^+x^-}$ is proper time, $\eta =\n\frac{1}{2}\ln(x^+\/x^-)$ is space-time rapidity, and $z_h(\tau) =\n\left(\frac{3}{\cE_0}\right)^{1\/4}\tau^{1\/3}$ (with $\cE_0$ some\ndimensionful quantity) determines the position of the dynamical\nhorizon in AdS$_5$ such that the Hawking temperature is\n\begin{align}\nT(\tau) = \frac{\sqrt{2}}{\pi z_h(\tau)} =\n\frac{\sqrt{2}}{\pi}\left(\frac{\cE_0}{3}\right)^{1\/4}~\tau^{-1\/3} \\n.\n\end{align}\n\n\nUnfortunately, finding the glueball correlation function in the Bjorken\nhydrodynamic state is equivalent to finding the boundary-to-boundary\nscalar propagator in the background of the Janik-Peschanski metric\n\peq{JPmetric}, which is a daunting task: such a propagator has not yet\nbeen found even for the static AdS Schwarzschild black hole metric.\nInstead, to estimate the correlations we will perform a perturbative\ncalculation.\n\nAt late times, when $\tau \gg \cE_0^{-3\/8}$, assuming either that $z$\nis fixed or is bounded from\nabove (by, say, an infrared (IR)\ncutoff coming from the definition of the glueball wave function), we\ncan consider the ratio $u(\tau) \equiv z\/z_{h}(\tau) \ll 1 $ to be a\nsmall quantity. If so, we can expand the EOM for the scalar field\n\peq{eom} up to $\cO(u^4)$, obtaining\n\begin{align}\label{EOMJP}\n &\Box_5 \phi(\tau, \eta, x_{\bot}, z) + u^4 \, \left[4 \,\n \partial^2_{\tau}\n - \Box_4 \right] \, \phi(\tau, \eta, x_{\bot}, z) = 0 \ , \\\n \nonumber\n&\Box_5\phi \equiv -z^3 \partial_z\left(\frac{1}{z^3}\partial_z\n\phi\right) + \Box_4 \phi \ , \ \ \ \ \ \\n\Box_4\phi \equiv \frac{1}{\tau}\partial_{\tau}\left(\tau\n \partial_{\tau}\phi\right) - \frac{1}{\tau^2}\partial^2_{\eta}\phi -\n\nabla^2_{\bot}\phi = \left(2\partial_+\partial_- -\n \nabla^2_{\bot}\right)\phi \ .\n\end{align}\n\n\nExpanding the scalar field in powers of $u$ we write\n\begin{align}\n \label{eq:phi_exp}\n \phi = \phi_0 + \phi_1 + \ldots\n\end{align}\nwhere $\phi_0 \sim \cO\left(u^0 \right)$ and $\phi_1 \sim\n\cO\left(u^{4}\right)$.
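The last equality in the definition of $\Box_4$ above, $(1/\tau)\partial_\tau(\tau\partial_\tau) - (1/\tau^2)\partial^2_\eta = 2\partial_+\partial_-$, is the standard Milne-coordinate identity. It can be verified with sympy on exponentials $e^{a x^+ + b x^-}$, which span generic $x^\pm$-dependence (an illustrative check, not part of the paper):

```python
import sympy as sp

tau, eta, a, b = sp.symbols('tau eta a b', positive=True)

# Milne coordinates: tau = sqrt(2 x^+ x^-), eta = (1/2) ln(x^+/x^-),
# inverted as x^+ = tau e^eta / sqrt(2), x^- = tau e^(-eta) / sqrt(2).
xp = tau * sp.exp(eta) / sp.sqrt(2)
xm = tau * sp.exp(-eta) / sp.sqrt(2)

# test function phi = exp(a x^+ + b x^-), for which 2 d_+ d_- phi = 2 a b phi
phi = sp.exp(a * xp + b * xm)

box4 = (sp.diff(tau * sp.diff(phi, tau), tau) / tau
        - sp.diff(phi, eta, 2) / tau**2)

assert sp.simplify(box4 - 2 * a * b * phi) == 0
```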
Substituting this back into Eq.~(\\ref{EOMJP}),\nwe get\n\\begin{align}\\label{EOMJP2}\n &\\Box_5 \\, \\phi_0 = 0 \\ , \\ \\ \\ \\ \\ \\ \\ \\Box_5 \\, \\phi_1 = -\n \\frac{\\cE_0}{3} \\, \\frac{z^4}{\\tau^{4\/3}} \\, \\left[4 \\,\n \\partial^2_{\\tau} - \\Box_4\\right] \\, \\phi_0 \\ .\n\\end{align}\nThe solution for $\\phi_0$ was found above and is given in \\eq{free}.\nWe write the solution for $\\phi_1$ as\n\\begin{align}\n \\label{phi1}\n \\phi_1 = - \\frac{\\cE_0}{3} \\, \\frac{1}{\\Box_5} \\,\n \\frac{z^4}{\\tau^{4\/3}} \\, \\left[4 \\, \\partial^2_{\\tau} - \\Box_4\n \\right] \\, \\phi_0 \\, \\approx \\, \\frac{\\cE_0}{3} \\, \\frac{1}{\\Box_5}\n \\, \\frac{z^4}{\\tau^{4\/3}} \\, \\Box_4 \\, \\phi_0\n\\end{align}\nwhere in the last step we neglected $\\partial^2_{\\tau}$, since a\nderivative like this generates $O (1\/\\tau^2)$ corrections (at fixed\n$u$), which were neglected in constructing the original metric\n\\peq{JPmetric} and are thus outside of the precision of our\napproximation. We are now ready to calculate the retarded Green\nfunction. Using \\eq{phi1} in Eqs. \\peq{Sdiff}, \\peq{dil_action}, and\n\\peq{Gr_mom}, and employing \\eq{Gexp} yields\n\\begin{align}\n \\label{GrBj1}\n G_R^{Bj} (k_1, k_2)\\big|_{O(1\/z_h^4)} \\, = \\, - \\frac{N_c^2 \\, \\cE_0\n \\, m^6}{24} \\, \\delta^2 ({\\un k}_1 + {\\un k}_2) \\, \\int^{\\infty}_0\n dz~z^5 \\, K_2 \\left( z \\, \\sqrt{k_1^2} \\right) \\, K_2 \\left( z \\,\n \\sqrt{k_2^2} \\right) \\notag \\\\ \\times \\, \\int_0^\\infty d x^+ \\, d\n x^- \\, e^{i \\, x^+ \\, (k_1^- + k_2^-) + i \\, x^- \\, (k_1^+ + k_2^+)}\n \\, \\frac{1}{\\tau^{4\/3}}\n\\end{align}\nwhere we have replaced $k_1^2$ and $k_2^2$ with $-m^2$ everywhere\nexcept for the arguments of the Bessel functions. The integrals over\n$x^+$ and $x^-$ in \\eq{GrBj1} run from $0$ to $\\infty$ since the\nmatter only exists in the forward light-cone. 
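Evaluating the $x^\pm$ integrals in \eq{GrBj1} relies on the analytic continuation of $\int_0^\infty dx\, x^{-2/3}\, e^{-p x} = \Gamma(1/3)\, p^{-1/3}$ (valid for ${\rm Re}\, p > 0$) to $p \to -i\,(k_1^\mp + k_2^\mp)$. The master formula can be confirmed numerically at a complex test point (a cross-check sketch, not part of the paper; the substitution $x = t^3$ removes the integrable endpoint singularity):

```python
# Check: int_0^inf x^(-2/3) e^(-p x) dx = Gamma(1/3) p^(-1/3) for Re p > 0.
# After x = t^3 the integrand becomes the smooth 3 e^(-p t^3).
from cmath import exp as cexp
from math import gamma
from scipy.integrate import quad

p = 1 - 1j                                    # complex test point with Re p > 0

re, _ = quad(lambda t: (3 * cexp(-p * t**3)).real, 0.0, 5.0)
im, _ = quad(lambda t: (3 * cexp(-p * t**3)).imag, 0.0, 5.0)

expected = gamma(1 / 3) * p**(-1 / 3)         # principal branch of the power
assert abs(complex(re, im) - expected) < 1e-6
```

Taking $p = \epsilon - i k$ with $\epsilon \to 0^+$ gives $\Gamma(1/3)\, e^{i\pi/6}\, k^{-1/3}$ for each light-cone direction; the product of the two integrals, combined with the $2^{-2/3}$ from $\tau^{4/3} = (2 x^+ x^-)^{2/3}$, is the origin of the constant $N$ quoted in the next equation.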
(On top of that the\nmetric \\peq{JPmetric} is valid at late times only, for $u \\ll 1$, such\nthat the actual $x^+$ and $x^-$ integration region should be even more\nrestricted, possibly suppressing the correlations we are about to\nobtain even more.)\n\nJust like in the case of the early times considered in Sec.\n\\ref{glueball}, the integral over $z$ in \\eq{GrBj1} is divergent for\ntime-like momenta $k_1$ and $k_2$. Similar to what we did in Sec.\n\\ref{glueball}, we recognize the Bessel functions in \\eq{GrBj1} as the\nglueball wave functions in the bulk, which need to be modified to\nreflect the finite size of glueballs, which do not exist in ${\\cal N}\n=4$ SYM theory. Replacing $K_2 (z \\sqrt{k_{1,2}^2}) \\rightarrow K_2 (z\n\\, \\Lambda)$ in \\eq{GrBj1} and integrating over $z$ yields\n\\begin{align}\n \\label{GrBj2}\n G_R^{Bj} (k_1, k_2)\\big|_{O(1\/z_h^4)} \\, = \\, - \\frac{4 \\, N_c^2 \\,\n \\cE_0 \\, m^6}{15 \\, \\Lambda^6} \\, \\delta^2 ({\\un k}_1 + {\\un k}_2)\n \\, \\int_0^\\infty d x^+ \\, d x^- \\, e^{i \\, x^+ \\, (k_1^- + k_2^-) +\n i \\, x^- \\, (k_1^+ + k_2^+)} \\, \\frac{1}{\\tau^{4\/3}}.\n\\end{align}\n\n\n\nEvaluating the integrals left in \\eq{GrBj2},\n\\begin{align}\n \\int_0^\\infty d x^+ \\, d x^- \\, e^{i \\, x^+ \\, (k_1^- + k_2^-) + i\n \\, x^- \\, (k_1^+ + k_2^+)} \\, \\frac{1}{(2 \\, x^+ \\, x^-)^{2\/3}} \\,\n = \\, \\frac{N}{(k_1^+ + k_2^+)^{1\/3}(k_1^- + k_2^-)^{1\/3}} \\ ,\n\\end{align}\nwhere\n\\begin{align}\n N \\, = \\, \\frac{\\Gamma^2 \\left( \\frac{1}{3} \\right) \\, e^{i \\, \\pi\n \/3}}{2^{2\/3}},\n\\end{align}\nwe obtain\n\\begin{align}\n \\label{GrBj3}\n G_R^{Bj} (k_1, k_2)\\big|_{O(1\/z_h^4)} \\, = \\, - \\frac{4 \\, N_c^2 \\,\n \\cE_0 \\, m^6}{15 \\, \\Lambda^6} \\, \\delta^2 ({\\un k}_1 + {\\un k}_2)\n \\, \\frac{N}{m^{2\/3}_{\\bot} \\, (1+ \\cosh \\Delta y)^{1\/3}}.\n\\end{align}\nThe corresponding two-glueball correlation function scales as\n\\begin{align}\\label{CBj}\n C^{Bj} (k_1, k_2)\\big|_{|\\Delta y| \\gg 
1} \sim\n \frac{1}{m^{4\/3}_{\bot} \, (\cosh \Delta y)^{2\/3}}.\n\end{align}\nWe conclude that rapidity correlations coming from the AdS dual of\nBjorken hydrodynamics are suppressed at large rapidity intervals, at\nleast in the perturbative estimate we have performed. This result\nappears to agree with the causality argument\n\cite{Gavin:2008ev,Dumitru:2008wn}, which makes the appearance of long-range\nrapidity correlations unlikely at late times. Moreover, the locality\nof $C^{Bj}$ in rapidity suggests that late-time dynamics is not likely\nto affect long-range rapidity correlations coming from the early\nstages of the collision: hydrodynamic evolution cannot ``wash out''\nsuch long-range rapidity correlations.\n\nNote that the complete momentum space two-glueball correlation\nfunction receives contributions from all regions of coordinate space,\ni.e., from all $x_1$ and $x_2$. In Sec. \ref{Correlators} we have\ncalculated the contribution arising from early proper times, while\nhere we have estimated the late-time contribution. One may expect that\nin the complete result the two contributions coming from different\nintegration regions would simply add together: in that case, clearly\nthe early-time contribution in \eq{Corr_fin} would dominate for large\nrapidity intervals, leading to long-range rapidity correlations\narising in the collision.\n\n\n\n\n\n\section{Summary}\n\label{sum}\n\nLet us summarize by first restating that we have found long-range\nrapidity correlations in the initial stages of strongly-coupled heavy\nion collisions as described by the AdS\/CFT correspondence. We expect that,\ndue to causality, the correlations would survive the late-time\nevolution of the produced medium, though one needs to have a full\nsolution of the shock wave collision problem to be able to verify this\nassertion.
The long-range rapidity correlations may be relevant for\nthe description of the ``ridge'' correlation observed in heavy ion and\nproton-proton collisions\n\cite{Adams:2005ph,Adare:2008cqb,Alver:2009id,Khachatryan:2010gv}.\nIndeed, the ``ridge'' correlation is characterized not only by the\nlong-range rapidity correlation, but also by a narrow zero-angle\nazimuthal correlation between the triggered and associated particles.\nAs was suggested in \cite{Gavin:2008ev,Dumitru:2008wn}, such an azimuthal\ncorrelation may be due to the radial flow of the produced medium. The\nadvantage of the AdS\/CFT approach to the problem is that the full\nsolution to the problem for a collision of two shock waves with some\nnon-trivial transverse profiles would have radial flow included in the\nevolution of the dual metric, and would be able to demonstrate whether\nradial flow is sufficient to lead to the ``ridge'' phenomenon. However,\nsuch a calculation appears to be prohibitively complicated to do\nanalytically at the moment.\n\n\nThe correlations we found grow very fast with the rapidity interval, as\none can see from \eq{corrf}, while the experimentally observed\ncorrelation\n\cite{Adams:2005ph,Adare:2008cqb,Alver:2009id,Khachatryan:2010gv} is\nat most flat in rapidity. Since this contradicts existing observations,\nthis result may lead to the conclusion that the initial stages of heavy\nion collisions cannot be strongly coupled. At the\nsame time, it may happen that higher-order corrections in $\mu_1$ and\n$\mu_2$ would affect this rapidity dependence, flattening the\nresulting distribution. On yet another hand, such higher-order\ncorrections become important at later times, and eventually causality\nmay prohibit further late-time modification to the long-range rapidity\ncorrelations.
More work is needed to clarify this important question\nabout the rapidity shape of the correlations coming from the solution\nof the full problem in AdS.\n\n\nAssuming that the issue of the rapidity shape can be resolved, we would\nalso like to point out that the $k_T$-dependence of the obtained correlator\n\peq{GR_gen} closely resembles that reported in the data\n\cite{Adams:2005ph,Adare:2008cqb,Alver:2009id,Khachatryan:2010gv}: it\nstarts out growing with $k_T$ at low-$k_T$, and, at higher $k_T$, it\nfalls off with $k_T$. The location of the maximum of the correlator in\nour case was determined by the mass and size of the produced\nparticles, and was thus energy-independent. It is possible that the\nsolution of the full problem, resumming all powers of $\mu_1$ and\n$\mu_2$, would lead to the maximum of the correlation function being given by\n$\mu_{1,2}^{1\/3}$, which in turn would be inversely proportional to\nthe thermalization time \cite{Grumiller:2008va,Kovchegov:2009du}, thus\nproviding an independent way of measuring this quantity. Again, more\nresearch is needed to explore this possibility.\n\n\n\n\n\n\n\n\n\acknowledgments\n\nThis research is sponsored in part by the U.S. Department of Energy\nunder Grant No. DE-SC0004286.\n\n\n\n\providecommand{\href}[2]{#2}\begingroup\raggedright","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
\n\nIn this paper we address the phenomenology of $K^0\\to\\pi\\pi$ decays.\nOne of the long-standing puzzles is the ``$\\Delta I=1\/2$ rule'',\nwhich is the observation that the transition channel with isospin changing\nby 1\/2 is enhanced 22 times with respect to transitions\nwith isospin changing by 3\/2. \nThe strong interactions are essential for\nexplaining this effect within the Standard Model.\nSince the energy scales involved in these decays are rather small,\ncomputations in quantum chromodynamics (QCD) have to be done\nusing a non-perturbative method such as Lattice QCD. Namely,\nLattice QCD is used to calculate hadronic matrix elements of \nthe operators appearing in the effective weak Hamiltonian. \n\nThere have been so far several other attempts to study matrix elements of the \noperators relevant for $\\Delta I=1\/2$ rule on the \nlattice~\\cite{KilcupSharpe,BernardSoni,MartinelliMaiani},\nbut they fell short of desired accuracy.\nIn addition, several groups~\\cite{Twopi,BernardSoniTwopi} have studied matrix\nelements $\\langle \\pi^+\\pi^0|O_i|K^+\\rangle$, which describe\nonly $\\Delta I=3\/2$, not $\\Delta I=1\/2$ transition.\nIn the present simulation, the statistics is finally under control\nfor $\\Delta I=1\/2$ amplitude. \n\nOur main work is in calculating matrix elements \n$\\langle\\pi^+ |O_i|K^+\\rangle$ and $\\langle 0|O_i|K^0\\rangle$ for\nall basis operators (introduced in Sec.~\\ref{sec:framework}). \nThis is enough to recover matrix elements\n$\\langle\\pi\\pi|O_i|K^0\\rangle$ using chiral perturbation theory\nin the lowest order, although this procedure suffers from \nuncertainties arising from ignoring higher orders (in particular, \nfinal state interactions). 
The latter matrix elements are an essential\npart of the phenomenological expressions for the $\Delta I=1\/2$ \nand \mbox{$\Delta I=3\/2$} amplitudes, as well as for $\varepsilon '\/\varepsilon$.\nThe ratio of the amplitudes computed in this way\nconfirms a significant enhancement of the $\Delta I=1\/2$ channel,\nalthough systematic uncertainties preclude a definite answer.\n\nIn addition, we address the related issue of \n$\varepsilon '\/\varepsilon$ -- the direct \nCP-violation parameter in the neutral kaon system.\nAs of this writing, the experimental data are somewhat \nambiguous about this parameter: the group at CERN (NA48)~\cite{CERN} \nreports $\mbox{Re} (\varepsilon '\/\varepsilon) = \mbox{\n$(23 \pm 7) \times 10^{-4},$}$ \nwhile the Fermilab group \mbox{(E731)}~\cite{Fermilab} has found \n$\mbox{Re} (\varepsilon '\/\varepsilon) = \mbox{$(7.4 \pm 6.0) \times 10^{-4}$.}$ \nThere is hope that the discrepancy between the two reports will soon\nbe resolved by a new generation of experiments.\n\nOn the theoretical side, progress in estimating \n$\varepsilon '\/\varepsilon$ in the Standard\nModel is largely slowed down by the unknown matrix elements~\cite{buras} \nof the appropriate operators.\nThe previous attempts~\cite{KilcupSharpe,BernardSoni,MartinelliMaiani} \nto compute them on the lattice did not take\ninto account operator matching. In this work we repeat this calculation\nwith better statistics and a more thorough investigation of systematic \nuncertainties. We are using perturbative operator matching. In some cases\nit does not work, so we explore alternatives and come up with a\npartially non-perturbative renormalization procedure. The associated\nerrors are estimated to be large. This is currently the biggest stumbling \nblock in computing $\varepsilon '\/\varepsilon$. \n\nThe paper is structured as follows. 
In Section~\ref{sec:Framework}\nwe present the context of our calculations, define the quantities\nwe are after, and discuss a number of theoretical points\nrelevant for the calculation. Section~\ref{sec:lattice details}\ndiscusses issues pertaining to the lattice simulation. \nIn Section~\ref{sec:di12} we present the results and discuss systematic errors\nfor the $\Delta I=1\/2$ rule amplitudes.\nIn Section~\ref{sec:pert} we explain how the operator matching problem, \ntogether with other systematic errors, precludes a reliable calculation of \n$\varepsilon '\/\varepsilon$, and give our best estimates\nfor this quantity in Section~\ref{sec:epsp_res}.\nSection~\ref{sec:conclusion} contains the conclusion. In the Appendix\nwe give details about the quark operators and sources, and\nprovide explicit expressions for all contractions and matrix\nelements for reference purposes.\n\n\section{Theoretical framework}\n\label{sec:Framework}\n\n\subsection{Framework and definitions}\n\label{sec:framework}\n\nThe standard approach to describing the problems in question is to\nuse the Operator Product Expansion at the $M_W$ scale and the\nRenormalization Group equations to translate the effective weak\ntheory to more \nconvenient scales ($\mu \sim$~2--4~GeV). 
At these scales the effective \nHamiltonian for $K\\to\\pi\\pi$ decays is the following linear \nsuperposition~\\cite{buras}:\n\\begin{equation}\nH_{\\mathrm W}^{\\mathrm eff} = \n\\frac{G_F}{\\sqrt{2}} V_{ud}\\,V^*_{us} \\sum_{i=1}^{10} \\Bigl[\nz_i(\\mu) + \\tau y_i(\\mu) \\Bigr] O_i (\\mu) \n \\, , \n\\end{equation}\nwhere $z_i$ and $y_i$ \nare Wilson coefficients (currently known at two-loop order), \n$\\tau \\equiv - V_{td}V_{ts}^{*}\/V_{ud} V_{us}^{*}$, \nand $O_i$ are basis of four-fermions operators defined as follows:\n\\begin{eqnarray}\n\\label{eq:ops1}\nO_1 & = & (\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5) u_\\beta )\n(\\bar{u}_\\beta \\gamma^\\mu (1-\\gamma_5)d_\\alpha ) \\\\\nO_2 & = & (\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)u_\\alpha)\n(\\bar{u}_\\beta \\gamma^\\mu (1-\\gamma_5)d_\\beta ) \\\\\n\\label{eq:ops3}\nO_3 & = & (\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)d_\\alpha )\n\\sum_q(\\bar{q}_\\beta \\gamma^\\mu (1-\\gamma_5)q_\\beta ) \\\\\nO_4 & = & (\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)d_\\beta )\n\\sum_q(\\bar{q}_\\beta \\gamma^\\mu (1-\\gamma_5)q_\\alpha ) \\\\\nO_5 & = & (\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)d_\\alpha )\n\\sum_q(\\bar{q}_\\beta \\gamma^\\mu (1+\\gamma_5)q_\\beta ) \\\\\nO_6 & = & (\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)d_\\beta )\n\\sum_q(\\bar{q}_\\beta \\gamma^\\mu (1+\\gamma_5)q_\\alpha ) \\\\\nO_7 & = & \\frac{3}{2}(\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)d_\\alpha )\n\\sum_q e_q (\\bar{q}_\\beta \\gamma^\\mu (1+\\gamma_5)q_\\beta ) \\\\\nO_8 & = & \\frac{3}{2}(\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)d_\\beta )\n\\sum_q e_q (\\bar{q}_\\beta \\gamma^\\mu (1+\\gamma_5)q_\\alpha ) \\\\\nO_9 & = & \\frac{3}{2}(\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)d_\\alpha )\n\\sum_q e_q (\\bar{q}_\\beta \\gamma^\\mu (1-\\gamma_5)q_\\beta ) \\\\ \nO_{10} & = & \\frac{3}{2}(\\bar{s}_\\alpha \\gamma_\\mu (1-\\gamma_5)d_\\beta )\n\\sum_q e_q (\\bar{q}_\\beta \\gamma^\\mu (1-\\gamma_5)q_\\alpha ) 
\n\\label{eq:ops10}\n\\end{eqnarray}\nHere $\\alpha$ and $\\beta$ are color indices, $e_q$ is the quark\nelectric charge, and the summation runs over all light quarks. \n\nThe isospin amplitudes are defined as \n\\begin{equation}\n\\label{amp}\nA_{0,2}e^{i\\delta_{0,2}} \\equiv \\langle (\\pi\\pi )_{I=0,2}|H_W|K^0\\rangle ,\n\\end{equation}\nwhere $\\delta_{0,2}$ are the final state interaction phases of the\ntwo channels. Experimentally\n\\begin{equation}\n\\omega = \\mbox{Re} A_0 \/\\mbox{Re} A_2 \\simeq 22 \\, .\n\\end{equation}\n\nThe direct CP violation parameter $\\varepsilon '$ is defined in terms \nof the imaginary parts of these amplitudes:\n\\begin{equation}\n\\varepsilon ' = -\\frac{\\mbox{Im} A_0 - \\omega \\mbox{Im} A_2}{\\sqrt{2}\\omega\\mbox{Re} A_0}\n e^{i(\\pi\/2 + \\delta_2 - \\delta_0)}.\n\\end{equation}\nExperiments measure the quantity $\\mbox{Re} \\varepsilon '\/\\varepsilon$, \nwhich is given by\n\\begin{equation}\n\\label{eq:epsp}\n\\mbox{Re} \\,\\frac{\\varepsilon '}{\\varepsilon} \\simeq\n\\frac{G_F}{2\\omega |\\varepsilon |\\mbox{Re}{A_0}} \\,\n\\mbox{Im}\\, \\lambda_t \\, \\,\n \\left[ \\Pi_0 - \\omega \\: \\Pi_2 \\right] ,\n\\end{equation}\nwhere\n\\begin{eqnarray}\n\\label{P0}\n \\Pi_0 & = & \\sum_i y_i \\, \\langle (\\pi\\pi )_{I=0}|O_i^{(0)}|K^0\\rangle \n(1 - \\Omega_{\\eta +\\eta '}) \\\\\n\\label{P2}\n \\Pi_2 & = & \\sum_i y_i \\, \\langle (\\pi\\pi )_{I=2}|O_i^{(2)}|K^0\\rangle \n\\end{eqnarray}\nwith $\\mbox{Im}\\, \\lambda_t \\equiv \\mbox{Im} V_{td}V^*_{ts}$, and where\n$\\Omega_{\\eta + \\eta'} \\simeq 0.25\\pm 0.05$ takes into account the effect\nof isospin breaking in the quark masses ($m_u \\neq m_d$). $O_i^{(0)}$ and\n$O_i^{(2)}$ are the isospin 0 and 2 parts of the basis operators.\nTheir expressions are given in the Appendix for completeness.\n\n\\subsection{Treatment of the charm quark}\n\nThe effective Hamiltonian given above is obtained in the continuum \ntheory in which the top, bottom and charm quarks are integrated out. 
\n(In particular, the summation in Eqs.~(\\ref{eq:ops3}--\\ref{eq:ops10}) \nis done over the $u$, $d$ and $s$ quarks.) This makes sense only when\nthe scale $\\mu$ is sufficiently far below the charm quark mass.\nAs mentioned in Ref.~\\cite{charm}, at scales comparable to $m_c$ \nhigher-dimensional \noperators can contribute considerably. Then one should consider\nan expanded set of operators including those containing the charm quark.\nLattice treatment of the charm quark is possible but\nin practice quite limited, requiring, for example, much\nsmaller lattice spacings and a more complicated set\nof operators and contractions. Therefore\nwe have opted to work in the effective theory in which the charm quark\nis integrated out. Since we typically use $\\mu \\sim 2$~GeV in our\nsimulations, this falls into a dangerous region. We hope that\nthe effects of higher-dimensional operators can still be neglected, but\nstrictly speaking this issue should be separately investigated.\n\n\\subsection{Calculating $\\langle \\pi\\pi|O_i|K^0\\rangle$.}\n\nAs was shown by Martinelli and Testa~\\cite{testa}, two-particle\nhadronic\nstates are very difficult to construct on the lattice (and in general,\nin any Euclidean description). 
We have\nto use an alternative procedure to calculate the matrix elements \nappearing in Eqs.~(\\ref{amp},\\ref{P0},\\ref{P2}).\nWe choose the method of Ref.~\\cite{bernard}, in which lowest-order\nchiral perturbation theory is used to relate \n$\\langle \\pi\\pi |O_i|K^0\\rangle$ to matrix elements involving one-particle states:\n\\begin{eqnarray}\n\\label{eq:chpt1}\n\\langle \\pi^+\\pi^-|O_i|K^0\\rangle & = & \\frac{m_K^2-m_\\pi^2}{f}\\gamma \\\\\n\\langle \\pi^+|O_i|K^+\\rangle & = & (p_\\pi \\cdot p_K)\\gamma - \n \\frac{m_s+m_d}{f}\\delta \\\\\n\\label{eq:chpt3} \n\\langle 0|O_i|K^0\\rangle & = & (m_s-m_d)\\delta ,\n\\end{eqnarray}\nwhere $f$ is the lowest-order pseudoscalar decay constant.\nThe masses in the first of these formulae are the physical meson masses,\nwhile the quark masses and the momenta in the second and third formulae\nare those of the actual lattice simulations\n(done with unphysical masses). \nThese relationships ignore higher-order terms in the chiral expansion, \nmost importantly the final state interactions. \nTherefore this method suffers from a significant uncertainty. \nGolterman and Leung~\\cite{golterman} have computed the one-loop correction \nto the $\\Delta I=3\/2$ amplitude in chiral perturbation theory. \nThey find that this correction can be large, up to 30\\% or 60\\%, depending\non the values of unknown contact terms and the cut-off. \n\n\\section{Lattice techniques}\n\\label{sec:lattice details}\n\n\\subsection{Mixing with lower-dimensional operators.}\n\nEqs.~(\\ref{eq:chpt1}--\\ref{eq:chpt3}) handle the unphysical\n$s \\leftrightarrow d$ mixing in $\\langle\\pi^+|O_i|K^+\\rangle$ \nby subtracting the unphysical part proportional to \n$\\langle 0|O_i|K^0\\rangle$. 
This is equivalent to subtracting\nthe operator \n\\begin{equation}\nO_{sub} \\equiv (m_d+m_s)\\bar{s}d + (m_d-m_s)\\bar{s}\\gamma_5d \\,.\n\\label{eq:SubOp}\n\\end{equation}\nAs shown by Kilcup, Sharpe {\\it et al.} in Refs.~\\cite{ToolKit,WeakME}, \nthese statements are also true\non the lattice if one uses staggered fermions. A number of Ward identities\ndiscussed in these references show that the lattice formulation with\nstaggered fermions retains \nthe essential chiral properties of the continuum theory. In particular,\n$O_{sub}$ defined in Eq.~(\\ref{eq:SubOp}) is the only lower-dimensional \noperator that appears in mixing with the basis operators.\n(Lower-dimensional operators have to be subtracted non-perturbatively\nsince they are multiplied by powers of $a^{-1}$.)\nWe employ the non-perturbative procedure suggested in Ref.~\\cite{WeakME}:\n\\begin{equation}\n\\label{eq:sub1}\n\\langle \\pi^+\\pi^-|O_i|K^0\\rangle = \n\\langle \\pi^+|O_i - \\alpha_i O_{sub}|K^+\\rangle \\cdot \\frac{m_K^2-m_\\pi^2}\n{(p_\\pi\\cdot p_K)f} \\, , \n\\end{equation}\nwhere $\\alpha_i$ are found from \n\\begin{equation}\n0 = \\langle 0|O_i - \\alpha_i O_{sub}|K^0\\rangle \\, .\n\\label{eq:sub2}\n\\end{equation}\nThis procedure is equivalent to the lattice version of \nEqs.~(\\ref{eq:chpt1}--\\ref{eq:chpt3}) and allows subtraction \ntimeslice by timeslice.\n\nThroughout our simulation we use only degenerate mesons, i.e. $m_s=m_d=m_u$.\nSince only the negative-parity part of $O_{sub}$ contributes in \nEq.~(\\ref{eq:sub2}), one might naively expect a divergence when calculating \n$\\alpha_i$. However, the matrix elements \n$\\langle 0|O_i|K^0\\rangle$ of all basis operators \nvanish when $m_s=m_d$ due to the invariance of both the Lagrangian\nand all the operators in question under the CPS symmetry, which\nis defined as the CP symmetry combined with the interchange of the $s$ and $d$ \nquarks. 
Thus the calculation of $\\alpha_i$ requires taking the first derivative \nof $\\langle 0|O_i|K^0\\rangle$ with respect to $(m_d-m_s)$. In order\nto evaluate the first derivative numerically, we insert an extra\nfermion matrix inversion in turn into all propagators involving\nthe strange quark. Detailed expressions for all contractions \nare given in the Appendix.\n\n\\begin{figure}[p]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=12cm \\epsfbox{diag.eps}}\n\\end{center}\n\\caption{The five diagram types to be computed: (a) ``Eight'';\n(b) ``Eye''; (c) ``Annihilation''; (d) ``Subtraction''; (e) the\ntwo-point function.}\n\\label{diagrams}\n\\end{figure} \n\n\\subsection{Diagrams to be computed}\n\n\\label{sec:diag}\n\nAccording to Eqs.~(\\ref{eq:sub1},\\ref{eq:sub2}), we need to compute three\ndiagram types involving four-fermion operators (shown in Fig.~\\ref{diagrams})\nand a couple of bilinear contractions. The ``eight'' contraction type \n(Fig.~\\ref{diagrams}a) is relatively cheap to compute. It is the only\ncontraction needed for the $\\Delta I=3\/2$ amplitude.\nThe ``eye'' and ``annihilation'' \ndiagrams (Fig.~\\ref{diagrams}b and~\\ref{diagrams}c) are much more \nexpensive since they involve the calculation of propagators from\nevery point in space-time. \n\n\n\\subsection{Lattice parameters and other details}\n\nThe parameters of the simulation are listed in Table~\\ref{tab:parameters}.\nWe use periodic boundary conditions in both space and time.\nOur main ``reference'' ensemble is a set of quenched configurations\nat $\\beta \\equiv 6\/g^2 =6.0$ ($Q_1$). In addition, we use an\nensemble with a larger lattice volume ($Q_2$), an ensemble\nwith $\\beta =6.2$ ($Q_3$) for checking the lattice spacing dependence,\nand an ensemble with 2 dynamical flavors ($m=0.01$) generated by the \nColumbia group, used for checking the impact of quenching. 
\nThe ensembles were obtained using 4 sweeps of the $SU(2)$ overrelaxed\nand 1 sweep of the $SU(2)$ heatbath algorithm\\footnote{Except for the dynamical \nset, which was generated with the R-algorithm~\\cite{Columbia1}.}. The configurations \nwere separated by\n1000 sweeps, where one sweep includes three $SU(2)$ subgroup updates.\n\n\\begin{table}[tbh]\n\\caption{Simulation parameters}\n\\label{tab:parameters}\n\\begin{tabular}{ccccccc}\n\\hline\\hline\nEnsemble & $N_f$ & $\\beta $ & Size & $L$, fm & Number of & Quark masses\\\\\nname & & & & & configurations & used \\\\\n\\hline\n$Q_1$ & 0 & 6.0 & $16^3\\times (32\\times 4)$ & 1.6 & 216 & 0.01--0.05 \\\\\n$Q_2$ & 0 & 6.0 & $32^3\\times (64\\times 2)$ & 3.2 & 26 & 0.01--0.05 \\\\\n$Q_3$ & 0 & 6.2 & $24^3\\times (48\\times 4)$ & 1.7 & 26 & 0.005--0.03 \\\\\n$D$ & 2 & 5.7 & $16^3\\times (32\\times 4)$ & 1.6 & 83 & 0.01--0.05 \\\\\n\\hline\\hline\n\\end{tabular}\n\\end{table}\n\nWe use the standard staggered fermion action. \nFermion matrices are inverted by conjugate gradient.\nJackknife is used for the statistical analysis. \n\nAs explained below, we have extended the lattice 4\ntimes\\footnote{For all ensembles except the largest volume, \nwhich we extended 2 times.}\nin the time dimension by copying gauge links. This is done in order to \nget rid of excited-state contamination and wrap-around effects. \n\nThe lattice spacing values for the quenched ensembles were obtained by \nfitting the asymptotic scaling form to the quenched $\\rho$ meson\nmass data given elsewhere~\\cite{spectrum}. The lattice spacing for the dynamical \nensemble is also set by the $\\rho$ mass~\\cite{Columbia}. \n\nSome other technicalities are as follows.\nWe work in the two flavor formalism. We use local wall sources\nthat create pseudoscalar mesons at rest.\n(Smearing did not have a substantial effect.) 
\nThe mesons are degenerate ($m_s=m_d=m_u$, $m_\\pi=m_K$).\nWe use staggered\nfermions and work with gauge-invariant operators, since the \ngauge symmetry significantly reduces the list of \npossible mixing operators. The staggered flavor structure\nis assigned depending on the contraction type.\nOur operators are tadpole-improved. This \nserves to ``improve'' the perturbative expansion at a later stage \nwhen we match the lattice and continuum operators.\nFor calculating fermion loops we employ the $U(1)$ pseudofermion\nstochastic estimator. \nMore details and explanation of some of these \nterms can be found in the Appendix.\n\n\\begin{figure}[!hbt]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=7cm \\epsfbox{setup.eps}}\n\\end{center}\n\\caption{The general setup of the calculation of\n$\\langle \\pi^+|O_i|K^+\\rangle$ (without loss of generality, \nan ``eight'' contraction is shown). The kaon source is\nat timeslice 0, while the pion sink is at timeslice $T$.\nThe operator is inserted at a variable time $t$. The result of this\ncontraction is proportional to the product of the two exponentials\nshown in the figure.}\n\\label{setup}\n\\end{figure} \n\n\\subsection{Setup for calculating matrix elements of four-fermion \\\\\noperators}\n\n\nConsider the setup for the calculation of $\\langle \\pi^+|O_i|K^+\\rangle$.\nKaons are created at $t_0=0$, the operators are inserted at \na variable time $t$, and the pion sink is located at the\ntime $T$ (see Fig.~\\ref{setup}), where $T$ is sufficiently large.\nIn principle, a number of states with pseudoscalar quantum numbers\ncan be created by the kaon source.\nEach state's contribution is proportional to $\\sqrt{Z}e^{-m|t|}$, so the \nlightest state (the kaon) dominates at large enough $t$.\nAnalogously, states annihilated by the sink contribute proportionally\nto $\\sqrt{Z}e^{-m|T-t|}$, which is dominated by the pion. 
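The dominance of the lightest states in the middle of the lattice can be illustrated with a toy model. All masses, amplitudes and matrix elements below are made-up numbers for illustration only, not our lattice data:

```python
import numpy as np

# Toy three-point function: a tower of states propagating from the source
# and from the sink; each state i contributes sqrt(Z_i) * exp(-m_i * t).
T = 48
masses = np.array([0.4, 1.1, 1.8])   # ground state plus two excited states
sqrtZ  = np.array([1.0, 0.7, 0.5])   # made-up overlap factors
ME     = np.ones((3, 3))             # <j|O|i>, set to 1 for simplicity

t = np.arange(1, T)
src = sqrtZ[:, None] * np.exp(-np.outer(masses, t))        # states from the source
snk = sqrtZ[:, None] * np.exp(-np.outer(masses, T - t))    # states from the sink
C  = np.einsum('it,jt,ji->t', src, snk, ME)                # full correlator
C0 = src[0] * snk[0] * ME[0, 0]                            # lightest states only

# In the middle of the lattice the excited-state contamination dies off
# exponentially, leaving the plateau described in the text.
mid = slice(T // 2 - 5, T // 2 + 5)
contamination = np.max(np.abs(C[mid] / C0[mid] - 1.0))
```

With these placeholder masses the relative excited-state contribution near the middle of the lattice is far below the statistical noise of a realistic simulation, which is exactly why the plateau region is used as the working region.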
\n\n\nIn this work the kaon and the pion have equal masses.\nIn the middle of the lattice, where $t$\nis far enough from both 0 and $T$, we expect to see a plateau, \ncorresponding to $Z e^{-m_\\pi T}\\langle\\pi|O|K\\rangle$. \nThis plateau is our working region (see Fig.~\\ref{plateau}). \n\nAs for the kaon annihilation matrix elements \n$\\langle 0|O_i|K^0\\rangle$, we only need their ratio to \n$\\langle 0|\\overline{s}\\gamma_5 d|K^0\\rangle$, in which the \nfactors $\\sqrt{Z}e^{-mt}$ cancel. Indeed, we observe a rather steady\nplateau (Fig.~\\ref{ann}). \n\n\\begin{figure}[!bht]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=5cm \\epsfbox{ratio.eps}}\n\\end{center}\n\\caption{The $B$ ratio is formed by dividing the four-fermion matrix element\nby the product of two-point functions, typically involving $A_\\mu$\nor $P$ bilinears. All the operators involved are inserted at the same\ntimeslice $t$, and the summation is done over the spatial volume. The external \nmeson sources \nare located at timeslices 0 and $T$, both in the numerator and the \ndenominator. This enables cancellation of some common factors.}\n\\label{ratio}\n\\end{figure} \n\n\\subsection{$B$ ratios}\n\nIt has become conventional\nto express the results for matrix elements\nin terms of so-called $B$ ratios, which are the ratios of the desired \nfour-fermion matrix elements to their\nvalues obtained in the vacuum saturation approximation (VSA).\nFor example, the $B$ ratios of the operators $O_2$ and $O_4$ are formed by\ndividing the full matrix element by the product of axial-current\ntwo-point functions (Fig.~\\ref{ratio}).\nWe expect the denominator to form a plateau \nin the middle of the lattice, equal to\n$Z e^{-m_\\pi T} \\, \\langle\\pi|A_\\mu|0\\rangle \\,\\cdot \\,\n\\langle 0|A^\\mu|K\\rangle$,\nwhere $A^\\mu$ are the axial vector currents with the appropriate flavor quantum\nnumbers for the kaon and the pion. 
The\nfactor $Z e^{-m_\\pi T}$ cancels, leaving the desired ratio\n$\\langle\\pi|O|K\\rangle \\, \/ \\,\n(\\langle\\pi|A_\\mu|0\\rangle\\, \\cdot \\, \\langle 0|A^\\mu|K\\rangle)$. \nApart from common normalization factors, \na number of systematic uncertainties also tend to cancel in this ratio,\nincluding the uncertainties in the lattice spacing, in quenching, and\nin some cases in the perturbative corrections. \nTherefore, it is sometimes reasonable to give lattice answers in terms\nof the $B$ ratios. \n\nHowever, eventually the physical matrix element\nneeds to be reconstructed by using the known experimental parameters \n(namely $f_K$) to compute the VSA. In some cases, such as for the operators\n$O_5$--$O_8$, the VSA itself is known very imprecisely due to the\nfailure of perturbative matching (see Sec.~\\ref{sec:pert}).\nThen it is more reasonable to give answers in terms of matrix elements\nin physical units. We have adopted the strategy of expressing all matrix \nelements in units of $\\langle\\pi|A_\\mu|0\\rangle \\, \\langle 0|A^\\mu|K\\rangle\n= (f_K^{latt})^2 m_M^2$ at an intermediate stage, and using \nthe pre-computed $f_K^{latt}$ at the given meson mass to convert to physical \nunits. This method is sensitive to the choice of the lattice spacing. \n\n\\begin{figure}[!hbt]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=7cm \\epsfbox{IB2vst.eps}}\n\\end{center}\n\\caption{An example of the signal we get for one of the $B$ ratios\n(in this case, for the ``eye'' part of the $O_2$ operator on the $Q_1$ ensemble). \nThe wall sources are at $t=1$ and $t=49$. We see that\nthe excited states quickly disappear and a stable, well-distinguished\nplateau is observed. We perform jackknife averaging in the range of $t$\nfrom 12 to 37 (shown with the horizontal lines). 
Confirming the existence of the plateau is \nimportant for the reliability of the results.}\n\\label{plateau}\n\\end{figure} \n\n\\begin{figure}[!htb]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=6.5cm \\epsfbox{AB2vst.eps}}\n\\end{center}\n\\caption{An example of the signal for \n$\\langle 0|O_2|K^0\\rangle\\,\/\\,\n[(m_d-m_s)\\,\\langle 0|\\overline{s}\\gamma_5 d|K^0\\rangle]$ on the $Q_1$\nensemble. The kaon source is at $t=1$. We average over the range\nof $t$ from 5 to 12 (shown with horizontal lines).}\n\\label{ann}\n\\end{figure} \n\nIt is very important to check that the time distance $T$ between the\nkaon and pion sources is large enough so \nthat the excited states do not contribute. That is, the plateau\nin the middle of the lattice should be sufficiently flat,\nand the $B$ ratios should not depend on $T$. We have found that \nin order to satisfy this requirement the lattice has to be\nartificially extended in the time direction by using\na number of copies of the gauge links (4 in the case of the small\nvolume lattices, 2 otherwise). We use $T=72$ for the $Q_3$ \n($\\beta =6.2$) ensemble, and $T=48$ for the rest.\nAn example of a plateau that we obtain\nwith this choice of $T$ is shown in Fig.~\\ref{plateau}.\nTo read off the result, we average over the whole extension\nof the plateau, and use jackknife to estimate the statistical\nerror in this average. \n\n\n\\section{$\\Delta I=1\/2$ rule results}\n\\label{sec:di12}\n\nUsing the data obtained for the matrix elements of the basis operators,\nin this section we report numerical results for the $\\mbox{Re} A_0$\nand $\\mbox{Re} A_2$ amplitudes as well as their ratio. We discuss these\namplitudes separately since the statistics for $\\mbox{Re} A_2$ are\nmuch better and a continuum limit extrapolation is possible. 
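Before turning to the results, the plateau average and its single-elimination jackknife error used throughout can be sketched as follows. The data array here is synthetic, standing in for per-configuration $B$-ratio measurements:

```python
import numpy as np

def jackknife_plateau(data, t_lo, t_hi):
    """Average a per-configuration signal over a plateau range and
    estimate the statistical error with single-elimination jackknife.

    data: array of shape (n_config, n_t), one time series per configuration.
    """
    n = data.shape[0]
    plateau = data[:, t_lo:t_hi + 1].mean(axis=1)   # per-config plateau average
    full = plateau.mean()
    # Jackknife samples: the mean with one configuration left out each time
    jk = (plateau.sum() - plateau) / (n - 1)
    err = np.sqrt((n - 1) / n * np.sum((jk - full) ** 2))
    return full, err

# Synthetic stand-in data: a flat plateau of height 1.0 plus Gaussian noise,
# with a configuration count and t-range like those quoted in the text.
rng = np.random.default_rng(0)
data = 1.0 + 0.05 * rng.standard_normal((216, 50))
val, err = jackknife_plateau(data, 12, 37)
```

For uncorrelated data the jackknife error agrees with the naive standard error of the mean; its advantage appears for derived quantities such as ratios of averages, where error propagation is otherwise awkward.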
\n\n\\subsection{$\\mbox{Re} A_2$ results}\n\\label{sec:A2}\n\nThe expression for $\\mbox{Re} A_2$ can be written as\n\\begin{equation}\n\\mbox{Re} A_2 = \\frac{G_F}{\\sqrt{2}}\\, V_{ud}V_{us}^*\\, z_+(\\mu )\n\\langle O_2\\rangle _2,\n\\end{equation}\nwhere $z_+ (\\mu )$ is a Wilson coefficient and\n\\begin{equation}\n\\langle O_2\\rangle _2 \\equiv \\langle (\\pi\\pi)_{I=2}|O_2^{(2)}|K\\rangle .\n\\end{equation}\nHere\n\\begin{eqnarray}\nO_2^{(2)} = O_1^{(2)} & = & \\frac{1}{3} \n[ (\\overline{s}\\gamma_\\mu(1-\\gamma_5)u)(\\overline{u}\\gamma^\\mu(1-\\gamma_5)d) \n\\nonumber \\\\ & &\n+(\\overline{s}\\gamma_\\mu(1-\\gamma_5)d)(\\overline{u}\\gamma^\\mu(1-\\gamma_5)u)\n-(\\overline{s}\\gamma_\\mu(1-\\gamma_5)d)(\\overline{d}\\gamma^\\mu(1-\\gamma_5)d) ] .\n\\end{eqnarray}\nIn lowest-order chiral perturbation theory the matrix element\n$\\langle O_2\\rangle_2$ can be expressed as\n\\begin{equation}\n\\langle O_2\\rangle _2 \n= \\sqrt{2} \\,\\frac{m_K^2-m_\\pi^2}{f} \n\\,\\frac{\\langle\\pi^+|O_2^{(2)}|K^+\\rangle}{m^2}.\n\\end{equation}\nThe latter matrix element involves only ``eight'' diagrams. Moreover, \nin the limit of exact $SU(3)_{\\mathrm flavor}$ symmetry\nit is directly related~\\cite{donoghue} to the parameter $B_K$ (which is the \n$B$ ratio of the neutral kaon mixing operator \n$O_K= (\\overline{s}\\gamma_L d) \\;(\\overline{s}\\gamma_L d)$), so that\n\\begin{equation}\n\\langle O_2\\rangle _2 \n= \\frac{4\\sqrt{2}}{9} \\,\\frac{m_K^2-m_\\pi^2}{f_{\\mbox{exp}}} \n\\,f_{\\mbox{latt}}^2\\,B_K \\, .\n\\end{equation}\n\nThe parameter $B_K$ is rather well studied (for example, by \nKilcup and Pekurovsky~\\cite{PK1} and by the JLQCD collaboration~\\cite{jlqcd}).\nQuenched chiral perturbation theory~\\cite{sharpe1} predicts \nchiral behaviour of the form \n\\mbox{$B_K=a+bm_K^2+c\\;m_K^2\\log{m_K^2}$,} which \nfits the data well (see Fig.~\\ref{Bk}) and\nyields a finite non-zero value in the chiral limit. 
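Since the fit form $B_K=a+bm_K^2+c\,m_K^2\log m_K^2$ is linear in the parameters $(a,b,c)$, such a fit reduces to an ordinary least-squares solve. The data below are synthetic, generated from made-up coefficients, not our $B_K$ measurements:

```python
import numpy as np

def fit_chiral_log(mK2, y):
    """Least-squares fit of y = a + b*mK2 + c*mK2*log(mK2).
    The model is linear in (a, b, c), so lstsq suffices."""
    X = np.column_stack([np.ones_like(mK2), mK2, mK2 * np.log(mK2)])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # (a, b, c); a is the chiral-limit value

# Synthetic check: generate data from known coefficients and recover them.
mK2 = np.linspace(0.05, 0.6, 10)          # meson mass squared, toy values
a, b, c = 0.6, 0.4, -0.2                  # made-up coefficients
y = a + b * mK2 + c * mK2 * np.log(mK2)
coeffs = fit_chiral_log(mK2, y)
```

In practice the fit would be repeated on jackknife samples so that the error on the chiral-limit value $a$ comes out of the same machinery as the plateau errors.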
\nNote that $\\mbox{Re} A_2$ is proportional\nto the combination $B_K f_{\\mbox{latt}}^2$, and since both factors \nhave a significant\ndependence on the meson mass (Figs.~\\ref{Bk} and~\\ref{fk}), \n$\\mbox{Re} A_2$ is very sensitive to that mass.\nFig.~\\ref{A2} shows $\\mbox{Re} A_2$ data for the dynamical ensemble, based on\nthe $B_K$ values we have reported elsewhere~\\cite{PK1}. \nIt then becomes an open question which meson mass\nshould be used to read off the result. \nIf known, the higher-order chiral terms would remove this ambiguity.\nForced to make a choice, we \nextrapolate to \\mbox{$M^2=(m_K^2+m_\\pi^2)\/2$}.\nUsing our data for $B_K$ in quenched QCD and taking\nthe continuum limit we obtain:\n$\\mbox{Re} A_2 = (1.7 \\pm 0.1)\\cdot 10^{-8}\\;\\mbox{GeV}$,\nwhere the error is only statistical,\nto be compared with the experimental result\n\\mbox{$\\;\\mbox{Re} A_2 = 1.23 \\cdot 10^{-8}\\;\\mbox{GeV}$. }\n\n\\begin{figure}[htb]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=6cm \\epsfbox{Bk.dyn.eps}}\n\\end{center}\n\\caption{The parameter $B_K$ in the NDR $\\overline{\\mathrm MS}$ scheme at 2 GeV\non the dynamical ensemble vs. the meson mass squared. The fit\nis of the form \\mbox{$B_K=a+bm_K^2+c\\;m_K^2\\log{m_K^2}$.} The vertical line\nhere and in the other plots below marks the physical kaon mass.}\n\\label{Bk}\n\\end{figure} \n\\begin{figure}[tbh]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=6cm \\epsfbox{fpi.dyn.eps}}\n\\end{center}\n\\caption{The pseudoscalar decay constant ($F_\\pi = 93$ MeV experimentally) on \nthe dynamical ensemble vs. the meson mass squared. The fit is of the same \nform as for $B_K$.}\n\\label{fk}\n\\end{figure} \n\\clearpage\n\n\\begin{figure}[htb]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=8cm \\epsfbox{A2.dyn.eps}}\n\\end{center}\n\\caption{$\\mbox{Re} A_2$ for the dynamical ensemble. The fit is of\nthe same form as for $B_K$. The horizontal line is the experimental value\nof $1.23\\cdot 10^{-8}$~GeV. 
}\n\\label{A2}\n\\end{figure} \n\nHigher-order chiral terms (including the meson mass dependence)\nare the largest systematic error in this determination.\nAccording to Golterman and Leung~\\cite{golterman}, one-loop corrections\nin (quenched) chiral perturbation theory are expected to be\nas large as $30\\%$ or $60\\%$. Other uncertainties (from the lattice spacing \ndetermination, from perturbative operator matching and\nfrom using a finite lattice volume) are much smaller.\n\n\n\\subsection{$\\mbox{Re} A_0$ results}\n\nUsing Eqs.~(\\ref{eq:sub1},\\ref{eq:sub2}), $\\mbox{Re} A_0$ can be expressed \nas\\footnote{In our normalization the experimental value is $\\mbox{Re} A_0 = 27.2 \\cdot 10^{-8}$~GeV.}\n\\begin{equation}\n\\mbox{Re} A_0 = \\frac{G_F}{\\sqrt{2}}V_{ud}V_{us}^* \\frac{m_K^2-m_\\pi^2}{f}\n\\sum_i z_i R_i ,\n\\end{equation}\nwhere $z_i$ are Wilson coefficients and \n$$\nR_i \\equiv \\frac{\\langle \\pi^+|O_i^{(0)}|K^+\\rangle_s}{m^2}.\n$$\nThe subscript `$s$' indicates that these matrix elements already\ninclude the subtraction of \\linebreak $\\langle \\pi^+|O_{sub}|K^+\\rangle$. \nAll contraction types are needed, including the expensive ``eyes''\nand ``annihilations''.\n$O_i^{(0)}$ are the isospin 0 parts of the operators \n$O_i$ (given in the Appendix for completeness). 
For example,\n\\begin{eqnarray}\nO_1^{(0)} & = & \\frac{2}{3} \n(\\overline{s}\\gamma_\\mu (1-\\gamma_5)d)(\\overline{u}\\gamma^\\mu (1-\\gamma_5)u)\n-\\frac{1}{3}(\\overline{s}\\gamma_\\mu (1-\\gamma_5)u)(\\overline{u}\\gamma^\\mu \n(1-\\gamma_5)d) \\nonumber \\\\\n& + & \\frac{1}{3}(\\overline{s}\\gamma_\\mu (1-\\gamma_5)d)\n(\\overline{d}\\gamma^\\mu (1-\\gamma_5)d) \\\\\nO_2^{(0)} & = & \\frac{2}{3} \n(\\overline{s}\\gamma_\\mu (1-\\gamma_5)u)(\\overline{u}\\gamma^\\mu (1-\\gamma_5)d)\n-\\frac{1}{3}(\\overline{s}\\gamma_\\mu (1-\\gamma_5)d)(\\overline{u}\\gamma^\\mu \n(1-\\gamma_5)u) \\nonumber \\\\\n& + & \\frac{1}{3}(\\overline{s}\\gamma_\\mu (1-\\gamma_5)d)\n(\\overline{d}\\gamma^\\mu (1-\\gamma_5)d) \n\\end{eqnarray}\n\nThe results for the quenched $\\beta =6.0$ and $\\beta =6.2$ ensembles\nare shown in Fig.~\\ref{A0}. The dependence on the \nmeson mass is small, so there is no large ambiguity about the mass\nprescription as in the $\\mbox{Re} A_2$ case. \nSome lattice spacing dependence may be present (Fig.~\\ref{A0cont}), \nalthough the statistics\nfor the $\\beta =6.2$ ensemble are still too low.\n\nThe effect of\nthe final state interactions (contained in the higher-order\nterms of the chiral perturbation theory) is likely to be large.\nThis is the biggest and most poorly estimated\nuncertainty. \n\nAn operator matching uncertainty arises due to the mixing of $O_2$ with the $O_6$ \noperator through penguin diagrams in lattice perturbation\ntheory. This is explained in Section~\\ref{sec:A0pert}. We estimate\nthis uncertainty at 20\\% for all ensembles. \n\nAs for other uncertainties, we have checked the lattice volume\ndependence by comparing ensembles $Q_1$ and $Q_2$ (1.6 and 3.2 fm\nat $\\beta =6.0$).\nThe dependence was found to be small, so we consider $(1.6 \\;{\\mathrm fm})^3$ \na volume large enough to hold the system. We have also checked the effect\nof quenching and found it to be small compared to noise\n(see Fig.~\\ref{A0quench}). 
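Schematically, assembling $\mbox{Re} A_0$ from the measured ratios $R_i$ is just the weighted sum of Eq. above. In the sketch below the Wilson coefficients $z_i$ and the ratios $R_i$ are arbitrary placeholders (not our measured values), so the number produced is physically meaningless; the point is only the structure of the sum and how per-operator contributions are formed:

```python
import numpy as np

# Re A0 = (G_F / sqrt(2)) * V_ud V_us* * (m_K^2 - m_pi^2) / f * sum_i z_i R_i
G_F = 1.16637e-5            # GeV^-2
Vud_Vus = 0.974 * 0.22      # approximate CKM factors
mK2_minus_mpi2 = 0.244 - 0.019   # GeV^2, approximate physical meson masses
f = 0.13                    # GeV, lowest-order decay constant

z = np.array([-0.4, 1.2, 0.01, -0.03, 0.01, -0.03])  # placeholder Wilson coeffs
R = np.array([ 0.5, 2.0, 0.1,  0.3,  0.1,  0.4 ])    # placeholder ratios R_i

contributions = z * R                    # per-operator breakdown (cf. histogram)
ReA0 = G_F / np.sqrt(2) * Vud_Vus * mK2_minus_mpi2 / f * contributions.sum()
```

The per-operator array `contributions` is what a breakdown plot of the individual basis operators would be built from.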
\n\nThe breakdown of contributions of various basis\noperators to $\\mbox{Re} A_0$ is shown in Fig.~\\ref{A0hist}.\nBy far, $O_2$ plays the most important role, whereas penguins\nhave only a small influence. \n\n\\begin{figure}[htbp]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=10cm \\epsfbox{A0m.eps}}\n\\end{center}\n\\caption{$\\mbox{Re} A_0$ for quenched ensembles plotted against the meson mass \nsquared. The upper group of points\nis for ensembles $Q_1$ and $Q_2$, while the lower group is for $Q_3$. \nOnly statistical errors are shown. }\n\\label{A0}\n\\end{figure} \n\n\\begin{figure}[tbh]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=6cm \\epsfbox{A0vsa2.eps}}\n\\end{center}\n\\caption{$\\mbox{Re} A_0$ for quenched ensembles plotted against lattice spacing\nsquared. The horizontal line shows the experimental result of \n$27.2\\cdot 10^{-8}$ GeV. Only statistical errors are shown.}\n\\label{A0cont}\n\\end{figure} \n\n\\begin{figure}[htb]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=6cm \\epsfbox{A0m.dyn.eps}}\n\\end{center}\n\\caption{Comparison of quenched ($Q_1$) and dynamical results for $\\mbox{Re} A_0$\nat comparable lattice spacings.}\n\\label{A0quench}\n\\end{figure} \n\n\\clearpage\n\n\\begin{figure}[!tbh]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=6cm \\epsfbox{A0comp.eps}}\n\\end{center}\n\\caption{Contribution of various operators to $\\mbox{Re} A_0$.}\n\\label{A0hist}\n\\end{figure} \n\n\\begin{figure}[!bth]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=10cm \\epsfbox{omega.eps}}\n\\end{center}\n\\caption{The ratio $\\mbox{Re} A_2\/\\mbox{Re} A_0$ versus the meson mass squared\nfor quenched and dynamical ensembles.\nEnsembles $Q_1$ and $D$ have comparable lattice spacings. \nThe dynamical ensemble data were used for the fit. \nThe big slope of the fit line is accounted for by the\nmass dependence of $\\mbox{Re} A_2$. The horizontal line shows the experimental\nvalue of $1\/22$. 
The error bars show only the statistical errors \nobtained by jackknife.}\n\\label{omega}\n\\end{figure} \n\n\\subsection{Amplitude ratio}\n\nShown in Fig.~\\ref{omega} is the ratio $\\mbox{Re} A_2\/\\mbox{Re} A_0$ as directly \ncomputed on the lattice for the quenched and dynamical data sets.\nThe data exhibit a strong dependence on\nthe meson mass, due to the $\\mbox{Re} A_2$ chiral behaviour (compare with \nFig.~\\ref{A2}). \n\nWithin our errors the results indeed seem to confirm the \ncommon belief that most of the $\\Delta I=1\/2$ enhancement comes \nfrom the ``eye'' and ``annihilation'' diagrams. The exact amount\nof enhancement is broadly consistent with experiment while being\nsubject to considerable uncertainty due to higher-order chiral terms.\nOther systematic errors are the same as those described \nin the previous subsection.\n\n\\section{Operator matching}\n\n\\label{sec:pert}\n\nAs mentioned before, we have computed the matrix elements of all \nrelevant operators with an acceptable statistical accuracy.\nThese are defined in the lattice regularization \nscheme. To get physical results, \nthe operators need to be matched to the same scheme in which the Wilson\ncoefficients were computed in the continuum, namely $\\overline{\\mathrm MS}$\nNDR. 
While perturbative matching works quite well for\n$\\mbox{Re} A_0$ and $\\mbox{Re} A_2$, it seems to break down severely for\nmatching the operators relevant for $\\varepsilon '\/\\varepsilon$.\n\n\\subsection{Perturbative matching and $\\mbox{Re} A_0$}\n\n\\label{sec:A0pert}\n\nConventionally, lattice and continuum operators are matched using\nlattice perturbation theory:\n\\begin{equation}\n\\label{eq:matching}\nO_i^{\\it cont}(q^*) = O_i^{\\it lat} + \\frac{g^2(q^*a)}{16\\pi^2}\\sum_j(\\gamma_{ij}\\ln (q^*a)\n + C_{ij})O_j^{\\it lat} + O(g^4) + O(a^n) ,\n\\end{equation}\nwhere $\\gamma_{ij}$ is the one-loop anomalous dimension matrix \n(the same in the continuum \nand on the lattice), and $C_{ij}$ are finite coefficients calculated\nin one-loop lattice perturbation theory~\\cite{Ishizuka,PatelSharpe}. \nWe use the ``horizontal matching'' \nprocedure~\\cite{horizontal}, whereby the same coupling constant\nas in the continuum ($g_{\\overline{MS}}$) is used.\nThe operators are matched at an intermediate scale \n$q^*$ and evolved using the continuum renormalization\ngroup equations to the reference scale $\\mu$, which we take \nto be 2 GeV.\n\nIn the calculation of $\\mbox{Re} A_0$ and $\\mbox{Re} A_2$, the main contributions\ncome from left-left operators. One-loop renormalization\nfactors for such (gauge-invariant) operators were computed by\nIshizuka and Shizawa~\\cite{Ishizuka} (for current-current diagrams)\nand by Patel and Sharpe~\\cite{PatelSharpe} (for penguins). \nThese factors are fairly small, so at first glance perturbation theory\nseems to work well, in contrast to the case of the left-right operators \nessential for estimating $\\varepsilon '\/\\varepsilon$, as described below.\nHowever, even in the case of $\\mbox{Re} A_0$ there is a certain\nambiguity due to the mixing of the $O_2$ operator with $O_6$ through\npenguin diagrams. 
The matrix element of $O_6$ is rather large, so\nit heavily influences $\\langle O_2\\rangle$ in spite of the small\nmixing coefficient. The operator $O_6$ receives enormous renormalization \ncorrections at first order, as discussed below. Therefore, there\nis an ambiguity as to whether the mixing should be evaluated\nwith renormalized or bare $O_6$. Equivalently, the higher-order\ndiagrams (such as Fig.~\\ref{higher-order}b and~\\ref{higher-order}d) \nmay be quite important. \n\nIn order to estimate the uncertainty from neglecting higher-order diagrams,\nwe evaluate the mixing with $O_6$ renormalized\nby the partially non-perturbative procedure described below, and\ncompare with results obtained by evaluating the mixing with bare $O_6$.\nThe first method amounts to a resummation of those higher-order diagrams \nbelonging to type (b) in Fig.~\\ref{higher-order}, while the second method\nignores all corrections beyond first order. \nThe results quoted in the previous section\nwere obtained by the first method, which is also close to using\ntree-level matching. The second method would produce\nvalues of $\\mbox{Re} A_0$ lower by about 20\\%.\nThus we consider 20\\% a conservative estimate of the matching uncertainty. \n\nIn calculating $\\varepsilon '\/\\varepsilon$ the operator \nmatching issue becomes a much more serious obstacle, as explained below.\n\n\\begin{figure}[htb]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=8cm \\epsfbox{O6m.dyn.eps}}\n\\end{center}\n\\caption{Three contributions to the $\\langle O_6\\rangle$ matrix element:\n``eight'' (boxes), ``eye'' (diamonds) and ``subtraction'' (crosses).\nThese data represent bare operators for the dynamical ensemble.\nThe fit is to the sum of all contributions. All errors\nwere combined by jackknife.}\n\\label{fig:O6}\n\\end{figure} \n\n\\subsection{Problems with perturbative matching}\n\nThe value of $\\varepsilon '\/\\varepsilon$ depends on a number of \nsubtle cancellations\nbetween matrix elements. 
In particular, $O_6$ and $O_8$\nhave so far been considered the most important operators\nwhose contributions have opposite signs and almost cancel. Furthermore,\nthe matrix elements of individual operators contain three main components \n(``eights'', ``eyes'',\nand ``subtractions''), which again conspire to almost cancel each other\n(see Fig.~\\ref{fig:O6}). \nThus $\\varepsilon '\/\\varepsilon$ is extremely sensitive to each of these \ncomponents, and in particular to their matching. \n\n\\begin{table}[tbh]\n\\caption{$\\langle O_6\\rangle$ in arbitrary units with one-loop perturbative\nmatching using two values of $q^*$ for the $Q_1$ ensemble. For comparison, \nthe results with no matching (``bare'') are given.}\n\\label{tab:O6pert}\n\\begin{tabular}{l|ccccc}\n\\hline\\tableline\nQuark mass & 0.01 & 0.02 & 0.03 & 0.04 & 0.05 \\\\\n\\hline\n$q^* =1\/a$ & \n$0.1 \\pm 1.2 $ &\n$-0.9 \\pm 0.4$ &\n$-1.2 \\pm 0.2$ &\n$-1.6 \\pm 0.3$ &\n$-1.1 \\pm 0.2$ \\\\\n$q^*=\\pi \/a$ &\n$-13.1 \\pm 1.8$ &\n$ -9.0 \\pm 0.5$ &\n$ -7.1 \\pm 0.3$ &\n$ -6.3 \\pm 0.5$ &\n$ -4.6 \\pm 0.5$ \\\\\nBare &\n$-55.6 \\pm 5.0$ &\n$-35.4 \\pm 1.5$ &\n$-27.0 \\pm 0.9$ &\n$-22.3 \\pm 1.4$ &\n$-16.4 \\pm 1.5$ \\\\\n\\hline\\tableline\n\\end{tabular}\n\\end{table}\n\nConsider fermion contractions with operators such as\\footnote{\nWe apologize for slightly confusing notation:\nwe use the same symbols here for\noperators as in the Appendix for types of contractions.}\n\\begin{eqnarray}\n\\label{eq:O6ops1}\n(PP)_{EU} & = & (\\overline{s} \\gamma_5 \\otimes \\xi_5 u) \n(\\overline{u} \\gamma_5 \\otimes \\xi_5 d) \\\\\n(SS)_{IU} & = & (\\overline{s} {\\displaystyle\\mathbf 1} \\otimes {\\displaystyle\\mathbf 1} d) \n(\\overline{d} {\\displaystyle\\mathbf 1} \\otimes {\\displaystyle\\mathbf 1} d) \\\\\n\\label{eq:O6ops3}\n(PS)_{A2U} & = & (\\overline{s} \\gamma_5 \\otimes \\xi_5 d) \n(\\overline{d} {\\displaystyle\\mathbf 1} \\otimes {\\displaystyle\\mathbf 1} d) ,\n\\end{eqnarray}\nwhich are the main parts of, 
respectively, the ``eight'', ``eye'' and \n``subtraction''\ncomponents of $O_6$ and $O_8$ (see the Appendix). The \nfinite renormalization coefficients for these operators\nhave been computed in Ref.~\\cite{PatelSharpe}. \nThe diagonal coefficients are very large, so the\ncorresponding one-loop corrections are in the \nneighborhood of $-100\\%$. In addition, they strongly depend\non which $q^*$ is used (refer to Table~\\ref{tab:O6pert}).\nThus perturbation theory fails in reliably \nmatching the operators in Eqs.~(\\ref{eq:O6ops1}--\\ref{eq:O6ops3}). \n\nThe finite coefficients for other (subdominant)\noperators, for example\n$(PP)_{EF}$, $(SS)_{EU}$ and $(SS)_{EF}$,\nare not known for the formulation with gauge-invariant \noperators\\footnote{Patel and Sharpe~\\cite{PatelSharpe}\nhave computed corrections for gauge-noninvariant operators.\nOperators in Eqs.~(\\ref{eq:O6ops1})--(\\ref{eq:O6ops3})\nhave zero distances, so the corrections are the same for\ngauge-invariant and non-invariant operators.\nRenormalization of other operators\n(those having non-zero distances) differs from the \ngauge-noninvariant operators.}.\nFor illustration purposes,\nin Table~\\ref{tab:O6pert} we have used coefficients for gauge\nnon-invariant operators computed in Ref.~\\cite{PatelSharpe}, but \nstrictly speaking this is not justified. \n\nTo summarize, perturbative matching does not work reliably, and\nsome of the coefficients are not even known. A solution\nwould be to use a non-perturbative matching procedure, such as the one\ndescribed by Donini {\\it et al.}~\\cite{non-pert}. We have not completed\nthis procedure. 
Nevertheless, can we say anything\nabout $\\varepsilon '\/\\varepsilon$ at this moment?\n\n\\subsection{Partially nonperturbative matching}\n\n\\label{sec:ansatz}\n\nAs a temporary solution, we have adopted a partially\nnonperturbative operator matching procedure, which makes use of\nbilinear renormalization coefficients $Z_P$ and $Z_S$.\nWe compute the latter~\\cite{PKbil}\nfollowing the non-perturbative method suggested by Martinelli \n{\\it et al.}~\\cite{martinelli}. Namely we study the inverse\nof the ensemble-averaged quark propagator at large off-shell momenta\nin a fixed (Landau) gauge. \nAn estimate of the\nrenormalization of four-fermion operators can be obtained as follows. \n\n\\begin{figure}[htb]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=8cm \\epsfbox{Oppren.eps}}\n\\end{center}\n\\caption{Example of one loop diagrams arising in \nrenormalization of four-fermion operators: in type (a) no propagator\ncrosses the axis, and type (b) includes the rest of the diagrams.}\n\\label{Opp}\n\\end{figure} \n\nConsider\nrenormalization of the pseudoscalar--pseudoscalar operator in\nEq.~(\\ref{eq:O6ops1}). \nAt one-loop level, the diagonal renormalization coefficient \n$C_{PP}$ (involving diagrams shown in Fig.~\\ref{Opp}) \nis almost equal to twice the pseudoscalar bilinear correction $C_P$. \nThis suggests that, at least at one-loop level,\nthe renormalization of $(PP)_{EU}$ comes mostly from diagrams\nin which no gluon propagator crosses the vertical axis of the diagram\n(for example, diagram $(a)$ in Fig.~\\ref{Opp}), and very little\nfrom the rest of the diagrams\n(such as diagram $(b)$ in Fig.~\\ref{Opp}). In other words, the\nrenormalization of $(PP)_{EU}$ would be identical to \nthe renormalization of product of two pseudoscalar bilinears,\nwere it not for the diagrams of type $(b)$, which give a subdominant\ncontribution. Mathematically, \n$$\n(PP)_{EU}^{\\mathrm{cont}} = (PP)_{EU}^{\\mathrm{latt}}\\; Z_{PP} + ... 
\\, ,\n$$\nwith\n\\begin{equation}\nZ_{PP} = Z_P^2 (1 + \\frac{g^2}{16\\pi^2} \\widetilde{C_{PP}} + O(g^4))\\, ,\n\\label{eq:Zpp}\n\\end{equation}\n\\begin{equation}\nZ_P = 1 + \\frac{g^2}{16\\pi^2} C_P + O(g^4)\\, ,\n\\label{eq:Zp}\n\\end{equation}\nand dots indicate mixing with other operators (non-diagonal part).\nThe factor $\\widetilde{C_{PP}} \\equiv C_{PP} - 2 C_P$ contains\ndiagrams of type $(b)$ in Fig.~\\ref{Opp} and is quite small.\n\nIn order to proceed, it may be reasonable to {\\bf assume} that the same \nholds at all orders in perturbation\ntheory, namely that the diagrams of type $(c)$ in Fig.~\\ref{higher-order} give a\nsubdominant contribution compared to type $(a)$ of the same\nFigure. This assumption should be verified\nseparately by performing the non-perturbative renormalization procedure\nfor four-fermion operators. If this ansatz is true, we can substitute\nthe non-perturbative value of $Z_P$ into Eq.~(\\ref{eq:Zpp}) instead\nof using the perturbative expression from Eq.~(\\ref{eq:Zp}).\nThus a partially nonperturbative estimate of $(PP)_{EU}^{\\mathrm{cont}}$\nis obtained. This procedure is quite similar to the tadpole\nimprovement idea: the bulk of the diagonal renormalization is\ncalculated non-perturbatively, while the rest is reliably computed\nin perturbation theory. \nAnalogously, we obtain the diagonal renormalization\nof operators $(SS)_{IU}$ and $(PS)_{A2U}$ by using\n$Z_{SS} = Z_S^2(1+\\frac{g^2}{16\\pi^2} \\widetilde{C_{SS}} + O(g^4))$ and \n$Z_{PS} = Z_S Z_P(1+\\frac{g^2}{16\\pi^2} \\widetilde{C_{PS}} + O(g^4))$.\nWe note that $Z_P \\neq Z_S$, even though they are equal in perturbation\ntheory. We match operators at the scale $q^*=1\/a$ and use the\ncontinuum two-loop anomalous dimension to evolve to $\\mu =2$ GeV.\n\nUnfortunately, the above procedure does not completely solve the problem \nof operator renormalization, since it deals only with the diagonal \nrenormalization of the zero-distance operators in\nEqs.~(\\ref{eq:O6ops1}--\\ref{eq:O6ops3}). 
Even though these operators\nare dominant in contributing to $\\varepsilon '\/\\varepsilon$, other\noperators (such as $(SS)_{EU}$ and $(PP)_{EF}$)\ncan be important due to mixing with the dominant ones.\nThe mixing coefficients for these operators are not known \neven in perturbation theory. For a reasonable estimate we use\nthe coefficients\nobtained for gauge non-invariant operator mixing~\\cite{PatelSharpe}.\n\nSecondly, since renormalization of operators $(PP)_{EU}$, $(SS)_{IU}$ \nand $(PS)_{A2U}$ is dramatic\\footnote{For example, at $m_q=0.01$ and \n$\\mu =2$ GeV for $Q_1$ ensemble we obtain $Z_{PP} = 0.055 \\pm 0.007$, \n$Z_{PS} = 0.088 \\pm 0.007$ and $Z_{SS} = 0.142 \\pm 0.010$.}, their \ninfluence on other operators \nthrough non-diagonal mixing is ambiguous at one-loop order, \neven if the mixing coefficients are known.\nThe ambiguity is due to higher\norder diagrams (for example, those shown in Fig.~\\ref{higher-order}). \nIn order to partially resum them\nwe use operators $(PP)_{EU}$, $(SS)_{IU}$ and $(PS)_{A2U}$ \nmultiplied by factors $Z_P^2$, $Z_S^2$ and $Z_P Z_S$, correspondingly,\nwhenever they appear in non-diagonal mixing with other operators\n\\footnote{\nA completely analogous scheme was used for mixing of $O_6$ with $O_2$ \nthrough penguins when evaluating $\\mbox{Re} A_0$.}. \nThis is equivalent to evaluating the diagrams of type ($a$) and ($b$)\nin Fig.~\\ref{higher-order} at all orders, but ignoring the rest\nof the diagrams (such as diagrams ($c$) and ($d$) in Fig~\\ref{higher-order})\nat all orders higher than first.\nTo estimate a possible error in this procedure\nwe compare with a simpler one, whereby bare operators\nare used in non-diagonal corrections (i.e. 
we apply strictly one-loop \nrenormalization).\nThe difference in $\\varepsilon '\/\\varepsilon$ between these two approaches\nis of the same order or even less than the error due to uncertainties in \nthe determination\nof $Z_P$ and $Z_S$ (see Tables~\\ref{tab:epsp1} and~\\ref{tab:epsp2}). \n\n\\begin{figure}[htbp]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=14cm \\epsfbox{higher-order.eps}}\n\\end{center}\n\\caption{Examples of four kinds of diagrams with an arbitrary number of loops\narising in the renormalization \nof four-fermion operators: in (a) and (b) no propagator\ncrosses the box or the axis; (c) and (d) exemplify the\nrest of the diagrams. The rectangle drawn with a dotted line in (b)\ncorresponds to the operator structure $(PP)_{EU}$.}\n\\label{higher-order}\n\\end{figure} \n\n\\begin{figure}[htbp]\n\\begin{center}\n\\leavevmode\n\\centerline{\\epsfysize=8cm \\epsfbox{eps.q60.eps}}\n\\end{center}\n\\caption{A rough estimate of $\\varepsilon '\/\\varepsilon$ \nfor $Q_1$ ensemble using the partially-nonperturbative procedure\ndescribed in the text. 
Three sets of points correspond to \nusing experimental $\\mbox{Re} A_0$ and $\\mbox{Re} A_2$ in \nEq.~(\\ref{eq:epsp}) (crosses),\nusing our $\\mbox{Re} A_0$ but experimental $\\omega$ (diamonds), or using \n$\\mbox{Re} A_0$ and $\\mbox{Re} A_2$ obtained from our calculations (squares).\nAll other details are the same as in Table~\\ref{tab:epsp1}.\nThe shown error is a combination of the statistical error, \na matching error coming from uncertainties in the determination of $Z_P$ \nand $Z_S$, and an uncertainty in non-diagonal mixing of\nsubdominant operators.}\n\\label{epsp}\n\\end{figure} \n\n\\clearpage\n\n\n\\section{Estimates of $\\varepsilon '\/\\varepsilon$}\n\n\\label{sec:epsp_res}\n\nWithin the procedure outlined in the previous section, we have found that \n$\\langle O_6\\rangle$ has a different sign from the expected one.\nThis translates into a negative or very slightly positive value of \n$\\varepsilon '\/\\varepsilon$ (Tables~\\ref{tab:epsp1} and~\\ref{tab:epsp2}\nand Fig.~\\ref{epsp}). \n\nIf the assumptions about the subdominant diagrams made in the previous \nsection are valid, our results would contradict the present\nexperimental results, which favor a positive value of \n$\\varepsilon '\/\\varepsilon$. They would also change the existing\ntheoretical picture~\\cite{buras} due to the change of sign of \n$\\langle O_6\\rangle$.\n\nFinite-volume and quenching effects were found to be small\ncompared to the statistical noise. \nThe main uncertainty in $\\varepsilon '\/\\varepsilon$ \ncomes from operator matching, diagonal and non-diagonal. \nFor diagonal matching the uncertainty comes from (1) errors in the\nnon-perturbative determination of $Z_P$ and $Z_S$ and from\n(2) the unknown degree of validity of our ansatz in Sec.~\\ref{sec:ansatz}. \nFor non-diagonal matching, the error is due to (3) unknown\nnon-diagonal coefficients in the mixing matrix and (4) the ambiguity \nin accounting for higher-order corrections. 
\nThe error (1), as well as the statistical error, is quoted in \nTables~\\ref{tab:epsp1} and~\\ref{tab:epsp2}. The size\nof the error (4) can be judged by the spread\nin $\\varepsilon '\/\\varepsilon$ between two different\napproaches to higher-order corrections (strictly one-loop and partial\nresummation), also presented in\nTables~\\ref{tab:epsp1} and~\\ref{tab:epsp2}. The error (3) is likely\nto be of the same order as the error (4).\nThe error (2) is uncontrolled at this point, since it \nis difficult to rigorously check our assumption made \nin Sec.~\\ref{sec:ansatz}. In Fig.~\\ref{epsp} we combine\nthe statistical error with errors (1) and (4) in quadrature.\n\nThe uncertainty due to operator matching is common to any method\nto compute the relevant matrix elements on the lattice (at least, with \nstaggered fermions). In addition, our method has an inherent\nuncertainty due to dropping the higher order chiral terms.\nLattice spacing dependence of $\\varepsilon '\/\\varepsilon$ is\nunclear at this point, but it may be significant.\n\nWe note that there are several ways to compute $\\varepsilon '\/\\varepsilon$.\nOne can use the experimental values of $\\mbox{Re} A_0$ and $\\mbox{Re} A_2$ in \nEq.~(\\ref{eq:epsp}), or one can use the values obtained on the lattice.\nOne can also adopt an intermediate strategy of using the experimental\namplitude ratio $\\omega$ and computed $\\mbox{Re} A_0$. When the higher-order\nchiral corrections are taken into account and the continuum limit is taken\n(so that $\\omega = 22$),\nthese three methods should converge. At this point any of them\ncan be used, and we compare them in Tables~\\ref{tab:epsp1} \nand~\\ref{tab:epsp2}.\n\nIn view of the issues raised above, $\\varepsilon '\/\\varepsilon$ is an \nextremely fragile quantity. The rough estimates in Tables~\\ref{tab:epsp1} \nand~\\ref{tab:epsp2} and Fig.~\\ref{epsp} should be used with extreme \ncaution. 
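As a mnemonic for these three strategies (and only as a mnemonic: all prefactors and strong phases, which in this work are fixed by Eq.~(\ref{eq:epsp}), are suppressed here), one may write the schematic structure

```latex
% Schematic structure only; all prefactors and phases suppressed,
% see Eq. (eq:epsp) for the expression actually used in this work.
\begin{equation*}
\frac{\varepsilon '}{\varepsilon} \;\propto\;
\frac{\mbox{Im} A_2}{\mbox{Re} A_2} - \frac{\mbox{Im} A_0}{\mbox{Re} A_0}
\, , \qquad
\omega \equiv \frac{\mbox{Re} A_0}{\mbox{Re} A_2} \, ,
\end{equation*}
```

so that the first strategy fixes both denominators from experiment, the second takes $\mbox{Re} A_0$ from the lattice and sets $\mbox{Re} A_2 = \mbox{Re} A_0 / \omega$ with the experimental $\omega$, and the third takes both denominators from the lattice; the three choices coincide once the computed amplitudes reproduce $\omega = 22$.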
\n\n\\section{Conclusion}\n\n\\label{sec:conclusion}\n\nWe have presented in detail the setup of our calculation of\nhadronic matrix elements of all operators in the basis\ndefined in Eqs.~(\\ref{eq:ops1}--\\ref{eq:ops10}).\nWe have obtained statistically significant data for all operators.\nBased on these data we make theoretical estimates of the $\\mbox{Re} A_0$ and\n$\\mbox{Re} A_2$ amplitudes as well as $\\varepsilon '\/\\varepsilon$.\n\nSimulation results show that the enhancement of the $\\Delta I=1\/2$ \ntransition is roughly consistent with the experimental findings. \nHowever, the uncertainty due to higher-order\nchiral terms is very large. If these terms are calculated\nin the future, a more definite prediction for physical amplitudes\ncan be obtained using our present data for matrix elements.\nSimulations should be repeated at a few more values of $\\beta$ \nin the future in order to take the continuum limit. \n\nThe calculation of $\\varepsilon '\/\\varepsilon$ is further complicated \nby the failure\nof perturbation theory in operator matching. We give our\ncrude estimates, but in order to achieve real progress \nthe full nonperturbative matching procedure should be performed.\n\nWe appreciate L. Venkataraman's help in developing \nCRAY-T3E software. \nWe thank the Ohio Supercomputing Center and National Energy Research \nScientific Computing Center (NERSC) for the CRAY-T3E time. We thank the \nColumbia University group for access to their dynamical \nconfigurations. 
\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzchpq b/data_all_eng_slimpj/shuffled/split2/finalzzchpq new file mode 100644 index 0000000000000000000000000000000000000000..ceaa0095752ad492bd15c48f279e93299e5ceac7 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzchpq @@ -0,0 +1,5 @@ +{"text":"\\section{Abstract}\nApplied machine learning (ML) has rapidly spread throughout the physical sciences; in fact, ML-based data analysis and experimental decision-making has become commonplace.\nWe suggest a shift in the conversation from proving that ML can be used to evaluating how to equitably and effectively implement ML for science.\nWe advocate a shift from a \"more data, more compute\" mentality to a model-oriented approach that prioritizes using machine learning to support the ecosystem of computational models and experimental measurements.\nWe also recommend an open conversation about dataset bias to stabilize productive research through careful model interrogation and deliberate exploitation of known biases.\nFurther, we encourage the community to develop ML methods that connect experiments with theoretical models to increase scientific understanding rather than incrementally optimizing materials.\nFurther, we encourage the community to develop machine learning methods that seek to connect experiments with theoretical models to increase scientific understanding rather than simply use them optimize materials. 
\nMoreover, we envision a future of radical materials innovations enabled by computational creativity tools combined with online visualization and analysis tools that support active outside-the-box thinking inside the scientific knowledge feedback loop.\nFinally, as a community we must acknowledge ethical issues that can arise from blindly following machine learning predictions and the issues of social equity that will arise if data, code, and computational resources are not readily available to all.\n\n\\section{Introduction}\nSince Frank Rosenblatt created the Perceptron \\cite{Rosenblatt1960}, machine learning (ML) applications have been used to emulate human intelligence. The field has grown immensely with the advent of ever more powerful computers of increasingly smaller size combined with the development of robust statistical analyses. These advances allowed Deep Blue to beat Grandmaster Garry Kasparov in chess and Watson to win {\\it Jeopardy!} The technology has since progressed to more practical applications such as advanced manufacturing and common tasks we now expect from our phones, like image and speech recognition. The future of ML promises to obviate much of the tedium of everyday life by assuming responsibility for more and more complex processes, \\textit{e.g.}, autonomous driving.\n\nWhen it comes to scientific application, our perspective is that ML methods are just another component of the scientific modeling toolbox, with a somewhat different profile of representational basis, parameterization, computational complexity, and data\/sample efficiency. Fully embracing this view will help the materials and chemistry communities to overcome perceived limitations and at the same time evaluate and deploy these methods with the same level of rigor and introspection as any physics-based modeling methodology. 
Toward this end, in this essay we identify five areas in which materials researchers can clarify our thinking to enable a vibrant and productive community of scientific ML practitioners:\n\n\\begin{enumerate}\n\\item Maintain perspective on resources required\n\\item Openly assess dataset bias\n\\item Keep sight of the goal\n\\item Dream big enough for radical innovation\n\\item Champion an ethical and equitable research ecosystem\n\\end{enumerate}\n\n\\section{Maintain perspective on resources required}\\label{sec:resources}\n\nThe recent high profile successes in mainstream ML applications enabled by internet-scale data and massive computation~\\cite{DBLP:journals\/corr\/abs-2005-14165,deng2009imagenet} have spurred two lines of discussion in the materials community that are worth examining more closely. The first is an unmediated and limiting preference for large scale data and computation, under the assumption that successful machine learning is unrealistic for materials scientists with datasets that are orders of magnitude smaller than those at the forefront of the publicity surrounding deep learning. The second is a tendency to dismiss brute-force machine learning systems as unscientific. While there is some validity to both these viewpoints, there are opportunities in materials research for productive, creative ML work with small datasets and for the \"go big or go home\" brute-force approach.\n\n\\subsection{Molehills of data (or compute) are sometimes better than mountains}\nA common sentiment in the contemporary deep learning community is that the most reliable means of improving the performance of a deep learning system is to amass ever larger datasets and apply raw computational power. This sometimes can encourage the fallacy that large scale data and computation are fundamental requirements for success with ML methods. 
This can lead to needlessly deploying massively overparameterized models when simpler ones may be more appropriate~\\cite{d2020underspecification}, and it limits the scope of applied ML research in materials by biasing the set of problems people are willing to consider addressing. There are many examples of productive, creative ML work with small datasets in materials research that counter this notion~\\cite{HattrickSimpers2018, Xue2016}.\n\nIn the small data regime, high-quality data with informative features often trump excessive computational power with massive data and weakly correlated features. A promising approach is to exploit the bias-variance tradeoff by performing more rigorous feature selection or crafting a more physically motivated model form~\\cite{childs2019embedding}. Alternatively, it may be wise to reduce the scope of the ML task by restricting the material design space or by using ML to solve a smaller chunk of the problem at hand. ML tools for exploratory analysis with appropriate features can help us navigate much higher-dimensional spaces even at an early stage of research, offering a bird's-eye view of our target.\n\nThere are also specific machine learning subfields aimed at the well-known issues of small datasets, dataset bias, noise, incomplete featurization, and over-generalization, and there has been some effort to develop tools that address them. Data augmentation and other regularization strategies can allow even small datasets to be treated with large deep learning models. Another common approach is transfer learning, where a proxy model is trained on a large dataset and adapted to a related task with fewer data points \\cite{Yamada2019, Hoffmann2019, goetz2021addressing}. Chen {\\it et al.} showed that multi-fidelity graph networks can exploit comparatively inexpensive low-fidelity calculations to bolster the accuracy of ML predictions for expensive high-fidelity calculations~\\cite{Chen2021}. 
Finally, active learning methods are now being explored in many areas of materials research, where surrogate models are initialized on small datasets and updated as new data are taken and new predictions are made, often in a manner that balances exploration with optimization~\\cite{Lookman2019}. Generally, a solid understanding of the uncertainty in the data is critical for success with these strategies, but ML systems can lead us to some insights or perhaps serve as a guide for optimizations which might otherwise be intractable.\n\nWe assert that the materials community would generally benefit from taking a more model-oriented approach to applied machine learning, in contrast to the popular prediction-oriented approach that many method-development papers take. To achieve the goals of scientific discovery and knowledge generation, predictive ML must often play a supporting role within a larger ecosystem of computational models and experimental measurements. It can be productive to reassess~\\cite{Bartel2020} the predictive tasks we are striving to address with ML methods; more carefully thought out applications may provide more benefit than simply collecting larger datasets and training higher-capacity models.\n\n\\subsection{Writing off massive computation can lead to missed opportunities}\nOn the other hand, dismissing brute computation as \"unscientific\" can lead to missed opportunities to meaningfully accelerate and enable new kinds or scales of scientific inquiry~\\cite{Holm2019}. Even without investment in massive datasets or specialized ML models, there is evidence that simply increasing the scale of computation applied can help compensate for small datasets~\\cite{he2019rethinking}. In many cases, advances enabled in this way do not directly contribute to scientific discovery or development, but they absolutely change the landscape of feasible scientific research by lowering the barrier to exploration and increasing the scale and automation of data analysis. 
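The active learning loop described earlier in this section (initialize a surrogate on a small seed dataset, then choose each new measurement by balancing exploration against optimization) can be sketched in a few dozen lines of plain Python. Everything below, including the target function, the inverse-distance surrogate, and the acquisition rule, is an illustrative stand-in rather than any published method:

```python
import math

def black_box(x):
    # Hypothetical "experiment": an unknown response to be optimized.
    return math.exp(-(x - 0.62) ** 2 / 0.05)

def surrogate(x, X, Y):
    # Inverse-distance-weighted prediction plus a distance-based
    # uncertainty proxy (an illustrative stand-in for a Gaussian process).
    d = [abs(x - xi) for xi in X]
    if min(d) < 1e-12:
        return Y[d.index(min(d))], 0.0
    w = [1.0 / di for di in d]
    pred = sum(wi * yi for wi, yi in zip(w, Y)) / sum(w)
    return pred, min(d)  # uncertainty grows with distance from the data

def active_learning(n_rounds=20, kappa=1.0):
    grid = [i / 200 for i in range(201)]   # candidate design space
    X = [0.0, 1.0]                         # small seed dataset
    Y = [black_box(x) for x in X]
    for _ in range(n_rounds):
        scores = []
        for x in grid:
            if x in X:
                continue
            pred, unc = surrogate(x, X, Y)
            # Acquisition balances exploitation (pred) and exploration (unc).
            scores.append((pred + kappa * unc, x))
        _, x_next = max(scores)
        X.append(x_next)
        Y.append(black_box(x_next))        # "run the experiment"
    return X, Y

X, Y = active_learning()
best_x = X[Y.index(max(Y))]
```

With `kappa` large the loop fills space; with `kappa` small it exploits the current best region, which is exactly the exploration/optimization trade-off discussed above.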
\n\nFor example, recent advances in learned potential methods have provided paradigm-shifting performance improvements in protein structure prediction~\\cite{Senior2020} and offer the potential to vastly expand the domain of atomistic material simulation. Similarly, when good physical models of data-generating processes exist, massive computation can enable new scientific applications through scalable automated data analysis systems. Recent examples include phase identification in electron backscatter diffraction (EBSD)~\\cite{Kaufmann2020} and X-ray diffraction (XRD)~\\cite{maffettoneCrystallographyCompanionAgent2021c}, and local structural analysis via extended x-ray absorption fine structure (EXAFS)~\\cite{Timoshenko2020, Schmeide2021}. \n\nEven for domains where high-fidelity forward models are not available, generative models provide similar advances in data analysis capabilities. For example, a UV-Vis autoencoder trained on a large dataset of optical spectra~\\cite{Stein2019} directly enabled inverse design of solid-state functional materials~\\cite{Noh2019}.\n\n \n In light of the potential value of large-scale computation in advancing fundamental science, the materials field should make computational efficiency~\\cite{DBLP:journals\/corr\/abs-1907-10597} an evaluation criterion alongside accuracy and reproducibility~\\cite{DBLP:journals\/corr\/abs-2003-12206}. Comparison of competing methods using equal computational budgets can provide insight into which methodological innovations actually contribute to improved performance (as opposed to simply boosting model capacity) and can provide context for the feasibility of various methods to be deployed as online data analysis tools. 
Careful design and interpretation of benchmark tasks and performance measures are needed for the community to avoid chasing arbitrary targets that do not meaningfully facilitate scientific discovery and development of novel and functional materials.\n\n\\section{Openly assess dataset bias}\\label{sec:bias}\n\\subsection{Acknowledging dataset bias}\nIt is widely accepted that materials datasets are distinct from the datasets used to train and validate machine learning systems for more \"mainstream\" applications in a number of ways. While some of this is hyperbole, there are some genuine differences that have a large impact on the overall outlook for ML in materials research. For instance, there is a community-wide perception that all machine learning problems involve data on the scale of the classic image recognition and spam\/ham problems. While the MNIST\\cite{mnist} dataset contains 70,000 labeled images, roughly half the number of labeled instances in the Materials Project Database\\cite{Jain2013}, other popular machine learning benchmark datasets are much more modest in size. For instance, the Iris Dataset contains only 50 samples each of three species of Iris and is treated as a standard dataset for evaluating a host of clustering and classification algorithms. As noted above, dataset size is not necessarily the major hurdle for the materials science community in terms of developing and deploying ML systems; however, the data, input representation, and task must each be carefully considered.\n\nViewed as a monolithic dataset, the materials literature is an extremely heterogeneous multiview corpus with a significant fraction of missing entries. Even if this dataset were accessible in a coherent digital form, its diversity and deficiencies would pose substantial hurdles to its suitability for ML-driven science. 
Most research papers narrowly focus on a single or a small handful of material instances, address only a small subset of potentially relevant properties and characterization modalities, and often fail to adequately quantify measurement uncertainties. Perhaps most importantly, there is a strong systemic bias towards positive results~\\cite{Dwan2008}. All of these factors negatively impact the generalization potential of ML systems. \n\n\\begin{figure}[h!tbp]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{searchunderthelight}\n \\caption{Where to search for new discoveries?}\n \\label{fig:search}\n\\end{figure}\n\nTwo aspects of publication bias play a particularly large role: domain bias and selection bias. Domain bias results when training datasets do not adequately cover the input space. For example, Jia {\\it et al.} recently demonstrated that the \"tried and true\" method of selecting reagents following previous successes artificially constrained the range of chemical space searched, providing the AI with a distorted view of the viable parameter space~\\cite{Jia2019}. Severe domain bias can lead to overly optimistic estimates of the performance of ML systems~\\cite{Wallach2018, Rauer2020} or, in the worst case, even render them unusable for real-world scientific application \\cite{griffiths2021dataset}. \n\nSelection bias arises when some external factor influences the likelihood of a data point's inclusion in the dataset.\nIn scientific research, a major source of such selection bias is the large number of unreported failures. For instance, the Landolt-Bornstein collection lists 71\\% of the alloys as being glass formers, while the actual fraction of glass-forming compounds is estimated to be 5\\%~\\cite{10.1007\/978-3-642-13850-8}. This skews the prior probability of glass formation and further complicates the already challenging task of learning from imbalanced datasets. Schrier {\\it et 
al.} reported on how incorporating failed experiments into ML models can actually improve upon the overall predictive power of a model~\\cite{Raccuglia2016}.\n\nFurthermore, the annotations or targets used to train ML systems do not necessarily represent true physical ground truth. As an example, in the field of metallic glasses the full width half-maximum (FWHM) of the strongest diffraction peak at low {\\it q} is often used to categorize thin-film material as being metallic glass, nanocrystalline, or crystalline. Across the literature the FWHM value used as the threshold to distinguish between the first two classes varies from 0.4 to 0.7 \\AA$^{-1}$ (with associated uncertainties) depending upon the research group. Although compendiums invariably capture the label ascribed to the samples, they almost ubiquitously omit the threshold used for the classification, the uncertainty in the measurement of the FWHM, and the associated synthesis and characterization metadata. Comprehensive studies often report only reduced summaries for the datasets presented and include full details only for a subset of \"representative data.\" These shortcomings are common across the primary materials science literature. Given that even experts can reasonably disagree on the interpretation of experimental results, the lack of access to primary datasets prevents detailed model critique, posing a substantial impediment to model validation~\\cite{HattrickSimpers2021,griffiths2021dataset}. The push for creating F.A.I.R. (Findable, Accessible, Interoperable, and Reusable \\cite{Wilkinson2016}) datasets with human\/computer readable data structures notwithstanding, most of the data and meta-data for materials that have ever been made and studied have been lost to time.\n\n Systematic errors in datasets are not restricted to experimental results alone. 
Theoretical predictions from high-throughput density functional theory (DFT) databases, for example, are a valuable resource for predicted material (meta-) stability, crystal structures, and physical properties, but DFT computations contain several underlying assumptions that are responsible for known systematic errors, e.g., in calculated band gaps. DFT experts are well aware of these limitations and their implications for model building; however, scientists unfamiliar with the field may not be able to reasonably draw conclusions about the potential viability of a model's predictions given these limitations. The discrepancy between DFT and experimental data will grow as systems become increasingly complex, a longstanding trend in applied materials science. A heterogeneous model, in particular, may cause large uncertainty depending on the complexity of the input structure, and often little to no information is provided about the structure or the rationale for choosing it.\n \n Finally, even balanced datasets with quantified uncertainties are not guaranteed to generate predictive models if the features used to describe the materials and\/or how they are made are not sufficiently descriptive. Holistically describing the composition, structure, and microstructure of existing materials is a challenging problem, and the feature set used (e.g., microstructure 2-point correlation, compositional descriptors and radial distribution functions for functional materials, and calculated physical properties) is largely community driven. 
This presupposes that we know and can measure the relevant features during our experiments.\n Often identifying the parameters that strongly influence materials synthesis and the structural aspects highly correlated to function is a matter of scientific inquiry in and of itself.\n For example, identifying the importance of temperature in cross-linking rubber or the effect of moisture in the reproducible growth of super-dense, vertically aligned single-walled carbon nanotubes requires careful observation and lateral thinking to connect seemingly independent or unimportant variables.\n If these parameters (or covariate features, \\textit{e.g.}, CVD system pump curves) are not captured from the outset, then there is no hope of algorithmically discovering a causal model, and weakly predictive models are likely to be the best-case output.\n\n\\subsection{Productivity in spite of dataset bias}\nBias in historical and as-collected datasets should be acknowledged, but it does not entirely preclude their use to train an AI targeted towards scientific inquiry. Instead, one can continue to gain productive insights from AI by taking the appropriate approach and thinking analytically about the results of the model. \n\nOne method for maintaining \"good\" features and models is to adopt active human intervention in the ML loop. For example, we have recently demonstrated that Random Forest models that are tuned to aggressively maximize only cross-validation accuracy may produce low-quality, unreliable feature-ranking explanations~\\cite{Lei2021}. Carefully tracking which features (and data points) the model is most dependent on for its predictions allows a researcher to ensure that the model is capturing physically relevant trends, identify new potential insight into material behavior, and spot possible outliers. 
Similarly, when physics-based models are used to generate features and training data for ML models, subsequent comparison of new predictions to theory-based results offers the opportunity for improvement of both models~\\cite{Liu2020}. An alternative approach, as recently demonstrated by Kusne {\\it et al.}, is to have the ML model directly request expert input, such as performing a measurement or calculation, that is expected to lower predictive uncertainties~\\cite{Kusne2020}.\n\nEspecially with small datasets, it is important to characterize the extent of dataset bias and perform careful model performance analysis to obtain realistic estimates of the generalization of ML models. See Ref.~\\cite{Rauer2020} for compelling examples and an overview of recently developed unbiasing techniques in the computational chemistry literature, including details on the Asymmetric Validation Embedding method, which quantifies the bias of a dataset relative to the ability of a first-nearest-neighbor model to memorize the training data. This method explicitly accounts for the label distribution but is specific to classification tasks. Leave-one-cluster-out cross-validation~\\cite{Meredig2018} is more general, using only distances in input space to define cross-validation groups and thereby reduce information leakage between folds. Similarly, De Breuck {\\it et al.} used principal component analysis to assess dataset bias by examining the density of data points in scores plots~\\cite{DeBreuck2021}. \n\nA culture of careful model criticism is also important for robust applied ML research~\\cite{lipton2018troubling}. A narrow focus on benchmark tasks can lead to false incremental progress, where over time models begin overfitting to a particular test dataset and then lack generalizability beyond the initial dataset~\\cite{DBLP:journals\/corr\/abs-1902-10811}. Recht {\\it et al.}~\\cite{DBLP:journals\/corr\/abs-1902-10811} demonstrated that a broad range of computer vision models suffer from this effect by developing extended test sets for the CIFAR-10 and ImageNet datasets extensively used in the community for model development. This can make it difficult to reason about exactly which methodological innovations truly contribute to generalization performance. Because many aspects of ML research are empirical, carefully designed experiments are needed to separate genuine improvements from statistical effects, and care is needed to avoid {\\it post-hoc} rationalization (Hypothesizing After the Results are Known (HARK)~\\cite{DBLP:journals\/corr\/abs-1904-07633}).\n\nHistorical dataset bias is both unavoidable and unresolvable, but once identified, it does not necessarily preclude searching for new materials in directions that directly contradict the bias~\\cite{Nguyen2021}. For instance, Jia {\\it et al.} identified anthropogenic biases in the design of amine-templated metal oxides, in that a small number of amine complexes had been used for the vast majority of the literature~\\cite{Jia2019}. Their solution was to perform 548 randomly generated experiments, not only to demonstrate that a global maximum had not been reached but also to erode the systemic data bias their models observed. This is not to say that such an approach is a panacea for dataset or feature set bias, as such experiments are still designed by scientists carrying their own biases (e.g., using only amines) and may suffer from uncaptured (but important!) features. Of course, a question remains as to how best to remove human bias from the experimental pipeline. One might begin that endeavor by allowing researchers to use their intuition and insights for featurization, data curation, and goal setting, while permitting the ML to perform the ultimate selection of the experiment to be performed and manage data acquisition. 
\n\n\\section{Keep sight of the goal}\\label{sec:goal}\nWhile the implementation of ML in materials science is goal driven, often focused on a push for better accuracy and faster calculations, these are not always the only objectives or even the most important ones. Consider the trade-off between accuracy and discovery. If one is optimizing the pseudopotentials to use for DFT ~\\cite{Behler2007, Bartk2010}, then design is centered around accuracy. On the other hand, if the goal is to identify a material that has a novel combination of physical properties, simply knowing that such a compound exists may be sufficient to embark on a meaningful research effort. The details related to synthesis and processing of the actual phase may likely go far beyond what is possible with any extant ML models, especially with limited benchmark datasets as one approaches the boundary of new science. \n\nThere are clearly cases where ML is the obvious choice to accelerate research, but there can be concerns about the suitability of ML to answer the relevant question. Many applied studies focus only on physical or chemical properties of materials and often fail to include parameters relating to their fundamental utility such as reproducibility, scalability, stability, productivity, safety, or cost~\\cite{olivetti2018toward}. While humans may not be able to find correlations or patterns in high-dimensional spaces, we have rich and diverse background knowledge and heuristics; we have only just begun the difficult work of inventing ways of building this knowledge into machine learning systems. In addition, for domains with small datasets, limited features, and a strong need for higher-level inference rather than a surrogate model, ML should not necessarily be the default approach. A more traditional approach may be faster due to the error in the ML models associated with sample size, and heuristics can play a role even with larger datasets~\\cite{George2021}. 
\n\nOne alternative is to employ a hybrid method, which may incorporate a Bayesian methodology into the analysis~\\cite{gelman1995bayesian} or may use ML to guide the work through selective intervention~\\cite{hutchinson2017overcoming}. ML is only a means to model data (Figure~\\ref{fig:hallpetch}), and a good fit to the dataset is no guarantee that the model will be useful, since it may have little to no relationship to actual science as it attempts to emulate apparent correlations between the features and the targets. A corollary is that any predictions from ML, especially when working with small datasets, may be unphysical. Again, we stress that this does not imply that we should never use ML for small datasets. Rather, we need to employ ML tools judiciously and understand their limitations in the context of our scientific goals. For instance, most ML models are reasonably good at interpolation~\\cite{friedman2017elements}. On the other hand, ML is not nearly as robust when used for extrapolation, although this can be mitigated to some extent by including rigorous statistical analyses on the predictions~\\cite{Tran2020}.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{grainsize.pdf}\n \\caption{A Gaussian Process model can effectively reproduce the grain size dependence of the mechanical strength of an alloy even though it is completely devoid of any knowledge of the effect of the density of grain boundaries for large-grain metals \\cite{Cordero2016}, the impact of grain boundary sliding in nanocrystalline alloys \\cite{Trelewicz2007}, or even the regime change between them.}\n \\label{fig:hallpetch}\n\\end{figure}\n\nA discussion of errors and failure modes can help one understand the bounds of the validity of any ML analysis, although such discussion is often lacking or limited. 
An honest discourse includes not only principled estimates of model performance and detailed studies of predictive failure modes but also notes how reproducible the results are within and across research groups. Such disclosure is important for the trustworthiness of ML for any application. \n\nFinally, one of the biggest potential pitfalls that can occur, even for large, well-curated datasets, is that one can lose sight of the goal by focusing on the accuracy of the model rather than using it to learn new science. There is a particular risk of the community spending disproportionate effort incrementally optimizing models to overfit against benchmark tasks~\\cite{DBLP:journals\/corr\/abs-1902-10811}, which may or may not even truly represent meaningful scientific endeavors in themselves. The objective should not be to identify the one algorithm that is good at everything but rather to develop a more focused effort that addresses a specific scientific research question. For ML to reach its true potential to transform research and not just serve as a tool to expedite materials discovery and optimization, it needs to help provide a means to connect experimental and theoretical results instead of simply serving as a convenient means to describe them. For the ML novice, it is helpful to remember to keep the scientific goal at the forefront when selecting a model and designing training and validation procedures.\n\n\\section{Dream big enough for radical innovation}\\label{sec:innovation}\n\nTo date, AI has increased its presence in materials science mainly in three applications: 1) automating data analysis that used to be manual, 2) serving as lead generation in a materials screening funnel, as illustrated by the Open Quantum Materials Database and Materials Project, and 3) optimizing existing materials, processes, and devices in a broadly incremental manner. 
While these applications are critically important in this field, we have witnessed that radical innovation has historically often been accomplished outside the context of these frameworks, driven by human interests or serendipity along with stubborn trial and error. For instance, graphene was first isolated during Friday night experiments when Geim and Novoselov would try out experimental science that was not necessarily linked to their day jobs. Escobar {\\it et al.} discovered that peeling adhesive tape can emit enough X-rays to produce images~\\cite{Sanderson2008}. Shirakawa discovered a conductive polyacetylene film by accidentally mixing doping materials at a concentration a thousand times too high~\\cite{Guo2020}. Design research has argued that every radical innovation investigated was done without careful analysis of a person's or even a society's needs~\\cite{Norman2014}. If this is the case, an ultimate question about ML deployment in materials science would be: can ML help humans make the startling discovery of \"novel\" materials and eventually new science?\n\n\nAccording to a proposed categorization in design research~\\cite{Norman2014}, researchers can position their research based on scientific and application familiarity (Fig~\\ref{fig:innovation}). Here, incremental areas (blue region) can provide easier data acquisition and interpretation of results but may hinder new discovery. In contrast, an unexplored area may be more likely to provide such unexpected results but presents a huge risk of wasting research resources due to the inherent uncertainty. Self-aware resource allocation and inter-area feedback will be needed to balance novelty with the probability of successful research outcomes. 
Although there is currently a lack of ML methods that can directly navigate one to the radical change\/radical application quadrant to discover new science, we expect that there are methodologies that can harness ML to increase the chance of radical discovery.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{radicalinnovation2.pdf}\n \\caption{(a) Research categorization based upon the degree of scientific and application familiarity. (b) Research loop involving machine learning with traditional and outside-the-box steps.}\n \\label{fig:innovation}\n\\end{figure}\n\n\\subsection{Active outside-the-box exploration driven by ML-assisted knowledge acquisition}\nHuman interests motivate outside-the-box research that may lead to a radical discovery, and these interests are fostered by theoretical or experimental knowledge acquisition. Therefore, any applied AI and automated research systems may contribute to discrete discovery by accelerating the knowledge feedback loop (Fig~\\ref{fig:innovation}b). Such an ML-involved research loop can include a proposal of hypotheses, theoretical and experimental examination, knowledge extraction, and generalization, which may lead to an opportunity for radical thinking. For ML to play a meaningful role in expediting this loop, one should maintain exploratory curiosity at each step and be inspired or guided by any outputs while remaining attentively involved in the loop. 
Additionally, at the very beginning of proof-of-concept research, either in a current research loop or an outside-the-box search, concerns about reproducibility should not prevent the attempt at new ideas, because the scientific community needs to integrate conflicting observations and ideas into a coherent theory~\\cite{Redish2018}.\n\nOne can harken back to Delbruck's principle of limited sloppiness~\\cite{Yaqub2018}, which reminds us that our experimental design sometimes tests unintended questions, and hidden selectivity requires attention to abnormality. In this context, ML may help us notice anomalies or even hidden variables through a rigorous statistical procedure, leading to new pieces of knowledge and outside-the-box exploration. For instance, Nega et al. used automated experiments and statistical analysis to clarify the effect of trace water on crystal\/domain growth of halide perovskites~\\cite{Nega2021}, an effect which had often been communicated only in intra-lab conversation. Since such correlation analysis can only shed light on a domain where features are input, researchers still need to feed in comprehensive experimental records containing both data and metadata, possibly regardless of their initial interests. Also, an unbiased and flexible scientific attitude based upon observation may be crucial to reconceptualize a question after finding the abnormality.\n\n\\subsection{Deep generative inverse design to assist in creating material concepts}\nFunctionality-oriented inverse design~\\cite{Zunger2018} is an emerging approach for searching chemical spaces~\\cite{Kirkpatrick2004} for small molecules and possibly solid-state compounds~\\cite{ren2020inverse}. Briefly, a deep generative model learns a probabilistic latent representation of chemical space, and a surrogate model is used to optimize target properties in the latent space; novel compounds likely to have desired properties can then be sampled from the generative model~\\cite{SanchezLengeling2018}. 
While the design spaces, such as the 166 billion molecules mapped by chemical space projects~\\cite{Reymond2015}, are far beyond human capability to understand comprehensively, AI may distill patterns connecting functionalities and compound structures spanning the space. This approach can be a critical step in conceptualizing materials design based upon desired functionalities and further accelerating the AI-driven research loop. One application of such inverse design is to create a property-first optimization loop that includes defining a desired property, proposing a material and structure for the property, validating the results with (automated) experiments, and refining the model. \n\nWhile these generative methods may start to approach creativity, they still explicitly aim to learn an empirical distribution based on the available data. Therefore, extrapolation outside of the current distribution of known materials is not guaranteed to be productive. This suggests that these methods would probably not generate a carbon nanotube given only pre-nanotube-era structures for training, or generate ordered superlattices if there are none in the training data. In addition, these huge datasets are mainly constructed based on simulation, and we need to be careful about the gap between simulated and actual experimental data, as discussed previously. 
Still, a new concept extracted from inverse design may inspire researchers to jump into a new discrete subfield of material design by actively interpreting the abstracted property-structure relationship.\n\n\\subsection{Creative AI for materials science}\nThe essence of scientific creativity is the production of new ideas, questions, and connections \\cite{Lehmann2019}.\nThe era of AI as an innovative investigator in this sense has yet to arrive.\nHowever, since human creativity has been cultivated by actively learning and connecting dots highlighted by our curiosity, it may be possible that machine \"learning\" can be as creative as humans in reaching radical innovation. While conventional supervised natural language processing~\\cite{Krallinger2017} has required large hand-labeled datasets for training, a recent unsupervised learning study~\\cite{Tshitoyan2019} indicates the possibility of extracting knowledge from the literature without human intervention to identify relevant content and capturing preliminary materials science concepts such as the underlying structure of the periodic table and structure-properties relationships. This was demonstrated by encoding the latent knowledge of the literature into information-dense word embeddings, which recommended some materials for a specific application ahead of human discovery. Since the amount of currently existing literature is too massive for human cognition, generative AI systems may be useful to suggest a specific design or concept given appropriately defined functionalities. \n\nAn underlying challenge is how to deal with implicit and non-machine-readable data reported in the literature. For instance, it is common to summarize experimental results with a 2D figure that describes only a tendency over a limited range along with some maxima\/minima. 
Such disproportionate summarization does not span the entire range of the experimental space described in the figure, and may bias the parameter space that a model might explore depending upon how the literature is written. This also returns us to the issue of addressing the hesitancy of publishing \"unsuccessful\" research data. One may need to be careful in accepting AI-driven proposals, since there is likely a gap between a human-interest-driven leap and an ML-driven suggestion based on some learned representation of the unstructured data gleaned from the literature.\n\nBeyond latent variable optimization, one may consider computational creativity, which is used to model imagination in fields such as the arts~\\cite{DBLP:journals\/corr\/abs-2006-08381}, music~\\cite{DBLP:journals\/corr\/abs-1709-01620}, and gaming. This endeavor may start with finding a vector space to measure novelty as a distance~\\cite{berns2020bridging}. A novelty-oriented algorithm searches the space for a set of distant new objects that is as diverse as possible so as to maximize novelty rather than an objective function~\\cite{lehman2011abandoning}. Since any distance measure over the exploratory space carries some bias, the deep learning novelty explorer (DeLeNox) was recently proposed~\\cite{DBLP:journals\/corr\/abs-2103-11715} as a means to dynamically change the distance functions for improved diversity. 
These approaches could be applied to materials science to diversify research directions and help us pose and consider novel materials and ideas, though measuring novelty may be subjective and most challenging for the community, and one always needs to be mindful of ethical and physical materials constraints.\n\n\\section{Champion an ethical and equitable research ecosystem}\\label{sec:ethics}\nLooking toward the future of the use of ML in materials science, there are issues, such as potential physical, economic, and legal risks, that have yet to be fully discussed and resolved.\nFor example, ML may predict that mixing several materials together will form a new compound with a set of desired properties, but the synthesis may be dangerous because of toxic gases produced during a side reaction, or the final product may be flammable or explosive.\nAlso consider that indiscriminate use of ML could lead to infringement upon intellectual property rights if the algorithm is unaware of the protected status of certain processes or materials.\nA yet unanswered question regarding either scenario is: who is the responsible party, the person who created the ML environment or the person who provided the data that did not capture all potential hazards and conflicts? It is paramount that the community reach a consensus on issues such as this before widespread autonomous use of ML.\n\nAnother concern to be addressed as ML transforms materials research is the prospect of enormous inequities between the computationally rich and poor, where the rich quickly explore large parameter spaces and the have-nots fall behind, unable to compete. This disparity would grow larger and faster if end users, reviewers, and program managers deem that only resource-intensive ML is trustworthy. 
Although a materials cloud platform~\\cite{klimeck2008nanohub, Talirz2020} could help to bridge the gap between these groups, it would be meaningless without a strong culture of open publication of training source code, model parameters, and appropriate benchmark datasets. Yet even making these resources freely available may still be insufficient to sustain a level playing field unless there is equivalent access to state-of-the-art instrumentation to validate the increasingly detailed predictions. Clearly, we have time before we arrive at that reckoning, but the complexity of the matter requires us to begin discussing it now.\n\n\\section{Summary}\nMachine learning has been effective at expediting a variety of tasks, and the initial stage of its implementation for materials research has already confirmed that it has great promise to accelerate science and discovery~\\cite{baker2019workshop}. To realize that full potential, we need to tailor its usage to answer well-defined questions while keeping in perspective the limits of the resources needed and the bounds of meaningful interpretation of the resulting analyses. Eventually, we may be able to develop ML algorithms that will consistently lead us to new breakthroughs in an open and equitable framework. In the meantime, a complementary team of humans, AI, and robots has already begun to advance materials science for the common good.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLet $\\varphi:M\\to (N,h)$ be an immersion of a manifold $M$ into a\nRiemannian manifold $(N,h)$. 
We say that $\\varphi$ is {\\em\nbiharmonic}, or $M$ is a {\\em biharmonic submanifold}, if its mean curvature vector\nfield $H$ satisfies the following equation\n\\begin{eqnarray}\\label{eq: bih_eq}\n\\tau_2(\\varphi)=- m\\left(\\Delta H + \\trace{R^{N}}(\nd\\varphi(\\cdot), H) d\\varphi(\\cdot)\\right)=0,\n\\end{eqnarray}\nwhere $\\Delta$ denotes the rough Laplacian on sections of the\npull-back bundle $\\varphi^{-1}(TN)$ and $R^N$ denotes the curvature\noperator on $(N,h)$. The section $\\tau_2(\\varphi)$ is called the\n{\\em bitension field}.\n\nWhen $M$ is compact, the biharmonic condition arises from a\nvariational problem for maps: for an arbitrary smooth map\n$\\varphi:(M,g)\\to (N,h)$ we define\n$$\nE_{2}\\left( \\varphi \\right) = \\frac{1}{2} \\int_{M} |\\tau(\\varphi)|^{2}\\, v_{g},\n$$\nwhere $\\tau(\\varphi)=\\trace\\nabla d\\varphi$ is the {\\it tension field}. The functional $E_2$ is called the {\\em bienergy functional}. When $\\varphi:(M,\\varphi^{\\ast}h)\\to (N,h)$ is an immersion, the tension\nfield has the expression $\\tau(\\varphi)=mH$ and \\eqref{eq: bih_eq} is equivalent to $\\varphi$ being a critical point of $E_2$.\n\n\nObviously, any minimal immersion ($H=0$) is biharmonic. The non-harmonic biharmonic immersions are called {\\it proper biharmonic}.\n\nThe study of proper biharmonic submanifolds is nowadays becoming a\nvery active subject and its popularity initiated with the\nchallenging conjecture of B-Y.~Chen (see the recent book \\cite{C11}): {\\em any biharmonic submanifold\nin the Euclidean space is minimal}.\n\nChen's conjecture was generalized to: {\\em any biharmonic submanifold in a\nRiemannian manifold with non-positive sectional curvature is\nminimal}, but this was proved not to hold. 
Indeed, in \\cite{OT10}, Y.-L.~Ou and L.~Tang\nconstructed examples of proper biharmonic hypersurfaces\nin a $5$-dimensional space of non-constant negative sectional\ncurvature.\n\nYet, the conjecture is still open in its full generality for ambient\nspaces with constant non-positive sectional curvature, although it\nwas proved to be true in numerous cases when additional geometric\nproperties for the submanifolds were assumed (see, for example,\n\\cite{BMO08,CMO02,C91,D98,D92,HV95}).\n\nBy way of contrast, as we shall detail in Section~\\ref{sec:\nbih-sub}, there are several families of examples of proper\nbiharmonic submanifolds in the $n$-dimensional unit Euclidean sphere\n$\\mathbb{S}^{n}$. For simplicity we shall denote these classes by {\\bf B1}, {\\bf B2}, {\\bf B3} and {\\bf B4}.\n\n The goal of this paper is to continue the study of\nproper biharmonic submanifolds in $\\mathbb{S}^{n}$ in order to achieve their classification.\nThis program was initiated for the very first time in \\cite{J86} and then developed\nin \\cite{BMO12} -- \\cite{BO09}, \\cite{CMO02,CMO01,NU11,NU12, O02}.\n\nIn the following, by a rigidity result for proper biharmonic submanifolds we mean:\\\\\n{\\em find under what conditions a proper biharmonic submanifold in ${\\mathbb S}^n$ is one of the main examples {\\bf B1}, {\\bf B2}, {\\bf B3} and {\\bf B4}}.\n\nWe prove rigidity results for the following types of submanifolds in ${\\mathbb S}^n$: Dupin hypersurfaces; hypersurfaces, both compact and non-compact, with bounded norm of the second fundamental form; hypersurfaces satisfying intrinsic geometric properties; PMC submanifolds; parallel submanifolds.\n\nMoreover, we include in this paper two results of J.H.~Chen published in \\cite{C93}, in Chinese. We give a complete proof of these results using the invariant formalism and shortening the original proofs.\n\n\\vspace{2mm}\n\n{\\bf Conventions.}\nThroughout this paper all manifolds, metrics, maps are assumed to be smooth, i.e. 
$C^\\infty$. All manifolds are assumed to be connected. The following sign conventions are used\n$$\n\\Delta V=-\\trace\\nabla^2 V\\,,\\qquad R^N(X,Y)=[\\nabla_X,\\nabla_Y]-\\nabla_{[X,Y]},\n$$\nwhere $V\\in C(\\varphi^{-1}(TN))$ and $X,Y\\in C(TN)$.\nMoreover, the Ricci curvature and the scalar curvature $s$ are defined as\n$$\n\\langle \\ricci(X),Y\\rangle=\\ricci(X,Y)=\\trace (Z\\to R(Z,X)Y), \\quad s=\\trace \\ricci,\n$$\nwhere $X,Y,Z\\in C(TN)$.\n\n\\vspace{2mm}\n\n{\\bf Acknowledgements.}\nThe authors would like to thank Professor Jiaping Wang for some helpful discussions and Juan Yang for the accurate translation of \\cite{C93}. The third author would like to thank the Department of Mathematics and Informatics of the University of Cagliari for the warm hospitality.\n\n\\section{Biharmonic immersions in ${\\mathbb S}^n$}\\label{sec: bih-sub}\n\nThe key ingredient in the study of biharmonic submanifolds is the\nsplitting of the bitension field with respect to its normal and\ntangent components. 
In the case when the ambient space is the unit Euclidean sphere we have the following characterization.\n\n\\begin{theorem}[\\cite{C84, O02}]\\label{th: bih subm S^n}\nAn immersion $\\varphi:M^m\\to\\mathbb{S}^n$ is biharmonic if and only if\n\\begin{equation}\\label{eq: caract_bih_spheres}\n\\left\\{\n\\begin{array}{l}\n\\ \\Delta^\\perp {H}+\\trace B(\\cdot,A_{H}\\cdot)-m\\,{H}=0,\n\\vspace{2mm}\n\\\\\n\\ 2\\trace A_{\\nabla^\\perp_{(\\cdot)}{H}}(\\cdot)\n+\\dfrac{m}{2}\\grad {|H|}^2=0,\n\\end{array}\n\\right.\n\\end{equation}\nwhere $A$ denotes the Weingarten operator, $B$ the second\nfundamental form, ${H}$ the mean curvature vector field, $|H|$ the mean curvature function,\n$\\nabla^\\perp$ and $\\Delta^\\perp$ the connection and the Laplacian\nin the normal bundle of $\\varphi$, respectively.\n\\end{theorem}\n\nIn the codimension one case, denoting by $A=A_\\eta$ the shape operator with respect to a (local) unit section $\\eta$ in the normal bundle and\nputting $f=(\\trace A)\/m$, the above result reduces to the following.\n\\begin{corollary}[\\cite{O02}]\\label{cor: caract_hypersurf_bih}\nLet $\\varphi:M^m\\to\\mathbb{S}^{m+1}$ be an orientable hypersurface. Then $\\varphi$ is biharmonic if and only if\n\\begin{equation}\\label{eq: caract_bih_hipersurf_spheres}\n\\left\\{\n\\begin{array}{l}\n{\\rm (i)}\\quad \\Delta f=(m-|A|^2) f,\n\\\\ \\mbox{} \\\\\n{\\rm (ii)}\\quad A(\\grad f)=-\\dfrac{m}{2}f\\grad f.\n\\end{array}\n\\right.\n\\end{equation}\n\\end{corollary}\n\nA special class of immersions in $\\mathbb{S}^n$ consists of the parallel mean curvature immersions (PMC), that is immersions such that $\\nabla^{\\perp}H=0$. For this class of immersions Theorem~\\ref{th: bih subm S^n} reads as follows.\n\n\\begin{corollary}[\\cite{BO12}]\\label{th: caract_bih_pmc}\nLet $\\varphi:M^m\\to\\mathbb{S}^n$ be a PMC immersion. 
Then $\\varphi$ is biharmonic if and only if\n\\begin{equation}\\label{eq: caract_bih_Hparallel_I}\n\\trace B(A_H(\\cdot),\\cdot)=mH,\n\\end{equation}\nor equivalently,\n\\begin{equation}\\label{eq: caract_bih_Hparallel_II}\n\\left\\{\n\\begin{array}{ll}\n\\langle A_H, A_\\xi\\rangle=0,\\quad \\forall\\xi\\in C(NM)\\, \\text{with}\\,\\, \\xi\\perp H,\n\\\\ \\mbox{} \\\\\n|A_H|^2=m|H|^2,\n\\end{array}\n\\right.\n\\end{equation}\nwhere $NM$ denotes the normal bundle of $M$ in $\\mathbb{S}^n$.\n\\end{corollary}\n\nWe now list the main examples of proper biharmonic immersions in $\\mathbb{S}^n$.\n\n\\begin{list}{\\labelitemi}{\\leftmargin=2em\\itemsep=1.5mm\\topsep=0mm}\n\\item[{\\bf B1}.] The canonical inclusion of the small hypersphere\n\\begin{equation*}\\label{eq: small_hypersphere}\n\\mathbb{S}^{n-1}(1\/\\sqrt 2)=\\left\\{(x,1\/\\sqrt 2)\\in\\mathbb{R}^{n+1}: x\\in \\mathbb{R}^n, |x|^2=1\/2\\right\\}\\subset\\mathbb{S}^{n}.\n\\end{equation*}\n\\item[{\\bf B2}.] The canonical inclusion of the standard (extrinsic) products of spheres\n\\begin{equation*}\\label{eq: product_spheres}\n\\mathbb{S}^{n_1}(1\/\\sqrt 2)\\times\\mathbb{S}^{n_2}(1\/\\sqrt 2)=\\left\\{(x,y)\\in\\mathbb{R}^{n_1+1}\\times\\mathbb{R}^{n_2+1}, |x|^2=|y|^2=1\/2\\right\\}\\subset\\mathbb{S}^{n},\n\\end{equation*}\n$n_1+n_2=n-1$ and $n_1\\neq n_2$.\n\\item[{\\bf B3}.] The maps $\\varphi=\\imath\\circ\\phi:M\\to \\mathbb{S}^n$, where $\\phi:M\\to \\mathbb{S}^{n-1}(1\/\\sqrt 2)$ is a minimal immersion, and $\\imath:\\mathbb{S}^{n-1}(1\/\\sqrt 2)\\to\\mathbb{S}^n$ denotes the canonical inclusion.\n\n\\item[{\\bf B4}.] 
The maps $\\varphi=\\imath\\circ(\\phi_1\\times\\phi_2): M_1\\times M_2\\to \\mathbb{S}^n$, where $\\phi_i:M_i^{m_i}\\to\\mathbb{S}^{n_i}(1\/\\sqrt 2)$, $0 < m_i \\leq n_i$, $i=1,2$, are minimal immersions, $m_1\\neq m_2$, $n_1+n_2=n-1$, and $\\imath:\\mathbb{S}^{n_1}(1\/\\sqrt 2)\\times\\mathbb{S}^{n_2}(1\/\\sqrt 2)\\to \\mathbb{S}^n$ denotes the canonical inclusion.\n\\end{list}\n\n\n\\begin{remark}\n\\begin{itemize}\n\\item[(i)] The proper biharmonic immersions of class {\\bf B3} are pseudo-umbilical, i.e. $A_H=|H|^2\\Id$, have parallel mean curvature vector field and mean curvature $|H|=1$. Clearly, $\\nabla A_H=0$.\n\n\\item[(ii)] The proper biharmonic immersions of class {\\bf B4} are no longer pseudo-umbilical, but still have parallel mean curvature vector field and their mean curvature is $|H|={|m_1-m_2|}\/{m}\\in(0,1)$, where $m=m_1+m_2$. Moreover, $\\nabla A_H=0$ and the principal curvatures in the direction of $H$, i.e. the eigenvalues of $A_H$, are constant on $M$ and given by $\\lambda_1=\\ldots=\\lambda_{m_1}=({m_1-m_2})\/{m}$, $\\lambda_{m_1+1}=\\ldots=\\lambda_{m_1+m_2}=-({m_1-m_2})\/{m}$. Specific B4 examples were given by W.~Zhang in \\cite{Z11} and generalized in \\cite{BMO08a, WW12}.\n\\end{itemize}\n\\end{remark}\n\n\nWhen a biharmonic immersion has constant mean curvature (CMC) the following bound for $|H|$ holds.\n\n\\begin{theorem}[\\cite{O03}]\\label{teo:h=cst-b3}\nLet $\\varphi:M\\to\\mathbb{S}^n$ be a CMC proper biharmonic immersion. Then $|H|\\in(0,1]$, and $|H|=1$ if and only if $\\varphi$ induces a minimal immersion of $M$ into $\\mathbb{S}^{n-1}(1\/\\sqrt 2)\\subset\\mathbb{S}^n$, that is $\\varphi$ is {\\bf B3}.\n\\end{theorem}\n\n\\section{Biharmonic hypersurfaces in spheres}\n\nThe first case to look at is that of CMC proper biharmonic hypersurfaces in $\\mathbb{S}^{m+1}$.\n\n\\begin{theorem}[\\cite{BMO08, BO12}]\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a CMC proper biharmonic hypersurface. 
Then\n\\begin{itemize}\n\\item[(i)] $|A|^2=m$;\n\\item[(ii)] the scalar curvature $s$ is constant and positive, $s=m^2(1+|H|^2)-2m$;\n\\item[(iii)] for $m>2$, $|H|\\in(0,({m-2})\/{m}]\\cup\\{1\\}$. Moreover, $|H|=1$ if and only if $\\varphi(M)$ is an open subset of the small hypersphere $\\mathbb{S}^m(1\/\\sqrt 2)$, and $|H|=({m-2})\/{m}$ if and only if $\\varphi(M)$ is an open subset of the standard product $\\mathbb{S}^{m-1}(1\/\\sqrt 2)\\times\\mathbb{S}^1(1\/\\sqrt 2)$.\n\\end{itemize}\n\\end{theorem}\n\n\\begin{remark}\nIn the minimal case the condition $|A|^2=m$ is exhaustive. In fact, a minimal hypersurface in $\\mathbb{S}^{m+1}$ with $|A|^2=m$ is a minimal standard product of spheres (see \\cite{CdCK70, L69}). We point out that the full classification of CMC hypersurfaces in $\\mathbb{S}^{m+1}$ with $|A|^2=m$ (which are therefore biharmonic) is not known.\n\\end{remark}\n\n\\begin{corollary}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a complete proper biharmonic hypersurface.\n\\begin{itemize}\n\\item[(i)] If $|H|=1$, then $\\varphi(M)=\\mathbb{S}^{m}(1\/\\sqrt 2)$ and $\\varphi$ is an embedding.\n\\item[(ii)] If $|H|=({m-2})\/{m}$, $m>2$, then $\\varphi(M)=\\mathbb{S}^{m-1}(1\/\\sqrt 2)\\times\\mathbb{S}^1(1\/\\sqrt 2)$ and the universal cover of $M$ is $\\mathbb{S}^{m-1}(1\/\\sqrt 2)\\times\\mathbb{R}$.\n\\end{itemize}\n\\end{corollary}\n\nAs a direct consequence of \\cite[Theorem 2]{NS69} we have the following result.\n\n\\begin{theorem}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a CMC proper biharmonic hypersurface. Assume that $M$ has non-negative sectional curvature. Then $\\varphi(M)$ is either an open part of $\\mathbb{S}^{m}(1\/\\sqrt 2)$, or an open part of $\\mathbb{S}^{m_1}(1\/\\sqrt 2)\\times \\mathbb{S}^{m_2}(1\/\\sqrt 2)$, $m_1+m_2=m$, $m_1\\neq m_2$.\n\\end{theorem}\n\nIn the following we shall no longer assume that the biharmonic hypersurfaces have constant mean curvature, and we shall split our study into three cases. 
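To illustrate the above results, it is instructive to check them directly on the standard products {\\bf B2}; the following is a routine computation, using the classical principal curvatures of these products. With respect to a suitable unit normal, the principal curvatures of $\\mathbb{S}^{m_1}(1\/\\sqrt 2)\\times\\mathbb{S}^{m_2}(1\/\\sqrt 2)\\subset\\mathbb{S}^{m+1}$, $m_1+m_2=m$, are $1$, with multiplicity $m_1$, and $-1$, with multiplicity $m_2$. Therefore\n$$\nf=\\frac{m_1-m_2}{m}\\,,\\qquad |A|^2=m\\,,\\qquad s=m(m-1)+m^2f^2-|A|^2=m^2(1+|H|^2)-2m,\n$$\nso both equations of \\eqref{eq: caract_bih_hipersurf_spheres} are satisfied; the immersion is proper biharmonic precisely when $m_1\\neq m_2$, while for $m_1=m_2$ it is minimal.\n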
In Case 1 we shall study the proper biharmonic hypersurfaces with respect to the number of their distinct principal curvatures, in Case 2 we shall study them with respect to $|A|^2$ and $|H|^2$, and in Case 3 the study will be done with respect to the sectional and Ricci curvatures of the hypersurface.\n\n\\subsection{Case 1}\n\nObviously, if $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ is an umbilical proper biharmonic hypersurface in $\\mathbb{S}^{m+1}$, then $\\varphi(M)$ is an open part of $\\mathbb{S}^{m}(1\/\\sqrt 2)$.\n\nWhen the hypersurface has at most two or exactly three distinct principal curvatures everywhere we obtain the following rigidity results.\n\n\\begin{theorem}[\\cite{BMO08}]\\label{th: hypersurf_2curv}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a hypersurface. Assume that $\\varphi$ is proper biharmonic with at most two distinct principal curvatures everywhere. Then $\\varphi$ is CMC and $\\varphi(M)$ is either an open part of $\\mathbb{S}^{m}(1\/\\sqrt 2)$, or an open part of $\\mathbb{S}^{m_1}(1\/\\sqrt 2)\\times \\mathbb{S}^{m_2}(1\/\\sqrt 2)$, $m_1+m_2=m$, $m_1\\neq m_2$. Moreover, if $M$ is complete, then either\n$\\varphi(M)=\\mathbb{S}^{m}(1\/\\sqrt 2)$ and $\\varphi$ is an embedding, or $\\varphi(M)=\\mathbb{S}^{m_1}(1\/\\sqrt 2)\\times \\mathbb{S}^{m_2}(1\/\\sqrt 2)$, $m_1+m_2=m$, $m_1\\neq m_2$ and $\\varphi$ is an embedding when $m_1\\geq 2$ and $m_2\\geq 2$.\n\\end{theorem}\n\n\\begin{theorem}[\\cite{BMO08}]\\label{teo:quasi-umbilicall-conformally-flat}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$, $m\\geq 3$, be a proper biharmonic hypersurface. 
The following statements are equivalent:\n\\begin{itemize}\n\\item[(i)] $\\varphi$ is quasi-umbilical,\n\\item[(ii)] $\\varphi$ is conformally flat,\n\\item[(iii)] $\\varphi(M)$ is an open part of $\\mathbb{S}^m(1\/\\sqrt 2)$ or of $\\mathbb{S}^{m-1}(1\/\\sqrt 2)\\times \\mathbb{S}^{1}(1\/\\sqrt 2)$.\n\\end{itemize}\n\\end{theorem}\n\nIt is well known that, if $m\\geq 4$, a hypersurface $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ is quasi-umbilical if and only if it is conformally flat. From Theorem~\\ref{teo:quasi-umbilicall-conformally-flat}\nwe see that under the biharmonicity hypothesis the equivalence remains true when $m=3$.\n\n\\begin{theorem}[\\cite{BMO10}]\\label{th: hypersurf_3curv}\nThere exist no compact CMC proper biharmonic hypersurfaces $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ with three distinct principal curvatures everywhere.\n\\end{theorem}\n\nIn particular, in the low dimensional cases, Theorem~\\ref{th: hypersurf_2curv}, Theorem~\\ref{th: hypersurf_3curv} and a result of S.~Chang (see \\cite{CH93}) imply the following.\n\n\\begin{theorem}[\\cite{CMO01,BMO10}]\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a proper biharmonic hypersurface.\n\\begin{itemize}\n\\item[(i)] If $m=2$, then $\\varphi(M)$ is an open part of $\\mathbb{S}^{2}(1\/\\sqrt 2)\\subset\\mathbb{S}^3$.\n\\item[(ii)] If $m=3$ and $M$ is compact, then $\\varphi$ is CMC and $\\varphi(M)=\\mathbb{S}^{3}(1\/\\sqrt 2)$ or $\\varphi(M)=\\mathbb{S}^{2}(1\/\\sqrt 2)\\times\\mathbb{S}^{1}(1\/\\sqrt 2)$.\n\\end{itemize}\n\\end{theorem}\n\nWe recall that an orientable hypersurface $\\varphi:M^m\\to\\mathbb{S}^{m+1}$ is\nsaid to be {\\it isoparametric} if it has constant principal curvatures or, equivalently, the number $\\ell$ of distinct principal curvatures $k_1 > k_2>\\cdots\n> k_\\ell$ is constant on $M$ and the $k_i$'s are constant. 
The distinct principal curvatures have constant multiplicities $m_1, \\ldots,m_\\ell$, $m = m_1 + m_2 + \\ldots + m_\\ell$.\n\nIn \\cite{IIU08}, T.~Ichiyama, J.I.~Inoguchi and H.~Urakawa classified the proper biharmonic isoparametric hypersurfaces in spheres.\n\n\\begin{theorem}[\\cite{IIU08}]\\label{teo:isoparametric}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be an orientable isoparametric hypersurface. If $\\varphi$ is proper biharmonic, then $\\varphi(M)$ is either an open part of $\\mathbb{S}^m(1\/\\sqrt2)$, or an open part of\n$\\mathbb{S}^{m_1}(1\/\\sqrt2)\\times\\mathbb{S}^{m_2}(1\/\\sqrt2)$, $m_1+m_2=m$,\n$m_1\\neq m_2$.\n\\end{theorem}\n\nAn orientable hypersurface $\\varphi:M^m\\to\\mathbb{S}^{m+1}$ is\nsaid to be a {\\it proper Dupin hypersurface} if the number $\\ell$ of distinct principal curvatures is constant on $M$ and each principal curvature function is constant along its corresponding principal directions.\n\n\\begin{theorem}\\label{th: Dupin_bih_CMC}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be an orientable proper Dupin hypersurface. If $\\varphi$ is proper biharmonic, then $\\varphi$ is CMC.\n\\end{theorem}\n\n\\begin{proof}\nAs $M$ is orientable, we fix $\\eta\\in C(NM)$ and denote $A=A_\\eta$ and $f=(\\trace A)\/m$. Suppose that $f$ is not constant. Then there exists an open subset $U\\subset M$ such that $\\grad f\\neq 0$ at every point of $U$. Since $\\varphi$ is proper biharmonic, from \\eqref{eq: caract_bih_hipersurf_spheres} we get that $-{mf}\/{2}$ is a principal curvature with principal direction $\\grad f$. Since the hypersurface is proper Dupin, by definition, $\\grad f(f)=0$, i.e. $\\grad f=0$ on $U$, and we come to a contradiction.\n\\end{proof}\n\n\\begin{corollary}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be an orientable proper Dupin hypersurface with $\\ell\\leq 3$. 
If $\\varphi$ is proper biharmonic, then $\\varphi(M)$ is either an open part of $\\mathbb{S}^m(1\/\\sqrt2)$, or an open part of\n$\\mathbb{S}^{m_1}(1\/\\sqrt2)\\times\\mathbb{S}^{m_2}(1\/\\sqrt2)$, $m_1+m_2=m$,\n$m_1\\neq m_2$.\n\\end{corollary}\n\n\\begin{proof}\nTaking into account Theorem~\\ref{th: hypersurf_2curv}, we only have to prove that there exist no proper biharmonic proper Dupin hypersurfaces with $\\ell=3$. Indeed, by Theorem~\\ref{th: Dupin_bih_CMC}, we conclude that $\\varphi$ is CMC. By a result in \\cite{BMO12}, $\\varphi$ is of type $1$ or of type $2$, in the sense of B.-Y.~Chen. If $\\varphi$ is of type $1$, we must have $\\ell=1$ and we get a contradiction. If $\\varphi$ is of type $2$, since $\\varphi$ is proper Dupin with $\\ell=3$, from Theorem~9.11 in \\cite{C96}, we get that $\\varphi$ is isoparametric. But, from\nTheorem~\\ref{teo:isoparametric}, proper biharmonic isoparametric hypersurfaces must have $\\ell\\leq 2$.\n\n\\end{proof}\n\n\\subsection{Case 2}\nThe simplest result is the following.\n\\begin{proposition}\\label{pro: a-compact}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a compact hypersurface. Assume that $\\varphi$ is proper biharmonic with nowhere zero mean curvature vector field and $|A|^2\\leq m$, or $|A|^2\\geq m$. Then $\\varphi$ is CMC and $|A|^2=m$.\n\\end{proposition}\n\\begin{proof}\nAs $H$ is nowhere zero, we can consider $\\eta=H\/|H|$ a global unit section in the normal bundle $NM$ of $M$ in $\\mathbb{S}^{m+1}$.\nThen, on $M$,\n$$\n\\Delta f=(m-|A|^2)f,\n$$\nwhere $f=(\\trace A)\/m=|H|$. Now, as $m-|A|^2$ does not change sign, from the maximum principle we get $f=$ constant and $|A|^2=m$.\n\\end{proof}\n\nIn fact, Proposition~\\ref{pro: a-compact} holds without the hypothesis ``$H$ nowhere zero''. In order to prove this we shall consider the cases $|A|^2\\geq m$ and $|A|^2\\leq m$, separately.\n\n\\begin{proposition}\\label{prop: |B|>m}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a compact hypersurface. 
Assume that $\\varphi$ is proper biharmonic and $|A|^2\\geq m$. Then $\\varphi$ is CMC and $|A|^2=m$.\n\\end{proposition}\n\\begin{proof}\nLocally,\n$$\n\\Delta f=(m-|A|^2)f,\n$$\nwhere $f=(\\trace A)\/m$, $f^2=|H|^2$,\nand therefore\n$$\n\\frac{1}{2}\\Delta f^2=(m-|A|^2)f^2-|\\grad f|^2\\leq 0.\n$$\nAs $f^2$, $|A|^2$ and $|\\grad f|^2$ are well defined on the whole $M$, the formula holds on $M$. From the maximum principle we get that $|H|$ is constant and $|A|^2=m$.\n\\end{proof}\n\nThe case $|A|^2\\leq m$ was solved by J.H.~Chen in \\cite{C93}. Here we include the proof for two reasons: first, the original one is in Chinese; second, the formalism used by J.H.~Chen was local, while ours is globally invariant. Moreover, the proof we present is slightly shorter.\n\n\\begin{theorem}[\\cite{C93}]\\label{th: jchen1}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a compact hypersurface. If $\\varphi$ is proper biharmonic and $|A|^2\\leq m$, then $\\varphi$ is CMC and $|A|^2=m$.\n\\end{theorem}\n\n\\begin{proof}\nWe may assume that $M$ is orientable, since, otherwise, we consider the double covering $\\tilde{M}$ of $M$. This is compact, connected and orientable, and in the given hypotheses $\\tilde{\\varphi}:\\tilde M\\to \\mathbb{S}^{m+1}$ is proper biharmonic and $|\\tilde A|^2\\leq m$. 
Moreover, $\\tilde{\\varphi}(\\tilde{M})=\\varphi(M)$.\n\nAs $M$ is orientable, we fix a unit global section $\\eta\\in C(NM)$ and denote $A=A_\\eta$ and $f=(\\trace A)\/m$.\nIn the following we shall prove that\n\\begin{eqnarray}\\label{eq: fund_ineq_chen1}\n&&\\frac{1}{2}\\Delta \\left(|\\grad f|^2+\\frac{m^2}{8}f^4+f^2\\right)+\\frac{1}{2}\\Div(|A|^2\\grad f^2)\\leq\\nonumber\\\\\n&&\\leq\\frac{8(m-1)}{m(m+8)}(|A|^2-m) |A|^2 f^2,\n\\end{eqnarray}\non $M$, and this will lead to the conclusion.\n\nFrom \\eqref{eq: caract_bih_hipersurf_spheres}(i) one easily gets\n\\begin{equation}\\label{eq: delta_f^2}\n\\frac{1}{2}\\Delta f^2=(m-|A|^2)f^2-|\\grad f|^2\n\\end{equation}\nand\n\\begin{equation}\\label{eq: delta_f^4}\n\\frac{1}{4}\\Delta f^4=(m-|A|^2)f^4-3f^2|\\grad f|^2.\n\\end{equation}\n\nFrom the Weitzenb\\\"{o}ck formula we have\n\\begin{equation}\\label{eq: Weitz_norm_grad}\n\\frac{1}{2}\\Delta |\\grad f|^2=-\\langle\\trace\\nabla^2\\grad f,\\grad f\\rangle-|\\nabla\\grad f|^2,\n\\end{equation}\nand, since\n$$\n\\trace \\nabla^2\\grad f=-\\grad(\\Delta f)+ \\ricci(\\grad f),\n$$\nwe obtain\n\\begin{equation}\\label{eq: cons_Weitz_norm_grad}\n\\frac{1}{2}\\Delta |\\grad f|^2=\\langle \\grad \\Delta f,\\grad f\\rangle-\\ricci(\\grad f,\\grad f)-|\\nabla\\grad f|^2.\n\\end{equation}\n\nEquations \\eqref{eq: caract_bih_hipersurf_spheres}(i) and \\eqref{eq: delta_f^2} imply\n\\begin{eqnarray}\\label{eq: grad_delta_f}\n\\langle \\grad \\Delta f,\\grad f\\rangle&=&(m-|A|^2)|\\grad f|^2-\\frac{1}{2}\\langle \\grad |A|^2, \\grad f^2\\rangle\\nonumber\\\\\n&=&(m-|A|^2)|\\grad f|^2-\\frac{1}{2}\\left(\\Div(|A|^2\\grad f^2)+|A|^2\\Delta f^2\\right)\\nonumber\\\\\n&=&m|\\grad f|^2-\\frac{1}{2}\\Div(|A|^2\\grad f^2)-|A|^2(m-|A|^2)f^2.\n\\end{eqnarray}\n\nFrom the Gauss equation of $M$ in $\\mathbb{S}^{m+1}$ we obtain\n\\begin{equation}\\label{eq:ricci-minsn}\n\\ricci(X,Y)=(m-1)\\langle X,Y\\rangle+\\langle A(X),Y\\rangle\\trace A-\\langle A(X), 
A(Y)\\rangle,\n\\end{equation}\nfor all $X, Y\\in C(TM)$, therefore, by using \\eqref{eq: caract_bih_hipersurf_spheres}(ii),\n\\begin{equation}\\label{eq: cons_Gauss}\n\\ricci(\\grad f,\\grad f)=\\left(m-1-\\frac{3m^2}{4}f^2\\right)|\\grad f|^2.\n\\end{equation}\n\nNow, by substituting \\eqref{eq: grad_delta_f} and \\eqref{eq: cons_Gauss} in \\eqref{eq: cons_Weitz_norm_grad} and using \\eqref{eq: delta_f^2} and \\eqref{eq: delta_f^4}, one obtains\n\\begin{eqnarray*}\n\\frac{1}{2}\\Delta |\\grad f|^2&=&\\left(1+\\frac{3m^2}{4}f^2\\right)|\\grad f|^2-\\frac{1}{2}\\Div(|A|^2\\grad f^2)\\\\\n&&-|A|^2(m-|A|^2)f^2-|\\nabla\\grad f|^2\\\\\n&=&-\\frac{1}{2}\\Delta f^2-\\frac{m^2}{16}\\Delta f^4-(m-|A|^2)\\left(|A|^2-\\frac{m^2}{4}f^2-1\\right)f^2\\\\\n&&-\\frac{1}{2}\\Div(|A|^2\\grad f^2)-|\\nabla\\grad f|^2.\n\\end{eqnarray*}\nHence\n\\begin{eqnarray}\\label{eq: eq_int_1}\n&-\\frac{1}{2}\\Delta \\left(|\\grad f|^2+\\frac{m^2}{8}f^4+f^2\\right)-\\frac{1}{2}\\Div(|A|^2\\grad f^2)=\\nonumber\\\\\n&=(m-|A|^2)\\left(|A|^2-\\frac{m^2}{4}f^2-1\\right)f^2+|\\nabla\\grad f|^2.\n\\end{eqnarray}\n\nWe shall now verify that\n\\begin{equation}\\label{eq: fund_ineq1}\n(m-|A|^2)\\left(|A|^2-\\frac{m^2}{4}f^2-1\\right)\\geq (m-|A|^2)\\left(\\frac{9}{m+8}|A|^2-1\\right),\n\\end{equation}\nat every point of $M$.\nLet us now fix a point $p\\in M$. We have two cases.\\\\\n{\\it Case 1.} If $\\grad_p f\\neq 0$, then $e_1=({\\grad_p f})\/{|\\grad_p f|}$ is a principal direction for $A$ with principal curvature $\\lambda_1=-m f(p)\/2$. 
By considering $e_k\\in T_pM$, $k=2,\\ldots,m$, such that $\\{e_i\\}_{i=1}^m$ is an orthonormal basis in $T_pM$ and $A(e_k)=\\lambda_k e_k$, we get at $p$\n\\begin{eqnarray}\\label{eq: |A|}\n|A|^2&=&\\sum_{i=1}^m |A(e_i)|^2=|A(e_1)|^2+\\sum_{k=2}^m |A(e_k)|^2=\\frac{m^2}{4}f^2+\\sum_{k=2}^m \\lambda_k^2\\nonumber\\\\\n&\\geq& \\frac{m^2}{4}f^2+\\frac{1}{m-1}\\left(\\sum_{k=2}^m \\lambda_k\\right)^2=\\frac{m^2(m+8)}{4(m-1)}f^2,\n\\end{eqnarray}\nthus inequality \\eqref{eq: fund_ineq1} holds at $p$.\n\\\\\n{\\it Case 2.} If $\\grad_p f = 0$, then either there exists an open set $U\\subset M$, $p\\in U$, such that $\\grad f_{\/U}=0$, or $p$ is a limit point for the set $V=\\{q\\in M: \\grad_q f\\neq 0\\}$.\\\\\nIn the first situation, we get that $f$ is constant on $U$, and from a unique continuation result for biharmonic maps (see \\cite{O03}), this constant must be different from zero. Equation \\eqref{eq: caract_bih_hipersurf_spheres}(i) implies $|A|^2=m$ on $U$, and therefore inequality \\eqref{eq: fund_ineq1} holds at $p$.\\\\\nIn the second situation, by taking into account {\\it Case 1} and passing to the limit, we conclude that inequality \\eqref{eq: fund_ineq1} holds at $p$.\n\nIn order to evaluate the term $|\\nabla \\grad f|^2$ of equation \\eqref{eq: eq_int_1}, let us consider a local orthonormal frame field $\\{E_i\\}_{i=1}^m$ on $M$. 
Then, also using \\eqref{eq: caract_bih_hipersurf_spheres}(i),\n\\begin{eqnarray}\\label{eq: fund_ineq2}\n|\\nabla \\grad f|^2&=&\\sum_{i,j=1}^m\\langle \\nabla_{E_i}\\grad f,E_j\\rangle^2\\nonumber\\geq\\sum_{i=1}^m\\langle \\nabla_{E_i}\\grad f,E_i\\rangle^2\\\\\n&\\geq&\\frac{1}{m}\\left(\\sum_{i=1}^m\\langle \\nabla_{E_i}\\grad f,E_i\\rangle\\right)^2= \\frac{1}{m}(\\Delta f)^2\\nonumber\\\\\n&=&\\frac{1}{m}(m-|A|^2)^2 f^2.\n\\end{eqnarray}\nIn fact, \\eqref{eq: fund_ineq2} is a global formula.\n\nNow, using \\eqref{eq: fund_ineq1} and \\eqref{eq: fund_ineq2} in \\eqref{eq: eq_int_1}, we obtain \\eqref{eq: fund_ineq_chen1}, and by integrating it, since $|A|^2\\leq m$, we get\n\\begin{equation}\\label{eq: int3}\n(|A|^2-m)|A|^2 f^2=0\n\\end{equation}\non $M$. Suppose that there exists $p\\in M$ such that $|A(p)|^2\\neq m$. Then there exists an open set $U\\subset M$, $p\\in U$, such that $|A|^2_{\/U}\\neq m$. Equation \\eqref{eq: int3} implies that $|A|^2 f^2_{\/U}=0$.\nNow, if there were a $q\\in U$ such that $f(q)\\neq 0$, then $A(q)$ would be zero and, therefore, $f(q)=0$.\n Thus $f_{\/U}=0$ and, since $M$ is proper biharmonic, this is a contradiction. Thus $|A|^2=m$ on $M$ and $\\Delta f=0$, i.e. $f$ is constant and we conclude.\n\\end{proof}\n\n\n\\begin{remark}\nIt is worth pointing out that the statement of Theorem~\\ref{th: jchen1} is similar in the minimal case: if $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ is a minimal hypersurface with $|A|^2\\leq m$, then either $|A|=0$ or $|A|^2=m$ (see \\cite{S68}).\nBy way of contrast, an analog of Proposition~\\ref{prop: |B|>m} is not true in the minimal case. In fact, it was proved in \\cite{PT83} that if a minimal hypersurface $\\varphi:M^3\\to \\mathbb{S}^{4}$ has $|A|^2>3$, then\n$|A|^2\\geq 6$.\n\\end{remark}\nObviously, from Proposition~\\ref{prop: |B|>m} and Theorem~\\ref{th: jchen1} we get the following result.\n\n\\begin{proposition}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a compact hypersurface. 
If $\\varphi$ is proper biharmonic and $|A|^2$ is constant, then $\\varphi$ is CMC and $|A|^2=m$.\n\\end{proposition}\n\nThe next result is a direct consequence of Proposition~\\ref{prop: |B|>m}.\n\\begin{proposition}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a compact hypersurface. If $\\varphi$ is proper biharmonic and $|H|^2\\geq {4(m-1)}\/({m(m+8)})$, then $\\varphi$ is CMC.\nMoreover,\n\\begin{itemize}\n\\item[(i)] if $m\\in\\{2,3\\}$, then $\\varphi(M)$ is a small hypersphere $\\mathbb{S}^m(1\/\\sqrt 2)$;\n\\item[(ii)] if $m=4$, then $\\varphi(M)$ is a small hypersphere $\\mathbb{S}^4(1\/\\sqrt 2)$ or a standard product of spheres $\\mathbb{S}^3(1\/\\sqrt 2)\\times \\mathbb{S}^1(1\/\\sqrt 2)$.\n\\end{itemize}\n\\end{proposition}\n\\begin{proof}\nTaking into account \\eqref{eq: |A|}, the hypotheses imply $|A|^2\\geq m$.\n\\end{proof}\n\n\nFor the non-compact case we obtain the following.\n\\begin{proposition}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$, $m>2$, be a non-compact hypersurface. Assume that $M$ is complete and has non-negative Ricci curvature. If $\\varphi$ is proper biharmonic, $|A|^2$ is constant and $|A|^2\\geq m$, then $\\varphi$ is CMC and $|A|^2=m$. In this case $|H|^2\\leq({(m-2)}\/{m})^2$.\n\\end{proposition}\n\\begin{proof}\nWe may assume that $M$ is orientable (otherwise, we consider the double covering $\\tilde{M}$ of $M$, which is non-compact, connected, complete, orientable, proper biharmonic and with non-negative Ricci curvature; the final result will remain unchanged). 
We consider $\\eta$ to be a global unit section in the normal bundle $NM$ of $M$ in $\\mathbb{S}^{m+1}$.\nThen, on $M$, we have\n\\begin{equation}\\label{eq: d1}\n\\Delta f=(m-|A|^2)f,\n\\end{equation}\nwhere $f=(\\trace A)\/m$,\nand\n\\begin{equation}\\label{eq: d2}\n\\frac{1}{2}\\Delta f^2=(m-|A|^2)f^2-|\\grad f|^2\\leq 0.\n\\end{equation}\nOn the other hand, as $f^2=|H|^2\\leq |A|^2\/m$ is bounded, by the Omori-Yau Maximum Principle (see, for example, \\cite{Y75}), there exists a sequence of points $\\{p_k\\}_{k\\in \\mathbb{N}}\\subset M$ such that\n$$\n\\Delta f^2(p_k)>-\\frac{1}{k}\\qquad\\textrm{and}\\qquad \\lim_{k\\to\\infty} f^2(p_k)=\\sup_M f^2.\n$$\nIt follows that $\\displaystyle{\\lim_{k\\to\\infty}}\\Delta f^2(p_k)=0$, so $\\displaystyle{\\lim_{k\\to\\infty}((m-|A|^2)f^2(p_k))}=0$.\n\nAs $\\displaystyle{\\lim_{k\\to\\infty} f^2(p_k)=\\sup_M f^2>0}$, we get $|A|^2=m$. But from \\eqref{eq: d1} follows that $f$ is a harmonic function on $M$. As $f$ is also a bounded function on $M$, by a result of Yau (see \\cite{Y75}), we deduce that $f=$ constant.\n\\end{proof}\n\n\\begin{corollary}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a non-compact hypersurface. Assume that $M$ is complete and has non-negative Ricci curvature. If $\\varphi$ is proper biharmonic, $|A|^2$ is constant and $|H|^2\\geq {4(m-1)}\/({m(m+8)})$, then $\\varphi$ is CMC and $|A|^2=m$. In this case, $m\\geq 4$ and $|H|^2\\leq(({m-2})\/{m})^2$.\n\\end{corollary}\n\n\\begin{proposition}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a non-compact hypersurface. Assume that $M$ is complete and has non-negative Ricci curvature. If $\\varphi$ is proper biharmonic, $|A|^2$ is constant, $|A|^2\\leq m$ and $H$ is nowhere zero, then $\\varphi$ is CMC and $|A|^2=m$.\n\\begin{proof}\nAs $H$ is nowhere zero we consider $\\eta=H\/|H|$ a global unit section in the normal bundle. Then, on $M$,\n\\begin{equation}\n\\Delta f=(m-|A|^2)f,\n\\end{equation}\nwhere $f=|H|>0$. 
As $m-|A|^2\\geq 0$, by a classical result (see, for example, \\cite[page~2]{L06}) we conclude that $m=|A|^2$ and therefore $f$ is constant.\n\\end{proof}\n\\end{proposition}\n\n\\subsection{Case 3}\nWe first present another result of J.H.~Chen in \\cite{C93}. In order to do that, we shall need the following lemma.\n\\begin{lemma}\\label{lem: nablaA}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be an orientable hypersurface, $\\eta$ a unit section in the normal bundle, and put $A_\\eta=A$. Then\n\\begin{itemize}\n\\item[(i)] $(\\nabla A)(\\cdot,\\cdot)$ is symmetric,\n\\item[(ii)] $\\langle(\\nabla A)(\\cdot,\\cdot),\\cdot\\rangle$ is totally symmetric,\n\\item[(iii)] $\\trace (\\nabla A)(\\cdot,\\cdot)=m\\grad f$.\n\\end{itemize}\n\\end{lemma}\n\n\\begin{theorem}[\\cite{C93}]\\label{th: jchen2}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a compact hypersurface. If $\\varphi$ is proper biharmonic, $M$ has non-negative sectional curvature and $m\\leq 10$, then $\\varphi$ is CMC and $\\varphi(M)$ is either $\\mathbb{S}^{m}(1\/\\sqrt 2)$, or $\\mathbb{S}^{m_1}(1\/\\sqrt 2)\\times \\mathbb{S}^{m_2}(1\/\\sqrt 2)$, $m_1+m_2=m$, $m_1\\neq m_2$.\n\\end{theorem}\n\n\\begin{proof}\nFor the same reasons as in Theorem~\\ref{th: jchen1}, we include a detailed proof of this result. We can assume that $M$ is orientable (otherwise, as in the proof of Theorem~\\ref{th: jchen1}, we work with the oriented double covering of $M$). 
Fix a unit section $\\eta\\in C(NM)$ and put $A=A_\\eta$ and $f=(\\trace A)\/m$.\n\nWe intend to prove that the following inequality holds on $M$,\n\\begin{equation}\\label{eq: fund_ineq_chen2}\n\\frac{1}{2}\\Delta\\left(|A|^2+\\frac{m^2}{2}f^2\\right)\\leq \\frac{3m^2(m-10)}{4(m-1)}|\\grad f|^2-\\dfrac{1}{2}\\sum_{i,j=1}^m (\\lambda_i-\\lambda_j)^2 R_{ijij}.\n\\end{equation}\n\nFrom the Weitzenb\\\"ock formula we have\n\\begin{equation}\\label{eq: Weitz_norm_A}\n\\frac{1}{2}\\Delta |A|^2=\\langle \\Delta A,A\\rangle-|\\nabla A|^2.\n\\end{equation}\nLet us first verify that\n\\begin{eqnarray}\\label{eq: DeltaA_aux}\n\\trace(\\nabla^2 A)(X,\\cdot,\\cdot)=\\nabla_X (\\trace\\nabla A),\n\\end{eqnarray}\nfor all $X\\in C(TM)$. Fix $p\\in M$ and let $\\{E_i\\}_{i=1}^m$ be a local orthonormal frame field, geodesic at $p$. Then, also using Lemma~\\ref{lem: nablaA}(i), we get at $p$,\n\\begin{eqnarray*}\n\\trace(\\nabla^2 A)(X,\\cdot,\\cdot)&=&\\sum_{i=1}^m (\\nabla^2 A)(X,E_i,E_i)=\\sum_{i=1}^m (\\nabla_X \\nabla A)(E_i,E_i)\\nonumber\\\\\n&=&\\sum_{i=1}^m \\{\\nabla_X \\nabla A(E_i,E_i)-2\\nabla A(\\nabla_X E_i,E_i)\\}=\\sum_{i=1}^m \\nabla_X \\nabla A(E_i,E_i)\\nonumber\\\\\n&=&\\nabla_X (\\trace\\nabla A).\n\\end{eqnarray*}\nUsing Lemma~\\ref{lem: nablaA}, the Ricci commutation formula (see, for example, \\cite{B}) and \\eqref{eq: DeltaA_aux}, we obtain\n\\begin{eqnarray}\\label{eq: DeltaA}\n\\Delta A(X)&=&-(\\trace\\nabla^2 A) (X)=-\\trace(\\nabla^2 A)(\\cdot,\\cdot,X)=-\\trace(\\nabla^2 A)(\\cdot,X,\\cdot)\\nonumber\\\\\n&=&-\\trace(\\nabla^2 A)(X,\\cdot,\\cdot)- \\trace(RA)(\\cdot,X,\\cdot)\\nonumber\\\\\n&=&-\\nabla_X (\\trace\\nabla A)-\\trace (R A)(\\cdot,X,\\cdot)\\nonumber\\\\\n&=& -m\\nabla_X \\grad f-\\trace (R A)(\\cdot,X,\\cdot),\n\\end{eqnarray}\nwhere\n$$\nRA(X,Y,Z)=R(X,Y)A(Z)-A(R(X,Y)Z),\\quad \\forall\\,X,Y,Z\\in C(TM).\n$$\n\nAlso, using \\eqref{eq: caract_bih_hipersurf_spheres}(ii) and Lemma~\\ref{lem: nablaA}, we obtain\n\\begin{eqnarray}\\label{eq: 
partial}\n\\trace\\langle A(\\nabla_{\\cdot}\\grad f),\\cdot\\rangle\n&=&\\trace \\langle \\nabla_{\\cdot}A(\\grad f)-(\\nabla A)(\\cdot,\\grad f),\\cdot\\rangle\\nonumber\\\\\n&=&-\\dfrac{m}{4}\\trace \\langle \\nabla_{\\cdot} \\grad f^2,\\cdot\\rangle-\\langle \\trace(\\nabla A),\\grad f\\rangle\\nonumber\\\\\n&=&\\dfrac{m}{4}\\Delta f^2-m|\\grad f|^2.\n\\end{eqnarray}\n\nUsing \\eqref{eq: DeltaA} and \\eqref{eq: partial}, we get\n\\begin{eqnarray}\\label{eq: DeltaA_A}\n\\langle \\Delta A,A\\rangle&=&\\trace\\langle \\Delta A(\\cdot),A(\\cdot)\\rangle\\nonumber\\\\\n&=&-m\\trace\\langle \\nabla_{\\cdot}\\grad f,A(\\cdot)\\rangle+\\langle T, A\\rangle\\nonumber\\\\\n&=&-m\\trace\\langle A(\\nabla_{\\cdot}\\grad f),\\cdot\\rangle+\\langle T, A\\rangle\\nonumber\\\\\n&=&m^2|\\grad f|^2-\\dfrac{m^2}{4}\\Delta f^2+\\langle T, A\\rangle,\n\\end{eqnarray}\nwhere $T(X)=-\\trace (R A)(\\cdot,X,\\cdot)$, $X\\in C(TM)$.\n\nIn the following we shall verify that\n\\begin{equation}\\label{eq: estim_nablaA}\n|\\nabla A|^2\\geq\\dfrac{m^2(m+26)}{4(m-1)}|\\grad f|^2,\n\\end{equation}\nat every point of $M$. 
Now, let us fix a point $p\\in M$.\n\nIf $\\grad_p f=0$, then \\eqref{eq: estim_nablaA} obviously holds at $p$.\n\nIf $\\grad_p f\\neq 0$, then on a neighborhood $U\\subset M$ of $p$ we can consider an orthonormal frame field $E_1=({\\grad f})\/{|\\grad f|}$, $E_2$,\\ldots, $E_m$, where $E_k(f)=0$, for all $k=2,\\ldots, m$.\nUsing \\eqref{eq: caract_bih_hipersurf_spheres}(ii), we obtain on $U$\n\\begin{eqnarray}\\label{eq: NA1}\n\\langle (\\nabla A)(E_1,E_1),E_1\\rangle&=&\\frac{1}{|\\grad f|^3}(\\langle \\nabla_{\\grad f}A(\\grad f),\\grad f\\rangle\\nonumber\\\\&&\n-\\langle A(\\nabla_{\\grad f}\\grad f),\\grad f\\rangle)\\nonumber\\\\\n&=&-\\frac{m}{2}|\\grad f|.\n\\end{eqnarray}\nFrom here, using Lemma~\\ref{lem: nablaA}, we also have on $U$\n\\begin{eqnarray}\\label{eq: NA2}\n\\sum_{k=2}^m\\langle (\\nabla A)(E_k,E_k),E_1\\rangle&=&\\sum_{i=1}^m\\langle (\\nabla A)(E_i,E_i),E_1\\rangle-\\langle (\\nabla A)(E_1,E_1),E_1\\rangle\\nonumber\\\\\n&=&\\langle \\trace\\nabla A,E_1\\rangle+\\frac{m}{2}|\\grad f|=\\frac{3m}{2}|\\grad f|.\n\\end{eqnarray}\nUsing \\eqref{eq: NA1} and \\eqref{eq: NA2}, we have on $U$\n\\begin{eqnarray}\n|\\nabla A|^2&=&\\sum_{i,j=1}^m|(\\nabla A)(E_i,E_j)|^2 =\\sum_{i,j,h=1}^m\\langle(\\nabla A)(E_i,E_j),E_h\\rangle^2\\nonumber\\\\\n&\\geq& \\langle(\\nabla A)(E_1,E_1),E_1\\rangle^2+ 3\\sum_{k=2}^m\\langle(\\nabla A)(E_k,E_k),E_1\\rangle^2\\nonumber\\\\\n&\\geq& \\langle(\\nabla A)(E_1,E_1),E_1\\rangle^2+ \\frac{3}{m-1}\\left(\\sum_{k=2}^m\\langle(\\nabla A)(E_k,E_k),E_1\\rangle\\right)^2\\nonumber\\\\\n&=&\\dfrac{m^2(m+26)}{4(m-1)}|\\grad f|^2,\n\\end{eqnarray}\nthus \\eqref{eq: estim_nablaA} is verified, and \\eqref{eq: Weitz_norm_A} implies\n\\begin{equation}\\label{eq: Delta_intermed}\n\\frac{1}{2}\\Delta\\left(|A|^2+\\frac{m^2}{2} f^2\\right)\\leq \\frac{3m^2(m-10)}{4(m-1)}|\\grad f|^2+\\langle T,A\\rangle.\n\\end{equation}\n\nFix $p\\in M$ and consider $\\{e_i\\}_{i=1}^m$ to be an orthonormal basis of $T_pM$, such that 
$A(e_i)=\\lambda_i e_i$. Then, at $p$, we get\n\\begin{eqnarray*}\\label{eq: T_A}\n\\langle T, A\\rangle=-\\dfrac{1}{2}\\sum_{i,j=1}^m (\\lambda_i-\\lambda_j)^2 R_{ijij},\n\\end{eqnarray*}\nand then \\eqref{eq: Delta_intermed} becomes \\eqref{eq: fund_ineq_chen2}.\n\nNow, since $m\\leq 10$ and $M$ has non-negative sectional curvature, we obtain\n$$\n\\Delta\\left(|A|^2+\\frac{m^2}{2}|H|^2\\right)\\leq 0\n$$\non $M$. As $M$ is compact, we have\n$$\n\\Delta\\left(|A|^2+\\frac{m^2}{2}|H|^2\\right)= 0\n$$\non $M$, which implies\n\\begin{equation}\\label{eq:lambdarijij}\n(\\lambda_i-\\lambda_j)^2 R_{ijij}=0\n\\end{equation}\n on $M$. Fix $p\\in M$.\nFrom the Gauss equation for $\\varphi$, $R_{ijij}=1+\\lambda_i\\lambda_j$, for all $i\\neq j$, and from\n\\eqref{eq:lambdarijij} we obtain\n$$\n(\\lambda_i-\\lambda_j) (1+\\lambda_i\\lambda_j)=0,\\quad i\\neq j.\n$$\nLet us now fix $\\lambda_1$. If there exists another principal curvature $\\lambda_j\\neq \\lambda_1$, $j>1$, then from the latter relation we get that $\\lambda_1\\neq 0$ and $\\lambda_j=-1\/\\lambda_1$.\nThus $\\varphi$ has at most two distinct principal curvatures at $p$. Since $p$ was arbitrarily fixed, we obtain that $\\varphi$ has at most two distinct principal curvatures everywhere and we conclude by using Theorem~\\ref{th: hypersurf_2curv}.\n\\end{proof}\n\n\\begin{proposition}\\label{pro:nonnegricciesistepxpnozero}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ , $m\\geq 3$, be a hypersurface. Assume that $M$ has non-negative sectional curvature and for all $p\\in M$ there exists $X_p\\in T_pM$, $|X_p|=1$, such that $\\ricci(X_p,X_p)=0$. If $\\varphi$ is proper biharmonic, then $\\varphi(M)$ is an open part of $\\mathbb{S}^{m-1}(1\/\\sqrt 2)\\times\\mathbb{S}^1(1\/\\sqrt 2)$.\n\\end{proposition}\n\n\\begin{proof}\nLet $p\\in M$ be an arbitrarily fixed point, and $\\{e_i\\}_{i=1}^m$ an orthonormal basis in $T_pM$ such that $A(e_i)=\\lambda_i e_i$. 
For $i\\neq j$, using \\eqref{eq:ricci-minsn}, we have that $\\ricci(e_i,e_j)=0$. Therefore, $\\{e_i\\}_{i=1}^m$ is also a basis of eigenvectors for the Ricci curvature. Now, if $\\ricci(e_i,e_i)>0$ for all $i=1,\\ldots m$, then $\\ricci(X,X)>0$ for all $X\\in T_pM\\setminus\\{0\\}$. Thus there must exist $i_0$ such that $\\ricci(e_{i_0},e_{i_0})=0$. Assume that $\\ricci(e_{1},e_{1})=0$. From $0=\\ricci(e_{1},e_{1})=\\sum_{j=2}^m R_{1j1j}=\\sum_{j=2}^m K_{1j}$ and since $K_{1j}\\geq 0$ for all\n$j\\geq 2$, we conclude that $K_{1j}=0$ for all $j\\geq 2$, that is $1+\\lambda_1 \\lambda _j=0$ for all $j\\geq 2$. The latter implies that $\\lambda_1\\neq 0$ and $\\lambda_j=-1\/\\lambda_1$ for all $j\\geq 2$. Thus $M$ has two distinct principal curvatures everywhere, one of them of multiplicity one.\n\\end{proof}\n\n\n\\begin{remark}\nIf $\\varphi:M^m\\to \\mathbb{S}^{m+1}$, $m\\geq 3$, is a compact hypersurface, then the conclusion of\nProposition~\\ref{pro:nonnegricciesistepxpnozero} holds replacing the hypothesis on the Ricci curvature with the requirement that the first fundamental group is infinite. In fact, the full classification of compact hypersurfaces\nin $\\mathbb{S}^{m+1}$ with non-negative sectional curvature and infinite first fundamental group was given in \\cite{C03}.\n\\end{remark}\n\n\\section{PMC biharmonic immersions in $\\mathbb{S}^n$}\n\nIn this section we list some of the most important known results on PMC biharmonic submanifolds in spheres and we prove some new ones. In order to do that we first need the following lemma.\n\n\\begin{lemma}\\label{lem: AH_B}\nLet $\\varphi:M^m\\to N^n$ be an immersion. Then $|A_H|^2\\leq |H|^2 |B|^2$ on $M$. Moreover, $|A_H|^2= |H|^2 |B|^2$ at $p\\in M$ if and only if either $H(p)=0$, or the first normal of $\\varphi$ at $p$ is spanned by $H(p)$.\n\\end{lemma}\n\n\\begin{proof}\nLet $p\\in M$. 
If $|H(p)|=0$, then the conclusion is obvious.\nConsider now the case when $|H(p)|\neq0$, let $\eta_p=H(p)\/|H(p)|\in N_pM$ and let $\{e_i\}_{i=1}^m$ be an orthonormal basis in $T_pM$. Then, at $p$,\n\begin{eqnarray*}\n|A_H|^2&=&\sum_{i,j=1}^m\langle A_H(e_i),e_j\rangle^2=\sum_{i,j=1}^m\langle B(e_i,e_j),H\rangle^2=|H|^2\sum_{i,j=1}^m\langle B(e_i,e_j),\eta_p\rangle^2\\\n&\leq& |H|^2 |B|^2.\n\end{eqnarray*}\nIn this case equality holds if and only if $\displaystyle{\sum_{i,j=1}^m\langle B(e_i,e_j),\eta_p\rangle^2=|B|^2},$\ni.e.\n$$\n\langle B(e_i,e_j),\xi_p\rangle =0,\quad \forall\,\xi_p\in N_pM\,\text{ with}\,\, \xi_p\perp H(p).\n$$\nThis is equivalent to the first normal at $p$ being spanned by $H(p)$, and we conclude.\n\end{proof}\n\nUsing the above lemma we can prove the following lower bound for the norm of the second fundamental form.\n\n\begin{proposition}\nLet $\varphi:M^m\to \mathbb{S}^n$ be a PMC proper biharmonic immersion. Then $m\leq |B|^2$ and equality holds if and only if $\varphi$ induces a CMC proper biharmonic immersion of $M$ into a totally geodesic sphere $\mathbb{S}^{m+1}\subset \mathbb{S}^n$.\n\end{proposition}\n\begin{proof}\nBy Corollary~\ref{th: caract_bih_pmc} we have $|A_H|^2=m|H|^2$ and, by using Lemma~\ref{lem: AH_B}, we obtain $m\leq|B|^2$.\n\nSince $H$ is parallel and nowhere zero, equality holds if and only if the first normal is spanned by $H$, and we can apply the codimension reduction result of J.~Erbacher (\cite{E71}) to obtain the existence of a totally geodesic sphere $\mathbb{S}^{m+1}\subset \mathbb{S}^n$, such that $\varphi$ is an immersion of $M$ into $\mathbb{S}^{m+1}$. Since $\varphi:M^m\to \mathbb{S}^n$ is PMC proper biharmonic, the restriction $M^m\to \mathbb{S}^{m+1}$ is CMC proper biharmonic.\n\end{proof}\n\n\n\begin{remark}\n\begin{itemize}\n\item[(i)] Let $\varphi=\imath\circ\phi:M\to \mathbb{S}^n$ be a proper biharmonic immersion of class {\bf B3}. 
Then $m\\leq|B|^2$ and equality holds if and only if the induced $\\phi$ is totally geodesic.\n\n\\item[(ii)] Let $\\varphi=\\imath\\circ(\\phi_1\\times\\phi_2): M_1\\times M_2\\to \\mathbb{S}^n$ be a proper biharmonic immersion of class {\\bf B4}. Then $m\\leq|B|^2$ and equality holds if and only if both $\\phi_1$ and $\\phi_2$ are totally geodesic.\n\\end{itemize}\n\\end{remark}\n\nThe above remark suggests to look for PMC proper biharmonic immersions with $|H|=1$ and\n$|B|^2=m$.\n\n\\begin{corollary}\nLet $\\varphi:M^m\\to\\mathbb{S}^n$ be a PMC proper biharmonic immersion. Then $|H|=1$ and $|B|^2=m$ if and only if $\\varphi(M)$ is an open part of $\\mathbb{S}^{m}(1\/\\sqrt 2)\\subset\\mathbb{S}^{m+1}\\subset\\mathbb{S}^n$.\n\\end{corollary}\n\nThe case when $M$ is a surface is more rigid. Using the classification of PMC surfaces in $\\mathbb{S}^{n}$ given by S.-T.~Yau \\cite{Y74}, and \\cite[Corollary~5.5]{BMO08}, we obtain the following result.\n\n\\begin{theorem}[\\cite{BMO08}]\\label{th: bih_PMC_surf}\nLet $\\varphi:M^2\\to\\mathbb{S}^n$ be a PMC proper biharmonic surface. Then $\\varphi$ induces a minimal immersion of $M$ into a small hypersphere $\\mathbb{S}^{n-1}(1\/\\sqrt{2})\\subset\\mathbb{S}^n$.\n\\end{theorem}\n\n\\begin{remark}\nIf $n=4$ in Theorem~\\ref{th: bih_PMC_surf}, then the same conclusion holds under the weakened assumption that the surface is CMC as it was shown in \\cite{BO09}.\n\\end{remark}\nIn the higher dimensional case we have the following bounds for the value of the mean curvature of a\nPMC proper biharmonic immersion.\n\n\\begin{theorem}[\\cite{BO12}]\\label{th: pmc1}\nLet $\\varphi:M^m\\to\\mathbb{S}^n$ be a PMC proper biharmonic immersion. 
Assume that $m>2$ and $|H|\\in (0,1)$.\nThen $|H|\\in (0,({m-2})\/{m}]$, and $|H|=({m-2})\/{m}$ if and only\nif locally $\\varphi(M)$ is an open part of a standard product\n$$\nM_1\\times\\mathbb{S}^1(1\/\\sqrt{2})\\subset\\mathbb{S}^n,\n$$\nwhere $M_1$ is a minimal embedded submanifold of $\\mathbb{S}^{n-2}(1\/\\sqrt{2})$. Moreover, if $M$ is\ncomplete, then the above decomposition of $\\varphi(M)$ holds globally, where $M_1$ is a complete minimal submanifold of $\\mathbb{S}^{n-2}(1\/\\sqrt{2})$.\n\\end{theorem}\n\n\\begin{remark}\nThe same result of Theorem~\\ref{th: pmc1} was proved, independently, in \\cite{WW12}.\n\\end{remark}\nIf we assume that $M$ is compact and $|B|$ is bounded we obtain the following theorem.\n\n\\begin{theorem}\\label{th: pmc-santos}\nLet $\\varphi:M^m\\to\\mathbb{S}^{m+d}$ be a compact PMC proper biharmonic immersion with $m\\geq 2$, $d\\geq 2$ and\n$$\nm<|B|^2\\leq m \\frac{d-1}{2d-3}\\left(1+\\frac{3d-4}{d-1}|H|^2-\\frac{m-2}{\\sqrt{m-1}}|H| \\sqrt{1-|H|^2}\\right).\n$$\n\\begin{itemize}\n\\item[(i)] If $m=2$, then $|H|=1$, and either $d=2$, $|B|^2=6$, $\\varphi(M^2)=\\mathbb{S}^{1}(1\/{2})\\times\\mathbb{S}^{1}(1\/{2})\\subset\\mathbb{S}^{3}(1\/\\sqrt{2})$ or $d=3$, $|B|^2=14\/3$, $\\varphi(M^2)$ is the Veronese minimal surface in $\\mathbb{S}^{3}(1\/\\sqrt{2})$.\n\n\\item[(ii)] If $m>2$, then $|H|=1$, $d=2$, $|B|^2=3m$ and\n$$\n\\varphi(M^m)=\\mathbb{S}^{m_1}\\left(\\sqrt{{m_1}\/{(2m)}}\\right)\\times \\mathbb{S}^{m_2}\\left(\\sqrt{{m_2}\/{(2m)}}\\right)\\subset \\mathbb{S}^{m+1}(1\/\\sqrt{2}),\n$$\nwhere $m_1+m_2=m$, $m_1\\geq 1$ and $m_2\\geq 1$.\n\\end{itemize}\n\\end{theorem}\n\\begin{proof}\nThe result follows from the classification of compact PMC immersions with bounded $|B|^2$ given in\nTheorem~1.6 of \\cite{S94}.\n\\end{proof}\n\n\n\n\\begin{theorem}[\\cite{BO12}]\\label{th: pmc2}\nLet $\\varphi:M^m\\to\\mathbb{S}^n$ be a PMC proper biharmonic immersion with $\\nabla A_H=0$. 
Assume that $|H|\\in (0,({m-2})\/{m})$.\nThen, $m>4$ and, locally,\n$$\n\\varphi(M)=M^{m_1}_1\\times M^{m_2}_2\n\\subset\\mathbb{S}^{n_1}(1\/\\sqrt{2})\\times\\mathbb{S}^{n_2}(1\/\\sqrt{2})\\subset\\mathbb{S}^n,\n$$\nwhere $M_i$ is a minimal embedded submanifold of $\\mathbb{S}^{n_i}(1\/\\sqrt{2})$, $m_i\\geq 2$,\n$i=1,2$, $m_1+m_2=m$, $m_1\\neq m_2$, $n_1+n_2=n-1$. In this case $|H|={|m_1-m_2|}\/{m}$.\nMoreover, if $M$ is complete, then the above decomposition of $\\varphi(M)$ holds globally, where $M_i$ is a complete minimal submanifold of $\\mathbb{S}^{n_i}(1\/\\sqrt{2})$, $i=1,2$.\n\n\\end{theorem}\n\n\\begin{corollary}\nLet $\\varphi:M^m\\to\\mathbb{S}^n$, $m\\in\\{3,4\\}$, be a PMC proper biharmonic immersion with $\\nabla A_H=0$. Then $|H|\\in \\{({m-2})\/{m},1\\}$. Moreover, if $|H|=({m-2})\/{m}$, then locally\n$\\varphi(M)$ is an open part of a standard product\n$$\nM_1\\times\\mathbb{S}^1(1\/\\sqrt{2})\\subset\\mathbb{S}^n,\n$$\nwhere $M_1$ is a minimal embedded submanifold of $\\mathbb{S}^{n-2}(1\/\\sqrt{2})$,\nand if $|H|=1$, then $\\varphi$ induces a minimal immersion of $M$ into $\\mathbb{S}^{n-1}(1\/\\sqrt 2)$.\n\\end{corollary}\n\nWe should note that there exist examples of proper biharmonic submanifolds of $\\mathbb{S}^{5}$ and\n$\\mathbb{S}^{7}$ which are not PMC but with $\\nabla A_H=0$ (see \\cite{S05} and \\cite{FO12}).\n\n\n\\section{Parallel biharmonic immersions in $\\mathbb{S}^n$}\n\nAn immersed submanifold is said to be {\\it parallel} if\nits second fundamental form $B$ is parallel, that is $\\nabla^\\perp B=0$.\n\nIn the following we give the classification for proper biharmonic parallel immersed surfaces in $\\mathbb{S}^n$.\n\\begin{theorem}\\label{teo:parallel-surfaces}\nLet $\\varphi:M^2\\to \\mathbb{S}^n$ be a parallel surface in $\\mathbb{S}^n$. 
If $\\varphi$ is proper biharmonic, then the codimension can be reduced to $3$ and $\\varphi(M)$ is an open part of either\n\\begin{itemize}\n\\item[{\\rm(i)}] a totally umbilical sphere $\\mathbb{S}^2(1\/\\sqrt2)$ lying in\na totally geodesic $\\mathbb{S}^3\\subset \\mathbb{S}^5$,\nor\n\\item[{\\rm(ii)}] the minimal flat torus $\\mathbb{S}^1(1\/2)\\times \\mathbb{S}^1(1\/2)\\subset\n\\mathbb{S}^3(1\/\\sqrt2)$; $\\varphi(M)$ lies in a totally geodesic $\\mathbb{S}^4\\subset \\mathbb{S}^5$,\nor\n\\item[{\\rm(iii)}] the minimal Veronese surface in $\\mathbb{S}^4(1\/\\sqrt2)\\subset \\mathbb{S}^5$.\n\\end{itemize}\n\\end{theorem}\n\\begin{proof}\nThe proof relies on the fact that parallel submanifolds in $\\mathbb{S}^n$ are classified in the following three categories (see, for example, \\cite{C10}):\n\\begin{itemize}\n\\item[(a)] a totally umbilical sphere $\\mathbb{S}^2(r)$ lying in a totally geodesic $\\mathbb{S}^3\\subset\\mathbb{S}^n$;\n\n\\item[(b)] a flat torus lying in a totally geodesic $\\mathbb{S}^4\\subset\\mathbb{S}^n$\ndefined by\n$$\n(0,\\ldots,0,a\\cos u,a\\sin u,b\\cos v,b\\sin v,\\sqrt{1-a^2 -b^2}),\\quad a^2 +b^2 \\leq1;\n$$\n\\item[(c)] a surface of positive constant curvature lying in a totally geodesic $\\mathbb{S}^5\\subset\\mathbb{S}^n$ defined by\n$$\nr\\left(0,\\ldots,0,\\frac{v w}{\\sqrt{3}},\\frac{u w}{\\sqrt{3}},\\frac{u v}{\\sqrt{3}},\\frac{u^2-v^2}{2\\sqrt{3}},\n\\frac{u^2+v^2-2w^2}{6},\\frac{\\sqrt{1-r^2}}{r}\\right),\n$$\nwith $u^2+v^2+w^2=3$ and $02$ and $|H|\\in (0,1)$.\n\n\\begin{theorem}\nLet $\\varphi:M^m\\to\\mathbb{S}^n$ be a parallel proper biharmonic immersion. Assume that $m>2$ and $|H|\\in (0,1)$. Then $|H|\\in (0,({m-2})\/{m}]$. 
Moreover:\n\\begin{itemize}\n\\item[(i)] $|H|=({m-2})\/{m}$ if and only\nif locally $\\varphi(M)$ is an open part of a standard product\n$$\nM_1\\times\\mathbb{S}^1(1\/\\sqrt{2})\\subset\\mathbb{S}^n,\n$$\nwhere $M_1$ is a parallel minimal embedded submanifold of $\\mathbb{S}^{n-2}(1\/\\sqrt{2})$;\n\n\\item[(ii)] $|H|\\in (0,({m-2})\/{m})$ if and only if $m>4$ and, locally,\n$$\n\\varphi(M)=M^{m_1}_1\\times M^{m_2}_2\n\\subset\\mathbb{S}^{n_1}(1\/\\sqrt{2})\\times\\mathbb{S}^{n_2}(1\/\\sqrt{2})\\subset\\mathbb{S}^n,\n$$\nwhere $M_i$ is a parallel minimal embedded submanifold of $\\mathbb{S}^{n_i}(1\/\\sqrt{2})$, $m_i\\geq 2$,\n$i=1,2$, $m_1+m_2=m$, $m_1\\neq m_2$, $n_1+n_2=n-1$.\n\\end{itemize}\n\\end{theorem}\n\\begin{proof} We only have to prove that $M_i$ is a parallel minimal submanifold of $\\mathbb{S}^{n_i}(1\/\\sqrt{2})$, $m_i\\geq 2$. For this, denote by $B^i$ the second fundamental form of $M_i$ in $\\mathbb{S}^{n_i}(1\/\\sqrt{2})$, $i=1,2$. If $B$ denotes the second fundamental form of $M_1\\times M_2$ in $\\mathbb{S}^n$, it is easy to verify, using the expression of the second fundamental form of $\\mathbb{S}^{n_1}(1\/\\sqrt{2})\\times\\mathbb{S}^{n_2}(1\/\\sqrt{2})$ in $\\mathbb{S}^n$, that\n$$\n(\\nabla^\\perp_{(X_1,X_2)} B)((Y_1,Y_2),(Z_1,Z_2))=((\\nabla^\\perp_{X_1} B^1)(Y_1,Z_1),(\\nabla^\\perp_{X_2} B^2)(Y_2,Z_2)),\n$$\nfor all $X_1,Y_1,Z_1\\in C(TM_1)$, $X_2 ,Y_2, Z_2\\in C(TM_2)$. 
Consequently, $M_1\\times M_2$ is parallel in $\\mathbb{S}^n$ if and only if $M_i$ is parallel in $\\mathbb{S}^{n_i}(1\/\\sqrt{2})$, $i=1,2$.\n\\end{proof}\n\n\n\\section{Open problems}\nWe list some open problems and conjectures that seem to be natural.\n\n\\begin{conjecture}\\label{conj: 1}\nThe only proper biharmonic hypersurfaces in $\\mathbb{S}^{m+1}$ are the open parts of hyperspheres $\\mathbb{S}^m(1\/\\sqrt2)$ or of the standard products of spheres $\\mathbb{S}^{m_1}(1\/\\sqrt2)\\times\\mathbb{S}^{m_2}(1\/\\sqrt2)$,\n$m_1+m_2=m$, $m_1\\neq m_2$.\n\\end{conjecture}\n\nTaking into account the results presented in this paper, we have a series of statements equivalent to Conjecture~\\ref{conj: 1}:\n\\begin{itemize}\n\\item[1.] A proper biharmonic hypersurface in $\\mathbb{S}^{m+1}$ has at most two principal curvatures everywhere.\n\\item[2.] A proper biharmonic hypersurface in $\\mathbb{S}^{m+1}$ is parallel.\n\\item[3.] A proper biharmonic hypersurface in $\\mathbb{S}^{m+1}$ is CMC and has non-negative sectional curvature.\n\\item[4.] A proper biharmonic hypersurface in $\\mathbb{S}^{m+1}$ is isoparametric.\n\\end{itemize}\n\nOne can also state the following intermediate conjecture.\n\n\\begin{conjecture}\\label{conj: 2}\nThe proper biharmonic hypersurfaces in $\\mathbb{S}^{m+1}$ are CMC.\n\\end{conjecture}\n\nRelated to PMC immersions and, in particular, to Theorem~\\ref{th: pmc2}, we propose the following problem.\n\n\\begin{problem}\nFind a PMC proper biharmonic immersion $\\varphi:M^m\\to\\mathbb{S}^{n}$ such that\n$A_{H}$ is not parallel.\n\\end{problem}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLegged robots can select discrete footholds to cross various complicated terrain, which leads them to execute motor tasks on fields such as field rescue and planetary exploration in the future. 
Hexapod robots, which have higher stability and superior load capacity than biped and quadruped robots, are widely used\cite{moore2002reliable}\cite{tunc2016experimental}\cite{picardi2020bioinspired}. Still, with more legs, the planning of gait and footholds for such robots is more complicated. The traditional planning framework plans the gait first, and then plans the footholds of the swing legs according to the terrain\cite{kalakrishnan2011learning}\cite{belter2016adaptive}\cite{fankhauser2018robust}. The two steps are independent of each other, each making optimal decisions based on corresponding rules or evaluation functions. However, in harsh environments, traditional planning frameworks can easily cause robots to be trapped, because such methods make decisions based only on the current environment and the state of the robot, without considering the situations that follow. In this paper, we focus on improving the robot's ability to pass in a sparse foothold environment by selecting appropriate footholds and gaits.\n\n\n \n \n Gait is usually used to express the walking mode of a legged robot. The choice of gait can affect the robot's forward speed, stability, and passability. Classified by whether the gait changes periodically, there are two modes: periodic gait and aperiodic gait. According to different planning methods, gait can be divided into the rule-based method and the CPG method. For the rule-based method, when walking in a periodic gait, assuming that all footholds are valid, legged robots move forward in a fixed swing sequence, which is usually taken as the 3+3 tripod gait, 4+2 quadruped gait or 5+1 wave gait for hexapod robots\cite{chu2002comparison}. Because these gaits are easy to use, they are widely adopted by researchers\cite{estremera2010continuous}\cite{bjelonic2018weaver}\cite{belter2019employing}. 
When the terrain is rugged or some areas are unsupportable, the legged robot needs to change its gait according to the terrain information and its own state information, and then generate a gait sequence with an irregular order. This kind of aperiodic gait is called free gait. The free gait proposed by Kugushev and Jaroshevskij\cite{kugushev1975problems} in 1975 is characterized as aperiodic, irregular, asymmetric, and terrain-adaptive. For free gait, the order of the legs changes in a non-fixed but flexible manner depending on the trajectory, terrain properties and motion state. In irregular terrain, this gait type is more flexible and adaptable than periodic and regular gaits. A large number of free gaits for quadruped or hexapod robots have been developed so far\cite{mcghee1979adaptive}\cite{hirose1984study}\cite{estremera2005generating}.\par\n \n \n \n \begin{figure}[tbp] \n\centering \n\includegraphics[scale=0.13]{photo\/fig1.pdf} \n\caption{Building a Monte Carlo tree for the walking planning of a hexapod robot in a sparse foothold environment. Each node represents a state of the robot. When a constructed node reaches the target position, the entire search algorithm ends. The dotted arrows in the figure indicate omitted parts, and the whole tree in the figure is a schematic structure and is not complete.} \n\label{fig1} \n\end{figure}\n\n \n\n\nAnother biologically-inspired gait generation method is the CPG method. From the perspective of imitating biological gait rhythm control, the CPG gait planning method regards each foot of the robot as a neuron and realizes the walking of the robot by periodically triggering the movement of each foot. Shik et al.\cite{shik1966control} proposed that the rhythmic movement of animals is controlled by a central pattern generator (CPG). 
Since then, a large number of scholars have carried out studies on CPG-based gait planning for legged robots \cite{venkataraman1997simple}\cite{arena2004adaptive}\cite{liu2013central}. A CPG gait mainly controls the movement of the legs through an oscillator, without any feedback, and it is easy to achieve smooth transitions between gaits. However, when the environment becomes complicated, if one leg sequence is wrong, the whole CPG model will collapse. Therefore, some scholars combined it with a reflex model to improve the environmental adaptability of the CPG model. For example, Santos\cite{santos2011gait} used the reflex model to provide reflex signals and realized the dynamic regulation of the movement rhythm by modifying the CPG model parameters. Mustafa Suphi Erden\cite{erden2008free} used the reinforcement learning method to train the structure of the CPG network and the reflex model.\par\n\n\n\nThe selection of footholds is often carried out after gait planning. For foothold planning, the expert threshold method is commonly used by scholars to select footholds according to features such as the roughness of the terrain, the slope, the proximity to edges, the amount of slip and the height variance \cite{krotkov1996perception}\cite{rebula2007controller}. Kolter\cite{kolter2008hierarchical} used a hierarchical apprenticeship learning algorithm to select footholds, but it still relied on human expert experience to adjust the cost function weights. Besides, Kalakrishnan\cite{kalakrishnan2009learning} proposed a more elaborate method, using geometric terrain template learning to extract useful landing features, where the terrain templates were obtained by human teaching. \cite{belter2011rough} established a 2.5D map with a 2D lidar and proposed a foothold selection algorithm, which employs unsupervised learning to create an adaptive decision surface. 
The robot can learn from realistic simulations, and no prior expert-given rules or parameters are used. The above methods only consider the environmental characteristics at the robot's current location. When the environment becomes extremely harsh and a leg has no available foothold, we have found no related work that solves this problem. In our method, the problem can be avoided by sequence optimization, or handled by combining a fault-tolerant gait method.\par\n\n\n\nInspired by the well-known artificial intelligence system AlphaGo\cite{silver2016mastering}\cite{silver2017mastering}, Monte Carlo Tree Search (MCTS)\cite{browne2012survey} is an excellent method to find an optimal decision for the sequence optimization problem. Monte Carlo methods have a long history within numerical algorithms and have also had significant success in various AI game-playing algorithms. Recently, Monte Carlo trees have been used in unmanned vehicles and robots. For example, \cite{lenz2016tactical} adopted the MCTS algorithm to consider interactions between different vehicles and plan cooperative motions. \cite{naghshvar2018risk} combines QMDP, the unscented transform, and MCTS to establish an autonomous driving decision framework. In \cite{clary2018monte}, MCTS was used for the first time to solve the planning problem of legged robots. This work mainly demonstrates the application of the MCTS method to the blind walking of biped robots, which requires robots to avoid obstacles on the platform ground. \par\n\n\n\n\nFor other sequence optimization methods, there are several works. \cite{aceituno2017simultaneous} simultaneously combines contact search and trajectory generation into a Mixed-Integer Convex Optimization problem, forming a sequence optimization method. \cite{naderi2017discovering} combines a graph-based high-level path planner with low-level sampling-based optimization of climbing to plan a footstep sequence. 
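For reference, the generic UCT loop at the core of MCTS methods such as those cited above can be sketched as follows. This is an illustrative skeleton only: the `Node` class and the `expand`/`rollout` callbacks are assumptions for the sketch, not the interface of the planner developed in this paper.

```python
import math
import random

# Minimal, generic UCT (MCTS) skeleton: selection, expansion,
# simulation (rollout) and backpropagation.
class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

def uct_select(node, c=1.4):
    # UCB1 score: exploit high mean value, explore rarely visited children.
    return max(node.children,
               key=lambda ch: ch.value / (ch.visits + 1e-9)
               + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))

def mcts(root, expand, rollout, iters=1000):
    for _ in range(iters):
        node = root
        while node.children:                      # 1) selection
            node = uct_select(node)
        for s in expand(node.state):              # 2) expansion
            node.children.append(Node(s, node))
        leaf = random.choice(node.children) if node.children else node
        reward = rollout(leaf.state)              # 3) simulation
        while leaf is not None:                   # 4) backpropagation
            leaf.visits += 1
            leaf.value += reward
            leaf = leaf.parent
    # Return the most visited successor of the root.
    return max(root.children, key=lambda ch: ch.visits).state
```

In the planning setting of this paper, a state would bundle the robot pose, support state and footholds, `expand` would enumerate feasible successor states, and `rollout` would estimate how far a random continuation advances toward the goal.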
In the work of \\cite{tsounis2019deepgait}, the quadruped robot Anymal was trained to walk in complex environments through Deep Reinforcement Learning. They trained the perceptual planning layer and control layer into two networks. The perceptual planning layer strategy can generate basic motion sequences that lead the robot to the target position. The method is similar to the sequence optimization problem we emphasized, but they do not focus on the comparison of such passability with traditional methods. Whether the superiority in the sparse foothold environment can be guaranteed is not carefully explained. Besides, the above optimization work is carried out on quadruped or biped robots. Hexapod robots have a richer combination of gait and foothold, and no related work has been described yet.\\par\n\nIn this article, we mainly discuss how to plan gait and foothold to improve the robot's ability to pass in sparse foothold. The main contributions of this paper lie in: \\par\n\n1) The gait generation and foothold planning are solved as a sequence optimization problem, and the Monte Carlo Tree Search algorithm is used to optimize the decision sequence. Method couples gait generation and foothold selection.\\par\n\n2) Treat the legs without candidate foothold as faulty legs, and combine the idea of fault-tolerant gait with our planning method to improve the passing ability of hexapod robots in extreme environments. In addition, a free fault-tolerant gait expert planning method considering environmental fault tolerance is also proposed.\\par\n\n3) Two methods, Fast-MCTS, and Sliding-MCTS are proposed. The Fast-MCTS method has higher pass performance and faster search speed. Sliding-MCTS has an effective balance between optimization and search time.\\par\n\n4) Compare the indicators of traditional methods and sequence optimization methods in the sparse foothold environment. 
The advantages and disadvantages of the different methods are explained.\n\n\n\n\n\n\n\section{Fault Tolerant Free Gait Planning}\nTo explain the method better, we first define and explain the relevant indicators for the gait and foothold planning of a hexapod robot.\n\n\subsection{Notation and Definition}\n\n\begin{figure}[ht] \n\centering \n\includegraphics[scale=0.6]{photo\/hexapodAxis.pdf} \n\caption{Parameter definitions for hexapod robot planning } \n\label{hexapodAxis} \n\end{figure}\n\n\noindent \textbf{Definition}:Support polygon\par\nThe support polygon is a convex polygon formed by the projection points of the robot's supporting feet positions on the horizontal plane. Support polygons are often used to measure the stability of legged robots. If the horizontal projection of the robot's COG falls within the supporting polygon, then the robot is statically stable. When the robot moves, if the center of gravity is too close to the edge of the supporting polygon, the stability of the robot is poor. To avoid critically stable configurations during planning, this paper shrinks the support polygon toward its centroid. As shown in Figure \ref{scaledPolygon}(a), the polygon formed by the solid line is the original supporting polygon, and the polygon formed by the dashed line is the reduced supporting polygon. $(x_c,y_c)$ represents the coordinate of the centroid, and $(x_i,y_i)$ denotes the coordinate of one supporting foot position. 
The formula for calculating the centroid coordinates $(x_c,y_c)$ is as follows:\par\n\begin{equation} \nx_c = \frac{1}{6A}\sum_{i=1}^{n}(x_i+x_{i+1})(x_i\cdot y_{i+1} - x_{i+1}\cdot y_i )\n\end{equation}\n\begin{equation} \ny_c = \frac{1}{6A}\sum_{i=1}^{n}(y_i+y_{i+1})(x_i\cdot y_{i+1} - x_{i+1}\cdot y_i )\n\end{equation}\nwhere $A$ represents the area of the original supporting polygon:\n\n\begin{equation} \nA = \frac{1}{2}\sum_{i=1}^{n}(x_i\cdot y_{i+1} - x_{i+1}\cdot y_i )\n\end{equation}\nFinally, according to the calculated centroid position and a constant stability margin $BM_0$, the support polygon is reduced.\n\n\begin{figure}[ht] \n\centering \n\includegraphics[scale=0.9]{photo\/scaledPolygon.pdf} \n\caption{(a) Support polygons shrink in proportion. (b) Simplified one-leg workspace.} \n\label{scaledPolygon} \n\end{figure}\n\n\noindent \textbf{Definition}:Single Leg Workspace \par\nThe workspace of a single leg is simplified into a fan-shaped space, as shown in Figure \ref{scaledPolygon}(b). The sector is defined in the single-leg workspace coordinate system $\sum_{s_i}(O_{s_i}-x_s^{(i)}y_s^{(i)}z_s^{(i)})$, and the coordinate system $\sum_{s_i}$ is fixed to the robot.\newline\n\n\noindent \textbf{Definition}:Support State\par\n$c_F:= [s_1,s_2,s_3,s_4,s_5,s_6]\in \mathbb{R}^{1\times 6}$ is a vector indicating the support state of the hexapod when moving to the next step. If leg $i$ is a support leg, the value of $s_i$ is 0; if leg $i$ is a swing leg, the value of $s_i$ is 1. \newline\n\n\noindent \textbf{Definition}:Fault Leg \par\nIf the environment is very complicated, some legs may have no effective footholds to choose from. In this case, a leg with no alternative foothold is defined as a fault leg. 
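The centroid formulas (1)-(3) and the shrinking of the support polygon can be sketched in code as follows. The radial per-vertex shrinking by $BM_0$ is an illustrative interpretation of the reduction step; the vertex ordering (counter-clockwise) is also an assumption of the sketch.

```python
import math

def polygon_area(pts):
    # Shoelace formula, Eq. (3); pts are counter-clockwise (x, y) vertices.
    n = len(pts)
    return 0.5 * sum(pts[i][0] * pts[(i + 1) % n][1]
                     - pts[(i + 1) % n][0] * pts[i][1] for i in range(n))

def polygon_centroid(pts):
    # Centroid of the support polygon, Eqs. (1)-(2).
    n, a = len(pts), polygon_area(pts)
    cross = lambda i: (pts[i][0] * pts[(i + 1) % n][1]
                       - pts[(i + 1) % n][0] * pts[i][1])
    cx = sum((pts[i][0] + pts[(i + 1) % n][0]) * cross(i)
             for i in range(n)) / (6 * a)
    cy = sum((pts[i][1] + pts[(i + 1) % n][1]) * cross(i)
             for i in range(n)) / (6 * a)
    return cx, cy

def shrink_polygon(pts, bm0):
    # Move every vertex toward the centroid by the margin bm0
    # (one possible reading of the reduction step).
    cx, cy = polygon_centroid(pts)
    out = []
    for x, y in pts:
        d = math.hypot(x - cx, y - cy)
        s = max(0.0, (d - bm0) / d)       # per-vertex scale factor
        out.append((cx + s * (x - cx), cy + s * (y - cy)))
    return out
```

For a unit square, `polygon_area` returns 1 and `polygon_centroid` returns its center, which matches the shoelace and centroid formulas above.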
A leg with physical damage is also treated as a fault leg.\newline\par\n\n\noindent \textbf{Definition}:Leg Fault State\par\n$t_F:=[f_1,f_2,f_3,f_4,f_5,f_6]\in \mathbb{R}^{1\times 6}$ represents the leg fault state vector of the hexapod robot from the current state to the next state. If leg $i$ is a fault leg, the value of $f_i$ is 1; if leg $i$ is a normal leg, the value of $f_i$ is 0. Note that if a leg is a fault leg, it must not be a support leg.\newline\par\n\n\noindent \textbf{Definition}:Hexapod State \par\n$\Phi:= <$ $_B^W\!R,\ _{B}^{W}\!{r}\ , c_F,\ t_F,\ _{F}^{W}\!{r}$ $> $ is defined as the state of the hexapod robot, where $_B^W\!R\in SO_3$ is the rotation matrix representing the attitude of the base w.r.t. the $W$ frame, $_{B}^{W}\!{r}\in \mathbb{R}^{3} $ is the target position of the robot's COG in the next step w.r.t. the $W$ frame, $c_F$ is the support state vector of the robot from the current state to the next state, $t_F$ represents the leg fault state vector of the hexapod robot from the current state to the next state, and $_{F}^{W}\!{r}\in \mathbb{R}^{3} $ is the target position of the $i_{th}$ foot in the next step (foothold position) w.r.t. the $W$ frame.\n\newline\n\n\noindent \textbf{Definition}:Stability Margin\par\nThe stability margin, $SM$, also known as the absolute static stability margin, is the smallest distance from the vertical projection of the COG (centre of gravity) on a horizontal plane to the sides of the support polygon formed by joining the projections of the footholds on the same horizontal plane, as shown in Figure \ref{hexapodAxis}.\newline\n\n\noindent \textbf{Definition}:Reduced Kinematic Margin\par\nAs shown in Figure \ref{scaledPolygon}(b), the reduced kinematic margin, $KM_i$, represents the distance that the $i_{th}$ foot position can move in the opposite direction of the robot motion before reaching the boundary of the working space of leg $i$. 
\\newline\\par\n\n\\noindent \\textbf{Definition}:Maximum Advance Amount Based COG\\par\nThe maximum advance amount based support area is the maximum distance which the hexapod can moves in the forward direction in the condition that the COG can't exceed the support area. It is defined as $AA$.\\newline\n\n\\noindent \\textbf{Definition}:Maximum Step Length\\par\nThe maximum step length is the maximum distance that hexapod can move as long as in the forward direction. It depends on the hexapod's state and is defined as:\n\\begin{equation} \\label{MSL}\nMSL={\\rm min}(KM_i,AA)(i=1,2,3...6)\n\\end{equation}\n\\begin{figure*}[ht] \n\\centering \n\\includegraphics[scale=0.8]{photo\/planningPipline.pdf} \n\\caption{Traditional legged robot motion planning framework.} \n\\label{planningPipline} \n\\end{figure*}\n\n\\noindent \\textbf{Definition}:Support State List \\par\nSupport State List represents the maximum allowable set of support states for the robot. Each leg of a legged robot has support state and swing state two states. The combination of different leg states constitutes the support state of the robot. For a hexapod robot, there are $2^6$, 64 possible support states. To ensure static stability, the number of supporting legs should not be less than 3. After excluding these support states, only 42 alternative support states remain, as shown in Table \\ref{SupportStateList}.When planning the next support state with a specified six-footed robot state, the supportable state table is used as an initial candidate state table for screening, and a new candidate support state table that meets the requirements is finally obtained.\\newline\\par\n\n\n\\noindent \\textbf{Definition}:Solution Sequence \\par\nThe solution sequence represents the state sequence of the robot from the current position to the target position. 
Define the solution sequence as $\Psi=\left \{ \Phi_1,\Phi_2...\Phi_k \right \} $, indicating that the robot needs to go through $k$ state transitions to reach its destination.\n\n\begin{table}[h]\n\caption{Support State List}\n\label{SupportStateList} \n\begin{tabular}{cccccccccccccc}\n\hline \n\textbf{Num} & \multicolumn{6}{c}{ \textbf{Support State}} & \textbf{Num} & \multicolumn{6}{c}{\textbf{Support State}}\\\n\hline \n1 & 0 & 0 & 0 & 1 & 1 & 1 & 22 & 1 & 0 & 1 & 0 & 1 & 0 \\\n2 & 0 & 0 & 1 & 0 & 1 & 1 & 23 & 1 & 0 & 1 & 0 & 1 & 1 \\\n3 & 0 & 0 & 1 & 1 & 0 & 1 & 24 & 1 & 0 & 1 & 1 & 0 & 0 \\\n4 & 0 & 0 & 1 & 1 & 1 & 0 & 25 & 1 & 0 & 1 & 1 & 0 & 1 \\\n5 & 0 & 0 & 1 & 1 & 1 & 1 & 26 & 1 & 0 & 1 & 1 & 1 & 0 \\\n6 & 0 & 1 & 0 & 0 & 1 & 1 & 27 & 1 & 0 & 1 & 1 & 1 & 1 \\\n7 & 0 & 1 & 0 & 1 & 0 & 1 & 28 & 1 & 1 & 0 & 0 & 0 & 1 \\\n8 & 0 & 1 & 0 & 1 & 1 & 0 & 29 & 1 & 1 & 0 & 0 & 1 & 0 \\\n9 & 0 & 1 & 0 & 1 & 1 & 1 & 30 & 1 & 1 & 0 & 0 & 1 & 1 \\\n10 & 0 & 1 & 1 & 0 & 0 & 1 & 31 & 1 & 1 & 0 & 1 & 0 & 0 \\\n11 & 0 & 1 & 1 & 0 & 1 & 0 & 32 & 1 & 1 & 0 & 1 & 0 & 1 \\\n12 & 0 & 1 & 1 & 0 & 1 & 1 & 33 & 1 & 1 & 0 & 1 & 1 & 0 \\\n13 & 0 & 1 & 1 & 1 & 0 & 0 & 34 & 1 & 1 & 0 & 1 & 1 & 1 \\\n14 & 0 & 1 & 1 & 1 & 0 & 1 & 35 & 1 & 1 & 1 & 0 & 0 & 0 \\\n15 & 0 & 1 & 1 & 1 & 1 & 0 & 36 & 1 & 1 & 1 & 0 & 0 & 1 \\\n16 & 0 & 1 & 1 & 1 & 1 & 1 & 37 & 1 & 1 & 1 & 0 & 1 & 0 \\\n17 & 1 & 0 & 0 & 0 & 1 & 1 & 38 & 1 & 1 & 1 & 0 & 1 & 1 \\\n18 & 1 & 0 & 0 & 1 & 0 & 1 & 39 & 1 & 1 & 1 & 1 & 0 & 0 \\\n19 & 1 & 0 & 0 & 1 & 1 & 0 & 40 & 1 & 1 & 1 & 1 & 0 & 1 \\\n20 & 1 & 0 & 0 & 1 & 1 & 1 & 41 & 1 & 1 & 1 & 1 & 1 & 0 \\\n21 & 1 & 0 & 1 & 0 & 0 & 1 & 42 & 1 & 1 & 1 & 1 & 1 & 1 \\\n\hline\n\end{tabular}\n\end{table}\n\n\n\n\subsection{Fault Tolerant Free Gait Method}\n\n\n\n\n\noindent(1) Traditional free gait planning pipeline\par\nThe free fault-tolerant gait proposed in this section is based on the traditional expert planning 
process. The traditional expert planning pipeline is shown in Figure \\ref{planningPipline}. First, plan the support state $c_F$ (gait). Second, according to the selected gait, determine the step length of the robot. Third, for the swing legs determined in the first step, select the optimal footholds $_{F}^{W}\\!{r}$ within their workspaces. Finally, according to the terminal robot state $\\Phi_{k+1}$, the current robot state $\\Phi_{k}$, and the environmental obstacle information obtained in the above steps, generate the body and leg trajectories.\\par\n\nTo improve stability and passability, the traditional planning framework also contains a posture optimization step. This article focuses on gait and foothold planning, so we temporarily ignore the body posture optimization process, which does not affect the purpose of our method. \\newline\\par\n\n\n\\noindent(2) Fault tolerant gait planning\\par\nA fault-tolerant gait is the gait adopted when a leg of the robot cannot work correctly for hardware reasons, such as driver failure, motor failure, or a locked leg. Because the hexapod robot has more legs than necessary for static stability, it can still walk stably even if one or two legs cannot operate. As shown in Figure \\ref{evironmentFaultFig}, in the wild it cannot always be guaranteed that every leg can touch down, for example under local slippage, subsidence, or a sudden narrowing of the terrain. Therefore, borrowing the idea of fault tolerance, such situations are also regarded as temporary leg faults. In a word, we define a leg without a candidate foothold or with physical damage as a fault leg. Combined with the idea of dynamic fault tolerance, the robot gains a stronger ability to pass through and adapt to the environment. When faults occur, the hexapod can continue to walk by raising the fault legs up. 
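The fault-leg definition above can be sketched as a simple predicate. This is a minimal illustration, not the paper's implementation; the `Leg` fields and function names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Leg:
    # Covers driver failure, motor failure, or a locked leg.
    physically_damaged: bool = False
    # Empty under local slippage, subsidence, or sudden terrain narrowing.
    candidate_footholds: list = field(default_factory=list)

def is_fault_leg(leg: Leg) -> bool:
    """A leg is a fault leg if it is damaged or has no candidate foothold."""
    return leg.physically_damaged or not leg.candidate_footholds

# A fault leg is raised and excluded from support; it recovers as soon as
# the fault condition disappears (e.g., footholds become available again).
```

Note that a leg with an empty foothold set is treated exactly like a damaged one, which is what lets environmental faults reuse the hardware-fault machinery.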
When the fault is eliminated, the robot restores the function of the faulty leg.\n\n\n\\begin{figure}[h] \n\\centering \n\\includegraphics[scale=0.85]{photo\/faultEnvironment.pdf} \n\\caption{Schematic diagram of environmental fault-tolerant terrain.} \n\\label{evironmentFaultFig} \n\\end{figure}\n\n\n\n\n\\noindent(3) Planning Method\\par\nThe gait planning in this article relies on heuristic rules, because such rules can plan leg motions accurately and guarantee stability based on physical laws. The task of free gait planning is to choose, aperiodically during walking, which legs are swing legs and which are support legs. \n\n\\subsubsection{Support State Planning}\nIn most cases, there are three criteria for choosing the support state.\\par\nThe first criterion is the maximization of the stability margin: it is safer for the robot to keep a larger static stability margin while walking.\\par\nSecond, maximize the robot's advance in one step, which is determined by the step length. As can be seen from Equation \\ref{MSL}, the maximum step length is determined by the robot's support state, kinematic margin, and pose.\\par\nThird, to realize fault tolerance, fault legs are never selected as supporting legs; other legs are chosen instead. The support states in Table \\ref{SupportStateList} that keep the robot stable are added to the set $ S$.\\par\nFor any $s\\in S$, the robot has a maximum step length $MS_{s}$ in the direction $ MD_s $ under the condition that it remains stable. For the fault-tolerant gait, $MD_s$ represents the advancing direction vector in the support state $s$, which changes continuously during the path tracking process. 
For any $s\\in S$, denote the robot's stability margin as $SM_{s}$.\\par\nBased on the above criteria, the evaluation function for selecting the support state $s_0$ is designed as follows:\n\n\n\\begin{equation}\n \\left\\{\\begin{aligned}\n f(s) &= \\omega_1 \\cdot MS_{s} + \\omega_2 \\cdot SM_{s}\\\\\n s_0 &=\\mathop{\\arg\\max}_{s} (f(s))\n \\end{aligned}\n\\ \\right.\n \\qquad \\text{$s\\in S$}\n\\end{equation}\n\n$\\omega_1 $ rewards the maximum step length of the robot at the current state: the larger $\\omega_1 $ is, the more likely the robot is to take larger steps and move faster. $\\omega_2 $ rewards the static stability margin of the hexapod robot: as $\\omega_2 $ increases, the robot tends to choose a support state with a larger static stability margin. \\par \n\nAccording to the support state table \\ref{SupportStateList}, the evaluation values of all support states can be calculated, but the candidate states are filtered first.\\par\n$\\bullet$ Delete support states whose static stability margin is less than 0 before the state transition. \\par\n$\\bullet$ Delete states in which a fault leg is selected as a supporting leg at the current stage. \\par\n$\\bullet$ Delete the support state used in the previous step; if it were retained, the program could end up in an infinite loop.\\newline\\par\n\n\n\n\\subsubsection{Step Length Planning}\nThe planning of the step length also involves a trade-off between the robot's speed and stability. As long as stability can be guaranteed, a longer step length is preferable. Because the support polygons are reduced using the method mentioned before, a certain stability margin is already reserved to ensure the static stability of the robot. Here we therefore set the step length to the maximum step length $MS_{s_0}$ to maximize the walking speed of the robot. 
\\newline\\par\n\n\n\n\\subsubsection{Foothold Planning}\nFor each swing leg, there may be multiple alternative footholds in its future workspace, assuming that the hexapod's body has moved while supported by the chosen combination of supporting legs.\nThere are two principles for choosing footholds in this article. First, for a specific leg, footholds with a higher reduced kinematic margin $KM$ are preferred. Second, foothold combinations that give the robot a higher stability margin are preferred. Denote the set of all possible foothold combinations of the swing legs as $C$, and the selected combination as $c_0$. The foothold combination is selected as shown in the following equation.\n\\begin{equation}\n \\left\\{\\begin{aligned}\n f(c) &= \\omega_L \\cdot \\overline{KM}(c) + \\omega_M \\cdot SM(c)\\\\\n c_0 &=\\mathop{\\arg\\max}_{c} (f(c))\n \\end{aligned}\n\\ \\right.\n \\qquad \\text{$c\\in C$}\n\\end{equation}\n$\\overline{KM}(c)$ represents the average kinematic margin of all swing legs after they have swung according to the foothold combination $c$. $SM(c)$ represents the static stability margin of the hexapod robot with the foothold combination $c$ when the body moves $MS_{s_0}$ in the direction $MD_{s_0}$ and reaches the next state. The first term rewards the reduced kinematic margin of the footholds; the second term rewards the robot's static stability margin at the end of the state transition.\n$\\omega_L$ and $\\omega_M$ are the corresponding weight coefficients, and their values are greater than 0.\n\nFor fault legs, the choice of foothold is different: a fault leg does not have a fixed foothold. 
Its virtual foothold moves with the body and floats in the air.\\newline\\par \n\n\n\\subsubsection{Trajectory planning}\nGiven the start and target COG poses and the target foothold positions, smooth trajectories for the COG and the swing legs must be planned. In addition, the trajectories should be collision-free, energy-efficient, etc. In this paper, we use a simple polynomial method to plan the trajectory of the body. By setting constraints such as the position, velocity, and acceleration of the starting point, a trajectory equation with continuous acceleration can be obtained. \\newline\\par \n\n\n\\noindent(4) Defects of Expert Method\\par\n$\\bullet$ The planning of the gait does not consider environmental information, which affects the planning of the footholds.\\par\n\n$\\bullet$ The planning of the step length is too aggressive, which affects the selection of the foothold; the two are coupled with each other.\\par\n\n$\\bullet$ The selection of the foothold considers the robot's current environment but ignores future situations. Each planning decision affects not only the next step but all subsequent decisions.\\par\n\nThe above limitations can be summarized as follows: the planning of gait, step length, and foothold for a legged robot is a sequential decision problem, in which every decision affects all subsequent ones. 
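The expert pipeline is, in effect, a greedy one-step policy: each stage commits before the next runs, which is why early choices constrain everything downstream. A minimal sketch of that control flow, with all helper names as hypothetical placeholders rather than the paper's implementation:

```python
def expert_plan_one_step(state, planner):
    """One iteration of the expert pipeline; each stage commits greedily."""
    s0 = planner.select_support_state(state)          # (1) gait / support state
    step = planner.max_step_length(state, s0)         # (2) step length = MS_s0
    feet = planner.select_footholds(state, s0, step)  # (3) footholds
    return planner.generate_trajectory(state, s0, step, feet)  # (4) trajectories

def expert_walk(state, planner, goal_x):
    # Greedy rollout: no stage ever reconsiders an earlier commitment,
    # so a locally good gait may steer the robot into an unpassable region.
    while state.x < goal_x and not state.stuck:
        state = expert_plan_one_step(state, planner)
    return state
```

The loop makes the sequential-decision structure explicit: the only remedy a rule set has for a bad early commitment is to get stuck, which is exactly the gap the MCTS methods below are meant to close.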
A well-designed rule-based expert planning method cannot meet all requirements, and there are always situations it cannot deal with.\\newline\\par \n\n\n\\section{Gait and Foothold Planning Based MCTS}\n\\subsection{Standard MCTS Method}\n\n\\noindent(1) Basic MCTS Algorithm\n\n\\begin{figure}[ht] \n\\centering \n\\includegraphics[scale=0.5]{photo\/stantardMCTS.pdf} \n\\caption{Workflow of Monte Carlo tree search algorithm} \n\\label{stantardMCTS} \n\\end{figure}\n\nMonte Carlo Tree Search (MCTS) is an algorithm that uses random (Monte Carlo) samples to find the best decision. Here, we briefly outline the main ideas of MCTS, as shown in Figure \\ref{stantardMCTS}. First, the selection process operates on the existing search tree: the tree is traversed according to the tree policy, deciding which direction to take at each branch, until a leaf node is reached. Then, one of the remaining possible actions is used to expand the leaf node and obtain a new leaf node. Starting at that node, within a fixed horizon or until a terminal node is reached, a simulation (often also called a rollout) is performed using some default policy (i.e., the default behaviour of all relevant participants). Finally, the values of all traversed nodes are updated based on cost functions that evaluate the simulation results.\\par\n\nThe MCTS algorithm comprises two distinct policies:\\par\n\n$\\bullet$ Tree Policy: Select or create a leaf node from the nodes already contained within the search tree. 
The tree policy needs to strike a balance between exploitation and exploration; the classic choice is the UCB (Upper Confidence Bound) algorithm \\cite{kocsis2006bandit}.\\par\n\n$\\bullet$ Simulation Policy: Play out the domain from a given non-terminal node to produce a value estimate.\n\nMCTS has many advantages that make it useful for planning the gait and foothold sequence of a legged robot:\\par\n$\\bullet$ MCTS is a statistical anytime algorithm for which more computing power generally leads to better performance. It can be stopped at any time, and a valid (though possibly suboptimal) result is available.\\par\n$\\bullet$ MCTS can be used with little or no domain knowledge.\\par\n\n$\\bullet$ MCTS can enforce different policies on different nodes, so it is easy to scale.\\par\n\n$\\bullet$ MCTS can be highly parallelized, with multiple iterations and multiple simulations at a time, which facilitates engineering applications.\\newline\\par\n\n\n\n\\noindent(2) Extensions for Legged Robot Planning Domain\\par\nBased on MCTS, this section proposes two modified MCTS methods for hexapod robots, one called the Fast-MCTS method and the other called the Sliding-MCTS method. First, we introduce some definitions for standard MCTS in the field of hexapod robot planning.\\par\n\n1) Action Space: For hexapod robot planning, each node of the Monte Carlo tree represents a robot state, $\\Phi:= <_B^W\\!R,\\ _{W}^{B}\\!{r},\\ c_F,\\ t_F,\\ _{F}^{W}\\textrm{r}> $, which includes the robot's posture, position, support state during the transfer process, leg fault state, and foothold positions. The set of actions that lead the robot from the current node to the candidate nodes is called the action space. According to Table \\ref{SupportStateList}, $n$ alternative support states can be obtained for a robot state; denote these $n$ alternative support states as the set $S$. 
For any support state $s\\in S$, the maximum advancement $MS_s$ along the advancing direction $MD_s$ can be obtained. Discretize the maximum advancement $MS_s$ into three step lengths, $MS_s\/3$, $2\\cdot MS_s\/3$, and $MS_s$, which constitute the set $L$. For a step length $l \\in L$ and a supporting state $s$, there are $m_{l,s}$ combinations of footholds. Define the number of candidate states of a hexapod robot state $\\Phi_k$ as $N_{alternative}(\\Phi_k)$; as shown in Figure \\ref{alternativeNodeFig}, it is calculated as follows:\n\\begin{equation} \\label{alternativeStateNum}\n N_{alternative}(\\Phi_k) = \\sum_{s \\in S}\\sum_{l \\in L}3m_{s,l}\n\\end{equation}\n\n\\begin{figure}[ht] \n\\centering \n\\includegraphics[scale=0.7]{photo\/alternativeNodeFig.pdf} \n\\caption{Schematic diagram of alternative nodes} \n\\label{alternativeNodeFig} \n\\end{figure}\n\n2) Search Tree Policy: For the search tree policy, we use the standard UCB1 algorithm:\n\\begin{equation}\n UCB1 = X_j + C\\cdot \\sqrt{\\frac{2\\cdot{\\rm ln}n}{n_j}}\n\\end{equation}\n\nwhere $X_j$ represents the average reward value of node $j$, $ n_j$ is the number of visits to node $j$, and $n$ represents the number of visits to the root node. The parameter $C$ is a balance factor that decides whether to focus on exploration or exploitation during selection. If $C$ is large, the algorithm is more inclined to select nodes with lower reward values; if $C$ is small, it is more inclined to visit nodes with higher reward values. From this, we can see how the UCB algorithm balances exploration and exploitation: both the nodes that have obtained the largest average reward so far and the nodes with lower average rewards have a chance to be selected.\\newline\\par\n\n3) Simulation Policy: There are two simulation policies in this paper. 
The first is the free fault-tolerant gait planning method introduced in the previous section. The second is an entirely random method, which randomly selects an executable action from the action space.\\newline\\par\n\n4) Simulation Horizon: The goal of a game such as Gomoku or Go played with Monte Carlo Tree Search is to win. For a hexapod robot walking in a sparse environment, the purpose is to pass through it safely, so the result returned by the MCTS simulation is set to \"Pass\" or \"Not Pass\". In a sparse foothold environment, few simulations pass, so most node scores are 0; when the UCB algorithm is then used for selection, the exploitation term loses its function. According to the literature \\cite{clary2018monte}, humans plan only about three steps ahead while walking. It is also shown in \\cite{matthis2014visual} that planning a certain distance ahead already yields high passability, and further increasing the planning distance has no obvious effect on improving passability. Therefore, this article sets a simulation horizon $SH$: if the simulation distance exceeds $SH$, the simulation result is \"Pass\".\\newline\\par\n\n5) Simulation Termination Condition: This termination condition applies to both node expansion and simulation. The termination conditions are as follows:\\par\n$\\bullet$ During $N_{\\rm stop}$ consecutive state transitions, the robot's forward progress is close to 0.\\par\nNote: The parameter $N_{\\rm stop}$ is set differently for different simulation policies. For example, $N_{\\rm stop}$ can be small for the expert method, because when the robot is stuck, the possibility of passing with the expert method is low. 
For the random method, a temporary stuck state can be resolved by continually switching the foothold combination and the support state, so $N_{\\rm stop}$ can be slightly larger.\\par\n\n$\\bullet$ The expanded node's position is beyond the simulation horizon $SH$ or reaches the target position.\\newline\\par\n\n6) Node Score and Backpropagation\\par\n For each node $j$, its score is defined as:\n \\begin{equation}\n X_j = N_{{\\rm pass},j} \/ N_{{\\rm visit}, j}\n \\end{equation}\n\nwhere $X_j$ represents the score of node $j$, $N_{{\\rm pass},j}$ represents the total number of simulation passes for node $j$ or its descendants, and $N_{{\\rm visit},j}$ represents the total number of visits to node $j$ or its descendants.\\par\n\nA backtracking procedure updates $N_{\\rm visit}(\\Phi)$ and $N_{\\rm pass}(\\Phi)$ from the leaf node to the root node, where $\\Phi$ denotes any ancestor of the leaf node. The update formulas are as follows:\n\\begin{equation}\n N_{\\rm visit}(\\Phi)=N_{\\rm visit}(\\Phi) + 1\n\\end{equation}\n\n\\begin{equation}\n N_{\\rm pass}(\\Phi)=N_{\\rm pass}(\\Phi)+\\Delta_{\\rm simScore}\n\\end{equation}\n\\begin{equation}\n\\Delta_{\\rm simScore}=\\left\\{\n\\begin{array}{rcl}\n0 & & {\\rm not \\ pass}\\\\\n1 & & {\\rm pass} \n\\end{array} \\right.\n\\end{equation}\n\nwhere $\\Delta_{\\rm simScore}$ denotes the result of the simulation from the expanded node. The backpropagation ends when the root node is reached.\\newline\\par\n\n\n\\noindent(3) Defects of Standard MCTS in Hexapod Planning\\par\nAlthough standard MCTS has quickly been applied to the field of legged robots, the unmodified algorithm has the following problems. \\par \nFirst, each hexapod robot state usually has hundreds of candidate successor states, so during the construction of the search tree, the time consumed by expansion increases exponentially. 
A search tree covering only 1 m of travel already exceeds tens of thousands of nodes, and processing on a single-threaded CPU takes up to ten minutes, which is prohibitive for real-time planning of legged robots.\\par\nSecond, a dense foothold environment can be traversed with a simple expert method, so there is no need to spend substantial time on MCTS.\\par\nThird, the binary scoring method for node scores is too crude. Although it can find feasible solutions, it has no tendency to optimize the resulting sequence; for example, a faster walking sequence would be more desirable.\n\n\\subsection{Selection Planning Based Fast MCTS}\nTo address the speed problem of standard MCTS, this section proposes a fast Monte Carlo search method for hexapod robot planning, called Fast-MCTS. In the simulation step of standard MCTS, a large number of simulations are performed, but only the simulation results are used: the state sequences obtained during simulation are discarded. Fast-MCTS uses the simulation sequences to quickly build the master branch of the search tree and iteratively updates the master branch with the branch that has the highest potential to reach the destination. The primary purpose of this algorithm is to construct a feasible state sequence quickly; its optimality cannot be guaranteed. The fast Monte Carlo tree search algorithm differs from the standard MCTS framework and consists of four main steps: Extension, Simulation, Updating Master Branch, and Backtracking.\\par\nFirst, take the starting state $\\Phi _{\\rm start}$ of the hexapod robot as the specified starting node $\\Phi _{k}$. \\par\n$\\bullet$ Extension: Expand all candidate states of the specified node $\\Phi_k$; each candidate node can be expanded only once. 
Denote the expanded nodes as the set $AS_{\\Phi_k}$.\\par\n\n$\\bullet$ Simulation: For each node ${{\\Phi_0}}\\in AS_{\\Phi_k}$, simulate using the default policy (expert method or random method) until the termination condition is reached. Denote the distance covered by the simulation as $d({\\Phi_{0}})$, and the nodes generated during the simulation as the set $T_{\\Phi_0}$. The simulation termination conditions are similar to those of standard MCTS, but without the simulation-horizon parameter: the simulation terminates when the robot is continuously stuck or reaches its destination.\\par\n\n$\\bullet$ Updating Master Branch: Select the expanded node with the maximum simulation distance, $\\Phi _{k,f} \\in AS_{\\Phi _k}$.\\par\n\\begin{equation}\n \\Phi_{k,f} =\\mathop{\\arg\\max}_{\\Phi \\in AS_{\\Phi _k}} (d(\\Phi))\n\\end{equation}\n\nAdd its simulated node sequence $T_{\\Phi_{k,f}} $ to the search tree and consider this branch the master branch.\\par\n\n$\\bullet$ Backtracking: If the master branch does not reach the destination, select nodes successively from the leaf node closest to the target toward the root node, and for each selected node, expand, simulate, and update the master branch again.\\par\n\n\nNext, we introduce the flow of the entire algorithm according to Figure \\ref{fastMCTSfig}. In Figure \\ref{fastMCTSfig}(a), all candidate state nodes of the selected node are expanded. Then a simulation is performed from each of them as a starting point, and the simulation distance and state sequence are recorded. In Figure \\ref{fastMCTSfig}(b), the node with the largest simulation distance is selected for expansion, and each node in the state sequence recorded during its simulation is added one by one. The master branch is indicated by the thick solid line in the figure. 
Figure \\ref{fastMCTSfig}(c)(d)(e) shows that if the master branch does not reach the destination, the algorithm gradually expands backwards from the furthest child node and updates the master branch. The algorithm terminates when a tree node reaches the destination or the procedure backtracks to the root node. The method is presented as pseudocode in Algorithm 1.\\par\n\n\\begin{figure}[h] \n\\centering \n\\includegraphics[scale=0.51]{photo\/fastMCTSfig.pdf} \n\\caption{Workflow of Fast-MCTS} \n\\label{fastMCTSfig} \n\\end{figure}\n\nThe algorithm uses the simulation results to establish the master branch quickly and updates it by backtracking until it reaches the destination or returns to the root node. The idea is to find, through multiple simulations, the positions where the robot is easily trapped, and then to keep backtracking and retrying until an available solution is found, which is consistent with human behaviour during walking. Although this algorithm cannot guarantee optimality, it searches quickly and is very effective at improving the passability of a given policy, for example the expert method.\n\n\\subsection{Selection Planning Based Sliding MCTS}\nTwo MCTS-based planning methods for the hexapod robot have been introduced above. The first applies the standard MCTS method, which is very computationally intensive and time-consuming. Fast-MCTS can quickly find a feasible path, which is effective for improving the passability of the expert planning method; however, it samples too sparsely, does not embody the idea of estimating the global situation through sampling, and does not optimize the solution sequence. In view of the above problems, this paper proposes a search algorithm that both improves the search speed and optimizes the solution sequence. 
It is defined as Sliding-MCTS.\\par\nThe core processing steps of the algorithm are described below:\\par\n1) Moving root node\\par\nSliding-MCTS is similar to the standard MCTS method. The most crucial difference is that the root node of standard MCTS is fixed, while the root node of Sliding-MCTS changes after a period of sampling.\\par\nThe core idea of this algorithm is that each decision step of the robot is determined after a large number of samples. Once the best next step in the current situation is selected, the node corresponding to the robot state at this step is chosen as the new root node. As shown in Figure \\ref{slidingMCTS}, the sampling in each pane selects the best next step to continue; repeating this cyclically generates a sequence of states.\\par\n\n\\begin{figure*}[ht] \n\\centering \n\\includegraphics[scale=0.8]{photo\/slidingMCTS.pdf} \n\\caption{Workflow of sliding-MCTS} \n\\label{slidingMCTS} \n\\end{figure*}\n\n\n\n2) Simulation Horizon\\par\nTo facilitate the subsequent quantization of the node score, the simulation horizon distance $SH$ described above is set to a fixed number of simulation steps $N_{\\rm SimStepNum}$.\\par\n\n3) Node score\\par\nEarlier, we introduced the node score, defined as the number of successful simulations divided by the number of visits. With this score an available solution sequence can be found in most cases, but some problems remain. First, the score lacks indicators relevant to the legged-robot domain, so the algorithm has no effective objective at runtime: although a solution sequence can be found, one would prefer a sequence that walks faster or more stably. In addition, in some cases a node can pass during simulation even though its distance from its parent node is minimal. 
The algorithm sometimes selects this type of node repeatedly, resulting in an infinite loop and no effective solution. To obtain a higher-quality solution sequence, a reward function is used as the new node evaluation method. The score of node $i$ is defined as $J_i$, and its components are shown in Figure \\ref{estimateFunctionFig}. The score terms are as follows:\\par\n\n\\begin{figure}[h] \n\\centering \n\\includegraphics[scale=0.75]{photo\/estimateFunctionFig.pdf} \n\\caption{Schematic diagram of the reward function indicator composition. Gray nodes indicate expanded nodes and red nodes indicate simulation nodes.} \n\\label{estimateFunctionFig} \n\\end{figure}\n\n$J_{i,{\\rm SimStepL}}$: It rewards the average step length of the simulation sequence from node $i$ over the fixed number of simulation steps $N_{\\rm SimStepNum}$. The longer the simulation distance, the larger the average step length of the sequence; the higher this value, the greater the potential passability of node $i$.\\par\n$J_{i,{\\rm StepExp}}$: It rewards the average step length of the expanded sequence of nodes from node $i$ back to the current root node, making the algorithm tend to converge to sequences that walk faster. Define the state sequence from the expanded node $i$ to the root node as a set $C_i$, and the number of elements in $C_i$ as $n_i$. Take the step length between node $i$ and its parent as $s_i$; for the root node $r$, $s_r$ equals 0. The formula for calculating $J_{i,{\\rm StepExp}}$ is as follows:\\par\n\\begin{equation}\nJ_{i,{\\rm StepExp}} = \\frac{1}{n_i}\\sum_{j\\in C_i }s_j\n\\end{equation}\n$J_{i,{\\rm marginExp}}$: It rewards the average static stability margin of the expanded sequence of nodes from node $i$ to the current root node, making the algorithm tend to converge to sequences with a larger average stability margin. $SM_i$ represents the stability margin of node $i$. 
The formula for calculating $J_{i,{\\rm marginExp}}$ is as follows:\\par\n\\begin{equation}\nJ_{i,{\\rm marginExp}} = \\frac{1}{n_i}\\sum_{j\\in C_i }SM_j\n\\end{equation}\n$J_{i,{\\rm disToPar}}$: It rewards the step length from node $i$ to its parent node, preventing the algorithm from repeatedly visiting nodes with a minimal forward distance.\\par\nThe score of node $i$ is the sum of each term multiplied by its corresponding weight:\\par\n\\begin{equation}\n J_i=\\sum \\omega_{i,(.)}J_{i,(.)}\n\\end{equation}\nwhere $\\omega_{i,(.)}$ represents the weight coefficient of each term. In our experience, the weights corresponding to $J_{i,{\\rm SimStepL}}$ and $J_{i,{\\rm StepExp}}$ can be larger, while the weight of $J_{i,{\\rm disToPar}}$ should be kept small to prevent the robot from getting stuck prematurely due to an excessively greedy forward speed.\\par\n\n\n4) Score Backpropagation\\par\nWhen an expanded node computes a new reward score, upward propagation does not average all expanded node scores but retains the highest score. The propagation formulas are as follows:\n\\begin{equation}\n X_i=J_i\n\\end{equation}\n\n\\begin{equation}\nX_j=\\left\\{\n\\begin{array}{rcl}\nJ_i & & {\\mathrm{if}\\ J_i>X_j}\\\\\nX_j & & {\\mathrm{else}} \n\\end{array} \\right.\n\\end{equation}\n\nFor the gait and foothold sequence planning of a legged robot, the goal is to find a single result sequence. Therefore, it is better to measure the quality of a tree by its best child nodes. Conversely, if the average score of the entire tree were used as the measure, nodes with lower scores would diminish the scores of the best nodes. In a sparse foothold environment, only a few solution sequences can pass, and such a measure would make it difficult for the algorithm to find them.\\par\n\n5) Single-Step Decision Time\\par\nThe state of the robot at each step is determined after a certain period of sampling. 
Define the single-step decision time as the time required for $N_{\\rm Samp}$ samplings. The parameter $N_{\\rm Samp}$ can be adjusted to the actual situation: as the complexity of the environment increases, $N_{\\rm Samp}$ can increase correspondingly.\n\n6) Algorithm Termination Condition\\par\n\nThere are two termination conditions: first, the algorithm stops if an expanded node reaches the specified target point; second, it terminates if the expanded node approaches the farthest simulation distance, which happens when an impassable area is encountered. As shown in Figure \\ref{simulatioinHorizon}, the edge of the grey area is the farthest simulated position of the robot. If the farthest simulated position is very close to the current expansion node, the algorithm cannot continue.\\par\n\n\\begin{figure}[htb] \n\\centering \n\\includegraphics[scale=0.5]{photo\/simulatioinHorizon.pdf} \n\\caption{Schematic diagram of the movement of the simulation horizon.} \n\\label{simulatioinHorizon} \n\\end{figure}\n\n7) Choosing the best subtree\\par\nThe best subtree is selected using the standard UCB formula with the coefficient $C$ set to zero: the branch containing the node with the highest score is selected, and the remaining branches are pruned.\n\nAlthough Sliding-MCTS does not optimize the entire sequence globally, it still performs well, for three reasons. First, as mentioned earlier, planning only a certain number of steps ahead hardly reduces the overall passability. Second, compared with the standard MCTS algorithm, Sliding-MCTS greatly decreases the search time. Third, through the parameters $N_{\\rm Samp}$ and $N_{\\rm SimStepNum}$, the search time and the degree of optimization can be effectively balanced. 
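The UCB1 rule used by the tree policy, and the special case $C=0$ used here to commit to the best subtree, can be sketched as follows. This is an illustrative sketch only; the `Node` structure is an assumption, not the paper's implementation:

```python
import math

class Node:
    def __init__(self, score=0.0, visits=0):
        self.score = score    # X_j: backed-up reward of this subtree
        self.visits = visits  # n_j: visit count of this node
        self.children = []

def ucb1(child, parent_visits, C):
    """UCB1 value of a child: X_j + C * sqrt(2 ln n / n_j)."""
    if child.visits == 0:
        return float('inf')   # unvisited children are tried first
    return child.score + C * math.sqrt(
        2.0 * math.log(parent_visits) / child.visits)

def select_child(node, C):
    """Tree policy: C > 0 trades off exploration against exploitation;
    C = 0 simply picks the best-scored child, which is how Sliding-MCTS
    commits to the best subtree when moving the root."""
    return max(node.children, key=lambda ch: ucb1(ch, node.visits, C))
```

With $C=0$ the exploration term vanishes and the selection degenerates to a pure argmax over child scores, after which the unselected branches can be pruned as described above.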
The method is presented as pseudocode in Algorithm 2.\\par\n\n\\section{Experiment}\n\\begin{figure}[h] \n\\centering \n\\includegraphics[scale=1]{photo\/ElspiderFig.pdf} \n\\caption{Elspider hexapod robot} \n\\label{ELSpiderFig} \n\\end{figure}\n\nWe validated our approach on the Elspider robot\\cite{liu2018static}\\cite{liu2018state}\\cite{gao2019low}. The experimental platform is an electric heavy-duty hexapod, Elspider, developed by Harbin Institute of Technology, as shown in Figure \\ref{ELSpiderFig}. The overall mass of the robot is 300 kg, and it can walk stably under a load of 150 kg. The machine adopts a high-stability, uniformly distributed six-leg configuration, with the driving wheelsets evenly distributed at the base joints of the legs. The robot is approximately 1.9 m long, 2.1 m wide, and 0.5 m tall. The relevant parameters of the robot are shown in Table \\ref{robotParameter}: the radius of the trunk body (0.4 m) and the lengths of the coxa link (0.18 m), thigh link (0.5 m), and shank link (0.5 m).\\par\n\n\\begin{table}[]\n\\centering\n\\caption{Mechanical and geometric parameters of the Elspider robot}\n\\label{robotParameter}\n\\begin{tabular}{ccc}\n\\hline\nParameter & Length(m) & Mass(kg) \\\\\n\\hline\nBody & 0.4 & 121.9 \\\\\nCoxa link & 0.18 & 3.6 \\\\\nThigh link & 0.5 & 22 \\\\\nShank link & 0.5 & 7.2 \\\\\nFoot & 0.025 & 0.2 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\n\\begin{table*}[ht]\n\\centering\n\\caption{Experimental parameter setting table}\n\\label{parameterForAlgorithm}\n\\begin{tabular}{lll}\n\\hline\n\\multicolumn{1}{c}{\\textbf{Parameter}} & \\multicolumn{1}{c}{\\textbf{Value}} & \\multicolumn{1}{c}{\\textbf{Parameter Meaning}} \\\\\n\\hline\n$BM_{\\rm min}$ & 0.05{\\rm m} & Minimal static stability margin \\\\\n$\\omega_1$ & 0.7 & Support state planning weight coefficient \\\\\n$\\omega_2$ & 0.3 & Support state planning weight coefficient \\\\\n$\\omega_L$ & 0.7 & Foothold planning weight coefficient \\\\\n$\\omega_M$ & 0.3 & Foothold planning 
weight coefficient \\\\\n$N_{\\rm stop}$ &5 & Threshold for the number of consecutive stuck states \\\\\n$N_{\\rm SimStepNum}$ &20 & Fixed simulation steps \\\\\n$\\omega_{i,{\\rm SimStepL}}$ &3 & Sequence evaluation function weight coefficient \\\\\n$\\omega_{i,{\\rm StepExp}}$ &1 & Sequence evaluation function weight coefficient \\\\\n$\\omega_{i,{\\rm marginExp}}$ &0.5 & Sequence evaluation function weight coefficient \\\\\n$\\omega_{i,{\\rm disToPar}} $ &0.2 & Sequence evaluation function weight coefficient \\\\\n$N_{\\rm Samp}$ &500 & Number of samplings per single step for Sliding-MCTS \\\\\n$C$ &0.3 & UCB algorithm balance coefficient \\\\\n\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\nTo examine the behaviour of the proposed algorithm, we designed three different types of experiments. The first experiment is performed on terrain with randomly distributed footholds. This experiment statistically compares the different planning methods' ability to pass complex terrain, speed of advance, and planning time. By reducing the support polygon area, the stability of the robot is ensured; therefore, a comparison of the body stability margin index is not performed in the experiment. The second type of experiment is carried out on artificially designed challenging terrains to verify the validity of the proposed method. The last experiment is a test on a real robot to illustrate how the proposed method can be applied to a real environment.\\par\n\nThe experiments compare the following six planning methods: (1) Triple gait. (2) Wave gait. (3) Free fault-tolerant gait. (4) Fast-MCTS with a random simulation strategy, defined as Fast-MCTS (Random). (5) Fast-MCTS with the free fault-tolerant gait expert scheme as its simulation strategy, defined as Fast-MCTS (Expert). 
(6) Sliding-MCTS, whose simulation strategy uses a random method.\\par\n\nThe triple gait and wave gait are two typical gaits commonly used by hexapod robots. The triple gait is the fastest, while the wave gait is the slowest but the most stable. The planning of step length and foothold is the same as in the expert planning method described above. If the robot is trapped or reaches the target point, the algorithm ends. \\par \n\nThe latter three methods are planning methods based on MCTS. As mentioned in formula \\ref{alternativeStateNum}, there are $\\sum_{s \\in S}\\sum_{l \\in L}3m_{s,l}$ candidate states of state $k$. To reduce the calculation amount of the algorithm, only one foothold combination is reserved through the expert planning method for each support state. Therefore, the number of candidate states for the next step of each state is reduced to $\\sum_{s \\in S}3$. How to select valuable alternative states to accelerate the search is also a direction for future research.\\par\n\nAll algorithms run on an Intel i5 2.20GHz notebook computer and use only single-threaded programming. The parameter settings of the entire algorithm are shown in Table \\ref{parameterForAlgorithm}.\n\n\n\n\\subsection{Random Terrain Simulation Experiment}\nThe terrain of the random experiment is shown in Figure \\ref{experimentTerrainFig}. The entire map is 12.5 meters long and 5 meters wide, and the footholds are randomly distributed in this area. The starting point of the robot is the coordinate origin, the forward direction is the positive direction of the $x$ axis, and the target point is (8,0). 
When the robot advances more than 8 meters, it is regarded as having reached the target point.\\par\n\n\\begin{figure}[h] \n\\centering \n\\includegraphics[scale=0.5]{photo\/13.png} \n\\caption{Random foothold distribution experiment map} \n\\label{experimentTerrainFig} \n\\end{figure}\n\nExperiments were carried out on terrains with three different numbers of footholds: 400, 350, and 300 footholds. Each density generates 20 different maps, and experiments on six different planning schemes are performed on each map.\\par\n\nFigure \\ref{originDataDisFig} shows the raw data of the 60 experiments. The abscissa is the label of each test map, and the ordinate is the distance travelled by the robot. It can be seen intuitively that the passing capacity of the three planning methods based on sequence optimization is much higher than that of the three single-step optimization expert planning methods. As the number of footholds decreases, the robot reaches the target point less often. For the single-step optimization methods, it can be seen that in most cases the free fault-tolerant gait advances farther, but there are still some cases where the triple gait goes further. Although the free fault-tolerant gait method constructed according to expert experience improves the passing ability to a certain extent, it still cannot guarantee to outperform the other typical gait methods in all cases.\\par\n\\begin{figure}[H] \n\\centering \n\\includegraphics[scale=0.6]{photo\/originDataDisFig.pdf} \n\\caption{Comparison of advance distance for different planning methods tested on different maps.} \n\\label{originDataDisFig} \n\\end{figure}\n\nBy statistical analysis of the passability data, the error band diagrams of the different planning methods under the three foothold densities can be obtained. 
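The error bands referred to here are simply the spread of the advance distances across the 20 maps at each foothold density; a minimal sketch of the statistic, using made-up distances rather than the paper's measured data:

```python
import numpy as np

def error_band(samples):
    """Mean line of the band plus its half-width (one sample standard deviation)."""
    samples = np.asarray(samples, dtype=float)
    mean = samples.mean()
    std = samples.std(ddof=1)  # sample standard deviation over the 20 maps
    return mean, std

# advance distances (m) of one planning method over its 20 random maps --
# illustrative numbers only
distances_300 = [1.2, 0.8, 2.5, 1.0, 3.1, 0.9, 1.7, 2.2, 0.7, 1.4,
                 2.9, 1.1, 0.6, 1.9, 2.4, 1.3, 0.9, 2.0, 1.6, 1.8]
mean, std = error_band(distances_300)
band = (mean - std, mean + std)  # plotted as the shaded band around the mean
```

A broad band at a given density then indicates inconsistent performance across maps, which is the quantity compared between the expert and MCTS-based planners.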
It can be seen from Figure \\ref{aaaaaaaaaa}(a) that as the foothold density of the terrain increases, the advance distance of all planning methods gradually increases. The error bands of the triple gait, wave gait and free fault-tolerant gait become broader as the foothold density increases. In contrast, the error bands of the last three planning methods, which use MCTS, become narrower as the foothold density increases. In an environment with low foothold density, the rule-based expert methods have poor passability; therefore, in most cases, the robot has a small travel distance. The increase in foothold density improves the robot's passability. However, there are still some maps that the robot cannot pass due to the defects of the rule-based methods. Therefore, the error band becomes wider with the rise of the foothold density of the terrain. The MCTS-based sequence optimization methods are different. With the latter three planning methods, the robot can still travel long distances in most low-foothold-density environments, and only a small portion cannot be passed because the environment is too harsh. When the foothold density of the terrain increases, the robot can reach the destination on almost all maps. Therefore, the error band becomes narrower with the rise of the foothold density of the terrain.\\par\n\n\n\\begin{figure*}[t] \n\\centering \n\\includegraphics[scale=1]{photo\/aaaaaaaaaa.pdf} \n\\caption{Experimental data of different planning methods under different foothold density environments: (a) Forward distance error band diagram for different planning methods. (b) Comparison chart of the average advance distance, representing the passing ability of the robot. (c) Comparison of the average step size of the robot, indicating the robot's advancing speed. 
} \n\\label{aaaaaaaaaa} \n\\end{figure*}\n\nFigure \\ref{aaaaaaaaaa}(b) compares the passing capabilities of all planning methods. It can be seen that the free fault-tolerant gait has a significantly higher passing capacity than the triple gait and wave gait. In terms of passing ability, the three planning methods using sequence optimization are far superior to the first three methods. The passing capacities of Sliding-MCTS and Fast-MCTS are the best. \n\nIn terms of forward speed, we use the average step size over the entire planning sequence to represent the forward speed of the robot. According to Figure \\ref{aaaaaaaaaa}(c), the triple gait is the fastest, and Fast-MCTS (Expert) is the second fastest. The free fault-tolerant gait walks more slowly than Sliding-MCTS in a sparse foothold environment, but faster than Sliding-MCTS when the foothold density of the terrain is higher. It can be seen that the Sliding-MCTS method ensures the best passing ability, and it can also search for a high-speed gait sequence in a sparse foothold environment. The slowest are the wave gait and Fast-MCTS (Random). Although Fast-MCTS (Random) has a high passing capacity, the sequence it searches for has not been optimized by a large number of samples, resulting in many invalid states in the entire sequence and the lowest speed.\n\n\\begin{figure*}[ht] \n\\centering \n\\includegraphics[scale=0.6]{photo\/bbbbbb.pdf} \n\\caption{(a) Single-step planning time error band diagram for different planning methods. (b) Index comparison chart of different planning methods} \n\\label{bbbbbb} \n\\end{figure*}\n\n\nTo compare the planning time of the algorithms, we use the average single-step planning time over the entire sequence. As shown in Figure \\ref{bbbbbb}(a), the gait planning time of the first three expert methods is about 3ms, and the planning time increases as the foothold density increases. 
This is because a higher foothold density offers more available footholds to select from, and more support states can be chosen by the free fault-tolerant gait; the calculation therefore takes more time as the foothold density increases. The single-step planning time of the Fast-MCTS algorithm is about 1s. As the environment becomes harsher, the search time gradually increases. In certain sparse foothold environments, the algorithm occasionally finds solution sequences quickly. This leads to a larger error band for the planning time in environments with low foothold density. The search time of the Sliding-MCTS method is determined by the number of expansion nodes $N_{\\rm Samp}$ planned for each step and the fixed number of simulation steps $N_{\\rm SimStepNum}$. Its single-step planning time is about 30s. The reason that the planning time becomes longer as the foothold density increases is the same as for the free fault-tolerant gait. In a low-density foothold environment, the number of invalid nodes is unstable. The higher the density of environmental footholds, the fewer invalid node expansions. This is also the reason why the error band gradually narrows.\\par\n\n\nIn summary, as shown in Figure \\ref{bbbbbb}(b), the six planning methods have their own advantages and disadvantages. In terms of passability, the Fast-MCTS (Random) and Sliding-MCTS methods are the best. The expert planning methods have very poor passability, and the free fault-tolerant gait, which takes environmental fault tolerance into account, has slightly better passability. In terms of walking speed, the triple gait is the fastest, followed by Fast-MCTS (Expert), and the speeds of the free fault-tolerant gait and Sliding-MCTS are both fast. The other two methods are slow. In terms of planning time, the three planning methods using MCTS take longer, with the Fast-MCTS single-step planning time at about one second. 
The single-step planning time of Sliding-MCTS is relatively long, and it is related to the chosen parameters. However, the planning of each step of this method is independent and is not affected by subsequent plans. Therefore, Sliding-MCTS is suitable for local planning.\n\n\\subsection{Special Terrain Simulation Experiment}\n\n\n\\begin{figure*}[ht] \n\\centering \n\\includegraphics[scale=0.67]{photo\/demo.png} \n\\caption{(a) Artificially designed terrains. (b)(c)(d) Screenshots and gait charts of the robot's passage through three different terrains. The light blue part of the gait diagram corresponds to the screenshot of the robot walking. Each screenshot of the robot represents an operating state, and the red curve is the trajectory of the swinging leg at this time. In the gait diagram, black indicates that the leg is a swinging leg, yellow indicates that the leg is a supporting leg, and red indicates that the leg is a fault-tolerant leg.} \n\\label{demoFig} \n\\end{figure*}\n\n\nThe proposed Sliding-MCTS algorithm is applied to some artificial terrains to verify its validity. As shown in Figure \\ref{demoFig}(a), we designed three different terrains. The first type represents the segmented terrain that can be seen in real life. The second terrain is more extreme, with a rectangular area removed from the middle of the flat terrain. The third type represents continuous trench terrain, where the width of the trench varies. The robot can pass all three types of terrain smoothly. Figure \\ref{demoFig} shows screenshots of parts of the robot's passage through the terrains and gait diagrams of the entire process. Figures \\ref{demoFig} (b)(c) show that in a harsh environment, the robot can pass such terrain by temporarily lifting legs that have no effective foothold. In Figure \\ref{demoFig} (c), the robot even walks as a quadruped to cross the terrain. 
In Figure \\ref{demoFig} (d), the robot continuously adjusts the step size to cross the continuous trench terrain effectively. In the gait diagram, black represents the swing state, yellow represents the support state, and red represents the fault-tolerant leg (which still belongs to the swing state). It can be seen that the robot can successfully pass these challenging terrains by continuously adjusting its gait.\n\n\n\\begin{figure*}[ht] \n\\centering \n\\includegraphics[scale=1]{photo\/realRobotExperiment.pdf} \n\\caption{Elspider walking on discrete bricks.} \n\\label{realRobotExperiment} \n\\end{figure*}\n\n\n\\subsection{Physical robot experiment}\n\tWe carried out some physical experiments to illustrate the feasibility of the algorithm. The experimental terrain is set up in advance; as shown in Figure \\ref{realRobotExperiment}, bricks represent the discrete areas where the robot's feet can land. The position of the robot is measured by a visual capture system. The planning method is the Sliding-MCTS algorithm proposed in this paper. The robot needs to go straight from one side of the field to the other end. It can be seen that the robot can choose the bricks scattered on the ground as footholds and successfully reaches the target position. The experimental results show that the algorithm proposed in this paper can be effectively applied to physical robots.\n\n\n\n\n\\section{Conclusion} \nIn this work, a gait and foothold planning method based on MCTS is proposed. Before introducing the sequence optimization method, we combined fault-tolerant gait planning with free gait according to the harshness of the environment, and proposed a free fault-tolerant gait method based on expert planning. According to the particularities of applying MCTS in the field of legged robots, we made some changes to the standard MCTS and introduced two methods, Fast-MCTS and Sliding-MCTS. 
Fast-MCTS can quickly improve the passing ability of the default planning method, but its convergence is limited. Sliding-MCTS can effectively balance search time and convergence while maintaining a good passing ability. The simulation experiments verify the advantages and disadvantages of the different methods: the rule-based expert method has a fast calculation speed, while the optimization-based methods have better passability. The calculation time of the optimization methods can also meet real-time requirements. Finally, the algorithm was tested on artificially designed challenging terrains and applied on the physical robot; the results show that the proposed method achieves good passability in sparse foothold environments. In the future, we will continue to study how to increase the search speed of the algorithm and combine it with machine vision to explore wild environments in real time.\n\n\n\\section{Acknowledgements} \nThis study was supported in part by the National Natural Science Foundation of China (91948202), and the National Key Research and Development Project of China (2019YFB1309500).\n\n\n\n\\bibliographystyle{unsrt}\n\n\n\n\n\n\n\n\n\n\\newpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nThe quantum teleportation scheme was first introduced by Bennett et al.\nin 1993. In their pioneering work, they proposed a scheme for teleporting\nan unknown qubit using a maximally entangled Bell state \\cite{bennett1993}.\nSince then, many teleportation schemes have been proposed and many\nvariants of teleportation (e.g., remote state preparation (RSP), quantum\nsecret sharing (QSS), quantum information splitting (QIS), bidirectional\nteleportation (see \\cite{pathak2013elements,sharma2015controlled}\nand references therein)) have also been introduced. Recently, Yu et\nal. \\cite{yu2017} proposed a scheme for the teleportation of\ntwo different quantum states to two different receivers. 
Specifically,\nin their scheme, Alice wants to teleport a state\n\n\\begin{eqnarray}\n|\\chi_{a}\\rangle & = & \\alpha_{1}|0\\rangle^{\\otimes m}+\\beta_{1}|1\\rangle^{\\otimes m}\\label{eq:1}\n\\end{eqnarray}\n and another state\n\n\\begin{eqnarray}\n|\\chi_{b}\\rangle & = & \\alpha_{2}|0\\rangle^{\\otimes(m+1)}+\\beta_{2}|1\\rangle^{\\otimes(m+1)}\\label{eq:2}\n\\end{eqnarray}\n to Bob$_{1}$ and Bob$_{2}$, respectively, using a five-qubit cluster\nstate\n\n\\begin{eqnarray}\n|\\psi\\rangle_{12345} & = & \\frac{1}{2}(|00000\\rangle+|01011\\rangle+|10100\\rangle+|11111\\rangle)_{12345}.\\label{eq:3}\n\\end{eqnarray}\n\nIn their work, Yu et al. have referred to the state $|\\chi_{a}\\rangle$\nas an $m$-qubit state of the GHZ class and analogously to $|\\chi_{b}\\rangle$\nas an $(m+1)$-qubit state of the GHZ class. We have some reservations about\nthis nomenclature and would prefer to call these states generalized\nBell-type states, as was logically done in several earlier works \\cite{pathak2011,panigrahi2006}.\nIn fact, in Ref. \\cite{pathak2011}, it was explicitly shown that\nany quantum state of the form $\\alpha|x\\rangle+\\beta|\\bar{x}\\rangle:|\\alpha|^{2}+|\\beta|^{2}=1$,\nwhere $x$ varies from $0$ to $2^{n}-1$ and $\\bar{x}=1^{\\otimes n}\\oplus x$\nin modulo 2 arithmetic, can be teleported using a Bell state. Clearly,\nthe states considered by Yu et al. (i.e., $|\\chi_{a}\\rangle$ and\n$|\\chi_{b}\\rangle$) are of the form $\\alpha|x\\rangle+\\beta|\\bar{x}\\rangle$,\nand it's obvious that $|\\chi_{a}\\rangle$ and $|\\chi_{b}\\rangle$\ncan be independently teleported to the two receivers using two Bell states.\nThus, the use of a five-qubit cluster state or any such complicated\nquantum channel is not required to perform the multi-output teleportation\ntask considered by Yu et al. 
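The reduction underlying this claim can be checked directly: CNOT gates controlled by the first qubit map $\alpha|x\rangle+\beta|\bar{x}\rangle$ onto a product of one unknown qubit and a fixed computational-basis state, so a single Bell pair suffices per state. A small numpy check for $n=2$, with illustrative real amplitudes:

```python
import numpy as np

def cnot_chain(state, n):
    """Apply CNOT(qubit 0 -> qubit k) for k = 1..n-1 by permuting basis amplitudes."""
    out = np.zeros_like(state)
    for i in range(2 ** n):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[0]:                      # control on the first qubit
            for k in range(1, n):
                bits[k] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        out[j] = state[i]
    return out

alpha, beta, n = 0.6, 0.8, 2             # illustrative amplitudes, |a|^2+|b|^2=1
for x in range(2 ** n):                  # every generalized Bell-type state for n = 2
    xbar = x ^ (2 ** n - 1)              # flip all n bits of x
    chi = np.zeros(2 ** n)
    chi[x], chi[xbar] = alpha, beta      # alpha|x> + beta|xbar>
    out = cnot_chain(chi, n)
    nz = np.flatnonzero(out)
    # the two surviving amplitudes share the same last bit: the state factorizes
    # as (unknown single qubit) (x) (fixed basis state), so one Bell pair suffices
    assert nz[1] - nz[0] == 2 and sorted(out[nz].tolist()) == [alpha, beta]
```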
\n\nExtending the above observation, it will be apt to note that a generalized\nscheme for teleportation has been reported in \\cite{sisodia2017},\nwhere it is mentioned that the teleportation of a quantum state having\n$m$ unknown coefficients requires $\\lceil\\log_{2}m\\rceil$ Bell states.\nThe scheme proposed by Yu et al. is essentially meant for the teleportation\nof a product state $|\\psi_{ab}\\rangle=|\\chi_{a}\\rangle\\otimes|\\chi_{b}\\rangle$\nhaving four unknown coefficients $\\alpha_{1}$, $\\beta_{1}$, $\\alpha_{2}$,\nand $\\beta_{2}$, and hence requires only $\\lceil\\log_{2}4\\rceil=2$\nBell states to perform the teleportation task. In fact, the scheme\nof \\cite{sisodia2017} allows one to teleport more general quantum\nstates using two Bell state\\textcolor{black}{s}. Interestingly, despite\nthe existence of these general results, several authors have recently\nreported different types of teleportation schemes using an excessively\nlarge amount of quantum resources. For example, in Ref. \\cite{bikash2020}\na four-qubit cluster state is used as a quantum resource for teleporting\ntwo-qubit\\textcolor{black}{{} states}. The two-qubit \\textcolor{black}{state}\nused for teleportation is\n\n\\begin{eqnarray}\n|\\lambda\\rangle_{ab} & = & \\alpha|00\\rangle+\\beta|01\\rangle+\\gamma|10\\rangle+\\delta|11\\rangle.\\label{eq:4}\n\\end{eqnarray}\nNow, as per the scheme reported in \\cite{sisodia2017} and the references\ntherein, since there are four unknown coefficients in the state to\nbe teleported, it is sufficient to use $\\lceil\\log_{2}4\\rceil=2$\nBell states. 
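The resource count quoted above depends only on the number of unknown coefficients, not on the number of physical qubits; a one-line tabulation of the rule:

```python
import math

def bell_pairs_needed(num_unknown_coefficients):
    """A state with m unknown coefficients needs ceil(log2(m)) Bell states to teleport."""
    return math.ceil(math.log2(num_unknown_coefficients))

# |chi_a> and |chi_b> carry 2 unknown coefficients each, so 1 Bell state apiece;
# their product state carries 4, so 2 Bell states in total -- independent of the
# physical qubit counts m and m+1
assert bell_pairs_needed(2) == 1
assert bell_pairs_needed(4) == 2
```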
The discussion so far is sufficient to establish that\nthe resources used in Yu et al.'s paper are not optimal, and we could\nhave concluded this comment here, but the fact that they have realized\ntheir scheme for $m=1$ using IBM Quantum Experience has motivated\nus to explicitly implement our scheme for $m=1$ with the help\nof a quantum computer whose cloud-based access is provided by IBM.\n\nThe paper is organized as follows. Our scheme for multi-output teleportation\nusing two Bell states is described in Sec. \\ref{sec:circui}. Subsequently,\nthe implementation of the scheme using an IBM quantum computer and\nthe relevant results are reported in Sec. \\ref{sec:Experimental-realization-usingIBM}.\nFinally, the paper is concluded in Sec. \\ref{sec:Conclusion}. \n\n\\section{Multi-output quantum teleportation using two Bell states\\label{sec:circui}}\n\nIn 2017, Yu et al. coined the term multi-output quantum teleportation\n{\\cite{yu2017}} in an effort to propose a scheme that\nallows Alice to teleport two different single-qubit states $|\\chi_{1}\\rangle$ and\n$|\\chi_{2}\\rangle$ to two different receivers using a four-qubit\ncluster state $|\\psi\\rangle_{A_{1}A_{2}B_{1}B_{2}}$. In the original\nscheme, Alice keeps the first two qubits (indexed by subscripts\n$A_{1}$ and $A_{2}$) of the cluster state with herself and sends\nthe other two qubits to the two receivers, say Bob$_{1}$ and Bob$_{2}$\n(the qubit sent to ${\\rm Bob}_{i}$ is indexed by $B_{i}$). Now, Alice\ndoes a measurement in the cluster basis on the first four qubits $|\\psi_{i}\\rangle_{12A_{1}A_{2}}$,\ntwo of which are information qubits and the other two are the qubits\nwhich Alice kept with her. The measurement result is publicly announced,\nand the two receivers apply the corresponding unitary operators\nto obtain the desired states $|\\chi_{1}\\rangle$ and\n$|\\chi_{2}\\rangle$, respectively. 
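The measure-announce-correct pattern just described can be checked numerically for the elementary single-qubit teleportation primitive, which is exactly what the two-Bell-state scheme runs twice in parallel. A minimal statevector sketch (the amplitudes 0.6 and 0.8 are arbitrary illustrative values):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def cnot01(state):
    """CNOT with control qubit 0 and target qubit 1 on a 3-qubit statevector."""
    s = state.reshape(2, 2, 2).copy()
    s[1] = s[1, ::-1].copy()  # flip qubit 1 on the |1> branch of qubit 0
    return s.reshape(8)

def h0(state):
    """Hadamard on qubit 0 (the most significant qubit)."""
    return (H @ state.reshape(2, 4)).reshape(8)

psi = np.array([0.6, 0.8])                  # arbitrary unknown qubit of Alice
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # |phi+> shared by Alice and Bob
state = h0(cnot01(np.kron(psi, bell)))      # Alice's Bell-measurement basis change

for m0 in (0, 1):
    for m1 in (0, 1):
        branch = state.reshape(2, 2, 2)[m0, m1].copy()
        branch /= np.linalg.norm(branch)    # Bob's post-measurement qubit
        if m1: branch = X @ branch          # corrections chosen from the
        if m0: branch = Z @ branch          # publicly announced outcome (m0, m1)
        assert np.allclose(branch, psi)     # every branch returns the unknown state
```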
Along similar lines, in\n2021, Yu et al. proposed another scheme for multi-output quantum\nteleportation, but this time the states to be teleported were $m$-qubit\nand $(m+1)$-qubit states (cf. Eqs. (\\ref{eq:1}) and (\\ref{eq:2})\nand the related discussions in the previous section) and the quantum\nchannel used was a five-qubit cluster state (see Eq. (\\ref{eq:3})).\nWe have already mentioned that the same multi-output teleportation\ntask can be done using two Bell states. As the experimental part of\nYu et al. is restricted to the $m=1$ case, for comparison, in Fig. \\ref{fig:Multi-output-quantum-teleportati},\nwe explicitly show the schematic of the quantum circuit that will\nbe required for performing the task using two Bell states. Let $|\\chi_{a}\\rangle$\nand $|\\chi_{b}\\rangle$ be the two states to be teleported (Eqs. (\\ref{eq:1})\nand (\\ref{eq:2}) for $m=1$). The state $|\\chi_{b}\\rangle$ can be\nreduced to a simpler state $|\\chi_{b}^{\\prime}\\rangle$ after applying\na CNOT operation with control on the first qubit and target on the\nsecond qubit. Now the problem reduces to the teleportation of the\nproduct state of $|\\chi_{a}\\rangle$ and $|\\chi_{b}^{\\prime}\\rangle=\\alpha_{2}|0\\rangle+\\beta_{2}|1\\rangle$,\nas\n\n\\begin{eqnarray}\nCNOT|\\chi_{b}\\rangle & \\longrightarrow & |\\chi'_{b}\\rangle|0\\rangle=(\\alpha_{2}|0\\rangle+\\beta_{2}|1\\rangle)\\otimes|0\\rangle.\\label{eq:5}\n\\end{eqnarray}\n\n\\begin{figure}\n\\begin{centering}\n\\includegraphics[scale=0.5]{Figures\/blockdiag}\n\\par\\end{centering}\n\\caption{(Color online) An optimal quantum circuit illustrating a multi-output\nquantum teleportation scheme\\label{fig:Multi-output-quantum-teleportati}}\n\n\\end{figure}\n\n\n\\section{Experimental realization using an IBM quantum computer\\label{sec:Experimental-realization-usingIBM}}\n\nWe have designed a simple (but experimentally realizable using IBM\nQuantum Experience) circuit shown in Fig. 
\\ref{fig:MQTCkt} (a) which\nis equivalent to the schematic of the circuit shown in Fig. \\ref{fig:Multi-output-quantum-teleportati}\nexcept for the presence of the first and last CNOT gates. Local operations\nperformed by these two CNOT gates do not affect the main teleportation\npart. This circuit is run in IBM Quantum Composer to yield the results\nreported in the following subsection. There is another reason for\nimplementing the circuit without the CNOT gates, as that allowed us\nto use ibmq\\_casablanca, which is a seven-qubit quantum computer that\nhas enough resources to implement the circuit shown in Fig. \\ref{fig:MQTCkt}\n(a), but not enough qubits to implement the technically equivalent\ncircuit shown in Fig. \\ref{fig:Multi-output-quantum-teleportati}.\nThe ibmq\\_casablanca is one of the IBM Quantum Falcon processors \\cite{ibm2021}.\nThe circuit given in Fig. \\ref{fig:MQTCkt} (a) can be briefly described\nas a process in which Alice wants to teleport $\\frac{1}{\\sqrt{2}}(|0\\rangle+|1\\rangle)\\otimes\\frac{1}{\\sqrt{2}}(|0\\rangle+|1\\rangle)=|+\\rangle\\otimes|+\\rangle$\nto the receivers Bob$_{1}$ and Bob$_{2}$, respectively, as the first\nCNOT in Fig. \\ref{fig:Multi-output-quantum-teleportati} can transform\nthe Bell state $|\\phi^{+}\\rangle=\\frac{1}{\\sqrt{2}}(|00\\rangle+|11\\rangle)$\nto the separable state $\\frac{1}{\\sqrt{2}}(|0\\rangle+|1\\rangle)\\otimes|0\\rangle$,\nand the last CNOT can recreate it at the receiver's end with the help\nof an ancilla qubit and the output of the teleportation process. In Fig.\n\\ref{fig:MQTCkt} (a), the first two qubits are the information qubits\nand the last four qubits are the quantum channel used for the teleportation,\nwhich consists of two Bell states, as desired and argued above\nto be a sufficient resource. 
Now Alice does a Bell measurement on the first\n($Q_{1}$) and third ($Q_{0}$) qubits and another Bell measurement\non the second ($Q_{5}$) and fifth ($Q_{4}$) qubits, and then sends the\nmeasurement results to Bob$_{1}$ and Bob$_{2}$. Here it may be noted\nthat qubit numbers are indexed in accordance with the convention adopted\nby IBM Quantum Experience in describing the 7-qubit quantum computer\nwhose topology is shown in Fig. \\ref{fig:MQTCkt} (b). Further, the\nqubits are chosen such that the circuit after transpilation has a\nminimal circuit cost \\cite{dueck2018optimization}. According to the\nmeasurement results announced by Alice, Bob$_{1}$ and Bob$_{2}$\napply the corresponding unitaries to obtain the teleported states. Clearly,\nthis is just two independent implementations of the standard teleportation\ncircuit, and the same is enough to achieve what is done using costly\nquantum resources in the earlier works.\n\n\\begin{figure}\n\\begin{centering}\n\\includegraphics[scale=0.5]{Figures\/MQTCktTopo}\n\\par\\end{centering}\n\\centering{}\\caption{(Color online) (a) Quantum circuit for the teleportation of the states\n$\\frac{1}{\\sqrt{2}}(|0\\rangle+|1\\rangle)$ and $\\frac{1}{\\sqrt{2}}(|0\\rangle+|1\\rangle)$\nto two different receivers Bob$_{1}$ and Bob$_{2}$ simultaneously\nusing 2 Bell states $|\\phi^{+}\\rangle^{\\otimes2}$ (b) Topology of\nibmq\\_casablanca. \\label{fig:MQTCkt}}\n\\end{figure}\n\n\n\\subsection{Results}\n\nThe circuit described above is run using ibmq\\_casablanca, which is\na 7-qubit superconducting quantum computer that uses transmon\nqubits. The obtained result is illustrated in Fig \\ref{fig:results}.\nAs we teleported $|+\\rangle|+\\rangle$, it was expected that the output\nstates $|00\\rangle,|01\\rangle,|10\\rangle$ and $|11\\rangle$ would\nappear with equal probability, but from Fig. 
\\ref{fig:results} we\ncan see that the states are produced with slightly different probabilities;\nthe same is also depicted in the corresponding density matrix shown\nin Fig. \\ref{fig:ExDM}. This is because of the inherent implementation\nerrors summarized in Table \\ref{tab:Calibration-data-of}. The fidelity\nbetween the state produced and the expected state is computed using\nthe formula $F(\\sigma,\\rho)=Tr\\left[\\sqrt{\\sqrt{\\sigma}\\rho\\sqrt{\\sigma}}\\right]^{2}$,\nwhere $\\sigma$ is the theoretical (expected) density matrix of the final\nstate and $\\rho$ is the density matrix of the experimentally obtained\nfinal state. The fidelity is obtained as 84.64 \\% for the case illustrated\nhere, for a particular experiment comprising 8192 runs.\nTo check the consistency of the result, the same exercise\nis repeated 10 times and the fidelities are obtained as (in \\%) 77.51,\n84.64, 79.31, 78.98, 76.17, 81.33, 83.64, 80.21, 74.65, 79.92. The\nmean is 79.64 and the sample standard deviation is 3.096. This is a reasonably accurate result,\nand the fidelity is quite high compared to the classical limit of\n2\/3. This simply establishes that the resources used in the earlier works\nwere not optimal. The fidelity cannot be compared with the earlier work,\nas Yu et al. have not reported it. However, it's obvious that the simpler\nentangled states used here will be \\textcolor{black}{affected} less\nby the noise. \n\n\\begin{figure}\n\\begin{centering}\n\\includegraphics[scale=0.6]{Figures\/RDMQT2}\n\\par\\end{centering}\n\\centering{}\\caption{(Color online) Experimental result for the quantum circuit shown in\nFig \\ref{fig:MQTCkt}. 
\\label{fig:results}}\n\\end{figure}\n\n\\begin{figure}\n\\begin{centering}\n\\includegraphics[scale=0.6]{Figures\/EDMMQTe}\n\\par\\end{centering}\n\\caption{(Color Online) Experimental quantum state tomography result for the\ncircuit shown in Fig \\ref{fig:MQTCkt}.\\label{fig:ExDM}}\n\\end{figure}\n\n\\begin{table}\n\\begin{centering}\n\\begin{tabular}{|c|c|c|>{\\centering}p{2cm}|>{\\centering}p{2cm}|>{\\centering}p{2cm}|>{\\centering}p{5cm}|}\n\\hline \nQubit & T1 ($\\mu s$) & T2 ($\\mu s$) & \\centering{}Frequency (GHz) & \\centering{}Readout assignment error & \\centering{}Single-qubit Pauli-X-error & \\centering{}CNOT error\\tabularnewline\n\\hline \n$Q_{0}$ & 97.07 & 41.56 & \\centering{}4.822 & \\centering{}$3.52\\times10^{-2}$ & \\centering{}$2.73\\times10^{-4}$ & \\centering{}cx0\\_1: $1.105\\times10^{-2}$\\tabularnewline\n\\hline \n$Q_{1}$ & 179.27 & 106.63 & \\centering{}4.76 & \\centering{}$1.56\\times10^{-2}$ & \\centering{}$1.56\\times10^{-4}$ & \\centering{}cx1\\_3: $6.796\\times10^{-3}$, cx1\\_2: $1.013\\times10^{-2}$,\ncx1\\_0: $1.105\\times10^{-2}$\\tabularnewline\n\\hline \n$Q_{2}$ & 164.86 & 96.43 & \\centering{}4.906 & \\centering{}$8.50\\times10^{-3}$ & \\centering{}$3.54\\times10^{-4}$ & \\centering{}cx2\\_1: $1.013\\times10^{-2}$\\tabularnewline\n\\hline \n$Q_{3}$ & 123.23 & 151.27 & \\centering{}4.879 & \\centering{}$1.70\\times10^{-2}$ & \\centering{}$3.40\\times10^{-4}$ & \\centering{}cx3\\_1: $6.796\\times10^{-3}$, cx3\\_5: $1.139\\times10^{-2}$\\tabularnewline\n\\hline \n$Q_{4}$ & 128.4 & 54.14 & 4.871 & \\centering{}$3.06\\times10^{-2}$ & \\centering{}$2.88\\times10^{-4}$ & cx4\\_5: $1.148\\times10^{-2}$\\tabularnewline\n\\hline \n$Q_{5}$ & 133.5 & 91.77 & 4.964 & \\centering{}$9.60\\times10^{-3}$ & \\centering{}$3.17\\times10^{-4}$ & cx5\\_3: $1.139\\times10^{-2}$, cx5\\_4:$1.148\\times10^{-2}$, cx5\\_6:\n$1.156\\times10^{-2}$,\\tabularnewline\n\\hline \n$Q_{6}$ & 112.08 & 166.07 & 5.177 & \\centering{}$2.18\\times10^{-2}$ & 
\\centering{}$4.70\\times10^{-4}$ & cx6\\_5:$1.156\\times10^{-2}$\\tabularnewline\n\\hline \n\\end{tabular}\n\\par\\end{centering}\n\\caption{Calibration data of ibmq\\_casablanca on Dec 01, 2021. cxi\\_j represents the\nCNOT gate with control qubit i and target qubit j.\\label{tab:Calibration-data-of}}\n\\end{table}\n\n\n\\section{Conclusion\\label{sec:Conclusion}}\n\nIt's shown that the quantum resources used by Yu et al. \\cite{yu2021}\nfor multi-output teleportation were not optimal, and the same drawback\nexists in \\cite{bikash2020} and other similar works. Relevant existing\nresults are noted, and it's explicitly shown that ibmq\\_casablanca\ncan be used to implement the task described by Yu et al. using only\ntwo Bell states. Here the purpose was only to show that cluster states\nand similar resources are not required for performing this type of\ntask, and consequently we have restricted ourselves to the simplest possible\nimplementation of multi-output quantum teleportation. It is straightforward\nto extend this approach to the multi-output teleportation of more\ncomplex quantum states.\n\n\\section*{Acknowledgment:}\n\nThe author acknowledges support from the QUEST scheme of the Interdisciplinary\nCyber Physical Systems (ICPS) program of the Department of Science\nand Technology (DST), India (Grant No.: DST\/ICPS\/QuST\/Theme-1\/2019\/14\n(Q80)). He also thanks Anirban Pathak for his feedback and advice\nin relation to the present work.\n\n\\bibliographystyle{ieeetr}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nDue to the lack of a Riemann mapping theorem in several complex variables, it is of fundamental importance to study the biholomorphic equivalence of various domains in $\\cv^n$, $n\\ge 2$. For such a study, it is necessary to introduce different kinds of holomorphic invariants. 
In this paper, we study two such invariants, the Fridman invariants and (generalized) squeezing functions.\n\nThe Fridman invariant was defined by Fridman in \\cite{Fridman1983} for Kobayashi hyperbolic domains $D$ in $\\cv^n$, $n\\ge 1$, as follows. Denote by $B_D^k(z,r)$ the $k_D$-ball in $D$ centered at $z\\in D$ with radius $r>0$, where $k_D$ is the Kobayashi distance on $D$. For two domains $D_1$ and $D_2$ in $\\cv^n$, denote by $\\oc_u(D_1,D_2)$ the set of \\textit{injective} holomorphic maps from $D_1$ into $D_2$.\n\nRecall that a domain $\\Omega\\subset \\cv^n$ is said to be \\textit{homogeneous} if the automorphism group of $\\Omega$ is transitive. For any bounded homogeneous domain $\\Omega$, set\n$$h_D^\\Omega(z)=\\inf \\{1\/r:\\ B_D^k(z,r)\\subset f(\\Omega),\\ f\\in \\oc_u(\\Omega,D)\\}.$$\nFor comparison purposes, we call $e_D^\\Omega(z):=\\tanh (h_D^\\Omega(z))^{-1}$ the \\textit{Fridman invariant} (cf. \\cite{Deng-Zhang2019, Nikolov-Verma2018}).\n\nFor any bounded domain $D\\subset \\cv^n$, the \\textit{squeezing function} was introduced in \\cite{Deng2012} by Deng, Guan and Zhang as follows:\n$$s_D(z)=\\sup \\{r:\\ r\\bv^n\\subset f(D),\\ f\\in \\oc_u(D,\\bv^n),\\ f(z)=0\\}.$$\nHere $\\bv^n$ denotes the unit ball in $\\cv^n$. Comparing with the Fridman invariant, it seems natural to consider more general squeezing functions, replacing $\\bv^n$ by other ``model domains\".\n\nRecall that a domain $\\Omega$ is said to be \\textit{balanced} if for any $z\\in \\Omega$, $\\lambda z\\in \\Omega$ for all $|\\lambda|\\le 1$. Let $\\Omega$ be a bounded, balanced and convex domain in $\\cv^n$. The \\textit{Minkowski function} $\\rho_\\Omega$ is defined as (see e.g. \\cite{Pflug2013})\n$$\\rho_\\Omega(z)=\\inf \\{t>0:\\ z\/t\\in \\Omega\\},\\ \\ \\ z\\in \\cv^n.$$\nNote that $\\Omega=\\{z\\in \\cv^n:\\ \\rho_\\Omega(z)<1\\}$. Set $\\Omega(r)=\\{z\\in \\cv^n:\\ \\rho_\\Omega(z)0$ such that $\\bv^n(z,s)\\subset B_D^k(z,r_1)$. 
By Cauchy's inequality, for any $i$, $|\\det g_i'(z)|<c$ for some positive constant $c$, and hence $|\\det f_i'(0)|>\\frac{1}{c}$, for any $i$. Thus, we have $|\\det f'(0)|>0$ and $|\\det g'(z)|>0$. By Lemma \\ref{lh}, both $f$ and $g$ are injective. In particular, $f(\\Omega)\\subset D$ and $g(B_D^k(z,\\arctanh(e_D^\\Omega(z))))\\subset \\Omega$. Since $f\\circ g(w)=w$, for all $w\\in B_D^k(z,\\arctanh(e_D^\\Omega(z)))$, which shows that $f$ is the desired extremal map.\n\\end{proof}\n\nBased on Theorem \\ref{tee}, we can give another proof of \\cite[Theorem 1.3(2)]{Fridman1983} as follows.\n\n\\begin{thm}\nIf there exists $z\\in D$ such that $e_D^\\Omega(z)=1$, then $D$ is biholomorphically equivalent to $\\Omega$.\n\\end{thm}\n\\begin{proof}\nSince $\\Omega$ is homogeneous, $s_\\Omega(z)\\equiv c$ for some positive number $c$. Thus, by \\cite[Theorem 4.7]{Deng2012}, $\\Omega$ is Kobayashi complete, hence taut.\n\nWithout loss of generality, assume that $0\\in \\Omega$. Let $f_i$'s and $g_i$'s be as in the proof of Theorem \\ref{tee}. Since $e_D^\\Omega(z)=1$, we have $\\bigcup_i B_D^k(z,r_i)=D$.\n\nSince $\\Omega$ is taut, by \\cite[Theorem 5.1.5]{Kobayashi98}, there exists a subsequence $\\{g_{k_i}\\}$ of $\\{g_i\\}$ which converges to a holomorphic map $g:D \\rightarrow \\Omega$ uniformly on compact subsets of $D$. By the decreasing property of the Kobayashi distance, for $z_1,z_2\\in D$ such that $g(z_1)=g(z_2)$, we have for $k_i$ large enough,\n$$k_D(z_1,z_2)\\le k_{f_{k_i}(\\Omega)}(f_{k_i}\\circ g_{k_i}(z_1),f_{k_i}\\circ g_{k_i}(z_2))=k_\\Omega(g_{k_i}(z_1),g_{k_i}(z_2)).$$\nLetting $k_i\\rightarrow \\infty$, by the continuity of the Kobayashi distance, we have $k_D(z_1,z_2)\\le k_\\Omega(g(z_1),g(z_2))=0$. Since $D$ is Kobayashi hyperbolic, we have $z_1=z_2$. 
Thus, $g$ is injective and $D$ is biholomorphic to a bounded domain.\n\nNow Theorem \\ref{tee} applies and shows that $D$ is biholomorphically equivalent to $\\Omega$.\n\\end{proof}\n\nIt was shown in \\cite[Theorem 1.3(1)]{Fridman1983} that $h_D^\\Omega(z)$, hence $e_D^\\Omega(z)$, is continuous. For its proof, Fridman showed that for $z_1$ and $z_2$ sufficiently close, $|1\/h_D^\\Omega(z_1)-1\/h_D^\\Omega(z_2)|\\le k_D(z_1,z_2)$. Our next result gives a ``global\" version of this estimate in terms of $e_D^\\Omega(z)$.\n\n\\begin{thm}\\label{tec}\nFor any $z_1$ and $z_2$ in $D$, we have\n$$|e_D^\\Omega(z_1)-e_D^\\Omega(z_2)|\\le \\tanh[k_D(z_1,z_2)].$$\n\\end{thm}\n\nFor the proof of Theorem \\ref{tec}, we need the following basic fact, whose proof we provide for completeness.\n\n\\begin{lem}\\label{ltanh}\nSuppose that $t_i\\ge 0$, $i=1,2,3$, and $t_3\\le t_1+t_2$. Then,\n$$\\tanh(t_3)\\le \\tanh(t_1)+\\tanh(t_2).$$\n\\end{lem}\n\\begin{proof}\nSince $t_3\\le t_1+t_2$, we have\n$$-\\frac{2}{e^{2t_3}+1}-1\\le -\\frac{2}{e^{2(t_1+t_2)}+1}-1.$$\nDefine\n$$f(t_1,t_2)=\\frac{2}{e^{2t_1}+1}+\\frac{2}{e^{2t_2}+1}-\\frac{2}{e^{2(t_1+t_2)}+1}-1.$$\nTo show that $\\tanh(t_3)\\le \\tanh(t_1)+\\tanh(t_2)$, it suffices to show that $f(t_1,t_2)\\le 0$, for all $t_1,t_2\\ge 0$. For any fixed $t_1\\ge 0$, consider\n$$g(t_2)=\\frac{2}{e^{2(t_1+t_2)}+1}-\\frac{2}{e^{2t_2}+1}.$$\nThen,\n$$g'(t_2)=-\\frac{4e^{2(t_1+t_2)}}{(e^{2(t_1+t_2)}+1)^{2}}+\\frac{4e^{2t_2}}{(e^{2t_2}+1)^{2}}.$$\nSince the function $\\ds \\frac{e^t}{(e^t+1)^{2}}$ is decreasing for $t\\ge 0$, we have $g'(t_2)\\ge 0$ for all $t_2\\ge 0$. 
Hence, $g(t_2)\\ge g(0)$ for all $t_2\\ge 0$, which implies that $f(t_1,t_2)=g(0)-g(t_2)\\le 0$ for all $t_1,t_2\\ge 0$.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{tec}]\nFix $0<\\epsilon<e_D^\\Omega(z_1)$. If $\\tanh[k_D(z_1,z_2)]\\ge e_D^\\Omega(z_1)-\\epsilon$, then it is obvious that\n$$e_D^\\Omega(z_2)>0\\ge e_D^\\Omega(z_1)-\\epsilon-\\tanh[k_D(z_1,z_2)].$$\n\nIf $z_2\\in B_D^k(z_1,\\arctanh[e_D^\\Omega(z_1)-\\epsilon])$, then by Lemma \\ref{ltanh}, we have for all $z$ with $\\tanh[k_D(z_2,z)]0$ such that $K\\subset D_j$ for all $j>N$. In this case, we also say that $\\{D_j\\}_{j\\ge 1}$ \\textit{exhausts} $D$.\n\n\\begin{cor}\\label{cek}\nLet $\\{D_j\\}_{j\\ge 1}$ be a sequence of exhausting subdomains of $D$. If $\\ds \\lim_{j\\rightarrow \\infty}e_{D_j}^\\Omega(z)=e_D^\\Omega(z)$ for all $z\\in D$, then the convergence is uniform on compact subsets of $D$.\n\\end{cor}\n\\begin{proof}\nLet $K$ be a compact subset of $D$. Then there exist $r>0$ and $N_1>0$ such that $\\bigcup_{z\\in K}\\bv^n(z,r)\\subset D_j$ for all $j>N_1$. Fix any $\\epsilon>0$ and take $\\delta=r\\epsilon\/3$. Since $\\{\\bv^n(z,\\delta)\\}_{z\\in K}$ is an open covering of $K$, there is a finite set $\\{z_i\\}_{i=1}^m$ such that $K\\subset \\bigcup_{i=1}^m \\bv^n(z_i,\\delta)$. For any $z\\in K$, there is some $z_i$ such that $z\\in \\bv^n(z_i,\\delta)$. By Theorem \\ref{tec} and the decreasing property of the Kobayashi distance, we have\n\\begin{align*}\n|e_D^\\Omega(z)-e_{D_j}^\\Omega(z)|\n&\\le |e_D^\\Omega(z)-e_D^\\Omega(z_i)|+|e_D^\\Omega(z_i)-e_{D_j}^\\Omega(z_i)|+|e_{D_j}^\\Omega(z_i)-e_{D_j}^\\Omega(z)|\\\\\n&\\le \\tanh[k_D(z,z_i)]+|e_D^\\Omega(z_i)-e_{D_j}^\\Omega(z_i)|+\\tanh[k_{D_j}(z,z_i)]\\\\\n&\\le 2\\tanh[k_{\\bv^n(z_i,r)}(z,z_i)]+|e_D^\\Omega(z_i)-e_{D_j}^\\Omega(z_i)|\\\\\n&<2\\epsilon\/3+|e_D^\\Omega(z_i)-e_{D_j}^\\Omega(z_i)|.\n\\end{align*}\nOn the other hand, there exists $N_2>0$ such that $|e_D^\\Omega(z_i)-e_{D_j}^\\Omega(z_i)|<\\epsilon\/3$ for all $z_i$ and $j>N_2$. Take $N=\\max\\{N_1,N_2\\}$. 
Then for any $j>N$, we have $|e_D^\\Omega(z)-e_{D_j}^\\Omega(z)|<\\epsilon$ for all $z\\in K$.\nThis completes the proof.\n\\end{proof}\n\nThe condition $\\ds \\lim_{j\\rightarrow \\infty}e_{D_j}^\\Omega(z)=e_D^\\Omega(z)$ in the previous corollary is usually referred to as the \\textit{stability} of the Fridman invariant, which was shown to be true when $D$ is Kobayashi complete in \\cite[Theorem 2.1]{Fridman1983}. Under the weaker assumption of $D$ being taut (or bounded), we have the following inequality.\n\n\\begin{thm}\\label{tes}\nSuppose that $D$ is bounded or taut. Let $\\{D_j\\}_{j\\ge 1}$ be a sequence of exhausting subdomains of $D$. Then for any $z\\in D$, $\\ds \\limsup_{j\\rightarrow \\infty} e_{D_j}^\\Omega(z)\\le e_D^\\Omega(z)$.\n\\end{thm}\n\nTo prove Theorem \\ref{tes}, we need the following\n\n\\begin{lem}\\label{lbe}\nLet $\\{D_j\\}_{j\\ge 1}$ be a sequence of exhausting subdomains of $D$. Then for any $z\\in D$ and $r>0$, $\\{B_{D_j}^k(z,r)\\}_{j\\ge 1}$ exhausts $B_D^k(z,r)$.\n\\end{lem}\n\\begin{proof}\nBy Lemma \\ref{lbc}, we know that $B_D^k(z,r)$ is a subdomain of $D$ for any $z\\in D$ and $r>0$. Firstly, we show that\n$$\\lim_{j\\rightarrow \\infty}k_{D_j}(z',z'')=k_D(z',z''),\\ \\ \\forall z',z''\\in D.$$\nConsider a sequence of subdomains $\\{G_j\\}_{j\\ge 1}$ such that (i) $G_j\\Subset D$, (ii) $G_j\\subset G_{j+1}$, (iii) $D=\\bigcup_{j\\ge 1} G_j$. By \\cite[Proposition 3.3.5]{Pflug2013}, we have\n$$\\lim_{j\\rightarrow \\infty}k_{G_j}(z',z'')=k_D(z',z''),\\ \\ \\forall z',z''\\in D.$$\nFor any $j\\ge 1$, there exists $N_j>0$ such that $G_j\\subset D_i$, for all $i>N_j$. 
By the decreasing property of the Kobayashi distance, we get\n$$\\lim_{j\\rightarrow \\infty}k_{D_j}(z',z'')=k_D(z',z''),\\ \\ \\forall z',z''\\in D.$$\n\nNow we prove that for any $K\\Subset B_D^k(z,r)$, there exists $N>0$ such that $K\\subset B_{D_j}^k(z,r)$ for all $j>N$.\n\nSince $k_D(z,\\cdot)$ is continuous, there exists $\\delta>0$ such that $\\bigcup_{w\\in K} \\bv^n(w,\\delta)\\Subset B_D^k(z,r)$. Hence, there exists $N_1>0$ such that $\\bigcup_{w\\in K} \\bv^n(w,\\delta)\\subset D_j$ for all $j>N_1$.\n\nLet $0<\\epsilon<r-\\sup_{w\\in K}k_D(z,w)$. Take $0<\\delta_1<\\delta$ and a finite set $\\{z_l\\}_{l=1}^m\\subset K$ such that $K\\subset \\bigcup_{l=1}^m \\bv^n(z_l,\\delta_1)$ and $k_{\\bv^n(z_l,\\delta)}(z_l,w)<\\epsilon\/3$ whenever $w\\in \\bv^n(z_l,\\delta_1)$. Then, there exists $N_2>0$ such that $|k_{D_j}(z,z_l)-k_D(z,z_l)|<\\epsilon\/3$ for any $j>N_2$ and $1\\le l\\le m$. For any $w\\in K$, there is some $z_l$ such that $w\\in \\bv^n(z_l,\\delta_1)$. Set $N=\\max\\{N_1,N_2\\}$. Then for all $j>N$, by the decreasing property of the Kobayashi distance, we have\n\\begin{align*}\n&|k_{D_j}(z,w)-k_D(z,w)|\\\\\n\\le &|k_{D_j}(z,w)-k_{D_j}(z,z_l)|+|k_{D_j}(z,z_l)-k_D(z,z_l)|+|k_D(z,z_l)-k_D(z,w)|\\\\\n\\le &k_{D_j}(z_l,w)+|k_{D_j}(z,z_l)-k_D(z,z_l)|+k_D(z_l,w)\\\\\n\\le &2k_{\\bv^n(z_l,\\delta)}(z_l,w)+|k_{D_j}(z,z_l)-k_D(z,z_l)|\\\\\n<&2\\epsilon\/3+\\epsilon\/3=\\epsilon.\n\\end{align*}\nTherefore, $k_{D_j}(z,w)<k_D(z,w)+\\epsilon<r$, i.e., $K\\subset B_{D_j}^k(z,r)$, for all $j>N$. This completes the proof.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{tes}]\nSince the proof for the taut case is similar to (and simpler than) that for the bounded case, we will assume that $D$ is bounded.\n\nFor any $z\\in D$, let $e_{D_{l_i}}^\\Omega$ be a sequence such that $\\ds \\lim_{l_i\\rightarrow \\infty}e_{D_{l_i}}^\\Omega(z)=\\limsup_{j\\rightarrow \\infty} e^\\Omega_{D_j}(z)=:\\tanh r$. For any $0<\\epsilon<r$, there exists $N_1>0$ such that $e_{D_{l_i}}^\\Omega(z)>\\tanh(r-\\epsilon)$ for all $l_i>N_1$.\n\nWithout loss of generality, assume that $0\\in \\Omega$. By definition, for any $l_i>N_1$, there exists an open holomorphic embedding $f_{l_i}:\\Omega \\rightarrow D_{l_i}$ such that $f_{l_i}(0)=z$ and $B_{D_{l_i}}^k(z,r-\\epsilon) \\subset f_{l_i}(\\Omega)$. 
Since $D$ is bounded, by Montel's theorem, there exists a subsequence $\\{f_{k_i}\\}$ of $\\{f_{l_i}\\}$ which converges to a holomorphic map $f:\\Omega \\rightarrow \\bar{D}$ uniformly on compact subsets of $\\Omega$.\n\nBy Lemma \\ref{lbc}, each $B_{D_{l_i}}^k(z,r-\\epsilon)$ is a domain. Define $g_{l_i}=f_{l_i}^{-1}|B_{D_{l_i}}^k(z,r-\\epsilon)$. By Montel's theorem and Lemma \\ref{lbe}, we may assume that the sequence $g_{k_i}$ converges uniformly on compact subsets of $B_D^k(z,r-\\epsilon)$ to a holomorphic map $g:B_D^k(z,r-\\epsilon)\\rightarrow \\bar\\Omega$.\n\nTake $s>0$ such that $\\bv^n(z,s)\\Subset B_D^k(z,r-\\epsilon)$. By Lemma \\ref{lbe}, there exists $N>N_1$ such that $\\bv^n(z,s)\\subset B_{D_{l_i}}^k(z,r-\\epsilon)$, for all $l_i>N$. Consider $g_{l_i}|\\bv^n(z,s)$. By Cauchy's inequality, $|\\det g_{l_i}'(z)|<c$ for all $l_i>N$, for some positive constant $c$. So we have $|\\det f_{l_i}'(0)|>\\frac{1}{c}$ for all $l_i>N$. Thus, we have $|\\det f'(0)|>0$ and $|\\det g'(z)|>0$. By Lemma \\ref{lh}, both $f$ and $g$ are injective. In particular, $f(\\Omega)\\subset D$ with $f(0)=z$ and $g(B_D^k(z,r-\\epsilon))\\subset \\Omega$ with $g(z)=0$. Since $f\\circ g(w)=w$ for all $w\\in B_D^k(z,r-\\epsilon)$, we get $e_D^\\Omega(z)\\ge \\tanh(r-\\epsilon)$. Since $\\epsilon$ is arbitrary, we have $\\ds e_D^\\Omega(z)\\ge \\tanh r=\\limsup_{j\\rightarrow \\infty} e^\\Omega_{D_j}(z)$.\n\\end{proof}\n\nBased on Corollary \\ref{cek} and Theorem \\ref{tes}, we can slightly refine \\cite[Theorem 2.1]{Fridman1983} as follows.\n\n\\begin{thm}\nSuppose that $D$ is Kobayashi complete and $\\{D_j\\}_{j\\ge 1}$ exhausts $D$. 
Then $\\ds \\lim_{j\\rightarrow \\infty} e_{D_j}^\\Omega(z)=e_D^\\Omega(z)$ uniformly on compact subsets of $D$.\n\\end{thm}\n\\begin{proof}\nSince $D$ is Kobayashi complete, thus taut, we have $\\ds \\limsup_{j\\rightarrow \\infty} e_{D_j}^\\Omega(z)\\le e_D^\\Omega(z)$ for all $z\\in D$, by Theorem \\ref{tes}.\n\nFor $z\\in D$ and $0<\\epsilon<e_D^\\Omega(z)$, let $f\\in \\oc_u(\\Omega,D)$ be an extremal map at $z$ given by Theorem \\ref{tee}. Then there exists $\\delta>0$ such that $B_D^k(z,e_D^\\Omega(z)-\\epsilon)\\subset f((1-\\delta)\\Omega)\\Subset D$. Hence, there exists $N>0$ such that $B_D^k(z,e_D^\\Omega(z)-\\epsilon)\\subset f((1-\\delta)\\Omega)\\subset D_j$ for all $j>N$. By the decreasing property of the Kobayashi distance, we have $B^k_{D_j}(z,e_D^\\Omega(z)-\\epsilon)\\subset B_D^k(z,e_D^\\Omega(z)-\\epsilon)$. So we have $B^k_{D_j}(z,e_D^\\Omega(z)-\\epsilon)\\subset f((1-\\delta)\\Omega)$ for all $j>N$, which implies that $\\ds \\liminf_{j\\rightarrow \\infty} e_{D_j}^\\Omega(z)\\ge e_D^\\Omega(z)-\\epsilon$. Since $\\epsilon$ is arbitrary, we get $\\ds \\liminf_{j\\rightarrow \\infty} e_{D_j}^\\Omega(z)\\ge e_D^\\Omega(z)$ and hence $\\ds \\lim_{j\\rightarrow \\infty}e_{D_j}^\\Omega(z)=e_D^\\Omega(z)$. By Corollary \\ref{cek}, the convergence is uniform on compact subsets of $D$.\n\\end{proof}\n\n\\section{Generalized squeezing functions}\\label{S:squeezing}\n\nThroughout this section, we suppose that $D$ is a bounded domain in $\\cv^n$ and $\\Omega$ is a bounded, balanced and convex domain in $\\cv^n$ (unless otherwise stated).\n\nDenote by $k_\\Omega$ and $c_\\Omega$ the Kobayashi and Carath\\'{e}odory distance on $\\Omega$, respectively. 
The following theorem of Lempert is well-known:\n\n\\begin{thm}\\cite[Theorem 1]{L:convex}\\label{T:convex}\nOn a convex domain $\\Omega$, $k_\\Omega=c_\\Omega$.\n\\end{thm}\n\nCombining Theorem \\ref{T:convex} with \\cite[Proposition 2.3.1 (c)]{Pflug2013}, we have the following key lemma.\n\n\\begin{lem}\\label{lnk}\nFor any $z\\in \\Omega$, $\\rho_\\Omega(z)=\\tanh (k_\\Omega(0,z))=\\tanh (c_\\Omega(0,z))$.\n\\end{lem}\n\nWe will also need the following basic fact.\n\n\\begin{lem}\\label{lmn}\n$\\rho_\\Omega$ is a $\\cv$-norm.\n\\end{lem}\n\\begin{proof}\nFor any $z_1$, $z_2\\in \\cv^n$, we want to show that $\\rho_\\Omega(z_1+z_2)\\le \\rho_\\Omega(z_1)+\\rho_\\Omega(z_2)$.\n\nFix $\\epsilon >0$. Take $c_1=\\rho_\\Omega(z_1)+\\epsilon\/2$ and $c_2=\\rho_\\Omega(z_2)+\\epsilon\/2$. Then $z_1\/c_1 \\in \\Omega$ and $z_2\/c_2 \\in \\Omega$. Since $\\Omega$ is convex, we get\n$$\\frac{z_1+z_2}{c_1+c_2}=\\frac{c_1}{c_1+c_2}\\frac{z_1}{c_1}+\\frac{c_2}{c_1+c_2}\\frac{z_2}{c_2}\\in \\Omega.$$\nHence, $\\rho_\\Omega(z_1+z_2)\\le c_1+c_2 = \\rho_\\Omega(z_1)+ \\rho_\\Omega(z_2)+\\epsilon$. Since $\\epsilon$ is arbitrary, we obtain $\\rho_\\Omega(z_1+z_2)\\le \\rho_\\Omega(z_1)+ \\rho_\\Omega(z_2)$.\n\nSince $\\Omega$ is bounded, it is obvious that $\\rho_\\Omega(z)>0$ for all $z\\neq 0$, which completes the proof.\n\\end{proof}\n\nWe say that $f\\in \\oc_u(D,\\Omega)$ is an \\textit{extremal map} at $z\\in D$ if $\\Omega(s_D^\\Omega(z))\\subset f(D)$. When $\\Omega=\\bv^n$, the existence of extremal maps was given in \\cite[Theorem 2.1]{Deng2012}. 
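The lemmas above lend themselves to a quick numerical sanity check. The following Python sketch (not part of the paper's toolchain) takes $\\Omega$ to be the unit polydisc in $\\cv^3$ — an illustrative assumption, so that $\\rho_\\Omega(z)=\\max_i|z_i|$ — and checks the subadditivity in Lemma \\ref{ltanh}, the norm property of Lemma \\ref{lmn}, and the one-dimensional case of Lemma \\ref{lnk}, where the Kobayashi distance of the unit disc is the Poincar\\'{e} distance $k(0,z)=\\arctanh|z|$:

```python
import math
import random

# Numerical sanity checks (illustrative only, not from the paper), with
# Omega assumed to be the unit polydisc in C^3, whose Minkowski function
# is rho(z) = max_i |z_i|.

def rho_polydisc(z):
    return max(abs(w) for w in z)

def poincare_from_0(z):
    # Poincare (= Kobayashi) distance from 0 in the unit disc: arctanh|z|.
    return 0.5 * math.log((1 + abs(z)) / (1 - abs(z)))

random.seed(0)
for _ in range(1000):
    z1 = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(3)]
    z2 = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(3)]
    # Lemma lmn: the Minkowski function satisfies the triangle inequality.
    s = [a + b for a, b in zip(z1, z2)]
    assert rho_polydisc(s) <= rho_polydisc(z1) + rho_polydisc(z2) + 1e-12
    # Lemma ltanh: tanh(t1 + t2) <= tanh(t1) + tanh(t2) for t1, t2 >= 0.
    t1, t2 = random.uniform(0, 5), random.uniform(0, 5)
    assert math.tanh(t1 + t2) <= math.tanh(t1) + math.tanh(t2) + 1e-12

# Lemma lnk in dimension one: tanh(k(0, z)) = |z| = rho(z) on the unit disc.
for r in (0.1, 0.5, 0.9):
    assert abs(math.tanh(poincare_from_0(r)) - r) < 1e-12
```

The check of Lemma \\ref{lnk} uses only the classical one-variable formula; in higher dimensions the identity relies on Theorem \\ref{T:convex} and is not re-derived here.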
The proof of the next theorem is very similar to that of Theorem \\ref{tee} and \\cite[Theorem 2.1]{Deng2012}, based on Montel's theorem and the generalized Hurwitz theorem, so we omit the details.\n\n\\begin{thm}\\label{tse}\nAn extremal map exists at each $z\\in D$.\n\\end{thm}\n\nAs an immediate corollary, we have\n\n\\begin{cor}\n$s_D^\\Omega(z)=1$ for some $z\\in D$ if and only if $D$ is biholomorphically equivalent to $\\Omega$.\n\\end{cor}\n\nIn \\cite[Theorem 3.1]{Deng2012}, it was shown that $s_D(z)$ is continuous. Moreover, the following inequality was given in \\cite[Theorem 3.2]{Deng2012} without details:\n$$|s_D(z_1)-s_D(z_2)|\\le 2\\tanh[k_D(z_1,z_2)],\\ \\ \\ z_1,z_2\\in D.$$\nOur next theorem gives the same inequality for generalized squeezing functions, and in particular shows that they are also continuous.\n\n\\begin{thm}\\label{tsc}\nFor any $z_1,z_2\\in D$, we have\n$$|s_D^\\Omega(z_1)-s_D^\\Omega(z_2)|\\le 2\\tanh[k_D(z_1,z_2)].$$\nIn particular, $s_D^\\Omega(z)$ is continuous.\n\\end{thm}\n\\begin{proof}\nBy Theorem \\ref{tse}, there exists a holomorphic embedding $f:D \\rightarrow \\Omega$ such that $f(z_1)=0$ and $\\Omega(s_D^\\Omega(z_1)) \\subset f(D)$.\n\nIf $\\tanh[k_D(z_1,z_2)]\\ge s_D^\\Omega(z_1)$, then it is obvious that\n$$s_D^\\Omega(z_2)>0\\ge \\frac{s_D^\\Omega(z_1)-\\tanh[k_D(z_1,z_2)]}{1+\\tanh[k_D(z_1,z_2)]}.$$\n\nSuppose now that $\\tanh[k_D(z_1,z_2)]<s_D^\\Omega(z_1)$. Then we have\n$$\\begin{aligned}\ns_D^\\Omega(z_1)&>\\tanh[k_D(z_1,z_2)]=\\tanh[k_{f(D)}(f(z_1),f(z_2))]\\\\\n&\\ge \\tanh[k_\\Omega(f(z_1),f(z_2))]=\\tanh[k_\\Omega(0,f(z_2))]=\\rho_\\Omega(f(z_2)).\n\\end{aligned}$$\nDefine\n$$h(w):=\\frac{w-f(z_2)}{1+\\tanh[k_D(z_1,z_2)]},$$\nand set $g(z)=h\\circ f(z)$. 
Then $g\\in \\oc_u(D,\\Omega)$ and $g(z_2)=0$.\n\nFor any $w\\in \\Omega$ with\n$$\\rho_\\Omega(w)<\\frac{s_D^\\Omega(z_1)-\\tanh[k_D(z_1,z_2)]}{1+\\tanh[k_D(z_1,z_2)]},$$\nwe have \n$$\\rho_\\Omega(h^{-1}(w)-f(z_2))=\\rho_\\Omega(h^{-1}(w)-h^{-1}(g(z_2)))0\\ge s_D^\\Omega(z_1)-\\tanh[k_D(z_1,z_2)].$$\n\nSuppose now that $\\tanh[k_D(z_1,z_2)]0$, a subsequence $\\{l_j\\}$ and $z_{l_j}\\in K\\subset D_{l_j}$ such that\n$$|s^\\Omega_{D_{l_j}}(z_{l_j})-s_D^\\Omega(z_{l_j})|\\ge \\epsilon.$$\nSince $K$ is compact, there exists a convergent subsequence, again denoted by $\\{z_{l_j}\\}$, with $\\lim_{j\\rightarrow \\infty} z_{l_j}=z\\in K$. Choose $r>0$ such that $\\overline{\\bv^n(z,r)}\\subset D$. Then, there is $N_1>0$ such that $z_{l_j}\\in \\bv^n(z,r)\\subset D_{l_j}$ for all $l_j>N_1$. By Theorem \\ref{tsc} and the decreasing property of the Kobayashi distance, for all $l_j>N_1$ we have\n\\begin{align*}\n|s^\\Omega_{D_{l_j}}(z_{l_j})-s_D^\\Omega(z_{l_j})|\n& \\le |s^\\Omega_{D_{l_j}}(z_{l_j})-s^\\Omega_{D_{l_j}}(z)|+|s^\\Omega_{D_{l_j}}(z)-s_D^\\Omega(z)|+|s_D^\\Omega(z)-s_D^\\Omega(z_{l_j})|\\\\\n&\\le 2\\tanh[k_{D_{l_j}}(z_{l_j},z)]+|s^\\Omega_{D_{l_j}}(z)-s_D^\\Omega(z)|+2\\tanh[k_D(z,z_{l_j})]\\\\\n& \\le 4\\tanh\\left(\\frac{\\|z_{l_j}-z\\|}{r}\\right)+|s^\\Omega_{D_{l_j}}(z)-s_D^\\Omega(z)|.\n\\end{align*}\nIt is clear that there is $N_2>0$ such that for all $l_j>N_2$ we have\n$$\\tanh\\left(\\frac{\\|z_{l_j}-z\\|}{r}\\right)<\\frac{\\epsilon}{6}\\ \\ \\textup{and}\\ \\ |s^\\Omega_{D_{l_j}}(z)-s_D^\\Omega(z)|<\\frac{\\epsilon}{3}.$$\nSet $N=\\max\\{N_1,N_2\\}$. Then for all $l_j>N$ we have\n$$|s^\\Omega_{D_{l_j}}(z_{l_j})-s_D^\\Omega(z_{l_j})|<\\epsilon,$$\nwhich is a contradiction.\n\\end{proof}\n\nThe notion of the squeezing function was originally introduced to study the ``uniform squeezing\" property. 
In this regard, we have the following\n\n\\begin{thm}\\label{te}\nFor two bounded, balanced and convex domains $\\Omega_1$ and $\\Omega_2$ in $\\cv^n$, $s^{\\Omega_1}_D(z)$ has a positive lower bound if and only if $s^{\\Omega_2}_D(z)$ has a positive lower bound.\n\\end{thm}\n\\begin{proof}\nIt suffices to prove the equivalence when $\\Omega_2=\\bv^n$. By Lemma \\ref{lmn}, $\\rho_{\\Omega_1}(z)$ is a $\\cv$-norm. Thus, it is continuous and there exist $M\\ge m>0$ such that $m\\|z\\|\\le \\rho_{\\Omega_1}(z) \\le M\\|z\\|$. Then, one readily checks using the definition that\n$$\\frac{s^{\\Omega_1}_D(z)}{M}\\le s^{\\bv_n}_D(z)\\le \\frac{s^{\\Omega_1}_D(z)}{m}.$$\n\\end{proof}\n\nCombining Theorem \\ref{te} with \\cite[Theorems 4.5 \\& 4.7]{Deng2012}, we have the following\n\n\\begin{thm}\\label{tckc}\nIf $s_D^\\Omega(z)$ has a positive lower bound, then $D$ is complete with respect to the Carath\\'{e}odory distance, the Kobayashi distance and the Bergman distance of $D$.\n\\end{thm}\n\n\\section{Comparison of Fridman invariants and generalized squeezing functions}\\label{S:comparison}\n\nSince Fridman invariants and generalized squeezing functions are similar in spirit to the Kobayashi-Eisenman volume form $K_D$ and the Carath\\'{e}odory volume form $C_D$, respectively, it is natural to study the comparison of them. For this purpose, we will always assume that $D$ is a bounded domain in $\\cv^n$ and $\\Omega$ is a bounded, balanced, convex and homogeneous domain in $\\cv^n$.\n\nSimilar to the classical quotient invariant $M_D(z):=C_D(z)\/K_D(z)$, we introduce the quotient $m_D^\\Omega(z)=s_D^\\Omega(z)\/e_D^\\Omega(z)$, which is also a biholomorphic invariant. When $\\Omega=\\bv^n$, we simply write $m_D(z)=s_D(z)\/e_D(z)$.\n\nIn \\cite{Nikolov-Verma2018}, Nikolov and Verma have shown that $m_D(z)$ is always less than or equal to one. 
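As a concrete instance of the norm comparison used in the proof of Theorem \\ref{te}: if $\\Omega_1$ is taken, for illustration only, to be the unit polydisc in $\\cv^n$, then $\\rho_{\\Omega_1}(z)=\\max_i|z_i|$ and one may take $m=1\/\\sqrt{n}$ and $M=1$, since $\\|z\\|\/\\sqrt{n}\\le \\max_i|z_i|\\le \\|z\\|$. A small Python sketch (not part of the paper) checks these constants numerically:

```python
import math
import random

# Check m*||z|| <= rho(z) <= M*||z|| for the (assumed, illustrative) choice
# Omega_1 = unit polydisc in C^n, whose Minkowski function is
# rho(z) = max_i |z_i|; here m = 1/sqrt(n) and M = 1.
n = 4
random.seed(1)
for _ in range(1000):
    z = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
    rho = max(abs(w) for w in z)
    norm = math.sqrt(sum(abs(w) ** 2 for w in z))
    assert norm / math.sqrt(n) - 1e-12 <= rho <= norm + 1e-12
```

With such $m$ and $M$, the displayed bounds in the proof of Theorem \\ref{te} follow directly from the definition of the generalized squeezing function.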
The next result shows that the same is true for $m_D^\\Omega(z)$.\n\n\\begin{thm}\\label{te>s}\nFor any $z\\in D$, we have $m_D^\\Omega(z)\\le 1$.\n\\end{thm}\n\\begin{proof}\nFor any $z\\in D$, by Theorem \\ref{tse}, there exists a holomorphic embedding $f:D \\rightarrow \\Omega$ such that $f(z)=0$ and $\\Omega(s_D^\\Omega(z))\\subset f(D)$.\n\nDefine $g(w):=f^{-1}(s_D^\\Omega(z)w)$, which is an injective holomorphic mapping from $\\Omega$ to $D$ with $g(0)=z$. By the decreasing property of the Kobayashi distance and Lemma \\ref{lnk}, we have\n$$B^k_{f(D)}(0,\\arctanh[s_D^\\Omega(z)])\\subset B^k_\\Omega(0,\\arctanh[s_D^\\Omega(z)])=\\Omega(s_D^\\Omega(z)).$$\nThus,\n$$B_D^k(z,\\arctanh[s_D^\\Omega(z)])=f^{-1}(B^k_{f(D)}(0,\\arctanh[s_D^\\Omega(z)]))\\subset f^{-1}(\\Omega(s_D^\\Omega(z)))=g(\\Omega).$$\nThis implies that $e_D^\\Omega(z)\\ge s_D^\\Omega(z)$, i.e., $m_D^\\Omega(z)\\le 1$.\n\\end{proof}\n\nA classical result of Bun Wong (\\cite[Theorem E]{W:ball}) says that if there is a point $z\\in D$ such that $M_D(z)=1$, then $D$ is biholomorphic to the unit ball $\\bv^n$. In \\cite[Theorem 3]{RY:comparison}, we showed that an analogous result for $m_D(z)$ does not hold. The next result is a generalized version of \\cite[Theorem 3]{RY:comparison} for $m_D^\\Omega(z)$.\n\n\\begin{thm}\\label{te=s}\nIf $D$ is bounded, balanced and convex, then $m_D^\\Omega(0)=1$.\n\\end{thm}\n\\begin{proof}\nBy Theorem \\ref{tee}, there exists a holomorphic embedding $f:\\Omega \\rightarrow D$ such that $f(0)=0$ and $B_D^k(0,\\arctanh[e_D^\\Omega(0)])\\subset f(\\Omega)$.\n\nDefine $g(w):=f^{-1}(e_D^\\Omega(0)w)$, which is an injective holomorphic mapping from $D$ to $\\Omega$ with $g(0)=0$. 
By the decreasing property of the Kobayashi distance and Lemma \\ref{lnk}, we have\n$$B_{f(\\Omega)}^k(0,\\arctanh[e_D^\\Omega(0)])\\subset B_D^k(0,\\arctanh[e_D^\\Omega(0)])=D(e_D^\\Omega(0)).$$\nThus,\n$$B_\\Omega^k(0,\\arctanh[e_D^\\Omega(0)])=f^{-1}(B_{f(\\Omega)}^k(0,\\arctanh[e_D^\\Omega(0)]))\\subset f^{-1}(D(e_D^\\Omega(0)))=g(D).$$\nThis implies that $s_D^\\Omega(0)\\ge e_D^\\Omega(0)$. By Theorem \\ref{te>s}, we always have $s_D^\\Omega(0)\\le e_D^\\Omega(0)$. This completes the proof.\n\\end{proof}\n\n\\begin{cor}\\label{cs=s}\nLet $\\Omega_i$, $i=1,2$, be two bounded, balanced, convex and homogeneous domains in $\\cv^n$. Then $s^{\\Omega_2}_{\\Omega_1}(z_1)=s^{\\Omega_1}_{\\Omega_2}(z_2)$ for all $z_1\\in \\Omega_1$ and $z_2\\in \\Omega_2$.\n\\end{cor}\n\\begin{proof}\nSince both $\\Omega_1$ and $\\Omega_2$ are homogeneous, it suffices to show that $s^{\\Omega_2}_{\\Omega_1}(0)=s^{\\Omega_1}_{\\Omega_2}(0)$.\n\nBy Lemma \\ref{lnk}, we have $B^k_{\\Omega_2}(0,\\arctanh(r))=\\Omega_2(r)$ for $0<r<1$. Then, by definition, $s^{\\Omega_2}_{\\Omega_1}(0)=e^{\\Omega_1}_{\\Omega_2}(0)$. By Theorem \\ref{te>s}, we get $s^{\\Omega_2}_{\\Omega_1}(0)=s^{\\Omega_1}_{\\Omega_2}(0)$.\n\\end{proof}\n\nWe can also compare generalized squeezing functions for different model domains as follows.\n\n\\begin{thm}\nLet $\\Omega_i$, $i=1,2$, be two bounded, balanced, convex and homogeneous domains in $\\cv^n$. Then, for any $z\\in D$, we have\n$$s^{\\Omega_1}_{\\Omega_2}(0)s^{\\Omega_2}_D(z)\\le s^{\\Omega_1}_D(z)\\le \\frac{1}{s^{\\Omega_1}_{\\Omega_2}(0)}s^{\\Omega_2}_D(z).$$\n\\end{thm}\n\\begin{proof}\nFor any $z\\in D$, by Theorem \\ref{tse}, there exists a holomorphic embedding $f:D \\rightarrow \\Omega_1$ such that $f(z)=0$ and $\\Omega_1(s_D^{\\Omega_1}(z))\\subset f(D)$. Also, by Theorem \\ref{tse}, there exists a holomorphic embedding $g:\\Omega_1 \\rightarrow \\Omega_2$ such that $g(0)=0$ and $\\Omega_2(s^{\\Omega_2}_{\\Omega_1}(0))\\subset g(\\Omega_1)$.\n\nSet $F=g\\circ f$. 
Then $F\\in \\oc_u(D,\\Omega_2)$ with $F(z)=0$. Denote $\\Omega=\\Omega_2(s^{\\Omega_2}_{\\Omega_1}(0))$. Then $\\Omega$ is a bounded, balanced and convex domain with $\\rho _\\Omega=\\frac{1}{s^{\\Omega_2}_{\\Omega_1}(0)}\\rho _{\\Omega_2}$. By the decreasing property of the Kobayashi distance and Lemma \\ref{lnk}, we have\n\\begin{align*}\nB^k_\\Omega(0,\\arctanh[s^{\\Omega_1}_D(z)])&\\subset B^k_{g(\\Omega_1)}(0,\\arctanh[s^{\\Omega_1}_D(z)])=g(B^k_{\\Omega_1}(0,\\arctanh[s^{\\Omega_1}_D(z)]))\\\\\n&=g(\\Omega_1(s^{\\Omega_1}_D(z)))\\subset g(f(D))=F(D).\n\\end{align*}\nOn the other hand, by Lemma \\ref{lnk}, we have\n\\begin{align*}\nB^k_\\Omega(0,\\arctanh[s^{\\Omega_1}_D(z)])&=\\{w\\in \\Omega:\\rho _\\Omega(w)2\\ {\\rm km\\ s}^{-1}$,\nand the rate drops\nto $1.3$--$0.7\\ {\\rm km\\ s}^{-1}$\nduring the next 6 hours \\citep{har73}.\nNew magnetic flux emerges continuously\nwithin the opposite polarities.\nIf the flux is sufficient,\nthe pores are gathered,\nand gradually sunspots are formed\nnear the leading and the following plages\n\\citep{zir72}.\n\nIn the last several decades,\nnumerical computations have been\nwell developed\nto reveal the dynamics\nof the flux emergence\nand the birth of the active region\n\\citep[e.g.][]{shi89}.\nIn our recent simulations\non the large-scale flux emergence\nfrom a depth of $20,000\\ {\\rm km}$,\nthe rising twisted flux tube\nin the convection zone\ndecelerates and makes\na flat structure\njust beneath the photosphere\n\\citep[e.g.][]{tor12}.\nIn this calculation,\nthe plasma,\nwhich is pushed up\nby the rising flux,\nescapes laterally\naround the surface.\nThe appearance\nof the divergent outflow\nat the photosphere\nwas found to be earlier than\nthat of magnetic flux,\nand, at this moment,\nthe outflow is mainly horizontal.\nHereafter\nwe call this preceding outflow\nas a horizontal divergent flow (HDF).\nA similar flow\nis also reported\nby \\citet{che10}.\nHowever,\nto our knowledge,\nthe HDF\nprior to the flux 
emergence\nhas not been confirmed clearly\nin previous observations\n\\citep{kos09}.\nHere, we use \nthe term ``horizontal''\nto indicate the direction\nparallel to the solar surface.\n\nThe aim of this study\nis to investigate the HDF\nand the evolving magnetic field\nat an early phase\nof the flux emergence.\nFor this purpose,\nwe used\nthe Dopplergrams and magnetograms\nof the Helioseismic and Magnetic Imager (HMI)\non board the Solar Dynamics Observatory (SDO),\nsince their continuous observations\nof the whole solar disk\nmake it possible\nto obtain information\nat the very moment of,\nor even before the flux emergence\nat the surface.\n\nOur numerical result\nindicates that,\nif the newly emerging region\nis located away from the disk center,\nif a pair of\npositive and negative Doppler patterns\nis detected\njust before the flux emergence,\nand if the positive (negative) pattern\nis limbward (disk-centerward),\nthe observed Doppler velocity\nis mainly horizontal\nrather than vertical.\nTherefore,\nwe can evaluate the horizontal velocity\nof the escaping plasma\nfrom the Doppler velocity,\nby considering the heliocentric angle\nof the active region\nfrom the disk center.\nOne advantage of this method\nover the usual local correlation-tracking method\n\\citep{nov88}\nis that\nthe horizontal velocity\nof the plasma\ncan be evaluated independently\nof the apparent motion\nof magnetic elements\nat the photosphere.\nAfter the flux has emerged,\nwe cannot obtain\nthe horizontal speed\nfrom the Doppler velocity,\nsince it may contain a vertical motion\nsuch as rising of magnetic fields\nor a downflow\nin the convective collapse process.\n\nIn this Paper,\nwe report the first determination\nof the HDF\nprior to the flux appearance,\nusing SDO\/HMI Dopplergrams and magnetograms.\nWe also studied\nthe chromospheric reaction\nto the flux emergence\nin the photosphere\nby using H$\\alpha$ images\ntaken by\nthe Solar Magnetic Activity Research Telescope\n(SMART)\nat 
Hida Observatory.\nIn Section \\ref{sec:observation},\nwe will introduce the observations\nand the method of data reduction.\nAnalysis and the results will appear\nin Section \\ref{sec:results}.\nThen, in Section \\ref{sec:discussion},\nwe will discuss the observational results.\nFinally, we will summarize the Paper\nin Section \\ref{sec:summary}.\n\n\\section{Observation and Data Reduction\n \\label{sec:observation}}\n\nIn this Paper,\nwe studied NOAA AR 11081\nformed in 2010 June,\nin the northwest of the solar disk.\nTo measure the Doppler shift\nand line-of-sight (LoS) magnetic field\nin the photosphere,\nwe used Dopplergrams and magnetograms\ntaken by\nSDO\/HMI.\nAlso,\nto study the chromospheric response\nto the flux emergence,\nwe used H$\\alpha$ images\ntaken by SMART at Hida Observatory.\n\n\n\\subsection{SDO\/HMI Dopplergram and Magnetogram}\n\nSDO\/HMI continuously observes\nthe whole solar disk\nat the 6173 \\AA \\ion{Fe}{1} line,\nwhich is resolved by $4096^{2}$ pixels\n\\citep{sch12}.\nTo obtain the tracked data cubes\nof the birth of AR 11081,\nwe used {\\tt mtrack} module\n\\footnote{http:\/\/hmi.stanford.edu\/teams\/rings\/mod\\_mtrack.html}.\nThe data cubes\nof the Doppler velocity\nand the LoS magnetogram\nhave\na spatial resolution of $0.5\\ {\\rm arcsec}$\n(1 pixel corresponds to $\\sim 360\\ {\\rm km}$)\nwith $512^{2}$ pixel field-of-view (FoV),\nand a temporal resolution of $45\\ {\\rm s}$\nwith a duration of $36\\ {\\rm hr}$,\nstarting at 12:00 UT\non 2010 June 10.\nIn the initial state,\nthe center of the $512^{2}$ FoV\nis located at\nN$22^{\\circ}$ W$25.6^{\\circ}$,\nor ($+392, +383$) arcsecs\nin solar disk coordinates.\nHere, we applied Postel's projection,\nthat is, both Doppler and magnetic maps\nare projected\nas if seen from\ndirectly above.\nThen, to eliminate the effects of\nthe rotation of the Sun\nand the orbital motion of the satellite,\nand to determine the zero point\nof the LoS velocity,\nwe reduced the mean velocity\nfrom 
each Dopplergram.\nAlso, a 30-min (40-frame) moving average\nwas applied\nto the Dopplergrams and magnetograms.\n\nFigure \\ref{fig:fov} is\nthe HMI magnetogram\nof NOAA AR 11081\ntaken at 06:00 UT,\n2010 June 11,\nthat is,\nafter the emergence started.\nHere, white and black indicate\nthe positive and negative polarities,\nrespectively.\nThe diagonal line in this figure\nis the slit\nfor the time-sliced diagram\nin Section \\ref{sec:slice}.\nThe slit angle\nis chosen to fit\nthe first separating motion\nof both polarities.\nThe square indicates\nthe region analyzed\nin Section \\ref{sec:histogram}\nto measure the distributions\nof the Doppler velocity\nand the LoS\nfield strength.\n\n\n\\subsection{SMART H\\boldmath{$\\alpha$} Images}\n\nSMART\nat Hida Observatory,\nKyoto University,\nconsists of four different telescopes,\nwhich are T1, T2, T3 and T4, respectively\n\\citep{uen04}.\nThey are placed on a tower\nwith a height of $16\\ {\\rm m}$.\nT1 obtains H$\\alpha$\nfull solar disk images\nat high temporal and spatial resolution.\nFor studying the chromospheric reaction\nto the photospheric flux emergence,\nwe analyzed the H$\\alpha$ data\nof 01:00--05:00 UT,\n2010 June 11,\nwhich resolves the full solar disk\nwith $4096^{2}$ pixels\n(1 pixel corresponds\nto $\\sim 0.56\\ {\\rm arcsec}$)\nand has a maximum temporal resolution\nof 2 minutes.\n\nIn this study,\nwe only used H$\\alpha$ line core images\n(wavelength at $6562.8\\ {\\rm \\AA}$).\nFirst,\ndark-current subtraction\nand flat fielding\nwere performed\non the obtained SMART data.\nThen, by taking\na cross-correlation\nof the two consecutive images\nto fix the position\nof the target emerging active region,\nwe made a data cube\nof H$\\alpha$ images.\nNote that H$\\alpha$ image\nis a simple zoom-up\nof the full disk image,\nwhile\nPostel's projection is applied\nto the HMI images.\n\n\n\\section{Data Analysis and Results\n \\label{sec:results}}\n\nFigure \\ref{fig:evolution}\nshows the temporal evolution\nof 
the Dopplergram and the magnetogram\nfor 12 hours\nfrom 18:00 UT,\n2010 June 10.\nIn the Dopplergram,\nthe motions toward and away\nfrom the observer are\nshown in blue and red,\nrespectively.\nAt first,\nduring 18:00--00:00 UT,\nthe surface is relatively quiet\nwith some preceding magnetic elements\nof both positive\nand negative polarities.\nAn area\nwith strong blue shift\n($< -1\\ {\\rm km\\ s}^{-1}$) appears\nin the middle of the FoV\nat 01:00 UT on 11 June\nand gradually grows\nin size.\nAfter 03:00 UT,\na strong red shift\n($> 1\\ {\\rm km\\ s}^{-1}$) appears\nand magnetic field emergence\ntakes place.\nBoth positive and negative polarities\nmove apart from each other.\nHere, the separation\nof the magnetic elements\nis almost along the slit,\nwhich is indicated as a diagonal line.\nFinally, at 06:00 UT,\nthe red and blue areas\nbecome faint.\nThe separated magnetic elements stop\nand gather to form pores\nat the boundary\nof the emerging region.\n\nIn this section,\nwe first present the results\nof time-slices\nof the Dopplergrams and magnetograms\nin Section \\ref{sec:slice}.\nThen, in Section \\ref{sec:histogram},\nwe will clarify\nthe occurrence times\nof the HDF\nand the flux emergence,\nand evaluate\nthe horizontal speed\nof the HDF.\nSection \\ref{sec:chromosphere}\nis dedicated\nto the chromospheric study.\n\n\n\\subsection{Time-sliced Diagram\\label{sec:slice}}\n\nTo examine the motion\nof the magnetic elements\nof positive and negative polarities\nand the corresponding LoS velocity,\nwe made time-sliced diagrams\nof HMI Dopplergrams and magnetograms.\nThe spatial slit is indicated\nas a diagonal line\nin Figure \\ref{fig:fov}\nand Figure \\ref{fig:evolution},\nwhich is placed\nparallel to the separation\nof both polarities.\n\nFigure \\ref{fig:slice}\nis the time-sliced diagram\nof the Dopplergram and the magnetogram\nalong the slit.\nFrom the time-slice\nof the magnetogram,\nFigure \\ref{fig:slice}(b),\nwe can see that\nboth 
positive and negative polarities\nmove apart from each other\nfrom around 03:00 UT on June 11.\nThe speed of each element\nis estimated to be\n$\\sim 1.2\\ {\\rm km\\ s}^{-1}$,\nwhich then drops\nto $\\sim 0.4\\ {\\rm km\\ s}^{-1}$.\nThus, the separation\nspeed is $0.8$--$2.4\\ {\\rm km\\ s}^{-1}$.\nThis deceleration\nof the separated polarities\nmay reflect that\nthe polarities are reaching\nthe boundary\nof the active region.\nThese elements then gather\nto create stronger pores,\nwhose absolute LoS\nfield strength is\ngreater than $200\\ {\\rm G}$.\nOne can also see that\nweak, small elements\nof both polarities appear\nbetween the main separating pores\nduring 03:00--09:00 UT\non June 11.\nAlso, the main positive pore\ncollides with\nthe preexisting negative polarity,\nand they cancel\neach other out.\n\nIn the Doppler slice,\nFigure \\ref{fig:slice}(a),\na pair of red and blue patterns\nemerged at around 02:00 UT, June 11,\nslightly earlier than\nthe appearance of the magnetic elements\nin Figure \\ref{fig:slice}(b).\nThe red and blue shift patterns\nimmediately started to separate,\nand the propagation speed\nof the patterns\n(the slope of the patterns)\nis about $0.4\\ {\\rm km\\ s}^{-1}$.\nHere,\nwe note that\nthe blue (red) pattern is located\ndisk-centerward (limbward),\nwhich indicates that\nthe flow is divergent.\nMoreover,\nfrom the fact\nthat the divergent outflow\ncame before\nthe flux emergence,\nwe can infer that\nthe outflow\nduring this period\nis caused by the plasma\nescaping from\nthe rising magnetic flux.\nIt should be noted that\nthe trend\nof the Doppler pattern\ncoming before the flux emergence\ndoes not change\nwhen we vary the thickness\nof the slit.\n\nHowever,\nthe determination\nof the appearance time\nof the Doppler pattern\nassociated with\nthe flux emergence\nis difficult,\nbecause the Doppler pattern,\nespecially the blue shift,\nappeared at the location\nwhere the supergranulation\nshowed blue shift\n(21:00--01:00 
UT).\nThe definitions\nof the flux emergence\nand of the appearance of the related Doppler pattern\nare dealt with\nin the next subsection\n(\\S \\ref{sec:histogram}).\n\n\n\\subsection{Appearance Times\n of the HDF and the Flux Emergence,\n and the Velocity of the HDF\n \\label{sec:histogram}}\n\nIt is not easy to determine\nthe timings\nof the appearance of\nthe HDF\nand the associated\nflux emergence\nfrom Figures \\ref{fig:evolution}\nand \\ref{fig:slice}.\nIn particular,\nwe have to distinguish\nthe outflow\nrelated to the flux emergence\nfrom the preexisting\nconvective motions\nof the quiet Sun\n(e.g., granulation and supergranulation).\nTo determine with statistical significance\nwhen the HDF\noccurred\nand when the magnetic flux emerged,\nwe studied the temporal changes\nof the Doppler and magnetic patterns\nfrom those before the emergence,\nnamely, patterns of the quiet Sun.\nAlso, in this subsection,\nwe describe how we evaluate\nthe horizontal speed\nof the HDF.\n\nFirst, we plotted the histograms\nof the Doppler velocity\nand the absolute LoS\nfield strength\ninside the square\nof Figure \\ref{fig:fov}\nfor each frame.\nThe size of the square\nis $70\\times 70$ pixels\n$(\\sim 25\\times 25\\ {\\rm Mm}^{2})$,\nwhich is selected\nto include the emergence region.\nAs for the Dopplergram,\nthe apex of the histogram\nwas shifted\nto the zero point.\nThen, considering the photospheric condition\nin the 3 hours\nfrom 21:00 UT on June 10\nto be sufficiently quiet,\nwe averaged the 240 histograms\nof the Dopplergrams and the magnetograms\nin this period,\nand regarded these averages\nas reference quiet-Sun profiles.\n\nIn the left column\nof Figure \\ref{fig:histogram},\nwe show histograms\nof the Doppler velocity\nat five different times of June 11,\nplotted over the reference\nquiet-Sun profile.\nHere we note that\nthe quiet-Sun profile\nobtained is similar\nto a Gaussian distribution.\nThe shading indicates\nthe standard deviation\nabove and below the reference.\nAs 
time goes by,\nthe profile deviates\nfrom the reference,\nbecause the number of pixels\nwhose absolute Doppler velocity\nis greater\nthan $0.5\\ {\\rm km\\ s}^{-1}$\nincreases.\nThe right column\nof Figure \\ref{fig:histogram}\nis the residual\nof the Doppler histogram\nfrom the reference.\nOne standard deviation\nis also shown as a shaded area.\nAt first,\nthe residual is below\nthe one-standard-deviation level\nfor most of the velocity range.\nFrom 02:00 UT, however,\nthe residual exceeds this level.\n\nFigure \\ref{fig:histogram_mag}\nis the same as Figure \\ref{fig:histogram},\nbut for the absolute\nfield strength\nof the LoS magnetograms.\nHere, the quiet-Sun profile\nconsists of a distribution with\na width of $\\sim 10\\ {\\rm G}$\n(about the precision\nof the HMI magnetogram)\nand some preexisting pores\nwithin the FoV.\nThus, the profile is\ndifferent from\na Gaussian distribution.\nThe residual in the range\nof $> 200\\ {\\rm G}$\nexceeds\nthe one-standard-deviation level\nfrom 04:00 UT.\nAfter this time,\nthe residual of $> 200\\ {\\rm G}$\nrises well above the standard deviation,\nbecause more and more flux emerges\nand stronger pores are created.\n\nTo ensure the significance\nof the measurement,\nwe define\nthe start times of the HDF\nand the flux emergence\nas the times\nwhen the residuals\nof the Dopplergrams and the magnetograms\nexceeded the one-standard-deviation level.\nTo determine these times,\nwe show in Figure \\ref{fig:timing}\nthe time evolution\nof each residual\n(taken from and averaged over\nthe ranges\n$[-0.8\\ {\\rm km\\ s}^{-1}, -0.4\\ {\\rm km\\ s}^{-1}]$\nand $[0.4\\ {\\rm km\\ s}^{-1}, 0.8\\ {\\rm km\\ s}^{-1}]$\nfor the Dopplergram,\nand the range $[200\\ {\\rm G}, 300\\ {\\rm G}]$\nfor the magnetogram),\nplotted over\none standard deviation.\nIn this figure,\nthe residual of the Dopplergram\nexceeds the standard deviation\nat 01:23 UT on 11 June,\nwhile that of the magnetogram\nexceeds the level\nat 03:06 UT.\nThat is,\nthe 
appearance of the HDF\ncame before the flux emergence\nby about 100 minutes.\n\nDuring this period,\nit is expected that\nthe flow is mainly horizontal\nand the vertical component\nis minor.\nThus, we can calculate\nthe horizontal velocity\nfrom the residual distribution\nof the Doppler velocity\n(Figure \\ref{fig:histogram}),\nby considering\nthe geometric effect.\nThe relation between\nthe horizontal velocity $V_{\\rm h}$\nand the Doppler velocity $V_{\\rm D}$ is\n$V_{\\rm h}=V_{\\rm D}\/\\sin{\\theta}$,\nwhere $\\theta$ is the heliocentric angle\nof the emerging region\nmeasured from the disk center.\nFrom 01:23 to 03:06 UT,\nthe Doppler velocity range\nwhere the residual exceeds\none standard deviation\nis typically\n0.4--$1.0\\ {\\rm km\\ s}^{-1}$,\nextending up to\n$1.5\\ {\\rm km\\ s}^{-1}$,\nand the heliocentric angle is\n$\\sim 40^{\\circ}$.\nTherefore,\nthe horizontal velocity\nis calculated to be\n$0.6$--$1.5\\ {\\rm km\\ s}^{-1}$,\nwith a maximum of\n$2.3\\ {\\rm km\\ s}^{-1}$.\n\nHere, we comment\non the selection\nof the field-strength range\n($[200\\ {\\rm G}, 300\\ {\\rm G}]$)\nand how this selection affects\nthe start time\nof the flux emergence.\nIf we use a lower strength range,\nfor example $[50\\ {\\rm G}, 100\\ {\\rm G}]$\nor $[100\\ {\\rm G}, 200\\ {\\rm G}]$,\nin which the residual exceeds\nthe one-standard-deviation level earlier\n(Figure \\ref{fig:histogram_mag}, right column),\nthe start time of the flux emergence\nis calculated to be much earlier.\nIn the present analysis,\nhowever,\nthe strength range\n$[200\\ {\\rm G}, 300\\ {\\rm G}]$\nis used,\nsince the number of pixels with $>200\\ {\\rm G}$\nis so small in the quiet Sun\nthat the flux emergence is easily detected\nwhen it occurs.\nWe confirmed this fact\nby applying the same analysis\nto the quiet-Sun data.\nAs for the dependence of the results\non the strength range,\nwe tested the analysis\nwith various ranges,\nas summarized in\nTable \\ref{tab:range}.\nFrom 
this table\none can see that\nthe start time does not change much\nfor the [$200\\ {\\rm G}, 300\\ {\\rm G}$],\n[$300\\ {\\rm G}, 400\\ {\\rm G}$],\nand [$400\\ {\\rm G}, 500\\ {\\rm G}$] cases.\n\nWe also checked the dependence\non the size of the square\nin which the histograms are made\n(Fig. \\ref{fig:fov}),\nwhich is summarized in\nTable \\ref{tab:size}.\nHere, the time difference\nis almost constant\nfor various square sizes\nand is about 100 min.\nWith increasing square size,\nthe fraction of high-speed or strong-field pixels\nin the square decreases.\nAt the same time,\nthe quiet-Sun reference profile\nbecomes more accurate\nand the one-standard-deviation level decreases.\nTherefore, overall,\nthe time difference remains constant.\n\n\n\\subsection{Chromospheric Response\n \\label{sec:chromosphere}}\n\nIn this subsection,\nwe investigate\nthe time-evolution\nof the H$\\alpha$ intensity\nto examine the relation\nbetween the chromosphere\nand the photosphere\nin the studied event.\nFigure \\ref{fig:ha}(a)\nis a sample image\nof the SMART H$\\alpha$ data.\nThe color and contours\nindicate\nthe relative H$\\alpha$ intensity.\nIn this figure,\nthere are two bright regions\n(plages)\nin the middle of the FoV.\nThen,\nalong the slit\nof Figure \\ref{fig:ha}(a),\nwe made a time-sliced diagram\nfor 4 hours\nstarting at 01:00 UT, 11 June,\nwhich is shown\nas Figure \\ref{fig:ha}(b).\nNote that the slit\nin Figure \\ref{fig:ha}(a)\nis not exactly the same as\nthat in Figure \\ref{fig:fov},\nsince the H$\\alpha$ data\nis a simple close-up view\nof the full disk image,\nwhile Postel's projection\nis applied to the HMI data.\nThus,\nfrom this study,\nwe can only determine\nthe appearance time\nof the chromospheric brightening.\n\nIn Figure \\ref{fig:ha}(b),\nthe first bright source\nat the slit location\nof $5\\times 10^{4}\\ {\\rm km}$\nstarts at 02:40 UT.\nHowever, it was found that\nthis brightening\nis due to the activity\namong the preexisting quiet-Sun pores\nof both polarities,\nwhich 
later collide with\npositive patches\nof the newly emerging flux\n(see Section \\ref{sec:slice}).\nIt is difficult to\nseparate this bright source\ninto activity\nof the preexisting pores\nand that of the newly emerged\npositive pores.\nThe second source\nlocated at $7\\times 10^{4}\\ {\\rm km}$\nstarts at 03:20 UT,\nand there was\nno preceding pore\nin this region.\nTherefore,\nwe consider that\nthe second source\nis entirely due to\nthe newly\nemerged negative pores,\nand determine that\nthe chromospheric reaction\nstarts at this time\n(03:20 UT;\nindicated by a dashed line\nin Figure \\ref{fig:ha}(b)).\nThe two chromospheric sources\nare located\njust over the positive\nand negative polarities\nin the photosphere.\n\n\n\\section{Discussion\\label{sec:discussion}}\n\n\\subsection{Mechanism of the Time Difference\n \\label{sec:mechanism}}\n\nIn this Paper\nwe analyze\nthe newly emerging active region\nand find that\nthere is a time difference\nbetween the appearance of\nthe horizontal divergent flow (HDF)\nand the corresponding flux emergence;\nthe HDF\nappears prior to the flux emergence\nby about 100 minutes.\n\nAccording to the thin-flux-tube\nsimulation \\citep{fan09},\nthe rise speed of the flux tube\nincreases\nin the top few tens of Mm\nof the convection zone.\nHowever, at the same time,\nthe flux tube expands\nas the external density (pressure)\ndecreases with height.\nThe radius of the tube eventually exceeds\nthe local pressure scale height\nat a depth of $\\sim 20\\ {\\rm Mm}$\nand the thin-flux-tube approximation\nbreaks down.\nRecently, our numerical simulations\nusing fully compressible MHD,\nincluding the convection zone,\nthe photosphere,\nand the corona\nin a single computational box,\nhave revealed that\nthe rising flux tube\ndecelerates\nin the uppermost convection zone\n\\citep{tor11,tor12}.\nThis is because\nthe plasma on the flux tube piles up\nbetween the apex of the tube\nand the subadiabatically stratified photosphere ahead,\nand the 
plasma inhibits the rising motion of the flux tube.\nThen, the accumulated plasma\nin turn extends the tube laterally.\nThis accumulation becomes effective\nfrom the depth\nwhere the apex of the tube\nbecomes ``flat''.\nThis critical depth\nis also considered to be\nwhere the tube's radius exceeds\nthe local pressure scale height\n(depth $\\sim -20\\ {\\rm Mm}$).\nThe lateral expansion\nof the flux tube\nappears\nsimilar to that\nfound by \\citet{mag01} and \\citet{arc04}.\nHowever, their expansions occur\nbecause the tubes themselves\nmove into the subadiabatic photosphere.\n\nAs the rising tube approaches\nthe photosphere,\nthe accumulated plasma\non the rising tube\nescapes horizontally\naround the surface\nand is observed\nas an HDF,\nwhile the tube\nstops beneath the surface.\nSince the flux is\ncontinuously transported\nfrom below,\nthe magnetic pressure gradient\nat the photosphere\nis enhanced\nand further emergence\ninto the upper atmosphere\nstarts\ndue to the magnetic buoyancy instability.\nWhen the flux resumes rising,\nit can be observed as a ``flux emergence''\nat the photospheric level.\nTherefore,\nthe time difference\ndetected in this Paper implies\nthe period of latency\nduring which\nthe flux tube\nreaching the photosphere\ndevelops the magnetic buoyancy instability.\nThe growth time\nof the instability is,\nhowever,\ncomplicated,\nas it may depend\non many parameters\nof the rising flux tube\nsuch as field strength,\ntotal flux, twist, etc.\nThus, we shall leave\nthe estimation of the time gap\nto our future numerical research.\n\n\n\\subsection{Depth of the Magnetic Flux\n \\label{sec:model}}\n\nTo describe the relation\nbetween the HDF\nand the contributing upflow\nbelow the surface,\nwe make a simple model,\nwhich is schematically illustrated\nin Figure \\ref{fig:model}.\nWhen the magnetic flux tube has emerged\nfrom the deeper convection zone,\nan upflow region is formed\nin front of the flux tube.\nIf the typical size\nof this region 
is $L$\nand the velocity is $V_{\\rm up}$,\nthe mass flux passing through\nthe area of $\\pi L^{2}$\ncan be described as\n\\begin{eqnarray}\n F_{1}=\\rho_{1} V_{\\rm up} \\pi L^{2},\n \\label{eq:f1}\n\\end{eqnarray}\nwhere $\\rho_{1}$ is the plasma density.\nNext, the photospheric plasma\nthat escapes from the upflow\npropagates along the surface\nas an HDF.\nIf we write the horizontal velocity\nat the radial distance $r$\nas $V_{\\rm h}(r)$,\nthe thickness as $T$,\nand the density as $\\rho_{2}$,\nthe mass flux passing through $2\\pi rT$ is\n\\begin{eqnarray}\n F_{2}=2\\pi r \\rho_{2}TV_{\\rm h}(r).\n \\label{eq:f2}\n\\end{eqnarray}\nThese fluxes,\n$F_{1}$ and $F_{2}$,\nare assumed to be conserved.\nTherefore,\nfrom Equations (\\ref{eq:f1}) and (\\ref{eq:f2}),\nthe upflow velocity is\n\\begin{eqnarray}\n V_{\\rm up}=\\frac{2\\rho_{2}}{\\rho_{1}}\\frac{rTV_{\\rm h}(r)}{L^{2}}.\n \\label{eq:vup1}\n\\end{eqnarray}\n\nFrom the observational study,\nthe horizontal speed is\n$V_{\\rm h}\\sim 1\\ {\\rm km\\ s}^{-1}$\nat $r=5000\\ {\\rm km}$.\nHere we assume that\n(a) the plasma density is almost\nuniform\naround the photosphere,\ni.e., $\\rho_{1}\\sim \\rho_{2}$,\n(b) the thickness is about\nthe local pressure scale height,\n$T\\sim 200\\ {\\rm km}$,\nand (c) the diameter of the upflow region\nis $4000\\ {\\rm km}$\n(the smallest distance\nbetween the blue and red patterns\nin Figure \\ref{fig:slice}),\ni.e., $L\\sim 2000\\ {\\rm km}$.\nUnder these assumptions,\nEquation (\\ref{eq:vup1}) yields\n$V_{\\rm up}=0.5\\ {\\rm km\\ s}^{-1}$.\nThe time gap\nbetween the HDF\nappearance\nand the flux emergence\nwas observed to be $100\\ {\\rm min}$.\nTherefore,\nthe depth\nthat the apex of the magnetic flux\ntraversed\nafter it decelerated\nis estimated to be\n$\\sim 3000\\ {\\rm km}$,\nif the flux tube rises\nat the same rate\nas the upflow.\n\nIn this section,\nfor simplicity,\nwe assumed that\nthe apex of the rising flux is circular,\nand that the outflow velocity $V_{\\rm h}$\nis 
only a function of $r$.\nFrom Figure \\ref{fig:evolution},\nhowever,\nit seems that the HDF is not axisymmetric\nand is stronger\nin the direction of flux emergence\n(the northwest-southeast slit in this figure).\nThis property is consistent\nwith our preceding numerical results;\nthe photospheric plasma flow\nis found to be\nalong the direction of flux emergence\n\\citep[see][Fig. 4]{tor12}.\nMoreover,\nin that simulation,\nthe twist of the rising flux tube\nis strong\nand the magnetic field\nat the tube's surface\nis almost perpendicular\nto the axis of the tube.\nIn the later phase of\nthe target AR of this Paper,\nthe separation of\npositive and negative polarities\nshifted into the northeast-southwest direction,\ni.e., perpendicular to the diagonal line\nin Figure \\ref{fig:evolution}.\nTaking into account\nthe previous numerical results,\nand considering that\nthe observed NE-SW direction indicates\nthe axis of the flux tube\nthat forms this AR,\nwe infer that the twist\nof this flux tube is tight\nand, therefore, that the flow\nis in the NW-SE direction.\n\n\\subsection{Relations with Recent Observations:\n HDF as a Precursor\n \\label{sec:seismology}}\n\nUsing SOHO\/MDI,\n\\citet{gri07} observed NOAA AR 10488\nand found that\nupflows of matter\nwith a high velocity\n($\\gtrsim 0.4\\ {\\rm km\\ s}^{-1}$)\npreceded flux emergences\nby 8 and 13 min.\nThus,\nthe last $\\sim 10$ min\nof the divergent Doppler pattern\nobserved in our study,\nwhich lasted for 100 min,\nmay contain\nan upward motion.\nHowever,\nfor most of the period,\nthe flow is expected\nto remain horizontal.\nNote that\nthe upflow velocity of\n$\\gtrsim 0.4\\ {\\rm km\\ s}^{-1}$\nreported\nby \\citet{gri07}\nmay be the speed\nof a magnetic flux\nrising in the photosphere.\nAs for the estimated velocity\n($V_{\\rm up}=0.5\\ {\\rm km\\ s}^{-1}$)\nin Section \\ref{sec:model},\nthis value indicates\nthe emergence speed\nof a magnetic flux\nin the uppermost convection zone.\n\nBy means of 
time-distance helioseismology,\n\\citet{ilo11} detected\nstrong acoustic travel-time anomalies\nas deep as 65 Mm,\n1 to 2 days\nbefore the flux rate reaches its peak,\nand (in most cases)\na few hours before\nthe start of\nthe flux appearance\nat the surface\n\\citep[see also][]{kos08,kos09}.\nThese anomalies are\nconsidered\nsigns of the rising\nmagnetic flux.\nTaking into account\nour numerical simulations\n\\citep[e.g.][]{tor12},\nit is consistent\nto interpret\nthis helioseismic anomaly\nas resulting\nfrom an effect\nsimilar to the plasma accumulation;\nexternal media\nmay be perturbed or compressed\nby the rising motion\nof the magnetic flux.\nThe importance\nof the helioseismic anomaly\nin \\citet{ilo11}\nand the HDF in our study\nis that\nthese phenomena occur\nprior to the flux emergence\nat the photosphere.\nThat is,\nthese are\nprecursors\nof the flux emergence.\nBy combining the two types\nof observations,\nsunspot appearances\nmay be predicted\nin the near future.\n\n\\subsection{Further Emergence to the Upper Atmosphere\n \\label{sec:further}}\n\nIn Section \\ref{sec:chromosphere},\nwe found that the H$\\alpha$ brightenings\n(plages) were located\nover the positive and negative pores\nin the photosphere.\nThis indicates that\nthe brightenings\nare caused by the plasma\nflowing down along magnetic loops\nthat connect the photospheric magnetic elements\n\\citep[see][Figure 10]{shi89}.\nThe appearance of the chromospheric source\nwas at 03:20 UT\non June 11,\nwhile the flux emergence\nwas at 03:06 UT.\nIf we assume the H$\\alpha$ formation height\nto be $2000\\ {\\rm km}$,\nthe rise velocity of the magnetic field is\n$\\sim 2.5\\ {\\rm km\\ s}^{-1}$.\nThis value is smaller than\nthe observed speed\nof the chromospheric arch filament system (AFS)\nof $\\sim 20\\ {\\rm km\\ s}^{-1}$\n\\citep[e.g.][]{bru67},\nwhich implies that\nthe actual rise speed\nis faster than $2.5\\ {\\rm km\\ s}^{-1}$\nand it takes some time\nto create an H$\\alpha$ plage\nafter the flux 
reaches\nthe chromospheric height.\n\n\n\\section{Summary\\label{sec:summary}}\n\nIn this Paper,\nwe have observed\nthe horizontal divergent flow (HDF)\nprior to the flux emergence\nby using SDO\/HMI Dopplergrams and magnetograms.\nThe presence of the HDF\nwas predicted\nby our preceding numerical simulations\n\\citep[e.g.][]{tor12}.\nHMI's continuous observation\nof the whole solar disk provides\nthe means to analyze\nthe earlier stage\nof the flux emergence.\nThe summary of the observation\nis given\nin Table \\ref{tab:summary}.\n\nFirst, we made time-slices of\nthe Dopplergrams and LoS magnetograms\nof NOAA AR 11081.\nFrom the magnetic slice,\nwe found that\nthe magnetic elements\nof positive and negative polarities\nseparated from each other.\nThe apparent speed\nof a single element was,\nat first, $1.2\\ {\\rm km\\ s}^{-1}$.\nThe speed then dropped\nto $0.4\\ {\\rm km\\ s}^{-1}$\nand the elements gathered\nto create stronger pores\nof $>200\\ {\\rm G}$.\nIn the Doppler slice,\na pair of blue and red patterns\nwas observed to separate,\nslightly earlier than\nthe flux emergence,\nand the blue (red) pattern\nwas located disk-centerward (limbward).\nThis indicates that\nthe HDF\nappeared prior to the flux emergence.\nAccording to our previous numerical experiments,\nthe outflow is mainly horizontal\nduring the period\nfrom the appearance of the outflow\nto the emergence of the magnetic flux.\n\nSecondly,\nwe evaluated the times of the HDF\nappearance\nand the flux emergence.\nTo determine these times\nwith significance,\nwe studied the temporal changes\nof the Doppler and magnetic patterns\nfrom those of the quiet Sun,\nand defined them as\nthe times when each profile exceeded\none standard deviation\nof its quiet-Sun profile.\nAs a result,\nthe Doppler profile was found to\ndeviate from the quiet-Sun profile\nat 01:23 UT, 2010 June 11,\nwhile the magnetic profile\ndeviated at 03:06 UT.\nTherefore,\nthe time difference was\nabout 100 minutes.\nAlso, by considering the 
heliocentric angle,\nthe horizontal speed of\nthe HDF in this time gap\nwas estimated to be\n$0.6$--$1.5\\ {\\rm km\\ s}^{-1}$,\nup to $2.3\\ {\\rm km\\ s}^{-1}$.\n\nThe creation of the HDF\nis due to the plasma accumulated\non the apex of the flux tube\nduring its ascent\nin the convection zone.\nThis accumulation occurs\nbetween the flattened apex\nof the rising flux tube\nand the subadiabatically stratified photosphere.\nThe compressed plasma\nescapes horizontally\naround the photosphere,\nwhich was observed\nin this Paper.\nAfter the magnetic flux\nis sufficiently intensified,\nthe magnetic buoyancy instability\nis triggered\nand the magnetic field resumes rising\ninto the upper atmosphere,\nwhich was also seen\nas a flux emergence\nin this Paper.\nTherefore, the time difference\nof $\\sim 100$ min\nmay reflect\nthe latency\nduring which\nthe flux waits\nfor the instability onset.\n\nApplying a simple model\nof the horizontal flow\nand the corresponding upflow\nbeneath the surface,\nwe speculated that\nthe depth of the magnetic flux\nis about $3000\\ {\\rm km}$.\nPreviously,\nSOHO\/MDI found that\nan upflow preceded the flux emergence\nby about 10 minutes\n\\citep{gri07}.\nThis implies that the last\n$\\sim 10$ min of the divergent outflow\nmay include the upward motion.\nEven so,\nfor most of the period,\nthe outflow remains horizontal.\n\nMoreover,\nusing H$\\alpha$ images\ntaken by SMART,\nwe studied the chromospheric response\nto the flux emergence\nat the photosphere.\nThe time-slice\nshowed a pair of\nH$\\alpha$ plages,\nwhich started at 03:20 UT,\nthat is,\n$\\sim 14$ min\nafter the flux emergence.\nThese brightenings\nwere located just over the photospheric pores.\nTherefore,\nwe speculated that\nthese brightenings are caused by\nthe plasma precipitating along\nthe magnetic fields\nthat connect photospheric pores\nof both polarities.\n\nThe time gap\nbetween the HDF occurrence\nand the flux emergence\nwill be investigated\nin our future numerical 
study.\nAs for the observational study,\nthe statistical analysis of HDFs\nwould be the next target.\nAnother important aspect\nof observing HDFs is that\nthis phenomenon\ncan be considered a precursor,\nwhich may allow us\nto predict sunspot formation\nseveral hours in advance.\n\n\n\n\n\\acknowledgments\n\nWe thank the SDO\/HMI team\nfor data support\nand useful discussions.\nS.T. thanks Dr. A. Kosovichev\nfor arranging his stay\nat Stanford University.\nThis work was supported\nby the JSPS Institutional Program\nfor Young Researcher Overseas Visits,\nand by the Grant-in-Aid\nfor JSPS Fellows.\nWe are grateful\nto the GCOE program instructors\nof the University of Tokyo\nfor proofreading\/editing assistance.\nWe also appreciate\nthe thorough and helpful comments\nby the anonymous referee.\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{}}\n\n\\section{\\label{sec:intro}INTRODUCTION}\nSupersymmetry (SUSY) is one of the most compelling theories for physics beyond the Standard Model (SM)~\\cite{susy}. It predicts a new symmetry between bosons and fermions such that for every SM particle, a superpartner should exist with a spin value differing by one half unit. This hypothesis has strong theoretical and experimental implications. On the theory side, it naturally solves the hierarchy problem~\\cite{hierarchy}, i.e., the divergence of the Higgs mass when radiative corrections are considered and the SM is assumed valid up to the Planck scale. In addition, SUSY makes the unification of forces at a Grand Unification Scale (GUT)~\\cite{GUT} possible. On the experimental side, the existence of several new particles, including a dark matter candidate under certain conditions, is predicted. 
If no particular fine tuning is introduced in the theory, these particles should be light enough to be produced at the current hadron colliders.\n\nSince the mechanism that breaks SUSY is unknown, more than 100 new parameters are introduced in the Minimal Supersymmetric extension of the Standard Model (MSSM) to induce a soft breaking of the symmetry~\\cite{mssm}. To reduce them to a more manageable set, different approaches are typically considered. The so-called ``top-down'' approach makes some assumptions at the GUT scale, and the phenomenology at the electroweak scale is then predicted via renormalization group equations. CMSSM~\\cite{CMSSM} or GMSB~\\cite{GMSB} are among the models most commonly used in this context. Alternatively, one can follow a ``bottom-up'' approach in which different phenomenological assumptions are made at the electroweak scale to simplify the number of particles expected and their relationships. Finally, limits can also be given generically as the product of cross section, efficiency and acceptance ($\\sigma\\cdot\\epsilon\\cdot A$). In this case, it is worth mentioning that this value is provided as an upper limit on the effective cross section given the luminosity and the number of expected and observed events, without any attempt to correct for the experimental constraints.\n\nThe Tevatron and the LHC hadron colliders are actively looking for signs of SUSY and, in their absence, further constraining the SUSY parameter space beyond the LEP legacy~\\cite{LEPsusy}. Two multipurpose experiments are collecting data at each of the colliders: ATLAS and CMS at the LHC, and CDF and D\\O~at the Tevatron. The LHC, being a proton-proton collider currently operating at a center-of-mass energy of 7~TeV, is particularly sensitive to colored SUSY particles such as squarks and gluinos (the superpartners of the quarks and gluons, respectively), even with relatively low luminosity. 
The Tevatron, with a center-of-mass energy of 1.96~TeV, was the first machine to establish limits beyond the LEP constraints on pair production of SUSY particles. Nowadays, it profits from the large dataset of proton-antiproton collisions to search for non-colored SUSY particles and direct production of third-generation squarks, establishing the most stringent limits to date on these processes.\n\nThe SUSY searches are generically classified into $R$-parity conserving (RPC) or violating (RPV) analyses. $R$-parity~\\cite{Rparity} is a symmetry postulated to forbid some lepton- and baryon-number-violating terms appearing in the SUSY superpotential. If $R$-parity is conserved, SUSY particles will always be produced in pairs and will decay in cascades until the Lightest Supersymmetric Particle (LSP) is produced. This particle is stable and constitutes a dark matter candidate, which escapes detection, producing a characteristic signature of large momentum imbalance in the transverse plane (\\met). On the contrary, RPV signatures are mostly characterized by the possibility of producing mass resonances from the decay of SUSY particles fully into SM particles.\n\nA comprehensive overview of all the different searches carried out at the Tevatron and the LHC experiments is beyond the scope of this document. The reader is referred to the dedicated pages of the experiments for further information~\\cite{wwwexp}. The rest of the document briefly describes the techniques and results of the different RPC and RPV searches carried out at the experiments at the Tevatron and the LHC colliders. Other more exotic scenarios, such as displaced vertices or R-hadrons, and results from indirect searches, in which experiments look for deviations in rare SM processes to constrain the SUSY parameter space, are not considered in this document.\n\n\n\n\n\\section{\\label{sec:RPC}RPC ANALYSES}\nThe most general signature of RPC processes is the presence of large \\met. 
In addition, SUSY cascade decays from the initial particles can be long or short and can include different numbers and flavors of leptons\\footnote{Throughout this document, hadronically-decaying taus are considered as jets unless otherwise stated} and jets. This rich phenomenology is used by the experiments to define dedicated searches and control different types of backgrounds.\n\n\\subsection{\\label{sec:nolep}Searches without Leptons}\nThe strong production of SUSY particles typically involves a relatively large number of jets and \\met. This is one of the most characteristic signatures in SUSY models, and this is why searches without leptons are the most sensitive to a large variety of scenarios. By vetoing leptons, the SM backgrounds are dominated by QCD multijet processes, which have extremely large cross sections but a very small $\\epsilon\\cdot A$ when requiring large \\met. This situation is very difficult to model with Monte Carlo (MC) simulations and different data-driven strategies are needed. Other important backgrounds are \\ensuremath{t\\bar t}, $W$+jets and $Z$+jets, the latter constituting an irreducible background when the $Z$ decays invisibly.\n\nATLAS carried out a search~\\cite{ATLAS_0lep} with 1.04\\invfb of integrated luminosity using the \\ensuremath{m_{\\mathrm{eff}}}~quantity, defined as the scalar sum of the \\pt~of the jets and the \\met. In order to maximize the sensitivity of the analysis to a variety of models, five different signal regions are defined requiring different inclusive jet multiplicities (from $\\geq 2$ to $\\geq 4$, with a leading jet of $\\pt>130$~GeV and subleading jets of $\\pt>40$~GeV), $\\met>130$~GeV and different \\ensuremath{m_{\\mathrm{eff}}}~thresholds ranging from 500 to 1100~GeV. Event selections reduce the QCD multijet contribution by requiring large \\met~relative to the hadronic activity and that no jet is aligned in the azimuthal plane with the \\met. 
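For orientation, the zero-lepton selection just described can be condensed into a toy filter. This is a sketch only: the jet, \met~and effective-mass thresholds follow the numbers quoted in the text, but summing all selected jets into the effective mass simplifies the per-region definition used in the actual ATLAS analysis.

```python
def passes_signal_region(jet_pts, met, meff_cut=500.0):
    """Toy version of the ATLAS 0-lepton selection sketched in the text.

    jet_pts -- jet transverse momenta in GeV, sorted in descending order
    met     -- missing transverse energy in GeV
    The 130/40 GeV jet thresholds, the 130 GeV MET cut and the effective-mass
    thresholds (500--1100 GeV) come from the text; everything else is a
    simplification for illustration.
    """
    jets = [pt for pt in jet_pts if pt > 40.0]   # subleading-jet threshold
    if len(jets) < 2 or jets[0] <= 130.0:        # >= 2 jets, hard leading jet
        return False
    if met <= 130.0:                             # MET requirement
        return False
    m_eff = sum(jets) + met                      # scalar sum of jet pT and MET
    return m_eff > meff_cut
```

Tightening `meff_cut` toward 1100 GeV mimics moving to the harder signal regions, which trade acceptance for background rejection.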
For each signal region, five control regions enhancing different backgrounds are defined. The QCD multijet background is estimated using a completely data-driven technique, which consists of generating pseudo-events by smearing the jets according to their response function, as derived in a region of low \\met~significance. This estimation is normalized appropriately in a region where at least one of the jets is aligned with the \\met. The rest of the backgrounds are estimated using MC or data-driven approaches in the control regions, and a MC-driven transfer function is then used to estimate the contribution in the signal region. A global likelihood fit combines all this information and takes into account the correlations between the uncertainties. No significant deviations from SM expectations are found and limits are derived. Figure~\\ref{fig:ATLAS_0lep} shows the 95\\% CL limits in a model with simplified phenomenology, in which all SUSY particles except for squarks of the first and second generation and gluinos are set to the 5~TeV range. In this way, the colored SUSY particles produced are forced to decay directly to the LSP, which is considered massless. These results significantly extend previous limits and are valid up to an LSP mass of 200~GeV.\n\n\\begin{figure}[h!]\n \\includegraphics[width=75mm]{ATLAS_0lep_interp}%\n \\caption{\\label{fig:ATLAS_0lep} ATLAS 95\\% CL limits on gluino and squark masses derived from the search without leptons in a simplified model containing only gluinos, squarks of the first and second generation and a massless LSP. Previous limits are also shown for reference.}\n\\end{figure}\n\nA search aiming at large jet multiplicities was also carried out by ATLAS with 1.34\\invfb~\\cite{ATLAS_multijets}. In this case, signal regions are defined by six, seven or eight jets with \\pt~thresholds ranging from 55 to 80~GeV. 
The main background contribution is from QCD multijet production, which is controlled by using the fact that $\\met\/\\sqrt{\\HT}$ (with \\HT~being the scalar \\pt~sum of the jets) is independent of the jet multiplicity. This assumption was validated in many different control regions. The rest of the backgrounds are estimated using MC and validated in dedicated control regions requiring one muon. No significant deviations are found and gluino masses below 520~GeV (680~GeV under the assumption that $\\msq = 2\\cdot \\mgl$) are excluded at 95\\% CL in a CMSSM model with $\\tan\\beta=10$, $A_0=0$ and $\\mu>0$.\n\nCMS also carried out a series of searches aiming at the same signature but with a special focus on topological variables to discriminate against backgrounds. In the following, two of them are described: the $\\alpha_T$~\\cite{CMS_alphaT} and Razor~\\cite{CMS_Razor} searches. The $\\alpha_T$ variable~\\cite{alphaT} is defined as the ratio between the \\pt~of the second leading jet and the transverse mass of the two leading jets. In back-to-back topologies, such as QCD multijet production, this ratio shows a sharp cutoff at 0.5, providing a good handle to discriminate against this type of background. In the case of more than two jets in the event, a two-jet topology is recovered by clustering all the jets that are relatively close in pseudo-rapidity and azimuthal distance, using a dedicated algorithm. The CMS analysis uses 1.1\\invfb of data and the fact that $R_{\\alpha_T}$, the ratio between events with $\\alpha_T>0.55$ and $\\alpha_T<0.55$, is flat as a function of \\HT~for the SM background. This information is exploited, together with some data-driven predictions, in a global likelihood fit. The experiment uses multiple \\HT~bins to maximize the sensitivity, and good agreement between data and expectations is found in all of them. 
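In the dijet case, the $\\alpha_T$ variable used above can be written explicitly (in the notation of~\\cite{alphaT}, with $E_T \\simeq \\pt$ for massless jets) as\n\\begin{equation}\n\\alpha_T = \\frac{E_T^{j_2}}{M_T}, \\qquad M_T = \\sqrt{\\Big(\\sum_{i=1,2} E_T^{j_i}\\Big)^{2} - \\Big(\\sum_{i=1,2} p_x^{j_i}\\Big)^{2} - \\Big(\\sum_{i=1,2} p_y^{j_i}\\Big)^{2}},\n\\end{equation}\nso that a perfectly balanced back-to-back dijet event has $\\alpha_T = 0.5$, mismeasured QCD events fall below this value, and events with genuine \\met~can exceed it. 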
This result significantly extends the previous limits produced with only $35\\invpb$ of data and is interpreted in a CMSSM benchmark scenario with $\\tan\\beta=10$, $\\mu>0$ and $A_0=0$, as shown in Figure~\\ref{fig:CMS_tanb10}, together with other results from different searches.\n\n\\begin{figure}[h!]\n \\includegraphics[width=85mm]{CMS_SUSY_2011Limits_tanb10}%\n \\caption{\\label{fig:CMS_tanb10} CMS 95\\% CL exclusion limits from many different searches in a CMSSM scenario with $\\tan\\beta=10$, $\\mu>0$ and $A_0=0$.}\n\\end{figure}\n\nThe Razor quantity~\\cite{Razor} is also exploited by CMS in a dedicated analysis with 35\\invpb of data. This search clusters the jets until a dijet topology is obtained and then the system is boosted back to the center-of-mass frame. The $M_R$ quantity is defined as the momentum of the jets in this frame, where both jets are equal in momentum since the pair-produced SUSY particles are of the same mass. This variable is constructed only from energy and $z$-momentum components and has the property of peaking at the mass difference between the produced particles and the invisible particles that escape detection, with a width related to the initial boost from radiation. In this way, the traditional search for an excess in the tails of some kinematic distributions can be converted into a bump-hunting search. The transverse version of this quantity, $M_{RT}$, is also defined and enters the razor variable definition, $R=M_{RT}\/M_R$. In this way, $R$ is dimensionless and combines longitudinal and transverse information. The analysis performs a fit to evaluate the different backgrounds using {\\it ansatz} functions determined in dedicated control regions. 
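Explicitly, in terms of the two mega-jets $j_1$ and $j_2$, the razor variables of~\\cite{Razor} read\n\\begin{equation}\nM_R = \\sqrt{(E_{j_1}+E_{j_2})^{2} - (p_z^{j_1}+p_z^{j_2})^{2}}, \\qquad M_{RT} = \\sqrt{\\frac{\\met\\,(\\pt^{j_1}+\\pt^{j_2}) - \\vec{E}_{T}^{\\mathrm{miss}}\\cdot(\\vec{p}_{T}^{\\,j_1}+\\vec{p}_{T}^{\\,j_2})}{2}},\n\\end{equation}\nwith $R=M_{RT}\/M_R$. 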
The signal region is defined as $R>0.5$ and $M_R>500$~GeV, where $5.5\\pm 1.4$ events are expected, in agreement with the 7 events observed.\n\n\n\\subsection{\\label{sec:onelep}Searches with One Lepton}\nRequiring the presence of at least one lepton in the event reduces the yield of some types of background processes, such as QCD multijet production, and makes the analysis sensitive to SUSY cascade decays involving leptons. ATLAS developed a search with 1.04\\invfb~\\cite{ATLAS_1lep} of data in which four signal regions are defined, with three or four jets in the final state and with different kinematic thresholds in order to increase the sensitivity to a generic set of models. The transverse mass between the lepton and the \\met, together with the \\ensuremath{m_{\\mathrm{eff}}}~quantity, now with the lepton included in the definition, are exploited to increase the sensitivity. The QCD multijet contribution is assessed in a completely data-driven manner using a matrix method~\\cite{ATLAS_MM}. The rest of the SM backgrounds are predicted using MC normalized to data in dedicated control regions and multiplied by a MC-driven transfer factor to estimate the corresponding contribution in the signal region. The different results and their uncertainties are finally combined in an overall likelihood fit and found to be compatible with the observed number of events. These null results are interpreted in different models, such as the one shown in Figure~\\ref{fig:ATLAS_1lep}, where 95\\% CL limits are derived in a simplified topology in which only the gluino, the LSP and an intermediate chargino are relevant. The colored scale indicates the cross sections excluded for any beyond-SM process with a similar topology, and the lines indicate the expected and observed exclusions in the MSSM case. 
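The transverse mass used in these selections is the standard\n\\begin{equation}\nm_T = \\sqrt{2\\,\\pt^{\\ell}\\,\\met\\,\\big(1-\\cos\\Delta\\phi(\\ell,\\vec{E}_{T}^{\\mathrm{miss}})\\big)},\n\\end{equation}\nwhich for $W$+jets and semileptonic \\ensuremath{t\\bar t}~events exhibits an endpoint near the $W$ mass, so requiring large $m_T$ suppresses these backgrounds. 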
CMS has also recently released a one-lepton analysis~\\cite{CMS_1lep} with 1.1\\invfb of data in which no deviation from SM expectations is found; these results are interpreted in the context of the CMSSM, as shown in Figure~\\ref{fig:CMS_tanb10}. \n\n\\begin{figure}[h!]\n \\includegraphics[width=75mm]{ATLAS_1lep_interp}%\n \\caption{\\label{fig:ATLAS_1lep} ATLAS excluded cross sections at 95\\% CL with a dedicated one-lepton analysis for processes in which gluinos are pair-produced and each of them decays into a quark and a chargino, subsequently producing a real or virtual $W$ and the LSP. The chargino mass is fixed such that $x = (m_{\\ch} - m_{\\mathrm{LSP}})\/(\\mgl - m_{\\mathrm{LSP}})=1\/2$. The solid and dashed lines are the exclusion limits when the MSSM scenario is considered.}\n\\end{figure}\n\n\n\\subsection{\\label{sec:twolep}Searches with Two Leptons}\nSearches with two identified leptons in the final state are also sensitive to strong production processes. Different cases, depending on whether the leptons have opposite sign (OS), same sign (SS), different flavor (DF), same flavor (SF) or combinations like OSSF, can be addressed and lead to different background estimation techniques.\n\nCDF developed a SS dilepton analysis~\\cite{CDF_SS}, using 6.1\\invfb of data, aiming at squark or gluino pair production with an intermediate neutralino and chargino decaying via a real or virtual $W$ or $Z$ boson. Background yields are dominated by processes containing real leptons (dibosons) and lepton misidentification from jets ($W$+jet and \\ensuremath{t\\bar t}) or conversions ($Z\/\\gamma^*$ and \\ensuremath{t\\bar t}). No deviations from the SM expectations are found.\n\nCMS also developed a SS dilepton analysis with a null result using 0.98\\invfb~\\cite{CMS_SS} of integrated luminosity. In this case, different flavor combinations (including taus) are considered, together with several \\pt, \\HT~and \\met~thresholds. 
For each of the cases, a dedicated data-driven technique is used to estimate the different background contributions. The results are interpreted in terms of limits in the CMSSM scenario, as shown in Figure~\\ref{fig:CMS_tanb10}. \n\nCMS has also released results with 0.98\\invfb of integrated luminosity in a dilepton OS channel using two different approaches~\\cite{CMS_OS}. The first one investigates the presence of an excess in the OSSF combination. In SUSY, cascades such as $\\tilde\\chi_2^0\\to l\\tilde l\\to ll\\tilde\\chi_1^0$ are expected, and the invariant mass of the OSSF leptons produced in this way would exhibit a characteristic kinematic edge whose position relates to the mass differences between the SUSY particles. Thus, unbinned maximum likelihood fits are performed in control and signal regions, defined respectively as $100<\\HT<300$~GeV and $\\HT>300$~GeV. As shown in Figure~\\ref{fig:CMS_OSSF}, good agreement with the expectation is observed. The other approach follows a canonical counting experiment, with two different signal regions defined at high \\met~or \\HT~and with three different data-driven methods to estimate the backgrounds. Good agreement between observed and expected yields is found in all cases, and limits are also derived in the context of the CMSSM, as shown in Figure~\\ref{fig:CMS_tanb10}.\n\nATLAS has also recently released results for OS, SS and OSSF dilepton combinations with 1\\invfb~\\cite{ATLAS_2l} of data. For the OS (SS) analyses, three (two) signal regions are defined, with at least one of them requiring large \\met~and no jet requirement. For the OSSF case, an excess of SF over DF is tested against a background-only hypothesis calculated with pseudo-experiments, taking into account the different uncertainties. 
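For the two-step decay $\\tilde\\chi_2^0\\to l\\tilde l\\to ll\\tilde\\chi_1^0$ discussed above, the endpoint of the dilepton invariant-mass distribution is given by the well-known expression\n\\begin{equation}\nm_{ll}^{\\mathrm{max}} = \\frac{\\sqrt{(m_{\\tilde\\chi_2^0}^{2}-m_{\\tilde l}^{2})(m_{\\tilde l}^{2}-m_{\\tilde\\chi_1^0}^{2})}}{m_{\\tilde l}},\n\\end{equation}\nso a measurement of the edge position constrains a combination of the SUSY masses involved. 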
In all cases, no excess is observed with respect to the SM expectations.\n\n\\begin{figure}[h!]\n \\includegraphics[width=70mm]{CMS_OSSF_fit}%\n \\caption{\\label{fig:CMS_OSSF}Results of the maximum likelihood fit to the dilepton mass distribution for events in the CMS OSSF signal region.}\n\\end{figure}\n\n\\subsection{\\label{sec:multilep}Searches with Multiple Leptons}\nAnalyses requiring three leptons in the final state are particularly sensitive to the production of uncolored particles such as a chargino and a neutralino, which may decay via virtual $W$ or $Z$ bosons or, if kinematically allowed, via sleptons. SM backgrounds producing three leptons and significant \\met~in the final state are small, mostly reducing to diboson production and \\ensuremath{t\\bar t}~with a lepton from the semi-leptonic decay of a $b$-jet. This final state has been considered the golden signature for SUSY searches at the Tevatron due to its particularly favorable signal-to-background ratio. Thus, although with a data sample of $\\sim 1$\\invfb~the LHC may soon become as powerful as the Tevatron, the currently most sensitive searches for these processes have been performed at CDF and D\\O.\n\nD\\O~developed a search with 2.3\\invfb of integrated luminosity in four different channels, combining electrons and muons with an isolated track and taus~\\cite{D0_trilep}. The trigger performance establishes the minimum possible \\pt~thresholds of the objects: $\\pt>(12, 8)$~GeV for the two-lepton triggers and 15~GeV for the single-muon trigger, needed for the tau case. Two different \\pt~selections per channel are implemented. An extensive set of cuts exploiting kinematic information, such as invariant masses, \\HT~and angular distributions, is applied in each of the different channels, aiming at reducing the dominant backgrounds. 
No significant deviation from the background expectation is observed in any of the selections.\n\n\\begin{figure}[h!]\n \\includegraphics[width=80mm]{CDF_trilep_mumu_track_met.pdf}\n \\caption{\\label{fig:CDF_trilep}Distribution of \\met~in one of the signal regions of the CDF analysis with two muons and a track.}\n\\end{figure}\n\nCDF recently updated~\\cite{CDF_trilep} their previous study on trileptons by considering 5.8\\invfb of data and eight different exclusive channels, combining two electrons or two muons with a third object that can be an electron, a muon, a tau or a track, in \\pt~ranges between 5 and 20~GeV. In order to control the description of the different backgrounds, mostly dominated by Drell-Yan with a misidentified jet, 24 (40) control regions were defined in the dilepton-and-track (trilepton) case. As shown in Figure~\\ref{fig:CDF_trilep} for the dimuon-and-track selection, no significant deviation from SM expectations is observed. CDF excludes at 95\\% CL chargino masses below 168~GeV in a CMSSM scenario with \\ensuremath{m_{0}}=60~GeV, $\\tan\\beta=3$, $A_0=0$ and $\\mu>0$. This limit is similar to the one obtained by D\\O.\n \n\n\\subsection{\\label{sec:bjets}Searches with $b$-jet Tagging}\nSUSY particles of the third generation, such as the stop, the sbottom or the stau, could have significantly lower masses than the rest of the SUSY particles due to the mixing between the weak left- and right-handed eigenstates.\n\nSearches for the direct production of sbottoms at CDF, using 2.65\\invfb~\\cite{CDF_sbottom} of data, and D\\O, using 5.2\\invfb~\\cite{D0_sbottom} of data, focus on the simplified case of $\\tilde b \\to b+\\tilde\\chi_1^0$. The final-state signature of two $b$-jets~and \\met~is exploited by requiring one or two $b$-tagged jets, a lepton veto and some dedicated kinematic variables to reduce the top and QCD multijet backgrounds. 
One loose and one tight selection are applied in both experiments in order to enhance the sensitivity to different $\\tilde b - \\tilde\\chi_1^0$ mass differences. Since no deviations from expectations are observed, sbottom masses up to approximately 230--250~GeV are excluded when the LSP mass is below 70~GeV. \n\nD\\O~recently published a search for direct stop production with 5.4\\invfb~\\cite{D0_stop} of integrated luminosity. The stop can decay into many different final states depending on its own mass and those of other SUSY particles such as charginos, neutralinos and sleptons. In this analysis, the targeted scenario is a decay via a sneutrino: $\\tilde t\\bar{\\tilde t}\\to(b e\\tilde\\nu)(\\bar b\\mu\\tilde\\nu)$. The main backgrounds for OSDF dileptons are $Z\\to\\tau\\tau$, dibosons and dileptonic top. A discriminant based on a linear combination of different variables is built, and two selections optimized for small and large stop-sneutrino mass differences are considered. Since the data are found to be in agreement with the SM, limits on the stop mass as a function of the sneutrino mass are derived, significantly extending the previous results, as shown in Figure~\\ref{fig:D0_stop}.\n\nAs in the case of direct gaugino production, with a dataset of 1\\invfb~the LHC is not yet as sensitive as the Tevatron in searches for the direct production of third-generation particles. Instead, ATLAS developed an analysis with 0.83\\invfb of integrated luminosity targeting gluino-mediated production of sbottoms, which has a larger cross section and provides a striking signature of four $b$-jets~and \\met~\\cite{ATLAS_glsb}. The gluino is assumed to decay via an on-shell or off-shell sbottom to the LSP, and all other SUSY particles are assumed to be decoupled. Four different signal regions are defined, requiring either one or two $b$-tagged jets and \\ensuremath{m_{\\mathrm{eff}}}~thresholds of 500 or 700~GeV. 
A lepton veto is also applied, and the QCD multijet background is determined in a fully data-driven way, as in the ATLAS search without leptons described in Section~\\ref{sec:nolep}. Other SM backgrounds are evaluated using MC and validated with semi-data-driven estimations requiring one lepton. No significant deviations are observed and these null results are interpreted in different theoretical models. Figure~\\ref{fig:ATLAS_glsbottom} shows the extension of the limits with respect to the Tevatron and the previous ATLAS results with only 35\\invpb of integrated luminosity, in the scenario in which the gluino is heavier than the sbottom and all the other SUSY particles are set at a higher scale, except for the neutralino, which has a mass of 60~GeV.\n\n\\begin{figure}[h!]\n \\includegraphics[width=72mm]{D0_stop}%\n \\caption{\\label{fig:D0_stop} Observed and expected 95\\% CL exclusion regions on the scalar top mass for different sneutrino mass values in the direct stop search performed by D\\O. The shaded band around the expected limit shows the effect of the scalar top quark pair production cross section uncertainty. Other limits from previous analyses are also shown for reference. }\n\\end{figure}\n\n\\begin{figure}[h!]\n \\includegraphics[width=75mm]{ATLAS_glsb}%\n \\caption{\\label{fig:ATLAS_glsbottom} Exclusion limits at 95\\% CL in the gluino-sbottom mass plane for the ATLAS gluino-mediated sbottom production analysis. Here, the neutralino mass is set to 60~GeV and other limits are shown for reference, including the direct sbottom constraints from the Tevatron in the same scenario.}\n\\end{figure}\n\nIn addition, ATLAS performed a gluino-mediated stop search with 1.03\\invfb~\\cite{ATLAS_glst} of integrated luminosity. In this case, the gluino is forced to decay to the LSP via an on-shell or off-shell stop. In the former case, the stop decays into $b\\tilde\\chi_1^\\pm$ or $t\\tilde\\chi_1^0$, depending on its mass. 
The search is performed by requiring four jets, one lepton and at least one $b$-tagged jet, as well as large \\met, \\ensuremath{m_{\\mathrm{eff}}}~and transverse mass between the lepton and the \\met. The SM expectation, estimated via fully or semi-data-driven techniques, is $54.9\\pm 13.6$ events, while 74 events are observed in data. Gluino masses below approximately 500~GeV are excluded, with a small dependence on the stop mass.\n\n\\subsection{\\label{sec:photon}Searches with Photons}\nOne of the most favored SUSY models with photons in the final state is GMSB~\\cite{GMSB}. In this model, SUSY particles acquire masses via gauge interactions, and these masses are proportional to the breaking scale $\\Lambda$. In this context, the gravitino is always the LSP and different types of next-to-LSP (NLSP) can be considered. In the case of a $\\tilde\\chi_1^0$ NLSP that is mostly bino\\footnote{The SUSY partner of the U(1) gauge boson\\label{fn:bino}}, the predominant decay is to a photon and a gravitino, yielding a diphoton and \\met~signature. Backgrounds to this signature can be classified into QCD ``instrumental'' backgrounds (mainly from diphoton, photon+jet and dijet production), electroweak ``genuine'' backgrounds ($\\gamma+(W\\to e\\nu)$) and irreducible backgrounds ($(Z\\to\\nu\\nu)+\\gamma\\gamma$ and $(W\\to l\\nu)+\\gamma\\gamma$). The former two backgrounds can be treated using data-driven techniques, while the latter is usually small and assessed using MC predictions.\n\nAll four experiments performed a search for this final state using very similar techniques and reported null results. Tevatron searches were focused on the GMSB SPS8 scenario~\\cite{SPS8}, which is dominated by gaugino pair production. D\\O, with 6.3\\invfb of data, excluded $\\tilde\\chi_1^0$ masses below 175~GeV~\\cite{D0_photons} and CDF, with a smaller dataset of 2.6\\invfb, also constrained the NLSP masses as a function of the NLSP lifetime~\\cite{CDF_photons}. 
Since the LHC is more sensitive to strong production, the experiments targeted the Generalized Gauge Mediation (GGM) model~\\cite{GGM}, in which the constraints at the GUT scale are relaxed to allow almost arbitrary values of the squark and gluino masses. Both ATLAS~\\cite{ATLAS_photons} and CMS~\\cite{CMS_photons}, with approximately 1\\invfb of data, excluded squark (gluino) masses below $\\sim 700$ ($\\sim 800-900$)~GeV, when assuming all other SUSY particles to be at higher scales. In addition, as shown in Figure~\\ref{fig:ATLAS_SPS8}, ATLAS produced for the first time exclusion limits in the SPS8 scenario, extending the D\\O~limits by $\\sim 30$~GeV in the $\\tilde\\chi_1^0$ mass.\n\n\\begin{figure}%\n \\includegraphics[width=70mm]{ATLAS_SPS8_interp}%\n \\caption{\\label{fig:ATLAS_SPS8}ATLAS expected and observed 95\\% CL upper limits on the SPS8 production cross section as a function of $\\Lambda$ and the lightest chargino and neutralino masses.}\n\\end{figure}\n\n\n\\section{\\label{sec:RPV}RPV ANALYSES}\n$R$-parity violating terms in the SUSY Lagrangian are strongly constrained by experimental limits (e.g. on the proton lifetime)~\\cite{Rparity}. Experiments usually assume all couplings to be zero except for the least constrained ones, such as $\\lambda'_{311}$ and $\\lambda_{312}$, where the indices refer to the families and the couplings are described in the superpotential as $\\lambda_{ijk}\\hat L_i \\hat L_j \\hat E_k +\\lambda'_{ijk}\\hat L_i \\hat Q_j \\hat D_k$. 
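For reference, $R$-parity can be written in terms of the baryon number $B$, lepton number $L$ and spin $s$ as\n\\begin{equation}\nP_R = (-1)^{3(B-L)+2s},\n\\end{equation}\nwhich equals $+1$ for all SM particles and $-1$ for their superpartners. 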
Searches in RPV scenarios focus on finding a resonance produced by the decay of SUSY particles into SM particles.\n\n\\begin{figure}[h!]\n \\includegraphics[width=80mm]{D0_snutau}%\n \\caption{\\label{fig:D0_snutau}Invariant mass of $e\\mu$ final states for different SM processes and two signal samples used for reference in the D\\O~stau neutrino search.}\n\\end{figure}\n\n\\subsection{\\label{sec:snutau}Searches for Scalar Tau Neutrino}\nA search for an RPV scalar tau neutrino decaying to an electron and a muon was carried out at D\\O~using a data sample of 5.3\\invfb~\\cite{D0_staunu}. After cuts requiring exactly one electron and one muon and reducing the contamination from jets faking leptons, no evidence of a mass resonance peak is found, as shown in Figure~\\ref{fig:D0_snutau}. A similar analysis, but requiring opposite-sign leptons and using somewhat different background techniques, was performed by ATLAS with 0.87\\invfb~of data~\\cite{ATLAS_staunu}. No deviation from the SM was found and limits are translated into the plane of the $\\tilde\\nu_\\tau$ production coupling ($\\lambda'_{311}$) against the $\\tilde\\nu_\\tau$ mass for different values of the decay coupling ($\\lambda_{312}$), as shown in Figure~\\ref{fig:ATLASD0_RPV}. These limits exemplify the current complementarity between the experiments: D\\O~is more competitive at lower masses, whereas ATLAS is more sensitive at higher masses.\n\n\\begin{figure}[h!]\n \\includegraphics[width=85mm]{ATLAS_snutau}%\n \\caption{\\label{fig:ATLASD0_RPV}Upper 95\\% CL limits on the $\\lambda'_{311}$ coupling as a function of the $\\tilde\\nu_\\tau$ mass for three values of $\\lambda_{312}$. 
Regions above the curves are excluded by either the ATLAS or the D\\O~scalar tau neutrino searches.}\n\\end{figure}\n\n\\subsection{\\label{sec:jetresonance}Searches for Jet Resonances}\nBoth the CDF (with 3.2\\invfb of data)~\\cite{CDF_3jet} and CMS (with 35\\invpb of data)~\\cite{CMS_3jet} collaborations performed a search for gluino pair production decaying into three jets. The search for two 3-jet resonances in a six-jet final state is performed by exploiting the kinematic relationship between the scalar \\pt~of a jet triplet and its invariant mass. In this way, the experiments manage to reduce the combinatorics and reject the QCD multijet background, as shown in Figure~\\ref{fig:jetresonance}. The complementarity between the experiments allows full coverage of the mass range from 77 to 500~GeV. With this technique, CDF excludes RPV gluino masses below 144~GeV (a $2~\\sigma$ excess is found around the top mass) and CMS excludes gluino masses between 200 and 280~GeV (a $1.9~\\sigma$ excess is found at 380~GeV).\n \n\n\\begin{figure}[h!]\n \\includegraphics[width=85mm]{CMS_3jetresonance}%\n \\caption{\\label{fig:jetresonance}Simulated triplet jet invariant mass versus the triplet scalar \\pt~for all possible combinations for a 250~GeV gluino mass. All triplets falling to the right of the red dashed line pass the final selection. The inset shows the combinations before and after the selection.}\n\\end{figure}\n\n\\section{\\label{sec:summary}SUMMARY AND OUTLOOK}\nSearches for supersymmetry have been carried out at the Tevatron and the LHC colliders. Thanks to the complementarity between the machines, many different final states and mass ranges have been carefully scrutinized. 
Since no significant deviations from the SM predictions have been found, the vast parameter space available for SUSY has been substantially reduced, and the scenarios favored by electroweak precision tests are now excluded or significantly constrained by the new stringent limits. The question of whether SUSY really exists, and whether it is within the reach of current collider experiments, is becoming more pressing.\n\nOne of the great virtues of SUSY is the stabilization of the electroweak sector. Keeping the radiative corrections to the Higgs mass under control requires a relatively low stop mass in order to avoid excessive fine-tuning. This also means that the gluino should be relatively light, since it contributes to the corrections to the stop mass. Thus, in order to preserve the naturalness arguments for SUSY, two main scenarios can be envisioned: one is the existence of heavy squarks, an intermediate gluino and light stops and gauginos; the other is a SUSY spectrum compressed into a narrow range of masses, which would evade the current searches at colliders and would also mean that the SUSY breaking scale resides at relatively low energies. 
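The naturalness argument above can be made semi-quantitative: parametrically, the one-loop stop contribution to the Higgs mass parameter scales as\n\\begin{equation}\n\\delta m_{H}^{2} \\sim -\\frac{3 y_t^{2}}{8\\pi^{2}}\\, m_{\\tilde t}^{2}\\, \\ln\\frac{\\Lambda}{m_{\\tilde t}},\n\\end{equation}\nwith $y_t$ the top Yukawa coupling and $\\Lambda$ the scale at which SUSY breaking is mediated, so avoiding large cancellations requires a stop mass not far above the electroweak scale. 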
Both scenarios are still possible and will probably determine the roadmap of the searches in the coming years, at least until the LHC is able to reach its nominal 14~TeV center-of-mass energy and provide a more conclusive answer to the current open questions in our understanding of the universe.\n\n\n\\bigskip\n\\begin{acknowledgments}\nThe author would like to thank the organizers for their hospitality and their commitment to making this conference a successful event.\n\n\\end{acknowledgments}\n\n\\bigskip\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThis paper presents an integrated deep-learning-based system, relying on monocular images and fixed single-beam echo-sounder (SBES) measurements, for navigating an underwater robot in unknown 3D environments with obstacles.\n\nObstacle avoidance is fundamental for Autonomous Underwater Vehicles (AUVs) to safely explore the largely unmapped underwater realms (e.g., coral reefs, shipwrecks).\nHowever, the underwater environment itself poses unique challenges with regard to safe navigation, which is still an open problem for AUVs~\\cite{petillot2019underwater}.\nFew sensors and positioning systems (e.g., GPS) that accurately measure the surroundings operate underwater, preventing the use of well-established navigation methods~\\cite{pfrunder2017real} originally designed for ground vehicles with sensors like LiDAR.\nIn addition, the sensor configurations of low-cost AUVs, equipped with a monocular camera, an inexpensive IMU, a compass, and a fixed SBES, bear their own individual drawbacks, such as the lack of scale information and drifting\/uncertain measurements.\nThese challenges make the classic methods for obstacle avoidance and navigation in unknown environments -- i.e., those which (1) estimate the geometry of the space using sensors with direct~\\cite{engel2017direct} or indirect~\\cite{campos2021orb, rahman2019iros-svin2} state estimation methods and (2) 
apply specific behaviors or planning in the partial map (e.g., Vector Field Histogram~\\cite{Panagou2014}, Dynamic Window Approach~\\cite{fox1997dynamic}%\n) -- not directly applicable in underwater scenarios.\n\n\\begin{wrapfigure}[19]{R}{0.48\\textwidth}\n\\vspace{-2.5em}\n\\includegraphics[width=0.48\\textwidth]{figs\/beauty_obstacle_avoidance_pengzhi_3.pdf}\\vspace{-1.2em}\n\\caption{How to guide an underwater robot to 3D waypoints given only monocular images, fixed echo-sounder range measurements, and a localization system, but \\emph{no map}, while also avoiding obstacles?}\n\\label{fig:beauty}\n\\end{wrapfigure}\n\n\n %\n\n\n\nWith recent advances in deep reinforcement learning (DRL)~\\cite{kober2013reinforcement,%\nmnih2015human}, several end-to-end deep-neural-network-based methods, mapping raw images directly to control outputs, have emerged.\nThese end-to-end methods -- typically tasked to endlessly navigate or to reach a visual target -- have demonstrated good performance for ground robots in unknown environments~\\cite{xie2018wheels%\n}. In comparison, underwater domains pose problems for learning-based visual navigation due to a more complex image formation model that results in, e.g., backscattering and light attenuation. %\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nThis paper proposes a goal-oriented end-to-end DRL navigation approach, given that classical planning methods are not straightforward to apply, as they require accurate maps, which are difficult to obtain due to the underwater perception challenges described above. %\nIn particular, we design the first multi-modal end-to-end underwater navigation system for unstructured 3D environments for which no map is available, based on Proximal Policy Optimization (PPO)~\\cite{wijmans2019ddppo}, which allows for a continuous action space. The provided inputs are goal positions, estimated depth images, and range measurements from the fixed SBES. 
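As a reminder, PPO maximizes the clipped surrogate objective\n\\begin{equation}\nL^{\\mathrm{CLIP}}(\\theta) = \\mathbb{E}_t\\Big[\\min\\big(r_t(\\theta)\\hat{A}_t,\\; \\mathrm{clip}(r_t(\\theta),\\,1-\\epsilon,\\,1+\\epsilon)\\,\\hat{A}_t\\big)\\Big], \\qquad r_t(\\theta)=\\frac{\\pi_\\theta(a_t\\mid s_t)}{\\pi_{\\theta_{\\mathrm{old}}}(a_t\\mid s_t)},\n\\end{equation}\nwhere $\\hat{A}_t$ is an advantage estimate and $\\epsilon$ a clipping parameter; the clipping keeps policy updates close to the data-collection policy, which stabilizes training for continuous control. 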
Monocular camera and fixed SBES keep the AUV's cost low, while exploiting and complementing the individual sensors' strengths -- i.e., the monocular camera's large field of view, which can provide relative scene depth, and the SBES's absolute range measurement. %\n %\nWe also propose a method to mitigate the sim-to-real gap problem by incorporating domain randomization into our system. We generated realistic simulated environments with different underwater visibility and randomized training environments, enhancing the model's robustness to the changing visual conditions of real underwater domains.\n %\n %\n Extensive experimental analysis with tests and ablation studies of the proposed navigation system was conducted both in simulation and in the real world. Results demonstrated high safety and efficiency compared to traditional navigation baselines and other sensor\/model configurations, as well as reliable transferability to new environments.\n %\n \n %\n \n\n\n\\section{Related Work}\\label{sec:relatedwork}\nObstacle avoidance and navigation without a prior map have been studied, starting with wheeled mobile robots equipped with bumpers and sonar sensors~\\cite{choset2005principles} and later branching off into different environments and sensor configurations.\nFor underwater domains, one of the main challenges is the limited choice of sensors.\nWhile some underwater LiDAR solutions are available~\\cite{mcleod2013autonomous}, they are expensive (US\\$100,000 or more) and bulky -- requiring a laser scanner and a camera. In addition, there is a lack of global positioning systems, and acoustic-based positioning systems are affected by noise, making mapping underwater challenging~\\cite{petillot2019underwater}.\nOur goal is to enable navigation for low-cost AUVs. 
Therefore, in the following, we discuss applications using sensors (i.e., SBES, cameras) that are typically configured on low-cost underwater robots.\n\n\nIn practice, many underwater navigation systems depend on acoustic, inertial, and magnetic sensors \\cite{kinsey2006navigation,williams2001navigation,paull2013auv}.\nFor example, Calado \\textit{et al.}~\\cite{calado2011obstacle} proposed a method where the robot used an SBES to detect obstacles and construct a map of them.\nHowever, an SBES can only provide a single fixed distance measurement and has high uncertainty given the wide beam cone -- around \\ang{30}. \nTo infer more about a complex scene, the robot must frequently turn in multiple directions, which negatively affects navigation efficiency. \nAlternatively, multi-beam and mechanically scanning sonars can cover a larger field of view~\\cite{petillot2001underwater}. \nHern\\'{a}ndez \\textit{et al.}~\\cite{hernandez2015online} used a multi-beam sonar to simultaneously build an occupancy map of the environment and generate collision-free paths to the goals. Grefstad \\textit{et al.}~\\cite{grefstad2018navigation} proposed a navigation and collision avoidance method using a \nmechanically scanning sonar for obstacle detection.\nHowever, a scanning sonar takes a few seconds to scan a 360$^{\\circ}$ view. \nThe acoustic sensors' accuracy depends on the environment structure and the type of reflections that arise. In addition, multi-beam and mechanically scanning sonars are significantly more expensive than monocular cameras and SBES (on the order of $>$US\\$10k vs.\\ US\\$10 - US\\$100). \n\n\n %\n\n\nWhile cameras have been shown to provide dense real-time information about the surroundings out of the water~\\cite{liu2015learning}, there are fewer underwater obstacle avoidance methods that use cameras. The underwater domain indeed poses significant challenges, including light attenuation and scattering.
\nMost work considers reactive controls, i.e., no goal is specified. \nRodr{\\'\\i}guez-Teiles \\textit{et al.}~\\cite{rodriguez2014vision} segmented RGB images to determine the direction for escape. Drews-Jr \\textit{et al.}~\\cite{drews2016dark} estimated relative depth using the underwater dark channel prior and used that estimate to determine the action. \nThere have been recent efforts in 3D trajectory optimization for underwater robots.\nXanthidis \\textit{et al.}~\\cite{xanthidis2020navigation} proposed a navigation framework for AUV planning in cases when a map is known or when a point cloud provided by a visual-inertial SLAM system~\\cite{rahman2019iros-svin2} is available. Our proposed method navigates the robot to 3D waypoints without an explicit representation of the environment.\n\n\n\n\nRecently, deep learning (DL) methods have been shown to work well with underwater robots.\nManderson \\textit{et al.}~\\cite{manderson2018vision} proposed a convolutional neural network that takes RGB images as input and outputs unscaled, relative path changes for AUV driving. The network was trained with human-labeled data, with each image associated with desired changes in yaw and\/or pitch to avoid obstacles and explore interesting regions. \nIt was later extended with a conditional-learning based method for navigating to sparse waypoints, while covering informative trajectories and avoiding obstacles~\\cite{manderson2020vision}. Our proposed method does not require human-labeled data.\n\nAmidst the progress in DRL, there is more research on robots operating out of water with monocular cameras.\nSome of these methods addressed the problem of safe endless 2D navigation without specifying any target location. \nXie \\textit{et al.}~\\cite{xie2017monocular} trained a Double Deep Q-network to avoid obstacles in simulated worlds and tested it on a wheeled robot.
Kahn \\textit{et al.}~\\cite{kahn2018self} proposed a generalized computation graph for robot navigation that can be trained with fewer samples by subsuming value-based model-free and model-based learning. \nOther works provided the goal as a target image instead of a location~\\cite{zhu2017target,devo2020towards,wu2020towards}. %\nSome methods, based on an end-to-end network, guided the robot to the goal using LiDAR or RGB-D cameras~\\cite{pfeiffer2017perception,%\nxie2018wheels,%\nzhang2017deep, liang2021crowd} and the goal's relative position for path planning. %\nRecently, a DD-PPO based method was used to navigate a robot in an unknown indoor (simulated) environment, using an RGB-D camera, GPS, and compass~\\cite{wijmans2019ddppo}. Our method will be based on PPO, with the additional challenge of not having depth information directly from the camera.\n\nNevertheless, due to the difficulties of applying DRL in real-world environments, most works performed training in simulation.\nHowever, policies learned in simulated environments may not transfer well to the real world, due to the reality (sim-to-real) gap~\\cite{tobin2017domain}.\nTo address this, several methods utilized domain randomization, where parameters of the simulated world were varied so that the learned policies remained robust in the real-world domain.\nFor example, Sadeghi and Levine~\\cite{sadeghi2016cad2rl} proposed a DRL approach for indoor flight collision avoidance trained only in CAD simulation that was able to generalize to the real world by highly randomizing the simulator's rendering settings.
%\n\nOur approach draws from the advances in DRL: we design an end-to-end pipeline for low-cost underwater robot navigation to address the underwater challenges, combining multiple sensors and applying domain randomization.\n\n\n\n\n\n\n\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[trim={0cm .4cm 0cm 0cm}, clip, width=.9\\textwidth]{figs\/flowchart.png}\n \\vspace{-1em}\n \\caption{\\textit{Flowchart for the Proposed End-to-End Underwater 3D Navigation System.}\n The pipeline includes two stages: a depth prediction module (DPT) followed by a decision making module (PPO). During training, at each episode $i$, the robot is deployed in a randomized simulated environment. The predicted depth map $o^\\textrm{imageDepth}_t$ of the raw RGB image $o^\\textrm{imageRGB}_t$, relative goal position $o^\\textrm{goal}_t$, echo-sounder reading $o^\\textrm{range}_t$, and previously executed action $a_{t-1}$ are stacked with the past $k$ observations from previous time steps and fed into the PPO network (solid lines). The robot performs the action sampled from the output policy distribution. New observations (dashed lines) are then obtained for computing the next action at time step $t+1$. %\n During real-world deployment, DPT's computationally less expensive counterpart MiDaS was used as the depth prediction module for real-time inference.\n }\n \\label{fig:system_overview}\n \\vspace{-15pt}\n\\end{figure*}\n\n\n\\section{Approach}\\label{sec:approach}\nThe problem considered in this paper is as follows: an underwater robot deployed in an unknown environment needs to navigate to a goal location $G \\in \\mathbb{R}^3$, minimizing the travel time, while avoiding collisions with obstacles.
\n\nTo develop a mapless navigation solution for low-cost robots, we consider an underwater thruster-vectored robot that has an inexpensive sensor suite composed of: (1) a monocular camera, (2) an SBES placed below the camera and looking forward, (3) a compass, (4) a pressure sensor for water depth, and (5) a (noisy) localization system. Selecting this sensor configuration allows us to exploit the larger field of view (FOV) covered by the camera while obtaining absolute front distance estimates with the fixed SBES.\n\nFor a general solution, robust to noise and changing visual conditions, we approach the real-time 3D navigation problem by devising an end-to-end system (see \\fig{fig:system_overview}) based on a neural network for dense depth prediction from monocular images and on a deep reinforcement learning method that takes as input %\nthe sensor suite data and outputs vertical and steering commands. %\nGiven the absence of prior knowledge of the environment, we consider a window of prior measurements and executed actions.
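Concretely, this windowed decision loop can be sketched as follows (a minimal sketch; `control_loop_step`, the placeholder observation fields, and the dummy policy are illustrative names, not our actual implementation):

```python
from collections import deque

K = 5  # number of stacked time steps (the window size used in our experiments)

def control_loop_step(history, obs, policy):
    """One decision step: append the newest observation, stack the last
    K observations, and query the policy for a (v, omega) action."""
    history.append(obs)              # deque(maxlen=K) drops the oldest entry
    return policy(list(history))

# Toy run with placeholder observations and a dummy policy standing in for PPO.
history = deque(maxlen=K)
dummy_policy = lambda stacked: (0.0, 0.1)  # placeholder, not the trained network
for t in range(8):
    obs = {"depth_image": None, "goal": (2.0, 0.5, 0.1),
           "range": 3.5, "last_action": (0.0, 0.0)}
    action = control_loop_step(history, obs, dummy_policy)
```

The fixed-length deque mirrors the sliding window: once $k$ steps have elapsed, each new observation evicts the oldest one.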
\n\nIn the remainder of this section, we describe in detail the RL approach, the depth prediction network, and how we address the sim-to-real gap.\n\n\\subsection{Multi-Modal Deep Reinforcement Learning Navigation}\nGiven an unknown environment, the navigation problem can be formulated as a Partially Observable Markov Decision Process (POMDP), defined with a 6-tuple: state space $S$ that cannot be directly observed by the robot, action space $A$ modifying the current state of the robot, observation space $\\Omega$, a state-transition model $T$, the observation probability distribution $O$, and a reward function $R$ which returns the reward after a state transition.\n\n\\textbf{Observation space.} The observation $O_t$ at time step $t$ consists of: (1) the predicted depth image $o^\\textrm{imageDepth}_t \\in \\mathbb{R}^{128\\times160}$; (2) an SBES range measurement $o^\\textrm{range}_t \\in \\mathbb{R}$; (3) the current relative goal position $o^\\textrm{goal}_t \\in \\mathbb{R}^3$ -- specifically, $[D^h_t, D^v_t, \\theta^h_t]^\\top$, where $D^h_t$, $D^v_t$ are the robot's current horizontal and vertical distances to the goal and $\\theta^h_t$ represents the relative yaw heading difference; and (4) the past executed actions $o^\\textrm{action}_t \\in \\mathbb{R}^2$. We stack observations over a time window $k$ to capture the robot's progress towards the goal and to avoid obstacles that have left the peripheral view. In experiments, a model using 5 time steps (with a decision period of 0.5 second per step) showed good performance without adding too much computational expense.\n\n\\textbf{Action space.} The action space is $a_t = [v_t,\\omega_t] \\in \\mathbb{R}^2$, where $v_t$ is the vertical linear velocity and $\\omega_t$ is the yaw angular velocity. To generalize the applicability of the learned behavior to different robots, we consider the actions to be in the range $[-1.0, 1.0]$, which is linearly mapped to the range of velocities of a specific robot.
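As an illustration, this linear mapping from normalized actions to robot-specific velocities can be sketched as follows (the default velocity bounds are the ones used in our simulation settings; the function name is illustrative):

```python
import math

def map_action(a_v, a_w,
               v_range=(-0.23, 0.23),                # vertical velocity bounds (m/s)
               w_range=(-math.pi / 6, math.pi / 6)):  # yaw rate bounds (rad/s)
    """Linearly map normalized actions in [-1, 1] to robot velocities."""
    lerp = lambda x, lo, hi: lo + (x + 1.0) * (hi - lo) / 2.0
    return lerp(a_v, *v_range), lerp(a_w, *w_range)
```

A zero action keeps the robot at depth with no turning, while $\pm 1$ saturates at the robot's velocity limits.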
Note that while we could include the horizontal forward linear velocity, we decided to keep it constant to facilitate surveying missions that require the same velocity to collect consistent high-quality measurements. \n\n\nThe action is then given by the policy:\n\\begin{equation}\n\\small\n a_t = \\pi(O_t)%\n %\n\\end{equation}\nThe goal is to find the optimal policy $\\pi^*$ which maximizes the navigation policy's expected return over a sequence $\\tau$ of observations, actions, and rewards:\n\\begin{equation}\n\\small\n \\pi^* = \\argmax_\\pi \\mathbb{E}_{\\tau\\sim p(\\tau|\\pi)}\\Big[\\sum_t\\gamma^t r_t\\Big]\n\\end{equation}\n\\noindent where $\\gamma \\in [0,1.0]$ is the discount factor. The optimal policy translates into a path that is safe and minimizes the time it takes to travel to the goal.\n\n\n\n\n\n\n\\textbf{Reward function.} Our reward function $r_t$ at time $t$ encodes the objectives of not getting too close to any obstacle ($r^{\\textrm{obs}}_t$) and of reaching the goal area as soon as possible ($r^{\\textrm{goal}}_t$). \n\nWhen the robot is close to an obstacle, it receives a negative reward: %\n\\begin{equation}\n\\small\n r_t^{\\textrm{obs}} = \n \\left\\{\n \\begin{array}{lr}\n %\n \n -r_{\\textrm{crash}}, & d_t^h < \\delta_h \\lor\\,d_t^v < \n \\delta_v \\lor\\,d_t^{\\textrm{sur}} < \\delta_v\\\\\n -s_0(2\\delta_h - d_t^h), & \\delta_h \\leq d_t^h < 2\\delta_h \\\\\n 0, & \\textrm{otherwise}\n \\end{array}\n \\right.\n\\end{equation}\nwhere $\\delta_h$, $\\delta_v$ represent the thresholds for the distances of the robot to the closest obstacle $d_t^h$, $d_t^v$ -- horizontally or vertically, respectively.
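This obstacle term can be sketched as follows (a minimal sketch; the default threshold and scaling values are the ones we used during training, and the function name is illustrative):

```python
def obstacle_reward(d_h, d_v, d_sur,
                    delta_h=0.5, delta_v=0.3,  # distance thresholds (m)
                    r_crash=10.0, s0=2.0):     # crash penalty and scale factor
    """Sketch of r_t^obs: crash penalty, graded near-obstacle penalty, else 0."""
    if d_h < delta_h or d_v < delta_v or d_sur < delta_v:
        return -r_crash                    # too close: episode terminates
    if delta_h <= d_h < 2 * delta_h:
        return -s0 * (2 * delta_h - d_h)   # penalty grows as the robot nears
    return 0.0                             # free space: no penalty
```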
\nWe also check the distance to the water surface $d_t^{\\textrm{sur}}$, as there might be surface obstacles that cannot be detected given the sensor configuration of the robot.\nThe threshold values $\\delta_h$, $\\delta_v$ should consider the robot's size and turning radius.\nWhen any of these conditions is met -- i.e., the robot is too close to an obstacle or the surface -- the current episode terminates with a large negative constant reward $-r_{\\textrm{crash}}$.\nIn addition, to guarantee safety, a penalty for motions within a range $[\\delta_h, 2\\delta_h)$ of distance to nearby obstacles is given according to the current distance.\nOtherwise, if the robot is far from the obstacles, no negative reward is applied.\n\nTo guide the robot towards the goal both horizontally and vertically, \nwe split the goal-based reward into two parts.\nFirst, the horizontal goal-based reward:\n\\begin{equation}\n\\small\n r_t^{\\textrm{goalh}} = \n \\left\\{\n \\begin{array}{lr}\n -s_1|\\theta_t^h|, & \\Delta_h < D_t^{h} \\\\\n r_{\\textrm{success}} - s_2|\\theta_t^h|, & \\textrm{otherwise}%\n \\end{array}\n \\right.\n\\end{equation}\nIf the robot's horizontal distance to the goal $D_t^h$ is greater than a threshold $\\Delta_h$, \nthen the penalty is based on the robot's orientation to the goal -- i.e., a robot already facing the goal gets a smaller penalty, as the constant forward velocity will ensure a shorter arrival time.\nOtherwise, if the robot is within the goal area, then there is a positive reward with a preference for the robot's orientation towards the goal.\n\nLikewise, the vertical goal-based reward:\n\\begin{equation}\n\\small\n r_t^{\\textrm{goalv}} = \n \\left\\{\n \\begin{array}{lr}\n s_3|\\dot{D}_t^v|%\n , & \\dot{D}_t^v \\leq 0 \\land\\,\\Delta_h < D_t^h \\\\\n %\n - s_3|\\dot{D}_t^v|%\n , & \\dot{D}_t^v > 0 \\land\\,\\Delta_h < D_t^h \\\\\n %\n - s_4|D_t^v|, & \\textrm{otherwise}%\n \\end{array}\n \\right.\n\\end{equation}\nWhen the robot is not near the goal, 
the vertical goal-based reward is positive if the change in vertical distance over time $\\dot{D}_t^v$ is negative or zero -- i.e., the robot is getting closer to the target depth.\nOn the contrary, it is negative if the change is positive -- i.e., the robot is getting farther from the target depth.\nOtherwise, if the robot is within the goal area, the negative reward is relative to the distance to the target depth.\nThis split (horizontal and vertical) of the goal reward showed better stability in experiments than a single combined goal reward, potentially due to the separate focus of two mostly independent actions.\n\n\nThe above obstacle- and goal-based rewards conflict with each other; they could lead to oscillations at local optima when an obstacle is nearby.\nThus, we devised a priority-based strategy (when the robot is not in the goal area) that focuses on moving away from the obstacle by scaling $r_t^{\\textrm{goalh}}$:\n\\begin{equation}\n\\small\n \\begin{array}{lr}\n %\n r_t^{\\textrm{goalh}} \\ *\\!= \n s_5(d_t^h - \\delta_h)\/\\delta_h, & \\Delta_h < D_t^{h} \\land \\delta_h \\leq d_t^h < 2\\delta_h \\label{con:priority}\n %\n \\end{array}\n\\end{equation}\n\nIn all the reward equations, $s_0, \\ldots ,s_5$ are positive scaling factors. Intuitively, they are set so that rewards are on an appropriate scale for balanced training performance. \n\nFinally, the collective reward at time $t$ is obtained as:\n\\begin{equation}\n\\small\n r_t = r^{\\textrm{obs}}_t + r^{\\textrm{goalh}}_t + r^{\\textrm{goalv}}_t\n\\end{equation}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[trim={0cm 0.cm 0.cm 0},clip,width=0.45\\textwidth]{figs\/network_structure.pdf}\n \\includegraphics[width=0.4\\textwidth]{figs\/training_env.png}\n \\vspace{-1em}\n \\caption{(left) \\textit{Network Architecture.} Predicted depth images are processed by three convolutional layers (orange). 
Its output is flattened and concatenated with feature vectors (green) representing the stacked relative goal positions, echo-sounder readings, and past actions. The final fully-connected layer outputs a navigation policy and state value. (right) \\textit{Top View of the Training Env.} Our model was trained in the above simulated environment in area A (inside area with fewer obstacles and smaller space) and B (outside area with more obstacles and larger space).}\n \\label{fig:training_setting}\n \\vspace{-2em}\n\\end{figure}\n\n\\textbf{Network architecture.}\nThe network structure depicted in \\fig{fig:training_setting}(left) illustrates how we integrate the information vectors from the sensors. First, the stacked predicted depth images are processed by three convolutional layers; then the flattened output $\\in \\mathbb{R}^{512}$ is concatenated with processed feature vectors consisting of the stacked relative goal positions $\\in \\mathbb{R}^{96}$, SBES readings $\\in \\mathbb{R}^{32}$, and past actions $\\in \\mathbb{R}^{64}$. Specifically, the combined echo-sounder readings provide an implicit scale for the relative depth prediction without requiring calibration. The network produces a navigation policy and state value. \n\n\\subsection{Image Depth Prediction Network} \\label{Combined Perception Inputs}\n\nAccurate image depth prediction is important for our navigation pipeline to work.\nPrevious work used ground truth simulated depth images with Gaussian noise as input for training and applied depth estimation during deployment~\\cite{xie2017monocular}.\nHowever, this broadens the sim-to-real gap, as real-world noise in depth predictions is more complex than the implemented simulated noise models~\\cite{sweeney2019supervised}.\nInstead, we utilized one of the latest monocular depth prediction networks, the Dense Prediction Transformer (DPT)~\\cite{ranftl2021vision}, which has an encoder-decoder design and applies a transformer as the encoder's main building block.
We selected DPT over other deep neural networks for depth prediction because of its state-of-the-art performance in single-view depth estimation and robustness across diverse environments. %\n\n\n \n\n\n\\subsection{Transferable Model} \n\nDRL often has the problem of generalization: models trained in one domain fail to transfer to other domains even if there are only small differences between the domains~\\cite{cobbe2019quantifying}. %\nUnlike in air, images taken underwater will look drastically different across various environments due to the more complex lighting and backscattering effects~\\cite{akkaynak2018underwater}.\nThus, training the model in a single fixed environment would lead to over-fitting to that environment's visual conditions. %\nOne solution is to retrain the depth prediction network with an existing underwater image depth dataset, which, however, is not available. Another solution is to enhance the input underwater images to their approximate in-air counterparts~\\cite{roznere2019color, %\nakkaynak2018underwater}.\nYet, most image enhancement techniques require difficult-to-retrieve information (e.g., water attenuation coefficients, depth maps).\n\nOur approach is to integrate underwater features into the simulation used for training.\nWe modified an existing underwater simulator framework for games to create the training and testing simulations for our proposed approach. The framework contains custom shaders that incorporate a light transmission model to simulate underwater optical effects, thus providing a good amount of realism. \n\n\n\n\n\\textbf{Domain randomization.} \nWe integrated domain randomization to generate underwater environments with different visual conditions, thus enabling transferability.
%\nIn particular, %\nat the start of every training episode, we randomize the underwater visibility -- i.e., how the visual conditions degrade over distance.\nVisibility was selected as it significantly impacts the relative depth estimation, thus affecting to a large extent how the robot perceives its surroundings.\n\nWe decided not to apply domain adaptation~\\cite{peng2020learning} -- i.e., the process of learning different environment encodings and corresponding adapted policies during training, so that during testing the best environment encoding can be found with the corresponding adapted policy -- because searching for the best environment encoding is not very practical for underwater deployments.\nFor instance, the search would require robot motions towards obstacles to identify the (potentially changing) visibility feature of the specific environment. %\n\n\\textbf{Multi-scenario training.}\nWe built the simulated training environment via the Unity Engine\\footnote{\\scriptsize \\url{http:\/\/www.unity.com\/}}. %\nWe generated two activity areas to represent two classes of environments that an AUV might encounter: \\textit{A} -- a small area with fewer obstacles, and \\textit{B} -- a big cluttered area with obstacles at various positions and heights (see \\fig{fig:training_setting}(right)).\nIn each training episode, the robot's starting pose and goal location are randomly reset in the environment.\nThis exposure to different training scenarios ensures that the learned policy will be more likely to handle more complex environments~\\cite{ %\ntobin2017domain}. 
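The per-episode randomization described above can be sketched as follows (the visibility bounds match our training range; the area bounds for the start pose and goal are illustrative placeholders, not the actual extents of areas A and B):

```python
import random

VIS_RANGE_M = (3.0, 39.0)  # visibility range sampled during training (m)

def randomize_episode(rng=random):
    """Sample per-episode visibility and reset the start pose and goal."""
    visibility = rng.uniform(*VIS_RANGE_M)
    start = tuple(rng.uniform(-10.0, 10.0) for _ in range(3))  # placeholder bounds
    goal = tuple(rng.uniform(-10.0, 10.0) for _ in range(3))   # placeholder bounds
    return {"visibility_m": visibility, "start": start, "goal": goal}
```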
%\n\n\n\n\n\n\n\n \n\n\n\n\n\\section{Experimental Results}\nWe trained and performed experiments in simulation, in the real world with a thruster-vectored underwater robot, and with underwater datasets to validate our DRL-based multi-modal sensor navigation system.\nWe performed comparisons and ablation studies with other methods.\nOur framework is publicly available\\footnote{\\scriptsize\\url{https:\/\/github.com\/dartmouthrobotics\/deeprl-uw-robot-navigation}}.\n\n\\subsection{Training Experimental Settings}\nOur model was first trained and tested on a workstation with two 12GB NVIDIA 2080Ti GPUs.\nIt was implemented with PyTorch and the Adam optimizer~\\cite{kingma2014adam}.\n\nIn simulation, the robot's forward velocity, vertical velocity range, and yaw angular velocity range were set to\n\\SI{0.345}{m\/s}, \n\\SIrange{-0.23}{0.23}{m\/s}, and\n\\SIrange[parse-numbers = false]{-\\text{$\\pi$}\/6}{\\text{$\\pi$}\/6}{rad\/s}, respectively.\nWhile the training environment allows for higher velocities, we chose low velocities to avoid any ``jerky'' motion that could happen with the AUV at high speed. The camera's horizontal and vertical FOVs were set to \\ang{80} and \\ang{64}. 
The simulated echo-sounder's max detection range was set to \\SI{4}{m}, which are all consistent with the real-world sensor configuration.\nThe simulation environments' visibility value was randomly chosen within the range of \\SIrange{3}{39}{m}.\n\nWe trained for 250 iterations -- each with at least 2048 time steps -- and observed the reward was stable after around 120 iterations (learning rate of 3e-5).\nThe detailed constant and threshold values for the reward function -- i.e., $r_{\\textrm{success}}$, $r_{\\textrm{crash}}$, $\\Delta_h$, $\\delta_h$, and $\\delta_v$ -- were set to $10$, $10$, \\SI{0.6}{m}, \\SI{0.5}{m} and \\SI{0.3}{m}, while the scaling factors $s_0, s_1, \\ldots, s_5$ were set to $2.0$, $0.1$, $1.0$, $1.0$, $8.0$, $1.0$.\n\n\n\\subsection{Performance Comparison with Different Sensor Configurations}\n\\vspace{-0.5em}\n\\begin{figure}[t]\n \\centering\n \\resizebox{\\columnwidth}{!}{\n \\begin{tabular}{cccc}\n \\includegraphics[height=.5in, valign=c,trim={0cm 0.7cm 3.5cm 2.3cm},clip]{figs\/test_with_different_configurations2.png}\n \\includegraphics[height=.5in, valign=c,trim={4.6cm 0.7cm 4.95cm 1.3cm},clip]{figs\/withbug2.png}\n & \\includegraphics[height=.5in, valign=c,trim={5.2cm 0.7cm 4.9cm 1.3cm},clip]{figs\/withoutechosounder.png}\n & \\includegraphics[height=.5in, valign=c,trim={5.2cm 0.7cm 4.9cm 1.3cm},clip]{figs\/withechosounder.png} \n \\end{tabular}\n }\n \\caption{\\textit{Partial top view of runs in Cluttered Env. (left): Bug2 (second), Our Model w\/o SBES (third), and Our Model w\/ SBES (right).} Legend: robot's start pose (green dot); obstacles (black dots); waypoints to reach in order (circled numbers). 
%\n %\n }\n \\label{fig:Waypoint_tests_trajectories}\n \\vspace{-2em}\n\\end{figure}\n\n\n\\begin{table*}[b]\n\\centering\n\\caption{\\textit{Waypoint Tests Results.} 10 runs for each of the three methods: Bug2 with multi-beam sonar, our model trained without fixed single-beam echo-sounder, and our proposed model.\nThe travel time average and standard deviation (in seconds) of successful runs for each waypoint were calculated, as well as the overall success ratio to reach all five waypoints.\n}\n\\label{Quantitative_Analysis_Waypoints}\n\\resizebox{\\textwidth}{!}{\n \\begin{tabular}{cccccccc}\n \\toprule\n \\multirow{2}*{Method} & \\multirow{2}*{Sensors} & \\multicolumn{5}{c}{Traveling Time\/s (less is better)} & Success Ratio \\\\\\cline{3-7}\n & & $wp1$ & $wp2$ & $wp3$ & $wp4$ & $wp5$ & (higher is better) \\\\\n \\midrule\n %\n Bug2 & MBS & 57.6 $\\pm$ 0.3 & 66.95 $\\pm$ 0.15 & 41.15 $\\pm$ 0.45 & 69.8 $\\pm$ 0.9 & 77.65 $\\pm$ 0.45 & \\textbf{100\\%} \\\\\n Ours\\ w\/o\\ SBES & Monocular Camera & 51.8 $\\pm$ 5.94 & 56.5 $\\pm$ 2.09 & 35.62 $\\pm$ 8.07 & 47.0 $\\pm$ 2.03 & 76.0 $\\pm$ 2.21 & 40\\% \\\\\n Ours\\ w\/ SBES & Monocular Camera \\& SBES & \\textbf{38.35 $\\pm$ 0.45} & \\textbf{49.8 $\\pm$ 0.78} & \\textbf{29.3 $\\pm$ 0.78} & \\textbf{44.3 $\\pm$ 0.6} & \\textbf{67.25 $\\pm$ 0.6} & \\textbf{100\\%} \\\\\\bottomrule\n \\end{tabular}\n}\n\n\\end{table*}\n\n\nWe first tested the efficiency of our proposed multi-modal low-cost navigation approach against a traditional metric-based goal-oriented navigation method that does not require any map, given that no map of the underwater environment is available. In particular, we selected Bug2 algorithm given its guarantees on the path length. To have Bug2 work effectively, we employed a multi-beam sonar (MBS), a common but expensive sensor for underwater obstacle avoidance, which emits multiple beams in a plane with a typical horizontal FOV of $120^{\\circ}$. 
%\nWe also considered our model trained without the echo-sounder as an ablation study to observe the effect of the SBES. %\n\nWe generated a test environment in simulation with multiple obstacles. The robot's task was to navigate to five randomly set consecutive waypoints.\nWe set all waypoints at the same depth, as typical navigation with an MBS involves the robot first arriving at the target depth and then navigating along the 2D plane.\n\n\n\n\\fig{fig:Waypoint_tests_trajectories} shows the trajectories of the three navigation methods and \\tab{Quantitative_Analysis_Waypoints} reports the quantitative results measured in terms of traveling time and success ratio. \nOur proposed system with an inexpensive monocular camera and SBES achieved the highest navigation efficiency with comparable safety to $\\textrm{Bug2}$ with MBS. While the $\\textrm{Bug2}$ trajectory appeared not to be affected by noise, it required the longest navigation time, especially when moving along the obstacles. \nNote that the echo-sounder played a fundamental role in safe navigation. If the echo-sounder was excluded, the model relied solely on relative monocular image depth estimation to detect surrounding obstacles. As a result, at times the chosen action might be too conservative, leading to sub-optimal paths in terms of distance, or too aggressive, increasing the likelihood of collision. 
\n\n\n \n\n\n\n\n\n\n\\subsection{Ablation Study with Transferability Tests} \\label{Ablation Study with Transferability Tests}\n\\vspace{-0.5em}\nTo show the transferability of our proposed model to different environments and visibilities, we performed an ablation study with the same hyper-parameters and protocols, but considering the following combinations of training settings in a simulated underwater environment:\n(1) \\textbf{\\textit{Rand}}: proposed domain randomization, (2) \\textbf{\\textit{No Rand (Water)}}: fixed underwater visibility (approximately \\SI{11}{m}), and (3) \\textbf{\\textit{No Rand (Air)}}: no underwater features. \nTo assess the models' generalizability, another simulated environment\\footnote{\\scriptsize\\url{https:\/\/github.com\/Scrawk\/Ceto}} was employed for testing. With different materials, textures, lighting, and custom shaders, it has a different visual appearance compared to the training environment. In this environment, the models were tested in three different scenes, constructed to resemble possible underwater obstacles present in the real world, such as natural structures (Scene1), submerged wrecks (Scene2), and man-made structures (Scene3). 
\n\n\n\\begin{figure*}[t]\n \\centering\n \\resizebox{\\textwidth}{!}{\n \\begin{tabular}{ccccccc}\n \\textbf{Scenes} & \\multicolumn{2}{c}{\\textbf{\\SI[detect-weight=true]{8}{m}}} & \\multicolumn{2}{c}{\\textbf{\\SI[detect-weight=true]{12}{m}}} &\\multicolumn{2}{c}{\\textbf{\\SI[detect-weight=true]{20}{m}}} \\\\ \n \\textbf{Scene1}\n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env1_V_L.png} \n & \\includegraphics[height=1.2in,%\n valign=c,trim={3cm 1cm 3cm 3cm},clip]{figs\/rand_3_3000.png}\n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env1_V_M.png} \n & \\includegraphics[height=1.2in,%\n valign=c,trim={3cm 1cm 3cm 3cm},clip]{figs\/rand_3_2000.png} \n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env1_V_H.png} \n & \\includegraphics[height=1.2in,%\n valign=c,trim={3cm 1.5cm 3cm 1.5cm},clip]{figs\/rand_3_1000.png}\\\\ \n \\textbf{Scene2}\n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env2_V_L.png} \n & \\includegraphics[height=1.2in, %\n valign=c,trim={3cm 0.8cm 3cm 3cm},clip]{figs\/rand_5_3000.png} \n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env2_V_M.png} \n & \\includegraphics[height=1.2in, %\n valign=c,trim={3cm 0.8cm 3cm 3cm},clip]{figs\/rand_5_2000.png} \n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env2_V_H.png} \n & \\includegraphics[height=1.2in, %\n valign=c,trim={3cm 0.8cm 3cm 3.2cm},clip]{figs\/rand_5_1000.png} \\\\ \n \\textbf{Scene3}\n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env3_V_L.png} \n & \\includegraphics[height=1.2in, %\n valign=c,trim={4cm 1cm 3cm 3.2cm},clip]{figs\/rand_6_3000.png}\n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env3_V_M.png} \n & \\includegraphics[height=1.2in,%\n valign=c,trim={4cm 1cm 3cm 3.2cm},clip]{figs\/rand_6_2000.png}\n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env3_V_H.png} \n & \\includegraphics[height=1.2in,%\n valign=c,trim={4cm 1cm 3cm 3.2cm},clip]{figs\/rand_6_1000.png} \\\\ \n \\end{tabular}\n }\n 
\\caption{\\textit{Example of Trajectories in Different Scenes with Different Training.} Legend: robot's initial position and goal waypoint (green and red dots); robot collision (red ``X''); obstacles (approximated with polygons in the plots for simplicity).\n }\n \\label{figure of 3D trajectories}\n \\vspace{-2em}\n\\end{figure*}\n\n\n\n\\begin{table*}\n\\centering\n\\caption{\\textit{Quantitative Results for Transferability Tests.} 10 runs for the three models in three scenes with different visual conditions. %\nNote: N\/A means the method failed to reach the goal during the runs and bold means the best result.\n}\n\\label{Transferability Comparison Tests}\n\\resizebox{\\textwidth}{!}{\n \\begin{tabular}{ccccccccccc}\n \\toprule\n \\multirow{2}*{Method} & & \\multicolumn{3}{c}{$\\textrm{Scene1}$} & \\multicolumn{3}{c}{$\\textrm{Scene2}$} & \\multicolumn{3}{c}{$\\textrm{Scene3}$} \\\\\\cline{3-5}\\cline{6-8}\\cline{9-11}\n & & Blurry & Medium & Clear & Blurry & Medium & Clear & Blurry & Medium & Clear\\\\\n \\midrule\n %\n & reward & 5.74 $\\pm$ 2.17 & 6.5 $\\pm$ 5.95 & 28.14 $\\pm$ 2.85 & 0.43 $\\pm$ 2.26 &\n 10.93 $\\pm$ 11.31 & 12.05 $\\pm$ 8.92 & 24.64 $\\pm$ 10.19 & 20.58 $\\pm$ 13.7\n & 29.18 $\\pm$ 8.01\n \\\\\n No\\ Rand\\ (Air) & success & 0\\% & 10\\% & \\textbf{100\\%} & 0\\% &\n 40\\% & 50\\% & 70\\% & 60\\% & 90\\%\n \\\\ \n & trav. time & N\/A & 70.0 & 67.2 $\\pm$ 0.84 & N\/A &\n \\textbf{53.12 $\\pm$ 0.65} & 55.2 $\\pm$ 2.84 & 63.29 $\\pm$ 0.88 & 66.5 $\\pm$ 4.53\n & 66.11 $\\pm$ 1.07\n \\\\ \\hline\n & reward & \\textbf{25.27 $\\pm$ 8.42} & 18.35 $\\pm$ 11.18 & 13.46 $\\pm$ 14.51 & 2.19 $\\pm$ 1.78 &\n -1.58 $\\pm$ 5.94 & 15.04 $\\pm$ 10.6 & 18.03 $\\pm$ 11.32 & 30.14 $\\pm$ 7.5\n & 29.42 $\\pm$ 3.27\n \\\\ \n No\\ Rand\\ (Water) & success & \\textbf{90\\%} & 90\\% & 40\\% & 0\\% &\n 10\\% & 70\\% & 60\\% & \\textbf{90\\%} & \\textbf{100\\% }\n \\\\\n & trav. 
time & 70.5 $\\pm$ 4.93 & 88.17 $\\pm$ 18.36 & 69.25 $\\pm$ 1.35 & N\/A &\n 115.0 & 59.79 $\\pm$ 8.25 & 71.42 $\\pm$ 6.9 & 73.39 $\\pm$ 2.63\n & 65.35 $\\pm$ 0.78\n \\\\ \\hline\n \n \n \n & reward & 24.66 $\\pm$ 9.3 & \\textbf{28.39 $\\pm$ 2.26} & \\textbf{29.56 $\\pm$ 2.58} & \\textbf{21.68 $\\pm$ 9.61} &\n \\textbf{23.36 $\\pm$ 7.49} & \\textbf{24.86 $\\pm$ 2.92} & \\textbf{29.17 $\\pm$ 11.34} & \\textbf{30.26 $\\pm$ 9.25}\n & \\textbf{36.26 $\\pm$ 0.83}\n \\\\\n Rand & success & \\textbf{90\\%} & \\textbf{100\\%} & \\textbf{100\\%} & \\textbf{80\\%} &\n \\textbf{90\\%} & \\textbf{100\\%} & \\textbf{80\\%} & \\textbf{90\\%} & \\textbf{100\\%}\n \\\\\n & trav. time & \\textbf{67.56 $\\pm$ 0.44} & \\textbf{68.45 $\\pm$ 0.72} & \\textbf{67.05 $\\pm$ 1.27} & \\textbf{52.0 $\\pm$ 0.35} &\n 53.44 $\\pm$ 1.23 & \\textbf{50.75 $\\pm$ 0.46} & \\textbf{60.75 $\\pm$ 0.56} & \\textbf{62.56 $\\pm$ 0.98}\n & \\textbf{61.05 $\\pm$ 0.57}\n \\\\\n \\bottomrule\n \\end{tabular}\n}\n\n\\end{table*}\n\n\nWe considered three visibility scenarios: blurry, medium, and relatively clear, with maximum visibility ranges of \\SI{8}{m}, \\SI{12}{m}, and \\SI{20}{m}, respectively. \n\\fig{figure of 3D trajectories} shows snapshots of each scene and the resulting trajectories in some sample runs. \n\n\\textbf{Comparison metrics.} \nThe following metrics were used to compare the three methods' performances (see \\tab{Transferability Comparison Tests}):\n\\begin{itemize}\n \\item [1)] \n Rewards (higher is better): cumulative reward average and standard deviation over $10$ runs,\n \\item [2)]\n Success Ratio (higher is better): number of times the robot reached the goal with no collision over $10$ runs, %\n \\item [3)]\n Travel Time (less is better): average and standard deviation traveling time ($s$). Failed runs were not considered.\n\\end{itemize}\n\nFrom the results, training with underwater features has the highest gain. 
Adding domain randomization further improves the cumulative reward, success rate, and travel time.\nModels trained without randomization were never exposed to a wide range of visual conditions and thus explored only a limited observation space. Accordingly, they do not transfer easily to different visibility conditions and are more vulnerable to noise, especially in low-visibility environments where depth estimates are inaccurate. Scene3 was particularly challenging under blurry visibility, due to the narrow passage between the logs. \n\n\n\n\n\n\n\n\n\\subsection{Performance Demonstration in Real-World Environment} \\label{Performance Demonstration in Real-World Environment}\n\n\nWe conducted real-world experiments with a BlueROV2 in a swimming pool. \nThe robot was equipped with a Sony IMX322LQJ-C camera\\footnote{\\scriptsize\\url{https:\/\/www.bluerobotics.com\/store\/sensors-sonars-cameras\/cameras\/cam-usb-low-light-r1\/}} \nwith a resolution of 5 MP and horizontal and vertical FOVs of \\ang{80} and \\ang{64}, respectively.\nThe fixed SBES has a \\ang{30} beam width and a maximum range set to \\SI{4}{m}.\nThe robot's (noisy) pose was provided by an on-board compass, a water-pressure sensor to recover water depth, and a short baseline acoustic positioning system (SBL)\\footnote{\\scriptsize\\url{https:\/\/waterlinked.github.io\/explorer-kit\/introduction\/}}. %\nA \\SI{2.8}{GHz} Intel i7 laptop with an Nvidia Quadro M1200 was used to run the inference network through the Robot Operating System (ROS). 
For real-time inference, DPT was replaced with its computationally less expensive counterpart MiDaS~\\cite{ranftl2019towards} as our depth prediction network -- about \\SI{0.08}{s} per inference.\n\\begin{figure}\n \\centering\n \\resizebox{\\textwidth}{!}{\n \\begin{tabular}{c c c c}\n \\includegraphics[height=.5in,trim={.3cm .4cm .3cm .6cm}, clip,%\n valign=c]{figs\/pool_plot_0.png}\n & \\includegraphics[height=.5in,%\n trim={.3cm .4cm .3cm .6cm}, clip,valign=c]{figs\/pool_real_0.png} \n \\includegraphics[height=.5in,trim={3cm .4cm 1.5cm 3cm}, clip,%\n valign=c]{figs\/pool_plot_1.png}\n & \\includegraphics[height=.5in,%\n valign=c]{figs\/pool_real_1.png} \\\\\n \\includegraphics[height=.4in,trim={.3cm .4cm .3cm .6cm}, clip,%\n valign=c]{figs\/pool_plot_2.png}\n & \\includegraphics[height=.5in,trim={.3cm .4cm .3cm .6cm}, clip,%\n valign=c]{figs\/pool_real_2.png} \n \\includegraphics[height=.6in,trim={.3cm .4cm .3cm .6cm}, clip,%\n valign=c]{figs\/pool_plot_3.png}\n & \\includegraphics[height=.5in,%\n valign=c]{figs\/pool_real_3.png}\n \\end{tabular}\n }\n \\vspace{-1em}\n \\caption{\\textit{Pool Experiment.} Navigation trajectories with localization noise smoothing (legend: start and goal, green and red dots; obstacles, cuboids) and images from the robot's camera. Red arrows point to the approximate goal locations behind the boxes.}\n \\label{fig:table_of_paths_and_images}\n \\vspace{-1em}\n\\end{figure}\n\nThe swimming pool was about \\SI{20}{m} by \\SI{7}{m} in size with a shallow (\\SI{1}{m}) and deep (\\SI{3}{m}) end, and a slope in the middle. Two black boxes (approximate size: $0.8 \\times 0.5 \\times 0.3$~m)\nwere placed in two different configurations: side by side as a large obstacle and with a \\SI{1}{m} separation to create a channel. 
\n\n\nResulting paths and reference images are shown in \\fig{fig:table_of_paths_and_images}.\nOur proposed navigation approach successfully drove the BlueROV2 to different 3D waypoints, avoiding obstacles by going around, above, or through a channel (see \\fig{fig:table_of_paths_and_images}).\nWe observed that the SBL provided noisier position information than in simulation -- at times the robot's location jumped by up to a meter. %\nWhile the noise affected the calculation of the relative position to the goal, our approach does not depend on the absolute robot location to infer obstacle distance, so the robot was still able to avoid obstacles.\n\n\\vspace{-0.5em}\n\\subsection{Action Prediction from Static Underwater Images}\n\n\nWe also tested joint image and SBES data from past field trials (in the Caribbean Sea and a lake) as input to our model for action prediction.\n\\fig{fig:table_of_oceanic_image_depth_estimation} shows a sample of such images with the corresponding depth predictions, goal locations, and predicted actions.\nAs expected, with obstacles nearby the predicted action prioritized obstacle avoidance, steering the robot away; otherwise, the action's direction pointed towards the goal. 
This qualitative test demonstrates our model's generalizability to real-world applications.\n\\begin{figure}[t]\n \\centering\n \\begingroup\n\\renewcommand{\\arraystretch}{4} %\n\\resizebox{\\textwidth}{!}{\n \\begin{tabular}{c c c c c c c}\n \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/lake_left.pdf}\n & \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/lake_center.pdf}\n & \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/lake_right.pdf} \n \\includegraphics[width=.27\\columnwidth, valign=c]{figs\/reef_left.pdf}\n & \\includegraphics[width=.27\\columnwidth, valign=c]{figs\/reef_center.pdf}\n & \\includegraphics[width=.27\\columnwidth, valign=c]{figs\/reef_right.pdf} \n \\\\\\hline\n \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/depth_1594226769-743939.png}\n & \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/depth_1594226904-039137.png}\n & \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/depth_1594226900-919820.png}\n \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/depth_left_reef.png}\n & \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/depth_front-reef.png}\n & \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/depth_right_reef.png}\n \\end{tabular}\n }\n \\endgroup\n \\vspace{-1em}\n \\caption{\\textit{Single Image Action and Depth Prediction.} 1st row: images from Lake Sunapee and Caribbean Sea. 2nd row: their respective depth predictions. Direction and magnitude of the action predicted (red arrow); approximate goal location (yellow arrow).}\n \\vspace{-2em}\n \\label{fig:table_of_oceanic_image_depth_estimation}\n \\end{figure}\n\n\n\n\n\n\\section{Conclusion and Future Work}\\label{sec:conclusion}\n\nWe presented the first 3D map-less underwater navigation approach, based on Proximal Policy Optimization Network (PPO) and domain randomization, for low-cost underwater robots with a monocular camera and a fixed single-beam echo-sounder. 
By choosing deep reinforcement learning over classic methods, we were able to address the intrinsic challenges of seamless underwater navigation (e.g., the lack of low-cost, efficient sensors and the difficulty of generating a map given noisy positioning and perception data). We validated our approach with several comparisons and ablation studies in different simulated environments, as well as with real-world experiments in a swimming pool and on static underwater images. Results showed that the robot is able to navigate to arbitrary 3D goals while avoiding obstacles inferred from estimated depth images and sonar readings.\n\nIn the future, we will investigate explicit sensor fusion of camera and SBES data to achieve better depth prediction with absolute scale, e.g., early fusion~\\cite{roznere2020iros}, as well as fusion of controller and SBL data. In addition, we will consider the generation of more complex environments, other real-world experiments, and the design of integrated models for different sensor configurations (e.g., stereo cameras) and dynamic models to adapt our method to heterogeneous underwater robots.\n\n\n\n\n\n{\n\\footnotesize\n\\vspace{-1em}\n\\section*{Acknowledgments}\n\\vspace{-1em}\nWe thank Devin Balkcom for access to the pool for experiments, and Bo Zhu, Mary Flanagan, and Sukdith Punjasthitkul for GPU access. This work is supported in part by the Burke Research Initiation Award and NSF CNS-1919647, 2024541, 2144624, OIA-1923004. 
\n}\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\n\\subsection{}\\label{subsection:1.1}\n\nWe study global\/local Weyl modules for toroidal Lie algebras and an affine analog of current Lie algebras.\nThe notion of Weyl modules for affine Lie algebras was introduced by Chari-Pressley in \\cite{MR1850556} as a family of integrable highest weight modules with a universal property.\nLater, Chari-Loktev \\cite{MR2271991} initiated the study of Weyl modules for current Lie algebras in a graded setting.\nThe graded characters of local Weyl modules for current Lie algebras have been studied by many authors.\nThey are now known to coincide with Macdonald polynomials specialized at $t=0$, a.k.a.\\ $q$-Whittaker functions (Chari-Loktev~\\cite{MR2271991}, Fourier-Littelmann~\\cite{MR2323538}, Naoi~\\cite{MR2855081}, Sanderson~\\cite{MR1771615}, Ion~\\cite{MR1953294}, Lenart-Naito-Sagaki-Schilling-Shimozono~\\cite{MR3674171}).\n\nToroidal Lie algebras are a natural generalization of affine Lie algebras.\nFor a finite-dimensional simple Lie algebra $\\frg$, the corresponding toroidal Lie algebra $\\tor$ is defined as the universal central extension of the double loop Lie algebra $\\frg \\otimes \\bbC[s^{\\pm 1}, t^{\\pm 1}]$ with the degree operators.\nWe can also consider a Lie algebra $\\tor^+$, which is defined by replacing $\\bbC[s^{\\pm 1}, t^{\\pm 1}]$ with $\\bbC[s, t^{\\pm 1}]$.\nSee Section~\\ref{subsection:toroidal} for precise definitions.\nWe expect that the characters of Weyl modules for $\\tor$ and $\\tor^+$ produce a very interesting class of special functions.\nIn this article, we study the first nontrivial example: the Weyl module associated with the level one dominant integral weight.\n\nA big difference between the toroidal and the affine Lie algebra is the structure of their centers.\nThe toroidal Lie algebra without the degree operators has an infinite-dimensional center, 
while the center of the affine Lie algebra is one-dimensional.\nThe Weyl modules are examples of modules over the toroidal Lie algebra on which the action of the center does not factor through a finite-dimensional quotient.\nWe note that Chari-Le \\cite{MR2017585} have studied local Weyl modules for a quotient of the toroidal Lie algebra.\nThe resulting quotient is an extension of the double loop Lie algebra by a two-dimensional center with the degree operators.\nIn particular, the Weyl modules considered in this article are possibly bigger than those studied in \\cite{MR2017585} (see \\ref{subsection:1.3} below).\n\n\\subsection{}\\label{subsection:1.2}\n\nLet us summarize the contents and results of the article.\nIn Section~\\ref{section:Preliminaries}, we introduce the main object: the toroidal Lie algebra $\\tor$.\nWe also introduce an affine analog of the current Lie algebra, which is denoted by $\\tor^+$.\nThen we recall their basic properties.\nAmong other things, a certain automorphism of $\\tor$ will play an important role.\nThe ring $\\bbC[s^{\\pm 1}, t^{\\pm 1}]$ admits an $\\mathrm{SL}_2(\\mathbb{Z})$-action by the coordinate change.\nThis action naturally induces automorphisms of $\\tor$.\nWe denote by $S$ the automorphism corresponding to the $S$-transformation.\n\nIn Section~\\ref{section:Weyl modules}, we define the global and the local Weyl modules following \\cite{MR1850556}, \\cite{MR2271991}, \\cite{MR2102326}, \\cite{MR2718936}, \\cite{MR2017585}.\nThe global Weyl module $\\glob(\\Lambda)$ for $\\tor$ is attached to each dominant integral weight $\\Lambda$ of the affine Lie algebra. 
\nWe identify the endomorphism ring of $\\glob(\\Lambda)$ with a symmetric Laurent polynomial ring $A(\\Lambda)$ in Proposition~\\ref{prop:endomorphism} and define the local Weyl module $\\loc(\\Lambda,\\mathbf{a})$ for each maximal ideal $\\mathbf{a}$ of $A(\\Lambda)$.\nThe argument is similar to the known one for the affine and the current Lie algebras.\nThe global\/local Weyl modules $\\glob^+(\\Lambda)$ and $\\loc^+(\\Lambda,\\mathbf{a})$ for $\\tor^+$ are similarly defined.\nWe prove in Proposition~\\ref{prop:weight} a finiteness property for weight spaces of the Weyl modules.\nBy this property, the characters of the local Weyl modules are well-defined.\nThis result has been established for the case of the affine Lie algebra in \\cite{MR1850556} and for a quotient of the toroidal Lie algebra in \\cite{MR2017585}. \nWe remark that we need to investigate the action of the infinite-dimensional center, which is not treated in \\cite{MR2017585}.\nThen we turn to a special case where $\\Lambda$ is of level one.\nBy the diagram automorphism, we can reduce the general level one case to that for the basic level one weight $\\Lambda_0$.\nTherefore we only consider the case of $\\Lambda_0$ in the sequel.\nWe give an upper bound for the graded character of the level one local Weyl module $\\loc^+(\\Lambda_0,0)$ over $\\tor^+$ in Proposition~\\ref{prop:upper_bound}.\n\nIn Section~\\ref{section:Vertex operator construction}, we prove an isomorphism between the level one global Weyl module $\\glob(\\Lambda_0)$ over the toroidal Lie algebra $\\tor$ and the twist of a module $\\bbV(0)$ by the automorphism $S^{-1}$, where $\\bbV(0)$ has been constructed in the works of Moody-Eswara Rao-Yokonuma~\\cite{MR1066569}, Iohara-Saito-Wakimoto~\\cite{MR1688100} and Eswara Rao \\cite{MR3076215}.\nThis is our main theorem.\n\n\\begin{thm}[Theorem~\\ref{thm:main}]\nWe have an isomorphism\n\\[\n\t\\glob(\\Lambda_0) \\stackrel{\\cong}{\\longrightarrow} (S^{-1})^*\\bbV(0)\n\\]\nof 
$\\tor$-modules.\n\\end{thm}\n\nAs a byproduct, we prove that the upper bound in Proposition~\\ref{prop:upper_bound} indeed gives the characters of the level one local Weyl modules (see Section~\\ref{subsection:Characters} for the definition of $\\ch_p$ and $\\ch_{p,q}$).\n\\begin{cor}[Corollary~\\ref{cor:character}]\nWe have\n\\[\n\t\\ch_{p} \\loc(\\Lambda_0,a) = \\ch_{p} \\loc^+(\\Lambda_0,a) = \\ch_p L(\\Lambda_0) \\left( \\prod_{n>0} \\dfrac{1}{1-p^n} \\right)\n\\]\nfor $a \\in \\bbC^{\\times}$ and\n\\[\n\t\\ch_{p,q} \\loc^+(\\Lambda_0,0) = \\ch_p L(\\Lambda_0) \\left( \\prod_{n>0} \\dfrac{1}{1-p^n q} \\right).\n\\]\nHere $L(\\Lambda_0)$ is the level one integrable irreducible module of the affine Lie algebra with highest weight $\\Lambda_0$. \n\\end{cor}\n\n\\subsection{}\\label{subsection:1.3}\n\nLet us give two comments regarding other works.\nThe first one is for \\cite{MR2017585} mentioned earlier.\nIn \\cite{MR2017585}, Chari-Le have studied local Weyl modules for some quotients of $\\tor$ and $\\tor^+$.\nThey have proved that the level one local Weyl modules in their setting are irreducible and are isomorphic to the evaluation modules \\cite[Theorem~4]{MR2017585}.\nHence we see by our results that the level one local Weyl modules for $\\tor$ and $\\tor^+$ are bigger than those studied in \\cite{MR2017585}.\nWe remark that one of our results (Proposition~\\ref{prop:upper_bound}) gives an alternative proof of \\cite[Theorem~4]{MR2017585}.\n\nThe second one is for \\cite{MR3908899}.\nIn \\cite[Theorem~3.8]{MR3908899}, Tsymbaliuk has proved that the level one Fock representation of Saito-Takemura-Uglov \\cite{MR1603798} and Feigin-Jimbo-Miwa-Mukhin \\cite{MR3023228} over the quantum toroidal algebra of type A is isomorphic to a twist of the vertex representation of Saito \\cite{MR1617066}.\nHere the twist is given by an automorphism analogous to $S^{-1}$ which has been constructed by Miki \\cite{MR1693755}.\nThis result motivated the present work.\nIn the 
situation of \\cite{MR3908899}, both the Fock and the vertex representations are known to be irreducible, and hence the isomorphism can be verified by comparing their highest weights.\nThus, although the calculation of $S^{-1}$ in the quantum toroidal case is much more involved, the argument to show the isomorphism is simple.\nIt is an interesting problem to establish results analogous to those of this article for quantum toroidal algebras and affine Yangians.\n\n\\subsection*{Acknowledgments}\nThe author is grateful to Ryo Sato, who pointed out that the result of \\cite{MR3076215} can be used to improve this work.\nHe would also like to thank Yoshihisa Saito and Kentaro Wada for helpful discussions. \nThis work was supported by JSPS KAKENHI Grant Numbers 17H06127 and 18K13390.\n\n\\section{Preliminaries}\\label{section:Preliminaries}\n\n\\subsection{Simple Lie algebras}\n\nLet $\\frg$ be a finite-dimensional simple Lie algebra over $\\bbC$ with a fixed Cartan subalgebra $\\frh$.\nWe also fix a Borel subalgebra containing $\\frh$.\nThe index set of simple roots is denoted by $I$.\nLet $\\alpha_i$ ($i \\in I$) be the simple roots.\nWe denote by $\\Delta$, $\\Delta^+$, $\\Delta^-$ the sets of roots, positive roots, and negative roots, respectively.\nLet $\\frg_{\\alpha}$ ($\\alpha \\in \\Delta$) be the corresponding root space and put $\\frg_0 = \\frh$.\nThe highest root is denoted by $\\theta$.\n\nLet $(\\,,\\,)$ be a nondegenerate invariant symmetric bilinear form on $\\frg$.\nWe denote by the same letter the bilinear form on $\\frh^*$ induced from $(\\,,\\,)$ and normalize them by $(\\theta,\\theta)=2$.\nPut $d_i = (\\alpha_i,\\alpha_i)\/2$.\nWe fix Chevalley generators $e_i, f_i, h_i$ ($i \\in I$) so that $(e_i,f_i)=d_i^{-1}$ and $h_i = [e_i,f_i]$.\nWe also fix root vectors $e_{\\theta} \\in \\frg_{\\theta}$ and $f_{\\theta} \\in \\frg_{-\\theta}$ so that $(e_{\\theta},f_{\\theta})=1$.\nWe denote by $h_{\\alpha} \\in \\frh$ the coroot corresponding to $\\alpha \\in 
\\Delta$.\nThe root lattice $Q$ is defined by $Q=\\bigoplus_{i \\in I} \\bbZ \\alpha_i$.\n\n\\subsection{Toroidal Lie algebras}\\label{subsection:toroidal}\n\nThe universal central extension of the Lie algebra $\\frg \\otimes \\bbC[s^{\\pm 1},t^{\\pm 1}]$ is given by\n\\[\n\t\\frg \\otimes \\bbC[s^{\\pm 1},t^{\\pm 1}] \\oplus \\Omega_{\\bbC[s^{\\pm 1},t^{\\pm 1}]} \/ \\Ima d.\n\\]\nHere $\\Omega_A$ for a commutative $\\bbC$-algebra $A$ denotes the module of differentials, and $d \\colon A \\to \\Omega_A$ the differential map.\nThe Lie bracket is given by\n\\[\n\t[x \\otimes a, y \\otimes b] = [x,y] \\otimes ab + (x,y) (da)b.\n\\]\nSee \\cite[Section~2]{MR1066569} for details.\n\nWe put\n\\[\n\tc(k,l) = \\begin{cases}\n\t\ts^k t^{l-1} dt & \\text{if } k \\neq 0,\\\\\n\t\ts^{-1} t^l ds & \\text{if } k = 0\n\t\\end{cases}\n\\]\nfor $(k,l) \\in \\bbZ^2 \\setminus \\{(0,0)\\}$ and $c_s = s^{-1} ds$, $c_t = t^{-1} dt$. \nThen $\\Omega_{\\bbC[s^{\\pm 1},t^{\\pm 1}]} \/ \\Ima d$ has a $\\bbC$-basis $c(k,l)$ with $(k,l) \\in \\bbZ^2 \\setminus \\{(0,0)\\}$, $c_s$, $c_t$.\nWe can explicitly describe the Lie bracket as follows:\n\\begin{equation}\n\t\\begin{split}\n\t\t&[x \\otimes s^k t^l, y \\otimes s^m t^n] \\\\\n\t\t&= \\begin{cases}\n\t\t\t[x,y] \\otimes s^{k+m} t^{l+n} + (x,y) \\dfrac{lm-kn}{k+m} c(k+m,l+n) & \\text{if } k+m \\neq 0,\\\\\n\t\t\t[x,y] \\otimes t^{l+n} + (x,y) k c(0,l+n) & \\text{if } k+m = 0 \\text{ and } l+n \\neq 0,\\\\\n\t\t\t[x,y] \\otimes 1 + (x,y) ( k c_s + l c_t ) & \\text{if } k+m = 0 \\text{ and } l+n = 0.\n\t\t\\end{cases}\\label{eq:bracket}\n\t\\end{split}\n\\end{equation}\nWe add the degree operators $d_s$, $d_t$ to this central extension and define the toroidal Lie algebra $\\tor$ by\n\\[\n\t\\tor = \\frg \\otimes \\bbC[s^{\\pm 1},t^{\\pm 1}] \\oplus \\bigoplus_{(k,l) \\in \\bbZ^2 \\setminus \\{(0,0)\\}} \\bbC c(k,l) \\oplus \\bbC c_s \\oplus \\bbC c_t \\oplus \\bbC d_s \\oplus \\bbC d_t,\n\\]\nwhere the additional commutation relations are 
as follows:\n\\begin{gather*}\n\t[d_s, x \\otimes s^k t^l] = k x \\otimes s^k t^l, \\quad [d_t, x \\otimes s^k t^l] = l x \\otimes s^k t^l, \\\\\n\t[d_s, c(k,l)] = k c(k,l), \\quad [d_t, c(k,l)] = l c(k,l),\\\\\n\t[d_s,c_s]=[d_t,c_s]=[d_s,c_t]=[d_t,c_t]=[d_s,d_t]=0.\n\\end{gather*}\n\n\\begin{rem}\nNote that we have\n\\[\n\tc(k,l) = \\begin{cases}\n\t\t(-k\/l) s^{k-1} t^{l} ds & \\text{if } k \\neq 0,\\\\\n\t\ts^{-1} t^l ds & \\text{if } k = 0\n\t\\end{cases}\n\\]\nfor $l \\neq 0$.\nIn particular, $c(k+1,l)$ is a nonzero multiple of $s^{k} t^{l} ds$ if $l \\neq 0$. \nWe will use this fact throughout the article.\n\\end{rem}\n\nLet $\\tor'$ be the Lie subalgebra of $\\tor$ without $d_s$:\n\\[\n\t\\tor' = \\frg \\otimes \\bbC[s^{\\pm 1},t^{\\pm 1}] \\oplus \\bigoplus_{(k,l) \\in \\bbZ^2 \\setminus \\{(0,0)\\}} \\bbC c(k,l) \\oplus \\bbC c_s \\oplus \\bbC c_t \\oplus \\bbC d_t.\n\\]\nWe also consider the following Lie subalgebra $\\tor^+$ of $\\tor$:\n\\[\n\t\\tor^+ = \\frg \\otimes \\bbC[s,t^{\\pm 1}] \\oplus \\bigoplus_{\\substack{k \\geq 1\\\\l \\in \\bbZ}} \\bbC c(k,l) \\oplus \\bbC c_t \\oplus \\bbC d_t.\n\\]\nThe Lie algebra $\\tor^+$ is the semidirect product of the universal central extension of $\\frg \\otimes \\bbC[s,t^{\\pm 1}]$ and the 1-dimensional abelian Lie algebra $\\bbC d_t$.\nIt is an affine analog of the current Lie algebra $\\frg \\otimes \\bbC[s]$ and has a $\\bbZ_{\\geq 0}$-graded Lie algebra structure by assigning\n\\[\n\t\\deg (x \\otimes s^k t^l) = k \\ (x \\in \\frg),\\quad \\deg c(k,l) = k \\ (k \\geq 1, l \\in \\bbZ),\\quad \\deg c_t = \\deg d_t = 0.\n\\]\n\n\\begin{rem}\nLater we will study graded $\\tor^+$-modules.\nIt is equivalent to considering modules of $\\tor^+ \\oplus \\bbC d_s$. 
\n\\end{rem}\n\nThe toroidal Lie algebra $\\tor$ contains two Lie subalgebras $\\aff^{(s)}$ and $\\aff^{(t)}$ isomorphic to the affine Lie algebra associated with $\\frg$:\n\\[\n\t\\aff^{(s)} = \\frg \\otimes \\bbC[s^{\\pm 1}] \\oplus \\bbC c_s \\oplus \\bbC d_s, \\quad \\aff^{(t)} = \\frg \\otimes \\bbC[t^{\\pm 1}] \\oplus \\bbC c_t \\oplus \\bbC d_t.\n\\]\nNote that $\\tor^+$ contains $\\aff^{(t)}$.\nWe have\n\\[\n\t\\tor = \\left(\\aff^{(t)}\\right)' \\otimes \\bbC[s^{\\pm 1}] \\oplus \\bigoplus_{\\substack{k \\in \\bbZ\\\\l \\neq 0}} \\bbC c(k,l) \\oplus \\bbC c_s \\oplus \\bbC d_s \\oplus \\bbC d_t,\n\\]\n\\[\n\t\\tor^+ = \\left(\\aff^{(t)}\\right)' \\otimes \\bbC[s] \\oplus \\bigoplus_{\\substack{k \\geq 1\\\\l \\neq 0}} \\bbC c(k,l) \\oplus \\bbC d_t,\n\\]\nwhere $\\left(\\aff^{(t)}\\right)' = \\frg \\otimes \\bbC[t^{\\pm 1}] \\oplus \\bbC c_t$.\nHere, the elements $c(k,0)=s^k t^{-1} dt$ are regarded as $c_t \\otimes s^k \\in \\left(\\aff^{(t)}\\right)' \\otimes s^k$.\n\n\\begin{rem}\\label{rem:CL}\nChari-Le~\\cite{MR2017585} have studied a version of toroidal Lie algebras which is the quotient of $\\tor$ modulo the elements $c(k,l)$ with $l \\neq 0$, namely, it is equal to\n\\[\n\t\\frg \\otimes \\bbC[s^{\\pm 1},t^{\\pm 1}] \\oplus \\bigoplus_{k \\neq 0} \\bbC c(k,0) \\oplus \\bbC c_s \\oplus \\bbC c_t \\oplus \\bbC d_s \\oplus \\bbC d_t\n\t=\\left(\\aff^{(t)}\\right)' \\otimes \\bbC[s^{\\pm 1}] \\oplus \\bbC c_s \\oplus \\bbC d_s \\oplus \\bbC d_t\n\\]\nas a $\\bbC$-vector space.\n\\end{rem}\n\nWe introduce presentations of $\\tor$ and $\\tor^+$.\nPut $\\affI = I \\sqcup \\{0\\}$.\nLet $(a_{ij})_{i,j \\in \\affI}$ be the Cartan matrix of $\\aff^{(t)}$ and set $d_0 = 1$.\n\\begin{dfn}\nLet $\\frt$ be the Lie algebra generated by $e_{i,k}$, $f_{i,k}$, $h_{i,k}$ ($i \\in \\affI$, $k \\in \\bbZ$), $c_s$, $d_s$, $d_t$ subject to the following defining relations:\n\\begin{gather*}\n\tc_s :\\text{central}, \\quad [h_{i,k},h_{j,l}]=d_j^{-1} a_{ij} k 
\\delta_{k+l,0} c_s, \\quad [e_{i,k},f_{j,l}]=\\delta_{ij} \\left( h_{i,k+l} + d_i^{-1} k \\delta_{k+l,0} c_s \\right),\\\\\n\t[h_{i,k},e_{j,l}] = a_{ij} e_{j,k+l}, \\quad [h_{i,k},f_{j,l}] = -a_{ij} f_{j,k+l},\\\\\n\t[e_{i,k},e_{i,l}] = 0, \\quad [f_{i,k},f_{i,l}] = 0,\\\\\n\t(\\ad e_{i,0})^{1-a_{ij}} e_{j,k} = 0, \\quad (\\ad f_{i,0})^{1-a_{ij}} f_{j,k} = 0, \\quad (i \\neq j)\\\\\n\t[d_s, e_{i,k}] = k e_{i,k}, \\quad [d_s, f_{i,k}] = k f_{i,k}, \\quad [d_s, h_{i,k}] = k h_{i,k},\\\\\n\t[d_t, e_{i,k}] = \\delta_{i,0} e_{i,k}, \\quad [d_t, f_{i,k}] = -\\delta_{i,0} f_{i,k}, \\quad [d_t, h_{i,k}] = 0,\\\\\n\t[d_s,d_t]=0.\n\\end{gather*}\n\\end{dfn}\n\n\\begin{dfn}\nLet $\\frs$ be the Lie algebra generated by $e_{i,k}$, $f_{i,k}$, $h_{i,k}$ ($i \\in \\affI$, $k \\in \\bbZ_{\\geq 0}$), $d_t$ subject to the following defining relations:\n\\begin{gather*}\n\t[h_{i,k},h_{j,l}]=0, \\quad [e_{i,k},f_{j,l}]=\\delta_{ij} h_{i,k+l},\\\\\n\t[h_{i,k},e_{j,l}] = a_{ij} e_{j,k+l}, \\quad [h_{i,k},f_{j,l}] = -a_{ij} f_{j,k+l},\\\\\n\t[e_{i,k},e_{i,l}] = 0, \\quad [f_{i,k},f_{i,l}] = 0,\\\\\n\t(\\ad e_{i,0})^{1-a_{ij}} e_{j,k} = 0, \\quad (\\ad f_{i,0})^{1-a_{ij}} f_{j,k} = 0, \\quad (i \\neq j)\\\\\n\t[d_t, e_{i,k}] = \\delta_{i,0} e_{i,k}, \\quad [d_t, f_{i,k}] = -\\delta_{i,0} f_{i,k}, \\quad [d_t, h_{i,k}] = 0.\n\\end{gather*}\n\\end{dfn}\n\n\\begin{thm}[\\cite{MR1066569} Proposition~3.5, \\cite{GRW} Proposition~4.4]\nWe have an isomorphism of Lie algebras $\\frt \\to \\tor$ such that\n\\begin{gather*}\n\te_{i,k} \\mapsto \\begin{cases}\n\t\te_i \\otimes s^k & \\text{if } i \\in I, \\\\\n\t\tf_{\\theta} \\otimes s^k t & \\text{if } i =0,\n\t\\end{cases}\\quad \n\tf_{i,k} \\mapsto \\begin{cases}\n\t\tf_i \\otimes s^k & \\text{if } i \\in I, \\\\\n\t\te_{\\theta} \\otimes s^k t^{-1} & \\text{if } i =0,\n\t\\end{cases}\\\\\n\th_{i,k} \\mapsto \\begin{cases}\n\t\th_i \\otimes s^k & \\text{if } i \\in I, \\\\\n\t\t-h_{\\theta} \\otimes s^k + s^k t^{-1} dt & \\text{if } i 
=0,\n\t\\end{cases}\\quad c_s \\mapsto c_s,\\quad d_s \\mapsto d_s,\\quad d_t \\mapsto d_t.\n\\end{gather*}\nMoreover this restricts to an isomorphism $\\frs \\to \\tor^+$.\n\\end{thm}\n\nUnder the isomorphism, the elements $e_{i,0}, f_{i,0}, h_{i,0}$ are in the Lie subalgebra $\\aff^{(t)}$ and identified with its Chevalley generators.\nWe sometimes denote them by $e_{i}, f_{i}, h_{i}$.\nNote that $e_{i,k}$, $f_{i,k}$, $h_{i,k}$ ($i \\in I$, $k \\in \\bbZ$), $c_s$, $d_s$ generate the Lie subalgebra $\\aff^{(s)}$ of $\\frt \\cong \\tor$.\n\nWe introduce notions for the affine Lie algebra $\\aff^{(t)}$.\nLet $\\affn^{(t)}$ be the Lie subalgebra of $\\aff^{(t)}$ generated by $e_i$ ($i \\in \\affI$), and $\\affnbar^{(t)}$ that generated by $f_i$ ($i \\in \\affI$).\nSet\n\\[\n\t\\affh^{(t)} = \\frh \\oplus \\bbC c_t \\oplus \\bbC d_t.\n\\]\nThe generator of imaginary roots is denoted by $\\delta$.\nWe put $\\alpha_0 = -\\theta + \\delta$ so that $\\alpha_i$ ($i \\in \\affI$) forms simple roots of $\\aff^{(t)}$.\nWe denote by $\\affDelta$, $\\affDelta^+$ the sets of roots, positive roots, respectively.\nLet $\\left(\\aff^{(t)}\\right)_{\\alpha}$ ($\\alpha \\in \\affDelta)$ be the corresponding root space.\nThe coroot is defined by $h_{\\beta+l\\delta}=h_{\\beta}+lc_t$ for $\\beta \\in \\Delta \\cup \\{0\\}$ and $l \\in \\bbZ$.\nWe set $\\affQ = \\bigoplus_{i \\in \\affI} \\bbZ \\alpha_i$ and $\\affQ^+ = \\sum_{i \\in \\affI} \\bbZ_{\\geq 0} \\alpha_i$. 
\n\nWe say that an element $\\Lambda$ of $\\Hom_{\\bbC} (\\affh^{(t)},\\bbC)$ is a dominant integral weight of $\\aff^{(t)}$ if $\\langle h_i, \\Lambda\\rangle \\in \\bbZ_{\\geq 0}$ holds for any $i \\in \\affI$.\nIn this article, they are further assumed to satisfy $\\langle d_t, \\Lambda\\rangle =0$ for simplicity.\nDefine the fundamental weights $\\Lambda_i$ ($i \\in \\affI$) by $\\langle h_j , \\Lambda_i \\rangle = \\delta_{ij}$ and $\\langle d_t, \\Lambda_i \\rangle = 0$.\nWe denote by $L(\\Lambda)$ the irreducible $\\aff^{(t)}$-module with highest weight $\\Lambda$.\nWe will use the symbol $L(\\Lambda)^{(s)}$ for the irreducible $\\aff^{(s)}$-module with highest weight $\\Lambda$.\n\n\\subsection{Triangular decomposition}\n\nLet $\\torn$ be the Lie subalgebra of $\\tor$ generated by $e_{i,k}$ ($i \\in \\affI$, $k \\in \\bbZ$), and $\\tornbar$ that generated by $f_{i,k}$ ($i \\in \\affI$, $k \\in \\bbZ$).\nSet\n\\begin{equation*}\n\t\\begin{split}\n\t\t\\torh &= \\frh \\otimes \\bbC[s^{\\pm 1}] \\oplus \\displaystyle\\bigoplus_{k \\neq 0} \\bbC c(k,0) \\oplus \\bbC c_s \\oplus \\bbC c_t \\oplus \\bbC d_s \\oplus \\bbC d_t \\\\\n\t\t&= \\left(\\frh \\oplus \\bbC c_t\\right) \\otimes \\bbC[s^{\\pm 1}] \\oplus \\bbC c_s \\oplus \\bbC d_s \\oplus \\bbC d_t.\n\t\\end{split}\n\\end{equation*}\n\n\\begin{prop}\nWe have\n\\[\n\t\\torn = \\affn^{(t)} \\otimes \\bbC[s^{\\pm 1}] \\oplus \\displaystyle\\bigoplus_{\\substack{k \\in \\bbZ \\\\ l \\geq 1}} \\bbC c(k,l),\\quad\n\t\\tornbar = \\affnbar^{(t)} \\otimes \\bbC[s^{\\pm 1}] \\oplus \\displaystyle\\bigoplus_{\\substack{k \\in \\bbZ \\\\ l \\leq -1}} \\bbC c(k,l).\n\\]\n\\end{prop}\n\n\\begin{proof}\nDenote by $\\torn'$ and $\\tornbar'$ the right-hand sides.\nThen we see by the formula of the Lie bracket (\\ref{eq:bracket}) that $\\torn \\supset \\torn'$ and $\\tornbar \\supset \\tornbar'$.\nWe also see that $\\tornbar + \\torh + \\torn = \\tornbar \\oplus \\torh \\oplus \\torn$.\nSince we have $\\tor = \\tornbar' 
\\oplus \\torh \\oplus \\torn'$, the assertion holds.\n\\end{proof}\n\nIn this article, we call\n\\[\n\t\\tor = \\tornbar \\oplus \\torh \\oplus \\torn\n\\]\nthe triangular decomposition of $\\tor$.\n\nIn $\\tor^+$, the elements $e_{i,k}$ ($i \\in \\affI$, $k \\in \\bbZ_{\\geq 0}$) generate \n\\[\n\t\\torn \\cap \\tor^+ = \\affn^{(t)} \\otimes \\bbC[s] \\oplus \\displaystyle\\bigoplus_{\\substack{k \\geq 1 \\\\ l \\geq 1}} \\bbC c(k,l),\n\\]\nand $f_{i,k}$ ($i \\in \\affI$, $k \\in \\bbZ_{\\geq 0}$) generate \n\\[\n\t\\tornbar \\cap \\tor^+ = \\affnbar^{(t)} \\otimes \\bbC[s] \\oplus \\displaystyle\\bigoplus_{\\substack{k \\geq 1 \\\\ l \\leq -1}} \\bbC c(k,l).\n\\]\nFurther set\n\\[\n\t\\torh' = \\torh \\cap \\tor' = \\left(\\frh \\oplus \\bbC c_t\\right) \\otimes \\bbC[s^{\\pm 1}] \\oplus \\bbC c_s \\oplus \\bbC d_t.\n\\]\n\n\\subsection{Automorphisms}\\label{subsection:auto}\n\nLet $S$ be the ring automorphism of $\\bbC[s^{\\pm 1},t^{\\pm 1}]$ defined by $s \\mapsto t$, $t \\mapsto s^{-1}$.\nIt naturally induces a Lie algebra automorphism of $\\tor$ which is denoted by the same letter $S$.\nLater we will rather use its inverse $S^{-1}$.\nIt corresponds to the assignment $s \\mapsto t^{-1}$, $t \\mapsto s$.\nIn particular we have $S^{-1}(c(k,l)) = c(l,-k)$, $S^{-1}(c_s) = -c_t$ and $S^{-1}(c_t) = c_s$.\n\nWe introduce Lie algebra automorphisms $T_0$ and $T_{\\theta}$ of $\\tor$ by\n\\[\n\tT_0 = \\exp\\ad e_0 \\circ \\exp\\ad (-f_0) \\circ \\exp\\ad e_0,\n\\]\n\\[\n\tT_{\\theta} = \\exp\\ad e_{\\theta} \\circ \\exp\\ad (-f_{\\theta}) \\circ \\exp\\ad e_{\\theta}.\n\\]\nWe can regard them as automorphisms of $\\tor^+$ by restriction.\n\n\\begin{lem}\\label{lem:induction}\nWe have $e_{\\theta} \\otimes s^k t^l = T_0 T_{\\theta} (e_{\\theta} \\otimes s^k t^{l+2})$.\n\\end{lem}\n\n\\begin{proof}\nBy a direct calculation.\nWe use the following:\n\\begin{align*}\n\tT_{\\theta} (e_{\\theta} \\otimes s^k t^{l+2}) &= - f_{\\theta} \\otimes s^k t^{l+2},\\\\\n\t\\exp\\ad e_0 
(f_{\\theta} \\otimes s^k t^{l+2}) &= f_{\\theta} \\otimes s^k t^{l+2},\\\\\n\t\\exp\\ad (-f_0) (f_{\\theta} \\otimes s^k t^{l+2}) &= f_{\\theta} \\otimes s^k t^{l+2} - (h_{\\theta} \\otimes s^k t^{l+1}-s^kt^ldt) - e_{\\theta} \\otimes s^k t^{l},\\\\\n\t\\exp\\ad e_0 (h_{\\theta} \\otimes s^k t^{l+1}) &= h_{\\theta} \\otimes s^k t^{l+1} + 2 f_{\\theta} \\otimes s^k t^{l+2},\\\\\n\t\\exp\\ad e_0 (e_{\\theta} \\otimes s^k t^{l}) &= e_{\\theta} \\otimes s^k t^{l} - h_{\\theta} \\otimes s^k t^{l+1} + s^k t^l dt - f_{\\theta} \\otimes s^k t^{l+2}.\n\\end{align*}\n\\end{proof}\n\nLet $M$ be a module of $\\mathcal{A}=\\tor,$ $\\tor',$ or $\\tor^+$ and assume that $M$ is integrable as a $\\aff^{(t)}$-module.\nThen $T_0, T_{\\theta} \\in \\Aut M$ are similarly defined.\nMoreover they satisfy\n\\[\n\tT_0(xv) = T_0(x)T_0(v), \\quad T_{\\theta}(xv) = T_{\\theta}(x)T_{\\theta}(v)\n\\]\nfor $x \\in \\mathcal{A}$ and $v \\in M$.\n\nThe Lie algebra automorphism $\\tau_a$ ($a \\in \\bbC$) of $\\tor^+$ is induced from the map $s \\mapsto s+a$.\n\n\\subsection{Characters}\\label{subsection:Characters}\n\nLet $M$ be a module of $\\mathcal{A}=\\tor,$ $\\tor',$ or $\\tor^+$ and regard it as a $\\aff^{(t)}$-module by restriction.\nFor $\\lambda \\in \\frh^*$ and $m \\in \\bbC$, let $M_{\\lambda-m\\delta}$ be the corresponding weight space.\nIn this article, we always assume that any $\\aff^{(t)}$-module $M$ has the weight space decomposition and $M_{\\lambda-m\\delta}=0$ unless $m \\in \\bbZ$.\n\nWe define the $p$-character $\\ch_p M$ of $M$ by\n\\[\n\t\\ch_p M = \\sum_{\\substack{\\lambda \\in \\frh^*\\\\ m \\in \\bbZ}} (\\dim M_{\\lambda-m\\delta}) e^{\\lambda} p^{m}\n\\]\nif it is well-defined.\nThis is nothing but the ordinary $\\aff^{(t)}$-character with $p=e^{-\\delta}$. 
\nLet $M$ be a graded $\\tor^+$-module and $M_{\\lambda-m\\delta} = \\bigoplus_{n \\in \\bbZ} M_{\\lambda-m\\delta}[n]$ the decomposition of the weight space into graded pieces.\nWe define the $(p,q)$-character $\\ch_{p,q} M$ of $M$ by\n\\[\n\t\\ch_{p,q} M = \\sum_{\\substack{\\lambda \\in \\frh^*\\\\ m,n \\in \\bbZ}} (\\dim M_{\\lambda-m\\delta}[n]) e^{\\lambda} p^{m} q^{n}\n\\] \nif it is well-defined.\nFor two formal sums \n\\[\n\tf = \\sum_{\\substack{\\lambda \\in \\frh^*\\\\ m \\in \\bbZ}} f_{\\lambda,m} e^{\\lambda} p^{m}, \\quad g = \\sum_{\\substack{\\lambda \\in \\frh^*\\\\ m \\in \\bbZ}} g_{\\lambda,m} e^{\\lambda} p^{m} \\quad (f_{\\lambda,m}, g_{\\lambda,m} \\in \\bbZ),\n\\] \nwe say $f \\leq g$ if $f_{\\lambda,m} \\leq g_{\\lambda,m}$ holds for all $\\lambda$ and $m$.\nWe define an inequality $\\leq$ for \n\\[\n\tf = \\sum_{\\substack{\\lambda \\in \\frh^*\\\\ m,n \\in \\bbZ}} f_{\\lambda,m,n} e^{\\lambda} p^{m}q^{n}, \\quad g = \\sum_{\\substack{\\lambda \\in \\frh^*\\\\ m,n \\in \\bbZ}} g_{\\lambda,m,n} e^{\\lambda} p^{m}q^{n} \\quad (f_{\\lambda,m,n}, g_{\\lambda,m,n} \\in \\bbZ)\n\\] \nsimilarly.\n\n\\section{Weyl modules}\\label{section:Weyl modules}\n\n\\subsection{Definitions of global\/local Weyl modules}\n\n\\begin{dfn}\nLet $\\Lambda$ be a dominant integral weight of $\\aff^{(t)}$.\nThe global Weyl module $\\glob(\\Lambda)$ for $\\tor$ with highest weight $\\Lambda$ is the $\\tor$-module generated by $v_{\\Lambda}$ subject to the following defining relations:\n\\begin{gather*}\n\te_{i,k} v_{\\Lambda} = 0\\ (i \\in \\affI, k \\in \\bbZ),\\quad h v_{\\Lambda} = \\langle h,\\Lambda \\rangle v_{\\Lambda}\\ (h \\in \\affh^{(t)}),\\quad\tf_i^{\\langle h_i,\\Lambda \\rangle + 1} v_{\\Lambda} = 0\\ (i \\in \\affI), \\label{eq:global1} \\\\\n\t\tc_s v_{\\Lambda} = d_s v_{\\Lambda} = 0. 
\\label{eq:global2}\n\\end{gather*}\nThe global Weyl module $\\glob^+(\\Lambda)$ for $\\tor^+$ with highest weight $\\Lambda$ is the $\\tor^+$-module generated by $v_{\\Lambda}^+$ subject to the following defining relations:\n\\[\n\te_{i,k} v_{\\Lambda}^+ = 0\\ (i \\in \\affI, k \\in \\bbZ_{\\geq 0}),\\quad h v_{\\Lambda}^+ = \\langle h,\\Lambda \\rangle v_{\\Lambda}^+\\ (h \\in \\affh^{(t)}),\\quad \tf_i^{\\langle h_i,\\Lambda \\rangle + 1} v_{\\Lambda}^+ = 0\\ (i \\in \\affI).\n\\]\n\\end{dfn}\n\nWe describe the endomorphism rings of $\\glob(\\Lambda)$ and $\\glob^{+}(\\Lambda)$.\nThe following argument is the same as in the case of the affine and the current Lie algebras.\nWe give some details for completeness.\n\n\\begin{lem}\nWe have an action $\\varphi$ of $U(\\torh')$ on each weight space $\\glob(\\Lambda)_{\\Lambda-\\beta}$ $(\\beta \\in \\affQ^{+})$ defined by\n\\[\n\t\\varphi(a) (X v_{\\Lambda} ) = X (a v_{\\Lambda})\n\\]\nfor $a \\in U(\\torh')$ and $X \\in U(\\tor')$.\n\\end{lem}\n\n\\begin{proof}\nTo see that the action is well-defined, we need to check that $X v_{\\Lambda}=0$ implies $X (a v_{\\Lambda})=0$.\nBy the same argument as \\cite[3.4]{MR2718936}, we can show that if $v$ satisfies the relations \n\\[\n\te_{i,k} v = 0\\ (i \\in \\affI, k \\in \\bbZ),\\ h v = \\langle h,\\Lambda \\rangle v\\ (h \\in \\affh^{(t)}),\\ f_i^{\\langle h_i,\\Lambda \\rangle + 1} v = 0\\ (i \\in \\affI),\\ c_s v = 0,\n\\]\nthen so does $a v$.\nThis completes the proof.\n\\end{proof}\n\nLet $\\Ann v_{\\Lambda}$ be the annihilator ideal of $U(\\torh')$ and set\n\\[\n\t\\tilde{A}(\\Lambda) = U(\\torh') \/ \\Ann v_{\\Lambda}.\n\\]\nSince the action $\\varphi$ of $\\torh'$ factors through an abelian Lie algebra $\\torh' \/ \\bbC c_s \\oplus \\bbC d_t$, $\\tilde{A}(\\Lambda)$ is a commutative algebra.\n\n\\begin{lem}\\label{lem:highest_weight_space}\nThe action map\n\\[\n\t\\tilde{A}(\\Lambda) \\to \\glob(\\Lambda)_{\\Lambda}, \\quad a \\mapsto a v_{\\Lambda}\n\\]\ngives an 
isomorphism of $\\bbC$-vector spaces.\n\\end{lem}\n\n\\begin{proof}\nThe well-definedness and the injectivity immediately follow from the definition of $\\tilde{A}(\\Lambda)$.\nThe surjectivity holds since we have $\\glob(\\Lambda)_{\\Lambda} = U(\\torh') v_{\\Lambda}$.\n\\end{proof}\n\n\\begin{lem}\nThe natural map\n\\[\n\t\\tilde{A}(\\Lambda) \\to \\End_{\\tor'} \\glob(\\Lambda), \\quad a \\mapsto \\varphi(a)\n\\]\ngives an isomorphism of $\\bbC$-algebras.\n\\end{lem}\n\n\\begin{proof}\nBy the definition of $\\tilde{A}(\\Lambda)$, we have a natural injective algebra homomorphism\n\\[\n\t\\tilde{A}(\\Lambda) \\to \\End_{\\tor'} \\glob(\\Lambda), \\quad a \\mapsto \\varphi(a).\n\\]\nWe also have a natural $\\bbC$-linear map\n\\[\n\t\\End_{\\tor'} \\glob(\\Lambda) \\to \\glob(\\Lambda)_{\\Lambda}, \\quad f \\mapsto f(v_{\\Lambda})\n\\]\nand this is injective since $\\glob(\\Lambda)$ is generated by $v_{\\Lambda}$.\nThe composite of the maps\n\\[\n\t\\tilde{A}(\\Lambda) \\hookrightarrow \\End_{\\tor'} \\glob(\\Lambda) \\hookrightarrow \\glob(\\Lambda)_{\\Lambda}\n\\]\nis given by $a \\mapsto a v_{\\Lambda}$.\nSince this map is bijective by Lemma~\\ref{lem:highest_weight_space}, the two injective maps are bijective.\n\\end{proof}\n\nWrite $\\Lambda = \\sum_{i \\in \\affI} m_i \\Lambda_i$ with the fundamental weights $\\Lambda_i$ and $m_i \\in \\bbZ_{\\geq 0}$.\nWe define $A(\\Lambda)$ by\n\\[\n\tA(\\Lambda) = \\bigotimes_{i \\in \\affI} \\bbC[z_{i,1}^{\\pm 1}, \\ldots, z_{i,m_i}^{\\pm 1}]^{\\frakS_{m_i}},\n\\]\t\nthe symmetric Laurent polynomial algebra associated with $\\Lambda$.\n\n\\begin{prop}\nThe assignment\n\\[\n\t\\sum_{m=1}^{m_i} z_{i,m}^k \\mapsto h_{i,k}\n\\]\ngives an isomorphism $A(\\Lambda) \\cong \\tilde{A}(\\Lambda)$ of $\\bbC$-algebras.\n\\end{prop}\n\n\\begin{proof}\nThe well-definedness and the surjectivity of the map are proved in the same way as \\cite[Proposition~1.1 (i), (iv), (v)]{MR1850556}.\n\nWe follow the argument in \\cite[5.6]{MR3384485} 
to show the injectivity.\nTake a nonzero element $a$ of $A(\\Lambda)$ and fix a maximal ideal $\\mathfrak{m}$ which does not contain $a$.\nAssume that $\\glob(\\Lambda) \\otimes_{A(\\Lambda)} A(\\Lambda) \/ \\mathfrak{m}$ is nonzero.\nThen the image of $a$ in $A(\\Lambda) \/ \\mathfrak{m}$ acts on $\\glob(\\Lambda) \\otimes_{A(\\Lambda)} A(\\Lambda) \/ \\mathfrak{m}$ by a nonzero scalar.\nHence we conclude that $a$ acts nontrivially on $\\glob(\\Lambda)$, and the map $A(\\Lambda) \\to \\tilde{A}(\\Lambda) \\cong \\End_{\\tor'}\\glob(\\Lambda)$ is injective.\n\nThus it is enough to show that $\\glob(\\Lambda) \\otimes_{A(\\Lambda)} A(\\Lambda) \/ \\mathfrak{m}$ is nonzero.\nWe denote by $\\bar{p}_{k}^{(i)}$ ($i \\in \\affI$, $k \\in \\bbZ$) the image of the power sum function $p_{k}^{(i)} = \\sum_{m=1}^{m_i} z_{i,m}^k$ in $A(\\Lambda)\/\\mathfrak{m}$.\nWe can choose a set of nonzero complex numbers $\\{ a_{i,m} \\}$ satisfying\n\\[\n\t\\sum_{m=1}^{m_i} a_{i,m}^k = \\bar{p}_{k}^{(i)}\n\\]\nunder an identification $A(\\Lambda)\/\\mathfrak{m} \\cong \\bbC$.\nFor each $a \\in \\bbC^{\\times}$, we have the evaluation map\n\\[\n\t\\ev_a \\colon \\tor' \\to \\aff^{(t)} \n\\]\ndefined as the composite of\n\\[\n\t\\tor' \\to \\tor' \/ \\bigoplus_{\\substack{k \\in \\bbZ\\\\ l \\neq 0}} \\bbC c(k,l) \\oplus \\bbC c_s \\cong \\left( \\aff^{(t)} \\right)' \\otimes \\bbC[s^{\\pm 1}] \\oplus \\bbC d_t\n\\]\nand the evaluation at $s=a$.\nThen we have a nonzero $\\tor'$-module homomorphism\n\\[\n\t\\glob(\\Lambda) \\otimes_{A(\\Lambda)} A(\\Lambda) \/ \\mathfrak{m} \\to \\bigotimes_{i \\in \\affI} \\bigotimes_{m=1}^{m_i} \\ev_{a_{i,m}}^{*}L(\\Lambda_i)\n\\]\nsending $v_{\\Lambda} \\otimes 1$ to the tensor product of highest weight vectors.\nThis proves the assertion.\n\\end{proof}\n\nWe have a completely analogous story for the global Weyl module $\\glob^+(\\Lambda)$ over $\\tor^+$ if we replace $A(\\Lambda)$ with\n\\[\n\tA^+(\\Lambda) = \\bigotimes_{i \\in \\affI} 
\\bbC[z_{i,1}, \\ldots, z_{i,m_i}]^{\\frakS_{m_i}}.\n\\]\nWe can summarize the discussion so far as follows.\n\n\\begin{prop}\\label{prop:endomorphism}\nWe have $\\End_{\\tor'} \\glob(\\Lambda) \\cong A(\\Lambda)$ and $\\End_{\\tor^+} \\glob^+(\\Lambda) \\cong A^+(\\Lambda)$.\n\\end{prop}\n\nFor a maximal ideal $\\mathbf{a}$ of $A = A(\\Lambda)$ or $A^+(\\Lambda)$, we denote by $\\bbC_{\\mathbf{a}}$ the corresponding one-dimensional module $A\/\\mathbf{a}$.\n\n\\begin{dfn}\nWe call\n\\[\n\t\\loc(\\Lambda,\\mathbf{a}) = \\glob(\\Lambda) \\otimes_{A(\\Lambda)} \\bbC_{\\mathbf{a}}, \\quad \\loc^+(\\Lambda,\\mathbf{a}) = \\glob^+(\\Lambda) \\otimes_{A^+(\\Lambda)} \\bbC_{\\mathbf{a}}\n\\]\nthe local Weyl modules for $\\tor'$ and $\\tor^+$, respectively.\n\\end{dfn}\nWe denote the images of $v_{\\Lambda}$ and $v_{\\Lambda}^+$ in the local Weyl modules by $v_{\\Lambda,\\mathbf{a}}$ and $v_{\\Lambda,\\mathbf{a}}^+$.\n\n\\begin{rem}\nThe global\/local Weyl modules for $\\tor$ and $\\tor^+$ can be regarded as a sort of highest weight modules with respect to their triangular decompositions:\n\\[\n\t\\tor = \\tornbar \\oplus \\torh \\oplus \\torn, \\quad \\tor^+ = \\left( \\tornbar \\cap \\tor^+ \\right) \\oplus \\left( \\torh \\cap \\tor^+ \\right) \\oplus \\left( \\torn\\cap \\tor^+ \\right).\n\\]\n\\end{rem}\n\n\\subsection{Finiteness of weight spaces}\\label{subsection:finiteness_property}\n\nThe goal of this subsection is to prove the following.\n\n\\begin{prop}\\label{prop:weight}\n\\begin{enumerate}\n\\item\nEvery weight space $\\glob(\\Lambda)_{\\Lambda-\\beta}$ is finitely generated over $A(\\Lambda)$.\nEvery weight space $\\loc(\\Lambda,\\mathbf{a})_{\\Lambda-\\beta}$ is finite-dimensional.\n\\item\nEvery weight space $\\glob^+(\\Lambda)_{\\Lambda-\\beta}$ is finitely generated over $A^+(\\Lambda)$.\nEvery weight space $\\loc^+(\\Lambda,\\mathbf{a})_{\\Lambda-\\beta}$ is finite-dimensional.\n\\item\nWe have $\\loc(\\Lambda,\\mathbf{a}) = U(\\tor^+) 
v_{\\Lambda,\\mathbf{a}}$. \n\\end{enumerate}\n\\end{prop}\n\nWe begin by proving the following lemma.\n\n\\begin{lem}\\label{lem:single}\nLet $\\Lambda$ be a dominant integral weight of $\\aff^{(t)}$.\n\\begin{enumerate}\n\\item\nFor each positive root $\\beta \\in \\affDelta^+$, there exists a nonnegative integer $N(\\beta)$ satisfying the following: we have\n\\[\n\t(X_{-\\beta} \\otimes s^{k}) v_{\\Lambda} \\in \\sum_{m=0}^{N(\\beta)} (X_{-\\beta} \\otimes s^{m}) A(\\Lambda) v_{\\Lambda}\n\\]\nfor any root vector $X_{-\\beta}$ of $\\affnbar^{(t)}$ corresponding to a negative root $-\\beta$ and any $k$.\n\n\\item\nFor each positive integer $l >0$, there exists a nonnegative integer $N_l$ satisfying the following: we have\n\\[\n\tc(k,-l) v_{\\Lambda} \\in \\sum_{m=1}^{N_l} c(m,-l) A(\\Lambda) v_{\\Lambda} + \\sum_{m=0}^{N_l} \\left( \\left( \\aff^{(t)} \\right)_{-l\\delta} \\otimes s^m\\right) A(\\Lambda) v_{\\Lambda} \n\\]\nfor any $k$.\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\nThe assertion (i) is proved in the same way as \\cite[Proposition~3.2 and Corollary~3.1]{MR2017585}. 
\n\nWe prove (ii).\nTake an arbitrary element $\\alpha$ of $\\Delta^+$ and fix root vectors $x_{\\alpha} \\in \\frg_{\\alpha}$ and $x_{-\\alpha} \\in \\frg_{-\\alpha}$ satisfying $(x_{\\alpha},x_{-\\alpha})=1$.\nThen we have\n\\begin{equation*}\n\t\\begin{split}\n\t\t(s^{k} t^{-l} ds)v_{\\Lambda} &= \\left( [x_{\\alpha} \\otimes s, x_{-\\alpha} \\otimes s^{k}t^{-l}] - h_{\\alpha} \\otimes s^{k+1} t^{-l} \\right) v_{\\Lambda} \\\\\n\t\t&= (x_{\\alpha} \\otimes s) (x_{-\\alpha} \\otimes s^{k}t^{-l}) v_{\\Lambda} - (h_{\\alpha} \\otimes s^{k+1} t^{-l}) v_{\\Lambda}.\n\t\\end{split}\n\\end{equation*} \nWe have\n\\[\n\t(x_{\\alpha} \\otimes s) (x_{-\\alpha} \\otimes s^{k}t^{-l}) v_{\\Lambda} \\in (x_{\\alpha} \\otimes s) \\sum_{m=0}^{N(\\alpha + l\\delta)} (x_{-\\alpha} \\otimes s^{m} t^{-l}) A(\\Lambda) v_{\\Lambda}\n\\]\nby (i).\nThe right-hand side is equal to\n\\[\n\t\\sum_{m=0}^{N(\\alpha + l\\delta)} (h_{\\alpha} \\otimes s^{m+1} t^{-l} + s^m t^{-l} ds ) A(\\Lambda) v_{\\Lambda} = \\sum_{m=1}^{N(\\alpha + l\\delta)+1} (h_{\\alpha} \\otimes s^{m} t^{-l} + c(m,-l) ) A(\\Lambda) v_{\\Lambda}.\n\\]\nWe have\n\\[\n\t(h_{\\alpha} \\otimes s^{k+1} t^{-l}) v_{\\Lambda} \\in \\sum_{m=0}^{N(l\\delta)} (h_{\\alpha} \\otimes s^{m} t^{-l}) A(\\Lambda) v_{\\Lambda}\n\\]\nagain by (i).\nHence we conclude that\n\\[\n\t(s^{k} t^{-l} ds) v_{\\Lambda} \\in \\sum_{m=1}^{N_l} c(m,-l) A(\\Lambda) v_{\\Lambda} + \\sum_{m=0}^{N_l} \\left( \\left( \\aff^{(t)} \\right)_{-l\\delta} \\otimes s^m\\right) A(\\Lambda) v_{\\Lambda}\n\\]\nif we put $N_l = \\max(N(l\\delta),N(\\alpha+l\\delta)+1)$.\n\\end{proof}\n\nThe following proposition is an analog of \\cite[Proposition~1.2]{MR1850556} for the case of the affine Lie algebra and of \\cite[Proposition~3.2 and Corollary~3.1]{MR2017585} for the quotient of $\\tor$ modulo the elements $c(k,l)$ with $l \\neq 0$ (cf.\\ Remark \\ref{rem:CL}).\n\n\\begin{prop}\\label{prop:span}\nFor each positive root $\\beta_j \\in \\affDelta^+$ and each positive 
integer $l >0$, there exist nonnegative integers $N(\\beta_j)$ and $N_l$ such that the weight space $\\glob(\\Lambda)_{\\Lambda-\\beta}$ for $\\beta \\in \\affQ^+$ is spanned by elements of the form\n\\begin{equation}\n\t(X_{-\\beta_1} \\otimes s^{k_1}) \\cdots (X_{-\\beta_a} \\otimes s^{k_a}) \\left( \\prod_{j=1}^{b} c(m_j,-l_j) \\right) A(\\Lambda) v_{\\Lambda}, \\label{eq:span}\n\\end{equation}\nwhere each $X_{-\\beta_{j}}$ is a root vector of $\\affnbar^{(t)}$ corresponding to a negative root $-\\beta_j$ and each $l_j > 0$ is a positive integer satisfying $\\beta = \\sum_{j=1}^a \\beta_j + \\left(\\sum_{j=1}^b l_j \\right) \\delta$ and $0 \\leq k_j \\leq N(\\beta_j)$, $1 \\leq m_j \\leq N_{l_j}$.\nA similar statement also holds for $\\glob^+(\\Lambda)_{\\Lambda-\\beta}$. \n\\end{prop}\n\n\\begin{proof}\nBy the PBW theorem, we see that $\\glob(\\Lambda)_{\\Lambda-\\beta}$ is spanned by elements of the form (\\ref{eq:span}) without any conditions on $k_j$ and $m_j$.\nThen we use Lemma~\\ref{lem:single} to show the assertion by induction on $a+b$. 
\n\\end{proof}\n\nProposition~\\ref{prop:weight} thus follows from Proposition~\\ref{prop:span}.\nWe also have the following.\n\n\\begin{prop}\\label{prop:character}\nLet $\\mathbf{a}$ be a maximal ideal of $A(\\Lambda)$ and regard it also as a maximal ideal of $A^{+}(\\Lambda)$.\nThen we have $\\ch_p \\loc^+(\\Lambda,\\mathbf{a}) \\geq \\ch_p \\loc(\\Lambda,\\mathbf{a})$.\n\\end{prop}\n\\begin{proof}\nWe have a $\\tor^+$-homomorphism $\\loc^+(\\Lambda,\\mathbf{a}) \\to \\Res \\loc(\\Lambda,\\mathbf{a})$ given by $v_{\\Lambda,\\mathbf{a}}^+ \\mapsto v_{\\Lambda,\\mathbf{a}}$.\nIt is surjective by Proposition~\\ref{prop:weight} (iii).\n\\end{proof}\n\n\\subsection{Upper bound for the level one Weyl module}\n\nIn this subsection, we consider the case $\\Lambda=\\Lambda_0$.\nThe ring $A(\\Lambda_0)$ is identified with $\\bbC[z^{\\pm 1}]$ and the action on $\\glob(\\Lambda_0)$ is given by \n\\[\n\tz^k (X v_{\\Lambda_0}) = X (h_{0,k} v_{\\Lambda_0})\n\\]\nfor $X \\in U(\\tor')$.\nThis identification induces $A^+(\\Lambda_0) = \\bbC[z]$.\n\n\\begin{lem}\\label{lem:h_{i,k}}\nWe have $h_{i,k} v_{\\Lambda_0} = 0$ for $i \\in I$ and $k \\in \\bbZ$.\n\\end{lem}\n\n\\begin{proof}\nThe defining relations $e_{i,k} v_{\\Lambda_0}=0$ and $f_i v_{\\Lambda_0} = 0$ for $i \\in I$ imply the assertion. 
\n\\end{proof}\n\nRecall that $\\sum_{i \\in \\affI} h_{i,k} = s^k t^{-1} dt$.\nBy Lemma~\\ref{lem:h_{i,k}}, we see that the action of $A(\\Lambda_0)$ on $\\glob(\\Lambda_0)$ is given by $z^k \\mapsto s^k t^{-1} dt$.\nIn particular, $z$ acts by $c(1,0)=st^{-1}dt$.\n\nWe have defined the local Weyl modules $\\loc(\\Lambda_0,a)$ for $a \\in \\bbC^{\\times}$ and $\\loc^+(\\Lambda_0,a)$ for $a \\in \\bbC$ by\n\\[\n\t\\loc(\\Lambda_0,a) = \\glob(\\Lambda_0) \\otimes_{A(\\Lambda_0)} \\bbC_a, \\quad \\loc^+(\\Lambda_0,a) = \\glob^+(\\Lambda_0) \\otimes_{A^+(\\Lambda_0)} \\bbC_a.\n\\]\n\\begin{prop}\\label{prop:independent}\nThe $p$-character $\\ch_p \\loc^+(\\Lambda_0,a)$ is independent of $a \\in \\bbC$.\n\\end{prop}\n\n\\begin{proof}\nThe defining relations of $\\loc^+(\\Lambda_0,a)$ are given by\n\\begin{gather*}\n\t(\\torn \\cap \\tor^+) v_{\\Lambda_0,a}^+ = 0,\\quad h_{i,k} v_{\\Lambda_0,a}^+ = \\delta_{i,0} a^k v_{\\Lambda_0,a}^+ \\ (i \\in \\affI, k \\geq 0), \\quad d_t v_{\\Lambda_0,a}^+ = 0,\\\\\n\tf_0^2 v_{\\Lambda_0,a}^+ = 0,\\quad f_i v_{\\Lambda_0,a}^+ = 0 \\ (i \\in I). \n\\end{gather*}\nHence we have $\\tau_a^*\\loc^+(\\Lambda_0,0) \\cong \\loc^+(\\Lambda_0,a)$, where $\\tau_a$ is the automorphism of $\\tor^+$ defined in Section~\\ref{subsection:auto}.\nThis proves the assertion. \n\\end{proof}\n\nWe put \n\\[\n\tW(\\Lambda_0)=\\loc^+(\\Lambda_0,0) = \\glob^+(\\Lambda_0) \\otimes_{A^+(\\Lambda_0)} \\bbC_0\n\\]\nand denote its highest weight vector $v_{{\\Lambda_0},0}^+$ by $v_0$.\nThis $W(\\Lambda_0)$ is regarded as a graded $\\tor^+$-module by setting $\\deg v_0 = 0$. 
\n\n\\begin{lem}\\label{lem:f}\nWe have $f_{i,k} v_0 = 0$ for any $i \\in \\affI$ and $k \\geq 1$.\n\\end{lem}\n\n\\begin{proof}\nThe assertion for $i \\in I$ follows from $f_i v_0 =0$ and $h_{i,k} v_0 =0$.\nThe assertion for $i = 0$ follows from\n\\[\n\t0 = e_{0,k} f_0^2 v_0 = [e_{0,k}, f_0^2] v_0 = (-2f_{0,k} + 2 f_0 h_{0,k}) v_0 \n\\]\nand $h_{0,k} v_0 =0$ for $k \\geq 1$.\n\\end{proof}\n\n\\begin{lem}\\label{lem:key}\nLet $k \\geq 1$.\nWe have\n\\begin{enumerate}\n\\item\n\\[\n\t(e_{\\theta} \\otimes s^k t^{-l}) v_0 = \\begin{cases}\n\t\t0 & \\text{if } l \\leq k,\\\\\n\t\t\\displaystyle\\sum_{m=1}^{l-k} c(k,-l+m) (e_{\\theta} \\otimes t^{-m}) v_0 & \\text{if } l > k,\n\t\\end{cases}\n\\]\n\\item\n\\[\n\t(s^k t^{-l} ds) v_0 = \\begin{cases}\n\t\t0 & \\text{if } l \\leq k,\\\\\n\t\t\\displaystyle\\sum_{m=1}^{l-k} c(k,-l+m) (t^{-m}ds) v_0 & \\text{if } l > k.\n\t\\end{cases}\n\\]\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\nWe prove the assertions (i) and (ii) by induction on $l$.\n\nFor $l \\leq 0$, $e_{\\theta} \\otimes s^k t^{-l}$ is an element of $\\torn \\cap \\tor^+$, hence it kills $v_0$.\nFor $l = 1$, $e_{\\theta} \\otimes s^k t^{-1} = f_{0,k}$ kills $v_0$ by Lemma~\\ref{lem:f}.\nThen we have\n\\begin{equation*}\n\t\\begin{split}\n\t\t(s^k t^{-l} ds)v_0 = \\left( [f_{\\theta} \\otimes s, e_{\\theta} \\otimes s^k t^{-l}] - [f_{\\theta}, e_{\\theta} \\otimes s^{k+1}t^{-l}] \\right) v_0 =0\n\t\\end{split}\n\\end{equation*}\nfor $l \\leq 1$.\nWe thus have proved (i) and (ii) for $l \\leq 1$.\n\nLet $l \\geq 2$.\nWe assume the assertions (i) and (ii) for all $l' < l$.\nBy Lemma~\\ref{lem:induction}, we have\n\\begin{equation}\n\t\\begin{split}\n\t\t(e_{\\theta} \\otimes s^k t^{-l}) v_0 &= T_0 T_{\\theta} \\left( (e_{\\theta} \\otimes s^k t^{-l+2}) T_{\\theta}^{-1} T_0^{-1} v_0 \\right) \\\\\n\t\t&= T_0 T_{\\theta} \\left( (e_{\\theta} \\otimes s^k t^{-l+2}) T_{\\theta}^{-1} (f_0 v_0) \\right) \\\\\n\t\t&= T_0 T_{\\theta} \\left( (e_{\\theta} \\otimes s^k 
t^{-l+2}) T_{\\theta}^{-1} (f_0) v_0 \\right) \\\\\n\t\t&= T_0 T_{\\theta} \\left( T_{\\theta}^{-1}(f_0)(e_{\\theta} \\otimes s^k t^{-l+2}) v_0 + [e_{\\theta} \\otimes s^k t^{-l+2}, T_{\\theta}^{-1} (f_0)] v_0 \\right). \\label{eq:induction}\n\t\\end{split}\n\\end{equation}\nWe have\n\\begin{equation*}\n\t\\begin{split}\n\t\t[e_{\\theta} \\otimes s^k t^{-l+2}, T_{\\theta}^{-1} (f_0)] &= [e_{\\theta} \\otimes s^k t^{-l+2}, -f_{\\theta} \\otimes t^{-1}] \\\\\n\t\t&=- \\left( [e_{\\theta} \\otimes s^k t^{-l+1}, f_{\\theta}] + c(k,-l+1) \\right) \\\\\n\t\t&= [f_{\\theta}, e_{\\theta} \\otimes s^k t^{-l+1}] - c(k,-l+1).\n\t\\end{split}\n\\end{equation*}\nPut\n\\[\n\tA= T_{\\theta}^{-1}(f_0)(e_{\\theta} \\otimes s^k t^{-l+2}) v_0, \\quad B= f_{\\theta}(e_{\\theta} \\otimes s^k t^{-l+1}) v_0. \n\\]\nThen (\\ref{eq:induction}) is equal to $T_0 T_{\\theta}(A+B-c(k,-l+1)v_0)$.\nBy the induction assumption, we have\n\\[\n\tA= T_{\\theta}^{-1}(f_0) \\sum_{m=1}^{l-2-k} c(k,-l+2+m) (e_{\\theta} \\otimes t^{-m}) v_0,\n\\]\n\\begin{equation*}\n\t\\begin{split}\n\t\tB= f_{\\theta} \\sum_{m=1}^{l-1-k} c(k,-l+1+m) (e_{\\theta} \\otimes t^{-m}) v_0 = f_{\\theta} \\sum_{m=0}^{l-2-k} c(k,-l+2+m) (e_{\\theta} \\otimes t^{-m-1}) v_0.\n\t\\end{split}\n\\end{equation*}\nThen (\\ref{eq:induction}) is equal to\n\\begin{multline}\n\t\tT_0 T_{\\theta} \\Bigg( \\sum_{m=1}^{l-2-k} c(k,-l+2+m) \\Big( T_{\\theta}^{-1}(f_0) (e_{\\theta} \\otimes t^{-m}) + f_{\\theta} (e_{\\theta} \\otimes t^{-m-1}) \\Big) v_0 \\\\\n\t\t+ c(k,-l+2) f_{\\theta} (e_{\\theta} \\otimes t^{-1}) v_0 - c(k,-l+1) v_0 \\Bigg) \\label{eq:induction2}\n\\end{multline}\nif $l \\geq k+2$ and to $T_0 T_{\\theta}(- c(k,-l+1) v_0)$ if $l \\leq k+1$.\n\nWe prove (i) for $l$.\nFirst consider the case $l \\leq k$.\nIn this case, we have\n\\begin{equation*}\n\t(e_{\\theta} \\otimes s^k t^{-l}) v_0 = T_0 T_{\\theta}(- c(k,-l+1) v_0) = \\dfrac{k}{-l+1} T_0 T_{\\theta}( (s^{k-1} t^{-(l-1)} ds) v_0) = 0\n\\end{equation*}\nby the induction 
assumption.\nHence (i) holds for $l$.\nNext consider the case $l = k+1$.\nIn this case, we have\n\\begin{equation*}\n\t(e_{\\theta} \\otimes s^k t^{-l}) v_0 = T_0 T_{\\theta}(- c(k,-l+1) v_0) = - c(k,-l+1) T_0 T_{\\theta}(v_0).\n\\end{equation*}\nSince we have $T_0T_{\\theta} (v_0)=-f_0 v_0 = -(e_{\\theta} \\otimes t^{-1})v_0$, (i) holds for $l=k+1$.\nFinally consider the case $l \\geq k+2$.\nThe equality (\\ref{eq:induction}) is valid even for $k=0$ and hence we have\n\\[\n\t(e_{\\theta} \\otimes t^{-m-2}) v_0 = T_0 T_{\\theta} \\Bigg( \\Big( T_{\\theta}^{-1} (f_0) (e_{\\theta} \\otimes t^{-m}) + f_{\\theta} (e_{\\theta} \\otimes t^{-m-1}) \\Big) v_0 \\Bigg)\n\\]\nfor each $m$.\nThis implies that (\\ref{eq:induction2}) is equal to\n\\begin{multline*}\n\t\t\\sum_{m=1}^{l-2-k} c(k,-l+2+m) (e_{\\theta} \\otimes t^{-m-2}) v_0\\\\\n\t\t+ c(k,-l+2) T_0 T_{\\theta} ( f_{\\theta} (e_{\\theta} \\otimes t^{-1}) v_0) + c(k,-l+1) (e_{\\theta} \\otimes t^{-1}) v_0.\n\\end{multline*}\nSince we can easily show $T_0 T_{\\theta} ( f_{\\theta} (e_{\\theta} \\otimes t^{-1}) v_0) = (e_{\\theta} \\otimes t^{-2})v_0$, (i) is proved for $l$.\n\nWe prove (ii) for $l$.\nBy (i), we have\n\\begin{equation*}\n\t\\begin{split}\n\t\t&(s^k t^{-l} ds)v_0 = \\left( [f_{\\theta} \\otimes s, e_{\\theta} \\otimes s^k t^{-l}] - [f_{\\theta}, e_{\\theta} \\otimes s^{k+1}t^{-l}] \\right) v_0\\\\\n\t\t&= (f_{\\theta} \\otimes s) \\sum_{m=1}^{l-k} c(k,-l+m) (e_{\\theta} \\otimes t^{-m}) v_0 - f_{\\theta} \\sum_{n=1}^{l-(k+1)} c(k+1,-l+n) (e_{\\theta} \\otimes t^{-n}) v_0 \n\t\\end{split}\n\\end{equation*}\nif $l > k$ and $(s^k t^{-l} ds)v_0 = 0$ otherwise.\nTherefore we may assume $l > k$. 
\nWe have\n\\begin{equation*}\n\t\\begin{split}\n\t\t(f_{\\theta} \\otimes s) (e_{\\theta} \\otimes t^{-m}) v_0 &= [f_{\\theta} \\otimes s,e_{\\theta} \\otimes t^{-m}]v_0 \\\\\n\t\t&= \\left( [f_{\\theta}, e_{\\theta} \\otimes s t^{-m}] + t^{-m}ds \\right) v_0 \\\\\n\t\t&= f_{\\theta} (e_{\\theta} \\otimes s t^{-m}) v_0 + (t^{-m}ds) v_0 \\\\\n\t\t&= f_{\\theta} \\sum_{n=1}^{m-1} c(1,-m+n)(e_{\\theta} \\otimes t^{-n}) v_0 + (t^{-m}ds) v_0.\n\t\\end{split}\n\\end{equation*}\nWe claim that\n\\[\n\t\\sum_{m=1}^{l-k} c(k,-l+m) \\sum_{n=1}^{m-1} c(1,-m+n)(e_{\\theta} \\otimes t^{-n}) v_0 = \\sum_{n=1}^{l-(k+1)} c(k+1,-l+n)(e_{\\theta} \\otimes t^{-n}) v_0\n\\]\nholds.\nIndeed, this equality is obtained by applying $h_{\\theta} \\otimes s$ to both sides of (i).\nHence we conclude\n\\begin{equation*}\n\t\\begin{split}\n\t\t(s^k t^{-l}ds) v_0 &= \\sum_{m=1}^{l-k} c(k,-l+m) \\Bigg( f_{\\theta} \\sum_{n=1}^{m-1} c(1,-m+n)(e_{\\theta} \\otimes t^{-n}) v_0 + (t^{-m}ds) v_0 \\Bigg)\\\\\n\t\t&\\qquad - f_{\\theta} \\sum_{n=1}^{l-(k+1)} c(k+1,-l+n) (e_{\\theta} \\otimes t^{-n}) v_0 \\\\\n\t\t&= \\sum_{m=1}^{l-k} c(k,-l+m) (t^{-m}ds) v_0.\n\t\\end{split}\n\\end{equation*}\n\\end{proof}\n\nWe define $\\bar{C}$ to be the subalgebra of $U(\\tor^+)$ generated by the elements $c(k,-l)$ ($k \\geq 1$, $l \\geq 1$).\nLet $\\bar{C}_1$ be the subalgebra of $\\bar{C}$ generated by $c(1,-l)$ ($l \\geq 1$).\n\n\\begin{lem}\\label{lem:degree_one}\nWe have $\\bar{C} v_0 = \\bar{C}_1 v_0$. 
\n\\end{lem}\n\n\\begin{proof}\nSuppose $k \\geq 1$ and $l \\geq 1$.\nWe rewrite Lemma~\\ref{lem:key} (ii) as\n\\[\n\t(s^{k} t^{-l} ds) v_0 = \\begin{cases}\n\t\t0 & \\text{if } l \\leq k,\\\\\n\t\t\\displaystyle\\sum_{m=1}^{l-k} \\dfrac{k}{l-m} (s^{k-1} t^{-l+m} ds) (t^{-m}ds) v_0 & \\text{if } l > k.\n\t\\end{cases}\n\\]\nThis implies that the action of $c(k+1,-l) = ((k+1)\/l) s^{k}t^{-l} ds$ on $v_0$ is written in terms of a polynomial in $c(1,-m) = (1\/m)t^{-m} ds$ with $m \\geq 1$.\n\\end{proof}\n\n\\begin{lem}\\label{lem:key2}\nWe have\n\\[\n\t\\left(\\affnbar^{(t)} \\otimes s\\bbC[s]\\right) v_0 \\subset \\bar{C}_1 U(\\affnbar^{(t)}) v_0.\n\\]\n\\end{lem}\n\n\\begin{proof}\nNote that we have\n\\begin{equation*}\n\t\\affnbar^{(t)} \\otimes s^k = \\bigoplus_{\\substack{\\alpha \\in \\Delta^+ \\cup \\{0\\}\\\\ l \\geq 1}} \\frg_{\\alpha} \\otimes s^k t^{-l} \\oplus \\bigoplus_{\\substack{\\alpha \\in \\Delta^- \\\\ l \\geq 0}} \\frg_{\\alpha} \\otimes s^k t^{-l}.\n\\end{equation*}\nSuppose $k \\geq 1$.\nWe show\n\\begin{equation}\n\t(x \\otimes s^k t^{-l}) v_0 \\in \\bar{C}_1 U(\\affnbar^{(t)}) v_0 \\label{eq:contain}\n\\end{equation}\nfor\n\\begin{itemize}\n\\item\n$x \\in \\frg_{\\alpha}$ ($\\alpha \\in \\Delta^+ \\cup \\{0\\}$) and $l \\geq 1$;\n\n\\item\n$x \\in \\frg_{\\alpha}$ ($\\alpha \\in \\Delta^-$) and $l \\geq 0$.\n\\end{itemize}\nLemma~\\ref{lem:key} (i) and \\ref{lem:degree_one} imply (\\ref{eq:contain}) for $x=e_{\\theta}$ and $l \\geq 1$.\nThen we obtain (\\ref{eq:contain}) for $x \\in \\frg_{\\alpha}$ ($\\alpha \\in \\Delta^+$) and $l \\geq 1$ by successively applying $f_i$'s ($i \\in I$) to $(e_{\\theta} \\otimes s^k t^{-l}) v_0$.\nWe obtain (\\ref{eq:contain}) for $x = h_i$ ($i \\in I$) and $l \\geq 1$ by applying $f_i$ to $(e_{i} \\otimes s^k t^{-l}) v_0$.\nWe show (\\ref{eq:contain}) for $x \\in \\frg_{\\alpha}$ ($\\alpha \\in \\Delta^-$) and $l \\geq 0$.\nThe case $l=0$ is immediate from Lemma~\\ref{lem:f}.\nAssume $l \\geq 1$.\nWe use 
$[h_{\\alpha} \\otimes s^k t^{-l}, x] = 2 x \\otimes s^k t^{-l}$ and $x v_0 = 0$ to deduce\n\\[\n\t(x \\otimes s^k t^{-l}) v_0 = -\\dfrac{1}{2} x(h_{\\alpha} \\otimes s^k t^{-l}) v_0 \\in x \\bar{C}_1 U(\\affnbar^{(t)}) v_0 \\subset \\bar{C}_1 U(\\affnbar^{(t)}) v_0.\n\\]\n\\end{proof}\n\n\\begin{prop}\\label{prop:upper_bound}\nWe have\n\\[\n\tW(\\Lambda_0) = \\bar{C}_1 U(\\affnbar^{(t)}) v_0.\n\\]\nIn particular, we have an inequality\n\\[\n\t\\ch_{p,q} W(\\Lambda_0) \\leq \\ch_p L(\\Lambda_0) \\displaystyle\\prod_{n > 0} \\dfrac{1}{1-p^n q}.\n\\]\n\\end{prop}\n\n\\begin{proof}\nLet $N$ be the $\\bbC$-span of monomials in $\\affnbar^{(t)} \\otimes s\\bbC[s]$.\nThen the PBW theorem and Lemma~\\ref{lem:degree_one} imply\n\\[\n\tW(\\Lambda_0) = U(\\tornbar \\cap \\tor^+)v_0 = \\bar{C}_1 U(\\affnbar^{(t)}) N v_0.\n\\]\nSince $\\affnbar^{(t)} \\otimes s\\bbC[s]$ is $\\ad \\affnbar^{(t)}$-invariant modulo central elements, we prove the assertion by Lemma~\\ref{lem:key2} and \\ref{lem:degree_one}. 
\n\\end{proof}\n\n\\begin{rem}\nWe will show in Corollary~\\ref{cor:character} that the equality\n\\[\n\t\\ch_{p,q} W(\\Lambda_0) = \\ch_p L(\\Lambda_0) \\displaystyle\\prod_{n > 0} \\dfrac{1}{1-p^n q}\n\\]\nholds.\n\\end{rem}\n\n\\begin{rem}\nBy Propositions~\\ref{prop:character}, \\ref{prop:independent} and \\ref{prop:upper_bound}, we have an inequality\n\\[\n\t\\ch_{p} \\loc(\\Lambda_0,a) \\leq \\ch_p L(\\Lambda_0) \\displaystyle\\prod_{n > 0} \\dfrac{1}{1-p^n}.\n\\]\nWe will show in Corollary~\\ref{cor:character} that the equality holds.\nIn fact, we can directly prove this inequality for $\\ch_{p} \\loc(\\Lambda_0,a)$ by a similar calculation for $\\loc(\\Lambda_0,a)$ instead of $W(\\Lambda_0)$.\nMore precisely, we can show $\\loc(\\Lambda_0,a) = \\bar{C}_1 U(\\affnbar^{(t)}) v_{\\Lambda_0,a}$.\nMoreover, we can show that\n\\[\n\t\\loc(\\Lambda_0,a) = \\bar{C}_0 U(\\affnbar^{(t)}) v_{\\Lambda_0,a}\n\\]\nalso holds, where $\\bar{C}_0$ is the subalgebra of $U(\\tor')$ generated by $c(0,-l)$ ($l \\geq 1$).\n\nHere we have given the calculation for $W(\\Lambda_0)$ for two reasons:\n\\begin{enumerate}\n\\item\nwe are interested in the $(p,q)$-characters of the graded local Weyl modules for $\\tor^+$;\n\n\\item\nthe calculation for $W(\\Lambda_0)$ is easier than that for $\\loc(\\Lambda_0,a)$.\n\\end{enumerate}\n\\end{rem}\n\n\\section{Vertex operator construction and Weyl modules}\\label{section:Vertex operator construction}\n\n\\subsection{Heisenberg Lie algebras}\\label{subsection:Heisenberg}\n\nWe assume that $\\frg$ is of type ADE in Section~\\ref{subsection:Heisenberg} and \\ref{subsection:vertex}.\nRecall that $\\affQ = \\bigoplus_{i \\in \\affI} \\bbZ \\alpha_i$ is the root lattice of $\\aff^{(t)}$.\nWe fix a bimultiplicative 2-cocycle $\\ve \\colon \\affQ \\times \\affQ \\to \\{\\pm 1\\}$ satisfying\n\\[\n\t\\ve(\\alpha,\\alpha) = (-1)^{(\\alpha,\\alpha)\/2}, \\quad \\ve(\\alpha,\\beta)\\ve(\\beta,\\alpha) = (-1)^{(\\alpha,\\beta)}, \\quad 
\\ve(\\alpha,\\delta)=1\n\\]\nas in \\cite[Section~4]{MR1066569}.\nLet $\\bbC[\\affQ]$ be the group algebra of $\\affQ$ with a $\\bbC$-basis denoted by $e^{\\alpha}$ ($\\alpha \\in \\affQ$).\nWe make $\\bbC[\\affQ]$ into a $\\bbC[\\affQ]$-module via $\\ve$, that is, we define $e^{\\alpha} \\cdot e^{\\beta} = \\ve(\\alpha,\\beta)e^{\\alpha+\\beta}$. \nWe denote by $\\bbC_{\\ve}[\\affQ]$ this module.\nWe define an action of $h \\in \\affh^{(t)}$ on $\\bbC_{\\ve}[\\affQ]$ by $h \\cdot e^{\\alpha} = \\langle h, \\alpha \\rangle e^{\\alpha}$.\n\nThe toroidal Lie algebra $\\tor$ contains a Heisenberg Lie algebra \n\\[\n\t\\calH = \\displaystyle\\bigoplus_{\\substack{i \\in \\affI\\\\k \\neq 0}} \\bbC h_{i,k} \\oplus \\bbC c_s.\n\\]\nDefine the Fock representation $\\affF$ of $\\calH$ by\n\\[\n\t\\affF = U(\\calH) \/ \\sum_{\\substack{i \\in \\affI\\\\ k >0}}U(\\calH) h_{i,k} + U(\\calH)(c_s-1).\n\\]\nWe set\n\\[\n\t\\bbV(0) = \\affF \\otimes \\bbC_{\\ve}[\\affQ].\n\\]\nDefine the degree on $\\bbV(0)$ by $\\deg e^{\\alpha}= (\\alpha,\\alpha)\/2$ and $\\deg h_{i,k}=k$.\nThen we regard $\\bbV(0)$ as a module of $\\torh = \\calH \\oplus \\affh^{(t)} \\oplus \\bbC d_s$ via the actions of $\\calH$ and $\\affh^{(t)}$ on $\\affF$ and $\\bbC_{\\ve}[\\affQ]$ respectively, and so that $d_s$ counts the degree.\n\nSimilarly we define $\\mathcal{F}$ to be the Fock representation for a Heisenberg Lie subalgebra\n\\[\n\t\\displaystyle\\bigoplus_{\\substack{i \\in I\\\\k \\neq 0}} \\bbC h_{i,k} \\oplus \\bbC c_s\n\\]\nof $\\aff^{(s)}$.\n\n\\subsection{Vertex representations}\\label{subsection:vertex}\n\nFor each $\\alpha \\in \\affDelta$, we set\n\\[\n\tX(\\alpha,u) = u^{(\\alpha,\\alpha)\/2} \\left( e^{\\alpha} u^{h_{\\alpha}} \\right) \\exp\\left( \\sum_{k>0} \\dfrac{h_{\\alpha} \\otimes s^{-k}}{k} u^{k} \\right) \\exp\\left( -\\sum_{k>0} \\dfrac{h_{\\alpha} \\otimes s^{k}}{k} u^{-k} \\right)\n\\]\nas an element of $( \\End_{\\bbC} \\bbV(0) )[[u^{\\pm1}]]$.\nHere $u^{h_{\\alpha}}$ acts 
by\n\\[\n\tu^{h_{\\alpha}} \\cdot e^{\\beta} = u^{(\\alpha,\\beta)} e^{\\beta}.\n\\]\nDefine $X_{k}(\\alpha)$ by the expansion\n\\[\n\tX(\\alpha,u) = \\sum_{k \\in \\bbZ} X_k(\\alpha) u^{-k}.\n\\]\n\n\\begin{thm}[\\cite{MR1066569} Proposition~4.3]\\label{thm:MEY}\nWe can extend the action of $\\torh = \\calH \\oplus \\affh^{(t)} \\oplus \\bbC d_s$ to $\\tor$ on $\\bbV(0)$ by\n\\[\n\te_{i,k} \\mapsto X_{k}(\\alpha_i), \\quad f_{i,k} \\mapsto X_{k}(-\\alpha_i).\n\\]\n\\end{thm}\n\nWe denote by $\\tau$ the action of $c(0,1)$ on $\\bbV(0)$.\nThen by \\cite[(4.1) and Proposition~5.3 (ii)]{MR1066569}, the action of $c(0,k)$ for $k \\neq 0$ is given by $\\tau^k$.\nThe subalgebra of $\\End_{\\bbC} \\bbV(0)$ generated by $\\tau^k$ ($k \\in \\bbZ$) is isomorphic to the Laurent polynomial algebra $\\bbC[\\tau^{\\pm 1}]$. \n\nWe denote by $\\delta(k)$ the action of $c(k,0)$ on $\\bbV(0)$ for $k<0$.\nThey freely generate a polynomial subalgebra of $\\End_{\\bbC} \\bbV(0)$ and we denote it by $D$. \nWe have an isomorphism of $\\bbC$-vector spaces \n\\[\n\t\\affF \\cong \\mathcal{F} \\otimes D.\n\\]\n\n\\begin{prop}[\\cite{MR1066569} Lemma~5.6]\\label{prop:freeness_vertex_rep}\nThe multiplication map gives an isomorphism\n\\[\n\t\\bbV(0) \\cong \\mathcal{F} \\otimes \\bbC_{\\ve}[Q] \\otimes D \\otimes \\bbC[\\tau^{\\pm 1}]\n\\]\nof $\\bbC$-vector spaces.\nIn particular, $\\bbV(0)$ is free over $\\bbC[\\tau^{\\pm 1}]$.\n\\end{prop}\n\nThe $\\aff^{(s)}$-submodule $\\mathcal{F} \\otimes \\bbC_{\\ve}[Q]$ is known to be isomorphic to the level one integrable irreducible $\\aff^{(s)}$-module $L(\\Lambda_0)^{(s)}$ with highest weight $\\Lambda_0$ by Frenkel-Kac \\cite{MR595581}. 
\nHence it has the following defining relations:\n\\begin{gather}\n\t(f_{\\theta} \\otimes s) (1 \\otimes e^0) = 0,\\quad e_i (1 \\otimes e^0) = 0 \\ (i \\in I), \\label{eq:Frenkel-Kac1}\\\\\n\tc_s (1 \\otimes e^0) = 1 \\otimes e^0,\\quad h_i (1 \\otimes e^0) = 0 \\ (i \\in I),\\quad d_s (1 \\otimes e^0) = 0,\\label{eq:Frenkel-Kac2}\\\\\n\t(e_{\\theta} \\otimes s^{-1})^2 (1 \\otimes e^0) = 0,\\quad f_i (1 \\otimes e^0) = 0 \\ (i \\in I).\\label{eq:Frenkel-Kac3}\n\\end{gather}\nWe will determine the defining relations of $\\bbV(0)$ as a $\\tor$-module as a main result of this article.\n\n\\subsection{General construction}\n\nWe review the construction of $\\tor$-modules given by Iohara-Saito-Wakimoto~\\cite{MR1688100} and Eswara Rao~\\cite{MR3076215}.\nAssume that $\\frg$ is an arbitrary simple Lie algebra.\nLet $D$ be the polynomial algebra generated by the elements $\\delta(k)$ ($k < 0$). \nFor a given smooth $\\aff^{(s)}$-module $M$, we will define a $\\tor$-module structure on\n\\[\n\tM \\otimes D \\otimes \\bbC[\\tau^{\\pm 1}]\n\\]\nas follows.\nFor an element $x$ of $\\frg$, we put $x(u) = \\sum_{k \\in \\bbZ} (x \\otimes s^k) u^{-k}$.\nDefine a formal series $\\Delta_l(u)$ for each $l \\in \\bbZ$ by\n\\[\n\t\\Delta_l(u) = \\exp \\left( \\sum_{k > 0} \\dfrac{l \\delta(-k)}{k} u^{k} \\right).\n\\]\nWe make $D$ into a graded algebra by $\\deg \\delta(k) = k$ and let $d^{(D)}$ be the operator which counts the degree on $D$.\nWe make $\\bbC[\\tau^{\\pm 1}]$ into a graded algebra by $\\deg \\tau = 1$ and let $d^{(\\tau)}$ be the operator which counts the degree on $\\bbC[\\tau^{\\pm 1}]$.\n\n\\begin{thm}[\\cite{MR1688100} Lemma~2.1, \\cite{MR3076215} Theorem~4.1]\\label{thm:ISW-E}\nLet $M$ be a smooth $\\aff^{(s)}$-module.\nThe assignment\n\\[\n\t\\sum_{k \\in \\bbZ} (x \\otimes s^k t^l) u^{-k} \\mapsto x(u) \\otimes \\Delta_l(u) \\otimes \\tau^l\n\\]\nfor $x \\in \\frg,$\n\\[\n\t\\sum_{k \\in \\bbZ} (s^{k-1} t^l ds) u^{-k} \\mapsto c_s \\otimes \\Delta_l(u) 
\\otimes \\tau^l, \\quad \n\ts^{k} t^{-1} dt \\mapsto \\begin{cases}\n\t\t\\id \\otimes \\delta(k) \\otimes \\id & \\text{ if } k < 0,\\\\\n\t\t0 & \\text{ if } k \\geq 0,\n\t\\end{cases}\n\\]\n\\[\n\td_s \\mapsto d_s \\otimes \\id \\otimes \\id + \\id \\otimes d^{(D)} \\otimes \\id, \\quad d_t \\mapsto \\id \\otimes \\id \\otimes d^{(\\tau)}\n\\]\ngives a $\\tor$-module structure on $M \\otimes D \\otimes \\bbC[\\tau^{\\pm 1}]$.\n\\end{thm}\n\n\\begin{rem}\nLet us give a remark on the results of \\cite{MR1688100} and \\cite{MR3076215} stated above.\nIn \\cite{MR1688100}, the authors consider a Lie algebra bigger than $\\tor$ and the module they construct is bigger than $M \\otimes D \\otimes \\bbC[\\tau^{\\pm 1}]$.\nIf one restricts the action to $\\tor$, one can take $M \\otimes D \\otimes \\bbC[\\tau^{\\pm 1}]$ as a $\\tor$-submodule.\nMoreover, although they assume that $\\frg$ is of type ADE in \\cite{MR1688100}, the construction does not need the assumption.\nThis construction of $\\tor$-modules was later generalized to some Lie superalgebras in \\cite{MR3076215}. \n\\end{rem}\n\nTake $M$ as the level one integrable irreducible $\\aff^{(s)}$-module $L(\\Lambda_0)^{(s)}$ with highest weight $\\Lambda_0$ and set\n\\[\n\t\\bbV(0) = L(\\Lambda_0)^{(s)} \\otimes D \\otimes \\bbC[\\tau^{\\pm 1}].\n\\]\nThis definition is compatible with the constructions given in Sections~\\ref{subsection:Heisenberg} and \\ref{subsection:vertex} if $\\frg$ is of type ADE.\nIndeed, the definition of the vertex operator $X(\\alpha,u)$ implies that\n\\[\n\tX(\\beta+l\\delta,u) = \\begin{cases}\n\t\tX(\\beta,u) \\otimes \\Delta_l(u) \\otimes \\tau^l & \\text{if } \\beta \\in \\Delta,\\\\\n\t\t\\id \\otimes \\Delta_l(u) \\otimes \\tau^l & \\text{if } \\beta = 0,\n\t\\end{cases}\n\\]\nwhen we write $\\alpha \\in \\affDelta$ as $\\alpha = \\beta + l\\delta$ with $\\beta \\in \\Delta \\cup \\{0\\}$ and $l \\in \\bbZ$.\n\nLet $v^{(s)}$ be a highest weight vector of $L(\\Lambda_0)^{(s)}$. 
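\nFor concreteness, write $\\Delta_l(u) = \\sum_{k \\geq 0} \\Delta_l^{(-k)} u^{k}$; this expansion is not written out above, but follows directly from the definition of $\\Delta_l(u)$. Since the elements $\\delta(-k)$ commute, one finds\n\\[\n\t\\Delta_l^{(0)} = 1, \\quad \\Delta_l^{(-1)} = l\\delta(-1), \\quad \\Delta_l^{(-2)} = \\dfrac{l}{2}\\delta(-2) + \\dfrac{l^2}{2}\\delta(-1)^2,\n\\]\nand in general $\\Delta_l^{(-k)}$ is a polynomial in $\\delta(-1), \\dots, \\delta(-k)$.\n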
\nWe generalize the relations given in (\\ref{eq:Frenkel-Kac1}), (\\ref{eq:Frenkel-Kac2}), (\\ref{eq:Frenkel-Kac3}).\n\n\\begin{lem}\\label{lem:highest}\nWe have\n\\begin{gather}\n\t(f_{\\theta} \\otimes s) (v^{(s)} \\otimes 1 \\otimes 1) = 0,\\quad e_i (v^{(s)} \\otimes 1 \\otimes 1) = 0 \\ (i \\in I), \\label{eq:Frenkel-Kac1new}\\\\\n\tc_s (v^{(s)} \\otimes 1 \\otimes 1) = v^{(s)} \\otimes 1 \\otimes 1, \\quad h_i (v^{(s)} \\otimes 1 \\otimes 1) = 0 \\ (i \\in I),\\quad d_s (v^{(s)} \\otimes 1 \\otimes 1) = 0,\\label{eq:Frenkel-Kac2new}\\\\\n\t(e_{\\theta} \\otimes s^{-1})^2 (v^{(s)} \\otimes 1 \\otimes 1) = 0,\\quad f_i (v^{(s)} \\otimes 1 \\otimes 1) = 0 \\ (i \\in I).\\label{eq:Frenkel-Kac3new}\n\\end{gather}\n\\end{lem}\n\n\\begin{proof}\nThese are direct consequences of the definition of the action and the relations in $L(\\Lambda_0)^{(s)}$.\n\\end{proof}\n\n\\begin{lem}\\label{lem:vertex}\nWe have $\\aff^{(t)} (v^{(s)} \\otimes 1 \\otimes 1) = 0$.\n\\end{lem}\n\n\\begin{proof}\nWe have $\\frg (v^{(s)} \\otimes 1 \\otimes 1) = (\\frg v^{(s)}) \\otimes 1 \\otimes 1 = 0$.\nTo see the action of $e_0 = f_{\\theta} \\otimes t$, consider the assignment\n\\[\n\t\\sum_{k \\in \\bbZ} (f_{\\theta} \\otimes s^k t) u^{-k} \\mapsto f_{\\theta} (u) \\otimes \\Delta_1(u) \\otimes \\tau.\n\\]\nExpand $\\Delta_1(u) = \\sum_{k \\geq 0} \\Delta_1^{(-k)} u^k$.\nThen the action of $e_0 = f_{\\theta} \\otimes t$ is given by $\\sum_{k \\geq 0} (f_{\\theta}\\otimes s^k) \\otimes \\Delta_1^{(-k)} \\otimes \\tau$.\nSince we have $(f_{\\theta}\\otimes s^k) v^{(s)} = 0$ for $k \\geq 0$, we have $e_0(v^{(s)} \\otimes 1 \\otimes 1)=0$.\nSimilarly the action of $f_0 = e_{\\theta} \\otimes t^{-1}$ is given by $\\sum_{k \\geq 0} (e_{\\theta}\\otimes s^k) \\otimes \\Delta_{-1}^{(-k)} \\otimes \\tau^{-1}$, hence it acts on $v^{(s)} \\otimes 1 \\otimes 1$ by $0$.\nWe have $c_t (v^{(s)} \\otimes 1 \\otimes 1) = 0$ and $d_t (v^{(s)} \\otimes 1 \\otimes 1) = 0$ by the definition of the action of 
$c_t$ and $d_t$.\n\\end{proof}\n\n\\subsection{Isomorphisms}\n\nWe define a $\\tor$-module $\\bbV$ by the pull-back of $\\bbV(0)$ via the automorphism $S^{-1}$, that is, $\\bbV = (S^{-1})^*\\bbV(0)$.\nDenote the vector of $\\bbV$ corresponding to $v^{(s)} \\otimes 1 \\otimes 1 \\in \\bbV(0)$ by $\\bfv$.\n\nThe action of $c(1,0)$ on $\\bbV$ corresponds to $\\tau^{-1}$ on $\\bbV(0)$ via $S^{-1}$ since $S^{-1}(c(1,0)) = c(0,-1)$.\nWe regard $\\bbV$ as a module over $A(\\Lambda_0)=\\bbC[z^{\\pm 1}]$ via $z \\mapsto c(1,0)$ and then $\\bbV$ becomes a free $A(\\Lambda_0)$-module by Proposition~\\ref{prop:freeness_vertex_rep}.\nWe put $\\bbV_a = \\bbV \\otimes_{A(\\Lambda_0)} \\bbC_a$ for $a \\in \\bbC^{\\times}$.\nThis $\\bbV_a$ is a $\\tor'$-module.\nThe character of $\\bbV_a$ is given as follows.\n\n\\begin{prop}\\label{prop:character_V}\nWe have $\\ch_p \\bbV_a = \\ch_p L(\\Lambda_0) \\displaystyle\\prod_{n > 0} \\dfrac{1}{1-p^n}$.\n\\end{prop}\n\n\\begin{proof}\nThe assertion obviously follows from the construction of the action of $\\tor$ on $\\bbV(0) = L(\\Lambda_0)^{(s)} \\otimes D \\otimes \\bbC[\\tau^{\\pm 1}]$. \n\\end{proof}\n\nLet us study the relation between the level one global Weyl module $\\glob(\\Lambda_0)$ and $\\bbV$. \n\n\\begin{lem}\\label{lem:relation}\nWe have \n\\[\n\th_{i,k} \\bfv = \\begin{cases} 0 & \\text{if } i \\in I, \\\\ z^k \\bfv & \\text{if } i=0 \\end{cases}\n\\]\nfor any $k \\in \\bbZ$.\t\n\\end{lem}\n\n\\begin{proof}\nWe have \n\\[\n\tS^{-1}(h_{i,k}) = \\begin{cases} h_i \\otimes t^{-k} & \\text{if } i \\in I, \\\\ s^{-1} t^{-k} ds - h_{\\theta} \\otimes t^{-k} & \\text{if } i=0. 
\\end{cases}\n\\]\nBy Lemma~\\ref{lem:vertex}, we have $(h_i \\otimes t^{-k}) (v^{(s)} \\otimes 1 \\otimes 1) = (h_{\\theta} \\otimes t^{-k}) (v^{(s)} \\otimes 1 \\otimes 1) =0$.\nSince we have $(s^{-1} t^{-k} ds) (v^{(s)} \\otimes 1 \\otimes 1) = \\tau^{-k} (v^{(s)} \\otimes 1 \\otimes 1)$ and $\\tau^{-1}$ corresponds to $z$, the assertion is proved.\n\\end{proof}\n\n\\begin{lem}\\label{lem:surjection}\nWe have a surjective homomorphism $\\glob(\\Lambda_0) \\to \\bbV$ of modules over both $\\tor$ and $A(\\Lambda_0)$.\n\\end{lem}\n\n\\begin{proof}\nThe equalities (\\ref{eq:Frenkel-Kac1new}), (\\ref{eq:Frenkel-Kac2new}), (\\ref{eq:Frenkel-Kac3new}) are equivalent to\n \\begin{gather*}\n\te_i \\bfv = 0 \\ (i \\in \\affI), \\\\\n\tc_t \\bfv = \\bfv, \\quad h_i \\bfv = 0 \\ (i \\in I),\\quad d_t \\bfv = 0,\\\\\n\tf_0^2 \\bfv = 0,\\quad f_i \\bfv = 0 \\ (i \\in I).\n\\end{gather*}\nMoreover we have \n\\begin{align*}\n\tc_s \\bfv &= S^{-1}(c_s)(v^{(s)} \\otimes 1 \\otimes 1) = c_t (v^{(s)} \\otimes 1 \\otimes 1) = 0,\\\\\n\td_s \\bfv &= S^{-1}(d_s)(v^{(s)} \\otimes 1 \\otimes 1) = d_t (v^{(s)} \\otimes 1 \\otimes 1) = 0\n\\end{align*}\nby Lemma~\\ref{lem:vertex}.\nWe need to check $e_{i,k} \\bfv = 0$ for $i \\in \\affI$ and $k \\in \\bbZ$.\nThis follows from $e_i \\bfv = 0$ and Lemma~\\ref{lem:relation}.\n\\end{proof}\n\nBy Lemma~\\ref{lem:surjection}, we have a surjective $\\tor'$-homomorphism $\\loc(\\Lambda_0,a) \\to \\bbV_a$ for every $a \\in \\bbC^{\\times}$. 
\nHence we have inequalities of the characters\n\\begin{equation}\n\t\\ch_p \\loc^+(\\Lambda_0,a) \\geq \\ch_p \\loc(\\Lambda_0,a) \\geq \\ch_p \\bbV_a \\label{eq:inequality}\n\\end{equation}\nby Proposition~\\ref{prop:character}.\n\n\\begin{thm}\\label{thm:main}\nWe have isomorphisms\n\\[\n\t\\glob(\\Lambda_0) \\stackrel{\\cong}{\\longrightarrow} \\bbV, \\qquad \\loc(\\Lambda_0,a) \\stackrel{\\cong}{\\longrightarrow} \\bbV_a\n\\]\nof modules over $\\tor$ and $\\tor'$ respectively.\n\\end{thm}\n\n\\begin{proof}\nFirst we prove the isomorphism $\\loc(\\Lambda_0,a) \\cong \\bbV_a$.\nWe have\n\\begin{equation}\n\t\\ch_p \\loc^+(\\Lambda_0,a) = \\ch_p W(\\Lambda_0) \\leq \\ch_p L(\\Lambda_0) \\prod_{n>0} \\dfrac{1}{1-p^n} = \\ch_p \\bbV_a \\label{eq:inequality2}\n\\end{equation}\nby Propositions~\\ref{prop:independent}, \\ref{prop:upper_bound} and \\ref{prop:character_V}.\nThen the inequalities (\\ref{eq:inequality}) and (\\ref{eq:inequality2}) imply $\\ch_p \\loc(\\Lambda_0,a) = \\ch_p \\bbV_a$.\nThis shows that the surjective homomorphism $\\loc(\\Lambda_0,a) \\to \\bbV_a$ is an isomorphism for every $a \\in \\bbC^{\\times}$.\nNext we prove the isomorphism $\\glob(\\Lambda_0) \\cong \\bbV$.\nSince $\\bbV$ is a free $A(\\Lambda_0)$-module, we can take a splitting of the exact sequence\n\\[\n\t0 \\to \\Ker \\to \\glob(\\Lambda_0) \\to \\bbV \\to 0\n\\]\nof $A(\\Lambda_0)$-modules.\nThe isomorphism $\\loc(\\Lambda_0,a) \\cong \\bbV_a$ implies $\\Ker \\otimes_{A(\\Lambda_0)} \\bbC_a = 0$ for every $a \\in \\bbC^{\\times}$.\nThen by Nakayama's lemma, we see that $\\Ker = 0$ and obtain the isomorphism $\\glob(\\Lambda_0) \\cong \\bbV$.\n\\end{proof}\n\n\\begin{cor}\\label{cor:character}\nWe have\n\\[\n\t\\ch_{p} \\loc(\\Lambda_0,a) = \\ch_{p} \\loc^+(\\Lambda_0,a) = \\ch_p L(\\Lambda_0) \\left( \\prod_{n>0} \\dfrac{1}{1-p^n} \\right)\n\\]\nfor $a \\in \\bbC^{\\times}$ and\n\\[\n\t\\ch_{p,q} W(\\Lambda_0) = \\ch_p L(\\Lambda_0) \\left( \\prod_{n>0} \\dfrac{1}{1-p^n q} 
\\right).\n\\]\n\\end{cor}\n\n\\begin{proof}\nThe equalities for the $p$-characters are verified in the proof of Theorem~\\ref{thm:main}.\nThe equality for the $(p,q)$-character follows from that for the $p$-character and Proposition~\\ref{prop:upper_bound}. \n\\end{proof}\n\n\\newcommand{\\etalchar}[1]{$^{#1}$}\n\\def\\cprime{$'$}\n\\providecommand{\\bysame}{\\leavevmode\\hbox to3em{\\hrulefill}\\thinspace}\n\\providecommand{\\MR}{\\relax\\ifhmode\\unskip\\space\\fi MR }\n\\providecommand{\\MRhref}[2]{%\n \\href{http:\/\/www.ams.org\/mathscinet-getitem?mr=#1}{#2}\n}\n\\providecommand{\\href}[2]{#2}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{introduction}\nRecently, significant public attention has been drawn to the consequences of achieving human-level \nartificial intelligence. While there have been small communities analyzing the long-term impact of AI \nand related technologies for decades, their forecasts were made before the many recent \nbreakthroughs that have dramatically accelerated the pace of research in areas as diverse as robotics, \ncomputer vision, and autonomous vehicles, to name just a few \\cite{bostrom2014superintelligence, \nshanahan2015technological, chalmers2010singularity}. \\\\\n\nMost researchers and industrialists view advances in artificial intelligence as having the potential to be \noverwhelmingly beneficial to humanity. Medicine, transportation, and fundamental scientific research \nare just some of the areas that are actively being transformed by advances in artificial intelligence. On \nthe other hand, issues of privacy and surveillance, access and inequality, or economics and policy are \nalso of utmost importance and are distinct from the specific technical challenges posed by most \ncutting-edge research problems \\cite{tegmark2015open, russell2015research}. 
\\\\\n\nIn the context of AI forecasting, one set of issues stands apart, namely, the consequences of artificial \nintelligence whose capacities vastly exceed that of human beings. Some researchers have argued that \nsuch a ``superintelligence'' poses distinct problems from the more modest AI systems described above. \nIn particular, the emerging discipline of AI safety has focused on issues related to the potential \nconsequences of mis-specifying goal structures for AI systems which have significant capacity to exert \ninfluence on the world. From this vantage point, the fundamental concern is that deviations from \n``human-compatible values'' in a superintelligent agent could have significantly detrimental \nconsequences \\cite{bostrom2014superintelligence}. \\\\\n\nOne strategy that has been advocated for addressing safety concerns related to superintelligence is \nOracle AI, that is, an AI system that only answers questions. In other words, an Oracle AI does not \ndirectly influence the world in any capacity except via the user of the system. Because an Oracle AI \ncannot directly take physical action except by answering questions posed by the system's operator, \nsome have argued that it may provide a way to bypass the immediate need for solving the ``value \nalignment problem'' and would itself be a powerful resource in enabling the safe design of autonomous, \ndeliberative superintelligent agents \\cite{armstrong2012thinking, bostrom2014superintelligence, \nfallenstein2015reflective, armstrong2017, armstrong2017oracle}. \\\\\n\nA weaker notion of the term oracle, what we call a \\emph{domain-specific oracle}, refers to a modular \ncomponent of a larger AI system that is queried for domain-specific tasks. 
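Schematically, such a component exposes nothing but a query-answer interface to the rest of the system. The following sketch is purely illustrative; the class and method names are invented for exposition and are not drawn from the cited literature:

```python
# A purely illustrative sketch of the "domain-specific oracle" notion: a module
# whose ONLY interaction with the surrounding system is answering queries in a
# fixed domain. Names here are hypothetical, not from any cited system.
from abc import ABC, abstractmethod

class DomainSpecificOracle(ABC):
    """A component queried for answers in a single, fixed domain."""

    @abstractmethod
    def answer(self, query: str) -> str:
        """Return an answer; no other side effects on the larger system."""

class ArithmeticOracle(DomainSpecificOracle):
    """Toy oracle for a tiny mathematical domain: integer arithmetic."""

    def answer(self, query: str) -> str:
        # Evaluating against an empty namespace keeps the toy example honest:
        # the oracle can compute arithmetic, but cannot reach anything
        # outside its domain.
        return str(eval(query, {"__builtins__": {}}, {}))

oracle = ArithmeticOracle()
print(oracle.answer("2 + 3 * 7"))   # -> 23
```

The point of the sketch is only the shape of the interface: the component's entire influence on the larger system flows through the return value of `answer`.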
In this article, we view \ncomputer algebra systems as primitive domain-specific oracles for mathematical \ncomputation which are likely to become quite powerful on the time horizons on which many expect \nsuperintelligent AI systems to be developed \\cite{muller2016future, 2017arXiv170508807G}. \nUnder the assumption that math oracles prove to be \nuseful in the long-term development of AI systems, addressing well-defined architectural \nproblems with CASs and their integration with interactive theorem provers provides a concrete \nset of research problems that align with long-term issues in AI safety. In addition, such systems may also\nbe useful in proving the functional correctness of other aspects of an AI architecture. In Section \n\\ref{metascience}, we briefly discuss the unique challenges in allocating resources for AI safety \nresearch. In Section \\ref{oracle-overview}, we briefly summarize the motivation for developing\noracles in the context of AI safety and give an overview of safety risks and control strategies \nwhich have been identified for superintelligent oracle AIs. \nIn Section \\ref{oracle} we analyze contemporary question answering systems and argue that \nin contrast to computer algebra systems, current consumer-oriented, NLP-based systems are poor \ncandidates for rigorous analysis as oracles. In Section \\ref{itp-cas}, we review the differences between \ntheorem provers and computer algebra systems, efforts at integrating the two, and known architectural \nproblems with CASs. We close with a list of additional research projects related to mathematical computation \nwhich may be of interest to scientists conducting research in AI safety. \n\n\\section{Metascience of AI Safety Research}\\label{metascience}\nFrom a resource allocation standpoint, AI safety poses a unique set of challenges. Few areas of \nacademic research operate on such long and potentially uncertain time horizons. 
This is not to say that \nacademia does not engage in long-term research. Research in quantum gravity, for example, is \napproaching nearly a century's worth of effort in theoretical physics \\cite{Rovelli:2008}. However, the \nkey difference between open-ended, fundamental research in the sciences or humanities and AI safety \nis the possibility of negative consequences, indeed significant ones, of key technological \nbreakthroughs taking place without corresponding advances in frameworks for safety \n\\cite{bostrom2014superintelligence, russell2016should}. \\\\\n\nThese issues have been controversial, largely due to disagreement over the time-horizons for achieving \nhuman-level AI and the subsequent consequences \\cite{muller2016future, 2017arXiv170508807G}. \nSpecifically, the notion of an ``intelligence explosion,'' whereby the intelligence of software systems \ndramatically increases due to their capacity to model and re-write their own source code, has yet to receive \nadequate scientific scrutiny and analysis \\cite{linstone2014singularity}. \\\\\n\nWe affirm the importance of AI safety research and also agree with those who have cautioned against \nproceeding down speculative lines of thinking that lack precision. \nOur perspective in this article is that it is \npossible to fruitfully discuss long-term issues related to AI safety while maintaining a connection to \npractical research problems. To some extent, our goal is similar in spirit to the widely discussed \nmanuscript ``Concrete Problems in AI Safety'' \\cite{amodei2016concrete}. However, we aim to be a bit \nbolder. While the authors of ``Concrete Problems'' state at the outset that their analysis will set \naside questions related to superintelligence, our goal is to explicitly tackle superintelligence-related \nsafety concerns. 
We believe that there are areas of contemporary research that overlap \nwith novel ideas and concepts that have arisen among researchers who have purely focused on \nanalyzing the consequences of AI systems whose capacities vastly exceed those of human beings. \\\\\n\nTo be clear, we do not claim that the strategy of searching for pre-existing research objectives that align\nwith the aims of superintelligence theory is sufficient to cover the full spectrum of issues identified by \nAI safety researchers. There is no doubt that the prospect of superintelligence raises entirely new \nissues that have no context in contemporary research. However, considering how young the field is, \nwe believe that the perspective adopted in this article is a down-to-earth and moderate stance to take \nwhile the field is in a critical growth phase and a new culture is being created. \\\\\n\nThis article focuses on one area of the AI safety landscape, Oracle AI. We identify a set of concrete software \nprojects that relate to more abstract, conceptual ideas from AI safety, to bridge the gap between \npractical contemporary challenges and longer term concerns which are of an uncertain time horizon. \nIn addition to providing concrete problems for researchers and engineers to tackle, we hope\nthis discussion will be a useful introduction to the concept of Oracle AI for newcomers to the subject. \nWe state at the outset that within the context of Oracle AI, our analysis is limited in scope to systems \nwhich perform mathematical computation, and not to oracles in general. Nonetheless, considering how \nlittle effort has been directed at the \nsuperintelligence control problem, we are confident that there is low-hanging fruit in addressing these \nmore general issues which are awaiting discovery. \n\n\\section{Brief Overview of Oracle AI}\\label{oracle-overview}\nAs described above, an Oracle AI is a system which only answers questions. 
Although the term has traditionally\nbeen used to denote fully general AI systems, we can also imagine oracles whose capabilities are more limited. \nThe latter systems are what we refer to as \\emph{domain-specific oracles}. AI safety researchers have \nargued that superintelligent oracles have the potential to help address safety concerns of autonomous, \ndeliberative superintelligent AI systems, as an Oracle AI's only ability to influence the world is through \nthe user of the system. Consequently, an Oracle AI itself may be in a position to assist researchers in\nsolving difficult technical and conceptual problems related to the design of independent, deliberative agents. \nYet superintelligent oracle AIs pose their own distinct set of safety concerns. \nThe problems identified in \\cite{armstrong2012thinking}, for example, presume a superintelligent AI \nsystem, which while limited in its ability to interact with the world, may nonetheless have the potential to \nnegatively impact the world by manipulating the user of the system. As Armstrong, Bostrom, and Sandberg write:\n\n\\begin{quote}\n{\\small\nWe are all vulnerable to the OAI's social engineering. An advanced OAI can literally offer us the world. \nGenuine altruism is no defence against this; it can offer a world free of poverty, hunger, war, disease or \ndeath, as well as all personal trinkets we could want. It could guess the weaknesses of each individual, \nand find the right arguments to convince us that granting it power or liberty is the moral and profitable \nthing to do. It will be able to play on rivalries, institutional weaknesses and ethical dilemmas with more \nvirtuosity than any human could. \\cite{armstrong2012thinking}\n}\n\\end{quote}\n\nGiven the potential dangers of human manipulation by an OAI, researchers have searched for control strategies\nto mitigate such risks. 
A few examples of control methods that have been proposed are:\n\\begin{itemize}\n\\item \\textbf{Physical security}- physically containing the OAI system\n\\item \\textbf{Limiting or eliminating external manipulators}- like the previous security measure, \nprecluding the OAI from physically manipulating the world in any direct manner\n\\item \\textbf{Epistemic capability control}- the OAI's output could be restricted even further beyond \nquestion answering, for example, restricting it to answer ``yes\/no'' questions\n\\item \\textbf{Virtual worlds}- the OAI could be run in a simulated world so that even if the system's \ncapabilities were expanded, its behavior could be observed and modeled\n\\item \\textbf{Resetting the OAI after each question}- this would prevent the OAI from engaging in long-term \nsocial engineering by manipulating the answers to the users' questions\n\\end{itemize}\n\nAlthough the capacities of domain-specific oracles are more limited, they can nonetheless pose safety risks. \nArchitectural deficiencies in such oracles might be exploited by a larger AI system to \nmanipulate the human user. Such an oracle could give answers which are difficult to verify and which allow the OAI to \nexecute complex and intricate plans unbeknownst to the user. Therefore, while flaws in domain-specific oracles are not\ninherently risky if used solely in their domain of applicability, they may very well be dangerous as part of \na larger system with more general capabilities. Though not a ``control strategy'' in the narrowest sense, \ncreating ``robust'' domain-specific oracles is an\nimportant objective in designing safe OAIs. Furthermore,\nensuring the robustness of domain-specific subsystems might mitigate the need for stronger control strategies,\nas the OAI would have fewer weaknesses to exploit. \\\\\n\nIt should go without saying that the arguments presented above are highly schematic and do not depend\non specific technologies. 
To our knowledge, there is very limited work on translating analyses of \nsuperintelligent oracle AIs into the concrete language of modern artificial intelligence \n\\cite{armstrong2016safely, armstrong2017, armstrong2017oracle}. Our goal in this manuscript is in this spirit, that is, to\nanchor schematic, philosophical arguments in practical, contemporary research. To do so, we will narrow our focus\nto the mathematical domain. In the remainder of the article, we will use the \nterm oracle in the more limited sense of a domain-specific subsystem, and in particular, oracles for performing \nmathematical computations. We hope that the analysis presented here will be of intrinsic value in \ndeveloping robust math oracles, as well as provide some intuition and context for identifying \nconcrete problems relevant to developing safe, superintelligent oracle AI systems. \n\n\\section{Are there contemporary systems which qualify as oracles?}\\label{oracle}\nThe obvious class of contemporary systems which would seem to qualify as oracles are question \nanswering systems (QASs). As we stated above, a basic criterion characterizing oracles is that their \nfundamental mode of interaction is answering questions posed by a user, or answering domain-specific queries \nas part of a larger AI system. \\\\\n\nContemporary QASs are largely aimed at using natural language processing techniques to answer \nquestions pertaining to useful facts about the world such as places, movies, historical figures, and so \non. An important point to make about QASs is the highly variable nature of the underlying technology. \nFor instance, IBM's original Watson system, which competed in Jeopardy, was developed prior to the \nrecent advances in deep learning which have fundamentally transformed areas ranging from computer \nvision, to speech recognition, to natural language processing \\cite{ferrucci2010building}. 
In this \nparticular task, the system was nonetheless able to perform at a level beyond that of the most \naccomplished human participants. The introduction of ``info panes'' into popular search \nengines, on the other hand, has been based on more recent machine learning technology, and indeed, \nthese advances are also what power the latest iterations of the Watson system \n\\cite{watson_upgrade}. On the other end of the spectrum is Wolfram $\\vert$ Alpha, which is also a question \nanswering system, but which is architecturally centered around a large, curated repository of structured \ndata, rather than datasets of unstructured natural language \\cite{wolfram_QAS}. \\\\\n\nWhile these systems are currently useful for humans in navigating the world, planning social outings, \nand arriving at quick and useful answers to ordinary questions, it is not clear that they will remain useful \nin quite the same capacity many years from now, or as standalone components of superintelligent AI \nsystems. Although the underlying techniques of deep learning or NLP are of fundamental interest in \ntheir own right, the fact that these systems are QASs at all seems to be more of an artifact of their utility for \nconsumers. \\\\\n\nAnother important observation about contemporary QASs is that much of their underlying NLP-based \narchitecture can be replaced by taking advantage of structured data, as the example of Wolfram $\\vert$ Alpha \ndemonstrates. For the other NLP or machine learning based \nsystems, the underlying technology can be used as part of larger, semi-automated pipelines to turn \nunstructured data from textual sources into structured data. 
Once again, this fact simply underscores \nthat contemporary QASs are not particularly appealing model systems to analyze from the Oracle AI \nsafety perspective.\\footnote{We emphasize that our argument that\ncontemporary QASs are not good candidates for analysis as Oracle AIs is not an argument \nagainst the traditional formulation of Oracle AI as a tool for AI safety. We fully expect significant \nbreakthroughs to be made in advancing the theory and practice of oracle-based\ntechniques for AI safety and we hope that this manuscript will provide some motivation \nto pursue such research. Rather, our point is that when viewing\ncontemporary systems from the lens of superintelligence, there seems little reason to believe that \ncurrent NLP-based QASs will remain sufficiently architecturally stable to be used as standalone components \nin AI systems many years from now. On the other hand, there are certainly important \\emph{present-day} problems \nto examine when evaluating the \nbroader impact of QASs, such as bias in NLP systems, overgeneralization, and privacy, to name just a \nfew. Some of these issues overlap with the set of problems identified in \\cite{amodei2016concrete} as \nexamples of concrete problems in AI safety. In addition, we are beginning to see conferences \ndevoted to contemporary ethical issues raised by machine learning. See, for example, the workshop \n\\href{https:\/\/www.aclweb.org\/portal\/content\/first-workshop-ethics-natural-language-processing}{Ethics in \nNatural Language Processing}.}\n\n\\subsection{Computer Algebra and Domain-Specific Oracles for Mathematical Computation}\nThe question answering systems described above all rely on natural language processing to varying \ndegrees. In addition, their domain of applicability has tended towards ``ordinary'' day-to-day knowledge \nuseful to a wide array of consumers. Another type of question answering system is a computer algebra \nsystem (CAS). 
Computer algebra has traditionally referred to systems for computing specific results to \nspecific mathematical equations, for example, computing derivatives and integrals, group theoretic \nquantities, etc. In a sense, we can think of computer algebra as a set of algorithms for performing what \nan applied mathematician or theoretical physicist might work out on paper and pencil. Indeed, some of \nthe early work in computer algebra came from quantum field theory---one of the first computer algebra \nsystems was Veltman's \\emph{Schoonschip} for performing field theoretic computations that led to the \ntheory of electroweak unification \\cite{Schoonschip}. \\\\\n\nAs computer algebra systems have grown in popularity, their functionality has expanded substantially\nto cover a wide range of standard computations in mathematics and theoretical physics, including differentiation,\nintegration, matrix operations, manipulation of symbolic expressions, symbolic substitution, algebraic equation solving,\nlimit computation, and many others. Computer algebra systems typically run in a \\texttt{read, evaluate, print} loop (\\texttt{repl}), \n and in the research and education context, their popularity has also grown as a result of the notebook model pioneered\nby the \\emph{Mathematica} system, allowing for computations in CASs to closely mimic the sequential, paper and pencil\nwork of mathematicians and theoretical physicists. \\\\\n\nIn assessing the long-term utility of CASs, it is important to note that there is little reason to believe that\ncomputer algebra will be subsumed by other branches of AI research such as machine learning. Indeed, \nrecent research has \ndemonstrated applications of machine learning to both computer algebra and theorem proving (which \nwe discuss in more detail below), via algorithm selection in the former case \\cite{huang2016machine} \nand proof assistance in the latter \\cite{irving2016deepmath, komendantskaya2012machine}. 
While \ncertainly not as visible as machine learning, computer algebra and theorem proving are very much \nactive and deep areas of research which are also likely to profit from advances in other fields of \nartificial intelligence, as opposed to being replaced by them \\cite{bundy_et_al:DR:2012:3731}. \nOn the time horizons on which we are likely to \nsee human-level artificial intelligence and beyond, we can expect that these systems will become quite \npowerful, and possess capabilities that \nmay be useful in the construction of more general AI systems. Therefore, it is worth examining such \nsystems from the perspective of AI \nsafety.\n\n\\subsection{Briefly Clarifying Nomenclature}\nBefore proceeding, we want to explicitly describe issues relating to nomenclature that have arisen in the \ndiscussion thus far, and state our choices for terminology. Given that the phrase ``Oracle AI'' has \nbecome common usage in the AI safety community, we will continue to use this phrase, with the first \nword capitalized, as well as the acronym OAI. Where clarification is needed, we may also use the full \nphrase ``superintelligent oracle AI,'' without capitalization. \\\\\n\nFor more modest use cases of the word oracle, we will either refer to ``domain-specific oracles,'' or state the \ndomain of knowledge where the oracle is applicable. We can, at the very least in the abstract, consider \nextending this terminology to other domains such as ``physics oracles,'' ``cell biology oracles,'' or \n``ethics oracles'' and so on. Since the domain-specific oracles examined here are systems for \nmathematical computation, the remainder of the article \nwill be concerned with safety and robustness issues in the design of ``math oracles.''\n\n\\section{Robust Computer Algebra and Integrated Theorem Proving}\\label{itp-cas}\n\\begin{quote}\n{\\small \\emph{Today we should consider as a standard feature much closer interaction between proof \nassistance and computer algebra software. 
Several areas can benefit from this, including specification \nof interfaces among components, certification of results and domains of applicability, justification of \noptimizations and, in the other direction, use of efficient algebra in proofs.}\\\\ -\n\\textbf{Stephen Watt in \\emph{On the future of computer algebra systems at the threshold of 2010}}}\n\\end{quote}\n\nAs we described above, computer algebra systems can be thought of as question answering systems \nfor a subset of mathematics. A related set of systems are interactive proof assistants or interactive \ntheorem provers (ITPs). While ITPs are also systems for computer-assisted mathematics, they serve a \ndifferent mathematical purpose: constructing proofs of general kinds of statements. In other words, rather than computing specific answers to specific questions, ITPs are used \nto show that candidate mathematical structures (or software systems) possess certain properties. \\\\ \n\nIn a sense, the \ndistinction between theorem proving and computer algebra should be viewed as a historical anomaly. \nFrom the perspective of the philosophical and logical efforts in the early 20th century that led to the \n``mechanization of mathematics,'' the distinction between computing the $n^{th}$ Laguerre polynomial \nand constructing a proof by induction might have been viewed as rather artificial, although with the \nbenefit of hindsight we can see that the two types of tasks are quite different in practice \n\\cite{beeson2004mechanization}. \\\\\n\nThe role of ITPs in the research world is very different from that of CASs. Whereas CASs allow researchers\nto perform difficult computations that would be impossible with paper and pencil, constructing proofs using \nITPs is often more difficult than even the most rigorous methods of pure mathematics. 
In broad terms, the \noverhead of using ITPs to formalize theorems arises from the fact that proofs in these systems must proceed\nstrictly from a set of formalized axioms so that the system can verify each computation. Consequently, ITPs\n(and related systems, such as automatic theorem provers) are largely used for verifying properties of \nmission-critical software systems which require a high degree of assurance, or for hardware verification,\nwhere mistakes can lead to costly recalls \\cite{seL4, kaivola2009replacing, fix2008fifteen, kern1999formal, Kropf}. \\\\\n\nAs the quotation above suggests, many academic researchers view the integration of interactive proof \nassistants and computer algebra systems as desirable, and there have been numerous efforts over the \nyears to explore possible avenues for achieving this objective \\cite{Ballarin, HOLCAS, Watt, \nTheorema} (a more complete list is given below). By integrating theorem proving with computer \nalgebra, we would be opening up a wealth of potentially interoperable algorithms that have to date \nremained largely unintegrated. To cite one such example, in \\cite{MapleIsabelle}, the authors \ndeveloped a framework for the exchange of information between the Maple computer algebra system and \nthe Isabelle interactive theorem prover. They exhibit a simple problem involving the proof of an \nelementary polynomial identity that could be solved with the combined system, but in neither system \nalone (see Fig. \\ref{fig:maple_isabelle}). \\\\\n\n\\begin{figure*}[h]\n\\begin{center}\n\\includegraphics[width=\\textwidth]{maple_isabelle}\n\\caption{\\label{fig:maple_isabelle}Example of a polynomial identity proven by integrating the Maple \ncomputer algebra system with Isabelle. 
Maple's simplifier is used for expanding polynomials---a \npowerful complement to the theorem proving architecture of Isabelle which allows for the setup of a \nproof by induction.}\n\\end{center}\n\\end{figure*}\n\nWe cite this example to demonstrate how a simply stated elementary problem cannot be solved in \nexisting environments for either computer algebra or proof assistance. The computer algebra system \ndoes not have the capacity for structural induction and theorem provers generally have rather weak \nexpression simplifiers. There are numerous examples such as this one in the academic literature. \\\\\n\nAnother key difference between CASs and ITPs is the architectural soundness of the respective \nsystems. As we will discuss below, computer algebra systems have well-defined architectural \ndeficiencies which, while not a practical issue for the vast majority of use cases, pose problems for their \nintegration with theorem provers, which, by their nature, are designed to be architecturally sound. In the \ncontext of superintelligent AI systems, the architectural problems of CASs are potential points of \nweakness that could be exploited for malicious purposes or simply lead to unintended and detrimental consequences. \nTherefore, we use the phrase ``robust \ncomputer algebra'' to refer to CASs which lack the problems \nthat have been identified in the research literature. In the section below, we combine the discussion \nof robust computer algebra and integration with interactive theorem provers, as there is a spectrum of \napproaches which address both of these issues to varying degrees. \n\n\\subsection{A Taxonomy of Approaches}\nThere are many possible avenues to tackle the integration of theorem provers with computer algebra \nsystems. 
We give 4 broad categories characterizing such integration efforts\\footnote{This classification \nwas first described by Kaliszyk and Wiedijk \\cite{HOLCAS} in a paper arguing for an architecture which \nwe list as the fourth category given above.}: \n\n\\begin{enumerate}\n\\item \\textbf{Theorem provers built on top of computer algebra systems:} These include Analytica, \nTheorema, RedLog, and logical extensions to the Axiom system \\cite{clarke1992analytica, Theorema, \ndolzmann1997redlog, jenks2013axiomtm, poll1998adding} .\n\\item \\textbf{Frameworks for mathematical exchange between the two systems:} This category includes \nMathML, OpenMath, OMSCS, MathScheme, and Logic Broker \\cite{miner2005importance, \nbuswell2004open, calmet2004toward, carette2011mathscheme, armando2000towards}. \n\\item \\textbf{``Bridges'' or ``ad-hoc'' information exchange solutions:} The pairs of systems in this \ncategory include bridges combining PVS, HOL, or Isabelle with Maple, NuPRL with Weyl, Omega with \nMaple\/GAP, Isabelle with Summit, and most recently, Lean with \\emph{Mathematica} \n\\cite{MapleIsabelle, adams2001computer, harrison1998skeptic, \nballarin1995theorems, jackson1994exploring, siekmann2002proof, ballarin1999pragmatic, \nLeanMathematica2017}. The \nexample given above, bridging Isabelle and Maple, is an example of an approach from this category.\n\\item \\textbf{Embedding a computer algebra system inside a proof assistant:} This is the approach \ntaken by Kaliszyk and Wiedijk in the HOLCAS system. In their system, all expressions have precise \nsemantics, and the proof assistant proves the correctness of each simplification made by the computer \nalgebra system \\cite{HOLCAS}.\n\\end{enumerate}\n\nOne primary aspect of integration that differentiates these approaches is the degree of trust the \ntheorem prover places in the computer algebra system. 
Computer algebra systems give the false \nimpression of being monolithic systems with globally well-defined semantics. In reality, they are large \ncollections of algorithms which are neatly packaged into a unified interface. Consequently, there are \noften corner cases where the lack of precise semantics can lead to erroneous solutions. Consider the \nfollowing example: \n\n\\begin{figure}[h]\n\\begin{center}\n\\frame{\\includegraphics[scale=.32]{solve_macsyma}}\n\\caption{\\label{fig:solve-error-simp}Example of an incorrect solution to a simple polynomial equation by \na computer algebra system.}\n\\end{center}\n\\end{figure}\n\nThe system incorrectly gives 1 as a solution, even though the given expression has an indeterminate \nvalue for $x = 1$. The error arises because the expression is treated as a fraction of polynomials and is first \nsimplified before the solve operation is applied. In other words, the semantics governing the interaction between \nthe solver module and the simplifier are unclear, and this mismatch leads to an incorrect result. \\\\\n\nAnother simple example is the following integral:\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[scale=.70]{evaluation_substitution}\n\\caption{\\label{fig:solve-error-noncomm}A problem arising in symbolic integration due to the non-commutativity \nof evaluation and substitution.}\n\\end{center}\n\\end{figure}\n\nMaking the substitution $n = -1$ gives an indeterminate result, while it is clear by inspection that the \nvalue of the integral for $n = -1$ is simply $\\ln(x)$. This belongs to a class of problems known as the \n\\emph{specialization problem}, namely that expression evaluation and variable substitution do not \ncommute \\cite{Ballarin}. So while we have seen above that theorem proving can benefit tremendously \nfrom the wealth of algorithms for expression simplification and mathematical knowledge in computer \nalgebra, there is the potential cost of compromising the reliability of the combined system. 
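Both failure modes are easy to reproduce concretely in an off-the-shelf CAS. The following is a minimal sketch in Python using SymPy; since the exact input shown in the first figure is not reproduced here, an analogous rational equation is assumed:

```python
import sympy as sp

x, n = sp.symbols("x n")

# Failure mode 1: simplification precedes solving.  (Hypothetical analogue of
# the equation in the first figure, whose exact input is not reproduced here.)
expr = (x**2 - 1) / (x - 1)            # indeterminate (0/0) at x = 1
simplified = sp.cancel(expr)           # a simplifier silently cancels: x + 1
roots = sp.solve(sp.Eq(simplified, 2), x)
assert roots == [1]                    # the simplified equation admits x = 1 ...
assert expr.subs(x, 1) is sp.nan       # ... where the original expression is 0/0

# Failure mode 2: the specialization problem.  The generic antiderivative of
# x**n is x**(n + 1)/(n + 1), but evaluation and substitution do not commute.
F = x**(n + 1) / (n + 1)
assert F.subs(n, -1) is sp.zoo                 # substituting n = -1 afterwards: 1/0
assert sp.integrate(x**(-1), x) == sp.log(x)   # specializing first gives ln(x)
```

(SymPy itself guards the second case by returning a piecewise antiderivative for $\int x^n\,dx$ that separates $n = -1$; making such careful semantics systematic across an entire CAS is precisely the goal of the integration efforts discussed here.)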
As a possible\napplication to current research in AI safety, consider the decision-theoretic research agenda \nfor the development of safe, superintelligent AI systems outlined in \\cite{yudkowsky2013tiling, lavictoire2015introduction, \nbarasz2014robust, fallenstein2014problems, soares2015toward}. If we require formal guarantees of \ncorrectness at any point in a sequence of computations in which computer algebra is used, current systems \nwould be unable to provide the necessary framework for constructing such a proof.\n\n\\subsubsection{Qualitatively Certified Computations} \nIn our taxonomy of approaches to bridging theorem provers with computer algebra, we described how a \nkey distinction was the degree of trust that the theorem prover places in the computer algebra system. \nFor instance, approaches which build theorem provers on top of computer algebra systems do not \naddress the architectural issues with CASs. They are integrative, but not more sound. At the other \nextreme, building a computer algebra system on top of a theorem prover allows for a degree of trust \nthat is on par with that of the theorem prover itself. However, this approach has the distinct \ndisadvantage that computer algebra systems represent many hundreds of man-years' worth of effort, \nwhich would need to be reproduced on top of the theorem prover. \\\\\n\nThe more intermediate approaches involving common languages for symbolic exchange or ad-hoc bridges \nbring to light an important notion in the spectrum of provable safety, namely \nthe ability to assign probabilities to the correctness of computations. In \\cite{garrabrant2016logical}, \nthe authors present an algorithm for assigning probabilities to any statement in a formal language. We \nmight ask what significantly weaker strategies with a similar goal in mind would look like. \nInterfaces between theorem provers and computer algebra systems provide a concrete example where \nwe can ask a question along these lines. 
Fundamentally, in such an interface, the computer algebra \nsystem is the weaker link and should decrease our confidence in the final result. But by how much? \nFor instance, in the example given in Figure \\ref{fig:maple_isabelle}, how should we revise our \nconfidence in the result knowing that polynomial simplification was conducted within a computer algebra \nsystem? \\\\\n\nIt is worth asking whether there are simple answers to this question that do not require major theoretical \nadvances. For instance, we might imagine curating information from computer algebra experts about \nknown weaknesses, and using this information to simply give a qualitative degree of confidence in a \ngiven result. Or, for example, in a repository of formal proofs generated using integrated systems, steps \nof the proof that require computer algebra can be flagged and also assigned a qualitative measure of \nuncertainty. \\\\\n\nThe relationship between this highly informal method of qualitatively certifying computations and the \nformal algorithm developed in \\cite{garrabrant2016logical} mirrors a familiar contrast between \ntechniques used in the software industry for ensuring correctness. On the one hand, unit testing is a \ntheoretically trivial, yet quite powerful practice, something along the lines of automated checklists for \nsoftware. The complexities of modern software would be impossible to handle without extensive \nsoftware testing frameworks \\cite{Beck2002, Osherove2013, maximilien2003assessing, \nerdogmus2005effectiveness, sarma2016unit}. On the other hand, formal verification can provide substantially stronger \nguarantees, yet is a major undertaking, and the correctness proofs are often significantly more \ndemanding to construct than the software itself. 
Consequently, as discussed in Section \\ref{itp-cas},\nformal verification is much less frequently used in industry, typically only in exceptional circumstances \nwhere strong guarantees of correctness are required, or for \nhardware verification \\cite{seL4, kaivola2009replacing, fix2008fifteen, kern1999formal, Kropf}. \\\\\n\nIntegrated systems for computer algebra and theorem proving give rise to a quite interesting (and \nperhaps ironic) opportunity to pursue simple strategies for giving qualitative estimates for the \ncorrectness of a computation.\n\n\\subsubsection{Logical Failures and Error Propagation}\nAs the examples described above demonstrate, errors in initial \ncalculations may very well propagate and give rise to nonsensical results. As AI systems capable of performing\nmathematical computation become increasingly sophisticated and embedded as part of design workflows\nfor science and engineering (beyond what we see today), we could imagine such errors being quite costly\nand difficult to debug. In the case of a \nsuperintelligent AI system, more concerning scenarios would be if systematic errors in computer \nalgebra could be exploited for adversarial purposes or if they led to unintentional accidents on a large scale.\\\\\n\nThe issue of error propagation is another example of a concrete context for pursuing simple strategies \nfor assigning qualitative measures of certainty to computations performed by integrated theorem \nproving \/ computer algebra systems. For instance, we may be less inclined to trust a result in which the \ncomputer algebra system was invoked early on in a computation as opposed to later. With curated data \nfrom computer algebra experts on the reliability or failure modes of various algorithms, we might also \nchain together these informal estimates to arrive at a single global qualitative estimate. 
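A deliberately naive sketch of what chaining such curated estimates might look like in code; the trust labels, their numeric ranking, and the weakest-link combination rule below are all illustrative assumptions rather than a proposed standard:

```python
# Qualitative trust levels for individual steps of a computation, as might be
# curated from computer algebra experts (labels and ranking are hypothetical).
TRUST = {"itp-verified": 3, "cas-invoked-late": 2, "cas-invoked-early": 1}

def overall_trust(steps):
    """Global qualitative estimate for a chained computation: its weakest step."""
    return min(steps, key=TRUST.__getitem__)

# A derivation that called out to a CAS early on is flagged accordingly:
steps = ["itp-verified", "cas-invoked-early", "itp-verified", "cas-invoked-late"]
assert overall_trust(steps) == "cas-invoked-early"
```

Even a crude scheme of this sort makes the provenance of each step explicit, which is the prerequisite for the more refined probabilistic treatments discussed above.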
If multiple \nsystems were to be developed independently, or which were based on fundamentally different \narchitectures, we might also be significantly more confident in a result which could be verified by two \nseparate systems. \n\n\\subsubsection{Additional Topics}\nSome related ideas merit investigation in the broader context of mathematical computation:\n\\begin{itemize}\n\\item \\textbf{Integrating SMT solvers with interactive theorem provers:} Satisfiability modulo theories \n(SMT) solvers are an important element of automated reasoning and there have been efforts \nanalogous to those described above to bridge SMT solvers with interactive theorem provers \n\\cite{keller2013matter, armand2011modular}. \\\\\n\\item \\textbf{Identifying the most important \/ widely used algorithms in computer algebra:} Computer \nalgebra systems have grown to become massive collections of algorithms extending into domains well \noutside of the realm of mathematics. If the purely mathematical capacities of CASs prove to be \nuseful in future AI systems, it would be valuable to rank order algorithms by their popularity or \nimportance.\\\\\n\nOne approach would be to do basic textual analysis of the source code from GitHub or StackExchange. \nThis would also allow for more targeted efforts to directly address the issues with soundness in core \nalgorithms such as expression simplification or integration. In the context of the HOLCAS system \ndescribed above, for example, it would be valuable to have rough estimates for the number of man-hours \nrequired to implement a minimal CAS with the most widely used functionality on top of a theorem \nprover. \\\\\n\\item \\textbf{Proof checkers for integrated systems:}\nProof checkers are important tools in the landscape of formal verification and theorem proving. 
Indeed, \nas it is often much less computationally expensive to verify the correctness of a proof than to generate it \nfrom scratch, the availability of proof checkers for the widely used interactive theorem provers is one \nreason we can be confident in the correctness of formal proofs \n\\cite{harrison2006towards,pollack1998believe}. \\\\\n\nAs we described above, strategies for integrating computer algebra with theorem provers can \npotentially result in a combined system which is less trustworthy than the theorem prover alone. \nTherefore, the availability of proof checkers for combined systems would be a valuable resource in \nverifying proof correctness, and in certain mathematical domains, potentially provide an avenue for \nsurmounting the need to directly make the CAS itself more architecturally robust. \\\\\n\nThe development of integrated proof checkers is likely to be a substantial undertaking and require novel \narchitectures for integrating the core CAS and ITP systems distinct from what has been described \nabove. However, it is a largely unexplored topic that merits further investigation. \\\\\n\n\\item \\textbf{Analyzing scaling properties of algorithms for computer algebra and theorem proving as a \nfunction of hardware resources:}\nThe premise of the analysis presented above is that CASs (and integrated theorem proving) are likely to \nremain sufficiently architecturally stable and useful on a several decade time-horizon in the construction \nof AI \nsystems. On the other hand, as we argued earlier, it is much less clear that the same will be true of the \nmost visible, NLP-based, consumer-oriented question answering systems. To make these arguments \nmore rigorous, it would be valuable to develop quantitative predictions of what the capabilities will be of \nexisting algorithms for computer algebra and theorem proving when provided with substantially \nexpanded hardware resources. 
For instance, we might examine problems in mathematics or theoretical physics for \nwhich na\\\"{i}ve solutions in CASs are intractable with current resources, but which may be feasible with \nfuture hardware. \\\\\n\n\\item \\textbf{The cognitive science of computer algebra:}\nWhat role has computer algebra played in theoretical physics and mathematics? How has it influenced \nthe thinking process of researchers? Has computer algebra simply been a convenience that has shifted \nthe way problems are solved, or has it fundamentally enabled new problems to be solved that would \nhave been completely intractable otherwise? \\\\\n\nThe cognitive science of mathematical thought is a substantial topic which overlaps with many \nestablished areas of research \\cite{hardy1946psychology, dehaene2011number, drijvers2005computer, \ndrijvers2002learning, lakoff2000mathematics}. However, a systematic review of research in mathematics and theoretical \nphysics since the advent of computer algebra and its role in the mathematical thought process is an \nunderexplored topic. It would be an interesting avenue to pursue in understanding the role that CASs, \nITPs, and integrated systems may come to play in superintelligence, particularly in the case of neuromorphic\nsystems that have been modeled after human cognition. These questions also relate to \nunderstanding the scaling properties of CAS and theorem proving algorithms \nas well as cataloguing the most widely used algorithms in computer algebra. \n\n\\end{itemize}\n\n\\section{Conclusion}\nThe aim of this article has been to examine pre-existing research objectives in computer science and \nrelated disciplines which align with problems relevant to AI safety, thereby providing concrete, practical \ncontext for problems which are otherwise of a longer time horizon than most research. 
In particular, we \nfocused on the notion of ``Oracle AI'' as used in the AI safety community, and observed that the word \noracle has two meanings in the context of superintelligent AI systems. One usage refers to a \nsubsystem of a larger AI system queried for domain-specific tasks, and the other to superintelligent AI \nsystems restricted to only answer questions. \\\\\n\nWe examined contemporary question answering systems (QASs) and argued that due to their \narchitectural heterogeneity, consumer-oriented, NLP-based systems do not readily lend themselves to \nrigorous analysis from an AI safety perspective. On the other hand, we identified computer algebra \nsystems (CASs) as concrete, if primitive, examples of domain-specific oracles. We examined well-known architectural\ndeficiencies with CASs identified by the theorem proving community and argued that the integration of \ninteractive theorem provers (ITPs) with CASs, an objective that has been an area of research in the \nrespective communities for several decades, provides a set of research problems and practical software \nprojects related to the development of powerful and robust math oracles on a multi-decade time horizon. \nIndependent of their role as domain-specific oracles, such systems may also prove to be useful tools for \nAI safety researchers in proving the functional correctness of other components of an AI architecture. \nNatural choices of systems to use would be interfaces for the Wolfram Language, the most widely \nused computer algebra system, with one of the HOL family of theorem provers or Coq, \nboth of which have substantial repositories of formalized proofs \n\\cite{wolfram2015elementary, paulson1989foundation, paulson1994isabelle, bertot2013interactive}, \nor a more modern ITP such as Lean \\cite{de2015lean, LeanMathematica2017}. 
\\\\\n\nRather than representing a bold and profound new agenda, we view these projects as\nconcrete and achievable goals that may pave the way to more substantial research directions. \nBecause the topics we have discussed have a long and rich academic history, there are a number \nof ``shovel-ready'' projects appropriate for students anywhere from undergraduates to PhD students \nand beyond. Good undergraduate research projects would probably start with some basic data science \nto catalogue core computer algebra algorithms by their usage and popularity. From there, it would be \nuseful to have an estimate of what certified implementations of these algorithms would entail, \nwhether formally verified implementations, or along the lines of Kaliszyk and Wiedijk's HOLCAS \nsystem where the CAS is built on top of a theorem prover. Also useful would \nbe a systematic study of the role that computer algebra has played in mathematics and theoretical physics. \nThis would have some interesting overlap with cognitive psychology, and these three projects \ntogether would make for an approachable undergraduate thesis, or a beginning project for a \ngraduate student. A solid PhD thesis devoted to the topic of Oracle AI might involve tackling \napproaches to oracles stemming from reinforcement learning (RL) \\cite{armstrong2016safely, armstrong2017},\nas well as more advanced theorem proving and CAS-related topics such as investigating \nthe development of a hybrid architecture that would allow for proof-checking. \nA student who worked on these projects for several years would develop a unique \nskill set spanning philosophy, machine learning, theorem proving, and computer algebra. \\\\\n\nIn the context of superintelligent oracle AIs which may possess the ability to manipulate\na human user, we differentiate between addressing architectural \nor algorithmic deficiencies in subsystems and pursuing general control methods or containment strategies. 
\nGiven that strong mathematical capabilities\nare likely to be useful in the construction of more general AI systems, designing robust CASs \n(and any other domain-specific oracle)\nis an important counterpart to general control strategies, as the top-level AI system will have fewer loopholes to exploit. \nControlling OAIs poses a distinct set of challenges for which concrete mathematical \nanalysis is in its infancy \\cite{armstrong2016safely, armstrong2017, armstrong2017oracle}. Nonetheless, considering \nhow little attention has been given to the superintelligence control problem in general, we are optimistic \nabout the potential to translate the high-level analyses of OAIs that have arisen in the AI safety \ncommunity into the mathematical and software frameworks of modern artificial intelligence. \n\n\\section*{Acknowledgements}\nWe would like to thank Stuart Armstrong, David Kristoffersson, Marcello Herreshoff, \nMiles Brundage, Eric Drexler, Cristian Calude, and several anonymous reviewers \nfor insightful discussions and feedback on the manuscript. We would also like to thank\nthe guest editors of \\emph{Informatica}, Ryan Carey, Matthijs Maas, Nell Watson,\nand Roman Yampolskiy, for organizing this special issue. \n\n\\bibliographystyle{ieeetr}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdzjs b/data_all_eng_slimpj/shuffled/split2/finalzzdzjs new file mode 100644 index 0000000000000000000000000000000000000000..4d4905db68f7eb313cd643e6c91d22a2d8b81ee1 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdzjs @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\vglue0.4cm\n\nI am very happy to have the opportunity to speak about strong\/weak \ncoupling duality on\nthis occasion honoring the 60th birthday of Professor Keiji Kikkawa. 
\nHis own foundational work on T-duality \\cite{KY}, the \nworldsheet analogue of S-duality, \nwas in many ways instrumental in inspiring the recent \ndevelopments in nonperturbative string theory.\n\nStrong-weak coupling dualities now allow us to determine the strong coupling\ndynamics of string vacua with $N \\geq 4$ supersymmetry in four dimensions\n\\cite{Schwarz}.\nIt is natural to ask if this progress in our understanding of string theory\ncan be extended to the more physical vacua with less supersymmetry.\nFor N=2 theories in four dimensions, \nquantum corrections significantly modify the mathematical structure of\nthe moduli space of vacua, as well as the\nphysical interpretation of its apparent singularities. \nThis was beautifully demonstrated in the field theory case in \n\\cite{SeiWit} and it has more recently become possible to compute the\nexact quantum moduli spaces for N=2 string compactifications as \nwell \\cite{KV,FHSV}. \nThis constitutes the subject of the first part of my talk.\n\nOf course, the case of most physical interest is $N \\leq 1$ theories.\nIn the second part of my talk, I discuss examples of dual heterotic\/type II\nstring pairs\nwhere the heterotic theory is expected to exhibit nonperturbative dynamics\nwhich may fix the dilaton and break supersymmetry \\cite{KS}. The type II \ndual manages to reproduce the qualitative features expected of the\nheterotic side at tree level. It is to be hoped that further work\nalong similar lines will result in a better understanding of\nsupersymmetry breaking in string theory.\n\nThe first part of this talk is based on joint work with C. Vafa, and the\nsecond part of this talk is based on joint work with E. 
Silverstein.\n\n\\section{N=2 Gauge Theory and String Compactifications}\n\nRecall that the N=2 gauge theory with gauge group $SU(2)$ is the theory of\na single N=2 vector multiplet consisting of a vector $A^{\\mu}$, \ntwo Weyl fermions $\\lambda$ and $\\psi$,\nand a complex scalar field $\\phi$, all in the adjoint representation of \n$SU(2)$. \nIn N=1 language, this is a theory of an N=1 vector multiplet\n$(\\lambda, A^{\\mu})$ coupled to an N=1 chiral multiplet $(\\phi, \\psi)$.\nThe scalar potential of the theory is determined by supersymmetry to\nbe \n\\begin{equation}\nV(\\phi) = {1\\over g^{2}} [\\phi, \\phi^{+}]^{2}\n\\end{equation}\nWe see that $V$ vanishes as long as we take $\\phi = diag (a,-a)$, so there\nis a moduli space of classical vacua parameterized by the\ngauge invariant parameter $u = tr(\\phi^{2})$.\n\nAt generic points in this moduli space ${\\cal M}_{v}$ of vacua, there is\na massless N=2 U(1) vector multiplet $A$. The leading terms in its effective\nlagrangian are completely determined in terms of a single holomorphic\nfunction $F(A)$, the prepotential:\n\\begin{equation}\nL \\sim \\int d^{4}\\theta {\\partial F \\over {\\partial A}}\\bar A +\n\\int d^{2}\\theta {\\partial^{2} F \\over{\\partial A^{2}}}W_{\\alpha}W^{\\alpha}\n + c.c.\n\\end{equation} \nThe first term determines, in N=1 language, the Kahler potential (and hence\nthe metric on ${\\cal M}_{v}$) while the second term determines the\ngauge coupling as a function of moduli.\n\nIn \\cite{SeiWit} the exact form of $F$ including \ninstanton corrections was determined. \nIn addition, the masses of all of the BPS saturated particles were \ncomputed. This was reviewed in great detail in several other talks\nat this conference, so I will not repeat the solution here. 
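Returning briefly to the classical moduli space described above: the flatness of $V$ along $\phi = \mathrm{diag}(a,-a)$ is a one-line check, included here for completeness,

```latex
% For \phi = \mathrm{diag}(a, -a) with a complex:
\left[\phi,\phi^{+}\right] = \phi\phi^{+} - \phi^{+}\phi
  = \mathrm{diag}\left(|a|^{2},\, |a|^{2}\right)
  - \mathrm{diag}\left(|a|^{2},\, |a|^{2}\right) = 0,
% hence V(\phi) = {1\over g^{2}}\,[\phi,\phi^{+}]^{2} = 0 for every complex a,
% and the flat direction is parameterized by u = tr(\phi^{2}) = 2a^{2}.
```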
It will\nsuffice to say that \nthe crucial insight is that the\nsingular \npoint $u=tr(\\phi^{2}) = 0$ where $SU(2)$ gauge symmetry is restored in\nthe classical theory splits, in the quantum theory, into two \nsingular points\n$u = \\pm \\Lambda^{2}$, where a monopole and a dyon become massless. \n\nIn this talk our interest \nis not really in $N=2$ gauge theories but in the string\ntheories which reduce to $N=2$ gauge theories in the infrared. \nThere are two particularly simple classes of $d=4, N=2$ supersymmetric\nstring compactifications. One obtains such theories from Type II (A or B)\nstrings on Calabi-Yau manifolds, and from heterotic strings on $K_{3}\n\\times T^{2}$ (with appropriate choices of instantons on the $K_3$). \nHere we briefly summarize some basic properties of these theories. \n\nType IIA strings on a Calabi-Yau threefold $M$ give rise to a four-dimensional\neffective theory with $n_v$ vector multiplets and $n_{h}$ hypermultiplets\nwhere \n\\begin{equation}\nn_{v} = h^{1,1}(M), ~~n_{h} = h^{2,1}(M) + 1 \n\\end{equation}\nThe $+1$ in $n_{h}$ corresponds to the fact that for such type II\nstring compactifications, the $\\it dilaton$ is in a hypermultiplet.\n\nThe vector fields in such a theory are Ramond-Ramond U(1)s, so there\nare no charged states in the perturbative string spectrum. Furthermore, \nbecause of the theorem of de Wit, Lauwers, and Van Proeyen \\cite{dLVP}\nwhich forbids couplings of vector multiplets to neutral hypermultiplets\nin N=2 effective lagrangians, the dilaton does not couple to the vector \nmoduli. 
This means that there are no perturbative or nonperturbative\ncorrections to the moduli space of vector multiplets.\nOn the other hand, the moduli spaces of hypermultiplets are expected to\nreceive highly nontrivial corrections, including ``stringy'' corrections\nwith $e^{-1\/g}$ strength \\cite{BBS}.\n \nOne interesting feature of the moduli spaces of vector multiplets in\nsuch theories is the existence of conifold points at finite distance\nin the moduli space. At such points the low energy effective theory\nbecomes singular (e.g., the prepotential develops a logarithmic\nsingularity) \\cite{CDGP}. This phenomenon is reminiscent of \nthe singularities in the prepotential which occur at the ``massless\nmonopole'' points in the Seiberg-Witten solution of N=2 gauge theory,\nsingularities which are only present because one has integrated out a\ncharged field which is becoming massless. In the case at hand, in fact,\none can show that there are BPS saturated states (obtained by wrapping\n2-branes around collapsing 2-cycles) which become massless and which\nare charged under (some of) the Ramond-Ramond $U(1)$s \\cite{Strominger}. These\nexplain the singularity in the prepotential. In fact, at special points\nof this kind, where enough charged fields (charged under few enough\n$U(1)$s) become massless, one can give them VEVs consistent with\nD and F flatness. This results in new ``Higgs branches'' of the moduli\nspace. These new branches correspond to string compactifications on\ndifferent Calabi-Yau manifolds, topologically distinct from $M$\n\\cite{GMS}, and there is evidence that all Calabi-Yau \ncompactifications may be connected\nin this manner \\cite{Cornell,Texas}. \n\nThe other simple way of obtaining an N=2 theory in four dimensions from\nstring theory is to compactify the heterotic string (say $E_8\\times E_8$)\non $K_3 \\times T^2$. 
Because of the Bianchi identity\n\\begin{equation}\ndH = Tr(R\\wedge R) - Tr(F\\wedge F)\n\\end{equation} \none must embed 24 instantons in the $E_8 \\times E_8$ in order\nto obtain a consistent theory.\nAn $SU(N)$ $k$-instanton on $K_3$ comes with $Nk + 1 - N^2$ hypermultiplet moduli\n(where $k\\geq 2N$), and $K_3$ comes with 20 hypermultiplet moduli\nwhich determine its size and shape. \nEmbedding an $SU(N)$ instanton in $E_8$ breaks the observable low\nenergy gauge group to the maximal subgroup of $E_8$ which commutes with\n$SU(N)$ ($E_7$ for N=2, $E_6$ for N=3, and so forth). \n\nIn addition, there are three $U(1)$ vector multiplets associated with \nthe $T^2$. Their scalar components are the dilaton $S$ and the\ncomplex structure and Kahler moduli $\\tau$ \nand $\\rho$ of the torus (both of which live on the upper half-plane\n$H$ mod $SL(2,Z)$). At special points in the moduli space\nthe $U(1)^2$ associated with $\\tau$ and $\\rho$ is enhanced to\na nonabelian gauge group:\n\\begin{equation}\n\\tau = \\rho \\rightarrow SU(2)\\times U(1),~~ \\tau=\\rho=i \\rightarrow SU(2)^2, \n~~\\tau = \\rho = 1\/2 + i{\\sqrt 3}\/2 \\rightarrow SU(3)\n\\end{equation}\n\n\nBecause the dilaton lives in a vector multiplet in such compactifications,\nthe moduli space of vectors is modified by quantum effects. On the other\nhand, the moduli space of hypermultiplets receives neither perturbative nor\nnonperturbative corrections. \n\nAn interesting feature of the heterotic ${\\cal M}_{v}$ \nis the existence of special points where the\nclassical theory exhibits an enhanced gauge symmetry (as described\nabove for the compactification on $T^2$).\nSometimes by \nappropriate passage to a Higgs or Coulomb phase, such enhanced gauge\nsymmetry points link moduli spaces of N=2 heterotic theories which\nhave different generic spectra (for some examples see\n\\cite{KV,AFIQ}). 
It is natural to conjecture that \nsuch transitions connect all heterotic N=2 models, in much the same way\nthat conifold transitions connect Calabi-Yau compactifications of type II\nstrings.\n\n\n\\section{N=2 String-String Duality}\n\nFrom the brief description of heterotic and type II N=2 vacua in the\nprevious section, it is clear that a duality relating the two classes of\ntheories would be extremely powerful. If one were to find a model with dual\ndescriptions as a compactification of the Type IIA string on $M$ and \nthe heterotic string on $K_3 \\times T^{2}$, one could compute the exact\nprepotential for ${\\cal M}_{v}$ from the Type IIA side (summing up what from\nthe heterotic perspective would be an infinite series of instanton \ncorrections). Similarly, one would get exact results for ${\\cal M}_{h}$\nfrom the heterotic side -- this would effectively compute the $e^{-1\/g}$\ncorrections expected from the IIA perspective. In fact, such a duality \nhas been found to occur in several examples in \\cite{KV,FHSV}.\n\nOne of the simplest examples is as follows.\nConsider the heterotic string compactified to eight dimensions on $T^{2}$ \nwith $\\tau = \\rho$. Further compactify on a $K_{3}$, satisfying the\nBianchi identity for the $H$ field by embedding \n$c_{2}=10$ $SU(2)$ instantons in each $E_8$ and a $c_{2}=4$ $SU(2)$ instanton\ninto the ``enhanced'' $SU(2)$ arising from the $\\tau=\\rho$ torus.\nAfter Higgsing the remaining $E_7$ gauge groups one is left with a generic\nspectrum of 129 hypermultiplets and 2 vector multiplets.\nThe 2 vectors are $\\tau$ and the dilaton $S$ -- when $\\tau = i$, one expects\nan $SU(2)$ gauge symmetry to appear (the other $SU(2)$ factor that \nwould normally\nappear there has been broken in the compactification process). 
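As a consistency check (our arithmetic, using the instanton moduli count of the previous section; the three ${\\bf 56}$'s per $E_7$ are the standard matter content of a $c_{2}=10$ $SU(2)$ instanton embedded in $E_8$), the hypermultiplet count assembles as
\\begin{equation}
2(2\\cdot 10+1-4) + (2\\cdot 4+1-4) + 20 + 2(3\\cdot 56 - 133)
= 34 + 5 + 20 + 70 = 129
\\end{equation}
where the first two terms are instanton moduli, the third counts the $K_3$ moduli, and the last counts the singlets left after completely Higgsing the two $E_7$ factors, in agreement with the spectrum just quoted.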
\n\n\n\nThis tells us that if there is a type IIA dual compactification on a \nCalabi-Yau $M$, then the Betti numbers of $M$ must be \n\\begin{equation} \nh_{11}(M) = 2, ~~h_{21}(M) = 128\n\\end{equation} \nThere is a known candidate manifold with these Betti numbers -- the\ndegree 12 hypersurface in $WP^{4}_{1,1,2,2,6}$ defined by the vanishing of\n$p$\n\\begin{equation}\np = z_{1}^{12} + z_{2}^{12} + z_{3}^{6} + z_{4}^{6} + z_{5}^{2} + ....\n\\end{equation}\nThis manifold has in fact been studied intensively as a simple example of\nmirror symmetry in \\cite{Hosono,Morrison}. \n\nThe mirror manifold $W$ has $h_{11}(W) = 128, h_{21}(W) = 2$. The conjecture\nthat IIA on $M$ is equivalent to the heterotic string described above \nimplies that IIB on $W$ is also equivalent to that heterotic\nstring. The structure of the moduli space of vector \nmultiplets of the heterotic string should be\n$\\it exactly$ given by the classical (in both sigma model and string\nperturbation theory) moduli space of complex structures of $W$. \n\nThe mirror manifold can be obtained by orbifolding\n$p=0$ by the maximal group of phase symmetries which preserves the\nholomorphic three-form \\cite{GP}. Then the two vector moduli are represented\nby $\\psi$ and $\\phi$ in the polynomial \n\\begin{equation}\np = z_{1}^{12} + z_{2}^{12} + z_{3}^{6} + z_{4}^{6} + z_{5}^{2} - 12 \\psi\nz_{1}z_{2}z_{3}z_{4}z_{5} - 2\\phi z_{1}^{6} z_{2}^{6} \n\\end{equation} \nIt is also useful, following \\cite{Hosono}, to introduce\n\\begin{equation}\nx = {-1\\over 864} {\\phi\\over \\psi^{6}}, ~~y = {1\\over \\phi^{2}}\n\\end{equation} \nThese are the convenient ``large complex structure'' coordinates on the\nmoduli space of vector multiplets for the IIB string. \n\nIn order to test our duality conjecture, we should start by checking\nthat the IIB string reproduces some qualitative features that we expect\nof the heterotic ${\\cal M}_{v}$. 
For example, $\\tau = i$ for weak coupling\n$S\\rightarrow \\infty$ is an $SU(2)$ point. There should therefore be a \nsingularity of ${\\cal M}_{v}$ at this point which splits, as one turns on \nthe string coupling, into $\\it two$ singular points (where monopoles\/dyons\nbecome massless), as in the case of pure $SU(2)$ gauge theory.\n\nThe ``discriminant locus'' where the IIB model becomes singular is given\nby\n\\begin{equation}\n(1-x)^{2} - x^{2} y = 0\n\\end{equation}\nSo we see that as a function of $y$ for $y \\neq 0$ there are two solutions\nfor $x$, and as $y \\rightarrow 0$ they merge to a single singular point\n$x=1$. This encourages us to identify $x=1, y=0$ with $\\tau =i, S \\rightarrow\n\\infty$ of the heterotic string -- the $SU(2)$ point. The metrics on the\nmoduli space for $y$ near $y=0$ and for $S$ at weak coupling also agree if one \nmakes the identification $y\\sim e^{-S}$.\n\nThere is also a remarkable observation in \n\\cite{Morrison} that the mirror map, restricted to $y=0$, is given by\n\\begin{equation}\nx = {j(i)\\over j(\\tau_{1})}\n\\end{equation}\nwhere $\\tau_{1}$ is one of the coordinates on the Kahler cone of $M$.\nHere $j$ is the elliptic j-function mapping $H\/SL(2,Z)$ onto $C$.\nThis tells us that the classical heterotic $\\tau$ moduli space,\nwhich is precisely $H\/SL(2,Z)$, is embedded in the moduli space of $M$\nat weak coupling precisely as expected from duality.\nIn fact, using the uniqueness of special coordinates up to rotations,\none can find the exact formula expressing the IIB coordinates\n$(x,y)$ in terms of the heterotic coordinates $(\\tau,S)$. \n\nOf course with this map in hand there are now several additional things\none can check. 
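Explicitly (a one-line computation), the discriminant equation factorizes as
\\begin{equation}
(1-x)^{2} = x^{2}y ~~\\Longrightarrow~~ x = {1\\over 1\\pm \\sqrt{y}}
\\end{equation}
exhibiting the two branches which merge at $x=1$ as $y\\rightarrow 0$. Note also that $j(i)=1728$, so the mirror map formula above indeed sends $\\tau_{1}=i$ to the singular point $x=1$ on the slice $y=0$.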
The tests which have been performed in \n\\cite{KV,KLT,AGNT,KKLMV} include\n\\medskip\n\n\\noindent 1) A matching \nof the expected loop corrections to the heterotic prepotential\nwith the form of the tree-level exact Calabi-Yau prepotential.\n\\medskip\n\n\\noindent 2) A test that \nthe g-loop F-terms computed by the topological partition\nfunctions $F_g$ on the type II side (which include e.g. $R^{2}$ and other\nhigher derivative terms) are reproduced by appropriate (one-loop!)\ncomputations on the heterotic side.\n\\medskip\n\n\\noindent 3) A demonstration that in an appropriate double-scaling limit,\napproaching the $\\tau = i$, $S \\rightarrow \\infty$ point of the heterotic\nstring while taking $\\alpha^{'} \\rightarrow 0$, the IIB prepotential\nreproduces the exact prepotential of $SU(2)$ gauge theory \n(including Yang-Mills instanton effects) computed in \\cite{SeiWit}.\n\n\\medskip\nThese tests give very strong evidence in favor of the conjectured duality.\nGiven its veracity, what new physics does the duality bring into reach?\n\\medskip\n\n\\noindent $\\bullet$ One now has examples of four-dimensional theories\nwith exactly computable quantum gravity corrections. In the example\ndiscussed above, the\nSeiberg-Witten prepotential which one finds in an expansion\nabout $\\tau = i, S \\rightarrow \\infty$\nreceives gravitational corrections which\nare precisely computable as a power series in $\\alpha^{'}$. \n\\medskip\n\n\\noindent $\\bullet$ On a more conceptual level, the approximate duality of\n\\cite{SeiWit} between a microscopic $SU(2)$ theory (at certain points in its\nmoduli space) and a $U(1)$ monopole\/dyon theory is promoted to an\n$\\it exact$ duality, valid at all wavelengths, between heterotic and\ntype II strings. 
\n\\medskip\n\n\\noindent $\\bullet$ There is evidence that at strong heterotic coupling,\nnew gauge bosons and charged matter fields appear, sometimes giving rise\nto new branches of the moduli space \\cite{KMP,KM}.\n\\medskip\n\n\\noindent$\\bullet$ The $e^{-1\/g}$ corrections to the hypermultiplet moduli\nspace of type II strings are in\nprinciple exactly computable using duality (and may be of some mathematical\ninterest).\n\n\\medskip\nOne might wonder what is special about the Calabi-Yau manifolds which are\ndual to weakly coupled heterotic strings.\nIn fact it was soon realized that the examples of duality in \\cite{KV} involve\nCalabi-Yau manifolds which are $K_3$ fibrations \\cite{KLM}. That is, locally \nthe Calabi-Yau looks like $CP^{1}\\times K_{3}$.\nMoreover, one can prove that if the type IIA string on a Calabi-Yau $M$\n(at large radius) \nis dual to a weakly coupled heterotic string, then $M$ must be a $K_3$\nfibration \\cite{AL}.\n\nTo make this more concrete,\nin the example of the previous section, we saw $M$ was defined by the\nvanishing of\n\\begin{equation}\np = z_{1}^{12} + z_{2}^{12} + z_{3}^{6} + z_{4}^{6} + z_{5}^{2} + ...\n\\end{equation}\nin $WP^{4}_{1,1,2,2,6}$. Set \n$z_{1}=\\lambda z_{2}$ and define $y=z_{1}^{2}$ (which is an allowed\nchange of variables since an identification on the $WP^{4}$ takes \n$z_{1} \\rightarrow -z_{1}$ without acting on $z_{3,4,5}$). Then the\npolynomial becomes (after suitably rescaling to absorb $\\lambda$)\n\\begin{equation}\np = y^{6} + z_{3}^{6} + z_{4}^{6} + z_{5}^{2} + ... \n\\end{equation}\nwhich defines a $K_{3}$ surface in $WCP^{3}_{1,1,1,3}$. The choice\nof $\\lambda$ in $z_{1}=\\lambda z_{2}$ is a point on $CP^{1}$, and the\n$K_{3}$ for fixed choice of $\\lambda$ is the fiber. \n\nIt is not surprising that $K_3$ fibrations play a special role in \n4d N=2 heterotic\/type II duality. 
Indeed the most famous example of\nheterotic\/type II duality is the 6d duality between heterotic strings on \n$T^{4}$ and type IIA strings on $K_{3}$ \\cite{HT,Witten}. If \none compactifies the type IIA\nstring on a CY threefold which is a $K_3$ fibration, and simultaneously\ncompactifies the heterotic string on a $K_{3}\\times T^{2}$ where the $K_3$\nis an elliptic fibration, then locally one can imagine taking the bases\nof both fibrations to be large and obtaining in six dimensions an example\nof the well-understood 6d string-string duality \\cite{VW}. This picture is not\nquite precise because of the singularities in the $K_3$ fibration,\nbut it does provide an intuitive understanding of the special role of\n$K_3$ fibrations.\n\n\\section{N=1 Duality and Gaugino Condensation}\n \nStarting with an $N=2$ dual pair of the sort discussed above, one can try\nto obtain an $N=1$ dual pair by orbifolding both sides by freely acting\nsymmetries. This strategy was used in \\cite{VW,HLS} where several\nexamples with trivial infrared dynamics were obtained. Here we will\nfind that examples with highly nontrivial infrared dynamics can also\nbe constructed \\cite{KS}.\n\nOur starting point is \nan N=2 dual pair \n(IIA on a Calabi-Yau $M$ and heterotic on $K_{3}\\times T^{2}$)\nwhere the heterotic gauge group takes the form\n\\begin{equation}\nG ~=~E_{8}^{H} \\otimes E_{7}^{obs}\\otimes ...\n\\end{equation}\n$H$ denotes the hidden sector and $obs$ the observable sector.\nWe will first discuss the technical details of the $Z_2$ symmetry by \nwhich we can orbifold both sides\nto obtain an $N=1$ dual pair, and then we discuss the physics of the\nduality.\n\n\nOrbifold the heterotic side by the Enriques involution \nacting on $K_3$ \nand a total reflection on the $T^{2}$. 
This acts on the base of the elliptic\nfibration $(z_{1},z_{2})$ by\n\\begin{equation}\n(z_{1}, z_{2}) ~ \\rightarrow ~ (\\bar z_{2}, - \\bar z_{1})\n\\end{equation}\ntaking $CP^{1} \\rightarrow RP^{2}$.\nIn addition, we need to choose a lifting of the orbifold group to the gauge\ndegrees of freedom.\n\nWe do this as follows:\n\n\\noindent $\\bullet$ Put a modular invariant embedding into the ``observable''\npart of the gauge group alone.\n\n\\noindent $\\bullet$ Embed the translations which generate the $T^2$ into\n$E_{8}^{H}$, constrained by maintaining level-matching and the relations of\nthe space group. For example, one could take Wilson lines $A_{1,2}$ \nalong the $a$\nand $b$ cycle of the $T^{2}$ given by\n\\begin{equation}\nA_{1} = {1\\over 2} (0,0,0,0,1,1,1,1),~~~A_{2} = {1\\over 2}(-2,0,0,0,0,0,0,0)\n\\end{equation}\nHere $A_{1,2} = {1\\over 2}L_{1,2}$ where $L_{1,2}$ are vectors in the \n$E_8$ root lattice. These Wilson lines break the $E_{8}^{H}$ gauge\nsymmetry to $SO(8)_{1} \\otimes SO(8)_{2}$. \n\n\\medskip\nHow does the $Z_2$ map over to the type II side?\nFrom the action\n\\begin{equation}\n(z_{1},z_{2}) ~\\rightarrow ~(\\bar z_{2}, -\\bar z_{1})\n\\end{equation} \non the $CP^{1}$ base (which is common to both the heterotic and type II sides),\nwe infer that the $Z_2$ must be an antiholomorphic, orientation-reversing \nsymmetry of the Calabi-Yau manifold $M$.\nTo make this a symmetry of the type IIA string theory, we must simultaneously\nflip the worldsheet orientation, giving us an ``orientifold.'' \nIn such a string theory, one only includes maps $\\Phi$ of the worldsheet\n$\\Sigma$ to spacetime $M\/Z_{2}$ if they satisfy\n\\begin{equation}\n\\Phi^{*}(w_{1}(M\/Z_{2})) = w_{1}(\\Sigma)\n\\end{equation} \nwhere $w_1$ is the first Stiefel-Whitney class.\n\nWe know from 6d string-string duality that the Narain lattice $\\Gamma^{20,4}$\nof heterotic string compactification on $T^4$ maps to the integral cohomology\nlattice of the dual $K_{3}$. 
This means that we can infer from the action\nof the $Z_2$ on the heterotic gauge degrees of freedom, what the action of\nthe $Z_2$ must be on the integral cohomology of the $K_3$ fiber on the IIA\nside.\nSince we are frozen on the heterotic side at a point with $SO(8)^{2}$ gauge\nsymmetry in the hidden sector, the dual $K_3$ must be frozen at its singular\nenhanced gauge symmetry locus. \n\nThe $K_3$ dual to heterotic enhanced gauge symmetry $G$ has rational\ncurves $C_i$, $i = 1,...,rank(G)$ shrinking to zero area (with the \nassociated $\\theta_{i}=0$ too). It is easy to see, e.g. from Witten's\ngauged linear sigma model that in this situation the type II theory\nindeed exhibits an extra $Z_2$ symmetry. The bosonic potential of\nthe relevant gauged linear sigma model (for the case of a single\nshrinking curve) is given by \n$$V=\n{1\\over{2e^2}}\n\\sum_i\\biggl\\{\\biggl(\\bigl[\\sum_\\alpha Q_i^\\alpha(|\\phi^i_\\alpha|^2\n-|\\tilde\\phi^i_\\alpha|^2)\\bigr]-r_i^0\\biggr)^2$$\n$$+\\biggl(Re(\\sum_\\alpha\\phi^i_\\alpha\\tilde\\phi^i_\\alpha)-r_i^1\\biggr)^2\n+\\biggl(Im(\\sum_\\alpha\\phi^i_\\alpha\\tilde\\phi^i_\\alpha)-r_i^2\\biggr)^2\n\\biggr\\}$$ \n$$+{1\\over 2}\\sum_i\\bigl[\\sum_\\alpha Q^{\\alpha~2}_i\n(|\\phi^i_\\alpha|^2+|\\tilde\\phi^i_\\alpha|^2)\\bigr]|\\sigma_i|^2$$ \nHere the $\\phi$s represent the $K_3$ coordinates while $r$ parametrizes\nthe size of the curve and $\\sigma$ is the Kahler modulus.\nPrecisely when $\\vec r \\rightarrow 0$, the model has the $Z_2$ symmetry\n$\\phi \\rightarrow -\\tilde \\phi$, $\\sigma \\rightarrow - \\sigma$. \nOrbifolding\nby this $Z_2$ then freezes the $K_3$ at its enhanced gauge symmetry locus,\nas expected.\n\nWhat is the physics of the dual pairs that one constructs in this manner?\nIn the heterotic string, when there is a hidden sector pure gauge group\n\\begin{equation}\nG^{hidden} = \\Pi ~G^{b}\n\\end{equation}\none expects gaugino condensation to occur. 
This induces an effective\nsuperpotential\n\\begin{equation} \nW = \\sum ~h_{b}~ \\Lambda_{b}^{3}(S)\n\\end{equation}\nwhere $\\Lambda_{b}(S) \\sim e^{-\\alpha_{b} S}$ and $\\alpha_{b}$ is \nrelated to the\nbeta function for the running $G_b$ coupling. It was realized early on\n\\cite{Krasnikov,DKLP} that in such models (with more than one hidden factor)\none might expect both stabilization of the dilaton and supersymmetry \nbreaking.\nIt has remained a formidable problem to determine which (if any) such models\nactually do have a stable minimum at weak coupling with broken supersymmetry.\n\nFor now, we will be content to simply understand how the $\\it qualitative$\nstructure of the heterotic theory (e.g. the gaugino-condensation induced\neffective superpotential) is reproduced by the type II side.\nThis is mysterious because the type II N=2 theory we orientifolded had only\nabelian gauge fields in its spectrum, so we need to reproduce the strongly\ncoupled nonabelian dynamics of the heterotic string with an $\\it abelian$\ngauge theory on the type II side.\n\nThe heterotic orbifold indicates the spectrum of the string theory as\n$g_{het} \\rightarrow 0$. The\nheterotic dilaton $S$ maps to the radius $R$ of the $RP^{2}$ base of the\ntype II orientifold\n(recall one obtains the $RP^2$ by orbifolding the\nbase $P^1$ of the $K_3$ fibration)\n\\begin{equation}\nS_{het} \\leftrightarrow R_{RP^{2}}\n\\end{equation} \nThe purported stable vacuum of the heterotic theory should then be expected\nto lie at large radius for the base, and on the (orientifold of the) conifold\nlocus dual to enhanced gauge symmetry. There are two crucial features of this\nlocus:\n\\medskip\n\n\\noindent 1) The $RP^2$ base has $\\pi_{1}(RP^{2}) = Z_{2}$. So a state\nprojected out in orientifolding the N=2 theory will have a massive version\ninvariant under the $Z_2$. Say $\\beta \\in \\pi_{1}(RP^{2})$ \nis the nontrivial element. 
Take $x$ a coordinate along an appropriate\nrepresentative of $\\beta$ -- a representative can be obtained by taking \nthe image of a great circle on the original base $P^1$ after \norientifolding. \nThen if the original non-invariant vertex operator was $V$, a\nnew invariant vertex operator is given adiabatically by\n\\begin{equation}\nV^{\\prime} = e^{ix \\over R} V \n\\end{equation}\nThe $Z_2$ takes $x$ to $x + \\pi R$ and therefore $V^{\\prime}$ is invariant\nif $V$ was not. In particular this gives us massive versions of the\nscalars $a^{i}_{b,D}$ in the N=2 vector multiplets for $G^{b}$ with masses\n\\begin{equation}\nM_{a} \\sim {1\\over R^{2}}\n\\end{equation}\nEffectively, for very large $R$, one is restoring the original N=2 \nsupersymmetry. \n\\medskip \n\n\\noindent 2) The low energy theory for IIA at the conifold locus contains\nmassless $\\it solitonic$ states \\cite{Strominger}. One can see that\nthey survive the N=2 $\\rightarrow$ N=1 orientifolding by examining \nthe behavior of the gauge couplings \\cite{VW}. These extra solitonic\nstates play the role of the ``monopole hypermultiplets'' $M_{i}^{b}, \n\\tilde M_{i}^{b}$ of the N=2 theory. \n\n\\medskip\nThese two facts taken together imply that as $R \\rightarrow \\infty$ there\nis an effective superpotential\n\\begin{equation}\nW_{II} = \\sum_{b} \\left( m_{b}u^{b}_{2}(a^{i}_{b,D},R) +\n\\sum_{i=1}^{rank(b)} M_{i}^{b} a^{i}_{b,D}\\tilde M^{b}_{i} \\right)\n\\end{equation} \nwhere $u^{b}_{2}$ is the precise analogue of $u$ of \\S2 for $G^b$ and \nits functional dependence on $R$ can be found from the $N=2$ dual pair. \nAs we'll now discuss, this structure\n\\medskip \n\n\\noindent a) Allows us to reproduce the gaugino-condensation induced\neffective superpotential of the heterotic side. 
\n\\medskip\n\n\\noindent b) Implies $\\langle M \\rangle \\neq 0$, suggesting a geometrical\ndescription of the type II side by analogy with N=2 conifold transitions.\n\n\\medskip\nTo see a), recall how the physics of N=1 $SU(2)$ gauge theory is\nrecovered from the N=2 theory in \\cite{SeiWit}. One can obtain the\nN=1 theory by giving a bare mass to the adjoint scalar in the N=2\nvector multiplet and integrating it out. In the vicinity of the\nmonopole points this means there is an effective superpotential \n\\begin{equation}\nW = m u(a_{D}) + \\sqrt{2} a_{D} M \\tilde M\n\\end{equation}\nUsing the equations of motion and D-flatness, one finds\n\\begin{equation}\n\\vert \\langle M \\rangle \\vert = \n\\vert \\langle \\tilde M \\rangle \\vert \n= ( -mu^{\\prime}(0)\/\\sqrt{2})^{1\/2},~~a_{D} = 0 \n\\end{equation}\nThe monopoles condense and give a mass to the (dual) $U(1)$ gauge field\nby the Higgs mechanism, leaving a mass gap. Two vacua arise in this way --\none at each of the monopole\/dyon points -- in agreement with the \nWitten index computation for pure $SU(2)$ gauge theory. \n \nIn our case, we expect that condensation of the massless solitons will lead\nto the gaugino condensation induced superpotential (and, perhaps, supersymmetry\nbreaking). To see this we must expand $W_{II}$ in $a^{i}_{b,D}$ \nin anticipation of finding\na minimum at small $a_{D}$ (since the minimum was at $a_{D} = 0$ in the global\ncase). 
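For completeness, the global-theory vacuum recalled above follows from F-flatness of $W$ (together with D-flatness, which sets $\\vert M\\vert = \\vert \\tilde M \\vert$):
\\begin{equation}
\\partial_{M}W = \\sqrt{2}\\,a_{D}\\tilde M = 0,~~~
\\partial_{\\tilde M}W = \\sqrt{2}\\,a_{D}M = 0,~~~
\\partial_{a_{D}}W = m\\,u^{\\prime}(a_{D}) + \\sqrt{2}\\,M\\tilde M = 0
\\end{equation}
For nonvanishing monopole VEVs the first two conditions force $a_{D}=0$, and the third then gives $M\\tilde M = -m\\,u^{\\prime}(0)\/\\sqrt{2}$.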
\nUsing\n\\begin{equation}\nu_{2}(a^{i}_{b,D},R) = e^{i\\gamma^{b}}\\Lambda_{b}^{2}(R) + \\cdots \n\\end{equation}\n(where $\\gamma^{b}$ comes from the phase of the gaugino condensate)\nas well as the matching condition\n\\begin{equation}\nm_{b}\\Lambda_{b,high}^{2} = \\Lambda_{b,low}^{3}\n\\end{equation}\nwhich one obtains from integrating out the massive adjoint scalar, one finds \n\\begin{equation}\nW_{II} = \\sum_{b} e^{i\\gamma^{b}}\\Lambda_{b}^{3}(S) + \\cdots\n\\end{equation} \nSimply minimizing the supergravity scalar potential\n\\begin{equation} \nV = e^{K}(D_{i}W G^{i\\bar j}D_{\\bar j}W - 3 \\vert W \\vert^{2}) + \n{1\\over 2}g^{2}D^{2}\n\\end{equation}\nwe also find that \n\\begin{equation}\n\\langle M_{i}^{b} \\tilde M_{i}^{b} \\rangle = -h_{b}m_{b}u_{2,i}^{b}(S) - K_{i}\nW\n\\end{equation}\nThat is, the ``wrapped two-branes'' which give us the massless monopoles have\ncondensed, in accord with the global result.\nSo integrating out the massive $M, \\tilde M$ and adjoint scalar degrees of\nfreedom yields the same form of bosonic potential that we expect from\ngaugino condensation on the heterotic side.\n\nIn summary, we have argued that the type II dual description of the effects\nof gaugino condensation involves a mass perturbation breaking N=2 \nsupersymmetry. One cannot add mass terms by hand in string theory: The\ntype II orientifold produces the requisite massive mode as a Kaluza-Klein\nexcitation of the original N=2 degrees of freedom that were projected out.\n\nOne intriguing feature of the IIA vacuum is the nonzero VEVs for the wrapped\ntwo-branes $M, \\tilde M$. In the N=2 context $\\langle M \\rangle \\neq 0 \n\\rightarrow$ conifold transition. There is a well known geometrical \ndescription\nof the conifold points. For example, in the IIB theory the conifold \nin vector multiplet moduli space is obtained by going to a point in \n${\\cal M}_{complex}$ where there is a cone over $S^{3}\\times S^2$\nin the Calabi-Yau. 
One can either ``deform the complex structure'' (return\nto the Coulomb phase, in physics language) by deforming the tip of the\ncone into an $S^3$, or one can do a ``small resolution'' and blow the tip\nof the cone into an $S^2$. The latter corresponds to moving to a new\nHiggs phase, in the N=2 examples \\cite{GMS}.\n\nIt was noted long ago by Candelas, De La Ossa, Green, and Parkes \\cite{CDGP}\nthat at a $\\it generic$ conifold singularity such a small resolution does\nnot produce a Kahler manifold. They speculated that such nonKahler resolutions\nmight correspond to supersymmetry breaking directions. It is natural\nto suggest that we might be seeing a realization of that idea by duality.\nThe analogy with N=2 conifold transitions suggests that\n$\\langle M \\rangle \\neq 0 \\rightarrow$ nonKahler resolution. One can hope\nthat this will provide a useful dual view of supersymmetry breaking in \nstring theory.\n\n\n\\vfill\\eject \n\n\\section{Introduction}\n \nWith recent advancements in VLSI technologies, the manufacturing of integrated circuits (ICs) involves multiple companies, introducing new security challenges at each stage of IC production. \nHardware Trojans (HTs) are defined as any undesired modification to an IC that can lead to erroneous outputs and\/or leakage of information~\\cite{pan2021automated}. 
According to the adversarial model introduced by~\\cite{shakya2017benchmarking}, HTs can be inserted into the target IC in three different scenarios: through infected third-party intellectual properties (IPs) (e.g., processing cores, various I\/O components, and network-on-chip~\\cite{sarihi2021survey}); through disgruntled employees at the integration stage; or through reverse-engineering by an untrusted foundry.\n\n\\begin{comment}\n\n \\begin{itemize}\n \\item The first scenario is through intellectual properties (IPs) (e.g., processing cores, memory modules, various I\/O components, and network-on-chip~\\cite{sarihi2021survey}) that are purchased from IP vendors to expedite the time-to-market of an IC and reduce design costs. As a result of integrating an infected third-party IP, HTs can be inserted into the IC. \n \\item Second, designs by engineers at a company can be attacked by compromised employees at the integration stage to inflict harm on the integrity of the IC. \n \\item Third, an untrusted foundry can reverse-engineer the design and insert HTs to infect chips at the fabrication stage.\n \\end{itemize}\n\n\\end{comment}\n\n\\begin{comment}\n\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=.57]{figures\/Trigger_payload.pdf}\n \\%vspace{-3mm}\n \\caption{An HT with a trigger and payload. Whenever A=1, B=1, C=0, the trigger is activated (D=1) and the XOR payload inverts the value of E. }\n \\%vspace{-6mm}\n \\label{trigger_payload}\n\\end{figure}\n\n\\end{comment}\nTo study the behavior of HTs in digital circuits, researchers have mostly been using limited benchmarks~\\cite{shakya2017benchmarking,salmani2013design}, including a set of 91 trust benchmarks with different HT sizes and configurations (available at~\\url{trust-hub.org}). Over the past years, various HT detection approaches have been developed based on these benchmarks~\\cite{salmani2016cotd, sabri2021sat,hasegawa2017trojan, sebt2018circuit}. 
Despite the valuable effort to create such HT benchmarks, they lack the size and variety needed to stress detection tools. Another downside of these existing benchmarks is their fixed, static trigger conditions~\\cite{cruz2018automated}, which stem from human bias during HT insertion. As a result, HT detectors can be tuned to enhance their detection accuracy on these benchmarks while not being truly effective on complex, real-world modern ICs. These shortcomings emphasize the need for an automated HT-insertion tool, free of human biases, that can be used to create high-volume HT benchmarks. Such a tool would help keep HT benchmarks aligned with the fast growth of attack approaches and cater to the security needs of hardware design. By introducing never-seen-before HTs into a design, one can create new benchmarks that push the capabilities of HT detection.\n\nAlthough a few researchers have tried to address these problems by introducing tunable HT insertion toolsets~\\cite{cruz2018automated,yu2019improved}, these approaches have no concrete guidelines for selecting trigger and payload nets; instead, insertion is done on an ad-hoc basis with little design-space exploration capability. In this paper, we develop an HT-insertion tool, free of human biases, that uses a Reinforcement Learning (RL) agent to decide where to insert HTs through trial and error. 
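As a rough illustration of the per-net testability data on which such an agent can act (this sketch is ours, not the toolset's actual implementation; the gate set and netlist format are hypothetical), the SCOAP combinational controllability values CC0\/CC1 can be computed in a single forward sweep over a topologically ordered gate-level netlist:

```python
# Minimal SCOAP combinational-controllability sweep (illustrative only).
# CC0(n)/CC1(n): difficulty of setting net n to 0/1; primary inputs start at 1.
# Netlist format (hypothetical): gate = (type, output_net, [input_nets]),
# listed in topological order.

def scoap_controllability(primary_inputs, gates):
    cc0 = {n: 1 for n in primary_inputs}
    cc1 = {n: 1 for n in primary_inputs}
    for gtype, out, ins in gates:
        if gtype == "AND":
            # output 1 needs all inputs at 1; output 0 needs the easiest input at 0
            cc1[out] = sum(cc1[i] for i in ins) + 1
            cc0[out] = min(cc0[i] for i in ins) + 1
        elif gtype == "OR":
            cc0[out] = sum(cc0[i] for i in ins) + 1
            cc1[out] = min(cc1[i] for i in ins) + 1
        elif gtype == "NOT":
            cc0[out] = cc1[ins[0]] + 1
            cc1[out] = cc0[ins[0]] + 1
        elif gtype == "NAND":
            cc0[out] = sum(cc1[i] for i in ins) + 1
            cc1[out] = min(cc0[i] for i in ins) + 1
        else:
            raise ValueError(f"unsupported gate type: {gtype}")
    return cc0, cc1

if __name__ == "__main__":
    gates = [("AND", "d", ["a", "b"]),
             ("NOT", "e", ["c"]),
             ("OR", "f", ["d", "e"])]
    cc0, cc1 = scoap_controllability(["a", "b", "c"], gates)
    print(cc1["f"])  # min(CC1(d), CC1(e)) + 1 = min(3, 2) + 1 = 3
```

Nets with large CC1 (or CC0) values are hard to drive to the corresponding value, which is exactly the property that makes a trigger condition rare; a reward built from such values can steer the agent toward stealthy insertion points.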
Although machine learning techniques have been used to detect HTs in the past~\\cite{xue2020ten,salmani2016cotd,hasegawa2017trojan}, to the best of our knowledge, this work is the first that addresses HT insertion using a machine learning approach via design space exploration.\n\nOur toolset translates each circuit to a graph representation in which different properties of each net, such as controllability, observability, and logical depth, are computed (the so-called SCOAP\\footnote{Sandia controllability and observability analysis program.} parameters~\\cite{goldstein1980scoap}). The circuit graph serves as the environment in which the RL agent tries to insert the HT so as to maximize its reward.\nOur results confirm that the inserted HTs are very hard to detect, as the toolset maximizes the number of IC inputs involved in the activation of the inserted HTs. We define a metric called the input coverage percentage (ICP) to quantify the difficulty of HT activation.\n\\begin{comment}\nThe contributions of this work are the following:\n\\begin{itemize}\n \\item Development of an HT inserting toolset free of human biases using RL.\n \\item The automating of HT insertion; our toolset makes the selection for the trigger and payload nets.\n \\item The toolset can be tuned to allow different HT insertion goals.\n\\end{itemize}\n\\end{comment}\n\nThe paper is organized as follows: The mechanics of our proposed approach are presented in Section~\\ref{proposed}. Section~\\ref{results} presents the experimental results, and Section~\\ref{conclusion} concludes the paper.\n\n\\begin{comment}\n\n\n\\section{Background and Related work}\n\\label{background}\n\n\\subsection{Hardware Trojan Benchmarks}\n\\label{previous}\n\nThe first attempts to gather a benchmark with hard-to-activate HTs were made by Shakya \\etal{} and Salmani \\etal{}~\\cite{shakya2017benchmarking,salmani2013design}. 
A set of 91 trust benchmarks with different HT sizes and configurations are available at~\\url{trust-hub.org}. While these benchmarks have been a valuable contribution for researchers to assess detection techniques, they only represent a subset of possible HT insertion landscape in digital circuits. While the HTs are carefully inserted to seriously compromise the security, a more general approach is needed to explore more options and diversify the HT insertion process.\n\nCruz \\etal{}~\\cite{cruz2018automated} tried to address these shortcomings by presenting a toolset that is capable of inserting a variety of HTs based on the parameters passed to the toolset. Their software inserts HTs with the following configuration parameters: the number of trigger nodes, the number of rare nodes among the trigger nodes, the threshold of rare nodes computed with the SCOAP parameters, the number of the HT instances to be inserted, the HT effect, the activation method, its type, and the choice of payload.\nDespite increasing the variety of inserted HTs, there is no solution for finding the optimal trigger and payload nets.\n\nYu \\etal{}~\\cite{yu2019improved} considers a different criterion to identify rare nets in a circuit. The set of test vectors can lead to an inaccurate selection of rare nets and, subsequently, inefficient trigger nets. Instead, their approach is to use transition probability to represent the switching activity of the nets. The transition probability of each net is computed based on the time required for the value of each net to toggle, and it is modeled using geometric distribution. The HT insertion criteria are very similar to~\\cite{cruz2018automated}. In the end, the trigger and payload nets are selected randomly.\n\nIn an attempt to deceive machine learning HT detection approaches, Nozawa \\etal{}~\\cite{nozawa2021generating} have devised adversarial examples. 
Their proposed method replaces the HT instance with its logically equivalent circuit to make the classification algorithm erroneously overlook it. To design the best adversarial example, the authors have defined two parameters: Trojan-net concealment degree (TCD) is tuned in a way to maximize the loss function of the neural network in the detection process, and a modification evaluating value (MEV) that should be minimized to have the least impact on circuits. These two metrics help the attacker to look for more effective logical equivalents and limit the design of HTs to more effective ones. By doing so, the generated framework can decrease the detection accuracy significantly.\n\n\\%vspace{-2mm}\n\\subsection{Reinforcement Learning}\n\\label{Previous_RL}\n\nReinforcement Learning (RL) has recently attracted considerable attention as a powerful machine learning strategy for decision making through trial and error~\\cite{sutton2018reinforcement}. The training process is very similar to how humans and animals learn in the sense that good actions are rewarded positively, and bad actions are rewarded negatively. Figure~\\ref{RL} shows the typical flow of RL algorithms, which consist of five main components: \n\\begin{enumerate}[noitemsep,topsep=0pt]\n\\item{\\emph{Agent}: which interacts with the environment by taking actions. }\n\\item{\\emph{Action}: selecting from a set of possible decisions. }\n\\item{\\emph{Reward}: the feedback provided by the environment after the action has been done. }\n\\item{\\emph{Environment}: rewards the agent after receiving a new action. }\n\\item{\\emph{State}: the observation that the agent receives after every action. }\n\\end{enumerate}\nEvery agent starts from a reset condition where it takes action ($a_t$), based on the state ($s_t$) observed from the environment. The environment rewards the agent with new reward $r_{t+1}$ and updates the state to $s_{t+1}$. 
This cycle repeats until either a number of actions are reached or a terminal state is reached. The whole process is called an episode, e.g., one game of chess. During the training session, numerous episodes are played, and the agent's goal is to maximize the sum of collected rewards over all episodes. \n\n\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[scale=0.77]{figures\/RL.pdf}\n \\%vspace{-8mm}\n \\caption{The main components of RL which consist of an agent, environment, action, state, and reward.}\n \\%vspace{-5mm}\n \\label{RL}\n\\end{figure}\n\n\n\n\n\nAlthough the first uses of RL were in the classic control domain, the HT community has also turned to RL recently. A study in~\\cite{pan2021automated} uses RL framework to detect HTs. The authors address the computation complexity and weak trigger coverage in large designs by using RL to find the best input test vectors. In this case, the advantage of using RL is the ability to solve problems with large and complex solution space. For each circuit, test vector bits are flipped to activate the most trigger nets and gain the highest summation of SCOAP parameters of these nets. The RL model significantly reduces the test generation time and is able to detect a variety of HTs in the ISCAS-85 and ISCAS-89 benchmarks. \n\n\\end{comment}\n\\section{RL-based HT insertion}\n\\label{proposed}\n\\begin{figure*}[ht]\n \\centering\n \\includegraphics[scale=.48]{figures\/Flowchart.pdf}\n \n \\caption{The proposed RL-based HT insertion tool flow.}\n \n \\label{Toolflow}\n\\end{figure*}\nFigure~\\ref{Toolflow} shows the flow of the proposed HT insertion tool. The first step to insert an HT into a circuit is to create a graph representation of the flattened netlist from the circuit. 
\nYosys Open Synthesis Suite~\\cite{wolf2013yosys} translates a Verilog file of the circuit into a JSON (JavaScript Object Notation)~\\cite{bassett2015introduction} netlist, and a Python script then parses this JSON file into the internal graph representation of the circuit. Next, the tool finds a set of rare nets to be used as HT trigger nets (described in detail in Subsection~\\ref{s1}). Finally, an RL agent uses the rare-net information and attempts to insert an HT that maximizes a reward function, as described in Subsection~\\ref{s2}.\n\n\\subsection{Rare Nets Extraction}\n\\label{s1}\n\nAs discussed earlier, different circuit criteria have been used for trigger selection. In this work, we use the parameters introduced in \\cite{sebt2018circuit}, where the trigger nets are selected based on functions of net \\emph{controllability} and \\emph{observability} \\cite{goldstein1980scoap}. \n\\begin{comment}\n\nControllability measures the difficulty of setting a particular net in a design to either \\emph{'0'} or \\emph{'1'}. Observability, on the other hand, is the difficulty of propagating the net to at least one of the circuit outputs.\n\n\n\\end{comment}\n\n\nThe first parameter, the HT trigger susceptibility parameter, derives from the fact that rarely switching nets have a large difference between their controllability values. Equation~\\ref{HTS1} defines this parameter:\n\\begin{equation}\n HTS(Net_i)=\\frac{|CC1(Net_i)-CC0(Net_i)|}{Max(CC1(Net_i),CC0(Net_i))}\n \\label{HTS1}\n\\end{equation}\nwhere $HTS$ is the HT trigger susceptibility parameter of a net, and $CC0(Net_i)$ and $CC1(Net_i)$ are the combinational controllability 0 and 1 of $Net_i$, respectively. The $HTS$ parameter lies in $[0,1)$, and higher values of $HTS$ correlate with lower activity on the net. 
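As a concrete illustration, the $HTS$ computation of Equation~\ref{HTS1} can be sketched in a few lines of Python; the net names and controllability values below are made up for illustration, not taken from a real benchmark:

```python
def hts(cc1: float, cc0: float) -> float:
    """HT trigger susceptibility: |CC1 - CC0| / max(CC1, CC0)."""
    return abs(cc1 - cc0) / max(cc1, cc0)

# Hypothetical nets with made-up SCOAP controllability values (CC1, CC0).
nets = {"n1": (45.0, 5.0), "n2": (8.0, 7.0), "n3": (120.0, 3.0)}
scores = {name: hts(cc1, cc0) for name, (cc1, cc0) in nets.items()}

# Nets whose two controllabilities differ strongly (rarely switching
# nets, such as n3 here) score close to 1 and are trigger candidates.
rare = [n for n, s in scores.items() if s > 0.8]
```

A balanced net such as n2 ($CC1\approx CC0$) scores near 0 and is filtered out.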
\n\nThe second parameter, specified in Equation~\\ref{OCR}, measures the ratio of observability to controllability:\n\\begin{equation}\n OCR(Net_i)=\\frac{CO(Net_i)}{CC1(Net_i)+CC0(Net_i)}\n \\label{OCR}\n\\end{equation}\nwhere $OCR$ is the observability-to-controllability ratio of a net. This ratio captures the requirement that HT trigger nets be very hard to control but not too hard to observe. Unlike the $HTS$ parameter, $OCR$ is not bounded and belongs to the interval $[0,\\infty)$. We will specify thresholds (see Section~\\ref{s2}) for each parameter and use them as filters to populate the set of rarely activated nets for our tool.\n\\begin{comment}\n\n\n\\begin{figure}[!t]\n \\centering\n \n \\includegraphics[scale=0.50]{figures\/Levels.pdf}\n \\%vspace{-4mm}\n \\caption{Levelizing a circuit. The output level of each digital gate is computed by $max(Level(in1),Level(in2))+1$.}\n \\%vspace{-3mm}\n \\label{level}\n\\end{figure}\n\\end{comment}\n\\subsection{RL-Based HT Insertion}\n\\label{s2}\n\nAgent, Action, Environment, State, and Reward are the five main components of reinforcement learning. From an RL perspective, we define the environment as the circuit into which we are trying to insert HTs. The agent's action is the insertion of the HT. Note that we consider combinational HTs where the trigger nets are ANDed and the payload is an XOR gate. \n\nWe represent each HT insertion in a circuit with a state vector. To construct it, we first levelize the circuit. The output level of an $m$-input gate is computed by Equation~\\ref{eq1}:\n\\begin{equation}\n Level(out)=MAX(Level(in_1), Level(in_2), ... 
, Level(in_m))+1\n \\label{eq1}\n\\end{equation}\n\n\\begin{figure}[!t]\n \\centering\n \n \\includegraphics[width=3.3in, height=1.6in]{figures\/HT_insertion.pdf}\n \\caption{Obtaining the state vector in the presence of an HT.}\n \\label{State}\n\\end{figure}\n\nFigure~\\ref{State} depicts a 2-input HT (in yellow) where the XOR payload flips the value of the target net when the trigger is activated. For a given HT, the state vector consists of $s_t=[s_1,s_2, ...,s_{n-2},s_{n-1},s_{n}]$ where $s_1$ through $s_{n-2}$ are the levels of the HT inputs, and $s_{n-1}$ and $s_{n}$ are the levels of the target net and the output of the XOR payload, respectively. As an example, the HT in Figure~\\ref{State} has the state vector $s_t=[2,1,3,4]$. The action space of the described HT agent is multi-discrete, i.e., each input of the HT can choose an action from a set of five available actions. These actions are:\n\n\\begin{itemize}\n \\item \\emph{\\textbf{Next level}}: the input of the HT moves to one of the nets that are one level higher than the current net level.\n \\item \\emph{\\textbf{Previous level}}: the input of the HT moves to one of the nets that are one level lower than the current net level.\n \\item \\emph{\\textbf{Same level up}}: the input of the HT will move to one of the nets at the same level as the current net level. The net is picked by pointing to the next net in the ascending list of net ids for the given level. \n \\item \\emph{\\textbf{Same level down}}: the input of the HT will move to one of the nets at the same level as the current net level. The net is picked by pointing to the previous net in the ascending list of nets for the given level. 
\n \\item \\emph{\\textbf{No action}}: the input of the HT will not move.\n\\end{itemize}\n\nIf an action leads the agent to step outside the circuit boundaries, it is substituted with a ``No action''.\n\nThe action space is also represented by a vector whose size equals the number of HT inputs, and each entry is one of the five actions above; e.g., for the HT in Figure~\\ref{State}, the action vector is $a_t=[a_1,a_2]$ since it has two inputs. \n\nAs explained in Section~\\ref{s1}, the SCOAP parameters are computed first. We specify two thresholds $T_{HTS}$ and $T_{OCR}$ and require our algorithm to find nets with $HTS$ values higher than $T_{HTS}$ and $OCR$ values lower than $T_{OCR}$. These nets are classified as suspicious nets. \n\nOur toolset utilizes an algorithm that consists of two conditional while loops that keep track of the terminal states and the elapsed timesteps. The first function used in the algorithm, \\emph{reset\\_environment}, resets the environment before each episode. Upon reset, an HT is randomly inserted within the circuit according to the following set of rules.\n\\begin{itemize}\n \\item Rule 1) Trigger nets are selected randomly from the list of all nets.\n \\item Rule 2) No trigger net is allowed to be fed from a previously used net.\n \\item Rule 3) Trigger nets cannot be assigned as the target.\n \\item Rule 4) The target net is selected considering the level of the trigger nets. To prevent forming combinational loops, we specify that the level of the target net should be greater than that of the trigger nets.\n\\end{itemize}\n\nDuring each training episode, we do not change the payload net, which helps the RL algorithm converge faster to a possible solution. Unlike manual payload selection, we let the algorithm explore the environment over different episodes and decide which payload most seriously compromises security by collecting higher rewards. 
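The reset rules above can be sketched as follows; aside from the function name \emph{reset\_environment}, which the text introduces, the toy net list, levels, and sampling details are illustrative assumptions rather than the tool's actual code:

```python
import random

def reset_environment(levels, n_triggers, rng):
    """Randomly place an HT per Rules 1-4: distinct trigger nets plus a
    target net whose level exceeds every trigger level, which prevents
    forming a combinational loop."""
    nets = list(levels)
    max_level = max(levels.values())
    # Rules 1-2: triggers drawn randomly, without reuse; keeping them
    # below the top level guarantees a legal target in this toy circuit.
    pool = [n for n in nets if levels[n] < max_level]
    triggers = rng.sample(pool, n_triggers)
    top = max(levels[t] for t in triggers)
    # Rules 3-4: target is none of the triggers and sits strictly above them.
    candidates = [n for n in nets if n not in triggers and levels[n] > top]
    return triggers, rng.choice(candidates)

levels = {"n1": 1, "n2": 1, "n3": 2, "n4": 3, "n5": 4}  # made-up levelization
triggers, target = reset_environment(levels, 2, random.Random(0))
```

Whatever the random draw, the returned placement satisfies all four rules by construction.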
This solution addresses the problem of finding an optimal payload selection.\n\n\nThe training process of the agent takes place in a loop where actions are issued, rewards are collected, the state is updated, and eventually, the updated graph is returned. To evaluate the actions taken by the RL agent (i.e., whether the HT can be triggered by any input vector), we use PODEM (Path-Oriented Decision Making), an automatic test pattern generator~\\cite{bushnell2000essentials}. If the HT payload propagates through at least one of the circuit outputs, the action gains a reward proportional to the number of circuit inputs engaged in the activation of the HT (we call this feature input coverage). We believe that the number of inputs engaged in the HT activation can be viewed as a metric of how rarely the HT is activated (see Section \\ref{results}).\n\nIn PODEM, $input\\_stack$ is the list of circuit inputs and their values that activate the HT. The more inputs engaged, the higher the reward will be. For an action to be positively rewarded, at least one of the HT trigger nets must belong to the set of suspicious nets. This type of rewarding encourages the agent to search in the vicinity of the suspicious nets (states), where rewards are more positive, and hence toward stealthier HTs. The agent is rewarded -1 when the trigger nets do not belong to the set of suspicious nets. Our proposed RL rewarding scheme drives the agent towards inserting hard-to-activate HTs and maximizing the input coverage.\nThe rewarding scheme is given in Equation~\\ref{reward}:\n\\begin{equation}\n reward+= 20*(size(input\\_stack)\/size(in\\_ports))\n \\label{reward}\n\\end{equation}\nwhere $size(in\\_ports)$ is the total number of circuit inputs.\nWe selected the coefficient 20 since it strikes a balance between the mostly '-1' rewards collected during training and the limited number of HTs found in each episode. 
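A toy version of this rewarding scheme is sketched below; here `input_stack` is stubbed with made-up data, whereas in the tool it would come from a PODEM run:

```python
def step_reward(input_stack, in_ports, triggers, suspicious_nets):
    """Reward of Eq. (reward): +20 * input-coverage fraction when at
    least one trigger net is suspicious; -1 otherwise, steering the
    agent back toward the rare (suspicious) nets."""
    if not any(t in suspicious_nets for t in triggers):
        return -1.0
    return 20.0 * len(input_stack) / len(in_ports)

in_ports = ["a", "b", "c", "d"]   # circuit inputs (hypothetical)
suspicious = {"n3", "n7"}         # nets passing the HTS/OCR thresholds

# PODEM fixed 3 of 4 inputs to fire the HT, and trigger n3 is suspicious.
r_hit = step_reward(["a", "b", "c"], in_ports, ["n3", "n9"], suspicious)
# No suspicious trigger net: flat penalty.
r_miss = step_reward(["a"], in_ports, ["n1", "n2"], suspicious)
```

With the coefficient 20, full input coverage on a suspicious trigger yields the maximum step reward of 20.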
One key benefit of using our tool is that the designer can adjust the reward scheme to achieve different goals. \n\n\nTo train the RL agent, we use the PPO (Proximal Policy Optimization) RL algorithm. PPO can train agents with multi-discrete action spaces in discrete or continuous spaces. The main idea of PPO is that the updated policy (a policy being the mapping that selects actions toward the goal) should not deviate too far from the old policy after an update of the algorithm. To avoid substantial updates, the algorithm uses a technique called clipping in the objective function~\\cite{schulman2017proximal}. Finally, once the HTs are inserted, the toolset outputs Verilog gate-level netlist files that contain the malicious HTs.\n\n\n\\begin{comment}\n\n\n\\begin{algorithm}[t]\n \\caption{Training of the HT Reinforcement Learning Agent}\n \\begin{flushleft}\n \\hspace*{\\algorithmicindent}\\textbf{\\textit{Input: }}{Graph $G$, HTS Threshold $T_{HTS}$, OCR Threshold,}\\\\\n \\hspace*{\\algorithmicindent}{$T_{OCR}$, circuit inputs $in\\_ports$,state space $s_t$,}\\\\\n \\hspace*{\\algorithmicindent}{terminal state $TS$,circuit inputs $in\\_ports$,}\\\\\n \\hspace*{\\algorithmicindent}{total timesteps $j$;}\\\\\n \\hspace*{\\algorithmicindent}\\textbf{\\textit{Output: }}{Malicious design $T$;}\n \\end{flushleft}\n \n \\begin{algorithmic}[1]\n \\label{Alg1}\n \\STATE Compute SCOAP parameters:\\\\\n \\hspace*{\\algorithmicindent}{$=computeSCOAP(G)$};\n \\STATE Get the set of suspicious nets:\\\\\n \\hspace*{\\algorithmicindent}{$suspicious\\_nets=computeSuspiciousNets(G, T_{HTS}, T_{OCR});$}\\\\\n \n \n \\STATE $counter=0;$\n \\WHILE{($countert_2$, one has \n\\begin{eqnarray}\n\\left\\langle A\\left( t_{1}\\right) B\\left( t_{2}\\right) \\right\\rangle\n&=&{\\rm Tr}_{S\\otimes R}\\left[A\\left( t_{1}\\right) B\\left(\nt_{2}\\right) \\rho _{T}\\left( 0\\right) \\right]\n\\nonumber \\\\\n&=&{\\rm Tr}_{S\\otimes R}\\left[ A e^{-iHt\/\\hbar}B \\rho _{T}\\left( 
t_{2}\\right)\ne^{iHt\/\\hbar}\\right], \n\\label{AB12}\n\\end{eqnarray}\nwhere $t=t_1-t_2$, and $A=A(0)$ and $B=B(0)$ are system operators. \nLet $\\chi_T(0)=B \\rho _{T}\\left( t_{2}\\right)$. \nThen the two-time CF (\\ref{AB12}) becomes $\\left\\langle A\\left( t_{1}\\right) B\\left( t_{2}\\right) \\right\\rangle\n={\\rm Tr}_{S\\otimes R}\\left[ A\\chi_{T}\\left( t\\right) \\right]$. \nIt is then equivalent to\nthe expectation value of \nthe operator $A$\nwith respect to the effective density matrix operator \n$\\chi_T(t)=e^{-{iHt}\/{\\hbar}}B \\rho _{T}\\left( t_{2}\\right)e^{{iHt}\/{\\hbar} }$\nthat satisfies the same Liouville evolution equation as $\\rho_T(t)$\neven though $\\chi_T(t)$ may not be a proper density matrix \n(i.e., positive-definite trace-conservative operator).\nThe evolution equation of the two-time CF can be formally written as \n\\begin{eqnarray}\n{d\\langle A\\left( t_{1}\\right) B(t_2)\\rangle }\/{dt_{1}}\n&=&{\\rm Tr}_{S\\otimes R}\\left[ A ({d\\chi_T(t)}\/{dt})\\right] \n\\label{2time_evol}\\\\\n&=&{\\rm Tr}_{S}\\left[ A({d\\chi(t)}\/{dt})\\right],\n\\label{2time}\n\\end{eqnarray}\nwhere the relations of the reduced operator $\\chi(t)={\\rm Tr}_R[\\chi_T(t)]$\nand ${\\rm Tr}_R[{d\\chi_T(t)}\/{dt}]={d\\chi(t)}\/{dt}$\nhave been used.\nIf the reduced master equations \n${d\\chi(t)}\/{dt}$ and ${d\\rho(t)}\/{dt}$ had the same \noperator equation form, one might conclude that the structure and the form\nof the evolution equation of the two-time CF would be the same as those of the single-time evolution equation and thus the QRT would apply.\nIn fact, it has been shown that \nthe QRT is not valid in general \\cite{Ford96,Ford99}, but \nthe QRT or regression procedure is useful and correct for systems\nwhere the coupling to reservoirs is weak \nand the Markovian approximation holds \\cite{Carmichael99,Lax00,Ford00}. 
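For orientation, the QRT referred to above can be stated in its standard (Lax) form~\cite{Lax00}; this textbook statement is quoted here only to fix notation:

```latex
% Standard (Lax) form of the QRT: if the one-time averages of a set of
% system operators {A_i} close on themselves,
\begin{equation*}
  \frac{d}{dt}\left\langle A_i(t)\right\rangle
    = \sum_j G_{ij}\left\langle A_j(t)\right\rangle ,
\end{equation*}
% then the regression hypothesis asserts, for t_1 > t_2,
\begin{equation*}
  \frac{d}{dt_1}\left\langle A_i(t_1)B(t_2)\right\rangle
    = \sum_j G_{ij}\left\langle A_j(t_1)B(t_2)\right\rangle ,
\end{equation*}
% i.e., the two-time CF's evolve in t_1 with the same coefficient
% matrix G_{ij} as the one-time averages.
```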
\nThe main purpose of the present paper is to derive \nthe non-Markovian finite-temperature evolution equation of the \ntwo-time system CF's using a quantum master equation approach, an\napproach different from those in Refs.~\\cite{Vega06,Alonso07}. \nOur equations, which are valid for both Hermitian and\nnon-Hermitian system coupling operators and thus generalize the\ncorresponding results in Refs.~\\cite{Vega06,Alonso07}, can be used to\ncalculate the two-time CF's for any factorized (separable) \nsystem-reservoir initial\nstate and for arbitrary temperature, as long as the approximation\nof weak system-environment coupling still holds. \n\n\n\\subsection{Evolution equations in the weak system-environment coupling limit}\n\nLet us first derive, perturbatively, the explicit \nevolution equation of the\nsingle-time expectation values in the \nnon-Markovian case. \nHere we consider the second-order non-Markovian\ntime-convolutionless (time-local) evolution equation in our derivation. \nWe wish to\nobtain an evolution equation, $d\\rho_T(t)\/dt$, valid to second order\nin the system-environment interaction Hamiltonian, to substitute into\nEq.~(\\ref{1time_evol}) for the single-time expectation values and into\nEq.~(\\ref{2time_evol}) for the two-time CF's. \nIt is convenient to first go to the interaction picture\nand obtain a time-local (time-convolutionless) evolution equation of\nthe density matrix valid to that order. 
This can be achieved by the\nsubstitution of $\\tilde{\\rho}_{T}(t')\\to \\tilde{\\rho}_{T}(t)$ in the\nsecond term on the right hand side of the equal sign of\nEq.~(\\ref{tilderho})\n\\cite{Breuer02,Paz01,Breuer99,Breuer01, Yan98,Schroder06,Ferraro09,Shibata77,Kleinekathofer04,Liu07,Sinayskiy09,Mogilevtsev09,Haikka10,Ali10,Chen11}.\nTo go back to the Schr\\\"{o}dinger picture, we substitute the resultant\nsecond-order equation obtained from Eq.~(\\ref{tilderho}) into\nEq.~(\\ref{rh0T_relation}) to obtain the evolution equation,\n$d\\rho_T(t)\/dt$. By substituting this equation $d\\rho_T(t)\/dt$ valid\nto second order in the interaction Hamiltonian into\nEq.~(\\ref{1time_evol}), the evolution equation of the single-time\nexpectation values then consists of three terms. The second term involves\nthe first term on the right hand side of the equal sign of\nEq.~(\\ref{tilderho}), and will vanish on the conditions that\n$\\rho_{T}(0)=\\tilde{\\rho}_{T}(0)=\\rho(0)\\otimes R_0$ and\n${\\rm Tr}_R[\\tilde{H}_{I}(t) R_0]=0$ [Eq.~(\\ref{traceless_1st_order})] \nare satisfied. \nAs a result, we obtain up to second order in the interaction Hamiltonian\n\\begin{eqnarray}\n\\frac{d\\left\\langle A\\left( t_{1}\\right) \\right\\rangle }{dt_{1}}\n&=&\\frac{i}{\\hbar }{\\rm Tr}_{S\\otimes R}\\left(\n[{H}_S , {A}] {\\rho}_{T}(t_1)\n\\right) \\nonumber \\\\\n&&+\\frac{1}{\\hbar^2 }\\int_0^{t_1}d\\tau {\\rm Tr}_{S\\otimes R} \\nonumber \\\\\n&&\\quad\n\\left(\\tilde{H}_I(\\tau-t_1)[A,H_I]\\rho_T(t_1) \\right.\n\\nonumber \\\\\n&&\\quad\\left.\n+[H_I,A]\\tilde{H}_I(\\tau-t_1)\\rho_T(t_1)\\right) \n\\label{1time_rhot}\n\\nonumber \\\\\n&=&({i}\/{\\hbar}){\\rm Tr}_{S\\otimes R}\\left( \\{[{H}_S, {A}] \\}(t_1) \n\\rho_{T}(0)\\right) \\nonumber \\\\\n&&+\\frac{1}{\\hbar^2 }\\int_0^{t_1}d\\tau {\\rm Tr}_{S\\otimes R} \\nonumber\\\\\n&&\\quad\n\\left( \\{\\tilde{H}_I(\\tau-t_1)[A,H_I]\\}(t_1)\\rho_T(0) \\right. 
\\nonumber\\\\\n&&\\quad\\left.\n+\\{[H_I,A]\\tilde{H}_I(\\tau-t_1)\\}(t_1)\\rho_T(0)\\right),\n\\label{1time_evol_eq}\n\\end{eqnarray}\nwhere we have transformed from the Schr\\\"{o}dinger picture to the\nHeisenberg picture in the second equal sign and \n$\\{AB\\}(t)\\equiv \\exp(iHt\/\\hbar) AB\\exp(-iHt\/\\hbar)$.\n\nSince $\\chi_T(t)$ and $\\rho_T(t)$ obey the same equations of \nEqs.~(\\ref{rh0T_relation}) and (\\ref{tilderho}),\nat first sight, one may think that the two-time evolution equations,\nEqs.~(\\ref{2time_evol}) and (\\ref{2time}), are similar to \nthe single-time evolution equations, Eqs.~(\\ref{1time_evol}) and\n(\\ref{1time}), and thus might be tempted to conclude that they have the\nsame form of the evolution equations. \nIndeed, by using Eqs.~(\\ref{2time_evol}), (\\ref{rh0T_relation}) \nand (\\ref{tilderho}), \nthe first and third terms of the resultant equation derived from\nEq.~(\\ref{2time_evol}) are similar to the right-hand side of the single-time \nevolution equation (\\ref{1time_evol_eq}) with the\nreplacement of \n$\\rho_T(0)\\to\\chi_T(-t_2)=B(t_2)\\rho_{T}(0)$ \nand with the change of the integration region from $[0,t_1]$ to\n$[t_2,t_1]$. Then we obtain \n\\begin{eqnarray}\n&&\\frac{i}{\\hbar }{\\rm Tr}_{S\\otimes R}\\left( \\{[{H}_S, {A}] \\}(t_1) \nB(t_2)\\rho_{T}(0)\\right) \\nonumber \\\\\n&&+\\frac{1}{\\hbar^2 }\\int_{t_2}^{t_1}d\\tau {\\rm Tr}_{S\\otimes R} \\nonumber\\\\\n&&\\quad\n\\left( \\{\\tilde{H}_I(\\tau-t_1)[A,H_I]\\}(t_1)B(t_2)\\rho_{T}(0) \\right. 
\\nonumber\\\\\n&&\\quad\\left.\n+\\{[H_I,A]\\tilde{H}_I(\\tau-t_1)\\}(t_1)B(t_2)\\rho_{T}(0)\\right).\n\\label{2time_evol_1_3}\n\\end{eqnarray}\n\nHowever, a significant difference is that the expectation value of the second term does not vanish, i.e., \n\\begin{equation}\n(-i\/\\hbar){\\rm Tr}_{S\\otimes R}\\left(Ae^{-iH_{0}t\/\\hbar}\\left[\\tilde{H}_{I}(t) ,\\tilde{\\chi}_{T}( 0)\\right]e^{iH_{0}t\/\\hbar}\\right)\\neq 0,\n\\label{non-Markovian_1st_order}\n\\end{equation} \nin the non-Markovian case, where $t=t_1-t_2$ \nin Eq.~(\\ref{non-Markovian_1st_order}). \nThe reason can be understood as follows.\nThe interaction Hamiltonian $\\tilde{H}_{I}(t_1-t_2)$ in\nEq.~(\\ref{non-Markovian_1st_order}) involves the\nenvironment operators in the time interval from $t_2$ to $t_1$, and \nthe effective density matrix operator $\\tilde{\\chi}_T(0)$ can be written as \n$\\tilde{\\chi}_T(0)=\\chi_T(0)=B\\rho_T(t_2)=BU(t_2,0)\\rho_T(0)U^{\\dagger}(t_2,0)$,\nwhere $U(t_2,0)=e^{-iHt_2\/\\hbar}$ is the time evolution operator of the\ntotal Hamiltonian from time $0$ to $t_2$. \nIf the environment\nis Markovian, i.e., the environment operator CF's at two\ndifferent times are $\\delta$-correlated in time, then we may regard\nthe environment operators in $\\tilde{H}_{I}(t_1-t_2)$ as\nuncorrelated with those in $U(t_2,0)$. So the trace over the environment degrees\nof freedom for operator $\\tilde{H}_{I}(t_1-t_2)$ and operator $U(t_2,0)$ can be\nperformed independently or separately. The trace of \n$\\rho_T(t_2)=U(t_2,0)\\rho_T(0)U^{\\dagger}(t_2,0)$ over the environment\ndegrees of freedom\nyields the reduced density matrix \n$\\rho(t_2)={\\rm Tr}_{R}[\\rho_T(t_2)]$, but the trace of\n$\\tilde{H}_{I}(t_1-t_2)$ vanishes, \ni.e., ${\\rm Tr}_{R}[\\tilde{H}_{I}(t_1-t_2)R_0]=0$, \nbecause of Eq.~(\\ref{traceless_1st_order}). 
\nThus Eq.~(\\ref{non-Markovian_1st_order}) vanishes in the Markovian limit.\nBut the situation differs for a\nnon-Markovian environment as the environment operator in\n $\\tilde{H}_{I}(t_1-t_2)$ may, in general, be correlated with that in \n$U(t_2,0)$.\nTherefore, the evolution from $\\rho_T(0)$ to $\\rho_T(t_2)$ \nunder the influence of the interaction Hamiltonian \nin the presence of the reservoir needs to be taken into account \nbefore the trace over the environment is performed in \nEq.~(\\ref{non-Markovian_1st_order}). \nWe emphasize here that it is this nonlocal environment (bath) memory term,\nEq.~(\\ref{non-Markovian_1st_order}), that vanishes in the Markovian\ncase but makes the evolution equation of the two-time \nCF's of the system operators deviate from the QRT. \nAs we aim to obtain an evolution equation of the two-time CF's of the\nsystem operators, valid up to second order in the interaction\nHamiltonian, we need to find $\\rho_T(t_2)$ only up to \nfirst order in the interaction Hamiltonian. \nSo substituting \n$\\rho_{T}(t_2)= e^{-iH_{0}t_2\/\\hbar }\\tilde{\\rho}_{T}(t_2)e^{iH_{0}t_2\/\\hbar }$\nwith the expression\n\\begin{equation}\n\\tilde{\\rho}_{T}\\left( t_2\\right) \\approx \\tilde{\\rho}_{T}\\left( 0\\right) \n-\\frac{i}{\\hbar }\\int_{0}^{t_2}d\\tau\\left[ \\tilde{H}_{I}\\left(\n\\tau\\right) ,\\tilde{\\rho}_{T}\\left( t_2\\right) \\right] \n\\label{rhoT_t2}\n\\end{equation}\nfor $\\tilde{\\chi}_T(0)=\\chi_T(0)=B\\rho_T(t_2)$ \nin Eq.~(\\ref{non-Markovian_1st_order}),\nwe then obtain up to second order in the interaction Hamiltonian (in\nthe system-environment coupling strength) \n\\begin{eqnarray}\n&&-\\frac{1}{\\hbar^2}\\int_0^{t_2}d\\tau\\, {\\rm Tr}_{S\\otimes R}\\left(Ae^{-iH_{0}t\/\\hbar}\\, \n\\left[ \\tilde{H}_{I}\\left(\nt\\right), \\right. \\right. \\nonumber\\\\\n&&\\quad \\quad\\left. \\left. 
B\\,e^{-iH_{0}t_2\/\\hbar} [\\tilde{H}_{I}(\\tau),\\tilde{\\rho}_{T}(t_2)]e^{iH_{0}t_2\/\\hbar} \\right]\\, e^{iH_{0}t\/\\hbar }\\right),\n\\nonumber \\\\\n&=& -\\frac{1}{\\hbar^2}\\int_0^{t_2}d\\tau\\, \n{\\rm Tr}_{S\\otimes R}\\left( A\n\\left[ H_{I},\\right. \\right. \\nonumber\\\\\n&&\\quad \\quad\\left. \\left. e^{-iH_{0}t\/\\hbar}\\,\nB[\\tilde{H}_{I}(\\tau-t_2),\\rho_{T}(t_2)] \\, e^{iH_{0}t\/\\hbar }\\right]\\right),\n\\label{extra1}\n\\end{eqnarray}\nwhere the first order term in the interaction Hamiltonian coming from $\\tilde{\\rho}_T(0)$ term in Eq.~(\\ref{rhoT_t2}) \nhas been dropped because of Eq.~(\\ref{traceless_1st_order}). \nSince the product of two $H_I$ already appear explicitly in Eq.~(\\ref{extra1}),\nwe may then transform Eq.~(\\ref{extra1}) into Heisenberg\nrepresentation with the evolution equation $\\exp(iHt\/\\hbar)\\approx\n\\exp(iH_0t\/\\hbar)+{\\cal O}(H_I)$. \nFurthermore, by writing out the commutators explicitly and rearranging the\nHeisenberg operator terms, \nthe resultant equation from Eq.~(\\ref{extra1}) then becomes\n\\begin{eqnarray}\n&& -\\frac{1}{\\hbar^2}\\int_0^{t_2}d\\tau\\, \n{\\rm Tr}_{S\\otimes R}\\left( \n\\{\\tilde{H}_I(\\tau-t_1)[H_I,A]\\}(t_1)B(t_2)\\rho_{T}(0) \\right. \\nonumber\\\\\n&&\\quad\\quad\\quad\n\\left. +\\{[A,H_I]\\}(t_1)\\{B\\tilde{H}_I(\\tau-t_2)\\}(t_2)\\rho_T(0) \\right).\n\\label{extra2}\n\\end{eqnarray}\nThe first term in Eq.~(\\ref{extra2}) is ready to combine with the second term in Eq.~(\\ref{2time_evol_1_3}) to extend the integration from $0$ to $t_1$. 
\nSimilarly, one may rewrite the last term in Eq.~(\\ref{extra2}) using the relation $B\\tilde{H}_I(\\tau-t_2)=\\tilde{H}_I(\\tau-t_2)B+[B,\\tilde{H}_I(\\tau-t_2)]$ so that the first new term can be combined with the last term in Eq.~(\\ref{2time_evol_1_3}) to extend the integration from $0$ to $t_1$.\n \nPutting all the resultant terms together, we obtain the evolution equation of the two-time CF's valid to second order in the interaction Hamiltonian as\n\\begin{eqnarray}\n&&{d\\left\\langle A(t_{1}) B(t_2)\\right\\rangle }\/{dt_{1}} \\nonumber \\\\\n&=&({i}\/{\\hbar}){\\rm Tr}_{S\\otimes R}\\left( \\{[{H}_S, {A}] \\}(t_1) \nB(t_2)\\rho_{T}(0)\\right) \\nonumber \\\\\n&&+\\frac{1}{\\hbar^2 }\\int_{0}^{t_1}d\\tau {\\rm Tr}_{S\\otimes R} \\nonumber\\\\\n&&\\quad\n\\left( \\{\\tilde{H}_I(\\tau-t_1)[A,H_I]\\}(t_1)B(t_2)\\rho_{T}(0) \\right. \\nonumber\\\\\n&&\\quad\n+\\left.\\{[H_I,A]\\tilde{H}_I(\\tau-t_1)\\}(t_1)B(t_2)\\rho_{T}(0) \\right)\n\\nonumber\\\\\n&&\n+\\frac{1}{\\hbar^2 }\\int_{0}^{t_2}d\\tau {\\rm Tr}_{S\\otimes R} \\nonumber\\\\\n&&\\quad \\left(\\{[H_I,A]\\}(t_1)\\{[B,\\tilde{H}_I(\\tau-t_2)]\\}(t_2)\\rho_T(0)\\right).\n\\label{2time_evol_eq}\n\\end{eqnarray}\nCompared to Eq.~(\\ref{1time_evol_eq}), it is the existence of the last\nterm in Eq.~(\\ref{2time_evol_eq}) that makes the QRT invalid. \nEquation (\\ref{2time_evol_eq}) is the main result of this paper. \nThe derivation is based on a perturbative quantum master\nequation approach, so for non-Markovian open quantum system models that are\nnot exactly solvable, our derived evolution equation can be used to \nobtain the time evolutions of \ntheir two-time CF's of system operators, valid to second order in\nthe system-environment interaction. 
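As a structural recap of this result (a restatement of what was just derived, not an additional calculation):

```latex
% The first line and the \int_0^{t_1} integral of Eq. (2time_evol_eq)
% are exactly Eq. (1time_evol_eq) under the substitution
\begin{equation*}
  \rho_T(0) \;\longrightarrow\; B(t_2)\,\rho_T(0) ,
\end{equation*}
% which is precisely the regression prescription. The remaining memory
% integral,
\begin{equation*}
  \frac{1}{\hbar^2}\int_0^{t_2} d\tau\,
  {\rm Tr}_{S\otimes R}\!\left(
    \{[H_I,A]\}(t_1)\,\{[B,\tilde{H}_I(\tau-t_2)]\}(t_2)\,\rho_T(0)\right),
\end{equation*}
% correlates bath operators across the interval [0,t_2] with those at
% t_1 > t_2 and therefore vanishes for a delta-correlated (Markovian)
% bath, recovering the QRT.
```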
\nIn the derivation of Eqs.~(\\ref{1time_evol_eq}) and (\\ref{2time_evol_eq}),\nwe have also used the assumption of a factorized initial system-bath state\n$\\rho_{T}(0)=\\tilde{\\rho}_{T}(0)=\\rho(0)\\otimes R_0$ and the\ncondition of \n${\\rm Tr}_R[\\tilde{H}_{I}(t) R_0]=0$, Eq.~(\\ref{traceless_1st_order}),\nto eliminate the first-order term.\nSince the form and nature of the Hamiltonians are not specified,\nEq.~(\\ref{2time_evol_eq}) can be used to calculate the two-time CF's \nfor non-Markovian open quantum systems with \nmulti-level discrete or continuous Hilbert\nspaces, interacting with bosonic and\/or\nfermionic environments. \nThe procedure and degree of difficulty \nof applying Eq.~(\\ref{2time_evol_eq}) to an open quantum system model\n(by taking into account \nnonlocal bath memory effects and \ntracing out the bath degrees of freedom for \nfactorized system-bath initial states) to obtain the two-time\nCF's of system operators \nare similar to those for the evaluation of the reduced density matrix\nof a second-order time-convolutionless non-Markovian \nquantum master equation [e.g., Eq.~(\\ref{time_convolutionless_ME})].\nWe will explicitly apply the evolution equation (\\ref{2time_evol_eq})\nto a general model of a quantum system coupled to a\nfinite-temperature bosonic environment in Sec.~\\ref{sec:bosonic}\nand to a specific model of a two-level system in Sec.~\\ref{sec:spin-boson}.\nOpen quantum systems coupled to fermionic reservoirs\n(environments) could, for example,\nbe quantum\ndots or other nanostructure systems coupled (connected) to\nnonequilibrium electron reservoirs (electrodes or leads) in\nelectron transport problems \\cite{Kleinekathofer04,Welack06,Harbola06,\n Goan01, Li04, Utami04, Zedler09,Gudmundsson09,Jin10}. 
\nThe evolution equation\n(\\ref{2time_evol_eq}) can also be used to calculate the non-Markovian \ntwo-time CF's in such systems.\nIn summary, our evolution equation (\\ref{2time_evol_eq}) can be\napplied to a wide range \nof system-environment models with \nany factorized (separable) system-environment initial states (pure or mixed). \n \n\n\n\n\\section{Evolution equations for thermal bosonic bath models}\n\\label{sec:bosonic}\nTo make contact with Refs.~\\cite{Alonso05,Vega06,Alonso07}, we \nconsider a quantum system coupled to a\nbosonic environment with a general Hamiltonian of the form\n\\begin{eqnarray}\nH&=&H_{S}+\\sum_{\\lambda }\\hbar g_{\\lambda }\\left( L^{\\dagger} a_{\\lambda\n}+La_{\\lambda }^{\\dagger}\\right) +\\sum_{\\lambda }\\hbar\\omega_{\\lambda\n}a_{\\lambda }^{\\dagger}a_{\\lambda }, \n\\label{Hamiltonian_L}\n\\end{eqnarray}\nwhere the system coupling operator $L$ acts on the Hilbert space of\nthe system, \n$a_{\\lambda }$ and $a_{\\lambda }^{\\dagger}$ are the annihilation and creation\noperators on the bosonic environment Hilbert space, and $g_{\\lambda }$\nand $\\omega _{\\lambda }$ are respectively the coupling strength and the\nfrequency of the $\\lambda$th environmental oscillator.\n\n\nApplying Eq.~(\\ref{Hamiltonian_L}) to Eqs.~(\\ref{1time_evol_eq}) and\n(\\ref{2time_evol_eq}) and after tracing over the environmental degrees\nof freedom for factorized (separable) system-bath initial states, \nwe arrive at the second-order evolution equations of the single-time expectation values\n\\begin{eqnarray}\n&&\n{d\\left\\langle A\\left( t_{1}\\right) \\right\\rangle }\/{dt_{1}} \n\\nonumber \\\\\n&=&\n({i}\/{\\hbar }){\\rm Tr}_{S}\\left( \\{[H_{S},A]\\}(t_1) \\rho(0) \\right)\n\\nonumber \\\\\n&&+\n\\int_{0}^{t_{1}}d\\tau {\\rm Tr}_S\n\\nonumber \\\\\n&&\\quad \\left( \\alpha^{\\ast }( t_{1}-\\tau)\n\\{\\tilde{L}^{\\dagger}(\\tau -t_{1})[{A},{L}]\\}( t_{1}){\\rho}(0) \\right.\n\\nonumber \\\\\n&&\\quad\n+\\alpha( t_{1}-\\tau )\\{ [ 
{L}^{\\dagger},A] \\tilde{L}(\\tau -t_{1})\\}(t_{1}) \n\\rho( 0) \n \\nonumber \\\\\n&&\\quad+\n\\beta^{\\ast }(t_{1}-\\tau)\\{\\tilde{L}(\\tau -t_{1})[A,L^{\\dagger}]\\}(t_{1})\n\\rho(0) \n\\nonumber \\\\\n&&\n\\quad+\n\\left.\n\\beta(t_{1}-\\tau)\\{[L,A] \\tilde{L}^{\\dagger}(\\tau -t_{1})\\}(t_{1}) \n\\rho(0) \\right), \n\\label{1time_evol_eq_f}\n\\end{eqnarray}\nand of the two-time CF's\n\\begin{eqnarray}\n&&{d\\left\\langle A\\left( t_{1}\\right) B\\left( t_{2}\\right)\n\\right\\rangle }\/{dt_{1}} \\nonumber \\\\ \n&=&\n({i}\/{\\hbar}){\\rm Tr}_{S}\\left( \\{[{H}_{S}, A]\\}( t_{1}){B}( t_{2}) \n\\rho(0) \\right) \\nonumber \\\\\n&&+\n\\int_{0}^{t_{1}}d\\tau {\\rm Tr}_S\n\\nonumber \\\\\n&&\\quad\\left( \\alpha^{\\ast }( t_{1}-\\tau)\n\\{\\tilde{L}^{\\dagger}(\\tau -t_{1})[{A},{L}]\\}( t_{1}) \n{B}(t_{2}){\\rho}(0) \\right.\n\\nonumber \\\\\n&&\\quad+\n\\alpha( t_{1}-\\tau )\\{ [ {L}^{\\dagger},A] \\tilde{L}(\\tau -t_{1})\\} ( t_{1}) B( t_{2}) \n\\rho( 0) \n\\nonumber \\\\\n&&\\quad+\n\\beta^{\\ast }(t_{1}-\\tau)\\{\\tilde{L}(\\tau -t_{1})[A,L^{\\dagger}]\\}(t_{1}) B(t_{2}) \n\\rho(0) \n\\nonumber \\\\\n&&\\quad+\n\\left. \n\\beta(t_{1}-\\tau)\\{[L,A] \\tilde{L}^{\\dagger}(\\tau -t_{1})\\}(t_{1}) \nB(t_{2})\\rho(0) \n\\right) \n\\nonumber \\\\\n&&+\n\\int_{0}^{t_{2}}d\\tau \n{\\rm Tr}_S \\nonumber \\\\\n&&\\quad\\left( \\alpha(t_{1}-\\tau) \\{ [ L^{\\dagger},A]\\}(t_{1}) \n\\{ [B,\\tilde{L}(\\tau -t_{2})]\\}(t_{2}) \\rho(0) \\right.\n\\nonumber \\\\\n&&\\hspace{-0.3cm}+\n\\left. \\beta( t_{1}-\\tau)\n\\{ [L,A]\\}(t_{1})\\{[B,\\tilde{L}^{\\dagger}(\\tau -t_{2})]\\} (t_{2}) \n\\rho(0) \\right). 
\n\\label{2time_evol_eq_f}\n\\end{eqnarray}\nHere $\\tilde{L}(t)=\\exp(iH_St\/\\hbar)L\\exp(-iH_St\/\\hbar)$ is the system operator in the interaction picture with respect to $H_S$, and \n\\begin{eqnarray}\n\\alpha \\left( t_{1}-\\tau \\right) &=&\\sum_{\\lambda }\\left( \\bar{n}_{\\lambda }+1\\right)\n\\left\\vert g_{\\lambda }\\right\\vert ^{2}\ne^{-i\\omega _{\\lambda }\\left( t_{1}-\\tau \\right)},\n\\label{CFalpha}\\\\ \n\\beta \\left( t_{1}-\\tau \\right)\n&=&\\sum_{\\lambda }\\bar{n}_{\\lambda }\\left\\vert g_{\\lambda }\\right\\vert ^{2}e^{i\\omega\n_{\\lambda }\\left( t_{1}-\\tau \\right) }\n\\label{CFbeta}\n\\end{eqnarray}\nare the environment CF's with $\\alpha(t_1-\\tau)=\\left\\langle\n \\sum_{\\lambda}g_{\\lambda}\\tilde{a}_{\\lambda}(t_1)\\sum_{\\lambda'}g_{\\lambda'}\\tilde{a}^{\\dagger}_{\\lambda'}(\\tau)\\right\\rangle_R$\nand $\\beta(t_1-\\tau)=\\left\\langle\n \\sum_{\\lambda}g_{\\lambda}\\tilde{a}^{\\dagger}_{\\lambda}(t_1)\\sum_{\\lambda'}g_{\\lambda'}\\tilde{a}_{\\lambda'}(\\tau)\\right\\rangle_R$,\nwhere $\\tilde{a}_{\\lambda}(t_1)=a_{\\lambda}e^{-i\\omega_\\lambda t_1}$\nand\n$\\tilde{a}^{\\dagger}_{\\lambda}(t_1)=a^{\\dagger}_{\\lambda}e^{i\\omega_\\lambda\n t_1}$ are the environment operators in the interaction\npicture, and the symbol $\\langle \\cdots \\rangle_R$ denotes taking a trace with\nrespect to the density matrix of \nthe thermal bosonic reservoir (environment). \nThe thermal mean occupation number $\\bar{n}_\\lambda$ of the\nbosonic environment\noscillators in Eqs.~(\\ref{CFalpha}) and (\\ref{CFbeta}) is\n$\\bar{n}_\\lambda=(e^{\\hbar\\omega_\\lambda\/k_BT}-1)^{-1}$. \n\nThe evolution equations (\\ref{1time_evol_eq_f}) and (\\ref{2time_evol_eq_f})\nfor a non-Markovian bosonic environment have been presented in\nRef.~\\cite{Goan10} without any derivation. 
In this paper, the detailed\nderivation of the evolution equations is given.\nFurthermore, the two-time CF evolution equation, Eq.~(\\ref{2time_evol_eq}),\nwhich is applicable to both bosonic and fermionic environments and\nto a more general form of system-environment interaction Hamiltonian, \nhas not been published in the literature before. \n\n\n\n\nAs mentioned, the two-time evolution equations derived \nin Refs.~\\cite{Alonso05,Vega06,Alonso07} are, strictly speaking,\napplicable only for a\nzero-temperature environment. However, these equations\nwere used to calculate the\ntwo-time CF's of system observables of dissipative spin-boson models\nat finite temperatures. \nThis is possible only for the dissipative spin-boson models \nwith Hermitian system coupling\noperators, $L=L^\\dagger$. We will discuss this point in detail\nin subsection \\ref{sec:comparison}.\nIn contrast, our bosonic evolution equations,\nEqs.~(\\ref{1time_evol_eq_f}) and (\\ref{2time_evol_eq_f}), are valid\nfor both Hermitian and non-Hermitian system coupling operators and\ncan be used to calculate the two-time CF's for any factorized (separable)\nsystem-reservoir initial state and for arbitrary temperature as\nlong as the assumption of weak system-environment coupling still\nholds. \n\nIn Ref.~\\cite{Goan10}, we used Eqs.~(\\ref{1time_evol_eq_f}) and \n(\\ref{2time_evol_eq_f}) to calculate\nthe finite-temperature \nsingle-time expectation values and two-time CF's for a non-Markovian\npure-dephasing spin-boson model of \n\\begin{equation}\nH_S=(\\hbar\\omega_S\/2)\\sigma_z, \\quad\\quad\nL=\\sigma_z=L^{\\dagger}. \n\\label{SpinBoson} \n\\end{equation}\nSince the non-Markovian dynamics of this exactly solvable\npure-dephasing model can be cast into a time-local,\nconvolutionless form and $[L,H_S]=0$, the results obtained by\nour second-order evolution equations turn out to be\nexactly the same as the exact results obtained by the direct\noperator evaluation. 
\nHowever, these results significantly differ from the non-Markovian\ntwo-time CF's obtained by incorrectly applying the QRT directly. \nThis demonstrates the validity of\nthe evolution equations (\\ref{1time_evol_eq_f}) and \n(\\ref{2time_evol_eq_f}).\nThe system coupling operators $L$ of this pure-dephasing model\n\\cite{Goan10} and of the examples calculated in\nRefs.~\\cite{Alonso05,Vega06,Alonso07} are, however, all Hermitian, i.e., $L^{\\dagger}=L$. \nWe will therefore present in Sec.~\\ref{sec:spin-boson} the calculations of\none-time averages and two-time CF's for a thermal spin-boson model\nwith $L\\neq L^\\dagger$, for which only the evolution equations\n(\\ref{1time_evol_eq_f}) and (\\ref{2time_evol_eq_f}), rather than those\nin Refs.~\\cite{Alonso05,Vega06,Alonso07}, are applicable.\n\n\n\n\\subsection{Comparison and discussion} \n\\label{sec:comparison}\nWe discuss in the following the connection of our derived two-time \nevolution equation (\\ref{2time_evol_eq_f}) with those presented in \nRefs.~\\cite{Alonso05,Vega06,Alonso07}.\nIn Ref.~\\cite{Vega06}, a master equation conditioned on initial coherent states of the environment in the Bargmann representation, $(z_0,z'_0)$, was derived in the weak system-environment coupling limit. Provided that the whole set of the initial conditions of the system of interest, $|\\psi(z^*_0)\\rangle$, and the statistical probability ${\\cal J}(z_0,z^*_0)$ for the member $|\\psi(z^*_0)\\rangle|z_0\\rangle$ of the statistical ensemble are known, this master equation \nwith $z'_0=z_0$ is capable of evaluating the evolution of \n{\\em single-time} expectation values for general initial conditions, including initially correlated mixed states between the system and environment. 
\nHowever, the evolution equations of {\\em two-time (multi-time)} CF's of system observables, Eq.~(6) in Ref.~\\cite{Alonso05}, Eq.~(31) in Ref.~\\cite{Vega06} and Eq.~(60) in Ref.~\\cite{Alonso07}, were derived for an initial vacuum state of the environment and an initial pure state of the system of interest. \nAs a result, these two-time evolution equations are valid only for a zero-temperature environment \n(if the system coupling operator is not Hermitian, i.e., $L\\neq L^\\dagger$; see discussions below). \nCompared with the corresponding zero-temperature two-time (multi-time) evolution equations \nderived in Refs.~\\cite{Alonso05,Vega06,Alonso07}, our finite-temperature \ntwo-time evolution equation \n(\\ref{2time_evol_eq_f}) is valid for any initial factorized (separable) states (pure or mixed) at finite temperatures and for both Hermitian and non-Hermitian system coupling operators. The extra terms containing the bath CF $\\beta(t_1-\\tau)$ or $\\beta^*(t_1-\\tau)$ are due to the finite-temperature thermal environment. 
If we take the zero-temperature limit by letting $\\bar{n}_\\lambda=0$ and thus $\\beta(t_1-\\tau)=\\beta^*(t_1-\\tau)=0$, and also consider the initial pure-state case by letting ${\\rm Tr}_S[\\cdots\\rho(0)]\\to \\langle \\psi(0)|\\cdots |\\psi(0)\\rangle$, then Eqs.~(\\ref{1time_evol_eq_f}) and (\\ref{2time_evol_eq_f}) reduce exactly to their corresponding zero-temperature pure-state evolution equations in Refs.~\\cite{Alonso05,Vega06,Alonso07}.\n\n\nHowever, calculations of the two-time CF's of system observables of\ndissipative spin-boson models in finite-temperature thermal baths (rather than zero-temperature baths) were presented in Refs.~\\cite{Alonso05,Vega06,Alonso07} \neven though in their derivations of the two-time (multi-time) evolution equations, the bath CF is given in its zero-temperature form, \n\\begin{equation}\n\\alpha_0(t-\\tau)=\\sum_{\\lambda }\n\\left\\vert g_{\\lambda }\\right\\vert ^{2}\ne^{-i\\omega _{\\lambda }\\left( t-\\tau \\right)}. \n\\label{CFalpha0}\n\\end{equation}\nThis is possible only because the system coupling operator is Hermitian, $L=L^\\dagger$, in the thermal bath examples presented in Refs.~\\cite{Alonso05,Vega06,Alonso07}. \nOne may understand this as follows. \nIt is known that the finite-temperature density matrix operator of a thermal bath can be canonically mapped onto the zero-temperature density operator (the vacuum) of a larger (hypothetical) environment \\cite{Semenoff83,Yu04}. 
\nBy mapping the total Hamiltonian Eq.~(\\ref{Hamiltonian_L}) and an\ninitial thermal state to an extended total system with a vacuum state,\nthe finite-temperature problem can be reduced to a zero-temperature\nproblem, and the resultant pure state $\\psi_t=|\\psi_t(z^*,w^*)\\rangle$\nfor the system of interest satisfies the following linear\nfinite-temperature non-Markovian stochastic Schr\\\"odinger equation\nwith two independent noises $z^*_t$ and $w^*_t$ \\cite{Diosi98,Yu04}:\n\\begin{eqnarray} \n\\frac{\\partial}{\\partial t}\\psi_t&=&-iH_S\\psi_t+L z^*_t\\psi_t-L^\\dagger\\int_0^t d\\tau \\alpha(t-\\tau)\\frac{\\delta\\psi_t}{\\delta z^*_\\tau} \\nonumber \\\\\n&&+L^\\dagger w^*_t\\psi_t-L\\int_0^t d\\tau \\beta(t-\\tau)\\frac{\\delta\\psi_t}{\\delta w^*_\\tau}, \n\\label{SSEfiniteT}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray} \nz^*_t&=&-i\\sum_\\lambda \\sqrt{\\bar{n}_{\\lambda }+1}\\, g^*_\\lambda z^*_\\lambda e^{i\\omega_\\lambda t},\n\\label{zt}\\\\\nw^*_t&=&-i\\sum_\\lambda \\sqrt{\\bar{n}_{\\lambda }}\\, g^*_\\lambda w^*_\\lambda e^{-i\\omega_\\lambda t}\n\\label{wt}\n\\end{eqnarray}\nare two independent, colored, complex Gaussian noises with zero mean \nand satisfy \n\\begin{eqnarray}\n{\\cal M}[z_tz_\\tau]={\\cal M}[z^*_tz^*_\\tau]=0,&&\\quad {\\cal M}[z^*_tz_\\tau]=\\alpha(t-\\tau);\n\\\\\n{\\cal M}[w_tw_\\tau]={\\cal M}[w^*_tw^*_\\tau]=0,&&\\quad {\\cal M}[w^*_tw_\\tau]=\\beta(t-\\tau).\n\\end{eqnarray}\nHere, $z^*_\\lambda$ and $w^*_\\lambda$ are coherent state variables of the extended environment in Bargmann representation, ${\\cal M}[\\cdots]$ denotes the statistical average over the Gaussian processes $z^*_t$ and $w^*_t$, and the bath CF's $\\alpha(t-\\tau)$ and $\\beta(t-\\tau)$ are defined in Eqs.~(\\ref{CFalpha}) and (\\ref{CFbeta}), respectively.\nIn the zero-temperature limit $T\\to 0$, the mean thermal occupation number of quanta in mode $\\lambda$ approaches zero, i.e., $\\bar{n}_{\\lambda } \\to 0$. 
\nThus $\\beta(t-\\tau)\\to 0$ and $\\alpha(t-\\tau)\\to \\alpha_0(t-\\tau)$, \nwhich is defined in Eq.~(\\ref{CFalpha0}). \nIn this case, the noises, Eqs.~(\\ref{zt}) and (\\ref{wt}), become\n $z^*_t=-i\\sum_\\lambda g^*_\\lambda z^*_\\lambda e^{i\\omega_\\lambda t}$\n and $w^*_t=0$, and \nthe finite-temperature equation (\\ref{SSEfiniteT}) reduces to the\nsimple zero-temperature equation \\cite{Diosi98,Yu04} \n\\begin{equation}\n\\frac{\\partial}{\\partial t}\\psi_t=-iH_S\\psi_t+L z^*_t\\psi_t-L^\\dagger\\int_0^t d\\tau \\alpha_0(t-\\tau)\\frac{\\delta\\psi_t}{\\delta z^*_\\tau}.\n\\label{SSEzeroT}\n\\end{equation} \n\n\nNow consider the case of a Hermitian system coupling operator $L=L^\\dagger$. \nThe finite-temperature equation (\\ref{SSEfiniteT}) can be simplified considerably by introducing the sum process $u^*_t=z^*_t+w^*_t$, which has zero mean and satisfies\n\\begin{eqnarray}\n{\\cal M}[u_tu_\\tau]&=&{\\cal M}[u^*_tu^*_\\tau]=0;\\\\\n{\\cal M}[u^*_tu_\\tau]&=&\\alpha_{\\rm eff}(t-\\tau) \\nonumber\\\\\n&=&\\alpha(t-\\tau)+\\beta(t-\\tau) \\nonumber \\\\\n&=& \\sum_\\lambda |g_\\lambda|^2\n\\left\\{\\coth\\left({\\hbar\\omega_\\lambda}\/{2k_{B}T}\\right)\\cos[\\omega_\\lambda(t-\\tau)]\\right.\n\\nonumber\\\\\n&& \\hspace{1.5cm}\\left.-i\\sin[\\omega_\\lambda(t-\\tau)]\\right\\}. \n\\label{CFalpha_eff} \n\\end{eqnarray}\nIn terms of the single noise process $u^*_t$, the linear finite-temperature non-Markovian stochastic Schr\\\"odinger equation (\\ref{SSEfiniteT}) for the case of a Hermitian coupling operator $L=L^\\dagger$ takes the simple form of the zero-temperature equation (\\ref{SSEzeroT}) with the replacement of $z^*_t$ by $u^*_t$ and of \n$\\alpha_0(t-\\tau)$ by $\\alpha_{\\rm eff}(t-\\tau)=\\alpha(t-\\tau)+\\beta(t-\\tau)$ \ndefined in Eq.~(\\ref{CFalpha_eff}). 
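The identity $\alpha(t-\tau)+\beta(t-\tau)=\alpha_{\rm eff}(t-\tau)$ underlying this simplification is easy to check numerically. The following Python sketch (not part of the paper's analysis; the mode parameters are illustrative and $\hbar=k_B=1$) evaluates both sides of Eq.~(\ref{CFalpha_eff}) for a few bath modes.

```python
import numpy as np

# Numerical check that alpha(t) + beta(t) equals the coth/cos - i sin
# form of Eq. (CFalpha_eff).  The mode parameters below are illustrative
# (not taken from the paper); hbar = k_B = 1.
g = np.array([0.3, 0.5, 0.2])        # coupling strengths g_lambda
w = np.array([1.0, 2.5, 4.0])        # mode frequencies omega_lambda
T = 1.0                              # temperature
nbar = 1.0 / (np.exp(w / T) - 1.0)   # thermal occupation numbers

def alpha(t):      # Eq. (CFalpha): sum_lam (nbar+1) |g|^2 e^{-i w t}
    return np.sum((nbar + 1.0) * np.abs(g) ** 2 * np.exp(-1j * w * t))

def beta(t):       # Eq. (CFbeta): sum_lam nbar |g|^2 e^{+i w t}
    return np.sum(nbar * np.abs(g) ** 2 * np.exp(1j * w * t))

def alpha_eff(t):  # Eq. (CFalpha_eff): sum_lam |g|^2 {coth(w/2T) cos - i sin}
    return np.sum(np.abs(g) ** 2 * (np.cos(w * t) / np.tanh(w / (2.0 * T))
                                    - 1j * np.sin(w * t)))

print(alpha(1.0) + beta(1.0), alpha_eff(1.0))  # the two expressions agree
```

The agreement simply restates $2\bar{n}_\lambda+1=\coth(\hbar\omega_\lambda/2k_BT)$ mode by mode.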
\nIt is because the coupling operator $L=L^\\dagger=\\sigma_x$ in the dissipative spin-boson model with a\nthermal environment is Hermitian that the two-time CF's of the system observables\ncan be evaluated with the evolution equations derived in\nRefs.~\\cite{Alonso05,Vega06,Alonso07}. \nIn other words, if the system operator coupled to the environment is\nnot Hermitian, $L\\neq L^\\dagger$, \nthe two-time (multi-time) differential evolution equations presented in \nRefs.~\\cite{Alonso05,Vega06,Alonso07} are valid only for a zero-temperature\nenvironment. \n\n\n\nIn contrast, our two-time evolution equation (\\ref{2time_evol_eq_f}) is valid for the more general finite-temperature cases where the system coupling operator is not Hermitian, i.e., $L\\neq L^\\dagger$. \nIn the case of $L\\neq L^\\dagger$, \nour two-time evolution equation contains additional finite-temperature $\\beta(t_1-\\tau)$ and $\\beta^*(t_1-\\tau)$ terms, which cannot be combined and simplified into the form of the zero-temperature evolution equations derived in Refs.~\\cite{Alonso05,Vega06,Alonso07}.\nFor a Hermitian coupling operator $L=L^{\\dagger}$, one can see that, besides replacing the initial pure system state with a more general system state by letting $\\langle \\psi(0)|\\cdots |\\psi(0)\\rangle \\to {\\rm Tr}_S[\\cdots\\rho(0)]$, the finite-temperature evolution equation (\\ref{2time_evol_eq_f}) reduces to its zero-temperature counterpart in Refs.~\\cite{Alonso05,Vega06,Alonso07} with the effective bath CF \n$\\alpha_{\\rm eff}(t_1-\\tau)=\\alpha(t_1-\\tau)+\\beta(t_1-\\tau)$ defined in Eq.~(\\ref{CFalpha_eff}). This demonstrates explicitly why the zero-temperature \ntwo-time evolution equations derived in Refs.~\\cite{Alonso05,Vega06,Alonso07} can be used to calculate the system operator CF's for a thermal spin-boson model with a Hermitian system coupling operator. 
\n\n\n\\subsection{Conditions for the QRT to hold} \n\\label{sec:QRT_hold}\n\nAs mentioned in subsection \\ref{sec:QRT},\nthe QRT is a very useful procedure that enables one\n(in certain circumstances) to calculate two-time\n(multi-time) CF's of system operators from the knowledge of the\nevolution equations of \nsingle-time expectation values. \nOne can notice that if the last two terms in\nEq.~(\\ref{2time_evol_eq_f}) vanish, then the non-Markovian\nsingle-time and two-time evolution equations (\\ref{1time_evol_eq_f}) and (\\ref{2time_evol_eq_f}) have the same form with the same evolution coefficients, and thus the QRT applies. As expected, these two terms vanish in the Markovian case since the time integration of the corresponding $\\delta$-correlated reservoir CF's, \n$\\alpha(t_1-\\tau)\\propto \\delta(t_1-\\tau)$ and \n$\\beta(t_1-\\tau)\\propto \\delta(t_1-\\tau)$, over the variable $\\tau$ in the domain from $0$ to $t_2$ is zero since $t_1>t_2$.\nSo from Eq.~(\\ref{2time_evol_eq_f}), in the weak system-environment\ncoupling case the QRT holds (i) at zero temperature, when $[ L^{\\dagger},A]=0$ or\n$[B,\\tilde{L}(\\tau -t_{2})]=0$; (ii) at finite\ntemperatures, when, in addition to condition (i), \n$[L,A]=0$ or $[B,\\tilde{L}^{\\dagger}(\\tau -t_{2})]=0$ is also satisfied; and (iii) in the Markovian case, where the bath CF's are \n$\\delta$-correlated in time. \n\nNote that in some models, certain CF's that formally obey the QRT, but whose evolution equations are coupled to those of other CF's that do not obey it,\nmay yield solutions different from those given by the QRT \\cite{Vega06}.\n\n\n\n\\section{Application to a thermal spin boson model with $L\\neq L^\\dagger$}\n\\label{sec:spin-boson}\nTo illustrate the usage of the equations we have derived, we apply them to the problem of a two-level system coupled to a thermal reservoir, in which $L\\neq L^{\\dagger}$. 
\nWe consider the Hamiltonian, Eq.~(\\ref{Hamiltonian_L}), with $H_S=(\\hbar\\omega_A\/2)\\sigma_z$, a coupling operator $L=\\sigma_-$, and a system-environment interaction Hamiltonian whose magnitude is small enough to be treated as a perturbation. Since the coupling operator is not Hermitian, $L\\neq L^{\\dagger}$, \nthe two-time (multi-time) evolution equations derived in Refs.~\\cite{Alonso05,Vega06,Alonso07} are not applicable, and \nour evolution equation (\\ref{2time_evol_eq_f}) should be employed to obtain the time evolutions of the two-time CF's. \n\n\\subsection{Single-time expectation values}\n\nBefore calculating the two-time CF's, it is instructive to\nobtain the master equation of the reduced system density matrix for\nthe model. \nTransforming Eq.~(\\ref{time_convolutionless_ME}) from the interaction picture back to the Schr\\\"odinger\npicture \nand using the general Hamiltonian, Eq.~(\\ref{Hamiltonian_L}), we obtain \nthe second-order time-convolutionless non-Markovian master equation\nfor the reduced density matrix $\\rho(t)$ as\n\\begin{eqnarray}\n\\frac{d \\rho(t)}{dt} \n&=&\\frac{1}{i\\hbar}\\left[H_S,\\rho(t)\\right] \n\\nonumber \\\\\n&&\n\\hspace{-1cm}\n-\\int_0^t d\\tau \\{\\alpha(t-\\tau)[L^\\dagger\\tilde{L}(\\tau-t)\\rho(t)\n-\\tilde{L}(\\tau-t)\\rho(t)L^\\dagger] \\nonumber\\\\\n&&\n\\hspace{-0.5cm}\n +\\beta(t-\\tau)[L\\tilde{L}^\\dagger(\\tau-t)\\rho(t)\n-\\tilde{L}^\\dagger(\\tau-t)\\rho(t)L] \\nonumber \\\\\n&&\\hspace{-0.5cm} +{\\rm h.c.} \\},\n\\label{master_eq}\n\\end{eqnarray}\nwhere $\\alpha(t-\\tau)$ and $\\beta(t-\\tau)$ are defined in\nEqs.~(\\ref{CFalpha}) and (\\ref{CFbeta}), respectively, \nh.c. 
denotes the Hermitian conjugate of the previous terms, and an operator with a tilde denotes an operator in the interaction picture.\nThe only essential assumption used to obtain Eq.~(\\ref{master_eq}),\nvalid to second order in the system-environment coupling strength, \nis that the total density matrix\nfactorizes at the initial time $t=0$, $\\rho_T(0)=\\rho(0)\\otimes R_0$.\nTaking $L=\\sigma _{-}$, $L^{\\dagger }=\\sigma _{+}$, then\n$\\tilde{L}(t)=\\sigma _{-}e^{-i\\omega _{A}t}$, and\n$\\tilde{L}^{\\dagger}(t)=\\sigma _{+}e^{i\\omega _{A}t}$, and\nsubstituting them into Eq.~(\\ref{master_eq}), we obtain \n\\begin{eqnarray}\n\\frac{d\\rho(t)}{dt}&=&-i\\frac{\\omega_A}{2}\\left[\\sigma_z,\\rho(t)\\right]\\nonumber\\\\\n&&\n\\hspace{-1cm}\n-\\left\\{\\Gamma _{1}(t)\\left(\\sigma_+\\sigma_-\\rho(t)\n-\\sigma_-\\rho(t)\\sigma_+\\right)\\right.\n\\nonumber\\\\\n&&\n\\hspace{-1cm}\n\\left. +\\Gamma _{2}(t)\\left(\\sigma_-\\sigma_+\\rho(t)\n-\\sigma_+\\rho(t)\\sigma_-\\right)\n+{\\rm h.c.}\\right\\},\n\\label{master_eq_Schrodinger}\n\\end{eqnarray} \nwhere\n\\begin{eqnarray}\n\\Gamma _{1}(t) &=&\\int_{0}^{t}d\\tau \\alpha (t-\\tau)e^{+i\\omega _{A}\\left(t-\\tau \\right) }, \n\\label{Gamma_1} \\\\\n\\Gamma _{2}(t) &=&\\int_{0}^{t}d\\tau \\beta (t-\\tau)e^{-i\\omega _{A}\\left(t-\\tau \\right) }. \n\\label{Gamma_2}\n\\end{eqnarray}\nThe master equation (\\ref{master_eq_Schrodinger}) \nis a time-local and convolutionless differential equation.\nThe effect of the non-Markovian environment \non the second-order master equation (\\ref{master_eq_Schrodinger})\nis taken into account through the time-dependent coefficients $\\Gamma _{1}(t)$ and $\\Gamma _{2}(t)$ defined in Eqs.~(\\ref{Gamma_1}) and (\\ref{Gamma_2}) \ninstead of through memory integrals. 
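The behavior of these coefficients is easy to evaluate numerically. The Python sketch below (an illustration, not part of the paper's derivation) discretizes an Ohmic bath with the spectral density and parameter values quoted later for Fig.~\ref{fig:CF}, sets $\hbar=k_B=1$, and checks that ${\rm Re}\,\Gamma_1(t)$ grows from zero and saturates near the Markovian golden-rule rate $\pi J(\omega_A)[\bar{n}(\omega_A)+1]$.

```python
import numpy as np

# Ohmic spectral density J(w) = gamma * w * exp(-w^2/Lambda^2) with the
# parameter values quoted for Fig. 1: omega_A = 3, Lambda = 5, gamma = 0.1,
# T = 1 (hbar = k_B = 1).  The frequency grid is an illustrative choice.
omega_A, Lam, gam, T = 3.0, 5.0, 0.1, 1.0

w = np.linspace(1e-3, 6 * Lam, 3000)                  # bath frequencies
Jw = gam * w * np.exp(-(w / Lam) ** 2) * (w[1] - w[0])  # |g_lam|^2 -> J(w) dw
nb = 1.0 / (np.exp(w / T) - 1.0)                      # thermal occupation

def alpha(s):
    # Bath CF, Eq. (CFalpha), as a discretized mode sum.
    return np.sum((nb + 1.0) * Jw * np.exp(-1j * w * s))

def Gamma1(t, n=600):
    # Eq. (Gamma_1): int_0^t dtau alpha(t-tau) e^{+i omega_A (t-tau)}
    #              = int_0^t ds alpha(s) e^{+i omega_A s}   (s = t - tau)
    s = np.linspace(0.0, t, n)
    f = np.array([alpha(si) * np.exp(1j * omega_A * si) for si in s])
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))   # trapezoid rule

# Markovian (golden-rule) rate pi * J(omega_A) * (nbar(omega_A) + 1).
markov = np.pi * gam * omega_A * np.exp(-(omega_A / Lam) ** 2) \
         * (1.0 / (np.exp(omega_A / T) - 1.0) + 1.0)
print(Gamma1(5.0).real, markov)  # close once the memory time has elapsed
```

The saturation illustrates how the convolutionless coefficients interpolate between the initial non-Markovian transient and the Markovian decay rate.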
\nLikewise, the evolution equations of single-time expectation values and two-time CF's of system operators valid to the second order \nare also expected to be convolutionless.\nTaking again $L=\\sigma _{-}$, $L^{\\dagger }=\\sigma _{+}$, then\n$\\tilde{L}(t)=\\sigma _{-}e^{-i\\omega _{A}t}$, and\n$\\tilde{L}^{\\dagger}(t)=\\sigma _{+}e^{i\\omega _{A}t}$, and using the\ncommutation relation between the Pauli matrices, \nwe obtain \nstraightforwardly from Eq.~(\\ref{1time_evol_eq_f}) the following evolution\nequations of the single-time expectation values of system operators as \n\\begin{eqnarray}\nd\\langle \\sigma _{+}(t_{1})\\rangle \/dt_{1} &=&i\\omega _{A}\\langle \\sigma\n_{+}(t_{1})\\rangle \\nonumber \\\\\n&&-\\left[ \\Gamma _{1}^{\\ast }(t_{1})+\\Gamma _{2}(t_{1})\\right] \\langle\n\\sigma _{+}(t_{1})\\rangle, \\label{one_evol_+} \\\\\nd\\langle \\sigma _{-}(t_{1})\\rangle \/dt_{1} &=&-i\\omega _{A}\\langle \\sigma\n_{-}(t_{1})\\rangle \\nonumber \\\\\n&&-\\left[ \\Gamma _{1}(t_{1})+\\Gamma _{2}^{\\ast }(t_{1})\\right] \\langle\n\\sigma _{-}(t_{1})\\rangle, \\label{one_evol_-} \\\\\nd\\langle \\sigma _{z}(t_{1})\\rangle \/dt_{1} &=&-\\left[ \\Gamma\n_{1}(t_{1})+\\Gamma _{1}^{\\ast }(t_{1})+\\Gamma _{2}^{\\ast }(t_{1})+\\Gamma\n_{2}(t_{1})\\right] \\nonumber \\\\\n&&\\quad\\quad \\times \\langle \\sigma _{z}(t_{1})\\rangle \\nonumber \\\\\n&&-\\left[ \\Gamma _{1}(t_{1})+\\Gamma _{1}^{\\ast }(t_{1})-\\Gamma _{2}^{\\ast\n}(t_{1})-\\Gamma _{2}(t_{1})\\right]. 
\\nonumber \\\\\n\\label{one_evol_z}\n\\end{eqnarray}\nEquations (\\ref{one_evol_+})-(\\ref{one_evol_z}) \ncan also be obtained directly from the master equation\n(\\ref{master_eq_Schrodinger}) through ${d\\left\\langle A\\left( t_{1}\\right) \\right\\rangle }\/{dt_{1}}={\\rm Tr}_{S}\\left[ A({d\\rho(t_1)}\/{dt_1})\\right]$.\n\n\n\n\\subsection{Two-time CF's}\nFor the evaluation of the two-time CF's of the system observables, we consider the following four cases.\n\n{\\em {Case 1:}} $[L^{\\dagger },A]=0$ or $[B,\\tilde{L}(t)]=0$; and $[L,A]=0$\nor $[B,\\tilde{L}^{\\dagger }(t)]=0$. In this case, let $A=\\sigma _{i}$, \n$B=\\sigma _{i}$ with $i=+,-$.\nApplying the commutation relations between the Pauli matrices and the\ndefinition ${\\rm Tr}_S[\\sigma_i(t_1) \\sigma_i(t_2)\\rho(0)]=\\langle\n\\sigma_i(t_1) \\sigma_i(t_2) \\rangle$ to the right-hand side of the\ntwo-time evolution equation (\\ref{2time_evol_eq_f}), \nwe then obtain\n\\begin{eqnarray}\nd\\langle \\sigma _{+}(t_{1})\\sigma _{+}(t_{2})\\rangle \/dt_{1} &=&i\\omega\n_{A}\\langle \\sigma _{+}(t_{1})\\sigma _{+}(t_{2})\\rangle \\nonumber \\\\\n&&-\\left[ \\Gamma _{1}^{\\ast }(t_{1})+\\Gamma _{2}(t_{1})\\right] \\langle\n\\sigma _{+}(t_{1})\\sigma _{+}(t_{2})\\rangle , \\nonumber \\\\\n \\label{2time_evol_++} \\\\\nd\\langle \\sigma _{-}(t_{1})\\sigma _{-}(t_{2})\\rangle \/dt_{1} &=&-i\\omega\n_{A}\\langle \\sigma _{-}(t_{1})\\sigma _{-}(t_{2})\\rangle \\nonumber \\\\\n&&-\\left[ \\Gamma _{1}(t_{1})+\\Gamma _{2}^{\\ast }(t_{1})\\right] \\langle\n\\sigma _{-}(t_{1})\\sigma _{-}(t_{2})\\rangle , \\nonumber \\\\\n\\label{2time_evol_--}\n\\end{eqnarray}\nwhere \n$\\Gamma _{1}(t_{1})$ and $\\Gamma _{2}(t_{1})$ are defined in Eqs.~(\\ref{Gamma_1}) and (\\ref{Gamma_2}), respectively. 
One can see that the evolution\nequations of the two-time CF's, Eqs.~(\\ref{2time_evol_++}) and (\\ref{2time_evol_--}), have the same forms as the evolution equations of the\nsingle-time expectation values $\\langle \\sigma _{+}(t_{1})\\rangle $ and \n$\\langle \\sigma _{-}(t_{1})\\rangle $, Eqs.~(\\ref{one_evol_+}) and (\\ref{one_evol_-}), respectively. Hence the QRT holds in this case.\n\n{\\em {Case 2:}} $[A,L]=0$ or $[B,\\tilde{L}(t)]=0$. In this case, using Eq.~(\\ref{2time_evol_eq_f}), we obtain \n\\begin{eqnarray}\nd\\langle \\sigma _{-}(t_{1})\\sigma _{z}(t_{2})\\rangle \/dt_{1} &=&-i\\omega\n_{A}\\langle \\sigma _{-}(t_{1})\\sigma _{z}(t_{2})\\rangle \\nonumber \\\\\n&&-\\left[ \\Gamma _{1}(t_{1})+\\Gamma _{2}^{\\ast }(t_{1})\\right] \\langle\n\\sigma _{-}(t_{1})\\sigma _{z}(t_{2})\\rangle \\nonumber \\\\\n&&-2\\Gamma _{3}\\left( t_{1},t_{2}\\right) \\left\\langle \\sigma\n_{z}(t_{1})\\sigma _{-}(t_{2})\\right\\rangle , \\label{2time_evol_-z} \\\\\nd\\langle \\sigma _{z}(t_{1})\\sigma _{-}(t_{2})\\rangle \/dt_{1} &=&-\\left[\n\\Gamma _{1}(t_{1})+\\Gamma _{1}^{\\ast }(t_{1})+\\Gamma _{2}^{\\ast\n}(t_{1})+\\Gamma _{2}(t_{1})\\right] \\nonumber \\\\\n&&\\quad\\quad \\times \\langle \\sigma _{z}(t_{1})\\sigma\n_{-}(t_{2})\\rangle \\nonumber \\\\\n&&-\\left[ \\Gamma _{1}(t_{1})+\\Gamma _{1}^{\\ast }(t_{1})-\\Gamma _{2}^{\\ast\n}(t_{1})-\\Gamma _{2}(t_{1})\\right] \\nonumber \\\\\n&&\\quad\\quad \\times \\langle \\sigma\n_{-}(t_{2})\\rangle \\nonumber \\\\\n&&-2\\Gamma _{4}\\left( t_{1},t_{2}\\right) \\left\\langle \\sigma\n_{-}(t_{1})\\sigma _{z}(t_{2})\\right\\rangle , \\label{2time_evol_z-}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n\\Gamma _{3}\\left( t_{1},t_{2}\\right) &=&\\int_{0}^{t_{2}}d\\tau \\alpha\n(t_{1}-\\tau )e^{i\\omega _{A}\\left( t_{2}-\\tau \\right) }, \\label{Gamma_3} \\\\\n\\Gamma _{4}\\left( t_{1},t_{2}\\right) &=&\\int_{0}^{t_{2}}d\\tau \\beta\n(t_{1}-\\tau )e^{-i\\omega _{A}\\left( t_{2}-\\tau \\right) }. 
\\label{Gamma_4}\n\\end{eqnarray}\nWhen we obtain Eq.~(\\ref{2time_evol_-z}) from Eq.~(\\ref{2time_evol_eq_f}), the last term containing $\\Gamma _{3}(t_{1},t_{2})$ in Eq.~(\\ref{2time_evol_-z}) does not vanish since $[L^{\\dagger },A]\\neq 0$ and $[B,\\tilde{L}(\\tau -t_{2})]\\neq 0$. Similarly, when we obtain Eq.~(\\ref{2time_evol_z-}) from Eq.~(\\ref{2time_evol_eq_f}), because $[L,A]\\neq 0$ and \n$[B,\\tilde{L}^{\\dagger }(\\tau -t_{2})]\\neq 0$, the last term containing \n$\\Gamma _{4}(t_{1},t_{2})$ in Eq.~(\\ref{2time_evol_z-}) survives. Thus,\ncompared with the single-time equations (\\ref{one_evol_-}) and (\\ref{one_evol_z}), \nthe two-time equations (\\ref{2time_evol_-z}) and (\\ref{2time_evol_z-})\nhave the extra last terms containing $\\Gamma _{3}(t_{1},t_{2})$ and \n$\\Gamma _{4}(t_{1},t_{2})$, respectively. As a result, the QRT does not hold\nin this case. It is also obvious, from the fact that $\\Gamma _{3}(t_{1},t_{2})$ appears alone in \nEq.~(\\ref{2time_evol_-z}) and $\\Gamma _{4}(t_{1},t_{2})$ alone \nin Eq.~(\\ref{2time_evol_z-}),\nthat the finite-temperature bath CF's $\\alpha (t_{1}-\\tau )$ and $\\beta\n(t_{1}-\\tau )$ cannot be combined into the single effective bath CF $\\alpha\n_{\\mathrm{eff}}(t_{1}-\\tau )=\\alpha (t_{1}-\\tau )+\\beta (t_{1}-\\tau )$ of\nEq.~(\\ref{CFalpha_eff}), as they can be in the Hermitian coupling operator case.\n\n{\\em {Case 3:}} $[A,L^{\\dagger }]=0$ or $[B,\\tilde{L}^{\\dagger }(t)]=0$.\nUsing Eq.~(\\ref{2time_evol_eq_f}), we obtain for this case \n\\begin{eqnarray}\nd\\langle \\sigma _{+}(t_{1})\\sigma _{z}(t_{2})\\rangle \/dt_{1} &=&+i\\omega\n_{A}\\langle \\sigma _{+}(t_{1})\\sigma _{z}(t_{2})\\rangle \\nonumber \\\\\n&&-\\left[ \\Gamma _{1}^{\\ast }(t_{1})+\\Gamma _{2}(t_{1})\\right] \\langle\n\\sigma _{+}(t_{1})\\sigma _{z}(t_{2})\\rangle \\nonumber \\\\\n&&-2\\Gamma _{4}\\left( t_{1},t_{2}\\right) \\left\\langle \\sigma\n_{z}(t_{1})\\sigma _{+}(t_{2})\\right\\rangle , \\label{2time_evol_+z} 
\\\\\nd\\langle \\sigma _{z}(t_{1})\\sigma _{+}(t_{2})\\rangle \/dt_{1} &=&-\\left[\n\\Gamma _{1}(t_{1})+\\Gamma _{1}^{\\ast }(t_{1})+\\Gamma _{2}^{\\ast\n}(t_{1})+\\Gamma _{2}(t_{1})\\right] \\nonumber \\\\\n&&\\quad\\quad \\times \\langle \\sigma _{z}(t_{1})\\sigma\n_{+}(t_{2})\\rangle \\nonumber \\\\\n&&-\\left[ \\Gamma _{1}(t_{1})+\\Gamma _{1}^{\\ast }(t_{1})-\\Gamma _{2}^{\\ast\n}(t_{1})-\\Gamma _{2}(t_{1})\\right] \\nonumber \\\\\n&&\\quad\\quad \\times \\langle \\sigma\n_{+}(t_{2})\\rangle \\nonumber \\\\\n&&-2\\Gamma _{3}\\left( t_{1},t_{2}\\right) \\left\\langle \\sigma\n_{+}(t_{1})\\sigma _{z}(t_{2})\\right\\rangle. \n\\label{2time_evol_z+}\n\\end{eqnarray}\nSimilarly, compared with the single-time evolution equations \n(\\ref{one_evol_+}) and (\\ref{one_evol_z}), \nthe two-time evolution equations (\\ref{2time_evol_+z}) \nand (\\ref{2time_evol_z+}) have the extra last terms\ncontaining $\\Gamma _{4}(t_{1},t_{2})$ and $\\Gamma _{3}(t_{1},t_{2})$,\nrespectively. As a result, the QRT also does not hold for these two CF's.\n\n{\\em {Case 4:}} $[A,L]\\neq 0$, $[A,L^{\\dagger }]\\neq 0$ and \n$[B,\\tilde{L}(t)]\\neq 0$, $[B,\\tilde{L}^{\\dagger }(t)]\\neq 0$. 
\nIn this case, by using Eq.~(\\ref{2time_evol_eq_f}), we obtain\nthe following equation \n\\begin{eqnarray}\nd\\langle \\sigma _{z}(t_{1})\\sigma _{z}(t_{2})\\rangle \/dt_{1} &=&-\\left[\n\\Gamma _{1}(t_{1})+\\Gamma _{1}^{\\ast }(t_{1})+\\Gamma _{2}^{\\ast\n}(t_{1})+\\Gamma _{2}(t_{1})\\right] \\nonumber \\\\\n&&\\quad\\quad \\times \\langle \\sigma _{z}(t_{1})\\sigma\n_{z}(t_{2})\\rangle \\nonumber \\\\\n&&-\\left[ \\Gamma _{1}(t_{1})+\\Gamma _{1}^{\\ast }(t_{1})-\\Gamma\n_{2}^{\\ast }(t_{1})-\\Gamma _{2}(t_{1})\\right] \\nonumber \\\\\n&&\\quad\\quad \\times \\langle \\sigma\n_{z}(t_{2})\\rangle \\nonumber \\\\\n&&+4\\Gamma _{3}\\left( t_{1},t_{2}\\right) \\left\\langle \\sigma\n_{+}(t_{1})\\sigma _{-}(t_{2})\\right\\rangle \\nonumber \\\\\n&&+4\\Gamma _{4}\\left( t_{1},t_{2}\\right) \\left\\langle \\sigma\n_{-}(t_{1})\\sigma _{+}(t_{2})\\right\\rangle. \n\\label{2time_evol_zz}\n\\end{eqnarray}\nThe evolution equation of the CF $\\langle \\sigma _{z}(t_{1})\\sigma\n_{z}(t_{2})\\rangle $, Eq.~(\\ref{2time_evol_zz}), is coupled with the evolution\nequations of the CF's $\\langle \\sigma _{+}(t_{1})\\sigma _{-}(t_{2})\\rangle $\nand $\\langle \\sigma _{-}(t_{1})\\sigma _{+}(t_{2})\\rangle $, which correspond\nto the CF's in Case 3 and Case 2, respectively. 
Their evolution equations,\nobtained from Eq.~(\\ref{2time_evol_eq_f}), are \n\\begin{eqnarray}\nd\\langle \\sigma _{-}(t_{1})\\sigma _{+}(t_{2})\\rangle \/dt_{1} &=&-i\\omega\n_{A}\\langle \\sigma _{-}(t_{1})\\sigma _{+}(t_{2})\\rangle \\nonumber \\\\\n&&-\\left[ \\Gamma _{1}(t_{1})+\\Gamma _{2}^{\\ast }(t_{1})\\right] \\langle\n\\sigma _{-}(t_{1})\\sigma _{+}(t_{2})\\rangle \\nonumber \\\\\n&&+\\Gamma _{3}\\left( t_{1},t_{2}\\right) \\left\\langle \\sigma\n_{z}(t_{1})\\sigma _{z}(t_{2})\\right\\rangle, \n\\label{2time_evol_-+} \\\\\nd\\langle \\sigma _{+}(t_{1})\\sigma _{-}(t_{2})\\rangle \/dt_{1} &=&i\\omega\n_{A}\\langle \\sigma _{+}(t_{1})\\sigma _{-}(t_{2})\\rangle \\nonumber \\\\\n&&-\\left[ \\Gamma _{1}^{\\ast }(t_{1})+\\Gamma _{2}(t_{1})\\right] \\langle\n\\sigma _{+}(t_{1})\\sigma _{-}(t_{2})\\rangle \\nonumber \\\\\n&&+\\Gamma _{4}\\left( t_{1},t_{2}\\right) \\left\\langle \\sigma\n_{z}(t_{1})\\sigma _{z}(t_{2})\\right\\rangle. \n\\label{2time_evol_+-}\n\\end{eqnarray}\nFrom Eqs.~(\\ref{2time_evol_zz}), (\\ref{2time_evol_-+}) and (\\re\n{2time_evol_+-}), it is obvious that the QRT also does not hold for CF's \n\\langle \\sigma _{z}(t_{1})\\sigma _{z}(t_{2})\\rangle $, $\\langle \\sigma\n_{+}(t_{1})\\sigma _{-}(t_{2})\\rangle $ and $\\langle \\sigma _{-}(t_{1})\\sigma\n_{+}(t_{2})\\rangle $.\n\n \n\n\\begin{figure}[tbp]\n\\includegraphics[width=0.95\\linewidth]{CF_noise.eps}\n\\caption{(Color online) (a) Real part of the time evolution and (b) Fourier spectrum $S(\\omega)$ of the system operator CF $\\langle\\sigma_+(t_1)\\sigma_-(t_2)\\rangle$, and (c) real part of \n$\\langle \\sigma _{z}(t_{1})\\sigma _{z}(t_{2})\\rangle $ for three different \ncases: Markovian using the QRT (blue dot-dashed line), non-Markovian\nusing the QRT (green dashed line) and non-Markovian (red solid line) using the\nevolution equation (\\ref{2time_evol_eq_f}). 
\nOther parameters used are $\\omega_A=3$, $(k_BT\/\\hbar)=1$, $\\Lambda=5$, $\\gamma=0.1$, and $t_{2}=1$.\n(d) Fourier spectrum $S(\\omega)$ of $\\langle\\sigma_+(t_1)\\sigma_-(t_2)\\rangle$ for a larger coupling strength $\\gamma =0.35$ and for an initial mixed system state.\nThe results of the non-Markovian QRT case and the non-Markovian\nevolution case in (c) become indistinguishable when $t=t_1-t_2$ is larger\nthan 1.5.\n}\n\\label{fig:CF}\n\\end{figure}\n\nAny spectral density for which the time-convolutionless\nperturbation theory remains valid may be used to characterize the environment; for simplicity, we adopt the spectral density $J(\\omega)=\\sum_\\lambda|g_\\lambda|^2\\delta(\\omega-\\omega_\\lambda)=\\gamma \\hbar\\omega (\\omega\/\\Lambda)^{n-1}\\exp(-\\omega^2\/\\Lambda^2)$ with $n=1$ (Ohmic), where $\\Lambda$ is the cut-off frequency and $\\gamma$ is a dimensionless constant characterizing the strength \nof the interaction with the environment. \nFigure \\ref{fig:CF} shows the real part of the time evolution \nof the system operator CF's $\\langle\\sigma_+(t_1)\\sigma_-(t_2)\\rangle$ and \n$\\langle \\sigma _{z}(t_{1})\\sigma _{z}(t_{2})\\rangle $, as well as \nthe Fourier spectrum $S(\\omega)$ of $\\langle\\sigma_+(t_1)\\sigma_-(t_2)\\rangle$. 
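For concreteness, the Ohmic spectral density above is simple to evaluate. The following is a minimal sketch (not part of the original analysis), in units with $\hbar=1$ and with the illustrative values $\gamma=0.1$ and $\Lambda=5$ quoted in the figure caption:

```python
import math

# Ohmic (n = 1) spectral density J(omega) = gamma * omega * exp(-omega^2 / Lambda^2),
# in units with hbar = 1; gamma = 0.1 and Lambda = 5 as in the figure caption.
def spectral_density(omega, gamma=0.1, cutoff=5.0):
    return gamma * omega * math.exp(-omega ** 2 / cutoff ** 2)

# J vanishes at omega = 0, grows linearly at small omega, and is suppressed
# above the cut-off; its maximum sits at omega = cutoff / sqrt(2).
```

The Gaussian factor suppresses environmental modes above the cut-off, which produces the finite memory time responsible for the non-Markovian corrections.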
\nThe CF's are obtained in three different cases: the first is in the\nMarkovian case [i.e., taking the reservoir CF's \n$\\alpha(t_1-\\tau)$ and \n$\\beta(t_1-\\tau)$ in Eq.~(\\ref{2time_evol_eq_f}) to be $\\delta$-correlated in time, or equivalently\ntaking the coefficients $\\Gamma_1$, $\\Gamma_1^\\dagger$, $\\Gamma_2$,\nand $\\Gamma_2^\\dagger$ to be time-independent and equal to their\nMarkovian long-time values and setting all $\\Gamma_3(t_1,t_2)$ and\n$\\Gamma_4(t_1,t_2)$ to be zero in Eqs.~(\\ref{2time_evol_zz}),\n(\\ref{2time_evol_-+}) and (\\ref{2time_evol_+-})], the second is in the\nnon-Markovian case with a finite cut-off frequency but incorrectly\napplying the QRT directly [i.e., \nthe last two terms of Eq.~(\\ref{2time_evol_eq_f}), or equivalently the terms containing $\\Gamma_3(t_1,t_2)$ or $\\Gamma_4(t_1,t_2)$ in \nEqs.~(\\ref{2time_evol_zz}), (\\ref{2time_evol_-+}) and (\\ref{2time_evol_+-}),\nall being neglected],\nand the third is in the non-Markovian case with a finite cut-off\nfrequency [i.e., using the evolution equation (\\ref{2time_evol_eq_f})\nor equivalently Eqs.~(\\ref{2time_evol_zz}), (\\ref{2time_evol_-+}) and\n(\\ref{2time_evol_+-}) derived\nin this paper]. \nThe initial environmental state is a thermal state, and the system state in Fig.~\\ref{fig:CF}(a)--(c) is arbitrarily chosen to be $\\vert \\Psi\\rangle =\\left( \\frac{\\sqrt{3}}{2}\\left\\vert e\\right\\rangle +\\frac{1}{2}\\left\\vert g\\right\\rangle \\right)$. \nWe can see that there are considerable differences between the results \nobtained in the three different cases in Fig.~\\ref{fig:CF}(a) and (b), \nand more significant differences can be observed \nin Fig.~\\ref{fig:CF}(c) and (d). \nThe oscillations of the CF's are more pronounced in the non-Markovian cases. 
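In the Markovian QRT limit just described, the coefficients are constant and the $\Gamma_3$, $\Gamma_4$ memory terms vanish, so the two-time CF's obey independent linear decays. A minimal integration sketch, with constant rates $\Gamma_1=\Gamma_2=\gamma/2$ as illustrative stand-ins for the paper's time-dependent coefficients (a choice that also cancels the inhomogeneous $\langle\sigma_z(t_2)\rangle$ term), using $\omega_A=3$ and $\gamma=0.1$ from the figure caption and arbitrary initial values:

```python
import cmath

# Euler integration of the two-time CF equations in the Markovian QRT limit:
# constant rates Gamma_1 = Gamma_2 = gamma/2 (illustrative stand-ins) and
# Gamma_3 = Gamma_4 = 0, so each CF obeys dC/dt1 = lambda * C.
omega_A, gamma = 3.0, 0.1
dt, steps = 1.0e-4, 10000          # integrate over t1 - t2 = 1

c_pm = 1.0 + 0.0j                  # <sigma_+(t1) sigma_-(t2)>, arbitrary start
c_mp = 1.0 + 0.0j                  # <sigma_-(t1) sigma_+(t2)>
c_zz = 1.0 + 0.0j                  # <sigma_z(t1) sigma_z(t2)>
for _ in range(steps):
    c_pm += dt * (1j * omega_A - gamma) * c_pm    # oscillation + decay
    c_mp += dt * (-1j * omega_A - gamma) * c_mp
    c_zz += dt * (-2.0 * gamma) * c_zz            # twice the decay rate

# Analytic solutions of the same linear equations, for comparison.
exact_pm = cmath.exp((1j * omega_A - gamma) * dt * steps)
exact_zz = cmath.exp(-2.0 * gamma * dt * steps)
```

Restoring the $\Gamma_3(t_1,t_2)$ and $\Gamma_4(t_1,t_2)$ terms couples the three correlators, which is precisely the deviation from the QRT discussed above.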
\nIn Fig.~\\ref{fig:CF}(b), the coherent peaks of the Fourier spectrum \ncentered at $\\omega=\\pm\\omega_A$\nare higher and the widths are narrower in the non-Markovian cases.\nThe CF $\\langle \\sigma _{z}(t_{1})\\sigma\n_{z}(t_{2})\\rangle $ of the non-Markovian evolution equation case (red\nsolid line) in\nFig.~\\ref{fig:CF}(c) \ndiffers more from the CF's of the other two cases (green dashed\nand blue dot-dashed lines) in the short-time regime \nthan the CF $\\langle\\sigma_+(t_1)\\sigma_-(t_2)\\rangle$ of the\nnon-Markovian evolution equation case (red solid line) in\nFig.~\\ref{fig:CF}(a) does. \nThis is because, compared with the evolution equation of\n$\\langle\\sigma_+(t_1)\\sigma_-(t_2)\\rangle$,\nEq.~(\\ref{2time_evol_+-}), the evolution equation of \n$\\langle \\sigma _{z}(t_{1})\\sigma_{z}(t_{2})\\rangle $,\nEq.~(\\ref{2time_evol_zz}), carries over its QRT counterpart not only a term proportional to\n$\\Gamma_4(t_1,t_2)$ but also an extra correction term \nproportional to $\\Gamma_3(t_1,t_2)$.\nIt is also found that generally the results of the non-Markovian QRT\ncase (green dashed lines) approach those\nof the non-Markovian evolution equation case (red solid lines)\nmore closely than the results of the Markovian QRT case (blue\ndot-dashed lines) do.\nSimilar behaviors are also observed when the temperature is increased\nor when the cut-off frequency $\\Lambda$ is\nincreased.\nThe Markovian case can be recovered from the non-Markovian ones in the\nlimit of infinite cut-off frequency, $\\Lambda\\to\\infty$, where the\nthree results coincide. 
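The peak structure can be made quantitative with a toy model (ours, not the paper's): a CF decaying as $e^{(i\omega_A-g)t}$ has a Lorentzian spectrum of half-width $g$ centered at $\omega_A$, so a smaller effective decay rate, as in the non-Markovian curves, yields a higher and narrower coherent peak. A sketch with $\omega_A=3$ as in the figure and an illustrative rate $g=0.2$:

```python
import numpy as np

# Fourier spectrum of a model two-time CF C(t) = exp[(i*omega_A - g) * t]:
# a Lorentzian ~ 1/|g + i(omega - omega_A)| peaked at omega = omega_A with
# height ~ 1/g, so slower decay -> higher and narrower coherent peak.
omega_A, g = 3.0, 0.2
dt, n = 0.01, 1 << 14
t = np.arange(n) * dt
corr = np.exp((1j * omega_A - g) * t)          # model correlation function
spectrum = np.abs(np.fft.fft(corr)) * dt       # discrete approximation to S(omega)
omega = 2.0 * np.pi * np.fft.fftfreq(n, dt)
peak_omega = omega[np.argmax(spectrum)]        # should sit at omega_A
```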
\nFor a \nlarger $\\gamma$ and for an initial mixed system state with the values of the off-diagonal density matrix elements being a quarter of those of the pure state $\\vert \\Psi\\rangle$, the peak heights of $S(\\omega)$ \nare lower as shown in Fig.~\\ref{fig:CF}(d).\nFurthermore, in Fig.~\\ref{fig:CF}(d), \nthe two coherent peaks are still clearly visible in the non-Markovian cases, \nwhile they are barely visible in the Markovian case.\n \nFor the present spin-boson model with the system coupling operator\n$L\\neq L^\\dagger$, the self-Hamiltonian of the spin does not commute\nwith the system coupling operator, i.e., $[H_S,L]\\neq 0$, and the\nenvironment coupling operator also does not commute with the\nself-Hamiltonian of the environment, i.e., $[H_R, a_\\lambda]\\neq 0$. \nThus the exact non-Markovian finite-temperature two-time CF's of\nthe present spin-boson model are not directly available. \nBut in Ref.~\\cite{Goan10}, we evaluated the exact non-Markovian\nfinite-temperature two-time CF's of the system operators for an\nexactly solvable pure-dephasing spin-boson model in two ways, one by\nan exact direct operator technique without any approximation and the\nother by the derived evolution equation (\\ref{2time_evol_eq_f})\nvalid to second order in the system-environment interaction Hamiltonian.\nThe perfect agreement of the results between the non-Markovian\nevolution equation case and the exact operator evaluation case, and\nthe significant difference between the non-Markovian evolution\nequation case and the case of incorrectly applying the non-Markovian QRT\n\\cite{Goan10} 
\ndemonstrate clearly the validity of the derived evolution equation\n(\\ref{2time_evol_eq_f}).\nIt is thus believed that in the weak system-environment coupling\nlimit, the finite-temperature CF's calculated\nusing our evolution equation that takes into account the nonlocal\nenvironment memory term, Eq.~(\\ref{non-Markovian_1st_order}), for the\npresent spin-boson model would\nagree more closely with the exact \nresults than those in the non-Markovian QRT and Markovian QRT\ncases.\n\n\n\n\\section{Conclusion}\\label{sec:conclusion}\nIn summary, we have derived evolution equations of the single-time and\ntwo-time CF's of system operators, using a quantum master equation \ntechnique different from \nthose presented in Refs.~\\cite{Alonso05,Vega06,Alonso07}. \nThis quantum master equation approach allows us to explicitly point\nout an important nonlocal environment (bath) memory term that vanishes\nin the Markovian case but makes the evolution equation deviate from\nthe QRT in general cases. \nThe derived two-time \nequations are valid for thermal environments at any temperature with\nHermitian or non-Hermitian coupling operators and for any initially\nfactorized (separable) system-reservoir state (pure or mixed) as long as the \nassumption of Eq.~(\\ref{traceless_1st_order}) and the approximation \nof the weak system-environment coupling \nthat are used to derive the equations apply. 
\nIn contrast to the evolution equations presented in\nRefs.~\\cite{Alonso05,Vega06,Alonso07,Goan10} that are applicable for\nbosonic environments, Eq.~(\\ref{2time_evol_eq}) derived in this paper\ncan be used to calculate the two-time CF's for a wide range of \nsystem-environment models with\nbosonic and\/or fermionic environments.\nWe have also given conditions under which the QRT holds in the weak\nsystem-environment coupling case and have applied the derived equations to\na problem of a two-level system (atom) coupled to a\nfinite-temperature \nthermal bosonic environment (electromagnetic fields),\nin which the system coupling operator is not Hermitian, $L\\neq\nL^\\dagger$, and the evolution equations derived in\nRefs.~\\cite{Alonso05,Vega06,Alonso07} are not applicable. \nIt is easy to calculate the two-time CF's using the\nderived evolution equations. Other non-Markovian open quantum system\nmodels that are not exactly solvable can be treated in a similar way\nto obtain the time evolutions of their two-time system operator CF's\nvalid to second order in the system-environment interaction\nHamiltonian. This illustrates the practical usage of the evolution\nequations. \nTherefore, the derived evolution equations that\ngeneralize the QRT to the non-Markovian cases will\nhave broad applications in\nmany different branches of physics, such as quantum optics,\nstatistical physics, chemical physics, quantum\ntransport in nanostructure devices and so forth, whenever properties\nrelated to the two-time CF's are of interest. \n\n\\begin{acknowledgments}\nWe would like to acknowledge support from the National Science\nCouncil, Taiwan, under Grant No. 97-2112-M-002-012-MY3, \nsupport from the Frontier and Innovative Research Program \nof the National Taiwan University under Grants No. 99R80869 and \nNo. 99R80871, and support from the focus group program of the National Center for Theoretical Sciences, \nTaiwan. H.S.G. 
is grateful to the National Center for High-performance Computing, Taiwan, \nfor computer time and facilities.\n\\end{acknowledgments}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\subsection{Astrophysical source targets} The O2 GstLAL search targeted GW\nsignals from merging binary compact objects with component masses between\n1\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ and 399\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$. These include binary systems with two neutron\nstars (BNS), two black holes (BBH), or a neutron star and a black hole (NSBH).\nThis component mass region is known to be populated with compact objects\nproduced from the collapse of massive stars. With stellar evolution models,\nneutron stars can form in the mass range between 1\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ and\n3\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$~\\cite{Rhoades1974, Kalogera1996, OzelNSLower, Lattimer2012,\nKiziltan2013} although there is only one observed neutron star with a mass\nlarger than 2\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$~\\cite{MassiveNS}, and those in binaries do not approach\n2\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$~\\cite{Ozel2016}. Stellar evolution models also predict that black\nholes may exist with a minimum mass down to 2\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$~\\cite{Shaughnessy2005}\nand a maximum mass up to 100\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ or potentially\nhigher~\\cite{Belczynski2014, deMink2015}. Black holes with masses between\n$\\sim$100\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ and $\\sim10^5$\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ are classified as intermediate-mass\nblack holes and could have formed through hierarchical merging of lower mass\nblack holes~\\cite{Miller2004}. This search is also sensitive to GWs from\nbinaries of primordial black holes (PBH), formed from over-dense regions in the\nearly universe. 
However, distinguishing a PBH GW signal from a conventional\nstellar-evolution black hole GW signal would not be possible with this search\nand is instead pursued in a separate search of the sub-solar mass\nregion~\\cite{Subsolar2018}.\n\nWe also define different ranges of allowed angular momentum for component\nneutron stars and component black holes. We consider only the dimensionless\nspin $\\chi= c\\left| \\vec{S}\\right| \/Gm^2$ where $\\vec{S}$ is the angular\nmomentum and $m$ is the component mass. Observations of the fastest spinning\npulsar constrain $\\chi \\lesssim 0.4$~\\cite{Hessels2006} while pulsars in\nbinaries have $\\chi \\le 0.04$~\\cite{Kramer2009}. X-ray observations of\naccreting BHs indicate a broad distribution of BH spins~\\cite{Fabian2012,\nGou2011, Mcclintock2011}, while the relativistic Kerr bound $\\chi \\le 1$ gives\nthe theoretical constraint~\\cite{MisnerGrav}.\n\nThese observations and evolution models inform the ranges of parameters we\ndefine for our template banks. As shown in\nFig.~\\ref{fig:H1L1V1-GSTLAL_INSPIRAL_PLOTBANKS_bank_regions_imri-0-0.png}, we can\nsee the BNS, NSBH, and BBH populations represented in the O2 GstLAL offline search. We impose an additional\nconstraint on the component dimensionless spins of template waveforms by\nrequiring their orientations to be aligned or anti-aligned with the orbital\nangular momentum of the binary $\\hat{L}$. Then the dimensionless projections of\nthe component spins along $\\hat{L}$ are defined as $\\chi_i \\equiv c\\left|\n\\vec{S}_i \\cdot \\hat{L} \\right| \/Gm_i^2$. The region in green marks the BNS\ntemplates with component masses between 1\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ and 2\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ and\n(anti-)aligned dimensionless spin magnitudes with $ \\chi_{1,2}<0.05$. This\n$\\chi$ limit is motivated by the observational limit of $\\chi \\le 0.04$ but\nwith some added uncertainty. 
The region in blue marks the BBH templates with\ncomponent masses between 2\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ and 399\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ and (anti-)aligned\ndimensionless spin magnitudes with $\\chi_{1,2} <0.999$. This $\\chi$ limit is chosen to\nbe as close to the theoretical limit of 1 as possible with current waveform\napproximants, as described in Section~\\ref{sec:approximant}. The templates in red mark the NSBH\nrange with the neutron star mass between 1\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ and 2\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ and the\nblack hole mass between 2\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ and 200\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$. For these systems, neutron\nstars have $\\chi_{1,2} < 0.05$ and black holes have $\\chi_{1,2} < 0.999$.\n\n\\begin{figure}[hbt!]\n\\includegraphics[width=0.45\\textwidth]{H1L1V1-GSTLAL_INSPIRAL_PLOTBANKS_bank_regions_imri-0-0.png}\n\\caption{The template bank used by the O2 GstLAL offline search in component mass space. The templates representing the different astrophysical populations are shown in green for BNS, blue for BBH, and red for NSBH.}\n\\label{fig:H1L1V1-GSTLAL_INSPIRAL_PLOTBANKS_bank_regions_imri-0-0.png}\n\\end{figure}\n\nIn Fig.~\\ref{online_bank.png}, we can see the BNS, NSBH, and BBH populations represented in the O2 GstLAL online search.\nThe BNS templates cover the same component mass and dimensionless spin magnitude\nrange as the offline bank. However, a cut on total mass results\nin different mass ranges for NSBH and BBH templates. The maximum allowed total\nmass is 150\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$, to remove high-mass templates which correspond to short waveforms\nthat recover short transient noise fluctuations (glitches) at a high rate.\n\n\\begin{figure}[hbt!] 
\\includegraphics[width=0.45\\textwidth]{online_bank.png}\n\\caption{\\label{online_bank.png}\nThe template bank used by the O2 GstLAL online search in component\nmass space. The templates representing the different astrophysical populations\nare shown in green for BNS, blue for BBH, and red for NSBH.} \\end{figure}\n\n\\subsection{Construction of the O2 bank}\n\nThe construction of a template bank relies on a number of parameters, including the selection of\na representative noise power spectral density $S_n(f)$ and appropriate waveform models,\nthe waveform starting frequency $f_\\mathrm{low}$, the placement method, and a specified minimum\nfitting factor criterion~\\cite{privitera2014improving, fittingfactor, FFdef} for all templates in the bank.\n\nThe minimum fitting factor describes the effectualness of a template bank in recovering\nastrophysical sources. To define this quantity, we note that the matched filter output is maximized\nwhen a template waveform exactly overlaps the signal waveform. This optimization is impossible in practice,\nhowever, since the template bank samples the parameter space discretely while\nastrophysical sources arise from a continuum. Regardless, it is useful to\nquantify the degree to which two waveforms, $h_1$ and $h_2$, overlap. 
The\noverlap is defined as the noise-weighted inner product integral:\n\\begin{align} \\label{eq:overlap}\n(h_{1} | h_{2}) = 2 \\int^\\infty_{f_\\mathrm{low}} \\frac{\\tilde{h}_1(f)\\tilde{h}_2^{\\ast}(f) + \\tilde{h}_1^{\\ast}(f)\\tilde{h}_2(f)}{S_n(f)}df,\n\\end{align}\nwhere $f_\\mathrm{low}$ was set to 15\\,Hz, as motivated by the noise power spectral density described in\nSection~\\ref{sec:psd}.\n\nThe \\emph{match} between two waveforms is then defined as the maximization over\ncoalescence phase and time of the noise-weighted inner product:\n\n\\begin{align} \\label{eq:Match} M(h_{1}, h_{2}) = \\underset{\\phi_{c},\nt_{c}}{\\text{max}} (h_{1}|h_{2}(\\phi_{c},t_{c})). \\end{align}\nThis gives the fraction of the signal-to-noise ratio (SNR) retained when recovering waveform $h_2$\nwith the (non-identical) waveform $h_1$. Then, the fitting factor is the related quantity used\nin describing the effectualness of template banks:\n\\begin{align} \\label{eq:FF} FF(h_{s}) = \\underset{h \\in \\{h_{b}\\}}{\\text{max}} M(h_s, h) \\end{align}\nwhere $h_b$ is the set of templates in the bank and $h_s$ is a signal waveform\nwith parameters drawn from the continuum. The fitting factor describes the fraction of SNR\nretained for arbitrary signals in the parameter space covered by the bank. Typically,\ncompact binary coalescence searches have required a fitting factor of $97\\%$ to ensure\nthat no more than $\\sim10 \\%$ of possible astrophysical signals are lost due\nto the discrete nature of the bank. As described in Sect.~\\ref{sec:placement}, we\nuse a hierarchical set of fitting factor requirements to construct the bank.\n\n\\subsubsection{Modeling the detector noise}\\label{sec:psd}\nThe noise power spectral density (PSD) as shown in Fig.~\\ref{psd.pdf} was used to\ncompute the overlap integrals in the construction of the O2 template bank.\nThis projected O2 sensitivity curve was produced by combining some of the\nbest LIGO L1 sensitivities achieved before the start of O2. 
At low frequencies,\nbelow 100\\,Hz, the best sensitivity is taken from L1 measurements during commissioning\nin February 2016. At high frequencies, above 100\\,Hz, the best sensitivity is taken from\nL1 during O1, with projected improved shot noise due to slightly higher input power and\nimproved efficiency of the readout chain. Calculation of\nthis PSD has been documented in~\\cite{psd}.\n\n\\begin{figure}[hbt!]\n\\includegraphics[width=0.4\\textwidth]{psd_newcode.png}\n\\caption{\\label{psd.pdf}Representation of the model power spectral density of\ndetector noise. This was used to construct the O2 template bank. } \\end{figure}\n\t\t\n\\subsubsection{Waveform approximants} \\label{sec:approximant}\n Gravitational waveforms from compact binary mergers are described by\n 17 intrinsic and extrinsic parameters. However, as demonstrated in \\cite{ajith2014effectual}, for template\n placement purposes, we can parameterize these systems by three parameters composed of\ncomponent masses $m_i$ and the reduced-spin parameter $\\chi$ which is a function of\nthe dimensionless spin parameters $\\chi_i$ for $i=1,2$.\n\nAbove a total mass of 4\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$, the waveforms of the binary systems are\napproximated by an effective-one-body formalism (SEOBNRv4\\textunderscore\nROM)~\\cite{SEOBNRv4ROM}, combining results from the post-Newtonian approach,\nblack hole perturbation theory and numerical relativity to model the complete\ninspiral, merger and ringdown waveform. Below a total mass of 4\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$, the\nbinary systems are approximated by an inspiral waveform accurate to\nthird-and-a-half post-Newtonian order called the TaylorF2 waveform\nmodel~\\cite{TaylorF2,CBCcompanion}. The extent of the present parameter space\ncovered by the template bank is limited by the availability of waveform models\nand the sensitivity of the present search. 
We neglect the effect of precession\nand higher order modes in our templates.\n\n\\subsubsection{Template placement}\\label{sec:placement}\nBoth the O2 offline and online template banks were created in the same way, by constructing two sub-banks that were added together. For systems with total mass $2\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace \\le M \\le 4\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ where the TaylorF2 approximant is used, the templates were first laid out using a\ngeometric metric technique~\\cite{geometric}. This geometric bank was used as a coarse seed bank for an additional stochastic method placement~\\cite{ajith2014effectual, stochastic} with a convergence threshold set to 97$\\%$. For systems with total mass greater than 4\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$, where the SEOBNRv4\\textunderscore ROM approximant is used, a coarse bank was first generated with the stochastic method but\nwith a very low convergence threshold. Again this stochastic bank was used as a coarse seed bank for an additional stochastic method placement with a convergence threshold set to 97$\\%$. Additionally, only waveforms with a duration longer than\n0.2\\,s were retained, to avoid recovering short transient noise glitches. The two sub-banks were added to form the full bank with a total of 661,335 templates.\n\n\\begin{figure}[hbt!]\n\\includegraphics[width=0.45\\textwidth]{gstlal_bank_m1m2.png}\n\\caption{\\label{gstlal_bank_m1m2.png}A visual representation of the original O2\nbank in the component mass space, containing a total of 661,335 templates\nplaced with a minimal match of 97$\\%$.} \\end{figure}\n\nThe original O2 offline bank, as shown in Fig.~\\ref{gstlal_bank_m1m2.png} aided in the discovery of\none of the earliest events detected during O2, GW170104 [3]. The higher density of\nthe bank at lower masses is expected because low mass systems have\nlonger waveforms and spend more time in the detectors' sensitive frequency band. 
This\nenables the matched-filter search to better distinguish between two different\nlow mass systems. This also means that more templates are required in the lower mass region of the bank for\nthe required fitting factor convergence. At the highest masses, the waveforms contain very few\ncycles and very few templates are required for coverage in this region.\n\nEarly in O2, short duration glitches were found to be particularly problematic for the online search, even with a duration cut of 0.2\\,s applied. Thus, to avoid delays in delivering low-latency gravitational-wave triggers, only waveforms with a total mass $<150\\,\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ were retained in the online bank. This online bank, as shown in Fig.~\\ref{online_bank.png}, was used for the entirety of the O2 observing run.\n\n\\subsubsection{Overcoverage in the offline bank}\\label{sec:overcoverage}\nAs outlined in Section~\\ref{sec:implementation}, templates are grouped by the GstLAL search so that each\ngroup has the same number of templates with similar parameters and background noise\ncharacteristics~\\cite{O1Methods, O2Methods}. It was uncovered partway through O2 that the\nlower density of templates in the high mass part of the offline bank was resulting in templates with\nvery different background noise properties being grouped together. This led to incorrect averaging of\nnoise properties in the high mass groupings of templates and, in turn, resulted in incorrect estimation of the\nsignificance of loud coincident noise in time-shifted data from the two detectors~\\cite{O2Methods}. This was\nnot an issue in the online bank due to the cut removing templates with total mass $>150\\,\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$.\n\nTwo different remedies were applied. The offline bank was\noverpopulated with extra templates in the higher mass region as outlined below. 
Additionally, the templates in\nthis part of the bank were grouped differently from those in the denser lower\nmass region such that templates with more similar noise characteristics can be\ngrouped together. Details of the template grouping methods are given in Ref.~\\cite{O2Methods}, which is to be published soon.\n\nRegarding the overcoverage, extra templates were added to the initial offline bank in the total mass range of\n$80\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace \\le M \\le 400\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ using two methods. As a first step, the original offline bank was used as a seed\nfor an additional stochastic placement in the total mass range $80\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace \\le M \\le 400\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ with an\nincreased convergence threshold of 98$\\%$. Additionally, no template duration threshold was used so as not to exclude the short\nwaveforms corresponding to the heavier mass systems. A total of 14,665\ntemplates, as shown in Fig.~\\ref{mtotalcut80_m1m2.png}, were added to the initial offline bank.\n\nDespite the increased convergence threshold, the high mass region of the bank remained sparsely\npopulated, as the overlaps between high mass waveforms with few cycles are generally high. Thus, the convergence\nthreshold is already met, without the placement of additional templates. However, short duration glitches are also\nrecovered by relatively few high mass templates, and if these few glitchy templates are\ngrouped together for background estimation with quieter templates, they can spoil the sensitivity over a\nbroad mass range. 
Thus, we chose to force the placement of additional templates at higher mass using a\nuniform grid placement in component mass space for the total mass range $100\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace \\le M \\le 400\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$, with mass ratios between 1 and 97.989.\nA total of 1000 templates were placed without any limitations on the waveform duration. This gridded bank, as\nshown in Fig.~\\ref{uniformgrid_m1m2.png}, was then added to the offline bank produced in the previous step.\n\n\\begin{figure}[hbt!]\n\\includegraphics[width=0.45\\textwidth]{mtotalcut80_m1m2.png}\n\\caption{\\label{mtotalcut80_m1m2.png}The bank of 14,665 extra templates that\nwere added to the initial O2 bank, with a 98$\\%$ minimum match above a total\nmass of 80 $\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ in the component mass space.} \\end{figure}\n\n\\begin{figure}[hbt!]\n\\includegraphics[width=0.45\\textwidth]{uniformgrid_m1m2.png}\n\\caption{\\label{uniformgrid_m1m2.png}The uniform grid bank with 1000\ntemplates spanning 100--400 $\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ in total mass.} \\end{figure}\n\nAltogether, the final, improved O2 bank has a total of about 677,000 templates, as shown in\nFig.~\\ref{fig:H1L1V1-GSTLAL_INSPIRAL_PLOTBANKS_bank_regions_imri-0-0.png}.\n\n\\subsection{Implementation in the GstLAL pipeline}\\label{sec:implementation}\nThe GstLAL-based inspiral search is a matched-filtering pipeline. The noise-weighted inner\nproduct of each whitened template with the whitened data produces the signal-to-noise\nratio (SNR). 
Both signals and glitches can produce high SNR; thus, a number of additional\nconsistency and coincidence checks are implemented in the pipeline, as detailed in Ref.~\\cite{O2Methods}.\nIn order to access the full waveform of systems up to 400\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ that merge at lower frequencies,\nthe filtering frequency was reduced from 30\\,Hz in the O1 search to 15\\,Hz.\n\nFor the purpose of background estimation, templates are grouped together so that\neach group has the same number of templates with similar background noise characteristics.\nNoise properties are averaged separately for each group. Prior to O2, templates were\ngrouped according to two composite parameters that characterize the\nwaveform inspiral to leading order. These were the chirp mass of the binary $\\ensuremath{\\mathcal{M}}\\xspace$ and the\neffective spin parameter $\\chi_\\mathrm{eff}$. The chirp mass is \\begin{align}\n\\ensuremath{\\mathcal{M}} &= \\frac{(m_1 m_2)^{3\/5}}{(m_1 + m_2)^{1\/5}}. \\end{align} The effective\nspin parameter is defined as \\begin{align} \\chi_\\mathrm{eff} &\\equiv \\frac{m_1\n\\chi_1 + m_2 \\chi_2}{m_1 + m_2}, \\end{align} and acts as a mass-weighted\ncombination of the spin components (anti-)aligned with the orbital angular\nmomentum.\n\nHowever, as described in Sect.~\\ref{sec:overcoverage}, extra templates were placed in the\nhigh mass region of the offline bank, to better capture the properties of the noise in that regime.\nTemplates above a total mass of 80\\,$\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$ were then grouped by template duration from 15\\,Hz\nrather than the $\\ensuremath{\\mathcal{M}}\\xspace$ and $\\chi_\\mathrm{eff}$ binning used at lower masses. 
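Both grouping parameters have simple closed forms; a small helper sketch (the function names are ours, not the pipeline's):

```python
def chirp_mass(m1, m2):
    # M_chirp = (m1*m2)**(3/5) / (m1+m2)**(1/5), same units as the inputs
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def chi_effective(m1, m2, chi1, chi2):
    # Mass-weighted combination of the aligned dimensionless spins
    return (m1 * chi1 + m2 * chi2) / (m1 + m2)

# For an equal-mass binary the chirp mass reduces to m * 2**(-1/5).
```

For a 1.4--1.4 $\mathrm{M}_\odot$ BNS this gives $\mathcal{M} \approx 1.22\,\mathrm{M}_\odot$ and, for example, $\chi_\mathrm{eff} = 0.03$ when $\chi_1 = 0.02$ and $\chi_2 = 0.04$.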
Template duration\nbetter characterizes the waveform merger and ringdown, the detectable part of the signal\nfor high mass systems.\n\n\\section{Effectualness} \\label{sec:effectual}\n\nTo assess the effectualness of this template bank, we again compute the\n$\\mathrm{FF}(h_{s})$ defined in Eq.~(\\ref{eq:FF}) for a collection of simulated signals with\nparameters drawn randomly from the covered mass and spin space. The $FF$ depends on a number of parameters including\nmasses, spins, spin orientations and sky locations. Hence it is often\nplotted as a function of two such parameters: the $FF$ is binned in those two parameters and the \nmean $FF$ in each bin is plotted~\\cite{fittingfactor}:\n\\begin{equation} \\label{eq:FFav} FF_\\mathrm{mean} = \\langle FF \\rangle. \\end{equation}\n\nWe selected simulated signals from various populations of BNS, NSBH, BBH, and\nIMBHB systems to check the effectualness of the bank. The\ndetails of the simulation sets are summarized in\nTable~\\ref{t:banksim_injections}. 
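The quantities entering this check can be prototyped in a few lines. The sketch below is our toy illustration, not the GstLAL implementation: it uses a flat $S_n(f)$, maximizes the overlap over coalescence time with an inverse FFT and over phase via the complex modulus, and evaluates the fitting factor of a signal against a hypothetical bank of damped sinusoids:

```python
import numpy as np

def match(h1, h2):
    # Overlap with a flat S_n(f), maximized over coalescence time (the
    # inverse FFT gives one overlap per time shift) and phase (modulus).
    n = len(h1)
    H1, H2 = np.fft.fft(h1), np.fft.fft(h2)
    keep = np.zeros(n)
    keep[1:n // 2] = 1.0                       # positive frequencies only
    corr = np.fft.ifft(H1 * np.conj(H2) * keep)
    norm = np.sqrt(np.sum(np.abs(H1) ** 2 * keep) *
                   np.sum(np.abs(H2) ** 2 * keep))
    return np.max(np.abs(corr)) * n / norm

def fitting_factor(signal, bank):
    # Best match of the signal over a discrete template bank.
    return max(match(h, signal) for h in bank)

# Toy "templates": damped sinusoids standing in for inspiral waveforms.
t = np.arange(4096) / 1024.0
def waveform(f):
    return np.exp(-t / 0.05) * np.sin(2.0 * np.pi * f * t)

bank = [waveform(f) for f in (40.0, 50.0, 60.0, 70.0)]
signal = waveform(60.5)                        # falls between bank templates
ff = fitting_factor(signal, bank)              # high, but below unity
```

An identical or merely time-shifted signal gives a match of 1, while the 60.5\,Hz signal loses a little SNR to the discreteness of the toy bank, which is exactly what the fitting-factor criterion controls.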
For each signal population,\n$10^{4}$ simulations were performed.\n\n\\begin{table*}[t]\n \\centering\n \\begin{tabular}{ lllll }\n\\hline\n\\hline\nPopulation & Mass($\\ensuremath{\\mathrm{M}_{\\odot}}\\xspace$) & Spin & & Waveform approximant\\\\\n\\hline \n\nBNS & $m_{1,2} \\in [1,3]$ & $\\chi_{1,2} \\in [-0.05,0.05]$, aligned & & TaylorF2~\\cite{TaylorF2} \\\\\n\nBNS & $m_{1,2} \\in [1,3]$ & $\\chi_{1,2} \\in [-0.4,0.4]$, precessing & & SpinTaylorT4~\\cite{SpinTaylorT4} \\\\\n\n\\multirow{2}*{NSBH} & $m_{1} \\in [1,3]$ & $\\chi_{1} \\in [-0.4,0.4]$, aligned & & SEOBNRv4\\_ROM~\\cite{SEOBNRv4ROM} \\\\\n & $m_{2} \\in [3,97]$ & $\\chi_{2} \\in [-0.989,0.989]$, aligned & & \\\\\n\nBBH & $m_{1,2} \\in [2,99]$ & $\\chi_{1,2} \\in [-0.99,0.99]$ aligned & & SEOBNRv4\\_ROM~\\cite{SEOBNRv4ROM} \\\\\n\nIMBHB & $m_{1,2} \\in [1,399]$ & $\\chi_{1,2} \\in [-0.998,0.998]$ aligned & & SEOBNRv4\\_ROM~\\cite{SEOBNRv4ROM} \\\\\n\nIMBHB & $m_{1,2} \\in [50,350]$ & Non-spinning & & EOBNRv2HM~\\cite{EOBNRv2HM} \\\\\n\n\\end{tabular}\n\\caption{\\label{t:banksim_injections}Description of different categories of\nastrophysical populations, from which random mass and spin parameters were drawn and used to generate\nwaveforms to check the effectualness of the template bank. Multiple simulation sets of the same population\nwere used, varying in the type of waveform, mass ranges covered and whether the\nspin is aligned to the orbital angular momentum.} \\end{table*}\n\nIn Fig.~\\ref{fig:bnsff}, we can see the fitting factors in the $M$-$\\chi_\\mathrm{eff}$ plane for BNS aligned-spin TaylorF2 waveform approximants~\\cite{TaylorF2} and precessing-spin SpinTaylorT4 waveform approximants~\\cite{SpinTaylorT4}. The majority of fitting factors are above 0.97, except along the low-mass edge of the bank at $M=2.0$ below which no templates are placed. 
The bank is constructed with aligned-spin TaylorF2 waveforms in this low mass region so fitting factors are expected to be at least as high as the required fitting factor of 0.97 to ensure that no more than $\\sim10 \\%$ of possible astrophysical signals are lost due to the discrete nature of the bank. We can also see that the majority of fitting factors for precessing-spin SpinTaylorT4 waveform approximants are also above 0.9 although sensitivity falls off rapidly outside $-0.05 < \\chi_\\mathrm{eff} < 0.05$ for systems with NS component mass less than 2 \\ensuremath{\\mathrm{M}_{\\odot}}\\xspace. There are no templates placed in this region so the fall off in fitting factor is expected. This also demonstrates that a search based on an aligned-spin template bank can recover precessing-spin signals.\n\n\\begin{figure*}[t] \n\\centering\n\\begin{minipage}[b]{0.47\\textwidth}\n\\centering\n \\includegraphics[width=\\textwidth]{ban_sim_plots\/bns_TaylorF2_chiM_hexbin.png}\n \\end{minipage}\n \\begin{minipage}[b]{0.47\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{ban_sim_plots\/bns_SpinTaylorT4_chiM_hexbin.png}\n \\end{minipage}\n\\caption{\\label{fig:bnsff} Fitting factors in $M$-$\\chi_\\mathrm{eff}$ plane for BNS aligned-spin TaylorF2 waveform approximants~\\cite{TaylorF2} ({\\it left}) and precessing-spin SpinTaylorT4 waveform approximants~\\cite{SpinTaylorT4} ({\\it right}). ({\\it Left}) The majority of fitting factors are above 0.97, except along the low-mass edge of the bank at $M=2.0$ where the fitting factor starts to fall off. The bank is constructed with TaylorF2 waveforms so fitting factors are expected to be at least as high as the required fitting factor of 0.97 to ensure that no more than $\\sim10 \\%$ of possible astrophysical signals are lost due to the discrete nature of the bank. 
({\\it Right}) The majority of fitting factors are above 0.9 although sensitivity falls off rapidly outside $-0.05 < \\chi_\\mathrm{eff} < 0.05$ for systems with NS component mass less than 2 \\ensuremath{\\mathrm{M}_{\\odot}}\\xspace. There are no templates placed in this region so the fall off in fitting factor is expected. This also demonstrates that a search based on an aligned-spin template bank can recover precessing-spin signals.}\n\\end{figure*}\n\nIn Fig.~\\ref{fig:nsbhff}, we can see the fitting factors in the $M$-$\\chi_\\mathrm{eff}$ plane and as a function of mass ratio for NSBH aligned-spin SEOBNRv4\\_ROM waveform approximants~\\cite{SEOBNRv4ROM}. The majority of fitting factors are above 0.97 although fitting factors down to 0.885 are present across this region. Lower fitting factors occur for systems with more extreme mass ratios indicating that the bank is not optimized to recover signals from highly asymmetric systems. A dedicated search in this region may be required to find signals from systems with extreme mass ratios.\n\n\\begin{figure*}[t]\n\\centering\n\\begin{minipage}[b]{0.47\\textwidth}\n\\centering\n \\includegraphics[width=\\textwidth]{ban_sim_plots\/nsbh_highmass_chiM_hexbin.png}\n \\end{minipage}\n \\begin{minipage}[b]{0.47\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{ban_sim_plots\/nsbh_highmass_qVsFF.png}\n \\end{minipage}\n\\caption{\\label{fig:nsbhff} Fitting factors in $M$-$\\chi_\\mathrm{eff}$ plane ({\\it left}) and as a function of mass ratio ({\\it right}) for NSBH aligned-spin SEOBNRv4\\_ROM waveform approximants~\\cite{SEOBNRv4ROM}. ({\\it Left}) The majority of fitting factors are above 0.97 although fitting factors down to 0.885 are present across this region. ({\\it Right}) We can see that the low fitting factors occur for systems with more extreme mass ratios. 
This indicates that the bank is not optimized to recover signals from highly asymmetric systems.}\n\\end{figure*}\n\nIn Figures~\\ref{fig:bbhff} and~\\ref{fig:imbhff}, we can see the fitting factors in $M$-$\\chi_\\mathrm{eff}$ plane for BBH and IMBH aligned-spin SEOBNRv4\\_ROM waveform approximants~\\cite{SEOBNRv4ROM} and as a function of mass ratio for IMBH non-spinning EOBNRv2HM waveform approximants~\\cite{EOBNRv2HM}. For the recovery of aligned-spin SEOBNRv4\\_ROM waveform approximants, the majority of fitting factors are above 0.97. The bank is constructed with SEOBNRv4\\_ROM waveforms in the high mass region so fitting factors are expected to be at least as high as the required fitting factor of 0.97. We note that fitting factors fall off for high $\\chi_\\mathrm{eff}$ although coverage in this region is still high. Non-spinning EOBNRv2HM waveform approximants with higher-order modes can also be recovered by the search in the IMBHB region, despite template waveforms not including higher-order mode effects. The fitting factors have a dependency on mass ratios, as higher-order modes become more significant at higher mass ratios. Higher-order modes have higher frequency content and will be within the sensitive frequency band of LIGO and Virgo for IMBH signals. 
Hence it will become important to include templates containing higher-order modes in their waveforms, in order to increase the sensitivity of the search towards heavier mass systems~\\cite{IMBHHM}.\n\n\\begin{figure}[hbt!]\n\\includegraphics[width=0.45\\textwidth]{ban_sim_plots\/bbh_new_chiM_hexbin.png}\n\\caption{\\label{fig:bbhff} Fitting factors in $M$-$\\chi_\\mathrm{eff}$ plane for BBH aligned-spin SEOBNRv4\\_ROM waveform approximants~\\cite{SEOBNRv4ROM}.} \\end{figure}\n\n\\begin{figure*}[t]\n\\centering\n\\begin{minipage}[b]{0.47\\textwidth}\n\\centering\n \\includegraphics[width=\\textwidth]{ban_sim_plots\/imbh_noHM_chiM_hexbin.png}\n \\end{minipage}\n \\begin{minipage}[b]{0.47\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{ban_sim_plots\/imbh_HM_qVsFF.png}\n \\end{minipage}\n\\caption{\\label{fig:imbhff} Fitting factors in $M$-$\\chi_\\mathrm{eff}$ plane for IMBH aligned-spin SEOBNRv4\\_ROM waveform approximants~\\cite{SEOBNRv4ROM} ({\\it left}) and as a function of mass ratio for IMBH non-spinning EOBNRv2HM waveform approximants~\\cite{EOBNRv2HM} ({\\it right}). ({\\it Left}) The majority of fitting factors are above 0.97 for the recovery of aligned-spin SEOBNRv4\\_ROM waveform approximants. The bank is constructed with SEOBNRv4\\_ROM waveforms in the high mass region so fitting factors are expected to be at least as high as the required fitting factor of 0.97 to ensure that no more than $\\sim10 \\%$ of possible astrophysical signals are lost due to the discrete nature of the bank. We note that fitting factors fall off for high $\\chi_\\mathrm{eff}$ although coverage in this region is still high. 
({\\it Right}) Non-spinning EOBNRv2HM waveform approximants with higher-order modes can also be recovered by the search in the IMBHB region, despite template waveforms not including higher-order mode effects.}\n\\end{figure*}\n\n\n\n\\section{Introduction}\\label{sec:intro}\n\t\\input{intro.tex}\n\n\\section{Design and construction of the O2 bank} \\label{sec:design}\n\t\\input{design.tex}\n\n\\section{Conclusion} \\label{sec:conclusion}\n\t\\input{conclusion.tex}\t\n\n\\section{Acknowledgements} \\label{sec:ack}\n\nWe thank the LIGO-Virgo Scientific Collaboration for access to data. LIGO was\nconstructed by the California Institute of Technology and Massachusetts\nInstitute of Technology with funding from the National Science Foundation (NSF)\nand operates under cooperative agreement PHY-0757058. We thank Satya Mohapatra for the helpful comments and suggestions. We also thank Graham Woan for helping with the review of the template bank used for O2. We gratefully acknowledge the support by NSF grant PHY-1626190 for the UWM computer cluster and PHY-1607585 for JC, PB, DC and DM. SC is supported by the research programme of the Netherlands Organisation for Scientific Research (NWO). SS was supported in part by the Eberly Research Funds of Penn State, The Pennsylvania State University, University Park, PA 16802, USA. HF was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). CH was supported in part by the NSF through PHY-1454389. Funding for this project was provided by the Charles E. Kaufman Foundation of The Pittsburgh Foundation. TGFL was partially supported by a grant from the Research Grants Council of the Hong Kong (Project No. CUHK 14310816 and CUHK 24304317) and the Direct Grant for Research from the Research Committee of the Chinese University of Hong Kong. MW was supported by NSF grant PHY-1607178. This paper carries LIGO Document Number LIGO-P1700412. 
\n\n\\input{hyperbank.bbl}\n\n\\end{document}\n\n\n\\section{Introduction}\nData flows from new and planned astronomical survey telescopes are steadily increasing. This shows no sign of stopping, with LSST starting operations in \\raise.17ex\\hbox{$\\scriptstyle\\mathtt{\\sim}$} 2020. There is clearly a need for accurate, fast, automated classification of photometric lightcurves to maximise the scientific returns from these surveys. Even when later spectroscopic followup is required, finding which targets to prioritise is a necessary first step. \n\nThe literature contains multiple examples of such classification, using a wide variety of techniques. These include a variety of supervised machine learning applications \\citep[e.g.][]{Eyer:2005ce,Mahabal:2008if,Blomme:2010bq,Debosscher:2011kz,Brink:2013hv,Nun:2014kv}. Recently Random Forests (RF) have begun to gain popularity, due to their robustness and applicability to different sets of data, extracted lightcurve properties, and classification schemes \\citep[e.g.][]{Richards:2011ji,Richards:2012ea,Masci:2014bk}. Several improvements have been proposed, in areas such as parametrising lightcurves with maximal information retention \\citep{Kugler:2015jq}, and adjusting for training set deficiencies \\citep{Richards:2011bn}. One method of \\emph{unsupervised} machine learning is a Kohonen Self-Organising-Map \\citep[SOM, ][]{Kohonen:1990fd} demonstrated by \\citet{Brett:2004cr} in an astronomical context. Here we adopt a novel technique based on a combination of SOM and RF machine learning. SOMs can efficiently parametrise lightcurve shapes without resorting to specific lightcurve features, and RFs are capable of placing objects into classes.\n\n\nIn this work we apply these techniques to data from the K2 mission, the repurposed \\emph{Kepler} satellite \\citep{Borucki:2010dn}.
K2 and its predecessor \\emph{Kepler} have left a lasting mark in studies of variable stars, showing that most $\\delta$ Scuti and $\\gamma$ Dor stars show pulsations in both the p-mode and g-mode frequency regimes \\citep{Grigahcene:2010kd}. Many studies have been performed on \\emph{Kepler} variable stars \\citep[e.g.][]{Blomme:2010bq,Balona:2011kw,Balona:2011gn,Debosscher:2011kz,Uytterhoeven:2011jv,Tkachenko:2013jr,Bradley:2015ep}, but few so far on K2. \\citet{Balona:2015jh} studied B star variability in \\emph{Kepler} and K2, and found that K2 data presented some new challenges compared to the original mission. Despite these, K2 has for example observed several of the RR Lyrae stars known outside our own Galaxy \\citep{Molnar:2015tr}. \\citet{LaCourse:2015jr} have also produced a catalogue of eclipsing binary stars in K2 field 0.\n\nThe initial version of this catalogue \\citep{Armstrong:2015bn} classified several thousand K2 variable stars in K2 fields 0 and 1. This classification was based on an interpretation of lightcurve periodicity, and split objects into Periodic, Quasiperiodic, and Aperiodic variables. Here we improve on this initial work, by applying an automated technique to classify variables into more conventional classes. We extend the classification to K2 fields 0--4, and will release updates as more K2 fields become available.\n\n\\section{Data}\n\\subsection{Source}\nData are taken from the K2 satellite \\citep{Howell:2014ju}. K2 is the repurposed \\emph{Kepler} mission, and provides lightcurve flux measurements at a 30 minute `long' cadence continuously for 80 days per target. Targets are organised into campaigns, with each campaign spanning an \\raise.17ex\\hbox{$\\scriptstyle\\mathtt{\\sim}$} 80 day period and covering several thousand objects. A much smaller number of targets (a few tens per campaign) are available at the `short' cadence of \\raise.17ex\\hbox{$\\scriptstyle\\mathtt{\\sim}$} 1 minute.
For the purposes of this work, we restrict ourselves to long cadence data only, to preserve uniformity in the data. At the time of writing, 5 campaigns had been released to the public (covering fields 0--4), with more due as the mission continues. Four of these campaigns cover \\mytilde80 days, with the first, campaign 0, covering \\mytilde40 days. We take data for these campaigns from the Mikulski Archive for Space Telescopes (MAST) website\\footnote{https:\/\/archive.stsci.edu\/k2\/}, limiting ourselves to objects classified as stars in the MAST catalogue. This cut primarily removes a small number of solar system bodies and extended sources from the analysis. At this point, we have 68910 object lightcurves.\n\nFor the purposes of training the classifier, we also use data from the original \\emph{Kepler} mission. In these cases a single quarter of long cadence \\emph{Kepler} data is randomly selected. This covers \\raise.17ex\\hbox{$\\scriptstyle\\mathtt{\\sim}$} 90 days, and hence is similar to a single K2 campaign in duration and cadence. \\emph{Kepler} does however have different noise properties than K2, particularly in regards to the \\raise.17ex\\hbox{$\\scriptstyle\\mathtt{\\sim}$} 6 hour thruster firing, which is present in K2 but not in \\emph{Kepler}. \\emph{Kepler} data was also downloaded from MAST, and the Presearch-Data-Conditioning (PDC) detrended lightcurves \\citep{Stumpe:2012bj,Smith:2012ji} were used.\n\n\\subsection{Extraction and Detrending}\n\\label{sectExtDet}\nK2 data shows instrumental artefacts not previously seen in the original \\emph{Kepler} mission. The strongest of these is a signal at \\raise.17ex\\hbox{$\\scriptstyle\\mathtt{\\sim}$} 6 hours, which is the timescale on which the satellite thrusters are fired to adjust the spacecraft pointing. This pointing adjustment is necessary due to drift associated with the new mode of operations, and is explained fully in the K2 mission papers.
It has the unfortunate effect of causing systematic noise, due to aperture losses and inter-pixel sensitivity changes. A number of techniques have been put forward for removing this noise \\citep{Vanderburg:2014bi,Aigrain:2015ew,Lund:2015cs}, including one in the previous version of this catalogue \\citep{Armstrong:2015bn}. Each has advantages and disadvantages; our experience has been that while overall most techniques perform comparably, for individual objects the differences can be large. We use an updated version of our own extraction and detrending method here, which is fully described in \\citet{Armstrong:2015bn}. The only change from that publication is the performing of a polynomial fit to the lightcurve, prior to detrending. This fit is performed by considering successive 0.3 day long regions of the lightcurve, and fitting third degree polynomials to 4 day regions centred on these. Outlier points more than $10\\sigma$ from the initial fit are masked, and the fit redone without these points. The $10\\sigma$ masking and refitting is repeated for 10 iterations. Masked points are not cut from the final lightcurves. The final fit is removed, detrending is performed, and the fit then added back in. This step was added to improve preservation of variability signals, a notable improvement on the first method. Lightcurves detrended using this method are publicly available at the MAST website.\n\nIt is important to note that, as described in \\citet{Armstrong:2015bn}, our detrending method works best when performed separately on each half of the lightcurve (the exact split can be a few days from the precise halfway time). This is due to a change in the pointing characteristics of the spacecraft near the middle of each campaign, possibly the result of a change in orientation to the Sun. The precise times used to split the data are given in Table \\ref{tabsplittimes}. 
Before conducting the analysis presented later in this work, we normalise each lightcurve half by performing a linear fit.\n\n\\begin{table}\n\\caption{Times of pointing characteristic change, used to split the K2 data before detrending}\n\\label{tabsplittimes}\n\\begin{tabular}{lr}\n\\hline\nCampaign & Split Time \\\\\n & BJD - 2454833 \\\\\n\\hline\n0 & N\/A \\\\\n1 & 2016.0\\\\\n2 & 2101.41\\\\\n3 & N\/A \\\\\n4 & 2273.0\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nWith the release of campaign 3, the K2 mission team began to release its own detrended lightcurves (these are not available for earlier campaigns at the time of writing). Similarly to the other methods, we find that these perform well overall but are by no means the best choice for every object. We will apply the classifier to both our lightcurves (hereafter the `Warwick' set) and the K2 team lightcurves (hereafter the `PDC' set) for campaigns 3--4. The comparison is complicated by the fact that the above mentioned change in pointing characteristics does not occur in the usual way for these campaigns. Rather than change once in the middle of the campaign, in campaign 3 the change occurs twice, at roughly one third intervals. We do not adjust our detrending method for this, as introducing the option for another split adds an additional layer of complexity, and reduces the number of points available in each section (a risky option, as these points form the base surface used to decorrelate flux from pointing). Instead we perform the detrending with no split at all. For campaign 4, we split at time 2273 (BJD-2454833, as given in the K2 data files), and cut points up to the first change in pointing at 2240.5. This shortens each campaign 4 lightcurve by 11 days, but results in improved detrending. 
We do not perform such an adjustment for campaign 3 as even more data would need to be cut.\n\n\n\n\\section{Classification}\n\n\\subsection{Methodology}\nWe employ a classification scheme using two distinct components. These are Self-Organising-Maps (SOMs), otherwise known as Kohonen maps, and a Random Forest (RF) classifier. Each is described below.\n\n\\subsubsection{SOM}\n\nSOMs have been tested in an astrophysical context before \\citep{CarrascoKind:2014gb,Torniainen:2008cc,Brett:2004cr}, but are rarely to date applied in astronomy in practice. As such we outline their methodology here.\n\nA SOM is a form of dimensionality reduction; data consisting of multiple pieces of information can be condensed into a pre-defined number of dimensions, and is grouped together according to similarity. In our case, the SOM takes phase folded lightcurve shapes and groups similar shapes into clusters, in one or two dimensions. The great strength of a SOM is in the unsupervised nature of its clustering algorithm. The user need not specify what groups or labels to look for; any set of similar input data, including for example previously unseen variability classes, will form a cluster in the resulting map. Similar clusters will lie near each other, those that are the same according to the input data will overlap. Furthermore, the input parameters for the algorithm are quite insensitive to small variations, making the clustering process robust \\citep{Brett:2004cr}.\n\nThe key component of a SOM is the Kohonen layer. This can be N-dimensional, but we will consider 2D layers here for clarity. The layer consists of pixels, each of which represents a template against which the input data is compared. The size of the layer is unimportant as long as it is sufficiently large to express the variation present in the input data. Once trained on a set of data, the Kohonen layer becomes a set of templates, representing the observed data features that it was trained on. 
These templates can be examined to spot interesting features in the data set, such as variation within an already known class. Further data (or the original data itself) can be compared to the trained layer and the closest matching template found. In this way, an object is placed onto the map. \n\nThe specific implementation of SOMs used here is described in Section \\ref{sectSOMtraining}, with an example of their use shown. The result is a map against which any input K2 phase-folded lightcurve can be compared. The location of the lightcurve on the map gives us its similarity to certain shapes, such as the distinctive lightcurve of an eclipsing binary star.\n\n\\subsubsection{Random Forest}\nThe SOM allows us to classify and study the \\emph{shape} of a given phase curve, and the sets of similar shapes found within a dataset. It does not place an object into a specific variability class. For that we utilise a RF classifier \\citep{Breiman:fb}. These have been used in a number of previous variable star studies cited above. To use a RF classifier the lightcurve must be broken down into specific features, which represent the data (see Section \\ref{sectdatafeatures} for those used here). These features are then paired with known classes in a training set of known variables, and the classifier fit to this set. For a given object, the RF classifier can then map sets of features to probabilities for class membership, giving the likelihood for an unclassified object to be in each class.\n\nRF classifiers are ensemble methods, in that they give results based on a large sample of simple estimators, in this case decision trees. In this way they can reduce bias in estimation. The core components of an RF are these decision trees. See \\citet{Richards:2011ji} for a concise discussion of the underlying trees and how they are constructed. 
The specific parameters and implementation used here are discussed in Section \\ref{sectRFimplement}.\n\n\\subsection{Automated period finding}\n\\label{sectautoper}\nOur classification methodology relies heavily on the phase-folded lightcurves of our targets. This requires knowledge of the target's dominant period. Such knowledge is available for some known variables, but not for the general K2 sample at the time of writing. As such we use the K2 photometry to determine frequencies for each target.\n\nThere are a number of methods popularly used for determining lightcurve frequencies. The most common is the Lomb-Scargle (LS) periodogram \\citep{Lomb:1976bo,Scargle:1982eu}, which performs a fit of sinusoids at a series of test frequencies. Other available methods include the autocorrelation function \\citep[ACF, see e.g.][]{McQuillan:2014gp} and wavelet analyses \\citep{Torrence:1998wk}. We use LS here, due to its provenance and simplicity of implementation. The same arguments can be made for the ACF, which for stellar rotation periods has been shown to be more resilient than LS at detecting dominant frequencies \\citep{McQuillan:2013df}. However we find removing unwanted power from frequencies and harmonics, and detecting multiple frequencies from the same lightcurve, to be simpler for the LS method, at least in the implementations that we had available. In future utilising the ACF alone or in combination with the LS may be possible.\n\nWe use the fast LS method of \\citet{Press:1989hb}, with an oversampling factor of 20 run up to our Nyquist frequency of 24.5\\ d$^{-1}$. To avoid excessive human interference (and maintain the `automated' status of this classification), the dominant frequencies for a target must be found without supervision. 
To avoid frequencies commonly associated with thruster firing noise in K2 (see Section \\ref{sectExtDet}) we remove frequencies within 5\\% of $4.0850$d$^{-1}$ and their $1\/2$, 1st, 2nd, 3rd and 4th harmonics from the periodogram, by removing the best-fitting sinusoid of the form\n\n\\begin{equation}\n\\label{eqnLSmodel}\n z = a\\sin(2\\pi ft)+b\\cos(2\\pi ft)+c\n\\end{equation}\n\n\\noindent at each of these frequencies. In this model $f$ represents the frequency being removed, $t$ and $z$ the time and flux data, and $a$, $b$ and $c$ free parameters of the model. We then cut these frequencies altogether before extracting the dominant period. We also remove frequencies associated with the data cadence which commonly show power in our periodogram ($48.94355819$d$^{-1}$ and $20.394709$d$^{-1}$) and their $1\/2$ frequency harmonics, by similarly fitting and removing a sinusoid at these frequencies and then cutting the frequencies from the periodogram. We did not find it necessary to remove other harmonics of the data cadence frequencies, as doing so provided little improvement. Finally periods above 20\\ d (10\\ d in campaign 0) are cut, as the data baseline is not long enough to reliably determine them without the introduction of spurious noise related frequencies. At this point the most significant peak in the LS periodogram is taken.\n\nTo extract other significant frequencies, we remove the dominant frequency using a fit of the model of Equation \\ref{eqnLSmodel}, then recalculate the LS periodogram, again ignoring thruster firing and cadence related frequencies as above. The remaining most significant peak is taken. To compare the power of different peaks, we calculate their amplitude $A$ using $A=\\left(a^2+b^2\\right)^{\\frac{1}{2}}$. This is used to produce the frequency amplitude ratios used later in this work.
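The single-frequency removal step, fitting the sinusoid model above at a fixed frequency and computing the peak amplitude A = sqrt(a^2 + b^2), can be sketched with a linear least-squares fit. This is an illustrative reimplementation, not the catalogue code:

```python
import numpy as np

# Sketch of removing a fixed frequency f by least-squares fitting the
# model z = a*sin(2*pi*f*t) + b*cos(2*pi*f*t) + c and subtracting it.
# The amplitude A = sqrt(a^2 + b^2) is what is used to compare peaks.

def fit_and_remove(t, z, f):
    X = np.column_stack([np.sin(2 * np.pi * f * t),
                         np.cos(2 * np.pi * f * t),
                         np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(X, z, rcond=None)
    a, b, _c = coeffs
    residual = z - X @ coeffs          # lightcurve with the frequency removed
    return residual, np.hypot(a, b)
```

Because the model is linear in $a$, $b$ and $c$, the fit is exact and fast, which matters when it is repeated at every harmonic to be cleaned.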
We repeat this process to extract a total of 3 frequencies from each lightcurve.\n\nA common weakness in period-finding algorithms occurs for eclipsing binary stars, a significant variability class. The LS periodogram often gives its highest power for half the true binary orbital period (i.e. when the primary and secondary eclipses occur at the same phase). This error is simple to spot by inspection, but harder to correct automatically. We account for this potential error source by introducing a check into the automated period finder. This phase folds each lightcurve at double its LS-determined dominant period. The phase folded lightcurve is then binned into 64 bins, and the bin values at the minimum bin and the lowest bin value between phases 0.45 and 0.55 from this minimum found. We perform two checks on these two bin values. If the initial period is correct, they should be the same. We first check for an absolute difference between the two, finding that 0.0025 in relative flux works well as a threshold. We further test that the difference between them is greater than 3\\% of the range of the un-phasefolded lightcurve. We calculate this range by taking the difference between the median of the largest 30 and median of the lowest 30 flux points in the lightcurve, to avoid unwanted outlier effects. If the difference between the two tested bin values is greater than both of these thresholds, the object period is doubled. If the doubled period would be over the 20\\ d upper period limit already applied (10\\ d for campaign 0), the doubling is not allowed. Similar adjustments have been made in previous variability studies \\citep[e.g.][]{Richards:2012ea}. Only the dominant extracted period may be adjusted in this way.\n\nTo test the efficacy of our automated period finding software, we trial it against a known sample of variable stars from the \\emph{Kepler} data. 
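The period-doubling check described above can be sketched as follows; an illustrative reimplementation (not the catalogue code) using the thresholds from the text, with bin offsets approximating the 0.45--0.55 phase window and every phase bin assumed populated:

```python
import numpy as np

# Fold at twice the detected period, bin into 64 bins, and compare the
# deepest bin with the lowest bin roughly half a phase away. If the two
# minima differ by more than both thresholds, the period is doubled.

def should_double(time, flux, period, nbins=64,
                  abs_thresh=0.0025, range_frac=0.03):
    phase = (time % (2.0 * period)) / (2.0 * period)
    idx = np.minimum((phase * nbins).astype(int), nbins - 1)
    binned = np.array([flux[idx == b].mean() for b in range(nbins)])
    imin = int(np.argmin(binned))
    # lowest bin between 0.45 and 0.55 in phase from the minimum bin
    k = np.arange(int(0.45 * nbins), int(0.55 * nbins) + 1)
    opposite = binned[(imin + k) % nbins].min()
    diff = opposite - binned[imin]
    # robust range: median of highest 30 minus median of lowest 30 points
    srt = np.sort(flux)
    rng = np.median(srt[-30:]) - np.median(srt[:30])
    return bool(diff > abs_thresh and diff > range_frac * rng)
```

Doubling is only triggered when the two minima genuinely differ, so a correctly folded lightcurve with equal minima is left untouched.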
See Section \\ref{secttrainingset} for a full description of this set, which is also used as a training set for our classifier. We use one randomly selected quarter of \\emph{Kepler} data, to give data with a similar baseline and cadence to a single K2 campaign. There are 2128 training objects with previously determined periods (after removing objects with periods below our Nyquist period of 0.0408d). Figure \\ref{figpercomp} shows the comparison between our dominant determined periods and the previously known ones. The acceptance rate is 70.3\\%, rising to 82.2\\% if half and double periods are included. In 90.8\\% of the sample, one of our 3 determined periods finds either the previously known period or its half or double harmonic. In the remaining lightcurves, we find that either the noise obscures the known period (due possibly to different quarters with differing noise properties being used by us and previous studies) or that the dominant period has changed.\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{periodcomparev2.pdf}}\n\\caption{Periods determined using our method compared to previously known periods, for a set of known variables in \\emph{Kepler}. An acceptance rate of 70.3\\% is obtained, 82.2\\% if half and double periods are included, and 90.8\\% if second and third detected frequencies are included. Variables lying at the correct, half or double frequency are plotted as red stars.}\n\\label{figpercomp}\n\\end{figure}\n\n\n\n\n\\subsubsection{Phase curve template preparation}\nThe SOM element of our classifier requires phased lightcurve shapes to function. We create these using the periods determined in Section \\ref{sectautoper}. Each lightcurve is phase-folded on this period. For known training set objects (see Section \\ref{secttrainingset}), the literature period is used. Once phase-folded, the lightcurve is binned into 64 equal-width bins, and the mean of each bin used to form the phase curve that will be passed to the classifier.
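The phase-curve construction just described, folding on the dominant period and taking the mean flux in 64 equal-width phase bins, can be sketched as follows (illustrative only; every bin is assumed to receive at least one point):

```python
import numpy as np

# Fold a lightcurve on a period and bin the result into nbins
# equal-width phase bins, taking the mean flux per bin.

def phase_curve(time, flux, period, nbins=64):
    phase = (time % period) / period
    idx = np.minimum((phase * nbins).astype(int), nbins - 1)
    return np.array([flux[idx == b].mean() for b in range(nbins)])
```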
The exact number of bins is unimportant, as long as it gives enough resolution to see any variability in the phase curve. \\citet{Brett:2004cr} used 32 bins and found satisfactory results; we use 64, as the performance decrease is small and it reduces the chances of missing rapidly changing variability such as eclipses.\n\nIt is essential that the phase curves be on the same scale and aligned, so that the classifier can spot similarities between them (see next Section). As such we normalise each phase curve to span between 0 and 1, and shift it so that the minimum bin is at phase 0. Each phase curve then consists of 64 elements, with the first being at (0,0).\n\n\n\\subsection{Training the SOM}\n\\label{sectSOMtraining}\nThere are variations in the literature on how precisely to train the SOM. Here we run through the procedure followed for this work. The input parameters are the initial learning rate, $\\alpha_0$, which influences the rate at which pixels in the Kohonen layer are adjusted, and the initial learning radius, $\\sigma_0$, which affects the size of groups. Initially each pixel is randomised so that each of its 64 elements lies between 0 and 1, as our phase curves have been scaled to this range. For each of a series of iterations, each input phase curve is compared to the Kohonen layer. The best matching pixel in the layer is found, via minimising the difference between the pixel elements and the phase curve. Each element in each pixel in the layer is then updated according to the expression\n\n\\begin{equation}\nm_{xy,k,new} = m_{xy,k,old} + \\alpha e^{-\\frac{d_{xy}^2}{2\\sigma^2}} \\left(s_k - m_{xy,k,old}\\right)\n\\end{equation}\n\n\\noindent where $m_{xy,k}$ is the value $m$ of the pixel at coordinates $x$,$y$ and element $k$ in the phase curve, $d_{xy}$ is the Euclidean distance of that pixel from the best matching pixel in the layer, and $s_k$ is the $k$th element of the considered input phase curve.
This expression is specific to 2-dimensional SOMs, but can be easily adapted to one dimension by setting the size of the second dimension to 1. Note that distances are continued across the Kohonen layer boundaries, i.e. they are periodic. Once this has been performed for each phase curve, $\\alpha$ and $\\sigma$ are updated according to\n\n\\begin{equation}\n\\label{eqnsigmadecay}\n\\sigma = \\sigma_0 \\exp\\left(\\frac{-i\\log(r)}{n_\\textrm{iter}}\\right)\n\\end{equation}\n\\begin{equation}\n\\label{eqnalphadecay}\n\\alpha = \\alpha_0 \\left( 1 - \\frac{i}{n_\\textrm{iter}} \\right)\n\\end{equation}\n\n\\noindent where $i$ is the current iteration, and $r$ is the size of the largest dimension of the Kohonen layer. This is then repeated for $n_\\textrm{iter}$ iterations. \n\nIt is possible to use different functional forms for the evolution of $\\alpha$ and $\\sigma$; typically a linear or exponential decay is used. \\citet{Brett:2004cr} found that the performance of the SOM was largely unaffected by the choice of form or initial value, as long as the learning rate does not drop too quickly. We find satisfactory results for the expressions above and values of $\\alpha_0=0.1$ and $\\sigma_0=r$, as can be seen in the example below. The code used in this study was initially adapted from the \\texttt{SOM} module of the open-source \\texttt{PyMVPA} package\\footnote{http://www.pymvpa.org} \\citep{Hanke:2009bm}, and has now been contributed as an update to that package by the authors. As such any readers wishing to use this code should look to the given reference. Note that the functional forms of Equations \\ref{eqnsigmadecay} and \\ref{eqnalphadecay} are slightly different in the online version of the code, to preserve compatibility with older versions of the module. 
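As a concrete illustration, the training loop can be sketched as below. This is a minimal standalone version of the standard Kohonen update with the decay schedules above, on a toroidal 2-D layer; `train_som` and its arguments are our illustrative names, not the PyMVPA interface:

```python
import numpy as np

def train_som(phase_curves, shape=(40, 40), n_bins=64,
              alpha0=0.1, n_iter=100, seed=None):
    """Minimal SOM training sketch: toroidal 2-D Kohonen layer,
    exponential radius decay and linear learning-rate decay."""
    rng = np.random.default_rng(seed)
    grid = rng.random(shape + (n_bins,))   # randomised pixels in [0, 1)
    r = max(shape)                         # sigma_0 = largest dimension
    xs, ys = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]),
                         indexing="ij")
    for i in range(n_iter):
        # decay schedules for learning radius and learning rate
        sigma = r * np.exp(-i * np.log(r) / n_iter)
        alpha = alpha0 * (1.0 - i / n_iter)
        for s in phase_curves:
            # best matching pixel: minimise distance to the phase curve
            bmu = np.unravel_index(
                np.argmin(((grid - s) ** 2).sum(axis=-1)), shape)
            # periodic distances across the layer boundaries
            dx = np.minimum(abs(xs - bmu[0]), shape[0] - abs(xs - bmu[0]))
            dy = np.minimum(abs(ys - bmu[1]), shape[1] - abs(ys - bmu[1]))
            h = np.exp(-(dx ** 2 + dy ** 2) / (2.0 * sigma ** 2))
            # Kohonen update pulling pixels towards the input curve
            grid += alpha * h[..., None] * (s - grid)
    return grid
```

For the example in the text this would be called with `shape=(40, 40)` on the binned, normalised phase curves; the nested loop over curves is slow at that scale, and a real implementation would vectorise further.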
The formulae described here are the ones used in this work.\n\nAs an example we train a SOM on the K2 data from campaigns 0--2, as well as the \\emph{Kepler} data used for training the classifier (see Section \\ref{secttrainingset} for a full description of the data set). We use a $40\\times40$ Kohonen layer. K2 data were only used if the range of variation in the phase curve before normalisation was greater than 1.5 times the overall mean of the standard deviations of points falling in each phase bin (see previous Section). This cut was imposed to prevent essentially flat lightcurves from impacting the SOM, removing \\raise.17ex\\hbox{$\\scriptstyle\\mathtt{\\sim}$} 40\\% of the K2 lightcurves. The majority of these were classified as `Noise' or `AP' in \\citet{Armstrong:2015bn}, showing that we are not removing many periodically varying sources. We note that the SOM is robust enough to work without this cut, and it is imposed only to increase the purity of the training set.\n\nWe take the known \\emph{Kepler} variables, along with `OTHPER' (other periodic and quasi-periodic) objects from K2, and plot them on the resulting SOM in Figure \\ref{figsommap}. Clear groups can be seen, with eclipsing binary types well differentiated but bordering each other, as would be expected. RR Lyraes are very well grouped, and $\\delta$ Scuti variables cluster too, but more weakly. Example templates from the Kohonen layer are shown in Figure \\ref{figsomtemplates}, representing the major clusters seen. Note that the size of a group is determined by a number of factors, including the number of input objects matching it, and the extent of small variations within the group. As there are many more sinusoidal variables than eclipsing binaries or RR Lyraes, the $\\delta$ Scuti, $\\gamma$ Doradus and `OTHPER' groups fill most of the map. Different regions within these groups show, for example, slight skews from a pure sinusoid, and may represent interesting intra-class differences. 
$\\delta$ Scutis lying near the eclipsing binary groups have likely been mapped using double their true period, and so look similar to a contact binary star. They may also have been previously misclassified. It is also interesting to see that $\\delta$ Scutis and `OTHPER' objects overlap, as would be expected given that their phase curve shapes are not particularly distinctive to their respective classes. `OTHPER' objects also overlap with the RR Lyrae cluster, and likely mark out newly discovered RR Lyrae stars.\n\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{SOMmapv2.pdf}}\n\\caption{Known variables placed onto a SOM. Random jitter within each pixel has been added for clarity. Green triangles = `EA' (detached eclipsing binaries), red crosses = `EB' (semi-detached and contact eclipsing binaries), pink stars = `RRab' (ab-type fundamental mode RR Lyraes), blue circles = `DSCUT' ($\\delta$ Scuti variables), black dots = `GDOR' ($\\gamma$ Dor variables) and yellow pluses = `OTHPER' (other periodic and quasi-periodic objects). See Section \\ref{sectclassscheme} for more detail on these variability classes.}\n\\label{figsommap}\n\\end{figure}\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{SOMtemplates.pdf}}\n\\caption{Template phase curves from the Kohonen layer of the SOM in Figure \\ref{figsommap}. Clockwise from top left, templates are for pixel [13,34] (EA), [6,32] (EB), [37,35] (RRab) and [25,19] (DSCUT). See Section \\ref{sectclassscheme} for a description of the classes. Note that templates do not have to span the range 0--1, even if the input phase curves do. Note also that all these templates were found from initially random pixels without any human guidance or input.}\n\\label{figsomtemplates}\n\\end{figure}\n\nThe SOM used for final classification is the same as that described above, but using only one dimension of 1600 pixels. This produces the same clustering results, but is less useful for visualisation. 
We use only one dimension so that the other part of our classifier (the RF) can more easily make use of the information contained within the SOM.\n\n\n\n\\subsection{Data Features}\n\\label{sectdatafeatures}\nFor the classification of variables into classes, we use a number of specific features of each lightcurve. This is common practice in general classification problems \\citep[e.g.][]{Richards:2011ji}. However, there is a subjective element to selecting features, and it can be desirable to minimise this if possible \\citep[see e.g.][]{Kugler:2015jq}. We do so through the use of the SOM. This encodes the shape of the phase curve into one parameter (the location of the closest pixel in the SOM to the lightcurve in question), rather than a series of features, none of which may capture the desired shape properties.\n\nThere are however other features which are useful and which are uninformed by the SOM. A key example is the dominant (most significant) period of the lightcurve. Other significant frequencies can also be used, and in some studies many more have been included. We only use the three most significant periods here.\n\nThe full range of features used is described in Table \\ref{tabfeatures}. These features are incorporated largely to separate out lightcurves which show purely noise, something which is generally uninformed by the SOM, as well as those without one particularly dominant frequency. We take the potentially controversial step of adjusting some of the noise-related features between the \\emph{Kepler} and K2 datasets, due to the differing noise properties of each set. This is unavoidable here, as the scatter and increased noise in K2 cause catastrophic errors in the classifier if \\emph{Kepler} lightcurves are used as they come. In this case the general result is that the vast majority of K2 objects are classified as Noise. 
This problem is solved by multiplying the marked features in Table \\ref{tabfeatures} by a factor to align their median values with those of K2. These features are those driven primarily by dataset noise, rather than those associated with periodicity (noise-related periodicity is assumed to have been removed by the procedure in Section \\ref{sectautoper}). As the \\emph{Kepler} data used all comes from known variable stars, the median of the features is not strictly comparable to K2, where the data comes from the whole target list. As such we set the multiplication factor so that the median of the non-eclipsing binary \\emph{Kepler} data features is increased to equal the median of the `OTHPER' K2 data features. Eclipsing binaries are left alone, as their features are in our case dominated by the binary eclipses.\n\nA similar problem arises when studying the PDC lightcurves. These have different characteristics to the Warwick lightcurves. Assuming that the intrinsic distribution of stellar variability should be the same across fields, this difference is due to the differing detrending methods. We adjust for it in the same way and to the same features as above, marked in Table \\ref{tabfeatures}. As we do not have prior classifications for fields 3--4, the factor is applied to the whole dataset, and set so as to match the medians of these features between the PDC campaigns 3 and 4 and the Warwick campaigns 0--2. Each PDC campaign is adjusted separately.\n\nIt would be desirable to use colour information as a feature to aid classification of variability types connected to specific stellar spectral types. However, colours are not uniformly available for the K2 sample, although some can be found through a cross-match with the TESS input catalog \\citep{Stassun:2014wz}. As such we do not use them, as doing so would mean large fractions of the K2 targets would need to be disregarded. 
This has consequences for the variability classes we use, see Section \\ref{sectclassscheme}.\n\n\n\\begin{table}\n\\caption{Data Features}\n\\label{tabfeatures}\n\\begin{tabular}{ll}\n\\hline\nFeature Name & Description \\\\\n\\hline\nperiod & Most significant period (Section \\ref{sectautoper})\\\\\namplitude & Max - min of phase curve\t\t\t\\\\\nperiod\\_2 & Second detected period (Section \\ref{sectautoper}) \\\\\nperiod\\_3 & Third detected period (Section \\ref{sectautoper}) \\\\\nampratio\\_21 & period\\_2 to period amplitude ratio \\\\\nampratio\\_31 & period\\_3 to period amplitude ratio \\\\\nSOM\\_index & Index of closest pixel in 1D SOM\t\t\\\\\nSOM\\_distance & Euclidean distance to closest pixel \t\t\t\\\\\n & in 1D SOM \\\\\np2p\\_98perc $^a$& 98th percentile of point to point scatter\\\\\n & in lightcurve\t\t\t\\\\\t\np2p\\_mean $^a$& Mean of point to point scatter in lightcurve \\\\\nphase\\_p2p\\_max & Maximum point to point scatter in binned\\\\\n & phase curve\t\t\t\\\\\nphase\\_p2p\\_mean & Mean of point to point scatter in binned \\\\\n & phase curve\t\t\t\\\\\nstd\\_ov\\_err $^a$& Whole lightcurve standard deviation over \\\\\n &mean point error\t\t\t\\\\\n\\hline\n\\multicolumn{2}{l}{$^a$ adjusted between datasets, see text.}\n\\end{tabular}\n\\end{table}\n\n\\subsection{Classification Scheme}\n\\label{sectclassscheme}\nAn important decision is in which variability classes to use. We experimented with classifying RR Lyrae (subtype ab), $\\delta$ Scuti, eclipsing binary (split into detached, subtype EA, and semi-detached or contact, subtype EB), $\\gamma$ Dor, and so-called ROT variables, a class applying to likely rotationally modulated lightcurves seen in \\citet{Bradley:2015ep}. We also attempted to split the $\\gamma$ Dor class into symmetric, asymmetric, and 'MULT' classes, as defined in \\citet{Balona:2011kw}. 
This approach had varied success; RR Lyrae ab, $\\delta$ Scuti, $\\gamma$ Dor and eclipsing binary classes performed well, but we found that the $\\gamma$ Dor subtypes were not well constrained by our available features. This may be because we lack sufficient training objects to reliably map the range of features offered by these subtypes. This problem should become tractable when an increased sample of objects is available through K2, and we plan to address this in later work. \n\nSimilarly, we found that the `ROT' class was not very coherent -- the classifier struggled to identify regions in parameter space corresponding to these variables. This likely arises due to the tendency of this class to have an indistinct cluster of low frequency peaks rather than one clear signal \\citep{Bradley:2015ep}. Rather than use the ROT class by itself, we make use of the previous version of this catalogue, which contained a `QP' quasiperiodic variable class. This class contains a number of variable types, but is characterised by periodic variability that is not strictly sinusoidal, and changes in amplitude and/or period. We use this as a variable classification, to catch interesting variables of astrophysical origin which are not one of the five other classes (RR Lyrae ab, EA, EB, $\\delta$ Scuti, $\\gamma$ Dor). It is likely dominated by spot-modulated stars, but also contains other variables such as Cepheids. We rename this class to `OTHPER' for `other periodic' to avoid confusion, as variables which are strictly periodic but not in another class can be classified by this group.\n\nWe considered including other variable classes, such as Cepheids, the other RR Lyrae subtypes (first-overtone or multimode RR Lyraes), and Mira variables. We could not find sufficient training set objects in any of these classes (fewer than 20 in each case). 
While it is possible to attempt classification with small training sets, rather than present a weak or unreliable classification for these classes we prefer to wait for more K2 data. As more fields are observed, more training set objects will become available. We intend to include more classes in future versions of this catalogue.\n\nFinally, we include `Noise', non-variable lightcurves, as a class label. This leaves 7 classes: DSCUT ($\\delta$ Scuti), GDOR ($\\gamma$ Doradus), EA (detached eclipsing binaries), EB (semi-detached and contact eclipsing binaries), OTHPER (other periodic and quasi-periodic variables), RRab (RR Lyrae ab type) and Noise. It is important to note that as we do not have colour information, there will be degeneracy in the DSCUT class between true $\\delta$ Scutis and $\\beta$ Cephei variables, as in \\citet{Debosscher:2011kz}. This is also true for slowly pulsating B stars, which are degenerate with $\\gamma$ Dor variables.\n\n\\subsection{Training Set}\n\\label{secttrainingset}\nAlthough the SOM described is unsupervised and so requires no training set, the RF classifier we use for final classification does. An ideal training set would consist of a set of known variable stars from the K2 mission, to which we could fit the classifier. Some previous classification work on K2 has been done (for B stars, \\citealt{Balona:2015jh}; for eclipsing binaries, \\citealt{LaCourse:2015jr}; and in the previous version of this catalogue). These sources however suffer from either small numbers, applicability to only a few variable types, or, in the \\citet{Armstrong:2015bn} case, variability classes derived from the lightcurves rather than externally recognised types. We cross-matched the observed K2 targets in fields 0--3 (4 was not available at that time) with catalogues of known variable stars, including those from AAVSO\\footnote{www.aavso.org}, GCVS \\citep{Samus:2009tf} and ASAS \\citep{Richards:2012ea}. 
This led to a small number of targets (a few tens of each class at best), not enough for a full training set. As such, we turned to the original \\emph{Kepler} mission. Much classification work has been done on the \\emph{Kepler} lightcurves. The data have differing noise properties to K2 data, but the same cadence and instrument, and, if only one 90-day quarter of data is used, a similar baseline to a K2 campaign.\n\nAlthough multiple works are available offering classified variable stars in \\emph{Kepler}, we limit ourselves to a small number of relatively large scale catalogues, in order to maintain homogeneity among classification methods and simplify the process. We began by taking the EA, EB and DSCUT classes from \\citet{Bradley:2015ep}. We also took ROT, SPOTM and SPOTV, low-frequency variables likely due to rotational modulation, reclassifying these objects as OTHPER. We supplemented the DSCUT set with those from \\citet{Uytterhoeven:2011jv}. The bulk of our eclipsing binary training set comes from the Kepler Eclipsing Binary Catalogue \\citep{Prsa:2011dx,Slawson:2011fg}. We removed all heartbeat binaries \\citep{Thompson:2012ca} and those where the primary eclipse depth was less than 1\\%. The 1\\% threshold was implemented to prevent shallow, likely blended binary eclipses from being included in the training set, and hence to increase training set purity. This also avoids the problem of noisy lightcurves with instrumental systematics of order a percent being misclassified as eclipsing binaries. Binaries were then classified as EA or EB based on a morphology threshold of 0.5 (see \\citet{Matijevic:2012di} for a discussion of morphology in this context). For RR Lyrae stars we use the list in \\citet{Nemec:2013bp}. Fundamental mode subtype ab stars were labelled RRab, and the first-overtone subtype c stars classified as OTHPER. 
To increase this relatively small RR Lyrae sample we used the results from the K2 AAVSO cross-match, taking fundamental mode RR Lyraes and adding them to the RRab training set. The B-star catalogue of \\citet{Balona:2015jh} was also used, with the SPB class reclassified as GDOR (given the degeneracy between GDOR and SPB present without temperature information) and the ROT class reclassified as OTHPER.\n\nFor the OTHPER and Noise classes, we also use our previous catalogue. This contained 5445 OTHPER (QP in the original catalogue) and 29228 Noise objects in fields 0--2, with labels assigned by human eyeballing. To avoid having an excessive disparity between training set classes, we downsample this set to 1000 of each class, selected randomly, which are then added on to the \\emph{Kepler} OTHPER set above. This also makes the results on fields 0--2 more independent, as we can compare previously classified OTHPERs (the majority of which are now not in the training sample) with newly found ones. To reduce the impact of potential mistakes in the previous catalogue, we removed the small number of objects in the OTHPER training set which an initial run of this classifier reclassified as another class. Objects with a probability of being in the RRab class of greater than 0.2 were also removed, as the probabilities for the RRab class are not well calibrated (see Section \\ref{sectprobcal}). These cuts caught \\raise.17ex\\hbox{$\\scriptstyle\\mathtt{\\sim}$} 50 objects misclassified as OTHPER and \\raise.17ex\\hbox{$\\scriptstyle\\mathtt{\\sim}$} 30 objects misclassified as Noise out of the 1000 each initially selected. 
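The random downsampling step can be sketched as follows; a toy illustration in which `labels` is a stand-in array for the previous catalogue's classifications, with the subsequent probability-based cleaning omitted:

```python
import numpy as np

# Stand-in labels: 5445 OTHPER (QP) and 29228 Noise objects, as in the
# previous catalogue. 1000 of each class are then selected at random.
rng = np.random.default_rng(0)
labels = np.array(["OTHPER"] * 5445 + ["Noise"] * 29228)

selected = np.concatenate([
    rng.choice(np.flatnonzero(labels == cls), size=1000, replace=False)
    for cls in ("OTHPER", "Noise")
])
```

Drawing without replacement within each class keeps the two 1000-object subsets disjoint and free of duplicates.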
\n\nThe final classes and number of objects in each training set are shown in Table \\ref{tabtrainingset}.\n\n\n\n\\begin{table}\n\\caption{Training Set}\n\\label{tabtrainingset}\n\\begin{tabular}{lr}\n\\hline\nClass & N objects \\\\\n\\hline\nRRab & 91 \\\\\nDSCUT & 278 \\\\\nGDOR & 233 \\\\\nEA & 694 \\\\\nEB & 759 \\\\\nOTHPER & 1992 \\\\\nNoise & 976 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\n\\subsection{Random Forest Implementation}\n\\label{sectRFimplement}\nWe use the implementation of RFs in the \\texttt{scikit-learn} \\texttt{Python} module\\footnote{http://scikit-learn.org/stable/}. There are several input parameters for an RF classifier. The key ones are the number of estimators, the maximum features considered at each branch in the component decision trees, and the minimum number of samples required to split a node on the tree, which controls how far each tree is extended. In a typical case, increasing the number of estimators always improves performance, but with diminishing returns and increasing computation time. The theoretical optimum for the maximum features in a classification problem is the square root of the total number of features, in our case 3. We optimise the parameters using the `out-of-bag' score of the RF. When training, the classifier uses a random subset of the total data sample given to it for each tree, to reduce the chance of bias. The left-out data are then used to test the performance of the tree -- the known class is compared to the predicted class, giving a performance metric between 0 (for absolute failure) and 1 (for perfect classification). Maximising this metric allows us to optimise the parameters. We find the best results for 300 estimators, a maximum of 3 features, and 5 samples to split a node. These parameters are used for classification. 
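As a sketch of this setup (not our production pipeline), the quoted parameters map onto the \\texttt{scikit-learn} interface as follows; the random feature matrix here is only a placeholder for the real feature table:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder data standing in for the real feature table (13 features).
rng = np.random.default_rng(0)
X = rng.random((300, 13))
y = rng.integers(0, 3, size=300)

# 300 estimators, at most 3 features per split, 5 samples to split a node,
# with the out-of-bag score enabled for parameter optimisation.
clf = RandomForestClassifier(n_estimators=300, max_features=3,
                             min_samples_split=5, oob_score=True,
                             random_state=0)
clf.fit(X, y)
oob = clf.oob_score_  # performance metric between 0 and 1
```

The class weighting discussed in the following paragraph would be added via the `class_weight` option.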
Additionally we apply weights to the training set, so that each class is inversely weighted according to its frequency in the training set (input option class\\_weight=`auto'). This makes sure that classes with more members (such as OTHPER and Noise) do not drown out other classes, and in effect imposes a uniform prior on the class probabilities.\n\nThere are several random elements in our method. These are the selection of the OTHPER and Noise training sets, as well as certain elements of the RF. Random subsets of training objects and features are selected for each decision tree as part of the RF method, to avoid bias. To minimise any effects of this randomness (especially the OTHPER and Noise selection), we train 50 classifiers with the above parameters and repeat the selection for each, applying each classifier to the K2 dataset. The average class probability across the classifiers gives the final result.\n \nTo explore the power of the SOM method, we trial the RF on only the SOM map location (SOM\\_index). The classifier is cross-validated by taking one training set member and training the classifier on the remaining members (so-called leave-one-out cross validation). The left out object is then tested on the classifier, and the process repeated for each member. The performance of the classifier is best described by a `confusion matrix', shown in Figure \\ref{figconfmatrixsom}. This shows what proportion of training members in each class were assigned to which other classes. In the ideal case each object is predicted correctly. Here we can see clearly which classes are well-informed by the SOM. RRab, EA, and EB classes are strongly recovered, as expected from their strong localisation in Figure \\ref{figsommap}. The DSCUT class is also recovered although less so. 
On the other hand, the OTHPER and Noise classes are found more weakly, and GDOR barely at all, because the multiple pulsation frequencies often present in this class combine to produce no distinctive phase curve shape. This demonstrates the power of the SOM alone to classify certain classes of variable stars. \n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{confmatrix_somonlyv2.pdf}}\n\\caption{Confusion matrix for an RF considering only SOM map location, generated using leave-one-out cross validation. Text shows the percentage of each sample which was classified into the relevant box. Correct classification lies on the diagonal.}\n\\label{figconfmatrixsom}\n\\end{figure}\n\n\nMoving on to the full classification scheme, we test the RF in a similar manner. All 7 classes are used, and the classifier cross-validated as before. The resulting confusion matrix is shown in Figure \\ref{figconfmatrix}. It highlights some interesting cases. Firstly, the classifier works well, with an overall success rate of 92.0\\%. There is some porosity between the two eclipsing binary classes, with objects of one class being placed into the other. As there is no rigid boundary in lightcurve shape between them, this is to be expected. Similarly there is some spread between OTHPER and Noise. This is not desirable, but the numbers involved are low, and represent objects whose variability only just emerges above the noise, or objects with unusual noise properties. The biggest misclassification occurs between the GDOR and OTHPER classes. This arises due to the less distinct nature of the OTHPER class -- it acts as a `catch-all' class to find any periodic or quasi-periodic variables which do not fit the other classes. 
GDOR objects can in some circumstances present lightcurve features similar to, for example, fast-rotating stars, leading to some confusion between the classes.\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{confmatrixv2.pdf}}\n\\caption{Confusion matrix for an RF considering all features and classes, generated using leave-one-out cross validation. Text shows the percentage of each sample which was classified into the relevant box. Correct classification lies on the diagonal.}\n\\label{figconfmatrix}\n\\end{figure}\n\nOne advantage of RF classifiers is the ability to estimate feature importance. The classifier naturally measures which features have more descriptive power, through for example how often those features are used in the decision trees, or through the reduction in performance that would be observed if a feature were replaced by a randomly sampled distribution. This allows for model refinement, and is of great use in developing a classifier. We plot the importance of our features in Figure \\ref{figfeatimportance}. These are found by training the classifier 100 times, and extracting the mean and standard deviation of the feature importances for each classifier.\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{featimportancev2.pdf}}\n\\caption{Relative importance of features to the RF. Values and errors arise from the mean and standard deviation of the feature importances extracted from 100 trained classifiers.}\n\\label{figfeatimportance}\n\\end{figure}\n\n\n\n\\subsection{Class posterior probability calibration}\n\\label{sectprobcal}\nThe RF classifier automatically generates class probabilities (through the proportion of estimators classifying an object into each class). These probabilities are not necessarily accurate. Although a higher class probability does mean an object is more likely to be in that class, the probabilities may need calibrating to ensure that they are true posterior probabilities. 
That is, if a set of objects is assigned probability $p$ of being in a certain class, then a fraction $p$ of those objects should actually belong to that class.\n\nInitially we test the calibration of our `raw' class probabilities. Figure \\ref{figprobcal} shows the class probabilities found from the cross-validated training set data created as described in Section \\ref{sectRFimplement}. This allows the predicted class probabilities for each training set object to be compared to their known classes. They are clearly not true posterior probabilities, especially for the RRab class, where essentially every object with class probability $>0.5$ is a true class member. For the other classes the given probabilities are closer, but still show some departure from the ideal case.\n\nOne common way of quantifying this aspect of classifier performance is the Brier score \\citep{BRIER:1950hg}. Our raw probabilities have a Brier score of 0.1336. We attempted a number of methods of calibrating them (and so reducing this score). The most usual methods are sigmoid and isotonic regression, which fit certain functions to the calibration curve to transform the probabilities. Similarly to \\citet{Richards:2012ea}, we find that these methods are not effective in our case. We attempted the method of \\citet{Bostrom:2008dt} to transform the initial class probabilities, but also found the results to be unsatisfactory. Rather than present an incomplete calibration, we give the class probabilities as they are. Users should be aware of this, and avoid interpreting class probabilities as true posterior probabilities.\n\n\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{probcalv2.pdf}}\n\\caption{Overall classifier predicted probability against true probability for the RRab class (crosses) and the average of all other classes (dots). 
The straight black dashed line represents the ideal case.}\n\\label{figprobcal}\n\\end{figure}\n\nAs the training set will not be representative of the true K2 distribution, biases may exist. As the priors are not well known, and the distribution of training sources by no means matches the underlying distribution of variables in K2, true posterior probabilities are impossible to create. Hence the given class probabilities, even if calibrated, would only be posterior probabilities under the assumption that each class has a uniform probability of arising.\n\n\n\\section{Catalogue}\n\n\\subsection{Overview}\nThe full catalogue for K2 fields 0--4 inclusive is given in Table \\ref{tabcatalogue}. This Table contains classifications using the Warwick lightcurves, as described in Section \\ref{sectExtDet}. The features used to classify these objects are given in Table \\ref{tabcatfeatures}. We also run the classifier on the PDC lightcurves produced by the K2 mission team. These were only available for campaigns 3--4. The resulting classifications are given in Table \\ref{tabcatalogueKTeam}, and their associated features in Table \\ref{tabcatfeaturesKTeam}.\n\n\\begin{landscape}\n\\begin{table}\n\\caption{Catalogue table for our Warwick detrended lightcurves. Fields 0--4 are included. Only an extract is shown here for guidance in form. The full table is available online.}\n\\label{tabcatalogue}\n\\begin{tabular}{lllllllllll}\n\\hline\nK2 ID & Campaign & Class & \\multicolumn{7}{c}{Class Probabilities} & Anomaly \\\\\n & & & DSCUT & EA & EB & GDOR & Noise & OTHPER & RRab & \\\\\n\\hline\n202059070 & 0 & Noise & 0.004195 & 0.120507 & 0.016615 & 0.005925 & 0.604636 & 0.246088 & 0.002034 & 0.023891\\\\\n .&.&.&.&.& . &.&.&.&.&. \\\\\n\\hline\n\n\\end{tabular}\n\\end{table}\n\n\\begin{table}\n\\caption{Data features for our Warwick detrended lightcurves. Fields 0--4 are included. Only an extract is shown here for guidance in form. 
The full table is available online.}\n\\label{tabcatfeatures}\n\\begin{tabular}{llllllllllll}\n\\hline\nK2 ID & Campaign & SOM\\_index & period & period\\_2 & period\\_3 & SOM\\_distance & phase\\_p2p\\_mean & phase\\_p2p\\_max & amplitude & ampratio\\_21 & ampratio\\_31 \\\\\n & & & d & d & d & & rel. flux & rel. flux & rel. flux & & \\\\\n\\hline\n202059070 & 0 & 1544 & 4.764370 & 1.241680 & 0.174448 & 1.180831 & 0.003801 & 0.487419 & 0.042283 & 0.629987 & 0.548721 \\\\\n .&.&.&.&.& . &.&.&.&.&. & . \\\\\n\\hline\np2p\\_mean & p2p\\_98perc & std\\_ov\\_err&&&&&&&&& \\\\\nrel. flux & rel. flux &&&&&&&&&& \\\\\n\\hline\n0.016326 & 0.047548 & 1.310764&&&&&&&&& \\\\\n\n .&.&.&.&.& . &.&.&.&.&. & . \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}\n\\caption{Catalogue table for PDC detrended lightcurves. Fields 3--4 only. Only an extract is shown here. The full table is available online.}\n\\label{tabcatalogueKTeam}\n\\begin{tabular}{lllllllllll}\n\\hline\nK2 ID & Campaign & Class & \\multicolumn{7}{c}{Class Probabilities} & Anomaly \\\\\n & & & DSCUT & EA & EB & GDOR & Noise & OTHPER & RRab & \\\\\n\\hline\n205889250 & 3 & Noise & 0.000067 & 0.000000 & 0.000000 & 0.000030 & 0.966544 & 0.033359 & 0.000000 & 0.000000\\\\\n\n .&.&.&.&.& . &.&.&.&.&. \\\\\n\\hline\n\n\\end{tabular}\n\\end{table}\n\n\\begin{table}\n\\caption{Data features for PDC detrended lightcurves. Fields 3--4 only. Only an extract is shown here. The full table is available online.}\n\\label{tabcatfeaturesKTeam}\n\\begin{tabular}{llllllllllll}\n\\hline\nK2 ID & Campaign & SOM\\_index & period & period\\_2 & period\\_3 & SOM\\_distance & phase\\_p2p\\_mean & phase\\_p2p\\_max & amplitude & ampratio\\_21 & ampratio\\_31 \\\\\n & & & d & d & d & & rel. flux & rel. flux & rel. flux & & \\\\\n\\hline\n205889250 & 3 & 0630 & 19.754572 & 12.803889 & 2.281881 & 1.179035 & 0.003795 & 0.421976 & 0.008715 & 0.741302 & 0.592596\\\\\n\n .&.&.&.&.& . &.&.&.&.&. & . 
\\\\\n\\hline\np2p\\_mean & p2p\\_98perc & std\\_ov\\_err&&&&&&&&& \\\\\nrel. flux & rel. flux &&&&&&&&&& \\\\\n\\hline\n0.005249 & 0.017133 & 1.371857&&&&&&&&& \\\\\n\n .&.&.&.&.& . &.&.&.&.&. & . \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\end{landscape}\n\nThe total number of objects found in each class is given in Table \\ref{tabnclass}, at various probability cuts. Note that for RRab class objects in particular, most objects with class probability $>0.5$ are real classifications. In the other cases the probability calibration is better, but these probabilities should still not be interpreted as posterior probabilities.\n\n\\begin{table}\n\\caption{Total objects in each class.}\n\\label{tabnclass}\n\\begin{tabular}{lllll}\n\\hline\nClass & Total & Prob $>0.5$ & Prob $> 0.7$ & Prob $> 0.9$ \\\\\n\\hline\nRRab & 248 & 154 & 72 & 25 \\\\\nDSCUT & 750 & 562 &377 & 166\t\\\\\nGDOR & 451 & 264 & 133 & 37\t\\\\\nEA & 607 & 308 & 183 & 99 \t\\\\\nEB & 463 & 392 & 290 & 186 \t\t\\\\\nOTHPER & 22428 & 18698 & 9399 & 3547 \\\\\nNoise & 43963 & 38609 & 21210 & 6018 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\nWe find that the classifier works well on all fields. The RRab class performs well throughout, due to the distinctive shape of their phasecurves. These are well characterised by the SOM. There are however some distinct features unique to fields 3 and 4. The EA class has a tendency to pick up noise dominated lightcurves in these fields, primarily because their point to point scatter is much higher than in fields 0--2. In these cases the class probability, although highest for EA, is still relatively low however. Similarly for DSCUT objects, there are a higher proportion of objects in these fields with many anomalous points, possibly due to flaring or instrumental noise. These points can cause biases in the phase curve, resulting in an artificial sinusoid, which when combined with a short period results in a DSCUT classification. 
Again, these noise objects have a lower probability than real DSCUT lightcurves. One final interesting property is the split between OTHPER and Noise lightcurves. This separation is clean for fields 0--2. In fields 3 and 4, while OTHPER lightcurves are recognised, several Noise lightcurves can be classified as OTHPER. Probability cuts remove the worst of these, but there is no way to distinguish between quasi-periodic instrumental noise and astrophysical variability in this scheme. These issues all lead to the conclusion that the classifier has more trouble with fields 3--4, due to their increased noise levels. We expect this issue to improve as K2 detrending methods become more robust.\n\n\\subsection{Detrending method comparison}\n\n\\begin{table*}\n\\caption{Total objects in each class in fields 3--4, split by detrending method (W=Warwick, PDC=K2 Team released lightcurves).}\n\\label{tabdetcomp}\n\\begin{tabular}{lllllll}\n\\hline\nClass & Total W & Total PDC & Prob $> 0.5$ W & Prob $> 0.5$ PDC & Prob $> 0.7$ W & Prob $> 0.7$ PDC \\\\\n\\hline\nRRab & 141 & 152 & 95 & 115 & 48 & 83 \\\\\nDSCUT & 280 & 266 & 180 & 201 & 116 & 148 \\\\\nGDOR & 198 & 382 & 122 & 238 & 61 & 101 \\\\\nEA & 255 & 413 & 97 & 223 & 54 & 102 \\\\\nEB & 168 & 150 & 140 & 131 & 106 & 105 \\\\\nOTHPER & 11402 & 9102 & 8709 & 8034 & 3522 & 4565 \\\\\nNoise & 17143 & 19126 & 13012 & 17919 & 3625 & 11566 \\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\nTable \\ref{tabdetcomp} shows the numbers of variable stars found using each dataset. At first glance the numbers in Table \\ref{tabdetcomp} seem to imply significant differences between detrending methods. The discrepancy in RRab numbers is largely a result of differing probability calibration -- the same stars are found in both datasets, but those in the Warwick set are given lower probabilities (although still higher than for all other classes). Other major discrepancies are in the GDOR and EA classes. For GDOR, we find that the PDC set gives better results. 
Several GDOR lightcurves are misclassified in the Warwick set due to poor detrending masking the true variability. In some cases the PDC GDOR classification is inaccurate, but this is rare for the class probability $>0.7$ objects. For the EA objects, the reverse is true. Several PDC lightcurves are misclassified as EA due to a higher number of lightcurves in the PDC set with very significant remnant outliers. These lead to a high point-to-point scatter, which is interpreted by the classifier as an eclipse. Here the Warwick set is more reliable. The largest absolute difference in the variable classes is in the OTHPER objects, where an extra \\raise.17ex\\hbox{$\\scriptstyle\\mathtt{\\sim}$}1000 lightcurves pass the high probability cut for the PDC set. This is partly a result of a similar effect to that for the RRab objects, where similarly classified objects are given lower probabilities in the Warwick set. However, there are also several objects found in the PDC set which are missed in the Warwick set, due to increased noise levels. The converse is also true, with some lightcurves found in the Warwick set but missed by the PDC. Overall, the two detrending methods perform comparably well, and can be used to reinforce each other when studying variable classes.\n\n\\subsection{Anomaly detection}\nDue to the limited classification scheme used, it is inevitable that some objects will not fit any of the given classes \\citep{Protopapas:2006br}. Because Noise and OTHPER are included as classes, this is not a large problem, as each class is quite broad. However, it is worth noting any particular anomalies. One way of doing this is already intrinsic to the SOM -- the Euclidean distance of a phase curve to its nearest matching pixel template. However, this metric only works for periodic sources, and can return high values for noisy sources. We perform a check for anomalies following the method of \\citet{Richards:2012ea}. 
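In outline, this check can be sketched with a scikit-learn-style random forest (a minimal sketch on synthetic data; the variable names and dataset are ours, not from the catalogue pipeline):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-ins for the training set and the tested objects.
X_train, y_train = make_classification(n_samples=50, n_features=8, random_state=0)
X_test, _ = make_classification(n_samples=5, n_features=8, random_state=1)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Per-tree final classifications: shape (n_trees, n_objects).
votes_train = np.stack([t.predict(X_train) for t in forest.estimators_])
votes_test = np.stack([t.predict(X_test) for t in forest.estimators_])

# Proximity rho[i, j]: fraction of trees in which test object i and training
# object j end at the same final classification.
rho = (votes_test[:, :, None] == votes_train[:, None, :]).mean(axis=0)

# Discrepancy d_ij = (1 - rho_ij) / rho_ij, with rho = 0 mapped to infinity.
d = np.where(rho > 0, (1.0 - rho) / np.clip(rho, 1e-12, None), np.inf)

# Anomaly score: second smallest discrepancy of each object to the training set.
anomaly_score = np.sort(d, axis=1)[:, 1]
```

Taking the second smallest discrepancy, rather than the smallest, makes the score robust to a single chance match in the training set.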
This works by extracting the proximity measure, $\\rho_{ij}$, between each tested object $i$ and each object $j$ in the training set. The proximity measure is the proportion of trees in the classifier for which each object ends at the same final classification. It is close to unity for similar objects, and close to zero for dissimilar ones. From the proximity, the discrepancy $d$ is calculated via\n\n\\begin{equation}\nd_{ij} = \\frac{1-\\rho_{ij}}{\\rho_{ij}}\n\\end{equation}\n\nThe anomaly score is then given by the second smallest discrepancy of an object to the training set. High anomaly scores represent objects which are not well explained by any object in the training set, and are hence outliers.\n\nWe find that in this case, the highest few percentiles of anomalous objects are a mixture of noise-dominated lightcurves, unusual eclipsing binaries and variability which does not fit into the adopted classification scheme. We leave a full analysis of these unusual lightcurves to future work.\n\n\\subsection{Eclipsing Binaries}\nEncouragingly, we identify 139 (96 at class probability $>0.7$) of the 165 EPIC, non-M35 eclipsing binaries identified by \\citet{LaCourse:2015jr} in field 0 as either 'EA' or 'EB' type, despite automating the process and not focusing exclusively on eclipsing binaries. The majority of the remainder are identified as 'OTHPER' or 'DSCUT', and are discussed below. We further identify an additional 61 EPIC, non-M35 objects in field 0 as 'EA' or 'EB' at class probability $>0.7$, although as our identification is automated rather than visual some of these may be misidentified by the classifier. Many more eclipsing binaries are found in the other fields.\n\nThe eclipsing binaries that were previously labelled, but not identified by our classifier, fall into three main groups. The first show near-sinusoidal short period lightcurves, and are generally identified as 'DSCUT'. 
In these cases it is difficult to reliably assign a class with the information available. These objects may be actual $\\delta$ Scuti stars, or contact eclipsing binaries. The other and largest group, with 14 members, are identified as 'OTHPER', and show pulsations or spot-modulation in addition to the known eclipses. We note that the classifier will assign a class based largely on the dominant period and the phasecurve at this period, hence it performs as expected in these cases. Pulsating stars in eclipsing binaries are useful objects, and so while a detailed study of these objects is beyond the scope of this paper, we provide a list of such objects in Table \\ref{tabqpebs}. These are eclipsing binaries identified by a visual check of the lightcurves that we performed ourselves (as the \\citet{LaCourse:2015jr} catalogue only covered field 0), which are classified as `OTHPER' by our classifier. Some may be blended signals, and hence the pulsator or spot-modulated star may not be a member of the eclipsing binary system.\n\n\\begin{table}\n\\caption{EPIC IDs for 29 visually identified eclipsing binaries classified as `OTHPER' by our classifier, from fields 0--4.}\n\\label{tabqpebs}\n\\begin{tabular}{lllll}\n\\hline\n201158453 & 201173390 & 201569483 & 201584594 \\\\\n201638314 & 202072962 & 202137580 & 203371239 \\\\\n203476597 & 203637922 & 204043888 & 204193529 \\\\\n204328391 & 204411840 & 205510143 & 205919993 \\\\\n205985357 & 205990339 & 206047297 & 206060972 \\\\\n206066862 & 206226010 & 206311743 & 206500801 \\\\\n210350446 & 210501149 & 210766835 & 210945342 \\\\\n211093684 & 211135350 & & \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{$\\delta$ Scuti Stars}\nWe have a sample of 377 $\\delta$ Scuti candidates, using a class probability cut of 0.7. The majority of these candidates were previously unknown. It is interesting to study their frequency and amplitude distribution. 
Note that here we define amplitude as the maximum minus the minimum of the binned phase curve, and semi-amplitude as half this value. The distribution of amplitudes for the 377 $\\delta$ Scuti candidates is shown in Figure \\ref{figdscutampdist}. We see a number of HADS (high amplitude $\\delta$ Scutis). Adopting the amplitude threshold of $10^4$ ppm used by \\citet{Bradley:2015ep}, 104 of our candidates are HADS. Included in this sample are 11 candidates with an amplitude greater than $10^5$ ppm. The period distribution of the whole sample is shown in Figure \\ref{figdscutperdist}, and covers the expected range for $\\delta$ Scuti variables, limited by our Nyquist sampling frequency.\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{dscut_ampdistv2.pdf}}\n\\caption{The distribution of phase curve amplitude for DSCUT classified objects. Several high amplitude candidates are visible.}\n\\label{figdscutampdist}\n\\end{figure}\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{dscut_perdistv2.pdf}}\n\\caption{The distribution of pulsation periods for DSCUT classified objects. The cutoff at the low period end is imposed by our Nyquist sampling frequency.}\n\\label{figdscutperdist}\n\\end{figure}\n\nAs has been mentioned, the DSCUT classified objects are degenerate with $\\beta$ Cephei variables due to the lack of colour information available. There is a catalogue of estimated K2 temperatures available for some objects \\citep{Stassun:2014wz} which could be used to make probable distinctions if necessary.\n\n\\subsection{$\\gamma$ Doradus Stars}\nWe have a sample of 133 $\\gamma$ Doradus candidates, using a class probability cut of 0.7. We plot the amplitude and period distributions in Figures \\ref{figgdorampdist} and \\ref{figgdorperdist}, following the same definition of amplitude as for the $\\delta$ Scuti sample. 
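The amplitude used here (maximum minus minimum of the binned phase curve) and the $10^4$ ppm HADS cut can be sketched as follows (a minimal sketch on a synthetic sinusoid; the binning helper is our own illustration, not the catalogue code):

```python
import numpy as np

def phase_curve_amplitude(phase, flux, n_bins=64):
    """Amplitude = max minus min of the phase-binned light curve."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(phase, bins) - 1
    binned = np.array([flux[idx == b].mean()
                       for b in range(n_bins) if np.any(idx == b)])
    return binned.max() - binned.min()

# Synthetic folded light curve: 2 per cent semi-amplitude sinusoid.
rng = np.random.default_rng(0)
phase = rng.uniform(0, 1, 2000)
flux = 1.0 + 0.02 * np.sin(2 * np.pi * phase)

amp = phase_curve_amplitude(phase, flux)   # ~0.04 in relative flux
is_hads = amp * 1e6 > 1e4                  # 10^4 ppm threshold from the text
```

The conversion to ppm (relative flux times $10^6$) makes the HADS threshold of $10^4$ ppm correspond to an amplitude of 0.01 in relative flux.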
Note that this amplitude is only for the dominant period phase curve, and so does not include the other significant frequencies often present in $\\gamma$ Doradus lightcurves. The period distribution covers the expected range for $\\gamma$ Doradus variables. Due to the lack of colour information available, $\\gamma$ Doradus objects are degenerate with slowly pulsating B stars.\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{gdor_ampdistv2.pdf}}\n\\caption{The distribution of phase curve amplitude for GDOR classified objects.}\n\\label{figgdorampdist}\n\\end{figure}\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{gdor_perdistv2.pdf}}\n\\caption{The distribution of pulsation periods for GDOR classified objects.}\n\\label{figgdorperdist}\n\\end{figure}\n\n\\subsection{RR Lyrae ab-type Stars}\nAs the RRab class has less well calibrated probability (almost all candidates with Prob(RRab) greater than 0.5 seem to be real) we use an adjusted class probability threshold of 0.5 to study this class. This leaves 154 candidates. Their amplitude distribution is shown in Figure \\ref{figrrabampdist}, and peaks at significantly higher amplitude than that of the DSCUT and GDOR candidates as would be expected. Most of these candidates are previously known; we find that 129 of them are in K2 proposals focused on RR Lyrae stars. These proposals contain both known and candidate RR Lyraes; in the candidate cases our classification provides some support for them truly being RR Lyrae variables. Assuming these proposals were comprehensive (reasonable, given the multiple teams involved), this leaves 25 candidates as potential new discoveries by this catalogue. However, as these objects are those not in the proposals, there is a selection effect in favour of misclassified non-RR Lyrae objects. 
We performed a visual examination of each of these 25 lightcurves, which resulted in 8 of the 25 being confirmed as real RR Lyrae candidates (the others being either misclassified outbursting stars or particularly high amplitude noise). An additional 3 candidates were found by using the PDC lightcurve set and checking objects in both sets with class probability between 0.4 and 0.5, resulting in 10 total new candidates. These objects may still be blends of true RR Lyraes, hence the candidate designation. We plot the phase folded lightcurves for two new discoveries and two known RR Lyrae stars in Figure \\ref{fignewrrab}. Some amplitude modulation can be seen, due to some of these targets exhibiting the Blazhko effect \\citep{1907AN....175..325B}. RR Lyraes are immensely useful objects, allowing studies of the evolution of stellar populations throughout the Galaxy and in other nearby galaxies. Due to an absolute magnitude-metallicity relation \\citep{Sandage:1981ja} it is possible to use them for distance estimation.\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{rrab_ampdistv2.pdf}}\n\\caption{The distribution of phase curve amplitude for RRab classified objects.}\n\\label{figrrabampdist}\n\\end{figure}\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{newrrabs.pdf}}\n\\caption{Four phase folded RRAB classified lightcurves. Clockwise from top-left, the EPIC IDs are 210830646, 206409426, 211069540 and 203692906.}\n\\label{fignewrrab}\n\\end{figure}\n\t\n\\section{Conclusion}\nWe have implemented a novel combined machine learning algorithm, using both Self Organising Maps and Random Forests to classify variable stars in the K2 data. We consider fields 0--4, and intend to update the catalogue as more fields are released. As more data builds up, it may become possible to implement new variability classes, and study the effect of different detrending methods on the catalogue performance. 
We obtain a success rate of 92\\% using out of bag estimates on the training set.\n\nWe train the classifier on a set of Kepler and some K2 data from fields 0--2. As such it is applied completely independently to the majority of the K2 data, and the whole of fields 3--4. That we obtain good results for fields 3--4 bodes well for application of the classifier to future data. \n\nAlgorithms like this will become an increasingly important step in processing the data volumes expected from future astronomical surveys. To maximise scientific return it is critical to select interesting candidates, and do so rapidly and with minimal input. We hope that this method will contribute to the growing body of work attempting to address this issue. \n\n\\section*{Acknowledgements}\nThe authors thank the anonymous referee for a helpful review of the manuscript. This paper includes data collected by the Kepler mission. Funding for the Kepler mission is provided by the NASA Science Mission directorate. The data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX13AC07G and by other grants and contracts. 
We acknowledge with thanks the variable star observations from the AAVSO International Database contributed by observers worldwide and used in this research.\n\n\\subsection{Retrieval Methodology}\nWe take a two-step approach for retrieving \\emph{evidence sentences}: given a statement, (1) constructing one query per sentence and retrieving relevant articles from Wikipedia, and (2) reranking paragraphs and then sentences to create the final set of evidence sentences. \nWikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics. A dump of December 21, 2016 was downloaded. \nFor training, evidence sentences are retrieved with queries constructed from target user arguments. For test, queries are constructed from OP.\n\n\\smallskip\n\\noindent \\textbf{Article Retrieval.} \nWe first create an inverted index lookup table for Wikipedia as done in \\newcite{chen-EtAl:2017:Long4}. \nFor a given statement, we construct one query per sentence to broaden the diversity of retrieved articles. Therefore, multiple passes of retrieval will be conducted if more than one query is created. \nSpecifically, we first collect topic signature words of the post. Topic signatures~\\cite{lin2000automated} are terms strongly correlated with a given post, measured by log-likelihood ratio against a background corpus. We treat posts from other discussions in our dataset as background. \nFor each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word. 
\nFor instance, a query ``\\texttt{the government}, \\texttt{my e-mails}, \\texttt{national security}'' is constructed for the first sentence of OP in the motivating example (Figure~\\ref{fig:pipeline}). \nTop five retrieved articles with highest TF-IDF similarity scores are kept per query.\n\n\n\\smallskip\n\\noindent \\textbf{Sentence Reranking.} \nThe retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement. Up to 100 top ranked paragraphs with positive scores are retained. These paragraphs are further segmented into sentences, and reranked according to TF-IDF similarity again. We only keep up to 10 top sentences with positive scores for inclusion in the evidence set. \n\n\n\\begin{table}[t]\n\\fontsize{10}{12}\\selectfont\n \\centering\n \\setlength{\\tabcolsep}{2mm}\n \\begin{tabular}{l|p{15mm} p{15mm}}\n \\hline\n & \\multicolumn{2}{|c}{\\textit{Queries Constructed from}}\\\\\n & \\textbf{OP} & \\textbf{Argument} \\\\ \n \\hline\n Avg \\# Topic Sig. & 17.2 & 9.8 \\\\\n Avg \\# Query\t& 6.7 &\t1.9 \\\\\n Avg \\# Article Retrieved \t & 26.1 &\t8.0 \\\\\n Avg \\# Sent. Retrieved \t & 67.3 &\t8.5 \\\\\n \\hline\n \\end{tabular}\n \n\n \\caption{\\fontsize{10}{12}\\selectfont \n Statistics for evidence sentence retrieval from Wikipedia. Considering query construction from either OP or target user arguments, we show the average numbers of topic signatures collected, queries constructed, and retrieved articles and sentences. 
\n }\n \\label{tab:retrieval-stats}\n\\end{table}\n\n\\subsection{Gold-Standard Keyphrase Construction}\nTo create training data for the keyphrase decoder, we use the following rules to identify keyphrases from evidence sentences that are reused by human writers for argument construction: \n\n{\n\\begin{itemize}\n\\vspace{-2mm}\n\\item Extract noun phrases and verb phrases from evidence sentences using Stanford CoreNLP~\\cite{manning-EtAl:2014:P14-5}.\n\\vspace{-2mm}\n\\item Keep phrases of length between 2 and 10 that overlap with content words in the argument.\n\\vspace{-2mm}\n\\item If there is span overlap between phrases, the longer one is kept if it has more content word coverage of the argument; otherwise the shorter one is retained.\n\\end{itemize}\n}\n\nThe resultant phrases are then concatenated with a special delimiter \\texttt{} and used as the gold-standard generation sequence for training.\n\n\\subsection{Final Dataset Statistics}\nEncoding the full set of evidence with our current encoder takes a huge amount of time. \nWe therefore propose a sampling strategy that allows the encoder to finish within a reasonable time by considering only a subset of the evidence: for each sentence in the statement, up to three evidence sentences are randomly sampled from the retrieved set; the sampled sentences are then concatenated. \nThis procedure is repeated three times per statement, where a statement is a user argument for the training data and an OP for the test set. In our experiments, we remove duplicate samples and those without any retrieved evidence sentence. \nFinally, we break down the augmented data into a training set of 224,553 examples (9,737 unique OPs), 13,911 for validation (640 OPs), and 30,417 retained for test (1,892 OPs). \n\n\\subsection{Training Setup}\nFor all models, we use a two-layer biLSTM as encoder and a two-layer unidirectional LSTM as decoder, with 200-dimensional hidden states in each layer. 
We apply dropout~\\cite{gal2016theoretically} on RNN cells with a keep probability of 0.8. \nWe use Adam~\\cite{kingma2014adam} with an initial learning rate of 0.001 to optimize the cross-entropy loss. Gradient clipping is also applied with the maximum norm of 2. \nThe input and output vocabulary sizes are both 50k. \n\n\\smallskip\n\\noindent \\textbf{Curriculum Training.} \nWe train the models in three stages where the truncated input and output lengths are gradually increased. Details are listed in Table \\ref{tab:staged-training}. Importantly, this strategy allows model training to make rapid progress during early stages. \nTraining each of our full models takes about 4 days on a Quadro P5000 GPU card with a batch size of 32. The model converges after about 10 epochs in total with pre-training initialization, which is described below.\n\n\\begin{table}[H]\n \\centering\n \\fontsize{10}{12}\\selectfont\n \\begin{tabular}{l|c|c|c}\n \\hline\n \\textbf{Component} & \\textbf{Stage 1} & \\textbf{Stage 2} & \\textbf{Stage 3} \\\\\n \\hline\n \\multicolumn{4}{l}{\\it {Encoder}}\\\\\n \\quad OP & 50 & 150 & 400 \\\\\n \\quad Evidence \t& 0 & 80 & 120 \\\\\n \\multicolumn{4}{l}{\\it {Decoder}}\\\\\n \\quad Keyphrases & 0 & 80 & 120 \\\\\n \\quad Target Argument \t& 30 & 80 & 120 \\\\\n \\hline\n \\end{tabular}\n \\caption{\\fontsize{10}{12}\\selectfont\n Truncation size (i.e., number of tokens including delimiters) for different stages during training. Note that in the first stage we do not include evidence and keyphrases.}\n \\label{tab:staged-training}\n\\end{table}\n\n\\smallskip\n\\noindent \\textbf{Adding Pre-training.} \nWe pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set. After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder). 
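The staged truncation above can be expressed as a small helper (a sketch; the component names and token lists are hypothetical, the per-stage limits are from the staged-training table):

```python
# Curriculum truncation limits (tokens per component) for the three stages.
SCHEDULE = {
    1: {"op": 50, "evidence": 0, "keyphrases": 0, "argument": 30},
    2: {"op": 150, "evidence": 80, "keyphrases": 80, "argument": 80},
    3: {"op": 400, "evidence": 120, "keyphrases": 120, "argument": 120},
}

def truncate_example(example, stage):
    """Truncate each tokenised component to its limit for the given stage."""
    limits = SCHEDULE[stage]
    return {name: tokens[: limits[name]] for name, tokens in example.items()}

example = {
    "op": ["w"] * 500,
    "evidence": ["w"] * 200,
    "keyphrases": ["w"] * 150,
    "argument": ["w"] * 150,
}
stage1 = truncate_example(example, 1)  # OP capped at 50; no evidence/keyphrases
```

Starting with short sequences and no evidence keeps early batches cheap, which is what allows the rapid initial progress noted above.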
Experimental results show that pre-training boosts all methods by roughly 2 METEOR~\\cite{denkowski-lavie:2014:W14-33} points. We describe more detailed results in the supplementary material.\n\n\\subsection{Baseline and Comparisons}\n\nWe first consider a \\textsc{Retrieval}-based baseline, which concatenates retrieved evidence sentences to form the argument. \nWe further compare with three seq2seq-based generation models with different training data: \n(1) \\textsc{seq2seq}: training with OP as input and the argument as output; \n(2) \\textsc{seq2seq} + \\textit{encode evd}: augmenting input with evidence sentences as in our model; \n(3) \\textsc{seq2seq} + \\textit{encode KP}: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known. \nAll seq2seq models use a regular beam search decoder with the same beam size as ours. \n\n\\smallskip\n\\noindent \\textbf{Variants of Our Models.} \nWe experiment with variants of our models based on the proposed separate decoder model (\\textsc{Dec-separate}) or using a shared decoder (\\textsc{Dec-shared}). For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ \\textit{attend KP}).\n\n\n\n\n\\smallskip\n\\noindent \\textbf{System vs. Oracle Retrieval.} \nFor test time, evidence sentences are retrieved with queries constructed from OP (\\textit{System Retrieval}). 
We also experiment with an \\textit{Oracle Retrieval} setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results.\n\n\n\n\n\\section{Introduction}\n\\input{intro.tex}\n\n\\section{Framework}\n\\label{sec:framework}\n\\input{framework.tex}\n\n\\section{Data Collection and Processing}\n\\label{sec:data}\n\\input{data.tex}\n\n\\section{Model}\n\\label{sec:model}\n\\input{model.tex}\n\n\\section{Relevant Evidence Retrieval}\n\\label{sec:retrieval}\n\\input{context.tex}\n\n\\section{Experimental Setup}\n\\label{sec:experiments}\n\\input{experiments.tex}\n\n\\section{Results}\n\\label{sec:results}\n\\input{results.tex}\n\n\\section{Further Discussion}\n\\label{sec:discussion}\n\\input{discussion.tex}\n\n\\section{Related Work}\n\\label{sec:related}\n\\input{related.tex}\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\\input{conclusion.tex}\n\n\n\\section*{Acknowledgements}\nThis work was partly supported by National Science Foundation Grant IIS-1566382, and a GPU\ngift from Nvidia. \nWe thank three anonymous reviewers for their insightful suggestions on various aspects of this work.\n\n\n\\subsection{Model Formulation}\nOur model takes as input a sequence of tokens $\\bm{x} = \\{\\bm{x}^O; \\bm{x}^E\\}$, where $\\bm{x}^O$ is the statement sequence and $\\bm{x}^E$ contains relevant evidence that is extracted from Wikipedia based on a separate retrieval module. A special token \\texttt{} is inserted between $\\bm{x}^O$ and $\\bm{x}^E$. 
Our model then first generates a set of keyphrases as a sequence $\\bm{y}^p = \\{y^p_l\\}$, followed by an argument $\\bm{y}^a = \\{y^a_t\\}$, by maximizing $\\log P(\\bm{y}|\\bm{x})$, where $\\bm{y}=\\{\\bm{y}^p; \\bm{y}^a\\}$.\n\n\nThe objective is further decomposed into $\\sum_{t} \\log P(y_t|y_{1:t-1},\\bm{x})$, with each term estimated by a softmax function over a non-linear transformation of decoder hidden states $\\bm{s}^a_t$ and $\\bm{s}^p_t$, for argument decoder and keyphrase decoder, respectively. The hidden states are computed as done in \\newcite{bahdanau2014neural} with attention: \n\n\\vspace{-2mm}\n{\\fontsize{10}{11}\\selectfont\n\\setlength{\\abovedisplayskip}{2pt}\n\\setlength{\\belowdisplayskip}{2pt}\n\\begin{align}\n\\fontsize{10}{11}\\selectfont\n \t & \\bm{s}_t = g(\\bm{s}_{t-1}, \\bm{c}_t, y_t) \\label{eq:attn_0}\\\\\n &\\bm{c}_t = \\sum_{j=1}^T \\alpha_{tj} \\bm{h}_j \\label{eq:attn_1} \\\\\n &\\alpha_{tj} = \\frac{\\textnormal{exp}(e_{tj})}{\\sum_{k=1}^T \\textnormal{exp}(e_{tk})} \\label{eq:attn_2} \\\\\n &e_{tj} = \\bm{v}^T\\tanh (\\bm{W_h} \\bm{h}_j + \\bm{W_s} \\bm{s}_t + \\bm{b}_{attn}) \\label{eq:attn_3}\n\\end{align}\n}\n\nNotice that two sets of parameters and different state update functions $g(\\cdot)$ are learned for separate decoders: $\\{\\bm{W}^a_h$, $\\bm{W}^a_s$, $\\bm{b}^a_{attn}, g^a(\\cdot) \\}$ for the argument decoder; $\\{\\bm{W}^p_h$, $\\bm{W}^p_s$, $\\bm{b}^p_{attn}, g^p(\\cdot) \\}$ for the keyphrase decoder. \n\n\\smallskip\n\\noindent \\textbf{Encoder.} \nA two-layer bidirectional LSTM (bi-LSTM) is used to obtain the encoder hidden states $\\bm{h}_i$ for each time step $i$. For biLSTM, the hidden state is the concatenation of forward and backward hidden states: $\\bm{h}_i=[\\overrightarrow{\\bm{ h}_i}; \\overleftarrow{\\bm{h}_i}]$. Word representations are initialized with 200-dimensional pre-trained GloVe embeddings~\\cite{pennington-socher-manning:2014:EMNLP2014}, and updated during training. 
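The additive attention above (scores $e_{tj}$, softmax weights $\alpha_{tj}$, and context vector $\bm{c}_t$) can be sketched in NumPy (a minimal sketch with hypothetical dimensions; $\bm{W_h}$, $\bm{W_s}$, $\bm{v}$ and $\bm{b}_{attn}$ would be learned parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_h, d_s, d_a = 7, 16, 16, 12      # input length, encoder/decoder/attention dims
h = rng.normal(size=(T, d_h))         # encoder hidden states h_j
s_t = rng.normal(size=(d_s,))         # current decoder state s_t
W_h = rng.normal(size=(d_a, d_h))
W_s = rng.normal(size=(d_a, d_s))
v = rng.normal(size=(d_a,))
b_attn = np.zeros(d_a)

# e_tj = v^T tanh(W_h h_j + W_s s_t + b_attn)
e = np.tanh(h @ W_h.T + s_t @ W_s.T + b_attn) @ v   # shape (T,)

# alpha_tj = softmax over the input positions j
alpha = np.exp(e - e.max())
alpha /= alpha.sum()

# c_t = sum_j alpha_tj h_j
c_t = alpha @ h
```

The keyphrase-attention variant described later reuses exactly this computation, but with the keyphrase decoder states in place of the encoder states.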
\nThe last hidden state of encoder is used to initialize both decoders. In our model the encoder is shared by argument and keyphrase decoders.\n\n\n\\smallskip\n\\noindent \\textbf{Decoders.}\nOur model is equipped with two decoders: \\textit{keyphrase decoder} and \\textit{argument decoder}, each is implemented with a separate two-layer unidirectional LSTM, in a similar spirit with one-to-many multi-task sequence-to-sequence learning~\\cite{luong2015multi}. The distinction is that our training objective is the sum of two loss functions:\n\n\\vspace{-3mm}\n{\\fontsize{10}{11}\\selectfont\n\\begin{align}\n\\begin{split}\n \\mathcal{L}(\\theta) = &-\\frac{\\alpha}{T_p}{\\sum_{(\\bm{x},\\bm{y}^p)\\in D}} \\log P(\\bm{y}^p|\\bm{x};\\theta) \\\\ \n &- \\frac{(1-\\alpha)}{T_a}{\\sum_{(\\bm{x},\\bm{y}^a)\\in D}} \\log P(\\bm{y}^{a}|\\bm{x};\\theta) \\label{eq:loss_overall} \\\\\n\\end{split}\n\\end{align}\n}\n\n\\noindent where $T_p$ and $T_a$ denote the lengths of reference keyphrase sequence and argument sequence. \n$\\alpha$ is a weighting parameter, and it is set as $0.5$ in our experiments.\n\n\\smallskip\n\\noindent \\textbf{Attention over Both Input and Keyphrases.} \nIntuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process. We therefore propose an attention mechanism that can attend both encoder hidden states and the keyphrase decoder hidden states. 
Additional context vector $\\bm{c}'_t$ is then computed over keyphrase decoder hidden states $\\bm{s}^p_j$, which is used for computing the new argument decoder state:\n\n\n\\vspace{-2mm}\n{\\fontsize{10}{11}\\selectfont\n\\setlength{\\abovedisplayskip}{2pt}\n\\setlength{\\belowdisplayskip}{2pt}\n\\begin{align}\n&\\bm{s}^a_t = g'(\\bm{s}^a_{t-1}, [\\bm{c}_t; \\bm{c}'_t], {y}^a_t) \\label{eq:kattn_1} \\\\\n& \\bm{c}'_t = \\sum_{j=1}^{T_p} \\alpha'_{tj}\\bm{s}^p_j \\label{eq:kattn_3}\\\\\n& \\alpha'_{tj} = \\frac{\\textnormal{exp}(e'_{tj})}{\\sum_{k=1}^{T_p} \\textnormal{exp}{(e'_{tk})}} \\label{eq:kattn_4}\\\\\n& e'_{tj} = {\\bm{v}'}^T\\tanh (\\bm{W'_p} \\bm{s}^p_j + \\bm{W'_a} \\bm{s}^a_t + \\bm{b}'_{attn}) \\label{eq:kattn_5}\n\\end{align}\n}\n\n\\noindent where $\\bm{s}^p_j$ is the hidden state of keyphrase decoder at position $j$, $\\bm{s}^a_t$ is the hidden state of argument decoder at timestep $t$, and $\\bm{c}_t$ is computed in Eq.~\\ref{eq:attn_1}.\n\n\\smallskip\n\\noindent \\textbf{Decoder Sharing.} \nWe also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder. A special token \\texttt{} is inserted between the two sequences, indicating the start of argument generation. \n\n\\subsection{Hybrid Beam Search Decoding} \nHere we describe our decoding strategy on the argument decoder. We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments. \n\n\\smallskip\n\\noindent \\textbf{Hybrid Beam Expansion.} \nIn the standard beam search, the top $k$ words of highest probability are selected deterministically based on the softmax output to expand each hypothesis. 
However, this may lead to suboptimal output for text generation~\\cite{wiseman-rush:2016:EMNLP2016}, e.g., one beam often dominates and thus inhibits hypothesis diversity. \nHere we only pick the top $n$ words ($n<k$).\n\n$k_2>1$ means that the angular momentum decreases as\nthe flux rope expands. Combining Eqs.~\\ref{eq_rrx}, \\ref{eq_vr}, \\ref{eq_mom_2} and\n\\ref{eq_vphi}, we can rewrite the momentum conservation equation\nalong $\\hat{r}$ as\n\\begin{eqnarray}\n(\\vec{j}\\times\\vec{B})_r=\\rho(a_ex-k_1^2R^{-2k_2-1}x^{2k_2-3})+R^{-1}\\frac{\\partial p}{\\partial x}\n\\label{eq_mom_4}\n\\end{eqnarray}\nFor a thermodynamic process, we relate the thermal\npressure $p$ to the density $\\rho$ by the polytropic equation of state\n\\begin{eqnarray}\n p=k_3\\rho^\\Gamma \\label{eq_prho}\n\\end{eqnarray}\nwhere $k_3$ is a positive constant and $\\Gamma$ is a variable treated as\nthe polytropic index, and Eq.\\ref{eq_mom_4} becomes\n\\begin{eqnarray}\n(\\vec{j}\\times\\vec{B})_r=\\rho(a_ex-k_1^2R^{-2k_2-1}x^{2k_2-3})+k_3R^{-1}\\frac{\\partial \\rho^\\Gamma}{\\partial x} \\label{eq_mom_5}\n\\end{eqnarray}\nDefine a quantity $f_{em}$ to be the average Lorentz force\nover $\\hat{r}$ from the axis to the boundary of the flux rope,\n$f_{em}=\\frac{1}{R}\\int_0^R(\\vec{j}\\times\\vec{B})_rdr$.\nFrom Eq.\\ref{eq_mom_5}, we get\n\\begin{eqnarray}\nf_{em}=a_e\\int_0^1\\rho xdx-k_1^2R^{-2k_2-1}\\int_0^1\\rho x^{2k_2-3}dx+k_3R^{-1}\\int_0^1\\frac{\\partial\\rho^\\Gamma}{\\partial x}dx \\label{eq_pem1}\n\\end{eqnarray}\n$f_{em}>0$ means that the average Lorentz force is directed outward from the\naxis of the flux rope, causing expansion. 
On the other hand, $f_{em}<0$\nprevents the expansion of the flux rope.\n\nWe assume that the mass of a CME is conserved when it propagates in the\nouter corona and interplanetary space, where the CME has fully developed.\nThe mass conservation gives\n\\begin{eqnarray}\n \\int\\rho rdr d\\phi dz=2\\pi lR^2\\int_0^1\\rho xdx=M \\mathrm{\\ (constant)} \\label{eq_rho}\n\\end{eqnarray}\nwhere $l$ is the axial length of the flux rope (Fig.\\ref{fg_coor}).\nSince the flux rope is assumed to be self-similar and it is generally\naccepted that the magnetic field lines are frozen-in with the plasma flows\nin corona\/interplanetary space, the density in the flux rope has a fixed\ndistribution $f_\\rho(x)$, and therefore\n\\begin{eqnarray}\n\\rho(t, x)=f_\\rho(x)\\rho_0(t)\n\\end{eqnarray}\nDefine positive constants\n\\begin{eqnarray}\nk_4=\\int_0^1f_\\rho xdx \\\\\nk_5=\\int_0^1f_\\rho x^{2k_2-3}dx\n\\end{eqnarray}\nand a variable\n\\begin{eqnarray}\nq(\\Gamma)=f_\\rho^\\Gamma(0)-f_\\rho^\\Gamma(1)\n\\end{eqnarray}\nThen it can be inferred from Eq.\\ref{eq_rho} that\n\\begin{eqnarray}\n\\rho_0=\\frac{1}{2\\pi}k_{4}^{-1}MR^{-2}l^{-1}\\label{eq_rho_3}\n\\end{eqnarray}\nand Eq.\\ref{eq_pem1} can be written as\n\\begin{eqnarray}\nf_{em}=\\frac{M}{2\\pi}(a_eR^{-2}l^{-1}-k_1^2k_4^{-1}k_5R^{-2k_2-3}l^{-1})-f_{th} \\label{eq_pem2}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\nf_{th}=\\frac{1}{R}\\int_0^R-\\frac{\\partial p}{\\partial r}dr=k_3q\\rho_0^\\Gamma R^{-1} \\label{eq_pth}\n\\end{eqnarray}\nis the average thermal pressure force. 
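As a worked step, the closed form of $f_{th}$ quoted above follows in one line from the self-similar density profile $\rho=f_\rho(x)\rho_0$ and the polytropic law $p=k_3\rho^\Gamma$:

```latex
f_{th} = \frac{1}{R}\int_0^R -\frac{\partial p}{\partial r}\,dr
       = \frac{p(0)-p(R)}{R}
       = \frac{k_3\rho_0^\Gamma}{R}\left[f_\rho^\Gamma(0)-f_\rho^\Gamma(1)\right]
       = k_3\, q\, \rho_0^\Gamma R^{-1}
```

so $f_{th}>0$ (an outward-pointing average thermal pressure force) whenever the axis is denser than the boundary, i.e. $q(\Gamma)>0$.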
Like $f_{em}$, $f_{th}$ points outward\nif it is larger than zero.\n\nOn the other hand, in an axisymmetric cylindrical flux rope,\n\\begin{eqnarray}\n&\\vec{B}=B_\\phi\\hat{\\phi}+B_z\\hat{z}=\\nabla\\times\\vec{A} \\label{eq_bcomp} \\\\\n&B_\\phi=-\\frac{\\partial A_z}{\\partial r} \\label{eq_b_phi} \\\\\n&B_z=\\frac{1}{r}\\frac{\\partial}{\\partial r}(rA_\\phi) \\label{eq_b_z} \\end{eqnarray}\nAs the magnetic flux is conserved in both $\\hat\\phi$ and $\\hat{z}$ directions,\nwe get\n\\begin{eqnarray}\n&\\Phi_\\phi=-l\\int_0^R\\frac{\\partial A_z}{\\partial r} dr=l(A_z(0)-A_z(R)) \\label{eq_Phi_phi} \\\\\n&\\Phi_z=2\\pi\\int_0^R\\frac{\\partial}{\\partial r}(rA_\\phi) dr=2\\pi RA_\\phi(R) \\label{eq_Phi_z}\n\\end{eqnarray}\nIn order to satisfy the self-similar expansion assumption, $A_\\phi$ and $A_z$\nhave to keep their own distributions, respectively. Thus, according to the above two\nequations,\n\\begin{eqnarray}\n A_\\phi(t, x)&=&\\frac{f_\\phi(x)}{R} \\label{eq_A_phi} \\\\\n A_z(t, x)&=&\\frac{f_z(x)}{l} \\label{eq_A_z}\n\\end{eqnarray}\nIt can be proved that the conservation of helicity is satisfied automatically\n\\begin{eqnarray}\nH_m&=&\\int\\vec{B}\\cdot\\vec{A}rdrd\\phi dz \\nonumber \\\\\n&=&2\\pi \\int_0^1 [\\frac{f_z}{x}\\frac{\\partial}{\\partial x}(xf_\\phi)-f_\\phi\\frac{\\partial f_z}{\\partial x}] xdx \\nonumber \\\\\n&=&\\mathrm{constant}\n\\end{eqnarray}\nCombining Eq.\\ref{eq_b_phi}, \\ref{eq_b_z}, \\ref{eq_A_phi}\nand \\ref{eq_A_z}, we can calculate the Lorentz force in the flux\nrope\n\\begin{eqnarray}\n\\vec{j}\\times\\vec{B}&=&\\frac{1}{\\mu_0}(\\nabla\\times\\vec{B})\\times\\vec{B} \\nonumber \\\\\n&=&-\\mu_0^{-1}R^{-5}\\{x^{-2}\\frac{\\partial}{\\partial x}(xf_\\phi)\\frac{\\partial^2}{\\partial x^2}(xf_\\phi)-x^{-3}[\\frac{\\partial}{\\partial x}(xf_\\phi)]^2\\}\\hat{r} \\nonumber\\\\\n&&-\\mu_0^{-1}R^{-3}l^{-2}x^{-1}\\frac{\\partial f_z}{\\partial x}\\frac{\\partial}{\\partial x}(x\\frac{\\partial f_z}{\\partial x})\\hat{r} 
\\label{eq_jb_3}\n\\end{eqnarray}\nand therefore\n\\begin{eqnarray}\nf_{em}&=&-\\mu_0^{-1}R^{-5}\\int_0^1\\{x^{-2}\\frac{\\partial}{\\partial x}(xf_\\phi)\\frac{\\partial^2}{\\partial x^2}(xf_\\phi)-x^{-3}[\\frac{\\partial}{\\partial x}(xf_\\phi)]^2\\}dx \\nonumber\\\\\n&&-\\mu_0^{-1}R^{-3}l^{-2}\\int_0^1x^{-1}\\frac{\\partial f_z}{\\partial x}\\frac{\\partial}{\\partial x}(x\\frac{\\partial f_z}{\\partial x})dx \\label{eq_jbr_0} \\nonumber \\\\\n&=&-\\mu_0^{-1}k_6R^{-5}-\\mu_0^{-1}k_7R^{-3}l^{-2} \\label{eq_pem3}\n\\end{eqnarray}\nwhere $k_6$ and $k_7$ are both constants. It can be proved that\nthe sign of $k_6$ is determined by $B_z^2(R)-B_z^2(0)$, and $k_7\\geq0$.\n\nThe two forms of $f_{em}$, Eq.\\ref{eq_pem2} and \\ref{eq_pem3}, result in\n\\begin{eqnarray}\n&a_e-k_1^2k_4^{-1}k_5R^{-2k_2-1} \\nonumber\\\\\n&=-2\\pi\\mu_0^{-1}M^{-1}(k_6R^{-3}l+k_7R^{-1}l^{-1})+2\\pi M^{-1}k_3(2\\pi k_4M^{-1}R^2l)^{-\\Gamma}[f_\\rho^\\Gamma(0)-f_\\rho^\\Gamma(1)]Rl \\label{eq_acs}\n\\end{eqnarray}\nin which $f_{th}$ is substituted by Eq.\\ref{eq_pth}.\nSince it is at present impossible to directly measure the axial length of a\nflux rope, we relate it to a measurable variable, $L$, the distance\nbetween the flux rope axis and the solar surface (Fig.\\ref{fg_coor}),\nthe altitude at which the flux rope originates, by the assumption\n\\begin{eqnarray}\nl=k_8L \\label{eq_ll}\n\\end{eqnarray}\nwhere $k_8$ is a positive constant. 
The topology of the\nflux rope as shown in Figure \\ref{fg_coor} implies that this\nassumption is reasonable.\nFinally, Eq.\\ref{eq_acs} can be simplified to\n\\begin{eqnarray}\na_e-c_0R^{-c_1-3}=-c_2R^{-3}L-c_3R^{-1}L^{-1}+c_4(c_5^\\Gamma-c_6^\\Gamma)R^{1-2\\Gamma}L^{1-\\Gamma} \\label{eq_sase_fitting}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n&c_0=k_1^2k_4^{-1}k_5\\geq 0 \\\\\n&c_1=2k_2-2 \\geq 0 \\\\\n&c_2=2\\pi\\mu_0^{-1}M^{-1}k_6k_8 \\\\\n&c_3=2\\pi\\mu_0^{-1}M^{-1}k_7k_8^{-1}\\geq 0 \\\\\n&c_4=2\\pi M^{-1}k_3k_8>0 \\\\\n&c_5=(2\\pi)^{-1}Mk_4^{-1}k_8^{-1}f_\\rho(0) \\geq0 \\label{eq_density_center} \\\\\n&c_6=(2\\pi)^{-1}Mk_4^{-1}k_8^{-1}f_\\rho(1) \\geq0 \\label{eq_density_boundary}\n\\end{eqnarray}\nThe left-hand side of Eq.\\ref{eq_sase_fitting} describes the motion of the fluids\nin the flux rope. Its first term is the acceleration due to the radial motion (i.e., expansion)\nand the second one gives the acceleration due to the poloidal motion. The right-hand\nside reflects the contributions from the Lorentz force (the first two terms) and\nthe thermal pressure force (the last one).\nThe constants $k_{1-8}$ and $c_{0-6}$ that appear above are summarized in Table \\ref{tb_constants}.\n\n\\begin{table*}[t]\n\\caption{List of the constants $k_{1-8}$ and $c_{0-6}$}\n\\label{tb_constants}\n\\begin{tabular}{cp{120pt}|cp{120pt}}\n\\hline\nConstant & Interpretation & Constant & Interpretation \\\\\n\\hline\n$k_1$ & Scale the initial magnitude of the poloidal motion &$k_4$ and $k_5$ & Integral constants related to the density distribution \\\\\n$k_2$ & Decrease rate of the angular momentum as the flux rope expands &$k_6$ and $k_7$ & Scale the initial magnitude of the Lorentz force contributed by the axial and poloidal fields \\\\\n$k_3$ & Coefficient in the polytropic equation of state &$k_8$ & Assumed constant to relate the length of flux rope $l$ to distance $L$ \\\\\n\\hline\\hline\n$c_0$ & Scale the initial magnitude of the acceleration due to the poloidal motion &$c_2$ 
and $c_3$ & Similar to $k_6$ and $k_7$ \\\\\n$c_1$ & Similar to $k_2$ &$c_4(c_5^\\Gamma-c_6^\\Gamma)$ & Scale the initial magnitude of the contribution by the thermal pressure force \\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\nThe Lorentz force and thermal pressure force can be rewritten in terms of the\nconstants $c_{0-6}$, $k_8$ and the total mass $M$ as follows\n\\begin{eqnarray}\nf_{em}=-\\frac{M}{2\\pi k_8}(c_2R^{-5}+c_3R^{-3}L^{-2})\\label{eq_fem}\\\\\nf_{th}=\\frac{M}{2\\pi k_8}c_4(c_5^\\Gamma-c_6^\\Gamma)R^{-2\\Gamma-1}L^{-\\Gamma}\\label{eq_fth}\n\\end{eqnarray}\nand their ratio is\n\\begin{eqnarray}\n\\frac{f_{em}}{f_{th}}=\\frac{c_2R^{2\\Gamma-4}L^\\Gamma+c_3R^{2\\Gamma-2}L^{\\Gamma-2}}{c_4(c_6^\\Gamma-c_5^\\Gamma)}\\label{eq_inff}\n\\end{eqnarray}\n\nIn summary, starting from the MHD equations with the three major assumptions that\n(1) the flux-rope CME has an axisymmetric cylinder configuration, (2) its cross-section is\nself-similarly evolving, and (3) its axial length is proportional to the distance\nfrom the solar surface, we find that the polytropic index, $\\Gamma$, can be related to the\nmeasurable parameters: the distance, $L$, the radius, $R$, and another derived\nquantity, the expansion acceleration ($a_e$),\nas shown in Eq.\\ref{eq_sase_fitting}.\nIf we have enough measurement points, the unknown constants $c_{0-6}$ and the variable\n$\\Gamma$ can be obtained through fitting techniques (e.g., that described in the first\nparagraph of Sec.\\ref{sec_results}), and then the relative strength\nof the Lorentz force and thermal pressure force can also be easily calculated by Eq.\\ref{eq_fem} and \\ref{eq_fth}.\n\n\\begin{comment}\nFurther, we define the ratio of the Lorentz\nforce to thermal pressure gradient force as\n\\begin{eqnarray}\n\\mathrm{Plasma\\ } \\delta=\\frac{|f_{em}|}{|f_{th}|}\n\\end{eqnarray}\nIt reflects the relative significance of the Lorentz\nforce compared to the thermal pressure gradient force. 
Plasma $\\delta=0$ means $\\vec j\\times\\vec B=0$, implying a strictly force-free\nstate. From Eq.\\ref{eq_rho_3}, \\ref{eq_pth} and \\ref{eq_pem3}, we find that\n$\\delta$ has the following form\n\\begin{eqnarray}\n\\delta=\\left|\\frac{c_2R^{2\\Gamma-4}L^{\\Gamma}+c_3R^{2\\Gamma-2}L^{\\Gamma-2}}{c_4(c_5^\\Gamma-c_6^\\Gamma)}\\right|\n\\label{eq_inff}\n\\end{eqnarray}\nthat can be simply calculated after the parameters $c_{0-6}$\nand $\\Gamma$ in Eq.\\ref{eq_sase_fitting} are obtained.\n\\end{comment}\n\n\\subsection{Asymptotic Value of Polytropic Index $\\Gamma$} \\label{sec_asymptotic}\nHere, we consider the case of a nearly force-free expanding flux rope.\nIt is generally true that most CMEs are almost force-free at least near 1 AU though\nthey may be far away from a force-free state at the initial stage.\nIt can be proved that $R\\propto L$\n(ref. to Appendix), i.e., $R=\\alpha L$ where $\\alpha$ is a positive constant.\nThen Eq.\\ref{eq_inff} becomes\n\\begin{eqnarray}\n\\frac{f_{em}}{f_{th}}=\\frac{(c_2\\alpha^{2\\Gamma-4}+c_3\\alpha^{2\\Gamma-2})L^{3\\Gamma-4}}{c_4(c_6^\\Gamma-c_5^\\Gamma)}\n\\end{eqnarray}\nIt is found that $\\Gamma=\\frac{4}{3}$ is a critical point, above\/below which\nthe absolute value of the Lorentz force decreases slower\/faster than that of the thermal\npressure force with increasing distance $L$. This value of $\\Gamma$ is the same as\nthat obtained by \\citet{Low_1982} and \\citet{Kumar_Rust_1996} for a self-similar\nexpanding flux rope. 
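The role of the $L^{3\Gamma-4}$ factor above can be made concrete with a small numerical sketch (Python, illustrative only): the sign of the exponent $3\Gamma-4$ decides which force decays faster.

```python
# Exponent of L in the force ratio f_em/f_th for a nearly force-free,
# self-similarly expanding flux rope (R = alpha * L), where
# f_em/f_th ~ L**(3*Gamma - 4).
def ratio_exponent(gamma):
    """Power of L in the Lorentz-to-thermal force ratio."""
    return 3.0 * gamma - 4.0

# Gamma = 4/3 is the critical point: the ratio is independent of L.
# Above it, |f_em| decays more slowly than |f_th|; below it, faster.
print(ratio_exponent(5.0 / 3.0))  # adiabatic plasma: positive exponent
print(ratio_exponent(1.0))        # isothermal plasma: negative exponent
```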
This inference is reasonable because a smaller $\\Gamma$ implies that the\nplasma absorbs more heat for the same expansion and therefore the thermal pressure\nshould decrease more slowly.\n\nUnder the force-free condition, Eq.\\ref{eq_sase_fitting} can also reduce to\n\\begin{eqnarray}\na_e=c_0(\\alpha L)^{-c_1-3}-c_2\\alpha^{-3}L^{-2}-c_3\\alpha^{-1}L^{-2}+c_4(c_5^\\Gamma-c_6^\\Gamma)\\alpha^{1-2\\Gamma}L^{2-3\\Gamma}\n\\end{eqnarray}\nand at infinite distance, $L\\rightarrow+\\infty$, we have\n\\begin{eqnarray}\na_{e\\infty}\\sim c_4(c_5^{\\Gamma_\\infty}-c_6^{\\Gamma_\\infty})\\alpha^{1-2{\\Gamma_\\infty}}L_\\infty^{2-3{\\Gamma_\\infty}}\n\\end{eqnarray}\nThe above equation indicates that $\\Gamma=\\frac{2}{3}$ is another critical point.\nThe polytropic index $\\Gamma$ should be larger than $\\frac{2}{3}$ to make sure that the flux rope will finally approach a\nsteady expansion and propagation state (including the case that the flux rope stops somewhere\nwithout expansion). Otherwise, the flux rope would keep accelerating its expansion forever.\n\nBased on the current observations, the expansion behavior of CMEs at large heliocentric\ndistance is not as clear as that in the inner heliosphere. The investigations on\nthe radial widths of CMEs suggest that CMEs at least keep expanding within about 15 AU\n\\citep[e.g.,][]{Wang_Richardson_2004, Wang_etal_2005a}, but the expansion speeds seem\nto become slower and slower. Although the number of CMEs observed near and beyond 15 AU is\nsmall and the uncertainty of statistics is large, it is likely that a CME may not be\nable to sustain an accelerated expansion indefinitely. Thus, in practice, the polytropic index\nof the CME plasma should be larger than $\\frac{2}{3}$.\n\n\n\\section{The 2007 October 8 CME} \\label{sec_case}\n\\subsection{Observations and Measurements} \\label{sec_measurements}\nThe suite of SECCHI instruments on board the STEREO spacecraft provides an\nunprecedented continuous view of CMEs from the surface of the Sun through\nthe inner heliosphere. 
The instruments, EUVI, COR1, COR2, HI1 and HI2,\nimage the solar corona in the ranges of 0--1.5, 1.4--4.0,\n2.5--15.0, $\\sim$15--90, and $\\sim$90--300 solar radii ($R_S$),\nrespectively \\citep{Howard_etal_2008}. The SECCHI observations present\nus with a great opportunity to study the evolution of CMEs over an extended\ndistance. The CME launched on 2007 October 8 is a well-observed event,\nwhich is used to study the CME evolution and the applicability of our model.\n\n\\begin{figure*}[p]\n \\centering\n \\includegraphics[width=0.9\\hsize]{f2low.eps}\n \\caption{Images of the 2007 October 8 CME taken by (a) EUVI at 304\\AA,\n(b) COR1, (c) COR2, (d) HI1 and (e) HI2 on board STEREO B. The panel\nat the lower-right corner shows a direct image of the CME in the HI1\nFOV. This image has been corrected to the plane perpendicular to the\nline between the Sun and STEREO B, because the CME is assumed to be\na limb event in the CORs and the direction that the HI1 camera faces is\ndifferent from the CORs'.}\\label{fg_20071008cme}\n\\end{figure*}\n\nThis CME was initiated close to the western limb as seen from STEREO\nB. Hereafter all the observations used are from instruments on board\nthe B spacecraft. Figure \\ref{fg_20071008cme} shows five images of\nthe CME at different distances from the Sun. The CME was accompanied\nby a prominence eruption starting at about 07:00 UT on October 8, as\nseen by EUVI. The CME source region is clearly shown in the EUVI\n304\\AA\\ image on the top-left panel of the figure. The erupting\nprominence was also seen in the COR1 running-difference image (the\ntop-right panel). The CME was first observed in COR1 at about 08:46\nUT on October 8, and continuously ran through the COR2 and HI1 fields of\nview (FOV). It even appeared in the HI2 FOV after about 12:00 UT on\nOctober 10. Since the CME was launched from the western limb and\nshowed a circular-like structure, we believe that the CME was viewed\nby the instruments through an axial-view angle. 
Therefore, the\nprojection of the CME on the plane of the sky can be treated as the\ncross-section of the CME.\n\nTo obtain the two quantities, $R$ and $L$, required as model inputs,\nwe simply measure three parameters:\nthe heliocentric distance of the CME leading edge, $h$, and the maximum\nand minimum position angles, $\\theta_{max}$ and $\\theta_{min}$, of the CME,\nas shown in Figure \\ref{fg_coor}. Under the assumption of a circular\ncross-section, $R$ and $L$ can be derived by\n\\begin{eqnarray}\nR&=&h-L-R_S \\\\\nL&=&\\frac{h}{1+\\sin \\frac{\\theta_{max}-\\theta_{min}}{2}}-R_S\n\\end{eqnarray}\nIt should be noted that the measurements in HI2 images are not included\nin the following analysis, because the elongation effect is not negligible.\n\n\\begin{figure*}[tbh]\n \\centering\n \\includegraphics[width=0.48\\hsize]{f3a.eps}\n \\includegraphics[width=0.45\\hsize]{f3b.eps}\\\\\n \\hskip -170pt\n \\includegraphics[width=0.45\\hsize]{f3c.eps}\n \\caption{{\\it Left-upper panel:} The measurements of the heliocentric distance,\n$h$, of the flux-rope CME leading edge and its angular width,\n$\\Delta\\theta=\\theta_{max}-\\theta_{min}$. {\\it Right panel:} The\nderived distance, $L$, of the flux rope axis from the solar surface\nand the flux rope radius, $R$. {\\it Left-lower panel:} The\npropagation, $v_c$, and expansion, $v_e$, speeds derived from $L$\nand $R$, respectively.}\\label{fg_cme_motion}\n\\end{figure*}\n\nFigure \\ref{fg_cme_motion} shows the measurements and the derived parameters.\nThe CME is a slow and gradually accelerated event. It took about 46 hours\nfor its leading edge to reach 70 $R_S$. Nevertheless, because of its slowness,\nwe are able to make about one hundred measurement points for this CME. The\nred crosses plotted in the left-upper panel suggest that the CME angular width\nincreased at the early phase (mainly in the COR1 FOV), and then reached\na near-constant value in the COR2 and HI1 FOVs. 
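The two relations above are simple to evaluate; the following Python sketch derives $L$ and $R$ from a hypothetical set of measurements (the input numbers are illustrative, not actual data points of this event).

```python
import math

R_SUN = 1.0  # work in units of the solar radius R_S

def flux_rope_geometry(h, theta_max_deg, theta_min_deg):
    """Derive the axis distance L and radius R of a circular cross-section
    from the leading-edge heliocentric distance h (in R_S) and the
    maximum/minimum position angles (in degrees)."""
    half_width = math.radians(theta_max_deg - theta_min_deg) / 2.0
    L = h / (1.0 + math.sin(half_width)) - R_SUN
    R = h - L - R_SUN
    return L, R

# Hypothetical measurement: leading edge at 70 R_S, 60-degree angular width.
L, R = flux_rope_geometry(70.0, 120.0, 60.0)
```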
The right panel presents\nthe evolution of the derived $R$ and $L$.\nIt is shown that the radius of the flux-rope CME is about 20 $R_S$ when it\npropagated nearly 50 $R_S$ away from the Sun, which put the leading edge at\nabout 70 $R_S$. The left-lower panel exhibits the speeds derived from $R$ and $L$,\nnamely the expansion speed, $v_e$, and the propagation speed, $v_c$, respectively. In the early\nphase, the expansion speed was very close to the propagation speed.\nIn the later phase, the propagation speed increased more\nquickly than the expansion speed. The increased difference between $v_c$ and\n$v_e$ is probably due to (1) the enhanced drag force of the ambient solar wind,\nwhich is fully formed in the outer corona, and (2) the weakened pressure in the CME.\nThe issue of CME acceleration, which is as important as CME expansion, is not\naddressed in the model presented in this paper.\n\nIn the measurements, the CME radius obtained is the one along the\nlatitudinal direction on the meridional plane. This radius would be\nthe same as the radius along the radial direction if the\ncross-section were a perfect circle. However, the true cross-section\ndeviates from the perfect circle, and the deviation becomes larger\nas the CME is further from the Sun\n\\citep[e.g.,][]{Riley_Crooker_2004}. The distortional stretching of\nthe cross-section is caused by the divergent radial expansion of the\nbackground solar wind, which causes kinematic expansion of CMEs\nalong both the meridional and azimuthal directions, but not at all\nalong the radial direction. The CME expansion along the radial\ndirection is mostly driven by the dynamic effect, such as pressure\ngradient forces, while the expansion along the other two directions\nthat lie on the spherical surface is caused by the combination of\nthe dynamic and kinematic effects. As a result, the overall\ncross-section is a convex-outward ``pancake\" shape\n\\citep{Riley_Crooker_2004}. 
Figure \\ref{fg_20071008cme}f shows such\na distortion of the 2007 October 8 CME as observed in the HI1 FOV; the\naspect ratio, defined as the ratio of the radius along the\nmeridional direction to that along the radial direction, is about\n1.4 when the CME leading edge is at $\\sim70 R_S$.\n\nDue to this stretching effect, our measurements assuming a circular cross-section lead to inaccuracies in the\nmeasured parameters and the inferred parameters as well. In order to study the internal state of a CME, the radius\nof the CME, $R$, should be the one along the radial direction, and it is apparently overestimated when the radius\nalong the meridional direction is adopted. The derived expansion speed of the CME is thus larger than the true value.\nSuch simplified measurements would infer unrealistic parameters of the CME at 1 AU. For instance, the observed radius\nof 20 $R_S$ of the CME at a distance of 50 $R_S$ from the Sun would imply a CME cross-section of 0.8 AU at 1 AU, which is\ntoo large to be true. The observed speeds of $v_e$ and $v_c$ would imply a speed of about 150 km\/s at the trailing\nedge of the CME, which is much smaller than the observed solar wind speed, i.e., about 300 km\/s. Therefore, one\nshould be cautious when our method is applied to CMEs at a large distance from the Sun (e.g., $> 70 R_S$). The\nheliospheric region we investigated in this paper is within about 70 $R_S$, and the stretching effect is\nrelatively small. Nevertheless, we will carefully estimate the errors on CME parameters in the second\nparagraph of Sec.\\ref{sec_discussion}.\nWe point out here that there is an observational difficulty in measuring the radius of CMEs along the radial\ndirection in a consistent way, mainly because of the low brightness contrast of the CME trailing edge in coronagraph\nimages. 
This difficulty might be overcome if the CME of interest is particularly bright.\n\n\nBefore modeling the CME, we fit the measurement points with a certain function\nto retrieve the smooth evolution process of the CME, which is required for the\nmodel. We use a modified log-normal distribution function to fit the speeds.\nWe did not fit the expansion acceleration directly, because any small\nerror in the measurements of $R$ will be dramatically amplified in its second derivative\n$a_e$.\nThe fitting function of velocity has the form\n\\begin{eqnarray}\nv(t)=\\frac{v_\\infty}{2}\\left[1+\\mathrm{erf}\\left(\\frac{\\ln(t-t_0)-M}{S\\sqrt{2}}\\right)\\right]\n\\end{eqnarray}\nwhere $\\mathrm{erf}(z)$ is the error function, defined by\n\\begin{eqnarray}\n\\mathrm{erf}(z)\\equiv\\frac{2}{\\sqrt{\\pi}}\\int_0^ze^{-t^2}dt\n\\end{eqnarray}\nThis function has a value range from 0 to $v_\\infty$. It is chosen\nbecause the measurements show a trend that, at least within the FOVs of SECCHI,\nboth speeds will not increase forever, but instead\nasymptotically approach a constant speed, $v_\\infty$. The acceleration\ncan be derived by\n\\begin{eqnarray}\na_e(t)=\\frac{v_\\infty}{S\\sqrt{2\\pi}(t-t_0)}e^{-\\frac{[\\ln(t-t_0)-M]^2}{2S^2}}\n\\end{eqnarray}\nThe solid lines in the left-lower panel of Figure\n\\ref{fg_cme_motion} show the fitting results. The fitted parameter\n$v_\\infty$ is 118 km\/s for expansion and 246 km\/s for propagation.\nAs a comparison with the measurements, the integrals of the fitting\ncurves of the speeds are also plotted in the right panel. It has\nbeen mentioned before that these estimated speeds suffer from the solar\nwind stretching effect. In particular, the estimated expansion speed\nis larger than it should be. The error will be discussed in\nSec.\\ref{sec_discussion}.\n\n\\subsection{Results}\\label{sec_results}\nTo fit the above curves with the model, Eq.\\ref{eq_sase_fitting}, we use\nan iterative method. 
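The velocity fit of the previous subsection and its analytic derivative are straightforward to implement; in the Python sketch below only $v_\infty=118$ km/s comes from the fit quoted above, while $t_0$, $M$ and $S$ are illustrative placeholders.

```python
import math

def v_fit(t, v_inf, t0, M, S):
    """Erf-based (log-normal-style) speed profile rising from 0 to v_inf."""
    x = (math.log(t - t0) - M) / (S * math.sqrt(2.0))
    return 0.5 * v_inf * (1.0 + math.erf(x))

def a_fit(t, v_inf, t0, M, S):
    """Analytic time derivative of v_fit, i.e., the acceleration."""
    u = math.log(t - t0) - M
    return (v_inf / (S * math.sqrt(2.0 * math.pi) * (t - t0))
            * math.exp(-u * u / (2.0 * S * S)))

# v_inf from the expansion-speed fit; t0, M, S are illustrative only.
p = dict(v_inf=118.0, t0=0.0, M=2.0, S=0.8)
```

Fitting the acceleration through the analytic derivative of the speed fit, rather than differentiating the data twice, avoids the error amplification noted above.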
First, we solve this equation\nfor every 8 neighboring measurement points to obtain a set of parameters $c_{0-6}$\nand $\\Gamma$. The segment of the 8 points is a running box through the entire\nevolution process of the CME. Second, we input the obtained values of $\\Gamma$\ninto the model as initial guesses to fit the global constants $c_{0-6}$. Third, we\nuse the fitted $c_{0-6}$ to update the variable $\\Gamma$ by solving\nEq.\\ref{eq_sase_fitting} again. We then iterate the second and third steps until\nthe constants $c_{0-6}$ and $\\Gamma$ converge to a steady solution. For the sake\nof simplicity, we ignore the poloidal motion of the fluid by setting $c_0$ to zero. This\nis also justified because there seems to be no strong observational evidence of a ring flow\ninside a CME.\n\nThe model results are shown in Figure \\ref{fg_cme_state}.\nThe uncertainty of the model results is estimated from the relative error\nof $a_e$, which is given by\n\\begin{eqnarray}\nE=\\left|\\frac{a_{em}-a_{ei}}{a_{ei}}\\right|\n\\end{eqnarray}\nwhere $a_{ei}$ is the value calculated from the input data, and $a_{em}$ is\nthe model value. The error curve is plotted in the left-upper panel of Figure\n\\ref{fg_cme_state}. It is found that the error is\nsmaller than 1\\%, except during 12:00 -- 18:00 UT.\nA possible explanation of the large uncertainty during that time is\ngiven in the second-to-last paragraph of Sec.\\ref{sec_discussion}.\n\n\\subsubsection{Polytropic Index}\nFrom the right-upper panel of Figure \\ref{fg_cme_state}, it is found\nthat $\\Gamma$ was less than 1.4 throughout the interplanetary space.\nIn the inner corona, say $L\\lsim2R_S$, it was about 1.24. After\nentering the outer corona, it quickly increased to above 1.35 at\n$L\\approx5 R_S$, and then slowly decreased to about 1.336,\nwhich is very close to the first critical point $\\frac{4}{3}$. 
This\nvalue of $\\Gamma$ is consistent with the observational value\nobtained from the statistics of \\citet{Liu_etal_2006} for protons. As the\nCME kept expanding during its propagation in the FOVs, a\npolytropic index less than $\\frac{5}{3}$ means that there must be\nsome mechanism injecting heat into the CME plasma.\nAlthough the CME plasma continuously gained thermal energy, the proton\ntemperature may still be much lower than that in the ambient solar\nwind, as revealed by many in-situ observations of MCs\n\\citep[e.g.,][]{Burlaga_etal_1981}.\n\n\\begin{figure*}[tbh]\n \\centering\n\\hskip -20pt \\includegraphics[width=0.45\\hsize]{f4a.eps}\n\\hskip 10pt \\includegraphics[width=0.41\\hsize]{f4b.eps}\\\\\n\\hskip 0pt \\includegraphics[width=0.415\\hsize]{f4c.eps} \\hskip\n10pt\n\\includegraphics[width=0.495\\hsize]{f4d.eps}\n \\caption{{\\it Left-upper panel:} The profile of $a_e$ (black), the modeled result (dashed green),\nand the relative error (see text for details). {\\it Right-upper panel:} The variation of the polytropic index, $\\Gamma$,\nof the CME plasma. {\\it Left-lower panel:} The variations of the average Lorentz force, $f_{em}$, and the average\nthermal pressure force, $f_{th}$. Their signs have been marked at the upper right corner. {\\it Right-lower panel:}\nThe sum and ratio of the two forces.}\\label{fg_cme_state}\n\\end{figure*}\n\nWe believe that the hot plasma in the lower solar atmosphere is\nprobably a major heat source of CMEs in the interplanetary space. As shown\nin Figure \\ref{fg_coor}, a CME is believed to be a looped structure\nwith two ends rooted on the solar surface on a global scale.\nBidirectional electron streams are one piece of evidence for this\n\\citep[e.g.][]{Farrugia_etal_1993b, Larson_etal_1997}. Thus it is\npossible that heat is conducted from the bottom into CMEs. 
The ambient\nsolar wind with higher temperature might be an additional source\nbecause the temperature difference between the two media is\nsignificant. However, the cross-field diffusion of particles is\nmuch more difficult than the motion parallel to magnetic field\nlines, especially in a nearly force-free flux rope; the coefficient\nratio $\\kappa_\\perp\/\\kappa_\\parallel$ of perpendicular to parallel\ndiffusion roughly lies in the range of $0.005 - 0.05$\n\\citep[e.g.,][]{Jokipii_etal_1995, Giacalone_Jokipii_1999,\nZank_etal_2004, Bieber_etal_2004}. Thus the contribution of the\nambient high-temperature solar wind should be very limited.\n\nIt is well known that the magnetic energy decreases as CMEs\npropagate away from the Sun. According to our model, the total\nmagnetic energy is given by\n\\begin{eqnarray}\nE_m=\\frac{1}{2\\mu_0}\\int B^2rdrd\\phi dz=\\pi\\mu_0^{-1}(k_9l^{-1}+k_{10}R^{-2}l)\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n&k_9=\\int_0^1(\\frac{\\partial f_z}{\\partial x})^2xdx \\\\\n&k_{10}=\\int_0^1[\\frac{\\partial}{x\\partial x}(xf_\\phi)]^2xdx \\label{eq_menergy}\n\\end{eqnarray}\nare both positive integral constants. The magnetic energy generally\ndissipates at the rate of $\\sim l^{-1}$, which is a significant\ndissipation as CMEs move outward. However, such magnetic energy\ndissipation does not necessarily serve as a major source of the\nheat. According to MHD theory, magnetic energy partially goes into\nkinetic energy and partially converts to thermal energy. The former\nis due to the work done by the Lorentz force ($\\vec j\\times\\vec\nB\\cdot\\vec u$), and the latter is through the Joule heating\n($\\frac{j^2}{\\sigma}$) process, where $j$ is the current density and\n$\\sigma$ is the electrical conductivity. Since $\\sigma$ usually has\na large value in the interplanetary medium, without anomalous\nresistivity, the magnetic energy does not have an efficient way to\nbe converted to thermal energy. 
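For reference, with $l=k_8L$ and, in the nearly force-free case, $R=\\alpha L$\n(ref. to Appendix), both terms of $E_m$ decay as the first power of distance,\n\\begin{eqnarray}\nE_m=\\pi\\mu_0^{-1}(k_9k_8^{-1}+k_{10}k_8\\alpha^{-2})L^{-1}\\propto l^{-1} \\nonumber\n\\end{eqnarray}\nwhich makes the $\\sim l^{-1}$ dissipation rate quoted above explicit.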
However, there are possibly many\nnon-ideal processes, such as turbulence, that are not accounted for by MHD\ntheory. Thus we do not know whether the dissipated magnetic energy\nis a major source of heating or not.\n\n\\subsubsection{Internal Forces}\nThe averaged Lorentz force, $f_{em}$, and thermal pressure force,\n$f_{th}$, have been presented in the left-lower panel of Figure\n\\ref{fg_cme_state}. Their absolute values are very close to each\nother, and both of them decreased continuously throughout the\ninterplanetary space. The signs of the two forces are opposite.\n$f_{em}$ is negative, indicating a centripetal force, whereas\n$f_{th}$ is positive, centrifugal. This result suggests that the\nthermal pressure force contributed to the CME expansion, but the\nLorentz force prevented the CME from expanding.\n\nThe difference between the two forces can be seen more clearly in\nthe right-lower panel of Figure \\ref{fg_cme_state}. The black line\nexhibits the net force, $f_{em}+f_{th}$, inside the CME. It directed\noutward and reached its maximum at about 10:30 UT. The profile is\nconsistent with the expansion acceleration presented in the\nleft-upper panel (the black line). Thus the net force just shows us\nthe internal cause of the CME expansion. The red line is the ratio\nof their absolute values. Its value changed in a very small range\nfrom about 1.0 to 0.98. It suggests that such a small difference\nbetween the two forces is able to drive the CME expansion with an\nacceleration on the order of 1 $m\/s^2$. Moreover, the decrease of the ratio\nmeans that the Lorentz force decreased slightly faster than the thermal\npressure force. One may notice that, since $\\Gamma$ was larger than\nthe first critical point $\\frac{4}{3}$ at $L\\gsim5R_S$, according to the analysis in\nSec.\\ref{sec_asymptotic}, the Lorentz force should drop slower than the\nthermal pressure force. 
Actually, this may not be an inconsistency: the inference derived in Sec.\\ref{sec_asymptotic} is established on\nthe force-free assumption, whereas the CME we studied may not be\nperfectly force-free, and therefore the first critical point of $\\Gamma$ probably shifts slightly.\n\nUsually, a CME is a flux rope with two ends rooted on the Sun. The\naxial curvature of the flux rope may cause the magnetic strength at\nthe Sun-side of the flux rope to be larger than that at the opposite side,\nwhich gives the Lorentz force an additional component that\ndrives the flux rope outward away from the Sun\n\\citep[e.g.,][]{Garren_Chen_1994, Lin_etal_1998, Lin_etal_2002,\nKliem_Torok_2006, Fan_Gibson_2007}. Thus, as the flux rope\napplied here is assumed to be a straight cylinder, the Lorentz force\n$f_{em}$ we derived does not include the component caused by the\naxial curvature of the flux rope. This component is important in\nstudying the propagation properties of a CME. However, our model is\nintended to study the CME internal state (specifically the thermodynamic process\nand expansion behavior), and its propagation behavior is obtained\ndirectly from coronagraph observations; thus the neglect of this\ncomponent should be acceptable although it does introduce some error,\nwhich has been briefly mentioned in the\nsecond paragraph of Sec.\\ref{sec_discussion}.\n\n\\section{Summary} \\label{sec_summary}\nIn this paper, we developed an analytical flux rope model for the\npurpose of probing the internal state of CMEs and understanding their\nexpansion behavior. The model suggests that, if the flux rope\nis force-free, there are two critical\nvalues for the polytropic index $\\Gamma$. One is $\\frac{4}{3}$, above\/below\nwhich the absolute value of the Lorentz force decreases slower\/faster than\nthat of the thermal pressure force as the flux-rope CME propagates away\nfrom the Sun. 
The other is $\\frac{2}{3}$, above which the flux-rope CME\nwill essentially approach a steady expansion and propagation state.\n\nBy applying this model to the 2007 October 8 CME event, we find that\n(1) the polytropic index of the CME plasma increased quickly from an initial value of 1.24\nto more than 1.35, and then slowly decreased to about 1.336; this\nsuggests that heat is continuously injected\/converted into\nthe CME plasma and that the value of $\\Gamma$ tends toward the first critical value $\\frac{4}{3}$;\n(2) the Lorentz force directed inward while the thermal pressure force directed outward, both\nof them decreased rapidly as the CME moved out, and the small difference between\nthem is consistent with the expansion acceleration of the CME; the directions of\nthe two forces reveal that the thermal pressure force is the internal driver of the CME expansion,\nwhereas the Lorentz force prevented the CME from expanding.\n\n\\section{Discussion}\\label{sec_discussion}\nIn our model, the interaction between CMEs and the solar wind has\nbeen implicitly included to a certain extent, though we do not\nexplicitly address these effects. The consequences of the\ninteraction, in terms of the effects on the CME dynamic evolution,\ncan be roughly classified into the following three types: (1) the\nsolar wind dragging effect, which is due to the momentum exchange\nbetween the CME plasma and the ambient solar wind and mainly affects\nthe CME's propagation speed or the bulk motion speed, (2) the solar\nwind constraint effect on expansion, which is caused by the presence\nof the external magnetic and thermal pressures and mainly prevents a\nfree expansion of the CME (i.e., in all directions), and (3) the\nsolar wind stretching effect on expansion, which is caused by the\ndivergent radial expansion of the solar wind flow, and causes flattening\nor ``pancaking\" of CMEs. 
The first two effects are indirectly\nincluded in the model through the measurements of $L$ and $R$.\nDifferent dragging and\/or constraint force(s) may result in\ndifferent variations of $L$ and\/or $R$ with time (or heliocentric\ndistance). In particular, we do not need to explicitly put the solar\nwind dragging term in the model, because we are addressing the\ninternal state of CMEs, not the bulk acceleration. The stretching\neffect, which is a kinematic effect, is not included in our\nmodel. As discussed in the sixth paragraph of\nSec.\\ref{sec_measurements}, this is largely due to the limitation of\nthe measurements. The possible errors caused by this effect are\nexplicitly addressed in the next paragraph.\n\nThe main uncertainty of this model, we believe, comes from the\nassumption of an axisymmetric cylinder, in which the curvature of\nthe axis of the flux rope and the distortion of the circular\ncross-section are not taken into account. As to the first one, the\nneglect of the axial curvature generally results in the Lorentz\nforce being underestimated. As to the second one, as discussed earlier,\nthe distortion of the CME cross-section is due to the kinematic\nstretching effect of a spherically divergent solar wind flow\n\\citep[e.g.,][]{Crooker_Intriligator_1996, Russell_Mulligan_2002,\nRiley_etal_2003, Riley_Crooker_2004, Liu_etal_2006}. In the case of\nthe particular CME studied in this paper, the aspect ratio is about\n1.4 when the CME leading edge is at $\\sim70 R_S$ (or the flux rope\naxis is at $\\sim56 R_S$). The overall shape of the CME looks like an\nellipse. To estimate the errors caused by the circular assumption,\nwe approximate the ellipse by a circle of the same area. With\nthis treatment, we estimate that $R$ is overestimated by 19\\%, and\n$L$ is underestimated by 11\\%. Therefore, the expansion speed is\noverestimated by 19\\%, and the propagation speed is underestimated\nby 11\\%. 
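These two biases propagate into the derived quantities through the model scalings $\rho_0\propto R^{-2}l^{-1}$, $f_{th}\propto\rho_0^\Gamma R^{-1}$ and $f_{em}\sim c_2R^{-5}+c_3R^{-3}l^{-2}$; the Python sketch below reproduces the propagation, taking $\Gamma=\frac{4}{3}$ for illustration.

```python
# Measured/true ratios from the equal-area-circle estimate:
# R overestimated by 19%, L (and hence l) underestimated by 11%.
fR, fL = 1.19, 0.89
gamma = 4.0 / 3.0

rho_bias = fR**-2 * fL**-1            # rho0 ~ R^-2 l^-1, l ~ L
fth_bias = rho_bias**gamma * fR**-1   # f_th ~ rho0^Gamma R^-1
fem_bias_axial = fR**-5               # axial-field term of f_em
fem_bias_poloidal = fR**-3 * fL**-2   # poloidal-field term of f_em
```

These ratios come out at roughly 0.79 for the density, 0.62 for $f_{th}$, and 0.42--0.75 for the two terms of $f_{em}$, i.e., underestimates of about 20\%, 40\%, and 25--60\%.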
Further, we find that the density is underestimated by\n21\\% (see Eq.\\ref{eq_rho_3}), $f_{th}$ is underestimated by 39\\%\n(see Eq.\\ref{eq_pth}, assuming $\\Gamma=\\frac{4}{3}$), $f_{em}$\nis underestimated by 25--58\\% (see Eq.\\ref{eq_pem3}), and the\nerror in the polytropic index is probably negligible (see\nEq.\\ref{eq_fth}). These errors are evaluated for the CME at $\\sim70\nR_S$. At a smaller distance, we expect the errors to be smaller,\nsince the distortion is less severe.\n\nThe self-similar assumption made in our model may be another error\nsource; we assume that the distributions of the quantities\nalong $\\hat{\\vec r}$ in the flux rope remain unchanged as the\nCME propagates away from the Sun. Self-similar evolution is a frequently\nused assumption in modeling \\citep[e.g.,][]{Low_1982, Kumar_Rust_1996,\nGibson_Low_1998, Krall_StCyr_2006}. The recent research by\n\\citet{Demoulin_Dasso_2009} suggested that a force-free flux rope will\nevolve self-similarly when $l$, the length of the flux rope, is\nproportional to $p_t^{-1\/4}$, where $p_t$ is the total pressure in the ambient\nsolar wind. The total pressure of the solar wind consists of the\nthermal pressure $p_{th}=nkT$ and the magnetic pressure\n$p_{m}=\\frac{B^2}{2\\mu}$. Near the Sun, we can assume that the magnetic\npressure is dominant, so that approximately $p_t\\approx p_m\\propto L^{-4}$,\ni.e., $p_t^{-1\/4}\\propto L$. Since the length of a flux rope is\nusually proportional to the distance $L$, we have\n$l\\propto p_t^{-1\/4}$. This means that the self-similar assumption should\nbe a good approximation when the CME is nearly force-free and not too\nfar away from the Sun. Other previous studies have also shown that the\nself-similar evolution of CMEs probably holds within tens of solar radii\n\\citep[e.g.,][]{Chen_etal_1997, Krall_etal_2001, Maricic_etal_2004}.\nOn the other hand, however, the self-similar assumption must gradually\nbreak down. 
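The scaling argument above can be checked with a few lines of code. The sketch below (Python; the reference field strength is an arbitrary illustrative value, not a measurement from this event) confirms that if $B\propto L^{-2}$ then $p_m^{-1\/4}$ grows linearly with $L$:

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability [H/m]

def magnetic_pressure(L, B1=1.0e-4):
    """Magnetic pressure p_m = B^2/(2 mu0) at normalized distance L,
    assuming B falls off as L^-2; B1 is an arbitrary field at L = 1 [T]."""
    B = B1 * L ** -2
    return B ** 2 / (2.0 * MU0)

# p_m ~ L^-4, so p_m^(-1/4) ~ L: the ratio below is the same at every L.
ratios = [magnetic_pressure(L) ** -0.25 / L for L in (2.0, 5.0, 10.0, 30.0)]
print(max(ratios) - min(ratios))  # ~0 up to floating-point rounding
```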
One obvious piece of evidence is the solar wind stretching effect\naddressed above. Another is that a CME may relax\nfrom a complex structure to a nearly force-free flux rope structure, as in,\nfor example, the simulation by \\citet{Lynch_etal_2004}.\n\n\\begin{comment}\nIn addition, we have treated CMEs as a one-fluid plasma, i.e., we do not distinguish between\nelectrons and protons. The model results mainly address the behavior\nof protons, since protons are much heavier than electrons and thus account for\nmost of the mass of CMEs. Further, protons obey the\nMaxwellian distribution, while electrons seem not to\n\\citep[e.g.,][]{Scudder_1992, Liu_etal_2005}. The non-Maxwellian\ndistribution means that electrons may not be treated as an ideal gas,\nand therefore the equations used in the\nmodel may not be appropriate as far as electrons are concerned.\n\\end{comment}\n\n\nFor the CME plasma, neglecting the viscous stress tensor\nin Eq.\\ref{eq_mom_1} might be appropriate. The viscous stress tensor\nof protons can be approximately given by\n\\begin{eqnarray}\nS_{ij}\\approx3\\eta_0\\left(\\frac{\\delta_{ij}}{3}-\\frac{B_iB_j}{B^2}\\right)\\left(\\frac{\\vec{B}\\cdot\\vec{B}\\cdot\\nabla\\vec{v}}{B^2}-\\frac{\\nabla\\cdot\\vec{v}}{3}\\right)\n\\end{eqnarray}\nwhere $\\eta_0$ is the coefficient of viscosity, which can be estimated\nas $\\eta_0\\approx10^{-17}T_p^{\\frac{5}{2}}$ kg$\\cdot$m$^{-1}\\cdot$s$^{-1}$\n\\citep{Braginskii_1965, Hollweg_1985}.\nHere $\\delta_{ij}$ is the unit tensor, $\\vec{v}$ is the flow velocity, and\n$T_p$ is the proton temperature.\nSince the proton temperature in CMEs is low, $\\eta_0$, and therefore\nthe viscous stress tensor, is very small. 
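Plugging numbers into this estimate shows how small $\eta_0$ is. The proton temperatures below are assumed round values of the order found in CMEs and the corona, not measurements from this event:

```python
def eta0(T_p):
    """Proton viscosity coefficient [kg m^-1 s^-1],
    eta_0 ~ 1e-17 * T_p^(5/2) (Braginskii 1965)."""
    return 1.0e-17 * T_p ** 2.5

# For cool CME protons (assumed T_p ~ 1e5 K) the coefficient is tiny,
# and even at coronal temperatures (~1e6 K) it remains small:
print(eta0(1.0e5))  # ~3.2e-5 kg m^-1 s^-1
print(eta0(1.0e6))  # ~1e-2 kg m^-1 s^-1
```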
Thus, we expect that the\nviscosity in the momentum equation can be safely ignored.\n\nBoth forces ignored in Eq.\\ref{eq_mom_1}, the gravity $\\vec{F}_g$ and the equivalent fictitious force $\\vec{F}_a$\ndue to the use of a non-inertial reference frame, are in the radial direction in the solar frame.\nTheir effects can be evaluated by comparing them with the acceleration of\nthe expansion of the fluids in flux rope CMEs. The solar\ngravitational acceleration is about 270 m\/s$^2$ at the surface, and decreases\nas $r^{-2}$, which makes it as low as $\\sim$2.7 m\/s$^2$\nat 10 $R_S$. $\\vec{F}_a$ should also be very small for most CMEs beyond\n10 $R_S$. Thus both forces would significantly distort the model results\nonly for CMEs with slow expansion acceleration in the lower corona, but not for\nthose with large expansion acceleration or in the outer corona.\nThis may be the reason why a large error in $a_e$ appears during 12:00 -- 18:00 UT\nin modeling this CME (upper-left panel of Fig.\\ref{fg_cme_state}).\n\nThe flux-rope model presented in this paper might be the first of its kind\nto provide a way to infer the internal state of CMEs directly based on coronagraph\nobservations. It is different from other CME dynamic models, such as those\nby \\citet{Chen_1989} and \\citet{Gibson_Low_1998}, which were designed to study\nthe interaction of CMEs with the ambient solar wind and other dynamic processes\nby adjusting the initial conditions of CMEs and the global parameters of the\nambient solar wind. Besides, \\citet{Kumar_Rust_1996} proposed a current-core\nflux rope model with self-similar evolution (referred
to hereafter as the KR model).\nAlthough a self-similar flux rope\nis also employed in their model, our model differs substantially from theirs.\nFirst, the flux rope in the KR model is assumed to be force-free and the Lundquist\nsolution \\citep{Lundquist_1950} is applied to describe the internal\nmagnetic structure, whereas our model does not specify the magnetic field\ndistribution, which may be non-force-free. Secondly, the self-similar assumption\nin the KR model requires the radius of the flux rope to be proportional to the\ndistance, whereas our self-similar condition holds only in the cross-section\nof the flux rope; the $R$ and $L$ in our model are two independent measurements\n(see Fig.\\ref{fg_cme_motion}). Thirdly, the KR model did not consider the solar wind\neffects on the flux rope, while two of the three solar wind effects are implicitly\nincluded in our model. Thus our model can be regarded as a more generic one.\nUndoubtedly, the KR model is an excellent model for force-free\nflux ropes, and has yielded many interesting results. For example, it suggests that the\npolytropic index is $\\frac{4}{3}$ for a CME far from the Sun. This is an inference\nfrom their self-similar assumption, and it seems to hold for the 2007 October\nCME we studied here. In our model, the $\\Gamma$ value of $\\frac{4}{3}$ implies a\nspecial case (Sec.\\ref{sec_asymptotic}) in which the two internal forces $f_{em}$\nand $f_{th}$ vary at the same rate. Further work will be performed to test whether\nthis holds for all CME events.\n\n\n\n\\paragraph{Acknowledgments.}\nWe acknowledge the use of the data from\nSTEREO\/SECCHI. We are grateful to James Chen and Yong C.-M. Liu for\ndiscussions. We also thank the referees for valuable comments.\nY. Wang and J. Zhang are supported by grants from\nNASA NNG05GG19G, NNG07AO72G, and NSF ATM-0748003. Y. Wang and C. 
Shen\nalso acknowledge the support of China grants from NSF 40525014, 973\nkey project 2006CB806304, and Ministry of Education 200530.\n\n\\section*{Appendix}\nIn a cylindrical coordinate system, the magnetic field of a force-free\nflux rope has the \\citet{Lundquist_1950} solution\n\\begin{eqnarray}\nB_r&=&0 \\nonumber\\\\\nB_\\phi&=&HB_0J_1(2.41x) \\\\\nB_z&=&B_0J_0(2.41x) \\nonumber\n\\end{eqnarray}\nwhere $x=\\frac{r}{R}$ is the normalized radial distance as defined in Sec.\\ref{sec_model},\n$J_0$ and $J_1$ are the zeroth- and first-order Bessel functions, $H=\\pm1$\nindicates the sign of the handedness, and $B_0$ is the magnetic field\nmagnitude at the axis of the flux rope. According to the properties\nof Bessel functions, we have the magnetic vector potential\n\\begin{eqnarray}\nA_\\phi&=&\\frac{RB_0}{2.41}J_1(2.41x) \\label{eq_ffaphi}\\\\\nA_z&=&\\frac{HRB_0}{2.41}J_0(2.41x) \\label{eq_ffaz}\n\\end{eqnarray}\nThe conservation of $\\Phi_z$\n\\begin{eqnarray}\n\\Phi_z=2\\pi\\int_0^R\\frac{\\partial}{\\partial r}(rA_\\phi)dr=\\frac{2\\pi R^2B_0}{2.41}J_1(2.41)=\\mathrm{constant}\n\\end{eqnarray}\nrequires that\n\\begin{eqnarray}\nB_0=2.41a_1R^{-2} \\label{eq_ffb0}\n\\end{eqnarray}\nwhere $a_1$ is a constant. 
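The closed form for $\Phi_z$ above can be verified by direct numerical integration of $B_z$ over the cross-section. The sketch below uses only the Python standard library, evaluating the Bessel functions from their integral representation; $B_0$ and $R$ are arbitrary test values, not parameters of the observed event:

```python
import math

def bessel_j(n, x, steps=500):
    """J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt (midpoint rule)."""
    h = math.pi / steps
    return h / math.pi * sum(
        math.cos(n * (i + 0.5) * h - x * math.sin((i + 0.5) * h))
        for i in range(steps))

def flux_z(B0, R, steps=500):
    """Phi_z = 2*pi * integral_0^R B0 * J0(2.41 r / R) * r dr (midpoint rule)."""
    h = R / steps
    return 2.0 * math.pi * h * sum(
        B0 * bessel_j(0, 2.41 * (i + 0.5) * h / R) * (i + 0.5) * h
        for i in range(steps))

B0, R = 1.0, 1.0  # arbitrary test field amplitude and radius
closed_form = 2.0 * math.pi * R ** 2 * B0 * bessel_j(1, 2.41) / 2.41
print(flux_z(B0, R), closed_form)  # the two values agree closely
```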
The magnetic vector potential can then be\nrewritten as\n\\begin{eqnarray}\nA_\\phi&=&\\frac{a_1}{R}J_1(2.41x) \\label{eq_ffA_phi}\\\\\nA_z&=&\\frac{a_1H}{R}J_0(2.41x) \\label{eq_ffA_z}\n\\end{eqnarray}\nMeanwhile, the magnetic helicity is\n\\begin{eqnarray}\nH_m=\\int\\vec{B}\\cdot\\vec{A}\\,r\\,dr\\,d\\phi\\, dz=4.82\\pi a_1^2a_2HR^{-1}l\n\\end{eqnarray}\nwhere $a_2=\\int_0^1x(J_0^2+J_1^2)dx$ is a constant.\nThe conservation of $H_m$ results in\n\\begin{eqnarray}\nR\\propto l\n\\end{eqnarray}\nCombining this with the assumption of Eq.\\ref{eq_ll}, it is inferred that\n\\begin{eqnarray}\nR\\propto L\n\\end{eqnarray}\nwhich means that the force-free flux rope expands radially.\n\n\\bibliographystyle{agufull08}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\nSymbiotic stars are interacting binaries. They consist of an evolved red giant \nor Mira-type variable, and a hot component --- a white dwarf, subdwarf, neutron \nstar or main-sequence star. The hot component accretes material from the stellar \nwind of the donor (Sokoloski 2003). The wind is ionized by the hot component, \ngiving rise to a nebula. Orbital periods of symbiotic stars range from \nyears to decades.\n\nCH~Cyg is an eclipsing symbiotic star composed of an M6-7~III star and an \naccreting white dwarf, so the system belongs to the S-type symbiotics. The \nbinary separation is $8.7^{+1.1}_{-0.7}$~AU (Miko{\\l}ajewska et al., 2010). The \nmasses of the components are $M_{\\rm rg}=2^{+1}_{-0.5}\\, \\rm M_{\\sun}$ and \n$M_{\\rm wd}=0.70^{+0.22}_{-0.09}\\, \\rm M_{\\sun}$ (Miko{\\l}ajewska et al., 2010). \nBased on {\\it Hipparcos} satellite measurements, the distance to CH~Cyg is \nestimated to be $d=244^{+49}_{-35}\\, \\rm pc$ (van Leeuwen 2007). CH~Cyg ejects \ncollimated bipolar outflows with velocity of $\\sim$700~\\kms, detectable in the \nradio (Taylor, Seaquist \\& Mattei 1986; Crocker et al., 2001). The system is \nalso detectable in X-rays (Galloway \\& Sokoloski 2004). 
The orbital period of\nCH~Cyg is $\\sim$15.6~yr (Hinkle, Fekel \\& Joyce 2009). The light curve of CH~Cyg \nis very complex. The detected variability ranges from time-scales of dozens of years, \ncaused by the orbital motion and dust obscuration events (Bogdanov \\& Taranova 2001), \nthrough periodicities of several hundred days, caused by pulsations of \nthe giant (Mikolajewski, Mikolajewska \\& Khudyakova 1992), to flickering \nactivity on time-scales of a few minutes (Dobrzycka, Kenyon \\& Milone 1996).\n\nFlickering consists of broad-band stochastic light variations on time-scales of a few \nminutes with amplitudes from a few$\\times$0.01~mag to more than one magnitude. \nFlickering activity is detected in only 10 symbiotic stars --- RS~Oph, T~CrB, \nMWC~560, V2116~Oph, CH~Cyg, RT~Cru, $o$~Cet, V407~Cyg, V648~Car and EF~Aql \n(Dobrzycka, Kenyon \\& Milone 1996; Sokoloski, Bildsten \\& Ho 2001; Gromadzki et \nal. 2006; Angeloni et al. 2012; \nZamanov et al., 2017).\n\nThe flickering of CH~Cyg was first detected by Wallerstein (1968) and Cester \n(1968), and it was later studied in detail (Skopal 1988; Mikolajewski et al. \n1990; Mikolajewski et al. 1992; Panov \\& Ivanova 1992; Kuczawska, Mikolajewski \n\\& Kirejczyk 1992; Hric et al. 1993; Dobrzycka, Kenyon \\& Milone 1996; Sokoloski \n\\& Kenyon 2003). CH~Cyg is an eclipsing system (Mikolajewski, Mikolajewska \\& \nTomov 1987), and the flickering activity disappears during the eclipses \n(Sokoloski \\& Kenyon 2003). In 2010, the flickering of CH~Cyg became \nnon-detectable (Sokoloski et al., 2010), until the activity renewed in 2014 \n(Stoyanov et al., 2014). 
CH~Cyg probably enters a new active stage in 2017 \n(Iijima 2017).\n\nHere we present photometric observations of the flickering of CH Cyg and \ncalculations of the flickering source parameters.\n\n\\section{Observations}\n\nThe observations are performed with the following three telescopes equipped with \nCCD cameras:\n\\begin{itemize}\n \\item the 60 cm Cassegrain telescope of Rozhen NAO\n \n \\item the 50\/70 cm Schmidt telescope of Rozhen NAO \n \n \\item the automated 41 cm telescope of the University of Ja\\'en, Spain \n(Mart{\\'{\\i}}, Luque-Escamilla, \\& Garc\\'{\\i}a-Hern\\'andez 2017)\n\\end{itemize}\n\nIn Fig.~\\ref{fig.obs} are plotted the light curves from a few nights. The \nobservations consisted of repeated exposures in U, B and V bands, or in B and V \nbands. On 20110609 the total duration of the run is 86 minutes; 20141001 --- 90 \nminutes; 20170724 --- 66 minutes; 20170809 --- 147 minutes; 20170811 --- 284 \nminutes.\n\nThe data reduction was done using IRAF (Tody 1993) following standard procedures \nfor aperture photometry. A few comparison stars from the list of Henden \\& \nMunari (2006) have been used, bearing in mind that SAO~31628 is an eclipsing \nbinary (Sokoloski \\& Stone 2000).\n\nThe journal of observations is given in Table~\\ref{table1}. In the table are \ngiven the telescope, band, number of exposures, exposure time, average \nmagnitude, minimum and maximum magnitude during the run, and typical \nobservational error.\n\n\\section{Flickering source parameters}\n\nIn our observations obtained during the period 2010--2013 the flickering of \nCH~Cyg was not detectable (see Table~\\ref{flick} and Fig.~\\ref{fig.obs}). It \nre-appeared in August 2014. After that CH~Cyg exhibited variability on a time \nscale of 1--30 minutes with amplitude $0.2-0.3$~mag in $V$. 
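For orientation, an amplitude in magnitudes maps onto a relative flux variation through the usual relation $\Delta m = -2.5\log_{10}(F_2/F_1)$; the short sketch below is generic photometric arithmetic, not code used in this paper:

```python
def flux_ratio(delta_mag):
    """Peak-to-trough flux ratio corresponding to an amplitude in magnitudes."""
    return 10.0 ** (0.4 * delta_mag)

# Amplitudes of 0.2--0.3 mag in V correspond to peak-to-trough
# flux variations of roughly 20--32 per cent:
for dm in (0.2, 0.25, 0.3):
    print(dm, (flux_ratio(dm) - 1.0) * 100.0)
```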
The amplitude \nincreases in the B and U bands.\n\nBruch (1992) proposed that the light curve of CVs can be separated into two \nparts --- a constant light source, and a variable (flickering) source. We assume that all \nthe variability in each night is due to flickering; under these assumptions the \nflickering light source is considered 100\\% modulated. We therefore \ncalculate the flux of the flickering light source as $F_{\\rm \nfl}=F_{\\rm av}-F_{\\rm min}$, where $F_{\\rm av}$ is the average flux during the \nrun and $F_{\\rm min}$ is the minimum flux during the run (corrected for the \ntypical error of the observations). $F_{\\rm fl}$ has been calculated for each \nband, using the values given in Table~1 and the Bessell (1979) calibration for \nthe fluxes of a zero magnitude star.\n\nA modification of this method is given by Nelson et al. (2011), who proposed \nusing $F_{\\rm fl}=F_{\\rm max}-F_{\\rm min}$, where $F_{\\rm max}$ is the \nmaximum flux during the run. Adopting this, we find that the flickering light \nsource contributes about 4\\% in $V$, 6\\% in $B$, and 8\\% in $U$ (October 2014).\n\nThe calculated colours of the flickering light source are given in \nTable~\\ref{table2}, where $T_{(B-V)_1}$ is calculated using $F_{av}$ and \n$T_{(B-V)_2}$ is calculated using $F_{max}$.\n\nAssuming that the flickering source radiates as a black body, for the colours of \nthe black body we use the calibration given in Strai\\v zys (1977). The use of \nother formulae (e.g. 
Ballesteros 2012) could introduce a difference of about \n$\\pm 500$~K.\n\nAn independent estimate of the black body temperature from the B-V colour, using \nan analytic approximation (Ballesteros 2012), provides similar results, with a \ndifference not exceeding 1000 K.\n\nThe radius is calculated from the B band flux and the temperature derived from \nthe B-V colour, assuming an effective wavelength of the B band of $\\lambda = 4400$~\\AA, \nblack body radiation, a spherical flickering source, and a \ndistance $d=244$~pc.\n\nComparing the temperatures calculated using the methods of Bruch (1992) \nand Nelson et al. (2011), we find that they are in agreement.\n\nAs expected, the method of Nelson et al. (2011) gives higher values for the size \nof the flickering source, because it uses a greater flux.\n\nIn Table~\\ref{table2} are summarized the calculated flickering source \nparameters: \n (1) date of observations; \n (2) U-B colour of the flickering source calculated following Bruch (1992); \n (3) temperature corresponding to the U-B colour;\n (4) B-V colour of the flickering source calculated following Bruch (1992); \n (5) temperature corresponding to B-V colour;\n (6) radius of the flickering source calculated following Bruch (1992);\n (7) U-B colour of the flickering source calculated following Nelson et al. \n(2011); \n (8) temperature corresponding to the U-B colour;\n (9) B-V colour of the flickering source calculated following Nelson et al. \n(2011); \n (10) temperature corresponding to B-V colour;\n (11) radius of the flickering source calculated following Nelson et al. 
\n(2011).\n\n\\begin{figure*} \n \\vspace{16.0cm} \n \\special{psfile=20110609.eps hoffset=-10 voffset=200 hscale=38 \nvscale=38 angle=0} \n \\special{psfile=20141001.eps hoffset=90 voffset=200 hscale=38 \nvscale=38 angle=0} \n \\special{psfile=20170724.J.eps hoffset=210 voffset=200 hscale=38 \nvscale=38 angle=0} \n \\special{psfile=20170809.J.eps hoffset=-20 voffset=-40 hscale=38 \nvscale=38 angle=0} \n \\special{psfile=20170811.eps hoffset=140 voffset=-40 hscale=38 \nvscale=38 angle=0} \n \\caption[]{Flickering of CH~Cyg in U, B and V bands.}\n \\label{fig.obs} \n\\end{figure*}\t \n\n\\begin{table}\n \\begin{center}\n \\caption{Journal of observations.}\n\\begin{tabular}{l cccc ccccc cc}\ndate-obs & telescope & band & exposures & average & min & max & merr & \\\\\n\\\\\n20110609 & 60cm Roz & B & 191 $\\times$ 10s & 10.317 & 10.301 & 10.336 & \n0.005 & \\\\\n20110609 & 60cm Roz & V & 190 $\\times$ 5s & 8.796 & 8.784 & 8.812 & \n0.002 & \\\\\n\\\\\n20141001 & 60cm Roz & U & 69 $\\times$ 15s & 7.578 & 7.344 & 7.714 & \n0.009 & \\\\\n20141001 & 60cm Roz & B & 71 $\\times$ 10s & 7.916 & 7.735 & 8.017 & \n0.003 & \\\\\n20141001 & 60cm Roz & V & 71 $\\times$ 4s & 7.197 & 7.074 & 7.294 & \n0.004 & \\\\\n\\\\\n20170724 & 41cm Jaen & B & 200 $\\times$ 3s & 8.264 & 8.040 & 8.497 & \n0.009 & \\\\\n20170724 & 41cm Jaen & V & 196 $\\times$ 2s & 7.533 & 7.399 & 7.686 & \n0.005 & \\\\\n\\\\\n20170809 & 41cm Jaen & B & 417 $\\times$ 6s & 8.410 & 8.248 & 8.600 & \n0.003 & \\\\ \n20170809 & 41cm Jaen & V & 389 $\\times$ 2s & 7.666 & 7.547 & 7.827 & \n0.004 & \\\\ \n\\\\\n20170811 &50\/70cm Roz& U & 318 $\\times$ 20s & 7.748 & 7.476 & 8.041 & \n0.007 & \\\\\n20170811 &50\/70cm Roz& B & 318 $\\times$ 4s & 8.298 & 8.068 & 8.545 & \n0.005 & \\\\ \n20170811 &50\/70cm Roz& V & 318 $\\times$ 2s & 7.649 & 7.453 & 7.826 & \n0.005 & \\\\\n\\\\\n\\end{tabular}\n\\label{table1}\n\\end{center}\n\\end{table}\n\n\\begin{table}\n\\caption{Flickering source 
parameters.}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{c | cc | ccc | cc | ccccc }\ndate-obs & $U-B$ & $T_{(U-B)_1}$ & $B-V$ & $T_{(B-V)_1}$ & $R\/R_\\odot$ & $U-B$ &\n$T_{(U-B)_2}$ & $B-V$ & $T_{(B-V)_2}$ & $R\/R_\\odot$ & \\\\\n& & & & & & & & & & & \\\\\n1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & \\\\\n& & & & & & & & & & & \\\\\n20141001 & -0.6439 & 9292 & 0.6782 & 5554 & 1.94 & -0.6441 & 9284 & 0.4212 & \n6992 &\n2.07 & \\\\\n20170724 & --- & --- & 0.3126 & 7895 & 1.20 & --- & --- & 0.2161 & 9049 & 1.42 & \n\\\\\n20170809 & --- & --- & 0.5430 & 6231 & 1.62 & --- & --- & 0.5322 & 6299 & 2.24 & \n\\\\\n20170811 & -0.7134 & 9834 & 0.3211 & 7824 & 1.23 & -0.7353 & 10723 & 0.3966 & \n7195 &\n2.10 & \\\\\n\\end{tabular}\n}\n\\label{table2}\n\\end{table}\n\n\\begin{table}\n\\begin{center}\n\\caption{List of observations of CH~Cyg. The last column indicates whether the \nflickering\nis present or not.}\n\\begin{tabular}{c | c c | c | c | c | c | c | c c c c}\ndate-obs & bands & flickering & date-obs & bands & flickering \\\\\n& & & & & \\\\\n20100430 & BV & no & 20110918 & B & no \\\\ \n20100501 & UB & no & 20111006 & B & no \\\\\n20100502 & B & no & 20111118 & B & no \\\\\n20100506 & B & no & 20120618 & B & no \\\\\n20100507 & UB & no & 20120820 & B & no \\\\\n20100509 & BV & no & 20130514 & BV & no \\\\\n20100816 & UB & no & 20130703 & B & no \\\\\n20100817 & BV & no & 20130803 & B & no \\\\\n20100818 & UB & no & 20140814 & B & yes \\\\\n20100909 & BV & no & 20141001 & UB & yes \\\\\n20101029 & UB & no & 20170724 & BV & yes \\\\\n20101030 & BV & no & 20170809 & BV & yes \\\\\n20110529 & B & no & 20170811 & UBV & yes \\\\\n20110609 & BV & no & & &\\\\\n\\end{tabular}\n\\label{flick}\n\\end{center}\n\\end{table}\n\n\\section{Discussion}\n\n\\subsection*{Parameters of the flickering source}\n\nThe temperature of the flickering source is similar to the temperature of the \nbright spot in cataclysmic variable stars (e.g. 
Marsh 1988;\nWood et al., 1989; Zhang \\& Robinson 1987), and it is significantly lower than \nthe temperature of the boundary layer. This is a hint that the flickering \nprobably originates from the bright spot, but its nature is still unknown. The \ntemperatures and radii of the flickering source in other \nsymbiotic stars are comparable with those that we estimate for CH~Cyg. For \nRS~Oph, Zamanov et al. (2010) give $T_{fl}=9500 \\pm 500$~K and $R_{fl} = 3.5 \\pm \n0.5 R_\\odot$. For MWC~560, Zamanov et al. (2011) give $T_{fl}=13550 \\pm 500$~K \nand $R_{fl} = 1.68 \\pm 0.16 R_\\odot$.\n\n\\subsection*{Why was the flickering missing for 4 years?}\n\nFlickering is expected to arise in the vicinity of the accretion disc around the \nwhite dwarf companion in CH~Cyg. \nIts absence during a nearly four-year time interval is most naturally \ninterpreted as a major disruption of the inner disc structure. \nThis is probably due to a reduced supply of mass from the M-type giant across \nthe L$_1$ Lagrangian point of the system. \nThis situation, which lasted over the period 2010--2013 according to Table~\\ref{flick}, \nnow makes difficult alternative interpretations \nfor the lack of flickering based on an eclipse configuration (Stoyanov et al. \n2014). The required eclipse duration, at \nleast four years, appears to be too long unless a highly eccentric system is \ninvoked. Interestingly, in the case of CH~Cyg there \nis observational evidence, based on infrared observations, for the episodic creation \nand dissipation of a dust envelope around it \n(Taranova \\& Shenavrin 2004). Such behavior has been observed in the past. \nThe creation and dissipation of a dust envelope is connected with a change in the mass-loss rate of the giant, which \nunderwent a significant reduction during the \n2010--2013 time interval. The time scale reported by \nTaranova \\& Shenavrin (2004) for previous dust envelope creation and dissipation \n events is $\\sim$ 10 yr. 
The cessation of the flickering is consistent with \nthe giant undergoing a reduced mass-loss episode with a shorter, few year \nduration. Unfortunately, no contemporaneous infrared \nobservations are available to us to confirm this hypothesis.\n\n\n\n\n\n\n\n\n\\section*{Conclusions}\n\nWe performed quasi-simultaneous multicolour observations of the flickering of \nthe symbiotic star CH Cyg in Johnson U, B and V bands. We calculated the \nflickering source parameters --- the temperature is in the range $5000 < T < \n11~000$~K and the radius is in the range 1.42 $ 10$~Gyr) populations in these\nfields must be predominantly metal-poor ([Fe\/H]$\\leq -1$), and that\nthe metal-rich populations ([Fe\/H]$> -1$) must be of intermediate age\n($\\sim 6$--8~Gyr). An old metal-rich population would have a MSTO\nmuch redder and fainter than observed, while an intermediate-age\nmetal-poor population would have a MSTO much bluer and brighter than\nobserved. That said, there is a minority population of young stars\nspanning a wide range in metallicity, with the brightest and bluest\nstars in the plume matched by the 3~Gyr isochrone at [Fe\/H]~=~$-2$.\n\n\\begin{figure*}[t]\n\\epsscale{1.1}\n\\plotone{f4.eps}\n\\epsscale{1.0}\n\\caption{{\\it Top panels:} The CMDs of the spheroid ($a$), stream\n($b$) and disk ($c$), compared to the ridge lines of the Galactic globular\nclusters in Table~\\ref{tabclus} ({\\it colored curves}). \nThe Andromeda data are shown as Hess diagrams with the same binning\nused in Figure~\\ref{cmdsvels}, but over a narrower range of \ncolor and luminosity. The ridge\nlines shift redward with increasing cluster metallicity. {\\it Bottom\npanels:} The CMDs of the spheroid ($d$), stream ($e$) and disk ($f$),\ncompared to isochrones at [Fe\/H]~=~$-2$ ({\\it yellow curves}), $-1$\n({\\it pink curves}), and 0 ({\\it light blue curves}), and ages of 3,\n8, and 13~Gyr (running from left to right for each color). 
It is\nclear that the old ($> 10$~Gyr) populations in these fields must be\npredominantly metal-poor ([Fe\/H]$\\leq -1$), and that most of the\nmetal-rich populations ([Fe\/H]$> -1$) must be of intermediate age\n($\\sim 6$--8~Gyr).}\n\\label{comparegrid}\n\\end{figure*}\n\nThe implications of the SGB distribution warrant additional\ndiscussion. The isochrones in Figure~\\ref{comparegrid} show that the\nluminosity of the SGB decreases with either increasing age or\nincreasing metallicity. Thus, different age-metallicity relations for\nthe stars in our CMDs would be expected to produce different\nluminosity distributions across the SGB. To evaluate the implications\nof this constraint, we show in Figure~\\ref{sgbwidth} hypothetical\npopulations of stars in the vicinity of the SGB as they would appear\nif observed under the same conditions as in our spheroid field. The\nupper panels present the age-metallicity relations of the isochrones\nemployed to construct each model population, with the stars divided\nequally among the isochrones. The lower panels show the corresponding\nCMDs resulting from these hypothetical populations. Even with a very\nwide range in age, a single metallicity does not reproduce the width\nof the RGB (panels $a$ and $e$); this is because the RGB is far more\nsensitive to metallicity than to age. Moreover, the SGB luminosity\ndistribution is much wider than observed. If one has old metal-rich\nstars and young metal-poor stars (panels $b$ and $f$), the RGB becomes\nmuch wider, but the SGB luminosity distribution is still much wider\nthan observed. If all of the stars are at a single age (panels $c$\nand $g$), the SGB narrows, but it is still wider than the SGB observed\nin our fields. It is only when one has young metal-rich stars and old\nmetal-poor stars (panels $d$ and $h$) that the SGB locus becomes very\ntight and horizontal, as observed for the dominant populations in our\nthree CMDs, while at the same time reproducing a wide RGB. 
Because\nthe RGB is more sensitive to metallicity than to age, while the MSTO\nis very sensitive to both, one is able to break the age-metallicity\ndegeneracy in studies employing this region of the CMD. Note\nthat relatively young and metal-poor stars (panels $a$, $b$,\n$e$, and $f$) are needed to explain the brightest and bluest stars in\nthe blue plume of our observed CMDs.\n\n\\begin{figure*}[t]\n\\epsscale{1.1}\n\\plotone{f5.eps}\n\\epsscale{1.0}\n\\caption{{\\it Top panels:} Four hypothetical populations of stars. \nIn each population, the stars are equally distributed among 20 isochrones\nwith distinct distributions in age and metallicity.\n{\\it Bottom panels:} Model CMDs for these hypothetical\npopulations, with the observational errors and completeness of our\nspheroid data, shown at a logarithmic stretch.}\n\\label{sgbwidth}\n\\end{figure*}\n\nThe similarities at the HB and RGB between the stream and spheroid\nimply that these populations have very similar metallicity distributions,\nat least at the positions of our fields (Brown et al.\\ 2006). Much\nfarther out in the galaxy (31 kpc from the center), Guhathakurta et\nal.\\ (2006) found that the stream was more metal-rich than the\nsurrounding spheroid, but this finding is not inconsistent with our\nresults. Kalirai et al.\\ (2006a) have shown that the spheroid of\nAndromeda has a metallicity gradient, such that it is significantly\nmore metal poor at 30 kpc than it is close to the galaxy's center.\nOur finding of similar metallicities between the stream and\nspheroid in our interior fields, when combined with the Guhathakurta\net al. (2006) results, reaffirms the existence of this metallicity\ngradient.\n\nAlthough the CMDs for each field have many similarities, closer\ninspection reveals significant distinctions, especially between the\ndisk and the other two fields. 
We highlight these distinctions in\nFigure~\\ref{compare}, which shows the differences between the stream\nand spheroid and also those between the disk and spheroid. The\nspheroid data used in each comparison are a subset that reaches\napproximately the same depth as the stream and disk data. The\nspheroid CMD was also scaled to the number of stars in each of the\nother two CMDs before subtracting; note that it makes little\ndifference if this normalization is done based on the total number of\nstars in each field or just those well above the detection limits\n(e.g., $m_{F814} < 28$~mag). Relative to the spheroid main sequence,\nthe stream main sequence extends somewhat farther to the blue, even\nthough the RGB and HB distributions are nearly identical. Thus, the\nage distribution in the stream must extend to slightly younger ages\nthan those in the spheroid (as also noted by Brown et al.\\ 2006). In\ncontrast, the distributions of age and metallicity in the disk extend\nto significantly younger ages and higher metallicities than those in\nthe spheroid and stream, and the old metal-poor population is not as\nprominent. The RGB stars in the disk are skewed toward redder colors,\nwhile the HB population is largely restricted to the red clump; both\nof these features indicate a higher metallicity in the disk. In the\ndisk population, the red clump HB is also somewhat extended in\nluminosity, indicating a younger age distribution (an excellent\nexample of the variation in clump luminosity with age can be seen in\nthe Monelli et al.\\ [2003] study of the Carina dwarf spheroidal).\nThere does not appear to be a significant population on the blue HB,\nalthough a trace population might be hidden in the blue plume of stars\nrising above the dominant MSTO; Figure~\\ref{compare}f shows an\noversubtraction of the blue HB from the spheroid ({\\it dark boxes})\nappearing within the cloud of undersubtracted blue plume stars from\nthe disk ({\\it light boxes}). 
The stronger plume in the disk\npopulation indicates an extension to significantly younger ages. The\nplume in the disk population includes $\\approx$40 stars that are\nbrighter than the region where the blue end of the HB would nominally\nfall, implying that these bright blue stars have masses of\n$\\sim$2--5~$M_\\odot$ and ages of $\\sim$0.2--1 Gyr. Note that\nCuillandre et al.\\ (2001) have also seen evidence for trace\npopulations of young stars in the outer disk of M31. However, the\ndisk does not look quite as young as one might expect if there were a\nsignificant thin disk population -- a point we will return to later.\n\n\\begin{figure*}[t]\n\\epsscale{1.1}\n\\plotone{f6.eps}\n\\epsscale{1.0}\n\\caption{Comparisons of the CMDs for our three fields. The ridge line\nfor NGC~104 ({\\it curve}) is shown for reference. {\\it a)} The CMD of the\nspheroid field, shown at its full depth, at a logarithmic stretch.\n{\\it b)} The CMD of the stream field. {\\it c)} The CMD of the disk\nfield. {\\it d)} The CMD of the spheroid field, shown at a depth that\nmatches that in the stream and disk. {\\it e)} The difference between\nthe stream and spheroid CMDs (with the latter scaled to match the\nnumber of stars in the former). The RGB and HB distributions are very\nsimilar, but the locus of stars at the MS extends slightly brighter and\nbluer than that in the spheroid. {\\it f)} The difference between\nthe disk and spheroid CMDs (with the latter scaled to match the number\nof stars in the former). The RGB of the disk is considerably redder\nthan that of the spheroid, indicating higher metallicities in the\ndisk. The HB of the disk is almost entirely in the red clump, with a\nspread to brighter luminosities, indicating higher metallicities and\nyounger ages in the disk. 
The blue plume of stars above the MSTO is\nmuch stronger in the disk, indicating younger ages than in the\nspheroid.}\n\\label{compare}\n\\end{figure*}\n\nIn a field population, it is difficult to distinguish between young\nmetal-poor stars and old blue stragglers (see Carney, Latham, \\& Laird\n2005 and references therein). Thus, some of the apparently young\nstars in our CMDs ($\\lesssim 6$~Gyr) might instead be blue stragglers.\nHowever, whether blue stragglers form via merger or mass transfer, in\nan old population they will be limited to $M \\lesssim $2~$M_\\odot$.\nAll three of our fields show blue plume stars as bright as the HB over\na wide range of color, and in the disk these stars continue to\nluminosities significantly brighter than the HB. The high masses\nrequired to explain the brightest stars in the blue plume population\nimply that truly young stars are present, and these stars appear to be\na smooth extension of the fainter blue plume population. This argues\nagainst a significant contribution from blue stragglers in the blue plume.\n\nIf we fit Gaussian distributions to the velocity data in our fields\n(Figure~\\ref{cmdsvels}), we find that the spheroid is a $\\sim$25\\%\ncontamination in our stream field and a $\\sim$33\\% contamination in\nour disk field. Given the wide separation between our fields\n(Figure~\\ref{mapfig}), we cannot necessarily assume that the\npopulation in our spheroid field is representative of the spheroid\ncontamination in our stream and disk fields. However, it is natural\nto ask how the stream and disk CMDs would look if the spheroid\ncontamination were subtracted under the assumption that the population\nin our spheroid field is in fact representative of this contamination.\nTo show this, we used that subset of the spheroid data that is matched\nto the depth of the stream and disk data. 
We randomly drew a star\nfrom these spheroid data, found the star in the stream data that most\nclosely agreed in its photometry, and then subtracted that star from\nthe stream data. These subtractions were repeated until 25\\% of the\nstream stars were removed. In 99\\% of the subtractions, the star\nsubtracted from the stream data was within 0.02~mag of the spheroid\nstar, and in 99.9\\% of the subtractions, the star subtracted from the\nstream data was within 0.1~mag of the spheroid star; the handful of\nstars that could not be matched at this level fell very far from the\ndominant stellar locus (in the negligible cloud of sparse stars at\nrandom colors and magnitudes), and these were not subtracted. We\nrepeated this process on the disk data, but there subtracted 33\\% of\nthe disk stars; again, 99\\% of the subtractions matched disk to\nspheroid stars within 0.02~mag, while 99.9\\% of the subtractions\nmatched disk to spheroid stars within 0.1~mag. The resulting CMDs are\nshown in Figure~\\ref{subtract}. Because of the many similarities\nbetween the original three CMDs (Figure~\\ref{cmdsvels}), the changes\ndue to the subtraction of the spheroid contamination are subtle. To\nhelp highlight the differences between the three fields, we also show\nluminosity and color cuts across the CMDs ({\\it colored boxes});\npanels $d$ and $e$ show the color distributions on the lower RGB and\nHB, respectively, while panels $f$ and $g$ show the luminosity\ndistributions at the red clump and SGB, respectively. The color and\nluminosity cuts help quantify the similarities and differences between\nthe populations discussed above and shown in Figure~\\ref{compare}.\nCompared to the spheroid population, the stream population exhibits\nsimilar RGB and HB morphologies, but its main sequence extends\nsomewhat brighter and bluer. 
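In outline, this matched-photometry subtraction is a nearest-neighbor removal in the CMD plane. The following Python sketch illustrates the idea on toy data; the catalogs and numbers are illustrative only, and the real procedure also discards draws with no match within $\sim$0.1~mag (omitted here for brevity):

```python
import random

def subtract_contamination(field, reference, fraction, seed=0):
    """Statistically remove `fraction` of the `field` stars by repeatedly
    drawing a random `reference` star and deleting the field star that is
    its nearest neighbor in (color, magnitude).

    field, reference: lists of (color, mag) tuples (illustrative inputs).
    Returns the decontaminated field sample.
    """
    rng = random.Random(seed)
    remaining = list(field)
    n_remove = int(round(fraction * len(field)))
    for _ in range(n_remove):
        c_ref, m_ref = rng.choice(reference)
        # Nearest neighbor in the CMD plane (color, magnitude).
        i_best = min(range(len(remaining)),
                     key=lambda i: (remaining[i][0] - c_ref) ** 2
                                 + (remaining[i][1] - m_ref) ** 2)
        remaining.pop(i_best)
    return remaining

# Toy example: remove 25% of a synthetic "stream" using a synthetic "spheroid".
rng = random.Random(1)
stream = [(rng.gauss(-0.5, 0.1), rng.gauss(29.0, 0.5)) for _ in range(400)]
spheroid = [(rng.gauss(-0.5, 0.1), rng.gauss(29.0, 0.5)) for _ in range(400)]
cleaned = subtract_contamination(stream, spheroid, 0.25)
```

Because the draws are random and the matching is done star by star, the subtraction removes the contaminating population statistically rather than identifying individual contaminants.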
In contrast, the disk population\nexhibits RGB and HB morphologies that are skewed toward redder colors,\nwith the main sequence showing a strong extension to brighter and\nbluer colors.\n\n\\begin{figure*}[t]\n\\epsscale{1.1}\n\\plotone{f7.eps}\n\\epsscale{1.0}\n\\caption{The spheroid CMD compared to the stream and disk CMDs, where\nwe have attempted to subtract the spheroid contamination \nfrom the stream and disk.\n{\\it a)} The CMD of the spheroid field, shown at a depth that\nmatches that in the stream and disk fields. Cuts across the CMD ({\\it\nblue boxes}) are used to make comparisons with the stream and disk;\nthe histograms in each cut ({\\it panels d-g}) are normalized to the\nnumber of stars in the spheroid. Labels refer to subsequent panels in\nthis figure. {\\it b)} The CMD of the stream field, with a subtraction\nof spheroid stars assumed to contaminate at the 25\\% level, and with\nthe same cuts indicated ({\\it green boxes}). {\\it c)} The CMD of the\ndisk field, with a subtraction of spheroid stars assumed to\ncontaminate at the 33\\% level, and with the same cuts indicated ({\\it\nred boxes}). {\\it d)} Histograms for stars along the RGB color cut\nfor the spheroid ({\\it blue}), stream ({\\it green}) and disk ({\\it\nred}). {\\it e)} Histograms along the HB color cut. {\\it f)}\nHistograms along the HB luminosity cut. {\\it g)} Histograms along the\nSGB luminosity cut. 
Compared to the spheroid and stream, the disk\npopulation has a redder RGB (indicating higher metallicities), an HB\nthat falls mostly in a red clump that extends to brighter luminosities\n(indicating younger ages and higher metallicities), and a stronger\nblue plume above the MSTO (indicating younger ages).}\n\\label{subtract}\n\\end{figure*}\n\n\\subsection{Maximum Likelihood Fitting of Isochrones}\n\nWe turn now to the quantitative fitting of our CMDs.\nOur characterization of the star formation history in each field\nprimarily uses the StarFish code of Harris \\& Zaritsky (2001). This\ncode takes a grid of isochrones, populates them according to the \ninitial mass function (IMF),\nthen applies the photometric scatter and incompleteness (as a function\nof magnitude and color) determined in the artificial star tests. The\ncode then fits the observed CMD by employing linear combinations of the\nscattered isochrones. The fitting can be done via minimization of\neither a $\\chi^2$ statistic or the Maximum Likelihood statistic of\nDolphin (2002). We found little difference between fits done with\neither statistic, and ultimately used the Maximum Likelihood statistic\nin our analysis.\n\nIn the StarFish fitting, each isochrone at a given age and metallicity\nis varied independently, resulting in a large number of free\nparameters in the fit. This method is similar to most of the star\nformation history methods used in the literature (e.g., Dolphin 2002;\nSkillman et al.\\ 2003). Although the term ``star formation history''\nmight imply a physical connection between the subpopulations, this\nmethod is really a fit to the age and metallicity distributions. In\naddition to StarFish, we wrote our own codes that fit the isochrones to\nthe data according to mathematical and physical restrictions that\ngreatly reduce the number of free parameters; these models will be \nthe subject of a future paper.\n\nWe do not fit the entire range of stars observed in the CMD. 
Instead,\nwe restrict our fits to the lower RGB (below the level of the HB),\nSGB, and upper main sequence. Specifically, we fit over $-0.9 \\leq\nm_{F606W}-m_{F814W} \\leq -0.1$~mag in color, and $26.5 \\leq m_{F814W}\n\\leq 30.5$~mag in magnitude for the spheroid data and $26.5 \\leq\nm_{F814W} \\leq 30.0$~mag in magnitude for the stream and disk data\n(which are $\\approx$0.5~mag shallower). This region of the CMD\noffers excellent sensitivity to age and metallicity while avoiding\nthose regions of the CMD that have low signal-to-noise ratio or that are poorly\nconstrained by the models (such as the HB, the upper RGB, the RGB\nluminosity function bump, and the faint end of the CMD). The HB is a\nqualitative indicator of age and metallicity, becoming\nredder at younger ages and higher metallicities, and eventually forming a\nred clump with a significant spread in luminosity. However,\ndisentangling the effects of age and metallicity is highly uncertain;\nindeed, the ``second parameter debate'' in the study of HB morphology\nrefers to the dependence of the HB morphology on parameters other than\nmetallicity, such as age and helium abundance. Although Galactic\nforeground stars comprise much less than 1\\% of the stars in our\nfield, they tend to fall near the upper RGB in M31, which is sparsely\npopulated in our data; the upper RGB is thus the one region of our\nCMDs with significant foreground dwarf contamination. In addition,\nthe upper RGB is contaminated by asymptotic giant branch (AGB) stars,\nwhich in turn have a distribution depending on the age and [Fe\/H] of\ntheir progenitor HB stars. 
The RGB luminosity function bump is a\nqualitative metallicity indicator, and it is most prominent in CMDs of\nmetal-rich populations, where it appears as an overdensity on the RGB\nimmediately below the luminosity of the HB; theoretical models\nreproduce the general trend for the bump luminosity to brighten with\ndecreasing metallicity, but the zeropoint of the relationship is\nuncertain, and the mix of age and metallicity in our populations makes\nit difficult to interpret this feature in the data. The faintest main\nsequence stars in the CMD suffer from large photometric scatter and\nlow completeness.\n\nWe use the Victoria-Regina Isochrones (VandenBerg et al.\\ 2006) in all\nof our fitting. These isochrones do not include core He diffusion,\nwhich would decrease their ages at a given turnoff luminosity by $\\sim\n10$\\% (VandenBerg et al.\\ 2002). Although the ages of isochrones with\ncore He diffusion are likely more accurate, models in which diffusion\nis allowed to act efficiently on other elements in the surface layers\nshow significant discrepancies when compared to observed CMDs,\nindicating that there must be some other mechanism at work, such as\nturbulence in the surface layers (see Brown et al.\\ 2005 and\nreferences therein). Helium diffusion can still occur in the core,\nand thus the ages discussed herein should be reduced by $\\sim$10\\% to\nobtain absolute ages.\n\nThe Victoria-Regina Isochrones are distributed with a ground-based\nmagnitude system. Sirianni et al.\\ (2005) provide an iterative\ntransformation to put ACS data in a ground-based system, but warn\nagainst its use, given the systematic errors intrinsic to such a\nprocess. The biggest problem is that the F606W bandpass is very\ndifferent from Johnson $V$, although the difference between F814W and\nCousins $I$ is nonnegligible, too. 
To properly make the\ntransformation from one system to the other, one must know the\nintrinsic spectral energy distribution of the source, and this is\ndifficult to estimate based on photometry in two broad bandpasses.\nIt is much more straightforward to use the physical parameters along\neach model isochrone (effective temperature and surface gravity) to\ntransform the models into the observational system using synthetic\nspectra of the appropriate metallicity. We use the transformation of\nBrown et al.\\ (2005), which produces good agreement between these\nisochrones and the ACS observations of Galactic clusters spanning a\nwide range in metallicity (Table~\\ref{tabclus}). Over most of the CMD\n(including the region we use here for fitting), the agreement is\nbetter than $\\sim$0.02~mag. In this sense, we are using the\nisochrones to provide relative changes in age and metallicity, once\nthey have been anchored to observations of Galactic clusters. We are\nthus providing star formation histories in a reference frame based on\nthe ages and metallicities of the clusters listed in\nTable~\\ref{tabclus}.\n\n\\subsubsection{The Isochrone Grid}\n\nWe fit a large grid of isochrones spanning $1 \\leq$~age~$\\leq 14$~Gyr\n(with 0.5 Gyr steps) and $-2.3 \\leq $~[Fe\/H]~$\\leq +0.5$ (with\n$\\approx$0.1 dex steps) using the StarFish code. The fine spacing in\nage and metallicity avoids artificial lumpiness in the synthetic\nCMDs but means that neighboring isochrones in the grid are nearly\ndegenerate. Such degeneracies, plus the large number of free\nparameters, do not allow a fit to converge in a reasonable time.\nFortunately, the StarFish code allows groups of neighboring isochrones\nto be locked such that their amplitudes vary together; one of these\nisochrone groups is treated as a single isochrone as far as the\nfitting is concerned, even if its stars span a small range in age and\nmetallicity (see Harris \\& Zaritsky 2001 for details). 
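Schematically, the locking reduces the number of free parameters by tying each group of isochrones to a single shared amplitude; a minimal Python sketch (the grid points, group memberships, and amplitudes below are illustrative, not our actual grid):

```python
# Each isochrone is keyed by (age_gyr, feh); `groups` maps a group id to the
# member isochrones whose amplitudes are locked together (illustrative grid).
groups = {
    0: [(13.0, -2.3), (13.0, -2.1), (13.5, -2.3), (13.5, -2.1)],
    1: [(4.0, -0.2), (4.5, -0.2)],
}

# One free amplitude per group, not per isochrone.
amplitudes = {0: 120.0, 1: 35.0}

def synthetic_cmd_weights(groups, amplitudes):
    """Expand the per-group amplitudes into per-isochrone weights: every
    member of a group contributes with the same shared amplitude (here
    spread evenly over the members, a simplification)."""
    weights = {}
    for gid, members in groups.items():
        for iso in members:
            weights[iso] = amplitudes[gid] / len(members)
    return weights

w = synthetic_cmd_weights(groups, amplitudes)
# The fitter varies only len(amplitudes) parameters (117 in the actual fit),
# even though the underlying grid contains many more isochrones.
```
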
We locked our\nfull grid of isochrones into 117 independent isochrone groups, with\nthe sampling chosen to match the nonlinear changes in the CMD with age\nand metallicity (the CMD changes more rapidly at higher metallicities\nand younger ages). The grid of isochrones and the locked isochrone\ngroups are shown in Figure~\\ref{gridfig}.\n\n\\begin{figure}[h]\n\\epsscale{1.2}\n\\plotone{f8.eps}\n\\epsscale{1.0}\n\\vskip -0.1in\n\\caption{The isochrones used in StarFish fitting. A fine grid of \nisochrones ({\\it crosses}) was used to avoid artificial lumpiness\nin the synthetic CMD, but these isochrones were locked together in\ngroups ({\\it boxes}) to reduce the number of free parameters and\nto avoid degeneracies in the fit.}\n\\label{gridfig}\n\\end{figure}\n\n\\subsubsection{Fixed Parameters}\n\nBesides distance and reddening, there are several other parameters\nthat must be fixed before proceeding with a fit. The binary fraction\nis highly uncertain, and not even well-constrained in the field or\ncluster populations of our own Galaxy; the value appears to be in the\nrange of 10--30\\% in the field population of the Galactic halo (Ryan\n1992 and references therein). The fits to our data are best when the\nbinary fraction is near 10\\%, whereas fits with the binary fraction\nsignificantly deviating from 10\\% show noticeable residuals. Thus,\nunless specified otherwise, the binary fraction is assumed to be 10\\%\nthroughout this paper. The binary fraction is set in the StarFish\ncode (Harris \\& Zaritsky 2001) at the stage where it scatters the\nisochrones; specifically, for a given fraction of stars, it draws a\nsecond star randomly from the IMF and produces a single unresolved\nobject with the combined color and magnitude of the two stars. For\nthe IMF index, we chose the Salpeter (1955)\nvalue of $-1.35$. For the isochrone abundances, we did not assume a\nscaled-solar abundance pattern. 
Instead, we assumed that the alpha\nelements are enhanced at low metallicity and unenhanced (scaled-solar)\nat high metallicity; specifically, we assumed [$\\alpha$\/Fe]~=~0.3 at\n[Fe\/H]~$\\leq -0.7$ and [$\\alpha$\/Fe]~=~0.0 at [Fe\/H]~$> -0.7$. At the\n[$\\alpha$\/Fe] resolution available in our isochrone grid, this trend\nroughly reproduces that seen in the Galaxy (Pritzl, Venn, \\& Irwin\n2005 and references therein), although bulge populations appear to be\nenhanced in alpha elements even at high metallicity (McWilliam \\& Rich\n1994; Rich \\& Origlia 2005). As it turns out, the IMF and\nalpha-enhancement assumptions make little difference in our results.\nAll of these assumptions (distance, reddening, binary fraction, IMF,\nand alpha enhancement) are varied in our exploration of systematic\nerrors (see \\S\\ref{syssec}).\n\n\\subsubsection{Uncertainties}\n \nIn the fits below, we do not plot error bars for the weights of the\nindividual isochrones. This is because the uncertainty associated\nwith the normalization of any individual isochrone is very large, and\ncorrelated with the normalization of neighboring isochrones. If any\none isochrone in the best-fit model is deleted from the fit,\ncompensating changes can be made in neighboring\nisochrones that restore the quality of the fit. The result is that\nthe uncertainty on any individual isochrone weight is largely\nmeaningless. These difficulties are a continuing plague for studies of\nstar formation histories in complex populations (e.g., Skillman et\nal.\\ 2003; Harris \\& Zaritsky 2004). If one is fitting a simple\nstellar population (single age and single metallicity), one can trace\nout confidence contours in the age-metallicity plane according to the\nchange in fit quality, but with a complex star formation history, it\nis the distribution of ages and metallicities that matters. What one\nreally wants is a set of isochrones that are truly eigenfunctions of\nan orthogonal basis set. 
However, there is not an obvious basis\nfunction that relates in a simple way back to physical parameters.\nThe sampling in our isochrone grid is fine enough to avoid artificial\nstructure in the synthetic CMDs, yet coarse enough to avoid isochrones\nthat are completely degenerate within the photometric errors.\n\nEven though some of the isochrone weights in the final fits are very\nsmall, the ensemble of these small weights is necessary for a good\nfit. One way of demonstrating this assertion is by repeating the fits\nafter deleting isochrones with low weights. Starting with the best\nfit, we first sorted the isochrones by their fitted weights, and then\nretained only those whose weight exceeded a specified cutoff;\nspecifically, the cutoff in weight was chosen so that this subset of\nisochrones accounted for 90\\% of the stars in the best fit. Refitting\nwith this reduced set of isochrones produced terrible fits (fit score\n$\\sim$50\\% larger). The fit was also poor when we retained those\nisochrones responsible for 95\\% of the stars in the best fit. The fit\ndid not become acceptable until we had retained those isochrones\nresponsible for 99\\% of the stars in the best fit ($\\sim$50 of the\noriginal 117 isochrone groups).\n\n\\subsection{Results for the Spheroid}\n\\label{secsph}\n\nThe distribution of age and metallicity in our best fit to the\nspheroid data is shown in Figure~\\ref{halofit}. In this figure, the\narea of the symbols ({\\it filled circles}) is proportional to the\nnumber of stars in each isochrone group. Note that the spacing of the\nisochrone groups is irregular, so that if one were to plot a star\nformation rate in units of $M_\\odot$ per unit time per unit\nlogarithmic metallicity, the relative sizes of the symbols would be\nsomewhat increased at younger ages and higher metallicities (where the\nspacing is finer). 
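The conversion implied above, from raw per-group star counts to a star formation rate per unit time and per unit logarithmic metallicity, simply divides each count by the widths of its (irregular) grid cell; a toy sketch with illustrative bin widths:

```python
def sfr_density(n_stars, age_width_gyr, logz_width_dex):
    """Convert a group's star count into a rate density. Because the grid
    spacing is finer at young ages and high metallicities, equal counts
    there correspond to larger densities."""
    return n_stars / (age_width_gyr * logz_width_dex)

# Two groups with equal counts but different cell sizes (illustrative):
coarse = sfr_density(100, age_width_gyr=2.0, logz_width_dex=0.4)  # old, metal-poor
fine = sfr_density(100, age_width_gyr=0.5, logz_width_dex=0.1)    # young, metal-rich
# fine > coarse: the relative symbol-size enhancement described in the text.
```
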
As noted by Brown et al.\ (2003), the\nspheroid CMD is best fitted by a wide range of age and metallicity, and\nis strikingly different from the old, metal-poor halo of the Milky\nWay. Approximately 40\% of the stars are less than 10~Gyr old, and\napproximately 50\% of the stars are more metal-rich than 47~Tuc\n([Fe\/H]$\approx-0.7$). The mean metallicity, $<$[Fe\/H]$>$=$-0.6$, is\nidentical to that found by Durrell et al.\ (1994) at 9~kpc on the\nminor axis, and slightly higher than the $<$[m\/H]$>$=$-0.6$ found by\nHolland et al.\ (1996) from earlier WFPC2 photometry of our field,\nwith similar spreads to both higher and lower metallicities. Although\nour mean metallicity is much higher than that in the Milky Way halo,\nthe metallicity distribution definitely has a tail extending to\nmetal-poor stars. These include the RR Lyrae stars in our field,\nwhich have a mean metallicity of [Fe\/H]~=~$-1.7$ (Brown et al.\\\n2004a), and the minority population of blue HB stars.\n\n\begin{figure}[h]\n\epsscale{1.2}\n\plotone{f9.eps}\n\epsscale{1.0}\n\vskip -0.1in\n\caption{The distribution of age and metallicity in the best-fit model\nof the spheroid data. The area of the filled circles is proportional\nto the number of stars in each isochrone group.\n}\n\label{halofit}\n\end{figure}\n\nAlthough we used the Dolphin (2002) Maximum Likelihood statistic to\nperform our fits, we also compared the results with those obtained\nfrom a traditional $\chi^2$ statistic, and the fits were\nsimilar. Dolphin (2002) also provides a goodness of fit statistic,\n$\chi^2_{\rm eff}$, for those more familiar\nwith $\chi^2$ fitting (with values close to unity indicating a good\nfit). The best fit model (Figure~\ref{halofit}) has $\chi^2_{\rm eff}\n=1.11$ per degree of freedom (8000 CMD bins minus 117 freely varying\nisochrone weights). This score clearly implies an imperfect fit. To\ndemonstrate this, we ran Monte Carlo simulations of the idealized\ncase. 
We created random realizations of the data drawn from the\nbest-fit model to obtain the distribution of the Maximum Likelihood\nstatistic, and found that the Maximum Likelihood statistic obtained in\nour best-fit model exceeds the mean score by 6$\\sigma$ (where $\\sigma$ is\none standard deviation in the distribution of the Maximum Likelihood\nstatistic from the Monte Carlo runs).\n\nThere are many reasons why the model should not\nexactly reproduce the data. These include imperfections in the\nisochrones (they are calibrated at the $\\sim$0.02~mag level against\nGalactic globular clusters observed in the same filters), deviations\nfrom a Salpeter (1955) IMF, deviation from our assumed binary fraction\nof 10\\% (e.g., one might imagine that the binary fraction varies with\nage and metallicity depending on the variations in the formation\nenvironment), and the limitations of the artificial star tests used to\nscatter the isochrones (artificial stars are created, with noise, from\nthe same PSF model used in the PSF fitting, while real stars will\ndeviate from the PSF model due to noise and true intrinsic\ninaccuracies in the PSF model). Although the model does not exactly\nreproduce the data distribution over 8000 CMD bins, the deviations are\nremarkably small, as we show in Figure~\\ref{halomod}. In the top row\nof panels, we show the data in the fitting region ({\\it yellow}), the\nbest-fit model ({\\it blue}), and the differences between the two ({\\it\nyellow} and {\\it blue}) shown at the same linear stretch; i.e., \nthe CMD bins in panel $c$ are shaded blue where the model exceeds the\ndata, and shaded yellow where the data exceeds the model, with the\nshading on the same linear scale employed in panels $a$ and $b$. 
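Schematically, the Monte Carlo calibration works as follows. The statistic shown is one common form of the Poisson maximum-likelihood (likelihood-ratio) statistic, in the spirit of Dolphin (2002), and the bin counts are toy values rather than our actual CMD bins:

```python
import math, random

def plr(model, data):
    """Poisson likelihood-ratio statistic summed over CMD bins
    (one common form; it is zero when the data equal the model)."""
    s = 0.0
    for m, n in zip(model, data):
        s += m - n
        if n > 0:
            s += n * math.log(n / m)
    return 2.0 * s

def poisson(mu, rng):
    """Simple Poisson sampler (Knuth's algorithm), adequate for small mu."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def significance(model, data, n_trials=500, seed=0):
    """z-score of the observed statistic against its distribution over
    Poisson realizations drawn from the model itself (the idealized case)."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_trials):
        fake = [poisson(m, rng) for m in model]
        scores.append(plr(model, fake))
    mean = sum(scores) / n_trials
    sigma = (sum((x - mean) ** 2 for x in scores) / n_trials) ** 0.5
    return (plr(model, data) - mean) / sigma

# Toy case: 100 CMD bins with 5 expected stars each; data exactly at the
# model mean score below the Monte Carlo mean (negative z-score).
model = [5.0] * 100
z = significance(model, [5] * 100)
```

In the actual analysis the analogous z-score for the best-fit spheroid model is +6$\sigma$, quantifying how far the fit sits above the idealized distribution.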
The\ndifferences between the data and model appear almost completely\nrandom, with minimal systematic residuals; in fact, the upper panels look\nmuch like the idealized case shown in the bottom row of panels, where\nthe residuals are completely random.\nThere, we show a random realization of the best-fit model ({\\it\nyellow}), a repeat of the best-fit model ({\\it blue}), and the\ndifferences between the two ({\\it yellow} and {\\it blue}). The\nrealization ({\\it bottom left}) is nearly indistinguishable from the\nactual data ({\\it top left}). The difference between the realization\nand the model ({\\it bottom right}) demonstrates the noise residuals one can\nexpect when comparing a smooth model to discrete data in the idealized\ncase ($\\chi^2_{\\rm eff} = 1$). \n\n\\begin{figure}[h]\n\\epsscale{1.2}\n\\plotone{f10.eps}\n\\epsscale{1.0}\n\\vskip -0.1in\n\\caption{{\\it Top panels:} The CMD of the spheroid data ({\\it\nyellow}), the best-fit model to those data ({\\it blue}), and the\ndifferences between the data and model ({\\it yellow} and {\\it blue}),\nall shown at the same linear stretch. {\\it Bottom panels:} An\nartificial CMD drawn from the best-fit model ({\\it yellow}), the same\nbest-fit model ({\\it blue}), and the differences between the\nartificial data and model ({\\it yellow} and {\\it blue}), all shown at\nthe same linear stretch employed in the top panels. }\n\\label{halomod}\n\\end{figure}\n\nGiven the large number of free parameters, one might also wonder if\nthe ``best-fit'' model has truly converged on the best fit. One way\nto test this is through repeated fitting with distinct initial\nconditions. We show in Figure~\\ref{converge} the results of three\n``best-fit'' models to the spheroid data, each of which started from a\ndistinct random set of isochrone weights. Although there are small\nvariations in the final individual isochrone weights, it is clear that\nthe overarching result is the same in each case. 
As stated earlier,\nthe degeneracies in the isochrone set mean that any individual\nisochrone can be varied significantly without changing the fit\nquality. For example, in Figure~\\ref{converge}, the relatively low\nweight at [Fe\/H]=$-1.7$, compared to the weights at [Fe\/H]=$-1.4$\nand [Fe\/H]=$-2.1$, is not meaningful; for the isochrones at 13~Gyr,\nwe can redistribute the weights at [Fe\/H]=$-1.4$, $-1.7$, and $-2.1$\nso that they are the same in each of these bins, and the fit quality\ndoes not suffer.\n\n\\begin{figure}[h]\n\\epsscale{1.2}\n\\plotone{f11.eps}\n\\epsscale{1.0}\n\\vskip -0.1in\n\\caption{Three different attempts at fitting the spheroid data, using\nthe same isochrones in each case, but where the initial guess in each\npanel was a distinct random distribution of isochrones.\nThe area of the filled circles is proportional to the\nnumber of stars in each isochrone group. Although there are small\nvariations in the individual amplitude weights from panel to panel, it\nis clear that the best-fit model is well converged.}\n\\label{converge}\n\\end{figure}\n\nAlthough the uncertainties on the individual isochrone weights in the\nbest-fit model are large, one can ask what classes of models, in a\nbroad sense, produce fits that are as good as the best-fit model. If\none restricts the fit to isochrones of ages$<10$~Gyr, the quality of\nthe fit is noticeably reduced, with $\\chi^2_{\\rm eff} = 1.18$ (a fit that\nis an additional 5$\\sigma$ worse than the best-fit model). Much of\nthe weight in this fit falls at the top end of the allowed age range,\nand the difference between the model CMD and the data CMD shows\nsignificant residuals (Figure~\\ref{haloyngold}). Alternatively, if\none restricts the fit to isochrones with ages$\\ge 10$~Gyr, the quality\nof the fit is grossly reduced, with $\\chi^2_{\\rm eff} = 3.09$ and\nvery obvious differences between the model CMD and the data CMD\n(Figure~\\ref{haloyngold}). 
This is consistent with the results of\nBrown et al.\\ (2003), who showed that the spheroid CMD is inconsistent\nwith a purely old population of stars.\n\n\\begin{table*}[t]\n\\begin{center}\n\\caption{Summary of Spheroid Fitting}\n\\begin{tabular}{lcccl}\n\\tableline\nModel & $<$[Fe\/H]$>$ & $<$age$>$ & $\\chi^2_{\\rm eff}$ & Comment \\\\\n\\tableline\nStandard model & $-0.6$ & 9.7 & 1.11 & Minimal residuals in fit\\\\\nAge $< 10$~Gyr & $-0.5$ & 8.4 & 1.18 & Significant residuals in fit\\\\\nAge $\\ge 10$~Gyr & $-0.8$ & 10.9 & 3.09 & Gross residuals in fit\\\\\nNo old metal-rich stars & $-0.6$ & 9.6 & 1.11 & Minimal residuals in fit\\\\\nNo young metal-poor stars & $-0.6$ & 9.7 & 1.18 & Misses part of plume\\\\\n\\tableline\n\\end{tabular}\n\\label{tabspheroid}\n\\end{center}\n\\end{table*}\n\n\\begin{figure}[h]\n\\epsscale{1.2}\n\\plotone{f12.eps}\n\\epsscale{1.0}\n\\vskip -0.1in\n\\caption{{\\it Top panels:} The CMD of the spheroid data ({\\it\nyellow}), the best-fit model to those data using a set of isochrones\nrestricted to ages less than 10~Gyr ({\\it blue}), and the\ndifferences between the data and model ({\\it yellow} and {\\it blue}),\nall shown at the same linear stretch. {\\it Bottom panels:} The same\nCMD of the spheroid data ({\\it yellow}), the best-fit model to those\ndata using a set of isochrones restricted to ages $\\ge 10$~Gyr\n({\\it blue}), and the differences between the data and\nmodel ({\\it yellow} and {\\it blue}), all shown at the same linear\nstretch employed in the top panels. It is clear that neither model\nis acceptable, given the residuals ({\\it right hand panels}). }\n\\label{haloyngold}\n\\end{figure}\n\nThe best-fit model has minority populations in the isochrones\nrepresenting old metal-rich stars and young metal-poor stars. 
If\ntruly present, these populations are extremely interesting, because\nthe former imply that at least some of the stars were formed in\nsomething like a bulge environment (with rapid early enrichment),\nwhile the latter imply the accretion of metal-poor stars from dwarf\ngalaxies or star formation following the infall of\nrelatively pristine material. To test this, we repeated the fit while\nexcluding two regions from the input grid of isochrones: age$\\ge\n10$~Gyr at [Fe\/H]$\\ge 0$, and age$< 5$~Gyr at [Fe\/H]$< -0.5$; each of\nthese regions contains 3\\% of the stellar mass in the best-fit model.\nIf the old metal-rich isochrones are excluded from the fit, the fit\nquality in the resulting model does not suffer at all; thus, the CMD\nis consistent with either a small population of such old metal-rich\nstars or none at all. In contrast, if the young metal-poor isochrones\nare excluded from the fit, the fit quality is somewhat reduced, with\n$\\chi^2_{\\rm eff} = 1.18$, due to the model missing the brightest and\nbluest stars in the blue plume above the dominant main sequence. \nThis is not surprising, given our visual inspection of the\nCMD and comparison to young isochrones of various metallicities\n(Figure~\\ref{comparegrid}). Note that the scattered model isochrones\ninclude the effects of blends (determined by the artificial star tests)\nbut not any contribution from blue stragglers; thus, some (but not all) of the\nyoung stars in the fit ($\\lesssim 6$~Gyr) could be an attempt to\naccount for blue stragglers (see \\S\\ref{secinspec}).\n\nWe summarize the fits to the spheroid data in Table~\\ref{tabspheroid}.\nOur standard model is that which simply allows the full grid\n(Figure~\\ref{gridfig}) to vary freely, while the other models are\nself-explanatory. 
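The two exclusion tests just described amount to masking rectangular regions of the (age, [Fe/H]) grid before refitting; a minimal sketch with illustrative grid points:

```python
def exclude(grid, old_metal_rich=True, young_metal_poor=True):
    """Drop isochrones in the tested regions: age >= 10 Gyr at
    [Fe/H] >= 0, and age < 5 Gyr at [Fe/H] < -0.5."""
    kept = []
    for age, feh in grid:
        if old_metal_rich and age >= 10 and feh >= 0:
            continue  # old metal-rich region excluded
        if young_metal_poor and age < 5 and feh < -0.5:
            continue  # young metal-poor region excluded
        kept.append((age, feh))
    return kept

# Illustrative grid points:
grid = [(12.0, 0.2), (12.0, -1.0), (3.0, -1.0), (3.0, 0.0), (8.0, -0.3)]
kept = exclude(grid)
# (12.0, 0.2) and (3.0, -1.0) are removed; the other three survive.
```
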
Mean values of [Fe\/H] and age are not as useful as\nthe full age and metallicity distributions, given the complicated star\nformation history present in the field, but these mean values do serve\nas a yardstick to gauge differences between the fits. \\\\\n\n\\subsection{Results for the Stream}\n\nThe distribution of age and metallicity in our best fit to the\nstream data is shown in Figure~\\ref{strmfit}. Given the qualitative\nsimilarities between the stream and spheroid CMDs, it is not\nsurprising that the best-fit distribution of age and metallicity in\nthe stream resembles that in the spheroid. However, as noted above,\nthere are some distinctions. The mean age in the stream (8.8~Gyr) is\n$\\sim$1~Gyr younger than that in the spheroid (9.7~Gyr), while the\nmean metallicities are nearly the same ($-0.6$ in the spheroid and\n$-0.7$ in the stream). The fit quality for the best-fit stream model\nis similar to that for the spheroid, with $\\chi^2_{\\rm eff} = 1.08$. \nIn Figure~\\ref{strmmod}, we show the comparison of the best-fit\nmodel to the data, as well as the residuals.\n\nGiven that the stream and spheroid are so similar, we also explored to\nwhat extent both populations might be consistent with a single star\nformation history. First, we simply used the spheroid star formation\nhistory (Figure~\\ref{halofit}) to normalize a set of isochrones\nscattered according to the stream artificial star tests, and then\nscaled the result to match the number of stars in the stream. This\ncreated a model with the spheroid star formation history but the\nobservational properties of the stream data, enabling a fair\ncomparison of the two. The result is shown in\nFigure~\\ref{strmhalomod}. It is obvious that there are gross\nresiduals in the model. Although this was not a fit (given that we\nsimply applied the star formation history of the spheroid), if this\nmodel had resulted from our standard isochrone fitting, it would have\nproduced a $\\chi^2_{\\rm eff}$ of 1.32. 
The comparison of the spheroid\ndata with this model population yielded a $\\chi^2_{\\rm eff}$ of 1.11\n(\\S\\ref{secsph}); the much larger discrepancy of the stream data\nwith this model population strongly implies that the spheroid and\nstream data were drawn from distinct populations, at a confidence\nlevel exceeding 99\\%.\n\n\\begin{figure}[h]\n\\epsscale{1.2}\n\\plotone{f13.eps}\n\\epsscale{1.0}\n\\vskip -0.1in\n\\caption{The distribution of age and metallicity in the best-fit model\nof the stream data. The area of the filled circles is proportional to\nthe number of stars in each isochrone group. The total area within\nthe filled symbols has been normalized to that in\nFigure~\\ref{halofit}, to ease comparison (but in reality the surface\nbrightness in the stream is $\\sim$0.5~mag fainter). }\n\\label{strmfit}\n\\end{figure}\n\n\\begin{figure}[h]\n\\epsscale{1.2}\n\\plotone{f14.eps}\n\\epsscale{1.0}\n\\vskip -0.1in\n\\caption{{\\it Top panels:} The CMD of the stream data ({\\it yellow}),\nthe best-fit model to those data ({\\it blue}), and the differences\nbetween the data and model ({\\it yellow} and {\\it blue}), all shown at\nthe same linear stretch. {\\it Bottom panels:} An artificial CMD drawn\nfrom the best-fit model ({\\it yellow}), the same best-fit model ({\\it\nblue}), and the differences between the artificial data and model\n({\\it yellow} and {\\it blue}), all shown at the same linear stretch\nemployed in the top panels. Note that the magnitude range of the\nstream fit is smaller than that in the spheroid fit, because the\nspheroid data are $\\sim$0.5 mag deeper than the stream data.}\n\\label{strmmod}\n\\end{figure}\n\nNext, we tried fitting both the spheroid and stream simultaneously\nwith the same star formation history. 
Specifically, a model for the\nstream was constructed from isochrones appropriately matching the\nstream observations (utilizing the stream artificial star tests), and\na model for the spheroid was constructed from isochrones appropriately\nmatching the spheroid observations (utilizing the spheroid artificial\nstar tests), but the relative weights of the isochrones used to\nconstruct these stream and spheroid models came from a single\ndistribution of age and metallicity. This distribution was varied\nuntil the best fit to both the stream and spheroid data was achieved.\nThe resulting age and metallicity distribution is shown in\nFigure~\\ref{strmhalofit}. Curiously, this compromise solution to both\nCMDs is a bit older and more metal-poor than that found for either CMD\nindividually; this is likely due to the fact that the spheroid and\nstream are distinct, resulting in a poor fit when fitting both at the\nsame time. The poor quality of the fit can be seen when this\ncompromise model is compared to the stream data, as shown in\nFigure~\\ref{strmhalomod}. The value for $\\chi^2_{\\rm eff}$ is not\nterrible (1.14), but there are approximately twice the number of\ndegrees of freedom in this fit, given that we are fitting two CMDs of\ndata simultaneously, so the deviation from unity is more significant.\nBoth of these tests imply that while the stream and spheroid CMDs are very\nsimilar, they are not drawn from exactly the same population.\n\n\\begin{figure}[h]\n\\epsscale{1.2}\n\\plotone{f15.eps}\n\\epsscale{1.0}\n\\vskip -0.1in\n\\caption{{\\it a)} The CMD of the stream data ({\\it yellow}). {\\it b)}\nA model for the stream data ({\\it blue}), but constructed from the\nspheroid star formation history, scattered with the observational\nerrors of the stream data and normalized to the stream star counts. {\\it c)}\nThe differences between the data and model ({\\it\nyellow} and {\\it blue}), all shown at the same linear stretch. 
{\\it\nd)} The same CMD of the stream data ({\\it yellow}). {\\it e)} The\nbest-fit compromise model fit simultaneously to the spheroid and\nstream datasets ({\\it blue}). {\\it f)} The differences between the data and\nmodel ({\\it yellow} and {\\it blue}), all shown at the same linear\nstretch employed in the top panels. Significant residuals can be seen\nin either case ({\\it right hand panels}) implying that the stream and\nspheroid CMDs are not drawn from the same population.}\n\\label{strmhalomod}\n\\end{figure}\n\n\\begin{figure}[h]\n\\epsscale{1.2}\n\\plotone{f16.eps}\n\\epsscale{1.0}\n\\vskip -0.1in\n\\caption{\nThe distribution of age and metallicity in the best-fit model\nsimultaneously fit to the spheroid and stream data. The area of the\nfilled circles is proportional to the number of stars in each\nisochrone group. The distribution shown here is clearly a compromise\nbetween those shown in Figures~\\ref{halofit} and \\ref{strmfit}.\n}\n\\label{strmhalofit}\n\\end{figure}\n\nAs done with the spheroid data, we also explored to what extent stream models\nwith more restricted age ranges are consistent with the data. When\nthe isochrones are restricted to ages$< 10$~Gyr, the quality of the\nfit is nearly unchanged (with $\\chi^2_{\\rm eff}=1.10$), although the\nresulting distribution of age and metallicity looks somewhat skewed,\nwith much of the weight falling at the top end of the age range. When\nthe isochrones are restricted to ages$\\ge 10$~Gyr, the quality of the\nfit is very poor, with $\\chi^2_{\\rm eff}=2.80$. 
If old metal-rich\nstars are removed from the input isochrone grid, the quality of the\nfit is unchanged from the best-fit model, while if young metal-poor\nstars are removed, the quality of the fit is noticeably affected, with\n$\chi^2_{\rm eff} = 1.14$, but the model is only\nmissing the brightest and bluest stars in the blue plume.\n\n\begin{table*}[t]\n\begin{center}\n\caption{Summary of Stream Fitting}\n\begin{tabular}{lcccl}\n\tableline\nModel & $<$[Fe\/H]$>$ & $<$age$>$ & $\chi^2_{\rm eff}$ & Comment \\\n\tableline\nStandard model & $-0.7$ & 8.8 & 1.08 & Minimal residuals in fit\\\nAge $< 10$~Gyr & $-0.6$ & 8.1 & 1.10 & Minimal residuals in fit\\\nAge $\ge 10$~Gyr & $-1.0$ & 11.0 & 2.80 & Gross residuals in fit\\\nNo old metal-rich stars & $-0.7$ & 8.8 & 1.09 & Minimal residuals in fit\\\nNo young metal-poor stars & $-0.7$ & 8.8 & 1.14 & Misses part of plume\\\nBest-fit spheroid model & $-0.6$ & 9.7 & 1.32\tablenotemark{a} & Gross residuals \\\nSimultaneous fit to spheroid \& stream & $-0.8$ & 10.1 & 1.14\tablenotemark{b} & Significant residuals\\\nFixed 25\% spheroid contamination & $-0.8$ & 8.8 & 1.10 & Similar to standard model\\\n\tableline\n\multicolumn{5}{l}{$^a$Not actually a fit. See text for details.}\\\n\multicolumn{5}{l}{$^b$Twice the degrees of freedom. See text for details.}\n\end{tabular}\n\label{tabstream}\n\end{center}\n\end{table*}\n\nThe Keck data for our stream field imply that 75\% of its stars fall\nin two kinematically cold stream components (Kalirai et al.\ 2006b),\nand that 25\% of its stars are in the underlying spheroid. Although\nthe population in our spheroid field might not be representative of\nthe underlying spheroid in the stream field, it is reasonable to\nwonder how the fitting of the stream star formation history is\naffected if this spheroid contamination is taken into account. 
To\nexplore this, we fitted the stream with the same set of isochrones, but\nadded an additional component to the model, fixed at 25\\% of the\npopulation, representing the spheroid contamination. This\ncontamination component was constructed from the best-fit model to the\nspheroid but using the isochrones scattered with the stream artificial\nstar tests; thus the contamination component appropriately represents\nthe spheroid population as it would appear in the stream data. The\nresults are shown in Figure~\\ref{strmfit_cont}. The quality of the\nfit is good; $\\chi^2_{\\rm eff} = 1.10$. In the top panel, we show the\ntotal star formation history (combining the fixed spheroid\ncontamination and the fit to the stream). In the bottom panel, we\nhave subtracted the spheroid contamination component from the star\nformation history, to show the star formation history of the stream in\nisolation. The isolated star formation history of the stream\n(Figure~\\ref{strmfit_cont}b) is very similar to the best-fit model to\nthe stream that did not try to account for the spheroid contamination\n(Figure~\\ref{strmfit}). Given the similarity between the stream and\nspheroid CMDs, and the fact that the spheroid contamination is only\n25\\%, this is not that surprising. We summarize the results of the\nstream fitting in Table~\\ref{tabstream}.\n\n\\begin{figure}[h]\n\\epsscale{1.2}\n\\plotone{f17.eps}\n\\epsscale{1.0}\n\\vskip -0.1in\n\\caption{ The best-fit model to the stream, assuming a fixed 25\\%\ncontamination from the underlying spheroid that matches the population\nin Figure~\\ref{halofit}. The area of the filled circles is\nproportional to the number of stars in each isochrone group. {\\it Top\npanel:} The complete star formation history, including the fixed\nspheroid contamination. {\\it Bottom panel:} The star formation\nhistory for the stream population in isolation, excluding that part of\nthe fit representing the spheroid contamination. 
The population has\nbeen normalized such that the total area in the symbols is the same in\nboth panels.}\n\\label{strmfit_cont}\n\\end{figure}\n\n\\subsection{Results for the Disk}\n\nThe distribution of age and metallicity in our best fit to the disk\ndata is shown in Figure~\\ref{diskfit}. As expected from our earlier\ninspection of the CMDs, the star formation history in the disk is\nmarkedly distinct from that in the spheroid or stream, in the sense\nthat the population is younger and significantly more metal-rich, with\na mean age of 7.5~Gyr and a mean metallicity of [Fe\/H]~=~$-0.2$. The\nfit quality for the best-fit disk model is excellent, with\n$\\chi^2_{\\rm eff} = 1.05$. In Figure~\\ref{diskmod}, we show the\ncomparison of the best-fit model to the data, as well as the residuals.\n\n\\begin{figure}[h]\n\\epsscale{1.2}\n\\plotone{f18.eps}\n\\epsscale{1.0}\n\\vskip -0.1in\n\\caption{The distribution of age and metallicity in the best-fit model\nof the disk data. The area of the filled circles is proportional to\nthe number of stars in that isochrone group. The distribution shown\nhere is clearly distinct from those shown in Figures~\\ref{halofit} and\n\\ref{strmfit}. The total area within the filled symbols has been\nnormalized to that in Figures~\\ref{halofit} and \\ref{strmfit}, to ease\ncomparisons.}\n\\label{diskfit}\n\\end{figure}\n\n\\begin{figure}[h]\n\\epsscale{1.2}\n\\plotone{f19.eps}\n\\epsscale{1.0}\n\\vskip -0.1in\n\\caption{{\\it Top panels:} The CMD of the disk data ({\\it\nyellow}), the best-fit model to those data ({\\it blue}), and the\ndifferences between the data and model ({\\it yellow} and {\\it blue}),\nall shown at the same linear stretch. {\\it Bottom panels:} An\nartificial CMD drawn from the best-fit model ({\\it yellow}), the same\nbest-fit model ({\\it blue}), and the differences between the data and\nmodel ({\\it yellow} and {\\it blue}), all shown at the same linear\nstretch employed in the top panels. 
Note that the magnitude range\nof the disk fit is smaller than that in the spheroid fit, because\nthe spheroid data are $\\sim$0.5 mag deeper than the disk data.}\n\\label{diskmod}\n\\end{figure}\n\nThe best-fit model to the disk is dominated by stars at ages of less\nthan 10~Gyr. In fact, if we fit the data with a subset of the\nisochrones restricted to ages$<$10~Gyr (i.e., remove all old\nisochrones from the input grid, not just the metal-rich ones), the fit\nis only negligibly worse than that achieved with the full set of\nisochrones ($\\chi^2_{\\rm eff} = 1.06$), and the resulting distribution\nof age and metallicity looks very similar to that in the best-fit\nmodel. In contrast, a fit restricted to ages$\\ge 10$~Gyr is grossly\ninadequate, with $\\chi^2_{\\rm eff}=5.07$. If young metal-poor stars\nare removed from the fit (as done with the stream and spheroid\nfitting), the fit quality drops, with $\\chi^2_{\\rm eff} = 1.14$, and\nthe model misses the bright blue stars in the plume.\n\n\\begin{table*}[t]\n\\begin{center}\n\\caption{Summary of Disk Fitting}\n\\begin{tabular}{lcccl}\n\\tableline\nModel & $<$[Fe\/H]$>$ & $<$age$>$ & $\\chi^2_{\\rm eff}$ & Comment \\\\\n\\tableline\nStandard model & $-0.2$ & 7.5 & 1.05 & Minimal residuals in fit\\\\\nAge $< 10$~Gyr & $-0.1$ & 6.9 & 1.06 & Minimal residuals in fit\\\\\nAge $\\ge 10$~Gyr & $-0.9$ & 11.0 & 5.07 & Gross residuals in fit\\\\\nNo young metal-poor stars & $-0.2$ & 7.6 & 1.14 & Misses part of plume\\\\\nFixed 33\\% spheroid contamination & $+0.1$ & 6.6 & 1.05 & Younger. Minimal residuals\\\\\n\\tableline\n\\end{tabular}\n\\label{tabdisk}\n\\end{center}\n\\end{table*}\n\nThe metallicity distribution in our best fit to the disk CMD is\nsomewhat more metal-rich than that typically found in the outer disk\nof M31 (e.g., Worthey et al.\\ 2005). There are several possible\nreasons for this. 
First, the greatest color dependence upon [Fe\/H] is\nat the tip of the RGB, which in our data is both sparsely populated\nand seriously contaminated by foreground dwarf stars. Instead, we are\nusing the lower RGB, which offers the advantage of large numbers of\nM31 stars and little contamination, but the penalty is a reduced color\nsensitivity to [Fe\/H]. Second, the use of distinct isochrone sets and\ndistinct observing bands results in significant scatter for abundance\ndeterminations even when the population is a simple one, such as a\nglobular cluster. The metallicities we derive are calibrated to the\nglobular cluster metallicities given in Table~\\ref{tabclus}.\nPublished abundances for globular clusters of intermediate metallicity\nvary by $\\sim$0.2 dex in the recent literature, while abundances for\nhigh metallicity clusters vary by even more (see Brown et al. 2005 and\nreferences therein). Moreover, isochrones at high metallicity are\ndifficult to calibrate, given that appropriate clusters tend to be in\nheavily reddened regions, such as the Galactic bulge. Finally,\nprevious [Fe\/H] distributions for M31 fields invariably employed old\nisochrones or the ridge lines of old globular clusters as reference\npoints. This will bias the results toward lower metallicity if the\nmetal-rich population is in fact significantly younger than Galactic\nglobular clusters. For example, the upper RGB for a 13 Gyr population at\n[Fe\/H]~=~0.0 is very similar to that for a 6 Gyr population at\n[Fe\/H]~=~+0.230.\n\nThe Keck kinematics of our disk field imply that $\\sim$67\\% of its\nstars are moving in the disk (Kalirai et al.\\ 2006b; Reitzel et al.\\ in\nprep.), and that $\\sim$33\\% of its stars are in the underlying\nspheroid. 
As with our analysis of the stream, the population in our\nspheroid field might not be representative of the underlying spheroid\nin the disk field, but it is reasonable to explore a fit to the disk\nwith a fixed contamination component from the spheroid. We repeated\nthe disk fitting with an additional model component held fixed at 33\\%\nof the population, representing spheroid contamination. This\ncontamination component was constructed from the best-fit model to the\nspheroid but using the isochrones scattered with the disk artificial\nstar tests; thus the contamination component appropriately represents\nthe spheroid population as it would appear in the disk data. The\nresults are shown in Figure~\\ref{diskfit_cont}. The quality of the\nfit is excellent, with $\\chi^2_{\\rm eff} = 1.05$. In the top panel, we\nshow the total star formation history (which includes the fixed\nspheroid component in the fit to the disk field). In the bottom\npanel, we show the same fit to the star formation history, but\nsubtract that fixed component representing spheroid contamination, in\norder to show the star formation history of the disk population in\nisolation. The isolated disk population (Figure~\\ref{diskfit_cont}b)\nis significantly younger and more metal-rich than that found in our\ninitial model (Figure~\\ref{diskfit}), where we did not try to account\nfor the spheroid contamination. The isolated point at 13 Gyr and\n[Fe\/H]~=~0.5 is not significant (repeating the fit with no isochrones\nolder than 10~Gyr yields $\\chi^2_{\\rm eff} = 1.06$). The similarities\nbetween Figures~\\ref{diskfit} and Figure~\\ref{diskfit_cont}a are\nreassuring; the fit in Figure~\\ref{diskfit} did not employ any\nknowledge of the spheroid contamination, yet it is clear that this fit\ntried to reproduce the old metal-poor component that is present in\nFigure~\\ref{diskfit_cont}a, where we explicitly specified a spheroid\ncontamination component to the model. 
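The fixed-contamination fits for both the stream (25\%) and the disk (33\%) amount to freezing one component of a two-component mixture and re-fitting only the free weights. A minimal sketch of that decomposition follows, with hypothetical flattened Hess-diagram arrays, plain least squares standing in for the actual maximum-likelihood fitter, and no non-negativity constraint (which a real CMD fit would impose):

```python
import numpy as np

def fit_free_component(data, contam_model, free_components, f_contam):
    """Decompose a flattened Hess diagram as
        data ~ f_contam * contam_model + (1 - f_contam) * sum_i w_i * comp_i,
    with the contamination fraction and shape held fixed.
    Illustrative only: plain least squares; a real CMD fit would use the
    Poisson likelihood and enforce w_i >= 0."""
    data = np.asarray(data, dtype=float)
    # Subtract the frozen contamination term from the data ...
    residual = data - f_contam * np.asarray(contam_model, dtype=float)
    # ... and fit the remainder as a linear combination of free components.
    A = np.asarray(free_components, dtype=float).T   # bins x components
    v, *_ = np.linalg.lstsq(A, residual, rcond=None)
    return v / v.sum()                               # normalized weights w_i
```

Subtracting the frozen contamination term and renormalizing the remaining weights is, schematically, the step that isolates the stream and disk star formation histories from the total fits.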
Because the spheroid\ncontamination can completely account for the old and metal-poor stars\nin the disk field, the dearth of metal-poor stars is another example\nof the ``G dwarf problem'' -- that a simple closed box model of\nchemical evolution predicts a longer tail of metal-poor stars than\nseen in all massive galaxies (see Worthey et al.\ 2005 and references\ntherein). We summarize the results of the disk fitting in\nTable~\ref{tabdisk}.\n\n\begin{figure}[h]\n\epsscale{1.2}\n\plotone{f20.eps}\n\epsscale{1.0}\n\vskip -0.1in\n\caption{ The best-fit model to the disk, assuming a fixed 33\%\ncontamination from the underlying spheroid that matches the population\nin Figure~\ref{halofit}. {\it Top panel:} The complete star formation\nhistory, including the fixed spheroid contamination. {\it Bottom\npanel:} The star formation history for the disk population in\nisolation, excluding that part of the fit representing the spheroid\ncontamination. The population has been normalized such that the total\narea in the symbols is the same in both panels.}\n\label{diskfit_cont}\n\end{figure}\n\n\subsection{Systematic Effects of Binaries, Alpha-enhancement, \nIMF, Distance, and Reddening}\n\label{syssec}\n\nThe fits above make assumptions about the binary fraction,\nalpha-element enhancement, IMF, distance, and reddening. Of these\nfive parameters, the binary fraction and reddening uncertainties\ntranslate into the largest uncertainties in the resulting fits, but do\nnot change the gross interpretation of the CMDs. We assumed an IMF\nindex of $-1.35$ (Salpeter 1955), and assumed that [$\alpha$\/Fe]~=~0.3\nat [Fe\/H]~$\leq -0.7$ and [$\alpha$\/Fe]~=~0.0 at [Fe\/H]~$> -0.7$. 
We\nassumed 770~kpc for the M31 distance (Freedman \& Madore 1990), which\nis based on Cepheids and falls in the middle of the range generally\nquoted in the literature (e.g., Pritchet \& van den Bergh 1987; Stanek\n\& Garnavich 1998; Holland 1998; Durrell et al.\ 2001; McConnachie et\nal.\ 2005; Ribas et al.\ 2005). \nWe assumed $E(B-V)=0.08$~mag in each field, but as noted\nearlier, the Schlegel et al.\ (1998) map is uncertain at the\n$\sim$0.02~mag level in random fields, with somewhat higher\nuncertainties near Local Group galaxies.\n\nWe chose a binary fraction of 10\%, because grossly changing this\nvalue produced lower-quality fits with obvious residuals in the\ncomparison of the models and data. Given that we chose a binary\nfraction that minimized fit residuals, in a sense we ``fit'' the\nbinary fraction, but did so on a very coarse scale. Fortunately all\nthree fields can be reasonably fit with the same binary fraction,\nbecause this avoids complications in the interpretation of the fits.\nIf we assumed distinct binary fractions in the fitting to each field,\none could attribute some of the age variations to this varying binary\nfraction. At larger binary fractions, the features in the synthetic\nCMD become brighter, and the age distribution must shift to older ages\nto compensate, while lower binary fractions result in younger age\ndistributions.\n\nTo demonstrate the sensitivity of our fits to these parameters, we\nrepeated our fits while varying our assumptions. The results are\nshown in Table~\ref{systab} for all three fields, and in\nFigure~\ref{sysfig} for the spheroid field. Reducing the binary\nfraction to 0\% would decrease our ages by 0--0.4~Gyr, while\nincreasing the binary fraction to 40\% would increase our ages by\n$\sim$1~Gyr. Changing the alpha enhancement has almost no effect,\nother than a slight shift in the metallicity distribution. 
The\ninsensitivity to alpha enhancement makes sense, because in these\nbandpasses, isochrones with enhanced alpha elements look much like\nscaled-solar isochrones at slightly higher metallicity (note that the\nisochrones are always transformed to the ACS bandpasses using\nsynthetic spectra of a consistent alpha enhancement; see Brown et al.\\\n2005). Changing the IMF index from $-1.35$ to $-1.15$ also has little\neffect on the metallicity and age distributions; this is\nbecause our CMDs are sampling a fairly small range in stellar mass\n(the bulk of the stars brighter than the faint limit in our fitting\nregion fall in the mass range 0.7$\\lesssim M \\lesssim 1.2$~$M_\\odot$).\nChanging the extinction by 0.03~mag in either direction (assuming the\naverage Galactic extinction curve of Fitzpatrick 1999) primarily\naffects the metallicity distribution; an increase in the assumed\nextinction (redder stars) is compensated by a lower metallicity (bluer\nstars), and vice versa. Changing the distance modulus by 0.03~mag in\neither direction primarily affects the age distribution; an increase\nin the assumed distance (fainter apparent magnitudes) is compensated by\na younger age (brighter absolute and apparent magnitudes), and vice\nversa. 
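The distance and reddening degeneracies described above come down to simple magnitude arithmetic; a sketch follows, using the generic $R_V = 3.1$ scaling rather than the exact Fitzpatrick (1999) ACS-band coefficients used in the fits:

```python
import math

def distance_modulus(d_pc):
    """mu = 5 log10(d / 10 pc), the offset between apparent and absolute mag."""
    return 5.0 * math.log10(d_pc / 10.0)

# Moving M31 from the assumed 770 kpc to 780 kpc changes the distance
# modulus by only ~0.03 mag, which the fit absorbs as a small age shift.
dmu = distance_modulus(780e3) - distance_modulus(770e3)   # ~0.028 mag

# A 0.03 mag change in E(B-V) shifts model colors by 0.03 mag and, with
# a generic R_V = 3.1 law, broadband magnitudes by roughly 0.09 mag,
# which the fit absorbs mostly as a metallicity shift.
dA_V = 3.1 * 0.03
```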
Note that no change in assumptions for the spheroid\n(Figure~\\ref{sysfig}) can make the spheroid population look like that\nof the disk (Figure~\\ref{diskfit}).\n\n\\begin{table*}[t]\n\\begin{center}\n\\caption{Systematic Effects of Assumptions}\n\\begin{tabular}{lccccccccc}\n\\tableline\n & \\multicolumn{3}{c}{spheroid} & \\multicolumn{3}{c}{stream} & \\multicolumn{3}{c}{disk} \\\\\nFit & $<$age$>$ & $<$[Fe\/H]$>$ & $\\chi^2_{\\rm eff}$ & $<$age$>$ & $<$[Fe\/H]$>$ & $\\chi^2_{\\rm eff}$ & $<$age$>$ & $<$[Fe\/H]$>$ & $\\chi^2_{\\rm eff}$ \\\\\n\\tableline\nStandard model & 9.7 & $-$0.6 & 1.11 & 8.8 & $-$0.7 & 1.08 & 7.5 & $-$0.2 & 1.05 \\\\\nBinary fraction = 0.0 & 9.3 & $-$0.5 & 1.14 & 8.7 & $-$0.7 & 1.09 & 7.5 & $-$0.1 & 1.11 \\\\\nBinary fraction = 0.2 & 10.1 & $-$0.8 & 1.17 & 9.2 & $-$0.8 & 1.13 & 7.7 & $-$0.2 & 1.05 \\\\\nBinary fraction = 0.4 & 10.9 & $-$0.9 & 1.42 & 9.9 & $-$1.0 & 1.25 & 8.5 & $-$0.4 & 1.13 \\\\\n$[\\alpha$\/Fe$]=0.3$ at [Fe\/H]$\\leq$0 & 9.7 & $-$0.7 & 1.11 & 8.9 & $-$0.8 & 1.08 & 7.7 & $-$0.3 & 1.05 \\\\\n$[\\alpha$\/Fe$]=0.0$ at all [Fe\/H] & 9.7 & $-$0.6 & 1.11 & 8.9 & $-$0.7 & 1.08 & 7.6 & $-$0.1 & 1.05 \\\\\nIMF index $-$1.15 & 9.9 & $-$0.7 & 1.11 & 9.0 & $-$0.7 & 1.08 & 7.6 & $-$0.2 & 1.04 \\\\\nDistance = 760 kpc & 9.8 & $-$0.7 & 1.12 & 9.1 & $-$0.7 & 1.09 & 7.6 & $-$0.2 & 1.04 \\\\ \nDistance = 780 kpc & 9.5 & $-$0.6 & 1.11 & 8.7 & $-$0.7 & 1.08 & 7.4 & $-$0.1 & 1.05 \\\\\n$E(B-V)=0.05$~mag & 9.8 & $-$0.2 & 1.15 & 9.1 & $-$0.3 & 1.08 & 7.7 & $+$0.2 & 1.14 \\\\\n$E(B-V)=0.11$~mag & 9.4 & $-$1.0 & 1.17 & 8.6 & $-$1.1 & 1.16 & 7.3 & $-$0.5 & 1.05 \\\\\n\\tableline\n\\end{tabular}\n\\label{systab}\n\\end{center}\n\\end{table*}\n\n\\begin{figure*}[t]\n\\epsscale{1.1}\n\\plotone{f21.eps}\n\\epsscale{1.0}\n\\caption{The distribution of age and metallicity in the best-fit model\nto the spheroid data, making different assumptions about the binary\nfraction, IMF, alpha-enhancement, distance, and reddening. 
The area\nof the filled circles is proportional to the number of stars in each\nisochrone group. {\\it a)} Our standard model: binary fraction 10\\%,\nSalpeter (1955) IMF, alpha-element enhancement at [Fe\/H]$<-0.7$,\ndistance of 770 kpc, $E(B-V)=0.08$~mag. {\\it b)} A binary fraction\nof 0\\%. {\\it c)} A binary fraction of 20\\%. {\\it d)} A binary\nfraction of 40\\%. {\\it e)} Isochrones at [Fe\/H]$\\leq 0$ are\nalpha-enhanced. {\\it f)} None of the isochrones are alpha-enhanced.\n{\\it g)} An IMF index of $-1.15$. {\\it h)} Distance is 760~kpc. {\\it\ni)} Distance is 780~kpc. {\\it j)} $E(B-V)=0.05$~mag. {\\it k)}\n$E(B-V)=0.11$~mag. }\n\\label{sysfig}\n\\end{figure*}\n\n\\section{Discussion}\n\\label{secdisc}\n\nThe quantitative fitting to the CMDs of the spheroid, stream, and\nouter disk reaffirmed our general impressions from the qualitative\ninspection of the CMDs. In Figure~\\ref{compfits}, we compare the star\nformation histories for the three fields. The star formation history\nin the spheroid is simply our standard model (Figure~\\ref{halofit}),\nwhile the star formation histories in the stream and disk are those\nthat have had an assumed spheroid contamination subtracted\n(Figures~\\ref{strmfit_cont}b and \\ref{diskfit_cont}b). All three\nfields show an extended star formation history. The star formation\nhistory in the stream is similar to that in the spheroid, but is\nshifted somewhat younger. The disk population is dominated by\nintermediate-age stars, with little evidence for the old metal-poor\npopulation present in the spheroid and stream. All three fields have\na trace population of young metal-poor stars, presumably due to the\naccretion of metal-poor stars from dwarf galaxies or due to stars\nforming from the infall of relatively pristine material. 
The fact\nthat such material continues to fall into Andromeda is evidenced by the\nextensive population of \ion{H}{1} clouds recently found in the\noutskirts of the galaxy (Thilker et al.\ 2004).\n\n\begin{figure}[h]\n\epsscale{1.1}\n\plotone{f22.eps}\n\epsscale{1.0}\n\caption{The best-fit star formation histories for the\nspheroid ($a$), stream ($b$), and disk ($c$). The area\nof the filled circles ({\it grey}) is proportional to the\nnumber of stars falling in the given isochrone. For comparison,\nthe star formation history of the spheroid is overplotted in\neach panel ({\it black open circles}). The stream and disk fits\neach assumed a fixed contamination from the spheroid, which has\nbeen subtracted.\n}\n\label{compfits}\n\end{figure}\n\n\subsection{Disk}\n\nMost hierarchical CDM models predict that a spiral disk forms\ninside-out, generally leading to a disk that becomes progressively\nyounger at increasing radius. For example, the simulated disk of\nAbadi et al.\ (2003a, 2003b) has a mean age of $\sim 8$--10~Gyr within\n2 kpc of the center and $\sim$6--8~Gyr near 20 kpc. However, the\nliterature does include counter-examples with more complex age\ngradients. The simulated galaxy of Robertson et al.\ (2004) exhibits\na mean stellar age of $\sim$7.5~Gyr in the center (within 2 kpc) and\n$\sim$10~Gyr in the disk outskirts (beyond 14 kpc). The CDM models of\nSommer-Larsen, G$\ddot{\rm o}$tz, \& Portinari (2003) result in disk\ngalaxies that sometimes form inside-out and sometimes form outside-in.\nBoth classes predict mean ages of 6--8~Gyr in the outer disk (6 scale\nlengths from the center), but the age distributions differ, with the\ninside-out galaxy hosting a significantly larger fraction of young\nstars ($\lesssim 3$~Gyr) in the outskirts. 
In a sophisticated model\nof the chemical evolution in the Milky Way disk, Chiappini et al.\\\n(2001) demonstrate an inside-out formation scenario where the stellar\nage is not a monotonically varying function of distance from the\nGalactic center; in the inner disk (4--10 kpc), the stellar ages are\ndecreasing with increasing radius, as expected, but beyond this radius,\nthe stellar ages increase with radius, because the thick disk and halo\nbegin to dominate over the thin disk. All of these models can be\ncompared to the solar neighborhood (e.g., Ibukiyama \& Arimoto 2002;\nSandage, Lubin, \& VandenBerg 2003; Fontaine, Brassard, \& Bergeron\n2001), but we know little of the detailed star formation histories for\nother giant spiral galaxies. As far as the structures are concerned,\nobservations of high-redshift disk galaxies (e.g. Ferguson, Dickinson\n\& Williams 2000; Ravindranath et al.\ 2004) suggest that disks were\nlargely in place 8 Gyr ago. Since then, they have increased their\nstellar masses and sizes, consistent with an inside-out\nsequence of star formation (Trujillo et al.\ 2005), with the average\nstellar surface mass density staying roughly constant from $z=1$ to\nthe present (Barden et al.\ 2005).\n\nOur mean age in the outer disk (6.6~Gyr) is in good agreement with the\nmodels of Abadi et al.\ (2003a, 2003b), and significantly younger than\nthe models of Robertson et al.\ (2004); these comparisons\nsuggest consistency with an inside-out formation scenario. Our mean\nage also falls in the range found in both the inside-out and\noutside-in models of Sommer-Larsen et al.\ (2003), but our age\ndistribution, with a significant dearth of stars younger than 3~Gyr,\nis in somewhat better agreement with their outside-in model. However,\nthese are all hydrodynamical models that track the birth of\nparticles but largely ignore the details of chemical evolution. 
It\nwould be interesting to compare our age-metallicity distribution with\nsuch a distribution in a true chemical evolution model under an\ninside-out formation scenario (e.g., Chiappini et al.\\ 2001). \n\nOur star formation history in the disk is probably saying less about\nthe validity of the inside-out formation scenario and more about the\nrelative scales of the thin and thick disk; because our disk field is\n25~kpc from the galactic center, it is well into the regime where one\nmight expect the thick disk to dominate (Chiappini et al.\\ 2001).\nIndeed, there is evidence that the thick disk begins to dominate well\ninside this radius; in their WFPC2 images of an off-axis field 5~kpc\nfrom the nucleus, Sarajedini \\& Van Duyne (2001) found a population\napparently dominated by thick disk stars.\nNote that Morrison et al.\\ (2004) apparently found a subsystem of the\nM31 globular cluster system with thin disk kinematics, but this\nsubsystem is largely restricted to that part of the disk plane\ninterior to our own disk field. In Figure~\\ref{diskfit_comp}, we\ncompare the age and metallicity distribution in our disk field to\nthose distributions in the solar neighborhood. The outer disk of\nAndromeda is clearly similar to the thick disk population of the solar\nneighborhood (dominated by intermediate-age stars at relatively high\nmetallicities), but looks nothing like the thin disk of the solar\nneighborhood (dominated by stars younger than 5~Gyr). The hydrogen\ncolumn density in our disk field (Table~\\ref{fieldtab}; Braun et al.\\\nin prep.) 
is below the threshold typically assumed for star formation\nin disk galaxies ($N_{HI} \sim 10^{21}$ cm$^{-2}$; Kennicutt 1989),\nand so the dearth of very young stars should not be surprising.\n\n\begin{figure}[ht]\n\epsscale{1.1}\n\plotone{f23.eps}\n\epsscale{1.0}\n\caption{{\it a)} The distribution of age and metallicity in the\nbest-fit model of the disk data (assuming a 33\% contamination from\nthe spheroid, which has been subtracted). The area of the filled circles is\nproportional to the number of stars in that isochrone group. The\ndistribution shown here is clearly distinct from those shown in\nFigures~\ref{halofit} and \ref{strmfit}. {\it b)} The distribution of\nage and metallicity for individual thick disk stars in the solar\nneighborhood, from the photometric ({\it open boxes}) and\nspectroscopic ({\it filled boxes}) measurements of Ibukiyama \&\nArimoto (2002; their Figure~8). {\it c)} The distribution of age and\nmetallicity for individual thin disk stars in the solar neighborhood,\nfrom the photometric ({\it open circles}) and spectroscopic ({\it\nfilled circles}) measurements of Ibukiyama \& Arimoto (2002; their\nFigure~5).}\n\label{diskfit_comp}\n\end{figure}\n\nOur star formation history in the disk is in rough agreement with that\nfound by other groups studying the outskirts of the disk with\nshallower {\it HST} data. Looking at a field $\sim$15$^{\prime}$\nfurther away from the galaxy center than our own field, Ferguson \&\nJohnson (2001) found a somewhat older and more metal-poor population;\nthey quoted a mean age $\gtrsim 8$~Gyr and a metallicity of\n$<$[Fe\/H]$>$~$\sim -0.7$. 
They reported trace populations of young\nstars ($\\sim 1.5$--3 $M_\\odot$) and ancient metal-poor stars ($\\gtrsim\n10$~Gyr and [Fe\/H]$\\sim -1.7$), which we also find in our field.\nFerguson \\& Johnson (2001) assumed that disk stars comprised\n$\\sim$95\\% of their field population, based on an extrapolation of\nthe Walterbos \\& Kennicutt (1988) decomposition. During our\nobservation planning, we also used the work of Walterbos \\& Kennicutt\n(1988) as a guide, and estimated that the disk contribution in our own\nfield was similarly high. We were subsequently surprised to find that\nthe kinematic data in our field imply that the disk in fact comprises only\n67\\% of the population (Figure~\\ref{cmdsvels}); it must be even lower\nin the Ferguson \\& Johnson (2001) field. The disk is clearly falling\noff more rapidly than an extrapolation of the Walterbos \\& Kennicutt\n(1988) data from the interior. Note that on the other side of the\ngalaxy, looking in the outer disk near the massive cluster G1, Rich et\nal.\\ (2004) also found a population dominated by intermediate-age\nstars (6--8~Gyr). The dominance of intermediate-age stars in the\nouter disk of Andromeda appears to be ubiquitous.\n\nLooking at fields sampling a wide range of radial distance and\nazimuthal angle in Andromeda, Ibata et al.\\ (2005) found significant\nnumbers of stars moving with velocities close to the expected mean\nvelocity for circular orbits. They found these stars primarily at\ndistances of 15--40~kpc from the center, with possible detections out\nto 70~kpc. Their extended disk has an exponential scale length of\n5.1~kpc, similar to that of the bright inner disk, but its irregular\nmorphology and substructure strongly suggest that it is dominated by tidal\ndebris. They estimate that the luminosity of this ``disk-like\nstructure'' accounts for $\\sim$10\\% of the total luminosity in the M31\ndisk. 
For reference, their ``F13'' field is near our outer disk field\n($\\sim$10$^{\\prime}$ away), and shows kinematic structures very\nsimilar to those in Figure~\\ref{cmdsvels}c; their data show a narrow\npeak near the velocity expected for stars orbiting in the disk, and a\nmuch broader peak for spheroid stars that show little rotation with\nthe disk. Ibata et al.\\ (2005) argue that their extended disk is more\nlikely associated with the thin disk than the thick disk of Andromeda.\nHowever, given the kinematic and population data in our outer disk field, it\nwould seem more likely that \ntheir disk-like structure is an extension of the thick\ndisk. This would also be consistent with its irregular morphology,\ngiven that thick disks are thought to form via mergers that disrupt\nthe thin disk (see Wyse et al.\\ 2006 and references therein). \\\\\n\n\\subsection{Spheroid and Stream}\n\nAs found by Brown et al.\\ (2003), the Andromeda spheroid population\nspans a surprisingly wide range of age and metallicity, especially\ncompared to the halo of the Milky Way. Given the substructure in\nAndromeda (Ferguson et al.\\ 2002; Figure~\\ref{mapfig}) and the success\nof $\\Lambda$CDM models, we have strong observational and theoretical\nreasons for turning to merger scenarios as possible explanations for\nthe observed distribution of age and metallicity. One can imagine\nthat, compared to the Milky Way, Andromeda has experienced many more\nsmall mergers or a few more large ones. 
These mergers may have\npolluted the inner spheroid with their own material and material\nfrom the Andromeda disk and bulge; in this scenario, the declining\npresence of this pollution at increasing radius would account\nfor the appearance of the spheroid beyond 30~kpc, which looks\nmore like a canonical metal-poor halo (Guhathakurta et al.\\ 2005; \nIrwin et al.\\ 2005).\n\nIf the Andromeda spheroid is the result of many smaller mergers that\ndid not occur in the Milky Way, one must ask why there is such a\nstatistically significant distinction between the merger histories of\ntwo similarly-sized spirals in the same galaxy group. Is Andromeda\nthe ``normal'' massive spiral, having cannibalized 10 small galaxies\nin its history, while the Milky Way is a 3$\\sigma$ outlier, having\ncannibalized only 1 small galaxy? Alternatively, if the Andromeda\nspheroid was polluted by one large merger that did not occur in the\nMilky Way, one may ask if such a merger is consistent with the\ndisturbed, but not destroyed, Andromeda disk. Plausible merger\nscenarios must balance both of these concerns.\n\nRecent models by Font et al.\\ (2006a) show promise in this regard. In\ntheir various realizations of a spiral galaxy halo, two models stand\nout. One halo underwent a large accretion event ($10^{8-9}$ $M_\\odot$ stellar\nmass) 11 Gyr ago, and the other underwent two accretion events ($10^9$\n$M_\\odot$ stellar mass) $\\sim$8.5~Gyr ago; in the former case, the resulting\nhalo had a lower mean metallicity, with\n$<$[Fe\/H]$>$=$-1.3$, while in the latter case, the resulting\nhalo had a significantly higher mean metallicity, with $<$[Fe\/H]$>$=$-0.9$.\nVel$\\acute{\\rm a}$zquez \\& White (1999) find that, depending upon the\norbit of the infalling satellite, satellites with up to 20\\% of the\ndisk mass can be accreted without destroying the disk. Clearly the\namount of disk disruption spans a continuum of outcomes depending upon\nthe mass of the infalling satellite and its orbit. 
Given a mass of\n$\\approx 7 \\times 10^{10}$ $M_\\odot$ in Andromeda's disk (Geehan et\nal.\\ 2006), the disk could survive the accretion of one or two\n$\\sim$10$^{9-10}$ $M_\\odot$ satellites that would in turn\nsignificantly increase the spheroid metallicity. It is worth noting\nthat in the Font et al.\\ (2006a) models, when metal-rich stars are\npresent in the spheroid, they are still predominantly old, whereas the\nmetal-rich stars are very clearly of intermediate age in our own data.\nWith only 11 of these computationally-intensive realizations, it\nappears that the Font et al.\\ (2006a) simulations do not sufficiently\npopulate the possible parameter space to demonstrate if these old\nmetal-rich stars are a fluke or a general tendency in the models.\nIn contrast, recent simulations by Renda et al.\\ (2005) show that spiral\ngalaxies with more extended merging histories can have halos that are\nboth younger and metal-rich.\nCould the distinction between the spheroids of the Milky Way and\nAndromeda be due to the ingestion of something like the LMC? There is\nalso evidence that the globular cluster system of Andromeda includes\nclusters much younger than those in our own Galaxy, although it is\ndebatable if these clusters could have originated in the accretion of\nsomething like the LMC (e.g., Puzia et al.\\ 2005; Burstein et al.\\\n2004; Beasley et al.\\ 2005), which hosts a large globular cluster\nsystem spanning a wide range of ages.\n\nAndromeda is not alone in having a metal-rich spheroid with an age\ndispersion. The halo of NGC5128 (Cen A) is metal rich, with\n$<$[Fe\/H]$> = -0.41$ (Harris, Harris, \\& Poole 1999). The presence of\nlong period variables with extremely long periods (Rejkuba et al.\\\n2003) implies the presence of young stars, while the analysis of the\nHB, RGB, and AGB populations found in deep {\\it HST} photometry of the\ngalaxy imply an average age of $\\sim$8~Gyr in its halo (Rejkuba et\nal.\\ 2005). 
The galaxy also shows evidence for mergers in its shells\nand dust lane (Malin, Quinn, \\& Graham 1983).\n\nThe relatively high metallicity of the stream implies its progenitor\nwas at least as massive as $10^9$ $M_\\odot$ (see Dekel \\& Woo 2003);\nas such, most numerical simulations of the stream assume it is a dwarf\ngalaxy that was only recently disrupted by close passage to Andromeda,\nwithin the last $\\sim$0.5~Gyr (Font et al.\\ 2006b; Fardal et al.\\\n2006). The star formation history in the stream is plausible for such\na progenitor, given the wide range of star formation histories seen in\nLocal Group dwarfs (Mateo 1998). As noted by Brown et al.\\ (2006), it\nwould be worth exploring whether or not the progenitor is a disk\ngalaxy, given that the stream combines a relatively high metallicity\nwith a low velocity dispersion; however, models by Font et al.\\\n(2006b) and Fardal et al.\\ (2006) imply this discrepancy in velocity\ncan perhaps be explained by dynamical cooling.\n\nThe strong similarities between the spheroid and stream populations\noffer another clue, but it is a puzzling one. The field population of\nthe Milky Way halo does not look to be comprised of populations like\nthose of present-day dSphs (Shetrone et al.\\ 2003), but the field\npopulation of the Andromeda spheroid looks nearly identical to that of\none of its infalling satellites. A natural question is whether the\n$10^{9-10}$ $M_\\odot$ merger needed to explain the spheroid data is\nsitting in plain sight: the stream. However, if the progenitor of the\nstream really is on its first or second orbit around the galaxy, with\nmuch of its debris coherent on the sky, it is unlikely to comprise a\nsignificant fraction of the population in the relatively smooth\nregions of the spheroid, such as our field. 
As noted by Brown et al.\\\n(2006), the star count map of Andromeda (Figure~\\ref{mapfig}) and the\nkinematic data (Figure~\\ref{cmdsvels}) imply that the stream dominates\nover the spheroid by a 3:1 ratio in our stream field, but these same\ndata show no evidence for a single dominant stream in our spheroid\nfield. Current orbit models for the stream span a wide range of\npossibilities (e.g., Font et al.\\ 2006b; Fardal et al.\\ 2006); even if\nthe stream wraps around the Andromeda nucleus and then passes through\nour spheroid field (e.g., Ibata et al.\\ 2004), it is implausible that\nit would spread out enough to hide in the star count maps and\nkinematic data, yet still comprise $\\sim$75\\% of the population in our\nspheroid field. Furthermore, the metallicity distribution in our\nspheroid field is clearly very similar to the metallicity distribution\nin other fields throughout the inner spheroid of Andromeda (Ferguson\net al.\\ 2002; Durrell et al.\\ 1994, 2001, 2004). \nThus, arguments (e.g., Ibata et al.\\ 2004) that the intermediate-age \nmetal-rich stars in our spheroid field simply represent contamination \nby the stream would seem to imply\nthat the inner spheroid is metal-rich and ancient everywhere except\nfor our spheroid field, where $\\sim$40\\% of the population is\nmetal-rich and of intermediate age. Instead of invoking such a\nconspiracy, it is much more plausible that the high metallicities seen\nthroughout the inner spheroid are associated with intermediate-age\npopulations, as in our particular spheroid field.\n\nThe modeling of the stream's progenitor and its possible orbits is\nstill in the early stages. Can a model be constructed where the\ndebris of the stream progenitor dominates the relatively smooth inner\nspheroid everywhere, while maintaining a coherent tidal tail on the\nsky? 
At the moment, models for the stream progenitor are focused on a\n$\\sim$10$^9$ $M_\\odot$ dwarf galaxy progenitor that only recently\nmerged with Andromeda (within the last few hundred Myr). How far can\nthe models be pushed away from this scenario? At what point does the\ndisruption of the Andromeda disk exceed the level of substructure seen\nby Ferguson et al.\\ (2002)? Depending upon the orbit, the progenitor\ncould be as massive as a few $10^{10}$ $M_\\odot$ without destroying\nthe Andromeda disk. If the progenitor was significantly more massive\nthan the $10^9$ $M_\\odot$ typically assumed now, and perhaps an\ninfalling disk galaxy, could the start of the merger be pushed\nbackward in time, such that its debris could more fully pollute the\ninner spheroid while still leaving a coherent debris stream on the\nsky? Alternatively, the pollution of the inner spheroid might be due\nto a merger event unrelated to that which produced the stream. The\nrecent models of Penarrubia, McConnachie, \\& Babul (2006) are\ninteresting in this regard; they find that an ancient merger with a\nmassive dwarf (10$^{9-10}$ $M_\\odot$) could produce the extended\ndisk-like population found by Ibata et al.\\ (2005).\n\nBrown et al.\\ (2006) offered two other possible explanations for the\nstream and spheroid similarities, but noted that they were\nproblematic. One possibility is that the spheroid is comprised of\nmany disrupted satellites similar to the stream progenitor. However, it is\ndifficult to see how the ensemble average of these disrupted\nsatellites (the spheroid) would so closely resemble the population\nin a single disrupted satellite (the stream). Although the star\nformation history for the stream is plausible for a dwarf galaxy,\nit is not plausible that it is representative for all dwarf\ngalaxies already cannibalized by Andromeda. 
Another possibility is\nthat the stream is comprised of material disrupted from the Andromeda\ndisk and that the same event polluted the spheroid, but it is unclear\nif the dynamics and energetics of such a scenario can actually work,\nand the stellar populations in our three fields offer evidence against this\nscenario (Figure~\\ref{compfits}). The isolated disk population\n(removing the spheroid contamination) is dominated by metal-rich\n($-0.5 < $ [Fe\/H] $< +0.5$) intermediate-age (4--8 Gyr) stars. The\nisolated stream population (removing the spheroid contamination), on\nthe other hand, also contains stars that are both older and more\nmetal-poor. If our disk population is representative of the outer\ndisk in general, creating the stream from a disruption of disk material\nwould not result in a stream hosting so many old and metal-poor stars.\nThis does not preclude significant contamination of the spheroid by\ndisrupted disk stars -- the population mix in our spheroid field might\nbe an older metal-poor halo with some contribution of disrupted disk\nstars -- but we are still left with coincidence to explain the\nsimilarity between the stream and spheroid populations.\n\n\\subsection{Does the Disk Contribute to our Spheroid Field?}\n\nRecently, Worthey et al.\\ (2005) put forth a provocative hypothesis,\nbased on chemical evolution arguments and the high metallicity of the\nAndromeda spheroid: that all fields in the spheroid observed to date\nare actually dominated by the disk. They suggested that this\nhypothesis could explain the surprisingly broad range of ages found in\nour spheroid field (Brown et al.\\ 2003). More recently, Ibata et al.\\\n(2005) found stars 40 kpc from the center of Andromeda (in all\ndirections) that appear to be moving in the disk. 
With the kinematic\nand population information available, we can show that the disk\ncontribution in our spheroid field must be very small ($\\lesssim\n1$\\%), as originally claimed by Brown et al.\\ (2003).\n\nThe relevant data are in Figure~\\ref{cmdsvels} and\nTable~\\ref{fieldtab}. Given the disk inclination of 12.5$^{\\rm o}$,\nour spheroid field is 11 kpc from the galactic center in the plane of\nthe sky and 51 kpc from the center in the plane of the disk. The disk\nfield is 25 kpc from the galactic center in both the plane of the sky\nand the plane of the disk. \n\nFigure~\\ref{cmdsvels}c shows the distribution of velocities in our\ndisk field. There are clearly two components. The broader component\n(comprising $\\sim$1\/3 of the population) is at the systemic velocity\nof Andromeda, while the narrower component is redshifted with respect\nto Andromeda due to the rotation of the Andromeda disk. In the\nWorthey et al.\\ (2005) scenario, one would associate the broad\ncomponent with the thick disk and the narrow component with the thin\ndisk, with only the latter component significantly rotating. However,\nwe know from the disk CMD that there is no evidence for a thin disk\npopulation in this field; instead, the population appears to be\ndominated by a thick disk and spheroid. Thus, it is much more\nplausible that the narrow velocity structure is the thick disk and the\nbroad velocity structure is the spheroid. These designations would\nalso explain why the narrow component is significantly rotating but\nthe broad component is not.\n\nCompared to the disk field, the spheroid field is twice as far from\nthe galactic center in the plane of the disk, but half the distance\nfrom the galactic center in the plane of the sky. So, moving our\nattention from the disk field to the spheroid field, we expect the\ncontribution from the disk to decline and the contribution from the\nspheroid to increase. 
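This expectation can be made quantitative with a back-of-the-envelope Python sketch (our own illustration, assuming an exponential disk with a scale length of roughly 5~kpc, per Walterbos \& Kennicutt 1988, and the in-disk-plane distances of 25 and 51~kpc given above; the true fraction also depends on how the spheroid density varies between the fields):

```python
import math

# Distances measured in the plane of the Andromeda disk (from the text).
r_disk_field = 25.0      # kpc, outer disk field
r_spheroid_field = 51.0  # kpc, spheroid field
h = 5.0                  # kpc, exponential disk scale length (assumed)

# Surface density of an exponential disk falls as exp(-r / h).
drop = math.exp(-(r_spheroid_field - r_disk_field) / h)

# The disk supplies ~2/3 of the stars in the disk field.  Holding the
# spheroid contribution fixed (it actually rises at the smaller projected
# radius of the spheroid field), the disk fraction there is at most:
disk = (2.0 / 3.0) * drop
frac = disk / (disk + 1.0 / 3.0)

print(f"disk surface density falls by a factor of {1.0 / drop:.0f}")
print(f"upper bound on disk fraction in spheroid field: {100.0 * frac:.1f}%")
```

With these numbers the disk fraction comes out at the one-percent level, consistent with the $\lesssim 1$\% argued for in the text.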
With an exponential disk scale length of\n$\\approx$5~kpc (Walterbos \\& Kennicutt 1988), the disk contribution\nmust drop from the $\\sim$2\/3 in the disk field to $<1$\\% in the\nspheroid field. Indeed, Figure~\\ref{cmdsvels}a shows no indication of\na single narrow component at the Andromeda systemic velocity, as one\nwould expect if the disk were dominating this position, 51 kpc on the\nminor axis. Furthermore, it is worth noting that the hydrogen column\ndensity in the spheroid field is nearly 25 times smaller than that in the disk\nfield (Table~\\ref{fieldtab}).\n\nIbata et al.\\ (2005) found stars moving with disk velocities at\ndistances of 15--40 kpc from the galactic center, but they note that\nour spheroid field lies beyond the break in the density profile of\ntheir ``disk-like structure.'' They show no evidence that this\nstructure should comprise a significant population in our field. The\nvelocity dispersion in their extended disk is 30 km s$^{-1}$, which is\nmuch narrower than the 80 km s$^{-1}$ we see in our spheroid field.\nThe velocity dispersion in our spheroid field is in agreement with the\nkinematics of the planetary nebulae (Halliday et al.\\ 2006;\nHurley-Keller et al.\\ 2004), which show a distribution of similar\nbreadth and evidence for some rotational support.\n\nAn additional piece of evidence comes from the similarity of the\nstream and spheroid populations, given that the Worthey et al.\\ (2005)\nhypothesis rests largely on metallicity. If metallicity alone were\nenough to prove that a field in Andromeda is dominated by disk stars,\none could try to argue that our stream field was dominated by disk\nstars, too. However, it is clear from the morphology, HB luminosity,\nand kinematics in our stream field that $\\sim$75\\% of the population\nin this field is comprised of two kinematically-cold components\nfalling toward Andromeda (Kalirai et al.\\ 2006b). 
There is no way that\nthe stream is composed of stars residing in the Andromeda disk.\n\nOn all of these grounds, one can see that the spheroid field must have\na negligible contribution from stars currently moving in the Andromeda\ndisk. It is also clear that the spheroid velocity distribution is not\nas hot as one would expect for a hot halo, nor does it reflect the\nkinematics of the halo globular cluster system ($\\sigma \\sim 150$ km\ns$^{-1}$; Perrett et al.\\ 2002). The high metallicity and wide age\ndistribution of the spheroid is likely due to the merger history of\nAndromeda, with the spheroid polluted by a combination of disrupted\nsatellites, stars born in the merger(s), and stars disrupted from the\nAndromeda disk.\n\n\\section{Summary}\n\\label{secsumm}\n\nUsing deep {\\it HST} observations of Andromeda, we have reconstructed\nthe complete star formation history in three fields: the spheroid,\ntidal stream, and outer disk.\n\nIn the best-fit model to the spheroid, 40\\% of the stars are\nmetal-rich and younger than 10~Gyr, in stark contrast to our own\nGalactic halo. The data cannot be reproduced by a population of old\nstars alone (age~$>$~10~Gyr). Although the fit is dominated by old\nmetal-poor stars and young metal-rich stars, a non-negligible\npopulation of young metal-poor stars is also present, implying that at\nleast some stars in the spheroid were accreted from dwarf galaxies or\nformed from relatively pristine infalling material. 
Since the\ndiscovery of a metal-rich intermediate-age population in our spheroid\nfield (Brown et al.\\ 2003), various explanations have been put forth\nin the literature, including the hypothesis that the disk dominates\nall inner spheroid fields (Worthey et al.\\ 2005), and the idea that\nour spheroid field is contaminated by the tidal stream and not\nrepresentative of the inner spheroid in general (Ibata et al.\\ 2004).\nIn the former scenario, the spheroid field is not special, but it is\nactually the disk instead of the spheroid, whereas in the latter\nscenario, the field is special, because it is the stream and not the\nspheroid. The constraints provided by the population and kinematic\ndata argue that the spheroid field does not have a significant\ncontribution from stars currently residing in Andromeda's disk, but\nthe young metal-rich population may be the result of stars disrupted\nfrom Andromeda's disk by an earlier merger event. The star count maps\nand kinematic data show no evidence for a dominant stream passing\nthrough the spheroid field, as required to explain the similarity\nbetween the spheroid and stream populations by some chance\nintersection of the spheroid field with the stream's orbit.\nFurthermore, the metallicity distribution in the spheroid field looks\nmuch like that observed in various other fields throughout the inner\nspheroid (Ferguson et al.\\ 2002; Durrell et al.\\ 1994, 2001, 2004).\nIt is much more likely that the metal-rich populations throughout the\ninner spheroid are of intermediate age, as found in our spheroid\nfield, instead of invoking the pathological situation where these\nmetal-rich populations are ancient everywhere except in our spheroid\nfield.\n\nIn the best-fit model to the stream, 70\\% of the stars are younger\nthan 10~Gyr. A detailed comparison of the age and metallicity\ndistributions in the stream and spheroid shows them to be remarkably\nsimilar but distinct. 
It is unclear if the similarity implies that\nthe stream's progenitor is representative of the objects that formed\nthe inner spheroid or if the entire inner spheroid is polluted by\nstars stripped from the stream's progenitor during its particular\ndisruption. The distinction between the disk and stream populations\n-- with the stream including old metal-poor stars that are lacking in\nthe disk -- suggests that the stream is not comprised of stars\ndisrupted from the Andromeda disk.\n\nThe outer disk of Andromeda more closely resembles the thick disk of\nthe solar neighborhood than either the spheroid or the stream.\nAlthough a trace population of 0.2--1.0~Gyr stars is present, there\nare few stars younger than 4~Gyr, and thus the outer disk does not\nappear to host a significant thin disk component. In the best-fit\nmodel to the disk data, 80\\% of the stars are younger than 10~Gyr;\nindeed, we also showed that these data are consistent with a\npopulation that is completely devoid of stars older than 10~Gyr. The\nminority population of old metal-poor stars in the disk field is\nconsistent with the field's kinematics, which show a $\\sim$33\\%\ncontribution from the spheroid. If the population in this spheroid\ncontribution is assumed to be the same as that in our spheroid field,\nthe resulting model reproduces the data extremely well, and implies\nthat $\\sim$70\\% of the stars in the outer disk are 4--8~Gyr old. The\ndisk of Andromeda clearly shares the ``G dwarf problem'' seen in the\nsolar neighborhood.\n\nIn the upcoming {\\it HST} observing cycle, we will be observing four\nmore deep fields in the Andromeda spheroid. One will be at\n$\\sim$22~kpc on the minor axis, and the other three will be in the\nvicinity of $\\sim$35~kpc on the minor axis, thus bracketing that point\nin the spheroid where there is a transition from a bulge-like\npopulation to one that more closely resembles a canonical halo. 
The\nstar formation history in these additional fields should help to further\ndisentangle the complex formation history of the Andromeda system\nand its various substructures.\n\n\\acknowledgements\n\nSupport for proposals 9453 and 10265 was provided by NASA through a\ngrant from STScI, which is operated by AURA, Inc., under NASA contract\nNAS 5-26555. P.G. would like to acknowledge partial support from NSF\ngrants AST-0307966 and AST-0507483 and NASA\/STScI grants GO-10265\nand GO-10134. R.M.R. also acknowledges support from NSF grant AST-0307931\nand from NASA\/STScI grants GO-9453 and GO-10265. We are\ngrateful to P.\\ Stetson for providing his DAOPHOT code, and to J.\\\nHarris for providing his StarFish code. During our observation\nplanning, A. Ferguson kindly provided ground images of our fields;\nwe also thank her for providing the star count\nmap used in Figure~\\ref{mapfig}. We wish to acknowledge the\nassistance D. VandenBerg provided in determining the transformation of\nhis isochrones to the ACS bandpasses. D. Taylor, P. Royle, and\nD. Soderblom were enormously helpful during the scheduling and\nexecution of these large {\\it HST} programs. D. Thilker kindly\nprovided $N_{HI}$ values at our field locations using his published\nand unpublished maps of M31. We thank J. Kalirai and D. Reitzel for\nproviding the velocity histograms in Figure~\\ref{cmdsvels}, and\nF. Hammer, A. Font, and M. 
Fardal for enlightening discussions.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzeytf b/data_all_eng_slimpj/shuffled/split2/finalzzeytf new file mode 100644 index 0000000000000000000000000000000000000000..272c003ab4668372ce10c5905b365cb8aba5e97b --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzeytf @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nUnsupervised dimensionality reduction is a key step in many applications, including visualization \\cite{maaten2008visualizing} \\cite{mcinnes2018umap}, clustering \\cite{cohen2015dimensionality} \\cite{niu2011dimensionality}, and preprocessing for downstream supervised learning \\cite{pechenizkiy2004pca}. Principal Component Analysis (PCA) is one well-known technique for dimensionality reduction, which notably makes no assumptions about the ordering of the samples in the data matrix $X \\in \\RR^{N \\times D}$. Multivariate Singular Spectrum Analysis (MSSA) \\cite{hassani2013multivariate} is an extension of PCA for time series data, which has been successfully applied in applications like signal decomposition and forecasting \\cite{hassani2009forecasting} \\cite{mahmoudvand2015forecasting} \\cite{patterson2011multivariate}. In MSSA, each row is read at a certain time step, and thus is influenced by the ordering of the samples. MSSA works primarily by identifying key oscillatory modes in a signal, which also makes it useful as a general-purpose signal denoiser. However, MSSA (like PCA, upon which it is based) is limited to finding the principal components that capture the maximal variance in the data. In situations where the information of interest explains little overall variance, these methods fail to reveal it. 
Recently, extensions like contrastive PCA (cPCA) \\cite{abid2018exploring, zou2013contrastive,ge2016rich} have shown that utilizing a background dataset $Y \\in \\RR^{M \\times D}$ can help better discover structure in the foreground (target) $X$ that is of interest to the analyst.\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=\\columnwidth]{diagram}\n \\caption{Schematic illustrating the relations among PCA, cPCA, MSSA, and cMSSA.}\n \\label{fig:diagram}\n\\end{figure}\n\nContrastive Multivariate Singular Spectrum Analysis (cMSSA) generalizes cPCA and applies it to time series data. Figure~\\ref{fig:diagram} visualizes the relationships between the four methods. As a contrastive method, cMSSA emphasizes salient and unique sub-signals in time series data rather than just the sub-signals that comprise the majority of the structure. So while standard MSSA is useful for denoising a signal, cMSSA additionally ``denoises'' signals of structured but irrelevant information.\n\n\\section{Contrastive Multivariate Singular Spectrum Analysis}\\label{cmssa}\n\n\\textbf{Standard MSSA}\nConsider a centered one-channel time series $\\mathbf{x} \\in \\RR^T$. We construct a Hankel matrix $H_\\mathbf{x} \\in \\RR^{T' \\times W}$ with window size $W$ as follows:\n\\[\nH_\\mathbf{x} = \n\\begin{pmatrix}\nx_1 & x_2 & \\ldots & x_W \\\\\nx_2 & x_3 & \\ldots & x_{W+1} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\nx_{T'} & x_{T'+1} & \\ldots & x_T \\\\\n\\end{pmatrix}\n\\]\nwhere $T' = T-W+1$.\nTo extend to the multivariate case, let $X \\in \\RR^{T \\times D}$ be a $D$-channel time series that runs for $T$ steps. We construct the Hankelized matrix $H_X$ with window $W$ by horizontally concatenating the per-channel Hankel matrices into a $T'$-by-$DW$ matrix:\n$H_X = [H_{\\mathbf{x}^{(1)}} ; H_{\\mathbf{x}^{(2)}} ; \\ldots ; H_{\\mathbf{x}^{(D)}}]$. Next we compute the covariance matrix $C_X \\in \\RR^{DW\\times DW}$ for $H_X$. 
The next step is to perform the eigendecomposition on $C_X$, yielding $DW$ eigenvectors. Of these we take the top $K$ vectors with the largest corresponding eigenvalues. We denote $\\mathbf{e}^{(k)}$ as the eigenvector with the $k$th largest eigenvalue. We collect the vectors into a matrix $E \\in \\RR^{DW \\times K}$.\n\nTo transform our original time series $X$, we have two options: (a) Project $X$ into the principal component (PC) space defined by $E$:\n$A = H_X E$ or (b) use $A$ to compute the $k$th reconstructed component (RC) $R^{(k)}$ as done in the SSA literature:\n\n\\[\nR^{(k)}_{tj} = \\frac{1}{W_t} \\sum^{U_t}_{t' = L_t} A_{t-t'+1, k} \\cdot \\mathbf{e}^{(k)}_{(j-1)W + t'}\n\\]\n\nwhere $L_t = \\max(1, t-T+W)$, $U_t = \\min(t, W)$, and $W_t = U_t - L_t + 1$. The rows of $R^{(k)}$ are indexed by time $t \\in \\{1,\\ldots,T\\}$ and the columns by channel $j \\in \\{1,\\ldots,D\\}$. Summing up the reconstructed components reproduces a denoised version of the original signal. For our purposes, we opt instead to take the horizontal concatenation of the reconstructed components as the second transform:\n$R = [R^{(1)} ; R^{(2)} ; \\ldots ; R^{(K)}].$\nTo handle multiple time series, one simply vertically stacks each Hankelized matrix. The algorithm proceeds identically from there.\n\n\\bigskip\n\\noindent \\textbf{Contrastive MSSA}\nThe modification to MSSA we introduce is via a new variable $\\alpha \\geq 0$ we call the \\emph{contrastive} hyperparameter. We construct $H_Y$ for another $D$-channel time series $Y$ (the background data) via the same process. It is not required that $X$ and $Y$ run for the same number of time steps, only that their channels are aligned. We compute a contrastive covariance matrix $C = C_X - \\alpha C_Y$ and perform the eigendecomposition on $C$ instead of $C_X$. 
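The decomposition just described can be sketched compactly in NumPy (an illustrative sketch, not the authors' implementation; the names `hankel`, `cmssa`, and `reconstructed_component` are ours, and setting `alpha = 0` recovers standard MSSA):

```python
import numpy as np

def hankel(x, W):
    """Per-channel Hankel matrix: row t holds x[t : t+W], shape (T-W+1, W)."""
    T = len(x)
    return np.stack([x[t:t + W] for t in range(T - W + 1)])

def cmssa(X, Y, W, K, alpha):
    """Contrastive MSSA on foreground X and background Y, both (T, D) arrays.

    Returns the top-K eigenvectors E of C = C_X - alpha * C_Y and the
    principal-component projection A = H_X E."""
    H_X = np.hstack([hankel(X[:, d], W) for d in range(X.shape[1])])
    H_Y = np.hstack([hankel(Y[:, d], W) for d in range(Y.shape[1])])
    C = np.cov(H_X, rowvar=False) - alpha * np.cov(H_Y, rowvar=False)
    evals, evecs = np.linalg.eigh(C)              # C is symmetric
    E = evecs[:, np.argsort(evals)[::-1][:K]]     # top-K by eigenvalue
    return E, H_X @ E

def reconstructed_component(A, E, k, T, W, D):
    """k-th reconstructed component R^(k), shape (T, D): diagonal averaging
    of the rank-one term, translating the paper's 1-based formula to
    0-based indexing."""
    R = np.zeros((T, D))
    for t in range(1, T + 1):
        L, U = max(1, t - T + W), min(t, W)       # L_t and U_t
        for j in range(1, D + 1):
            terms = [A[t - tp, k] * E[(j - 1) * W + tp - 1, k]
                     for tp in range(L, U + 1)]
            R[t - 1, j - 1] = sum(terms) / (U - L + 1)
    return R
```

As a sanity check, with $\alpha = 0$ and $K = DW$ the reconstructed components sum back to the original series, matching the statement above that summing the RCs reproduces the signal.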
The intuition for this is that by subtracting out a portion of the variance in $Y$, the remaining variance in $X$ is likely to be highly specific to $X$ but not $Y$.\nThis is the key additional mechanism behind cMSSA --- if $\\alpha = 0$, then no contrast is performed, and cMSSA reduces down to just MSSA.\n\n\\begin{algorithm}[htb]\n\\caption{Spectral $\\alpha$-Search}\n\\label{algo:gen_alpha}\n\\begin{algorithmic}[1]\n\n\\Require Minimum $\\alpha$ to consider $\\alpha_{\\min}$, maximum $\\alpha$ to consider $\\alpha_{\\max}$, number of $\\alpha$s to consider $n$, number of $\\alpha$s to return $m$, foreground signal $X$, background signal $Y$, window $W$, and number of components $K$.\n\n\\Procedure{}{}\n \\Let{$Q$}{\\textsc{LogSpace}($\\alpha_{\\min}$, $\\alpha_{\\max}$, $n$) $\\cup \\{0\\}$}\n \\For{$\\alpha^{(i)} \\in Q$}\n \\Let{$H_X$, $H_Y$}{\\textsc{Hankel}($X$, $W$), \\textsc{Hankel}($Y$,$W$)}\n \\Let{$C_X$, $C_Y$}{\\textsc{Cov}($H_X$), \\textsc{Cov}($H_Y$)}\n \\Let{$E^{(i)}$}{\\textsc{EigenDecomp}($C_X - \\alpha^{(i)}C_Y$, $K$)}\n \\EndFor\n \\Let{$S$}{\\textsc{Empty}($\\RR^{(n+1) \\times (n+1)}$)}\n \\For{$i \\in \\{1, \\ldots, n+1\\}$, $j \\in \\{i, \\ldots, n+1\\}$}\n \\Let{$S_{i,j}, S_{j,i}$}{$\\left\\lVert {E^{(i)}}^T E^{(j)} \\right\\rVert_*$}\n \\EndFor\n \\Let{$Z$}{\\textsc{SpectralCluster}($S$, $Q$, $m$)}\n \\Let{$Q^*$}{\\{0\\}}\n \\For{$z \\in Z$}\n \\If{$0 \\notin z$}\n \\Let{$\\alpha^*$}{\\textsc{ClusterMedoid}($z$, $S$)}\n \\Let{$Q^*$}{$Q^* \\cup \\{\\alpha^*\\}$}\n \\EndIf\n \\EndFor\n \\\\\n \\Return{$Q^*$, set of $m$ best $\\alpha$s, including zero.}\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\nThe choice of $\\alpha$ is non-trivial. Algorithm~\\ref{algo:gen_alpha} outlines a routine for auto-selecting a small number of promising values for $\\alpha$. 
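The similarity $S_{i,j} = \lVert {E^{(i)}}^T E^{(j)} \rVert_*$ used in Algorithm~\ref{algo:gen_alpha} can be computed in one line; a small standalone sketch (assuming, as the eigendecomposition guarantees, that each $E$ has orthonormal columns):

```python
import numpy as np

def eigenspace_affinity(E_i, E_j):
    """Nuclear norm of E_i^T E_j for two (DW, K) matrices with orthonormal
    columns.  This equals the sum of the cosines of the principal angles
    between the two K-dimensional eigenspaces: K when the subspaces are
    identical, 0 when they are orthogonal."""
    return np.linalg.norm(E_i.T @ E_j, ord="nuc")
```

Because this score depends only on the subspaces, identical eigenspaces cluster together even when the individual eigenvectors differ by sign flips or rotations within the subspace.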
Because cMSSA is designed to assist data exploration, Algorithm~\\ref{algo:gen_alpha} uses spectral clustering to identify a diverse set of $\\alpha$ values corresponding to diverse eigenspaces and representations of $X$. The procedure works by first generating a large number of $\\alpha$s spread evenly in log-space. For each candidate $\\alpha$, we use cMSSA to compute its corresponding eigenvector matrix $E$. The procedure then performs spectral clustering, which requires a pairwise distance matrix as input. The distance metric used takes the nuclear norm of the matrix computed by multiplying the eigenvector matrices $E$ for any pair of $\\alpha$s. After specifying the number of clusters desired, we take the medoid $\\alpha$ of each cluster and return them as output. We always include 0 in this set, as the analyst may want to perform their analysis without contrast as control.\n\n\\section{Experiments}\\label{experiements}\n\n\\textbf{Synthetic example}\nTo illustrate cMSSA, we present a simple synthetic example. We generate an artificial one-channel signal $Y$ by sampling 500 sinusoids with different frequencies, amplitudes, phases, and vertical shifts. White Gaussian noise sampled from $\\mathcal{N}(0,1)$ is added in as well. We generate $X$ in the same manner, but add in a very specific sub-signal (Figure~\\ref{fig:syn_sub}) that has low variance compared to the whole time series. The signals $X$ and $Y$ are generated independently so as to rule out simple signal differencing as an explanation. We take $X$ as foreground and $Y$ as background.\n\nWe set $W=100$, $\\alpha=2$, and use only the top $K=2$ RCs. Fig.~\\ref{fig:syn_exp} displays the reconstructions computed by MSSA versus cMSSA, alongside the sub-signal that was injected into $X$. Specifically, we see that the cMSSA reconstruction shown in Fig.~\\ref{fig:syn_x_rcs_contrast} yields a noisy approximation of the sub-signal of interest, Fig.~\\ref{fig:syn_sub}. 
The variance of the noise here is comparable to the variance of the sub-signal---more noise would eventually overpower cMSSA's ability to extract the sub-signal.\n\n\\begin{figure}[htb]\n\\centering\n\\begin{subfigure}{\\columnwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{syn_sub}\n \\caption{Sub-signal specific to the foreground data $X$, which is of much lower amplitude than the other sinusoidal sub-signals in $X$.}\n \\label{fig:syn_sub}\n\\end{subfigure}\n\\begin{subfigure}{\\columnwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{syn_x_rcs}\n \\caption{Without contrast, the reconstructed time series consists of the high-amplitude sinusoidal sub-signals in $X$.}\n \\label{fig:syn_x_rcs}\n\\end{subfigure}\n\\begin{subfigure}{\\columnwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{syn_x_rcs_contrast}\n \\caption{With contrast, the reconstructed time series is able to identify the unique sub-signal in $X$.}\n \\label{fig:syn_x_rcs_contrast}\n\\end{subfigure}\n\n\\caption{Results of a synthetic experiment that demonstrates that cMSSA is able to identify unique sub-signals in a time series, even when they are of much lower amplitude than background components.}\n\n\\label{fig:syn_exp}\n\\end{figure}\n\n\\bigskip\n\\noindent \\textbf{Clustering of electrocardiograms}\nThe data used in this experiment is taken from the public MHEALTH dataset \\cite{banos2014mhealthdroid}. In the dataset, 10 individuals were asked to perform 12 physical activities as several sensors recorded varied motion data. The researchers also collected two-lead electrocardiogram (ECG) readings, which we take as dual-channel time series data. In addition to the 12 activities, there is a 13th \\textsc{NULL} class that represents ECG signals collected between each activity but which don't have labels themselves. To increase the number of individual time series, we partition each one in half. 
\n\nFor our experiments, the foreground data are all time series labelled as either \\textsc{Jogging}, \\textsc{Running}, \\textsc{Jumping}, or \\textsc{Cycling}, 20 time series each for a total of 80. These four, being the more cardio-intensive of the 12, had much more signal activity that would need to be sifted through, exactly the type of environment cMSSA is intended to handle. For background data, we take all 272 time series belonging to the \\textsc{NULL} class.\n\nTo evaluate the effectiveness of cMSSA over its non-contrastive counterpart, we run both cMSSA and MSSA with a variety of hyperparameter settings. For each fitted model, we transform the foreground data to both the PC and RC spaces. Once the transformations are computed, we perform spectral clustering into 4 clusters and compare the resulting clusters to the activity labels on the time series data, which were hitherto withheld from the algorithms. There are 3 hyperparameters: the window size $W \\in \\{8, 16, 32, 64, 128\\}$, the number of desired components $K \\in \\{1,2,4,6,8,10,12,14,16,18,20\\}$, and the contrastive parameter $\\alpha$. We use a value of $K$ only if it is less than or equal to $DW$ (where $D=2$ in this case). For $\\alpha$, we used our automatic routine to compute five key values to try for each setting of $W$ and $K$. For each run of the routine, a total of 300 candidate $\\alpha$s were considered, with the minimum and maximum $\\alpha$s being $10^{-3}$ and $10^{3}$, respectively. Of the five ultimately returned, one was zero, representing standard MSSA. Altogether, we run 530 experiments, 106 of which are standard MSSA and the remainder cMSSA.\n\nThe spectral clustering requires an affinity matrix $S \\in \\RR^{N \\times N}$ which contains the similarities between any pair of time series, where $N$ is the number of time series we wish to cluster. Let $X^{(i)}$ and $X^{(j)}$ be two time series. 
Using the FastDTW metric \\cite{salvador2007toward} with a Euclidean norm\\footnote{FastDTW is not a symmetric metric, so we take the minimum between the two orderings of the operands.}, we define the similarity $S_{ij}$ to be\n$\n\\frac{1}{\\textsc{FastDTW}(X^{(i)}, X^{(j)}) + 1}.\n$\nThe cluster evaluation uses the well-rounded BCubed metric \\cite{amigo2009comparison} to compute the precision, recall, and F1 harmonic mean for a particular cluster prediction. We also perform the evaluation in the model-free sense, where we simply cluster the time series with no transformation, as a basic baseline.\n\n\\begin{table}[htb]\n \\caption{Best cMSSA and MSSA results in terms of maximum F1 score. Model-free clustering baseline also included. For the best MSSA and cMSSA models (with $\\alpha$ automatically selected via Algorithm 1), the PC transform outperformed the RC transform. Best result per metric (precision, recall, or F1) is bolded.}\n \\label{tab:mhealth_best}\n \\centering\n \\begin{tabular}{l | c c c}\n \\toprule\n Model &\n $W$ &\n $K$ &\n P \/ R \/ F1 \\\\\n \\midrule\n \n Model-free & - & - & 50.49 \/ 48.82 \/ 49.54 \\\\\n MSSA & 16 & 16 & 57.67 \/ 64.63 \/ 60.95 \\\\\n cMSSA ($\\alpha = 12.41$) & 128 & 1 & \\textbf{65.44} \/ \\textbf{75.88} \/ \\textbf{70.27} \\\\\n \n \\bottomrule\n \\end{tabular}\n\\end{table}\n\nTable~\\ref{tab:mhealth_best} reports the best representative contrastive and non-contrastive models, comparing both to the model-free baseline. We observe a number of things. First, both MSSA and cMSSA outperform the model-free baseline. Second, cMSSA has 9-10 point gains over MSSA in each of precision, recall, and F1. Third, for both models, using the PC transform $A$ rather than the RC transform $R$ yielded better results.\nFinally, of the $DW$ PCs available, MSSA gets its best performance using half (16 out of 32), while cMSSA only uses one PC out of the maximum of 256 available. This highlights an interesting efficiency of cMSSA. 
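The affinity above can be sketched as follows; to keep the sketch dependency-free we substitute a plain $O(nm)$ DTW for FastDTW (exact DTW is symmetric, so the minimum over operand orderings matters only for the FastDTW approximation used in the experiments):

```python
import numpy as np

def dtw_distance(a, b):
    """Exact dynamic-time-warping distance with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def similarity(a, b):
    """S_ij = 1 / (min over the two operand orderings + 1), as in the text."""
    d = min(dtw_distance(a, b), dtw_distance(b, a))
    return 1.0 / (d + 1.0)
```

Identical series get similarity 1, and the similarity decays toward 0 as the warped distance grows, which makes $S$ a valid affinity matrix for spectral clustering.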
By filtering out unnecessary components, the remaining components not only account for less signal variance, but also provide diminishing returns with each subsequent component used.\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=\\columnwidth]{f1_vs_F1_A}\n \\caption{Plot of paired F1 scores. Each point is for a particular setting of $W$ and $K$. The contrastive F1 score used is the maximum of the four runs (one per automatically selected $\\alpha$) for that setting of the hyperparameters. The $x=y$ line is drawn as guidance. Only points where the transform used is $A$ are shown.}\n \\label{fig:f1_vs_F1_A}\n\\end{figure}\n\nFigure~\\ref{fig:f1_vs_F1_A} shows a more granular view of the general gains to be had from using cMSSA. For a particular setting of $W$ and $K$, we plot the F1 score for the non-contrastive case versus the contrastive case. Of the four values of $\\alpha$ used in the contrastive case, we take the model that had the greatest F1. Points below the diagonal line mean that the contrast was useful for a particular setting of the hyperparameters.\n\n\\begin{figure}[htb]\n\\begin{subfigure}{0.46\\columnwidth}\n\\begin{subfigure}{\\columnwidth}\n \\includegraphics[width=\\columnwidth]{random_cycling_k16_no_contrast_best_compare}\n \\label{fig:random_cycling_k16_no_contrast_best_compare}\n\\end{subfigure}\n\\begin{subfigure}{\\columnwidth}\n \\includegraphics[width=\\columnwidth]{random_jumping_k16_no_contrast_best_compare}\n \\label{fig:random_jumping_k16_no_contrast_best_compare}\n\\end{subfigure}\n\\begin{subfigure}{\\columnwidth}\n \\includegraphics[width=\\columnwidth]{random_jogging_k16_no_contrast_best_compare}\n \\label{fig:random_jogging_k16_no_contrast_best_compare}\n\\end{subfigure}\n\\begin{subfigure}{\\columnwidth}\n \\includegraphics[width=\\columnwidth]{random_running_k16_no_contrast_best_compare}\n 
\\label{fig:random_running_k16_no_contrast_best_compare}\n\\end{subfigure}\n\\end{subfigure}\n\\qquad\n\\begin{subfigure}{0.46\\columnwidth}\n\\begin{subfigure}{\\columnwidth}\n \\includegraphics[width=\\columnwidth]{random_cycling_k1_contrast_best_compare}\n \\label{fig:random_cycling_k1_contrast_best_compare}\n\\end{subfigure}\n\\begin{subfigure}{\\columnwidth}\n \\includegraphics[width=\\columnwidth]{random_jumping_k1_contrast_best_compare}\n \\label{fig:random_jumping_k1_contrast_best_compare}\n\\end{subfigure}\n\\begin{subfigure}{\\columnwidth}\n \\includegraphics[width=\\columnwidth]{random_jogging_k1_contrast_best_compare}\n \\label{fig:random_jogging_k1_contrast_best_compare}\n\\end{subfigure}\n\\begin{subfigure}{\\columnwidth}\n \\includegraphics[width=\\columnwidth]{random_running_k1_contrast_best_compare}\n \\label{fig:random_running_k1_contrast_best_compare}\n\\end{subfigure}\n\\end{subfigure}\n\n\\caption{Reconstructed time series after performing MSSA ($W=16$, $K=16$) and cMSSA ($W=128$, $K=1$, $\\alpha = 12.41$). Each row is the same random signal reconstructed using MSSA (left) and cMSSA (right), one row per activity. The two colors correspond to the dual recording channels. \n}\n\\label{fig:random_compare}\n\\end{figure}\n\nFinally, Figure~\\ref{fig:random_compare} shows a visual comparison of MSSA versus cMSSA, using their respective hyperparameter settings as shown in Table~\\ref{tab:mhealth_best}. Each row depicts how a random signal is processed with contrast on or off. We immediately see that cMSSA finds simpler signals than those found by MSSA. In the case of MSSA, the processed signals do not look substantially different from the originals. This is due to the fact that the high-variance signals are shared across activities, so MSSA favors them during reconstruction. 
This is not the case with cMSSA, which identifies the differentiating signals that can disambiguate the activities.\n\n\\section{Conclusion}\n\nWe have developed cMSSA, a general tool for dimensionality reduction and signal decomposition of temporal data. By introducing a background dataset, we can efficiently identify sub-signals that are enhanced in one time series relative to another. In an empirical experiment, we find that for virtually any setting of the hyperparameters, cMSSA is more effective at unsupervised clustering than MSSA, contingent on appropriate choices for the foreground and background data. It is worth emphasizing that cMSSA is an unsupervised learning technique. It does \\emph{not} aim to discriminate between time series signals, but rather to discover structure and sub-signals within a given time series more effectively by using a second time series as background. This distinguishes it from various discriminant analysis techniques for time series that are based on spectral analysis \\cite{maharaj2014discriminant, krafty2016discriminant}.\n\nSome basic heuristics should be kept in mind when choosing to use cMSSA. First, the data ideally should exhibit periodic behavior, as MSSA (and by extension, cMSSA) is particularly well suited to finding oscillatory signals. Second, the data of interest $X$ and background $Y$ should not be identical, but should share common structured signal such that the contrast retains some information in the foreground. As an example, the ECG foreground data consisted of subjects performing very specific activities, whereas the background consisted of unlabelled ECG signals in which the participants performed no specific activity. We would expect a good amount of overlap in signal variance, but signals specific to the four activities would be under-represented in the background. 
Thus contrast is a plausible way to extract this signal.\n\nFinally, we note that the only hyperparameter cMSSA adds on top of MSSA is the contrast strength, $\\alpha$. In our default algorithm, we developed an automatic subroutine that selects the most informative values of $\\alpha$. The experiments performed used the automatically generated values. We believe that this default will be sufficient in many use cases of cMSSA, but the user may also set specific values for $\\alpha$ if more granular exploration is desired.\n\n\n\n\n\\vfill\\pagebreak\n\n\\nocite{*}\n\\bibliographystyle{IEEEbib}\n\\input{Template.bbl}\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \n\\IEEEPARstart{C}{ell}-free massive multiple-input multiple-output (MIMO) systems have been proposed to effectively alleviate intercell interference by coordinating a large number of distributed access points (APs), which are connected through fronthaul links to the central processing unit (CPU) \\cite{NAY+17, NAM+17, ZBM+20}. Intelligent reflecting surface (IRS), also known as reconfigurable intelligent surface (RIS), has been considered as one of the prospective multiple antenna technologies for beyond the fifth-generation (5G) networks \\cite{ZBM+20}. By adjusting phase shifts of the IRS elements, the propagation environment can be favorably manipulated.\n\nCell-free MIMO systems powered by IRSs have been recently introduced to further enhance the performance of cell-free MIMO systems at low cost and energy consumption by integrating multiple IRSs with the cell-free MIMO systems. The existing works, which studied the active and passive beamforming design, focused on the instantaneous performance metrics, i.e., instantaneous sum-rate \\cite{ZD21, ZDZ+21, HYX+21} or energy efficiency \\cite{ZDZ+21a, LND20X}. 
These works adopted alternating optimization algorithms, in which the active and passive beamformers are derived based on instantaneous channel state information (I-CSI). Thus, these algorithms would incur huge channel acquisition complexity and related pilot overhead since I-CSI is required for all AP-UE, AP-IRS, and IRS-UE links separately. Moreover, these algorithms would incur immense computational complexity and enormous fronthaul signaling overhead, since the active and passive beamformers need to be computed many times and transferred over fronthaul links for each coherence time. It is also challenging to control the IRSs in real-time, which requires stringent time synchronization \\cite{ZBM+20}. These disadvantages can be alleviated by designing a passive beamformer based on long-term channel statistics \\cite{HTJ+19, ZWZ+21}.\n\nTo the best of our knowledge, this paper is the first attempt for the cell-free MIMO systems powered by IRSs to consider the max-min achievable rate, where the achievable rate is a lower bound of the average rate. This metric aims to provide uniform performance and thus is widely used in the cell-free MIMO systems \\cite{NAY+17, NAM+17}. Moreover, we propose a novel non-iterative two-timescale algorithm to obtain 1) short-term active precoders for the APs that depend on I-CSI, 2) long-term power allocation for the APs using statistical CSI (S-CSI), and 3) long-term passive beamformers for the IRSs using S-CSI.\n\nThe rest of this paper is organized as follows. In Section~\\ref{sec:model}, we present the system model and formulate the max-min achievable rate optimization problem. In Section \\ref{sec:two-step}, we propose a two-step algorithm to solve the problem. Simulation results are provided to evaluate the proposed algorithm in Section~\\ref{sec:result}. Finally, we conclude the paper in Section \\ref{sec:conclusion}.\n\n\\textit{Notation:} Vectors and matrices are denoted by lower-case and upper-case boldface letters. 
The notations $\\otimes$ and $\\odot$ denote the Kronecker product and Hadamard product. $|\\cdot|$ and $\\angle(\\cdot)$ return the magnitude and angle of a complex argument. $\\mathbb{E}\\{\\cdot\\}$ represents the expectation operator. For a square matrix $\\bS$, $\\trace(\\bS)$ denotes the trace operation, and $\\bS \\succeq 0$ means that $\\bS$ is positive semidefinite. A random variable $x \\sim \\mathcal{CN}(m, \\sigma^2)$ is circularly symmetric complex Gaussian (CSCG) distributed with mean~$m$ and variance~$\\sigma^2$. \n\t\n\\section{System Model} \\label{sec:model}\n\nWe consider a downlink cell-free MIMO system powered by IRSs as illustrated in Fig. \\ref{fig:model}, where $L$ APs and $R$ IRSs are distributed to cooperatively serve $K$ single-antenna user equipments (UEs). All APs and IRSs are connected by wired or wireless fronthaul links to the CPU, which coordinates them. Each AP is equipped with $M$ antennas, and each IRS is comprised of $N$ passive reflecting elements. \n\n\\begin{figure}[t!] \n\t\\centering\n\t\\includegraphics[width=0.7 \\columnwidth]{fig1_model.eps}\n\t\\caption{A downlink cell-free MIMO system powered by IRSs.} \\label{fig:model}\n\\end{figure}\n\nWe assume that the data symbols for $K$ UEs $\\bs \\in \\mathbb{C}^{K \\times 1}$ are transmitted from all APs \\cite{NAY+17, NAM+17}. The transmit signal from AP~$l$ is given by \n\\begin{align}\n\\bx_l = \\textstyle\\sum_{k=1}^K \\bw_{l,k} s_k, \n\\end{align}\nwhere $\\bw_{l,k} \\in \\mathbb{C}^{M \\times 1}$ and $s_k$ are the active beamforming vector and data symbol for UE $k$. 
Assuming $\\mathbb{E}\\{|s_k|^2\\} = 1$ for all~$k$, the transmit power constraint of AP $l$ can be written as\n\\begin{align}\n\\textstyle \\sum_{k=1}^K \\mathbb{E}\\{ \\norm{\\bw_{l,k}}^2 \\} \\le \\bar{P}_l,\n\\end{align}\nwhere $\\bar{P}_l$ denotes the maximum transmit power of AP~$l$.\n\nThe channel between an AP and a UE consists of the direct (AP-UE) channel and the $R$ reflection (AP-IRS-UE) channels.\\footnote{Note that the signals reflected by the IRSs twice or more are weak enough to be neglected due to the harsh propagation loss of multiple hops \\cite{ZD21, ZDZ+21, HYX+21, ZDZ+21a, LND20X}.} Then, the overall channel from AP~$l$ to UE~$k$ can be expressed as\n\\begin{align}\n\\bh_{l,k}^H = \\bd_{l,k}^H + \\textstyle \\sum_{r=1}^R \\bv_{r,k}^H \\boldsymbol{\\Theta}_r \\bG_{l,r}, \\label{eq:overallCh}\n\\end{align}\nwhere $\\bd_{l,k}^H \\in \\mathbb{C}^{1 \\times M}$, $\\bG_{l,r} \\in \\mathbb{C}^{N \\times M}$, and $\\bv_{r,k}^H \\in \\mathbb{C}^{1 \\times N}$ denote the channel from AP~$l$ to UE~$k$, from AP~$l$ to IRS~$r$, and from IRS~$r$ to UE~$k$, respectively. The reflection coefficient matrix of IRS~$r$ is denoted by $\\boldsymbol{\\Theta}_r = \\diag(\\theta_{r,1}, \\cdots, \\theta_{r,N}) \\in \\mathbb{C}^{N \\times N}$, where $|\\theta_{r,n}| = 1, \\forall r,n$ represents the unit-modulus constraint on the IRS elements.\n\nWe assume the Rician fading channel model for all channels \\cite{ZD21, ZDZ+21, ZDZ+21a, WZ19}. 
Specifically, the channel between AP~$l$ and UE~$k$ is given by\n\\begin{align}\n\\bd_{l,k} &= \\textstyle \\sqrt{\\xi_{l,k}^\\mathrm{d}} \\sqrt{\\frac{\\beta_\\mathrm{d}}{1+\\beta_\\mathrm{d}}}\\bar{\\bd}_{l,k}' + \\sqrt{\\xi_{l,k}^\\mathrm{d}} \\sqrt{\\frac{1}{1+\\beta_\\mathrm{d}}}\\tilde{\\bd}_{l,k}' \\notag \\\\\n&= \\bar{\\bd}_{l,k} + \\tilde{\\bd}_{l,k}, \\label{eq:ricianCh}\n\\end{align}\nwhere $\\bar{\\bd}_{l,k}'$, $\\tilde{\\bd}_{l,k}'$, and $\\beta_\\mathrm{d}$ denote the line-of-sight (LoS) component, non-line-of-sight (NLoS) component, and Rician K-factor of the channel~$\\bd_{l,k}$, respectively. The channel~$\\bd_{l,k}$ includes the distance-dependent path loss $\\xi_{l,k}^\\mathrm{d}$. The AP-IRS and IRS-UE channels follow the same model as in \\eqref{eq:ricianCh} with proper notation changes. It is assumed that the NLoS components of all AP-UE, AP-IRS, and IRS-UE channels are independent of each other, and each NLoS component has independent and identically distributed (i.i.d.) $\\mathcal{CN}(0, 1)$ entries.\n\nThe received signal at UE $k$ can be written as\n\\begin{align}\ny_k &= \\textstyle \\sum_{l=1}^L \\bh_{l,k}^H \\bx_l + z_k \\notag \\\\\n\t&= \\bh_{k}^H \\bw_{k} s_k + \\textstyle \\sum_{k' \\neq k}^K \\bh_{k}^H \\bw_{k'} s_{k'} + z_k,\n\\end{align}\nwhere $\\bh_k = [\\bh_{1,k}^T, \\cdots, \\bh_{L,k}^T]^T \\in \\mathbb{C}^{LM \\times 1}$, $\\bw_k = [\\bw_{1,k}^T, \\cdots, \\bw_{L,k}^T]^T \\in \\mathbb{C}^{LM \\times 1}$, and $z_k \\sim \\mathcal{CN}(0, \\sigma^2)$ is the i.i.d. complex additive white Gaussian noise (AWGN). To analyze the theoretical performance gain with the IRSs, we assume that the perfect I-CSI of direct and reflection channels is available at the CPU. We also assume that UE~$k$ has knowledge of the average effective channel $\\mathbb{E}\\{ \\bh_{k}^H \\bw_{k}\\}$ and adopt the \\emph{hardening bound}, which is widely used in the massive MIMO literature \\cite{BS20a}. 
Then, the achievable rate of UE $k$ is $\\log_2(1+\\mathsf{SINR}_k)$, where the effective signal-to-interference-plus-noise ratio (SINR) of UE~$k$ is given by\n\\begin{align}\n\\mathsf{SINR}_k = \\frac{ | \\mathbb{E}\\{ \\bh_{k}^H \\bw_{k}\\} |^2 }\n{ \\textstyle \\sum_{k'=1}^K \\mathbb{E}\\{ |\\bh_{k}^H \\bw_{k'}|^2 \\} - | \\mathbb{E}\\{ \\bh_{k}^H \\bw_{k}\\} |^2 + \\sigma^2}. \\label{eq:SINR}\n\\end{align}\n\nIn this paper, we aim to maximize the minimum achievable rate by jointly designing active and passive beamformers subject to the per-AP transmit power constraint and unit-modulus constraint on the IRS elements. This optimization problem can be formulated as\n\\begin{alignat}{3}\n& \\max_{ \\{\\bw_{k}\\}, \\{\\boldsymbol{\\Theta}_r\\} } ~ && \\min_{k} \\log_2(1+\\mathsf{SINR}_k) \\label{eq:prbA} \\\\\n&~ \\quad~~ \\mathrm{s.t.} &&~ C_1: \\textstyle \\sum_{k=1}^K \\mathbb{E}\\{ \\norm{\\bw_{l,k}}^2 \\} \\le \\bar{P}_l, && ~~\\forall l, \\notag \\\\\n&&&~ C_2: |\\theta_{r,n}| = 1, && ~~\\forall r, n, \\notag\n\\end{alignat}\nwhere $\\{\\bw_{k}\\}$ and $\\{\\boldsymbol{\\Theta}_r\\}$ represent the active and passive beamformers, respectively. The joint optimization of the problem \\eqref{eq:prbA} is very challenging since the active and passive beamformers are tightly coupled.\n\n\n\n\\section{Proposed Two-Step Algorithm} \\label{sec:two-step}\nIn this section, we propose a suboptimal two-step algorithm to solve the problem \\eqref{eq:prbA}. We first design an active beamforming technique, which consists of active precoding and power allocation. Then, we design a passive beamforming technique based on S-CSI. Finally, we summarize the proposed algorithm.\n\n\n\n\\subsection{Active Beamforming Design} \nWe decompose an active beamformer into a short-term active precoder and long-term power allocation to reduce computational complexity and fronthaul signaling overhead. 
We consider a zero-forcing (ZF) precoder since it shows better max-min rate performance than the conjugate beamforming precoder in cell-free MIMO systems \\cite{NAM+17}. Moreover, the ZF precoder eliminates inter-user interference, and thus it makes the optimal power allocation simple.\n\nWe can express the received signal $\\by$ for $K$~UEs as\n\\begin{align}\n\\by = \\bH^H\\bW\\bs + \\bz,\n\\end{align}\nwhere $\\by = [y_1, \\cdots, y_K]^T \\in \\mathbb{C}^{K \\times 1}$, $\\bH = [\\bh_1, \\cdots, \\bh_K] \\in \\mathbb{C}^{LM \\times K}$, $\\bW = [\\bw_1, \\cdots, \\bw_K] \\in \\mathbb{C}^{LM \\times K}$, and $\\bz = [z_1, \\cdots, z_K]^T \\in \\mathbb{C}^{K \\times 1}$.\nThe active beamformer can be set as $\\bW = \\widetilde{\\bW} \\bP^{\\frac{1}{2}}$ with the ZF precoder $\\widetilde{\\bW} = \\bH \\left( \\bH^H\\bH \\right)^{-1}$, where the related condition $LM \\ge K$ can be easily fulfilled in the cell-free MIMO systems \\cite{NAM+17}. The long-term power allocation $\\bP = \\diag(p_1, \\cdots, p_K) \\in \\mathbb{C}^{K \\times K}$ is applied to all APs.\n\n\nWith the ZF precoder and long-term power allocation, the effective SINR \\eqref{eq:SINR} is simply reduced to $\\frac{p_k}{\\sigma^2}$. 
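The ZF construction, the resulting SINR simplification, and the per-AP power bookkeeping can be checked numerically (a sketch with an i.i.d. Rayleigh stand-in channel; a single realization stands in for the expectation in the power constraint, and all dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
L, M, K = 4, 8, 4                 # APs, antennas per AP, UEs
sigma2, P_bar = 1.0, 10.0         # noise power and per-AP power budget

# i.i.d. Rayleigh stand-in for the aggregated LM x K channel matrix H
H = (rng.normal(size=(L * M, K)) + 1j * rng.normal(size=(L * M, K))) / np.sqrt(2)

# ZF precoder W_tilde = H (H^H H)^{-1}; then H^H W_tilde = I_K, so
# inter-user interference vanishes and SINR_k reduces to p_k / sigma^2.
W_tilde = H @ np.linalg.inv(H.conj().T @ H)

# Equal long-term power allocation, limited by the most power-hungry AP
# (rows l*M..(l+1)*M of W_tilde belong to AP l).
per_ap = np.array([np.sum(np.abs(W_tilde[l * M:(l + 1) * M, :]) ** 2)
                   for l in range(L)])
p_opt = P_bar / per_ap.max()
sinr = p_opt / sigma2             # identical for every UE
```

Because the budget binds only at the most loaded AP, any reduction of that AP's precoder power translates directly into a larger common $p^{\mathrm{opt}}$, which is the lever the passive beamforming design exploits.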
With a fixed passive beamformer, the problem \\eqref{eq:prbA} boils down to the long-term power allocation problem as\n\\begin{alignat}{3}\n& \\cP_1: && \\max_{ \\bP } ~ && \\min_{k} \\frac{p_k}{\\sigma^2} \\label{eq:prb1} \\\\\n&&&~~ \\mathrm{s.t.} &&~ C_1: \\textstyle \\sum_{k=1}^K p_k \\mathbb{E}\\{ \\norm{\\tilde{\\bw}_{l,k}}^2 \\} \\le \\bar{P}_l, ~\\forall l, \\notag\n\\end{alignat} \nwhere $\\widetilde{\\bW} = [\\tilde{\\bw}_1, \\cdots, \\tilde{\\bw}_K] \\in \\mathbb{C}^{LM \\times K}$ and $\\tilde{\\bw}_k = [\\tilde{\\bw}_{1,k}^T, \\cdots, \\tilde{\\bw}_{L,k}^T]^T \\in \\mathbb{C}^{LM \\times 1}$.\n\n\nThe objective function in the problem \\eqref{eq:prb1} forces the power allocation for all UEs to be the same\\footnote{Note that the instantaneous transmit power for UE~$k$ at AP~$l$ is equal to $p_k\\norm{\\tilde{\\bw}_{l,k}}^2$, and thus the actual transmit power is different per UE.}, i.e., $p_1 = \\cdots = p_K = p^{\\mathrm{opt}}$. Under the typical condition that $\\bar{P}_1 = \\cdots = \\bar{P}_L = \\bar{P}$ with a fixed $\\bar{P}$, the optimal power allocation $p^{\\mathrm{opt}}$ is determined by the AP that consumes the largest power for the active precoder, i.e., $\\max_l \\textstyle \\sum_{k=1}^K \\mathbb{E}\\{ \\norm{\\tilde{\\bw}_{l,k}}^2 \\}$. As the largest power for the active precoder decreases, the optimal power allocation increases, and thus the minimum achievable rate improves accordingly.\n\n\n\\subsection{Passive Beamforming Design} \nBased on the proposed active beamforming design, we find that the passive beamformers are irrelevant to the objective function and only related to the transmit power constraint and unit-modulus constraint in the problem \\eqref{eq:prbA}. Therefore, we can design a long-term passive beamformer to minimize the largest power for the active precoder. 
The corresponding optimization problem can be formulated as\n\\begin{alignat}{2}\n& \\min_{ \\boldsymbol{\\theta} } ~ && \\max_{l} \\textstyle \\sum_{k=1}^K\\mathbb{E}\\{ \\norm{\\tilde{\\bw}_{l,k}}^2 \\} \\\\\n&~~ \\mathrm{s.t.} &&~ C_2: |\\theta_{r,n}| = 1, ~\\forall r, n, \\notag\n\\end{alignat}\nwhere $\\boldsymbol{\\theta} = \\boldsymbol{\\Theta}^H\\bone_{RN} \\in \\mathbb{C}^{RN \\times 1}$ and $\\boldsymbol{\\Theta} = \\diag(\\boldsymbol{\\Theta}_1, \\cdots, \\boldsymbol{\\Theta}_R) \\in \\mathbb{C}^{RN \\times RN}$.\n\nTo the best of our knowledge, there is no closed-form expression of $\\mathbb{E}\\{ \\norm{\\tilde{\\bw}_{l,k}}^2 \\}$ in terms of the long-term passive beamformer~$\\boldsymbol{\\theta}$. It is worth noting, however, that the transmit power reduces as the channel gain increases \\cite{WZ19}. Considering this fact, we propose a suboptimal optimization problem that maximizes the minimum average channel gain by passive beamforming at the IRSs as \n\\begin{alignat}{2}\n& \\max_{ \\boldsymbol{\\theta} } ~ && \\min_{k} \\textstyle \\sum_{l=1}^L\\mathbb{E}\\{ \\norm{\\bh_{l,k}}^2 \\} \\label{eq:prbB} \\\\\n&~~ \\mathrm{s.t.} &&~ C_2: |\\theta_{r,n}| = 1, ~\\forall r, n. \\notag\n\\end{alignat}\t\t\n\nBy exploiting S-CSI, the average channel gain of UE~$k$ can be expressed as an explicit function of~$\\boldsymbol{\\theta}$ that is given as\n\\begin{align}\n\\textstyle \\sum_{l=1}^L\\mathbb{E}\\{ \\norm{\\bh_{l,k}}^2 \\}\n= \\boldsymbol{\\theta}^H \\bA_k \\boldsymbol{\\theta} + \\boldsymbol{\\theta}^H \\bb_k + \\bb_k^H \\boldsymbol{\\theta} + c_k, \\label{eq:avChGain}\n\\end{align}\nwhere $\\bA_k$, $\\bb_k$, and $c_k$ are defined in Appendix A. Since $\\bA_k~\\succeq~0$, the average channel gain is a convex function of~$\\boldsymbol{\\theta}$. 
However, the problem \\eqref{eq:prbB} is a non-convex optimization problem since the objective function is not a concave function of~$\\boldsymbol{\\theta}$, and the unit-modulus constraint is not a convex set.\n\nWe apply semidefinite relaxation (SDR) to convert the non-convex problem \\eqref{eq:prbB} into a convex problem \\cite{NAK+20}. First, by introducing an auxiliary variable~$q$, the average channel gain of UE~$k$ can be rewritten as \n\\begin{align}\n\\bar{\\boldsymbol{\\theta}}^H \\boldsymbol{\\Psi}_k \\bar{\\boldsymbol{\\theta}} + c_k,\n\\end{align}\t\nwhere $\\bar{\\boldsymbol{\\theta}} = \\begin{bmatrix} \\boldsymbol{\\theta} \\\\ q \\end{bmatrix}$ and $\\boldsymbol{\\Psi}_k = \\begin{bmatrix} \\bA_k & \\bb_k \\\\ \\bb_k^H & 0 \\end{bmatrix} \\succeq 0$. Note that $\\bar{\\boldsymbol{\\theta}}^H \\boldsymbol{\\Psi}_k \\bar{\\boldsymbol{\\theta}} = \\trace(\\boldsymbol{\\Psi}_k \\bar{\\boldsymbol{\\theta}} \\bar{\\boldsymbol{\\theta}}^H)$. We define $\\bar{\\boldsymbol{\\Theta}} = \\bar{\\boldsymbol{\\theta}} \\bar{\\boldsymbol{\\theta}}^H$, where $\\bar{\\boldsymbol{\\Theta}} \\succeq 0$ and $\\mathrm{rank}(\\bar{\\boldsymbol{\\Theta}})=1$. By relaxing the rank-one constraint on $\\bar{\\boldsymbol{\\Theta}}$, which is non-convex, the problem \\eqref{eq:prbB} can be reformulated as \n\\begin{alignat}{3}\n& \\cP_2: && \\max_{ \\bar{\\boldsymbol{\\Theta}} } ~ && \\min_{k} \\trace(\\boldsymbol{\\Psi}_k \\bar{\\boldsymbol{\\Theta}}) + c_k \\label{eq:prb2} \\\\\n&&&~~ \\mathrm{s.t.} &&~ [\\bar{\\boldsymbol{\\Theta}}]_{i,i} = 1 , i = 1, \\ldots, RN+1, \\notag \\\\\n&&&&&~ \\bar{\\boldsymbol{\\Theta}} \\succeq 0. \\notag\n\\end{alignat}\n\nSince the problem \\eqref{eq:prb2} is a convex semidefinite program (SDP), it can be efficiently solved by existing convex optimization solvers. 
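The lifting identity $\bar{\boldsymbol{\theta}}^H \boldsymbol{\Psi}_k \bar{\boldsymbol{\theta}} = \trace(\boldsymbol{\Psi}_k \bar{\boldsymbol{\theta}} \bar{\boldsymbol{\theta}}^H)$, together with the unit-modulus recovery of $\boldsymbol{\theta}$ from a rank-one $\bar{\boldsymbol{\Theta}}$, can be verified numerically with synthetic $\bA_k$, $\bb_k$, $c_k$ (a sketch; in practice $\cP_2$ itself would be handed to an off-the-shelf SDP solver):

```python
import numpy as np

rng = np.random.default_rng(0)
RN = 6                                        # total IRS elements (illustrative)

# Synthetic stand-ins for A_k (PSD), b_k, c_k in the average channel gain
X = rng.normal(size=(RN, RN)) + 1j * rng.normal(size=(RN, RN))
A = X @ X.conj().T
b = rng.normal(size=(RN, 1)) + 1j * rng.normal(size=(RN, 1))
c = 3.0

theta = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(RN, 1)))  # |theta_i| = 1
theta_bar = np.vstack([theta, [[1.0]]])       # auxiliary variable q = 1
Psi = np.block([[A, b], [b.conj().T, np.zeros((1, 1))]])
Theta_bar = theta_bar @ theta_bar.conj().T    # rank-one lifted variable

# Quadratic form vs. lifted trace form of the average channel gain
gain_quad = (theta.conj().T @ A @ theta + theta.conj().T @ b
             + b.conj().T @ theta).real.item() + c
gain_lift = np.trace(Psi @ Theta_bar).real + c

# Unit-modulus recovery from the (here exactly rank-one) lifted solution
w, V = np.linalg.eigh(Theta_bar)
v = V[:, [-1]]                                # principal eigenvector
theta_rec = np.exp(1j * np.angle((v[:RN] / v[RN]).ravel()))
```

Dividing by the last entry removes the global phase ambiguity of the eigenvector, which is why the recovery reproduces the original phase vector here.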
If the optimal $\\bar{\\boldsymbol{\\Theta}}^{\\mathrm{opt}}$ is a rank-one matrix, then the optimal $\\bar{\\boldsymbol{\\theta}}^{\\mathrm{opt}}$ is derived by taking the eigenvector corresponding to the maximum eigenvalue of $\\bar{\\boldsymbol{\\Theta}}^{\\mathrm{opt}}$. Otherwise, Gaussian randomization is applied to find $\\bar{\\boldsymbol{\\theta}}^{\\mathrm{opt}}$ \\cite{NAK+20}. Finally, the optimal solution of the problem \\eqref{eq:prbB} is recovered by taking ${\\boldsymbol{\\theta}}^{\\mathrm{opt}} = \\exp\\left( j\\angle \\left( \\left[ \\frac{\\bar{\\boldsymbol{\\theta}}^{\\mathrm{opt}}}{\\bar{\\theta}_{RN+1}^{\\mathrm{opt}}} \\right]_{(1:RN)} \\right) \\right)$, where $[\\bx]_{(1:RN)}$ denotes the vector that contains the first $RN$ entries in $\\bx$, and $\\bar{\\theta}_{RN+1}^{\\mathrm{opt}}$ is the last element of~$\\bar{\\boldsymbol{\\theta}}^{\\mathrm{opt}}$.\n\n\\subsection{Overall Algorithm Description} \n\\begin{algorithm}[t]\n\t\\caption{Proposed Two-Step Algorithm}\n\t\\hspace{1mm} \\textbf{Input}: S-CSI $\\{ \\bar{\\bd}_{l,k}, \\bar{\\bG}_l, \\bar{\\bv}_{k} \\}$ \\\\\n\t\\hspace*{1mm} \\textbf{Step 1}: Passive beamforming design\n\t\\begin{itemize} \t\t\t\n\t\t\\item Solve the problem $\\cP_2$ to obtain the optimal long-term passive beamformer ${\\boldsymbol{\\theta}}^{\\mathrm{opt}}$.\n\t\\end{itemize}\n\t\\hspace{1mm} \\textbf{Step 2}: Active beamforming design\n\t\\begin{itemize} \n\t\t\\item Apply the ZF precoder to instantaneous channels with the given ${\\boldsymbol{\\theta}}^{\\mathrm{opt}}$.\n\t\t\\item Solve the problem $\\cP_1$ to obtain the optimal long-term power allocation $\\bP^{\\mathrm{opt}}$.\n\t\\end{itemize}\n\\end{algorithm} \nIn the previous subsections, we first explained the active beamforming design and then described the passive beamforming design. The proposed algorithm, however, actually operates as summarized in Algorithm 1. 
First, the optimal long-term passive beamformer ${\\boldsymbol{\\theta}}^{\\mathrm{opt}}$ is obtained by solving $\\cP_2$ based on the S-CSI $\\{ \\bar{\\bd}_{l,k}, \\bar{\\bG}_l, \\bar{\\bv}_{k} \\}$. Then, the ZF precoder is applied to instantaneous channels with the given~${\\boldsymbol{\\theta}}^{\\mathrm{opt}}$, and the optimal long-term power allocation $\\bP^{\\mathrm{opt}}$ is derived by solving $\\cP_1$.\n\n\n\n\\section{Simulation Results} \\label{sec:result}\nIn this section, we provide simulation results to validate the minimum achievable rate performance of the proposed two-step algorithm. We consider a hotspot deployment scenario, where the UEs are placed in a hotspot, the APs are deployed farther from the hotspot, and the IRSs are installed on a circle surrounding the hotspot in order to improve the rate performance \\cite{ZD21, ZDZ+21a}. This scenario is illustrated in Fig. \\ref{fig:hotspot}, where $L=4$ APs are located at $(0,0)$, $(D,0)$, $(D,D)$, and $(0,D)$, respectively. Up to $R=8$ IRSs are placed on a circle centered at $(d,d)$ with radius $r$, and $K=4$ UEs are uniformly distributed within the circle. We simulate three cases of $d = \\{40, 60, 120 \\}$~m with $r=30$~m and $D=300$~m.\n\n\\begin{figure}[h] \n\t\\centering\n\t\\includegraphics[width=0.5 \\columnwidth]{fig2_hotspot.eps}\n\t\\caption{Hotspot deployment scenario.} \\label{fig:hotspot}\n\\end{figure}\n\nAll APs are equipped with uniform linear arrays (ULAs) at a height of $10$~m with $M = \\{4, 8, 16\\}$ transmit antennas. A uniform planar array (UPA) is installed at a height of $5$~m for each IRS with $N = \\{8, 16, 32, 64, 128\\}$ elements. All UEs have a single antenna, placed at a height of $1.5$~m \\cite{M.2412}. We evaluate three cases of $R = \\{2, 4, 8\\}$, where, when $R=2$, only the first and fifth IRSs are present, and, when $R=4$, the odd-numbered IRSs are present. 
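The deployment geometry can be reproduced as follows (a sketch for the $d=40$~m case; the IRSs' angular positions on the circle are our assumption, since only the circle itself is specified in the text):

```python
import numpy as np

D, d, r, R, K = 300.0, 40.0, 30.0, 8, 4       # metres; d = 40 m case

# Four APs at the corners of a D x D square
aps = np.array([[0, 0], [D, 0], [D, D], [0, D]], dtype=float)

# R IRSs evenly spaced on a circle of radius r centred at (d, d)
angles = 2.0 * np.pi * np.arange(R) / R
irss = np.stack([d + r * np.cos(angles), d + r * np.sin(angles)], axis=1)

# K UEs dropped uniformly inside the same circle
rng = np.random.default_rng(0)
rad = r * np.sqrt(rng.uniform(size=K))        # sqrt gives a uniform area density
ang = rng.uniform(0.0, 2.0 * np.pi, size=K)
ues = np.stack([d + rad * np.cos(ang), d + rad * np.sin(ang)], axis=1)
```

The square-root in the radial draw is what makes the UE drop uniform over the disc rather than clustered at its centre.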
It is assumed that all IRSs are deployed on building facades \\cite{ZBM+20} and face the UEs. We consider an AP-IRS blockage model in which a signal from an AP that arrives at the back of an IRS is not reflected to the UEs, e.g., the signals from the first AP to the first, second, and eighth IRSs are blocked.\n\nThe distance-dependent path loss of all channels is modeled as $\\xi(d_{\\mathrm{link}}) = \\xi_0d_{\\mathrm{link}}^{-\\alpha_\\mathrm{X}}$, where $\\xi_0$ is the path loss at the reference distance $1$~m, $\\alpha_\\mathrm{X}$ denotes the path loss exponent of the channel~$\\bX$, and $d_{\\mathrm{link}}$ represents the three-dimensional distance of a channel link, accounting for the height differences among the APs, IRSs, and UEs. We set $\\xi_0 = -30$~dB, $\\alpha_\\mathrm{d} = 3.4$, $\\alpha_\\mathrm{v} = \\alpha_\\mathrm{G} = 2.2$, $\\beta_\\mathrm{d} = -5$~dB, and $\\beta_\\mathrm{v} = \\beta_\\mathrm{G} = 5$~dB, reflecting that the AP-UE channels suffer from more severe attenuation than the AP-IRS and IRS-UE channels \\cite{ZWZ+21}. Other system parameters are set as follows: $\\bar{P}_l = \\{20, 30, 40\\}$~dBm for all~$l$, $\\sigma^2 = -97$~dBm assuming $10$~MHz of system bandwidth, and a noise figure of $7$~dB \\cite{M.2412}. We simulate $1,000$ uniform UE drops and generate $1,000$ independent instantaneous channels for each UE drop. \n\nWe consider three benchmark schemes. The first is \\emph{No-IRS}, and the second is \\emph{Random Passive Beamforming}, in which the long-term passive beamformers at the IRSs are randomly selected. The active beamforming for both benchmarks is the same as that of the proposed algorithm. To the best of our knowledge, there are no other schemes that can be directly applied to the max-min achievable rate problem in cell-free MIMO systems powered by IRSs.
Instead, we compare with the third benchmark scheme of \\emph{Sum-Rate-Max} that maximizes instantaneous sum-rate by utilizing an alternating optimization algorithm to derive the active and passive beamformers [4].\n\nFig. \\ref{fig:cdf} depicts the empirical cumulative distribution functions (CDFs) of the minimum achievable rate by varying the maximum transmit power $\\bar{P}_l$. The proposed scheme provides a significant gain over No-IRS and the random passive beamforming for all values of $\\bar{P}_l$. Specifically for $\\bar{P}_l=20$~dBm, the median rate gains of the proposed scheme over No-IRS are equal to $3.4\\%$, $7.1\\%$, and $12.7\\%$ with $N=32$, $64$, and $128$, respectively. By doubling the number of IRS elements, the performance gain is almost doubled. Furthermore, the proposed scheme achieves comparable performance to the Sum-Rate-Max, which requires much higher computational complexity and signaling overhead than the proposed scheme.\n\n\\begin{figure}[t] \n\t\\centering\n\t\\includegraphics[width=0.95 \\columnwidth]{fig3_cdf.eps}\n\t\\caption{CDFs of the minimum achievable rate by varying the maximum transmit power $\\bar{P}_l$ with $d=40$~m, $M=8$, and $R=4$.} \\label{fig:cdf}\n\\end{figure}\n\nIn Fig.~\\ref{fig:rate}, we plot the minimum achievable rate versus the number of IRS elements~$N$ by varying the center of hotspot~$d$, the number of AP transmit antennas~$M$, and the number of IRSs $R$ with $\\bar{P}_l=20$~dBm. For all cases, the minimum achievable rate of the proposed scheme significantly improves with~$N$ and outperforms that of both benchmark schemes.\nFig.~\\ref{fig:rate}(a) shows that the performance of all schemes decreases as $d$ increases, i.e., the UEs move towards the center of service area. This is attributed to the fact that the received signal power from the first AP, which is the dominant AP to the UEs, decreases. 
However, the performance gain of the proposed scheme over No-IRS increases with $d$, i.e., when $N = 128$, the gains are equal to $12.5\\%$, $12.9\\%$, and $16.1\\%$ for $d=40$~m, $60$~m, and $120$~m, respectively.\nIn Fig.~\\ref{fig:rate}(b), it is seen that the smaller $M$, the lower the performance of all schemes.\nThe proposed scheme, however, provides higher performance gain over No-IRS as $M$ decreases, i.e., when $N = 128$, the gains are equal to $10.4\\%$, $12.5\\%$, and $12.8\\%$ for $M=16$, $8$, and $4$, respectively. \nFig.~\\ref{fig:rate}(c) shows that the performance of the proposed scheme increases with $R$ as expected. \nWhen the total number of IRS elements $RN$ is the same, similar performance is observed, showing that the proposed scheme is robust to IRS deployment scenarios. \n\n\\begin{figure*}[t] \n\t\\centering\n\t\\includegraphics[width=0.9 \\textwidth]{fig4_rate.eps}\n\t\\caption{The minimum achievable rate vs. the number of IRS elements $N$: (a) Varying $d$ with $R=4$, $M=8$; (b) Varying $M$ with $d=40$~m, $R=4$; (c) Varying $R$ with $d=40$~m, $M=8$.} \n\t\\label{fig:rate}\n\\end{figure*}\t\n\n\n\\section{Conclusion} \\label{sec:conclusion}\nIn this paper, we considered a joint beamforming framework in a cell-free MIMO system powered by IRSs. We formulated a maximization of minimum achievable rate problem and proposed a novel non-iterative two-timescale algorithm that derives the long-term passive beamformers and power allocation and short-term active precoders by exploiting S-CSI. 
Simulation results revealed that the proposed scheme can significantly improve the minimum achievable rate of the cell-free MIMO systems powered by IRSs compared to the benchmark schemes.\n\n\n\n\\begin{appendices}\n\\section{Derivation of \\eqref{eq:avChGain}}\nThe overall channel from AP~$l$ to UE~$k$ in \\eqref{eq:overallCh} can be rewritten as \n\\begin{align}\n\\bh_{l,k}^H = \\boldsymbol{\\theta}^H \\bV_k^H \\bG_l + \\bd_{l,k}^H, \n\\end{align}\nwhere, $\\bG_l = [\\bG_{l,1}^T, \\cdots, \\bG_{l,R}^T]^T \\in \\mathbb{C}^{RN \\times M}$, $\\bV_k^H = \\diag(\\bv_k^H) \\in \\mathbb{C}^{RN \\times RN}$, and $\\bv_k^H = [\\bv_{1,k}^H, \\cdots, \\bv_{R,k}^H] \\in \\mathbb{C}^{1 \\times RN}$. By decomposing the Rician fading channels into the LoS and NLoS components in \\eqref{eq:ricianCh}, the average channel gain from AP~$l$ to UE~$k$ can be expressed as\n\\begin{align}\n&\\mathbb{E} \\{ \\norm{\\bh_{l,k}}^2 \\} \\notag \\\\\n&~~= \\mathbb{E} \\left\\{ \\left\\| (\\bar{\\bG}_l^H + \\tilde{\\bG}_l^H)(\\bar{\\bV}_k + \\tilde{\\bV}_k) \\boldsymbol{\\theta} \n\t+ (\\bar{\\bd}_{l,k} + \\tilde{\\bd}_{l,k}) \\right\\|^2 \\right\\} \\notag \\\\\n&~~= \\boldsymbol{\\theta}^H \\bA_{l,k} \\boldsymbol{\\theta} + \\boldsymbol{\\theta}^H \\bb_{l,k} + \\bb^H_{l,k}\\boldsymbol{\\theta} + c_{l,k},\n\\end{align}\nwhere $\\bb_{l,k} = \\bar{\\bV}_k^H \\bar{\\bG}_l \\bar{\\bd}_{l,k}$, $c_{l,k} = \\norm{\\bar\\bd_{l,k}}^2 + \\frac{M\\xi_{l,k}^\\mathrm{d}}{1+\\beta_\\mathrm{d}}$, and $\\bA_{l,k}$ is defined below in \\eqref{eq:A_lk}, where $\\boldsymbol{\\Xi}_k^\\mathrm{v} = \\diag{(\\xi_{1,k}^\\mathrm{v}, \\cdots, \\xi_{R,k}^\\mathrm{v}) \\in \\mathbb{C}^{R \\times R}}$, $\\boldsymbol{\\Xi}_l^\\mathrm{G} = \\diag{(\\xi_{l,1}^\\mathrm{G}, \\cdots, \\xi_{l,R}^\\mathrm{G}) \\in \\mathbb{C}^{R \\times R}}$, and $\\bA_{l,k} \\succeq 0$. 
All the variables $\\bA_{l,k}$, $\\bb_{l,k}$, and $c_{l,k}$ are expressed in terms of the S-CSI $\\{ \\bar{\\bd}_{l,k}, \\bar{\\bG}_l, \\bar{\\bv}_{k} \\}$ and path loss $\\{ \\xi_{l,k}^\\mathrm{d}, \\boldsymbol{\\Xi}_l^\\mathrm{G}, \\boldsymbol{\\Xi}_k^\\mathrm{v} \\}$ of all channel links \\cite{HTJ+19, ZWZ+21}. The details are omitted here due to the space limitation. Finally, the average channel gain of UE~$k$ can be written as\t\n\\begin{align}\n\\textstyle \\sum_{l=1}^L\\mathbb{E}\\{ \\norm{\\bh_{l,k}}^2 \\}\n= \\boldsymbol{\\theta}^H \\bA_k \\boldsymbol{\\theta} + \\boldsymbol{\\theta}^H \\bb_k + \\bb^H_k \\boldsymbol{\\theta} + c_k,\n\\end{align}\t\nwhere $\\bA_k = \\textstyle \\sum_{l=1}^L \\bA_{l,k}$, $\\bb_k = \\textstyle \\sum_{l=1}^L \\bb_{l,k}$, and $c_k = \\textstyle \\sum_{l=1}^L c_{l,k}$.\n\n\\begin{figure*} [b]\t\n\t\\hrule \\vspace{5mm}\t\n\t\\begin{align}\n\t\t\\bA_{l,k} = \\bar{\\bV}_k^H \\left( \\bar{\\bG}_l \\bar{\\bG}_l^H + \\frac{M}{1+\\beta_\\mathrm{G}} \\boldsymbol{\\Xi}_l^\\mathrm{G} \\otimes \\bI_N \\right) \\bar{\\bV}_k\n\t\t+ \\left( \\frac{1}{1+\\beta_\\mathrm{v}} \\boldsymbol{\\Xi}_k^\\mathrm{v} \\otimes \\bI_N \\right) \\odot \\left( \\bar{\\bG}_l \\bar{\\bG}_l^H \\right) \n\t\t+ \\frac{1}{1+\\beta_\\mathrm{v}}\\frac{M}{1+\\beta_\\mathrm{G}} \\boldsymbol{\\Xi}_k^\\mathrm{v} \\boldsymbol{\\Xi}_l^\\mathrm{G} \\otimes \\bI_N \\label{eq:A_lk}\n\t\\end{align}\n\\end{figure*}\n\n\\end{appendices}\n\t\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nCondensed matter provides a platform to realize many physical objects in\nother subjects such as Majorana and Dirac Weyl fermions which are proposed\nin particle physics but not be discovered in nature \\cite{XWAN, LLU, SMH,\nHWENG, SYXU, BQLV, LLU2, VMO, SNA}. Another example is the Dirac monopole,\nwhich is a point source of a magnetic field proposed by Dirac \\cite{PAM}. 
It has an analogue in quantum physics, where the Berry curvature of an energy band acts as the magnetic field generated by degeneracy points playing the role of Dirac monopoles \\cite{DXIAO}. As extensions of degeneracy points, nodal loops, i.e., closed $1$-dimensional ($1$D) manifolds in $3$D momentum space, can be classified as nodal rings \\cite{AAB}, nodal chains \\cite{TB}, nodal links \\cite{ZYAN}, and nodal knots \\cite{RBI}. They have been extensively studied both theoretically \\cite{XQSUN, SNIE, JAHN, CFANG1, TK, JYL, CFANG2, PYC, WCHEN, MEZAWA, YZHOU, ZYANG, MXIAO} and experimentally \\cite{QXU, RYU, QYAN, GCHANG, XFENG}. In recent work, it has been shown that the relation between degeneracy lines and the corresponding polarization field in the parameter space is topologically isomorphic to the Biot-Savart law in electromagnetism \\cite{RWANG}.\n\n\\begin{figure}[tbp]\n\\includegraphics[ bb=112 370 500 623, width=0.45\\textwidth, clip]{1.eps}\n\\caption{Schematic illustration of the aim of the present work. (a) We consider a $2$D system with the Bloch Hamiltonian related to two periodic vector functions $\\mathbf{r}_{1}(k_{x})$ and $\\mathbf{r}_{2}(k_{y})$ in auxiliary space, which correspond to two knots. The topological index of the energy band is determined by the relation between the two knots: the Chern number of the band equals the linking number of the two knots. Several representative configurations of $[\\mathbf{r}_{1}(k_{x}),$ $\\mathbf{r}_{2}(k_{y})]$ are presented. Here $\\mathbf{r}_{2}$ is a trefoil knot, while three types of $\\mathbf{r}_{1}$ are taken as simple loops at different positions, resulting in linking numbers $\\mathcal{N}=0$, $1$, and $2$, respectively. The corresponding Bloch Hamiltonians describe systems with Chern numbers $c=0$, $1$, and $2$, respectively. (b) The main purpose of this work. For a $1$D model with a fixed function $\\mathbf{r}_{2}(k_{y})$, $\\mathbf{r}_{2}(k_{y})$ is referred to as the degeneracy circuit.
For an\narbitrary point $\\mathbf{r}_{1}(k_{x})$, the polarization filed $\\mathbf{P}\n\\mathbf{r}_{1})$ obeys the Biot-Savart law for magnetic field arising from\nthe degeneracy loop as a current loop. Polarization field $\\mathrm{d}\\mathbf\nP}$ at point $\\mathbf{r}_{1}$\\ generated from an infinitesmall length of\ndegeneracy line $\\mathrm{d}\\mathbf{r}_{2}$ at $\\mathbf{r}_{2}$, has the\nidentical form with the Biot-Savart Law related magnetic field generated\nfrom the current loop. Finding the polarization field $\\mathbf{P}$ at\narbitrary point $\\mathbf{r}_{1} $ resulting from a degeneracy line can be\nsimply obtained by the Biot-Savart law in electromagnetism.}\n\\label{fig1}\n\\end{figure}\n\nIn this work, we provide another quantum analogy of classical\nelectromagnetism. We consider a class of Bloch Hamiltonians, which contains\ntwo periodic vector functions with respect to two independent variables,\nsuch as momentum $k_{x}$\\textbf{\\ }and\\textbf{\\ }$k_{y}$ for a $2$D lattice\nsystem, respectively. These two periodic vector functions correspond to two\nknots in $3$D auxiliary space (see Fig. \\ref{fig1}(a)). The Bloch vector is\nthe difference of two vectors. When we only consider one of two knots, the\nsystem reduces to a $1$D lattice system. The Zak phase and polarization\nfield at a fixed point in $3$D auxiliary space can be obtained.\\ We show\nexactly that the knot as a degeneracy line has a simple relation of its\ncorresponding polarization field, obeying the Biot-Savart law: The\ndegeneracy line acts as a current-carrying wire, while the polarization\nfield corresponds to the generated magnetic field. The relationship between\ntwo knots can be characterized by applying the Amp\\`{e}re's circuital law on\nthe field integral arising from one knot along another knot. 
For a nontrivial topological system, the integral is nonzero because the two Bloch knots entangle with each other, forming a link whose linking number equals the Chern number of the energy band. In Fig. \\ref{fig1}, we schematically illustrate the main conclusion of this work.\n\nWe propose two lattice models to exemplify the application of our approach. The first one is an extended QWZ model. We show that its Bloch Hamiltonian is an example of the system under consideration. The two knots of the original QWZ model simply reduce to two circles. The second one is a time-dependent quasi-$1$D model with magnetic flux. In this case, the Amp\\`{e}re circulation integral is equivalent to the topological invariant. With the aid of the Biot-Savart law, the pumping charge acts as a dynamic measure of the Chern number. We perform numerical simulations for several representative quasi-adiabatic processes to demonstrate this application.\n\nThe remainder of this paper is organized as follows. In Sec. \\ref{Model with degeneracy loop}, we present a class of models whose Bloch Hamiltonians relate to two knots. In Sec. \\ref{Kitaev model on square lattice}, we propose the extended QWZ model to exemplify the application of our approach. Sec. \\ref{Ladder system} gives another example, which is a time-dependent quasi-$1$D model with magnetic flux. Sec. \\ref{Pumping charge} is devoted to a dynamic measure of the Chern number, the pumping charge, which is computed numerically for several representative quasi-adiabatic processes to demonstrate our work.
Finally, we present a summary and discussion in Sec.\n\\ref{Summary}.\n\n\\section{Double-knot model}\n\n\\label{Model with degeneracy loop}\n\nConsider a Bloch Hamiltonian $h_{\\mathbf{k}}$ in the for\n\\begin{eqnarray}\nh_{\\mathbf{k}} &=&\\left(\n\\begin{array}{cc}\n\\left( z_{1}-z_{2}\\right) & x_{1}-x_{2}-i\\left( y_{1}-y_{2}\\right) \\\\\nx_{1}-x_{2}+i\\left( y_{1}-y_{2}\\right) & -\\left( z_{1}-z_{2}\\right\n\\end{array\n\\right) \\notag \\\\\n&=&\\left[ \\mathbf{r}_{1}(k_{x})\\mathbf{-r}_{2}(k_{y})\\right] \\mathbf{\\cdot\n\\sigma }, \\label{hk}\n\\end{eqnarray\nwhich is the starting point of our study. It is consisted of two periodic\nvector functions $\\mathbf{r}_{1}(k_{x})=\\mathbf{r}_{1}(2\\pi +k_{x})=x_{1\n\\mathbf{i}+y_{1}\\mathbf{j}+z_{1}\\mathbf{k}$\\textbf{\\ }and\\textbf{\\ }$\\mathbf\nr}_{2}(k_{y})=\\mathbf{r}_{2}(2\\pi +k_{y})=x_{2}\\mathbf{i}+y_{2}\\mathbf{j\n+z_{2}\\mathbf{k}$, representing two knots (loops) in $3$D auxiliary space.\nHere $\\mathbf{\\sigma =(}\\sigma _{x},\\sigma _{y},\\sigma _{z}\\mathbf{)}$ are\nPauli matrices and $h_{\\mathbf{k}}$\\ represents a class of models, which is\nreferred\\ as to double-knot (double-loop) model. Matrix $h_{\\mathbf{k}}$\\\ncan take the role of a core matrix of crystalline system for non-interacting\nHamiltonian, or Kitaev Hamiltonian. We note that the spectrum of $h_{\\mathbf\nk}}$\\ is two-band and the gap closes when two knots have crossing points.\nThe aim of this work is to reveal the feature of the system which is\noriginated from the character of two knots.\n\nTo this end, we first consider the case with a fixed $k_{x}$. Then the model\nonly contains a point $\\mathbf{r}_{1}$ and a knot $\\mathbf{r}_{2}\\left(\nk_{y}\\right) $. The Hamiltonian reduces t\n\\begin{equation}\nh_{k_{y}}=\\left[ \\mathbf{r}_{1}\\mathbf{-r}_{2}(k_{y})\\right] \\mathbf{\\cdot\n\\sigma },\n\\end{equation\nwhich is a $1$D system in real space. 
Here $r_{2}(k_{y})$\\ is a degeneracy\nline, at which the gap closes.\\ The solution of equation \nh_{k_{y}}\\left\\vert u_{\\pm }^{k_{y}}\\right\\rangle =\\varepsilon _{k_{y}}^{\\pm\n}\\left\\vert u_{\\pm }^{k_{y}}\\right\\rangle $ has the form\n\\begin{equation}\n\\left\\vert u_{+}^{k_{y}}\\right\\rangle =\\left(\n\\begin{array}{c}\n\\cos \\frac{\\theta _{k_{y}}}{2}e^{-i\\varphi _{k_{y}}} \\\\\n\\sin \\frac{\\theta _{k_{y}}}{2\n\\end{array\n\\right) ,\\left\\vert u_{-}^{k_{y}}\\right\\rangle =i\\left(\n\\begin{array}{c}\n-\\sin \\frac{\\theta _{k_{y}}}{2} \\\\\n\\cos \\frac{\\theta _{k_{y}}}{2}e^{i\\varphi _{k_{y}}\n\\end{array\n\\right)\n\\end{equation\nwith $\\varepsilon _{k_{y}}^{\\pm }=\\pm \\left\\vert \\mathbf{r}_{1}\\mathbf{-r\n_{2}\\left( k_{y}\\right) \\right\\vert $, where the azimuthal and polar angles\nare defined a\n\\begin{equation}\n\\cos \\theta _{k_{y}}=\\frac{z_{1}-z_{2}}{\\left\\vert \\mathbf{r}_{1}\\mathbf{-r\n_{2}\\right\\vert },\\tan \\varphi _{k_{y}}=\\frac{y_{1}-y_{2}}{x_{1}-x_{2}}.\n\\end{equation\nFor this $1$D system, the corresponding Zak phases for upper and lower bands\nare defined a\n\\begin{equation}\n\\mathcal{Z}_{\\pm }=\\frac{i}{2\\pi }\\int_{-\\pi }^{\\pi }\\left\\langle u_{\\pm\n}^{k_{y}}\\right\\vert \\frac{\\partial }{\\partial k_{y}}\\left\\vert u_{\\pm\n}^{k_{y}}\\right\\rangle \\mathrm{d}k_{y}.\n\\end{equation\nIt is well known that the Zak phase is gauge-dependent and the present\nexpression of $\\left\\vert u_{\\pm }^{k_{y}}\\right\\rangle $\\ results i\n\\begin{equation}\n\\mathcal{Z}=\\mathcal{Z}_{+}=-\\mathcal{Z}_{-}=\\frac{1}{2\\pi }\\oint_{\\mathrm{L\n}\\cos ^{2}\\frac{\\theta _{k_{y}}}{2}\\mathrm{d}\\varphi _{k_{y}},\n\\end{equation\nwhere \\textrm{L} denotes the integral loop about the solid angle.\nAccordingly, the polarization vector field is defined a\n\\begin{equation}\n\\mathbf{P}=-\\mathbf{\\nabla }\\mathcal{Z},\n\\end{equation\nwhere $\\mathbf{\\nabla }$ is the nabla 
operator\n\\begin{equation}\n\\mathbf{\\nabla }=\\frac{\\partial }{\\partial x_{1}}\\mathbf{i}+\\frac{\\partial }{\\partial y_{1}}\\mathbf{j}+\\frac{\\partial }{\\partial z_{1}}\\mathbf{k},\n\\end{equation}\nwith unit vectors $\\mathbf{i}$, $\\mathbf{j}$, and $\\mathbf{k}$ in $3$D auxiliary space. Straightforward derivation (see Appendix) shows that\n\\begin{equation}\n\\mathbf{P}=\\frac{1}{4\\pi }\\oint_{\\mathrm{L}}\\frac{\\mathrm{d}\\mathbf{r}_{2}\\times \\left( \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right) }{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{3}}, \\label{Polarization}\n\\end{equation}\nwhere $\\mathrm{L}$ denotes integration along the degeneracy loop. It is clear that if we consider the degeneracy loop as a current-carrying wire with steady current strength $I=1\/\\mu _{0}$, flowing in the direction of increasing $k_{y}$ from $0$ to $2\\pi $, the field $\\mathbf{P}$ is identical to the magnetic field generated by the current loop, where $\\mu _{0}$ is the vacuum permeability. Since Eq. (\\ref{Polarization}) holds for an arbitrary loop $\\mathrm{L}$, one can write its differential form\n\\begin{equation}\n\\mathrm{d}\\mathbf{P}=\\frac{1}{4\\pi }\\frac{\\mathrm{d}\\mathbf{r}_{2}\\times \\left( \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right) }{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{3}}, \\label{Biot-Savart law}\n\\end{equation}\nwhich is illustrated in Fig. \\ref{fig1}. It indicates that the relationship between $\\mathbf{P}$ and the degeneracy loop obeys the Biot-Savart law, and it reveals the topological characteristics of the degeneracy lines in a clear physical picture. We will regard a degeneracy loop as a \\textit{band degeneracy circuit}. This result allows us to determine the polarization field generated by any degeneracy loop in the auxiliary space.
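This correspondence can be checked numerically. The sketch below (function names are illustrative) evaluates the loop integral for $\mathbf{P}$ by midpoint summation and reproduces, for a circular degeneracy loop of unit radius, the textbook on-axis field of a current loop, $P_z = R^2/\,[2(R^2+z^2)^{3/2}]$.

```python
import numpy as np

def polarization(r1, loop, n=20000):
    """P(r1) = (1/4 pi) \oint dr2 x (r1 - r2) / |r1 - r2|^3 for a closed
    parametric degeneracy loop k -> r2(k), k in [0, 2 pi), by midpoint sums."""
    k = np.linspace(0, 2 * np.pi, n, endpoint=False)
    r2 = loop(k)                                # (n, 3) samples of the loop
    dr2 = np.roll(r2, -1, axis=0) - r2          # discretized line elements
    sep = r1 - 0.5 * (r2 + np.roll(r2, -1, axis=0))   # r1 minus segment midpoints
    integrand = np.cross(dr2, sep) / np.linalg.norm(sep, axis=1)[:, None] ** 3
    return integrand.sum(axis=0) / (4 * np.pi)

# Unit-circle degeneracy loop in the xy-plane, evaluated on the axis.
circle = lambda k: np.stack([np.cos(k), np.sin(k), np.zeros_like(k)], axis=1)
P = polarization(np.array([0.0, 0.0, 0.7]), circle)
```

The transverse components vanish by symmetry, and the axial component matches the analytic value $1/[2(1+z^2)^{3/2}]$ at $z = 0.7$.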
In addition, the Amp\\`{e}re circulation integral $\\oint_{\\ell }\\mathbf{P}(\\mathbf{r})\\cdot \\mathrm{d}\\mathbf{r}$ along a loop $\\ell $ has two clear physical meanings: (i) it equals the total current through the surface spanned by the loop $\\ell $; (ii) it is the pumping charge for the adiabatic passage $\\ell $.\n\n\\begin{figure*}[tbp]\n\\includegraphics[ bb=3 414 620 765, width=0.92\\textwidth, clip]{links.eps}\n\\caption{Several representative configurations of the double knot $\\left\\{ \\mathbf{r}_{1}(k_{x}),\\mathbf{r}_{2}(k_{y})\\right\\} $ for the extended QWZ model. The plots are obtained from the parameter equations in Eq. (\\protect\\ref{XYZ}) with the parameters indicated in the panels. The arrows on the loops indicate the directions of the knots with various topologies. The corresponding Chern numbers are labeled, and they match the linking numbers exactly.}\n\\label{fig2}\n\\end{figure*}\n\nNow we go back to $h_{\\mathbf{k}}$, taking the loop $\\ell $ as the knot $\\mathbf{r}_{1}(k_{x})$, which has no crossing point with the knot $\\mathbf{r}_{2}(k_{y})$.
We find that the corresponding Amp\\`{e}re circulation integral is connected to the topology of the two knots and the band structure of the system\n\\begin{equation}\n-\\oint_{\\ell }\\mathbf{P}(\\mathbf{r})\\cdot \\mathrm{d}\\mathbf{r}=c=\\mathcal{N}. \\label{CN}\n\\end{equation}\nHere the Chern number for the lower band is defined as \\cite{XLQI, GYCHO}\n\\begin{equation}\nc=\\frac{1}{4\\pi }\\int_{0}^{2\\pi }\\int_{0}^{2\\pi }\\frac{\\mathbf{r}^{\\prime }}{\\left\\vert \\mathbf{r}^{\\prime }\\right\\vert ^{3}}\\cdot \\left( \\frac{\\partial \\mathbf{r}^{\\prime }}{\\partial k_{x}}\\times \\frac{\\partial \\mathbf{r}^{\\prime }}{\\partial k_{y}}\\right) \\mathrm{d}k_{x}\\mathrm{d}k_{y},\n\\end{equation}\nwith $\\mathbf{r}^{\\prime }=\\mathbf{r}_{1}-\\mathbf{r}_{2}$, which also equals the linking number \\cite{RICCA} of the two knots $\\mathbf{r}_{1}(k_{x})$ and $\\mathbf{r}_{2}(k_{y})$\n\\begin{equation}\n\\mathcal{N}=\\frac{1}{4\\pi }\\int_{0}^{2\\pi }\\int_{0}^{2\\pi }\\frac{\\mathbf{r}^{\\prime }}{\\left\\vert \\mathbf{r}^{\\prime }\\right\\vert ^{3}}\\cdot \\left( \\frac{\\partial \\mathbf{r}_{1}}{\\partial k_{x}}\\times \\frac{\\partial \\mathbf{r}_{2}}{\\partial k_{y}}\\right) \\mathrm{d}k_{x}\\mathrm{d}k_{y}.\n\\end{equation}\nThese relations are evident demonstrations of the system's topological features and clearly reveal the physical significance of the Amp\\`{e}re circulation integral $\\oint_{\\ell }\\mathbf{P}(\\mathbf{r})\\cdot \\mathrm{d}\\mathbf{r}$. Furthermore, it corresponds to the jump of the Zak phase for an adiabatic passage along a knot, which can be measured by the Thouless pumping charge in a quasi-$1$D system. In the following, we present two examples to illustrate our results.\n\n\\section{Extended QWZ model}\n\n\\label{Kitaev model on square lattice}\n\nIn this section, we consider a model, which is an extension of the QWZ model introduced by Qi, Wu and Zhang \\cite{QWZ}, to illustrate our result.
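The Gauss linking integral above is straightforward to evaluate numerically; the sketch below (illustrative, not from the paper) discretizes it for two closed parametric curves and recovers $|\mathcal{N}| = 1$ for a Hopf link of two circles and $\mathcal{N} = 0$ for two unlinked circles.

```python
import numpy as np

def linking_number(r1, r2, n=400):
    """Gauss linking integral (1/4 pi) * double integral of
    r' . (dr1/ds x dr2/dt) / |r'|^3, with r' = r1(s) - r2(t),
    for two closed curves sampled on n points each."""
    k = np.linspace(0, 2 * np.pi, n, endpoint=False)
    dk = 2 * np.pi / n
    a, b = r1(k), r2(k)                                     # (n, 3) samples
    da = (np.roll(a, -1, axis=0) - np.roll(a, 1, axis=0)) / (2 * dk)
    db = (np.roll(b, -1, axis=0) - np.roll(b, 1, axis=0)) / (2 * dk)
    diff = a[:, None, :] - b[None, :, :]
    cross = np.cross(da[:, None, :], db[None, :, :])
    num = np.sum(diff * cross, axis=2)
    return np.sum(num / np.linalg.norm(diff, axis=2) ** 3) * dk * dk / (4 * np.pi)

# Two unit circles forming a Hopf link: linking number +-1.
c1 = lambda k: np.stack([np.cos(k), np.sin(k), np.zeros_like(k)], axis=1)
c2 = lambda k: np.stack([1 + np.cos(k), np.zeros_like(k), np.sin(k)], axis=1)
lk = linking_number(c1, c2)
```

The sign of the result depends on the traversal directions of the two curves, mirroring the sign of the Chern number.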
The\nBloch Hamiltonian is\n\n\\begin{equation}\nh_{\\mathbf{k}}=B_{x}\\sigma _{x}+B_{y}\\sigma _{y}+B_{z}\\sigma _{z},\n\\end{equation\nwhere the field components ar\n\\begin{equation}\n\\left\\{\n\\begin{array}{l}\nB_{x}=\\sin k_{x}+\\lambda \\sin \\left( 2k_{x}\\right) \\\\\nB_{y}=\\sin k_{y}+\\lambda \\sin \\left( 2k_{y}\\right) \\\\\nB_{z}=u+\\cos k_{x}+\\cos k_{y} \\\\\n+\\lambda \\left[ \\cos \\left( 2k_{x}\\right) +\\cos \\left( 2k_{y}\\right) \\right\n\\end{array\n\\right. . \\label{XYZ}\n\\end{equation\nIt reduces to original QWZ model when taking $\\lambda =0$.\n\nNow we rewrite it in the for\n\\begin{equation}\nh_{\\mathbf{k}}=\\left[ \\mathbf{r}_{1}(k_{x})\\mathbf{-r}_{2}(k_{y})\\right]\n\\mathbf{\\cdot \\sigma },\n\\end{equation\nwhere two vector functions ar\n\\begin{equation}\n\\left\\{\n\\begin{array}{l}\n\\mathbf{r}_{1}=(\\sin k_{x}+\\lambda \\sin \\left( 2k_{x}\\right) ,0,u+\\cos\nk_{x}+\\lambda \\cos \\left( 2k_{x}\\right) ) \\\\\n\\mathbf{r}_{2}=-(0,\\sin k_{y}+\\lambda \\sin \\left( 2k_{y}\\right) ,\\cos\nk_{y}+\\lambda \\cos \\left( 2k_{y}\\right) \n\\end{array\n\\right. .\n\\end{equation\nIt is clear that $\\mathbf{r}_{1}(k_{x})$\\ and $\\mathbf{r}_{2}(k_{y})$\\\nrepresent two limacons within $xz$ and $yz$ plane, respectively. When taking\n$\\left\\vert \\lambda \\right\\vert <0.5$, the crossing point of the limacon\ndisappears. Particularly, when taking $\\lambda =0$, limacons reduce to\ncircles. The radiuses of two circles are both $1$, but the centers are \n(0,0,u)$\\ and $(0,0,0)$, respectively. Chern numbers can be easily obtained\nfrom the linking numbers of these two circles: $c=0$, for $\\left\\vert\nu\\right\\vert >2$, and $c=\\pm 1$, for $0<\\pm u<2$. When taking $\\left\\vert\n\\lambda \\right\\vert >0.5$, the crossing point of the limacon appears. 
Since\nlimacons with crossing point cannot be classified as knots, we add\nperturbation terms $\\kappa \\sin 2k_{x}$\\ to $r_{1y}$ and $\\kappa \\sin 2k_{y}$\nto\\ $r_{2x}$\\ to untie the crossing point ($\\left\\vert \\kappa \\right\\vert\n\\ll 1$), then limacons become knots again. The possible linking numbers of\nsuch two knots are still equal to the Chern numbers $c=0$, $\\pm 1$, $\\pm 3$,\nand $\\pm 4$. The absence of $c=\\pm 2$ is due to the fact that we take the\nidentical $\\lambda $ in the expressions of $\\mathbf{r}_{1}(k_{x})$ and \n\\mathbf{r}_{2}(k_{y})$. In Fig. \\ref{fig2}, we plot some representative\nconfigurations to demonstrate this point. Comparing to the direct\ncalculation of Chern number from the Berry connection, the example shows\nthat the Chern number can be easily obtained by the geometrical\nconfigurations hidden in the Bloch Hamiltonian.\n\n\\section{Ladder system}\n\n\\label{Ladder system}\n\nAs a simple application of our result, we consider a quasi $1$D system with\nperiodically time-dependent parameters. The Bloch Hamiltonian has the for\n\\begin{equation}\nh_{k}(t)=\\left[ \\mathbf{r}(t)\\mathbf{-r}_{c}(k)\\right] \\mathbf{\\cdot \\sigma \n,\n\\end{equation\nwhere $\\mathbf{r}(t)=\\mathbf{r}(t+T)$ represents a loop $\\ell $ without\ncrossing point on the degeneracy loop $\\mathbf{r}_{c}(k)$. The result\nobtained above still apply to the case of replacing $(k_{x},k_{y})$\\ with \n(t,k)$, and replacing $\\left\\vert u_{\\pm }^{\\mathbf{k}}\\right\\rangle $ with \n|u_{\\pm }^{k}(t)\\rangle $ accordingly. In this section we will demonstrate\nour result and its physical implications through an alternative\ntight-binding model, which is two coupled SSH chains, or a ladder system\nwith staggered magnetic flux, on-site potential and long range hopping\nterms. These ingredients allow the system to support multiple types of\ndegeneracy loops with different geometric topologies.\n\nWe consider a ladder system which is illustrated in Fig. 
\\ref{fig3},\nrepresented by the Hamiltonian\n\n\\begin{eqnarray}\n&&H_{\\text{L}}=\\sum_{j=1}^{N}\\{r_{\\bot }e^{i\\phi }c_{2j}^{\\dag\n}c_{2j-1}+\\alpha c_{2j-1}^{\\dag }c_{2\\left( j+1\\right) }+\\beta c_{2\\left(\nj+1\\right) -1}^{\\dag }c_{2j} \\notag \\\\\n&&+\\mu c_{2\\left( j+2\\right) -1}^{\\dag }c_{2j}+\\nu c_{2j-1}^{\\dag\n}c_{2\\left( j+2\\right) }+i\\kappa \\lbrack c_{2\\left( j+3\\right) -1}^{\\dag\n}c_{2j-1} \\notag \\\\\n&&-c_{2\\left( j+3\\right) }^{\\dag }c_{2j}]+\\text{\\textrm{H.c.}\n\\}+z\\sum_{j=1}^{2N}\\left( -1\\right) ^{j+1}c_{j}^{\\dag }c_{j},\\label{ladd}\n\\end{eqnarray\non a $2N$ lattice. Here $c_{j}^{\\dag }$ is the creation operator of a\nfermion at the $j$th site with the periodic boundary condition \nc_{2N+1}=c_{1}$. The inter-sublattice hopping amplitudes are $\\left( \\alpha\n,\\beta ,\\mu ,\\nu \\right) $ and the intra-sublattice hopping amplitude is \n\\kappa $.\\ Besides, two time-dependent parameters, $2\\phi (t)$ is the\nstaggered magnetic flux threading each plaquette and $z(t)$\\ is the strength\nof staggered potentials. The ladder system is essentially two coupled SSH\nchains. As a building block of the system, the SSH model \\cite{SSH} has\nserved as a paradigmatic example of the $1$D system supporting topological\ncharacter \\cite{Zak}. It has an extremely simple form but well manifests the\ntypical feature of topological insulating phase, and the transition between\nnon-trivial and trivial topological phases, associated with the number of\nzero energy and edge states as the topological invariant \\cite{Asboth}. It\nhas been demonstrated that all the parameters of this model can be easily\naccessed within the existing technology of cold-atomic experiments \\cit\n{Clay,Ueda,Jo}. 
We schematically illustrate this model in Fig.~\\ref{fig3}.\nWe introduce the fermionic operators in $k$ space\n\\begin{equation}\n\\left\\{\n\\begin{array}{l}\na_{k}=\\frac{1}{\\sqrt{N}}\\sum_{j=1}^{N}e^{-ikj}c_{2j-1} \\\\\nb_{k}=\\frac{1}{\\sqrt{N}}\\sum_{j=1}^{N}e^{-ikj}c_{2j\n\\end{array\n\\right. , \\label{Fourier 1}\n\\end{equation\nand the wave vector $k=\\pi (2n-N)\/N$, $(n=0,1,...,N-1)$. Then we hav\n\\begin{equation}\nH_{\\text{L}}=\\sum_{k}(a_{k}^{\\dagger },b_{k}^{\\dagger })h_{k}\\left(\n\\begin{array}{c}\na_{k} \\\\\nb_{k\n\\end{array\n\\right) ,\n\\end{equation\n\\begin{figure}[tbp]\n\\includegraphics[ bb=130 440 476 540, width=0.5\\textwidth, clip]{3.eps}\n\\caption{Schematics of the two coupled SSH chains with staggered flux and\npotential. The system consists of two sublattices A and B with on-site\npotentials $z$\\ and $-z$, indicated by filled and empty circles,\nrespectively. Hopping amplitudes along each chain are staggered by $\\protec\n\\alpha $ (blue solid line) and $\\protect\\beta $ (blue dotted line). The\ninterchain hopping amplitude is $r_{\\perp }$ (thick black line) associated\nwith a phase factor and interchain diagonal hopping amplitude $\\protect\\mu $\n(gray solid line), $\\protect\\nu $ (gray dotted line) and $i\\protect\\kappa $\n(light green solid line). 
The red arrows indicate the hopping directions for\ncomplex amplitudes, which are induced by the staggered flux threading each\nplaquettes (arrow circles).}\n\\label{fig3}\n\\end{figure}\nwhere the core matrix has the for\n\\begin{equation}\nh_{k}=\\left(\n\\begin{array}{cc}\nz+2\\kappa \\sin \\left( 3k\\right) & R(\\phi ,k) \\\\\nR^{\\ast }(\\phi ,k) & -z-2\\kappa \\sin \\left( 3k\\right\n\\end{array\n\\right) ,\n\\end{equation\nand the off-diagonal matrix element i\n\\begin{equation}\nR(\\phi ,k)=r_{\\bot }e^{-i\\phi }+\\alpha e^{ik}+\\beta e^{-ik}+\\mu e^{-2ik}+\\nu\ne^{2ik}.\n\\end{equation\nTaking\n\n\\begin{equation}\nx+iy=r_{\\bot }e^{i\\phi },\n\\end{equation\nthe parameter equations for degeneracy loop i\n\\begin{equation}\n\\left\\{\n\\begin{array}{l}\nx_{c}=-\\left( \\alpha +\\beta \\right) \\cos k-\\left( \\mu +\\nu \\right) \\cos\n\\left( 2k\\right) \\\\\ny_{c}=-\\left( \\beta -\\alpha \\right) \\sin k-\\left( \\mu -\\nu \\right) \\sin\n\\left( 2k\\right) \\\\\nz_{c}=-2\\kappa \\sin \\left( 3k\\right\n\\end{array\n\\right. , \\label{trefoil knot}\n\\end{equation\nwhich is plotted in Fig. \\ref{fig4} for the case with parameters $\\alpha\n=\\mu =0.5$, $\\beta =1$, $\\nu =1.5$, and $\\kappa =0.1$. One can see that the\ndegeneracy curve is a trefoil knot. Intuitively, it should result in\ntopological features with indices $2$, $1$, and $0$. We will demonstrate\nthis point in the next section.\n\n\\begin{figure*}[tbp]\n\\includegraphics[ bb=19 8 1241 380, width=0.92\\textwidth, clip]{fig4a.eps}\n\\includegraphics[ bb=19 8 1241 380, width=0.92\\textwidth, clip]{fig4b.eps}\n\\includegraphics[ bb=19 8 1241 380, width=0.92\\textwidth, clip]{fig4c.eps}\n\\caption{ (a1-a3) Schematics of three adiabatic passages in $3$D auxiliary\nspace for pumping charge. The degeneracy curve (red) is a trefoil knot with\nparameters $\\protect\\alpha =\\protect\\mu =0.5$, $\\protect\\beta =1$, $\\protec\n\\nu =1.5$, and $\\protect\\kappa =0.1$ of the system in Eq. 
(\\protect\\ref{ladd}) (Fig. \\protect\\ref{fig3}). The adiabatic passages are straight lines\n(blue)\\ at positions $(x,y):$ (a1) $(3.80,0)$, (a2) $(1.15,0.84)$, and (a3) $(0.40,0.01)$, respectively. (b1-b3) and (c1-c3)\\ are plots of the current and\nthe corresponding total charge transfer for the quasi-adiabatic process. The\nresults are obtained by a numerically exact diagonalization method for the\nsystem in Eq. (\\protect\\ref{ladd}) with $N=100 $. The speed of time\nevolution is $\\protect\\omega =1\\times 10^{-3}$. It indicates that the\ntopological invariant can be obtained by a dynamical process.}\n\\label{fig4}\n\\end{figure*}\n\n\\section{Pumping charge}\n\n\\label{Pumping charge}\n\nFor a $2$D system with Bloch Hamiltonian in the form of Eq. (\\ref{hk}), the\nphysical and geometric meaning of the Chern number is well established. For a\nquasi $1$D system with Bloch Hamiltonian in the form of Eq. (\\ref{hk}),\nobtained by replacing $(k_{x},k_{y})$\\ with $(t,k)$, the Chern number is connected to an\nadiabatic passage driven by the parameters from $t$\\ to $t+T$, or a periodic\nloop $\\mathbf{r=r}(t)$ in auxiliary space. In a $1$D model, it has been\nshown that the adiabatic particle transport over a time period takes the\nform of the Chern number\\ and it is quantized \\cite{DXIAO}. The pumped\ncharge counts the net number of degeneracy points enclosed by the loop. This\ncan be extended to the loop $\\mathbf{r=r}(t)$\\ in the present model.\n\nActually, one can rewrite Eq. (\\ref{CN}) in the form\n\\begin{equation}\nc=\\mathcal{N}=-\\oint_{\\ell }\\mathbf{P(\\mathbf{r})\\cdot }\\frac{\\partial\n\\mathbf{\\mathbf{r}}}{\\partial t}\\mathrm{d}t,\n\\end{equation}\nwhere $\\mathbf{\\mathbf{r}}$\\ (or $r_{\\bot },\\phi ,$ and $z$) is a periodic\nfunction of time $t$. 
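As a quick numerical sanity check (an illustrative sketch, not part of the original analysis), the degeneracy curve of Eq. (\\ref{trefoil knot}) can be sampled with the quoted parameters to confirm that it is a closed loop with period $2\\pi$ in $k$ and that its vertical extent is bounded by $2\\kappa$:

```python
import math

# Parameters quoted in the text: alpha = mu = 0.5, beta = 1, nu = 1.5, kappa = 0.1
alpha, beta, mu, nu, kappa = 0.5, 1.0, 0.5, 1.5, 0.1

def degeneracy_point(k):
    """Parametric degeneracy curve (x_c, y_c, z_c) of the trefoil knot."""
    xc = -(alpha + beta) * math.cos(k) - (mu + nu) * math.cos(2 * k)
    yc = -(beta - alpha) * math.sin(k) - (mu - nu) * math.sin(2 * k)
    zc = -2 * kappa * math.sin(3 * k)
    return (xc, yc, zc)

# The curve closes: r(0) == r(2*pi) up to rounding error
p0 = degeneracy_point(0.0)
p1 = degeneracy_point(2 * math.pi)
assert all(abs(a - b) < 1e-12 for a, b in zip(p0, p1))

# Sample the loop; the z-component is bounded by 2*kappa as read off Eq. (trefoil knot)
pts = [degeneracy_point(2 * math.pi * n / 400) for n in range(400)]
assert max(abs(p[2]) for p in pts) <= 2 * kappa + 1e-12
```

A plotting library could be attached to `pts` to reproduce the knot shown in Fig. \\ref{fig4}.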
Furthermore, we can find out the physical meaning of\nthe Chern number by the relation\n\\begin{equation}\nc=\\int_{0}^{T}\\mathcal{J}(t)\\mathrm{d}t,\n\\end{equation}\nwhere\n\\begin{equation}\n\\mathcal{J}=\\frac{i}{2\\pi }\\int_{0}^{2\\pi }[(\\partial _{t}\\langle\nu_{-}^{k}|)\\partial _{k}|u_{-}^{k}\\rangle -(\\partial _{k}\\langle\nu_{-}^{k}|)\\partial _{t}|u_{-}^{k}\\rangle ]\\mathrm{d}k\n\\end{equation}\nis the adiabatic current. Then $c$ is the pumped charge summed over all channels $k$,\ndriven by the time-dependent Hamiltonian varying over one period, which can be\nmeasured through a quasi-adiabatic process.\n\nInspired by this analysis,\\ we expect that the Chern number can be unveiled\nby the pumping charge of all the energy levels. This can be done in the\nsingle-particle subspace. The accumulated charge passing the unit cell $l$\nduring the period $T$ is\n\\begin{equation}\nQ_{l}=\\sum_{k}\\int_{0}^{T}j_{l}\\mathrm{d}t,\n\\end{equation}\nwhere the current across two neighboring unit cells is\n\\begin{eqnarray}\n&&j_{l}=\\frac{1}{i}\\left\\langle u_{-}^{k}\\left( t\\right) \\right\\vert [\\alpha\na_{j}^{\\dag }b_{j+1}+\\beta b_{j}^{\\dag }a_{j+1}+\\mu b_{j}^{\\dag }a_{j+2}+\n\\notag \\\\\n&&\\nu a_{j}^{\\dag }b_{j+2}-i\\kappa a_{j}^{\\dag }a_{j+3}+i\\kappa b_{j}^{\\dag\n}b_{j+3}-\\text{\\textrm{H.c.}}]\\left\\vert u_{-}^{k}\\left( t\\right)\n\\right\\rangle .\n\\end{eqnarray}\n\n\nAs we mentioned above, there are three types of adiabatic loops $\\mathbf{r=r}(t)$ in auxiliary space, with pumping charges $Q_{l}=0$, $1$, and $2$,\nrespectively. In general, three periodic functions $r_{\\bot }(t)$, $\\phi (t)$, and $z(t)$\\ should be taken to measure the pumping charge. However, a\nquasi-adiabatic loop is difficult to realize in practice. 
Thanks to the\nBiot-Savart law for the field $\\mathbf{P}(\\mathbf{r})$, we can take the adiabatic\npassage along a straight line with fixed $r_{\\bot }$ and $\\phi $, since the\nfield $\\mathbf{P}$\\ far from the trefoil knot $\\mathbf{r}_{c}(k)$ has no\ncontribution to the Amp\\`{e}re circulation integral, or the pumping charge.\n\nWe consider the case by taking $z=\\omega t$ with $\\omega \\ll 1$. According\nto the analysis above, if $t$ varies from $-\\infty $ to $\\infty $, $Q_{l}$\\\nshould be $0$, $1$, and $2$, respectively. To examine how the scheme works\nin practice, we simulate the quasi-adiabatic process by computing the time\nevolution numerically for a finite system. In principle, for a given initial\neigenstate $\\left\\vert u_{-}^{k}\\left( 0\\right) \\right\\rangle $, the time-evolved state under a Hamiltonian $H_{\\text{L}}\\left( t\\right) $ is\n\\begin{equation}\n\\left\\vert \\Phi \\left( t\\right) \\right\\rangle =\\mathcal{T}\\{\\exp\n(-i\\int_{0}^{t}H_{\\text{L}}\\left( t\\right) \\mathrm{d}t)\\left\\vert\nu_{-}^{k}\\left( 0\\right) \\right\\rangle \\},\n\\end{equation}\nwhere $\\mathcal{T}$ is the time-ordering operator. In the low-speed limit $\\omega\n\\rightarrow 0$, we have\n\\begin{equation}\nf\\left( t\\right) =\\left\\vert \\langle u_{-}^{k}\\left( t\\right) \\left\\vert\n\\Phi \\left( t\\right) \\right\\rangle \\right\\vert \\rightarrow 1,\n\\end{equation}\nwhere $\\left\\vert u_{-}^{k}\\left( t\\right) \\right\\rangle $\\ is the\ncorresponding instantaneous eigenstate of $H_{\\text{L}}\\left( t\\right) $.\nThe computation is performed by using a uniform mesh in the time\ndiscretization for the time-dependent Hamiltonian $H_{\\text{L}}$. In\norder to demonstrate a quasi-adiabatic process, we keep $f\\left( t\\right)\n>0.9$\\ during the whole process by taking sufficiently small $\\omega $. 
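The quasi-adiabatic propagation and fidelity check described above can be sketched for a generic two-level Hamiltonian $h(t)$ with diagonal entries $\\pm z(t)$, $z=\\omega t$, and a constant off-diagonal gap $\\Delta$. This is an illustrative stand-in for a single $k$ channel of $h_{k}$, not the paper's full model; the values of $\\Delta$, $\\omega$, and the time step are hypothetical:

```python
import math

def ground_state(a, c):
    """Normalized ground state of H = [[a, c], [conj(c), -a]] (eigenvalue -E)."""
    E = math.sqrt(a * a + abs(c) ** 2)
    v = (c, -(a + E))
    n = math.sqrt(abs(v[0]) ** 2 + abs(v[1]) ** 2)
    return (v[0] / n, v[1] / n)

def step(psi, a, c, dt):
    """One time step with U = exp(-i H dt) = cos(E dt) I - i sin(E dt) H / E,
    exact for a traceless Hermitian 2x2 H held constant over dt."""
    E = math.sqrt(a * a + abs(c) ** 2)
    co, si = math.cos(E * dt), math.sin(E * dt)
    p0 = co * psi[0] - 1j * si / E * (a * psi[0] + c * psi[1])
    p1 = co * psi[1] - 1j * si / E * (c.conjugate() * psi[0] - a * psi[1])
    return (p0, p1)

delta = 1.0    # constant gap parameter (hypothetical)
omega = 0.05   # sweep speed, z = omega * t (hypothetical)
dt = 0.01
t = -100.0     # sweep z from -5 to 5
psi = ground_state(omega * t, delta)
fmin = 1.0
while t < 100.0:
    psi = step(psi, omega * t, delta, dt)
    t += dt
    g = ground_state(omega * t, delta)
    # fidelity f(t): overlap with the instantaneous ground state
    f = abs(g[0].conjugate() * psi[0] + g[1].conjugate() * psi[1])
    fmin = min(fmin, f)
assert fmin > 0.9  # adiabaticity criterion used in the text
```

For this slow sweep the fidelity stays close to one, mirroring the requirement $f(t)>0.9$ above; increasing $\\omega$ eventually violates it.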
Fig.\n\\ref{fig4} plots the simulations of the particle current and the corresponding\ntotal charge transfer, which shows that the obtained dynamical quantities are in\nclose agreement with the expected Chern number.\n\n\\section{Summary and discussion}\n\n\\label{Summary}\n\nWe have analyzed a family of $2$D tight-binding models with various Chern\nnumbers, which are directly connected to the topology of two knots. When\nreduced to a $1$D single-knot degeneracy model, a polarization vector field\ncan be established for a gapped band. We have shown exactly an interesting\nanalogy between the topological feature of the band and classical\nelectromagnetism: the polarization vector field acts as the static magnetic\nfield generated by the degeneracy knot as a current circuit. It indicates\nthat there is a quantum analog of the Biot-Savart law in quantum matter.\nBefore ending this paper, we would like to point out that our findings also\nreveal the topological feature hidden in the case with zero Chern number. In\nFig. \\ref{fig2}(a) and (b), we find that although the linking numbers of\nthese two sets of loops are zero, the configurations are different. It\nshould imply a certain topological feature in a single direction, which will\nbe investigated in future work. This finding extends the understanding of\ntopological features in matter and provides a methodology and tools for dealing\nwith the calculation and detection of Chern numbers.\n\n\\section{Appendix: Proof of the Biot-Savart law}\n\nIn this appendix, we provide the proof of Eq. (\\ref{Polarization}) in the\nmain text. 
To this end, we first revisit the Biot-Savart law\\ for a current-carrying\nloop, and then compare it with the polarization field in the\npresent work.\n\n\\subsection{The magnetic field}\n\nConsider a current-carrying loop $\\mathrm{L}$\\ with current strength $I=1\/\\mu _{0}$, which is described by a periodic function $\\mathbf{r}_{2}\\left(\nk_{y}\\right) =x_{2}\\mathbf{i+}y_{2}\\mathbf{j+}z_{2}\\mathbf{k}$ in a $3$D\nspace. Here $\\mu _{0}$ is the vacuum permeability of free space and the current\\ flows in the\ndirection of increasing $k_{y}$ from $0$ to $2\\pi $. According to the\nBiot-Savart law, the magnetic field $\\mathbf{B}$ at position $\\mathbf{r}_{1}=x_{1}\\mathbf{i+}y_{1}\\mathbf{j+}z_{1}\\mathbf{k}$ generated by the loop $\\mathrm{L}$ is\n\n\\begin{equation}\n\\mathbf{B}=\\frac{1}{4\\pi }\\oint_{\\mathrm{L}}\\frac{\\mathbf{r}_{2}-\\mathbf{r}_{1}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{3}}\\times\n\\mathrm{d}\\mathbf{r}_{2}.\n\\end{equation}\nFor the sake of simplicity we only give the proof for $\\mathbf{B}$\\ and $\\mathbf{P}$\\ in the $x$ component as an example. 
The explicit form of the\ncomponent is\n\\begin{equation}\nB_{x}=\\frac{1}{4\\pi }\\oint_{\\mathrm{L}}\\frac{\\left( y_{2}-y_{1}\\right)\n\\mathrm{d}z_{2}\\mathbf{-}\\left( z_{2}-z_{1}\\right) \\mathrm{d}y_{2}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{3}}.\n\\end{equation}\nAccording to Stokes' theorem, the line integral of $B_{x}$ can be\nexpressed as a double integral\n\\begin{eqnarray}\nB_{x} &=&\\frac{1}{4\\pi }\\iint\\nolimits_{\\mathrm{S}}[\\frac{3\\left(\nx_{2}-x_{1}\\right) ^{2}-\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert\n^{2}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{5}}\\mathrm{d}y_{2}\\mathrm{d}z_{2} \\notag \\\\\n&&-\\frac{3\\left( x_{2}-x_{1}\\right) \\left( y_{1}-y_{2}\\right) }{\\left\\vert\n\\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{5}}\\mathrm{d}z_{2}\\mathrm{d}x_{2}\n\\notag \\\\\n&&-\\frac{3\\left( x_{2}-x_{1}\\right) \\left( z_{1}-z_{2}\\right) }{\\left\\vert\n\\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{5}}\\mathrm{d}x_{2}\\mathrm{d}y_{2}],\n\\end{eqnarray}\nwhere $\\mathrm{S}$ represents a smooth surface spanned by the loop $\\mathrm{L}$.\n\n\\subsection{The polarization vector field}\n\nNow we turn to the quantum analog of the Biot-Savart law. For a fixed $k_{x}$, $h_{\\mathbf{k}}$\\ reduces to a $1$D system $h_{k_{y}}$, and the corresponding\nZak phases for the upper and lower bands are defined as\n\\begin{equation}\n\\mathcal{Z}_{\\pm }=\\frac{i}{2\\pi }\\int_{-\\pi }^{\\pi }\\left\\langle u_{\\pm\n}^{k_{y}}\\right\\vert \\frac{\\partial }{\\partial k_{y}}\\left\\vert u_{\\pm\n}^{k_{y}}\\right\\rangle \\mathrm{d}k_{y},\n\\end{equation}\nwhich is gauge-dependent. 
For the present expression of $\\left\\vert u_{\\pm\n}^{k_{y}}\\right\\rangle $, we have\n\\begin{equation}\n\\mathcal{Z}=\\mathcal{Z}_{+}=-\\mathcal{Z}_{-}=\\frac{1}{2\\pi }\\oint_{\\mathrm{L}}\\cos ^{2}\\frac{\\theta }{2}\\mathrm{d}\\varphi ,\n\\end{equation}\nwhere $\\mathrm{L}$ denotes the loop $\\mathbf{r}_{2}(k_{y})$\\ and\n\\begin{equation}\n\\cos \\theta =\\frac{z_{1}-z_{2}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert },\\tan \\varphi =\\frac{y_{1}-y_{2}}{x_{1}-x_{2}}.\n\\end{equation}\nThe polarization vector field is defined as\n\\begin{equation}\n\\mathbf{P}=-\\mathbf{\\nabla }\\mathcal{Z}, \\label{P1}\n\\end{equation}\nwhere $\\mathbf{\\nabla }$\\ is the nabla operator\n\\begin{equation}\n\\mathbf{\\nabla }=(\\frac{\\partial }{\\partial x_{1}}\\mathbf{i}+\\frac{\\partial\n}{\\partial y_{1}}\\mathbf{j}+\\frac{\\partial }{\\partial z_{1}}\\mathbf{k}),\n\\end{equation}\nwith unit vectors $\\mathbf{i}$, $\\mathbf{j}$, and $\\mathbf{k}$ in the $3$D\nauxiliary space. We note the fact that\n\n\\begin{equation}\n\\mathbf{k\\cdot \\lbrack }\\oint_{\\mathrm{L}}\\frac{\\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }}{\\left\\vert \\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }\\right\\vert\n^{2}}\\times \\mathrm{d}\\left( \\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }\\right)\n]=\\oint_{\\mathrm{L}}\\mathrm{d}\\varphi =2\\pi w,\n\\end{equation}\nwhere $w$ is the winding number of the integral loop $\\mathbf{r}_{2\\bot\n}(k_{y})=x_{2}\\mathbf{i}+y_{2}\\mathbf{j}$\\ around the point $\\mathbf{r}_{1\\bot }=x_{1}\\mathbf{i}+y_{1}\\mathbf{j}$. 
Then the Zak phase can be\nrewritten as\n\\begin{equation}\n\\mathcal{Z}=\\frac{\\mathbf{k}}{4\\pi }\\mathbf{\\cdot }\\oint_{\\mathrm{L}}\\left[\n1+\\frac{z_{1}-z_{2}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert }\\right] \\frac{\\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }}{\\left\\vert \\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }\\right\\vert ^{2}}\\times \\mathrm{d}\\left(\n\\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }\\right) .\n\\end{equation}\nThe projection of the polarization vector field $\\mathbf{P}$ in the $x$\ndirection is represented as\n\\begin{equation}\nP_{x}=-\\frac{\\partial }{\\partial x_{1}}\\mathcal{Z}=\\frac{1}{4\\pi }\\oint_{\\mathrm{L}}\\left( G\\mathrm{d}x_{2}+Q\\mathrm{d}y_{2}+R\\mathrm{d}z_{2}\\right) ,\n\\end{equation}\nwhere\n\\begin{eqnarray}\nG &=&-\\frac{\\left( z_{1}-z_{2}\\right) \\left( x_{2}-x_{1}\\right) \\left(\ny_{2}-y_{1}\\right) }{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert\n^{3}\\left\\vert \\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }\\right\\vert ^{2}} \\\\\n&&-\\left( 1+\\frac{z_{1}-z_{2}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert }\\right) \\frac{2\\left( x_{2}-x_{1}\\right) (y_{2}-y_{1})}{\\left\\vert \\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }\\right\\vert ^{4}}, \\notag\n\\end{eqnarray}\nand\n\\begin{eqnarray}\nQ &=&\\left( 1+\\frac{z_{1}-z_{2}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert }\\right) \\frac{\\left[ \\left( x_{2}-x_{1}\\right) ^{2}-\\left(\ny_{2}-y_{1}\\right) ^{2}\\right] }{\\left\\vert \\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }\\right\\vert ^{4}} \\notag \\\\\n&&+\\frac{\\left( z_{1}-z_{2}\\right) \\left( x_{2}-x_{1}\\right) ^{2}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{3}\\left\\vert \\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }\\right\\vert ^{2}}.\n\\end{eqnarray}\nBy Stokes' theorem, the line integral of $P_{x}$ can be expressed as a\ndouble integral\n\n\\begin{eqnarray}\nP_{x} &=&\\frac{1}{4\\pi 
}\\iint\\nolimits_{\\mathrm{S}}[\\frac{3\\left(\nx_{2}-x_{1}\\right) ^{2}-\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert\n^{2}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{5}}\\mathrm{d}y_{2}\\mathrm{d}z_{2} \\notag \\\\\n&&-\\frac{3\\left( x_{2}-x_{1}\\right) \\left( y_{1}-y_{2}\\right) }{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{5}}\\mathrm{d}z_{2}\\mathrm{d}x_{2} \\notag\n\\\\\n&&-\\frac{3\\left( x_{1}-x_{2}\\right) \\left( z_{2}-z_{1}\\right) }{\\left\\vert\n\\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{5}}\\mathrm{d}x_{2}\\mathrm{d}y_{2}],\n\\end{eqnarray}\nwhich results in\n\\begin{equation}\nP_{x}=B_{x}.\n\\end{equation}\nSimilarly, the projections of the polarization vector field $\\mathbf{P}$ and\nthe magnetic field $\\mathbf{B}$ in the $y$ and $z$ directions can be calculated\nin the same way. Eventually, we can come to the conclusion\n\\begin{equation}\n\\mathbf{P}=\\mathbf{B}=\\frac{1}{4\\pi }\\oint_{\\mathrm{L}}\\frac{\\mathbf{r}_{2}-\\mathbf{r}_{1}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{3}}\\times \\mathrm{d}\\mathbf{r}_{2}.\n\\end{equation}\n\n\\acknowledgments This work was supported by the National Natural Science\nFoundation of China (under Grant No. 11874225).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Ice model} \\label{sec:mathmodel}\n\n\\subsection{The full Stokes (FS) equations}\n\\label{sec:Stokes}\nWe use the FS equations in 2D with coordinates $\\mathbf x=(x, z)^T$ for modeling the flow of an ice sheet \\cite{Hutter83}.\nThese nonlinear partial differential equations (PDEs) in the interior of the ice $\\Omega$ are given by \n\\begin{equation}\n\\begin{cases}\n \\nabla\\cdot\\mathbf{u}=0,\\\\\n - \\nabla\\cdot{\\mathbf{\\mathbb{\\sigma}}} =\\rho \\mathbf{g},\n \\end{cases}\n \\label{eq:FS}\n\\end{equation}\n where the stress tensor is ${\\mathbf{\\mathbb{\\sigma}}} = 2\\eta(\\mathbf{u})\\mathbf{\\mathbb{\\tau}}(\\mathbf{u})-p\\mathbb{I}$. 
The symmetric strain rate tensor is defined by\n\\begin{equation}\\label{eq:taudef}\n \\mathbf{\\mathbb{\\tau}}(\\mathbf{u})=\\frac{1}{2}(\\nabla\\mathbf{u}+\\nabla\\mathbf{u}^T)=\\left(\\begin{array}{cc}\\tau_{11}&\\tau_{12}\\\\\\tau_{12}&\\tau_{22}\\end{array}\\right),\n\\end{equation}\n$\\mathbb{I}$ is the identity matrix, and the viscosity is defined by Glen's flow law\n\\begin{equation}\\label{eq:visc}\n \\eta(\\mathbf{u})=\\frac{1}{2}\\left(\\mathcal{A}(T^\\prime)\\right)^{-\\frac{1}{n}}\\mathbf{\\mathbb{\\tau}}_e^{\\frac{1-n}{n}},\\qquad \\mathbf{\\mathbb{\\tau}}_e = \\sqrt{\\frac{1}{2}\\text{tr}(\\mathbf{\\mathbb{\\tau}}(\\mathbf{u})\\mathbf{\\mathbb{\\tau}}(\\mathbf{u}))}.\n\\end{equation}\n\nHere ${\\mathbf{u}}=(u, w)^T$ is the vector of velocities, $\\rho$ is the density of the ice, $p$ denotes the pressure, and the gravitational acceleration in the $z$-direction is denoted by ${\\bf g}$. The rate factor $\\mathcal{A}(T^\\prime)$ describes how the viscosity depends on the pressure melting point corrected temperature $T^\\prime$. For isothermal flow assumed here, the rate factor $\\mathcal{A}$ is constant. Finally, $n$ is usually taken to be 3.\n\n\n\\subsection{Boundary conditions}\n\\begin{figure}[htbp]\n\\center\n \\includegraphics[width=0.45\\textwidth]{.\/Figures\/GL} \n \\caption{A two dimensional schematic view of a marine ice sheet.}\n\\label{fig:ice}\n\\end{figure}\n\n\nAt the boundary $\\Gamma$ of the ice we define the normal outgoing vector $\\mathbf{n}$ and tangential vector $\\mathbf{t}$, see Figure~\\ref{fig:ice}.\nIn a 2D case considered here, $y$ is constant in the figure. 
The upper boundary is denoted by $\\Gamma_s$ and the lower boundary is $\\Gamma_b$.\nAt $\\Gamma_s$ and $\\Gamma_{bf}$, the floating part of $\\Gamma_b$, we have that \n\\begin{equation}\n{\\mathbf{\\mathbb{\\sigma}}}{\\mathbf{n}}=\\mathbf{f}_s.\n\\label{eq:bc_s}\n\\end{equation}\nThe ice is stress-free at $\\Gamma_s$, $\\mathbf{f}_s=0$, and $\\mathbf{f}_s=-p_w\\mathbf{n}$ at the ice\/ocean interface $\\Gamma_{bf}$ where $p_w$ is the\nwater pressure. Let\n\\[\n \\mathbf{\\mathbb{\\sigma}}_{\\mathbf{n}\\mathbf{t}}=\\mathbf{t}\\cdot\\mathbf{\\mathbb{\\sigma}}\\mathbf{n},\\; \\mathbf{\\mathbb{\\sigma}}_{\\mathbf{n}\\bn}=\\mathbf{n}\\cdot\\mathbf{\\mathbb{\\sigma}}\\mathbf{n},\\; u_\\mathbf{t}=\\mathbf{t}\\cdot\\mathbf{u}.\n\\]\nThen for the slip boundary $\\Gamma_{bg}$, the part of $\\Gamma_b$ where the ice is grounded, we have a friction law for the sliding ice\n \\begin{equation}\n {\\mathbf{\\mathbb{\\sigma}}}_{\\mathbf{n}\\mathbf{t}} + \\beta(\\mathbf{u},\\mathbf x) u_\\mathbf{t}=0,\\quad u_\\mathbf{n}=\\mathbf{n}\\cdot\\mathbf{u}=0, \\quad -{\\mathbf{\\mathbb{\\sigma}}}_{\\mathbf{n}\\bn}\\geq p_w. \\label{eq:BCGI}\n \\end{equation} \nThe type of friction law is determined by the friction coefficient $\\beta$.\nThere is a balance between ${\\mathbf{\\mathbb{\\sigma}}}_{\\mathbf{n}\\bn}$ and $p_w$ at $\\Gamma_{bf}$ and the contact is friction-free, $\\beta=0$,\n \\begin{equation}\n {\\mathbf{\\mathbb{\\sigma}}}_{\\mathbf{n}\\mathbf{t}} = 0, \\qquad -{\\mathbf{\\mathbb{\\sigma}}}_{\\mathbf{n}\\bn}= p_w.\n \\label{eq:BCFI}\n \\end{equation}\nThe GL is located where the boundary condition switches from $\\beta>0$ and $u_\\mathbf{n}=0$ on $\\Gamma_{bg}$ to $\\beta=0$ and a free $u_\\mathbf{n}$ on $\\Gamma_{bf}$. In 2D,\nthe GL is the point $(x_{GL}, z_{GL})$ between $\\Gamma_{bg}$ and $\\Gamma_{bf}$. 
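The switch between the grounded condition Eq. \\eqref{eq:BCGI} and the floating condition Eq. \\eqref{eq:BCFI} can be illustrated with a minimal sketch that classifies nodes of the lower boundary from the contact condition $z_b=b$ and evaluates the water pressure $p_w=-\\rho_w g z_b$ on the floating part. The grid and elevations below are hypothetical, not from the paper; the $10^{-3}$~m contact tolerance matches the one quoted later for the grounded mask:

```python
# Hypothetical lower-boundary profile: z_b(x) follows the bedrock b(x) until
# the ice lifts off; the grounding line lies at the last node with z_b == b.
rho_w, g = 1000.0, 9.81  # sea-water density [kg/m^3], gravity [m/s^2]

x   = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]                    # node positions [km]
b   = [-400.0, -420.0, -440.0, -460.0, -480.0, -500.0]  # bedrock elevation [m]
z_b = [-400.0, -420.0, -440.0, -450.0, -445.0, -440.0]  # lower ice surface [m]

tol = 1e-3  # contact tolerance [m]
grounded = [zb - bb < tol for zb, bb in zip(z_b, b)]

# Water pressure on the submerged boundary, p_w = -rho_w * g * z_b (z_b < 0)
p_w = [-rho_w * g * zb for zb in z_b]
assert all(p > 0 for p in p_w)

# Last grounded node: the node-level grounding-line estimate
i_gl = max(i for i, gr in enumerate(grounded) if gr)
assert grounded[:i_gl + 1] == [True] * (i_gl + 1)
assert not any(grounded[i_gl + 1:])
```

On this profile the grounding line sits between the last grounded and first floating nodes, which is exactly the ambiguity the subgrid model in the following sections resolves.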
\n\nWith the ocean surface at $z=0$, $p_w=-\\rho_w g z_b$ where $\\rho_w$ is the density of sea water, $z_b$ is the $z$-coordinate of $\\Gamma_b$, and $g$ is the gravitation constant. \n\n\n\\subsection{The free surface equations}\n\\label{sec:height}\n\nThe boundaries $\\Gamma_s$ and $\\Gamma_b$ are time-dependent and move according to two free surface equations. The boundary $\\Gamma_{bg}$ follows the\nfixed bedrock with coordinates $(x, b(x))$.\n\nThe $z$-coordinate of the free surface position $z_s(x,t)$ at $\\Gamma_s$ (see Fig. \\ref{fig:ice}) is the solution of an advection equation\n\\begin{equation}\n \\frac{\\partial z_s}{\\partial t}+u_s \\frac{\\partial z_s}{\\partial x}-w_s=a_s,\n\\label{eq:freeSurface}\n\\end{equation}\nwhere $a_s$ denotes the net surface accumulation\/ablation of ice and ${\\mathbf{u}}_s=(u_s, w_s)^T$ the velocity at the free surface in contact with the atmosphere. Similarly, the $z$-coordinate for the lower surface $z_b$ of the floating ice at $\\Gamma_{bf}$ satisfies\n\\begin{equation}\n \\frac{\\partial z_b}{\\partial t}+u_b \\frac{\\partial z_b}{\\partial x}-w_b=a_b,\n\\label{eq:lowerSurface}\n\\end{equation}\nwhere $a_b$ is the net accumulation\/ablation at the lower surface and ${\\mathbf{u}}_b=(u_b, w_b)^T$ the velocity of the ice at $\\Gamma_{bf}$. On $\\Gamma_{bg}$, $z_b=b(x)$.\n\nThe thickness of the ice is denoted by $H=z_s-z_b$ and depends on $(x, t)$. \n\n\\subsection{The solution close to the grounding line}\n\\label{sec:GLsol}\n\nThe 2D solution of the FS equations in Eq. \\eqref{eq:FS} with a constant viscosity, $n=1$ in Eq. \\eqref{eq:visc}, is expanded in small parameters in \\cite{Schoof11}. The solutions in different\nregions around the GL are connected by matched asymptotics. Upstream of the GL at the bedrock, $x<x_{GL}$, the ice is grounded; if $x>x_{GL}$ then $\\chi=0$ and Eq. 
\\eqref{eq:flot} holds true\non $\\Gamma_{bf}$.\nIn numerical experiments with the linear FS $(n=1)$ in \\cite{NoWi08}, $\\chi(x, z_b)$ in the original variables varies linearly in $x$ for $x<x_{GL}$. Let $d=z_b(x, t)-b(x)$ be the distance between the lower ice surface and the bedrock. If $d>0$ on $\\Gamma_{bf}$ then the ice is not in contact with the bedrock and $\\mathbf{\\mathbb{\\sigma}}_{\\mathbf{n}\\bn}+p_w=0$ and if $\\mathbf{\\mathbb{\\sigma}}_{\\mathbf{n}\\bn}+p_w<0$ on $\\Gamma_{bg}$ then the ice and the bedrock are in contact and $d=0$. Hence, the\ncomplementarity relation in the vertical direction is\n \\begin{equation}\\label{eq:complv}\n\\begin{array}{ll}\n z_b(x, t)-b(x)\\ge 0,\\; \\mathbf{\\mathbb{\\sigma}}_{\\mathbf{n}\\bn}+p_w\\le 0,\\\\ \n (z_b(x, t)-b(x))(\\mathbf{\\mathbb{\\sigma}}_{\\mathbf{n}\\bn}+p_w)=0\\;\\textrm{on}\\;\\Gamma_b.\n\\end{array}\n \\end{equation}\nThe contact friction law is such that $\\beta>0$ when $x<x_{GL}$ and $\\beta=0$ when $x>x_{GL}$. The complementarity relation along the slope at $x$ is then the non-negativity of $d$ and \n \\begin{equation}\\label{eq:compls}\n \\beta\\ge 0,\\; \\beta(x, t)(z_b(x, t)-b(x))=0\\;\\textrm{on}\\;\\Gamma_b.\n \\end{equation}\nIn particular, these relations are valid at the nodes $x=x_j$, $j=0,1,\\dots,N$.\n \n\nThe complementarity condition also holds for $u_\\mathbf{n}$ and $\\sigma_{\\mathbf{n}\\bn}$ such that\n \\begin{equation}\\label{eq:complu}\n\\begin{array}{ll}\n \\mathbf{\\mathbb{\\sigma}}_{\\mathbf{n}\\bn}+p_w\\le 0,\\\\ \n u_\\mathbf{n}(\\mathbf{\\mathbb{\\sigma}}_{\\mathbf{n}\\bn}+p_w)=0\\;\\textrm{on}\\;\\Gamma_b,\n\\end{array}\n \\end{equation}\nwithout any sign constraint on $u_\\mathbf{n}$ except for the retreat phase when the ice leaves the ground and $u_\\mathbf{n}<0$. \n\n\nSimilar implementations for contact problems using Nitsche's method are found in \\cite{chouly2017overview,chouly2017nitsche}, where the unknowns in the PDEs are the displacement fields\ninstead of the velocity in Eq. 
\\eqref{eq:FS}.\nAnalysis in \\cite{chouly2017overview} suggests that Nitsche's method for the contact problem can provide a stable numerical solution with an optimal convergence rate.\n\nThe nonlinear equations for the nodal values of $\\mathbf{u}$ and $p$ are solved by Newton iterations. The system of linear equations in every Newton iteration is solved iteratively by using the\nGeneralised Conjugate Residual (GCR) method in Elmer\/ICE. The condition on $d_j$ in a node $x_j$ is used for a so called grounded mask, which is computed at each timestep and not changed during the nonlinear iterations.\n\n\n\n\\subsection{Discretization of the advection equations}\\label{sec:updlower}\n\nThe advection equations for the moving ice boundary in Eq. \\eqref{eq:freeSurface} and \\eqref{eq:lowerSurface} are discretized in time by a finite difference method and in\nspace by FEM with linear Lagrange elements for $z_s$ and $z_b$. A stabilization term is added, making the spatial discretization behave \nlike an upwind scheme in the direction of the velocity as implemented in Elmer\/ICE.\n\n\nThe advection equations Eq. \\eqref{eq:freeSurface} and Eq. \\eqref{eq:lowerSurface} are integrated in time by a semi-implicit method of first order accuracy. \nLet $c=s$ or $b$. Then the solution is advanced\nfrom time $t^n$ to $t^{n+1}=t^n+\\Delta t$ with the timestep $\\Delta t$ by \n\\begin{equation}\\label{eq:zint}\n z_c^{n+1}=z_c^n+\\Delta t(a_c^n-u_c^n \\frac{\\partial{z_{c}^{n+1}}}{\\partial x}+w_c^n).\n\\end{equation}\nThe spatial derivative of $z_c$ is approximated by FEM. A system of linear equations is solved at $t^{n+1}$ for $z_c^{n+1}$. This time discretization and its properties are \ndiscussed in \\cite{cheng2017accurate}.\n\nA stability problem in $z_b$ is encountered in the boundary condition at $\\Gamma_{bf}$ in \\cite{Durand09b}. 
\nIt is solved by expressing $z_b$ in terms of $p_w$ at $\\Gamma_{bf}$ with a damping term in \\cite{Durand09b}.\nAn alternative interpretation of the idea in \\cite{Durand09b} and an explanation follow below.\n\n\nThe relation between $u_\\mathbf{n}$ and $u_\\mathbf{t}$ at $\\Gamma_{bf}$ and $\\mathbf{u}_b=\\mathbf{u}(x, z_b(x))$ is\n\\begin{equation}\n \\mathbf{u}_b=\\left(\\begin{array}{c} u_b \\\\ w_b\\end{array}\\right)=\\left(\\begin{array}{c} z_{bx} \\\\ -1\\end{array}\\right)\\frac{u_\\mathbf{n}}{\\sqrt{1+z_{bx}^2}}\n +\\left(\\begin{array}{c} 1 \\\\ z_{bx}\\end{array}\\right)\\frac{u_\\mathbf{t}}{\\sqrt{1+z_{bx}^2}},\n\\label{eq:udef}\n\\end{equation}\nwhere $z_{bx}$ denotes $\\partial z_b\/\\partial x$. Insert $u_b$ and $w_b$ from Eq. \\eqref{eq:udef} into Eq. \\eqref{eq:lowerSurface} to obtain\n\\begin{equation}\n \\frac{\\partial z_b}{\\partial t}=a_b-u_\\mathbf{n}\\sqrt{1+z_{bx}^2}.\n\\label{eq:zbeq2}\n\\end{equation}\nInstead of discretizing Eq. \\eqref{eq:zbeq2} explicitly at $t^n$ with $u_\\mathbf{n}^{n-1}$ to determine $p_w^n$, the base coordinate is updated implicitly\n\\begin{equation}\n z_{b}^n=z_{b}^{n-1}+\\Delta t\\left(a_b^n-u_\\mathbf{n}^n\\sqrt{1+z_{bx}^2}\\right)\n\\label{eq:zbimpl}\n\\end{equation}\nin the solution of Eq. \\eqref{eq:FSweakform}.\n\nAssume that $z_{bx}$ is small.\nThe timestep restriction in Eq. \\eqref{eq:zbimpl} is estimated by considering a 2D slab of the floating ice of width $\\Delta x$ and thickness $H$. Newton's law of motion yields\n\\[\n M \\dot{u}_\\mathbf{n}= M g-\\Delta x p_w,\n\\]\nwhere $M=\\Delta x(z_s-z_b)\\rho$ is the mass of the slab. 
Divide by $M$, integrate in time for $u_\\mathbf{n}(t^m)$, let $m=n$ or $n-1$, and approximate the integral by the trapezoidal rule for the quadrature to obtain\n\\[\n\\begin{split}\n u_\\mathbf{n}(t^m)&=\\displaystyle{\\int_0^{t^m} g+\\frac{g\\rho_w}{\\rho}\\frac{z_b}{z_s-z_b}\\,\\textrm{d}s} \\\\\n &\\approx \\displaystyle{gt^m+\\frac{g\\rho_w}{\\rho}\\sum_{i=0}^m\\alpha_i\\frac{z_b^i}{z_s^i-z_b^i}\\Delta t,}\\\\\n\\end{split}\n\\] \n\\[\n \\alpha_i=0.5, i=0, m,\\quad \\alpha_i=1, i=1,\\ldots,m-1. \n\\]\nThen insert $u_\\mathbf{n}^m$ into Eq. \\eqref{eq:zbimpl}. All terms in $u_\\mathbf{n}^m$ from timesteps $i b(x)$, as the blue line in Fig. \\ref{fig:GL}, the boundary conditions are given by Eq. \\eqref{eq:BCFI}, and where the ice is in contact with the bedrock, as the red line in Fig. \\ref{fig:GL}, the boundary conditions are given by Eq. \\eqref{eq:BCGI}. \nHowever, there is another case as shown in Fig. \\ref{fig:GL2} when the net force at $x_i$ is pointing inward, namely $\\sigma_{\\mathbf{n}\\bn}(x_i)+p_w(x_i)>0$.\nThen, the floating boundary condition Eq. \\eqref{eq:BCFI} should be imposed up until the node $x_{i-1}$.\nThis can happen at some point due to the low spatial and temporal resolutions, but the node $x_i$ will move upward as long as $\\mathbf{u}\\cdot\\mathbf{n}<0$, or the net force switches signs and the condition transforms into the case in Fig. \\ref{fig:GL} when $\\sigma_{\\mathbf{n}\\bn}(x_i)+p_w(x_i)<0$.\nDenote the situation in Fig. \\ref{fig:GL} by case {\\romannumeral 1}, and the one in Fig. 
\\ref{fig:GL2} by case {\\romannumeral 2}.\nWe call the node `grounded' when it is in contact with the bedrock with the net force from the ice pointing outward ($\\sigma_{\\mathbf{n}\\bn}+p_w<0$), and `floating' when the net force is pointing inward ($\\sigma_{\\mathbf{n}\\bn}+p_w\\geq0$).\nThe element which contains both grounded and floating nodes is called the GL element; the grounded node in it is called the last grounded node and the floating one is called the first floating node.\n\nOn coarse meshes, the true position of the GL is generally not in one of the nodes, but usually between the last grounded and the first floating nodes. \nInstead of refining the mesh around the GL, which would lead to very small time steps for stability reasons, we will here introduce a subgrid model for the GL element.\n\n\nWe let $\\chi(x)=\\sigma_{\\mathbf{n}\\bn}(x)+p_w(x)$ and assume that it is linear as in Eq. \\eqref{eq:chidef} to determine the position of the GL, $x_{GL}$, in the GL element. \nIn case {\\romannumeral 2}, the GL is located between $x_{i-1}$ and $x_i$ even though the whole element $[x_{i-1},x_i]$ is geometrically grounded.\nThe equation $\\chi(x_{GL})=0$ is solved by linear interpolation between $\\chi(x_{i-1})<0$ and $\\chi(x_i)>0$ yielding a unique solution satisfying $x_{i-1}<x_{GL}<x_i$.\nIn case {\\romannumeral 1}, we instead interpolate $\\tilde{\\chi}(x)=\\sigma_{\\mathbf{n}\\bn}(x)+p_b(x)$, where $p_b(x)=-\\rho_w g b(x)$ is the water pressure evaluated at the bedrock. For $x>x_i$, we have $b(x)<z_b(x)$ and hence $p_b(x)>p_w(x)$.\nTherefore, $\\tilde{\\chi}(x_{i+1})>\\chi(x_{i+1})=0$ and $\\tilde{\\chi}(x_i)=\\chi(x_i)<0$.\nThen, a linear interpolation between $\\tilde{\\chi}(x_i)$ and $\\tilde{\\chi}(x_{i+1})$ guarantees a unique solution of $\\tilde{\\chi}(x_{GL})=0$ in the GL element $[x_i,x_{i+1}]$, see Fig. \\ref{fig:GL}.\nIn case {\\romannumeral 2}, $p_b$ can also be used since $p_b(x)=p_w(x)$ as long as the element is on the bedrock.\n\nConceptually, the linear interpolation of the function $\\tilde{\\chi}(x)$ can be considered separately by looking at the two linear functions $\\sigma_{\\mathbf{n}\\bn}(x)$ and $p_b(x)$. 
\nAs the GL always rests on the bedrock, $p_b(x_{GL})=p_w(x_{GL})$ is actually an exact representation of the water pressure imposed on the ice at GL, although geometrically $z_b(x_{GL})$ may not coincide with $b(x_{GL})$, especially on coarse meshes.\nThis also leads to the fact that the interpolated normal stress $\\sigma_{\\mathbf{n}\\bn}(x_{GL},z_b(x_{GL}))$ is a first order approximation of the normal stress at the exact GL position $(x_{GL},b(x_{GL}))$.\n\nThis correction is not necessary when the GL is advancing since the implicit treatment of the bottom surface is equivalent to additional water pressure at the stress boundary as discussed in Sect. \\ref{sec:updlower}.\n\n\n\nAfter the GL position is determined, the domains $\\Gamma_{bg}$ and $\\Gamma_{bf}$ are separated at $x_{GL}$ as in Eq. \\eqref{eq:Nitscheint} and the integrals are calculated with a high-order integration scheme as in \\cite{Seroussi14} to achieve a better resolution within the element shown in Figures \\ref{fig:GL} and \\ref{fig:GL2}.\nFor a smoother transition of $\\beta$ at $GL$, the slip coefficient is multiplied by 1\/2 at the whole GL element before integrating using the high order scheme.\n\n\n\nThe penalty term from Nitsche's method restricts the motion of the element in the normal direction. It should only be imposed on the element which is fully on the ground.\nOn the contrary, in case {\\romannumeral 1}, the GL element $[x_i,x_{i+1}]$ is not in contact with the bedrock as in Fig. \\ref{fig:GL}, so only the floating boundary condition should be used on the element $[x_i,x_{i+1}]$.\nAdditionally, the implicit representation of the bottom surface in Eq. 
\\eqref{eq:zbimpl} also implies that the case {\\romannumeral 2}\\, with retreating GL should be merged to case {\\romannumeral 1}\\, since the surface is leaving the bedrock and the normal velocity should not be forced to zero.\nTo summarize, Nitsche's penalty term should be imposed on all the fully grounded elements and partially on the GL element in the advance phase.\n\n\n\\begin{figure}[htbp]\n\\center\n \\includegraphics[width=0.45\\textwidth,page=1]{.\/Figures\/subgrid} \n \\caption{Schematic figure of Grounding Line in case {\\romannumeral 1}.\n Upper panel: the last grounded and first floating nodes as defined in Elmer\/ICE. \n Lower panel: linear interpolation to compute a more accurate position of the Grounding Line.}\n \\label{fig:GL}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\center\n \\includegraphics[width=0.45\\textwidth,page=2]{.\/Figures\/subgrid} \n \\caption{Schematic figure of Grounding Line in case {\\romannumeral 2}.\n Upper panel: the last grounded and first floating nodes as defined in Elmer\/ICE. \n Lower panel: linear interpolation to compute a more accurate position of the Grounding Line.}\n \\label{fig:GL2}\n\\end{figure}\n\n\nEquations (\\ref{eq:FS}), (\\ref{eq:freeSurface}), and (\\ref{eq:lowerSurface}) form a system of coupled nonlinear equations. 
They are solved in the same manner as in Elmer\/ICE v.8.3.\nThe $x_{GL}$ position is determined dynamically within every nonlinear iteration when solving the FS equations, and the high-order integrations are based on the current $x_{GL}$.\nThe nonlinear FS equations are solved with fixed-point iterations to a relative error of $10^{-5}$, with a limit of 25 nonlinear iterations, and the grounded condition is set if the distance between the bottom surface and the bedrock is smaller than $10^{-3}$~m.\n\n\n\n\\section{Results} \\label{sec:results}\n\nThe numerical experiments follow the MISMIP benchmark \\cite{MISMIP} and comparison is made with the results in \\cite{gagliardini2016impact}.\nUsing the experiment MISMIP 3a, the setups are exactly the same as in the advancing and retreating simulations in \\cite{gagliardini2016impact}.\nThe experiments are run with spatial resolutions of $\\Delta x=4$~km, 2~km and 1~km with 20 vertically extruded layers.\nThe timestep is $\\Delta t=0.125$~year for all three resolutions to eliminate time discretization errors when comparing different spatial resolutions.\n\n\nThe dependence on $\\gamma_0$ for the retreating ice is shown in Fig. \\ref{fig:gammas} for $\\gamma_0$ between $10^4$ and $10^9$.\nThe estimated GL positions do not vary with the choice of $\\gamma_0$ from $10^5$ to $10^8$, which suggests a suitable range for $\\gamma_0$.\nIf $\\gamma_0$ is too small ($\\gamma_0\\ll10^4$), oscillations appear in the estimated GL positions. 
\nIf $\\gamma_0$ is too large ($\\gamma_0\\gg10^8$), more nonlinear iterations are needed for each time step.\nThe same dependence on $\\gamma_0$ is observed for the advance experiments and for different mesh resolutions as well.\nFor the remaining experiments, we fix $\\gamma_0=10^6$.\n\n\\begin{figure}[htbp]\n\\center\n \\includegraphics[width=0.4\\textwidth]{.\/Figures\/MISMIP_3_gammas.pdf} \n \\caption{The MISMIP 3a retreat experiment with $\\Delta x=1000$~m for different choices of $\\gamma_0$ in\n the time interval $[0,10000]$ years.}\n\\label{fig:gammas}\n\\end{figure}\n\n\nThe GL position during 10000 years in the advance and retreat phases is displayed in Fig. \\ref{fig:MISMIP3} for different mesh sizes.\nThe range of the results from \\cite{gagliardini2016impact} with mesh resolutions $\\Delta x=25$ and $50$~m is shown as shaded background regions in purple and pink.\nWe achieve similar GL migration results in both the advance and retreat experiments with mesh sizes at least 20 times larger.\n\n\\begin{figure}[htbp]\n\\center\n \\includegraphics[width=0.45\\textwidth]{.\/Figures\/MISMIP_3.pdf} \n \\caption{The MISMIP 3a experiments for the GL position when $t\\in[0, 10000]$ with $\\Delta x=4000, 2000$ and $1000$~m for the advance (solid) and retreat (dashed) phases. \n The shaded regions indicate the range of the results in \\cite{gagliardini2016impact} with $\\Delta x=50$~m in red and $\\Delta x=25$~m in blue. }\n\\label{fig:MISMIP3}\n\\end{figure}\n\n\nWe observe oscillations at the top surface near the GL in all the experiments, as expected from \\cite{Durand09b, Schoof11}.\nA zoom-in of the surface elevation with $\\Delta x=1$~km at $t=10000$ years is shown to the left in Fig. \\ref{fig:surfOsc}, where the red dashed line indicates the estimated GL position. 
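Since the GL is located by linear interpolation of the flotation residual $f=H_{bw}\/H-\rho\/\rho_w$ between the last grounded node ($f<0$) and the first floating node ($f>0$), a minimal sketch of that interpolation follows (the function name and the density values are hypothetical, and nodal arrays are assumed ordered from grounded to floating):

```python
# Sketch (assumed, not actual Elmer/ICE code): sub-element GL position from
# linear interpolation of the flotation residual f = H_bw/H - rho/rho_w.

RHO_ICE, RHO_W = 910.0, 1028.0   # assumed densities (kg m^-3)

def interpolate_gl(x, H, H_bw, rho=RHO_ICE, rho_w=RHO_W):
    """Return x_GL where f = H_bw/H - rho/rho_w changes sign, or None.

    x, H, H_bw: nodal coordinates, ice thickness, and thickness below sea
    level, ordered from the grounded side to the floating side.
    """
    f = [hb / h - rho / rho_w for h, hb in zip(H, H_bw)]
    for i in range(len(x) - 1):
        if f[i] <= 0.0 < f[i + 1]:              # sign change brackets the GL
            t = -f[i] / (f[i + 1] - f[i])       # linear interpolation weight
            return x[i] + t * (x[i + 1] - x[i])
    return None                                 # no GL inside this node range
```

Because the interpolated root generally falls inside an element, the recovered $x_{GL}$ need not coincide with a mesh node.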
\nObviously, the estimated GL position does not coincide with any node even at the steady state.\n\\begin{figure}[htbp]\n\\center\n \\includegraphics[width=0.45\\textwidth]{.\/Figures\/retreatDetails} \n \\caption{Details of the solutions for the retreat experiment with $\\Delta x=1$~km after 10000 years. \n The solid dots represent the nodes of the elements and the vertical, red, dashed lines indicate the GL position.\n \\emph{Left panel}: The oscillations at the top surface near the GL.\n \\emph{Right panel}: The flotation criterion is evaluated by $H_{bw}\/H$. The ratio $\\rho\/\\rho_w$ is drawn as a horizontal, purple, dash-dotted line. \n }\n\\label{fig:surfOsc}\n\\end{figure}\n\n\nThe ratio between the thickness below sea level $H_{bw}$ and the ice thickness $H$ is shown in Fig. \\ref{fig:surfOsc}.\nThe horizontal, purple, dash-dotted line indicates the ratio $\\rho\/\\rho_w$ and the estimated GL is located at the red, dashed line.\nThis result confirms that the hydrostatic assumption $H\\rho=H_{bw}\\rho_w$ is not valid in the FS equations for $x>x_{GL}$ close to the GL and at the GL position, cf. \\cite{Durand09b, Schoof11}. For $x