diff --git a/data_all_eng_slimpj/shuffled/split2/finalwb b/data_all_eng_slimpj/shuffled/split2/finalwb new file mode 100644 index 0000000000000000000000000000000000000000..3452e58bad04b6d96f42a8c25d6b0236bdb844cc --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalwb @@ -0,0 +1,5 @@ +{"text":"\\section*{Background}\n\nCraps is played by rolling a pair of dice repeatedly. For most bets, only the sum of the numbers appearing on the two dice matters, and this sum has distribution\n\\begin{equation}\\label{dice}\n\\pi_j:={6-|j-7|\\over36}, \\qquad j=2,3,\\ldots,12.\n\\end{equation}\nThe basic bet at craps is the \\textit{pass-line bet}, which is defined as follows. The first roll is the \\textit{come-out roll}. If 7 or 11 appears (a \\textit{natural}), the bettor wins. If 2, 3, or 12 appears (a \\textit{craps number}), the bettor loses. If a number belonging to \n$$\n\\mathscr{P}:=\\{4,5,6,8,9,10\\}\n$$\nappears, that number becomes the \\textit{point}. The dice continue to be rolled until the point is repeated (or \\textit{made}), in which case the bettor wins, or 7 appears, in which case the bettor loses. The latter event is called a \\textit{seven out}. A win pays even money. The first roll following a decision is a new come-out roll, beginning the process again.\n\nA shooter is permitted to roll the dice until he or she sevens out. The sequence of rolls by the shooter is called the \\textit{shooter's hand}. Notice that the shooter's hand can contain winning 7s and losing decisions prior to the seven out. The \\textit{length} of the shooter's hand (i.e., the number of rolls) is a random variable we will denote by $L$. Our concern here is with \n\\begin{equation}\\label{tail}\nt(n):=\\P(L\\ge n),\\qquad n\\ge1,\n\\end{equation}\nthe tail of the distribution of $L$. For example, $t(154)$ is the probability of achieving a hand at least as long as that of Ms.~DeMauro. As can be easily verified from \n(\\ref{recursion}), (\\ref{matrixformula}), or (\\ref{closed}) below, $t(154)\\approx0.178\\,882\\,426\\times10^{-9}$; \nto state it in the way preferred by the media, this amounts to one chance in 5.59 billion, approximately. The 1 in 3.5 billion figure came from a simulation that was not long enough. The 1 in 1.56 trillion figure came from $(1-\\pi_7)^{154}$, which is the right answer to the wrong question.\n\n\n\n\n\n\n\n\\section*{Two methods}\n\nWe know of two methods for evaluating the tail probabilities (\\ref{tail}).\nThe first is by recursion. As pointed out in \\cite{Ethier}, $t(1)=t(2)=1$ and \n\\begin{eqnarray}\\label{recursion}\n\\lefteqn{t(n)}\\nonumber\\\\\n&=&\\bigg(1-\\sum_{j\\in\\mathscr{P}}\\pi_j\\bigg)t(n-1)+\\sum_{j\\in\\mathscr{P}}\\pi_j(1-\\pi_j-\\pi_7)^{n-2}\\nonumber\\\\\n&&\\quad{}+\\sum_{j\\in\\mathscr{P}}\\pi_j\\sum_{l=2}^{n-1}(1-\\pi_j-\\pi_7)^{l-2} \\pi_j\n\\,t(n-l)\n\\end{eqnarray}\nfor each $n\\ge3$. Indeed, for the event that the shooter\nsevens out in no fewer than $n$ rolls to occur, consider the result of the initial\ncome-out roll. If a natural or a craps number occurs, then, beginning with the\nnext roll, the shooter must seven out in no fewer than $n-1$ rolls. If a\npoint number occurs, then there are two possibilities. Either the point is still\nunresolved after $n-2$ additional rolls, or it is made at roll $l$ for some\n$l\\in\\{2,3,\\ldots,n-1\\}$ and the shooter subsequently sevens out in no fewer than\n$n-l$ rolls. \n\nThe second method, first suggested, to the best of our knowledge, by Peter A. Griffin in 1987 (unpublished) and rediscovered several times since, is based on a Markov chain. The state space is \n\\begin{equation}\\label{S}\nS:=\\{{\\rm co},{\\rm p}4\\mbox{-}10,{\\rm p}5\\mbox{-}9,{\\rm p}6\\mbox{-}8,7{\\rm o}\\} \\equiv \\{1,2,3,4,5\\},\n\\end{equation}\nwhose five states represent the events that the shooter is coming out, has established the point 4 or 10, has established the point 5 or 9, has established the point 6 or 8, and has sevened out. The one-step transition matrix, which can be inferred from (\\ref{dice}), is\n\\begin{equation}\\label{P}\n\\bm P:={1\\over36}\\left(\\begin{array}{ccccc}\n12&6&8&10&0\\\\\n3&27&0&0&6\\\\\n4&0&26&0&6\\\\\n5&0&0&25&6\\\\\n0&0&0&0&36\\end{array}\\right).\n\\end{equation}\nThe probability of sevening out in $n-1$ rolls or less is then just the probability that absorption in state 7o occurs by the $(n-1)$th step of the Markov chain, starting in state co. A marginal simplification results by considering the 4 by 4 principal submatrix $\\bm Q$ of (\\ref{P}) corresponding to the transient states.\nThus, we have\n\\begin{equation}\\label{matrixformula}\nt(n)=1-(\\bm P^{n-1})_{1,5} = \\sum_{j=1}^4(\\bm Q^{n-1})_{1,j}.\n\\end{equation}\nClearly, (\\ref{recursion}) is not a closed-form expression, and we do not regard (\\ref{matrixformula}) as being in closed form either. Is there a closed-form expression for $t(n)$?\n\n\n\n\n\\section*{Positivity of the eigenvalues}\n\nWe begin by showing that the eigenvalues of $\\bm Q$ are positive. The determinant of\n\\begin{eqnarray*}\n\\lefteqn{\\bm Q - z \\bm I}\\\\ &=& {1\\over36}\n\\left(\\begin{array}{cccc}\n12- 36z&6&8&10\\\\\n3&27- 36z&0&0\\\\\n4&0&26- 36z&0\\\\\n5&0&0&25- 36z\\\\\n\\end{array}\\right).\n\\end{eqnarray*}\nis unaltered by row operations. From the first row subtract $6\/(27-36z)$ times the second row, $8\/(26-36z)$ times the third row,\nand $10\/(25- 36z)$ times the fourth row, cancelling the entries 6\/36, 8\/36, and 10\/36 and making the (1,1) entry equal to 1\/36 times\n\\begin{equation}\\label{1,1-entry}\n12- 36z - 3\\, \\frac{6}{ 27-36z} - 4\\, \\frac{8}{26-36z} - 5\\, \\frac{10}{25- 36z}.\n\\end{equation}\nThe determinant of $\\bm Q - z \\bm I $, and therefore the characteristic polynomial $q(z)$ of \n$\\bm Q$ is then just the product of the diagonal entries in the transformed matrix,\nwhich is (\\ref{1,1-entry}) multiplied by $(27- 36z)(26- 36z)(25- 36z)\/(36)^4$. Thus,\n\\begin{eqnarray*}\nq(z)&=&[(12- 36z)(27- 36z)(26- 36z)(25- 36z)\\\\\n&&\\quad{}-18(26- 36z)(25- 36z)\\\\\n&&\\quad{}-32(27- 36z)(25- 36z)\\\\\n&&\\quad{}-50(27- 36z)(26- 36z)]\/(36)^4.\n\\end{eqnarray*}\nWe find that $q(1),q(27\/36),q(26\/36),q(25\/36),q(0)$ alternate signs and therefore the eigenvalues are positive and interlaced between the diagonal entries (ignoring the entry 12\/36). \nMore precisely, denoting the eigenvalues by $1>e_1>e_2>e_3>e_4>0$, we have\n$$\n1>e_1>{27\\over36}>e_2>{26\\over36}>e_3>{25\\over36}>e_4>0.\n$$\n\nThe matrix $\\bm Q$, which has the structure of an arrowhead matrix \\cite{HornJohnson}, is positive definite, although not symmetric.\nThis is easily seen by applying the same type of row operations to the symmetric part $\\bm A = \\frac{1}{2}(\\bm Q+\\bm Q^\\T)$ of $\\bm Q$ to show that the eigenvalues of $\\bm A$ interlace its diagonal elements (except 12\/36). But a symmetric matrix is positive definite if and only if all its eigenvalues are positive, and a non-symmetric matrix is positive definite if and only if its symmetric part is positive definite, confirming that $\\bm Q$ is positive definite.\n\n\n\n\n\\section*{A closed-form expression}\n\nThe eigenvalues of $\\bm Q$ are the four roots of the quartic equation $q(z)=0$ or\n$$\n23328z^4-58320z^3+51534z^2-18321z+1975=0,\n$$\nwhile $\\bm P$ has an additional eigenvalue, 1, the spectral radius. \nWe can use the quartic formula (or \\textit{Mathematica}) to find these roots. We notice that the complex number \n$$\n\\alpha:=\\zeta^{1\/3}+{9829\\over \\zeta^{1\/3}}, \n$$\nwhere\n$$\n\\zeta:=-710369+18i\\sqrt{1373296647},\n$$ \nappears three times in each root. Fortunately, $\\alpha$ is positive, as we see by writing $\\zeta$ in polar form, that is, $\\zeta=re^{i\\theta}$. We obtain\n$$\n\\alpha=2\\sqrt{9829}\\,\\cos\\bigg[{1\\over3}\\cos^{-1}\\bigg(-{710369\\over9829\\sqrt{9829}}\\bigg)\\bigg].\n$$\nThe four eigenvalues of $\\bm Q$ can be expressed as\n\\begin{eqnarray*}\ne_1&:=&e(1,1),\\\\\ne_2&:=&e(1,-1),\\\\\ne_3&:=&e(-1,1),\\\\\ne_4&:=&e(-1,-1),\n\\end{eqnarray*}\nwhere\n\\begin{eqnarray*}\ne(u,v)&:=&{5\\over8}+{u\\over72}\\sqrt{{349+\\alpha\\over3}}\\\\\n&&\\quad{}+{v\\over72}\\sqrt{{698-\\alpha\\over3}-2136u\\sqrt{3\\over349+\\alpha}}.\n\\end{eqnarray*}\n\nNext we need to find right eigenvectors corresponding to the five eigenvalues of $\\bm P$. Fortunately, these eigenvectors can be expressed in terms of the eigenvalues. Indeed,\nwith $\\bm r(x)$ defined to be the vector-valued function\n$$\n\\left(\\begin{array}{c}-5+(1\/5)x\\\\\n-175+(581\/15)x-(21\/10)x^2+(1\/30)x^3\\\\\n275\/2-(1199\/40)x+(8\/5)x^2-(1\/40)x^3\\\\\n1\\\\\n0\\end{array}\\right)\n$$\nwe find that right eigenvectors corresponding to eigenvalues $1,e_1,e_2,e_3,e_4$ are\n$$\n(1,1,1,1,1)^\\T,\\;\\bm r(36e_1),\\;\\bm r(36e_2),\\;\\bm r(36e_3),\\;\\bm r(36e_4),\n$$\nrespectively. Letting $\\bm R$ denote the matrix whose columns are these right eigenvectors and putting $\\bm L:=\\bm R^{-1}$, the rows of which are left eigenvectors, we know by (\\ref{matrixformula}) and the spectral representation that\n$$\nt(n)=1-\\{\\bm R\\,{\\rm diag}(1,e_1^{n-1},e_2^{n-1},e_3^{n-1},e_4^{n-1})\\bm L\\}_{1,5}.\n$$\n\nAfter much algebra (and with some help from \\textit{Mathematica}), we obtain\n\\begin{equation}\\label{closed}\nt(n)=c_1 e_1^{n-1}+c_2 e_2^{n-1}+c_3 e_3^{n-1}+c_4 e_4^{n-1},\n\\end{equation}\nwhere the coefficients are defined in terms of the eigenvalues and the function\n\\begin{eqnarray*}\nf(w,x,y,z)&:=&(-25+36w)[4835-5580(x+y+z)\\\\\n&&\\quad{}+6480(xy+xz+yz)-7776xyz]\\\\\n&&\\;\\qquad\/[38880(w-x)(w-y)(w-z)]\n\\end{eqnarray*}\nas follows:\n\\begin{eqnarray*}\nc_1&:=&f(e_1,e_2,e_3,e_4),\\\\ c_2&:=&f(e_2,e_3,e_4,e_1),\\\\ c_3&:=&f(e_3,e_4,e_1,e_2),\\\\ c_4&:=&f(e_4,e_1,e_2,e_3).\n\\end{eqnarray*}\nOf course, (\\ref{closed}) is our closed-form expression. \n\nIncidentally, the fact that\n$t(1)=t(2)=1$ implies that\n\\begin{equation}\\label{sum1}\nc_1+c_2+c_3+c_4=1\n\\end{equation}\nand\n$$\nc_1e_1+c_2e_2+c_3e_3+c_4e_4=1.\n$$\n\nIn a sequence of independent Bernoulli trials, each with success probability $p$, the number of trials $X$ needed to achieve the first success has the \\textit{geometric distribution} with parameter $p$, and\n$$\nP(X\\ge n)=(1-p)^{n-1},\\qquad n\\ge1.\n$$\nIt follows that the distribution of $L$ is \\textit{a linear combination of four geometric distributions}. It is not a convex combination: (\\ref{sum1}) holds but, as we will see,\n$$ \nc_1>0,\\quad c_2<0,\\quad c_3<0,\\quad c_4<0. \n$$\nIn particular, we have the inequality\n\\begin{equation}\\label{ineq}\nt(n) - U$, in which $\\Psi^{\\lambda}$ is the wavefunction that minimizes $\\hat{T}+\\lambda \\hat{V}_{\\rm ee}$ but has the same density as the real ground-state system \\cite{Y87}. At $\\lambda=0$, one recovers the KS system, and at $\\lambda=1$, one recovers the real interacting system. In this way, one connects the KS system with the real interacting system by changing $\\lambda$ from $0$ to $1$.\n\nA cartoon of $W_\\lambda$ versus $\\lambda$ is shown in the upper panel of Fig. \\ref{cartoons}. By definition, we have $W_0=E_{\\sss X}$ and the area under the curve is $E_{\\sss XC}$. We can also identify the kinetic correlation energy $T_{\\sss C}=E_{\\sss XC}-W_1$ in this graph.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=3.5in]{cart_n.pdf}\n\\includegraphics[width=3.5in]{cart_u.pdf}\n\\caption{Traditional (upper panel) and the new (lower panel) adiabatic connection curves.}\n\\label{cartoons}\n\\end{center}\n\\end{figure}\n\n\\subsection{Strictly-Correlated Reference System}\n\nThe KS wavefunction can be defined as the wavefunction that minimizes the kinetic energy alone, but whose density matches that of the interacting system. Analogously, the SC wavefunction is found by minimizing the electron-electron repulsion energy alone, subject to reproducing the exact density. In practice, there might be multiple degeneracies, so it is best defined in the limit as the kinetic energy goes to zero.\n\nThen, using the strictly-correlated (SC) system as the reference, the energy of the true interacting ground state is:\n\\begin{equation}\nE[n] = U_{\\rm sc}[n] + {\\int d^3 r \\,} v_{\\rm ext}(\\mathbf{r}) \\, n(\\mathbf{r}) + T_{\\sss S} [n] + E_{\\sss DC}[n],\n\\label{eingswsc}\n\\end{equation}\nwhere $U_{\\rm sc}=<\\Psi^\\infty |\\hat{V}_{\\rm ee}|\\Psi^\\infty>$. In KS DFT quantities, $U_{\\rm sc}=U+W_\\infty$ [see Eq. (\\ref{normalac})].\n\nJust as we separate the Hartree energy from the total energy in KS DFT [Eq. (\\ref{eings})], here in Eq. (\\ref{eingswsc}) we separate $T_{\\sss S}[n]$ from the total energy in SC DFT. There are a variety of algorithms that can be used to extract this quantity accurately for any given density, effectively by inverting the KS equations \\cite{PVW03}. We label the remainder as the ``decorrelation energy'', $E_{\\sss DC}[n]$. The reason we call it ``decorrelation energy'' is that, if we consider the electrons in the reference system ``strictly correlated'', with energy $U_{\\rm sc}[n]$, the electrons in the real system are \\emph{less} correlated than in the reference system. We will see the physical meaning explicitly very soon.\n\nSo far, we have defined our reference, and next we deduce an exact expression for the newly-defined ``decorrelation energy'' $E_{\\sss DC}[n]$ with the adiabatic connection formalism, just as one does for $E_{\\sss XC}[n]$ \\cite{SPL99, SPK00} in the KS DFT [Eq.( \\ref{normalac})].\n\n\\subsection{Strictly-Correlated Adiabatic Connection Formula}\n\nWe denote $\\Psi^{\\mu}$ as the wavefunction minimizing $\\hat{H}^{\\mu} = \\mu^2 \\hat{T} + \\hat{V}_{\\rm ee} + \\hat{V}_{\\rm ext}^{\\mu}$ with density $n({\\mathbf r})$. For $\\mu=0$, we recover the strictly-correlated system, and for $\\mu=1$, we recover the real system. For each value of $\\mu$, there is a corresponding unique external potential yielding the correct density, $v_{\\rm ext}^{\\mu}({\\mathbf r})$. So we have:\n\\begin{equation}\nE^{\\mu}\n=\\langle \\Psi^\\mu|\\, \\mu^2 \\hat{T} + \\hat{V}_{\\rm ee} + \\hat{V}_{\\rm ext}^{\\mu} | \\Psi^{\\mu} \\rangle\n\\label{minihmu}\n\\end{equation}\nUsing Hellmann-Feynman theorem \\cite{LP85}, we have:\n\\begin{equation}\n\\frac{dE^{\\mu}}{d\\mu}=\\left< \\Psi^{\\mu} \\left| \\frac{d\\hat{H}^{\\mu}}{d\\mu} \\right| \\Psi^{\\mu} \\right> = \\left< \\Psi^{\\mu} \\left| 2\\mu \\hat{T} + \\frac{d\\hat{V}_{\\rm ext}^{\\mu}}{d\\mu} \\right| \\Psi^{\\mu} \\right>.\n\\label{hft}\n\\end{equation}\nIntegrating and cancelling the external potential terms at both sides, we recognize the left hand side is just $T_{\\sss S}[n]+E_{\\sss DC}[n]$. Thus:\n\\begin{equation}\nE_{\\sss DC}[n] = \\int_0^1 d\\mu \\, 2\\mu \\left< \\Psi^{\\mu} \\left| \\hat{T} \\right| \\Psi^{\\mu} \\right> - T_{\\sss S}[n].\n\\label{ac}\n\\end{equation}\nThis is our adiabatic connection formula for strictly-correlated electrons. Finally, since $T_{\\sss C}[n] = T[n] - T_{\\sss S}[n]$, and $T_{\\sss S}[n]$ is independent of $\\mu$:\n\\begin{equation}\nE_{\\sss DC}[n] = \n\\int_0^1 d\\mu \\, K_\\mu[n],\n\\label{updnac}\n\\end{equation}\nwhere\n\\begin{equation}\nK_\\mu[n] = 2\\mu\\, T_{\\sss C}^\\mu[n].\n\\label{Kmudef}\n\\end{equation}\nThis is the SC doppelganger of Eq. (\\ref{normalac}). We plot a cartoon of the integrand $K_\\mu$ vs. $\\mu$ in the lower panel of Fig. \\ref{cartoons}, identifying the area below the curve as $E_{\\sss DC}$, and noting that $K_1=2T_{\\sss C}$.\n\n\\subsection{Relation to Kohn-Sham DFT}\n\nFrom a formal viewpoint, what we derived here is not new, but simply another way to describe the real interacting system. Thus we can relate all quantities defined here, such as $E_{\\sss DC}[n]$ and $K_{\\mu}[n]$, to quantities defined in the traditional KS DFT. Since both Eq. (\\ref{eingswsc}) and Eq. (\\ref{eings}) are exact for the real system, and if we use the expression of $U_{\\rm sc}[n]$ in KS language [see discussion below Eq. (\\ref{eingswsc})], we find:\n\\begin{equation}\nE_{\\sss DC}[n] = E_{\\sss XC}[n] - W_\\infty[n].\n\\label{edcexc}\n\\end{equation}\nThus $E_{\\sss DC}[n]$ defined in our theory is just the difference between the usual exchange-correlation energy of the real system, $E_{\\sss XC}[n]$, and the potential contribution to the exchange-correlation energy of the strictly-correlated system, $W_\\infty[n]$.\n\nWe can also deduce an expression for $K_\\mu$ in terms of $W_\\lambda$. Since $\\Psi^{\\mu}$ minimizes $\\hat{H}^{\\mu}=\\mu^2 \\hat{T}+\\hat{V}_{\\rm ee}$ while yielding $n({\\mathbf r})$, and $\\Psi^{\\lambda}$ minimizes $\\hat{T}+\\lambda \\hat{V}_{\\rm ee}$, we deduce $\\Psi^{1\/ \\mu^2}=\\Psi^{\\lambda}$. Now, from the scaling properties of KS DFT \\cite{L91}, we know:\n\\begin{equation}\nT_{\\sss C}^{\\lambda}=E_{\\sss C}^{\\lambda}-\\lambda \\frac{dE_{\\sss C}^{\\lambda}}{d\\lambda}.\n\\label{inttc}\n\\end{equation}\nIf we write $E_{\\sss C}^{\\lambda}=T_{\\sss C}^{\\lambda}+U_{\\sss C}^{\\lambda}$, i.e., $U_{\\sss C}^\\lambda$ is the potential contribtion to $E_{\\sss C}^\\lambda$, $U_{\\sss C}^\\lambda=\\lambda (W_\\lambda - E_{\\sss X})$, we have:\n\\begin{equation}\n\\frac{dT_{\\sss C}^{\\lambda}}{d\\lambda}=\\frac{U_{\\sss C}^{\\lambda}}{\\lambda}-\\frac{dU_{\\sss C}^{\\lambda}}{d\\lambda}.\n\\label{difftc}\n\\end{equation}\nIntegrating over $\\lambda$ from 0 to $1\/ \\mu^2$, and using the definition of $W_\\lambda$ in Eq. (\\ref{normalac}) and that $E_{\\sss X}^{\\lambda}=\\lambda E_{\\sss X}$ by scaling \\cite{L91}, we can express $T_{\\sss C}^{\\mu}[n]$ in terms of $W_\\lambda[n]$, we find:\n\\begin{equation}\nK_\\mu[n] = 2\\mu \\int_0^{1\/ \\mu^2} d\\lambda \\left( W_\\lambda[n] - W_{1\/ \\mu^2}[n] \\right).\n\\label{wl4kmu}\n\\end{equation}\nFrom this relation, we can generate the new adiabatic connection curve, as long as we know the integrand of the KS adiabatic connection, i.e. $W_\\lambda[n]$ for $\\lambda=1$ to $\\infty$.\n\n\\subsection{Exact conditions}\n\nMany of the well-established exact conditions on the correlation energy can be translated and applied to the decorrelation energy. In particular, the simple relations between scaling the density and altering the coupling constant all apply, i.e.,\n\\begin{equation}\nE_{\\sss C}^\\lambda[n]=\\lambda^2\\, E_{\\sss C}[n_{1\/\\lambda}],\n\\end{equation}\nwhere $n_\\gamma({\\mathbf r}) = \\gamma^3\\, n(\\gamma {\\mathbf r})$. Thus, in terms of scaling:\n\\begin{equation}\nK_\\mu [n] = \\frac{2}{\\mu^3} T_{\\sss C}[n_{\\mu^2}].\n\\end{equation}\nNote that, as $\\mu \\to \\infty$, $K_\\mu \\to 0$, while $K_{\\mu=0}=2W_{\\infty}'$, where $W_{\\infty}'$ is defined in the expansion of $W_\\lambda$ as $\\lambda \\to \\infty$ \\cite{SPK00}:\n\\begin{equation}\nW_\\infty'=\\lim_{\\lambda \\to \\infty} \\sqrt{\\lambda} \\left( W_\\lambda-W_\\infty \\right).\n\\label{watinfty}\n\\end{equation}\nThus the SC energy is found from solving the strictly-correlated system, while $K_{\\mu=0}$ is determined by the zero-point oscillations around that solution. Both are currently calculable for spherical systems \\cite{S07,GVS09}.\n\nThe most general property known about the correlation energy \\cite{L91} is that, under scaling toward lower densities, it must become more negative. In turn, this implies that $W_\\lambda$ is never positive. Using the definition of $T_{\\sss C}^\\mu$ and changing variable $\\lambda=1\/\\mu^2$ in Eq. (\\ref{difftc}), we find:\n\\begin{equation}\n\\frac{dT_{\\sss C}^\\mu}{d\\mu}=\\frac{2}{\\mu^5}\\frac{dW_\\lambda}{d\\lambda}<0,\n\\label{tcmul0}\n\\end{equation}\nthen using $K_\\mu=2\\mu T_{\\sss C}^\\mu$ and the fact that $K_\\mu>0$, we find:\n\\begin{equation}\n\\frac{d}{d\\mu} \\ln K_\\mu <0.\n\\label{inequality}\n\\end{equation}\nAlso, because $T_{\\sss C}^\\mu>0$, so $K_\\mu=2\\mu T_{\\sss C}^\\mu>0$, and $E_{\\sss DC}$, as an integration of $K_\\mu$, is always positive.\n\nBased on these properties of $K_\\mu$, a crude approximation to $K_\\mu$ can be a simple exponential parametrization, using $K_0$ and the derivative of $\\ln K$ at $\\mu=0$ as inputs:\n\\begin{equation}\nK=K_0\\,e^{-\\gamma \\mu}, \\hspace{0.5in} \\gamma=-\\left. \\frac{d}{d\\mu} \\ln K \\right|_0.\n\\label{simpleunif}\n\\end{equation}\n\n\\section{Illustrations}\nIn this section, we illustrate the theory developed above on three different systems, to show how $K_\\mu$ behaves for very different systems, and where the adiabatic connection formula might be most usefully approximated.\n\n\\subsection{Uniform Electron Gas}\n\nFor a uniform electron gas, we assume we know the correlation energy per particle, $\\epsilon_{\\sss C}$, accurately as a function of $r_{s}=(3\/4\\pi n)^{1\/3}$. In order to apply Eq. (\\ref{wl4kmu}) to calculate $K_\\mu[n]$, we use $\\epsilon _{\\sss C}^{\\lambda} (r_s) = \\lambda ^2 \\epsilon _{\\sss C} (\\lambda r_s)$ \\cite{L91}. Substituting into Eq. (\\ref{inttc}), changing variables $\\lambda=1\/ \\mu^2$, and using $K_\\mu=2 \\mu T_{\\sss C}^{\\mu}=N \\kappa_\\mu$, with $N$ the number of particles, we find:\n\\begin{equation}\n\\kappa_\\mu^{\\rm unif} = -\\frac{2}{\\mu^3} \\frac{d}{dr_s} \\left. \\left( r_s \\epsilon_{\\sss C}(r_s) \\right) \\right| _{r_s\/\\mu^2}.\n\\label{unigaskmu}\n\\end{equation}\nUsing Eq. (\\ref{edcexc}) and the definition of $W_\\infty$, we find:\n\\begin{equation}\n\\epsilon_{\\sss DC}^{\\rm unif}=\\epsilon_{\\sss C}+\\frac{d_0}{r_s},\n\\label{edcunif}\n\\end{equation}\nwhere $d_0$ is defined below and $d_0=0.433521$. In the large-$r_s$ limit or the low-density expansion \\cite{PW92}: \n\\begin{equation}\n\\epsilon _{\\sss C} (r_s) = - \\frac{d_0}{r_s} + \\frac{d_1}{r_s^{3\/2}} + \\frac{d_2}{r_s^2}+\\cdots\n\\label{unigasec}\n\\end{equation}\nwhere $d_2=-3.66151$ from data of Ref. \\cite{PW92}. Substituting this expansion into Eq. (\\ref{unigaskmu}), we find:\n\\begin{equation}\n\\kappa_\\mu^{\\rm unif} = \\frac{d_1}{r_s^{3\/2}}+ 2 \\mu \\frac{d_2}{r_s^2}+\\cdots \\hspace{0.5in} \\mbox{as $\\mu \\to 0$}.\n\\label{unigaskmuexpn}\n\\end{equation}\nThus $\\kappa_\\mu$ is expected to have a well-behaved expansion in powers of\n$\\mu$ for small $\\mu$.\n\nUsing Perdew and Wang's \\cite{PW92} parametrization of the correlation energy of the uniform gas, we plot $\\kappa_\\mu$ vs. $\\mu$ for $r_s=1$ in Fig. \\ref{unigasplot}, and find $\\epsilon_{\\sss DC}=0.374$ at $r_s=1$.\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=3.5in]{unigas.pdf}\n\\caption{Exact adiabatic connection curve $\\kappa_{\\mu}$ for uniform electron gas ($r_s=1$) and a simple exponential parametrization.}\n\\label{unigasplot}\n\\end{center}\n\\end{figure}\n\nUsing the exact curve for $r_s=1$ in the simple exponential parametrization [Eq. (\\ref{simpleunif})], we find $\\kappa_0=1.44073$ and $\\gamma=5.0826$. We plot the exponential parametrization in Fig. \\ref{unigasplot} and we can see that it decays much faster than the exact curve, producing a $\\epsilon_{\\sss DC}$ that is too small by about 25\\%, which means about 150\\% larger in $|\\epsilon_{\\sss C}|$ [see Eq. (\\ref{edcexc})].\n\nWe calculated $\\epsilon_{\\sss DC} \/ |\\epsilon_{\\sss C}|$ for different values of $r_s$ and plot the curve in Fig. \\ref{diffrs}. At small $r_s$, $\\epsilon_{\\sss DC} \\gg |\\epsilon_{\\sss C}|$, which suggests that the KS reference system is a better starting point, as a smaller contribution to the energy needs to be approximated. At large $r_s$, $|\\epsilon_{\\sss C}| > \\epsilon_{\\sss DC}$ so $\\epsilon_{\\sss DC}$ is a smaller quantity and may be better approximated. Under such circumstances, the strictly-correlated system might serve as a better reference. For the uniform gas, the switch-over occurs at about $r_s=16$, which is at densities much lower than those relevant to typical processes of chemical and physical interest. However, as we show below, for systems with static correlation, this regime can occur much more easily.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=3.5in]{edcvec.pdf}\n\\caption{$\\epsilon_{\\sss DC} \/ |\\epsilon_{\\sss C}|$ for different $r_s$ for uniform electron gas.}\n\\label{diffrs}\n\\end{center}\n\\end{figure}\n\n\\subsection{Hooke's Atom}\nAs we pointed out, as long as we have an approximate $W_\\lambda[n]$ for $\\lambda$ between 1 and $\\infty$, we can substitute it into Eq. (\\ref{wl4kmu}) to get the new adiabatic connection formula for the decorrelation energy. Of course, most such formulas focus on the shape between $0$ and $1$, since only that section is needed for the regular correlation energy. But any such approximate formula can be equally applied to $K_\\mu$, yielding an approximation for the decorrelation energy. Peach et al. \\cite{PMTT08} analyze various parametrizations for $W_\\lambda$, and the same forms can be used as well to parametrize $K_\\mu$, based on the similar shape of $W_\\lambda$ and $K_\\mu$ curves. In general, application to $K_\\mu$ will yield a distinct approximation to the ground-state energy, with quite different properties.\n\nTo give just one example, one of the earliest sensible smooth parametrizations is the [1,1] Pade of Ernzerhof \\cite{E96}:\n\\begin{equation}\nW_\\lambda = a \\left( \\frac{1+b\\lambda}{1+c\\lambda} \\right).\n\\end{equation}\nOne can imagine applying it with inputs of e.g., $E_{\\sss X}$, $W'_0$ given by G\\\"{o}rling-Levy perturbation theory, and $W_\\infty$ from the SC limit. It yields a sensible approximation to $W_\\lambda$ in the 0 to 1 range, but because it was not designed with the strictly-correlated limit in mind, the formula itself is not immediately applicable to the decorrelation energy, since, e.g., $K_{\\mu=0}$ vanishes. However, much more sensible is to make the same approximation directly for $K_\\mu$ instead, if one is doing an SC calculation, i.e.,\n\\begin{equation}\nK_\\mu = \\tilde{a} \\left( \\frac{1+\\tilde{b}\\mu}{1+\\tilde{c}\\mu} \\right),\n\\end{equation}\nwhose inputs could be $K_0$, $K'_0$, and, e.g., a GGA for $K_{\\mu=1}$. This is then a very different approximation from the same idea applied to the usual adiabatic connection formula.\n\nOn the other hand, there are several approximations designed to span the entire range of $\\lambda$, the most famous being ISI (interaction strength interpolation) model \\cite{SPKb00} developed by Seidl et al. This model uses the values and the derivatives of $W_\\lambda$ at two limits, namely the high-density limit (KS system, $\\lambda=0$) and the low-density limit (strictly-correlated system, $\\lambda \\to \\infty$), to make an interpolation. Another approximation to $W_\\lambda$ is developed in our previous work \\cite{LB09}, which employs $W_0, W_\\infty$ and $W_0'$ as inputs. We compare the approximate $K_\\mu$'s obtained from the two models.\n\nHooke's atom is a two-electron system (i.e., with Coulomb repulsion) in a spherical harmonic well \\cite{T94}. Using the accurate values $W_0=-0.515, W_\\infty=-0.743, W_0'=-0.101$ \\cite{MTB03}, and $W_\\infty'=0.208$ \\cite{P09}, we find:\n\\begin{equation}\nK_\\mu^{\\rm ISI} = -0.947\\mu + 1.029 A\\mu -\\frac{0.336}{\\mu B} +0.270 \\mu \\ln B,\n\\label{kmuisi4ho}\n\\end{equation}\nwhere $A=\\sqrt{1+0.653\/ \\mu^2}$ and $B=A-0.263$. With the same data substituted in $W^{\\rm simp}$ \\cite{LB09}, we find:\n\\begin{equation}\nK_\\mu^{\\rm simp} = -\\frac{0.228}{\\alpha^4 \\mu} (\\alpha^3-\\alpha^2+1)+1.287 \\mu (\\alpha-1),\n\\label{kmusimp4ho}\n\\end{equation}\nwhere $\\alpha=\\sqrt{1+0.354\/ \\mu^2}$.\nWe plot the two forms of $K_\\mu$ in Fig. \\ref{hookeplot}. The exact curve (down to $\\mu=0.5$) is taken from Ref. \\cite{MTB03}. We compare three quantities in Table \\ref{hooketable}. Although $K_\\mu^{\\rm ISI}$ contains a spurious $\\mu \\ln \\mu$ term as $\\mu \\to 0$ \\cite{S07, GVS09, LB09}, it nonetheless yields accurate results. The simple model, applied with the usual inputs, is less accurate pointwise, but integrates to an accurate value.\n\n\\begin{table}[h]\n\\caption{Comparison of several quantities for three approximations to $K_\\mu$. Note: ISI uses $K_0$ as an input. The exact values are taken from Ref. \\cite{MTB03}.}\n\\begin{center}\n\\begin{tabular}{c|c c c c}\n\\hline\\hline\n & exact & ISI & simp & exponential\\\\\n\\hline\n $K_0$ & 0.416 & 0.416 & 0.383 & 0.456\\\\\n $K_1$ & 0.058 & 0.054 & 0.059 & 0.058\\\\\n $E_{\\sss DC}$ & 0.189 & 0.191 & 0.190 & 0.193\\\\\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\label{hooketable}\n\\end{table}\n\nWe can try the simple exponential parametrization Eq. (\\ref{simpleunif}) for $K_\\mu$ again for Hooke's atom. Because we do not know the value of $d\/d\\mu \\ln K_\\mu$ at $\\mu=0$ exactly, instead we do an exponential fitting using the method of least squares, with the exact $K_\\mu$ values (between $\\mu=0.5$ and 1) taken from Ref. \\cite{MTB03}. We plot $K_\\mu$ vs. $\\mu$ in Fig. \\ref{hookeplot} and compare several quantities in Table \\ref{hooketable}.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=3.5in]{hooke.pdf}\n\\caption{Adiabatic connection curves for Hooke's atom. The exact curve (down to $\\mu=0.5$) is taken from Ref. \\cite{MTB03}.}\n\\label{hookeplot}\n\\end{center}\n\\end{figure}\n\n\n\\subsection{H$_2$ Bond Dissociation}\nBond dissociation of the H$_2$ molecule produces a well-known dilemma in computational chemistry \\cite{Koch, PSB95, B01, CMY08}. In the exact case, as the bond length $R \\to \\infty$, the hydrogen molecule should dissociate to two free hydrogen atoms, with the ground state always a singlet and spin-unpolarized. However, spin-restricted, e.g., restricted Hartree-Fock or restricted Kohn-Sham DFT, give the correct spin multiplicity, i.e. the wavefunction is an eigenfunction of $\\hat{S}^2$, but produce an overestimated total energy, much higher than that of two free hydrogen atoms. Spin-unrestricted, e.g., unrestricted Hartree-Fock or unrestricted Kohn-Sham DFT, give a fairly good total energy, but the wavefunction is spin-contaminated, i.e., the deviation of $< \\hat{S}^2 >$ from the exact value is significant. This is known as ``symmetry breaking'' in H$_2$ bond dissociation.\n\nFuchs et al. \\cite{FNGB05} argued that DFT within RPA (random phase approximation) gives a correct picture of the H$_2$ bond dissociation within the spin-restricted KS scheme. They also gave highly-accurate adiabatic connection curves for ground-state H$_2$ at bond length $R=1.4$\\AA\\, and stretched H$_2$ at bond length $R = 5$\\AA. The curves were interpolated particularly between $\\lambda=0$ and $1$, shown as the difference of the integrand, $\\Delta W_\\lambda$, between the stretched H$_2$ molecule and two free H atoms (Fig. 1 and 3 of Ref. \\cite{FNGB05}). \n\nFor $R=1.4$\\AA\\, and $R=5$\\AA, if we use an interpolation (see Ref. 63 in Fuchs paper \\cite{FNGB05}) to estimate $\\Delta W_\\lambda$, we find reasonable values $\\Delta W_\\infty=-7.00$ and $\\Delta W_\\infty=0.13$, respectively. Using Eq. (\\ref{edcexc}), we find $\\Delta E_{\\sss DC}=4.96$ and $\\Delta E_{\\sss DC}=0.69$, respectively. We compare $\\Delta E_{\\sss DC}$ and $\\Delta E_{\\sss C}$ values in Table \\ref{stretchedtable}. The comparison shows a physical example where the strictly-correlated system is a better starting point in the calculation.\n\n\\begin{table}[h]\n\\caption{Comparison of several quantities for stretched H$_2$ at different bond lengths. The values for $\\Delta E_{\\sss X}$ and $\\Delta E_{\\sss XC}$ are taken from Ref. \\cite{FNGB05}. All values are in eV.}\n\\begin{center}\n\\begin{tabular}{c|c c c}\n\\hline\\hline\nbond length & $1.4$\\AA & $5$\\AA & $\\infty$ \\\\\n\\hline\n $\\Delta E_{\\sss X}$ & -0.98 & 5.85 & 8.5 \\\\\n $\\Delta E_{\\sss XC}$ & -2.04 & 0.82 & 0.0 \\\\\n $\\Delta W_\\infty$ & -7.00 & 0.13 & 0.0 \\\\\n $\\Delta E_{\\sss C}$ & -1.06 & -5.03 & -8.5 \\\\\n $\\Delta E_{\\sss DC}$ & 4.96 & 0.69 & 0.0 \\\\\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\label{stretchedtable}\n\\end{table}\n\nWe can see that at the equilibrium bond length, $|\\Delta E_{\\sss C}|$ is much smaller than $\\Delta E_{\\sss DC}$, presumably making it easier to approximate the ground-state energy starting from the KS reference system. This is typical at equilibrium. However for stretched bonds, $\\Delta E_{\\sss DC}$ is much smaller than $|\\Delta E_{\\sss C}|$, and so $\\Delta E_{\\sss DC}$ instead may be better accurately approximated in the calculation and the strictly-correlated system should be a better reference. Molecules with strong static correlation, such as Cr$_2$ and O$_3$, might fall somewhere inbetween.\n\n\\section{Conclusion}\n\nIn this paper, we constructed an adiabatic connection formalism for the strictly-correlated system. We found that this adiabatic connection formula and curve can be well defined with respect to this new reference. Our formula connects the strictly-correlated system and the real system, by Eq. (\\ref{updnac}). We also defined the quantity for this new integral ``decorrelation energy'' and related this with the usual KS adiabatic connection. We illustrated how the decorrelation energy behaves, using the uniform electron gas, Hooke's atom, and stretched H$_2$ as examples.\n\nWe emphasize again that a real application of this theory is only possible when the reference, i.e., the strictly-correlated system, can be routinely calculated. At present, one can calculate quantities such as $U_{\\rm sc}$ exactly only for spherical symmetric systems \\cite{SGS07}. However, nonempirical approximations to $E_{\\sss XC}$ of Kohn-Sham theory can be employed to estimate $W_\\infty$ with useful accuracy \\cite{PTSS04}. The computation of this quantity may become much easier in the future \\cite{GSV09}. If this is so, then based on the properties discussed here, the strictly-correlated reference may be preferable in cases that are difficult for standard KS DFT calculations with standard approximations to $E_{\\sss XC}$. In fact, a recent work \\cite{GSV09} independently shows progress using exactly the formalism discussed here and suggests approximations to $E_{\\sss DC}$. In any event, the advent of strictly-correlated calculations opens up a whole new alternative approach to DFT calculations of electronic structure, and only experience can show how and when this will prove more fruitful than the traditional (KS) scheme.\n\n\\section{Acknowledgement}\nWe thank John Perdew, Michael Seidl and Paola Gori-Giorgi for kind discussions. This work is supported by the National Science Foundation under No. CHE-0809859.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nUnsupervised nonlinear dimensionality reduction methods, which embed high-dimensional data to a low-dimensional space, have been extensively deployed in many real-world applications for data visualization. Data visualization is an important component of data exploration and data analytics, as it helps data analysts to develop intuitions and gain deeper understanding about the mechanisms underlying data generation. Comprehensive surveys about dimensionality reduction and data visualization methods can be found in van der Maaten et al. (2009)~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{Maaten08dimensionalityreduction} and Burges (2010)~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{dimension-reduction-a-guided-tour-2}. Among these approaches, nonparametric neighbor embedding methods such as t-SNE~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{van2008visualizing} and Elastic Embedding~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{carreira2010elastic} are widely adopted. They generate low-dimensional latent representations by preserving neighboring probabilities of high-dimensional data in a low-dimensional space, which involves pairwise data point comparisons and thus has quadratic computational complexity with respect to the size of a given data set. This prevents them from scaling to any dataset with a size beyond several thousand. Moreover, these methods are not designed for readily generating the embedding of out-of-sample data that are prevalent in modern big data analytics. To generate out-of-sample data embedding given an existing sample embedding, computationally expensive numerical optimization or Nystr{\\\" o}m approximation is often performed, which is undesirable in practice~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{bengio2004out,vladymyrov2014linear,carreira2015fast}. \n\nParametric embedding methods, such as parametric t-SNE (pt-SNE)~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{Maaten09} employing a deep neural network (DNN), learn an explicit parametric mapping function from a high-dimensional data space to a low-dimensional embedding space, which can readily generate the embedding of out-of-sample data. The objective function of pt-SNE is the same as that of t-SNE with quadratic computational complexity. Fortunately, owing to the explicit mapping function defined by the DNN, optimization methods such as stochastic gradient descent or conjugate gradient descent based on mini-batches can be deployed when pt-SNE is applied to large-scale datasets. \n\nHowever, on one hand, the objective function of pt-SNE is a sum of a quadratic number of terms over pairwise data points, which requires mini-batches with fairly large batch sizes to achieve a reasonably good approximation to the original objective; On the other hand, optimizing the parameters of the DNN in pt-SNE also requires careful choices of batch sizes, which is often best served with small batch sizes to avoid being stuck in a bad local minimum. These conflicting choices of batch sizes make the optimization of pt-SNE hard and render its performance sensitive to the chosen batch size. In addition, to approximate the loss function defined over all pairwise data points, pt-SNE independently computes pairwise neighboring probabilities of high-dimensional data for each mini-batch, so it often produces dramatically different embeddings with different choices of user-defined perplexities that are coupled with batch sizes. Finally, although the mapping function of pt-SNE parameterized by a DNN is powerful, it is very hard to learn and requires complicated procedures such as tuning network architectures and tuning many hyper-parameters. For data embedding and visualization purposes, most users are reluctant to go through these complicated procedures.\n\nTo address the aforementioned problems, in this paper, we present unsupervised parametric t-distributed stochastic exemplar-centered embedding. Instead of modeling pairwise neighboring probabilities, our strategy learns embedding parameters by comparing high-dimensional data only with precomputed representative high-dimensional exemplars, resulting in an objective function with linear computational and memory complexity with respect to the number of exemplars. The exemplars are identified by a small number of iterations of k-means updates, taking into account both local data density distributions and global clustering patterns of high-dimensional data. These nice properties make the parametric exemplar-centered embedding insensitive to batch size and scalable to large-scale datasets. All the exemplars are repeatedly included into each mini-batch, and the choice of the perplexity hyper-parameter only concerns the expected number of neighboring exemplars calculated globally, independent of batch sizes. Therefore, the perplexity is much easier to choose by the user and much more robust to produce good embedding performance. We further use noise contrastive samples to avoid comparing data points with all exemplars, which further reduces computational\/memory complexity and increases scalability. Although comparing training data points only with representative exemplars indirectly preserves similarities between pairwise data points in each local neighborhood, it is much better than randomly sampling small mini-batches in pt-SNE whose coverages are too small to capture all pairwise similarities on a large dataset.\n\nMoreover, we propose a shallow embedding network with high-order feature interactions for data visualization, which is much easier to tune but produces comparable performance in contrast to a deep neural network employed by pt-SNE. Experimental results on several benchmark datasets show that, our proposed parametric exemplar-centered embedding methods for data visualization significantly outperform pt-SNE in terms of robustness, visual effects, and quantitative evaluations. We call our proposed deep t-distributed stochastic exemplar-centered embedding method dt-SEE and high-order t-distributed exemplar-centered embedding method hot-SEE.\n\nOur contributions in this paper are summarized as follows: (1) We propose a scalable unsupervised parametric data embedding strategy with an objective function of significantly reduced computational complexity, avoiding pairwise training data comparisons in existing methods; (2) With the help of exemplars, our methods eliminate the instability and sensitivity issues caused by batch sizes and perplexities haunting other unsupervised embedding approaches including pt-SNE; (3) Our proposed approach hot-SEE learns a simple shallow high-order parametric embedding function, beating state-of-the-art unsupervised deep parametric embedding method pt-SNE on several benchmark datasets in terms of both qualitative and quantitative evaluations.\n\n\\section{Related Work}\\label{sec:related}\nDimensionality reduction and data visualization have been extensively studied in the last twenty years~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{Maaten08dimensionalityreduction,dimension-reduction-a-guided-tour-2}. SNE~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{SNE2002}, its variant t-SNE~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{van2008visualizing}, and Elastic Embedding~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{carreira2010elastic} are among the most successful approaches. To efficiently generate the embedding of out-of-sample data, SNE and t-SNE were, respectively, extended to take a parametric embedding form of a shallow neural network~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{MinThesis} and a deep neural network~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{Maaten09}. As is discussed in the introduction, the objective functions of neighbor embedding methods have $O(n^2)$ computational complexity for $n$ data points, which limits their applicability only to small datasets. Recently, with the growing importance of big data analytics, several research efforts have been devoted to enhancing the scalability of nonparametric neighbor embedding methods~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{van2013barnes,van2014accelerating,yang2013scalable,vladymyrov2014linear}. These methods mainly borrowed ideas from efficient approximations developed for N-body force calculations based on Barnes-Hut trees~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{van2013barnes} or fast multipole methods~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{greengard1987fast}. Iterative methods with auxiliary variables and second-order methods have been developed to optimize the objective functions of neighbor embedding approaches~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{vladymyrov2012partial,vladymyrov2014linear,carreira2015fast}. Particularly, the alternating optimization method with auxiliary variables was shown to achieve faster convergence than mini-batch based conjugate gradient method for optimizing the objective function of pt-SNE. All these scalability handling and optimization research efforts are orthogonal to our development in this paper, because all these methods are designed for the embedding approaches modeling the neighboring relationship between pairwise data points. Therefore, they still have the sensitivity and instability issues, and we can readily borrow these speedup methods to further accelerate our approaches modeling the relationship between data points and exemplars. \n\nOur proposed method hot-SEE learns a shallow parametric embedding function by considering high-order feature interactions. High-order feature interactions have been studied for learning Boltzmann Machines, autoencoders, structured outputs, feature selection, and biological sequence classification~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{DBLP:conf\/iccv\/Memisevic11,DBLP:conf\/aistats\/MinNCG14,MinPSB14,DBLP:conf\/cvpr\/RanzatoH10,DBLP:journals\/jmlr\/RanzatoKH10,DBLP:journals\/corr\/GuoZM15,Min2014kdd,MinBio2015,MinGS17}. To the best of our knowledge, our work here is the first successful one to model input high-order feature interactions for unsupervised data embedding and visualization.\n\nOur work in this paper is also related to a recent supervised data embedding method called en-HOPE~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{MinGS17}. Unlike en-HOPE, our proposed methods here are unsupervised and have a completely different objective function with different motivations.\n\n\\section{Methods}\\label{sec:method}\nIn this section, we introduce the objective of pt-SNE at first. Then we describe the parametric embedding functions of our methods based on a deep neural network as in pt-SNE and a shallow neural network with high-order feature interactions. Finally, we present our proposed parametric stochastic exemplar-centered embedding methods dt-SEE and hot-SEE with low computational cost.\n\n\\subsection{Parametric t-SNE using a Deep Neural Network and a Shallow High-order Neural Network}\\label{sec:sup}\nGiven a set of data points $\\mathcal{D} = \\{{\\mathbf x}^{(i)}: i = 1,\\ldots, n\\}$, where ${\\mathbf x}^{(i)} \\in {\\mathbb R}^H$ is the input feature vector.\npt-SNE learns a deep neural network as a nonlinear feature transformation from the high-dimensional input feature space to a low-dimensional latent embedding space \n$\\{f({\\mathbf x}^{(i)}): i = 1,\\ldots, n\\}$, where $f({\\mathbf x}^{(i)}) \\in {\\mathbb R}^h$, and $h < H$. For data visualization, we set $h = 2$.\n\npt-SNE assumes, $p_{j|i}$, the probability of each data point $i$ chooses every other data point $j$ as its nearest neighbor in the high-dimensional space follows a Gaussian distribution. The joint probabilities measuring the pairwise similarities between data points ${\\mathbf x}^{(i)}$ and ${\\mathbf x}^{(j)}$ are defined by symmetrizing two conditional probabilities, $p_{j|i}$ and $p_{i|j}$, as follows,\n\\begin{eqnarray}\np_{j|i} & = & \\frac{\\exp(-||{\\mathbf x}^{(i)} - {\\mathbf x}^{(j)} ||^2\/2\\sigma_i^2)} {\\sum_{k \\neq i}\\exp(-||{\\mathbf x}^{(i)} - {\\mathbf x}^{(k)} ||^2\/2\\sigma_i^2)}, \\label{eqn:symmp} \\\\\np_{i|i} & = & 0, \\\\\np_{ij} & = & \\frac{p_{j|i} + p_{i|j}}{2n}, \n\\end{eqnarray}\n\nwhere the variance of the Gaussian distribution, $\\sigma_i$, is set such that the perplexity of the conditional distribution $P_i$ equals a user-specified perplexity $u$ that can be interpreted as the expected number of nearest neighbors of data point $i$. With the same $u$ set for all data points, $\\sigma_i$'s tend to be smaller in regions of higher data densities than the ones in regions of lower data densities. The optimal value of $\\sigma_i$ for each data point $i$ can be easily found by a simple binary search~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{SNE2002}. Although the user-specified perplexity $u$ makes the variance $\\sigma_i$ for each data point $i$ adaptive, the embedding performance is still very sensitive to this hyperparameter, which will be discussed later. In the low-dimensional space, pt-SNE assumes, the neighboring probability between pairwise data points $i$ and $j$, $q_{ij}$, follows a heavy-tailed student t-distribution. The student t-distribution is able to, on one hand, measure the similarities between pairwise low-dimensional points, on the other hand, allow dissimilar objects to be modeled far apart in the embedding space, avoiding crowding problems. \n \\begin{eqnarray}\nq_{ij} & = & \\frac{(1 + ||f({\\mathbf x}^{(i)}) - f({\\mathbf x}^{(j)})||^2)^{-1}} {\\sum_{kl:k \\neq l} (1 + ||f({\\mathbf x}^{(k)}) - f({\\mathbf x}^{(l)})||^2)^{-1}}, \\label{eqn:symmq} \\\\\nq_{ii} & = & 0. \n\\end{eqnarray}\n\nTo learn the parameters of the deep embedding function ${\\mathbf f}(.)$, pt-SNE minimizes the following Kullback-Leibler divergence between the joint distributions $P$ and $Q$ using Conjugate Gradient descent,\n\\begin{equation}\\label{obj}\n\\small\n\\ell = KL (P || Q) = \\sum_{ij: i \\neq j} p_{ij}\\log \\frac{p_{ij}}{q_{ij}}.\n\\end{equation}\nThe above objective function has $O(n^2)$ terms defined over pairwise data points, which is computationally prohibitive and prevents pt-SNE from scaling to a fairly big dataset. To overcome such scalability issue, heuristic mini-batch approximation is often used in practice. However, as will be shown in our experiments, pt-SNE is unstable and highly sensitive to the chosen batch size to achieve reasonable performance. This is due to the dilemma of the quadratic cost function approximation and DNN optimization through mini-batches: approaching the true objective requires large batch sizes but finding a good local minimum benefits from small batch sizes.\n\nAlthough pt-SNE based on a deep neural network has a powerful nonlinear feature transformation, parameter learning is hard and requires complicated procedures such as tuning network architectures and tuning many hyperparameters. Most users who are only interested in data embedding and visualization are reluctant to go through these complicated procedures. Here we propose to use high-order feature interactions, which often capture structural knowledge of input data, to learn a shallow parametric embedding model instead of a deep model. The shallow model is much easier to train and does not have many hyperparameters. In the following, the shallow high-order parametric embedding function will be presented. We expand each input feature vector ${\\mathbf x}$ to have an additional component of $1$ for absorbing bias terms, that is, ${\\mathbf x}^{\\prime} = [{\\mathbf x}; 1]$, where ${\\mathbf x}^{\\prime} \\in {\\mathbb R}^{H+1}$. The $O$-order feature interaction is the product of all possible $O$ features $\\{x_{i_1}\\times \\ldots \\times x_{i_t} \\times \\ldots \\times x_{i_O}\\}$ where, $t \\in \\{1, \\ldots, O\\}$, and $\\{i_1, \\ldots, i_t, \\ldots, i_O\\} \\in \\{1, \\ldots, H\\}$. Ideally, we want to use each $O$-order feature interaction as a coordinate and then learn a linear transformation to map all these high-order feature interactions to a low-dimensional embedding space. However, it's very expensive to enumerate all possible $O$-order feature interactions. For example, if $H = 1000, O = 3$, we must deal with a $10^9$-dimensional vector of high-order features. We approximate a Sigmoid-transformed high-order feature mapping ${\\mathbf y} = f({\\mathbf x})$ by constrained tensor factorization as follows, \n\\begin{equation}\n\\label{shopemap}\ny_s = \\sum_{k=1}^m V_{sk} \\sigma(\\sum_{f=1}^F W_{fk}({\\mathbf C_f}^T {\\mathbf x}^{\\prime})^O + b_k),\n\\end{equation}\nwhere $b_k$ is a bias term, ${\\mathbf C} \\in {\\mathbb R}^{(H+1) \\times F}$ is a factorization matrix, ${\\mathbf C}_f$ is the $f$-th column of ${\\mathbf C}$, ${\\mathbf W}\\in {\\mathbb R}^{F\\times m}$ and ${\\mathbf V}\\in {\\mathbb R}^{h\\times m}$ are projection matrices, $y_s$ is the $s$-th component of ${\\mathbf y}$, $F$ is the number of factors, $m$ is the number of high-order hidden units, and $\\sigma (x) = \\frac{1}{1 + e^{-x}}$. Because the last component of $\\mathbf{x}^\\prime$ is 1 for absorbing bias terms, the full polynomial expansion of $({\\mathbf C_f}^T {\\mathbf x}^\\prime)^O$ essentially captures all orders of input feature interactions up to order $O$. Empirically, we find that $O=2$ works best for all datasets we have and set $O=2$ for all our experiments. The hyperparameters $F$ and $m$ are set by users. Combining Equation~\\ref{obj}, Equation~\\ref{eqn:symmp}, Equation~\\ref{eqn:symmq} and the feature transformation function in Equation~\\ref{shopemap} leads to a method called high-order t-SNE (hot-SNE). As pt-SNE, the objective function of hot-SNE involves comparing pairwise data points and thus has quadratic computational complexity with respect to the sample size. The parameters of hot-SNE are learned by Conjugate Gradient descent as in pt-SNE. \n\n\\subsection{Parametric t-Distributed Stochastic Exemplar-centered Embedding}\nTo address the instability, sensitivity, and unscalability issues of pt-SNE, we present deep t-distributed stochastic exemplar-centered embedding (dt-SEE) and high-order t-distributed stochastic exemplar-centered embedding (hot-SEE) building upon pt-SNE and hot-SNE for parametric data embedding described earlier. The resulting objective function has significantly reduced computational complexity with respect to the size of training set compared to pt-SNE. The underlying intuition is that, instead of comparing pairwise training data points, we compare training data only with a small number of representative exemplars in the training set for neighborhood probability computations. To this end, we simply precompute the exemplars by running a fixed number of iterations of k-means with scalable k-means++ seeding on the training set, which has at most linear computational complexity with respect to the size of training set~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{bahmani2012scalable}.\n\nFormally, given the same dataset $\\mathcal{D}$ with formal descriptions as introduced in Section~\\ref{sec:sup}, \nwe perform a fixed number of iterations of k-means updates on the training data to identify $z$ exemplars from the whole dataset, where $z$ is a user-specified free parameter and $z << n$ (please note that k-means often converges within a dozen iterations and shows linear computational cost in practice). Before performing k-means updates, the exemplars are carefully seeded by scalable k-means++, which will make our methods robust under abnormal conditions, although our experiments show that random seeding works equally well. We denote these exemplars by $\\{{\\mathbf e}^{(j)}: j = 1, \\ldots, z\\}$. The high-dimensional neighboring probabilities is calculated through a Gaussian distribution,\n\\begin{eqnarray}\np_{j|i} & = & \\frac{\\exp(-||{\\mathbf x}^{(i)} - {\\mathbf e}^{(j)} ||^2\/2\\sigma_i^2)} {\\sum_k\\exp(-||{\\mathbf x}^{(i)} - {\\mathbf e}^{(k)} ||^2\/2\\sigma_i^2)}, \\label{eqn:symmp_exem} \\\\\np_{j|i} & = & \\frac{p_{j|i}}{n}, \n\\end{eqnarray}\nwhere $i = 1, \\ldots, n, j=1, \\ldots, z$, and the variance of the Gaussian distribution, $\\sigma_i$, is set such that the perplexity of the conditional distribution $P_i$ equals a user-specified perplexity $u$ that can be interpreted as the expected number of nearest exemplars, not neighboring data points, of data instance $i$. Since the high-dimensional exemplars capture both local data density distributions and global clustering patterns, different choices of perplexities over exemplars will not change the embedding too much, resulting in much more robust visualization performance than that of other embedding methods insisting on modeling local pairwise neighboring probabilities.\n\nSimilarly, the low-dimensional neighboring probabilities is calculated through a t-distribution,\n\\begin{eqnarray}\nq_{j|i} & = & \\frac{(1 + d_{ij})^{-1}} {\\sum_{i=1}^n\\sum_{k=1}^z (1 + d_{ik})^{-1}}, \\label{eqn:q_exem}\\\\\nd_{ij} & = & ||f({\\mathbf x}^{(i)}) - f({\\mathbf e}^{(j)}) ||^2,\n\\end{eqnarray}\nwhere $f(\\cdot)$ denotes a deep neural network for dt-SEE or the high-order embedding function as described in Equation~\\ref{shopemap} for hot-SEE. \n\nThen we minimize the following objective function to learn the embedding parameters ${\\mathbf \\Theta}$ of dt-SEE and hot-SEE while keeping the exemplars $\\{{\\mathbf e}^{(j)}\\}$ fixed,\n\\begin{eqnarray}\\label{exobj}\n&\\min_{}^{} \\ell({\\mathbf \\Theta}, \\{{\\mathbf e}^{(j)}\\}) = \\sum_{i=1}^{n}\\sum_{j=1}^{z} p_{j|i}\\log \\frac{p_{j|i}}{q_{j|i}}\n\\end{eqnarray}\nwhere $i$ indexes training data points, $j$ indexes exemplars, ${\\mathbf \\Theta}$ denotes the high-order embedding parameters $\\{\\{b_k\\}_{k=1}^m, \\mathbf{C }, \\mathbf{W}, \\mathbf{V}\\}$ in Equation~\\ref{shopemap}.\n\nNote that unlike the probability distribution in Equation~\\ref{eqn:symmq}, $q_{j|i}$ here is computed only using the pairwise distances between training data points and exemplars. This small modification has significant benefits. Because $z << n$, compared to the quadratic computational complexity with respect to $n$ of Equation~\\ref{obj}, the objective function in Equation~\\ref{exobj} has a significantly reduced computational cost, considering that the number of representative exemplars is often much much smaller than $n$ for real-world large datasets in practice. \n\n\\subsection{Further Reduction on Computational Complexity and Memory Complexity by Noise Contrastive Estimation}\nWe can even further reduce the computational complexity and memory complexity of dt-SEE and hot-SEE using noise contrastive estimation (NCE). Instead of computing neighboring probabilities between each data point $i$ and all $z$ exemplars, we can simply only compute the probabilities between data point $i$ and its $z_e$ nearest exemplars for both $P$ and $Q$. For high-dimensional probability distribution $P_i$, we simply set the probabilities between $i$ and other exemplars 0; for low-dimensional probability distribution $Q_i$, we randomly sample $z_n$ non-neighboring exemplars outside of these $z_e$ neighboring exemplars, and use the sum of these $z_n$ non-neighboring probabilities multiplied by a constant $K_e$ and the $z_e$ neighboring probabilities to approximate the normalization terms involving data point $i$ in Equation~\\ref{eqn:q_exem}. Since this strategy based on noise contrastive estimation eliminates the need of computing neighboring probabilities between data points and all exemplars, it further reduces computational and memory complexity.\n\n\n\n\\section{Experiments}\\label{sec:experiment}\nIn this section, we evaluate the effectiveness of dt-SEE and hot-SEE by comparing them against state-of-the-art unsupervised parametric embedding method pt-SNE based upon three datasets, \\textit{i.e.}, COIL100, MNIST, and Fashion. The COIL100 data~\\footnote{http:\/\/www1.cs.columbia.edu\/CAVE\/software\/softlib\/coil-100.php} contains 7200 images with 100 classes, where 3600 samples for training and 3600 for test. The MNIST dataset~\\footnote{http:\/\/yann.lecun.com\/exdb\/mnist\/} consists of 60,000 training and 10,000 test gray-level 784-dimensional images. The Fashion dataset~\\footnote{https:\/\/github.com\/zalandoresearch\/fashion-mnist} has the same number of classes, training and test data points as that of MNIST, but is designed to classify 10 fashion products, such as boot, coat, and bag, where each contains a set of pictures taken by professional photographers from different aspects of the product, such as looks from front, back, with model, and in an outfit. \n\n\n\nTo make computational procedures and tuning procedures for data visualization simpler, none of these models was pre-trained using any unsupervised learning strategy, although hot-SNE, hot-SEE, dt-SEE, and pt-SNE could all be pre-trained by autoencoders or variants of Restricted Boltzmann Machines~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{MinMYBZ10,MinBio2015}. \n\nFor hot-SNE and hot-SEE, we set $F=800$ and $m=400$ for all the datasets used. For pt-SNE and dt-SEE, we set the deep neural network architecture to input dimensionality $H$-500-500-2000-2 for all datasets, following the architecture design in van der Maaten (2009)~\\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{Maaten09}. For hot-SEE and dt-SEE, when the exemplar size is smaller than 1000, we set batch size to 100; otherwise, we set it 1000. With the above architecture design, the shallow high-order neural network used in hot-SNE and hot-SEE is as fast as $2.5$ times of the deep neural network used in pt-SNE and dt-SEE for embedding $10,000$ MNIST test data.\n\nFor all the experiments, the predictive accuracies were obtained by the 1NN approach on top of the 2-dimensional representations generated by different methods. The error rate was calculated by the number of misclassified test data points divided by the total number of test data points. \n\n\n\n\n\\subsection{Performance Comparisons with Different Batch Sizes and Perplexities on COIL100 and MNIST}\nOur first experiment aims at examining the robustness of different testing methods with respect to the batch size and the perplexity used. Figures~\\ref{fig:batchsizeSensitivity} and~\\ref{fig:perplexitySensitivity} depict our results on the COIL100 and MNIST datasets when varying the batch size and perplexity, respectively, used by the testing methods.\n\n \\begin{figure}[H\n \\subfloat[COIL100\\label{COIL100}]{%\n \\includegraphics[width=0.4738\\textwidth]{COIL100BatchSize.eps}\n }\n \\hfill\n \\subfloat[MNIST\\label{MNISTd}]{%\n \\includegraphics[width=0.4738\\textwidth]{MNISTBatchSize.eps}\n }\n \\centering\n \\caption{batch size sensitivity test on COIL100 and MNIST }\n \\label{fig:batchsizeSensitivity}\n \\end{figure}\n \n\n\n\nFigure~\\ref{fig:batchsizeSensitivity} suggests that, for the COIL100 data, the pt-SNE was very sensitive to the selection of the batch size; efforts were needed to find a right batch size in order to obtain good performance. On the other hand, the use of different batch sizes had very minor impact on the predictive performance of both the dt-SEE and hot-SEE strategies. Similarly, for the MNIST data, as shown in Figure~\\ref{fig:perplexitySensitivity}, in order to obtain good predictive performance, the pt-SNE needed to have a batch size not too big and not too small. On the contrary, the hot-SEE methods was insensitive to the size of batch larger than 300. \n\n \\begin{figure}[H\n \\subfloat[COIL100\\label{COIL100}]{%\n \\includegraphics[width=0.4738\\textwidth]{COIL100Perplexity.eps}\n }\n \\hfill\n \\subfloat[MNIST\\label{MNISTd}]{%\n \\includegraphics[width=0.4738\\textwidth]{MNISTPerplexity.eps}\n }\n \\centering\n \\caption{perplexity sensitivity test on COIL100 and MNIST }\n \\label{fig:perplexitySensitivity}\n \\end{figure}\nBased on the results in Figure~\\ref{fig:batchsizeSensitivity}, we selected the best batch sizes for both the COIL100 and MNIST data sets, with 600 and 1000, respectively, but we varied the values of the perplexities used. In Figure~\\ref{fig:perplexitySensitivity}, one can observe that, the performance of the pt-SNE and hot-SNE could dramatically change due to the use of different perplexities, but that was not the case for both the dt-SEE and hot-SEE. Similarly, for the MNIST data, as depicted in Figure~\\ref{fig:perplexitySensitivity}, in order to obtain good predictive performance, one would need to carefully tune for the right perplexity. On the contrary, both the dt-SEE and hot-SEE methods performed quite robust with respect to different selected perplexities. \n\n\n\n\n\n\nBecause the choices of batch size and perplexity are coupled in a complicated way in pt-SNE as explained in the introduction, we run additonal experiments to show the advantages of dt-see and hot-see. When we set perplexity to 10 and batch size to 100, 300, 600, 1000, 2000, 3000, 5000, 10000, the test error rate of pt-SNE on MNIST is, respectively, $32.97\\%$, $22.1\\%$, $24.00\\%$, $16.30\\%$, $12.41\\%$, $12.28\\%$, $13.09\\%$, $16.43\\%$, which still varies a lot. In contrast, the error rates of dt-SEE or hot-SEE using 1000 exemplars are consistently below $10\\%$ with the same batch size ranging from 100 to 10000 and perplexity 3 and 10, which again shows shat exemplar-centered embedding dt-see and hot-see are much more robust than pt-SNE.\n\n\\subsection{Experimental Results on the Fashion dataset}\nWe also further evaluated the predictive performance of the testing methods using the Fashion data set. We used batch sizes of 1000 and 2000, along with perplexity of 3 in all the experiments since both pt-SNE and hot-SNE favored these settings as suggested in Figures~\\ref{fig:batchsizeSensitivity} and~\\ref{fig:perplexitySensitivity}. The achieved accuracies are shown in Table~\\ref{tab:vggknn}.\n\n\\begin{table}[H]\n\\centering\n \\begin{tabular}{|c|c|}\\hlin\n Methods & Error Rates \\\\\n \\hline \npt-SNE (batchsize = 1000) & 32.48\\\\\n\\hline\npt-SNE (batchsize = 2000) & 32.04\\\\\n\\hline \nhot-SNE (batchsize = 1000) & 31.29\\\\\n\\hline\nhot-SNE (batchsize = 2000) & 31.82\\\\\n\\hline\ndt-SEE (batchsize = 1000) & 29.42\\\\\n\\hline\ndt-SEE (batchsize = 2000) & 28.30\\\\\n\\hline \nhot-SEE (batchsize = 1000) & 29.06\\\\\n\\hline\nhot-SEE (batchsize = 2000) & \\textbf{28.18}\\\\\n\\hline\n\\end{tabular}\n\\caption{Error rates (\\%) by 1NN on the 2-dimensional representations produced by different methods with perplexity = 3 on the Fashion dataset.}\\label{tab:vggknn}\n\\end{table}\n\nResults in Table~\\ref{tab:vggknn} further confirmed the superior performance of our methods. Both the dt-SEE and hot-SEE significantly outperformed the pt-SNE and hot-SNE. \n\n\n\n\\subsection{Two-dimensional Visualization of Embeddings}\nThis section provides the visual results of the embeddings formed by the pt-SNE and hot-SEE methods. \n\n The top and bottom subfigures in Figure~\\ref{fig:ministSmallBatchSize} depicts the 2D embeddings on the MNIST data set created by pt-SNE and hot-SEE, with batch size of 100 (perplexity = 3) and perplexity of 10 (batch size = 1000), respectively. From these visual figures, one may conclude that the hot-SEE was more stable compared to its competitor pt-SNE. \n\t\n\t\\begin{figure*}[h\n \\subfloat[pt-SNE, batch size 100\\label{COIL100}]{%\n \\includegraphics[width=0.503738\\textwidth]{MNIST_PtSNEBatchsize100.eps}\n }\n \\subfloat[hot-SEE, batch size 100\\label{MNISTd}]{%\n \\includegraphics[width=0.503738\\textwidth]{MNIST_hotSEEExemsize100.eps}\n }\n \t\t \\hfill\n \\subfloat[pt-SNE, perplexity 10\\label{COIL100}]{%\n \\includegraphics[width=0.503738\\textwidth]{MNISTPtSNEperplexity10.eps}\n }\n \\subfloat[hot-SEE, perplexity 10\\label{MNISTd}]{%\n \\includegraphics[width=0.503738\\textwidth]{MNISThotSEEperplexity10.eps}\n\n}\n \\centering\n \\caption{Comparing pt-SNE to hot-SEE with a small batch size = 100 (perplexity = 3) or a reasonable perplexity = 10 (batch size = 1000) to illustrate pt-SNE's unstable visual performance.}\n \\label{fig:ministSmallBatchSize}\n \\end{figure*}\n\n\n \\begin{figure*}[h\n \\subfloat[pt-SNE\\label{COIL100}]{%\n \\includegraphics[width=0.50139738\\textwidth]{MNISTPtsne.eps}\n }\n \\subfloat[hot-SNE\\label{MNISTd}]{%\n \\includegraphics[width=0.50139738\\textwidth]{MNISThotsne.eps}\n }\n\t\t \\hfill\n \\subfloat[dt-SEE\\label{COIL100}]{%\n \\includegraphics[width=0.50139738\\textwidth]{MNISTdtsee.eps}\n }\n \\subfloat[hot-SEE\\label{MNISTd}]{%\n \\includegraphics[width=0.50139738\\textwidth]{MNISThotsee.eps}\n }\n \\centering\n \\caption{MNIST embedding figures for pt-SNE, hot-SNE, dt-SEE, and hot-SEE }\n \\label{fig:mnistEmbedding}\n \\end{figure*}\n\n In Figure~\\ref{fig:mnistEmbedding}, we also provided the visual results of the MNIST embeddings created by pt-SNE, hot-SNE, dt-SEE, and hot-SEE, with batch size of 2000. These results imply that \n the dt-SEE and hot-SEE produced the best visualization: the data points in each cluster were close to each other but with large separation between different clusters, compared to that of the pt-SNE and hot-SNE methods. \n \n \n \\begin{figure}[H\n \\subfloat[pt-SNE\\label{COIL100}]{%\n \\includegraphics[width=0.483738\\textwidth]{FashionPtsne.eps}\n }\n \\hfill\n \\subfloat[hot-SEE\\label{MNISTd}]{%\n \\includegraphics[width=0.483738\\textwidth]{Fashionhotsee.eps}\n }\n \\centering\n \\caption{Fashion embedding figures for pt-SNE and hot-SEE}\n \\label{fig:fashionEmbedding}\n \\end{figure}\nAlso, in Figures~\\ref{fig:fashionEmbedding}, we depicted our visual 2D embedding results on the Fashion data set. These figures further confirmed the better clustering quality generated by the hot-SEE method, compared to that of the pt-SNE strategy.\n\n\\subsection{Noise Contrastive Estimation}\nIn this section, we evaluated the performance of the noise contrastive estimation (NCE) strategy applied to our method hot-SEE with perplexity 3 and 2000 exemplars. We set $z_e = z_n = 100$ and $K_e = 18$. Table~\\ref{tab:nce} show the error rates (\\%) obtained by 1NN on the two-dimensional representations produced by hot-SEE with or without NCS, respectively, on the MNIST and Fashion datasets.\n\n\\begin{table}[H]\n \\centering\n \\begin{tabular}{|lc|lc|lc|lc|}\\hlin\n\\multicolumn{4}{|c}{MNIST}&\\multicolumn{4}{|c|}{Fashion}\\\\ \\hline\n\\multicolumn{2}{|c|}{standard}&\\multicolumn{2}{c|}{w\/ NCE} &\\multicolumn{2}{c|}{standard}&\\multicolumn{2}{c|}{w\/ NCE} \\\\\n\\hline \n\\multicolumn{2}{|c|}{9.30}&\\multicolumn{2}{c|}{9.69} &\\multicolumn{2}{c|}{28.18}&\\multicolumn{2}{c|}{28.19}\\\\\n\\hline\n\\end{tabular}\n\\caption{Error rates (\\%) obtained by 1NN on the two-dimensional representations produced by hot-SEE (perplexity = 3 and 2000 exemplars) with or without further computational complexity reduction based on Noise Contrastive Estimation (NCE), respectively, on the MNIST and Fashion datasets.} \\label{tab:nce}\n\\end{table}\n\nResults in Table~\\ref{tab:nce} suggest that the NCE was able to further reduce the \n computational and memory complexity of our method without sacrificing the predictive performance. As shown in the table, the accuracy difference of the hot-SEE method with and without NCE was less than 0.4\\% for both the MNIST and Fashion data sets. \n\n\\begin{table*}[h]\n \\centering\n\n\t\\scalebox{0.95}{ \n \\begin{tabular}{|lc|lc|lc|lc|lc|lc|}\\hlin\n\\multicolumn{4}{|c|}{COIL100}&\\multicolumn{4}{c}{MNIST}&\\multicolumn{4}{|c|}{Fashion}\\\\ \\hline\n\\multicolumn{2}{|c|}{careful seeding}&\\multicolumn{2}{c|}{random seeding} &\\multicolumn{2}{c|}{careful seeding}&\\multicolumn{2}{c|}{random seeding} &\\multicolumn{2}{c|}{careful seeding}&\\multicolumn{2}{c|}{random seeding} \\\\\n\\hline \n\\multicolumn{2}{|c|}{58.67}&\\multicolumn{2}{c|}{58.44} &\\multicolumn{2}{c|}{9.30}&\\multicolumn{2}{c|}{9.19} &\\multicolumn{2}{c|}{28.18}&\\multicolumn{2}{c|}{28.53} \\\\\n\\hline\n\\end{tabular}}\n \\caption{Error rates (\\%) obtained by 1NN on the 2-dimensional representations produced by hot-SEE (perplexity = 3) with careful seeding or random seeding on the COIL100 (with 600 exemplars), MNIST (with 2000 exemplars), and Fashion (with 2000 exemplars) datasets.}\n \\label{tab:seed}\n\\end{table*}\n \n\\subsection{Careful Exemplar Seeding vs. Random Initialization}\nWe also further evaluate the performance of our methods in terms of different exemplar initializations used. We compared the performance of using careful seeding based on scalable K-means++ and randomly initialized exemplars. We presented the results in Table~\\ref{tab:seed}. From Table~\\ref{tab:seed}, one can observe that our methods were insensitive to the exemplar seeding approach used. That is, very similar predictive performances (less than 0.4\\%) were obtained by our methods on all the three testing data sets, i.e., COIL100, MNIST, and Fashion. \n\n\\subsection{Comparing Evaluation Metrics of kNN ($k \\geq 1$) and Quality Score}\nWe believe that the evaluation metric based on 1NN test error rate used in the previous experimental sections is more appropriate than kNN test error rate with $k > 1$. The reason is that the 1NN performance exactly shows how accurately our exemplar-based embedding methods catpure very local neighborhood information, which is more challenging for our proposed methods. Because exemplars are computed globally, it is much easier for dt-see and hot-see to achieve better performance based on kNN with $k > 1$. On the MNIST dataset, we show the best training and test error rates of kNN with $k \\geq 1$ using the two-dimensional embedding generated by different methods in Talbe~\\ref{tab:knnerr}, which consistently shows that dt-see and hot-see significantly outperforms pt-SNE and supports our claims above.\n\\begin{table*}[h]\n \\centering\n\n\t\\scalebox{0.95}{ \n \\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n&\\multicolumn{10}{c|}{The Number of Nearest Neighbors k in kNN}\\\\\n\\hline\nMethod&1&2&3&4&5&6&7&8&9&10\\\\\n\\hline\npt-sne\\_tr&12.49&12.49&9.26&8.84&8.45&8.30&8.18&8.12&8.08&8.08\\\\\npt-sne\\_te&12.55&12.55&9.79&9.48&9.12&8.95&8.83&8.72&8.72&8.69\\\\\n\\hline\nhot-see\\_tr&8.87&8.87&6.31&6.05&5.83&5.68&5.64&5.63&5.60&5.58\\\\\nhot-see\\_te&9.19&9.19&7.21&6.76&6.61&6.42&6.41&6.41&6.42&6.36\\\\\n\\hline\ndt-see\\_tr&\\textbf{7.19}&\\textbf{7.19}&\\textbf{5.09}&\\textbf{4.90}&\\textbf{4.72}&\\textbf{4.67}&\\textbf{4.62}&\\textbf{4.62}&\\textbf{4.56}&\\textbf{4.56}\\\\\ndt-see\\_te&\\textbf{8.80}&\\textbf{8.80}&\\textbf{6.69}&\\textbf{6.45}&\\textbf{6.25}&\\textbf{6.17}&\\textbf{6.02}&\\textbf{6.02}&\\textbf{5.94}&\\textbf{5.96}\\\\\n\\hline\n\\end{tabular}}\n \\caption{The training error rates (\\_tr) and test error rates (\\_te) of kNN with different k's using the two-dimensional embedding generated by different methods on MNIST.}\n \\label{tab:knnerr}\n\\end{table*}\n\nAnother evaluation metric based on Quality Score was used by a recent method called kernel t-SNE (kt-SNE) \\def\\citeauthoryear##1##2{##1, ##2}\\@internalcite{ktsne}. The Quality Score metric computes the k (neighborhood size) nearest neighbors of each data point, respectively, in the high-dimensional space and in the low-dimensional space, and the metric calculates the preserved percentage of the high-dimensional neighborhood in the low-dimensional neighborhood averaged over all test data points as the Quality Score, with respect to different neighborhood size k. In Table~\\ref{tab:qualityscores}, we compute the quality scores of different methods on the MNIST test data for preserving their neighborhood on the training data with neighborhood size ranging from 1 to 100. These results also show that hot-see and dt-see consistently outperform pt-SNE. \n\nWe find that Kernel t-SNE is also capable of embedding out-of-sample data. To have a similar experiment setting on MNIST as that used in kernel t-SNE, we randomly choose 2000 data points as held-out test set from the original test set (size=10000) to get 10 different test sets with size 2000, the test error rates of our methods compared to kernel t-SNE are, kernel t-SNE: $14.2\\%$, fisher kernel t-SNE: $13.7\\%$, hot-see: $9.11\\% \\pm 0.43\\%$, dt-see: $8.74\\% \\pm 0.37\\%$. Our methods hot-see and dt-see significantly outperform (fisher) kernel t-SNE.\n\n\\begin{table*}[h]\n \\centering\n\n\t\\scalebox{0.95}{ \n \\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n&\\multicolumn{11}{c|}{Neighborhood Size}\\\\\n\\hline\nMethod&1&10&20&30&40&50&60&70&80&90&100\\\\\n\\hline\npt-sne&0.55&4.01&6.68&8.76&10.56&12.17&13.62&14.93&16.06&17.19&18.23\\\\\nhot-see&1.12&5.25&8.22&10.53&12.48&14.19&15.69&17.04&18.27&19.41&20.44\\\\\ndt-see&\\textbf{1.14}&\\textbf{6.74}&\\textbf{10.68}&\\textbf{13.52}&\\textbf{15.78}&\\textbf{17.56}&\\textbf{19.03}&\\textbf{20.22}&\\textbf{21.31}&\\textbf{22.27}&\\textbf{23.17}\\\\\n\\hline\n\\end{tabular}}\n \\caption{Quality scores (\\%, the higher the better) for different embedding methods computed on the test set against the training set on MNIST.}\n \\label{tab:qualityscores}\n\\end{table*}\n\n\\section{Conclusion and Future Work\n\\label{sec:discussion}\nIn this paper, we present unsupervised parametric t-distributed stochastic exemplar-centered data embedding and visualization approaches, leveraging a deep neural network or a shallow neural network with high-order feature interactions. \nOwing to the benefit of a small number of precomputed high-dimensional exemplars, our approaches avoid pairwise training data comparisons and have signicantly reduced computational cost. In addition, the high-dimensional exemplars reflect local data density distributions and global clustering patterns. With these nice properties, the resulting embedding approaches solved the important problem of embedding performance being sensitive to hyper-parameters such as batch sizes and perplexities, which have haunted other neighbor embedding methods for a long time. Experimental results on several benchmark datasets demonstrate that our proposed methods significantly outperform state-of-the-art unsupervised deep parametric embedding method pt-SNE in terms of robustness, visual effects, and quantitative evaluations.\n\nIn the future, we plan to incorporate recent neighbor-embedding speedup developments based on efficient N-body force approximations into our exemplar-centered embedding framework. \n\n\\bibliographystyle{splncs04\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{}\n\n\n\n\n\n\n\n\n\n\\begin{acknowledgments}\n\n\nWe would like to thank Rainer St\\\"{o}hr and Nan Zhao for discussions. The work is financially supported by ERC SQUTEC, EU-SIQS SFB TR21 and DFG KO4999\/1-1. Ya Wang thanks the supporting from 100-Talent program of CAS.\\\\\n\n\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{introduction}\n\nThe successful fabrication of two-dimensional materials such as graphene and transition metal dichalcogenides arouses intense interest of researchers with intriguing electronic, mechanical, optical, and thermal properties.\\cite{Novoselov2005, Zhang2005, Radisavljevic2011, Wang2012} The gapless nature of graphene and low carrier mobility of transition metal dichalcogenides, however, present limitations to their potential application in industry.\\cite{Liao2010, Schwierz2010, Wu2011, Mak2010, Radisavljevic2011} Very recently, another exciting two-dimensional material, few-layer black phosphorus name as phosphorene, has been successfully fabricated.\\cite{Li2014, Dai2014, Reich2014, Liu2014} The phosphorene-based field effect transistor exhibits a carrier mobility up to 1000 cm$^{2}$\/V$\\cdot$s and an of\/off ration up to 10$^4\\sim10^5$.\\cite{Li2014,Liu2014,Koenig2014} \n\nSimilar to graphite, black phosphorus is also a layered material held together by interlayer van der Waals (vdW) interactions. Inside a layer, each phosphorus atom bonds with three nearest neighbors by sharing all three valence electrons for \\emph{sp}$^3$ hybridization in a puckered honeycomb structure.\\cite{Rodin2014} Black phosphorus has a direct band gap of 0.31$\\sim$0.35 eV.\\cite{Keyes1953, Maruyama1981, Akahama1983, Warschauer2004} The band gap of phosphorene has been found to depend on the film thickness. First-principles calculations demonstrated that the energy band gap decreases from 1.5 $\\sim$2.0 eV for a monolayer to $\\sim$0.6 eV for a five-layer phosphorene.\\cite{Qiao2014, Tran2014} It was also predicted that under strain, few-layer phosphorene could go through a semiconductor-to-metal or direct-to-indirect band gap transition.\\cite{Peng2014, Rodin2014} Most recently, Liu \\emph{et al.} constructed an inverter using MoS$_2$ as an \\emph{n}-type transistor and phosphorene as a \\emph{p}-type transistor, and integrated the two on the same Si\/SiO$_2$ substrate.\\cite{Liu2014} They observed unintentional \\emph{p}-type conductivity with high hole mobility in few-layer phosphorene. Additionally, a number of experiments have also achieved intrinsic \\emph{p}-type phosphorene.\\cite{Yuan2014, Li2014, Liu2014, Deng2014, Das2014} \n\nThen a question arises: what is the origin of the reported intrinsic \\emph{p}-type conductivity in phosphorene? \nDefects and impurities are usually unavoidable in real materials and often change dramatically the electrical, optical and magnetic properties of three- \\cite{Tilley2008} and two-dimensional semiconductors.\\cite{Terrones2012, Tongay2013, Qiu2013, Zhu2014} A large number of theoretical studies on the thickness-dependence of the electronic structure of few-layer phosphorene notwithstanding, knowledge of the properties of native point defects in few-layer phosphorene is still missing. In the present work, we have investigated the formation energies and transition levels of both vacancies and self-interstitials in few-layer phosphorene by performing first-principles calculations using hybrid density functional \\cite{Becke1993, Perdew1996a, Heyd2003} in combination with a semiempirical vdW correction approach developed by Grimme and co-workers\\cite{Grimme2006}, aiming to elucidate the origin of unintentional \\emph{p}-type conductivity displayed by this novel material. Our calculations demonstrated that: (\\emph{i}) the host band gap, formation energies and acceptor transition levels of both vacancies and self-interstitials all decrease with increasing film thickness of phosphorene; (\\emph{ii}) both native point defects are possible sources of the intrinsic \\emph{p}-type conductivity manifested in few-layer phosphorene; (\\emph{iii}) these native defects have low formation energies and thus could serve as compensating centers in \\emph{n}-type multilayer phosphorene. The remainder of this paper is organized as follows. In Sec. II, methodology and computational details are described. Sec. III presents the calculations of formation energies and transition energies of native point defects in few-layer phosphorene, followed by electronic structure analyses. Finally, a short summary is given in Sec. IV.\n\n\n\\section{Methodology}\nOur total energy and electronic structure calculations were carried out using the VASP code, \\cite{Kresse1996, Kresse1996a}, based on the hybrid density functional theory (DFT) proposed by Heyd, Scuseria, and Ernzerhof (HSE).\\cite{Heyd2003} The recent development of hybrid DFT can yield band gaps in good agreement with measurements,\\cite{Paier2006, Marsman2008, Park2011} and thus provide more reliable description of transition levels and formation energies of defects in semiconductors.\\cite{Alkauskas2011, Deak2010, Komsa2011, Freysoldt2014} We here have employed a revised scheme, HSE06.\\cite{Krukau2006} The screening parameter was set to 0.2 {\\AA}$^{-1}$; the Hartree-Fock (HF) mixing parameter $\\alpha$ was tuned to produce a band gap similar to the one given by the GW0 approximation, \\cite{Hedin1965, Shishkin2006} which means that $\\alpha$\\% of HF exchange with (100-$\\alpha$)\\% of Perdew, Burke and Ernzerhof (PBE) exchange\\cite{Perdew1996} in the generalized gradient approximation (GGA) were mixed and adopted in exchange functional. The core-valence interaction was described by the frozen-core projector augmented wave (PAW) method.\\cite{PAW, Kresse1999} The electronic wave functions were expanded in a plane-wave basis with a cutoff of 250 eV. Test calculations show that the calculated formation energies of neutrally and negatively charged P vacancy in monolayer phosphorene will change by less than 0.1 eV if the energy cutoff is increased to 400 eV. Previous theoretical calculations have shown that the interlayer vdW interaction need to be considered for a proper description of the geometrical properties of black phosphorus.\\cite{Appalakondaiah2012} We therefore incorporated the vdW interactions by employing a semiempirical correction scheme of Grimme's DFT-D2 method, which has been successful in describing the geometries of various layered materials.\\cite{Grimme2006, Bucko2010}\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.5]{bulk.pdf}\n\\caption{\\label{bulk}(Color online) Top (a) and side (b) views of the unit cell of black phosphorus.}\n\\end{figure}\n\nIn simulation, a thin film of black phosphorus can be easily obtained by simply truncating the bulk into a slab containing only a few atomic layers. The atomic structure of the black phosphorus is presented in Fig. \\ref{bulk}, from which a layered structure is clearly seen. In each layer, the \\emph{sp}$^3$ hybridization between one P atom and its three neighbors lead to the tripod-like local structure along \\emph{c} direction. In the slab model of few-layer phosphorene, periodic slabs were separated by a vacuum no thinner than 15 {\\AA}. For bulk black phosphorus, an 8\\texttimes{}6\\texttimes{}1 \\emph{k}-mesh including $\\Gamma$-point, generated according to the Monkhorst-Pack scheme,\\cite{Monkhorst1976} was applied to the Brillouin-zone integrations. On geometry optimization, both the shapes and internal structural parameters of pristine unit-cells were fully relaxed until the residual force on each atom less than 0.01 eV\/{\\AA}. \n\nThe defective system containing a self-interstitial atom, P$_i$, or a vacancy, V$_\\text{P}$, was modeled by adding or removing a P atom to or from a 3\\texttimes{}2 supercell of few-layer phosphorene. They were the native point defects considered in the present work. In a monolayer phosphorene, there are three interstitial sites; whereas in a multi-layer film, both P$_i$ and V$_\\text{P}$ can reside either in the outer or inner layers. We label these positions as X$^{in}$ and X$^{out}$ (X=P$_i$ and V$_\\text{P}$) respectively. In Fig. \\ref{interstitial}, we show the six inequivalent interstitial sites in a bilayer phosphorene. In view of the fact that the contribution of vdW interaction to the stability of adsorbate on graphene, even in the chemisorption cases, is non-negligible, \\cite{Wang2012b} we expected that the HSE06 plus DFT-D2 method should give a more accurate description on the local structure of interstitial defects in few-layer phosphorene. A $\\Gamma$-centered 2\\texttimes{}2\\texttimes{}1 Monkhorst-Pack \\emph{k}-mesh was adopted for the 3\\texttimes{}2\\texttimes{}1 supercells. The internal coordinates in the defective supercells were relaxed to reduce the residual force on each atom to less than 0.02 eV\/{\\AA}. Moreover, we have allowed spin-polarization for defective systems. A more detailed discussion on the convergence of total energies of defective systems with respect to vacuum thickness is given in the next section.\n\nAn accurate description of the band structure of phosphorene is a prerequisite for obtaining reliable predictions on defect properties, which impact greatly the electronic conductivity in phosphorene. Since there is no reported experimental data for the band gaps of few-layer phosphorene, we compare our HSE06 results for defect-free few-layer phosphorene with those obtained using highly accurate quasiparticle GW0 calculations \\cite{Hedin1965, Shishkin2006}. The GW0 approximation has been shown to provide very reliable descriptions of the electronic and dielectric properties for many semiconductors and insulators.\\cite{Fuchs2007, Shishkin2007} To achieve good convergence of dielectric function in the GW0 calculations, we used a large number of energy band, 80 times of the total number of involved atoms. The converged eigenvalues and wavefunctions obtained from HSE06 with 25\\% HF exact exchange (denoted as HSE06-25\\% hereafter) functional were chosen as the initial input for the GW0 calculations. \nNote that in GW0 calculations only the quasiparticle energies were recalculated self-consistently in four iterations;\nthe wavefunctions were not updated but remain fixed at the HSE06-25\\% level. A 200 frequency grid points was applied to the integration over the frequencies along the imaginary time axis and real axis. For visualization purpose, the GW0 bands were interpolated based on Wannier orbitals, implemented in WANNIER90 code.\\cite{Mostofi2008}\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.40]{interstitial.pdf}\n\\caption{\\label{interstitial}(Color online) Six inequivalent interstitial configurations in phosphorene bilayer. The point defects are colored differently.}\n\\end{figure}\n\nTo model a charged-defect, a uniform background charge with opposite sign was added to keep the global charge neutrality of the whole system. The formation energy of a charged defect was defined as \\cite{Zhang1991}\n\n\\begin{equation}\\label{eq1}\n\\begin{split}\n\\Delta E^f_D(\\alpha,q)=E_{tot}(\\alpha,q)-E_{tot}(host,0)-n_{\\alpha}\\mu_{\\alpha} \\\\ \n+q(\\mu_{e}+\\epsilon_{v})+E_{corr}[q],\n\\end{split}\n\\end{equation}\n\nwhere $E_{tot}(\\alpha,q)$ and $E_{tot}(host,0)$ are the total energies of the supercells with and without defect $\\alpha$. \\emph{n}$_\\alpha$ is the number of atoms of species $\\alpha$ added to (\\emph{n}$_\\alpha$>0) or and removed from (\\emph{n}$_\\alpha$<0) the perfect supercell to create defect $\\alpha$. $\\mu_{\\alpha}$ is the atomic chemical potential equal to the total energy per atom in its elemental crystal.\\emph{q} is the charge state of defect, $\\epsilon_{v}$ is the host valence band maximum (VBM) level and $\\mu_{e}$ is electron chemical potential in reference to the $\\epsilon_{v}$ level. Therefore, $\\mu_{e}$ can vary between zero and the band-gap (\\emph{E}$_g$) of few-layer phosphorene. The final term accounts for both the alignment of the electrostatic potential between the bulk \nand defective (charged) supercells, as well as the finite-size effects resulting from the long-range Coulomb interaction of charged defects in a homogeneous neutralizing background. It can be evaluated by using the Freysoldt correction scheme with an average static dielectric constant.\\cite{Freysoldt2009} \n\nA 12\\texttimes{}8\\texttimes{}1 \\emph{k}-mesh with a Gaussian smearing of 0.01 eV was employed in the calculations of static dielectric tensors $\\epsilon$ of pristine few-layer phosphorene. For the static dielectric tensors, the ion-clamped contribution was calculated from the response theory of insulators in finite electric field.\\cite{Souza2002} Since the ionic contributions depend on the Born-effective charges and the vibrational modes only,\\cite{Paier2009} they were treated using GGA-PBE approximation based on density-functional perturbation theory.\\cite{Wu2005} More details of this technique can be found in our previous work.\\cite{Wang2014} \nThe defect thermodynamic transition (ionization) energy level $\\epsilon_{\\alpha}$(\\emph{q}\/$\\emph{q}^{\\prime}$) is defined as the Fermi-level (\\emph{E}$_\\text{F}$) position for which the formation energies of these charge states are equal for the same defect,\n\\begin{equation}\\label{eq3}\n\\epsilon_{\\alpha}(q\/q^{\\prime})=[\\Delta E^f_D(\\alpha,q)-\\Delta E^f_D(\\alpha,q^{\\prime})]\/(q^{\\prime}-q).\n\\end{equation}\nMore specifically, the defect is stable in the charge state \\emph{q} when the \\emph{E}$_\\text{F}$ is below $\\epsilon_{\\alpha}(q\/q^{\\prime})$, while the defect is stable in the charge state q$^{\\prime}$ for the \\emph{E}$_\\text{F}$ positions above $\\epsilon_{\\alpha}(q\/q^{\\prime})$.\n\n\\section{Results and discussion}\n\\subsection{Fundamental properties of pristine few-layer phosphorene}\nPrior to the investigation of defective system, we have first calculated the atomic and electronic properties of pristine few-layer phosphorene. The calculated lattice parameters as a function of film thickness, yielded by PBE, PBE+vdW, and HSE06-25\\%+vdW treatments of the density functional are listed in Table \\ref{lattice}. \nWe find the lattice parameter \\emph{b} increases by 0.07-0.15 {\\AA} from bulk to monolayer, while \\emph{a} and interlayer distance $\\Delta$\\emph{d} are quite insensitive to the film thickness. Similar trends were also reported in a previous first-principles study by Qiao \\emph{et al.}\\cite{Qiao2014} For bulk black phosphorus, the measured lattice parameters are \\emph{a}=3.31 {\\AA}, \\emph{b}=4.38 {\\AA} and $\\Delta$\\emph{d}=5.24 {\\AA}.\\cite{Brown1965} We see that PBE overestimates both \\emph{b} (3.6\\%) and $\\Delta$\\emph{d} (5.5\\%); PBE+vdW and HSE06+vdW, on the other hand, are in much better agreement with experiment. So far, there are no experimental data for few-layer phosphorene systems, but we speculate that the success of PBE+vdW and HSE06+vdW in description of bulk black phosphorus could probably extend to few-layer phosphorene. Therefore, we include vdW correction in the following calculations unless otherwise stated. \n\n\n\\begin{table*}[htbp]\n\\centering\n\\begin{ruledtabular}\n\\caption{\\label{lattice} Lattice constants \\emph{a}, \\emph{b} and interlayer distance $\\Delta$\\emph{d} as a function of film thickness in few-layer phosphorene given by PBE, PBE+vdW and HSE06-25\\%+vdW approaches respectively.}\n\\begin{tabular}{c|ccc|ccc|ccc|}\n&\\multicolumn{3}{c|}{PBE}\n&\\multicolumn{3}{c|}{PBE+vdW}\n&\\multicolumn{3}{c|}{HSE06-25\\%+vdW}\\\\\nSystems & \\emph{a} (\\AA) & \\emph{b} (\\AA) & $\\Delta$\\emph{d} (\\AA) & \\emph{a} (\\AA) & \\emph{b} (\\AA) & $\\Delta$\\emph{d} (\\AA) & \\emph{a} (\\AA)& \\emph{b} (\\AA) & $\\Delta$\\emph{d} (\\AA) \\\\\n\\hline\nmonolayer &3.30 & 4.61 & - & 3.32 & 4.56 & - & 3.30 & 4.50 & - \\\\\nbilayer & 3.31 & 4.58 & 5.57 & 3.32 & 4.50 & 5.21\t& 3.30 & 4.45 & 5.17 \\\\\ntrilayer & 3.31 & 4.58 & 5.58 & 3.32 & 4.47 & 5.22\t& 3.30 & 4.44 & 5.18 \\\\\nquadrilayer & 3.31 & 4.57& 5.59 & 3.32 & 4.46 & 5.23 & 3.30 & 4.44 & 5.19 \\\\\nbulk$^a$ &3.31 & 4.54 & 5.53 & 3.33 & 4.41 & 5.23 & 3.31 & 4.37 & 5.19 \\\\\n\\end{tabular}\n\\leftline{$^a$ Experimental lattice conctants: \\emph{a}=3.31 {\\AA}, \\emph{b}=4.38 {\\AA} and $\\Delta$\\emph{d}=5.24 {\\AA} in reference \\onlinecite{Brown1965}.}\n\\end{ruledtabular}\n\\end{table*}\n\nThe standard HSE06 approach with 25\\% exact exchange is known to well reproduce the band gaps of small- to medium-gap systems, but not those of wide-gap materials.\\cite{Paier2006, Paier2006a, Marsman2008} Recently, Fuchs \\emph{et al.} have shown that GW0 approach can describe very well (but slightly overestimate) the electronic structure of wide-gap materials, and the mean absolute relative error (MARE) on the calculated band gaps of some representative traditional semiconductors is only 8.0\\%.\\cite{Fuchs2007} We summarize the PBE, HSE06, and GW0 calculated band gap of few-layer phosphorene and bulk phosphorus in Table \\ref{bandgap}. For the bulk, GW0 gives a band gap of 0.65 eV, significantly higher than the experimental value of 0.31-0.35 eV.\\cite{Keyes1953, Maruyama1981, Akahama1983, Warschauer2004} The HSE06 result, 0.28 eV, is slightly lower than experimental value. We therefore expect that GW0 and HSE06-25\\% approaches would give a reasonable upper and lower bounds for the band gap of few-layer phosphorene.\n\nThe most important knowledge learn from Table \\ref{bandgap} is that all density functional forms predict a similar trend: the energy band gap of phosphorene decreases with increasing film thickness. This phenomenon, we argue, is mainly due to the energy band broadening induced by interlayer interaction. Additionally, the quantum confinement effect in low dimensional materials are likely to contribute to this trend.\\cite{Kang2013} Since there is no experimental results concerning defective phosphorene and the GW0 approach can perform neither structural optimization nor total energy calculations, we chose to utilize somewhat larger Hartree-Fock mixing parameters $\\alpha_\\text{opt}$ for thin phosphorene, \\emph{i.e.}, 35\\% for monolayer and 30\\% for bilayer, in an attempt to rectify the probably underestimated band gaps. As for the quadrilayer phosphorene, we used a parameter of 25\\%, the same value as for the bulk. \n\n\n\\begin{table*}[htbp]\n\\centering\n\\begin{ruledtabular}\n\\caption{\\label{bandgap} The calculated band gap (\\emph{E}$_g$) of few-layer phosphorene as a function of film thickness using PBE, HSE06 and GW0 methods respectively.}\n\\begin{tabular}{c|cccccc}\nSystems & PBE & HSE06-25\\% & HSE06-$\\alpha_\\text{opt}$ & GW0 & Previous work$^a$ & Exp. \\\\\n\\hline\nmonolayer &0.91 & 1.56 & 1.91$^b$ & 2.41 & 1.5-2.0 & - \\\\\nbilayer &0.45 & 1.04 & \t1.23$^c$ & 1.66 & 1.0-1.3 &- \\\\\ntrilayer &0.20 & 0.74 & \t0.98$^c$ & 1.20 & 0.7-1.1 &- \\\\\nquadrilayer & 0.16 & 0.71 & 0.71$^d$ & 1.08 &0.5-0.7 & -\\\\\nbulk &0.10 & 0.28 & 0.28$^d$ & 0.58 & $\\sim$0.3 & 0.31$\\sim$0.35$^e$ \\\\\n\\end{tabular}\n\\leftline{$^a$ References \\onlinecite{Rodin2014,Tran2014,Qiao2014}.}\n\\leftline{$^b$ HSE06-35\\%.}\n\\leftline{$^c$ HSE06-30\\%.}\n\\leftline{$^d$ HSE06-25\\%.}\n\\leftline{$^e$ References \\onlinecite{Keyes1953,Maruyama1981,Akahama1983,Warschauer2004}.}\n\\end{ruledtabular}\n\\end{table*}\n\nFigure \\ref{monolayer} displays the calculated band structure of monolayer phosphorene using HSE06 and GW0. Note that both the VBM and conduction band minimum (CBM) are located at $\\Gamma$-point, and hence a direct band gap. This result is consistent with many previous theoretical studies.\\cite{Qiao2014, Tran2014, Liu2014, Peng2014, Rodin2014} However, there is a disagreement on this point. For example, Li \\emph{et al.} have argued that monolayer phosphorene possibly possesses an indirect\nband gap, because the band interactions near the $\\gamma$ point are complicated, as was viewed from a $k\\cdot p$ perturbation theory.\\cite{Li2014a} The partial charge density analyses show that the VBM are derived from the bonding states between P atoms in different sublayers and the anti-boding states between P atoms in the same sublayer. The opposite is true for the case of CBM. \n\nWe plot in Fig. \\ref{bilayer}(a) the band structure of bilayer phosphorene. Clearly, the band characteristics are similar to those of the monolayer, except that in the bilayer, energy level splitting occurs due to the interlayer interactions. The formation of a bilayer phosphorene can be viewed as the result of two monolayer moving close to each other. The degenerated energy levels of two monolayers become non-degenerated via interlayer interactions. Overall, in both monolayer and bilayer cases, HSE06 and GW0 yield similar band dispersion. Remarkable discrepancy occurs to valence states lying 10 eV below the VBM. Energy bands calculated using HSE06 approach are pushed further downward compared to those obtained using GW0 approach.\n\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.5]{monolayer.pdf}\n\\caption{\\label{monolayer}(Color online) (a) (Color online) Energy band structures (a) of phosphorene monolayer calculated using HSE06 and GW0 methods, and side views of charge density of (b) CBM and (c) VBM. The vacuum level is set to zero and the charge density isosurface levels are shown at 40\\% of their maximum values.}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.52]{bilayer.pdf}\n\\caption{\\label{bilayer}(Color online) Energy band structures (a) of phosphorene monolayer calculated using HSE06 and GW0 methods, and side views of charge density of (b) CBM and (c) VBM. The vacuum level is set to zero and the charge density isosurface levels are shown at 40\\% of their maximum values.}\n\\end{figure}\n \n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.48]{bandalign.pdf}\n\\caption{\\label{bandalign}(Color online) Band alignments for few-layer phosphorene. The vacuum level is taken as zero energy reference.}\n\\end{figure}\n\nThe calculated band alignments for few-layer phosphorene using difference approaches are shown in Fig. \\ref{bandalign}. Although differing in magnitude, all approaches produce similar trends: (\\emph{i}) with the increases in film thickness, the VBM and CBM of few-layer phosphorene move upward and downward respectively, as is the case in few-layer transition-metal dichalcogenides;\\cite{Kang2013} (\\emph{ii}) overall, the magnitude of band offset on the valence band is more significant than that on the conduction band. This implies that the transition levels of acceptors are more sensitively dependent on film thickness than those of donors.\n\n\nTo evaluate the formation energy of charged defects via Eq. (\\ref{eq1}), we need to know the static dielectric tensor of few-layer phosphorene. With the periodic slab model, our calculated static dielectric constant tensor $\\varepsilon$ demonstrate a linear dependence on the inverse of vacuum thickness (Fig. \\ref{epsilon}). Obviously, the \\emph{true} value of the static dielectric tensor is the one obtained in the limiting case of infinite vacuum. In effect, it can be extrapolated from the results for finite-size supercells with different vacuum thickness by scaling scheme. We list in Table \\ref{dielectric} the calculated $\\varepsilon$ of few-layer phosphorene parallel to \\emph{a} ($\\varepsilon^{a}$), \\emph{b} ($\\varepsilon^{b}$), and \\emph{c} ($\\varepsilon^{c}$) axes using HSE06. It is seen that the static dielectric tensor becomes larger for thicker phosphorene, due to enhanced screening effect. Additionally, the decrease in the band gap with increasing film thickness also contributes to this trend. The ionic contributions to the $\\varepsilon$, on the other hand, are found to be rather small ($\\leq$0.5). For the bulk system, our calculated $\\varepsilon$ are noticeably different from Ref. \\onlinecite{Asahina1984}, in which the frequency-dependent dielectric function calculations were performed using the local density approximation. \nSince the defective few-layer phosphorene systems have been modeled with supercells containing finite-size vacuum, the $\\varepsilon$ obtained from the corresponding pristine unit-cells have been adopted in calculating formation energies of defects.\n\n\n\\begin{table}[htbp]\n\\centering\n\\begin{ruledtabular}\n\\caption{\\label{dielectric} Static dielectric tensors $\\varepsilon$ as a function of the inverse of vacuum thickness for (a) monolayer, (b) bilayer and (c) quatrilayer phosphorene.}\n\\begin{tabular}{c|ccc}\nSystems & $\\varepsilon^{a}$ & $\\varepsilon^{b}$ & $\\varepsilon^{c}$ \\\\\n\\hline\nmonolayer & 1.12 & 1.15 & 1.01 \\\\\nbilayer & 1.72 & 1.93 & 1.03 \\\\\nquadrilayer & 2.79 & 3.02 & 1.05 \\\\\nbulk & 11.99 & 14.64 & 7.86 \\\\\nbulk (Ref. \\onlinecite{Asahina1984}) & 10.2 & 12.5 & 8.3 \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.42]{dielectric.pdf}\n\\caption{\\label{epsilon}(Color online) Static dielectric tensors $\\epsilon$ as functions of the inverse of vacuum thickness for (a) monolayer, (b) bilayer and (c) quantrilayer phosphorene respectively.}\n\\end{figure}\n\n\n\\subsection{Properties of native defects in few-layer phosphorene}\nIn consideration of the electrostatic screening effect of vacuum slab along the \\emph{c} direction is small (the dielectric constant of vacuum is equal to 1), we take monolayer phosphorene as an example to check the total energy convergence of charged-defect systems with respect to the vacuum thickness. Test calculations show that a vacuum thickness of 12 {\\AA} can ensure the charge-neutral systems being well converged within 0.01 eV in total energies. This is not the case, however, for charged defects. Figure \\ref{converge}(a) reveals that the numerical errors in the calculated total energies of monolayer phosphorene containing one V$_\\text{P}^{out}$ or P$_i^{out}$ in 1- charge state are about 0.01 eV when a vacuum of 40 {\\AA} was applied. However, for defects in 1+ charge state, a vacuum of 40 {\\AA} is still far less than enough [Fig. \\ref{converge}(b)]. Thus, the formation energies of positively and negatively charged native defects would be overestimated and underestimated in few-layer phosphorene when a typical 12 {\\AA} vacuum was adopted without any corrections. These errors lead to unrealistic deeper transition levels for both acceptors and donors.\n\n \n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.34]{converge.pdf}\n\\caption{\\label{converge}(Color online) Total energies of phosphorene monolayer containing a vacancy, V$_\\text{P}^{out}$, or a self-interstitial, P$_i^{out}$, in the charge states of (a) 1- or (b) 1+, as a function of the inverse of vacuum thickness. The total energies of the slabs with a vacuum thickness of 20 {\\AA} are taken as zero.}\n\\end{figure}\n\nThe calculated formation energy of V$_\\text{P}$ and P$_i$ in monolayer phosphorene as a function of electron chemical potential are plotted in Fig. \\ref{monolayer_D}(a). The change of slope in the line for P$_i$ corresponds to the transition between charge states where thermodynamic transition takes place. We find that V$_\\text{P}$ is stable in the charge state of 1- with respect to the neutral state for all values of \\emph{E}$_\\text{F}$ in the host band gap. This means that V$_\\text{P}$ behaves as a shallow acceptor and could be one of the sources for \\emph{p}-type conductivity observed experimentally.\\cite{Liu2014} Because of the high formation energy (around 2.9 eV at \\emph{E}$_\\text{F}$=VBM), the negatively charged V$_\\text{P}$ has a low concentration in phosphorene monolayer under equilibrium growth conditions, and thus might not be an efficient \\emph{p}-type defect. Upon geometry optimization, the nearest neighbor of 1- charged V$_\\text{P}$ on the top sublayer relaxes toward V$_\\text{P}$ and bonds to its four neighbors with two different bond lengths of 2.41 {\\AA} and 2.28 {\\AA} respectively [Fig. \\ref{monolayer_D}(b)]. It should be pointed that the donor ionization levels of V$_\\text{P}$ or P$_i$ are unstable for all positions of \\emph{E}$_\\text{F}$ in the host band gaps of few-layer phosphorene, suggesting that both V$_\\text{P}$ and P$_i$ are expected to be acceptors instead of donors.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.43]{monolayer_D.pdf}\n\\caption{\\label{monolayer_D}(Color online) (a) Formation energy of V$_\\text{P}$ and P$_i$ as a function of electron chemical potential $\\mu_e$ in monolayer phosphorene. (b) Local structures of V$_\\text{P}$ and P$_i$. The defect and its nearest-neighbors are colored differently.}\n\\end{figure}\n\nA self-interstitial P atom, P$_i$, finds its stable position by bridging two host P atoms [see Fig. \\ref{interstitial}(a)]. The formation energy of P$_i$ is about 1.0 eV lower than that of V$_\\text{P}$ when \\emph{E}$_\\text{F}$ is near the VBM, suggesting that P$_i$ is the dominant native point defect under \\emph{p}-type conditions. The (0\/1-) acceptor level of P$_i$ is predicted to be 0.88 eV above the VBM, implying that P$_i$ is a deep acceptor. On the other hand, when the \\emph{E}$_\\text{F}$ is close to the host CBM, both V$_\\text{P}$ and P$_i$ have much lowered formation energies and are energetically stable in the charge state of 1-. This means that they can serve as compensating centers in \\emph{n}-type doping monolayer. In the neutral charge state, P$_i$ bonds to two host P atom with identical bond lengths of 2.14 {\\AA}. A small asymmetry was observed in these two bonds (2.06 {\\AA} versus 2.20 {\\AA}), a local lattice distortion different from that around P$_i$ in 1- charge state. \n\nIn Figure \\ref{multilayer_D}, we display the calculated formation energies of V$_\\text{P}$ and P$_i$ in bilayer (panel a) and quadrilayer phosphorene (panel b) as a function of electron chemical potential. Our calculations show that both P$_i^{out}$ and V$_\\text{P}^{out}$ are energetically more stable than P$_i^{in}$ and V$_\\text{P}^{in}$ in both films, regardless of the charge states. The acceptor transition levels for V$_\\text{P}^{out}$ and P$_i^{out}$ are -0.64 eV (not shown) and 0.19 eV with respect to the VBM, indicating that all possible native defects can contribute to the \\emph{p}-type conductivity in bilayer. For quadrilayer phosphorene, both V$_\\text{P}^{out}$ and P$_i^{out}$ are stable in the charge state of 1- for any \\emph{E}$_\\text{F}$ in the band gap. This trend is closely related to the upward shift of the band offset for VBM (see Fig. \\ref{bandalign}). The calculated formation energies of all considered native defects decrease with the increase of film thickness. For examples, the calculated formation energies of the neutral V$_\\text{P}^{out}$ and P$_i^{out}$ decrease from 2.88 and 1.86 eV in monolayer, to 2.67 and 1.82 eV in bilayer, and further to 2.18 and 1.73 eV in quanrilayer, with the layer-dependent effect being more significant on V$_\\text{P}^{out}$ than P$_i^{out}$. Therefore, the formation energies of these acceptor-type defects in \\emph{N}-layer phosphorene (\\emph{N}>4) could be low enough when the \\emph{E}$_\\text{F}$ is near the CBM. As a result, self-compensation would be unavoidable in \\emph{n}-type phosphorene. We expect that nonequilibrium growth techniques might be necessary to reduce the concentrations of native defects in preparation of \\emph{n}-type phosphorene. \n\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.4]{multilayer_D.pdf}\n\\caption{\\label{multilayer_D}(Color online) Formation energies of V$_\\text{P}$ and P$_i$ in (a) bilayer and (b) quadrilayer phosphorene as a function of electron chemical potential.}\n\\end{figure}\nWe plot in Fig. \\ref{bilayer_struc} the local atomic arrangements around the negatively charged V$_\\text{P}^{out}$, V$_\\text{P}^{in}$, P$_i^{out}$ and P$_i^{in}$ in bilayer phosphorene. One can see that the relaxed local structure of V$_\\text{P}^{out}$ is very similar to the case of monolayer (panel a). Unlike V$_\\text{P}^{out}$, the neighboring P atoms of negatively charged V$_\\text{P}^{in}$ undergo no significant distortion from their ideal lattice positions (panel b). This in turn leads to long ($\\geq$3.1 {\\AA}) and weak bonds between nearest-neighbors of V$_\\text{P}^{in}$. The equilibrium local structure of negatively charged P$_i^{out}$ in bilayer is also similar to that in monolayer. As for P$_i^{in}$, the upper layer pushes the negatively charged P$_i^{in}$ to move downward, resulting in two identical bond lengths between P$_i^{in}$ and its two nearest-neighbors (2.14 {\\AA}). Meanwhile, the nearest-neighbors on the upper layer relax symmetrically away from P$_i^{in}$, as illustrated in panel (d).\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.34]{bilayer_struc.pdf}\n\\caption{\\label{bilayer_struc}(Color online) Local structure of negatively charged (a) V$_\\text{P}^{out}$, (b) V$_\\text{P}^{in}$, (c) P$_i^{out}$ and P$_i^{in}$ in bilayer. The point defects and their nearest-neighbors are colored differently.}\n\\end{figure}\nTo gain deeper insight into the origin of the conductive characteristics in few-layer phosphorene, we display in Fig. \\ref{defectalign} the transition levels of native point defects with respect to the vacuum level. One can see that the transition levels of V$_\\text{P}$ and P$_i$ generally decrease with increasing film thickness. This means that the magnitudes of formation energies of negatively charged defects decrease more rapidly than those for the neutral ones when going from monolayer to quadrilayer. This results in the shift of the acceptor transition levels of V$_\\text{P}$ and P$_i$ toward lower energies. Combined with the band offset effects for the host VBM, this shift is also responsible for the observed shallower acceptor levels of V$_\\text{P}$ and P$_i$ in thicker films.\n\nWe note that three different HF mixing parameter $\\alpha$ (25\\%, 30\\% and 35\\%) were adopted for monolayer, bilayer and quadrilayer phosphorene. We now take V$_\\text{P}^{out}$ and P$_i^{out}$ as examples to investigate the impact of $\\alpha$ on their stability and conductivity. We present in panel (a) of Fig. \\ref{alpha} the comparison of HSE06-25\\% and HSE06-35\\% in respect of the formation energy of V$_\\text{P}$ and P$_i$ as a function of electron chemical potential in monolayer phosphorene. A deviation of around 0.4 eV is observed for the formation energy of P$_i^{out}$; while the change in transition levels of V$_\\text{P}^{out}$ and P$_i^{out}$, shown in panel (b), is within 0.1 eV when $\\alpha$ goes from 35\\% to 25\\%. This suggests that $\\alpha$ has insignificant effects on the transition levels of V$_\\text{P}^{out}$ and P$_i^{out}$. The rigid shifts of the host VBM are primarily responsible for the shallower transition levels which are calculated by using HSE06-25\\% approach. Furthermore, one can conclude that P$_i$ still acts as a deep acceptor if the monolayer phosphorene has a band gap value of 1.56 eV, based on the HSE06-25\\% calculated results. \nWe expect it to hold true for thicker phosphorene. Similar results were also found for the native defects in GaInO$_3$.\\cite{Wang2015}\n\n \n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.5]{defectalign.pdf}\n\\caption{\\label{defectalign}(Color online) Transition levels of V$_\\text{P}$ and P$_i$ in few-layer phosphorene, referenced to the vacuum level.}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.4]{alpha.pdf}\n\\caption{\\label{alpha}(Color online) Formation energies of V$_\\text{P}$ and P$_i$ as functions of electron chemical potential in monolayer phosphorene given by HSE06-$\\alpha$ method. The solid and dashed lines represent $\\alpha$=35\\% and $\\alpha$=25\\% results. The gray region represents the HSE06-25\\% band gap. (b) Transition energy referenced to the vacuum level.}\n\\end{figure}\n\n\\section{summary}\nIn conclusion, we have investigated the structural and electronic properties of native point defect in few-layer phosphorene using first-principles calculations based on hybrid density functional theory including vdW correction. Our calculations show that both vacancy and self-interstitial P defects exhibit acceptor-like behavior and their formation energies and transition levels decrease with increasing film thickness. The same trend is also observed in the host band gap. These trends can be explained by the band offsets for few-layer phosphorene. Specifically, we find that the valence band maximum and conduction band minimum systematically shift upward and downward in reference to the vacuum level with the increases of film thickness. As a result, both vacancies and self-interstitials become shallow acceptors in few-layer phosphorene and can acount for the sources of \\emph{p}-type conductivities observed in experiments. On the other hand, these native acceptors could have non-negligible concentrations and thus act as compensating centers in \\emph{n}-type phosphorene. \n \n\n\n\n\\begin{acknowledgments}\nWe thank Y. Kumagai for valuable discussions. V. Wang acknowledges the support of the Natural Science Foundation of Shaanxi Province, China (Grant no. 2013JQ1021). Y. Kawazoe is thankful to the Russian Megagrant Project No.14.B25.31.0030 ``New energy technologies and energy carriers''\nfor supporting the present research. The calculations were performed on the HITACHI SR16000 supercomputer at the Institute for Materials Research of Tohoku University, Japan.\n\\end{acknowledgments}\n\n\n\\nocite{*}\n\\bibliographystyle{aipnum4-1}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzeei b/data_all_eng_slimpj/shuffled/split2/finalzeei new file mode 100644 index 0000000000000000000000000000000000000000..8295413fa179563564b78ac4fe86c5f22a5bea3a --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzeei @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThe rise of deep neural networks for learning general-purpose representations in an end-to-end manner has led to numerous breakthroughs in different areas of artificial intelligence, including object recognition~\\cite{ren2015faster}, complex gameplay~\\cite{silver2017mastering}, and language modeling~\\cite{devlin2018bert}. These advancements have brought their widespread adoption to other domains, particularly for problems involving time-series or sensory inputs, which, crucially, depended on ad-hoc feature extraction with shallow learning techniques. The efficiency of deep learning algorithms substantially improved the state-of-the-art in these fields~\\cite{supratak2017deepsleepnet, martinez2013learning, hannun2019cardiologist, saeed2018model, radu2018multimodal}; while largely dismissing manual feature design strategies. However, this success is due to supervised learning models, which require a huge amount of well-curated data to solve the desired task.\nCompared to computer vision or other realms, semantically-labeled sensory data (such as electrooculography, heart rate variability, and inertial signals) is much more difficult to acquire, owing to: privacy issues, complicated experimental set-ups and the prerequisite of expert-level knowledge for data labeling.\n\nDue to these limitations, unsupervised learning holds an enormous potential to leverage a vast amount of unlabeled data produced via omnipresent sensing systems. For instance, an average smartphone or smartwatch is equipped with a multitude of sensors, such as IMUs, microphone, proximity, ambient light and heart rate monitors producing a wealth of data that can be utilized for solving challenging problems and can enable novel use cases through harnessing the power of machine learning. Past efforts to learn from sensory (or time-series) data were mainly limited to the use of autoencoding based approaches~\\cite{li2014unsupervised, bhattacharya2014using, martinez2013learning, plotz2011feature} that can learn to compress the data, but fail to learn semantically useful features~\\cite{oord2018representation}. More recently, generative adversarial networks (GANs) have been explored to some extent for unsupervised learning from sensory inputs~\\cite{yao2018sensegan}, but GANs are infamous for being notoriously unstable during training and suffer from mode collapse, making it a great challenge to use them in practice, for now~\\cite{thanh2019improving}. It might also be excessive to use GANs as a pre-training strategy when synthesizing data is not a core focus, as the number of parameters in the network that need to be learned increases extensively. Moreover, transfer learning has been utilized to a limited extent for tackling the issue of unavailability of massive well-annotated sensory datasets for training deep models. It has been explored to improve the performance in a supervised setting through joint-training on labeled source and target datasets~\\cite{chen2019cross, gjoreski2019cross}. In these cases, the features transferred from supervised models may not be general and are mostly tied to a specific task; therefore, they might not generalize well to other tasks of interest, compared to methods that learn task-agnostic features, in an unsupervised manner. Likewise, existing methods did not focus on learning in low-data regimes nor from unlabeled input which is available in much larger quantities (see section~\\ref{sec:rw} for related work). In this paper, we show that the emerging paradigm of self-supervised learning offers an efficient way for learning semantically-meaningful representations from sensory data that can be used for solving a diverse set of downstream tasks~\\footnote{downstream or end tasks referred to the tasks of interest e.g., sleep stage scoring.}. The self-supervised approaches exploit the inherent structure of the input to derive a supervisory signal. The idea is to define a pretext task, for which annotations can be acquired without human involvement (directly from the raw data) and can be solved using some form of unsupervised learning techniques. This intriguing property essentially renders a deep sensing model, that is developed based on the earlier described principle of \"self-learning\" in nature: a system that can be trained continuously on massive, readily-accessible data in an unsupervised manner~\\cite{de1994learning, schmidhuber1990making}. However, in this case, the challenge lies in designing complex auxiliary tasks that can force the deep neural network to capture meaningful features of the input, while avoiding shortcuts~\\cite{geirhos2020shortcut} (i.e., simple unintended ways to trivially solve the auxiliary task without learning anything useful that generalizes beyond the auxiliary task). \n\nOver the last few years, given the large potential of self-supervised learning in exploiting unlabeled data, multiple surrogate or auxiliary tasks have been proposed for feature learning to ultimately solve complex problems in different domains~\\cite{oord2018representation, gidaris2018unsupervised, devlin2018bert}. Particularly in the vision community, a surge has been seen in developing self-supervised methods, owing to the availability of a wide variety of large scale datasets and well-established deep network architectures. In this realm, the most straightforward strategy is the reconstruction of contextual information based on partially observable input~\\cite{doersch2015unsupervised}. The prediction of color values for grayscale images~\\cite{zhang2016colorful} and the detection of the angle of rotation~\\cite{gidaris2018unsupervised} are recent attempts found to be useful in learning visual representations. Similarly, the temporal synchronization of multimodal data is exploited to learn audio-visual representations~\\cite{korbar2018cooperative}. Likewise, contrastive learning is another highly promising technique that aim to capture shared information among multiple views of the data~\\cite{tian2019contrastive, oord2018representation}, including successes in robotic imitation learning~\\cite{Sermanet2017TCN}. Thus, we conjecture that self-supervision is fruitful for automatically extracting generic latent embeddings from sensory data that can improve much-needed label efficiency, as acquiring well-labeled sensory data is extremely challenging in the real world. Furthermore, due to its annotation-free nature, this learning strategy is not only effective and scalable, but can also be directly leveraged in a federated learning environment~\\cite{bonawitz2019towards}, to learn from widely distributed and decentralized data without aggregating it in a centralized repository, which can preserve users' privacy~\\cite{mcmahan2017communication}. \n\nIn this paper, we present a principled framework for self-supervised learning of multisensor representations from unlabeled data. Our objective is to have numerous tasks, with each perhaps imposing a distinct prior on to the learning process, resulting in varying quality features that may differ across sensing datasets. Specifically, as proxy tasks and modalities could be of more or less relevance to the downstream task's performance, it is essential to explore and compare several pretext tasks so as to discover the ones with better generalization properties. The broad aim is to have many auxiliary tasks in a user's toolbox such that, either experimentally or based on prior knowledge, a relevant task can be selected for training deep models. Particularly, the objective is to have proxy tasks that enable learning of representations invariant to several input deformations that commonly arise in the temporal data, such as sensor noise and sampling-rate disparities, or that can be used jointly in a multi-task learning setting. To this end, we develop eight novel auxiliary tasks that intrinsically obtain supervision from the unlabeled input signals to learn general-purpose features with a temporal convolutional network, such that the pre-trained model generalizes well to the end tasks.\n\nOur approach comprises of pre-training a network through self-supervision with unlabeled data so that it captures high-level semantics and can be used either as a feature extractor\\footnote{i.e. leveraging representations from intermediate layers of the deep neural network} or utilized as initialization for making successive tasks of interest easier to solve with few labeled data. To develop the auxiliary tasks, we take advantage of the synchronized multisensor (or multimodal) data as it belongs to the same underlying phenomena and we exploit it to create proxy tasks that can capture broadly useful features. Specifically, it can substantially help in learning powerful representations of each modality, and ultimately learn more abstract concepts in a joint-embedding space. Thus, we use a multi-stream neural network architecture to solve proxy tasks so that it can learn modality-specific features with a distinct encoder per modality and subsequently learn a shared embedding space with a modality-agnostic encoder. The fundamental structure of our framework is illustrated in Figure~\\ref{fig:overview}. We adopt a small model architecture in this work to highlight a) effectiveness of self-supervised tasks (i.e. improvement is not due to complex architecture) and b) potential of deployment on resource-constrained devices for training and inference.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=10.5cm]{Figures\/overview.pdf}\n\\caption{Illustration of our \\textit{Sense and Learn} representation learning framework. A deep neural network is pre-trained with self-supervision using input modalities from large unlabeled sensory data, such as inertial measurements (or electroencephalogram, heart rate, and channel state information). The learned network can then be utilized as a feature-extractor or initialization for rapidly solving downstream tasks of interest with few labeled data.}\n\\label{fig:overview}\n\\end{figure}\n\nWe demonstrate that a relatively straightforward suite of auxiliary tasks results in meaningful features for diverse problems, including: activity recognition, stress detection, sleep stage scoring, and WiFi sensing. First, we show that the self-supervised representations are highly competitive with those learned with a fully-supervised model, by training a linear classifier on top of the frozen network, as it is a standard evaluation protocol for assessing the quality of self-supervised tasks~\\cite{tagliasacchi2019self, oord2018representation}. Second, we explore fine-tuning the last layer of the encoder to gain further improvements over training from scratch. Third, we investigate the effectiveness of the learned representations in low-data regime\\footnote{or in a semi-supervised setting}. Using our pre-trained network as initialization, we achieve a significant performance boost with as little as $5$ to $10$ labeled instances per class, which clearly highlights the value of self-supervised learning. Lastly, we evaluate the transferability of the features across related datasets\/tasks to show the generality of our method in an unsupervised transfer learning setting.\n\nIn summary, our main contributions are as follows:\n\n\\begin{itemize}\n\\item We propose \\textit{Sense and Learn}, a generalized self-supervised learning framework comprising several surrogate tasks to extract semantic structural concepts inherent to diverse types of sensory or time-series data. \n\n\\item We extensively evaluate our self-supervised tasks on various problems (e.g. sleep stage scoring, activity recognition, stress detection, and WiFi sensing) and learning settings (i.e. transfer and semi-supervised) to significantly improve the data efficiency or lower the requirement of collecting large-scale labeled datasets.\n\n\\item Our results demonstrate that self-supervision provides an effective initialization of the network (and powerful embeddings) that improves performance significantly with minimal fine-tuning, and works well in a low-data regime, which is of high importance for real-world use cases.\n\n\\item The developed auxiliary tasks require an equivalent computational cost as standard supervised learning and has fewer parameters than autoencoding methods, but provide better generalization with greatly improved sample efficiency. \n\n\\item We utilize a small network architecture to show the capability of self-supervision and its prospective usage on resource-constrained devices. In particular, the majority of our proposed tasks are designed around the principle that self-supervised data generation should not be computationally expensive; thus, it can be readily used for on-device learning. \n\n\\item We briefly discuss how to use our framework in practice, as well as its limitations.\n\n\\end{itemize}\n\n\\noindent In the following sections, we present the relevant literature to our work in Section~\\ref{sec:rw}. Our self-supervised methodology is described in Section~\\ref{methodology}. The experimental results are discussed in Section~\\ref{experiments}, real-world impact and limitations in Section~\\ref{sec:impact}, and conclusions and directions for future work are presented in Section~\\ref{sec:conclusion}.\n\n\\section{Related Work}\n\\label{sec:rw}\n\\subsection{Unsupervised and Self-Supervised Learning}\nDeep learning has revolutionized several areas of research with an intuitive property of learning discriminative features directly from the raw data and eliminating the need of manual feature extraction\\cite{radu2018multimodal, hammerla2016deep, martinez2013learning, hannun2019cardiologist}. The success of deep learning is largely attributed to the massive labeled datasets apart from other factors, such as availability of computational power and better neural architectures. Obtaining semantically labeled data required for training supervised models is an expensive and time-consuming process. Therefore, unsupervised learning has seen growing interest in the last couple of years as unlabeled data is available in huge quantities, especially on decentralized edge devices. A classical illustration of unsupervised feature learning is the autoencoder, which learns to map an input onto a lower-dimensional embedding so that reconstructing the original input from such a space incurs a lower error. However, the decoding-based strategies deplete the network capacity through attending to low-level details instead of capturing semantically meaningful features. Therefore, the focus of recent studies is on providing an alternative form of supervision, where annotations can be intrinsically extracted from the data itself. \n\nThe field of self-supervised learning exploits the natural supervision available within the input signal to define a surrogate task that can force the network to learn broadly-usable representations. To that end, numerous pretext tasks are proposed in different domains.~\\cite{noroozi2016unsupervised} established the task of predicting the relative position of randomly cropped image patches.~\\cite{larsson2016learning, zhang2016colorful} inferred color values for grayscale pictures,~\\cite{Sermanet2017TCN} utilize time-contrastive loss as a way to minimize the embedding distances of the same scene recorded from multiple viewpoints, while maximizing the distances for those captured at different timesteps. A similar technique is proposed in~\\cite{tian2019contrastive} to learn from multiple views of the data.~\\cite{tagliasacchi2019self} defined self-supervised tasks for audio, inspired by word$2$vec~\\cite{mikolov2013distributed}.~\\cite{korbar2018cooperative} showed that video representations could be learned by exploiting audio-visual temporal synchronization. Time-contrastive learning is suggested in~\\cite{hyvarinen2016unsupervised} for extracting features from time-series, in an unsupervised manner, through predicting segment IDs. Likewise, autoregressive modeling has been combined with predictive coding to learn compact latent embeddings for various domains~\\cite{oord2018representation}. For natural language modeling, self-supervised objectives, such as predicting masked tokens from surrounding ones and predicting the next sentence, turn out to be powerful methods for learning generic representations of text~\\cite{devlin2018bert}. Similarly, for learning inertial sensory features, ~\\cite{saeed2019multi} presented a signal transformation recognition task. Lately, self-supervised learning has been shown to be beneficial for semi-supervised learning, through jointly optimizing supervised and self-supervised losses~\\cite{zhai2019s}. In this work, we develop several self-supervised tasks for learning representations from a wide range of sensory data such as electroencephalography, electrodermal activity and inertial signals. We show that pre-training with self-supervision using unlabeled data helps in learning highly generalizable features that improve data efficiency and transfer well to a related set of tasks. \n\n\\subsection{Learning Sensing Models with Machine Learning}\nAn understanding of human contexts, activities and states is an important area of research in ambient computing and pervasive sensing due to the fact that it can play a central role in several application domains including: health, wellness, assistance, monitoring, and human computer interaction. To achieve the earlier described objective, the data is collected from users through wearables or other sensors, under varied environments, for learning a task-specific model. For instance, prior work on activity recognition explored various methodologies with inertial sensors embedded in smartphones or smartwatches~\\cite{himberg2001time, stisen2015smart, hammerla2016deep}. Emotional state recognition is widely achieved with physiological signals, such as skin conductance and heart rate variability~\\cite{saeed2018model, martinez2013learning, picard2001toward}. Similarly in sleep analysis, the electrical brain activity is captured with an electroencephalogram to classify sleep into different stages~\\cite{supratak2017deepsleepnet, lajnef2015learning, gunecs2010efficient}. Importantly, for device-free sensing systems, channel state information from WiFi is utilized to infer participants' activities in a non-intrusive manner~\\cite{yousefi2017survey}. Earlier developed methods for these problems heavily relied on manual feature extraction from sensory data to infer a user's activity, emotional state or sleep score and these methods were limited depending on the domain knowledge available to extract discriminative features. With the tremendous progress in end-to-end supervised learning via deep networks, it has been shown that the features can be learned directly from data instead of hand-crafting them based on domain knowledge \\cite{radu2018multimodal, hammerla2016deep, martinez2013learning, hannun2019cardiologist}.\n\nConsequently, 1D convolutional and recurrent neural networks have become standard techniques for achieving state-of-the-art performance on problems involving temporal data~\\cite{hannun2019cardiologist, saeed2018model, supratak2017deepsleepnet, hammerla2016deep}. Nevertheless, these approaches have heavily relied on the availability of large-annotated datasets, which are notoriously difficult to acquire in the real-world. Due to this, in recent years, few work explored unsupervised feature learning to exploit the availability of vast amounts of unlabeled data, while mainly focusing on input reconstruction via autoencoders and related variants, such as restricted Boltzmann machines and sparse coding ~\\cite{li2014unsupervised, bhattacharya2014using, martinez2013learning, plotz2011feature}. There has also been work on utilizing generative adversarial networks for modeling data distributions without supervision~\\cite{luo2018multivariate, esteban2017real} and in semi-supervised learning for sensing models~\\cite{yao2018sensegan}. Furthermore, transfer learning has also been leveraged to improve neural network generalization in domains where large labeled data is difficult to obtain, but focused on transfer from supervised models~\\cite{chen2019cross, gjoreski2019cross}. More recently,~\\cite{saeed2019multi} proposed a self-supervised task of signal transformation recognition for feature learning that achieved significant improvement in activity recognition over autoencoding, though focusing only on unimodal input and the activity recognition problem. As opposed to earlier works, we present a general framework for learning multimodal representations from a diverse set of sensors in a self-supervised way and compared to~\\cite{saeed2019multi} we simplify the problem formulation of transformation recognition (see section~\\ref{sec:sslt}); our novel proxy tasks work on-par and can be used when transforming the input is not desirable or when it may lead to unintended outcomes (e.g. ECG signals). Furthermore, pre-training models with our auxiliary tasks significantly lower the amount of labeled data required to achieve better generalization and opens up the possibility of on-device learning from decentralized unlabeled data.\n\n\\section{Methodology}\n\\label{methodology}\nIn this section, we begin with a motivation and an overview of our self-supervised framework for learning sensory representations. Next, we provide a formalization of the auxiliary tasks and discuss an end-to-end approach for mutli-modal learning. Subsequently, we describe the network architecture design, its implementation, and the optimization procedure. \n\n\\subsection{Motivation and Overview}\nThe key insight behind our technique is that the self-supervised pre-training acts as a prior that can give rise to varying quality representations that encode underlying signal semantics at different levels, which may or may not be useful for a downstream-task of interest. Therefore, it is vital to employ multiple auxiliary tasks to discover the suitable inductive bias necessary to obtain optimal performance on the desired end-task. This intuition is important considering that the time-series (or sensory) data shows peculiar characteristics (e.g. signal-to-noise ratio, amplitude variances, and sampling rates) depending on the nature of phenomena being recorded. Likewise, there should be an array of tasks to choose from depending on the learning problem and device type (e.g. available resources, sensor types etc.). Importantly, we want the self-supervised model to learn generic features rather than focusing on low-level input details, as a pre-trained network has to provide a strong initialization for learning with limited labeled data and generalize to other related tasks. Thus, instead of relying on a single auxiliary task, we learn latent representations with a broad set of tasks based on different objective functions. \n\nWe propose a generalized framework comprising of eight pretext tasks that can be used to learn features from heterogeneous multisensor data. To achieve this, we utilize a temporal convolutional network (TCN) $F_\\theta$ with a distinct encoder $e_m$ for each input modality $I_m$ and a shared encoder $e_s$ for multi-modal representation extraction. We choose to use TCN as an embedding network for sequence modeling due to its effectiveness in capturing long-term dependencies and parallelizability at a significantly lower cost than recurrent networks~\\cite{bai2018empirical}. For every learning problem, we consider unlabeled multisensor (or multimodal) data $\\mathcal{D} = \\{(\\textbf{u}_1, \\textbf{v}_1), (\\textbf{u}_2, \\textbf{v}_2), \\ldots (\\textbf{u}_n, \\textbf{v}_n)\\}$ consisting of $N$ examples. Here, $\\textbf{u}_n$ and $\\textbf{v}_n$ denote the samples of different modalities (e.g. accelerometer and gyroscope) of the $n^{th}$ example. The defined pretext tasks exploit the inherent properties of the data to obtain supervision from the input pairs without requiring any manual annotation to optimize a certain loss function. Specifically, each surrogate task employs its own loss function $L_t$ for learning $F_\\theta$ differently. For instance, an input reconstruction task employs mean-square error loss, while another task, concerning the detection of odd segments within a signal, uses negative log-likelihood; we discuss these in detail in the subsequent section. At a high-level, we utilize these objectives as necessary proxies for sensory representation learning without focusing on how well the model performs on them but on an end-task. After pre-training, $F_\\theta$ captures a joint embedding space of the inputs, and thus it can be utilized either as a feature extractor or as initialization for rapidly learning to solve other problems. Finally, it is important to note that proxy tasks cannot be applied arbitrarily to any type of input and tasks like blend detection can only be used when modalities are related to each other, e.g. as accelerometer and gyroscope.\n\n\\subsection{Self-Supervised Tasks}\n\\label{sec:sslt}\nIn order to achieve self-supervised learning of disentangled semantic representations from unannotated sensory data, we develop eight surrogate tasks for the network. To solve these tasks, we assume $\\textbf{u} = \\{u_1, u_2, \\ldots, u_l\\}$ and $\\textbf{v} = \\{v_1, v_2, \\ldots, v_l\\}$ denote multi-channel signals of length $l$ from different modalities (e.g. accelerometer and gyroscope). Let $z_u = e_u(\\textbf{u})$ and $z_v = e_v(\\textbf{v})$ be the low-dimensional embeddings computed from the corresponding input signals with respective encoders. Likewise, $z_s = e_s(e_u(\\textbf{u}), e_v(\\textbf{v}))$ provides a shared embedding of the inputs through fusion that may capture more abstract features. A high-level illustration of the self-supervised learning procedure is shown in Figure~\\ref{fig:overview}. A self-supervised data generation module produces annotated input from unlabeled multisensor data for learning $F_\\theta$. We utilize this formulation to define the self-supervised objectives in the following subsections.\n\n\n\\subsection*{Blend Detection}\nTo take advantage of the multisensor signals, we define an auxiliary task of detecting input blending as a multi-class classification problem. Given an unlabeled input batch $B = \\cup_{i=1}^{|B|} \\{(\\textbf{u}, \\textbf{v})\\}_i$, we generate three types of instances. First, we keep the original samples as belonging to a class $c_a$. Second, we perform a weighted blending of an instance from one modality with another randomly selected example from a different modality as class $c_b$. Third and last, the instances of the same modalities are blended to have instances for a class $c_c$. The blending weight $\\mu$ is sampled from a uniform distribution, i.e. $\\mu \\sim \\mathcal{U}(0, 1)$. The network is trained with a negative log-likelihood loss $\\mathcal{L}_{NL}$ for learning to differentiate between examples of blended and clean classes ($y_k$) on the entire training set $\\mathcal{D}_{train}$:\n\n\n\\begin{align*} \n\\mathcal{L}_{NL} = - \\frac{1}{K} \\sum_{k=1}^{K} y_{k} \\times \\log(F_\\theta(\\textbf{u},\\textbf{v}))\n\\end{align*}\n\n\n\\subsection*{Fusion Magnitude Prediction}\nWe create a variant of the earlier defined task that uses a similar data generation strategy but differs fundamentally in terms of the objective it optimizes. Here, we task the network with predicting the magnitude $\\mu$, which defines the blending (or weighting) factor of the signals. We assign $\\mu = 0$ to the clean examples, while assigning weight $\\mu \\sim \\mathcal{U}(0, 1)$ to the blended examples, as earlier. In this case, a natural choice is to adopt mean-square loss as learning objective. However, we experimentally discovered that utilizing binary cross-entropy with a logistic function in the network's output layer results in better generalization; thus the network is trained to minimize the following loss $\\mathcal{L}_{BCE}$ for each input modality: \n\n\\begin{align*} \n\\mathcal{L}_{BCE} = -(y \\times \\log(F_\\theta(\\textbf{u}, \\textbf{v})) + (1 - y) \\times \\log(1 - F_\\theta(\\textbf{u}, \\textbf{v})))\n\\end{align*}\n\n\n\\subsection*{Feature Prediction from Masked Window}\nIt is observed that networks which try to reconstruct every bit of the input waste capacity on modeling low-level details~\\cite{oord2018representation}. Instead, in this auxiliary task we ask the network to approximate summary statistics of a masked temporal segment within a signal. To generate the data, we randomly sample the segment length $s_l \\sim \\mathcal{U}(n_{low}, n_{high})$ and starting point $s_p \\sim \\mathcal{U}(0, l - s_l)$. From the selected subsequence, we extract $8$ basic features: \\texttt{mean}, \\texttt{standard deviation}, \\texttt{maximum}, \\texttt{minimum}, \\texttt{median}, \\texttt{kurtosis}, \\texttt{skewness}, \\texttt{number of peaks}; and then mask the segment with zeros. The multi-head network is trained with Huber loss $\\mathcal{L}_{HL}$ to predict statistics of a missing sequence as:\n\n\n\\begin{align*} \n\\mathcal{L}_{HL}=\\begin{cases}\n\t\\frac{1}{2} \\times o^2, & \\text{if $|o| \\leq \\delta$}\\\\\n \\delta \\times (|o| - \\frac{\\delta}{2}), & \\text{otherwise $|o| > \\delta$} \n \\end{cases}, \\text{where $o = F_\\theta(\\textbf{u}, \\textbf{v}) - y$} \n\\end{align*}\n\n\n\\subsection*{Transformation Recognition}\nThe signal transformation recognition is presented in~\\cite{saeed2019multi} as an auxiliary task, where it is posed as a set of binary classification problems solved with a multi-task network to determine whether a signal is a transformed version or not. Here, we simplify the problem formulation and treat the task as multi-class classification, to learn a network that can directly recognize the applied transformation on an input from one out of $K$ classes. The benefits of our formulation are that it does not require specifying weights for task-specific losses and the network can be efficiently optimized with categorical cross-entropy objective $\\mathcal{L}_{NL}$. Another key difference is that we address the problem of learning from multimodal data as opposed to a unimodal signal. To produce task-specific data, we generate transformed versions of each instance utilizing eight transformation functions: \\texttt{permutation} , \\texttt{channel shuffle}, \\texttt{timewarp}, \\texttt{scale}, \\texttt{noise}, \\texttt{rotation}, \\texttt{flip}, \\texttt{negation}), and an identity operation while assigning the function type as the corresponding class. During network training, we feed a batch of data consisting of examples for all the classes (inclusive of originals) and optimize a separate loss function for each input signal. \n\n\\subsection*{Temporal Shift Prediction}\nThis conceptually straight-forward task consists of estimating the number of steps by which the samples are circularly-shifted in their temporal dimension. We pose this problem such that it can be treated either as a classification or as a regression task. We define a range of shift intervals, depending on the input resolution. For instance, in the activity recognition task, the considered ranges are: $[(0, 5), (6, 10), (11, 20), (21, 50), (51, 100), (101, 200), (201, 300)]$. For producing shifted inputs, we first select a pair at random from the defined ranges, and second we sample a shifting factor within the defined boundary of the selected range. Last, we temporally shift the values of an input segment with the sampled factor. The network can be trained to predict either the range index (treating each entry as a class, with $7$ classes in total) or regress the factor. In our experiments, we notice that solving it as a regression problem results in better generalization on the end-task. Thus, the network is trained by minimizing mean-square error loss $\\mathcal{L}_{MSE}$ for each sensing modality: \n\n\n\\begin{align*} \n\\mathcal{L}_{MSE} = \\| F_\\theta(\\textbf{u},\\textbf{v}) - y \\|\n\\end{align*}\n\n\n\\subsection*{Modality Denoising}\nThis task's objective is to decompose a signal for obtaining a clean target through input reconstruction, i.e. isolating the mixed noise. It is similar in spirit to source separation in audio~\\cite{luo2019convtasnet, zeghidour2020wavesplit} and a denoising autoencoder~\\cite{vincent2008extracting}. The fundamental intuition here is that if the network is tasked to reconstruct the original input from corrupted or mixed modality signals, then it forces the network to identify core signal characteristics while learning usable representations in the process. In our case, instead of mixing arbitrary noise, we exploit the availability of multisensor data to generate instances that might be of sufficient difficulty for the network to denoise. Specifically, we utilize a \\texttt{weighted blending} operation $\\mathbf{u} \\times (1 - \\mu) + \\mathbf{v} \\times \\mu$ to mix instances of different modalities, i.e. we produce samples through combining the clean instances of accelerometer with gyroscope and vice versa while keeping the original samples as additional data points. The encoder-decoder network is trained end-to-end to minimize the mean-square error loss $\\mathcal{L}_{MSE}$ between ground truth and corrupted input pairs. \n\n\\subsection*{Odd Segment Recognition}\nThe goal of odd segment recognition is to identify the unrelated subsegment that does not belong to the input under consideration, where the rest of the sequences are in the correct order. The high-level idea behind the task is that if the network can spot artifacts in the signal, it should then also learn about useful input features. Similar ideas have been employed in video representation learning~\\cite{fernando2017self} to spot invalid frame detection in video. There are multiple ways to generate examples with odd subsegments; we approach it as an input consisting of an irregular segment of fixed length $s_o$ that is selected randomly from a different input modality. To generate proxy task examples, we begin with splitting an instance into equal-length sequences (e.g. of length $100$). Then, $2$ sequences from different modalities are randomly selected, that are either directly swapped or blended before applying a substitution operation. The index of the interchanged slices is used as the class, where valid inputs are assigned a distinct class. The network is asked to predict an index $id$ of the odd sequence in each input modality. For this task, we minimize a categorical cross-entropy loss $\\mathcal{L}_{NL}$ to train a multi-head network.\n\n\n\n\\subsection*{Metric Learning with Triplet Loss}\nAs we are interested in learning from multisensor data, we take advantage of multiple input modalities to formulate a metric learning objective. For this purpose, we utilize a symmetric triplet loss~\\cite{zhang2016tracking}, which encourages the representations of similar inputs but different modalities to be closer, while the representations of dissimilar inputs to be further apart. To optimize the specified loss, we need to generate input triplets consisting of an anchor, which can be an original instance, a positive sample that should be related (i.e. provides a complementary view of the input) to the anchor, and a negative sample which must be entirely different from the former pair. The loss then minimizes the distance between the anchor and the positive samples, while maximizing the distance of the negative samples from the anchor and the positive samples. For metric learning under this formulation, we generate the examples as follows: the actual instances are treated as anchors, and positive instances are generated by applying selected transformations at random~\\cite{saeed2019multi} on each anchor; whereas the negative instances are sampled from a different modality (i.e. for accelerometer, we treat samples from gyroscope as negatives). We then optimize $F_\\theta$ with triplet loss $\\mathcal{L}_{TL}$ to produce a smaller distance on associated samples and a more considerable distance on unrelated ones: \n\n\n\\begin{align*} \n\\mathcal{L}_{TL} = \\max [0, \\ D(z_{a}, z_{p}) - \\frac{1}{2} \\times (D(z_{a}, z_{n}) + D(z_{p}, z_{n})) + \\alpha],\n\\end{align*}\n\n\\noindent where $z_{a}$, $z_{p}$, $z_{n}$ are the embeddings of anchor, positive and negative samples respectively, $\\alpha$ represents the distance margin, and $D$ denotes squared-euclidean distance. \n\n\\begin{algorithm}[htbp]\n\\caption{Sense and Learn}\n\\label{alg:sal}\n\\KwIn{Multisensory unlabeled data $\\mathcal{D}_{U}$ and labeled data $\\mathcal{D}_{L}$, auxiliary task $A_{t}$, number of iterations $I$, batch size $B$, L$2$ regularization rate $\\beta$}\n\\KwOut{Self-supervised pre-trained network $F$}\n\ninitialize a representation learning network $F$ with parameters $\\theta_{F}$\\\\\ninitialize a linear classifier $C$ with parameter $\\theta_{C}$ for a down-stream task\\\\\ninitialize self-labeling data generation procedure $G_{T}$ based on task $A_{t}$\\\\\ninitialize proxy-task and end-task loss functions $\\mathcal{L}_{T}$ and $\\mathcal{L}_{E}$, respectively\\\\\n\n\n\\For{iteration $i$ $\\in$ $\\{$ $1$, \\ $\\ldots$, \\ $I$ $\\}$ }\n{\n Randomly sample a mini-batch of $B$ instances from $\\mathcal{D}_{U}$ as $\\{x_1, x_2, \\ldots, x_b\\}$ \\\\\n Generate labeled (self-supervised) samples $\\{$$(x$, $y)_1$, $(x$, $y)_2$, $\\ldots$, and $(x$, $y)_b$$\\}$ with $G_{T}$\\\\ \n Update $\\theta_F$ by descending along its gradient \\\\\n $\\nabla_{\\theta_{F}} \\Big[\\frac{1}{b} \\sum_{i=1}^{B} \\mathcal{L}_{T}(F_{\\theta}(x_i), y_i) + \\beta \\left\\lVert \\theta\\right\\rVert^2 \\Big]$\n}\n\n\\For{iteration $i$ $\\in$ $\\{$ $1$, \\ $\\ldots$, \\ $I$ $\\}$ }\n{\n Randomly sample a mini-batch of $B$ labeled instances from $\\mathcal{D}_{L}$ as $\\{$$(x$, $y)_1$, $(x$, $y)_2$, $\\ldots$, and $(x$, $y)_b$$\\}$ \\\\\n Extract latent embeddings $\\textbf{z}$ from encoder $e$ within $F_{\\theta}(\\textbf{x})$\\\\\n Update $\\theta_C$ by descending along its gradient \\\\\n $\\nabla_{\\theta_{c}} \\Big[\\frac{1}{b} \\sum_{i=1}^{B} \\mathcal{L}_{E}(C_{\\theta}(z_i), y_i) \\Big]$\n}\n\nWe use Adam optimizer~\\cite{kingma2014adam} for computing gradient-based parameter updates in all the experiments.\n\\end{algorithm}\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=11cm]{Figures\/architecture.pdf}\n\\caption{A multistream neural network architecture for learning representations from multiple sensory inputs. A distinct stream (with an identical architecture) is used for each modality, as depicted on the right.}\n\\label{fig:architecture}\n\\end{figure}\n\n\\subsection{Network Architecture Design}\n\nWe implement the learning network $F_{\\theta}$ as a multi-stream temporal convolutional model (TCN). The part of the motivation to use TCN came from~\\cite{bai2018empirical} where it has been shown that convolutional networks perform remarkably well on sequence modeling tasks. Likewise, they have a low footprint for training and inference as compared to other methods and can be pruned easily to further compress the network~\\cite{molchanov2016pruning}. Our model consists of a distinct learning stream for each input to extract modality-specific features. The subnetworks share the same architecture, which is followed by a modality-agnostic network that fuses and learns a shared representation from the multimodal input. Jointly, we refer to these modules as encoder $e$, which is embedded within $F_{\\theta}$. Importantly, we add an extra block connected to $e$, which is discarded after self-supervised pre-training. The intuition behind this strategy is that the model's last layers capture features that are primarily task-specific and do not generalize well on the end-task of interest. Therefore, the additional layers allow the base encoder to capture more generic features, while solving the auxiliary tasks. Figure~\\ref{fig:architecture} illustrates the architecture design by precisely highlighting these main building blocks. The modality-specific encoder consists of three $1$D convolutional layers with $32$, $64$, and $96$ feature maps and a kernel size of $24$, $16$, and $8$, respectively. The max-pooling layer, with a pooling size of $4$ and a stride of $2$, is added after the initial convolutional layers. A dropout is used with a rate of $0.1$ at the end of the block. The shared encoder consists of a single convolutional layer with $128$ feature maps and a kernel size of $4$, which takes concatenated features as input. The supplementary layers in the pre-training block consist of a convolutional layer with $64$ feature maps and a kernel size of $4$ and a dense layer having $512$ hidden units. Importantly, a separate output layer is used for each input modality for all the surrogate tasks except `sensor blend,' which, based on its formulation, does not require this. Likewise, we use global pooling as the last layer in the representation learning network that aggregates discriminative features. L$2$ regularization with a rate of $0.0001$ is applied to the weights of all the layers to avoid overfitting. Moreover, we employ SELU as non-linearity except on the output layer; the network is trained with a learning rate of $0.0001$ for a maximum of $30$ epochs unless stated otherwise. \n\n\nWe utilize a fixed network architecture for all the considered tasks (both auxiliary and down-stream), the intuition behind this choice being threefold. Firstly, we want to minimize the architectural differences to discover the true potential of self-supervision, i.e. it can be used with minimal effort on architecture tuning to extract semantic representations across diverse datasets. Secondly, our aim is to show that self-supervision has a huge prospect to be utilized for on-device learning. Having a smaller architecture and given the annotation-free nature of the proposed approach opens several exciting avenues in learning and inference with devices having limited processing capabilities. Lastly, our multi-modal architectural specification provides the flexibility to incorporate other modalities effortlessly. Furthermore, we highlight that in this work our focus is on individual task proposal and evaluation, but the framework can be used for jointly solving proxy tasks (i.e. in multi-task learning setting) as they share the same architecture, but differ fundamentally in terms of the loss function being optimized.\n\nThe high-level description of the learning procedure is summarized in Algorithm~\\ref{alg:sal}. Given an unlabeled data $\\mathcal{D}_{U}$ and a specified auxiliary task $A_{t}$, we optimize $F_{\\theta}$ with task-specific data that is generated on-the-fly, as described in the preceding section. Once pre-training converges, the layers specific to self-supervised learning are discarded, and the encoder $e$ is saved. Then, the second round of training on a down-stream task of interest begins with labeled data $\\mathcal{D}_{L}$. Depending on the evaluation criteria, the following can be done: a) the network is either kept frozen and used as a generic feature extractor for learning a linear classifier\\footnote{logistic regression}, b) the modality-agnostic encoder $e_{s}$ is fine-tuned during learning an end-task, or c) the self-supervised network is used as initialization for rapidly solving the final-task, e.g., fine-tuning a model with little labeled data. The \\textit{encoder} network shown in Figure~\\ref{fig:architecture} represents the module that is kept frozen, while depending on the learning setting the \\textit{shared} layers are further fine-tuned.\n\n\n\\section{Experiments}\n\\label{experiments}\nWe perform a comprehensive evaluation of our framework on four different application domains: a) activity recognition, b) sleep-stage scoring, c) stress detection, and d) WiFi sensing. For every area, we train the self-supervised networks with each proposed task and determine the quality of the learned representation with either a linear classifier or by fine-tuning with few labeled instances. Furthermore, we also examine the knowledge transferability between related datasets. In the following, we describe the utilized datasets, pre-processing steps, and assessment strategy, including the baselines. \n\n\\subsection{Datasets}\n\\label{sec:datasets}\nWe assess the performance of \\textit{Sense and Learn} on $8$ publicly available multisensor datasets from diverse domains. The brief description of each utilized data source is provided below, with Table~\\ref{tab:dataset} summarizing their major characteristics. \n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Key characteristics of the datasets used in the experiements. The relative class distribution of each dataset is given in Figure~\\ref{fig:cd} of appendix~\\ref{appendix:class_distribution}.}\n\\label{tab:dataset}\n\\small\n\\begin{tabular}{ccccc}\n\\hline\n\\textbf{Dataset} & \\textbf{\\#Subjects} & \\textbf{\\#Classes} & \\textbf{Task} & \\textbf{Inputs} \\\\ \\hline\nHHAR & 9 & 6 & \\multirow{5}{*}{\\begin{tabular}[c]{@{}c@{}}Activity\/Context\\\\ Recognition\\end{tabular}} & \\multirow{5}{*}{\\begin{tabular}[c]{@{}c@{}}Accelerometer \\\\ \\&\\\\ Gyroscope\\end{tabular}} \\\\\nMobiAct & 66 & 11 & & \\\\\nMotionSense & 24 & 6 & & \\\\\nUCI HAR & 30 & 6 & & \\\\\nHAPT & 30 & 12 & & \\\\ \\hline\nSleep-EDF & 20 & 5 & Sleep Stage Scoring & EEG \\& EOG \\\\ \\hline\nMIT DriverDb & 17 & 2 & Stress Detection & \\begin{tabular}[c]{@{}c@{}}Heart Rate \\& \\\\ Skin Conductance\\end{tabular} \\\\ \\hline\nWiFi CSI & 6 & 7 & \\begin{tabular}[c]{@{}c@{}}Activity (Behavior)\\\\ Recognition\\end{tabular} & CSI Amplitude \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\subsubsection*{Activity Recognition}\nFor smartphone-based human activity recognition, we select $5$ datasets containing accelerometer and gyroscope signals, namely: HHAR, MobiAct, UCI HAR, MotionSense, and HAPT. The Heterogeneity Human Activity Recognition (HHAR) dataset~\\cite{stisen2015smart} is collected from $9$ participants, each performing $6$ basic activities (i.e. sitting, standing, walking, stairs-up, stairs-down and biking) for $5$ minutes. A broad range of devices is used for the systematic analysis of sensor, device, and workload-specific heterogeneities across manufacturers. Specifically, each user carried $8$ smartphones on different body locations that were selected from a pool of $36$ devices of different models and brands. Likewise, the sampling rate differs considerably across phones with values ranging between 50Hz-200Hz. The MotionSense dataset~\\cite{malekzadeh2018protecting} is recorded with the aim of inferring personal attributes, such as physical and demographics, in addition to the activities. The iPhone$6$s is placed in the users' front pocket during the collection phase, while they performed $15$ trials of $6$ activities in the same experimental setting. In total, $24$ subjects of varying height, weight, age and gender performed the following 6 activities: walking, jogging, sitting, standing, downstairs and upstairs. We use this data only for the detection of activities without concerning with the identification of other attributes. UCI HAR~\\cite{anguita2013public} comprises data obtained from $30$ subjects with waist-mounted Samsung Galaxy S$2$ devices sampling at $50$Hz. Each participant completed $6$ activities of daily living (i.e. standing, sitting, lying down, walking, downstairs and upstairs) during $2$ trials with a $5$ seconds resting condition in-between. The MobiAct~\\cite{Vavoulas2014TheMD} contains inertial sensors data collected from $66$ participants with Samsung Galaxy S$3$ phones through more than $3200$ trails. The subjects freely placed the device in their trouser's pocket to mimic real-life phone usage and placement. We utilize the data of $61$ subjects for whom data of any of the following $11$ activity classes is available: walking, jogging, jumping, upstairs, downstairs, sitting, stand to sit, sit to stand, sitting on a chair, car step-in and car step-out. The Human Activities and Postural Transitions (HAPT) dataset~\\cite{reyes2016transition} is collected from a group of $30$ volunteers with Samsung Galaxy S$2$ devices sampling at $50$Hz. The phone was mounted on the waist of each subject who completed $3$ dynamic activities (walking, upstairs, downstairs), $3$ static posture activities (lying, sitting, standing), and $6$ postural transitions (sit-to-lie, lie-to-sit, stand-to-sit, sit-to-stand, stand-to-lie, and lie-to-stand); resulting in $12$ classes. \n\n\\subsubsection*{Sleep Stage Scoring}\nWe use the PhysioNet Sleep-EDF\\footnote{version 1} dataset~\\cite{kemp2000analysis, goldberger2000physiobank} consisting of $61$ polysomnograms (PSGs) from $20$ subjects. It is comprised of participants from $2$ different studies: a) effect of age on sleep and b) Temazepam effect on sleep. We use the $2$ whole-night PSGs sleep recording sampled at $100$Hz from the former study. Each record contains $2$ electroencephalogram (EEG) signals from Fpz-Cz and Pz-Oz electrode locations, electrooculography (EOG), electromyography (EMG) and event markers. Some instances also have oro-nasal respiration and body temperature. The hypnograms (30-seconds1 epochs) were manually annotated by sleep expert with one of the $8$ sleep classes (Wake, N$1$, N$2$, N$3$, N$4$, Rapid Eye Movement, Movement, Unknown), based on the R\\&K standard. We utilize EEG (Fpz-Cz) and EOG signals in our evaluation. Following previous work~\\cite{supratak2017deepsleepnet}, we merged N$3$ and N$4$ into a single class N$3$ and discarded Movement and unscored samples, to have $5$ sleep stages.\n\n\\subsubsection*{Stress Detection}\nFor physiological stress recognition, we utilize the MIT DriverDb dataset~\\cite{healey2005detecting, goldberger2000physiobank}, which is collected during a real-world driving experiment in a city, on a highway and in a resting condition. The publicly-available version on PhysioNet consists of $17$ drives out of $24$, each lasted between $1$-$1.5$ hours. The following physiological signals are recorded: EMG, electrocardiography (ECG), galvanic skin response (GSR) from hand and foot, heart rate (HR; derived from ECG), and breathing rate. The signals were originally sampled at different rates but downsampled to $15.5$Hz. The `marker' signal provided in the dataset is used to derive the binary ground truth, indicating a change-of-drive (i.e. resting, city or highway driving), which is found to be correlated with distress level through post-driving video analysis by experts~\\cite{healey2005detecting}. We use the following $10$ drives $04$, $05$, $06$, $07$, $08$, $09$, $10$, $11$, $12$ and $16$ in our experiments, which have HR and GSR (from hand), given collection of other signals in real-life is quite problematic.\n\n\\subsubsection*{WiFi Sensing}\nDevice-free context recognition with WiFi is an emerging area of research. To show the robustness of our self-supervised methods on this task, particularly on a unimodal signal, we utilize the WiFi channel state information (CSI) dataset~\\cite{yousefi2017survey} for activity recognition. This dataset is collected in a controlled office environment, where the transmitting (router) and receiving (Intel 5300 NIC) devices were 3m apart, and the channel state information (CSI) was recorded at $1$kHz. The $6$ subjects performed $20$ trials for each of the following $7$ activities: lying down, falling, walking, running, sitting down, standing up and picking something up. The ground truth was obtained from videos recorded during the data collection process, and CSI amplitude is used for learning a model.\n\n\\subsection{Pre-processing and Assessment Strategy}\nTo prepare the data for sequence modeling with a temporal convolutional network, we utilize a sliding window approach to segment the signals into fixed-sized inputs. In the case of the activity recognition task, we choose a window size of $400$ samples with a $50\\%$ overlap, except for the HAPT dataset where a segment size of $200$ samples is used, due to the short duration of posture-transition activities. We found these windows sizes to be optimal based on earlier experiments, as each activity dataset has a different sampling rate. We did not perform resampling as the sampling rate differences among phones does not vary significantly and $1$D convolutional layers with wide kernel sizes learn to adapt to the specific characteristics of the input signal. However, if the sampling rate varies considerably it might be essential to do resampling. For Sleep-EDF, we applied minimal pre-processing based on existing work~\\cite{supratak2017deepsleepnet} to formulate the problem as a $5$-stage sleep classification and used the $30$ seconds epochs as model input. In the WiFi sensing task, we process the input same as the original work that open-sourced the data and utilize a downsampled CSI signal of $500$Hz as~\\cite{yousefi2017survey}, which corresponds to an input window of $1$ second. The heart rate and skin conductance signals from MIT DriverDb are processed to remove artifacts and these signals are mean normalized using the `mean' and `standard deviation' calculated from the baseline (or resting phase) of the data collection following~\\cite{saeed2017personalized} for each subject. We use a window size of $30$ seconds with $50\\%$ overlap to generate input segments for the model. We randomly split the datasets based on subjects into train and test sets withholding $70\\%$ users for training and the rest $30\\%$ for testing. We further divide the training set to obtain a validation set of size $20\\%$, which is used for hyper-parameter tuning and early stopping. Most importantly, we also perform $5$-fold cross-validation for thorough performance analysis whenever it is applicable. Furthermore, we z-normalize the samples with mean and standard deviation calculated from the training set. For self-supervision, we pre-train the models using only the training set, including for the transfer learning experiments. The self-labeled examples are generated for each task on-the-fly during the learning phase, as defined earlier in Section~\\ref{sec:sslt}. \n\nFor each recognition problem, we treat a fully-supervised model directly trained (in an end-to-end manner) with the annotated data of an end-task as a `baseline.' Likewise, we compare self-supervised tasks against pre-training with a standard autoencoder. As explained earlier, we assess the quality of the self-supervised representation (including in the transfer-learning setting) through training a linear classifier or fine-tuning the last convolutional layer of the encoder on the downstream tasks. For learning in the low-data regime, we use a self-supervised network as initialization to quickly learn a model with few labeled examples. In all the cases, we assess the network performance with a weighted version of F-score and Cohen's kappa (see appendix~\\ref{appendix:kappa_results}); as these metrics are robust to unbalanced class distributions while being sensitive to misclassifications. \n\n\\subsection{Results and Discussion}\n\n\n\\begin{table}[t]\n \\caption{Performance evaluation (weighted F-score) of self-supervised representations with a linear classifier. The unsupervised pre-trained networks achieve competitive performance with the fully-supervised networks. In WiFi-CSI sub-table, the entries with hyphen indicate auxiliary tasks that cannot be applied to unimodal signals. See Table~\\ref{tab:kappa_linear} in appendix~\\ref{appendix:kappa_results} for kappa scores.}\\label{tab:linear}\n \\centering\n \\subfloat{\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n Method & \\textbf{HHAR} & \\textbf{MobiAct} & \\textbf{MotionSense} & \\textbf{UCI HAR} \\\\ \\hline\n Fully Supervised & 0.794$\\pm$0.014 & 0.934$\\pm$0.005 & 0.952$\\pm$0.007 & 0.962$\\pm$0.006 \\\\\n Random Init. & 0.218$\\pm$0.062 & 0.383$\\pm$0.109 & 0.246$\\pm$0.090 & 0.221$\\pm$0.079 \\\\\n Autoencoder & 0.777$\\pm$0.003 & 0.726$\\pm$0.001 & 0.675$\\pm$0.019 & 0.782$\\pm$0.042 \\\\ \\hline\n Sensor Blend & 0.823$\\pm$0.006 & \\cellcolor[gray]{0.93}0.912$\\pm$0.001 & \\cellcolor[gray]{0.93}0.911$\\pm$0.009 & 0.902$\\pm$0.010 \\\\\n Fusion Magnitude & \\cellcolor[gray]{0.93}0.848$\\pm$0.005 & 0.905$\\pm$0.001 & \\cellcolor[gray]{0.93} 0.925$\\pm$0.011 & 0.895$\\pm$0.010 \\\\\n Feature Prediction & 0.817$\\pm$0.005 & 0.902$\\pm$0.001 & 0.849$\\pm$0.010 & 0.899$\\pm$0.010 \\\\\n Transformations & \\cellcolor[gray]{0.93} 0.854$\\pm$0.005 & \\cellcolor[gray]{0.93}0.911$\\pm$0.002 & 0.869$\\pm$0.013 & \\cellcolor[gray]{0.93}0.906$\\pm$0.011 \\\\\n Temporal Shift & 0.834$\\pm$0.008 & 0.909$\\pm$0.003 & 0.851$\\pm$0.016 & 0.747$\\pm$0.027 \\\\\n Modality Denoise. & 0.807$\\pm$0.006 & 0.817$\\pm$0.004 & 0.675$\\pm$0.019 & 0.798$\\pm$0.035 \\\\\n Odd Segment & 0.835$\\pm$0.006 & 0.901$\\pm$0.001 & 0.869$\\pm$0.012 & 0.888$\\pm$0.010 \\\\\n Tripet Loss & 0.773$\\pm$0.005 & 0.841$\\pm$0.002 & 0.910$\\pm$0.008 & \\cellcolor[gray]{0.93}0.905$\\pm$0.011 \\\\ \\hline\n \\end{tabular}\n } \\hspace{0.01cm}\n \\subfloat{\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n Method & \\textbf{HAPT} & \\textbf{Sleep-EDF} & \\textbf{MIT DriverDb} & \\textbf{WiFi CSI} \\\\ \\hline\n Fully Supervised & 0.899$\\pm$0.009 & 0.825$\\pm$0.005 & 0.824$\\pm$0.029 & 0.964$\\pm$0.007 \\\\\n Random Init. & 0.119$\\pm$0.041 & 0.149$\\pm$0.127 & 0.321$\\pm$0.198 & 0.153$\\pm$0.04 \\\\ \n Autoencoder & 0.669$\\pm$0.003 & 0.679$\\pm$0.012 & 0.876$\\pm$0.002 & 0.767$\\pm$0.005 \\\\ \\hline\n Sensor Blend & 0.818$\\pm$0.006 & 0.779$\\pm$0.004 & 0.890$\\pm$0.002 & - \\\\\n Fusion Magnitude & 0.815$\\pm$0.004 & \\cellcolor[gray]{0.93}0.782$\\pm$0.006 & 0.892$\\pm$0.004 & - \\\\\n Feature Prediction & \\cellcolor[gray]{0.93}0.822$\\pm$0.002 & 0.671$\\pm$0.022 & 0.866$\\pm$0.000 & \\cellcolor[gray]{0.93}0.837$\\pm$0.005 \\\\\n Transformations & \\cellcolor[gray]{0.93}0.841$\\pm$0.003 & 0.778$\\pm$0.006 & \\cellcolor[gray]{0.93}0.908$\\pm$0.001 & 0.768$\\pm$0.007 \\\\\n Temporal Shift & 0.782$\\pm$0.004 & 0.707$\\pm$0.012 & 0.883$\\pm$0.005 & 0.731$\\pm$0.011 \\\\\n Modality Denoise. & 0.738$\\pm$0.002 & \\cellcolor[gray]{0.93}0.784$\\pm$0.002 & 0.902$\\pm$0.001 & - \\\\\n Odd Segment & 0.790$\\pm$0.003 & 0.772$\\pm$0.003 & \\cellcolor[gray]{0.93}0.885$\\pm$0.002 & 0.774$\\pm$0.008 \\\\\n Tripet Loss & 0.815$\\pm$0.002 & 0.775$\\pm$0.003 & 0.891$\\pm$0.001 & 0.749$\\pm$0.009 \\\\ \\hline\n \\end{tabular}\n }\n\\end{table}\n\n\\subsubsection*{Linear separability and effects of fine-tuning the shared encoder}\n\\label{subsec:linear}\nFor assessing the quality of the self-supervised embeddings, we conduct experiments with a linear classifier on the end-tasks. Linear separability is a standard way of measuring the power of self-supervised-learned features in the literature~\\cite{oord2018representation, tagliasacchi2019self, gidaris2018unsupervised}, i.e. if the representations disentangle factors of variations in the input, then it becomes easier to solve subsequent tasks. Here, we train a linear classifier (i.e. logistic regression) $10$-times on top of a frozen network (pre-trained with self-supervision) using annotated data of the downstream task. Table~\\ref{tab:linear} summarizes the results on eight benchmark datasets from four application domains. We compare the performance against a fully-supervised network that is trained in an end-to-end manner (directly with annotated data). We also consider unsupervised pre-training with a standard autoencoder to analyze the improvements of self-supervision. Likewise, a linear model is also trained with random features (i.e. from a randomly initialized frozen network) to estimate its learning capacity. On the activity recognition problem, the self-supervised features achieve very close results on multiple benchmarks to training an entire network with annotated instances. On the HHAR dataset, the transformation and fusion magnitude prediction tasks improve the F-score by $7$ points. On other datasets with a large number of classes, such as HAPT and MobiAct, our simple proxy tasks learn features that are generalizable to end-tasks. In the case of sleep stage scoring, linear layers trained with features from the modality denoising and the fusion magnitude tasks achieve a kappa of $0.70$, which is impressive given that the representations are learned from completely unlabeled data. Similarly, in a stress classification problem, the self-supervised networks outperform a fully-supervised model with a large margin. The transformations and modality denoising tasks achieve kappa scores of $0.80$ and $0.79$, respectively. We believe it is because pre-training results in generic features, whereas a model trained directly on the end-task suffers from overfitting. Lastly, we evaluate on the device-free sensing problem using the amplitude of WiFi CSI. Although we designed the auxiliary tasks for multisensorinput, we find a subset of these to be applicable for self-supervision with a unimodal input. We achieve good results with self-supervised features even though the dataset size is relatively small, and input is noisy, complex and high-dimensional. The linear layer trained on top of the feature-prediction task representations achieves an F-score of $83\\%$ compared to the end-to-end training F-score of $96\\%$.\n\n\n\\begin{table}[htbp]\n \\caption{Improvement in recognition rate (weighted F-score) by fine-tuning the shared layers of the encoder while training on the end-task. We observe a significant increase in performance across datasets with self-supervised networks, either surpassing or achieving results on-par with the baseline. See Table~\\ref{tab:kappa_ft} in appendix~\\ref{appendix:kappa_results} for kappa scores.}\\label{tab:ft}\n \\centering\n \\subfloat{\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n Method & \\textbf{HHAR} & \\textbf{MobiAct} & \\textbf{MotionSense} & \\textbf{UCI HAR} \\\\ \\hline\n Fully Supervised & 0.794$\\pm$0.014 & 0.934$\\pm$0.005 & 0.952$\\pm$0.007 & 0.961$\\pm$0.008 \\\\\n Random Init. & 0.218$\\pm$0.062 & 0.383$\\pm$0.109 & 0.246$\\pm$0.090 & 0.221$\\pm$0.079 \\\\\n Autoencoder & 0.835$\\pm$0.003 & 0.927$\\pm$0.003 & 0.938$\\pm$0.002 & 0.943$\\pm$0.004 \\\\ \\hline\n Sensor Blend & \\cellcolor[gray]{0.93}0.841$\\pm$0.009 & \\cellcolor[gray]{0.93}0.943$\\pm$0.004 & 0.937$\\pm$0.004 & \\cellcolor[gray]{0.93}0.956$\\pm$0.003 \\\\\n Fusion Magnitude & 0.831$\\pm$0.006 & 0.938$\\pm$0.005 & 0.945$\\pm$0.002 & 0.946$\\pm$0.002 \\\\\n Feature Prediction & \\cellcolor[gray]{0.93}0.840$\\pm$0.007 & 0.937$\\pm$0.002 & 0.951$\\pm$0.003 & 0.943$\\pm$0.003 \\\\\n Transformations & 0.828$\\pm$0.006 & \\cellcolor[gray]{0.93}0.946$\\pm$0.004 & \\cellcolor[gray]{0.93}0.951$\\pm$0.005 & \\cellcolor[gray]{0.93}0.954$\\pm$0.006 \\\\\n Temporal Shift & 0.831$\\pm$0.008 & 0.939$\\pm$0.002 & 0.934$\\pm$0.006 & 0.909$\\pm$0.008 \\\\\n Modality Denoise. & 0.840$\\pm$0.003 & 0.938$\\pm$0.002 & 0.928$\\pm$0.006 & 0.941$\\pm$0.001 \\\\\n Odd Segment & 0.826$\\pm$0.003 & 0.938$\\pm$0.005 & 0.935$\\pm$0.006 & 0.953$\\pm$0.003 \\\\\n Tripet Loss & 0.835$\\pm$0.013 & 0.912$\\pm$0.006 & \\cellcolor[gray]{0.93}0.955$\\pm$0.003 & 0.950$\\pm$0.002 \\\\ \\hline\n \\end{tabular} \n } \\hspace{0.02cm}\n \\subfloat{\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n Method & \\textbf{HAPT} & \\textbf{Sleep-EDF} & \\textbf{MIT DriverDb} & \\textbf{WiFi CSI} \\\\ \\hline\n Fully Supervised & 0.899$\\pm$0.009 & 0.825$\\pm$0.005 & 0.824$\\pm$0.029 & 0.964$\\pm$0.007 \\\\\n Random Init. & 0.119$\\pm$0.041 & 0.149$\\pm$0.127 & 0.321$\\pm$0.198 & 0.153$\\pm$0.048 \\\\\n Autoencoder & 0.881$\\pm$0.002 & 0.805$\\pm$0.008 & 0.877$\\pm$0.002 & \\cellcolor[gray]{0.93}0.898$\\pm$0.025 \\\\ \\hline\n Sensor Blend & 0.895$\\pm$0.003 & 0.809$\\pm$0.003 & 0.881$\\pm$0.014 & - \\\\\n Fusion Magnitude & 0.898$\\pm$0.002 & 0.813$\\pm$0.003 & 0.882$\\pm$0.011 & - \\\\\n Feature Prediction & 0.893$\\pm$0.003 & 0.748$\\pm$0.006 & 0.859$\\pm$0.003 & 0.832$\\pm$0.037 \\\\\n Transformations & \\cellcolor[gray]{0.93}0.898$\\pm$0.002 & \\cellcolor[gray]{0.93}0.822$\\pm$0.005 & \\cellcolor[gray]{0.93}0.890$\\pm$0.005 & 0.823$\\pm$0.028 \\\\\n Temporal Shift & 0.876$\\pm$0.007 & 0.779$\\pm$0.005 & 0.883$\\pm$0.005 & 0.736$\\pm$0.063 \\\\\n Modality Denoise. & 0.885$\\pm$0.003 & \\cellcolor[gray]{0.93}0.819$\\pm$0.002 & \\cellcolor[gray]{0.93}0.889$\\pm$0.001 & - \\\\\n Odd Segment & \\cellcolor[gray]{0.93}0.899$\\pm$0.003 & 0.804$\\pm$0.003 & 0.853$\\pm$0.023 & \\cellcolor[gray]{0.93}0.860$\\pm$0.030 \\\\\n Tripet Loss & 0.887$\\pm$0.005 & 0.805$\\pm$0.003 & 0.884$\\pm$0.002 & 0.755$\\pm$0.022 \\\\ \\hline\n \\end{tabular}\n } \n\\end{table}\n\nIn Table~\\ref{tab:ft}, we notice a substantial improvement on the downstream tasks if the last convolutional layer of the encoder (see Figure~\\ref{fig:architecture}) is fine-tuned while training the linear classifier. Comparing with the results given in Table~\\ref{tab:linear}, it can be seen that the recognition rate of the models improved significantly, achieving similar results as the fully-supervised baselines; while features learned by input reconstruction with an autoencoder scored low compared to our proposed surrogate tasks even after fine-tuning, except for the WiFi sensing task. On the MobiAct dataset, transformations and sensor blend tasks gain ~$2$ points improvement in kappa. Likewise, for MotionSense, HAPT and UCI HAR, we bridge the gap between fully-supervised and self-supervised models. Interestingly, fine-tuning did not help much with MIT DriverDb compared to training a linear classifier. These results agree with our intuition that training on an end-task directly in this case results in overfitting. \n\nIn summary, the evaluation with a linear classifier trained on top of a pre-trained (self-supervised) feature extractor highlights that the representations learned with auxiliary tasks are broadly useful and better than autoencoding-based approaches. It also confirms our hypothesis that general-purpose representations can be learned directly from raw input without any strongly (task-specific) labeled data. It is important to note we did not aim to surpass fully-supervised approaches in this setting. Supervised methods will be better because they have direct access to task-specific labels, while self-supervised objectives train a network without any foresight of the end-task. It can also be seen from the results of fine-tuning the encoder, as presented in Table~\\ref{tab:ft}, that the network performance matches the supervised methods or improves upon, when shared layers are further trained on the downstream tasks. Likewise, it might be possible to improve generalization of self-supervised models through pre-training on larger unlabeled datasets in a real-world setting.\n\n\\subsubsection*{Impact on learning in low-data regime}\nWe next investigate the performance of our approach in a semi-supervised (or low-data) setting. For this purpose, we pre-train an encoder using unlabeled instances for each self-supervised task and utilize it as initialization for efficiently learning with few labeled instances on the end-task; for the end-task, we add a randomly-initialized dense layer with $1024$ hidden units before a linear output layer. The non-linear classifier is then learned and the encoder is fine-tuned with the specified number of instances per class. Specifically, for the defined auxiliary tasks and datasets, we use $5$ and $10$ examples for each category. We want to highlight that in a on-device learning case, a few labeled instances can be pooled from multiple users quite easily (e.g. $2$-$3$ examples per user) as compared to accumulating several hundred for learning fully-supervised models. Likewise, personalization can also be achieved through precisely asking for a few labels for targeted classes. In Figure~\\ref{fig:ld}, we provide an average weighted F-score of $10$ independent experiment runs, comparing training from scratch (FS) with the pre-training as an effective initialization for learning a robust classifier. We show that in contrast to the purely supervised approach, leveraging unlabeled data for learning network parameters improves the performance on the end-task. Specifically, our self-supervised models greatly improve the F-score in the low-data setting, in some cases achieving F-scores nearly as good as networks trained with the entire labeled data. Similarly, the self-supervised trained models perform better than the autoencoder, which shows that, despite the simplicity, our proposed auxiliary tasks force the network to learn highly-generalizable features. For each experiment run, we randomly sample the stated number of annotated instances and use these to train all the networks, including fully-supervised baselines. \n\nOn activity recognition, our methodology significantly improves the performance in low-data; for example, on the HHAR dataset with $5$ and $10$ instances, temporal shift and transformations tasks gain $4$ and $7$ points over the fully-supervised models' F-score of $0.60$ and $0.68$. respectively. Similarly, for MobiAct, pre-training with the temporal shift task helps achieve an F-score of $0.75$ ($5$ instances) and $0.82$ ($10$ instances), compared to $0.61$ and $0.73$ respectively for networks learned from scratch. Furthermore, we achieve identical improvements on UCI HAR, HAPT, and MotionSense with $5$ instances per class. The attained F-scores are $0.91$, $0.77$ and $0.83$ in contrast to $0.90$, $0.59$, and $0.77$ of fully-supervised models, respectively. Our method represents a $26$ points increase in F-score on the challenging problem of sleep stage scoring. Likewise, on physiological stress detection and device-free sensing problems, the benefit of pre-training with auxiliary tasks is further apparent, where the presented methods achieve $12$ points improvement in F-score over the baseline. These results suggest that self-supervision can greatly help with learning general-purpose representations that work well in the low-data regime. We also want to highlight that although the selection of an equal number of instances results in a balanced training set, we use the full test sets (as in earlier experiments) for evaluation, which could be imbalanced. Importantly, utilizing even bigger unlabeled datasets and combining weak-supervision methods can boost the quality of the learned representations. \n\nWe emphasize that the broader objective of self-supervised methods is to learn high-level semantic features that can be used to solve an array of downstream tasks with minimal labeled data. The evaluation of our presented auxiliary tasks clearly highlights the benefit of pre-training the network with unlabeled data to achieve better generalization on the tasks of interest, with very few labeled instances. To the best of our knowledge, we, for the first time, evaluate self-supervised methods in a semi-supervised setting for problems involving multisensor data as earlier work developed fully-supervised network architectures or used classical autoencoding-based approaches for pre-training, followed by network fine-tuning with the entire labeled data. Overall, our approach provides a base for further work in developing sensing techniques that can achieve on-device personalization and perform continual, and few-shot learning, as the presented framework considerably reduces the requirement of labeled data from human annotators to learn the end-task models.\n\n\\begin{figure}[htbp]\n\\centering\n\\subfloat[HHAR]{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/hhar_ld_summarized.pdf}} \n\\subfloat[MobiAct]{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/ma_ld_summarized.pdf}}\\\\\n\\subfloat[MotionSense]{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/ms_ld_summarized.pdf}} \n\\subfloat[UCI HAR]{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/uci_ld_summarized.pdf}} \\\\\n\\subfloat[HAPT]{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/hapt_ld_summarized.pdf}} \n\\subfloat[Sleep-EDF]{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/edf_ld_summarized.pdf}} \\\\\n\\subfloat[MIT DriverDb]{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/driverdb_ld_summarized.pdf}} \n\\subfloat[WiFi CSI]{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/csi_ld_summarized.pdf}}\\\\\n\\subfloat{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/ld_caption.pdf}}\n\\caption{Contribution of self-supervised pre-training for improving end-task performance with few labeled data. We utilize pre-trained self-supervised models as initialization for learning in a semi-supervised setting. The subplots provide the mean F-score of $10$ independent runs, where randomly selected instances are used to train the models. The bars with gray color represent the results of the networks trained only on the labeled instances while vertical black line shows results of fully-supervised model trained with entire data.}\n\\label{fig:ld}\n\\end{figure}\n\n\\subsubsection*{Effectiveness in a transfer learning setting}\n\n\\begin{figure}[t]\n\\subfloat[Autoencoder]{\\includegraphics[width=5.5cm]{Figures\/Transfer\/aerl_summarized.pdf}} \n\\subfloat[Sensor Blend]{\\includegraphics[width=4.2cm]{Figures\/Transfer\/sbrl_summarized.pdf}}\n\\subfloat[Fusion Magnitude]{\\includegraphics[width=4.2cm]{Figures\/Transfer\/swrl_summarized.pdf}} \\\\\n\\subfloat[Feature Prediction]{\\includegraphics[width=5.5cm]{Figures\/Transfer\/fprl_summarized.pdf}} \n\\subfloat[Transformations]{\\includegraphics[width=4.2cm]{Figures\/Transfer\/tprl_summarized.pdf}} \n\\subfloat[Temporal Shift]{\\includegraphics[width=4.2cm]{Figures\/Transfer\/sdrl_summarized.pdf}} \\\\\n\\subfloat[Modality Denoising]{\\includegraphics[width=5.5cm]{Figures\/Transfer\/msrl_summarized.pdf}}\n\\subfloat[Odd Segment]{\\includegraphics[width=4.2cm]{Figures\/Transfer\/osrl_summarized.pdf}}\n\\subfloat[Triplet Loss]{\\includegraphics[width=4.2cm]{Figures\/Transfer\/tlrl_summarized.pdf}}\n\\caption{Generalization of the self-supervised representations under transfer learning setting. We evaluate the features transferability on activity recognition task by pre-training networks with each auxiliary task for every dataset. For solving downstream tasks, we train a linear classifier on-top of the frozen feature extractor $10$ times, independently, and report the average F-score. The diagonal entries denote the numbers when the source and target datasets are the same with the x-axis and y-axis representing target and source datasets, respectively.}\n\\label{fig:tf}\n\\end{figure}\n\nIn a real-world learning setup, there is a high chance that we are interested in a different dataset and downstream task than the one originating from the unlabeled data accessible for pre-training. A broadly useful auxiliary task is thus one that produces generalizable representations that transfer well to other related end tasks. To examine the transferability property of the features learned with our proxy tasks, we evaluate their performance on the activity recognition datasets. To this end, we pre-train the feature extractor with each self-supervised objective (i.e. by discarding the semantic class labels) for all the five datasets (see section~\\ref{sec:datasets}) and investigate their performance through a) training a linear classifier with the entire target annotated data and b) fine-tuning it end-to-end with few labeled data (i.e. learning an activity classifier with $5$ and $10$ instances of each class from target dataset). Figure~\\ref{fig:tf} provides the results of the source-to-target transfer of self-supervised models trained with nine different auxiliary losses. The diagonal entries of each subplot represent the F-scores when the source and target datasets are the same. In comparison with autoencoder pre-training, features learned with our tasks transfer well between datasets. We observe that even leveraging smaller unlabeled datasets produces useful features, as with sensor-blend-task-learned features on UCI HAR scored $0.91$ F-score on the HHAR dataset. On the HAPT dataset of low input resolution (i.e. a segment size of $200$ samples) and complex postural activities, transfer learning improves the performance with approximately $8$ percentage points in F-score over pre-training on the same dataset. Importantly, our results are also competitive with the fully-supervised baselines on the respective datasets. \n\nWe further examine if the transferred self-supervised models are beneficial in learning from low-data; i.e. few labeled instances are available from the target data, but separate unannotated data is available for pre-training. We utilize the same network configuration as discussed earlier for low-data experiments and we fine-tune the model end-to-end. We randomly sample a specified number of instances and perform experiments $10$ times while utilizing the same instances for both types of networks (i.e. pre-trained and baseline) and report average F-score. In Figure~\\ref{fig:tf_ld}, we present the results of optimal auxiliary tasks for each combination of the source to target transfer, where gray-colored bars show a fully-supervised baseline. Our experiments show that the features learned from different but related datasets do transfer well and improve the recognition rate even when as little as $5$ examples per class are available. On the MobiAct dataset, our approach with HAPT as source data results in an F-score of $0.68$ and $0.78$ compared to the training from scratch F-score of $0.61$ and $0.73$, respectively. Similarly, with HAPT as a target, transferring from the UCI HAR using the sensor blend task, the F-score improved from $0.59$ to $0.68$ and $0.72$ to $0.78$. Interestingly, on UCI HAR and MotionSense, the performance attained with our approach is very close to the purely supervised models trained with entirely labeled data (see Table~\\ref{tab:linear}). \n\nLearning generalizable representations that can be reused for solving related tasks is an important property to have in a learning system. Our investigation of transferring unsupervised pre-trained models consistently highlights substantial performance improvements, indicating that the self-supervised features are broadly useful across different subjects, devices, environments and data collection protocols. In particular, the data efficiency enabled by our method in a low-data regime provides further evidence of semantic feature learning without merely over-fitting on the source dataset. It is also important to note that compared to earlier work which focuses on supervised transfer or joint-training on source and target datasets, we provide evaluation of unsupervised transfer and its ability to boost performance even with few-labeled data. Likewise, self-supervised learning has other benefits as it has been shown to improve adversarial robustness and uncertainty of deep models as compared to purely supervised methods~\\cite{hendrycks2019using}. Although we did not study these aspects explicitly in this work, the results of transfer learning across domains hint that our auxiliary tasks also enhance the model's robustness; we leave an in-depth study for future work.\n\n\\begin{figure}[htbp]\n\\subfloat[HHAR]{\\includegraphics[width=8cm]{Figures\/Transfer\/hhar-tfld-summarized.pdf}} \\\\\n\\subfloat[MobiAct]{\\includegraphics[width=8cm]{Figures\/Transfer\/ma-tfld-summarized.pdf}} \\\\\n\\subfloat[MotionSense]{\\includegraphics[width=8cm]{Figures\/Transfer\/motionsense-tfld-summarized.pdf}} \\\\\n\\subfloat[UCI HAR]{\\includegraphics[width=8cm]{Figures\/Transfer\/uci_har-tfld-summarized.pdf}} \\\\\n\\subfloat[HAPT]{\\includegraphics[width=8cm]{Figures\/Transfer\/hapt-tfld-summarized.pdf}} \\\\\n\\subfloat{\\includegraphics[width=6.5cm]{Figures\/Low_Data\/ld_caption.pdf}}\n\\caption{Contribution of self-supervised learning, and fine-tuning of the transferred networks in learning from few-data. We utilize a pre-trained model on each source data and train a non-linear classifier on the target task to assess the effectiveness of self-supervision for improving the recognition rate. The networks are fine-tuned with a specified number of instances per class $10$ times. For each source data, we provide mean results only of the best performing auxiliary task in order to improve readability.}\n\\label{fig:tf_ld}\n\\end{figure}\n\n\\subsubsection*{Cross-validation to determine robustness against subject variations}\n\\label{subsec:cv}\nTo validate the stability of our methodology against variations in subjects' data utilized for pre-training and downstream task evaluation, we perform $5$-fold cross-validation based on user split (i.e. the train and test division ($80-20$) is based on users with no overlap among them; train\/test users are entirely independent); and we follow the same experimental setup as earlier. For each fold's data and surrogate task, we pre-train the models and train a linear classifier on top of the frozen network. The fully-supervised baseline is trained in an end-to-end manner, directly with the semantic labels. Table~\\ref{tab:cv_linear} summarizes the results averaged across $5$ folds on eight considered datasets. We observe that the results achieved with self-supervision are consistent with earlier experiments. This highlights that our approach for sensory representation learning works well with different users' data and it is robust to subjects' differences. On the MobiAct dataset, the feature prediction and transformation recognition tasks achieve $0.90$ F-score, which is very close to a fully-supervised model's F-score of $0.91$. Likewise, on MIT DriverDb, self-supervision provides an impressive improvement over training from scratch. To summarize, these results suggest that the learned representations with unlabeled data learn useful features that can be used to a large extent for solving the end-task with a simple linear layer. Furthermore, we explore fine-tuning the last convolutional layer of the encoder while training a linear layer on downstream tasks. In Table~\\ref{tab:cv_ft}, we show that fine-tuning a shared layer leads to a better performance than the fully-supervised model training from scratch on most of the datasets. The feature prediction task on the HHAR dataset achieved an F-score of $0.87$, which is $5$ points above the baseline. Likewise, on other datasets and tasks, our technique either bridges the gap or achieves broadly similar results as the supervised models. We think that careful fine-tuning of the architecture and related hyper-parameters could further improve the recognition rate of self-supervised networks. We note that a direct comparison of our approach with existing methods is not feasible as we learn representations from unlabeled data and evaluate through training a linear classifier, whereas, prior methods focus on fully-supervised learning with different architectures and evaluation strategies. However, to be comparative, we summarize related results here, which are only indicative. On MotionSense, our sensor blend task achieves an F-score of $0.92$ compared to $0.95$ and $0.86$ accuracy for trial- and subject-wise evaluation in~\\cite{malekzadeh2018protecting}. For SleepEDF, our fusion magnitude task scores a kappa of $0.72$ compared to $0.76$ of a sophisticated fully-supervised model~\\cite{supratak2017deepsleepnet}. Likewise, on WiFi sensing task, feature prediction proxy task results in an F-score of $0.85$ compared to the $0.90$ accuracy of an LSTM-based model~\\cite{yousefi2017survey} over six classes.\n\nWe wondered whether pre-training with our auxiliary tasks is invariant to utilized subjects' data, as it is critical for learning in a real-world setting due to the non-curated nature of the data. We found that proxy tasks are highly stable and result in a similar performance as earlier, when a linear classifier is trained on top of self-supervised feature extractors. This analysis further shows that the self-supervised features are not necessarily subject-specific, but are general in nature. Moreover, our evaluation demonstrates there is a room for improvement through selecting problem- or task-specific network architectures and using larger unlabeled datasets for unsupervised learning. Specifically, it would be valuable to explore unifying supervised and self-supervised objectives in a multi-task setting to personalize or adapt sensing models directly on user devices.\n\n\n\\begin{table}[htbp]\n \\caption{Comparison of self-supervised representation learning to fully-supervised approach with $5$-fold cross-validation based on user-split. We pre-train the feature extractors for each fold's data and learn a linear classifier for the end-task as usual. We report weighted F-score averaged over the 5 folds, highlighting the robustness of our method to subject variations. See Table~\\ref{tab:kappa_cv_linear} in appendix~\\ref{appendix:kappa_results} for kappa scores.}\\label{tab:cv_linear}\n \\centering\n \\subfloat{\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n Method & \\textbf{HHAR} & \\textbf{MobiAct} & \\textbf{MotionSense} & \\textbf{UCI HAR} \\\\ \\hline\n Fully Supervised & 0.844$\\pm$0.090 & 0.917$\\pm$0.017 & 0.960$\\pm$0.007 & 0.951$\\pm$0.025 \\\\\n Random Init. & 0.199$\\pm$0.047 & 0.394$\\pm$0.086 & 0.284$\\pm$0.086 & 0.268$\\pm$0.208 \\\\\n Autoencoder & 0.722$\\pm$0.085 & 0.736$\\pm$0.021 & 0.752$\\pm$0.050 & 0.831$\\pm$0.041 \\\\ \\hline\n Sensor Blend & 0.829$\\pm$0.061 & 0.886$\\pm$0.010 & \\cellcolor[gray]{0.93}0.920$\\pm$0.019 & \\cellcolor[gray]{0.93}0.915$\\pm$0.038 \\\\\n Fusion Magnitude & \\cellcolor[gray]{0.93}0.841$\\pm$0.040 & 0.889$\\pm$0.014 & \\cellcolor[gray]{0.93}0.924$\\pm$0.025 & 0.899$\\pm$0.049 \\\\\n Feature Prediction & 0.820$\\pm$0.068 & \\cellcolor[gray]{0.93}0.900$\\pm$0.016 & 0.900$\\pm$0.025 & 0.896$\\pm$0.043 \\\\\n Transformations & \\cellcolor[gray]{0.93}0.822$\\pm$0.059 & \\cellcolor[gray]{0.93}0.900$\\pm$0.011 & 0.898$\\pm$0.013 & \\cellcolor[gray]{0.93}0.916$\\pm$0.018 \\\\\n Temporal Shift & 0.811$\\pm$0.057 & 0.890$\\pm$0.017 & 0.889$\\pm$0.027 & 0.793$\\pm$0.030 \\\\\n Modality Denoise. & 0.798$\\pm$0.077 & 0.834$\\pm$0.029 & 0.780$\\pm$0.058 & 0.829$\\pm$0.056 \\\\\n Odd Segment & 0.812$\\pm$0.079 & 0.890$\\pm$0.015 & 0.901$\\pm$0.014 & 0.861$\\pm$0.015 \\\\\n Tripet Loss & 0.749$\\pm$0.065 & 0.822$\\pm$0.013 & 0.917$\\pm$0.022 & 0.893$\\pm$0.036 \\\\ \\hline\n \\end{tabular}\n } \\hspace{0.02cm}\n \\subfloat{\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n Method & \\textbf{HAPT} & \\textbf{Sleep-EDF} & \\textbf{MIT DriverDb} & \\textbf{WiFi CSI} \\\\ \\hline\n Fully Supervised & 0.897$\\pm$0.053 & 0.822$\\pm$0.025 & 0.789$\\pm$0.122 & 0.959$\\pm$0.005 \\\\\n Random Init. & 0.155$\\pm$0.061 & 0.072$\\pm$0.021 & 0.206$\\pm$0.015 & 0.214$\\pm$0.044 \\\\\n Autoencoder & 0.818$\\pm$0.064 & 0.701$\\pm$0.026 & 0.850$\\pm$0.054 & \\cellcolor[gray]{0.93}0.793$\\pm$0.014 \\\\ \\hline\n Sensor Blend & 0.855$\\pm$0.044 & 0.788$\\pm$0.014 & 0.824$\\pm$0.106 & - \\\\\n Fusion Magnitude & 0.840$\\pm$0.040 & \\cellcolor[gray]{0.93}0.795$\\pm$0.025 & 0.859$\\pm$0.061 & - \\\\\n Feature Prediction & \\cellcolor[gray]{0.93}0.859$\\pm$0.040 & 0.777$\\pm$0.033 & 0.843$\\pm$0.045 & \\cellcolor[gray]{0.93}0.855$\\pm$0.024 \\\\\n Transformations & \\cellcolor[gray]{0.93}0.863$\\pm$0.045 & 0.788$\\pm$0.028 & 0.860$\\pm$0.060 & 0.770$\\pm$0.032 \\\\\n Temporal Shift & 0.837$\\pm$0.042 & 0.753$\\pm$0.027 & 0.844$\\pm$0.082 & 0.729$\\pm$0.015 \\\\\n Modality Denoise. & 0.835$\\pm$0.050 & \\cellcolor[gray]{0.93}0.797$\\pm$0.029 & \\cellcolor[gray]{0.93}0.864$\\pm$0.061 & - \\\\\n Odd Segment & 0.821$\\pm$0.043 & 0.767$\\pm$0.037 & 0.839$\\pm$0.071 & 0.793$\\pm$0.018 \\\\\n Tripet Loss & 0.845$\\pm$0.044 & 0.789$\\pm$0.027 & \\cellcolor[gray]{0.93}0.860$\\pm$0.059 & 0.769$\\pm$0.022 \\\\ \\hline\n \\end{tabular}\n }\n\\end{table}\n\n\n\n\\begin{table}[htbp]\n \\caption{The effect of fine-tuning modality-agnostic encoder while learning downstream task under $5$-folds cross-validation as evaluated through weighted F-score. See Table~\\ref{tab:kappa_cv_ft} in appendix~\\ref{appendix:kappa_results} for kappa scores.}\\label{tab:cv_ft}\n \\centering\n \\subfloat{\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n Method & \\textbf{HHAR} & \\textbf{MobiAct} & \\textbf{MotionSense} & \\textbf{UCI HAR} \\\\ \\hline\n Fully Supervised & 0.844$\\pm$0.090 & 0.917$\\pm$0.017 & 0.960$\\pm$0.007 & 0.951$\\pm$0.025 \\\\\n Random Init. & 0.199$\\pm$0.047 & 0.394$\\pm$0.086 & 0.284$\\pm$0.086 & 0.268$\\pm$0.208 \\\\\n Autoencoder & 0.891$\\pm$0.049 & 0.914$\\pm$0.019 & 0.961$\\pm$0.010 & 0.936$\\pm$0.051 \\\\ \\hline\n Sensor Blend & 0.893$\\pm$0.062 & 0.919$\\pm$0.011 & 0.964$\\pm$0.011 & \\cellcolor[gray]{0.93}0.949$\\pm$0.036 \\\\\n Fusion Magnitude & 0.885$\\pm$0.054 & 0.918$\\pm$0.011 & 0.961$\\pm$0.013 & 0.942$\\pm$0.039 \\\\\n Feature Prediction & \\cellcolor[gray]{0.93}0.894$\\pm$0.050 & \\cellcolor[gray]{0.93}0.930$\\pm$0.014 & 0.962$\\pm$0.003 & 0.943$\\pm$0.047 \\\\\n Transformations & 0.893$\\pm$0.052 & \\cellcolor[gray]{0.93}0.933$\\pm$0.0126 & \\cellcolor[gray]{0.93}0.968$\\pm$0.007 & 0.949$\\pm$0.033 \\\\\n Temporal Shift & 0.885$\\pm$0.055 & 0.920$\\pm$0.014 & 0.941$\\pm$0.012 & 0.915$\\pm$0.050 \\\\\n Modality Denoise. & 0.886$\\pm$0.061 & 0.929$\\pm$0.015 & \\cellcolor[gray]{0.93}0.966$\\pm$0.011 & 0.933$\\pm$0.054 \\\\\n Odd Segment & \\cellcolor[gray]{0.93}0.894$\\pm$0.067 & 0.927$\\pm$0.011 & 0.962$\\pm$0.004 & \\cellcolor[gray]{0.93}0.951$\\pm$0.030 \\\\\n Tripet Loss & 0.856$\\pm$0.055 & 0.904$\\pm$0.020 & 0.957$\\pm$0.006 & 0.944$\\pm$0.044 \\\\ \\hline\n \\end{tabular}\n } \\hspace{0.02cm}\n \\subfloat{\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n Method & \\textbf{HAPT} & \\textbf{Sleep-EDF} & \\textbf{MIT DriverDb} & \\textbf{WiFi CSI} \\\\ \\hline\n Fully Supervised & 0.897$\\pm$0.053 & 0.822$\\pm$0.025 & 0.789$\\pm$0.122 & 0.959$\\pm$0.005 \\\\\n Random Init. & 0.155$\\pm$0.061 & 0.072$\\pm$0.021 & 0.206$\\pm$0.015 & 0.214$\\pm$0.044 \\\\\n Autoencoder & 0.883$\\pm$0.059 & 0.764$\\pm$0.028 & 0.804$\\pm$0.132 & \\cellcolor[gray]{0.93}0.911$\\pm$0.032 \\\\ \\hline\n Sensor Blend & \\cellcolor[gray]{0.93}0.892$\\pm$0.052 & 0.801$\\pm$0.020 & 0.793$\\pm$0.149 & - \\\\\n Fusion Magnitude & 0.884$\\pm$0.051 & \\cellcolor[gray]{0.93}0.808$\\pm$0.023 & 0.788$\\pm$0.148 & - \\\\\n Feature Prediction & 0.893$\\pm$0.055 & 0.794$\\pm$0.031 & 0.795$\\pm$0.143 & 0.857$\\pm$0.040 \\\\\n Transformations & \\cellcolor[gray]{0.93}0.896$\\pm$0.051 & \\cellcolor[gray]{0.93}0.801$\\pm$0.029 & 0.806$\\pm$0.127 & 0.805$\\pm$0.051 \\\\\n Temporal Shift & 0.890$\\pm$0.052 & 0.781$\\pm$0.027 & 0.805$\\pm$0.133 & 0.758$\\pm$0.048 \\\\\n Modality Denoise. & 0.882$\\pm$0.051 & 0.796$\\pm$0.028 & \\cellcolor[gray]{0.93}0.858$\\pm$0.051 & - \\\\\n Odd Segment & 0.888$\\pm$0.048 & 0.778$\\pm$0.035 & \\cellcolor[gray]{0.93}0.849$\\pm$0.050 & \\cellcolor[gray]{0.93}0.854$\\pm$0.032 \\\\\n Tripet Loss & 0.888$\\pm$0.056 & 0.792$\\pm$0.031 & 0.806$\\pm$0.128 & 0.765$\\pm$0.022 \\\\ \\hline\n \\end{tabular}\n }\n\\end{table}\n\n\\section{Impact and Limitations}\n\\label{sec:impact}\nOur \\textit{Sense and Learn} framework shows that it is possible to use unlabeled data, in addition to smaller amounts of labeled data, when learning features for varied classification problems. We believe our method is useful in practice, where obtaining labeled data is difficult and costly. Since the same approach, with a fixed neural network structure, provides gains for quite different application areas, ranging from activity recognition to sleep stage scoring, we also believe the method is applicable in practice. While it is true that a practitioner cannot be certain which self-supervised task will work best for a new application, the range of experiments we present should provide a valuable starting point as to which tasks are most promising. Moreover, our fine-tuning experiments (Table~\\ref{tab:ft}) show that e.g. the Transformations task provides significant gains across all datasets even when using all available supervised data. Finally, self-supervised tasks don't need any labels while learning the representations, which opens up the possibility of using our framework for on-device Federated Learning~\\cite{bonawitz2019towards}, where the sensor data never leaves the users' device (e.g., smartphone).\n\nSelf-supervised learning provides a scalable, inexpensive, and data efficient way to learn high-level features with deep neural networks without requiring strong labels, which could be unclear, noisy or limited for many real-world problems. However, there are limitations of these approaches which are also applicable to our methodology. First, deep neural networks are prone to learning via shortcuts through exploiting low-level cues in the input e.g. object textures and other local artifacts in image classification~\\cite{geirhos2020shortcut}. The unintended cue learning is not limited to supervised methods, but is a problem for self-supervised methods too, as networks can use shortcuts to solve proxy task without learning anything useful (e.g. chromatic aberration in vision models~\\cite{nathan2018improvements}). For time-series or multisensor inputs discovering, a model relying on shortcuts is an unsolved problem and could be challenging to detect. Second, as getting access to large unlabeled and labeled sensory datasets is difficult, evaluating how auxiliary tasks will perform on non-curated data or learning in an open-world environment needs further exploration. Third and last, interpretability and understanding the decision mechanism of deep models is another open area of research to address issues of model uncertainty, bias and fairness. The features learned with deep network could be non-interpretable, but we think that unifying shallow models using hand-crafted features with deep networks consuming raw input through knowledge distillation~\\cite{hinton2015distilling} might shed light on the importance of certain features.\n\n\\section{Conclusion and Future Work}\n\\label{sec:conclusion}\nWe proposed a self-supervised framework for multisensor representation learning from unlabeled data, produced by the omnipresent sensors. To realize the vision of unsupervised learning for sensing systems and IoT in general, we developed eight novel auxiliary tasks that acquire their supervision signal directly from the raw input, without any human involvement. The defined proxy objectives are utilized to learn general and effective deep models for a wide variety of problems. Through extensive evaluation on eight publicly available datasets from four application domains, we demonstrate that the self-supervised networks learn useful semantic representations that are competitive with fully-supervised models (i.e. trained end-to-end with labeled data). In summary, we demonstrated that the straight-forward and computationally-inexpensive surrogate tasks perform well on downstream tasks of interest by learning a linear classifier on top of frozen feature extractors. We further showed that fine-tuning a pre-trained modality-agnostic encoder further improved the detection rate of a network. As the key objective of leveraging unannotated data is to reduce the labeled data required for the end-tasks, we have also shown that our approach significantly improves the performance in the low-data regime. In particular, with as few as $5$ to $10$ labeled examples per class, the self-supervised initialized networks achieve an F-score between $0.70$-$0.80$. Furthermore, we examined the effectiveness of learned representations in an unsupervised transfer setting with linear separability analysis and semi-supervised learning, achieving much better results than training from scratch. \n\nWhile in this work, we individually evaluate the quality of learned features for each auxiliary task, an interesting direction for future research is to jointly solve these problems in a multi-task setting, in order to learn more discriminative features. Likewise, an important area of investigation is to utilize the proposed tasks in a large-scale federated learning setting on distributed data. We believe this will truly highlight the potential of self-supervision for continual on-device (e.g., smartphones) learning and improving personalization. Finally, the general nature of our methodology offers the opportunity for leveraging self-supervision in other application areas, where labeled data accumulation is naturally difficult, such as arrhythmia detection.\n\n\\section*{Acknowledgements}\nThe authors would like to thank F\u00e9lix de Chaumont Quitry, Marco Tagliasacchi and Richard F. Lyon for their valuable feedback and help with this work.\nVarious icons used in the figure are created by Sriramteja SRT, Berkah Icon, Ben Davis, Eucalyp, ibrandify, Clockwise, Aenne Brielmann, Anuar Zhumaev, and Tim Madle from the Noun Project.\n\n\\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nJust like with natural images, deep convolutional neural networks (CNNs) have shown impressive results for the classification of various diseases in medical images \\cite{rajpurkar2017chexnet}, \\cite{10.3389\/fmed.2019.00264}, \\cite{campanella2019clinical}. CNNs have also been used on histopathology images for tasks such as screening pre-cancerous lesions and localizing tumors \\cite{spanhol2016breast}, as well as predicting mutations \\cite{coudray2018classification}, survival \\cite{zhu2017wsisa}, and cancer recurrence \\cite{xu2019deep}\\cite{ye2018hybrid}\\cite{mkl}.\n\nThough CNN based algorithms on histopathology images have produced promising results, these algorithms lack interpretability. Localization and visualization algorithms in CNNs such as guided-backpropagation \\cite{springenberg2014striving}, grad-CAM \\cite{selvaraju2017grad}, and other CAM-related techniques fail to produce informative visualization for histopathology images. For instance, these techniques do not highlight cell nuclei responsible for the diagnosis and relevant features of the tumor microenvironment to further our understanding of disease and treatment mechanisms. Also, often CNNs are not able to highlight the relevant portions of the macro environment of the tumor due to large sizes (giga-pixels) of the whole-slide images. \n\nMorphological features of nuclei and the spatial relationships between them decide the diagnosis of histopathology slide. Representing histopathology images in the form of graphs can help capture the interaction between nuclei and the spatial arrangement of the relative positions with each other. Nuclei are represented as nodes of a graph and the distance between the nuclei can be described as edges between nodes of a graph\\cite{gadiya2019histographs}. This representation of histopathology images as graphs can be fed to graph convolutional networks (GCNs) to learn the characteristics of tissue at the macro-environment level.\n\nTaking the idea of using GCNs on graphs extracted from histology images further in this work, we propose to use an attention-based architecture and an occlusion-based visualization technique to highlight informative nuclei and inter-nuclear relationships. Our visualization results for classification of disease states in breast and prostate cancer datasets agree satisfactorily with the pathologists' observations of the relevance of various inter-nuclear relationships. Our technique paves the way for visualization of previously unknown features relevant for more important problems such as prognosis and prediction of treatment response.\n\n\\section{Related Work}\nBefore the emergence of deep learning, processing of histopathology images as graphs was explored in various ways. Weyn et al.\\cite{weyn1999computer} represents a histopathology image as a minimum spanning tree for the diagnosis of mesotheliomas. They use k-nearest neighbor for the classification of minimum spanning trees. Similarly, Cigdem et al. \\cite{demir2005augmented} form a graph from a histopathology image by considering the cluster of nuclei as a node that is connected using binary edges between nodes. A multi-layer perceptron is used for the detection of inflammation in brain biopsy. Cell-graphs \\cite{yener2016cell} uses nuclei as nodes and heuristic features as node and vertex features to perform classification on breast cancer and brain biopsy datasets.\n\n\\begin{figure*}\n\\centering \n\\includegraphics[width=0.9\\linewidth]{image_nucleus_graph\/rsf.jpg}\n\\caption{Proposed graph convolutional neural network with an attention layer.}\n\\end{figure*}\nThough the above mentioned methods form graphs from histopathology images, they use classical machine learning approaches such as support vector machine (SVM),k-nearest neighbors (kNN), etc. Recent developments in deep learning for graphs have enabled the use of GCNs on graphs derived from histopathology images. Kipf et al. \\cite{kipf2016semi} exhibits impressive results for node classification on various graph datasets such as Citeseer, Cora, Pubmed and NELL. They used spectral graph convolution to operate on homogeneous graphs. Other lines of work in GCNs operate in the spectral domain, which enables these algorithms to analyze heterogeneous graphs as well. Such et al. \\cite{such2017robust} introduced a graph convolutional algorithm in spatial domain. This method achieves excellent performance on various graph datasets. CGC-Net\\cite{zhou2019cgc} uses a variant of GraphSage\\cite{hamilton2017inductive} for identification of grade of prostate cancer slide represented as a graph. Recently, GCNs have been applied to graphs of nuclei in histopathology images with classification accuracy that is at par with CNNs \\cite{gadiya2019histographs}.\n\nA large portion of the medical community is skeptical about deep learning deployment in histopathology due to the lack of transparency in its working. Some attempts have been made to make deep learning more explainable. For instance, attention-based multiple instance learning \\cite{ilse2018attention} frames classification of histopathology images as weakly supervised problem and assigns weights to patches of a large image. This method produces an attention map for histopathology images to highlight patches important for the classification of the overall slide, but it cannot be scaled to giga-pixel images because of its substantial computation requirements. Visualization in the form of clustering and heatmaps was presented in \\cite{coudray2018classification}, but insightful interpretations beyond the highlighting of the tumor regions cannot be derived through these visualizations. Not only does interpretable visualization in general for histopathology images remains an open problem, to our knowledge, visualization for histopathology images through graph representation has also not been explored yet.\n\n\\section{Datasets and Methodology}\nIn this section, we describe the datasets and methodology used.\n\n\\subsection{Datasets}\nIn order to test the ability of the proposed method to highlight interpretable features automatically, we used two datasets for which we knew the features that were expected to be seen by the pathologists. The first dataset is from ICIAR2018 Grand Challenge on Breast Cancer Histology images (BACH) \\cite{aresta2019bach} and it comprises of 400 histopathology images of breast cancer. Each image of this dataset is of the size of 2048 x 1536 pixels. The original BACH dataset contains four classes, viz. normal, benign, in-situ and invasive. We trained a GCN to perform the binary classification task between invasive and in-situ classes because these two differ in the spatial arrangement of nuclei even though the nuclei themselves share similar morphologies. We used PyTorch package for our simulations.\n\nGleason grade classification and visualization tasks were also performed on a prostate cancer dataset~\\cite{arvaniti2018automated}. This dataset consists of a total of 1506 images for various prostate cancer tumor grades. Experiments were carried out for binary classification between Gleason grade 3+3 (primary+secondary) versus Gleason grade 4+4 or 4+5.\n\n\\subsection{Graph construction from Hematoxylin and eosin stain (H\\&E) stained images}\nWe have used a UNet \\cite{ronneberger2015unet} based model for detecting the nuclei. Edge features are based on the inter-nucleus distance. We measure the distance between two nuclei as $$dist(i,j) = \\sqrt{(x_i-x_j)^2 +(y_i-y_j)^2}$$, where $(x_{i}, y_{i})$ are the co-ordinates of nucleus $n_{i}$.\nWe form an edge between two nodes i and j, $A_{i,j}$ if their inter-nuclei distance is less than 100 pixels and assign the following weight to the resultant edge in the adjacency matrix (A):\n\n\\begin{equation}\n A_{i,j} = 1-\\frac{dist(i,j)}{100}\n\\end{equation}\n\\begin{figure*}\n\\centering \n\\subfigure[Whole slide tissue image]{\\label{fig:a}\\includegraphics[width=0.3\\linewidth]{image_nucleus_graph\/is10.jpg}}\n\\subfigure[Detected nucleus mask]{\\label{fig:b}\\includegraphics[width=0.3\\linewidth]{image_nucleus_graph\/is010.jpg}}\n\\subfigure[Generated graph]{\\label{fig:c}\\includegraphics[width=0.3\\linewidth]{image_nucleus_graph\/is010_tif.jpg}}\n\\caption{Graph formation: We start with a histopathology image, detect all nuclei using a U-Net, and construct a graph by linking pairs of nuclei closer than a distance threshold.}\n\\label{combination}\n\\end{figure*}\n\n\\subsection{Robust spatial filtering (RSF)}\nOur GCN was adapted from robust spatial filtering (RSF) \\cite{such2017robust}. For a graph $G(V, E)$, $V$ is the set of vertices and $E$ is the set of edges and $N$ is the number of nodes. Each vertex and edge can have multiple features.The numbers of features for a vertex and an edge are $F$ and $L$ respectively. The above arrangement allows the set $V$ and $E$ to be represented as tensors such as $V \\in \\mathcal{R}^{ N\\times F}$ and $E \\in \\mathcal{R}^{N\\times N \\times L}$ respectively. In RSF, the convolution operation on graphs is given by the following equation:\n\\begin{equation} \\label{eq1}\n \\begin{split}\n V_{conv} &= \\sum_{i=1}^{F} H^i V_{in}^i + b, \\\\\n \\text{where, } H^i &= h_0^i + \\sum_{j=1}^{L} h_j^i A_j\n \\end{split}\n\\end{equation}\n\\\\\nwhere, $h_i^j$ and b are learnable parameters and $A_j$ represents the ${j^{th}}$ edge feature of adjacency matrix. Multiple such filters are used to learn $F^{'}$ vertex features. In RSF, the graph adjacency matrix is not transformed into the spectral domain. Hence the computationally heavy operation of inversion of the Laplacian matrix is avoided. \n\nFor pooling operation, $V_{emb}^{'} \\in \\mathcal{R}^{N \\times N'}$ is derived from the input graph with $V_{in} \\in \\mathcal{R}^{N \\times F}$ and $A \\in \\mathcal{R} ^{N\\times N \\times L}$. This operation is similar to convolution operation given in Equation \\ref{eq1}. Further, $V_{out} \\in \\mathcal{R}^{N' \\times F}$ and $A_{out} \\in \\mathcal{R}^{N' \\times N' \\times F}$ with $N' < N$ is obtained by,\n\\begin{equation}\n \\begin{split}\n V_{emb} &= Softmax(V) \\\\\n V_{out} &= V_{emb}^T V_{in} \\\\\n A_{out} &= V_{emb}^TA_{in}V_{emb}\n \\end{split}\n\\end{equation}\n\\subsection{RSF with edge convolutions (RSF+Edge) }\nThe convolutional layer in RSF convolves vertex features of neighbor vertices to learn enhanced vertex features. This operation does not exploit the edge features directly. Gadiya et al. \\cite{gadiya2018some} proposed a method to learn enhanced vertex as well as edge features. Edge convolutional is performed as per the following equation:\n\\begin{equation}\n A_{out} = \\phi ( W X )\n\\end{equation}\nwhere $W$ is tensor of learnable parameters and $X$ is obtained by concatenating edge and vertex features of a node and $\\phi$ is a monotonic nonlinear activation function.\n\\subsection{Robust Spatial Filtering with Attention (RSF+Attention)}\nWe conjectured that an attention mechanism could help rank the graph vertices in their relative order of importance. Attention mechanism is used in neural networks extensively for natural language processing and to a lesser extent for computer vision tasks \\cite{xu2015show,ilse2018attention}. In our work, the attention layer was included before the first pooling operation at the input to highlight important nuclei directly, as shown in Figure 1.\n\\subsection{Visualization}\nFor the proposed model (RSF+Attention), we used the attention scores for visualization of the importance of individual nuclei. For the models that lacked an attention mechanism, given a trained model $\\mathcal{M}$ and a graph $G$, we rank all the nodes based on the drop in classification probability in a manner similar to \\cite{zeiler2014visualizing}. To get a more discernible drop in accuracy, for every node all the 1-hop neighbors along with their edges were also occluded. Occlusion of a node $n_{i}$ creates a new graph $G_{n_{i}}$ . Classification probability is computed for the occluded graph. The relative drop in probability for the nodes $n_{i}$ gives a measure $score_{i}$ for the importance of each node. We also tested 2-hop and 3-hop occlusion but the results were similar to those of 1-hop. Formally, $score_{i}$ for node $n_{i}$ can be given as,\n\n\\begin{equation}\n score_{i} = p(\\mathcal{M}(G)) - p(\\mathcal{M}(G_{n_{i}})) \n\\end{equation}\n\n\\section{Experiments and Results}\nIn this section, we show graphs formed from histology images, classification accuracy of using various GCN architectures, and visualization of highlighted nuclei.\n\n\\begin{table*}\n\\begin{tabular}{llll}\n\\hspace{2cm}\nOriginal image \\hspace{1.5cm} & Detected nucleus map \\hspace{1.7cm} & RSF+edge \\hspace{1.8cm} & RSF+attention \n\\end{tabular}\n\\end{table*}\n\\ExplSyntaxOn\n\\NewDocumentEnvironment{places}{mm}\n \n \\setlength{\\tabcolsep}{2pt}\n \\dim_set:Nn \\l_places_width_dim\n {\n (#1-\\ht\\strutbox-\\dp\\strutbox-2pt)\/(#2)\n }\n \\begin{tabular}{r @{\\hspace{2pt}} *{#2}{c}}\n }\n {\n \\end{tabular}\n }\n\\NewDocumentCommand{\\place}{mm}\n \n \\seq_set_from_clist:Nn \\l_places_images_in_seq { #2 }\n \\seq_set_map:NNn \\l_places_images_out_seq \\l_places_images_in_seq { \\places_set_image:n {##1} }\n \\seq_put_left:Nn \\l_places_images_out_seq\n {\n \\begin{tabular}{c}\\rotatebox[origin=c]{90}{\\strut#1}\\end{tabular}\n }\n \\seq_use:Nn \\l_places_images_out_seq { & } \\\\ \\addlinespace\n }\n\\dim_new:N \\l_places_width_dim\n\\seq_new:N \\l_places_images_in_seq\n\\seq_new:N \\l_places_images_out_seq\n\\cs_new_protected:Nn \\places_set_image:n\n {\\makebox[\\l_places_width_dim]\n { \\begin{tabular}{c}\n \\includegraphics[\n width=\\l_places_width_dim,\n height=\\l_places_width_dim,\n keepaspectratio,\n ]{#1}\n \\end{tabular}\n }\n }\n\\ExplSyntaxOff\n\\begin{figure*}\n\\centering\n\\begin{places}{0.9\\textwidth}{4}\n\\place{In-situ }{\n image_nucleus_graph\/is10.jpg,results\/nucleus\/is010_tif.jpg,results\/edge\/aa.png,results\/attention\/a1.png\n}\n\\place{Invasive}{\n results\/invasive\/original.png,results\/nucleus\/iv002_tif.jpg,results\/invasive\/edge.png,results\/invasive\/attention.png\n}\n\\place{Gleason 3}{results\/gleason\/ori_3.jpg,results\/nucleus\/g.jpg,results\/gleason\/mask_3_vertex.png,results\/gleason\/gleason3_at.png}\n\\place{Gleason 4}{results\/gleason\/ori_4.jpg,results\/nucleus\/z.jpg,results\/gleason\/mask_4_vertex.png,results\/gleason\/gleason4_at.png}\n\\end{places}\n\\caption{Comparison of visualization of RSF+edge and the proposed RSF+attention: Using RSF+attention, while nuclei on gland boundary are relatively more highlighted in the in-situ breast cancer (first row), all cancerous nuclei are highlighted in invasive breast cancer (first row). Similarly, the gland shapes are prominently highlighted in Gleason 3 prostate cancer (third row) as opposed to all cancer cells being highlighted in Gleason 4 prostate cancer (bottom row). The scale on the right shows color scale for the relative importance of nuclei.}\n\\label{Insitu(1)}\n\\end{figure*}\n\n\\subsection{Graphs from H\\&E stained histopathology images} \nEach image produces a graph with a different number of nodes. For BACH and prostate cancer Gleason grade datasets, the average number of nodes in a graph was 1546 and 613, respectively. Figure \\ref{combination} shows an example of transforming H\\&E stained histopathology image to a graph. \n\n\\subsection{Classification of breast and prostate cancers}\n\\begin{table}\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\textbf{RSF} & \\textbf{RSF + Edge} & \\textbf{RSF + Attention} \\\\ \\hline\nVertex Conv 1 & Vertex Conv 1 & Vertex Conv 1 \\\\\nVertex Conv 2 & Vertex Conv 2 & Vertex Conv 2 + Attn \\\\ \\hline\nPooling 1 & Pooling 1 & Pooling 1 \\\\ \\hline\n & Edge Conv 1 & \\\\\nVertex Conv 3 & Vertex Conv 3 & Vertex Conv 3 \\\\\n & Edge Conv 2 & \\\\ \\hline\nPooling 2 & Pooling 2 & Pooling 2 \\\\ \\hline\n & Edge Conv 3 & \\\\\nFC - 1 & FC - 1 & FC - 1 \\\\\nFC - 2 & FC - 2 & FC - 2 \\\\\nFC - 3 & FC - 3 & FC - 3 \\\\ \\hline\n\\end{tabular}\n\\caption{The architecture of techniques implemented}\n\\label{architecture}\n\\end{table}\n\\begin{table}\n\\centering\n\\begin{tabular}{\n>{\\columncolor[HTML]{FFFFFF}}c |\n>{\\columncolor[HTML]{FFFFFF}}c |\n>{\\columncolor[HTML]{FFFFFF}}c }\n\\hline\n\\textbf{Model} & \\textbf{BACH} & \\textbf{Gleason} \\\\ \\hline\nRSF Based & 94\\% & 97\\% \\\\\nRSF + Edge & 92\\% & 97\\% \\\\\nRSF + Attention & 90\\% & 97\\%\n\\end{tabular}\n\\caption{Accuracy results for different techniques}\n\\label{results}\n\\end{table}\nWe trained the three models described in the previous section, viz. robust spatial filtering (RSF), robust spatial filtering with edge convolution (RSF+Edge), and robust spatial filtering with attention (RSF+Attention). All models were trained for approximately 50 epochs with a learning rate of 0.01 using the Adam optimizer. The architectures of the three models are given in table \\ref{architecture}. Table \\ref{results} shows that classification accuracy for the three models was quite comparable to each other. All the models contained nearly 300,000 parameters.\n\n\\subsection{Visualization} \nWe now present the visualization produced by occlusion and attention mechanisms. We performed occlusion experiments on predictions of RSF and RSF+Edge models on the breast and prostate cancer datasets. Visualization produced by these models were nearly the same, so we have omitted the results from the former due to space constraints. The images in the first row correspond to in-situ subtype in breast cancer from BACH dataset. We can see that nuclei on the outer layer of the gland are highlighted by the occlusion experiments. Also, in the second row, which corresponds to the invasive class in BACH dataset, nearly all the nuclei are highlighted. Outer linings are crucial for in-situ classification and where as for invasive cancer is spread across the entire region. These are the characteristics of in-situ and invasive histologies that are correctly captured by the occlusion and attention experiments. In the last two rows, visualization results for the prostate cancer Gleason grade dataset are shown. In these images, nuclei of the glands that lose their structure are highlighted, as we expected them to be. The images in the last column of Figure \\ref{Insitu(1)} are visualization results from RSF+Attention model. These results were verified by expert pathologists and visibly better at highlighting the above mentioned features.\n\n\\section{Conclusion}\nWe occluded nuclei clusters and exploited an attention layer in a graph convolutional neural network to highlight nuclei in histopathology slides and visualized the results on a breast cancer and a prostate cancer datasets. The proposed methods provide a notably more interpretable map depicting the contribution of each nucleus and its neighborhood in the final diagnosis. The presented results provide a way to explain the new patterns the deep learning models found on the tissue images. The proposed techniques not only open a path for the verification of the existing practices in pathology but suggest a way to generate new knowledge on where to focus to find meaningful differences between tissue classes, for example, those that may have different disease or treatment outcome.\n\n\\section*{Acknowledgment}\nAuthors would like to thank Nvidia Corporation for donation of GPUs used for this research.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nStar and planet formation processes both give rise to objects in the $\\sim$1 to 20 M$_{Jup}$ range\n(e.g., Luhman et al. 2005; Bakos et al. 2009; Marois et al. 2010; Joergens et al. 2013).\nNaively, objects of higher mass are typically assumed to form primarily via the star formation process and\nobjects of lower mass are assumed to form primarily from a proto-planetary disk. This\nsimplification is directly testable with a variety of approaches, both\ntheoretical and observational. One straightforward observational approach\nis the study of multiplicity: do brown dwarfs come in pairs as frequently and\nwith the same binary properties as stars? Although trends are suggestive\namong stellar visual binaries, i.e. decreasing frequency with later spectral types\n(e.g., Siegler et al. 2003), the frequency of short-period brown dwarf binaries is\nrelatively uncertain (Burgasser et al. 2012).\n\n\nBinary parameter space is multi-dimensional. For a given \nspectral type, binaries may be examined in terms of companion frequency, separation,\nmass ratio distribution, or secondary mass distribution. Mazeh et al. (2003),\nfor example, showed that for main sequence M stars the secondary mass distribution\ndoes not conform to a standard initial mass function (IMF) but instead follows a\nrelatively flat distribution for a primary sample\nwith M$\\sim$0.7 $\\pm$0.1 M$_{\\odot}$ and orbital period P$<$3,000 days.\nThis represents one small slice of a parameter space that may also be studied\nin diverse populations: young, intermediate age, old,\nmetal-poor, clustered, etc. Specific comparisons between these samples provide a wealth\nof diagnostics for understanding the similarities and differences in formation\nand evolution of distinct groups of objects.\n\nIn a sample of 454 F6$-$K3 solar-type stellar systems within 25pc of the Sun,\nRaghavan et al. (2010) found a total fraction of binaries and higher order\nmultiples of 44\\%\\footnote{Thus, although the majority of these {\\it systems} may not\nbe multiple, the majority of the {\\it stars} studied reside in multiple systems,\nas previously concluded (e.g., Abt \\& Levy 1976; Duquennoy \\& Mayor 1991).}.\nAmong other results, they reconfirm the existence of the brown dwarf desert,\nthe pronounced dearth of brown dwarf mass (i.e. $\\sim$10$-$80 M$_{Jup}$) companions\nto stars in orbits with periods less than a few years (e.g., Grether \\& Lineweaver 2006;\nMetchev \\& Hillenbrand 2009). \n\nIs the brown dwarf desert the result of dynamical evolution preferentially\nimpacting lower mass companions (e.g., Reipurth \\& Clarke 2001; Armitage \\& Bonnell 2002)\nor does it have more to do with poorly understood\nbarriers to the formation of tightly bound companions of brown dwarf mass? \nIn a radial velocity (RV) survey with a few hundred m\/s precision of $>$100 stars in the Taurus star\nforming region (Crockett et al. 2012), no brown dwarf companions to 1$-$3 Myr old\nstars have been observed (Mahmud et al. 2015, in prep), indicating that the existence of the\ndesert is more likely related to formation than dynamical evolution in\norigin. Among 1$-$10 Myr old populations, to date only one brown dwarf-brown dwarf\nshort-period ($\\la$1 year) spectroscopic binary pair has been identified (Stassun et al. 2006).\nJoergens et al. (2010) found an orbital period of 5.2 years\nfor the young Chameleon brown dwarf binary Cha~H$\\alpha$8 and Joergens (2008) estimates a period\nof $>$12 years for the pair CHXR74. However, the results of Stassun et al. and Joergens are\nbased, respectively, on a survey for eclipsing systems and on a relatively small sample and thus are\nlikely incomplete. Joergens estimates a binary frequency among very low mass young objects\nof $10^{+18}_{-8}$\\%.\n\nAmong intermediate age brown dwarf spectroscopic binaries, Basri \\& Mart\\'{i}n (1999)\nfound the first brown dwarf pair, PPL 15, a 5.8 day period system, in a study of the\nPleiades. Simon et al. (2006) studied the 100 Myr old brown dwarf\nbinary GL~569B (Zapatero Osorio et al. 2004),\na pair with a $\\sim$2.4 year orbit, a semi-major axis of 0.89~AU.\nThey detected some evidence for a third spectroscopic component in the system, yet to be confirmed.\n\nRV surveys among field brown dwarfs, sensitive to binaries with periods\nof several years or less and semi-major axes of a few AU, have yielded a handful of definitive detections. \nIn a sample of 59 field brown dwarfs, Blake et al. (2010) found a tight binary (a$<$1~AU) frequency\nof $2.5^{+8.6}_{-1.6}$\\%. They had previously identified and measured the orbit of the $\\sim$247 day\nperiod system 2MASS 0320-04 (Blake et al. 2008), independently identified on the basis of spectral analysis by Burgasser et al. (2008),\ncombined their RV measurements with the astrometry of Dahn et al. (2008)\nfor the $\\sim$607 day period system LSR J1610-0040, and presented RV data on two wide\nsubstellar pairs with periods of $>$10 years (2MASS J15074769-1627386 and 2MASS J07464256+2000321).\nZapatero Osorio et al. (2007) measured space motions for over a dozen field\nbrown dwarfs but found no spectroscopic pairs in their sample.\nBurgasser et al. (2012) presented a solution for the spectroscopic orbit of\nthe 148 day period pair SDSS J000649.16-085246.3AB, in common proper motion with\nthe very low mass star LP 704-48, with M9 and T0 components straddling the substellar limit.\nAlthough Basri \\& Reiners (2006) indicate an overall spectroscopic binary fraction for\nfield brown dwarfs and very low mass stars of 11\\% in their RV survey of 53 targets,\nonly three L dwarfs in their sample show some level of RV variability. Of these three,\n2MASS J15065441$+$1321060 was subsequently shown by Blake et al. to be non-variable and\n2MASS J15074769-1627386 and 2MASS J07464256+2000321 are long-period systems identified as\nbinaries with imaging observations. Other brown dwarf pairs have been identified\nwith imaging (e.g., Lodieu et al. 2007), microlensing (e.g., Bennett et al. 2008), and astrometry\n(e.g., Sahlmann et al. 2013). Spectral brown dwarf binaries, systems that appear single in light of\nexisting data but are spectroscopically peculiar, implicating the possible presence of a companion, may be\nnumerous. Bardalez Gagliuffi et al. (2014) have identified 50 candidates, although Artigau et al. (2009)\nand Radigan et al. (2013) identified cases in which the brown dwarf binary candidates were instead found to have\nheterogeneous cloud covers. Thus it is unlikely that all spectral binaries have $<$1 year orbits, but two have been\nconfirmed as such to date (Blake et al. 2008; Burgasser et al. 2012).\n\nNo short period ($<$100 days) field brown dwarf spectroscopic binaries are known, but one\nintermediate age and one young system with periods of just a few days were identified by\nBasri \\& Mart\\'{i}n (1999) and Stassun et al. (2006), respectively. Short period systems ought to be\nthe most straightforward to identify; however,\nRV surveys for brown dwarf multiples require the world's biggest\ntelescopes and generous time allocations, challenging to obtain, and are fraught with bias. \nYet without such work our understanding of substellar multiplicity is\nskewed towards the anecdotal, and astronomers' grasp of the basis for planetary\nmass companion formation is isolated from the context of brown dwarf and stellar\nmass companion formation.\n\nWe report here on 11 years of dynamical observations of over two dozen\nfield brown dwarfs taken at high spectral resolution in the near-infrared (IR) at the Keck II telescope. \nThe intrinsic faintness of brown dwarfs, particularly the late L and T types, presents a challenge to\nhigh-resolution spectroscopic observations. However, this is the only method by which we may\nderive the RV data necessary for calculating space motions, and hence\npossible moving group or cluster membership, and the telltale RV variability of a short-period\nbinary. Measurements of $v \\sin i$ provide a lower limit on rotational velocity, crucial for understanding angular\nmomentum evolution. Ultimately, with sufficiently precise data, the combination of RV versus phase together with\nthe angularly resolved orbits for the few year or few tens of year period systems will yield the\nabsolute component masses of brown dwarfs in binaries, invaluable for furthering our understanding\nof brown dwarf structure and evolution (Konopacky et al. 2010; Dupuy et al. 2010). Because brown dwarfs emit the\nbulk of their energy at wavelengths greater than $\\sim$1~$\\mu$m, IR spectroscopy provides the\nbest approach for their RV measurements.\n\nOur goals were to identify brown dwarfs in spectroscopic\nbinary systems and to measure the dynamical properties of any such pairs\ndiscovered. This project to identify short-period\nbrown dwarf multiples is the latest contribution to the NIRSPEC Brown Dwarf Spectroscopic Survey\n(BDSS; McLean et al. 2001, 2003, 2007; McGovern et al. 2004; Rice et al. 2010) and leverages over a\ndecade of observations to characterize brown dwarfs at high spectral resolution.\nWe find that a critical factor in a productive survey hinges on the RV precision; sensitivity to\nRV variability scales rapidly with this parameter. \nWe describe our sample, observations, and data reduction in \\S 2 and discuss our\ndata analysis in \\S 3. Section 4 provides a discussion of our results and we briefly\nsummarize our work in \\S 5.\n\n\n\\section{Observations and Data Reduction}\n\nTargets were selected for a range of spectral types across the span of late M, L, and T dwarfs and on the basis of \nmagnitude ($J\\la15$ mag) and accessibility from the Keck Observatory ($\\delta\\ga-30^{\\circ}$). \nThe complete target list and observing log appears in Table 1 which lists the object name (column 1),\nRight Ascension and Declination (columns 2 and 3), spectral type (column 4),\n$2MASS$ J magnitude (column 5), reference for discovery paper (column 6), and the UT dates of observation (column 7).\n\nObservations were carried out with the high-resolution, cross-dispersed echelle mode\nof the facility, near-infrared, cryogenic spectrograph NIRSPEC (McLean et al. 1998, 2000)\non the Keck II 10 m telescope on Mauna Kea, Hawaii. The NIRSPEC \nscience detector is a 1024 $\\times$ 1024 pixel ALADDIN InSb array; a 256 $\\times$ 256\npixel HgCdTe array in a slit viewing camera was used for source acquisition.\nThe N3 (J-band) filter with the 12 $\\times$ 0$\\farcs$432 (3-pixel) slit, an echelle angle of 63.00$^{\\circ}$, \nand a grating angle of 34.08$^{\\circ}$ produces a resolving power\nof R = $\\lambda$\/$\\Delta \\lambda$ $\\approx$ 20,000 and nearly \ncontinuous coverage from 1.165$-$1.324$\\mu$m (orders 58$-$65; McLean et al. 2007).\nObservations made on 2000 July 25 and 29 employed the 12 $\\times$ 0$\\farcs$576 (4-pixel) slit, yielding\na resolution of $\\sim$15,000. Internal white-light spectra, dark frames, and arc lamp spectra were obtained for\nflat-fielding, dark current correction, and wavelength calibration.\nScience exposures were made in 600~s nodded AB pairs at two locations along the slit.\n\nAll spectroscopic reductions were made using the REDSPEC package, software produced\nat UCLA by S. Kim, L. Prato, and I. McLean specifically for the analysis of NIRSPEC\ndata\\footnote{See: http:\/\/www2.keck.hawaii.edu\/inst\/nirspec\/redspec.html} as \ndescribed in McLean et al. (2007). Wavelength solutions were determined using the OH night\nsky emission lines in each order; 4$-$5 OH lines across each of the orders used yielded\nwavelength solutions with typical uncertainties of better than 0.4 km~s$^{-1}$.\nThe two spectral orders most favorable for the analysis, 62 for the L dwarfs (1.221 $\\mu$m$-$1.239 $\\mu$m; Figure 1)\nand 59 for the T dwarfs (1.283 $\\mu$m$-$1.302 $\\mu$m; Figure 2), were\nselected independently on the basis of the presence of deep inherent lines in the brown dwarf targets.\nFurthermore, an additional advantage of these particular orders is the absence of terrestrial absorption lines,\nthus avoiding the necessity of division by a featureless telluric standard star.\nThis provided the optimal approach for several reasons: (1) eliminating division by\ntelluric standards maximized the signal-to-noise ratio and avoided the possible introduction of\nslightly offset spectra and potential small shifts in the brown dwarf absorption lines and hence RV measurements,\n(2) focusing on the narrowest and deepest lines available yielded the highest possible RV precision;\nalthough the KI lines in orders 61 and 65 are deep (e.g., McLean et al. 2007), their breadth is unfavorable\nto precision RV measurements through cross-correlation, and (3) selecting orders 62 and 59 further\nguaranteed the best possible RV precision given the regular spacing of the OH night sky emission lines across both\nof these orders, required for a superior dispersion solution; this condition was not met for all orders in our\nJ band setting. Multiple-epoch sequences for the L2 dwarf\nKelu-1 and the peculiar T6p dwarf 2M0937 are shown in Figures 3 and 4, respectively.\n\n\n\n\\section{Analysis}\n\n\\subsection{Radial Velocities}\n\nRice et al. (2010) found typical systematic RV uncertainties of 1$-$2 km~s$^{-1}$ for a sample\nobserved with a similar methodology, similar signal to noise, and with some overlap in target\ndata with this paper (Table 2). We thus adopt\na conservative 2 km~s$^{-1}$ internal uncertainty for our RVs.\nWe tested this estimate by cross-correlation of the RV invariant target 2M0036 (Blake et al. 2010), an L4 dwarf,\nfor which we obtained 7 epochs over more than 5 years.\nThe maximum RV shift between epochs was 1.91 km~s$^{-1}$; the standard deviation\nin the RV shift for all epochs was 0.59 km~s$^{-1}$. Thus 2 km~s$^{-1}$ provides a reasonable\nif conservative internal uncertainty on individual RV measurements. \n\nAt least two, and as many as seven, spectra were taken for each of our targets.\nWe tested for radial velocity variability by cross-correlating the\nhighest signal-to-noise spectrum against the spectra from all other epochs for a given target;\nno significant variability was detected (\\S 4).\nTable 2 lists the number of spectra taken for each object (column 2) and the\ntotal number of days spanned by the observations (column 3). RVs\n(column 4) were either taken from Blake et al. (2010) or determined by cross-correlation\nof the highest signal-to-noise spectrum for a particular object with spectra of objects with known RV\n(from Blake et al.) and similar spectral type, sometimes of type both earlier and later than our target.\nThe RVs resulting from cross-correlation of a target with more than one other object\nwere averaged and the standard deviation added in\nquadrature with the internal uncertainties in the radial velocity measurements. This result\nin most cases was dominated by the 2 km~s$^{-1}$ internal uncertainty; however, for a few objects,\nprimarily the fainter and thus lower signal to noise late T dwarfs,\nthis procedure resulted in an uncertainty of 3 km~s$^{-1}$ (Table 2).\n\nWe use the average RV values from Blake et al. (2010) when available because of their unprecedented precision,\nobtained by fitting models to near-IR K-band CO\nbandhead at $\\sim$2.3 $\\mu$m target spectra. The models are composed of synthetic\ntemplate spectra plus observed telluric spectra; the CO bandhead region of the telluric spectrum is rich in deep\nCH$_4$ lines that provide a wavelength dispersion and zero-point reference with a precision as\ngood as a few tens of m\/s. Small iterations of the RV shift between the synthetic photospheric spectra\nand the telluric spectra allow for high accuracy in the target RV measurements.\nWe compared our results with values from other RV studies in the literature, e.g., Basri et al. (2000),\nMohanty \\& Basri (2003), and Zapatero-Osorio et al. (2007). In every case our RVs were comparable\nto other values within 1~$\\sigma$. We provide our results where indicated in Table 2.\n\n\\subsection{Rotational Velocities}\n\nColumn 5 of Table 2 lists the $v \\sin i$ values for our targets. Most of these were taken\nfrom the literature (Basri et al. 2000; Mohanty \\& Basri 2003; Rice et al. 2010; Blake et al. 2010; references given in column 6). \nTo estimate $v \\sin i$ values for the remaining targets, we used visual comparison with objects\nof neighboring spectral types after superimposing the spectra. For some objects we\nconvolved comparison spectra of known $v \\sin i$ with a boxcar kernel in order to produce \nresulting spectra of larger $v \\sin i$ for comparison. This method was approximate and yielded\nuncertainties of 5$-$10 km~s$^{-1}$, based on visual comparisons with objects of known\n$v \\sin i$, for the T dwarfs in our sample. Nevertheless, these \nare the first estimates available for some of the targets and thus provide a useful guide.\n\n\n\\section{Discussion}\n\n\\subsection{Field Brown Dwarf Spectroscopic Multiplicity}\n\nKonopacky et al. (2010) obtained angularly resolved imaging and spectroscopy for each\ncomponent in 24 very low mass stellar and brown dwarf subarcsecond {\\it visual} binaries \ncontributing to eventual measurements of orbital solutions and\ncomponent masses. Our goal was to use high spectral resolution observations to identify\nany RV variability over time that might indicate a {\\it spectroscopic} binary brown dwarf.\nThis requires binaries with orbital periods sufficiently short to measure the component\nmotion at a significant level, i.e. at least several $\\sigma$ above the RV uncertainty. \n\nTo explore our sensitivity to the brown dwarf binary parameter space, given our $\\sim$2~ km~s$^{-1}$ RV precision, \nwe ran a Monte Carlo simulation of 10$^5$ possible binary orbits for each of the 25 objects in our sample, following\nBurgasser et al. (2014). Orbital parameters were\nuniformly distributed in log semi-major axis (10$^{-3}$ to 10$^2$ AU),\nmass ratio (0.8$-$1.0), sine inclination (0$-$1), eccentricity (0$-$0.6; Dupuy \\& Liu 2011), and all\nother orbital parameters (argument of periapsis, longitude of ascending node, and mean anomaly, 0$-$2$\\pi$).\nWe converted our target spectral types to effective temperature using the empirical relation of Stephens et al. (2009),\nand then from effective temperature to mass using the evolutionary models of Burrows et al. (2001), assuming ages of 0.5 Gyr, 1.0 Gyr, and 5 Gyr. \nEach of the 10$^5$ simulated orbits was sampled at the dates given and the primary orbital RV was\ncalculated. A binary detection, for a given semi-major axis bin (0.2 dex), was defined as a system for which a maximum RV\ndifference between all dates was $>$3$\\sigma$, i.e. $>$6~km~s$^{-1}$ given our 2~km~s$^{-1}$ precision.\nThe results are summarized in Figure 5. The most important factor impacting the probability\nof detecting a potential binary was the frequency of observation for a given target (Table 2).\n\nA binary with a separation of $\\la$0.1 AU should in principle be straightforward to detect with an RV precision of 2~km~s$^{-1}$.\nHowever, given our estimated target masses and sampling frequency, and assuming an age of 1.0 Gyr,\nwe could have detected such an orbit only 50\\% of the time for only 12 of the sources in\nour sample (middle left-hand panel of Figure 5). The detection probability for an 0.1 AU binary fails to\nreach 90\\% for {\\it any} of our sources. Using the probabilities of detection for separations greater than\na given threshold $a$, $P(>a)$, as a measure of the effective sample size, $N_{eff}(a) = \\Sigma_i P(>a)$,\nwe find our null result translates into a 1$\\sigma$ upper limit of 18\\% for spectroscopic\nbinaries down to $a=0.1$~AU, based on binomial statistics.\nOnly for systems with separations below 0.01 AU ($\\sim$1 day orbits) could the spectroscopic binary frequency\nof our sample be characterized as relatively rare, i.e. $\\la$10\\%.\n\nThese limits apply when we consider the detectability of individual systems. However, a signature of unresolved multiplicity could also emerge in higher velocity dispersions for the sample as a whole. Identifying higher dispersions across the sample requires robust determination of the individual measurement uncertainties, but we can perform a rough assessment as follows. Using the same simulation parameters, we calculated the distribution of velocity dispersions one would obtain if a given fraction of sources (randomly selected) were binaries with semi-major axes in logarithmically spaced bins. For a sample devoid of binaries, the mean dispersion is somewhat less than the adopted measurement uncertainty, about 1.75~km~s$^{-1}$. Sources with radial orbital motion drive the mean velocity dispersions of the sample higher. Figure~6 displays the thresholds at which the mean simulated sample velocity dispersions are 1.5, 3 and 5 times higher than the dispersions assuming a 2~km~s$^{-1}$ measurement uncertainty. The most conservative threshold is reached at a semi-major axis of 0.03--0.04~AU, and is detectable at even small binary fractions (i.e., 1--2 sources in the sample being binary). This analysis is roughly consistent with the individual detection limits above, and again implies that we can rule out a significant fraction of binaries ($\\gtrsim$10\\%) only for separations $\\lesssim$0.01~AU.\n\n\n\n\\subsection{Notes on Known Visual Binaries}\n\nOf the 25 targets in our sample, 7 are known visual binaries. For these systems we estimated the upper limit for the\nobservable RV shift, $\\Delta$(RV)$_{max}$, for the brighter binary component between two epochs, assuming the\nmost favorable possible observing conditions:\n(1) the epochs correspond to the two phases at which the primary component is moving toward and away from us with\nmaximum RV, (2) the projected separation corresponds to the semi-major axis\nof the system, and (3) the orbit is circular and edge-on. The observed and estimated binary properties are given\nin Table 3. A discussion of each visual binary and the results of our observations follows.\n\n\\subsubsection{2MASS J22344161+4041387 $-$ M6}\n\nUsing laser guide star adaptive optics imaging at the Keck II telescope,\nAllers et al. (2009) identified 2M2234 as a 1 Myr year old, visual binary with a projected physical separation of\n51 AU. Given the observed binary properties, the $\\Delta$(RV)$_{max}$\nis $\\sim$1.9 km~s$^{-1}$ (Table 3).\nWith an orbital period of $824^{+510}_{-310}$ years the\ninclination is effectively indeterminable. This estimate for the period is based on a circular orbit.\nA more realistic value, $1000^{+1600}_{-500}$ years, is calculated \nin Allers et al. (2009). In either case, it is not possible to observe the system at phases separated by half the orbit.\nFurthermore, given the $v \\sin i$ of 17 km~s$^{-1}$ for 2M2234 (Table 2), it is also impossible\nto spectroscopically resolve the RVs of the two components in a single epoch spectrum, even though the\ncomponent near-IR magnitudes are almost equal, because the maximum relative component RV separation is\nsignificantly less than the rotational broadening.\n\nCross-correlation of our 3 epochs of spectra with each other demonstrated no RV shift between\nthe 2007 and 2009 data. Between the 2006 and 2007 data there was an apparent shift of\n$-$7.7 km~s$^{-1}$; however, the signal to noise of the 2006 spectrum ($\\sim$20) is considerably\nlower than that of the other epochs ($\\sim$80), and the spectra are veiled (Allers et al. 2009),\nthus we do not have confidence in the 2006 result.\nUsing the young M6 brown dwarf [GY92] 5 for cross-correlation with our 2007 spectrum, we\nobtain an RV of $-$10$\\pm$2 km~s$^{-1}$ (Table 2)\\footnote{Cross-correlating the same 2M2234 spectrum with another young M6,\nCFHT Tau 7, Rice et al. (2010) found $-$13.1~km~s$^{-1}$.}, similar to the results of Allers et al. on the basis \nof Keck HIRES data from 2006\\footnote{This result is the weighted mean of two RVs;\nShkolnik et al. (2012) use the same Keck\nmeasurements to determine an unweighted mean of $-$10.9$\\pm$0.7 km~s$^{-1}$.}, $-$10.6$\\pm$0.5 km~s$^{-1}$.\nAllers et al. raise the possibility that 2M2234 could be a higher\norder multiple system, which would account for the overluminous nature of the A component.\nOur multi-epoch observations failed to detect any short-period, i.e. P $<$a few years, hierarchical spectroscopic binary in this system,\nalthough our sensitivity to intermediate separation binaries, and binary orientations unfavorable for detection, limit\nany significant statistical conclusions (\\S 4.1). Given the\ngreater $K_s-L'$ excess in the 2M2234A, it is feasible that the excess luminosity is related to the\ncircumstellar disk structure, orientation, and\/or possible accretion activity. Such a mismatch in disk properties around\nthe components of very low mass binaries is not unprecedented; for example,\nthe TWA 30AB wide, co-moving pair has an apparently edge-on disk around the embedded, earlier-type component, extinguishing this \nlate type M star by 5 magnitudes with respect to the cooler component (Looper et al. 2010).\n\n\n\n\\subsubsection{2MASS J07464256+2000321 $-$ L1}\n\nThe 2M0746 binary is a nearby ($d\\sim12$ pc), tight ($\\sim$3 AU) system.\nWe use the Konopacky et al. (2010) astrometric measurements (Table 3) to determine a $\\Delta$(RV)$_{max}$\nof 2.0 km~s$^{-1}$. Konopacky et al. find an average primary\/secondary flux ratio of 1.5 $\\pm$0.1, challenging the assumption\nthat angularly unresolved spectra are fully dominated by the primary (Blake et al. 2010). \n\nWe observed 2M0746 at two epochs separated by almost exactly 4 years, about 1\/3 of the orbital period.\nCross-correlation of our two order 62 J-band spectra yielded a 1.3 km~s$^{-1}$ shift with a high correlation coefficient, 0.92.\nComparing the epochs of our observations with the RV curve plotted for this system in Figure 14 of Blake et al. (2010), this is almost\nexactly the expected result; however, we are not sufficiently confident in our RV uncertainties to give it much weight.\n\n\n\\subsubsection{Kelu-1 $-$ L2}\n\nA rapid rotator with $v \\sin i$ of $\\sim$70 km~s$^{-1}$ and Li absorption, Kelu-1 was identified as a brown dwarf by Ruiz et al. (1997).\nMart\\'{i}n et al. (1999) hypothesized that Kelu-1's over-luminosity and Li abundance might be explained by a young age or an\nadditional component in the system (e.g., Golimowski et al. 2004). \nLiu \\& Leggett (2005) using Keck AO imaging found that Kelu-1 was a 0$\\farcs$291 binary.\nGelino et al. (2006) estimated spectral types for the components of L2 and L3.5 and a total mass of 0.115 $\\pm$0.014 M$_{\\odot}$.\nIn an unpublished preprint, Stumpf et al. (2008) describe additional observations of the system with VLT AO imaging\nthrough 2008; the separation steadily increased to 0$\\farcs$366 in 2008. The position angle has not changed by more than\n$4^{\\circ}$ or $5^{\\circ}$. Adopting the largest separation observed by Gelino et al. as the semi-major axis, 0$\\farcs$298 $\\pm$0$\\farcs$003,\nwe estimate a period of 39 $\\pm$5 years based on a circular orbit (although Stumpf et al. favor a high eccentricity of 0.82 $\\pm$0.10).\nIf viewed edge-on, this implies a $\\Delta$(RV)$_{max}$ of 4.3 $\\pm$0.4 km~s$^{-1}$, marginally detectable with our $\\sim$2 km~s$^{-1}$ precision.\n\nMeasurements of the Kelu-1 system RV in the literature are inconsistent: Basri et al. (2000)\nfound 17 $\\pm$1 km~s$^{-1}$ in June of 1997 and Blake et al. (2010) determined RVs of 6.35 $\\pm$0.39 and 6.41 $\\pm$0.75 km~s$^{-1}$\nin March and April of 2003. On the basis of angularly resolved spectra of the two\nknown components, Stumpf et al. (2008) suggest that Kelu-1 A\nis itself a spectroscopic binary. We used our highest signal to noise (S\/N) ratio spectrum of Kelu-1 to cross-correlate against five other epochs\n(Figure 3), all of S\/N ratio $>$50 per resolution element (the January, 2006, spectrum, with a S\/N of $\\sim$10, was not included in this analysis). \nOur RV measurements, from 2002 through 2011, show RV shifts of $<$3 km~s$^{-1}$.\nWe did not detect any clear evidence in our spectra for additional motion resulting from the A-component moving\nin a relatively short-period spectroscopic orbit; however, this could conceivably be the result of binary properties and\/or viewing geometry (\\S 4.1).\n\n\\subsubsection{2MASS J15074769-1627386 $-$ L5.5}\n\nOver a 6-year baseline,\nBlake et al. (2010) detect a marginally significant ($<$2 $\\sigma$) trend in the RV of 2M1507, a\nnearby (d$=$7.3 pc) L dwarf. They obtain a false alarm probability of 2.2 \\% and suggest the possibility that 2M1507 is a $>$5000 day\nbinary with an angular separation of 0$\\farcs$4. However, deep, high-resolution imaging sensitive to a\ncontrast ratio of 5 magnitudes (Bouy et al. 2003; Reid et al. 2006)\nhas not revealed any companions. No significant RV variations are evident in the 5 high-resolution spectra we obtained between\n2000 and 2008; cross-correlation of the highest S\/N ratio spectrum (UT 2000 April 25) against the other 4 epochs resulted in RV\nshifts of $<$1.7 km~s$^{-1}$ with an uncertainty of $\\sim$2 km~s$^{-1}$. This result, however, does not rule out multiplicity;\nBlake et al. observed $\\sim$0.5 km~s$^{-1}$ of motion over 6.5 years, thus we would not expect much more than that over our\n8 year baseline. Given the lack of definitive evidence for multiplicity, this system is not included in Table 3.\n\n\\subsubsection{DENIS-P J0205.4-1159 $-$ L5.5}\n\nKoerner et al. (1999) initially identified this system as binary. Bouy et al. (2005) describe evidence for a third\nobject in a bound orbit with the secondary component. The estimated properties of the wide binary orbit are uncertain but the period is at least 47 years\nand the $\\Delta$(RV)$_{max}$ is at most 4.4 km~s$^{-1}$ (Table 3). For the presumed close binary, Bouy et al. estimate an orbital period of\n8 years and a semi-major axis of 1.9 AU, implying a $\\Delta$(RV)$_{max}$ of $\\sim$7 km~s$^{-1}$. Our three spectra of DENIS 0205,\ntaken in 2001 and in 2006, are of low S\/N ratio. Cross-correlation between the epochs yields $-$2.7 and $-$2.1 km~s$^{-1}$\nwith a correlation coefficient of only $\\sim$0.4, reflecting the poor quality of the data. Sufficiently frequent and deep\nimaging and RV monitoring of this system may provide\nthe requisite phase coverage, preferably with better precision than 2 km~s$^{-1}$, to determine a full orbital solution for\nthe inner binary over the course of one orbital period.\n\n\n\\subsubsection{DENIS-P J1228.2-1547 $-$ L6} \n\nUsing the {\\it Hubble Space Telescope}, Mart\\'{i}n et al. (1999) identified DENIS 1228 as the first angularly resolved brown dwarf - brown dwarf\npair with a separation of 0$\\farcs$275$\\pm$0$\\farcs$002 (Bouy et al. 2003). After several years of monitoring the components' positions,\nBrandner et al. (2004) estimated the orbital properties of the system, listed in Table 3. The $\\Delta$(RV)$_{max}$ for this binary, 4.3 km~s$^{-1}$,\nin combination with the period of $\\sim$44 years from Brandner et al., is not favorable for the detection of an RV shift over the 4 year time scale\nof our NIRSPEC observations. Cross-correlating our 2007 May spectrum with those taken in 2011 February and June yields a\n0 km~s$^{-1}$ RV shift. Continued monitoring of the visual orbit with high angular resolution imaging and high precision RV spectroscopic\ntechniques will help to refine the parameters\nover the next decades, necessary to determine individual component masses in the long term. \n\n\\subsubsection{SDSSp J042348.57-041403.5 $-$ T0}\n\nSDSS 0423 is one of the visual brown dwarf binary systems which spans the L and T classes. Burgasser et al. (2005) measured a\nseparation of 0$\\farcs$16 and estimated a total mass of 0.08$-$0.14 M$_{\\odot}$. Assuming that the separation is equal\nto the semi-major axis of the system, 2.50$\\pm$0.07 AU, the period falls in the range of 10.5 to 13.9 years (Table 3)\nand the $\\Delta$(RV)$_{max}$ is 5.3$-$7.1 km~s$^{-1}$. We observed the system in 2001, 2005, and 2006, covering close to half of the\nestimated orbital period. However, the cross-correlation between the 2001 and 2005 spectra yielded a shift of only $-$0.4 km~s$^{-1}$\nand between 2001 and 2006 of 1.77 km~s$^{-1}$, indistinguishable within the uncertainty of our RV measurements, especially because the\n2006 spectrum was particularly noisy. Thus we find no evidence for significant orbital motion, implying a longer period or an\nunfavorable viewing geometry for the detection of an RV shift, or both.\n\n\n\\subsection{2MASS J05591914-1404488}\n\nThe T4.5 dwarf 2M0559 presents an enigmatic case of an over-luminous, extremely low-mass object. Observers and theorists\nalike have speculated (Burgasser 2001; Dahn et al. 2002; Burrows et al. 2006; Dupuy \\& Liu 2012) that this\nsystem is an equal mass binary. Alternatively, there may be fundamental processes at play\nin the mid-T dwarf atmospheres that are not yet well-understood. Specifically, this source is the lynchpin in the J-band\nbrightening\/cloud disruption scenario (Burgasser et al. 2002a). Zapatero Osorio et al. (2007) estimate limits on possible planetary\nmass companions in this system, but such a secondary component would not explain the unusually high brightness.\n\nWe obtained four observations of 2M0559 over a time baseline of 6.5 years. For an age of 1 Gyr, our Monte Carlo simulation (\\S 4.1)\nindicates a 50\\% detection probability for a threshold semi-major axis of 0.13 AU and a 90\\% detection probability for a threshold\nsemi-major axis of 0.003 AU. The threshold semi-major axis is the separation below which a spectroscopic companion would\nbe detected with a particular probability. Burgasser et al. (2003) rule out the presence of a relatively bright companion object closer than \n0$\\farcs$09. At the $\\sim$10 pc distance to 2M0559 (Dahn et al. 2002), 0$\\farcs$09 corresponds to $\\sim$0.9 AU. Thus, ample\nparameter space for a bright binary companion to this object remains unexplored and our confidence in a null result for a \ncompanion object is only high ($\\ge$90\\%) for extremely short periods of days or less. Monitoring this system with extremely\nprecise RV measurements (see next section) with regular cadence over a considerable time baseline will help to fill in the\npotential binary parameter space gap and might also provide insight into the atmospheric properties.\n\n\n\\subsection{The Importance of High Precision RV Measurements}\n\nFor spectroscopic binary systems, Figure 7\nillustrates the relationships between the primary object's mass, the primary orbital velocity, and the orbital period on the basis of Kepler's third law.\nWe show results for three distinct values of the mass ratio (q); a circular, edge-on orbit is assumed\nfor simplicity. For a system with a primary of mass 0.08, the\nsubstellar limit, and a mass ratio of 1.0, the primary object's RV is $\\sim$3.5 km~s$^{-1}$ for a period of 12 years, approximately the shortest period system\namong the visual binaries in our sample (Table 3). With a precision of 2 km~s$^{-1}$, motion of the primary (or the secondary, given a mass ratio of unity)\nin such a system is only detectable for very specific phases and viewing angles. The probability of detection with 2 km~s$^{-1}$\nprecision increases for shorter-period binaries; however, again, this is only true under certain specialized conditions (\\S 4.1).\nNone of the multi-epoch spectra in our sample of 25 brown dwarf systems reveals more than $\\sim$3 km~s$^{-1}$ of RV variability. Even\nfor the seven known brown dwarf binaries observed, some with a cadence that regularly sampled a significant fraction of the estimated orbital period,\nwe were unable to unambiguously detect any RV variability.\n\nSpecialized techniques for the\nhighly precise measurement of small RV shifts, such as those applied to high-resolution K-band spectra\nby Blake et al. (2007, 2010), Prato et al. (2008), Konopacky et al. (2010), Bailey et al. (2012), Burgasser et al. (2012), and others,\nare required to reliably detect motion in brown dwarf binaries, even for those with orbital periods as short as days. In their 6-year study of\nlate-type M and L dwarfs with NIRSPEC on the Keck II telescope, Blake et al. (2010) obtained a precision of 200 m~s$^{-1}$ on slowly rotating\nL dwarfs, providing sensitivity to orbital motion of brown dwarf binaries with periods of decades and mass ratios as low as $\\sim$10\\% (Figure 7),\nthe upper limit for the detection of giant planetary companions.\n\nIn the study described here, even for our sample of 25 systems with zero detections of spectroscopic binaries, it is still not\npossible to use the results to definitively characterize short-period low mass binaries as rare. The sampling and geometry of such systems\nare simply not well-suited to identification with our 2 km~s$^{-1}$ precision and random observing cadence. Thus as far as it is possible to say\nwith the extant data, very low mass spectroscopic binaries are not necessarily intrinsically rare, but even with one of the largest samples\navailable, statistics show (\\S 4.1) that 2 km~s$^{-1}$ uncertainties provide relatively weak constraints.\n\n\\section{Summary}\n\nWe obtained multiple-epoch spectra of a sample of 25 very low-mass field dwarfs, three M dwarfs, sixteen L dwarfs, and six T dwarfs,\nbetween 2000 April and 2011 June to search for\nRV variability and spectral evidence for multiple components. With a precision of $\\sim$2 km~s$^{-1}$, we were sensitive to RV\nvariability at a statistically significant level only in systems with periods of about a day or less, assuming a favorable distribution of\norbital properties and viewing geometries relative to our line of sight.\nIn none of the systems studied, including the seven known, wide binaries observed, did we detect any RV variability \n$>$3 km~s$^{-1}$. For over a dozen objects in our sample we present the\nfirst published high-resolution spectra and provide RVs and rotational velocities for the entire sample, either based on\nthis work or taken from the more precise measurements in Blake et al. (2010). We show multi-epoch spectral sequences for two\nobjects of particular interest, Kelu-1 and 2M0937, an L2 and a peculiar T6, respectively. No significant variations are seen in these\nor the other target spectra, some of which boast an exquisite S\/N ratio in excess of 100. \n\nRV measurements of brown dwarfs are important both for the ultimate measurement of brown dwarf\nmasses (Konopacky et al. 2010) and for the spectroscopic detection of very low-mass, even planetary, companions to presumed single brown dwarfs\n(Blake et al. 2010). The close binary fraction of very low mass systems is highly uncertain (e.g., Bardalez Gagliuffi et al. 2014).\nWe conclude with the observation that to satisfy these scientific goals requires\nhigh S\/N ratio, strategic sampling cadence, and relatively high precision measurements: with the 200 m~s$^{-1}$ precision\nof Blake et al., it is possible to detect several-Jupiter mass companions even in orbits of decades (bottom panel, Figure 7). Long-term\nmonitoring programs of binary brown dwarfs, and in particular candidate spectroscopic binary brown dwarfs (Bardalez Gagliuffi et al.), with high spectral resolution,\ncomponent-resolved spectroscopy (Konopacky et al.), with high spectral resolution unresolved spectroscopy (Burgasser et al. 2012), and with\nhigh-angular resolution imaging (e.g., Radigan et al. 2013), over time scales of days to years are required. Results of these efforts\nwill yield component mass measurements with sufficient precision to stringently test models of\nbrown dwarf structure and evolution, and, in the case of younger systems, formation (e.g., Schaefer et al. 2014). It is crucial\nthat RV monitoring programs take advantage of high-precision techniques for a future high-yield science return.\n\n\n\\bigskip\n\\bigskip\n\nWe thank the Keck Observatory OAs and SAs and B. Schaefer, probably all of whom helped with these runs and observations during the\n11 year period over which the data were gathered, for their exceptional support.\nWe are grateful to Q. Konopacky and M. McGovern for assistance with some of the later observing runs.\nL.P. thanks O. Franz and L. Wasserman for helpful discussions on orbital dynamics. We are grateful to the\nanonymous referee for comments which improved this manuscript.\nPartial support to L.P. for this work was provided by NSF grant AST 04-44017.\nThis research has benefited from the M, L, T, and Y dwarf compendium housed at DwarfArchives.org.\nThis work made use of the SIMBAD reference database, the NASA\nAstrophysics Data System, and the data products from the Two Micron All\nSky Survey, which is a joint project of the University of Massachusetts\nand the Infrared Processing and Analysis Center\/California Institute\nof Technology, funded by the National Aeronautics and Space\nAdministration and the National Science Foundation.\nData presented herein were obtained at the W. M. Keck\nObservatory, which is operated as a scientific partnership among the California Institute of Technology,\nthe University of California, and the National Aeronautics and Space Administration. The Observatory\nwas made possible by the generous financial support of the W. M. Keck Foundation.\nThe authors recognize and acknowledge the\nsignificant cultural role that the summit of Mauna Kea\nplays within the indigenous Hawaiian community. We are\ngrateful for the opportunity to conduct observations from this special mountain.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAs exemplified by GW170817, neutron star mergers are empirically known to produce a rich array of multimessenger\nemission \\cite{2017ApJ...848L..12A,LIGO-GW170817-mma}. The presence of matter is most unambiguously indicated by electromagnetic\nemission from nuclear matter ejected during the merger itself, which produces distinctive ``kilonova''\nemission \\cite{1974ApJ...192L.145L,1998ApJ...507L..59L,Metzger_2017,2020GReGr..52..108B} via radioactive heating of this expanding\n material. \nKilonova observations can provide insight into uncertain nuclear physics \\cite{2020arXiv201011182B,2020arXiv201003668Z,2020arXiv200604322V,2019AnPhy.41167992H,2020GReGr..52..109C} and help constrain the expansion rate of\nthe universe \n\\cite{2020arXiv201101211C,2020NatCo..11.4129C,2020PhRvR...2b2006C,2020ApJ...892L..16D},\nparticularly in conjunction with gravitational wave observations \n\\cite{LIGO-GW170817-mma,LIGO-GW170817-H0,LIGO-GW170817-EOS,LIGO-GW170817-EOSrank,2020PhRvL.125n1103B,2019LRR....23....1M,2019NatAs...3..940H,2020Sci...370.1450D}.\n\nIn principle, kilonova observations encode the amount and properties of the ejected material in their complex\nmulti-wavelength light curves (and spectra) \\cite{2019LRR....23....1M,2018MNRAS.480.3871C,2017ApJ...851L..21V}. \nFor example, several studies of GW170817 attempted to infer the amount of material ejected\n\\cite{2021arXiv210101201B,gwastro-mergers-em-CoughlinGPKilonova-2020,2018MNRAS.480.3871C,2019MNRAS.489L..91C,2017ApJ...851L..21V,2017Natur.551...75S,tanvir17,2017ApJ...848L..21A,chornock17,2017ApJ...848L..17C}.\nIn practice, these observations have historically been interpreted with semianalytic models, as they can be evaluated quickly and\ncontinuously over the parameters which characterize potential merger ejecta. \nHowever, it is well known that these semianalytic models contain oversimplified physics of already simplified anisotropic\nradiative transfer calculations \\cite{2018MNRAS.478.3298W,2020ApJ...899...24E,kilonova-lanl-WollaegerNewGrid2020} that neglect\ndetailed anisotropy, radiative transfer, opacity, sophisticated nuclear reaction networks, and composition differences.\n\nTo circumvent these biases, some groups have attempted to construct surrogate kilonova light-curve models, calibrated to\ndetailed radiative transfer simulations\n\\cite{gwastro-mergers-em-CoughlinGPKilonova-2020,2018MNRAS.480.3871C,RisticThesis}.\nFor example, Coughlin et al. \\cite{2018MNRAS.480.3871C} used Gaussian process (GP) regression of principal components to construct a\nmultiwavelength surrogate calibrated to a fixed three-dimensional grid of simulations \\cite{2017Natur.551...80K}, describing flux $F_k$ from a single component of ejected material. This study generated a ``two-component''\nejecta model by adding the fluxes of two independent calculations ($F=F_1+F_2$), ignoring any photon reprocessing effects.\nMore recently, Heinzel et al \\cite{gwastro-mergers-em-CoughlinGPKilonova-2020} applied this method to construct an anisotropic\nsurrogate depending on two components $M_{1},M_{2}$ and viewing angle, calibrating to their own anisotropic radiative transfer\ncalculations. They also included reprocessing effects,\nshowing that their previous simplified approach which treats the radiation from each of the two components of the\noutflow independently introduces biases in inference for the components' parameters. \nThese strong reprocessing or morphology-dependent effects are expected in kilonova light curves\n \\cite{2020arXiv200400102K,2017ApJ...850L..37P}. %\nFinally, a recent study by Breschi et al. \\cite{2021arXiv210101201B} favored an anisotropic multicomponent model.\n\nIn this work, extending \\cite{RisticThesis}, we apply an adaptive-learning technique to generate surrogate light\ncurves from simulations of anisotropic kilonovae. Starting with a subset of \\nSimStart{} simulations reported in\n\\cite{kilonova-lanl-WollaegerNewGrid2020}, we use these adaptive learning methods to identify new\nsimulations to perform, refining our model with \\nSimPlaced{} simulations so far. \nWe apply our surrogate light curves to reassess the parameters of GW170817.\nWe distribute the updated simulation archive, our current-best surrogate models, and our training algorithms at \\texttt{https:\/\/github.com\/markoris\/surrogate\\char`_kne}.\n\nThis paper is organized as follows.\nIn Section \\ref{sec:Placement} we describe the kilonova simulation family we explore in this study and the active learning methods we\nemploy to target new simulations to perform. We also briefly comment on our model's physical completeness.\nIn Section \\ref{sec:Interpolation} we describe the specific procedures we employed to interpolate between our simulations\nto construct surrogate light curves.\nIn Section \\ref{sec:PE} we describe how we compare observations to our surrogate light curves to deduce the\n(distribution of) best fitting two-component kilonova model paramers for a given event. We specifically compare our\nmodel to GW170817.\nIn Section \\ref{sec:discussion} we describe how our surrogate models and active learning fit into the broader challenges\nof interpreting kilonova observations. \nWe conclude in Section \\ref{sec:conclude}.\n\n\n\\section{Kilonova Simulation Placement}\n\\label{sec:Placement}\n\n\n\\subsection{Kilonova simulations}\n\\label{sec:kne_sims}\n\nThe kilonova simulations described in this work adopt a similar setup as and expand on the work of \\cite{kilonova-lanl-WollaegerNewGrid2020}. \nThe simulations discussed throughout were generated using the SuperNu \\cite{2014ApJS..214...28W} time-dependent radiative transfer code, \nusing tabulated binned opacities generated with the Los Alamos suite of atomic physics codes \\cite{2015JPhB...48n4014F,2020MNRAS.493.4143F}.\nWe use results from the \\textsc{WinNet} code \\cite{2012ApJ...750L..22W} to determine radioactive heating and\ncomposition effects. We employ the thermalization model of \\cite{2016ApJ...829..110B}, but use a grey Monte Carlo\ntransport scheme for\ngamma ray energy deposition \\cite{2018MNRAS.478.3298W}. \n\nThe ejecta model is based on a symmetrically-shaped ideal fluid expanding in vacuum described by the\nEuler equations of ideal hydrodynamics. The assumption of a radiation-dominated polytropic equation of state allows for\nan analytic representation of the ejected mass $M$ and average velocity $\\bar{v}$ as a function of\ninitial central density $\\rho_0$, initial time $t_0$, and the velocity of the expansion front $v_{max}$\n(Equations 11 and 12 in \\cite{2018MNRAS.478.3298W}). When combined with Monte Carlo-based radiative transfer and a specified\nelemental composition for the ejecta, the code produces time- and orientation-dependent spectra. Convolving these spectra with\nstandard observational filters produces light curves such as the ones in Figures \\ref{fig:sample_lc} and\n\\ref{fig:off_sample_interp}. %\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{Figures\/initial_grid_lc_logt_longer_times}\n\\includegraphics[width=\\columnwidth]{Figures\/placed_to_date_logt_longer_times}\n\\caption{\\label{fig:sample_lc}\\textbf{Bolometric luminosities of initial and adaptively placed simulations}: The top panel\nshows the $\\log_{10}$ bolometric luminosity in cgs units versus time in days for the simulations we initially used to train\nour grid. These similations all extend out to roughly 8 days. The bottom panel shows the bolometric light curves for our\nadaptively placed simulations overlaid on top of the initial grid light curves. Most of these simulations extend past 32 days. Both panels exhibit\nsignificant diversity in behavior and timescale.}\n\\end{figure}\n\n\n\nReal neutron star mergers have (at least) two mechanisms for ejecting material, denoted as dynamical and wind ejecta \\cite{2020ARNPS..7013120R}.\nDue to the difference in formation mechanisms of dynamical and wind ejecta \\cite{Metzger_2019}, a multi-component\napproach is necessary for accurate modeling. Each of the two types of ejecta, dynamical and wind, is modeled by a\nseparate component with a specified morphology, elemental composition, ejecta mass and ejecta\nvelocity. \nThe components are modeled together as one radiative outflow \\cite{2018MNRAS.478.3298W}. The thermal decay energy is treated by mass-weighting\nbetween the components where they overlap. %\nThe end product represents a time-dependent spectral energy distribution contained in 54 angular bins, equally spaced in\n$\\cos\\theta$ from $1$ to $-1$. For the purposes of this study, the spectra are convolved with broadband filters to produce a series of\nbroadband light curves. Specifically, we use the LSST $grizy$ filters for optical and near-infrared bands, 2MASS $JHK$ filters\nfor longer wavelength near-infrared, and the mid-infrared $S$ filter for the Spitzer $4.5\\;\\mu$m band.\nFor each band and emission direction, we estimate the AB\nmagnitude for that filter, defined for a source at $10\\unit{pc}$ in terms of the CGS energy flux $F_\\nu$ per unit\nfrequency via\n$\nm_{X,AB} = -2.5 \\log_{10} \\E{F_{\\nu}} - 48.6\n$.\nAll observations used in this work are provided or are translated into this AB-magnitude system \\cite{1998A&A...333..231B,2007AJ....133..734B,2006MNRAS.367..454H}.\nBecause our simulations tend toward reflection-symmetric behavior across the $z=0$ plane, we only consider the independent information contained in the upper half ($z > 0$) of these angular bins. \nTo reduce the acquisition cost of each simulation, we evolved each kilonova simulation in our initial grid out to $\\tEndDays$\ndays. To minimize data-handling and training cost, unless otherwise noted, we manipulate a subset of our simulation\noutput based on a log-uniform grid. For the initial simulations, this log-uniform grid consists of \\nTimePoints{} time points ranging from $\\tStartDays{}$ to $\\tEndDays$ days. \nFor the remaining simulations, this grid is extended in log-time to cover their available duration, up to\na maximum of 64 days.\nBecause of several systematics associated with modeling emission at early times (e.g., in the ionization states of the\nmediuim and in the contribution from and interaction with any strong jet), we do not report on behavior prior to 3 hours\npost-merger.\nIn this work, we use the orientation-averaged luminosity for simulation placement, but reconstruct the luminosity\ncontinuously in angle and time.%\n\n\nThe original simulation hypercubes discussed in \\cite{2018MNRAS.478.3298W,kilonova-lanl-WollaegerNewGrid2020} consider multiple wind ejecta morphologies and\ncompositions. To simplify the dimensionality of the problem, this work only considers simulations from the initial grid\nwith a peanut-shaped morphology \\cite{2020arXiv200400102K} and lower $Y_e = 0.27$ composition describing the wind ejecta. Table\n\\ref{tbl:grid_params} %\nsummarizes the parameters for the \\nSimStart{} simulations in our four-dimensional hypercube and highlights\nvariation in only ejected mass $M$ and average velocity $\\bar{v}$ for each of the two components: the mass and velocity of the dynamical and wind\nejecta, denoted henceforth as $M_{d},v_d, M_{w},v_w$. \nEvery simulation in our hypercube adopts the same morphologies for the dynamical and wind ejecta, respectively.\nThis initial simulation hypercube thus consists of only 2 of the 3 velocities and 3 of the 5 masses explored in the\ncompanion study \\cite{kilonova-lanl-WollaegerNewGrid2020}.\n\\begin{table}[h!]\n\\begin{center}\n\\begin{tabular}{|c@{\\hskip 2mm}c@{\\hskip2mm}c@{\\hskip 5mm}c@{\\hskip5mm}c|} \n\\hline\nEjecta & Morphology & $Y_e$ & $M\\textsubscript{ej}$ & $\\bar{v}$ \\\\ [0.5ex] \n & & & $M_\\odot$ & $c$ \\\\ [0.5ex] \n\\hline\\hline\nDynamical & Torus & $0.04$ & \\begin{tabular}{@{}c@{}} 0.001, 0.01, 0.1 \\end{tabular} & \\begin{tabular}{@{}c@{}} 0.05, 0.3 \\end{tabular} \\\\\\hline\nWind & Peanut & $0.27$ & \\begin{tabular}{@{}c@{}} 0.001, 0.01, 0.1 \\end{tabular} & \\begin{tabular}{@{}c@{}} 0.05, 0.3 \\end{tabular} \\\\ %\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\textbf{Kilonova simulation parameters}: Within the framework of models explored in\n \\cite{kilonova-lanl-WollaegerNewGrid2020}, parameters of the initial kilonova simulations used to initialize our\n adaptive learning prcess in this work. All simulations used in this work adopt a two-component model where the\n morphology and composition of each component is fixed. }\n\\label{tbl:grid_params}\n\\end{table}\n\n\nAs expected and discussed elsewhere \\cite{kilonova-lanl-WollaegerNewGrid2020}, these simulations exhibit significant viewing-angle\ndependence on the relative speed of the components. %\nThe obscuration of the wind by the dynamical ejecta becomes less significant closer to the symmetry axis and the \npeanut morphology itself also produces orientation dependence.\nThe two-component model shows ``blanketing'' of slow\nblue components by fast red components \\cite{2015MNRAS.450.1777K}.\nAlso expected and observed are qualitative trends versus the component masses and velocities: more wind ejecta mass\nincreases the $g$ band luminosity along the symmetry axis.\n\n\n\n\\subsubsection*{Illustrating systematics of kilonova simulations}\nBefore extensively discussing our ability to reproduce this specific family of simulations, we first comment on their\nsystematic limitations. Our simulation archive explores only a limited range of initial conditions for the ejecta, with specific assumptions\nabout the composition, morphology, and velocity profiles; with specific assumptions about nucleosynthetic heating; and\nwith specific assumptions about (the absence of) additional power and components, such as a jet or a central source to\nprovide additional power or light \\cite{2021MNRAS.500.1772N,2021MNRAS.502..865K}.\n %\nSeveral previous studies have indicated that these and other aspects of kilonova simulations can noticably impact the\noutcome\n\\cite{2018MNRAS.478.3298W,2020ApJ...899...24E,2019ApJ...880...22W,2020arXiv201214711K,2021ApJ...906...94Z,2021ApJ...910..116K,2017ApJ...850L..37P}. Where possible, we very briefly comment on how current and previous SuperNu\nsimulations' results change when making similar changes in assumptions. \n\nPrior work with SuperNu has explored the impact of composition \\cite{2020ApJ...899...24E}. \nHowever, recently, Kawaguchi et al 2020 \\cite{2020arXiv201214711K} (henceforth K20) demonstrated that Zr makes a substantial contribution to the final light\ncurve. Figure \\ref{fig:Zr} shows how our simulations depend on a similar change in composition, noting substantial\nchange in the late-time optical light curves when we remove Zr.\n\nAs demonstrated by many previous studies using SuperNu, the morphology and velocity structure also has a notable impact on the post-day light curve behavior\n\\cite{2018MNRAS.478.3298W,2021ApJ...910..116K,2017ApJ...850L..37P}. \nSeveral other groups have demonstrated similar strong morphology and orientation dependence in their work \\cite{2020ApJ...897..150D,2020arXiv201214711K,2021arXiv210101201B,gwastro-mergers-em-CoughlinGPKilonova-2020}. \nFor example, in their Figure 8, K20 demonstrate how the light curve changes when a specific polar component of the\nejecta is removed.\n\nUncertain nuclear physics inputs also propagate into notable uncertainties about the expected light curve; see, e.g., \\cite{2021ApJ...906...94Z,2020arXiv201011182B}.\nEven for the same morphology and amount of ejecta, nuclear physics uncertainties can modify the effective heating rate,\nparticularly for material with low $Y_e$ which has the greatest prospect for producing r-process elements. \n\n\nGiven limited exploration of possible kilonova initial conditions and physics, we can only at present quantify the\nuncertainties of the type listed above. In future work, we will employ our parameterized models to assess the impact of\nthese uncertainties on inferences about kilonova parameters. Future work could require kilonova models which include\nEOS parameters to enable joint inference which also\nsimultaneously constrains the equation of state.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{Figures\/rHmags_noZr_axis}\n\\includegraphics[width=\\columnwidth]{Figures\/gKmags_noZr_axis}\n\\caption{\\label{fig:Zr}\\textbf{Impact of removing Zirconium}: Solid and dashed lines show simulations with otherwise identical\n assumptions about composition, morphology, and velocity structure, differing only by the presence (solid) and\n elimination (dashed) of Zr. The selected simulation parameters, $M_{d}=0.01 M_{\\odot}$, $v_d=0.3 c$, $M_{w}=0.01 M_{\\odot}$,\nand $v_w=0.15 c$, are our closest-matching representation of the simulation parameters considered\nduring the Zr-omitting study in \\cite{2020arXiv201214711K}.\n}\n\\end{figure}\n\n\n\n\n\n\nAs discussed elsewhere \\cite{kilonova-lanl-WollaegerNewGrid2020}, at late times some light curves show a modest deficit of blue light ($g$-band) \nrelative to observations of GW170817 (unless the dynamical ejecta mass is large). Notably, our $g$-band light curves fall off significantly more rapidly after\ntheir peak in all viewing directions and for most parameters considered here. \nPrevious work with other morpologies also recovers similar falloff in these bands,\nsee e.g. \\cite{tanvir17}, though additional components could conceivably contribute. \n Similar $g$-band behavior has been seen in other\ndetailed kilonova simulations; see, e.g, Figure 12 in \\cite{2020ApJ...889..171K}.\nAs noted above, this behavior depends on the assumed composition, notably Zr. \n\n\n\n\n\n\n\\subsection{Interpolation Methodology}\n\\label{sec:interp_method}\n\n\nIn this work, we principally interpolate using Gaussian process (GP) regression. In GP regression, given\ntraining data pairs $(x_a,y_a)$, the estimated function $\\hat{y}(x)$ and its variance $s(x)^2$ are approximated by\n\\begin{subequations}\n\\label{eq:gp}\n\\begin{align}\n\\hat{y}(x) &= \\sum_{a,a'}k(x,x_a) K^{-1}_{aa'}y_{a'} \\\\\ns(x)^2 &= k(x,x) - k(x,x_a)K^{-1}_{aa'} k(x_{a'},x)\n\\end{align}\n\\end{subequations}\nwhere the matrix $K_{aa'} = k(x_a,x_{a'})$ and where the function $k(x,x')$ is called the kernel of the Gaussian\nprocess. In this work, unless otherwise noted, we used a squared-exponential kernel and a white noise (diagonal) kernel\n\\begin{eqnarray}\nk(x,x') = \\sigma_o^2 e^{-(x-x')Q(x-x')\/2} + \\sigma_n^2 \\delta_{x,x'}\n\\end{eqnarray}\nwhere $Q$ is a diagonal matrix of possible length scales and $\\sigma_0,\\sigma_n$ are hyperparameters that characterize\nthe amount of noise allowed in the problem. \nThe other interpolation method considered in this work was random forest (RF) regression \\cite{breiman2001}. Unlike the GP, the \nRF output had no error quantification and was used primarily as a consistency check on the Gaussian process\nprediction. \nUnless otherwise noted, we performed all GP and RF regression with \\textsc{scikit-learn} \\cite{scikit-learn}.\n\n\nBecause of the substantial dynamic range of our many outputs, we interpolate the $\\log_{10}$ luminosity (for\nplacement) or AB magnitudes (for all other results).\nUnless otherwise noted, we quantify the performance of our interpolation with the RMS difference between our prediction\nand the true value\n\\begin{equation}\n\\ell^2 = \\frac{1}{n} \\sum_{j=1}^{n} (y_j - \\log_{10}(L_\\text{bol})_j)^2,\n \\label{eq:simple_loss}\n\\end{equation}\n[This expression overweights the importance of large errors when the source is not detectable at late times; see Appendix\n\\ref{ap:validate_pe}].\n\nWe employ GP interpolation in two standard use cases. In the first case, used for our exported production results, we interpolate the AB magnitude $\nm_\\alpha(t_*|\\Lambda)$ at some fixed reference time $t_*$ and band $\\alpha$ versus our four simulation hyperparameters\n (and, in the end, also across the extrinsic parameters of angle and wavelength) contained in $\\Lambda$. In this case, the prediction $y(x_a)$ has a single scalar value at each point; the\n$x_a$ refer to model hyperparameters; and the interpolation provides us with a scalar function of four or more\nvariables. GP regression [Eq. (\\ref{eq:gp})] provides an error estimate for $m_\\alpha$ at this specific time\n$t_*$, which in general will depend on time.\n\n\nIn the second case, used for simulation placement, we interpolate the log bolometric luminosity\n\\emph{light curve} $\\log_{10} L_{bol}(t|\\Lambda)$ versus \\emph{all time}. \n[In terms of each simulation's spectrum, the bolometric luminosity is\n$\nL_{bol} = 4 \\pi R^2 \\int_0^{\\infty} F_{\\nu}d\\nu\n$ \nwhere R = 10 pc.]\nIn this case, the prediction $\\vec{y}(x_a)$ is vector-valued at each point; the $x_a$ refer to model\nhyperparameters; and the interpolation provides us with a vector-valued function of four or more variables.\nFor simplicity and given our use case, we reduce our error estimate to a single overall value for the entire light curve, reflecting the overall\nuncertainty in $\\vec{y}(x_a)$. \n\n\n\n\\subsection{Active Learning Scheme}\n\\label{sec:active_learning}\n\nGaussian processes have long been used for active learning because they provide an error estimate: follow-up simulations\ncan be targeted in regions with the largest expected error (and thus improvement) \\cite{book-Murphy-MachineLearning}.\nWe follow this approach in our active learning scheme; see \\cite{ZacksSolomon1970,krause07nonmyopic,Cohn1996,Gal2017,MacKay92bayesianmethods,Srinivas10,Mockus78,Wu16} for a broader discussion of active\nlearning methods and their tradeoffs. \nTo reduce the data volume needed for targeting followup simulations, we used vector-valued interpolation as described\nabove, applied to \\emph{orientation-averaged outputs} of our simulations. This\napproach has the substantial advantage of providing a single error estimate per light curve (both in training and off-sample), which we can immediately\nuse as an objective function in a minimization algorithm. \n\n\n\nWe pursued an active learning simulation placement approach in order to maximally explore the parameter space and reduce\nthe amount of redundant information obtained from each new simulation. The subset of \\nSimStart{} light curves discussed in Section\n\\ref{sec:kne_sims} was used as the initial training set.\nThousands of parameter combinations were subsequently drawn from uniform distributions with maxima and minima matching\nthose of the varied parameters in Table \\ref{tbl:grid_params}. Each of these parameter combinations was evaluated by an\ninterpolator to produce an initial light-curve prediction as well as an error on the entire light-curve output. The prediction with\nthe largest error across all the tested paremeter combinations was selected as the next placed simulation.\n\n\n\n\n %\n\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{Figures\/active_learning_before_logt}\n\\includegraphics[width=\\columnwidth]{Figures\/active_learning_after_logt}\n\\caption{\\label{fig:GP_before_after} \\textbf{Impact of adaptive placement on interpolation:}\n Example of interpolation output at a point with large predicted fitting error, both before and after placing the\n simulation. In both panels, the solid black curve shows the true simulated bolometric light curve versus time.\nThe red band shows the GP-predicted one-sigma error bar about the expected value.\n \\emph{Top panel}: Predictions from our RF and GP interpolations versus time. The large error and low practical\n utility of the GP fit is apparent. \\emph{Bottom panel}: After including this simulation in the training set, the revised RF and\n GP predictions much more closely conform with this specific simulation as expected.\n}\n\\end{figure}\n\n\n\n\\subsection{Prediction Improvement and Interpolation Results}\n\n\nWe verified our active learning strategy for simulation placement by randomly sampling combinations of\nparameters and creating two light curve predictions based on those parameters. The first prediction was trained solely on our\ninitial grid of simulations from Section \\ref{sec:kne_sims}, while the second prediction was trained on the same initial\ngrid, but with an added simulation output characterized by the aforementioned random combination of parameters. Figure\n\\ref{fig:GP_before_after} shows these before- and after-inclusion predictions which show that, as expected, the GP\ninterpolation capability is improved. \nThis pair of figures anecdotally illustrates the degree to which new training data improves our surrogate light curve models.\n\nWith over 400 placed simulations since the start of the active learning process, the training library is built up\nenough to allow for physically meaningful interpolation of off-sample events. The performance of our adaptive learning\nis best illustrated with our production-quality interpolation scheme, illustrated in Figure \\ref{fig:off_sample_interp}\nand described in the next section.\n\n\n\n\n\n\nDespite producing many follow-up simulations, we achieve success with a\nvery sparse coverage of our parameter space. To illustrate the sparsity of our parameter space coverage, and how slowly\nour added simulations increase coverage, we evaluated the median ``inter-simulation'' distance,\nusing a simple Euclidean ($L^2$) norm over $\\log_{10} L_{bol}(t_k)$ for several reference times $t_k$. \nAs expected given the high apparent dimension of our output, this median distance changes very\nslowly with $n$, owing to the large effective dimension of the output light curves. The median distance is also\n larger than the residual error in our fit, as reported below. The success of our interpolation relies not on an\noverwhelmingly large training sample, but on the smoothness and predictability of our physics-based light curves.\n\n\n\n\\section{Light curve interpolation}\n\\label{sec:Interpolation}\n\n\\subsection{Stitched fixed-time interpolation}\nTo efficiently interpolate across the whole model space, we follow a strategy illustrated in Figure 1 of\n\\cite{2014PhRvX...4c1006F}: we pick several fiducial reference times $t_{q}$ (and angles); use GP interpolation to produce an\nestimate $m_\\alpha(t_q|\\Lambda)$ versus $\\Lambda$; interpolate in time to construct a continuous\nlight curve at the model hyperparameters $\\Lambda$ at each reference angle; and then interpolate in angle to construct a\nlight curve for an arbitrary orientation. For an error estimate, we stitch together the error estimates in\neach band to produce a continuous function of time. \nFigure \\ref{fig:off_sample_interp} shows the output of our interpolation (smooth lines), compared to a validation\nsimulation at the same parameters (dashed lines). Our predictions generally agree, though less so for the shortest wavelengths at the\nlatest times. \nSubsequent figures also illustrate the typical GP error estimate, which is usually $O(0.1)$ in $\\log_{10} L$ for most bands\nand times considered. \n\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{Figures\/initial_grid_off_sample_interp_ABmag}\n\\includegraphics[width=\\columnwidth]{Figures\/off_sample_interp_logt_longer_times_ABmag}\n\\caption{\\textbf{Off-sample interpolation with original and refined grid: } Example of an interpolated stitched fixed-time prediction compared to a simulation\noutput created from the same corresponding input parameters. The top panel shows our estimate based on the initial\n$\\nSimStart$ simulations; the bottom panel shows the result after adaptive learning. Different colors denote different filter bands, described\nin the legend. The dashed lines show full simulation output for each band. The colored points show our interpolated\nbolometric magnitude predictions at the $\\nTimePoints$ evaluation times. The solid lines show our final\ninterpolated light curves, interpolating between the points shown. The largest error in this example occurs for the\n$g$ band at late times. The simulated parameters and viewing angle for this configuration are $M_{d}=0.097050 M_{\\odot}$, $M_{w}=0.083748 M_{\\odot}$,\n$v_d=0.197642 c$ and $v_w=0.297978 c$, viewed on axis ($\\theta=0$). The exaggerated modulations in the top panel's solid\nlines and dotted curves illustrate interpolation failures, arising from adopting an initially insufficient training set.}\n\\label{fig:off_sample_interp}\n\\end{figure}\n\n\n\n\\subsection{Trends identified with interpolated light curves}\nIn Figure \\ref{fig:CharacterizeTrends:OneParameter} we show the results of our fit evaluated at a fixed viewing angle ($\\theta=0$), varying one parameter at a time\ncontinuously, relative to a fiducial configuration with $M_{d}=M_{w}=0.01 M_\\odot$, $v_w\/c=v_d\/c=0.05$.\nThe fixed value for the ejected mass of $M=0.01 M_\\odot$ was chosen as the middle ground of the initial grid's sampled mass space, which\ndoes not introduce any biases toward lighter or heavier masses. Since no similar central value was initially available for the velocity\nparameters, the lower value was selected in the case of both components. The slower velocity resulted in the ejecta not dissipating\nas quickly and allowed for more variation in the light curves as the non-static parameter was varied. \nFor this viewing angle, changes in the amount and velocity of the dynamical ejecta have relatively modest effect,\nin large part because that ejecta is concentrated in the equatorial plane. By contrast, changes in the mostly polar\nwind ejecta has a much more substantial impact on the polar light curve ($\\theta=0$). \nSpecifically, increasing the amount of wind ejecta brightens and broadens the light curve, as expected from classic\nanalytic arguments pertaining to how much material the light must diffuse through \\cite{1980ApJ...237..541A,1982ApJ...253..785A,Chatzopoulos_2012,Metzger_2019}.\nSimilarly, increasing the velocity of wind ejecta causes the peak to occur at earlier times (diffusion is easier) and be\nbrighter.\n\n\n\\begin{figure}\n\\includegraphics[width=0.925\\columnwidth]{Figures\/filling_md_logt_longer_times_ABmag}\n\\includegraphics[width=0.925\\columnwidth]{Figures\/filling_mw_logt_longer_times_ABmag}\n\\includegraphics[width=0.925\\columnwidth]{Figures\/filling_vd_logt_longer_times_ABmag}\n\\includegraphics[width=0.925\\columnwidth]{Figures\/filling_vw_logt_longer_times_ABmag}\n\\caption{\\label{fig:CharacterizeTrends:OneParameter}\\textbf{Interpolated and simulated g-band light curves}:\n In this figure, we generate $\\log L_g(t|\\Lambda)$ for a\n one-parameter family of simulations $\\Lambda$ where either one of the $M$ parameters vary from $0.001 M_\\odot$ to $0.1\n M_\\odot$ or one of the $v$ parameters vary from $0.05c$ to $0.3c$, and the viewing angle is $\\theta=0$. \n The remaining model parameters are fixed to $(M\/M_\\odot,v\/$c$) = (0.01,0.05)$. Contours in $M$\n are uniform in $\\log M$, while those for $v$ are linearly uniform. For\n comparison, the heavy dashed lines show the initial training simulation results for the two parameter endpoints.\nThe $g$ band light curve has the largest dynamic range and is the most sensitive to interpolation errors; notably, the\ninterpolation does not always conform tightly to the underlying simulation data at late times.\n}\n\\end{figure}\n\n\\subsection{Interpolation in viewing angle}\n\nAll of the interpolated light curves discussed thus far have been trained at some fixed viewing angle. \nIn Figure \\ref{fig:AngleInterpolation}, we explore the interpolation of several families of models, each of which was \ntrained using simulation data at a different viewing angle. The symmetry of the ejecta across the orbital plane allows for\nthe assumption that any angular variation between $0$ and $\\pi\/2$ can simply be mirrored across the symmetry axis. \n\nFigure \\ref{fig:AngleInterpolation} indicates that the first day post-merger does not introduce much angular variation and, as such,\nis quite well predicted even when interpolating across only 11\nangles. After 1 day, the luminosity across different angles begins to change considerably as the peanut-shaped wind\nejecta becomes more dominant.\nAt late times, there is a strong periodic variability which manifests near the orbital plane, most strongly apparent in\nthe blue (g) and near infrared (K) bands. In the blue bands, the angular variation reflects lanthanide curtaining; in\nthe red bands, the angular variation reflects red emission from the late-peaking red dynamical ejecta. \n[At the latest times and faintest luminosities along the equatorial plane, numerical uncertainty in our Monte Carlo simulations are apparent in the\nlight-curve results.]\nIn all panels, the solid band denotes an estimated error bar from our GP fit in time, extended in angle.\n\n\\begin{figure}[ht!]\n\\includegraphics[width=0.925\\columnwidth]{Figures\/angle_interp_off_sample_g_ABmag}\n\\includegraphics[width=0.925\\columnwidth]{Figures\/angle_interp_off_sample_g_zoom_ABmag}\n\\includegraphics[width=0.925\\columnwidth]{Figures\/angle_interp_off_sample_y_ABmag}\n\\includegraphics[width=0.925\\columnwidth]{Figures\/angle_interp_off_sample_K_ABmag}\n\\caption{\\label{fig:AngleInterpolation} \\textbf{Interpolation of g-, y-, and K-band luminosity at different viewing angles}: This figure\ncompares the $g$-, $y$-, and $K$-band luminosity at select times as a function of viewing angle. The solid points represent fixed angles at which the\ndifferent families of models were trained. The solid lines connecting the points indicate the interpolated prediction of the\nangular variation at some given time in the light curve. The dashed lines represent the simulation data and show the true angular \nvariation.\nThe shaded regions denote the $1\\sigma$ error estimate derived from our Gaussian process fit versus time, extended in angle.\n}\n\\end{figure}\n\n\\subsection{Predictive Accuracy versus time, angle grid sizes}\n\nTo better understand the systematic limitations and computational inefficiencies introduced by our stitched-time\ninterpolation grid, we investigated the accuracy of our fits when only using a subset of the time or angular grid.\n\nFirst, we consider a simple analysis of loss of predictive accuracy as the number of GP interpolators used to make a surrogate light curve is decreased. We denote $t \\in T$ as the subset of \ntimes represented by the GP interpolators used to make a prediction, $T$ as the total available number of time points, and thus interpolators, which can be used to make a light curve, and $\\bar{t}$ \nas all the other times in T which are not represented by $t$ such that $t \\cap \\bar{t} = 0$ and $t \\cup \\bar{t} = T$.\n\nThus, when using any number of interpolators at times $t \\in T$ which is less than the total number of possible time points $T$, we first generate predictions $y(t)$ with the chosen subset of \ninterpolators. These predictions $y(t)$, along with the times $\\bar{t}$ which are outside of the chosen subset of interpolators, are then used as inputs for \\texttt{SciPy}'s UnivariateSpline method \nfrom which the remainder of the light curve $z(t) = f(\\bar{t}, y(t))$ is constructed.\n\nFigure \\ref{fig:residuals_vs_nintps} shows how the average residual between on-sample light curve predictions and the respective simulation data changes\nas a function of the number of time points used as the base for constructing the time-interpolated light curve. For the\ncurrent scheme, we can remove up to roughly 75\\% of the initial set of time points without substantially diminishing our\noverall accuracy. Future work will explore smarter selection of representative time points in an effort to further reduce\nthe number of interpolators which can be removed without significant loss of accuracy.\n\n\n\\begin{figure}\n\\includegraphics[width=0.925\\columnwidth]{Figures\/angle_comparison_kawaguchi20_fig7}\n\\caption{\\label{fig:AngularDependence:2} \\textbf{Light curve versus time for selected angles and bands}: Comparison to Figure 7 of \\cite{2020ApJ...889..171K}\nindicating angular dependence of light-curve predictions across the $g-$, $z-$ and $K-$bands.}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=0.925\\columnwidth]{Figures\/offset_vs_interps_used}\n\\caption{\\label{fig:residuals_vs_nintps} \\textbf{Average residual as a function of number of considered time points}:\nA plot of the average residuals between on-sample time-interpolated light curves and the respective simulation data as a function of how many time points are used to generate the light curves.\nIn each case, we drew the respective number of samples from a log-uniform distribution between the start and end time of our light curves.}\n\\end{figure}\n\n\n\\section{Parameter inference of radioactively-powered kilonovae }\n\\label{sec:PE}\n\nIn this section, we describe and demonstrate the algorithm we use to infer kilonova parameters given observations, using\nthe interpolated light-curve model above. \nUnless otherwise noted, for simplicity all\ncalculations in this section assume the kilonova event\ntime and distance are known parameters. We likewise assume observational errors are understood and well\ncharacterized by independent gaussian magnitude errors in each observation, and that our model families include the\nunderlying properties of the source (i.e., we neglect systematic modeling errors due to the parameters held constant in\nour simulation grid: morphology, initial composition, et cetera). \n\n\n\n\n\\subsection{Framework and validation}\nAs in many previous applications of Bayesian inference to infer parameters of kilonovae\n\\cite{gwastro-mergers-em-CoughlinGPKilonova-2020,2018MNRAS.480.3871C,2019MNRAS.489L..91C,2017ApJ...851L..21V,2017Natur.551...75S},\nwe seek to compare the observed magnitudes $x_i$ at evaluation points $i$ (denoting a combination of band and time) to a\ncontinuous model that makes predictions $m(i|{\\bm \\theta})$ [henceforth denoted by $m_i({\\bm \\theta})$ for brevity] which depend on some model parameters $\\theta$. Bayes\ntheorem expresses the posterior probability $p({\\bm\\theta})$ in terms of a prior probability $p_{\\rm\n prior}({\\bm\\theta})$ for the model parameters $\\bm\\theta$ and a likelihood ${\\cal L}(\\theta)$ of all observations,\ngiven the model parameters, as \n\\begin{equation}\np({\\bm \\theta}) = \\frac{{\\cal L}({\\bm \\theta}) p_{\\rm prior}({\\bm \\theta})}{\n \\int d {\\bm \\theta} {\\cal L}({\\bm \\theta}) p_{\\rm prior}({\\bm \\theta})\n}\n\\end{equation}\nUnless otherwise noted, for simplicity we assume the source sky location, distance, and merger time are known.\nWe adopt a uniform prior on the ejecta velocity $v\/c\\in[0.05,0.3]$ and a log uniform prior on the ejecta masses\n$m\/M_\\odot \\in [10^{-3},0.1]$. \nWe assume the observations have Gaussian-distributed \\emph{magnitude} errors with presumed known observational\n(statistical) uncertainties $\\sigma_i$, convolved with\nsome additional unknown systematic uncertainty $\\sigma$, so that \n\\begin{equation}\n \\ln \\mathcal{L}(\\bm{\\theta}) = -0.5 \\sum_{i=1}^n \\left [ \\frac {(x_i - m_i(\\bm{\\theta}))^2} {\\sigma_i^2 + \\sigma^2} + \\ln(2 \\pi (\\sigma_i^2 + \\sigma^2)) \\right ]\n\\end{equation}\nwhere the sum is taken over every data point in every band used in the analysis. In tests, we treat $\\sigma$ as an uncertain model\nparameter, de facto allowing for additional systematic observational uncertainty (or for some systematic theoretical\nuncertainty). For our GP surrogate models, we set $\\sigma$ to the estimated GP model error.\n\n\nUnlike prior work, we eschew Markov-chain Monte Carlo, instead constructing the posterior distribution by direct Monte\nCarlo integration as in \\cite{2015PhRvD..92b3002P,gwastro-PENR-RIFT}. To efficiently capture correlations, we employ a\ncustom adaptive Monte Carlo integrator; see \\citet{gwastro-RIFT-Update} for implementation details.\nIn Appendix \\ref{ap:validate_pe}, we describe several tests we performed to validate this inference technique using\nsynthetic kilonova data drawn from a previously published semianalytic kilonova model. Our tests include recovering\nthe parameters of a hundred synthetic kilonova sources.\nIn future work, we will demonstrate how our parameter inference method can be incorporated efficiently and\nsimultaneously with gravitational wave (GW)\nparameter inference with the rapid iterative fitting (RIFT) parameter estimation pipeline \\cite{gwastro-PENR-RIFT}.\n\n\\subsection{Inference with surrogate kilonova model}\n\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{Figures\/lc_test_kn_interp_angle_20210421}\n\\includegraphics[width=\\columnwidth]{Figures\/corner_test_kn_interp_angle_combined}\n\\caption{\\label{fig:pedemo:interp}\\textbf{Synthetic source recovery with surrogate model: }\nRecovery of a parameters of a known two-component surrogate kilonova model, using inference based on our interpolated model.\nSolid black curves show results adopting a strong angular prior motivated by radio observations of GW170817.\n\\emph{Top panel}: Synthetic light cuve data in several bands.\n\\emph{Bottom panel}: Inferred distribution of the four model parameters, and viewing angle. The blue cross denotes the\ninjected values. Red contours show results without adopting a prior on observing angle; black contours show results\ninferred when adopting a prior on viewing angle consistent with observations of GW170817.\n}\n\\end{figure}\n\n\n\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{Figures\/lc_simulation_injection_kn_interp_angle_20210413}\n\\includegraphics[width=\\columnwidth]{Figures\/corner_simulation_injection_kn_interp_angle_20210413}\n\\caption{\\label{fig:pedemo:interp_on_sim}\\textbf{Simulation parameter recovery with surrogate model: }\nRecovery of a parameters of a known two-component kilonova \\emph{simulation}, using inference based on our interpolated\nmodel. The parameters corresponding to the relevant simulation are $M_{d}=0.052780 M_{\\odot}$, $v_d=0.164316 c$, $M_{w}=0.026494 M_{\\odot}$,\nand $v_w=0.174017 c$.\n\\emph{Top panel}: Synthetic light cuve data in several bands.\n\\emph{Bottom panel}: Inferred distribution of the three model parameters. \n}\n\\end{figure}\n\n\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{Figures\/lc_simulation_injection_kilonova_20210118}\n\\includegraphics[width=\\columnwidth]{Figures\/corner_simulation_injection_kilonova_20210118}\n\\caption{\\label{fig:pedemo:model_on_sim}\\textbf{Simulation parameter recovery with analytic model: }\nRecovery of a parameters of a known two-component kilonova \\emph{simulation}, using inference based on the simplified\nanalytic model described in the appendix. The analytic model cannot fit our simuation data. While only a one-component\nfit is shown, similar results arise when employing multiple components.\nThe parameters corresponding to the relevant simulation are $M_{d}=0.01 M_{\\odot}$, $v_d=0.3 c$, $M_{w}=0.01 M_{\\odot}$,\nand $v_w=0.3 c$.\n\\emph{Top panel}: Synthetic light cuve data in several bands, including error bars on both the synthetic data and\nposterior light curve predictions. The analytic model cannot fit our simulated data well.\n\\emph{Bottom panel}: Inferred distribution of the three four model parameters. The blue cross denotes the injected\nvalues.\n}\n\\end{figure}\n\nFigure \\ref{fig:pedemo:interp} demonstrates parameter inference using our surrogate light curves, for a synthetic source\ngenerated using our own model. As expected, we can recover a known source, including constraining the viewing angle $\\theta$.\nFigure \\ref{fig:pedemo:interp_on_sim} performs a similar test, but now using a specific simulation, without\n interpolation. As expected given our adopted systematic error, we recover the simulation parameters.\nFinally, Figure \\ref{fig:pedemo:model_on_sim} repeats the test above, using the semianalytic model described in the\nappendix. This comparison emphatically demonstrates large systematic differences between this semianalytic model and\nour detailed simulations. \n\n\n\n\n\n\n\\subsection{Example: GW170817}\n\nSuperNu-based kilonova models have already been successfully used to interpret GW170817, \nthough as noted previously these models have a rapid falloff in the late-time optical magnitudes that is not present in\nthe observations; see \\cite{tanvir17}.\n Because\nof the close proximity of GW170817, only distance modulus (but not redshift) corrections are needed to translate our\npredictions to apparent magnitudes which can be directly compared to electromagnetic observations.\nObservational results are taken from \\citep{2017ApJ...851L..21V}'s compilation of\n photometry reported in \n\\cite{2017Natur.551...64A,tanvir17,troja17,2017ApJ...848L..17C,2017Sci...358.1570D,2017arXiv171005841S,2017ApJ...848L..24V,2017Natur.551...67P,2017Sci...358.1559K,2017Sci...358.1574S,2017PASJ...69..101U}.\nFigure \\ref{fig:170817:SimulationsOnly} shows the results of directly comparing our extended simulation archive directly\nto observations of GW170817, selecting for simulations (parameters and angles) with the highest overall likelihood. The\nsolid black curves in these figures show the 50 highest-likelihood configurations, where the likelihood requires\nsimultaneously reproducing all observed bands. Except for reddest three bands\n(JHK), many simulations compare extremely favorably to the observations. \nThe parameters of these simulations, however, do not represent the optimal parameters of this model family: because our placement\nalgorithm minimizes interpolation error, the selected points preferentially occur at the edges of our domain. \nFinally, for the reddest band (K), our fits exhibit notable systematic uncertainty relative to the underlying\nsimulation grid.\n\nWe have performed parametric inference on GW170817 using our surrogate light curve model to the underlying SuperNu\nresults. \nMotivated by the direct comparisons above, we perform two analyses. In the first, we use all observing data at all\ntimes.\nIn the second, we omit the reddest (K) band.\nFigure \\ref{fig:170817:ToyModel} shows the results of these comparisons. Beccause of the systematic fitting\nuncertainties at late times, we highlight the analysis omitting K band observations as our preferred result.\nThough previously-reported inferences about ejecta masses cover a considerable dynamic range (see, e.g., Fig. 1 in \\cite{2019EPJA...55..203S}), our inferred masses are\nqualitatively consistent with selected previous estimates including previous inferences with similar SuperNu models \n\\cite{tanvir17} and recent surrogate models adapted to simplified multidimensional radiative transfer\n\\cite{2020Sci...370.1450D}.\nNotably, however, we infer a large amount of ``dynamical'' (red, lanthanide-rich) ejecta mass (i.e.,\n$M_{ej}\\simeq O(1\/30) M_\\odot$), more dynamical ejecta than wind, and the velocities for the dynamical and wind component are inverted relative to customary expectations (i.e.,\n$v_d \\frac{n-1}{n}$, we find the constant\nmean curvature sphere-like hypersurfaces obtained in \\cite{HH89} and\nthe Delaunay-like hypersurfaces obtained in \\cite{PR99}. When $0< H\n\\le \\frac{n-1}{n}$, we obtain complete simply-connected\nhypersurfaces ${\\mathcal S}_H$ which are entire vertical graphs above\n$\\mathbb{H}^n$, as well as some complete embedded or complete immersed\ncylinders which are bi-graphs (Theorems~\\ref{T-h3-r0} and\n\\ref{T-h3-rp}). When $H=\\frac{n-1}{n}$, the asymptotic behaviour of\nthe height function of these hypersurfaces is exponential, and it\nonly depends on the dimension when $n \\ge 3$. In\nSection~\\ref{S-appli}, we give geometric applications using the\nsimply-connected rotation $H$-hypersurfaces ${\\mathcal S}_H$ ($0 < H \\le\n\\frac{n-1}{n}$) mentioned above as barriers. We give existence and\ncharacterization of vertical $H$-graphs ($0 < H \\le \\frac{n-1}{n}$)\nover appropriate bounded domains (Proposition \\ref{P-appl-2}) as\nwell as symmetry and uniqueness results for compact hypersurfaces\nwhose boundary is one or two parallel submanifolds in slices\n(Theorems \\ref{T-appl-6} and \\ref{T-appl-7}). These results\ngeneralize the $2$-dimensional results obtained previously in\n\\cite{NSST08}.\\bigskip\n\nWe treat translation $H$-hypersurfaces in Section\n\\ref{SS-dim3-Htransl} (Theorem \\ref{T-tra-1}). When $n\\ge 3$ and\n$H=\\frac{n-1}{n}$, we in particular find a complete embedded\nhypersurface generated by a compact, simple, strictly convex\ncurve.\\bigskip\n\nWhen $0 < H <\\frac{n-1}{n}$, we obtain a complete non-entire\nvertical graph over the non-mean convex domain bounded by an\nequidistant hypersurface $\\Gamma$. This graph takes infinite\nboundary value data on $\\Gamma$ and it has infinite asymptotic\nboundary value data. \\bigskip\n\n\nThe authors would like to thank the Mathematics Department of\nPUC-Rio (PB) and the Institut Fourier -- Universit\\'{e} Joseph Fourier\n(RSA) for their hospitality. They gratefully acknowledge the\nfinancial support of CNPq, FAPERJ (in particular \\emph{Pronex} and\n\\emph{Cientistas do nosso Estado}), Acordo Brasil - Fran\\c{c}a,\nUniversit\\'{e} Joseph Fourier and R\\'{e}gion Rh\\^{o}ne-Alpes.\\bigskip\n\n\n\\bigskip\n\\section{Examples of $H$-hypersurfaces in $\\mathbb{H}^n \\times \\mathbb{R}$}\\label{S-examples}\n\n\nWe consider the ball model for the hyperbolic space $\\mathbb{H}^n$,\n\n$$\\mathbb{B} := \\ens{(x_1, \\ldots , x_n) \\in \\mathbb{R}^n}{x_1^2 + \\cdots + x_n^2 < 1},$$\n\nwith the hyperbolic metric $g_{\\mathbb{B}}$,\n\n$$g_{\\mathbb{B}} := 4 \\big( 1 - (x_1^2 + \\cdots + x_n^2) \\big)^{-2} \\big( dx_1^2 +\n\\cdots + dx_n^2\\big),$$\n\nand the product metric\n\n$$\\hat{g} = g_{\\mathbb{B}} + dt^2$$\n\non $\\mathbb{H}^n \\times \\mathbb{R}$. \\bigskip\n\n\n\n\\subsection{Rotation $H$-hypersurfaces in $\\mathbb{H}^n \\times \\mathbb{R}$}\n\\label{SS-dim3-Hrot} \\bigskip\n\n\nThe mean curvature equation for rotation hypersurfaces,\n\n\\begin{equation*}\\label{E-h3-rot1a}\nn H(\\rho) \\sinh^{n-1}(\\rho) = \\partial_{\\rho} \\Big(\n\\sinh^{n-1}(\\rho) \\dot{\\lambda}(\\rho) (1 +\n\\dot{\\lambda}^2(\\rho))^{-1\/2}\\Big)\n\\end{equation*}\\medskip\n\ncan be established using the flux formula, see Appendix\n\\ref{S-vflux}. We consider rotation hypersurfaces about $\\{0\\}\n\\times \\mathbb{R}$, where $\\rho$ denotes the hyperbolic distance to the axis\nand the mean curvature is taken with respect to the unit normal\npointing upwards. \\bigskip\n\nMinimal rotation hypersurfaces in $\\mathbb{H}^n \\times \\mathbb{R}$ have been\nstudied in \\cite{ST05} in dimension $2$ and in \\cite{BS08a} in\nhigher dimensions. In this Section we consider the case in which $H$\nis a non-zero constant. We may assume that $H$ is positive.\\bigskip\n\nIntegrating the above differential equation, we obtain the equation\nfor the generating curves of rotation $H$-hypersurfaces in $\\mathbb{H}^n\n\\times \\mathbb{R}$,\n\n\\begin{equation}\\label{E-h3-rot1}\n\\dot{\\lambda}(\\rho) \\big( 1+ \\dot{\\lambda}^2(\\rho)\\big)^{-1\/2}\n\\sinh^{n-1}(\\rho) = nH \\int_0^{\\rho} \\sinh^{n-1}(t) \\, dt + d\n\\end{equation}\n\nfor $H > 0$ and for some constant $d$. \\bigskip\n\nThis equation has been studied in \\cite{NSST08, ST05} in dimension\n$2$ (with a different constant $d$). \\bigskip\n\n\\noindent \\textbf{Notations.}~ For later purposes we introduce some\nnotations. \\bigskip\n\n\\noindent $\\bullet $~ For $m \\ge 0$, we define the function $I_m(t)$ by\n\\begin{equation}\\label{E-h3-rot2}\nI_m(t) := \\int_0^{t} \\sinh^m(r) \\, dr.\n\\end{equation} \\bigskip\n\n\\noindent $\\bullet $~ For $H>0$ and $d \\in \\mathbb{R}$, we define the functions,\n\n\\begin{equation}\\label{E-h3-4}\n\\left\\{%\n\\begin{array}{lll}\n M_{H,d}(t) &:=& \\sinh^{n-1}(t) - nH I_{n-1}(t) - d,\\\\\n P_{H,d}(t) &:=& \\sinh^{n-1}(t) + nH I_{n-1}(t) + d,\\\\\n Q_{H,d}(t) &:=& \\big[nH I_{n-1}(t) + d\\big] \\big[M_{H,d}(t) \\,\n P_{H,d}(t)\\big]^{-1\/2}, \\\\\n && \\text{when the square root exists.}\\\\\n\\end{array}%\n\\right.\n\\end{equation}\\bigskip\n\nWe see from (\\ref{E-h3-rot1}) that $\\dot{\\lambda}(t)$ has the sign\nof $n H I_{n-1}(t) +d$. It follows that $\\lambda$ is given, up to an\nadditive constant, by\n\n$$\\lambda_{H,d}(\\rho) = \\int_{\\rho_0}^{\\rho} \\frac{nH I_{n-1}(t) +\nd}{\\sqrt{\\sinh^{2n-2}(t) - \\big(nH I_{n-1}(t) + d\\big)^2}}\\, dt$$\n\nor, with the above notations,\n\n\\begin{equation}\\label{E-h3-rot3}\n\\lambda_{H,d}(\\rho) = \\int_{\\rho_0}^{\\rho} \\frac{nH\nI_{n-1}(t)+d}{\\sqrt{M_{H,d}(t) P_{H,d}(t)}}\\, dt =\n\\int_{\\rho_0}^{\\rho} Q_{H,d}(t) \\, dt\n\\end{equation}\n\nwhere the integration interval $[\\rho_0,\\rho]$ is contained in the\ninterval in which the square-root exists. The existence and\nbehaviour of the function $\\lambda_{H,d}$ depend on the signs of the\nfunctions $nH I_{n-1}(t) + d$, $M_{H,d}(t)$ and $P_{H,d}(t)$.\n\\bigskip\n\n\nUp to vertical translations, the rotation hypersurfaces about the\naxis $\\{0\\} \\times \\mathbb{R}$, with constant mean curvature $H>0$ with\nrespect to the unit normal pointing upwards, can be classified\naccording to the sign of $H - \\frac{n-1}{n}$ and to the sign of $d$.\nWe state three theorems depending on the value of $H$. \\bigskip\n\n\\newpage\n\n\\begin{thm}[Rotation $H$-hypersurfaces with $H=\\frac{n-1}{n}$]\\label{T-h3-r0}\n$ $\n \\begin{enumerate}\n \\item When $d=0$, the hypersurface ${\\mathcal S}_{\\frac{n-1}{n}}$ is a\n simply-connected entire vertical graph above $\\mathbb{H}^n \\times \\{0\\}$,\n tangent to the slice at $0$, generated by a strictly convex\n curve. The height function $\\lambda (\\rho)$ on ${\\mathcal S}_{\\frac{n-1}{n}}$\n grows exponentially.\n\n \\item When $d>0$, the hypersurface ${\\mathcal C}_{\\frac{n-1}{n}}$ is a complete\n embedded cylinder, symmetric with respect to the slice $\\mathbb{H}^n \\times \\{0\\}$.\n The parts ${\\mathcal C}_{\\frac{n-1}{n}}^{\\pm} := {\\mathcal C}_{\\frac{n-1}{n}} \\cap \\mathbb{H}^n\n \\times \\mathbb{R}_{\\pm}$ are vertical graphs above the exterior of a ball\n $B(0,a)$, for some constant $a > 0$ depending on $d$.\n The height function $\\lambda (\\rho)$ on ${\\mathcal C}_{\\frac{n-1}{n}}^{\\pm}$ grows\n exponentially. When $n=2$, the solution exists when $0< d<1$ only.\n\n \\item When $d<0$, the hypersurface ${\\mathcal D}_{\\frac{n-1}{n}}$ is complete and\n symmetric with respect to the slice $\\mathbb{H}^n \\times \\{0\\}$. It has\n self-intersections along a sphere in $\\mathbb{H}^n \\times \\{0\\}$. The parts\n ${\\mathcal D}_{\\frac{n-1}{n}}^{\\pm} := {\\mathcal D}_{\\frac{n-1}{n}} \\cap \\mathbb{H}^n\n \\times \\mathbb{R}_{\\pm}$ are vertical graphs above the exterior of a ball\n $B(0,a)$, for some constant $a > 0$ depending on $d$. The height\n function $\\lambda (\\rho)$ on ${\\mathcal D}_{\\frac{n-1}{n}}^{\\pm}$ grows exponentially.\n \\end{enumerate}\n\n The asymptotic behaviour of the height function when $\\rho$ tends to\n infinity is as follows.\n$$\n\\left\\{%\n\\begin{array}{l}\n\\text{For~ } n=2, ~\\lambda (\\rho) \\sim\n \\frac{e^{\\rho\/2}}{\\sqrt{1 - d}}.\\\\[8pt]\n\\text{For~ } n=3, ~\\lambda (\\rho) \\sim\n \\frac{1}{2 \\sqrt{2}} \\int^{\\rho} \\frac{e^{t}}{\\sqrt{t}}\\, dt.\\\\[8pt]\n\\text{For~ } n \\ge 4, ~\\lambda (\\rho) \\sim a(n) e^{b(n)t}, \\text{\n~for\nsome positive constants~ } a(n), b(n).\\\\\n\\end{array}%\n\\right.\n$$\n\\end{thm}\n\nThe generating curves are obtained by symmetries from the curves\n$(=)$ (standing for $H = \\frac{n-1}{n}$) which appear in\nFigures~\\ref{F-rot-1}-\\ref{F-rot-3}. \\bigskip\n\n\n\\begin{pb1-figs}\n\\begin{figure}[h!]\n\\begin{center}\n\\begin{minipage}[c]{6.5cm}\n \\includegraphics[width=6.5cm]{f-rot-1.eps}\n \\caption[Case $d=0$]{Case $d=0$}\n \\label{F-rot-1}\n\\end{minipage}\\hfill\n\\begin{minipage}[c]{6.5cm}\n \\includegraphics[width=6.5cm]{f-rot-2.eps}\n \\caption[Case $d>0$]{Case $d>0$}\n \\label{F-rot-2}\n\\end{minipage}\\hfill\n\\end{center}\n\\end{figure}\\bigskip\n\\end{pb1-figs}\n\n\\begin{pb1-figs}\n\\begin{figure}[h!]\n\\begin{center}\n\\begin{minipage}[c]{6.5cm}\n \\includegraphics[width=6.5cm]{f-rot-3.eps}\n \\caption[Case $d<0$]{Case $d<0$}\n \\label{F-rot-3}\n\\end{minipage}\\hfill\n\\end{center}\n\\end{figure}\\bigskip\n\\end{pb1-figs}\n\n\\textbf{Remark.}~ When $n=2$ the asymptotic growth depends on the\nvalue of the integration contant $d$. \\bigskip\n\n\n\\begin{thm}[Rotation $H$-hypersurfaces with $0 < H < \\frac{n-1}{n}$]\\label{T-h3-rp}\n$ $\n \\begin{enumerate}\n \\item When $d=0$, the hypersurface ${\\mathcal S}_{H}$ is a\n simply-connected entire vertical graph above $\\mathbb{H}^n \\times \\{0\\}$,\n tangent to the slice at $0$, generated by a strictly convex\n curve. The height function $\\lambda (\\rho)$ on ${\\mathcal S}_{H}$ grows linearly.\n\n \\item When $d>0$, the hypersurface ${\\mathcal C}_{H}$ is a complete embedded cylinder,\n symmetric with respect to the slice $\\mathbb{H}^n \\times \\{0\\}$. The parts\n ${\\mathcal C}_{H}^{\\pm} := {\\mathcal C}_{H} \\cap \\mathbb{H}^n \\times \\mathbb{R}_{\\pm}$ are vertical graphs above\n the exterior of a ball $B(0,a)$, for some constant $a > 0$ depending on\n $H$ and $d$. The height function $\\lambda (\\rho)$ on ${\\mathcal C}_{H}^{\\pm}$ grows linearly.\n\n \\item When $d<0$, the hypersurface ${\\mathcal D}_{H}$ is complete and symmetric with\n respect to the slice $\\mathbb{H}^n \\times \\{0\\}$. It has self-intersections along a\n sphere in $\\mathbb{H}^n \\times \\{0\\}$. The parts ${\\mathcal D}_{H}^{\\pm} := {\\mathcal D}_{H} \\cap\n \\mathbb{H}^n \\times \\mathbb{R}_{\\pm}$ are vertical graphs above\n the exterior of a ball $B(0,a)$, for some constant $a > 0$ depending on $H$\n and $d$. The height function $\\lambda(\\rho)$ on ${\\mathcal D}_{H}^{\\pm}$ grows linearly.\n \\end{enumerate}\n\nThe asymptotic behaviour of the height function when $\\rho$ tends to\ninfinity is given by\n$$\\lambda (\\rho) \\sim \\dfrac{\\frac{nH}{n-1}}{\\sqrt{1 - (\\frac{nH}{n-1})^2}} \\, \\rho.$$\n\\end{thm}\\bigskip\n\nThe generating curves are obtained by symmetries from the curves\n$(<)$ (standing for $H < \\frac{n-1}{n}$) which appear in\nFigures~\\ref{F-rot-1}-\\ref{F-rot-3}.\\bigskip\n\n\\newpage\n\n\\begin{thm}[Rotation $H$-hypersurfaces with $H > \\frac{n-1}{n}$]\\label{T-h3-rg}\n$ $\n \\begin{enumerate}\n \\item When $d=0$, the hypersurface ${\\mathcal K}_H$ is\n compact and diffeomorphic to an $n$-dimensional sphere. It\n is generated by a compact, simple, strictly convex curve.\n\n \\item When $d>0$, the hypersurface ${\\mathcal U}_H$ is\n complete, embedded and periodic in the $\\mathbb{R}$-direction. It looks like\n an unduloid and is contained in a domain of the form $B(0,b)\n \\setminus B(0,a) \\times \\mathbb{R}$, for some constants $0 < a < b$,\n depending on $H$ and $d$.\n\n \\item When $d<0$, the hypersurface ${\\mathcal N}_H$ is\n complete and periodic in the $\\mathbb{R}$-direction. It has\n self-intersections, looks like\n a nodoid and is contained in a domain of the form $B(0,b)\n \\setminus B(0,a) \\times \\mathbb{R}$, for some constants $0 < a < b$\n depending on $H$ and $d$.\n \\end{enumerate}\n\\end{thm}\\bigskip\n\n\nThe generating curves are obtained by symmetries from the curves\n$(>)$ (standing for $H > \\frac{n-1}{n}$) which appear in\nFigures~\\ref{F-rot-1}-\\ref{F-rot-3}. \\bigskip\n\n\n\n\n\n\\textbf{Remarks}\\bigskip\n\n1.~ Constant mean curvature rotation hypersurfaces with $H >\n\\frac{n-1}{n}$ were obtained in \\cite{HH89} and \\cite{PR99}.\\medskip\n\n2.~ The hypersurfaces ${\\mathcal S}_H$ and the upper (lower) halves of the\ncylinders ${\\mathcal C}_H$ in Theorems~\\ref{T-h3-r0} and \\ref{T-h3-rp} are\nstable (as vertical graphs).\n\\bigskip\n\n\n\\subsection{Proofs of Theorem \\ref{T-h3-r0} - \\ref{T-h3-rg}}\n\n\nThe proofs follow from an analysis of the asymptotic behaviour of\n$I_m(t)$ (Formula (\\ref{E-h3-rot2})) when $t$ goes to infinity and\nfrom an analysis of the signs of the functions $nH I_{n-1}(t) + d$,\n$M_{H,d}(t)$ and $P_{H,d}(t)$ (Formulas (\\ref{E-h3-4})), using the\ntables which appear below.\n\\bigskip\n\nWhen $d=0$, using (\\ref{E-h3-rot1}) one can show that\n$\\ddot{\\lambda} > 0$ and conclude that the generating curve is\nstrictly convex. When $d \\le 0$, the formula for $\\ddot{\\lambda}$\nalso shows that the curvature extends continuously at the vertical\npoints.\\bigskip\n\n\n\\noindent \\textbf{Proof of Theorem \\ref{T-h3-r0}}\\bigskip\n\nAssume $H = \\frac{n-1}{n}$. \\bigskip\n\nWhen $d=0$, the functions $M_{H,0}$ and $P_{H,0}$ are non-negative\nand vanish at $t=0$. Near $0$ we have $Q_{H,0}(t) \\sim Ht$ and hence\n$\\lambda_{H,0}(\\rho) = \\int_0^{\\rho} Q_{H,0}(t) \\, dt \\sim\n\\frac{H}{2}\\rho^2$.\n\\bigskip\n\nWhen $d>0$, the function $Q_{H,d}$ exists on an interval $]a_{H,d},\n\\infty[$ for some constant $a_{H,d} > 0$ and the integral\n$\\int_{a_{H,d}}^{\\rho}Q_{H,d}(t) \\, dt$ converges at\n$a_{H,d}$.\\bigskip\n\nWhen $d<0$, the function $Q_{H,d}$ exists on an interval\n$]\\alpha_{H,d}, \\infty[$ for some constant $\\alpha_{H,d}\n> 0$, changes sign from negative to positive, the integral\n$\\int_{\\alpha_{H,d}}^{\\rho}Q_{H,d}(t) \\, dt$ converges at\n$\\alpha_{H,d}$ and the curve has a vertical tangent at this point.\nThe generating curve can be extended by symmetry to a complete curve\nwith one self-intersection.\\bigskip\n\nUsing the recurrence relations for the functions $I_m(t)$ one can\ndetermine their asymptotic behaviour at infinity and deduce the\nprecise exponential growth of the height function $\\lambda(\\rho)$.\n\n\\hfill \\hfill\\penalty10000\\copy\\qedbox\\par\\medskip\n\\bigskip\n\n\\noindent \\textbf{Proof of Theorem \\ref{T-h3-rp}}\\bigskip\n\nAssume $0 < H < \\frac{n-1}{n}$. \\bigskip\n\nWhen $d=0$, the functions $M_{H,0}$ and $P_{H,0}$ are non-negative\nand vanish at $t=0$. Near $0$ we have $Q_{H,0}(t) \\sim Ht$ and hence\n$\\lambda_{H,0}(\\rho) = \\int_0^{\\rho} Q_{H,0}(t) \\, dt \\sim\n\\frac{H}{2}\\rho^2$. \\bigskip\n\nWhen $d>0$, the function $Q_{H,d}$ exists on an interval $]a_{H,d},\n\\infty[$ for some constant $a_{H,d} > 0$ and the integral\n$\\int_{a_{H,d}}^{\\rho}Q_{H,d}(t) \\, dt$ converges at\n$a_{H,d}$.\\bigskip\n\nWhen $d<0$, the function $Q_{H,d}$ changes sign from negative to\npositive, exists on an interval $]\\alpha_{H,d}, \\infty[$ for some\nconstant $\\alpha_{H,d} > 0$, the integral\n$\\int_{\\alpha_{H,d}}^{\\rho}Q_{H,d}(t) \\, dt$ converges at\n$\\alpha_{H,d}$ and the generating curve has a vertical tangent at\nthis point. The generating curve can be extended by symmetry to a\ncomplete curve with one self-intersection.\\bigskip\n\nUsing the recurrence relations for the functions $I_m(t)$ one can\ndetermine their asymptotic behaviour at infinity and deduce the\nprecise linear growth of the height function $\\lambda(\\rho)$.\n\n\\hfill \\hfill\\penalty10000\\copy\\qedbox\\par\\medskip\n\\bigskip\n\n\\noindent \\textbf{Proof of Theorem \\ref{T-h3-rg}}\\bigskip\n\nAssume $H > \\frac{n-1}{n}$. \\bigskip\n\nWhen $d=0$, $Q_{H,0}(t)$ exists on some interval $]0,a_{H,0}[$ for\nsome positive $a_{H,0}$ and the integral $\\lambda_{H,0}(\\rho) =\n\\int_0^{\\rho} Q_{H,0}(t) \\, dt$ converges at $0$ and at $a_{H,0}$.\nThe generating curve has a horizontal tangent at $0$ and a vertical\ntangent at $a_H$. It can be extended by symmetries to a closed\nembedded convex curve.\n\\bigskip\n\nWhen $d>0$, the function $Q_{H,d}(t)$ exists on an interval\n$]b_{H,d},c_{H,d}[$ for some constants $0 < b_{H,d} < c_{H,d}$ and\nthe integral converges at the limits of this interval. The\ngenerating curve at these points is vertical. It can be extended by\nsymmetry to a complete embedded periodic curve (unduloid). \\bigskip\n\nWhen $d<0$, the function $Q_{H,d}(t)$ exists on an interval\n$]\\beta_{H,d},\\gamma_{H,d}[$ for some constants $0 < \\beta_{H,d} <\n\\gamma_{H,d}$, changes sign from negative to positive and the\nintegral converges at the limits of this interval. The generating\ncurve at these points is vertical. The generating curve can extended\nby symmetries to a complete periodic curve with self-intersections\n(nodoid).\n\n\\hfill \\hfill\\penalty10000\\copy\\qedbox\\par\\medskip\n\\bigskip\n\n\n\\textbf{Remark.}~ We note that the integrand $Q_{H,d}(t)$ in\n(\\ref{E-h3-rot3}) is an increasing function of $H$ for $t$ and $d$\nfixed. This fact provides the relative positions of the curves\n$\\lambda_{H,d}(\\rho)$ when $\\rho$ and $d$ are fixed. The curve\ncorresponding to $H > \\frac{n-1}{n}$ is above the curve\ncorresponding to $H = \\frac{n-1}{n}$ which is above the curve\ncorresponding to $H < \\frac{n-1}{n}$. See Figures \\ref{F-rot-1} to\n\\ref{F-rot-3}.\\bigskip\n\n\n\nThe above sketches of proof can be completed using the details\nbelow.\\bigskip\n\n\\noindent $\\bullet $~ We have the following relations for the functions $I_m$,\n\n\\begin{equation}\\label{E-h3-rot2a}\n\\left\\{%\n\\begin{array}{llll}\n m=0 & I_0(t) & = & t, \\\\\n m=1 & I_1(t) & = & \\cosh(t) - 1, \\\\\n m=2 & 2I_2(t) & = & \\sinh(t) \\cosh(t) - t, \\\\\n m=3 & 3 I_3(t) & = & \\sinh^2(t) \\cosh(t) - 2(\\cosh(t)-1), \\\\\n m\\ge 2 & m I_m(t) & = & \\sinh^{m-1}(t) \\cosh(t) - (m-1) I_{m-2}(t). \\\\\n\\end{array}\n\\right.\n\\end{equation}\\bigskip\n\nFor $m\\ge 5$, the asymptotic behavior of $I_m(t)$ near infinity is\ngiven by,\n\n\\begin{equation}\\label{E-h3-rot2b}\n\\left\\{%\n\\begin{array}{lll}\n m I_m(t) & = & \\sinh^{m-3}(t) \\cosh(t) \\big(\\sinh^2(t) - \\frac{m-1}{m-2}\\big)\n + O(e^{(m-4)t}),\\\\\n m I_m(t) & = & \\sinh^{m-1}(t) \\cosh(t) \\big( 1 + O(e^{-2t})\\big). \\\\\n\\end{array}\n\\right.\n\\end{equation}\\bigskip\n\nThe same holds for $m=4$ with remainder term $O(t)$ in the first\nrelation. \\bigskip\n\n\\noindent $\\bullet $~ The derivative of $P_{H,d}$ is positive for $t$ positive. The\nbehaviour of the function $P_{H,d}(t)$ is summarized in the\nfollowing table.\n\\bigskip\n\n\\begin{equation*}\n \\begin{array}{|c|ccc|}\n\\hline\nn\\ge 2 & & 0 < H &\\\\\n\\hline\n t & 0 & & \\infty \\\\\n\\hline\n \\partial_t P_{H,d} & & + & \\\\\n \\hline\n P_{H,d}(t) & d & \\nearrow & \\infty \\\\\n \\hline\n\\end{array}\n\\end{equation*}\\bigskip\n\n\n\\noindent $\\bullet $~ The derivative of $M_{H,d}$ is given by $\\partial_t M_{H,d}(t)\n= (n-1) \\sinh^{n-1}(t) \\big( \\coth(t) - \\frac{nH}{n-1}$\\big). For $H\n> \\frac{n-1}{n}$, we denote by $C_H$ the number such that\n$\\coth(C_H) = \\frac{nH}{n-1}$. The behaviour of the function\n$M_{H,d}(t)$ is summarized in the following tables.\n\n\\begin{equation*}\n \\begin{array}{|c|ccc||ccccc|}\n \\hline\n n=2 & & 0 < H \\le \\frac{1}{2} &&&& H > \\frac{1}{2}&&\\\\\n\\hline\n t & 0 & & \\infty & 0&&C_H&& \\infty \\\\\n\\hline\n \\partial_t M_{H,d} & & + & && +&0&-&\\\\\n\\hline\n M_{H,d}(t)& -d & \\nearrow &\n\\left\\{%\n\\begin{array}{lr}\n \\infty, & H < \\frac{1}{2} \\\\[4pt]\n 1-d, & H = \\frac{1}{2}\\\\\n\\end{array}\n\\right.\n & -d & \\nearrow & f_H(d) &\\searrow & -\\infty \\\\\n \\hline\n\\end{array}\n\\end{equation*}\\bigskip\n\n\n\\begin{equation*}\n \\begin{array}{|c|ccc||ccccc|}\n \\hline\nn\\ge 3 && 0 < H \\le \\frac{n-1}{n} && &&H > \\frac{n-1}{n}&&\\\\\n\\hline\n t & 0 & & \\infty &0&&C_H&& \\infty \\\\\n\\hline\n \\partial_t M_{H,d} & & + && &+& 0&-&\\\\\n\\hline\n M_{H,d}(t) & -d & \\nearrow & \\infty & -d & \\nearrow & f_H(d) &\\searrow & -\\infty \\\\\n\\hline\n\\end{array}\n\\end{equation*}\\bigskip\n\nwhere $f_H(d) := M_{H,d}(C_H) = \\sinh^{n-1}(C_H) - nH I_{n-1}(C_H)\n-d$. \\bigskip\n\n\n\nThe signs and zeroes of the functions $M_{H,d}(t)$ and $P_{H,d}(t)$\nwhen $d \\not = 0$ are summarized in the following charts, together\nwith the existence domain of the function $Q_{H,d}$.\\bigskip\n\nWhen $d > 0$, we have\n\n\n\\begin{equation*}\n \\begin{array}{|c|cccccc|}\n\\hline n=2 &&\n\\begin{array}{c}\n0 < H < \\frac{1}{2},\\\\\nH=\\frac{1}{2},\n\\end{array}\n&&\n\\begin{array}{c}\n0 < d \\\\\n0 \\frac{n-1}{n}\\\\\n0 < d < D_H\n\\end{array}\n&&&\\\\\n\\hline\nt & 0 & & b_{H,d} & C_{H} & c_{H,d} & & \\infty \\\\\n\\hline\nM_{H,d}& &-& 0 & + & 0& -& \\\\\n\\hline\nP_{H,d}& &+& & + & & +& \\\\\n\\hline\nQ_{H,d}& &\\not \\exists & +\\infty & \\exists & +\\infty & \\not \\exists & \\\\\n\\hline\n \\end{array}\n\\end{equation*}\\bigskip\n\n\n\nwhere $D_H := \\sinh^{n-1}(C_H) - nH I_{n-1}(C_H)$. \\bigskip\n\n\n\n\n\\newpage\n\nWhen $d<0$, we have the following tables.\\bigskip\n\n\\begin{equation*}\n \\begin{array}{|c|ccccccc|}\n\\hline n \\ge 2 &&&&\n\\begin{array}{c}\n0 < H \\le \\frac{n-1}{n}\\\\\nd < 0\n\\end{array}\n&&&\\\\\n\\hline\nt & 0 & & & \\alpha_{H,d} & & & \\infty \\\\\n\\hline\nM_{H,d}& && + & & +& & \\\\\n\\hline\nP_{H,d}& && - & 0 & +& & \\\\\n\\hline\nQ_{H,d}& & & \\not \\exists & - \\infty & \\exists & & \\\\\n\\hline\n \\end{array}\n\\end{equation*}\\bigskip\n\nNote that the function $Q_{H,d}$ changes sign from negative to\npositive when $t$ goes from $\\alpha_{H,d}$ to infinity.\\bigskip\n\n\\begin{equation*}\n \\begin{array}{|c|ccccccc|}\n\\hline n \\ge 2 &&&&\n\\begin{array}{c}\nH > \\frac{n-1}{n}\\\\\nd < 0\n\\end{array}\n&&&\\\\\n\\hline\nt & 0 & & \\gamma_{H,d} & & \\beta_{H,d} & & \\infty \\\\\n\\hline\nM_{H,d}& &+& + & +& 0& -& \\\\\n\\hline\nP_{H,d}& &-& 0 & + & +& +& \\\\\n\\hline\nQ_{H,d}& &\\not \\exists & -\\infty & \\exists & +\\infty & \\not \\exists & \\\\\n\\hline\n \\end{array}\n\\end{equation*}\\bigskip\n\n\nNote that the function $Q_{H,d}$ changes sign from negative to\npositive when $t$ goes from $\\gamma_{H,d}$ to $\\beta_{H,d}$.\\bigskip\n\n\n\n\n\n\\subsection{Translation invariant $H$-hypersurfaces\nin $\\mathbb{H}^n \\times \\mathbb{R}$}\\label{SS-dim3-Htransl}\n\\bigskip\n\n\\subsubsection{Translation hypersurfaces}\n\\bigskip\n\n\\noindent $\\bullet $~ \\textbf{Definitions and Notations.}~ We consider $\\gamma$ a\ngeodesic through $0$ in $\\mathbb{H}^n$ and the totally geodesic vertical\nplane $\\mathbb{V} = \\gamma \\times \\mathbb{R} = \\ens{(\\gamma (\\rho),t)}{(\\rho ,t) \\in\n\\mathbb{R} \\times \\mathbb{R}}$ where $\\rho$ is the signed hyperbolic distance to $0$\non $\\gamma$.\n\\bigskip\n\nTake $\\mathbb{P}$ a totally geodesic hyperplane in $\\mathbb{H}^n$, orthogonal to\n$\\gamma$ at $0$. We consider the hyperbolic translations with\nrespect to the geodesics $\\delta$ through $0$ in $\\mathbb{P}$. We shall\nrefer to these translations as translations with respect to $\\mathbb{P}$.\nThese isometries of $\\mathbb{H}^n$ extend ``slice-wise'' to isometries of\n$\\mathbb{H}^n \\times \\mathbb{R}$. \\bigskip\n\nIn the vertical plane $\\mathbb{V}$, we consider the curve $c(\\rho) := \\big(\n\\tanh(\\rho \/2), \\mu(\\rho)\\big)$. \\bigskip\n\nIn $\\mathbb{H}^n \\times \\{\\mu(\\rho)\\}$, we translate the point $c(\\rho)$ by\nthe translations with respect to $\\mathbb{P}\\times \\{\\mu(\\rho)\\}$ and we\nget the equidistant hypersurface $\\mathbb{P}_{\\rho}$ passing through\n$c(\\rho)$, at distance $\\rho$ from $\\mathbb{P}\\times \\{\\mu(\\rho)\\}$. The\ncurve $c$ then generates a \\emph{translation hypersurface} $M =\n\\cup_{\\rho}\\mathbb{P}_{\\rho}$ in $\\mathbb{H}^n \\times \\mathbb{R}$.\\bigskip\n\n\\noindent $\\bullet $~ \\textbf{Principal curvatures.}~ The principal directions of\ncurvature of $M$ are the tangent to the curve $c$ in $\\mathbb{V}$ and the\ndirections tangent to $\\mathbb{P}_{\\rho}$. The corresponding principal\ncurvatures with respect to the unit normal pointing upwards are\ngiven by\n\n\\begin{equation*}\\label{E-tra-1}\n\\left\\{%\n\\begin{array}{lll}\n k_{\\mathbb{V}} & = & \\ddot{\\mu}(\\rho) \\big( 1 + \\dot{\\mu}^2(\\rho) \\big)^{-3\/2}, \\\\\n k_{\\mathbb{P}} & = & \\dot{\\mu}(\\rho) \\big( 1 + \\dot{\\mu}^2(\\rho) \\big)^{-1\/2}\n \\tanh(\\rho). \\\\\n\\end{array}%\n\\right.\n\\end{equation*}\\bigskip\n\nThe first equality comes from the fact that $\\mathbb{V}$ is totally geodesic\nand flat. The second equality follows from the fact that\n$\\mathbb{P}_{\\rho}$ is totally umbilic and at distance $\\rho$ from\n$\\mathbb{P}\\times \\{\\mu(\\rho)\\}$ in $\\mathbb{H}^n \\times \\{\\mu(\\rho)\\}$. \\bigskip\n\n\n\\noindent $\\bullet $~ \\textbf{Mean curvature.}~ The mean curvature of the\ntranslation hypersurface $M$ associated with $\\mu$ is given by\n\n\n\n\\begin{equation}\\label{E-tra-3}\nn H(\\rho) \\cosh^{n-1}(\\rho) = \\partial_{\\rho} \\Big(\n\\cosh^{n-1}(\\rho) \\dot{\\mu}(\\rho) \\big( 1 + \\dot{\\mu}^2(\\rho)\n\\big)^{-1\/2} \\Big).\n\\end{equation}\\bigskip\n\n\n\\subsubsection{Constant mean curvature translation hypersurfaces}\n\\bigskip\n\n\nWe may assume that $H \\ge 0$. The generating curves of translation\nhypersurfaces with constant mean curvature $H$ are given by the\ndifferential equation\n\n\\begin{equation}\\label{E-tra-4}\n\\dot{\\mu}(\\rho) \\big( 1 + \\dot{\\mu}^2(\\rho) \\big)^{-1\/2}\n\\cosh^{n-1}(\\rho) = n H \\int_0^{\\rho} \\cosh^{n-1}(t) \\, dt + d\n\\end{equation}\n\nfor some integration constant $d$.\\bigskip\n\nMinimal translation hypersurfaces have been studied in \\cite{Sa08,\nST08} in dimension $2$ and in \\cite{BS08a} in higher dimensions.\nConstant mean curvature ($H \\not = 0$) translation hypersurfaces\nhave been treated in \\cite{Sa08} in dimension $2$. The purpose of\nthe present section is to investigate the higher dimensional\ntranslation $H$-hypersurfaces.\\bigskip\n\n\n\\textbf{Notations.}~ For later purposes, we introduce some\nnotations.\\bigskip\n\n\\noindent $\\bullet $~ For $m \\ge 0$, we define the functions\n\n\\begin{equation}\\label{E-tra-5a}\nJ_m(r) := \\int_0^r \\cosh^m(t) \\, dt.\n\\end{equation}\\bigskip\n\n\n\\noindent $\\bullet $~ For $H > 0$ and $d \\in \\mathbb{R}$, we introduce the functions,\n\n\\begin{equation}\\label{E-tra-11}\n\\left\\{%\n\\begin{array}{lll}\nR_{H,d}(t) & = & \\cosh^{n-1}(t) - nH J_{n-1}(t) -d ,\\\\\nS_{H,d}(t) & = & \\cosh^{n-1}(t) + nH J_{n-1}(t) +d ,\\\\\nT_{H,d}(t) & = & \\big[ nH J_{n-1}(t) + d \\big]\n\\big[R_{H,d(t)} S_{H,d}(t) \\big]^{-1\/2}.\\\\\n\\end{array}%\n\\right.\n\\end{equation}\\bigskip\n\n\n\nWe note from (\\ref{E-tra-4}) that $\\dot{\\mu}(t)$ has the sign of\n$nHJ_{n-1}(t) +d$. It follows that $\\mu$ is given (up to an additive\ncontant) by\n\n\\begin{equation*}\\label{E-tra-8a}\n\\mu_{H,d} (\\rho) = \\int_{\\rho_0}^{\\rho} \\big[ nH J_{n-1}(t) + d\n\\big] \\big[ \\cosh^{2n-2}(t) - \\big( nH J_{n-1}(t) + d \\big)^2\n\\big]^{-1\/2} \\, dt\n\\end{equation*}\n\nor, using the above notations,\n\n\n\\begin{equation}\\label{E-tra-8}\n\\mu_{H,d} (\\rho) = \\int_{\\rho_0}^{\\rho} \\big[ nH J_{n-1}(t) + d\n\\big] \\big[R_{H,d(t)} \\, S_{H,d}(t) \\big]^{-1\/2} \\, dt =\n\\int_{\\rho_0}^{\\rho} T_{H,d}(t) \\, dt \\, ,\n\\end{equation}\n\nwhere the integration interval $[\\rho_0, \\rho]$ is contained in the\ninterval in which the square root exists. The existence and\nbehaviour of the function $\\mu_{H,d}$ depend on the signs of the\nfunctions $nH J_{n-1}(t) + d$, $R_{H,d}(t)$ and\n$S_{H,d}(t)$.\\bigskip\n\n\nFor $H=\\frac{n-1}{n}$, we give a complete description of the\ncorresponding translation $H$-hypersurfaces. For $0 < H <\n\\frac{n-1}{n}$, we prove the existence of a complete non-entire\n$H$-graph with infinite boundary data and infinite asymptotic\nbehaviour. The other cases can be treated similarly using the tables\nbelow. \\bigskip\n\n\n\\begin{thm}[Translation $H$-hypersurfaces, with $n\\ge 3$ and $H =\n\\frac{n-1}{n}$]\\label{T-tra-1}\n$ $\n\\begin{enumerate}\n \\item When $d=0$, ${\\mathcal T}_0$ is a complete embedded smooth\n hypersurface generated by a compact, simple, strictly convex curve.\n The hypersurface is symmetric with respect\n to a horizontal hyperplane and the parts above and below this hyperplane\n are vertical graphs. The hypersurface also admits a vertical\n symmetry. The asymptotic boundary of ${\\mathcal T}_0$ is topologically\n a cylinder.\n\n \\item When $0 < d < 1$, the hypersurface ${\\mathcal T}_d$ is similar to\n ${\\mathcal T}_0$ except that it is not smooth.\n\n\n \\item When $d \\le -1$, ${\\mathcal T}_d$ is a smooth complete immersed\n hypersurface with self-intersections and horizontal symmetries.\n The asymptotic boundary of ${\\mathcal T}_d$ is topologically a cylinder.\n\n \\item When $-1 < d < 0$, the hypersurface ${\\mathcal T}_d$ looks like\n ${\\mathcal T}_{-1}$ except that it is not smooth.\n\\end{enumerate}\n\\end{thm}\n\n\\begin{pb1-figs}\n\\begin{figure}[h]\n\\begin{center}\n\\begin{minipage}[c]{6.5cm}\n \\includegraphics[width=6.5cm]{f-tra-3a.eps}\n \\caption[$n \\ge 3, H =\\frac{n-1}{n}, d = 0$]{$n \\ge 3, H =\\frac{n-1}{n}, d = 0$}\n \\label{F-tra-3a}\n\\end{minipage}\\hfill\n\\begin{minipage}[c]{6.5cm}\n \\includegraphics[width=6.5cm]{f-tra-3c.eps}\n \\caption[$n \\ge 3, H =\\frac{n-1}{n}, d < -1$]{$n \\ge 3, H =\\frac{n-1}{n}, d < -1$}\n \\label{F-tra-3c}\n\\end{minipage}\\hfill\n\\end{center}\n\\end{figure}\n\\end{pb1-figs}\\bigskip\n\n\\textbf{Remark.}~ When $d \\ge 1$, the differential equation\n(\\ref{E-tra-4}) does not have solutions.\n\n\n\\begin{pb1-figs}\n\\begin{figure}[h]\n\\begin{center}\n\\begin{minipage}[c]{6.5cm}\n \\includegraphics[width=6.5cm]{f-tra-3d.eps}\n \\caption[$n \\ge 3, H =\\frac{n-1}{n}, d = - 1$]{$n \\ge 3, H =\\frac{n-1}{n}, d = - 1$}\n \\label{F-tra-3d}\n\\end{minipage}\\hfill\n\\begin{minipage}[c]{6.5cm}\n \\includegraphics[width=6.5cm]{f-tra-3b.eps}\n \\caption[$n \\ge 3, H =\\frac{n-1}{n}, 0 < d < 1$]{$n \\ge 3,\n H =\\frac{n-1}{n}, 0 < d < 1$}\n \\label{F-tra-3b}\n\\end{minipage}\\hfill\n\\end{center}\n\\end{figure}\n\\end{pb1-figs}\n\n\\begin{pb1-figs}\n\\begin{figure}[ht]\n\\begin{center}\n\\begin{minipage}[c]{6.5cm}\n\\includegraphics[width=6.5cm]{f-tra-3e.eps}\n \\caption[$n \\ge 3, H =\\frac{n-1}{n}, -1 < d < 0$]{$n \\ge 3,\n H =\\frac{n-1}{n}, -1 < d < 0$}\n \\label{F-tra-3e}\n\\end{minipage}\\hfill\n\\begin{minipage}[c]{6.5cm}\n\\includegraphics[width=6.5cm]{f-tra-33.eps}\n \\caption[$n \\ge 2, H <\\frac{n-1}{n}$]{$n \\ge 2,\n H < \\frac{n-1}{n}$}\n \\label{F-tra-33}\n\\end{minipage}\\hfill\n\\end{center}\n\\end{figure}\\bigskip\n\\end{pb1-figs}\n\n\n\\begin{thm}[Complete $H$-graph with infinite boundary data]\\label{T-tra-2}\n$ $\\\\[2pt]\nThere exists a complete translation hypersurface ${\\mathcal T}_{H}$, with $0\n< H < \\frac{n-1}{n}$, such that\n\\begin{enumerate}\n\\item ${\\mathcal T}_H$ is a complete monotone vertical $H$-graph over the non mean\nconvex side of an equidistant hypersurface $\\Gamma\\subset \\mathbb{H}^n$\nwith mean curvature $\\frac{nH}{n-1},$\n\\item ${\\mathcal T}_H$ takes infinite boundary value data on $\\Gamma$ and infinite\nasymptotic boundary data.\n\\end{enumerate}\n\\end{thm}\\bigskip\n\n\n\n\n\n\\subsection{Proof of Theorem \\ref{T-tra-1}}\n\n\\bigskip\n\nThe proof of Theorem \\ref{T-tra-1} follows from an analysis of the\nasymptotic behaviour of the functions $J_m(t)$ (Formula\n(\\ref{E-tra-5a})) when $t$ goes to infinity and from an analysis of\nthe signs of the functions $R_{H,d}$ and $S_{H,d}$ (Formulas\n(\\ref{E-tra-11})) depending on the signs of $H - \\frac{n-1}{n}$ and\n$d$.\\bigskip\n\n\n\\noindent $\\bullet $~ We have the relations\n\n\\begin{equation}\\label{E-tra-10}\n\\left\\{%\n\\begin{array}{lll}\nJ_0(t) & = & t ,\\\\\nJ_1(t) & = & \\sinh(t) ,\\\\\n2J_2(t) & = & \\sinh(t) \\cosh(t) + t ,\\\\\n3J_3(t) & = & \\sinh(t) \\cosh^2(t) + 2 J_1(t) ,\\\\\nmJ_m(t) & = & \\sinh(t) \\cosh^{m-1}(t) + (m-1) J_{m-2}(t), \\text{ ~for~ } m\\ge 3.\\\\\n\\end{array}%\n\\right.\n\\end{equation}\\bigskip\n\n\nThese relations give us the asymptotic behaviour of the functions\n$J_m(t)$ when $t$ tends to infinity. In particular,\n\n$$\nm J_m(t) = \\sinh(t) \\cosh^{m-1}(t) + \\frac{m-1}{m-2} \\sinh(t)\n\\cosh^{m-3}(t) + O(e^{(m-4)t}), \\text{ ~for~ } m\\ge 5\n$$\n\nwith the remainder term replaced by $O(t)$ when $m=4$.\\bigskip\n\n\n\n\\noindent $\\bullet $~ \\textbf{The function $S_{H,d}(t)$}\\bigskip\n\nFor all $H > 0$, the function $S_{H,d}$ increases from $1+d$ to $+\n\\infty$. Its behaviour is summarized in the following table.\n\n\\begin{equation}\\label{E-tra-12s}\n\\begin{array}{|c|ccccc|}\n \\hline\n & \\text{Case} & 0 \\frac{n-1}{n} & & \\\\\n \\hline\n t & 0 & & & & + \\infty \\\\\n \\hline\n R_{H,d}(t) & 1-d & & \\searrow & & -\\infty \\\\\n \\hline\n\\end{array}\n\\end{equation}\\bigskip\n\n\n\n\\textbf{Proof of Theorem \\ref{T-tra-1}, continued}\\bigskip\n\nWe now investigate the behaviour of the solution $\\mu$ to Equation\n(\\ref{E-tra-4}) when $n\\ge 3$ and $H=\\frac{n-1}{n}$ (for $n=2$, see\n\\cite{Sa08}).\\bigskip\n\n\n\n\\noindent According to Table (\\ref{E-tra-12s}), the function $S_{H,d}$\nincreases from $1+d$ to $+ \\infty$ and we have to consider two\ncases, \\emph{(i)} $d \\ge - 1$, in which case $S_{H,d}$ is always\nnon-negative and \\emph{(ii)} $d < -1$, in which case $S_{H,d}$ has\none zero $\\alpha_{H,d}$ such that $$\\cosh^{n-1}(\\alpha_{H,d}) + nH\nJ_{n-1}(\\alpha_{H,d}) + d = 0.$$\n\n\\noindent According to Table (\\ref{E-tra-12r}), the function $R_{H,d}$\ndecreases from $1-d$ to\n$\\left\\{%\n\\begin{array}{cc}\n- \\infty, & n \\ge 3 \\\\\n-d, & n=2 \\\\\n\\end{array} \\right. $, depending on the value of $n$. It follows\nthat we have two cases, \\emph{(i)} $d \\ge 1$, in which case the\nfunction $R_{H,d}$ is always non-positive and \\emph{(ii)} $d < 1$,\nin which case it has one zero $c_{H,d}$ for $n \\ge 3$. When it\nexists, the zero $c_{H,d}$ satisfies $$\\cosh^{n-1}(c_{H,d}) - nH\nJ_{n-1}(c_{H,d}) - d = 0.$$\n\\bigskip\n\nLooking at the equations defining $\\alpha_{H,d}$ and $c_{H,d}$ we\nsee that $\\alpha_{H,d} < c_{H,d}$ when they both exist.\\bigskip\n\n\n\nThe behaviour of the function $\\mu$ is described in the following\ntables, see also Figures~\\ref{F-tra-3a} to \\ref{F-tra-3e}.\\bigskip\n\n\\begin{equation}\\label{E-tra-15-1}\n\\begin{array}{|c|ccccccc|}\n\\hline\n \\textbf{Case 1} & & H=\\frac{n-1}{n} & & d < -1 & & n\\ge 3 & \\\\\n\\hline\n t & 0 & & \\alpha_{H,d} & & c_{H,d} & & + \\infty \\\\\n\\hline\n R_{H,d} & & + & + & + & 0 & - & \\\\\n\\hline\n S_{H,d} & & - & 0 & + & + & + & \\\\\n\\hline\n T_{H,d} & & \\not \\exists & - \\infty & \\exists & + \\infty\n & \\not \\exists & \\\\\n\\hline\n\\end{array}\n\\end{equation}\\bigskip\n\nThe function $\\mu$ is given by\n\n$$\\mu(\\rho) = \\int_{\\rho_0}^{\\rho} T_{H,d}(t) \\, dt$$\n\nfor $\\rho_0, \\rho \\in [\\alpha_{H,d}, c_{H,d}]$ and the integral\nexists at both limits. Note that the integrand is negative near the\nlower limit while it is positive near the upper limit. \\bigskip\n\nWhen $d=0$, using (\\ref{E-tra-4}) one can show that $\\ddot{\\mu}\n> 0$ and conclude that the generating curve is strictly convex. The\nformula for $\\ddot{\\mu}$ also shows that the curvature extends\ncontinuously at the vertical points.\\bigskip\n\n\nThe generating curve can be extended by symmetry and periodicity to\ngive rise to a complete immersed hypersurface with\nself-intersections.\\bigskip\n\n\n\\begin{equation}\\label{E-tra-15-2}\n\\begin{array}{|c|ccccccc|}\n\\hline\n \\textbf{Case 2} & & H=\\frac{n-1}{n} & & -1 \\le d < 1 & & n\\ge 3 & \\\\\n\\hline\n t & 0 & & & & c_{H,d} & & + \\infty \\\\\n\\hline\n R_{H,d} & & & + & & 0 & - & \\\\\n\\hline\n S_{H,d} & & & + & & + & + & \\\\\n\\hline\n T_{H,d} & & & \\exists & & + \\infty\n & \\not \\exists & \\\\\n\\hline\n\\end{array}\n\\end{equation}\\bigskip\n\n\nThe function $\\mu$ is given by\n\n$$\\mu(\\rho) = \\int_{0}^{\\rho} T_{H,d}(t) \\, dt$$\n\nfor $\\rho_0, \\rho \\in [0, c_{H,d}]$ and the integral exists at both\nends. Note that the integrand has the sign of $d$ near $0$, with\n$\\dot{\\mu}(0) = d\/\\sqrt{1-d^2}$ ; it is positive near the upper\nbound with $\\dot{\\mu}(c_{H,d}) = + \\infty$. \\bigskip\n\nWhen $d=-1$, the original curve has a vertical tangent at $0$. It\ncan be extended by symmetry and periodicity to give rise to a\ncomplete immersed hypersurface with self-intersections. \\bigskip\n\nWhen $d=0$, the curve has a horizontal tangent and is strictly\nconvex (use (\\ref{E-tra-4})). It can be extended by symmetry as a\ntopological circle and gives rise to a complete embedded surface.\n\\bigskip\n\n\nWhen $d \\ge 1$, Equation (\\ref{E-tra-4}) has no solution.\\bigskip\n\n\n\\subsection{Proof of Theorem \\ref{T-tra-2}}\n\nGiven $n$ and $H$, such that $0 0$ for $t > t_H$ and hence the\nquantity $nH J_{n-1}(t) +d_H$ does not change sign for $t>t_H$ and\nthe same is true for $T_{H,d_H}(t)$.\\bigskip\n\n\nTaking (\\ref{E-tra-11}) into account, we choose $\\rho_0 > t_H$ and\ndefine the generating curve by Formula (\\ref{E-tra-8}).\\bigskip\n\n\nWe conclude that $\\mu(\\rho)$ is well-defined and strictly increasing\nfor $\\rho>t_H$. Moreover, $\\mu(\\rho)$ goes to $-\\infty$, if\n$\\rho\\rightarrow t_H^+.$ Notice that the mean curvature of the\nequidistant hypersurface at distance $t_H$ to $\\mathbb{P}$ is $\\tanh\n(t_H)=\\frac{n H}{n-1}$, by the choice of $t_H$.\\bigskip\n\n\n Now recall that if $ 0 1,\\\\\n\\end{array}%\n\\right.\n\\end{equation}\n\nwhere the principal curvatures are taken with respect to the unit\nnormal to $\\partial \\Omega $ pointing inwards.\\bigskip\n\n\nGiven a hypersurface $\\Gamma$ satisfying Properties\n(\\ref{E-appl-gam}), there exists some radius $R$ such that for any\npoint $p$, the ball $B_{p,R} \\subset \\mathbb{H}^n$ with radius $R$ is\ntangent to $p$ at $\\Gamma$ and $\\Gamma \\subset B_{p,R}$. We denote\nby\n\n\\begin{equation}\\label{E-appl-gam2}\n{\\mathcal S}_{p,+} \\text{ ~and~ } {\\mathcal S}_{p,-}\n\\end{equation}\n\nthe two hypersurfaces in ${\\mathcal R}$ passing through the sphere $\\partial\nB_{p,R}$ and symmetric with respect to the slice $\\mathbb{H}^n \\times\n\\{0\\}$.\\bigskip\n\n\nWe first prove an existence result for a Dirichlet problem.\n\\bigskip\n\n\\begin{prop}\\label{P-appl-2}\nLet $\\Omega \\subset \\mathbb{H}^n \\times \\{0\\}$ be a bounded domain with\nsmooth boundary $\\Gamma$ satisfying (\\ref{E-appl-gam}). Then, for\nany $H, 0 < H \\le \\frac{n-1}{n}$, there exists a vertical graph\n$M_{\\Gamma}$ over $\\Omega$ in $\\mathbb{H}^n \\times \\mathbb{R}$, with constant mean\ncurvature $H$ with respect to the upward pointing normal. This means\nthat there exists a function $u : \\Omega \\to \\mathbb{R}$, smooth up to the\nboundary, such that $u|_{\\Gamma} = 0$, and whose graph\n$\\ens{(x,u(x))}{x \\in \\Omega}$ has constant mean curvature $H$ with\nrespect to the unit normal pointing upwards.\n\\end{prop}\\bigskip\n\n\\textbf{Remark.} ~The graph $M_{\\Gamma}$ having positive mean\ncurvature with respect to the upward pointing normal, must lie below\nthe slice $\\mathbb{H}^n \\times \\{0\\}$. The symmetric $\\check{M}_{\\Gamma}$\nwith respect to the slice lies above the slice and has positive mean\ncurvature with respect to the normal pointing downwards. \\bigskip\n\n\\textbf{Proof of Proposition \\ref{P-appl-2}}\\bigskip\n\n\\noindent $\\bullet $~ We first consider the case $H = \\frac{n-1}{n}$.\\bigskip\n\nBy our assumption on $\\Gamma$, using the hypersurfaces\n(\\ref{E-appl-gam2}) and the Convex hull lemma, Proposition\n\\ref{P-appl-1}, any solution to our Dirichlet problem must be\ncontained in ${\\mathcal C}({\\mathcal S}_{p,-}) \\cap {\\mathcal C}({\\mathcal S}_{p,+})$. This provides a\npriori height estimates and boundary gradient estimates on the\nsolution. \\bigskip\n\nWe could use \\cite{Spr08} and classical elliptic theory \\cite{GT83},\nto get existence for our Dirichlet problem when $H=\\frac{n-1}{n}$.\nWe shall instead apply \\cite{Spr08} directly. Indeed, in our case,\nthe mean curvature $H_{\\Gamma}$ of $\\Gamma$ satisfies $H_{\\Gamma} >\n1 = H \\frac{n}{n-1}$, and the Ricci curvature of $\\mathbb{H}^n$ satisfies\n$\\mathrm{Ric} = - (n-1) \\ge - \\frac{n^2}{n-1}H^2$. Theorem 1.4 in\n\\cite{Spr08} states that under theses assumptions there exists a\nvertical graph over $\\Omega$ with boundary $\\Gamma$ and constant\nmean curvature $H=\\frac{n-1}{n}$.\\bigskip\n\n\n\\noindent $\\bullet $~ We now consider the case $0 < H \\le \\frac{n-1}{n}$. \\bigskip\n\nWe use the graphs constructed previously as barriers to obtain a\npriori height estimates and apply the interior and global gradient\nestimates of \\cite{Spr08} to conclude. \\bigskip\n\nWe consider the Dirichlet problem $(P_t)$ for $0 \\le t \\le 1$,\n\n\\begin{equation*}\\label{E-appl-5}\n\\left\\{%\n\\begin{array}{cccc}\n\\mathrm{div}\\big(\\dfrac{\\nabla u}{W}\\big) & = &\nt \\, (n-1) & \\text{in~ }\\Omega\\\\[5pt]\nu & = & 0 & \\text{on~ } \\Gamma \\\\\n\\end{array}%\n\\right.\n\\end{equation*}\n\n\nwhere $u\\in C^2(\\Omega)$ is the height function, $\\nabla u$ its\ngradient and $W=(1 +|\\nabla u|^2)^{1\/2}$, and where the gradient and\nthe divergence are taken with respect to the metric on $\\mathbb{H}^n$. This\nis the equation for \\emph{vertical} $H$-graphs in $\\mathbb{H}^n \\times \\mathbb{R}$.\nIt is elliptic of divergence type. \\bigskip\n\nBy the first step, we have obtained the solution $u_1$ for the\nDirichlet problem $(P_1)$. The solution for $(P_0)$ is the trivial\nsolution $u_0=0$. By the maximum principle, using the fact that\nvertical translations are positive isometries for the product\nmetric, and the existence of the solutions $u_1$ and $u_0$, we have\nthat any $C^1(\\overline{\\Omega})$ solution $u_t$ of the Dirichlet\nproblem $(P_t)$ stays above $u_1$ and below $u_0.$ This yields a\npriori height and boundary gradient estimates, independently of $t$\nand $u_t$. Global gradient estimates follow Theorem 1.1 and Theorem\n3.1 in \\cite{Spr08}. We have therefore $C^1(\\overline{\\Omega})$ a\npriori estimates independently of $t$ and $u_t$. The existence of\nthe solution $u_t$ for $00$. Let $M$ be a\ncompact connected embedded $H$-hypersurface such that $\\partial M =\n\\Gamma_{+} \\cup \\Gamma_{-}$, with $0 < H \\le \\frac{n-1}{n}$. Assume\nthat $2a \\ge \\frac{\\pi}{n-1}$.\n\\begin{enumerate}\n \\item Assume that $\\Gamma$ is symmetric with respect to a hyperbolic\n hyperplane $P$ and that each connected component of $\\Gamma\\setminus P$\n is a graph above $P$. Then $M$ is symmetric with respect to the vertical\n hyperplane $P\\times \\mathbb{R}$ and each connected component of $M\\setminus\n P\\times \\mathbb{R}$ is a horizontal graph.\n \\item Assume that $\\Gamma$ is an $(n-1)$-sphere. Then $M$ is part of\n the complete embedded rotation hypersurface given by\n Theorem~\\ref{T-h3-r0} and \\ref{T-h3-rp} and containing $\\Gamma$.\n It follows that $M$ is symmetric with respect to the slice $\\mathbb{H}^n \\times\n \\{0\\}$ and the parts of $M$ above and below the slice of symmetry are\n vertical graphs.\n\\end{enumerate}\n\\end{thm}\\bigskip\n\n\\textbf{Proof of Theorem \\ref{T-appl-7}}.\\bigskip\n\n\\noindent Let $\\Omega_{+} = \\Omega \\times \\{a\\}$ and $\\Omega_{-} = \\Omega\n\\times \\{-a\\}$. By the Convex hull Lemma, Proposition\n\\ref{P-appl-1}, using the hypersurfaces given by (\\ref{E-appl-gam2})\nwe have that $M \\cap \\overline{\\mathrm{ext}(\\Omega_{+})} = \\Gamma_+$\nand $M \\cap \\overline{\\mathrm{ext}(\\Omega_{-})} = \\Gamma_-$.\\bigskip\n\n\\noindent We claim that $M \\cap (\\overline{\\Omega} \\times \\mathbb{R}) = \\Gamma_+\n\\cap \\Gamma_-$. Let $M_{\\Gamma,a}$ be the graph above\n$\\overline{\\Omega_+}$ contained in $\\mathbb{H}^n \\times [a, \\infty[$ and\n$M_{\\Gamma,-a}$ be the graph below $\\overline{\\Omega_-}$ contained\nin $\\mathbb{H}^n \\times ]-\\infty, a]$, given by Theorem~\\ref{T-appl-6}.\n\\bigskip\n\nConsider $\\widetilde{M} = M_{\\Gamma,a} \\cap M \\cap M_{\\Gamma,-a}$\noriented by the mean curvature vector of $M$ by continuity. Take the\nfamily of (minimal) catenoids symmetric with respect to $\\mathbb{H}^n\n\\times \\{0\\}$ with rotation axis some $\\{\\bullet \\} \\times \\mathbb{R}$.\nComing from infinity with such catenoids, using the assumption that\n$2a \\ge \\frac{\\pi}{n-1}$ and the fact that the catenoids have height\n$< \\frac{\\pi}{n-1}$, we see that one catenoid will eventually touch\n$\\widetilde{M}$ at some interior point in $M$. This implies that the\nnormal to $M$ at this point is the same as the normal to the\ncatenoid at the same point (maximum principle) and hence that the\nnormal to $M$ points inside $\\widetilde{M}$.\\bigskip\n\nAssume that $M \\cap (\\Omega \\times \\{a\\}) \\not = \\emptyset$ (\\emph{resp. }\nthat $M \\cap (\\Omega \\times \\{-a\\}) \\not = \\emptyset$). Then at the\nhighest point of $M$ the normal would be pointing upwards (\\emph{resp. }\ndownwards) and we would get a contradiction with the maximum\nprinciple by considering the horizontal slice (a minimal\nhypersurface) at this point.\\bigskip\n\nFinally, $M \\cap (\\overline{\\Omega} \\times \\mathbb{R}) = \\Gamma_+ \\cap\n\\Gamma_-$ and the normal to $M$ points inside $M \\cup \\Omega_+ \\cup\n\\Omega_-$.\\bigskip\n\n\\noindent To conclude, we use Alexandrov Reflection Principle in vertical\nhyperplanes $P_t\\times \\mathbb{R}$ in ambient space, obtained by applying\nhorizontal translations along the horizontal geodesic orthogonal to\n$P,$ to the hyperplane $P\\times \\mathbb R$ of symmetry of $\\Gamma$.\nWe conclude that $M$ is symmetric about $P\\times \\mathbb{R}$ and that each\nconnected component of $M\\setminus P\\times \\mathbb{R}$ is a horizontal\ngraph. This complete the proof of the first statement in the\ntheorem. \\bigskip\n\nIf $\\Gamma$ is spherical then $M$ is a rotation hypersurface. As the\nmean curvature vector points into the region of ambient space that\ncontains the axes, by the geometric classification of the rotation\n$H$-hypersurfaces with constant mean curvature $H \\le (n-1)\/n$ given\nby Theorems~\\ref{T-h3-r0} and \\ref{T-h3-rp}, it follows that $M$ is\npart of a complete embedded rotation hypersurface $\\overline{M}$. It\nfollows that $\\overline{M}$ has a slice of symmetry at $\\mathbb{H}^n \\times\n\\{0\\}$ and each connected component of $\\overline{M}$ above and\nbelow $t=0$ is a complete vertical graph over the exterior of a\nround ball in $t=0$.\n\n\\hfill \\hfill\\penalty10000\\copy\\qedbox\\par\\medskip \\bigskip\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThermoelectric response of a hybrid junction between two normal metals in the mesoscopic regime has been discussed extensively both theoretically and experimentally \\cite{sivan_PRB_33_551,staring_EPL_22_57, moller_PRL_81_5197, scheibner_PRL_95_176602,ludoph_PRB_59_12290, reddy_AAAS_315_1568, widawsky_NL_12_354, thierschmann_NN_10_854, shankouri_ARMR_41_399, sanchez_PRB_83_085428, beenakker_PRB_46_9667, entin_PRB_82_115314, sanchez_PRB_84_201307, jordan_PRB_87_075312, brandner_PRL_110_070603}. Whereas, analogous situation comprising of a junction of superconductors is a less explored topic though discussion of thermoelectric response of superconductor has a long history. Such set-ups are of great importance because of the possibility of its applications in improving the efficiency of thermoelectric generator by strongly suppressing Ohmic losses \\cite{kolenda_PRL_116_097001, kolenda_PRB_95_224505, shimizu_NC_10_825, tan_NC_12_138, fornieri_NN_12_944, giazotto_RMP_78_217}. \n\nIn 1944, Ginzberg \\cite{ginzburg_JPUSSR_8_148, ginzburg_RMP_76_981} showed that a temperature gradient in a bulk superconductor leads to a finite normal current response, though this current gets completely cancelled by a counter flow of supercurrent in a homogeneous isotropic superconductor which make it impossible to detect the thermoelectric response in isolation. This fact lead him to theoretically explore the possibilities of anisotropic and inhomogeneous superconductor for the detection of the thermoelectric effect. Since then, various theoretical study\\cite{ginzburg_SST_4_S1, ginzburg_PCS_235_3129, marinescu_PRB_55_11637, galperin_ZETF_66_1387, galperin_PRB__65_064531, virtanen_APA_89_625} has been conducted exploring possibilities of detection of thermoelectric response of superconductors in anisotropic and inhomogeneous situations. Experimental study in this direction goes back all the way to 1920's \\cite{Falco1981book, borelius_PKNAW_34_1365, burton_Nature_136_141, keesom_Physica_5_437, casimir_Physica_13_33, pullan_PRSLSA_217_280, harlingen_PRB_21_1842, kartsovnik_JETPL_33_7, fornieri_NN_11_258} and this topic has been revisited in the recent past in an interesting work by Shelly et.al.\\cite{connor_SA_2_e1501250}. The discovery of Josephson effect \\cite{josephson_PL_1_251} in 1962 provided a natural setting for exploring thermoelectric response for a inhomogeneous superconductor. Later in 1997, Guttman and Bergman made an attempt to theoretically explore the thermoelectric response of a JJ in a tunnel Hamiltonian approach \\cite{guttman_PRB_55_12691}.\n\nPershoguba and Glazman \\cite{pershoguba_PRB_99_134514} have carried out an elaborate study on the possibility of generating thermometric current across a junction between two quasi-one dimensional superconductors, which goes beyond the tunneling limit and also discussed the relevance of the odd and the even part of the Josephson current as a function of the superconducting phase bias $\\phi_{12}$ owing to scattering in junction region which breaks the $\\omega \\rightarrow -\\omega$ symmetry. In this regard, the helical edge state of two dimensional topological insulators pose an interesting and cleane testing ground for such theoretical study which hosts one-dimensional Josephson junction\\cite{hart_NP_10_638}. Thermal response of quantum hall edge has already being studies in experiment\\cite{banerjee_Nat_545_75} and hence an similar experimental set-up involving the spin Hall edge may not be far in the future. Recent theoretical studies have explored the possibility of inducing thermoelectric effect in helical edge state-based Josephson junction involving either an anisotropic ferromagnetic barrier\\cite{gresta_PRL_123_186801, marchegiani_APL_117_212601} or a three-terminal geometry\\cite{blasi_PRL_124_227701, blasi_PRB_103_235434, blasi_PRB_102_241302}. In this work we show that thermoelectric effect can exist in the HES of QSH even in a simplest case of two terminal ballistic JJ owing to breaking of the $\\omega \\rightarrow -\\omega$ symmetry of the quasi-particle transmission probabilities across the junction at finite length. We argue that this is generic to ballistic JJ and is not specific to HES. Lastly, it is worth noting that the use of thermal transport for probing quantum states has been much in pursuit in contemporary science\\cite{li_MRSB_45_348} and hence such a discussion is quite timely.\n\nThe paper is organized as follows. In Section \\ref{system_HES} we described the JJ based on HES of a 2D QSH state and in Section \\ref{left_righth_symmetry} we discussed how a long ballistic JJ can break the $\\omega \\rightarrow -\\omega$ symmetry and hence resulting in thermoelectric response which also survives in presence of disorder. In Section \\ref{even_and_odd_section} we extend our discussion to the odd-in-$\\phi_{12}$ and even-in-$\\phi_{12}$ part of the thermoelectric conductance and shown that minimal breaking of the $\\omega \\rightarrow -\\omega$ symmetry is not enough to induce an even-in-$\\phi_{12}$ contribution.We have also argued that the presence of thermoelectric response through the breaking of $\\omega \\rightarrow -\\omega$ symmetry is not unique to HES, rather it is a generic property of a JJ.\n\n\n\n\n\n\\section{Ballistic Josephson junction in helical edge state}\n\\label{system_HES}\n\n\\begin{figure}[]\n\t\\includegraphics[width=0.4\\textwidth]{set_up_BW.eps}\n\t\\caption{Schematic of the Josephson junction set-up in a Helical edge state.}\n\t\\label{set_up}\n\\end{figure}\n\n\\begin{figure*}[t!]\n\t\\includegraphics[width=\\textwidth]{ABS.eps}\n\t\\caption{(a) Pictorial representation of below the gap $(\\omega<\\Delta_0)$ tunneling of Cooper pairs (CP) form left(right) to right(left) via two different Andreev bound states $\\omega_0^{21}$ (indicated by blue lines) and $\\omega_0^{12}$ (indicated by orange lines). The dotted lines represent the fact that tunnelling of the quasielectrons (quasiholes) above the gap $(\\omega>\\Delta_0)$ across the junction are in correspondence with distinct bound states, as can be noted from the poles of the quasielectron (quasihole) transmission probabilities $\\mathcal{T}_{ee}^{21}$ and $\\mathcal{T}_{ee}^{12}$ (or $\\mathcal{T}_{hh}^{12}$ and $\\mathcal{T}_{hh}^{21}$). (b) Two types of Andreev bound states as a function of superconducting phase difference $\\phi_{12}$, are plotted for different values of junction lengths where $\\xi=\\hbar v_F\/\\Delta_0$ is the superconducting coherence length. \n\t(c) Density plot for the thermoelectric coefficient $\\kappa^{21}$ of a ballistic JJ based on the edge states of a quantum spin Hall insulator in proximity to a s-wave superconductor, as a function of superconducting phase bias $\\phi_{12}$ and junction length $L$. The average temperature of the junction is considered to be $k_BT=0.5 \\Delta_0$.}\n\t\\label{ABSeps}\n\\end{figure*}\n\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=0.9 \\textwidth]{Transmission_mod.eps}\n\t\\caption{Different transmission probabilities of the quasiparticles through a ballistic Josephson junction based on the helical edge state of a quantum spin Hall insulator, in the space of energy ($\\omega$) and superconducting phase-difference ($\\phi_{12}$) for different values of junction length ($L$). A clear asymmetry between the electron and hole transmission probability from left (right) to right(left) develops as we increase the length of the junction. The plot in energy window $(\\vert\\omega\\vert<\\Delta_0)$ signifies the evolution of the pole (location of ABS) of the transmission amplitude as a function of $\\phi_{12}$.}\n\t\\label{transmission}\n\\end{figure*}\n\n\nWe first consider a JJ based on 1D Dirac fermions in proximity to a s-wave superconductor, realized in a HES of QSH insulator\\cite{fu_PRB_79_161408, fu_PRL_100_096407, calzona_arxiv_1909_06280} because of its algebraic simplicity. Later we will also explore the case of quadratic dispersion. The junction is considered to be of length $L$ laying over the region $|x|\\Delta_0$ ) across the JJ. It is straightforward to match the plane wave solutions of the BdG equation to obtain the transmission probabilities across the JJ (from $S1$ to $S2$) as described by Eq. \\ref{DiracHamiltonian} are given by (see Appendix \\ref{Appendix_clean_junction})\n\\begin{align}\n\t\\mathcal{T}_{ee}^{21} &= \\mathcal{T}_{hh}^{12} &=\\dfrac{\\omega^2-\\Delta_0^2}{\\omega^2-\\Delta_0^2 \\cos^2 \\left( \\frac{k_e-k_h}{2}L-\\frac{\\phi_{12}}{2} \\right)},\t\\label{T_ee^21}\\\\\n\t\\mathcal{T}_{hh}^{21} &= \\mathcal{T}_{ee}^{12} &=\\dfrac{\\omega^2-\\Delta_0^2}{\\omega^2-\\Delta_0^2 \\cos^2 \\left( \\frac{k_e-k_h}{2}L+\\frac{\\phi_{12}}{2} \\right)}, \\label{T_hh^21}\n\\end{align}\nwhile $\\mathcal{T}_{he}^{21}=\\mathcal{T}_{eh}^{21}=\\mathcal{T}_{he}^{12}=\\mathcal{T}_{eh}^{12}=0$. Quasiparticle transmission probabilities through a ballistic JJ is shown in FIG. \\ref{transmission} for two different lengths of the junction. Here $\\mathcal{T}_{q'q}^{ji}$ denote the transmission probability of an $q$-like QP ($q=e,h$) from lead $Si$ to a $q'$-like QP in lead $Sj$. Note that, the tunneling of an electron- (hole-) like QP from S1 to S2 (S2 to S1) is in correspondence with the ABS having energy $\\omega_0^{21}$ while the tunneling of a hole- (electron-) like QP from S1 to S2 (S2 to S1) is in correspondence with the ABS having energy $\\omega_0^{12}$ [See Fig. \\ref{ABSeps}(a)] which is apparent from the fact that the poles of the transmission amplitudes for these two processes coincides with the corresponding ABS energies. Within linear response theory, thermoelectric coefficient of a JJ can be defined in terms of the transmission probabilities as\\cite{pershoguba_PRB_99_134514}\n\\begin{align}\n\t\\kappa^{21} = \\left[\\dfrac{e}{h}\\int_{\\Delta_0}^{\\infty} d\\omega \\dfrac{\\omega}{\\sqrt{\\omega^2-\\Delta^2}} [i_{e}^{21}-i^{21}_{h}] \\dfrac{d\\mathrm{f}(\\omega,T)}{dT}\\right]_{T=T_{\\text{avg}}}\t\\label{thermoelectricCoefficient}\n\\end{align}\nwhere $i^{21}_{e}=(\\mathcal{T}_{ee}^{21}-\\mathcal{T}_{he}^{21})$, $i^{21}_{h}=(\\mathcal{T}_{hh}^{21}-\\mathcal{T}_{eh}^{21})$, $e$ is the electronic charge, $\\mathrm{f (\\omega,T)}$ is the Fermi distribution function at temperature $T$ and $T_{\\text{avg}}$ is the average temperature of the junction. Note that, in the limit $L \\rightarrow 0$, $\\kappa^{21}$ is zero.\n\nThe integration in Eq. (\\ref{thermoelectricCoefficient}) can be done numerically and $\\kappa^{21}$ can be obtained as a function of superconducting phase difference $\\phi_{12}$ and junction length $L$ which is plotted as a density plot in FIG. \\ref{ABSeps}(c). In case of HES, owing to its linear dispersion, the value of the overall chemical potential $\\mu$ does not effect the calculations for the ballistic case.\n\nTo obtain an estimate of the extremum values of thermoelectric conductance for a single channel ballistic junction, we perform a numerical scan over the parameter space of $\\phi_{12}$ and $L$ for a given temperature of $k_BT_{\\text{avg}}=0.5\\Delta_0$. We found that the maximum of (minimum of) $|\\kappa^{21}_e| \\approx\\,0.3438 \\,ek_B\/h$ $(\\approx 1.477 nA\/K)$ is obtained at a junction length $L \\approx 0.555 \\xi$ for $\\phi_{12}\\approx 0.353 \\pi$ (minimum at $\\phi_{12} \\approx 1.647 \\pi$).\n\n\\begin{figure*}[t]\n\t\\includegraphics[width=1.0 \\textwidth]{Even_Odd.eps}\n\t\\caption{Thermoelectric conductance of a Josephson junction based on helical edge state of quantum spin Hall insulator in presence of four random scattering centers. (a) Total thermoelectric conductance (b) the part of thermal conductance that is even in $\\phi_{12}$ (c) the part of conductance that is odd in $\\phi_{12}$, for single random configuration of scattering centers and after averaging over different numbers of random configurations.}\n\t\\label{even_odd}\n\\end{figure*}\n\n\nAs we can see from FIG. \\ref{transmission}, for a ballistic JJ, in general for any given value of $\\phi_{12}$ and at an energy $\\omega>\\Delta_0$ the quasiparticle transmission probabilities $\\mathcal{T}_{ee}^{21}$ and $\\mathcal{T}_{hh}^{21}$ are different if the length of the junction $L$ is comparable to the superconducting coherence length (i.e. when we are not in the short junction limit). Note that, the difference between these quantities at a given $\\omega$ is maximum in the neighborhood of $\\omega=\\Delta_0$ and it decreases as we go higher in $\\omega$, although non-monotonically. Additionally, one must notice, the thermoelectric effect identically vanishes both at $\\phi_{12}=0$ and $\\pi$ which are the time reversal symmetric points (See FIG. \\ref{ABSeps}(c)).\n\n\\section{Even-in-$\\phi_{12}$ and Odd-in-$\\phi_{12}$ part of the thermoelectric response and the effect of disorder}\n\\label{even_and_odd_section}\nPresence of scatter within the junction region, which breaks the $\\omega \\rightarrow -\\omega$ symmetry, not only leads to a finite thermoelectric conductance, but also results in deviation from thermoelectric conductance being odd in $\\phi_{12}$\\cite{pershoguba_PRB_99_134514}. As discussed above, a JJ of finite length also breaks the $\\omega \\rightarrow -\\omega$ symmetry hence it is curious if this minimal symmetry breaking can result in such a deviation, i.e. the thermoelectric response can be written as a liner sum of an even-in-$\\phi_{12}$ part and an odd-in-$\\phi_{12}$ part.\n\nIt is straightforward to check that the expression for thermoelectric conductance in the ballistic limit, obtained from Eq. \\ref{T_ee^21}, \\ref{T_hh^21} and \\ref{thermoelectricCoefficient} is an odd function of $\\phi_{12}$, independent of the length of the junction. This implies that the breaking of $\\omega \\rightarrow -\\omega$ symmetry via $k_e\\neq k_h$ (as discussed in the previous section) does not lead to any contribution to the thermoelectric response which is even in $\\phi_{12}$. Further, we calculate the thermoelectric conductance in presence of a single localized scatterer which is positioned at an arbitrary point within the junction region and we assume that the scattering matrix corresponding to the scatterer has no energy dependence. The expression for the thermoelectric conductance in this case is given below,\n\\begin{widetext}\n\\begin{equation}\n \\kappa^{21} = \\left[\\dfrac{e}{h}\\int_{\\Delta_0}^{\\infty} d\\omega \\dfrac{\\omega}{\\sqrt{\\omega^2-\\Delta^2}} \\left[ \\dfrac{4 \\tau \\left((1-\\tau) \\sin{((k_e-k_h)L(m-n))}+ \\sin{((k_e-k_h)L)} \\right)\\sin{\\phi_{12}} \\sinh2{\\theta}}{\\Omega \\Omega^*} \\right] \\dfrac{d\\mathrm{f}(\\omega,T)}{dT}\\right]_{T=T_{\\text{avg}}},\n \\label{m_n_junction}\n\\end{equation}\n\\end{widetext}\nwhere, $\\Omega=(1-\\tau) \\cos{\\left((k_e-k_h)L(m-n)\\right)} + \\cos{\\left( (k_e-k_h)L-2i\\theta \\right)}-\\tau \\cos{\\phi_{12}}$, $\\theta=\\text{arccosh}{\\omega\/\\Delta_0}$, $\\tau$ is the normal state transmission probability across the scatterer and the position of the scattering center divides the junction region in the ratio $m:n$ ($m,n\\leq 1$ and $m+n=1$). All other notations have their usual meanings as discussed before. Eq. \\ref{m_n_junction} clearly shows that the thermoelectric response in this case also, is odd in $\\phi_{12}$. Hence, our study establishes the fact that the minimal breaking of $\\omega \\rightarrow - \\omega$ symmetry for a finite length ballistic junction (or in presence of a single scatterer which does not break the $\\omega \\rightarrow -\\omega$ symmetry) is sufficient to induce thermoelectric response across the JJ, though it is not enough to induce an even-in-$\\phi_{12}$ contribution to the thermoelectric conductance.\n\n\\begin{figure*}[t]\n\t\\includegraphics[width=1.0 \\textwidth]{multiple_disorders.eps}\n\t\\caption{Disordered averaged mean value (left figure) and the variance (right figure) of the thermoelectric coefficient $\\kappa^{21}$ of a S-TI-S junction based on the edge states of a quantum spin Hall insulator with proximity to a s-wave superconductor, are plotted as a function of superconducting phase difference $\\phi_{12}$ . The average temperature of the junction is considered to be $k_BT=0.5 \\Delta_0$ and the overall chemical potential to be $\\mu=10\\Delta_0$. Length of the junction is considered to be $L=0.555 \\xi$ where $\\xi$ is the superconducting coherence length. Average is done over 500 disorder configurations. The middle plot show that, as we increase the number of scatterers, the curves for thermoelectric conductance tend to a $\\sin{(\\phi_{12})}$ curves (solid lines) with an amplitude (Max($\\kappa_e^{12}$)-Min($\\kappa_e^{21}$))\/2.}\n\t\\label{multipleDisorders}\n\\end{figure*}\n\nNow, if we consider a situation comprising of more than one such scatterer, then the effective scattering matrix describing the collection of scatterers will become energy dependent and in general will also break the $\\omega \\rightarrow -\\omega$ symmetry, resulting in an even-in-$\\phi_{12}$ contribution to the thermoelectric conductance as expected \\cite{pershoguba_PRB_99_134514}. The even-in-$\\phi_{12}$ part of the thermoelectric conductance is proportional to $(\\tau_{\\omega} -\\tau_{-\\omega})$, where $\\tau_{\\omega}$ is the normal state transmission probability across the junction at an energy $\\omega$, and thus can vary drastically (both in amplitude and in sign) for different disorder configurations for a given $\\phi_{12}$. Hence, averaging over random configurations results in vanishingly small values of the even part. Next we perform a numerical calculation to analyze the effect of averaging over a large number of disorder configurations in presence of multiple scatterers. To begin with, we consider four scattering centers represented by four energy-independent scattering matrices placed at random positions inside the junction region. Transmission probabilities of the scattering matrices are chosen randomly from a one-sided Gaussian distribution with a mean of $95\\%$ and standard deviation of $5\\%$. All the phase freedom of the disorders have been chosen randomly from a Gaussian distribution with a mean of $0$ and standard deviation $0.05 \\pi$. We have fixed the length of the junction to be $L=0.555 \\xi$, the value at which we get maximum thermoelectric conductance (which occurs for $\\phi=0.353 \\pi$) for a ballistic JJ. It can be seen clearly from FIG. \\ref{even_odd} that averaging over as-small-as 10 configurations already shows a convergence towards an odd-in-$\\phi_{12}$ behaviour while the even-in-$\\phi_{12}$ part is strongly suppressed. It is interesting to note that, in absence of an averaging (corresponding to a fixed quenched disorder configuration), for certain range of values of $\\phi_{12}$, the even part can be the dominant contribution in the net thermal conductance (See FIG. \\ref{even_odd}).\n\nNow we extend the numerical analysis to a larger number of scattering centers. The scattering centers are modeled as before and the length of the junction is fixed at $L=0.555 \\xi$. For a given number of scattering centers, disorder average is done over 500 configurations where we have checked that beyond this, there is negligible variation of the result. The mean and the variance of the thermoelectric conductances are plotted as a function of the superconducting phase difference $\\phi_{12}$ in FIG.\\ref{multipleDisorders}. Note that, in presence of a single scatterer, the variance of the thermoelectric conductance is smallest because in this case there is no even-in-$\\phi_{12}$ part of the thermoelectric conductance. We have also observed that, in general, the variance is relatively lower in the neighborhood of $\\phi_{12}=\\pi$ rather than in the neighborhood of $\\phi_{12}=0$ or $2\\pi$. To conclude, the plot for thermoelectric conductance after averaging tend to reduce to the universal sinusoidal dependence of $\\phi_{12}$ as the number of scatterers within the junction region increases (see the middle figure of FIG. \\ref{multipleDisorders}). This is due to the fact that, with increasing opacity of the JJ, the $\\phi_{12}$ sensitivity of the thermoelectric conductance via the poles of the quasiparticle transmission probabilities decreases and the major contribution comes from the explicit $\\sin{(\\phi_{12})}$ factor in the numerator.\n\n\n\\begin{figure*}[t]\n\n\t\\includegraphics[width=0.7 \\textwidth]{DensityPlotInTauAndPhi.eps}\n\t\\caption{The maximum possible thermoelectric coefficient of a JJ with (left) s-wave and (right) p-wave superconductivity for a given junction length and normal state reflection probability $r$ (as calculated wit the analytic approximation $\\mu>>\\Delta_0,k_BT$). Note that the scatterer is assumed to be energy independent and is placed at the middle of the junction. The parameters are assumed to be $\\mu=100\\Delta_0$ and $k_BT=0.5\\Delta_0$.}\n\t\\label{max_thermo}\n\\end{figure*}\n\n\\section{Discussion}\nOccurrence of thermoelectric effect through the breaking of $\\omega \\rightarrow -\\omega$ symmetry for a ballistic long JJ is not specific to the HES. 1D JJ with quadratic dispersion and with s-wave or p-wave superconductivity should also demonstrate such a response. Of course, in the high doping limit, the thermoelectric coefficient should reduce to the results obtained in the paper when linearized about the Fermi energy. Thus, the thermoelectric response is a generic property of any ballistic JJ with junction length of the order of the superconducting coherence length. However, with the increasing opacity of the JJ for junction length less than the superconducting coherence length, the ABS energies tend to move towards the zero energy for p-wave superconductivity due to the presence of Majorana fermions. Whereas, for a JJ with s-wave superconductivity within the same limit, the ABS energies tend to move towards the continuum with increasing opacity of the junction. This fact manifests itself in the thermoelectric conductance via the poles of the quasiparticle transmission probabilities. Also, for JJ with junction length longer than the superconducting coherence length, the states from the continuum spectrum tend to leak into the superconducting gap, thereby changing the details of the thermoelectric coefficient.\n\nFurther, to check if the s-wave or the p-wave leads to a larger thermoelectric coefficient for a given junction length we perform an analysis where we have placed an energy-independent scatterer at the middle of a JJ, and plotted the maximum possible thermoelectric conductance (scanned over all values of $\\phi_{12}$) for a given junction length $L$ and given transparency of the scatterer (normal state transmission probability $\\tau$) as shown in FIG. \\ref{max_thermo}. We have performed this study within the approximation $\\mu>>\\Delta_0, k_BT$ (See Appendix \\ref{Appendix_middle_scatterer}). From these results we can conclude that in general, there is no distinguishable pattern in the thermoelectric coefficient for the case of s-wave and p-wave superconductivity.\n\nAs far as the possible strategy for the measurement of the thermoelectric current is concerned, it cannot be measured in isolation as it will always be accompanied by the finite temperature Josephson current. However, there may be ways to measure the thermoelectric coefficient indirectly. For example, consider a situation where a JJ is initially maintained at an equilibrium temperature $T$. The current that is obtained, is totally the Josephson current $\\mathtt{S}_{(T,T)}=I_J$, where the first (second) subscript corresponds to the temperature of the left (right) lead S1 (S2). Now, if $S1$ is raised to temperature $T+\\Delta T$, then the corresponding total current will be a sum of the Josephson current and the thermoelectric current $\\mathtt{S}_{(T+\\Delta T, T)}=I_J-\\Delta I_J + \\kappa^{21}_e \\Delta T$, where $\\Delta I_J$ is the variaation in the Josephson current due temperature bias. Next, consider the situation where $S1$ is kept at temperature $T$ while $S2$ is raised to temperature $T+\\Delta T$, then the corresponding total current will be $\\mathtt{S}_{(T, T+\\Delta T)}=I_J-\\Delta I_J - \\kappa^{21}_e \\Delta T$. Now if, $(2\\mathtt{S}_{(T,T)}-(\\mathtt{S}_{(T+\\Delta T, T)}+\\mathtt{S}_{(T, T+\\Delta T)}))\/2\\mathtt{S}_{(T,T)}<< 1$ then a measurement of the ratio, $(\\mathtt{S}_{(T+\\Delta T, T)}-\\mathtt{S}_{(T, T+\\Delta T)})\/2\\Delta T$ will provide the thermoelectric coefficient. Note that, a similar strategy involving $\\phi_{12}\\rightarrow -\\phi_{12}$ rather than involving $\\Delta T \\rightarrow -\\Delta T$ is difficult to implement due to the presence of even-in-$\\phi_{12}$ part of the thermoelectric coefficient.\n\n\\textit{\\underline{Acknowledgment}}: A.M. thanks Vivekananda Adak for helpful discussions. We thank Chris Olund, Sergey Pershoguba and Erhai Zhao for useful communication over email. A.M. acknowledges Ministry of Education, India for funding. S.D. would like to acknowledge the MATRICS\ngrant (MTR\/ 2019\/001 043) from the Science and Engineering Research Board (SERB) for funding.\n\n\n\\section{Matrix formalism}\n\\label{matrixFormalism}\nTo have a clear physical insight into different semi-classical paths that give rise to degeneracy-lifted ABS and the thermoelectric response of a JJ, we shall be using the matrix method as discussed by A. Kundu et. al. \\cite{kundu_PRB_82_155441}.\n\nLet $\\Psi_{qp[N]}^{e+(-)}$ and $\\Psi_{qp[N]}^{h+(-)}$ denote forward (backward) moving electron-like QP and forward (backward) moving hole-like QP respectively within the superconducting lead $S_i$ having superconducting phase $\\phi_i$ $(i \\in \\{1,2\\})$ [within the normal region]. These wave functions can be explicitly calculated using the BdG Hamiltonian (\\ref{DiracHamiltonian}) in the main text or the BdG Hamiltonian with quadratic dispersion and with s-wave or p-wave superconductivity\n\\begin{align}\n\t\\mathcal{H}_{\\eta}= \\left( -\\dfrac{\\hbar^2}{2m}\\dfrac{\\partial^2}{\\partial x^2}-\\mu \\right) \\tau_z + \\Delta^{\\eta}(x) (\\cos \\phi_r \\tau_x - \\sin \\phi_r \\tau_y);\t\\label{HamiltonianSP}\n\\end{align}\nwhere $\\Delta^{\\eta}(x)=\\Delta_0 [ \\Theta(-x-L\/2)+\\Theta(x-L\/2) ]\\mathit{f}(\\eta)$, $\\mathit{f}(\\eta)=(-i\\partial_x\/k_F)^{(1-\\eta)\/2}$, $\\eta=\\pm 1$ for s-wave and p-wave superconductivity respectively and $p_F=\\hbar k_F=\\sqrt{2m \\mu}$ is the Fermi momentum.\n\nWe first consider two reflection matrices $\\mathbb{R}^{\\gamma}$, $\\gamma \\in \\{1,2\\}$, which describe both Andreev and normal reflections at the normal-superconducting junctions.\n\\begin{equation*}\n\t\\begin{aligned}[c]\n\t\t\\mathbb{R}^1 \\Psi^{e+}_N &= r^1_{Ahe} \\Psi^{h-}_N + r^1_{Nee} \\Psi^{e-}_N,\\\\\n\t\t\\mathbb{R}^1 \\Psi^{h-}_N &= r^1_{Aeh} \\Psi^{e+}_N + r^1_{Nhh} \\Psi^{h+}_N,\n\t\\end{aligned}\n\t\\qquad\n\t\\begin{aligned}[c]\n\t\t\\mathbb{R}^2 \\Psi^{e-}_N &= r^2_{Ahe} \\Psi^{h+}_N + r^2_{Nee} \\Psi^{e+}_N,\\\\\n\t\t\\mathbb{R}^2 \\Psi^{h+}_N &= r^2_{Aeh} \\Psi^{e-}_N + r^2_{Nhh} \\Psi^{h-}_N,\n\t\\end{aligned}\n\t\\qquad\n\t\\begin{aligned}[c]\n\t\t\\mathbb{R}^1 \\Psi^{e-}_N &= \\mathbb{R}^1 \\Psi^{h+}_N = \\mathbb{R}^2 \\Psi^{e+}_N = \\mathbb{R}^2 \\Psi^{h-}_N = 0,\n\t\\end{aligned}\n\\end{equation*}\nwhere $r^{\\gamma}_{Aqq'}$ and $r^{\\gamma}_{Nqq'}$ respectively describe the amplitudes of different Andreev reflections and normal reflections.\n\nTo consider the propagation of the wave functions through a length $l$ within the normal region, we consider two propagation matrices, $\\mathbb{T}^{\\gamma}$ $(\\gamma \\in \\{1,2\\})$, such that\n\\begin{equation*}\n\t\\begin{aligned}[c]\n\t\t&\\mathbb{T}^1(l) \\Psi^{e+}_N|_{x} = \\Psi^{e+}_N|_{x+l},\\\\\n\t\t&\\mathbb{T}^1(l) \\Psi^{h-}_N|_{x} = \\Psi^{h-}_N|_{x-l},\n\t\\end{aligned}\n\t\\qquad\n\t\\begin{aligned}[c]\n\t\t&\\mathbb{T}^2(l) \\Psi^{e-}_N|_{x} = \\Psi^{e-}_N|_{x-l},\\\\\n\t\t&\\mathbb{T}^2(l) \\Psi^{h+}_N|_{x} = \\Psi^{h+}_N|_{x+l},\n\t\\end{aligned}\n\t\\qquad\n\t\\begin{aligned}[c]\n\t\t\t&\\mathbb{T}^1(l) \\Psi^{e-}_N = \\mathbb{T}^1(l) \\Psi^{h+}_N = \\mathbb{T}^2(l) \\Psi^{e+}_N = \\mathbb{T}^2(l) \\Psi^{h-}_N =0.\n\t\\end{aligned}\n\\end{equation*}\n\nFor energies above the superconducting gap, two tunneling matrices at the two boundaries, $\\mathbb{T}^{L,R}_B$, are defined as\n\\begin{equation*}\n\t\\begin{aligned}[c]\n\t\t&\\mathbb{T}_B^L \\Psi_{qp}^{e+}[\\phi_1] = t_e \\Psi^{e+}_N + t_{Ae} \\Psi^{h+}_N,\\\\\n\t\t&\\mathbb{T}_B^L \\Psi_{qp}^{h+}[\\phi_1] = t_h \\Psi^{h+}_N + t_{Ah} \\Psi^{e+}_N,\n\t\\end{aligned}\n\t\\qquad\n\t\\begin{aligned}[c]\n\t\t&\\mathbb{T}_B^R \\Psi^{e+}_N = t_e^{qp} \\Psi_{qp}^{e+}[\\phi_2] + t_{Ae}^{qp}\\Psi_{qp}^{h+}[\\phi_2],\\\\\n\t\t&\\mathbb{T}_B^R \\Psi^{h+}_N = t_h^{qp} \\Psi_{qp}^{h+}[\\phi_2] + t_{Ah}^{qp} \\Psi_{qp}^{e+}[\\phi_2],\n\t\\end{aligned}\n\t\\qquad\n\t\\begin{aligned}[c]\n\t\t&\\mathbb{T}_B^L \\Psi_{qp}^{e-} = \\mathbb{T}_B^L \\Psi_{qp}^{h-} \\\\\n\t\t&= \\mathbb{T}_B^R \\Psi^{e-}_N= \\mathbb{T}_B^R \\Psi^{h-}_N = 0.\n\t\\end{aligned}\n\\end{equation*}\n\nWe also consider scattering matrices within the normal region to account for the disorders,\n\\begin{equation*}\n\t\\begin{aligned}[c]\n\t\t\\mathscr{T}^e\n\t\t\\begin{bmatrix}\n\t\t\t\\Psi^{e+}|_{-x}\t\\\\\n\t\t\t\\Psi^{e-}|_{+x}\n\t\t\\end{bmatrix}\n\t\t&=\n\t\t\\begin{bmatrix}\n\t\t\t\\Psi^{e+}|_{+x}\t\\\\\n\t\t\t\\Psi^{e-}|_{-x}\n\t\t\\end{bmatrix}\n\t\\end{aligned}\n\t\\qquad\n\t\\begin{aligned}[c]\n\t\t\\mathscr{T}^h\n\t\t\\begin{bmatrix}\n\t\t\t\\Psi^{h-}|_{+x}\t\\\\\n\t\t\t\\Psi^{h+}|_{-x}\n\t\t\\end{bmatrix}\n\t\t&=\n\t\t\\begin{bmatrix}\n\t\t\t\\Psi^{h-}|_{-x}\t\\\\\n\t\t\t\\Psi^{h+}|_{+x}\n\t\t\\end{bmatrix}\n\t\\end{aligned}\n\\end{equation*}\nNote that, the matrices $\\mathscr{T}^e$ and $\\mathscr{T}^h$ are related by the particle-hole symmetry of the corresponding BdG Hamiltonian.\n\nExplicit expressions of the reflection matrices $\\mathbb{R}^{\\gamma}$ and tunneling matrices $\\mathbb{T}^{L,R}_B$ can be obtained by demanding the continuity of the wave functions across the boundaries in case of JJ based on HES or by using the following boundary conditions in case of JJ with quadratic dispersion\\cite{tinyukova2019andreev}\n\\begin{align}\n\t\\dfrac{\\hbar^2}{2m} \\tau_z \\left[ \\partial_x^{(\\beta)} \\Psi_S^{\\pm} - \\partial_x^{(\\beta)} \\Psi_N^{\\pm} \\right] +i \\beta \\left(\\dfrac{1-\\eta}{2}\\right) \\dfrac{\\Delta_0}{k_F} \\left[ \\cos \\phi_{\\pm} \\tau_x - \\sin \\phi_{\\pm} \\tau_y \\right] \\Psi_S^{\\pm}=0\n\\end{align}\nwhere $\\beta \\in \\{0,1\\}$; $\\eta=1$ for s-wave and $\\eta=-1$ for p-wave superconductivity; $\\phi_{+}=\\phi_2$ and $\\phi_{-}=\\phi_1$; $\\Psi_S$ and $\\Psi_N$ are the wave functions in the superconducting and normal regions respectively.\n\n\\section{Clean junction}\n\\label{Appendix_clean_junction}\nAndreev bound states are the result of multiple Andreev reflections. There are two ways in which Andreev bound state can be formed as discussed in the main text. We shall describe the same processes here with the help of matrix formalism discussed in \\ref{matrixFormalism}.\n\n\\textit{(i) Tunneling of a Cooper pair from left to right:} An electron-like quasiparticle starts at $x=-L\/2$ (i.e. $\\Psi^{e+}_N|_{x=-L\/2}$) and propagates through the normal region and reaches at $x=L\/2$ (i.e. $\\Psi^{e+}_N|_{x=L\/2}=\\mathbb{T}^{1} \\Psi^{e+}_N|_{x=-L\/2}$). It Andreev reflects back as a hole with uni-modular amplitude $r^1_{Ahe}$ (i.e. $r^1_{Ahe}\\Psi^{h-}_N= \\mathbb{R}_A^{1} \\Psi^{e+}_N$) by creating a Cooper pair in the superconducting lead 2 (S2). The reflected hole then travels through the normal region and reaches at $x=-L\/2$ (i.e. $\\Psi^{h-}_N|_{x=-L\/2} = \\mathbb{T}^{1} \\Psi^{h-}_N|_{x=L\/2}$). It then again Andreev reflects as an electron with uni-modular amplitude $r^1_{Aeh}$ (i.e. $r^1_{Aeh} \\Psi^{e+}_N = \\mathbb{R}_A^{1} \\Psi^{h-}_N$) by annihilating a Cooper pair in the superconducting lead 1 (S1). Now for $\\omega \\leq \\Delta_0$, matrices $\\mathbb{R}^{\\gamma}$ and $\\mathbb{T}^{\\gamma}$ are unitary, so it must be\n\\begin{align}\n\t\\Psi^{e+}_N|_{x=-L\/2} = (\\mathbb{R}^{1}\\mathbb{T}^{1}\\mathbb{R}^{1}\\mathbb{T}^{1}) \\Psi^{e+}_N|_{x=-L\/2}.\t\\label{conditionABSfirstType}\n\\end{align}\nThe corresponding Andreev bound state energy can be obtained by solving the determinant condition\n\\begin{align}\n\t\\text{det.}(\\mathbb{I}_{4\\times 4}-\\mathbb{R}^{1}\\mathbb{T}^{1}\\mathbb{R}^{1}\\mathbb{T}^{1}) =0,\n\\end{align}\nwhich gives the ABS energy $\\omega_0^{21}$.\n\n\\textit{(ii) Tunneling of a Cooper pair from right to left:} If a right-moving hole-like quasiparticle starts from $x=-L\/2$ (i.e. $\\Psi^{h+}_N|_{x=-L\/2}$) and completes the cycle after two Andreev reflections, it can transfer a Cooper pair from S2 to S1\n\\begin{align}\n\t\\Psi^{h+}_N|_{x=-L\/2} = (\\mathbb{R}^2\\mathbb{T}^2\\mathbb{R}^2\\mathbb{T}^2) \\Psi^{h+}_N|_{x=-L\/2}.\t\\label{conditionABSsecondType}\n\\end{align}\nThe corresponding Andreev bound state energy can be obtained by solving the equation\n\\begin{align}\n\t\\text{det.}(\\mathbb{I}_{4\\times 4}-\\mathbb{R}^{2}\\mathbb{T}^{2}\\mathbb{R}^{2}\\mathbb{T}^{2}) =0,\n\\end{align}\nwhich gives the ABS energy $\\omega_0^{12}$.\n\nNow, tunneling of a quasiparticle with energy $\\omega>\\Delta_0$ from S1 to S2 can be understood in terms of the matrices $\\mathbb{R}^{\\gamma}$, $\\mathbb{T}^{\\gamma}$ and $\\mathbb{T}_B^{(L,R)}$.\n\n\\textit{(i) Tunneling of an electron (hole)-like quasiparticle from left (right) to right (left):} For a clean junction, an incident electron-like quasiparticle in S1 (i.e. $\\Psi_{qp}^{e+}[\\phi_1]$) can tunnel into S2 as a electron-like quasiparticle (i.e. $\\Psi_{qp}^{e+}[\\phi_2]$) either directly or by any even number of Andreev reflections. Mathematically,\n\\begin{align}\n\t\\chi_{ee}^{21} \\Psi_{qp}^{e+}[\\phi_2]&=\\mathbb{T}_B^R (\\mathbb{T}^1 + \\mathbb{T}^1 \\mathbb{R}^1 \\mathbb{T}^1 \\mathbb{R}^1 \\mathbb{T}^1 + ...)\\mathbb{T}_B^L \\Psi_{qp}^{e+}[\\phi_1] = \\mathbb{T}_B^R \\mathbb{T}^1(\\mathbb{I}-\\mathbb{R}^1 \\mathbb{T}^1 \\mathbb{R}^1 \\mathbb{T}^1)^{-1} \\mathbb{T}_B^L \\Psi_{qp}^{e+}[\\phi_1].\t\\label{electronTransmission}\n\\end{align}\nIt is clear from Eq. (\\ref{electronTransmission}) and (\\ref{conditionABSfirstType}) that the tunneling of an electron-like quasiparticle from S1 to S2 is in correspondence with the Andreev bound state having energy $\\omega_0^{21}$. Solving Eq. (\\ref{electronTransmission}) we can calculate $\\chi_{ee}^{21}$ and hence $\\mathcal{T}_{ee}^{21}$.\n\n\\textit{(ii) Tunneling of an hole (electron)-like quasiparticle from left (right) to right (left):} Similarly, tunneling of a hole-like quasiparticle from S1 to S2 can be mathematically expressed as\n\\begin{align}\n\t\\chi_{hh}^{21} \\Psi_{qp}^{h+}[\\phi_2]&=\\mathbb{T}_B^R (\\mathbb{T}^2 + \\mathbb{T}^2 \\mathbb{R}^2 \\mathbb{T}^2 \\mathbb{R}^2 \\mathbb{T}^2 + ...)\\mathbb{T}_B^L \\Psi_{qp}^{h+}[\\phi_1]= \\mathbb{T}_B^R \\mathbb{T}^2(\\mathbb{I}-\\mathbb{R}^2 \\mathbb{T}^2 \\mathbb{R}^2 \\mathbb{T}^2)^{-1} \\mathbb{T}_B^L \\Psi_{qp}^{h+}[\\phi_1].\t\\label{holeTransmission}\n\\end{align}\nA comparison between Eq. (\\ref{holeTransmission}) and (\\ref{conditionABSsecondType}) clearly indicates the fact that the tunneling of a hole-like quasiparticle from S1 to S2 is in correspondence with the Andreev bound state having energy $\\omega_0^{12}$. Solving Eq. (\\ref{holeTransmission}) we can calculate $\\chi_{hh}^{21}$ and hence $\\mathcal{T}_{hh}^{21}$.\n\n\\section{Significance of the quantity $(k_e-k_h)L\/2$}\n\\label{ApproximationkL}\nWe have assumed the doping of the junction is sufficiently high, so let us retain the expressions of $k_e$ and $k_h$ up to the first order of $\\omega\/\\mu$ for quadratic dispersion relation,\n\\begin{align}\n\tk_e =\\dfrac{\\sqrt{2m}}{\\hbar}\\sqrt{\\mu+\\omega} \\approx \\dfrac{\\sqrt{2m \\mu}}{\\hbar} \\left( 1+\\dfrac{\\omega}{2\\mu} \\right)\t\\text{\t;\t}\tk_h =\\dfrac{\\sqrt{2m}}{\\hbar}\\sqrt{\\mu-\\omega} \\approx \\dfrac{\\sqrt{2m \\mu}}{\\hbar} \\left( 1-\\dfrac{\\omega}{2\\mu} \\right)\n\\end{align}\nNow, we shall consider the length of the junction $L$ to be finite compare to the superconducting coherence length $\\xi=\\hbar \\sqrt{2\\mu\/m}\/\\Delta_0$ so let $L=\\mathrm{x}\\xi$. Now,\n\\begin{align}\n\t\\dfrac{k_e-k_h}{2}L\n\t\\approx \\dfrac{1}{2} \\dfrac{\\sqrt{2m \\mu}}{\\hbar} \\left[ \\left( 1+\\dfrac{\\omega}{2\\mu} \\right)-\\left( 1-\\dfrac{\\omega}{2\\mu} \\right) \\right] \\mathrm{x}\\xi\n\t\\approx \\dfrac{1}{2} \\dfrac{\\sqrt{2m \\mu}}{\\hbar} \\dfrac{\\omega}{\\mu} \\left( \\mathrm{x} \\dfrac{\\hbar}{\\Delta_0} \\sqrt{\\dfrac{2\\mu}{m}} \\right)\t\n\t\\approx \\mathrm{x} \\dfrac{\\omega}{\\Delta_0}.\n\\end{align}\nThus, even for large enough doping, the quantity $(k_e-k_h)L\/2$ is of the order of $\\omega\/\\Delta_0$, and thus cannot be neglected.\n\nFor linear dispersion relation, $k_e =\\frac{\\mu+\\omega}{\\hbar v_F}$ and $k_h =\\frac{\\mu-\\omega}{\\hbar v_F}$, hence, here also $\\frac{k_e-k_h}{2}L \\approx \\mathrm{x} \\frac{\\omega}{\\Delta_0}$.\n\n\\section{Presence of a scatterer in the middle of the junction}\n\\label{Appendix_middle_scatterer}\nStarting with an initial state $\\left( (\\Psi^{e+}_N|_{x=-L\/2}), (\\Psi^{e-}_N|_{x=L\/2}) \\right)^T$, it will come back to the same state after a electron scattering followed by an Andreev reflection, a hole scattering and another Andreev reflection. For $\\omega<\\Delta_0$, these matrices all being unitary, it must be\n\\begin{align}\n\t\\begin{bmatrix}\n\t\t\\Psi^{e+}_N|_{x=-L\/2}\t\\\\\n\t\t\\Psi^{e-}_N|_{x=L\/2}\n\t\\end{bmatrix}\n\t=\n\t\\mathbb{R}^A_P \\mathscr{T}^h_P \\mathbb{R}^A_P \\mathscr{T}^e_P\n\t\\begin{bmatrix}\n\t\t\\Psi^{e+}_N|_{x=-L\/2}\t\\\\\n\t\t\\Psi^{e-}_N|_{x=L\/2}\n\t\\end{bmatrix}\n\\end{align}\nwhere we have defined $\\mathbb{M}_P=\\mathbb{M} \\mathbb{T}^P$. Note that, in the absence of barrier i.e. at $\\mathscr{T}^e=\\mathscr{T}^h=\\mathbb{I}$, all the matrices $\\mathbb{T}^P$, $\\mathbb{R}^A$, $\\mathscr{T}^e$ and $\\mathscr{T}^h$ are block diagonal and the aforesaid two types of ABS ($\\omega_0^{21}$ and $\\omega_0^{12}$) do not interfere. In presence of barrier, finite backscattering (off-diagonal blocks of $\\mathscr{T}^e$ and $\\mathscr{T}^h$) gives rise to the interference between the two types of ABS ($\\omega_0^{21}$ and $\\omega_0^{12}$).\n\nABS energies, in presence of barrier can be obtained by solving the equation\n\\begin{align}\n\t\\text{det}.\\left( \\mathbb{I}_{4\\times 4}-\\mathbb{R}^A_P \\mathscr{T}^h_P \\mathbb{R}^A_P \\mathscr{T}^e_P \\right) =0\n\t\\label{EQ63}\n\\end{align}\nNote that, if we had started with the initial state $\\left( (\\Psi^{h-}|_{x=L\/2}), (\\Psi^{h+}|_{x=-L\/2}) \\right)^T$ then Eq. (\\ref{EQ63}) would have looked like\n\\begin{align}\n\t\\text{det}.\\left( \\mathbb{I}_{4\\times 4}-\\mathbb{R}^A_P \\mathscr{T}^e_P \\mathbb{R}^A_P \\mathscr{T}^h_P \\right) =0\t\\label{EQ64}\n\\end{align}\nIt turns out, the ABS energies, as obtained from (\\ref{EQ63}) or (\\ref{EQ64}) are same.\n\n\nFor energies $\\omega>\\Delta_0$, we define the following matrices\n\\begin{equation*}\n\t\\begin{aligned}[c]\n\t\t\\mathbb{T}^L =\n\t\t\\begin{bmatrix}\n\t\t\t\\mathbb{T}_B^L\t&0\t\\\\\n\t\t\t0\t&\\mathbb{T}_B^L\n\t\t\\end{bmatrix}\n\t\\end{aligned}\n\t\\qquad\n\t\\begin{aligned}[c]\n\t\t\\mathbb{T}^R_e =\n\t\t\\begin{bmatrix}\n\t\t\t\\mathbb{T}_B^R\t&0\t\\\\\n\t\t\t0\t&0\n\t\t\\end{bmatrix}\n\t\\end{aligned}\n\t\\qquad\n\t\\begin{aligned}[c]\n\t\t\\mathbb{T}^R_h =\n\t\t\\begin{bmatrix}\n\t\t\t0\t&0\t\\\\\n\t\t\t0\t&\\mathbb{T}_B^R\n\t\t\\end{bmatrix}\n\t\\end{aligned}\n\\end{equation*}\nWith this, tunneling of a QP from S1 to S2 can be understood as follows:\n\n\\textbf{(i)} An incident electron-like QP in S1 $\\left( (\\Psi_{qp}^{e+}[\\phi_1]),(0) \\right)^T$ can tunnel into S2 as an electron-like QP $\\left( (\\Psi_{qp}^{e+}[\\phi_2]),(0) \\right)^T$ either directly or by any even number of Andreev reflections whereas tunneling of an electron-like QP from S1 into S2 as an hole like QP $\\left( (0),(\\Psi_{qp}^{h+}[\\phi_2]) \\right)^T$ must be mediated by an odd number of Andreev reflections.\n\\begin{equation*}\n\t\\begin{aligned}[c]\n\t\t\\chi_{ee}^{21} \n\t\t\\begin{bmatrix}\n\t\t\t\\Psi_{qp}^{e+}[\\phi_2] \t\\\\\n\t\t\t0\n\t\t\\end{bmatrix}\n\t\t&=\\mathbb{T}_e^R \\mathbb{T}^P \\mathscr{T}^e_P (\\mathbb{B}^e)^{-1} \\mathbb{T}^L\n\t\t\\begin{bmatrix}\n\t\t\t\\Psi_{qp}^{e+}[\\phi_1] \t\\\\\n\t\t\t0\n\t\t\\end{bmatrix}\n\t\\end{aligned}\n\t\\qquad\n\t\\begin{aligned}[c]\n\t\t\\chi_{he}^{21}\n\t\t\\begin{bmatrix}\n\t\t\t0\t\\\\\n\t\t\t\\Psi_{qp}^{h+}[\\phi_2]\n\t\t\\end{bmatrix}\n\t\t&= \\mathbb{T}_h^R \\mathbb{T}^P \\mathscr{T}^h_P \\mathbb{R}^A_P \\mathscr{T}^e_P (\\mathbb{B}^e)^{-1}\\mathbb{T}^L\n\t\t\\begin{bmatrix}\n\t\t\t\\Psi_{qp}^{e+}[\\phi_1]\t\\\\\n\t\t\t0\n\t\t\\end{bmatrix}\n\t\\end{aligned}\n\\end{equation*}\nwhere $\\mathbb{B}^e=\\mathbb{I}_{4\\times 4}-\\mathbb{R}^A_P \\mathscr{T}_P^{h}\\mathbb{R}^A_P\\mathscr{T}_P^{e}$. Solving above equations we can calculate $\\chi_{ee}^{21}$ and $\\chi_{he}^{21}$ and hence we $\\mathcal{T}_{ee}^{21}$ and $\\mathcal{T}_{he}^{21}$.\n\n\\textbf{(ii)} Similarly, tunneling of a hole-like QP from S1 $\\left( (0),(\\Psi_{qp}^{h+}[\\phi_1]) \\right)^T$ into S2 as an hole-like QP $\\left( (0),(\\Psi_{qp}^{h+}[\\phi_2]) \\right)^T$ can be mediated directly or by any even number of Andreev reflections whereas tunneling of a hole-like QP from S1 into S2 as an electron-like QP $\\left( (\\Psi_{qp}^{e+}[\\phi_2]),(0) \\right)^T$ must be mediated by an odd number of Andreev reflections.\n\\begin{equation*}\n\t\\begin{aligned}[c]\n\t\t\\chi_{hh}^{21} \n\t\t\\begin{bmatrix}\n\t\t\t0\t\\\\\n\t\t\t\\Psi_{qp}^{h+}[\\phi_2]\n\t\t\\end{bmatrix}\n\t\t&=\\mathbb{T}_h^R \\mathbb{T}^P \\mathscr{T}^h_P (\\mathbb{B}^h)^{-1} \\mathbb{T}^L\n\t\t\\begin{bmatrix}\n\t\t\t0\t\\\\\n\t\t\t\\Psi_{qp}^{h+}[\\phi_1]\n\t\t\\end{bmatrix}\n\t\\end{aligned}\n\t\\qquad\n\t\\begin{aligned}[c]\n\t\t\\chi_{eh}^{21}\n\t\t\\begin{bmatrix}\n\t\t\t\\Psi_{qp}^{e+}[\\phi_2]\t\\\\\n\t\t\t0\n\t\t\\end{bmatrix}\n\t\t&= \\mathbb{T}_e^R \\mathbb{T}^P \\mathscr{T}^e_P \\mathbb{R}^A_P \\mathscr{T}^h_P (\\mathbb{B}^h)^{-1}\\mathbb{T}^L\n\t\t\\begin{bmatrix}\n\t\t\t0\t\\\\\n\t\t\t\\Psi_{qp}^{h+}[\\phi_1]\n\t\t\\end{bmatrix}\n\t\\end{aligned}\n\\end{equation*}\nwhere $\\mathbb{B}^h=\\mathbb{I}_{4\\times 4}-\\mathbb{R}^A_P \\mathscr{T}_P^{e}\\mathbb{R}^A_P\\mathscr{T}_P^{h}$. Solving above equations we can calculate $\\mathcal{T}_{hh}^{21}$ and $\\mathcal{T}_{eh}^{21}$.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Proofs} \n\\label{sec:appendix_more_dis}\n\n\\printProofs \n\n \\begin{thm} \\label{thm:lyap2} \n Assume $$\\d Z_t = \\eta(Z_t, t) \\dt + \\sigma(\\X_t, t) \\dW_t,~~~~~~~t\\in[0,\\T].$$ \nWe have $Z_\\T \\in A$ with probability one \nif there exists a function $U\\colon \\RR^d \\times [0,1] \\to \\RR$ such that \n\n1) $U(\\cdot, t) \\in C^2(\\RR^d)$ and $U(z, \\cdot) \\in C^1([0,\\T]);$ \n \n 2) $U(z, \\T) \\geq 0$, $z\\in \\RR^d$, and $U(z, \\T) = 0$ implies that $z \\in A$, where $A$ is a measurable set in $\\RR^d$; \n \n 3) There exists a sequence $\\{\\alpha_t$, $\\beta_{t}, \\gamma_t \\colon t\\in[0, \\T] \\}$, such that for %\n $t\\in[0,\\T]$, \n \\bb \n \\E[\\dd_z U(Z_t, t) \\tt \\eta(Z_t, t)] & \\leq - \\alpha_t \\E[U(Z_t, t)] + \\beta_{t}, \\\\ \n \\E[\\partial_t U(Z_t, t) + \\frac{1}{2} \\trace(\\dd_z^2 U(Z_t, t) \\sigma^2(Z_t, t))] & \\leq \\gamma_t; \n\\ee \n\n4) Define $\\zeta_t = \\exp(\\int_0^t \\alpha_s \\d s)$. We assume \n\\bbb \\label{equ:zeta}\n\\lim_{t\\uparrow T} \\zeta_t = +\\infty, ~~~~ \n\\lim_{t\\uparrow T} \\frac{\\zeta_t}{\\int_0^t \\zeta_s (\\beta_s + \\gamma_s) \\d s} = +\\infty. \n\\eee \n\n\\end{thm} \n \n \\begin{proof} \n Following $\\d Z_t = \\eta(Z_t, t) \\dt + \\sigma(\\X_t, t) \\dW_t$, we have by Ito's Lemma, \n \\bb \n \\d U(Z_t, t) \n & = \\dd U(Z_t, t) \\tt (\\eta(Z_t, t) \\dt + \\sigma(\\X_t, t) \\dW_t) + \n \\partial_t U(Z_t, t) \\dt + \n \\frac{1}{2} \\trace(\\dd^2 U(Z_t, t)\\sigma^2(\\X_t, t)) \\dt,\n \\ee \n for $t\\in[0,T]$. \n Taking expectation on both sides, \n $$\n \\frac{\\d}{\\d t} \\E(U(Z_t)) = \n \\E[\\dd_z U(Z_t, t) \\tt \\eta(Z_t, t)] + \n \\E\\left [\\partial_t U(Z_t, t) + \\frac{1}{2} \\trace(\\dd^2 U(Z_t, t)\\sigma^2(\\X_t, t)) \\right].\n $$\n Let $u_t = \\E[U(Z_t, t)]$.\n By the assumption above, we get \n $$\n \\dot u_t \\leq -\\alpha_t u_t + \\beta_t + \\gamma_t. \n $$\nFollowing Gr\u00f6nwall's inequality (see Lemma~\\ref{lem:gronwall} below), we have $\\E[U(Z_\\T, \\T)] = u_\\T = \\lim_{t\\uparrow \\T} u_t \\leq 0$ if \\eqref{equ:zeta} holds. Because $U(z, \\T) \\geq 0$, this suggests that $U(Z_\\T, \\T) = 0$ and hence $Z_\\T \\in A$ almost surely. \n\\end{proof} \n\n\\begin{lem}\\label{lem:gronwall} \nLet $u_t\\in \\RR$ and $\\alpha_t, \\beta_t\\geq 0$, and \n $\\frac{\\d}{\\dt } u_t \\leq - \\alpha_t u_t + \\beta_t$, $t \\in [0,T]$ for $T>0$. \nWe have \n\\bb \nu_t \\leq \\frac{1}{\\zeta_t} (\\zeta_0 u_0 + \\int_0^t \\zeta_s \\beta_s \\d s), \n&&\\text{where}&& \\zeta_t = \\exp(\\int_0^t \\alpha_s \\d s).\n\\ee \nTherefore, we have $\\lim_{t\\uparrow T} u_t \\leq 0$ if \n$$\n\\lim_{t\\uparrow T} \\zeta_t = +\\infty, ~~~~ \n\\lim_{t\\uparrow T} \\frac{\\zeta_t}{\\int_0^t \\zeta_s \\beta_s \\d s} = +\\infty. \n$$\n\\end{lem}\n\\begin{proof} \n Let $v_t = \\zeta_t u_t$, where $\\zeta_t =\\exp(\\int_0^t \\alpha_s \\d s)$ so $\\dot \\zeta_t = \\zeta_t \\alpha_t$. Then \n $$\\frac{\\d}{\\dt} v_t =\\dot \\zeta_t u_t + \\zeta_t \\dot u_t \\leq (\\dot \\zeta_t - \\zeta_t \\alpha_t ) u_t + \\zeta_t \\beta_t = \\zeta_t \\beta_t. \n $$\n So \n $$\n v_t \\leq v_0 + \\beta \\int_0^t \\gamma_s \\d s, \n $$\n and hence \n $$\n u_t \\leq \\frac{1}{\\zeta_t} (\\zeta_0 u_0 + \\int_0^t \\zeta_s \\beta_s \\d s). \n $$\n To make $\\lim_{t\\uparrow T} u_t \\leq 0$, we want \n$$\n\\lim_{t\\uparrow T} \\zeta_t = +\\infty, ~~~~ \n\\lim_{t\\uparrow T} \\frac{\\zeta_t}{\\int_0^t \\zeta_s \\beta_s \\d s} = +\\infty. \n$$\n \\end{proof} \n \n\n\\begin{cor}\nLet $\\d \\Z_t = \\frac{x-\\Z_t}{\\T-t} + \\varsigma_t\\d W_t$ with law $\\Q$. This uses the drift term of Brownian bridge, but have a time-varying diffusion coefficient $\\varsigma_t\\geq0$. \nAssume $\\sup_{t\\in[0,T]}\\varsigma_t <\\infty$. Then $\\Q(Z_\\T = z) = 1$. \n\\end{cor}\n\\begin{proof}\nWe verify the conditions in Theorem~\\ref{thm:lyap2}. \nDefine $U(z,t) = \\norm{x-z}^2\/2$, and $\\eta(z,t) = \\frac{x-\\Z_t}{\\T-t}$. We have \n$\\eta(z,t) \\tt \\dd U(z,t) = - U(z,t)\/(T-t)$. So $\\alpha_t = 1\/(T-t)$. \n\nAlso, $\\partial_t U(z,t)+\\frac{1}{2} \\trace(\\varsigma_t^2 \\dd_z ^2 U(z,t)) = \\frac{1}{2} \\diag(\\varsigma_t^2 I_{d\\times d}) = \\frac{d}{2} \\varsigma_t^2 \\defeq \\beta_t \\leq C <\\infty$. \n\n\nThen $\\zeta_t = \\exp(\\int_0^t \\alpha_s \\d s) = \\frac{\\T}{\\T-t} \\to +\\infty$ as $t\\uparrow T$. \n\nAlso, $\\int_0^t \\zeta_s \\beta_s \\d s \\leq C \\int_0^t \\zeta_s \\d s = C T(\\log(T) - \\log (T-t))$. \nSo \n$$\n\\lim_{t\\uparrow T} \\frac{\\zeta_t}{\\int_0^t \\zeta_s \\beta_s \\d s} \n\\geq \\lim_{t\\uparrow T} \\frac{\\frac{\\T}{\\T-t}}{ C T(\\log(T) - \\log (T-t))} = +\\infty . \n$$\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\nUsing Girsanov theorem, we \n show that introducing arbitrary non-singular changes (as defined below) on the drift and initialization of a process does not change its bridge conditions. \n\\renewcommand{\\breve}{\\tilde}\n \\begin{pro} \\label{thm:perturb}\nConsider the following processes \n\\bb \n& \\Q \\colon ~~~~ Z_t = b_t(Z_t) \\dt + \\sigma_t (Z_t) \\d W_t, ~~~ Z_0 \\sim \\mu_0\\\\ \n&\\tilde \\Q \\colon ~~~~ Z_t = (b_t(Z_t) + \\sigma_t(Z_t) f_t(Z_t)) \\dt + \\sigma_t (Z_t) \\d W_t, ~~~ Z_0 \\sim \\tilde \\mu_0. \n\\ee \nAssume we have $\\KL(\\mu_0~||~\\breve \\mu_0) <+\\infty$ and $\\E_{\\Q}[\\int_0^T \\norm{f_t(Z_{t})}^2] < \\infty$. Then for any event $A$, we have $ \\Q(Z\\in A) = 1$ if and only if $\\breve \\Q(Z\\in A) = 1$. %\n \\end{pro} \n \\begin{proof} \n Using Girsnaov theorem \\cite{Oksendal2013}, we have \n $$\n \\KL(\\Q~||~\\tilde \\Q) = \n \\KL(\\mu_0~||~ \\tilde \\mu_0) + \\frac{1}{2} \n \\E_{\\Q}\\left [\\int_0^\\t \\norm{f_t(Z_t)}_2^2\\dt \\right]. \n $$\n Hence, we have $\\KL(\\Q~||~\\tilde \\Q) < +\\infty$. This implies that $\\Q$ and $\\tilde \\Q$ has the same support. Hence $\\Q( Z \\in A) = 1$ iff $\\tilde \\Q(Z\\in A) = 1$ for any measurable set $A$. \n \\end{proof}\n\nThis gives an immediate proof of the following result that we use in the paper. %\n\n\\begin{cor}\n\\label{app:cor5}\nConsider the following two processes: \n\\bb %\n\\Q^{x, \\mathrm{bb}}: && \n\\d Z_t = \\left ( \\sigma_t^2\\frac{x-Z_t}{\\beta_\\t - \\beta_t} \\right) \\dt + \\sigma_t \\d W_t,~~~~ Z_0 \\sim \\mu_0, \\\\ \n\\Q^{x, \\mathrm{bb}, f}: && \n\\d Z_t = \\left ( \\sigma_t f_t(Z_t) + \\sigma_t^2\\frac{x-Z_t}{\\beta_\\t - \\beta_t} \\right) \\dt + \\sigma_t \\d W_t,~~~~ Z_0 \\sim \\mu_0.\n\\ee \nAssume $\\E_{\\Q^{x, \\mathrm{bb}, f}} [\\norm{f_t(Z_t)}^2] <+\\infty$ and $\\sigma_t > 0$ for $t\\in[0,+\\infty)$. Then $\\Q^{x, \\mathrm{bb}, f}$ is a bridge to $x$. \n\\end{cor} \n\n\n\n\n \n\\section{Experiment}\n\\label{sec:exp}\nWe verify the advantages of our proposed method (Bridge with Priors) in several different domains.\nWe first compare our method with advanced generators (\\emph{e.g.}, diffusion model, normalizing flow, etc.) on molecule generation tasks.\nWe then implement our method on point cloud generations, which targets producing generated samples in a higher quality.\nWe directly compare the performance and also analyze the difference between our energy prior and other energies we discuss in Section \\ref{sec:method}.\n\n\n\\subsection{Force Guided Molecule Generation}\n\nTo demonstrate the efficiency and effectiveness of our bridge processes and physical energy, we conduct experiments on molecule and macro-molecule generation experiments.\nWe follow \\cite{luo2022equivariant} in settings and observe that our proposed prior bridge processes consistently improve the state-of-the-art performance. Diving deeper, we analyze the impact of different energy terms and hyperparameters.\n\n\\textbf{Metrics.} Following \\cite{hoogeboom2022equivariant,satorras2021n}, we use the atom and molecular stability score to \nmeasure the model performance.\nThe atom stability is the proportion of atoms that have the right valency while the molecular stability stands for the proportion of generated\nmolecules for which all atoms are stable.\nFor visualization, we use the distance between pairs of atoms and the atom types to predict bond types, which is a common practice.\nTo demonstrate that our force does not only memorize the data in the dataset, we further calculate and report the RDKit-based \\cite{landrum2013rdkit} novelty score.\nwe extracted 10,000 samples to calculate the above metrics.\n\n\\begin{table}[h]\n \\centering\n \\caption{\\small{Results of our method and several baselines on QM9 and GEOM-DRUG. For QM9, we additionally report the `Novelty' score evaluated by RDKit \\cite{landrum2013rdkit} to show that our method can generate novel molecules. \\RV{We evaluate the percentage of valid and unique molecules out of 12000 generated molecules.}}\n }\n \\scalebox{0.7}{\n \\begin{tabular}{l|cccc|cc}\n \\hline\n & \\multicolumn{4}{c|}{QM9} & \\multicolumn{2}{c}{GEOM-DRUG} \\\\\n \\hline\n & Atom Sta (\\%) $\\uparrow$ & Mol Sta (\\%) $\\uparrow$ & Novelty (\\%) $\\uparrow$ & \\RV{Valid + Unique $\\uparrow$} & Atom Sta (\\%) $\\uparrow$ & Mol Sta (\\%) $\\uparrow$ \\\\\n \\hline\n EN-Flow \\cite{satorras2021n} & 85.0 & 4.9 & 81.4 & \\RV{0.349} & 75.0 & 0.0 \\\\\n GDM \\cite{hoogeboom2022equivariant} & 97.0 & 63.2 & 74.6 & \\RV{-} & 75.0 & 0.0 \\\\\n E-GDM \\cite{hoogeboom2022equivariant} & \\bf{98.7$\\pm$0.1} & 82.0$\\pm$0.4 & 65.7$\\pm$0.2 & \\RV{0.902} & 81.3 & 0.0 \\\\\n \\hline \n Bridge & \\bf{98.7$\\pm$0.1} & 81.8$\\pm$0.2 & 66.0$\\pm$0.2 & \\RV{0.902} & 81.0$\\pm$0.7 & 0.0 \\\\\n Bridge + Force \\eqref{eq:our_energy} & \\bf{98.8$\\pm$0.1} & \\bf{84.6$\\pm$0.3} & 68.8$\\pm$0.2 & \\RV{0.907} & \\bf{82.4$\\pm$0.8} & 0.0 \\\\\n \\hline\n \\end{tabular}}\n \\label{tab:mol_main}\n\\end{table}\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=1.0\\textwidth]{text\/figures\/mol6.png}\n \\vspace{-2\\baselineskip}\n \\caption{\\small{Examples of molecules generated by our method on QM9 and GEOM-DRUG.}}\n \n \\label{fig:mol_example}\n\\end{figure}\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{text\/figures\/traj32.png}\n \n \\caption{\\small{An example of generation trajectory following $\\P^\\theta$ of our method, trained on GEOM-DRUG.}} \n \n \\label{fig:mol_traj}\n\\end{figure}\n\n\n\\begin{table}[h]\n \\centering\n \\caption{\\small{We compare w. and w\/o force results with different discretization time steps.}\n }\n \\scalebox{0.72}{\n \\begin{tabular}{l|cc|cc|cc}\n \\hline \n & \\multicolumn{6}{c}{Time Step} \\\\\n \\hline\n & \\multicolumn{2}{c|}{50} & \\multicolumn{2}{c|}{100} & \\multicolumn{2}{c}{500} \\\\\n \\hline\n & Atom Stable (\\%) & Mol Stable (\\%) & Atom Stable (\\%) & Mol Stable (\\%) & Atom Stable (\\%) & Mol Stable (\\%) \\\\\n \\hline\n EGM & 97.0$\\pm$0.1 & 66.4$\\pm$0.2 & 97.3$\\pm$0.1 & 69.8$\\pm$0.2 & \\bf{98.5$\\pm$0.1} & \\bf{81.2$\\pm$0.1} \\\\\n Bridge + Force \\eqref{eq:our_energy} & \\bf{97.3$\\pm$0.1} & \\bf{69.2$\\pm$0.2} & \\bf{97.9$\\pm$0.1} & \\bf{72.3$\\pm$0.2} & \\bf{98.7$\\pm$0.1} & \\bf{83.7$\\pm$0.1} \\\\\n \\hline\n \\end{tabular}}\n \\label{tab:mol_timestep}\n \n\\end{table}\n\n\\textbf{Dataset Settings}\nQM9 \\cite{ramakrishnan2014quantum} molecular properties and atom coordinates for 130k small molecules with up to 9 heavy atoms with 5 different types of atoms. This data set contains small amino acids, such as GLY, ALA, as well as nucleobases cytosine, uracil, and thymine. We follow the common practice in \\cite{hoogeboom2022equivariant} to split the train, validation, and test partitions, with 100K, 18K, and 13K samples.\nGEOM-DRUG \\cite{axelrod2022geom} is a dataset that contains drug-like molecules. It features 37 million molecular conformations annotated by energy and statistical weight for over 450,000 molecules. \nEach molecule contains 44 atoms on average, with 5 different types of atoms.\nFollowing \\cite{hoogeboom2022equivariant,satorras2021n}, we retain the 30 lowest energy conformations for each molecule.\n\n\\textbf{Training Configurations.}\nOn QM9, we train the EGNNs with 256 hidden features and 9 layers for 1100 epochs, a batch size 64, and a constant learning rate $10^{-4}$, which is the default training configuration. \nWe use the polynomial noise schedule used in \\cite{hoogeboom2022equivariant} which linearly decay from $10^{-2} \/ T$ to 0. We linearly decay $\\alpha$ from $10^{-3} \/ T$ to 0 \\emph{w.r.t.} time step. \nWe set $k=5$ \\eqref{eq:our_energy} by default.\nOn GEOM-DRUG, we train the EGNNs with 256 hidden features and 8 layers with batch size 64, a constant learning rate $10^{-4}$, and 10 epochs.\nIt takes approximately 10 days to train the model on these two datasets on one \\texttt{Tesla V100-SXM2-32GB} GPU.\nWe provide E(3) Equivariant Diffusion Model (EDM) \\cite{hoogeboom2022equivariant} and E(3) Equivariant Normalizing Flow (EN-Flow) \\cite{satorras2021n} as our baselines. Both two are trained with the same configurations as ours.\n\n\n\n\\textbf{Results: Higher Quality and Novelty.}\nWe summarize our experimental results in Table \\ref{tab:mol_main}. We observe that \\textbf{(1)} our method generates molecules with better qualities than the others. On QM9, we notice that we improve the molecule stability score by a large margin (from $82.0$ to $84.6$) and slightly improve the atom stability score (from $98.7$ to $98.8$). \nIt indicates that with the informed prior bridge helps improves the \nquality of the generated molecules. \n\\textbf{(2)} Our method achieves a better novelty score. Compared to E-GDM, we improve the novelty score from $65.7$ to $68.8$. This implies that our introduced energy does not hurt the novelty when the statistics are estimated over the training dataset. Notice that although the GDM and EN-Flow achieve a better novelty score, the sample quality is much worse. The reason is that, due to the metric definition, low-quality out-of-distribution samples lead to high novelty scores.\n\\textbf{(3)} On the GEOM-DRUG dataset, the atom stability is improved from $81.3$ to $82.4$, which shows that our method can work for macro-molecules.\n\\textbf{(4)} We visualize and qualitatively evaluate our generate molecules. Figure \\ref{fig:mol_traj} displays the trajectory on GEOM-DRUG and Figure \\ref{fig:mol_example} shows the samples on two datasets. \n\\textbf{(5)} Bridge processes and E-GDM obtain comparable results on our tested benchmarks.\n\\RV{\\textbf{(6)} \nThe computational load added by introducing prior bridges is small. \nCompared to EGM, we only introduce 8\\% additional cost in training and 3\\% for inference.}\n\n\n\n\n\\textbf{Result: Better With Fewer Time Steps.}\nWe display the performance of our method with fewer time steps in Table \\ref{tab:mol_timestep}. We observe that\n\\textbf{(1)} with fewer time steps, the baseline EGM method gets worse results than 1000 steps in Table \\ref{tab:mol_main}. \n\\textbf{(2)} with 500 steps, our method still keeps a consistently good performance. \\textbf{(3)} with even fewer 50 or 100 steps, our method yields a worse result than 1000 steps in Table \\ref{tab:mol_main}, but still outperforms the baseline method by a large margin. \n\n\n\\begin{table}[t]\n \\centering\n \\caption{\\small{We compare EGM models trained with different force mentioned in Section \\ref{sec:method}.}\n }\n \\scalebox{.78}{\n \\begin{tabular}{l|cc||l|cc}\n \\hline \n Method & Atom Stable (\\%) & Mol Stable (\\%) & Method & Atom Stable (\\%) & Mol Stable (\\%) \\\\\n \\hline\n Force \\eqref{eq:our_energy}, $k=7$ & \\bf{98.8$\\pm$0.1} & \\bf{84.5$\\pm$0.2} & Force \\eqref{eq:our_amber_energy} & 98.7$\\pm$0.1 & 83.1$\\pm$0.2 \\\\ \n Force \\eqref{eq:our_energy}, $k=5$ & \\bf{98.8$\\pm$0.1} & \\bf{84.6$\\pm$0.3} & Force \\eqref{eq:our_amber_energy} w\/o. bond & 98.7$\\pm$0.1 & 82.5$\\pm$0.1 \\\\\n Force \\eqref{eq:our_energy}, $k=3$ & \\bf{98.8$\\pm$0.1} & 83.9$\\pm$0.3 & Force \\eqref{eq:our_amber_energy} w\/o. angle & 98.7$\\pm$0.1 & 82.4$\\pm$0.2 \\\\\n Force \\eqref{eq:our_energy}, $k=1$ & \\bf{98.8$\\pm$0.1} & 82.7$\\pm$0.3 & Force \\eqref{eq:our_amber_energy} w\/o. Long-range & 98.7$\\pm$0.1 & 82.7$\\pm$0.2 \\\\\n \\hline\n \\end{tabular}}\n \\label{tab:mol_force}\n\\end{table}\n\n\\textbf{Ablation: Impacts of Different Energies.}\nWe apply several energies we discuss in Section \\ref{sec:method}, and compare them on the QM9 dataset.\n\\textbf{(1)} We notice that our energy \\eqref{eq:our_energy} gets better performance with larger $k$ when $k \\leq 5$. $k=7$ achieves comparable performance as $k=5$.\nLarger $k$ also requires more computation time, which yields a trade-off between performance and efficiency.\n\\textbf{(2)} For \\eqref{eq:our_amber_energy}, once removing a typical term, the performance drops.\n\\textbf{(3)} In all the cases, applying additional forces outperforms the bridge processes baseline w\/o. force.\n\n\\subsection{Force Guided Point Cloud Generation}\n\n\nWe apply uniformity-promoting priors \nto point cloud generation. %\nWe apply our method based on the diffusion model for point cloud generation introduced by point cloud diffusion model ~\\cite{luo2021diffusion} and compare it with the original diffusion model as well as the case of bridge processes w\/o. force prior.\nWe observe that our method yields better results in various evaluation metrics under different setups.\n\n\n\\begin{table}[t!]\n \\centering\n \\caption{ \\small{Point cloud generation results. CD is multiplied by $10^3$, EMD is multiplied by $10$.}\n }\n \\scalebox{0.87}{\n \\begin{tabular}{l|l|cccc|cccc}\n \\hline\n & & \\multicolumn{4}{c|}{10 Steps} & \\multicolumn{4}{c}{100 Steps} \\\\\n \\hline\n & & \\multicolumn{2}{c}{MMD $\\downarrow$} & \\multicolumn{2}{c|}{COV $\\uparrow$} & \\multicolumn{2}{c}{MMD $\\downarrow$} & \\multicolumn{2}{c}{COV $\\uparrow$} \\\\\n \\cline{3-10}\n & & CD & EMD & CD & EMD & CD & EMD & CD & EMD \\\\\n \\hline\n \\multirow{4}{*}{Chair} & Diffusion \\cite{luo2021diffusion} & 14.01 & 3.23 & 32.72 & 29.36 & 12.32 & 1.79 & 47.41 & \\textbf{47.59} \\\\\n & Bridge & 13.04 & 2.14 & 46.01 & 42.59 & 12.47 & 1.85 & 47.83 & 47.13 \\\\\n \\cline{2-10}\n & + Riesz & 12.84 & 1.95 & 47.21 & 44.31 & 12.31 & 1.82 & 48.14 & 47.42 \\\\\n & + Statistic & \\textbf{12.65} & \\textbf{1.84} & \\textbf{47.58} & \\textbf{45.23} & \\textbf{12.25} & \\textbf{1.78} & \\textbf{48.39} & 47.56 \\\\\n \\hline\n \\hline\n \\multirow{4}{*}{Airplane} & Diffusion \\cite{luo2021diffusion} & 3.71 & 1.31 & 43.12 & 39.94 & 3.28 & \\textbf{1.04} & \\textbf{48.74} & 46.38 \\\\\n & Bridge & 3.44 & 1.24 & 46.90 & 43.46 & 3.37 & 1.08 & 47.11 & 46.17 \\\\\n \\cline{2-10}\n & + Riesz & 3.39 & 1.20 & \\textbf{47.11} & 43.12 & \\textbf{3.24} & 1.09 & 48.62 & 46.23 \\\\\n & + Statistic & \\textbf{3.30} & \\textbf{1.12} & \\textbf{47.02} & \\textbf{44.67} & \\textbf{3.24} & 1.06 & 48.53 & 46.73 \\\\\n \\hline\n \\end{tabular}}\n \n \\label{tab:pc_main}\n\\end{table}\n\n\\textbf{Dataset.} \nWe use the ShapeNet~\\cite{chang2015shapenet} dataset \nfor point cloud generation. \nShapeNet contains 55 categories. \nWe select Airplane and Chair,\nwhich are the two most common categories to evaluate in recent point cloud generation works~\\cite{cai2020learning,luo2021diffusion,yang2019pointflow,zhou20213d}. We construct the point clouds following the setup in ~\\cite{luo2021diffusion}, split the train, valid and test dataset in $80\\%$, $15\\%$ and $5\\%$ and samples 2048 points uniformly on the mesh surface.\n\n\n\n\\textbf{Evaluation Metric.} We evaluate the generated shape quality in two aspects following the previous works, including the minimum matching distance (MMD) and coverage score (COV). These scores are the two most common practices in the previous works. We use Chamfer Distance (CD) and Earth Mover's Distance (EMD) as the distance metric to compute the MMD and COV.\n\n\\textbf{Experiment Setup.} We train the model with two different configurations. The first one uses exactly the same experiment setup configuration introduced in~\\cite{luo2021diffusion}. Thus, we use the same model architecture and train the model in 100 diffuse steps with a learning rate $2 \\times 10^{-3}$, batch size 128, and linear noise schedule from $0.02$ to $10^{-4}$. We initial $\\alpha$ with 0.1 and jointly learn it with the network. For the second setup, to evaluate the better converge speed of our method, we decrease the diffuse step from 100 to 10 with other settings the same. For the diffusion model baseline, we reproduce the number by directly using the pre-trained model checkpoint and testing it on the test set provided by the official codebase.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=1.0\\textwidth]{text\/figures\/pcv.pdf}\n \n \\caption{\\small{From left to right are examples of point clouds generated by \\cite{luo2021diffusion}, our method with uniformative bridge ($f_t=0$), \n bridge with Riesz energy ($f_t=-\\dd E_{\\mathrm{Riesz}}$) and with KNN energy ($f_t = -\\dd E_{knn}$). \n We see that the Riesz and KNN energies \n yield more uniformly distributed points. \n Riesz energy sometimes creates additional outlier points due to its repulsive nature. \n }}\n\n \\label{fig:pc}\n\\end{figure}\n\n\\textbf{Result.} We show our experimental result in Table~\\ref{tab:pc_main}. We see that\n\\textbf{(1)} In the 10 steps setup, all variants of our approach are clearly stronger than the diffusion model. \nWith force added, our method with physical prior achieves nearly the same performance as the 100 steps setup.\n\\textbf{(2)} In the 100 steps setup, adding energy potential as prior improves the bridge process performance and further let it beat the diffusion model baseline.\\textbf{(3)}\nSince the test points are uniformly sampled on the surface, a better score indicates a closer point distribution to the reference set. Further, when compare with Riesz energy \\eqref{eq:riesz_energy}, statistic gap energy \\eqref{eq:knn_energy} performs better. \nOne explanation is the Riesz energy pushes the points to some outlier position in sample generate samples, while statistic gap energy is more robust. We also show visualization samples in Figure~\\ref{fig:pc}. \n\n\n\n\n\n\n\n\\section{Introduction}\nAs exemplified by the success of AlphafoldV2 \\cite{jumper2021highly}\nin solving protein folding,\ndeep learning techniques\nhave been creating new frontiers\non molecular sciences \\cite{wang2017accurate}.\nIn particular, the problem of building deep generative models\nfor molecule design has attracted increasing interest with\n a magnitude of applications in physics, chemistry, and drug discovery \\cite{alcalde2006environmental,anand2022protein,lu2022machine}.\nRecently, diffusion-based generative model have been applied to molecule generation problems \\cite{de2021diffusion,hoogeboom2022equivariant} and obtain superior performance.\nThe idea of these methods is to corrupt the data with diffusion noise and learn a neural diffusion model to revert the corruption process to\ngenerate meaningful data from noise.\nA key challenge in deep generative models for molecule and 3D point generation is to efficiently incorporate strong prior information to reflect the physical and problem-dependent statistical properties of the problems at hand.\nIn fact, a recent fruitful line of research \\cite{du2022molgensurvey,liu2021pre, satorras2021n, gong2022fill}\nhave shown promising results by introducing inductive bias\ninto the design of model architectures to reflect physical constraints such as SE(3) equivariance.\nIn this work, we present a different paradigm\nof prior incorporation tailored to diffusion-based generative models,\nand leverage it to yield substantial improvement in both\n1) high-quality and stable molecule generation and 2) uniformity-promoted point cloud generation.\nOur contributions are summarized as follows.\n\n\\textbf{Prior Guided Learning of Diffusion Models.}\nWe introduce a simple and flexible framework for injecting informative problem-dependent prior and physical information when learning diffusion-based generative models.\nThe idea is to elicit and inject prior information regarding how the diffusion process should look like for generating each given data point,\nand train the neural diffusion model to imitate the prior processes.\nThe prior information is presented in the form of diffusion bridges\nwhich are diffusion processes that are guaranteed to generate each data point at the fixed terminal time.\nWe provide a general Lyapunov approach for constructing and determining bridges and leverage it to develop a way to systematically incorporate prior information into bridge processes.\n\n\\textbf{Physics-informed Molecule Generation.} We apply our method to molecule generation. We propose a number of energy functions for incorporating physical and statistical prior information. Compared with existing physics-informed\nmolecule\ngeneration methods \\cite{de2021diffusion, gnaneshwar2022score,luo2022equivariant, gnaneshwar2022score},\nour method modifies the training process, rather than imposing constraints on the model architecture.\nExperiments show that our method achieves current state-of-the-art generation quality and stability on multiple test benchmarks of molecule generation.\n\n\\textbf{Uniformity-promoting Point Generation.}\nA challenging task in physical simulation,\ngraphics, 3D vision is to generate\npoint clouds for representing real objects ~\\cite{achlioptas2018learning,cai2020learning,luo2021diffusion,yang2019pointflow}. A largely overlooked problem of existing approaches is that they tend to generate\nunevenly distributed points, which lead to unrealistic shapes and make the subsequent processing and applications, such as mesh generation, challenging and inefficient.\nIn this work, we leverage our framework to introduce uniformity-promoting forces into the prior bridge of diffusion generative models. This yields a simple and efficient approach to generating regular and realistic point clouds in terms of both shape and point distribution.\n\n\n\\section{Conclusion and Limitations}\n\\vspace{-2mm}\nWe propose a framework to inject informative priors into\nlearning neural parameterized diffusion models, with applications to both\nmolecules and 3D point cloud generation.\nEmpirically, we demonstrate that our method has the advantages such as better generation quality, less sampling time and easy-to-calculate potential energies.\nFor future works, we plan to 1) study the relation between different types of forces for different domain of molecules, 2) study how to generate valid proteins in which the number of atoms is very large,\nand 3) apply our method to more realistic applications such as antibody design or hydrolase engineering.\n\nIn both energy functions in \\eqref{eq:our_amber_energy} and \\eqref{eq:our_energy}, we do not add torsional angle related energy \\cite{jing2022torsional} mainly because it is hard to verify whether four atoms are bonded together\nduring the stochastic process.\nWe plan to study how to include this for better performance in future works.\n\nAnother weakness of deep diffusion bridge processes are their computation time. Similar to previous diffusion models \\cite{luo2022equivariant}, it takes a long time to train a model.\nWe attempted to speed the training up by using a large batch size (\\emph{e.g.}, 512, 1024) but found a performance drop. An important future direction is to study methods to distribute and accelerate the training. %\n\n\\paragraph{Acknowledgements}\nAuthors are supported in part by CAREER-1846421, SenSE-2037267, EAGER-2041327, and Office of Navy Research, and NSF AI Institute for Foundations of Machine Learning (IFML). We would like to thank the anonymous reviewers and the area chair for their thoughtful comments and efforts towards improving our manuscript.\n\n\n\n\n\\section{Method}\n\\label{sec:method}\n\n\n\n\nWe first introduce the definition of diffusion generative models and discuss how to learn these models with prior bridges.\nAfter introducing the training algorithm for deep diffusion generative models, we discuss the energy functions that we apply to molecules and point cloud examples.\n\n\\subsection{Learning Diffusion Generative Models with Prior Bridges} \n\n\\textbf{Problem Definition.}\nWe aim at learning a generative model given a dataset $\\{x\\datak\\}_{k=1}^n$ drawn from an unknown distribution $\\tg$ on $\\RR^d$. \nA diffusion model on time interval $[0,\\t]$ is\n$$\n\\P^\\theta \\colon ~~~~~\\d Z_t = s_t^\\theta(Z_t) \\dt + \\sigma_t(Z_t) \\d W_t,\n~~~ \\forall t\\in[0,\\t], ~~ \n~~~ Z_0 \\sim \\mu_0, \n$$\nwhere $W_t$ is a standard Brownian motion; \n$\\sigma_t\\colon \\RR^d\\to \\RR^{d\\times d}$ \nis a positive definition covariance coefficient; \n$s^\\theta_t\\colon \\RR^d\\to \\RR^{d}$ is parameterized as a neural network with parameter $\\theta$, and $\\mu_0$ is the initialization. Here we \nuse $\\P^\\theta$ to denote the distribution of the whole trajectory $Z=\\{Z_t\\colon t\\in[0,1] \\}$, and $\\P^\\theta_t$ the marginal distribution of $Z_t$ at time $t$. \nWe want to learn the parameter $\\theta$ such that the distribution $\\P_\\t^\\theta$ of the terminal state $Z_\\t$ equals the data distribution $\\tg$. \n\n\\textbf{Learning Diffusion Models.}\nThere are an infinite number of diffusion processes $\\P^\\theta$ that yield the same terminal distribution but have different distributions of latent trajectories $Z$. \nHence, it is important to \ninject problem-dependent prior information \ninto the learning procedure to obtain a model $\\P^\\theta$ that simulate the data for the problem at hand fast and accurately. \nTo achieve this, we elicit an \n\\emph{imputation} process $\\Q^x$ for each $x\\in \\RR^d$, \nsuch that a draw $Z\\sim \\Q^x$ \nyields trajectories that 1) are consistent with $x$ in that $\\Z_\\t = x$ deterministically, and \n2) reflect important physical and statistical prior information on the problem at hand.\n\nFormally, if $\\Q^x(\\Z_\\t = x) = 1$, we call that $\\Q^x$ is a bridge process pinned at end point $x$, or simply an $x$-bridge. Assume we first generate a data point $x\\sim \\tg$, and then draw a bridge $Z\\sim \\Q^x$ pinned at $x$, then the distribution of $Z$ is a mixture of $\\Q^x$ with $x$ drawn from the data distribution: $\\Q^{\\tg} := \\int \\Q^x(\\cdot) \\tg(\\dx)$. \n\nA key property of $\\Q^\\tg$ is that its terminal distribution equals the data distribution, i.e., $\\Q_\\t^\\tg = \\tg$. Therefore, we can learn the diffusion model $\\P^\\theta$ by fitting the trajectories drawn from $\\Q^\\tg$ with the ``backward'' procedure above. This can be formulated by maximum likelihood or equivalently minimizing the KL divergence: \n$$\n\\min_{\\theta}\\left \\{\\L(\\theta)\\defeq \\KL(\\Q^\\tg~||~ \\P^\\theta) \\right\\}.\n$$\nFurthermore, assume that the bridge $\\Q^x$ is a diffusion model of form \n\\begin{align}\n \\Q^x \\colon ~~~~\n \\d Z_t = b_t(Z_t ~|~ x) \\dt + \\sigma_t(Z_t) \\d W_t, ~~~ \\Z_0 \\sim \\mu_0,\n \\label{equ:Qx}\n\\end{align}\nwhere $b_t(Z_t~|~x)$ is an $x$-dependent drift term need to carefully designed to both satisfy the bridge condition and incorporate important prior information (see Section~\\ref{sec:lyap}). \nAssuming this is done, using Girsanov theorem \\cite{mao2007stochastic}, \nthe loss function $\\L(\\theta)$ can be reformed into a form ofdenoised score matching loss of \\cite{song2020denoising,song2020score, song2021maximum}: \n\\bbb \\label{equ:mainloss} \n\\L(\\theta) \n= \\E_{Z \\sim \\Q^\\tg} \n\\left [\n\\frac{1}{2} \\int_0^\\t \\norm{\\sigma(Z_t)^{-1}(s_t^\\theta(Z_t) - b_t(Z_t~|~Z_\\t))}_2^2 \\dt \\right ] + \\mathrm{const}, \n\\eee \nwhich is a score matching term between $s^\\theta$ and $b$.\nThe const term contains the log-likelihood for the initial distribution $\\mu_0$, which is a const in our problem.\nHere $\\theta\\true$ is an global optimum of $\\L(\\theta)$ if \n$$\ns^\\thetat_t(z) = \n\\E_{Z\\sim \\Q^\\tg} [b_t( z | Z_\\t)~|~ Z_t = z].\n$$\nThis means that %\nthe drift term $s_t^\\theta$ should be matched with the conditional expectation of $b_t(z | x)$ with $x= Z_\\t$ conditioned on $Z_t = z$. \n\n\n\n\n\\begin{rem}\nThe SMLD can be viewed as a special case of this framework when we take $\\Q^x$ to be a time-scaled Brownian bridge process: \n\\bbb \\label{equ:ztdfd}\n\\Q^{x, \\mathrm{bb}}: && \n\\d Z_t = \\sigma_t^2\\frac{x-Z_t}{\\beta_\\t - \\beta_t} \\dt + \\sigma_t \\d W_t,~~~~ \nZ_0 \\sim \\normal(x, \\beta_\\t), \n\\eee \nwhere $\\sigma_t \\in [0,+\\infty)$ and $\\beta_t = \\int_0^t \\sigma_s^2 \\d s$. \nThis can be seen by the fact that the time-reversed process $\\rev Z_{t} \\defeq Z_{1-t}$ follows the simple time-scaled Brownian motion $\\d \\rev Z_t = \\sigma_{\\t -t} \\d \\rev W_t$ starting from the data point $\\rev Z_0 = x$, where $\\rev W_t$ is another standard Brownian motion. \nThe Brownian bridge achieves $Z_\\t = x$ because the magnitude of the drift force is increasing to infinite when $t$ is close to time $\\T.$ \\\\ \n\\end{rem}\n\nHowever, \nthe bridge of SMLD above is a relative simple and uninformative process and does not incorporate problem-dependent prior information into the learning procedure. \nThis is also the case of the other standard diffusion-based models \\cite{song2020score}, such as denoising diffusion probabilistic models (DDPM) which can be shown to use a bridge constructed from an Ornstein\u2013Uhlenbeck process. \nWe refer the readers to \\cite{peluchetti2022nondenoising}, \nwhich provides a similar forward time bridge framework \nfor learning diffusion models, \nand it recovers the bridges in SMLD and DDPM as a conditioned stochastic process derived using the $h$-transform technique \\cite{doob1984classical}. However, \nthe $h$-transform method \nis limited to elementary stochastic processes \nthat have an explicit formula of the transition probabilities, and can not incorporate complex physical statistical prior information. \nOur work strikes to construct and use a broader class of more complex \nbridge processes that both reflect problem-dependent prior knowledge and satisfy the endpoint condition $\\Q^x(Z_\\t = x)=1$. \nThis necessitate systematic techniques for constructing a large family of bridges, as we pursuit in Section~\\ref{sec:lyap}.\n \n\\subsection{Designing Informative Prior Bridges} \n\\label{sec:lyap} \nThe key to realizing the general prior-informed learning framework above is to have a general and user-friendly technique to design $\\Q^x$ in \\eqref{equ:Qx} to ensure the bridge condition $\\Q^x(Z_\\t = x) = 1$ while \nleaving the flexibility of incorporating rich prior information. \nTo achieve this, we first develop a general criterion of bridges based on a \\emph{Lyapunov function method} which allows us to identify a very general form of bridge processes; we \nthen propose a particularly simple family of bridges that we use in practice by introducing modification to Brownian bridges. \n\n\\begin{mydef}[\\textbf{Lyapunov Functions}] \\label{def:lyap}\nA function $U_t(z)$ is said to be a Lyapunov function for set $A\\subset \\RR^d$ at time $t=\\t$ if \n$U_\\t (z) \\geq 0$ for $\\forall z\\in \\RR^d$ and $U_\\t(z) = 0$ if and only if $z \\in A$.\n\\end{mydef} \nIntuitively, a diffusion process $\\Q$ is a bridge $A$, i.e., $\\Q(\\Z_\\t \\in A) =1$, if it (at least) approximately follows the gradient flow of a Lyapunov function and the magnitude (or step size) or the gradient flow should increase with a proper magnitude in order to ensure that $\\Z_t \\in A$ at the terminal time $t=\\t$. Therefore, we identify a general form of bridges to $A$ as follows: \n \\bbb \\label{equ:duz}\n \\Q^A:&&\n \\d Z_t =\\left( -\\alpha_t \\dd_z U_t(Z_t) + \\nu_t(Z_t)\\right ) \\dt \n + \\sigma_t(\\X_t) \\dW_t,~~~~~~~t\\in[0,1],~~~ Z_0 \\sim \\mu_0, \n \\eee \n where $\\alpha_t>0$ is the step size of the gradient flow of $U$ and $\\nu$ is an extra perturbation term. \n The step size $\\alpha_t$ should increase to infinity as $t\\to \\T$ sufficiently fast to dominate the effect of the diffusion term $\\sigma_t \\d W_t$ and the perturbation $\\nu_t \\d t$ term to ensure that $U$ is minimized at time $t = \\t$. \n \n \\begin{theoremEnd}[\\isproofhere]{pro}\n \\label{pro:lya}\n Assume $U_t(z) = U(z,t)$ is a Lyapunov function \n of a measurable set $A$ at time $\\t$ and $U(\\cdot, t) \\in C^2(\\RR^d)$ and $U(z, \\cdot) \\in C^1([0,\\T]).$ %\nThen, $\\Q^A$ in \\eqref{equ:duz} is an bridge to $A$, i.e., \n$\\Q^A(Z_\\T \\in A) = 1$, if the following holds: %\n\n 1) $U$ follows an (expected) Polyak-Lojasiewicz condition: $\\E_{\\Q^A}[U_t(Z_t)] -\\norm{\\dd_z U_t(Z_t)}^2]\\leq 0, \\forall t.$\n \n 2) Let $\\beta_t = \\E_{\\Q^A}[\\dd_z U_t(Z_t)\\tt \\nu_t(Z_t)]$, and $\\gamma_t = \\E_{\\Q^A}[\\partial_t U_t(Z_t) + \\frac{1}{2} \\trace(\\dd_z^2 U_t(Z_t) \\sigma^2_t(Z_t))]$, and \n $\\zeta_t = \\exp(\\int_0^t \\alpha_s \\d s)$. Then \n$\\lim_{t\\uparrow 1} \\zeta_t = +\\infty,$ and \n$\\lim_{t\\uparrow 1} \\frac{\\zeta_t}{\\int_0^t \\zeta_s (\\beta_s + \\gamma_s) \\d s} = +\\infty. $\n\\end{theoremEnd} \n\\begin{proofEnd}\nIt is a direct result of Theorem~\\ref{thm:lyap2}. \n\\end{proofEnd}\nBrownian bridge can be viewed as the case when $U_t(z) =\\norm{x-z}^2\/2$ and $\\alpha_t = \\sigma_t^2\/(\\beta_\\t-\\beta_t)$, and $\\nu = 0$. Hence simply introducing an extra drift term into bridge bridge yields that a broad family of bridges to $x$: \n\\bbb \\label{equ:qxbbf}\n\\Q^{x, \\mathrm{bb}, f}: && \n\\d Z_t = \\left ( \\sigma_t f_t(Z_t) + \\sigma_t^2\\frac{x-Z_t}{\\beta_\\t - \\beta_t} \\right) \\dt + \\sigma_t \\d W_t,~~~~ Z_0 \\sim \\mu_0.\n\\eee \n In Appendix~\\ref{thm:perturb} and \\ref{app:cor5},\nwe show that $\\Q^{x, \\mathrm{bb}, f}$ is a bridge to $x$ if $\\E_{\\Q^{x,\\mathrm{bb}}}[\\norm{f_t(Z_t)}^2]<+\\infty$ and $\\sigma_t>0,\\forall t$, which is very mild condition and is satisfied for most practical functions. \nThe intuition is that the Brownian drift $\\sigma^2_t\\frac{x-Z_t}{\\beta_1-\\beta_t}$ is singular and grows to infinite as $t$ approaches $1$. \nHence, introducing an $f$ into the drift would not change of the final bridge condition, unless $f$ is also singular and has a magnitude that dominates the Brownian bridge drift as $t\\to 1$. \n\nTo make the model $\\P^\\theta$ compatible \nwith the physical force $f$, \nwe assume the learnable drift has a form of $s_t^\\theta(z) = \\alpha f_t(z) + \\tilde s_t^\\theta(z)$ where $\\tilde s$ is a neural network (typically a GNN) and $\\alpha$ can be another learnable parameter or a pre-defined parameter. \nPlease refer to algorithm \\ref{alg:learning} and Figure \\ref{fig:algorithm_demo} for descriptions about our practical algorithm.\n\n\n\n\n\n\n \n\\begin{algorithm}[h] \n\\label{alg:learning}\n\\caption{Learning diffusion generative models.} %\n\\begin{algorithmic}\n\\STATE \\textbf{Input}: \nGiven a dataset $\\{x\\datak\\}$, $\\Q^x$ the bridge in \\eqref{equ:qxbbf}, and a problem-dependent prior force $f$,v %\nand a diffusion model $\\P^\\theta$. \n\\STATE \\textbf{Training}: Estimate $\\theta$ by minimizing $\\L(\\theta)$ in \\eqref{equ:mainloss} with stochastic gradient descent and time discretization. \n\\STATE \\textbf{Sampling}: Simulate from $\\P^\\theta$.\n\\end{algorithmic}\n\\end{algorithm} \n\n \n\\begin{figure}[t]\n \\centering\n \\vspace{-5pt}\n \\includegraphics[width=1.\\linewidth]{text\/figures\/pipeline.pdf}\n\n \\caption{\\small{An overview of our training pipeline with molecule generation as an example. Initialized from a given distribution, we pass the data through the network multiple times, and finally get the meaningful output.}}\n \\label{fig:algorithm_demo}\n\n\\end{figure}\n \n\\section{Molecule and 3D Generation with Informative Prior Bridges}\n\nWe apply our method to the molecule \ngeneration as well as point cloud generation. \nInformative physical or statistical priors\nthat reflects the underlying real physical structures \ncan be particularly beneficial for molecule generation as we show in experiments. \n\nIn our problem, each data point $x$ is a collection of\n atoms of different types, more generally marked points, in 3D Euclidean space. \nIn particular, we have $x = [x^r_i, x^h_i]_{i=1}^m$, \nwhere $x^r_i \\in \\RR^3$ is the coordinate of the $i$-th atom, and $x^h_i \\in \\{e_1,\\ldots, e_k\\}$ where each $e_i=[0\\cdots 1\\cdots0]$ is the $i$-th basis vector of $\\RR^k$, which \nindicates the type of the $i$-th atom of $k$ categories.\nTo apply the diffusion generative model, we treat $x^h_i$ as a continuous vector in $\\RR^r$ and round it to the closest basis vector when we want to output a final result or have computations that depend on atom types (e.g., calculating an energy function as we do in sequel). \nSpecifically, for a continuous $x^h_i\\in \\RR^k$, we denote by $\\hat x^h_i = \\ind(x^h_i = \\max(x^h_i))$ the discrete type rounded from it by taking the type with the maximum value. \nTo incorporate priors, we design an energy function $\\eng(x)$ and incorporate $f_t( \\cdot) =-\\dd \\eng( \\cdot)$ into the Brownian bridge \\eqref{equ:qxbbf} to guide the training process. \nWe discuss different choices of $\\eng$ in the following. \n\n\n\n\n\n\n\n\n\\subsection{Prior Bridges for Molecule Generation}\n\nPrevious prior guided molecule or protein 3D structure generation usually depends on pre-defined energy or force \\cite{ luo2021predicting,xu2022geodiff}.\nWe introduce our two potential energies. One is formulated inspired by previous works in biology, and the other is an $k$ nearest neighbour statistics directly obtained from the data.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\paragraph{AMBER Inspired Physical Energy.}\nAMBER \\cite{duan2003point} is a family of force fields for molecule simulation. It is designed to provide a computationally efficient tool for modern chemistry-molecular dynamics and free energy calculations. \nIt consists of a number of important forces, including \nthe bond energy, angular energy, torsional energy, \nthe van der Waals energy and \nthe Coulomb energy. \nInspired by AMBER, \nwe propose to incorporate the following energy term into the bridge process: \n\\begin{align}\n \\eng(x) = E_{bond}(x) + E_{angle}(x) + E_{LJ}(x) + E_{Coulomb}(x). \n \\label{eq:our_amber_energy}\n\\end{align}\n$\\bullet$ The bond energy is $ E_{bond}(x) = \\sum_{ij\\in bond(x)} (\\mathrm{Len}(x^r_{ij})- \\ell(\\hat x^h_i, \\hat x^h_j))^2$, where \n$\\mathrm{Len}(x^r_{ij}) = \\norm{x^r_i - x^r_j}$, and \n$bond(x)$ denotes the set of \nbonds from $x$, which is set to be the set of atom pairs with a distance smaller than $1.15$ times the covalent radius; the $\\ell^0(r, c)$ denotes the expected bond length between atom type $r$ and $c$, which we calculate as side information from the training data.\n\n$\\bullet$ \nThe angle energy \nis $ E_{angle}(x) = \\sum_{ijk\\in angle(x)} \n(\\mathrm{Ang}({x^r_{ijk}})- \\omega^0(\\hat x^h_{ijk}))^2$, \nwhere $angle(x)$ \ndenotes the set of \nangles between two neighbour bonds in $bound(x)$, and \n$\\mathrm{Ang}({x^r_{ijk}})$ denotes the angle formed by vector $x^r_i-x^r_j$ and $x^r_k - x^r_j$, and $\\omega^0(\\hat x^h_{ijk})$ is the expected angle between atoms of type $\\hat x^h_i$, $\\hat x^h_j$, $\\hat x^h_k$, which we calculate as side information from the training data.\n\n$\\bullet$ \nThe Lennard-Jones (LJ) energy is defined by $\\eng_{LJ}(x) = \\sum_{i\\neq j} e(\\norm{x^r_i - x^r_j})$ and $e(\\ell) = (\\sigma\/\\ell)^{12} - 2(\\sigma\/\\ell)^6$. \nThe parameter $\\sigma$ is an approximation for average nucleus distance.\n\n$\\bullet$ \nThe nuclei-nuclei repulsion (Coulomb) electromagnetic potential energy is $E_{Coulomb}(x) = \\kappa \\sum_{ij}q(\\hat x^h_i) q(\\hat x^h_j)\/\\norm{x^r_i-x^r_j}$, where $\\kappa$ is Coulomb constant and \n$q(r)$ denotes the point charge of atom of type $r$, which depends on the number of protons.\n\n\n\n\\textbf{Statistical Energy. }\nWhen accurate physic laws are unavailable, \nmolecular geometric statistics, \nsuch as bond lengths, bond angles, and torsional angles, etc, \ncan be directly calculated from the data and shed important insights on the system \\cite{cornell1995second, jorgensen1996development, manaa2002decomposition}. \nWe propose to design a prior energy function in bridges by directly calculate these statistics over the dataset. %\n\nSpecifically, we assume that the lengths and angles of each type of bond follows a Gaussian distribution that we learn from the dataset, and define the energy function as the negative log-likelihood:\n\\begin{align}\nE_{stat}(x) \n= \\sum_{ij\\in knn(x)} \n\\frac{1}{\\hat \\sigma_{\\hat x^h_{ij}}^2}\n\\norm{\\mathrm{Len}(x^r_{ij}) - \\hat \\mu_{\\hat x^h_{ij}}}^2 \n+ \\sum_{ij,jk\\in knn(x)} \n\\frac{1}{\\sigma_{\\hat x^h_{ijk}}^2}\n\\norm{\\mathrm{Ang}(x^r_{ijk}) - \\mu_{\\hat x^h_{ijk}}}^2,\n\\label{eq:our_energy}\n\\end{align}\nwhere $knn(x)$ denotes the K-nearest neighborhood graph constructed based on the distance matrix of $x$; \nfor each pair of atom types $r,c\\in[k]$, \n$\\hat \\mu_{rc}$ and $\\hat \\sigma_{rc}^2$ denotes \nempirical mean and variance of length of $rc$-edges in the dataset; for each triplet $r,c,r'\\in[k]$, \n$\\hat \\mu_{rcr'}$ and $\\hat \\sigma_{rcr'}^2$ is \nthe empirical mean and variance of angle betwen $rc$ and $cr'$ bonds. \n\nIntuitively, depending on the atom type and order of the nearest neighbour, we force the atom distance and angle to mimic the statistics calculated from the data. \nWe thus implicitly capture different kinds of interaction forces. \nCompared with the AMBER energy, \nthe statistical energy \\eqref{eq:our_energy} is simpler and more adaptive to \nthe dataset of interest. \n\n\n\n\\subsection{Prior Bridges for Point Cloud Generation}\nWe design prior forces for \n3D point cloud generation, which is similar to molecule generation except that the points are un-typed so we only have the coordinates $\\{x_i^r\\}$. \nOne important aspect of point cloud generation is to distribute points uniformly on the surface, which is important for producing high-quality meshes and other post-hoc geometry applications and manipulations. %\n\n\\textbf{Riesz Energy.}\nOne idea to make the point distribute uniformly is adding a repulsive force to separate the points apart from each other. We achieve this by minimizing the Riesz energy~\\cite{gotz2003riesz}, \n\\begin{align}\n \\eng_{\\mathrm{Riesz}}(x) = \\frac{1}{2} \\sum_{j \\neq i} ||{x}^r_{i} - {x}^r_{j}||^{-2}.\n \\label{eq:riesz_energy}\n\\end{align}\n\\textbf{KNN Distance Energy.}\nSimilar to molecule design, we directly calculate the average distance between each point and its k nearest neighbour neighbour, and define the following energy: \n\\begin{align}\n \\eng_\\mathrm{knn}(x) = \\sum_i \\left(\\text{knn-dist}_i(x^r) - \\mu_{knn}\\right)^2, \n \\label{eq:knn_energy}\n\\end{align}\nwhere $\\text{knn-dist}_i(x) = \\frac{1}{K} \\sum_{j \\in \\mathcal{N}_K(x_i;x)} ||x^r_i - x^r_j||^2 $ denotes the\naverage distance from $x_i$ to its $K$ nearest neighbors, and $\\mu_{knn}$ is the empirical mean of $\\text{knn-dist}_i(x)$ in the dataset. \nThis would encourage the points to have similar average nearest neighbor distance and yield uniform distribution between points. \nIn common geometric setups, the valence of the point on the surface is 4, which means we set $k = 4$.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Related works}\n\n\\textbf{Diffuse Bridge Process.}\nDiffusion-based generative models~\\cite{ho2020denoising,liu2022bridge,sohl2015deep,song2020denoising, vargas2021solving} have achieved great successes in various AI generation tasks recently;\nthese methods leverage a time reversion technique and can be viewed as learning variants auto-encoders with diffusion processes as encoders and decoders.\nSchrodinger bridges \\cite{ chen2021likelihood,de2021diffusion,wang2021deep}\n have also been proposed for learning diffusion generative models that guarantee to output desirable outputs in a finite time interval, but %\nthese methods involve iterative proportional fittings and are computationally costly.\nOur framework of learning generative models with diffusion bridges\nis similar to that of \\cite{peluchetti2022nondenoising},\nwhich learn diffusion models as a mixture of forward-time diffusion bridges to avoid the time-reversal technique of \\cite{song2020score}.\nBut our framework is designed to incorporate physical prior into bridges\nand develop a systematic approach for constructing\na broad class of prior-informed bridges.\n\n\n\n\\textbf{3D Molecule Generation.}\nGenerating molecule in 3D space has been gaining increasing interest.\nA line of works ~\\cite{luo2021predicting,mansimov2019molecular,shi2021confgf,simm2019generative,xu2021learning,xu2021end,xu2022geodiff} consider conditional conformal generation, which takes the 2D SMILE structure as conditional input and generate the 3D molecule conformations condition on the input. Another series of works \\cite{NIPS2019_8974, hoogeboom2022equivariant, luo20223d,satorras2021n,wu2022score} focus on directly generating the atom position and type for the molecule unconditionally.\nFor these series of works, improvements usually come from architecture design and loss design.\nFor example, %\nG-Schnet~\\cite{NIPS2019_8974} auto-regressively generates the atom position and type one by one after another; EN-Flow ~\\cite{satorras2021n} and EDM~\\cite{hoogeboom2022equivariant} adopt E(n) equivariant graph neural network (EGNN) \\cite{satorras2021n} to train flow-based model and diffusion model. These methods aim at generating valid and natural molecules in 3D space and outperform previous approaches by a large margin.\nOur work provides a very different approach to\nincorporating the physical information for molecule generation\nby injecting the prior information into the diffusion process,\nrather than neural network architectures.\n\n\\textbf{Point Cloud Generation.}\nA vast literature has been devoted to learning\ndeep generative models for real-world 3D objects in the form of point clouds. %\n\\cite{achlioptas2018learning} first proposed to generate the point cloud by generating a latent code and training a decoder to generate point clouds from the latent code.\nBuild upon this approach,\nmethods have been developed using flow-based generative models \\cite{yang2019pointflow}\nand diffusion-base models \\cite{cai2020learning, luo2021diffusion, luo2022equivariant}.\nHowever, the existing works miss a key important prior information:\nthe points in a point cloud tend to distribute regularly and uniformly.\nIgnoring this information causes poor generation quality.\nBy introducing uniformity-promoting forces\nin diffusion bridges,\nwe obtain a simple and efficient approach to generating\nregular and realistic point clouds.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nDans cette partie, on suppose que la repr\u00e9sentation $R(\\alpha,\\beta,\\gamma;l)$ de $W(p,q,r)$ est r\u00e9ductible et qu'il existe une forme bilin\u00e9aire $G$-invariante non nulle. Alors dans ces conditions, on montre que $G'=G\/N(G)$ est isomorphe \u00e0 un groupe di\u00e9dral fini et $N(G)$ est donn\u00e9 explicitement ainsi que l'op\u00e9ration de $G$ sur $N(G)$. The general case will\n\\subsection{Contraintes sur les param\u00eatres}\nLes conditions \u00e9nonc\u00e9es ci-dessus sont \u00e9quivalentes aux deux conditions suivantes:\n\\[\n\\Delta=8-2\\alpha-2\\beta-2\\gamma-\\alpha l-\\beta m=0\\quad \\text{et} \\quad\\alpha l=\\beta m\n\\]\nNous allons voir que cela implique de fortes contraintes sur $p$, $q$, $r$ d'une part et sur $\\alpha$, $\\beta$, $\\gamma$ d'autre part. Pour cela, nous montrons d'abord le r\u00e9sultat suivant:\n\\begin{proposition}\nSoient $a$, $b$ et $c$ trois nombres r\u00e9els. On pose: $\\alpha:=4\\cos^{2}a\\pi$, $\\beta:=4\\cos^{2}b\\pi$ et $\\gamma:=4\\cos^{2}c\\pi$. Alors les deux conditions suivantes sont \u00e9quivalentes:\n\\begin{enumerate}\n \\item (1) $\\exists \\epsilon,\\epsilon' \\in \\{-1,+1\\}$ tels que $c\\equiv \\epsilon a+\\epsilon' b \\mod{\\mathbb{Z}}$;\n \\item (2) $\\alpha\\beta\\gamma=(4-\\alpha-\\beta-\\gamma)^{2}$.\n\\end{enumerate}\n\\end{proposition}\n\\begin{proof}\n1) Montrons que (1) implique (2).\\\\\nIl existe $\\epsilon_{1}\\in \\{-1,+1\\}$ tel que $\\epsilon_{1}\\cos c\\pi=\\cos a\\pi \\cos b\\pi-\\epsilon\\epsilon'\\sin a\\pi \\sin b\\pi$.\\\\\nNous obtenons:\n\\begin{align*}\n\\cos^{2}c\\pi=\\cos^{2}a\\pi \\cos^{2}b\\pi + \\sin^{2}a\\pi \\sin^{2}b\\pi -2\\epsilon\\epsilon'\\cos a\\pi \\sin a\\pi \\cos b\\pi sin b\\pi\\\\\n=1-\\cos^{2}a\\pi - \\cos^{2}b\\pi +2\\cos^{2}a\\pi cos^{2}b\\pi -2\\epsilon\\epsilon'\\cos a\\pi \\sin a\\pi \\cos b\\pi sin b\\pi\n\\end{align*}\nd'o\u00f9, en multipliant cette \u00e9galit\u00e9 par $8$:\n\\[\n2\\gamma=8-2\\alpha-2\\beta+\\alpha\\beta-4\\epsilon\\epsilon' \\sin2a\\pi \\sin 2b\\pi.\n\\]\nNous en d\u00e9duisons\n\\begin{equation*}\n\\begin{split}\n16\\sin^{2}2a\\pi \\sin^{2}2b\\pi\\\\\n&=(8-2\\alpha-2\\beta-2\\gamma+\\alpha\\beta)^{2}\\\\\n&=16(1-2\\cos^{2}2a\\pi)(1-2cos^{2}2b\\pi)\\\\\n&=16(1-\\cos 2a\\pi)(1+\\cos 2a\\pi)(1-\\cos 2b\\pi)(1+\\cos 2b\/pi)\\\\\n&=16(2-2\\cos^{2}a\\pi)2\\cos^{2}a\\pi (2-2\\cos^{2}b\\pi)2\\cos^{2}b\\pi\\\\\n&=(4-\\alpha)\\alpha(4-\\beta)\\beta\\\\\n&=16\\alpha\\beta-4\\alpha^{2}\\beta-4\\alpha\\beta^{2}+\\alpha^{2}\\beta^{2}\\\\\n&=(8-2\\alpha-2\\beta-2\\gamma)^{2}+\\alpha^{2}\\beta^{2}+2\\alpha\\beta(8-2\\alpha-2\\beta-2\\gamma)\n\\end{split}\n\\end{equation*}\nd'o\u00f9, apr\u00e8s simplifications\n\\[\n4\\alpha\\beta\\gamma=(8-2\\alpha-2\\beta-2\\gamma)^{2}=4(4-\\alpha-\\beta-\\gamma)^{2}\n\\]\nc'est la relation (2)\\\\\n2) Montrons que (2) implique (1).\\\\\nPosons, pour simplifier l'\u00e9criture: $u:=\\cos a\\pi$, $v:=\\cos b\\pi$ et $w:= \\cos c\\pi$ de telle sorte que $\\alpha=4u^{2}$, $\\beta=4v^{2}$ et $\\gamma=4w^{2}$. La relation (2) devient \n\\[\n64u^{2}v^{2}w^{2}=16(1-u^{2}-v^{2}-w^{2})^{2}\n\\]\nd'o\u00f9\n\\[\n(1-u^{2}-v^{2}-w^{2})^{2}-4u^{2}v^{2}w^{2}=0=(1-u^{2}-v^{2}-w^{2}+2uvw)(1-u^{2}-v^{2}-w^{2}-2uvw).\n\\]\nNous brisons maintenant la sym\u00e9trie entre $u$, $v$ et $w$ pour obtenir:\n\\[\n((w-uv)^{2}-(1-u^{2})(1-v^{2}))((w+uv)^{2}-(1-u^{2})(1-v^{2}))=0.\n\\]\nD'apr\u00e8s les valeurs de $u$, $v$ et $w$, nous voyons que $uv=\\cos a\\pi \\cos b\\pi$ et $(1-u^{2})(1-v^{2})=\\sin^{2}a\\pi \\sin^{2}b\\pi$, donc:\n\\[\n0=((w-\\cos a\\pi \\cos b\\pi)^{2}-(\\sin a\\pi \\sin b\\pi)^{2})((w+\\cos a\\pi \\cos b\\pi)^{2}-(\\sin a\\pi \\sin b\\pi)^{2})\n\\]et $0=d_{1}d_{2}d_{3}d_{4}$ o\u00f9:\n\\[\n\\begin{aligned}\nd_{1} &= w-(\\cos a\\pi \\cos b\\pi+\\sin a\\pi \\sin b\\pi) &= w-\\cos(a-b)\\pi\\\\\nd_{2} &= w-(\\cos a\\pi \\cos b\\pi-\\sin a\\pi \\sin b\\pi) &= w-\\cos(a+b)\\pi\\\\\nd_{3} &=w+(\\cos a\\pi \\cos b\\pi+\\sin a\\pi \\sin b\\pi) &= w+\\cos(a-b)\\pi\\\\\nd_{4} &=w+(\\cos a\\pi \\cos b\\pi-\\sin a\\pi \\sin b\\pi) &= w+\\cos(a+b)\\pi\\\\\n\\end{aligned}\n\\]\nComme $\\cos^{2}x-\\cos^{2}y=\\sin (x+y)\\sin(y-x)$, nous obtenons:\n\\[\n\\begin{aligned}\nd_{1}d_{3} &= w^{2}-\\cos^{2}(a-b)\\pi &= \\sin(c+a-b)\\pi \\sin(a-b-c)\\pi\\\\\nd_{2}d_{4} &= w^{2}-\\cos^{2}(a+b)\\pi &= \\sin(c+a+b)\\pi \\sin(a+b-c)\\pi\n\\end{aligned}\n\\]\nd'o\u00f9 finalement:\n\\[\n0=\\sin(c+a-b)\\pi \\sin(a-b-c)\\pi\\sin(c+a+b)\\pi \\sin(a+b-c)\\pi,\n\\]\ncomme $\\sin \\pi x =0$ si et seulement si $x$ est entier, nous voyons qu'il existe $\\epsilon$ et $\\epsilon'$ dans $\\{-1,+1\\}$ tels que $c\\equiv \\epsilon a+\\epsilon' b \\mod{\\mathbb{Z}}$ c'est la condition (1).\n\\end{proof}\n\nPour expliciter les relations entre $p$, $q$, $r$ d'une part et $\\alpha$, $\\beta$, $\\gamma$ d'autre part, nous utilisons le r\u00e9sultat suivant d\u00e9montr\u00e9 dans l'appendice :\nLes deux conditions suivantes $(C)$ et $(D)$ sur le triple d'entiers non nuls $(a_{1},a_{2},a_{3})$ sont \u00e9quivalentes:\n\\begin{align*}\n(C)\n\\begin{cases}\n(C_{1}) & n=ppcm(a_{1},a_{2},a_{3})=ppcm(a_{i},a_{j}) (1\\leqslant i\\neq j \\leqslant 3);\\\\\n(C_{2}) & \\parbox{11 cm}{%\n$\\exists i,j \\in \\mathbb{N}$ tels que $(1\\leqslant i\\neq j \\leqslant 3)$ et $v_{2}(a_{i})=v_{2}(a_{j})=v_{2}(n)$;\nsi $|\\{i,j,k\\}|=3$, $v_{2}(a_{k})$. Alors $L$ poss\u00e8de une repr\u00e9sentation fid\u00e8le irr\u00e9ductible $T$ sur un corps alg\u00e9briquement clos de caract\u00e9ristique $0$ si et seulement si $H$ est un groupe cyclique (et donc $L$ est un groupe di\u00eadral). Dans ce cas $\\deg T=2$.\n\\end{proposition}\n\\begin{proof}\nNous allons utiliser le th\u00e9or\u00e8me suivant de Gasch\\\"utz (\\cite{G}):\"Un groupe fini poss\u00e8de une repr\u00e9sentation irr\u00e9ductible et fid\u00e8le si et seulement si son socle est engendr\u00e9 par une classe de conjugaison\".\\\\\nSoit $T$ une repr\u00e9sentation irr\u00e9ductible de $L$. Comme $H$ est un sous-groupe normal commutatif de $L$, nous savons que $\\deg T\\leqslant 2$. Si $\\deg T=1$, alors $D(L)\\subset \\ker T$ et comme $D(L) \\neq {1}$ car $L$ n'est pas commutatif, nous voyons que $T$ n'est pas fid\u00e8le dans ce cas. Ainsi $\\deg T=2$.\n\nSoit $S$ le socle de $L$. Il est clair que $S \\subset H$. Pour chaque $s$ dans $S$, nous avons $\\sigma s \\sigma^{-1}=s \\,\\text{ou} \\,\\sigma s \\sigma^{-1}=s^{-1}$, donc la classe de conjugaison de $s$ est $\\{s\\}$ ou $\\{s,s^{-1}\\}$. Nous voyons ainsi, gr\u00e2ce au th\u00e9or\u00e8me de Gasch\\\"utz que $L$ poss\u00e8de une repr\u00e9sentation fid\u00e8le et irr\u00e9ductible si et seulement si $S$ est un groupe cyclique. D'apr\u00e8s la structure des groupes commutatifs finis, $S$ est un groupe cyclique si et seulement si $H$ est groupe cyclique. Dans ce cas $L$ est un groupe di\u00e9dral (d'ordre $\\geqslant 6$).\n\\end{proof}\n\\begin{notation}\nOn suppose que le triple d'entiers non nuls $(p,q,r)$ satisfait \u00e0 la condition $(C_{1})$. On pose $n:=ppcm(p,q,r)$, $d:= pgcd(p,q,r)$, $n=pp_{1}=qq_{1}=rr_{1}$ de telle sorte que $pgcd(p_{1},q_{1})=pgcd(q_{1},r_{1})=pgcd(r_{1},p_{1})=1$, $n=p_{1}q_{1}r_{1}d$, $p=q_{1}r_{1}d$, $q=r_{1}p_{1}d$ et $r=p_{1}q_{1}d$.\n\\end{notation}\n\\begin{proposition}\nSoit $G\"$ le quotient de $W(p,q,r)$ (avec comme syst\u00e8me g\u00e9n\u00e9rateur canonique $(t_{1},t_{2},t_{3})$) obtenu en ajoutant la relation $(t_{1}t_{2}t_{3})^{2}=1$.\\\\ Soit $H\":=$. Alors:\\\\\n1) $H\"$ est un sous-groupe commutatif normal de $G\"$.\\\\\n2)Le triple d'entiers $(p,q,r)$ satisfait \u00e0 la condition ($C_{1}$).\\\\\n3) $H\"\\simeq \\mathbb{Z}\/d\\mathbb{Z}\\times \\mathbb{Z}\/n\\mathbb{Z}$.\n\\end{proposition}\n\\begin{proof}\n1) Posons $a:=t_{1}t_{2}$, $b:=t_{2}t_{3}$ et $c:=t_{3}t_{1}$. Il est clair que $abc=1$. De plus $acb=(t_{1}t_{2}t_{3})^{2}=1$, $cb=bc$. On voit de la m\u00eame mani\u00e8re que $ac=ca$ et $ab=ba$: le groupe $H\"$ est commutatif.\\\\\n2) Comme $t_{1}at_{1}^{-1}=a^{-1}$ et $t_{1}ct_{1}^{-1}=c^{-1}$ et comme $G\"=$, nous voyons que $H\"$ est un sous-groupe normal d'indice $2$ de $G\"$ dont tous les \u00e9l\u00e9ments sont invers\u00e9s par $t_{i}$ $(1\\leqslant i \\leqslant 3)$. Il en r\u00e9sulte que l'ordre de $ac$ est $ppcm(p,q)$ donc $r\\,| \\,ppcm(p,q)$. Par sym\u00e9trie, on voit que $p\\,| \\,ppcm(q,r)$ et $q\\,| \\,ppcm(r,p)$: le triple d'entiers $(p,q,r)$ satisfait \u00e0 la condition $(C_{1})$.\\\\\n3) Soit $\\pi$ un nombre premier et soit $\\Pi$ la partie $\\pi$-primaire de $|H\"|$. Nous avons\n\\[\nH\"=\\oplus_{\\pi \\in \\mathcal{P}}\\Pi\n\\]\no\u00f9 $\\mathcal{P}$ d\u00e9signe l'ensemble des nombres premiers positifs.\\\\\nPosons $p=\\pi^{n(p)}p'$, $q=\\pi^{n(q)}q'$ et $r=\\pi^{n(r)}r'$ o\u00f9 $p'$, $q'$ et $r'$ sont premiers \u00e0 $p$. Posons $a':=a^{p'}$, $b':= b^{r'}$ et $c':=c^{q'}$. Alors $a'$ est d'ordre $\\pi^{n(p)}$, $b'$ est d'ordre $\\pi^{n(r)}$ et $c'$ est d'ordre $\\pi^{n(q)}$.\\\\\nNous avons\n\\[\n\\Pi=\n\\]\nNous pouvons supposer que $n(p)=n(r)$ et $n(q)\\leqslant n(p)$ (voir l'appendice ). Nous effectuons des op\u00e9rations \u00e9l\u00e9mentaires sur la matrice des relations de $\\Pi$:\n\\[\n\\begin{pmatrix}\n\\pi^{n(p)} & 0\\\\\n0 & \\pi^{n(p)}\\\\\n\\pi^{n(q)} & \\pi^{n(q)}\n\\end{pmatrix}\n\\to\n\\begin{pmatrix}\n\\pi^{n(p)} & -\\pi^{n(p)}\\\\\n0 & \\pi^{n(p)}\\\\\n\\pi^{n(q)} & 0\n\\end{pmatrix}\n\\to\n\\begin{pmatrix}\n\\pi^{n(p)} & 0\\\\\n0 & \\pi^{n(p)}\\\\\n\\pi^{n(q)} & 0\n\\end{pmatrix}\n\\]\n\\[\n\\to\n\\begin{pmatrix}\n\\pi^{n(q)} & 0\\\\\n0 & \\pi^{n(p)}\\\\\n\\pi^{n(p)} & 0\n\\end{pmatrix}\n\\to\n\\begin{pmatrix}\n\\pi^{n(q)} & 0\\\\\n0 & \\pi^{n(p)}\\\\\n0 & 0\n\\end{pmatrix}\n\\]\nIl en r\u00e9sulte que $\\Pi \\simeq \\mathbb{Z}\/\\pi^{n(q)}\\mathbb{Z} \\times \\mathbb{Z}\/\\pi^{n(p)}\\mathbb{Z}$. Comme $\\pi^{n(q)}$ est la $\\pi$-contribution de $pgcd(p,q,r)$ et $\\pi^{n(p)}$ est la $\\pi$-contribution de $ppcm(p,q,r)$, nous avons le r\u00e9sultat car $H\"$ est le produit de ses sous-groupes de Sylow.\n\\end{proof}\nNous avons maintenant assez d'\u00e9l\u00e9ments pour le r\u00e9sultat suivant:\n\\begin{theorem}\nSoient $W(p,q,r)$ un groupe de Coxeter et $R$ une repr\u00e9sentation de r\u00e9flexion de $W(p,q,r)$. On suppose que le triple d'entiers $(p,q,r)$ satisfait \u00e0 la condition (C). Soit $\\alpha$ une racine de $v_{p}(X)$. On sait (voir l'appendice ) que l'on peut trouver $\\beta$ racine de $v_{q}(X)$ et $\\gamma$ racine de $v_{r}(X)$ de telle sorte que $\\Delta=0$ et $\\alpha l=\\beta m$. Alors $G'$ est un groupe di\u00e9dral d'ordre $2n$, o\u00f9 $n=ppcm(p,q,r)$ et $R$ est r\u00e9ductible.\n\\end{theorem}\n\\begin{proof}\nD'apr\u00e8s la proposition 1 de l'appendice (voir \\cite{Z1}), on sait que le polyn\u00f4me caract\u00e9ristique de $t_{1}=s_{1}s_{2}s_{3}$ est $P_{t_{1}}=(X-1)^{2}(X+1)$, donc $t_{1}$ a comme valeurs propres sur $M'$: $-1$ et $+1$, d'o\u00f9 $(t'_{1})^{2}=id_{M'}$. D'apr\u00e8s la proposition 4, $G'$ est isomorphe \u00e0 un quotient du groupe $G\"$ d\u00e9fini dans cette proposition. Il en r\u00e9sulte que $G'$ est une extension d'un groupe commutatif fini $H$ par un \u00e9l\u00e9ment d'ordre $2$ qui inverse tous les \u00e9l\u00e9ments de $H$. De plus $G'$ op\u00e8re fid\u00e8lement sur $M'$ par d\u00e9finition. Comme $$ $(1\\leqslant i < j \\leqslant 3)$ s'envoie fid\u00e8lement dans $G'$ et comme la repr\u00e9sentation du groupe di\u00e9dral $$ est absolument irr\u00e9ductible sur $M'$, nous voyons que $G'$ est un groupe di\u00e9dral d'ordre $2n$ en appliquant la proposition 3 car toute repr\u00e9sentation irr\u00e9ductible complexe de $D_{n}$ est r\u00e9alisable sur $K_{0}$.\n\\end{proof}\n\\subsection{Le groupe $N(G)$}\nComme $G'$ est un groupe di\u00e9dral d'ordre $2n$ et comme $N$ est un groupe commutatif libre tel que la repr\u00e9sentation de $G'$ sur $N\\otimes_{\\mathbb{Z}}\\mathbb{Q}$ est irr\u00e9ductible, nous allons \u00e9tudier les repr\u00e9sentations rationnelles irr\u00e9ductibles du groupe di\u00e9dral $D_{n}$ d'ordre $2n$ $(n\\geqslant 3)$. En particulier, nous allons voir qu'il n'y en a qu'une seule qui est fid\u00e8le.\n\nNous allons d'abord construire les repr\u00e9sentations irr\u00e9ductibles rationnelles du groupe di\u00e9dral $D_{n}$. D'apr\u00e8s un r\u00e9sultat bien connu (voir \\cite{S}), on a :\\\\\n\"Le nombre des classes de repr\u00e9sentations irr\u00e9ductibles du groupe fini $G$ sur $\\mathbb{Q}$ est \u00e9gal au nombre des classes de conjugaison de sous-groupes cycliques de $G$.\"\n\nNous cherchons donc le nombre de classes de conjugaison de sous-groupes cycliques du groupe $G_{0}$\n\\[\nG_{0}:=\\quad (n\\geqslant 3).\n\\]\nOn pose $C:=$. Il y a d'abord les classes de cardinal $1$ contenant chacune un sous-groupe de $C$: tous les sous-groupes de $C$ sont normaux dans $G_{0}$.\\\\\nSi $n$ est impair, il y a une classe de conjugaison de sous-groupes d'ordre $2$ de $G_{0}$ et si $n$ est pair, il y a deux classes de conjugaison de sous-groupes d'ordre $2$ contenant les involutions non centrales de $G_{0}$.\n\nSoit $\\Phi_{n}(X)$ le n-i\u00e8me polyn\u00f4me cyclotomique . Le polyn\u00f4me $\\Phi_{n}(X)\\in \\mathbb{Z}[X]$, est irr\u00e9ductible et r\u00e9ciproque. On consid\u00e8re le corps $L=\\mathbb{Q}[x]\/(\\Phi_{n}(X))$. C'est un espace vectoriel de dimension $\\varphi(n)=\\deg \\Phi_{n}(X)$ sur le corps $\\mathbb{Q}$. Le groupe de Galois $\\mathcal{G}al(L\/\\mathbb{Q})$ est un groupe commutatif d'ordre $\\varphi(n)$. Soit $\\pi:\\mathbb{Q}[x]\\to L$ la projection canonique. On pose $g:=\\pi(X)$ et l'on fait op\u00e9rer $g$ sur $L$ par multiplication. Comme $n\\geqslant 3$, $\\varphi(n)$ est pair et le groupe de Galois contient un \u00e9l\u00e9ment $s$ d'ordre $2$ qui op\u00e8re comme $g\\mapsto g^{-1}$. Le sous-corps $L_{0}$ des points fixes de $s$ est de degr\u00e9 $\\frac{1}{2}\\varphi(n)$. Il en r\u00e9sulte que $[L,s]$, sous-espace vectoriel de $L$ form\u00e9 des \u00e9l\u00e9ments transform\u00e9s en leurs oppos\u00e9s par $s$, est de dimension $\\frac{1}{2}\\varphi(n)$. Le sous-groupe de $GL(L)$ engendr\u00e9 par $g$ et $s$ est un groupe di\u00e9dral d'ordre $2n$.\n\nNous construisons une base du $\\mathbb{Q}$-espace vectoriel $L$ de la mani\u00e8re suivante: soit $e_{1}\\in [L,s]-\\{0\\}$. Pour $1\\leqslant i \\leqslant\\varphi(n)-1$, on pose $e_{i+1}=g.e_{i}$. Alors $(e_{1},e_{2},\\cdots,e_{\\varphi(n)})$ est une base de $L$ car $\\Phi_{n}(X)$ est un polyn\u00f4me irr\u00e9ductible.\\\\\nSi $\\Phi_{n}(X)=X^{\\varphi(n)}+a_{1}X^{\\varphi(n)-1}+\\cdots+a_{1}X+1$, on a:\n\\[\ng.e_{n}=-e_{1}-a_{1}e_{2}-\\cdots-a_{1}e_{\\varphi(n)}.\n\\]\nDe plus un calcul simple montre que $s(e_{i})=-g^{1-i}.e_{1}$ $(1\\leqslant i \\leqslant \\varphi(n)$).\\\\\nIl est clair que nous avons construit de cette mani\u00e8re une $\\mathbb{Q}$-repr\u00e9sentation irr\u00e9ductible et fid\u00e8le $R_{n}$ du groupe $G_{0}$. Soit maintenant $d\\geqslant 3$ un diviseur de $n$ et soit $C_{d}:=$. Alors $C_{d}\\lhd G_{0}$ et $G_{0}\/C_{d}$ est isomorphe \u00e0 un groupe di\u00e9dral d'ordre $2d$. Nous appliquons ce qui pr\u00e9c\u00e8de au groupe $G_{0}\/C_{d}$ et nous obtenons ainsi une $\\mathbb{Q}$-repr\u00e9sentation irr\u00e9ductible du groupe $G_{0}$, de noyau $C_{d}$ et de degr\u00e9 $\\varphi(d)$.\\\\\n- Si $n$ est impair, alors $C_{1}$ est d'ordre $1$ et nous obtenons la repr\u00e9sentation $R_{1}$ de degr\u00e9 $1$ o\u00f9 $s$ op\u00e8re comme $-1$. Nous avons aussi dans ce cas la repr\u00e9sentation $R_{0}$ o\u00f9 $s$ op\u00e8re comme l'identit\u00e9. D'apr\u00e8s le r\u00e9sultat cit\u00e9 plus haut nous avons obtenu toutes les repr\u00e9sentations rationnelles irr\u00e9ductibles de $G_{0}$.\\\\\n- Si $n$ est pair, alors $C_{2}$ est d'ordre $2$ et nous obtenons la repr\u00e9sentation $R_{2}$ de degr\u00e9 $\\varphi(2)=1$ o\u00f9 $g$ et $s$ op\u00e8rent comme $-1$. Comme dans le cas impair, il y a la repr\u00e9sentation $R_{1}$ de degr\u00e9 $1$ o\u00f9 $s$ op\u00e8re comme $-1$ et $g$ comme l'identit\u00e9; et enfin il y a la repr\u00e9sentation $R_{1}\\otimes R_{2}$ de degr\u00e9 $1$ o\u00f9 $s$ op\u00e8re comme l'identit\u00e9 et $g$ op\u00e8re comme $-1$.\n\nNous pouvons encore remarquer que toutes les repr\u00e9sentations $R_{i}$ sont des $\\mathbb{Z}$-repr\u00e9sentations.\n\nToutes les repr\u00e9sentations absolument simples de $G_{0}$ sont de degr\u00e9 $1$ ou $2$ et peuvent s'\u00e9crire sur le corps $K=\\mathbb{Q}(\\cos \\frac{2\\pi}{n})$. Celles qui sont fid\u00e8les sont celles pour lesquelles le caract\u00e8re $\\chi_{k}$ est tel que $\\chi_{k}(g)=2\\cos \\frac{2k\\pi}{n}$ avec $1\\leqslant k =$ nous voyons que $g \\in G_{2}$.\n\nComme l'ensemble des transform\u00e9s de $e$ par $$ contient une base de $\\Lambda$, nous avons le r\u00e9sultat $G_{2}\\simeq G_{1}$. Il en r\u00e9sulte aussit\u00f4t que $G_{1}$ est isomorphe \u00e0 un quotient du groupe $G$. Comme tous les quotients propres de $G$ sont finis, nous obtenons $G_{1}\\simeq G$.\n\\end{proof}\n\\subsection{Appendice}\nDans cet appendice, nous d\u00e9montrons un r\u00e9sultat d'arithm\u00e9tique (voir \\cite{St}) utilis\u00e9 pour caract\u00e9riser certains triples d'entiers qui donnent les groupes di\u00e9draux.\n\\begin{proposition}\\label{propC1}\nLes deux conditions suivantes $(C)$ et $(D)$ sur le triple d'entiers non nuls $(a_{1},a_{2},a_{3})$ sont \u00e9quivalentes:\n\\begin{align*}\n(C)\n\\begin{cases}\n(C_{1}) & n=ppcm(a_{1},a_{2},a_{3})=ppcm(a_{i},a_{j}) (1\\leqslant i\\neq j \\leqslant 3);\\\\\n(C_{2}) & \\parbox{11 cm}{%\n$\\exists i,j \\in \\mathbb{N}$ tels que $(1\\leqslant i\\neq j \\leqslant 3)$ et $v_{2}(a_{i})=v_{2}(a_{j})=v_{2}(n)$;\nsi $|\\{i,j,k\\}|=3$, $v_{2}(a_{k})0$ et $\\frac{x}{a_{2}}>0$, on a $\\frac{y}{a_{3}}<1$. Ensuite $\\frac{c_{1}}{a_{1}}+\\frac{x}{a_{2}}<2$ donc $-1<\\frac{y}{a_{3}}$. Nous obtenons ainsi $-a_{3}$ un groupe di\u00e9dral d'ordre $2n$ avec $n\\geqslant 3$. On appelle $G$ son sous-groupe cyclique d'ordre $n$. Soient $a_{1},a_{2},a_{3}$ trois entiers $>0$. Les deux conditions suivantes sont \u00e9quivalentes:\n\\begin{itemize}\n \\item (A) Il existe trois involutions non centrales $s_{i}$ $(1\\leqslant i \\leqslant 3)$ de $D$ telles que si $s_{i}s_{j}=g_{k}$ $(1 \\leqslant i \\neq j \\leqslant 3, \\; |\\{i,j,k\\}|=3)$ avec $g_{k}$ d'ordre $a_{k}$, on ait $G= \\;(1 \\leqslant i \\neq j \\leqslant 3)$.\n \\item (B) Le triple d'entiers $(a_{1},a_{2},a_{3})$ satisfait \u00e0 la condition (C).\n\\end{itemize}\n\\end{proposition}\n\\begin{proof}\n1) Montrons que $(A)\\Longrightarrow(B)$. Comme $G=$ $(1 \\leqslant i \\neq j \\leqslant 3)$, n\u00e9cessairement $ppcm(a_{i},a_{j})=n$ $(1 \\leqslant i \\neq j \\leqslant 3)$ donc la condition $(C_{1})$ est satisfaite. Supposons maintenant $n$ pair, $n=2m$. Nous avons vu qu'il existe $i$ et $j$ $(1\\leqslant i \\leqslant 3)$ tels que $v_{2}(a_{i})=v_{2}(a_{j})=v_{2}(n)$. Les conjugu\u00e9s de $s_{3}$ dans $D$ sont les $g^{2l}s_{3}$ $(0\\leqslant l \\leqslant m-1)$ et l'autre classe d'involutions non centrales est l'ensemble des $g^{2l+1}s_{3}$ $(0\\leqslant l \\leqslant m-1)$. Nous avons $s_{1}s_{2}=g_{3}$, $s_{1}s_{3}=g_{2}$, $s_{2}s_{3}=g_{1}$ donc $g_{3}=g_{2}g_{1}^{-1}$. Si nous supposons que $v_{2}(a_{1})=v_{2}(a_{2})=v_{2}(n)$, alors $s_{1}=g^{2\\alpha+1}s_{3}$ et $s_{2}=g^{2\\beta+1}s_{3}$. Il en r\u00e9sulte que $g_{3}=s_{1}s_{2}=g^{2\\alpha-2\\beta}$ et alors $v_{2}(a_{3})-0.85$ the continuum contributions become large\nwhile for $t<-1.15$ the contributions from higher-order terms in the\nOPE become important relative to the leading-order terms.\n\nThe subscript $S_{\\rm V}$ in Eq.~(\\ref{corr}) indicates the\npresence of the external field. Thus,\nthe correlator should be calculated with an additional term\n\\begin{equation}\n\\Delta{\\cal L} \\equiv - S_{\\rm V} [\\overline u (x) u(x)\n-\\overline d(x) d(x)]\\; ,\n\\label{lag}\n\\end{equation}\nadded to the usual QCD Lagrangian, and $-\\Delta{\\cal L}$\nadded to ${\\cal H}_{\\rm QCD}$.\nSince $S_{\\rm V}$ is a scalar constant,\nLorentz covariance and parity allow one to decompose\n$\\Pi(S_{\\rm V},q)$ into two distinct structures\\cite{jin2}\n\\begin{equation}\n\\Pi(S_{\\rm V},q)\\equiv \\Pi^1(S_{\\rm V},q^2)+\n\\Pi^q(S_{\\rm V},q^2)\\rlap{\/}{q}\\ .\n\\end{equation}\nTo obtain QCD sum rules, one needs to construct a phenomenological\nrepresentation for $\\Pi(S_{\\rm V},q)$ and evaluate\n$\\Pi(S_{\\rm V},q)$ using the OPE.\n\n\\subsection{The dispersion relation and phenomenological spectral ansatz}\n\nTo determine the correlator at the hadron level we\nuse the dispersion relation\n\\begin{equation}\n\\Pi^i(S_{\\rm V},q^2)=\\int_0^\\infty{\\rho^i(S_{\\rm V},s)\n\\over s-q^2}ds\\\n\\label{des-rel}\n\\end{equation}\nfor each invariant function $\\{i=1,q\\}$, where\n$\\rho^i(S_{\\rm V},s)={1\\over\\pi}{\\rm Im}\\Pi^i(S_{\\rm V},s)$\nis the spectral density. Here we have omitted polynomial\nsubtractions which will be eliminated by a subsequent Borel\ntransformation.\nWe have also omitted infinitesimal as we are only concerned\nwith large and space like $q^2$ in QCD sum rules.\n\nIn practical applications of QCD sum-rule approach, one usually\nparametrizes the spectral density by a simple pole representing the\nlowest energy baryon state of interest plus a continuum which\nis approximated by a perturbative evaluation of the correlator\nstarting at an effective threshold\\cite{svz1,reinders1,ioffe3}.\nWhen $S_{\\rm V}$\nis present, we add $-\\Delta{\\cal L}$ to ${\\cal H}_{\\rm QCD}$,\nwhich is equivalent to increase $m_u$ and $m_d$ by $S_{\\rm V}$\nand $-S_{\\rm V}$, respectively.\nConsequently at the hadron level, the baryon spectrum will\nbe shifted. Since we are concerned here with the linear response\nto the external source, $S_{\\rm V}$ can be taken to be arbitrarily\nsmall (see below). Thus, there\nis no rearrangement of the spectrum, and we can use a pole\nplus continuum ansatz for the baryon spectral density\n\\begin{equation}\n\\rho^i(S_{\\rm V},s)=\\lambda_{\\rm B}^{*^2}\\phi^i \\delta (s-M_{\\rm B}^{*^2})\n+\\widetilde{\\rho}^i(S_{\\rm V},s)\\theta (s-s_0^{*^i})\\ ,\n\\label{ans}\n\\end{equation}\nwhere $\\phi^{*^i}=\\{M^*_{\\rm B}, 1\\}$ for $\\{i=1,q\\}$, and\n$\\widetilde{\\rho}^i(S_{\\rm V},s)$ is to be evaluated in\nperturbation theory.\nHere $\\lambda^*_{\\rm B}$ is defined by\n$\\langle 0|\\eta_{\\rm B}|{\\rm B}\\rangle_{S_{\\rm V}}\n=\\lambda_{\\rm B}^* v^*_{\\rm B}$\nwith $v^*_{\\rm B}$ the Dirac spinor normalized to\n$\\overline{v}^*_{\\rm B} v^*_{\\rm B}=2M^*_{\\rm B}$, $M^*_{\\rm B}$\nis the mass of the lowest baryon state and $s_0^{*^i}$ is the\ncontinuum threshold in the presence of the external field.\n\nLet us now expand both sides of Eq.~(\\ref{des-rel}) for small $S_{\\rm V}$\n\\begin{equation}\n\\Pi^i_0(q^2)+S_{\\rm V}\\Pi^i_1(q^2)+\\cdots\n=\\int_0^\\infty {\\rho^i_0(s)\\over s-q^2} ds\n+S_{\\rm V}\\int_0^\\infty {\\rho^i_1(s)\\over s-q^2} ds+\\cdots\\ .\n\\label{des-expand}\n\\end{equation}\nSince $S_{\\rm V}$ is arbitrary, one immediately concludes that\n\\begin{eqnarray}\n\\Pi^i_0(q^2)&=&\\int_0^\\infty{\\rho^i_0(s)\\over s-q^2} ds\\ ,\n\\label{des-2}\n\\\\*[7.2pt]\n\\Pi^i_1(q^2)&=&\\int_0^\\infty {\\rho^i_1(s)\\over s-q^2} ds\\ .\n\\label{des-3}\n\\end{eqnarray}\nObviously, Eq.~(\\ref{des-2}) leads to the baryon {\\it mass} sum rules\nin vacuum which have been extensively studied\n\\cite{ioffe3,belyaev3,reinders1,leinweber2}. Here we are interested in\nEq.~(\\ref{des-3}), which corresponds to the linear response of the\ncorrelator to the external source and contains the baryon\nmatrix element under consideration (see below).\n\nExpanding the right-hand side of Eq.~(\\ref{ans}), we find\n\\begin{eqnarray}\n\\rho^i_0(s)&=&\\lambda^2_{\\rm B}\\phi^i_0\\delta(s-M^2_{\\rm B})+\n\\widetilde{\\rho}^i_0(s)\\theta(s-s_0^i)\\ ,\n\\label{ans-0}\n\\\\*[7.2pt]\n\\rho^i_1(s)&=&-2 H_{\\rm B}\\, M_{\\rm B}\\lambda^2_{\\rm B} \\phi_0^i\n\\delta^\\prime (s-M^2_{\\rm B})+\\Delta\\lambda_{\\rm B}^2\\,\\phi_0^i\n\\delta (s-M^2_{\\rm B})\n\\nonumber\n\\\\*[7.2pt]\n& &\n+\\Delta\\phi^i\\, \\lambda^2_{\\rm B}\\delta (s-M^2_{\\rm B})\n-\\Delta s_0^i\\, \\widetilde{\\rho}^i_0(s)\\delta(s-s_0^i)+\n\\widetilde{\\rho}^i_1(s)\\theta (s-s_0^i)\\ ,\n\\label{ans-1}\n\\end{eqnarray}\nwhere we have defined\n\\begin{eqnarray}\nM^*_{\\rm B}&=&M_{\\rm B}+S_{\\rm V} H_{\\rm B}+\\cdots\\ ,\n\\\\*[7.2pt]\n\\lambda^{*^2}_{\\rm B}&=&\\lambda^2_{\\rm B}\n+S_{\\rm V} \\Delta\\lambda^2_{\\rm B}+\\cdots\\ ,\n\\\\*[7.2pt]\ns_0^{*^i}&=&s_0^i+S_{\\rm V} \\Delta s_0^i+\\cdots\\ ,\n\\\\*[7.2pt]\n\\phi^{*^i}&=&\\phi^i_0+S_{\\rm V}\\Delta\\phi^i+\\cdots\\ ,\n\\\\*[7.2pt]\n\\widetilde{\\rho}^{*^i}(s)&=&\\widetilde{\\rho}^i_0(s)+S_{\\rm V}\n\\widetilde{\\rho}^i_1(s)+\\cdots\\ ,\n\\end{eqnarray}\nwhere the first terms are the vacuum spectral parameters\nin the absence of the external field.\nNote that $\\Delta\\phi^1=H_{\\rm B}$ and $\\Delta\\phi^q=0$.\nTreating $S_{\\rm V}$ as a small parameter, one can\n use the Hellman-Feynman theorem\\cite{hellman1,feynman1}\n to show that\n\\begin{equation}\nH_{\\rm B}={\\langle B|\\overline{u}u-\\overline{d}d|B\\rangle\n\\over 2M_{\\rm B}}\\ ,\n\\label{hf-s}\n\\end{equation}\nwhere we have used covariant normalization $\\langle k^\\prime,\nB|k, B\\rangle=(2\\pi)^2 k^0\\delta^{(3)}(\\vec k^\\prime-\\vec k)$.\n\nOne notices that $\\rho^i_1(s)$ has specific new features\nwhich distinguish it from $\\rho^i_0(s)$. The first term\nin Eq.~(\\ref{ans-1}), which is {\\it absent} in $\\rho^i_0(s)$,\ngive rise to a double pole at the baryon mass whose residue\ncontains the matrix element of interest. The second and\nthird terms are single pole terms; the residue at the single\npole contains the information about the\ntransition between the ground state baryon and the excited states.\nIn terms of quantum mechanical perturbation, the double pole term\ncorresponds to the energy shift while the single pole\nterms result from the response of baryon wave function to\nthe external field. The fouth term is due to the response\nof the continuum threshold to the external source and the\nlast term is the continuum contribution. As emphasized in\nthe previous works, the\nsingle pole contributions are not exponentially damped after\nthe Borel transformation relative to the double\nterm and should be retained in a consistent analysis of the sum\nrules.\n\nThe fourth term has been neglected in Ref.~\\cite{jin2}.\nThe contribution\nof this term is suppressed in comparison with the single\npole terms by a factor $e^{-(s_0^i-M^2_{\\rm B})\/M^2}$\n[see Eqs.~(\\ref{sum_p_1}--\\ref{sum_x_q})].\nIf the response of the continuum threshold is small, one can neglect\nthe contribution of the fourth term. However, if\nthe response of the continuum threshold is strong, one needs\nto include the fourth term\nin the calculation. This point has been noticed recently by\nIoffe in Ref.~\\cite{ioffe2}, where a double dispersion\nrelation is considered for the vertex function\n\\begin{equation}\n\\Pi_1(q)=\\int d^4 x e^{iq\\cdot x}\\langle 0|T\n\\eta_{\\rm B}(x)\\left[\\int d^4 z (\\overline{u}(z)u(z)-\\overline{d}(z)d(z))\n\\right]\\overline{\\eta}_{\\rm B}(0)|0\\rangle\\\n\\label{vertex}\n\\end{equation}\nin order to get the appropriate phenomenological representation.\n[This vertex function can be obtained by expanding the\nright-hand side of Eq.~(\\ref{corr}) directly.]\nWe note that our discussion and\nEq.~(\\ref{ans-1}) are consistent with those given\nin Ref.~\\cite{ioffe2}.\nSubstituting Eq.~(\\ref{ans-1}) into Eq.~(\\ref{des-3}),\none obtains the appropriate phenomenological representation.\n\n\n\n\\subsection{QCD representation}\n\n\nThe QCD representation of the correlator is obtained by applying\nthe OPE to the\ntime-ordered product in the correlator. When the external field\nis present, the up and down quark fields satisfy the modified\nequations of motion:\n\\begin{eqnarray}\n(i\\rlap{\\,\/}D-m_u-S_{\\rm V})u(x)&=& 0\\ ,\n\\\\*[7.2pt]\n(i\\rlap{\\,\/}D-m_d+S_{\\rm V})d(x)&=& 0\\ ,\n\\label{eq-mo}\n\\end{eqnarray}\nwhere $\\rlap{\\,\/}D=\\gamma^\\mu(\\partial_\\mu-ig_s{\\cal A}_\\mu)$ is\nthe covariant derivative. (The equation of motion for the\nstrange quark field does not change.)\nIn the framework of the OPE, the external field contributes to\nthe correlator in two ways: It couples directly to\nthe quark fields in the baryon interpolating fields and it also\npolarizes the QCD vacuum. Since the external\n field in the present problem is a Lorentz scalar,\nnon-scalar correlators cannot be induced in the QCD vacuum. However the\nexternal field does modify the condensates already\npresent in the QCD vacuum. To first order in $S_{\\rm V}$,\nthe chiral quark condensates\ncan be written as follows\n\\begin{eqnarray}\n\\langle\\overline{u}u\\rangle_{S_{\\mbox{\\tiny{\\rm\n V}}}}&=&\\langle\\overline{u}u\\rangle_{\\mbox{\\tiny{\\rm 0}}}-\\chi\n S_{\\mbox{\\tiny{\\rm V}}}\\langle\\overline{u}u\\rangle_{\\mbox\n{\\tiny{\\rm 0}}}\\ ,\n\\label{uc}\n\\\\*[7.2pt]\n\\langle\\overline{d}d\\rangle_{S_{\\mbox{\\tiny{\\rm\n V}}}}&=&\\langle\\overline{d}d\\rangle_{\\mbox{\\tiny{\\rm 0}}}+\\chi\nS_{\\mbox{\\tiny{\\rm V}}}\\langle\\overline{d}d\\rangle_{\\mbox{\\tiny\n{\\rm 0}}}\\ ,\n\\label{dc}\n\\\\*[7.2pt]\n\\langle\\overline{s}s\\rangle_{S_{\\mbox{\\tiny{\\rm\n V}}}}&=&\\langle\\overline{s}s\\rangle_{\\mbox{\\tiny{\\rm 0}}}-\\chi_{\\rm s}\nS_{\\mbox{\\tiny{\\rm V}}}\\langle\\overline{s}s\\rangle_{\\mbox{\\tiny\n{\\rm 0}}}\\ ,\n\\label{sc}\n\\end{eqnarray}\nwhere $\\langle \\hat{O}\\rangle_{\\mbox{\\tiny{\\rm 0}}}\\equiv \\langle\n0|\\hat{O}|0\\rangle$.\nThe mixed quark-gluon condensates change in a similar way\n\\begin{eqnarray}\n\\langle g_s\\overline{u}\\sigma\\cdot {\\cal G} u\n\\rangle_{S_{\\mbox{\\tiny{\\rm V}}}}\n&=&\\langle g_s\\overline{u}\\sigma\\cdot {\\cal G} u\\rangle_{\\mbox{\\tiny\n{\\rm 0}}}-\\chi_{\\rm m}\nS_{\\mbox{\\tiny{\\rm V}}}\\langle g_s\\overline{u}\\sigma\\cdot {\\cal G}\nu\\rangle_{\\mbox\n{\\tiny{\\rm 0}}}\\ ,\n\\label{uqc}\n\\\\*[7.2pt]\n\\langle g_s\\overline{d}\\sigma\\cdot {\\cal G} d\n\\rangle_{S_{\\mbox{\\tiny{\\rm V}}}}\n&=&\\langle g_s\\overline{d}\\sigma\\cdot {\\cal G} d\\rangle_{\\mbox{\\tiny\n{\\rm 0}}}+\\chi_{\\rm m}\nS_{\\mbox{\\tiny{\\rm V}}}\\langle g_s\\overline{d}\\sigma\\cdot {\\cal G}\nd\\rangle_{\\mbox\n{\\tiny{\\rm 0}}}\\ ,\n\\label{dqc}\n\\\\*[7.2pt]\n\\langle g_s\\overline{s}\\sigma\\cdot {\\cal G} s\n\\rangle_{S_{\\mbox{\\tiny{\\rm V}}}}\n&=&\\langle g_s\\overline{s}\\sigma\\cdot {\\cal G} s\\rangle_{\\mbox{\\tiny\n{\\rm 0}}}-\\chi_{\\rm ms}\nS_{\\mbox{\\tiny{\\rm V}}}\\langle g_s\\overline{s}\\sigma\\cdot {\\cal G}\ns\\rangle_{\\mbox\n{\\tiny{\\rm 0}}}\\ ,\n\\label{sdc}\n\\end{eqnarray}\nwhere $\\sigma\\cdot {\\cal G}\\equiv\n\\sigma_{\\mu\\nu}{\\cal G}^{\\mu\\nu}$ with ${\\cal G}^{\\mu\\nu}$ the gluon\nfield tensor. One can express $\\chi$, $\\chi_{\\rm s}$, $\\chi_{\\rm m}$,\nand $\\chi_{\\rm ms}$ in terms of correlation functions\n(see Ref.~\\cite{jin2}). Here\nwe have assumed that the response of the up and down quarks is\nthe same, apart from the sign.\nThe Wilson coefficients can be calculated following the methods\noutlined in Ref.~\\cite{jin2}. The results of our calculations for\nthe invariant functions $\\Pi^1_1$ and $\\Pi^q_1$\nare given in Appendix A.\n\n\n\\subsection{Sum rules}\n\nThe QCD sum rules are obtained by equating the QCD representation\nand the phenomenological representation and applying the Borel\ntransformation. The resulting sum rules in the proton case\ncan be expressed as\n\\begin{eqnarray}\n& &{c_1+6c_2\\over 2}M^8 E_2 L^{-8\/9}\n-{c_1+6c_2\\over 2}\\chi a M^6 E_1\n+{3c_2\\over 2}\\chi_{\\rm m}m_0^2 a M^4E_0L^{-14\/27}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{1.0cm}\n+{c_1+3c_2-c_3\\over 3} a^2 M^2\n=\\biggl[2 H_p\\,\\widetilde{\\lambda}_p^2 M_p^2-\n\\Delta\\widetilde{\\lambda}_p^2\\, M_p M^2\n-H_p\\,\\widetilde{\\lambda}_p^2 M^2\\biggr]e^{-M_p^2\/M^2}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{1.5cm}\n+\\left[\n{c_1-6c_2\\over 2}s^1_0 a L^{-4\/9}\n+{3c_2\\over 2}m_0^2 a L^{-26\/27}\\right]\\Delta s_0^1 M^2\ne^{-s^1_0\/M^2}\\; ,\n\\label{sum_p_1}\n\\\\*[14.4pt]\n& &-{4c_1-c_3\\over 4} a M^4 E_0 L^{-4\/9}\n-{c_4+c_5-6c_2\\over 12}m_0^2 a M^2 L^{-26\/27}\n+{2c_1\\over\n3}\\chi a^2 M^2 L^{4\/9}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.4cm}\n-{c_1+2c_2\\over 12}\\chi m_0^2 a^2 L^{-2\/27}\n-{c_1-2c_2\\over 12}\\chi_{\\rm m}m_0^2 a^2L^{-2\/27}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.8cm}=\n\\biggl[2H_p\\,\\widetilde{\\lambda}_p^2 M_p-\n\\Delta\\widetilde{\\lambda}_p^2 M^2\\biggr]e^{-M_p^2\/M^2}\n+{c_3\\over 16}(s_0^q)^2\\Delta s_0^q\nM^2 L^{-8\/9} e^{-s^q_0\/M^2}\\; ,\n\\label{sum_p_q}\n\\end{eqnarray}\nwhere $a\\equiv -4\\pi^2\\langle\\overline{q}q\\rangle_{\\mbox{\\tiny{\\rm 0}}}$,\n$\\widetilde{\\lambda}_p^2\\equiv 32\\pi^4\\lambda_p^2$,\n$\\Delta\\widetilde{\\lambda}_p^2\\equiv 32\\pi^4\\Delta\\lambda_p^2$,\nand $m_0^2\\equiv\n\\langle g_s\\overline{q}\\sigma\\cdot\n{\\cal G} q\\rangle_{\\mbox{\\tiny{\\rm 0}}}\/\n\\langle\\overline{q}q\\rangle_{\\mbox{\\tiny{\\rm 0}}}$.\nHere we have\nignored the isospin breaking in\nthe vacuum condensates (i.e.,\n$\\langle\\overline{u}\\hat{O}u\\rangle_{\\mbox{\\tiny{\\rm 0}}}\n\\simeq\\langle\\overline{d}\\hat{O}d\\rangle_{\\mbox{\\tiny{\\rm 0}}}\n=\\langle\\overline{q}\\hat{O}q\\rangle_{\\mbox{\\tiny{\\rm 0}}}$);\nthe inclusion of the isospin breaking in vacuum condensates\nonly gives small refinements of the results.\nWe have also defined\n\\begin{eqnarray}\n& &E_0\\equiv 1-e^{-s^i_0\/M^2}\\ ,\n\\nonumber\n\\\\*[7.2pt]\n& &E_1\\equiv 1-e^{-s^i_0\/M^2}\\left[{s^i_0\\over\nM^2}+1\\right]\\ ,\n\\nonumber\n\\\\*[7.2pt]\n& &E_2\\equiv 1-e^{-s^i_0\/M^2}\\left[{(s^i_0)^2\\over 2M^4}\n+{s^i_0\\over M^2}+1\\right]\\ ,\n\\label{conform}\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n& &c_1=(1-t)^2\\ ,\\hspace{0.8cm}c_2=1-t^2\\ ,\\hspace{0.8cm}c_3=5t^2+2t+5\\ ,\n\\nonumber\n\\\\*[7.2pt]\n& &c_4=t^2+10t+1\\ ,\\hspace{0.8cm}c_5=t^2+4t+7\\ .\n\\label{c-def}\n\\end{eqnarray}\nThe anomalous dimensions\nof the various operators have been taken into\naccount through the factor\n$L\\equiv\\ln(M^2\/\\Lambda_{\\rm QCD}^2)\/\\ln(\\mu^2\/\\Lambda_{\\rm\nQCD}^2)$\\cite{svz1,ioffe3}. We\ntake the renormalization scale $\\mu$ and the QCD scale parameter\n$\\Lambda_{\\rm QCD}$ to be $500\\,\\text{MeV}$\n and $150\\,\\text{MeV}$\\cite{ioffe3}.\n\nThe sum rules in the $\\Sigma^+$ case are given by\n\\begin{eqnarray}\n& &3c_2M^8 E_2L^{-8\/9}\n-3c_2\\chi a M^6 E_1\n+{c_1\\over 2}\\chi_{\\rm s}faM^6 E_1\n+(c_1-2c_3)m_s a M^4 E_0 L^{-8\/9}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.5cm}\n-3c_2m_s f a M^4 E_0 L^{-8\/9}\n+{3c_2\\over 2}\\chi_{\\rm m}m_0^2 a M^4 E_0 L^{-14\/27}\n-{c_2\\over 4}m_s f_s m_0^2 a M^2L^{-38\/27}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.5cm}\n-{2c_1+3c_2-6c_3\\over 12}m_s m_0^2 a M^2 L^{-38\/27}\n+{c_1-2c_3\\over 3} f a^2 M^2\n+{2c_3\\over 3}\\chi m_s a^2 M^2\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.5cm}\n+c_2\\chi m_s f a^2 M^2\n+c_2\\chi_{\\rm s}m_s f a^2 M^2\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{1.0cm}\n=\\biggl[2H_{\\Sigma^+}\n\\,\\widetilde{\\lambda}_{\\Sigma^+}^2 M_{\\Sigma^+}^2\n-\\Delta\\widetilde{\\lambda}_{\\Sigma^+}^2\\, M_{\\Sigma^+} M^2\n-H_{\\Sigma^+}\\,\\widetilde{\\lambda}_{\\Sigma^+}^2 M^2\n\\biggr]e^{- M_{\\Sigma^+}^2\/M^2}\n+\\biggl[{c_1\\over 4}m_s (s^1_0)^2L^{-4\/3}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{1.5cm}\n+{c_1\\over 2} f a s^1_0 L^{-4\/9}\n-3c_2a s^1_0 L^{-4\/9}\n+{3c_2\\over 2} m_0^2 a L^{-26\/27}\\biggr]\\Delta s_0^1\nM^2 e^{-s^1_0\/M^2}\\; ,\n\\label{sum_s_1}\n\\\\*[14.4pt]\n& &3c_2m_sM^6 E_1L^{-4\/3}\n-{2c_1-c_3\\over 2}a M^4 E_0L^{-4\/9}\n+3c_2 f a M^4 E_0 L^{-4\/9}\n-3c_2\\chi m_s a M^4E_0L^{-4\/9}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.5cm}\n-{c_3\\over 4}\\chi_{\\rm s} m_s f a M^4 L^{-4\/9}\n-{c_2\\over 12}m_0^2 a M^2 L^{-26\/27}\n-{5c_2\\over 4}f_s m_0^2 a M^2 L^{-26\/27}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.5cm}\n+{7c_2\\over 4}\\chi_{\\rm m}m_s m_0^2 a M^2 L^{-26\/27}\n-{c_5\\over 12}\\chi_{\\rm ms} m_s f_s m_0^2 a M^2\nL^{-2\/27}\n+{2c_1\\over 3}\\chi a^2 M^2 L^{4\/9}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.5cm}\n-2c_2\\chi f a^2 M^2 L^{4\/9}\n-2c_2\\chi_{\\rm s} f a^2 M^2 L^{4\/9}\n-c_2m_s a^2 L^{-4\/9}\n-{c_3-2c_1\\over 6} m_s f a^2 L^{-4\/9}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.5cm}\n-{c_1\\over 12}\\chi m_0^2 a^2 L^{-2\/27}\n+{5c_2\\over 12}\\chi f_s m_0^2 a^2 L^{-2\/27}\n+{7c_2\\over 12}\\chi_{\\rm s}f m_0^2 a^2 L^{-2\/27}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.5cm}\n-{c_1\\over 12}\\chi_{\\rm m} m_0^2 a^2 L^{-2\/27}\n+{5c_2\\over 12}\\chi_{\\rm ms} f_s m_0^2 a^2 L^{-2\/27}\n+{7c_2\\over 12}\\chi_{\\rm m} f m_0^2 a^2 L^{-2\/27}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{1.0cm}\n=\n\\biggl[2H_{\\Sigma^+}\\,\\widetilde{\\lambda}_{\\Sigma^+}^2 M_{\\Sigma^+}-\n\\Delta\\widetilde{\\lambda}_{\\Sigma^+}^2 M^2\\biggr]e^{-M_{\\Sigma^+}^2\/M^2}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{1.5cm}\n+\\biggl[{c_3\\over 16}(s^q_0)^2-3c_2m_s a\n-{c_3\\over 4}m_s f a\\biggr]\\Delta s_0^q\nM^2 L^{-8\/9} e^{-s^q_0\/M^2}\\; ,\n\\label{sum_s_q}\n\\end{eqnarray}\nwhere $f\\equiv \\langle\\overline{s}s\\rangle_{\\mbox{\\tiny{\\rm 0}}}\n\/\\langle\\overline{q}q\\rangle_{\\mbox{\\tiny{\\rm 0}}}$ and\n$f_s\\equiv\n\\langle g_s\\overline{s}\\sigma\\cdot\n{\\cal G} s\\rangle_{\\mbox{\\tiny{\\rm 0}}}\/\n\\langle g_s\\overline{q}\\sigma\\cdot\n{\\cal G} q\\rangle_{\\mbox{\\tiny{\\rm 0}}}$.\n The sum rules in the $\\Xi^0$ case are\n\\begin{eqnarray}\n& &-{c_1\\over 2}M^8 E_2 L^{-8\/9}\n+{c_1\\over 2}\\chi a M^6 E_1\n-3c_2\\chi_{\\rm s}f a M^6 E_1\n-3c_2m_s a M^4 E_0 L^{-8\/9}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.5cm}\n+(c_1-2c_3)m_s f a M^4 E_0 L^{-8\/9}\n+{3c_1\\over 2}\\chi_{\\rm ms}\nf_s m_0^2 a M^4 E_0 L^{-14\/27}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.5cm}\n-{c_2\\over 4}m_s m_0^2 a M^2 L^{-38\/27}\n-{2c_1+3c_2-6c_3\\over 12}m_s f_s m_0^2 a M^2 L^{-38\/27}\n-{c_3\\over 3}f^2 a^2 M^2\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.5cm}\n-c_2f a^2 M^2\n-(c_1-2c_3)\\chi m_s f a^2 M^2\n-(c_1-2c_3)\\chi_{\\rm s}m_s f a^2 M^2\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{1.0cm}\n=\\biggl[2H_{\\Xi^0}\\,\\widetilde{\\lambda}_{\\Xi^0}^2\nM_{\\Xi^0}^2\n-\\Delta\\widetilde{\\lambda}_{\\Xi^0}^2\\, M_{\\Xi^0} M^2\n-H_{\\Xi^0}\\,\\widetilde{\\lambda}_{\\Xi^0}^2 M^2\\biggr]\ne^{-M_{\\Xi^0}^2\/M^2}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{2.0cm}\n+\n\\biggl[-{3c_2\\over 2}m_s (s^1_0)^2 L^{-4\/3}\n+{c_1\\over 2}s^1_0 a L^{-4\/9}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{1.5cm}\n-3c_2s_0 f a L^{-4\/9}\n+{3c_2\\over 2}f_s m_0^2 a L^{-26\/27}\\biggr]\\Delta s_0^1\ne^{-s^1_0\/M^2}\\; ,\n\\label{sum_x_1}\n\\\\*[14.4pt]\n& &3c_2m_s M^6 E_1 L^{-4\/3}\n+{c_3\\over 4} a M^4 E_0 L^{-4\/9}\n+3c_2f a M^4 E_0 L^{-4\/9}\n-3c_2\\chi m_s a M^4 E_0 L^{-4\/9}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.5cm}\n+{2c_1-c_3\\over 2}\\chi_{\\rm s}m_s f a M^4 E_0 L^{-4\/9}\n+{c_5\\over 12}m_0^2 a M^2 L^{-26\/27}\n-{7c_2\\over 4}f_s m_0^2 a M^2 L^{-26\/27}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.5cm}\n+{5c_2\\over 4}\\chi_{\\rm m}m_s m_0^2 a M^2 L^{-14\/27}\n+{c_4\\over 12}\\chi_{\\rm ms}m_s f_s m_0^2 M^2 L^{-26\/27}\n-2c_2\\chi f a^2 M^2 L^{4\/9}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.5cm}\n-2c_2\\chi_{\\rm s} f a^2 M^2 L^{4\/9}\n+{2c_1\\over 3}\\chi_{\\rm s} f^2 a^2 M^2 L^{4\/9}\n-c_2m_s f^2 a^2 L^{-4\/9}\n-{c_3-2c_1\\over 6}m_s f_s a^2 L^{-4\/9}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.5cm}\n+{7c_2\\over 12}\\chi f_s m_0^2 a^2 L^{-2\/27}\n-{c_1\\over 12}\\chi_{\\rm s}f f_s m_0^2 a^2L^{-2\/27}\n+{5c_2\\over 12}\\chi_{\\rm s}f m_0^2 a^2 L^{-2\/27}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{0.5cm}\n+{5c_2\\over 12}\\chi_{\\rm m}f m_0^2 a^2 L^{-2\/27}\n-{c_1\\over 12}\\chi_{\\rm ms}f f_s m_0^2 a^2 L^{-2\/27}\n+{7c_2\\over 12}\\chi_{\\rm ms}f_s m_0^2 a^2 L^{-2\/27}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{1.0cm}=\n\\biggl[2H_{\\Xi^0}\\,\\widetilde{\\lambda}_{\\Xi^0}^2 M_{\\Xi^0}-\n\\Delta\\widetilde{\\lambda}_{\\Xi^0}^2\\biggr]e^{-M_{\\Xi^0}^2\/M^2}\n\\nonumber\\\\*[7.2pt]\n& &\\hspace*{2.0cm}\n+\\biggl[\n{c_3\\over 16}(s^q_0)^2\n-3c_2m_s a\n-{c_3-2c_1\\over 2} m_s f a\\biggr]\\Delta s_0^q\n L^{-8\/9} M^2 e^{-s_0^q\/M^2}\\; .\n\\label{sum_x_q}\n\\end{eqnarray}\n{}.\n\n\n\\section{Sum-rule analysis}\n\\label{anay}\n\nWe now analyze the sum rules derived in the previous section\nand extract the baryon matrix elements of interest. Here\nwe follow Ref.~\\cite{jin2} and use only the\nsum rules Eqs.~(\\ref{sum_p_q}), (\\ref{sum_s_q}), and\n(\\ref{sum_x_q}), which are more stable than the other\nthree sum rules. The pattern that one of the sum rules\n(in each case) works well while the other does not\nhas been seen in various external field\nproblems\\cite{ioffe1,chiu1,chiu2,jin1,jin2}.\nThis may be attributed to the different asymptotic\nbehavior of various sum rules. As emphasized earlier,\nthe phenomenological side of the external field sum rules\ncontains single pole terms arising from the transition\nbetween the ground state and the excited states, whose\ncontribution is {\\it not} suppressed relative to the\ndouble pole term and thus contaminates\nthe double pole contribution. The degree of this\ncontamination may vary from one sum rule to another.\nThe sum rule with smaller single pole contribution\nworks better.\nWe refer the reader to Refs.~\\cite{chiu2,jin1,jin2}\nfor more discussion\nabout the different behavior of various external field\nsum rules.\nIn the analysis to follow, we disregard the sum\nrule Eqs.~(\\ref{sum_p_1}), (\\ref{sum_s_1}), and\n(\\ref{sum_x_1}), and consider only the results from\nthe sum rules Eqs.~(\\ref{sum_p_q}), (\\ref{sum_s_q}), and\n(\\ref{sum_x_q}).\n\n\n\nWe adopt the numerical optimization procedures used in\nRefs.~\\cite{leinweber2,furnstahl1}. The\nsum rules are sampled in the fiducial region of Borel $M^2$, where\nthe contributions from the high-dimensional condensates\nremain small and the continuum contribution is controllable.\nWe choose\n\\begin{eqnarray}\n& &\n0.8\\leq M^2\\leq 1.4\\, {\\mbox{GeV}}^2\\hspace*{2cm}\n{\\mbox{for proton case}}\\ ,\n\\\\*[7.2pt]\n& &\n1.2\\leq M^2\\leq 1.8\\, {\\mbox{GeV}}^2\\hspace*{2cm}\n{\\mbox{for}}\\,\\Sigma^+\\,{\\mbox{and}}\\,\\Xi^0 {\\mbox{case}}\\ ,\n\\end{eqnarray}\nwhich have been identified as the fiducial region for the baryon\nmass sum rules\\cite{ioffe1,ioffe5}. Here we adopt these boundaries as\nthe maximal limits of applicability of the external field sum\nrules. The sum-rule predictions are obtained by\nminimizing the logarithmic measure\n$\\delta (M^2)={\\mbox{ln}}[{\\mbox{maximum}}\\{{\\mbox{LHS,RHS}}\\}\/\n{\\mbox{minimum}}\\{{\\mbox{LHS,RHS}}\\}]$ averaged over $150$ points\nevenly spaced within the fiducial region of $M^2$, where\nLHS and RHS denote the left- and right-hand sides of\nthe sum rules, respectively.\n\nNote that the {\\it vacuum} spectral parameters $\\lambda_{\\rm B}^2$,\n$M_{\\rm B}$ and $s_0^i$, also appear in the external field sum rules\n Eqs.~(\\ref{sum_p_1}--\\ref{sum_p_q}) and\n(\\ref{sum_s_1}--\\ref{sum_x_q}).\nHere we use the experimental values for\nthe baryon masses and extract $\\lambda_{\\rm B}^2$\nand $s_0^i$ from baryon mass sum rules using the same\noptimization procedure as described above.\nWe then extract $H_{\\rm B}$, $\\Delta\\lambda_{\\rm B}^2$, and\n$\\Delta s_0^i$ from the external field sum rules.\n\nFor vacuum condensates, we use $a=0.55\\, {\\mbox{GeV}}^3\\, (m_u\n+m_d\\simeq 11.8{\\mbox{MeV}})$\\cite{ioffe1,ioffe3},\n$m_0^2=0.8\\, {\\mbox{GeV}}^2$\\cite{ioffe1,belyaev3},\nand $f\\simeq f_s=0.8$\\cite{belyaev3,leinweber2}.\nWe take the strange quark mass\n$m_s$ to be $150\\, {\\mbox{MeV}}$\\cite{ioffe5}. The\nparameter $\\chi$ has been estimated in Ref.~\\cite{jin2}.\nThe estimate in chiral perturbation theory gives\n$\\chi\\simeq 2.2\\, {\\mbox{GeV}}^{-1}$. It is also shown that\nto the lowest order in $\\delta m$, $\\chi$ is determined by\n\\begin{equation}\n\\chi\\delta m=-\\gamma+O[(\\delta m)^2]\\ ,\n\\label{chi-est}\n\\end{equation}\nwhere $\\gamma\\equiv \\langle\\overline{d}d\n\\rangle_{\\mbox{\\tiny{\\rm 0}}}\/\\langle\\overline{u}u\n\\rangle_{\\mbox{\\tiny{\\rm 0}}}-1$, and $\\delta m$\nhas been determined by Gasser and Leutwyler,\n$\\delta m \/(m_u + m_d ) = 0.28 \\pm 0.03$\\cite{gasser1}.\nThe value of $\\gamma$ has been estimated previously in various\napproaches\\cite{gasser2,paver1,pascual1,bagan1,dominguez1,%\ndominguez2,narison1,adami2,adami1,eletsky1}\n with results ranging from $-1\\times 10^{-2}$\nto $-2\\times 10^{-3}$, which upon using Eq.~(\\ref{chi-est})\nand a median value for $\\delta m=3.3\\,\\text{MeV}$,\ncorresponds to\n\\begin{equation}\n0.5\\,\\text{GeV}^{-1}\\leq\\chi\\leq 3.0 \\,\\text{GeV}^{-1}\\ .\n\\label{chi-range}\n\\end{equation}\nWe shall consider this range of $\\chi$ values.\nWe follow Ref.~\\cite{jin2} and assume\n$\\chi_{\\rm m}\\simeq \\chi$,\nwhich is equivalent to the assumption that $m_0^2$\nis isospin independent.\n\nThe parameter $\\chi_{\\rm s}$ measures the response of\nthe strange quark condensate to the external field, which\nhas not been estimated previously.\nSince $\\overline{s}s$ is an isospin scalar operator,\n$\\chi_{\\rm s}$ arises from the isospin mixing and\nwe expect $\\chi_{\\rm s}<\\chi$.\nFollowing Ref.~\\cite{jin2},\none may express $\\chi_{\\rm s}$ in terms of a\ncorrelation function\nand estimate it in chiral perturbation theory. It is\neasy to show that $\\chi_{\\rm s}\\langle\\overline{s}s\n\\rangle_{\\mbox{\\tiny{\\rm 0}}}={d\\over d\\delta m}\n\\langle\\overline{s}s\n\\rangle_{\\mbox{\\tiny{\\rm 0}}}$. So, one may determine\n$\\chi_{\\rm s}$ by evaluating ${d\\over d\\delta m}\n\\langle\\overline{s}s\n\\rangle_{\\mbox{\\tiny{\\rm 0}}}$ in effective QCD models.\nHere we shall treat $\\chi_{\\rm s}$ as a free parameter\nand consider the values of $\\chi_{\\rm s}$ in the\nrange of $0\\leq\\chi_{\\rm s}\\leq 3.0\\,\\text{GeV}^{-1}$.\nWe also assume that $\\chi_{\\rm ms}\\simeq \\chi_{\\rm s}$.\n\nWe first analyze the sum rules for Ioffe's\ninterpolating field (i.e., $t=-1$). We start from\nthe proton case. The optimized result for $H_p$\nas function of $\\chi$ is plotted\nin Fig.~\\ref{fig-1}. One can see that\n$H_p$ varies rapidly with $\\chi$. Therefore,\nthe sum-rule prediction for the proton matrix element\n$H_p$ depends strongly on the response of the up and down\nquark condensates to the external source.\n(The sum rules in the proton case\nare independent of $\\chi_{\\rm s}$ and $\\chi_{\\rm sm}$.)\nFor moderate values of $\\chi$ ($1.5\\,\\text{GeV}^{-1}\n\\leq\\chi\\leq 2.0\\,\\text{GeV}^{-1}$), the predictions\nare\n\\begin{equation}\nH_p\\simeq 0.54-0.78\\ .\n\\label{typ-p}\n\\end{equation}\nOn the other hand, for large values of\n$\\chi$ ($2.4\\,\\text{GeV}^{-1}\\leq\\chi\n\\leq 3.0\\,\\text{GeV}^{-1}$), we find $H_p\\simeq\n0.97-1.25$. For small values of $\\chi$\n($\\chi\\leq 1.4\\,\\text{GeV}^{-1}$),\nthe continuum contribution is larger than $50\\%$,\nimplying that the continuum contribution is dominant\nin the Borel region of interest and the prediction\nis not reliable.\n The predictions for $\\Delta\n\\lambda_p^2$ and $\\Delta s_0^q$ also change with $\\chi$\n in the same way as $H_p$.\n\nTo see how well the sum rule works, we plot the LHS, RHS,\nand the individual terms of RHS of Eq.~(\\ref{sum_p_q}) as functions\nof $M^2$ with $\\chi=1.8\\,\\text{GeV}^{-1}$ in Fig.~\\ref{fig-2}\nusing the optimized values for $H_p$, $\\Delta\\lambda_p^2$,\nand $\\Delta s_0^q$. We see that the solid (LHS) and long-dashed (RHS)\ncurves are right on top of each other, showing a very good\noverlap. We also note from Fig.~\\ref{fig-2}\nthat the first term of RHS (curve 1) is larger than\nthe second (curve 2) and third (curve 3) terms. This shows that\nthe double pole contribution is stronger than the single\npole contribution and the predictions are thus stable.\n(Although the second and third terms are sizable\nindividually, their sum is small.)\n\n\nIn Fig.~\\ref{fig-3}, we have displayed the predicted\n$H_{\\Sigma^+}$ as function of $\\chi$ for three\ndifferent values of $\\chi_{\\rm s}$. One notices that\n$H_{\\Sigma^+}$ is largely insensitive to $\\chi_{\\rm s}$,\nbut strongly dependent on $\\chi$ value. For $\\chi$\nvalues in the range of $2.2\\,\\text{GeV}^{-1}\\leq\n\\chi\\leq 3.0\\,\\text{GeV}^{-1}$, we find\n\\begin{equation}\nH_{\\Sigma^+}\\simeq 1.65-2.48\\ .\n\\label{typ-s}\n\\end{equation}\nFor smaller $\\chi$, we obtain smaller values for $H_{\\Sigma^+}$.\nThe predictions for $\\Delta\\lambda_{\\Sigma^+}^2$\nand $\\Delta s_0^q$ change in a similar pattern. The sum rule\nworks very well and the continuum contribution is small for\nall $\\chi$ and $\\chi_{\\rm s}$ values considered here.\n\nThe optimized $H_{\\Xi^0}$ as function of $\\chi_{\\rm s}$\nis shown in Fig.~\\ref{fig-4}. [When $t=-1$, the sum rule\nEq.~(\\ref{sum_x_q}) is independent of $\\chi$\nand $\\chi_{\\rm s}$.] We see that\nthe result is very sensitive to the $\\chi_{\\rm s}$ value.\nThus the prediction for $H_{\\Xi^0}$ has a strong dependence\non the response of the strange quark condensate to the\nexternal field.\nFor moderate $\\chi_{\\rm s}$ ($1.7\\,\\text{GeV}^{-1}\\leq\n2.2\\,\\text{GeV}^{-1}$), we get\n\\begin{equation}\nH_{\\Xi^0}\\simeq 1.57-1.84\\ .\n\\label{typ-x}\n\\end{equation}\nFor larger (smaller) values of $\\chi_{\\rm s}$, we find\nlarger (smaller) values for $H_{\\Xi^0}$.\nAt $\\chi_{\\rm s}=0$, we get\n$H_{\\Xi^0}\\simeq 0.68$. The results for\n$\\Delta\\lambda^2_{\\Xi^0}$ and $\\Delta s_0^q$ increase\n(decrease) as $\\chi_{\\rm s}$ increases (decreases).\n\nAll of the results above use Ioffe's interpolating field\n(i.e., $t=-1$); we now present the results for general\ninterpolating field. In Fig.~\\ref{fig-5}, we have plotted\nthe predicted $H_p$, $H_{\\Sigma^+}$, and $H_{\\Xi^0}$ as\nfunctions of $t$ for $\\chi=2.5\\,\\text{GeV}^{-1}$ and\n$\\chi_{\\rm s}=1.5\\,\\text{GeV}^{-1}$. As $t$ increases,\n$H_p$, $H_{\\Sigma^+}$, and $H_{\\Xi^0}$ all increase; the\nrate of increase is essentially the same for $H_p$\nand $H_{\\Sigma^+}$, but somewhat smaller for $H_{\\Xi^0}$.\nWe note that the {\\it vacuum} spectral parameters $\\lambda^2_{\\rm B}$\nand $s^q_0$ decrease as $t$ increases; this leads to\na large variation of $H_p$, $H_{\\Sigma^+}$, and $H_{\\Xi^0}$\nwith $t$.\n\nThe sensitivity of our results to the assumption of\n$\\chi_{\\rm m}=\\chi$ is displayed in Fig.~\\ref{fig-6},\nwhere $t$ and $\\chi_{\\rm s} (=\\chi_{\\rm ms})$ are\nfixed at $-1$ and $1.5\\,\\text{GeV}^{-1}$, respectively.\nThe three curves are obtained by using $\\chi_{\\rm m}\n=\\chi$, ${1\\over 2}\\chi$, and ${3\\over 2}\\chi$, respectively.\nWe note that $H_p$ and $H_{\\Sigma^+}$\nget larger (smaller) as $\\chi_{\\rm m}$ becomes\nsmaller (larger). The results are more sensitive\nto $\\chi_{\\rm m}$ in the proton case than in the $\\Sigma^+$\ncase. The prediction for $H_p$ changes by about $25\\%$\nwhile the prediction for $H_{\\Sigma^+}$ changes\nby about $15\\%$ when the $\\chi_{\\rm m}$ value\nis changed by $50\\%$. This implies that the terms\nproportional to $\\chi_{\\rm m}$ in the sum rules\ngive rise to sizable contributions. The sensitivity\nof our predictions to the assumption of\n$\\chi_{\\rm sm}=\\chi_{\\rm s}$ is illustrated in\nFig.~\\ref{fig-7}, with $t=-1$ and $\\chi=\\chi_{\\rm m}\n=2.5\\,\\text{GeV}^{-1}$. The three curves correspond\nto $\\chi_{\\rm ms}=\\chi_{\\rm s}$, ${1\\over 2}\\chi_{\\rm s}$,\nand ${3\\over 2}\\chi_{\\rm s}$, respectively. One can see that both\n$H_{\\Sigma^+}$ and $H_{\\Xi^0}$ are insensitive to\nchanges in $\\chi_{\\rm ms}$. This indicates that\nthe terms proportional to $\\chi_{\\rm ms}$ give\nonly small contributions to the sum rules. One\nalso notices that $H_{\\Sigma^+}$ depends only\nweakly on $\\chi_{\\rm s}$. Finally, the effect of\nignoring the response of continuum threshold is\nshown in Fig.~\\ref{fig-8}. The solid (dashed) curve\nis obtained by including (omitting) the third term\non the RHS of Eq.~(\\ref{sum_p_q}). The difference\nbetween the two curves is large for moderate and\nlarge values of $\\chi$. This shows that the response\nof the continuum threshold can be sizable and\nshould be included in the sum rules. Unfortunately,\nthe response\nof the continuum thresholds has been omitted\nin all previous works on external field sum rules.\nThis was first noticed by Ioffe\\cite{ioffe2}.\n\n\n\n\\section{Estimate of baryon isospin mass splittings}\n\\label{isospin}\n\nIn this section we estimate the baryon isospin mass splittings\nusing $\\delta m$ and the baryon matrix elements of\nisovector-scalar current calculated in the previous section.\n\n\nThe observed hadron isospin mass splittings arise from electromagnetic\ninteraction and from the difference between up and down quark masses:\n\\begin{equation}\n\\delta m_h=(\\delta m_h)_{\\rm el}+(\\delta m_h)_{\\rm q}\\ ,\n\\label{dm-sep}\n\\end{equation}\nwhere $(\\delta m_h)_{\\rm el}$ and $(\\delta m_h)_{\\rm q}$ denote the\ncontributions due to electromagnetic interaction and due to\nthe up and down quark mass difference, respectively.\\footnote%\n{This separation is renormalization scale dependent. However,\nthis scale dependence is weak; it is thus meaningful to\nseparate the contribution of quark mass difference\nfrom that due to electromagnetic interaction (see Ref.~\\cite{jin2}).}\nFollowing Ref.~\\cite{jin2}, one can treat $\\delta m$\nas a small parameter and using the Hellman-Feynman\ntheorem~\\cite{hellman1,feynman1} to show that\nthe octet baryon\nisospin mass splittings to first order in $\\delta m$\ncan be expressed as\n\\begin{eqnarray}\n& &M_n-M_p=(M_n-M_p)_{\\rm el}+\\delta m H_p\\ ,\n\\label{mq-np}\n\\\\*[7.2pt]\n& &M_{\\Sigma^-}-M_{\\Sigma^+}=\n(M_{\\Sigma^-}-M_{\\Sigma^+})_{\\rm el}+\\delta m H_{\\Sigma^+}\\ ,\n\\label{mq-sig}\n\\\\*[7.2pt]\n& &M_{\\Xi^-}-M_{\\Xi^0}=(M_{\\Xi^-}-M_{\\Xi^0})_{\\rm el}\n+\\delta m H_{\\Xi^0}\\ .\n\\label{mq-xi}\n\\end{eqnarray}\nNote that $H_n=-H_p$, $H_{\\Sigma^-}=-H_{\\Sigma^+}$,\nand $H_{\\Xi^-}=-H_{\\Xi^0}$ to the lowest order\nin $\\delta m$.\nTherefore, QCD sum rule predictions for $H_p$, $H_{\\Sigma^+}$,\nand $H_{\\Xi^0}$,\nalong with the electromagnetic contributions\\cite{gasser1}\n\\begin{eqnarray}\n& &(M_n-M_p)_{\\rm el}= -0.76\\pm 0.30\\,{\\mbox{MeV}}\\ ,\n\\label{mel-np}\n\\\\*[7.2pt]\n& &(M_{\\Sigma^-}-M_{\\Sigma^+})_{\\rm el}= 0.17\\pm0.3\\,{\\mbox{MeV}}\\ ,\n\\label{mel-sig}\n\\\\*[7.2pt]\n& &(M_{\\Xi^-}-M_{\\Xi^0})_{\\rm el}=0.86\\pm 0.30\\,{\\mbox{MeV}}\\ ,\n\\label{mel-xi}\n\\end{eqnarray}\nwill lead to an estimate of the baryon isospin mass splittings.\nTaking the experimental mass difference\\cite{particle1}, one finds\n\\begin{eqnarray}\n& &(M_n-M_p)_{\\rm q}^{\\rm exp}=2.05\\pm 0.30\\,\\text{MeV}\\ ,\n\\\\*[7.2pt]\n& &(M_{\\Sigma^-}-M_{\\Sigma^+})_{\\rm q}^{\\rm exp}=7.9\\pm 0.33\\,\\text{MeV}\\ ,\n\\\\*[7.2pt]\n& &(M_{\\Xi^-}-M_{\\Xi^0})_{\\rm q}^{\\rm exp}=5.54\\pm 0.67\\,\\text{MeV}\\ .\n\\label{mass-dif-exp}\n\\end{eqnarray}\n\nWe have seen from last section that the uncertainties in our\nknowledge of the response of the quark condensates to the\nexternal field, $\\chi$ and $\\chi_{\\rm s}$, leads to uncertainties\nin the sum-rule determination of the baryon matrix elements\n$H_{\\rm B}$. (There are also uncertainties in $\\delta m$.)\nTherefore, our estimate here are only {\\it qualitative}.\nFor most of the values for $t$, $\\chi$ and $\\chi_{\\rm s}$\nconsidered here, the sum-rule analysis gives\n$0 < H_p < H_{\\Xi^0}\\leq H_{\\Sigma^+}$ (see Figs.~\\ref{fig-5}, \\ref{fig-6}\nand \\ref{fig-7}), which implies\n\\begin{equation}\n0<(M_n-M_p)_{\\rm q}<(M_{\\Xi^-}-M_{\\Xi^0})_{\\rm q}\\leq\n(M_{\\Sigma^-}-M_{\\Sigma^+})_{\\rm q}\\ .\n\\label{quli}\n\\end{equation}\nThis qualitative feature is compatible with the experimental data.\nFor the baryon interpolating fields with $t=-1$\nand moderate $\\chi$ and $\\chi_{\\rm s}$ values\n($1.6\\,\\text{GeV}^{-1}\\leq\\chi\\leq 2.2\\,\\text{GeV}^{-1}$\nand $1.3\\,\\text{GeV}^{-1}\\leq\\chi_{\\rm s}\\leq 1.8\\,\\text{GeV}^{-1}$),\nwe get\n\\begin{eqnarray}\n& &1.95\\,\\text{MeV}\\leq (M_n-M_p)_{\\rm q}\\leq 2.41\\,\\text{MeV}\\ ,\n\\\\*[7.2pt]\n& &4.0\\,\\text{MeV}\\leq (M_{\\Sigma^-}-M_{\\Sigma^+})_{\\rm q}\\leq 6.3\\,\\text{MeV}\\\n,\n\\\\*[7.2pt]\n& &4.5\\,\\text{MeV}\\leq (M_{\\Xi^-}-M_{\\Xi^0})_{\\rm q}\\leq 5.38 \\,\\text{MeV}\\ ,\n\\label{mass-est}\n\\end{eqnarray}\nwhere we have used a median value\n$\\delta m\\simeq 3.3\\,\\text{MeV}$. These results are\ncomparable to the experimental data, though the\nresult in the $\\Sigma$ case is somewhat too small.\nSmaller and larger values of $\\chi$ and $\\chi_{\\rm s}$\nlead to correspondingly smaller and larger values for\nthe baryon isospin mass differences. As $t$ increases\n(decreases), the results increase (decrease).\n\n\\section{Discussion}\n\\label{discussion}\n\nOur primary goal in the present paper has been to extract the\nbaryon matrix element $H_{\\rm B}=\\langle B|\\overline{u}u-\n\\overline{d}d|B\\rangle\/2M_{\\rm B}$ for octet baryons. We observe that\nthe sum-rule predictions for $H_{\\rm B}$ are quite sensitive\nto the response of quark condensates to the external\nisovector-scalar field, which is not well determined.\nThis means that our conclusion about $H_{\\rm B}$ can only be\n{\\it qualitative} at this point. The most concrete conclusion\nwe can draw from this work is that QCD sum rules\npredict positive values for $H_p$, $H_{\\Sigma^+}$,\nand $H_{\\Xi^0}$ and $H_p 304^\\circ$ pass through\nthe Scutum-Crux arm, and are dominated by negative RMs; at $\\ell < 304^\\circ$,\nfield lines are directed toward the observer, and RMs are consequently\npositive. The quality of the fit is indicated in Figure~\\ref{fig_fit},\nwhere we compare RM data to the predictions of the model in\nFigure~\\ref{fig_model}. For extragalactic data, the model RMs and\nthe data match very well. For pulsars, the scatter is larger (mainly\nbecause pulsar RM data cannot be meaningfully smoothed), but the\nmajor features are reproduced.\n\n\\begin{figure}[b!]\n\\centerline{\\psfig{file=fig_brown_sgps_rms.eps,width=\\textwidth}}\n\\caption{RM vs.\\ Galactic longitude for extragalactic sources (upper;\npurple points) and pulsars (lower; black points) in the SGPS region;\nthe extragalactic data have been smoothed as in\nFig.~\\protect\\ref{fig_sgps}. The green symbols show the RMs predicted\nby the best-fit model of \\protect\\cite{bhg+07} at the positions of\neach of the observed sources. The model data for extragalactic\nsources have been smoothed in the same way as for the observations.\nAdapted from \\protect\\cite{bhg+07}.}\n\\label{fig_fit}\n\\end{figure}\n\nThis global fit is at odds with earlier studies utilising smaller\ndata-sets, in that it suggests that the Galaxy can be modelled with\na predominantly clockwise field, plus a single reversed region.\nThis structure is in line with what is seen also for other spiral\ngalaxies, but needs to be verified by a better mapping of extragalactic\nRMs in the first Galactic quadrant.\n\n\n\n\\section{The Magnetic Field of the Large Magellanic Cloud}\n\\label{sec_lmc}\n\nThe LMC is also particularly amenable to extragalactic RMs as a\nprobe of its magnetic field, because of its large angular extent\n($\\sim6^\\circ$) on the sky. Gaensler et al.\\ \\cite{ghs+05} re-analysed\narchival LMC data taken with the ATCA, and extracted polarisation\nand RMs for 292 background sources. The results, shown in\nFigure~\\ref{fig_lmc}, show that RMs are generally positive on the\neastern half of the galaxy, and negative on the western half.\nAnalysed in more detail, these RMs reveal a sinusoidal pattern as\na function of azimuth, implying a coherent, spiral, pattern in the\nLMC's magnetic field, with a strength of about 1~$\\mu$G \\cite{ghs+05}.\n\n\\begin{figure}\n\\centerline{\\psfig{file=fig_lmc_compress.eps,width=\\textwidth}}\n\\caption{RMs of extragalactic sources behind the Large Magellanic\nCloud \\cite{ghs+05}. The image shows the distribution of emission measure toward\nthe LMC in units of pc~cm$^{-6}$. The symbols show the position,\nsign and magnitude of extragalactic RMs: filled (open) circles\ncorrespond to positive (negative) RMs, while asterisks indicate RMs\nwhich are consistent with zero within their errors. The diameter\nof each circle is proportional to the magnitude of the RM.}\n\\label{fig_lmc}\n\\end{figure}\n\nThe presence of this relatively strong, ordered, field is somewhat\nsurprising in the LMC. Standard turbulent dynamo theory requires\n5--10~Gyr to amplify a weak primordial seed field to microgauss\nlevels, but the repeated tidal interactions between the LMC, Milky\nWay and Small Magellanic Cloud should disrupt any field that might be\nslowly built up through this process. The coherent field revealed\nin Figure~\\ref{fig_lmc} must have been amplified and organised rapidly,\nin only a few hundred million years. One possibility is a cosmic\nray dynamo (e.g., \\cite{hkol04}), which should thrive in the vigorous\nstarburst environment supplied by the LMC.\n\n\n\n\\section{Magnetism with the Square Kilometre Array}\n\\label{sec_ska}\n\n\\subsection{The Rotation Measure Grid}\n\nThe results presented in \\S\\ref{sec_mw} \\& \\S\\ref{sec_lmc} can be\ngreatly expanded upon with a larger sample of RMs (see also Kronberg,\nthese proceedings). With the SKA, we envisage an all-sky ``rotation\nmeasure grid'' \\cite{bg04,gbf04}, which would be derived from a\n1.4~GHz full-Stokes continuum survey. For an SKA field of view of\n5~deg$^2$, six months of observing would result in an RMS sensitivity\nof $\\approx$$0.1$~$\\mu$Jy~beam$^{-1}$. To estimate the yield of the\nresulting RM grid, one needs to consider ``$\\log N - \\log P$'',\ni.e., the differential source counts in linear polarisation, analogous\nthe usual $\\log N - \\log S$ function in total intensity \\cite{bg04,tsg+07}.\nThis results in a distribution like that shown in Figure~\\ref{fig_grid}.\nWhile there are uncertainties in extrapolating to the low flux\nlevels expected for the SKA (see discussion by \\cite{tsg+07}), we\ncan roughly predict that the RM grid should yield about $\\sim10^8$\nsightlines, with a typical spacing between measurements of $\\sim1'$.\nA simulation of the polarised sky as might be seen with the SKA\nis shown in Figure~\\ref{fig_sim}.\nSuch a data-base will provide a fantastic probe of all manner of\nextended foreground sources, either individually (like the case of\nthe LMC) or as a statistical ensemble (see \\cite{sab+08} and Arshakian\net al., these proceedings).\n\n\\begin{figure}\n\\centerline{\\psfig{file=fig_logNlogP_xfig.eps,width=\\textwidth}}\n\\caption{The predicted flux distribution of extragalactic radio\nsources in both total intensity (solid lines) and in linear\npolarisation (dashed lines), adapted from \\cite{bg04}. The upper\npanel shows the differential source count distribution, while the\nlower panel shows the integral distribution. Approximate flux\nlimits for wide-field surveys with the EVLA and with the SKA are indicated.}\n\\label{fig_grid}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\\psfig{file=fig_ska_sim_compress.eps,width=\\textwidth}}\n\\caption{A simulation of a one-hour SKA observation in linear\npolarisation, for a field of view of 1~deg$^2$. The angular resolution\nis a few arcsec, and the gray scale is in units of $\\mu$Jy. The\nimage was created by squaring the intensity values of an NVSS field\nto generate a Ricean noise distribution, and then adjusting the\nflux and spatial scales to simulate the SKA source density predicted\nby Fig.~\\protect\\ref{fig_grid}. The insets show some hypothesised\ndistributions of position angle vs.\\ $\\lambda^2$ for three sources\nin the field, from which RMs can then be calculated.}\n\\label{fig_sim}\n\\end{figure}\n\n\\subsection{The Magnetic Universe}\n\nOne of the main applications of the SKA's RM grid will be to study\nthe growth of galactic-scale magnetic fields as a function of cosmic\ntime. The expectation is that the sightlines to distant extragalactic\nsources should generally intersect one or more foreground galaxies,\nas is seen in the Ly-$\\alpha$ and Mg\\,{\\sc ii} absorption lines in\nthe optical spectra of quasars. Such intervenors should generate\nan RM signature in the background source, and this signature should\npotentially evolve with redshift. In particular, Equation~(\\ref{eqn_rm2}),\nif rewritten to take into account cosmological effects, contains a\n$(1+z)^{-2}$ dilution term because of redshift of the emitted\nradiation, but in some models can also contain co-moving terms with\ndependencies $n_e \\propto (1+z)^3$ and $B_\\parallel \\propto (1+z)^2$ \\cite{wid02}.\nThe overall RM may then potentially evolve as rapidly as RM $\\propto\n(1+z)^3$, in which case we expect that filaments\nand absorbers should begin to show an increasingly large RM at\nhigher $z$. If we can obtain a large sample of both RMs (from the\nSKA) and accompanying redshifts (from the next generation of\nphotometric and spectroscopic optical surveys), we can apply a\nvariety of statistical tests to the distribution of RM vs.\\ $z$\nto map the magnetic evolution of the Universe to $z \\sim 3$\n\\cite{kol98,bbo99,kbm+08}.\n\n\n\\subsection{Magnetic Fields at $z > 5$}\n\nThere is already good evidence that microgauss-strength magnetic\nfields exist out to redshifts $z \\sim 1-2$ \\cite{kbm+08,kpz92}.\nIf we can extend such data out to $z \\ga 5$, we can potentially\nobtain strong constraints on how large-scale magnetic fields\nwere created and then amplified. Such measurements\ncan be made by obtaining RMs for polarised sources at very high redshifts.\n\nIndeed, radio emission has already been detected from two classes\nof sources at $z > 6$: gamma-ray burst afterglows \\cite{fck+06} and\nquasars \\cite{mbhw06}. We currently lack the sensitivity to detect\nlinear polarisation and RMs from these objects, but such measurements\nshould be possible with the SKA. Furthermore, since the cosmic\nmicrowave background is linearly polarised, deep observations at\nthe upper end of the SKA frequency range may be able to measure RMs\nagainst it \\cite{kl96b,shm04}. Such an experiment, while challenging,\nwould probe the integrated Faraday rotation over almost all of the\nUniverse's history. In considering such measurements, it is important\nto note that an RM measurement to a high-$z$ source does not provide\nany direct constraints on high-$z$ magnetic fields on its own, since\nthe observed RM will also contain contributions from low-$z$\ncomponents of the sightline, and from the Milky Way foreground.\nOnce a high-$z$ RM data point has been obtained, deep radio and\noptical observations of that field can yield a large number of RMs\nand redshifts for adjacent foreground objects (see Fig.~\\ref{fig_sim}).\nWhen the corresponding foreground contribution is then removed, the\nhigh-$z$ magnetic field can be isolated studied.\n\n\\section{Conclusions}\n\nCosmic magnetism is a vigorous and rapidly developing field. What\nmakes this area particularly relevant for the SKA is that magnetic\nfields at cosmological distances are uniquely probed at radio\nwavelengths. By studying the evolution of magnetic fields over the\nUniverse's history, we can simultaneously address a variety of major\ntopics in fundamental physics and astrophysics. This work also has\nstrong synergies with other astronomy and astroparticle experiments,\nsuch as {\\em Planck}, LSST, {\\em JWST}, HESS and Auger \\cite{ewvs06,apr07}.\n\nIn the coming years, a host of SKA pathfinders will begin to finally\nreveal the depth and detail of the polarised sky \\cite{rbb+06,jbb+07},\nculminating in an exploration of the full Magnetic Universe with\nthe Square Kilometre Array.\n\n\n\n\\acknowledgments\n\nI thank my various collaborators for their contributions to the\nwork reported here, in particular Jo-Anne Brown for providing the\nmaterial for several figures. This work has been supported by the\nAustralian Research Council (grant FF0561298) and by the National\nScience Foundation (grant AST-0307358).\n\n\\providecommand{\\href}[2]{#2}\\begingroup\\raggedright","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe existence of higher structures (such as Gerstenhaber algebras or Batalin-Vilkovisky algebras) on cohomology or homology of a certain algebraic structure was initiated by M. Gerstenhaber in the study of Hochschild cohomology of associative algebras \\cite{gers}. Later, a more sophisticated approach of his result was given by Gerstenhaber and Voronov using operad with multiplication in connections with Deligne's conjecture \\cite{gers-voro}. \nHowever, it can only ensure the existence of a Gerstenhaber structure. In differential geometry and noncommutative geometry, one wants a more concrete structure of differential calculus to understand the full account of the picture \\cite{nest-tsy,tamar-tsy}. A pair $(\\mathcal{A}, \\Omega )$ consisting of a Gerstenhaber algebra $\\mathcal{A}$ and a graded space $\\Omega$ is called a calculus if $\\Omega$ carries a module structure over the algebra $\\mathcal{A}$ (given by a map $i$) and a module structure over the Lie algebra $\\mathcal{A}^{\\bullet + 1}$ (given by a map $\\mathcal{L}$) and a differential $B : \\Omega_\\bullet \\rightarrow \\Omega_{\\bullet +1}$ that mixes $\\mathcal{L}, i$ and $B$ by the Cartan-Rinehart homotopy formula. For a smooth manifold $M$, the pair $(\\mathcal{X}^\\bullet (M), \\Omega^\\bullet (M))$ of multivector fields and differential forms; for an associative algebra $A$, the pair $(H^\\bullet (A), H_\\bullet (A))$ of Hochschild cohomology and homology yields differential calculus structure. In \\cite{kowal} Kowalzig extends the result of Gerstenhaber and Voronov by introducing a cyclic comp module over an operad (with multiplication). Such a structure induces a simplicial homology on the underlying graded space of the comp module and certain action maps. When passing onto the cohomology and homology, one gets a differential calculus structure.\n\nIn this paper, we first construct a new example of differential calculus in the context of hom-associative algebras. A hom-associative algebra is an algebra whose associativity is twisted by a linear homomorphism \\cite{makh-sil-0}. Such twisted structures were first appeared in the context of Lie algebras to study $q$-deformations of Witt and Virasoro algebras \\cite{hart}. Hom-associative algebras are widely studied in the last 10 years from various points of view. In \\cite{amm-ej-makh,makh-sil} Hochschild cohomology and deformations of hom-associative algebras are studied, whereas, homological perspectives are studied in \\cite{hasan}. The homotopy theoretic study of hom-associative algebras are considered in \\cite{das3}. In \\cite{das1} the present author showed that the space of Hochschild cochains of a hom-associative algebra $A$ carries a structure of an operad with a multiplication yields the cohomology a Gerstenhaber algebra (see also \\cite{das2}). Here we show that the space of Hochschild chains of $A$ forms a cyclic comp module over the above-mentioned operad. The induced simplicial homology coincides with the Hochschild homology of $A$. Hence, following the result of Kowalzig, we obtain a differential calculus structure on the pair of Hochschild cohomology and homology of $A$. See Section \\ref{section-3} for details clarification.\n\nFinally, as an application, we obtain a Batalin-Vilkovisky algebra structure on the Hochschild cohomology of a regular unital symmetric hom-associative algebra. This generalizes the corresponding result for associative algebras obtained by Tradler \\cite{tradler}.\n\nThroughout the paper, $k$ is a commutative ring of characteristic $0$. All linear maps and tensor products are over $k$.\n\n\\section{Noncommutative differential calculus and cyclic comp modules}\nIn this section, we recall noncommutative differential calculus and cyclic comp modules over non-symmetric operads. We mention how a cyclic comp module induces a noncommutative differential calculus. Our main references are \\cite{gers-voro,kowal}. We mainly follow the sign conventions of \\cite{kowal}.\n\n\\begin{defn}\n(i) A Gerstenhaber algebra over $k$ is a graded $k$-module $\\mathcal{A} = \\oplus_{i \\in \\mathbb{Z}} \\mathcal{A}^i$ together with a graded commutative, associative product $\\cup : \\mathcal{A}^p \\otimes \\mathcal{A}^q \\rightarrow \\mathcal{A}^{p+q}$ and a degree $-1$ graded Lie bracket $[~, ~]: \\mathcal{A}^p \\otimes \\mathcal{A}^q \\rightarrow \\mathcal{A}^{p+q-1}$ satisfying the following Leibniz rule\n\\begin{align*}\n[f, g \\cup h ] = [f , g ] \\cup h + (-1)^{(p-1) q} g \\cup [f, h], ~~ \\text{ for } f \\in \\mathcal{A}^p, g \\in \\mathcal{A}^q, h \\in \\mathcal{A}.\n\\end{align*}\n\n(ii) A pair $(\\mathcal{A}, \\Omega)$ consisting of a Gerstenhaber algebra $\\mathcal{A} = ( \\mathcal{A}, \\cup, [~, ~])$ and a graded $k$-module $\\Omega$ is called a precalculus if there is a graded $(\\mathcal{A}, \\cup)$-module structure on $\\Omega$ given by $i : \\mathcal{A}^p \\otimes \\Omega_n \\rightarrow \\Omega_{n-p}$ and a graded Lie algebra module by $\\mathcal{L} : \\mathcal{A}^{p+1} \\otimes \\Omega_n \\rightarrow \\Omega_{n-p}$ satisfying\n\\begin{align}\\label{eqn-t}\ni_{[f, g]} = i_f \\circ \\mathcal{L}_g - (-)^{p (q+1)} \\mathcal{L}_g \\circ i_f, ~ \\text{ for } f \\in \\mathcal{A}^p, g \\in \\mathcal{A}^q.\n\\end{align}\n\n(iii) A precalculus $(\\mathcal{A}, \\Omega)$ is said to be a calculus if there is a degree $+1$ map $B : \\Omega_\\bullet \\rightarrow \\Omega_{\\bullet + 1}$ satisfying $B^2 =0$ and the following Cartan-Rinehart homotopy formula holds\n\\begin{align*}\n\\mathcal{L}_f = B \\circ i_f - (-1)^p~ i_f \\circ B, ~ \\text{ for } f \\in \\mathcal{A}^p.\n\\end{align*}\n\\end{defn}\n\n\\begin{defn}\nA non-symmetric operad $\\mathcal{O}$ in the category of $k$-modules consists of a collection $\\{ \\mathcal{O}(p) \\}_{p \\geq 1}$ of $k$-modules together with $k$-bilinear maps (called partial compositions) $\\circ_i : \\mathcal{O}(p) \\otimes \\mathcal{O}(q) \\rightarrow \\mathcal{O}(p+q-1)$, for $1 \\leq i \\leq p$, satisfying the following identities\n\\begin{align*}\n(f \\circ_i g) \\circ_j h = \\begin{cases} (f \\circ_j h) \\circ_{i+p-1} g ~~~&\\mbox{if } j n$. Thus, it follows from (\\ref{cartan-pre-id}) that the Cartan-Rinehart homotopy formula holds on the induced (co)homology. Hence the pair $(H^\\bullet_\\pi (\\mathcal{O}), H_\\bullet (\\mathcal{M}))$ is a differential calculus.\n\n\n\n\\section{Hom-associative algebras and calculus structure}\\label{section-3}\nIn this section, we first recall hom-associative algebras and their Hochschild (co)homologies \\cite{makh-sil-0}, \\cite{amm-ej-makh}, \\cite{hasan}. In the next, we show that the pair of cohomology and homology forms a precalculus. Under some additional conditions on the hom-associative algebra, the precalculus turns out to be a noncommutative differential calculus.\n\n\\begin{defn}\nA hom-associative algebra is a $k$-module $A$ together with a $k$-bilinear map $\\mu : A \\otimes A \\rightarrow A, (a, b) \\mapsto a \\cdot b $ and a $k$-linear map $\\alpha : A \\rightarrow A$ satisfying the following hom-associativity:\n\\begin{align*}\n(a \\cdot b ) \\cdot \\alpha (c) = \\alpha (a) \\cdot ( b \\cdot c ), ~ \\text{ for } a, b, c \\in A.\n\\end{align*} \n\\end{defn}\nA hom-associative algebra as above is denoted by the triple $(A, \\mu, \\alpha )$. It is called multiplicative if $\\alpha ( a \\cdot b ) = \\alpha (a) \\cdot \\alpha (b)$, for $a, b \\in A$. In the rest of the paper, by a hom-associative algebra, we shall always mean a multiplicative hom-associative algebra.\nIt follows from the above definition that any associative algebra is a (multiplicative) hom-associative algebra with $\\alpha = \\mathrm{id}_A$.\n\nA hom-associative algebra $(A, \\mu, \\alpha )$ is said to be unital if there is an element $1 \\in A$ such that $\\alpha (1) = 1$ and $a \\cdot 1 = 1 \\cdot a = \\alpha (a),$ for all $a \\in A$.\n\n\\begin{exam}\\label{hom-alg-exam}\nLet $(A, \\mu)$ be an associative algebra over $k$ and $\\alpha : A \\rightarrow A$ be an algebra homomorphism. Then $(A, \\mu_\\alpha =\\alpha \\circ \\mu , \\alpha )$ is a hom-associative algebra, called obtained by composition. If the associative algebra $(A, \\mu)$ is unital and $\\alpha$ is an unital associative algebra morphism then the hom-associative algebra $(A, \\mu_\\alpha =\\alpha \\circ \\mu , \\alpha )$ is unital with the same unit.\n\\end{exam}\n\nLet $(A, \\mu, \\alpha)$ be a hom-associative algebra. A bimodule over it consists of a $k$-module $M$ and a linear map $\\beta : M \\rightarrow M$ with actions $l : A \\otimes M \\rightarrow M,~ (a,m) \\mapsto am$, and $r: M \\otimes A \\rightarrow M, ~ (m,a) \\mapsto ma$ satisfying $\\beta (am) = \\alpha (a) \\beta(m),~ \\beta (ma) = \\beta(m) \\alpha (a)$ and\nthe following bimodule conditions are hold\n\\begin{align*}\n(a \\cdot b)\\beta( m) = \\alpha(a) (bm) ~~~~ (am)\\alpha(b) = \\alpha(a)(mb) ~~~~ (ma) \\alpha(b) = \\beta(m) (a \\cdot b), ~ \\text{ for } a, b \\in A, m \\in M.\n\\end{align*} \nA bimodule can be simply denoted by $(M, \\beta)$ when the actions are understood. It is easy to see that $(A, \\alpha)$ is a bimodule with left and right actions are given by $\\mu$. \n\nLet $(A, \\mu, \\alpha)$ be a hom-associative algebra and $(M, \\beta)$ be a bimodule over it. The group of $n$-cochains of $A$ with coefficients in $(M, \\beta)$ is given by $C^n_\\alpha ( A, M) := \\{ f : A^{\\otimes n} \\rightarrow M |~ \\beta \\circ f = f \\circ \\alpha^{\\otimes n} \\},$ for $n \\geq 1$. The coboundary map $\\delta_\\alpha : C^n_\\alpha (A, M) \\rightarrow C^{n+1}_\\alpha (A, M)$ given by\n\\begin{align}\n(\\delta_\\alpha f)(a_1, \\ldots, a_{n+1}) :=~& \\alpha^{n-1}(a_1) f ( a_2, \\ldots, a_{n+1}) + (-1)^{n+1} f (a_1, \\ldots, a_n) \\alpha^{n-1} (a_{n+1}) \\\\\n~&+ \\sum_{i=1}^n (-1)^i~ f ( \\alpha (a_1), \\ldots, \\alpha (a_{i-1}), a_i \\cdot a_{i+1}, \\alpha (a_{i+2}), \\ldots, \\alpha (a_{n+1})). \\nonumber\n\\end{align}\nThe corresponding cohomology groups are denoted by $H^n_\\alpha (A, M)$, for $n \\geq 1$. When the bimodule is given by $(A, \\alpha)$, the corresponding cochain groups are denoted by $C^n_\\alpha (A)$ and the cohomology groups are denoted by $H^n_\\alpha (A)$, for $n \\geq 1$.\n\nIn this paper, we only require the Hochschild homology of $A$ with coefficients in itself. For homology with coefficients, see \\cite{hasan}. The $n$-th Hochschild chain group of $A$ with coefficients in itself is given by $C_n^\\alpha (A) := A \\otimes A^{\\otimes n}$, for $n \\geq 0$ and the boundary operator $d^\\alpha : C_n^\\alpha (A) \\rightarrow C_{n-1}^\\alpha (A)$ given by\n\\begin{align}\\label{hoch-hom-boundary}\nd^\\alpha ( a_0 \\otimes a_1 \\cdots a_n) :=~& a_0 \\cdot a_1 \\otimes \\alpha (a_2) \\cdots \\alpha (a_n) + (-1)^n a_n \\cdot a_0 \\otimes \\alpha (a_1) \\cdots \\alpha (a_{n-1}) \\\\\n~&+ \\sum_{i=1}^{n-1} (-1)^i~ \\alpha (a_0) \\otimes \\alpha (a_1) \\cdots (a_i \\cdot a_{i+1}) \\cdots \\alpha (a_n). \\nonumber\n\\end{align}\nThe corresponding homology groups are denoted by $H_n^\\alpha (A)$, for $n \\geq 0$.\n\n\nLet $(A, \\mu, \\alpha )$ be a hom-associative algebra. It has been shown in \\cite{das1} that the collection of Hochschild cochains $\\mathcal{O} = \\{ \\mathcal{O}(p) = C^p_\\alpha (A) \\}_{p \\geq 1}$ forms a non-symmetric operad with partial compositions\n\\begin{align}\\label{hom-ope}\n(f \\circ_i g ) ( a_1, \\ldots, a_{p+q-1}) = f ( \\alpha^{q-1} (a_1), \\ldots, \\alpha^{q-1} ( a_{i-1}), g (a_i, \\ldots, a_{i+q-1}), \\ldots, \\alpha^{q-1} ( a_{p+q-1})),\n\\end{align}\nfor $f \\in \\mathcal{O}(p), g \\in \\mathcal{O}(q)$ and the identity element $\\mathds{1} = \\mathrm{id}_A.$ Moreover, the element $\\mu \\in \\mathcal{O}(2) = C^2_\\alpha (A)$ is a multiplication in the above operad. The differential induced by $\\mu$ is same as $\\delta_\\alpha$ up to a sign. Hence the graded space of Hochschild cohomology $H_\\alpha^\\bullet (A)$ carries a Gerstenhaber structure.\n\n\\subsection{Precalculus structure}\n\nLet $\\mathcal{M}(n) := A \\otimes A^{\\otimes n}$, for $n \\geq 0$ be the $n$-th Hochschild chain group of $A$. For convenience, we denote the basic elements of $\\mathcal{M}(n)$ by $a_0 \\otimes a_1 \\cdots a_n$ when there is no confusion causes. For $p \\leq n$ and $1 \\leq i \\leq n-p+1$, we define maps $\\bullet_i : \\mathcal{O}(p) \\otimes \\mathcal{M}(n) \\rightarrow \\mathcal{M}(n-p+1)$ by\n\\begin{align*}\nf \\bullet_i ( a_0 \\otimes a_1 \\cdots a_n ) := \\alpha^{p-1}( a_0 ) \\otimes \\alpha^{p-1}( a_1) \\cdots \\alpha^{p-1}(a_{i-1}) f (a_i, \\ldots, a_{i+p-1}) \\alpha^{p-1}(a_{i+p}) \\cdots \\alpha^{p-1}( a_n). \n\\end{align*} \n\\begin{prop}\nWith these notations, $\\mathcal{M} = \\{ \\mathcal{M}(n) \\}_{n \\geq 0}$ is a unital comp module over the operad $\\mathcal{O}$.\n\\end{prop}\n\n\\begin{proof}\nWe have to verify the identities (\\ref{eqn-p}) and (\\ref{eqn-q}). First, for $j < i$, we have\n\\begin{align*}\n&f \\bullet_i ( g \\bullet_j (a_0 \\otimes a_1 \\cdots a_n )) \\\\&= f \\bullet_i \\big( \\alpha^{q-1}(a_0) \\otimes \\alpha^{q-1}(a_1) \\cdots g (a_j, \\ldots, a_{j+q-1}) \\cdots \\alpha^{q-1} (a_n) \\big) \\\\\n&= \\alpha^{p+q-2} (a_0) \\otimes \\alpha^{p+q-2} (a_1) \\cdots \\alpha^{p+q-2} (a_{j-1}) \\alpha^{p-1} ( g (a_j, \\ldots, a_{j+q-1} )) \\cdots \\\\& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad f (\\alpha^{q-1} (a_{i+q-1}), \\ldots, \\alpha^{q-1}(a_{i+p+q-2})) \\cdots \\alpha^{p+q-2} (a_n).\n\\end{align*}\nOn the other hand,\n\\begin{align*}\n&g \\bullet_j ( f \\bullet_{i+q-1} (a_0 \\otimes a_1 \\cdots a_n )) \\\\\n&= g \\bullet_j \\big( \\alpha^{p-1} (a_0) \\otimes \\alpha^{p-1} (a_1) \\cdots f (a_{i+q-1} , \\ldots, a_{i+p+q-2}) \\cdots \\alpha^{p-1} (a_n) \\big)\\\\\n&= \\alpha^{p+q-2} (a_0) \\otimes \\alpha^{p+q-2} (a_1) \\cdots \\alpha^{p+q-2} (a_{j-1}) g ( \\alpha^{p-1}(a_j), \\ldots, \\alpha^{p-1}(a_{j+q-1}) ) \\cdots \\\\& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\alpha^{q-1}(f ( a_{i+q-1}, \\ldots, a_{i+p+q-2})) \\cdots \\alpha^{p+q-2} (a_n).\n\\end{align*}\nHence $f \\bullet_i ( g \\bullet_j (a_0 \\otimes a_1 \\cdots a_n )) = g \\bullet_j ( f \\bullet_{i+q-1} (a_0 \\otimes a_1 \\cdots a_n ))$. Similarly, for $j-p 1$, organizing\nthe partition function by the number of covers also nicely organizes\nthe sum into powers of $1\/N$. Hence it is very easy to compute the\nleading order contribution in the large $N$ limit for these surfaces.\nFor the torus, there can be leading order contributions from any number\nof coverings, but it is easy to sum their contribution.\nHowever, in the\ncase of the sphere finding the leading order contribution in $1\/N$\nis not so easy, namely because all $SU(N)$ or $U(N)$ representations\ncontribute to the leading order behavior, and computing these contributions\ninvolves computing $1\/N$ corrections to the dimensions of these\nrepresentations.\n\nRecently, Douglas and Kazakov (DK) discovered a clever way to solve\nthe problem of the sphere by treating the rows of the young tableau\nand the number of boxes in each row as continuous variables[\\DK]. Doing\nthis they were able to compute the leading behavior which led them\nto a surprising result: the theory contains\na third order phase transition, similar to the case of the\nlattice version of Gross, Witten\nand Wadia, but for the continuum limit of the theory.\n\nHowever, the $U(N)$ case has an interesting feature--- the sum\nover representations is not asymptotic. In order to correct for this,\none must sum over an infinite number of charge sectors. While this\nwill eventually lead to overcounting for finite $N$, the answer will\nat least be asymptotic in the large $N$ limit. These different charge\nsectors can be thought of as being extra solutions to the large $N$\nequations of motion. These different sectors are conjugate to the $U(1)$\ninstantons. Because of this, one should be able to find\nthe such solutions using the DK analysis.\nIn this paper we do precisely this by generalizing an ansatz described\nby DK.\n\nThere has also been some recent work by Witten in ${\\rm QCD}_2$, although from\na different perspective[\\Witten]. He has shown that the partition function on\nany Riemann surface is given by a sum over the saddle points. An interesting\nquestion is how the DK phase transition appears from this point of view.\n\nIn section two we review recent work on 2d QCD and construct a free fermion\npicture for this theory. In section three we present an alternative derivation\nof Witten's result that the partition function is a sum over saddle points.\nWe then show how the continuum phase transition occurs in this dual picture\nof ${\\rm QCD}_2$. In section 4 we consider the new solutions\nfor $U(N)$ ${\\rm QCD}_2$ and generalize the analysis of DK for\nthese solutions. We consider the two cases where the area is near its\ncritical value, and when the area is very large. In section 5 we compare\nthese solutions to corresponding solutions for lattice $SU(N)$ ${\\rm QCD}_2$.\nIn section 6 we present our conclusions. We include an appendix with some\nuseful equations for elliptic integrals.\n\n\n\\chapter{Review of ${\\rm QCD}_2$}\n\nLet us first review how ${\\rm QCD}_2$\\ on a cylinder is the same as a\ntheory of free fermions by showing that it can be reduced to a one-dimensional\nunitary matrix model[\\MP-\\ROI].\nIn the gauge $A_0=0$, the Hamiltonian is given as\n$$H=\\half\\int_0^L dx\\tr F_{01}^2=\\half\\int_0^L dx\\tr \\dot A_1^2\\eqn\\Hamgauge$$\nwith the overdot denoting a time derivative.\nThe $A_0$ equation of motion is now the constraint\n$$D_1F_{10}=\\partial_1\\dot A_1+ig[A_1,\\dot A_1]=0.\\eqn\\gconstraint$$\nDefine a new variable $V(x)$,\n$$V(x)=W_0^x\\dot A_1(x)W_x^L,\\eqn\\Vdef$$\nwhere\n$$W_a^b={\\rm P}e^{ig\\int_a^bdxA_1}.\\eqn\\Wdef$$\nThen \\Hamgauge\\ can be written as\n$$\\partial_1V(x)=0,\\eqn\\Veq$$\nso $V(x)$ is a constant. Thus $V(0)=V(L)$, which implies that\n$$[W,\\dot A_1(0)]=0,\\eqn\\WAcomm$$\nwhere $W\\equiv W_0^L$ and we have used the periodicity of $A_1$ in $x$.\n\n{}From the definitions \\Vdef\\ and \\Wdef, we find the relation\n$$\\dot W=ig \\int_0^L dx W_0^x\\dot A_1(x)W_x^L=ig \\int_0^L dx V(x),\\eqn\\Wdeq$$\nand therefore using \\Veq\\ and \\WAcomm, we derive\n$$\\dot W=igLW\\dot A_1(0)=igL\\dot A_1(0)W.\\eqn\\WdeqII$$\n\\WdeqII\\ then implies that\n$$[W,\\dot W]=0.\\eqn\\WWdeq$$\n\nBecause $V(x)=V(0)$, $\\dot A_1(x)$ satisfies\n$$\\dot A_1(x)=W_0^x\\dot A_1(0)W_x^0.\\eqn\\Adeq$$\nThus, using this relation along with \\WdeqII, we can rewrite the Hamiltonian\nin \\Hamgauge\\ as\n$$H=-{1\\over 2g^2L}\\tr(W^{-1}\\dot W)^2.\\eqn\\Hammm$$\nIf the gauge group is $U(N)$, with the $U(1)$ coupling given by $g\/N$, then\n\\Hammm\\ is the Hamiltonian for the one-dimensional unitary matrix model.\nThe constraint in \\WWdeq\\ reduces the space of states to singlets[\\APA].\nHence, the problem is reducible to the eigenvalues of $W$.\n\nUpon quantization, this problem is equivalent to a system of $N$\nnonrelativistic fermions living on a circle, with the Hamiltonian given by\n$$H=-\\left({g^2L\\over2}\\right)\\sum_{i=1}^N{\\partial^2\\over\\partial\\theta_i^2},\n\\qquad\\qquad 0\\le\\theta_i<2\\pi.\\eqn\\Hamferm$$\nThe fermionization is due to the Jacobian of the change of variables from\n$W$ to its eigenvalues, introducing the Vandermonde-type\ndeterminant in the wavefunction of the states, which in the unitary\nmatrix case reads\n$$ {\\widetilde \\Delta} = \\prod_{ij}(p_i-p_j)^2(x_i-x_j)(y_i-y_j)\n\\exp(-\\half g^2LT\\sum_i p_i^2),\\eqn\\Zreduce$$\nwhere $C$ is an unimportant constant.\nNot surprisingly, this term approaches zero in this limit.\nBut in order to find the sphere contribution, one should notice that\nthe fermion wavefunction at the end points are {\\it more}\nsingular than a $\\delta$-function, namely\n$$\\psi (x_i ) = \\widetilde\\Delta (x_i ) \\delta (W-1) = {1 \\over \\Delta (x_i )}\n\\delta (x_i ) \\eqn\\Sing$$\nwhere a factor of $1\/\\widetilde\\Delta^2 (x_i )$ was produced by the change of\nvariables from $W$ to $x_i$.\nTherefore, it is necessary to divide the expression in \\Zreduce\\ by these\nextra Vandermonde determinants, that is,\n$$\\prod(x_i-x_j)(y_i-y_j),$$\nleaving a finite expression. Hence the sphere partition function is\n$$Z_{\\rm sphere}=C\\sum_{p_i}\n\\prod_{i>j}(p_i-p_j)^2 \\exp(-\\half g^2LT\\sum_i p_i^2).\\eqn\\Zsphere$$\n\nWe can compare this to the sphere partition function of Migdal and\nRusakov[\\Migdal,\\Rus], which is given by\n$$Z_{MR}=\\sum_R (d_R)^2 \\exp(-g^2AC_{2R})\\eqn\\MRsph$$\nwhere the sum is over all representations of $U(N)$ or $SU(N)$ and\n$d_R$ is the dimension of the representation and $C_{2R}$ is the\nquadratic casimir.\nThe correspondence of the fermion states with the $U(N)$ represenatations\nis as follows[\\Mike]:\nIf we describe a representation by a Young tableau, then the number\nof boxes in row $i$, $n_i$ is the momentum shift\nof the fermion with the $i^{\\rm th}$ highest momentum\nabove its ground state value.\nIn terms of boxes, the casimir is given by\n$$C_{2R}=\\half\\left(N\\sum_i n_i+\\sum_i n_i(n_i-2i+1)\\right)\\eqn\\casimir$$\nwhich one can easily checked is reproduced by the fermions after\nsubtracting off the ground state energy. The dimension of the\nrepresentation is given by\n$$\\eqalign{d_R&=\\prod_{i>j}\\left(1-{n_i-n_j\\over i-j}\\right)\\cr\n&=\\prod_{i>j}(i-j)^{-1}\\prod_{i>j}\\Bigl((n_j-j)-(n_i-i)\\Bigr).}\\eqn\\dimrep$$\nThe first product in the second line of \\dimrep\\ is a representation\nindependent term and is thus an unimportant\nconstant. The second term is just $p_j-p_i$ for the fermions.\nThe total momentum is the $U(1)$ charge for a representation.\nHence we find full agreement with the result of Migdal and Rusakov.\n\n\\chapter{Classical Solutions}\n\nAfter a cursory inspection of \\Zsphere\\ it would appear that $Z_{\\rm sphere}$\nis simply the partition function for a $d=0$ matrix model in a\nquadratic potential. Unlike the lattice case, the potential never turns over,\nso one might not expect a phase transition. As was shown by Douglas and\nKazakov, this is not correct. The point is that the variables $p_i$ that\nappear in \\Zsphere\\ are discrete, hence the density of eigenvalues will\nbe bounded. When this bound is reached, a phase transition occurs. Thus,\nin the strong coupling phase we simply have condensation of the fermions\nin their momentum lattice, which gives a very simple physical understanding\nof the phase transition mechanism. The critical value of the area is reached\nwhen the density of $p_i$, as given by the Wigner semicircle law which is\nvalid in the continuoum, reaches somewhere the lattice bound, namely one,\nthus reproducing the DK result.\n\nThere should be a classical field configuration, termed master field, which\ndominates the path-integral in each phase. In fact, as was shown by\nWitten[\\Witten]\nusing the localization theorem of Duistermaat and Heckman (DH),\nthe full ${\\rm QCD}_2$\\ path integral can be\nwritten as a suitable sum over classical saddle points. In what follows we\nwill give a very simple demonstration of Witten's result using the fermion\npicture and identify the classical configuration which dominates, that is,\nthe master field.\n\nThe key observation is that the exact propagator of a free particle is\nproportional to the exponential of the action corresponding to the classical\n(straight) path connecting the initial and final points. Since ${\\rm QCD}_2$\\ on the\ntorus is equivalent to $N$ free fermions, its partition function will also\nbe given by an appropriate classical path of the free fermions. The things to\nbe taken into account, however, are\n\ni) The particles live on a circle; therefore there are several possible\nclassical paths for each of them, differing by their winding around the\ncircle with fixed initial and final positions.\n\nii) The particles are fermions; thus one should also consider paths where\nthe final positions of the particles have been permuted, weighted by a\nfermionic factor $(-)^C$, where $C$ is the number of times the paths of\nthe particles cross.\n\nThe total partition function will then be the (weighted) sum of the actions\nof all these classical configurations. Since to each\npath corresponds a (diagonal) matrix $W(t)$, and to that (up to gauge\ntransformations) a classical field configuration satisfying the field\nequations of motion, this is the sum over saddle points of the action\nof Witten.\n\nFor the sphere the same picture holds, with the difference that all paths\nstart and end at the point $x=0$, and that each path is further weighted\nby an extra factor, due to the division by the Vandermonde determinants as\nexplained in the previous section. This is, again, the sum over saddle\npoints of Witten, and the extra weighting factors are the determinants\nwhich appear in the DH theorem. These paths are characterized by their\nwinding numbers $\\{ n_i \\}$ (up to permutation) and thus this is a sum of\nthe form\n$$ Z_{\\rm sphere} = \\sum_{n_i} w( n_i ) \\exp \\Big(-{2\\pi^2 \\over g^2 LT}\n\\sum_i n_i^2 \\Big)\\eqn\\Zcl$$\nwhere $w( n_i )$ are the (as yet undetermined) weighting factors.\n\nThe easiest way to obtain the full expression in \\Zcl\\ is to Poisson resum\n\\Zsphere. Using the formula\n$$ \\sum_n f(n) = \\sum_n {\\tilde f} (2\\pi n)\\eqn\\resum$$\nwhere $f(x)$ is any function and $\\tilde f$ is its Fourier\ntransform, we obtain\n$$Z_{\\rm sphere} = C \\sum_{n_i} F_2 (2\\pi n_i ),\\eqn\\Zres$$\nwhere\n$$ F_2 ( x_i ) = \\int \\prod_i dp_i e^{-i \\sum_i x_i p_i } \\Delta^2 ( p_i )\n\\exp(-\\half g^2LT\\sum_i p_i^2).\\eqn\\FF$$\nTo find the Fourier transform appearing in \\FF, we first note that\n$$ F_1 \\equiv \\int \\prod_i dp_i e^{-i \\sum_i x_i p_i } \\Delta ( p_i )\n\\exp(-\\half \\alpha \\sum_i p_i^2) = C \\Delta ( x_i )\n\\exp(-{1 \\over 2\\alpha} \\sum_i x_i^2) .\\eqn\\F$$\nTo prove this, notice that\n$$ F_1 = \\Delta ( -\\partial_{p_i}) \\int \\prod_i dp_i e^{-i \\sum_i x_i p_i }\n\\exp(-\\half \\alpha \\sum_i p_i^2) = P ( x_i )\n\\exp(-{1 \\over 2\\alpha} \\sum_i x_i^2).\\eqn\\FP$$\n$P( x_i )$ is a polynomial of degree $N(N-1)\/2$; moreover it is completely\nantisymmetric in $x_i$. Therefore, up to a normalization, it is the\nVandermonde. The constant $C$ in \\F\\ can be found explicitly, it is however\nirrelevant for this discussion since it will amount to an overall coefficient\nin the final result. Using the convolution property of the Fourier transform\nof a product, in combination with \\F, we find\n$$\\eqalign{ F_2 ( x_i ) &= (F_1 \\otimes F_1 ) ( x_i )\\cr\n& = C \\int \\prod_i dy_i\n\\Delta ({x_i - y_i \\over 2}) \\Delta ({x_i + y_i \\over 2})\n\\exp\\Big(-{1 \\over 4g^2LT}\\sum_i [(x_i+y_i)^2 + (x_i-y_i)^2 ]\\Big)\\cr\n&= C \\exp({1 \\over 2g^2LT} \\sum_i x_i^2 ) \\int \\prod_i dy_i \\prod_{ia$, the second for $b<|\\lambda|1$. But this violates the\nansatz that the density is less than or equal to unity. Therefore,\nwe must choose $b=0$. Using \\hIIIre\\ and \\hIVre\\ we then see that\n$a={2\\over\\pi}$ and that the charge for this solution is zero.\n\nNow consider small, but nonzero values for $Q$. In this case,\n$b-c$ must be nonzero. To this end, let\n$$\\epsilon=b-c,\\qquad\\qquad\\delta=b+c\\eqn\\bpc$$\nUsing \\hIIIre\\ and the asymptotic expansions in the appendix,\nwe find the leading correction to $a$ from its critical value is\n$$\\Delta a=-{\\pi\\over32}(\\epsilon^2+2\\delta^2).\\eqn\\acorr$$\nSince this correction is of order $\\epsilon^2$ and not $\\epsilon$,\nit will not contribute to $a+d$ in leading order.\nThis leading order correction can be found from\n\\hIIre\\ and the asymptotic expansions in the appendix. A little algebra\nshows that\n$$a+d={\\pi^2\\over32}\\epsilon^2\\delta.\\eqn\\apdcorr$$\nWe can now use \\hIVre, \\bpc\\ and \\apdcorr\\ to find $Q$.\nLet us rewrite \\hIVre\\ as\n$$\\eqalign{Q&=(a+d)\\left({1\\over4}+{K\\over16\\rho}(a-d+b-c)(a-d-b+c)\\right)\\cr\n&\\qquad\\qquad\n+(b+c)\\left({1\\over4}-{K\\over16\\rho}(a-d+b-c)(a-d-b+c)\\right).}\\eqn\\chargere$$\nThe factor multiplying $a+d$ is to leading order $1\/2$. The term multiplying\n$b+c$ is actually much smaller. In fact, this term is third order in $\\epsilon$\nand is given by $-\\pi^3\\epsilon^3\/1024$. Hence, the leading contribution to\nthe charge actually comes from the $a+d$ term and is\n$$Q={\\pi^2\\over64}\\epsilon^2\\delta.\\eqn\\chcorr$$\n\nThe charge that appears in \\chcorr\\ is limited by the maximum value of\n$\\delta$ given $\\epsilon$. This bound is determined by enforcing the ansatz\nthat $u(\\lambda)\\le1$. The region where this ansatz might be violated\nis where $\\lambda\\approx b$ or $\\lambda\\approx c$. Let us\nconsider the case where $\\lambda$ is near $b$. Examining equation \\density,\nwe see that the first modulus in the elliptic integral of the third kind\napproaches unity as $\\lambda$ approaches $b$ from above.\nTherefore, if we substitute the asymptotic expansion for $\\Pi$ in (A.11)\nin \\density, we find that the density is given by\n$$\\eqalign{u(\\lambda)&= {2\\over\\pi}{1\\over b-d}\\sqrt{(a-b)(\\lambda-b)\\over(a-c)(b-c)}\n\\Biggl\\{(b-c)K(q)+(c-d)K-E(q){(a-c)(b-d)\\over a-b}\\cr\n&\\qquad\\qquad\\qquad+(c-d){\\pi\\over2}\\sqrt{(a-c)(b-d)\\over(a-b)(c-d)}\n\\sqrt{(b-d)(b-c)\\over(\\lambda-b)(c-d)}\n\\Biggr\\}+{\\rm O}(\\lambda-b)\\cr\n&=1+\\sqrt{\\lambda-b}{2\\over\\pi}\\sqrt{a-b\\over(a-c)(b-c)}\\left(K(q)-E(q){a-c\\over\na-b}\n\\right)+{\\rm O}(\\lambda-b).}\\eqn\\densityapp$$\nHence, in order to ensure that the ansatz is satisfied, it is necessary\nthat the relation\n$${a-c\\over a-b}E(q)-K(q)>0,\\eqn\\EKrel$$\nbe upheld. Using the asymptotic expansions and the values for $b$, $c$, $a$\nand $d$ given by \\bpc\\ and \\acorr, we find that\n$${a-c\\over a-b}E(q)-K(q)\\approx{\\pi^3\\over32}(2\\delta\\epsilon+\\epsilon^2).\\eqn\\epsde$$\nTherefore, in order for \\EKrel\\ to be satisfied, we must have $\\delta>-\\epsilon\/2$.\nWe can also derive the constraint that $\\delta<\\epsilon\/2$ by examining $u(\\lambda)$\nas $\\lambda\\to c$. Therefore, we must satisfy\n$$|\\delta|<\\epsilon\/2.\\eqn\\delepscon$$\nWe can rewrite this constraint in terms of the charge and the area. From\n\\hIre\\ and the asymptotic expansion, we have the relation\n$$A-A_c={3\\pi^4\\over64}(\\epsilon^2+2\\delta^2),\\eqn\\Aeps$$\nwhere $A_c$ is the critical value for the area.\nHence, using \\chcorr, \\delepscon\\ and \\Aeps, we have that\nthe maximum allowed charge sector for a given area near its critical value is\n$$Q_{max}={32\\sqrt{2}\\over27\\pi^4}(A-A_c)^{3\/2}.\\eqn\\Qmax$$\nThis relation is a little reminiscent of the maximum charge of a\nReissner-Nordstrom black hole.\n\nNow we wish to examine the behavior deep in the strong coupling regime,\nwhich corresponds to large values of the area.\nIn this case, we expect $b$ to approach $a$ and $c$ to approach $d$. The\nvalues of $a$ and $d$ are determined by the charge of the sector that we\nare considering. For this behavior $q\\to1$, and thus $q'\\to0$. At this\npoint it is convenient to rewrite $\\Pi(\\alpha,q)$ in terms of elliptic\nintegrals of the first and second kind. Using the relation in the\nappendix, equation \\hIIre\\ can be written as\n$$-E(q)F(\\theta,q)+K(q)E(\\theta,q)={a+b-c+d\\over2\\rho}K(q),\\eqn\\hIIinc$$\nwhere $\\sin\\theta={a-c\\over a-d}$\nUsing \\hIIIre\\ and the asymptotic expansions in (A.12) and (A.13),\nwe find that in this region\n$$a-d\\approx1\\eqn\\adrel$$\nwhich is expected since the the integral of $u(\\lambda)$ should be unity\nand for almost all values of $\\lambda$ between $a$ and $d$, $u(\\lambda)=1$.\nPlugging in leading order asymptotic expansions in (A.12)-(A.15)\ninto \\hIIinc\\ and invoking \\adrel, we then find the following approximate\nequation\n$$-\\log{1+\\sin\\theta\\over\\cos\\theta}+\\log\\sqrt{(a-c)(b-d)\\over(a-b)(c-d)}\\to\n\\log\\sqrt{1\\over a-b}\\approx a\\log\\sqrt{1\\over(a-b)(c-d)}.\\eqn\\logrel$$\nIf we let $a-b=\\epsilon$ and $c-d=\\epsilon^\\mu$, then \\logrel\\ gives\n$$a={1\\over1+\\mu}.\\eqn\\aeq$$\nSince $0<\\mu<\\infty$, we see that the possible values of $a$ range from\n0 to 1. Of course what this means is that no local solutions exist\nif the fermions are shifted such that all of the momenta are greater\nthan 0 or all are less than 0. This should not come as a big surprise,\nsince once these limits are reached, then there are small deformations\nof the fermion momenta which lower the energy of the state.\n\nThe charge for these solutions is dominated by the $W$ term in \\hIVre,\nsince $X$ $Y$ and $Z$ are all small. Clearly in the limit $b\\to a$ and\n$c\\to d$, the charge approaches\n$$Q\\to {a+d\\over2}=a-\\half.\\eqn\\Qlarge$$\n\n\\chapter{Correspondence with Lattice Models}\n\nDouglas and Kazakov have remarked that the phase transition for the continuum\nmodel is similar to the phase transition that occurs for the lattice. In\nboth cases the phase transition is third order and the equations of motion\nfor the eigenvalues are given by a two cut model. In some sense, these\ntwo situations are dual to each other, with the weak coupling region of the\nlattice model acting like the strong coupling region of the continuum\ncase.\n\nHowever, it would appear that this correspondence breaks down when we\nconsider the solutions with nonzero values of $Q$. There are no\ncorresponding solutions for weakly coupled $U(N)$ on the lattice. But\na little more thought shows that correspondence is between continuum $U(N)$\nand the lattice $SU(N)$ and vice versa. For instance, the continuum\n$SU(N)$ case does not have these extra solutions, since the $U(1)$ charge\nis not a degree of freedom. In terms of the fermions, the center of mass\ncoordinate is modded out. Hence shifting all the fermion momenta by\nthe same amount gives back the same state. On the other hand, the $SU(N)$\ncase has an extra term in the casimir, $n^2\/N^2$, where $n$ is the number\nof boxes in the representation. But this term does not survive the scaling\nlimit, so we can safely drop it.\n\nFor $SU(N)$ on the lattice, the eigenvalues sit in a potential described\nby \\matpot. However, unlike the $U(N)$ case, the center of mass position\nfor these eigenvalues must satisfy the constraint that\n$$\\sum_i\\theta_i=2\\pi m,\\eqn\\thcon$$\nwhere $m$ is an integer. Clearly, the $U(N)$ classical solution is the\nsame as the lowest energy state for the $SU(N)$ case, which will have\n$m=0$. But suppose we consider a case where $00$ are verified on finite elementary tensors by a straightforward computation. Similar arguments as used in the proof of \nProposition \\ref{proposition:rep-Fbeta} ensure that the maps $g_n \\mapsto \\tilde{\\rho}_M(g_n) := \\tilde{\\alpha}_n $ extend multiplicatively \nto a representation $\\tilde{\\rho}_M \\colon F \\to \\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$. Its generating property is again immediate from the \nminimality of the stationary process by Proposition \\ref{proposition:minimality-generating}. Finally, the Markovianity of the\nbilateral stationary process $(\\tilde{\\mathcal{M}}, \\tilde{\\psi}, \\tilde{\\alpha}_0, \\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_1})$ follows from Corollary\n\\ref{corollary:markov-filtration-MN}.\n\\end{proof}\nGiven the stationary Markov process $(\\tilde{\\mathcal{M}}, \\tilde{\\psi}, \\tilde{\\alpha}_0, \\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_1})$ (from Proposition \\ref{proposition:rep-Falpha}), \na restriction of the generating algebra $\\tilde{\\mathcal{M}}^{\\tilde{\\alpha}_1}$ to a von Neumann subalgebra $\\mathcal{A}_0$ provides a candidate \nfor another stationary Markov process. Viewing the Markov shift $\\tilde{\\alpha}_0$ as a `perturbation' of the Bernoulli\nshift $\\tilde{\\beta}_0$, the subalgebra $\\mathcal{A}_0 = \\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0}$ is an interesting choice. \n\\begin{Proposition} \\label{proposition:MarkovTwoReps2}\nThe quadruple $(\\tilde{\\mathcal{M}}, \\tilde{\\psi}, \\tilde{\\alpha}_0, \\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0})$ is a bilateral stationary Markov process.\n\\end{Proposition} \n\n\\begin{proof} \nWe recall from \\eqref{eq:f-p-a-0} that \n\\begin{align*}\n \\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0} &= \\mathcal{A} \\otimes \\mathbbm{1}_\\mathcal{C}^{\\otimes_{\\mathbb{N}_0}} \n \\otimes \\mathbbm{1}_\\mathcal{C}^{\\otimes_{\\mathbb{N}_0}} \n \\otimes \\mathbbm{1}_\\mathcal{C}^{\\otimes_{\\mathbb{N}_0}} \n \\otimes \\cdots. \n\\end{align*}\nLet $P_I$ denote the $\\tilde{\\psi}$-preserving normal conditional expectation from $\\tilde{\\mathcal{M}}$ onto \n$ \\mathcal{A}_{I} := \\bigvee_{i \\in I} \\tilde{\\alpha}_0^i(\\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0})$ for an interval $I \\subset \\mathbb{Z}$. \nBy Lemma \\ref{lemma:Mark-Suff}, it suffices to verify the Markov property\n\\[\nP_{(-\\infty,0]}P_{[0,\\infty)} = P_{[0,0]}.\n\\]\nFor this purpose we use the von Neumann subalgebra \n\\[\n\\mathcal{D}_0 :=\n\\begin{array}{ccccccccc}\n&& \\vdots & & \\vdots & & \\vdots & & \\\\\n && \\otimes & & \\otimes & & \\otimes & & \\\\\n && \\mathbbm{1}_\\mathcal{C} & & \\mathbbm{1}_\\mathcal{C} & & \\mathbbm{1}_\\mathcal{C} & & \\cdots \\\\\n &&\\otimes & & \\otimes & & \\otimes & & \\\\\n\\mathcal{A} & \\otimes & \\mathcal{C} & \\otimes & \\mathbbm{1}_\\mathcal{C} & \\otimes & \\mathbbm{1}_\\mathcal{C} & \\otimes & \\cdots \\\\\n\\end{array}\n\\]\nand the tensor shift $\\tilde{\\beta}_0$ to generate the `past algebra' \n$\\mathcal{D}_{<} := \\bigvee_{i < 0} \\tilde{\\beta}_0^i(\\mathcal{D}_0)$ and\nthe `future algebra' $\\mathcal{D}_{\\ge } := \\bigvee_{i\\ge 0} \\tilde{\\beta}_0^i(\\mathcal{D}_0)$. \nOne has the inclusions\n\\[\n\\mathcal{A}_{(-\\infty,0]} \\subset \\mathcal{D}_{<}, \\qquad\n\\mathcal{A}_{[0,\\infty)} \\subset \\mathcal{D}_{\\ge}, \\qquad\n\\mathcal{D}_{<} \\cap \\mathcal{D}_{\\ge} = \\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0}.\n\\]\nHere we used for \nthe first inclusion that $\\tilde{\\alpha}_0 = \\gamma_0 \\circ \\tilde{\\beta}_0$ and thus \n$\\tilde{\\alpha}_0^{-1} = \\tilde{\\beta}_0^{-1} \\circ \\gamma_0^{-1}$. The second inclusion is\nimmediate from the definitions of the von Neumann algebras. Finally, the claimed \nintersection property is readily deduced from the underlying tensor product structure. \nLet $E_{\\mathcal{D}_{<}}$ and $E_{\\mathcal{D}_{\\ge}}$ denote the $\\tilde{\\psi}$-preserving normal conditional expectations from $\\tilde{\\mathcal{M}}$ onto\n$ \\mathcal{D}_{<} $ and $ \\mathcal{D}_{\\ge}$, respectively. We observe that $E_{\\mathcal{D}_{<}} E_{\\mathcal{D}_{\\ge}} = P_{[0,0]}$ is immediately\ndeduced from the tensor product structure of the probability space $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$. But this allows us to compute\n\\begin{align*}\n P_{(-\\infty,0]}P_{[0,\\infty)} \n = P_{(-\\infty,0]} E_{\\mathcal{D}_{<}} E_{\\mathcal{D}_{\\ge}} P_{[0,\\infty)} \n = P_{(-\\infty,0]} P_{[0,0]} P_{[0,\\infty)}\n = P_{[0,0]}. \n\\end{align*}\n\\end{proof}\nWe remark that the bilateral stationary Markov process $(\\tilde{\\mathcal{M}}, \\tilde{\\psi}, \\tilde{\\alpha}_0, \\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0})$ is not minimal. \n\n\n\\subsection{Constructions of Representations of $F$ from stationary Markov processes} \\label{subsection:constr-rep-F}\nThe following theorem uses the tensor product construction of the present section to show that bilateral stationary Markov processes of tensor product type give rise to representations of $F$.\n\\begin{Theorem}\\label{theorem:TensorMarkovF}\nLet $(\\mathcal{A} \\otimes \\mathcal{C}, \\varphi \\otimes \\chi, \\gamma, \\mathcal{A} \\otimes \\mathbbm{1}_\\mathcal{C})$ be a bilateral \nstationary Markov process.\nThen there exists a probability space $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$, generating representations \n$\\tilde{\\rho}_B, \\tilde{\\rho}_M \\colon F \\to \\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ and an embedding \n$\\kappa \\colon (\\mathcal{A} \\otimes \\mathcal{C}, \\varphi \\otimes \\chi) \\to (\\tilde{\\mathcal{M}},\\tilde{\\psi})$ such that\n\\begin{enumerate}\n \\item $\\kappa (\\mathcal{A} \\otimes \\mathbbm{1}) = \\tilde{\\mathcal{M}}^{\\tilde{\\rho}_B(g_0)}$,\n \\item $\\gamma^n|_{\\mathcal{A}\\otimes \\mathbbm{1}_{\\mathcal{C}}} = \\kappa^* \\tilde{\\rho}_M(g_0^n)\\kappa|_{\\mathcal{A} \\otimes \\mathbbm{1}_\\mathcal{C}}$ \n\tfor all $n \\in \\mathbb{N}_0$. \n\\end{enumerate}\n\t\n\\end{Theorem}\n\\begin{proof} \nWe take \n\\[\n\t(\\tilde{\\mathcal{M}}, \\tilde{\\psi}) \n\t:= \\big(\\mathcal{A} \\otimes \\mathcal{C}^{\\otimes_{\\mathbb{N}_0^2}} , \\varphi \\otimes \\chi^{\\otimes_{\\mathbb{N}_0^2}}\\big)\n\\]\nand construct two representations of the Thompson group $F$ as obtained in Propositions \\ref{proposition:rep-Fbeta} and \\ref{proposition:rep-Falpha}. That is, we define the representation $\\tilde{\\rho}_B \\colon F \\to\n\\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ as $\\tilde{\\rho}_B(g_n):=\\tilde{\\beta}_n$ and the representation $\\tilde{\\rho}_M \\colon F \\to\n\\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ as $\\tilde{\\rho}_M(g_n):=\\tilde{\\alpha}_n$ with $\\tilde{\\alpha}_0 = \\gamma_0 \\circ \\tilde{\\beta}_0$ and $\\tilde{\\alpha}_n = \\tilde{\\beta}_n$ for $n \\ge 1$. We remind that $\\gamma_0$ is the natural extension of $\\gamma$ to an automorphism on $(\\tilde{\\mathcal{M}}, \\tilde{\\psi})$.\nLet $\\kappa$ be the natural\nembedding of $(\\mathcal{A} \\otimes \\mathcal{C}, \\varphi \\otimes \\chi)$ into $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$. By Proposition \\ref{proposition:MarkovTwoReps2}, $(\\tilde{\\mathcal{M}},\\tilde{\\psi},\\tilde{\\alpha}_0,\\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0})$ is a\nnoncommutative stationary Markov process, and $\\kappa(\\mathcal{A}\\otimes \\mathbbm{1}_{\\mathcal{C}})=\\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0}$.\n\nWe are given that $(\\mathcal{A} \\otimes \\mathcal{C}, \\varphi \\otimes \\chi, \\gamma, \\mathcal{A} \\otimes \\mathbbm{1}_\\mathcal{C})$ is a\nstationary Markov process, hence by Proposition \\ref{proposition:dilation}, we get \n\\begin{equation}\\label{eq:Markv1} \n (\\gamma|_{\\mathcal{A}\\otimes \\mathbbm{1}_{\\mathcal{C}}})^n = \\gamma^n|_{\\mathcal{A}\\otimes \\mathbbm{1}_{\\mathcal{C}}}, \n\\end{equation}\nfor $n\\in \\mathbb{N}_0$. It is easy to check that, for all $a\\in \\mathcal{A}$, \n\\[\n\\kappa^* \\tilde{\\alpha}_0\\kappa(a\\otimes \\mathbbm{1}_{\\mathcal{C}})= \\kappa^* \\tilde{\\alpha}_0 \\left( a \\otimes \\mathbbm{1}_{\\mathcal{C}}^{\\otimes_{\\mathbb{N}_0}} \\otimes \\mathbbm{1}_{\\mathcal{C}}^{\\otimes_{\\mathbb{N}_0}} \\cdots \\right)\n= \\kappa^*\\left(\n\\begin{array}{ccccccccc}\n& & \\vdots& & && & &\\\\\n& & \\otimes &&& & & &\\\\\n& & \\mathbbm{1}_{\\mathcal{C}} &&& & & &\\\\\n& & \\otimes &&& & & &\\\\\n& & \\mathbbm{1}_{\\mathcal{C}} &&& & & &\\\\\n& & \\otimes &&& & & &\\\\\n\\gamma (a & \\otimes & \\mathbbm{1}_{\\mathcal{C}}) \n&\\otimes &\\mathbbm{1}_{\\mathcal{C}}^{\\otimes_{\\mathbb{N}_0}} & \\otimes & \\mathbbm{1}_{\\mathcal{C}}^{\\otimes_{\\mathbb{N}_0}} &\\cdots& \\end{array} \\right)\n= \\gamma (a \\otimes \\mathbbm{1}_{\\mathcal{C}}).\n\\] \nHence, $\\kappa^* \\tilde{\\alpha}_0 \\kappa|_{\\mathcal{A}\\otimes \\mathbbm{1}_\\mathcal{C}}=\\gamma|_{\\mathcal{A}\\otimes \\mathbbm{1}_{\\mathcal{C}}}$. As\n$\\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0}=\\kappa(\\mathcal{A}\\otimes \\mathbbm{1}_\\mathcal{C})$, the stationary Markov process \n$(\\tilde{\\mathcal{M}},\\tilde{\\psi},\\tilde{\\alpha}_0,\\tilde{\\mathcal{M}}^{\\tilde{\\beta}_0})$ has the transition operator \n$\\gamma|_{\\mathcal{A}\\otimes \\mathbbm{1}_{\\mathcal{C}}}$. We once again appeal to Proposition \\ref{proposition:dilation} to get\n\\begin{equation}\\label{eq:Markv2} \n(\\gamma|_{\\mathcal{A}\\otimes \\mathbbm{1}_{\\mathcal{C}}})^n=\\kappa^* \\tilde{\\alpha}_0^n \\kappa|_{\\mathcal{A}\\otimes \\mathbbm{1}_\\mathcal{C}.}\n\\end{equation}\nCombining \\eqref{eq:Markv1} and \\eqref{eq:Markv2} completes the proof of the theorem.\n\\end{proof}\n\n\n\\subsection{The Classical Case} \\label{subsection:constr-classical}\n\nWe state a result of K\\\"ummerer that provides a tensor dilation of any Markov map on a commutative von Neumann algebra. This will allow us to obtain a representation of $F$ as in Theorem \\ref{theorem:TensorMarkovF}.\n \n\\begin{Notation} \\normalfont\nThe (non)commutative probability space $(\\mathcal{L}, \\operatorname{tr}_\\lambda)$\nis given by the Lebesgue space of essentially bounded functions $\\mathcal{L} := L^\\infty([0,1],\\lambda)$ \nand $\\operatorname{tr}_\\lambda := \\int_{[0,1]} \\cdot\\, d\\lambda$ as the faithful normal state on $\\mathcal{L}$. \nHere $\\lambda$ denotes the Lebesgue measure on the unit interval $[0,1] \\subset \\mathbb{R}$. \n\\end{Notation}\n\\begin{Theorem}[{\\cite[4.4.2]{Ku86}}] \\label{theorem:kuemmerer-twosided}\nLet $R$ be a $\\varphi$-Markov map on $\\mathcal{A}$, where $\\mathcal{A}$ is a commutative von \nNeumann algebra with separable predual. Then there exists \n$\\gamma \\in \\operatorname{Aut}(\\mathcal{A} \\otimes \\mathcal{L}, \\varphi \\otimes \\operatorname{tr}_\\lambda)$ \nsuch that $(\\mathcal{A}\\otimes \\mathcal{L}, \\varphi\\otimes \\operatorname{tr}_{\\lambda}, \\gamma, \\iota_0)$ is a Markov (tensor)\ndilation of $R$. That is, $(\\mathcal{A}\\otimes \\mathcal{L}, \\varphi\\otimes \\operatorname{tr}_{\\lambda}, \\gamma, \\mathcal{A}\\otimes\n\\mathbbm{1}_{\\mathcal{L}})$ is a stationary Markov process, and for all $n \\in \\mathbb{N}_0$, \n\\[\nR^n = \\iota_0^* \\, \\gamma^n \\iota_0,\n\\]\nwhere $\\iota_0 \\colon (\\mathcal{A},\\varphi) \\to (\\mathcal{A} \\otimes \\mathcal{L}, \\varphi \\otimes \\operatorname{tr}_\\lambda)$\ndenotes the canonical embedding $\\iota_0(a) = a \\otimes \\mathbbm{1}_\\mathcal{L}$ such\nthat $E_0 := \\iota_0 \\circ \\iota_0^* $ is the $\\varphi \\otimes \\operatorname{tr}_\\lambda$-preserving \nnormal conditional expectation from $\\mathcal{A} \\otimes \\mathcal{L}$ onto $\\mathcal{A} \\otimes \\mathbbm{1}_{\\mathcal{L}}$.\n\\end{Theorem}\n\n\n\n\\begin{Theorem}\\label{theorem:F-gen-compression}\nLet $(\\mathcal{A},\\varphi)$ be a probability space where $\\mathcal{A}$ is commutative with separable predual,\nand let $R$ be a $\\varphi$-Markov map on $\\mathcal{A}$. There exists a probability space $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$,\ngenerating representations $\\tilde{\\rho}_B, \\tilde{\\rho}_M \\colon F \\to \\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$, and an embedding \n$\\iota \\colon (\\mathcal{A}, \\varphi) \\to (\\tilde{\\mathcal{M}},\\tilde{\\psi})$ such that\n\\begin{enumerate}\n\\item \n$\\iota(\\mathcal{A}) = \\tilde{\\mathcal{M}}^{\\tilde{\\rho}_B(g_0)}$,\n\\item\n$R^n = \\iota^* \\tilde{\\rho}_M(g_0^n)\\iota $ \nfor all $n \\in \\mathbb{N}_0$. \t\n\\end{enumerate}\t\n\\end{Theorem}\n\\begin{proof}\nBy Theorem \\ref{theorem:kuemmerer-twosided}, there exists $\\gamma\\in \\operatorname{Aut}(\\mathcal{A}\\otimes\n\\mathcal{L},\\varphi\\otimes \\operatorname{tr}_{\\lambda})$ such that $(\\mathcal{A}\\otimes \\mathcal{L}, \\varphi\\otimes\n\\operatorname{tr}_{\\lambda}, \\gamma, \\mathcal{A}\\otimes \\mathbbm{1}_{\\mathcal{L}})$ is a stationary Markov process. By Theorem \\ref{theorem:TensorMarkovF},\nthere exists a probability space $(\\tilde{\\mathcal{M}},\\tilde{\\psi})$, generating representations \n$\\tilde{\\rho}_B, \\tilde{\\rho}_M \\colon F \\to \\operatorname{Aut}(\\tilde{\\mathcal{M}},\\tilde{\\psi})$ and an embedding \n$\\kappa \\colon (\\mathcal{A} \\otimes \\mathcal{L}, \\varphi \\otimes \\chi) \\to (\\tilde{\\mathcal{M}},\\tilde{\\psi})$ such that\n$\\kappa(\\mathcal{A} \\otimes \\mathbbm{1}_{\\mathcal{L}}) = \\tilde{\\mathcal{M}}^{\\tilde{\\rho}_B(g_0)}$ and $\\gamma^n |_{\\mathcal{A} \\otimes \\mathbbm{1}_{\\mathcal{L}}} = \\kappa^* \\tilde{\\rho}_M(g_0^n) \\kappa |_{\\mathcal{A} \\otimes \\mathbbm{1}_\\mathcal{L}}$. The proof is completed by taking $\\iota : = \\kappa \\circ \\iota_0$, where $\\iota_0 \\colon (\\mathcal{A},\\varphi) \\to (\\mathcal{A} \\otimes \\mathbbm{1}_{\\mathcal{L}}, \\varphi \\otimes \\operatorname{tr}_\\lambda)$\ndenotes the canonical embedding $\\iota_0(a) = a \\otimes \\mathbbm{1}_\\mathcal{L}$.\n\\end{proof}\n\n\\section*{Acknowledgements}\n\\label{section:acknowledgements}\nThe second author was partially supported by a Government of\nIreland Postdoctoral Fellowship (Project ID: GOIPD\/2018\/498). \nBoth authors acknowledge several helpful discussions\nwith B.~V.~Rajarama Bhat in an early stage of this project. \nAlso the first author would like to thank Persi Diaconis, Gwion Evans, Rolf Gohm, Burkhard K\\\"ummerer \nand Hans Maassen for several fruitful discussions on Markovianity.\nBoth authors thank the organizers of the conference\n\\emph{Non-commutative algebra, Probability and Analysis in Action}\nheld at Greifswald in September 2021 in honour of Michael\nSch\\\"urmann.\n\\label{section:bibliography}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\nRecursive algorithms are often the most efficient technique \nfor calculating gauge theory amplitudes, as ideally information \nis maximally recycled \\cite{Britto:2004ap,Berends:1987me,Parke:1986gb}. \nIn recent years recursive techniques have become a major component for \nevent simulation at the LHC, for tree-level generation \nof multi-jet events and part of the vast improvement in \nNLO calculations at higher multiplicity \n\\cite{Gleisberg:2008fv,Badger:2010nx,\nBern:1994cg,Berger:2008sj}. The \nirreducible complexity of full-matrix \nelements limit computations of final-state partons to a fairly modest number \n(typically $n \\le 10$ at LO, $n \\le 5$ at NLO), which \nin the hard and widely separated regime \nmeets essential experimental demand \\cite{Berger:2010zx,Ita:2011wn,Aad:2013ysa}. \nHowever, for the logarithmically enhanced sector of soft and collinear radiation, \ngenerating high multiplicity is crucial and in practice proceeds through parton shower \nMonteCarlo~\\cite{pythia,herwig,sherpa}. \n\nThis paper introduces a simple technique for recursively \nextracting logarithmic coefficients of $n$-jet \nrates. We emphasize that these coefficients are a mere \nskeleton of the complete (even tree-level matrix element) calculation, but \nour goal here is to explore the high multiplicity regime. \nWe find simple implementations in both the \nexclusive-$k_t$ (here on, \\emph{Durham}) \nand generalized inclusive-$k_t$ (here on, \\emph{Generalized-$k_t$}) jet algorithms \nstarting from their respective generating functionals \n\\cite{fastjet,Catani:1991hj,Leder:1996py,Brown:1991hx,Webber:2010vz,Gerwick:2012fw}. \nThe rates we calculate correspond to expanding in powers of $(\\alpha_s \/\\pi) L^2$, where \nin the Durham algorithm $L$ is the logarithm of a dimensionless resolution scale \n$y_{cut}$, while in the Generalized-$k_t$ algorithm $L^2$ contains separate \nenergy and angular logarithms which depends on a minumum energy scale \n$E_R$ and jet radius $R$. It is known that the resolved coefficients obtained \nin this way are present in the LO matrix element calculation \\cite{Brown:1991hx}, while the \nunresolved ones start at the NLO. As we will see, since our formula allows the efficient \ncomputation of an exclusive $n$-gluon rate to arbitrarily \nhigh order in $(\\alpha_s \/\\pi)L^2$ ({\\sl i.e.} \\, including additional \nunresolved gluons), for all practice purposes these rates can \nbe thought of as resummed containing the same level of \nformal accuracy as a standard parton shower\\footnote{\nour implementation is for the double-leading-logarithms \nonly, but the extension to the relevant next-to-double-logarithms \nalso including the $g\\to q\\bar{q}$ splitting is discussed at a later stage.}. It \nis important to bear in mind however that the rate coefficients do \nnot \\emph{a priori} contain \nany notion of kinematics or recoil as in the parton shower.\n\nThere are several potential applications for our work, all generally \nfollowing from the ability to compute analytic expressions in a shower-like \napproximation. To illustrate the improvement \nwith an example, let us outline how the calculation proceeds directly \nfrom the generating functional for the exclusive rates in \n$e^+ e^- \\to \\bar{q}q + 20 \\; \\text{gluons}$ \nin the Durham algorithm. Starting from the generating \nfunctional\n\\begin{alignat}{5}\n\\Phi_{g\/q}(u,Q^2) &= u \\; \\exp \n\\left[ \\int_{Q_0^2}^{Q^2} dt \\; \\Gamma_{g\/q}(Q^2,t) \n \\left( \\Phi_g(u,t) -1 \\right) \\right] \n\\label{eq:gf_evolution}\n\\end{alignat}\nwe obtain the resummed rate differentiating $(\\Phi_q)^2$ 22 times \nwith respect to the variable $u$ at the point $u=0$. Thus we define \nthe exclusive jet fractions \n \\begin{alignat}{5}\n f_{n} = \\left. \\frac{1}{n!} \\frac{d^n}{du^n} \\Phi_q^2 \\right|_{u=0} .\n\\label{eq:def_gf}\n\\end{alignat}\nThe resulting resummed $20$-gluon expression we \nmercifully do not include, but note that it is a linear combination \nof 39,289,183 possible splitting histories\\footnote{The number of splitting \nhistories contributing to an $n$ gluon final state is the recursive number of \ninteger sub-partitions of the integer partitions of $n$.}. Obtaining a \nnumerical answer requires either a numerical evaluation of each of the \n19-dimensional integrals (38 dimensional for the generalized \nclass of $k_t$ algorithms) or for the fixed order coefficient, expanding \nthe Sudakov form factors to the appropriate order and evaluating \nthe still 19-dimensional integral analytically. In practice this procedure could \nbe optimized so that, for example, partial results for \nthe multi-dimensional integrals are recycled, \nbut it should be clear that the manipulations are extremely unwieldy. \n\nExpressing the expanded rates as the resolved and unresolved \ncoefficients \n\\begin{equation}\nP_n = \\text{Res}_n \\; + \\; \\text{URes}_n\n\\end{equation}\nwhere $\\text{Res}_n \\sim \\alpha_s^n $ and $\\text{URes}_n$ starts \nat ${\\mathcal{O}}(\\alpha_s^{n+1})$, our method allows the computation of \n$\\text{Res}_{20}$ in a matter of seconds. Once $\\text{Res}_{20}$ is \nknown, it is straight-forward to ``bootstrap\" the unresolved \ncomponents for the lower multiplcities using \nsimple identical boson (Poisson) statistics. Doing this to \nsufficiently high order, one recovers the resummed rates\\footnote{\nWe note here very explicitly that the physics in our recursive \nprescription is identical to the coherent branching formalism. In fact, we prove \nthe consistency of our method directly from the generating functional. What is \nspecial is that a simplified recursive formula allows us to study gluonic coefficients \nfor arbitrary multiplicities, in practice an order of magnitude larger than using \nconventional techniques. \n}. \n\nThe reason we are able to construct a simple recursive formula \ncomes down to a well known fact about the exponentiation of \nleading singularities in gauge theory amplitudes, namely that it is \ndetermined by the maximally non-abelian contribution \n\\cite{Frenkel:1984pz,Gatheral:1983cz} (for more recent results \nalong these lines see {\\sl e.g.} \\, Refs.~\\cite{DelDuca:2011ae,Gardi:2013ita}). \nFor our prescription, which determines the coefficients of the \nleading soft-collinear singularities in the $L\\to \\infty$ sense, the only required \nphysics input is the (coherent branching formalism analogous) maximally \nsecondary coefficient, corresponding to a string of gluons each \nemitting exactly once. This is diagrammatically encapsulated in the \nfirst moment of the generating functional \\eq{eq:gf_evolution}, and these \ncontributions are also order by order guaranteed to exponentiate. Knowing \nonly this contribution, the remainder \nof our recursive formula determines the entire leading coefficient \nusing bosonic statistics. We hope that our proof of the \nrecursive algorithm makes this point clear.\n\n\nThis paper is arranged as follows. In \\sec{sec:Rec} we introduce \nthe details of our recursive prescription for the resolved component. \nFor the sake of presentation we prove the individual steps \nonly at the end of the section. We outline the \nmethod for pure Yang-Mills in the Durham algorithm. \nAt the stated level of accuracy it is simple to \ngeneralize to arbitrary numbers of initial quarks or gluons. \nWe include the prescription for the unresolved component \nin \\sec{sec:URes}. In \\sec{sub_ex} we provide an example \nstep in the recursion for $4$-gluon emission from a $q\\bar{q}$ dipole.\nIn \\sec{sec:genkt} we summarize the small \nmodifications necessary for the \ninclusive-$k_t$ algorithm. In \\sec{sec:proof} we provide proofs for the \nindividual steps of the recursion directly from the generating \nfunctionals. We study the \ngluonic coefficients at high multiplicity in \\sec{sec:app} and discuss some \npossible applications for our computational tool. In the appendix \nwe provide the resummed 6-gluon $f_6$ contribution used to validate our algorithm.\n\n\\medskip\n\n\\section{Recursive prescription}\n\\label{Recsas}\n\\subsection{Resolved Component}\n\\label{sec:Rec}\n\nWe consider here pure Yang-Mills (YM) in the Durham \nalgorithm and start by decomposing the $n$-gluon final \nstate in terms of its \\emph{splitting history}. We differentiate these from \nFeynman diagrams by distinguishing between the emitter and \nemitted parton at each $1\\to2$ splitting. We call each splitting \ninvolving an initial parton \\emph{primary}, and any non-primary \nsplitting is termed \\emph{secondary}.\nFor fixed $n$ we write the resolved component of the \ncorresponding $n$-gluon rate from a single initiator as \n\\begin{equation}\n\\text{Res}_n \\;\\; = \\;\\; \\sum_k \\sum_{i=0}^{n-1} c^{(n)}_{ik} \n\\left(a_s C_A L^{2} \\right)^n \n\\label{defers}\n\\end{equation}\nwhere $a_s = \\alpha_S\/\\pi$, $L = \\log(1\/y_{\\text{cut}})$ and \n$c^{(n)}_{ik} > 0$. The index $i$ counts the number of secondary \nemission in a particular splitting history. \nThe sum on $k$ is over all diagrams of the same order in $i$, which \nis left implicit for the moment. Our definition ensures that every term in \n\\eq{defers} is in one-to-one correspondence with a specific \nsplitting history. However, the recursive formula for the \nresolved coefficients does not depend on the index $k$, so \nwe drop it for the time being. \n\nWe claim that given a specific subset of coefficients from multiplicities $n$ and smaller, we \ncan write a general expression for $c^{(n+1)}_{i}$, and using \\eq{defers}, \ncompute $\\text{Res}_{n+1}$. The necessary ingredients \nfor $c^{(n+1)}_{i}$ are:\n\\begin{itemize}\n\\item All of the $c^{(l)}_{l-1}$ with $l < n+1$. These \nare all of the previous coefficients highest order in secondary \nemissions, or in other \nwords containing precisely one primary splitting from the initial \nhard line. \n\\item All of the $c^{(n)}_{l-1}$ with $l -1< n$. These \nare the coefficients with at least two primary splittings only \nfrom the $n$-th coefficient. \n\\item Integer partitions of $n+1$.\\end{itemize}\nUsing these ingredients we find a simple formula for the rate coefficients. \nTo illustrate this procedure we first go through the steps in the recursive \nprescription. We provide a detailed example of one step in the recursion \nfor 4-gluon emission in Section. \\ref{sub_ex}.\n\nThe first step is to divide the coefficients into two categories\n\\begin{equation}\nc^{(n)}_{i} = c^{(n)}_{k} + c^{(n)}_{n-1}\n\\end{equation}\nThe index $i \\in (0,n-1)$, $k \\in (0,n-2)$.\nThe $c^{(n)}_{k}$ coefficients are the contributions with at least 2 primary splittings. \nEach gluonic structure is already present in the lower multiplcity coefficients, \nand can therefore be constructed by multiplying such coefficients and taking into \naccount symmetry factors (this is intuitive, although it is proven in \nSec. \\ref{sec:proof} more explicitly).\nIn contrast, the $c^{(n)}_{n-1}$ coefficients are maximally secondary with respect \nto the hard initial line. These satisfy a relatively simple recursion relation for \npromoting coefficients higher up on the emission tree\n\\begin{equation}\nc^{(n+1)}_{n} \\; =\\; \\; \\sum_{j=0}^{n-1} c^{(n)}_{j} d^{(n)} \n\\qquad \\qquad d^{(n)} = \\dfrac{(2n)!}{(2n+2)!} \\; .\n\\label{easlif}\n\\end{equation}\nDiagrammatically the $n-1$ term in \\eq{easlif} corresponds to the relation in \nFig. \\ref{ill3}.\n\\begin{figure}[t]\n\\includegraphics[width=1.0\\textwidth]{fig_1_f.pdf}\n\\caption{Illustration of the first term in \\eq{easlif}. The solid line always represents \nthe initial partons in the process, which for our current example is a single gluon.}\n\\label{ill3}\n\\end{figure}\nThe grey blob indicates that this gluon is allowed to emit an arbitrary \nnumber of times, and each emission itself may split \\emph{et cetera}. \nThe solid line will always indicate an arbitrary number and type of initial \npartons, which for this specific example we take as a single gluon.\nThe other terms in \\eq{easlif} sums over the $c^{(n)}_j$ terms not maximally \nsecondary and not representable in the relation above.\nWe see that the two step process promotes diagrams with at least two \nprimary emissions to ones on the RHS and finally to the LHS of Fig. \\ref{ill3}. \nThe origin of the specific form of \\eq{easlif} is that the prescription for \npromoting primary to secondary emission essentially involves reweighting\nby the first moment of the generating functional, which for the Durham \nalgorithm is $\\Phi'_{u=1} \\sim \\sum_{n=0}^{\\infty} (aC_A L^2)^{2n}\/(2n)!$ \\cite{Ellis:1991qj}. \nDiagrammatically, this is identical to the sum of maximally secondary \nsplitting histories.\n\nThe final step in our recursion is to generate the $c^{(n+1)}_k$ with $k < n$ \ncoefficients. It is easy to see that a recursion based solely on $c^{(n)}$ \ncoefficients is bound to fail, as the integer partition of $n$ arising at each multiplicity \nis not easily defined recursively. Instead, we compute $c^{(n+1)}_k$ by enumerating \nthe various partitions of gluons and weighting by the appropriate irreducible \nstructures $c^{(k)}_{k-1}$. Note that only the values of $c^{(k)}_{k-1}$ need \nto be stored from previous multiplicities. \nComputing $n+1$ coefficients we only require $n$ of such numbers \nmaking this step computationally manageable\\footnote{Looping over the \nvarious partitions still constitutes the most computationally intensive part \nof our algorithm.}. An additional ingredient is that \n$m$ identical structures carries a phase space factor $1\/m!$. \n\nA complete representation for this contribution is\n\\begin{equation}\nc^{(n+1)}_{k} = \\sum_{p(n)} \\frac{1}{S} \\left[ \\prod_{\\sigma_i=\\{ \\sigma_1,\\cdots\\, \\sigma_{r}\\} } \\;\nc^{(\\sigma_i)}_{\\sigma_i-1}\\right].\n\\label{c1rec}\n\\end{equation}\nwhere the sum is over integer partitions $p(n)$ of $n$ of length \n$r \\ge 2$. The product is over the individual elements of each \npartition. For example, for $n = 4$ there are 4 partitions in the \nsum $\\left\\{\\sigma_1,\\sigma_2,\\sigma_3,\\sigma_4\\right\\} = \n\\{(3,1),(2,2), (2,1,1),(1,1,1,1)\\}$. \nHere $S$ is the overall symmetry number taken as the \nproduct of identical structure phase space factors, {\\sl e.g.} \\, the \ncontribution from the (2,2) term is \n$(1\/2!)(c^{(2)}_{1})^2$.\nWe can summarize the entire recursive algorithm for the resolved \ncoefficients and the main result of this paper \n\\begin{equation}\n\\sum_{i=0}^{n} c^{(n+1)}_{i} = \\sum_{p(n)} \\frac{1}{S} \\left[ \\prod_{\\sigma_i=\\{ \\sigma_1,\\cdots\\, \\sigma_{r}\\} } \\; \nc^{(\\sigma_i)}_{\\sigma_i-1}\\right]\n\\;\\; + \\;\\;\\sum_{j=0}^{n-1} c^{(n)}_{j} d^{(n)} \n\\label{final}\n\\end{equation}\nAn example recursion for an individual diagram is given in Fig. \\ref{exred}. Note \nthat as soon as a diagram ends up in the furthest right $c^{(n+2)}_{n+1}$ class it \nremains there indefinitely. It is simple to check that our formula exhausts all \npossible splitting histories to a given multiplicity. We confirm the validity \nof \\eq{final} by comparing with a direct computation from the generating \nfunctional with up to $5$ final state gluons \\cite{Gerwick:2012hq}.\n\n\\begin{figure}[t]\n\\includegraphics[width=1.0\\textwidth]{fig_2_f.pdf}\n\\caption{Example recursion for an individual diagram. As soon as \nthe diagram ends up on the right-hand side it is repeated in the \nrecursion according to the $d$ term in \\eq{easlif}.}\n\\label{exred}\n\\end{figure}\n\n\n\\bigskip\n\n\\subsection{Unresolved Component}\n\\label{sec:URes}\n\nGiven the set of resolved coefficients up to multiplicity $n$, it is \nrelatively straight-forward to determine the unresolved coefficients \nfor lower multiplicitities also up to order $(a_s L^2)^n$. To describe these \ncoefficients we extend our notation slightly so that $c^{(l)}_{i} \\to c^{(l,n)}_{i}$ \nwhere $l$ ranges between $0, 1,\\cdots n$ and indicates the \nmultiplicity. The resolved coefficients are then $c^{(n,n)}_{i}$ \nand the unresolved are the rest. Now it should be clear that \nthe unresolved coefficients \ncome from expanding the Sudakovs beyond leading order. \nTherefore, we expect the unresolved coefficients to be related to an expanded \nexponential and most importantly, to be determined from the resolved \ncomponents at the same order.\n\nFor the simplest case of the all primary contributions we find\n\\begin{equation}\nc^{(l,n)}_{0} = (-1)^{n-l} \\frac{1}{(n-l)!} \\frac{1}{l!} \\;.\n\\label{simp_res}\n\\end{equation}\nNote that at every order the individual coefficients correctly \nsatisfy $\\sum_{l=0}^{n} c^{(l,n)}_{0} = 0$. This fact holds on \na diagram by diagram basis to all multiplicities for the exclusive rates. \nIn order to extend also to the secondary terms, the complication is that we \nneed to distinguish diagrams beyond what we have so far for the resolved \ncomponent. The additional necessary ingredient is the number of \nrepeated identical emissions in a given splitting history. \n\nIn order to proceed, let us note that due to our recursion relation \nthe resolved component $c^{(n,n)}_{j}$ of each splitting history is known, \nand can be decomposed in terms of numerical coefficients times \npowers of $c^{(1,1)}_{0}$, $c^{(2,2)}_{0}$ and $c^{(2,2)}_{1}$. \nThese are our starting conditions, which we will refer to as the \\emph{primordial \ncoefficients}. Let us denote the powers of each as $a$, $b$ and $c$ \nrespectively. Now we define \n\\begin{equation}\np = a + 2b + c\n\\label{nuaasp}\n\\end{equation}\nand claim that for the unresolved components $c^{(l,n)}_{j}$ of this \nparticular diagram, that $l \\in (n, n-1, \\cdots , n-p)$ with coefficients \ngiven by \n\\begin{equation}\nc^{(l,n)}_{j} = (-1)^{n-l} \\frac{1}{(n-l)!}\\frac{ p! } {(p-(n-l))!} c^{(n,n)}_{j} \\, .\n\\label{fin_un_r}\n\\end{equation}\nFor $l1$).\n\n\\medskip\n\n\nGiven a tree $\\bf t\\in T^{\\infty}$, for $a\\in {\\mbb N}$, define $r_a{\\bf t}=\\{\\nu\\in{\\bf\nt}:|\\nu|\\leq a\\}$. Then $r_a{\\bf t}$ is a finite tree whose\ncontour function is denoted by $\\{C_a({\\bf t}, s):s\\geq0\\}$. For $k\\geq0$, we denote by $Y_k({\\bf t})$ the number of individuals in generation $k$: $$Y_k({\\bf t})=\\#\\{\\nu\\in{\\bf t}: |\\nu|=k\\}, \\quad k\\geq 0.$$\n\n\nGiven a probability measure $p=\\{p_k: k\\geq0\\}$ with\n$\\sum_{k\\geq0}kp_k >1.$ Let ${\\cal G}^p$ be a super-critical Galton-Watson tree with offspring distribution $p$.\n Then ${\\P}(\\#{\\cal G}^p<\\infty)=f(p)$, where $f(p)$ is the minimal solution of the following equation of $s$:\n$$\ng^p(s):=\\sum_{k\\geq0}s^kp_k=s, \\quad 0\\leq s\\leq1.\n$$\nLet $q=\\{q_k: k\\geq0\\}$ be another probability distribution such that\n$$q_k=f(p)^{k-1}p_k, \\text{ for } k\\geq1,\\text{ and } q_0=1-\\sum_{k\\geq1}q_k.$$\nThen $\\sum_{k\\geq0}kq_k<1$.\nLet ${\\cal G}^q$ be a subcritical GW tree\nwith offspring distribution $q$. Note that $\\l(Y_k({\\cal G}^q), k\\geq0\\r)$ is a Galton-Watson process starting from a single ancestor with offspring distribution $q$. We first present a simple lemma.\n\n\\begin{lemma}\\label{Lem:GWGir} Let $F$ be any nonnegative measurable function on $\\bf T$. Then for ${\\bf t}\\in {\\bf T}$,\n\\beqlb\\label{GWGirb}\n{\\mP}\\l[{\\cal G}^p={\\bf t}\\r]=f(p){\\mP}\\left[{\\cal\nG}^q={\\bf t}\\right]\n\\eeqlb\nand for any $a\\in \\mbb N$,\n\\beqlb\\label{GWGira}\n{\\mE}\\l[F(r_a{\\cal G}^p)\\r]={\\mE}\\left[f(p)^{1-Y_{a}({\\cal G}^q)}F(r_a{\\cal\nG}^q)\\right].\n\\eeqlb\n\\end{lemma}\n\\proof (\\ref{GWGirb}) is just (4.8) in \\cite{adh:pgwttvmp}. The proof of (\\ref{GWGira}) is straightforward. Set $\\text{gen}(a, {\\bf t})=\\{\\nu\\in{\\bf t}: |\\nu|=a\\}$.\nBy (\\ref{ForGuT}), for $\\bf t\\in T$, we have\n$$\\P(r_a{\\cal G}^p=r_a{\\mathbf t})=\\prod_{\\nu\\in r_a{\\mathbf t}\\setminus{\\cal L}(r_a{\\mathbf t})}p_{k_{\\nu}{\\mathbf t}}\\cdot \\prod_{\\nu\\in{\\cal L}(r_a{\\mathbf t})\\setminus \\text{gen}(a, {\\bf t})}p_0 $$\nand then\n$$\n\\P(r_a{\\cal G}^p=r_a{\\mathbf t})=f(p)^{-\\left(\\sum_{\\nu\\in r_a{\\mathbf t}\\setminus{\\cal\nL}(r_a{\\mathbf t})}(k_{\\nu}{\\mathbf\nt}-1)\\right)}\\left(\\frac{p_0}{q_0}\\right)^{\\#{\\cal L}(r_a{\\mathbf\nt})-Y_{a}({\\bf t})}\\P(r_a{\\cal G}^q=r_a{\\mathbf t}).\n$$\nWe also have\n\\begin{equation}\\label{eq:pzero}\nq_0=1-\\sum_{k=1}^{\\infty}f(p)^{k-1}p_k=1+p_0\/f(p)-g^p(f(p))\/f(p)=p_0\/f(p).\n\\end{equation}\nThen (\\ref{GWGira}) follows from the fact that given a tree\n$\\mathbf{t}\\in \\mathbf{T}$,\n$$\\#\\mathcal{L}({\\mathbf t})=1+\\sum_{\\nu\\in {\\mathbf t}\\setminus{\\cal\nL}({\\mathbf t})}(k_{\\nu}{\\mathbf t}-1).$$\n \\qed\n\n\\begin{remark}\\label{Rem: dis}It is easy to see that $(f(p)^{-Y_{n}({\\cal G}^q)}, n\\geq0)$ is a martingale with respect to ${\\cal F}_n=\\sigma(r_n {\\cal G}^q).$ In fact, by the branching property, we have for all $0\\leq m\\leq n$,\n\\beqlb\\label{mart}\n{\\mbb E}\\l[f(p)^{-Y_{n}({\\cal G}^q)}\\bigg{|}r_m{\\cal G}^q\\r]=\\l[{\\mbb E}\\l[f(p)^{-Y_{n-m}({\\cal G}^q)}\\r]\\r]^{Y_{m}({\\cal G}^q)}=f(p)^{-Y_{m}({\\cal G}^q)}.\n\\eeqlb\n\\end{remark}\nSince contour functions code finite trees in $\\mathbf{T}$, we immediately get the following result.\n\\begin{corollary}\\label{cor:con}\nFor any nonnegative measurable function $F$ on $ C({\\mbb R}^+, {\\mbb R}^+)$ and $a\\in\\mbb N$,\n\\beqlb\\label{cor:cona}{\\mE}\\l[F\\l( C_{a}({\\cal G}^p,\\cdot)\\r)\\r]={\\mE}\\left[f(p)^{1-Y_{a}({\\cal G}^q)}F\\l( C_{a}({\\cal G}^q,\\cdot)\\r)\\right].\n\\eeqlb\n\\end{corollary}\n\n\nLemma \\ref{Lem:GWGir} could be regarded as a discrete counterpart of the martingale transformation for L\\'evy trees in Section 4 of \\cite{ad:ctvmp}; see also (\\ref{Gircon}) below in this paper. To see this, we need to introduce continuous state branching processes and L\\'evy trees.\n\n\n\\subsection{Continuous State Branching Processes}\\label{Secbm}\nLet $\\alpha\\in \\mbb R$, $\\beta\\ge 0$ and $\\pi$ be a $\\sigma$-finite measure\non $(0,+\\infty)$ such that $\\int_{(0,+\\infty)}(1\\wedge\nr^2)\\pi(dr)<+\\infty$.\nThe branching mechanism $\\psi$ with characteristics $(\\alpha,\\beta,\n\\pi)$ is\ndefined by:\n\\begin{equation}\n \\label{eq:psi}\n\\psi(\\lambda)=\\alpha\\lambda+\\beta\\lambda^2\n+\\int_{(0,+\\infty)}\\left(e^{-\\lambda\n r}-1+\\lambda r1_{\\{r<1\\}}\\right)\\pi(dr).\n\\end{equation}\nA c\\`ad-l\\`ag $\\mbb R^+$-valued Markov process $Y^{\\psi,x}=(Y_t^{\\psi, x}, t\\geq0)$ started at $x\\geq0$ is called $\\psi$-continuous state branching process ($\\psi$-CSBP in short) if its transition kernels satisfy\n$$\nE[e^{-\\lambda Y^{\\psi,x}_t}]=e^{-xu_t(\\lambda)},\\quad t\\geq0,\\, \\lambda>0,\n$$\nwhere $u_t(\\lambda)$ is the unique nonnegative solution of\n$$\n\\frac{\\partial u_t(\\lz)}{\\partial t}=-\\psi(u_t(\\lz)),\\quad u_0(\\lz)=\\lz.\n$$\n\n\n\\noindent $\\psi$ and $Y^{\\psi,x}$ are said to be sub-critical (resp. critical, super-critical) if $\\psi'(0+)\\in(0,+\\infty)$ (resp. $\\psi'(0+)=0, \\psi'(0+)\\in[-\\infty,0)$). We say that $\\psi$ and $Y^{\\psi,x}$ are (sub)critical if they are critical or sub-critical.\n\n\\bigskip\n\n\\noindent In the sequel of this paper, we will assume the following assumptions on $\\psi$ are in force:\n\\begin{enumerate}\n\n\n\\item[(H1)] The Grey condition holds:\n\\begin{equation}\n\\int^{+\\infty}_1\\frac{d\\lambda}{\\psi(\\lambda)}<+\\infty.\n\\end{equation}\nThe Grey condition is equivalent to the a.s. finiteness of the\nextinction time of the corresponding CSBP. This assumption is used to\nensure that the corresponding height process is continuous.\n\n\n\\item[(H2)] The branching mechanism $\\psi$ is conservative: for all\n $\\varepsilon>0$,\n\\[\n\\int_{(0, \\varepsilon]} \\frac{d\\lambda}{|\\psi(\\lambda)|}=+\\infty.\n\\]\nThe conservative assumption is equivalent to the finiteness of the\ncorresponding CSBP at all time.\n\n\\end{enumerate}\nLet us remark that (H1) implies $\\beta>0$ or $\\int_{(0,1)} r \\pi(dr)=+\\infty $. And if $\\psi$ is (sub)critical, then we must have $\\alpha-\\int_{(1,{+\\infty})}r\\pi(dr)\\in[0,+\\infty).$ We end this subsection by collecting some results from \\cite{ad:ctvmp}.\n\n\\bigskip\n\n\\noindent Let $(X_t,t\\geq0)$ denote the canonical process of ${\\cal D}:=D({\\mbb R}^+, \\mbb R)$. Let $P_x^{\\psi}$ be the probability measure on $D({\\mbb R}^+, \\mbb R)$ such that $P_x^{\\psi}(X_0=x)=1$ and $(X_t,t\\geq0)$ is a $\\psi$-CSBP under $P_x^{\\psi}$.\n\n\\begin{lemma}\\label{AD2.2}(Lemma 2.4 in \\cite{ad:ctvmp})\nAssume that $\\psi$ is supercritical satisfying (H1) and (H2).\nThen\n\\begin{enumerate}\n\n\\item[(i)] $P_x^{\\psi}$-a.s. $X_{\\infty}=\\lim_{t\\rar\\infty}X_t$ exists, $X_{\\infty}\\in\\{0,\\infty\\}$ and\n $$\n P_x^{\\psi}(X_{\\infty}=0)=e^{-\\gamma x},\n $$\nwhere $\\gamma$ is the largest root of $\\psi(\\lz)=0$.\n\\item[(ii)] For any nonnegative random variable measurable w.r.t. $\\sigma(X_t, t\\geq0)$, we have\n $$\n E_x^{\\psi}[W|X_{\\infty}=0]=E_x^{\\psi_{\\gamma}}[W],\n $$\nwhere $\\psi_{\\gamma}(\\cdot)=\\psi(\\cdot+\\gamma).$\n\n\\end{enumerate}\n\\end{lemma}\n\\subsection{Height processes}\\label{Secbpheight}\n\n\nTo code the genealogy of the $\\psi$-CSBP, Le Gall and Le Jan \\cite{[LeLa98]} introduced the so-called height process, which is a functional of the L\\'{e}vy process with Laplace exponent $\\psi$; see also Duquesne and Le Gall \\cite{[DL02]}.\n\nAssume that $\\psi$ is (sub)critical satisfying (H1). Let ${\\mbb P}^{\\psi}$ be a probability measure on $\\cal D$ such that under ${\\mbb P}^{\\psi}$, $X=(X_t,t\\geq0)$ is a L\\'{e}vy process with\nnonnegative jumps and with Laplace exponent $\\psi$:\n $$\n {\\mbb E}^{\\psi}\\Big[e^{-\\lz X_t}\\Big]=e^{t\\psi(\\lz)},\\quad t\\geq0,\\,\\lz\\geq0.\n $$\n\n The so-called continuous-time \\textit{height process} denoted by $H$ with sample path in $C(\\mbb R^+,\\mbb R^+)$ is defined for every $t\\geq0$ by:\n $$\n H_t=\\liminf_{\\ez\\rightarrow0}\\frac{1}{\\ez}\\int_0^t1_{\\{X_s0$, define $$T_x=\\inf\\{t\\geq0: I_t\\leq -x\\},$$\n where $I_t=\\inf_{0\\leq r\\leq t}X_r$. By Theorem 1.4.1 of \\cite{[DL02]}, the process\n$(L^a_{T_x},a\\geq0)$ under $\\mP^{\\psi}$ is distributed as a $\\psi$-CSBP started at $x$.\n\nLet ${{\\cal C}:=C({\\mbb R}^+, {\\mbb R}^+)}$ be the space of nonnegative continuous functions on ${\\mbb R}^+$ equipped with the supmum norm. Denote by $(e_t,t\\geq0)$ the canonical process of ${\\cal C}$. Denote by $\\mP_x^{\\psi}$ the law of $(H_{t\\wedge T_x}, t\\geq0)$ under $\\mP^{\\psi}$. Then $\\mP_x^{\\psi}$ is a probability distribution on $\\cal C$. Set $Z_a=L_{T_x}^a$ under $\\mP_x^{\\psi}$, i.e.,\n$$\n\\lim_{\\ez\\rightarrow0}\\sup_{a\\geq\\ez}\\mE_x^{\\psi}\\left[\\sup_{s\\leq\nt}\\left|\\ez^{-1}\\int_0^s 1_{\\{a-\\ez0$, define\n$$\n\\Gamma_{f,a}(x)=\\int_0^x1_{\\{f(t)\\leq a\\}}dt,\\quad \\Pi_{f,a}(x)=\\inf\\{r\\geq0: \\Gamma_{f,a}(r)>x\\},\\quad x\\geq0,\n$$\nwhere we make the convention that $\\inf\\emptyset=+\\infty.$ Then we define\n$$\n\\pi_a(f)(x)=f(\\Pi_{f,a}(x)),\\quad f\\in {C({\\mbb R}^+, {\\mbb R}^+)}, \\, x\\geq0.\n$$\nNote that $\\pi_a\\circ\\pi_b=\\pi_a$ for $0\\leq a\\leq b.$ Let $\\psi$ be a super-critical branching\nmechanism satisfying (H2).\nDenote by $q^*$ the unique (positive) root of $\\psi'(q)=0$. Then the\nbranching mechanism $\\psi_q(\\cdot)=\\psi(\\cdot+q)-\\psi(q)$ is critical for\n$q=q^*$ and sub-critical for $q>q^*$. We also have $\\gamma>q^*$. Because super-critical branching processes may have infinite mass,\nin \\cite{ad:ctvmp} it was cut at a given level to construct the corresponding\ngenealogical continuum random tree. Define\n$$\nM_a^{\\psi_q, -q}=\\exp\\left\\{-qZ_0+qZ_a+\\psi(q)\\int_0^aZ_sds\\right\\},\\quad a\\geq0.\n$$\nDefine a filtration ${\\cal H}_a=\\sigma(\\pi_a(e))\\vee {\\cal N}$, where $\\cal N$ is the class of ${\\mbb P}_x^{\\psi_q}$ negligible sets. By (\\ref{local}), we have $M^{\\psi_q,-q}$ is ${\\cal H}$-adapted.\n\\begin{theorem}(Theorem 2.2 in \\cite{ad:ctvmp}) For each $q\\geq q^*$, $M^{\\psi_q,-q}$ is an ${\\cal H}$-martingale under ${\\mbb P}_x^{\\psi_q}.$\n\\end{theorem}\n\\proof See Theorem 2.2 and arguments in Section 4 in \\cite{ad:ctvmp}. \\qed\n\n\nDefine the distribution $\\mP_x^{\\psi,a}$\nof the $\\psi$-CRT cut at level $a$ with initial mass $x$, as\nthe distribution of $\\pi_a(e)$ under $M_a^{\\psi_q,-q}d\\mP_x^{\\psi_q}$:\nfor any\nnon-negative measurable function $F$ on $C({\\mbb R}^+, {\\mbb R}^+)$,\n \\beqlb\\label{supercriticalP}\n \\mE_x^{\\psi,a}[F(e)]&=&\\mE_x^{\\psi_q}\\Big[M_a^{\\psi_q,-q}F(\\pi_a(e))\\Big],\n\n\n\n\n \\eeqlb\nwhich do not depend on the choice of $q\\geq q^*$; see Lemma 4.1 of\n\\cite{ad:ctvmp}. Taking $q=\\gamma$ in (\\ref{supercriticalP}), we see\n\\beqlb\\label{Gircon} \\mE_x^{\\psi,a}[F(e)]=\\mE_x^{\\psi_{\\gamma}}\\Big[e^{-\\gamma x+\\gamma Z_a}F(\\pi_a(e))\\Big]\\eeqlb\nand $(e^{-\\gamma x+\\gamma Z_a}, a\\geq0)$ under ${\\mP}_x^{\\psi_{\\gamma}}$ is an ${\\cal H}$-martingale with mean 1.\n\\begin{remark}\\label{lem: defsuper} $\\mP_x^{\\psi,a}$ gives the law of super-critical L\\'evy trees truncated at height $a$. Then the law of the whole tree could be defined as a projective limit. To be more precise,\n let $\\cal W$ be the set of $C({\\mbb R}^+, {\\mbb R}^+)$-valued functions endowed with the $\\sigma$-field generated by the coordinate maps. Let $(w^a,a\\geq0)$ be the canonical process on $\\cal W$. Proposition 4.2 in \\cite{ad:ctvmp} proved that there exists a probability measure $\\bar{\\mP}_x^{\\psi}$\non $\\W$ such that for every\n$a\\geq0$, the distribution of $w^a$ under $\\bar{\\mP}_x^{\\psi}$\n is $\\mP_x^{\\psi,a}$\n\n and for $0\\leq a\\leq b$\n$$\n\\pi_a(w^b)=\\pi_a\\quad \\bar{\\mP}_x^{\\psi}-a.s\n$$\n\\end{remark}\n\n\\begin{remark}\nThe above definitions of $ \\mE_x^{\\psi,a}$ and $\\bar{\\mP}_x^{\\psi}$\nare also valid for (sub)critical branching mechanisms.\n\\end{remark}\n\n\n\n\n\n\n\\section{From Galton-Watson forests to L\\'evy forests}\\label{Secmain}\nComparing (\\ref{cor:cona}) with (\\ref{Gircon}), one can see that the l.h.s. are similar. The super-critical trees (discrete or continuum) truncated at height $a$ are connected to sub-critical trees, via a martingale transformation. Motivated by Duquesne and Le Gall's work \\cite{[DL02]}, which studied the scaling limit of (sub)critical trees, one may hope that the laws of suitably rescaled super-critical Galton-Watson trees truncated at height $a$ could converge to the law defined in (\\ref{Gircon}). Our main result, Theorem \\ref{Main}, will show it is true.\n\n\nFor each integer $n\\geq1$ and real number $x>0$,\n\\begin{itemize}\n\\item\n Let $[x]$ denote the integer part of $x$ and let $\\lceil x\\rceil$ denote the minimal integer which is larger than $x$.\n\n\\item\n Let\n$p^{(n)}=\\{p^{(n)}_k: k=0,1,2,\\ldots\\}$ be a probability\nmeasure on $\\mbb N$.\n\n\\item Let\n${\\cal G}^{p^{(n)}}_1, {\\cal G}^{p^{(n)}}_2, \\ldots, {\\cal G}^{p^{(n)}}_{[nx]}$ be independent Galton-Watson trees with the same offspring distribution $p^{(n)}.$\n\n\\item Define $Y_k^{p^{(n)},x}=\\sum_{i=1}^{[nx]}Y_k({\\cal G}_{i}^{p^{(n)}})$. Then $Y^{p^{(n)},x}=(Y_k^{p^{(n)},x},k=0,1,\\ldots)$ is a Galton-Watson process with offspring distribution $p^{(n)}$ starting from $[nx]$.\n\n\\item For $a\\in \\mbb N$, define the contour function of trees cut at level $a$, $C_a^{p^{(n)},x}=(C_a^{p^{(n)},x}(t), t\\geq0)$, by concatenating the contour functions $(C(r_a{\\cal G}_{1}^{p^{(n)}}, t), t\\in[0,2\\#r_a{\\cal G}_{1}^{p^{(n)}}]),\n \\ldots, (C(r_a{\\cal G}_{[nx]}^{p^{(n)}}, t), t\\in[0,2\\#r_a{\\cal G}_{[nx]}^{p^{(n)}}])$ and setting $C_a^{p^{(n)},x}(t)=0$ for $t\\geq 2\\sum_{i=1}^{[nx]}\\#r_a{\\cal G}_{i}^{p^{(n)}}$.\n\n\\item For $a\\in{\\mbb R}^+$, define $C_a^{p^{(n)},x}=\\pi_a(C_{\\lceil a\\rceil}^{p^{(n)},x})$.\n\n\n\\item If $\\sum_{k\\geq0}kp_k^{(n)}\\leq 1$, then we define the contour function $C^{p^{(n)},x}=(C^{p^{(n)},x}(t), t\\geq0)$ by concatenating the contour functions $(C({\\cal G}_{1}^{p^{(n)}}, t), t\\in[0,2\\#{\\cal G}_{1}^{p^{(n)}}]), \\ldots, (C({\\cal G}_{[nx]}^{p^{(n)}}, t), t\\in[0,2\\#{\\cal G}_{[nx]}^{p^{(n)}}])$ and setting $C^{p^{(n)},x}(t)=0$ for $t\\geq 2\\sum_{i=1}^{[nx]}\\#{\\cal G}_{i}^{p^{(n)}}$.\n \\end{itemize}\n\n\n\n\\noindent\nLet $(\\gamma_n, n=1,2,\\ldots)$ be a nondecreasing sequence of positive numbers converging to $\\infty$. Define\n$$\nG^{(n)}(\\lambda)=n\\gamma_n[g^{p^{(n)}}(e^{-\\lambda\/n})-e^{-\\lambda\/n}],$$ where $g^{p^{(n)}}$ is the generating function of $p^{(n)}$,\nand define a probability measure on $[0,\\infty)$ by\n$$\\mu^{(n)}\\l(\\frac{k-1}{n}\\r)=p_k^{(n)},\\quad k\\geq0.$$\nWe then present the following statements. By $\\overset{(d)}{\\rightarrow}$ we mean convergence in distribution.\n\\begin{enumerate}\n\\item[(A1)] $G^{(n)}(\\lambda)\\rar\\psi(\\lambda)$ as $n\\rar\\infty$ uniformly on any bounded interval.\n\n\\item[(A2)]\n\\beqlb\\label{lem:lib}\n\\left(\\frac{1}{n}Y^{p^{(n)},x}_{[\\gamma_n\nt]},\\; t\\geq0\\right)\\overset{(d)}{\\longrightarrow}(Y_t^{\\psi,x},\\; t\\geq0),\\quad \\text{as }n\\rar\\infty,\n\\eeqlb\nin $D(\\mbb R^+,\\mbb R^+)$.\n\n\\item[(A3)] There exists a probability measure $\\mu$ on $(-\\infty, +\\infty)$ such that\n$\\l(\\mu^{(n)}\\r)^{*[n\\gamma_n]}\\rar \\mu$ as $n\\rar\\infty$, where $\\int e^{-\\lambda x}\\mu(dx)=e^{\\psi(\\lambda)}.$\n\\end{enumerate}\nThe following lemma is a variant of Theorem 3.4 in \\cite{[Gr74]}.\n\\begin{lemma}\\label{lem:li}\nLet $\\psi$ be a branching mechanism satisfying (H1) and (H2). Then (A1), (A2) and (A3) are equivalent.\n\\end{lemma}\n\\begin{remark}\n(A3) is just the condition (i) in Theorem 3.4 of \\cite{[Gr74]}. Under our assumption on $\\psi$, we do not need condition (b) there. (A3) is also equivalent to the convergence of random walks to $\\psi$-L\\'evy processes; see Theorem 2.1.1 of \\cite{[DL02]} for (sub)critical case.\n\\end{remark}\n\n\\proof We shall show that (A2)$\\Leftrightarrow$(A3) and (A3)$\\Leftrightarrow$(A1).\n\ni): If (A2) holds, then $\\psi$ is conservative implies that ${\\mbb P}(Y_t<\\infty)=1$ for all $t\\geq0$. Then Theorem 3.3 in \\cite{[Gr74]} gives (A2)$\\Rightarrow$(A3). Meanwhile, Theorem 3.1 in \\cite{[Gr74]} implies (A3)$\\Rightarrow$(A2).\n\nii): We first show (A3)$\\Rightarrow$(A1). Denote by $L^{(n)}(\\lambda)$ the Laplace transform of $\\l(\\mu^{(n)}\\r)^{*[n\\gamma_n]}$. Then Theorem 2.1 in \\cite{[Gr74]}, together with (A3), gives that for every real number $d>0$\n\\beqlb\\label{lap}\n\\log L^{(n)}(\\lz)=[n\\gamma_n] \\log \\l(\\frac{e^{\\lz\/n}}{n\\gamma_n}G^{(n)}(\\lz)+1\\r)\\rar {\\psi(\\lz)},\n\\text{ as } n\\rar\\infty,\n\\eeqlb uniformly in $\\lz\\in[0,d],$\nwhich implies that for any $\\ez>0$, all $n>n(d,\\ez)$ and $\\lz\\in[0,d]$,\n$$\nn\\gamma_n\\l(e^{\\frac{\\psi(\\lz)-\\ez}{[n\\gamma_n]}}-1\\r)\n<\ne^{\\lz\/n}G^{(n)}(\\lz)<\nn\\gamma_n\\l(e^{\\frac{\\psi(\\lz)+\\ez}{[n\\gamma_n]}}-1\\r).\n$$\nThen by $|e^x-1-x|<{e^{|x|}}|x|^2\/2$,\n$$\n-\\frac{2(\\psi(\\lz)-\\ez)^2 e^{\\frac{|\\psi(\\lz)-\\ez|}{[n\\gamma_n]}}}{n\\gamma_n}-2\\ez\n<\ne^{\\lz\/n}G^{(n)}(\\lz)-\\psi(\\lz)<\n\\frac{(\\psi(\\lz)+\\ez)^2 e^{\\frac{|\\psi(\\lz)+\\ez|}{n\\gamma_n}}}{n\\gamma_n}+\\ez.$$\nNote that $\\psi$ is locally bounded. Thus as $n\\rar\\infty$,\n$G^{(n)}(\\lz)\\rar \\psi(\\lz),$\nuniformly on any bounded interval, which is just (A1).\nSimilarly, one can deduce that if $(A1)$ holds, then $ L^{(n)}(\\lz)\\rar e^{\\psi(\\lz)}$ as $n\\rar\\infty$, which implies (A3). \\qed\n\n\n\n\nNow, we are ready to present our main theorem. Define $\\mathcal{E}^{p^{(n)},x}=\\inf\\{k\\geq0: Y^{p^{(n)},x}_k=0\\}$ and $\\mathcal{E}^{\\psi,x}=\\inf\\{t\\geq0: Y_t^{\\psi,x}=0\\}$ with the convention that $\\inf\\emptyset=+\\infty$. Denote by $g_{k}^{p^{(n)}}$ the $k$-th iterate of $g^{p^{(n)}}$.\n\\begin{theorem}\\label{Main}\nLet $\\psi$ be a branching mechanism satisfying (H1) and (H2).\n Assume that (A1) or (A2) holds. Suppose in addition that for every $\\dz>0$,\n\\beqlb\\label{Main1}\n\\liminf_{n\\rar\\infty}g_{[\\dz\\gamma_n]}^{p^{(n)}}(0)^n>0.\n\\eeqlb\nThen for $x>0$,\n\\beqlb\\label{conexeb}\n\\frac{1}{\\gamma_n}{\\cal E}^{p^{(n)},x}\\overset{(d)}{\\rar}\\mathcal{E}^{\\psi,x}\n\\text { on }[0,+\\infty]\\eeqlb\n and for any bounded continuous function $F$ on $C({\\mbb R}^+, {\\mbb R}^+)$ and every $a\\geq0$,\n\\beqlb\\label{Main2}\n\\lim_{n\\rar\\infty}{\\mE}\n\\l[F\\l(\\pi_a\\l(\\gamma_n^{-1}C^{p^{(n)},x}({2n\\gamma_n\\cdot})\\r)\\r) \\r]=\\mE^{\\psi,a}_x\\l[F\\l(e\\r)\\r].\\eeqlb\n\\end{theorem}\n\nBefore proving the theorem, we would like to give some remarks.\n\\begin{remark}\n(\\ref{Main1}) is essential to (\\ref{Main2}); see the comments following Theorem 2.3.1 in \\cite{[DL02]}. In fact under our assumptions $(H1), (H2)$ and $(A1)$, (\\ref{Main1}) is equivalent to (\\ref{conexeb}). To see (\\ref{conexeb}) implies (\\ref{Main1}), note that\n $$g_{[\\dz\\gamma_n]}^{p^{(n)}}(0)^{[nx]}=\\mP\\l[Y^{p^{(n)},x}_{[\\dz \\gamma_n]}=0\\r]=\\mP[{\\cal E}^{p^{(n)},x}\/{\\gamma_n}< \\dz]$$\nwhich, together with (\\ref{conexeb}), gives\n$$\n\\liminf_{n\\rar\\infty}g_{[\\dz\\gamma_n]}^{p^{(n)}}(0)^{[nx]}= \\liminf_{n\\rar\\infty}\n\\mP[{\\cal E}^{p^{(n)},x}\/{\\gamma_n}<\\dz]\\geq \\mP[{\\cal E}^{\\psi,x}<\\dz]>0,\n$$\nwhere the last inequality follows from our assumption $(H1)$; see Chapter 10 in \\cite{[Ky06]} for details.\n\\end{remark}\n\\begin{remark}\\label{remDW}\nSome related work on the convergence of discrete Galton-Watson trees has been done in \\cite{[DL02]} and \\cite{[DW12]}. In \\cite{[DL02]}, only the (sub)critical case was considered; see Theorem \\ref{lem:dl} below. In Theorem 4.15 of \\cite{[DW12]}, a similar work was done using a quite different formalism. The assumptions there are same as our assumptions in the Theorem \\ref{Main}. But the convergence holds for locally compact rooted real trees in the sense of the pointed Gromov-Hausdorff distance, which is a weaker convergence. Thus Theorem \\ref{Main} implies that the super-critical L\\'evy trees constructed in \\cite{ad:ctvmp} coincides with the one studied in \\cite{[DW12]}; see also \\cite{[ADH12]} and \\cite{[DW07]}.\n\\end{remark}\n\n\nWe then present a variant of Theorem 2.3.1 and Corollary 2.5.1 in \\cite{[DL02]} which is essential to our proof of Theorem \\ref{Main}.\n\n\\begin{theorem}\\label{lem:dl} (Theorem 2.3.1 and Corollary 2.5.1 of \\cite{[DL02]})\nLet $\\psi$ be a (sub)critical branching mechanism satisfying $(H1)$.\n Assume that (A1) or (A2) holds. Suppose in addition that for every $\\dz>0$,\n\\beqlb\\label{lem:dlb}\n\\liminf_{n\\rar\\infty}g_{[\\dz\\gamma_n]}^{p^{(n)}}(0)^n>0.\n\\eeqlb\nThen \\beqlb\\label{lem:dla}\n\\frac{1}{\\gamma_n}{\\cal E}^{p^{(n)},x}\\overset{(d)}{\\rar}\\mathcal{E}^{\\psi,x}\n\\text { on }[0,+\\infty)\\eeqlb\n and for any bounded continuous function $F$ on $C({\\mbb R}^+, {\\mbb R}^+)\\times D({\\mbb R}^+, {\\mbb R}^+)$,\n\\beqlb\\label{coropia}\n\\lim_{n\\rar\\infty}{\\mE}\n\\l[F\\l(\\pi_a\\l(\\gamma_n^{-1}C^{p^{(n)},x}({2n\\gamma_n\\cdot})\\r), \\l(\\frac{1}{n}Y^{p^{(n)}, x}_{[\\gamma_n\na]}\\r)_{a\\geq0}\\r) \\r]=\\mE^{\\psi}_x\\l[F\\l(\\pi_a(e), (Z_a)_{a\\geq0}\\r)\\r].\n\\eeqlb\n\\end{theorem}\n\\proof The comments following Theorem 2.3.1 in \\cite{[DL02]} give (\\ref{lem:dla}). And by Corollary 2.5.1 in \\cite{[DL02]}, we have\n \\beqlb\\label{lem:dlc}\n\\lim_{n\\rar\\infty}{\\mE}\n\\l[F\\l(\\l(\\gamma_n^{-1}C^{p^{(n)},x}({2n\\gamma_n\\cdot})\\r), \\l(\\frac{1}{n}Y^{p^{(n)},x}_{[\\gamma_n\na]}\\r)_{a\\geq0}\\r) \\r]=\\mE^{\\psi}_x\\l[F\\l(e, (Z_a)_{a\\geq0}\\r)\\r].\n\\eeqlb\n On the other hand, let ${\\cal C}_{a}$ be the set of discontinuities of $\\pi_a$. (\\ref{localtime}) yields\n\\beqlb\\label{coropib}\n\\Gamma_{e,a}(x)=\\int_0^x1_{\\{e_t\\leq a\\}}dt=\\int_{\\mbb R^+}1_{\\{s\\leq a\\}}L_x^sds=\\int_0^x1_{\\{e_t0$,\n\\beqlb\\label{lem:exe1}\n\\liminf_{n\\rar\\infty}g_{[\\dz\\gamma_n]}^{p^{(n)}}(0)^n>0.\n\\eeqlb\nThen as $n\\rar\\infty$,\n\\beqlb\\label{conexea}\nf(p^{(n)})^{[nx]}\\rar e^{-\\gamma x},\\quad x>0.\n\\eeqlb\n\\end{lemma}\n\\proof\nRecall that $f(p^{(n)})$ denotes the minimal solution of $g^{p^{(n)}}(s)=s.$\nFor each $n\\geq1$, define\n$$\nq^{(n)}_k=f({p^{(n)}})^{k-1}p^{(n)}_k,\\quad k\\geq1\\quad \\text{ and }\\quad\nq^{(n)}_0=1-\\sum_{k\\geq1}q^{(n)}_k.\n$$\nThen $q^{(n)}=\\{q_k^{(n)}: k\\geq0\\}$ is a probability distribution with generating function given by\n\\beqlb\\label{gq}\ng^{q^{(n)}}(s)=g^{p^{(n)}}\\l(sf({p^{(n)}})\\r)\/f({p^{(n)}}),\\quad 0\\leq s\\leq 1.\n\\eeqlb\n\n\\noindent Thus $ g^{q^{(n)}}(0)=g^{p^{(n)}}(0)\/f({p^{(n)}})$ and by induction we further have\n\\beqlb\\label{extinclim}\ng_{k+1}^{q^{(n)}}(0)=g^{q^{(n)}}\\l(g_{k}^{q^{(n)}}(0)\\r)\n=g^{p^{(n)}}\\l(g_{k}^{q^{(n)}}(0)f({p^{(n)}})\\r)\/f({p^{(n)}})\n=g_{k+1}^{p^{(n)}}(0)\/f({p^{(n)}}),\\quad k\\geq1.\n\\eeqlb\nWith (\\ref{lem:exe1}), we see that for any $\\delta>0$,\n\\beqlb\\label{sub2}\n1\\geq\\liminf_{n\\rar\\infty}g_{[\\dz \\gamma_n]}^{q^{(n)}}(0)^n\n=\\liminf_{n\\rar\\infty}g_{[\\dz \\gamma_n]}^{p^{(n)}}(0)^n\/f({p^{(n)}})^n\\geq\\liminf_{n\\rar\\infty}g_{[\\dz \\gamma_n]}^{p^{(n)}}(0)^n>0.\n\\eeqlb\nThen we also have $e^{-\\gamma_0}:=\\liminf_{n\\rar\\infty}f({p^{(n)}})^n>0.$\nSince $f({p^{(n)}})\\leq1$, we may write $f({p^{(n)}})=e^{-a_n\/n}$ for some $a_n\\geq0$. We further have $\\limsup_{n\\rar\\infty}a_n=\\gamma_0.$ We shall show that $\\gamma_0=\\gamma$ and $\\{a_n:n\\geq1\\}$ is a convergent sequence. To this end, let $\\{a_{n_k}:k\\geq1\\}$ be a convergent subsequence of $\\{a_n:n\\geq1\\}$ with $\\lim_{k\\rar\\infty}a_{n_k}=:\\tilde{\\gamma}\\leq \\gamma_0.$ Then by (A1),\n$$\n0=n_k\\gamma_{n_k}[{g^{p^{(n_k)}}(e^{-a_{n_k}\/n_k})-e^{-a_{n_k}\/{n_k}}}]\\rar \\psi(\\tilde{\\gamma}),\\quad \\text{as}\\quad k\\rar\\infty.\n$$\nThus $\\psi(\\tilde{\\gamma})=0.$ On the other hand, note that $\\psi$ is a convex function with $\\psi(0)=0$ and $\\gamma$ is the largest root of $\\psi(\\lz)=0$. Then we have $\\psi(\\lz_1)<0$ and $\\psi(\\lz_2)>0$ for $0<\\lz_1<\\gamma<\\lz_2$. If $\\tilde{\\gamma}\\neq\\gamma$, then $\\tilde{\\gamma}=0$. In this case, we may find a sequence $\\{b_{n_k}: k\\geq1\\}$ with $b_{n_k}>a_{n_k}$ for all $k\\geq 1$ such that $b_{n_k}\\rar\\gamma$ and for $k$ sufficiently large\n$$\n{g^{p^{(n_k)}}(e^{-b_{n_k}\/n_k})-e^{-b_{n_k}\/{n_k}}}=0.\n$$\nThis contradicts the fact that $f(p^{(n)})=e^{-a_n\/n}$ is the minimal solution of $g^{p^{(n)}}(s)=s.$ Thus $\\tilde{\\gamma}=\\gamma$ which implies that $\\lim_{n\\rar\\infty}a_n=\\gamma$ and $\\lim_{n\\rar\\infty}f(p^{(n)})^{[nx]}=e^{\\gamma x}$ for any $x>0.$ \\qed\n\n\n\n\\bigskip\n\nWe are in the position to prove Theorem \\ref{Main}.\n\n\\bigskip\n\n{\\bf Proof of Theorem \\ref{Main}:} With Theorem \\ref{lem:dl} in hand, we only need to prove the result when $\\psi$ is super-critical. The proof will be divided into three steps.\n\n\\textit{First step:} One can deduce from (A1) and (\\ref{conexea}) that\n\\beqlb\\label{sub1}\n&&n\\gamma_n[g^{q^{(n)}}\\l(e^{-\\lambda\/n}\\r)-e^{-\\lambda\/n}]\\cr\n&&\\qquad=n\\gamma_n\\l\n[g^{p^{(n)}}\\l(e^{-\\lambda\/n}f({p^{(n)}})\\r)\n-e^{-\\lambda\/n}f({p^{(n)}})\\r]\/f({p^{(n)}})\\cr\n&&\\qquad\\rightarrow\\psi(\\lambda+\\gamma),\n\\quad\n\\text{as }n\\rightarrow\\infty,\n\\eeqlb\nuniformly on any bounded interval. Then Lemma \\ref{lem:li} and Theorem \\ref{lem:dl}, together with (\\ref{sub2}) and (\\ref{sub1}), imply that\n\\beqlb\\label{conexesub}\n\\frac{1}{\\gamma_n}{\\cal E}^{q^{(n)},x}\\overset{(d)}{\\rar}\\mathcal{E}^{\\psi_{\\gamma},x}\n\\text { on }[0,+\\infty)\\eeqlb\nand for any bounded continuous function $F$ on $C({\\mbb R}^+, {\\mbb R}^+)\\times D({\\mbb R}^+, {\\mbb R})$,\n\\beqlb\\label{consub}\n\\lim_{n\\rar\\infty}{\\mE}\\l[F\\l(\\pi_a\\l((\\gamma_n^{-1}C^{q^{(n)},x}({2n\\gamma_n\\cdot})\\r), \\l(\\frac{1}{n}Y^{q^{(n)},x}_{[\\gamma_n\na]}\\r)_{a\\geq0}\\r) \\r]=\\mE^{\\psi_{\\gamma}}_x\\l[F(\\pi_a(e), (Z_a)_{a\\geq0})\\r].\n\\eeqlb\n\n\\textit{Second step:} We shall prove (\\ref{conexeb}). Note that\n$$\n\\{{\\cal E}^{p^{(n)},x}<\\infty\\}=\\{{\\cal G}_i^{p^{(n)}}, i=1,\\ldots,[nx] \\text{ are finite trees }\\}.\n$$\nThen by Corollary \\ref{cor:con}, for $f\\in C({\\mbb R}^+,{\\mbb R}^+)$,\n$$\n{\\mE}\\l[f\\l({\\cal E}^{p^{(n)},x}\/{\\gamma_n}\\r)1_{\\{{\\cal E}^{p^{(n)},x}<\\infty\\}}\\r]=f(p^{(n)})^{[nx]}{\\mE}\\l[f\\l({\\cal E}^{q^{(n)},x}\/{\\gamma_n}\\r)\\r]\n$$\nwhich, by (\\ref{conexea}), (\\ref{conexesub}) and Lemma \\ref{AD2.2}, converges to $e^{-\\gamma x}{\\mE}\\l[f\\l(\\mathcal{E}^{\\psi_{\\gamma},x}\\r)\\r]\n={\\mE}\\l[f\\l(\\mathcal{E}^{\\psi,x}\\r)1_{\\{{\\cal E}^{\\psi,x}<\\infty\\}}\\r]$, as $n\\rar\\infty$.\nWe also have that\n$$\n\\mP[{\\cal E}^{p^{(n)},x}=\\infty]=1-f(p^{(n)})^{[nx]}\\rar1-e^{-\\gamma x}=\\mP[{\\cal E}^{\\psi,x}=\\infty],\\quad\\text{as }n\\rar\\infty,\n$$\nwhich gives (\\ref{conexeb}).\n\n\\textit{Third step:} We shall prove (\\ref{Main2}). By Corollary \\ref{cor:con}, for any nonnegative measurable function $F$ on $C({\\mbb R}^+, {\\mbb R}^+)$ and $a\\geq0$,\n\\beqlb\\label{defsuper}\n{\\mE}\\l[F(C_{\\lc a\\rc}^{p^{(n)},x}(\\cdot))\\r]={\\mE}\\left[f(p^{(n)})^{[nx]-Y_{\\lc a\\rc}^{q^{(n)},x}}\nF(C_{\\lc a\\rc}^{q^{(n)},x}(\\cdot))\\right].\n\\eeqlb\nNote that\n$$\nC_a^{q^{(n)},x}=\\pi_aC_{\\lc a\\rc}^{q^{(n)},x}\\text{ and }\\pi_a(\\gamma_n^{-1}C^{q^{(n)},x})=\\gamma_n^{-1}C_{\\gamma_n a}^{q^{(n)},x}.\n$$\nThen by (\\ref{defsuper}) we have for $a\\in {\\mbb R}^+$\n\\beqlb\\label{defsuper1}\n{\\mE}\\l[F(C_{a}^{p^{(n)},x}(\\cdot))\\r]={\\mE}\\left[f(p^{(n)})^{[nx]-Y_{\\lc a\\rc}^{q^{(n)},x}}\nF(C_{a}^{q^{(n)},x}(\\cdot))\\right]\n\\eeqlb\nand\n\\beqnn{\\mE}\\l[F\\l(\\gamma_n^{-1}\nC_{\\gamma_na}^{p^{(n)},x}({2n\\gamma_n\\cdot})\\r) \\r]\n={\\mE}\\l[f(p^{(n)})^{[nx]-Y_{\\lc \\gamma_na\\rc}^{q^{(n)},x}}F\\l(\\pi_a\\l(\\gamma_n^{-1}\nC^{q^{(n)},x}({2n\\gamma_n\\cdot})\\r)\\r) \\r].\n\\eeqnn\nWe shall show that $\\{f(p^{(n)})^{[nx]-Y_{\\lc \\gamma_na\\rc}^{q^{(n)},x}}, n\\geq1\\}$ is uniformly integrable. Write $Y_a^{n}=Y_{\\lc \\gamma_na\\rc}^{q^{(n)},x}\/n$ for simplicity.\nFirst, note that ${\\mE}\\l[f(p^{(n)})^{[nx]-nY_a^{n}}\\r]=1$. Then with (\\ref{conexea}) and (\\ref{consub}) in hand, by the bounded convergence theorem, we have\n$$\n \\lim_{l\\rar\\infty}\\lim_{n\\rar\\infty}{\\mE}\\l[f(p^{(n)})^{[nx]-n(l\\wedge Y_a^{n})}\\r]\n=\\lim_{l\\rar\\infty}{\\mE}_x^{\\psi_{\\gamma}}\\l[e^{-\\gamma x+\\gamma(l\\wedge Z_a)}\\r]={\\mE}_x^{\\psi_{\\gamma}}\\l[e^{-\\gamma x+\\gamma Z_a}\\r]=1.\n$$\nNote that both ${\\mE}_x^{\\psi_{\\gamma}}\\l[e^{-\\gamma x+\\gamma(l\\wedge Z_a)}\\r]$ and ${\\mE}\\l[f(p^{(n)})^{[nx]-n(l\\wedge Y_a^{n})}\\r]$ are increasing in $l$. Thus for every $\\ez>0$, there exist $l_0$ and $n_0$ such that for all $l>l_0$ and $n>n_0$,\n$$1-\\ez\/2<{\\mE}\\l[f(p^{(n)})^{[nx]-n(l\\wedge Y_a^{n})}\\r]\\leq 1.$$\nMeanwhile, since\n$$\n \\lim_{l\\rar\\infty}{\\mE}\\l[f(p^{(n)})^{[nx]-n(l\\wedge Y_a^{n})}\\r]={\\mE}\\l[f(p^{(n)})^{[nx]-nY_a^{n}}\\r]=1,\n$$\nthere exists $l_1>0$ such that for all $n\\geq1$,\n$$1-\\ez\/2<{\\mE}\\l[f(p^{(n)})^{[nx]-n(l_1\\wedge Y_a^{n})}\\r]\\leq {\\mE}\\l[f(p^{(n)})^{[nx]-nY_a^{n}}\\r]=1.$$\nThen for all $n\\geq1$,\n$${\\mE}\\l[f(p^{(n)})^{[nx]-nY_a^{n}}1_{\\{Y_a^{n}>l_1\\}}\\r]-\n{\\mE}\\l[f(p^{(n)})^{[nx]-nl_1}1_{\\{Y_a^{n}>l_1\\}}\\r]<\\ez\/2.$$\nDefine $C_0=\\sup_{n\\geq 1}f(p^{(n)})^{[nx]-nl_1}<\\infty.$ Then for any set $A\\in {\\cal F}$ with ${\\mP}(A)<\\frac{\\ez}{2C_0},$\n\\beqnn\n{\\mE}\\l[f(p^{(n)})^{[nx]-nY_a^{n}}1_A\\r]<{\\mE}\\l[f(p^{(n)})^{[nx]-n(l_1\\wedge Y_a^{n})}1_A\\r]+\\ez\/2<\\ez.\n\\eeqnn\nThus $\\{f(p^{(n)})^{[nx]-Y_{\\lc \\gamma_na\\rc}^{q^{(n)},x}}, n\\geq1\\}$ is uniformly integrable; see Lemma 4.10 in \\cite{[Ka02]}. Using the Skorohod representation theorem and (\\ref{consub}), one can deduce that\n\\beqnn\n&&\\lim_{n\\rar\\infty}{\\mE}\\l[F\\l(\\gamma_n^{-1}\nC_{\\gamma_na}^{p^{(n)},x}({2n\\gamma_n\\cdot})\\r) \\r]\n\\cr&&\n\\quad=\\lim_{n\\rar\\infty}{\\mE}\\l[f(p^{(n)})^{[nx]-Y_{\\lc \\gamma_na\\rc}^{q^{(n)},x}}F\\l(\\pi_a\\l(\\gamma_n^{-1}\nC^{q^{(n)},x}({2n\\gamma_n\\cdot})\\r)\\r) \\r]\n\\cr&&\n\\quad=\\mE^{\\psi_{\\gamma}}_x\\l[e^{-\\gamma x+\\gamma Z_a}F(\\pi_a e) \\r].\n\\eeqnn\nwhich is just the right hand side of (\\ref{Main2}).\nWe have completed the proof.\n\\qed\n\n\n\n\\begin{remark}Write $C^n_t=\\gamma_n^{-1}C^{p^{(n)},x}({2n\\gamma_nt})$ for simplicity and recall that $(w^a,a\\geq0)$ denotes the canonical process on $\\cal W$.\nSuppose that the assumptions of Theorem \\ref{Main} are satisfied. Then one can construct a sequence of probability measures $\\bar{\\mP}_x^{p^{n}}$ on $\\W$ such that for every\n$a\\geq0$, the distribution of $w^a$ under $\\bar{\\mP}_x^{p^{n}}$ is the same as $\\pi_a(C^n)$ and for $0\\leq\na\\leq b$,\\;\n$$\n\\pi_a(w^b)=\\pi_a\\quad \\bar{\\mP}_x^{p^{(n)}}-a.s.\n$$\nWe then have \\beqlb\\label{conversuper}\\bar{\\mP}_x^{p^{n}}\\rar \\bar{\\mP}_x^{\\psi}\\text{ as }n\\rar\\infty.\\eeqlb\n\\end{remark}\n\n\\begin{remark}\nIn \\cite{ad:ctvmp}, an excursion measure (`distribution' of a single tree) was also defined. However, we could not find an easy proof of convergence of trees under such excursion measure.\n\\end{remark}\n\n{\\bf Acknowledgement} Both authors would like to give their sincere thanks to J.-F. Delmas and M. Winkel for their valuable comments and suggestions on an earlier version of this paper. H. He is supported by SRFDP (20110003120003), Ministry of Education (985 project) and NSFC (11071021, 11126037, 11201030).\nN. Luan is supported by UIBE (11QD17) and NSFC (11201068).\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{The number of conditional images} To analyze the impact of the number of conditional images, we train our F2GAN with $K_1$ conditional images based on seen categories, and generate new images for unseen categories with $K_2$ conditional images. By default, we set $K=K_1=K_2=3$ in our experiments. We evaluate the quality of generated images using different $K_1$ and $K_2$ in low-data (\\emph{i.e.}, $10$-sample) classification (see Section $4.4$ in the main paper). By taking EMNIST dataset as an example, we report the results in Table~\\ref{tab:number_effect} by varying $K_1$ and $K_2$ in the range of $[3, 5, 7, 9]$. From Table~\\ref{tab:number_effect}, we can observe that our F2GAN can achieve satisfactory performance when $K_2=K_1$. The performance generally increases as $K$ increases (except from 3 to 5), but the performance gain is not very significant. Then, we observe the performance variance with fixed $K_1$.\nGiven a fixed $K_1$, when $K_2K_1$, the performance drops sharply, especially when $K_2$ is much larger than $K_1$ (\\emph{e.g.}, $K_1=3$ and $K_2=9$). One possible explanation is that when we train our F2GAN with $K_1$ conditional images, it is not adept at fusing the information of more conditional images ($K_2>K_1$) in the testing phase.\n\n\\setlength{\\tabcolsep}{8pt}\n\\begin{table}[t]\n \\caption{Accuracy(\\%) of low-data (10-sample) classification augmented by our F2GAN with different $ \\textbf{K}_1$ and $ \\textbf{K}_2$ on EMNIST dataset.} \n \\centering\n \\fontsize{8}{8}\\selectfont\n \\begin{tabular}{lrrrr}\n \\hline\n ~ & $K_1=3$ & $K_1=5$&$K_1=7$ & $K_1=9$ \\cr\n \n \\hline\n $K_2=3$&97.01 & 96.86 & 95.82 & 94.56 \\cr\n \\hline\n $K_2=5$& 95.24 & 96.98 & 96.08 & 95.52 \\cr\n \\hline\n $K_2=7$&93.76 & 95.13 & 97.23 & 96.86 \\cr\n \\hline\n $K_2=9$&90.17 & 92.74 & 94.38& 97.86 \\cr\n \\hline \n \\end{tabular}\n \\vspace{0.1mm}\n \\label{tab:number_effect}\n\\end{table}\n\n\n\n\n\\section{More Generation Results}\nWe show more example images generated by our F2GAN ($K=3$) on Flowers and Animals datasets in Figure~\\ref{fig:flowers} and Figure~\\ref{fig:animals} respectively. Besides, we additionally conduct experiments on FIGR-8~\\cite{clouatre2019figr} dataset, which is not used in our main paper. The generated images on FIGR-8 dataset are shown in Figure~\\ref{fig:figr-8}. On all three datasets, our F2GAN can generally generate diverse and plausible images based on a few conditional images. However, for some complex categories with very large intra-class variance, the generated images are not very satisfactory. For example, in the $4$-th row in Figure~\\ref{fig:animals}, the mouths of some dog faces look a little unnatural. We conjecture that in these hard cases, our fusion generator may have difficulty in fusing the high-level features of conditional images or seeking for relevant details from conditional images.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.25]{.\/figures\/flower_generated.jpg}\n\\end{center}\n\\caption{Images generated by our F2GAN($ \\textbf{K=3}$) on Flowers dataset. The conditional images are in the left three columns.} \n\\label{fig:flowers} \n\\end{figure*}\n\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.25]{.\/figures\/animals_generated.jpg}\n\\end{center}\n\\caption{Images generated by our F2GAN($ \\textbf{K=3}$) on Animals Faces dataset. The conditional images are in the left three columns.} \n\\label{fig:animals} \n\\end{figure*}\n\n\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.5]{.\/figures\/figr8.png}\n\\end{center}\n\\caption{Images generated by our F2GAN($ \\textbf{K=3}$) on FIGR-8 dataset ~\\cite{clouatre2019figr}. The conditional images are in the left three columns.} \n\\label{fig:figr-8} \n\\end{figure*}\n\n\n\\section{More Interpolation results}\nAs in Section 4.3 in the main paper, We show more interpolation results of our F2GAN in Figure~\\ref{fig:interpolation}. \nGiven two images from the same unseen category, we perform linear interpolation based on these two conditional images. In detail, for interpolation coefficients $\\bm{a}=[a^1, a^2]$, we start from $[0.9,0.1]$, and then gradually decrease (\\emph{resp.}, increase) $a^1$ (\\emph{resp.}, $a^2$) to $0.1$ (\\emph{resp.}, $0.9$) with step size $0.1$.\nIt can be seen that our F2GAN is able to produce diverse and realistic images with rich details between two conditional images, even when two conditional images are quite different. \n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.6]{.\/figures\/interpolation_sup.png}\n\\end{center}\n\\caption{Linear interpolation results of our F2GAN on Flowers dataset.}\n\\label{fig:interpolation} \n\\end{figure*}\n\n\n\n\n\n\\section{Comparison with Few-shot Image Translation}\nFew-shot image translation methods like FUNIT~\\cite{liu2019few} mainly borrow category-invariant content information from seen categories to generate new images for unseen categories in the testing phase. \nTechnically, FUNIT disentangles the category-relevant factors~(\\emph{i.e.}, class code) and category-irrelevant factors~(\\emph{i.e.}, content code) of images. Next, we refer to the images from seen (\\emph{resp.}, unseen) categories as seen (\\emph{resp.}, unseen) images.\nBy replacing the content code of an unseen image with those of seen images, FUNIT can generate more images for this unseen category.\nHowever, in this way, few-shot image translation can only introduce category-irrelevant diversity, but fails to introduce enough category-relevant diversity for category-specific properties. \n\nTo confirm this point, we conduct few-shot classification experiments (see Section $4.5$ in the main paper) to evaluate the quality of generated images. Based on the released model of FUNIT~\\cite{liu2019few} trained on Animal Faces~\\cite{deng2009imagenet}, we use class codes of unseen images and content codes of seen images to generate $512$ new images for each unseen category. Then, we use the generated images to help few-shot classification (see Section $4.5$ in the main paper), which is referred to as ``FUNIT-1\" in Table~\\ref{tab:performance_fewshot_classifier}. Besides, we also exchange content codes within the images from the same unseen category to generate new images for each unseen category, but the number of new images generated in this way is quite limited. \nSpecifically, in $N$-way $C$-shot setting, we can only generate $(C-1) \\times C$ new images for each unseen category. We refer to this setting as ``FUNIT-2\" in Table~\\ref{tab:performance_fewshot_classifier}. \n\nFrom Table~\\ref{tab:performance_fewshot_classifier}, it can be seen that ``FUNIT-1\" is better than ``FUNIT-2\", because ``FUNIT-1\" leverages a large amount of extra seen images when generating new unseen images. However, ``FUNIT-1\" is inferior to some state-of-the-art few-shot classification methods as well as our F2GAN, because FUNIT cannot introduce adequate category-relevant diversity as analyzed above.\n\n\n\\setlength{\\tabcolsep}{2pt}\n\\begin{table}[t]\n \\caption{Accuracy(\\%) of different methods on Animals Faces in few-shot classification setting.} \n \\centering\n \n \\begin{tabular}{lrr}\n \n \n \\hline\n Method & 5-way 5-shot &10-way 5-shot\\cr\n \n \\hline\n MatchingNets~\\cite{vinyals2016matching} &59.12 &50.12 \\cr\n\n MAML~\\cite{finn2017model} & 60.03 &49.89 \\cr\n\n RelationNets~\\cite{sung2018learning} &67.51 & 58.12 \\cr\n\n MTL~\\cite{sun2019meta} &79.85 &70.91 \\cr\n\n DN4~\\cite{li2019revisiting} &81.13 &71.34 \\cr\n \n MatchingNet-LFT~\\cite{Hungfewshot} &80.95 &71.62 \\cr\n \n \n MatchingGAN~\\cite{hong2020matchinggan} & 80.36 & 70.89\\cr\n \n FUNIT-1 & 78.02 &69.12 \\cr\n FUNIT-2 &75.29 &67.87 \\cr\n F2GAN & $\\textbf{82.69}$ & $\\textbf{73.19}$ \\cr\n \n \\bottomrule[0.8pt]\n \\end{tabular}\n \\label{tab:performance_fewshot_classifier}\n\\end{table}\n\n\\section{Details of Network Architecture}\n\\textbf{Generator} In our fusion generator, there are in total $11$ residual blocks ($5$ encoder blocks, $5$ decoder blocks, and $1$ intermediate block), in which each encoder (\\emph{resp.},decoder) block contains $3$ convolutional layers with leaky ReLU and batch normalization followed by one downsampling (\\emph{resp.}, upsampling) layer, while intermediate block contains $3$ convolutional layers with leaky ReLU and batch normalization. The architecture of our generator is summarized in Table~\\ref{tab:generator}. \n\n\n\\setlength{\\tabcolsep}{8pt}\n\\begin{table}[t]\n \\caption{The network architecture of our fusion generator. BN denotes batch normalization.} \n \\centering\n \n \\begin{tabular}{ccccc}\n \\hline\n Layer & Resample & Norm & Output Shape \\cr\n \n \\hline\n Image $\\bm{x}$ & - & - & 128*128*3 \\cr\n \\hline\n Conv $1 \\times 1$ & - & - & 128*128*32 \\cr\n \\hline\n Residual Block & AvgPool & BN & 64*64*64 \\cr\n \\hline\n Residual Block & AvgPool & BN & 32*32*64 \\cr\n \\hline\n Residual Block & AvgPool & BN & 16*16*96 \\cr\n \\hline\n Residual Block & AvgPool & BN & 8*8*96 \\cr\n \\hline\n Residual Block & AvgPool & BN & 4*4*128 \\cr\n \\hline\n Residual Block & - & BN & 4*4*128 \\cr\n \\hline\n Residual Block & Upsample & BN & 8*8*96 \\cr\n \\hline\n Residual Block & Upsample & BN & 16*16*96 \\cr\n \\hline\n Residual Block & Upsample & BN & 32*32*64 \\cr\n \\hline\n Residual Block & Upsample & BN & 64*64*64 \\cr\n \\hline\n Residual Block & Upsample & BN & 128*128*64 \\cr\n \\hline\n Conv $1 \\times 1$ & - & - & 128*128*3 \\cr\n \\hline \n \\end{tabular}\n \\label{tab:generator}\n\\end{table}\n\n\\noindent\\textbf{Discriminator} Our discriminator is analogous to that in~\\cite{liu2019few}, which consists of one convolutional layer followed by five groups of residual blocks. Each group of residual blocks is as follows: ResBlk-$k$ $\\rightarrow$ ResBlk-$k$ $\\rightarrow$ AvePool$2$x$2$, where ResBlk-$k$ is a ReLU first residual block~\\cite{mescheder2018training} with the number of channels $k$ set as $64$, $128$, $256$, $512$, $1024$ in five residual blocks. We use one fully connected (fc) layer with $1$ output following global average pooling layer to obtain the discriminator score. The architecture of our discriminator is summarized in Table ~\\ref{tab:discriminator}. \n\nThe classifier shares the feature extractor with the discriminator and only replaces the last fc layer with another fc layer with $C^s$ outputs with $C^s$ being the number of seen categories. The mode seeking loss and the interpolation regression loss also use the feature extractor from the discriminator. Specifically, we remove the last fc layer from discriminator to extract the features of generated images, based on which the mode seeking loss and the interpolation regression loss are calculated. \n\n\n\n\n\n\n\\setlength{\\tabcolsep}{8pt}\n\\begin{table}[t]\n \\caption{The network architecture of our fusion discriminator.} \n \\centering\n \n \\begin{tabular}{ccccc}\n \\hline\n Layer & Resample & Norm & Output Shape \\cr\n \n \\hline\n Image $\\bm{x}$ & - & - & 128*128*3 \\cr\n \\hline\n Conv $1 \\times 1$ & - & - & 128*128*32 \\cr\n \\hline\n Residual Blocks & AvgPool & - & 64*64*64 \\cr\n \\hline\n Residual Blocks & AvgPool & - & 32*32*128 \\cr\n \\hline\n Residual Blocks & AvgPool & - & 16*16*256 \\cr\n \\hline\n Residual Blocks & AvgPool & - & 8*8*512 \\cr\n \\hline\n Residual Blocks & AvgPool & - & 4*4*1024 \\cr\n \\hline\n Global & GlobalAvgPool & - & 1*1*1024 \\cr\n \\hline\n FC & - & - & 1*1*1 \\cr\n \\hline\n \\end{tabular}\n \\label{tab:discriminator}\n\\end{table}\n\\section{Conclusion}\nIn this paper, we have proposed a novel few-shot generation method F2GAN to fuse high-level features of conditional images and fill in the detailed information borrowed from conditional images. Technically, we have developed a non-local attentional fusion module and an interpolation regression loss. We have conducted extensive generation and classification experiments on five datasets to demonstrated the effectiveness of our method.\n\n\\section{Related Work}\n\\label{sec:related}\n\n\n\n\\noindent\\textbf{Data Augmentation:}\nData augmentation~\\cite{Krizhevsky2012ImageNet} targets at augmenting the training set with new samples. Traditional data augmentation techniques (\\emph{e.g.}, crop, rotation, color jittering) can only produce limited diversity. Some more advanced augmentation techniques~\\cite{zhang2017mixup,yun2019cutmix} are proposed, but they fail to produce realistic images. \nIn contrast, deep generative models can exploit the distribution of training data to generate more diverse and realistic samples for feature augmentation~\\cite{schwartz2018delta,mm1} and image augmentation~\\cite{antoniou2017data}. Our method belongs to image augmentation and can produce more images to augment the training set.\n\n\\noindent\\textbf{Generative Adversarial Network:}\nGenerative Adversarial Network (GAN)~\\cite{goodfellow2014generative,xu2019learning} is a powerful generative model based on adversarial learning. In the early stage, unconditional GANs~\\cite{miyato2018spectral} generated images with random vectors by learning the distribution of training images. Then, GANs conditioned on a single image~\\cite{miyato2018cgans,antoniou2017data} were proposed to transform the conditional image to a target image. Recently, a few conditional GANs attempted to accomplish more challenging tasks conditioned on more than one image, such as few-shot image translation~\\cite{liu2019few} and few-shot image generation~\\cite{clouatre2019figr,bartunov2018few}.\nIn this paper, we focus on few-shot image generation, which will be detailed next.\n\n\\noindent\\textbf{Few-shot Image Generation}\nFew-shot generation is a challenging problem which can generate new images with a few conditional images. Early few-shot image generation works are limited to certain application scenario. For example, Bayesian learning and reasoning were applied in ~\\cite{lake2011one,rezende2016one} to learn simple concepts like pen stroke and combine the concepts hierarchically to generate new images. \nMore recently, FIGR~\\cite{clouatre2019figr} was proposed to combine adversarial learning with optimization-based few-shot learning method Reptile~\\cite{nichol2018first} to generate new images. Similar to FIGR~\\cite{clouatre2019figr}, DAWSON~\\cite{liang2020dawson} applied meta-learning MAML algorithms~\\cite{finn2017model} to GAN-based generative models to achieve domain adaptation between seen categories and unseen categories. Metric-based few-shot learning method Matching Network~\\cite{vinyals2016matching} was combined with Variational Auto-Encoder~\\cite{Pu2016Variational} in GMN~\\cite{bartunov2018few} to generate new images without finetuning in the test phase. MatchingGAN~\\cite{hong2020matchinggan} attempted to use learned metric to generate images based on a single or a few conditional images. In this work, we propose a new solution for few-shot image generation, which can generate more diverse and realistic images.\n\n\\noindent\\textbf{Attention Mechanism:}\nAttention module aims to localize the regions of interest. Abundant attention mechanisms like spatial attention~\\cite{xu2016ask}, channel attention~\\cite{chen2017sca}, and full attention~\\cite{wang2018mancs} have been developed. Here, we discuss two works most related to our method. The method in~\\cite{lathuiliere2019attention} employs local attention mechanism to select relevant information from multi-source human images for human image generation, but it fails to capture long-range relevance. Inspired by non-local attention~\\cite{zhang2019self,wang2018non}, we develop a novel non-local attentional fusion (NAF) module for few-shot image generation.\n\n\n\\section{Introduction}\nDeep generative models\n, mainly including Variational Auto-Encoder (VAE) based methods~\\cite{vae} and Generative Adversarial Network (GAN) based methods~\\cite{goodfellow2014generative}, draw extensive attention from the artificial intelligence community. Despite the advances achieved in current GAN-based methods~\\cite{cyclegan,stargan1, stargan2,stylegan1, stylegan2,mm2,DoveNet2020,GAIN2019}, the remaining bottlenecks in deep generative models are the necessity of amounts of training data and the difficulties with fast adaptation to a new category~\\cite{clouatre2019figr,bartunov2018few,liang2020dawson}, especially for those newly emerging categories or long-tail categories. Therefore, it is necessary to consider how to generate images for a new category with only a few images. This task is referred to as few-shot image generation~\\cite{clouatre2019figr,hong2020matchinggan}, which can benefit a wide range of downstream category-aware tasks like few-shot classification~\\cite{vinyals2016matching,sung2018learning}. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.32]{.\/figures\/attention_motivation.png}\n\\end{center}\n\\caption{Illustration of fusing three conditional images $\\textbf{x}_1, \\textbf{x}_2, \\textbf{x}_3$ with interpolation coefficients $[ \\textbf{0.2}, \\textbf{0.3}, \\textbf{0.5}]$ in our proposed F2GAN. The high-level features of conditional images are fused with interpolation coefficients and the details (\\emph{e.g.}, color dots representing query locations) of the generated image are filled by using relevant low-level features (\\emph{e.g.}, color boxes corresponding to query locations) from conditional images. Best viewed in color.}\n\\label{fig:attention_explain} \n\\end{figure}\n\n\nIn the few-shot image generation task, the model is trained on seen categories with sufficient labeled training images. Then, given only a few training images from a new unseen category, the learnt model is expected to produce more diverse and realistic images for this unseen category. In some previous few-shot image generation methods~\\cite{vinyals2016matching,hong2020matchinggan}, the model is trained on seen categories in an episode-based manner~\\cite{vinyals2016matching}, in which a small number (\\emph{e.g.}, 1, 3, 5) of images from one seen category are provided in each training episode~\\cite{vinyals2016matching} to generate new images. The input images used in each training episode are called conditional images. After training, the learnt model can generate new images by using a few conditional images from each unseen category. \n\n\n\n\nTo the best of our knowledge, there are quite few works on few-shot image generation. Among them, DAGAN~\\cite{antoniou2017data} is a special case, \\emph{i.e.}, one-shot image generation, which injects random noise into the generator to produce a slightly different image from the same category. However, this method is conditioned on only one image and fails to fuse the information of multiple images from the same category. More recent few-shot image generation methods can be divided into optimization-based methods and metric-based methods. Particularly, optimization-based FIGR~\\cite{clouatre2019figr} (\\emph{resp.}, DAWSON~\\cite{liang2020dawson}) adopted a similar idea to Reptile~\\cite{nichol2018first} (\\emph{resp.}, MAML~\\cite{finn2017model}), by initializing a generator with images from seen categories and fine-tuning the trained model with images from each unseen category. \nMetric-based method GMN~\\cite{bartunov2018few} (\\emph{resp.}, MatchingGAN~\\cite{hong2020matchinggan}) is inspired by matching network~\\cite{vinyals2016matching} and combines matching procedure with VAE (\\emph{resp.}, GAN). However, FIGR, DAWSON, and GMN can hardly produce sharp and realistic images. MatchingGAN performs better, but has difficulty in fusing complex natural images.\n\n\n\n\n\nIn this paper, we follow the idea in \\cite{hong2020matchinggan} by fusing conditional images, and propose a novel fusing-and-filling GAN (F2GAN) to enhance the fusion ability. The high-level idea is fusing the high-level features of conditional images and filling in the details of generated image with relevant low-level features of conditional images, which is depicted in Figure~\\ref{fig:attention_explain}. In detail, our method contains a fusion generator and a fusion discriminator as shown in Figure~\\ref{fig:framework}. Our generator is built upon U-Net structure with skip connections~\\cite{ronneberger2015u} between the encoder and the decoder. A well-known fact is that in a CNN encoder, shallow blocks encode low-level information at high spatial resolution while deep blocks encode high-level information at low spatial resolution. We interpolate the high-level bottleneck features (the feature vector between encoder and decoder) of multiple conditional images with random interpolation coefficients. Then, the fused high-level feature is upsampled through the decoder to produce a new image. In each upsampling stage, we borrow missing details from the skip-connected shallow encoder block by using our Non-local Attentional Fusion (NAF) module. Precisely, NAF module searches the outputs from shallow encoder blocks of conditional images in a global range, to attend the information of interest for each location in the generated image.\n\nIn the fusion discriminator, we employ typical adversarial loss and classification loss to enforce the generated images to be close to real images and from the same category of conditional images. To ensure the diversity of generated images, we additionally employ a mode seeking loss and an interpolation regression loss, both of which are related to interpolation coefficients. Specifically, we use a variant of mode seeking loss~\\cite{mao2019mode} to prevent the images generated based on different interpolation coefficients from collapsing to a few modes. Moreover, we propose a novel interpolation regression loss by regressing the interpolation coefficients based on the features of conditional images and generated image, which means that each generated image can recognize its corresponding interpolation coefficients. In the training phase, we train our F2GAN based on the images from seen categories. In the testing phase, conditioned on a few images from each unseen category, we can randomly sample interpolation coefficients to generate diverse images for this unseen category.\n\nOur contributions can be summarized as follows: 1) we design a new few-shot image generation method F2GAN, by fusing high-level features and filling in low-level details; 2) Technically, we propose a novel non-local attentional fusion module in the generator and a novel interpolation regression loss in the discriminator; 3) Comprehensive experiments on five real datasets demonstrate the effectiveness of our proposed method.\n\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.25]{.\/figures\/framework.png}\n\\end{center}\n\\caption{The framework of our method which consists of a fusion generator and a fusion discriminator. $\\tilde{ \\textbf{x}}$ is generated based on the random interpolation coefficients $ \\textbf{a}$ and $ \\textbf{K}$ conditional images $\\{ \\textbf{x}_k|_{k=1}^K\\}$. Due to space limitation, we only draw three encoder blocks and two decoder blocks. Best viewed in color.}\n\\label{fig:framework} \n\\end{figure*}\n\n\\section{Our Method}\nGiven a few conditional images $\\mathcal{X}_S=\\{\\bm{x}_k|_{k=1}^K\\}$ from the same category ($K$ is the number of conditional images) and random interpolation coefficients $\\bm{a}=[a^1,\\ldots,a^K]$, our model targets at generating a new image from the same category. We fuse the high-level bottleneck features of conditional images $\\{\\bm{x}_k|_{k=1}^K\\}$ with interpolation coefficients $\\bm{a}$, and fill in the low-level details specified by Non-local Attentional Fusion (NAF) module during upsampling to generate a new image $\\tilde{\\bm{x}}$. \n\nWe split all categories into seen categories $\\mathcal{C}^{s}$ and unseen categories $\\mathcal{C}^{u}$, where $\\mathcal{C}^{s} \\cap \\mathcal{C}^{u}=\\emptyset$. In the training phase, our model is trained with images from seen categories $\\mathcal{C}^{s}$ to learn a mapping, which translates a few conditional images $\\mathcal{X}_S$ of a seen category to a new image belonging to the same category. In the testing phase, a few conditional images from an unseen category in $\\mathcal{C}^{u}$ together with random interpolation coefficients $\\bm{a}$ are fed into the trained model to generate new diverse images for this unseen category. As illustrated in Figure~\\ref{fig:framework}, our model consists of a fusion generator and a fusion discriminator, which will be detailed next.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.36]{.\/figures\/attention.png}\n\\end{center}\n\\caption{The architecture of our Non-local Attentional Fusion (NAF) module. $\\bm{\\xi}_r^k$ is the feature of $\\textbf{x}_k$ from the $r$-th encoder block, $\\bm{\\phi}_r$ is the output from the $r$-th decoder block, and $\\bm{\\eta}_r$ is the output of NAF.}\n\n\\label{fig:attention} \n\\end{figure}\n\n\\subsection{Fusion Generator}\nOur fusion generator $G$ adopts an encoder-decoder structure~\\cite{antoniou2017data} which is a combination of U-Net~\\cite{ronneberger2015u} and ResNet~\\cite{he2016deep}. Specifically, there are in total $11$ residual blocks ($5$ encoder blocks, $5$ decoder blocks, and $1$ intermediate block), in which each encoder (\\emph{resp.}, decoder) block contains $3$ convolutional layers with leaky ReLU and batch normalization followed by one downsampling (\\emph{resp.}, upsampling) layer, while the intermediate block contains $3$ convolutional layers with leaky ReLU and batch normalization. The detailed architecture can be found in Supplementary. The encoder (\\emph{resp.}, decoder) blocks progressively decrease (\\emph{resp.}, increase) the spatial resolution. For ease of description, the encoder (\\emph{resp.}, decoder) blocks from shallow to deep are indexed from $4$ (\\emph{resp.}, $1$) to $0$ (\\emph{resp.}, $5$). We use $\\bm{{\\psi}}^k$ to denote the bottleneck feature of $\\bm{x}_k$ from the intermediate block. Besides, we add $3$ skip connections between the encoder and the decoder. For $r=1,2,3$, the $r$-th skip connection directs the output from the $r$-th encoder block to the output from the $r$-th decoder block.\nThen, we use $\\bm{{\\xi}}_r^k \\in \\mathcal{R}^{W_r \\times H_r \\times C_r}$ to denote the output feature of conditional image $\\bm{x}_k$ from the $r$-th encoder block, and $\\bm{{\\phi}}_r \\in \\mathcal{R}^{W_r \\times H_r \\times C'_r}$ to denote the output feature from the $r$-th decoder block, where $C_r$ and $C'_r$ are the number of channels in the $r$-th encoder and decoder respectively. \n\nTo fuse the bottleneck features of conditional images $\\mathcal{X}_S$, we randomly sample interpolation coefficients $\\bm{a}=[a^1,\\ldots,a^K]$, which satisfy $a^k\\geq 0$ and $\\sum_{k=1}^K a^k=1$, leading to the fused bottleneck feature $\\bm{{\\eta}}_0 =\\sum_{k=1}^{K} a^{k} \\bm{{\\psi}}^k$.\nSince the spatial size of bottleneck feature is very small (\\emph{e.g.}, $4\\times 4$), the spatial misalignment issue can be ignored and high-level semantic information of conditional images is fused. \nThen, the fused bottleneck feature is upsampled through decoder blocks. During upsampling in each decoder block, lots of details are missing and need to be filled in.\nWe borrow the low-level detailed information from the output features of its skip-connected encoder block. Furthermore, we insert a Non-local Attentional Fusion (NAF) module into the skip connection to attend relevant detailed information, as shown in Figure~\\ref{fig:framework}. For the $r$-th skip connection, NAF module takes $a^k\\bm{\\xi}_r^k$ and $\\bm{\\phi}_r$ as input and outputs $\\bm{{\\eta}}_r = \\textnormal{NAF}(\\{a^k\\bm{\\xi}_r^k|_{k=1}^K\\}, \\bm{\\phi}_r)$.\nThen, $\\bm{{\\eta}}_r$ concatenated with $\\bm{\\phi}_r$, that is, $\\hat{\\bm{{\\phi}}}_{r} = [\\bm{{\\eta}}_r, \\bm{{\\phi}}_r]$, is taken as the input to the $(r+1)$-th decoder block.\n\nOur attention-enhanced fusion strategy is a little similar to~\\cite{lathuiliere2019attention}. However, for each spatial location on $\\bm{\\phi}_r$, the attention module in~\\cite{lathuiliere2019attention} only attends exactly the same location on $\\bm{\\xi}_r^k$, which will hinder attending relevant information if the conditional images are not strictly aligned. For example, for category ``dog face\", the dog eyes may appear at different locations in different conditional images which have different face poses. Inspired by non-local attention~\\cite{zhang2019self,wang2018non}, for each spatial location on $\\bm{\\phi}_r$, we search relevant information in a global range on $\\bm{\\xi}_r^k$. Specifically, our proposed Non-local Attentional Fusion (NAF) module calculates an attention map $\\bm{A}_r^k$ based on $\\bm{{\\phi}}_r$ and $a^k\\bm{{\\xi}}_r^k$, in which each entry ${A}_r^k(i,j)$ represents the attention score between the $i$-th location on $\\bm{{\\phi}}_r$ and the $j$-th location on $a^k\\bm{{\\xi}}_r^k$. Therefore, the design philosophy and technical details of our NAF module are considerably different from those in~\\cite{lathuiliere2019attention}. \n\nThe architecture of NAF module is shown in Figure~\\ref{fig:attention}. First, $a^k\\bm{{\\xi}}_r^k$ and $\\bm{\\phi}_r$ are projected to a common space by $f(\\cdot)$ and $g(\\cdot)$ respectively, where $f(\\cdot)$ and $g(\\cdot)$ are $1 \\times 1 \\times \\frac{C_r}{8}$ convolutional layer with spectral normalization~\\cite{miyato2018spectral}. For ease of calculation, we reshape $f(a^k\\bm{{\\xi}}_r^k) \\in \\mathcal{R}^{W_r \\times H_r \\times \\frac{C_r}{8}} $ (\\emph{resp.}, $g(\\bm{{\\phi}}_r) \\in \\mathcal{R}^{W_r \\times H_r \\times \\frac{C_r}{8}}$) into $\\bar{f}(a^k\\bm{{\\xi}}_r^k) \\in \\mathcal{R}^{N_r \\times \\frac{C_r}{8}}$ (\\emph{resp.}, $\\bar{g}(\\bm{{\\phi}}_r) \\in \\mathcal{R}^{N_r \\times \\frac{C_r}{8}}$), in which $N_r=W_r \\times H_r$. Then, we can calculate the attention map between $\\bm{{\\phi}}_r$ and $a^k\\bm{{\\xi}}_r^k$:\n\\begin{equation}\\label{eqn:attention_map}\n\\begin{aligned}\n\\bm{A}_r^k = softmax\\left(\\bar{g}(\\bm{{\\phi}}_r)\\bar{f}(a^k\\bm{{\\xi}}_r^k)^{T} \\right).\n\\end{aligned}\n\\end{equation}\nWith obtained attention map $\\bm{A}_r^k$, we attend information from $a^k\\bm{{\\xi}}_r^k$ and achieve the attended feature map $\\bm{{\\eta}}_r$:\n\\begin{equation}\n\\begin{aligned}\n\\bm{{\\eta}}_r = \\sum_{k=1}^{K} v\\left(\\bm{A}_r^k \\bar{h}(a^k\\bm{{\\xi}}_r^k)\\right),\n\\end{aligned}\n\\end{equation}\nwhere $\\bar{h}(\\cdot)$ means $1 \\times 1 \\times \\frac{C_r}{8} $ convolutional layer followed by reshaping to $\\mathcal{R}^{N_r \\times \\frac{C_r}{8}}$, similar to $\\bar{f}(\\cdot)$ and $\\bar{g}(\\cdot)$ in (\\ref{eqn:attention_map}). $v(\\cdot)$ reshapes the feature map back to $\\mathcal{R}^{W_r \\times H_r \\times \\frac{C_r}{8}}$ and then performs $1 \\times 1 \\times \\frac{C_r}{8} $ convolution.\n \n\nAs the shallow (\\emph{resp.}, deep) encoder block contains the low-level (\\emph{resp.}, high-level) information, our generated images can fuse multi-level information of conditional images coherently. Finally, the generated image can be represented by $\\tilde{\\bm{x}} = G(\\bm{a}, \\mathcal{X}_S)$.\n\nFollowing \\cite{hong2020matchinggan}, we adopt a weighted reconstruction loss to constrain the generated image:\n\\begin{equation} \\label{eqn:loss_reconstruction}\n\\begin{aligned}\n\\mathcal{L}_1 = \\sum_{k=1}^{K} a^k || \\bm{x}_k - \\tilde{\\bm{x}}||_1.\n\\end{aligned}\n\\end{equation}\nIntuitively, the generated image should bear more resemblance to the conditional image with larger interpolation coefficient.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.3]{.\/figures\/combo.jpg}\n\\end{center}\n\\caption{Images generated by DAGAN, MatchingGAN, and our F2GAN ($ \\textbf{K=3}$) on five datasets (from top to bottom: Omniglot, EMNIST, VGGFace, Flowers, and Animals Faces). The conditional images are in the left three columns.}\n\\label{fig:visualization} \n\\end{figure*}\n\n\\subsection{Fusion Discriminator}\nThe network structure of our discriminator is analogous to that in~\\cite{liu2019few}, which consists of one convolutional layer followed by five residual blocks~\\cite{mescheder2018training}. The detailed architecture can be found in Supplementary. Differently, we use one fully connected (fc) layer with $1$ output following average pooling layer to obtain the discriminator score. We treat $K$ conditional images $\\{\\bm{x}_k|_{k=1}^K\\}$ as real images and the generated image $\\tilde{\\bm{x}}$ as fake image. In detail, the average score $\\mathrm{D}(\\bm{x})$ for $K$ conditional images and the score $\\mathrm{D}(\\tilde{\\bm{x}})$ for generated image $\\tilde{\\bm{x}}$ are calculated for adversarial learning. To stabilize training process, we use hinge adversarial loss in~\\cite{miyato2018cgans}. To be exact, the goal of discriminator $\\mathrm{D}$ is minimizing $\\mathcal{L}_D$ while the goal of generator is minimizing $\\mathcal{L}_{GD}$:\n\\begin{eqnarray}\n\\!\\!\\!\\!\\!\\!\\!\\!&&\\mathcal{L}_D = \\mathbb{E}_{\\tilde{\\bm{x}}} [\\max (0,1+\\mathrm{D}(\\tilde{\\bm{x}})] + \\mathbb{E}_{\\bm{x}_k} [\\max (0,1-\\mathrm{D}({\\bm{x}}))], \\nonumber\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!&&\\mathcal{L}_{GD} = - \\mathbb{E}_{\\tilde{\\bm{x}}} [\\mathrm{D}(\\tilde{\\bm{x}})]\n\\end{eqnarray}\n\nAnalogous to ACGAN~\\cite{odena2017conditional}, we apply a classifier with cross-entropy loss to classify the real images and the generated images into the corresponding seen categories.\nSpecifically, the last fc layer of the discriminator $D$ is replaced by another fc layer with the output dimension being the number of seen categories:\n\\begin{equation}\n\\begin{aligned}\\label{eqn:loss_classification}\n\\mathcal{L}_{c} = -\\log p(c(\\bm{x})|\\bm{x}),\n\\end{aligned}\n\\end{equation}\nwhere $c(\\bm{x})$ is the ground-truth category of $\\bm{x}$. We minimize $\\mathcal{L}^{D}_{c} = -\\sum_{k=1}^K\\log p(c(\\bm{x}_k)|\\bm{x}_k)$ for $K$ conditional images $\\{\\bm{x}_k|_{k=1}^K\\}$ when training the discriminator. We update the generator by minimizing $\\mathcal{L}^{G}_{c}=-\\log p(c(\\tilde{\\bm{x}})|\\tilde{\\bm{x}})$, since we expect the generated image $\\tilde{\\bm{x}}$ to be classified as the same category of conditional images.\n\n\nBy varying interpolation coefficients $\\bm{a}$, we expect to generate diverse images, but one common problem for GAN is mode collapse~\\cite{mao2019mode}, which means that the generated images may collapse into a few modes.\nIn our fusion generator, when sampling two different interpolation coefficients $\\bm{a}_1$ and $\\bm{a}_2$, the generated images $G(\\bm{a}_1,\\mathcal{X}_S)$ and $G(\\bm{a}_2,\\mathcal{X}_S)$ are likely to collapse into the same mode. To guarantee the diversity of generated images, we use two strategies to mitigate mode collapse, one is a variant of mode seeking loss~\\cite{mao2019mode} to seek for more modes, the other is establishing bijection between the generated image $\\tilde{\\bm{x}}$ and its corresponding interpolation coefficient $\\bm{a}$. The mode seeking loss in~\\cite{mao2019mode} was originally used to produce diverse images when using different latent codes. Here, we slightly twist the mode seeking loss to produce diverse images when using different interpolation coefficients. Specifically,\nwe remove the last fc layer of $D$ and use the remaining feature extractor $\\hat{D}$ to extract the features of generated images with different interpolation coefficients. Then, we maximize the ratio of the distance between $\\hat{D}(G(\\bm{a}_1,\\mathcal{X}_S))$ and $\\hat{D}(G(\\bm{a}_2,\\mathcal{X}_S))$ over the distance between $\\bm{a}_1$ and $\\bm{a}_2$, yielding the following mode seeking loss:\n\\begin{equation} \\label{eqn:loss_mode_seeking}\n\\begin{aligned}\n\\mathcal{L}_{m} = \\frac {|| \\hat{D}(G(\\bm{a}_1,\\mathcal{X}_S)) - \\hat{D}(G(\\bm{a}_2,\\mathcal{X}_S))||_1} {|| \\bm{a}_1 - \\bm{a}_2||_1}.\n\\end{aligned}\n\\end{equation}\n\nTo further ensure the diversity of generated images, the bijection between the generated image $\\tilde{\\bm{x}}$ and its corresponding interpolation coefficient $\\bm{a}$ is established by a novel interpolation regression loss, which regresses the interpolation coefficient $\\bm{a}$ based on the features of conditional images $\\hat{D}(\\bm{x}_k)$ and generated image $\\hat{D}(\\tilde{\\bm{x}})$. Note that the feature extractor $\\hat{D}$ is the same as in (\\ref{eqn:loss_mode_seeking}). Specifically, we apply a fully-connected (fc) layer $E$ to the concatenated feature $[\\hat{D}(\\bm{x}_k),\\hat{D}(\\tilde{\\bm{x}})]$, and obtain the similarity score $s_k$ between $\\bm{x}_k$ and $\\tilde{\\bm{x}}$: $s_k = E([\\hat{D}(\\bm{x}_k), \\hat{D}(\\tilde{\\bm{x}})])$.\nThen, we apply softmax layer to $\\bm{s}=[s_1,\\ldots,s_K]$ to obtain the predicted interpolation coefficients $\\tilde{\\bm{a}} = softmax(\\bm{s})$, which are enforced to match the ground-truth $\\bm{a}$:\n\\begin{equation} \\label{eqn:loss_interpolation}\n\\begin{aligned}\n\\mathcal{L}_{a} = ||\\tilde{\\bm{a}} - \\bm{a} ||_2.\n\\end{aligned}\n\\end{equation}\nBy recognizing the interpolation coefficient based on the generated image and conditional images, we actually establish a bijection between the generated image and interpolation coefficient, which discourages two different interpolation coefficients from generating the same image.\n\n\\subsection{Optimization}\nThe overall loss function to be minimized is as follows, \n\\begin{equation}\n\\begin{aligned}\n\\mathcal{L} = \\mathcal{L}_D + \\mathcal{L}_{GD}+ \\lambda_1 \\mathcal{L}_{1} + \\mathcal{L}_{c} - \\lambda_m \\mathcal{L}_{m} + \\lambda_a \\mathcal{L}_{a},\\label{optimization}\n\\end{aligned}\n\\end{equation}\nin which $\\lambda_1$, $\\lambda_m$, and $\\lambda_a$ are trade-off parameters. In the framework of adversarial learning, fusion generator and fusion discriminator are optimized by related loss terms in an alternating manner. In particular, the fusion discriminator is optimized by minimizing $\\mathcal{L}_D$ and $\\mathcal{L}^{D}_c$, while the fusion generator is optimized by minimizing $\\mathcal{L}_{GD}$, $\\mathcal{L}_{1}$, $\\mathcal{L}^{G}_{c}$, $-\\mathcal{L}_{m}$, and $\\mathcal{L}_{a}$, in which $\\mathcal{L}^{D}_c$ and $\\mathcal{L}^{G}_{c}$ are defined below (\\ref{eqn:loss_classification}). \n\n\n\n\\setlength{\\tabcolsep}{4pt}\n\\begin{table*}[t]\n \\caption{FID ($\\downarrow$), IS ($\\uparrow$) and LPIPS ($\\uparrow$) of images generated by different methods for unseen categories on three datasets.} \n \\centering\n \n \\begin{tabular}{lrrrrrrrrr}\n \n \\toprule[0.8pt]\n \\multirow{2}{*}{Method}&\n \\multicolumn{3}{c}{VGGFace} & \\multicolumn{3}{c}{Flowers} &\\multicolumn{3}{c}{Animals Faces}\\cr\n &FID ($\\downarrow$) & IS ($\\uparrow$) & LPIPS($\\uparrow$) & FID ($\\downarrow$) & IS ($\\uparrow$) & LPIPS ($\\uparrow$) &FID ($\\downarrow$) & IS ($\\uparrow$) & LPIPS ($\\uparrow$) \\cr\n \\cmidrule(r){2-4} \\cmidrule(r){5-7} \\cmidrule(r){8-10}\n FIGR~\\cite{clouatre2019figr} &139.83 &2.98 &0.0834 & 190.12&1.38 &0.0634 &211.54 &1.55 &0.0756\\cr\n \n GMN~\\cite{bartunov2018few}&136.21 &2.14 &0.0902 &200.11 &1.42 &0.0743 &220.45 &1.71 &0.0868 \\cr\n DAWSON~\\cite{liang2020dawson} &137.82 &2.56 & 0.0769 & 188.96& 1.25 &0.0583 & 208.68 &1.51 &0.0642 \\cr\n \n DAGAN~\\cite{antoniou2017data}& 128.34 & 4.12 & 0.0913& 151.21&2.18 &0.0812 &155.29 &3.32 &0.0892\\cr\n MatchingGAN~\\cite{hong2020matchinggan}& 118.62 & 6.16 & 0.1695& 143.35& 4.36&0.1627 & 148.52& 5.08& 0.1514\\cr\n F2GAN &$\\textbf{109.16}$ &$\\textbf{8.85}$ & $\\textbf{0.2125}$ &$\\textbf{120.48}$ &$\\textbf{6.58}$ &$\\textbf{0.2172}$ &$\\textbf{117.74}$ &$\\textbf{7.66}$ &$\\textbf{0.1831}$\\cr\n \n \\bottomrule[0.8pt]\n \n \\end{tabular}\n \\vspace{0.1mm}\n \\label{tab:performance_metric}\n\\end{table*}\n\n\n\n\\section{Experiments}\n\\label{sec:experiments}\n\n\\setlength{\\tabcolsep}{4pt}\n\\begin{table*}[t]\n \\caption{Accuracy(\\%) of different methods on three datasets in low-data setting.} \n \\centering\n \n \\begin{tabular}{lrrrrrrrrr}\n \n \\toprule[0.8pt]\n \\multirow{2}{*}{Method}&\n \\multicolumn{3}{c}{Omniglot } & \\multicolumn{3}{c}{EMNIST} &\\multicolumn{3}{c}{VGGFace}\\cr\n &5-sample & 10-sample & 15-sample & 5-sample & 10-sample & 15-sample &5-sample & 10-sample &15-sample \\cr\n \\cmidrule(r){2-4} \\cmidrule(r){5-7} \\cmidrule(r){8-10}\n Standard &66.22 & 81.87 &83.31 & 83.64 & 88.64 & 91.14 & 8.82 & 20.29 & 39.12\\cr\n Traditional &67.32 &82.28 & 83.95 & 84.62 & 89.63 & 92.07 & 9.12 &22.83 & 41.63 \\cr\n \n FIGR~\\cite{clouatre2019figr} & 69.23 & 83.12 & 84.89 & 85.91 & 90.08 & 92.18 & 6.12 & 18.84& 32.13 \\cr\n GMN~\\cite{bartunov2018few} & 67.74 & 84.19 & 85.12 & 84.12 & 91.21 & 92.09 & 5.23 & 15.61 &35.48\\cr\n DAWSON~\\cite{liang2020dawson} &68.56 &82.02 & 84.01 & 83.63 & 90.72 & 91.83 & 5.27 &16.92 &30.61 \\cr\n DAGAN~\\cite{antoniou2017data} & 88.81 &89.32 & 95.38 &87.45 & 94.18& 95.58 &19.23 &35.12 & 44.36\\cr\n MatchingGAN~\\cite{hong2020matchinggan} &89.03 &90.92 & 96.29 & 91.75 & 95.91 &96.29 &21.12 &40.95 & 50.12\\cr\n F2GAN &$\\textbf{91.93}$ &$\\textbf{92.48}$ & $\\textbf{97.12}$& $\\textbf{93.18}$& $\\textbf{97.01}$ &$\\textbf{97.82}$ & $\\textbf{24.76}$&$\\textbf{43.21}$ & $\\textbf{53.42}$\\cr\n \n \n \\bottomrule[0.8pt]\n \n \\end{tabular}\n \\vspace{0.1mm}\n \\label{tab:performance_vallia_classifier}\n\\end{table*}\n\n\n\\setlength{\\tabcolsep}{2pt}\n\\begin{table*}[t]\n \\caption{Accuracy(\\%) of different methods on three datasets in few-shot classification setting.} \n \\centering\n \n \\begin{tabular}{lrrrrrr}\n \n \\toprule[0.8pt]\n \\multirow{2}{*}{Method}&\\multicolumn{2}{c}{VGGFace}&\\multicolumn{2}{c}{Flowers}&\\multicolumn{2}{c}{Animals Faces}\n \\cr & 5-way 5-shot &10-way 5-shot & 5-way 5-shot &10-way 5-shot & 5-way 5-shot &10-way 5-shot\\cr\n \\cmidrule(r){2-3} \\cmidrule(r){4-5} \\cmidrule(r){6-7}\n MatchingNets~\\cite{vinyals2016matching} & 60.01 &48.67 & 67.98&56.12 &59.12 &50.12 \\cr\n\n MAML~\\cite{finn2017model} & 61.09&47.89 & 68.12&58.01 & 60.03 &49.89 \\cr\n\n RelationNets~\\cite{sung2018learning}& 62.89 & 54.12 &69.83&61.03 &67.51 & 58.12 \\cr\n\n MTL~\\cite{sun2019meta}&77.82 &68.95 &82.35 &74.24 &79.85 &70.91 \\cr\n\n DN4~\\cite{li2019revisiting}&78.13 &70.02 &83.62 &73.96 &81.13 &71.34 \\cr\n \n MatchingNet-LFT~\\cite{Hungfewshot} &77.64 &69.92 & 83.19 &74.32 &80.95 &71.62 \\cr\n \n \n MatchingGAN~\\cite{hong2020matchinggan} & 78.72 & 70.94 &82.76 & 74.09 & 80.36 & 70.89\\cr\n\n F2GAN&$\\textbf{79.85}$ &$\\textbf{72.31}$ &$\\textbf{84.92}$ &$\\textbf{75.02}$ &$\\textbf{82.69}$ &$\\textbf{73.19}$ \\cr\n \n \\bottomrule[0.8pt]\n \\end{tabular}\n \\vspace{0.1mm}\n \\label{tab:performance_fewshot_classifier}\n\\end{table*}\n\n\n\n\n\\subsection{Datasets and Implementation Details}\nWe conduct experiments on five real datasets including Omniglot \\cite{Brenden2015One}, EMNIST~\\cite{cohen2017emnist}, VGGFace~\\cite{cao2018vggface2}, Flowers~\\cite{nilsback2008automated}, and Animal Faces~\\cite{deng2009imagenet}. For VGGFace (\\emph{resp.}, Omniglot, EMNIST), following MatchingGAN \\cite{hong2020matchinggan}, we randomly select $1802$ (\\emph{resp.}, $1200$, $28$) categories from total $2395$ (\\emph{resp.}, $1623$, $48$) categories as training seen categories and select $497$ (\\emph{resp.}, $212$, $10$) categories from remaining categories as unseen testing categories. For Animal face and flower datasets, we use the seen\/unseen split provided in~\\cite{liu2019few}. In Animal Faces, $117574$ animal faces from $149$ carnivorous animal categories are selected from ImageNet~\\cite{deng2009imagenet}. All animal categories are split into $119$ seen categories for training and $30$ unseen categories for testing. For Flowers dataset with $8189$ images distributed in $102$ categories, there are $85$ training seen categories and $17$ testing unseen categories.\n\nWe set $\\lambda_1=1$, $\\lambda_m = 0.01$, and $\\lambda_a = 1$ in (\\ref{optimization}). We set the number of conditional images $K=3$ by balancing the benefit against the cost, because larger $K$ only brings slight improvement (see Supplementary). We use Adam optimizer with learning rate 0.0001 and train our model for $200$ epochs.\n\n\n\\subsection{Quantitative Evaluation of Generated Images} \\label{sec:visualization}\nWe evaluate the quality of images generated by different methods on three datasets based on commonly used Inception Scores (IS)~\\cite{xu2018empirical}, Fr\u00e9chet Inception Distance (FID)~\\cite{heusel2017gans}, and Learned Perceptual Image Patch Similarity (LPIPS)~\\cite{zhang2018unreasonable}. The IS is positively correlated with visual quality of generated images. We fine-tune the ImageNet-pretrained Inception-V3 model~\\cite{szegedy2016rethinking} with unseen categories to calculate the IS for generated images. The FID is designed for measuring similarities between two sets of images. We remove the last average pooling layer of the ImageNet-pretrained Inception-V3 model as the feature extractor. Based on the extracted features, we compute Fr\u00e9chet Inception Distance between the generated images and the real images from the unseen categories. The LPIPS can be used to measure the average feature distance among the generated images. We compute the average of pairwise distances among generated images for each category, and then compute the average over all unseen categories as the final LPIPS score. The details of distance calculation can be found in \\cite{zhang2018unreasonable}.\n\nFor our method, we train our model based on seen categories. Then, we use a random interpolation coefficient and $K=3$ conditional images from each unseen category to generate a new image for this unseen category. We can generate adequate images for each unseen category by repeating the above procedure.\nSimilarly, GMN~\\cite{bartunov2018few}, FIGR~\\cite{clouatre2019figr} and MatchingGAN~\\cite{hong2020matchinggan} are trained in $1$-way $3$-shot setting based on seen categories, and the trained models are used to generate images for unseen categories. Different from the above methods, DAGAN~\\cite{antoniou2017data} is conditioned on a single image, but we can use one conditional image each time to generate adequate images for unseen categories.\n\nFor each unseen category, we use each method to generate $128$ images based on sampled $30$ real images, and calculate FID, IS and LPIPS based on the generated images. The results of different methods are reported in Table \\ref{tab:performance_metric}, from which we observe that our method achieves the highest IS, lowest FID, and highest LPIPS, demonstrating that our model could generate more diverse and realistic images compared with baseline methods.\n\nWe show some example images generated by our method on five datasets including simple concept datasets and relatively complex natural datasets in Figure~\\ref{fig:visualization}. For comparison, we also show the images generated by DAGAN and MatchingGAN, which are competitive baselines as demonstrated in Table \\ref{tab:performance_metric}. On concept datasets Omniglot and EMNIST, we can see that the images generated by DAGAN are closer to inputs with limited diversity, while MatchingGAN and F2GAN can both fuse features from conditional images to generate diverse images for simple concepts. On natural datasets VGGFace, Flowers, and Animals Faces, we observe that MatchingGAN can generate plausible images on VGGFace dataset because face images are well-aligned. However, the images generated by MatchingGAN are of low quality on Flowers and Animals Faces datasets. In contrast, the images generated by our method are more diverse and realistic than DAGAN and MatchingGAN, because the information of more than one conditional image are fused more coherently in our method. In Supplementary, we also visualize our generated results on FIGR-8 dataset, which is released and used in FIGR~\\cite{clouatre2019figr}, as well as more visualization results on Flowers and Animals datasets.\n\n\\subsection{Visualization of Linear Interpolation}\nTo evaluate whether the space of generated images is densely populated, we perform linear interpolation based on two conditional images $\\bm{x}_1$ and $\\bm{x}_2$ for ease of visualization. In detail, for interpolation coefficients $\\bm{a}=[a^1, a^2]$, we start from $[0.9, 0.1]$, and then gradually decrease (\\emph{resp.}, increase) $a^1$ (\\emph{resp.}, $a^2$) to $0.1$ (\\emph{resp.}, $0.9$) with step size $0.1$. Because MatchingGAN also fuses conditional images with interpolation coefficients, we report the results of both MatchingGAN and our F2GAN in Figure~\\ref{fig:interpolation}. Compared with MatchingGAN, our F2GAN can produce more diverse images with smoother transition between two conditional images. More results can be found in Supplementary.\n\n\n\n\\subsection{Low-data Classification}\n\\label{sec:vallia}\nTo further evaluate the quality of generated images, we use generated images to help downstream classification tasks in low-data setting in this section and few-shot setting in Section \\ref{sec:few-shot}. For low-data classification on unseen categories, following MatchingGAN~\\cite{hong2020matchinggan}, we randomly select a few (\\emph{e.g.}, $5$, $10$, $15$) training images per unseen category while the remaining images in each unseen category are test images. Note that we have training and testing phases for the classification task, which are different from the training and testing phases of our F2GAN.\nWe initialize the ResNet$18$~\\cite{he2016deep} backbone using the images of seen categories, and then train the classifier using the training images of unseen categories. Finally, the trained classifier is used to predict the test images of unseen categories. This setting is referred to as ``Standard\" in Table~\\ref{tab:performance_vallia_classifier}. \n\nThen, we use the generated images to augment the training set of unseen categories. For each few-shot generation method, we generate $512$ images for each unseen category based on the training set of unseen categories. Then, we train the ResNet$18$ classifier on the augmented training set (including both original training set and generated images) and apply the trained classifier to the test set of unseen categories. We also use traditional augmentation techniques (\\emph{e.g.}, crop, rotation, color jittering) to augment the training set and report the results as ``Traditional\" in Table~\\ref{tab:performance_vallia_classifier}.\n\nThe results of different methods are listed in Table~\\ref{tab:performance_vallia_classifier}. On Omniglot and EMNIST datasets, all methods outperform ``Standard\" and ``Traditional\", which demonstrates the benefit of deep augmentation methods. On VGGFace dataset, our F2GAN, MatchingGAN~\\cite{hong2020matchinggan}, and DAGAN~\\cite{antoniou2017data} outperform ``Standard\", while the other methods underperform ``Standard\". One possible explanation is that the images generated by GMN and FIGR on VGGFace are of low quality, which harms the classifier. It can also be seen that our proposed F2GAN achieves significant improvement over baseline methods, which corroborates the high quality of our generated images.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.3]{.\/figures\/interpolation_comparison.png}\n\\end{center}\n\\caption{Linear interpolation results of MatchingGAN (top row) and our F2GAN (bottom row) based on two conditional images $ \\textbf{x}_1$ and $ \\textbf{x}_2$ on Flowers dataset.}\n\\label{fig:interpolation} \n\\end{figure}\n\n \n\n\n\n\n\n\n\\subsection{Few-shot Classification}\n\\label{sec:few-shot}\n\nWe follow the $N$-way $C$-shot setting in few-shot classification~\\cite{vinyals2016matching,sung2018learning} by creating evaluation episodes and calculating the averaged accuracy over multiple evaluation episodes. In each evaluation episode, $N$ categories are randomly selected from unseen categories. Then, $C$ images from each of $N$ categories are randomly selected as training set while the remaining images are used as test set. We use pretrained ResNet$18$~\\cite{he2016deep} (pretrained on the seen categories) as the feature extractor and train a linear classifier for the selected $N$ unseen categories. Besides $N\\times C$ training images, our fusion generator produces $512$ additional images for each of $N$ categories to augment the training set. \n\nWe compare our method with existing few-shot classification methods, including representative methods MatchingNets~\\cite{vinyals2016matching}, RelationNets~\\cite{sung2018learning}, MAML~\\cite{finn2017model} as well as state-of-the-art methods MTL~\\cite{sun2019meta}, DN4~\\cite{li2019revisiting}, MatchingNet-LFT~\\cite{Hungfewshot}. Note that no augmented images are added to the training set of $N$ unseen categories for these baseline methods. Instead, we strictly follow their original training procedure, in which the images from seen categories are used to train those few-shot classifiers. Among the baselines, MAML \\cite{finn2017model} and MTL~\\cite{sun2019meta} need to\nfurther fine-tune the trained classifier based on the training set of $N$ unseen categories in each evaluation episode.\n\nWe also compare our method with competitive few-shot generation baseline MatchingGAN~\\cite{hong2020matchinggan}. For MatchingGAN, We use the same setting as our F2GAN and generate augmented images for unseen categories. Besides, we compare our F2GAN with FUNIT~\\cite{liu2019few} in Supplementary.\n\nBy taking $5$-way\/$10$-way $5$-shot as examples, we report the averaged accuracy over $10$ episodes on three datasets in Table~\\ref{tab:performance_fewshot_classifier}.\nOur method achieves the best results in both settings on all datasets, which shows the benefit of using augmented images produced by our fusion generator. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.47]{.\/figures\/query_attention.png}\n\\end{center}\n\\caption{The visualization of learned attention maps from NAF module. In each row, $\\tilde{ \\textbf{x}}$ is generated based on three conditional images $ \\textbf{x}_1$, $ \\textbf{x}_2$, and $ \\textbf{x}_3$.\nFor each color dot (query location) in $\\tilde{ \\textbf{x}}$, we draw a color arrow with the same color to summarize the bright region (the most-attended region corresponding to the query location) in $ \\textbf{x}_k$. Best viewed in color.}\n\\label{fig:attention_map} \n\\end{figure}\n\n\\subsection{Ablation Studies}\n\n\n\n\\textbf{The number of conditional images} To analyze the impact of the number of conditional images, we train F2GAN with $K_1$ conditional images based on seen categories, and generate new images for unseen categories with $K_2$ conditional images. Due to space limitation, we leave the details to Supplementary.\n\n\n \n \n\n\\noindent\\textbf{Loss terms: }In our method, we employ weighted reconstruction loss $\\mathcal{L}_{1}$ (\\ref{eqn:loss_reconstruction}), mode seeking loss $\\mathcal{L}_{m}$ (\\ref{eqn:loss_mode_seeking}), and interpolation regression loss $\\mathcal{L}_{a}$ (\\ref{eqn:loss_interpolation}). To investigate the impact of $\\mathcal{L}_{1}$, $\\mathcal{L}_{m}$, and $\\mathcal{L}_{a}$, we conduct experiment on Flowers dataset by removing each loss term from the final objective (\\ref{optimization}) separately. The quality of generated images is evaluated from two perspectives. On one hand, IS, FID, and LPIPS of generated images are computed as in Section~\\ref{sec:visualization}. On the other hand, we report the accuracy of few-shot ($10$-way $5$-shot) classification augmented with generated images as in Section~\\ref{sec:few-shot}. The results are summarized in Table~\\ref{tab:network_design}, which shows that ablating $\\mathcal{L}_1$ leads to slight degradation of generated images. Another observation is that without $\\mathcal{L}_{m}$, the results \\emph{w.r.t.} all metrics become much worse, which indicates that the mode seeking loss can enhance the diversity of generated images. Besides, when removing $\\mathcal{L}_a$, it can be seen that the diversity and realism of generated images are compromised, resulting in lower classification accuracy.\n\n\\noindent\\textbf{Attention module: }In our fusion generator, a Non-local Attentional Fusion (NAF) module is designed to borrow low-level information from the encoder. To corroborate the effectiveness of our design, we remove the NAF module, and directly connect the fused encoder features with the output of corresponding decoder blocks via skip connection, which is referred to as ``w\/o NAF\" in Table ~\\ref{tab:network_design}. Besides, we replace our NAF module with local attention used in~\\cite{lathuiliere2019attention} to compare two different attention mechanisms, which is referred to as ``local NAF\" in Table~\\ref{tab:network_design}. The results show that both ``local NAF\" or our NAF achieve better results than ``w\/o NAF\", which proves the necessity of attention enhanced fusion strategy. We also observe that our NAF module can improve the realism and diversity of generated images, which is justified by lower FID, higher IS, and higher LPIPS. \n\nMoreover, we visualize the attention maps in Figure~\\ref{fig:attention_map}. The first column exhibits the generated images based on three conditional images. For each generated image, we choose three representative query locations, which borrow low-level details from three conditional images respectively. For the conditional image $\\textbf{x}_k$, \nwe obtain the $H_1\\times W_1$ attention map from the corresponding row in $\\bm{A}_1^k$ in (\\ref{eqn:attention_map}). For each color query point, we draw a color arrow with the same color to summarize the most-attended regions (bright regions) in the corresponding conditional image. In the first row, we can see that the red (\\emph{resp.}, green, blue) query location in the generated flower $\\tilde{\\textbf{x}}$ borrows some color and shape details from $\\textbf{x}_1$ (\\emph{resp.}, $\\textbf{x}_2$, $\\textbf{x}_3$). Similarly, in the second row, the red (\\emph{resp.}, green, blue) query location in the generated dog face $\\tilde{\\textbf{x}}$ borrows some visual details of forehead (\\emph{resp.}, tongue, cheek) from $\\textbf{x}_1$ (\\emph{resp.}, $\\textbf{x}_2$, $\\textbf{x}_3$).\n\n\n\\setlength{\\tabcolsep}{6pt}\n\\begin{table}[t]\n \\caption{Ablation studies of our loss terms and attention module on Flowers dataset.} \n \\centering\n \n \\begin{tabular}{lrrrr} \n \\hline\n setting& accuracy (\\%) & FID ($\\downarrow$) & IS ($\\uparrow$) & LPIPS ($\\uparrow$) \\cr\n \n \\hline\n w\/o $\\mathcal{L}_{1}$ & 74.89 & 122.68 & 6.39 & 0.2114 \\cr\n \n w\/o $\\mathcal{L}_{m}$ & 73.92 & 125.26 & 4.92 & 0.1691\\cr\n \n w\/o $\\mathcal{L}_{a}$ & 72.42 & 122.12 & 4.18 & 0.1463 \\cr\n \n \\hline\n w\/o NAF & 72.62 & 137.81 & 5.11& 0.1825 \\cr\n \n local NAF & 73.98 & 134.45 & 5.92 & 0.2052 \\cr\n \\hline\n \n F2GAN &$\\textbf{75.02}$ &$\\textbf{120.48}$ &$\\textbf{6.58}$ &$\\textbf{0.2172}$\\cr\n \\hline\n \\end{tabular}\n \\vspace{0.1mm}\n \\label{tab:network_design}\n\\end{table}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzeuvi b/data_all_eng_slimpj/shuffled/split2/finalzzeuvi new file mode 100644 index 0000000000000000000000000000000000000000..8253e0f721b4c6999822c1fec44312b0b4a70b31 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzeuvi @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{S0}\n\nMany function spaces of practical interest are also algebras and\/or lattices. An important example is $C(X)$, the space of continuous real-valued functions on a compact Hausdorff space. \nHence, a natural problem in the course of understanding the structure of the space $C(X)$ is to characterize the algebraic and\/or lattice isomorphisms on it.\nThe classical solutions were given by Gelfand and Kolmogorov \\cite{GK} and Kaplansky \\cite{K} respectively.\nAn in depth exposition of investigations into the algebraic structure of $C(X)$ and much more can be found in the classic monograph \\cite{GJ}.\nSubsequent research has tied these two strands together in the form of the disjointness structure of the function space $C(X)$.\nSpecifically, algebraic or lattice homomorphisms are disjointness preserving; they map disjoint functions to disjoint functions. An algebraic or lattice isomorphism is biseparating; that is, it is a bijection $T$ so that both $T$ and $T^{-1}$ are disjointness preserving. Moreover, generalization to disjointness preserving or biseparating maps allows for extension to function spaces that are neither algebras nor lattices, and even to vector-valued functions.\nCopius research has been devoted to the study of disjointness preserving and biseparating maps on various function spaces; see, e.g., \\cite{A}-\\cite{AJ2}, \\cite{BBH}, \\cite{GJW}, \\cite{HBN}-\\cite{J-V W}, \\cite{L}.\nAs far as the authors are aware of, the study of biseparating maps thus far has been confined to linear or at least additive maps. Since additive bijective maps are ${\\mathbb Q}$-linear, such maps are not far removed from the linear world.\nIn this paper, we initiate the study of general nonlinear biseparating maps on spaces of vector-valued functions.\nThe following example shows that the definition of ``biseparating'' needs to be adjusted in order to obtain meaningful results.\n\n\\bigskip\n\n\\noindent{\\bf Example}. Let $A$ be the set of all functions $f\\in C[0,1]$ so that the set $\\{t\\in [0,1]: f(t) \\neq 0\\}$ is dense in $[0,1]$.\nLet $T:C[0,1]\\to C[0,1]$ be a map such that $T$ maps $A$ bijectively onto itself and that $Tf = f$ if $f\\neq A$.\nThen $T$ is a bijection so that $f\\cdot g = 0 \\iff Tf\\cdot Tg = 0$.\n\n\\bigskip\n\nThe example shows that the definition of ``biseparating'' used for linear or additive maps is too weak when applied to general nonlinear maps. In the next section, we propose a revised definition of ``biseparating'' for nonlinear maps. The definition reduces to the usual one for additive maps. Moreover, with the revised definition, a satisfactory theory of nonlinear biseparating maps arise, subject to some mild assumptions. See the paragraph preceding Lemma \\ref{l1.21}. The theory of nonlinear biseparating maps is somewhat related to the theory or order isomorphisms developed in \\cite{LT}. It also partly generalizes the notion of ``nonlinear superposition operators''. We refer to \\cite{AZ} for a comprehensive study of the latter types of operators.\n\nLet us give an overview of the content of the paper. As mentioned, the definition of ``nonlinear biseparatimg maps'' is given in \\S \\ref{s1}. Under the mild assumptions of ``basic'' and ``compatible'', the fundamental characterization theorem of nonlinear biseparating maps (Theorem \\ref{t5}) is obtained. The theorem shows that a nonlinear bijective operator is biseparating if and only if it is ``locally determined''. For an exposition of some applications of locally determined operators to operator functional equations, particularly on spaces of differentiable functions, refer to \\cite{KM}.\nThe characterization theorem applies in particular to a number of familiar (vector-valued) function spaces such as spaces of continuous, uniformly continuous, Lipschitz and differentiable functions.\nWe would like to point out a general resemblance of Theorem \\ref{t5} with the fundamental characterization theorem for ``nonlinear order isomorphisms'' \\cite[Theorem 2.11]{LT}.\nIndeed, our study of nonlinear biseparating maps is motivated and informed by the study of nonlinear order isomorphisms at various points. However, the lack of an order makes many of the arguments more difficult in the present case, especially for uniformly continuous and Lipschitz functions.\nFor further information on nonlinear order isomorphisms on function spaces, we refer to \\cite{LT} and the references therein.\n\n\\S \\ref{s2} studies nonlinear biseparating maps between spaces of vector-valued continuous or bounded continuous functions.\nOne of the main results is Theorem \\ref{t2.8}, which shows that if $X,Y$ are realcompact spaces and $E,F$ are Hausdorff topological vector spaces, and there is a biseparating map $T:C(X,E)\\to C(Y,F)$, then $X$ and $Y$ are homeomorphic. With reference to the classical Gelfand-Kolmogorov and Kaplansky theorems, one sees that one needs rather much less than the full algebraic or lattice structure of $C(X)$ to determine the topology of $X$.\n\nFrom \\S\\ref{s3} onwards, we focus on metric spaces $X$ and $Y$.\nIn the course of \\S\\ref{s3} and \\S\\ref{s4}, full representations of biseparating maps between spaces of continuous, uniformly continuous and Lipschitz functions defined on metric spaces are obtained. See Propositions \\ref{p4.2}, \\ref{p4.3} and \\ref{p4.4}. \\S\\ref{s5} revisits spaces of continuous functions, this time defined on metric spaces.\nComplete characterizations of nonlinear biseparating maps are obtained; see Theorems \\ref{t5.4} and \\ref{t5.5}.\nWe also prove an automatic continuity result Theorem \\ref{t5.6}.\n\n\\S7 is concerned with nonlinear biseparating maps between spaces of uniformly continuous functions.\nCharacterization of such maps is carried out in two stages. First it is shown that a biseparating map induces a uniform homeomorphism of the underlying metric spaces. The second part involves solving the ``section problem'': determining the maps $\\Xi$ so that $Sf(x) = \\Xi(x,f(x))$ is uniformly continuous whenever the input function $f$ is uniformly continuous. Refer to Theorems \\ref{t6.7.1} and \\ref{t6.7.2}.\nFrom these characterization theorems, one can also obtain an automatic continuity result (Theorem \\ref{t6.9}).\nA classical result of Atsuji \\cite{At} and Hejcman \\cite{H}, rediscovered in \\cite{O'F}, states that all uniformly continuous functions on a metric space $X$ are bounded if and only if $X$ is Bourbaki bounded (see definition in \\S7.2). Theorem \\ref{t6.10} generalizes this result. It shows that there is a biseparating map from $U(X,E)$ onto a space $U_*(Y,F)$ (the space of bounded uniformly continuous functions) if and only if $X$ is Bourbaki bounded.\n\n\n\\S \\ref{s8} focuses on spaces of Lipschitz functions. First it is shown that we may reduce to considering spaces $\\operatorname{Lip}(X,E)$, where $X$ is bounded metric (Proposition \\ref{p6.2}).\nMaking use of the Baire Catergory Theorem and some intricate combinatorial arguments, it is then shown that a biseparating map between vector-valued Lipschitz spaces defined on bounded complete metric spaces induces a Lipschitz homeomorphism between the underlying metric spaces (Theorem \\ref{t7.5}).\nNext, the section problem for Lipschitz functions is solved (Theorem \\ref{t7.7}), which enables the characterization of nonlinear biseparating maps between spaces of Lipschitz functions (Theorem \\ref{t7.8}).\nSuppose that $\\Xi$ is a ``Lipschitz section'', i.e., the function $\\Xi(x,f(x))$ is a Lipschitz function of $x$ whenever $f$ is Lipschitz. It is known that even if $x_0$ is an accumulation point, the function $\\Xi(x_0,\\cdot)$ need not be continuous with respect to the second variable. Nevertheless, exploiting the Baire Category Theorem, we show that $\\Xi(x_0,\\cdot)$ is continuous on an open dense set if $x_0$ is an accumulation point (Theorem \\ref{t7.11}).\n\nThe final section \\S \\ref{s9} determines the biseparating maps that act between a space of uniformly continuous functions on the one hand and a space of Lipschitz functions on the other. The main results (Theorems \\ref{t6.6} and \\ref{t6.7}) show that there is a certain rigidity, so that the existence of such maps imply very strong conditions on the underlying metric spaces.\n\nTo end this introduction, we note that linear or nonlinear biseparating maps acting between spaces of differentiable functions seem to be rather more difficult to deal with. A notable achievement in this regard is the paper by Araujo \\cite{A3}. We intend to address some of the problems raised therein in a future paper.\n\n\n\\section{Generalities}\\label{s1}\n\nLet $X, Y$ be sets and let $E, F$ be (real or complex) vector spaces. \nSuppose that $A(X,E)$ is a vector subspace of $E^X$ and $A(Y,F)$ is a vector subspace of $F^Y$.\nIf $f\\in A(X,E)$, let the {\\em carrier} of $f$ be the set \n\\[C(f) = \\{x\\in X: f(x) \\neq 0\\}.\\] \nSet ${\\mathcal C}(X) = \\{C(f):f\\in A(X,E)\\}$.\nFor functions $f,g,h\\in A(X,E)$, say that $f$ and $g$ are {\\em disjoint with respect to} $h$, $f\\perp_h g$, if $C(f-h) \\cap C(g-h) = \\emptyset$. We abbreviate $\\perp_0$ as $\\perp$.\nThe {\\em support} of a function $f\\in A(X,E)$ is the set\n\\[ \\widehat{C}(f) = X\\backslash \\bigcup\\{C(g): g\\in A(X,E), f\\perp g\\} = X\\backslash \\bigcup\\{C\\in {\\mathcal C}(X): C \\cap C(f) = \\emptyset\\}.\\]\nObviously $C(f) \\subseteq \\widehat{C}(f)$. Furthermore, if $f_1,f_2\\in A(X,E)$ and $f_1=f_2$ on $C(f)$, then $f_1 = f_2$ on $\\widehat{C}(f)$.\n Set ${\\mathcal D}(X) = \\{\\widehat{C}(f): f\\in A(X,E)\\}$. Similar definitions apply to $A(Y,F)$.\n A map $T:A(X,E) \\to A(Y,F)$ is {\\em biseparating} if it is a bijection and for any $f,g,h \\in A(X,E)$,\n\\[ f\\perp_hg \\iff Tf\\perp_{Th}Tg.\\]\nFor the remainder of the section, let $T:A(X,E)\\to A(Y,F)$ be a given biseparating map.\nThe following proposition, although simple to state and easy to prove, turns out to be key to understanding biseparating maps.\n\n\\begin{prop}\\label{p1}(Araujo's Lemma, cf \\cite[Lemma 4.2]{A})\n\nFor any $f,g,h\\in A(X,E)$, \n\\[ \\widehat{C}(f-h) \\subseteq \\widehat{C}(g-h) \\iff \\widehat{C}(Tf-Th) \\subseteq \\widehat{C}(Tg-Th).\\]\n\\end{prop}\n\n\\begin{proof}\nSuppose that $\\widehat{C}(f-h) \\subseteq \\widehat{C}(g-h)$.\nAssume that there exists $z\\in \\widehat{C}(Tf-Th) \\backslash \\widehat{C}(Tg-Th)$.\nThere exists $v\\in A(Y,F)$ so that $v\\perp Tg-Th$ and $z\\in C(v)$.\nSince $z\\in \\widehat{C}(Tf-Th)$, $v\\not\\perp Tf-Th$. \nSet $u = T^{-1}(v+Th) \\in A(X,E)$. Then $v = Tu - Th$. Hence\n\\[ Tu -Th = v \\perp Tg-Th \\implies Tu \\perp_{Th} Tg \\implies u\\perp_h g\\implies u-h \\perp g-h.\\]\nTherefore,\n\\[ C(u-h) \\subseteq (\\widehat{C}(g-h))^c \\subseteq (\\widehat{C}(f-h))^c \\implies u-h \\perp f-h \\implies u\\perp_h f.\\]\nIt follows that \n\\[ Tu \\perp_{Th}Tf \\implies v = Tu-Th \\perp Tf-Th.\\]\nThis contradicts that fact that $v\\not\\perp Tf-Th$. This completes the proof for the forward implication ``$\\implies$\". The reverse implication follows by symmetry.\n\\end{proof}\n\n\n\n\n\n\\begin{prop}\\label{p2}\nLet $f\\in A(X,E)$ be given. The map \\[\\theta_f: \\widehat{C}(h) \\mapsto \\widehat{C}(T(f+h) -Tf)\\] is a well-defined bijection from ${\\mathcal D}(X)$ onto ${\\mathcal D}(Y)$ that preserves set inclusion. For any $f,g\\in A(X,E)$ and any $U\\in{\\mathcal D}(X)$, $f= g$ on $U$ if and only if $Tf = Tg$ on $\\theta_f(U)$.\n\\end{prop}\n\n\\begin{proof}\nBy Proposition \\ref{p1}, $\\widehat{C}(h_1) = \\widehat{C}(h_2)$ if and only if \n\\[ \\widehat{C}((h_1+f)-f) = \\widehat{C}((h_2+f)-f) \\iff \\widehat{C}(T(h_1+f) - Tf) = \\widehat{C}(T(h_2+f)-Tf).\\]\nThis shows that the map $\\theta_f$ is well-defined and injective.\nSince any $g\\in A(Y,F)$ can be written in the form $T(f+h) - Tf$ with $h = T^{-1}(g+Tf) -f$, $\\theta_f$ is surjective. It follows from Proposition \\ref{p1} that $\\theta_f$ preserves set inclusion.\n\nFinally, suppose that $U = \\widehat{C}(h) \\in {\\mathcal D}(X)$.\nThen $f= g$ on $U$ if and only if $g-f \\perp h = (f+h) -f$, which in turn is equivalent to the fact that $Tg-Tf \\perp T(f+h) - Tf$. The last statement is easily seen to be equivalent to the fact that $Tg-Tf = 0$ on $\\theta_f(\\widehat{C}(h))$.\n\\end{proof}\n\n\nThe idea behind Proposition \\ref{p2} is that a biseparating map gives rise to a collection of ``set movers'' $\\theta_f$. \nIn order to make the set mover $\\theta_f$ independent of the function $f$, we impose two conditions on the function space $A(X,E)$.\nSay that $A(X,E)$ is \n\\begin{enumerate}\n\\item {\\em basic} if whenever $x\\in C_1\\cap C_2$ for some $C_1, C_2\\in {\\mathcal C}(X)$, then there exists $C\\in {\\mathcal C}(X)$ so that $x\\in C \\subseteq C_1\\cap C_2$; \n\\item {\\em compatible} if for any $f\\in A(X,E)$, any $D\\in {\\mathcal D}(X)$ and any point $x\\notin D$, there exist $g\\in A(X,E)$ and $C\\in {\\mathcal C}(X)$ so that $x\\in C$ and \n\\[ g = \\begin{cases} f &\\text{on $C$},\\\\\n 0 &\\text{on $D$}.\\end{cases}\\]\n \\end{enumerate} \n\n\n\\begin{lem}\\label{l1.21}\nSuppose that $A(Y,F)$ is basic. If $f, g\\in A(Y,F)$ and $V\\in {\\mathcal D}(Y)$ are such that $f = g$ on $V$ and $V\\subseteq \\widehat{C}(g)$, then $V \\subseteq \\widehat{C}(f)$.\n\\end{lem}\n\n\\begin{proof}\nAssume otherwise. There is a point $y_1\\in V$ so that $y_1\\notin \\widehat{C}(f)$.\nThere exists $u\\in A(Y,F)$ so that $y_1\\in C(u)$ and $u\\perp f$. Say $V = \\widehat{C}(v)$.\nSince $y_1\\in \\widehat{C}(v)$, $u\\not\\perp v$. As $A(Y,F)$ is basic, there exists $C\\in {\\mathcal C}(Y)$ so that $\\emptyset\\neq C\\subseteq C(u)\\cap C(v)$.\nIf $y\\in C$, then $y \\in C(u)$ and hence $f(y) = 0$. Moreover, $y\\in C(v) \\subseteq V$ and hence $g(y) = f(y) = 0$. This proves that $C\\cap C(g) = \\emptyset$. Since $C\\in {\\mathcal C}(Y)$, it follows that $C\\cap \\widehat{C}(g) = \\emptyset$.\nThis is impossible since $C$ is a nonempty subset of $C(v)\\subseteq \\widehat{C}(v) = V\\subseteq \\widehat{C}(g)$.\n\\end{proof}\n\n\n\\begin{prop}\\label{p3}\nAssume that $A(Y,F)$ is basic. Suppose that $f, g\\in A(X,E)$ and $U \\in {\\mathcal D}(X)$. If $f = g$ on $U$, then $\\theta_f(U) = \\theta_g(U)$.\n\\end{prop}\n\n\\begin{proof}\nLet $U = \\widehat{C}(h)$. Since $f= g$ on $U\\supseteq C(h)$, $C(h)\\subseteq C(f+h-g)$.\nHence $U \\subseteq \\widehat{C}(f+h- g)$.\nThus \n\\[ \\theta_g(U) \\subseteq \\theta_g(\\widehat{C}(f+h- g)) = \\widehat{C}(T(f+h) - Tg)).\\]\nBy Proposition \\ref{p2}, $Tf = Tg$ on $\\theta_g(U)$. In other words,\n\\[ T(f+h) - Tf = T(f+h)-Tg \\text{ on } \\theta_g(U) \\subseteq \\widehat{C}(T(f+h) - Tg)).\\]\nTherefore, by Lemma \\ref{l1.21}, \\[\\theta_g(U) \\subseteq \\widehat{C}(T(f+h) - Tf) = \\theta_f(U).\\]\nThe reverse inclusion follows by symmetry.\n\\end{proof}\n\n\n\\begin{prop}\\label{p4}\nAssume that $A(Y,F)$ is both basic and compatible. Let $f,g\\in A(X,E)$ and let $U$ be a set in ${\\mathcal D}(X)$. Then $\\theta_g(U) = \\theta_f(U)$.\n\\end{prop}\n\n\\begin{proof} \nSuppose that $U = \\widehat{C}(h)$ and that there exists $y \\in \\theta_f(U) \\backslash \\theta_g(U)$.\nThen there exists $C\\in {\\mathcal C}(Y)$ so that $y \\in C$ and that $C \\cap \\theta_g(U) = \\emptyset$.\nSince $y\\in \\theta_f(U)$, $C \\cap C(T(f+h)-Tf) \\neq \\emptyset$.\nUsing the fact that $A(Y,F)$ is basic, there exist $C'\\in {\\mathcal C}(Y)$ and $z\\in Y$ so that \n\\[z\\in C' \\subseteq C \\cap C(T(f+h)-Tf).\\]\nIn particular, $z\\notin \\theta_g(U)$ and $\\theta_g(U) \\in {\\mathcal D}(Y)$.\nUse the compatibility of $A(Y,F)$ to choose $v\\in A(Y,F)$ and $C''\\in {\\mathcal C}(Y)$ so that $z\\in C''$ and that \n\\[ v = \\begin{cases} Tf-Tg &\\text{on $C''$,}\\\\\n0 &\\text{on $\\theta_g(U)$}.\\end{cases}\\]\nSince $A(Y,F)$ is basic, we may also assume that $C'' \\subseteq C'$.\nSet $k = v+Tg$. We have $k= Tg$ on $\\theta_g(U)$. By Proposition \\ref{p2}, $T^{-1}k = g$ on $U$.\nSay $C'' = C(w)$. Then $k = Tf$ on $\\widehat{C}(w)\\subseteq \\theta_f(U)$. Thus Proposition \\ref{p2} implies that \n\\[ T^{-1}k = f \\text{ on } (\\theta_f)^{-1}(\\widehat{C}(w)) \\subseteq U.\n\\]\nIt follows that $f = g$ on the set $(\\theta_f)^{-1}(\\widehat{C}(w))\\in {\\mathcal D}(X)$.\nBy Proposition \\ref{p3}, $\\widehat{C}(w) = \\theta_g((\\theta_f)^{-1}(\\widehat{C}(w)) \\subseteq \\theta_g(U)$.\nHence $z\\in C(w)\\subseteq \\theta_g(U) =\\emptyset$, contrary to the choice of $z$.\nThis proves that $\\theta_f(U) \\subseteq \\theta_g(U)$. The reverse inclusion follows by symmetry.\n\\end{proof}\n\n\n\n\n\n\n \nWe now obtain the fundamental description of biseparating maps from the foregoing propositions.\n\n\n\\begin{Def}\\label{d2.1}\nRetain the notation above. A bijection $T: A(X,E)\\to A(Y,F)$ is {\\em locally determined} if there is a bijection $\\theta:{\\mathcal D}(X)\\to{\\mathcal D}(Y)$, preserving set inclusions, so that \nfor any $f,g\\in A(X,E)$ and any $U \\in {\\mathcal D}(X)$, $f = g$ on $U$ if and only if $Tf = Tg$ on $\\theta(U)$\n\\end{Def}\n\n\\begin{lem}\\label{l2.0}\nAssume that $A(X,E)$ is basic. Let $T:A(X,E)\\to A(Y,F)$ be locally detemined, with a map $\\theta$ as given in Definition \\ref{d2.1}. If $g,h \\in A(X,E)$, then $C(Tg-Th) \\subseteq \\theta(\\widehat{C}(g-h))$.\n\\end{lem}\n\n\\begin{proof}\nSuppose that $y \\notin \\theta(\\widehat{C}(g-h))$. Choose $v_1\\in A(Y,F)$ so that \n$\\widehat{C}(v_1) = \\theta(\\widehat{C}(g-h))$.\nThere exists $v_2\\in A(Y,F)$ such that $y\\in C(v_2)$ and $v_1\\perp v_2$.\nLet $u_1 = g-h$ and $u_2\\in A(X,E)$ be such that $\\widehat{C}(u_2) = \\theta^{-1}(\\widehat{C}(v_2))$.\n\nWe claim that $u_1\\perp u_2$.\nOtherwise, there exists nonempty $C\\in {\\mathcal C}(X)$ so that $C\\subseteq C(u_1) \\cap C(u_2)$.\nHence \n\\[ \\theta(\\widehat{C}) \\subseteq \\theta(\\widehat{C}(u_1))\\cap \\theta(\\widehat{C}(u_2)) = \\widehat{C}(v_1)\\cap \\widehat{C}(v_2).\\]\nLet $\\theta(\\widehat{C}) = \\widehat{C}(v)$ for some nonzero $v\\in A(Y,F)$.\nSince $v_1\\perp v_2$, $C(v_1) \\cap \\widehat{C}(v_2) = \\emptyset$.\nTherefore, \n\\[C(v_1) \\cap C(v) \\subseteq C(v_1) \\cap \\widehat{C}(v_2) =\\emptyset.\n\\]\nHence $\\widehat{C}(v_1)\\cap {C}(v) = \\emptyset$. But $\\emptyset \\neq C(v) \\subseteq \\widehat{C}(v) = \\theta(\\widehat{C}) \\subseteq \\widehat{C}(v_1)\n$. Thus we have a contradiction. This proves the claim.\n\nFrom the claim, $g=h$ on $\\widehat{C}(u_2)$. Hence $Tg = Th$ on $\\theta(\\widehat{C}(u_2)) = \\widehat{C}(v_2)$. In particular, $y\\notin C(Tg-Th)$. This completes the proof of the lemma.\n\\end{proof}\n\n\\begin{thm}\n\\label{t5}\nSuppose that $A(X,E)$ and $A(Y,F)$ are both basic and compatible. A bijection $T:A(X,E)\\to A(Y,F)$ is a biseparating map if and only if it is locally determined.\n\\end{thm}\n\n\\begin{proof}\nAssume that $T$ is biseparating. Take any $f\\in A(X,E)$ and let $\\theta = \\theta_f$. By Proposition \\ref{p4}, $\\theta$ is independent of the choice of $f$.\nThe properties enunciated for $\\theta$ now follow from the same ones for $\\theta_f$ by Proposition \\ref{p2}. Therefore, $T$ is locally determined.\n\nConversely, suppose that $T$ is locally determined. Let $f,g,h\\in A(X,E)$ be such that $f\\perp_h g$.\nThen $f =h$ on $\\widehat{C}(g-h)$.\nTherefore, $Tf = Th$ on $\\theta(\\widehat{C}(g-h))$.\nBy Lemma \\ref{l2.0}, $Tf = Th$ on $C(Tg-Th)$. Thus $Tf \\perp_{Th} Tg$.\nSince the same argument applies to $T^{-1}$, we see that $T$ is biseparating.\n\\end{proof}\n\n\n\nLet us give some examples of function spaces that are both basic and compatible. The verifications are simple and will be omitted. \nIf $G$ is a Banach space, a {\\em bump function} on $G$ is a nonzero real-valued function on $G$ with bounded support.\n\n\n\\begin{ex}\\label{e1.7}\nLet $A(X,E)$ be any of the spaces described below. Then $A(X,E)$ is both basic and compatible.\nFurthermore, $\\widehat{C}(f) = \\overline{C(f)}$ for any $f \\in A(X,E)$. \n\\begin{enumerate}\n\\item Let $X$ be a Hausdorff completely regular topological space and let $E$ be a nonzero Hausdorff topological vector space.\n$A(X,E) = C(X,E)$ or $C_*(X,E)$, the subspace consisting of all bounded functions in $C(X,E)$. (By a bounded function, we mean a function whose image is bounded in $E$, i.e, can be absorbed by any neighborhood of $0$.)\n\\item Let $X$ be a metric space and let $E$ be a normed space. Take $A(X,E)$ to be one of the following spaces. $U(X,E)$, the space of all $E$-valued uniformly continuous functions on $X$; or $\\operatorname{Lip}(X,E)$, the space of all $E$-valued Lipschitz functions on $X$; or $U_*(X,E)$, respectively, $\\operatorname{Lip}_*(X,E)$, the bounded functions in $U(X,E)$ and $\\operatorname{Lip}(X,E)$ respectively.\n\\item Let $X$ be an open set in a Banach space $G$, $p\\in {\\mathbb N}\\cup\\{\\infty\\}$, and let $E$ be a normed space. Assume that $G$ supports a $C^p$ bump function and take $A(X,E) = C^p(X,E)$, the space of all $p$-times continuous differentiable $E$-valued functions on $X$. Alternatively, let $A(X,E) = C^p_*(X,E)$, the subspace of $C^p(X,E)$ so that \n$D^kf$ is bounded on $X$ for $k\\in \\{0\\}\\cup{\\mathbb N}$, $k\\leq p$. In the latter case, assume that $G$ supports a $C^p_*$ bump function.\n\\end{enumerate}\n\\end{ex}\n\n\n\n\n\\section{Spaces of continuous functions}\\label{s2}\n\n\nIn this section, let $X$ and $Y$ be Hausdorff completely regular topological spaces and $E$ and $F$ be nontrivial Hausdorff topological vector spaces.\nTake $A(X,E) = C(X,E)$ or $C_*(X,E)$ and $A(Y,F) = C(Y,F)$ or $C_*(Y,F)$.\nLet $T:A(X,E)\\to A(Y,F)$ be a biseparating map.\nWithout loss of generality, we may assume that $T0 =0$.\nWe retain the notation in \\S \\ref{s1}.\nThe main aim is to derive topological relationship between $X$ and $Y$ based on the map $T$.\nRecall that a Hausdorff completely regular topological space $X$ has a ``largest\" compactification, namely the Stone-\\v{C}ech compactification $\\beta X$. \nIf $V$ is a set in $Y$, denote its closures in $Y$ and $\\beta Y$ by $\\overline{V}$ and $\\overline{V}^{\\beta Y}$ respectively.\nBy Example \\ref{e1.7}, $\\widehat{C}(f) = \\overline{C(f)}$ for any $f\\in A(X,E)$.\n\n\n\n\n\n\n\\begin{lem}\\label{l2.0.1}\nLet $U_i, i=1,2$, be open sets in $\\beta X$ so that $U_i\\cap X\\in {\\mathcal C}(X)$ and that $\\overline{U_1}^{\\beta X}\\cap \\overline{U_2}^{\\beta X}= \\emptyset$. Then $\\overline{\\theta(\\overline{U_1\\cap X})}^{\\beta X} \\cap \\overline{\\theta(\\overline{U_2\\cap X})}^{\\beta X} = \\emptyset$.\n\\end{lem}\n\n\\begin{proof}\nLet $v$ be a nonzero vector in $F$. There exists $h\\in C_*(X)$ so that $h(x) =1$ for all $x\\in U_1\\cap X$ and $h(x) = 0$ for all $x\\in U_2\\cap X$. The function $f = h\\cdot T^{-1}(1\\otimes v)$ belongs to $A(X,E)$. By Lemma \\ref{t5}, $Tf = 1\\otimes v$ on $\\theta(\\overline{U_1\\cap X})$ and $Tf = 0$ on $\\theta(\\overline{U_2\\cap X})$.\nSince $F$ is a Hausdorff topological vector space, it is completely regular. \nSo there exists a continuous function $g: F\\to {\\mathbb R}$ so that $g(v) \\neq g(0)$.\nSet $k =g\\circ (Tf):Y\\to {\\mathbb R}$. Then $k$ is continuous on $Y$ has hence has a continuous extension $\\widetilde{k}:\\beta Y\\to {\\mathbb R}\\cup \\{\\infty\\}$.\nNow $k(y) = g(v)$ for all $y\\in \\theta(\\overline{U_1\\cap X})$ and $k(y) = g(0)$ for all $y\\in \\theta(\\overline{U_2\\cap X})$.\nBy continuity of $\\widetilde{k}$, the sets $\\overline{\\theta(\\overline{U_i\\cap X})}^{\\beta X}$, $ i=1,2$, must be disjoint.\n\\end{proof}\n\nFor any $x\\in \\beta X$, let ${\\mathcal F}_x$ be the family of all open neighborhoods $U$ of $x$ in $\\beta X$ so that $U\\cap X \\in {\\mathcal C}(X)$.\nDefine ${\\mathcal F}_y$ similarly for $y\\in \\beta Y$.\nWe will use the following fact which is easily deduced from the Urysohn Lemma.\nLet $U$ be an open neighborhood of a point $x$ in $\\beta X$. There exists an open neighborhood $V\\in {\\mathcal F}_x$ so that $V\\subseteq U$.\n\n\n\\begin{lem}\\label{l2.0.2}\nLet $x\\in \\beta X$, $y\\in \\beta Y$. Then $y\\in\\overline{\\theta(\\overline{U\\cap X})}^{\\beta Y}$ for all $U \\in {\\mathcal F}_x$ if and only if \n$x\\in\\overline{\\theta^{-1}(\\overline{V\\cap Y})}^{\\beta X}$ for all $V \\in {\\mathcal F}_y$.\n\\end{lem}\n\n\n\\begin{proof}\nAssume that $y\\in\\overline{\\theta(\\overline{U\\cap X})}^{\\beta Y}$ for all $U \\in {\\mathcal F}_x$.\nSuppose that there exists $V\\in {\\mathcal F}_y$ so that $x\\notin\\overline{\\theta^{-1}(\\overline{V\\cap Y})}^{\\beta X}$.\nChoose $U \\in {\\mathcal F}_x$ so that $\\overline{U}^{\\beta X} \\cap \\overline{\\theta^{-1}(\\overline{V\\cap Y})}^{\\beta X} = \\emptyset$.\nNote that by definition of $\\theta^{-1}$, $\\theta^{-1}(\\overline{V\\cap Y}) = \\overline{W_0}$ for some $W_0\\in {\\mathcal C}(X)$.\nExpress $W_0 = W\\cap X$ for some open set $W\\in \\beta X$.\nThen $\\overline{U}^{\\beta X} \\cap \\overline{W}^{\\beta X} = \\emptyset$.\nBy Lemma \\ref{l2.0.1}, $\\overline{\\theta(\\overline{U\\cap X})}^{\\beta X} \\cap \\overline{\\theta(\\overline{W\\cap X})}^{\\beta X} = \\emptyset$. By choice, $y \\in \\overline{\\theta(\\overline{U\\cap X})}^{\\beta Y}$.\nAlso, since $V\\in {\\mathcal F}_y$, \n\\[ y \\in \\overline{{V\\cap Y}}^{\\beta Y}= \\overline{\\theta(\\overline{W\\cap X})}^{\\beta Y}. \\]\nThis contradicts the disjointness of $\\overline{\\theta(\\overline{U\\cap X})}^{\\beta X}$ and $\\overline{\\theta(\\overline{W\\cap X})}^{\\beta X}$ and completes the proof of the ``only if'' part. The reverse implication follows by symmetry.\n\\end{proof}\n\n\\begin{lem}\\label{l2.0.3}\nFor any $x\\in \\beta X$, the set $\\bigcap\\{\\overline{\\theta(\\overline{U\\cap X})}^{\\beta Y}: U \\in {\\mathcal F}_x\\}$ has exactly one point in $\\beta Y$.\n\\end{lem}\n\n\\begin{proof}\nLet $U_i$, $i=1,\\dots, n$, be sets in ${\\mathcal F}_x$.\nThen $\\bigcap^n_{i=1}U_i$ is an open neighborhood of $x$ in $\\beta X$. By the remark after Lemma \\ref{l2.0.1}, \nthere exists $U\\in {\\mathcal F}_x$ so that $U \\subseteq \\bigcap^n_{i=1}U_i$.\nThen $\\overline{\\theta(\\overline{U\\cap X})}^{\\beta Y} \\subseteq \\bigcap^n_{i=1}\\overline{\\theta(\\overline{U_i\\cap X})}^{\\beta Y}$.\nSince the set on the left is nonempty, this shows that the family $\\{\\overline{\\theta(\\overline{U\\cap X})}^{\\beta Y}: U \\in {\\mathcal F}_x\\}$, which consists of closed sets in $\\beta Y$, has the finite intersection property. By compactness of $\\beta Y$, we conclude that the intersection in the statement of the lemma is nonempty.\n\nSuppose that $y_1,y_2$ are distinct points in \n $\\bigcap\\{\\overline{\\theta(\\overline{U\\cap X})}^{\\beta Y}: U \\in {\\mathcal F}_x\\}$.\nChoose sets $V_i \\in {\\mathcal F}_{y_i}$, $i=1,2$, so that $\\overline{V_1}^{\\beta _Y} \\cap \\overline{V_2}^{\\beta Y} = \\emptyset$.\n By Lemma \\ref{l2.0.2}, $x\\in \\overline{\\theta^{-1}(\\overline{V_i\\cap Y})}^{\\beta X}$, $i=1,2$.\nThis contradicts Lemma \\ref{l2.0.1} applied to the map $\\theta^{-1}$.\n\\end{proof}\n\nDefine the map $\\varphi: \\beta X\\to \\beta Y$ by taking $\\varphi(x)$ to be the unique point in $\\bigcap\\{\\overline{\\theta(\\overline{U\\cap X})}^{\\beta Y}: U \\in {\\mathcal F}_x\\}$.\nBy symmetry, we also have an analogous map $\\psi : \\beta Y\\to \\beta X$.\nNow we arrive at the first structural result on biseparating maps on vector-valued $C\/C_*$ spaces.\n\n\n\n\\begin{thm}\\label{t2.0.4}\nLet $X, Y$ be Hausdorff completely regular topological spaces and let $E,F$ be nonzero Hausdorff topological vector spaces.\nSuppose that $T:A(X,E)\\to A(Y,F)$ is a biseparating map, where $A(X,E) = C(X,E)$ or $C_*(X,E)$ and $A(Y,F) = C(Y,F)$ or $C_*(Y,F)$.\nThen there is a homeomorphism $\\varphi:\\beta X\\to \\beta Y$ so that for any $f,g \\in A(X,E)$ and any open set $U$ in $\\beta {X}$, $f = g$ on $U\\cap X$ if and only if $Tf = Tg$ on ${\\varphi}(U) \\cap Y$.\n\\end{thm}\n\n\\begin{proof}\nConsider the maps $\\varphi:\\beta X \\to \\beta Y$ and $\\psi: \\beta Y \\to \\beta X$ given above.\nBy Lemma \\ref{l2.0.2}, $\\varphi$ and $\\psi$ are mutual inverses.\nLet us show that $\\varphi$ is continuous. \nIf $\\varphi$ is not continuous at some $x_0 \\in \\beta X$, then there is a net $(x_\\alpha)$ converging to $x_0$ so that $(\\varphi(x_\\alpha))$ converges to $y_1 \\neq \\varphi(x_0)= y_0$.\nChoose $V_i \\in {\\mathcal F}_{y_i}$, $i=0,1$, so that $\\overline{V_0}^{\\beta Y} \\cap \\overline{V_1}^{\\beta Y} = \\emptyset$.\nFor a cofinal set of $\\alpha$, $V_1 \\in {\\mathcal F}_{\\varphi(x_\\alpha)}$, hence $x_\\alpha = \\psi(\\varphi(x_\\alpha)) \\in \\overline{\\theta^{-1}(\\overline{V_1\\cap Y})}^{\\beta X}$. Therefore, $x_0 \\in \\overline{\\theta^{-1}(\\overline{V_1\\cap Y})}^{\\beta X}$. Also, $V_0\\in {\\mathcal F}_{y_0}$ implies that $x_0\\in \\overline{\\theta^{-1}(\\overline{V_0\\cap Y})}^{\\beta X}$.\nThis is impossible since $\\overline{\\theta^{-1}(\\overline{V_i\\cap Y})}^{\\beta X}$, $i =0,1$, are disjoint by Lemma \\ref{l2.0.1}.\nThis completes the proof of continuity of $\\varphi$. It follows that $\\varphi$ is a homeomorphism by symmetry.\n\nLet $U$ be an open set in $\\beta X$ and suppose that $f= g$ on $U\\cap X$ for some $f,g\\in A(X,E)$. \nLet $y_0\\in \\varphi(U)\\cap Y$ and set $x_0 = \\psi(y_0)\\in U$. We wish to show that $Tf(y_0) =Tg(y_0)$.\nBy the remark preceding Lemma \\ref{l2.0.2}, we may assume that $U\\cap X \\in {\\mathcal C}(X)$. \nThen $U\\in {\\mathcal F}_{x_0}$. By definition of $\\varphi$, $y_0 = \\varphi(x_0) \\in \\overline{\\theta(\\overline{U\\cap X})}^{\\beta Y}$.\nSince $y_0\\in Y$ and $\\theta(\\overline{U\\cap X})$ is closed in $Y$, $y_0 \\in \\theta(\\overline{U\\cap X})$.\nBy Theorem \\ref{t5}, $Tf = Tg$ on $\\theta(\\overline{U\\cap X})$.\nHence $Tf(y_0) = Tg(y_0)$.\nThis proves that $Tf = Tg$ on $\\varphi(U) \\cap Y$.\nThe reverse implication follows by symmetry.\n\\end{proof}\n\n\n \n\n\n\n\n\n\n\nRecall that a Hausdorff completely regular topological space\n$X$ is {\\em realcompact} if for any $x_0\\in \\beta X\\backslash X$, there is a continuous function $f:\\beta X\\to [0,1]$ so that $f(x_0) =1 > f(x)$ for all $x\\in X$.\nFor more on realcompact spaces, refer to the classic \\cite{GJ}.\nIn particular, let $\\upsilon X$ be the Hewitt realcompactification of $X$ \\cite{GJ}. Then every $f\\in C(X)$ has a unique continuous extension to a (real valued) function $\\stackrel{\\smile}{f} \\in C(\\upsilon X)$. \nThe map $f\\mapsto\\stackrel{\\smile}{f}$ is an algebraic isomorphism and hence biseparating. Hence it is rather natural to consider realcompact spaces in the present context.\nWhen one or more of the spaces $X$ or $Y$ is realcompact, Theorem \\ref{t2.0.4} can be improved.\n\n\\begin{lem}\\label{l2.4}\\cite[Lemma 3.2]{LT}\nLet $Y$ be a realcompact space and let $y_0 \\in \\beta Y \\backslash Y$. There exist open sets $U_n$ and $V_n$ in $\\beta Y$, $n \\in {\\mathbb N}$, such that\n\\begin{enumerate}\n\\item $\\overline{U_n}^{\\beta Y} \\subseteq V_n$ for all $n$;\n\\item $y_0 \\in \\overline{\\bigcup^\\infty_{n=m}U_n}^{\\beta Y}$ for all $m$;\n\\item $Y \\cap \\bigcap^\\infty_{m=1}\\overline{\\bigcup^\\infty_{n=m}V_n}^{\\beta Y} = \\emptyset$;\n\\item $\\overline{V_n}^{\\beta Y} \\cap \\overline{V_m}^{\\beta Y} = \\emptyset$ if $n\\neq m$.\n\\end{enumerate}\n\\end{lem}\n\n\\begin{lem}\\label{l2.5.1}\nLet $E$ be a Hausdorff topological vector space and let $0 \\neq u\\in E$.\nThere is a continuous function $h:E\\to {\\mathbb R}$ so that $h(nu)= u$ for all $n\\in {\\mathbb N}$.\n\\end{lem}\n\n\\begin{proof}\nLet $U$ be a circled open neighborhood of $0$ in $E$ so that $u \\notin U + U +U$.\nThen let $V$ be an open neighborhood of $0$ so that $\\overline{V}\\subseteq U$.\nSet $V_n = nu + V$, $n\\in{\\mathbb N}$. Suppose that $m\\neq n$ and $\\overline{V_m} \\cap \\overline{V_n} \\neq \\emptyset$.\nThen there are $v_1,v_2\\in \\overline{V}$ so that $mu+v_1 = nu + v_2$.\nThus\n\\[ u = \\frac{v_2}{m-n} + \\frac{v_1}{n-m} \\in U+U \\subseteq U+U+U,\n\\]\ncontrary to the choice of $U$. This shows that $\\overline{V_m}\\cap \\overline{V_n} = \\emptyset$ if $m\\neq n$.\n\nNext, we claim that \n$\\overline{\\bigcup V_n} = \\bigcup \\overline{V_n}$.\nSuppose that $x\\in \\overline{\\bigcup V_n}$. \nChoose $n_0\\in {\\mathbb N}$ so that $\\frac{x}{n_0} \\in U$.\nFor any $n\\geq n_0$, if $V_n \\cap (x+U) \\neq \\emptyset$, then there are $v\\in V$ and $w\\in U$ so that \n$nu+ v = x+w$. Thus \n\\[ u = \\frac{x}{n} + \\frac{w}{n} -\\frac{v}{n} \\in U + U + U,\\]\na contradiction. Hence $x\\notin \\overline{\\bigcup_{n\\geq n_0}V_n}$.\nTherefore, $x\\in \\overline{\\bigcup^{n_0-1}_{n=1}V_n} = \\bigcup^{n_0-1}_{n=1}\\overline{V_n}$.\nThis proves the claim.\n\n\nSince $E$ is completely regular, for each $n\\in {\\mathbb N}$, there exists a continuous function $h_n:E\\to {\\mathbb R}$ so that $h_n(nu) = n$ and that $h_n(x) = 0$ if $x \\notin V_n$.\nDefine $h:E\\to {\\mathbb R}$ by $h(x) = h_n(x)$ if $x\\in V_n$ for some $n$ and $h(x) = 0$ otherwise.\nFrom the properties of the sets $V_n$ shown above, for each $x\\in E$, there are an open neighborhood $O$ \nof $x$ and some $n\\in {\\mathbb N}$ so that $h = h_n$ on $O$ or $h = 0$ on $O$.\nIt follows easily that $h$ is continuous.\nObviously, $h(nu)= n$ for all $n\\in {\\mathbb N}$.\n\\end{proof}\n\n\\begin{lem}\\label{l2.6}\nIf $A(Y,F) = C(Y,F)$ and $Y$ is realcompact, then $\\varphi(X)\\subseteq Y$.\n\\end{lem}\n\n\\begin{proof}\nRetain the notation of Theorem \\ref{t2.0.4}. Suppose that there exists $x_0\\in X$ so that $y_0 = \\varphi(x_0) \\in \\beta Y \\backslash Y$.\nChoose open sets $U_n, V_n$ in $\\beta Y$ using Lemma \\ref{l2.4}.\nFrom property (1) of said lemma, there exists a continuous function $f_n:\\beta Y \\to [0,1]$ so that\n$f_n =1$ on $U_n$ and $f_n = 0$ outside $V_n$.\nFix a nonzero vector $u \\in E$ and let $g_n$ be defined on $Y$ by $g_n(y) = f_n(y)T(n\\otimes u)(y)$.\nThen set $g(y) = g_n(y)$ if $y\\in V_n\\cap Y$ for some $n$ and $g(y) = 0$ if $y \\in Y \\backslash (\\bigcup^\\infty_{n=1}V_n)$.\nFix $y \\in Y$. By property (3) of Lemma \\ref{l2.4}, there exists $m$ so that $y \\notin \\overline{\\bigcup^\\infty_{n=m}V_n}^{\\beta Y}$.\nBy property (4) of Lemma \\ref{l2.4}, there exists at most one $n_0$, $1\\leq n_0< m$, so that $y\\in \\overline{V_{n_0}}^{\\beta Y}$.\nTherefore, there exists an open neighborhood $U$ of $y$ in $Y$ so that $g = g_{n_0}$ or $g =0$ on the set $U$.\nThus $g$ is continuous at $y$. Since $y\\in Y$ is arbitrary, $g\\in C(Y,F)$.\nAs $g = T(n\\otimes u)$ on $U_n\\cap Y$, by Theorem \\ref{t2.0.4}, $T^{-1}g = nu$ on ${\\varphi}^{-1}(U_n)\\cap X$.\nBy Lemma \\ref{l2.5.1}, there is a continuous function $h:E\\to {\\mathbb R}$ so that $h(nu) = n$ for all $n$.\nSet $k = h\\circ T^{-1}g \\in C(X)$. Let $m\\in {\\mathbb N}$. From the above, $k \\geq m$ on ${\\varphi}^{-1}(\\bigcup^\\infty_{n=m}U_n)\\cap X$.\nBy (2) of Lemma \\ref{l2.4}, $y_0 \\in \\overline{\\bigcup^\\infty_{n=m}U_n}^{\\beta Y}$.\nSince each $U_n$ is open in $\\beta Y$ and $\\varphi(X)$ is dense in $\\beta Y$,\n\\[y_0 \\in \\overline{(\\bigcup^\\infty_{n=m}U_n) \\cap \\varphi(X)}^{\\beta Y}.\n\\] As $\\varphi:\\beta X\\to \\beta Y$ is a homeomorphism,\n\\[x_0 =\\varphi^{-1}(y_0) \\in \\overline{\\varphi^{-1}(\\bigcup^\\infty_{n=m}U_n)\\cap X}^{\\beta X}.\n\\]\nRecall that $x_0\\in X$. By continuity of $k$, $k(x_0) \\geq m$. This is a contradiction since $k$ is real-valued and $m$ is arbitrary.\n\\end{proof}\n\n\\begin{thm}\\label{t2.8}\nLet $X$, $Y$ be realcompact spaces and let $E$ and $F$ be Hausdorff topological vector spaces.\nIf $T:C(X,E)\\to C(Y,F)$ is a (nonlinear) biseparating map, then $X$ and $Y$ are homeomorphic.\n\\end{thm}\n\n\\begin{proof}\nBy Lemma \\ref{l2.6}, $\\varphi$ maps $X$ into $Y$. By symmetry, $\\varphi$ maps $X$ onto $Y$.\nHence it is a homeomorphism from $X$ onto $Y$.\n\\end{proof}\n\nTheorem \\ref{t2.8} generalizes the same result obtained in \\cite{A, BBH} for {\\em linear} biseparating maps.\n\n\\begin{thm}\\label{t2.9}\nSuppose that $Y$ is realcompact. Let $T:C_*(X,E)\\to C(Y,F)$ be a biseparating map. Then $Y$ is compact.\n\\end{thm}\n\n\\begin{proof}\nThe proof is the same as the proof of Lemma \\ref{l2.6}.\nIf $y_0 \\in \\beta Y \\backslash Y$, then following the proof of Lemma \\ref{l2.6}, one can choose $0\\neq u\\in E$ and construct a function $g\\in C(Y,F)$ \nand a sequence of nonempty sets $(W_n)$ in $X$ so that $T^{-1}g = nu$ on $W_n$ for each $n\\in {\\mathbb N}$.\n($W_n$ is the set $\\varphi^{-1}(U_n)\\cap X$ in the proof of Lemma \\ref{l2.6}.)\nSince the set $(nu)^\\infty_{n=1}$ cannot be a bounded set in $E$, $T^{-1}g\\notin C_*(X,E)$, contrary to the assumption. Therefore, $Y = \\beta Y$ is compact.\n\\end{proof}\n\n\\noindent \n{\\bf Remark}. It is well known that $C_*(X)$ is algebraically isomorphic to $C(\\beta X)$. Hence one cannot expect $X$ to be compact in Theorem \\ref{t2.9} in general.\n\n\n\\section{Metric cases -- general results}\\label{s3}\nThroughout this section, let $X$ and $Y$ be metric spaces, $E$ and $F$ be nontrivial normed spaces and $A(X,E)$, $A(Y,F)$ be vector subspaces of $C(X,E)$ and $C(Y,F)$ respectively. Recall that a {\\em bump function} on a Banach space $G$ is a nonzero real-valued function on $G$ with bounded support. \nWhen we speak of the spaces $C^p(X,E)$ or $C^p_*(X,E)$, it will be assumed additionally that $X$ is an open set in a Banach space that supports a $C^p$, respectively, $C^p_*$, bump function. \nA sequence $(x_n)$ in $X$ is {\\em separated} if $\\inf_{n\\neq m}d(x_n,x_m) > 0$.\nThe main aim of the section is an analog of the structural result Theorem \\ref{t2.0.4} when $X$ and $Y$ are metric spaces. In this instance, we make use of completion instead of compactification.\n\n\n\\begin{prop}\\label{p3.0.1}\nLet $A(X,E)$ be one of the spaces $C(X,E)$, $C_*(X,E)$, $U(X,E)$, $U_*(X,E)$, $\\operatorname{Lip}(X,E)$, $\\operatorname{Lip}_*(X,E)$, $C^p(X,E)$ or $C^p_*(X,E)$.\nThen $A(X,E)$ has\n the following properties.\n\\begin{enumerate}\n\\item[(S1)] $A(X,E)$ is compatible.\n\\item[(S2)] For any $x\\in X$ and any $\n\\varepsilon >0$, there exists $C\\in {\\mathcal C}(X)$ so that $x\\in C$ and $\\operatorname{diam} C <\\varepsilon$. In particular, $A(X,E)$ is basic and $\\widehat{C}(f) = \\overline{C(f)}$ for all $f\\in A(X,E)$. \n\\item[(S3)] If $f\\in A(X,E)$ and $(x_n)$ is a separated sequence in $X$, then there are a sequence $(C_n)$ in ${\\mathcal C}(X)$ and a function $g\\in A(X,E)$ so that $x_n \\in C_n$ for all $n$, $g = f$ on $C_n$ for infinitely many $n$ and $g = 0$ on $C_n$ for infinitely many $n$. \n\\item[(S4)] Let $(x_n)$ and $(x'_n)$ be Cauchy sequences so that $\\inf_{m,n} d(x_m,x'_n) > 0$.\nFor any $f\\in A(X,E)$, there are sets $U,V\\in {\\mathcal C}(X)$ and a function $g\\in A(X,E)$ so that $x_n\\in U$ and $x'_n\\in V$ for infinitely many $n$, $g = f$ on $U$ and $g = 0$ on $V$.\n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof}\nExcept for property (S3) for the spaces $U(X,E)$ and $\\operatorname{Lip}(X,E)$, all other verifications are straightforward and are left to the reader.\nTo verify (S3) for $A(X,E) = U(X,E)$ or $\\operatorname{Lip}(X,E)$, \nlet $(x_n)$ be a sequence in $X$ so that $\\inf_{n\\neq m}d(x_n,x_m) = 3r > 0$ and let $f\\in A(X,E)$.\n\nIn the first instance, assume that $(f(x_n))$ is a bounded sequence in $E$. \nSince $f\\in U(X,E)$, there exists $0< r' < r$ so that \\[\\sup\\{\\|f(x)\\|: x\\in \\bigcup_n B(x_n,r')\\} = M < \\infty.\\]\nFor each $n\\in {\\mathbb N}$, let $h_n, k_n:E\\to [0,1]$ be defined by \n\\[ h_n(x) = (2 - \\frac{2}{r'}d(x,x_n))^+\\wedge 1 \\text{ and } k_n(x) = (1 - \\frac{d(x,x_n)}{r'})^+.\\]\nThen $(h_n)$ is a sequence of disjoint functions. Let $h$ be the pointwise sum $\\sum h_{2n-1}$. It is easily verified that $h: E\\to [0,1]$ is a Lipchitz function with Lipschitz constant $\\frac{2}{r'}$.\nTake $g = h\\cdot f$. For each $n$, let $C_n = B(x_n,\\frac{r'}{2})$. Fix a nonzero vector $a\\in E$. Then $k_n\\otimes a \\in \\operatorname{Lip}(X,E) \\subseteq A(X,E)$. Hence\n$C_n = C(k_n\\otimes a) \\in {\\mathcal C}(X)$. Clearly $g= f$ on $C_n$ if $n$ is odd and $g = 0$ on $C_n$ if $n$ is even.\nLet us verify that $g\\in A(X,E)$.\nIndeed, suppose that $s, t\\in X$. If $s,t\\notin \\bigcup_n B(x_n,r')$, then $h(s) = h(t) =0$. Hence $\\|g(s) - g(t)\\| = 0$. Otherwise, assume without loss of generality that $t\\in \\bigcup_n B(x_n,r')$. We have\n\\begin{align*}\n\\|g(s) -g(t)\\| & \\leq |h(s)|\\, \\|f(s) - f(t)\\| + |h(s)-h(t)|\\,\\|f(t)\\| \\\\\n& \\leq \\|f(s)-f(t)\\| + \\frac{2}{r'}\\,d(s,t)\\cdot M.\n\\end{align*}\nSince $f\\in A(X,E)$, it follows that $g \\in A(X,E)$.\n\nIn the second case, assume that $(f(x_n))$ is unbounded in $E$.\nLet $t_n = \\|f(x_n)\\|$ for all $n$. By replacing $(x_n)$ by a subsequence if necessary, we may assume that $t_1> 0$ and $6t_n < t_{n+1}$ for all $n$. Define $\\gamma: [0,\\infty) \\to [0,1]$ by\n\\[ \\gamma(t) = \\begin{cases} \n (2 - \\frac{3}{t_n}|t-t_n|)\\wedge 1 &\\text{if $|t-t_n| < \\frac{2t_n}{3}$ for some odd $n$}\\\\\n 0 & \\text{otherwise}. \\end{cases}\n \\]\nDirect verification shows that if $0\\leq a\\leq b\\neq 0$, then $|\\gamma(a)-\\gamma(b)| \\leq \\frac{6|a-b|}{b}$.\n\nLet $g:X\\to E$ be given by $g(x) = \\gamma(\\|f(x)\\|)f(x)$.\nSuppose that $y,z\\in X$ with $\\|f(y)\\| \\leq \\|f(z)\\|$ and $f(z) \\neq 0$.\nThen\n\\begin{align*}\n\\|g(y)- g(z)\\| & \\leq |\\gamma(\\|f(y)\\|) - \\gamma(\\|f(z)\\|)|\\,\\|f(y)\\| + |\\gamma(\\|f(z)\\|)|\\,\\|f(y) - f(z)\\|\\\\\n& \\leq \\frac{6(\\|f(z)\\| - \\|f(y)\\|)}{\\|f(z)\\|}\\,\\|f(y)\\| + \\|f(y) - f(z)\\|\\\\\n& \\leq 7\\|f(y)-f(z)\\|.\n\\end{align*}\nThe same inequality obviously holds if $f(y) = f(z) = 0$.\nSince $f$ belongs to $A(X,E)$, so does $g$. \nAs $\\frac{t_n}{3} < \\|f(x_n)\\| < \\frac{5t_n}{3}$, there exists $C_n\\in {\\mathcal C}(X)$ so that \n\\[ x_n \\in C_n \\subseteq \\{x: \\frac{t_n}{3} < \\|f(x)\\| < \\frac{5t_n}{3}\\}.\\]\n\nFinally, $\\gamma(\\|f(x)\\|) =1$ if $x \\in C_{2n-1}$ and $\\gamma(\\|f(x)\\|) =0$ if $x \\in C_{2n}$.\nHence $g = f$ on $C_{2n-1}$ and $g=0$ on $C_{2n}$.\nThis completes the verification of property (S3) for $A(X,E) = U(X,E)$ or $\\operatorname{Lip}(X,E)$.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\nFor the sake of brevity, let us say that $A(X,E)$ is {\\em standard} if it satisfies properties (S1) -- (S4).\nFor the rest of the section, assume that $A(X,E)$ and $A(Y,F)$ are standard spaces and that $T:A(X,E)\\to A(Y,F)$ is a biseparating map. Without loss of generality, normalize $T$ by taking $T0=0$.\nLet $\\theta:{\\mathcal D}(X) \\to {\\mathcal D}(Y)$ be the map obtained from Theorem \\ref{t5}.\nAs in Section \\ref{s2}, we will show that $\\theta$ induces a point mapping $\\varphi$.\n\n\n\nDenote by $\\widetilde{X}$ and $\\widetilde{Y}$ the respective completions of the spaces $X$ and $Y$.\nFor any subset $U$ of $\\widetilde{X}$, denote the closure of $U$ in $\\widetilde{X}$ by $\\widetilde{U}$. Similarly for sets in $\\widetilde {Y}$.\nIf $x_0\\in \\widetilde{X}$ and $(U_n)$ is a sequence of nonempty sets in ${\\mathcal C}(X)$ so that $\\operatorname{diam} U_n \\to 0$ and $d(x_0,U_n)\\to 0$, we write $(U_n) \\sim x_0$. By condition (S2), for any $x_0\\in \\widetilde{X}$, there is always a sequence $(U_n)$ so that $(U_n)\\sim x_0$.\n\n\n\nSuppose that $y \\in \\theta(\\overline{U})\\cap V$, where $U = C(u) \\in {\\mathcal C}(X)$ and $V= C(v)\\in {\\mathcal C}(Y)$.\nSince $\\theta(\\overline{U}) = \\overline{C(Tu)}$, $C(Tu) \\cap C(v)\\neq \\emptyset$.\nThus $C(u)\\cap C(T^{-1}v)\\neq \\emptyset$. \nHence $U \\cap \\theta^{-1}(\\overline{V})\\neq \\emptyset$.\n\n\n\n\\begin{lem}\\label{l3.1}\nLet $x_0\\in \\widetilde{X}$ and assume that $(U_n)\\sim x_0$.\nTake $y_n \\in \\theta(\\overline{U_n})$ for each $n$.\n\\begin{enumerate}\n\\item If $x_0\\in X$, then $(y_n)$ is a Cauchy sequence in $Y$.\n\\item If, in additon, $A(X,E)\\subseteq U(X,E)$ and contains a nonzero constant function, then $(y_n)$ is a Cauchy sequence in $Y$ for any $x_0\\in \\widetilde{X}$.\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\nFirst we show that every subsequence of $(y_n)$ has a further Cauchy subsequence. Otherwise, there is a subsequence $(y'_n)$ of $(y_n)$ that is separated. \nUnder assumption (1), $x_0 \\in X$. It follows from condition (S2) that there is a function $f\\in A(X,E)$ so that $f(x_0) \\neq 0$. Under assumption (2), take $f = 1\\otimes a\\in A(X,E)$, where $a\\neq 0$. \nSince $A(Y,F)$ has property (S3), there are a subsequence of $(y_n')$, still denoted as $(y_n')$, a sequence $(V_n)$ in ${\\mathcal C}(Y)$ and a function $g\\in A(Y,E)$ so that $y'_n\\in V_n$ for all $n$, $g= Tf$ on $V_n$ for infinitely many $n$ and $g = 0$ on $V_n$ for infinitely many $n$.\nThen $T^{-1}g = f$ on $\\theta^{-1}(\\overline{V_n})$ for infinitely many $n$ and $T^{-1}g = 0$ on $\\theta^{-1}(\\overline{V_n})$ for infinitely many $n$.\nSince $y'_n \\in \\theta(\\overline{U_n}) \\cap V_n$, $U_n \\cap \\theta^{-1}(\\overline{V_n}) \\neq \\emptyset$ by the discussion just before the lemma. Choose a point $x'_n$ from the intersection. Then $(T^{-1}g)(x'_n)=f(x_n')$ for infintely many $n$ and $0$ infinitely often.\nMoreover, $(x'_n)$ converges to $x_0$ in $\\widetilde{X}$.\nUnder assumption (1),\n$x_0 \\in X$, and we have a contradiction to the continuity of $T^{-1}g$ at $x_0$. \nUnder assumption (2), $T^{-1}g \\in U(X,E)$ and $(x'_n)$ is Cauchy in $X$. Hence $((T^{-1}g)(x'_n))$ is Cauchy in $E$. This is impossible since $(T^{-1}g)(x'_n) = f(x'_n) = a$ and $(T^{-1}g)(x'_n) = 0$ both occur infinitely many times.\n\nIf the whole sequence $(y_n)$ is not Cauchy, then in view of the previous paragraph, there are subsequences $(y_{i_n})$ and $(y_{j_n})$ and $\\varepsilon >0$ so that both subsequences are Cauchy and that $d(y_{i_m},y_{j_n}) > \\varepsilon $ for all $m,n$.\nChoose the function $f$ as in the last paragraph. By property (S4), there are $U,V \\in {\\mathcal C}(Y)$ and $g\\in A(Y,F)$ so that $y_{i_n}\\in U$, $y_{j_n}\\in V$ for infinitely many $n$, $g= Tf$ on $U$ and $g = 0$ on $V$.\nThus $T^{-1}g = f$ on $\\theta^{-1}(\\overline{U})$ and $T^{-1}g = 0$ on $\\theta^{-1}(\\overline{V})$.\nThen $y_{i_n} \\in \\theta(\\overline{U_{i_n}}) \\cap U$ for infinitely many $n$ and hence $U_{i_n} \\cap \\theta^{-1}(\\overline{U}) \\neq \\emptyset$ for infinitely many $n$.\nLet $x_{i_n} \\in U_{i_n} \\cap \\theta^{-1}(\\overline{U})$. Then $(x_{i_n})$ converges to $x_0$ in $\\widetilde{X}$ and $T^{-1}g(x_{i_n}) = f(x_{i_n})$ for all $n$.\nSimilar consideration using the sequence $(y_{j_n})$ shows that there is a sequence $(x_{j_n})$ converging to $x_0$ in $\\widetilde{X}$ so that $T^{-1}g(x_{j_n}) = 0$ for all $n$.\nUnder assumption (1), \n\\[ \\lim T^{-1}g(x_{i_n}) = \\lim f(x_{i_n}) = f(x_0) \\neq 0 = \\lim T^{-1}g(x_{j_n}),\\]\ncontradicting the continuity of $T^{-1}g$ at $x_0$.\nUnder assumption (2), $f(x_{i_n}) =a\\neq 0$ by choice of $f$. Thus $T^{-1}g(x_{i_n}) = a$ and $T^{-1}g(x_{j_n}) = 0$ for all $n$, contradicting the uniform continuity of $T^{-1}g$.\n\\end{proof}\n\n\n\nSuppose that $x_0\\in \\widetilde{X}$. Let $(U_n)\\sim x_0, (V_n)\\sim x_0$ and $y_n\\in U_n, z_n\\in V_n$ for all $n$.\nThen $(U_1,V_1,U_2,V_2,\\dots)\\sim x_0$.\nBy Lemma \\ref{l3.1}, if $x_0\\in X$, then the sequence $(y_1,z_1,y_2,z_2,\\dots)$ is Cauchy.\nDefine $\\varphi: X\\to \\widetilde{Y}$ by setting $\\varphi(x) = \\lim y_n$, where $(U_n)\\sim x$ and $y_n \\in \\theta(\\overline{U_n})$ for all $n$. From the above, $\\varphi(x)$ is independent of the choices of $(U_n)$ and $(y_n)$.\nSimilarly, if $A(X,E)\\subseteq U(X,E)$ and contains a constant function $1\\otimes a$ for some $a\\in E\\backslash \\{0\\}$, then Lemma \\ref{l3.1}(2) shows that there is a well defined map $\\widetilde{\\varphi}:\\widetilde{X}\\to \\widetilde{Y}$ given by \n$\\widetilde{\\varphi}(x) = \\lim y_n$, where $(U_n)\\sim x$ and $y_n \\in \\theta(\\overline{U_n})$ for all $n$.\nClearly, in this case, $\\widetilde{\\varphi}$ extends $\\varphi$.\n By symmetry, there is also a similar map $\\psi:Y\\to \\widetilde{X}$ and a map $\\widetilde{\\psi}:\\widetilde{Y}\\to \\widetilde{X}$ under corresponding assumptions on $A(Y,F)$.\n\n\n\n\n\n\\begin{lem}\\label{l3.2}\nThe map $\\varphi$ is continuous from $X$ into $\\widetilde{Y}$. If, in addition, $A(X,E)\\subseteq U(X,E)$ and contains a nonzero constant function, then $\\widetilde{\\varphi}:\\widetilde{X}\\to \\widetilde{Y}$ is continuous on $\\widetilde{X}$.\n\\end{lem}\n\n\\begin{proof}\nWe will prove the second assertion. The first statement can be shown in the same way.\nUnder the second assumption, $\\widetilde{\\varphi}$ is well defined. Let $(x_n)$ be a sequence in $\\widetilde{X}$ that converges to a point $x_0\\in \\widetilde{X}$.\nBy definition of $\\widetilde{\\varphi}$, for each $n$, $\\widetilde{\\varphi}(x_n) = \\lim_k y_{nk}$, where $y_{nk} \\in \\theta(\\overline{U_{nk}})$ and $(U_{nk})_k \\sim x_n$.\nFor each $n$, choose $k_n$ so that $d(y_{nk_n}, \\widetilde{\\varphi}(x_n)), \\operatorname{diam} U_{nk_n}, d(U_{nk_n}, x_{n}) < \\frac{1}{n}$. Then $y_{n_{k_n}} \\in U_{nk_n}$ and $(U_{nk_n})\\sim x_0$.\nThus $\\widetilde{\\varphi}(x_0) = \\lim y_{n{k_n}} = \\lim \\widetilde{\\varphi}(x_n)$.\n\\end{proof}\n\n\n\nSuppose that $x\\in U \\in {\\mathcal C}(X)$. We can choose $(U_n)$ so that $(U_n)\\sim x$ and $U_n \\subseteq U$ for all $n$. By definition of $\\varphi$, $\\varphi(x) = \\lim y_n$ where $y_n\\in \\theta(\\overline{U_n}) \\subseteq \\theta(\\overline{U})$. Hence $\\varphi(x) \\in \\widetilde{\\theta(\\overline{U})}$.\n\n\n\\begin{lem}\\label{l3.2.1}\nAssume that $A(X,E)\\subseteq U(X,E)$ and contains a nonzero constant function.\nLet $f,g\\in A(X,E)$ and $U$ be an open set in $\\widetilde{X}$. If $f = g$ on $U\\cap X$, then $Tf = Tg$ on the set $\\widetilde{\\varphi}(U) \\cap Y$.\n\\end{lem}\n\n\\begin{proof}\nAssume that $x_0\\in U$ and $y_0 = \\widetilde{\\varphi}(x_0) \\in Y$.\nChoose $(U_n)\\sim x_0$ and let $x_n \\in U_n$.\nBy the foregoing remark, $\\varphi(x_n) \\in \\widetilde{\\theta(\\overline{U_n})}$. Pick $y_n \\in \\theta(\\overline{U_n})$.\nBy definition of $\\widetilde{\\varphi}$, $y_0 = \\widetilde{\\varphi}(x_0) = \\lim y_n$.\nFor all sufficiently large $n$, $\\overline{U_n} \\subseteq U\\cap X$.\nHence $f=g$ on $\\overline{U_n}$. By Theorem \\ref{t5}, $Tf = Tg$ on $\\theta(\\overline{U_n})$.\nIn particular, $Tf(y_n) = Tg(y_n)$.\nBy continuity of $Tf$ and $Tg$ at $y_0$, $Tf(y_0) = Tg(y_0)$.\n\\end{proof}\n\nThe following structure theorem applies to spaces of uniformly continuous functions and spaces of Lipschitz functions.\n\n\\begin{thm}\\label{t3.5}\nSuppose that both $A(X,E)$ and $A(Y,F)$ are standard subspaces of \n$U(X,E)$ and $U(Y,F)$ respectively so that both contain nonzero constant functions.\nThere is a homeomorphism $\\widetilde{\\varphi}:\\widetilde{X}\\to \\widetilde{Y}$ so that if\n $f,g\\in A(X,E)$ and $U$ is an open set in $\\widetilde{X}$, then $f=g$ on $U\\cap X$ if and only if $Tf = Tg$ on $\\widetilde{\\varphi}(U)\\cap Y$.\n \\end{thm}\n \n\\begin{proof}\nUnder the given assumptions, we have well defined continuous maps $\\widetilde{\\varphi}:\\widetilde{X}\\to \\widetilde{Y}$ and $\\widetilde{\\psi}:\\widetilde{Y}\\to \\widetilde{X}$ by Lemma \\ref{l3.2}.\nIn the next paragraph, we will show that $\\widetilde{\\psi}\\circ \\widetilde{\\varphi}$ is the identity map on $\\widetilde{X}$. With symmetry, this allows us to conclude that $\\widetilde{\\varphi}$ is a homeomorphism.\nThe final property in the statement of the theorem follows from Lemma \\ref{l3.2.1} and symmetry.\n\nLet $x_0\\in \\widetilde{X}$ and let $(U_n)\\sim x_0$.\nIt follows from (2) of Lemma \\ref{l3.1} and the definition of $\\widetilde{\\varphi}$ that $\\operatorname{diam} \\theta(\\overline{U_n})\\to 0$ and $d(\\theta(\\overline{U_n}),\\widetilde{\\varphi}(x_0)) \\to 0$. By definition of $\\theta$, there exists $V_n \\in {\\mathcal C}(Y)$ so that $\\theta(\\overline{U_n}) = \\overline{V_n}$.\nThen $(V_n)\\sim \\widetilde{\\varphi}(x_0)$.\nHence $\\widetilde{\\psi}( \\widetilde{\\varphi}(x_0))= \\lim x_n$, where $x_n \\in \\theta^{-1}(\\overline{V_n}) = \\overline{U_n}$ for all $n$.\nTherefore, $\\widetilde{\\psi}( \\widetilde{\\varphi}(x_0))= x_0$, as claimed.\n\\end{proof} \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nNext, we consider the cases where one or both of $A(X,E)$ and $A(Y,F)$ is either $C, C_*$ or $C^p$.\n\n\n\\begin{lem}\\label{l3.6}\nSuppose that $A(X,E)$ is standard and $A(Y,F) = C(Y,F)$, $C_*(Y,F)$ or $C^p(Y,F)$. If $T:A(X,E)\\to A(Y,F)$ is a biseparating map, then $\\varphi(X)\\subseteq Y$.\n\\end{lem}\n\n\\begin{proof}\nSuppose on the contrary that there exists $x_0\\in X$ so that $y_0 = \\varphi(x_0) \\in \\widetilde{Y}\\backslash Y$.\nLet $(U_n)\\sim x_0$. Then $(\\theta(\\overline{U_n}))$ is a sequence of sets in $Y$, each with nonempty interior, so that $\\operatorname{diam}\\theta(\\overline{U_n}) \\to 0$ and $d(\\theta(\\overline{U_n}),y_0) \\to 0$.\nHence one can find a sequence $(g_n)$ in $C_*(Y)$, respectively $C^p(Y)$, and a sequence $(V_n)$ of nonempty sets in ${\\mathcal C}(Y)$ so that $g_n =1$ on $V_n$, $\\overline{C(g_n)}\\subseteq \\theta(\\overline{U_n})$ for all $n$, and $\\overline{C(g_m)}\\cap \\overline{C(g_n)} = \\emptyset$ if $m\\neq n$.\nAs observed in the proof of Theorem \\ref{t3.5}, $\\operatorname{diam} \\theta(\\overline{U_n})\\to 0$.\nSo $(\\overline{C(g_n}))$ is a pairwise disjoint sequence so that $\\operatorname{diam} \\overline{C(g_n)}\\to 0$ and $d(\\overline{C(g_n)},y_0) \\to 0$, where $y_0 \\notin Y$. Therefore the pointwise sum $g = \\sum g_{2n}$ belongs to $C_*(Y)$, respectively, $C^p(Y)$.\nBy condition (S2), there exists $f\\in A(X,E)$ so that $f(x_0) \\neq 0$.\nThen $h = g\\cdot Tf$ lies in $A(Y,F)$.\nSince $h = Tf$ on $\\overline{V_n}$ if $n$ is even and $h = 0$ on $\\overline{V_n}$ if $n$ is odd, and $\\overline{V_n} \\in {\\mathcal D}(Y)$,\nby Theorem \\ref{t5}, $T^{-1}h = f$ on $\\theta^{-1}(\\overline{V_n})$ if $n$ is even and \n$T^{-1}h = 0$ on $\\theta^{-1}(\\overline{V_n})$ if $n$ is odd.\nSince $\\overline{V_n} \\subseteq \\theta(\\overline{U_n})$, $\\theta^{-1}(\\overline{V_n}) \\subseteq \\overline{U_n}$.\nChoose $x_n \\in \\theta^{-1}(\\overline{V_n})$ for each $n$. Then $(x_n)$ converges to $x_0$.\nHowever, $T^{-1}h(x_n) = f(x_n)$ if $n$ is odd and $T^{-1}h(x_n) = 0$ if $n$ is even.\nAs $(f(x_n))$ converges to $f(x_0) \\neq 0$, \nthis contradicts the continuity of $T^{-1}h$ at $x_0$. This proves that $\\varphi(X) \\subseteq Y$.\n\\end{proof}\n\nThe next two results can be obtained utilizing the proof of Theorem \\ref{t3.5} and taking into account Lemma \\ref{l3.6}. \n\n\\begin{thm}\\label{t3.7}\nLet $A(X,E) = C(X,E),$ $C_*(X,E)$ or $C^p(X,E)$ and let $A(Y,F) = C(Y,F),$ $C_*(Y,F)$ or $C^q(Y,F)$.\nThere exists a homeomorphism $\\varphi: X\\to Y$ so that \n for any $f,g\\in A(X,E)$, and any open set $U$ in $X$, $f =g$ on $U$ $\\iff$ $Tf = Tg$ on $\\varphi(U)$. \n\\end{thm}\n\n\n\n\n\\begin{thm}\\label{t3.8}\nLet $A(X,E)$ be a standard vector subspace of $U(X,E)$ that contains a nonzero constant fucntion. Suppose that $A(Y,F) = C(Y,F)$, $C_*(Y,F)$ or $C^p(Y,F)$.\nThere exists a homeomorphism $\\varphi: X\\to \\varphi(X)$, where $\\varphi(X)$ is a dense subset of $Y$, and\n for any $f,g\\in A(X,E)$ and any open set $U$ in $X$, $f =g$ on $U$ $\\iff$ $Tf = Tg$ on $\\varphi(U)$. \n\\end{thm}\n\n\\begin{proof}\nWe will only prove the density of $\\varphi(X)$ in $Y$. The other parts follow from the proof of Theorem \\ref{t3.5}, using Lemma \\ref{l3.6}.\nBy Lemma \\ref{l3.2} and Lemma \\ref{l3.6}, $\\varphi$ is a continuous map from $X$ into $Y$ with a continuous extension $\\widetilde{\\varphi}:\\widetilde{X}\\to \\widetilde{Y}$.\nAlso, we have an analogous continuous map $\\psi:Y\\to \\widetilde{X}$. \nFrom the second paragraph of the proof of Theorem \\ref{t3.5}, we see that $\\widetilde{\\varphi}\\circ\\psi$ is the identity map on $Y$. Given $y\\in Y$, $\\psi(y)\\in \\widetilde{X}$. Hence there is a sequence $(x_n)$ in $X$ that converges to $\\psi(y)$.\nBy continuity of $\\widetilde{\\varphi}$ at $\\psi(y)$, $(\\varphi(x_n))=(\\widetilde{\\varphi}(x_n))$ converges to $\\widetilde{\\varphi}(\\psi(y)) =y$.\nThis proves that $y\\in \\overline{\\varphi(X)}$.\n\\end{proof}\n\n\nWe conclude this section with a remark concerning the space $C^p_*(X,E)$, where $X$ is an open set in a Banach space $G$ that supports a $C^p_*$ bump function. In general, it may not be true that all functions in $C^p_*(X,E)$ are uniformly continuous (with respect to the norm on $G$).\nOn the other hand, an easy application of the mean value inequality shows that if $X$ is open and {\\em convex} in $G$, then $C^p_*(X,E) \\subseteq U(X,E)$. \nIn particular, Theorems \\ref{t3.5} and \\ref{t3.8} apply to $C^p_*$ spaces whose domains are convex open sets.\n\n\n\n\n\\section{Pointwise representation}\\label{s4}\n\nRetain the notation of Section \\ref{s3}. That is, let $X$ and $Y$ be metric spaces, $E$ and $F$ be nontrivial normed spaces, and assume that $A(X,E)$ and $A(Y,F)$ are standard vector subspaces of $C(X,E)$ and $C(Y,F)$ respectively. Say that $A(X,E)$ has property (P) if\n\\begin{enumerate}\n\\item[(P)] For any accumulation point $x$ of $\\widetilde{X}$ and any function $f\\in A(X,E)$ so that $\\lim_{\\stackrel{z\\to x}{z\\in X}}f(z) =0$, there are open sets $U$ and $V$ in $X$ and a function $g\\in A(X,E)$ so that $x\\in \\widetilde{U} \\cap \\widetilde{V}$ and that $g =f$ on $U$ and $g = 0$ on $V$.\n\\end{enumerate}\n\n\\medskip\n\n\\noindent {\\bf Remark}. \nIf $x$ is an isolated point of $\\widetilde{X}$, then $x\\in X$. In this case, given $f\\in A(X,E)$ so that $f(x) = 0$, take $U = V= \\{x\\}$ and $g =f$. It is clear that the conditions above are fulfilled.\n\n\n\\begin{prop}\\label{p3.10}\nLet $A(X,E)$ be one of the spaces $C(X,E)$, $C_*(X,E)$, $U(X,E)$, $U_*(X,E)$, $\\operatorname{Lip}(X,E)$ or $\\operatorname{Lip}_*(X,E)$. Then $A(X,E)$ has property (P).\n\\end{prop}\n\n\\begin{proof}\nLet $x_0$ be an accumulation point of $\\widetilde{X}$ and let $f$ be a function in $A(X,E)$ so that $\\lim_{\\stackrel{x\\to x_0}{x\\in X}}f(x) =0$.\nThere is a sequence $(x_n)$ in $X$ converging to $x_0$ so that $0 < d(x_{n+1},x_0) < \\frac{d(x_n,x_0)}{3}$ for all $n$. Set $r_n = d(x_n,x_0)$ and let $\\gamma_n:[0,\\infty)\\to {\\mathbb R}$ be the function\n\\[\\gamma_n(r) = (2 - \\frac{4|r-r_n|}{r_n})^+\\wedge 1.\\]\n$(\\gamma_n)$ is a disjoint sequence of functions. Furthermore,\n\\[ |\\gamma_n(a) - \\gamma_n(b)| \\leq \\frac{4}{r_n}|a-b| \\wedge 1 \\text{ for all $a, b\\geq 0$}.\\]\nWe may assume that $\\|f(x)\\| \\leq1$ if $d(x,x_0) < \\frac{3r_1}{2}$.\nLet \n\\[ g_n(x) = \\gamma_n(d(x,x_0))f(x)\\quad \\text{and} \\quad g = \\sum g_{2n} \\ \\text{(pointwise sum).}\\]\nSince $\\gamma_n(d(\\cdot,x_0))$ is bounded Lipschitz and $f$ is bounded on the support of $g_n$, it is easy to check that $g_n\\in A(X,E)$.\nNote that\n\\[ \\|g_n\\|_\\infty \\leq \\sup\\{\\|f(x)\\| :\\frac{r_n}{2} < d(x,x_0) < \\frac{3r_n}{2}\\} \\to 0.\\]\nTherefore, if $A(X,E)$ is any of the spaces except $\\operatorname{Lip}(X,E)$ or $\\operatorname{Lip}_*(X,E)$, $g$ is the uniform limit of its partial sums and hence belongs to $A(X,E)$.\n\nNow consider the cases $A(X,E) = \\operatorname{Lip}(X,E)$ or $\\operatorname{Lip}_*(X,E)$. First of all, the function $g$ is bounded.\nLet's check that it is Lipschitz. Since $f$ is Lipschitz and $\\lim_{\\stackrel{x\\to x_0}{x\\in X}}f(x) =0$, $\\|f(x)\\| \\leq L(f)d(x,x_0)$, where $L(f)$ is the Lipschitz constant of $f$.\nFor any $n\\in {\\mathbb N}$, we claim that $g_n$ is Lipschitz with $L(g_n) \\leq 7L(f)$. Let $x,z\\in X$, $a = d(x,x_0)$, $b = d(z,x_0)$. \nIf $\\gamma_n(a) = \\gamma_n(b) =0$, then $g_n(x) - g_n(z) =0$. Otherwise, we may assume that $\\gamma_n(a) \\neq 0$, so that $\\frac{r_n}{2} < a < \\frac{3r_n}{2}$.\nThen\n\\begin{align*}\n\\|g_n(x) - g_n(z)\\| & \\leq |\\gamma_{n}(a)-\\gamma_{n}(b)|\\,\\|f(x)\\| + |\\gamma_{n}(b)|\\,\\|f(x) - f(z)\\|\\\\\n&\\leq \\frac{4}{r_{n}}|a-b|\\,L(f)a + \\|f(x) -f(z)\\|\\\\\n&\\leq 6L(f)|a-b| + L(f)d(x,z) \\leq 7L(f)d(x,z).\n\\end{align*}\nThus $L(g_n)\\leq 7L(f)$, as claimed.\nFor any $x,z\\in X$, either there exists $n$ so that $g(x) = g_{2n}(x)$, $g(z) = g_{2n}(z)$, or there are distinct $m,n$ so that $g(x) = g_{2n}(x) + g_{2m}(x)$, $g(z) = g_{2n}(z) + g_{2m}(z)$. In either case, it follows that $\\|g(x)-g(z)\\| \\leq 14L(f)d(x,z)$. This completes the proof that $g\\in \\operatorname{Lip}_*(X,E)\\subseteq A(X,E)$.\nClearly, $g= f$ on the open set \n\\[ U = \\bigcup_n \\{x\\in X: \\frac{3r_{2n}}{4}< d(x,x_0) < \\frac{5r_{2n}}{4}\\}\\]\n and $g = 0$ on the open set \n\\[V = \\bigcup_n \\{x\\in X: \\frac{3r_{2n-1}}{4}< d(x,x_0) < \\frac{5r_{2n-1}}{4}\\}.\\]\nSince $x_n\\in U$ for all even $n$, and $x_n\\in V$ for all odd $n$, $x_0 \\in \\widetilde{U} \\cap \\widetilde{V}$.\n\\end{proof}\n\nWith the help of property (P), we can improve Theorems \\ref{t3.5}, \\ref{t3.7} and \\ref{t3.8}.\nFirst we consider the case where $A(X,E)$ and $A(Y,F)$ are standard subspaces of \n$U(X,E)$ and $U(Y,F)$ respectively so that both contain nonzero constant functions.\nDenote by $\\widetilde{E}$ the completion of $E$. Since $A(X,E) \\subseteq U(X,E)$, every function $f\\in A(X,E)$ has a unique continuous extension $\\widetilde{f}:\\widetilde{X}\\to \\widetilde{E}$. \nFor each $x\\in \\widetilde{X}$, let \n\\[ \\widetilde{E}_x = \\{\\widetilde{f}(x): f\\in A(X,E)\\}.\\]\nSimilarly for $\\widetilde{F}_y$ if $y\\in \\widetilde{Y}$.\nFix a biseparating map $T:A(X,E)\\to A(Y,F)$, which we may normalize by taking $T0 = 0$.\nLet $\\widetilde{\\varphi}: \\widetilde{X}\\to \\widetilde{Y}$ be the homeomorphism given by Theorem \\ref{t3.5}, with inverse $\\widetilde{\\psi}$.\n\n\\begin{prop}\\label{p4.2}\nSuppose that $A(X,E)$ and $A(Y,F)$ are standard subspaces of \n$U(X,E)$ and $U(Y,F)$ respectively so that both contain nonzero constant functions. Assume that $A(X,E)$ has property (P).\nGiven any $y\\in \\widetilde{Y}$, there is a bijective function $\\Phi(y,\\cdot): \\widetilde{E}_{\\widetilde{\\psi}(y)}\\to \\widetilde{F}_y$ so that \n\\[ \\widetilde{Tf}(y) = \\Phi(y,\\widetilde{f}(\\widetilde{\\psi}(y))) \\text{ for all $f\\in A(X,E)$}.\\]\n\\end{prop}\n\n\\begin{proof}\nLet $y_0 \\in \\widetilde{Y}$ and $\\widetilde{\\psi}(y_0) =x_0\\in \\widetilde{X}$. For any $a\\in \\widetilde{E}_{x_0}$, fix a function $g_a\\in A(X,E)$ so that $\\widetilde{g_a}(x_0) =a$ and define $\\Phi(y_0,\\cdot): \\widetilde{E}_{x_0} \\to F$ by $\\Phi(y_0,a) = Tg_a(y_0)$.\nIf $f\\in A(X,E)$, let $a = \\widetilde{f}(x_0)$. Clearly $\\widetilde{f- g_a}(x_0) =0$. By property (P) and the remark following its definition, there are open sets $U,V$ in $X$ and a function $h\\in A(X,E)$ so that $x_0 \\in \\widetilde{U} \\cap \\widetilde{V}$, $h = f-g_a$ on $U$ and $h =0$ on $V$.\nLet $W$ be an open set in $\\widetilde{X}$ so that $W\\cap X = U$.\nSince $\\widetilde{\\varphi}$ is a homeomorphism and $y_0 = \\widetilde{\\varphi}(x_0)$, \n$y_0 \\in \\widetilde{\\varphi}(\\widetilde{U}) \\subseteq \\widetilde{\\widetilde{\\varphi}(W)}$.\nBut $\\widetilde{\\varphi}(W)$ is open in $\\widetilde{Y}$. So $y_0 \\in ({\\varphi(W)}\\cap Y)^{\\widetilde{\\ }}$.\nAs $f= h+g_a$ on $W\\cap X$, $Tf = T(h+g_a)$ on $\\widetilde{\\varphi}(W)\\cap Y$ by Lemma \\ref{l3.2.1}.\nBy continuity, $\\widetilde{Tf}(y_0) = [T(h+g_a)]^{\\widetilde{ \\ }}(y_0)$. \nSimilarly, looking at the set $V$ instead of $U$, one can show that \n\\[ [T(h+g_a)]^{\\widetilde{ \\ }}(y_0) = {Tg_a}(y_0) = \\Phi(y_0,a). \\]\n Thus $\\widetilde{Tf}(y_0) = \\Phi(y_0,a)$, as required.\n \n By symmetry, there is a function $\\Psi(x, \\cdot): \\widetilde{F}_{\\widetilde{\\varphi}(x)}\\to \\widetilde{E}_x$ so that $(T^{-1}g)^{\\widetilde{\\ }}(x) = \\Phi(x, \\widetilde{g}(\\widetilde{\\varphi}(x)))$ for all $g\\in A(Y,F)$.\n The fact that $\\Phi(y,\\cdot)$ is a bijection follows from expressing the equations $T(T^{-1}g) = g$ and $T^{-1}(Tf) = f$ in terms of the mappings $\\Phi(y,\\cdot)$ and $\\Psi(x,\\cdot)$.\n\\end{proof}\n\n\nThe next two propositions can be obtained in a similar vein. The details are omitted.\n\n\n\\begin{prop}\\label{p4.3}\nSuppose that $A(X,E) = C(X,E)$ or $C_*(X,E)$ and $A(Y,F)$ $= C(Y,F)$ or $C_*(Y,F)$. \nThere is a function $\\Phi:Y\\times E\\to F$ so that \n\\[ Tf(y) = \\Phi(y,f(\\psi(y))) \\text{ for all $f\\in A(X,E)$ and all $y\\in Y$}.\\]\n\\end{prop}\n\n\\begin{prop}\\label{p4.4}\nSuppose that $A(X,E)$ is standard subspace of \n$U(X,E)$ that contains a nonzero constant function. Assume that $A(X,E)$ has property (P).\nLet $A(Y,F) = C(Y,F)$ or $C_*(Y,F)$.\n\\begin{enumerate}\n\\item For any $y\\in Y$, there is a function $\\Phi(y,\\cdot): \\widetilde{E}_{\\psi(y)}\\to F$ so that \n\\[Tf(y) = \\Phi(y,\\widetilde{f}(\\psi(y))) \\text{ for all $f\\in A(X,E)$}.\\]\n\\item There is a function $\\Psi: X\\times F\\to E$ so that \n\\[T^{-1}g(x) = \\Psi(x,g(\\varphi(x))) \\text{ for all $g\\in A(Y,F)$ and all $x \\in X$}.\\]\n\\end{enumerate}\n\\end{prop}\n\n\n\n\n\n\n\n\n\n\\section{Spaces of continuous functions -- metric case}\\label{s5}\n\nIn this section, let $X, Y$ be metric spaces and $E,F$ be normed spaces.\nLet $A(X,E) = C(X,E)$ or $C_*(X,E)$ and $A(Y,F) = C(Y,F)$ or $C_*(Y,F)$.\nFix a biseparating map $T:A(X,E)\\to A(Y,F)$. \nBy Theorem \\ref{t3.7}, Proposition \\ref{p4.3} and symmetry, there are a homeomorphism $\\varphi:X\\to Y$ and functions $\\Phi:Y\\times E\\to F$, $\\Psi:X\\times F\\to E$ so that \n\\[ (Tf)(y)= \\Phi(y,f(\\varphi^{-1}(y))),\\quad (T^{-1}g)(x) = \\Psi(x,g(\\varphi(x)))\\]\nfor any $f\\in A(X,E)$, $g\\in A(Y,F)$, $x\\in X$ and $y\\in Y$.\nFrom the equations \n\\[ 1\\otimes a = T^{-1}(T(1\\otimes a)),\\quad 1\\otimes b = T(T^{-1}(1\\otimes b))\\]\nfor all $a\\in E$, $b\\in F$, we find that $\\Phi(y,\\cdot)$ and $\\Psi(x,\\cdot)$ are mutual inverses provided $y = \\varphi(x)$. \nThe aim of the present section is to characterize the functions $\\Phi$ that lead to biseparating maps and prove a result on automatic continuity.\nObserve that if we define $S:C(Y,F)\\to C(X,F)$ by $Sg(x) = g(\\varphi^{-1}(x))$, then $S$ is a biseparating map that also acts as a biseparating map from $C_*(Y,F)$ onto $C_*(X,F)$.\nThus characterization of biseparating maps reduces to the ``section problem'' addressed in Proposition \\ref{p5.1}.\nThe result is well known, at least in the case for for $C(X)$. See, e.g. \\cite[Chapter 9]{AZ}.\nWe omit the easy proof of the next lemma.\n\n\\begin{lem}\\label{l5.0}\nLet $(x_n)$ be a sequence of distinct points in $X$ and let $(a_n)$ be a sequence in $E$.\n\\begin{enumerate}\n\\item If $(x_n)$ has no convergent subsequence, then there is a function $f\\in C(X,E)$ so that $f(x_n) = a_n$ for all $n$. Moreover, $f$ can be chosen to be bounded if $(a_n)$ is bounded.\n\\item If $(x_n)$ converges to a point $x_0\\in X$, $x_0\\neq x_n$ for all $n$, and $(a_n)$ converges to a point $a_0\\in E$, then there exists $f\\in C_*(X,E)$ so that $f(x_n) = a_n$ for all $n$.\n\\end{enumerate}\n\\end{lem}\n\nDenote the set of accumulation points of $X$ by $X'$ and the unit balls of $E$ and $F$ by $B_E$ and $B_F$ respectively.\n\n\n\n\n\\begin{prop}\\label{p5.1}\nLet $X$ be a metric space and let $E$ and $F$ be normed spaces. Consider a function $\\Phi:X\\times E\\to F$.\n\\begin{enumerate}\n\\item The function $x\\mapsto \\Phi(x,f(x))$ belongs to $C(X,F)$ for every $f\\in C(X,E)$ if and only if $\\Phi$ is continuous at every point in $X'\\times E$.\n\\item The function $x\\mapsto \\Phi(x,f(x))$ belongs to $C_*(X,F)$ for every $f\\in C_*(X,E)$ if and only if both of the following conditions hold.\n\\begin{enumerate}\n\\item $\\Phi$ is continuous at every point in $X'\\times E$. \n\\item For any bounded set $B$ in $E$, every $(x_n) \\in \\prod_n X_n(B)$ has a subsequence that converges in $X$, where \n\\[X_n(B) = \\{x\\in X: \\Phi(x,B) \\not\\subseteq nB_F\\}.\\]\n\\end{enumerate}\n\\end{enumerate}\n\\end{prop}\n\n\n\\begin{proof}\nSuppose that $x\\mapsto \\Phi(x,f(x))$ belongs to $C(X,F)$ for every $f\\in C_*(X,E)$.\nLet $(x_0,a_0)$ be a point in $X'\\times E$ and let $((x_n,a_n))$ be a sequence in $X\\times E$ that converges to $(x_0,a_0)$.\nSince $\\Phi(x,a_n)$ is a continuous function of $x$, by making small perturbations if necessary, we may assume that $x_n \\neq x_0$ for all $n$.\nChanging to a subsequence, we may further assume that $(x_n)$ is a sequence of distinct points. By Lemma \\ref{l5.0}(2), there exists $f\\in C_*(X,E)$ so that $f(x_n) = a_n$ for all $n$. By continuity of $f$, $f(x_0) = a_0$.\nThen \n\\[ \\Phi(x_n,a_n) = \\Phi(x_n,f(x_n)) \\to \\Phi(x_0,f(x_0)) = \\Phi(x_0,a_0).\\]\nThis proves the continuity of $\\Phi$ at $(x_0,a_0)$.\nHence the ``only if'' parts in statement (1) and statement (2)(a) are verified.\nOn the other hand,\nif $\\Phi$ is continuous at any point in $X'\\times E$ and $f\\in C(X,E)$, then it is clear that $\\Phi(x,f(x))$ is a continuous function of $x\\in E$. This completes the proof of statement (1).\n\nLet us proceed to prove the necessity of condition (b) in statement (2).\nAssume that condition (b) in (2) fails. \nLet $B$ be a bounded set in $E$ and let $(x_n)$ be an element in $\\prod X_n(B)$ so that $(x_n)$ has no convergent subsequence in $X$.\nSince $X_n(B) \\subseteq X_m(B)$ if $m \\leq n$, we may replace $(x_n)$ by a subsequence to assume that all $x_n$'s are distinct.\nChoose $a_n\\in B$ so that $\\|\\Phi(x_n,a_n)\\| > n$ for all $n$.\nBy Lemma \\ref{l5.0}(1), there exists $f\\in C_*(X,E)$ so that $f(x_n) = a_n$ for all $n$.\nThen $\\|\\Phi(x_n,f(x_n))\\| = \\|\\Phi(x_n,a_n)\\| \\to \\infty$.\nThis contradicts the assumption that the function $x\\mapsto \\Phi(x,f(x))$ is bounded.\n\n\nFinally, we prove the sufficiency in statement (2). Let $f\\in C_*(X,E)$. As observed above, by (2) condition (a), $x\\mapsto \\Phi(x,f(x))$ is continuous on $X$ since $f\\in C(X,E)$.\nLet $B = f(X)$. Then $B$ is a bounded set in $E$. If $(\\Phi(x,f(x)))_{x\\in X}$ is unbounded, there is a sequence of distinct points $(x_n)$ so that $\\|\\Phi(x_n,f(x_n))\\| > n$ for all $n$.\nIn particular, $(x_n) \\in \\prod X_n(B)$. By condition 2(b), we may replace it by a subsequence to assume that $(x_n)$ converges to a point $x_0$ in $X$. In particular, $x_0\\in X'$. By assumption 2(a),\n$\\Phi(x_n, f(x_n))\\to \\Phi(x_0,f(x_0))$, contradicting the unboundedness of the sequence.\n\\end{proof}\n\nThe next two results follow immediately from the preceding discussion.\n\n\n\\begin{thm}\\label{t5.4}\nLet $X$ and $Y$ be metric spaces and let $E$ and $F$ be normed spaces.\nA map $T:C(X,E)\\to C(Y,F)$ is a biseparating map if and only if there are a homeomorphism $\\varphi:X\\to Y$ and functions $\\Phi:Y\\times E\\to F$, $\\Psi:X\\times F\\to E$ so that\n\\begin{enumerate}\n\\item $\\Phi(y,\\cdot)$ and $\\Psi(x,\\cdot)$ are mutual inverses if $\\varphi(x) = y$. \n\\item $\\Phi$ is continuous at any point in $Y'\\times E$; $\\Psi$ is continuous at any point in $X'\\times F$.\n\\end{enumerate}\n\\end{thm}\n\n\n\n\\begin{thm}\\label{t5.5}\nLet $X$ and $Y$ be metric spaces and let $E$ and $F$ be normed spaces.\nA map $T:C_*(X,E)\\to C_*(Y,F)$ is a biseparating map if and only if there are a homeomorphism $\\varphi:X\\to Y$ and functions $\\Phi:Y\\times E\\to F$, $\\Psi:X\\times F\\to E$ so that\n\\begin{enumerate}\n\\item $\\Phi(y,\\cdot)$ and $\\Psi(x,\\cdot)$ are mutual inverses if $\\varphi(x) = y$. \n\\item $\\Phi$ is continuous at any point in $Y'\\times E$; $\\Psi$ is continuous at any point in $X'\\times F$.\n\\item If $B_1$ and $B_2$ are bounded sets in $E$ and $F$ respectively, and \n\\[Z_n(B_1,B_2) = \\{x\\in X: \\Phi(\\varphi(x),B_1) \\not\\subseteq nB_F \\text{ or } B_2 \\not\\subseteq \\Phi(\\varphi(x),nB_E)\\},\\]\nthen every $(x_n) \\in \\prod_n Z_n(B_1,B_2)$ has a subsequence that converges in $X$.\n\\end{enumerate}\n\\end{thm}\n\nObserve that by condition (1) of Theorem \\ref{t5.5}, $B_2 \\not\\subseteq \\Phi(\\varphi(x),nB_E)$ if and only if $\\Psi(x,B_2)\\not\\subseteq nB_E$. Hence condition (3) in Theorem \\ref{t5.5} is a combination of condition 2(b) in Proposition \\ref{p5.1} for the maps $\\Phi$ and $\\Psi$.\n\n\\begin{prop}\\label{p5.2}\nIf there is a biseparating map $T:C(X,E)\\to C_*(Y,F)$, then $X$ are $Y$ are compact.\n\\end{prop}\n\n\\begin{proof}\nAssume otherwise. Since $X$ and $Y$ are homeomorphic,\n there is a sequence of distinct points $(x_n)$ in $X$ that has no convergent subsequence. Fix a nonzero element $b\\in F$ and let $a_n = (T^{-1}(1\\otimes nb))(x_n)$ for each $n$.\nBy Lemma \\ref{l5.0}(1), there is a function $f\\in C(X,E)$ so that $f(x_n) =a_n$ for all $n$.\nNote that \n\\[nb = T((T^{-1}(1\\otimes nb))(\\varphi(x_n)) = \\Phi(\\varphi(x_n),a_n) \\text{ for each $n$.}\\]\nThen \n\\[(Tf)(\\varphi(x_n)) = \\Phi(\\varphi(x_n),a_n) = nb \\text{ for all $n$},\\]\ncontradicting the boundedness of $Tf$.\n\\end{proof}\n\n\n\nNote that if $T:C(X,E)\\to C_*(Y,F)$ is biseparating, then $X$ is compact by Proposition \\ref{p5.2}. Hence $C(X,E) = C_*(X,E)$. Therefore, the characterization Theorem \\ref{t5.5} applies.\nWe conclude this section with an automatic continuity result. If $K$ is a compact subset of $X$, let \n\\[\\|f\\|_K = \\sup\\{\\|f(x)\\|:x\\in K\\} \\text{ for any $f\\in C(X,E)$}.\\]\n\n\n\\begin{thm}\\label{t5.6}\nLet $X$ and $Y$ be metric spaces and let $E$ and $F$ be normed spaces.\nSuppose that $A(X,E) = C(X,E)$ or $C_*(X,E)$, $A(Y,F) = C(Y,F)$ or $C_*(Y,F)$. \nLet $T:A(X,E)\\to A(Y,F)$ be a biseparating map. For any compact subset $K$ of $Y'$, any $f\\in A(X,E)$, and any $\\varepsilon > 0$, there exists $\\delta >0$ so that \n\\[ g\\in A(X,E),\\ \\|g-f\\|_{\\varphi^{-1}(K)} <\\delta \\implies \\|Tg-Tf\\|_K < \\varepsilon.\\]\n\\end{thm}\n\n\\begin{proof}\nSuppose that $(g_n) \\subseteq A(X,E)$ and that $\\|g_n-f\\|_{\\varphi^{-1}(K)} \\to 0$. It suffices to show that a subsequence of $(\\|Tg_n-Tf\\|)$ converges to $0$.\nPick $(y_n) \\subseteq K$ so that $\\|Tg_n-Tf\\|_K = \\|(Tg_n)(y_n) - (Tf)(y_n)\\|$ for all $n$.\nBy using a subsequence if necessary, we may assume that $(y_n)$ converges to some $y_0\\in K$.\nLet $x_n = \\varphi^{-1}(y_n)$, \n$g_n(x_n) = a_n$ , and $f(x_n) = a'_n$. Then $(x_n)$ converges to $x_0 = \\varphi^{-1}(y_0)$ and $f(x_0) = a_0$.\nSince $\\|g_n-f\\|_K \\to 0$, $(a_n)$ converges to $a_0$. \nAlso $(a_n')$ converges to $a_0$ by continuity of $f$.\nSince $y'\\in K\\subseteq Y'$, it follows fromm Proposition \\ref{p5.1}(1) that $\\Phi$ is continuous at $(y_0,a_0)$.\nTherefore,\n\\[ \n\\|(Tg_n)(y_n) - (Tf)(y_n)\\| = \\|\\Phi(y_n,a_n)- \\Phi(y_n,a'_n)\\|\\to 0.\\]\nThus $\\|Tg_n-Tf\\|_K\\to 0$.\n\\end{proof}\n\n\\section{Spaces of uniformly continuous functions}\\label{s6}\n\nIn this section, let $X$, $Y$ be complete metric spaces and let $E$, $F$ be Banach spaces.\nThe aim of this section is to characterize biseparating maps from $U(X,E)$ or $U_*(X,E)$ onto $U(Y,F)$ or $U_*(Y,F)$. By Propositions \\ref{p3.10} and \\ref{p4.2}, a biseparating map $T: U(X,E)\/U_*(X,E) \\to U(Y,F)\/U_*(Y,F)$\ncan be represented in the form \n\\[ Tf(y) = \\Phi(y,f(\\psi(y))) \\text{ for all $f\\in U(X,E)\/U_*(X,E)$ and all $y\\in Y$},\\]\nwhere $\\psi:Y\\to X$ is a homeomorphism with inverse $\\varphi$ and $\\Phi:Y\\times E\\to F$ is a function so that $\\Phi(y,\\cdot)$ is a bijection from $E$ onto $F$ for all $y\\in Y$.\nIn fact, characterizations can be obtained without completeness assumptions of $X,Y,E,F$. However, the case of complete spaces contains all pertinent ideas without the distraction of niggling details.\nCharacterizations of lattice isomorphisms and of {\\em linear} biseparating maps on spaces of uniformly continuous functions were obtained in \\cite{GaJ} and \\cite{A2} respectively.\n\n\n\n\n\n\n\n\n\n\\begin{prop}\\label{p6.3.0}\nLet $A(X,E)= U(X,E)$ or $U_*(X,E)$, and $A(Y,F)= U(Y,F), U_*(Y,F)$ or $\\operatorname{Lip}_*(Y,F)$.\nLet ${\\varphi}: {X}\\to {Y}$ be the homeomorphism associated with $T$ according to Theorem \\ref{t3.5}.\nThen ${\\varphi}$ is uniformly continuous.\n\\end{prop}\n\n\\begin{proof}\nSuppose that $\\varphi$ is not uniformly continuous. There are sequences $(x_n)$, $(x_n')$ in $X$ and $\\varepsilon >0$ so that\n$d(x_n,x'_n)\\to 0$ and that $d(\\varphi(x_n),\\varphi(x_n')) > \\varepsilon$ for all $n$.\nSet $y_n = \\varphi(x_n)$ and $y_n' = \\varphi(x_n')$.\nIn view of the continuity of ${\\varphi}$, niether $(x_n)$ nor $(x'_n)$ can have a convergent subsequence in ${X}$.\nHence we may assume that $(x_n)$ is a separated sequence.\nSince ${\\varphi}^{-1}$ is continuous, neither $(y_n)$ nor $(y'_n)$ can have a convergent subsequence in ${Y}$.\nAs we also have $d(y_n,y_n') > \\varepsilon$ for all $n$, by using subsequences if necessary, we may assume that $(y_n) \\cup (y_n')$ is a separated set.\nWithout loss of generality, take $T0 = 0$. We will use repeatedly the following formulation of Proposition \\ref{p4.2}. If $x\\in {X}$ and $f, g\\in A(X,E)$, then ${f}(x) = {g}(x)$ if and only if ${Tf}(\\varphi(x)) = {Tg}(\\varphi(x)).$\n\n\\medskip\n\n\\noindent\\underline{Case 1}. $A(Y,F) = U_*(Y,F)$ or $\\operatorname{Lip}_*(Y,F)$.\n\nFix a nonzero vector $a\\in E$ and let $b_n = {(T(1\\otimes a))}(y_n)$. Then $(b_n)$ is a bounded sequence.\nSince $(y_n)\\cup (y_n')$ is separated, one can easily construct a function $g\\in A(Y,F)$ so that ${g}(y_n) = b_n$ and ${g}(y'_n) = 0$ for all $n$. By Proposition \\ref{p4.2}, we see that $(T^{-1}g)(x_n) = a$ and $(T^{-1}g)(x'_n) = 0$. Since $T^{-1}g$ is uniformly continuous and $d(x_n,x'_n)\\to 0$, we have a contradiction.\n\n\\medskip\n\nIn the remaining cases, take $A(Y,F) = U(Y,F)$.\n\n\\medskip\n\n\\noindent\\underline{Case 2}. There exist $r>0$ and an infinite subset $N$ of ${\\mathbb N}$ so that $B(y_n,r) = \\{y_n\\}$ for all $n\\in N$.\n\nFix a nonzero vector $a \\in E$. Let $b_n = (T(1\\otimes a))(y_n)$ for each $n\\in N$. Define $g:Y \\to F$ by $g(y) = b_n$ if $y = y_n, n\\in N$, and $g(y) = 0$ otherwise.\nClearly $g\\in U(Y,F)$. \nBut by Proposition \\ref{p4.2}, for all $n\\in N$,\n\\[ (T^{-1}g)(x_n) = a \\text{ and } (T^{-1}g)(x_n') = 0.\\]\nHence $T^{-1}g$ is not uniformly continuous, contrary to the fact that $T^{-1}g \\in A(X,E) \\subseteq U(X,E)$.\n\n\\medskip\n\n\\noindent\\underline{Case 3}. For all $r>0$, $B(y_n,r) = \\{y_n\\}$ occurs for only finitely many $n$.\n\nIn this case, by using a subsequence if necessary, we may assume that there is a sequence $(y_n'')$ in $Y$ so that $0 < d(y_n,y_n'') \\to 0$.\nSet $x_n'' = {\\varphi}^{-1}(y_n'')$ for all $n$.\nTake a nonzero element $b\\in F$ and let \n$a_n =(T^{-1}(1\\otimes b))(x_n)$.\nSince $(y_n)\\cup(y_n')$ is separate\n, we can find $g\\in U(Y,F)$ so that \n ${g}(y_n) = b$ and ${g}(y_n') = 0$ for all $n$.\nBy Proposition \\ref{p4.2},\n \\[ (T^{-1}g)(x_n) = a_n \\text{ and } (T^{-1}g)(x'_n) = 0.\\]\nSince $T^{-1}g$ is uniformly continuous and $d(x_n,x_n') \\to 0$,\n\\[ a_n = (T^{-1}g)(x_n) - (T^{-1}g)(x_n') \\to 0.\n\\]\nAs $(x_n)$ is separated, $x_n'' \\neq x_n$ and $(a_n)$ is a null sequence, we may, after replacing $(x_n)$ and $(x_n'')$ with subsequences, construct a function $f\\in U_*(X,E)\\subseteq A(X,E)$ so that $f(x_n) = a_n$ and $f(x_n'') = 0$ for all $n$.\nThen ${Tf}(y_n) = {g}(y_n) = b$ and $Tf(y_n'') = 0$.\nThis is impossible since ${Tf}$ is uniformly continuous and $d(y_n,y_n'')\\to 0$.\n\\end{proof}\n\nBy Proposition \\ref{p6.3.0}, if $T: U(X,E)\/U_*(X,E) \\to U(Y,F)\/U_*(Y,F)$ is a biseparating map, then $\\varphi:X\\to Y$ is a uniform homeomorphism. \nIn this case, the map $\\widehat{T}$ given by \n\\[\\widehat{T}f(x) = Tf(\\varphi(x)) = \\Phi(\\varphi(x),f(x)) = \\widehat{\\Phi}(x,f(x))\\]\nmaps $U(X,E)\/U_*(X,E)$ onto $U(X,F)\/U_*(X,F)$, with $\\widehat{\\Phi}:X\\times E\\to F$ being a function such that $\\widehat{\\Phi}(x,\\cdot):E\\to F$ is a bijection for each $x\\in X$.\nTo complete the characterization of $T$, it suffices to determine the functions $\\widehat{\\Phi}: X\\times E\\to F$ so that $x\\mapsto \\widehat{\\Phi}(x,f(x))$ belongs to $U(X,F)\/U_*(X,F)$ for each $f\\in U(X,E)\/U_*(X,E)$.\nWe will refer to this as the ``section problem'' for uniformly continuous functions.\n\nFor any $\\varepsilon > 0$, define $d_\\varepsilon:\\widetilde{X}\\times \\widetilde{X}\\to [0,\\infty]$ by\n\\begin{equation}\\label{eq6.0} d_\\varepsilon(a,b) = \\inf\\{\\sum^n_{i=1}d(x_{i-1},x_i): n\\in {\\mathbb N}, x_0 = a, x_n = b, d(x_{i-1},x_i) \\leq \\varepsilon \\text{ for all $i$}\\},\\end{equation}\nwhere we take $\\inf \\emptyset = \\infty$.\nThe connection of the $d_\\varepsilon$ ``metrics'' with uniformly continuous functions is well known; see, e.g. \\cite{A,H, O'F}.\nIn particular, the first part of the next proposition formalizes the well known principle that uniformly continuous functions are ``Lipschitz for large distances''.\n\n\\begin{prop}\\label{p6.3}\nLet $X$ be a complete metric space and let $E$ be a Banach space.\n\\begin{enumerate}\n\\item If $f\\in U(X,E)$, then there exist $\\varepsilon > 0$ and $C<\\infty$ such that \n\\[ \\|{f}(x_1)-{f}(x_2)\\| \\leq Cd_\\varepsilon(x_1,x_2) \\text{ whenever $x_1,x_2\\in {X}$, $d(x_1,x_2) > \\varepsilon$}.\\]\n\\item If $f:X\\to E$ and there exist $\\varepsilon >0$ and $C<\\infty$ so that \n\\[ \\|f(x_1) - f(x_2)\\| \\leq Cd_\\varepsilon(x_1,x_2) \\text{ for all $x_1,x_2\\in X$},\\]\nthen $f\\in U(X,E)$.\n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof}\nStatement (2) is trivial since $d_\\varepsilon(x_1,x_2) = d(x_1,x_2)$ if $d(x_1,x_2) \\leq \\varepsilon$. Let us prove statement (1).\nAssume that $f\\in U(X,E)$. There exists $\\varepsilon >0$ so that $\\|f(a)-f(b)\\| \\leq 1$ if $d(a,b) \\leq \\varepsilon$.\nLet $x_1,x_2\\in X$ be points so that $\\varepsilon < d(x_1,x_2)$ and $d_\\varepsilon(x_1,x_2) <\\infty$.\nThere are $n\\in {\\mathbb N}$, $(a_i)^n_{i=0} \\subseteq X$ so that $a_0 = x_1$, $a_n = x_2$, $d(a_{i-1}, a_i) \\leq\\varepsilon$, and \n\\[ \\sum^n_{i=1}d(a_{i-1}, a_i) \\leq 2d_\\varepsilon(x_1,x_2).\\]\nNote that since $d(x_1,x_2) > \\varepsilon$, $n \\geq 2$.\nIt is clear that we may assume that $d(a_{i-1},a_i) + d(a_i,a_{i+1}) >\\varepsilon$ for $0\\leq i< n$.\nThus \n\\[ 2d_\\varepsilon(x_1,x_2) \\geq \\sum^n_{i=1}d(a_{i-1}, a_i) \\geq \\frac{n-1}{2}\\,\\varepsilon \\geq \\frac{n\\varepsilon}{4}.\\]\nBy choice of $\\varepsilon$, $\\|f(a_{i-1}) - f(a_i)\\| \\leq 1$ for all $i$.\nHence \n\\[ \\|f(x_1)-f(x_2)\\| \\leq n \\leq \\frac{8}{\\varepsilon}\\,d_\\varepsilon(x_1,x_2).\\]\n\\end{proof}\n\n\n\nFor the rest of the section, let $\\Xi:X\\times E\\to F$ be a given function and associate with it a mapping $Sf(x) = \\Xi(x,f(x))$ for any function $f:X\\to E$.\nDenote the set of accumulation points in $X$ by $X'$.\n\n\\begin{prop}\\label{p6.4}\nIf $Sf \\in U(X,F)$ for any $f\\in U_*(X,E)$, then $\\Xi$ is continuous at any $(x_0,e_0)\\in X'\\times E$ .\n\\end{prop}\n\n\\begin{proof}\nAssume to the contrary that $\\Xi$ is discontinuous at some $(x_0,e_0)\\in X'\\times E$ .\nThere are a sequence $((x_n,e_n))^\\infty_{n=1} \\in X\\times E$ converging to $(x_0,e_0)$ and $\\varepsilon>0$\nso that $\\|\\Xi(x_n,e_n) - \\Xi(x_0,e_0)\\| > \\varepsilon$ for all $n$.\nReplacing $(x_n)$ by a subsequence if necessary, we may assume that either $(x_n)$ is a sequence of distinct points in $X\\backslash \\{x_0\\}$ or $x_n = x_0$ for all $n$. \nIn the former case, there is a function $f\\in U_*(X,E)$ so that $f(x_n) = e_n$ for all $n$ and $f(x_0) = e_0$.\nSince $Sf$ is continuous at $x_0$,\n\\[ \\Xi(x_n,e_n) = Sf(x_n) \\to Sf(x_0) = \\Xi(x_0,e_0),\\]\ncontrary to the choice of $((x_n,e_n))^\\infty_{n=1}$.\nFinally, suppose that $x_n = x_0$ for all $n$.\nFor each $n$, let $f_n$ be the constant function with value $e_n$. Then $Sf_n$ is continuous at $x_0$. Since $x_0$ is an accumulation point, there exists $x'_n$ with $0< d(x'_n,x_0) < \\frac{1}{n}$ so that \n\\[ \\|\\Xi(x'_n,e_n) - \\Xi(x_0,e_n)\\| = \\|Sf_n(x'_n) - Sf_n(x_0)\\| < \\frac{1}{n}.\\]\nBut by the previous case, $\\Xi(x'_n,e_n) \\to \\Xi(x_0,e_0)$. Thus \n$\\Xi(x_0,e_n) \\to \\Xi(x_0,e_0)$.\n\\end{proof}\n\n\nCall a sequence $((x_n,e_n))^\\infty_{n=1}\\in X\\times E$ a $u$-sequence if $(x_n)$ is a separated sequence and there are $\\varepsilon > 0$, $C<\\infty$ so that \n\\[ \\|e_n-e_m\\| \\leq Cd_\\varepsilon(x_n,x_m) \\text{ for all $m,n\\in{\\mathbb N}$}.\\]\nThe importance of $u$-sequences is captured in the next lemma.\n\n\\begin{lem}\\label{l6.4}\nLet $((x_n,e_n))^\\infty_{n=1}$ be a $u$-sequence in $X\\times E$. Then there is an infinite subset $N$ of ${\\mathbb N}$ and a uniformly continuous function $f:X\\to E$ so that $f(x_n) =e_n$ for all $n \\in N$.\n\\end{lem}\n\n\\begin{proof}\nLet $\\varepsilon$ and $C$ be as in the definition above.\nIf there is an infinite set $N$ in ${\\mathbb N}$ so that $d_\\varepsilon(x_n,x_m) = \\infty$ for all distinct $m,n\\in N$, then clearly the function defined by $f(x) = e_n$ if $d_{\\varepsilon}(x,x_n) <\\infty$ for some $n\\in N$ and $f(x) = 0$ otherwise is uniformly continuous. Obviously $f(x_n) =e_n$ for all $n\\in N$.\nThus, without loss of generality, we may assume that $d_\\varepsilon(x_m,x_n)<\\infty$ for all $m,n$.\nIf $(d_\\varepsilon(x_n,x_m))_{m,n}$ is bounded, then $(e_n)$ is a bounded sequence. Since $(x_n)$ is separated, there exists $f\\in U(X,E)$ so that $f(x_n) = e_n$ for all $n$.\n\nFinally, assume that $(d_\\varepsilon(x_n,x_m))_{m,n}$ is unbounded set in ${\\mathbb R}_+$.\nBy taking a subsequence, we may assume that $4r_n < r_{n+1}$ for all $n$, where $r_n = d_\\varepsilon(x_n,x_1)$.\nDefine $f:X\\to E$ by \n\\[ f(x) = \\begin{cases}\n e_1 + (1- \\frac{2}{r_n}d_\\varepsilon(x,x_n))^+(e_n-e_1) &\\text{if $d_\\varepsilon(x,x_n)< \\frac{r_n}{2}$, $n > 1$}\\\\\n e_1 &\\text{otherwise}.\n \\end{cases}\\]\nUsing the fact that $\\|e_n-e_1\\|\\leq Cr_n$ for all $n$, one can check that \n\\[ \\|f(a) - f(b)\\| \\leq 16Cd_\\varepsilon(a,b)\n\\]\nfor all $a,b\\in X$. By Proposition \\ref{p6.3}(2), $f\\in U(X,E)$. Clearly, $f(x_n) = e_n$ for all $n$, as required.\n\\end{proof}\n\n\\begin{prop}\\label{p6.5}\nSuppose that $Sf\\in U(X,F)$ for all $f\\in U(X,E)$.\nLet $((x_n,e_n))^\\infty_{n=1}$ be a $u$-sequence. \nAssume that $((x_n',e_n'))^\\infty_{n=1} \\in X\\times E$, $x_n\\neq x_n'$ for all $n$, and $\\lim (d(x_n,x_n') + \\|e_n-e_n'\\|) =0$, then\n\\[ \\lim\\|\\Xi(x_n,e_n) - \\Xi(x_n',e'_n)\\| = 0.\\]\n\\end{prop}\n\n\\begin{proof}\n It suffices to show that $\\liminf\\|\\Xi(x_n,e_n) - \\Xi(x_n',e'_n)\\| = 0$. By Lemma \\ref{l6.4}, there exist $f\\in U(X,E)$ and an infinite set $N$ in ${\\mathbb N}$ so that $f(x_n) = e_n$ for all $n\\in N$.\nSince $d(x_n,x_n') \\to 0$, $\\lim_{n\\in N}\\|e_n -f(x_n')\\| = 0$ and hence $\\lim_{n\\in N}\\|e_n'-f(x_n')\\| = 0$.\nAs $(x_n)$ is a separated sequence and $0 < d(x_n,x_n') \\to 0$, we can construct a uniformly continuous function $g:X\\to E$ \nsuch that $g(x_n) = 0$ and $g(x_n') = e_n'-f(x_n')$ for all sufficiently large $n\\in N$.\nThen $f+g\\in U(X,E)$, \n\\[ \\Xi(x_n,e_n) = S(f+g)(x_n) \\text{ and } \\Xi(x_n',e_n') = S(f+g)(x_n')\\] for all sufficiently large $n\\in N$.\nAs $S(f+g)\\in U(X,F)$ and $d(x_n,x_n') \\to 0$, we see that $\\lim_{n\\in N}\\|\\Xi(x_n,e_n) - \\Xi(x_n',e_n')\\| =0$.\n\\end{proof}\n\n\nWe will say that $\\Xi$ is {\\em $u$-continuous} if it satisfies the conclusion of Proposition \\ref{p6.5}.\nWe can now solve the section problem for uniformly continuous functions.\n\n\\begin{thm}\\label{t6.5}\nLet $X$ be a complete metric space, $E$ and $F$ be Banach spaces. Given a function $\\Xi:X\\times E\\to F$, associate with it a mapping $S$ by $Sf(x) = \\Xi(x,f(x))$.\nThen $S$ maps $U(X,E)$ into $U(X,F)$ if and only if \n $\\Xi$ is continuous at all $(x_0,e_0) \\in X'\\times E$ and $\\Xi$ is \n $u$-continuous.\n\\end{thm}\n\n\\begin{proof}\nThe necessity of the two conditions on $\\Xi$ follow from Propositions \\ref{p6.4} and \\ref{p6.5}.\nConversely, suppose that $\\Xi$ is continuous at any $(x_0,e_0)\\in X'\\times E$ and also $u$-continuous.\nLet $f\\in U(X,E)$. \nIf $Sf\\notin U(X,F)$, there are sequences $(x_n)$, $(x_n')$ in $X$, $d(x_n,x'_n)\\to 0$, and $\\eta >0$ so that\n\\begin{equation}\\label{eq6.1} \\|\\Xi(x_n,f(x_n)) - \\Xi(x_n',f(x_n'))\\| = \\|Sf(x_n) - Sf(x_n')\\| > \\eta \\text{ for all $n$}.\\end{equation}\nSuppose that $(x_n)$ has a subsequence that converges to some $x_0\\in X$.\nWe may assume that the whole sequence converges to $x_0$. In particular, $(f(x_n))$ and $(f(x_n'))$ converge to $f(x_0)$.\nClearly, $x_n\\neq x'_n$ for all $n$. Hence $x_0\\in X'$.\nIn this case, (\\ref{eq6.1}) violates the continuity of $\\Xi$ at $(x_0,f(x_0))$.\nFinally, assume that $(x_n)$ is a separated sequence. Choose $\\varepsilon' > 0$ so that $d(x_m,x_n) > \\varepsilon'$ if $m\\neq n$. Then let $\\varepsilon$ and $C$ be as given in condition (1) of Proposition \\ref{p6.3} for the function $f$.\nObviously, $d_\\varepsilon \\leq d_{\\varepsilon \\wedge \\varepsilon'}$. So we may assume without loss of generality that $\\varepsilon \\leq\\varepsilon'$.\nHence $(x_n,f(x_n))^\\infty_{n=1}$ is a $u$-sequence by Proposition \\ref{p6.3}(1).\nAgain, (\\ref{eq6.1}) implies that $x_n\\neq x_n'$ for all $n$.\nFurthermore, $d(x_n,x'_n)\\to 0$ and $\\|f(x_n)-f(x_n)\\| \\to 0$, the latter as a result of the uniform continuity of $f$. Therefore, \n\\[ \\lim\\|\\Xi(x_n,f(x_n)) - \\Xi(x_n',f(x'_n))\\| =0\\]\nby $u$-continuity of $\\Xi$, contradicting (\\ref{eq6.1}).\n\\end{proof}\n\nCharacterization of biseparating maps from $U(X,E)$ onto $U(Y,F)$ can be obtained by using Theorem \\ref{t6.5} together with the ``switch'' from $Y$ to $X$ described prior to Proposition \\ref{p6.3}.\n\n\\begin{thm}\\label{t6.7.1}\nLet $X, Y$ be complete metric spaces and let $E,F$ be Banach spaces.\nSuppose that $T:U(X,E)\\to U(Y,F)$ is a biseparating map. \nThen there are a uniform homeomorphism ${\\varphi}:{X}\\to {Y}$ and a function $\\Phi:Y\\times E\\to F$ so that \n\\begin{enumerate}\n\\item For each $y\\in Y$, $\\Phi(y,\\cdot):E\\to F$ is a bijection with inverse $\\Psi(x,\\cdot):F\\to E$, where $\\varphi(x) = y$.\n\\item $Tf(y) = \\Phi(y,f(\\varphi^{-1}(y)))$ and $T^{-1}g(x) = \\Psi(x,g(\\varphi(x)))$ for all $f\\in U(X,E), g\\in U(Y,F)$ and $x\\in X$, $y\\in Y$. \n\\item $\\Phi$ is continuous on $Y'\\times E$ and $\\Psi$ is continuous on $X'\\times F$.\n\\item $(x,e)\\mapsto \\Phi(\\varphi(x),e))$ and $(y,e')\\mapsto \\Psi(\\varphi^{-1}(y),e'))$ are both $u$-continuous.\n\\end{enumerate}\nConversely, assume that $\\varphi,\\Phi$ satisfy conditions (1), (3) and (4). Define $Tf(y)$ as in (2) for any $f\\in U(X,E)$ and $y\\in Y$. Then $T$ is a biseparating map from $U(X,E)$ onto $U(Y,F)$.\n\\end{thm}\n\n\n\n\\begin{lem}\\label{l6.8}\nLet $\\Xi:X\\times E\\to F$ be a given function and associate with it a mapping $Sf(x) = \\Xi(x,f(x))$ for any function $f:X\\to E$. If $Sf\\in U_*(X,F)$ for any $f \\in U_*(X,E)$, then for any separated sequence $(x_n)$ in $X$ and any bounded set $B$ in $E$, there is exists $k\\in{\\mathbb N}$ so that \n$\\bigcup_{n=k}^\\infty\\Xi(x_n,B)$ is bounded in $F$.\n\\end{lem}\n\n\\begin{proof}\nSuppose that $Sf \\in U_*(X,F)$ for any $f\\in U_*(X,E)$.\nLet $(x_n)$ be a separated sequence in $X$ and let $B$ be a bounded set in $E$.\nAssume that for any $k\\in {\\mathbb N}$, $\\bigcup_{n=k}^\\infty\\Xi(x_n,B)$\nis unbounded.\nThen there exists $(e_n)$ in $B$ so that $(\\Xi(x_n,e_n))^\\infty_{n=1}$ is unbounded.\nSince $(x_n)$ is separated and $(e_n)$ is bounded, there exists $f\\in U_*(X,E)$ so that $f(x_n) = e_n$.\nBy assumption $Sf$ is bounded. Hence $(\\Xi(x_n,e_n))^\\infty_{n=1} = (Sf(x_n))^\\infty_{n=1}$ is bounded, a contradiction.\n\\end{proof}\n\n\nWe now obtain the analog of Theorem \\ref{t6.7.1} for biseparating maps between spaces of bounded uniformly continuous functions. The details are similar to Theorem \\ref{t6.7.1}, with the extra ingredient Lemma \\ref{l6.8} for ``boundedness''.\n\n\n\\begin{thm}\\label{t6.7.2}\nLet $X, Y$ be complete metric spaces and let $E,F$ be Banach spaces.\nSuppose that $T:U_*(X,E)\\to U_*(Y,F)$ is a biseparating map. \nThen there are a uniform homeomorphism ${\\varphi}:{X}\\to {Y}$ and a function $\\Phi:Y\\times E\\to F$ so that \n\\begin{enumerate}\n\\item For each $y\\in Y$, $\\Phi(y,\\cdot):E\\to F$ is a bijection with inverse $\\Psi(x,\\cdot):F\\to E$, where $\\varphi(x) = y$.\n\\item $Tf(y) = \\Phi(y,f(\\varphi^{-1}(y)))$ and $T^{-1}g(x) = \\Psi(x,g(\\varphi(x)))$ for all $f\\in U(X,E), g\\in U(Y,F)$ and $x\\in X$, $y\\in Y$. \n\\item $\\Phi$ is continuous on $Y'\\times E$ and $\\Psi$ is continuous on $X'\\times F$.\n\\item $(x,e)\\mapsto \\Phi(\\varphi(x),e))$ and $(y,e')\\mapsto \\Psi(\\varphi^{-1}(y),e'))$ are both $u$-continuous.\n\\item Let $(x_n)$ be a separated sequence in $X$ and $y_n = \\varphi(x_n)$ for all $n$. If $B$ and $B'$ are bounded sets in $E$ and $F$ respectively, then there exists $k\\in {\\mathbb N}$ so that \n\\[ \\bigcup_{n=k}^\\infty\\Phi(y_n,B) \\text{ and } \\bigcup_{n=k}^\\infty\\Psi(x_n,B')\\]\narer bounded sets in $F$ and $E$ respectively.\n\\end{enumerate}\nConversely, assume that $\\varphi,\\Phi$ satisfy conditions (1), (3), (4) and (5). Define $Tf(y)$ as in (2) for any $f\\in U_*(X,E)$ and $y\\in Y$. Then $T$ is a biseparating map from $U_*(X,E)$ onto $U_*(Y,F)$.\n\\end{thm}\n\n\\subsection{Automatic continuity}\nAutomatic continuity results for biseparating maps acting between spaces of uniformly continuous functions can be deduced easily from the characterization theorems \\ref{t6.7.1} and \\ref{t6.7.2}.\nIf $S$ is a subset of $X$, respectively, $Y$, and $f:X\\to E$, respectively, $f:Y\\to F$, let\n\\[ \\|f\\|_S = \\sup_{s\\in S}\\|f(s)\\|.\\]\n\n\\begin{thm}\\label{t6.9}\nLet $X,Y$ be complete metric spaces and $E,F$ be Banach spaces.\nSuppose that $T$ is a biseparating map from $U(X,E)$ onto $U(Y,F)$, respectively, from $U_*(X,E)$ onto $U_*(Y,F)$. Let $T$ be represented as in theorems \\ref{t6.7.1} or \\ref{t6.7.2}.\nAssume that $f\\in U(X,E)\/U_*(X,E)$ and $S\\subseteq X'$, the set of accumulation points of $X$.\nFor any $\\varepsilon > 0$, there exists $\\delta >0$ so that if $g\\in U(X,E)\/U_*(X,E)$ and $\\|g-f\\|_S < \\delta$, then \n$\\|Tg- Tf\\|_{\\varphi(S)} < \\varepsilon$.\n\\end{thm}\n\n\\begin{proof}\nSuppose that the theorem fails. There exist $S\\subseteq X'$, $\\varepsilon >0$ and functions $(g_n)$ in $U(X,E)\/U_*(X,E)$ so that \n\\[ \\|g_n-f\\|_S\\to 0 \\text{ and } \\|Tg_n-Tf\\|_{\\varphi(S)} > \\varepsilon \\text{ for all $n$.}\\]\nChoose $(x_n) \\subseteq S$ so that $\\|Tg_n(\\varphi(x_n)) - Tf(\\varphi(x_n))\\| >\\varepsilon$ for all $n$.\nThus\n\\begin{equation}\\label{e6.5} \\|\\Phi(\\varphi(x_n), v_n) - \\Phi(\\varphi(x_n),u_n)\\| >\\varepsilon \\text{ for all $n$},\\end{equation}\nwhere $v_n = g_n(x_n)$ and $u_n = f(x_n)$.\nIf $(x_n)$ has a subsequence $(x_{n_k})$ that converges to some $x_0$, then $x_0\\in X'$.\nNote that $(u_{n_k}) = Tf(\\varphi(x_{n_k}))$ converges to $u_0 = Tf(\\varphi(x_0))$ and $\\|v_n - u_n\\| \\leq \\|g_n-f\\|_S\\to 0$ as well. Thus $(v_{n_k})$ converges to $u_0$.\nThis shows that both sequences $((\\varphi(x_{n_k}),v_{n_k}))$ and $((\\varphi(x_{n_k}),u_{n_k}))$ converge to $(\\varphi(x_0),u_0)$.\nBy condition (3) of Theorem \\ref{t6.7.1} or \\ref{t6.7.2}, $\\Phi$ is continuous at $(\\varphi(x_0),u_0)$, contradicting (\\ref{e6.5}).\n\nIf $(x_n)$ does not have a convergent subsequence, then it has a separated subsequence $(x_{n_k})$. Again, let $u_{n_k} = f(x_{n_k})$\n and $v_{n_k} = g_{n_k}(x_{n_k})$.\nSince $g_{n_k}$ and $Tg_{n_k}$ are both continuous, one can choose $x_{n_k}'\\neq x_{n_k}$\n so that\n \\[ d(x_{n_k}',x_{n_k}), \\|g_{n_k}(x'_{n_k}) - v_{n_k}\\|, \\|Tg_{n_k}(x'_{n_k}) - Tg_{n_k}(x_{n_k})\\| \\to 0.\\]\n Note that the last limit can be stated as \n \\begin{equation}\\label{e6.6} \\Phi(\\varphi(x'_{n_k}),g_{n_k}(x'_{n_k})) - \\Phi(\\varphi(x_{n_k}), v_{n_k}) \\to 0.\\end{equation}\nBy Proposition \\ref{p6.3}(1), $(x_{n_k}, u_{n_k}))$ is a $u$-sequence.\nBy (4) of Theorem \\ref{t6.7.1} or \\ref{t6.7.2}, $(x,e) \\mapsto \\Phi(\\varphi(x),e)$ is $u$-continuous.\nSince $x_{n_k}' \\neq x_{n_k}$ and \n\\[ d(x'_{n_k},x_{n_k}) + \\|g_{n_k}(x'_{n_k}) - u_{n_k}\\| \\leq \n d(x'_{n_k},x_{n_k}) + \\|g_{n_k}(x'_{n_k}) - v_{n_k}\\| + \\|g_{n_k} - f\\|_S\n \\to 0,\\]\n $u$-continuity gives \n \\begin{equation}\\label{e6.7} \\Phi(\\varphi(x'_{n_k}),g_{n_k}(x'_{n_k})) - \\Phi(\\varphi(x_{n_k}), u_{n_k}) \\to 0.\\end{equation}\nThe limits (\\ref{e6.6}) and (\\ref{e6.7}) yield \n\\[ \\Phi(\\varphi(x_{n_k}), v_{n_k}) - \\Phi(\\varphi(x_{n_k}), u_{n_k}) \\to 0,\\]\ncontrary to (\\ref{e6.5}).\n\\end{proof}\n\n\n\n\\subsection{Bourbaki boundedness} \nLet $X$ be a metric space. For any $\\varepsilon>0$, recall the ``metric'' $d_\\varepsilon$ defined by (\\ref{eq6.0}). $d_\\varepsilon$ induces an equivalence relation $\\sim_\\varepsilon$ on $X$ by $a\\sim_\\varepsilon b$ if and only if $d_\\varepsilon(a,b) < \\infty$.\nThe equivalence classes will be called {\\em $\\varepsilon$-sets}. $d_\\varepsilon$ is a proper metric (i.e., finite valued) on each $\\varepsilon$-set.\n$X$ is said to be {\\em Bourbaki bounded} if for any $\\varepsilon>0$, there are only finitely many $\\varepsilon$-sets, each of which is bounded in the $d_\\varepsilon$ metric. See \\cite{BG, B, GM}.\nA classical result of Atsuji \\cite{At} and Hejcman \\cite{H}, rediscovered in \\cite{O'F}, states that $U(X) = U_*(X)$ if and only if $X$ is Bourbaki bounded.\nThe final theorem in this section generalizes this result.\n\n\\begin{thm}\\label{t6.10}\nLet $X, Y$ be complete metric spaces and let $E, F$ be Banach spaces.\nIf there is a biseparating map from $U(X,E)$ onto $U_*(Y,F)$, then $X$ is Bourbaki bounded.\n\\end{thm}\n\nBefore proceeding to the proof of the theorem, observe that if $X$ is Bourbaki bounded, then $U(X,E) = U_*(X,E)$. This follows easily from Proposition \\ref{p6.3}(2).\n \nLet $T:U(X,E)\\to U_*(Y,F)$ be a biseparating map. By Propositions \\ref{p4.2} and \\ref{p6.3.0}, $T$ has a representation\n\\[ Tf(y) = \\Phi(y,f(\\varphi^{-1}(y))) \\text{ for all $y\\in Y$ and all $f\\in U(X,E)$},\\]\nwhere $\\varphi$ is a uniform homeomorphism and $\\Phi(y,\\cdot):E\\to F$ is a bijection for all $y\\in Y$.\nWe may and do assume that $T0 = 0$, so that $\\Phi(y,0) = 0$ for all $y$.\n\n\\begin{lem}\\label{l6.11}\nLet $X, Y$ be complete metric spaces and let $E, F$ be Banach spaces.\nIf there is a biseparating map from $U(X,E)$ onto $U_*(Y,F)$, then for any $\\varepsilon>0$, $X$ has finitely many $\\varepsilon$-sets.\n\\end{lem}\n\n\\begin{proof}\nSuppose that there exists some $\\varepsilon >0$ so that $X$ contains an infinite sequence $(X_n)^\\infty_{n=1}$ of $\\varepsilon$-sets.\nChoose $x_n \\in X_n$ for each $n$ and let $y_n =\\varphi(x_n)$.\nSince $\\Phi(y_n,\\cdot):E\\to F$ is a bijection, there exists $e_n\\in E$ so that $\\|\\Phi(y_n,e_n)\\| > n$.\nDefine $f:X\\to E$ by $f(x) = e_n$ if $x\\in X_n$, $n\\in {\\mathbb N}$ and $0$ otherwise.\nThen $f$ is uniformly continuous but $\\|Tf(y_n)\\|= \\|\\Phi(y_n,e_n)\\| > n$ for all $n$.\nThis contradicts the assumption that $Tf \\in U_*(Y,F)$.\n\\end{proof}\n\n\n\\begin{lem}\\label{l6.12}\nLet $X, Y$ be complete metric spaces and let $E, F$ be Banach spaces.\nSuppose that there is a biseparating map from $U(X,E)$ onto $U_*(Y,F)$. For any $\\varepsilon>0$, any $\\varepsilon$-set of $X$ is $d_\\varepsilon$-bounded.\n\\end{lem}\n\n\n\\begin{proof}\nDefine $\\Xi:X\\times E\\to F$ by $\\Xi(x,e) = \\Phi(\\varphi(x),e)$. The formula $Sf(x) = \\Xi(x,f(x))$ defines a biseparating map from $U(X,E)$ onto $U_*(X,F)$ so that $S0 = 0$. For each $x$, $\\Xi(x,\\cdot):E\\to F$ is a bijection. Denote its inverse by $\\Theta(x,\\cdot)$. \nSuppose that there exist $\\varepsilon>0$ and an $\\varepsilon$-set $X_0$ that is not $d_\\varepsilon$ bounded.\nFix $x_0\\in X_0$ and a sequence $(x_n)$ in $X_0$ so that $d_\\varepsilon(x_{n+1},x_0) > 3d_\\varepsilon(x_n,x_0)$ for all $n$.\nLet $a$ be a nonzero vector in $E$. \nBy Proposition \\ref{p6.3}(2), the function $f:X\\to E$ given by $f(x) = d_\\varepsilon(x,x_0)a$ belongs to $U(X,E)$.\nHence $Sf\\in U_*(X,F)$.\nIn particular, the sequence $(b_n) = (Sf(x_n))$ is bounded in $F$.\n\n\n\\medskip\n\\noindent{Claim}. There exists $m\\in {\\mathbb N}$ so that \n\\[ \\limsup_n\\|\\Theta(x_n,sb_n) - \\Theta(x_n,tb_n)\\| \\leq 1 \\text{ if $s,t\\in [0,1]$, $|s-t| \\leq \\frac{1}{m}$}.\\]\n\nFirst suppose that the claim holds.\nThen\n\\begin{align*}\n \\limsup_n\\|\\Theta(x_n,b_n)\\| & = \\limsup_n\\|\\Theta(x_n,0)-\\Theta(x_n,b_n)\\|\n \\\\&\\leq \\sum^m_{k=1}\\limsup\\|\\Theta(x_n,\\frac{k-1}{m}\\,b_n) - \\Theta(x_n,\\frac{k}{m}\\,b_n)\\| \\leq m.\n\\end{align*}\nHowever, $\\Xi(x_n,d_\\varepsilon(x_n,x_0)a) = Sf(x_n) = b_n$ for all $n$.\nHence $\\Theta(x_n,b_n) = d_\\varepsilon(x_n,x_0)a$ for all $n$.\nIn particular, $(\\Theta(x_n,b_n))$ cannot be bounded, contradicting the preceding inequality.\n\nTo complete the proof of the lemma, let us verify the claim.\nIf the claim fails, for each $m\\in {\\mathbb N}$, one can find $s_m,t_m\\in [0,1]$, $|s_m-t_m|\\leq \\frac{1}{m}$, so that\n\\[ \\limsup_n\\|\\Theta(x_n,s_mb_n) - \\Theta(x_n,t_mb_n)\\| > 1.\\]\nWe may assume that $(s_m), (t_m)$ both converge to some $t_0\\in [0,1]$.\nWithout loss of generality, \n\\[ \\limsup_n\\|\\Theta(x_n,s_mb_n) - \\Theta(x_n,t_0b_n)\\| > \\frac{1}{2} \\text{ for all $m$.}\\]\nChoose $n_1 < n_2 < \\cdots$ so that \n\\begin{equation}\\label{e6.4}\\|\\Theta(x_{n_m},s_mb_{n_m}) - \\Theta(x_{n_m},t_0b_{n_m})\\| > \\frac{1}{2} \\text{ for all $m$.}\\end{equation}\nClearly, $(x_n)$, and hence $(x_{n_m})$, is a separated sequence by choice. Since $(t_0b_{n_m})$ is bounded, there exist $g_1\\in U_*(X,F)$ so that \n$g_1(x_{n_m}) = t_0b_{n_m}$ for all $m$.\nIf there exists $\\delta > 0$ and an infinite set $M$ so that $B(x_{n_m},\\delta) = \\{x_{n_m}\\}$ for all $m\\in M$, then each $\\{x_{n_m}\\}$ is a $\\delta$-set in $X$, contradicting Lemma \\ref{l6.11}.\nTherefore, there is a sequence $(x'_m)$ in $X$ so that $0 < d(x_{n_m},x'_m)\\to 0$.\nNote that \n$\\|s_mb_{n_m} - t_0b_{n_m}\\| \\to 0$. \nHence there exists $h \\in U_*(X,F)$ so that $h(x_{n_m}) = s_mb_{n_m} - t_0b_{n_m}$ and $h(x'_m) = 0$ for all sufficiently large $m$.\nSet $g_2 = g_1 +h$.\nSince $g_1(x'_m) = g_2(x'_m)$, $S^{-1}g_1(x'_m) = S^{-1}g_2(x'_m)$ for all sufficiently large $m$. Thus\n\\begin{align*}\n\\|\\Theta(x_{n_m},&\\, t_0b_{n_m}) - \\Theta(x_{n_m},s_mb_{n_m})\\| = \\|S^{-1}g_1(x_{n_m}) - S^{-1}g_2(x_{n_m})\\| \\\\\n&\\leq \\|S^{-1}g_1(x_{n_m}) - S^{-1}g_1(x'_m)\\| + \\|S^{-1}g_2(x'_{m}) - S^{-1}g_2(x_{n_m})\\|.\n\\end{align*}\nAs $S^{-1}g_1, S^{-1}g_2$ are uniformly continuous functions and $d(x_{n_m},x'_m) \\to 0$, both terms on the right of the inequality tend to $0$. So we have reached a contradiction with (\\ref{e6.4}).\nThis completes the proof of the claim and hence of the lemma.\n\\end{proof}\n\nLemmas \\ref{l6.11} and \\ref{l6.12} prove Theorem \\ref{t6.10}.\nIf $T:U(X,E)\\to U_*(Y,F)$ is a biseparating map, then $X$ is Bourbaki bounded by Theorem \\ref{t6.10}.\nHence $U(X,E) = U_*(X,E)$ \\cite{O'F}.\nThus the characterization Theorem \\ref{t6.7.2} applies.\n\n\\section{Spaces of Lipschitz functions}\\label{s8}\n\nWe focus on biseparating maps on Lipschitz spaces in this section. Again, we restrict consideration to complete metric spaces $X$, $Y$ and Banach spaces $E, F$.\nIn contrast to previous cases, we will see that there is no difference between spaces of bounded and unbounded Lipschitz functions. In fact, it is even sufficient to consider bounded metric spaces $X$ and $Y$.\nIndeed, if $(X,d)$ is a metric space, let $X_1$ be the set $X$ with the bounded metric $d\\wedge 1$.\nThen clearly $\\operatorname{Lip}_*(X,E) = \\operatorname{Lip}(X_1,E)$.\nTo see that $\\operatorname{Lip}(X,E)$ is equivalent to some $\\operatorname{Lip}(Z,E)$ for a bounded metric space $Z$ via a linear biseparating map, we employ essentially the same argument from \\cite[Proposition 5.2]{LT}, which has its roots in \\cite{W}.\nFix a distinguished point $e$ in $X$ and define a function $\\xi:X\\to {\\mathbb R}$ by $\\xi(x) = d(x,e)\\vee 1$.\nDenote the Lipschitz constant of a function $f$ by $L(f)$. Let $d':X\\times X\\to {\\mathbb R}$ be given by\n\\[ d'(p,q) = \\sup_{\\stackrel{f\\in \\operatorname{Lip}(X)}{L(f),|f(e)| \\leq 1}}\\bigl|\\frac{f(p)}{\\xi(p)} - \\frac{f(q)}{\\xi(q)}\\bigr|,\\]\nwhere $\\operatorname{Lip}(X)$ is the space of all real-valued Lipschitz functions on $X$.\n\n\n\\begin{lem}\\cite[Proposition 5.1]{LT}\\label{p6.1}\n\\begin{enumerate}\n\\item $d'$ is a metric on $X$ that is bounded above by $4$.\n\\item \n\\[ \\frac{d(p,q)}{\\xi(p)\\vee \\xi(q)}\\leq d'(p,q) \\leq \\frac{3d(p,q)}{\\xi(p)\\vee \\xi(q)}\\]\nfor all $p,q\\in X$.\n\\item If $\\xi(p) \\leq \\xi(q)$, then\n\\[ d'(p,q) \\leq d'(p,q)\\xi(p) \\leq 3d(p,q).\\]\n\\item If $X$ is complete with respect to the metric $d$, then it is complete with respect to the metric $d'$.\n\\end{enumerate}\n\\end{lem}\n\nLet $Z$ be the set $X$ with the metric $d'$.\n\n\\begin{prop}\\label{p6.2}\n$f\\in \\operatorname{Lip}(X,E)$ if and only if $\\frac{f}{\\xi} \\in \\operatorname{Lip}(Z,E)$.\nIn particular, $T:\\operatorname{Lip}(X,E)\\to \\operatorname{Lip}(Z,E)$, $Tf = \\frac{f}{\\xi}$, is a linear biseparating map.\n\\end{prop}\n\n\\begin{proof}\nThe second assertion follows easily from the first.\nSuppose that $f\\in \\operatorname{Lip}(X,E)$. Set $c = L(f)\\vee \\|f(e)\\| \\vee 1$.\nFor any $x^* \\in E^*$, $\\|x^*\\| \\leq 1$, $g = (x^*\\circ f)\/c\\in \\operatorname{Lip}(X)$ and $L(g), |g(e)| \\leq 1$.\nBy definition of $d'$,\n\\[ cd'(p,q) \\geq c\\bigl|\\frac{g(p)}{\\xi(p)} - \\frac{g(q)}{\\xi(q)}\\bigr| = \\bigl|x^*\\bigl(\\frac{f}{\\xi}(p) - \\frac{f}{\\xi}(q)\\bigr)\\bigr|.\n\\]\nTaking supremum over $\\|x^*\\|\\leq 1$ shows that $\\frac{f}{\\xi}\\in \\operatorname{Lip}(Z,E)$ with Lipschitz constant at most $c$.\n\nConversely, suppose that $g= \\frac{f}{\\xi}\\in \\operatorname{Lip}(Z,E)$. Let $p,q$ be distinct points in $X$ so that $\\xi(p) \\leq \\xi(q)$. Denote the Lipschitz constant of $g$ with respect to the metric $d'$ by $L'(g)$.\nThen\n\\[ \\|g(q)\\| \\leq \\|g(q) - g(e)\\| + \\|g(e)\\| \\leq L'(g)d'(q,e) + \\|g(e)\\| \\leq 4L'(g) + \\|g(e)\\|\\]\nsince $d' \\leq 4$.\nHence\n\\begin{align*}\n\\|f(p) - f(q)\\| & \\leq \\|g(p) - g(q)\\|\\xi(p) + \\|g(q)\\|(\\xi(q) - \\xi(p))\\\\\n&\\leq L'(g)d'(p,q)\\xi(p) + (4L'(g)+ \\|g(e)\\|)d(p,q)\\\\\n&\\leq (7L'(g)+\\|g(e)\\|)d(p,q) \\text{ by Lemma \\ref{p6.1}(3)}.\n\\end{align*}\nThus $f\\in \\operatorname{Lip}(X,E)$.\n\\end{proof}\n\n\\subsection{$\\varphi$ is a Lipschitz homeomorphism} \n\nIn view of the above, throughout the rest of this section, $X$ and $Y$ will be assumed to be bounded complete metric spaces. Let $T:\\operatorname{Lip}(X,E)\\to \\operatorname{Lip}(Y,F)$ be a biseparating map so that $T0 =0$.\nOnce again, we have a representation (Proposition \\ref{p4.2})\n\\begin{equation}\\label{e7.1}Tf(y) = \\Phi(y,f(\\varphi^{-1}(y))) \\text{ for all $y\\in Y$ and all $f\\in \\operatorname{Lip}(X,E)$},\\end{equation}\nwhere $\\varphi:X\\to Y$ is a homeomorphism and $\\Phi:Y\\times E\\to F$ \nis a function such that $\\Phi(y,\\cdot):E\\to F$ is a bijection for all $y\\in Y$.\nDenote the inverse of $\\Phi(y,\\cdot)$ by $\\Psi(x,\\cdot)$, where $\\varphi(x) = y$.\n\n\\begin{prop}\\label{p7.3}\nSuppose that $(x_n), (x'_n)$ are sequences in $X$. Let $y_n = \\varphi(x_n), y_n' = \\varphi(x_n')$ for all $n$.\nIf $(y_n)$ is a separated sequence, then there exists $C<\\infty$ so that $d(y_n,y_n') \\leq Cd(x_n,x_n')$ for all $n$.\n\\end{prop}\n\n\\begin{proof}\nAssume that the proposition fails. There are sequences as in the statement of the proposition so that $d(y_n,y_n')\/d(x_n,x_n') \\to \\infty$. Since $Y$ is bounded, $d(x_n,x_n') \\to 0$.\nIf $(y_n')$ has a convergent subsequence, then $(x_n')$, and hence $(x_n)$ has a convergent subsequence, which in turn implies that $(y_n)$ has a convergent subsequence, contrary to the choice of $(y_n)$.\nThus, by taking a subsequence if necessary, we may assume that both $(y_n)$ and $(y_n')$ are separated sequences. Fix a nonzero vector $a\\in E$ and let $g = T(1\\otimes a)$.\n\n\\medskip\n\n\\noindent\\underline{Case 1}. $d(y_n,y_n') \\not\\to 0$.\n\nIn this case, by taking a subsequence, we may assume that $(y_n)\\cup (y_n')$ is a separated set.\nSince $g\\in \\operatorname{Lip}(Y,F)$, $(g(y_n))$ is a bounded sequence in $F$.\nHence there exists $h\\in \\operatorname{Lip}(Y,F)$ so that $h(y_n) = g(y_n)$ and $h(y_n') = 0$ for all large $n$.\nThen $T^{-1}h(x_n) = a$ and $T^{-1}(x_n') = 0$ for all large $n$.\nSince $T^{-1}h$ is Lipschitz and $d(x_n,x_n')\\to 0$, we have a contradiction.\n\n\\medskip\n\n\\noindent\\underline{Case 2}. $d(y_n,y_n')\\to 0$.\n\nIf $(\\frac{\\|g(y_n)\\|}{d(y_n,y_n')})$ is bounded, then there is a function $h\\in \\operatorname{Lip}(Y,F)$ so that \n$h(y_n) = g(y_n)$ and $h(y_n') = 0$ for all large $n$.\nThus $T^{-1}h(x_n) = a$ and $T^{-1}h(x_n') = 0$. This is impossible since $T^{-1}h$ is Lipschitz and $d(x_n,x_n') \\to 0$.\n\nFrom the unboundedness of $(\\frac{\\|g(y_n)\\|}{d(y_n,y_n')})$, we may assume without loss of generality that for each $n$, there exists $s_n$ so that $d(y_n,y'_n)\\leq s_n< 2d(y_n,y_n')$ and that $k_n = \\frac{\\|g(y_n)\\|}{s_n}\\in {\\mathbb N}$.\nSince $\\Phi(y_n,a) = g(y_n)$, $\\Psi(x_n,g(y_n)) = a$. Thus\n\\[ \\sum^{k_n}_{j=1}[\\Psi(x_n,\\frac{jg(y_n)}{k_n}) - \\Psi(x_n,\\frac{(j-1)g(y_n)}{k_n})] = \\Psi(x_n,g(y_n)) = a.\n\\]\nHence there exists $j_0\\in \\{1,\\dots, k_n\\}$ so that \n\\[ \\|\\Psi(x_n,\\frac{j_0g(y_n)}{k_n}) - \\Psi(x_n,\\frac{(j_0-1)g(y_n)}{k_n})\\| \\geq \\frac{\\|a\\|}{k_n}.\\]\nTherefore, there exists $i_0\\in \\{j_0-1,j_0\\}$ so that \n\\begin{equation}\\label{e7.2.0} \\|\\Psi(x_n, \\frac{i_0g(y_n)}{k_n}) - \\Psi(x'_n, \\frac{j_0g(y_n')}{k_n})\\| \\geq \\frac{\\|a\\|}{2k_n}.\\end{equation}\nNow\n\\begin{align}\\label{e7.2}\n\\|\\frac{i_0g(y_n)}{k_n} &- \\frac{j_0g(y_n')}{k_n}\\| \\leq \\frac{\\|g(y_n)\\|}{k_n} + \\frac{j_0}{k_n}\\|g(y_n)-g(y_n')\\| \\\\ \\notag\n&\\leq s_n + \\frac{j_0L(g)}{k_n}d(y_n,y_n') \\leq (2+L(g))d(y_n,y_n'),\n\\end{align}\nwhere $L(g)$ is the Lipschitz constant of $g$.\nSince $(y_n)$ is separated and $(\\frac{i_0g(y_n)}{k_n})$ is a bounded sequence in $F$, there exists $h_1\\in \\operatorname{Lip}(Y,F)$ so that $h_1(y_n) = \\frac{i_0g(y_n)}{k_n}$ for all $n$.\nLet $L(h_1)$ be the Lipschitz constant of $h_1$. By (\\ref{e7.2}),\n\\begin{align*} \\|\\frac{j_0g(y_n)}{k_n} - h_1(y_n')\\|& \\leq \\|\\frac{j_0g(y_n)}{k_n} - h_1(y_n)\\| + L(h_1)d(y_n,y_n')\n\\\\&\\leq (2+L(g)+L(h_1))d(y_n,y_n').\n\\end{align*}\n Therefore, \none can construct a function $h_2\\in \\operatorname{Lip}(Y,F)$ so that \n\\[ h_2(y_n) = 0 \\text{ and } h_2(y_n') = \\frac{j_0g(y_n')}{k_n}- h_1(y_n') \\text{ for all large $n$}.\\]\nLet $f= T^{-1}(h_1+h_2)$.\nThen $Tf(y_n) = h_1(y_n)$ and hence $f(x_n) =\\Psi(x_n, \\frac{i_0g(y_n)}{k_n})$ for all large $n$.\nSimilarly, $f(x_n') = \\Psi(x_n, \\frac{j_0g(y_n)}{k_n})$ for all large $n$.\nNote that $g$ is a bounded function. Set $\\|g\\|_\\infty = \\sup_{y\\in Y}\\|g(y)\\|$. By (\\ref{e7.2.0}, \n\\[ \\|f(x_n) -f(x_n')\\| \\geq \\frac{\\|a\\|}{2k_n} \\geq \\frac{\\|a\\|s_n}{2\\|g\\|_\\infty} \\geq \\frac{\\|a\\|}{2\\|g\\|_\\infty}d(y_n,y_n')\\] for all large $n$.\nSince $f$ is Lipschitz, it follows that $d(y_n,y_n')\/d(x_n,x'_n)\\not\\to \\infty$.\nThis contradiction completes the proof of the proposition.\n\\end{proof}\n\nIf $(x_0,e_0)\\in X\\times E$, $C<\\infty$ and $r >0$, let\n\\begin{equation}\\label{e7.4} \\Delta(x_0,e_0,C,r) = \\{(x,e)\\in X\\times E: d(x,x_0) \\leq r, \\|e-e_0\\| \\leq Cd(x,x_0)\\}.\\end{equation}\nThen set $\\Delta(x_0,e_0,C) = \\bigcup_{r > 0} \\Delta(x_0,e_0,C,r)$.\nIt is not surprising that understanding the map $T$ depends on analyzing the sets $\\Delta(x_0,e_0,C,r)$ and $\\Delta(x_0,e_0,C)$. For a very special instance, see \\cite[Section 7.2]{AZ}.\nDefine sets in $Y\\times F$ in a similar manner.\nLet $M:X\\times E\\to Y\\times F$ be the function\n\\[M(x,u) = (\\varphi(x),\\Phi(\\varphi(x),u)).\\]\nThen $M$ is a bijection. Moreover, if $f \\in \\operatorname{Lip}(X,E)$, then $M(x_0, f(x_0)) = (\\varphi(x_0), Tf(\\varphi(x_0)))$.\n\n\n\n\nSuppose that $\\varphi:X\\to Y$ is not Lipschitz. There are sequences $(x_n), (x'_n)$ in $X$, $x_n\\neq x_n'$ for all $n$, so that taking $y_n = \\varphi(x_n), y_n' = \\varphi(x_n')$, we have $d(y_n,y_n')\/d(x_n,x_n') \\to \\infty$.\nSince $Y$ is bounded, $d(x_n,x_n') \\to 0$. By Proposition \\ref{p7.3}, $(y_n)$ cannot be a separated sequence.\nTaking a subsequence if necessary, we may assume that $(y_n)$ converges to some $y_0$.\nThen $(x_n)$ converges to $\\varphi^{-1}(y_0) = x_0$. The same must hold for $(x'_n)$. Therefore, $(y_n')$ also converges to $y_0$.\nWith further subsequences and relabeling the primed and unprimed terms if necessary, we may assume that $d(y_n,y_0) \\leq d(y_n',y_0)$ for all $n$.\nWith this assumption, $y_n'\\neq y_0$ for all $n$. For otherwise $y_n = y_0 = y_n'$, which implies that $x_n = x'_n$, contrary to their choice. Hence we may further assume that \n\\[ 2d(y'_{n+1},y_0) m d(x_{m}',x_{m}) \\text{ for all $m$.}\\]\nLet $d_m = d(y_m',y_m)$. \nDefine a function $g:Y\\to F$ by\n\\[ g(y) = \\begin{cases}\n v_0 + (1- \\frac{4d(y,y_m')}{d_m})(v_m'-v_0) &\\text{if $m\\in {\\mathbb N}$, $d(y,y_m') < \\frac{d_m}{4}$}\\\\\n v_0 &\\text{otherwise}.\n \\end{cases}\\]\nFrom the disjointness of the balls $B(y_m',\\frac{d_m}{4})$ and the inequality $\\|v_m'-v_0\\| \\leq d_m$ for all $m$,\nwe see that $g\\in \\operatorname{Lip}(Y,F)$.\n\nNext, we claim that $y_n \\notin B(y_m',\\frac{d_m}{4})$ for any $m,n\\in {\\mathbb N}$.\nIndeed, this is obvious if $m =n$.\nNote that \\[ d_m \\leq d( y_m',y_0) + d(y_0,y_m) \\leq 2d(y_m',y_0).\\]\nIf $m < n$, then \n\\[ d(y_n,y_0) \\leq d(y_n',y_0) < \\frac{d(y_m',y_0)}{2}.\n\\]\nHence\n\\[ d(y_n,y_m') \\geq d(y_m',y_0) - d(y_n,y_0)> \\frac{d(y_m',y_0)}{2} \\geq \\frac{d_m}{4}.\n\\]\nOn the other hand, if $m > n$ and $y_n\\neq y_0$, then\n\\[2d(y_m',y_0) \\leq 2d(y_{n+1}',y_0) < d(y_n,y_0).\n\\]\nHence\n\\[ d(y_n,y_m') \\geq d(y_n,y_0) - d(y_m',y_0) \\geq d(y_m',y_0) \\geq \\frac{d_m}{2}.\n\\]\nFinally, if $y_n = y_0$, then $d(y_n,y_m') = d(y_0,y_m') \\geq \\frac{d_m}{2}$.\nThus $y_n \\notin B(y_m',\\frac{d_m}{4})$ in all cases.\n\nObviously, $g(y_m') = v_m'= \\Phi(y_m',u_m')$. \nFrom the fact that $y_n \\notin B(y_m',\\frac{d_m}{4})$ for all $m,n$,\n $g(y_m) = v_0 = \\Phi(y_m,u_m)$ for all $n$.\nTherefore, $T^{-1}g(x_m) = u_m$ and $T^{-1}g(x'_m) = u_m'$ for all $m$.\nBut $\\|u_{m}'-u_{m}\\| > m d(x_{m}',x_{m})$, which contradicts the fact that $T^{-1}g$ is Lipschitz.\n\\end{proof}\n\nWe are now ready to prove the main result regarding the homeomorphism $\\varphi$.\n\n\n\\begin{thm}\\label{t7.5}\nLet $T:\\operatorname{Lip}(X,E)\\to \\operatorname{Lip}(Y,F)$ be a biseparating map, where $X, Y$ are complete bounded metric spaces and $E, F$ are Banach spaces. \nIn the notation of (\\ref{e7.1}), $\\varphi:X\\to Y$ is a Lipschitz homeomorphism.\n\\end{thm}\n\n\\begin{proof}\nIf $\\varphi$ is not a Lipschitz function, then we obtain sequences $(x_n), (x_n')$, $(y_n)$ and $(y_n')$ as in the discussion before Proposition \\ref{p7.5}.\nFor each $v\\in F$, determine $m = m(v) \\in {\\mathbb N}$ by Proposition \\ref{p7.5}.\nSet $F_k = \\{v\\in F: m(v) \\leq k\\}$ for each $k\\in {\\mathbb N}$.\nThen $F = \\bigcup_{k=1}^\\infty\\overline{F_k}$.\nBy the Baire Category Theorem, there are an open ball $O$ in $F$ and $k_0\\in {\\mathbb N}$ so that $O\\subseteq \\overline{F_{k_0}}$.\nPick distinct points $a,b\\in O\\cap F_{k_0}$. Since $d_n = d(y_n,y_n') \\to 0$, we may assume without loss of generality that $\\|a-b\\| > d_n$ for all $n$. For each $n$, choose $k_n\\in {\\mathbb N}$ so that \n\\[ \\frac{k_nd_n}{2} \\leq \\|a-b\\| < {k_nd_n}.\\]\nNote that $a + \\frac{j}{k_n}(b-a) \\in O$, $0\\leq j\\leq k_n$. By making small perturbations, one can find $w_{nj}\\in O\\cap F_{k_0}$, $0\\leq j\\leq k_n$, so that $w_{n0} = a$, $w_{nk_n} = b$ and $\\|w_{nj} - w_{n,j-1}\\|$ is sufficiently close to $\\frac{\\|a-b\\|}{k_n}$ so as to make it $< d_n$.\nNow\n\\begin{align*}\n\\|\\sum^{k_n}_{j=1}[\\Psi(x_n&,w_{nj}) - \\Psi(x_n,w_{n,j-1})]\\| = \\|\\Psi(x_n,v_1) -\\Psi(x_n,v_0)\\|\n\\\\ & = \\|T^{-1}(1\\otimes v_1)(x_n) - T^{-1}(1\\otimes v_0)(x_n)\\|\\\\\n& \\to \\|T^{-1}(1\\otimes v_1)(x_0) - T^{-1}(1\\otimes v_0)(x_0)\\|\\\\\n& = \\|\\Psi(x_0,b) -\\Psi(x_0,a)\\| = c.\n\\end{align*}\nSince $\\Psi(x_0,\\cdot)$ is a bijection, $c > 0$.\nFor all $n$, there exists $1\\leq j_n\\leq k_n$ so that \n\\[ \\|\\Psi(x_n,w_{n,j_n}) - \\Psi(x_n,w_{n,j_n-1})\\| > \\frac{c}{2k_n}.\\]\nNow choose $i_n\\in \\{j_n-1,j_n\\}$ so that, setting \n\\[ u_n = \\Psi(x_n,w_{n,j_n}) \\text{ and } u_n' = \\Psi(x_n',w_{n,i_n}),\\]\nwe have $\\|u_n-u_n'\\| > \\frac{c}{4k_n}$.\nNote that \n\\[ M^{-1}(y_n,w_{n,j_n}) = (\\varphi^{-1}(y_n),\\Psi(\\varphi^{-1}(y_n), w_{n,j_n}))\n=\n(x_n,u_n).\n\\]\nSimilarly, $M^{-1}(y_n',w_{n,i_n}) =(x_n',u_n')$.\nBy choice,\n\\[ \\|w_{n,i_n} - w_{n,j_n}\\| \\leq \\|w_{n,j_n-1} - w_{n,j_n}\\| < d_n = d(y_n',y_n).\\]\nHence $(y_n', w_{n,i_n}) \\in \\Delta(y_n,w_{n,j_n},1)$.\nSince $w_{n,j_n} \\in F_{k_0}$, $m=m(w_{n,j_n}) \\leq k_0$.\nBy definition of $m(w_{n,j_n})$, this implies that for all $n \\geq k_0$,\n\\[ (x_n',u'_n) \\in \\Delta(x_n,u_n,m)\\subseteq \\Delta(x_n,u_n,k_0),\n\\]\nwhich in turns yields that \n$\\|u_n'-u_n\\| \\leq k_0d(x_n',x_n).$\nTherefore,\n\\[ \\frac{c}{2\\|a-b\\|}\\cdot d_n \\leq \\frac{c}{4k_n} < \\|u_n - u_n'\\| \\leq k_0d(x_n',x_n).\\]\nSince this holds for all sufficiently large $n$, and $d_n = d(y_n,y_n')$, it contradicts the assumption that $d(y_n,y_n')\/d(x_n,x'_n) \\to \\infty$.\n\nThis completes the proof that $\\varphi$ is a Lipschitz function. By symmetry, so is $\\varphi^{-1}$. Hence $\\varphi$ is a Lipschitz homeomorphism.\n\\end{proof}\n\n\\subsection{Section problem for Lipschitz functions}\n\nLet $X$ be a complete bounded metric space and let $E,F$ be Banach spaces. Consider a given function $\\Xi:X\\times E\\to F$. Define $M:X\\times E \\to X\\times F$ by $M(x,e) = (x,\\Xi(x,e))$. Recall the sets $\\Delta(x,e,C,r)$ and $\\Delta(x,e,C)$ in $X\\times E$ as given by (\\ref{e7.4}). Similar definitions apply in $X\\times F$. Theorem \\ref{t7.7} solves the section problem for spaces of Lipschitz functions. For a very special case, refer to \\cite[Theorem 7.1]{AZ}.\n\n\\begin{lem}\\label{l7.6}\nSuppose that $x_0\\in X$, $u_0\\in E$ and $C<\\infty$.\nLet $(x^n_1), (x^n_2)$ in $X$, $(u^n_1), (u^n_2)$ in $E$ be sequences so that $x^n_1 \\neq x^n_2$ for all $n$,\n\\[ \\|u^n_1-u^n_2\\|\\leq Cd(x^n_1,x^n_2),\\\n\\|u^n_i-u_0\\| \\leq Cd(x^n_i,x_0) \\text{ $i =1,2$, $n\\in {\\mathbb N}$, and}\\]\n$\\lim_n d(x^n_i,x_0)=0$, $i =1,2$.\nThen there exists $f\\in \\operatorname{Lip}(X,E)$ so that $f(x^n_i) = u^n_i$, $i =1,2$, for infinitely many $n$.\n\\end{lem}\n\n\\begin{proof}\nSet $r^n_i = d(x^n_i,x_0)$. There is no loss of generality in assuming that $r^n_2 \\geq r^n_1$ for all $n$. After taking subsequences, we may divide the proof into the following cases.\n\n\\medskip\n\n\\noindent \\underline{Case 1}. $x^n_1 = x_0$, i.e., $r^n_1 = 0$ for all $n$.\n\n\\medskip\n\n\nNote that in this case $u^n_1 = u_0$ for all $n$. Since $x^n_2 \\neq x^n_1$ and $r^n_2=d(x^n_2,x_0) \\to 0$, we may further assume that $r^{n+1}_2 < \\frac{1}{3}r^n_2$ for all $n$, which implies that the balls $B(x^n_2,\\frac{r_n}{2})$ are pairwise disjoint.\nDefine $f:X\\to E$ by\n\\[ f(x) = \\begin{cases}\nu_0 + (1 - \\frac{2d(x,x^n_2)}{r_n})(u^n_2-u_0) &\\text{if $d(x,x^n_2) < \\frac{r_n}{2}$}\\\\\nu_0 &\\text{otherwise}.\n\\end{cases}\\]\nThen it can be checked that $f\\in \\operatorname{Lip}(X,E)$, $f(x^n_2) = u^n_2$, $f(x^n_1) = f(x_0) = u_0=u^n_1$ for all $n$.\n\n\\medskip\n\n\\noindent \\underline{Case 2}. $r^n_1 >0$ and there exists $c > 0$ so that $d(x^n_1,x^n_2) \\geq cr^n_2$ for all $n$.\n\n\\medskip\n\nWe may of course assume that $0 < c < 1$. The assumptions imply that the balls $B(x^n_1, \\frac{cr^n_1}{2})$ and $B(x^n_2, \\frac{cr^n_2}{2})$ are disjoint.\nSince $r^n_2\\to 0$, we may further assume that $B(x^{n+1}_1, \\frac{cr^{n+1}_1}{2})\\cup B(x^{n+1}_2, \\frac{cr^{n+1}_2}{2})$ is disjoint from $\\bigcup^n_{k=1}[B(x^{k}_1, \\frac{cr^{k}_1}{2})\\cup B(x^{k}_2, \\frac{cr^{k}_2}{2})]$ for any $n$.\nAs a result, the sets $B(x^n_1, \\frac{cr^n_1}{2})$, $B(x^n_2, \\frac{cr^n_2}{2})$, $n\\in {\\mathbb N}$, are all mutually disjoint.\nDefine $f:X\\to E$ by\n\\[ f(x) = \\begin{cases}\nu_0 + (1 - \\frac{2d(x,x^n_i)}{cr^n_i})(u^n_i-u_0) &\\text{if $d(x,x^n_i) < \\frac{cr^n_i}{2}$, $n \\in {\\mathbb N}$, $i=1,2$,}\\\\\nu_0 &\\text{otherwise}.\n\\end{cases}\\]\nThen it can be checked that $f\\in \\operatorname{Lip}(X,E)$, $f(x^n_i) = u^n_i$, $i=1, 2$, $n\\in {\\mathbb N}$.\n\n\n\n\\medskip\n\n\\noindent \\underline{Case 3}. $r^n_1 >0$ for all $n$ and $d(x^n_1,x^n_2)\/r^n_2 \\to 0$.\n\n\\medskip\n\nAs in Case 2, we may assume that the sets $B(x^n_2, \\frac{r^n_2}{2}), n\\in {\\mathbb N}$, are dsijoint.\nIn this instance, we may further assume that $B(x^n_1,d(x^n_1,x^n_2)) \\subseteq B(x^n_2, \\frac{r^n_2}{2})$ for all $n$.\nDefine $g:X\\to E$ by\n\\[ g(x) = \\begin{cases}\n (1 - \\frac{2d(x,x^n_2)}{r^n_2})(u^n_2-u_0) &\\text{if $d(x,x^n_2) < \\frac{r^n_2}{2}$}\\\\\n0 &\\text{otherwise}.\n\\end{cases}\\]\nSince $\\|u^n_2-u_0\\| \\leq Cd(x^n_2,x_0)= Cr^n_2$ for all $n$, $g\\in \\operatorname{Lip}(X,E)$ and has Lipschitz constant at most $2C$.\nClearly, $g(x^n_2) = u^n_2-u_0$ for all $n$.\nNow let \n$h:X\\to E$ be given by\n\\[ h(x) = \\begin{cases}\n (1 - \\frac{d(x^n_1,x)}{d(x^n_1,x^n_2)})(u^n_1-u_0-g(x^n_1)) &\\text{if $d(x^n_1,x) < d(x^n_1,x^n_2)$}\\\\\n0 &\\text{otherwise}.\n\\end{cases}\\]\nNote that\n\\[\\|u^n_1-u_0-g(x^n_1)\\| \\leq \\|u^n_1-u^n_2\\| + \\|g(x^n_2)-g(x^n_1)\\| \\leq 3Cd(x^n_1,x^n_2).\n\\]\nTaking into account the disjointness of the sets $B(x^n_1, d(x^n_1,x^n_2))$, it follows that $h\\in \\operatorname{Lip}(X,E)$.\nFurthermore, \n\\[ \n(g+h)(x^n_1) = u^n_1-u_0 \\text{ and } (g+h)(x^n_2) = g(x^n_2) = u^n_2-u_0.\\]\nFinally, the function $f(x) = g(x) +h(x) +u_0$ is the one we seek.\n\\end{proof}\n\n\n\\begin{thm}\\label{t7.7}\nLet $\\Xi:X\\times E\\to F$ be a given function. Define $Sf(x) = \\Xi(x,f(x))$ for any function $f:X\\to E$.\nSuppose that $Sf$ belongs to $\\operatorname{Lip}(X,F)$ for all $f\\in \\operatorname{Lip}(X,E)$. Then\n\\begin{enumerate}\n\\item If $(x_n)$ is a separated sequence in $X$, and $B$ is a bounded set in $E$, then there is a finite set $N \\subseteq {\\mathbb N}$ so that $\\bigcup_{n\\notin N}\\Xi(x_n,B)$ is bounded.\n\\item Suppose that $x_0\\in X$, $u_0\\in E$ and $C<\\infty$. There exist $r >0$ and $D<\\infty$ so that \n\\[ \\|\\Xi(x_1,u_1) - \\Xi(x_2,u_2)\\| \\leq Dd(x_1,x_2) \n\\]\nwhenever $\\|u_1-u_2\\| \\leq Cd(x_1,x_2)$, $\\|u_i-u_0\\| \\leq Cd(x_i,x_0)$ and $d(x_i,x_0) \\leq r$, $i=1,2$.\n\\item Let $(x_n)$ be a separated sequence in $X$ and $(u_n)$ be a bounded sequence in $E$.\nFor any $C<\\infty$, there exist $r>0$ and $D<\\infty$ so that \n\\[ \\|\\Xi(x_n',u'_n) - \\Xi(x_n,u_n)\\| \\leq Dd(x_n',x_n) \\text{ for all $n$}\\]\nwhenever $\\|u_n'-u_n\\| \\leq Cd(x_n',x_n)$ and $d(x_n',x_n) \\leq r$ for all $n$.\n\\end{enumerate}\nConversely, suppose that conditions (1), (2) and (3) hold. Then $Sf\\in \\operatorname{Lip}(X,F)$ for any $f\\in \\operatorname{Lip}(X,E)$.\n\\end{thm}\n\n\\begin{proof}\nSuppose that $Sf\\in \\operatorname{Lip}(X,F)$ for any $f\\in \\operatorname{Lip}(X,E)$. \nLet $(x_n)$ be a separated sequence in $X$ and $B$ be a bounded set in $E$. If $\\bigcup_{n\\notin N}\\Xi(x_n,B)$ is unbounded for any finite set $N\\subseteq {\\mathbb N}$, there exists a sequence $(u_n)\\subseteq B$ so that $(\\Xi(x_n,u_n))$ is unbounded. Since $(x_n)$ is separated and $(u_n)$ is bounded, there is a Lipschitz function $f:X\\to E$ so that $f(x_n) = u_n$ for all $n$.\nThen $Sf\\in \\operatorname{Lip}(X,F)$, $(\\Xi(x_n,u_n)) = (Sf(x_n))$ is bounded in $F$, a contradiction. This proves condition (1).\n\nSuppose that condition (2) fails. Then there are $(x^n_1), (x^n_2)$ in $X$, $(u^n_1), (u^n_2)$ in $E$ so that \n$\\|u^n_1-u^n_2\\| \\leq Cd(x^n_1,x^n_2)$, \n$\\|u^n_i-u_0\\| \\leq Cd(x^n_i,x_0)$ and $d(x^n_i,x_0) \\leq \\frac{1}{n}$, $i =1,2$, $n\\in {\\mathbb N}$, but $\\|\\Xi(x^n_1,u^n_1) - \\Xi(x^n_2,u^n_2)\\| > nd(x^n_1,x^n_2)$.\nIn particular, the last inequality implies that $x^n_1\\neq x^n_2$ for all $n$.\nApply Lemma \\ref{l7.6} to find a function $f\\in \\operatorname{Lip}(X,E)$ so that $f(x^n_i) = u^n_i$ for infinitely many $n$.\nLet $L$ be the Lipschitz constant of $Sf$.\nThen \n\\[\\|\\Xi(x^n_1,u^n_1) - \\Xi(x^n_2,u^n_2)\\| = \\|Sf(x^n_1) - Sf(x^n_2)\\| \n\\leq Ld(x^n_1,x^n_2)\\]\nfor all $n$, contrary to their choices.\n\nLet $(x_n)$ be a separated sequence in $X$ and let $(u_n)$ be a bounded sequence in $E$. Assume that condition (3) fails for a constant $C$.\nFor each $k\\in {\\mathbb N}$, there exist $n_k\\in {\\mathbb N}$, $x_k\\in X $ and $u_k'\\in E$ so that $\\|u_k'- u_{n_k}\\| \\leq Cd(x_k',x_{n_k})$ and $d(x_k',x_{n_k}) < \\frac{1}{k}$ but\n\\begin{equation}\\label{e7.4.1}\\|\\Xi(x_k',u_k') - \\Xi(x_{n_k},u_{n_k})\\| > kd(x_k',x_{n_k}).\\end{equation}\nIf $(n_k)$ has a constant subsequence, then, say, $x_{n_k} = x_0$ and $u_{n_k} = u_0$ for infinitely many $k$.\nIn this case, we have a contradiction to condition (2), which has been shown above.\nOtherwise, we may assume that $n_k \\uparrow \\infty$.\nSince $(x_{n_k})$ is separated and $(u_{n_k})$ is bounded, there exists $g\\in \\operatorname{Lip}(X,E)$ so that $g(x_{n_k}) = u_{n_k}$ for all $k$.\nLet $L$ be the Lipschitz constant of $g$. We have\n\\[ \\|u_k'- g(x_k')\\| \\leq \\|u_k'-u_{n_k}\\| + \\| u_{n_k} - g(x_k')\\| \\leq (C+L)d(x_{n_k},x_k').\\]\nAs $(x_{n_k})$ is separated and $d(x_k',x_{n_k})\\to 0$, we can find $h\\in \\operatorname{Lip}(X,E)$ so that $h(x_{n_k}) = 0$ and $h(x_k') = u_k'-g(x_k')$ for all large $k$. Let $f = g+h\\in \\operatorname{Lip}(X,E)$.\nThen $Sf\\in \\operatorname{Lip}(X,E)$ and \n\\[ Sf(x_{n_k}) = \\Xi(x_{n_k},u_{n_k}),\\ Sf(x_k') = \\Xi(x_k',u_k').\\]\nThus (\\ref{e7.4.1}) leads to a contradiction.\n\nConversely, suppose that conditions (1) - (3) hold.\nLet $f\\in \\operatorname{Lip}(X,E)$ with Lipschitz constant $C$.\nFirst, let us show that $Sf$ is a bounded function.\nIf not, there is a sequence $(z_n)\\in X$ so that $\\|Sf(z_n)\\| \\to \\infty$.\nBy condition (1), $(z_n)$ cannot have a separated subsequence.\nHence we may assume that $(z_n)$ converges to some $z_0\\in X$.\nThen $\\|f(z_n) - f(z_0)\\| \\leq Cd(z_n,z_0)$ and $d(z_n,z_0) \\to 0$.\nApplying condition (2) with $(x_0,u_0) = (x_1,u_1) = (z_0,f(z_0))$ and $(x_2,u_2) = (z_n,f(z_n))$, we obtain $D<\\infty$ so that \n\\[ \\|\\Xi(z_0,f(z_0)) - \\Xi(z_n,f(z_n))\\| \\leq Dd(z_0,z_n) \\text{ for all sufficiently large $n$}.\\]\nHence $(Sf(z_n)) = (\\Xi(z_n,f(z_n))$ is surely bounded, contrary to its choice.\n\n\nNow suppose that $Sf\\notin \\operatorname{Lip}(X,F)$. There are sequences $(x_n)$, $(x_n')$ in $X$ so that \n\\begin{equation}\\label{e7.5} \\|\\Xi(x_n,u_n) - \\Xi(x_n',u_n')\\| = \\|Sf(x_n) - Sf(x_n')\\|> nd(x_n,x_n') \\text{ for all $n$},\n\\end{equation}\nwhere $u_n = f(x_n)$ and $u_n' = f(x_n')$.\nSince $Sf$ is a bounded function, we must have $d(x_n,x_n') \\to 0$.\nBy using subsequences, we may assume that either $(x_n)$ converges to some $x_0$ or that $(x_n)$ is a separated sequence.\nIn the former case, since $d(x_n,x_0), d(x_n',x_0)\\to 0$, $\\|f(x_n) - f(x_n')\\|\\leq Cd(x_n,x_n')$,\n$\\|f(x_n)-f(x_0)\\| \\leq Cd(x_n,x_0)$ and $\\|f(x_n')-f(x_0)\\| \\leq Cd(x'_n,x_0)$, it follows from condition (2) that there exists $D<\\infty$ so that for all sufficiently large $n$,\n\\[ \\|\\Xi(x_n,u_n) - \\Xi(x_n',u_n')\\| = \\|\\Xi(x_n,f(x_n)) - \\Xi(x_n',f(x_n'))\\| \\leq Dd(x_n,x_n'),\\]\ncontrary to (\\ref{e7.5}).\nThe proof is similar in case $(x_n)$ is a separated sequence, using condition (3) instead.\n\\end{proof}\n\nThe next theorem is easily deduced from Theorem \\ref{t7.7}, keeping in mind that $\\varphi:X\\to Y$ is a Lipschitz homeomorphism.\n\n\\begin{thm}\\label{t7.8}\nLet $X,Y$ be complete bounded metric spaces and let $E, F$ be Banach spaces.\nSuppose that $T:\\operatorname{Lip}(X,E) \\to \\operatorname{Lip}(Y,F)$ is a biseparating map. \nThen there are a Lipschitz homeomorphism $\\varphi:X\\to Y$ and a function $\\Phi:Y\\times E\\to F$ so that \n\\begin{enumerate}\n\\item For each $y\\in Y$, $\\Phi(y,\\cdot):E\\to F$ is a bijection with inverse $\\Psi(x,\\cdot):F\\to E$, where $\\varphi(x) =y$.\n\\item $Tf(y) = \\Phi(y,f(\\varphi^{-1}(y)))$ and $T^{-1}g(x) = \\Psi(x,g(\\varphi(x)))$ for all $f\\in \\operatorname{Lip}(X,E)$, $g\\in \\operatorname{Lip}(Y,F)$ and $x\\in X$, $y\\in Y$.\n\\item Let $(x_n)$ be a separated sequence in $X$. For any bounded sets $B$ in $E$ and $B'\\in F$, there is a finite set $N \\subseteq {\\mathbb N}$ so that $\\bigcup_{n\\notin N}\\Phi(\\varphi(x_n),B)$ and $\\bigcup_{n\\notin N}\\Psi(x_n,B')$ are bounded.\n\n\\item Suppose that $x_0\\in X$, $u_0\\in E$, $v_0\\in F$ and $C<\\infty$. There exist $r >0$ and $D<\\infty$ so that \n\\[\n \\|\\Phi(\\varphi(x_1),u_1) - \\Phi(\\varphi(x_2),u_2)\\|, \\|\\Psi(x_1,v_1) - \\Psi(x_2,v_2)\\|\\leq Dd(x_1,x_2) \n\\]\nwhenever $\\|u_1-u_2\\|, \\|v_1-v_2\\| \\leq Cd(x_1,x_2)$, $\\|u_i-u_0\\|, \\|v_i-v_0\\| \\leq Cd(x_i,x_0)$ and $d(x_i,x_0) \\leq r$, $i=1,2$.\n\\item Let $(x_n)$ be a separated sequence in $X$ and $(u_n), (v_n)$ be bounded sequences in $E$ and $F$ respectively.\nFor any $C<\\infty$, there exist $r>0$ and $D<\\infty$ so that \n\\[ \\|\\Phi(\\varphi(x_n'),u'_n) - \\Phi(\\varphi(x_n),u_n)\\|,\\ \n\\|\\Psi(x_n',v'_n) - \\Psi(x_n,v_n)\\| \\leq Dd(x_n',x_n)\\]\nfor all $n$,\nwhenever $\\|u_n'-u_n\\|, \\|v_n'-v_n\\| \\leq Cd(x_n',x_n)$ and $d(x_n',x_n) \\leq r$\nfor all $n$.\n \\end{enumerate}\n Conversely, if $\\varphi$, $\\Phi$ satisfy conditions (1)-(5) and $T$ is defined by (2), then $T$ is a biseparating map from $\\operatorname{Lip}(X,E)$ onto $\\operatorname{Lip}(Y,F)$.\n\\end{thm}\n\n\\subsection{A property of Lipschitz sections}\n\nLet $X$ be a bounded metric space and let $E$ and $F$ be Banach spaces.\nTheorem \\ref{t7.7} characterizes the ``section maps'' $\\Xi:X\\times E\\to F$ so that $Sf(x) = \\Xi(x,f(x))$ is Lipschitz whenever $f\\in \\operatorname{Lip}(X,E)$.\nAn example in \\cite[p.~190]{AZ}, where $X= [0,1]$ with the H$\\ddot{\\text{o}}$lder metric $d(x,y)= |x-y|^\\alpha$, $0<\\alpha < 1$, and $E = F = {\\mathbb R}$, shows that for a given $x\\in X$, the function $\\Xi(x,\\cdot):E\\to F$ need not be continuous.\nNevertheless, in this subsection, we will show that if $x$ is an accumulation point of $X$, then there is a dense open set $O$ in $E$ so that $\\Xi(x,\\cdot)$ is continuous on $O$.\nLet $\\Xi:X\\times E\\to F$ be a ``Lipschitz section''. Taking $(x_1,u_1) = (x,u)$ and $(x_2,u_2) = (x_0,u_0)$ in Theorem \\ref{t7.7}(2) yields the next lemma.\n\n\n\\begin{lem}\\label{l7.10}\nLet $(x_0,u_0) \\in X\\times E$ and let $v_0 = \\Xi(x_0,u_0)$.\nFor any $C<\\infty$, there exists $n = n(x_0,u_0,C) \\in{\\mathbb N}$ so that if $(x,u)\\in X\\times E$,\n$\\|u-u_0\\| \\leq Cd(x,x_0)$ and $d(x,x_0) < \\frac{1}{n}$, then \n\\[ \\|v-v_0\\|\\leq nd(x,x_0), \\text{ where $v = \\Xi(x,u)$}.\\]\n\\end{lem}\n\n\n\\begin{thm}\\label{t7.11}\nLet $x_0$ be an accumulation point of $X$.\nThere is a dense open set $O$ in $E$ so that $\\Xi(x_0,\\cdot)$ is continuous on $O$.\n\\end{thm}\n\n\\begin{proof}\nIn the notation of Lemma \\ref{l7.10}, for each $n\\in {\\mathbb N}$, let \n\\[ A_n = \\{u_0\\in E: n(x_0,u_0,1) \\leq n\\}.\\]\nBy the lemma, $E= \\bigcup_n \\overline{A_n}$.\nSince $E$ is a complete metric space, $O = \\bigcup_n \\operatorname{int}\\overline{A_n}$ is a dense open set in $E$.\nTo complete the proof of the theorem, let us show that $\\Xi(x_0,\\cdot)$ is continuous on $O$.\nClearly, it suffices to show that $\\Xi(x_0,\\cdot)$ is continuous on each $\\operatorname{int}\\overline{A_n}$.\nFix $N\\in {\\mathbb N}$. Suppose that $(u_n)$ is a sequence in $\\operatorname{int}\\overline{A_N}$ converging to $u_0 \\in \\operatorname{int}\\overline{A_N}$.\n\n\\medskip\n\n\\noindent\\underline{Claim}. There is a sequence $(u_n')$ in $A_N$ so that \n\\[ \\|u_n'-u_n\\| , \\|\\Xi(x_0,u_n') - \\Xi(x_0,u_n)\\| \\to 0.\\]\n\n\\medskip\n\nConsider a given $n\\in {\\mathbb N}$. Since $\\Xi(x,u_n)$ is a Lipschitz function of $x$ and $x_0$ is an accumulation point, there exists $x\\in X$ so that $0< Nd(x,x_0) < \\frac{1}{n}$ and that $\\|\\Xi(x,u_n) -\\Xi(x_0,u_n)\\| < \\frac{1}{n}$.\nAs $u_n\\in \\overline{A_N}$, there exists $u_n'\\in A_N$ so that $\\|u_n'-u_n\\| \\leq d(x,x_0) < \\frac{1}{nN}$.\nNote that $n(x_0,u_n',1) \\leq N$. Hence the condition $\\|u_n-u_n'\\| \\leq d(x,x_0)< \\frac{1}{N}$ implies\n\\[ \\|\\Xi(x,u_n) - \\Xi(x_0,u_n')\\| \\leq Nd(x,x_0)< \\frac{1}{n}.\\]\nTherefore,\n\\begin{align*}\n\\|\\Xi(x_0,u_n)- &\\Xi(x_0,u_n')\\| \\\\&\\leq \\|\\Xi(x_0,u_n)- \\Xi(x,u_n)\\| + \\|\\Xi(x,u_n)- \\Xi(x_0,u_n')\\|\\\\\n& < \\frac{2}{n}.\n\\end{align*}\nThis completes the proof of the claim.\n\n\\medskip\n\nIn view of the claim, in order to prove the continuity of $\\Xi(x_0,\\cdot)$ at $u_0$, it suffices to show that $\n\\Xi(x_0,u_n') \\to \\Xi(x_0,u_0)$.\nLet $\\varepsilon > 0$ be given. As before, one can choose $x'$ so that $0< d(x',x_0) < \\frac{1\\wedge \\varepsilon}{N}$ and that \n$\\|\\Xi(x',u_0) - \\Xi(x_0,u_0)\\| < \\varepsilon$.\nFor all sufficiently large $n$, $\\|u_n'-u_0\\| < d(x',x_0)$.\nOnce again, $\\|u_0-u_n'\\| \\leq d(x',x_0) < \\frac{1}{N}$ implies\n\\[ \\|\\Xi(x',u_0) - \\Xi(x_0,u_n')\\| \\leq Nd(x',x_0) < \\varepsilon.\\]\nTherefore,\n\\begin{align*}\n\\|\\Xi(x_0,u_n')- &\\Xi(x_0,u_0)\\| \\\\&\\leq \\|\\Xi(x_0,u_n')- \\Xi(x',u_0)\\| + \\|\\Xi(x',u_0)- \\Xi(x_0,u_0)\\|\n < 2\\varepsilon\n\\end{align*}\nfor all sufficiently large $n$.\n\\end{proof}\n\n\n\n\n\n\\section{Comparisons}\\label{s9}\n\nWe close with some results comparing different types of spaces under nonlinear biseparating maps.\nThroughout this section, $X,Y$ will be complete metric spaces and $E$, $F$ will be Banach spaces.\n\n\\begin{prop}\\label{p8.1}\nLet $T:A(X,E)\\to \\operatorname{Lip}(Y,F)$ be a biseparating map, where $Y$ is bounded. If $A(X,E) = U(X,E)$, then $X$ is separated. If $A(X,E) = U_*(X,E)$, then both $X$ and $Y$ are separated.\n\\end{prop}\n\n\\begin{proof}\nNormalize $T$ by taking $T0 = 0$. Suppose that $A(X,E)$ is either $U(X,E)$ or $U_*(X,E)$. First assume, if possible, that there is a convergent sequence $(x_n)$ in $X$ consisting of distinct points.\nLet $x_0$ be its limit, which we may assume to be distinct from all $x_n$'s.\nSet $y_n = {\\varphi}(x_n)$, $n\\in {\\mathbb N}\\cup\\{0\\}$, and $r_n = d(y_n,y_0)$, $n\\in {\\mathbb N}$.\nSince $r_n \\to 0$, without loss of generality, we may further assume that $r_{n+1} < \\frac{r_n}{3}$ for all $n\\in {\\mathbb N}$.\nFix a nonzero vector $b\\in F$. For each $m\\in{\\mathbb N}$, define $g_m: Y\\to F$ by \n\\[ g_m(y) = \\begin{cases}\n\\bigl(1- \\frac{2d(y,y_n)}{r_n}\\bigr)mr_nb &\\text{if $d(y,y_n) < \\frac{r_n}{2}$, $n\\in {\\mathbb N}$,}\\\\\n0 &otherwise.\n\\end{cases}\\]\nThen $g_m\\in \\operatorname{Lip}(Y,F)$, ${g_m}(y_n) = mr_nb$ for all $n\\in {\\mathbb N}$ and ${g_m}(y_0) =0$.\nBy Proposition \\ref{p4.2}, ${T^{-1}g_m}(x_0) =0$.\nBy continuity of $g_m$, there is an increasing sequence $(n_m)$ so that $T^{-1}g_m(x_{n_m}) \\to 0$.\nThus, there is a function $f\\in U_*(X,E)\\subseteq A(X,E)$ so that $f(x_{n_m}) = T^{-1}g_m(x_{n_m})$ for all $m\\in {\\mathbb N}$ and $f(x_0) =0$.\nBy Proposition \\ref{p4.2}, \n\\[ {Tf}(y_{n_m}) = {g_m}(y_{n_m}) = mr_{n_m}b\\text{ and } {Tf}(y_0) = 0.\\]\nHowever, $Tf$ is Lipschitz on $Y$.\nWe have reached a contradiction since \n\\[ \\|{Tf}(y_{n_m}) - {Tf}(y_0)\\| = mr_{n_m}b = md(y_{n_m},y_0)b.\\]\nThis shows that $X$ does not contain any nontrivial convergent sequence.\n\nIf $X$ is not separated, there are points $x_n,x_n'\\in X$ so that $0